
The Impact of Data Technology on Future Society (*)

Valdemar W. Setzer

The title of this essay does not coincide with the title proposed by BCC News, "The Impact of Information Technology on Future Society," because in my opinion "Data Technology" (DT) is a more precise term than "Information Technology." According to my concepts, what is stored in, processed by, and transmitted by means of computers is data, and not information. Data consists of quantitative symbolic representations. Thus, a tree is not a piece or a collection of data, but its description in the form of text or a picture is data. Information presupposes a human being who receives it, possibly in the form of data, and interprets it, associating it with known concepts, memories in the form of images, sensations, etc. If a Chinese person wrote a description of that tree in, say, Mandarin, to me this description would be a collection of data: I would be able to format it, to change its fonts if it were stored in a computer, even to sort its lines if I were given a collating sequence for the ideograms. But I would not be able to obtain any information whatsoever from that description. The same description in Portuguese or English (or any other language that I know) is data that becomes information to me as soon as I read it and understand its phrases, but it would probably be just a heap of data for a Chinese reader. A red traffic light is a piece of data, which is interpreted by a car driver, becoming information to him.
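The purely formal manipulations mentioned above (formatting, sorting by a given collating sequence) can be sketched in a few lines of code. The function name and collation below are hypothetical; the point is that the program reorders text while extracting no information from it, exactly as I could sort Mandarin ideograms without understanding them:

```python
# Sort lines of text by an arbitrary collating sequence -- a purely
# formal manipulation of symbols the program does not "understand".

def sort_lines(lines, collation):
    # Rank each known symbol by its position in the collating
    # sequence; unknown symbols sort after all known ones.
    rank = {ch: i for i, ch in enumerate(collation)}

    def key(line):
        return [rank.get(ch, len(collation)) for ch in line]

    return sorted(lines, key=key)
```

Given the collating sequence "ab", `sort_lines(["ba", "ab", "aa"], "ab")` returns `["aa", "ab", "ba"]`; the operation is well defined whether or not the lines mean anything to anyone.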

It is very important to recognize that for a computer everything is data, and nothing is information. For more details on this, see my paper "Data, information, knowledge and competency" on my web site. According to these concepts, Claude Shannon did not develop an Information Theory, but a Data Theory. In fact, his theory deals, e.g., with the capacity of channels transmitting data, and not information. It does not deal in any way with the contents of what is being transmitted, just with its form.
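Shannon's own central result makes this concrete. His channel-capacity formula bounds how many bits per second a channel can carry, and every term in it is a measurable physical quantity; the content of the transmitted symbols appears nowhere:

```latex
% Shannon-Hartley capacity of a noisy channel:
%   B   -- bandwidth in hertz
%   S/N -- signal-to-noise power ratio
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{(bits per second)}
```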

I am going to consider Data Technology as encompassing computers, computer networks, data and computer programs. I am interested primarily in its deep impacts.

The title requires an initial speculation on what the society of the future will be. We can imagine two scenarios. The first is the continuation of the trends verified mainly during the 20th century, which are themselves a continuation of the consequences of changes in the human inner constitution that have been ongoing since the 15th century. The second would be a deviation onto other paths, as a possible consequence of those changes. For considerations on positive changes of mentality that occurred during the 20th century, please see my paper on the obsolescence of education, on my web site.

In the first scenario, the ever-accelerating infiltration of DT would continue in all branches of society, carrying on the trends of the last 50 years. This is due to various factors: (1) The facts that we are constantly immersed in the products of abstract and technical thinking, and that DT deals exclusively with the highly limited kinds of thinking that can be stored on computers in the form of data or programs. In contrast, in ancient times the human being was immersed almost completely in a natural world that was not the product of human thought, with the exception of houses and a few simple tools. (2) The fact that technical progress has been placed at the service of interests whose chief ambition is wealth or power, and which do not seek to meet real human needs but to satisfy selfish and greedy - that is to say, antisocial - interests. Society is not invited to express itself about the introduction of a new hardware or software gadget. If there is no need for it, this need is created through multimillion-dollar marketing efforts. Another source of new hardware or software is forced obsolescence, quite typical of the DT field. (3) The absolutely undue marriage between the economy and scientific knowledge, which leads the latter to dedicate itself mainly to technological development and to whatever can bring about profit and power. (4) Fashion and curiosity, which by far outweigh conscious judgment in the use of technical innovations.

One of the greatest problems of DT is that it functions only by means of data and programs. The first is purely quantitative and the second is confined to the symbolic logic of the computer, also quantitative. In this way, the qualitative, which cannot be quantified or expressed in symbolic logic instructions, is lost. The existence of the qualitative cannot be proved, but neither can it be proved that everything can be reduced to quantity. For instance, sense perception is clearly not experienced as quantitative, just as thoughts, feelings, and will impulses are not. For this reason, indiscriminate quantification simply does not correspond to our lived experience, and is necessarily inhuman. The quantification required by DT causes quality to be eliminated or overlooked. This in turn leads to the impoverishment of human existence, which in the first of our scenarios for the future will continue to grow. I think that this impoverishment tends to reduce human behavior to the level of machinery, that is, to a sub-natural level - contrary to what I think should be our path, that is, in the direction of our becoming supernatural. (When cave dwellers created cave paintings, they had already become beings who were not completely natural, and I speculate that they never had been.)

It may be objected that texts already existed before DT and that they used quantitative representation. I think the introduction of writing was also the product of a phase of general impoverishment of the human condition. It is important to observe, however, that this impoverishment was necessary for human beings to acquire liberty, self-consciousness, and individuality, which in another light may be considered enrichments. It was not accidental that Plato called attention to the loss that the introduction of writing represented: Compare the capacity of our memory with that displayed by Homer as he recited his enormous epics without the use of notes. One of the major differences between printed matter and DT in relation to texts and images is that the former provides a physical object to hold in one's hands, in which physical annotations can be made and with which one can have a personal relationship. On the other hand, DT clearly introduced great flexibility with search, selection, and hypertext. Unfortunately, DT mediated by the Internet has eliminated the responsibility of those who edit books and periodicals. Now, anyone can post any sort of silliness to be accessed by hundreds of millions of people, without anyone taking responsibility apart from the author (who could, moreover, use a public web site and remain anonymous), and without the ability to connect it with a known source such as, e.g., a sensationalist magazine, which would allow one to avoid reading it. This means diluting something useful into an ocean of uselessness. I have the impression that the garbage on the Internet grows exponentially while the amount of useful material grows linearly, with a small coefficient indeed. Umberto Eco stated at a recent Davos World Economic Forum that "the glut of information leads to the evolution of a society of idiots." Another advantage of a book was that it constituted a whole.
I would like to find anyone who has read an entire book on a computer screen without having printed it out. The fragmentation produced by the machine hinders the concentration necessary to read a book with constant attention from start to finish.

But what did not exist formerly was the algorithmic processing of data, except for some types of calculations, such as those used by astronomers in the 16th century, who employed sheets hanging on clotheslines inside storage buildings. The Egyptian pyramids were clearly not calculated in advance, at least not using the mathematical methods employed in statics, and neither, much more recently, did the Incas do so in erecting their enormous, perfectly fitting stones, since they lacked writing (and wheels!). To the degree that society depends more and more on data processing, it will become more and more impoverished, mainly through the loss of the qualitative aspects of life and through the reduction of processing to the limited logical and symbolic manipulations permitted by the instructions of machine languages (the internal codes interpreted by computers). Naturally, I am not referring to the processing of entities that are inherently quantitative, such as units of currency (which led to the perfect marriage of DT and the banking system) or to the kind of necessary calculations used, for example, in engineering.

A trend I find absolutely idiotic but which will probably end up widely implemented is that of "ubiquitous computing," the notion of using computers everywhere (much more so than today). An example cited in our milieu and also featured at a recent home appliance fair is the coffeemaker connected to the Internet. Can anyone imagine how this might be useful? Perhaps I may activate the home coffeemaker from my office as I am heading home, saving a few minutes, which is the time spent making coffee? But consider how it robs me of the experience of taking a spoonful of ground coffee from the bag, savoring the aroma, of mixing it into the boiling water, deciding intuitively how much to use according to my taste at the moment, of stirring the mix and watching it dissolve, perhaps choosing to use a paper filter because I don't feel like having to wash the cloth filter I prefer later on, and so on. Of course, if the reader already uses a machine to make coffee and is not used to undertaking the manual process, there will be no difference in the way that Internet coffee tastes. But if I am used to making coffee manually, this Internet coffee machine will prevent me from hearing my wife say, "What a delicious coffee you made!" I must confess, however, that since I personally neither drink nor make coffee, the idea of the Internet-connected coffee machine seems all the more idiotic, though I expect that many readers will imagine themselves in the situation described in my example and extrapolate to other cases. One of these is the background music system in Bill Gates' house, where, according to Gates' horrifying book, the music follows each person from room to room. It accompanies the unfortunate person who cannot feel at ease without the hell (in my view) that is background music and cannot move about without the computer tracking his movements. (Any exaggeration is purely intentional.)
By the way, recent news reports say that Gates' computerized home, which cost $50 million, is a headache to maintain, requiring the constant presence of three technicians to keep it running. On the subject of ubiquitous computing, see some of the recent editions of Netfuture: Technology and Human Responsibility (http://www.netfuture.org/), an excellent online magazine with a large number of subscribers. Curiously, Netfuture contradicts the current trend: It is completely antidemocratic, since it exclusively reflects (apart from a few transcribed letters and rare essays) the personal opinions of its editor, Stephen Talbott, author of the book The Future Does Not Compute. That is, in the era of depersonalization caused by machines and computers, a writer shows up on the Internet and becomes popular despite his utterly personal style, which is characterized, moreover, by an attempt to criticize without assaulting his opponents and without getting caught up in polemics.

In the first scenario, the first deep effects of DT on the society of the future will be the increasing impoverishment of reality through its representation as data and its processing as such, and through the replacement of more and more human activities by machines. The argument that we are merely eliminating undesirable tasks is not sustainable: Anyone who does not have the time and the inclination to make a cup of coffee or a cake for themselves, preferring those made by a machine, is no longer a human being, at least in the fuller sense I afford this notion. By the way, I make bread nearly every week using a machine (though I help it somewhat by kneading the dough as it beats it, in order to check its consistency), since it is more practical, though I still recognize that handmade bread is superior! With respect to the time savings, I believe that anyone who lacks a few minutes out of the day to make coffee or read the newspaper is leading a misguided, inhuman life and must be suffering because of it.

Another deep effect will be the continuation, growth, and popularization of the idea that the human being is a machine, common in intellectual circles since the mid-19th century. This is not such a new idea: In 1748 La Mettrie published a book titled L'Homme-Machine. But it certainly was a new idea at that time. There has never been a more comprehensive metaphor for this conception than the computer, since there has never been a machine that could simulate such a wide range of human mental processes. In the contemporary world, we are surrounded by products of human thought; the computer, which simulates part of our thinking, has therefore become the most important technical development. It is necessary to know what a computer is from its logical point of view (that is, its machine language) in order to understand that it only simulates certain limited mental processes (those that can be reduced to the instructions of the machine language), and that it does not think. Moreover, if one understands digital circuits, one realizes that a computer does not even perform simple additions: It merely combines logical symbols to produce the expected result of this arithmetic operation. But most people understand none of this, and the specialists are not interested in demystifying the machine, introducing false and misleading terms such as "artificial intelligence." This expression was coined by John McCarthy, a pioneer in the study of what is called the "formal semantics" of programming languages, which, in fact, is not semantics at all, since the computer is a purely syntactic machine. He even asserted, as John Searle described in his book Minds, Brains and Science, that "machines as simple as thermostats can be said to have beliefs." Asked by Searle what the beliefs of his thermostat might be, McCarthy said, "My thermostat has three beliefs - it's too hot in here, it's too cold in here, and it's just right in here."
I mention this discussion in order to display the mentality of those who believe that human beings are machines (in this case, that our beliefs are like the mechanical reactions of a thermostat). It is not by chance that McCarthy invented the phrase abbreviated in English as "AI" (which stands for "automated imbecility," in my interpretation). If it is believed that a human being is a machine, obviously someone will conclude that a machine will someday possess human intelligence, whatever that may be. How many people besides myself point out that a computer may appear to display intelligent behavior but is fundamentally completely idiotic, because it always follows a predetermined program? By the way, let me state Setzer's Last Law: "Only fools require a definition of intelligence." Corollary: Computers are fools.
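The earlier remark that a computer "does not even perform simple additions" can be made concrete. The sketch below (hypothetical names; Python used only as notation for Boolean circuits) builds addition entirely out of logical connectives. At no step is any arithmetic "understood": the expected sum simply emerges from the combination of symbols.

```python
# A one-bit full adder built only from Boolean connectives
# (XOR, AND, OR), as in a digital circuit.

def full_adder(a: bool, b: bool, carry_in: bool):
    s = a ^ b ^ carry_in                               # sum bit
    carry_out = (a and b) or (carry_in and (a ^ b))    # carry bit
    return s, carry_out

def add_by_logic(x: int, y: int, width: int = 8) -> int:
    """Add two non-negative integers by rippling a carry
    through a chain of full adders, bit by bit."""
    carry = False
    result = 0
    for i in range(width):
        a = bool((x >> i) & 1)
        b = bool((y >> i) & 1)
        s, carry = full_adder(a, b, carry)
        result |= int(s) << i
    return result
```

For example, `add_by_logic(13, 29)` yields 42, yet nothing in `full_adder` performs an addition; it is pure symbolic logic, which is exactly what the machine's circuits do.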

My greatest fear about the impacts of computers is the influence they have on our way of thinking. The nature of this influence is clear: it downgrades the concept of a human being to that of a machine. Human nature becomes subnatural, that is, even inferior to non-living matter, because the latter is imbued with an infinite wisdom, making possible the existence of living beings. My fear is that this way of thinking is going to cause terrible tragedies. We may already imagine the possible consequences of this: The notion that human beings are fundamentally determined by their genome (deciphered with the indispensable assistance of computers, another instance of quantification!) and that we will be able to alter the genome to "improve" a person is a pure product of this mentality. I call this tendency "genetic racism." It could more properly be called "genetic eugenics," but I exaggerate intentionally, and it leads to the following. The Nazis, those most outstanding examples of what I call "the century of barbarism," treated millions of human beings like animals, shipping them off in cattle wagons, confining them in cages, and so on. But it is possible to maintain ethical principles with regard to the treatment of animals; take the case of organizations for the prevention of cruelty towards animals, the campaigns to save the whales and other endangered species, the campaigns to save animals from scientific experimentation, etc. A feeling of reverence for nature remains, though in this scenario DT may well bring an end to it. What does not make sense is to adopt the same ethic in relation to machines - well, as long as we do not share McCarthy's way of thinking. I would not be surprised if he felt remorse when switching off his personal computer. Thus, the notion of the human being as a machine could produce attitudes so inhuman they would leave the Nazis in the dust.
I especially worry about the psychological development of children who use computers as they mature. They cannot possibly comprehend what DT is and are overly impressionable.

There are other disastrous consequences to fear in this scenario. Another is the growing fragmentation of human life. Reflect on your behavior as you use your computer, especially the Internet. Notice that you generally do not concentrate on a single task or subject. For example, you read rapidly through a pile of e-mails; if one of them links to a Web site, you open it, and follow the links you find there looking at further sites, and so on. This fragmentation tends to undermine self-consciousness and individuality, reducing freedom to a choice among a menu of actions permitted by the software you are using. Once again, the human being is reduced in his humanity.

Another impact of DT that will probably grow worse in the future is the increasing speed with which computer users are obliged to respond. Once again, it is as though the human being were reduced to the status of a machine that reacts very rapidly. Children and youths are being trained to do this by horrible electronic games (read my articles on this subject on my Web site) and by computers. It constitutes a veritable "mechanization" of human beings in terms of their actions. Reasoning is always a slow process, at least where it concerns thoughts that must be subjected to quiet reflection before the decision to act is taken, something that animals cannot do, since they instinctively follow their behavioral "programming." But DT imposes a speed that does not fit the normal, slow pace of conscious deliberation. This happens in addition to the utterly inhuman pace of technological and social change, a subject treated by others that I will not reiterate here. The issue of speed is infiltrating the business world as well. I have long said that natural selection should not be applied to human beings except in one area: economic life, where the capitalist law of the jungle clearly dictates the "survival of the strongest." Similarly, we are entering an era of the "survival of the fastest." Businesses that do not adapt to the Internet quickly enough are swallowed up by those that do. This is another type of dehumanization of society.

Finally, related to the acceleration of subjective time there is another impact that will certainly worsen in this scenario: the acceleration of personal development and the development of humanity. I have already written extensively about the undesirable acceleration in the development of children and youth when they use computers for whatever purpose. On this subject, I recommend reading Neil Postman's book The Disappearance of Childhood. Postman provides an interesting analysis of the concept of childhood, approaching the problem from the point of view of communication media. I think that the exaggerated freedom introduced by the Internet is another aspect of this process of acceleration. I greatly appreciate this freedom, so long as I am not receiving spam mail or viruses, my credit card is not being cloned, and the mailing lists in which I participate do not flood my inbox with e-mails every day (I have unsubscribed from a number of lists for this reason). I have the impression that humanity is not yet sufficiently evolved to handle the degree of freedom the Internet offers us. Freedom calls for self-consciousness and social responsibility, which we are still developing. The undue acceleration of the future could entail a contrary effect: we may lose our freedom because we do not know how to make use of it. This already occurs when people who lack the required maturity, such as children, are improperly influenced by certain Web sites. But the problems that arise from the exaggerated freedom offered by the Internet pale in comparison to those associated with the planned implementation of Freenet, a network on which one can anonymously post anything one wishes; the author does not need to log on to a central service provider, and it will be impossible to find and erase the file in which the information is stored. If the scenario in which we find ourselves is not altered, it seems to me that Freenet will become a reality.

In short, the gravest impacts of DT will be the continuing and accelerated dehumanization of the human being. The consequences could be terrible: social disintegration, personal despair and disillusionment with life, the growth of fundamentalisms of religious, ethnic, and scientific kinds (such as the belief that biology is a branch of physics), leading, perhaps, to an apocalyptic war of all against all, a phenomenon that can already be observed flaring up in various parts of the world.

Now I will move on to the alternate scenario. I will have to be very brief, as I am approaching the limits of the space allocated to me. In the alternate scenario, human beings would completely reverse this process. This would be the result of a conscious effort on the part of those who are now acting unconsciously. The economy would begin to serve genuine social needs rather than selfish and therefore antisocial ambitions. Observe how social injustice, despair, and hopelessness are growing at an accelerated pace, in such a way that the beneficial effects of Adam Smith's "invisible hand," a mode of thought that governs modern economic life, are increasingly difficult to perceive, except by the privileged few. The introduction of a new technology or machine would have to serve real social needs as defined by society as a whole (and not just by the state or by the means of production). In the economic realm and the means of production there should be no total freedom, much less equality; a spirit of fraternity should govern this area, requiring the production of what is most needed and has the highest priority. In this scenario, DT would serve useful ends, not personal ambitions and consumerist fashions. Since the banking system is quantitative by the very nature of the currency it deals with, it is obvious that DT can be useful in this area, without forgetting, however, that people must not be eliminated from processes where they are truly indispensable: not to conduct a transfer of funds, but to give investment advice, for example. We would not allow ourselves to be distracted by the Internet-connected coffeemaker, since we would recognize that there were social needs of an infinitely higher priority to be met, just as we would not send spaceships to Mars under the fallacious pretext that I have heard since men walked on the Moon: "This time, we will discover the truth about the origins of the solar system."
Vaccines against benign illnesses such as chickenpox would not be used just so that American parents can avoid missing a day and a half of work (I am not joking!). Perhaps the very notion of illness might be transformed in this scenario, leading to the recognition that many illnesses are necessary to the well-being of the organism and should be not eliminated but controlled in order to produce the effect that human beings, those non-machines, sometimes wisely and unconsciously seek. Common language speaks of our "catching" an illness and not of the illness catching us. Given the quantity of germs and bacteria that teem in a megalopolis of almost 20 million people like São Paulo and its adjacent cities, if illnesses caught us we would never be able to get out of bed. DT, and specifically the Internet, could be very useful as a vector for spreading this shift in mentality: I doubt that a medical journal would permit the publication of an article demonstrating that a valid alternate conception of illness is possible. This reminds me of a well-known astronomer who was ostracized for having the temerity to try to publish a theory proposing that the red shift of the light emitted by stars is a consequence not of the expansion of the universe, but of gravitational attraction, a notion that contradicts the accepted paradigms (read: prejudices) of current astronomy.

Readers will think me a utopian, but I call their attention to the fact that the fatalism of the attitude that says "this machine is here to stay, so tighten your belts" does not apply to many technological innovations. For example, it is already proven that the atom bomb is not here to stay; it may be that nuclear power plants are not here to stay either. It is possible that most citizens of São Paulo are hoping that the automobile is not here to stay, believing that decent public transportation (such as the lines of the Paris subway, which even at the city outskirts are required to stop within 500 meters of every household) would be a great improvement. Some consequences of the exaggerations of DT: Internet addiction (recent studies show that 6 percent of users are addicted to it); the decrease in social interaction and the increase in clinical depression associated with Internet usage, as shown by recent studies from Carnegie Mellon (1998), Stanford (1999), and Harvard (2000); and the fact that industrial robots have proven to be less efficient than human workers in the long run, since humans adapt themselves much more readily to changes in the production line, while robots require reprogramming and possibly rebuilding, processes that require a great deal of time in order to produce reliable results.

A deep observation about the subject reveals that the forces behind machines are infinitely intelligent, but lack every bit of common sense. Thus, contrary to their own interests, they always end up exaggerating in some sense. This promotes consciousness and reactions against them. Look, for example, at the case of global pollution or the sex and violence on Brazilian television. Freenet, mentioned above, clearly exaggerates the freedom it offers to those not prepared to exercise it, and perhaps it will show that freedom demands responsibility, which we have not yet developed to a sufficient degree. Thus, there is some hope that the misuse and exaggeration of DT, and the problems that stem from them, will lead people to realize that they must say "enough!" and take control of it, placing technology at our service in a genuinely human sense, and not in the sense of humans regarded as machines.

(*) This essay was written in Portuguese in April 2000, at the invitation of BCC News, a paper newsletter published by the students of the B.Sc. program in Computer Science at the University of São Paulo, Brazil. The original may be found on the author's web site. For this version, minor corrections were introduced and the conversational style of the original was preserved.


© 2002 Valdemar Setzer

V. W. Setzer is a professor in the Department of Computer Science of the Institute of Mathematics and Statistics (IME) at the University of São Paulo. vwsetzer@ime.usp.br - www.ime.usp.br/~vwsetzer

Unknown translator. Revised by V.W.Setzer.

