DISCONNECT?
or Can Machines Think? - II
Kevin Kelly and Steve Talbott
This exchange is part of an ongoing dialogue about machines and organisms. For the previous installment, see SCR Nr. 18.
STEVE TALBOTT: In the last installment of our dialogue (NF #136) you asked, "What would you need as fully convincing evidence that machines and organisms are truly becoming one?"
You will recall that earlier (in NF #133) I pointed out what seems to me a crucial distinction between mechanisms and organisms: the functional idea of the mechanism is imposed from without (by us) and involves an arrangement of basic parts that are not themselves penetrated and transformed by this idea. In the organism, by contrast, the idea (or, if you prefer, the archetype, or being, or entelechy) works from within; it is not a matter of fixed parts being arranged, but of each individual part coming into existence (as this particular part with its own particular character) only as an expression of the idea of the whole.
I illustrated this organic wholeness by describing how we read the successive words of a text. Almost with the first word we begin apprehending the governing idea of the larger passage, which comes into progressive focus as we proceed. And this idea shines through and transforms every individual word. Dictionary definitions alone would make a joke of any profound text; each word becomes what it is, with all its qualities and connotations, only by virtue of its participation in the meaning of the whole, only as it is infused by the whole.
Our atomistic habits of thought, of course, run counter to this description. We can scarcely imagine a whole except as something "built up from" individual parts with their own self-contained character. But the fact is that we could never write a meaningful text, and could never understand such a text, if the words were not caught up into a preceding whole that transformed them into expressions of itself.
When Craig Holdrege, in his study of the sloth (NetFuture #97), said that every detail of the animal speaks "sloth", he was pointing to the same truth. The fine sculpting of every bone, the character of basic physiological processes, the smallest behavioral gesture -- all these are "shone through" by the coherent and distinctive qualities that we can recognize as belonging to the sloth.
So, Kevin, when you ask what would convince me that machines are becoming organisms, certainly one prerequisite is that I would have to see that the foregoing distinction is without basis in reality. I would have to see that the mechanism's own idea is native to it and governs it in the way the idea of the organism governs and shapes the organism from within, bringing the parts into existence as expressions of itself -- or else that organisms fail to show this sort of relation between part and whole.
Now, I realize that in NF #133 your initial response to my distinction was to deny it. It seemed obvious to you that I was thinking of "old" technology -- industrial-age machines -- and not things like cellular automata, neural nets, artificially intelligent robots, and all sorts of other technologies that show complexly interacting elements. I remain hopeful, however, that your response was more a function of my brief and inadequate effort to capture the distinction than a real disagreement.
With that hope in mind, let me explain why your counterexamples don't work for me. Think first of a computer. The hardware can be implemented in radically different materials with radically different designs. (You're doubtless aware of all the ways people have imagined constructing a Universal Turing Machine.) Then there is the programming, or software, which defines the functional idea of the computer. Does this program work in the computer in the same way the idea of the organism works in the organism?
Clearly not. You could remove the software from one computer and install what is essentially the same software in a wholly different computer. Conversely, having removed the software from the first machine, you can load a second program into it. In the former case, you have the same functional idea driving two computers that may be unrecognizably different in materials and design. In the latter case you have two completely different functional ideas successively driving the same computer. This arbitrary relation between the programmatic idea and its hardware embodiment is something you will never find in the psychosomatic unity of an organism. (Try putting the mind of a horse into the body of a pig!)
The relevance to my larger point is this. If there is no horse/pig problem with computers, it's because the software coordinates the pre-existing elements of the hardware rather than enlivening them and bringing them into being; and the different programs are therefore free to coordinate the elements in different ways. These elements are not themselves transformed by the program from within, in the manner of words in a text, or bones, muscle fibers and cells in a developing organism. Nor is the program continually embodying itself in new, previously non-existent forms of hardware as it "matures". (If you think genetic algorithms contradict this, then we need to talk about them.)
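The arbitrariness I have in mind is easy to exhibit in code. Here is a minimal sketch in Python (the toy instruction set and both "machines" are invented for illustration): the same functional idea drives two machines of unrecognizably different design, and two different ideas successively drive the same machine.

    # Two invented "functional ideas", expressed as abstract instructions.
    double_then_inc = [("MUL", 2), ("ADD", 1)]   # f(x) = 2x + 1
    square = [("MUL_SELF", None)]                # g(x) = x * x

    class StackMachine:
        """One 'hardware' design: computes on a stack."""
        def run(self, program, x):
            stack = [x]
            for op, arg in program:
                if op == "ADD":
                    stack.append(stack.pop() + arg)
                elif op == "MUL":
                    stack.append(stack.pop() * arg)
                elif op == "MUL_SELF":
                    v = stack.pop()
                    stack.append(v * v)
            return stack.pop()

    class RegisterMachine:
        """A radically different 'hardware' design: a single accumulator."""
        def run(self, program, x):
            acc = x
            for op, arg in program:
                if op == "ADD":
                    acc += arg
                elif op == "MUL":
                    acc *= arg
                elif op == "MUL_SELF":
                    acc *= acc
            return acc

    # The same idea drives two very different machines...
    assert StackMachine().run(double_then_inc, 5) == RegisterMachine().run(double_then_inc, 5) == 11
    # ...and two different ideas successively drive the same machine.
    assert StackMachine().run(square, 5) == 25

Neither machine's elements are transformed by the program; each merely coordinates what was already there.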
Does this capture the distinction I'm after a little better?
One other thing. I get the feeling that you half expect me, upon reviewing all the achievements in robotics and AI, to be stunned by the sheer evidential weight in favor of the increasingly organic and life-like character of mechanisms. Rest assured: I am impressed -- sometimes even stunned -- by these achievements. They reinforce my conviction that there is no ultimate bound upon human creative potentials, and these certainly include the possibility of housing our ever more sophisticated and subtle ideas in mechanisms. I see no end to this process, no limit to how life-like our devices can become or how fully they will insert themselves into the warp and woof of our lives.
This, in fact, is why I'm convinced that the decisive trial humanity must now endure has to do with whether we can hold on to our own fullest capacities so as to remain masters of our machines. If we fail the test, we will find that we can no longer differentiate ourselves from our creations. But this will not mean that machines have become organisms. It will mean, rather, that we have continued to lose our ability to distinguish the organism's act of creation from its products and therefore have abdicated the very selfhood that is one with our creative powers. We will have succumbed to the downward pull of our machines, becoming like them.
So what you and I are discussing is not at all a merely academic question! I am grateful to you for your tenacity in demanding clarity from me in my explanations. I trust you will not relent.
KEVIN KELLY: OK, so let's put your criteria to a test. We'll take a few organisms (a sparrow, a reindeer lichen, and a diatom), pull them apart, and ask some experts if they can identify the organism -- if they can see the whole organism -- in the parts. And let's do the same with some technology (a 747 plane, a book, and a watch). We'll take them apart and ask some experts if they can identify the technological species -- if they can see the whole thing -- from the parts. My guess is that the two teams would have roughly the same degree of success, on average.
Would you agree that if they did have the same degree of success, this would (as you seem to suggest) convince you that machines and organisms are becoming one?
    You could remove the software from one computer and install what is essentially the same software in a wholly different computer.
Man, are you wrong about this. Have you ever tried this? Have you ever spoken to anyone who has tried to port software from one computer onto a wholly different computer? They would universally tell you that it was like "putting the mind of a horse into the body of a pig!" There is profound universality in computation (see my December Wired article), but this does not mean that any particular implementation of it can be moved to another matrix. It simply never happens in practice. Because: machines are just like organisms.
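A byte-level illustration of the practical point (a minimal sketch using only Python's standard struct module, not anyone's actual porting story): the very same four bytes mean different numbers to machines of different byte order, one of many reasons a concrete implementation does not move freely between substrates.

    import struct

    payload = struct.pack("<I", 1)          # written by a little-endian machine
    print(struct.unpack("<I", payload)[0])  # read back the same way: 1
    print(struct.unpack(">I", payload)[0])  # read by a big-endian machine: 16777216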
    If there is no horse/pig problem with computers....

But there is a horse/pig problem, and this problem stems from the commonality of machines and organisms as complex, dynamic systems in the real world.
    One other thing .... I see no end to this process, no limit to how life-like our devices can become or how fully they will insert themselves into the warp and woof of our lives.
Now I am totally confused. This is what I have been saying. So let me see if I have this right. You say that there is no limit to how life-like our devices can become. You admit that we'll add ever more life-like functionality to our machines, that there is no limit to what lessons we can extract from biology to import into machines, until (without limit) we are able to grow and evolve them. But while these machine organisms will be used everywhere, and we'll depend on them the way we depend upon organisms, and while these things look like organisms, behave like organisms, and are used like organisms, in fact they aren't organisms at all because they lack an unlocatable trait, a spark, a vital something that we can't measure, can't pinpoint, and have trouble perceiving over time because our third eye, which can detect this spark of real life, is dimming. So while we will be surrounded by vast quantities and varieties of technology that will appear life-like to all who look and in any way we measure, this lifeness will be an illusion because in fact these things will lack an inner, unmeasurable quality that we -- ooops -- can no longer see. That is why when a scientist says, "I see no difference between this man-made being and an organism", the proper response is: that is because you have lost Ulysses's vision. The improper exfoliation of life-likeness in machines has blinded your ancient sight. And if you can't see the true inner life of life, then it must be because (aiyeee!) you have turned into a machine. True life recognizes true life; fake life only recognizes fake life. Blessed are those with true life.
Is this right?
ST: Well, it must at least be right as a statement of how you have read my words -- which has me very, very disappointed. It is you, after all, and not I who say machines "grow" and "evolve" when in fact everyone knows we manufacture them. And it is you who speak of an unlocatable vital essence, when my entire effort has been to describe for you what numerous people over the past few centuries (who have bothered to think about the matter) have been able to recognize in organisms, wholes, parts, and machines.
Please, please, Kevin, hold in your mind both aspects of my reiterated claim: (1) we can abstract a certain formal structure from our own intelligent activity and impress something of this structure upon mechanical devices; and (2) this impressing of ideas from without is identifiably different from the living idea that organizes and constitutes matter from within -- a difference recognizable in the relation between whole and part.
Every thermostat, every printed page, every complex, electromechanical loom or harvesting machine, every silicon chip testifies to our wonderful ability to engrave something of the structure of our intelligence upon the stuff of the world. (Do you think all these are alive, more or less? If not, why?) It would be insane for me to say there is some limit to this process -- to say that at some particular point we will no longer be able to take a next step.
But saying there is no limit to the structure we can imprint upon physical materials is not the same as saying these materials must be alive. I'm frustrated that you keep trying to get me to infer life from complex structure without giving me any reason for doing so apart from, "Gee, look at this amazing spectrum of contraptions out there -- some of them sure seem alive!" Well, so do mechanical dolls and Aibos to some people. Is that supposed to be the convincing point? Or could it be that we actually need to think about it a little, even if this strikes you as miserably "philosophical"?
As far as I can see, the idea of an unlocatable spark serves no role in this conversation except to enable you to avoid discussing in its own terms the actual distinction I've been making between organic wholeness and mechanism.
As for asking a group of experts to pull a sparrow and an airplane apart, the issue was whether there's a different sort of relation between whole and part in the two cases. Are you really wanting to decide this by a democratic vote of experts rather than through your own attempt to grasp the substance of the matter? And are you serious in suggesting such a gruesome test for your experts? Surely you realize that to pull the bird apart is to destroy the very thing you're looking for! "We murder to dissect".
Your suggestion is the quintessential expression of the historical development I mentioned earlier, whereby we have learned to ignore the very aspects of the world that would have helped us to understand the organism. No wonder our culture must largely say to those who would point to the organism, "I look, but I don't see". The only looking we practice is a murderous looking. You can, if you wish, ridicule the attempt to rise above such practice as a quest for "ancient sight", but the fact is that whoever has not yet learned to transcend the limitations of his own culture remains a prisoner of this culture -- a point I thought you agreed with.
All this reminds me of the twentieth-century behaviorists, who dominated academia with their denial of mind. They kept proclaiming, "We don't see it" while steadfastly refusing the only possible way of looking for it, which was to attend to their own act of looking. If the matter had been decided by a vote of the experts in 1950, the cognitivist revolution leading to the kind of computational stance you are now assuming would never have happened.
Actually, Kevin, I suspect you could be one of the new revolutionaries we need today, because I'm sure you yourself have an instinctive feel for the truth of the matter. Having witnessed the 747 being pulled apart, you would not consider it outlandish if the plane were to be reassembled and made to fly again. It's just a matter of putting the parts back into the right relationship with each other. But if you watched the sparrow being reassembled from its parts, you would not expect it to fly. What's taken flight is the inner being that enlivened it and made it an organic unity.
Remember the Star Wars robot, C-3PO, lying dismembered on a table? I'm sure you complained of no deus ex machina when it was remanufactured; but you ought to have complained if those were human parts on the table and they were successfully "remanufactured".
There's a closely related point where I'm sure you also have sound instincts. An orthopedic surgeon manipulating your arm to discover a "mechanical" defect regards the arm in a manner completely different from when she is attending to its meaningful gestures. Likewise, the doctor examining your eyeball will step back in order to regard you when it is time to report the results of her observation. Your eye, face, and arm are now taken as the unified outer expression of a whole -- an expression of your inner being -- where before they were viewed (perhaps to the detriment of your health) as the isolated parts of a mere mechanical exterior. The two ways of looking couldn't present a starker contrast. You would in fact rebel if the doctor continued unrelentingly to objectify you. You tolerate it only as long as you think there's a legitimate reason for the more external and mechanical approach, and you recognize a difference between the two approaches.
I know; you don't need to say it: "Some people now look into the eyes of robots the way they look into the eyes of their friends". Of course they do. Already in the 1970s there were those who projected a living psychiatrist into Joseph Weizenbaum's ELIZA program. This is what I meant about losing our ability to distinguish between organisms and machines. But in the face of disappearing capacities, are we obligated to go with the flow? Even if there were only one remaining person on earth who could see colors, should he deny his color vision because of the prevailing blindness? But I'm quite sure that at some level everyone (including you) still recognizes the difference between a machine and an organism.
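For readers who have never seen how little machinery ELIZA needed, here is a minimal sketch in its spirit (plain Python with invented rules, not Weizenbaum's actual script): a few pattern-substitution rules are enough to sustain the illusion of a listening psychiatrist.

    import re

    # A handful of invented pattern-substitution rules.
    RULES = [
        (r"I need (.*)", r"Why do you need \1?"),
        (r"I am (.*)", r"How long have you been \1?"),
        (r"(.*)", r"Please go on."),   # fallback keeps the conversation going
    ]

    def respond(utterance):
        for pattern, template in RULES:
            m = re.match(pattern, utterance, re.IGNORECASE)
            if m:
                return m.expand(template)

    print(respond("I am sad"))   # -> How long have you been sad?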
As for porting software between computers: yes, I'm aware of the need for machine-level code. How else could the software "coordinate the elements of the hardware" in the external way I described? But this scarcely alters my point: you can take a massively complex program with its own distinctive character (say, a connectionist AI program rather than an expert system or "central command and control" program) and you can port this program, with its distinctiveness largely intact, to utterly different pieces of hardware.
Also, you ignored the other half of my example: not only can you port the same type of software to many different machines, but you can also drive the same machine with many different software packages. C-3PO could have been remanufactured with an entirely new "personality" -- or, for that matter, with some of the character of a donkey. So I say again:
    This arbitrary relation between the programmatic idea and its hardware embodiment is something you will never find in the psychosomatic unity of an organism. (Try putting the mind of a horse into a pig's body!)
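To make the porting claim concrete, a minimal sketch (plain Python; the tiny network and its weights are invented for illustration): the "distinctive character" of a connectionist program lives in its weights, which can be shipped intact to a differently built substrate.

    import json, math

    # The network's "character" is carried entirely by its weights.
    # (Dyadic values keep the floating-point arithmetic exact below.)
    weights = {"w": [0.5, -1.25, 0.75], "b": 0.125}

    def run_loop(w, x):
        # Substrate A: explicit, term-by-term accumulation.
        s = w["b"]
        for wi, xi in zip(w["w"], x):
            s += wi * xi
        return 1 / (1 + math.exp(-s))

    def run_functional(w, x):
        # Substrate B: a different execution style over the same idea.
        s = w["b"] + sum(wi * xi for wi, xi in zip(w["w"], x))
        return 1 / (1 + math.exp(-s))

    ported = json.loads(json.dumps(weights))   # "ship" the character elsewhere
    x = [1.0, 0.5, -0.25]
    assert run_loop(weights, x) == run_functional(ported, x)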
Finally, a look ahead. We've been dealing very generally with the relation between organisms and mechanisms. We might obtain more traction by specifically considering the human being. Here is where the living idea (or being or entelechy, if you prefer) of the organism lights up in a bright, centered focus of self-consciousness. In this self-consciousness we certainly have no obscure trait requiring your "third eye" to perceive. Rather, we have what is most immediate and undeniable, what is as close to us as our most intimate selves -- the inescapable starting point for anything we could possibly build or even hypothesize.
The problem of consciousness is a crucial stumbling block for the AI project. This is because intelligence as inner activity (as opposed to the various outward results that always presuppose the activity) is inseparable from consciousness, and we have no reason to think we can endow any current or conceivable machine with consciousness.
KK: In the end, Steve, we are just going to have to agree to disagree. I feel our conversation is circling back to itself, without covering any new ground at this point. Whatever evidence you supply that we can't ever make living machines (or minds) I reject as shortsighted, and whatever evidence I supply that this is possible you reject as irrelevant.
At this point, I think we should let the question lie. It will be proven one way or the other in time. Unfortunately for me, I don't expect artificial consciousness in my lifetime.
So for the moment (my lifetime) I will have to agree with you. So I'll state that we can tell the difference between machines and organisms now. But this means that if, by some weird breakthrough, nerds were able to make in my lifetime a machine that 90% of humans thought was conscious, or an artificial being that 90% of humans thought was alive, then I would be pleasantly surprised, and you ... you would be what? In the 10% group who said it was all an illusion, or who said it didn't really matter, or who suspected a hoax? I'm not sure. I suspect you would try to define the label away, since what we call it is a matter of words and definitions anyway. (The history of artificial life and mind is a history of redefining life and mind.)
But I am not saying this to try to convince you, because I have just agreed that I can't do that, and that for the sake of this argument I agree with you within my lifetime. I am only pointing out that your being right doesn't change much, but if I am right, then it changes almost everything. Now, one could say the same thing about discovering an ET intelligence: the fact that it would be momentous does not mean that it is probable. But few would say encountering an alien was impossible (on any timescale), which is what I think I hear you say about AI and A-life. (Part of what I am suggesting is that we will encounter an alien being on this planet -- one that we make ourselves.) I mention this asymmetry only to indicate that when the potential impact is so high, it pays to monitor the possibility closely.
So I think I'd like to end my part in this conversation about the relationship between machines and life with this suggestion. I will continue to rehearse in my mind the possibility that the demarcation between the made and the born remains forever (not so hard for me because I don't expect it to vanish completely in my lifetime); at the same time you might try rehearsing what life (and your life and philosophy) would be like if the border disappeared forever.
That's not a challenge, only a genuine suggestion for contemplation. In the meantime, perhaps another topic will come along that can engage us and move our understanding forward.
ST: So be it, Kevin, although this saddens me.
I will round out my own contribution to this discussion by answering your question about what my response would be if ninety percent of my fellows took a robot to be alive. The obvious and inescapable answer: it would depend on my understanding of robots and living things. To the extent I had some understanding, opinion polls would be irrelevant.
It's true that "anything might happen" is an appropriate expectation whenever we lack all insight. (A dragon might swallow the sun; a pot of tepid water might spontaneously boil over.) But the whole point of science is to gain enough understanding of the essential principles of a situation, however subtle they may be, so that we are no longer reduced to saying "anything might happen".
In this regard, I've been puzzled by your preference for a kind of gut-feeling populism, in which you are fortified by your subculture's common hope that tomorrow anyone might walk through the door, including a living robot. Maybe the hope is justified, or maybe not, but the only way to get a firmer grip on the situation is to deepen our understanding of living beings and mechanisms. To say "let's just keep building these things and see what happens" does little good if we fail to understand what we have built. We merely "discover" what we expected to find all along.
There are, after all, ways to pursue the key issues. The huge mechanist-vitalist controversy focused on questions not unlike those you and I have been discussing -- and, within mainstream science at least, the mechanists came away confident that they had vanquished the vitalists for good. (What's needed, I think, is to revisit that debate without the Cartesian assumptions by which both sides were bound.)
All this may help you see why I'm uncomfortable with your repeated suggestion that anyone who attempts to discuss the issues in substantive terms must be engaging in mere empty play with definitions. He may, of course, but the charge needs to be demonstrated, not used as a catch-all means of dismissal.
In any case, Kevin, I do want to say that I've benefited a great deal from our vigorous interactions, and I thank you for your willingness to participate. It's been bracing -- and, for me, humbling at times. I've learned, among other things, how easily my most deeply felt words can prove merely obscure to an extremely intelligent reader. I've often had the feeling, "Well, Steve, you sure blew that one. Back to the drawing board".
But, on a happier note, I'd like to issue you a standing invitation: if you wish to respond to anything I've just now said -- or anything I say in the future -- the pages of NetFuture will be open to you.
Steve Talbott, author of The Future Does Not Compute: Transcending the Machines in Our Midst, currently edits NetFuture, a freely distributed newsletter dealing with technology and human responsibility. NetFuture is published by The Nature Institute, 169 Route 21C, Ghent NY 12075 (tel: 518-672-0116; web: http://www.natureinstitute.org). Email: [email protected]
Kevin Kelly is the editor of the print magazine "Wired". Email: [email protected]

This article was originally distributed as part of NetFuture: www.netfuture.org/. It may be redistributed for noncommercial purposes, with this notice attached.