During the Renaissance and scientific revolution -- so the conventional
story runs -- our ancestors began for the first time to see the world.
For inquirers such as Alberti, Columbus, Da Vinci, Gilbert, Galileo, and
Newton it was as if a veil had fallen away. Instead of seeking wisdom in
a spiritual realm or in appeals to authority or in the complex mazes of
medieval ratiocination, the great figures at the dawn of the modern era
chose to look at the world for themselves and record its testimony. It
was an exhilarating time, when the world stood fresh and open before them,
ripe for discovery. And they quickly discovered that certain questions
could be answered in a satisfyingly precise, demonstrable, and
incontestable way. They lost interest in asking how many angels can dance
on the head of a pin and turned their attention to the pin itself as a
physical phenomenon available for investigation.
There is some truth in this rather-too-neat view of the past -- a truth
that makes the central fact of our own era all the more astonishing: as
scientific inquirers, we have shown ourselves increasingly content to
disregard the world around us. Judging from the dominant, well-funded
scientific and technical ventures, we much prefer to navigate our own
arcane labyrinths of abstract ratiocination, whether they consist of the
infinitely refined logic we impress upon silicon, or the physicist's
esoteric classificational systems for subatomic particles, or the
universe-spanning equations of the cosmologists. It's true that we no
longer ask, "How many angels can dance on the head of a pin?" but we are
certainly entranced with the question, "How much data can we store on the
head of a pin?" And our trance is only deepened when the answer turns out
to be: "a hell of a lot".
What many haven't realized yet is how easily our preoccupation with the
invisible constructions on that pinhead blinds us to the world we
originally set out to perceive and understand in its full material glory.
The alluring data, fully as much as any dancing angel, distracts us from
more mundane realities. We have, as a result, been learning to ignore as
vulgar or profane the "crude" content of our senses. This content may be
useful for occasional poetic excursions, but it is only a base temptation
for the properly ascetic student of science, who moves within a more
rarefied, mathematical atmosphere. "It must be admitted", remarked the
British historian of consciousness, Owen Barfield,
that the matter dealt with by the established sciences is coming to be
composed less and less of actual observations, more and more of such
things as pointer-readings on dials, the same pointer-readings arranged
by electronic computers, inferences from inferences, higher
mathematical formulae and other recondite abstractions. Yet modern
science began with a turning away from abstract cerebration to
objective observation! (1963)
It is not that we lack all interest in the material world, but only that
our interest is of a peculiar, one-sided sort. Surrounded by the
remarkable physical machinery bequeathed to us by science, we are more
concerned with manipulating the world than with seeing it profoundly. In
fact, the distinction between seeing and manipulating scarcely even
registers within science any longer. Philosopher Daniel Dennett tells us
that the proper discipline of biology "is not just like engineering; it is
engineering. It is the study of functional mechanisms, their design,
construction, and operation" (1995, p. 228).
It may seem counterintuitive at first, but I will argue that our
preoccupation with workable mechanisms, far from contradicting our
preference for abstract cerebration, is itself a primary symptom of our
flight into abstraction and our refusal to see the world. And there is
every reason to believe that our failure to interest ourselves in seeing
and understanding the observable world (as opposed to manipulating it like
so much gadgetry) is as fateful for our knowledge enterprises today as
their own abstractions were for the inquiries of our medieval forebears.
One consequence of our failure is that we have felt justified in
substituting sadly inadequate mechanistic models for the world we no
longer bother to observe.
On Making a Game of Life
There is a computer program called the Game of Life. The program divides
your computer screen into a fine-meshed rectangular grid, wherein each
tiny cell can be either bright or dark, on or off, "alive" or "dead". The
idea is to start with an initial configuration of bright or live cells and
then, with each tick of the clock, see how the configuration changes as
these simple rules are applied:
If exactly two of a cell's eight immediate neighbors are alive at the
clock tick ending one interval, the cell will remain in its current
state (alive or dead) during the next interval.
If exactly three of a cell's immediate neighbors are alive, the cell
will be alive during the next interval regardless of its current state.
And in all other cases -- that is, if fewer than two or more than three
of the neighbors are alive -- the cell will be dead during the next
interval.
You can, then (as the usual advice goes), think of a cell as dying from
loneliness if too few of its neighbors are alive, and dying from
overcrowding if too many of them are alive.
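The rule is simple enough to restate in executable form. The following
sketch, written in Python, is purely illustrative: the function names and
the "blinker" starting pattern are my own choices for the example, not
taken from any particular Life program, but the logic is just the
three-part rule given above.

    # A minimal, illustrative sketch of the three-part rule described above.
    # Live cells are stored as a set of (row, column) coordinates; any cell
    # not in the set counts as dead.

    def live_neighbors(cell, live_cells):
        """Count how many of a cell's eight immediate neighbors are alive."""
        row, col = cell
        count = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (row + dr, col + dc) in live_cells:
                    count += 1
        return count

    def step(live_cells):
        """Apply the rule once, returning the next generation of live cells."""
        # Only live cells and their immediate neighbors can be alive next tick.
        candidates = set()
        for row, col in live_cells:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    candidates.add((row + dr, col + dc))

        next_cells = set()
        for cell in candidates:
            n = live_neighbors(cell, live_cells)
            if n == 2:
                # Exactly two live neighbors: the cell keeps its current state.
                if cell in live_cells:
                    next_cells.add(cell)
            elif n == 3:
                # Exactly three live neighbors: alive next interval, regardless.
                next_cells.add(cell)
            # All other cases: dead next interval (nothing is added).
        return next_cells

    if __name__ == "__main__":
        # A "blinker": three live cells in a row, oscillating between a
        # horizontal and a vertical bar with each tick of the clock.
        cells = {(1, 0), (1, 1), (1, 2)}
        for tick in range(4):
            print(sorted(cells))
            cells = step(cells)

Run as it stands, the sketch prints four successive generations of the
blinker, which flips back and forth between a horizontal and a vertical
bar of three cells -- one of the simplest of the repeating patterns
mentioned below.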
What intrigues many researchers is the fact that, given well-selected
initial configurations, fascinating patterns are produced as the program
unfolds. Some of these patterns remain stable or even reproduce
themselves endlessly. Investigations of such "behavior" have led to the
new discipline known as "artificial life".
Referring to the Game of Life and the three-part rule governing its
performance, Dennett has remarked that "the entire physics of the Life
world is captured in that single, unexceptioned law". As a result, in the
Life world "our powers of prediction are perfect: there is no
[statistical] noise, no uncertainty, no probability less than one". The
Life world "perfectly instantiates the determinism made famous by LaPlace:
if we are given the state description of this world at an instant, we
observers can perfectly predict the future instants by the simple
application of our one law of physics" (Dennett 1995, pp. 167-69).
These are startlingly errant statements from one of the most influential
philosophers of our day. The three-part rule, after all, is hardly a law
of physics. It is an algorithm -- roughly, a program or precise recipe
-- and its deterministic, Laplacian perfection holds true only so long as
we remain within the perfectly abstract realm of the algorithm's
crystalline logical structure. Try to embody this structure in any
particular stuff of the world, and its perfection suddenly vanishes. For
example, if you install it in a running computer, you can be absolutely
sure that the algorithm will fail at some point, if not because of spilled
coffee or a power failure or an operating system glitch, then because of
normal wear and tear on the computer over time. Contrary to Dennett's
claim, you will find in every physical implementation of this algorithm
that there is noise, no certainty, and no probability equal to one.
Dennett's comments about the Game of Life illustrate how the world can
disappear behind a grid of abstractions. He is so transfixed by the
logical perfection of the algorithm that he loses sight of the distinction
between it and the real stuff that happens to embody it. With scarcely a
thought he shifts in imagination from disembodied rules to physics -- a
move made easy by the fact that his physics is essentially a mere
reification of the rules. This carries huge implications. If, as he
tells us, biology is engineering, and if the devices we engineer are
nothing in essence but their algorithms, then real dogs, rocks, trees, and
people dissolve into a fog of well-behaved, abstract mentation.
While it is not the topic of this essay, the enveloping and thickening fog
of abstraction is evident on every hand. Look, for example, at almost any
branch of the public discourse and you will find that its subjects -- the
elderly, the sick, victims of war, soldiers, political leaders,
terrorists, corporate CEOs, wilderness areas, oil wells, fetuses, doctors,
voters -- appear only as generalized debating tokens torn loose from
complex, full-fleshed reality. Their assigned place in an established
logic of discourse is almost all that matters. The public discussion then
becomes nearly as lifeless and predictable as the Game of Life.
The Externality of Machine Algorithms
We can, I believe, learn a great deal about certain tendencies of science
and society by looking more deeply into this symptomatic disappearance of
the material world into abstraction -- a disappearance that will seem as
strange to our successors as medieval attempts to understand motion by
ruminating over Aristotelian texts now seem to us.
It will help, in understanding these tendencies, to grasp as clearly as
possible the relation between an algorithm, such as the one embodied in
the Game of Life program, and the machine executing it. And the first
thing to say is that the algorithm really is there -- in the machine (so
long as it is working properly) and therefore in the world. Which is to
say: we can articulate the parts of a machine so that, when viewed at
an appropriate level of abstraction, they "obey" and manifest the rules of
the Game of Life.
But it is crucial to see the external and nonessential nature of these
rules. Yes, they are embodied in the machine -- but only in a rather
high-level and abstract sense. The rules are not intrinsic to the
machine. That is, they are not necessary laws of the copper, silicon,
plastic, and other materials. To see the rules we almost have to blind
ourselves to the particular character of these materials -- materials that
could, in fact, be very different without altering the logic of
arrangement we are interested in.
In other words, the determining idea of the machine as a humanly designed
artifact is something we impose upon it "from the outside"; there is
nothing inherent in copper, silicon, and plastic that dictates or urges or
even suggests their assembly into a computer. We had to have the idea,
and we had to bring it to bear upon the materials through their proper
arrangement. The functional idea of the computer abides in this
arrangement, and will be there only as long as the arrangement holds.
This external relation between the material machine and the logic of the
idea imposed upon it explains a double disconnection. On the one hand,
the logic fails to characterize fully the material entity it is associated
with. We can construct computers out of vastly different materials and
still see exactly the same rule-following when they execute the Game of
Life. The game's algorithm leaves its embodiment radically undefined (or
"underdetermined").
But just as differently constructed physical machines can, at a certain
level of abstraction, follow the same rules, so, too, the same machine can
be made to follow different rules. This is obvious enough in the case of
a computer, which can execute entirely different software algorithms. But
it remains true more generally: when a new context arises, an existing
piece of technology may become a tool for a previously unforeseen function
-- as when, trivially, the handle of a screwdriver is used to hammer in a
tack, or when a typewriter's alphanumeric keyboard serves to construct
graphic images rather than text. The underlying artifact remains
unchanged even as the rules of its use and its meaning as a tool change.
So we see that machines of completely different materials and
configuration can serve the same function, and a machine of given
materials and configuration can serve different functions. The functional
idea, then -- whether it is a computer algorithm or the cleaning procedure
of a washing machine -- is by no means equivalent to a full understanding
of the machine as a part of the material world. The parts of the machine
present us with a physical reality we are able to employ in a mechanical
construction, but our employment of them does not explain the physical
reality, nor does the physical reality explain the employment.
There is some irony in all this. To recognize a governing idea externally
imposed upon the parts of a machine through the manner of their
arrangement is to grant an irreducibly human and subjective element in
every machine as a machine. As Michael Polanyi remarked many years ago,
a knowledge of physics and chemistry can never tell us what a machine is
(1962, pp. 328-35). For such an understanding we have to know (among
other things) something about the human context in which it will operate,
the human purposes it was designed to serve, and the particular functional
idea that guided the builders in coordinating the machine's physical
principles.
So while mechanistic thinkers profess a great fondness for objectivity,
which they interpret to mean "freedom from human influence", their
predilection for machine-based explanations marries them to human-
centered, designer-centered modes of thought. In fact, for all their
tough-mindedness, it is they who indulge an unhealthy anthropocentrism.
The world, after all, is not a humanly designed machine. Whatever
material principles we summon to account for the phenomena we observe,
they will fail in the accounting if they go no deeper than the mechanistic
principles we impose externally and abstractly upon pre-existing matter.
Does Mathematics Alone Give Us the World's Essence?
As I have already suggested, one indication of our tendency to ignore the
observable world lies in the force of our temptation, following Dennett,
to separate the machine's algorithm from the machine itself and then allow
the former to overshadow the latter. The temptation is no small matter,
given the overwhelming commitment to machine-like explanations within
mainstream science. For a mechanistic science, the machine's reduction to
an abstraction is the world's reduction to an abstraction.
Many are eager for the reduction. Peter Cochrane, former head of research
at British Telecom, believes "it may turn out that it is sufficient to
regard all of life as no more than patterns of order that can replicate
and reproduce" (undated). When Cochrane says "no more than patterns of
order", he seems content to let the substance manifesting these patterns
fall completely out of the picture.
Likewise, robotics guru Hans Moravec describes the essence of a person as
"the pattern and the process going on in my head and body, not the
machinery supporting that process" (quoted in Rubin 2003, p. 92). And
Christopher Langton, founder of the discipline of artificial life, has
surmised that "life isn't just like a computation, in the sense of being
a property of the organization rather than the molecules. Life literally
is a computation" (quoted in Waldrop 1992, p. 280).
Could there be a clearer attempt to dissociate the world's essence from
sense-perceptible matter? Pattern, algorithm, computation -- these
formal, mathematically describable abstractions are made to stand alone as
self-sufficient explanations of reality. Economist Brian Arthur captures
a sentiment widespread within all the sciences when he remarks that to
mathematize something is to "distill its essence". And if you've got the
essence, why bother looking for anything more?
The derivation of mathematical relationships can indeed be valuable. But
to leave the matter there -- and nearly all those who share Arthur's
sentiment about the mathematical essence of things do leave the matter
there -- is to reveal a blind spot at least as gaping as any irrational
lacuna in the thought of our ancestors. After all, mathematics in its
purely formal exactness tells us nothing at all about the material world
until it is brought into relation with the stuff of this world. And we
can establish this problematic relation -- we can have more than our pure
mathematics in mental isolation -- only by understanding the non-
mathematical terms of the relation. Mathematics alone cannot tell us what
the mathematics is being applied to.
This is already evident with simple, quantitative statements. It's one
thing to say "5" and quite another to say "5 pounds of force" or "5 pounds
of mass". In the latter case, while we may take comfort from the
conciseness of "5", we're now also up against the conceptual darkness of
"force" and "matter". The number, however exact, can illuminate the
material world only to the degree we know what we mean by "force" and
"matter" -- terms that have vexed every scientist who ever dared to think
about them. What physicist Richard Feynman said about energy is true of
many other fundamental scientific concepts as well: "we have no knowledge
of what energy is" (Feynman, Leighton, and Sands 1963, p. 4-2).
To ignore the darkness in key terms of our science -- to claim that
mathematics gives us the essence of things when we can't even say what the
things are and we have no non-mathematical language adequate to them -- is
to be no less in the grip of nonsense than were those medieval thinkers
who were content to explain the character of gold by appealing to an
occult quality of "goldness". Mathematics, as a self-contained, "essence-
yielding" discipline, offers a pseudo-explanation no more helpful than the
occult quality. It does no more good for us merely to assume we know what
the mathematics is being applied to than it did for our predecessors to
assume they knew what goldness was.
An elementary point, you might think. But the scientist and engineer have
shown a powerful tendency to conceive their desired mechanisms through an
ever more disciplined focus upon mathematics, algorithm, and software,
blithely inattentive to the character of the world the experimental
apparatus is coercing into algorithmic "obedience". It is true (and a
remarkable accomplishment) that, unlike our medieval forerunners, we have
perfected a method for obtaining workable devices. In fact, many, such as
Daniel Dennett ("biology is engineering"), would more or less collapse our
entire project of understanding into a single-minded pursuit of workable
devices. But in doing so they increasingly attend, in their
anthropocentric way, only to the sort of clean, mathematical structure
they temporarily manage to impose upon these devices at a high level of
abstraction. That is, they are interested in seeing only machines, and in
seeing machines only as manageable abstractions.
Accordingly, the device itself, as physical phenomenon, recedes from view
while the immaterial logic we have associated with it consumes our
attention. In our quest for understanding we have become obsessed with
the equivalent of angelic hosts bearing data in a timeless algorithmic
dance, and whether the dance proceeds on the head of a pin or along
silicon pathways or within the deeply worn logical grooves of our own
minds hardly matters. The dancers are, from our preferred point of view,
pure and chaste, insubstantial, uncontaminated by gross matter. They give
us a kind of otherworldly "physics", as Dennett claimed, free of noise and
uncertainty.
The only way for us to break the hypnotic spell of our own abstract
cerebrations is to open our senses again so as to re-experience the world
we have been fleeing, much as our ancestors of several centuries ago broke
the medieval spell and looked out at a new world. But just as it took
those earlier pioneers centuries to understand what breaking free really
meant -- medieval thought habits persisted even in Newton -- so, too, it
may require a long time for us to escape the trance of mechanistic thought
and begin to recognize the living qualities of the substances we have
learned to ignore behind a veil of precisely behaved abstractions.
Looking Ahead
It is time to pause and ask where these prefatory remarks might lead us in
examining mechanistic science and its technological foundations. I offer
here a bare statement of several theses as a kind of prospectus for future
articles in this series.
First, as we have begun to see, the resort to mathematical formalism,
whether it is a formalism of equations, rules, logic, or algorithms, is
inadequate even to explain a machine. The explanatory logic -- conceived
by us and imposed upon the machine -- relates to our purposes and
operations, and, as a genuine lawfulness, remains in a sense external to
the actual substance of the machine. If our science is a science of such
formalisms governing a world-machine, then it cannot give us any full
understanding of this world-machine.
Second, nothing in the natural world -- including inanimate nature -- is
machinelike if by "machine" we refer to the human artifacts we usually
call by this name. The governing idea of a machine is imposed upon it by
a designer through a proper arrangement of parts; the idea is not
intrinsic to the parts, not demanded by them, not the necessary expression
of their existence.
Nature, on the other hand, has no designer -- at least not one of this
external sort. We cannot think of laws on one side and a substance
obeying or "instantiating" the laws on the other. Rather, the laws belong
to the substance itself as an expression of its essential character. The
lawfulness of a machine is in part a cultural artifact; the lawfulness of
the physical world is through and through the intrinsic expression of its
own being. And we can understand this being only to the degree we
penetrate and illuminate the more or less opaque terms of our science --
"force", "mass", "energy", and all the rest.
Third (looking forward), the immanent mathematical lawfulness we do
discover in the natural world is never the law or the explanation of
whatever transpires in the world. It is merely an implicit aspect of the
substantive (but largely ignored) phenomenal reality that must be there in
order for there to be something, some worldly process, that exhibits the
given mathematical character. The relation between the mathematical
character and the reality in which it is found is the relation between
syntax and semantics. We see this same relation between the formal
grammar and the meaning of our speech. The grammar is an implicit
lawfulness. But, given the grammatical structure alone, we cannot know
the meaningful content of the speech. The structure is abstracted from
this content, leaving behind much of what matters most.
This, I believe, is a crucial point, deserving a great deal of elaboration
(which I will in the near future try to provide).
Fourth (and finally), many readers will by now be yelling at me in their
minds: "You fool! Pay attention to different levels of description!"
What they are getting at is something like this:
Of course rules such as those defining the Game of Life are
inadequate to provide a complete explanation of the machine executing
them. The rules describe the machine only at a high level of
abstraction. But we can also provide descriptions of the same sort at
progressively lower levels of abstraction, until finally we have
described the fundamental particles constituting the machine. All
these descriptions together tell us everything we could possibly know
about the machine, and also about the world.
But this appeal to descriptive levels fails utterly to bridge the gap
between mechanistic explanations and an adequate explication of the
world's lawfulness. The problem is that the shortcomings of the
mechanistic style of explanation follow you all the way down. If, when
you finally arrive at the particles, you try to describe them as if they
were little machines -- if, that is, you remain faithful to your
mechanistic convictions -- then your rules, algorithms, mathematics, and
logic explain no more about these particles than the Game of Life does
about the concrete machine it is running on. But now you are at the
supposedly fundamental level of understanding, so the limitations here
apply to the entire edifice you have erected upon this foundation.
If you want a true understanding of the world's order, the crucial gap you
have to leap is not the gap between levels of description. Rather, it is
the distance between unembodied mathematical, logical, and algorithmic
formalisms on the one hand, and the full content of the world these
formalisms are abstracted from on the other. Highlighting this gap will
be one of my primary aims in forthcoming essays.
Summarizing, then:
Mechanistic explanations do not even explain machines.
The world is not in any case a machine.
A mathematical regularity, or syntax, is implicit in the world's
phenomena and can be said to explain the world no less and no more
than the grammatical syntax of a speech explains the content of the
speech.
If, when appealing to a hierarchy of descriptive levels, we remain
committed to mechanistic explanation, then the limitations of such
explanation afflict us all the way down the hierarchy.
Those of you acquainted with the philosophy of science will recognize in
these statements implications of the most radical sort, even though so far
I have done little more than offer brief justification for the first
thesis of the four. In the end we will find ourselves confronting, among
other things, an entirely new (or, rather, very old and forgotten) style
of explanation based on form.
We will also see the necessity for reversing the far-reaching decision
within science to ignore qualities. This decision, if not reversed, must
lead ultimately to the disappearance of the world, which is not there
apart from its qualities -- and therefore it will lead also to the
annihilation of the science that began with such promise as a resolve to
reject levitated abstraction and observe the world.
Bibliography
Barfield, Owen (1963). "Introduction" in Rudolf Steiner, Tension Between
East and West (Hudson, NY: Anthroposophical Press, 1963), pp. 10-11.
Cochrane, Peter (undated). http://www.cochrane.org.uk/inside/quotes.htm.
Dennett, Daniel C. (1995). Darwin's Dangerous Idea: Evolution and the
Meanings of Life. New York: Simon & Schuster.
Feynman, Richard P., Robert B. Leighton, and Matthew Sands (1963). The
Feynman Lectures on Physics, vol. 1. Reading, MA: Addison-Wesley.
Polanyi, Michael (1962). Personal Knowledge: Towards a Post-Critical
Philosophy. Chicago: University of Chicago Press.
Rubin, Charles T. (2003). "Artificial Intelligence and Human Nature",
The New Atlantis vol. 1, no. 1 (summer), pp. 88-100.
Waldrop, M. Mitchell (1992). Complexity: The Emerging Science at the
Edge of Order and Chaos. New York: Simon & Schuster.
© 2003 Steve Talbott
Steve Talbott, author of The Future Does Not Compute: Transcending the Machines in Our Midst, currently edits
NetFuture, a freely distributed newsletter dealing with technology and human responsibility. NetFuture is published by The Nature Institute, 169 Route 21C, Ghent NY 12075 (tel: 518-672-0116; web: http://www.natureinstitute.org). You can reach Steve at [email protected]
This article was originally distributed as part of NetFuture:
http://www.netfuture.org/. You may redistribute this article for noncommercial purposes, with this notice attached.