A Critical Review of What Computers Still Can't Do, by Hubert Dreyfus (from 1995)
Ron Barnette, Valdosta State University
Review of What Computers Still Can't Do: A Critique of
Artificial Reason, by Hubert L. Dreyfus. 1993. The MIT Press:
Cambridge, Massachusetts and London, England.
Reviewer: Ron Barnette, Professor of Philosophy (Ph.D.
University of California, Irvine), Valdosta State University,
University System of Georgia, USA.
This book bears a new title (adding the word 'Still') to two earlier
editions (1972, 1979) of a philosophical work aimed at
undermining a significant physicalistic theory of mind: Classical
Cognitivism, which was, until recently, central to nearly all
Artificial Intelligence (AI) models of human intelligence inspired
by the directions set forth by Turing and von Neumann. Of
particular interest in the new 1993 issue is the thirty-eight-page
Introduction, which articulates the scope of Dreyfus' resistance
to the mind/machine research paradigm. Following the new
Introduction are seven pages of Notes, the Introductions to the
second and first editions, and the original text of What
Computers Can't Do. Accordingly, I will focus my remarks on
the latest Introduction, which reiterates and attempts to
strengthen the earlier arguments in the up-to-date context
of new directions in AI research and design.
In an almost "I told you so---but why won't it go away?!" tone,
Dreyfus tries to lay to final rest the AI approach dubbed by John
Haugeland (1985) 'Good Old Fashioned Artificial Intelligence,' or
GOFAI. Inspired by a Turing model of intelligent behavior as
essentially computational, the thesis of GOFAI is that the
processes underlying intelligence are symbolic in nature. More
specifically: GOFAI models human intelligence with von Neumann
computational architectures that (1) perform computations on
abstract symbolic representations; (2) govern these computations
by a stored program containing an explicit list of instructions or
rules that transform the symbolic representations into new
symbolic states; and (3) carry out these computations serially,
by a CPU operating on information stored in the computer's
permanent memory. As such, GOFAI depicts mentality within the
context of
what philosophers know as the Representational Theory of Mind,
according to which the mind is an entity which performs
calculations over mental representations, or inner tokens or
symbols that refer to features of the outer world. In short, the
mind is viewed as a symbolic information-processing device,
operating in serial fashion and governed by rules that constitute,
at base, a language of thought.
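The picture just sketched can be made concrete with a toy illustration (entirely my own, not from the book): a handful of discrete symbolic facts transformed, one serial step at a time, by an explicitly stored list of rules, in just the way the GOFAI thesis describes.

```python
# A minimal sketch of the GOFAI picture the review describes (all
# names and rules here are illustrative, not Dreyfus's examples):
# atomic symbolic tokens transformed, serially, by explicit rules.

# "Mental representations": atomic symbolic facts about a toy world.
facts = {("bird", "tweety"), ("penguin", "opus")}

# The "stored program": an explicit list of if-then rules, each
# mapping a matched fact to a new symbolic state.
rules = [
    (("bird", None), lambda x: ("can_fly", x)),
    (("penguin", None), lambda x: ("cannot_fly", x)),
]

def infer(facts, rules):
    """Apply each rule to each matching fact, one serial step at a
    time (the 'CPU' loop), producing new symbolic states."""
    derived = set(facts)
    for (pred, _), conclude in rules:        # serial rule application
        for fact in sorted(facts):
            if fact[0] == pred:
                derived.add(conclude(fact[1]))
    return derived

print(sorted(infer(facts, rules)))
```

Dreyfus's point, of course, is that everything such a system "knows" must first be spelled out as explicit tokens and rules of this kind.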
Intractable problems confront GOFAI, according to Dreyfus,
who recounts his major objections and lays the groundwork for a
criticism of non-GOFAI modelling as well, specifically of
Parallel Distributed Processing (PDP), or Connectionist,
architectures and programming. To reinforce earlier GOFAI
criticisms, he describes insuperable difficulties (outlined in 1972)
confronting successful modelling of common-sense
understanding, which requires a notion of relevance,
contextually and holistically characterized, resulting from worldly,
bodily experiences, not compatible with atomistic, symbolic
data structures and discrete computations. He then develops the
1979-edition criticism which cites a further insuperable GOFAI
problem: that of modelling the know-how requisite to judge
relevance. 'Know-how', the activity of generalizing and of
determining relevance in the open-textured world we constantly
confront, is argued not to be a matter of manipulating data, no
matter how much data is provided. Moreover, given their serial-
processing strictures, symbolic computational attempts appear
biologically unrealizable as well, demanding in principle
procedures that face a combinatorial explosion of information,
one that cannot be resolved by computational means alone.
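The combinatorial worry can be put in rough arithmetic (my illustration, not Dreyfus's): if a system must, in principle, consult every combination of n binary contextual features in order to decide what matters, the space to be searched doubles with each feature added.

```python
# Illustrative arithmetic for the "combinatorial explosion" worry:
# n binary contextual features yield 2**n possible combinations
# that an exhaustive symbolic procedure would have to consider.

def states_to_check(n_features):
    """Size of the search space over n binary features."""
    return 2 ** n_features

for n in (10, 20, 40):
    print(n, states_to_check(n))
# Forty features already yield over a trillion combinations.
```

Real situations, on Dreyfus's view, involve indefinitely many potentially relevant features, which is why he takes the explosion to be unavoidable for purely computational approaches.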
More recent developments in AI change the landscape, for some
dramatically. With the fashionable PDP, or Connectionist,
models claiming to have overcome the combinatorial bottleneck
that plagued all von Neumannesque attempts, and with these
non-serial and seemingly non-computational symbol-processing
approaches to understanding the nature of the mental, Dreyfus
wonders why anyone would cling to GOFAI. Besides,
Connectionism even looks biologically correct, since brain design
appears to be more like a pandemonium of parallel-processors,
seeking some sort of neuronal equilibrium, than a central brain-
in-a-brain operation carried out by a nervous system CPU
surrogate. Why cling to GOFAI, indeed? And can, Dreyfus asks,
the PDP architectures fare any better than GOFAI in describing
the nature of human intelligence? Curiously, responses to both
questions receive a similar treatment from him.
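For contrast with the GOFAI picture, here is a minimal connectionist sketch (again my own illustration, not from the book): a single perceptron that learns the Boolean OR function by adjusting numeric connection weights, with no explicit symbolic rule for OR ever stored.

```python
# A toy connectionist contrast (illustrative only): "knowledge"
# resides in trained connection weights, not in explicit rules.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward correct
    outputs. No symbolic rule is stored, only adjusted weights."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1      # adjust connection strengths
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data for Boolean OR.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # → [0, 1, 1, 1]
```

It is precisely this rule-free, training-based character that makes PDP look like an escape from Dreyfus's earlier criticisms; his rejoinder, discussed below, is that relevance still cannot be learned this way.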
Apparently, GOFAI-lovers just don't seem to get it when it
comes to appreciating what Dreyfus calls the Commonsense
Knowledge Problem (CKP), alluded to in his earlier criticisms,
nor---and this is new---do PDP models look to be any more
promising. (This turns out to be a major criticism, and deserves
more space than I can give it.) Basically, CKP is defined by three
problems: (1) How everyday knowledge must be organized so
that one can make inferences from it; (2) How skills or
knowledge can be represented as knowing-that; and (3) How
relevant knowledge can be brought to bear in particular situations
(xviii). In fact, one might treat all three problems as ones
involving selection of relevance. For example, learning to
generalize is critical for intelligent behavior, but this requires
associating inputs of the same type with successful decisions
and actions. But in what does the relevant type consist?
Relevance in this regard, and in the context of ignoring and
attending to features of novel settings as we confront them, is
not inherent in the contextual data, Dreyfus argues, but is,
instead, relative to current situations in light of a myriad of
human background experiences.
might not be so in another. Thus, whether by means of symbolic
tokens (GOFAI), or having been learned through adjusted
network connections (PDP), relevance is not to be gleaned by
means of providing more information for the system to work with
and through. Information about what is relevant only leads to
circularity, or to a vicious regress: what is relevant to relevance,
what is relevant to that, and so on. Admittedly, in narrowly defined problem
domains a machine might seem to pull off generalization skills,
but this would be, at best, to mimic intelligence, artificially
enforced, as it were. Solving the CKP is a gauntlet Dreyfus lays
down. Can the PDP paradigm solve it?
The philosopher Daniel Dennett, in Consciousness Explained
(1991), thinks Dreyfus might very well believe so, at least
indirectly, for, writing of two AI skeptics (Dreyfus and John
Searle), he remarks that "Dreyfus has pledged allegiance to
connectionism" (p. 270), referring to a 1988 piece written by
Dreyfus and his brother Stuart (Dreyfus and Dreyfus 1988). Yet
Hubert will shortly write in the
current Introduction: "Indeed, neural-network researchers with
their occasional ad hoc success but no principled way to
generalize seem to be at a stage of GOFAI researchers when I
wrote about them in the 1960's. It looks likely that the neglected
and then revived connectionist approach is merely getting its
deserved chance to fail" (xxxviii). With nothing short of
harshness, the new anti-AI forces seem to draw yet another line
in the sand.
For many AI system-designers, as well as for many scientifically-
minded philosophers (myself included), Dreyfus' repeated 'It can't
be done' stance will be unsettling, especially in light of his overt
willingness to leave the putative mystery of the mind as simply
that: a mystery. Yet, in fairness, he does raise serious objections
that demand equally serious, carefully presented
counterarguments. Still, those who do see a physicalistic
basis of mentality as a correct and natural one should
appreciate, in response to Dreyfus' skepticism, that machine
models of human cognition need to square with neuroscientific
realities, just as brain-mechanism models need to square with AI
models that try to replicate intelligence. Combinations of multi-
disciplinary approaches will no doubt emerge, with an aim to
reproducing a system that actually learns to act intelligently in a
world not circumscribed a priori by information overload or
researchers' semantic paternalism. But we haven't heard the last
of Dreyfus; maybe the 2001 revision will be titled What
Computers and the Brain Can't Do (Either).
(1) Dennett, Daniel C. 1991. Consciousness Explained.
Little, Brown and Company: Boston, Toronto, London.
(2) Dreyfus, H. L., and Dreyfus, S. E. 1988. 'Making a Mind
Versus Modeling the Brain: Artificial Intelligence Back at a
Branchpoint,' in Graubard, S. R. (ed.) 1988. The Artificial
Intelligence Debate: False Starts, Real Foundations. The MIT
Press: Cambridge, Massachusetts.
(3) Haugeland, J. 1985. Artificial Intelligence: The Very Idea.
The MIT Press: Cambridge, Massachusetts.