Barbara Warnick

DANGEROUS FUTURES: ARTIFICIAL INTELLIGENCE AND SCIENTIFIC ARGUMENT

Communication with the public about scientific research and technology development subsumes a number of different genres of discourse. Some of these include scientific reports in such periodicals as Scientific American and Science, while others take the form of trade books or online resources intended for nonspecialist readers. This essay considers a particular example in one subgenre of this category—publications for general consumption that promote technology development. I am specifically concerned with the problematic nature of some scientific publications that exaggerate the expected benefits of developing technologies, fail to consider dangers in their development, and avoid discussion of their ethical implications.

The present study corroborates Robert C. Goldbart’s claim (1998) that many scientists are not explicitly trained to communicate their findings or to mount arguments effectively designed for lay consumption.

It also supports the work of Jeanne Fahnestock (1998) who noted that much public discourse about scientific research is disproportionately favorable. Fahnestock compared argumentation and language use in scientific reports on a set of topics with popular press accounts of the same topics and found that the latter were overwhelmingly celebratory, oversimplifying the research and exaggerating its benefits.

Artificial intelligence is a frequently discussed topic in trade books, periodicals, public symposia, and on the World Wide Web. In these forums, researchers in computer science, artificial intelligence, and other fields engage in debates about the future of the human race and envision a world where machine intelligence may (or may not) equal or surpass human intelligence. Because of their successes in product development and research, spokesmen such as Ray Kurzweil, Marvin Minsky, Hans Moravec, and others have a certain cachet with readers interested in AI-related issues. In this essay, I will focus on the work of one of these authors, the quality of his arguments, and responses made to them. My emphasis will be on the nature and character of public arguments made by AI advocates rather than on the substantive merits of their claims, which have been discussed at length elsewhere (Crevier, 1993; Turkle, 1995; Ekbia, 2001, 2002).

By virtue of their placement in books, general periodicals, and on Web sites intended for the lay reader, writings on AI take the form of public argument. That is, they are addressed to nonspecialists, make claims that can potentially be empirically substantiated, present evidence to support their claims, and are designed to persuade their reading audiences. As many argument theorists have observed, public argument is a genre of discourse that, in order to qualify as argument per se (rather than mere hype or fantasy or propaganda), should fulfill some obligations to its readers. In particular, arguments addressed to the public should position readers in such a way as to enable them to make an independent judgment of the merits of their claims.

In part, this means presenting reasoning and evidence on both sides of a question. For example, in writings about AI, open argumentation would mean citing failures as well as successes in AI research, discussing dangers and risks as well as future promise of the work, and reporting on factors that impede as well as those that promote future research progress. In part also, this means implementing an ethical standard for the conduct of an argument. In principle, such argument would leave room for interlocutors to disagree and should leave open the possibility that one or both parties to an argument might change their mind about the issue in question.

It is important to think about these ethical standards in judging many of the public discussions about technology development. Experts in technoscience should be held all the more to these standards because they often write for publics who are not positioned to judge the technical merits of what they say. The only way that publics can make informed decisions about technology development is in a public sphere where all the dangers, risks, social implications, and ethical issues relevant to a question are weighed. In the example that follows, I will briefly consider a case study in which an author initially failed to place his readers so as to make a considered judgment about the merits of what he had to say. After the initial publication of his argument, however, he followed up by posting a website that included both arguments against his original position by scientific experts and message boards in which respondents to his book could openly discuss the issues pro and con.

Ray Kurzweil’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence was published in 1999 by Penguin Putnam. Kurzweil there argued that in the first half of the 21st century, machine intelligence will come close to equaling human intelligence, and that before the century is over, machine intelligence will exceed human intelligence. To support his claim, Kurzweil noted that in light of Moore’s Law (Miller, 1996) concerning the exponential increases in the speed and density of computing, computers can be expected to achieve the memory capacity and computing speed of the human brain by around the year 2020. Once computers have become capable of independent thought and can communicate that to humans, Kurzweil predicted that they will come to be viewed as conscious entities. In his view, the increasing rapprochement of human and machine intelligence will be reciprocal; that is, machine intelligence will be developed through reverse engineering human brains so as to design machine prototypes of human intelligence, and once that has been successfully completed, people will be able to download their minds into machines.
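Kurzweil’s “around 2020” figure is a straight exponential extrapolation. The arithmetic behind such a claim can be sketched as follows; the starting figures (roughly 10^10 operations per second for a late-1990s PC, roughly 10^16 for brain-scale computing, and a one-year doubling time) are illustrative assumptions of the kind used in such estimates, not numbers taken from the essay.

```python
import math

def years_until(target_ops, current_ops, doubling_years=1.0):
    """Years for capacity to grow from current_ops to target_ops,
    assuming it doubles every doubling_years years (Moore's-Law style)."""
    doublings = math.log2(target_ops / current_ops)  # how many doublings are needed
    return doublings * doubling_years

# Assumed figures: ~1e10 ops/s for a 1999 PC, ~1e16 ops/s for
# brain-scale computing; both are rough, illustrative estimates.
years = years_until(target_ops=1e16, current_ops=1e10)
print(f"crossover around {1999 + round(years)}")  # crossover around 2019
```

Notice that the conclusion depends entirely on the figures fed in; change the doubling time or the brain-capacity estimate and the crossover date moves by decades, which is part of why critics questioned the longer-range predictions.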

Although some readers might have had difficulty taking Kurzweil’s predictions seriously, their credibility was buttressed by his past record of innovation and invention. As he reminded readers, his accomplishments included invention of the Kurzweil Reading Machine and pioneering work in speech recognition systems and digital music synthesis. Furthermore, many of his short-term predictions in an earlier book, The Age of Intelligent Machines (1990) (e.g., development of a global information network, cyberterrorism, surveillance technologies) have turned out to be true. However, many observers, including reviewers of the book and Kurzweil’s own colleagues, questioned his vision and his assumptions as he looked further into the future (Shaffer, 1999; Proudfoot, 1999; Muska, 2000; Lanier, 2000a; 2000b).

What is it about the writings of Kurzweil and other artificial intelligence researchers that evokes either the incredulity of skeptics or the fascination of admirers? Why does public discourse on such topics seem to split into opposing camps without the moderating influence of serious deliberation about the merits and ethical implications of the claims that are made? As an argument theorist, I am very interested in the narrative and argument structures used in The Age of Spiritual Machines. Considering their rhetorical features, as well as the arguments posed by his critics, might help us to better understand how public discourse proceeded in this case.

To support his views, Kurzweil worked through various topics using a limited number of patterns of thought. These include deductive, analytical reasoning; use of algorithms; and progressive narratives with a predetermined conclusion. For readers used to thinking in these ways, Kurzweil’s arguments probably seemed compelling and forceful. Analytical reasoning as exemplified in formal logic includes categorical, disjunctive, and conditional syllogisms. These work well so long as the premises are taken as true and the terms are reduced and unequivocal in meaning. So long as one stays inside the universe of formal validity, the conclusion of such reasoning is unquestionable. An example (and an important move in Kurzweil’s discussion) is:

Evolutionary processes build on themselves.
Technology is an evolutionary process.
Therefore, technology development builds on itself. (Kurzweil, 1999, p. 32)


So long as the major and minor premises are taken as true, the conclusion follows logically. Conditional syllogistic forms relying on if/then relationships are also frequently used in Kurzweil’s book. Because of their simplicity and clear logic, these categorical and conditional forms of argument appear to be persuasive on their face.
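The categorical form described above can be made explicit. As a sketch (the predicate names are mine, not Kurzweil’s), the syllogism is valid in precisely this mechanical sense: grant both premises and the conclusion follows.

```lean
-- Kurzweil's syllogism in formal dress: the conclusion is forced
-- once both premises (h1, h2) are granted -- and only then.
example (Process : Type) (Evolutionary BuildsOnItself : Process → Prop)
    (technology : Process)
    (h1 : ∀ p, Evolutionary p → BuildsOnItself p)  -- evolutionary processes build on themselves
    (h2 : Evolutionary technology) :               -- technology is an evolutionary process
    BuildsOnItself technology :=                   -- ∴ technology development builds on itself
  h1 technology h2
```

The formalization makes the essay’s point visible: all the persuasive work is done by accepting the premises, not by the inference itself.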

Kurzweil’s use of algorithmic thinking is of real interest. This mode of reasoning, not often covered in texts on informal or practical argument (van Eemeren, Grootendorst, & Henkemans, 1996; Inch & Warnick, 2002), seemed to be Kurzweil’s favored form of thought, both in his own practice and in the sort of thinking he expected artificial intelligence agents to perform. Algorithmic thinking uses a step-by-step procedure in which answers to initial questions determine the next question to be asked in the sequence (“Algorithm,” 2001). Algorithmic thinking serves well for many purposes to which computing is well suited. For example, in warfare it can be used to control weapons deployed for strategic action; in medicine, to assist physicians in diagnosing and treating patients; in education, to teach students some of the basic skills and elements of critical thinking. There are other contexts, however, to which it is poorly suited. These are situations in which what Aristotle called phronesis or practical wisdom is needed (Kennedy, 1991). For example, algorithmic thinking will not help us to decide whether to attack a military target where civilians are present, or to decide whether an already terminally ill patient should receive a particular medical treatment, or to teach students how to make principled, ethical, moral choices. The problem, then, is with equating certain, limited forms of thinking with all thinking. Kurzweil does this, and because he reduces thinking to only some of its forms, he makes the view that computers will achieve human intelligence logically supportable.
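What such a step-by-step procedure looks like in practice can be sketched as below; the triage questions and outcomes are invented for illustration, not drawn from the essay. Each answer selects the next question until a leaf outcome is reached.

```python
# A minimal decision-tree sketch of algorithmic reasoning: the answer
# to each question determines the next question in the sequence.
# Questions and outcomes are hypothetical, for illustration only.
TREE = {
    "fever?": {"yes": "cough?", "no": "rash?"},
    "cough?": {"yes": "flu-like illness", "no": "other infection"},
    "rash?":  {"yes": "allergic reaction", "no": "no diagnosis"},
}

def diagnose(answers, start="fever?"):
    node = start
    while node in TREE:                   # keep asking while node is a question
        node = TREE[node][answers[node]]  # the answer picks the next node
    return node                           # a leaf: the outcome

print(diagnose({"fever?": "yes", "cough?": "no"}))  # other infection
```

The limitation the essay identifies follows directly: the procedure works only where every contingency has been enumerated in advance, and it has no way to weigh a consideration that is not already a branch.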

A third mode of thought at the organizational or macro level in Kurzweil’s book is the progressive narrative. This narrative form sets a pattern, gains momentum as it unfolds, and its structure seems to discredit any possible objections or counter narratives. As with the premises of deductive arguments, so long as one anticipates and accepts the foregone conclusion of the narrative that is implied in its telling, the conclusion is inevitable. For example, early in his book, Kurzweil sets up a pattern of seven stages in technological development. These include the precursor stage in which the technology is imagined, the invention stage where it first appears, the development stage, the stage of maturity, the stage of pretenders where an upstart threatens to eclipse the older technology, followed by obsolescence and antiquity (1999, pp. 19–20).

Reading this apparently seamless account of the progressive and nearly inexorable development of technology might cause one to wonder whether there have ever been instances of technological development that might not have followed Kurzweil’s progressive narrative pattern. Elsewhere in the book, Kurzweil lists a number of past negative predictions regarding future technology development that now seem ludicrous. Here are some examples:

Heavier than air flying machines are not possible. Lord Kelvin, 1895


I think there is a world market for maybe five computers. IBM Chairman Thomas Watson, 1943


The Internet will catastrophically collapse in 1996. Robert Metcalfe, n.d. (1999, pp. 169–170)

There are, by contrast, other optimistic predictions specifically regarding AI that were not mentioned by Kurzweil and have not come to pass within their predicted time frames. Two examples are:

Machines will be capable within twenty years of doing any work that a man can do.

Herbert A. Simon, 1965 (Crevier, 1993, p. 109)

Within a generation . . . few compartments of intellect will remain outside the machine’s realm—the problem of creating “artificial intelligence” will be substantially solved.

Marvin Minsky, 1967 (Crevier, 1993, p. 109)

One would like to think that critical readers of Kurzweil’s book would pause to think of the opposite case, but it could be that many of them might not do so.

After its publication, discussions of and responses to The Age of Spiritual Machines occurred in many venues; the book became a topic of serious discussions; and Kurzweil’s treatment seemed to rekindle interest in the field of artificial intelligence. In April 2000, Bill Joy published a lengthy essay in Wired that was intended in part to respond to some of the issues raised by Kurzweil. Joy was particularly struck by Kurzweil’s prediction that humans in the future will concede social control to robots. For Joy, this raised the specter of a future in which a small human elite might retain control of large robotic systems and also control the lives and society of the human race. Joy argued that those who develop technologies are responsible for their later use (2000, p. 243). Joy’s attitude and concern contrast sharply with the views of many technophiles who have seemed fatalistic or unconcerned about such matters. Furthermore, Joy raised other ethics-based questions about Kurzweil’s proposed program for AI development. Whose brains will be destructively scanned so that the brain can be reverse engineered? How much will it cost to upload one’s mind into a form of machine intelligence? What will become of humans who are not placed so as to benefit from machine intelligence? What might be the implications of disembodied intelligence? And, eventually, what will become of the human race if it is superseded by forms of inorganic machine intelligence?

Elsewhere, the discussion continued into late 2000 and early 2001 on the Web site Edge (<http://www.edge.org>) and included Kurzweil’s defense of the ideas in The Age of Spiritual Machines as criticized by Jaron Lanier (2000a) in a commentary on Joy’s essay. Edge is a by-invitation Web forum self-described as “an informal salon, a forum for eminent scientists, members of the digerati and science journalists from all over the world” (Mundy, 2000). It is edited by John Brockman, who in September 2000 decided to publish Jaron Lanier’s “One Half of a Manifesto”—a refutation of what Lanier viewed as “Cybernetic Totalism” and, in particular, AI research. Since Lanier is a pioneer in virtual reality and lead scientist for the National Tele-Immersion Initiative, other Edge participants took his remarks seriously.

Lanier argued that AI enthusiasts such as Kurzweil and Hans Moravec confuse ideal computers with real computers that behave differently. Real computers run on software rendered inadequate to keep pace with hardware advances because of a “legacy” effect—the disruptive influence of underlying code on which later code and code components depend. This is what leads to brittleness—the subtle incompatibility between chunks of software that were originally created in different times and contexts.

Lanier furthermore argued that many AI researchers have a limited view of human thought. He maintained that we still do not fundamentally understand the processes of rational thought—in particular, humans’ ability to build abstract representations of the world and to enact common sense. This is often unrecognized by the general public because, as Lanier noted, thinkers who “place what is essentially a form of algorithmic computation at the center of reality . . . tend to be confident and crisp and to occasionally have new and good ideas” (Lanier, 2000a).

Lanier subsequently wrote a postscript to his “Manifesto” specifically addressed to Kurzweil. In it, he emphasized Kurzweil’s tendencies to make no distinction between quantity (Moore’s Law) and quality, to blend phenomena in different categories together indiscriminately, and to cite only those examples and facts that supported his own predictions (Lanier, 2000b).

Kurzweil (2001) responded by characterizing Lanier’s concern about bad software as “engineer’s pessimism”—a trait causing Lanier to lose sight of the long-term implications of technology growth. He noted that similar forms of pessimism had earlier plagued the human genome project and early views of the Internet’s potential. In response to the software issue, Kurzweil argued that he viewed reverse engineering of the human brain as the solution. He cited recent advances in improved understanding of the brain’s physical structure and its function, and he insisted that advances in brain research enable us to “observe the brain’s massively parallel methods . . . scan and understand its connections . . . and replicate its methods” (Kurzweil, 2001). He said that he viewed the subsumption of human intelligence by machine intelligence as “neither utopian nor dystopian” but as the logical outcome of an evolutionary process.

Subsequent to Joy’s public indictment of his views and Lanier’s discussion of his book on Edge.org, Kurzweil launched a new Web site—kurzweilai.net—on February 22, 2001. The express purpose of this site was to provide an open forum for his and his critics’ ideas on AI. On this site, which has been assiduously maintained and upgraded since its inception, Kurzweil has frequently responded to the views of some of his critics. In particular, the site contains the entire text of a book—Are We Spiritual Machines: Ray Kurzweil vs. the Critics of Strong AI, published in Spring 2002 (Richards, 2002).


One of Kurzweil’s respondents in Are We Spiritual Machines was William A. Dembski, a mathematician and philosopher and research associate professor at Baylor University. Dembski noted that Kurzweil’s aim of reverse engineering the human mind and his descriptions of intelligence and brain function indicate that he is a materialist who believes that “mind must, in some fashion, reduce to matter.” Dembski notes that Kurzweil’s view aligns with neuroscience, which holds that mind does ultimately reduce to neurophysiology. Dembski observes that many neuroscientists describe ordinary psychology as “folk psychology” as opposed to a revamped psychology grounded in neuroscience. The view is that eventually, “in place of talking cures that address our beliefs, desires, and emotions, tomorrow’s healers of the soul will manipulate brain states directly and ignore such outdated categories as beliefs, desires, and emotions” (Dembski, 2002).

Dembski offers a number of interesting arguments against a materialist view of the mind’s function. Among them are examples of people with badly damaged brains, such as Louis Pasteur, who continued to function optimally. He asks how one can explain a flourishing intellectual life despite a damaged brain if mind and brain coincide. He also notes that actual neuroscience research is a modest affair and “fails to support materialism’s vaulting ambitions” (Dembski, 2002). He then proceeds to note that whereas the goal of neuroscience is to reduce intelligent agency to neurophysiology, the goal of AI is to reduce it to computation. He concludes that cognitive scientists still have the task of showing in what sense brain function is computational.

Kurzweil’s lack of concern about “everything else but matter” is reflected in his discussions of consciousness and of ethics. He admits that we really have no idea of what consciousness is: “It’s hard even to define what each object or thing is that might be conscious, as there are no clear boundaries. Or maybe there’s more than one conscious awareness associated with my own brain and body. There are plenty of hints along these lines with multiple personalities, or people who appear to do fine with only half a brain (either half will do)” (International Society, 2002). He is not concerned with consciousness in his discussions, and he focuses on those portions of the brain structure and neurological activity that can be objectively measured. “It’s the difference between the concept of ‘objectivity,’ which is the basis for science, and ‘subjectivity’ which is a synonym for consciousness” (Kurzweil, 2001). Because we cannot resolve the issues of consciousness entirely through objective measurement and analysis, “there is a critical role for philosophy, which we sometimes call religion” (Kurzweil, 2001). In Kurzweil’s view, the question of consciousness is therefore assigned to the nonscientific disciplines and is not an issue he wishes to engage.

If a necessary aim of public discussion about technology development is to promote rather than inhibit critical discussion, then Kurzweil’s work in publishing his website and his critics’ views has been successful. In its conception, design, and substance, kurzweilai.net affords its users a valuable opportunity to read, consider, and debate a range of issues pertinent to AI development. Although Kurzweil’s own lack of concern about dystopic scenarios of the AI future may be disappointing, such concern would be out of alignment with his faith in the inherently meliorative force of science and technology development. In any case, Kurzweil seems to have rekindled public interest in AI and mounted a successful defense of it as a viable, albeit risky, technology.

References

“Algorithm.” (2001). Oxford English Dictionary. Oxford: Oxford University Press.

Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.

Dembski, W. A. (2002). Kurzweil’s impoverished spirituality. In J. W. Richards (Ed.), Are we spiritual machines: Ray Kurzweil vs. the critics of strong AI. Seattle, WA: Discovery Institute Press. Retrieved September 8, 2002, from <http://www.kurweilai.net/meme/frame.html?main=/articles/art0497.html>.

Ekbia, H. R. (2001, November). Artificial intelligence at a crossroads. Paper presented at the Society for Social Studies of Science Conference, Cambridge, MA.

Ekbia, H. R. (2002). Artificial intelligence: Hype versus hope. Unpublished doctoral dissertation, Indiana University, Bloomington.

Fahnestock, J. (1998, July). Accommodating science. Written Communication 15.

Goldbart, R. C. (1998, April). Scientific writing: Three neglected aspects. Journal of Environmental Health 60. Expanded Academic Index, File A57533204.

Inch, E. S., & Warnick, B. (2002). Critical thinking and communication: The use of reason in argument. Boston: Allyn and Bacon.

International Society for Complexity, Information, and Design. (2001, July 19). Live moderated chat: Are we spiritual machines? Retrieved September 11, 2002, from <http://www.kurzweilai.net/meme/frame.html?m=17>.

Joy, B. (2000, April). Why the future doesn’t need us. Wired 8.04, 238–245, 248–263.

Kennedy, G. A. (1991). Aristotle on rhetoric. New York: Oxford University Press.

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York: Viking.

Kurzweil, R. (2001). One half of an argument. Edge, August 4. Retrieved March 19, 2003, from <http://www.edge.org/3rd_culture/kurzweil/kurzweil_print.html>.

Lanier, J. (2000a). One half of a manifesto. Edge 74, September 25. Retrieved March 19, 2003, from <http://www.edge.org/documents/archive/edge74.html>.

Lanier, J. (2000b). Postscript re: Ray Kurzweil. Edge, November 20. Retrieved March 19, 2003, from <http://www.edge.org/discourse/jaron_answer.html>.

Miller, S. E. (1996). Civilizing cyberspace: Policy, power, and the information superhighway. New York: Addison-Wesley.

Mundy, T. (2000). The edge of science. Edge 74, September 25. Retrieved March 19, 2003, from <http://www.edge.org/documents/archive/edge74.html>.

Muska, R. (2000). Created laws and spiritual machines. [Review of the book The age of spiritual machines]. The Skeptical Inquirer 24: 56.

Proudfoot, D. (1999). How human can they get? [Review of the book The age of spiritual machines]. Science 284: 745.

Richards, J. W. (Ed.). (2002). Are we spiritual machines: Ray Kurzweil vs. the critics of strong AI. Seattle, WA: Discovery Institute Press. Retrieved September 8, 2002, from <http://www.kurweilai.net/meme/frame.html?main=/articles/art0497.htm>.

Shaffer, R. A. (1999). Pundit forecasts portable, praying PCs in The age of spiritual machines. [Review of the book The age of spiritual machines]. Fortune 139, 124.

Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon and Schuster.
