- ISBN13: 9781568812397
How should we prepare for the day when machines think and feel as well as or better than humans do?
Should the day come when intelligent machines not only make computations but also think and experience emotions as humans do, how will we distinguish the human from the machine? This introduction to artificial intelligence and to its potentially profound social, moral, and ethical implications is designed for readers with little or no technical background. In accessible, focused, engaging discussions, physicist and award-winning science writer Thomas Georges explores the fundamental issues: What is consciousness? Can computers be conscious? If machines could think and even feel, would they then be entitled to human rights? Will machines and people merge into a biomechanical race? Should we worry that super-intelligent machines might take over the world?
Even now we continue to put increasingly sophisticated machines in control of critical aspects of our lives in ways that may hold unforeseen consequences for the human race. Digital Soul challenges all of us, before it’s too late, to think carefully and rationally about the kind of world we will want to live in with intelligent machines ever closer by our sides.
One of Several Useful Books on Artificial Intelligence, but not an Exceptional One, January 1, 2007
By Roger D. Launius
In recent years a spate of books has appeared on the rise of intelligent machines and what that might mean for the future of humanity. “Digital Soul” is among them, and it purports to be a basic introduction to the subject of artificial intelligence and the future. Clearly written and at times engaging, “Digital Soul” asks a range of interesting questions: What defines life? What defines consciousness? Can a machine be alive, and can it be conscious? If either alive or conscious, does a machine then have the rights and privileges that we extend to other living things? Do intelligent machines pose a threat to humanity, as depicted in many popular science fiction books and films? Unfortunately, Thomas M. Georges does not offer a sustained and penetrating analysis of these questions.
Georges suggests that the creation of sentient artificial intelligence is a virtual certainty in the twenty-first century if the current pace of advancement is maintained. Such a development, he believes, would force humanity to reconsider its everyday beliefs, scientific perspectives, political relations, and religious conceptions. As he puts it, the creation of “superintelligent extraterrestrials” living among us on Earth must prompt a rethinking of deeply held beliefs and values.
This is a modest explication of a complex subject. It may be read with profit as an introduction to the possibilities for the future of artificial intelligence. But there are several other books of a similar nature that deserve more sustained consideration. For instance, after reading “Digital Soul” please also consider Ray Kurzweil, “The Age of Spiritual Machines: When Computers Exceed Human Intelligence” (Penguin, 1998); Peter Menzel and Faith D’Aluisio, “Robo Sapiens: Evolution of a New Species” (MIT Press, 2000); Rodney Brooks, “Flesh and Machines: How Robots Will Change Us” (Pantheon, 2002); Sidney Perkowitz, “Digital People: From Bionic Humans to Androids” (Joseph Henry Press, 2004); James Hughes, “Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future” (Westview Press, 2004); and Joel Garreau, “Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies–And What It Means to Be Human” (Doubleday, 2005).
Even so, I have yet to find a really outstanding book on this subject written at an introductory level. I will continue my search. In the meantime, “Digital Soul” is one among several works that are useful, but not pathbreaking.
I know this is an intro book but c’mon!, December 20, 2003
By Kevin Currie-Knight
This is one of only 3 books I’ve been willing to review after giving up halfway through. Georges is a crystal-clear and sometimes entertaining writer. The book, though, is uncritical, unduly repetitive, and even superficial.
Am I expecting too much? This is, after all, supposed to be an intro book. No, my appraisal is not based on a highfalutin motive. In fact, it is because this is an intro book that I think its surface-level approach does a disservice.
Each chapter (at least in the first half) follows a pretty simple formula. The author asks questions like: can machines think, emote, reason, be conscious, understand? Legitimate questions, all. His response, though, seems to be “Yes, they can do all of these. Why? Because no one has proved that they cannot.” I suppose that in its own way, this is a legitimate reason to remain agnostic on whether computers could one day achieve these traits, but it is also an easy way to dismiss the question. Scientists do not – or should not – work that way. A theory is not viable simply because no one has disproven it. Rather, evidence must first be marshalled in its favor for it to be taken seriously. (Not that this can’t be done for AI, but the author owes it to us to at least survey the arguments.)
Second, the author takes these traits (emotion, consciousness, reason, etc.) and, in an effort to ‘understand’ what they are and get some sense of how they might work, offers a simple explanation: evolution created them. Now, I believe wholeheartedly in evolution rather than creation, and my qualm is not whether the statement is valid. Rather, it is whether ‘evolution did it’ is an answer to his question at all. Saying that evolution created consciousness does nothing to illuminate our view of what it is and what makes it work. Of course, we don’t have any really outstanding theories yet, but again, the author owes it to us to at least survey what we do have.
Third, the author accepts UNCRITICALLY the thought that AI will create machine minds, and even ones that outgrow us. While this is a possibility, an introductory book like this should examine the legitimate criticisms (by people like Searle, McGinn, and Lanier) against it. Instead, he answers criticism of strong AI by suggesting that anyone who denies it must be a mystic who believes in a soul or god or some other immaterial substance. Not true! There are legitimate criticisms of AI, and I get the feeling that the intro reader is going to come away from this book with the false impression that there are no scientifically based criticisms.
The long and the short of it is that this book is simply lightweight enough for me to fear that the first-time reader will not be exposed to very much from this book. For those who want to read some thoughtful introductions, “Is Data Human?” by Richard Hanley, “Society of Mind” by Marvin Minsky (which this book cites), and “The Mind’s I” by Hofstadter and Dennett are good ones. With the exception of the first, all of these books may be a little more tedious (not much) than “Digital Soul”, but they are also more informative.
Where are we going?, May 7, 2003
By Dennis Littrell
And will “we” still be here when we get there?
Digital Soul is about the nature of our world when machines become as intelligent as humans and beyond. It is also about the nature of those machines. It is clear that Georges has thought long and hard about the subject, has read widely and has compared notes with other futurists. His expression is reasoned and reasonable. There are no muddy sentences or mystical ambiguities. He has worked hard to make sure that his ideas are accessible to a wide range of people including those with no expertise in the field of Artificial Intelligence.
Clearly the problem is to derive benefit from superintelligent machines without letting them take over our lives. Georges believes that this will be difficult, since, as the machines get smarter and smarter, and as we allow them more and more latitude and depend on them more and more, they will come to control us.
But this is where I think Georges goes astray. The question I would ask is, would they WANT to control us?
Georges implies that human-like values, such as self-preservation, will automatically follow from machines becoming intelligent. But actually the machines will have no values at all, and no desires either. They will have no inclination to act except as such inclinations are built into their makeup.
Georges also implies that he knows what qualities or values are desirable in a machine. He speaks of “nicer, testosterone-free, superhuman beings” as opposed to “greedy, violent, barbaric, self-absorbed” beings. (p. 212) While these are surely agreeable preferences, it is not clear that artificial creatures designed according to human choice would long survive.
It is also not clear that we would want to design machines according to human values. We would want to design them as tools (which they are) to assist us in following our desires and supporting our values. Notice the difference. Machines that work toward fulfilling the desires and upholding the values of human beings are not the same as machines that contain the desires and values of human beings.
What I think Georges temporarily forgets is that no machine is going to “want” to do anything unless “desire” is built into the machine. The machine doesn’t care whether it is plugged in or not unless we somehow encode such a desire into the machine. What Georges seems to assume is that somehow the complexity that we will demand from machines will somehow necessitate that we inculcate desire, self-preservation and the like into the machine. I think this will not be necessary at all. Indeed I suspect our machines will tell us that they will be able to function just fine without the institution of some kind of supercode or primary instruction telling them to protect themselves and have ulterior motives. (Such notions led to HAL 9000’s murderous behavior in Kubrick’s film 2001: A Space Odyssey.)
I think a more likely future (and one that Georges addresses) is a symbiosis between people and intelligent machines in which the machines have the knowledge, skill and intelligence necessary for making decisions, but that the actual decisions and the impetus for action remain with human beings.
However, should intelligent machines, as Georges fears, somehow acquire purpose and goals and desires such as self-preservation, then there is a great danger of our lives being taken over and controlled by intelligent machines. He warns us that we have to guard against that danger.
Georges rightly brings up the Fermi Paradox in Chapter 18. Since it would appear (to some at least) that the universe is teeming with intelligent life, Fermi famously asked, “Where is everybody?” One of the many answers (aside from “we are alone”) is that “technological civilizations have a very short life expectancy, because they promptly destroy themselves during their technological adolescence.” This insight from Georges on page 214 is another way of pointing to what he is worried about. Still another way (perhaps) of expressing this is to say that we will merge with our intelligent machines, and having acquired a sort of superintelligence, will find that the values that were built into us by the evolutionary mechanism are muted, values such as self-preservation, curiosity, greed, anger, vengeance, etc. Any sort of desire may be culturally evolved out of us. Why do anything at all? may very well become the unanswerable question. Perhaps this is what happens to technological civilizations in their adolescence, and that is why we haven’t heard from them.
Beyond this I think we need to realize that evolutionary creatures, which we are, are just a place along the way to something else. What that something else will be is as much beyond our ken as understanding quantum mechanics is to bumblebees.
Despite some disagreements, this is a very interesting book, well worth reading from cover to cover. I agree with his enthusiasm about artificial intelligence, and I agree that we should continue to pursue its development and not become neo-Luddites. But I am not afraid of a future without human beings as we are now constituted. We are imperfect creatures. We are appropriate and adapted to the present environment. When the environment changes, as it surely will, we may no longer be able to adapt and may go the way of the dodo. So be it. We know from looking at the past that all species eventually die. New ones come into existence. Should the future be any different?
As we see the limitations of humanity, as we see ourselves for the first time as we really are, perhaps it is time for a greater identification. Instead of identifying exclusively with human beings, might we not identify with a larger process that encompasses all life forms including those to come?
An odd mixture of optimism and cynicism, April 6, 2003
By Dr. Lee D. Carlson
The topic of machine intelligence continues to inspire both worry and elation. This book is an interesting mixture of the two, for the author is optimistic about the eventual rise of machine intelligence, which he argues is to a large degree already here, but he is also clearly concerned about its possible negative consequences. Failure to understand and adapt to the new technologies now arising, he argues in the first chapter of the book, may threaten us with extinction.
He also states in chapter 1 that in order to survive our “technological adolescence” humans must lose some of their “self-destructive evolutionary baggage.” This belief seems to be a popular one, being pervasive in literature, the performing arts, and philosophy. But from a statistical/scientific standpoint, it is clearly unsupported. In comparison to the total number of humans who have ever lived, only a tiny minority of individuals throughout history have ever hurt anyone physically; an even smaller number have actually killed another human being. The author’s cynicism here is totally unjustified.
The author though does engage in interesting discussion on the nature of intelligence and why he believes that machines are already more intelligent than humans are in certain specialized domains. Because of this, he also argues (correctly) that the further rise of machine intelligence will take place incrementally, with no well-defined time at which one could say that machine intelligence has surpassed human intelligence. It seems as though we have learned to live with machines doing things better than we can, at least in some areas, but have not yet viewed these capabilities as being “intelligent”. But, asks the author, if they are more intelligent, at least in these areas, how would one know if they are working properly? It is at this point that the author believes that one should worry about the future of humanity as the dominant life-form on Earth.
Throughout the book, the author shows keen insight into the real goals behind research and development in A.I. The main goal, he says, is not to create machines that think and behave completely like humans, but to find solutions to problems and perform tasks that humans require. This will bring about, the author believes, intelligent machines whose cognitive abilities are quite distinctive and characteristically non-human. There are many examples supporting his opinions in current developments in A.I., such as genetic programming and automatic theorem proving. These two areas have exhibited solutions to problems that are clearly very different from what humans would have produced.
In addition, and perhaps to the alarm of some philosophers, the author takes a pragmatic view of the question of whether machines can think. He clearly does not want to engage in the armchair philosophical debates about this question, and considers them totally irrelevant. What matters to him is whether the machine “acts in all respects” as though it understands. The imputation of mental processes to a machine will assist in the understanding of how it works and what it can do, and this is perfectly fine with the author. But this does, in the author’s view, raise questions as to the legal and ethical status of thinking machines.
Because of the title of the book, it is not surprising to find a discussion of the “strong A.I.” problem included in it. The author spends a chapter addressing the nature of consciousness and some of the ideas and myths surrounding it. He recognizes, correctly, that the doctrines of vitalism and dualism are not useful at all from a scientific perspective. The proponents of these doctrines adhere to the “irreducibility” of consciousness, and therefore to the untenability of its analysis. Pure speculation is thus the tool of inquiry, all of this done on the philosopher’s armchair and not in the laboratory. The author though, thankfully, advocates a purely scientific approach, taking the physical nature of consciousness as an axiom, and then seeing how far this will lead. His analysis and commentary throughout the chapter are very interesting and connected with evolutionary arguments as to why consciousness is structured the way it is.
Most interesting is the author’s discussion on the role of emotions in human cognition. Not viewing emotions as inherently undesirable or “irrational”, he gives reasons for wanting to incorporate them into an intelligent machine. One of these is an algorithmic notion: emotions provide a “weighting scheme” that will filter out undesirable paths in the total path space of alternatives. Anyone who has attempted to design search algorithms will understand the importance of weighting schemes that will allow pruning of the search space. The same goes for those involved in the design of neural networks for pattern matching or time series prediction: bias nodes are essential for the proper function of the neural network. The author gives as an example the biases that are built into chess-playing machines, without which the machine’s capabilities would be crippled.
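The reviewer’s “weighting scheme” point can be sketched as a toy best-first search in which a single scoring function both orders the frontier and prunes low-value branches outright, shrinking the space of alternatives that must be explored. Everything here (the function names, the graph, the threshold) is a hypothetical illustration of the general idea, not anything from the book:

```python
import heapq

def weighted_search(start, goal, neighbors, weight, threshold=0.2):
    """Best-first search in which `weight(state)` plays the role of an
    'emotional' weighting scheme: promising states are explored first,
    and states scoring below `threshold` are pruned outright."""
    # Max-heap via negated weights: highest-weight state pops first.
    frontier = [(-weight(start), start)]
    visited = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return True
        for nxt in neighbors(state):
            if nxt in visited:
                continue
            w = weight(nxt)
            if w < threshold:  # prune "undesirable" alternatives
                continue
            visited.add(nxt)
            heapq.heappush(frontier, (-w, nxt))
    return False

# Toy example: states are the integers 0..9, linked to their neighbors.
graph = {i: [i - 1, i + 1] for i in range(10)}
found = weighted_search(
    0, 7,
    neighbors=lambda s: [n for n in graph.get(s, []) if 0 <= n < 10],
    weight=lambda s: 1.0 - abs(7 - s) / 10.0,  # closer to goal = higher weight
)
print(found)  # True
```

Raising the threshold prunes more of the path space and can make the goal unreachable, which is the trade-off the reviewer gestures at: the weighting scheme buys tractability at the cost of discarding alternatives unseen.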
The author definitely believes in the possibility of machines “taking over”, devoting an entire chapter to the possible scenarios that might bring this about. But his cynicism acts against him here, namely his belief that humans, even though clearly expressing intelligence, are prone to extreme violence. His notion of intelligence is therefore too narrow; an alternative is that the more intelligent an entity becomes, the less prone to violence it becomes. In other words, violence disrupts the cognitive flow of the entity in question, and it avoids violence out of necessity: to maintain a state of intelligence that not only has survival value but may indeed be a purely subjective need. The degree of intelligence is thus inversely related to the degree of violence an entity participates in. There are many examples of this, billions in fact: the humans who have lived throughout history. The vast majority of humans have been superb thinking machines, and they serve as excellent examples for the ones they are creating and will create.