A new model for building AI (Humans)

by dhw, Tuesday, November 16, 2010, 13:13 (4911 days ago) @ xeno6696

MATT: dhw asks many good questions, but I wish to direct us to one specific question at a time: "What do we want to preserve of our humanness?"

I can't take the credit for any of the questions. They were asked by Dr Cynthia Breazeal, from whose excellent article I quoted several passages.

Your response is: "What makes you think that we have to preserve anything?"

I don't think you can take that question on its own. She leads into it from the statement that robots "may eventually incorporate more biological technologies to leverage from biochemical processes, such as the ability to self-repair." This makes her ask, "Will we still be human? What does it mean to be human? What do we want to preserve of our humanness? What are the implications for granting the status of personhood?" I see all these questions as interconnected. If biological technology is married to robotic technology, we begin to move closer and closer to the fictional scenarios of Frankenstein, Jekyll and Hyde, and The Stepford Wives, in which humans lose control of their identities.

Like David, I'm sceptical as to whether we can ever build a robot indistinguishable in all respects (including intellectual and emotional) from humans, but Cynthia Breazeal's questions enter into fields of philosophy. You call man an "ineffable river...never the same thing twice" (ineffable means indescribable or inexpressible, so I'm not sure how it applies to a river), and you ask if we should "cast man in stone as we did Gods and thus destroy them". I think that may be the very problem ... that robots could be the same thing twice, and having been programmed they could be metaphorically cast in stone; even organic humans subjected to technological implants could be cast in stone, in the sense that their behaviour might be dictated by technology and not by that still unfathomably mysterious personal identity linked to the source or medium called consciousness. I don't think the process is stoppable, and I don't think it has to be stopped, because I think it will reach insurmountable barriers, but the moral questions still apply even before we reach those barriers. Imagine the consequences in a totalitarian regime if the government controlled the technology to change and direct people's behaviour. It's already bad enough that they can influence thought (see Orwell's 1984), but at least individuals are still potentially capable of thinking for themselves (hence rebellions and resistance movements). And so I think you have answered your own question. What we have to preserve is individual autonomy ... which is the right not to be "cast in stone".

