> Seriously, I do not see where Prof Hawking is coming from.
> There may well come a day when computer systems are
> self-aware and Prof Hawking turns out to be an unsung
> prophet of doom. Maybe a HAL2000 scenario is possible
> ("Open the pod door Hal") on an individual basis but I
> think/hope unlikely that on a global scale computers could
> do physical harm. In which case who's really in charge ?
> It would be unpalatable to go back living without computers
> but if they proved to be such a scourge then turn them off,
> pick up the pieces and learn a lesson

I agree that Hawking has lost the plot if the version of this in our local
newspaper is true to his comments. He is not alone in this idea though -
AFAIR one of the world's foremost AI authorities says that this is a real
risk and that development in the field should be curtailed for this reason!

However ludicrous this may seem, one should not discount EFFECTIVE self
awareness in computers in the foreseeable medium term, all other things
being equal. The article cited Hawking citing a doubling of capability
every 18 months, which is "Moore's law" and originally related only to the
number of transistors in a design. Extrapolating it from the earliest PC
days for hard disks, RAM and processor power produces some fairly good
straight lines (on a log scale - a back-of-envelope sketch is in the first
PS below).

Where a "self aware" computer could do inestimable damage is in the first
part of the Terminator scenario, where a computer makes decisions for "its
own benefit" which lead to the use of nuclear weapons or whatever is the
flavour-of-the-day equivalent. It seems to me that this is just the sort of
area in which some of the most advanced computer systems are liable to be
in use.

IF a self-serving, self-aware computer system ever did come to fruition,
more by accident than on purpose, then the odds are that it would have
internet access. Once it has that, anything that can be done via the net is
within its grasp - a super hacker. If a nuclear weapons delivery system is
accessible from the net, no matter how complex the path to it, then there
will be "trouble" (to switch computer / movie metaphors).

Lest something that at least passes as apparent self awareness seem like
extreme SciFi, consider what it would take to write a program that had at
least rudimentary aspects of this. Many on this list could write programs
which have traces of apparent self awareness - probably down at the
sub-1-year-old human level in most cases (no reflection on the human-level
capabilities of the participants :-) ). We would immediately recognise
interchanges with such "beings" as not being with a self aware being,
because we know that babies with rudimentary concepts of self awareness do
not use computers. If there were some means for babies to convey their
minimal capabilities via the internet, and if it were normal for them to do
this, then we would probably easily see programs passing the Turing test at
this level. (I sleep, wake, hunger, feed, defecate, cry, sleep, ...
therefore I am.) A toy sketch of such a program is in the second PS below.

From there the path to APPARENT 2-year-old self awareness seems continuous,
albeit very, very steep and complex. But maybe not. Apparent self awareness
and actual self awareness are very, very likely to be separated by a
bottomless chasm. But apparent self awareness and vested apparent self
interest should be enough to start a nuclear war.

regards

Russell McMahon

--
http://www.piclist.com hint: To leave the PICList
mailto:piclist-unsubscribe-request@mitvma.mit.edu
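PS 1: A back-of-envelope sketch of the "straight lines" claim above,
assuming capability really does double every 18 months: size(t) =
size(0) * 2^(t / 1.5), so log10(size) grows linearly with time and the
extrapolation plots as a straight line on log paper. The starting figures
in the little C illustration below are invented for the example only:

#include <math.h>
#include <stdio.h>

/* Doubling every 18 months: size(t) = size0 * 2^(t / 1.5).           */
/* The "early PC" starting figures are illustrative guesses only.     */
int main(void)
{
    double ram0_kb  = 64.0;   /* assumed RAM of an early PC, in KB    */
    double disk0_mb = 10.0;   /* assumed hard disk size, in MB        */

    for (int year = 0; year <= 20; year += 5) {
        double factor = pow(2.0, year / 1.5);
        /* log10 of the size grows linearly with the year             */
        printf("year %2d: RAM %12.0f KB (log10 %.2f), disk %12.0f MB\n",
               year, ram0_kb * factor, log10(ram0_kb * factor),
               disk0_mb * factor);
    }
    return 0;
}

Twenty years at that rate is a factor of about 2^(20/1.5), i.e. roughly
10000x, which is in the right ballpark for RAM and hard disk growth over
the PC era.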
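PS 2: A toy sketch of the sort of "sub 1 year old" program described above
- apparent self awareness only, and every name and number in it is invented
for the illustration. It does nothing more than cycle through a fixed set
of internal states and report them in the first person:

#include <stdio.h>

/* A baby-level state machine: sleep, wake, hunger, cry, feed,        */
/* defecate, then back to sleep.  It "knows" only its own current     */
/* state - apparent, not actual, self awareness.                      */
enum state { SLEEP, WAKE, HUNGER, CRY, FEED, DEFECATE, NSTATES };

static const char *report[NSTATES] = {
    "I sleep.", "I wake.", "I hunger.", "I cry.", "I feed.", "I defecate."
};

int main(void)
{
    enum state s = SLEEP;

    for (int tick = 0; tick < 12; tick++) {
        printf("%s ... therefore I am.\n", report[s]);
        s = (enum state)((s + 1) % NSTATES);  /* fixed daily cycle     */
    }
    return 0;
}

Hooking something like that up to a chat connection would not pass any
serious Turing test, but it is about the level of interchange the paragraph
above has in mind.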