> It's helpful to discuss though as a background to AI in even
> a basic bot. If you want to give the bot some smarts at some
> point you have to stop with the engineering and take a look
> at imprinting it with some psychological behaviour

From the little I have come to know about the world I live in, most attempts by humans to endow their robotic creations with 'intelligence' (of the kind that lets them cross the street or buy groceries) will likely fail. I think that bug-style simple logic plus the capability to learn (a carrot/stick button and some form of evolution-capable AI system, such as a neural network) is what is needed for a start, followed by many, many thousands or tens of thousands of real-world interactions to build a workable set of rules. Once you have a set of working rules it is easy to transfer them to other robots (see the rough sketch at the end of this post).

Of course someone would have to pay for this, and then no one would insure any kind of robot based on a heuristic system that cannot be proven 'safe' (even though it has learned what it can do in the same way as people have, and people *are* life- and accident-insured by the same companies). Yet I believe that this is the only way to gather information about what those rules should really be.

For example, Asimov's robot laws would not allow a robot autopilot or defense system to save a million people by killing one, say a terrorist who has stolen an aircraft with nuclear weapons on board, en route to a major city. Okay, this is extreme, but there are other scenarios. Take three unconscious people in a submarine after an accident, each in an automatically sealed compartment, with just enough oxygen on board for two to survive until rescue is scheduled to arrive. Can the robot decide who is to live? Will it kill all three by not killing one? Will it be 'blamed' if it does? If it does not?

Peter
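P.S. To make the carrot/stick idea above a little more concrete, here is a minimal Python sketch. Everything in it (the class name, the example states and actions, the update rule) is made up for illustration, not a standard algorithm or library; a real robot would need a far richer learner, but the reward/punish buttons and the transferable rule table are the point.

  # Minimal carrot/stick learner -- an illustrative sketch only.
  # States, actions, and the update rule are all assumptions.
  import random

  class CarrotStickBot:
      def __init__(self, actions, learning_rate=0.2, exploration=0.1):
          self.actions = actions
          self.lr = learning_rate
          self.eps = exploration
          self.weights = {}   # (state, action) -> learned preference
          self.last = None    # remember the most recent (state, action)

      def act(self, state):
          # Mostly pick the best-known action; sometimes explore.
          if random.random() < self.eps:
              action = random.choice(self.actions)
          else:
              action = max(self.actions,
                           key=lambda a: self.weights.get((state, a), 0.0))
          self.last = (state, action)
          return action

      def carrot(self):       # reward button: reinforce the last action
          self._update(+1.0)

      def stick(self):        # punish button: discourage the last action
          self._update(-1.0)

      def _update(self, signal):
          if self.last is None:
              return
          w = self.weights.get(self.last, 0.0)
          self.weights[self.last] = w + self.lr * signal

  # Example of one training interaction:
  bot = CarrotStickBot(["forward", "turn_left", "turn_right", "stop"])
  action = bot.act("obstacle_ahead")   # bot tries something
  if action == "forward":
      bot.stick()                      # it bumped the wall: punish
  else:
      bot.carrot()                     # it avoided the wall: reward

Pressing carrot() after a move makes that move more likely in that situation, stick() makes it less likely, and since the learned 'rules' are just the weights table, copying that table to another robot transfers the training.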