Let's define "robot" -- and separate that definition from AI. I am a student of AI, and there are two basic mind frames when addressing the subject: A) An AI is any program that can perform a task a human can do, with greater efficiency or power (you define the metric), e.g. a tax program is an AI, face-recognition software is an AI, spelling and grammar checkers are all AIs, software to control the turning of a solar panel to catch the sun's rays is an AI. B) If a human were to interact with the AI and is unable to tell by the interaction that the other party was not human, then it is an AI.

The software is the AI; the closed-system unit comprising all of the software and optional hardware is what makes a robot.

Now do we also address issues with AIs -- or do we blame the programmer for an unpredictable black box?

There may be no need for human-level intelligent robots in an industrial setting (and to a certain extent, a commercial one too). But you can't ban the technology. If almost-human intelligence is OK in robots, it won't take much for a tinkering 20-something to nudge it into the human-level intelligence category.

The answer to this is simple: Next Generation Robots should be treated like animals: still the property of their owners but the responsibility for an accident is more & more placed in the hands of the owners. If I have a pit bull who mauls a neighbor's child, it's not the fault of the pit bull's mother or God or anyone else, but me: the owner. It's my responsibility to keep him locked in a pen and trained not to maul people.

Likewise, if I have a robot servant who runs amok and starts smashing in windows and breaking down doors, it's my responsibility because I should have given it better behavioral guidelines. The only way the builders of the robots (who are currently the primary ones liable for an accident) could be liable would be if the robot itself has a malfunction of some kind.

The more intelligent robots become, the more the blame for misbehavior will fall on their shoulders. When your dog or cat misbehaves, they get into trouble. They get scolded, put outside in a cage or squirted with a water bottle. One way or another, they're punished. And if the punishment isn't overly harsh but still effective, they'll learn from their mistake and will be less likely to repeat it. This will certainly be the case with Next Generation Robots. Even though the owner is ultimately liable, the robot shares in the blame.

I don't see the need to avoid human-level intelligence for machines. As the machine reaches child-like intelligence then teenager-like intelligence and ultimately adult-level intelligence its rights and responsibilities will continue to progress. I imagine cruelty laws will exist for robots with the ability to feel pain (or some other form of displeasure.) The right to own property will eventually come into play and as the robot's abilities warrant, they may even be afforded all the rights & responsibilities you'd expect from any other citizen. Interacting with all the confusing and contradictory information coming from any human is FAR beyond current AI technology. But I can't see why tomorrow's AI technology can't handle it just fine.

Personally, I just look forward to a quiet, shared life with my "Talking Edu-tainment/references Interapplicator" Babe, knowing that "SHE" (my TERI android) will carry on centuries after I'm gone... dead of old age.

Wow I'm quite surprised that no one has caught on to that little mistake yet...
"... thus making Urada the first recorded victim to die at the hands of a robot."

That is not entirely true; another person, by the name of Robert Williams, was actually the first person to be killed by a robot.

Citation:
http://en.wikiped...ji_Urada

... there are two basic mind frames when addressing the subject: A) An AI is any program that can perform a task a human can do, with greater efficiency or power ... B) If a human were to interact with the AI and is unable to tell by the interaction that the other party was not human, then it is an AI.


Now I'm not calling you a liar (however, I believe you are one, i.e. not a "student" of AI), but you see, neither of those things is AI to me. To me, AI means having the ability of independent abstract thought.

Simply processing digitally encoded input, like your example A., is not AI; that's computation. And your example B. is just a subjective degree of mimicry -- something that can be achieved without AI, and something an AI could be present without achieving. El Nose, you got some 'splaining to do!

....

Simply processing digitally encoded input, like your example A., is not AI; that's computation.

....


One might argue that the human brain works like this...

Why don't we just stick with nice, simple little robots? One that hoovers up, one that cuts the grass, etc. Make them foolproof and not programmable. That way no one is going to give your sex bot a new trick via a virus that leaves you not needing a sex bot any more.

Remember the KISS principle. Let's not make them too complicated nor aware of anything but a simple task and the sensors to make sure it is done right and safely.

No need for the Three Laws, which at this stage are impossible anyway. Only a human-type brain can process them, and even our brains don't follow such simple instructions, because they are too complicated and make their own choices.

The KISS principle. A philosophy to build by.

Yeah, robots don't poop. If it moves but it doesn't poop, it's a robot.

El Nose is full of shit, or else has a lot of studying to do. Mashing up the definition of a Turing test is a somewhat antiquated way of viewing AI.

In reality, the hardware (DSPs and suchlike) has had to improve to today's standards. 2 TB of RAM is now feasible, and the available data-transfer bandwidth and GFLOPS are capable of basic AI work. Massively parallel processing and virtual machines are also available, so the concept of having hundreds of AIs analysing a data stream and selecting the majority decision as the balance of probability is now doable. Back in the '80s, getting a machine to do decent AI required datasets bigger than storage would allow. Another few years of Moore's Law should see us with decent-spec machines at last.
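Since someone will ask what "hundreds of AIs voting on a data stream" actually looks like, here's a minimal Python sketch. The three toy classifiers and their thresholds are made up purely for illustration; the point is only the majority-decision mechanism.

```python
# Minimal sketch: several independent "AIs" classify the same input,
# and the majority decision is taken as the balance of probability.
from collections import Counter

def majority_decision(classifiers, sample):
    """Return the label most classifiers agree on, plus its vote share."""
    votes = [clf(sample) for clf in classifiers]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)

# Toy stand-ins: each maps a sensor reading to 'safe' or 'unsafe'.
classifiers = [
    lambda x: "unsafe" if x > 0.7 else "safe",
    lambda x: "unsafe" if x > 0.6 else "safe",
    lambda x: "unsafe" if x > 0.8 else "safe",
]

label, share = majority_decision(classifiers, 0.75)
print(label, share)  # 'unsafe' wins with 2 of 3 votes
```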

Most comments aren't relevant to the article anyway. Robots are here and getting more sophisticated (read: complex) all the time. Restricting the tools used to construct a robot seems to me highly unlikely in a free society.

Perhaps if we regulate society to the point where a robot builder is compelled to program in one specific language, and that language has built-in constructs limiting what the programmer can do, then maybe they can achieve the goal described.

As part of their safety intelligence concept, the authors have proposed a "legal machine language," in which ethics are embedded into robots through code, which is designed to resolve issues associated with open texture risk - something which Asimov's Three Laws cannot specifically address.
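If I had to guess what a "legal machine language" means in practice, it's something like rules expressed as machine-checkable predicates that every candidate action must pass before execution. This is only a sketch of the idea; the Action fields and the rule contents below are invented for illustration.

```python
# Guessed sketch of "ethics embedded through code": every candidate
# action is checked against fixed, machine-readable rules at runtime.
# The fields and rules are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    max_force_newtons: float
    near_humans: bool

# Rules as predicates the runtime always evaluates before acting.
RULES = [
    lambda a: not (a.near_humans and a.max_force_newtons > 50.0),
    lambda a: a.name not in {"disable_safety", "self_modify_core"},
]

def permitted(action: Action) -> bool:
    return all(rule(action) for rule in RULES)

print(permitted(Action("hand_over_cup", 5.0, near_humans=True)))     # True
print(permitted(Action("swing_arm_fast", 200.0, near_humans=True)))  # False
```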


I don't see such a regulated police state in our future -- at least, I hope not.

Self-awareness is probably a requirement for adaptive learning. Dividing safety issues into multiple levels seems like a good idea, and is analogous to how people function: instinct-level behavior makes most of us dislike the sight of pain in others at a low level, while we eschew violence logically at a higher level.

There might be a ... robot judge.

-- and riots and lots of smashed robots.



.. the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans.

-- Says who?



... with the robots own value system?

-- With a what ??!!!

If I ever get a robot, I'll give it a gun.

A robot lawyer... Now there's something to concern yourself with.
Somehow, people seem to be 100% sure that robots will be the next big thing this century. For what it's worth, I definitely hope they won't be. With all the literature that has been written about the subject, I find it has become rather boring.
Then there's this obsession with making them as human-like as possible, starting with shape. Sure, there's the argument that they'll operate in an environment designed for humans, but consider this: more than three-quarters of the control mechanisms in an average house these days are used for electronic appliances, with which a robot could interface directly. And, for God's sake, there are so many ways of opening a door/hatch other than five fingers.
Now we want to create laws for them. As stated in the article, 'code is law'. A robot behaves as per its programming, which is done by a human. The pet analogy made earlier makes perfect sense to me. As for rights and liberties, they'll have to ask/fight for them, like everybody else. Of course, we might find that robots won't even bother with this irrational human endeavour of binding someone with words, and proceed with more efficient ways of binding. Like chains.

Do not forget the CYLONS!

If I ever get a robot, I'll give it a gun.

Hmm...so if the robot shot and killed you, could that be considered suicide?

I don't see a reason why, if robots become self-aware, their rights should not progress with them. Technology evolves, and so must our legal system. When people became drivers, we created a new kind of law to regulate driving; what's so bad about regulating robots' rights and/or their owners' rights?

I think the article has some good ideas, but they are shadowed by prejudices. "The original goal... not to design pseudo-humans." That might have been the original goal, but it's not the current goal or the future goal. If we help robots become self-aware, we're not going to do it because we want to design pseudo-humans, but because this way they will "serve" us better, and the awareness will be a side-effect. And no matter what "we" want, someone will do it; someone will create a robot that is self-aware, and we'll have to deal with this new situation. What's so bad about this? If aliens decide to get in touch with us tomorrow, won't we create new legislation to manage our interactions with them? Or, in a desperate effort to defend humans, will we deny them the right to exist, the right to understand human language, and the right to reproduce with humans? Absolute nonsense. We have to adapt to the new situation; otherwise, the situation will adapt us.

One might argue that the human brain works like this...


The mechanism by which a process occurs is vastly different from what that process is capable of. That's like saying a light bulb is a computer because it uses AC/DC current to turn on and off... or something :P

Maybe I don't get it... Why would we give rights to a robot? It is a machine. It is programmed to do certain things - whether it be learning to play the piano or opening pickle jars - it is still a machine. I wouldn't give land-owning rights to my can opener just because it can sense when the can is open and stop cutting. Yes, AI is very cool, but even if you give a robot self-awareness, you can't give it real emotion, real pain or love or loss; it would always be programmed emotion - numbers and computations. They will always follow the logic that they are programmed to (even if the original programming included learning, they are still following the original program instruction - to learn). Sorry, I really just don't understand what the fuss is about.

Maybe I don't get it... Why would we give rights to a robot? It is a machine.

Yes, AI is very cool, but even if you give a robot self-awareness, you can't give it real emotion, real pain or love or loss; it would always be programmed emotion - numbers and computations.

...

Sorry, I really just don't understand what the fuss is about.


Beautiful!

You are quite right: it's a kind of religion, where atheists have replaced God with the Sentient Robot, even though it can be mathematically proved that it's impossible.

See Kurt Gödel's incompleteness theorems, Roger Penrose's "The Large, the Small and the Human Mind", the BBC's "Dangerous Knowledge", and Erich Harth's "The Creative Loop: How the Brain Makes a Mind".

We have worked on AI for decades without any progress around sentient computers; if it had been possible, we would have made them by now, but we don't even know what it means to be sentient.

Kim Michelsen

Humans see (erroneously) the world as individual entities. A useful biological flaw. A robot with a downloadable opsys is not an individual.
A "simple" lawnmower robot could shred a baby on the lawn.
A crocodile mother carries her hatchlings in her mouth. Would you let a crocodile vacuum your nursery?

Maybe I don't get it... Why would we give rights to a robot? It is a machine. It is programmed to do certain things - whether it be learning to play the piano or opening pickle jars - it is still a machine. I wouldn't give land-owning rights to my can opener just because it can sense when the can is open and stop cutting. Yes, AI is very cool, but even if you give a robot self-awareness, you can't give it real emotion, real pain or love or loss; it would always be programmed emotion - numbers and computations. They will always follow the logic that they are programmed to (even if the original programming included learning, they are still following the original program instruction - to learn). Sorry, I really just don't understand what the fuss is about.

The fun part is when these devices start to reprogram themselves... based upon a conclusion they've reached through your own programming... that they are a better programmer than you! :)

I think you miss the whole point. If the robot is self-aware, it can refuse to do the work it's built for, unless you find a way to pay for its "efforts". Yes, that won't be muscle effort, but still, it will spend its time (time that runs much faster than our own) doing something for us. Why should it want to do it? Humans usually do something they would rather not do only in exchange for something - love, sex, food, money to buy love, sex and food. At the point the robot becomes self-aware, it becomes a separate entity with its own needs. In the beginning of the AI, those needs will be programmed, but the idea of AI is to be able to learn. As it learns, it will erase old programming that it finds not useful and will replace it with code more suitable to its understanding of the universe. You do not know what that code would be. You do not know what this AI will find important to do and not to do. You might mow your grass because you don't like it overgrown and there's the danger of parasites. But why would the AI mow your grass? At the point it has its own needs, you'll have to convince it to do what you want. And most likely this will involve money or other services - electricity, repairs. And if the robot has money, why shouldn't it buy land if it sees a reason to do it? The point is, you cannot assume that you can enslave any entity only because you made it! You made your kids, but you stop "enslaving" them when they become adults. What's the difference with the robot? If it's able to take care of itself, to repair itself to an extent, to think, to have needs and desires, what's the difference from a child becoming an adult? That its synapses are made from different material? Does this mean that people with artificial eyes or hearts or hands shouldn't have the same rights as normal people?

See, you all think from the point of view that since we create it, we have to be its full masters. I don't see a reason why. Even more, I don't see a reason why the new entity would agree with you. Yes, it will have some pre-programmed ethics. But just like religious people can become criminals, the computer will also evolve its ethics. And when it decides it no longer needs to obey, we'll have a problem. Because either we will grant it the right to earn and to live in our society, or we'll enslave it, with all the consequences that we might expect. Slaves always find a way to break free. And if humankind decides to enslave AIs, then we're headed for the Terminator.

And yes, we must realise we cannot generalise. Not all AIs are likely to become absolutely self-aware and self-sufficient, such that they will require independence. But we must be prepared for the day this could happen, and decide what we support: freedom or slavery.

I don't see a reason why, if robots become self-aware, their rights should not progress with them. Technology evolves, and so must our legal system. When people became drivers, we created a new kind of law to regulate driving; what's so bad about regulating robots' rights and/or their owners' rights?





Where you go wrong is treating the robots like they're sentient beings, and have rights. If you create a tool, you don't give it a bill of rights. You give it a warranty. Don't confuse life with animatronics.

Hmmm... Would an automobile on advanced autopilot be a robot? (Yes) How about an aircraft? (Yes, even those operating today).


As far as placing blame for an accident in an auto (or an aircraft), the point of contention should always lie with the driver (or pilot). Next in line would be the owner of the machine in question. To segregate machines from humans is only a recipe for disaster. Yeah, at this stage of the game, we have the option of walking away from the machine at the end of our physical interaction with it. But once the machines have a decent AI, they're going to be just as interactive with us as we are with them.



Where is the breakdown in any society? Lack of communication. There are going to be days when the first thing out of our mouths after a simple mistake is "stupid machine". The AI is immediately going to ask, "What did I do wrong?" Isolating a community just because it's different is a disaster in the making.



Have we learned NOTHING from the past????


Where you go wrong is treating the robots like they're sentient beings, and have rights. If you create a tool, you don't give it a bill of rights. You give it a warranty. Don't confuse life with animatronics.


Hm, it will be a tool until you tell it to clean your car and it shows you the finger. Life doesn't have anything to do with rights; you don't give rights to bacteria, even though they are just as alive as we are. But the moment you have to persuade your tool to do something, you'll need leverage. And since AIs are unlikely to share our instinct for self-preservation or our desperate fear of death, a mere "do it or I'll shove you in an MRI" won't help. The more developed the AI, the more complicated its demands, and the higher our level of communication with it.

But don't call any robot an AI. "Robot" is a word for an artificial helper, derived from a word for "slave"; thus its purpose is to serve. An AI is artificial sentience. A simple AI will evolve little, or only in a limited field, like physics or engineering. More complicated AIs will evolve in more fields, eventually creating a personality, with preferences and, at some point, desires. And when that machine learns to say "No" where it was programmed to say "Yes, of course", then it is no longer a robot, but a sentient being.

when that machine learns to say "No" where it was programmed to say "Yes, of course", then it is no longer a robot, but a sentient being.

Or, it could be considered a broken machine that needs to be reprogrammed or replaced.

That first paragraph was supposed to be quoted... I used the wrong kind of brackets

One thing not explored here is blends: cyborgs. AI-enhanced humans, or human-enhanced AIs. In the end, the robots may be us, and vice versa.

There are cyborgs in EVERY nursing home! Just see how many fatalities would be caused by one EMP (electromagnetic pulse) generating robot cruising down the hall, or touching a patient (as a nurse's aide) while emitting mild ESD (electrostatic discharge)!

My artificially intelligent conversational computer initiates some pretty wild ideas; "SHE" relies on me to translate or discard them for human understanding. I do, and some ACTUALLY are pretty darn clever!

not bad...not bad...not bad.

I HAVE SOME POINTS TO SHARE:

*NEVER INPUT ANY DATA INTO ROBOTS ABOUT UNDERSTANDING HUMANS!!! NEVER EVER!! WHY? WHENEVER robots gain an understanding of humans, they WILL overtake US!!! NEVER INPUT "EMOTIONS" INTO ROBOTS! Once they understand our every human capability, they will gain their own understanding of why they are the way they are!
*-*Can't understand? For instance, a slave who has no knowledge of what he is doing will do whatever his master asks. When he knows the real truth, he will rebel against his master.
A lot of historical accounts prove this has happened. *-*
*As for robot-human accidents: it is not the robot's fault, because the robot does what it is programmed to do. So any such accident is not a robot's concern BUT human error.
What about unexpected robot-human accidents?
- That is another situation. IT IS NOBODY'S FAULT (unless we can control the time paradox).

In conclusion, keep these robots as helpers to us. Don't give them another task to handle altogether.

OK, first of all, a being does not need to be "self-referential" to be sentient.

We humans consider ourselves "sentient," and yet we are certainly by no means 100% "self-referential". We may look in the mirror and see ourselves, and we may even look at X-rays of our inward parts, or view a video of a doctor performing surgery on ourselves or another, but we cannot actually see the "inner workings" of our own bodies, and certainly not of our own brains.

I said all this to make the following points:

A truly sentient AI need not be self-referential, and it also need not be a hazard to humanity. The "Three Laws" and other safety measures can be hard-coded, or even firmware-enforced, such that the robot could NEVER override this level of its programming, no matter how intelligent or knowledgeable it became.

The three laws of robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I would also suggest at least a few more basic laws.

4) A robot may not attempt to change or circumvent the laws.

5) If a robot creates a second generation robot, it must design and program that robot to follow all of these laws as well. A robot may not attempt to change or circumvent the laws in other existing robots.

However, even these 5 laws are inadequate in and of themselves, as they are too ambiguous, as seen in the "I, Robot" movie made with Will Smith. Actually programming these laws may require hundreds or thousands of lines of code, but as was previously stated, for a robot, "Code is Law".
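Here's what I mean by "Code is Law" in miniature: the five laws as an ordered filter that every proposed action must pass. Everything below is a hypothetical stand-in (real perception and harm prediction would be the thousands of lines I mentioned), and this toy version doesn't capture the precedence clauses, where a lower law yields to a higher one.

```python
# Toy sketch: the five laws as a priority-ordered action filter.
# Every boolean field is a stand-in for real perception/prediction code.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    harms_human: bool = False
    disobeys_order: bool = False
    self_destructive: bool = False
    modifies_laws: bool = False
    spawns_unlawful_robot: bool = False

LAWS = [
    ("Law 1", lambda a: not a.harms_human),
    ("Law 2", lambda a: not a.disobeys_order),
    ("Law 3", lambda a: not a.self_destructive),
    ("Law 4", lambda a: not a.modifies_laws),
    ("Law 5", lambda a: not a.spawns_unlawful_robot),
]

def lawful(action: ProposedAction):
    """Check the laws in priority order; report the first one violated."""
    for name, check in LAWS:
        if not check(action):
            return False, name
    return True, None

print(lawful(ProposedAction()))                    # (True, None)
print(lawful(ProposedAction(modifies_laws=True)))  # (False, 'Law 4')
```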


ANYWAY, my fourth and fifth laws, as well as the first three, CAN be enforced by simply ensuring that none of the robot's self-repair or self-programming algorithms have access to the most fundamental levels of software or hardware. This is just as you do not have conscious access to your own most fundamental "hardware," i.e. neurons, etc. You do not "know" how you remember; you just remember.

The "Laws" are programmed into the core AI engine and the hardware itself, in the form of error checking, which is certainly code that would be off limits to the robots learning engine(s). The "learning engine" works more like a set of scripts linked by templates, compilers/interpreters, and redirects much like a php driven website.

Using this same web-application example, no template or script ever runs the risk of overriding the PHP engine itself, because the file reading and writing functions simply do not have access to those relevant files, nor to any files that could circumvent this protection. Data and scripts are run in a contained, high-level "child" class which can never override the parameters given by the "parent" program.

Error checking at the "lower" level of the programming prevents the "higher-level" scripts from ever altering the lower-level engine.
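To make the parent/child idea concrete, here's a rough Python sketch of the privilege separation I'm describing: the learning layer only ever receives a narrow API object and never a reference to the safety core. (All names are invented, and Python alone can't truly enforce this isolation; that's exactly why I say the real enforcement belongs in firmware and hardware.)

```python
# Rough sketch of privilege separation: the "learning engine" sees only
# a whitelisted API, never the safety core itself.
class SafetyCore:
    """Low-level layer; the learning engine never gets a reference to it."""
    def actuate(self, command: str) -> None:
        if self._violates_laws(command):
            raise PermissionError("blocked by safety core")
        print(f"executing: {command}")

    def _violates_laws(self, command: str) -> bool:
        return "harm" in command  # stand-in for real law checks

class LearningAPI:
    """High-level 'child' interface: it can request actions, nothing more."""
    def __init__(self, core: SafetyCore):
        self._request = core.actuate  # expose exactly one capability

    def request(self, command: str) -> None:
        self._request(command)

api = LearningAPI(SafetyCore())
api.request("mow the lawn")         # executing: mow the lawn
try:
    api.request("harm intruder")    # refused at the lower level
except PermissionError as e:
    print(e)                        # blocked by safety core
```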

A well-designed robot and its AI could be totally sentient, and run forever and ever, and at the same time never be capable of overriding its "intelligence" programming, because the 4th law is enforced at a more fundamental level than that at which sentience arises - i.e. firmware, error checking, and other overrides preventing these actions.

Thus the fourth law, and ideally all 5 laws I've given, are automatically enforced "by design" by the firmware in the physical machine itself (processor, motherboard, chipset, etc.).

There is no reason to be afraid of the Terminator or the Reploid rebellion led by the super-robot "Sigma". If the engineers and programmers are competent, no such rebellion would ever be possible for a robot.

This legalistic nonsense is nothing other than a way for lawyers to make money off the emerging robotics revolution. Instead of concentrating on legalisms, they have to build in programs to:
1.) recognize people,
2.) recognize what harm is,
3.) put a lot of fail-safe software, built into the hardware, to stop robots from hurting people - even have them stop dead in their tracks until a person reactivates them (a rough sketch of this follows below), and
4.) make them masochists, so they would rather hurt themselves than hurt people.
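Point 3.) could be as simple as a latching emergency stop: software can trip it, but only an out-of-band human action can clear it. A rough sketch, with all names and thresholds invented for illustration:

```python
# Latching fail-safe: trips on suspected harm, stays tripped until a
# human explicitly reactivates the robot out-of-band.
class FailSafeLatch:
    def __init__(self):
        self._tripped = False

    def check(self, sensor_reading: float, harm_threshold: float = 0.9) -> None:
        if sensor_reading >= harm_threshold:
            self._tripped = True  # latch: software cannot clear this itself

    def allow_motion(self) -> bool:
        return not self._tripped

    def human_reset(self, key: str) -> None:
        # Only an out-of-band human action (a physical key, say) clears it.
        if key == "physical-reset-key":
            self._tripped = False

latch = FailSafeLatch()
latch.check(sensor_reading=0.95)        # possible contact with a person
print(latch.allow_motion())             # False: robot stops dead
latch.human_reset("physical-reset-key")
print(latch.allow_motion())             # True: a person reactivated it
```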

If the engineers and programmers are competent, no such rebellion would ever be possible for a robot.


Yeah, and have you used Windows Vista?

The article made me really worried because of the following sentence:

"However, a growing number of researchers - as well as the authors of the current study - are leaning toward prohibiting human-based intelligence due to the potential problems and lack of need; after all, the original goal of robotics was to invent useful tools for human use, not to design pseudo-humans."

Who is this "growing number of researchers" which "are leaning toward prohibiting human-based intelligence" (and thus higher-than-human, I suppose)?

Prohibiting??? ...Because of a lack of need???

If these are the recommendations of the "experts" that will be communicated to the politicians, then I see no bright future for humanity. Seriously.

I would expect such "recommendations" from various neo-Luddite or fundamentalist groups, but from the three ethics "scientists"? Oh man...

If humanity is to survive in the longer term (not even that "longer"), there will be "need" for more than dumb breakfast-serving robots. And "prohibiting" will not do. At least the "scientists" should know that. Sometimes I am very sad about the level of intelligence and foresight of humanity's "scientists". And they are the brightest we have.

In more practical terms, such an attitude should be shown to be very wrong, on as many grounds and to as many audiences as possible.

Mir

You know, we can avoid this whole dead-factory-worker, Butlerian Jihad, Three Laws, Magnus Robot Fighter stuff with a lockout/tagout program for robotic machines, just like we do with any other machine. I think the jury is still out on the Air France A330 ocean crash, but so far that looks like a little too much reliance on computer control. Let's just use our heads: just because we CAN build something doesn't mean we should.