The mechanical ethical dilemma

Karolina Wisniewski
Contributor
No longer the stuff of Star Trek episodes, fully autonomous humanoid robots are within tantalizing – or unsettling – reach.
Aldebaran Robotics, a French company established in 2005, recently released an academic edition of Nao, the cute little robot you may have seen dancing or playing soccer on YouTube. The rumored release date for a privately purchasable version of Nao is set for sometime in late 2011, rendering the idea of one day owning a fully programmable robot more realistic than it was a few years ago. One of the most fascinating aspects of Nao’s genesis is the attempt to encode such robots with ethical principles.
The strides in technology this represents may be riveting, but the development of Nao and others like it raises some difficult questions for the discipline of applied ethics. We might ask whether we can really attribute thoughts to humanoid robots, or whether it’s fair to call them truly intelligent.
If the answer to these questions is ‘yes,’ we need to consider what effect this may have on our ideas of knowledge and consciousness, which have thus far only been attributed to living things.
As varied as ideas on ethics may be, ethicist Susan Anderson and her research partner and husband, computer scientist Michael Anderson of the University of Hartford, have managed to encode some universal principles into humanoid machines. The proposed model requires that the robot follow three basic precepts: do good, avoid harm, and be fair. The robot will determine to what degree each of these precepts is satisfied by any given course of action, and, on this basis, will reason towards the “best” decision to make.
For example, a Nao giving a patient medication will insist that the patient take it until the patient refuses. It will then calculate how much harm would be done if the patient does not take the medication, how much good would be done by insisting they take it, and how fair it would be to let the patient make their own choice. This may seem straightforward enough, but difficulties soon arise. Suppose, for example, that a robot is given two conflicting instructions, like making sure the patient takes medication and respecting the patient’s choice to refuse it.
It’s unclear whether the robot is capable of devising some third, presumably preferable, option. Furthermore, on this account, determining the ethical value of any action is, in effect, no different from completing any other input-output function. This potentially puts important decisions in the hands of an ethical calculator, raising the question of whether reducing complicated philosophical issues to arithmetic is acceptable.
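To make the idea of an “ethical calculator” concrete, here is a minimal sketch of the kind of weighted-precept scoring described above. Everything in it – the scores, the weights, and the two candidate actions – is a hypothetical illustration of the approach, not the Andersons’ actual model.

```python
# A minimal sketch of an "ethical calculator" as described above.
# The precept scores and weights here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    do_good: float     # degree to which the action does good, in [-1, 1]
    avoid_harm: float  # degree to which the action avoids harm, in [-1, 1]
    be_fair: float     # degree to which the action is fair, in [-1, 1]

# Hypothetical weights expressing how strongly each precept counts.
WEIGHTS = {"do_good": 1.0, "avoid_harm": 2.0, "be_fair": 1.0}

def ethical_value(action: Action) -> float:
    """Score an action as a weighted sum of precept satisfactions."""
    return (WEIGHTS["do_good"] * action.do_good
            + WEIGHTS["avoid_harm"] * action.avoid_harm
            + WEIGHTS["be_fair"] * action.be_fair)

# The medication scenario: insist the patient take the medication,
# or respect the patient's refusal.
options = [
    Action("insist on medication", do_good=0.8, avoid_harm=0.5, be_fair=-0.6),
    Action("accept the refusal", do_good=-0.3, avoid_harm=-0.4, be_fair=0.9),
]

# The "best" decision is simply the highest-scoring input.
best = max(options, key=ethical_value)
print(f"chosen action: {best.name} (score {ethical_value(best):.2f})")
```

Notice that the “best” action can only ever be one of the options handed to the program; nothing in the arithmetic produces a genuinely new third alternative.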
While Nao may be the newest development in robotics, “intelligent” machines such as computers have been around for decades, and are already a part of our daily lives. Remember, for example, Deep Blue, the computer that defeated world chess champion Garry Kasparov in 1997. As we infuse machines with higher-order functions, like ethical deliberation, the line between machine and mind becomes increasingly blurred.
Humanoid devices, such as Nao, seem to lack the capacity for creativity, long-term planning and emotions, which means one could argue they’re not fully conscious in the way we are. Of course, we may soon be able to devise humanoid robots equipped with these qualities.
Consider, too, that our own minds may go through these same kinds of calculations. Nao may not be capable of the full spectrum of mental activities available to humans just yet, but it possesses the ability to reason, which seems to count for something.
Nao can replicate certain decision-making capacities, but we hesitate to think of it as we think of ourselves. This sheds light on the generally unpopular idea that the brain is just another computer, only more complex and sophisticated than anything humans have created thus far. And yet, something dissuades us from concluding that the only difference between Nao and us is one of degree.
The advent of ethical robotics may be wholly positive, but it may also be another example of our pesky tendency to play God. A human invention can only resemble a thinking person so much before it’s too close for comfort.
If creations like Nao are to function as intended, and occupy as significant a role in our activities as projected, they must be instilled with ethical principles: that much is certain. The ethical questions they raise, however, are still up for debate.