As we interact with people throughout the day, we make hundreds of decisions. In each one, we weigh our choices against what's right and wrong, what's fair and unfair. If we want robots to behave like us, they'll need an understanding of ethics.
As with language, coding ethical behavior is an enormous challenge, mainly because no universally accepted set of ethical principles exists. Different cultures have different rules of conduct and varying systems of laws. Even within cultures, regional differences can affect how people evaluate and measure their actions and the actions of those around them. Trying to write a globally relevant ethics manual that robots could use as a learning tool would be virtually impossible.
With that said, researchers have recently been able to build ethical robots by limiting the scope of the problem. For example, a machine confined to a specific environment -- a kitchen, say, or a patient's room in an assisted living facility -- would have far fewer rules to learn and could achieve reasonable success making ethically sound decisions. To accomplish this, robot engineers feed information about choices considered ethical in selected cases into a machine-learning algorithm. The choices are based on three sliding-scale criteria: how much good an action would result in, how much harm it would prevent and a measure of fairness. The algorithm then outputs an ethical principle the robot can apply as it makes decisions. Using this type of artificial intelligence, your household robot of the future will be able to determine who in the family should do the dishes and who gets to control the TV remote for the night.
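To make the idea concrete, here is a minimal sketch of how a robot might rank candidate actions against the three sliding-scale criteria above. This is an illustration, not the researchers' actual algorithm: the weighted-sum scoring, the `Action` class, the helper names (`ethical_score`, `choose_action`), and the example scores are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    good: float            # how much good the action would result in (0 to 1)
    harm_prevented: float  # how much harm it would prevent (0 to 1)
    fairness: float        # how fair the action is (0 to 1)

def ethical_score(action, weights=(1.0, 1.0, 1.0)):
    """Combine the three criteria into a single score (hypothetical weighting)."""
    w_good, w_harm, w_fair = weights
    return (w_good * action.good
            + w_harm * action.harm_prevented
            + w_fair * action.fairness)

def choose_action(actions, weights=(1.0, 1.0, 1.0)):
    """Return the candidate action with the highest ethical score."""
    return max(actions, key=lambda a: ethical_score(a, weights))

# Hypothetical choices for a household robot deciding dish duty
candidates = [
    Action("assign dishes to whoever cooked", good=0.4, harm_prevented=0.1, fairness=0.2),
    Action("rotate dish duty nightly", good=0.5, harm_prevented=0.2, fairness=0.9),
]
print(choose_action(candidates).name)  # → rotate dish duty nightly
```

A real system would learn the weights (or a more complex decision rule) from the labeled examples the engineers supply, rather than using fixed hand-set values as shown here.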