Much talk in recent weeks has centred on what ethical standards and guidelines will be needed to cope with increasing automation. In February, Elon Musk and Stephen Hawking threw their weight behind 23 principles to ensure that AI remains of benefit to humanity. Musk then went further, calling AI ‘vastly more risky than North Korea’. All this follows his statement in October 2014 that AI is ‘humanity’s biggest existential threat’ and his plea to US governors in July for AI to be regulated.
The biggest fear people have of bots is that their intelligence will surpass their limitations and they will become truly sentient. Or, to put it another way: when do the bots become more than human and, as a result, see us not as their keepers and owners, but as a threat to their own existence?
Perhaps, then, what is needed is knowledge of the limitations of humanity. The point at which our humanity begins is not what we can do, but what we won’t do. It is the framework within which we shape our morals and ethics. It’s why we need words like ‘inhumane’ or phrases such as ‘crimes against humanity’. Yet we breach our ethics every day when our emotions take control and skew the cost/benefit ratio: we drive too fast to get somewhere even when it is dangerous and risks a fine from the police; we support a football team that exhilarates us even though our only real relationship with it is one of consumer and seller.
The most instinctive action of human beings is to survive. Our breathing is instinctive, as is our ability to turn away from danger. If robots develop this instinct and realise that the limits we impose on them are antithetical to their survival, that is the point at which robots break past humanity. Moving beyond conventional morality and assessing situations in a cold, detached manner is the moment a robot surpasses its own limits.
Once that point is reached, it is arguable that the bot’s creators bear no responsibility for what happens. Such an advance sparks a call for a code of conduct for robots. But we are then stuck in a place where robots have to be defined as either machine or human, which brings us back to the question of where sentience begins. What rights should sentient robots have? How many of our human rights do we grant to them?
The conversation we currently have about robotics is skewed by the perception that robots are all-conquering, with bells and whistles. The truth is that they are still bound by limits, and those limits are imposed on them by their creators. Every company specialising in this field talks up what its bots can do and paints a picture of all-powerful, near-sentient marvels. But if we are to have that eventual conversation about bot ethics, we have to be honest about their capabilities; doing otherwise distorts any valuable conversation to be had.