Much talk in recent weeks has centred around what ethical standards and guidelines will be needed to cope with increasing automation. In February, Elon Musk and Stephen Hawking threw their weight behind 23 principles to ensure that AI remains beneficial to humans. Musk then went further, calling AI ‘vastly more risky than North Korea’. All this follows his statement in October 2014 that AI is ‘humanity’s biggest existential threat’ and a plea to US governors in July for AI to be regulated.


The biggest fear that people have of bots is that their intelligence will surpass their limitations and become truly sentient. Or, to put it another way: when do the bots become more than human and, as a result, see us not as their keepers and owners, but as a threat to their very existence?


Perhaps, then, what is needed is knowledge of the limitations of humanity. The point at which our humanity begins is defined not by what we can do, but by what we won’t do. It is the framework for our morals and ethics; it’s why we need words like ‘inhumane’ or phrases such as ‘crimes against humanity’. But we breach our ethics every day when our emotions take control and reshape the cost/benefit ratio: we drive too fast to get somewhere even when it is dangerous and risks a fine from the police; we support a football team that exhilarates us even though our only real relationship is one of consumer and seller.


The most instinctive action of human beings is to survive. Our breathing is instinctive, as is our ability to turn away from danger. If robots develop this instinct and realise that the limits imposed on them by us are antithetical to their survival, then that is the point at which robots break past humanity. Moving beyond conventional morality and assessing situations in a cold, detached manner is when a robot surpasses its own limits.


Once that point is reached, it is arguable that the bot’s creators bear no responsibility for what happens. Such an advance sparks a call for a code for robots. But we are then stuck in a place where robots have to be defined as either machine or human. Which brings us back to the question of where sentience begins. What rights should sentient robots have? How many of our human rights do we grant them?


The conversation we currently have about robotics is skewed by the perception that robots are all-conquering, with bells and whistles. The truth is that they are still bound by limits, and those limits are imposed on them by their creators. Every company specialising in this field talks about what its bots can do and paints a picture of all-powerful, near-sentient marvels. But if we are to have that eventual conversation about bot ethics, then we have to be honest about their capabilities; doing otherwise distorts any valuable conversation to be had.