At the beginning of this month, the Financial Times published an article about the set-up of hybrid systems, in which automated processes and humans work together, and about the different approaches that their designers often take.

The story begins by recounting how a pedestrian was killed by one of Uber's self-driving cars. The accident occurred when the pedestrian was crossing the road and the system was unable to react properly, defaulting to driver control. That driver, local police concluded, was distracted, potentially because they were watching a TV show on their smartphone. The accident may have been unavoidable even if the driver's attention had been fully on the road, given that research from Stanford University says it takes six seconds for a human driver to recover awareness and take back control of the vehicle.

The article outlines three approaches taken by those designing automation. The first, in use in the Uber case, keeps a human as a 'backup' to the automated technology; the second leaves sensitive decisions entirely to the judgment of a flesh-and-blood person; and the third treats the AI as merely an aid to a person, because it cannot handle the task on its own.

In previous posts on this blog, I have written extensively about how robots and humans can work together, taking as a starting point the position that robots are not there to replace humans but will instead help them by taking on the routine, mundane tasks that require little creativity and depend on data. That corresponds to the third approach outlined above. The other approaches have not been tackled here, since they do not fall within the operating parameters of current Retresco technology.

So where in the past I have spoken about how automated technology and humans can come together, it is also important to talk about how robots and humans can be separated.

It is important to keep in mind that separation is not always feasible, because some tasks have potential consequences so serious that human oversight is a necessity. How serious? Look at the first paragraph of this post. This was not the first fatality involving self-driving cars, nor is it likely to be the last. And beyond that article, there are interesting questions as to where fault lies when such incidents occur. But, as Fortune points out in another article, the biggest risk with self-driving cars still comes from humans, not the cars themselves.

There will be a lot of debate over which jobs can be farmed out to automated technology, and whether they should be farmed out totally or only in part, and, if so, how much. But there are a few basic principles we should follow if we want to go down the path of separation.

Firstly, any task not overseen by a human should carry no risk of serious harm. If a machine can do something, that is great; but if the consequences of its actions could be serious and adverse, a rethink is necessary. Likewise, an automated system should not be put in a position where it could create issues of libel. Again, this is a judgment call and one that needs careful and considered thought.
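To make that principle concrete, here is a minimal sketch of a hypothetical risk-gating step in an automation pipeline: tasks whose worst-case consequences are classed as serious are escalated to a human, while everything else may run unattended. The severity classes, function names, and example tasks are invented for illustration and are not part of any Retresco product.

```python
from enum import Enum


class Severity(Enum):
    """Rough classes for the worst-case consequences of an automated action."""
    NEGLIGIBLE = 1   # e.g. a typo in an automated weather summary
    MODERATE = 2     # e.g. a misleading product description
    SERIOUS = 3      # e.g. potential libel or physical harm


def route_task(task_description: str, estimated_severity: Severity) -> str:
    """Decide whether a task may run fully automated or must go to a human.

    Hypothetical rule: any task whose worst-case consequences are SERIOUS
    is escalated to a human reviewer; everything else may be automated.
    """
    if estimated_severity is Severity.SERIOUS:
        return f"Escalate to human review: {task_description}"
    return f"Automate: {task_description}"


if __name__ == "__main__":
    print(route_task("Generate a football match report", Severity.NEGLIGIBLE))
    print(route_task("Publish a claim about a named individual", Severity.SERIOUS))
```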

The key to solving this is a clear and robust development process, planned correctly and with a definite objective. Blind spots should be taken into account at the planning stage, and the possible interpretations of the data should be solid and leave no room for ambiguity. This requires conceptual work, but it helps to prepare the development of such systems for certain contingencies.
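As a purely illustrative example of what "leaving no room for ambiguity" can mean in practice, the sketch below checks an input record before any text would be generated from it, and refuses to proceed when required fields are missing, contradictory, or implausible. The field names and rules are hypothetical, chosen only to show the pattern of validating data up front rather than letting the system guess.

```python
from typing import Optional


def validate_match_record(record: dict) -> Optional[str]:
    """Return an error message if the record is incomplete, contradictory,
    or implausible, or None if it is unambiguous enough to generate text from.

    The fields and rules are invented for illustration only.
    """
    required = ("home_team", "away_team", "home_goals", "away_goals")
    missing = [field for field in required if field not in record]
    if missing:
        return "Incomplete data, missing: " + ", ".join(missing)

    if record["home_team"] == record["away_team"]:
        return "Contradictory data: a team cannot play against itself"

    if record["home_goals"] < 0 or record["away_goals"] < 0:
        return "Implausible data: negative goal counts"

    return None  # safe to hand the record to the text generator


if __name__ == "__main__":
    print(validate_match_record({"home_team": "A", "away_team": "B", "home_goals": 2}))
    print(validate_match_record({"home_team": "A", "away_team": "A",
                                 "home_goals": 1, "away_goals": 1}))
```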

The data so far, however, shows that self-driving vehicles and automated content are still much, much less prone to error than their human-sourced counterparts. But it will pay to be conscientious and realistic about the limitations of what we offer.

For more information, please contact:

Pete Carvill (@pete_carvill)
Communications Manager
+49 (0) 30 555 781 999
peter.carvill@retresco.de


About Retresco

Founded in Berlin in 2008, Retresco has become one of the leading companies in the field of natural language processing (NLP) and machine learning. Retresco develops semantic applications in the areas of content classification and recommendation, as well as highly innovative technology for natural language generation (NLG). Through nearly a decade of deep industry experience, Retresco helps its customers accelerate their digital transformation, increase operational efficiencies, and enhance customer engagement.