A few days ago, the Financial Times' Laura Noonan published an article, 'Commerzbank sets AI to work writing analyst reports', reporting on our work with the German banking giant to use Natural Language Generation (NLG) technology to write basic analyst notes.

As one investment bank head told Noonan, "There's definitely work that can be done, [and] parts of the [research] process that can be enhanced by algorithms and AI tools."

Where we think the article is slightly opaque is on the true effect of automation on human analysts. Just because automated content will take some of the work from analysts, it does not mean that it will take their jobs. Robots may be quicker and more efficient than their flesh-and-blood counterparts, but they still lack fundamental skills that only humans have. This is something that we have touched on repeatedly, especially here, here, and here.


In each of those articles, written over the last year, we outlined what robots could and could not do. Our view, at each step of the way, has been that automation will free people up to do the more creative tasks that only humans can do. Or, as I wrote back in August, "Ultimately, poetry-generating machines fail. Not because they fail to produce poems but because they fail to produce anything outside the narrow lines they are given. We can teach a machine to do something, but we're not yet at the point where the machine can learn it by itself."

The limits of a robot's capabilities should mark the point where a good analyst begins to add quality. Robots are limited by the dataset they work from, meaning that they are blind to outside influences; cannot always find connections or give context; and lack the ability to make complex comparisons or parse the bigger picture.

On the other hand, robots are faster than humans; do not get tired, hungry, or thirsty; work around the clock; and do not make mistakes in calculation. However, they are only as good as the datasets they are fed and the parameters they are designed around, which they cannot yet generate or form themselves (making them reliant on humans). Noonan herself touched on this in April in her longform article 'AI in Banking: The Reality behind the Hype' (we responded to that article, too).


Noonan's report generated a number of comments and responses. Among them were a few interesting thoughts from readers who recognised the value that NLG content could bring to their industry. As one commenter wrote, "AI should be a tool to augment and improve research. If you make it purely about cost reduction, that's a road to poor quality."

They are right. Companies should not be looking to NLG and AI simply to reduce costs (although that can be a welcome benefit). The real value is in supporting quality by handling the mundane tasks that make up the bulk of a workload. That is why the first sentence of that comment rings truest: AI should augment and improve already-existing practices.

Perhaps the most salient response to the article came from someone using the name 'Koba', who wrote: "Much talk of 'replacing' analyst research but such linear thinking rarely produces results. We can see much value in using AI to aggregate and crunch earnings reports. Machines will definitely have an edge over humans in collating and comparing vast numbers of earnings reports in real time, extracting trends and outliers for future analysis. It will be some time however before machines can parse earnings calls and form a view of market psychology. Hopefully, this technology will automate that which should be automated and allow the expensive human analyst to concentrate on judgment calls on an expanded universe of stocks."

'Koba' is right. An NLG product should not be developed with the aim of replacing an analyst. It should instead buttress the work of that analyst.

It seems that many fear automation because they overestimate the current capabilities of NLG and assume that such projects are intended as replacements for humans. This is very much not the case. A good parallel can be drawn with the wearing of spectacles: glasses augment imperfect eyesight, but nobody would pretend that wearing them is a replacement for having eyes.

The truth is that some organisations may seek out NLG as a way to reduce their workforce. But that would be a short-sighted and misguided motivation, a surefire way to harvest quick, short-term profits but sow the seeds of failure. And any company that seeks to replace employees this way will seek to replace those employees by any method. It won't be the NLG that kills jobs, but how it is developed and deployed.

For more information, please contact:

Pete Carvill (@pete_carvill)
Communications Manager
+49 (0)30 555 781 999
peter.carvill@retresco.de


About Retresco

Founded in Berlin in 2008, Retresco has become one of the leading companies in the field of natural language processing (NLP) and machine learning. Retresco develops semantic applications in the areas of content classification and recommendation, as well as highly innovative technology for natural language generation (NLG). Through nearly a decade of deep industry experience, Retresco helps its clients accelerate digital transformation, increase operational efficiencies, and enhance customer engagement.

Sources

© Joshua Sortino, © Alex Knight, © Daniel Cheung, © Ben Kolde via Unsplash
