
Apologetic Intelligence – should bots handle complaints?


By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner

Can Artificial Intelligence be sincere? The ‘A’ in AI probably holds a clue.

It’s not an entirely philosophical question because the BBC is back in its own news, this time by announcing a plan to use AI to handle viewers’ complaints. The £40m Serco contract is raising questions and eyebrows but is there anything fundamentally wrong with using AI for complaints?

At one level, complaints handling is transactional, and AI offers a great way of dealing with correspondence particularly where there’s a lot of it. However, if you anticipate receiving so many complaints that you need AI to process them, what does that say about the quality of the service you are providing? And if a high volume of complaints has simply become a fact of life, perhaps an organisational success measure would be getting them down to a level where you no longer need the technology to handle them for you? Now there’s a trend reversal we don’t often see.

There are other public service providers whose numbers would probably justify the investment in AI for mass processing of complaints – I’m thinking utilities and railways – but what about critical emergency services or criminal conviction review bodies? Would we want police complaints to be handled by AI, for example?

The key question is which part of the process is being automated. Mailroom sorting is only the start and people lodging a complaint usually want something to happen as a result. That ‘something’ often begins with an apology.

Automating the humble apology is an interesting idea. In some ways the UK broadcaster has come to epitomise the Public Service Apology, both in its own utterances and those quoted within its fervent pursuit of balance. Apologies from public bodies have become so formulaically bland they already sound as though they were issued by a robot: “I’m sorry if you were: offended/unhappy/dissatisfied [insert adjective]”. The qualified apology (“we’re sorry this happened, but”) has also become a coded response, replacing evasive classics like ‘mistakes were made’. So why not just cut the human out of the loop entirely, lest they deviate from the script and ad-lib us into deeper trouble?

Comparing the purpose of seeking an apology with the purpose of issuing one can help untangle the knots. Organisationally, issuing an apology may also be transactional – drawing a line, demonstrating action and moving forward – but for the people seeking one, that probably won’t do. Research suggests that when people seek an apology they hope for genuine remorse. We know that proving authenticity poses some of the most challenging problems for biometrics developers, for example around identity, liveness or personhood indicators, but authenticity is not the same as sincerity, is it? Sincerity may be existentially incompatible with AI, like spontaneity, another unprogrammable human feature of flexible complaints resolution. Many organisations have already hunted both to extinction in their corporate complaints frameworks, so does it really matter if we can’t automate them?

Apologetic Intelligence presents an engineering challenge but in operational terms saying sorry is a distinctly human thing. It’s different from a ‘thank you’ which is often just a reflexive politeness, already a digitised commodity within correspondence. I’m told the best apologies ask for forgiveness – can you forgive AI? I don’t know, but the installation of AI Jesus in Lucerne suggests some believe it can run in the other direction.

Beyond apologies, the other outcomes people want from complaints processes are explanation, action and assurance. This is why litigation is often disappointing compared with alternatives such as mediation: courts can’t usually compel an apology, an explanation or an undertaking covering future cases, but you can include almost anything in a mediated outcome (which some of my fellow Weinstein International Mediation Fellows are exploring this week).

Effective grievance policies connect all these features – from contrition to reparation – with the author of the action complained of. Increasingly our complaints will be about the AI and something it did, for example misidentifying us, getting our age wrong or denying us access to our bank account. If the bot’s to blame, that’s surely the right source of an apology and remedy. In such cases the organisation can tell the AI to issue a meaningful apology – “Alexa, make it real” – but isn’t that like when your parents made you say sorry to someone else irrespective of the rights and wrongs? (or was it just mine that did that?). Perhaps the AI could use its predictive genius and issue some of its apologies before it gets the thing wrong and avoid the log jam altogether.

Some who complain (e.g. lawyers) may just want acknowledgment and compensation – for them the answer is uncomplicated. For the rest, the proper role of AI in complaints handling raises profound questions about human fragility and ethics. All this is probably tied up with guilt, penitence and other murky aspects of the ego, but soon we will be using AI to make our complaints for us as well as to manage them. Once we start doing that, some of our complaining will become a good example of the ‘magnificently futile conflict’ that Alan Watts identified as defining human history. Perhaps AI will help us evolve.

Until then, we live in a world where even the most glaring miscarriages of justice and cast-iron foul ups don’t make it above “regrettable”, so the choice may be either to fully automate complaints systems which are already partly artificial or revise our approach and make dispute resolution human again. And if in reading this article you took any offence, I’m sincerely sorry.

About the author

Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.
