“Without appropriate oversight, controls and change governance, technologies like RPA, machine learning and AI could become a source of reputational and operational risk.”

When we think of digital technologies, the risks envisioned alongside them are often those posed to human jobs, and not necessarily those posed to organizations and their customers. A 2016 article in The Guardian cited a research report noting that, by 2021, “… AI/cognitive technology will displace jobs, with the biggest impact felt in transportation, logistics, customer service and consumer services.”1 While numerous reports highlight the risks these technologies pose to human workers, they rarely cover the other risks that come with implementing these new tools.

Technologies such as robotic process automation (RPA) can automate manual processes, and machine learning algorithms can help us better harness and use data to make intelligent decisions. Previously manual processes, such as calculating payments to customers or determining whether a customer will be approved for a loan, can now be automated, bringing significant savings—but also unique risks.

Martin Wheatley, former chief executive of the UK’s Financial Conduct Authority, outlined a vision in which “self-improving artificial intelligence” could help mitigate the risks posed to customers by mis-advice or human error.2 Though these technologies can mitigate a range of risks, organizations should also consider the risks posed by digital technologies themselves. Without appropriate oversight, controls and change governance, technologies like RPA, machine learning and artificial intelligence (AI) could become a source of reputational and operational risk.

For an example of new technologies creating operational risk, consider the recent case of an AI chatbot that Twitter Inc. users taught to disseminate offensive messages, leading to reputational damage and embarrassment for the chatbot’s owner.3 The chatbot example garnered a lot of public attention, but its impact was low compared with the potential damage an RPA solution or a machine learning/AI algorithm could cause.

Imagine, for example, a “robo adviser” algorithm that recommends suitable investments to a customer based on their risk appetite. Although an algorithmic solution can be built to specified requirements that fit most customers, there is always a risk that certain customers’ scenarios are not handled correctly, and the algorithm could mis-sell to those customers in error.
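As a simplified illustration of how such an edge case can arise, consider a rule-based suitability check. The product names, thresholds and the recommend_product function below are hypothetical, not a real robo-adviser implementation; the point is that any customer whose circumstances fall outside the coded rules should be routed to a safe outcome, such as referral to a human adviser, rather than a default recommendation.

    # Hypothetical, simplified suitability rule: shows how a customer scenario
    # the rules do not cover can fall through to an unsuitable default
    # unless edge cases are explicitly routed to a human adviser.
    def recommend_product(risk_appetite: str, horizon_years: int) -> str:
        """Map a customer's risk appetite and investment horizon to a product category."""
        if risk_appetite == "low":
            return "cash_savings"
        if risk_appetite == "medium" and horizon_years >= 5:
            return "balanced_fund"
        if risk_appetite == "high" and horizon_years >= 5:
            return "equity_fund"
        # Edge case (e.g., high risk appetite but a short horizon): returning a
        # default product here risks mis-selling, so refer the case to a person.
        return "refer_to_human_adviser"

    print(recommend_product("high", 2))  # -> refer_to_human_adviser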

If, as we’ve seen, new technology can bring risks beyond those posed to human jobs, how can financial firms address and resolve those risks?

Managing digital technology risk

Digital technologies like RPA, machine learning and AI can deliver benefits that reduce both operational risks and costs within financial services. But these technologies require oversight, control and change governance to help mitigate potential regulatory and operational risks.

Below is a road map for mitigating the risks associated with digital technologies such as RPA and machine learning:

1. Define your digital target operating model.

Establish the appropriate operating model to manage digital technology governance and risk.

    • Determine the owners who oversee specific technology streams, and align new technologies to existing change or technology frameworks.
    • Be sure these technologies are integrated into the organization’s wider enterprise architecture, so unrelated changes do not unwittingly impact automated processes.
    • Clearly outline roles and responsibilities.

2. Agree on upfront and ongoing assurance.

Agree on the appropriate level of upfront and ongoing assurance so that automated processes and algorithmic decision-making perform to targets.

    • Thoroughly test technologies against business, technical and regulatory requirements.
    • Document automated processes thoroughly, with clear rules that can be evidenced to internal or external parties as required.
    • Determine the degree of ongoing oversight required, e.g., regular business testing so that niche issues do not impact customers.
    • Outline the level of audit trail and data management required, and maintain it in accessible data sources to support root-cause analysis (a minimal logging sketch follows this list).
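As a minimal sketch of that audit-trail point (the field names, the rule-version label and the decision_log.jsonl file are assumptions, not a prescribed schema), each automated decision can be appended to a log with its inputs, the rule or model version and a timestamp, so outcomes can be evidenced to internal or external parties and traced during root-cause analysis:

    # Minimal audit-trail sketch: append every automated decision, with its
    # inputs and the version of the rules that produced it, to a log file.
    import json
    from datetime import datetime, timezone

    def log_decision(case_id, inputs, rule_version, outcome, path="decision_log.jsonl"):
        record = {
            "case_id": case_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "rule_version": rule_version,
            "inputs": inputs,
            "outcome": outcome,
        }
        with open(path, "a") as log_file:
            log_file.write(json.dumps(record) + "\n")

    log_decision("case-001",
                 {"risk_appetite": "medium", "horizon_years": 7},
                 rule_version="suitability-rules-v1.2",
                 outcome="balanced_fund")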

3. Grow knowledge and manage change.

Ensure change management functions understand digital technologies, and verify that changes are appropriately impact-assessed.

    • Align change management functions to specific technologies to build up “expert” knowledge, e.g., automation working groups.
    • Retain subject matter specialists (SMSs) to prevent knowledge loss when process updates or regulatory engagement is required.
    • Check that digital technologies are reviewed as part of any impact assessment for large business change, supporting regulatory compliance and alignment between manual and automated processes.
    • Perform regression testing on any changes to maintain the integrity of the updated solution (a simple example follows this list).
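A lightweight illustration of that regression check (the approved cases below are hypothetical, and the commented-out usage reuses the hypothetical recommend_product function from the robo-adviser sketch above): before a change is released, the updated logic is re-run against previously approved cases and any divergence is flagged for investigation.

    # Regression-test sketch: re-run updated decision logic against a bank of
    # previously approved cases and flag any change in outcome.
    APPROVED_CASES = [
        ({"risk_appetite": "low", "horizon_years": 1}, "cash_savings"),
        ({"risk_appetite": "medium", "horizon_years": 7}, "balanced_fund"),
        ({"risk_appetite": "high", "horizon_years": 2}, "refer_to_human_adviser"),
    ]

    def run_regression(decision_fn):
        """Return the cases where the updated logic disagrees with the approved outcome."""
        failures = []
        for inputs, expected in APPROVED_CASES:
            actual = decision_fn(inputs["risk_appetite"], inputs["horizon_years"])
            if actual != expected:
                failures.append((inputs, expected, actual))
        return failures

    # failures = run_regression(recommend_product)  # from the earlier sketch
    # assert not failures, f"Regression failures: {failures}"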

4. Learn and adapt.

Leverage digital technologies to improve the efficiency of ongoing assurance activities.

    • Actively monitor and manage digital technologies such as robotic process automation, and use data-driven monitoring and analytics to manage risk.
    • Reduce the need for human intervention by delegating some quality assurance tasks to RPA bots, with humans used for more complex checks (a simplified sketch follows this list).
    • Handle data appropriately so that regulatory compliance can be evidenced and any potential customer impacts can be sized.
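A simplified sketch of that division of labour (the tolerance, field names and is_complex flag below are assumptions): a bot re-checks straightforward cases automatically, and only complex cases or discrepancies are escalated for human review.

    # Bot-assisted quality assurance sketch: simple cases are re-checked
    # automatically; complex cases or discrepancies go to a human reviewer.
    def qa_check(case, tolerance=0.01):
        if case["is_complex"]:
            return "escalate_to_human"
        discrepancy = abs(case["calculated_payment"] - case["expected_payment"])
        return "pass" if discrepancy <= tolerance else "escalate_to_human"

    sample_cases = [
        {"id": "A1", "is_complex": False, "calculated_payment": 100.00, "expected_payment": 100.00},
        {"id": "A2", "is_complex": False, "calculated_payment": 100.00, "expected_payment": 99.50},
        {"id": "A3", "is_complex": True,  "calculated_payment": 250.00, "expected_payment": 250.00},
    ]
    for case in sample_cases:
        print(case["id"], qa_check(case))  # A1 pass, A2 escalate, A3 escalate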

Properly overseeing the risks associated with RPA, machine learning and AI helps financial services firms increase the potential benefits when they implement and maintain these technologies. Conversely, a less stringent approach could prove costly: it could impact customers or lead to regulatory violations, particularly given the changes coming as part of the General Data Protection Regulation (GDPR).


References

  1. “Robots will eliminate 6% of all US jobs by 2021, report says,” The Guardian, September 14, 2016. Access at: https://www.theguardian.com/technology/2016/sep/13/artificial-intelligence-robots-threat-jobs-forrester-report
  2. “The answer to mis-selling? Let a robot pick your investments,” The Telegraph, May 29, 2014. Access at: http://www.telegraph.co.uk/finance/personalfinance/investing/10863305/The-answer-to-mis-selling-Let-a-robot-pick-your-investments.html
  3. “Microsoft ‘deeply sorry’ for racist and sexist tweets by AI chatbot,” The Guardian, March 26, 2016. Access at: https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot
