Will the AI explain itself?
Though artificial intelligence has existed for decades, it remained on the fringes for a long time, with few game-changing applications until today. The Covid-19 pandemic, however, has pushed the world towards embracing AI in as many ways as possible to compensate for unprecedented restrictions. AI-based social distancing detection tools and healthcare applications have benefited people across the world, with uses ranging from remote diagnosis to telemedicine and counseling. In manufacturing, AI-enabled automation and material movement have helped compensate for reduced worker density and maintain productivity levels.
As remote working and contactless transactions have become the norm rather than the exception, the banking and financial services sector, which has always pioneered the adoption of AI, went full swing on AI-enabled chatbots and virtual assistants for customer-facing functions. In fact, it was credit and risk analysis that saw the strongest impact of AI usage.
Decisions good and bad
While AI has already been powering credit decisioning systems, the pandemic has put its capabilities through the litmus test. The moot point: how effective has AI been in sorting the good borrowers from the bad? When banks were inundated with moratorium demands, even from good borrowers, they saw their machine learning models fall short, leaving them struggling with their decision making. Bankers realized that simple variable models could no longer make the cut, as effective decisions needed clarity on economic indicators such as unemployment rates and GDP, medical forecasts such as the transmission intensity of the pandemic, and other low-probability, high-impact events, known as black swan events.
Beyond all this, customer behavior patterns gathered during crisis events have offered the banking and financial sector good insights. During the 2008 financial crisis in the US, borrowers prioritized auto and credit card payments over home loan repayments. Interestingly, during the current pandemic-led economic crisis, people who were largely homebound prioritized their home mortgages over credit card or auto loan repayments.
Back home in India, the impact of AI in the financial sector has been a mixed bag, owing to a lack of understanding of how AI models work, coupled with a dearth of skilled data scientists. Despite that, AI investments, which averaged $200 million a year between 2014 and 2017, climbed to $400 million in 2018 before dropping steeply in 2019.
Deep down, the pressing issue with AI in fintech has been credibility and trust. Are the machines as good as we humans, if not better? That is what explainable AI (xAI) is all about. Built to engineer transparency into the model, xAI offers valuable insights into how decisions are made, supplying the context that is the bedrock of intelligence. In doing so, xAI addresses the major pitfall of today's deep neural networks (DNNs): their inability to explain their outputs. The field took another step forward when Google announced a new set of xAI tools for developers in 2019.
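To make the idea concrete, here is a minimal sketch in Python of the kind of per-decision breakdown xAI tooling aims to deliver. The data is entirely synthetic and the feature names are hypothetical, not drawn from any real scoring system; a linear model is used purely because its explanation is exact (each feature's contribution to the log-odds is its coefficient times its value), whereas xAI tools aim to produce comparable attributions for opaque models such as DNNs.

```python
# A minimal, illustrative sketch of explaining a single credit decision.
# Features, data, and labels below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_to_income", "history_months", "late_payments"]
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
# Synthetic rule: high debt-to-income and late payments drive defaults.
y = (X[:, 1] + 0.7 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[:1]
# With a linear model, each feature's log-odds contribution is exact:
contributions = model.coef_[0] * applicant[0]
print(f"P(default) = {model.predict_proba(applicant)[0, 1]:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>16}: {c:+.3f}")
```

The point is not the model but the output: a declined applicant can be shown which factors drove the decision, which is exactly the context a bank needs to defend a credit call.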
So, what is the problem?
The problem with AI decisions is that it is tough to verify whether the outputs are correct, and even tougher to troubleshoot where they went wrong. On top of that, creator biases and prejudices creep in, sometimes with devastating consequences. For example, in 2018, Amazon's experimental recruiting AI, trained on a decade's worth of résumés, was found to favour male candidates over female ones, revealing how society's gender bias had crept into the AI's veins. In US courtrooms, where computationally calculated risk assessments are common, algorithm-led convictions posed problems of bias, as they could not be challenged. Financial institutions also face the risk of lawsuits over robo-advisory and automated portfolio management, which could wipe out hard-earned wealth with one wrong stroke.
Above all, legislation is another hill to climb. In Europe, the General Data Protection Regulation (GDPR) makes machine learning and AI-based decision making answerable to a 'right to explanation'.
Meanwhile, most Indian banks have only come as far as embedding AI in their chatbots (hardly real AI), so xAI is still some years away. A 2019 PwC survey of over 1,000 CXOs in India revealed that only 10 percent of Indian CEOs were confident about the reliability of their AI applications, though over 60 percent had already adopted AI in some form. Interesting! Also, about 67 percent of the surveyed organizations were unsure of the regulatory compliance requirements around AI.
Also, traction is yet to be seen in two large problem areas: credit risk and fraud/cybercrime. A common problem in today's fraud detection solutions is false positives, which mark legitimate transactions as suspicious, leading to payment stoppages or even frozen accounts. Oops. The toy example below shows why this happens so easily.
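The root cause is the base rate: fraud is rare, so even a model that is quite accurate on legitimate transactions will flag far more innocent customers than actual fraudsters. The numbers and the decision threshold in this sketch are assumptions for illustration, not taken from any real system.

```python
# A toy illustration of why fraud models generate so many false positives.
# All figures are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
is_fraud = rng.random(n) < 0.001  # assume ~0.1% of transactions are fraud

# Hypothetical model scores: fraud scores higher on average,
# but the two distributions overlap.
scores = np.where(is_fraud,
                  rng.normal(0.8, 0.15, n),
                  rng.normal(0.3, 0.15, n))

threshold = 0.6                   # assumed decision cut-off
flagged = scores > threshold
caught = int(np.sum(flagged & is_fraud))
false_positives = int(np.sum(flagged & ~is_fraud))
print(f"transactions flagged:        {int(flagged.sum())}")
print(f"genuine fraud caught:        {caught}")
print(f"legitimate customers blocked: {false_positives}")
```

With these assumed numbers, the model catches most of the roughly 100 real frauds, yet blocks on the order of two thousand legitimate customers, because a small error rate applied to a huge legitimate volume swamps the rare fraud cases.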
The silver linings
However, green shoots are visible in the realm of credit decisioning, which has always reeled from human error. Global banking software company Temenos developed an xAI-based model that has proven over 25 percent more accurate than the risk scores provided by leading credit bureaus. This has helped banks increase their approval rates by 20 percent while keeping defaulters under control. US-based digital lending company ZestFinance rebranded itself as Zest AI in 2019, making its principal business crystal clear. Zest's clients reportedly achieved a 15 percent increase in loan approval rates and a 30 percent reduction in credit losses, with no added risk.
The future of AI is indeed interesting and steeped in mystery. Whether human brains can keep their inherent biases and shortcomings from creeping into AI models, for the larger good, remains the million-dollar question. As they say, the best, or the worst, is yet to come. Let's hope for the best, as always!