
The need to nurture: why AI isn’t a silver bullet

By Alix Melchy, Jumio’s VP, AI

Until recently, the terms ‘AI’ and ‘algorithm’ were largely confined to conversations among those interested in technology. It may not be widely understood that algorithms are what actually keep us scrolling on social media platforms or refine our queries on search engines, yet suddenly these words have been thrown into the media spotlight and mainstream conversation.

At the end of August, we saw schools and colleges in the UK grappling with how to accurately and fairly award students their A-Level results after the COVID-19 pandemic forced schools to close their doors earlier in the year. With university places and further education plans hinging on these results, it was a monumentally important task to get right.

But on results day, students from certain communities were disproportionately and negatively impacted, while the scores of other students were inflated, all as a result of the algorithm implemented by Ofqual. Naturally, with the academic futures of thousands of students threatened, the public outcry was massive and it wasn’t long before the algorithm that underpinned the decision-making process came under fire.

While the results were reversed and students were awarded their predicted grades, the fiasco brought to light very real misconceptions about AI and algorithms, and showed that businesses planning to use this technology need to consider some very important elements.

Start with why

It’s a misconception that AI can simply solve everything. It isn’t a silver bullet that automatically makes tasks better and faster. Businesses implementing AI need to start by focusing on the question they’re trying to answer and the exact problem they’re trying to solve. Like any tool, however powerful, AI does not remove the need to clearly articulate the problem statement and identify the acceptance criteria. By starting from this point, organisations can return to the initial objective time and time again throughout the project to ensure the work still aligns with the original goal.

An AI model is only as good as the data that underpins it

Another important factor to consider is the data that will underpin your AI model. Do you have enough of it, and is it representative of the world? Algorithms are data-hungry, and that data needs to be well stratified. It’s absolutely vital that the data represents society fairly so that the model doesn’t reproduce historical biases, as happened with Ofqual’s algorithm. A note of caution here: it’s possible to buy datasets to speed up building your AI model, but it’s important to ensure any purchased data meets these same criteria.
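One simple way to check whether a dataset is well stratified is to compare each group’s share of the data against its share of a reference population. The sketch below illustrates the idea; the group labels and population shares are entirely hypothetical.

```python
from collections import Counter

def representation_gap(labels, reference_shares):
    """Difference between each group's share of the dataset and its
    share of a reference population (illustrative sketch only).
    Positive gap = over-represented, negative = under-represented."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - ref_share
            for group, ref_share in reference_shares.items()}

# Hypothetical example: a dataset heavily skewed toward group "A"
sample_groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.50, "B": 0.30, "C": 0.20}
gaps = representation_gap(sample_groups, population)
```

A large positive or negative gap for any group is a signal to rebalance the data before training, rather than after the model has already learned the skew.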

Practice makes perfect

Businesses must also build a pilot testing phase into their AI practices, both to ensure the algorithm works as expected and to better understand why it makes a given decision. Running a test in the early stages, before the algorithm faces a real-world scenario, allows feasibility, duration, cost and adverse events to be assessed. In Ofqual’s case this was not done: the validity of the data was compromised, and the algorithm consequently produced the wrong answers. In addition, AI still requires a human element. Every algorithm has a set of limitations, and the human eye is still needed to understand how the AI is working, to train it, and ultimately to confirm it behaves as expected.
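A pilot phase of this kind often includes breaking accuracy down by demographic group, so disparities surface before deployment rather than after. The sketch below assumes labelled pilot data with a group tag per record; all values are hypothetical.

```python
def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by group, so a pilot run can surface
    disparities before deployment (illustrative sketch only)."""
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical pilot results for two groups, "x" and "y"
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["x", "x", "x", "y", "y", "y"]
accuracy = per_group_accuracy(y_true, y_pred, groups)
```

If one group scores noticeably worse than the others, that is exactly the kind of adverse event a pilot is meant to catch, and a cue to revisit the data or the model before going live.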

Alix Melchy

Ensuring ethical AI

In recent months, we’ve seen a far greater focus placed on the ethics of AI and this should be at the forefront of every organisation’s mind. There are some key areas to consider when it comes to ensuring AI is ethical:

  • Diversity and representativeness: both the data and the practitioners working on AI models need to represent society so that the models do not reinforce existing biases
  • Transparency and trust building: articulating in plain terms what the model is about and how it makes a decision
  • Consent of usage: ensuring that the data used to train a model has been acquired with the proper consent
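One lightweight way to make these three commitments concrete is a “model card”-style summary that travels with the model. The fields below are purely illustrative, not a formal standard.

```python
# A minimal model-card sketch covering the three points above:
# representativeness, transparency, and consent (field names are
# illustrative assumptions, not an established schema).
model_card = {
    "purpose": "Match an ID document photo to a corresponding selfie",
    "training_data": {
        "source": "internally collected samples",
        "consent_obtained": True,
        "demographic_coverage": ["age", "gender", "skin tone", "region"],
    },
    "decision_logic": (
        "similarity score between face embeddings, "
        "escalated to a human reviewer below a set threshold"
    ),
    "known_limitations": ["lower accuracy on low-light captures"],
}
```

Writing the summary in plain terms forces the team to articulate what the model is for and how it decides, which is the essence of the transparency point above.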

Looking to other industries where AI is being leveraged, such as the document-centric identity proofing space, in which documents (such as a passport) are matched with a corresponding selfie to connect real-world and digital identities, proving that AI is used ethically is becoming crucial. Gartner predicts that by 2022, more than 95% of RFPs for document-centric identity proofing will contain clear requirements for minimising demographic bias, up from fewer than 15% today.

It’s clear that AI has the power to transform many business functions, but as the saying goes, a bad workman blames his tools. Before embarking on any AI project, organisations need to consider these four crucial points because, put simply, an algorithm inherits the flaws of whatever it is built upon. By following this guidance, businesses can start their AI projects on the right foot and pave the way for successful use cases, avoiding the pitfalls we saw in the education space earlier this year.

Leveraging advanced technology including AI, biometrics, machine learning, liveness detection and automation, Jumio helps organisations fight fraud and financial crime, onboard good customers faster and meet regulatory compliance including KYC, AML and GDPR. Jumio has verified more than 250 million identities issued by over 200 countries and territories from real-time web and mobile transactions. Jumio’s solutions are used by leading companies in the financial services, sharing economy, digital currency, retail, travel and online gaming sectors.