
By Erin Nicholson, Global Head of Data Protection and Privacy at Thoughtworks


The UK’s AI journey hinges on public trust. A year after the AI Safety Summit at Bletchley Park, the UK has doubled down on its AI delivery promises with a £100 million investment aimed at AI regulation research and development. The government’s goal is to propel sector-specific research hubs, upskill regulators, and solidify the UK’s position as a leader in responsible AI development.

Despite mixed public perception of AI – Thoughtworks research finds that 42% of individuals feel both excitement and nervousness about it – a key question emerges: can this investment bridge the public trust gap surrounding this powerful technology?

While the potential benefits are vast, anxieties linger around data privacy, ethics, and potential job displacement. This is where transparency and responsible action become crucial, not just for ethical reasons, but as a strategic advantage for businesses.

Bridging the gap: The role of meaningful transparency

The investment is exciting, but it also fuels concerns about data misuse and privacy violations. With the rise of data breaches and privacy scandals, consumers are increasingly aware of the data being collected about them and concerned about how it is used. Yet an organisation has, on average, only 90 seconds to get its point across in a privacy notice – a great deal of information to convey in very little time.

People want to know what information is being collected, how it’s being used, and who has access to it. Consumers are asking for robust safeguards, and rightly so: according to Thoughtworks research, 82% of UK consumers demand transparency and equity from businesses using AI, while data security is paramount for 91%. This isn’t just a moral imperative or a legislative requirement; it’s a market reality.

For businesses, overlooking these demands is short-sighted. A lack of meaningful transparency erodes consumer loyalty – 82% of respondents favour businesses that clearly communicate their AI governance. Companies prioritising ethical AI – ensuring fairness and fostering open dialogue – differentiate themselves in the market.

In short, transparency, security, and trust are the golden tickets to navigating the GenAI landscape. After all, with great power comes great responsibility, and with GenAI, that responsibility starts with building trust with the very people we serve.

Consumers demand transparency and regulation

Transparency unlocks opportunity, and opportunity builds a sustainable future. But merely disclosing AI use is insufficient: consumers who understand your commitment to transparent, ethical GenAI use are 31% more likely to trust you. That’s a significant jump, and it signals a growing appetite for responsible innovation.

Simply stating you use AI isn’t enough – taking concrete actions regarding data privacy and ethical implications fosters brand loyalty and positions you as a leader in this crucial conversation.

This leadership translates into several advantages. Future regulations loom, and proactive alignment minimises compliance risks. More importantly, you build trust with stakeholders, demonstrating your commitment to long-term, sustainable AI implementation. This is an investment in building a future-proof organisation on a foundation of trust and ethical practices.

Navigating the landscape for UK businesses

In a world increasingly wary of technology’s impact, transparency isn’t just a regulatory hurdle; it’s a powerful shield against scepticism. Getting it right means conveying your practices in understandable language to an audience whose attention is held by privacy or transparency notices for only a fleeting moment. Short statements that simply assure consumers your organisation “takes their privacy seriously”, or pages of privacy notice that bury the important information, will turn consumers away.

It’s no surprise then that 68% of consumers demand transparency on data privacy compliance. By embracing meaningful transparency, we pave the way for ethical innovation with GenAI, building trust and ensuring this technology serves as a tool for progress, not a source of fear.

Remember, transparency fosters enhanced brand loyalty, market advantage, and even reduced compliance risks. In short, it’s a win-win. 

But where do you begin? Here are two key steps to remember:

  1. Stay informed: Leverage the wealth of resources available. Research hubs and upskilled regulators offer valuable insights into best practices and emerging standards. Knowledge is power, and it empowers you to lead the way in responsible AI development.
  2. Lead by example: Don’t just talk the talk, walk the walk. Actively advocate for accountable AI and data privacy practices. By shaping the future of the industry in a positive way, you attract valuable partnerships and opportunities that share your commitment to transparent AI.

The potential of AI is undeniable, yet anxieties persist – as the data shows. So, can this £100 million be the bridge over the trust gap? The jury’s still out, but the investment does serve as a testament to the importance of responsible AI.

The investment acknowledges the public’s apprehension. Building trust through transparency, data security, and ethical action – not just research hubs – is crucial. 

Businesses that are transparent about their use of AI and data collection can gain a competitive advantage. Consumers are more likely to trust and support businesses that are open and honest about their practices.