
Battle of the bots: navigating the AI landscape


AI is the buzzword on everyone’s lips – but should businesses be concerned about the safety of jumping on the craze and adopting AI software?

Rob Cottrill, Technology Director at ANS, discusses the changing role of AI and why we should harness the power of generative AI to fight against threat actors who use AI maliciously. 

AI has developed rapidly over the last few years. With such fast-paced technological advances, and new uses and capabilities emerging all the time, there must be a focus on using AI responsibly. Take the last year alone – ChatGPT has shifted from an unknown to an application widely used by organisations and cybercriminals alike.

Why should enterprises care about AI?

It’s clear that more enterprises are now recognising the benefits AI can bring. A study by Adobe highlighted that whilst only 15% of enterprises are using AI, a further 31% say it’s on their agenda in the next 12 months. These enterprises are recognising that AI can provide a multitude of benefits, from boosting productivity to improving resilience against cyber threats.

CEOs and CIOs are aware of the benefits AI can bring to improve performance. We’re seeing rapid advancements in AI, through the likes of software such as ChatGPT, that businesses can use to improve their efficiency and to speed up, or even eliminate, menial tasks.

AI can also enable the modernisation of legacy systems. By harnessing AI capabilities, legacy systems, which often include outdated methodologies and technologies, can be systematically reinvigorated to achieve greater efficiency, scalability and security. This empowers older systems to overcome their past constraints, allowing them to smartly manage how resources are used, foresee potential operational issues, and simplify intricate workflows with exceptional accuracy.

A study by Forbes highlighted that over half of business owners use artificial intelligence for cyber security and fraud management. So, we know that whilst AI can be used in cyber crimes, such as AI-powered password cracking and creating sophisticated scams, it can also be used to fend off such attacks through cyber threat hunting solutions to detect and block potential scams and attacks. 

Changing role of AI

It’s no secret that AI is constantly changing and being increasingly adopted, with global AI adoption by organisations set to expand at a CAGR of 38.1% between 2022 and 2030. It’s a continual evolution of understanding the bounds of AI’s capabilities and determining what it is and isn’t appropriate for organisations to use it for. If we look at how AI was implemented to provide tailored recommendations to customers, we can see that it helped to enhance the user experience and improve sales.

A recent study by Forbes highlighted that 40% of consumers believe that AI improves the customer experience. However, as organisations increasingly adopted it, concerns were raised over AI violating user privacy by collecting their information.

Consider the rapid advancement of ChatGPT as an example. A year ago the platform didn’t even exist; six months ago there were major concerns over its use. Now ChatGPT is often welcomed by organisations, which encourage their employees to use the AI platform to complete tasks.

However, whilst ChatGPT has started to be positively adopted by organisations, it has also started to be used in malicious ways by cybercriminals. One example of this is utilising the software to create more convincing phishing emails, analysing data to make them more personalised and imitating writing styles so they seem more legitimate.

It’s evident that businesses are seeing the potential for AI to reshape the industry and add value, but the questions they need to ask are how the value will be added and how long it will take. There’s a fight between moral and dark uses of AI, in which we’re seeing that AI can be used to businesses’ advantage whilst simultaneously being a threat.


Battle of AI

Whilst we’re seeing a rapid adoption of AI within organisations, we’re also seeing increased concerns over its use. The ethics surrounding AI remain a contentious subject. It’s not just hackers, but organisations too, who can exploit data using AI, with users having little to no knowledge of how their personal information is being used. The Facebook–Cambridge Analytica scandal is a perfect example of this, with the consultancy firm harvesting data from millions of Facebook profiles without user consent. More recently, Zoom was met with controversy when it was revealed that it may be using customer data to train AI. The threats of using AI therefore come not just from menacing cyber criminals, but also from organisations themselves. Whilst there are no, or limited, AI laws or regulations in place, this unethical use of AI may continue.

We also know that attacks are becoming increasingly complex, which AI has contributed to. Hackers are now able to use AI to identify specific weak points by analysing data quickly and spotting patterns humans may not be able to see. The speed of AI is therefore a concern, allowing threat actors to constantly keep ahead of the curve and deploy cyber attacks rapidly.

Whilst AI can be used in cyber attacks, the use and impact of AI is often undisclosed so it’s difficult to gauge the true extent of how cyber criminals are using AI in these attacks. This gives hackers a greater edge to do more, as their methods remain hidden to most organisations, making them a much greater risk.

It’s important to state, though, that whilst AI can be used maliciously, it can also be used for good. AI can fight against AI to protect organisations; we call this the battle of the bots.

However, AI can also fight against these cyber attacks. AI tools can identify risks in AI systems and avoid vulnerabilities and weaknesses, and they are often better able to identify potential cyber attacks than humans. We’re seeing an increasing number of organisations use software to protect themselves from cyber attacks. GitHub’s CodeQL can scan code for potential security risks, while AWS’ CodeGuru Reviewer uses AI to find security vulnerabilities in code. Organisations using these tools have the potential to build safer and more secure AI systems, fending off cyber attacks.
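To give a flavour of what such scanning tools do, here is a deliberately simple sketch in Python. It is purely illustrative: real tools such as CodeQL and CodeGuru Reviewer use semantic and dataflow analysis, not the naive pattern matching shown here, and the rules below are hypothetical examples rather than any tool's actual checks.

```python
import re

# Naive, illustrative rules only; production scanners analyse how data
# actually flows through a program rather than matching text patterns.
RULES = [
    (r"\beval\(", "use of eval() on potentially untrusted input"),
    (r"\bos\.system\(", "shell command execution"),
    (r"password\s*=\s*[\"'][^\"']+[\"']", "possible hardcoded credential"),
]

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for lines matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

snippet = 'password = "hunter2"\nos.system(user_input)\n'
for lineno, message in scan(snippet):
    print(f"line {lineno}: {message}")
# prints:
# line 1: possible hardcoded credential
# line 2: shell command execution
```

Even this toy version shows the principle the article describes: software reviewing software, flagging weaknesses faster and more consistently than manual review alone.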

The best defence against AI is making it open source. By making AI open source, it becomes accessible for scrutiny by all stakeholders. The effectiveness of safeguarding against vulnerabilities hinges on the pace at which they are identified and fixed before they can be exploited. In essence, the transparency afforded by open-source practices empowers businesses to collectively address AI-related challenges through collaborative innovation and timely mitigation.

Hype vs. reality

In the current landscape, businesses are focused on the hype surrounding AI. Organisations often perceive AI as a solution that can address any and all of their problems, from data analysis to process automation to boosting productivity and performance. However, this broad perception can overshadow the nuances of individual businesses and markets. Companies therefore hinder their own capabilities by not optimising AI for their own environments. It’s imperative for organisations to evaluate and redefine how they use AI, channelling its potential towards achieving maximal outcomes with minimal inputs.

In the rapidly evolving landscape of AI adoption by businesses, immediate benefits are often the focal point. However, for businesses to really understand the risks associated with AI, they need to be forward-looking. This future-gazing perspective means organisations are aware of how current AI software may advance, how its uses may change and what future developments may bring.

Future of AI

It’s difficult to predict the future of AI. It’s likely that we’ll see organisations adopting AI that better fits the needs of their organisation, utilising AI personas to automate tasks that are specific to the business.

However, as AI advances, it’s likely that hackers will also take advantage of new developments. It’s key that organisations don’t blindly adopt every new form of AI: whilst it can provide benefits, they must become more aware of all the uses of the software and the potential ways it can be used maliciously.

At ANS, we help CEOs and enterprises not only to positively adopt AI software, but also to understand what they should be concerned about and how they can mitigate the risks that may occur.

 
