Opinion

The hidden risks of generative AI: A wake-up call for businesses

Peter Garraghan, Co-Founder & CEO at Mindgard, spells out the dangers
By
Peter Garraghan
By

As a business owner or senior executive, you're probably well aware of the buzz surrounding generative AI and large language models (LLMs) like ChatGPT. These technologies are set to upend the way businesses operate, offering unprecedented opportunities for innovation and growth.

While AI, in the form of machine learning, has been around for a while now, the extensive application of neural networks in Large Language Models (LLMs) and other generative AI “foundation models” takes things to a new level. Research among early users of Microsoft’s Copilot AI assistant show that 70% said they were more productive and 68% recognised an improvement in the quality of their work. Analysts at Goldman Sachs suggest that generative AI could raise global GDP by 7%.

These are sizable numbers. 

As a result, generative AI is entering business processes and practices at an incredible pace. One route is via established tools, such as ChatGPT and Copilot.  In the US, a staggering 74% of employees are already using AI tools, often without the knowledge or consent of their employers. A similar survey in Switzerland reported 60% adoption. The figure in the UK is likely to be in a similar range. 

Another route is via deliberate programmes by businesses to build generative AI tools for specific tasks relevant to their needs - for example customer service chatbots. These programs usually build on top of a foundation model, such as OpenAI’s ChatGPT or Anthropic’s Claude. 

In either case – unplanned experimentation or deliberate programmes – there’s an elephant in the room that needs attention: security. While such rapid, widespread adoption is impressive, it also exposes a significant blind spot in many organisations' cybersecurity strategies.

The hidden security risks of generative AI

Sometimes, AI tools simply get things wrong. Air Canada’s chatbot recently encouraged a booking from a customer who was concerned about the airline’s refund policy for bereavements, despite the fact that Air Canada's policy explicitly states it will not provide refunds for bereavement travel after the flight is booked. Apparently, the chatbot in this case completely fabricated a new bereavement policy.

But the real security issues are much deeper than that. Even the most advanced AI foundation models are not immune to vulnerabilities. In 2023, ChatGPT itself  experienced a significant data breach caused by a bug in an open-source library, exposing users' personal information and chat titles. Data poisoning, extraction, and model copying are just a few of the threats that can have severe consequences for businesses. What's more concerning is that traditional security tools often fall short when it comes to detecting and mitigating these risks.

AIs that have not been properly secured can leak data if you just politely ask it to do so. Gab AI, a platform for a far-right social media network, was coaxed into revealing the darker side of its instructions to investigators

Often what is going on is that AI is being used against itself. That’s what happened when a customer service bot, based on ChatGPT and developed for a car dealership in the USA, recently achieved notoriety when pranksters manipulated it into agreeing to sell a new Chevy for just $1. People noticed the “powered by ChatGPT” claim on the chatbot and began experimenting to see what it could and couldn’t do. 

At a more sophisticated level, bad actors are adapting a security vulnerability in conventional IT called SQL injection – where a web application is exploited by code inputted into fields in login or contact forms - to the new world of AI. Here “prompt injection” in public-facing AI applications means that can be coaxed into giving out source code and instructions, business IP or even customer data.

The complexity of identifying AI vulnerabilities, coupled with the cost of the necessary specialised skills, is brewing a perfect storm for any businesses. Penetration testing, a crucial process in ensuring the security of AI systems, is time-consuming and expensive. Moreover, the rapid pace of AI development means that these tests can quickly become obsolete, leaving organisations exposed to evolving threats.

Furthermore, the democratisation of AI has made it easier than ever for businesses to adopt these technologies. While this is undoubtedly a positive development, it also means that many organisations are deploying AI systems without fully understanding the security implications. This lack of awareness can lead to a false sense of security, leaving businesses vulnerable to attacks.

As a leader, it's crucial to recognise that ignoring these security implications can have far-reaching consequences. Data breaches, intellectual property theft, and reputational damage are just a few of the potential outcomes of leaving your AI systems unsecured. With the business value of data now clear to everyone, no business can afford to take such risks.

Five practical steps towards better AI cybersecurity

1. Conduct a comprehensive AI security audit: Before deploying any AI system, assess your current security posture. This includes identifying potential vulnerabilities, evaluating existing security measures, and determining the level of risk associated with each AI tool.

2. Develop an AI security strategy: Based on the findings of your security audit, create a comprehensive strategy that addresses the unique risks posed by AI. This should include policies and procedures for secure AI development, deployment, and monitoring.

3. Invest in specialised AI security tools: Traditional security measures, such as firewalls and antivirus software, are not designed to handle the unique threats posed by AI. Businesses must explore purpose-built solutions to red team, security test, detect and mitigate vulnerabilities in their AI systems.

4. Educate your team: To address these challenges requires a multi-faceted approach to AI security. This starts with educating leadership teams and employees about the risks associated with generative AI and LLMs. By fostering a culture of security awareness, businesses can ensure that everyone understands the importance of protecting their AI systems.

5. Collaborate and stay informed: Collaboration and knowledge-sharing will be key in navigating this new landscape. Actively engage with the AI security community, staying up-to-date with the latest research and best practices. By working together, businesses can collectively raise the bar for AI security and create a more resilient ecosystem.

Don’t put your head in the sand

The AI revolution is here, and it's not slowing down. As a business leader, you have a responsibility to ensure that your organisation is prepared for the challenges and opportunities that lie ahead. By prioritising AI security and taking a proactive approach to mitigating risks, you can position your business for success in the era of generative AI and LLMs.

I implore you not to wait until a security incident occurs to address the risks associated with AI. By investing in education, specialised tools, and collaboration, you can safeguard your data, intellectual property, and reputation. The future of your business depends on it.

About the author 

Peter Garraghan, CEO and co-founder of Mindgard, is an internationally recognised expert in AI infrastructure and security. He has pioneered research innovations that were implemented globally by a leading technology company used by over 1 billion people. As a professor at Lancaster University, he has raised over €11.6 million in research funding and published over 60 scientific papers.

Written by
May 17, 2024