Opinion

Why leaders should experiment more with AI

‘Is AI flawed? Absolutely. But we must embrace our own flaws if we are to stay in control’
By Nishant Kumar Behl

We’ve all had a giggle at photos of the Pope in a puffa jacket, and at any number of images and videos created by GenAI that purport to be real. Most of us are well enough tuned into critical thinking to see these for what they are: demonstrations of a highly powerful digital tool. There are, of course, serious potential social and political repercussions when these deepfakes are shared and widely accepted as the truth. Meanwhile, commentators were quick to report on the incident at OpenAI’s launch of GPT-4o, when the tech mistook the human presenter for a piece of wood.

But what about the countless times that AI gets it right? We seem to hold AI to a higher standard of accuracy than we do the rest of the tech we use every day. AI has been around for some time, invisibly integrated into popular apps such as Google Maps and wearable fitness trackers. Many of us are still getting to grips with its more recent iteration, GenAI, and are experimenting with using it to help us write reports or analyse our data, with mixed results. Considering all the hype around GenAI over the past couple of years, it is understandable that users feel short-changed when their ambitious expectations are not met.

But this tendency to scrutinise the occasional AI mishap, despite its far more frequent correct responses, overshadows AI’s overall reliability and creates an unfairly high expectation of perfection. Much of the most useful software we rely on in our daily working lives contains bugs, a completely normal byproduct of developing and writing code. The internet is awash with comments, forums, and advice pages to help users deal with bugs in their Apple and Microsoft word processing and spreadsheet apps; it doesn’t stop these being highly effective tools that help us get our jobs done more quickly and easily.

If we can accept blips in our workhorse applications, it seems unreasonable to hold AI to such a high standard. Of course, we are right to be cautious. When AI can be used so effectively by bad actors set on misleading and misdirecting the public, there is undoubtedly great potential power being unleashed here. But we must own this technology and not allow it to be used to manipulate us or work against us. No technology is smarter than humans, and as technology gets smarter, it pushes humans to become smarter too. There’s no point worrying that AI can perform our jobs better than we can. When we own it, using our critical skills and fully collaborating with AI, the inputs of humans and artificial intelligence work together, and that’s when magic happens.

By automating many of the boring, repetitive processes we go through at work, AI is already freeing up human time, doing the heavy lifting so we can be more creative and focus on more fulfilling tasks. But artificial intelligence isn’t real intelligence (the clue’s in the name). It is built by humans and can only learn by mimicry, processing vast datasets that inform what it can produce. If AI screws up, it’s human error.

Developers and users alike, we are all responsible for ensuring we deploy AI appropriately, with consideration for when, how, and why we are doing so. Humans are a critical part of the mix: we need to ask the right questions and make connections based on our unique human sensibility and perception if the AI we use is to become more accurate, more useful, and better able to serve our purpose.

Don’t put AI on a pedestal and then feel disappointed when it occasionally fails. Instead, we must stand up and take ownership of the data that powers AI: the way it is managed, standardised, categorised, and used. This gives us the opportunity to nurture our own propensity for critical thinking, to sense when things don’t feel quite right, and to apply our unique human traits such as compassion and empathy. That’s how we get the most out of this exciting new technology. And yes, it will still give us something to laugh about from time to time, safe in the knowledge that the AI does not share, or care about, our sense of humour.

Written by Nishant Kumar Behl
August 27, 2024