SCMAP Perspective is our fortnightly column on PortCalls, tackling the latest developments in the supply chain industry, as well as updates from within SCMAP. In this column, Henrik Batallones sees generative AI in a supply chain application for himself – and ponders how we can take advantage.

Potentially transformative

Lately it seems easy to take all this talk of artificial intelligence for granted. Everyone’s talking about it. Every new phone and gadget seems to have it as a main feature, never mind that most of the time it’s really just a branding exercise. We’ve heard from evangelists who believe fully embracing AI is necessary for your business to survive, and from creatives who fear they will lose their means of living to applications such as ChatGPT and Midjourney. All this talk… you either take it for granted, or you actively tune it out.

I’ll admit I was one of those people who talk about AI – not incessantly, I hope, but I have certainly devoted several columns to the subject. Knowing that this has already been around for years in various forms (the term “artificial intelligence” itself was first coined in the 1950s), it’s been rewarding to explore how it has already helped our work in supply chain, and how it could help us further, if we do it right. One of the points I have always made is how the advent of large language models could democratize supply chain insights, making them more accessible to those working on the ground, rather than limited to those in the boardroom and subject to their biases. It could allow supply chain professionals across the board to have the information they need faster, to make decisions faster. At a time when being agile is an important quality to have, well, AI could be really helpful, right?

To be honest, that was just speculation on my part. But I was lucky enough to take part in a workshop on AI, co-presented by Google and the Department of Trade and Industry, and, I will admit, I had my mind blown when I saw the very thing I had been writing about happen in practice, right in front of me.

It was during a demonstration of Gemini, Google’s new AI platform. One of its strengths, according to the tech giant, is that it can use different file formats as part of its instructions, or prompts. Say, in ChatGPT, you would type in your instructions and expect a result. In Gemini, you can add to your instructions by attaching a photo, or an audio recording, or in this case, an Excel spreadsheet with rows and rows (and columns) of numbers. The question: which items are running low?

It took Gemini a while to process it – and by “a while” I mean a minute or so – but it went through the entire spreadsheet, understood what it meant, and then… asked for clarification. I’m paraphrasing here: “What do you mean by ‘running low’?” It wanted to know what counts as low in stock. The first prompt did not make that clear, but by then both you and the system had something to work with, and the next step was to give a little more information, to narrow the focus. Admittedly, there is a bit of a learning curve, but “having a conversation” with the platform feels more intuitive, more democratized, than manually going through a spreadsheet to find the right number you need… and risking getting it wrong because your eyes are tired.
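To make that ambiguity concrete, here is a toy sketch – not Gemini’s actual workings, and the item names, quantities, and threshold are all invented for illustration – of the filter the system effectively has to run once “running low” is pinned down to a number:

```python
def low_stock(items, threshold):
    """Return the names of items whose quantity is at or below the threshold."""
    return [name for name, qty in items if qty <= threshold]

# A stand-in for the spreadsheet's rows and rows (and columns) of numbers.
inventory = [
    ("pallet wrap", 12),
    ("packing tape", 140),
    ("shipping labels", 8),
]

# The first prompt ("which items are running low?") leaves 'threshold'
# undefined -- the follow-up conversation is what supplies it.
print(low_stock(inventory, threshold=20))
```

The point of the exchange with the platform is precisely that the `threshold` parameter was never stated up front; the system had to ask for it before the question became answerable.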

So, yes, it can be done. But there are still some caveats, the most important of which is that AI – at least as it stands now – is not there to provide you with a value judgment. (And I hope it never gets there, or else all those sci-fi scenarios of robots killing humanity to save the planet aren’t far off.) Its main role, at least in the example I described, is to parse large amounts of data and make it easier for you to understand. A lot of it still relies on our input. Are we asking the platform the right questions? Have we fed it the right data? Is that data correct and clean?

The further adoption of AI into various workplace and supply chain applications also raises a few more questions. As it stands, platforms like Gemini are available to the public, in most cases for free – but the security and integrity of the data you choose to feed them will always be a concern. One tip mentioned during the workshop: do not feed the system sensitive information you wouldn’t want leaked to the public. Makes sense. But if you absolutely have to, you might want to invest in paid solutions where your information can be siloed off. That could be a significant cost for small businesses – and even for large ones, especially with the ongoing arms race for processing power driving adoption costs up.

The other question is more obvious, and easier to address: are our people up for it? I’d say yes, with a little more investment in upskilling them. Tech literacy is something we should continually invest in, not just in the context of the workplace, but in our daily lives, too. How to consume and disseminate information responsibly. How to be more skeptical when it comes to whatever is presented to us by an “all-knowing” screen. How we can use the technology available to us to work for us, without potentially compromising our positions. And training people to be more adept with this technology is so essential, it shouldn’t be something that is kept away from people for fear that they might leave the company with that knowledge. A potentially transformative technology requires us to better understand it – both its advantages and its risks, and the people it may potentially leave behind. Like, you know, the creative types who do artworks and write essays. Like, well, me.