My name is James Ramsden, and a year ago I was asked to make five predictions about how AI would evolve,
and evolve us, in 2024.
One year later, I thought I'd take stock of the year and see how my predictions panned out.
Let's see how I did.
2023 James: 2024 will see a focus on the practicalities of AI assurance, not just the high-level principles.
2024 James: Indeed! 2024 was the year that Naimuri delivered its first one-stop-shop solution for AI assurance workflows to customers, and the demand for assured AI is really starting to get raucous. The ability to provide customers with evidenced AI risk mitigation reports and recommendations is starting to be seen as the key bridge between innovation and operation. In 2025, might we start seeing significant job adverts for AI Assurance Engineers and Risk Assessors? Fingers and toes crossed!
2023 James: Following the release of Meta's LLaMa and the advent of open source LLM tuning techniques, we'll see a shift from proprietary AI to open source innovation.
2024 James: Pretty close! As ChatGPT evolves, it is still the one to beat overall, but it is now regularly beaten on some metrics by open source models that you can deploy securely to your own infrastructure. And open source DevOps frameworks are proving good alternatives to proprietary end-to-end MLOps services, even on the cloud. Give me Argo and MLflow any day! Successful startups that integrated with OpenAI APIs in 2023 or even earlier are moving toward self-hosting fine-tuned and secured LLM capabilities. This year I attended Manchester's first meet-up (MLOps.wtf) for the open source MLOps community.
2023 James: We'll need to change our trust relationship with data, moving from bottom-up approaches like text watermarking to more social approaches like critical thinking.
2024 James: The first part of my prediction seemed to pan out: OpenAI developed text watermarking in 2024; we've seen and been involved in a redoubled effort in deepfake detection; just this week we investigated an opportunity around representative signal generation. The other part of my prediction was the nervous punt. Have we changed our trust relationship with data? 2024 saw glimpses of what AI wars in elections will look and sound like going forward. It seems likely that elections can be won or lost on the disinformation battlefield generally, but it's less obvious whether AI is starting to play a significant role in that. What is more apparent is how the very existence of deepfakes can be immediately and effectively exploited to devalue actual evidence and well-provenanced claims of fact. Fact-checking itself is starting to be seen as an act of antagonism in some quarters. And while companies like OpenAI do have the tools to help, they seem disinclined to put them into action in case doing so reduces profit. With hindsight, I think 2023 me looks quite naive to 2024 me's eyes. I'm just waiting for the Naimuri Book Club to recommend Jean Baudrillard's Simulacra and Simulation.
2023 James: We'll see an increased uptake in AI for Business Operations first and foremost, and more organisations will start the process of business transformation to be more AI-facing.
2024 James: It's clear that customers are taking the challenges of becoming AI-facing seriously. We have seen an uplift in demand for AI assurance, MLOps, data synthesis capabilities, data catalogues... all of the ingredients you need to make AI part of how you do business. We have certainly seen an increase in AI uptake, particularly in the impact area of business operations (or BusOps) as we predicted. (At time of writing, I have an embarrassment of riches when it comes to LLM-based solutions for business opportunity development.) Different organisations are at different points on the journey, with some ready to adopt LLMs into their day-to-day working, while others are realising they need to know where their data is and amend how it is captured. But it is happening!
2023 James: Organisations will place a great deal of importance on AI governance.
2024 James: Lots of AI strategies, AI ethical guidelines and model cards are flying about, sure! But what tooling, skills, training, ways of working and life-cycling should you adopt to ensure that governance is more than just words on a page? For cataloguing AI products, applying standards, and calculating risks, we have options now we didn't have last year (e.g. https://fairnow.ai/), and those options are being taken up in some quarters. Proprietary E2E solutions, piecemeal open source ones and bespoke capabilities are being developed well beyond the tried and tested model card. The snag has been that good AI governance needs to be built on a good data culture, so for many there's a marathon to run just to get to the starting line.