Just five per cent of companies are getting real value from AI, according to Boston Consulting Group’s Build for the Future 2025 report. Yet the rewards for those that do are substantial: twice the revenue growth and 40 per cent greater cost reductions compared with companies that haven’t invested in AI.
The gap between companies using AI effectively and those dabbling without direction isn’t just a minor competitive difference; it’s a widening chasm. Some firms rush to invest in AI, buying into the hype rather than putting a strategy in place, often without considering what they want to achieve. However, AI success isn’t about racing to implement the latest tools; it’s about building strong foundations first, with the right data, culture, and people in place.
This is where operational research (OR) plays a crucial role. OR uses advanced analytical methods to help organisations make better decisions and solve complex problems. By applying structured problem-solving, modelling, and optimisation techniques, it helps businesses identify where AI can genuinely add value, test scenarios before investing at scale, and ensure that AI initiatives are aligned with strategic priorities rather than driven by hype.
Start with data and culture
A report from the IBM Institute for Business Value highlights challenges businesses face when adopting AI. Among the top concerns are data accuracy and bias, issues that business leaders can address by prioritising strong governance, transparency, and ethical AI practices.
But before any of this is possible, organisations need to start with their data. AI only delivers value when it has high-quality data to work from. Put rubbish in and rubbish will come out. Many organisations sit on vast amounts of information but lack the processes to make it usable. Data is often spread across multiple systems, departments, and formats, creating silos and inconsistencies that make extracting insight difficult.
Without proper standardisation and verification, feeding poor-quality data into AI risks amplifying errors and biases. In the rush to adopt AI, this can lead to reputational and financial damage. The launch of Google’s Gemini AI, for example, faced significant backlash over its image generation tool after claims it was over-correcting to avoid bias, highlighting the risks of deploying complex systems before they are fully understood and tested.
Organisations need to take a more methodical and disciplined approach to AI adoption, following a ‘crawl, walk, run’ progression before attempting to scale. First, they must assess what data exists, where it is stored, and its quality, and then organise and standardise that data to create a reliable foundation. Only then can it be used effectively for business analysis and decision-making, before automation is introduced at scale. Done well, this ensures AI becomes a multiplier of intelligence rather than a mirror reflecting organisational weaknesses.
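The ‘crawl’ stage described above often begins with a simple data audit: profiling each dataset for gaps, duplicates, and inconsistencies before any AI work starts. A minimal sketch of what such an audit might look like is below; the field names and customer records are purely illustrative assumptions, not drawn from any real system.

```python
# A minimal, illustrative "crawl"-stage data audit: profile records for
# missing required fields and exact duplicates before feeding them to AI.
from collections import Counter

def audit(records, required_fields):
    """Return simple quality metrics for a list of record dicts."""
    missing = Counter()  # counts of required fields that are absent or blank
    for rec in records:
        for field in required_fields:
            value = rec.get(field)
            if value is None or str(value).strip() == "":
                missing[field] += 1
    # Exact-duplicate detection on full record content
    seen = Counter(tuple(sorted(r.items())) for r in records)
    duplicates = sum(count - 1 for count in seen.values() if count > 1)
    return {
        "total_records": len(records),
        "missing_by_field": dict(missing),
        "duplicate_records": duplicates,
    }

# Hypothetical customer records merged from two systems, with a gap and a repeat
customers = [
    {"id": "001", "email": "a@example.com", "region": "UK"},
    {"id": "002", "email": "", "region": "UK"},               # missing email
    {"id": "001", "email": "a@example.com", "region": "UK"},  # duplicate
]
report = audit(customers, required_fields=["id", "email", "region"])
print(report)
```

In practice this kind of profiling would run across every source system, and the resulting metrics give the organisation a concrete baseline to fix before moving to the ‘walk’ stage.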
Equally important is creating a culture that values data. Responsibility should extend beyond IT teams to include HR, operations, legal, and other parts of the business. Empowering employees and creating cross-functional communities around data encourages trust and collaboration, helping to break down silos and support better decision-making. Organisations that invest in data literacy and shared accountability are far more likely to see measurable impact from AI adoption.
This approach also helps overcome human barriers to AI adoption. Employees often worry about looking uninformed, misusing AI, or misinterpreting its outputs. By starting with small, tangible initiatives, such as pilot programmes or cross-functional hackathons focused on solving real problems, organisations can build familiarity and confidence, embedding new practices gradually and safely. These early wins lay the groundwork for larger-scale AI integration and are essential for businesses to start to understand how they can get the best out of AI.
AI governance, ethics, and purpose
AI is a powerful tool, but its impact depends entirely on how it’s implemented. Strong data governance, clear ownership, quality controls, and consistent processes are vital, as without them, organisations risk operational mistakes, bias, or serious reputational damage.
AI doesn’t just analyse data; it magnifies the patterns within it. That means any biases in historical data get baked into AI outputs. A clear example comes from AI recruitment tools. Some models trained on past hiring data ended up favouring male candidates, unintentionally rejecting qualified women. This highlights why ethics can’t be an afterthought. Organisations must establish ethical guidelines and oversight from day one and be clear why they are using AI in the first place.
New AI tools shouldn’t be adopted simply because they are the latest thing. They must solve real business problems or free employees to focus on higher-value work. Done right, AI can eliminate repetitive tasks, boost efficiency, and spark innovation, but only when objectives are clearly defined and aligned with broader strategic goals.
Compliance also matters. While not the most glamorous part of AI, following frameworks like the EU AI Act is vital for safe and responsible use. By contrast, the US approach is often called the “wild west” due to its fragmented, reactive, and largely voluntary nature, potentially leaving firms open to significant business risks.
Organisations that integrate best practice in governance, ethics, and compliance early in their AI journey will be better positioned to navigate future regulations, protect sensitive data, and scale AI responsibly. In other words, getting this right isn’t just about avoiding risk; it’s about creating a foundation for AI so it delivers true value to the business.
Moving beyond the hype
The current AI hype cycle echoes the dot-com bubble of the 1990s. But unlike then, when many companies fell by the wayside after the bubble burst, today’s AI technology is maturing, and organisations that get it right can emerge stronger and transform their industries.
The winners will be those organisations that take the time now to organise and standardise their data, establish governance and accountability practices across the business, and align AI adoption with clear objectives. These companies will be best positioned to capture the long-term rewards that AI promises.
Some key takeaways for businesses are:
1. Clarity and quality of data are essential: Poor input produces poor output.
2. Governance and shared accountability matter: Organisations must define ownership and foster a culture of responsibility.
3. Start small with achievable projects: AI cannot solve all a business’s problems overnight. Beginning with tangible, manageable projects allows organisations to build confidence and capability before scaling.
4. Use operational research to prioritise and optimise AI investment: Modelling and scenario analysis help organisations focus AI on the areas of greatest impact, reducing risk and maximising measurable returns.
AI will increasingly be used to test, refine, and strengthen human work, raising quality, consistency, and confidence. The organisations that succeed will not be those chasing every new tool, but those applying disciplined thinking to where and how AI is deployed. By combining strong data foundations with structured OR methodologies, businesses can move beyond experimentation and turn AI ambition into measurable, sustainable results.