Making sense of AI
26 March 2026
Introduction
AI has undoubtedly been the topic of the past few years. Most large companies have implemented, or at least experimented with, AI solutions in one form or another [A, B]. And if you are in a line of business that is not yet using AI, it can feel like a race to get on board. Even when the business case is unclear and the demand is vague, “we just need some AI” is a phrase we hear repeatedly from executives. In many industries, the pressure to adopt AI is driven more by fear of falling behind than by well-defined opportunities [D]. For some leaders, this creates the impression that without AI, their function lacks legitimacy.
Some organisations have rolled out AI tools such as Copilot or ChatGPT widely across the workforce. Others have held back, often due to data privacy concerns. Ironically, the companies that are most restrictive in their AI policies may face the highest risk. Employees often turn to private accounts anyway, leading to “shadow AI” use and leaving the organisation with very limited control [D, E, F].
The challenge
Reports on the return on investment from AI tell very different stories. Some show low productivity gains and disappointing results, while others highlight major efficiency improvements and cost savings [C, G, H, I, J, K]. The truth is likely somewhere in between. AI’s benefits are real, but they are uneven, highly context-dependent, and difficult to measure – despite the rapid pace of development and the diffusion of powerful new tools. So, what is going wrong?
At least four things seem to be common pitfalls across organisations:
1. Unclear goals: Too often, AI is deployed without a clear purpose. The “we just need some AI” reflex is driven by fear of missing out or falling behind rather than by a well-defined problem to solve [D].
2. Neglected user adoption: Little time or resources are allocated to support and training, leaving employees unable to understand and use AI effectively [M, N].
3. Rapid evolution: AI solutions evolve so quickly that it is difficult for users to remain competent and confident in applying them [L, M].
4. Overly general deployments: Broad, one-size-fits-all tools are launched without attention to smaller, targeted solutions that could drive real impact where it matters most [I, J, M].
So, what can you do to avoid these pitfalls? It may help to start thinking about AI differently. We propose three alternative ways to view it.
Think of AI as a specific tool
Think of AI more like a tool than a magic solution to every problem. AI can absolutely improve both quality and productivity, but only when applied with intention. It may be more useful to talk less about AI in the abstract, and more about the specific tool at hand. Specificity matters. If you hand people a new tool with vague instructions such as “this will help you with just about everything,” most will either not use it at all or use it suboptimally.
Instead of thinking “we need AI in HR,” consider how AI might improve one particular process. For example, rather than expecting AI to transform recruitment as a whole, start with a well-defined subtask such as writing job ads. Provide a template; specify the length, tone of voice, and legal requirements; and feed it your organisation’s role descriptions. With just a few keywords about the role, seniority, and context, AI can then generate well-written drafts. Keep in mind, however, that you remain responsible for checking both the quality and compliance of the results.
And keep in mind that, as with any tool, it takes time and practice to learn how to use it properly. That doesn’t happen by itself, so be sure to dedicate time to it – for yourself and for your employees.
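To make the job-ad subtask concrete, the workflow above – a fixed template plus a few keywords about the role – can be reduced to a small reusable function. The sketch below is a minimal, hypothetical Python example; the template fields, tone default, and word limit are our own illustrative assumptions, not part of any particular AI product. The resulting prompt would still be sent to whatever model your organisation uses, and the draft it returns still needs human review for quality and compliance.

```python
# Hypothetical sketch: turning the "job ad" subtask into a reusable
# prompt template. Fields, tone, and word limit are illustrative
# assumptions, not the API of any specific AI tool.

JOB_AD_TEMPLATE = """Write a draft job ad for the role below.
Role: {role}
Seniority: {seniority}
Context: {context}
Tone of voice: {tone}
Length: at most {max_words} words.
Include our standard equal-opportunity statement."""


def build_job_ad_prompt(role, seniority, context,
                        tone="professional but warm", max_words=300):
    """Fill the template so the model receives explicit, checkable
    instructions instead of a vague one-line request."""
    return JOB_AD_TEMPLATE.format(role=role, seniority=seniority,
                                  context=context, tone=tone,
                                  max_words=max_words)


# A few keywords are enough to produce a complete, specific prompt.
prompt = build_job_ad_prompt(
    role="HR Data Analyst",
    seniority="mid-level",
    context="People Analytics team, hybrid, Copenhagen office",
)
print(prompt)
```

The point of the template is not the code itself but the discipline it enforces: every draft request carries the same constraints on length, tone, and legal content, which makes the output easier to review and compare.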
Think of AI as an assistant
Think of AI as your student assistant or new team member. AI is not always right. You need to verify and validate its output, just as you would with work from a student assistant or a new team member. You also need to provide context and clear instructions. In an AI context, this means making an effort with your prompts. Expecting your AI to understand exactly what you want based on a one-sentence prompt is naive. Forget the notion that AI knows everything, has access to all information, or always produces quality results. It can only work with the data and training it has, and mistakes are inevitable.
The good news is that, like a human assistant, AI gets better the more you work with it. The first time you ask for a job ad, you will need to review and correct. After a few iterations, the corrections diminish as the system adapts to your style and preferences (depending on the AI solution and settings). Over time, you also learn where the AI can be trusted to work independently and where you need to stay closely involved. In this sense, AI becomes a capable assistant, one that requires guidance and oversight but that can eventually take on more of the routine workload. Either way, it will (or should) never relieve you of your privilege and duty to think critically.
Think of AI as infrastructure
Finally, think of AI as something that, over time, becomes part of the invisible backbone of your organisation, like electricity, the internet, or email. Nobody talks about “using electricity in HR,” although everyone surely does. Instead, you think about how light, machines, or servers enable specific tasks. The same will be true for AI. As it matures, AI will shift from being a buzzword to becoming a silent enabler embedded across workflows and systems.
Companies that approach AI as infrastructure, integrating it into daily operations, governance, and culture, will be best positioned for long-term value. Those that treat it as a stand-alone project, a shiny pilot, or simply keep the AI conversation on an abstract level risk ending up with isolated experiments that never deliver lasting, company-wide impact.
References
[B] IBM. (2023). “Global AI Adoption Index 2023–24.” https://www.ibm.com/reports
[C] MLQ.ai. (2025). “The GenAI divide: State of AI in business 2025.” https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
[D] Microsoft. (2024). “Work Trend Index 2024.” https://www.microsoft.com/work-trend-index
[E] Cisco. (2024). “2024 Data Privacy Benchmark Study.” https://www.cisco.com
[F] Microsoft. (2025, August). “Entra Internet Access for shadow AI discovery.” Tech Community. https://techcommunity.microsoft.com
[G] Noy, S., & Zhang, W. (2023). “Experimental evidence on the productivity effects of generative artificial intelligence.” Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586
[H] UK Government. (2025, June). “Cross-Government Microsoft 365 Copilot Findings.” https://www.gov.uk
[I] Dell’Acqua, F., et al. (2023). “Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity.” Harvard Business School Working Paper No. 24-013. https://www.hbs.edu
[J] METR (Model Evaluation and Threat Research). (2025, July). “Randomized trial on experienced developers.” https://metr.org
[K] Government Digital Service & Department for Business and Trade. (2025, August). “Evaluation of Copilot pilots.” https://www.gov.uk
[L] Stanford University. (2025). “AI Index Report 2025.” Stanford HAI. https://aiindex.stanford.edu
[M] Deloitte. (2024). “State of Generative AI in the Enterprise: Year-End 2024.” https://www.deloitte.com
[N] Microsoft. (2024). “Work Trend Index 2024 Press Summary.” https://news.microsoft.com