Permission to start wrong
How to experiment with AI in tough environments
If you’re working within the Philippines’ Department of Education right now, then you’re managing 26 million learners and 900,000 teachers in the midst of a learning crisis, against a backdrop of anti-corruption protests and compounding climate shocks.
When that’s your daily reality, how do you switch into a mode of “Let’s experiment with some emerging technology!”?
Some version of this is reality for most people working in government right now, as they think about how to implement AI tech. But that hasn’t stopped some of these teams from thinking big.
When we launched our Ministry of Education AI Challenge in July (part of the AI Observatory, made possible by the support of UK International Development), asking how AI could improve the behind-the-scenes work of running education systems, thirty-six teams applied, including the Philippines. Their ambition was striking. They weren’t asking for chatbots. They wanted to predict student dropout before it happens, track school safety risks in real time, and identify malnourished children from a single photo.
In this issue we’re asking how teams working on challenging problems in complex environments can give themselves the permission to experiment with AI, while navigating all the hype and uncertainty swirling around it.
When playing it safe is the risky play
The AI Observatory maps government AI projects across three horizons: Upgrade (making existing systems work better), Disrupt (creating new ways of working), and Transform (reimagining what’s possible entirely).
What we’ve seen is that most institutional AI projects cluster in Upgrade territory; some reach Disrupt, but Transform remains genuinely rare. Why? Because the psychological and institutional conditions make anything beyond Upgrade feel impossibly risky when - for example - millions of children’s education is at stake.
Research on loss aversion shows that people weigh potential losses roughly twice as heavily as equivalent gains. This asymmetry intensifies in public sector contexts, where failed experiments get scrutinised and punished while successful ones often go unnoticed. The IMF’s work on political risk aversion showed that people block innovation not just because they’re risk-averse generally, but because they fear losing political clout or credibility in future decisions. Even when the new approach would likely be better, the uncertainty about who benefits and who loses makes the status quo feel safer.
When you add social and institutional pressures, the effect compounds. If everyone around you is talking about AI (and LinkedIn makes it look like everyone else has figured it out) then the pressure to act becomes intense. Research on defensive decision-making shows that when people fear being blamed for failures, they tend to choose options that are easier to defend rather than options they believe are actually best for the organisation. This is why teams can sometimes rush towards the ‘safer’ solutions over ‘riskier’ ones that could be genuinely impactful.
Organic idea farming
During our October webinar with the six ministries taking part in our challenge, Dr. Basti Ibanez from the Philippines Education Center for AI Research talked about the need for “quick wins” because that’s how you build institutional confidence and show ministers that the technology can deliver results.
But he was also clear-eyed that quick wins can become their own trap if they’re all you ever pursue.
He went on to talk about designing solutions that can grow “organically”. That means creating the conditions where innovation can be introduced into systems and then allowed to scale naturally. As unexpected challenges are encountered, you can learn from what you’re seeing play out in the real world and then iterate based on that information.
Once you know you can’t plan for everything upfront, you can start designing safe spaces in which you can adapt as you go.
Making uncertainty the plan
Working this way lowers the stakes of any single decision. Instead of committing to a grand plan that has to work, you’re running small experiments that teach you what’s actually needed. That makes it psychologically easier to take the first step, because failure becomes information rather than catastrophe.
The six ministries taking part in our challenge aren’t special because they’ve figured everything out. What makes them stand apart is their willingness to test and learn and adapt publicly. To say “we’re not entirely sure this will work but here’s what we’re trying and here’s what we’re learning.”
That permission they’ve granted themselves to experiment, even when conditions aren’t ideal, might turn out to be more valuable than any specific algorithm or tool they develop, because it’s the culture change that makes continued innovation possible. And as those small, incremental experiments grow in scale and ambition, they not only move from Upgrade through Disrupt towards Transform; they also begin to generate the evidence and momentum needed to tackle those systemic challenges around infrastructure, privacy and power dynamics that no single project could solve alone.
📚 Brain Food
If you want to dig into more on this topic:
📖 Read: The EdTech Hub AI Observatory blog is where we’re publishing evidence and updates from the six ministries as their work progresses. There’s also this deck, which breaks down the findings from speaking to teachers about the potential for AI.
🎧 Listen: Beth Noveck, New Jersey’s first Chief AI Strategist, visits The Road To Accountable AI podcast to explore AI’s transformative power in public governance.
📺 Watch: Taiwan’s former Digital Minister and now Cyber Ambassador, Audrey Tang, discussing how the vTaiwan platform grew from small civic experiments to institutional practice, and how the team is now helping California apply the same model.
📖 Read: Luana Faria from Brazil’s Ministry of Management and Innovation talks about creating “psychologically safe spaces” for public servants to experiment and how “resistance to change is a lazy way to justify the lack of innovation”.
This month’s mystery links
Your reward for reading down this far…
How dead pigs are helping in the search for missing victims of Mexico’s drug wars.
The world’s first meditation app designed to help you find inner peace while the world burns.
There are currently 1,662 people receiving this newsletter. In March 1662, the first public bus service began operating in Paris. While originally designed for ordinary Parisians, the nobility started using it and the intended users were excluded, leading the service to fold within a few years. But the seed was planted and it’s safe to say that public transit systems managed to take off eventually.
If you’d like to help this small experiment scale a little, click the button to send this issue to a friend or colleague (we don’t care if they’re nobility or not).
