This article originally appeared in the CEMA Newsletter in February 2026.
"If you're certain, you're certainly wrong, because nothing deserves certainty"
Bertrand Russell
There are places in the lower Gambia where, for generations, people simply did not go. The land there was fertile, the water abundant, the vegetation thick and promising. By any rational measure, it was land that should have been settled, farmed, and made productive. Yet it remained untouched. Not because it was barren, but because something was believed to live there.
The Mandinka people told stories of Ninki Nanka, a vast, swamp-dwelling creature, part serpent, part crocodile, sometimes crowned, sometimes horned, whose very presence was said to bring sickness, and whose gaze meant certain death. The stories were vivid, inconsistent in their details, and unwavering in their warning. To encounter the Ninki Nanka was not to be tested or challenged. It was to be avoided. Entire patterns of movement, settlement, and livelihood bent around that belief.
Whether the Ninki Nanka ever existed is irrelevant. What matters is that the people who told these stories behaved as though it did, and in doing so, revealed a sophisticated response to danger they could not fully explain. They understood, long before modern engineering language gave us better words, that some threats do not announce themselves clearly, that some risks cannot be calculated, and that the most dangerous problems are often the ones you notice only after you have already entered the swamp.
This, I want to suggest, is the Ninki Nanka problem, and it lies at the heart of why so many well-intentioned software and AI projects fail.
Uncertainty is an odd adversary. It does not announce itself. It does not arrive with sharp edges or obvious failure modes. It hides in assumptions, in optimism, in phrases like “we’ll figure that out later.”
And yet, it is the quiet force behind almost every project that runs disastrously over time and budget, consumes far more resources than planned, and leaves teams exhausted, disillusioned, and quietly bitter. Uncertainty wastes money, yes, but it also wastes something less often measured: mana, the energy, trust, and belief that animates the start of something potentially groundbreaking.
The Mandinka did not have a Ninki Nanka problem. We do.
In my experience, software practitioners are simply not frightened enough by it.
To see how this plays out at scale, we do not need to look to software at all. We can look instead to one of the most ambitious engineering projects in modern Europe.
On the Normandy coast stands the Flamanville Nuclear Power Plant. Its third reactor, Flamanville 3, was intended to be a flagship: France’s first European Pressurised Reactor, or EPR, an ambitious design that promised improved safety, greater output, and a new chapter in nuclear engineering.
Construction began in 2007. The reactor was expected to come online in 2012. The budget was set at €3.3 billion.
Flamanville 3 during its construction in 2010. Image from Wikipedia under CC BY 3.0.
What followed was not a single catastrophic failure, but a slow accumulation of unresolved unknowns. Contractors found themselves grappling with unfamiliar technologies. Quality control processes revealed anomalies in critical components only after installation. Supply chains fractured. Key subcontractors collapsed under the weight of delays and complexity. Safety systems failed tests late in the process, when fixes were most expensive.
By 2012, the estimated cost had jumped to €8.5 billion. By 2015, metallurgical flaws in the reactor vessel triggered further delays. By 2018, functional testing failures halted progress again. By 2022, the projected cost had reached €19.1 billion, nearly six times the original estimate, with the reactor still not operational.
By the time Flamanville 3 entered production in December 2024, it was over a decade late.
What is striking is that none of this was driven by bad intentions. The engineers were capable. The goals were noble. The oversight structures were extensive.
What failed was not competence, but the management of uncertainty.
Too many unknowns were treated as manageable risks. Too many learning moments occurred during execution rather than design. And as uncertainty compounded, it did not cancel out; it multiplied.
We often speak of uncertainty and risk as though they were interchangeable. They are not.
Risk is what we deal with when outcomes are known but timing or frequency is not. We know car accidents happen. We know roughly how often. Insurance exists because risk can be priced, pooled, and compared. Competition between insurers is possible precisely because the uncertainty has been reduced to probability.
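The arithmetic that makes risk insurable is simple: once frequency and severity are known, expected loss is their product, and a premium is that expectation plus a margin. A minimal sketch, with every number invented for illustration:

```python
# Pricing a poolable risk: expected loss = frequency x severity,
# premium = expected loss plus a loading for expenses and profit.
# All figures below are invented for illustration.

accident_rate = 0.05      # expected claims per policy per year
average_claim = 8_000.0   # average cost of a claim (currency units)
loading = 0.25            # insurer's margin

expected_loss = accident_rate * average_claim   # 400.0
premium = expected_loss * (1 + loading)         # 500.0

print(f"expected loss: {expected_loss:.2f}, premium: {premium:.2f}")
```

None of this works for uncertainty proper: if the frequency itself is unknowable, there is no expectation to price.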
Uncertainty, by contrast, concerns outcomes we cannot yet enumerate.
Donald Rumsfeld’s much-maligned phrase, “unknown unknowns”, captures this uncomfortably well. These are situations where we do not even know what questions to ask. When governments introduce untested economic policies, markets often freeze not because risk has increased, but because uncertainty has. Investors wait. Projects stall. Activity slows.
Uncertainty paralyses.
And as Flamanville demonstrates, uncertainty generates waste on a staggering scale: not only of money and time, but of human commitment, the mana. Engineers burn out. Organisations lose credibility. Public trust erodes.
This is why uncertainty deserves fear.
Not all uncertainty is the same. Some of it is woven into the fabric of reality: randomness (aleatoric uncertainty) that we can bound but never remove. Some of it stems from ignorance (epistemic uncertainty), from limits in what we know or can measure (measurement uncertainty). Some of it arises from the models we choose (model uncertainty), which inevitably privilege certain perspectives while obscuring others.
In software and AI, two forms dominate and do the most damage. One is epistemic uncertainty: the uncomfortable territory of what we do not yet understand, and worse, what we do not realise we have misunderstood. The other is model uncertainty: the cost of committing to one way of representing the world when many were possible.
Both are seductive because they often masquerade as clarity.
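The aleatoric/epistemic distinction can be made concrete with a standard heuristic (not something the article itself proposes): fit an ensemble of models on bootstrap resamples of toy data, and treat disagreement between members as a proxy for epistemic uncertainty, while the residual scatter around a fit proxies the aleatoric noise. A sketch under those assumptions, using invented data:

```python
import random
import statistics

random.seed(0)

# Toy data: y = 2x + noise. The noise term is aleatoric:
# no amount of modelling removes it.
xs = [i / 10 for i in range(50)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Bootstrap ensemble: each member sees a resampled dataset.
# Disagreement between members shrinks as data grows -- it is
# reducible, hence a proxy for epistemic uncertainty.
models = []
for _ in range(200):
    idx = [random.randrange(len(xs)) for _ in xs]
    models.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))

x_query = 2.5
preds = [a * x_query + b for a, b in models]
epistemic = statistics.stdev(preds)

# Residual spread around a single fit proxies the aleatoric noise,
# which stays near 0.5 no matter how much data we collect.
a, b = fit_line(xs, ys)
aleatoric = statistics.stdev(y - (a * x + b) for x, y in zip(xs, ys))

print(f"epistemic ~ {epistemic:.2f}, aleatoric ~ {aleatoric:.2f}")
```

The point of the sketch is the asymmetry: collect more data and the epistemic term falls toward zero, while the aleatoric term does not. Model uncertainty is the part neither number captures, because both assume the line was the right shape in the first place.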
In software engineering, uncertainty enters through many doors, but not all are equally dangerous.
The most destructive uncertainty concerns the problem itself. There is no greater waste than building an elegant solution to the wrong problem. As John Gall observed, “an intervention must operate at the correct logical level to be effective”. Today, we live in a world awash with solutions. The harder task is determining which problems genuinely exist and which merely sound compelling.
Closely following this is uncertainty about users. Many developers are trained in environments where problems are neat, bounded, and fully specified. Real users are none of these things. Their needs are contextual, often tacit, and shaped by constraints invisible to designers. Software built around imaginary users, phantoms assembled from assumptions, almost always fails quietly.
Then comes design. Design is not decoration. It is the act of thinking problems through to exhaustion before committing them to code. A good design functions as a build manual, exposing contradictions, edge cases, and dependencies while they are still cheap to fix. Skipping this step does not accelerate delivery; it merely delays decision-making until the most expensive moment.
Technology choices introduce their own uncertainties, particularly in an era where new AI tools appear weekly. Novelty is intoxicating, but every choice constrains future options. Selecting technology is not about fashion; it is about narrowing uncertainty responsibly.
Finally, there is delivery uncertainty. Do you truly have the people, time, and attention required? Are teams focused, or spread thin across competing priorities? Under-resourced projects do not fail loudly. They erode slowly.
AI intensifies uncertainty rather than eliminating it.
Outputs may appear fluent and confident while being subtly wrong. Failures can be difficult to explain, harder still to predict. Operational costs, especially for large language models, can render a viable prototype economically unsustainable at scale. Dependency on external vendors introduces strategic fragility that is often discovered too late.
AI systems, perhaps more than any other technology, reward scepticism.
The people who feared the Ninki Nanka were not naïve. They were responding rationally to uncertainty with the tools they had: story, caution, restraint.
We would do well to recover some of that sensibility.
What strikes me about the Ninki Nanka stories is not their strangeness, but their efficiency.
The Mandinka were not conducting environmental impact assessments. They were not running probabilistic hazard models of swamp ecology. Yet they arrived at a remarkably effective behavioural outcome: people avoided dangerous places without needing to understand precisely why they were dangerous.
This is the first deep parallel.
Folklore compresses uncertainty into something graspable. It turns diffuse, poorly understood risks (disease-ridden swamps, predatory animals, treacherous terrain) into a single agent with a name, a face, and a story. The complexity of the system is collapsed into a symbol powerful enough to coordinate behaviour across generations.
In modern engineering terms, folklore is a lossy abstraction, but a useful one.
Software systems, especially large ones, suffer when we refuse to admit that similar abstractions are necessary. We expect teams to “just be careful,” to “use best practices,” to “be mindful of complexity,” while offering no shared mental model strong enough to actually shape behaviour. In folklore, fear does that work. In software, we often replace it with optimism.
And optimism is a terrible coordination mechanism.
Another overlooked feature of the Ninki Nanka myth is how overdetermined it is.
The creature is not just big. It is fatal to look at. It causes sickness merely by being seen. It inhabits places that are simultaneously fertile and dangerous. Even indirect contact, seeing its body but not its eyes, is enough to bring harm.
This is not narrative excess. It is redundancy.
Folklore designers, whether consciously or not, knew something modern system designers routinely forget: warnings must be stronger than curiosity. If the story had been subtle, it would have failed. If only some encounters were dangerous, people would test the boundary. If the punishment were mild, risk-taking would creep in.
So the story exaggerates. It stacks consequences. It leaves no room for casual experimentation.
Now compare this to how uncertainty is treated in software projects.
Early signals (vague requirements, unclear ownership, immature technology) are routinely dismissed as "normal" or "things we'll sort out later." Teams continue forward precisely because the consequences have not yet been made vivid enough to override momentum. There is no Ninki Nanka in the room, no shared symbol that says this path is dangerous even if we can't yet articulate why.
By the time the danger becomes undeniable, the project is already deep in the swamp.
One of the most revealing aspects of the Ninki Nanka myth is that there is no heroic victory over the creature. No slaying. No conquest. The winning strategy is avoidance.
This is profoundly unmodern.
Modern engineering culture celebrates intervention. We fix things. We debug. We refactor. We recover. The implicit belief is that intelligence and effort can overcome any obstacle given enough time.
Folklore does not share this belief.
Folklore assumes that some systems are too complex, too opaque, or too dangerous to engage directly. The rational response is not mastery but restraint. Do not go there. Do not look. Do not try.
In software and AI, this translates into an uncomfortable lesson: some projects should not start. Some architectures should not be attempted. Some uses of AI should be declined, not because they are impossible, but because the uncertainty envelope is too large relative to our capacity to manage it.
Flamanville 3 is not a story about failure of execution. It is a story about entering a swamp believing one could always find dry ground later.
There is one final, subtler parallel.
Folklore persists where documentation fails.
The Ninki Nanka story survived without design documents, without formal education systems, without version control. It was robust to personnel turnover. It did not depend on a single expert. It embedded itself into culture.
Software projects, by contrast, routinely lose institutional memory within a few years. Teams change. Context evaporates. Decisions made under uncertainty are forgotten, and the same mistakes are repeated by new hands convinced they are seeing the problem afresh.
This is not a tooling failure. It is a storytelling failure.
When uncertainty is not named, not narrated, not ritualised, it disappears from collective memory. Each new team rediscovers the swamp on its own terms.
Folklore solved this by making uncertainty unforgettable.
If we take the folklore parallel seriously, and I think we must, then the lesson is not merely “be careful with uncertainty.”
It is this: uncertainty must be made socially real.
It must be visible, discussable, memorable. It must influence behaviour before evidence is complete. It must occasionally feel exaggerated, even unfair, because that is how it competes with ambition and momentum.
In other words, every serious software or AI organisation needs its own equivalent of the Ninki Nanka: not a superstition, but a shared story about where danger lies, why caution is rational, and when walking away is wisdom rather than weakness.
The people of the lower Gambia did not fear the swamp because they misunderstood it.
They feared it because they understood, long before they could explain, that uncertainty kills quietly.
Chatfield, Christopher. “Model Uncertainty, Data Mining and Statistical Inference.” Journal of the Royal Statistical Society: Series A (Statistics in Society) 158, no. 3 (1995): 419–444.
Dieck, Ronald H. Measurement Uncertainty: Methods and Applications. Research Triangle Park, NC: ISA, 2007.
Gall, John. The Systems Bible: The Beginner’s Guide to Systems Large and Small. 3rd ed. Walker, MN: General Systemantics Press, 2003.
Hüllermeier, Eyke, and Willem Waegeman. “Aleatoric and Epistemic Uncertainty in Machine Learning: An Introduction to Concepts and Methods.” Machine Learning 110, no. 3 (2021): 457–506. https://doi.org/10.1007/s10994-020-05946-3.
“Ninki Nanka.” Wikipedia. Last modified January 2026. Accessed January 23, 2026. https://en.wikipedia.org/wiki/Ninki_Nanka.
“Flamanville Nuclear Power Plant.” Wikipedia. Last modified January 2026. Accessed January 23, 2026. https://en.wikipedia.org/wiki/Flamanville_Nuclear_Power_Plant.