
Can AI Be Both Probabilistic and Deterministic? Why This Tension Matters More Than We Think

  • Writer: Tolga Gemicioglu
  • Mar 23
  • 3 min read

One of the simplest but most overlooked truths about AI is this: Ask the same question twice, and you may get different answers. That’s not a bug. It’s the nature of probabilistic systems.


AI models don’t “know” things in a deterministic sense—they generate outputs based on likelihood. In simple terms, they are extremely advanced guess engines. This is precisely what makes them powerful: they can handle vast, messy, unstructured inputs—effectively the entire internet—and still produce useful outputs. But this is also where things start to break.




The Clash: Probabilistic AI vs Deterministic Systems

Most of our digital world is deterministic.

Click “add to basket” on Amazon → the item appears.

2 + 2 = 4 → always.


These systems are built on predictability, traceability, and control. Every input has a defined output. Every action can be logged, explained, debugged.

Now we’re layering probabilistic AI on top of this. And that creates tension.



Where It Breaks in Practice

A simple example comes from a finance AI agent in the richify.ai app, where I’m supporting Çağrı in developing an agent called Frontier Analyst. We asked the model to calculate a specific KPI: something non-trivial but clearly definable.


What happened?

  • Sometimes it calculated correctly

  • Sometimes it gave different numbers

  • Sometimes it avoided the calculation entirely and explained instead

  • Occasionally, it used proxy assumptions instead of real data


Why?

Because the model is optimising for a response, not necessarily the exact same response every time. It may choose different reasoning paths depending on context, efficiency, or internal probability weighting. From a product perspective, this is uncomfortable.
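The mechanism can be sketched with a toy next-token step. The candidate tokens and scores below are invented for illustration (real models rank tens of thousands of tokens), but the contrast is the same: greedy decoding is deterministic, while sampling from the distribution is not.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores for one generation step.
candidates = ["4", "four", "Let me explain...", "roughly 4"]
probs = softmax([3.0, 1.5, 1.0, 0.5])

def greedy(tokens, probs):
    """Deterministic decoding: always take the most likely token."""
    return tokens[probs.index(max(probs))]

def sample(tokens, probs, rng):
    """Probabilistic decoding: draw a token according to the distribution."""
    return rng.choices(tokens, weights=probs, k=1)[0]

print(greedy(candidates, probs))                   # always "4"
print(sample(candidates, probs, random.Random()))  # usually "4", but not always
```

Run the sampled version twice and you can get two different answers from identical inputs, which is exactly the behaviour we saw at the product level.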



The First Fix: Inject Determinism

So we introduced a deterministic layer:

  • Built a code-based calculator

  • Forced the AI to pass inputs into that system

  • Let the system return the output

Result: Consistency improved significantly—but still not 100%.
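As a minimal sketch of that deterministic layer (the actual Frontier Analyst KPI and code are not public, so `gross_margin` and the tool-call shape below are hypothetical stand-ins): the model only supplies arguments, and a plain function does the arithmetic.

```python
def gross_margin(revenue: float, cogs: float) -> float:
    """Deterministic KPI calculator: same inputs, same output, every run.
    gross_margin is an illustrative stand-in, not the real Frontier Analyst KPI."""
    if revenue <= 0:
        raise ValueError("revenue must be positive")
    return round((revenue - cogs) / revenue, 4)

# The model no longer does arithmetic; it emits a structured tool call
# (shape is illustrative) and the deterministic layer executes it.
TOOLS = {"gross_margin": gross_margin}

tool_call = {"name": "gross_margin", "args": {"revenue": 1_000_000, "cogs": 620_000}}
result = TOOLS[tool_call["name"]](**tool_call["args"])
print(result)  # 0.38
```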


Why? Because the probabilistic layer still sits on top:

  • It may send slightly incorrect inputs

  • It may reframe the problem

  • It may choose not to call the function at all

What this means is that deterministic systems don’t fail—interfaces between probabilistic and deterministic systems do.
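One way to harden that interface is to validate the model-supplied arguments deterministically before executing anything. This is a minimal sketch under assumed field names, not the richify.ai implementation:

```python
def validate_call(tool_call, schema):
    """Guard the probabilistic-to-deterministic boundary: reject malformed
    model-supplied arguments before the calculator ever runs."""
    args = tool_call.get("args", {})
    errors = []
    for field, ftype in schema.items():
        if field not in args:
            errors.append(f"missing field: {field}")
        elif not isinstance(args[field], ftype):
            errors.append(f"wrong type for field: {field}")
    errors += [f"unexpected field: {f}" for f in sorted(set(args) - set(schema))]
    return errors

# Illustrative schema for the hypothetical gross_margin tool.
schema = {"revenue": (int, float), "cogs": (int, float)}
good = {"name": "gross_margin", "args": {"revenue": 1e6, "cogs": 6.2e5}}
bad = {"name": "gross_margin", "args": {"revenue": "1M"}}  # model sent a string, dropped cogs

print(validate_call(good, schema))  # []
print(validate_call(bad, schema))
```

Rejected calls can then be bounced back to the model with the error list, rather than letting a slightly wrong input flow silently into a correct calculator.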



Why This Matters More Than It Seems

This is not just a technical nuance. It’s a structural shift.


For decades, software has been:

  • Traceable

  • Explainable

  • Debuggable

You can follow every step. You can identify exactly where something went wrong.


With AI:

  • Outputs are not always reproducible

  • Reasoning is not always transparent

  • Boundaries are harder to define


This has real implications:

  • Debugging becomes harder

  • Error handling becomes probabilistic

  • Logging and accountability become more complex

  • Reputational risk increases


Thinking about this, I was reminded of an interesting parallel in physics. Albert Einstein’s deterministic view of the universe, expressed in relativity, has long coexisted, somewhat uncomfortably, with the probabilistic nature of quantum mechanics. Despite decades of work to reconcile the two, that tension is still not fully resolved.


In a much more practical and immediate way, we’re now facing a similar duality in software systems. It’s not surprising that many organisations feel friction in adopting AI—not because they don’t see the value, but because this breaks their operating assumptions.



So Why Use AI at All?

Because the upside is undeniable.

  • Ability to process unstructured data at scale

  • Flexibility across use cases

  • Quality of output in complex domains

In many cases, AI doesn’t just improve systems—it redefines what’s possible.



Where This Is Going

We’re already seeing patterns that try to bridge the gap:

  • Agentic systems with execution logs → making actions traceable

  • AI + code hybrids → AI decides, deterministic systems execute

  • Agent orchestration → probabilistic “brains” managing deterministic tools

  • Prompt engineering & guardrails → constraining variability

  • Structured workflows → limiting when and how AI can act
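The guardrail and structured-workflow patterns can be combined in a simple loop: retry a bounded number of times until the output passes a deterministic check, then hand off to deterministic execution. The `flaky_model` function below is a stand-in simulating the failure modes described earlier, not a real model call.

```python
import random

def flaky_model(prompt, rng):
    """Stand-in for an LLM: usually returns a well-formed tool call,
    sometimes prose instead (a simulated failure mode)."""
    if rng.random() < 0.3:
        return "Sure! The KPI is roughly forty percent."
    return {"name": "gross_margin", "args": {"revenue": 1e6, "cogs": 6.2e5}}

def call_with_guardrail(prompt, rng, max_retries=3):
    """Structured workflow: retry until the output passes a deterministic
    shape check, then let the deterministic layer take over."""
    for _ in range(max_retries):
        out = flaky_model(prompt, rng)
        if isinstance(out, dict) and "name" in out and "args" in out:
            return out  # valid tool call: hand off to deterministic execution
    raise RuntimeError("model never produced a valid tool call")

print(call_with_guardrail("Compute gross margin", random.Random(42)))
```

The guardrail does not make the model deterministic; it bounds the variability and turns a silent inconsistency into an explicit, loggable failure.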


In essence, we are not replacing deterministic systems. We are wrapping them with probabilistic intelligence.


This duality—probabilistic vs deterministic—is one of the core design challenges not only for AI-native products, but also for integrating AI into existing systems.

It’s also one of the less explicitly discussed reasons behind slow enterprise adoption. Not because AI is weak. But because it doesn’t behave like the systems organisations are built on.


How this balance will ultimately be solved—or whether there will be a clear “winning” approach at all—is still playing out. For now, we’re watching this evolve in real time.


How are you thinking about this balance? Where do you draw the line between probabilistic flexibility and deterministic control?


 
 
 
