Does Your Workflow Need A Model?

I have been circling this idea in a few recent posts. I am not sure this one brings it all the way in for a landing, but it at least feels like it is on the approach. What sharpened it for me were two recent customer conversations where AI was assumed to be part of the answer. In both cases, a closer look showed that it was not necessary.

That experience points to a recurring pattern for me. Someone describes a workflow they want to improve and, almost immediately, AI becomes the assumed implementation path. On closer inspection, though, many of these workflows are still highly deterministic. The rules may be poorly documented, the process may be hard to explain, and the surrounding conversation may be messy, but the desired behavior is still rule-bound.

The question is not whether a capability fits under the broad label of AI, but whether the task needs probabilistic behavior at runtime.

AI may be useful in discovering, designing, building, testing, or explaining a deterministic solution. It can help translate human intent into requirements, identify edge cases, generate code, or produce documentation. None of that means AI needs to become part of the operating architecture. Once the workflow is known, testable, and repeatable, conventional software may be the better runtime choice.

That is not simply an architectural preference, but also a governance decision. As the industry wrestles with the expansion of AI, its environmental costs, its infrastructure demands, and its operational risks, appropriate use becomes one of the first practical forms of responsible AI. Responsible AI does not begin with choosing a model. It begins with deciding whether the workflow needs one.

Deterministic, Probabilistic, and AI

It is worth pausing briefly on terminology, because the distinction can get muddy. A deterministic system is one in which the same input, processed by the same rules, should produce the same output. That does not make the system simple, but it does mean the behavior is intended to be repeatable. A probabilistic system, by contrast, estimates, classifies, ranks, predicts, or generates based on patterns and likelihoods. Its behavior may be useful precisely because it can operate in situations where the inputs are incomplete, inconsistent, or not fully structured.
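The distinction can be made concrete with a minimal sketch. The function names and fee figures below are invented for illustration: the first function applies fixed rules and always returns the same output for the same input, while the second estimates one of its parameters from a distribution, so repeated calls can differ.

```python
import random

def route_fee_deterministic(distance_km: float) -> float:
    """Deterministic: fixed rules, so the same input always yields the same output."""
    base = 2.50
    per_km = 0.75
    return round(base + per_km * distance_km, 2)

def route_fee_probabilistic(distance_km: float, rng: random.Random) -> float:
    """Probabilistic: the per-km rate is sampled, so repeated calls can differ."""
    base = 2.50
    per_km = rng.gauss(0.75, 0.10)  # an estimated rate with uncertainty around it
    return round(base + per_km * distance_km, 2)

# The deterministic version is trivially testable against expected outputs.
assert route_fee_deterministic(10.0) == route_fee_deterministic(10.0)
```

The difference shows up in testing: the deterministic version can be checked against exact expected values, while the probabilistic version can only be evaluated statistically.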

Large language models fall into that second category. They are probabilistic systems that generate plausible outputs based on patterns learned from data. They do not execute fixed rules in the way conventional software does. That is part of what makes them useful, but it is also part of what makes their placement in a workflow important.

This does not mean all AI or machine learning behaves the same way. Some models are embedded in highly constrained systems, and some AI-enabled workflows can be made quite repeatable through careful design, evaluation, and controls. Nor does probabilistic behavior automatically mean AI. Organizations have long used probability, statistics, simulation, Bayesian reasoning, Monte Carlo methods, stochastic models, and scenario analysis to reason about uncertainty. Depending on the task, those approaches may be more transparent, more constrained, and more fit for purpose than a language model or machine-learning system.

For this discussion, the distinction is not whether something carries the label of AI; it is whether the task needs runtime inference, and if it does, what kind of probabilistic method the work actually requires. Since this post is focused on proposed AI use cases, I will mostly discuss these requirements in terms of AI, especially large language models, but that should not be read to mean AI is the only way to reason under uncertainty.

Deterministic Does Not Mean Simple

One mistake in this discussion would be to treat deterministic as a synonym for easy. Many deterministic workflows are difficult, expensive, and mission-critical. They may require deep domain knowledge, careful implementation, strong testing, and a clear understanding of failure modes. The point is not that deterministic work is simple, but rather that its desired behavior is knowable.

Geospatial work offers plenty of examples. Coordinate transformations, topology validation, routing, tiling, indexing, schema validation, ETL, spatial joins, and standards-based data exchange can all be technically demanding. They can also produce wrong answers when implemented carelessly. But in each case, the goal is not to generate a plausible answer based on learned patterns. The goal is to apply known rules correctly and consistently.

That point needs one caveat. Geospatial foundation models are emerging, and they will expand the parts of geospatial workflows where learned, probabilistic methods are useful. A model may infer features from imagery, detect change, generate embeddings, classify land cover, or help interpret other spatial signals that are not fully captured by explicit rules. Those are real uses of AI in geospatial work, and they should not be waved away by saying that geospatial systems are mostly deterministic.

The boundary still matters, though. Geospatial foundation models may shift where probabilistic inference belongs, but they do not eliminate the need to decide where inference ends and deterministic execution begins. A model may help identify a feature, but the system still needs to locate it, validate it, transform it, join it, publish it, govern it, and explain its lineage. Some problems are hard because the rules are complex, while others are hard because the rules are unclear. AI is usually more valuable in the second category. When the rules are already known, the better path may be to encode them clearly, test them thoroughly, and run them reliably.

Where AI Is Most Useful

AI becomes more interesting where uncertainty remains part of the work. That can take many forms, from unclear user intent to inconsistent source material, from unstructured documents to the need to synthesize context across many sources. In those situations, the system is not simply applying a known rule to a known input. It is helping interpret, infer, prioritize, or explain.

That is where large language models, in particular, can be useful. They can help a user express a need that is not yet fully formed or summarize a set of documents that do not follow the same structure. They can extract candidate entities from messy text, suggest categories for ambiguous cases, or help a person explore an unfamiliar information space. These are not magic capabilities, and they still require evaluation and oversight, but they are closer to the kinds of problems where probabilistic behavior has a purpose.

The point is not that AI is good or bad in the abstract. The point is that its strengths should match the shape of the problem. If ambiguity is present only while people are figuring out what the workflow should be, AI may belong in discovery or design. If ambiguity remains in the operating path, AI may have a role there. Those are different architectural decisions, with different consequences for cost, reliability, governance, and trust.

AI Before Runtime

There is an important middle ground between rejecting AI and embedding it into every production workflow. AI may be useful before the workflow becomes deterministic. It can help during discovery, design, implementation, testing, and explanation, especially when the people involved are still translating human intent into operational rules.

That use can be valuable. AI can perform many tasks, such as summarizing stakeholder input, drafting requirements, generating code, or producing documentation. In those roles, AI can shorten the distance between what users are trying to accomplish and what the system ultimately needs to do. It can help clarify the problem before the solution is encoded.

That does not mean AI needs to remain in the operating architecture. Once the rules are known, testable, and repeatable, the production path may be better served by conventional deterministic software. AI may help build the deterministic system without becoming part of it. Using AI to help construct a workflow is very different from requiring a model call every time the workflow runs.

Runtime AI Has Recurring Costs

There is a cost difference between bounded use and runtime dependency. Using AI during discovery, design, or development may be episodic. Embedding AI into the operating path makes live inference part of the system’s normal behavior. A demo can be forgiven for acting like inference is free. A production system does not get that luxury. Every model-backed decision or generated response can require uncertainty management, output evaluation, and, in many cases, human review. Those are not incidental burdens when the system depends on live inference to operate.

Runtime AI can also complicate reproducibility, explainability, security, privacy, and dependency management. These concerns are not unique to AI in every respect, but model-backed execution can make them harder to contain. A deterministic system can usually be tested against expected outputs and traced through known logic. A runtime AI system needs additional attention to prompts, model behavior, evaluation criteria, monitoring, drift, and failure modes.

None of this means runtime AI is inappropriate by default. There are workflows where uncertainty remains present at the point of execution, and a probabilistic system may be the right tool for that part of the work. But when the task is already known, testable, and repeatable, adding AI can impose recurring operational and governance burdens without adding corresponding value. The fact that AI can perform a task is not evidence that AI should perform the task.

Appropriate Use Avoids Waste

The environmental implications of AI are not incidental to this discussion, even if we are not trying to quantify them here. Large-scale AI systems depend on substantial computing infrastructure, and repeated inference is not free. The cloud is still someone else’s building, full of someone else’s machines, drawing very real power and, in some cases, very real water. Repeated inference consumes energy, uses hardware, can require water for cooling, depends on data center capacity, and adds demand to systems that already have material physical footprints. The exact impact of any individual workflow will vary, but the design principle does not require a precise estimate to be useful.

If a task genuinely needs runtime inference, those physical costs may be justified. A model may be the right tool when the system must work through uncertainty that remains present during operation. In those situations, the cost of inference is part of the cost of solving the actual problem.

The concern is different when AI is added to a workflow that does not need it. If the rules are known, the inputs are structured, and the desired output is repeatable, then recurring model calls become avoidable consumption. They may use more energy, more water, more hardware, and more data center capacity than the problem requires. Without careful placement, AI can turn a solved automation problem into a permanently metered dependency.

That is why appropriate use is one of the most basic sustainability arguments available to practitioners. It does not require rejecting AI or waiting for perfect measurements. It starts with a simpler discipline: do not spend probabilistic compute performing deterministic work. If AI has real environmental, water, and infrastructure costs, then minimizing unnecessary use is not cosmetic. It is part of responsible system design.

The Workflow Is the Unit of Analysis

This is why it is not enough to ask whether something is an AI use case. That phrasing treats the workflow as a single object, when most real workflows contain several different kinds of work. Some parts may involve interpretation or uncertainty. Other parts may involve execution, validation, calculation, transformation, or recordkeeping. Those parts do not need to be handled by the same kind of system.

A mature design decomposes the workflow before choosing tools. The uncertain parts may require one approach. The repeatable parts may require another. A request may begin in natural language, but the actual work may resolve into parameters, rules, queries, transformations, or spatial operations. Once that happens, the responsible architecture may be a handoff from probabilistic interpretation to deterministic execution, followed by validation and human review where needed.
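That handoff pattern can be sketched in a few lines. Everything here is hypothetical: `interpret_request` stands in for the probabilistic step (in practice it might be a model call that turns free-form language into structured parameters), while validation and execution are conventional, deterministic software.

```python
from dataclasses import dataclass

@dataclass
class Query:
    layer: str
    max_distance_km: float

ALLOWED_LAYERS = {"hydrants", "parcels", "roads"}

def interpret_request(text: str) -> Query:
    """Hypothetical probabilistic step: a stand-in for a model call that
    resolves free-form language into structured parameters."""
    layer = "hydrants" if "hydrant" in text.lower() else "parcels"
    return Query(layer=layer, max_distance_km=5.0)

def validate(query: Query) -> Query:
    """Deterministic guardrail: reject anything the known rules do not allow."""
    if query.layer not in ALLOWED_LAYERS:
        raise ValueError(f"unknown layer: {query.layer}")
    if not 0 < query.max_distance_km <= 50:
        raise ValueError("distance out of range")
    return query

def run(text: str) -> str:
    """Probabilistic interpretation hands off to deterministic execution."""
    query = validate(interpret_request(text))
    # From here on, the work is ordinary, testable software.
    return f"SELECT * FROM {query.layer} WHERE dist_km <= {query.max_distance_km}"
```

The model, if one is used at all, touches only the first step. Everything downstream of validation can be tested against expected outputs and run without inference.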

The design question is where probabilistic behavior adds value, not where an AI component can be inserted. In some workflows, AI may have no useful role at all. In others, it may support only the parts where uncertainty remains. The workflow, not the AI label, should be the unit of analysis.

Start With the Kind of Problem

The first question should not be whether AI can do the task. The first question should be what kind of task it is. If the rules are known, the inputs are structured, the desired output is objectively testable, and repeatability is more important than flexibility, the workflow is probably asking for deterministic software.

A better diagnostic starts with where uncertainty enters the process. Does it appear while people are still describing the workflow, or does it remain when the system is operating? Can it be resolved into rules, parameters, schemas, thresholds, or structured inputs? Would a conventional system produce the same answer more reliably once those decisions are made? These questions do more useful work than a capability demonstration.

A model may be able to summarize the document, classify the request, generate the response, or suggest the next step. That tells us something about capability, but it does not settle the design question. The more important question is whether the workflow still needs model-backed uncertainty management after the problem has been clarified.

That shift changes the conversation. What is the work? Where are the rules known? Where are they unclear? What should be deterministic by the time the system is operating? Only after those questions are answered does the AI question become useful.

Appropriate Use as Stewardship

Responsible AI often gets discussed in terms of model selection, evaluation, prompting, transparency, security, and governance. Those are all necessary concerns when AI belongs in the system. But they are not the first concern. The first concern is whether the workflow needs AI at all.

That makes appropriate use a form of stewardship. It asks practitioners to understand the work before choosing the tool, to separate deterministic execution from probabilistic interpretation, and to place AI only where its behavior serves the workflow. Sometimes that will mean no AI. Sometimes it will mean AI during discovery, design, development, testing, or explanation. Sometimes it will mean AI at runtime because ambiguity remains present when the system is operating.

The discipline is in knowing the difference. As AI becomes easier to attach to products, workflows, and services, the temptation will be to treat model access as a default capability. That may be good for demos, but it is not necessarily good system design. It can add avoidable energy use, water use, infrastructure demand, cost, uncertainty, and governance burden to work that could have been handled more directly.

Appropriate use does not make AI smaller than it is. It makes AI more accountable to the problem being solved.

Header image: Hannes Grobe, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons