One of the advantages of writing a blog for nearly twenty years is that you can go back and see how some of the things you wrote about have held up over time. Suffice it to say there are a number of posts that tempt me to hit the delete key. There were times when I seemed to be using this blog more like a Tumblr. Then, of course, there was my “Silverlight Period.” I leave those posts as a cautionary tale for others. Silverlight is dead, but I have no doubt a Silverlight-like thing will arise in the future. (Perhaps it already has with React and Vue?)
I’ve written a number of posts recently that were essentially talking around a concept that kept eluding me. Thanks to a recent snow/ice storm, I finally had the brain space to pull it together.
About ten years ago, I wrote a post called “Post GIS”. The argument was simple: GIS was fading into the overall information landscape, becoming less distinguishable as a distinct entity. The value of what it did, I wrote, was much greater than the value of what it was. Geography was escaping the walled garden.
That’s still true. But the dissolution has gone further than I imagined.
We’re no longer just disaggregating the monolith into open-source libraries and web services. We’re dissolving the code itself. With the advent of AI and vibe coding, we can describe what “correct” looks like (the physics, the regulations, the business rules, the edge cases) and the AI handles the implementation. Just as the hiring manager I wrote about and Google both arrived at map tiles by thinking outside GIS orthodoxy, we can arrive at working software by ignoring the assumption that building systems requires being a programmer. That’s still somewhat hyperbolic, given the current state of tooling, but the rate at which that tooling is improving makes me think that end state isn’t very far off.
To me, “Post GIS” now means the barrier to entry is no longer syntax. It’s domain expertise. We’re moving from building spatial stacks to prompting spatial agents.
Three Waves
I view our progression to this point in terms of three waves, roughly encompassing the 21st Century so far.
The first wave, roughly 2005 to 2015, dissolved traditional GIS monoliths into modular libraries (often open-source, but not always) and web standards. PostGIS gave us spatial SQL without the Oracle overhead. OpenLayers and Leaflet brought interactive maps to any browser. OGC services, for all their verbosity, meant you could swap out components. The walled garden was starting to open.

The second wave, 2015 to 2023, was the stack era. You could assemble your own spatial infrastructure from commodity parts. Python and JavaScript became the lingua franca. Docker containers meant you could ship a working environment, not just code. Cloud services abstracted the hardware. The skill was knowing which pieces to assemble and how to make them talk to each other. You still had to write the glue, but at least you got to choose the pieces. DevOps ruled during this period.
Now we’re in the third wave. The code itself is dissolving into natural language orchestration. Claude Code, GPT-4, and Gemini aren’t just autocomplete for programmers. They interpret intent, generate implementations, and iterate based on feedback. The skill is no longer assembly. It’s articulation.
Each wave lowered a barrier. The first made software affordable. The second made infrastructure composable. The third is making implementation incidental. What remains constant is the need to understand the problem you’re solving.
When Syntax Stops Being the Bottleneck
Sparkgeo recently prototyped a natural language interface for satellite imagery search. Users type “Show me Sentinel-2 images of Berlin from September 2023” and get results. Behind the scenes, the system parses the query, geocodes “Berlin” to a bounding box, converts “September 2023” to ISO 8601 date ranges, identifies the relevant STAC collections, and executes the search. The user never sees a STAC query. They never learn what a bounding box is. They ask a question and get imagery back.
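Behind a prompt like that, the generated request is ordinary STAC tooling. Here is a minimal sketch of the kind of search such a system might assemble, using the pystac-client library; the endpoint, bounding box, and cloud-cover filter are illustrative choices, not Sparkgeo’s implementation:

```python
from pystac_client import Client

# Values an agent might derive from the prompt
# "Show me Sentinel-2 images of Berlin from September 2023"
BERLIN_BBOX = [13.088, 52.338, 13.761, 52.675]  # geocoded "Berlin" (approximate)
DATE_RANGE = "2023-09-01/2023-09-30"            # "September 2023" as an ISO 8601 range

# Earth Search is one public STAC API that serves Sentinel-2; any STAC endpoint works
catalog = Client.open("https://earth-search.aws.element84.com/v1")

search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=BERLIN_BBOX,
    datetime=DATE_RANGE,
    query={"eo:cloud_cover": {"lt": 20}},  # optional quality filter
)

for item in search.items():
    print(item.id, item.properties["datetime"])
```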
This is the pattern everywhere now. The SQL, the API calls, the coordinate transformations are all generated, executed, and verified without manual intervention. The code exists, but it’s not the point. The question is the point. The logical progression from here is that a user doesn’t even ask about satellite imagery. They ask to solve a problem and the system figures out that satellite imagery is needed to do so.
AWS and Foursquare recently introduced geospatial AI agents that let domain experts answer complex spatial questions in minutes instead of months. By using H3 as a universal join key, these agents let insurance analysts combine property records with climate risk projections—no custom data engineering required. The analyst who understands the business question gets the answer. The spatial joins happen invisibly.
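The mechanics of that join are simple enough to sketch. Assuming property records with point coordinates and a risk layer already keyed to H3 cells, the spatial join collapses into an ordinary key join; the columns, resolution, and values below are hypothetical:

```python
import h3          # h3-py v4 API; v3 used geo_to_h3 instead of latlng_to_cell
import pandas as pd

H3_RES = 8  # hypothetical resolution; coarser or finer depending on the analysis

# Property records with point coordinates (columns and values are illustrative)
parcels = pd.DataFrame({
    "parcel_id": ["P-001", "P-002"],
    "lat": [29.7604, 29.7749],
    "lon": [-95.3698, -95.3885],
})

# Climate risk projections already keyed to H3 cells (values are illustrative)
risk = pd.DataFrame({
    "h3_cell": [
        h3.latlng_to_cell(29.7604, -95.3698, H3_RES),
        h3.latlng_to_cell(29.7749, -95.3885, H3_RES),
    ],
    "flood_risk_score": [0.82, 0.41],
})

# Index each parcel to its H3 cell...
parcels["h3_cell"] = [
    h3.latlng_to_cell(lat, lon, H3_RES)
    for lat, lon in zip(parcels["lat"], parcels["lon"])
]

# ...and the "spatial join" becomes an ordinary key join
joined = parcels.merge(risk, on="h3_cell", how="left")
print(joined)
```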
The MIT “State of AI in Business” study put it well: “domain fluency and workflow integration matter more than flashy UX”. When the syntax barrier falls, what remains is the ability to articulate the problem clearly. That requires understanding the domain.
I’ve said before that prompting resembles technical writing more than literary fiction. It still needs to be written well. You have to know what you’re asking for. You have to recognize when the answer is wrong. You have to understand the problem deeply enough to iterate toward a solution. The syntax barrier has fallen. The clarity barrier remains.
Cloud-Native and Ephemeral
The 2015-era assumption (at least by me) was that every serious project needed a standing PostGIS instance. You provisioned a database, tuned it, backed it up, and kept it running. The database was infrastructure, always on, always waiting. That assumption is fading.
GeoParquet and DuckDB have emerged as cloud-native alternatives for data transfer and ad-hoc analysis. GeoParquet gives you columnar storage with embedded spatial metadata. It is portable, versioned, readable by anything that understands Parquet. DuckDB gives you SQL without the server: an in-process analytical database that spins up in milliseconds and disappears when you’re done. Together, they enable a different pattern: data lives in object storage, the database materializes only when needed.
No standing infrastructure means no backup schedules, no connection pooling, no patching. Data in S3 is already replicated, already durable. You version it like code. You query it like a database. But there’s no database to maintain. Just files.
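A minimal sketch of that pattern in Python, assuming publicly readable GeoParquet files in an S3 bucket (the bucket path and columns are hypothetical):

```python
import duckdb

# An in-process database: nothing to provision, nothing to keep running
con = duckdb.connect()
for ext in ("spatial", "httpfs"):
    con.install_extension(ext)
    con.load_extension(ext)

# Query GeoParquet straight out of object storage. The bucket and columns are
# hypothetical; a private bucket would also need S3 credentials configured.
result = con.execute("""
    SELECT county, COUNT(*) AS parcel_count
    FROM read_parquet('s3://example-bucket/parcels/*.parquet')
    GROUP BY county
    ORDER BY parcel_count DESC
""").df()

print(result)
con.close()  # the "database" is gone; the data is still sitting in S3
```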
Here’s what that looks like in practice. A county assessor needs to analyze a decade of parcel transactions stored as GeoParquet files in S3. Using a Claude Code skill, they issue a natural language prompt: “Show me all parcels in the downtown overlay district that changed ownership more than twice since 2015, with total transaction value.”
The skill contains what the AI needs to know: a SKILL.md file with instructions for querying GeoParquet via DuckDB’s spatial extension, including syntax for S3 access, spatial joins, and CRS handling. It may also contain business logic describing how to interpret the data schema at runtime. There’s a Python module that initializes DuckDB with the spatial and httpfs extensions, handles authentication, and executes generated SQL. Reference documentation covers GeoParquet schema conventions and common spatial predicates.
Claude Code reads the skill, generates the appropriate DuckDB SQL (using ST_Within for the overlay district filter), executes it against the remote GeoParquet files, and returns the results. The “database” exists only for the duration of the query. No server was provisioned. No connection strings were configured. The assessor never saw SQL.
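For illustration only, the generated query might look something like this; the file paths, column names, and overlay-district source are hypothetical stand-ins for what the skill’s schema documentation would supply:

```python
import duckdb

con = duckdb.connect()
for ext in ("spatial", "httpfs"):
    con.install_extension(ext)
    con.load_extension(ext)

# SQL of the kind the skill would lead Claude Code to generate for the prompt.
# Depending on how the files were written, the geometry column may already be
# a native GEOMETRY type, in which case ST_GeomFromWKB is unnecessary.
generated_sql = """
    WITH district AS (
        SELECT geom
        FROM ST_Read('overlay_districts.geojson')  -- hypothetical local boundary file
        WHERE name = 'Downtown Overlay'
    )
    SELECT
        p.parcel_id,
        COUNT(t.transaction_id) AS ownership_changes,
        SUM(t.sale_price)       AS total_transaction_value
    FROM read_parquet('s3://example-assessor/transactions/*.parquet') AS t
    JOIN read_parquet('s3://example-assessor/parcels/*.parquet') AS p
      ON t.parcel_id = p.parcel_id
    JOIN district AS d
      ON ST_Within(ST_GeomFromWKB(p.geometry), d.geom)
    WHERE t.sale_date >= DATE '2015-01-01'
    GROUP BY p.parcel_id
    HAVING COUNT(t.transaction_id) > 2
    ORDER BY total_transaction_value DESC
"""

print(con.execute(generated_sql).df())
```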

This is serverless spatial. The compute is ephemeral. The data is durable. The intelligence is in the skill. The barrier to entry is knowing what question to ask.
Metadata as Contract
Throughout my career, metadata was always the afterthought. We knew it mattered. We wrote it into project plans. And then we shipped without it, because there was always something more urgent.
LLMs are changing that calculus. When a human analyst encounters a column named “height_ft,” they make reasonable assumptions. It’s probably height. Probably in feet. Probably measured from the ground. If they’re wrong, they notice eventually—the numbers don’t make sense, the map looks off, a colleague catches it in review. We navigate ambiguity through context, experience, and intuition.
LLMs behave differently. They can infer some things, and will attempt to by default, but they operate better with explicit information. A column named “height_ft” conveys something, but a comment that says “building height at eave level, NAVD88, surveyed 2019” conveys what the AI agent actually needs. Without explicit units, datums, and lineage, agents hallucinate. Not because they’re broken, but because they’re doing exactly what statistical models do when context is missing: they guess. (It’s often a very sophisticated guess, and they can often get the right answer, but the results are not deterministic and detailed metadata reduces risk.)
This reframes metadata from documentation to infrastructure. It’s not a nice-to-have for future analysts. It’s the contract between the data architect and the AI. If the schema isn’t well-described, the AI may not reason accurately or consistently. Column comments in PostGIS, CRS-WKT definitions, lineage notes in a GeoParquet file are all operational requirements now, not administrative overhead.
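As a sketch of what machine-parseable context close to the data can look like, column-level descriptions can travel inside a Parquet file itself; the schema and wording here are illustrative (the GeoParquet spec formalizes geometry metadata at the file level, but the principle is the same):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Field-level metadata rides along inside the file, so any reader (human,
# script, or agent) gets the same context. The schema and values are illustrative.
schema = pa.schema([
    pa.field("building_id", pa.string()),
    pa.field(
        "height_ft",
        pa.float64(),
        metadata={
            "description": "Building height at eave level",
            "units": "feet",
            "vertical_datum": "NAVD88",
            "lineage": "Field survey, 2019",
        },
    ),
    pa.field("geometry", pa.binary(), metadata={"encoding": "WKB", "crs": "EPSG:4326"}),
])

table = pa.table(
    {"building_id": ["B-101"], "height_ft": [24.5], "geometry": [b""]},
    schema=schema,
)
pq.write_table(table, "buildings.parquet")

# Any consumer, including an AI agent, can read the contract back out
print(pq.read_schema("buildings.parquet").field("height_ft").metadata)
```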
The incentive shift is subtle but real. For decades, the cost of poor metadata was diffuse: slower onboarding, occasional errors, frustrated users. The benefits of good metadata were equally diffuse: easier maintenance, better interoperability, fewer surprises. Hard to justify the investment when the payoff was so indirect.
LLMs surface the cost immediately. Ask an agent to transform coordinates without a defined CRS, and it will either fail or guess wrong. Ask it to compute areas without knowing the units, and it will confidently return nonsense. The feedback loop is tight. The cost of missing context shows up in the next query, not the next project.
I wrote about this at length in “Metadata Rising,” but the short version is this: the same habits that help human analysts understand a dataset help models understand it too. Document what you know, close to where the data lives, in formats that machines can parse. The payoff is no longer deferred. It’s compounding.
The Agentic Web
So far I’ve described individual tools like Claude Code querying GeoParquet and natural language interfaces to STAC catalogs. But the deeper shift is in how these tools connect as we move from isolated AI assistants to orchestrated agent networks.

Google’s Geospatial Reasoning framework is one potential example. Built on Gemini, it orchestrates multiple foundation models alongside Earth Engine and BigQuery to answer complex spatial questions through natural language. In their published example, a user asks about post-hurricane building damage. The system interprets the question, identifies relevant satellite imagery, applies damage detection models, joins the results with parcel data, and returns a summary. The user never specifies which APIs to call or which models to invoke. The orchestration is invisible.
CARTO calls this “agentic GIS”: AI agents that bring spatial analysis to non-specialists by handling the complexity internally. The pattern emerging consistently across platforms is to wrap spatial capabilities in natural language interfaces, let agents coordinate the underlying services, and present results without exposing the machinery.
Model Context Protocol (MCP), a bridge between AI agents and existing infrastructure, enables this pattern at scale. As researchers at Penn State have noted, MCP could provide a pathway for connecting AI systems to the existing ecosystem of OGC services, including over 200,000 registered endpoints, 30,000+ Census Bureau variables, and established spatial data infrastructure (SDI). Rather than replacing that SDI, this approach makes it available for exploitation in AI workflows.
So the pattern is not a single AI that knows everything, but a series of specialized agents that can find each other, negotiate capabilities, and coordinate work. Modern GIS may not be a desktop application or a cloud platform, but a protocol layer that makes spatial intelligence accessible wherever it’s needed.
The Hybrid Future
Not everything will flow through third-party APIs. Some spatial work involves data that can’t leave your control (health records, defense applications, proprietary assets, personally identifiable information). The assumption that agentic AI requires sending data to external services is already obsolete.

Models like Llama 3 and GPT-OSS are tool-aware. They support function calling, can connect to MCP services, and enable agent orchestration entirely within infrastructure you control, whether that’s on-premises, in your own cloud VPC, or in an air-gapped environment. The patterns I’ve described (skills, protocol-based discovery, coordinated agents) work without sending data to a third party. You can stand up an MCP server that exposes your internal PostGIS instance, connect it to a model running in your own environment, and get natural language spatial analysis without data crossing trust boundaries.
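A minimal sketch of that last pattern, using the MCP Python SDK’s FastMCP helper and psycopg; the connection string, tool design, and guardrails are hypothetical and deliberately simplified:

```python
# A toy MCP server exposing a read-only query tool over an internal PostGIS
# instance. Connection details are hypothetical; a real deployment would add
# authentication, query validation, and row limits.
import psycopg
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-postgis")

DSN = "host=db.internal dbname=gis user=readonly"  # never leaves your network


@mcp.tool()
def run_spatial_query(sql: str) -> list[dict]:
    """Run a read-only SQL query against the internal PostGIS database."""
    if not sql.strip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed.")
    with psycopg.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(sql)
        columns = [col.name for col in cur.description]
        return [dict(zip(columns, row)) for row in cur.fetchall()]


if __name__ == "__main__":
    # A locally hosted model (Llama 3, GPT-OSS, etc.) connects over MCP;
    # neither queries nor results cross a trust boundary.
    mcp.run()
```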
This matters because geospatial data is often sensitive. It reveals where people live, work, travel, and gather. As AI agents become more capable, the question of who has access to query traffic and results is a primary concern. Running the full stack in infrastructure you control can address that concern.
The capability gap between self-managed deployments and managed AI services is shrinking. For many tasks, such as query translation, code generation, or workflow orchestration, models you deploy yourself are sufficient. The tradeoff is operational: you’re managing infrastructure instead of paying for API calls. But for organizations with strict data governance requirements, that tradeoff is easy to make.
Many organizations may choose a hybrid approach, using managed services for public data and access to frontier models and self-managed deployments for sensitive data and workflows that require full control. The boundary is determined by trust and policy, not by the nature of the infrastructure.
What I Might Be Wrong About
I’ve been writing about technology long enough to know that confident predictions age poorly. As I mentioned above, some of my older posts make me cringe today. Having learned from experience, I’m going to hedge my bets so “10-years-from-now me” won’t be so appalled. Here are some areas where I’m genuinely uncertain.
Domain expertise might commoditize too. I’ve argued that domain knowledge becomes more valuable as syntax barriers fall. But LLMs are accumulating domain knowledge at a remarkable pace. A model trained on every lighting engineering textbook, every building code, every municipal regulation might eventually encode what my friend Ari knows about street lighting design, or enough of it to close the gap. If domain expertise becomes just another layer the AI absorbs, then the advantage shifts again. To what? Taste? Judgment? Relationships? I don’t know.
This is most likely a transitional moment, not an end state. The patterns I’ve described (Claude Code skills, MCP orchestration, GeoParquet on S3) might look quaint in five years. The “prompt era” could be as brief as the “stack era” that preceded it. We might be headed toward AI systems that don’t need prompting at all, that infer intent from context and act autonomously. If so, the skills that matter this year may not matter in five years.
Taste might matter more than expertise. When implementation is cheap, differentiation comes from knowing what to build, not how to build it. The curator becomes more valuable than the craftsman. But I’m not sure that’s entirely good. There’s something lost when deep technical knowledge, the kind that only comes from wrestling with the details, becomes optional. I worry about a world where everyone can generate but few can evaluate.
Institutional friction is real. Everything I’ve described assumes organizations can adapt. Most can’t, at least not quickly. Procurement cycles, security reviews, training budgets, regulatory compliance don’t dissolve because the technology is ready. The gap between what’s possible and what’s deployed may widen before it narrows. I’ve probably undersold how much organizational culture constrains technological adoption.
I offer these not as hedges but as genuine uncertainties. The picture described in this post feels right to me in January of 2026. I may feel differently in December.
Conclusion
I tend to view the 2005 roll-out of map tiles by Google Maps as a precipitating event that kicked off a period of acceleration and innovation in geospatial technology that continues to the present day. Call it web thinking or design thinking or infrastructure thinking, but it immediately changed how we thought about delivering geospatial information to a browser, to users, and to other systems.
Twenty years later, that momentum has accelerated beyond what I could have imagined. The syntax that once separated experts from amateurs is dissolving into natural language. The infrastructure that once required standing servers now materializes on demand. The integrations that once required custom middleware now negotiate themselves through protocol. The barriers keep falling.
What remains is what has always remained: the need to understand the problem you’re solving. To know what “correct” looks like. To recognize when the output is wrong. To ask the right question in the first place.
In 2016, I believed that the value of what GIS does is greater than the value of what it is. In 2026, GIS is becoming something that doesn’t need a name at all. Instead, there is simply spatial intelligence woven into whatever system needs it, summoned by whoever understands the domain well enough to ask.
We have access to an infinite toolset now. The constraint was never the tools. It was knowing what to build.
Header image: Rhododendrites, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons