Twenty Years, Part One

In 1993, at the very start of my career, I was a newly minted AML developer working on a data automation project. A good bit of the industry’s energy at the time was focused on digitizing vast amounts of geospatial information that still existed in analog form, including mylar, paper maps, and other physical artifacts, so it could be brought into GIS and made useful. In essence, the work was about moving spatial information from static media into systems where it could actually be analyzed.

Today, geospatial data is far more likely to originate in digital form, streaming from sensors, documents, platforms, and connected devices in near real time. Increasingly, AI-driven workflows help extract, structure, and operationalize that information. The technology bears little resemblance to what we were using in 1993, but the underlying motivation is remarkably familiar. We still invest in all of this complexity for the same basic reason: spatial information has value, and that value increases when it informs analysis.

This blog quietly crossed into its twentieth year recently and will hit the full milestone in December. The last time I comprehensively re-read it from start to finish was in 2017, in preparation for a keynote. A lot has changed since then, in the geospatial industry as a whole and for me personally and professionally. I’ll mark the occasion with a quarterly series this year, examining those twenty years from different perspectives.

Looking back across that span, what stands out is not any single technology or trend but the larger arc of the field itself. Over the past few decades, geospatial has moved from the mechanics of GIS, to institutional adoption, and now toward a world where spatial data increasingly functions as infrastructure. Of course, this blog represents only my personal journey over that time. Perhaps it is only my perception that has remained consistent, but I do think the value of spatial information is compelling enough that we keep pushing it forward: adapting it, integrating it, and constantly improving how we interact with it.

Tools, Practice, and the Early Web

In that earlier phase of the industry, much of the field’s energy was focused on tools, workflows, conferences, and the everyday practice of working with spatial data. That was where many of the central problems still lived. GIS was still experienced as a specialized practice, and much of the work revolved around the mechanics of getting data into usable form, learning the software well enough to do something meaningful with it, and watching emerging approaches like web mapping and open source as they slowly began to reshape the edges of the profession.

Looking back now, what stands out is how much of that period was defined by effort at the level of enablement. Before spatial information could become embedded in larger systems, it first had to become manageable, portable, and useful in day-to-day workflows. A lot of the energy in the field went into that foundational work. The central challenge was not abstraction. It was the practical business of making geospatial technology work well enough to matter.

From Tools to Institutions

As the field matured, the center of gravity began to shift. The interesting questions were no longer only about software, formats, or workflow. They increasingly had to do with how organizations absorbed geospatial capability and what happened once spatial thinking moved beyond a small group of specialists. Geospatial technology was becoming less of a stand-alone craft and more of an institutional function, shaped by budgets, governance, mission needs, and the practical realities of how organizations adopt any technology that matters.

That transition was not always smooth, and it was not always driven from inside the geospatial industry itself. In some cases, geospatial had to be dragged along by larger currents in enterprise and government IT. Desktop GIS had long been central to the field, but as policy and operational trends made thick client software harder to deploy and support, the broader technology world moved toward the web much faster than geospatial did. For a time, geospatial occupied an awkward middle ground, relying on browser plugins and transitional architectures to bridge the gap between established desktop assumptions and the realities of the newer environment.

Something similar happened with innovation. Some of the most important spatial advances of the last two decades did not emerge from the geospatial mainstream at all. Map tiles, slippy maps, and other now-familiar patterns were pushed forward by companies solving larger product and platform problems in which geography happened to be central. Google is the obvious example, but hardly the only one. Uber, Niantic, and others innovated around the spatial dimensions of their own problem sets and then shared those innovations more broadly. That history is worth remembering because it says something important about the field. The value of spatial information has long been clear to outsiders. What has been less consistent is the geospatial industry’s ability to present that value in ways that fit naturally into other verticals.

One of the more persistent habits in the field has been its tendency to set itself apart linguistically by prepending “geo” onto existing concepts, from GEOINT to GeoAI and many others. Some of that may be harmless shorthand, but it also suggests a recurring instinct to frame geospatial as a category apart rather than as a capability that can be absorbed into broader analytic and operational contexts. I do not know that this tendency has materially hindered adoption, but it is hard to imagine that it has helped.

Over time, the mechanics of the tools themselves became somewhat less central than the larger question of how spatial information was used inside systems of work. Questions of organizational memory, process, and adoption started to matter more. In retrospect, that feels like a natural transition, though not an uncomplicated one. Once the foundational work of enablement had advanced far enough, the next challenge was not simply making geospatial technology work. It was making it durable, legible, and useful inside institutions that were not organized around GIS for its own sake, but around decisions, operations, and outcomes.

Spatial Data as Infrastructure

That institutional shift set the stage for the next one, though this transition was not especially clean either. Once spatial capability became durable enough inside organizations, it also became easier for it to be absorbed into the larger systems those organizations depend on. In many cases, that is where geospatial sits now. It is no longer always encountered as a distinct discipline with its own visible boundaries. More often, it appears as a capability inside logistics platforms, financial models, risk systems, sensor networks, and analytic workflows that may not even describe themselves as geospatial.

That change matters, but not simply because it reflects some tidy story of maturity within the geospatial field itself. It also reflects a larger reality. Spatial information is too useful to remain confined within a narrow vertical, and the wider technology world has repeatedly found ways to absorb, operationalize, and extend it, sometimes faster than the geospatial mainstream has managed to do on its own. Infrastructure is often most important when it becomes least visible. We still see the maps, the models, and the interfaces, but more and more of the value comes from the way spatial information is structured, connected, and made available to systems that use it to support decisions. GDAL is a good example. It is embedded in products as varied as ArcGIS, Autodesk, John Deere, Unreal, and Blender. Most users of those systems are not thinking about geospatial infrastructure as such, but they are benefiting from it all the same.

[Image: Wikideas1, CC0, via Wikimedia Commons]

AI is accelerating that trend, not by replacing the underlying logic of geospatial work, but by increasing the speed and scale at which spatial data can be extracted, interpreted, and incorporated into operational workflows. If there is a clear arc here, it is not one of smooth self-directed progress. It is that spatial information kept proving its value, and the systems around it kept evolving until they were capable of absorbing that value more directly. First we worked to make spatial information usable, then to make it durable inside institutions, and now to make it available as part of the infrastructure of analysis itself.

The Throughline

For all the changes in tooling, architecture, delivery models, and industry language, the underlying problem has remained remarkably consistent. We observe the physical world, try to capture something meaningful about it, structure that information so it can be analyzed, and then attempt to use the resulting insight to support some decision or action. That basic sequence has survived every technical transition the field has gone through. The software changed. The interfaces changed. The scale changed. The speed changed. The problem did not.

In that sense, the real throughline is not GIS as a product category or even geospatial as a vertical. It is the continuing effort to turn observations of the world into structured information that systems can use. Sometimes that work has been framed as mapping. Sometimes it has been framed as analysis, intelligence, operations, logistics, risk, or automation. The labels matter less than the function. Spatial information keeps becoming more valuable when it can be integrated with other forms of data, connected to a workflow, and made legible to the people and systems responsible for acting on it.

That is also why the field’s persistent myopia matters. The challenge has rarely been proving that spatial information has value. That value has been obvious for a long time. The harder problem has been packaging, presenting, and operationalizing it in ways that fit how other domains actually work. Even so, the underlying pattern has held. We keep finding new ways to capture the world, structure what we find, and push that information closer to the point of decision. The tools and institutions around that work have changed repeatedly. The logic underneath it has been surprisingly stable.

Looking Ahead

Twenty years is long enough to watch several generations of tools, platforms, and assumptions rise, spread, and recede. It is also long enough to see that the core value proposition of spatial information has remained more stable than the technologies built around it. We still want to understand where things are, how they relate to one another, what is changing, and what those changes might mean. The means have changed dramatically. The demand behind them has not.

If anything, the next phase is likely to make geospatial both more powerful and less conspicuous. Spatial capability will continue to disappear into larger systems, even as its importance increases inside them. That will create opportunities, but it will also keep posing the same old challenge in newer forms. It is one thing to capture the world. It is another to make what we capture useful, legible, and actionable in the contexts where decisions are actually made. That problem was present at the start of my career; it was present when I began writing here more than a decade later; and it is still with us now. I do not expect that to change any time soon.