With my first post of 2016, I’d like to wish you a happy new year. After a bit of a shutdown for the holidays, I am back at work on some project activities that I had been working on in the last quarter of 2015. Specifically, one of our long-standing federal customers has been directed by their customer to migrate their application platform to an open-source stack. The application in question is a mature analysis and visualization application that has been built on a Microsoft (with a little bit of Esri) stack. A migration will be no small effort.

Image by Daniel Rosengren [CC BY 4.0], via Wikimedia Commons

The migration has brought me back in touch with all of the potential issues that must be considered in a migration from one platform to another. (I’ll set aside the proprietary-to-open-source considerations because they are really side issues in this case.) The target platform is a grab-bag of open-source tools of varying provenance. The source platform is built using Microsoft SQL Server, .Net-based APIs, and Silverlight. The GIS/location requirements are mostly handled by Esri tools, but they represent less than 20% of the overall system function.

You’ll never see me shed a tear over the prospect of ditching Silverlight, but the process of migrating a mature UI must nonetheless be well thought out. That alone would be a significant task, but we must also look at migrating the data store (to PostgreSQL? NoSQL?) and the middle-tier logic (to Java? Node? Something else?) as well. Considerations abound, the least problematic of which are technological.

As always, human factors bear the most consideration. When starting with a team well-versed in something like .Net, what is the best target to choose? Java, because it will be most familiar to the team? Node, because coupled with an HTML5/JavaScript front-end, it will reduce the number of languages the team needs to master? Regarding the data store, do we move to another relational platform to ease the transition for a team already familiar with SQL, or do we take the opportunity to re-imagine the data structures in a graph database or some other structure that may map better to the analytical tools? How long can we freeze current capabilities during the transition before users lose interest?
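To make that last data-store question concrete, here is a minimal sketch of the same facts modeled two ways: as normalized relational-style rows joined at query time, and as a graph of nodes and labeled edges walked by traversal. All of the entity names (analysts, reports, datasets) are invented for this illustration; nothing here reflects the actual system's schema.

```python
# Hypothetical illustration: the same facts modeled two ways.
# Names (analysts, reports, datasets) are invented for this sketch.

# Relational-style: normalized rows keyed by id, joined at query time.
analysts = {1: {"name": "Dana"}}
reports = {10: {"title": "Q3 Trends", "author_id": 1}}
citations = [(10, 100)]  # (report_id, dataset_id)
datasets = {100: {"name": "sensor-readings"}}

# "Join": which datasets did analyst 1's reports cite?
cited = [
    datasets[d]["name"]
    for (r, d) in citations
    if reports[r]["author_id"] == 1
]

# Graph-style: entities as nodes, relationships as labeled edges,
# so multi-hop questions become traversals instead of joins.
edges = {
    ("analyst:1", "AUTHORED"): ["report:10"],
    ("report:10", "CITES"): ["dataset:100"],
}

def traverse(node, *labels):
    """Follow a chain of edge labels from a starting node."""
    frontier = [node]
    for label in labels:
        frontier = [m for n in frontier for m in edges.get((n, label), [])]
    return frontier

print(cited)                                       # answer via the "join"
print(traverse("analyst:1", "AUTHORED", "CITES"))  # answer via traversal
```

The trade-off the sketch hints at: the relational form keeps the SQL-trained team on familiar ground, while the graph form makes multi-hop analytical questions ("everything two relationships away from this analyst") a natural traversal rather than a stack of joins.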

In short, I’m living in a case study of exactly the kinds of effects that happen when an organization commits to a technology stack. Staff get trained and develop proficiency, tools get customized, and organizational efficiency develops as everyone becomes comfortable with the tools and how they behave. The prospect of a migration tosses all of that into the air. Without good direction and planning, it can be difficult to know where things will land.

Over the course of my career, I’ve come to understand that it’s best to plan for some agility in order to avoid massive platform migrations as much as possible. Technologies change, or simply go away, with increasing speed. Organizations need to be looking ahead to anticipate such changes, prototype their effects, and fold them in incrementally. It’s not a perfect approach (there are none), but I have come to see that it’s the least invasive.

All of this serves as a reminder that we become closely bound to our tools, whether we are programmers or carpenters. Any migration to a new set of tools needs to be approached holistically, considering people, processes, and technology, and given sufficient time to work through until people can feel productive and efficient again.