The calendar is inching up on the nine-year anniversary of this blog and it’s starting to feel like it’s been that long since I’ve actually written anything. It’s been an interesting year and the last couple of months have been no exception. It’s probably a bit early for a year-end recap but I feel the need to clear my mind so I can focus on what comes next.
I started the year splitting my time between two projects. One was implementing a geospatial data publication workflow for a US federal civilian agency. I was part of a large team, and my role was to work out the ingest, registration, and publication of all data types. That project got me elbow-deep in Node, PostGIS, and GeoServer, and also gave me some exposure to the Voyager search API. I found the whole experience pretty exciting, as we had a really strong implementation team. As a result, I learned a lot and, hopefully, was able to teach a few things along the way. It was the kind of experience you hope every project can be. My involvement wound down toward the middle of the year.
Last week, I attended the JS.GEO event in Philadelphia. In this post, I offer a brief recap of what I saw. It is brief for two reasons. First, the event has already been ably covered in detail by others, and their posts handle the blow-by-blow well; I went on family-related travel immediately afterward and could not sit down to collect my thoughts until the latter part of this week. Second, due to that same travel, I had to leave the event shortly after lunch.
The one-day, no-fluff model of JS.GEO is one that should be emulated more often. The small time commitment makes it easy to fit into a schedule, and the low cost is accessible to a wide range of budgets (student, local government, etc.). The tight schedule was a positive for me: there was a lot of good technical content without the marketing fluff that comes with other, larger industry events. The fact that JS.GEO is vendor- and philosophy-neutral is refreshing. While open-source tools figured heavily in the discussions, it is not specifically an open-source (or closed-source, for that matter) event. Those kinds of outlooks were checked at the door, and the pace of the content didn't really allow them to surface. As a result, the audience was able to focus on the merits of the solutions and approaches being presented. Our industry could use more of that.
This kind of rapid change is bound to shake things up a bit, which brings me to a technology that did not exist at the first JS.GEO but was ubiquitous at this year’s event: Turf. Usable either on the server (Node) or in the browser, Turf provides advanced spatial analysis capability, with the ability to do so in the browser being the most appealing to me. Turf was mentioned so often in the presentations I saw that I began to wonder if I was at a landscaping convention. It’s been on my to-do list for a while, but I finally cracked it open once I got home and plan to rework some previous applications to use Turf.
My brief stay at JS.GEO was informative and motivating. Thanks to Chris, Brian, all of the presenters, and all of the other sponsors for making it such a worthwhile event. I am looking forward to next year.
I’ve been working with a mix of technologies lately that includes Node and GeoServer. I’ve recently begun integrating the two by using Node to manipulate GeoServer’s configuration through the REST API it provides for that purpose. One task I’ve been working on automating is the registration of vector layers stored in PostGIS with GeoServer to make them available via WMS, WFS, and the various other services provided by GeoServer.
Over the past few weeks, I’ve had the opportunity to get back in touch with GeoServer. It used to figure more prominently in my toolbox but I got away from it because it simply didn’t factor into most of my project work. Time being a limited resource, it had to go on a shelf.
I’m working with GeoServer 2.6.1 this time around. I always found it easy to set up, but I think the initial installation borders on trivial now. I was setting it up on an Ubuntu EC2 instance, so the entire process was conducted from the command line. From start to finish, it took me about ten minutes, half of which was Tomcat configuration.
We produce a lot of tiles for various customers at Zekiah. Tiling is as much art as science, and sometimes things go wrong, so we have a range of utilities that we use to perform various kinds of QA. Because the caches can be large, we usually want to perform a visual QA on the static tiles before pushing them up to wherever they are going to live full-time.