Recently, I had occasion to generate an OGC GeoPackage from QGIS and publish it using GeoServer. The use case was fairly straightforward: I had been given data in GML format and needed to publish it. For many valid reasons (such as lack of spatial indexing), GeoServer does not natively support publishing GML data, so I needed to convert it to something GeoServer did support.
QGIS opened and displayed the data easily and, from there, I could export it into any number of formats. (Or I could have used OGR.) The feature attributes had very long names and I didn’t want to lose that richness by exporting to shapefile, which truncates field names to ten characters. I was trying to keep my server-side life simple, so I was hoping to avoid setting up an RDBMS data store for this purpose. It was then that I noticed QGIS supports exporting to GeoPackage, so I decided to give it a go.
For purposes of this post, I am using a shapefile of building footprints of Leonardtown, Maryland. The process is the same for a GML file, however.
As shown below, you initiate the process like any other by right-clicking and choosing “Save As…” in the context menu.
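Part of GeoPackage’s appeal for this use case is that it’s just a single-file SQLite database, so there’s no RDBMS to stand up; the same conversion could also be scripted with OGR (e.g. `ogr2ogr -f GPKG buildings.gpkg buildings.gml`) instead of going through the QGIS dialog. As a rough sketch of what lives inside the file, you can poke at one with nothing but Python’s standard library. The table layout below is a simplified stand-in for the full OGC spec, and the layer names are made up for illustration:

```python
import sqlite3

# A GeoPackage is a SQLite database, so the stdlib sqlite3 module can read
# its metadata. Here we build a tiny in-memory stand-in with a
# gpkg_contents table (simplified from the OGC spec) to show the idea.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE gpkg_contents (
        table_name TEXT PRIMARY KEY,
        data_type  TEXT,
        identifier TEXT,
        srs_id     INTEGER
    )
""")
conn.execute(
    "INSERT INTO gpkg_contents VALUES (?, ?, ?, ?)",
    ("building_footprints", "features", "Leonardtown buildings", 4326),
)

# List the feature layers the package advertises -- much as GeoServer does
# when it reads a GeoPackage data store.
layers = [
    row[0]
    for row in conn.execute(
        "SELECT table_name FROM gpkg_contents WHERE data_type = 'features'"
    )
]
print(layers)  # ['building_footprints']
```

Because it all lives in one file, the export from QGIS produces something you can hand straight to GeoServer as a data store.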
Aside from a day at the Esri Federal GIS Conference, I’ve been laying fairly low from geo industry events for about the past year. There’s no single reason for that; it’s more that a combination of things like work deadlines and family happenings has taken priority over conflicting conferences and events. I’ve generally been watching from afar, finding tweet streams and their attendant embedded links to be particularly effective.
I had been considering heading out to San Diego for the Esri user conference this year. It’s the largest gathering of geospatial people in one place every year. Even if you are not an Esri user and can’t attend the event itself, it’s worth going and being in the vicinity as 15,000 geographers descend on San Diego. Even Mapbox is getting into the game on this.
Back in the dark old days of ArcSDE, when it first started to support PostgreSQL/PostGIS as a back-end data store, I did a series of posts about how to work with it. Of course, working with PostGIS in ArcGIS was a theme of the early days of this blog, through my association with zigGIS. Although it’s been the case for a while, I’m feeling a bit happy today that it’s now as simple as this to work with (vanilla, non-geodatabased) PostGIS in ArcMap. (Post continues below the GIF.)
You might ask “Why not just work in QGIS?” and you would have a valid question. QGIS is a perfectly fine desktop PostGIS client. As a matter of fact, I went almost two years without a functioning copy of ArcMap, using QGIS as my primary desktop tool (which is why I’m exploring the capabilities of ArcGIS 10.4 now). Sometimes, projects dictate what tools you need to use. The data-level interoperability implied by the support shown above has me thinking about hybrid workflows that would allow shops (especially small ones) whose final products need to end up in an Esri stack to still exercise a measure of choice with regard to tools. It may be time to re-tool that old series of posts for the state of GIS tools circa the middle of this decade.
Consulting is enjoyable due to the variety, but it would be fun to help build a platform.
— Bill (@billdollins) May 2, 2016
Earlier this week, I posted the above tweet. To explain the variety I referred to, here is a partial list, in no particular order, of the tools I’ve worked with in the past week.
- TileMill (Yes, I still use it)
- ArcGIS Server
- SOAP (!)
- Windows Communication Foundation
- Microsoft SQL Server
- SQL (Spatial and non-spatial for the above platforms)
- X.509 certificates
Lately, I’ve been working on a project that involved retrofitting authentication via client certificates, similar to CAC/PIV smart card authentication, into an existing set of Windows Communication Foundation (WCF) web services and a desktop (yes, desktop) client application that was designed to interact with them. The first part was pretty easy to figure out; the second part was less so.
The truth is that the code needed for the client application is not onerous. The trick was finding any documentation/examples that pointed the way. If I had ever doubted that desktop applications are second-class citizens (I didn’t), this task confirmed it.
If you’ve accessed a web site that required smart card or certificate authentication (which are really the same thing), the dialog above is probably very familiar to you. With a web application, the browser is the actual client, and it detects that the back-end site or service needs a certificate. The browser then prompts you to provide a certificate and, assuming you do, passes you through to the site. With a desktop application, you need to build all of that interaction in. (In case you’re wondering why all of the certificates above say “DO NOT TRUST,” it’s because I applied a filter to show only Fiddler dummy certificates for the screen shot.)
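In .NET/WCF terms, “building that interaction in” boils down to attaching a client certificate to the outgoing channel yourself. The moving parts are easiest to show in a short sketch; the one below uses Python’s `ssl` module rather than WCF, purely as a language-neutral illustration, and the certificate paths are hypothetical (a real desktop app would prompt the user to pick a certificate from a smart card or certificate store rather than hard-coding files):

```python
import ssl
import urllib.request

# A browser detects the TLS client-certificate request and prompts the
# user; a desktop client has to wire this up itself. That means building
# a TLS context that carries the certificate into the handshake.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Hypothetical paths -- in a real application you would let the user
# choose a certificate (e.g. one exposed by CAC/PIV smart card
# middleware) instead of reading PEM files off disk.
# context.load_cert_chain(certfile="client.pem", keyfile="client.key")

# Requests made through this opener present the certificate during the
# TLS handshake, mirroring what the browser dialog does for web apps.
opener = urllib.request.build_opener(
    urllib.request.HTTPSHandler(context=context)
)
```

The WCF equivalent is conceptually the same: configure the binding for certificate-based transport security and supply the certificate on the client credentials before opening the channel.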
I’ve worked as a consultant for my entire career, and one of the most rewarding aspects of it is the variety of projects you get exposed to. I’ve gotten to meet and work with great people over the years and have also gotten to work with a lot of emerging technology. In that regard, it’s been a great experience.
One of the most challenging aspects of being a consultant, and probably the biggest thing that makes it not a life for everyone, is what I call the “consultant’s dilemma.” It goes like this: A consultant is often brought into an organization to provide a specific set of expertise that does not exist in the organization at a sufficient level to meet a goal or solve a problem. Despite being brought in to provide a form of leadership, the consultant is never the owner of the solution; nor does the consultant have authority to direct execution. In short, a consultant is brought in to provide direction, but must do so from the back seat.