DDJ on “The Return of the Desktop”

Okay, I swear I’m not on the DDJ payroll, but this article caught my eye immediately. Michael Swaine has been on a roll lately but I think this one just drips with significance for the GIS community.

Over the past 10 years, as everyone has run screaming from the desktop, I’ve been a little mystified as to why it was considered a good thing to reduce a CPU more powerful than everything NASA had in 1969 to a mere vehicle for a browser. The browser-based model reduced our computers to really cool-looking equivalents of a VT220, so it’s nice to see the market starting to regain a little sanity.

Way Ahead for zigGIS

Abe, Paolo and I had a chat session yesterday to coordinate efforts on zigGIS. It was a good session, and we’re all on the same page about getting zigGIS solidified.

The main outcome from yesterday is that we want to make zigGIS a solid experience for PostGIS users first. We have discussed other data sources in the past, but we feel we should nail down PostGIS first and then let it serve as a reference implementation for other data sources such as Oracle Spatial, MS SQL Spatial and the like.

With that said, the most glaring gap in zigGIS’s current capabilities is editing. Meaning, you can’t edit at all yet. Editing is our highest priority in the near term. Over the next few weeks, we will be developing use cases to define an editing workflow. We’ll also prototype ways to integrate zigGIS with the native editing capabilities of ArcMap in order to reduce the amount of custom editing code we need to write.

We’ll also be restructuring the existing code. The current code is rather flat in that individual .cs files contain multiple classes and the directory structure doesn’t reflect the namespaces. Fixing all of that will give us a little more flexibility with version control and managing distributed programming. It’ll also make it easier if…ahem…any other programmers want to join the project.

There are also still some performance issues with the current capabilities we need to iron out, such as the one Bruce pointed out a while ago.

Of course, the catalog objects work is ongoing, but there may not be any movement on that until after we get the code restructured. Paolo is going on vacation for three weeks 😀 but Abe is planning to start the code restructure in the meantime as a way to re-familiarize himself with the code.

As always, the biggest impediment is time. It seems that the need to put food on the table refuses to go away but we should be able to build a little more forward momentum than we’ve seen recently.

zigGIS Logo

zigGIS Catalog Objects

Things have been REALLY slow on zigGIS for a while. All of us (Paolo, Abe, me) have been up to our eyeballs with work, leaving little time to work on the zig. Now that Abe is out on his own, the three of us have been trying to coordinate a chat session to lay out a bit of a way ahead. That has proven somewhat difficult in itself. I think the best way to handle it at this point is for Abe and me to go out to Rome and sit down with Paolo 😉 .

Anyway, I had a little time so I pushed forward on the catalog objects. I’ve got it down to where the layers for a PostGIS database actually show up in the ArcCatalog tree but it’s still a little flaky from there. The screenshot below shows what I’ve got. You’ll notice that the geometry icons don’t show up. Well, the layer doesn’t draw in ArcCatalog either but everything still works fine in ArcMap.

It’s a little closer…

So now that I’ve got the hierarchy fleshed out, I can focus on the display logic. Incidentally, the attributes for the layers do show up so all is not completely lost.

That’s what I’ve got for now. I can’t vouch for a specific schedule for a new release but we’re trying to coordinate that.

New DDJ Column on Concurrency

Anyone who has browsed this blog enough has probably figured out that I am a fan of Doctor Dobb’s Journal. This month’s issue inaugurates a new column by Herb Sutter dealing with concurrency. In my opinion, the confluence of multi-processor/multi-core systems with a greater emphasis on server-based GIS makes concurrency a huge issue for the GIS community.

The most obvious example of where concurrency can be of benefit is ArcGIS Server. The entire ArcObjects library is exposed in such a way that complex geospatial operations can be published as services. Many such operations can benefit from concurrency, especially as they are executed in a multi-user, server-based context.
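
To make that concrete with a toy example (plain .NET threads, deliberately not ArcObjects, since ArcObjects’ COM components aren’t designed for free-threaded use), here’s the general shape of splitting independent per-feature work across cores. The names and the stand-in computation are made up purely for illustration:

[sourcecode language="csharp"]
using System;
using System.Threading;

//toy illustration only: spread an expensive per-feature computation
//across worker threads
class ConcurrencyToy
{
    static void Main()
    {
        double[] featureResults = new double[100000];
        int workerCount = Environment.ProcessorCount;
        Thread[] workers = new Thread[workerCount];

        for (int w = 0; w < workerCount; w++)
        {
            int slice = w; //capture this worker's slice index
            workers[w] = new Thread(delegate()
            {
                //each worker handles an interleaved slice of the features
                for (int i = slice; i < featureResults.Length; i += workerCount)
                {
                    featureResults[i] = Math.Sqrt(i) * Math.PI; //stand-in for real work
                }
            });
            workers[w].Start();
        }

        //wait for all workers to finish
        foreach (Thread worker in workers)
        {
            worker.Join();
        }

        Console.WriteLine("Processed {0} features on {1} threads.",
            featureResults.Length, workerCount);
    }
}
[/sourcecode]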

I am of the opinion that the current 9.x architecture of AGS (with COM under the hood) is not optimally suited for such an approach but it will be interesting to see how ESRI addresses concurrency as 10.0 evolves. I also view the current AGS as a step along a path rather than an intended end state. Of course, there are other technologies out there. I merely hold up AGS as an obvious and visible example of a product that can benefit from being built with concurrency in mind. Of course, it’s up to us as software developers to correctly build our apps as well.

That said, I highly recommend this column. The inaugural installment rightly kicks off from a conceptual standpoint but I suspect we’ll be in the weeds soon enough.

Consuming GeoRSS in ArcMap With InMemoryWorkspaceFactory

This will be my last post for a couple of weeks. I’m heading out to Florida tomorrow to spend time with my family and the Mouse. But before I head out, I thought I’d share a little something I’ve been working on.

I’ve been playing with the InMemoryWorkspaceFactory class in ArcObjects for the last few days. I am looking at using it for a project I will be working on when I get back, so I thought I’d do a little prototyping beforehand.

The fact that it works in memory is very attractive, especially for volatile data. GeoRSS seemed like a natural source to use for prototyping.
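
For anyone who hasn’t used it, the basic pattern for standing up an in-memory workspace is short. Here’s a bare-bones sketch, not the project code — the GeoRSS parsing and feature class creation come after this point, and the workspace name below is arbitrary:

[sourcecode language="csharp"]
using ESRI.ArcGIS.esriSystem;
using ESRI.ArcGIS.Geodatabase;

//bare-bones sketch: create and open an in-memory workspace
public static IWorkspace CreateInMemoryWorkspace()
{
    IWorkspaceFactory factory = new InMemoryWorkspaceFactoryClass();

    //the name ("GeoRssScratch") is arbitrary; the workspace only lives
    //for the life of the process
    IWorkspaceName workspaceName = factory.Create("", "GeoRssScratch", null, 0);

    //IWorkspaceName is just a lightweight name object; Open() returns
    //the actual workspace that feature classes can be created in
    IName name = (IName)workspaceName;
    return (IWorkspace)name.Open();
}
[/sourcecode]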

Bringing It All Together…

These are pretty fun times to be working in GIS. There’s an explosion of new technologies across the whole spectrum and the integration possibilities are seemingly boundless. Back when I got started, ESRI and Intergraph were duking it out and others, like MapInfo, were out on the periphery. GRASS was flying the open-source banner and there were a few good free tools, such as the various ones put out by Sol Katz at the BLM.

Everything quieted down for a few years but there’s been a flowering of new technologies (well-documented elsewhere) over the last couple of years in both the commercial and open-source communities. I am particularly impressed with the pace at which the open-source community has ramped up with tools of strong quality and capability. In addition, the back-and-forth between Google and Microsoft has put better tools in the hands of the average consumer. That doesn’t come without drawbacks, but I see it as a net positive.

What I find interesting about all of this activity is that it demonstrates that the closed/commercial/competitive approach can bear fruit and so can the open-source/free/collaborative approach. Where you fall in the spectrum between the two is totally up to you, but no one can offer up any concrete evidence that one is vastly superior to the other at this point. In addition, it’s becoming increasingly clear that they don’t have to be mutually exclusive.

When you throw in OGC standards as well as de facto standards such as KML, it becomes quite possible to stitch together technologies of various parentage effectively. I offer up the following as a concrete example:

This screenshot depicts a map (no, it’s not pretty but blame that on me) that was assembled using several technologies. The application is obviously ArcMap. Working from the bottom up, the layers are:

  • Counties – US Counties residing in ArcSDE and SQL Server
  • State Boundaries – US States loaded directly from a local install of PostGIS using the zigGIS connector
  • Major Water – A shapefile on the local hard drive
  • Hospitals – A WFS layer being served from my GeoServer instance and loaded into ArcMap with the help of the free CarbonArc Lite extension from the Carbon Project. Behind GeoServer, the data sits in PostGIS.

So we have commercially licensed software (ArcMap, ArcSDE, SQL Server), freeware (CarbonArc Lite) and open-source software (zigGIS, PostGIS, GeoServer, PostgreSQL) working together to make this product. All are good tools that, working together, shore up each other’s shortcomings. This kind of thing demonstrates the “best-of-breed” (hate that term) concept very well.

A few months back, Paolo posted about mixing commercial and open-source tools. I think this will eventually be the prevalent means of doing business.

Rotating a Point Around a Base Point

A while back, I was working on a project that required us to rotate a polygon around a base point and do a spatial query to analyze some underlying demographic data. I was working in ArcObjects and could find no intrinsic way to do what I needed, so I wrote the following routine. As you can imagine, I had to break the polygon up into individual points and rotate each one (there’s a rough sketch of that loop after the main routine below). Despite the fact that I was using ArcObjects point objects and all of the attendant COM interop calls, it worked pretty well (a polygon consisting of ~5,000 points was rotated in less than a second on a less-than-robust workstation).

The math is pretty simple: Assuming that the base point and target point form the two ends of the hypotenuse of a right triangle with one leg of the triangle being a segment of the X axis, you simply:

  1. Calculate the length of the hypotenuse
  2. Calculate the current angle of the hypotenuse
  3. Add the rotation angle to the current angle
  4. Calculate the coordinates of the new end point of the hypotenuse
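
In equation form, with the base point at (xb, yb), the target point at (x, y), and a rotation angle of α (counter-clockwise from east): Δx = x − xb and Δy = y − yb, r = √(Δx² + Δy²), θ = atan2(Δy, Δx), and the rotated point is (xb + r·cos(θ + α), yb + r·sin(θ + α)).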

The code below shows how to do it but there are a couple of notes about it:

  • The coordinates must be in decimal degrees, so you’ll need to unproject any projected coordinates and then re-project the result (there’s a rough sketch of that round trip after this list). This may introduce some distortion. I didn’t notice any in my application, but I’d suggest more rigorous testing if you’ve got tight precision requirements.
  • Rotation follows engineering standards (zero East, counter-clockwise)
  • This code is only mildly based on ArcObjects. It uses the ArcObjects IPoint interface and PointClass, but that’s it. It is trivial to implement with another point object (such as SharpMap’s) or with plain numeric values.
  • The base point is shifted to 0,0 and the same offset is applied to all other points in order to keep the math straightforward.
  • It’s in C#
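
Since the unproject/re-project round trip in the first note is the part that’s easiest to get wrong, here’s a rough sketch of what it looks like in ArcObjects. This isn’t production code: it assumes both input points already carry their projected spatial reference, it projects them in place, and the helper name is just for illustration.

[sourcecode language="csharp"]
using ESRI.ArcGIS.Geometry;

//sketch of the unproject/rotate/re-project round trip
private IPoint rotateProjectedPoint(IPoint basePoint, IPoint sourcePoint, double rotationAngle)
{
    //remember the original (projected) coordinate system
    ISpatialReference original = sourcePoint.SpatialReference;

    //build a WGS84 geographic coordinate system to rotate in
    ISpatialReferenceFactory srFactory = new SpatialReferenceEnvironmentClass();
    IGeographicCoordinateSystem gcs = srFactory.CreateGeographicCoordinateSystem(
        (int)esriSRGeoCSType.esriSRGeoCS_WGS1984);
    ISpatialReference wgs84 = (ISpatialReference)gcs;

    //unproject both points to decimal degrees (note: modifies the inputs in place)
    basePoint.Project(wgs84);
    sourcePoint.Project(wgs84);

    //rotate (engineering convention: zero east, counter-clockwise)
    IPoint rotated = rotatePoint(basePoint, sourcePoint, rotationAngle);

    //re-project the result back to the original coordinate system
    rotated.SpatialReference = wgs84;
    rotated.Project(original);
    return rotated;
}
[/sourcecode]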

So there it is. It’s fairly simple but it’s been useful for me on a couple of occasions since I wrote it.
[sourcecode language="csharp"]
private IPoint rotatePoint(IPoint basePoint, IPoint sourcePoint, double rotationAngle)
{
    double r;
    double theta;
    double offsetX;
    double offsetY;
    double offsetTheta;
    double rotateX;
    double rotateY;
    double rotationRadians;
    IPoint retPoint;
    try
    {
        //shift x and y relative to a 0,0 origin
        offsetX = sourcePoint.X - basePoint.X;
        offsetY = sourcePoint.Y - basePoint.Y;
        //convert to radians (the scaling cancels out when converting back below)
        offsetX = offsetX * (Math.PI / 180);
        offsetY = offsetY * (Math.PI / 180);
        rotationRadians = rotationAngle * (Math.PI / 180);
        //get distance from origin to source point
        r = Math.Sqrt(Math.Pow(offsetX, 2) + Math.Pow(offsetY, 2));
        //get current angle of orientation (Atan2 handles all four quadrants)
        theta = Math.Atan2(offsetY, offsetX);
        //add rotation value to theta to get new angle of orientation
        offsetTheta = theta + rotationRadians;
        //calculate new x coord
        rotateX = r * Math.Cos(offsetTheta);
        //calculate new y coord
        rotateY = r * Math.Sin(offsetTheta);
        //convert new x and y back to decimal degrees
        rotateX = rotateX * (180 / Math.PI);
        rotateY = rotateY * (180 / Math.PI);
        //shift new x and y back relative to the base point
        rotateX = rotateX + basePoint.X;
        rotateY = rotateY + basePoint.Y;
        //return new point
        retPoint = new PointClass();
        retPoint.X = rotateX;
        retPoint.Y = rotateY;
        return retPoint;
    }
    catch
    {
        //if anything goes wrong, fall back to the unrotated source point
        return sourcePoint;
    }
}
[/sourcecode]
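
For context, applying this to a whole polygon is just a loop over its vertices. Here’s a rough sketch of that loop (not the exact project code) using IPointCollection to read each vertex and write the rotated one back in place:

[sourcecode language="csharp"]
//sketch: rotate every vertex of a polygon around the base point
private void rotatePolygon(IPolygon polygon, IPoint basePoint, double rotationAngle)
{
    IPointCollection points = (IPointCollection)polygon;
    for (int i = 0; i < points.PointCount; i++)
    {
        IPoint rotated = rotatePoint(basePoint, points.get_Point(i), rotationAngle);
        points.UpdatePoint(i, rotated);
    }
}
[/sourcecode]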