Okay, I swear I’m not on the DDJ payroll, but this article caught my eye immediately. Michael Swaine has been on a roll lately, but I think this one just drips with significance for the GIS community.
Over the past 10 years, as everyone has run screaming from the desktop, I’ve been a little mystified as to why it was considered a good thing to reduce a CPU more powerful than everything NASA had in 1969 to a mere vehicle for a browser. The browser-based model reduced our computers to really cool-looking equivalents of a VT220, so it’s nice to see that the market is starting to gain back a little sanity.
I will readily admit that the browser model has its advantages. Application deployment is a snap compared to what it takes to keep desktops in sync. Anyone who’s had to deal with NMCI will vouch for that. In addition, there’s the matter of targeting the desktop OS. That can be a pain for a desktop app (Windows, UNIX, or Linux? Which flavor of Linux? 32-bit or 64-bit? Ugh.), and that hasn’t changed much over the years. I remember running ArcView under Win32s on Windows 3.11, as well as testing InstallShield builds against WinNT 4, Win2K, and Win9x for MapObjects apps. So, yeah, the web app model is definitely attractive. However, the trend has also led to the need for really big bandwidth and multi-socket/multi-core servers, and to a loss of control at the desktop for the user. If the developer of the server app/service didn’t think of it and the sysadmin of the server doesn’t want you to have it, you’re kind of outta luck. Also, Moore’s Law has been giving us faster, better CPUs, but we’ve been asking them to do less and less.
The article gives a few examples of products that make use of local resources instead of merely relying on the server for everything; things like Dojo, Gears, and Silverlight are discussed. We’re already seeing some of this trend in our market with the advent of the various virtual globes (Google Earth, NASA World Wind, ArcGIS Explorer, etc.). These apps tap into very data-rich servers or services but use local resources for tasks such as tile caching. I think GIS is an ideal place to push the boundaries of this model, given the resource-intensive nature of some geospatial processes.
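Just to make the tile-caching point concrete, here’s a minimal sketch of the pattern. The endpoint URL and cache layout are invented for illustration; none of the globes publishes its scheme this way, but the cache-on-first-fetch idea is the same:

```python
import os
import urllib.request

# Hypothetical tile endpoint and local cache directory -- illustrative only.
TILE_URL = "http://tiles.example.com/{z}/{x}/{y}.png"
CACHE_DIR = os.path.expanduser("~/.tilecache")

def get_tile(z, x, y):
    """Return tile bytes, touching the network only on a cache miss."""
    path = os.path.join(CACHE_DIR, str(z), str(x), f"{y}.png")
    if os.path.exists(path):
        with open(path, "rb") as f:   # cheap local hit, no server involved
            return f.read()
    data = urllib.request.urlopen(TILE_URL.format(z=z, x=x, y=y)).read()
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:       # cache it so the next pan/zoom is free
        f.write(data)
    return data
```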
This is an area where, with a little work, the ESRI product line could shine. With ArcGIS Desktop, Engine, and Server, the same objects can potentially reside on the server as well as the desktop. It would be interesting to see these objects communicate in such a way as to distribute processing load between themselves. Of course, any of the technologies mentioned in the article could serve as a basis for doing something similar with non-ESRI technologies, or for users with only a browser. Such an approach would be neither easy nor trivial, but it would certainly be worthwhile.
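Very loosely, the kind of dispatch I’m imagining might look like the toy sketch below. Every name and the size cutoff here are invented to illustrate the idea; this is not any actual ESRI interface:

```python
# Toy hybrid dispatcher: the same operation exists on both sides, and the
# client picks local or server execution based on payload size. All names
# and the 10 MB threshold are made up for illustration.

LOCAL_LIMIT_BYTES = 10 * 1024 * 1024  # arbitrary cutoff for this sketch

def buffer_local(features, distance):
    ...  # stub: run against the desktop's own geometry engine

def buffer_on_server(features, distance):
    ...  # stub: ship the job to the server-side objects

def buffer(features, distance, payload_size):
    """Spend the idle desktop CPU on small jobs; send big ones to the server."""
    if payload_size <= LOCAL_LIMIT_BYTES:
        return buffer_local(features, distance)
    return buffer_on_server(features, distance)
```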
Bill
Interesting comments. I am usually on the opposite side of the fence, in that I think most web applications use far less client-side CPU power than is available. The biggest example of this is ArcIMS. You make a request with your powerful computer and wait, wait, wait for the server to do a massive power crunch to return… a GIF file. I am using MapServer to send raster images only and PostGIS to send vector data in SVG format, letting the client browser render the map. This is a significant reduction in bandwidth and server load, and it scales far better than ArcIMS. A production GIS version is at:
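For the record, the SVG side of that setup boils down to something like the sketch below. The connection string and the floodplains table are placeholders, but ST_AsSVG is the actual PostGIS function that emits SVG path data for the browser to render:

```python
import psycopg2

# Placeholder connection and table names; ST_AsSVG is real PostGIS.
conn = psycopg2.connect("dbname=gis user=web")
cur = conn.cursor()
cur.execute("SELECT id, ST_AsSVG(geom) FROM floodplains")

# Wrap each geometry's path data in an SVG <path>; the client does the drawing.
paths = "".join(f'<path id="fp{fid}" d="{d}"/>' for fid, d in cur.fetchall())
svg = f'<svg xmlns="http://www.w3.org/2000/svg">{paths}</svg>'
```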
The same PostGIS database and data can also serve the floodplain data in KML format for Google Earth. GE is a great desktop app, and exactly the kind of thing you are describing above.
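The KML output is the same pattern with a different serializer. Again, the table name is a placeholder, while ST_AsKML is the actual PostGIS function:

```python
import psycopg2

# Same placeholder table; ST_AsKML is real PostGIS.
conn = psycopg2.connect("dbname=gis user=web")
cur = conn.cursor()
cur.execute("SELECT name, ST_AsKML(geom) FROM floodplains")

# Each row becomes a Placemark; Google Earth handles the rendering.
placemarks = "".join(
    f"<Placemark><name>{name}</name>{kml}</Placemark>"
    for name, kml in cur.fetchall()
)
doc = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2">'
       f"<Document>{placemarks}</Document></kml>")
```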
The issue I have with ArcServer is the obscene cost and maintenance fees charged by ESRI for software products that, IMO, get worse in quality each year. We are a fairly small company, but we are paying almost 10K per year for maintenance of our ArcGIS licenses. I have lost track of the tech support calls that ended with “Yep, that’s a bug.” If I have to do a quick edit on a shapefile, I still fire up ArcView 3.3 because it just works better.
The bottom line to me is a balance of technologies that works best for the client. Whether it is browser-based for large web distribution or desktop-based for things like GE, just use what works.
Bruce,
All excellent points. I wholly agree with your ArcIMS example, and with your ArcServer concerns as well. One of my analysts just did a two-month go-around with tech support only to end up with “yep, that’s a bug” over something that worked in 9.1 but broke in 9.2.
I merely raised ArcGIS to illustrate what the technology could do if taken in the right direction, but the licensing model definitely gets in the way of whatever promise the technology has.
Ultimately, yeah, let my CPU do some of the work. That’s what it’s there for.