A while back, I wrote a post about using Python to generate triggered notifications in PostGIS. The approach described in that post is somewhat fraught with peril and I’ve learned a little bit more since then, so I have a different approach, using PostgreSQL NOTIFY and LISTEN, which I’ll describe here.
The steps I followed are:
Create a spatial table:
CREATE TABLE public.simple_table
(
  id serial NOT NULL,
  message text,
  shape geometry(Point,4326),
  CONSTRAINT pk_notify PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.simple_table
  OWNER TO postgres;
Create a trigger function. This is where the formatting gets done. It’s best to keep the payload small (PostgreSQL caps a NOTIFY payload at 8000 bytes by default), so this function creates a GeoJSON object containing the geometry, record ID, table name, and an indication of the action. To keep the example simple, this one is hard-coded to handle inserts. Notice that the channel is named ‘actions’ in the trigger function. This is important for consuming the payload later.
CREATE OR REPLACE FUNCTION public.sample_notify()
  RETURNS trigger AS
$BODY$
BEGIN
  PERFORM pg_notify('actions',
    '{"type": "Feature", "geometry": ' || st_asgeojson(NEW.shape) ||
    ',"properties": {"message": "insert",' ||
    '"schema": "' || TG_TABLE_SCHEMA || '",' ||
    '"table": "' || TG_TABLE_NAME || '",' ||
    '"id": "' || NEW.id || '"' ||
    '}}'
  );
  RETURN NEW;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION public.sample_notify()
  OWNER TO postgres;
Add an actual trigger to the table. As mentioned above, this example only handles inserts, but updates could be handled as well.
CREATE TRIGGER sample_table_trigger
  AFTER INSERT
  ON public.simple_table
  FOR EACH ROW
  EXECUTE PROCEDURE public.sample_notify();
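As an untested sketch of how updates and deletes might be folded in, the trigger function can branch on TG_OP, which plpgsql sets to ‘INSERT’, ‘UPDATE’, or ‘DELETE’. Note that for deletes only the OLD row is populated, so the function below picks the appropriate record first:

```sql
-- Hypothetical variant of sample_notify(): report the actual operation
-- instead of the hard-coded 'insert', and handle all three events.
CREATE OR REPLACE FUNCTION public.sample_notify()
  RETURNS trigger AS
$BODY$
DECLARE
  rec record;
BEGIN
  -- DELETE triggers only have OLD; INSERT/UPDATE use NEW.
  IF (TG_OP = 'DELETE') THEN
    rec := OLD;
  ELSE
    rec := NEW;
  END IF;
  PERFORM pg_notify('actions',
    '{"type": "Feature", "geometry": ' || st_asgeojson(rec.shape) ||
    ',"properties": {"message": "' || lower(TG_OP) || '",' ||
    '"schema": "' || TG_TABLE_SCHEMA || '",' ||
    '"table": "' || TG_TABLE_NAME || '",' ||
    '"id": "' || rec.id || '"' ||
    '}}'
  );
  RETURN rec;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER sample_table_trigger
  AFTER INSERT OR UPDATE OR DELETE
  ON public.simple_table
  FOR EACH ROW
  EXECUTE PROCEDURE public.sample_notify();
```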
Next, build some sort of consumer to listen for the notifications. In this case, I’ll use a very simple Node app that uses ‘pg-pubsub’ to handle the interactions.
var PGPubsub = require('pg-pubsub');

var pubsubInstance = new PGPubsub('postgres://postgres:user@localhost:5432/database');

pubsubInstance.addChannel('actions', function (channelPayload) {
  console.log(channelPayload);
  console.log('---------------------------------');
  console.log(JSON.stringify(channelPayload));
});
Notice that the ‘addChannel’ call listens to the ‘actions’ channel defined in the trigger function above. If the message payload is a JSON string, pg-pubsub automatically parses it into a JavaScript object. The code above simply dumps the data to the console (both the object and its stringified serialization). In a real application, this is where something more intelligent, such as adding the object to a message queue, would happen.
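As a sketch of that “something more intelligent,” the dispatcher below (the function and handler names are hypothetical, not part of pg-pubsub) validates the payload and routes it by table name; the in-memory array stands in for a real message queue client:

```javascript
// Hypothetical dispatcher: pg-pubsub has already parsed the JSON payload
// into an object, so the handler can inspect its properties directly.
function routeNotification(payload, handlers) {
  // Ignore anything that isn't the GeoJSON Feature our trigger emits.
  if (!payload || payload.type !== 'Feature' || !payload.properties) {
    return false;
  }
  var handler = handlers[payload.properties.table];
  if (!handler) {
    return false;
  }
  handler(payload);
  return true;
}

// Example wiring: push simple_table features onto an in-memory queue
// (a stand-in for a real message queue producer).
var queue = [];
var handlers = {
  simple_table: function (feature) {
    queue.push(feature);
  }
};

routeNotification({
  type: 'Feature',
  geometry: { type: 'Point', coordinates: [-76.6955, 38.3754] },
  properties: { message: 'insert', schema: 'public', table: 'simple_table', id: '12' }
}, handlers);
```

Hooked into the listener above, this would just be `pubsubInstance.addChannel('actions', function (p) { routeNotification(p, handlers); });`.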
At this point, just run the node application and it will wait for messages. Issue a simple insert in psql, like so:
INSERT INTO simple_table(message, shape) VALUES('test message', ST_GeomFromText('POINT(-76.6955 38.3754)', 4326));
The resulting console output looks like this:
{ type: 'Feature',
  geometry: { type: 'Point', coordinates: [ -76.6955, 38.3754 ] },
  properties:
   { message: 'insert',
     schema: 'public',
     table: 'simple_table',
     id: '12' } }
---------------------------------
{"type":"Feature","geometry":{"type":"Point","coordinates":[-76.6955,38.3754]},"properties":{"message":"insert","schema":"public","table":"simple_table","id":"12"}}
This approach is safer for the database, but all things have trade-offs, so it’s best to research the performance implications for your workload. It’s also a fairly accessible middle ground before jumping off the cliff into logical decoding. The basic tenets of this approach are to keep message payloads small and to avoid doing too much processing in the trigger itself. The goal is really to get a notification out of the database so that an external application can do the heavy lifting.