# A Familiar Pattern for MCP and Skills

My recent work has gotten me thinking about the relationship between skills, as in Claude skills rather than personal ability, and Model Context Protocol (MCP) servers. I am building MCP servers for projects and starting to put together skills that my teams can share across engagements. That work has also overlapped with GeoFeeds, where I deployed an MCP server last winter and have since built a skill that uses it to produce a daily editorial briefing on the geospatial blog ecosystem.

I am going to use the GeoFeeds work as the example throughout this post because it is public, self-contained, and easy to point at. My professional work is doing the same job in a different setting, but the seams are similar, and GeoFeeds is the cleaner illustration.

The boundary between a skill and MCP is one most geospatial practitioners already know in another form. It is the boundary between a workflow and an API. For example, a WFS service is an API and an FME workspace is a workflow. The WFS exposes a resource, such as features, schemas, and queries against a dataset, in a shape another piece of software can consume. The FME workspace defines a sequence of operations that may or may not include reading from that WFS, but its job is to transform, filter, join, and produce an output. Nobody in our field would confuse the two. Nobody would suggest building the FME workspace’s logic into the WFS endpoint, or distributing the WFS by copying its config file around the team.

That same distinction separates an MCP server from a skill. An MCP server is an API. Its intended client is an LLM rather than a piece of GIS software, but the design considerations are familiar. It should expose a resource cleanly and atomically, in a shape the consumer can compose against. A skill is the workflow. It defines how the agent works through a problem, what steps it takes, what conventions it follows, and what the output looks like. The skill calls the MCP server the way an FME workspace calls a WFS, and for the same reason: the workflow needs to reach the resource, and the resource has no opinion about what the workflow is doing with it.

## The skill is the workflow

In the case of GeoFeeds, the daily briefing skill is behavior. It has an opinion about what counts as noise in a feed. Job listings, OSM tagging diaries, and pure changelog entries get dropped. It has an opinion about which voices weigh more, with independent analysis weighted over corporate press releases and technical substance weighted over announcements. It has an opinion about what a “topic” is, what a fifty-word “why this matters” section should sound like, and how the output document should be structured. It also has an opinion about the time window, 0800 ET to 0800 ET, that defines what counts as today’s news. None of that is about the underlying feed data. All of it is about how to work with that data to produce a particular thing. Most importantly, all of those opinions came from me. They are mine and the skill is how I guide the LLM in applying them.

Here is a slice of the skill file itself, truncated for brevity, showing how those opinions are articulated:

```markdown
---
name: geofeeds-daily-briefing
description: Generate a daily editorial briefing summarizing the geospatial industry's RSS feed ecosystem. Use this skill whenever the user asks for a geospatial news summary, daily briefing, feed digest, or asks "what happened in geo today/yesterday." Also trigger when the user asks to summarize geofeeds, geospatial blogs, or wants a roundup of geospatial content. The skill produces a markdown file with three thematic topics and five top posts drawn from 113 geospatial RSS feeds via the geofeeds MCP tools.
---

# GeoFeeds Daily Briefing

Produce a concise, editorially sharp daily summary of the geospatial feed ecosystem. The output is a markdown file covering the previous day's posts (0800–0800 ET window), identifying thematic convergences and surfacing the most significant individual posts.

## Prerequisites

This skill requires access to the **geofeeds MCP tools**:

- `geofeeds:get_aggregated_feed` — pull all recent posts
- `geofeeds:search_feed_items` — search by topic/keyword with date filters
- `geofeeds:list_cached_feeds` — enumerate available feeds

If these tools are not available, inform the user that the geofeeds MCP server must be connected.

## Time Window

By default, the briefing covers the **previous 0800 ET to 0800 ET window**, regardless of when the skill is run. For example, if run at 1400 ET on February 13, it covers 0800 ET February 12 to 0800 ET February 13.

To calculate the window:

- `dateTo` = today's date at 13:00 UTC (0800 ET)
- `dateFrom` = yesterday's date at 13:00 UTC (0800 ET)

The user may override this with explicit date ranges (e.g., "summarize last weekend's posts").

## Workflow

### Step 1: Gather Posts

Call `geofeeds:get_aggregated_feed` with `limit: 50` to pull recent posts. Review the publication dates and filter to only posts within the target time window.

If the aggregated feed doesn't cover the full window (e.g., high-volume days push older posts out), supplement with targeted searches using `geofeeds:search_feed_items` with the `dateFrom` and `dateTo` parameters.

### Step 2: Filter Noise

Discard posts that are:

- Job listings (from GIS Jobs Clearinghouse, GISjobs.com, or similar)
- Event calendar entries with no substantive content
- OSM community diary entries about tagging minutiae, personal slice-of-life posts, or non-English posts without geospatial substance
- Pure product changelog entries with no editorial context
- Webinar/event registration promos with no analytical content
```

This is what a workflow looks like. The opinions are the substance of the thing. The skill is allowed to be narrow, specific, and opinionated because that is the natural shape of a workflow. An FME workspace is the same. It is full of choices about how the assembly proceeds, what each transformer does, and what the output looks like. Nobody complains that the workspace has too many opinions. The opinions are the workspace.

In this case, the skill does more than describe the desired output. It tells the agent which MCP tools to use, how to map the briefing window to tool parameters, and when to supplement the aggregated feed with targeted searches. Those instructions map directly to the tools shown in the next section.

## The MCP server is the API

The GeoFeeds MCP server is a gateway to a resource I own. The resource happens to be a feed cache. It could just as easily be a database server, an application server, a search index, or any other resource the consumer cannot reach directly. The operations it exposes, such as pulling the aggregated feed, searching items with a date range, listing cached feeds, and fetching items from a single feed, are the kind of operations any reasonable API over a feed cache would offer. None of them know what a briefing is. None of them bundle steps together or anticipate a particular use case.

Here is a representative tool definition from the server. It shows the metadata for the `search_feed_items` tool, which does a lot of work in the briefing. This is how the LLM, once it is made aware of an MCP server, knows what operations are available. It’s not unlike a `GetCapabilities` call in WFS, providing a machine-readable description of what the service does.

```json
{
  "name": "search_feed_items",
  "description": "Search across all cached feeds for items matching a query string. Supports multi-term search (AND/OR), field-specific queries (title:term), quoted phrases, fuzzy matching, date/time ranges, and feed filtering. Results are relevance-scored and sorted by relevance then date. dateFrom/dateTo accept YYYY-MM-DD (date-only, uses start/end of day) or full ISO 8601 date/time. Use compact=true and maxDescriptionLength=150-200 to reduce response size for LLM context windows.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search query string. Supports: multi-term (AND/OR), field-specific (title:term, description:term, source:term), quoted phrases (\"exact phrase\"), and boolean operators (AND, OR). Default is AND logic."
      },
      "limit": {
        "type": "number",
        "description": "Maximum number of results to return (default: 20)"
      },
      "useWordBoundary": {
        "type": "boolean",
        "description": "Use word boundary matching instead of substring matching (default: true)"
      },
      "fuzzyTolerance": {
        "type": "number",
        "description": "Fuzzy matching tolerance (Levenshtein distance, 0-2). 0 = exact only, 1 = allow 1 char difference, 2 = allow 2 char difference (default: 1)"
      },
      "dateFrom": {
        "type": "string",
        "description": "Filter results from this date/time. Supports:\n- Date only (YYYY-MM-DD): interpreted as start of that day UTC (00:00:00)\n- Date/time (ISO 8601, e.g. YYYY-MM-DDTHH:mm:ssZ): used as-is\n\nIMPORTANT: If the user provides natural language dates (e.g., \"Q3 2025\", \"last month\"), convert to ISO 8601 before calling.\n\nExamples:\n- \"Q3 2025\" → \"2025-07-01\" or \"2025-07-01T00:00:00Z\"\n- \"today 2pm UTC\" → \"2026-01-22T14:00:00Z\"\n- \"yesterday\" → \"2026-01-21\""
      },
      "dateTo": {
        "type": "string",
        "description": "Filter results until this date/time. Supports:\n- Date only (YYYY-MM-DD): interpreted as end of that day UTC (23:59:59.999)\n- Date/time (ISO 8601, e.g. YYYY-MM-DDTHH:mm:ssZ): used as-is\n\nIMPORTANT: If the user provides natural language dates, convert to ISO 8601 before calling.\n\nExamples:\n- \"Q3 2025\" → \"2025-09-30\" or \"2025-09-30T23:59:59Z\"\n- \"today\" → \"2026-01-22\" (includes entire day)\n- \"end of month\" → \"2026-01-31T23:59:59Z\""
      },
      "feedUrls": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "description": "Filter to specific feed URLs only. If not provided, searches all feeds."
      },
      // Some content truncated for this post
    },
    "required": [
      "query"
    ]
  }
}
```

What I have found in my work so far is that building MCP servers is most effective when they are designed the way any other API would be. If you were building a REST API in front of the same cache, you would not push the report generation logic behind the API and make the resource perform it. You would expose the operations needed to retrieve the components, such as feeds, items, searches, and filters, and let the calling layer assemble them into whatever it needed. The application is where the assembly belongs. The API exists to make the resource reachable in a useful shape.

An MCP server follows the same architecture with a different consumer. The calling layer is no longer a web app or a script. It is the agent. The assembly that a traditional client app would perform in code, the agent performs by reasoning over tool calls. That shift does not change what belongs behind the API. It changes who is doing the assembly. There is no `generate_daily_briefing` endpoint in GeoFeeds, and there should not be, for the same reason a REST API over the cache would not have one. That is workflow logic, and it belongs in the calling layer.

In fact, I have gotten into the habit in my project work of building REST APIs using something like FastAPI and then wrapping the API with MCP using FastMCP. This gives me a standard REST option should I need it and gives me a single code base regardless of entry point. It also gives me a place to make MCP-specific optimizations without reworking all of the code.

A few things change at the margins because the consumer is an LLM. Response sizes need to respect context windows. Tool descriptions need to help a model decide which one to call. Pagination has to account for the fact that each page costs context. These are real considerations, but they are adjustments to API design, not a replacement for it. The underlying shape is still an API over a resource.
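The `compact` and `maxDescriptionLength` parameters on `search_feed_items` are examples of exactly that kind of adjustment. A hypothetical sketch of the server-side trimming (the helper and field choices are mine, not the GeoFeeds implementation):

```python
def compact_items(items: list[dict], max_description_length: int = 150) -> list[dict]:
    """Trim feed items for an LLM consumer: keep only the fields the agent
    needs and truncate long descriptions so each result costs less context."""
    compacted = []
    for item in items:
        description = item.get("description", "")
        if len(description) > max_description_length:
            description = description[:max_description_length].rstrip() + "..."
        compacted.append({
            "title": item.get("title"),
            "link": item.get("link"),
            "published": item.get("published"),
            "description": description,
        })
    return compacted
```

The same idea drives pagination choices: smaller, field-pruned pages mean the agent can afford more of them before the context window fills.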

## They compose because they each have their role

The important thing is that the pieces compose because they each have their role. The skill is the workflow, deciding what to fetch and when, filtering the results, applying its conventions, and assembling the output. The MCP server is the API, sitting in front of the resource, returning what it is asked for and nothing more. This is classic, bread-and-butter separation of concerns.

That separation is what makes the pieces extensible. The same cache could feed a different skill tomorrow: a weekly summary, a topic-filtered digest, or an alerting tool that watches for specific keywords. The server would not need to change. The same skill, in principle, could draw from a different MCP server that exposed a different feed ecosystem and produce the same shape of briefing for it. Neither piece has to know what the other is going to do next because the contract between them is just the API.

This is the same architectural habit that has kept WFS useful for a long time. Keep the resource interface clean. Keep the workflow logic in the workflow. Let each layer do its own job.

The skill is the workflow. The MCP server is the API. They are different things, built for different purposes, and the cleanest way to think about either of them is to remember that we have been doing this for a long time.

Image: Livioandronico2013, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0), via Wikimedia Commons