Depending on your point of view, “vibe coding” – iteratively developing code with generative AI by describing the desired functionality in natural language – is either revolutionary or a slippery slope to deploying poor, irresponsible software. Both viewpoints have merit, and I fall somewhere in the middle. As a programmer approaching 50 years in the craft, counting professional and pre-professional work, I believe you need to understand software development, and the code itself, in order to make software perform well. I also recognize that a significant portion of any software project is, shall we say, uninspiring and rote.
Vibe coding can be great for speeding up the rote, but you still need to understand the code that is generated. If you are simply bluffing your way through software development using genAI and you don’t really understand the code you’re looking at, that’s a slippery slope to deploying poor, irresponsible software.
Recently, a project requirement sent me on a side quest. Part of the work involves integrating with an API that controls physical infrastructure assets. What was interesting was that the vendor did not provide any kind of developer sandbox for testing the API, which meant we’d have to test the integration against live assets. That even extended to kicking the tires through the API’s Swagger UI page: I would run a test in Swagger UI and watch the effect on the live asset.
Obviously, this is not ideal, so I decided I needed to build a mock API to use as my own internal sandbox. I was officially on a side quest. As I mentioned, the API used Swagger documentation, which implies OpenAPI compliance, but for some reason the vendor did not make its actual OpenAPI document available anywhere. I could access pieces through Swagger UI, but not the whole API spec. This was already taking more time than I wanted to put into it.
I decided to try vibe coding the mock API with ChatGPT. The first thing I noticed was that the vendor required authentication to access its Swagger UI (probably because it worked with live infrastructure), which made simply passing a link to ChatGPT unworkable. So I did what any developer does in 2025 – I printed the page to PDF and uploaded it.
It has probably become clear that I am intentionally not naming the vendor or the API, so I’ll emulate my process using the Swagger Petstore demo site, with the exact prompts I used previously.
First, I uploaded the PDF and asked ChatGPT to explain the target endpoints to me.

When providing a document for use by an LLM, it’s generally a good idea to start by asking a question that forces it to read the document thoroughly. Like humans, LLMs will skim when they can. The response to the question above was sufficient to let me know that it had read the document correctly. Here is an excerpt for the first endpoint.

For both endpoints, it correctly identified input and output payloads as well as various HTTP codes to expect and what each means. From there, I asked it to generate code.

It was able to ignore my typo and generate a mock API quickly. It also gave me cURL samples to use. Subsequently, I had it update the code to simulate bearer token authentication, but to accept any token as valid. My existing client code was already generating bearer tokens, and I wanted to drop the mock API in without changes. Finally, I had it create the package.json necessary to deploy the app.

So how well did this work? As I mentioned, the code shown here is from the Swagger demo. The API generated for my production use case worked well enough that I was able to drop it in place behind my existing client code and run tests without needing to adjust the client code at all. That was a big win. With genAI, I was able to create my own developer sandbox without spending a lot of unbudgeted time doing so.
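Dropping the mock in behind existing client code amounts to making the API base URL switchable. The sketch below shows one way to wire that up with an environment variable; the names and URLs are hypothetical, not taken from the real project.

```javascript
// Hypothetical client-side switch: point at the mock sandbox when
// MOCK_API_URL is set, otherwise at the live endpoint. The Petstore URL
// here is just a placeholder for the real vendor's base URL.
const LIVE_API_URL = "https://petstore3.swagger.io/api/v3";

function apiBaseUrl(env = process.env) {
  return env.MOCK_API_URL || LIVE_API_URL;
}

// The bearer token the client already generates is passed through unchanged;
// since the mock accepts any token, no client changes are needed.
function authHeaders(token) {
  return { Authorization: `Bearer ${token}` };
}
```

With this in place, running tests against the sandbox is just a matter of setting `MOCK_API_URL` before starting the client.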
Admittedly, the vendor restrictions on access to the API documentation drove this approach. Having access to the full OpenAPI spec would have been a lot better, but that wasn’t an option in this case.
The middle ground I have struck with vibe coding is that I’ll use it for low-risk, low-value, or rote development, such as in this use case. I know there are a lot of people out there who are using it for more, but if I am ultimately responsible for maintaining the resulting application, then I need to understand the code thoroughly. I can do that best if I have written it or been involved with a team that has written it.
I realize that using AI even for rote tasks raises larger questions about how entry-level programmers will gain experience, and those questions are valid. It’s probably too early to speculate on that. Each recent technological shift has resulted in a corresponding shift in workforce mix and skills.
Perhaps we end up needing fewer programmers, but more data scientists to ensure the data training the models is valid. Maybe, given the advent of natural language interfaces, traditional writing and grammar will become more important. Maybe psychologists will be more useful for troubleshooting wayward LLMs.
It’s hard to say what the necessary skills for an AI-centric technology landscape will be, but there is no denying that the technology is here to stay and advancing rapidly, so it is important to learn how to use it effectively and appropriately – even if you’re a programmer who is approaching 50 years working in the craft. Not doing so could put you on the slippery slope to deploying poor, irresponsible software.
Here is the full ChatGPT conversation used to generate the code discussed in this post: https://chatgpt.com/share/67f3ce6f-7f38-800b-a24b-d92af08d8d20