Vibin Dirty – A Vibe Coding Experiment
2026-03-25

♫ They see me vibin, codin ♫
♫ Reviewers tryin to catch me vibin dirty, catch me vibin dirty ♫

It started out as a normal “Play Sunday”. Drop kiddo off at the parents, brunch with the wife, see a play at Playhouse Square, back to the parents for dinner and to get kiddo.

On the way to brunch, I was complaining about needing to sign back up for Facebook just to get notifications and communication from the school district, which led into having a billion apps for all the notifications I like to see: Ring and Nextdoor for crime and safety, CodeRED for county alerts, NWS for severe weather. I jokingly said “I should just vibe code something to aggregate everything”, to which she responded “You should”.

I opened up a chat with Claude Opus 4.6 and got to work. I gave it the premise, the technology I want to use, and started working on a plan document. By the time we got to brunch, the plan was pretty solid and I let it loose to start generating the artifacts.

This is where the experiment begins. Can I truly vibe code something? No hand holding. Just tell it what to do and go. I was not about to miss out on my day date with the wife; I wanted to be engaged and present. There were plenty of points to check in on the AI, be it potty breaks, driving, or moments when the conversation circled back to what it was doing. At no other point during our time together was I thinking about what the AI was doing. Just the way I wanted it.

The Plan — Scuttlebutt#

The word on the street. A self-hosted, LLM-powered dashboard and RSS feed that aggregates hyperlocal safety, weather, and community information into a single prioritized stream.

1. Goals#

  • Single pane of glass for neighborhood-relevant information
  • Priority-scored via local LLM (Ollama) — surface critical alerts, suppress noise
  • Deduplicated across sources (same incident reported on Ring + Nextdoor + Facebook → one event)
  • Near-real-time for safety/weather; digest-latency acceptable for community posts
  • Zero cloud dependencies — runs entirely on local hardware
  • RSS-first output — consumable by any feed reader, automation tool, or push notification service
  • Minimal JS — server-rendered HTML via HTMX with SSE for live updates
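To make the dedup goal concrete, here is a rough sketch of how the same incident reported by multiple sources could collapse onto one key. The types and names here are my own illustration, not the project’s actual code:

```zig
const std = @import("std");

// Hypothetical shape, not the project's actual types: one record per
// deduplicated incident, scored by the local LLM.
const Event = struct {
    source: []const u8, // "ring", "nextdoor", "facebook", ...
    dedup_key: u64,     // hash of normalized title + coarse time bucket
    priority: u8,       // 0-100, assigned by the LLM pass
    title: []const u8,
};

// Two reports of the same incident normalize to the same title and land
// in the same time bucket, so they hash to the same key.
fn dedupKey(normalized_title: []const u8, time_bucket: u64) u64 {
    var h = std.hash.Wyhash.init(time_bucket);
    h.update(normalized_title);
    return h.final();
}

pub fn main() void {
    const a = dedupKey("vehicle break-in on elm st", 1234);
    const b = dedupKey("vehicle break-in on elm st", 1234);
    std.debug.print("same incident: {}\n", .{a == b});
}
```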

2. Sources & Ingestion Strategy#

| Source | Method | Latency | Auth | Status |
| --- | --- | --- | --- | --- |
| NWS Severe Weather | REST API (api.weather.gov) | ~60s polling | None | ✅ Done |
| CodeRED Alerts | IMAP email parsing | ~1–5 min | Email credentials | ✅ Done |
| Ring (device events) | Node.js bridge (ring-client-api) | ~5s real-time | Ring refresh token | ✅ Done |
| Ring (Neighbors feed) | IMAP email notifications | ~5–15 min | Email credentials | ✅ Done |
| Facebook Groups | IMAP email notifications | ~5–15 min | Email credentials | ✅ Done |
| Nextdoor | IMAP email digests | ~15–60 min | Email credentials | ✅ Done |

By the end of the day, there were enough touch points that we had made it completely through our plan and were ready to check the code.

Two days later, I finally had the time.

The Setup#

Follow the instructions to make sure the dependencies are present.

Check!

Build the project… err… ok, compilation errors. No big deal, let’s work through them.

For some reason Opus chose to use libcurl and FFI rather than what is in the Zig standard library. Fine… Whatever. I didn’t have to write the code. It claimed to fix the FFI type issues and the compilation errors went away. Who am I to argue?!?

It also experienced some language version confusion, using constructs and syntax from an older version of the language even though I had told it to use the latest. No big deal. It was able to address those issues easily enough.
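For a concrete flavor of that version drift (this particular builtin change is my example, not necessarily one the model tripped on): Zig 0.11 dropped the explicit destination type from cast builtins like `@intCast`.

```zig
const std = @import("std");

pub fn main() void {
    const value: u64 = 42;

    // Zig 0.10 and earlier passed the destination type explicitly:
    //   const n = @intCast(u32, value); // no longer compiles
    // Since Zig 0.11 the result type is inferred from context:
    const n: u32 = @intCast(value);

    std.debug.print("{d}\n", .{n});
}
```

A model trained mostly on pre-0.11 code will happily emit the two-argument form unless grounded in current documentation.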

There were many places that violated the var vs. const usage rules that Zig enforces at compile time. These were the most obnoxious to fix.
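For anyone unfamiliar with Zig, the rule in question: the compiler rejects a `var` binding that is never mutated. A minimal illustration:

```zig
const std = @import("std");

pub fn main() void {
    // Compile error since Zig 0.11:
    //   var label: []const u8 = "events"; // "local variable is never mutated"

    // Fix: use const for bindings that are never reassigned...
    const label: []const u8 = "events";

    // ...and keep var only for values that actually change.
    var count: usize = 0;
    count += 1;

    std.debug.print("{d} {s}\n", .{ count, label });
}
```

Multiply that diagnostic across a whole generated codebase and the fixes get tedious fast.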

Project compiles. Let’s run… err… segfault…

The segfault was in the way it was trying to interface with the libcurl library. After a bit of back and forth trying to work out the issues, I decided to call it.

I could have continued to instruct it to remove libcurl and replace it with functionality from the standard library, but at this point I was curious what other bad decisions were waiting for me.

The Review#

I started to review the code to understand what was going on.

The first thing I noticed was the sheer number of single-line function bodies. Most of them were wrappers around the C FFI function calls, but they didn’t even do anything useful like clarify an FFI function name or marshal data across the boundary.

🤔 I guess one could argue that it was trying to conform to the different function naming semantics between Zig and the C libraries.

Related, there were many functions with only one call site…

🤔 Maybe it was done for assigning a name to a given unit of work or for testability? But for me, that is a stretch. Inline until otherwise needed.

Lots of OOPness and other programming paradigms that made the code feel over-engineered.

🤔 I am not claiming that these practices are bad, but I tend to treat them more as guidelines than rules. Straight out of the gate it feels like premature optimization and makes the code harder to reason about.

What Next?#

It’s not all doom and gloom. After I took things into my own hands, got rid of all the libcurl FFI nonsense, and replaced it with functionality from the standard library, things started to come together according to the initial plan.
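For reference, a plain GET needs no FFI at all with the standard library’s HTTP client. A minimal sketch using the Zig 0.13-era `std.http` API (which has shifted between releases, so verify against your compiler version; the query parameters are illustrative):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // No libcurl, no FFI: the client lives entirely in std.
    var client = std.http.Client{ .allocator = allocator };
    defer client.deinit();

    // Response body accumulates into this dynamic buffer.
    var body = std.ArrayList(u8).init(allocator);
    defer body.deinit();

    const result = try client.fetch(.{
        .location = .{ .url = "https://api.weather.gov/alerts/active?area=OH" },
        .response_storage = .{ .dynamic = &body },
    });

    std.debug.print("status={d} bytes={d}\n", .{
        @intFromEnum(result.status),
        body.items.len,
    });
}
```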

The things that I ended up keeping were the initial plan document, a couple of cool coding patterns, a few small units of work around IMAP for reference, and some constructs that had me going “Oh cool, I didn’t know you could do that in Zig!”.

Everything else was tossed.

If I had to quantify, somewhere between 80% and 87% of the generated code was, or will be, deleted and rewritten.

How to Increase Success#

TIP

Ground the model in truths

This is arguably the most important tip to increasing success when working with large language models.

The language version confusion, as well as some of the API usage problems that were encountered, could have been avoided by grounding the AI with the appropriate documentation. The platforms have the ability to search the web for this information but, at this point in time at least, some sites are still very hostile to bots accessing their content.

This also focuses the model by limiting the search space. You can get wildly different results between “build me a kernel driver in rust” on its own and the same prompt accompanied by the Rust language documentation, the hardware specification, and any programming guidelines required.

TIP

More checkpoints and smaller focused units of work

I find that the current models like to computer science the crap out of whatever they are doing. It takes effort to keep them on track. Remember that every line of code is a liability and something that must be maintained.

Looking at my experience above, I would have:

  1. Steered it clear of using libcurl in favor of the standard library functionality.
  2. Pushed it to inline more functionality until I felt it necessary to pull it out. Especially if there was only one call site, or if there was no FFI marshaling convenience.
  3. If I had also been in an interface where I or it could compile the code as we went along, the version and API confusion could have been caught early as well.
TIP

Leverage an agentic workflow

Spinning up smaller throwaway agents to do the work, with the primary model doing the verification, keeps its context small, clean, and relevant. There are many times working in a single session when I find the model confusing itself with decisions made in the past. That signals to me it’s time to start a new session with only the relevant decision points and none of the back and forth that got us there. An agentic workflow prolongs that point, if not automates the practice away.

Final Thoughts#

I don’t want to babysit the AI.

If I am going to vibe, I want it to take my poorly worded idea and push it across the finish line with some definition of reasonably maintainable code. Something that another human or AI can pick up and go. You know, like how enterprise software development is done today. Right?… RIGHT?!? 😏

AI should be doing the stuff I DON’T want to do, so that I can be engaged and present in the things that I DO want to do.

~ SK

Vibin Dirty – A Vibe Coding Experiment
https://www.kichka.dev/posts/vibin-dirty/
Author: Stephen Kichka
Published at: 2026-03-25
License: CC BY-NC-SA 4.0