First off, let’s cut to the chase. Writing a bunch of vague instructions to a large language model, and hoping the model gives you something usable, is not coding.
It may look like coding. You may get code back. But you are not doing what programmers have historically defined as "coding." And it matters, especially today, to keep that distinction clear.
At a high level, programming is about building deterministic systems. You write code, compile (or run) it, and expect a repeatable outcome from a known input. When the code doesn't work as intended, you can work back through the logic, trace the stack, and debug.
All of this relies on determinism. You are in control. You are responsible. You own the success as well as the failure.
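To make that concrete, here is a minimal sketch in Python. The function name and values are made up purely for illustration: the same call always produces the same result, and a bad input fails with a traceback that points at the exact line responsible.

```python
# A hypothetical, deliberately boring function: same input, same output, every run.

def apply_discount(price: float, discount_rate: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0.0 <= discount_rate <= 1.0:
        # A bad input fails loudly, and the traceback leads straight back here.
        raise ValueError(f"discount_rate must be between 0 and 1, got {discount_rate}")
    return price * (1.0 - discount_rate)

# Repeatable outcomes you can reason about and test.
assert apply_discount(100.0, 0.2) == 80.0
assert apply_discount(100.0, 0.2) == apply_discount(100.0, 0.2)
```

That predictability is the whole point: when something breaks, the failure is traceable to logic you wrote and can change.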
Now compare this to "vibe coding": throwing verbal instructions into an LLM such as ChatGPT, Copilot, or whatever flavor-of-the-month codegen is being used, and hoping that what it returns is (a) correct, (b) idiomatic, (c) optimized, and (d) secure.
LLMs are statistical guess machines. They have no way of knowing your context, the architecture you are working with, your edge cases, or your…


