I am by no means a software engineer, coder, hacker, or any other kind of technical developer, but many (if not most) of my clients are or work with them intimately. Eventually, I noticed batches of questions where several different clients would ask the same question on the same day. It turned out that they were all reading tldr in the morning (and now I do too). Recently, during my morning reading, I ran into Addy Osmani’s article on “vibe coding” – pointedly titled “Vibe coding is not an excuse for low-quality work” – and it struck a chord with me. In it, Osmani celebrates the promise of AI-assisted coding but sounds a clear warning: don’t let the “AI magic” fool you into dropping your standards. His message resonates in an era when lawyers are increasingly experimenting with generative AI for drafting documents. After all, if programmers are now coding by vibe – letting an AI auto-generate code based on a prompt and a prayer – one might imagine attorneys could start drafting by vibe, relying on an AI’s confident output to craft contracts or briefs. The parallels run deep. Professionals faced with time and cost constraints can be (understandably?) tempted to trust AI’s output without full understanding, to lean on the vibe of a good result rather than the structure and rigor that traditionally underpins quality work. In both fields, it’s becoming clear that a baseline literacy (be it in programming or in law) is non-negotiable before these powerful tools can be safely integrated. In this restack-style commentary, we explore Osmani’s key points about vibe coding and what I am terming vibe drafting.
TL;DR
In both coding and law, AI is here to stay – and that’s a good thing, so long as we don’t lose sight of reality. AI’s speed must be overseen by human judgment. AI is a tool – a brilliant, evolving tool, but a tool nonetheless. The vibe of a well-written draft is not enough; it’s our job to ensure the substance backs it up.
When Code “Vibes” and Contracts Follow Suit
AI adoption is no longer a theoretical future in law – it’s happening now. It’s estimated that 79% of legal professionals are adopting AI in some form, and law firms from New York to London have rolled out pilot programs with GPT-based tools. Just as GitHub Copilot and ChatGPT have become “developers’ little helpers” in coding, tools like Harvey AI and CoCounsel are entering the lawyer’s toolbox for research and drafting assistance. The promise is huge: faster drafting, automated research, and newfound efficiency. But as we’ll see, the pitfalls – if we treat AI outputs with uncritical reverence – can be equally significant.
Osmani’s Take on Vibe Coding: High Promises, Hidden Perils
Osmani’s article dissects “vibe coding”, a term capturing the trend of code being written with AI assistance based on a developer’s intent described in plain language. He acknowledges the excitement: AI coding tools can “lower barriers” and turbo-charge development. Yet, the crux of his message is cautionary. Some of Osmani’s key points include:
Quality Over “Vibes”: Enthusiasm aside, AI assistance is no excuse to slack on quality. As Osmani flatly puts it, “vibe coding is not an excuse for low-quality work”. The convenience of having an AI suggest code in seconds doesn’t absolve a developer from thinking critically about correctness, style, and maintainability.
“Built on Sand” – Hidden Fragility: Code that an AI generates may appear to work on the surface but hide serious flaws underneath. Osmani warns that AI-produced solutions can be “built on sand,” appearing functional but containing hidden issues. He’s used the term “house of cards code” to describe code that “looks complete but collapses under real-world pressure”. In other words, just because the code compiles or passes basic tests doesn’t mean it won’t fall apart in a production scenario or edge case. The vibe might be right; the foundation may be shaky.
Overreliance and the Lure of the 70%: Interestingly, AI often gets you “mostly there” – say 70% of a coding task – but that last 30% (integrating edge cases, polishing architecture, fixing subtle bugs) is where many vibe coders stumble. Less experienced devs may be so impressed with the initial output that they overlook the missing pieces. Osmani notes that junior engineers too often accept the AI’s output more readily, leading to fragile systems. They might blithely use a snippet that “looks complete” without recognizing it’s incomplete or slightly wrong. The result is a program that works great until it doesn’t, much like a house of cards ready to topple. Seasoned engineers know better – they treat the AI’s draft as a starting point and then apply “years of hard-won engineering wisdom” to fortify it.
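To make the “looks complete but collapses” failure mode concrete, here is a small, entirely hypothetical Python sketch (my own illustration, not from Osmani’s article): an AI-style snippet that passes the obvious happy-path check yet fails on an edge case, followed by the hardened version a reviewing engineer would insist on.

```python
# Hypothetical AI-generated snippet: computes an average order value.
# It "looks complete" and passes a basic test, but hides an edge case.
def average_order_value(orders):
    """Return the mean order total."""
    total = sum(o["total"] for o in orders)
    return total / len(orders)  # collapses when orders is empty: ZeroDivisionError

# A seasoned reviewer treats the AI's draft as a starting point and fortifies it:
def average_order_value_safe(orders):
    """Return the mean order total, or 0.0 for an empty order list."""
    if not orders:
        return 0.0
    return sum(o["total"] for o in orders) / len(orders)
```

The first function is the 70%; the guard clause is part of the last 30% that separates a demo from production code.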
AI as an Intern, Not a Replacement: Perhaps Osmani’s most practical piece of advice is to use AI as a tool, not a crutch. He suggests treating AI “as your intern, not your replacement”. In practice, that means a developer should review every AI-generated line as rigorously as they would review a junior programmer’s work. The senior dev sets the strategy, checks the details, and mentors the AI (via iterative prompts and fixes) to improve the output. The AI is there to accelerate grunt work, not to take over the creative problem-solving or accountability for the result.
Mandatory Code Review and Literacy: In Osmani’s view, code literacy remains essential even when using AI. “Never accept AI-written code into your codebase unreviewed,” he insists. The developer must still understand the code that ends up in the product. That entails reading through what the AI wrote, testing it, and ensuring it meets both functional and quality standards. Implicit in this is the need for developers to know what to look for – security vulnerabilities, poor error handling, performance traps – which means they need a baseline of training to effectively vet AI output. As Osmani emphasizes elsewhere, AI coding actually helps experienced developers more than beginners because veterans have the knowledge to guide and correct the AI, whereas novices can’t discern a good suggestion from a bad one. This “knowledge paradox” flips the script on the assumption that AI will let newbies do advanced work — in reality, without fundamental understanding, newbies risk blindly shipping flawed code.
Best Practices for “High-Quality” Vibe Coding: Finally, Osmani doesn’t advocate abandoning AI; he advocates using it responsibly. The key, he concludes, is to practice “high-quality vibe coding” with clear rules and best practices. That means establishing guidelines for when and how to use AI in development, requiring thorough testing of AI contributions, and continuously educating the team so they build AI literacy alongside traditional skills. By setting a strong quality bar, teams can enjoy AI’s speed boost and avoid pumping out garbage code. In short, keep the vibes, but verify them.
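The “keep the vibes, but verify them” discipline can itself be expressed in code. Below is a minimal sketch (the utility and its checks are illustrative examples I made up, not anything from the article) of the kind of explicit edge-case testing a team might require before any AI-drafted snippet is accepted.

```python
# Review harness for a hypothetical AI-drafted utility: the snippet is only
# merged once a human-written battery of edge-case checks passes.
def slugify(title: str) -> str:
    """AI-drafted (hypothetically): turn a document title into a URL-safe slug."""
    cleaned = "".join(ch if ch.isalnum() or ch == " " else "" for ch in title)
    return "-".join(cleaned.lower().split())

def review_checks() -> str:
    """Edge cases a human reviewer insists on before merging."""
    assert slugify("Vibe Coding 101") == "vibe-coding-101"
    assert slugify("  extra   spaces  ") == "extra-spaces"
    assert slugify("") == ""    # empty input must not crash
    assert slugify("!!!") == "" # punctuation-only input
    return "all checks passed"
```

The point is not the particular utility but the workflow: AI writes the first draft, and nothing lands without passing tests a human chose.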
Osmani’s insights paint a balanced picture: AI in coding is a game-changer, yes, but it doesn’t rewrite the fundamental rules of engineering. If anything, it makes those fundamentals (code reviews, understanding your tools, designing robust systems) even more crucial. A developer who abandons structure and rigor for the “easy vibes” of AI output is courting disaster – or at least technical debt and bug-fixes down the line.
Parallels in Legal Drafting: The Perils of “Vibe Drafting”
So how do these lessons translate to legal practice? Strikingly well. As law firms begin to integrate AI into drafting contracts, briefs, and memos, a similar dynamic is emerging. Let’s call it “vibe drafting”: when an attorney leans on an AI to produce legal text based on the feel of a good answer, rather than through methodical legal reasoning. Just as vibe coding can yield code that seems right at first glance but hasn’t been truly vetted, vibe drafting can produce documents that look polished yet hide logical or legal pitfalls. Here are the key parallels and challenges when lawyers adopt AI:
Temptation to Trust AI Outputs Without Understanding: Legal professionals might be just as prone as junior devs to over-trust an AI’s confident output. If ChatGPT or a legal AI assistant spits out a well-worded paragraph analyzing a case or a clause in a contract, the natural temptation is to take it at face value – after all, it sounds authoritative. But uncritical trust can backfire spectacularly. A now-infamous example is the pair of New York lawyers who, in 2023, filed a brief that looked expertly researched – until it was revealed that six of the cited cases were pure fabrications by ChatGPT. The attorneys (who hadn’t practiced in that area and turned to AI for help) later admitted they didn’t fully understand the tool’s limits. They said, “We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.” Their blind trust led to embarrassment, sanctions, and a stark lesson: AI can and will fabricate facts or law if you don’t know enough to catch it.
The analogy to Osmani’s “house of cards code” is apt. An AI-written brief or contract may read convincingly – the citations look legit, the clauses sound boilerplate – but without the lawyer’s seasoned eye, it could be built on nothing. The legal filing that collapses under scrutiny is the exact parallel of code that collapses under real-world conditions. In both cases, the professional must peer behind the curtain. Does that case actually exist and hold up? Does that contract clause actually do what it purports to do under applicable law? If you don’t know, you’d better find out before pressing “File” or “Send.”
Overreliance on Vibes Rather Than Structure: Legal writing is a structured process. Good attorneys don’t just vomit words that feel lawyerly; they methodically issue-spot, build arguments, cite law, and tailor language to the client’s needs and jurisdictional requirements. AI, however, is a master of vibe: it excels at producing fluent, formal-sounding text that feels right. This can lull lawyers into a false sense of security, skipping steps in the legal reasoning process. For example, an AI might draft a contract clause that reads well but subtly fails to address a crucial contingency or cites legal principles without ensuring they apply in the current jurisdiction or scenario. The output has the form of quality legal work, but not the substance.
Relying on these surface-level “vibes” is akin to a developer accepting an AI’s code without considering edge cases. The risk is ending up with a fragile document. Think of a contract that hasn’t been through the usual rigor (defining all key terms, checking enforceability, aligning with client objectives) – it might hold up fine until a dispute or an unusual situation arises, and then suddenly its gaps and ambiguities show. In coding, the product might be “built on sand”; in law, the contract might be built on fluff. Both will crumble without a solid foundation. The remedy is the same: impose structure onto the AI’s output. A lawyer should use their training to outline the document’s needed elements, verify each reference, and cross-check the logic of arguments, rather than just trusting the AI’s seemingly coherent prose. It’s the difference between writing with AI and letting AI write.
The Need for Baseline Literacy and Expertise: Perhaps the most crucial commonality is that AI doesn’t eliminate the need for expertise – it heightens it. Just as Osmani observed that AI-assisted coding works best in the hands of experienced engineers, AI drafting is safest in the hands of knowledgeable attorneys. A seasoned lawyer knows the black-letter law and the bespoke needs of a case and thus can spot when an AI’s answer is off-base or incomplete. They’re also equipped to guide the AI: asking the right questions, providing correct facts, and refining the prompt when the output isn’t initially on point. By contrast, a law student or non-lawyer who tries to rely on AI for “free legal drafting” is playing with fire. Without legal literacy, they might not recognize that the AI’s seemingly logical analysis actually misapplies a statute, or that the contract it drafted lacks a vital clause.
In short, if you can’t evaluate it, you shouldn’t delegate it to AI. This is why forward-looking firms are training their attorneys on how AI works and its limitations, and why bar associations are urging competence in technology. Even judges are stepping in to enforce this principle: in May 2023, a Texas federal judge began requiring lawyers to certify that no filing is submitted via AI without human verification for accuracy. A few months later, a New York judge issued a similar order that counsel “must at all times personally confirm ... the accuracy of any research” done with tools like ChatGPT. The legal system itself is effectively echoing Osmani’s rule “never accept AI-written code unreviewed” – translating it for lawyers: never submit AI-written legal work unreviewed by a human expert. Baseline legal knowledge and meticulous review aren’t optional; they’re required if AI is in the mix.
AI as Junior Associate, Not Rainmaker: Lawyers often joke about wanting a junior associate to handle the first draft or tedious research. AI offers exactly that prospect – and should be treated in much the same way. The attorney remains the responsible senior partner in the equation. In practice, this means using AI to augment your work, not to replace your own thinking. For example, an attorney might use an AI tool to generate a rough draft of a standard section of a contract (saving time on boilerplate language), but then edit it line-by-line, adjusting tone, fixing legal nuances, and inserting deal-specific provisions. Or one might have an AI summarize a batch of cases to speed research, but then personally read the most relevant cases to ensure nothing was lost in summarization. The mindset should be: AI is here to take on the drudge work and make me more efficient, but I am still the lawyer on this matter. The moment an attorney takes themselves out of that loop – letting AI output directly become the final product – is the moment the “vibe drafting” risk skyrockets.
In all these parallels, the through-line is clear: whether coding or drafting, AI can amplify productivity but also amplify mistakes if used without proper oversight. The allure of vibe drafting is real – who wouldn’t be tempted by a perfectly formatted brief that appears in seconds? – but as legal professionals we must remember that appearance isn’t reality. Solid legal writing, like solid coding, requires an underpinning of knowledge and a rigorous process. Fortunately, just as developers have strategies to reap the benefits of AI while maintaining quality, lawyers can adopt similar best practices.
Forward-Looking Reflections: Toward Responsible AI-Assisted Lawyering
Both Osmani’s insights and the early experiences in law suggest a common prescription: embrace AI’s advantages, but with eyes wide open and guardrails firmly in place. What does responsible AI integration look like for legal drafting? A few thoughts for the road ahead:
Education and Literacy: First, the legal industry needs to invest in AI literacy. This means not only understanding what a given tool can do, but also its failure modes. Attorneys should have a basic grasp of how large language models operate – e.g. knowing that they predict text based on patterns, that they do not actually “know” facts or law and can hallucinate convincing falsehoods. With this understanding, lawyers can approach AI output with healthy skepticism and informed eyes. As one Federal Bar Association article put it, “attorneys must ensure that they maintain a basic understanding of how these systems work” in order to use them safely. Baseline tech literacy is becoming part of baseline legal competence.
Firm Policies and Best Practices: Law firms and departments should develop clear policies for AI use, akin to the “clear rules and best practices” Osmani advocates for developers. These might include guidelines such as: do not input confidential client information into unvetted AI tools (to protect privilege and privacy), always verify any citations or quotes an AI provides, mark AI-generated content in drafts so that it gets special attention during review, and limit AI usage to appropriate tasks (for instance, use it for preliminary research or boilerplate drafting, but not for final judgment calls on legal strategy). By institutionalizing quality control, the profession can prevent the wild-west scenario of unchecked vibe drafting.
Human-in-the-Loop Workflow: The future of AI-assisted legal work, much like AI-assisted coding, lies in hybrid workflows. Rather than aiming for one-click fully automated document generation, the goal should be to let AI do what it’s good at (speed, pattern matching, generating template language) while the human does what they are good at (critical thinking, nuanced judgment, ethical evaluation). In practice, this might look like an attorney preparing a detailed outline or term sheet, using AI to flesh it out into full prose, and then meticulously reviewing and editing that draft. Or using AI to identify potentially relevant cases but then reading and Shepardizing those cases manually. The AI is “in the loop” but not running the loop autonomously. This approach mirrors the idea of AI as an assistant or junior – helpful at each step, but always under supervision. It’s also synergistic: the attorney’s feedback can improve the AI’s subsequent outputs (for example, by refining prompts or fine-tuning firm-specific AI on vetted data), leading to better and safer results over time.
Maintaining Professional Standards: Importantly, lawyers must not let the convenience of AI dilute their professional standards. Every jurisdiction has ethical rules about competence and diligence. Using an AI tool doesn’t outsource those obligations. If an error slips through because “the AI wrote it, not me,” it is still the lawyer’s error. This mindset needs reinforcement. Some commentators have suggested treating AI like any other software tool – e.g., spell-check or a calculator – useful but requiring validation. Just as you wouldn’t assume a calculator gives the right answer to a complex equation without double-checking key inputs, you shouldn’t assume an AI gives a legally correct answer without double-checking sources and reasoning. The attorney of the near future might be one who is equal parts lawyer and editor – editing not just human junior lawyers’ work, but AI’s work as well, to ensure it passes muster.
Opportunities for Improvement: On a more optimistic note, when used wisely, AI can actually enhance the rigor of legal practice. Consider that an AI can quickly compare a draft against thousands of similar documents or check consistency of terms across a 100-page contract – tasks a human might do more slowly or miss altogether. If lawyers approach AI with the mindset of “trust but verify,” they can leverage such capabilities to augment their quality control. For example, after an attorney writes a brief, they might ask an AI to critique it or identify any missing counterarguments, essentially as a second pair of eyes (knowing that any suggestions are just that, suggestions to be evaluated). In this way, AI can act as a sounding board or error-checker, flagging things the lawyer can then confirm or correct. The end result could be higher-quality work. This is analogous to how senior developers use AI coding assistants to catch edge cases or suggest alternative approaches – not blindly, but as a prompt for their own review. If done right, the synergy of human judgment and machine efficiency could raise the bar in both industries.
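The consistency checks described above don’t even require AI to illustrate. Here is a deliberately crude Python sketch (the helper name, regex pattern, and sample fragment are all my own hypothetical illustration, not any real contract-analysis tool) of a mechanical “trust but verify” pass that flags defined terms a draft introduces but never actually uses.

```python
import re

def audit_defined_terms(text: str) -> dict:
    """Cross-check a draft's defined terms (hypothetical helper).

    Heuristic sketch only: a term counts as 'defined' where the draft
    says: "Term" means ...; it counts as 'used' wherever it reappears.
    Real contract tooling is far more sophisticated than this.
    """
    defined = set(re.findall(r'"([A-Z][A-Za-z ]*?)" means', text))
    # A term defined but appearing only once (its definition) is suspect.
    unused = {t for t in defined if len(re.findall(re.escape(t), text)) < 2}
    return {"defined": defined, "defined_but_never_used": unused}

# Usage sketch on a toy fragment:
sample = (
    '"Confidential Information" means any nonpublic data. '
    'The Receiving Party shall protect Confidential Information. '
    '"Effective Date" means January 1.'
)
report = audit_defined_terms(sample)
# "Effective Date" is defined but never reappears -- a gap flagged for human review.
```

The machine flags the anomaly; the lawyer decides whether it matters. That division of labor is the whole point.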
Learning from Each Other: Finally, the worlds of software development and legal practice would do well to keep comparing notes. It’s fascinating to see two very different professions grapple with the same core issue: maintaining rigor in the age of AI automation. Developers worry about bugs and security vulnerabilities; lawyers worry about factual errors and legal misinterpretations. Both talk about not “losing the craft” – whether it’s coding fundamentals or legal reasoning – even as routine tasks get automated. By drawing parallels (as we’ve done here with vibe coding and vibe drafting), each field can benefit from the other’s learnings. The legal field can adopt some of the testing discipline of software (e.g., systematically checking an AI-drafted contract the way one would test code), and the software field can borrow some of law’s caution and emphasis on verification (e.g., the idea of certifying accuracy, akin to an attorney’s duty of candor to the court). Cross-pollination of best practices will help ensure AI truly elevates these professions instead of undermining them.
Beyond the Vibes
In both coding and law, AI is here to stay – and that’s a good thing, so long as we don’t lose sight of reality amid the hype. “High-quality vibe coding,” as Osmani advocates, means melding AI’s speed with human judgment. The same goes for what we might call “high-quality vibe drafting.” Lawyers can and will use AI to draft faster and work smarter, but they must bring to that process the full weight of their legal knowledge, skepticism, and ethical responsibilities. The moment we start taking AI output as a substitute for our own understanding is the moment the house of cards starts to wobble.
Ultimately, an AI is a tool – a brilliant, evolving tool, but a tool nonetheless. The lawyer’s role (just like the developer’s) doesn’t disappear; it evolves. We shift from being sole creators to being curators and editors of AI-generated material. We ask new kinds of questions (“Does this AI-suggested clause actually protect my client?”) and we enforce age-old standards (“Is this argument logically sound and supported by real precedent?”). The vibe of a well-written draft is not enough; it’s our job to ensure the substance backs it up.
Addy Osmani’s rallying cry that vibe coding shouldn’t excuse low-quality work carries a broader wisdom: good work remains good work, AI or not. Whether one is wrangling Python code or polishing a legal brief, diligence, expertise, and integrity of process are irreplaceable. By keeping that in focus, we can welcome our new AI assistants into the fold – not as replacements, but as partners – and continue to deliver solid code and sound counsel, far beyond just vibes.
What kind of lawyer would I be without a disclaimer?
Everything I post here constitutes my own thoughts, should only be used for informational purposes, and does not constitute legal advice or establish a client-attorney relationship (though I am happy to discuss if there is something I can help you with). I can be reached via email at dlopezkurtz@crokefairchild.com, on telegram @davidlopezkurtz, on twitter @lopezkurtz, and on LinkedIn here.