Developers such as Saad Sahi, seen at home in Edmonton on Dec. 12, have found work cleaning up vibe-coding, or code generated with the help of artificial intelligence. Amber Bracken/The Globe and Mail
My computer coding education ended in a high school classroom in the early 2000s, when I created a game in which two camels spit at one another. The experience of typing every line of code was frustrating, but it was rewarding. (The game itself? It worked – mostly.)
These days, I don’t need any coding knowledge to build software thanks to artificial intelligence. That’s both a powerful and potentially dangerous thing.
AI companies such as OpenAI and Anthropic have long focused on building tools that can write code to speed up software development. Coding startups such as Cursor and Lovable, meanwhile, are worth billions of dollars. Professionals can use AI coding tools to assist with development, while amateurs who don’t know their frontends from their backends can type plain-language instructions into a chatbot, which converts them into code to produce functioning software and websites.
This is known as vibe-coding, a term coined this year by Andrej Karpathy, the former director of AI at Tesla Inc. Among some programmers and on Reddit boards, vibe-coding is derided as the realm of charlatans who produce buggy software full of security risks. Indeed, no serious company would deploy fully vibe-coded software without an overhaul. One study found AI models introduce a known security flaw into code about 45 per cent of the time.
But vibe-coding has its place. In that sense, it’s not unlike any other application of generative AI. It can be impressive and useless, accurate and error-prone, save you time and waste it, solve your exact problem and lead you astray, all while carrying the potential to completely change how software is made and what the career itself will look like. Startups can already grow faster with fewer engineers thanks to AI coding tools, while larger companies are rethinking how many programmers they need to employ. At League Inc. in Toronto, which develops online health care platforms, engineers are saving between five and eight hours each week because of AI, founder Michael Serbinis said recently.
Some of Sahi’s screens at his Edmonton home. Sahi has a business helping people with their vibe-coded projects. Amber Bracken/The Globe and Mail
Vibe-coding is permeating C-suites, too. The leadership team at George Weston Ltd., including Galen G. Weston, recently got a crash course. The chief digital officer at Loblaw Cos. Ltd. wrote on LinkedIn that most of them had never heard of the concept before, but in under two hours, they built functional applications, including a pricing intelligence tool. (Loblaw did not respond to requests for more detail, including what Mr. Weston vibe-coded and whether the execs indeed “replaced costly software,” per the LinkedIn post.)
BrainStation Inc., which provides certification for tech skills, has jumped on the trend. It offers two vibe-coding workshops, including one for Lovable, a popular platform for making web apps. “We’re seeing people sign up for them in droves,” said Jennifer McCuaig, vice-president of experience in Toronto.
The popularity of vibe-coding is changing software development inside organizations, at least in the early stages. A designer with no coding ability can mock up something without involving the engineering team. Conversely, an engineer with no sense of colour theory can do the same without a designer. Functional prototypes can rapidly come together with less work, even if the blurring of roles can make some people uncomfortable. “There are probably individuals at certain organizations that feel threatened,” Ms. McCuaig said. “But it’s going to allow the best ideas to rise to the top faster.”
I want my ideas to rise to the top, so I gave Lovable a try. Social media is rife with AI hucksters showing off projects made with “just one prompt!” But vibe-coding typically doesn’t work that way. I told Lovable to generate a simple crossword app and it produced a fever dream. There was a word grid, though most of the black squares were clustered to one side, and it was totally unplayable. I tinkered for an hour or so without getting much further. Writing in all caps did not make the AI model follow instructions any better, much like shouting at an underling.
My breakthrough, if it can be called that, was to explain my problem to another AI model, Google’s Gemini chatbot. Gemini told me to break up the task into components, such as getting the grid generation right first. Gemini was nudging me to engage in computational thinking, a core tenet of computer science that involves deconstructing problems and developing solutions that can be carried out by a computer.
Fortunately, I didn’t have to think much at all because Gemini also wrote detailed instructions for me to feed to Lovable. I was a trained monkey cutting and pasting, but making fantastic progress. I later ditched Lovable to start over with Google’s own vibe-coding platform. When I ran into a problem where my crossword app populated half the grid with gibberish, no matter how many times I told it to stop making up words, Gemini explained this was a “classic LLM vs. Logic problem.” The AI model had no memory of the words it had already placed, so it was just filling in cells to get the job done.
Gemini told me to instruct the app to write a “recursive backtracking algorithm,” which sounded computer science-y enough, but it didn’t work. When the vibe-coding app insisted it had indeed scripted one, I suspected it was gaslighting me. Gemini provided another solution, which did the trick. After a few mornings, I had a basic but workable AI-generated crossword app, complete with bland AI-generated clues. I hadn’t done much, but I felt accomplished.
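For readers curious what Gemini was suggesting: recursive backtracking is a standard technique for filling a crossword grid. The program places a word, recurses to fill the next slot, and undoes the placement if the grid cannot be completed. Here is a minimal sketch in Python, with an invented word list and a two-slot layout for illustration; it is not the code my app produced.

```python
# Minimal recursive backtracking for crossword filling (illustrative only).
# Each slot is a list of cell coordinates; crossing slots share cells, and a
# placement is valid only if the letters in shared cells agree.

WORDS = ["cat", "car", "toe", "ref"]  # hypothetical word list


def fill(slots, grid, words, used=None, i=0):
    """Try to fill slots[i:] so every shared cell matches; undo on failure."""
    if used is None:
        used = set()
    if i == len(slots):
        return True  # all slots filled consistently
    slot = slots[i]
    for word in words:
        if word in used or len(word) != len(slot):
            continue
        # The word fits if it agrees with letters placed by earlier slots.
        if all(grid.get(cell, ch) == ch for cell, ch in zip(slot, word)):
            placed = [cell for cell in slot if cell not in grid]
            for cell, ch in zip(slot, word):
                grid[cell] = ch
            used.add(word)
            if fill(slots, grid, words, used, i + 1):
                return True
            used.discard(word)       # backtrack: undo this placement
            for cell in placed:
                del grid[cell]
    return False


# Two crossing slots: one across (row 0), one down (column 0).
slots = [[(0, 0), (0, 1), (0, 2)],   # across
         [(0, 0), (1, 0), (2, 0)]]   # down
grid = {}
if fill(slots, grid, WORDS):
    print("".join(grid[c] for c in slots[0]), "crosses",
          "".join(grid[c] for c in slots[1]))  # prints: cat crosses car
```

The across slot takes “cat,” which forces the crossing down slot to “car.” A real filler applies the same idea across a full grid and dictionary; the point is that the algorithm, unlike a chatbot generating one word at a time, always knows which letters it has already placed.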
Later, I came across a blog post from a developer in Britain, Alberto Fortin, that called vibe-coding a “recipe for disaster” for non-coders. “Walls of code you don’t understand, error after error,” he wrote, “which makes things even messier and complicated.” As the creator of a hot vibe-coded crossword app, I took offence, though I knew he was right.
Depending on the platform, vibe-coding can be expensive. You typically buy credits, and you can quickly burn through them when directing AI to fix errors beyond your comprehension. “You can spend quite a lot of money with AI, and you can also get nowhere,” Mr. Fortin told me. “It’s a bit of a shame that it’s being hyped so much.”
The market always provides a solution, though. Search LinkedIn, and you’ll find an emerging class of “vibe-coding cleanup specialists” who offer to fix your janky AI-generated app. Heygon Lago, who is completing a master’s degree in Toronto, has tidied up a few projects for people who had given up. Security issues and the need to connect to databases tend to throw up roadblocks. “All of these clients started with an idea using AI, but couldn’t even validate it with real customers because they couldn’t get it publicly accessible,” he said. “Or what they built was publicly accessible and full of errors.” Sometimes, he’s had to start over from scratch.
Amid a tough time for entry-level workers, cleaning up vibe-coded slop could prove to be a decent side hustle. Saad Sahi in Edmonton said he applied for hundreds of jobs without success after graduating with a computer science degree this year. The part-time gig he already had turned into full-time this fall, but many of his peers are still looking.
Mr. Sahi has worked on about nine clean-up jobs so far on the side. “I’ll be the middleman to fix everything,” he said. He doubted that AI tools are going to be perfect any time soon, meaning more gigs for him. “A normal human who’s never been that close to computer languages cannot fully explain what’s in their mind,” he said. “I can understand human language and I can understand computer language.”
Sahi at his desk on Dec. 12. He says he does not think AI tools will be perfect anytime soon, meaning work for him. Amber Bracken/The Globe and Mail
Just as amateurs can hit a wall, professional coders have found the limits. Josh Anderson, a consultant in North Carolina, used Anthropic’s Claude to entirely vibe-code an app for planning road trips. As the lines of code piled up and the project became more complex, things started to break. Claude freelanced instead of following directions, and insisted it had correctly completed tasks when, in fact, it had not. He had to dive into the code himself. “I was just lost,” he said. The code was ostensibly his, but it wasn’t really, and he had no intuitive understanding of it.
Still, he completed the app in about 60 hours on evenings and weekends (without AI, it would have taken eight hours a day for a few weeks) and the exercise was useful to understand the edges of these tools. He found Claude failed to remember context beyond a certain point, unlike humans. “What’s most effective now with the current version of the models is to keep its scope and context as small as possible,” he said.
Researchers have documented similar complications. Yegor Denisov-Blanch at Stanford University tracked what happened when the engineering team at a large unnamed company adopted AI this year. Some companies and studies simply measure the lines of code generated as a proxy for productivity, but by looking deeper, Mr. Denisov-Blanch found that the quality of code decreased and it required more editing after the introduction of AI, while productivity barely budged. In a conference presentation in December, he said the results should not be interpreted as a reason to ditch AI. Instead, the company should use the data to figure out what’s going wrong.
Mr. Fortin once looked at the guts of a vibe-coded project and was shocked. The script was like something written by a team of developers working in isolation from one another. He eventually realized he was relying too much on AI and that his coding skills were at risk of atrophying. His first instinct was no longer to sketch out ideas or solve problems with a pen and paper, but to turn to AI. “It’s very hard to resist the urge to just give it as many things as you can,” he said. He now forces himself to carry a notepad and pen to write things down first. Then he’ll take a photo and feed it to AI.
Mr. Fortin has been coding for years. What about students who are just learning computer science? In school, under pressure to turn in assignments and score decent grades, the temptation to rely on AI coding tools – and skip the hard work of learning – is presumably great.
Eyal de Lara, computer science chair at the University of Toronto, said it’s too early to say what kind of impact AI is having. Still, there have been changes. Assignments are now more complex, and there is an emphasis on having students walk instructors through their code to ensure they understand it. The department has also spent “a significant amount of money” upgrading labs to quiz students regularly in a controlled environment. “The faculty has worked in different ways to make sure they are periodically testing the students to make sure they’re actually learning,” he said.
At the University of Waterloo, Charles Clarke, who teaches first-year students, is concerned that some of them may rely too much on AI, and then their skills will plateau. “I can sense it happening,” he said, adding that it’s only his personal observation and that it’s the school’s role to help students avoid that trap.
He uses AI in his own programming work, in targeted ways. “I don’t think I’d ever code again without some kind of AI assistance,” he said. “It’s like working with a junior developer. You’re keeping an eye on everything they’re doing.”
Ironically, the guy credited with the term appears to have become slightly less enamoured with it. When Andrej Karpathy released a new project in October, someone asked him on X how much he wrote himself. Mr. Karpathy tried AI, he wrote, but it didn’t work well enough and was downright unhelpful. “It’s basically entirely handwritten.”