Augment has updated its pricing model for Augment Code, an AI coding assistant, to be based on AI usage rather than message interactions. The company said its existing model “isn’t sustainable,” but users have calculated that the new one is more than ten times as expensive.

The startup was launched in April 2024, co-founded by Igor Ostrovsky (an ex-Microsoft software engineer) and Guy Gur-Ari (an ex-Google AI researcher), and backed by venture capital including investment from former Google CEO Eric Schmidt. Its main product is Augment Code, which provides AI-powered chat, Next Edit code suggestions, inline code completions, and agentic AI programming that can create an app from scratch.

Popular features in Augment Code include Memories, which persists context across conversations, and a 200K-token context window, meaning the AI is better informed about the codebase it is asked to work on.

The price increase follows an earlier hike just six months ago. Originally, there was a free community plan for individuals, a $30 per user/month plan for professionals, and a $60 plan for enterprises, all of which provided unlimited chats and completions. In early May, this was replaced by “new, simpler pricing” based on the number of messages successfully processed. Free users got 50 messages, a $50 developer plan 600 messages, a $100 professional plan 1,500 messages, and a $250 max plan 4,500 messages.

A developer complained at the time that it “now costs more than Cursor and Windsurf combined.” The free plan then disappeared, replaced by an indie plan with 125 messages at $20/month.

That was just the start. A new post from CEO Matt McClernan said “the user message model isn’t sustainable for Augment Code as a business.”

The problem is that the message abstraction does not reflect the actual AI usage, he explained, since a complex prompt could involve a lot of backend processing. He said one user on the $250 max plan is costing the company “approaching $15,000 per month,” though it is not clear whether this user also purchased additional messages.
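McClernan did not spell out the backend mechanics, but the mismatch is easy to illustrate. The Python sketch below is a hypothetical model, not Augment’s actual architecture or pricing: it assumes an agentic request loops through several plan-and-act steps, re-sending the accumulated context each time, and uses a made-up blended per-token rate.

```python
# Hypothetical illustration - not Augment's actual backend or prices - of why a
# flat per-message price can wildly understate real AI usage. One agentic
# "message" loops through many plan/act steps, and each step re-sends the
# accumulated context to the model.

ASSUMED_PRICE_PER_1K_TOKENS = 0.01  # made-up blended rate in dollars


def cost_of_one_message(context_tokens: int, steps: int, output_per_step: int) -> float:
    """Estimate the dollar cost of one user message that fans out into many model calls."""
    total_tokens = 0
    for step in range(steps):
        # Each step sends the original context plus everything generated so far,
        # then produces some new output.
        input_tokens = context_tokens + step * output_per_step
        total_tokens += input_tokens + output_per_step
    return total_tokens / 1_000 * ASSUMED_PRICE_PER_1K_TOKENS


# A quick chat question: small context, a single model call.
print(f"Simple chat turn: ${cost_of_one_message(2_000, steps=1, output_per_step=500):.2f}")

# An agentic task over a large codebase: big context, twenty iterations.
print(f"Agentic request:  ${cost_of_one_message(150_000, steps=20, output_per_step=1_000):.2f}")
```

On the numbers assumed here, the two requests differ in cost by roughly a factor of a thousand, yet each counted as a single message under the old plans.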

The latest model is based on credits, which are intended to reflect the actual cost of processing prompts. Since there is no exact mapping from messages to credits, the impact on users is variable, but McClernan noted that “our heaviest users will likely feel the change in price the most.”

Users have done their own calculations. One was informed by email that in the previous seven days they had used “31 messages, corresponding to 40,982 credits under the new pricing model.” This works out to a more than tenfold price increase. “I’m out. It was good while it lasted,” they said.

Another reaction was that Augment had exploited early users to refine the system and is now pricing them out. “We tested and optimized their infrastructure and paid for the privilege, and now are tossed aside.”

McClernan argued that usage-based pricing is “fast becoming the industry standard,” referencing pricing changes from competitors including Zed, Replit, Cursor, and Anthropic.

It appears that Augment entered this market with an unrealistic pricing model; two huge price rises within six months suggest a substantial miscalculation.

That said, it is also a wake-up call for users who do not appreciate just how compute-intensive, and therefore costly, AI processing is.
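One way to get a feel for those costs is to count tokens locally before a prompt is ever sent. The sketch below uses OpenAI’s open source tiktoken tokenizer; the model names and per-million-token prices are illustrative assumptions, not any vendor’s actual rate card.

```python
# Rough pre-flight cost estimate for a prompt, using the tiktoken tokenizer.
# The model names and prices below are placeholders for illustration only;
# check the current rate card for whatever model and provider you actually use.
import tiktoken

ASSUMED_INPUT_PRICE_PER_M_TOKENS = {  # dollars per million input tokens (made up)
    "frontier-model": 15.00,
    "mid-tier-model": 3.00,
    "small-model": 0.25,
}


def estimate_input_cost(prompt: str, model: str) -> float:
    """Count tokens locally and estimate what the prompt costs to send."""
    encoder = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era OpenAI models
    n_tokens = len(encoder.encode(prompt))
    return n_tokens / 1_000_000 * ASSUMED_INPUT_PRICE_PER_M_TOKENS[model]


# Simulate pasting a large module into the prompt along with an instruction.
prompt = "Refactor this module to remove duplication.\n" + "def helper():\n    return 42\n" * 2_000

for model in ASSUMED_INPUT_PRICE_PER_M_TOKENS:
    print(f"{model}: ~${estimate_input_cost(prompt, model):.4f} per request")
```

On the made-up prices above, the same prompt comes out around 60 times cheaper on the small model than on the frontier one.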

A lesson here is the importance of AI cost optimization. Developers can craft prompts that use fewer tokens and tune their use of models to reduce consumption, and costs vary widely from one model and provider to another. In an enterprise context, though, persuading developers to think about cost optimization when they are focused on getting their coding done may be challenging. ®