How AI coding agents work—and what to remember if you use them

This context limit naturally caps the size of a codebase an LLM can process at one time, and if you feed the model lots of huge code files (which the LLM has to re-evaluate every time you send another message), you can burn through token or usage limits quickly.

Tricks of the trade

To work around these limits, the creators of coding agents use several tricks. For example, AI models are fine-tuned to write code that outsources work to other software tools: they might write Python scripts to extract data from images or files rather than feeding the whole file through the LLM, which saves tokens and avoids inaccurate results.

Anthropic's documentation [notes](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents) that Claude Code also uses this approach to perform complex data analysis over large databases, writing targeted queries and using Bash commands like "head" and "tail" to examine large volumes of data without ever loading the full data objects into context.

(In a way, these AI agents are guided but semi-autonomous tool-using programs, a major extension of a concept we [first saw](https://arstechnica.com/information-technology/2023/02/meta-develops-an-ai-language-bot-that-can-use-external-software-tools/) in early 2023.)

Another major breakthrough in agents came from dynamic context management. Agents handle this in several ways that proprietary coding models do not fully disclose, but we do know the most important technique they use: context compression.

[Image: The command-line version of OpenAI Codex running in a macOS terminal window. Credit: Benj Edwards]

When a coding LLM nears its context limit, this technique compresses the conversation history by summarizing it, losing detail in the process but condensing the history down to its key points. Anthropic's documentation [describes](https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents) this "compaction" as distilling the context's contents in a high-fidelity manner, preserving key details like architectural decisions and unresolved bugs while discarding redundant tool outputs.

This means AI coding agents periodically "forget" a large portion of what they are doing each time this compression happens, but unlike older LLM-based systems, they aren't completely clueless about what has transpired: they can rapidly re-orient themselves by reading existing code, notes left in files, change logs, and so on.
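The tool-outsourcing idea described above can be sketched in Python. Rather than placing an entire file in the model's context, an agent can generate and run a small script that extracts only the relevant lines, and only that short result goes back into the conversation. This is a minimal illustration, not code from any real agent; the function name and the `max_results` cap are my own assumptions.

```python
import re

def extract_relevant_lines(path, pattern, max_results=20):
    """Scan a (possibly huge) file line by line and return only the
    lines matching `pattern` -- a tiny result an agent could place in
    the model's context instead of the whole file."""
    regex = re.compile(pattern)
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if regex.search(line):
                hits.append(f"{lineno}: {line.rstrip()}")
                if len(hits) >= max_results:
                    break  # stop early; never read more than needed
    return "\n".join(hits)
```

In practice the agent writes and executes a script like this itself, so the large file is consumed by ordinary software while the LLM only ever sees the short output.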
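The "head" and "tail" trick that Anthropic's documentation mentions can likewise be approximated in pure Python: read only the first and last few lines of a large file so the model sees a representative sample instead of the full contents. A sketch, with the function name being my own invention:

```python
from collections import deque
from itertools import islice

def sample_head_tail(path, n=5):
    """Return the first and last `n` lines of a file without ever
    holding the whole file in memory -- analogous to piping a large
    file through the Unix `head` and `tail` commands."""
    with open(path, encoding="utf-8", errors="replace") as f:
        head = list(islice(f, n))   # first n lines only
        tail = deque(f, maxlen=n)   # keeps just the last n of the rest
    return "".join(head), "".join(tail)
```

Because the file object is consumed lazily and `deque(maxlen=n)` discards everything but the final lines, memory use stays proportional to `n`, not to the file size.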
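The compaction step described above can be sketched as a function that, once the conversation history exceeds a token budget, folds the oldest messages into a single summary message while keeping the most recent turns verbatim. To be clear about assumptions: real agents ask the model itself to write the summary, and real systems use proper tokenizers; the placeholder summarizer and the rough four-characters-per-token estimate below are stand-ins for illustration only.

```python
def estimate_tokens(text):
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4

def compact_history(messages, budget, keep_recent=4, summarize=None):
    """If `messages` exceeds `budget` (estimated tokens), replace all but
    the last `keep_recent` messages with one summary message."""
    total = sum(estimate_tokens(m) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages  # still within budget; nothing to compact
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    if summarize is None:
        # Placeholder: a real agent would have the LLM distill key details
        # (architectural decisions, unresolved bugs) out of `old` here.
        summarize = lambda msgs: f"[summary of {len(msgs)} earlier messages]"
    return [summarize(old)] + recent
```

This also makes the article's final point concrete: after compaction the agent has only the summary plus the recent turns, which is why it leans on existing code, notes files, and change logs to recover anything the summary dropped.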