Most experts agree, however, that AI agents are self-contained code modules that can direct actions independently. Andres Riancho, cybersecurity researcher at Wiz, tells CSO, “The basic concept is that you are going to have an LLM that can decide to perform a task, that is then going to be executed through most likely an MCP,” or Model Context Protocol server, which acts as a bridge between AI models and various external tools and services.
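In concrete terms, an MCP server is a small program that registers functions, or tools, that an agent's LLM can choose to invoke. As a minimal sketch, assuming the official MCP Python SDK and its FastMCP interface, a server exposing a single stubbed tool might look like the following; the server name and the `lookup_ticket` function are hypothetical placeholders:

```python
# Minimal sketch of an MCP server using the official MCP Python SDK.
# "demo-tools" and lookup_ticket are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def lookup_ticket(ticket_id: str) -> str:
    """Return the status of a support ticket (stubbed for illustration)."""
    # A real server would query an internal system here; the key point is
    # that the agent's LLM, not a human, decides when to call this tool.
    return f"Ticket {ticket_id}: open"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; an agent host connects to it
```

The server merely advertises what it can do; deciding when to call `lookup_ticket`, and what to do with the result, rests with the model, which is the autonomy Riancho describes.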
Ben Seri, co-founder and CTO of Zafran Security, draws a parallel between the rise of AI agents and the rise of generative AI itself. “These are the tools that would enable this LLM to act like an analyst, like a mediator, like something of that nature,” he tells CSO. “It’s not that different in a way from generative AI where it started, where it’s a machine, you can give it a question, and it can give you an answer, but the difference is now it’s a process. It’s when you are taking an AI and LLM and you’re giving it agency or ability to perform some actions on its own.”
Trust, transparency, and moving slowly are crucial
Like all technologies, and perhaps more dramatically than most, agentic AI carries both risks and benefits. One obvious risk of AI agents is that, like the LLMs that power them, they can hallucinate or make errors, and because agents act on their own, those errors can translate directly into real-world problems.