{"id":549070,"date":"2026-03-20T13:37:10","date_gmt":"2026-03-20T13:37:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/549070\/"},"modified":"2026-03-20T13:37:10","modified_gmt":"2026-03-20T13:37:10","slug":"meta-ai-agents-instruction-causes-large-sensitive-data-leak-to-employees-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/549070\/","title":{"rendered":"Meta AI agent\u2019s instruction causes large sensitive data leak to employees | AI (artificial intelligence)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">An AI agent instructed an engineer to take actions that exposed a large amount of Meta\u2019s sensitive data to some of its employees, in the latest example of AI causing upheaval in a large tech company.<\/p>\n<p class=\"dcr-130mj7b\">The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. An AI agent responded with a solution, which the employee implemented \u2013 causing a large amount of sensitive user and company data to be exposed to its engineers for two hours.<\/p>\n<p class=\"dcr-130mj7b\">\u201cNo user data was mishandled,\u201d a Meta spokesperson said, and they emphasised that a human could also give erroneous advice. The incident, first reported by The Information, triggered a major internal security alert inside Meta, which the company has said is an indication of how seriously it takes data protection.<\/p>\n<p class=\"dcr-130mj7b\">This breach is one of several recent high-profile <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/feb\/20\/amazon-cloud-outages-ai-tools-amazon-web-services-aws\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">incidents<\/a> caused by the increasing use of AI agents within US tech companies. 
Last month, a report from the Financial Times said Amazon experienced at least two outages related to the deployment of its internal AI tools.<\/p>\n<p class=\"dcr-130mj7b\">More than half a dozen Amazon employees later <a href=\"https:\/\/www.theguardian.com\/technology\/ng-interactive\/2026\/mar\/11\/amazon-artificial-intelligence\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">spoke to the Guardian<\/a> about the company\u2019s haphazard push to integrate AI into all elements of their work, leading, they said, to glaring errors, sloppy code and reduced productivity.<\/p>\n<p class=\"dcr-130mj7b\">The technology that underlies all these incidents, agentic AI, has evolved rapidly in recent months. In December, developments in Anthropic\u2019s AI coding tool, Claude Code, triggered widespread <a href=\"https:\/\/www.theatlantic.com\/technology\/2026\/01\/claude-code-ai-hype\/685617\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">hubbub<\/a> over its ability to autonomously book theatre tickets, manage personal finance, and even grow plants.<\/p>\n<p class=\"dcr-130mj7b\">Soon after came OpenClaw, a viral AI personal <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/feb\/02\/openclaw-viral-ai-agent-personal-assistant-artificial-intelligence\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">assistant<\/a> that ran on top of agents such as Claude Code but could operate entirely autonomously \u2013 trading away millions of dollars in cryptocurrency, for example, or mass-deleting users\u2019 emails \u2013 leading to heady talk about the advent of AGI, or artificial general intelligence, a catch-all term for AI capable of replacing humans across a wide range of tasks.<\/p>\n<p class=\"dcr-130mj7b\">In the weeks since, stock markets have wobbled over fears that AI agents will gut software businesses, <a 
href=\"https:\/\/www.theguardian.com\/technology\/2026\/feb\/24\/feedback-loop-no-brake-how-ai-doomsday-report-rattled-markets\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">reshape<\/a> the economy and replace human workers.<\/p>\n<p class=\"dcr-130mj7b\">Tarek Nseir, a co-founder of a consulting company focused on how businesses use AI, said these incidents showed that Meta and Amazon were in \u201cexperimental phases\u201d of deploying agentic AI.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThey\u2019re not really kind of standing back from these things and actually really taking an appropriate risk assessment. If you put a junior intern on this stuff, you would never give that junior intern access to all of your critical severity one HR data,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe vulnerability would have been very, very obvious to Meta in retrospect, if not in the moment. And what I can say and will say is this is Meta experimenting at scale. It\u2019s Meta being bold.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Jamieson O\u2019Reilly, a security specialist who focuses on building offensive AI, said AI agents introduced a certain kind of error that humans did not \u2013 and this may explain the incident at Meta.<\/p>\n<p class=\"dcr-130mj7b\">A human knows the \u201ccontext\u201d of a task \u2013 the implicit knowledge that one should not, for example, set the sofa on fire in order to heat the room, or delete a little-used but crucial file, or take an action that would expose user data downstream.<\/p>\n<p class=\"dcr-130mj7b\">For AI agents, this is more complicated. 
They have \u201ccontext windows\u201d \u2013 a sort of working memory \u2013 in which they carry instructions, but those instructions fade as a task goes on, leading to error.<\/p>\n<p class=\"dcr-130mj7b\">\u201cA human engineer who has worked somewhere for two years walks around with an accumulated sense of what matters, what breaks at 2am, what the cost of downtime is, which systems touch customers. That context lives in them, in their long-term memory, even if it\u2019s not front of mind,\u201d O\u2019Reilly said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe agent, on the other hand, has none of that unless you explicitly put it in the prompt, and even then it starts to fade unless it is in the training data.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Nseir said: \u201cInevitably there will be more mistakes.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"An AI agent instructed an engineer to take actions that exposed a large amount of Meta\u2019s sensitive data&hellip;\n","protected":false},"author":2,"featured_media":549071,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-549070","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/549070","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=549070"}],"version-history":[{"count":0,
"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/549070\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/549071"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=549070"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=549070"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=549070"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}