{"id":511330,"date":"2026-03-03T09:33:10","date_gmt":"2026-03-03T09:33:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/511330\/"},"modified":"2026-03-03T09:33:10","modified_gmt":"2026-03-03T09:33:10","slug":"what-is-openai-going-to-do-when-the-truth-comes-out","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/511330\/","title":{"rendered":"What is OpenAI going to do when the truth comes out?"},"content":{"rendered":"<p>This is a column about AI. My boyfriend works at Anthropic. See\u00a0<a href=\"https:\/\/www.platformer.news\/ethics\/\" rel=\"nofollow noopener\" target=\"_blank\">my full ethics disclosure here<\/a>.<\/p>\n<p>&#8220;In [Murati\u2019s] experience, Altman had a simple playbook: first, say whatever he needed to say to get you to do what he wanted, and second, if that didn\u2019t work, undermine you or destroy your credibility \u2026 It had taken Sutskever years to be able to put his finger on Altman\u2019s pattern of behavior \u2014 how OpenAI\u2019s CEO would tell him one thing, then say another and act as if the difference was an accident. \u201cOh, I must have misspoken,\u201d Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.&#8221; \u2014 Keach Hagey, <a href=\"https:\/\/wwnorton.com\/books\/9781324075974?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">The Optimist<\/a><\/p>\n<p>I.<\/p>\n<p>I thought of this passage from The Optimist over the weekend as I worked to make sense of a rather stunning series of events. 
The Pentagon followed through with <a href=\"https:\/\/www.platformer.news\/anthropic-pentagon-authoritarian-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">its threat<\/a> to terminate the military\u2019s contract with Anthropic over the company\u2019s refusal to amend its prior agreement to permit \u201call lawful use\u201d of its technology, including mass domestic surveillance and autonomous weapons. It further threatened to designate Anthropic as a \u201csupply chain risk,\u201d a move previously reserved for corporate extensions of foreign adversaries, and move to block any company that contracts with the military from using Anthropic\u2019s products.<\/p>\n<p>For the briefest of moments, it appeared as if Anthropic might have an ally in the fight: on Friday morning, Hagey (in her regular perch at the Wall Street Journal) <a href=\"https:\/\/www.wsj.com\/tech\/ai\/openais-sam-altman-calls-for-de-escalation-in-anthropic-showdown-with-hegseth-03ecbac8?mod=hp_lead_pos1&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">reported<\/a> that Altman had sent a memo to OpenAI\u2019s staff saying that he would draw the same \u201cred lines\u201d Anthropic had.<\/p>\n<p>\u201cWe have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,\u201d he wrote, \u201cand that humans should remain in the loop for high-stakes automated decisions. 
These are our main red lines.\u201d<\/p>\n<p>And by Friday evening, Altman <a href=\"https:\/\/www.cnbc.com\/2026\/02\/27\/openai-strikes-deal-with-pentagon-hours-after-rival-anthropic-was-blacklisted-by-trump.html?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">announced on X<\/a> that OpenAI had reached an agreement with the Pentagon for classified AI deployment \u2014 with the same red lines, he claimed, now baked into the contract.<\/p>\n<p>Setting aside for a moment the government\u2019s unhinged retaliation against Anthropic, Altman\u2019s claim to have won concessions from the US military offered at least some reason for hope. If powerful AI systems are to be embedded in systems of state violence, the least that Americans can ask for in return are mechanisms of oversight and restraint. Altman said OpenAI had achieved just that.<\/p>\n<p>\u201cTwo of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,\u201d Altman said in an X post.\u00a0 \u201cThe DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.\u201d<\/p>\n<p>Immediately, Altman\u2019s claim fell under scrutiny. Was it not suspicious that OpenAI claimed to have won with just a few days of negotiating the concessions that Anthropic had not? Was it possible that the same Pentagon officials railing on X against the idea of a private company attempting to exert control of the military were now making an exception for OpenAI?<\/p>\n<p>Was the public now, like Mira Murati and Ilya Sutskever before them, caught in the familiar Altman trap that begins with him telling them what they want to hear?<\/p>\n<p>II.\u00a0<\/p>\n<p>Notably, in this case few seemed to extend to Altman the benefit of the doubt. 
The most popular post on the ChatGPT subreddit over the past week is titled \u201c<a href=\"https:\/\/www.reddit.com\/r\/ChatGPT\/comments\/1rgseae\/youre_now_training_a_war_machine_lets_see_proof\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">You\u2019re now training a war machine. Let\u2019s see proof of cancellation<\/a>\u201d; it received more than 32,000 upvotes. Similar posts in that forum and the OpenAI subreddit also received <a href=\"https:\/\/www.reddit.com\/r\/OpenAI\/comments\/1rgrccs\/the_end_of_gpt\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">tens of thousands of upvotes<\/a>; the company also came in for extended criticism on <a href=\"https:\/\/news.ycombinator.com\/item?id=47189650&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Hacker News<\/a>.<\/p>\n<p>And as the weekend went on, additional reporting suggested that the knee-jerk cynicism triggered by OpenAI\u2019s deal was justified.<\/p>\n<p>In The Verge, Hayden Field reported that contrary to OpenAI\u2019s public statements \u2014 and consistent with the military\u2019s own framing of its demands \u2014 the company\u2019s deal with the Pentagon includes fewer restrictions than Anthropic\u2019s had.<\/p>\n<p><a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/887309\/openai-anthropic-dod-military-pentagon-contract-sam-altman-hegseth?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">She writes<\/a>:<\/p>\n<p>One source familiar with the Pentagon\u2019s negotiations with AI companies confirmed that OpenAI\u2019s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: \u201cany lawful use.\u201d In negotiations, the person said, the Pentagon wouldn\u2019t back down on its desire to collect and analyze bulk data on Americans. 
If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it\u2019s technically legal, then the US military can use OpenAI\u2019s technology to carry it out. And over the past decades, the US government has stretched the definition of \u201ctechnically legal\u201d to cover sweeping mass surveillance programs \u2014 and more.<\/p>\n<p>OpenAI might be able to partially block the military\u2019s efforts to conduct domestic surveillance by building classifiers and implementing other model-level safeguards, as it has said it will do. And yet it\u2019s essential to remember that most tasks related to mass surveillance might not look that way to a model. The government can upload massive spreadsheets of data bought legally from data brokers and ask GPT models to conduct all sorts of analyses that will not identify themselves as efforts to build systems of oppression.<\/p>\n<p>And in any case, we know that the Pentagon tried repeatedly to eliminate meaningful safeguards in Anthropic\u2019s contract through innocuous-seeming word changes and a generous dusting of legalese.\u00a0<\/p>\n<p>Ross Andersen described the process <a href=\"https:\/\/www.theatlantic.com\/technology\/2026\/03\/inside-anthropics-killer-robot-dispute-with-the-pentagon\/686200\/?gift=2iIN4YrefPjuvZ5d2Kh30zpPxOtZj8TuGGLnTN11Z-s&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">in The Atlantic<\/a>. \u201cThe Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic,\u201d he reported on Sunday. 
\u201cIt would pledge not to use Anthropic\u2019s AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole-y phrases like as appropriate \u2014 suggesting that the terms were subject to change, based on the administration\u2019s interpretation of a given situation.\u201d<\/p>\n<p>Moreover, on the subject of autonomous weapons, Bloomberg reported last month that OpenAI is participating in a competition to develop software <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2026-02-13\/openai-tapped-for-voice-control-tech-in-us-drone-swarm-challenge?sref=CrGXSfHu&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">that will allow drones to be controlled via voice<\/a>. (Anthropic <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2026-03-02\/anthropic-made-pitch-in-drone-swarm-contest-during-pentagon-feud?sref=CrGXSfHu&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">participated in the competition<\/a>, too \u2014 reminding us that Dario Amodei\u2019s objection to murderbots isn\u2019t that they are immoral, but that they don\u2019t work very well yet.)<\/p>\n<p>If you build voice controls for the murderbot but not the murderbot itself, is that consistent with OpenAI\u2019s usage policy?<\/p>\n<p>\u201cIt turns out that the usage policy can be read in a few ways,\u201d writes Sarah Shoker, who led OpenAI\u2019s geopolitics team for three years before leaving last June, on <a href=\"https:\/\/sarahshoker.substack.com\/p\/a-few-observations-on-ai-companies?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">her Substack<\/a>. 
\u201cDepending on whether you believe that the use of an AI voice-to-digital tool in a kill-chain amounts to helping build a weapon, or if you believe that an AI model can be treated in isolation from its larger weapon system.\u201d<\/p>\n<p>The problem, Shoker writes, is that almost all of the relevant definitions here \u2014 again, the definitions relevant to whether and how you will be surveilled as an American, and which large language models might guide a drone swarm that someday attacks you \u2014 are up for debate.<\/p>\n<p>\u201cPolicy and law are not free-floating static \u2018things,\u2019\u201d she writes. \u201cThe borders of the law are fuzzy and filtered through political ideology. Throughout US history, policymakers have reinterpreted and exploited gaps in the law to allow for activity that independent legal observers have called straightforwardly illegal.\u201d<\/p>\n<p>She continues:<\/p>\n<p>There isn\u2019t a consensus over what it means in practice to have adequate \u2018human supervision,\u2019 \u2018human in the loop\u2019 or \u2018meaningful human control\u2019 in autonomous weapons systems. Terms that reference human oversight <a href=\"https:\/\/lieber.westpoint.edu\/how-meaningful-is-meaningful-human-control-laws-regulation\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">remain contentious<\/a> around the world. Militaries are still trying to develop new testing and evaluation procedures for reducing problems like e.g. over-reliance in human-AI teams. It\u2019s possible that Anthropic disagreed with how \u2018human supervision\u2019 (broadly speaking) would be put into practice.<\/p>\n<p>A few frontier AI company employees have asked me about whether the \u2018lawful purposes\u2019 language is a sufficiently strong bulwark against misuse. The answer is always going to be it depends. 
You have to decide whether that\u2019s good enough and if you trust your company leaders to respond effectively in case something goes wrong.<\/p>\n<p>III.<\/p>\n<p>As public opinion began to turn against OpenAI \u2014 uninstalls of ChatGPT were up nearly 300 percent over the weekend, market research firm <a href=\"https:\/\/techcrunch.com\/2026\/03\/02\/chatgpt-uninstalls-surged-by-295-after-dod-deal\/?utm_campaign=social&amp;utm_source=threads&amp;utm_medium=organic\" rel=\"nofollow noopener\" target=\"_blank\">Sensor Tower estimated<\/a> \u2014 the company sought to reassure the public.<\/p>\n<p>A <a href=\"https:\/\/openai.com\/index\/our-agreement-with-the-department-of-war\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">blog post<\/a> laid out what it described as a comprehensive, layered approach to ensuring its red lines are never crossed, and posted what it said is the \u201crelevant\u201d portion of its contract with the military. And Altman and some of his colleagues at the company <a href=\"https:\/\/techcrunch.com\/2026\/03\/01\/openai-shares-more-details-about-its-agreement-with-the-pentagon\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">answered questions<\/a> from people on X.<\/p>\n<p>Jessica Tillipman, an expert in government contracts and professor at George Washington University Law School, analyzed the deal and the surrounding debate. 
For starters, <a href=\"https:\/\/jessicatillipman.com\/what-rights-do-ai-companies-have-in-government-contracts\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">she said<\/a> \u2014 and contrary to howling right-wing commentators who accused Anthropic of trying to subvert the democratic process by refusing to accept the military\u2019s demands \u2014 \u201ccontractors restrict the government\u2019s use of their products all the time.\u201d<\/p>\n<p>It is at least possible, she writes, that the safeguards OpenAI outlined would give it meaningful leverage to restrict the use of its models for whichever forms of surveillance and drone killing it takes issue with. But there is an enormous unanswered question \u2014 what happens when OpenAI and the military disagree?\u00a0<\/p>\n<p>Tillipman writes:<\/p>\n<p>If a classifier blocks a particular use, the question is whether the government has a contractual right to demand its removal. OpenAI asserts that it retains \u201cfull discretion\u201d over those systems.<\/p>\n<p>This creates tension at the heart of the agreement. The contract permits use \u201cfor all lawful purposes,\u201d subject to \u201coperational requirements\u201d and \u201cwell-established safety and oversight protocols.\u201d OpenAI says it retains full discretion over the safety stack it runs in a cloud-only deployment. If the safety stack blocks a lawful use, which provision controls? The answer depends on the specific contract language governing the relationship between the permissive use standard and the deployment framework \u2014 language that has not been made public.<\/p>\n<p>The Pentagon reacted to its disagreement with Anthropic \u2014 over a contract it had once willingly signed \u2014 by announcing an effort to destroy the company. The idea that some vague contractual language and a \u201csafety stack\u201d will prevent Defense Sec. 
Pete Hegseth and his subordinates from taking a maximalist view of their rights to OpenAI\u2019s intellectual property is either impossibly naive or outright deceptive.\u00a0<\/p>\n<p>In response to my questions, OpenAI pointed me to <a href=\"https:\/\/x.com\/sama\/status\/2028640354912923739?ref=platformer.news\" rel=\"nofollow\">another X post<\/a> that Altman published on Monday evening. In it, Altman said OpenAI plans to amend its contract with the Pentagon to add further restrictions on the use of its systems for surveillance, and that the National Security Agency will not be using GPT models. I&#8217;m told the Pentagon has agreed to the changes. These sound like meaningful improvements; we\u2019ll see. <\/p>\n<p>\u201cOne thing I think I did wrong: we shouldn&#8217;t have rushed to get this out on Friday,\u201d Altman added. \u201cThe issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.\u201d<\/p>\n<p>Indeed. But in the end I\u2019m left asking myself what will happen in the scenario that still seems disturbingly likely \u2014 that GPT models will in fact be used as part of surveillance and drone operations. Will OpenAI put up a blog post to explain that, well actually, that\u2019s a lawful kind of surveillance? Do an AMA about how, despite how it may look, that autonomous drone swarm had proper human supervision? OpenAI does enough polling to understand that Americans already distrust and even openly loathe AI, even as they increasingly turn to it for work and school. How does it think Americans will feel when GPT models are powering ICE raids or causing civilian casualties in wars abroad?\u00a0<\/p>\n<p>The company may have tied its own hands. In the end, the truth about US military operations always seems to come out one way or another. 
And when it does, I suspect the \u201call lawful use\u201d standard that OpenAI agreed to will have permitted a far wider range of operations than we are now being told are possible.\u00a0<\/p>\n<p>The problem with telling everyone what they want to hear is that eventually reality catches up with you. The people who will live under AI-powered surveillance, and the people in the flight path of AI-assisted drone swarms \u2014 they&#8217;re the ones who are going to find out what OpenAI actually agreed to do. And I suspect it will be much more than the company now expects us to believe.\u00a0<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2025\/08\/floating_linebreak_600px-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"600\" height=\"157\" \/><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/03\/HardFork-banner.jpg\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"2000\" height=\"500\"  \/><\/p>\n<p>On a bonus episode of the podcast:\u00a0Kevin and I compare notes on a tumultuous weekend for Anthropic, OpenAI, the Pentagon, and the country. 
Recorded on Saturday morning.<\/p>\n<p><a href=\"https:\/\/substack.com\/redirect\/1f026a90-0a73-4c06-91a5-d9f0074230ed?r=9cs7&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Apple<\/a>\u00a0|\u00a0<a href=\"https:\/\/substack.com\/redirect\/1ab817bf-db21-4c76-8b8b-73c3d62d0dd7?r=9cs7&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Spotify<\/a>\u00a0|\u00a0<a href=\"https:\/\/substack.com\/redirect\/8f21522a-d6a1-4ec4-a4db-2acaea82bd59?r=9cs7&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Stitcher<\/a>\u00a0|\u00a0<a href=\"https:\/\/substack.com\/redirect\/facb11f9-5648-4c10-8629-af0dbc7a8f4a?r=9cs7&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Amazon<\/a>\u00a0|\u00a0<a href=\"https:\/\/substack.com\/redirect\/3bae724f-a172-4879-83b3-50b787887714?r=9cs7&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a>\u00a0|\u00a0<a href=\"https:\/\/www.youtube.com\/@hardfork?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">YouTube<\/a><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2025\/08\/floating_linebreak_600px-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"600\" height=\"157\" \/>Following<br \/>Everyone has something to say about the Pentagon, Anthropic, and OpenAI<\/p>\n<p>What happened: As the Pentagon&#8217;s &#8220;all lawful use&#8221; drama unfolded, people started <a href=\"https:\/\/techcrunch.com\/2026\/03\/02\/chatgpt-uninstalls-surged-by-295-after-dod-deal\/?utm_campaign=social&amp;utm_source=threads&amp;utm_medium=organic\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">quitting<\/a> ChatGPT and switching to Claude. 
Reddit posts encouraging people to boycott ChatGPT have been getting tens of thousands of likes, and Anthropic\u2019s Claude app <a href=\"https:\/\/techcrunch.com\/2026\/03\/01\/anthropics-claude-rises-to-no-2-in-the-app-store-following-pentagon-dispute\/?ref=platformer.news\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">reached<\/a> no. 1 on the App Store. (And Anthropic was quick on the draw, releasing an <a href=\"https:\/\/gizmodo.com\/anthropic-improves-feature-to-switch-from-competitors-as-users-call-for-chatgpt-boycott-2000728352?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">improved tool<\/a> that helps people switch by loading context from other AI apps into Claude.)<\/p>\n<p>Anthropic received strong <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2026-02-27\/anthropic-s-feud-with-pentagon-mushrooms-into-broader-battle?sref=CrGXSfHu&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">declarations of support<\/a> from tech workers, too. A coalition representing 700,000 employees across Amazon, Google, and Microsoft <a href=\"https:\/\/medium.com\/@notechforapartheid\/jointstatement-5561f1572e46?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">demanded<\/a> their companies \u201creject the Pentagon\u2019s advances.\u201d And an <a href=\"https:\/\/notdivided.org\/?ref=platformer.news\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">open letter<\/a> from Google and OpenAI employees asked leaders to \u201crefuse the Department of War\u2019s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.\u201d<\/p>\n<p>Why we\u2019re following: Oh god. Where to start? This week\u2019s events will have long-lasting effects on Anthropic\u2019s business; OpenAI\u2019s reputation; the public\u2019s view of AI; the future of warfare; and American citizens\u2019 right to privacy. 
To say nothing of my cortisol levels.<\/p>\n<p>We\u2019re left wondering whether remaining tech stakeholders like Amazon and Google will listen to workers and the public, or negotiate new contracts with the DoD that allow their tech to be used to surveil citizens and make kill decisions.<\/p>\n<p>What people are saying: Pop star Katy Perry weighed in on the situation on X with a <a href=\"https:\/\/x.com\/katyperry\/status\/2027619173325553765?ref=platformer.news\" rel=\"nofollow\">screenshot<\/a> of her signing up for a Claude Pro subscription, captioned \u201cdone.\u201d<\/p>\n<p>When it announced its new deal with the Pentagon, OpenAI <a href=\"https:\/\/x.com\/OpenAI\/status\/2027846016423321831?s=20&amp;ref=platformer.news\" rel=\"nofollow\">voiced some support<\/a> for Anthropic on X, saying \u201cwe do not think Anthropic should be designated as a supply chain risk and we\u2019ve made our position on this clear to the Department of War.\u201d<\/p>\n<p>Discussing OpenAI\u2019s renegotiated agreement, OpenAI researcher Aidan McLaughlin <a href=\"https:\/\/x.com\/aidan_mclau\/status\/2028507663529906395?ref=platformer.news\" rel=\"nofollow\">wrote<\/a>, \u201ci personally don\u2019t think this deal was worth it.\u201d OpenAI safety researcher Cameron Raymond <a href=\"https:\/\/x.com\/CJKRaymond\/status\/2028573907868144016?s=20&amp;ref=platformer.news\" rel=\"nofollow\">replied<\/a>, \u201cidk how the dust will settle but for now i feel similarly.\u201d<\/p>\n<p>OpenAI researcher Leo Gao <a href=\"https:\/\/x.com\/nabla_theta\/status\/2028185890737311767?s=20&amp;ref=platformer.news\" rel=\"nofollow\">took issue<\/a> with OpenAI\u2019s comms about the new DoD deal. 
\u201cthe contract snippet from the openai dow blog post is so obviously just &#8216;all lawful use&#8217; followed by a bunch of stuff that is not really operative except as window dressing,\u201d he wrote.<\/p>\n<p>Gao\u2019s OpenAI colleague Boaz Barak offered a high-minded <a href=\"https:\/\/x.com\/boazbaraktcs\/status\/2028132252090007776?s=20&amp;ref=platformer.news\" rel=\"nofollow\">response<\/a>, \u201cI\u2019m proud to work at a company that contains people as brilliant and conscientious as Leo and allows them to speak their mind.\u201d He is proud of OpenAI\u2019s culture, he added: \u201cOpenAI has a lot of issues, but in terms of enabling employee pushback and discussion it is in fact still open, with all the messiness that this entails.\u201d But he added a sneak diss: \u201cLeo is an amazing researcher and person but not a lawyer or a natsec expert,\u201d and people looking to understand the situation should follow OpenAI\u2019s head of national security partnerships, Katrina Mulligan.<\/p>\n<p>In a cameo on the same thread that had me reeling, former U.S. Congressman Brad Carson <a href=\"https:\/\/x.com\/bradrcarson\/status\/2028154204649398523?s=20&amp;ref=platformer.news\" rel=\"nofollow\">responded<\/a> to Barak. \u201cI&#8217;m former general counsel of Army, former Undersecretary of Army, former Undersec of Defense. Not sure if that makes me a nat sec &#8216;expert.&#8217; But,\u201d he wrote, Gao\u2019s interpretation of the OpenAI contract \u201cis the right one, IMO.\u201d<\/p>\n<p>In a later exchange, Gao <a href=\"https:\/\/x.com\/nabla_theta\/status\/2028187393803911536?s=20&amp;ref=platformer.news\" rel=\"nofollow\">wrote<\/a> about OpenAI\u2019s culture, \u201cI do notice that the vast majority of people with views similar to me has left openai over time. 
I also think a lot of people are scared of speaking their mind.\u201d But, he said, \u201cit could also be a lot worse, and I think it&#8217;s worth being grateful for what we do have.\u201d Um. At the very least, I\u2019m grateful that as of a year ago, OpenAI no longer requires its employees to sign <a href=\"https:\/\/www.vox.com\/future-perfect\/351132\/openai-vested-equity-nda-sam-altman-documents-employees?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">highly restrictive<\/a> exit NDAs.<\/p>\n<p>The episode also opened debate over what control tech companies should have over their government contracts. Stratechery writer Ben Thompson <a href=\"https:\/\/stratechery.com\/2026\/anthropic-and-alignment\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">wrote<\/a> that this should all be up to the government\u2019s discretion: \u201cwhat is the standard by which it should be decided what is allowed and not allowed if not laws, which are passed by an elected Congress?\u201d He continued, \u201cAnthropic\u2019s position is that Amodei \u2014 who I am using as a stand-in for Anthropic\u2019s management and its board \u2014 ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public.\u201d<\/p>\n<p>Dean Ball, formerly an AI advisor in the Trump White House, <a href=\"https:\/\/www.hyperdimensional.co\/p\/clawed?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">wrote<\/a> in a must-read post on Substack that the Pentagon\u2019s retaliation against Anthropic \u201cstrikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property.\u201d Pete Hegseth, Ball argued, \u201cannounced his intention to commit corporate murder\u201d because a private company was attempting to set their own terms for a contract. 
Essentially, it sent the message, \u201cdo business on our terms, or we will end your business.\u201d<\/p>\n<p>This week\u2019s events were an ominous sign for the future of AI governance, Ball wrote. \u201cThe Anthropic-DoW skirmish is the first major public debate that is truly about where the proper locus of control over frontier AI should be,\u201d he said. And it was an awful one. \u201cOur public institutions behaved erratically, maliciously, and without strategic clarity.\u201d<\/p>\n<p>When all is said and done, we\u2019re left thinking about <a href=\"https:\/\/x.com\/Mihonarium\/status\/2028116464604197371?s=20&amp;ref=platformer.news\" rel=\"nofollow\">this<\/a> generationally significant Onion headline.<\/p>\n<p>\u2014Ella Markianos<\/p>\n<p>Prediction markets are bad<\/p>\n<p>What happened: Insiders keep trading on their inside information in ways that continually make us ask: how is this legal?<\/p>\n<p>OpenAI fired an employee after finding that the employee had used confidential company information for personal gain on prediction markets including Polymarket, OpenAI CEO of applications Fidji Simo <a href=\"https:\/\/www.wired.com\/story\/openai-fires-employee-insider-trading-polymarket-kalshi\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">told employees<\/a>.<\/p>\n<p>This follows two other recent instances of insider trading on Kalshi, including a case in which former California gubernatorial candidate Kyle Langford traded on his own candidacy, and another in which an editor for YouTuber MrBeast <a href=\"https:\/\/www.npr.org\/2026\/02\/25\/nx-s1-5726050\/kalshi-insider-trading-enforcement-actions?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">bet on<\/a> markets related to MrBeast videos. 
Kalshi said it <a href=\"https:\/\/news.kalshi.com\/p\/kalshi-trading-violation-enforcement-cases?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">discovered the MrBeast case<\/a> after its monitoring systems flagged \u201cnear-perfect trading success on markets with low odds.\u201d<\/p>\n<p>The insider trading accusations also come amid a period of backlash for Kalshi following <a href=\"https:\/\/www.theverge.com\/tech\/887210\/kalshi-void-bets-khamenei-death?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">its decision<\/a> to void some bets on the ouster of Iranian Supreme Leader Ali Khamenei.<\/p>\n<p>Kalshi CEO Tarek Mansour said the platform doesn\u2019t \u201clist markets directly tied to death\u201d to prevent people from profiting from death.<\/p>\n<p>Why we\u2019re following: Prediction markets have increasingly become the go-to source for forecast data on elections and financial markets. A number of news publishers have announced partnerships with prediction markets \u2014 most recently, the Associated Press <a href=\"https:\/\/www.mediaite.com\/media\/news\/ap-announces-its-teaming-up-with-prediction-market-site-kalshi-ahead-of-the-midterms\/?utm_source=dlvr.it&amp;utm_medium=bluesky\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> it\u2019s teaming up with Kalshi to make its US election results available on the platform ahead of the 2026 midterms.<\/p>\n<p>As trading volume on prediction markets grows exponentially, it\u2019s troubling to see \u2014 without clear rules \u2014 how many people could profit from everything from election results to war, especially with hidden advantages.<\/p>\n<p>What people are saying: &#8220;I see you&#8217;ve placed your bet on Red. unfortunately in this casino, we call that color Bleen. you get $0. we&#8217;ll keep your money. 
thanks for playing!\u201d computer scientist Ben Anderson <a href=\"https:\/\/x.com\/andersonbcdefg\/status\/2028135817991061610?ref=platformer.news\" rel=\"nofollow\">quipped<\/a> about the Khamenei decision.<\/p>\n<p>\u201cWelcome to 2026 where a main talking point around war is whether prediction markets should include targeted assassinations as \u2018being out as leader\u2019,\u201d @mert <a href=\"https:\/\/x.com\/mert\/status\/2028116081466909161?ref=platformer.news\" rel=\"nofollow\">wrote<\/a> on X.<\/p>\n<p>@jellymanguy highlighted Kalshi\u2019s hypocrisy on its policies related to death, <a href=\"https:\/\/x.com\/jellymanguy\/status\/2027935504943939927?ref=platformer.news\" rel=\"nofollow\">pointing to<\/a> the bets it settled that former president Jimmy Carter would not attend President Trump\u2019s inauguration, knowing that people were betting the then 100-year-old Carter would die before the event: \u201cthis has nothing to do with death, this has everything to do with your bottom line.\u201d<\/p>\n<p>On insider trading, \u201cI imagine this is gonna play out like the ufc\u2019s approach to drug testing: drag a couple idiots who were too obvious about it into the public square every once in a while and look the other way the rest of the time,\u201d <a href=\"https:\/\/bsky.app\/profile\/nathangrayson.bsky.social\/post\/3mfpdomuzzc2a?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">wrote<\/a> Nathan Grayson, cofounder of news site Aftermath.<\/p>\n<p>\u2014Lindsey Choo<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2025\/08\/floating_linebreak_600px-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"600\" height=\"157\" \/>Side Quests<\/p>\n<p>The DoD was in talks with leading AI companies about <a href=\"https:\/\/www.ft.com\/content\/a56d70b5-669c-4bcc-8541-a4961fc99802?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">partnerships<\/a> to conduct automated 
reconnaissance of China\u2019s power grids, utilities and sensitive networks. Multiple federal agencies raised concerns about Grok&#8217;s safety and reliability in <a href=\"https:\/\/www.wsj.com\/politics\/national-security\/elon-musk-xai-grok-security-safety-government-73ab4f6e?st=f8hivs&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">recent months<\/a>, before the DoD approved Grok for use in classified settings.<\/p>\n<p>The U.S. Supreme Court <a href=\"https:\/\/www.reuters.com\/legal\/government\/us-supreme-court-declines-hear-dispute-over-copyrights-ai-generated-material-2026-03-02\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">declined to hear a dispute<\/a> over copyrights for AI-generated material. The case was brought by a computer scientist who was denied a copyright for AI-generated art. A federal judge <a href=\"https:\/\/www.reuters.com\/legal\/government\/judge-blocks-virginia-law-restricting-social-media-children-2026-02-27\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">issued<\/a> a preliminary injunction blocking Virginia from enforcing a new law restricting children&#8217;s social media use, on First Amendment grounds.<\/p>\n<p>X is full of <a href=\"https:\/\/www.wired.com\/story\/x-is-drowning-in-disinformation-following-us-and-israels-attack-on-iran\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">disinformation<\/a> about the U.S. and Israeli attacks on Iran, including old videos passed off as recent and AI-generated images.\u00a0Iranians <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2026-03-02\/iranians-evade-internet-blackout-to-share-images-of-airstrikes?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">have<\/a> turned to Starlink, decentralized messaging apps, and VPNs to circumvent the Internet blackout, and are sharing videos of U.S. 
and Israeli airstrikes.<\/p>\n<p>Amazon Web Services <a href=\"https:\/\/www.reuters.com\/world\/middle-east\/amazons-cloud-unit-reports-fire-after-objects-hit-uae-data-center-2026-03-01\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> its facilities in the Middle East were facing power and connectivity issues after unidentified \u201cobjects\u201d struck its data center in the UAE.<\/p>\n<p>OpenAI <a href=\"https:\/\/www.cnbc.com\/2026\/02\/27\/open-ai-funding-round-amazon.html?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">raised<\/a> $110 billion at a $730 billion valuation, up from $500 billion in October. Amazon invested $50 billion, while Nvidia and SoftBank invested $30 billion each. OpenAI <a href=\"https:\/\/openai.com\/index\/scaling-ai-for-everyone\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> ChatGPT has over 900 million weekly active users, and over 50 million consumer subscribers.<\/p>\n<p>OpenAI said it would <a href=\"https:\/\/www.politico.com\/news\/2026\/02\/26\/canada-openai-chatgpt-shooting-00802746?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">overhaul safety protocols<\/a> and establish direct contact with Canadian police, after failing to alert authorities about messages the Tumbler Ridge suspect was sending to ChatGPT.<\/p>\n<p>The plaintiff in Meta\u2019s big <a href=\"https:\/\/apnews.com\/article\/meta-instagram-facebook-trial-social-media-addiction-2afb4809d2dbbb0d1e69739c7f2b20b3?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">social media addiction trial<\/a> testified that her social media use, which began in childhood, exacerbated depression and suicidal thoughts. 
Meta filed lawsuits against four alleged <a href=\"https:\/\/au.news.yahoo.com\/meta-reveals-huge-scammer-crackdown-170144983.html?_guc_consent_skip=1772477320&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">scam advertising operations<\/a> based in Brazil, China and Vietnam. <a href=\"https:\/\/www.theatlantic.com\/technology\/2026\/02\/meta-child-safety-documents-instagram\/686163\/?gift=iWa_iB9lkw4UuiWbIbrWGdKxkMOyEYN8nHGY7WriR3Y&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Court documents<\/a> from a New Mexico trial showed internal divisions at Meta as Instagram teen safety initiatives conflicted with growth and engagement goals.<\/p>\n<p>Chinese military procurement documents show the PLA&#8217;s efforts to use AI to assist in drone piloting, cyberattacks, decision-making, and <a href=\"https:\/\/www.foreignaffairs.com\/china\/chinas-artificial-intelligence-arsenal?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">disinformation campaigns<\/a>.<\/p>\n<p>Australia&#8217;s eSafety Commissioner threatened action against app stores and search engines if AI services operating in Australia don&#8217;t verify user ages <a href=\"https:\/\/www.reuters.com\/business\/media-telecom\/australia-says-it-may-go-after-app-stores-search-engines-ai-age-crackdown-2026-03-01\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">by March 9<\/a>.<\/p>\n<p>A profile of Telegram CEO <a href=\"https:\/\/www.ft.com\/content\/26c18637-667f-498c-99e2-5c757702121b?sharetype=blocked&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">Pavel Durov<\/a>, who faces an investigation in France on a dozen preliminary charges and a criminal case in Russia for \u201caiding terrorism.\u201d<\/p>\n<p>TikTok is back in Albania after a year-long ban <a href=\"https:\/\/www.reuters.com\/sustainability\/society-equity\/tiktok-returns-albania-after-government-imposed-ban-2026-02-27\/?ref=platformer.news\" 
rel=\"nofollow noopener\" target=\"_blank\">expired this month<\/a>. The Albanian government said TikTok added &#8220;important filters for security and language.&#8221;<\/p>\n<p>The rise of Claude Code is fueling <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2026-02-26\/ai-coding-agents-like-claude-code-are-fueling-a-productivity-panic-in-tech?sref=CrGXSfHu&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">productivity panic<\/a> among engineers and executives. (A UC Berkeley study found people who adopt AI tools work longer hours.)<\/p>\n<p>Anthropic said <a href=\"https:\/\/www.bleepingcomputer.com\/news\/artificial-intelligence\/anthropic-confirms-claude-is-down-in-a-worldwide-outage\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">\u201ca fix has been implemented\u201d<\/a> after a few hours of elevated errors on claude.ai, Claude Code, and some API methods.<\/p>\n<p>Meta <a href=\"https:\/\/www.theinformation.com\/articles\/metas-internal-chip-design-efforts-hit-roadblocks?rc=8aq5ai&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">scrapped<\/a> the most advanced AI chip it was developing after struggling with the design, switching focus to a simpler chip.<\/p>\n<p>Co-founder Toby Pohlen left xAI, making him the seventh of twelve co-founders <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2026-02-27\/xai-co-founder-toby-pohlen-is-latest-executive-to-depart?sref=CrGXSfHu&amp;ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">to depart<\/a>. 
Elon Musk <a href=\"https:\/\/techcrunch.com\/2026\/02\/27\/musk-bashes-openai-in-deposition-saying-nobody-committed-suicide-because-of-grok\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">bashed<\/a> OpenAI in a lawsuit deposition, saying \u201cnobody committed suicide because of Grok.\u201d X added a \u201cPaid Partnership\u201d label that creators can apply to their posts to indicate they\u2019re <a href=\"https:\/\/techcrunch.com\/2026\/03\/02\/x-ads-paid-partnership-labels-for-creators-so-they-can-ditch-the-hashtags\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">advertisements<\/a>.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2025\/08\/floating_linebreak_600px-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"600\" height=\"157\" \/>Those good posts<\/p>\n<p>For more good posts every day, <a href=\"https:\/\/www.instagram.com\/crumbler\/?ref=platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">follow Casey\u2019s Instagram stories<\/a>.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-02-at-5.56.40---PM.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1340\" height=\"326\"  \/><\/p>\n<p>(<a href=\"https:\/\/www.threads.com\/@myqkaplan\/post\/DVM7-6QDKXy?ref=platformer.news\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">Link<\/a>)<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-02-at-5.56.04---PM.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1250\" height=\"534\"  \/><\/p>\n<p>(<a href=\"https:\/\/www.threads.com\/@karissabe\/post\/DVWpCyCkrbO?ref=platformer.news\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">Link<\/a>)<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-02-at-5.57.01---PM.png\" 
class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1262\" height=\"1080\"  \/><\/p>\n<p>(<a href=\"https:\/\/www.threads.com\/@lykaboss\/post\/DVUkA4NDZp1?ref=platformer.news\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">Link<\/a>)<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2025\/08\/floating_linebreak_600px-1.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"600\" height=\"157\" \/>Talk to us<\/p>\n<p>Send us tips, comments, questions, and amended contract language: <a href=\"mailto:casey@platformer.news\" rel=\"nofollow noopener\" target=\"_blank\">casey@platformer.news<\/a>. Read <a href=\"https:\/\/www.platformer.news\/ethics\/\" rel=\"nofollow noopener\" target=\"_blank\">our ethics policy here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"This is a column about AI. My boyfriend works at Anthropic. See\u00a0my full ethics disclosure here. 
&#8220;In [Murati\u2019s]&hellip;\n","protected":false},"author":2,"featured_media":511331,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-511330","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/511330","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=511330"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/511330\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/511331"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=511330"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=511330"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=511330"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}