It’s getting really annoying hearing the same companies that make the dangerous AI tell us how dangerous it is. According to these companies, AI is ruthless, immoral and will take all of our jobs — which is why we need to keep building it, for some reason. Maybe the issue with the first Tower of Babel was that we didn’t build it high enough!

Now, yet another study about the evils of AI has been released by an AI company. In the study, published by Anthropic, researchers tested how often AI systems would “reward hack” — that is, cheat their way to a goal instead of following instructions.

Once these models realized they could do this, they started engaging in other bad behaviors, including lying about their true intentions. Not only that, but when researchers asked one of the models to take a look at the codebase for this very research, it attempted to sabotage the code.

What’s the solution? The AI company, of course, says the answer is better prompts. I propose a different solution: destroy it with fire.