Criminal enterprises thrive on social engineering, coercion and weak controls. Cyber-scam compounds in Myanmar and Cambodia show the global scale of the problem: trafficked workers are forced to engineer trust at scale using romance pitches, fake investments and urgent pleas, while victims worldwide are nudged into irreversible digital payments.

As well as posing a law-enforcement challenge offshore, scams present a national safety issue at home. Australians often learn about scams from headlines yet aren’t aware of the habits that can keep them safe.

Australia’s cyber problem is not only technical; it’s behavioural. Criminals exploit people with relentless social engineering techniques—methods designed to manipulate or influence targets into performing actions—then cash out through fast digital payments and weak identity checks. Losses keep climbing, and a growing share of scams now begins on social platforms, messaging apps, websites and email.

We should treat this as national safety as well as cybersecurity. Australian Minister for Home Affairs Tony Burke has spoken about national safety, meaning the harms that show up in daily life, as a parallel priority to national security: putting people first and making harm reduction the test of success.

Under the bonnet, one trend matters most. Social engineering has overtaken malware and exploits as a primary intrusion method. It’s not new: the Nigerian prince email, a common scam requesting money with the promise of significant future returns, was the chain letter made digital. Today, generative AI lets offenders with neither expertise nor money personalise lures at scale, clone voices and adapt when a target hesitates. Even hardened organisations are seeing breaches that start with a message or a call, rather than vulnerabilities in computer systems.

Australian incident reports show that phishing, business email compromise and identity fraud are stubbornly present. There’s a reason for that: traditional learning fails. A US study found that ‘organisations should not expect large anti-phishing benefits from either annual security awareness training or embedded phishing as commonly deployed today.’

Social engineering works so well because it is human hacking: our brains and social instincts are built for community survival. To avoid being human-hacked, individuals need to build reflexes, not just awareness. Inoculation theory, tested in misinformation research, shows that brief, active pre-bunking improves discernment and builds resistance to deception. People learn to spot manipulation through repeated exposure to realistic examples with immediate feedback. This points to a practical fix, building one habit: pause and check before acting. Treated this way, it complements filters, takedowns and payment controls.

National messaging should match that purpose, with calls to action tied to a question and a habit, and measured for behaviour change, not just reach. For Australia, the policy task is to scale this nationally and get measurable results. Three moves would help.

First, make habit-building core to every anti-scam initiative. Tie public funding to interventions that use active practice with real-world lures, quick feedback and simple rules of thumb. Ask vendors to publish evidence of learning gains and decay over time. Require adaptation to languages, devices and channels used by older Australians, small businesses, new migrants and First Nations communities. Keep content short, repeatable and accessible.

Second, meet people where scams begin. Losses are rising where first contact is social media, messaging apps, websites and email. Push pre-bunking content and micro-challenges inside those feeds and apps. Pair that with platform-level obstacles for risky payment flows and verified callback pathways for government and bank contact. Make the safest path the smoothest path.

Third, link education to response. Practice only works if help is one click away. Currently, there are more than seven different reporting bodies—including the Australian Cyber Security Centre, ScamWatch, the Australian Securities and Investments Commission, Services Australia, police, banks and telecommunication providers—with overlapping roles and advice. Instead, create one place for the public to report, with triaging to the relevant agency occurring behind the scenes. This would also enable data sharing between agencies.

This is not a public awareness campaign; it’s service delivery. That service should be provided in schools, vocational training centres, workplaces, aged care, community centres and libraries. Work with unions, industry associations, multicultural networks and regional councils so that social proof spreads fast. Track outcomes at three horizons: immediate learning gain, behaviour in the wild, and systemic shifts in loss patterns and time to takedown.

None of this replaces stronger upstream controls. Banks and telecommunication companies should expand name and number matching, delay suspicious transfers, tighten caller ID and SMS-sender protections, and provide voice challenge phrases for high-risk calls.

Government entities should consider accelerating data minimisation, stronger identity verification and faster, more transparent redress after major breaches. Platforms should scale rapid takedowns of fake ads and impersonation, improve advertiser verification and give users safer defaults.

If we are serious about national safety, we should hold to a simple principle. Start with what keeps Australians safe, then fund and measure that capability. Put behaviour change, scam obstruction and rapid recovery on the same footing as technical uplift. Australia can lead with a simple shift in emphasis: teach the habit, not the headline. Make ‘is it dodgy or not?’ a reflex, then back it with systems that reward the right choice. The cost is small, but the impact is national, and the habit lasts longer than a news cycle.