Artificial intelligence tools like ChatGPT have become a routine part of daily life for many, with users inputting sensitive personal, medical and professional information. But careless use could expose that data and endanger your privacy, cyber experts warn.

According to recent cybersecurity research, it’s relatively easy for a skilled hacker to access such data. While OpenAI works continuously to prevent such breaches, the reality remains a cat-and-mouse game: every time one vulnerability is patched, attackers quickly look for another.

Still, the National Cyber Directorate recommends five simple steps users can take to reduce the risk of exposing personal information:

1. Turn off chat history and model training

When using ChatGPT, whether the free or paid version, there is a setting that allows OpenAI to use your chats to train its models. If enabled, your personal or business-related inputs may be stored and could resurface in future iterations of the model.

What to do:
Go to Profile > Settings > Data Controls, and disable the option labeled “Improve the model for everyone.”

2. Avoid sharing sensitive conversations

ChatGPT allows you to share chats via a link, but you lose control over that link’s distribution — even if you later delete the original conversation.

What to do:
Do not share chats that contain private or sensitive information. Currently, there is no way to limit access permissions on shared links.

3. Be cautious with AI agents

AI “agents” can take automated actions such as browsing websites or making online purchases. These agents lack human judgment and could click malicious links or enter information into phishing sites.

What to do:
Provide clear instructions on what the agent is and is not allowed to do. Avoid entering passwords or financial data on sites accessed through the agent, and always verify that the website is legitimate.

4. Watch for prompt injection attacks

Prompt injection is a form of cyberattack where a hacker hides malicious instructions inside a webpage, document or link. When your AI agent accesses such content, it might execute harmful commands without realizing it.

What to do:
Just like in the previous step, write clear and restrictive prompts for AI agents. You can also use another AI model to help you write safer prompts.

5. Enable two-factor authentication (2FA)

Two-factor authentication adds a layer of security to your account. Even if your password is stolen (for instance, through phishing), a temporary code from your phone would still be required to log in.

What to do:
Go to Settings > Security > Multi-factor authentication and enable the option. Using an authentication app is the most secure method.

By following these guidelines, users can greatly reduce the risk of exposing sensitive data and better protect themselves while using powerful AI tools.