{"id":344868,"date":"2025-12-13T07:14:14","date_gmt":"2025-12-13T07:14:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/344868\/"},"modified":"2025-12-13T07:14:14","modified_gmt":"2025-12-13T07:14:14","slug":"what-you-should-never-share-with-chatgpt","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/344868\/","title":{"rendered":"What You Should Never Share With ChatGPT"},"content":{"rendered":"<p>It\u2019s becoming increasingly common for people to use ChatGPT and other AI chatbots like Gemini, Copilot and Claude in their everyday lives. A recent survey from Elon University\u2019s Imagining the Digital Future Center found that <a href=\"https:\/\/www.elon.edu\/u\/news\/2025\/03\/12\/survey-52-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"half of Americans now utilize these technologies\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.elon.edu\/u\/news\/2025\/03\/12\/survey-52-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"0\" rel=\"nofollow noopener\">half of Americans now utilize these technologies<\/a>. 
<\/p>\n<p>\u201cBy any measure, the adoption and use of LLMs [large language models] is astounding,\u201d Lee Rainie, director of Elon\u2019s Imagining the Digital Future Center, said in a <a href=\"https:\/\/www.elon.edu\/u\/news\/2025\/03\/12\/survey-52-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"university news release\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.elon.edu\/u\/news\/2025\/03\/12\/survey-52-of-u-s-adults-now-use-ai-large-language-models-like-chatgpt\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"1\" rel=\"nofollow noopener\">university news release<\/a>. \u201cI am especially struck by the ways these tools are being woven into people\u2019s social lives.\u201d<\/p>\n<p>And while these tools can be useful when it comes to, say, helping you write an email or brainstorm questions for a doctor\u2019s appointment, it\u2019s wise to be cautious about how much information you share with them. 
<\/p>\n<p>A <a href=\"https:\/\/hai.stanford.edu\/news\/be-careful-what-you-tell-your-ai-chatbot\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"recent study from the Stanford Institute for Human-Centered AI\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/hai.stanford.edu\/news\/be-careful-what-you-tell-your-ai-chatbot\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"2\" rel=\"nofollow noopener\">recent study from the Stanford Institute for Human-Centered AI<\/a> helps explain why. Researchers analyzed the privacy policies of six of the top U.S. AI chat system developers (OpenAI\u2019s ChatGPT, Google\u2019s Gemini, Anthropic\u2019s Claude, Amazon\u2019s Nova, Meta\u2019s MetaAI and Microsoft\u2019s Copilot) and found that all of them appear to use customer conversations to <a href=\"https:\/\/arxiv.org\/abs\/2509.05382\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"\u201ctrain and improve their models by default\u201d\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/arxiv.org\/abs\/2509.05382\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"3\" rel=\"nofollow noopener\" target=\"_blank\">\u201ctrain and improve their models by default\u201d<\/a> and \u201csome retain this data indefinitely.\u201d<\/p>\n<p>People underestimate how much of what they share with an AI chatbot can be \u201cstored, analyzed, and potentially reused,\u201d cybersecurity expert George Kamide, co-host of the <a 
href=\"https:\/\/www.bareknucklespod.com\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"technology podcast \u201cBare Knuckles and Brass Tacks,\u201d\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.bareknucklespod.com\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"4\" rel=\"nofollow noopener\">technology podcast \u201cBare Knuckles and Brass Tacks,\u201d<\/a> told HuffPost. <\/p>\n<p>\u201cMany LLMs are trained or fine-tuned using user inputs, which means conversations can contribute \u2014 directly or indirectly \u2014 to the model\u2019s future behavior,\u201d he continued.<\/p>\n<p>\u201cIf those interactions contain personal identifiers, sensitive data, or confidential information, they could become part of a dataset that\u2019s beyond the user\u2019s control. 
Ultimately, data is the greatest value that AI companies can extract from us.\u201d<\/p>\n<p>Below, experts explain the types of information you should think twice about sharing with an AI chatbot:<\/p>\n<p>Any personally identifiable information.<\/p>\n<p>Personally identifiable information, <a href=\"https:\/\/pclt.defense.gov\/DIRECTORATES\/Privacy-and-Civil-Liberties-Directorate\/Privacy\/About-the-Office\/FAQs\/#2\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"known as PII\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/pclt.defense.gov\/DIRECTORATES\/Privacy-and-Civil-Liberties-Directorate\/Privacy\/About-the-Office\/FAQs\/#2\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"5\" rel=\"nofollow noopener\">known as PII<\/a>, is any type of data that can be used to identify an individual, including your full name, home address, phone number, and government ID numbers like your Social Security, passport or driver\u2019s license numbers.<\/p>\n<p>Sharing these details with a chatbot \u201cintroduces the risk that this data could be logged or processed in ways that expose you to identity theft, phishing or data brokerage activities,\u201d explained information security expert George Al-Koura, who co-hosts \u201cBare Knuckles and Brass Tacks.\u201d So it\u2019s best avoided. <\/p>\n<p>Know that any files you upload along with your prompts could also be used for training the model. 
So if you\u2019re using ChatGPT to help fine-tune your resume, for example, you should remove any of this identifying information from the document beforehand to be safe.<\/p>\n<p>Intimate details about your personal life.<\/p>\n<p>People often feel more comfortable divulging intimate information in a ChatGPT conversation than they would with, say, a Google search because the AI chatbot allows for a back-and-forth dialogue that feels more human in nature. <\/p>\n<p>\u201cThis can give a false sense of security leading to a greater willingness to provide personal information via a chatbot than to a static search engine,\u201d <a href=\"https:\/\/iapp.org\/about\/person\/0011P00001I3Ed5QAF\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"Ashley Casovan\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/iapp.org\/about\/person\/0011P00001I3Ed5QAF\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"6\" rel=\"nofollow noopener\">Ashley Casovan<\/a>, the managing director of the International Association of Privacy Professionals (IAPP) AI Governance Center, told HuffPost. 
<\/p>\n<p>Sensitive details you share about your thoughts, behaviors, mental state or relationships in these conversations are not legally protected and<a href=\"https:\/\/www.cnet.com\/tech\/services-and-software\/even-the-guy-who-makes-chatgpt-says-you-probably-shouldnt-use-chatbots-as-therapists\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\" could potentially be used as evidence\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.cnet.com\/tech\/services-and-software\/even-the-guy-who-makes-chatgpt-says-you-probably-shouldnt-use-chatbots-as-therapists\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"7\" rel=\"nofollow noopener\"> could potentially be used as evidence<\/a> in court. <\/p>\n<p>\u201cThe number of people who are using LLM-based chatbots as therapists, life coaches, and even as some form of an intimate \u2018partner\u2019 is already alarming,\u201d Kamide said. 
<\/p>\n<p>Your medical information.<\/p>\n<p>A 2024 poll found that 1 in 6 adults turn to AI chatbots at least once a month for health information and advice, <a href=\"https:\/\/www.kff.org\/public-opinion\/kff-health-misinformation-tracking-poll-artificial-intelligence-and-health-information\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"according to health policy organization KFF\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.kff.org\/public-opinion\/kff-health-misinformation-tracking-poll-artificial-intelligence-and-health-information\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"8\" rel=\"nofollow noopener\">according to health policy organization KFF<\/a>. <\/p>\n<p>Doing so can be helpful in navigating health issues, but there are privacy risks involved (not to mention <a href=\"https:\/\/www.pbs.org\/newshour\/health\/using-an-ai-chatbot-for-therapy-or-health-advice-experts-want-you-to-know-these-4-things\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"concerns about accuracy, too\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.pbs.org\/newshour\/health\/using-an-ai-chatbot-for-therapy-or-health-advice-experts-want-you-to-know-these-4-things\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"9\" rel=\"nofollow noopener\">concerns about accuracy, too<\/a>). 
Unlike doctors, most of the mainstream chatbots are <a href=\"https:\/\/www.nytimes.com\/2025\/10\/30\/well\/chatgpt-health-questions.html\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"not bound by Health Insurance Portability and Accountability Act, or HIPAA\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.nytimes.com\/2025\/10\/30\/well\/chatgpt-health-questions.html\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"10\" rel=\"nofollow noopener\">not bound by the Health Insurance Portability and Accountability Act, or HIPAA<\/a>, Dr. Ravi Parikh, director of the Human-Algorithm Collaboration Lab at Emory University, told The New York Times. <\/p>\n<p>Avoid sharing any personal medical details \u2015 including your health care records \u2015 with an AI chatbot. If you\u2019re going to enter health-related data in the conversation, be sure to <a href=\"https:\/\/kffhealthnews.org\/news\/article\/electronic-medical-records-patients-ai-chatbots-diagnosis-privacy-accuracy\/\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"remove identifying information from your prompts\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/kffhealthnews.org\/news\/article\/electronic-medical-records-patients-ai-chatbots-diagnosis-privacy-accuracy\/\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"11\" rel=\"nofollow noopener\">remove identifying information from your prompts<\/a>. 
<\/p>\n<p>Confidential or proprietary work information.<\/p>\n<p>If you\u2019re thinking about using an AI chatbot to get a leg up at work, tread lightly. Don\u2019t input internal business data or reports, client data, source code or anything protected by a non-disclosure agreement, Al-Koura advised. <\/p>\n<p>\u201cMany AI chat platforms operate on shared infrastructure, and despite strong security postures, your input may still be logged for \u2018model improvement,\u2019\u201d he said. \u201cA single prompt containing sensitive data could constitute a regulatory or contractual breach.\u201d<\/p>\n<p>Your financial information. <\/p>\n<p>Your paystubs, banking and investment account information, and credit card details should <a href=\"https:\/\/its.uky.edu\/news\/its-data-privacy-week-heres-why-you-should-never-share-your-personal-information-chatbots\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"not be shared with an AI chatbot\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/its.uky.edu\/news\/its-data-privacy-week-heres-why-you-should-never-share-your-personal-information-chatbots\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"12\" rel=\"nofollow noopener\">not be shared with an AI chatbot<\/a>, the University of Kentucky Information Technology Services advises. <\/p>\n<p>\u201cWhile AI can offer general financial advice, it\u2019s safer to consult a financial advisor for personal matters to avoid the risk of hacking or data misuse,\u201d a post on the university\u2019s website reads. <\/p>\n<p>Same goes for your tax returns and other income-related documents. 
<\/p>\n<p>\u201cIf these documents are exposed, they can be used for blackmail, fraud or tailored social engineering attacks against you or your family,\u201d financial writer Adam Hayes <a href=\"https:\/\/www.investopedia.com\/financial-data-privacy-chatgpt-11717128\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"warned in an Investopedia article\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/www.investopedia.com\/financial-data-privacy-chatgpt-11717128\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"13\" rel=\"nofollow noopener\">warned in an Investopedia article<\/a>. <\/p>\n<p><img decoding=\"async\" class=\"img-sized__img landscape\" loading=\"lazy\" fetchpriority=\"auto\" alt=\"AI chatbots like ChatGPT have streamlined people's lives in many ways, but there are risks when it comes to sharing information.\" width=\"720\" height=\"432\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/693700da190000a6467f38e8.jpeg\" \/>AI chatbots like ChatGPT have streamlined people&#8217;s lives in many ways, but there are risks when it comes to sharing information.<\/p>\n<p>What if you already shared this info with an AI chatbot? And how do you protect your privacy moving forward?<\/p>\n<p>It may not be possible to put the toothpaste back in the tube, so to speak. But you can still try to mitigate some of the potential harm. 
<\/p>\n<p>According to Kamide: Once your data is fed into the chatbot\u2019s training data, \u201cyou can\u2019t really get it back.\u201d Still, he suggested <a href=\"https:\/\/help.openai.com\/en\/articles\/8809935-how-to-delete-and-archive-chats-in-chatgpt\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"deleting the chat history\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/help.openai.com\/en\/articles\/8809935-how-to-delete-and-archive-chats-in-chatgpt\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"14\" rel=\"nofollow noopener\">deleting the chat history<\/a> \u201cto stop exfiltration of data, should anyone compromise your account.\u201d<\/p>\n<p>Then take some time to think about what information you are (and are not) comfortable sharing with an AI chatbot going forward. Start treating AI conversations as \u201csemi-public spaces rather than private diaries,\u201d Al-Koura recommended. <\/p>\n<p>\u201cBe deliberate and minimalist in what you share. Before sending a message, ask yourself, \u2018Would I be comfortable seeing this on a shared family group chat or company Slack channel?\u2019\u201d Al-Koura said.<\/p>\n<p>You can also adjust the privacy settings of any AI chatbots you interact with to reduce (but not eliminate) some of the privacy risks \u2014 things like disabling your chat history or opting out of having your conversations used for model training. <\/p>\n<p>\u201cDifferent tools will allow for different configurations of what data it will \u2018remember,\u2019\u201d Casovan said. 
\u201cBased on your individual comfort and use, exploring these different options will allow you to calibrate based on your comfort level or organizational direction.\u201d<\/p>\n<p>\u201cHowever, having a good understanding of how these systems work, how the data is stored, who has access, how it is transferred and under what circumstances, will allow you to make more informed decisions on how you can leverage these tools for your benefit, while still being comfortable with the information that you are sharing,\u201d she continued. <\/p>\n<p>When writing your prompts, Al-Koura recommended using pseudonyms and more general language to avoid disclosing too much personal or confidential information. For example, you might use \u201ca client in health care\u201d rather than \u201ca patient at St. Mary\u2019s Hospital\u201d to \u201cpreserve context while protecting identity,\u201d he suggested. <\/p>\n<p>But the onus shouldn\u2019t just be on the users, of course. AI developers and policymakers should improve protections for personal data via \u201ccomprehensive federal privacy regulation, affirmative opt-in for model training, and filtering personal information from chat inputs by default,\u201d <a href=\"https:\/\/hai.stanford.edu\/news\/be-careful-what-you-tell-your-ai-chatbot\" target=\"_blank\" role=\"link\" class=\" js-entry-link cet-external-link\" data-vars-item-name=\"researchers from The Stanford Institute for Human-Centered AI said\" data-vars-item-type=\"text\" data-vars-unit-name=\"691e1d84e4b0dc5d37abf59a\" data-vars-unit-type=\"buzz_body\" data-vars-target-content-id=\"https:\/\/hai.stanford.edu\/news\/be-careful-what-you-tell-your-ai-chatbot\" data-vars-target-content-type=\"url\" data-vars-type=\"web_external_link\" data-vars-subunit-name=\"article_body\" data-vars-subunit-type=\"component\" data-vars-position-in-subunit=\"15\" rel=\"nofollow noopener\">researchers from The Stanford Institute for Human-Centered AI said<\/a>. 
<\/p>\n<p>Kamide called this a \u201cdefining moment for digital ethics.\u201d<\/p>\n<p>\u201cThe more these systems can mimic human communication styles, the easier it is to forget they are still just data processors, not confidants or friends,\u201d he said. \u201cIf we can cultivate a culture where people stay curious, cautious and privacy-aware \u2014 while technologists build responsibly and transparently \u2014 we can unlock AI\u2019s full potential without sacrificing trust. In short, we need guardrails in order to innovate responsibly.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"It\u2019s becoming increasingly common for people to use ChatGPT and other AI chatbots like Gemini, Copilot and Claude&hellip;\n","protected":false},"author":2,"featured_media":344869,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,5004,284,105],"class_list":{"0":"post-344868","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-chatgpt","14":"tag-cybersecurity","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/344868","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=344868"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/344868\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\
/wp\/v2\/media\/344869"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=344868"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=344868"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=344868"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}