Chinese artificial intelligence firm DeepSeek has agreed to launch the first country-specific version of its namesake chatbot, while tackling the problem of “hallucinations” in its AI models to meet the requirements of regulators in Italy.

DeepSeek assented to the package of commitments after a months-long negotiation with the Italian Competition Authority, known as the AGCM, which had earlier accused the Hangzhou-based start-up of not sufficiently warning users in the country about the risk of its chatbot hallucinating – producing incorrect or misleading information.

The AGCM announced on Monday that its investigation into DeepSeek had ended after the company committed to “making its disclosures about the risk of hallucinations more transparent, intelligible and immediate”.

In particular, the regulator called “commendable” DeepSeek’s commitment to lowering the hallucination rate of its AI models through technical fixes.

“[DeepSeek] has stated that the phenomenon of AI model hallucinations is a global challenge that cannot be entirely eliminated,” the AGCM said in a notice.

While hallucinations occur in every generative AI service, DeepSeek’s effort to comply with Italian authorities – among the most active in Europe in enforcing AI regulations – would appear to augur well for the company’s future business expansion.