Open-source use, China-linked concerns fuel reputational risks in national AI race
South Korea’s ambitious state-led project to foster a sovereign AI model is facing mounting skepticism, as rigid criteria for technological independence and shifting evaluation standards have led to the disqualification of key participants — undermining trust and weakening momentum behind the initiative.
The project, designed to propel Korea into the top tier of global AI alongside the US and China, aims to cultivate homegrown large-scale AI models and designate a select group of companies to receive concentrated government support. It was launched with the vision of securing “AI sovereignty” through domestic technological autonomy and avoiding long-term reliance on foreign models such as ChatGPT, Gemini and DeepSeek.
In August, five teams — Naver Cloud, Upstage, SK Telecom, NC AI and LG AI Research — were selected through a public contest involving 15 candidates. The original plan called for two eliminations: one team after the first evaluation, another six months later. By the first half of next year, only two “K-AI elite teams” would remain.
But things didn’t go as planned. In the first round alone, two teams were eliminated — NC AI, due to underperformance, and Naver Cloud, for failing to meet the independence criteria. The Ministry of Science and ICT now says it will fill the gap by selecting one more team and proceeding toward naming the top two by mid-2027.
At the heart of the controversy is the government’s interpretation of “technological independence.” Naver Cloud scored well in both performance and usability. Still, it was dropped because the company had used pretrained vision encoders and weights from Alibaba’s open-source Qwen model.
Vision encoders act as the AI’s “eyes,” converting visual data into machine-readable signals. The ICT ministry argued that merely using pretrained weights doesn’t meet the bar for independence. The weights, it said, must be reset and trained from scratch — ensuring full domestic development and protection from foreign licensing risks.
“Even if open-source use is common in the global ecosystem, retraining from scratch is the minimum condition for independence,” said Second Vice Minister Ryu Je-myung.
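For readers less familiar with the technical distinction, the sketch below (using PyTorch and a generic ViT backbone from torchvision) illustrates the difference at issue: reusing weights learned during someone else’s pretraining run versus resetting them and training from random initialization. The architecture and ImageNet checkpoint here are illustrative stand-ins, not the actual models used by Naver Cloud or Qwen.

```python
# A minimal, illustrative sketch (PyTorch + torchvision); the generic ViT
# backbone stands in for any vision encoder and is not the model at issue.
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Option A: reuse weights learned in someone else's pretraining run --
# the practice the ministry judged insufficient for "independence."
encoder_pretrained = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Option B: the same architecture with randomly initialized weights,
# which must then be trained from scratch on one's own data and compute.
encoder_from_scratch = vit_b_16(weights=None)

# "Resetting" an already-loaded encoder amounts to re-initializing every
# layer that defines its own reset routine, discarding whatever was
# learned during the original pretraining.
def reset_weights(module: nn.Module) -> None:
    if hasattr(module, "reset_parameters"):
        module.reset_parameters()

encoder_pretrained.apply(reset_weights)
```

Once reset, nothing from the original pretraining survives, which is why training such an encoder from scratch carries the data and compute burden critics describe below.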
That strict interpretation hasn’t gone unchallenged. AI researchers and engineers point out that leveraging open-source components like encoders is now standard industry practice. It’s not a shortcut, they argue, but a deliberate choice to improve reliability and speed. Naver Cloud backed that view, saying its inference engine and architecture were built in-house, and the encoder could be replaced anytime with its own technology.
Some observers warn that such a narrow definition of independence could limit participation in the national AI initiative. Training large-scale vision models entirely from scratch demands substantial data, computing resources and time — burdens that only a handful of players can realistically shoulder. If applied broadly, the standard may end up favoring theoretical purity over practical competitiveness, they argue, at a moment when speed and scalability are becoming decisive in global AI development.
China factor: Sovereignty beyond software
The tension over independence has since expanded — surfacing questions not only about engineering criteria, but about geopolitical sensitivity and reputational consequences.
Some have gone further, questioning whether the same level of scrutiny would have applied had the imported component come from the US rather than China. The notion that a “China factor” influenced the decision has stirred additional controversy, with observers arguing that an overly rigid “China frame” may have overshadowed technical judgment and pushed the debate beyond engineering.
They also suggest that the government’s emphasis on sovereignty may have outweighed concerns over efficiency. The requirement for absolute independence appears driven less by performance than by control — a hedge against future restrictions or license changes from foreign entities.
The ministry made its stance clear. It described certain uses of open-source tools as “free-riding,” framing the project as more than a tech race. This, officials say, is a matter of national strategy.
The ongoing controversy has sharpened around three intertwined issues — how to define independence, how to treat open-source inputs from geopolitical rivals, and how public disqualification affects the reputation of AI developers.
The debate isn’t just about engineering rules. It’s about how far independence needs to go and whether borrowing from open-source models, especially those tied to China, compromises control. And it’s about what it means to be publicly cut from a project meant to build national pride.
Some in the field view Naver Cloud’s disqualification not as a reflection of weak innovation, but as the outcome of an inflexible definition of independence. Cho Kyung-hyun, a professor of computer science and data science at New York University, called the decision “regrettable.” He argued that AI’s true strength lies not in building everything from scratch, but in integrating diverse inputs into a unified system.
Lee Kyoung-jun, an AI and business professor at Kyung Hee University, offered a more cautious view. Open-source tools, he acknowledged, are efficient and often indispensable. But for a government-backed model, long-term risks matter. “The key issue is not open-source use itself, but whether there are legal or proprietary limitations that could compromise independent use,” he said.
The elimination of prominent players like Naver Cloud has set off alarm bells in the industry. The worry: A high-stakes competition might inadvertently brand capable companies with a mark of failure.
“There’s growing unease that disqualification could be interpreted as a failure of technological competence,” said an industry source who requested anonymity. “That perception could hurt companies in attracting talent, investment, or future partnerships — even though they may still be building highly competitive AI services.”
Some warn that once a company is publicly disqualified at a national level, the line between policy-driven criteria and technical capability can easily blur. Such perceptions, they argue, tend to linger in the market long after the evaluation itself has ended.
Others note the emotional toll. “What started as a promising national initiative has now become a source of extreme stress,” said another source. “The pressure of being one of the final two, or else facing the fallout, is very real.”
Lee, however, pushed back on claims of unfairness. “That is the nature of a tournament-style evaluation,” he said. “There will be winners and losers, but that doesn’t mean it’s unfair.”
yeeun@heraldcorp.com