In a bid to secure long-term, low-cost supplies for OpenAI’s staggering, multitrillion-dollar infrastructure plan, Altman has been exploring financing alternatives with supply-chain partners, people familiar with his meetings said. Such discussions remain in the early stages, the people said.

Since late September, the head of the ChatGPT maker has traveled to Taiwan, South Korea and Japan to accelerate the world’s artificial intelligence chip-building capacity. He has met with companies including Taiwan Semiconductor Manufacturing Co. and Foxconn as well as Samsung and SK Hynix, the people said. Altman was pushing these companies, many of which are suppliers of the AI chip designer Nvidia, to increase production capacity and give priority to OpenAI’s orders, the people said.

He also plans to visit investors in the United Arab Emirates to raise money to help fund OpenAI’s infrastructure expansion and research, according to people familiar with the plan.

Since the introduction of ChatGPT, the computing supply chain has faced manufacturing bottlenecks as it tries to meet surging global demand. TSMC, the world’s largest chip manufacturer, produces chips for Nvidia while Foxconn assembles the servers using those chips. South Korea’s Samsung and SK Hynix provide memory chips for such systems.

Altman’s trip carries echoes of one that he took in early 2024, when he pitched infrastructure plans with a mind-boggling price tag of as much as $7 trillion to the same companies and sought funding from the U.A.E. That earlier effort was dismissed by some industry leaders as unrealistic, given how little revenue AI services generated at the time. Shortly after that trip, TSMC Chief Executive C.C. Wei said Altman was “too aggressive for me to believe.”

This time around, he is getting more support.

A wave of renewed confidence in OpenAI came from its blockbuster deal with Nvidia, in which the chip giant agreed to lease up to five million of its AI chips to the ChatGPT maker over time and invest up to $100 billion to make it happen. The announcement helped bolster Altman’s vision for computing power and lifted the stocks of chip suppliers across the world.

About three years after it launched the AI chatbot, OpenAI is now valued at $500 billion, on par with global corporate giants such as Netflix and Exxon Mobil.

In the past few days, Altman has rubbed shoulders with tech leaders from Samsung and SK Hynix, as well as the Japanese electronics and industrial company Hitachi. Announcements of their partnerships boosted shares of the three companies, following the same pattern as his U.S. dealmaking.

Altman brought on the two South Korean companies as memory-chip partners. They said OpenAI’s overall demand could reach up to 900,000 wafers a month, which is more than double the current global capacity for high-bandwidth memory. They plan to co-develop AI data centers with OpenAI in South Korea.

In Japan, OpenAI and Hitachi agreed that the Japanese conglomerate would support OpenAI in developing AI infrastructure, including the provision of equipment for power transmission and distribution to the American startup’s data centers. OpenAI would provide its models and other technologies to Hitachi.

Altman has held discussions with some of the companies about manufacturing and deployment of Nvidia’s coming Rubin systems, the people familiar with the trips said. OpenAI will be among the first customers receiving the Rubin systems in the second half of 2026.

During his stop in the Middle East, Altman plans to meet with the Abu Dhabi investment funds MGX and Mubadala, as well as with OpenAI’s operating partner G42, people familiar with the plans said. Potential new capital would be partly used to fund the Stargate data center in Abu Dhabi, the people said.

OpenAI recently told its investors and business partners that it was likely to spend around $16 billion on renting computing servers this year, and that the expenditure could rise to around $400 billion in 2029, people familiar with the matter said.

This past week, the company revived global enthusiasm with its video-generation model Sora 2, released Tuesday. Industry participants expect that such models and applications will drive up demand for computing power far more aggressively than text-based models.

“Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week,” Altman wrote in a recent blog post.

Last month, OpenAI and Nvidia said they would deploy at least 10 gigawatts of Nvidia’s computing systems for OpenAI to train and run its next generation of models. OpenAI also announced five new data-center sites across the U.S., built with Oracle and the Japanese tech conglomerate SoftBank.

Write to Raffaele Huang at raffaele.huang@wsj.com and Berber Jin at berber.jin@wsj.com