On October 13th, local time, AI giant OpenAI and chip design giant Broadcom announced a partnership to develop custom AI accelerators for 10 gigawatts (GW) of data center capacity. OpenAI will design the accelerators and systems, and will collaborate with Broadcom on their development and deployment.
Following the news, Broadcom's stock soared more than 10% in pre-market trading and closed up 9.88% on October 13th, bringing its market capitalization to $1.68 trillion.
OpenAI stated that by designing its own AI chips and systems, it can embed the experience it has accumulated in developing cutting-edge models and products directly into the hardware, unlocking new levels of capability and intelligence. The racks, scaled entirely with Ethernet and other connectivity solutions from Broadcom, will be deployed in OpenAI's facilities and partner data centers to meet growing global demand for AI.
OpenAI and Broadcom have entered into a long-term agreement for the joint development and supply of AI accelerators. The two companies have signed a term sheet to deploy racks containing AI accelerators and Broadcom networking solutions.
"Partnering with Broadcom is a critical step in building the infrastructure needed to unleash the potential of AI and deliver real benefits to individuals and businesses," said Sam Altman, co-founder and CEO of OpenAI. "Developing our own accelerator will further enrich our broader ecosystem of partners, enabling us to jointly build the capabilities necessary to push the frontiers of AI for the benefit of all humanity."
"Broadcom's collaboration with OpenAI marks a pivotal moment in the pursuit of artificial general intelligence," said Hock E. Tan, president and CEO of Broadcom. "OpenAI has been at the forefront of the AI revolution since the creation of ChatGPT. We are excited to jointly develop and deploy 10 gigawatts of next-generation accelerators and networking systems to pave the way for the future of AI."
"Our collaboration with Broadcom will drive breakthroughs in AI and bring the technology's full potential closer to reality," said Greg Brockman, co-founder and president of OpenAI. "By building our own chips, we can embed what we learn from creating cutting-edge models and products directly into the hardware, unlocking new levels of power and intelligence."
Dr. Charlie Kawwas, president of Broadcom's Semiconductor Solutions Group, stated: "Our collaboration with OpenAI continues to set new industry standards for the design and deployment of open, scalable, and energy-efficient AI clusters. Custom accelerators, combined with standards-based Ethernet scale-up and scale-out networking solutions, deliver cost- and performance-optimized next-generation AI infrastructure. These racks encompass Broadcom's end-to-end portfolio of Ethernet, PCIe, and fiber connectivity solutions, further solidifying our leadership in AI infrastructure."
It is worth noting that while OpenAI and Broadcom did not disclose the value of the agreement, reports circulating in recent weeks have suggested that OpenAI is the unnamed new cloud customer Broadcom announced during its earnings conference call, with an order of roughly $10 billion.
Furthermore, OpenAI executives have previously estimated that, at current prices, deploying 1 gigawatt of AI computing capacity costs approximately $50 billion, implying a total investment of around $500 billion for the 10 GW project. Industry data suggest that servers account for over 60% of data center costs, and that AI chips account for over 70% of the cost of an AI server. On those figures, AI chips alone could account for more than $200 billion of the project's total investment.
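The arithmetic behind these estimates can be sketched as follows; all figures are the article's cited estimates, not disclosed deal terms:

```python
# Back-of-the-envelope check of the cost figures cited above.
# All inputs are estimates reported in the article, not confirmed deal terms.
cost_per_gw_usd = 50e9        # ~$50B per 1 GW of AI compute (OpenAI estimate)
total_gw = 10                 # scale of the Broadcom partnership
server_share = 0.60           # servers: over 60% of data center cost (cited)
chip_share_of_server = 0.70   # AI chips: over 70% of AI server cost (cited)

total_cost = cost_per_gw_usd * total_gw
chip_cost = total_cost * server_share * chip_share_of_server

print(f"Total project cost: ${total_cost / 1e9:.0f}B")    # → $500B
print(f"Implied AI chip cost: ${chip_cost / 1e9:.0f}B")   # → $210B
```

Since both the 60% and 70% shares are stated as lower bounds, the implied $210 billion is consistent with the article's "over $200 billion" figure.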
However, since OpenAI chose to develop its own AI chips, with Broadcom providing back-end design services and manufacturing outsourced to TSMC, the actual cost should be lower than buying AI chips directly from Nvidia or AMD. This was a key factor in OpenAI's decision to develop its own silicon.
According to CNBC, the collaboration between OpenAI and Broadcom was not struck overnight: the two parties had been working together quietly for 18 months. The customized systems span networking, storage, and compute, and are designed specifically for OpenAI's workloads.
Furthermore, according to earlier reports, OpenAI's in-house AI chips will be optimized for inference and will connect to the network via Broadcom's Ethernet stack. The two parties plan to begin deploying racks containing these in-house chips by the end of 2026. OpenAI President Greg Brockman also revealed that the company plans to use its own models to accelerate the chip design process and improve efficiency.
In fact, the 10 GW Broadcom partnership for self-developed AI chips is part of OpenAI's massive commitment to securing computing power for its future development, and Sam Altman has hinted that 10 GW is just the beginning. OpenAI currently operates just over 2 GW of capacity, which has been enough to scale ChatGPT to its current size, develop and launch the video generation service Sora, and conduct extensive AI research.
Recently, OpenAI has announced a $300 billion cloud services agreement with Oracle, a $22 billion cloud services agreement with CoreWeave, a $100 billion chip procurement agreement with Nvidia (for 10GW of data center capacity), and a 6GW AI chip procurement agreement with AMD. If these massive partnerships are fulfilled, OpenAI will have over 33GW of AI data center computing power, which is expected to accelerate the development of its AI technology.
In summary, this collaboration represents another key step toward OpenAI's historically ambitious AI data center buildout. For Broadcom, the deal reflects its core position in custom AI accelerators, and the selection of Broadcom's Ethernet technology for both scale-up and scale-out networking in AI data centers further consolidates its place in the AI infrastructure supply chain.