OpenAI's Exploration of Developing Its Own AI Chips

Discover OpenAI's bid to tackle the chip shortage by considering AI chip development. Learn about their strategies and the implications for the AI industry.

OpenAI, one of the best-funded AI startups, is exploring the possibility of developing its own AI chips.

Reports suggest that discussions regarding OpenAI's AI chip strategies have been in progress since at least the previous year. This move comes as the chip shortage for training AI models continues to escalate. OpenAI is said to be evaluating several approaches to bolster its chip aspirations, including the potential acquisition of an existing AI chip manufacturer or embarking on an in-house chip design initiative.

OpenAI's CEO, Sam Altman, has placed a high priority on acquiring more AI chips for the company, according to Reuters.

Presently, OpenAI, much like its competitors, relies on GPU-based hardware to develop models such as ChatGPT, GPT-4, and DALL-E 3. GPUs' ability to perform numerous computations in parallel makes them ideal for training today's most advanced AI models.

However, the surge in demand for generative AI, which has significantly benefited GPU manufacturers like Nvidia, has put immense pressure on the GPU supply chain. Microsoft has recently cautioned about a severe shortage of the server hardware required for AI, which could potentially lead to service disruptions, as reported in their summer earnings report. Additionally, Nvidia's top-performing AI chips are reportedly unavailable until 2024.

GPUs are also indispensable for running and serving OpenAI's models, with the company relying on GPU clusters in the cloud to handle customer workloads. However, the cost associated with these GPUs is notably high.

An analysis by Bernstein analyst Stacy Rasgon estimates that if ChatGPT queries were to reach a scale even a tenth that of Google Search, it would necessitate an initial investment of roughly $48.1 billion in GPUs and an annual expenditure of approximately $16 billion to maintain operations.
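To see how an estimate like this is built, here is a minimal back-of-envelope sketch. Every parameter below (GPU unit price, query volume, per-GPU serving throughput) is an illustrative assumption chosen for round numbers, not a figure from the Bernstein analysis:

```python
# Back-of-envelope GPU cost sketch for serving a large language model.
# All parameters are illustrative assumptions, not Bernstein's figures.

GPU_UNIT_COST = 30_000            # assumed price per data-center GPU, USD
QUERIES_PER_SECOND = 320_000      # assumed load at a fraction of search-engine scale
QUERIES_PER_GPU_PER_SECOND = 0.2  # assumed serving throughput of one GPU

# GPUs required to sustain the assumed query rate
gpus_needed = QUERIES_PER_SECOND / QUERIES_PER_GPU_PER_SECOND

# Up-front hardware spend
capex = gpus_needed * GPU_UNIT_COST

print(f"GPUs needed: {gpus_needed:,.0f}")          # → 1,600,000
print(f"Up-front GPU spend: ${capex / 1e9:.1f}B")  # → $48.0B
```

The point of the exercise is that the capital requirement scales roughly linearly with query volume and inversely with per-GPU throughput, which is why both custom silicon (better throughput per dollar) and model efficiency work can move the number dramatically.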

OpenAI wouldn't be the first entity to explore the creation of custom AI chips. Google boasts its own processor, the TPU (tensor processing unit), designed for training large generative AI systems like PaLM-2 and Imagen. Amazon provides proprietary chips to AWS customers, both for training (Trainium) and inferencing (Inferentia). Microsoft is also reportedly collaborating with AMD to develop an in-house AI chip named Athena, which OpenAI is said to be testing.

Certainly, OpenAI is well-positioned to make substantial investments in research and development. The company, having raised over $11 billion in venture capital, is nearing $1 billion in annual revenue. Furthermore, it is contemplating a share sale that could potentially catapult its secondary-market valuation to $90 billion, according to a recent Wall Street Journal report.

Nevertheless, the hardware business, particularly in the realm of AI chips, is a challenging venture. Last year, AI chipmaker Graphcore reportedly witnessed a valuation drop of $1 billion after a deal with Microsoft fell through, leading to planned job cuts due to the "extremely challenging" macroeconomic environment. The situation worsened in recent months as Graphcore reported declining revenue and increased losses. Similarly, Habana Labs, an AI chip company owned by Intel, laid off approximately 10% of its workforce. Meta's custom AI chip efforts have also encountered difficulties, resulting in the abandonment of some experimental hardware projects.

Even if OpenAI commits to bringing a custom chip to market, such an endeavor could take several years and require an annual expenditure of hundreds of millions of dollars. It remains uncertain whether the startup's investors, including Microsoft, are willing to embrace such a high-risk endeavor.