
OpenAI Seeks Nvidia Chip Alternatives as $100 Billion Investment Deal Stalls

2026/02/03 17:37
3 min read

TLDR

  • OpenAI is dissatisfied with some Nvidia chips for inference tasks and has been seeking alternatives since last year for about 10% of its inference computing needs
  • The $100 billion Nvidia investment deal in OpenAI, expected to close within weeks, has been delayed for months as negotiations continue
  • OpenAI has struck deals with AMD, Broadcom, and Cerebras Systems for alternative chips, particularly those with more embedded memory for faster inference
  • Nvidia responded by licensing Groq’s technology for $20 billion and hiring away Groq’s chip designers to strengthen its inference capabilities
  • Both CEOs publicly downplayed tensions, with Sam Altman calling Nvidia’s chips “the best in the world” and Jensen Huang dismissing reports as “nonsense”

OpenAI has been looking for alternatives to some of Nvidia's chips since last year. The ChatGPT maker needs different hardware for inference, the stage in which trained AI models respond to user queries.

The company wants chips that can deliver faster responses for specific workloads, including software development and AI systems that communicate with other software.

OpenAI is seeking alternatives for about 10% of its future inference computing needs. Seven sources familiar with the matter confirmed the company's dissatisfaction with the speed of Nvidia's current hardware on certain tasks.

Delayed Investment Deal

Nvidia announced plans in September to invest up to $100 billion in OpenAI. The deal was supposed to close within weeks but has dragged on for months.


OpenAI’s changing product roadmap has altered its computational requirements. This shift has complicated the ongoing negotiations with Nvidia.

During this period, OpenAI signed deals with AMD, Broadcom, and Cerebras Systems. These companies provide chips designed to compete with Nvidia’s offerings.

The issue became particularly visible in OpenAI’s Codex product for creating computer code. Staff members attributed some of Codex’s performance issues to Nvidia’s GPU-based hardware.

On January 30, Sam Altman told reporters that coding model customers value speed. He said OpenAI would meet this demand partly through its recent deal with Cerebras.

Technical Requirements

OpenAI has focused on companies building chips with large amounts of SRAM memory. SRAM is embedded in the same piece of silicon as the rest of the chip.

This design offers speed advantages for chatbots and AI systems serving millions of users. Inference is more constrained by memory bandwidth than training, because chips spend much of their time fetching model weights from memory rather than computing.

GPUs from Nvidia and AMD rely on external memory packaged alongside the processor. Each trip to that memory adds latency and slows chatbot response times.
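The memory-bandwidth argument above can be sketched with a rough roofline-style calculation. During autoregressive decoding, each generated token requires reading roughly all of the model's weights from memory once, so per-token latency is bounded below by model size divided by memory bandwidth. The model size and bandwidth figures below are illustrative assumptions for the sake of the arithmetic, not vendor specifications.

```python
# Roofline-style sketch: why inference speed is bounded by memory bandwidth.
# Per decoded token, the chip must stream ~all model weights from memory,
# so latency >= model_bytes / bandwidth. All numbers are hypothetical.

def min_token_latency_ms(params_billion: float,
                         bytes_per_param: int,
                         bandwidth_tb_s: float) -> float:
    """Lower bound on per-token decode latency, in milliseconds."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    seconds = model_bytes / (bandwidth_tb_s * 1e12)
    return seconds * 1e3

# A hypothetical 70B-parameter model at 2 bytes per parameter (fp16):
external_mem = min_token_latency_ms(70, 2, 3.0)   # assume ~3 TB/s external memory
on_chip_sram = min_token_latency_ms(70, 2, 20.0)  # assume ~20 TB/s aggregate SRAM

print(f"External-memory bound: {external_mem:.1f} ms/token")
print(f"On-chip SRAM bound:    {on_chip_sram:.1f} ms/token")
```

Under these assumed numbers, the external-memory bound works out to roughly 47 ms per token versus about 7 ms for the on-chip case, which is the kind of gap that motivates SRAM-heavy inference chips.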

Competing products like Anthropic’s Claude and Google’s Gemini use different hardware. They rely more heavily on Google’s tensor processing units, which are designed for inference calculations.

OpenAI discussed working with startups Cerebras and Groq for faster inference chips. However, Nvidia struck a $20 billion licensing deal with Groq that ended OpenAI’s talks with the company.

Nvidia also hired away Groq’s chip designers as part of the agreement. Groq had been in talks with OpenAI and received investor interest at a $14 billion valuation.

Public Statements

Nvidia stated that customers continue to choose its chips for inference because of performance and cost effectiveness. The company said Groq’s intellectual property was highly complementary to its product roadmap.

An OpenAI spokesperson said the company relies on Nvidia to power most of its inference fleet. The spokesperson added that Nvidia delivers the best performance per dollar for inference.

OpenAI infrastructure executive Sachin Katti posted on Monday that the company was “anchoring on Nvidia as the core of our training and inference.” Both companies emphasized their ongoing partnership despite the reported issues.

The post OpenAI Seeks Nvidia Chip Alternatives as $100 Billion Investment Deal Stalls appeared first on CoinCentral.
