Meta, the parent company of Facebook, Instagram, and WhatsApp, has begun testing its own custom chip designed for training artificial intelligence (AI) systems, Reuters reports.
The move signals the tech giant’s effort to reduce its dependence on external suppliers such as Nvidia and to lower infrastructure costs as it pushes deeper into AI.
According to sources familiar with the matter, Meta has deployed the chip on a small scale and will expand production if testing proves successful. The chip is part of the company’s AI strategy, which includes a heavy focus on recommendation systems and generative AI products.
By developing its own AI chips, Meta aims to control expenses and optimise performance for its workloads. The company has projected total expenses between $114 billion and $119 billion for 2025, with up to $65 billion allocated to capital expenditures, largely for AI infrastructure.
Designed specifically for AI workloads, the new chip is expected to be more power-efficient than the general-purpose GPUs typically used for AI training. Because it handles only AI-specific tasks, it could deliver better efficiency and cost-effectiveness than off-the-shelf hardware.
Production of the chip is being handled by Taiwan Semiconductor Manufacturing Company (TSMC), a major player in the global semiconductor industry.
The test deployment began after Meta completed a critical stage in chip development known as “tape-out,” in which a completed design is sent to a fabrication plant for manufacturing. The process, which can cost tens of millions of dollars, is a key milestone in chip production.
Meta previously abandoned a similar project after early tests failed, opting instead to invest billions in Nvidia GPUs in 2022.
Meta nonetheless remains one of Nvidia’s largest customers, using its hardware to train the AI models that power its recommendation systems, advertising tools, and Llama series of foundation models.
The company has already deployed an earlier version of its in-house chip, known as the Meta Training and Inference Accelerator (MTIA), for AI inference—where AI systems generate responses based on user inputs.
Meta now aims to expand its use of proprietary chips for AI training, which involves feeding massive amounts of data into models to improve their accuracy and capabilities.
Speaking at the Morgan Stanley technology, media, and telecom conference last week, Meta’s Chief Product Officer Chris Cox described the company’s chip strategy as progressing in stages. “We’re working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI,” he said. He added that while the company is still in the early phases, the first-generation MTIA chip for recommendations was seen as a “big success.”
Meta’s push to reduce its reliance on Nvidia comes as the AI chip market is shifting. The dominance of large language models trained on ever-larger datasets is being challenged by new approaches that prioritise computational efficiency.
Chinese startup DeepSeek recently launched low-cost AI models that lean more heavily on inference than on extensive training, raising questions about the long-term sustainability of scaling up AI models with massive amounts of data. The development briefly sent Nvidia’s stock sharply lower earlier this year, though the company has since regained most of its losses.
While Meta’s in-house chip effort could eventually cut costs and improve AI performance, the company still faces challenges in matching Nvidia’s advanced hardware capabilities.