With the AI hardware space heating up, Meta Platforms is taking a big step towards self-sufficiency, developing advanced in-house chips to support its fast-growing AI ecosystem.
A company circular released last week outlines plans to lower operating costs by reducing reliance on external suppliers, particularly Nvidia.
Meta confirmed four chips under its Meta Training and Inference Accelerator (MTIA) programme:
- MTIA 300 (already in use)
- MTIA 400
- MTIA 450
- MTIA 500
The later models, especially the MTIA 450 and 500, are expected to roll out from early 2027 to handle large-scale AI workloads.
Meta also plans to introduce new chip versions every six months, signalling how quickly demand for AI infrastructure is rising.
Over the past few years, Meta has shifted beyond its social media roots into a broader technology company spanning hardware and artificial intelligence.
Its growing focus on custom silicon reflects the need to control both performance and cost as AI usage scales across its platforms.
Reducing Dependence on Nvidia, Not Replacing It
Nvidia’s GPUs still dominate the AI hardware market, and Meta’s move into custom chips is aimed at reducing, not eliminating, that dependence.
In designing its own silicon, the company can optimise performance for specific internal workloads while cutting long-term infrastructure costs.
However, this is not a full breakaway. Meta is investing heavily in external hardware, including a recent multi-billion-dollar deal with Nvidia, alongside significant purchases from Advanced Micro Devices (AMD).
In practical terms, Meta’s custom chips are expected to handle targeted tasks such as inference, while Nvidia GPUs remain critical for training large AI models.
MTIA Chips vs Nvidia’s Vera CPU
Meta’s MTIA chips and Nvidia’s Vera CPU reflect two different approaches to AI infrastructure.
The MTIA series, including the 300, 400, 450 and 500, comprises application-specific integrated circuits (ASICs), built mainly for inference and selected training tasks within Meta's data centres. They are tailored to the company's internal systems across its apps and services.
By contrast, Nvidia’s Vera CPU is designed as a general-purpose data centre processor, capable of handling a broader range of workloads, including emerging agentic AI systems that require decision-making, orchestration and large-scale data processing.
In simple terms, Meta is building hardware for its own needs, while Nvidia continues to supply the flexible, high-performance systems that power much of the wider AI industry, including parts of Meta’s infrastructure.
Meta is not alone in this shift. Across the tech industry, major companies are investing in custom chips to gain greater control over performance, cost and scalability.
The trend underlines a broader reality: AI is no longer just about software; the hardware behind it is becoming just as critical.
Ultimately, Meta’s strategy looks more like diversification than disruption. By combining in-house chips with external suppliers, the company is building a more flexible and resilient AI infrastructure.