Why Is Meta Investing Billions to Build Its Own AI Chip?


Meta is developing its first in-house AI training chip. The latest report from Reuters suggests that Meta has started a small-scale deployment of its AI chip and plans to expand if tests go well. This step aims to reduce its reliance on Nvidia’s GPUs.

The report also states that the custom chip is part of Meta’s plan to cut infrastructure costs. The tech giant has forecast total expenses of $114 billion to $119 billion in 2025, of which $65 billion is earmarked for AI infrastructure.

How Is Meta’s AI Chip Different?

Meta’s new chip is a dedicated AI accelerator, meaning it is designed solely for AI workloads. Unlike general-purpose GPUs, which handle a wide range of tasks, a dedicated accelerator can be more power-efficient for training AI models.

The company is working with TSMC, the world’s leading contract chip manufacturer, to produce the chip. Meta has said the chip will complete its first “tape-out,” the phase in which the initial design is sent to the manufacturer. This process costs millions of dollars and takes months to complete.

However, this is not Meta’s first attempt at an in-house AI chip. The company has built custom silicon before but scrapped an inference chip after it failed in testing. As a result, the tech giant ordered billions of dollars’ worth of Nvidia GPUs in 2022. Today, Meta remains one of Nvidia’s largest customers, using its hardware to train the AI models behind content recommendations and ads.

What’s Next for Meta’s AI Plans? 

If everything goes according to plan and the tests are successful, Meta aims to start using its own chips for recommendation algorithms by 2026. The company could also apply them to generative AI workloads, including its Meta AI chatbot.

Meta’s Chief Product Officer, Chris Cox, has said the company is taking a step-by-step approach. While Meta considers its first inference chip a success, the training chip is a bigger test. If it works, Meta can cut costs and gain more control over its AI infrastructure.

Now it all depends on the results, which will determine whether Meta can move away from Nvidia or will keep relying on external suppliers. It is also worth noting that the dynamics of AI research are shifting: many experts now believe that simply scaling AI models with more computing power may not yield further breakthroughs, and more compute-efficient models such as DeepSeek’s are already challenging that approach.
