Introducing the AI Research SuperCluster – Meta’s cutting-edge supercomputer for AI research

By Techeconomy | January 25, 2022 | News

Developing the next generation of advanced AI will require powerful new computers capable of quintillions of operations per second.

Today, Meta is announcing that we’ve designed and built the AI Research SuperCluster (RSC) — which we believe is among the fastest AI supercomputers running today and will be the fastest AI supercomputer in the world when it’s fully built out in mid-2022.

Mark Zuckerberg's post on AI Research SuperCluster

Our researchers have already started using RSC to train large models in natural language processing (NLP) and computer vision for research, with the aim of one day training models with trillions of parameters.

RSC will help Meta’s AI researchers build new and better AI models that can learn from trillions of examples; work across hundreds of different languages; seamlessly analyze text, images, and video together; develop new augmented reality tools; and much more. Our researchers will be able to train the largest models needed to develop advanced AI for computer vision, NLP, speech recognition, and more.

We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together. Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform — the metaverse, where AI-driven applications and products will play an important role.

AI Research SuperCluster by Meta

Why do we need an AI supercomputer at this scale?

Meta has been committed to long-term investment in AI since 2013, when we created the Facebook AI Research lab. In recent years, we’ve made significant strides in AI thanks to our leadership in a number of areas, including self-supervised learning, where algorithms can learn from vast numbers of unlabeled examples, and transformers, which allow AI models to reason more effectively by focusing on certain areas of their input.

To fully realize the benefits of self-supervised learning and transformer-based models, domains ranging from vision, speech, and language to critical use cases like identifying harmful content will require training increasingly large, complex, and adaptable models.

Computer vision, for example, needs to process larger, longer videos with higher data sampling rates. Speech recognition needs to work well even in challenging scenarios with lots of background noise, such as parties or concerts. NLP needs to understand more languages, dialects, and accents. And advances in other areas, including robotics, embodied AI, and multimodal AI, will help people accomplish useful tasks in the real world.

High-performance computing infrastructure is a critical component in training such large models, and Meta's AI research team has been building these high-powered systems for many years.

The first generation of this infrastructure, designed in 2017, has 22,000 NVIDIA V100 Tensor Core GPUs in a single cluster that performs 35,000 training jobs a day. Up until now, this infrastructure has set the bar for Meta’s researchers in terms of its performance, reliability, and productivity.

In early 2020, we decided the best way to accelerate progress was to design a new computing infrastructure from a clean slate to take advantage of new GPU and network fabric technology. We wanted this infrastructure to be able to train models with more than a trillion parameters on data sets as large as an exabyte — which, to provide a sense of scale, is the equivalent of 36,000 years of high-quality video.
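
A quick back-of-envelope check of that video comparison (an illustrative calculation, not a figure from the announcement): dividing one exabyte by 36,000 years implies a sustained bitrate of roughly 7 Mb/s, which is in the range of high-quality HD video.

```python
# Back-of-envelope check: what video bitrate makes 1 exabyte equal
# roughly 36,000 years of footage? Illustrative arithmetic only.
EXABYTE_BYTES = 1e18
YEARS = 36_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

bytes_per_second = EXABYTE_BYTES / (YEARS * SECONDS_PER_YEAR)
megabits_per_second = bytes_per_second * 8 / 1e6

print(f"Implied bitrate: {megabits_per_second:.1f} Mb/s")  # ~7 Mb/s, typical for HD video
```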

While the high-performance computing community has been tackling scale for decades, we also had to make sure we had all the needed security and privacy controls in place to protect any training data we use.

Unlike our previous AI research infrastructure, which leveraged only open source and other publicly available data sets, RSC also helps us ensure that our research translates effectively into practice by allowing us to include real-world examples from Meta's production systems in model training.

By doing this, we can help advance research to perform downstream tasks such as identifying harmful content on our platforms as well as research into embodied AI and multimodal AI to help improve user experiences on our family of apps. We believe this is the first time performance, reliability, security, and privacy have been tackled at such a scale.

AI Research SuperCluster by Meta

AI supercomputers are built by combining multiple GPUs into compute nodes, which are then connected by a high-performance network fabric to allow fast communication between those GPUs. RSC today comprises a total of 760 NVIDIA DGX A100 systems as its compute nodes, for a total of 6,080 GPUs — with each A100 GPU being more powerful than the V100 used in our previous system.

The GPUs communicate via an NVIDIA Quantum 200 Gb/s InfiniBand two-level Clos fabric that has no oversubscription.
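
A short sketch of the cluster arithmetic follows. The 8 GPUs per node is the standard NVIDIA DGX A100 configuration; the one 200 Gb/s link per GPU is an illustrative simplification rather than a detail from the announcement.

```python
# Cluster figures from the announcement plus derived totals.
# Assumes the standard DGX A100 configuration of 8 GPUs per node and,
# purely for illustration, one 200 Gb/s InfiniBand link per GPU.
DGX_A100_NODES = 760
GPUS_PER_NODE = 8
LINK_GBPS = 200

total_gpus = DGX_A100_NODES * GPUS_PER_NODE      # 6,080 GPUs
injection_tbps = total_gpus * LINK_GBPS / 1000   # aggregate injection bandwidth, Tb/s

print(f"Total GPUs: {total_gpus}")
print(f"Aggregate injection bandwidth: ~{injection_tbps:.0f} Tb/s (illustrative)")
```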

RSC’s storage tier has 175 petabytes of Pure Storage FlashArray, 46 petabytes of cache storage in Penguin Computing Altus systems, and 10 petabytes of Pure Storage FlashBlade.

Early benchmarks on RSC, compared with Meta’s legacy production and research infrastructure, have shown that it runs computer vision workflows up to 20 times faster, runs the NVIDIA Collective Communication Library (NCCL) more than nine times faster, and trains large-scale NLP models three times faster. That means a model with tens of billions of parameters can finish training in three weeks, compared with nine weeks before.
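
NCCL is the collective-communication library that keeps GPUs synchronized during distributed training, so its throughput directly affects large-model training speed. A minimal sketch of the kind of operation it accelerates, an all-reduce over gradients via PyTorch's NCCL backend, is shown below (illustrative only, not Meta's training code).

```python
# Minimal distributed all-reduce over NCCL, the collective library
# benchmarked above. Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
# Illustrative sketch only; not Meta's production training code.
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")          # NCCL handles GPU-to-GPU collectives
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    grad = torch.ones(1024, device="cuda") * rank    # stand-in for a gradient shard
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)      # sum gradients across all ranks
    grad /= dist.get_world_size()                    # average, as in data-parallel training

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```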

AI Research SuperCluster by Meta

Designing and building something like AI Research SuperCluster isn’t a matter of performance alone but performance at the largest scale possible, with the most advanced technology available today.

When RSC is complete, the InfiniBand network fabric will connect 16,000 GPUs as endpoints, making it one of the largest such networks deployed to date. Additionally, we designed a caching and storage system that can serve 16 TB/s of training data, and we plan to scale it up to 1 exabyte.
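
To put those targets in perspective (illustrative arithmetic, not from the announcement): at 16 TB/s a single pass over a full exabyte takes roughly 17 hours, and spread across 16,000 GPUs the bandwidth works out to about 1 GB/s per GPU.

```python
# Illustrative arithmetic on the build-out targets: 16 TB/s of training-data
# bandwidth, an exabyte of storage, and 16,000 GPU endpoints.
BANDWIDTH_TBPS = 16      # terabytes per second
STORAGE_TB = 1e6         # 1 exabyte = 1,000,000 TB
GPUS = 16_000

seconds_per_pass = STORAGE_TB / BANDWIDTH_TBPS
hours_per_pass = seconds_per_pass / 3600
per_gpu_gb_per_s = BANDWIDTH_TBPS * 1e3 / GPUS

print(f"One full pass over 1 EB: ~{hours_per_pass:.1f} hours")     # ~17.4 hours
print(f"Per-GPU share of bandwidth: {per_gpu_gb_per_s:.1f} GB/s")  # ~1 GB/s
```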

All this infrastructure must be extremely reliable, as we estimate some experiments could run for weeks and require thousands of GPUs. Lastly, the entire experience of using AI Research SuperCluster has to be researcher-friendly so our teams can easily explore a wide range of AI models.

A big part of achieving this was in working with a number of long-time partners, all of whom also helped design the first generation of our AI infrastructure in 2017.

Penguin Computing, our architecture and managed services partner, worked with our operations team on hardware integration to deploy the cluster and helped set up major parts of the control plane.

Pure Storage provided us with a robust and scalable storage solution. And NVIDIA provided us with its AI computing technologies featuring cutting-edge systems, GPUs, and InfiniBand fabric, and software stack components like NCCL for the cluster.

…and doing it remotely, during a pandemic

But there were other unexpected challenges that arose in RSC’s development — namely the coronavirus pandemic. RSC began as a completely remote project that the team took from a simple shared document to a functioning cluster in about a year and a half.

COVID-19 and industry-wide wafer supply constraints also brought supply chain issues that made it difficult to get everything from chips to components like optics and GPUs, and even construction materials — all of which had to be transported in accordance with new safety protocols.

To build this cluster efficiently, we had to design it from scratch, creating many entirely new Meta-specific conventions and rethinking previous ones along the way.

We had to write new rules around our data center designs — including their cooling, power, rack layout, cabling, and networking (including a completely new control plane), among other important considerations. We had to ensure that all the teams, from construction to hardware to software and AI, were working in lockstep and in coordination with our partners.

Beyond the core system itself, there was also a need for a powerful storage solution, one that can serve terabytes per second of bandwidth from an exabyte-scale storage system.

To serve AI training’s growing bandwidth and capacity needs, we developed a storage service, AI Research Store (AIRStore), from the ground up.

To optimize for AI models, AIRStore utilizes a new data preparation phase that preprocesses the data set to be used for training. Once the preparation is performed, the prepared data set can be used for multiple training runs until it expires.

AIRStore also optimizes data transfers so that cross-region traffic on Meta’s inter-datacenter backbone is minimized.
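
The announcement does not describe AIRStore's interface, but the prepare-once, train-many-times pattern it outlines can be sketched roughly as follows. All names below are hypothetical; this is not the real AIRStore API.

```python
# Rough sketch of a prepare-once, reuse-many-times dataset cache, in the
# spirit of the AIRStore description above. All names here are hypothetical;
# this is not the real AIRStore API.
import hashlib
import json
import time
from pathlib import Path

CACHE_DIR = Path("/tmp/prepared_datasets")   # hypothetical cache location
TTL_SECONDS = 7 * 24 * 3600                  # prepared data "expires" after a week

def _cache_path(dataset_id: str) -> Path:
    return CACHE_DIR / hashlib.sha256(dataset_id.encode()).hexdigest()

def prepare_dataset(dataset_id: str, preprocess) -> Path:
    """Run preprocessing once; later calls reuse the cached result until it expires."""
    path = _cache_path(dataset_id)
    meta = path / "meta.json"
    if meta.exists():
        created = json.loads(meta.read_text())["created"]
        if time.time() - created < TTL_SECONDS:
            return path                      # reuse across multiple training runs
    path.mkdir(parents=True, exist_ok=True)
    preprocess(dataset_id, path)             # the expensive preparation happens only here
    meta.write_text(json.dumps({"created": time.time()}))
    return path
```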

How we safeguard data in AI Research SuperCluster

To build new AI models that benefit the people using our services — whether that’s detecting harmful content or creating new AR experiences — we need to teach models using real-world data from our production systems. RSC has been designed from the ground up with privacy and security in mind, so that Meta’s researchers can safely train models using encrypted user-generated data that is not decrypted until right before training.

For example, AI Research SuperCluster is isolated from the larger internet, with no direct inbound or outbound connections, and traffic can flow only from Meta’s production data centers.

To meet our privacy and security requirements, the entire data path from our storage systems to the GPUs is end-to-end encrypted and has the necessary tools and processes to verify that these requirements are met at all times. Before data is imported to RSC, it must go through a privacy review process to confirm it has been correctly anonymized.

The data is then encrypted before it can be used to train AI models, and decryption keys are deleted regularly to ensure older data is not still accessible. And since the data is only decrypted at one endpoint, in memory, it is safeguarded even in the unlikely event of a physical breach of the facility.
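
The post does not detail the encryption machinery, but the decrypt-only-in-memory pattern it describes can be illustrated with a small sketch using symmetric encryption, here the Python cryptography package's Fernet. This is a hypothetical illustration, not Meta's actual pipeline.

```python
# Illustrative sketch of the pattern described above: training data stays
# encrypted at rest and is decrypted only in memory, immediately before use.
# Uses the `cryptography` package's Fernet; this is not Meta's actual pipeline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice keys would live in a managed key service
fernet = Fernet(key)

# Anonymized training example, encrypted before it reaches the cluster's storage.
record = b'{"text": "anonymized training example"}'
encrypted_at_rest = fernet.encrypt(record)

def training_step(ciphertext: bytes) -> None:
    plaintext = fernet.decrypt(ciphertext)   # decrypted only here, in memory
    # ... feed `plaintext` into the model's data loader ...
    del plaintext                            # nothing decrypted persists after the step

training_step(encrypted_at_rest)

# Deleting (rotating away) the key makes older encrypted data unreadable,
# mirroring the regular key-deletion policy described above.
```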

Phase two and beyond

AI Research SuperCluster (RSC) is up and running today, but its development is ongoing. Once we complete phase two of building out RSC, we believe it will be the fastest AI supercomputer in the world, performing at nearly 5 exaflops of mixed precision compute.

Through 2022, we’ll work to increase the number of GPUs from 6,080 to 16,000, which will increase AI training performance by more than 2.5x.
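
Those figures line up with the roughly 5 exaflops target mentioned above, as a quick illustrative check shows. The ~312 teraflops peak per A100 (NVIDIA's published dense FP16/BF16 tensor throughput) is an assumption here, not a number from the post.

```python
# Illustrative consistency check on the phase-two numbers. The ~312 TFLOPS
# peak per A100 is NVIDIA's published dense FP16/BF16 tensor figure and is
# an assumption here, not a number from the post.
GPUS_PHASE_1 = 6_080
GPUS_PHASE_2 = 16_000
PEAK_TFLOPS_PER_A100 = 312

scale_up = GPUS_PHASE_2 / GPUS_PHASE_1                     # ~2.6x, matching "more than 2.5x"
peak_exaflops = GPUS_PHASE_2 * PEAK_TFLOPS_PER_A100 / 1e6  # ~5 exaflops of mixed precision

print(f"GPU count increase: {scale_up:.2f}x")
print(f"Aggregate peak: ~{peak_exaflops:.1f} exaflops")
```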

The InfiniBand fabric will expand to support 16,000 ports in a two-layer topology with no oversubscription. The storage system will have a target delivery bandwidth of 16 TB/s and exabyte-scale capacity to meet increased demand.

We expect such a step-function change in compute capability to enable us not only to create more accurate AI models for our existing services, but also to enable completely new user experiences, especially in the metaverse.

Our long-term investments in self-supervised learning and in building next-generation AI infrastructure with RSC are helping us create the foundational technologies that will power the metaverse and advance the broader AI community as well.

Tags: AI research infrastructure, AI Research SuperCluster, Meta, RSC, RSC by Meta