Meta Expands AI Infrastructure
The collaboration deepens the already strong relationship between the social media giant and the AI-focused cloud operator, with the arrangement now extended through December 2032, according to the press release.
CoreWeave plans to roll out dedicated cloud capacity across multiple data centers to support a major industry transition toward “AI inference,” which demands substantial computing resources to run and deploy pre-trained AI models.
For Meta, CoreWeave’s infrastructure will include some of the industry’s first deployments of chip manufacturer Nvidia’s upcoming Vera Rubin graphics processing unit (GPU) platform.
CoreWeave co-founder and CEO Michael Intrator noted that the agreement highlights the growing demand across the industry for advanced, high-performance infrastructure needed to support rapidly developing AI applications.
This $21 billion commitment builds on a $14 billion deal the two companies signed last year, reflecting Meta’s growing need for specialized computing power that is reportedly more advanced than its own internal data center systems.
Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.