We’ve learned that Meta Platforms is putting the final touches on one such cluster, which will be a little bigger than 100,000 H100s, located somewhere in the U.S. The company will use the new supercomputing cluster to train the next version of its Llama model—Llama 4, for those of you who are counting—according to a person at the company who is involved in the effort. The cost of the chips alone could be more than $2 billion!

💬 The Information

(my bold)
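
For a rough sense of where that "$2 billion" figure comes from, here is a back-of-the-envelope check. The per-GPU price is my assumption (reported H100 prices have generally been in the $25,000–$30,000 range), not a number from The Information:

```python
# Back-of-the-envelope check of the "more than $2 billion" chip cost.
# Assumptions (mine, not the article's): ~100,000 H100 GPUs at an
# assumed unit price of roughly $25,000-$30,000 each.

num_gpus = 100_000
price_low, price_high = 25_000, 30_000  # USD per GPU, assumed

low = num_gpus * price_low
high = num_gpus * price_high

print(f"Estimated chip cost: ${low / 1e9:.1f}B - ${high / 1e9:.1f}B")
# -> Estimated chip cost: $2.5B - $3.0B
```

Under those assumptions the chips alone land at roughly $2.5–3 billion, which is consistent with the article's "more than $2 billion" framing.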