The world's most powerful supercomputer is warming up for ChatGPT 5 — thousands of 'old' AMD GPU accelerators crunched 1-trillion-parameter models



The most powerful supercomputer in the world has used just over 8% of the GPUs it is fitted with to train a large language model (LLM) containing one trillion parameters – a size reportedly comparable to OpenAI's GPT-4.

Frontier, housed at Oak Ridge National Laboratory, used 3,072 of its AMD Instinct MI250X GPUs to train an AI system at the trillion-parameter scale, and it used 1,024 of those GPUs (roughly 2.7%) to train a 175-billion-parameter model – essentially the same size as GPT-3, the class of model behind the original ChatGPT.
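As a quick sanity check on those fractions, here is a back-of-envelope calculation in Python. It assumes Frontier's commonly cited build-out of 9,408 compute nodes with four AMD Instinct MI250X accelerators each (37,632 GPUs in total); that total and the script itself are illustrative, not figures taken from the article.

    # Back-of-envelope check of the GPU fractions quoted above.
    # Assumption (not from the article): Frontier's commonly cited
    # configuration of 9,408 nodes x 4 AMD Instinct MI250X GPUs.
    TOTAL_GPUS = 9_408 * 4  # 37,632

    runs = {
        "1-trillion-parameter run": 3_072,
        "175-billion-parameter run": 1_024,
    }

    for name, gpus in runs.items():
        print(f"{name}: {gpus:,} GPUs -> {gpus / TOTAL_GPUS:.1%} of Frontier")

    # Prints approximately:
    #   1-trillion-parameter run: 3,072 GPUs -> 8.2% of Frontier
    #   175-billion-parameter run: 1,024 GPUs -> 2.7% of Frontier

Both results land close to the article's stated shares, which is why the 175-billion-parameter figure reads better as roughly 2.7% than 2.5%.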




This post originally appeared on TechToday.
