Large scale training with NeMo Megatron on AWS ParallelCluster using P5 instances
Akshit Arora (NVIDIA), Peter Dykas (NVIDIA), Aman Shanbhag (AWS), Sean Smith (AWS), Pierre-Yves (AWS)
Available on AWS HPC Blog - May 2024
This post was last updated on May 29, 2024.