Congrats to @AIatMeta on Llama 3 release!! 🎉 https://t.co/fSw615zE8S

Notes: Releasing 8B and 70B (both base and finetuned) models, strong-performing in their model class (but we'll see when the rankings come in @lmsysorg :))

400B is still training, but already encroaching…

— Andrej Karpathy (@karpathy) April 18, 2024
Scaling laws. Very notably, 15T is a very, very large dataset to train with for a model as "small" as 8B parameters; this is not normally done, and it is new and very welcome. The Chinchilla "compute optimal" point for an 8B model would be to train it for ~200B tokens (if you were only interested in getting the most "bang-for-the-buck" w.r.t. model performance at that size). So this is training ~75X beyond that point, which is unusual, but personally I think extremely welcome, because we all get a very capable model.
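To make the back-of-the-envelope math explicit, here is a minimal sketch of the arithmetic. The ~20-tokens-per-parameter ratio is a common rule-of-thumb reading of Chinchilla, not a figure from the post (the ~200B number above implies closer to 25 tokens per parameter), and the function name is illustrative:

```python
def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal training-token count for a given parameter count
    (Chinchilla rule of thumb: ~20 tokens per parameter)."""
    return n_params * tokens_per_param

llama3_8b_params = 8e9        # 8B parameters
llama3_train_tokens = 15e12   # 15T tokens reported for Llama 3

optimal = chinchilla_optimal_tokens(llama3_8b_params)  # ~160B tokens (the post rounds to ~200B)
overshoot = llama3_train_tokens / optimal               # ~94x with the 20-tokens/param rule;
                                                         # ~75x against the ~200B figure above

print(f"Chinchilla-optimal tokens for 8B: {optimal:.2e}")
print(f"Actual / optimal ratio: ~{overshoot:.0f}x")
```

Either way you slice the constant, Llama 3 8B is trained tens of times past its compute-optimal token budget, which is the point being made: extra training compute is spent to make a small model more capable at inference time.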