nickpsecurity
today at 8:13 PM
Most have been trained on illegally distributed, copyrighted works, and they might output them, too. People might want untainted models. Additionally, some models have weaknesses due to their tokenizers, pre-training data, or moral alignment (political bias).
For those reasons, users might want to train a new model from scratch.
Researchers working on training methods have a different problem: they need to see whether a new technique, like an optimization algorithm, gets better results. They can try techniques more quickly and cheaply if small training runs are representative of what larger models do. If BabyLM-10M were representative, they could test each technique at the FLOPS/$ of a 10M model instead of a 1B model.
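To make the cost gap concrete, here's a minimal sketch using the common ~6·N·D approximation for dense-transformer training FLOPs (6 × parameters × tokens). The parameter and token counts are illustrative assumptions, not BabyLM's actual configuration:

```python
# Rough training-cost comparison via the standard ~6 * params * tokens
# FLOPs estimate for dense transformers.
# Model/token sizes below are illustrative assumptions, not BabyLM's setup.

def train_flops(params: float, tokens: float) -> float:
    """Approximate training FLOPs for a dense transformer."""
    return 6 * params * tokens

small = train_flops(10e6, 100e6)  # hypothetical 10M-param, 100M-token run
large = train_flops(1e9, 10e9)    # hypothetical 1B-param, 10B-token run

print(f"small run: {small:.2e} FLOPs")   # ~6e15
print(f"large run: {large:.2e} FLOPs")   # ~6e19
print(f"cost ratio: {large / small:,.0f}x")  # ~10,000x
```

At those assumed scales, the small run is roughly 10,000x cheaper in raw FLOPs, which is the whole appeal of a representative small-scale benchmark.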
So, both researchers and users might want new models trained from scratch. The cheaper they are to train, the better.