
Well, both can't be “more important”, since that's illogical. I think recent strides in high-performance small LLMs have shown that the tasks LLMs are actually useful for may not require the representational capacity that trillion-parameter models offer.

However: the labs releasing these high-intelligence-density models are getting them by first training much larger models and then distilling down. So the most interesting question to me is, how can we accelerate learning in small networks to avoid the necessity of training huge teacher networks?
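For context, "distilling down" here usually means training the small student network to match the teacher's softened output distribution rather than hard labels. A minimal sketch of that loss, assuming the standard temperature-scaled formulation (the temperature value and function names are illustrative, not any particular lab's recipe):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature; higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

The catch the comment points at: computing `teacher_logits` at all presupposes you already trained (and can run inference on) the huge teacher.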


