20241208 English Study feat. On-device LLM
December 28, 2024
Step 1. Speaking without preparation
So an on-device LLM has a huge memory footprint, so it needs to be loaded into a large amount of memory. That means it can't be used as-is on mobile and other endpoint devices. Although the LLM is huge, its performance is very good, so there is a lot of work on making it smaller. But when it becomes smaller, the performance drops. So people try to build small LLMs while keeping the performance at the same level as before.
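The point above about memory size can be made concrete with some back-of-the-envelope arithmetic. This is a minimal sketch, assuming an illustrative 7B-parameter model and common precisions (fp16 at 2 bytes per weight, 4-bit quantization at 0.5 bytes); the function name is my own, not from any library.

```python
# Rough memory footprint of LLM weights at different precisions.
# The 7B size and the byte widths below are illustrative assumptions.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return num_params * bytes_per_param / (1024 ** 3)

params_7b = 7e9
print(f"fp16: {weight_memory_gb(params_7b, 2):.1f} GiB")    # ~13.0 GiB
print(f"int4: {weight_memory_gb(params_7b, 0.5):.1f} GiB")  # ~3.3 GiB
```

Even this rough estimate shows why a full-precision model overwhelms a phone's RAM, and why quantization or smaller parameter counts are the usual paths to on-device deployment.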
Step 2. Upgrading my skills through input
Reducing the parameters in a model results in less expensive training and inference, and lets it run on less powerful devices
Companies are experimenting with smaller models, compared in terms of their parameters, memory, and MMLU benchmark performance
a comprehensive framework designed to
starting next year
real-time interactions with AI applications benefit from the dramatic reduction in latency, making the technology more accessible and consistent in performance regardless of network conditions
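The latency point can be sketched with simple arithmetic: a cloud call pays a network round-trip on top of inference time, while on-device inference does not. All numbers below are illustrative assumptions, not measurements.

```python
# Illustrative latency comparison (all values are assumed, in milliseconds).

def cloud_latency_ms(rtt_ms: float, server_infer_ms: float) -> float:
    """Cloud inference: network round-trip plus server-side compute."""
    return rtt_ms + server_infer_ms

def on_device_latency_ms(device_infer_ms: float) -> float:
    """On-device inference: no network round-trip at all."""
    return device_infer_ms

print(cloud_latency_ms(rtt_ms=150, server_infer_ms=50))  # 200
print(on_device_latency_ms(device_infer_ms=120))         # 120
```

With these assumed numbers, the on-device path wins even though the device itself is slower at inference, and it stays constant when the network degrades, which is the "consistent regardless of network conditions" point above.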
Step 3. Study