2024년 12월 28일

 

Step1. Speaking without preparation

So my on-device LLM has a huge memory footprint, so it needs to be loaded into a large amount of memory. That means it can't be used as-is on mobile and endpoint devices. Although the LLM is huge, its performance is very good, so there is a lot of work on making it smaller. Of course, when it becomes smaller, the performance drops, so people try to build small LLMs while keeping the performance at the same level as before.
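One common way to make a model smaller while keeping performance close to the original is post-training quantization. This is a minimal illustrative sketch (not any specific library's method), assuming symmetric per-tensor int8 quantization and using a random matrix in place of real model weights:

```python
# A minimal sketch of post-training quantization, one way models are made
# smaller: store weights as 8-bit integers instead of 32-bit floats.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization to int8."""
    scale = np.abs(w).max() / 127.0          # map the largest weight to 127
    q = np.round(w / scale).astype(np.int8)  # 4x smaller than float32
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale      # approximate original weights

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in for real weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes // q.nbytes)   # 4 (memory saved)
print(np.abs(w - w_hat).max() <= 0.5 * scale)  # True: only rounding error remains
```

The small rounding error per weight is why accuracy drops a little as models shrink, and why most of the research effort goes into keeping that drop negligible.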

 

 

Step2. Upgrading skills through input

Input article: https://medium.com/state-of-the-art-technology/the-edge-of-intelligence-navigating-the-shift-to-on-device-llms-cbb5368dbdd0

 

Reducing the parameters in the model results in:

- less expensive training and inference
- the ability to run on less powerful devices

Companies are experimenting with smaller models, comparing them in terms of their parameters, memory, and MMLU benchmark performance.

Other phrases from the article:

- a comprehensive framework designed to
- starting next year
- real-time interactions with AI applications
- the dramatic reduction in latency
- making technology more accessible and consistent in performance regardless of network conditions

 

 

Step3. Study

 
