QLoRA - Efficient Finetuning of Quantized LLMs
4.7K views · 1 year ago
QLoRA is an efficient finetuning approach that quantizes a pretrained model to 4-bit precision. This lets people fine-tune large models on a single GPU: it's now possible to fine-tune a 33B-parameter model in less than 24 GB of memory.
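The setup described above can be sketched with the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries (a minimal configuration sketch; the base model name and LoRA hyperparameters below are illustrative assumptions, not values from the video):

```python
# Sketch of a QLoRA-style setup: the base model is loaded in 4-bit NF4
# precision via bitsandbytes, and small LoRA adapters are trained on top.
# Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization config: NF4 data type plus double quantization,
# with bf16 used for the actual matrix-multiply compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",          # illustrative 33B-class base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these low-rank matrices receive gradients;
# the 4-bit base weights stay frozen
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable
```

Because the frozen base weights are stored in 4 bits and only the adapters need optimizer state, the whole run fits in a single consumer GPU's memory.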
#datascience #machinelearning #lora #peft #qlora #finetuning #largelanguagemodels
Paper: https://arxiv.org/abs/2305.14314
Code+Demo: https://github.com/artidoro/qlora
Samples: https://colab.research.google.com/dri...
Colab: https://colab.research.google.com/dri...
Background by Vishnu Mohanan: https://unsplash.com/collections/1779...
━━━━━━━━━━━━━━━━━━━━━━━━━
★ Rajistics Social Media »
● Link Tree: https://linktr.ee/rajistics
● LinkedIn: rajistics
━━━━━━━━━━━━━━━━━━━━━━━━━
Published on 1402/03/05.