
Efficient Large Language Model training with LoRA and Hugging Face PEFT

Channel: Кодовый Тьютор
Views: 22
Upload date: 03.12.2023 17:46
Duration: 00:08:36
Category: Technology and the Internet

Description

Outline:
0:00 Introduction
0:26 Setting up Google Colab
1:25 Loading and preparing dataset
2:40 Loading Model for Fine-Tuning with LoRA
5:08 Starting Training job
7:00 Evaluate and Run Inference with LoRA
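The fine-tuning step in the outline relies on LoRA's core trick: instead of updating a full weight matrix W (d × k), it trains two small matrices A (r × k) and B (d × r) with rank r much smaller than d or k, so the effective weight becomes W + (alpha / r) · B · A. As a rough illustration of that math only (not the actual PEFT implementation used in the video), here is a minimal pure-Python sketch:

```python
def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, the merged LoRA weight."""
    r = len(A)          # rank = number of rows of A
    scale = alpha / r   # LoRA's scaling factor
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d=2, k=3, rank r=1 — only d*r + r*k = 5 numbers are
# trained instead of d*k = 6; for billion-parameter models the savings
# are what makes LoRA fit on the Colab GPUs shown in the video.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
A = [[1.0, 2.0, 3.0]]   # r x k
B = [[0.5], [0.25]]     # d x r
merged = lora_merge(W, A, B, alpha=1.0)
# → [[1.5, 1.0, 1.5], [0.25, 1.5, 0.75]]
```

Because the base weights W stay frozen, only A and B need gradients and optimizer state, which is why the training job at 5:08 fits in far less memory than full fine-tuning.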

Based on:
Efficient Large Language Model training with LoRA and Hugging Face
https://www.philschmid.de/fine-tune-flan-t5-peft

Notebook:
https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/peft-flan-t5-int8-summarization.ipynb

Other References:
PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware: https://huggingface.co/blog/peft
A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes: https://huggingface.co/blog/hf-bitsandbytes-integration
PEFT Trained Model: https://huggingface.co/philschmid/flan-t5-xxl-samsum-peft
━━━━━━━━━━━━━━━━━━━━━━━━━
★ Rajistics Social Media »
● Link Tree: https://linktr.ee/rajistics
● LinkedIn: https://www.linkedin.com/in/rajistics/
━━━━━━━━━━━━━━━━━━━━━━━━━
