Fine-tuning large models on local hardware

Track: PyData: LLMs
Type: Talk
Level: Intermediate
Room: Forum Hall
Start: 12:30 on 11 July 2024
Duration: 30 minutes

Abstract

Fine-tuning large neural networks such as Large Language Models (LLMs) has traditionally been prohibitively expensive due to high hardware requirements. However, Parameter-Efficient Fine-Tuning (PEFT) and quantization enable the training of large models on modest hardware. Thanks to the PEFT library and the Hugging Face ecosystem, these techniques are now accessible to a broad audience.
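To make the hardware constraint concrete, here is a back-of-the-envelope memory calculation for just the model weights (a 7B-parameter model is an illustrative assumption, not a figure from the talk):

```python
# Memory needed just to hold the weights of a 7B-parameter model
# (illustrative size; optimizer states and gradients add several
# times more on top of this during full fine-tuning).
params = 7_000_000_000

bytes_fp16 = params * 2    # 16-bit floats: 2 bytes per parameter
bytes_4bit = params // 2   # 4-bit quantized: half a byte per parameter

print(bytes_fp16 / 1e9)    # 14.0  (GB)
print(bytes_4bit / 1e9)    # 3.5   (GB)
```

Quantizing the frozen base weights to 4 bits is what brings a model of this size within reach of a single consumer GPU.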

Expect to learn:

  • what the challenges of fine-tuning large models are
  • what solutions have been proposed and how they work
  • practical examples of applying the PEFT library
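The savings behind LoRA, one of the PEFT methods, come from training two small low-rank factors instead of a full weight matrix. A minimal sketch of the parameter count, using hypothetical sizes roughly matching one attention projection of a multi-billion-parameter model:

```python
# LoRA replaces the trainable update of a d_out x d_in weight matrix W
# with two low-rank factors: B (d_out x r) and A (r x d_in), so that
# W + B @ A is used in the forward pass. Only A and B are trained.
d_in, d_out = 4096, 4096   # hypothetical projection size
r = 8                      # LoRA rank (assumption)

full_params = d_in * d_out            # full fine-tuning: every weight trainable
lora_params = r * (d_in + d_out)      # LoRA: only the two factors trainable

print(full_params)                    # 16777216
print(lora_params)                    # 65536
print(lora_params / full_params)      # 0.00390625  -> about 0.4% of the weights
```

In the PEFT library, the same idea is applied to an existing model via `LoraConfig` and `get_peft_model`, which freeze the base weights and inject the trainable low-rank adapters.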