
Workshop Duration:

4 Days, Intensive Workshop

Instructors: Krish Naik and Sunny Savitha

Dates: 28th and 29th September, 5th and 6th October

Time: 10 AM to 2 PM IST

Overview:

This 4-day hands-on workshop is designed to provide an in-depth understanding of fine-tuning large language models (LLMs), with practical implementations and real-world examples. It is tailored for data scientists, ML engineers, and AI enthusiasts looking to enhance their fine-tuning skills with models like BERT, T5, LLAMA, GPT-4, and more. The workshop will cover supervised fine-tuning, instruction fine-tuning, reinforcement learning from human feedback (RLHF), and cost-effective methods such as LoRA and QLoRA. Attendees will gain practical skills in working with custom data and open-source models, and in integrating APIs to fine-tune commercial LLMs.

Day 1: Introduction to Transfer Learning & Fine-Tuning

Overview of Transfer Learning vs Fine-Tuning: Understand the fundamental differences, their use cases, and when to use each approach.
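
To make the distinction concrete, here is a minimal sketch using Hugging Face Transformers, with bert-base-uncased as an illustrative model: transfer learning in the narrow sense freezes the pretrained encoder and trains only a new task head, while fine-tuning also updates the pretrained weights.

```python
# Illustrative sketch: freeze the pretrained BERT encoder so only the new
# classification head is trainable (transfer-learning style). Removing the
# freeze loop gives full fine-tuning, where all weights are updated.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

for param in model.bert.parameters():   # freeze the encoder backbone
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```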

Complete End-to-End Fine-Tuning Roadmap: A detailed roadmap covering supervised fine-tuning, instruction fine-tuning, RLHF, and optimization methods like DPO and PPO.

Why Fine-Tuning Matters for Specific Tasks: Explore how fine-tuning is critical for domain-specific tasks and how to adapt models for high-impact applications.

Cost Analysis for Fine-Tuning: Learn the financial implications of fine-tuning large models and strategies to optimize costs.
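
As a taste of the kind of estimate covered in this session, here is a back-of-envelope sketch; every number in it (corpus size, throughput, GPU price) is an assumed placeholder, not a quoted figure.

```python
# Back-of-envelope cost sketch (all numbers are illustrative assumptions):
# estimate GPU-hours from token throughput, then multiply by an assumed
# cloud rate for an A100-class GPU.
train_tokens = 50_000_000      # assumed size of the fine-tuning corpus, in tokens
epochs = 3
tokens_per_second = 3_000      # assumed throughput for a 7B model with LoRA on one GPU
gpu_hourly_rate_usd = 2.50     # assumed on-demand price for a single GPU

gpu_hours = (train_tokens * epochs) / tokens_per_second / 3600
print(f"~{gpu_hours:.1f} GPU-hours, ~${gpu_hours * gpu_hourly_rate_usd:.0f} at the assumed rate")
```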

Fine-Tuning BERT and T5 Models on Custom Data: A hands-on session on fine-tuning popular models like BERT and T5 on custom datasets.
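
The hands-on session will walk through this in full; as a preview, a minimal sketch of fine-tuning BERT for classification with the Hugging Face Trainer looks roughly like the following. The IMDB dataset stands in for your custom data, and the same pattern applies to T5 with a sequence-to-sequence model and trainer.

```python
# Minimal BERT fine-tuning sketch with the Hugging Face Trainer.
# The IMDB dataset is a stand-in for a custom dataset with "text" and "label" columns.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert-finetuned",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small subset for a quick run
)
trainer.train()
```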

Day 2: Supervised Fine-Tuning with Open-Source Models

Fine-Tuning Open-Source Models (LLAMA, Mistral, Zephyr, etc.): Learn how to fine-tune open-source models for various use cases.

Hands-on: Supervised Fine-Tuning with PEFT (LoRA and QLoRA): Practical implementation of Parameter-Efficient Fine-Tuning (PEFT) techniques such as LoRA and QLoRA to cut memory and compute requirements while preserving strong task performance.
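
A minimal sketch of the setup, assuming the Hugging Face peft and bitsandbytes libraries and an illustrative Mistral checkpoint (any causal LM will do): the base weights are loaded in 4-bit (the QLoRA recipe) and small low-rank adapters are attached to the attention projections, so only a fraction of the parameters are trained.

```python
# LoRA/QLoRA setup sketch: 4-bit base model + low-rank adapters via peft.
# Requires a GPU with bitsandbytes installed; the model name is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder; any causal LM works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # QLoRA: quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                    # rank of the low-rank update matrices
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],     # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # only the LoRA adapters are trainable
```

Training then proceeds as usual, for example with the transformers Trainer or trl's SFTTrainer, and only the small adapter weights need to be saved and shared.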

Day 3: Instruction Fine-Tuning for OpenAI Models

Instruction Fine-Tuning for OpenAI Models: Deep dive into fine-tuning OpenAI’s GPT-3.5 Turbo and GPT-4 models through OpenAI’s fine-tuning API.

Hands-on: Fine-Tuning GPT-3.5 Turbo and GPT-4: A practical session where you will fine-tune OpenAI models to optimize performance for specific tasks or domains using OpenAI’s API.
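
As a rough sketch of the workflow (using the openai Python SDK, v1+ style; the file name and model name are placeholders): training examples are uploaded as a JSONL file of chat messages, a fine-tuning job is created on a base model, and the job is polled until the fine-tuned model ID is available.

```python
# Sketch of the OpenAI fine-tuning workflow (openai Python SDK >= 1.0).
# Each line of train.jsonl is a chat-formatted example, e.g.:
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training data
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# 2. Start a fine-tuning job on a base model (model name is an example)
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")

# 3. Check status; once finished, the job's fine_tuned_model field holds the new model ID
print(client.fine_tuning.jobs.retrieve(job.id).status)
```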

Day 4: Reinforcement Learning from Human Feedback (RLHF) and Optimization Methods

Understanding RLHF: Explore Reinforcement Learning from Human Feedback (RLHF) and its applications in improving the quality of language models.

DPO and PPO in the RLHF Pipeline: Learn how Proximal Policy Optimization (PPO) is used to optimize the policy in classic RLHF, and how Direct Preference Optimization (DPO) simplifies the pipeline by learning directly from preference pairs, without training a separate reward model.
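
As a preview of how compact the DPO recipe is in practice, here is a minimal sketch using trl's DPOTrainer. The tiny in-memory preference dataset and the gpt2 checkpoint are purely illustrative, and the trainer's argument names have shifted across trl releases (e.g. tokenizer vs. processing_class), so match them to your installed version.

```python
# Minimal DPO sketch with trl. Each training row pairs a prompt with a
# preferred ("chosen") and a dispreferred ("rejected") answer.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "gpt2"  # small model purely for illustration
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

train_dataset = Dataset.from_dict({
    "prompt":   ["Explain LoRA in one sentence."],
    "chosen":   ["LoRA fine-tunes a model by learning small low-rank adapter matrices."],
    "rejected": ["LoRA is a kind of relational database."],
})

args = DPOConfig(output_dir="dpo-demo", beta=0.1, per_device_train_batch_size=1, max_steps=1)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older trl versions
)
trainer.train()
```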

Hands-on: RLHF Implementation: Implement RLHF on custom data to see how feedback-driven reinforcement can be used to optimize LLMs.
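
As one small piece of that pipeline, the sketch below scores a candidate response with an off-the-shelf reward model; the model name is just a public example. In the hands-on session the reward signal would come from a model trained on your own preference data, and those scores then drive the PPO update of the policy model.

```python
# Sketch of the "feedback" half of RLHF: score a (prompt, response) pair with a
# sequence-classification reward model. Higher logits mean a more preferred answer.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_model_id = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example public reward model
tokenizer = AutoTokenizer.from_pretrained(reward_model_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_model_id)

prompt = "Explain LoRA in one sentence."
response = "LoRA fine-tunes a model by learning small low-rank adapter matrices."

inputs = tokenizer(prompt, response, return_tensors="pt")
with torch.no_grad():
    reward = reward_model(**inputs).logits[0].item()

print(f"reward score: {reward:.3f}")
```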