Aexaware Infotech – Smart IT Solutions That Scale

Free LLM and Gen AI courses to take in 2025

Looking for free LLM and Gen AI courses to dive into? Here are 10 online courses that focus on the practical side of building with LLMs. Our picks prioritize hands-on learning and accessibility: each course is either free or offers open access to its materials. We hope this list helps you find your next learning adventure.

Think we’ve missed a great one? Let us know! Our Discord is the best place to share your suggestions.

Disclaimer: All course information is sourced from their official websites. We simply put it together.

LLM evaluation

LLM evaluations for AI builders: applied course

You can go through all the materials at your own pace.
Author: Evidently AI
Duration: 3 hours
Price: free
Course certificate: yes (for cohort participants)
The course syllabus is available here.

LLM evaluations for AI builders is a new applied LLM course created by the team that develops Evidently, an open-source evaluation and observability framework for ML and LLM systems.

This LLM course covers core LLM evaluation workflows, such as building test datasets, comparing prompts and models, and tracing LLM outputs. It is practice-oriented and consists of 10 end-to-end code tutorials, from custom LLM judges to RAG evaluations to adversarial testing.

The course program is designed for AI/ML engineers and anyone building real-world LLM apps. Basic Python skills are required.
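To give a flavor of the custom-LLM-judge workflow this course covers, here is a minimal sketch in Python. The `judge` function below is a hypothetical rule-based stand-in for a real LLM call: in practice you would prompt a grading model with evaluation criteria and parse its verdict. This is an illustration of the general pattern, not Evidently's actual API.

```python
# Minimal sketch of an LLM-as-judge evaluation loop over a test dataset.

def judge(question: str, answer: str) -> str:
    """Toy judge: pass non-empty answers that mention a word from the question.
    A real judge would be an LLM prompted with grading criteria."""
    if not answer.strip():
        return "fail"
    words = [w.strip("?.,").lower() for w in question.split()]
    return "pass" if any(w in answer.lower() for w in words) else "fail"

# A tiny evaluation dataset of question/answer pairs.
test_dataset = [
    {"question": "What is RAG?", "answer": "RAG combines retrieval with generation."},
    {"question": "Define an LLM judge.", "answer": ""},
]

# Score each row and aggregate into a pass rate.
results = [judge(row["question"], row["answer"]) for row in test_dataset]
pass_rate = results.count("pass") / len(results)
print(f"pass rate: {pass_rate:.0%}")  # → pass rate: 50%
```

Swapping the toy rule for an actual model call turns this loop into the judge-based evaluation pipeline the course builds out.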

Intro to LLM evaluations for AI product teams

You can go through all the materials at your own pace.
Author: Evidently AI
Duration: 1 hour
Price: free
Course certificate: yes (for cohort participants)
The course syllabus is available here.

LLM evaluations for AI product teams from Evidently covers everything you need to get started with LLM evaluations in under an hour.

This course is a gentle introduction to LLM evaluation for AI leaders. It explores the fundamentals of LLM evaluations and observability, from evaluation methods and benchmarks to LLM guardrails and LLM judges.

You will learn to build LLM evaluation datasets, use synthetic data to run LLM evals, create custom LLM evaluators, and check your app for AI risks and vulnerabilities. The course also covers practical examples of evaluating complex LLM systems like RAG pipelines, Q&A systems, and AI agents.

The course is designed for AI product managers and anyone interested in grasping the core concepts of LLM quality and observability. No coding skills are required to take the course.
