Available Courses

Course Description:

This course provides a comprehensive introduction to Large Language Models (LLMs), covering foundational concepts, key architectures, and real-world applications. Students will explore the history and evolution of LLMs, from early language models to state-of-the-art architectures such as GPT, BERT, and T5. Through hands-on exercises and projects, students will learn how LLMs generate text and perform tasks such as translation, summarization, and question answering, and will examine ethical considerations and responsible AI practices.

Key Topics:

  1. Understanding LLMs – Explore the history, purpose, and advancements of large language models, including key architectures and innovations.
  2. Core Concepts and Architecture – Dive into model architectures (e.g., transformers, attention mechanisms), training techniques, and fine-tuning methods.
  3. LLM Applications – Discover the diverse applications of LLMs, such as conversational AI, content generation, sentiment analysis, and more.
  4. Prompt Engineering – Learn techniques to effectively interact with LLMs to get desired outputs and optimize model behavior.
  5. Ethics and Responsible AI – Discuss ethical considerations, risks, and approaches to ensure responsible use of language models.
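To make the Prompt Engineering topic concrete, here is a minimal, hypothetical sketch of one common technique covered in such courses: assembling a few-shot prompt from worked examples. The task, labels, and template are illustrative assumptions, not material from the course itself.

```python
# Hypothetical few-shot prompt builder for a toy sentiment task.
# The template format is an illustrative assumption, not a course-prescribed one.

def build_few_shot_prompt(task, examples, query):
    """Assemble a prompt: task instruction, worked examples, then the new query."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # End with an unanswered query so the model completes the final label.
    lines.append(f"Text: {query}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as Positive or Negative.",
    [("I loved this course!", "Positive"), ("The lectures were dull.", "Negative")],
    "The projects were genuinely fun.",
)
print(prompt)
```

The same structure works for translation or summarization: swap the instruction and the example pairs, and the model infers the task pattern from the demonstrations.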

Learning Outcomes:

By the end of this course, students will be able to:

  • Explain how LLMs function and differentiate among popular architectures.
  • Utilize prompt engineering techniques to interact with LLMs for various tasks.
  • Apply LLMs to practical scenarios, such as summarization and Q&A.
  • Analyze ethical implications and adopt best practices for using LLMs responsibly.

Target Audience:

This course is ideal for data scientists, developers, and AI enthusiasts who are interested in gaining a solid foundation in LLMs and their applications, as well as individuals considering integrating LLM capabilities into their projects or workflows. Basic knowledge of machine learning and Python is recommended.

Course Description:

This introductory course offers a practical guide to Retrieval-Augmented Generation (RAG), an advanced AI technique that enhances Large Language Models by integrating them with retrieval systems. Students will learn the fundamental concepts behind RAG, including how it enables LLMs to access external data sources to generate more accurate, contextually relevant, and up-to-date responses. The course covers the key components of RAG, such as retrieval mechanisms, query processing, and the interaction between LLMs and search databases.
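The retrieval mechanism mentioned above can be sketched in a few lines. The scorer below is a deliberately simplified assumption — it ranks documents by raw keyword overlap, whereas production RAG systems typically use vector embeddings and a similarity search — but it illustrates the retrieve-then-rank idea.

```python
# Toy retrieval step: rank documents by keyword overlap with the query.
# Keyword overlap is a simplifying assumption; real systems use embeddings.

def score(query, doc):
    """Count query terms that also appear in the document (case-insensitive)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms)

def retrieve(query, docs, k=1):
    """Return the top-k documents ranked by overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "RAG combines retrieval with generation.",
    "Transformers use attention mechanisms.",
    "Vector databases store embeddings.",
]
top = retrieve("how does retrieval help generation", docs)
print(top[0])  # → "RAG combines retrieval with generation."
```

Swapping `score` for an embedding-based similarity function is the usual first optimization step, since exact keyword matching misses paraphrases.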

Through hands-on exercises, students will build simple RAG models, learn how to optimize retrieval strategies, and understand the benefits and limitations of using RAG in various applications, from dynamic chatbots to knowledge-driven content generation. This course is ideal for AI enthusiasts, data scientists, and developers interested in enhancing LLM performance by incorporating real-time information retrieval capabilities.

Key Topics:

  1. Introduction to RAG – Overview of RAG’s purpose and role in improving LLM accuracy.
  2. Core Components – Understanding the architecture of retrieval and generation modules in RAG.
  3. Building a Simple RAG Model – Step-by-step guide to constructing a basic RAG system.
  4. Optimizing Retrieval – Techniques for refining queries and managing retrieval quality.
  5. Use Cases and Applications – Real-world applications of RAG in customer support, knowledge management, and more.
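The "Building a Simple RAG Model" step above can be sketched end to end. This is a hypothetical outline, not the course's actual exercise: the `generate` function is a stub that merely assembles the augmented prompt an LLM would receive — in practice an LLM call would take its place.

```python
# Hypothetical minimal RAG pipeline: retrieve context, then build the
# augmented prompt. `generate` is a stand-in for a real LLM call.

def retrieve(query, docs, k=2):
    """Rank documents by shared lowercase terms with the query; return top-k."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def generate(question, context):
    """Stub for an LLM call: format the retrieved context with the question."""
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def rag_answer(question, docs):
    context = "\n".join(retrieve(question, docs))
    return generate(question, context)

docs = [
    "RAG grounds model outputs in retrieved documents.",
    "Attention weighs token interactions in a transformer.",
]
answer = rag_answer("How does RAG ground outputs?", docs)
print(answer)
```

The separation into `retrieve` and `generate` mirrors the two modules named in the Core Components topic, so each can be optimized independently.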

Learning Outcomes:

By the end of this course, students will be able to:

  • Explain the RAG framework and its advantages over standalone LLMs.
  • Construct and experiment with basic RAG models.
  • Utilize retrieval techniques to improve response accuracy.
  • Identify practical applications for RAG across different industries.

Prerequisites: A basic understanding of machine learning and familiarity with language models are recommended.