

Bonus Resource: Recorded Interactions from the Archives


As part of this bootcamp, we will be announcing our live sessions soon. In the meantime, we have curated a selection of valuable resources from Pathway's archives to broaden your learning horizons.

1. Exploring Frontiers of LLMs | Recorded Interaction with Jan Chorowski at IIT Bombay

  • Session Overview:

    • Participants: Anup Surendran (Growth Head at Pathway) and Jan Chorowski (CTO at Pathway).

    • Jan's Background: A leading figure in AI with a PhD in neural networks. Jan has co-authored papers with luminaries such as Yoshua Bengio and Geoffrey Hinton, has worked with Microsoft Research, Google Brain, and Mila, and has over 10,000 Google Scholar citations.

  • Session Highlights:

    • Evolution of LLMs: Insights into the development and practical applications of Large Language Models.

    • Operational Challenges: Discussion on the challenges faced by LLMs in real-world scenarios.

    • Learning to Forget: Exploring this crucial concept in LLM development.

    • Real-Time Relevance: Delving into how LLMs adapt and remain pertinent in dynamic environments.

    • Interactive Q&A: Audience-driven discussion on various aspects of LLMs, including document versioning.

2. Understanding Real-time Use Cases | Recorded Interaction with Adrian Kosowski on ML & AI Podcast

  • Session Overview:

    • Adrian's Background: A distinguished figure in competitive programming. He co-founded Spoj.com, a platform used by millions of developers. Adrian earned his PhD at 20 and has since accrued over 15 years of research experience across disciplines, contributing to over 100 publications.

  • Session Highlights:

    • Reactive Data Processing: Gain a deep understanding of real-time data processing nuances.

    • Stream vs. Batch Processing: Explore the differences and practical applications.

    • Transformers in Data Engineering: The role of transformers in managing and streaming data.

    • ML Innovations for Startups: Discover emerging machine learning tools and approaches beneficial for startups.

Don't miss these Fireside Chats, which offer unique perspectives on the fast-evolving domains of Large Language Models and real-time data processing.

Participants: Jon Krohn (Chief Data Scientist at Nebula) and Adrian Kosowski (CPO at Pathway).

Stay tuned for more updates as we gear up to announce the live interactions for this exciting bootcamp.
