# LangChain-Chat-with-Your-Data

Join our new short course, LangChain: Chat With Your Data! The course delves into two main topics: (1) Retrieval Augmented Generation (RAG), a common LLM application that retrieves contextual documents from an external dataset, and (2) a guide to building a chatbot that responds to queries based on the content of your documents, rather than the information it learned during training.

You’ll learn about:

- Document Loading: Learn the fundamentals of data loading and discover over 80 unique loaders LangChain provides to access diverse data sources, including audio and video (a minimal sketch of each of these steps follows the list).
- Document Splitting: Discover the best practices and considerations for splitting data.
- Vector Stores and Embeddings: Dive into the concept of embeddings and explore vector store integrations within LangChain.
- Retrieval: Grasp advanced techniques for accessing and indexing data in the vector store, enabling you to retrieve the most relevant information beyond semantic queries.
- Question Answering: Build a one-pass question-answering solution.
- Chat: Learn how to track and select pertinent information from conversations and data sources as you build your own chatbot using LangChain.
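
As a concrete starting point, here is a minimal sketch of the loading and splitting steps. It assumes the pre-1.0 `langchain` Python package used in the course notebooks, `pypdf` installed, and a hypothetical PDF at `docs/example.pdf`:

```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load a PDF into a list of Document objects (one per page).
loader = PyPDFLoader("docs/example.pdf")  # hypothetical path
docs = loader.load()

# Split the pages into overlapping chunks so each chunk fits comfortably in
# the model's context window while retaining some surrounding context.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
splits = splitter.split_documents(docs)
```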
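
Continuing the sketch, the chunks can be embedded and indexed in a Chroma vector store. This assumes `chromadb` is installed and an `OPENAI_API_KEY` is set in the environment; the persist directory is an arbitrary choice:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Embed each chunk and store the vectors locally in Chroma.
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(
    documents=splits,
    embedding=embedding,
    persist_directory="docs/chroma",  # arbitrary local directory
)
```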
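
With the index in place, retrieval can go beyond plain semantic search; for example, maximum marginal relevance (MMR) re-ranks results for diversity. The query string below is made up for illustration:

```python
question = "What is said about regression?"  # hypothetical query

# Plain semantic similarity: the k chunks closest to the query embedding.
similar_docs = vectordb.similarity_search(question, k=3)

# MMR: fetch a larger candidate set, then pick k results that are both
# relevant to the query and diverse from one another.
diverse_docs = vectordb.max_marginal_relevance_search(question, k=3, fetch_k=20)
```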
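
For one-pass question answering, a `RetrievalQA` chain stuffs the retrieved chunks into a single prompt for the LLM. The model name and temperature below are just example settings:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # example settings
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectordb.as_retriever())

result = qa_chain({"query": question})
print(result["result"])  # answer grounded in the retrieved chunks
```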
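
Finally, adding memory turns this into a chatbot: a `ConversationalRetrievalChain` keeps the chat history so follow-up questions can refer back to earlier turns. The questions here are again only illustrative:

```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Buffer memory stores the running chat history under the key the chain expects.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectordb.as_retriever(),
    memory=memory,
)

print(qa({"question": "Is probability a class prerequisite?"})["answer"])
print(qa({"question": "Why is it needed?"})["answer"])  # follow-up uses the history
```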

Start building practical applications that allow you to interact with data using LangChain and LLMs.