
QA with LLM and RAG (Retrieval Augmented Generation)

This project is a Question Answering application built with Large Language Models (LLMs) on Amazon Bedrock and Amazon MemoryDB for Redis. An application using the RAG (Retrieval Augmented Generation) approach retrieves the information most relevant to the user's request from the enterprise knowledge base or content, bundles it as context along with the user's request into a prompt, and then sends that prompt to the LLM to get a generative AI response.
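To make the flow concrete, here is a minimal sketch of that request path in Python, assuming the retrieval step has already produced a list of relevant passages and that an Anthropic Claude model is enabled in Amazon Bedrock (the model ID and prompt template are illustrative assumptions, not this project's exact code):

    import json

    import boto3

    # Bedrock runtime client (assumes AWS credentials and region are configured)
    bedrock = boto3.client("bedrock-runtime")

    def answer_with_context(question: str, passages: list[str]) -> str:
        """Bundle retrieved passages and the user's question into one prompt, then call the LLM."""
        context = "\n\n".join(passages)
        prompt = (
            f"\n\nHuman: Use the following context to answer the question.\n"
            f"Context:\n{context}\n\nQuestion: {question}\n\nAssistant:"
        )
        response = bedrock.invoke_model(
            modelId="anthropic.claude-v2",  # assumed model; any Bedrock text model works
            body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
        )
        return json.loads(response["body"].read())["completion"]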

LLMs limit the number of tokens in the input prompt, so choosing the right passages from among the thousands or millions of documents in the enterprise has a direct impact on the LLM's accuracy.

In this project, Amazon MemoryDB for Redis is used as the vector store for the knowledge base.
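Retrieval is then a k-nearest-neighbor vector search over the embeddings stored in MemoryDB. A minimal sketch with redis-py follows, assuming vector search is enabled on the cluster; the endpoint, index name, and field names are illustrative assumptions:

    import numpy as np
    import redis
    from redis.commands.search.query import Query

    # TLS connection to the MemoryDB cluster endpoint (hostname is a placeholder)
    r = redis.Redis(host="my-cluster.memorydb.example.amazonaws.com", port=6379, ssl=True)

    def top_k_passages(query_embedding: list[float], k: int = 4) -> list[str]:
        """Return the k stored passages whose embeddings are closest to the query embedding."""
        query = (
            Query(f"*=>[KNN {k} @embedding $vec AS score]")
            .sort_by("score")
            .return_fields("content", "score")
            .dialect(2)
        )
        result = r.ft("idx:docs").search(
            query,
            query_params={"vec": np.array(query_embedding, dtype=np.float32).tobytes()},
        )
        return [doc.content for doc in result.docs]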

The overall architecture is like this:

(Architecture diagram: rag_with_bedrock_memorydb_arch)

Overall Workflow

  1. Deploy the CDK stacks (for more information, see here):
    • An Amazon MemoryDB for Redis cluster to store embeddings.
    • A SageMaker Studio environment for the RAG application and for data ingestion into Amazon MemoryDB for Redis.
  2. Open SageMaker Studio and then open a new terminal.
  3. Run the following command in the terminal to clone the code repository for this project:
    git clone https://github.com/aws-samples/rag-with-amazon-bedrock-and-memorydb.git

  4. Open the data_ingestion_to_memorydb.ipynb notebook and run it (for more information, see here; a minimal sketch of the ingestion step appears after this list).
  5. Run the Streamlit application (for more information, see here).
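
For orientation, here is a minimal sketch of what the ingestion step amounts to, assuming the Amazon Titan text embeddings model on Bedrock and a 1536-dimension HNSW index in MemoryDB (the index name, key prefix, and field names are illustrative assumptions, not this project's exact code):

    import json

    import boto3
    import numpy as np
    import redis
    from redis.commands.search.field import TextField, VectorField
    from redis.commands.search.indexDefinition import IndexDefinition, IndexType

    bedrock = boto3.client("bedrock-runtime")
    r = redis.Redis(host="my-cluster.memorydb.example.amazonaws.com", port=6379, ssl=True)

    def embed(text: str) -> list[float]:
        """Embed a passage with Amazon Titan (returns a 1536-dimensional vector)."""
        response = bedrock.invoke_model(
            modelId="amazon.titan-embed-text-v1",
            body=json.dumps({"inputText": text}),
        )
        return json.loads(response["body"].read())["embedding"]

    # Create the vector index (raises an error if it already exists)
    r.ft("idx:docs").create_index(
        fields=[
            TextField("content"),
            VectorField("embedding", "HNSW",
                        {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}),
        ],
        definition=IndexDefinition(prefix=["doc:"], index_type=IndexType.HASH),
    )

    # Store each passage as a hash: the raw text plus its embedding as packed float32 bytes
    for i, passage in enumerate(["passage one ...", "passage two ..."]):
        r.hset(f"doc:{i}", mapping={
            "content": passage,
            "embedding": np.array(embed(passage), dtype=np.float32).tobytes(),
        })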

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
