Today a dominant use case for deploying Large Language Models (LLMs) within the enterprise is as an enterprise search assistant. LLMs without any underlying context are prone to hallucination. Retrieval Augmented Generation (RAG) is an architecture in which the LLM is paired with a vector database. The vector database contains a specialised index of enterprise documents which provides 'context' to the LLM, allowing more accurate results AND allowing employees to follow up with the source documents. This project is a reference pattern for deploying a RAG search assistant built on top of Red Hat OpenShift AI.
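The retrieve-then-generate flow can be sketched in a few lines. This is a minimal illustration only: the document set, the bag-of-words "embedding", and the function names are all stand-ins for the real embedding model and vector database a deployment would use.

```python
import math
import re
from collections import Counter

# Toy stand-in for the enterprise document index held in the vector database.
DOCS = {
    "vpn-guide.md": "Connect to the corporate VPN using the client and your SSO token.",
    "expense-policy.md": "Expenses above 50 USD require manager approval before filing.",
}

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query, as a vector database would.
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(DOCS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved documents become the LLM's grounding context; keeping the
    # source names lets employees follow up with the original documents.
    sources = retrieve(query)
    context = "\n".join(f"[{s}] {DOCS[s]}" for s in sources)
    return f"Context:\n{context}\n\nQuestion: {query}"

print(retrieve("How do I connect to the VPN?"))  # ['vpn-guide.md']
```

The prompt returned by `build_prompt` is what would be sent to the LLM; because the context carries document names, the assistant's answer can cite its sources.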
This project uses validated patterns so that it can be deployed consistently across multiple platforms. The project itself is a fork of the core multicloud-gitops pattern.
The current pattern for Red Hat OpenShift AI makes the following assumptions:
- Single cluster deployment
Validated patterns currently include significant boilerplate, which will be stripped out over time.
The deployment method for validated patterns is described here.
- An OpenShift cluster
- Cluster admin rights, logged in with `oc` on the machine where the repository is cloned
- Connectivity through to GitHub (or an equivalent git platform)
- GPUs on at least one node within the cluster