Demo: on a single node using Docker Engine, running in Intel Tiber Cloud.
Demo: on Kubernetes, running anywhere, using the OPEA GenAI Microservice Connector (GMC).
The demo shows the response to a prompt that asks for current information, then ingests up-to-date data so the prompt can be answered correctly, and finally switches the model.
GMC supports re-use of unchanged services when a GenAI pipeline definition is changed or updated, which is particularly valuable during the development phase.
GMC also supports sharing common services in the same namespace between distinct GenAI applications/pipelines,
making deployments more resource-efficient and faster to launch.
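The demo flow above could be reproduced against a deployed ChatQnA pipeline with a few HTTP calls. The sketch below is illustrative only: the host, ports (8888 for the megaservice, 6007 for dataprep), routes (`/v1/chatqna`, `/v1/dataprep`), payload fields, and file name are assumptions based on typical OPEA ChatQnA deployments and may differ in your setup.

```python
"""Illustrative sketch of the demo flow: ask, ingest fresh data, ask again.

All hostnames, ports, routes, and payload fields below are assumptions based
on typical OPEA ChatQnA deployments; adjust them to your environment.
"""
import requests

HOST = "http://localhost"  # assumed address of the deployed pipeline

QUESTION = "What is the latest OPEA release?"  # a prompt needing current data

# 1. Ask before ingesting fresh data; the model may answer from stale
#    training data or say it does not know.
resp = requests.post(
    f"{HOST}:8888/v1/chatqna",          # assumed ChatQnA megaservice route
    json={"messages": QUESTION},
    timeout=300,
)
print(resp.text)

# 2. Ingest an up-to-date document so the RAG pipeline can ground its answer.
with open("latest_release_notes.pdf", "rb") as doc:
    requests.post(
        f"{HOST}:6007/v1/dataprep",     # assumed dataprep microservice route
        files={"files": doc},
        timeout=300,
    )

# 3. Ask again; the retriever can now surface the newly ingested content.
resp = requests.post(
    f"{HOST}:8888/v1/chatqna",
    json={"messages": QUESTION},
    timeout=300,
)
print(resp.text)

# Switching the model (the last step of the demo) is done by updating the
# pipeline definition (e.g. the GMC custom resource), not via this API.
```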
This page highlights different demos that can be shown for OPEA:
- All samples on Intel Developer Cloud
- ChatQnA sample on AI PC
- All samples on Red Hat OpenShift AI (ChatQnA available now, CodeGen/SearchQnA in September, more to be scheduled)
- ChatQnA on Kubernetes
  - Intel Kubernetes Service - Xeon + Gaudi
  - Amazon EKS