# Next Gen UI MCP Server Container
This module is part of the Next Gen UI Agent project.
It provides the Next Gen UI Agent MCP Server container image.
## Provides

- Container image to easily run the Next Gen UI Agent MCP server
## Installation
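Pull the image from the quay.io registry:

```sh
podman pull quay.io/next-gen-ui/mcp
```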
## Usage

### Locally

```sh
podman run --rm -it -p 5100:5100 --env MCP_PORT="5100" \
  --env NGUI_MODEL="llama3.2" --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434" --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/mcp
```
### OpenShift
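As a minimal sketch, the image can be deployed with `oc new-app` (the app name, model, and API key below are illustrative, not prescribed by this repo):

```sh
# deploy the image and expose the MCP port (5000 is the image default, see Configuration)
oc new-app quay.io/next-gen-ui/mcp --name=ngui-mcp \
  --env=NGUI_PROVIDER=openai \
  --env=NGUI_MODEL=gpt-4o \
  --env=NGUI_PROVIDER_API_KEY=your-openai-api-key
oc expose service/ngui-mcp --port=5000
```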
Configuration
The MCP server container can be configured via environment variables. For the available variables and their meaning, see the MCP Server Guide.

Dependencies necessary for the `openai` inference provider are installed in the image.

The `json` and `rhds` renderers are installed. Create a child image to install additional ones, as sketched below.
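For example, a child image could look like this (the renderer package name is a placeholder, and this assumes `pip` is available in the image's Python environment):

```sh
# build a child image that adds an extra renderer package
cat > Containerfile <<'EOF'
FROM quay.io/next-gen-ui/mcp
# placeholder: substitute the actual renderer package to install
RUN pip install <additional-renderer-package>
EOF
podman build -t my-ngui-mcp .
```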
Note that the image changes the default values of some configurations:
| Environment Variable | Default Value | Description |
|---|---|---|
| `MCP_TRANSPORT` | `streamable-http` | Transport protocol (`stdio`, `sse`, `streamable-http`) |
| `MCP_HOST` | `0.0.0.0` | Host to bind to (for HTTP transports) |
| `MCP_PORT` | `5000` | Port to bind to (for HTTP transports) |
| `NGUI_PROVIDER` | `openai` | Inference provider (`mcp`, `openai`, `anthropic-vertexai`) |
| `NGUI_MODEL` | `gpt-4o` | Model name |
## Usage Examples

### Basic Usage with Ollama (Local LLM)

```sh
podman run --rm -it -p 5000:5000 \
  --env MCP_PORT="5000" \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="llama3.2" \
  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434/v1" \
  --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/mcp
```
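This assumes Ollama is already serving on the host with the model pulled; if not, something along these lines gets it ready:

```sh
# on the host: fetch the model and start Ollama
# (skip `ollama serve` if Ollama already runs as a service)
ollama pull llama3.2
ollama serve
```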
### OpenAI API Configuration

```sh
podman run --rm -it -p 5000:5000 \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="gpt-4o" \
  --env NGUI_PROVIDER_API_KEY="your-openai-api-key" \
  quay.io/next-gen-ui/mcp
```
### Remote LlamaStack Server

```sh
podman run --rm -it -p 5000:5000 \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="llama3.2-3b" \
  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:5001/v1" \
  quay.io/next-gen-ui/mcp
```
### Configuration Using Environment File

Create a `.env` file:

```
# .env file
MCP_PORT=5000
MCP_HOST=0.0.0.0
MCP_TRANSPORT=streamable-http
MCP_STRUCTURED_OUTPUT_ENABLED="false"
NGUI_COMPONENT_SYSTEM=json
NGUI_PROVIDER=openai
NGUI_MODEL=gpt-4o
NGUI_PROVIDER_API_KEY=your-api-key-here
```
Run with the environment file:
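```sh
# pass the .env file to the container via podman's --env-file flag
podman run --rm -it -p 5000:5000 --env-file .env quay.io/next-gen-ui/mcp
```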
## Network Configuration

For local development connecting to services running on the host machine:

- Use `host.containers.internal` to access host services (works with Podman and Docker Desktop)
- For Linux with Podman, you may need to use `host.docker.internal` or the host's IP address
- Ensure the target services (like Ollama) are accessible from containers
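A quick way to sanity-check reachability, assuming Ollama's OpenAI-compatible endpoint on port 11434 and using the public `curlimages/curl` utility image for the in-container test:

```sh
# from the host: confirm Ollama is listening
curl http://localhost:11434/v1/models

# from inside a container: confirm the host alias resolves and the port is reachable
podman run --rm curlimages/curl -s http://host.containers.internal:11434/v1/models
```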