# Next Gen UI Agent A2A Server Container
This module is part of the Next Gen UI Agent project.
## Provides

- container image to easily run the Next Gen UI Agent A2A Server
## Installation
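The image is published at `quay.io/next-gen-ui/a2a`, as referenced throughout this guide. A minimal sketch of pulling it with Podman (Docker works the same way with `docker pull`):

```bash
# Pull the A2A Server image from the registry (uses the "latest" tag by default)
podman pull quay.io/next-gen-ui/a2a
```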
## Usage

```bash
podman run --rm -p 9999:9999 \
  -e INFERENCE_MODEL=llama3.2 \
  -e OPEN_API_URL=http://host.containers.internal:11434/v1 \
  quay.io/next-gen-ui/a2a
```
## Configuration

The A2A Server container can be configured via environment variables. For the available environment variables and their meaning, see the A2A Server Guide.

Dependencies necessary for the `openai` inference provider are installed in the image.

The `json` and `rhds` renderers are installed. Create a child image to install additional ones, for example as sketched below.
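A minimal Containerfile sketch for such a child image, assuming additional renderers are distributed as Python packages installable with `pip`; the package name `next-gen-ui-patternfly-renderer` below is hypothetical and only illustrates the pattern:

```dockerfile
# Containerfile for a child image with an extra renderer installed
FROM quay.io/next-gen-ui/a2a

# Hypothetical renderer package name; replace with the real one
RUN pip install next-gen-ui-patternfly-renderer
```

Build it with `podman build -t my-a2a .` and run it the same way as the base image.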
Some configuration defaults are changed in the image:
| Environment Variable | Default Value | Description |
|---|---|---|
| `A2A_HOST` | `0.0.0.0` | Host to bind to (for HTTP transports) |
| `A2A_PORT` | `9999` | Port to bind to (for HTTP transports) |
| `NGUI_MODEL` | `gpt-4o` | Model name |
## Usage Examples

### Basic Usage with Ollama (Local LLM)
```bash
podman run --rm -it -p 5000:5000 \
  --env A2A_PORT="5000" \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="llama3.2" \
  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:11434/v1" \
  --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/a2a
```
### OpenAI API Configuration
```bash
podman run --rm -it -p 5000:5000 \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="gpt-4o" \
  --env NGUI_PROVIDER_API_KEY="your-openai-api-key" \
  quay.io/next-gen-ui/a2a
```
### Remote LlamaStack Server
```bash
podman run --rm -it -p 5000:5000 \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="llama3.2-3b" \
  --env NGUI_PROVIDER_API_BASE_URL="http://host.containers.internal:5001/v1" \
  quay.io/next-gen-ui/a2a
```
### Configuration Using Environment File

Create a `.env` file:
```
# .env file
A2A_PORT=5000
A2A_HOST=0.0.0.0
NGUI_COMPONENT_SYSTEM=json
NGUI_PROVIDER=openai
NGUI_MODEL=gpt-4o
NGUI_PROVIDER_API_KEY=your-api-key-here
```
Run with the environment file:
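A minimal sketch using Podman's `--env-file` option, assuming the `.env` file is in the current directory and the published port matches `A2A_PORT`:

```bash
# Pass all variables from the .env file into the container
podman run --rm -it -p 5000:5000 \
  --env-file .env \
  quay.io/next-gen-ui/a2a
```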
## Network Configuration
For local development connecting to services running on the host machine:
- Use `host.containers.internal` to access host services (works with Podman and Docker Desktop)
- For Linux with Podman, you may need to use `host.docker.internal` or the host's IP address (see the example below)
- Ensure the target services (like Ollama) are accessible from containers
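For example, when `host.containers.internal` is not resolvable, the host can be referenced by its IP address directly. A sketch assuming Ollama listens on the host; the address `192.168.1.10` is a hypothetical placeholder for your host's IP:

```bash
# Point the agent at an Ollama instance on the host via its IP address
podman run --rm -it -p 9999:9999 \
  --env NGUI_PROVIDER="openai" \
  --env NGUI_MODEL="llama3.2" \
  --env NGUI_PROVIDER_API_BASE_URL="http://192.168.1.10:11434/v1" \
  --env NGUI_PROVIDER_API_KEY="ollama" \
  quay.io/next-gen-ui/a2a
```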