
Next Gen UI MCP Server

This module is part of the Next Gen UI Agent project.


This package wraps the Next Gen UI Agent as Model Context Protocol (MCP) tools using the official Python MCP SDK.

Since MCP adoption is strong these days and there is an appetite to use this protocol for agentic AI as well, we also deliver the UI Agent this way.

The most common way of utilising MCP tools is to provide them to an LLM, which chooses and executes them with certain parameters. This approach makes sense if you want your AI orchestrator to decide about UI component generation, for example to select which backend data loaded during processing needs to be visualized in the UI. You have to prompt the LLM to pass the original user prompt and the structured backend data into the UI MCP tool unaltered, to prevent unexpected UI errors.

An alternative approach is to invoke the MCP tool directly (or through another AI framework binding) with explicit parameters as part of your main application logic, at the specific moment of the flow after gathering structured backend data for the response. This approach is more reliable, reduces the main LLM's processing cost (tokens) and saves processing time, but it is less flexible.
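For the direct-invocation approach, the tool payload can be assembled in plain Python before handing it to whichever MCP client binding you use. A minimal sketch (the helper name and example data are illustrative; the argument names come from the tool documentation below):

```python
import json

def build_generate_ui_arguments(user_prompt: str, data_items: list) -> dict:
    """Assemble arguments for the generate_ui_multiple_components tool.

    Each item in data_items must carry the fields the tool expects:
    id, type and data (see the tool's parameter documentation below).
    """
    return {
        "user_prompt": user_prompt,      # pass the user's prompt unaltered
        "structured_data": data_items,   # backend data gathered by your app logic
    }

# Example payload, later passed to an MCP client's call_tool(...)
args = build_generate_ui_arguments(
    "Show me details of Toy Story",
    [{"id": "1", "type": "movie_detail",
      "data": json.dumps({"title": "Toy Story", "year": 1995})}],
)
```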

Provides

  • __main__.py to run the MCP server as a standalone server
  • NextGenUIMCPServer to embed the UI Agent MCP server into your Python code

Installation

Note: alternatively, you can use the container image to easily install and run the server.

pip install -U next_gen_ui_mcp

Depending on your use case you may need additional packages for inference provider or design component renderers. More about this in the next sections.

Usage

Running the standalone server

To get help on how to run the server and pass arguments, run it with the -h parameter:

python -m next_gen_ui_mcp -h

A few examples:

  # Run with MCP sampling (default - leverages client's LLM)
  python -m next_gen_ui_mcp

  # Run with OpenAI inference
  python -m next_gen_ui_mcp --provider openai --model gpt-3.5-turbo

  # Run with OpenAI compatible API of Ollama (local)
  python -m next_gen_ui_mcp --provider openai --model llama3.2 --base-url http://localhost:11434/v1 --api-key ollama

  # Run with MCP sampling and custom max tokens
  python -m next_gen_ui_mcp --sampling-max-tokens 4096

  # Run with MCP sampling and model preferences
  python -m next_gen_ui_mcp --sampling-hints claude-3-sonnet,claude --sampling-speed-priority 0.8 --sampling-intelligence-priority 0.7

  # Run with SSE transport (for web clients)
  python -m next_gen_ui_mcp --transport sse --host 127.0.0.1 --port 8000

  # Run with streamable-http transport
  python -m next_gen_ui_mcp --transport streamable-http --host 127.0.0.1 --port 8000

  # Run with patternfly component system
  python -m next_gen_ui_mcp --component-system patternfly

  # Run with rhds component system via SSE transport
  python -m next_gen_ui_mcp --transport sse --component-system rhds --port 8000

  # Run with custom CORS configuration
  python -m next_gen_ui_mcp --transport sse --cors-allow-origins "http://localhost:3000,http://localhost:8080"

  # Run with CORS allowing all origins (development only)
  python -m next_gen_ui_mcp --transport streamable-http --cors-allow-origins "*"

  # Run with custom CSP resource domains (for UI images, scripts, styles)
  python -m next_gen_ui_mcp --csp-resource-domains "https://cdn.jsdelivr.net,https://image.tmdb.org"

As the examples above show, you can configure MCP sampling, or the openai or anthropic-vertexai inference provider. You have to add the necessary dependencies to your Python environment to do so, otherwise the application will complain about missing packages. See the detailed documentation below.

Similarly, pluggable component systems such as rhds require certain packages, next_gen_ui_rhds_renderer in this particular case. The json renderer is installed by default.

Configuration Reference

The server can be configured using command-line arguments or environment variables. Command-line arguments take precedence over environment variables.
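The resolution order can be sketched as follows (illustrative only, not the server's actual code):

```python
import os

def resolve_setting(cli_value, env_name, default):
    """A CLI argument wins; otherwise the environment variable applies; otherwise the default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_name, default)

os.environ["MCP_PORT"] = "9000"
port_from_env = resolve_setting(None, "MCP_PORT", "8000")    # env applies -> "9000"
port_from_cli = resolve_setting("8080", "MCP_PORT", "8000")  # CLI wins -> "8080"
```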

| Command-line Argument | Environment Variable | Default Value | Description |
|---|---|---|---|
| --config-path | NGUI_CONFIG_PATH | - | Path to YAML configuration files (to merge more YAML files, use multiple command-line args / comma-separate in the env variable). |
| --component-system | NGUI_COMPONENT_SYSTEM | json | UI component system (json + any installed). Overrides the value from the YAML config file if used. |
| --transport | MCP_TRANSPORT | stdio | Transport protocol for MCP (stdio, sse, streamable-http). |
| --host | MCP_HOST | 127.0.0.1 | Host to bind to (for sse and streamable-http transports). |
| --port | MCP_PORT | 8000 | Port to bind to (for sse and streamable-http transports). |
| --tools | MCP_TOOLS | - | List of enabled tools (comma separated). All are enabled by default unless some are disabled in the YAML config. |
| --structured_output_enabled | MCP_STRUCTURED_OUTPUT_ENABLED | true | Enable or disable structured output. |
| --provider | NGUI_PROVIDER | mcp | LLM inference provider (mcp, openai, anthropic-vertexai), for details see below. |
| --model | NGUI_MODEL | - | Model name. Required for providers other than mcp. |
| --base-url | NGUI_PROVIDER_API_BASE_URL | - | Base URL for the API, with provider-specific defaults. Used by openai, anthropic-vertexai. |
| --api-key | NGUI_PROVIDER_API_KEY | - | API key for the LLM provider. Used by openai, anthropic-vertexai. |
| --temperature | NGUI_PROVIDER_TEMPERATURE | - | Temperature for model inference, float value (defaults to 0.0 for deterministic responses). Used by openai, anthropic-vertexai. |
| --sampling-max-tokens | NGUI_SAMPLING_MAX_TOKENS | - | Maximum LLM generated tokens, integer value. Used by mcp (defaults to 2048) and anthropic-vertexai (defaults to 4096). |
| --sampling-hints | NGUI_SAMPLING_HINTS | - | Comma-separated list of model hint names (e.g., "claude-3-sonnet,claude"). Used by mcp provider. |
| --sampling-cost-priority | NGUI_SAMPLING_COST_PRIORITY | - | Cost priority (0.0-1.0). Higher values prefer cheaper models. Used by mcp provider. |
| --sampling-speed-priority | NGUI_SAMPLING_SPEED_PRIORITY | - | Speed priority (0.0-1.0). Higher values prefer faster models. Used by mcp provider. |
| --sampling-intelligence-priority | NGUI_SAMPLING_INTELLIGENCE_PRIORITY | - | Intelligence priority (0.0-1.0). Higher values prefer more capable models. Used by mcp provider. |
| --anthropic-version | NGUI_PROVIDER_ANTHROPIC_VERSION | - | Anthropic version value used in the API call (defaults to vertex-2023-10-16). Used by anthropic-vertexai. |
| --cors-allow-origins | MCP_CORS_ALLOW_ORIGINS | http://localhost:8080 | Comma-separated list of allowed origins for CORS. Use * to allow all origins. Only applies to sse and streamable-http transports. |
| --cors-allow-credentials | MCP_CORS_ALLOW_CREDENTIALS | true | Allow credentials (cookies, authorization headers) in CORS requests. Only applies to sse and streamable-http transports. |
| --cors-allow-methods | MCP_CORS_ALLOW_METHODS | * | Comma-separated list of allowed HTTP methods for CORS. Use * to allow all methods. Only applies to sse and streamable-http transports. |
| --cors-allow-headers | MCP_CORS_ALLOW_HEADERS | * | Comma-separated list of allowed headers for CORS. Use * to allow all headers. Only applies to sse and streamable-http transports. |
| --cors-expose-headers | MCP_CORS_EXPOSE_HEADERS | mcp-session-id,mcp-protocol-version | Comma-separated list of headers to expose to the browser. Required for MCP streamable-http transport. Only applies to sse and streamable-http transports. |
| --csp-resource-domains | MCP_CSP_RESOURCE_DOMAINS | https://cdn.jsdelivr.net | Comma-separated list of domains allowed to load resources in the UI (images, scripts, styles). Used for the MCP Apps UI Content Security Policy. |
| --debug | - | - | Enable debug logging. |

LLM Inference Providers

The Next Gen UI MCP server supports multiple inference providers, controlled by the --provider commandline argument / NGUI_PROVIDER environment variable:

Provider mcp

Uses Model Context Protocol sampling to leverage the client's LLM capabilities.

⚠️ IMPORTANT: The MCP client must support the Sampling feature! Not all MCP clients implement this part of the MCP specification yet. Also note that the client makes the final model selection, and it may ignore the sampling preferences or interpret their values in its own way.

Parameters:

  • NGUI_SAMPLING_MAX_TOKENS (optional): Maximum LLM generated tokens, integer value (defaults to 2048).
  • NGUI_SAMPLING_HINTS (optional): Comma-separated list of model hint names (e.g., "claude-3-sonnet,claude"). Hints are treated as substrings that can match model names flexibly. Multiple hints are evaluated in order of preference. Allows the server to suggest preferred models to the client.
  • NGUI_SAMPLING_COST_PRIORITY (optional): Cost priority value (0.0-1.0). Higher values indicate preference for cheaper models. Helps the client select more cost-effective models.
  • NGUI_SAMPLING_SPEED_PRIORITY (optional): Speed priority value (0.0-1.0). Higher values indicate preference for faster models. Guides the client toward lower-latency models.
  • NGUI_SAMPLING_INTELLIGENCE_PRIORITY (optional): Intelligence priority value (0.0-1.0). Higher values indicate preference for more capable models. Suggests the client should use more advanced reasoning models.
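These values end up in the modelPreferences object of the MCP sampling request. A sketch of the JSON shape defined by the MCP specification (the helper function is illustrative, not the server's code):

```python
def build_model_preferences(hints_csv=None, cost=None, speed=None, intelligence=None):
    """Build the modelPreferences payload of an MCP sampling createMessage request."""
    prefs = {}
    if hints_csv:
        # each hint becomes a {"name": ...} object, evaluated in order of preference
        prefs["hints"] = [{"name": h.strip()} for h in hints_csv.split(",")]
    if cost is not None:
        prefs["costPriority"] = cost
    if speed is not None:
        prefs["speedPriority"] = speed
    if intelligence is not None:
        prefs["intelligencePriority"] = intelligence
    return prefs

prefs = build_model_preferences("claude-3-sonnet,claude", speed=0.8, intelligence=0.7)
# -> {"hints": [{"name": "claude-3-sonnet"}, {"name": "claude"}],
#     "speedPriority": 0.8, "intelligencePriority": 0.7}
```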

Provider openai

Uses the LangChain OpenAI inference provider, so it can be used with any OpenAI-compatible API, e.g. the OpenAI API itself, Ollama for localhost inference, or Llama Stack server v0.3.0+.

Requires additional package to be installed:

pip install langchain-openai

Requires:

  • NGUI_MODEL: Model name (e.g., gpt-4o, llama3.2).
  • NGUI_PROVIDER_API_KEY: API key for the provider.
  • NGUI_PROVIDER_API_BASE_URL (optional): Custom base URL for OpenAI-compatible APIs like Ollama or Llama Stack. OpenAI API by default.
  • NGUI_PROVIDER_TEMPERATURE (optional): Temperature for model inference (defaults to 0.0 for deterministic responses).

Base URL examples:

  • OpenAI: https://api.openai.com/v1 (default)
  • Ollama at localhost: http://localhost:11434/v1
  • Llama Stack server at localhost port 5001 called from MCP server running in image: http://host.containers.internal:5001/v1

Provider anthropic-vertexai

Uses Anthropic/Claude models from a proxied Google Vertex AI API endpoint.

The called API URL is constructed as {BASE_URL}/models/{MODEL}:streamRawPredict. The API key is sent as a Bearer token in the Authorization HTTP request header.
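The URL and header construction described above can be illustrated as follows (the base URL, model name and key are hypothetical values):

```python
def anthropic_vertexai_request(base_url: str, model: str, api_key: str):
    """Build the endpoint URL and Authorization header as described above."""
    url = f"{base_url}/models/{model}:streamRawPredict"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers

url, headers = anthropic_vertexai_request(
    "https://proxy.example.com/v1", "claude-3-5-sonnet", "my-api-key"
)
# url -> "https://proxy.example.com/v1/models/claude-3-5-sonnet:streamRawPredict"
```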

Requires:

  • NGUI_MODEL: Model name.
  • NGUI_PROVIDER_API_BASE_URL: Base URL of the API.
  • NGUI_PROVIDER_API_KEY: API key for the provider.
  • NGUI_PROVIDER_TEMPERATURE (optional): Temperature for model inference (defaults to 0.0 for deterministic responses).
  • NGUI_PROVIDER_ANTHROPIC_VERSION (optional): Anthropic version to use in API call (defaults to vertex-2023-10-16).
  • NGUI_SAMPLING_MAX_TOKENS (optional): Maximum LLM generated tokens, integer value (defaults to 4096).

YAML configuration

Common Next Gen UI YAML configuration files can be used to configure UI Agent functionality.

A configuration file extension is available to fine-tune the descriptions of the MCP tools and their arguments, to get better performance in your AI assistant/orchestrator. It also allows you to enable or disable individual tools. For details see the mcp field in the Schema Definition.

Example of the mcp yaml configuration extension:

mcp:
  tools:
    generate_ui_component:
      enabled: false  # Disable this tool (default: true)
    generate_ui_multiple_components:
      enabled: true  # Explicitly enable (optional, default is true)
      description: Generate multiple UI components for given user_prompt and input data.\nAlways get fresh data from another tool first.
      argument_descriptions:
        user_prompt: "Original user prompt without any changes, so UI components have necessary context. Do not generate this."

# other UI Agent configurations

Schema Argument Exclusion

You can configure which arguments are excluded from the MCP tool schema, so that they are not visible to the calling LLM. Removing unused arguments this way makes the schema passed to the LLM smaller, saving context/tokens. Excluded arguments can still be sent to the NGUI MCP server by the MCP client framework. This allows an optimized way to pass structured data from a previous tool call to NGUI without routing all the data through the LLM. The session_id argument is always excluded (it doesn't need to be listed), as it is sent by some MCP client frameworks (Llama Stack). Additional arguments can be excluded using schema_excluded_args:

mcp:
  tools:
    generate_ui_multiple_components:
      schema_excluded_args:
        - structured_data  # Exclude this argument from schema
    generate_ui_component:
      schema_excluded_args:
        - data_type_metadata  # Exclude this argument from schema
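Conceptually, exclusion simply removes the listed arguments from the JSON schema before it is advertised to the LLM (a sketch, not the server's actual implementation):

```python
import copy

def exclude_args_from_schema(schema: dict, excluded: list) -> dict:
    """Return a copy of the tool's input schema without the excluded arguments."""
    out = copy.deepcopy(schema)
    for arg in excluded:
        out.get("properties", {}).pop(arg, None)
    out["required"] = [r for r in out.get("required", []) if r not in excluded]
    return out

schema = {
    "properties": {"user_prompt": {"type": "string"},
                   "structured_data": {"type": "array"}},
    "required": ["user_prompt", "structured_data"],
}
slim = exclude_args_from_schema(schema, ["structured_data"])
# -> only user_prompt remains visible to the LLM
```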

Tool Enabling/Disabling Precedence

The system supports multiple ways to control which tools are enabled, with the following precedence order (highest to lowest):

  1. CLI/Environment (highest): --tools / MCP_TOOLS - when specified, completely overrides YAML configuration
  2. YAML Configuration: mcp.tools.<tool_name>.enabled - controls per-tool enablement when CLI/env is not specified
  3. Default (lowest): All tools enabled

Examples:

  • YAML disables generate_ui_component, no CLI arg → tool is disabled
  • YAML disables generate_ui_component, CLI --tools generate_ui_component → tool is enabled (CLI wins)
  • YAML enables generate_ui_component, CLI --tools generate_ui_multiple_components → only generate_ui_multiple_components is enabled (CLI specifies exact list)
  • No YAML, no CLI → all tools enabled (default)
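The precedence rules above can be sketched as (illustrative only, not the server's actual code):

```python
def resolve_enabled_tools(cli_tools, yaml_tools, all_tools):
    """CLI list wins outright; otherwise YAML per-tool flags; otherwise all tools enabled."""
    if cli_tools is not None:
        return [t for t in all_tools if t in cli_tools]
    return [t for t in all_tools if yaml_tools.get(t, {}).get("enabled", True)]

ALL = ["generate_ui_component", "generate_ui_multiple_components"]

# YAML disables generate_ui_component, no CLI arg -> tool is disabled
no_cli = resolve_enabled_tools(None, {"generate_ui_component": {"enabled": False}}, ALL)

# CLI specifies an exact list and overrides YAML entirely
cli_wins = resolve_enabled_tools(["generate_ui_component"],
                                 {"generate_ui_component": {"enabled": False}}, ALL)
```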

Running Server locally from Git Repo

If you are running this from inside our Next Gen UI Agent GitHub repo, the pants build system can help you satisfy all dependencies. In that case you can run the commands in the following way:

  # Run with MCP sampling (default - leverages client's LLM)
  pants run libs/next_gen_ui_mcp/server_example.py:extended

  # Run with streamable-http transport and Red Hat Design System component system for rendering
  pants run libs/next_gen_ui_mcp/server_example.py:extended --run-args="--transport streamable-http --component-system rhds"

  # Run directly
  PYTHONPATH=./libs python libs/next_gen_ui_mcp -h

Testing with MCP Client

As part of the GitHub repository we also provide an example client. It uses the MCP SDK client libraries and Ollama to provide inference for MCP sampling.

You can run it via this command:

pants --concurrent run libs/next_gen_ui_mcp/mcp_client_example.py

The --concurrent parameter is there only to allow calling it while pants run is used to start the server. By default pants restricts parallel invocations.

Using NextGenUI MCP Agent through Llama Stack

The Llama Stack documentation for tools nicely shows how to register an MCP server, and it also shows the code below for invoking a tool directly:

# input_data must match the tool's input schema, e.g.:
# input_data = {"user_prompt": "...", "data": "...", "data_type": "..."}
result = client.tool_runtime.invoke_tool(
    tool_name="generate_ui_component",
    kwargs=input_data,
)

Available MCP Tools

generate_ui_multiple_components

The main tool that wraps the entire Next Gen UI Agent functionality.

This single tool handles:

  • Component selection based on user prompt and data
  • Data transformation to match selected components
  • Design system rendering to produce final UI

Parameters:

  • user_prompt (str, required): User's prompt which we want to enrich with UI components
  • structured_data (List[Dict], required): List of structured input data. Each object has to have id, data, type, and optionally type_metadata field.
  • session_id (str, optional): Session ID. Not used, present just for compatibility purposes.

You can find the input schema in spec/mcp/generate_ui_input.schema.json.

Returns:

Object containing:

  • UI blocks with rendering and configuration
  • Textual summary of the UI Blocks generation

When an error occurs during execution, the valid UI blocks are still rendered. The failing UI block is mentioned in the summary and does not appear in the blocks field.

The textual summary is useful to give the calling LLM a chance to "understand" what happened and react accordingly, e.g. include info about the UI in its natural language response.

By default the result is provided as structured content containing the JSON object, while the text content holds just a human-readable summary. It's beneficial to send only the text summary to the agent for LLM processing and use the structured content for UI rendering on the client side.

If structured output is disabled via --structured_output_enabled=false, there is no structured content in the result and the text content contains the same content, but as a serialized JSON string.

For compatibility the JSON object contains the summary as well.
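Client-side handling of the two result modes might look like this (a sketch; the field names follow the example below, and the actual result object shape depends on your MCP client library):

```python
import json

def extract_blocks(text_content: str, structured_content=None):
    """Prefer structured content; with structured output disabled, parse the text JSON."""
    payload = structured_content if structured_content is not None else json.loads(text_content)
    return payload.get("blocks", []), payload.get("summary", "")

# structured output disabled: everything is in the serialized text content
blocks, summary = extract_blocks('{"blocks": [], "summary": "No components generated."}')
```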

Example:

{
  "blocks": [
    {
      "id": "e5e2db10-de22-4165-889c-02de2f24c901",
      "rendering": {
        "id": "e5e2db10-de22-4165-889c-02de2f24c901",
        "component_system": "json",
        "mime_type": "application/json",
        "content": "{\"component\":\"one-card\",\"image\":\"https://image.tmdb.org/t/p/w440_and_h660_face/uXDfjJbdP4ijW5hWSBrPrlKpxab.jpg\",\"id\":\"e5e2db10-de22-4165-889c-02de2f24c901\",\"title\":\"Toy Story Movie Details\",\"fields\":[{\"id\": \"title\",\"name\":\"Title\",\"data_path\":\"$..movie_detail.title\",\"data\":[\"Toy Story\"]},{\"id\": \"year\",\"name\":\"Release Year\",\"data_path\":\"$..movie_detail.year\",\"data\":[1995]},{\"id\": \"imdbRating\",\"name\":\"IMDB Rating\",\"data_path\":\"$..movie_detail.imdbRating\",\"data\":[8.3]},{\"id\": \"runtime\",\"name\":\"Runtime (min)\",\"data_path\":\"$..movie_detail.runtime\",\"data\":[81]},{\"id\": \"plot\",\"name\":\"Plot\",\"data_path\":\"$..movie_detail.plot\",\"data\":[\"A cowboy doll is profoundly threatened and jealous when a new spaceman figure supplants him as top toy in a boy's room.\"]}]}"
      },
      "configuration": {
        "data_type": "movie_detail",
        "input_data_transformer_name": "json",
        "json_wrapping_field_name": "movie_detail",
        "component_metadata": {
          "id": "e5e2db10-de22-4165-889c-02de2f24c901",
          "title": "Toy Story Movie Details",
          "component": "one-card",
          "fields": [
            {
              "id": "title",
              "name": "Title",
              "data_path": "$..movie_detail.title"
            },
            {
              "id": "year",
              "name": "Release Year",
              "data_path": "$..movie_detail.year"
            },
            {
              "id": "imdbRating",
              "name": "IMDB Rating",
              "data_path": "$..movie_detail.imdbRating"
            },
            {
              "id": "runtime",
              "name": "Runtime (min)",
              "data_path": "$..movie_detail.runtime"
            },
            {
              "id": "plot",
              "name": "Plot",
              "data_path": "$..movie_detail.plot"
            },
            {
              "id": "posterUrl",
              "name": "Poster",
              "data_path": "$..movie_detail.posterUrl"
            }
          ]
        }
      }
    }
  ],
  "summary": "Components are rendered in UI.\nCount: 1\n1. Title: 'Toy Story Movie Details', type: one-card"
}

You can find the schema for the response in spec/mcp/generate_ui_output.schema.json.

generate_ui_component

The tool that wraps the entire Next Gen UI Agent functionality, with the single input object decomposed into individual arguments.

Useful for agents which are able to pass one tool call result to another.

When an error occurs, the whole tool execution fails.

Parameters:

  • user_prompt (str, required): User's prompt which we want to enrich with UI components
  • data (str, required): Raw input data to render within the UI components
  • data_type (str, required): Data type. For example name of the tool used to load data.
  • data_type_metadata (str, optional): Metadata for data argument related to data_type. For example tool call arguments used to load the data.
  • data_id (str, optional): ID of Data. If not present, ID is generated.
  • session_id (str, optional): Session ID. Not used, present just for compatibility purposes.

Returns:

Same result as generate_ui_multiple_components tool.

Available MCP Resources

system://info

Returns system information about the Next Gen UI Agent including:

  • Agent name
  • Component system being used
  • Available capabilities
  • Description

ui://generate_ui_component/mcp-app.html

HTML resource for rendering UI components (single or multiple) in MCP Apps-compatible hosts. The server serves the appropriate file based on --component-system: patternfly-mcp-app.html for patternfly or json; for every other component system it looks for a file following the naming convention {component-system}-mcp-app.html.

MIME Type: text/html;profile=mcp-app

MCP Apps Integration

This server supports MCP Apps for displaying interactive UI in compatible chat clients (Claude Desktop, etc.).

How It Works

┌─────────────────┐       ┌──────────────────┐       ┌─────────────────────┐
│  MCP Host       │       │  Python Server   │       │  TypeScript UI      │
│  (Chat Client)  │       │  (This Module)   │       │  (Browser View)     │
└─────────────────┘       └──────────────────┘       └─────────────────────┘
        │                         │                            │
        │  1. call_tool()        │                            │
        │───────────────────────>│                            │
        │                         │                            │
        │  2. Generate UIBlock   │                            │
        │     with component     │                            │
        │     config             │                            │
        │<───────────────────────│                            │
        │                         │                            │
        │  3. read_resource()    │                            │
        │     ui://...html       │                            │
        │───────────────────────>│                            │
        │                         │                            │
        │  4. HTML content       │                            │
        │<───────────────────────│                            │
        │                         │                            │
        │  5. Load in iframe     │                            │
        │───────────────────────────────────────────────────>│
        │                         │                            │
        │  6. ui/initialize      │                            │
        │<───────────────────────────────────────────────────│
        │                         │                            │
        │  7. Send toolResult    │                            │
        │───────────────────────────────────────────────────>│
        │                         │                            │
        │                         │  8. Parse & render        │
        │                         │     with DynamicComponent │
        │                         │                           │

Data Flow

  1. Server generates UIBlock:

    UIBlock(
        id="data-123",
        rendering=UIBlockRendering(
            content='{"component":"data-view","id":"table-1","fields":[...]}'
        )
    )
    

  2. Host reads UI resource:
     • Requests ui://generate_ui_component/mcp-app.html
     • Receives self-contained HTML with React app

  3. React app processes result:

    // Extract from tool result
    const output = app.toolResult.structured_content;

    // Parse component config from rendering.content
    const config = JSON.parse(output.blocks[0].rendering.content);

    // Render with DynamicComponent
    <DynamicComponent config={config} />

  4. DynamicComponent renders PatternFly components:
     • Maps config.component to PatternFly components
     • Supports: data-view, one-card, chart-bar, chart-line, etc.

Building UI Resources

The UI resources are built separately from a TypeScript module:

# Quick update (from project root)
pants run libs/next_gen_ui_mcp:update-ui

# Or manually:
cd libs/next_gen_ui_mcp_apps_ui_patternfly
npm install
npm run build
cd ../next_gen_ui_mcp_apps_ui_rhds
npm install
npm run build
cd ../next_gen_ui_mcp
mkdir -p ui_resources
cp ../next_gen_ui_mcp_apps_ui_patternfly/dist/patternfly-mcp-app.html ui_resources/
cp ../next_gen_ui_mcp_apps_ui_rhds/dist/rhds-mcp-app.html ui_resources/

Files generated:

  • ui_resources/patternfly-mcp-app.html (~1.3MB) — used when --component-system is patternfly or json
  • ui_resources/rhds-mcp-app.html — used when --component-system is rhds

These files are self-contained and include:

  • All React code (PatternFly app) or vanilla JS + RHDS (RHDS app)
  • All PatternFly or RHDS components and CSS
  • MCP Apps SDK

Development Workflow

Use this workflow when iterating on the UI locally. (For packaging the wheel or Docker image, the UI apps are built automatically by pants package; you do not need to run update-ui.)

  1. Modify TypeScript UI (PatternFly or RHDS):

    cd libs/next_gen_ui_mcp_apps_ui_patternfly   # or next_gen_ui_mcp_apps_ui_rhds
    npm run watch  # Auto-rebuild on changes
    

  2. Copy built UI into the module (so the server can serve it from ui_resources/):

    # In another terminal, after the watch build has produced dist/*.html
    pants run libs/next_gen_ui_mcp:update-ui
    

  3. Run MCP server:

    pants run libs/next_gen_ui_mcp/server_example.py
    
    Or with options: pants run libs/next_gen_ui_mcp/server_example.py:extended --run-args="--transport streamable-http --component-system rhds"

  4. Test with MCP host:
     • Connect Claude Desktop or another MCP Apps-compatible host
     • Call generate_ui_component tool
     • Verify UI displays in iframe

Content Security Policy (CSP)

The UI tools use Content Security Policy to allow external resources (images, scripts, styles). CSP resource domains are configurable at runtime via --csp-resource-domains or the MCP_CSP_RESOURCE_DOMAINS environment variable. The server injects these domains into both tool metadata and resource metadata automatically.

Runtime configuration (recommended):

# Add domains for images, CDN, etc. (comma-separated)
python -m next_gen_ui_mcp --csp-resource-domains "https://cdn.jsdelivr.net,https://image.tmdb.org"

# Or set the environment variable
export MCP_CSP_RESOURCE_DOMAINS="https://cdn.jsdelivr.net,https://image.tmdb.org"
python -m next_gen_ui_mcp

Default: If not set, the server uses https://cdn.jsdelivr.net (for Red Hat Design System and PatternFly CSS from CDN).

CSP field used by the server:

  • resourceDomains — Origins for images, scripts, stylesheets, fonts, media (maps to img-src, script-src, style-src, font-src, media-src). This is the only field configurable via the runtime parameter; the server sets it from --csp-resource-domains / MCP_CSP_RESOURCE_DOMAINS.

Other CSP fields (from the MCP Apps UI spec; not currently configurable at runtime): connectDomains, frameDomains, baseUriDomains. Wildcard subdomains are supported in values, e.g. https://*.example.com.

Note: The MCP host enforces CSP via HTTP headers for security. External resources must be allowed by the configured resource domains.
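Conceptually, the mapping from resourceDomains to CSP directives looks like this (a sketch of the idea only; the actual header is assembled and enforced by the MCP host, not by this code):

```python
def build_csp_directives(resource_domains):
    """Map the configured resource domains onto the CSP directives listed above."""
    allowed = " ".join(resource_domains)
    return {directive: allowed
            for directive in ("img-src", "script-src", "style-src",
                              "font-src", "media-src")}

directives = build_csp_directives(["https://cdn.jsdelivr.net", "https://image.tmdb.org"])
# directives["img-src"] -> "https://cdn.jsdelivr.net https://image.tmdb.org"
```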

Troubleshooting

UI resources not found:

FileNotFoundError: UI resource not found: .../ui_resources/patternfly-mcp-app.html
(or rhds-mcp-app.html when using --component-system rhds)

Solution:

pants run libs/next_gen_ui_mcp:update-ui

Components not rendering correctly:

  • Verify component configs match the expected format
  • Check the browser console for errors
  • Ensure rendering.content is valid JSON

External images/resources blocked (CSP violation):

Loading the image 'https://example.com/image.jpg' violates Content Security Policy

Solution: Add the domain to the runtime CSP configuration (recommended):

python -m next_gen_ui_mcp --csp-resource-domains "https://cdn.jsdelivr.net,https://example.com"
# Or: export MCP_CSP_RESOURCE_DOMAINS="https://cdn.jsdelivr.net,https://example.com"

Build fails:

cd libs/next_gen_ui_mcp_apps_ui_patternfly   # or next_gen_ui_mcp_apps_ui_rhds
npm install  # Ensure dependencies are installed
npm run build