AI Insights

ProxCenter integrates with AI language models to provide intelligent analysis of your infrastructure. Get natural-language summaries, optimization recommendations, and anomaly explanations powered by either a local Ollama instance or the OpenAI API.

Overview

AI Insights analyzes your Proxmox metrics, events, and configuration to surface actionable recommendations. Instead of manually reviewing dashboards and logs, you can ask questions about your infrastructure and receive contextual answers.

Examples of what AI Insights can help with:

  • "Why is node-03 running hot this week?"
  • "Which VMs are oversized and could have their resources reduced?"
  • "Summarize the backup failures from the last 7 days"
  • Infrastructure optimization suggestions based on resource usage patterns
  • Anomaly detection explanations when metrics deviate from normal baselines

Supported Providers

Ollama (Self-Hosted)

Run AI models locally on your own hardware using Ollama. No data leaves your network.

Field         Description
Ollama URL    The Ollama API endpoint (e.g., http://localhost:11434)
Model         The model to use (e.g., llama3, mistral, mixtral)
Tip: For best results with infrastructure analysis, use a model with at least 7B parameters. Larger models (13B+) produce more detailed and accurate insights but require more GPU memory.
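ProxCenter's Test Connection button performs an equivalent check for you, but you can confirm the endpoint is reachable before configuring it. A minimal sketch in Python (the helper names are ours; the /api/tags endpoint, which lists installed models, is part of Ollama's HTTP API):

```python
import json
import urllib.request

def tags_url(base_url: str) -> str:
    """Build the Ollama model-list endpoint from the configured base URL."""
    return base_url.rstrip("/") + "/api/tags"

def list_ollama_models(base_url: str) -> list[str]:
    """Return the names of models installed on the Ollama instance."""
    with urllib.request.urlopen(tags_url(base_url), timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    # Same value you would enter in the Ollama URL field.
    models = list_ollama_models("http://localhost:11434")
    print("Reachable; installed models:", ", ".join(models) or "(none)")
```

If this fails with a connection error, check that Ollama is running and that the URL is reachable from the host where ProxCenter runs, not just from your workstation.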

OpenAI

Use OpenAI's cloud API for AI analysis.

Field         Description
API Key       Your OpenAI API key
Model         The model to use (e.g., gpt-4o, gpt-4o-mini)
Organization  Optional OpenAI organization ID

Configuration

  1. Navigate to Settings > AI
  2. Select your provider (Ollama or OpenAI)
  3. Enter the connection details
  4. Click Test Connection to verify the AI provider is reachable
  5. Save the configuration
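For reference, a hypothetical sketch of the saved provider settings — every field name here is illustrative, not ProxCenter's actual schema:

```json
{
  "provider": "ollama",
  "ollama": { "url": "http://localhost:11434", "model": "llama3" },
  "openai": { "api_key": "<stored-encrypted>", "model": "gpt-4o-mini", "organization": "" }
}
```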

Privacy and Data

ProxCenter sends infrastructure metadata (resource names, metrics, event summaries) to the configured AI provider for analysis. It does not send:

  • Credentials or API tokens
  • Disk contents or VM data
  • User passwords or personal information

Warning: When using OpenAI, infrastructure metadata is sent to OpenAI's servers. If your security policy prohibits sending infrastructure details to external services, use Ollama with a locally hosted model instead.

Where AI Insights Appear

Once configured, AI Insights are available in several places:

  • Dashboard -- An insights panel with proactive recommendations
  • Alerts -- AI-generated explanation of why an alert fired and suggested actions
  • Events -- Natural-language summary of event patterns
  • Reports -- AI-generated executive summary in infrastructure reports

Enterprise Feature: AI Insights is available in the Enterprise edition.

Permissions

Permission       Description
settings.manage  Required to configure AI provider settings
ai.view          View AI-generated insights and recommendations