AI Insights
ProxCenter integrates with AI language models to provide intelligent analysis of your infrastructure. Get natural-language summaries, optimization recommendations, and anomaly explanations powered by either a local Ollama instance or the OpenAI API.
Overview
AI Insights analyzes your Proxmox metrics, events, and configuration to surface actionable recommendations. Instead of manually reviewing dashboards and logs, you can ask questions about your infrastructure and receive contextual answers.
Examples of what AI Insights can help with:
- "Why is node-03 running hot this week?"
- "Which VMs are oversized and could have their resources reduced?"
- "Summarize the backup failures from the last 7 days"
- Infrastructure optimization suggestions based on resource usage patterns
- Anomaly detection explanations when metrics deviate from normal baselines
Supported Providers
Ollama (Self-Hosted)
Run AI models locally on your own hardware using Ollama. No data leaves your network.
| Field | Description |
|---|---|
| Ollama URL | The Ollama API endpoint (e.g., http://localhost:11434) |
| Model | The model to use (e.g., llama3, mistral, mixtral) |
For best results with infrastructure analysis, use a model with at least 7B parameters. Larger models (13B+) produce more detailed and accurate insights but require more GPU memory.
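To illustrate how a client talks to a local Ollama instance, here is a minimal Python sketch against Ollama's /api/generate endpoint, using only the standard library. The base URL and model name are assumptions; substitute the values you entered in the configuration form.

```python
import json
import urllib.request

# Assumed settings -- use the Ollama URL and model from your configuration.
OLLAMA_URL = "http://localhost:11434"
MODEL = "llama3"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama for a single complete response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str) -> str:
    """Send a prompt to the local Ollama instance and return the response text."""
    body = json.dumps(build_generate_request(MODEL, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example call (requires a running Ollama instance):
# ask_ollama("Summarize the backup failures from the last 7 days")
```

Because everything runs against localhost, the prompt and the model's answer never leave your machine.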
OpenAI
Use OpenAI's cloud API for AI analysis.
| Field | Description |
|---|---|
| API Key | Your OpenAI API key |
| Model | The model to use (e.g., gpt-4o, gpt-4o-mini) |
| Organization | Optional OpenAI organization ID |
Configuration
1. Navigate to Settings > AI
2. Select your provider (Ollama or OpenAI)
3. Enter the connection details
4. Click Test Connection to verify the AI provider is reachable
5. Save the configuration
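Conceptually, the Test Connection step only needs a cheap GET request against the provider. A hypothetical sketch of how such a check might pick its endpoint (both endpoints are real: Ollama's /api/tags lists installed models, OpenAI's /v1/models lists available models):

```python
def health_check_url(provider: str, base_url: str = "") -> str:
    """Return a lightweight endpoint suitable for a connectivity check.

    Hypothetical helper: a successful GET here confirms the provider
    is reachable and the configured model list can be read.
    """
    if provider == "ollama":
        # e.g. http://localhost:11434/api/tags
        return base_url.rstrip("/") + "/api/tags"
    if provider == "openai":
        return "https://api.openai.com/v1/models"
    raise ValueError(f"unknown provider: {provider}")
```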
Privacy and Data
ProxCenter sends infrastructure metadata (resource names, metrics, event summaries) to the configured AI provider for analysis. It does not send:
- Credentials or API tokens
- Disk contents or VM data
- User passwords or personal information
When using OpenAI, infrastructure metadata is sent to OpenAI's servers. If your security policy prohibits sending infrastructure details to external services, use Ollama with a locally hosted model instead.
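The kind of filtering described above can be sketched as a redaction step applied before anything is sent to a provider (the key list and helper are illustrative, not ProxCenter's actual implementation):

```python
# Illustrative deny-list: key names that must never reach an AI provider.
SENSITIVE_KEYS = {"password", "api_key", "token", "secret"}

def redact(payload: dict) -> dict:
    """Drop sensitive keys from infrastructure metadata before analysis."""
    return {k: v for k, v in payload.items() if k.lower() not in SENSITIVE_KEYS}
```

For example, `redact({"node": "node-03", "cpu_pct": 91, "token": "abc"})` keeps the node name and metric but drops the token.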
Where AI Insights Appear
Once configured, AI Insights are available in several places:
- Dashboard -- An insights panel with proactive recommendations
- Alerts -- AI-generated explanation of why an alert fired and suggested actions
- Events -- Natural-language summary of event patterns
- Reports -- AI-generated executive summary in infrastructure reports
AI Insights is available in the Enterprise edition.
Permissions
| Permission | Description |
|---|---|
| settings.manage | Required to configure AI provider settings |
| ai.view | View AI-generated insights and recommendations |
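If you enforce these permissions in your own tooling (for example, a script that fetches insights on behalf of users), the check reduces to simple set membership. A minimal sketch, assuming permissions are exposed as a set of strings:

```python
def can_view_insights(user_permissions: set) -> bool:
    """True if the user may see AI-generated insights (ai.view)."""
    return "ai.view" in user_permissions

def can_configure_ai(user_permissions: set) -> bool:
    """True if the user may change AI provider settings (settings.manage)."""
    return "settings.manage" in user_permissions
```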