
AI Engine

VortexHQ integrates 7 AI providers across every module to generate code, queries, commands, and summaries.

Supported Providers

| Provider | Default Model | API Key |
| --- | --- | --- |
| Vortex (Built-in) | auto | Not required — uses your Vortex account |
| OpenAI | gpt-4o | Required |
| Anthropic | claude-sonnet-4-20250514 | Required |
| DeepSeek | deepseek-chat | Required |
| Groq | llama-3.3-70b-versatile | Required |
| Ollama (local) | llama3 | Not required — runs locally |
| Custom | gpt-4o (configurable) | Required — any OpenAI-compatible endpoint |

Configuration

Go to Settings → AI Engine to configure your provider. Each provider can be tuned with:

  • API Key — Stored securely (base64-encoded in encrypted config)
  • Model — Override the default model
  • Base URL — Override for self-hosted or custom endpoints
  • Temperature — 0.0 to 1.0 (default: 0.3)
  • Max Tokens — 256 to 8,192 (default: 2,048; step: 256)
  • Test Connection — Verify your configuration works
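As a rough sketch of what these settings control, here is how they might map onto the JSON body of an OpenAI-compatible chat-completions request (the field names follow the OpenAI API; the `build_payload` helper and the prompt are illustrative, not part of VortexHQ):

```python
import json

# Illustrative settings mirroring the AI Engine options above.
# Values shown are the documented defaults; base_url is a placeholder.
settings = {
    "base_url": "https://api.openai.com/v1",  # Base URL override for self-hosted endpoints
    "model": "gpt-4o",                        # Model override
    "temperature": 0.3,                       # 0.0 to 1.0, default 0.3
    "max_tokens": 2048,                       # 256 to 8,192 in steps of 256
}

def build_payload(prompt: str, settings: dict) -> dict:
    """Assemble the request body an OpenAI-compatible endpoint expects."""
    return {
        "model": settings["model"],
        "messages": [{"role": "user", "content": prompt}],
        "temperature": settings["temperature"],
        "max_tokens": settings["max_tokens"],
    }

payload = build_payload("Explain this nginx error", settings)
print(json.dumps(payload, indent=2))
```

Test Connection then amounts to sending a small payload like this to `base_url` and checking for a successful response.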

15 AI Capabilities

AI is deeply integrated across every module:

| Module | Capabilities |
| --- | --- |
| Email | Summarize emails (action items, tone, dates); interactive Q&A chat about email content |
| API Client | Generate single requests from descriptions; generate entire clusters from descriptions; generate clusters from project source code (auto-expands Route::resource); analyze API responses |
| SSH Terminal | Natural language → shell commands; explain terminal output and errors |
| SQL Client | Natural language → SQL queries (MySQL/PostgreSQL-aware with schema context); explain and optimize queries |
| FTP / SFTP | File management assistance; generate config files (.htaccess, nginx.conf, .env) |
| PHP REPL | Ghost-text inline completions as you type; PHP code generation from descriptions |
| General | General text/code explanation |

Token Limits

When using the Vortex (Built-in) provider, AI usage is metered with a 3-window token system (daily, weekly, monthly). The backend automatically caps tokens at 4,000 per request. Requests using your own API key (OpenAI, Anthropic, etc.) bypass all VortexHQ limits entirely.
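The metering above can be sketched as follows. The 4,000-token per-request cap comes from this page; the daily/weekly/monthly window limits and the `tokens_allowed` helper are hypothetical, since the real quotas are set server-side by VortexHQ:

```python
PER_REQUEST_CAP = 4000  # documented backend cap per request

# Hypothetical window quotas, for illustration only; the actual
# daily/weekly/monthly limits are determined by your Vortex account.
WINDOW_LIMITS = {"daily": 50_000, "weekly": 250_000, "monthly": 800_000}

def tokens_allowed(requested: int, used: dict) -> int:
    """Tokens a request may consume: the requested amount, clamped to
    the per-request cap and to the tightest remaining window."""
    headroom = min(WINDOW_LIMITS[w] - used.get(w, 0) for w in WINDOW_LIMITS)
    return max(0, min(requested, PER_REQUEST_CAP, headroom))
```

For example, a 6,000-token request is clamped to 4,000 even with fresh windows, and a nearly exhausted daily window clamps it further. Requests routed through your own API key skip this check entirely.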
