LLM Provider Configuration

SpecForge includes LLM-based features such as natural-language spec generation and error explanation. To use these features, you must configure an LLM provider.

Supported Providers

SpecForge currently supports three LLM providers:

  • OpenAI - Cloud-based API (recommended for most users)
  • Gemini - Google's cloud-based API
  • Ollama - Run models locally on your machine

Configuration Methods

For the Executable (Windows, macOS, Linux)

Set the following environment variables before starting the SpecForge server:

OpenAI

# Linux / macOS
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-5-nano-2025-08-07
export OPENAI_API_KEY=your-api-key-here
# Windows PowerShell
$env:LLM_PROVIDER="openai"
$env:LLM_MODEL="gpt-5-nano-2025-08-07"
$env:OPENAI_API_KEY="your-api-key-here"

Get an API key from platform.openai.com/api-keys.
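
Before starting SpecForge, you can optionally confirm the key is valid by listing the models it can access (this assumes curl is installed; it is not a SpecForge command):

# optional: verify the key by listing available models
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"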

Gemini

# Linux / macOS
export LLM_PROVIDER=gemini
export LLM_MODEL=gemini-2.5-flash
export GEMINI_API_KEY=your-api-key-here
# Windows PowerShell
$env:LLM_PROVIDER="gemini"
$env:LLM_MODEL="gemini-2.5-flash"
$env:GEMINI_API_KEY="your-api-key-here"

Get an API key from ai.google.dev/gemini-api/docs/api-key.
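
As with OpenAI, you can optionally verify the key by listing models (assumes curl is installed; not a SpecForge command):

# optional: verify the key against the Gemini API
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GEMINI_API_KEY"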

Ollama

First, install and run Ollama from docs.ollama.com/quickstart.
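
Ollama only serves models you have pulled locally, so pull one before pointing SpecForge at it (the model name below is just an example):

# pull a model and confirm the local server can list it
ollama pull llama3.2
curl -s http://127.0.0.1:11434/api/tags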

Then set the environment variables:

# Linux / macOS
export LLM_PROVIDER=ollama
export LLM_MODEL=your-model-name  # e.g., llama3.2, mistral
export OLLAMA_API_BASE=http://127.0.0.1:11434
# Windows PowerShell
$env:LLM_PROVIDER="ollama"
$env:LLM_MODEL="your-model-name"  # e.g., llama3.2, mistral
$env:OLLAMA_API_BASE="http://127.0.0.1:11434"

Change OLLAMA_API_BASE if your Ollama server is running on a different machine.
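
For example, to point SpecForge at an Ollama server on another host (the address below is hypothetical; substitute your own):

# Linux / macOS
export OLLAMA_API_BASE=http://192.168.1.50:11434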

For Docker

Modify the environment variables in your docker-compose.yml file:

- LLM_PROVIDER=openai # other options: ollama, gemini
- LLM_MODEL=gpt-5-nano-2025-08-07 # choose the appropriate model for your provider
# One of the following, depending on LLM_PROVIDER:
- OPENAI_API_KEY=${OPENAI_API_KEY}
- GEMINI_API_KEY=${GEMINI_API_KEY}
- OLLAMA_API_BASE=http://127.0.0.1:11434 # change if Ollama runs elsewhere; note that inside a container, 127.0.0.1 refers to the container itself
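
For context, these entries belong under the environment: key of the SpecForge service. A minimal sketch (the service and image names are illustrative; match them to your actual docker-compose.yml):

services:
  specforge:                      # your actual service name may differ
    image: specforge:latest       # illustrative
    environment:
      - LLM_PROVIDER=openai
      - LLM_MODEL=gpt-5-nano-2025-08-07
      - OPENAI_API_KEY=${OPENAI_API_KEY}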

You can insert API keys directly in the file:

- LLM_PROVIDER=gemini
- GEMINI_API_KEY=abc123XYZ # note: no quotes around the value

However, referencing environment variables (the ${OPENAI_API_KEY} form shown above) is preferable, since it keeps API keys out of files that may be committed to version control.
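
With that syntax, Docker Compose substitutes the value from your shell environment or from a .env file placed next to docker-compose.yml:

# .env (keep this file out of version control)
OPENAI_API_KEY=your-api-key-here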

Default Models

If you don't set the LLM_MODEL variable:

  • OpenAI: Defaults to gpt-5-nano-2025-08-07
  • Gemini: Defaults to gemini-2.5-flash
  • Ollama: You must specify a model (no default)
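
A minimal configuration can therefore omit LLM_MODEL entirely for OpenAI or Gemini. For example:

# Linux / macOS; relies on the default model
export LLM_PROVIDER=openai
export OPENAI_API_KEY=your-api-key-here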

Without LLM Configuration

Without an appropriate LLM provider configuration, LLM-based SpecForge features will be unavailable. The rest of SpecForge will continue to work normally.