Setting up SpecForge
The SpecForge suite consists of three components:
- The SpecForge Server: the backend server that the other components connect to. It can be run via Docker or as a standalone executable.
- The SpecForge VSCode Extension: provides Lilo language support in VSCode for editing and managing specifications, as well as rendering interactive visualizations.
- The SpecForge Python SDK: provides an API for interacting with the SpecForge server from Python code, for example to exchange specifications or data from Python scripts or Jupyter notebooks.
All necessary files can be obtained from the SpecForge releases page.
Quick Start
Follow these steps to get started quickly:
- Install the dependencies `z3` and `rsvg-converter` (see OS-specific instructions; `rsvg-converter` is optional)
- Download and extract the SpecForge executable for your operating system
- Configure your license (place `license.json` in the appropriate location for your OS)
- (Optional) Configure an LLM provider by setting environment variables (e.g., `SPECFORGE_LLM_PROVIDER=openai`, `OPENAI_API_KEY=...`) - see LLM Provider Configuration
- Start the SpecForge server: `./specforge serve` (or `.\specforge.exe serve` on Windows)
- Install the VSCode Extension (see docs)
- Create a directory for your project and place your `.lilo` files directly in it
- Open the directory in VSCode and start writing specifications
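Taken together, the steps above can be sketched as a shell session. The archive name, paths, and API key below are illustrative placeholders, not exact values:

```shell
# Illustrative quick-start session; adjust the archive name and
# paths to match your OS and the SpecForge releases page.

# 1. Extract the downloaded SpecForge executable
tar -xzf specforge-linux-x86_64.tar.gz   # archive name is an example
cd specforge

# 2. (Optional) configure an LLM provider for this session
export SPECFORGE_LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-...             # replace with your own key

# 3. Start the server (listens on port 8080 by default)
./specforge serve
```

On Windows, run `.\specforge.exe serve` from PowerShell instead of the final command.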
Note: The `lilo.toml` project configuration file is optional. For initial setup, you can skip it and place your specification and data files directly in the project root. See Project Configuration for details on when and how to use `lilo.toml`.
Detailed Setup Instructions
Choose your platform for detailed setup instructions:
- Windows - Complete setup guide for Windows
- macOS - Complete setup guide for macOS (Apple Silicon)
- Linux - Complete setup guide for Linux
- Docker - Using Docker instead of the executable
Server Environment Variables
The SpecForge server reads the following environment variables at startup. These can be set before running `specforge serve` or configured in your Docker Compose file.
| Variable | Type | Default | Description |
|---|---|---|---|
| `SPECFORGE_PORT` | Integer | `8080` | Port the server listens on. |
| `SPECFORGE_CACHE_BOUND` | Integer | `64` | Maximum number of entries in the exemplification cache. The cache stores results from the SMT solver (Z3) to avoid redundant computation. Set to `0` to disable caching. |
| `SPECFORGE_SEMAPHORE_LIMIT` | Integer | `8` | Maximum number of concurrent SMT solver (Z3) instances, limiting parallelism to prevent resource exhaustion. Set to `0` to disable the semaphore (no concurrency limit). |
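To make the semantics concrete, here is a small Python sketch (not the server's actual implementation) that reads these variables with the documented defaults, treating `0` as "disabled" for both the cache and the semaphore:

```python
import os

def read_server_config(env=os.environ):
    """Parse SpecForge server settings from environment variables,
    falling back to the documented defaults."""
    port = int(env.get("SPECFORGE_PORT", "8080"))
    cache_bound = int(env.get("SPECFORGE_CACHE_BOUND", "64"))
    semaphore_limit = int(env.get("SPECFORGE_SEMAPHORE_LIMIT", "8"))
    return {
        "port": port,
        # A bound of 0 disables the exemplification cache entirely.
        "caching_enabled": cache_bound > 0,
        "cache_bound": cache_bound,
        # A limit of 0 disables the semaphore (no cap on Z3 instances).
        "semaphore_enabled": semaphore_limit > 0,
        "semaphore_limit": semaphore_limit,
    }

# With no variables set, the documented defaults apply:
print(read_server_config(env={}))
```

For instance, `read_server_config(env={"SPECFORGE_CACHE_BOUND": "0"})` reports caching as disabled while keeping the default port and semaphore limit.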
Example usage:
```
SPECFORGE_PORT=9090 SPECFORGE_CACHE_BOUND=128 ./specforge serve
```
For LLM-related environment variables (`SPECFORGE_LLM_PROVIDER`, `SPECFORGE_LLM_MODEL`, `OPENAI_API_KEY`, etc.), see the LLM Provider Configuration guide.
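If you run the server via Docker Compose, the same variables go in the service's `environment` block. A minimal sketch follows; the image name and service layout are assumptions, not the official compose file:

```yaml
services:
  specforge:
    image: specforge:latest        # placeholder image name
    ports:
      - "9090:9090"
    environment:
      SPECFORGE_PORT: "9090"
      SPECFORGE_CACHE_BOUND: "128"
      SPECFORGE_SEMAPHORE_LIMIT: "8"
```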
Note that all of these can also be configured in the settings of the SpecForge VSCode extension.