OpenSRE runs as a set of Docker Compose services. You can have a fully functional AI SRE platform running locally in under 5 minutes.
```shell
git clone https://github.com/swapnildahiphale/OpenSRE.git
cd OpenSRE
```
Create a `.env` file in the project root:

```
OPENROUTER_API_KEY=your-openrouter-api-key-here
```
That's the only required variable to get started. See Configuration for the full list of optional settings.
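If you prefer a single copy-paste command, the following creates the file in one step (a minimal sketch assuming a POSIX shell; substitute your real key for the placeholder):

```shell
# Write the one required setting to .env in the project root.
# Replace the placeholder value with your actual OpenRouter API key.
cat > .env <<'EOF'
OPENROUTER_API_KEY=your-openrouter-api-key-here
EOF
```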
```shell
make dev
```
This starts the following services:
| Service | Port | Description |
|---------|------|-------------|
| PostgreSQL | 5433 | Primary database |
| config-service | 8081 | Configuration API |
| Neo4j | 7475 (HTTP), 7688 (Bolt) | Knowledge graph |
| LiteLLM | 4001 | LLM proxy |
| sre-agent | 8001 | Investigation agent |
| web-ui | 3002 | Admin console |
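A common startup failure is a host port already taken by another local service. This is a small pre-flight sketch (not part of OpenSRE itself) that checks the ports from the table above; it uses bash's `/dev/tcp` redirection, so no extra tools are needed:

```shell
# Report which of OpenSRE's host ports are already in use.
# A successful TCP connect means something is listening there.
check_free_ports() {
  for p in 5433 8081 7475 7688 4001 8001 3002; do
    if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
      echo "port $p: in use"
    else
      echo "port $p: free"
    fi
  done
}
check_free_ports
```

If any port is reported in use, stop the conflicting service (or adjust the port mappings) before running `make dev`.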
Navigate to http://localhost:3002 to access the OpenSRE admin console.
In the web console, you can trigger a test investigation:
```
High error rate on payments-service: 5xx errors spiked to 15% in the last 10 minutes
```

The first investigation may take 2-5 minutes, depending on which skills are invoked and your LLM's response time.
To receive alerts and investigations via Slack:
```shell
# Add to your .env
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token

# Start with Slack bot
make dev-slack
```
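A mixed-up pair of tokens is an easy mistake to make here. This is a small sanity-check sketch (the `check_token` helper is hypothetical, not part of OpenSRE) that verifies each token carries the prefix Slack uses for its type, `xoxb-` for bot tokens and `xapp-` for app-level tokens, before you start the bot:

```shell
# Warn if a Slack token doesn't match the expected prefix for its type.
check_token() {
  name="$1"; prefix="$2"; value="$3"
  case "$value" in
    "$prefix"*) echo "$name looks OK" ;;
    *)          echo "$name should start with $prefix" ;;
  esac
}
check_token SLACK_BOT_TOKEN xoxb- "$SLACK_BOT_TOKEN"
check_token SLACK_APP_TOKEN xapp- "$SLACK_APP_TOKEN"
```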