# Getting Started with OpenSRE
OpenSRE is designed to be straightforward to set up. This guide walks you through the basics of getting it running in your environment.
## Prerequisites
- Docker and Docker Compose
- An API key for an LLM provider (OpenRouter, Anthropic, or OpenAI)
- Your monitoring stack credentials (Prometheus, Grafana, etc.)
## Quick Setup
- **Clone the repository**

  ```bash
  git clone https://github.com/swapnildahiphale/OpenSRE.git
  cd OpenSRE
  ```
- **Configure the environment**

  Copy the example environment file and add your API key:

  ```bash
  cp .env.example .env
  # Edit .env and set OPENROUTER_API_KEY
  ```
- **Start the platform**

  ```bash
  make dev
  ```

  This starts all services: the SRE agent, web console, Neo4j knowledge graph, PostgreSQL, and LiteLLM proxy.
- **Open the console**

  Visit http://localhost:3000 to access the web console. From here you can trigger investigations, view reports, and configure integrations.
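Step 2 above sets only `OPENROUTER_API_KEY`. Depending on which integrations you enable, your `.env` will typically carry additional entries. The sketch below is purely illustrative: every variable name other than `OPENROUTER_API_KEY` is an assumption, not part of OpenSRE's documented configuration.

```shell
# Hypothetical .env — only OPENROUTER_API_KEY appears in this guide;
# the remaining names are illustrative placeholders, not OpenSRE's schema.
OPENROUTER_API_KEY=sk-or-your-key-here
# PROMETHEUS_URL=http://prometheus:9090
# GRAFANA_API_KEY=your-grafana-key
# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...
```

Check `.env.example` in the repository for the variables the platform actually reads.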
## Connecting Your Tools
OpenSRE integrates with your existing monitoring stack. Configure connections to:
- Prometheus — metrics queries
- Grafana — dashboard context
- Kubernetes — cluster state and pod health
- Slack — alert notifications and investigation reports
Configuration is done through the web console or by editing the team configuration file.
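For file-based setup, the team configuration might resemble the sketch below. The file layout and every key name here are assumptions for illustration, not OpenSRE's actual schema; consult the repository's documentation for the real format.

```yaml
# Hypothetical team configuration — keys are illustrative, not OpenSRE's schema.
integrations:
  prometheus:
    url: http://prometheus:9090
  grafana:
    url: http://grafana:3001
    api_key: ${GRAFANA_API_KEY}
  kubernetes:
    context: production
  slack:
    webhook_url: ${SLACK_WEBHOOK_URL}
    channel: "#sre-alerts"
```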
## What's Next
Once OpenSRE is running, try triggering a test investigation. The platform will walk through its investigation pipeline and produce a report. Each completed investigation adds to the episodic memory system, making future investigations faster and more accurate.
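Before triggering that first investigation, it is worth confirming the stack is actually up. Assuming `make dev` starts everything via Docker Compose (as the Quick Setup implies), standard commands like these can verify it from the repository directory:

```shell
# List the compose services and their state
# (SRE agent, web console, Neo4j, PostgreSQL, LiteLLM proxy)
docker compose ps

# Probe the web console; expect an HTTP 200 once it is ready
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000
```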
Check out the documentation for detailed configuration options and advanced features.