Commit d6e9bc5

feat: add Gemini support, fix Neo4j Aura connection, and add setup guide
1 parent 3117d55 commit d6e9bc5

5 files changed

Lines changed: 289 additions & 2 deletions


SETUP_GUIDE.md

Lines changed: 182 additions & 0 deletions
@@ -0,0 +1,182 @@
# 🚀 Secrin Setup Guide

Welcome to Secrin! This guide covers two ways to run the project:

1. **☁️ Hybrid Mode (Recommended):** Cloud DB + Gemini API. Fast, lightweight, runs on any laptop.
2. **🔒 Local Mode:** Local Docker DB + Ollama. Privacy-focused, requires good hardware (16GB+ RAM).

---

## 🛠️ 1. Prerequisites (Check these first!)

Open your terminal and check these versions. If any are missing, install them.

- **Git:** `git --version`
- **Python (3.10+):** `python --version`
- **Node.js (v20+):** `node -v`
- **Poetry:** `pip install poetry`
- **(Optional) Docker:** Required only if you choose Local Mode.
- **(Optional) Ollama:** Required only if you choose Local Mode.

---

## 🔑 2. Credentials & Services

Choose **Option A** (easier) or **Option B** (private).

### Option A — Hybrid (Cloud DB + Gemini)
1. **Database:** Go to https://console.neo4j.io
   - Create a **Free Instance**.
   - ⚠️ **Copy the password immediately.** You won't see it again.
   - Copy the URI (example: `neo4j+s://abc12345.databases.neo4j.io`).
2. **AI Model:** Get a key from Google AI Studio: https://aistudio.google.com/app/apikey

### Option B — Fully Local (Docker + Ollama)
1. **Database:** Run this Docker command to start Neo4j locally:
   ```bash
   docker run --restart always --publish=7474:7474 --publish=7687:7687 --env NEO4J_AUTH=neo4j/password neo4j:latest
   ```
2. **AI Model:** Install Ollama and pull the model:
   ```bash
   ollama pull llama3.2
   ollama serve
   ```

---

## 🖥️ 3. Backend Setup (Terminal 1)

### Step 3.1 — Install dependencies
```bash
git clone https://github.com/SecrinLabs/secrin.git
cd secrin
poetry install
```

### Step 3.2 — Configure environment (.env)
Create the `.env` file from the example:

```bash
cp .env.example .env
```

Open `.env` and paste the configuration matching your choice from Step 2.

Option A (Hybrid / Gemini) — example `.env`:

```bash
# --- Database (Neo4j Aura) ---
NEO4J_URI=neo4j+s://<YOUR_INSTANCE_ID>.databases.neo4j.io
NEO4J_USER=neo4j
NEO4J_PASS=<YOUR_SAVED_PASSWORD>
NEO4J_DB=neo4j

# --- Embedding (Local CPU/GPU) ---
EMBEDDING_PROVIDER=sentence_transformer
SENTENCE_TRANSFORMER_MODEL=sentence-transformers/all-MiniLM-L6-v2
EMBEDDING_DIMENSION=384

# --- LLM (Gemini) ---
LLM_PROVIDER=gemini
GOOGLE_API_KEY=<YOUR_GEMINI_KEY>
```
Option B (Local / Ollama) — example `.env`:

```bash
# --- Database (Local Docker) ---
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASS=password
NEO4J_DB=neo4j

# --- Embedding (Ollama) ---
EMBEDDING_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
EMBEDDING_DIMENSION=1024

# --- LLM (Ollama) ---
LLM_PROVIDER=ollama
LLM_MODEL_OLLAMA=llama3.2
```
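Before launching the backend, you can sanity-check that the shared variables actually reached the environment. A minimal sketch (the `missing_env` helper is illustrative, not part of the repo; variable names come from the `.env` examples above, and the provider-specific key such as `GOOGLE_API_KEY` or `OLLAMA_BASE_URL` can be appended to the list for your chosen option):

```python
import os

# Variables both setups need, taken from the example .env files above.
REQUIRED = ["NEO4J_URI", "NEO4J_USER", "NEO4J_PASS", "NEO4J_DB"]

def missing_env(keys):
    """Return the names of variables that are unset or empty."""
    return [k for k in keys if not os.getenv(k)]
```

An empty list means the environment is ready; run it in the same shell session (or after loading `.env` with a tool like python-dotenv) so the variables are visible.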
### Step 3.3 — Launch backend
```bash
# 1. Verify connection
poetry run setup

# 2. Start server
poetry run server
```
Success: you should see Uvicorn running on http://0.0.0.0:8000.

---

## 🎨 4. Frontend Setup (Terminal 2)

Open a new terminal window:
```bash
cd secrin/apps/web
```

### Step 4.1 — Install dependencies
The project prefers pnpm. Try these methods in order.

Method 1 — Standard:
```bash
corepack enable
pnpm install
```

Method 2 — If you get "Permission Denied" on Windows, install pnpm directly (PowerShell):

```powershell
iwr https://get.pnpm.io/install.ps1 -useb | iex
```
Then retry Method 1.

Method 3 — Fall back to npm:
```bash
npm install
```

### Step 4.2 — Start UI
```bash
cp .env.example .env
pnpm dev  # OR 'npm run dev' if you used Method 3
```
Success: open http://localhost:3000

---

## 🚀 5. First Run & Ingestion
- The graph will be blank on first login. This is normal.
- Log in using your Neo4j credentials.
- Hybrid users: if `neo4j+s://` fails to connect, try `bolt+s://`.

Ingest data:
1. Go to "Integrations" in the sidebar.
2. Paste a GitHub URL (example: `https://github.com/SecrinLabs/secrin`).
3. Click "Ingest".
4. Watch the backend terminal; it will clone the repo and embed it.
5. When the backend terminal says "Finished", go to the Graph tab to view your data.

---

## ❓ Troubleshooting — Common Errors

- "No routing servers available"
  1. Check whether your Neo4j Aura instance is "Paused" in the console. Resume it.
  2. Try `bolt+s://` instead of `neo4j+s://`.
  3. Try a mobile hotspot (some Wi‑Fi networks block port 7687).
- "Database does not exist"
  - The Neo4j Aura Free tier only allows the database name `neo4j`. Check your `.env`.

- "Command pnpm not found"
  - Restart your terminal after installing it. If it still fails, in PowerShell run:
    ```powershell
    $env:PATH = "$env:LOCALAPPDATA\pnpm;$env:PATH"
    ```

- "Unsupported LLM provider: gemini"
  - You are running an old version. `git pull` the latest changes or check `llm_factory.py`.

apps/web/components/Neo4jGraph.tsx

Lines changed: 7 additions & 1 deletion
@@ -92,11 +92,17 @@ export default function Neo4jGraph({
     // Dynamic import of neo4j-driver for client-side only
     const neo4j = (await import("neo4j-driver")).default;

+    // FIX: Check if the URL implies a secure connection (Cloud/Aura)
+    const isSecure = neo4jUrl.includes('+s');
+
     const driver = neo4j.driver(
       neo4jUrl,
       neo4j.auth.basic(username, password),
-      { encrypted: false }
+      // If it's a secure URL, don't pass any config (let the URL handle it).
+      // If it's local, explicitly turn off encryption.
+      isSecure ? {} : { encrypted: "ENCRYPTION_OFF" }
     );
+
     const session = driver.session({ database });

     const start = Date.now();
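The encryption fix in this diff reduces to a small predicate over the URI scheme: `+s`/`+ssc` schemes already embed TLS settings in the URL, and the JavaScript driver rejects mixed configuration, so secure URLs must receive an empty config object. A standalone sketch (the function name is illustrative, not part of the repo):

```typescript
// Secure schemes (neo4j+s://, bolt+s://, and the +ssc variants) already
// carry TLS config in the URL, so the driver config must stay empty for them;
// plain local schemes get encryption switched off explicitly.
function driverConfigFor(url: string): { encrypted?: string } {
  const isSecure = url.includes("+s");
  return isSecure ? {} : { encrypted: "ENCRYPTION_OFF" };
}
```

Used as `neo4j.driver(url, neo4j.auth.basic(user, pass), driverConfigFor(url))`, this keeps the cloud and local cases in one place.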

packages/config/settings.py

Lines changed: 3 additions & 0 deletions
@@ -100,6 +100,9 @@ class Settings(BaseSettings):
         description="Embedding cache TTL in seconds (if caching enabled)"
     )

+    # Google API Configuration
+    GOOGLE_API_KEY: Optional[str] = None
+
     # OpenAI Configuration
     OPENAI_API_KEY: str = Field(
         default="",

packages/memory/factories/llm_factory.py

Lines changed: 9 additions & 1 deletion
@@ -6,6 +6,8 @@
 from typing import Optional
 from packages.memory.llm.base import BaseLLMProvider
 from packages.memory.llm.providers.ollama import OllamaProvider
+# --- ADDED: Import Gemini ---
+from packages.memory.llm.providers.gemini import GeminiProvider
 from packages.config import Settings


@@ -47,10 +49,16 @@ def create_provider(
             temperature=temperature,
             max_tokens=max_tokens
         )
+        elif provider.lower() == "gemini":
+            return GeminiProvider(
+                model=model,
+                temperature=temperature,
+                max_tokens=max_tokens
+            )
         else:
             raise ValueError(
                 f"Unsupported LLM provider: {provider}. "
-                f"Currently supported: ['ollama']"
+                f"Currently supported: ['ollama', 'gemini']"
             )

     @classmethod
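The if/elif dispatch above grows by one branch per provider; a dict registry is a common alternative that keeps the error message in sync automatically. A minimal standalone sketch (the stub classes stand in for the real providers; this is not the repo's code):

```python
# Stubs standing in for the real OllamaProvider / GeminiProvider.
class OllamaProvider:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class GeminiProvider:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

# Registry: adding a provider becomes a one-line change.
PROVIDERS = {"ollama": OllamaProvider, "gemini": GeminiProvider}

def create_provider(provider: str, **kwargs):
    """Look the provider up by (case-insensitive) name and instantiate it."""
    try:
        cls = PROVIDERS[provider.lower()]
    except KeyError:
        raise ValueError(
            f"Unsupported LLM provider: {provider}. "
            f"Currently supported: {sorted(PROVIDERS)}"
        )
    return cls(**kwargs)
```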
packages/memory/llm/providers/gemini.py

Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
"""
Gemini LLM provider implementation.
"""

import requests
import json
import os
from typing import List, Any, Optional, Iterator
from packages.memory.llm.base import BaseLLMProvider
from packages.config.settings import Settings

class GeminiProvider(BaseLLMProvider):
    """Gemini provider using Google's Generative Language API."""

    def __init__(
        self,
        model: Optional[str] = "gemini-2.5-flash",
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
        api_key: Optional[str] = None
    ):
        settings = Settings()

        # 1. Configuration
        model = model or "gemini-2.5-flash"
        temperature = temperature if temperature is not None else settings.LLM_TEMPERATURE
        max_tokens = max_tokens or settings.LLM_MAX_TOKENS

        super().__init__(model=model, temperature=temperature, max_tokens=max_tokens)

        # 2. Auth: Try settings first, then fall back to the OS environment
        self.api_key = api_key or getattr(settings, "GOOGLE_API_KEY", None) or os.getenv("GOOGLE_API_KEY")

        if not self.api_key:
            raise ValueError("GOOGLE_API_KEY is missing. Please add it to your .env file.")

    def get_provider_name(self) -> str:
        return "gemini"

    def is_available(self) -> bool:
        return bool(self.api_key)

    def generate_answer(self, question: str, context_items: List[Any], search_type: str) -> str:
        """Generate answer using Gemini."""
        prompt = self._build_prompt(question, context_items, search_type)
        return self.generate_text(prompt)

    def generate_text(self, prompt: str, system_prompt: Optional[str] = None) -> str:
        """
        Generate text using the Gemini REST API.
        """
        url = f"https://generativelanguage.googleapis.com/v1beta/models/{self.model}:generateContent?key={self.api_key}"

        # Combine the system prompt if present (Gemini supports system_instruction,
        # but simple concatenation is robust)
        full_prompt = prompt
        if system_prompt:
            full_prompt = f"System: {system_prompt}\n\nUser: {prompt}"

        payload = {
            "contents": [{
                "parts": [{"text": full_prompt}]
            }],
            "generationConfig": {
                "temperature": self.temperature,
                "maxOutputTokens": self.max_tokens
            }
        }

        try:
            response = requests.post(url, json=payload, headers={"Content-Type": "application/json"}, timeout=30)
            response.raise_for_status()

            data = response.json()
            # Extract the text from the nested Gemini JSON response
            try:
                return data["candidates"][0]["content"]["parts"][0]["text"]
            except (KeyError, IndexError):
                return ""

        except Exception as e:
            raise Exception(f"Gemini API Error: {str(e)}")

    def stream_text(self, prompt: str, system_prompt: Optional[str] = None) -> Iterator[str]:
        """
        Stream text using Gemini (simplified to non-streaming for stability first).
        """
        # For the initial setup, simply yield the full result to avoid async complexities
        yield self.generate_text(prompt, system_prompt)
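The nested extraction in `generate_text` can be exercised against a canned payload without touching the network. A sketch (`extract_text` mirrors the inner try/except above; the sample dict is illustrative, trimmed to the fields the parser reads, not a real API response):

```python
def extract_text(data: dict) -> str:
    """Pull the generated text out of a Gemini generateContent-style response,
    returning an empty string when the expected structure is absent."""
    try:
        return data["candidates"][0]["content"]["parts"][0]["text"]
    except (KeyError, IndexError):
        return ""

# Minimal shape of a successful generateContent response.
sample = {"candidates": [{"content": {"parts": [{"text": "Hello from Gemini"}]}}]}
```

Note that safety-blocked responses arrive with an empty `candidates` list, which this parser maps to `""` rather than an exception.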
