
Commit b09a3c5 (merge of parents f28477a and 27a2d63)

Merge pull request #26 from aditya452007/feat/hybrid-setup-gemini

feat: Add Gemini Support + Hybrid Setup Docs + Neo4j Aura Fixes

6 files changed: 314 additions, 9 deletions

.env.example (18 additions, 7 deletions)

```diff
@@ -10,7 +10,11 @@ LOG_LEVEL=INFO # DEBUG | INFO | WARNING | ERROR | CRITICAL
 # Neo4j Database
 # =============================================================================

+# URI Formats:
+#   Local: bolt://localhost:7687
+#   Cloud (Aura): neo4j+s://<instance-id>.databases.neo4j.io
 NEO4J_URI=bolt://localhost:7687
+
 NEO4J_USER=neo4j
 NEO4J_PASS=your_password_here
 NEO4J_DB=neo4j
@@ -26,6 +30,8 @@ NEO4J_CONNECTION_TIMEOUT=30

 # Provider: openai | ollama | sentence_transformer
 EMBEDDING_PROVIDER=ollama
+
+# Note: Dimension must match the specific model used (e.g., 1024 for mxbai, 384 for MiniLM, 1536 for OpenAI)
 EMBEDDING_DIMENSION=1024
 EMBEDDING_BATCH_SIZE=100
 EMBEDDING_CACHE_TTL=3600
@@ -41,25 +47,30 @@ OLLAMA_BASE_URL=http://localhost:11434
 OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
 OLLAMA_TIMEOUT=120

-# Sentence Transformer Settings
+# Sentence Transformer Settings (Local CPU/GPU)
+# Common model: sentence-transformers/all-MiniLM-L6-v2 (Requires EMBEDDING_DIMENSION=384)
 SENTENCE_TRANSFORMER_MODEL=sentence-transformers/all-MiniLM-L6-v2
 SENTENCE_TRANSFORMER_DEVICE=cpu # cpu | cuda | mps

 # =============================================================================
 # LLM Configuration (for Question Answering)
 # =============================================================================

-# Provider: ollama
-LLM_PROVIDER=ollama
-
-# Ollama LLM Settings
-LLM_MODEL_OLLAMA=llama3.2
+# Provider: ollama | gemini
+LLM_PROVIDER=ollama

 # LLM General Settings
 LLM_TIMEOUT=120
 LLM_MAX_TOKENS=1000
 LLM_TEMPERATURE=0.7

+# Ollama LLM Settings
+LLM_MODEL_OLLAMA=llama3.2
+
+# Google Gemini Settings
+GEMINI_API_KEY=your_gemini_api_key_here
+LLM_MODEL_GEMINI=gemini-2.5-flash
+
 # =============================================================================
 # Vector Search Configuration
 # =============================================================================
@@ -121,4 +132,4 @@ FEATURE_EMBEDDINGS=true
 FEATURE_VECTOR_SEARCH=true
 FEATURE_HYBRID_SEARCH=true
 FEATURE_BATCH_PROCESSING=true
-FEATURE_CACHING=false
+FEATURE_CACHING=false
```
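The dimension note in the diff above is the most common misconfiguration: `EMBEDDING_DIMENSION` must match the output size of the chosen model. A sketch of the kind of sanity check that catches the mismatch early (hypothetical helper, not part of the repo; dimensions taken from the comment above):

```python
# Known output sizes for the models named in .env.example.
KNOWN_DIMENSIONS = {
    "mxbai-embed-large": 1024,                      # Ollama default
    "sentence-transformers/all-MiniLM-L6-v2": 384,  # local CPU/GPU option
    "text-embedding-3-small": 1536,                 # OpenAI example
}

def check_dimension(model: str, configured: int) -> bool:
    """True if EMBEDDING_DIMENSION matches the model's known output size."""
    expected = KNOWN_DIMENSIONS.get(model)
    # Unknown models pass; we can only verify the models we know about.
    return expected is None or expected == configured

# A mismatch like this silently breaks vector search at query time:
assert not check_dimension("sentence-transformers/all-MiniLM-L6-v2", 1024)
```

Running such a check during startup fails fast instead of producing empty search results after ingestion.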

SETUP_GUIDE.md (182 additions, 0 deletions, new file)

# 🚀 Secrin Setup Guide

Welcome to Secrin! This guide covers two ways to run the project:

1. **☁️ Hybrid Mode (Recommended):** Cloud DB + Gemini API. Fast, lightweight, runs on any laptop.
2. **🔒 Local Mode:** Local Docker DB + Ollama. Privacy-focused, requires good hardware (16GB+ RAM).

---

## 🛠️ 1. Prerequisites (Check these first!)

Open your terminal and check these versions. If anything is missing, install it.

- **Git:** `git --version`
- **Python (3.10+):** `python --version`
- **Node.js (v20+):** `node -v`
- **Poetry:** `pip install poetry`
- **(Optional) Docker:** Required only if you choose Local Mode.
- **(Optional) Ollama:** Required only if you choose Local Mode.

---

## 🔑 2. Credentials & Services

Choose **Option A** (easier) or **Option B** (private).

### Option A — Hybrid (Cloud DB + Gemini)
1. **Database:** Go to https://console.neo4j.io
   - Create a **Free Instance**.
   - ⚠️ **Copy the password immediately.** You won't see it again.
   - Copy the URI (example: `neo4j+s://abc12345.databases.neo4j.io`).
2. **AI Model:** Get a key from Google AI Studio: https://aistudio.google.com/app/apikey

### Option B — Fully Local (Docker + Ollama)
1. **Database:** Run this Docker command to start Neo4j locally:
   ```bash
   docker run --restart always --publish=7474:7474 --publish=7687:7687 --env NEO4J_AUTH=neo4j/password neo4j:latest
   ```
2. **AI Model:** Install Ollama and pull the model:
   ```bash
   ollama pull llama3.2
   ollama serve
   ```

---

## 🖥️ 3. Backend Setup (Terminal 1)

### Step 3.1 — Install dependencies
```bash
git clone https://github.com/SecrinLabs/secrin.git
cd secrin
poetry install
```

### Step 3.2 — Configure environment (.env)
Create the `.env` file from the example:

```bash
cp .env.example .env
```

Open `.env` and paste the configuration matching your choice from Step 2.

Option A (Hybrid / Gemini) — example `.env`:

```bash
# --- Database (Neo4j Aura) ---
NEO4J_URI=neo4j+s://<YOUR_INSTANCE_ID>.databases.neo4j.io
NEO4J_USER=neo4j
NEO4J_PASS=<YOUR_SAVED_PASSWORD>
NEO4J_DB=neo4j

# --- Embedding (Local CPU/GPU) ---
EMBEDDING_PROVIDER=sentence_transformer
SENTENCE_TRANSFORMER_MODEL=sentence-transformers/all-MiniLM-L6-v2
EMBEDDING_DIMENSION=384

# --- LLM (Gemini) ---
LLM_PROVIDER=gemini
GEMINI_API_KEY=<YOUR_GEMINI_KEY>
LLM_MODEL_GEMINI=gemini-2.5-flash
```

Option B (Local / Ollama) — example `.env`:

```bash
# --- Database (Local Docker) ---
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASS=password
NEO4J_DB=neo4j

# --- Embedding (Ollama) ---
EMBEDDING_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
EMBEDDING_DIMENSION=1024

# --- LLM (Ollama) ---
LLM_PROVIDER=ollama
LLM_MODEL_OLLAMA=llama3.2
```

### Step 3.3 — Launch backend
```bash
# 1. Verify connection
poetry run setup

# 2. Start server
poetry run server
```
Success: you should see Uvicorn running on http://0.0.0.0:8000.

---

## 🎨 4. Frontend Setup (Terminal 2)

Open a new terminal window:
```bash
cd secrin/apps/web
```

### Step 4.1 — Install dependencies
The project prefers pnpm. Try these methods in order.

Method 1 — Standard:
```bash
corepack enable
pnpm install
```

Method 2 — if you get "Permission Denied" on Windows, install pnpm via PowerShell:

```powershell
iwr https://get.pnpm.io/install.ps1 -useb | iex
```
Then retry Method 1.

Method 3 — fall back to npm:
```bash
npm install
```

### Step 4.2 — Start UI
```bash
cp .env.example .env
pnpm dev  # or 'npm run dev' if you used Method 3
```
Success: open http://localhost:3000

---

## 🚀 5. First Run & Ingestion
- The graph will be blank on first login. This is normal.
- Log in using your Neo4j credentials.
- Hybrid users: if `neo4j+s://` fails to connect, try `bolt+s://`.

Ingest data:
1. Go to "Integrations" in the sidebar.
2. Paste a GitHub URL (example: `https://github.com/SecrinLabs/secrin`).
3. Click "Ingest".
4. Watch the backend terminal — it will clone the repo and embed it.
5. When the backend terminal says "Finished", open the Graph tab to view your data.

---

## ❓ Troubleshooting — Common Errors

- "No routing servers available"
  1. Check whether the Neo4j Aura instance is "Paused" in the console; resume it.
  2. Try `bolt+s://` instead of `neo4j+s://`.
  3. Try a mobile hotspot (some Wi‑Fi networks block port 7687).

- "Database does not exist"
  - The Neo4j Aura Free tier only allows the database name `neo4j`. Check your `.env`.

- "Command pnpm not found"
  - Restart your terminal after installing it. If it still fails, run this in PowerShell:
  ```powershell
  $env:PATH = "$env:LOCALAPPDATA\pnpm;$env:PATH"
  ```

- "Unsupported LLM provider: gemini"
  - You are running an old version. `git pull` the latest changes or check `llm_factory.py`.
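The `bolt+s://` fallback mentioned above is a pure scheme rewrite; a hypothetical helper (not part of the repo) makes the mapping explicit:

```python
# Routing schemes and their direct-connection equivalents; the +s suffix
# (TLS with full certificate checks) is preserved by the rewrite.
_FALLBACKS = {"neo4j+s://": "bolt+s://", "neo4j://": "bolt://"}

def direct_uri(uri: str) -> str:
    """Rewrite a neo4j:// routing URI to its bolt:// direct form."""
    for routing, direct in _FALLBACKS.items():
        if uri.startswith(routing):
            return direct + uri[len(routing):]
    return uri  # already a direct (bolt) URI; leave it alone

assert direct_uri("neo4j+s://abc12345.databases.neo4j.io") == \
    "bolt+s://abc12345.databases.neo4j.io"
```

The `bolt` schemes skip the routing handshake on port 7687, which is why they can succeed when `neo4j+s://` does not.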

apps/web/components/Neo4jGraph.tsx (11 additions, 1 deletion)

```diff
@@ -92,11 +92,21 @@ export default function Neo4jGraph({
     // Dynamic import of neo4j-driver for client-side only
     const neo4j = (await import("neo4j-driver")).default;

+    // FIX: Check if the URL implies a secure connection (Cloud/Aura)
+    const isSecure =
+      neo4jUrl.startsWith('neo4j+s://') ||
+      neo4jUrl.startsWith('neo4j+ssc://') ||
+      neo4jUrl.startsWith('bolt+s://') ||
+      neo4jUrl.startsWith('bolt+ssc://');
+
     const driver = neo4j.driver(
       neo4jUrl,
       neo4j.auth.basic(username, password),
-      { encrypted: false }
+      // If it's a secure URL, don't pass any config (let the URL handle it).
+      // If it's local, explicitly turn off encryption.
+      isSecure ? {} : { encrypted: "ENCRYPTION_OFF" }
     );
+
     const session = driver.session({ database });

     const start = Date.now();
```
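The core of the fix is that a `+s`/`+ssc` scheme already carries the TLS settings, and the Neo4j drivers reject explicit encryption config combined with such a scheme. Sketched in Python as a hypothetical mirror of the TSX change (not code from the repo):

```python
# Secure URI schemes already encode TLS; driver config must stay empty for
# them. Plain bolt:// (local Docker) gets encryption explicitly disabled.
SECURE_SCHEMES = ("neo4j+s://", "neo4j+ssc://", "bolt+s://", "bolt+ssc://")

def driver_config(uri: str) -> dict:
    """Extra driver config appropriate for a given Neo4j URI."""
    if uri.startswith(SECURE_SCHEMES):  # str.startswith accepts a tuple
        return {}
    return {"encrypted": False}

assert driver_config("neo4j+s://abc12345.databases.neo4j.io") == {}
assert driver_config("bolt://localhost:7687") == {"encrypted": False}
```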

packages/config/settings.py (6 additions, 0 deletions)

```diff
@@ -100,6 +100,12 @@ class Settings(BaseSettings):
         description="Embedding cache TTL in seconds (if caching enabled)"
     )

+    # Google Gemini API Configuration
+    GEMINI_API_KEY: str = Field(
+        default="",
+        description="Gemini API key"
+    )
+
     # OpenAI Configuration
     OPENAI_API_KEY: str = Field(
         default="",
```
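With `default=""`, a missing key never blocks startup; it only matters once `LLM_PROVIDER=gemini` is selected. The precedence that pydantic's `BaseSettings` applies here (environment variable, or `.env` entry loaded into it, wins over the declared default) can be sketched with a hypothetical stdlib-only helper:

```python
import os

def resolve_setting(name: str, default: str = "", environ=None) -> str:
    """Environment value wins over the field's declared default."""
    env = os.environ if environ is None else environ
    return env.get(name, default)

assert resolve_setting("GEMINI_API_KEY", environ={}) == ""
assert resolve_setting("GEMINI_API_KEY", environ={"GEMINI_API_KEY": "k"}) == "k"
```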

packages/memory/factories/llm_factory.py (9 additions, 1 deletion)

```diff
@@ -6,6 +6,8 @@
 from typing import Optional
 from packages.memory.llm.base import BaseLLMProvider
 from packages.memory.llm.providers.ollama import OllamaProvider
+# --- ADDED: Import Gemini ---
+from packages.memory.llm.providers.gemini import GeminiProvider
 from packages.config import Settings


@@ -47,10 +49,16 @@ def create_provider(
                 temperature=temperature,
                 max_tokens=max_tokens
             )
+        elif provider.lower() == "gemini":
+            return GeminiProvider(
+                model=model,
+                temperature=temperature,
+                max_tokens=max_tokens
+            )
         else:
             raise ValueError(
                 f"Unsupported LLM provider: {provider}. "
-                f"Currently supported: ['ollama']"
+                f"Currently supported: ['ollama', 'gemini']"
             )

     @classmethod
```
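The elif chain grows with each provider, and the supported-list string in the error message must be updated by hand each time. A registry dict is a common alternative that keeps the two in sync; a minimal sketch with stand-in classes (the real providers live under `packages/memory/llm/providers`):

```python
# Stand-in provider classes for illustration only.
class OllamaProvider:
    def __init__(self, model, temperature, max_tokens):
        self.model = model

class GeminiProvider(OllamaProvider):
    pass

# Registry-based dispatch: adding a provider is one dict entry, and the
# error message derives the supported set from the registry itself.
_PROVIDERS = {"ollama": OllamaProvider, "gemini": GeminiProvider}

def create_provider(provider: str, model: str,
                    temperature: float = 0.7, max_tokens: int = 1000):
    cls = _PROVIDERS.get(provider.lower())
    if cls is None:
        raise ValueError(
            f"Unsupported LLM provider: {provider}. "
            f"Currently supported: {sorted(_PROVIDERS)}"
        )
    return cls(model, temperature=temperature, max_tokens=max_tokens)

assert isinstance(create_provider("gemini", "gemini-2.5-flash"), GeminiProvider)
```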
