Commit 5d15a71

Merge pull request #32 from SecrinLabs/ronaldo

chore: update default Ollama completion model to qwen2.5-coder and em…

2 parents 4f49f2a + 7174e38

6 files changed, 29 additions & 29 deletions

README.md

Lines changed: 11 additions & 11 deletions

````diff
@@ -88,8 +88,8 @@ poetry run python scripts/verify.py
 
 ```bash
 # Install Ollama → https://ollama.com
-ollama pull llama3              # completion model
-ollama pull nomic-embed-text    # embedding model (768d)
+ollama pull qwen2.5-coder:0.5b  # completion model
+ollama pull mxbai-embed-large   # embedding model (768d)
 ```
 
 ### Cloud LLM
@@ -119,8 +119,8 @@ then writes `.secrin.yml`. Commit this file so teammates share the same defaults
 ```
 ? LLM provider (ollama / openai / anthropic): ollama
 ? Ollama host [http://localhost:11434]:
-? Ollama model [llama3]:
-? Embed model [nomic-embed-text]:
+? Ollama model [qwen2.5-coder:0.5b]:
+? Embed model [mxbai-embed-large]:
 ? Neo4j URI [bolt://localhost:7687]:
 ? Neo4j username [neo4j]:
 ? Neo4j password:
@@ -130,7 +130,7 @@ then writes `.secrin.yml`. Commit this file so teammates share the same defaults
 ✓ Created .secrin.yml
 ✓ Updated .env (NEO4J_PASS)
 ✓ Connected to Neo4j (0 nodes found)
-✓ Connected to Ollama (llama3 available)
+✓ Connected to Ollama (qwen2.5-coder:0.5b available)
 
 Run `secrin graph build --repo <url-or-path>` to index your repo.
 ```
@@ -154,8 +154,8 @@ secrin analyze
 Runs in two phases:
 
 ```
-⠋ [Phase 1] Summarizing  ollama / llama3  Function 48  Class 12  File 21
-⠋ [Phase 2] Embedding  nomic-embed-text  Function 48  Class 12  File 21
+⠋ [Phase 1] Summarizing  ollama / qwen2.5-coder:0.5b  Function 48  Class 12  File 21
+⠋ [Phase 2] Embedding  mxbai-embed-large  Function 48  Class 12  File 21
 
 ╭─ Analysis Results ────────────────────────╮
 │ Label      Summarized      Embedded       │
@@ -220,7 +220,7 @@ Coverage 98% summarized · 96% embedded
 Last analyzed  2 hours ago (main @ a3f9c12)
 Neo4j          ● connected  bolt://localhost:7687
 Wiki           docs/wiki/ · 47 pages
-.secrin.yml    ✓ ollama / llama3
+.secrin.yml    ✓ ollama / qwen2.5-coder:0.5b
 ──────────────────────────────────────────────────────────────
 ```
 
@@ -233,8 +233,8 @@ Wiki docs/wiki/ · 47 pages
 ```yaml
 llm:
   provider: ollama            # ollama | openai | anthropic
-  model: llama3
-  embed_model: nomic-embed-text
+  model: qwen2.5-coder:0.5b
+  embed_model: mxbai-embed-large
   base_url: http://localhost:11434   # Ollama only; omitted for cloud providers
 
 neo4j:
@@ -260,7 +260,7 @@ ANTHROPIC_API_KEY=sk-ant-...
 # Optional overrides (if you don't use .secrin.yml)
 LLM_PROVIDER=ollama
 LLM_MODEL_OLLAMA=qwen2.5-coder:0.5b
-OLLAMA_EMBEDDING_MODEL=nomic-embed-text
+OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
 EMBEDDING_DIMENSION=768
 ```
 
````

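The `.env` block above labels its variables "optional overrides (if you don't use `.secrin.yml`)". A minimal sketch of one plausible resolution order under that reading — `.secrin.yml` value first, then the environment variable, then the built-in default. The `effective_setting` helper and the exact precedence are assumptions for illustration, not part of this diff:

```python
import os

# Hypothetical mapping from .secrin.yml fields to their .env override names
_ENV_KEYS = {
    "model": "LLM_MODEL_OLLAMA",
    "embed_model": "OLLAMA_EMBEDDING_MODEL",
}

def effective_setting(yml: dict, field: str, default: str) -> str:
    """Resolve one LLM setting: .secrin.yml > environment > built-in default."""
    if field in yml:
        return yml[field]
    env_val = os.environ.get(_ENV_KEYS.get(field, ""), "")
    return env_val or default
```

With an empty `.secrin.yml` and no environment override, this falls back to the new default, `qwen2.5-coder:0.5b`.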
packages/arc42gen/providers/ollama_provider.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -23,10 +23,10 @@ class OllamaProvider(BaseLLMProvider):
     No API key required - models run on the local machine.
     """
 
-    DEFAULT_MODEL = "llama3"
+    DEFAULT_MODEL = "qwen2.5-coder:0.5b"
     MODEL_ALIASES = {
         "codellama": "codellama:7b",
-        "llama3": "qwen2.5-coder:0.5b",
+        "qwen2.5-coder:0.5b": "qwen2.5-coder:0.5b",
         "mistral": "mistral:7b",
     }
```

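The `MODEL_ALIASES` table above maps short names to concrete Ollama tags. A small sketch of how such a table is typically consulted (the `resolve_model` helper is hypothetical, not shown in the diff). Note that this commit removed the `"llama3"` alias key, so that name would now pass through unmapped:

```python
from typing import Optional

DEFAULT_MODEL = "qwen2.5-coder:0.5b"
MODEL_ALIASES = {
    "codellama": "codellama:7b",
    "qwen2.5-coder:0.5b": "qwen2.5-coder:0.5b",
    "mistral": "mistral:7b",
}

def resolve_model(requested: Optional[str] = None) -> str:
    """Fall back to DEFAULT_MODEL, then map short names through MODEL_ALIASES."""
    name = requested or DEFAULT_MODEL
    # Unknown names pass through unchanged ("llama3" after this commit).
    return MODEL_ALIASES.get(name, name)

print(resolve_model("codellama"))  # → codellama:7b
```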
packages/cli/agents/llm_client.py

Lines changed: 4 additions & 4 deletions

```diff
@@ -26,7 +26,7 @@
 ---------------------------------
 Anthropic does not expose an embeddings API. When provider="anthropic",
 embed() uses Ollama (base_url + embed_model). Ensure Ollama is running
-and embed_model is set to a local embedding model (e.g. nomic-embed-text).
+and embed_model is set to a local embedding model (e.g. mxbai-embed-large).
 """
 from __future__ import annotations
 
@@ -43,15 +43,15 @@
 # ---------------------------------------------------------------------------
 
 _DEFAULT_MODELS: dict[str, str] = {
-    "ollama": "llama3",
+    "ollama": "qwen2.5-coder:0.5b",
     "openai": "gpt-4o-mini",
     "anthropic": "claude-haiku-4-5-20251001",
 }
 
 _DEFAULT_EMBED_MODELS: dict[str, str] = {
-    "ollama": "nomic-embed-text",
+    "ollama": "mxbai-embed-large",
     "openai": "text-embedding-3-small",
-    "anthropic": "nomic-embed-text",  # via Ollama
+    "anthropic": "mxbai-embed-large",  # via Ollama
 }
```

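The two tables above pair each provider with a completion and an embedding default, with Anthropic borrowing Ollama for embeddings because it ships no embeddings API. A sketch of a lookup over those tables (`default_models` is a hypothetical accessor, not part of the module):

```python
# Defaults as they appear in llm_client.py after this commit
_DEFAULT_MODELS = {
    "ollama": "qwen2.5-coder:0.5b",
    "openai": "gpt-4o-mini",
    "anthropic": "claude-haiku-4-5-20251001",
}
_DEFAULT_EMBED_MODELS = {
    "ollama": "mxbai-embed-large",
    "openai": "text-embedding-3-small",
    "anthropic": "mxbai-embed-large",  # embeddings served by Ollama
}

def default_models(provider: str) -> tuple[str, str]:
    """Return (completion model, embed model) for a provider; KeyError if unknown."""
    return _DEFAULT_MODELS[provider], _DEFAULT_EMBED_MODELS[provider]
```

So even with `provider="anthropic"`, the embed side resolves to a local Ollama model.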
packages/cli/commands/init.py

Lines changed: 5 additions & 5 deletions

```diff
@@ -9,8 +9,8 @@
 ----------------
 ? LLM provider (ollama / openai / anthropic): ollama
 ? Ollama host [http://localhost:11434]:
-? Ollama model [llama3]:
-? Embed model [nomic-embed-text]:
+? Ollama model [qwen2.5-coder:0.5b]:
+? Embed model [mxbai-embed-large]:
 ? Neo4j URI [bolt://localhost:7687]:
 ? Neo4j username [neo4j]:
 ? Neo4j password:
@@ -20,7 +20,7 @@
 ✓ Created .secrin.yml
 ✓ Updated .env (NEO4J_PASS)
 ✓ Connected to Neo4j (42 nodes found)
-✓ Connected to Ollama (llama3 available)
+✓ Connected to Ollama (qwen2.5-coder:0.5b available)
 
 Run `secrin graph build --repo <url-or-path>` to index your repo.
 
@@ -52,14 +52,14 @@
 
 # Provider → default completion model
 _COMPLETION_DEFAULTS: dict[str, str] = {
-    "ollama": "llama3",
+    "ollama": "qwen2.5-coder:0.5b",
     "openai": "gpt-4o-mini",
     "anthropic": "claude-haiku-4-5-20251001",
 }
 
 # Provider → default embed model
 _EMBED_DEFAULTS: dict[str, str] = {
-    "ollama": "nomic-embed-text",
+    "ollama": "mxbai-embed-large",
     "openai": "text-embedding-3-small",
     "anthropic": "nomic-embed-text",  # via Ollama
 }
```

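One detail worth noticing in the hunk above: the commit updates `_EMBED_DEFAULTS["ollama"]` but leaves `"anthropic": "nomic-embed-text"` untouched, while the same table in `llm_client.py` now maps `anthropic` to `mxbai-embed-large`. A quick sketch of the divergence, with both tables reproduced from the diffs above:

```python
# Embed defaults after this commit, as each module defines them
LLM_CLIENT_EMBED = {
    "ollama": "mxbai-embed-large",
    "openai": "text-embedding-3-small",
    "anthropic": "mxbai-embed-large",
}
INIT_EMBED = {
    "ollama": "mxbai-embed-large",
    "openai": "text-embedding-3-small",
    "anthropic": "nomic-embed-text",
}

# Providers whose default embed model differs between the two modules
divergent = {k for k in LLM_CLIENT_EMBED if LLM_CLIENT_EMBED[k] != INIT_EMBED[k]}
print(divergent)  # → {'anthropic'}
```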
packages/cli/commands/status.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@
 Last analyzed  2 hours ago (main @ a3f9c12)
 Neo4j          ● connected  bolt://localhost:7687
 Wiki           docs/wiki/ · 47 pages
-.secrin.yml    ✓ ollama / llama3
+.secrin.yml    ✓ ollama / qwen2.5-coder:0.5b
 ──────────────────────────────────────────────────────────────
 """
 from __future__ import annotations
```

packages/cli/core/secrin_yml.py

Lines changed: 6 additions & 6 deletions

```diff
@@ -9,8 +9,8 @@
 ------
 llm:
   provider: ollama            # ollama | openai | anthropic
-  model: llama3
-  embed_model: nomic-embed-text
+  model: qwen2.5-coder:0.5b
+  embed_model: mxbai-embed-large
   base_url: http://localhost:11434   # Ollama only
 
 neo4j:
@@ -38,8 +38,8 @@
 class SecrinYml:
     # LLM
     provider: str = "ollama"
-    model: str = "llama3"
-    embed_model: str = "nomic-embed-text"
+    model: str = "qwen2.5-coder:0.5b"
+    embed_model: str = "mxbai-embed-large"
     base_url: str = "http://localhost:11434"  # Ollama host
 
     # Neo4j
@@ -72,8 +72,8 @@ def load(cwd: Path) -> Optional[SecrinYml]:
 
     return SecrinYml(
         provider    = llm.get("provider", "ollama"),
-        model       = llm.get("model", "llama3"),
-        embed_model = llm.get("embed_model", "nomic-embed-text"),
+        model       = llm.get("model", "qwen2.5-coder:0.5b"),
+        embed_model = llm.get("embed_model", "mxbai-embed-large"),
         base_url    = llm.get("base_url", "http://localhost:11434"),
         neo4j_uri   = neo4j.get("uri", "bolt://localhost:7687"),
         neo4j_user  = neo4j.get("username", "neo4j"),
```

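The `load()` hunk above substitutes the new defaults field by field when a key is absent from `.secrin.yml`. A condensed sketch of that fallback pattern, assuming the YAML has already been parsed into a plain dict (`load_llm_section` is illustrative, not the module's actual API):

```python
def load_llm_section(raw: dict) -> dict:
    """Extract the llm: section, filling gaps with the post-commit defaults."""
    llm = raw.get("llm", {})
    return {
        "provider":    llm.get("provider", "ollama"),
        "model":       llm.get("model", "qwen2.5-coder:0.5b"),
        "embed_model": llm.get("embed_model", "mxbai-embed-large"),
        "base_url":    llm.get("base_url", "http://localhost:11434"),
    }
```

A file that sets only `model` keeps that value while picking up the new embedding default for the missing key.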