fix: [ENG-2484] validate openai-compatible URL and defer provider activation #549
RyanNg1403 wants to merge 2 commits into main from fix/ENG-2484
Conversation
…ivation
Removes the `llama3` placeholder default that silently corrupted config when users couldn't reach the model picker, validates the user-provided base URL against `/models` before persisting any state, and stops flipping the active provider during connect when no active model would be set — that intermediate state was unmounting the slash command's flow via the welcome view, hanging the REPL after the user picked a model.
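For orientation, here is a minimal sketch of what "validate the base URL against `/models` before persisting any state" could look like. The function name, header handling, and error wording are illustrative assumptions, not the code in this PR (the actual validator is injected into the handler).

```ts
// Hypothetical probe of an OpenAI-compatible endpoint's /models listing,
// run before any keychain or config write. Names and error format are
// illustrative only.
async function validateOpenAiCompatibleBaseUrl(
  baseUrl: string,
  apiKey?: string,
): Promise<{ ok: true } | { ok: false; error: string }> {
  try {
    const res = await fetch(`${baseUrl.replace(/\/+$/, "")}/models`, {
      headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : undefined,
    });
    if (!res.ok) {
      return { ok: false, error: `API error: ${res.status} ${res.statusText}` };
    }
    return { ok: true }; // endpoint is reachable and answers /models
  } catch (err) {
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```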
Claude finished @RyanNg1403's task in 3m 51s
PR Review: fix/ENG-2484 — validate openai-compatible URL and defer provider activation
Overall Assessment: 4 / 5

The root cause analysis is precise and the fix is well-scoped. The three-part solution (remove placeholder, validate before persist, defer activation) correctly addresses all reported symptoms without touching unrelated flows. Test coverage is solid for the new invariants. A few minor issues noted below.

What's well done
- Validation-before-persist ordering (…)
- Injected validator (…)

Issues found
- Redundant …
- Missing CLI guard for absent …
- Missing test: …

Security
No concerns. The error message includes the user-supplied base URL …
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d311e45c0d
…le setup
Falls back to stored `baseUrl`/`apiKey` on partial reconfigure so an endpoint that requires auth no longer 401s when the user keeps the same key. Rejects openai-compatible connects that have no effective `baseUrl` (request + stored) instead of silently storing a useless config entry. Short-circuits the `existingActiveModel` lookup when the registry already supplies a `defaultModel`. Tightens the TUI auto-route to openai-compatible specifically, clears `selectedProvider` on connect failure for symmetry with sibling error paths, and adds a `model:setActive` test pinning the activate-on-pick chain that the deferred activation design depends on.
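A rough sketch of the partial-reconfigure fallback this commit describes, under assumed names (`ProviderConfig`, `resolveEffectiveConfig`); it is not the PR's actual shape, only the merge-then-reject logic.

```ts
// Illustrative only: merge the incoming request with the stored entry so a
// kept API key keeps authenticating, and reject when no effective baseUrl
// exists instead of storing a useless config entry.
interface ProviderConfig {
  baseUrl?: string;
  apiKey?: string;
}

function resolveEffectiveConfig(
  request: ProviderConfig,
  stored: ProviderConfig | undefined,
): ProviderConfig | null {
  const baseUrl = request.baseUrl ?? stored?.baseUrl;
  if (!baseUrl) return null; // no endpoint from either source: reject the connect
  // Fall back to the stored key on a partial reconfigure (fixes the 401 case).
  return { baseUrl, apiKey: request.apiKey ?? stored?.apiKey };
}
```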
Claude finished @RyanNg1403's task in 2m 44s
PR Review: fix/ENG-2484 — validate openai-compatible URL and defer provider activation

Overall Assessment: 4.5 / 5

The refactor commit (…) …

What the latest refactor fixed ✅
All six prior-review issues were resolved: …

Remaining nit: …
Summary
- Connecting an openai-compatible provider silently set `llama3` as the active model — a placeholder that doesn't exist on most users' Ollama/LM Studio setups. A bad base URL was accepted as "connected" with no validation, and the resulting half-configured state could hang the REPL after a disconnect/reconnect cycle.
- The fix defers provider activation (never setting `activeProvider` before the user has a usable model).
- Removes `defaultModel: 'llama3'` from the `openai-compatible` registry entry (no sensible canonical default exists).
- Adds `/models` validation in `setupConnect` for openai-compatible — bad URL → `{success: false}` with a friendly message; no keychain or config write.
- Adds a `setAsActive?: boolean` option to `connectProvider`. `setupConnect` now skips `withActiveProvider` when no `activeModel` will be set, preventing the welcome-page-mid-flow detour that unmounted the slash command's `ProviderFlow` and hung the REPL (sketched after this list).
- Exposes `activeModel` on `ProviderDTO` so the TUI can route a connected-but-no-model provider to the model picker directly (only when the provider is current — non-current half-configured providers still reach the actions menu so Disconnect stays accessible).
- `handleApiKeySuccess` now checks `response.success` and surfaces the server's error inline at the provider list (previously swallowed silently).
- Out of scope: the `ApiKeyDialog` validation contract for non-openai-compatible providers, the picker UI itself, and the `model:setActive` handler.
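A minimal sketch of the deferred-activation contract, assuming simplified stand-ins (`registry`, `persistConnection`, `withActiveProvider`, `getExistingActiveModel`); it shows the intent of `setAsActive`, not the PR's exact API.

```ts
// setAsActive defaults to true so existing callers (e.g. the OAuth callback
// handler) keep their current behavior. All shapes below are stand-ins.
type ModelId = string;

declare const registry: Map<string, { defaultModel?: ModelId }>;
declare function getExistingActiveModel(providerId: string): Promise<ModelId | undefined>;
declare function persistConnection(providerId: string, cfg: { baseUrl: string }): Promise<void>;
declare function withActiveProvider(providerId: string, model: ModelId): Promise<void>;

async function connectProvider(
  providerId: string,
  cfg: { baseUrl: string; activeModel?: ModelId },
  opts: { setAsActive?: boolean } = {},
): Promise<void> {
  await persistConnection(providerId, { baseUrl: cfg.baseUrl });
  if ((opts.setAsActive ?? true) && cfg.activeModel !== undefined) {
    await withActiveProvider(providerId, cfg.activeModel);
  }
}

// setupConnect only requests activation when a model will actually end up set;
// openai-compatible no longer has a placeholder defaultModel, so a fresh
// connect leaves the provider connected-but-not-active until the user picks.
async function setupConnect(providerId: string, baseUrl: string) {
  const defaultModel = registry.get(providerId)?.defaultModel;
  const activeModel = defaultModel ?? (await getExistingActiveModel(providerId));
  await connectProvider(providerId, { baseUrl, activeModel }, { setAsActive: activeModel !== undefined });
  return { success: true as const, activeModel };
}
```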
Type of change

Scope (select all touched areas)
Linked issues
Root cause (bug fixes only, otherwise write N/A)

`setupConnect` always wrote `activeModel: provider?.defaultModel` and called `withActiveProvider` unconditionally. For openai-compatible, `defaultModel: 'llama3'` was a placeholder that almost never matched the user's actual server. Combined with no URL validation, this meant any connect attempt — successful or not — committed `{activeProvider: 'openai-compatible', activeModel: 'llama3'}` to disk. The picker step in the TUI either fetched empty models from a wrong URL or showed a model the user hadn't picked.

Removing the placeholder without also gating `withActiveProvider` introduced a different bug: the half-configured state (`activeProvider` set, `activeModel` undefined) was interpreted by `useAppViewMode` as `'config-provider'`, causing the welcome page to mount mid-flow and unmount the slash command's `ProviderFlow` before its `onComplete` could fire — leaving the REPL stuck. With the placeholder in place that half-configured state never reached `useAppViewMode`, so the welcome-page-mid-flow detour was only possible after the placeholder was removed. The first iteration of this fix removed the placeholder without addressing the activation contract, exposing the second bug deterministically. Tests now pin both behaviors.
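To make the detour concrete, here is a simplified sketch of the kind of branching `useAppViewMode` performs; only `'config-provider'` is named in this PR, and the other mode names and config shape are illustrative assumptions.

```ts
// Simplified view-mode decision; placeholder modes and config shape.
type ViewMode = "welcome" | "config-provider" | "repl";

function resolveViewMode(cfg: { activeProvider?: string; activeModel?: string }): ViewMode {
  if (!cfg.activeProvider) return "welcome";
  // Half-configured: provider set, no model yet. Hitting this mid-connect is
  // what mounted the welcome page and unmounted the slash command's ProviderFlow.
  if (!cfg.activeModel) return "config-provider";
  return "repl";
}
```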
Test plan

- `test/unit/core/domain/entities/provider-registry.test.ts` — pins `openai-compatible.defaultModel === undefined`.
- `test/unit/infra/transport/handlers/provider-handler.test.ts` — 8 new tests covering URL validation, no pre-write of `activeModel`, and `setAsActive` decisions across openai-compatible (no-existing-model and existing-model paths) and non-openai-compatible providers (the bad-URL case is sketched after this list):
  - bad URL → `{success: false}`, no keychain write, no config write, no broadcast;
  - good URL → `{success: true}`, no `activeModel` placeholder, `setAsActive` reflects whether a model will end up set;
  - existing `activeModel` → activation re-enabled;
  - non-openai-compatible providers → activation unchanged (they still supply a `defaultModel`).
- Manual: `activeProvider` correctly remained at the prior provider; openai-compatible was connected without `activeModel`; no welcome-page detour.
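The bad-URL case referenced above might look roughly like this vitest-style test; `createProviderHandler` and the dependency names are stand-ins, not the repo's actual test utilities.

```ts
import { describe, it, expect, vi } from "vitest";

// Stand-in for the handler factory under test; the real wiring differs.
declare function createProviderHandler(deps: {
  config: { write: (...args: unknown[]) => void };
  keychain: { set: (...args: unknown[]) => void };
  validateBaseUrl: (baseUrl: string) => Promise<{ ok: boolean; error?: string }>;
}): {
  connect(req: { provider: string; baseUrl: string }): Promise<{ success: boolean; error?: string }>;
};

describe("providers:connect (openai-compatible)", () => {
  it("rejects an unreachable base URL without touching keychain or config", async () => {
    const config = { write: vi.fn() };
    const keychain = { set: vi.fn() };
    const validateBaseUrl = vi.fn().mockResolvedValue({ ok: false, error: "API error: connect ECONNREFUSED" });

    const handler = createProviderHandler({ config, keychain, validateBaseUrl });
    const result = await handler.connect({
      provider: "openai-compatible",
      baseUrl: "http://localhost:1/nope",
    });

    expect(result.success).toBe(false);
    expect(config.write).not.toHaveBeenCalled();
    expect(keychain.set).not.toHaveBeenCalled();
  });
});
```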
User-visible changes

- A bad base URL now fails the connect with a clear error (`Could not reach OpenAI-compatible endpoint at <URL>: <details>`) instead of silently committing a broken config.
- After a successful connect, the model picker shows the endpoint's real `/models` listing, and the user must select one for the provider to become active.
- When a half-configured provider is opened from `/providers` (non-current case), the actions menu is shown with Disconnect reachable.
Evidence

- Full test suite passes (`npm test`); typecheck clean; build succeeds.
- `brv providers connect openai-compatible --base-url http://localhost:1/nope` returns `{"success":false,"error":"Could not reach OpenAI-compatible endpoint at http://localhost:1/nope: API error: "}` with no state pollution. Good-URL connect produces config without `activeModel`.
- Manual repro of the original hang: `activeProvider` preserved during connect, slash command flow stays mounted, no hang.
Checklist

- Tests pass (`npm test`) — 6881 passing
- Lint (`npm run lint`) — full lint blocked by a pre-existing submodule config issue (`@workspace/typescript-config/react-library.json` not found in `packages/byterover-packages/ui/`); confirmed by stashing changes. Targeted lint on changed files is clean.
- Typecheck passes (`npm run typecheck`)
- Build succeeds (`npm run build`)
- Branch is up to date with `main`
Risks and mitigations

- Other callers of `connectProvider` (currently only the OAuth callback handler) might rely on the implicit `withActiveProvider`. Mitigation: `setAsActive` defaults to `true`, so callers that don't pass it preserve existing behavior. The OAuth handler always supplies a `defaultModel`, so its `setAsActive` would be `true` regardless. Verified: existing OAuth tests still pass.
- Users who abandon setup mid-flow (half-configured `activeProvider: 'openai-compatible'` with no `activeModel`) land on the welcome page on first launch. Mitigation: the TUI auto-route in `handleSelect` jumps to the model picker when the user picks the current half-configured provider, so they can finish setup in one screen.
- An endpoint that returns `200 OK` to `/models` but isn't actually OpenAI-compatible (returns garbage) passes validation, and the picker may show empty/garbage results. Mitigation: explicitly disclosed as a narrow edge case; the user can recover via Disconnect or `brv providers disconnect`. Out of scope for this ticket.
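A rough shape of the auto-route described in the second mitigation, assuming a minimal `ProviderDTO`; the view names and the fields other than `activeModel` are illustrative, not the PR's actual types.

```ts
// Illustrative routing decision when the user picks a provider in the TUI.
interface ProviderDTO {
  id: string;
  connected: boolean;
  isActive: boolean;      // currently the active provider
  activeModel?: string;   // exposed by this PR
}

function handleSelect(provider: ProviderDTO): "model-picker" | "actions-menu" {
  // Current but half-configured openai-compatible provider: finish setup in one screen.
  if (
    provider.id === "openai-compatible" &&
    provider.isActive &&
    provider.connected &&
    !provider.activeModel
  ) {
    return "model-picker";
  }
  // Everything else (including non-current half-configured providers) gets the
  // actions menu, so Disconnect stays reachable.
  return "actions-menu";
}
```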