fix(orchestrator): retry template fetch after fail #2526

Draft

dobrac wants to merge 2 commits into main from cursor/3c76a271

Conversation

@dobrac dobrac (Contributor) commented Apr 30, 2026

Evict template cache entries when their initial async fetch fails so transient GCS/rootfs errors do not poison the cache.

cursor Bot commented Apr 30, 2026

PR Summary

Medium Risk
Touches core template caching and async fetch behavior; mistakes could cause unexpected cache churn or extra fetch load, though changes are localized and backed by new tests.

Overview
Template caching now evicts a newly cached template if its asynchronous initial Fetch fails, preventing transient storage/rootfs errors from leaving a permanently broken entry and allowing subsequent requests to repopulate the cache. This changes storageTemplate.Fetch to return errors and adds coverage to ensure failed fetches are removed (while still sharing the in-flight fetch result until it completes) and that a later successful fetch is cached normally.
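
For orientation, here is a minimal, self-contained sketch of the caching pattern the summary describes: the entry is cached immediately so concurrent callers share the single in-flight fetch, and a failed initial fetch evicts it so the next request can repopulate the key. The names here (Cache, Template, GetOrStart) are hypothetical stand-ins; the repo's actual implementation uses ttlcache and is quoted in the review thread below.

```go
package sketch

import (
	"context"
	"sync"
)

// Template models an entry whose contents are fetched asynchronously.
type Template interface {
	Fetch(ctx context.Context) error
}

// Cache shares one in-flight fetch per key and evicts entries whose
// initial fetch fails, so a later request can repopulate them.
type Cache struct {
	mu    sync.Mutex
	items map[string]Template
}

func NewCache() *Cache {
	return &Cache{items: map[string]Template{}}
}

// GetOrStart returns an existing entry if present; otherwise it caches
// tmpl right away (so concurrent callers share the in-flight fetch)
// and runs the fetch in the background.
func (c *Cache) GetOrStart(ctx context.Context, key string, tmpl Template) Template {
	c.mu.Lock()
	if existing, ok := c.items[key]; ok {
		c.mu.Unlock()
		return existing
	}
	c.items[key] = tmpl
	c.mu.Unlock()

	go func() {
		if err := tmpl.Fetch(ctx); err != nil {
			c.mu.Lock()
			// Identity check: only evict if the key still maps to the
			// instance whose fetch failed, not a fresh replacement.
			if cur, ok := c.items[key]; ok && cur == tmpl {
				delete(c.items, key)
			}
			c.mu.Unlock()
		}
	}()
	return tmpl
}
```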

Reviewed by Cursor Bugbot for commit 26cfa8d. Bugbot is set up for automated code reviews on this repo.


@claude claude Bot left a comment


Review with inline comments

Comment thread packages/orchestrator/pkg/sandbox/template/cache.go Outdated
Comment thread packages/orchestrator/pkg/sandbox/template/storage_template.go
@dobrac dobrac added the bug Something isn't working label Apr 30, 2026
@dobrac dobrac marked this pull request as ready for review April 30, 2026 01:25
Comment on lines +343 to +362
```go
func (c *Cache) fetchAndEvictOnFailure(ctx context.Context, key string, tmpl *storageTemplate) {
	err := tmpl.Fetch(ctx, c.buildStore)
	if err == nil {
		return
	}

	c.extendMu.Lock()
	defer c.extendMu.Unlock()

	item := c.cache.Get(key)
	if item == nil || item.Value() != tmpl {
		return
	}

	c.cache.Delete(key)
	logger.L().Error(ctx, "template fetch failed, evicted cached template",
		logger.WithBuildID(key),
		zap.Error(err),
	)
}
```
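
Two details worth noting in this hunk: the item.Value() != tmpl guard ensures the goroutine only evicts the exact instance whose fetch failed, so a fresh template that repopulated the key after a concurrent eviction is left alone; and holding extendMu around the check-then-delete presumably keeps that comparison atomic with respect to the cache's other writers.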


🔴 On partial Fetch failure, fetchAndEvictOnFailure calls cache.Delete which synchronously fires the OnEviction handler in NewCache, running template.Close → closeTemplate → snapfile.Close() → os.RemoveAll(f.path) (storage_file.go:51-53). Concurrent peerserver consumers that obtained the same *storageTemplate via GetCachedTemplate (peerserver/resolve.go:39-47, peerserver/file.go:41-66) can race the deletion: they hold the path returned by t.Snapfile() but their os.Open call may land after the eviction unlinked the file. Pre-PR this was effectively impossible (eviction only fired on 25h TTL); post-PR it can fire within milliseconds. Fix by having OnEviction wait for in-use refs (refcount/handle), or skip Close when the entry was deleted due to a Fetch failure where partial state was never safely shareable.
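
The partial-failure premise rests on a general errgroup property rather than anything repo-specific: a plain errgroup.Group does not cancel sibling goroutines when one of them returns an error, so a sibling can still finish and publish its result. A standalone illustration (hypothetical example, not code from this repo):

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	var g errgroup.Group
	snapfileDone := false

	// Succeeds after a short delay and records its result, like the
	// snapfile fetch populating its field.
	g.Go(func() error {
		time.Sleep(10 * time.Millisecond)
		snapfileDone = true
		return nil
	})

	// Fails immediately, like a transient rootfs/GCS error. Nothing
	// here cancels the sibling goroutine above.
	g.Go(func() error {
		return errors.New("transient storage error")
	})

	err := g.Wait() // waits for both; returns the first non-nil error
	fmt.Println(err, snapfileDone)
	// Output: transient storage error true
}
```

Switching Fetch to errgroup.WithContext and honoring the derived context in each goroutine would stop siblings early on the first failure, though that alone would not close the eviction race described above.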

Extended reasoning...

## What changes

The PR adds fetchAndEvictOnFailure (cache.go:343-362) so a failed initial fetch evicts the cache entry instead of poisoning the cache. The eviction goes through c.cache.Delete(key), which fires the existing OnEviction handler registered at cache.go:75-84:

```go
cache.OnEviction(func(ctx context.Context, _ ttlcache.EvictionReason, item *ttlcache.Item[string, Template]) {
	peers.Purge(item.Key())
	template := item.Value()
	err := template.Close(ctx)
	...
})
```

template.Close calls closeTemplate (template.go:24-59), which iterates the per-field SetOnce values via Wait(), appends them to a closer list, and calls Close() on each. For *storageFile (used for the snapfile), Close() is os.RemoveAll(f.path) (storage_file.go:51-53), so the on-disk cached file is unlinked.

## Why partial failure is realistic

storageTemplate.Fetch (storage_template.go:74-237) launches four errgroup.Group goroutines (snapfile, metafile, memfile, rootfs). The errgroup has no parent context cancellation tied to sibling failures (the standalone sketch above illustrates this), so when one goroutine fails (e.g. rootfs returns an errMsg at storage_template.go:212-220), the others continue to completion and may successfully SetValue on their own field. The storageTemplate returned synchronously to callers can therefore hold a valid snapfile even when Fetch ultimately returns a non-nil error.

## The race

The *storageTemplate is shared across concurrent callers via cache.GetOrSet in getTemplateWithFetch (cache.go:317-339), and is also exposed via Cache.GetCachedTemplate (cache.go:189-198), which the peerserver consumes:

- peerserver/resolve.go:39-47 builds a fileSource{getFile: t.Snapfile} from the cached template.
- peerserver/file.go:41-72 then calls f.getFile() (which returns the SetValue'd *storageFile and its path) and immediately os.Open(file.Path()) (line 52).

If the partial fetch failure lands between the peer obtaining the template and calling os.Open, the eviction's os.RemoveAll runs first and os.Open returns ENOENT (which the code maps to ErrNotAvailable at file.go:53-55).

## Why pre-PR code was safe

Pre-PR, eviction only happened via the 25h TTL (templateExpiration), so the file was unlinked long after any active reader was gone. The item.Value() != tmpl guard in fetchAndEvictOnFailure only protects against deleting a different template instance; it does not protect one still in active use by other goroutines.

## Step-by-step proof

1. T=0ms: Sandbox A calls GetTemplate(buildID). getTemplateWithFetch does GetOrSet(key, tmpl) (miss), launches go fetchAndEvictOnFailure(ctx, key, tmpl), and returns tmpl to A.
2. T=1ms: Peer X requests the snapfile. peerserver.ResolveBlob calls cache.GetCachedTemplate(buildID) and finds tmpl. It returns fileSource{getFile: tmpl.Snapfile}.
3. T=2ms: The snapfile fetch completes successfully → tmpl.snapfile.SetValue(snapfile) succeeds; the on-disk path /cache/.../snapfile exists.
4. T=3ms: The rootfs fetch fails (transient GCS error) → tmpl.rootfs.SetError(...), and the rootfs goroutine returns the error.
5. T=4ms: errgroup.Wait() returns the rootfs error; storageTemplate.Fetch returns it; fetchAndEvictOnFailure calls c.cache.Delete(key).
6. T=4ms: OnEviction fires synchronously; closeTemplate calls tmpl.Snapfile().Wait() (which returns the SetValue'd snapfile), appends it to the closables, then snapfile.Close() runs os.RemoveAll("/cache/.../snapfile"). The file is unlinked.
7. T=5ms: Peer X's handler reaches fileSource.Stream: f.getFile() returns the snapfile (the in-memory *storageFile is still valid), then os.Open(file.Path()) fails with ENOENT.

## Scope correction

One note on the original description: closeTemplate (template.go:24-59) only closes Memfile, Rootfs, and Snapfile; it does not call t.Metadata(), so the metafile is not RemoveAll'd by eviction. Only the snapfile (via *storageFile.Close) is affected. Memfile and rootfs use *Storage, whose Close is a no-op for the on-disk cached chunks here.

## Suggested fix

Options:

- Have eviction skip Close (or defer it) when the eviction was triggered by a Fetch failure where readers may still hold the partial state, and rely on a later TTL or refcount-based cleanup.
- Make consumers acquire a refcount on the template (Acquire/Release) so OnEviction's Close blocks until refs hit zero (see the sketch below).
- For the peerserver path specifically, treat *storageFile paths as immutable while a reader is mid-stream by deferring removal.
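
To make the second option concrete, one possible shape of a refcount guard follows; every name below is hypothetical and none of it is code from this repo. Consumers such as a peerserver stream would Acquire before opening the on-disk file and Release when done, and the OnEviction handler would call WaitIdle before running the existing Close / os.RemoveAll path:

```go
package sketch

import "sync"

// refGuard is a sketch of the refcount option above. All names are
// hypothetical; nothing here is code from this repository.
type refGuard struct {
	mu   sync.Mutex
	cond *sync.Cond
	refs int
}

func newRefGuard() *refGuard {
	g := &refGuard{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// Acquire is taken by a consumer (e.g. a peerserver stream) before it
// opens the template's on-disk state.
func (g *refGuard) Acquire() {
	g.mu.Lock()
	g.refs++
	g.mu.Unlock()
}

// Release is called when the consumer is done with the files.
func (g *refGuard) Release() {
	g.mu.Lock()
	g.refs--
	if g.refs == 0 {
		g.cond.Broadcast()
	}
	g.mu.Unlock()
}

// WaitIdle blocks until no reader holds a reference. The OnEviction
// handler would call it before the Close -> os.RemoveAll path runs.
func (g *refGuard) WaitIdle() {
	g.mu.Lock()
	for g.refs > 0 {
		g.cond.Wait()
	}
	g.mu.Unlock()
}
```

One caveat with this shape: Acquire has to happen under the same lock that hands out the template (or before GetCachedTemplate returns), otherwise a consumer that has obtained the template but not yet acquired a reference can still lose the race.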

@dobrac dobrac marked this pull request as draft May 1, 2026 04:24