23 changes: 22 additions & 1 deletion packages/orchestrator/pkg/sandbox/template/cache.go
@@ -332,10 +332,31 @@
missesMetric.Add(ctx, 1)
// We don't want to cancel the request if the request was canceled, because it can be used by other templates
// It's a little bit problematic, because shutdown won't cancel the fetch
go tmpl.Fetch(context.WithoutCancel(ctx), c.buildStore)
go c.fetchAndEvictOnFailure(context.WithoutCancel(ctx), key, tmpl)
} else {
hitsMetric.Add(ctx, 1)
}

return t.Value()
}

func (c *Cache) fetchAndEvictOnFailure(ctx context.Context, key string, tmpl *storageTemplate) {
err := tmpl.Fetch(ctx, c.buildStore)
if err == nil {
return
}

c.extendMu.Lock()
defer c.extendMu.Unlock()

item := c.cache.Get(key)
if item == nil || item.Value() != tmpl {
return
}

c.cache.Delete(key)
logger.L().Error(ctx, "template fetch failed, evicted cached template",
logger.WithBuildID(key),
zap.Error(err),
)
}
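The inline comment kept in the hunk above explains the context.WithoutCancel choice: the fetched template can be shared by other callers, so one caller's cancellation must not abort the shared fetch. A minimal standalone illustration of that stdlib behavior (Go 1.21+), independent of this codebase:

```go
package main

import (
	"context"
	"fmt"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	// Detach a context that keeps ctx's values (trace metadata, etc.)
	// but ignores its cancellation.
	detached := context.WithoutCancel(ctx)

	cancel() // the request-scoped context is canceled...

	fmt.Println(ctx.Err())      // context.Canceled
	fmt.Println(detached.Err()) // <nil>: the detached work keeps running
}
```

The trade-off the comment calls out still holds: nothing ever cancels the detached context, so a shutdown cannot interrupt an in-flight fetch either.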

Check failure on line 362 in packages/orchestrator/pkg/sandbox/template/cache.go


Claude / Claude Code Review

Eviction on partial Fetch failure deletes snapfile/metafile while callers still hold the template

Comment on lines +343 to +362

🔴 On partial Fetch failure, fetchAndEvictOnFailure calls cache.Delete which synchronously fires the OnEviction handler in NewCache, running template.Close → closeTemplate → snapfile.Close() → os.RemoveAll(f.path) (storage_file.go:51-53). Concurrent peerserver consumers that obtained the same *storageTemplate via GetCachedTemplate (peerserver/resolve.go:39-47, peerserver/file.go:41-66) can race the deletion: they hold the path returned by t.Snapfile() but their os.Open call may land after the eviction unlinked the file. Pre-PR this was effectively impossible (eviction only fired on 25h TTL); post-PR it can fire within milliseconds. Fix by having OnEviction wait for in-use refs (refcount/handle), or skip Close when the entry was deleted due to a Fetch failure where partial state was never safely shareable.

Extended reasoning...

## What changes

The PR adds fetchAndEvictOnFailure (cache.go:343-362) so a failed initial fetch evicts the cache entry instead of poisoning the cache. The eviction goes through c.cache.Delete(key), which fires the existing OnEviction handler registered at cache.go:75-84:

```go
cache.OnEviction(func(ctx context.Context, _ ttlcache.EvictionReason, item *ttlcache.Item[string, Template]) {
	peers.Purge(item.Key())
	template := item.Value()
	err := template.Close(ctx)
	...
})
```

template.Close calls closeTemplate (template.go:24-59), which iterates the per-field SetOnce values via Wait(), appends them to a closer list, and calls Close() on each. For *storageFile (used for the snapfile), Close() is os.RemoveAll(f.path) (storage_file.go:51-53), so the on-disk cached file is unlinked.

## Why partial failure is realistic

storageTemplate.Fetch (storage_template.go:74-237) launches four errgroup.Group goroutines (snapfile, metafile, memfile, rootfs). The errgroup has no parent context cancellation tied to sibling failures, so when one goroutine fails (e.g. rootfs returns an errMsg at storage_template.go:212-220), the others continue to completion and may successfully SetValue on their own field. So the storageTemplate returned synchronously to callers can have a valid snapfile even when Fetch ultimately returns non-nil.

## The race

The *storageTemplate is shared across concurrent callers via cache.GetOrSet in getTemplateWithFetch (cache.go:317-339), and is also exposed via Cache.GetCachedTemplate (cache.go:189-198), which the peerserver consumes:

- peerserver/resolve.go:39-47 builds a fileSource{getFile: t.Snapfile} from the cached template.
- peerserver/file.go:41-72 then calls f.getFile() (returning the SetValue'd *storageFile and its path) and immediately os.Open(file.Path()) (line 52).

If the partial fetch failure happens between the peer obtaining the template and calling os.Open, the eviction's os.RemoveAll can land first and os.Open returns ENOENT (which the code maps to ErrNotAvailable at file.go:53-55).

## Why pre-PR code was safe

Pre-PR, eviction only happened via the 25h TTL (templateExpiration), so the file was unlinked long after any active reader was gone. The item.Value() != tmpl guard in fetchAndEvictOnFailure only protects against deleting a different template instance, not against deleting one still in active use by other goroutines.

## Step-by-step proof

1. T=0ms: Sandbox A calls GetTemplate(buildID). getTemplateWithFetch does GetOrSet(key, tmpl) (miss), launches go fetchAndEvictOnFailure(ctx, key, tmpl), and returns tmpl to A.
2. T=1ms: Peer X requests the snapfile. peerserver.ResolveBlob calls cache.GetCachedTemplate(buildID) and finds tmpl. It returns fileSource{getFile: tmpl.Snapfile}.
3. T=2ms: The snapfile fetch completes successfully → tmpl.snapfile.SetValue(snapfile) succeeds; the on-disk path /cache/.../snapfile exists.
4. T=3ms: The rootfs fetch fails (transient GCS error) → tmpl.rootfs.SetError(...), and the rootfs goroutine returns the error.
5. T=4ms: errgroup.Wait() returns the rootfs error; storageTemplate.Fetch returns it; fetchAndEvictOnFailure calls c.cache.Delete(key).
6. T=4ms: OnEviction fires synchronously; closeTemplate calls tmpl.Snapfile().Wait() (which returns the SetValue'd snapfile), appends it to the closables, then snapfile.Close() runs os.RemoveAll("/cache/.../snapfile"). The file is unlinked.
7. T=5ms: Peer X's handler reaches fileSource.Stream; f.getFile() returns the snapfile (the in-memory *storageFile is still valid), then os.Open(file.Path()) fails with ENOENT.

## Scope correction

One note on the original description: closeTemplate (template.go:24-59) only closes Memfile, Rootfs, and Snapfile; it does not call t.Metadata(), so the metafile is not RemoveAll'd by eviction. Only the snapfile (via *storageFile.Close) is affected. Memfile/rootfs use *Storage, whose Close is a no-op for the on-disk cached chunks here.

## Suggested fix

Options:
- Have eviction skip Close (or defer it) when the eviction was triggered by a Fetch failure where readers may still hold the partial state, and rely on a later TTL or refcount-based cleanup.
- Make consumers acquire a refcount on the template (Acquire/Release) so OnEviction's Close blocks until refs hit zero.
- For the peerserver path specifically, treat *storageFile paths as immutable while a reader is mid-stream by deferring removal.
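A minimal sketch of the refcount option follows. It assumes only that the cached value exposes Close(ctx), matching the eviction path described above; the refCountedTemplate wrapper and the Acquire/Release/closeOnEviction names are hypothetical, not identifiers from this repository:

```go
package template

import (
	"context"
	"sync"
)

// closableTemplate is the one assumption this sketch makes about the
// cached value: it can be closed, which unlinks its on-disk files.
type closableTemplate interface {
	Close(ctx context.Context) error
}

// refCountedTemplate defers the destructive Close until the last
// consumer releases its reference. Hypothetical; not in the codebase.
type refCountedTemplate struct {
	closableTemplate

	mu      sync.Mutex
	refs    int
	evicted bool
}

// Acquire is called by consumers (e.g. the peerserver path) before they
// read the template's on-disk files.
func (r *refCountedTemplate) Acquire() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.refs++
}

// Release is deferred by consumers when they finish streaming. If the
// entry was evicted while readers were active, the last Release performs
// the cleanup that eviction postponed.
func (r *refCountedTemplate) Release(ctx context.Context) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.refs--
	if r.refs == 0 && r.evicted {
		return r.closableTemplate.Close(ctx)
	}
	return nil
}

// closeOnEviction is what the cache's OnEviction handler would call
// instead of Close: it removes files immediately only when no reader
// holds a reference; otherwise it defers cleanup to the final Release.
func (r *refCountedTemplate) closeOnEviction(ctx context.Context) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.evicted = true
	if r.refs == 0 {
		return r.closableTemplate.Close(ctx)
	}
	return nil
}
```

Under this scheme fetchAndEvictOnFailure stays as written; only the OnEviction handler would switch from Close to closeOnEviction, and GetCachedTemplate consumers would pair Acquire with a deferred Release.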

214 changes: 214 additions & 0 deletions packages/orchestrator/pkg/sandbox/template/cache_test.go
@@ -2,12 +2,20 @@ package template

import (
"context"
"errors"
"io"
"testing"
"time"

"github.com/google/uuid"
"github.com/jellydator/ttlcache/v3"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"

"github.com/e2b-dev/infra/packages/orchestrator/pkg/cfg"
blockmetrics "github.com/e2b-dev/infra/packages/orchestrator/pkg/sandbox/block/metrics"
"github.com/e2b-dev/infra/packages/shared/pkg/storage"
"github.com/e2b-dev/infra/packages/shared/pkg/storage/header"
)

func newTestCache(defaultTTL time.Duration) *Cache {
@@ -16,6 +24,128 @@ func newTestCache(defaultTTL time.Duration) *Cache {
}
}

func newFetchTestCache(t *testing.T) *Cache {
t.Helper()

return &Cache{
cache: ttlcache.New(ttlcache.WithTTL[string, Template](time.Hour)),
config: cfg.Config{
BuilderConfig: cfg.BuilderConfig{
StorageConfig: storage.Config{
TemplateCacheDir: t.TempDir(),
},
},
},
}
}

type testTemplateFile struct {
path string
}

func (f testTemplateFile) Path() string {
return f.path
}

func (f testTemplateFile) Close() error {
return nil
}

type testStorageProvider struct {
openBlobErr error
release <-chan struct{}
}

func (p *testStorageProvider) wait(ctx context.Context) error {
if p.release == nil {
return nil
}

select {
case <-p.release:
return nil
case <-ctx.Done():
return ctx.Err()
}
}

func (p *testStorageProvider) DeleteObjectsWithPrefix(context.Context, string) error {
return nil
}

func (p *testStorageProvider) UploadSignedURL(context.Context, string, time.Duration) (string, error) {
return "", nil
}

func (p *testStorageProvider) OpenBlob(ctx context.Context, _ string, _ storage.ObjectType) (storage.Blob, error) {
if err := p.wait(ctx); err != nil {
return nil, err
}

if p.openBlobErr != nil {
return nil, p.openBlobErr
}

return testBlob{}, nil
}

func (p *testStorageProvider) OpenSeekable(context.Context, string, storage.SeekableObjectType) (storage.Seekable, error) {
return nil, errors.New("unexpected seekable open")
}

func (p *testStorageProvider) GetDetails() string {
return "test storage provider"
}

type testBlob struct{}

func (testBlob) WriteTo(context.Context, io.Writer) (int64, error) {
return 0, nil
}

func (testBlob) Put(context.Context, []byte) error {
return nil
}

func (testBlob) Exists(context.Context) (bool, error) {
return true, nil
}

func testHeader(t *testing.T, buildID string) *header.Header {
t.Helper()

id := uuid.MustParse(buildID)
h, err := header.NewHeader(header.NewTemplateMetadata(id, 4096, 4096), nil)
require.NoError(t, err)

return h
}

func newStorageTemplateForTest(
t *testing.T,
c *Cache,
buildID string,
provider storage.StorageProvider,
memfileHeader *header.Header,
rootfsHeader *header.Header,
) *storageTemplate {
t.Helper()

tmpl, err := newTemplateFromStorage(
c.config.BuilderConfig,
buildID,
memfileHeader,
rootfsHeader,
provider,
blockmetrics.Metrics{},
testTemplateFile{path: "snapfile"},
testTemplateFile{path: "metadata"},
)
require.NoError(t, err)

return tmpl
}

// simulateGetTemplate mimics getTemplateWithFetch's lock-protected TTL logic
// without needing a full storageTemplate (which requires disk paths).
func simulateGetTemplate(c *Cache, key string, maxSandboxLengthHours int64) {
@@ -126,3 +256,87 @@ func TestWithoutExtend_EntryEvictedEarly(t *testing.T) {
item := c.cache.Get(key)
assert.Nil(t, item, "without TTL extension, the entry should be evicted after the default TTL")
}

func TestGetTemplateWithFetch_RetriesAfterFetchFailure(t *testing.T) {
t.Parallel()

ctx := context.Background()
c := newFetchTestCache(t)
buildID := uuid.NewString()
memfileHeader := testHeader(t, buildID)

failedTemplate := newStorageTemplateForTest(
t,
c,
buildID,
&testStorageProvider{openBlobErr: errors.New("gcs unavailable")},
memfileHeader,
nil,
)
got := c.getTemplateWithFetch(ctx, failedTemplate, 0)
require.Same(t, failedTemplate, got)

_, err := got.Rootfs()
require.ErrorContains(t, err, "gcs unavailable")
require.Eventually(t, func() bool {
_, found := c.GetCachedTemplate(buildID)

return !found
}, time.Second, time.Millisecond)

successfulTemplate := newStorageTemplateForTest(
t,
c,
buildID,
&testStorageProvider{},
memfileHeader,
testHeader(t, buildID),
)
got = c.getTemplateWithFetch(ctx, successfulTemplate, 0)
require.Same(t, successfulTemplate, got)

rootfs, err := got.Rootfs()
require.NoError(t, err)
require.NotNil(t, rootfs)
require.Eventually(t, func() bool {
cached, ok := c.GetCachedTemplate(buildID)

return ok && cached == got
}, time.Second, time.Millisecond)
}

func TestGetTemplateWithFetch_SharesCachedFetchBeforeEviction(t *testing.T) {
t.Parallel()

ctx := context.Background()
c := newFetchTestCache(t)
buildID := uuid.NewString()
memfileHeader := testHeader(t, buildID)
release := make(chan struct{})

firstTemplate := newStorageTemplateForTest(
t,
c,
buildID,
&testStorageProvider{openBlobErr: errors.New("gcs unavailable"), release: release},
memfileHeader,
nil,
)
got := c.getTemplateWithFetch(ctx, firstTemplate, 0)
require.Same(t, firstTemplate, got)

secondTemplate := newStorageTemplateForTest(
t,
c,
buildID,
&testStorageProvider{},
memfileHeader,
testHeader(t, buildID),
)
got = c.getTemplateWithFetch(ctx, secondTemplate, 0)
require.Same(t, firstTemplate, got)

close(release)
_, err := got.Rootfs()
require.ErrorContains(t, err, "gcs unavailable")
}
21 changes: 9 additions & 12 deletions packages/orchestrator/pkg/sandbox/template/storage_template.go
@@ -71,7 +71,7 @@ func newTemplateFromStorage(
}, nil
}

func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore) {
func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore) error {
ctx, span := tracer.Start(ctx, "fetch storage template", trace.WithAttributes(
telemetry.WithBuildID(t.paths.BuildID),
))
Expand Down Expand Up @@ -102,7 +102,7 @@ func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore
return fmt.Errorf("failed to set snapfile error: %w", errors.Join(errMsg, err))
}

return nil
return errMsg
dobrac marked this conversation as resolved.
}

if err := t.snapfile.SetValue(snapfile); err != nil {
Expand Down Expand Up @@ -134,7 +134,7 @@ func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore
return fmt.Errorf("failed to set metafile error: %w", errors.Join(sourceErr, err))
}

return nil
return sourceErr
}

if err != nil {
@@ -152,7 +152,7 @@ func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore
return fmt.Errorf("failed to set metafile error: %w", errors.Join(sourceErr, err))
}

return nil
return sourceErr
}

if err := t.metafile.SetValue(&storageFile{
Expand Down Expand Up @@ -189,7 +189,7 @@ func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore
return fmt.Errorf("failed to set memfile error: %w", errors.Join(errMsg, err))
}

return nil
return errMsg
}

if err := t.memfile.SetValue(memfileStorage); err != nil {
@@ -216,7 +216,7 @@ func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore
return fmt.Errorf("failed to set rootfs error: %w", errors.Join(errMsg, err))
}

return nil
return errMsg
}

if err := t.rootfs.SetValue(rootfsStorage); err != nil {
Expand All @@ -231,13 +231,10 @@ func (t *storageTemplate) Fetch(ctx context.Context, buildStore *build.DiffStore
span.RecordError(err)
span.SetStatus(codes.Error, err.Error())

logger.L().Error(ctx, "failed to fetch template storage",
logger.WithBuildID(t.paths.BuildID),
zap.Error(err),
)

return
return err
}

return nil
}

func (t *storageTemplate) Close(ctx context.Context) error {