[6.x] Performance Optimisation for Stache - Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead #13224

Open
jonathan-bird wants to merge 2 commits into statamic:6.x from jonathan-bird:feature/stache-memory-batching

Conversation

@jonathan-bird

Description

Optimises Stache store item retrieval by using batch cache operations for Redis, Memcached, and DynamoDB users.

  • Adds getItems() method to BasicStore that fetches multiple items efficiently
  • Groups keys by child store in AggregateStore before delegating
  • Uses cache()->many() for batch fetching (single MGET for Redis)
  • Only enables batch mode for network-based cache drivers (Redis, Memcached, DynamoDB)
  • File/Array cache users continue using individual lookups (there's no network overhead to avoid)
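The batching pattern described above can be sketched roughly as follows. This is a minimal, framework-free sketch: the real PR works against Laravel's cache repository and Statamic's `BasicStore`/`AggregateStore`, so `FakeCache` and everything beyond the `getItems()` name are illustrative stand-ins.

```php
<?php

// Stand-in for a cache backend that supports batch reads,
// mimicking Laravel's Cache::many(). Purely illustrative.
class FakeCache
{
    public function __construct(private array $data) {}

    public function get(string $key): mixed
    {
        return $this->data[$key] ?? null;
    }

    // Fetch several keys in one logical round trip
    // (a single MGET when backed by Redis).
    public function many(array $keys): array
    {
        $result = [];
        foreach ($keys as $key) {
            $result[$key] = $this->data[$key] ?? null;
        }

        return $result;
    }
}

// Sketch of a BasicStore::getItems() style method: one batch read
// instead of N individual get() calls.
function getItems(FakeCache $cache, array $keys): array
{
    return $cache->many($keys);
}

// Returns ['a' => 1, 'b' => 2, 'missing' => null].
print_r(getItems(new FakeCache(['a' => 1, 'b' => 2]), ['a', 'b', 'missing']));
```

In the aggregate case, the PR description says keys are first grouped by child store and each group is then delegated as one batch, so each underlying store still receives a single `many()` call.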

Performance Impact

This benefits Redis/Memcached users fetching many items (e.g. search results). I wrote a small benchmark tool, and the table below summarises the speed improvements. I've only benchmarked Redis, but since they're all network-based in-memory stores, the results should be roughly the same for Dynamo/Memcached:

Items   Batch (new)   Individual (old)   Improvement   Speedup
100     0.38 ms       4.61 ms            91.8%         12x
1000    2.42 ms       27.76 ms           91.3%         11.5x
5000    11.54 ms      134.52 ms          91.4%         11.7x

Approximately a 12x speedup for Redis users: a single batched MGET is much faster than N individual GET calls.
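The ~12x figure is consistent with a simple round-trip model: N individual GETs pay N network round trips, while one MGET pays roughly one round trip plus a small per-key cost. A back-of-envelope sketch (the constants are illustrative assumptions, not numbers from the benchmark above):

```php
<?php

// Round-trip model: individual fetches pay one network round trip per key;
// a batch fetch pays roughly one round trip total plus a small per-key cost.
// Both constants are illustrative, not measured.
$rtt = 0.04;      // ms per round trip to a local Redis
$perKey = 0.0003; // ms per-key serialisation/transfer cost

function individualMs(int $n, float $rtt): float
{
    return $n * $rtt; // N GETs => N round trips
}

function batchMs(int $n, float $rtt, float $perKey): float
{
    return $rtt + $n * $perKey; // one MGET => ~one round trip
}

foreach ([100, 1000, 5000] as $n) {
    printf(
        "%5d items: batch %.2f ms vs individual %.2f ms\n",
        $n,
        batchMs($n, $rtt, $perKey),
        individualMs($n, $rtt)
    );
}
```

Under this model the speedup grows with item count until the per-key cost dominates, which matches the roughly constant ~11-12x seen in the measurements.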

For File/Array cache users: no change (driver detection skips batch mode). With the in-memory Array cache there's no network latency to optimise, and the overhead of building the batch request can even make small datasets slightly slower, so batching is skipped.
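The driver detection could look something like this. This is a sketch only: the PR's actual `shouldUseBatchCaching` implementation may differ, and the exact driver list is an assumption based on the description above.

```php
<?php

// Sketch: enable batch mode only for network-based cache drivers,
// where eliminating round trips actually pays off. The driver names
// follow Laravel's conventional store names; the list is assumed.
function shouldUseBatchCaching(string $driver): bool
{
    return in_array($driver, ['redis', 'memcached', 'dynamodb'], true);
}

var_dump(shouldUseBatchCaching('redis')); // bool(true)
var_dump(shouldUseBatchCaching('file'));  // bool(false)
```

Checking the configured driver name keeps the two code paths (batch vs individual lookups) selectable per deployment without any user-facing configuration.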

Use Cases

This improvement benefits any code path that retrieves multiple Stache items at once, such as:

  • Search results hydration
  • Entry queries returning many results
  • Bulk operations on entries/terms/assets

@jonathan-bird jonathan-bird changed the title Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead [5.x] Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead Dec 1, 2025
@jonathan-bird
Author

@jasonvarga I've gotten a little carried away with performance improvements and found this too. Keen to hear your thoughts. I'm not 100% sure that having shouldUseBatchCaching check which stores are in use is the best approach; basically it just needs to not run for disk storage.

@jonathan-bird jonathan-bird changed the title [5.x] Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead [5.x] Performance Optimisation for Stache - Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead Dec 4, 2025
@vitalijalbu

@jonathan-bird Hi, any updates on this PR? It would be very useful to have it released ASAP 💪

@duncanmcclean
Member

Sorry, we don't have an ETA for reviewing/merging this pull request. We'll get to it when we can.

In the meantime, you can pull this PR into your project with a composer patch.

@jonathan-bird
Author

Not yet; it's ready whenever the team is. Ideally for v6.

@duncanmcclean duncanmcclean changed the title [5.x] Performance Optimisation for Stache - Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead [6.x] Performance Optimisation for Stache - Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead Jan 28, 2026
@duncanmcclean duncanmcclean changed the base branch from 5.x to 6.x January 28, 2026 17:26
@jasonvarga
Member

The overhead of building the batch request can even make it slightly slower for small datasets

@jonathan-bird Do you remember in your tests how much slower it went? If it's not that bad I'd rather always do batching to keep things simple. We wouldn't need to test/maintain two logic paths.

@jonathan-bird
Author

@jasonvarga Annoyingly, I didn't keep the tester I wrote, but the overhead was small. You can even see it in the table: with 100 items the batched path is only 0.38 ms, batching overhead included.

Batching all of the time would be fine; it's the bigger sites that feel the pain currently. I just wrote it this way to get the best performance out of every driver.

