[6.x] Performance Optimisation for Stache - Stache Batch Get Items for Redis/Memcached/DynamoDB - Reduce Network Overhead#13224
Conversation
@jasonvarga I've gotten a little carried away with performance improvements and found this too. Keen to hear your thoughts. I'm not 100% sure the …
@jonathan-bird hi, any updates about this PR? Would be very useful if released ASAP 💪
Sorry, we don't have an ETA for reviewing/merging this pull request. We'll get to it when we can. In the meantime, you can pull this PR into your project with a composer patch.
Not yet, it's ready when the team are. Ideally for v6 |
@jonathan-bird Do you remember in your tests how much slower it went? If it's not that bad I'd rather always do batching to keep things simple. We wouldn't need to test/maintain two logic paths. |
@jasonvarga Annoyingly, I didn't keep the tester I wrote, but the difference was small - you can tell that even with 100 items it's only 0.38ms including batching. Batching all of the time would be fine; it's the bigger sites that feel the pain currently. I just wrote it in a way that gets the best performance out of everything.
Description
Optimises Stache store item retrieval by using batch cache operations for Redis, Memcached, and DynamoDB users.
- Adds a `getItems()` method to `BasicStore` that fetches multiple items efficiently
- Handles batching in `AggregateStore` before delegating to child stores
- Uses `cache()->many()` for batch fetching (a single MGET for Redis)

Performance Impact
This mainly benefits Redis/Memcached users fetching many items (e.g. search results). I wrote a small benchmark tool, and this is more or less a summary of the speed improvements. I have only benchmarked Redis, but since they're all network-backed stores, the results should be roughly similar for DynamoDB/Memcached:
Approx 12x speedup for Redis users. The batch approach using MGET is much faster than N individual GET calls.
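To make the round-trip saving concrete, here is a self-contained sketch in plain PHP. The `FakeCache` class and key names are illustrative assumptions standing in for a Redis-backed cache store; it simply counts round trips for N individual gets versus one `many()`-style batch:

```php
<?php
// FakeCache simulates a network-backed cache store: every call to the
// backend is one round trip, whether it fetches one key or many.
class FakeCache
{
    public int $roundTrips = 0;
    private array $store;

    public function __construct(array $store)
    {
        $this->store = $store;
    }

    // Per-key fetch: one round trip per call (like a Redis GET).
    public function get(string $key)
    {
        $this->roundTrips++;
        return $this->store[$key] ?? null;
    }

    // Batch fetch: one round trip for all keys (like a Redis MGET,
    // mirroring the shape of Laravel's cache()->many()).
    public function many(array $keys): array
    {
        $this->roundTrips++;
        $result = [];
        foreach ($keys as $key) {
            $result[$key] = $this->store[$key] ?? null;
        }
        return $result;
    }
}

$cache = new FakeCache(['item::a' => 'A', 'item::b' => 'B', 'item::c' => 'C']);
$keys = ['item::a', 'item::b', 'item::c'];

// N individual GETs: N round trips.
foreach ($keys as $key) {
    $cache->get($key);
}
$individual = $cache->roundTrips;

// One MGET-style batch: a single round trip.
$cache->roundTrips = 0;
$items = $cache->many($keys);
$batched = $cache->roundTrips;

echo "$individual vs $batched\n"; // 3 vs 1
```

With real network latency each round trip costs a fixed overhead, which is why collapsing N gets into one batch dominates the speedup.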
For File/Array cache users: no change (driver detection skips batch mode). With the in-memory Array cache the improvement is minimal because there's no network latency to optimise, and the overhead of building the batch request can even make it slightly slower for small datasets, so batching is skipped.
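A minimal sketch of the driver-detection idea described above. The `shouldBatch()` helper is a hypothetical illustration, not the PR's actual API; the driver names match Laravel's cache config values:

```php
<?php
// Only network-backed cache drivers benefit from batching; file/array
// drivers have no round-trip latency to save, so per-key gets stay.
function shouldBatch(string $driver): bool
{
    return in_array($driver, ['redis', 'memcached', 'dynamodb'], true);
}

var_dump(shouldBatch('redis')); // bool(true)
var_dump(shouldBatch('file'));  // bool(false)
```

Keeping the check to a driver whitelist means local file/array setups take the existing single-item path untouched.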
Use Cases
This improvement benefits any code path that retrieves multiple Stache items at once, such as: