Handling similar long-running requests with caching

Given a scenario where I've got multiple instances of a service handling multiple long-running requests, is there a sane way to implement something like "there's already a similar request running, wait until it's finished and then take the result from the cache"? I've got a Redis instance that could be used for some sort of shared state 😅
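One common way to do this is request coalescing with a Redis lock: the first instance to see a request key acquires a lock via `SET NX`, does the work, and writes the result to a cache key; other instances poll the cache until it appears. Below is a minimal sketch of that pattern. The `InMemoryRedisStub`, `get_or_compute`, key names, and timeouts are all illustrative assumptions, not an existing API; the stub only mimics the `set(nx=..., ex=...)`/`get`/`delete` calls of a real `redis.Redis` client so the sketch runs standalone.

```python
import time


class InMemoryRedisStub:
    """Illustrative stand-in for redis.Redis; the real client exposes
    the same set(name, value, nx=..., ex=...), get and delete calls
    used below (TTL expiry is omitted here for brevity)."""

    def __init__(self):
        self._data = {}

    def set(self, name, value, nx=False, ex=None):
        if nx and name in self._data:
            return None  # redis-py returns None when SET NX loses
        self._data[name] = value
        return True

    def get(self, name):
        return self._data.get(name)

    def delete(self, name):
        self._data.pop(name, None)


def get_or_compute(r, key, compute, lock_ttl=60, wait_timeout=60, poll=0.05):
    """Run `compute` on exactly one instance per `key`; others wait
    for the cached result. Hypothetical helper, not a library API."""
    cache_key, lock_key = f"cache:{key}", f"lock:{key}"

    cached = r.get(cache_key)
    if cached is not None:
        return cached  # result already computed by some instance

    # Try to become the single instance doing the work. SET NX with an
    # expiry acts as a lock that self-releases if this instance dies.
    if r.set(lock_key, "1", nx=True, ex=lock_ttl):
        try:
            result = compute()
            r.set(cache_key, result, ex=lock_ttl)
            return result
        finally:
            r.delete(lock_key)

    # Another instance holds the lock: poll the cache until the
    # result shows up or we give up waiting.
    deadline = time.monotonic() + wait_timeout
    while time.monotonic() < deadline:
        cached = r.get(cache_key)
        if cached is not None:
            return cached
        time.sleep(poll)

    # Lock holder may have crashed mid-work; fall back to computing locally.
    return compute()
```

Two caveats with this sketch: `lock_ttl` must comfortably exceed the longest expected computation, or a second instance will start duplicate work when the lock expires; and instead of polling you could have waiters block on Redis Pub/Sub (publish on the result channel when the cache is written), which trades the polling latency for a bit more plumbing.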