Is coalescing find requests still a good practice considering HTTP/2?

If I got it correctly, coalesceFindRequests is mostly used to reduce the number of network requests fired when an application needs to fetch multiple records of the same type in one runloop. I’m wondering whether this is still a good practice from a performance perspective when taking HTTP/2 into consideration. I put some thoughts on it together in this post and would appreciate any feedback.
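To make the mechanism concrete, the idea behind coalescing can be sketched in plain JavaScript (a hypothetical stand-in, not the actual ember-data implementation; fetchMany and findRecord are illustrative names):

```javascript
// Sketch of request coalescing: findRecord calls made in the same
// tick are collected and flushed together as one request.

let requestCount = 0;
const pending = new Map(); // id -> array of resolve callbacks
let flushScheduled = false;

function fetchMany(ids) {
  // Stand-in for one network call, e.g. GET /users?ids[]=1&ids[]=2
  requestCount += 1;
  return Promise.resolve(ids.map((id) => ({ id })));
}

function findRecord(id) {
  return new Promise((resolve) => {
    if (!pending.has(id)) pending.set(id, []);
    pending.get(id).push(resolve);
    if (!flushScheduled) {
      flushScheduled = true;
      queueMicrotask(flush); // flush once the current tick is done
    }
  });
}

async function flush() {
  const waiters = new Map(pending);
  pending.clear();
  flushScheduled = false;
  // One request for all ids collected in this tick
  const records = await fetchMany([...waiters.keys()]);
  for (const record of records) {
    waiters.get(record.id).forEach((resolve) => resolve(record));
  }
}
```

Three findRecord calls in the same tick would result in a single fetchMany call instead of three.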

HTTP/2 reduces the cost of multiple requests dramatically. HTTP/1.1 can only serve one request at a time per TCP connection, so browsers open several connections per origin and firing many requests was very expensive, in particular over HTTPS, which needs several roundtrips until a connection is established. Since HTTP/2 multiplexes all requests to the same origin over a single TCP connection, this isn’t an issue anymore.

If I got the debates on similar topics right, especially HTTP/2 and asset concatenation, it basically comes down to weighing the better cacheability of individual requests against the overhead that still exists. That overhead is mainly the work browsers spend creating, firing and parsing separate requests, plus the higher compression ratio achievable for bigger payloads, isn’t it?

Normally ember-data does the caching on the client side: it does not fire a request if the record is already in the store. So caching improvements are mostly on the server side, aren’t they? The chance to hit a caching proxy is much higher if individual requests are fired, since a coalesced request is unlikely to be served from cache (every combination of IDs produces a different URL). If it’s often the same records being coalesced, there might still be a tradeoff for the API in handling multiple requests instead of one. This is highly specific to the API implementation, isn’t it? I would assume it’s less of an issue for a node.js API than for a PHP implementation, for example. Microservices might also perform much better in this scenario.
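The cache-key argument can be illustrated with a small sketch (the URL shapes are illustrative, roughly what the REST adapter produces):

```javascript
// Why a caching proxy favors individual requests: each distinct
// combination of ids yields a distinct coalesced URL, so coalesced
// responses are rarely reusable.

function individualUrls(ids) {
  return ids.map((id) => `/comments/${id}`);
}

function coalescedUrl(ids) {
  return `/comments?${ids.map((id) => `ids[]=${id}`).join('&')}`;
}

// n records produce n individual URLs that any later request can hit
// again, but up to 2^n - 1 possible coalesced URLs, one per non-empty
// subset of ids that happens to be requested together.
```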

For the client side, caching is only important when it comes to offline-first approaches using service workers, isn’t it? In those use cases it’s highly important to have as many requests as possible cached by the service worker, so content can be served in offline mode. Imagine not being able to show any content just because one unknown record was coalesced into the request. Of course there are other approaches to offline-first in ember than service workers (e.g. caching in localStorage via an ember-data adapter) for which this doesn’t apply.
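That all-or-nothing effect can be simulated with a URL-keyed cache, like the one a service worker would keep (plain JS stand-in, not the actual Cache API):

```javascript
// A URL-keyed offline cache serves individual requests partially, but
// a coalesced URL containing even one unknown id misses entirely.

const offlineCache = new Map([
  ['/comments/1', { id: 1 }],
  ['/comments/2', { id: 2 }],
]);

function serveOffline(urls) {
  // Return whatever the cache can answer; unknown URLs yield nothing.
  return urls.filter((url) => offlineCache.has(url));
}

// Individual requests for ids 1, 2 and 3: two of three still render.
// The coalesced request '/comments?ids[]=1&ids[]=2&ids[]=3' was never
// cached under that exact URL, so nothing can be shown offline.
```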

Did I miss an important point? If not, this basically comes down to weighing the hit rate on a caching proxy in front of the API against the additional work the browser puts into handling multiple requests. I would assume the compression ratio is not that important, since we are talking about small payloads even for coalesced find requests.

These are good questions. The best way to answer them is to conduct experiments.

FWIW, I think implementing findMany (which is a prerequisite for doing coalescing) is an unusual thing for ember data users to do. While I’m sure there’s some combination of backend architecture and data access pattern that makes it worthwhile, it’s just not a thing that comes up a lot.

Multiple records with known IDs are not only fetched if you use the private method findMany. Actually I’ve seen a lot of applications using coalesceFindRequests, but none of them had implemented that method themselves. The use case I’ve seen most often is an application that does not fetch all needed related records in the model hook (e.g. the backend does not support an include param and the developer has not chained the find requests), so the relationships are fetched async when rendered.
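As a sketch of that setup (hypothetical app files, classic ember-data syntax): with coalescing enabled on the adapter, rendering a list of comments whose async author relationship points at users not yet in the store triggers one coalesced request instead of one request per user.

```javascript
// app/adapters/application.js
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  // Group findRecord calls from the same runloop into one request
  coalesceFindRequests: true,
});

// app/models/comment.js
import DS from 'ember-data';

export default DS.Model.extend({
  // The related user is fetched lazily when {{comment.author.name}}
  // is rendered, not in the model hook
  author: DS.belongsTo('user', { async: true }),
});
```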