How to find a memory leak

How do you find a memory leak in a big Ember app that includes ~1000 lineItems? Is there an efficient method of lazy loading?

Hi @M_D, could you be a little more specific? When you say “memory leak”, is it actually a leak (i.e. memory that should be garbage collected but is never actually freed and continues to build up) or is it just high memory usage due to too much data being fetched/serialized/cached? Do you use Ember Data? What Ember version are you using? Are the memory issues problematic in your test suite, or just in general application use?

Actual memory leaks can be tricky to find, but there are tools and techniques you can use. The usual approach is to take a heap snapshot in your browser’s dev tools (e.g. the Memory panel in Chrome DevTools), repeat the action you suspect of leaking, take another snapshot, and compare the two to see what’s being retained.
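One very common source of real leaks in Ember apps is a listener or timer that outlives its component. A minimal sketch of the pattern and its fix (the component name and listener body are hypothetical; `@ember/destroyable` requires Ember 3.22+):

```js
import Component from '@glimmer/component';
import { registerDestructor } from '@ember/destroyable';

// Hypothetical component illustrating a classic leak: a window listener
// that would outlive the component if it were never removed. Everything
// the listener closes over (including the component) stays reachable,
// so the garbage collector can never free it.
export default class ScrollTrackerComponent extends Component {
  constructor(owner, args) {
    super(owner, args);
    this.onScroll = () => {
      // ...react to scrolling...
    };
    window.addEventListener('scroll', this.onScroll);

    // The fix: tear the listener down when the component is destroyed.
    registerDestructor(this, () => {
      window.removeEventListener('scroll', this.onScroll);
    });
  }
}
```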

Fetching or caching too much data is a common problem, especially when using legacy Ember Data where a large number of records can have a pretty intense memory impact and be difficult to unload. There are a number of different things you could do to reduce memory usage in this case:

  • fetch less data to begin with by paginating and/or filtering. This is generally the biggest and easiest win, but sometimes it requires a significant refactor if your app is built around the idea of “fetch all the things”, e.g. findAll or hasMany relationships
  • skip Ember Data (if using legacy Ember Data) and fetch directly, giving you raw JSON for some/all of your data (see the sketch after this list). Ember Data records can use significantly more memory than raw data, and it’s harder (since the default is “cache everything”) to reclaim that memory
  • if your API supports something like sparse fieldsets, you could fetch smaller payloads by only requesting the fields you need
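
To make those ideas concrete, here’s a minimal sketch combining pagination, a raw fetch, and a sparse fieldset, assuming a JSON:API-style backend. The endpoint, page size, and field names are all hypothetical:

```js
// Fetch one page of raw line-item JSON, requesting only the fields we render.
async function fetchLineItemPage(orderId, pageNumber) {
  const params = new URLSearchParams({
    'page[number]': String(pageNumber), // pagination: one page at a time
    'page[size]': '50',
    'fields[line-items]': 'name,quantity,price', // sparse fieldset
  });
  const response = await fetch(`/api/orders/${orderId}/line-items?${params}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  // Plain JSON that never enters the Ember Data store, so it can be
  // garbage collected as soon as nothing references it.
  return response.json();
}
```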

Hi @dknutsen, let’s consider an e-commerce app where we create a single order with 1000 line items. When we scroll to the bottom of the line items list, the app gets slow, so I suspect there may be a leak somewhere.

In that case I wouldn’t really call this a memory leak per se; it’s more of a general performance issue. I’m sure memory is a factor, but a “leak” usually means the memory cannot be reclaimed by the garbage collector, whereas in this case any memory buildup is intentional. You may be hitting render bottlenecks too.

Anyway, given that, I’d still point to my three remediation ideas above. At the end of the day this is a suboptimal architecture: fetching 1000+ records at once is almost never what you really want. If you fetch them you have to factor in over-the-wire time and payload size, server serialization time, and memory load. If you’re fetching them via legacy Ember Data (hasMany, findAll, query, etc.) it’s even worse, because each record uses significantly more memory than the raw JSON (at least in my experience). Rendering 1000+ rows at a time will also take time no matter how much you optimize it. So in summary, the only “real” fix for this problem is to not fetch/render that many records at a time, and generally the answer is some sort of pagination scheme (see the route sketch below).
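
One common pagination scheme is to drive the page number through a query param, so the route only ever loads one page of records at a time. A rough sketch; the route name, model name, and page size are assumptions, and the exact query params your adapter sends depend on your API:

```js
import Route from '@ember/routing/route';
import { inject as service } from '@ember/service';

export default class OrderLineItemsRoute extends Route {
  @service store;

  queryParams = {
    page: { refreshModel: true }, // re-run model() when the page changes
  };

  model(params) {
    // Only one page of records is ever requested and cached at a time.
    return this.store.query('line-item', {
      page: { number: params.page, size: 50 },
    });
  }
}
```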

But let’s say you want to keep said suboptimal architecture where you’re overfetching and overrendering… You can still reduce memory load by optimizing how you fetch/cache the records. For example, if you’re using legacy Ember Data, try a raw fetch call instead and render the raw JSON with minimal serialization; this also prevents the data from being cached in the store. If the caching itself is the issue, you can try unloading records from the store (or, if you’re not using legacy Ember Data, making sure all references to the raw data are cleaned up); see the sketch below.
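
For the unloading approach, something like this could work. A hedged sketch (route and model names are hypothetical); note that unloading is only safe if nothing else in the app still references those records:

```js
import Route from '@ember/routing/route';
import { inject as service } from '@ember/service';

export default class OrderRoute extends Route {
  @service store;

  resetController(controller, isExiting, transition) {
    super.resetController(...arguments);
    if (isExiting) {
      // Drop all cached line-item records when leaving the route so
      // they can be garbage collected.
      this.store.unloadAll('line-item');
    }
  }
}
```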

If rendering is the primary bottleneck but you still want to render that many records (again, my advice is to tackle the problem at its source instead of optimizing leaf render state), you can use occlusion rendering tools like vertical-collection, which only render the rows in or near the viewport.
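
A rough usage sketch of vertical-collection in a template, assuming the addon is installed; the item shape, heights, and class names are assumptions:

```hbs
{{! Only the rows in (or near) the visible scroll area are rendered to the DOM. }}
<div class="line-items" style="height: 600px; overflow-y: auto;">
  <VerticalCollection
    @items={{this.lineItems}}
    @estimateHeight={{50}}
    @staticHeight={{true}}
    as |item|
  >
    <div class="line-item-row">{{item.name}} × {{item.quantity}}</div>
  </VerticalCollection>
</div>
```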
