Data-down to a component leads to memory leak?

Hi… I have a simple ember app that does data-down to a component

I basically want to avoid memory leaks in my app. I tested in canary and found that there are detached HTMLDivElements… below is the screenshot. Are these really memory leaks? If so, can someone please help me out on how to avoid these leaks? Also, is there any way to find the source of the leaks from the heap snapshots?

Here is the twiddle

I did the following:

  1. Took a heap snapshot after loading the twiddle (snapshot 1)
  2. Clicked the Show/Hide child button to show the child component
  3. Clicked the Show/Hide child button to hide the child component
  4. Took a heap snapshot (snapshot 2)
  5. Compared the snapshots and found the following


I took a brief, confused look at the heap analysis, and noticed that it looks like the detached DOM is retained by a reference in an array called destroyables in constructs like SimpleBlockTracker, which in turn is retained on a block stack which is retained in a bunch of places. The number of uncollected detached DOM nodes doesn’t seem to grow… they get GC’d eventually after further renders, but there are always some around that seemingly needn’t be retained. Perhaps they’re just referred to a little longer than is absolutely necessary?

First of all, I’m curious to know how you found the cause, i.e. the reference in the array. Any particular knack? :stuck_out_tongue: It would be really helpful if you could explain how you narrowed it down. :slight_smile: Okay… so, is it an internal issue with Ember, and is there no other way to avoid it? Because I am working on a bigger app with multiple addons and multiple nested components with data-down… In that case, the memory keeps on increasing (browser memory goes to hundreds of MBs) and does not seem to get cleared by GC :see_no_evil:

First of all, I’m curious to know how you found the cause, i.e. the reference in the array. Any particular knack? :stuck_out_tongue:

Haha, well… basically just browsing around the Retainers tab in the dev tools, looking for objects that are holding a reference to these things and keeping them from being garbage collected.

I’m not quite sure what I’m seeing here but I’m going to write out my thoughts and see if anyone can make sense of this.

Here are the detached divs reported by the memory profiler:

Hovering over that last one (@845453) it appears to be the outer div for the recently-destroyed memory-leak-child component:

So let’s see what’s retaining a reference to it:

Let’s start with those First and Last objects that are referring to our div as node:

So they are the first and last properties on a SimpleBlockTracker, which is itself referred to as __BOUNDS__... on some other object. Let’s see what other object is retaining our SimpleBlockTracker:

Oh my. Is this a problem? Why are these things retained? The thing is, following the retainer paths for all these objects, they all seem to terminate at a WeakMap:

And that WeakMap is Ember’s global metaStore:


If all these references terminate in a WeakMap, then that shouldn’t prevent any of these trees of objects from being garbage collected. Maybe there’s some subtlety to WeakMap that I’m missing? Hope someone more knowledgeable can help!


I got some help understanding this from resident browser-internals guru @krisselden. I will try to relate what I learned.

Current status of leak bugs in canary

We are retaining more than necessary at the moment, but not an unbounded amount (as @jgwhite guessed). Basically, some things are left around from the previous render and not cleared away until the next render. Work is ongoing on this branch to clean up the issue. A lot of what you’ll see there is actually cleanup of places where the test suite itself was leaking things, making the hunt for real framework leaks harder.

There was a different unbounded memory leak that was fixed in 3.1.1, but it would look different from the example above. Glimmer was recompiling components unnecessarily, and that was causing the glimmer-vm’s stack to overflow.

One of the hard things about tracking these leaks down is that the heap snapshot tools are being misled into showing real but irrelevant retaining paths. destroyables, for example, is something Kris has already looked at and confirmed is benign.

The big reason why the snapshots are pretty confusing is related to WeakMaps (which are used both by ember and by the browser’s own internals).

Why WeakMaps make the heap snapshots misleading

For the purposes of this discussion, assume we have an object leakyValue that is stored as a value in a WeakMap named theWeakMap, and nowhere else. Another object which I’ll call leakyKey is the corresponding key in the WeakMap, as in theWeakMap.set(leakyKey, leakyValue).
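As a minimal JavaScript sketch of that setup (the names theWeakMap, leakyKey, and leakyValue come from the discussion above; the object contents are placeholders, not anything from Ember’s internals):

```javascript
// Minimal sketch of the WeakMap setup described above.
// The object contents are made-up placeholders for illustration.
const theWeakMap = new WeakMap();

const leakyKey = { name: 'some component instance' };
const leakyValue = { name: 'retained render state' };

// leakyValue is stored ONLY as a value in theWeakMap:
theWeakMap.set(leakyKey, leakyValue);

// The entry is visible for as long as we hold a reference to the key:
console.log(theWeakMap.has(leakyKey)); // true
console.log(theWeakMap.get(leakyKey) === leakyValue); // true
```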

leakyValue is retained by two different retention paths. If either path were to go away, the object would be garbage collected. This is the weird thing about WeakMaps – normally any single path to an object is enough to explain why it is retained. But a value in a WeakMap is only retained if it’s reachable along both paths.

The first path is the path from a GC root to theWeakMap itself. If theWeakMap is dropped, leakyValue can also be dropped, even if leakyKey is still retained.

The second path is the path from a GC root to leakyKey. If leakyKey is dropped, then leakyValue can also be dropped, regardless of whether theWeakMap is still retained.
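A sketch of those two paths (again with the made-up names from above; actual reclamation is up to the engine and isn’t directly observable, so this only models the reference structure):

```javascript
// Models the "both paths must survive" rule for a WeakMap entry.
// A value stored only in a WeakMap stays reachable only while BOTH
// the map and its key are still reachable from a GC root.
function entryStillReachable(mapRef, keyRef) {
  // If either reference has been dropped, the engine is free to
  // collect the value (a WeakMap entry doesn't keep its key alive).
  return mapRef !== null && keyRef !== null && mapRef.has(keyRef);
}

let theWeakMap = new WeakMap();
let leakyKey = {};
const leakyValue = {};
theWeakMap.set(leakyKey, leakyValue);

console.log(entryStillReachable(theWeakMap, leakyKey)); // true

// First path cut: drop the map itself.
theWeakMap = null;
console.log(entryStillReachable(theWeakMap, leakyKey)); // false

// Second path cut (starting over): drop the key instead.
theWeakMap = new WeakMap();
theWeakMap.set(leakyKey, leakyValue);
leakyKey = null;
console.log(entryStillReachable(theWeakMap, leakyKey)); // false
```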

Now, the first difficulty is that the dev tools can’t really know which of the two paths was supposed to be cut. Sometimes you have a WeakMap that is intentionally long-lived, and it’s the key that’s leaking. Sometimes you have a key that’s intentionally long-lived and the WeakMap is leaking. The tools can’t tell you which is relevant.

Second, the dev tools try to determine the interesting retaining paths by looking for the shortest paths. This heuristic was good until WeakMaps were introduced, but now it can be quite misleading, because of these dual paths. For example, you may have a WeakMap very close to a GC root, which will make the path through the WeakMap itself the shortest path to the value, even though it’s not really relevant because you intended to retain the WeakMap, and it’s really the longer path to the key that is the culprit. In that case, finding that longer path can be very painful, because the tools don’t surface it.

And this whole problem is compounded when there are cycles adding to the complexity.

Hopefully the heap snapshot tool will adapt soon to the new complexity of WeakMaps. For now, it is quite hard to use.


Thanks a tonne @jgwhite & @ef4 for taking the time to explain in detail :slight_smile:

Sorry, I’m not quite clear…

Basically some things are left around from the previous render and not cleared away until the next render. Work is ongoing on this branch to clean up this issue.

destroyables, for example, is something Kris has already looked at and confirmed is benign.

So this does cause memory leaks and work is going on to clean up this issue but it is harmless from the app developer’s perspective? :thinking:

As I already mentioned, I work on a bigger app with numerous nested components and addons. I noticed that memory shot up to hundreds of MBs (browser memory in the task manager goes from, say, 140MB to 300MB after accessing a few components, and the app becomes unresponsive if it goes up to a GB; memory increases in heap snapshots too). So we started looking for memory leaks in the app… and found the above issue.

So this could possibly be the reason for memory shooting up? :frowning:

The specific thing that @ef4 was saying is that the currently known leaks (which we are absolutely working to fix!) will only leak a single container (the one from the last test that was run). They do not grow unbounded.