Guidance re: triggering tasks from component args

So my team is pretty split on this topic recently and I’d like to solicit guidance from the community and preferably anyone from the core team on the right path forward.

The problem

We have some components (provider-style in many cases) that fetch data using ember-concurrency tasks when some or all of their args change. They currently mostly use the didReceiveAttrs lifecycle hook to trigger a refetch, but we want to move away from that for at least two reasons:

  • no control over which args trigger a refetch (which usually results in too many fetches)
  • lifecycle hooks are going away as we migrate to glimmer components

Obviously the long-term solution is the Resources RFC (or whatever shakes out of that), but we need something to use in the interim. Using the last couple of comments in this RFC issue as a jumping-off point, some of us have started exploring the render-helpers-type approach, but others really seem to hate that. Whatever the case, it seems like we have three basic options:

  1. render helpers/modifiers or something like them
  2. observers
  3. computed properties which return task instances wrapped in a promise proxy object

We’ve done #3 a lot in the past, but I personally dislike that solution: side effects in computed properties are strongly discouraged, and to me it feels kind of dishonest to call a data-fetching task a computed property. Another group really seems to hate the render-helper-style solution, though, citing the weirdness of wiring things up in the template and that “it’s basically just moving an observer to the template”.
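
For concreteness, here’s roughly what option 3 tends to look like for us (just a sketch in a classic component; the task, service, and property names are illustrative, not our actual code):

import Component from '@ember/component';
import { computed } from '@ember/object';
import { inject as service } from '@ember/service';
import ObjectProxy from '@ember/object/proxy';
import PromiseProxyMixin from '@ember/object/promise-proxy-mixin';
import { task } from 'ember-concurrency';

const PromiseObject = ObjectProxy.extend(PromiseProxyMixin);

export default Component.extend({
  store: service(),

  fetchProfile: task(function* (userId) {
    return yield this.store.findRecord('profile', userId);
  }),

  // the part that bothers me: reading this "computed property" kicks off a fetch
  profile: computed('userId', function () {
    return PromiseObject.create({
      promise: this.fetchProfile.perform(this.userId),
    });
  }),
});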

Obviously none of the solutions are amazing and we all know we’ll have to compromise and it will be temporary, but is there any community/core consensus as to what we should do in the near term? I’ve done a lot of reading and it doesn’t seem like it, but it’s easy to miss stuff. I know @pzuraq has written a lot on these topics, and I’ve seen @ef4 in some of the relevant threads. Any feedback much appreciated.

Just as a quick status update on @use and Resources: we’ve made progress on splitting that RFC’s core pieces out into a number of primitives that we can use to implement the APIs described in that RFC in userland. The idea was that doing this would make it possible to experiment with the high-level APIs before committing to them, similar to what we’ve done with component and modifier managers.

All of the primitive RFCs have been merged and accepted, and we just need to get them over the finish line implementation-wise. Once that’s done, we should be able to make something like @use work in an addon very quickly. The relevant RFCs are:

I could definitely use help getting these over the line, as I have a lot of things I’m working on at the moment (including the recent performance regressions). Either way, though, we are very close!

In the meantime, I have personally been using computed properties for these types of tasks. They do feel a bit unergonomic, but fundamentally they are doing pretty much the same thing that @use does under the hood. I think it feels wrong because the high-level API isn’t there to close over all the details, and it is very easy to get those details wrong or do something you shouldn’t (e.g. side-effect in a non-declarative way).

2 Likes

@pzuraq and I have independently ended up in the same place here: just using getters for this.

As I’ve been teaching other developers across the app I support about how to think about this, I’ve been making two points:

  1. As @pzuraq said, the @use/resources API will do the same thing under the hood. Implementing it with an explicit getter feels (and honestly could be) a little odd, but… that’s what it’s going to be anyway; having some of this in our app during the transition period is actually helpful, in that it helps developers build up a mental model of how this will work. The utilities we add on later will make a clearer distinction about where we’re intentionally doing “side effects”/interacting with the outside world via API calls etc., but they will fundamentally work in the same way.

  2. Just as importantly, it’s not only okay but in my opinion actually good to treat async data as just another normal kind of data—in fact, this is how you have to do it in some languages (hello Haskell!)—as long as we can model that clearly and correctly. We can think of these as managed effects, and a well-designed data structure can let us manage them effectively in JS, too.

We tend not to like side effects in getters for a handful of reasons, I think:

  • They tend to be unmanaged, and can as a result be unbounded in their implications for the performance of the app, especially because…
  • Unmanaged side effects can just straight-up block the main thread. Correctly managed async (via promises etc.) avoids this problem, but…
  • You can also end up leaking promises across tests if you’re not careful to clean them up.

Notably, though, none of those problems inheres in having async data come in via a getter: it matters very much how the asynchronous data is (or isn’t!) managed.

I helped design and build a data structure and helper to support not only this pattern but the general pattern of needing managed async without proxies (since we’re actively eliminating mixins and proxies from our app). It gives you insight into whether the async data it wraps is still loading, has loaded, or has errored, and it can be used from a template or from JS. (I expect something like this will become a very popular Resource once the primitives all land, because it’s incredibly useful!)
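
Very roughly, the shape of the idea is something like this (a simplified sketch, not the real implementation, which also handles the WeakMap-based caching, the template-helper wrapper, and teardown):

import { tracked } from '@glimmer/tracking';

class AsyncData {
  @tracked state = 'LOADING';
  @tracked value;
  @tracked error;

  get isLoading() { return this.state === 'LOADING'; }
  get isLoaded() { return this.state === 'LOADED'; }
  get isError() { return this.state === 'ERROR'; }
}

export default function load(somePromise) {
  let data = new AsyncData();
  somePromise.then(
    (value) => {
      // because the fields are tracked, anything reading `isLoaded`/`value`
      // updates automatically when the promise resolves
      data.state = 'LOADED';
      data.value = value;
    },
    (error) => {
      data.state = 'ERROR';
      data.error = error;
    }
  );
  return data;
}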

In a component which does its own data fetching based on a component argument, usage ends up looking like this:

import Component from '@glimmer/component';
import { inject as service } from '@ember/service';
import load from 'my-app/helpers/load';

export default class LazyProfileInfo extends Component {
  @service store;

  get profileData() {
    // `load` wraps the promise in the tracked async data structure described above
    return load(this.store.findRecord('profile-extra', this.args.user.id));
  }
}
{{#if this.profileData.isLoaded}}
  <p>{{@user.name}} is {{this.profileData.value.birthday}}</p>
{{/if}}
{{!--
  In this case we don't care about `isLoading` or `isError`, since it's
  fine if we don't having anything to show the user for this.
--}}

A bunch of nice things fall out of this design:

  • you can layer caching onto that as you like, with whatever semantics you need
  • it’s still async—never going to block anything
  • you don’t have to worry about weird behavior with proxies (no need for get here!)
  • you don’t have to worry about what to do when the component is being torn down (because it’ll just throw away the getter, the resulting state, and then GC it out of the WeakMap)
  • it will automatically update in the template appropriately when the state changes: the AsyncData type which load returns uses auto-tracking correctly under the hood
  • it will automatically rerun whenever a new argument for the user is passed (assuming the user model is appropriately internally tracked, of course)
  • most importantly from my POV, it helps everyone think about this kind of async state as just data

Our experience so far with it has been very positive—including in that developers really like it once they are able to start thinking about async data the way this helper encourages.

One additional note: this helper and data structure won’t help you avoid leaking promises across tests; for that you should make sure that your tests are using @ember/test-waiters and correctly setting waiters on any promises you’re creating manually (the framework-internal promises are all already correctly managed).
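
For example, a minimal sketch of that (assuming a hypothetical fetchJson utility of your own):

import { waitForPromise } from '@ember/test-waiters';

// wrap any promise we create ourselves so the test framework waits on it
export function fetchJson(url) {
  return waitForPromise(fetch(url).then((response) => response.json()));
}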

1 Like

This is some great info, thanks to both of you!

@pzuraq great news on the developments re: Resources and the lower level APIs. I’m taking vacation next week but I’ll try to take a look and see if there’s anything I could help out on for sure.

@chriskrycho big thanks for the detailed explanation and samples. I love not using proxies; I’m trying to move us away from them as well. The helper you wrote looks pretty cool. Out of curiosity, is there any reason you chose that over using ember-concurrency tasks, especially given the extra work you’d need to do to ensure promises don’t leak across tests versus task cancellation?

I agree with both the above replies.

I would rephrase this to resolve the contradiction: externally-observable side effects are strongly discouraged. But that doesn’t mean a side-effectful internal implementation is bad.

When it comes down to it, there is no such thing as side-effect-free code. You always alter something in the state of the machine. But when our architectures are working well, the changes aren’t observable from the outside. A great example is branch prediction on your CPU.

Functional purity is an abstract ideal we can strive for in our interfaces, but our implementations are grounded in reality where side-effects and state are just fundamentally present all the time, and may in fact be the best implementation.

Maybe the simplest pattern that has this property is caching. Updating a cache is a side-effect! But as long as the cache is designed correctly (with proper dependent keys / autotracking / etc to do invalidation), its effect is invisible to consumers of the value (other than the “side-channel” of observing how fast the answer came back).
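
A tiny sketch of that idea (the names here are illustrative):

import { tracked } from '@glimmer/tracking';

// naive recursive fibonacci, deliberately slow
function slowFib(n) {
  return n < 2 ? n : slowFib(n - 1) + slowFib(n - 2);
}

class FibSource {
  // `n` is tracked, so autotracking handles invalidation for anything reading `value`
  @tracked n = 30;

  _lastN;
  _cachedValue;

  get value() {
    if (this._lastN !== this.n) {
      // side effect: updating the cache. No consumer can observe it, other
      // than by noticing how quickly the answer comes back.
      this._lastN = this.n;
      this._cachedValue = slowFib(this.n);
    }
    return this._cachedValue;
  }
}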

A data-fetching computed property is just a somewhat fancier cache with more internal states.

This depends on your point of view of what constitutes a value.

If you limit yourself to only primitives (string, number, boolean) and combinators for building bigger values out of other values (array, map, set, etc), then yes, it seems like we aren’t really “computing a value”.

But a Promise is also a value. It represents an ongoing process of computation, and exposes one particular API for learning the outcome of that process. A Resource (as the term is used in the RFC discussed here) is very similar to a Promise: it represents an ongoing process of computation. It differs from a Promise by offering a different API for observing or controlling that computation.

If we accept that Resources are valid values, then the problem of “not really computing a value” disappears.

2 Likes

Thanks @ef4, that’s definitely a helpful way to think about it. And I guess it makes it clearer that what I didn’t like was more our particular patterns for doing it.

There are a few reasons:

  • The first, and most important, is that we currently support :fire: DIAF IE11 :fire: on a very large codebase, don’t do any kind of “differential builds” to target different browsers differently, and cannot afford the performance overhead of shipping regenerator-runtime.

  • The second is that while Ember Concurrency tasks are really good, they’re also far more than we need in general. They have much more API surface area, and come with a much larger cost in terms of the size of the addon than this tiny helper does. That’s in no way a knock against it: I’ve used it extensively in the past, and apps adjacent to mine do use it, and happily. But if you don’t need restart-ability, cancel-ability, etc., it’s a lot to pay for!

  • In terms of the management for tests, it’s not really an issue for us: 99% of our Promises come straight from Ember Data, and most of that last 1% would need to be managed explicitly (rather than via EC tasks) anyway.

1 Like

Hi @chriskrycho, I saw this topic and the “how to force a re-render glimmer component” topic, and they fit my problem exactly, so I tried to use the async data loader (both the version on your blog and the one in the GitHub gist).

I’m using Ember 3.15, Ember Data 3.12, and Ember CLI 3.15.

I’ve been having a problem, though… my code enters an infinite loop: the getter is continually invoked (from the template, I assume; the stack trace seems to come from Ember internals), and since it runs an AJAX query (store.query), it is basically always making a request.

The template only has

{{#if this.welds.isLoaded}}

and the getter is (this is in a glimmer component)

get welds() {
  let query = { /* code to calculate the query */ };
  return load(this.store.query('project-weld', query));
}

Any ideas? Thanks a lot

This is a known quirk with (or bug in) store.query. If you manually cache the result in an untracked property, it should resolve this for you! See the discussion here.
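
Roughly, the workaround looks like this (a sketch; the component and property names here are just illustrative, and it reuses the load helper from earlier):

import Component from '@glimmer/component';
import { inject as service } from '@ember/service';
import load from 'my-app/helpers/load';

export default class ProjectWelds extends Component {
  @service store;

  // deliberately *not* tracked: just a plain cache slot, so re-reads of the
  // getter don't re-run the query (this caches for the component's lifetime)
  _weldsCache;

  get welds() {
    if (!this._weldsCache) {
      let query = { /* code to calculate the query */ };
      this._weldsCache = load(this.store.query('project-weld', query));
    }
    return this._weldsCache;
  }
}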

Hi Chris!

Thank you so much for the quick reply! I actually found that discussion a while after giving up on this :sweat_smile:. I’m interested in getting back to it.

Does this mean that we as developers always need to remember to either manually cache the result or use the cache decorator? (i.e. there’s no expectation of fixing that quirk/bug in store.query; it’s a “feature, not a bug” situation?)

Thanks again for taking the time to reply :slight_smile:

That’s correct! Unlike Classic, where everything was cached aggressively by default—which was rather expensive and often didn’t pay for itself—Octane flips the default so that you add caching only where you need it.

I agree, that makes sense :slight_smile:

Although, from what I could understand, if I were using store.findRecord I wouldn’t need to cache, right? I just need to cache because I’m using store.query. It’s not really a decision of ‘I want to cache this because I want better performance’, right?

No, you’ll want to cache any call to the store or similar, because you want the getter to return a stable result no matter how many times it’s invoked, as long as the tracked state it uses doesn’t change. (This is a place where Resources will help!) The difference for query is that it has a bug, so you have to cache manually; @cached won’t solve it like it does in the general case.
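
For the general case, that looks something like this (a sketch; it assumes @cached is available, e.g. via @glimmer/tracking on recent Ember or the ember-cached-decorator-polyfill on 3.x, and it reuses the load helper from earlier; as noted, it won’t fix the store.query case, which still needs the manual cache shown above):

import Component from '@glimmer/component';
import { cached } from '@glimmer/tracking';
import { inject as service } from '@ember/service';
import load from 'my-app/helpers/load';

export default class LazyProfileInfo extends Component {
  @service store;

  // @cached memoizes the getter until the tracked state it reads changes,
  // so repeated reads return the same AsyncData instead of refetching
  @cached
  get profileData() {
    return load(this.store.findRecord('profile-extra', this.args.user.id));
  }
}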

1 Like