Multiple changesets, text entry, and numeric data

Data can come from a variety of sources - a form, query parameters, an external file - and data from each source needs to be validated before it can be used. A changeset provides a place to hold data for validation before it is written to the @tracked properties that the rest of the application relies on.

I am running into two design problems for entry of numeric data.

First, validation necessarily has two tiers. The first tier asks, “can the entered data be parsed into numbers?”. The second tier holds all the questions about the numeric value in isolation and then about the value’s consistency with the rest of the data. You can’t really ask any of the other questions until you have verified that what was provided parses correctly. Fleshing this out a little: the same piece of data from different sources may have different text formats, but once it is reduced to a number, the rest of the validation is about the nature of the data rather than its representation and can generally be shared.

Right now, using the ember-changeset proxy to validate the parsed data means both sides must be numbers. That leaves the problem of cobbling a separate tier of text-entry validation on top of it, plus keeping local state for the field text so that validation failures don’t just overwrite what the user typed with what’s in the changeset. The user wants to be able to correct what was typed rather than losing it.

Second, it seems to me a changeset should encompass only the set of data being delivered from a single source, since a changeset validates if the data is consistent with its companions before any of them are permitted to pass into the @tracked storage. Therefore, I would want one changeset for an input form, another for the data coming from a file, and a third for changes to the query parameters. Each of them, upon save(), delivers data to the single set of @tracked data they all protect.

Design problem: What happens to the data in one changeset when a save() from another changeset changes the underlying @tracked data? Does it ignore the change? Is there a notification so the app can decide what to do about the pending values that haven’t been passed to save() yet?

Combine those two design problems, and we are finding it difficult to use pure “business as usual” DDAU (data down, actions up) to have changes to the @tracked data flow through the changesets to the text fields when the tracked data is reloaded outside the context of the form’s changeset. We’re trying to avoid sledgehammers like reloading routes or shoehorns like {{on-update}} and want to rely solely on the natural flow of tracked architecture. We’re even wondering whether we need our own hybrid text-entry/numeric-validation changeset implementation to support all this, but don’t really want to go there.

Any guidelines on how to proceed? Is there something obvious we’re doing wrong?

Interesting question here. A few questions of my own.

Do all these data-processing changesets live on the screen at the same time? If you’re processing query parameters, it sounds like you may have at least two at a time?

What type of data is stored in these changesets that might go stale? And another question: do these various inputs need to use changesets at this stage? From what I’m hearing so far, it sounds like the initial “parse” phase could be treated as a formatting phase that doesn’t have to live in the changeset system. It sounds like there is a much more sophisticated validation pass later where changesets may be necessary, but is that needed for the initial input stage?

TL;DR: While “actions up” from field parsers and change sets is straightforward, I’m not sure how to make the “data down” flow (tracked properties → change sets → field parsers → data fields in handlebars) work through the Octane state architecture rather than around it.

Do all these data-processing changesets live on the screen at the same time?

Interesting turn of phrase; I hadn’t thought of change sets as being “on the screen” at all, but as part of the business logic of the application: a “golem at the gate” so that (a) flaws can’t enter the core business data from that source, and (b) the provider of the data is offered feedback to provide acceptable input. At any given save(), the changeset updates either the portion of tracked state enumerating errors or the portion of tracked state containing the protected data.

Thinking things through more carefully, it probably isn’t a good idea to treat all of these sources of input as independent and unrelated. Portions of the input data appear on forms on different tabs. The form for the current tab is on the screen when the user loads a file. If there are problems in the file data, the user might want to use the forms to amend it, and the form containing the problem data might not be on the screen when the file is loaded.

So with this adjustment, there would be one shared changeset for all this data across the tabs, the URL and the file input, so that you can flip tabs to correct data, regardless of where it came from. Eventually there are no more errors and the app can then operate on the data.

With only one changeset for all user input, whether the changeset is part of the core application or the UI becomes less of a distinction. The golem is at the gate, and which side of the wall his feet land on scarcely matters. We might prefer to treat it as the back edge of the user interface, since the error state has more to do with user input than the core application. If so, we’d keep the changeset and its error data in the application controller, rather than the business logic model where the data it protects lives.

Do these various inputs need to use changesets at this stage?

Parse has a lifecycle a lot like a changeset. It identifies flaws in the text representation and communicates them back to the user, while leaving the incorrect value in place for correction. However, parse is always at the individual field level, and it crosses two datatypes (string <=> number), so, yes, it is something different. We need a field-level cross-type abstraction that operates a lot like a changeset - let’s call it a “field parser”.
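
To make that concrete, here is roughly the shape I have in mind for such a field parser; nothing like this exists in our code yet, and every name here is a placeholder:

import { tracked } from '@glimmer/tracking';

// A changeset-like buffer for a single field, crossing string <=> number.
class FieldParser {
  @tracked text;       // unvalidated side: what the user typed
  @tracked error = '';

  constructor(initialValue, { parse, onParsed }) {
    this.text = String(initialValue);
    this.parse = parse;       // string -> { ok: true, value } or { ok: false, reason }
    this.onParsed = onParsed; // hands a successfully parsed number on to the changeset
  }

  submit(text) {
    this.text = text;
    let result = this.parse(text);
    if (result.ok) {
      this.error = '';
      this.onParsed(result.value);
    } else {
      this.error = result.reason; // leave this.text in place for correction
    }
  }
}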

The field parser has to integrate with the changeset in a bidirectional flow so the two layers appear as one to the user:

  • Successful parse must submit the parsed data to the changeset for further validation. That part seems straightforward enough via “actions up”.
  • Errors from parsing a field need to appear together with, for instance, range errors from some other field that successfully parsed.
  • If the validated side of a parse is updated by some other source, the unvalidated side needs to be updated to reflect it. (For instance, a displayed form field needs to update when the data from a file presents itself for changeset validation.)

So we are back at the TL;DR at the top: “data down” to display fields in handlebars through change sets and field validators should occur seamlessly whenever the target data changes. I don’t believe this occurs automatically through the proxy used by ember-changeset, and I’m not sure how I would make it happen in a field validator either with the primitives currently available. Can somebody offer me a clue? :slight_smile:

Let’s see if I can make this more concrete…

At the bottom of this reply is a sample of text input to a numeric field. The numeric field can also be updated by an external action, simulating the effect of a file load. As written, if the loadClicked call changes the modelValue, the text displayed in the input box won’t change.

Looking at the paths, there are two “Actions Up” paths to modelValue:

  • changeSubmitted → testSetModelValue → modelValue
  • loadClicked → testSetModelValue → modelValue

There needs to be a “data down” path from modelValue to the display.

  • modelValue → formText → <input value={{this.formText}}/>

Changes to modelValue from any source need to invalidate formText, causing it to recalculate. When the user types input, any changes to formText only change the modelValue at the end of the whole submit/validate cycle, so the behavior is asymmetric, and it’s not a pure getter/setter. I’m not sure how to stitch the text field into the tracking chain to make this happen, but it seems like a pretty basic need for data entry.

We have done a number of gross things in the past to accomplish this kind of thing:

  • applying observers
  • abusing the notion of computed values, using the special case where the setter returns a value, even though ES5 setters don’t, or worse, where we inadvertently overwrote the whole computed value.

I can think of other gross things I could do, like putting something in the actions-up part of the cycle to register any active forms to update their text data when the keeper of the model receives a request to update, essentially changing formText as a procedural adjunct to changing modelValue during actions-up.

I want to stop doing stuff that wires the data flow around the core mechanisms and use the tracking architecture to create a “single flow” instead. I’ve watched it being done for things like tracked arrays and objects and for cached data. How do I accomplish this for editable text fields fronting numeric data?

Or, alternatively, how am I thinking about this the wrong way? How do I need to adjust my concepts?

Sample Code:

import Controller from '@ember/controller';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';

/*
<label for="formval" >Enter a number between 1 and 20:</label>
<input 
  id="formval" type="text" 
  value={{this.formText}} 
  {{on 'input' this.valueChanged}}
  {{on 'change' this.changeSubmitted}}
/> 

<p class="value-line">Model Value: <span class="value">{{this.modelValue}}</span></p>
<p class="error">{{this.errorText}}</p>
*/

export default class TextController extends Controller {
  @tracked formText = '';
  @tracked modelValue = 12;
  @tracked errorText = '';

  @action valueChanged(event) {
    this.formText = event.target.value;
  }
  @action changeSubmitted() {
    this.testSetModelValue(this.formText);
  }
  @action loadClicked(loadValidValue) {
    if (loadValidValue) {
      this.testSetModelValue('16');
    } else {
      this.testSetModelValue('3q4');
    }
  }

  testSetModelValue(valueText) {
    let value = this.modelValue;
    // Test
    let errorText = this.validateParse(valueText);
    // Set
    if (!errorText) {
      value = parseInt(valueText, 10);
      this.modelValue = value;
      this.errorText = '';
    } else {
      this.errorText = errorText;
    }
    // Report
    return !errorText;
  }

  // Validate

  validateParse(value) {
    if (!/^\d+$/.test(value)) {
      return `${value} is not a number`;
    } else {
      return '';
    }
  }
}

Fundamentally, from what I’ve seen in your description here, the challenge is that you (quite reasonably!) want to separate input change from input persistence. This is a common pattern when working with forms, and isn’t specific to numeric work: you want to provide validation and feedback to users as they input data, but don’t want to persist it until they submit it, for example. (A common non-numeric use case: providing feedback on a password creation field, while not doing anything with it until it’s valid and the user says “yep, set a new password” at which point you might be doing further, server-side-integrated validation.)

The fundamental complexity you’re struggling with is inescapable here, but we can wrangle it. It’s inescapable because it is fundamental to the problem: you have some ultimate source of truth, from which the user’s data is initially set, and from which it is also allowed to diverge. That means that there are three pieces of data in the system:

  • the source of the data
  • the user’s changes to the data
  • the input itself!

And the relationship between these all needs to be managed correctly! Wrangling that complexity means leaning into that divergence and making it explicit in the structure of your data.

Note: in what follows I’m intentionally talking in quite general terms, not specific to the example you’ve given and not using ember-changeset. The details of how this will work in your example are apt to be different in a variety of ways. I wanted to convey here the ideas in play, rather than get hung up on specific details of the implementation (though there’s a fair bit of code below to suggest the shape of data handling I think may be useful here).


Aside: This is a heuristic I find useful in general: where possible, explicitly encode the business logic in the data structures and the functions which operate on them, rather than in a procedure which implements the business logic correctly but only implicitly. This is the intuition behind the idea, common in typed functional programming communities, of “making illegal states unrepresentable”: it’s just about encoding your data so that it’s impossible to construct an illegal state in the program.
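
To make that concrete in this context: a minimal, result-shaped parse function (the parse the examples below call but never define) might look roughly like this, with the regex standing in for whatever parsing rules you actually need:

// parse: string -> { ok: true, value } | { ok: false, reason }
// Un-parseable text can never continue through the system as a number.
export function parse(text) {
  let trimmed = text.trim();
  if (!/^-?\d+(\.\d+)?$/.test(trimmed)) {
    return { ok: false, reason: `${text} is not a number` };
  }
  return { ok: true, value: Number(trimmed) };
}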


In the scenario you’re describing, as well as any place you need a downstream which is allowed to diverge from an upstream source of data and then must be synced again later, it’s helpful to create a data structure which manages that. You can think of it as a “data buffer” or a “form model”; the key is that you are creating an explicit place for that divergence to happen. Here’s the rough shape I would put that in:

  1. A helper which can create that buffer for you:

    // form-model-for
    import { helper } from '@ember/component/helper';
    import { tracked } from '@glimmer/tracking';
    
    class FormModel {
      @tracked value;
    
      constructor(initial) {
        this.value = initial;
      }
    }
    
    export default helper(([val]) => new FormModel(val));
    

    In TypeScript terms, I would make FormModel generic over the data it wraps:

    class FormModel<T> {
      @tracked value: T;
    
      constructor(initial: T) {
        this.value = initial;
      }
    }
    

    This helper will re-run any time the val passed in changes, so the parent which owns the data and passes it in will always be “in charge” of what the child shows: it’ll just end up with a new FormModel for that piece of data.

  2. A component which accepts a FormModel for a given (set of) field(s). I’m using TS here to make the contract clearer, but it’s the same as in JS. I’m showing it with a single min field, but you can imagine doing exactly the same with as many more fields as makes sense, and of course you can imagine generalizing this to whatever degree you need.

    // my-form.ts
    import Component from '@glimmer/component';
    import { tracked } from '@glimmer/tracking';
    import { action } from '@ember/object';
    // FormModel is the class from the form-model-for helper above; parse is a
    // string -> result function along the lines sketched earlier.
    
    interface MyFormArgs {
      min: FormModel<number>;
      save: (newMin: number) => void;
    }
    
    const MIN_MIN = 0;
    const MIN_MAX = 10;
    
    export default class MyForm extends Component<MyFormArgs> {
      @tracked minParseError = null;
    
      get minOutOfRange() {
        let min = this.args.min.value;
        return min < MIN_MIN || min > MIN_MAX;
      }
    
      @action setMin({ target: { value } }) {
        let result = parse(value);
        if (result.ok) {
          this.args.min.value = result.value;
        } else {
          this.minParseError = result.reason;
        }
      }
    
      @action persist(event) {
        event.preventDefault();
        this.args.save(this.args.min.value);
      }
    }
    
    {{! my-form.hbs }}
    <form {{on "submit" this.persist}}>
      <label>min:
        <input
          value={{@min.value}}
          {{on "input" this.setMin}}
        />
      </label>
      {{#if this.minOutOfRange}}
        <p>Whoops! Min is out of range!</p>
      {{/if}}
    
      <button type='submit'>Save new min!</button>
    </form>
    

    Notice that this actively changes min.value. That’s fine, and doesn’t violate “data-down, actions up”. The contract of this FormModel type is that you’re allowed to do this, and when you want to persist it back to the parent, you still have a place (@save) to do that explicitly.

  3. Wire those up like this:

    // index.js
    import Controller from '@ember/controller';
    import { tracked } from '@glimmer/tracking';
    import { action } from '@ember/object';
    // validate stands in for whatever value-level validation belongs here.
    
    export default class IndexController extends Controller {
      @tracked min = 0;
    
      @action saveMin(newMin) {
        let result = validate(newMin);
        if (result.ok) {
          this.min = result.value;
        } else {
          alert("NOOOOOPE!");
        }
      }
    }
    
    {{! index.hbs }}
    <MyForm @min={{form-model-for this.min}} @save={{this.saveMin}} />
    

This is the most explicit version of this, but you could actually go a step further and use a Proxy, in a very similar way to how args themselves are implemented if you were going to pass some structured data down. Then instead of creating an individual field for each model, you could use the combination of the Proxy and something like TrackedMap or TrackedObject from tracked-built-ins to make it “transparent” to the downstream consumer of the data.

That might look like this:

import { helper } from '@ember/component/helper';
import { tracked } from 'tracked-built-ins';

class Buffer {
  #bufferedData;

  constructor(obj) {
    this.#bufferedData = tracked(obj);
  }

  get(_target, prop) {
    return this.#bufferedData[prop];
  }

  set(_target, prop, value) {
    // ignore sets for props which aren't keys on the original object!
    if (prop in this.#bufferedData) {
      this.#bufferedData[prop] = value;
    }
    // a Proxy set trap must return true, or strict-mode assignment throws
    return true;
  }
}

function bufferProxyFor(obj) {
  const target = Object.create(null);
  const handler = new Buffer(obj);
  return new Proxy(target, handler);
}

export default helper(([obj]) => bufferProxyFor(obj));

Using that would work really nicely for an object with multiple keys:

<MyForm @data={{buffer-for this.data}} @save={{this.save}} />

(This assumes that this.data is a single blob of tracked state in the backing class. If it were individual tracked properties, which is also common, you might want to do {{buffer-for (hash min=this.min max=this.max)}} and so on.)
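
For completeness, here is a rough sketch of the owning side under that assumption (the property and action names are illustrative):

// index.js (sketch): the parent owns the real data and the save action
import Controller from '@ember/controller';
import { action } from '@ember/object';
import { TrackedObject } from 'tracked-built-ins';

export default class IndexController extends Controller {
  data = new TrackedObject({ min: 0, max: 10 });

  @action save(newMin) {
    // whatever validation belongs at the point of ownership happens here
    this.data.min = newMin;
  }
}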

Within MyForm, you’d just access the object exactly as if it were the original object, but you’d mark that you’re receiving a Buffer for that object (you can see a sketch of a type-safe way of doing that here):

// my-form.ts
import Component from '@glimmer/component';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';
// Buffer, parse, MIN_MIN, and MIN_MAX are as defined in the earlier examples.

interface MyFormArgs {
  data: Buffer<{
    min: number;
    // other data...
  }>;
  save: (newMin: number) => void;
}

export default class MyForm extends Component<MyFormArgs> {
  @tracked minParseError = null;

  get minInRange() {
    const { min } = this.args.data;
    return min >= MIN_MIN && min <= MIN_MAX;
  }

  @action setMin(value: string) {
    let result = parse(value);
    if (result.ok) {
      this.args.data.min = result.value;
    } else {
      this.minParseError = result.reason;
    }
  }

  @action persist(event) {
    event.preventDefault();
    this.args.save(this.args.data.min);
  }
}

The key, however you do it, is that the public contract of these types is explicit about how the data is being managed:

  • You have source data, which can be validated in actions at the point of ownership to whatever degree you need.
  • You have downstream data which is allowed to diverge from that, and which can also be validated in whatever way in actions locally, but the parent remains in charge (whenever the parent updates its data, it’ll “automatically” get updated downstream, no manual syncing required).
  • The values in the input itself stay in sync by way of wiring up the {{on}} modifier(s) just like usual, but by separating the “buffer” from the source of truth, and having the input push into the buffer until you decide to persist it to the source of truth, you get the level of control you need there.

Hopefully that’s helpful!

Ah, thanks @chriskrycho, for the illumination! Now I just have to understand the lesson behind the lesson here and apply it wherever it applies. From your answer I glean several epiphanies, :slight_smile: fuel for many refactorings to come.

  • Raw input field values don’t need shadowing in local component state. I can use the value property of the <input> element itself to hold the local text state across the interval leading up to a single change event; I don’t need to maintain a text buffer in the component to refresh it from. Apart from user editing, target.value won’t get overwritten until something upstream changes tracked data feeding <input value={{something}}/>, which causes the control itself to be re-rendered. (A bare-bones sketch of what I mean follows this list.)

  • Carrying input state in input elements is part of applying “HTML first.” (Ab)using Glimmer to neutralize the DOM’s intended role in keeping input state has a lot in common with the kind of logic we sometimes see for making <div>s behave (sort of but not fully) like buttons.

  • Common thought patterns don’t always lead to common code patterns. While the parse buffer is conceptually similar to the change set buffer, because the text can be buffered in the DOM itself, relying on that feature fits neatly into the tracking flow “for free”. It avoids the cognitive overhead of explicitly injecting an (unnecessary) component-level text buffer into the data-down flow just to maintain unparsed text outside the DOM.

  • Helpers provide our only post-Octane “hook” for affecting state in the “data-down” path. This fills the “invocation gap” I was running up against, as fields aren’t procedural code and getters are designed to not affect state. (Yes, I know a couple of things Ember could like :wink: are on the way.)
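
For the record (and my future self), here is a sketch of the first point above, with invented component and argument names:

// numeric-field.js: the <input> itself holds the in-progress text;
// the component keeps no formText of its own.
import Component from '@glimmer/component';
import { action } from '@ember/object';

export default class NumericField extends Component {
  // template (sketch):
  // <input value={{@value}} {{on "change" this.changeSubmitted}} />
  //
  // @value rewrites the input only when the upstream tracked data changes;
  // between change events, the DOM's own value is the only text buffer.

  @action changeSubmitted(event) {
    // hand the raw text up for parsing and validation
    this.args.onSubmit(event.target.value);
  }
}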

Follow-up questions:

  • Doesn’t a well-written changeset implementation encapsulate the kind of “data buffer” logic you described at length?

I suspect all my woes were in how I was handling the text, and that ember-changeset was already doing the right thing all along. That’s the next thing I’ll need to prove out. If so, I’ll gladly use the product of somebody else’s years of learning curve rather than initiating my own. :slight_smile: If not, well, you’ve provided some ideas I can adapt to get what I need. Thanks for that.

  • Do helpers run during rendering, the way modifiers do? This could affect what we can afford to do there.

Since they provide attribute and parameter values the same way that fields and getters do, I suspect they run before rendering, putting them in the “render-safe” path. It occurs to me that a helper could also provide a cleaner alternative to many (ab)uses of {{did-update}} modifiers, and that some of the code sitting behind pre-resolved promises in many modifiers might also be candidates for a helper instead.

Many thanks, Chris! I always learn from your posts and learn even more from the thought process behind them, which I’m thankful you’re always careful to show.

Hopefully I can clarify some things by responding!

This is true, but in many cases (for rapid feedback to the user via getters which compute the validity of a given field, for example), you will want it to stay in sync. The key is what to sync it with, and how to keep that thing in sync with the rest of the system. You want something like a changeset/buffer/etc. because that gives you a clean point of integration and separation, which can be propagated into at will/as you need.

Depending on the API of the change set, they might work exactly the same way! We’ll come back to this later. You do want to keep the input in sync with some “backing” state, most likely, though you could also imagine not even having the buffer at all and using something like ember-validity-modifier to let all of the interactive validation on the form elements use the native browser APIs for it. (I highly recommend that approach, in fact, because it gets you accessibility in the form feedback for free!)
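
The underlying mechanism there is the browser’s constraint validation API; even without any addon, the idea looks roughly like this (the component and handler names are invented for the sketch):

import Component from '@glimmer/component';
import { action } from '@ember/object';

export default class NumericInput extends Component {
  // attached to the <input> via {{on "change" this.checkNumeric}}
  @action checkNumeric(event) {
    let input = event.target;
    // setCustomValidity and reportValidity are native DOM constraint-validation APIs
    if (/^\d+$/.test(input.value)) {
      input.setCustomValidity('');
    } else {
      input.setCustomValidity(`${input.value} is not a number`);
    }
    input.reportValidity();
  }
}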

Here, it’s worth noting that the helper isn’t actually doing something that a getter couldn’t do, but it’s much clearer about the semantics involved, and it puts the question of who gets to create the buffer back up one level. You could do this instead:

import Component from '@glimmer/component';
import { cached } from '@glimmer/tracking';
import { action } from '@ember/object';
// bufferFor wraps the arg in a buffer/FormModel, as in the earlier examples.

export default class MyForm extends Component {
  @cached get bufferForMin() {
    return bufferFor(this.args.min);
  }

  @action setMin({ target: { value } }) {
    this.bufferForMin.value = value;
  }
}

But that’s a bit confusing in my opinion: you have to think carefully about how and when it will get invoked, including needing to put @cached on it to make sure that the getter only hands you a new Buffer when args.min changes, no matter where/how it gets invoked, and you have to think about the fact that what the getter hands you is a (stable) Buffer.

Helpers (or function-based helpers, at least) and getters are both just pure-functional derivations from root state. The reason to prefer a helper here isn’t so much about invocation, it’s that it makes the contract clearer: the parent provides a buffer to the child.

Yep! And so the key is thinking about when and how to construct a new changeset. I like to do it in something like the way I described, rather than trying to do it as explicit state within the component. That’s where folks usually get in a mess, in my experience, because you immediately start having problems with re-synchronizing state when the parent’s state changes. The helper just makes that much clearer (and a getter would work as well, as shown above… though with the tradeoffs mentioned above!).

I got curious and went and looked up the API for ember-changeset again, and, spoilers: they had the same idea! The first example there should look awfully familiar:

{{! application/template.hbs}}
{{#with (changeset model this.validate) as |changesetObj|}}
  <DummyForm
      @changeset={{changesetObj}}
      @submit={{this.submit}}
      @rollback={{this.rollback}} />
{{/with}}

I would tweak it slightly, but with the same basic semantics:

<DummyForm
  @changeset={{changeset @model this.validate}}
  @submit={{this.submit}}
  @rollback={{this.rollback}}
/>

This is, indeed, just the same as the use of the {{buffer-for}} I outlined. :sweat_smile: Notice that you can still do distinct things on the input vs. change events for individual fields, as well as more on form submit.

Note that part of what you may be feeling here is that ember-changeset gives you a validation primitive, and so it’s hard to avoid the temptation to push everything into that. I think that’s a reasonable and indeed good choice on the part of ember-changeset, but if you need more granularity you may want to handle it differently! It may be that in your case you only need the “buffering” behavior of ember-changeset and want to combine it with other ways of parsing and validating more directly.

Inside the DummyForm, you would just do something like:

<form {{on "submit" this.saveChanges}}>
  {{#each-in @changeset.fields as |name field|}}
    <label>{{name}}:
      <input
        value={{field.value}}
        {{on "input" (pick "event.target" (fn this.parseAndSet name))}}
        {{on "change" (pick "event.target" (fn this.fullyValidateAndSet name))}}
      />
    </label>
  {{/each-in}}
</form>

Then you would invoke your form with the changeset like this:

<DummyForm
  @changeset={{changeset @model this.validate skipValidate=true}}
  @submit={{this.submit}}
  @rollback={{this.rollback}}
/>

and then your saveChanges action would be responsible for doing the top-level validation:

import Component from '@glimmer/component';
import { action } from '@ember/object';

export default class DummyForm extends Component {
  @action async saveChanges() {
    let { changeset } = this.args;
    // ember-changeset's validate() is async; isValid is a property, not a method
    await changeset.validate();
    if (changeset.isValid) {
      this.args.submit();
    } else {
      // do something else to help the user!
    }
  }
}

Mostly, notice here that just because ember-changeset gives you a validation primitive doesn’t mean that it’s the only primitive you can use. You can compose them together at whatever level of granularity makes sense!

I think this may represent a slight misunderstanding of the timing semantics of modifiers vs. other things (and the idea of “render-safe” work is something I’ll come back to below). Both helpers and modifiers are invoked during rendering, as are components, and as are getters on controllers or components (or, heck, even on a model if you happen to return an object with a getter/CP on it)! The semantics for all of them are:

  • they are all invoked during first render (as you would imagine: they have to be!)
  • after that, none are invoked again during any render unless reactive (tracked) state they consume changes

The way I would think of this is the same for anything you invoke in a template: a component, a helper, a modifier, or even a getter in a backing class: you don’t want to do something which is going to block the thread and prevent progress from rendering further! In the cases where you have some long-running task, you can use later from @ember/runloop or push it into a web worker or whatever the appropriate approach is, and then grab something like ember-async-data [shameless plug] to wrap up the asynchrony and represent it as data. (This is another example of representing business logic as data!)

Helpers and getters both! I have found zero legitimate use cases for render modifiers other than migrating legacy code—and even there, it’s often clearer and easier to just refactor to true one-way data flow, given how tangled and messy many legacy uses of didReceiveAttrs etc. were!

Thanks again, Chris.

Our UI has tabs of settings forms on the left and tabs of results on the right. On initial load, we have valid initial values for all data. Our forms have no submit button and the user is able to see results change immediately every time they make a valid data change to any field. This encourages pick-and-try behavior that a separate submit button would bog down. Because we enter electrical values with strings like 10.23p or 234k, we really can’t validate anything on input events, only on change.

On events that signify the user has finished changing a field (including submit, change, etc.), we (1) validate that we can parse the entered data to native data, (2) validate that the native data item value “makes sense” individually, (3) validate that the native data value will play well with other data already accepted, (4) save the data in the field that the rest of the app sees.
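
Roughly, the change handler looks like this sketch; the helper names (parseElectrical, validateValue, validateAgainstModel) and the TrackedMap of errors are stand-ins rather than our actual code:

import Component from '@glimmer/component';
import { action } from '@ember/object';
import { TrackedMap } from 'tracked-built-ins';

export default class SettingsField extends Component {
  errors = new TrackedMap();

  @action fieldChanged(name, event) {
    let parsed = parseElectrical(event.target.value);             // (1) text -> native value
    if (!parsed.ok) {
      this.errors.set(name, parsed.reason);
      return;
    }
    let error =
      validateValue(name, parsed.value) ||                        // (2) value in isolation
      validateAgainstModel(name, parsed.value, this.args.model);  // (3) consistency with accepted data
    if (error) {
      this.errors.set(name, error);
    } else {
      this.errors.delete(name);
      this.args.model[name] = parsed.value;                       // (4) visible to the rest of the app
    }
  }
}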

Attempting a save on ember-changeset with our validator handles 2, 3, 4 for us, but we have to deliver the same type of data into ember-changeset that goes out of it, so we can’t put the data into ember-changeset until we have a parsed value to offer it. This means we have two distinct layers of validation - parse-validated text and value-validated data. We use the changeset to feed any errors from the parse validation into the errors object the changeset manages so we can display them all together.

If we can get away with using the <input> field’s value property as the sole place to hold the data in its text form, that would be the cheapest. Otherwise, we’ll have to cascade two changeset-like mechanisms, one for validating and buffering uncertain text to native-form data, and the other for validating and buffering uncertain native-form data to co-valid native-form data.

ember-validity-modifier looks interesting. Thanks for the pointer. It kind of looks like where I’m going, and the accessibility features in particular have my attention. Right now, if any validation error is present, we replace our stack of results tabs with a list of validation issues the user must correct before they will be able to see any results.

The semantics for all of them [helpers, modifiers, components, getters] are:

  • they are all invoked during first render (as you would imagine: they have to be!)
  • after that, none are invoked again during any render unless reactive (tracked) state they consume changes

But modifiers and blur events seem to be the only places in my code where I get plagued by “write after read” warnings, and where I push changes out one micro-cycle to circumvent them, knowing I’ve just bought an immediate second render and checking my code carefully to ensure it can’t cycle. It looks like helpers and getters are invoked early enough to aggregate the changes they discover into the same render cycle (at least as long as they don’t flow upstream in the component hierarchy? Not sure; I haven’t encountered a problem).
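
Concretely, the workaround I mean looks something like this sketch (handler and property names invented here), deferring the write by one runloop turn with next from @ember/runloop:

import Component from '@glimmer/component';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';
import { next } from '@ember/runloop';

export default class BlurExample extends Component {
  @tracked lastBlurredValue = '';

  // attached via {{on "blur" this.handleBlur}}; deferring the tracked write
  // avoids the "write after read" assertion at the cost of an immediate
  // second render, so the handler must not be able to re-trigger itself.
  @action handleBlur(event) {
    let { value } = event.target;
    next(() => {
      this.lastBlurredValue = value;
    });
  }
}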

you don’t want to do something which is going to block the thread and prevent progress from rendering further!

I don’t think async is ever an issue for us in validation. Does it ever become an issue when there’s no server involved and no data we need to lazy-load to find out what to measure against? All our validations are synchronous in the client and very fast. However, I will still look at ember-async-data, because it sounds like it might be a useful tool for loading our server-supplied parametric data. Another good pointer.