This is what I wrote in the commit/PR message. It was meant to communicate with the rest of the team about the code in the PR (most of whom don’t work on the frontend, but have worked with/alongside a classic Ember Data frontend in the past). It’s more about the… vibes, so maybe not every detail is accurate, but directionally it should be alright. Figured I’d share it here as well.
The Theory
The Old Ember Data
Historically, ember-data has been described as a “resource-centric” library – you define your “models” (attributes and relationships), and then you ask the store to “find all users”, “query for users with this email domain”, “save this user”, etc. You then define a global set of rules (with limited per-type/per-operation customization) to describe to the store how to fulfill those requests.
Ultimately, the goal is to make application code easy to develop by centralizing the complexity of performing the various CRUD operations, parsing responses, handling common errors, etc. Another important goal is for the store to also act as a cache for already-loaded data, so that when a different part of the application asks for the same data, the request can be fulfilled locally, skipping a trip to the server (de-duping requests).
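For those who haven’t touched the classic APIs in a while, the shape is roughly this (illustrative – the model and field names are made up):

```js
// app/models/user.js – the classic, resource-centric setup
import Model, { attr, hasMany } from '@ember-data/model';

export default class UserModel extends Model {
  @attr('string') name;
  @attr('string') email;
  @hasMany('post', { async: true, inverse: 'author' }) posts;
}
```

And on the consuming side, you ask the store for resources by type, and the globally configured adapter/serializer rules decide how each call actually hits the API:

```js
const users = await this.store.findAll('user');
const admins = await this.store.query('user', { filter: { role: 'admin' } });
const newUser = this.store.createRecord('user', { name: 'Sam' });
await newUser.save();
```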
This seemed like a natural approach to the problem, but it has some limitations. It only works well for very consistent/uniform CRUD-style APIs, because the core API needs to define the possible kinds of operations rigidly and consistently across types, and the code needs to handle the common functionality (errors, etc.) generically across types. There is some room for customization, but it is still quite limited, especially when you need to express operations that fall far outside the confines of the expected patterns.
There are other problems too. It’s difficult to support APIs that don’t map as closely to the CRUD paradigm expected by the core code (think GraphQL). And even the best-behaved resourceful APIs tend to have one-off (realistically, many-off) exceptions to the norm that can be challenging to model. The difficulties extend beyond specifying how to describe the one-off request and response: as data gets into the cache through these responses, the code needs to know how to merge it with existing records, and how one operation invalidates other data not directly associated with the request (think, e.g., deleting the 1: side of a 1:many relationship). There are also concerns about the code size of model classes in bigger apps, but that is less relevant to us for now.
The New Ember Data (& Warp Drive)
To better support these use cases, ember-data is in the process (and pretty close to the end) of re-inventing itself as a “request-centric” library, by breaking up the existing functionality, re-assembling it in new ways, and shedding a bunch of stuff along the way.
At the core, it starts with a `RequestManager` service which, if nothing else, gives you a centralized `fetch()`++ API to put common logic in. Think of something similar to Faraday in Ruby, which you use to make requests in lieu of the basic built-in client: you can use middleware to automatically attach auth tokens, headers, etc., but it otherwise stays flexible enough to let you make whatever requests you need to make without getting in the way.
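Concretely, that looks roughly like this (a sketch based on the `@ember-data/request` docs; the auth handler and the `getToken()` helper are made up):

```js
import RequestManager from '@ember-data/request';
import Fetch from '@ember-data/request/fetch';

// A middleware-style handler, à la Faraday: copy the incoming request,
// attach a header, then pass it along the chain.
const AuthHandler = {
  request(context, next) {
    const headers = new Headers(context.request.headers);
    headers.set('Authorization', `Bearer ${getToken()}`); // getToken() is hypothetical
    return next({ ...context.request, headers });
  },
};

const manager = new RequestManager();
manager.use([AuthHandler, Fetch]); // Fetch is the terminating handler that actually calls fetch()

// Usage: hand it a plain fetch-style request, await the response.
const { content } = await manager.request({ url: '/api/users', method: 'GET' });
```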
Then it builds on that by adding a caching layer on top. This sits somewhere between the classic ember-data store (which caches ~ model instances) and the browser’s cache (which caches entire raw response payloads, primarily matched by URL). It can do coarse-grained URL-based caching, but as long as you describe the semantics of your requests, it can also understand the semantics of the responses and cache things in a more granular, semantically rich way – including how things relate to each other across different types of requests – closer to what the classic ember-data store does.
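In code, the cache plugs into the handler chain via the store’s `CacheHandler` (a sketch based on my reading of the `@ember-data/store` docs):

```js
import RequestManager from '@ember-data/request';
import Fetch from '@ember-data/request/fetch';
import { CacheHandler } from '@ember-data/store';

const manager = new RequestManager();
manager.use([Fetch]);

// CacheHandler sits between you and the network: it resolves requests from
// the store's cache when it can, and inserts response payloads into the
// cache on the way back out (keyed by URL, and more semantically when the
// request describes its own semantics).
manager.useCache(CacheHandler);
```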
The new ember-data seems to have largely gotten out of the business of abstracting the operations themselves (e.g. `store.findRecord('type', id)` is largely going away). Fundamentally, it operates on requests (and a request in this context maps pretty closely to the raw request concept in `fetch()`) – give me a request, describe to me what it does, and I’ll fulfill it, taking caching etc. into account. Naturally, this reintroduces a lot of boilerplate that was previously abstracted away. The intention is that you write that boilerplate yourself as plain helper functions that return these requests (called request “builders”), and ember-data gives you a little help via utility functions you can use inside these builder functions. You’d then abstract/centralize across the application yourself, using the normal patterns (make a folder you import from, make a service, etc.).
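A builder, then, is just a plain function that returns a request description, where fields like `op` describe the request’s semantics to the cache. A sketch (the URL shape and the `findUser` name are ours, not a prescribed format):

```js
// app/builders/user.js – "builders" are plain functions returning requests
export function findUser(id) {
  return {
    url: `/api/users/${id}`,
    method: 'GET',
    headers: new Headers({ Accept: 'application/vnd.api+json' }),
    op: 'findRecord', // tells the cache what kind of operation this is
  };
}

// usage, anywhere in the app:
// const { content } = await store.request(findUser('1'));
```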
Finally – less relevant to us directly – another goal of the new ember-data is to make the library usable outside of Ember, by building the core APIs (request manager, cache, etc.) in a framework-agnostic/plain-JavaScript manner and exposing hooks for gluing them into a framework. Since the maintainers use and are affiliated with Ember, there is first-class support for that. But this is largely what the Warp Drive naming is about – in between those API changes, the library is simultaneously rebranding to Warp Drive to reflect the framework-agnostic plan, and some of the newer, non-Ember-specific APIs have started to ship under that name.
Current State of Ember Data/Warp Drive
The “old way” of doing things in Ember Data (models, CRUD APIs in the store, etc.) is still there, and most existing applications still use it. However, the current effort by the maintainers is to complete the new story, and the plan is to eventually deprecate the old relics.
That being said, there are aspects of the new story that are not quite complete/ready for prime time yet, specifically “SchemaRecord” – the replacement for models.
Briefly, the plan is to replace models – and the need to enumerate attributes and relationships upfront as model code – with schemas (descriptions of the shape of the response data) that can be populated on demand: loaded separately as JSON, derived automatically from a structured JSON:API response, etc. This also allows the same logical type to have different schemas for different operations. For example, the “account” type could have a `password` field during the sign-up and change-password operations, but otherwise that field is omitted.
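To make that concrete: a schema is just data, so it can be shipped or fetched at runtime. Something like this (an illustrative shape only, not the exact SchemaRecord schema format):

```js
// the base "account" schema, loadable as plain JSON
const accountSchema = {
  type: 'account',
  fields: [
    { name: 'email', kind: 'field' },
    { name: 'name', kind: 'field' },
  ],
};

// a variant of the same logical type for the sign-up and change-password
// operations, where the password field actually exists
const accountWithPassword = {
  ...accountSchema,
  fields: [...accountSchema.fields, { name: 'password', kind: 'field' }],
};
```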
`SchemaRecord` is the code that automatically – based on the schema – hydrates response data into rich JavaScript objects for presentation in the UI. The package exists, but some parts of it aren’t quite “done” – my understanding is that it has to do with support for collections, which affects relationships. And in general, the story for fetching relationships in the “new world” is also a bit of a work-in-progress/underbaked, but the maintainers expect that to be finished relatively soon (optimistically, perhaps “by the end of the year”).
After consulting with the maintainers (@runspired specifically), this is the plan we came up with based on all of that and the timeframe of this project:
- Overall, use whatever is ready, and use the “good parts” of the old stuff in a way that aligns with the new direction, as needed
- Use `RequestManager`
- Use builders for making requests
- Use `@ember-data/store`, which is a more barebones version compared to `ember-data/store` (subtle difference – without the `@`); the latter pre-configures a bunch of legacy behavior, a lot of which we don’t need
- Configure the `@ember-data/store` with as much/as little legacy behavior as needed, explicitly, so we can keep an eye on what we are using (see the sketch after this list)
- Use `@ember-data/model`, but minimally – treat the models mostly like schemas: use them only to annotate the fields, relationships, and derived getters
- There is more to say/configure when it comes to relationships, but since we are using those very minimally at the moment, that can be deferred to a future PR
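For the store-configuration bullet, the assembled result would look something like this. This is a sketch from memory of the 5.x docs – the hook names and import paths (especially the `@ember-data/model` hooks) have churned across versions, so treat it as directional and check the current docs:

```js
// app/services/store.js – assembled explicitly, so every piece of legacy
// behavior we opt into is visible in our own code
import Store, { CacheHandler } from '@ember-data/store';
import RequestManager from '@ember-data/request';
import Fetch from '@ember-data/request/fetch';
import JSONAPICache from '@ember-data/json-api';
import {
  buildSchema,
  instantiateRecord,
  modelFor,
  teardownRecord,
} from '@ember-data/model/hooks';

export default class AppStore extends Store {
  requestManager = new RequestManager().use([Fetch]).useCache(CacheHandler);

  // derive the schema service from our (minimal) @ember-data/model classes
  createSchemaService() {
    return buildSchema(this);
  }

  // a cache that understands JSON:API response semantics
  createCache(capabilities) {
    return new JSONAPICache(capabilities);
  }

  // opt into legacy Model instances as the record class, explicitly
  instantiateRecord(identifier, createRecordArgs) {
    return instantiateRecord.call(this, identifier, createRecordArgs);
  }

  teardownRecord(record) {
    return teardownRecord.call(this, record);
  }

  modelFor(type) {
    return modelFor.call(this, type) ?? super.modelFor(type);
  }
}
```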
What is in this PR
…