Should we keep using Ember Data?


I’m working on a team building an ecosystem of microservices. We are using Ember for our administrative dashboard product. Part of the reason we chose it is that it’s designed for “ambitious web applications” and if that term ever applied to anyone, it’s us. Despite the learning curve, Ember has been a pretty fantastic tool for us (I was coming from a Backbone app, personally), but Ember Data is becoming a serious pain point for us.

It seems that Ember Data’s power comes from its preference for JSON API conventions, which is great if you’re building one central back-end specifically for your Ember front-end. But for our purposes, having one client-side web application framework’s opinions dictate the structure of dozens of RESTful services with as many authors and non-Ember consumers is pretty much a non-starter.

The current needle in our eye has been nested URLs, which I see have been discussed at length on this forum - though the proposed solution(s) don’t seem to cover the case we’re running into. Most importantly, manipulating resource relationships by way of HTTP is not something Ember Data can do. But lots of APIs use nested URLs - both for creating new resources and for creating relationships between resources.

Am I wrong to think that Ember Data can’t handle relationships that are manipulated by HTTP calls? Is there any argument to be made that Ember Data is a useful tool for applications built on APIs that aren’t necessarily in line with its idea of what an API should be? Is anyone using an alternative - beyond making $.ajax calls manually - that is flexible enough to deal with APIs that are not designed around those conventions?


Not sure about nested URLs, nor the larger existential question of continuing use, but we are using Ember Data to manage relationships.

Design.DesignSerializer = DS.ActiveModelSerializer.extend DS.EmbeddedRecordsMixin,
  attrs:
    materials:
      serialize: 'ids'

The request body will then include

"material_ids": [1,2,3]

We handle saving the relationship changes in the designs server endpoint.


Here’s the sort of thing I’m referring to. There are two collections in an API: /foos and /bars.

  • foos resources (found at /foos/{foo_id}) have a has-many relationship to bars (found at /bars/{bar_id}).
  • The representation of a foo does not include a bars array (of records or IDs) or links or any data related to the bars.
  • Interacting with the bars that relate to a foo is done via a GET/POST/DELETE to /foos/{foo_id}/bars.
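Concretely, because the foo representation carries no relationship data at all, managing bars comes down to raw requests against the nested URL. A hypothetical helper (the names and the `bar_id` body field are made up for illustration; only the URL shape comes from the API described above) that describes those calls:

```javascript
// Hypothetical helper describing the raw HTTP calls needed to manage the
// foo -> bars relationship, since nothing in the foo representation points
// at the bars. Only the nested URL shape comes from the API above; the
// request-body field name is an assumption.
function barsRequest(fooId, method, barId) {
  return {
    method: method,                    // 'GET', 'POST', or 'DELETE'
    url: '/foos/' + fooId + '/bars',   // nested URL; no links or ids on foo
    body: method === 'POST' ? { bar_id: barId } : undefined
  };
}
```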

I’ve been reading about these sorts of issues - they seem to be pretty common - for over a day now and gotten nowhere. Almost invariably, the answers are “change the API” and that’s just not reasonable if the authors of your APIs are not writing them specifically to be used with Ember Data.


If you’ve got data coming from lots of different endpoints with different payload formats, you might find it easier to manually push records into the ember-data store. This lets you query any URL at all - just do an AJAX request with jQuery. It also gives you an opportunity to reformat the payload into ember-data’s format on a per-URL basis rather than just per type (as with the standard ember-data serializers/adapters). Here’s an example from one of my projects (note it doesn’t return any results, but you could easily add that by using a filter on the store):

import Ember from 'ember';
import config from 'project/config/environment';

export default Ember.Object.extend({
  moveTask: function(movement, organisation){
    var _this = this;
    return Ember.$.ajax({
      url:  config.APP.API_BASEPATH + "/organisations/" + organisation.get("id") + "/task_movements",
      type: 'POST',
      data: {
        movement: {
          originalTaskListId: movement.originalTaskList.get("id"),
          currentTaskListId: movement.currentTaskList.get("id"),
          taskId: movement.task.get("id"),
          precedingTaskId: movement.get("")
        }
      }
    }).then(function(payload){
      _this.get('store').pushPayload('task-list', payload);
    });
  }
});


Thanks @opsb.

Making certain calls manually and updating the store myself is something that’s been bouncing around in my head. Of course, the store raises questions by itself - especially in an application where multiple people might be working on records simultaneously. We could be setting ourselves up for a catastrophe if User A is working on Object 1 and saves it, but User B has it in their store and starts working on it, too. We’d have to be strategic about manually reloading records at key points.

I was able to force my adapter to fetch relationships by manually adding a links property in the appropriate normalizeHash function in the serializer:

if (!_.has(hash, 'links')) {
  hash.links = {
    bars: '/foos/%@/bars'.fmt(hash.id)
  };
}

I’m still not sure how best to approach atomic creation/deletion of relationships in a RESTful way (i.e. POST or DELETE to /foos/1/bars). This is where manual requests might come into play - that is, make that request manually, then manually update the foo record’s bars.
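If we do go the manual-request route, the local patch-up afterwards can stay simple. A sketch, assuming the foo is held as a plain object with a `bars` array rather than a DS.Model (the function names are hypothetical):

```javascript
// Sketch: after POSTing or DELETEing to /foos/{id}/bars manually, patch the
// local copy of foo so the UI sees the relationship change without a full
// reload. Assumes foo is a plain object with a `bars` array, not a DS.Model.
function addBar(foo, bar) {
  return Object.assign({}, foo, { bars: foo.bars.concat([bar]) });
}

function removeBar(foo, barId) {
  return Object.assign({}, foo, {
    bars: foo.bars.filter(function (b) { return b.id !== barId; })
  });
}
```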

The question that brings up for me is: what’s the cognitive overhead of thinking in a sometimes-ED-but-sometimes-not way? At what point does ED become burdensome instead of useful? We’d have to come up with some clear rules there and it may ultimately be less difficult to just do away with ED entirely.

I’m currently kicking myself for not understanding how tightly ED was coupled to JSONAPI when we chose it. I feel like it should be clearer in the documentation about how opinionated it is about the structure of your API(s).


@misteroneill I don’t think it’s fair to say that ember-data is that tightly coupled with JSON-API. It’s not that difficult to write your own adapter or serializer to map to a different format. Whether or not it’s worth the effort is another question though…


I prefer using Ember without Ember Data, and I learned a lot from this article.


Ember-Data works quite well with HAL:

Or Firebase:

In fact, the included ActiveModelAdapter tracks the Ruby active_model_serializers gem.

If you want to use JSON-API, I suggest this adapter:

Anyway, whatever the faults of Ember-Data (and sure, there are faults), I would not suggest that one of them is a “coupling” to JSON-API. I regularly write customized adapters and serializers to match quirks in an API.

If you need the features Ember-Data provides, you can write an adapter for most well-constructed and consistent APIs. It takes some elbow grease, but so would adding all those features on top of a custom data-layer.

As a real-world example, this codebase uses a HAL adapter, _links to describe relationships (no ids), hits two different API hosts (even through relationships), and supports nested URLs. Definitely not a JSON-API, but we use Ember-Data quite successfully.

A few additional thoughts:

  • You will always have a more successful API story if you build it to a standard. I don’t care which. Choosing a standard will make your life with Ember-Data simpler, but also with any other kind of client. It provides a concrete guide to check your behavior against, instead of the design details being trapped in the head of a developer. This is the biggest single piece of advice I can offer. Reality dictates that you cannot always re-write an existing API, and I don’t suggest that, but you should keep the goal of moving toward standards in mind.
  • “Am I wrong to think that Ember Data can’t handle relationships that are manipulated by HTTP calls?” If an API call has a side-effect of changing a secondary model, we often either a) sideload the changed model. Ember-Data will load any data you push in a sideload at any time. Or b) on the client side, reload or fetch the needed data after the save completes. Promises are your friend here, and make chaining a save then a reload easy.
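Option (b) chains naturally with promises. A minimal sketch, where `saveFoo` and `reloadBars` are hypothetical stand-ins for `model.save()` and a relationship refetch:

```javascript
// Sketch of "save, then reload the affected records" via promise chaining.
// saveFoo and reloadBars are hypothetical stand-ins for model.save() and
// a relationship refetch; any promise-returning functions work.
function saveThenReload(saveFoo, reloadBars) {
  return saveFoo().then(function (saved) {
    return reloadBars(saved.id).then(function (bars) {
      return { saved: saved, bars: bars };
    });
  });
}
```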

I suggest starting with unit tests, to be sure you understand how your data layer works. Testing helps to break down the insurmountable task of writing a data layer (of any kind) into manageable pieces.


If you’re using Node / Sails

Check out the advanced branch - early days but I am using it without issue so far.

The advanced branch brings JSON API links to the outputted JSON.

SANE uses these blueprints, which are another great project for easing dev.


Maybe not JSONAPI specifically, but there certainly is a problem with relationships that are not represented in a "links" property or side-loaded (which is the biggest source of frustration when building against one of the APIs I need to build against).

Thanks for the insight about manually interacting with the store. I think my strategy is going to be creating my own "links" during normalization in these cases, then dealing with sending these relationships back to the API outside of the normal ED save cycle.


Agreed on having a standard for APIs, but there are some common REST practices (i.e. not representing relationships via side-loading or the very-specific "links" property) that fall pretty flat with ED.

Thanks for the suggestion re: manually reloading data after saving. I think we’re looking at following an approach like that.


Agreed on having a standard for APIs, but there are some common REST practices (i.e. not representing relationships via side-loading or the very-specific “links” property) that fall pretty flat with ED.

Again, this is not true. The included adapter expects links, but to customize it you only need to write a custom extractMeta function. There’s an example in the hal-9000 adapter. Of course you don’t need to use links at all, and can always just return an array of ids (the default usage you will most often see) or a set of embedded records with a little configuration.
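To illustrate the id-array option, here is a hypothetical reshaping step - all field names are made up - that turns a payload embedding full bar records into the sideloaded, id-array form the REST conventions expect:

```javascript
// Hypothetical sketch: convert a payload that embeds full bar records into
// the flat, sideloaded shape with an id array that Ember Data's REST
// conventions expect. All field names here are invented for illustration.
function normalizeFoo(raw) {
  return {
    foo: {
      id: raw.id,
      name: raw.name,
      bar_ids: raw.bars.map(function (b) { return b.id; })
    },
    bars: raw.bars // sideloaded records, ready for the store
  };
}
```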

I don’t suggest any of this is effortless, but the story is far better than “falling flat” on non-links relationships. I expect most Ember developers don’t use links at all - it is a fairly new addition to Ember Data.


Here’s my take. I’ve used Ember Data in anger and got a long way with it, including dynamic model attributes based on field model instances. Think of a generic product model with dynamic rich model fields described by models in a separate fields collection, along with all kinds of constraints, calculations, and field-interdependency rules for sophisticated product descriptions/recipes and composition of products/parts - e.g. changing a user-defined ‘area’ or ‘difficulty’ field on a product can recalculate cost and quantities of multiple sub-parts ad infinitum based on rules and complex calculations. Quite complex stuff that pushes the envelope. I also use embedded records for things that have a complex structure. It works.

But I made my backend use flat URLs for everything, no nesting ever, which helps things work better with Ember Data or any API for that matter. Given all models have ids (I use v4 UUIDs), any nesting is just redundant. Ember Data can be made to work with almost any data if you invest the time in understanding how the adapter and serializer layers work. Sometimes there are odd limitations in the interfaces, and you may have hit one of those.

I have, however, found that Ember Data is just too slow to be practical in an ambitious application. Pull back a thousand items and you wait two seconds or more on a high-end machine in the bowels of Ember Data before Ember even begins the processing required to update the view. I helped resolve some perf problems in Ember Data which doubled the speed in the latest betas, but it’s still not good enough to be usable. The time from an AJAX POJO to entering the render path should be milliseconds at most, but it’s often thousands of times that in most of my cases, even for relatively simple models. I had a colour database of some 40k items/descriptions with high-cardinality relationships, and I needed to cache a fair bit to make offline use practical. It’s infeasible with Ember Data, so you avoid it in those cases.

I’ve also implemented some sophisticated save/update orchestration that also deals with undo/redo within editing contexts. This was accomplished using controllers mediating models. It worked, but it’s not as clean as I would like, and it took a lot of effort using buffered proxies. In the end Ember Data ruins things for you, as you still have to push changes into model state in order to sync to the server, and other parts of the UI update once the model changes. Then undo/redo gets complicated once that state has been pushed into the Ember Data layer. Layers and layers of stuff you wish wasn’t there.

Alas, I have come to the conclusion my needs are better served by sticking close to POJOs and a Flux-like architecture using immutable.js to hold all application state. It’s a different way of thinking about state management and well worth it IMO. Undo/redo practically for free, both globally and for multiple (even overlapping) editing contexts; the ability to serialize/store/transport all app state trivially; never having unwanted side effects or UI changes; controlling exactly when state gets updated and propagated vs. backend/server synchronization. It’s fast, simpler, more powerful, and uses less memory. Above all I avoid models and just have state which components render. State mutation happens via a well-defined semantic layer ‘at the top’, e.g. ‘add person’, ‘assign task’. Whatever state changes need to occur happen in that layer. I don’t need to deal with async fetch/relationship/promise issues down in components, views, templates, and controllers like you do when using Ember Data, since the state is pretty much a POJO. This is largely aligned with the Ember 2.0 DDAU (data down, actions up) approach, except for the fact that components can pull state from services directly and can send service actions, rather than all state and actions being mediated by a parent component - which I think is flawed, but that deserves a separate discussion.
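To make the ‘semantic layer at the top’ idea concrete, here is a minimal sketch using plain frozen objects as a stand-in for immutable.js; each action returns a new state, and keeping past states in a history array makes undo trivial. The action names are the hypothetical examples from above:

```javascript
// Minimal sketch of "semantic actions at the top" state management, using
// plain frozen objects in place of immutable.js. Every action returns a
// brand-new state object; nothing is mutated in place.
function addPerson(state, person) {
  return Object.freeze(
    Object.assign({}, state, { people: state.people.concat([person]) })
  );
}

// history holds [oldest, ..., present]; undo just drops the present state,
// which is why immutable state makes undo/redo practically free.
function undo(history) {
  return history.length > 1 ? history.slice(0, -1) : history;
}
```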

If you don’t know what Flux is, watch the Facebook Flux videos (but ignore their horrible public implementation) and perhaps check out reflux, fynx, barracks, and other Flux-like implementations for examples and inspiration of the basic idea.

I’m still refining how I do things but certainly Ember Data is not part of my approach any longer.


One of the APIs I develop against does not include any meta-data about relationships (or embedded records) with the representation(s) of resources, so I started manually defining links in the normalize function as a proof of concept.

EDIT: Actually, I read that too fast. That solution doesn’t work because it assumes that there is a _links property in the payload. I’ll have to continue manually defining links during normalization.


I’ve ended up forking ember-data 0.14 and just running with it, because I was sick of having the carpet pulled from under me with every new release. I wish I had abandoned ember-data altogether, but at that stage it was too big a rewrite to move away from it. I feel vindicated, as they are still changing the landscape with each release; now I hear from EmberConf that once JSON API is baked in, everything will be good. I have a huge degree of cynicism that 1.0 will ever happen.

What I like about ember-data is having one true model throughout the application, but as ahacking states, it just does not scale. I found I could not pull back any more than about 50 records at a go for any particular model or else the UI thread would hang. I infinite-scroll everything and retrieve all datasets in chunks of no more than 25 records.

Part of my application is an email client like Gmail, with a large JSON payload thanks to the size of an email body. I found I could only pull back about 5 or so emails at a time, so I load everything in small chunks.
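The chunked-loading approach can be sketched as a small paging loop; `fetchPage` here is a hypothetical function (pageNumber, pageSize) → Promise of an array, empty when exhausted:

```javascript
// Sketch: pull a large collection down in small pages so no single payload
// hangs the UI thread. fetchPage is a hypothetical promise-returning
// function (pageNumber, pageSize) -> array of records, empty when done.
function fetchAll(fetchPage, pageSize, page, acc) {
  page = page || 0;
  acc = acc || [];
  return fetchPage(page, pageSize).then(function (records) {
    if (records.length === 0) { return acc; }
    return fetchAll(fetchPage, pageSize, page + 1, acc.concat(records));
  });
}
```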

The buffered proxy made life saner with ember-data and I found it unusable without.

I never used nested URLs, and for complicated model-data manipulation I would create a model class that represented the job and do all the work on the server. The same was true if I was, for example, updating thousands of records: I would create a model class like TrackContactsJob that would POST to the server and simply return a 201 to say that all was good. I then have a set of convenience methods that update all the ember-data models in memory to reflect what was happening on the server, by updating their _data hash and then calling notifyPropertyChange on each model to let any listeners know that the data had changed. All this was a massive pain in the ass.

I think the idea of an in memory store is a bit outdated now and I just want to work with POJOs and arrays. I also think working with immutable data structures would have made my life so much easier.


I’m interested to know how you use immutable.js in your Ember application. I have not used immutable.js, but I’ve played around with ClojureScript and Om, where all the ClojureScript data structures are immutable.

In Om you have one large application-state atom that represents the whole app state, e.g.

(def app-state
  (atom {:todos [] :meetings []}))

You can then create a reference-cursor to part of the application state to pass to a component:

(defn todos []
  (om/ref-cursor (:todos (om/root-cursor app-state))))

When the cursor changes the component re-renders.

I see it is possible to create cursors with immutable.js - are you using cursors with immutable.js, or how are you optimising your immutable.js use?


I’m not using immutable.js with Ember in any production code. Experimenting only to see where the blockers are, but it’s still early days with the way Ember is designed.

I have been drafting a routable components RFC and looking at how Ember may be able to support the use of alternative state management approaches like immutable data types.

There have been some recent positive changes to the router to separate view concerns, and this needs to go further. Once there is clean separation, outlets can go away, to be replaced with components that render based on route state. The router as a service would just be one of many services managing application state “at the top”. Ideally I’d like all state to be held in a single immutable object/atom, but that doesn’t look achievable in Ember. Also, components need React-like lifecycle hooks, which I believe is being progressed.

I’ve been trying to adopt the things I can do in React which currently affords a superior architecture and see if or how Ember could support something similar. I already have a proof of concept for routable components, but a lot more is required. I was planning to submit an ember RFC but haven’t had time recently to finalize it.


I just blogged about some of this here; I’d be interested in any feedback you have.



As I was reading through your blog post I was trying to think of how that may or may not fit into my current understanding of how to architect an app in ember. A couple of things came up.

Your general arguments against Ember Data and pushing for POJOs make sense for many cases, but I would argue that Ember Data has its uses - not to mention it is highly unlikely that it will be stripped out entirely given where it is now. The official solution to the performance issues you bring up seems to be “don’t use Ember Data”. I’ve talked to a few experienced Ember developers who mentioned that they mix and match ember-data and their own stuff, as it doesn’t supply what you need all the time.

I think this is where the problem begins: Ember needs an official story for best practices around not using ember-data, and clear statements that ember-data is not to be used in certain situations, e.g. time-series data. While experience may make it obvious where ember-data is and is not useful, when people are just starting out all signs point to ember-data, and when you do finally decide not to use ember-data for certain situations, it seems like you’re pretty much on your own.

I’ve heard a couple of mentions that using services to encapsulate server-update logic is a best practice. It decouples your persistence logic from the UI and routes. If this is really the way to go, then services would also seem like a great place to insert ember-data alternatives. Your design, I think, would fit into this quite nicely and could even be bundled into a nice addon. That way, as ember-data fails me, I can install the dagda1-reflux-immutability-sanity addon and create a service for the area where ember-data is failing me. Maybe next time I may even just start with it.

To sum up my morning ramblings: the current default flow seems to be that a new person starts using ember-data; as their app grows and their needs become more varied, they get fed up with ember-data for certain things; then they start using their own stuff or heavily modifying ember-data to make it do things it’s not really designed to do. I suggest

  • stating the limitations of ember-data on the ember / ember-data site
  • encouraging an ecosystem of alternatives by defining a clean and consistent place for developers to create addons of this type that can coexist with each other. I suggest services.


I am going to create an add on called ember-flow or something and I’ll start bringing stuff in.

My advice, based on my experience, is to avoid ember-data. They are constantly ditching everything and starting from scratch. We wasted so much time on my current project persevering before I forked and went with my own version of ember-data.

My hunch is that you will still need some sort of data store, or at least an identity map, but I won’t know until I try to develop something meaningful this way. It does not need the full-blooded features of ember-data, though - I think that’s way too feature-rich for what I need.

I’ll post a link to the addon so others can join in.

Reflux is not my architecture; it works for React, so it can work here.