Adding a function to ED store to normalize+push a single type


#1

Currently you can add data to the ED store using the following methods:

  1. Call store.pushPayload(type, payload), which takes a generic payload, normalizes it, and pushes it to the store. Note that because it works on a generic payload, the adapter’s pushPayload cannot have any information about which types are inside the payload.

  2. Call store.push(type, data), which already expects normalized data.

It seems like we are missing a common use case: normalizing and pushing record data of a known type. The lack of this has already been mentioned in “Should there be an easy way to normalize incoming JSON payload?”, in https://github.com/emberjs/data/pull/1474, and in other issues/PRs.

There are several possible solutions:

  1. Add yet another push method to the store. Something like store.pushAndNormalize but with a better name
  2. Add a {normalize: true} option to the current store.push and store.pushMany functions
  3. Something else completely?

Any thoughts?


#2

Thanks @terzicigor for continuing the discussion! I’m sure we can figure out how to deal with this use-case.

##TL;DR

  • Why does store.pushPayload accept a type parameter but not pass it to the serializer’s pushPayload?
  • If the serializer’s pushPayload shouldn’t know the type, why do all of the other serializer methods that need to know the type (source) get passed it?
  • If store.pushPayload should only accept generic payloads, why does it have a type parameter at all?
  • The key difference from JSON API or the ActiveModel JSON serializer is that the JSON data format for Django REST Framework does not include the type as a key in the returned data, meaning the type can’t be determined from the raw data alone.

##How store.pushPayload currently works

The method store.pushPayload (source) accepts type and payload arguments. It uses the type to determine which serializer to use for that type, and calls that serializer’s pushPayload (source) with the parameters store and payload. The serializer’s pushPayload (at least in the case of the RESTSerializer) normalizes the payload and then pushes it into the store. The main difference between store.pushPayload and store.push (source) is that store.pushPayload defers the actual work to the type’s serializer’s version of pushPayload, whereas store.push does not use the serializer at all.
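To make the dispatch flow concrete, here is a minimal stand-in sketch of it — not Ember Data’s actual internals, just illustrative names (makeStore, recordFor) simulating how store.pushPayload hands the payload to a per-type serializer without forwarding the type:

```javascript
// Sketch of the store.pushPayload dispatch flow with stand-in objects.
function makeStore(serializers) {
  var records = {};
  var store = {
    // store.push expects already-normalized data
    push: function (type, data) {
      records[type] = records[type] || {};
      records[type][data.id] = data;
      return data;
    },
    // pushPayload looks up the serializer for `type` but, as discussed
    // above, does NOT forward `type` to it
    pushPayload: function (type, payload) {
      var serializer = serializers[type];
      serializer.pushPayload(store, payload); // note: no `type` argument
    },
    recordFor: function (type, id) {
      return records[type] && records[type][id];
    }
  };
  return store;
}

// A REST-style serializer: it only works because the payload's top-level
// keys ("users", "posts", ...) reveal the types
var restSerializer = {
  pushPayload: function (store, payload) {
    Object.keys(payload).forEach(function (typeKey) {
      var singular = typeKey.replace(/s$/, ''); // naive singularization
      payload[typeKey].forEach(function (raw) {
        store.push(singular, raw); // real code would normalize first
      });
    });
  }
};

var store = makeStore({ user: restSerializer });
store.pushPayload('user', { users: [{ id: 1, name: 'Tom' }] });
// store.recordFor('user', 1) is now the pushed record
```

The sketch shows why a payload with no top-level type key leaves the serializer stuck: everything it knows about types comes from the payload itself.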

The serializer’s pushPayload only functions correctly because it assumes the payload argument contains enough information to determine the payload’s type. This holds for JSON API and the ActiveModel JSON serializer, but not for other popular REST APIs such as Django REST Framework.

A couple of questions:

  • How would a new store.pushAndNormalize differ from store.pushPayload? It seems like the only difference might be passing the type argument to the serializer’s pushPayload.
  • You mention the adapter’s pushPayload method being called, but adapters don’t have that method. Do you mean the serializer’s pushPayload method? In that case, shouldn’t the serializer be able to know the type?

##Issues when creating a custom adapter/serializer

The problem I encounter when working on ember-data-django-rest-adapter is needing to manually push data into the store through some side channel, such as during application boot. The JSON format from Django REST Framework is in a form like this (note that there is no top-level type key, or any other indication of which model it belongs to):

// data returned from /users/1/
{
    "id": 1,
    "first_name": "Tom",
    "last_name": "Dale",
    "username": "tdale",
    "email": "example@example.com"
}

Manually adding this data to the store would ideally be accomplished by something such as:

var currentUserObj = { /* Tom Dale's user info here */ };
var type = 'user';
store.pushPayload(type, currentUserObj);

But it will not work! store.pushPayload uses the passed-in type to figure out which serializer to use, but it does not pass the type on to the serializer’s version of pushPayload, and as you can see from the JSON format, if the type is not sent to the serializer’s pushPayload the serializer has no way to know which type to use.

This makes it impossible to write a custom serializer pushPayload method that gets called from store.pushPayload when the data format doesn’t include the type. The only way to manually push records into the store is to actually bypass the store and reach directly into the serializer to call a push method:

var currentUserObj = { /* ... */ };
var type = 'user';
store.serializerFor(type).pushPayload(store, type, currentUserObj); // our JSON format requires pushPayload to know the type

This doesn’t feel like the way this functionality should be gained, especially since store.pushPayload seems intended to provide exactly this.

##The strange case of store.pushPayload accepting a type

One inconsistency I have noticed is store.pushPayload accepting a type. The type is required so that the correct serializer can be chosen for the records we are manually pushing into the store, but as we’ve seen above, this type information never makes it into the serializer’s pushPayload implementation.

This means that you actually can’t push generic JSON into store.pushPayload. If you want to pre-load multiple records of many types (such as on initial application load), your application needs to manually separate them by type and call store.pushPayload once per model type:

var bootstrappedRecords = {
    "users": [
        {
            "id": 1,
            "first_name": "Tom",
            "last_name": "Dale",
            "username": "tdale",
            "email": "example@example.com"
        },
        {
            "id": 2,
            "first_name": "Yehuda",
            "last_name": "Katz",
            "username": "ykatz",
            "email": "example@example.com"
        }
    ],
    "posts": [
        {
            "id": 100,
            "content": "...",
            "author": 1
        },
        {
            "id": 101,
            "content": "...",
            "author": 2
        }
    ]
};

store.pushPayload('user', bootstrappedRecords.users);
store.pushPayload('post', bootstrappedRecords.posts);

I know that JSON API and the ActiveModel JSON serializer include the type as a key, but Django REST Framework doesn’t. Maybe a future path is to create a Django-based REST API project that conforms to the JSON API specification, but the current Django REST Framework format seems like something Ember-Data should be able to support.


#3

We ended up adding a store.normalize(type, payload) method that should cover this use case: https://github.com/emberjs/data/pull/2096
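Roughly, the intended pattern is store.push(type, store.normalize(type, payload)). Here is a stand-in sketch of that flow — the camelizing normalize and the store object are illustrative substitutes, not Ember Data’s real implementation:

```javascript
// Sketch of the normalize-then-push pattern with stand-in objects.
function camelize(key) {
  // first_name -> firstName
  return key.replace(/_([a-z])/g, function (_, c) { return c.toUpperCase(); });
}

var store = {
  records: {},
  // stand-in for the per-type serializer's normalize: camelize keys
  normalize: function (type, payload) {
    var out = {};
    Object.keys(payload).forEach(function (key) {
      out[camelize(key)] = payload[key];
    });
    return out;
  },
  // store.push still expects normalized data
  push: function (type, data) {
    this.records[type + ':' + data.id] = data;
    return data;
  }
};

// normalize a single record of a known type, then push the result
var raw = { id: 1, first_name: 'Tom', last_name: 'Dale' };
var user = store.push('user', store.normalize('user', raw));
```

This covers the single-record, known-type case from #1: the caller supplies the type, so the payload itself no longer needs to reveal it.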


#4

I’m sorry if this isn’t the right place to ask, but I’m confused about how to properly use the extract* vs. the normalize* methods.

At the bottom of this section of TRANSITION.md, it says to use extract to restructure the top-level payload and normalize to normalize sub-hashes. The part I’m struggling with is this: if I have embedded objects that need to be restructured, such as comments embedded on a post, is that a normalize operation or an extract operation? The example in that TRANSITION.md guide shows pulling comments out of a post as part of the extraction. Is that the right place for this type of operation?

The reason I think it’s relevant to this thread is because store.push(type, store.normalize(type, payload)) will still not call any of the extracts. So if the idea is that this is the right way to push in data, such as from a socket layer, I’d still have to manually call extract.

Thoughts? I’m open to any advice, we have the use case of embedded records being return from AJAX and also pushed via a socket in the same format. Thanks!


#5

normalize is strictly meant for a single record; if you have a whole payload, you might want to look into pushPayload or extract.

We also moved away from handling embedded records by restructuring; they are now normalized recursively from a normalize call. So if you are using ember-data canary, both extract and normalize should work for the embedded case.
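For readers unfamiliar with what “normalized recursively” means for embedded records, here is a stand-in sketch of the general idea — embedded child objects get pushed as their own records and replaced by their ids. The function name, payload shape, and embeddedKeys mapping are all illustrative, not Ember Data’s actual code:

```javascript
// Sketch: recursively normalize embedded records out of a payload.
// embeddedKeys maps a payload key to the child record's type.
function normalizeWithEmbedded(store, type, payload, embeddedKeys) {
  var normalized = {};
  Object.keys(payload).forEach(function (key) {
    var value = payload[key];
    if (embeddedKeys[key] && Array.isArray(value)) {
      // push each embedded child as its own record, keep only its id
      normalized[key] = value.map(function (child) {
        store.push(embeddedKeys[key], child);
        return child.id;
      });
    } else {
      normalized[key] = value;
    }
  });
  return normalized;
}

// minimal stand-in store that just logs what was pushed
var pushed = [];
var store = {
  push: function (type, data) { pushed.push(type + ':' + data.id); return data; }
};

var post = normalizeWithEmbedded(
  store,
  'post',
  { id: 100, content: '...', comments: [{ id: 7, body: 'hi' }] },
  { comments: 'comment' }
);
// post.comments has been reduced to an array of ids
```

The upshot for the question in #4: the embedded restructuring can live in normalize, since normalize is applied recursively to the sub-hashes.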


#6

Thanks for addressing this, @terzicigor!


#7

@terzicigor. So the recommendation is to do:

socket.on('user-updated', function(payload) {
    var data = payload.data;
    var type = store.modelFor('user');
    var serializer = store.serializerFor(type.typeKey);
    var record = serializer.extract(store, type, data, data.id, 'single');
    store.push('user', record);
});

Forgive me, but it seems like this was the exact problem this post set out to solve. Looking at the implementation of store.normalize, this is effectively what that function is doing, except normalizing instead of extracting. Do you really expect that store.normalize will satisfy the majority of use cases?

It seems to me that payload formats will be relatively similar whether they come through a socket or through an AJAX response. If you agree with that, then surely you can see the disconnect here. Otherwise, perhaps I’m in a small minority sending down socket payloads in a format consistent with AJAX (containing meta, etc.).


#8

Yes, that does seem crappy; we should make it better. In your use case, how do you know whether you need to use extractSingle vs extractMany?


#9

That’s an excellent question. For our use case, it’s always been extractSingle because we’re normally pushing down changes to single entities (sometimes with sideloaded entities). For that reason, extractSingle is all we’ve needed. But I certainly wouldn’t want to count out the validity of extractMany.

Perhaps that’s a reason to just keep it this way. I don’t want to make the API more complicated than necessary. I can easily reopen the store and add my own helper. My pushback was not necessarily intended to produce a code change; it was to build a better understanding of the extract vs normalize responsibilities, as well as of the intended solution.
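For the record, such a helper could wrap the extract-then-push sequence from the snippet earlier in the thread behind one call. Here is a sketch; pushSingle is an invented name, and the store/serializer objects below are minimal stand-ins so the flow can be shown end to end:

```javascript
// Sketch of a user-land helper wrapping extract + push for one record.
function pushSingle(store, typeKey, data) {
  var type = store.modelFor(typeKey);
  var serializer = store.serializerFor(type.typeKey);
  var record = serializer.extract(store, type, data, data.id, 'single');
  return store.push(typeKey, record);
}

// --- stand-ins for demonstration only ---
var store = {
  records: {},
  modelFor: function (key) { return { typeKey: key }; },
  serializerFor: function () {
    return {
      // stand-in extract: unwrap a { user: {...} } style payload
      extract: function (store, type, data) { return data[type.typeKey]; }
    };
  },
  push: function (typeKey, record) {
    this.records[typeKey + ':' + record.id] = record;
    return record;
  }
};

var user = pushSingle(store, 'user', { user: { id: 1, name: 'Tom' } });
```

In an app, the helper would let the socket handler collapse to a single pushSingle(store, 'user', payload.data) call.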

You guys are the data store gurus and I’ll leave it to you to decide what’s best.

Thanks!