Route separation between backend and frontend

Hello,

I am just getting started with Ember.js/one-page apps, and there is one thing I am not sure how to solve. I notice that many examples add the # character to the URL to navigate through the app. For my app I want the URLs to look like those of a normal website, without the # character, i.e. domain.com/profile/photo and not domain.com/#/profile/photo. This should be possible with HTML5?

The second issue is how to handle backend requests without using the # character. If I type domain.com/profile/photo, the web server will try to serve that request itself rather than the Ember app. I am guessing I will need to rewrite requests on the backend? One scenario I am considering: if a user is logged out I serve static HTML pages from the backend server, and if the user is logged in I serve the Ember.js app. That way my non-logged-in site is crawlable by Google and others. Any ideas how to make this work?

Thanks, Michael

With regards to your first issue, add:

App.Router.reopen({
  location: 'history'
});

More info on that is in the Ember guides, in the section on specifying the location API.

With regards to your second issue, I’m not entirely sure. I assume Ember will intercept those requests somehow (with the hash location API the index file would always be loaded, which makes that easy; I’m less sure how it works with the history location API).

Thanks for your reply! I will try that out. Yes, I am not sure how that would work either, since when I type a direct URL I assume the browser will make an HTTP request to the backend?

You’ll need to create a catch-all route on the server to support this. If you think about it, Ember has no control over the browser when someone enters your site through something like domain.com/sample/ember/route. Most likely, your server-side routing will try to pick this up and serve a page. Your catch-all should grab all these requests and serve the single entry point, and then Ember will do the rest after your default page is loaded.

Yes, this was my suspicion. To give a different result based on whether there is a user session, I guess I would have to parse a cookie before deciding what to send to the browser. If the user is not logged in, I send the static pages.

I think that approach may be rather antiquated for an SPA backed by a modern API. Why not serve the same entry point no matter what, and then in your ApplicationRoute (which runs every time, regardless of the entry point) decide what to do. For example:

App.ApplicationRoute = Em.Route.extend({
  redirect: function() {
    // this computed property could look at a cookie or localStorage
    // to determine if an authToken was stored in a previous request
    var authenticated = this.controllerFor('application').get('authenticated');
    this.transitionTo(authenticated ? 'home' : 'signin');
  }
});

Hope this helps,

Sam

Just want to confirm, redirect is a method, not a property, right?

I do agree it’s not a modern way of handling things, but I need to be able to handle non-authenticated crawlers that can’t run JS, such as Facebook, Google, Twitter, etc. If my main router is in JS, there is no guarantee that I will serve content to crawlers. I know that Google is getting better at handling JS, but there is no guarantee of how they will parse your content. Being able to share on social media, and SEO, are vital for marketing your product and something I can’t ignore.

The app I am building will only use a REST API endpoint, but for some use cases I was thinking of serving static HTML to the crawlers. Does anyone have experience with how Google, Facebook (Open Graph), etc. parse Ember apps?

I can see how my code comment is misleading. I was referring to the authenticated property on the application controller. redirect is the standard Ember hook used to provide an alternate pathway when a route is being entered.

To solve the crawler issue, take a look, for example, at the source for the Discourse page you are looking at right now.

What you can do is make your server generate a very plain version of the page within <noscript> tags in addition to the Ember app. That way, crawlers will be able to index your pages.
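
As a sketch of that idea (the function name and markup here are hypothetical, assuming the server already has a plain render of the requested page available):

```javascript
// Sketch: append a plain-HTML fallback inside <noscript> so crawlers that
// don't run JS still see indexable content. plainVersion stands for a
// hypothetical server-side render of the requested page.
function withNoscriptFallback(indexHtml, plainVersion) {
  var fallback = '<noscript>' + plainVersion + '</noscript>';
  return indexHtml.replace('</body>', fallback + '</body>');
}

var page = withNoscriptFallback(
  '<html><body><div id="ember-app"></div></body></html>',
  '<h1>Profile</h1><p>Static content for crawlers.</p>'
);
```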

After looking around I found this post that very nicely explains all the different options for one page apps: http://pivotallabs.com/seo-friendly-single-page-apps-in-rails/

Right now I am leaning towards adding the fragment meta tag to all my pages and having my nginx web server redirect to an SEO API that will render the site via PhantomJS and return static HTML. I would also add some kind of caching mechanism to reduce server load. This way I don’t have to maintain two versions of the site.
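
The URL mapping that scheme relies on looks roughly like this (a sketch in Node; the /snapshots path is a hypothetical cache location, and in practice the rewrite would live in the nginx config):

```javascript
// Under Google's AJAX crawling scheme, a page carrying
// <meta name="fragment" content="!"> is re-requested by the crawler as
// url?_escaped_fragment_=..., and the server should answer with a static
// snapshot (e.g. one rendered by PhantomJS and cached).
function snapshotPathFor(requestUrl) {
  var marker = '_escaped_fragment_=';
  if (requestUrl.indexOf(marker) === -1) {
    return null; // normal request: serve the Ember app
  }
  // e.g. "/profile/photo?_escaped_fragment_=" -> "/profile/photo"
  var path = requestUrl.slice(0, requestUrl.indexOf('?'));
  return '/snapshots' + path; // hypothetical location of cached static HTML
}
```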

Good article, thanks for the link!