Deploying on S3 and scaling


I have a question regarding deploying my app to S3 and serving it straight from there.

We have a platform that hosts events, so whenever we have events happening, they drive a lot of traffic to the various parts of our platform. Soon these will be ember apps that each serve a specific purpose.

Currently we are looking into hosting the apps on a LAMP stack and using Apache .htaccess files to route all traffic to the index.html page. I’ve done this a few times, and I know it’s tried and tested in the community as well.
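For anyone following along, the .htaccess routing for a single-page app usually looks something like this (a sketch of the common pattern, not the poster’s actual file):

```apache
RewriteEngine On
# Serve real files and directories as-is
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Everything else falls through to the Ember app's index.html
RewriteRule ^ index.html [L]
```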

For this we will be using the S3 deploy plugin, along with a custom one I’m making that zips the files and uses a configuration script to launch new EC2 servers behind a load balancer and auto-scaling group, to make sure we’re prepared when high traffic occurs.

But I have seen people hosting their apps directly from S3, which is intriguing to me as it would take a few steps out of the equation while we run the other PHP portions of our app (at least until it’s all been migrated to an ember-cli app).

My question is: how does hosting an Ember app on S3 scale when a large volume of traffic attempts to hit and use the app all at the same time?

Right now, as I mentioned before, we are preparing for this by letting EC2 provision new servers that will have our code, with a load balancer pointing traffic to one of the several servers running at the time. How does that type of scenario play out when hosting on S3?

If you host this way, I want to hear from you: how do you handle this? Specifically, what is your strategy, if any, when it comes to high traffic volume?

I welcome new ideas, but we really want to stick with an all-AWS solution, so we aren’t interested in using something else like Heroku, etc. Although if you make a good case for it, I may be able to persuade my team in that direction.


Not sure what you mean by the PHP code, but there is a tricky way of hosting the app on S3 alone, which is rewriting the 404 rules so every miss falls back to index.html. This will, however, mess up all the logs, and you can forget about getting any useful reports. (You will find this approach by searching this forum.)
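Concretely, the 404-rewrite trick is S3’s static-website error document pointed back at index.html, e.g. via the AWS CLI (the bucket name here is a placeholder):

```
aws s3 website s3://my-app-bucket/ \
    --index-document index.html \
    --error-document index.html
```

Every unknown path then returns index.html, which is why the access logs become useless for reporting: real 404s and app routes look the same.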

I use AWS in this way:

  1. assets (img, js, css, …) are all uploaded to S3.

  2. there is a CloudFront distribution in front of S3 that serves the assets. This guarantees scalability.

  3. index.html is served by an EC2 machine running nginx. This is the machine behind the domain name, and the traffic and CPU usage on it are extremely low.
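The nginx side of step 3 can be as small as a try_files fallback (the domain and root path below are placeholders, not the poster’s actual config):

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder domain
    root /var/www/app;         # wherever index.html gets deployed

    location / {
        # Serve the file if it exists, otherwise fall back to index.html
        try_files $uri /index.html;
    }
}
```

Since the fingerprinted assets all come from CloudFront, this machine effectively serves only index.html, which is why its load stays so low.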

You will, however, need a different deployment process. I do it by following these steps after ember build:

  1. fingerprinting the assets
  2. CDNifying the assets
  3. uploading assets to S3
  4. uploading the new index.html to the EC2 machine


You can’t out-scale S3; it’s far beyond anything you’ll ever have hitting it, and Amazon doesn’t even bother defining any limits. From their FAQ:

Amazon S3 was designed from the ground up to handle traffic for any Internet application. Pay-as-you-go pricing and unlimited capacity ensures that your incremental costs don’t change and that your service is not interrupted. Amazon S3’s massive scale enables us to spread load evenly, so that no individual application is affected by traffic spikes.

But it’s more or less just a static file server. You’d end up having your app live on one domain and your backend on, say, a separate subdomain, which will point to your load balancer and is where any load will actually be.