Yes we can, and are working towards overall improved performance. The limiting factor is just time.
A thorough Android performance investigation would be great. Although I have done similar work for Mobile Safari (likely because I own various iDevices but no Android ones), I have not had the time or opportunity to do the same on Android. If someone can get me the time, I will gladly dig in.
Facts I am aware of:
- other than some annoying bugs, JSC + Safari are really killing it in terms of real-world performance (both desktop and mobile)
- V8 on Android currently gets steamrolled by JSC, pretty much regardless of framework
- there are a huge number of potential Ember performance improvements that will yield cross-platform improvements
- 1.7 + 1.8's use of an AMD-style loader really hurts mobile (and especially Android) performance. The solution is to continue the effort to upgrade to @eventualbuddha's latest ES6 transpiler, so we can take advantage of bundled builds.
- 1.8, 1.9 and 1.10 are meant to incrementally (as we complete it) roll out our new view rendering system. The first step, 1.8, takes a known performance hit to normalize Handlebars into something HTMLbars would emit. This is very likely (but obviously not proven on Android) the majority of any performance regression we are seeing.
Historically, Ember has only done performance work in an ad-hoc way. This raises the barrier to entry for performance work, for both core contributors and the community at large. It also results in wildly inaccurate benchmark numbers, because they have not had sufficient community scrutiny, and it does a poor (nearly non-existent) job of knowledge sharing. Finally, ad-hoc work prevents running performance regression tests.
This is clearly a poor state of affairs, but it is a problem I have been noodling on for quite some time.
What we need is:
Reliable benchmarking that is representative of real-world use-cases and app-level performance concerns. Simply running X, N times may be sufficient for micro benchmarks, but not for the macro benchmarks I believe we are desperately lacking. Basically: given user action Y, at what complexity N does the system still render at 60FPS or 30FPS or …
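As a rough illustration of that macro-benchmark idea (this is a sketch of mine, not any existing Ember tooling; the helper names are hypothetical): collect frame timestamps while the user action runs, then derive FPS from them and compare against the target.

```javascript
// Hypothetical sketch: derive FPS from frame timestamps collected
// (in a browser) via requestAnimationFrame while a user action runs.

// Pure helper: given an array of frame timestamps in ms, compute
// the average frames per second over the whole window.
function averageFps(timestamps) {
  if (timestamps.length < 2) { return 0; }
  var elapsedMs = timestamps[timestamps.length - 1] - timestamps[0];
  return ((timestamps.length - 1) / elapsedMs) * 1000;
}

// Did a run at complexity N still hit the target frame rate?
function meetsTarget(timestamps, targetFps) {
  return averageFps(timestamps) >= targetFps;
}

// In a browser, the timestamps would come from something like:
//   var frames = [];
//   function tick(t) { frames.push(t); requestAnimationFrame(tick); }
//   requestAnimationFrame(tick);
// ...run the user action, stop recording, then call averageFps(frames).

// Example: 61 frames spread evenly over one second ≈ 60 FPS.
var frames = [];
for (var i = 0; i <= 60; i++) { frames.push(i * (1000 / 60)); }
console.log(averageFps(frames).toFixed(1));
```

One could then bisect over N (list length, component count, etc.) to find the largest complexity that still meets the target.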
This should allow us to better understand what is actually slow and what is sufficiently fast, and will likely illuminate a triage priority.
Another note about micro benchmarks versus app-level concerns: running a user action N times will often skew results, as actions that normally wouldn't become JIT'd (or that normally deopt only infrequently) start behaving differently as N increases.
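One way to at least make that warm-up skew visible, rather than hiding it in an aggregate: report warm-up and measured iterations separately. A minimal sketch (the function names are mine, not from any Ember tooling):

```javascript
// Hypothetical sketch: report warm-up and measured timings separately,
// so JIT warm-up effects are visible instead of silently skewing results.
function timedRuns(action, warmupCount, measuredCount) {
  function time(fn) {
    var start = Date.now();
    fn();
    return Date.now() - start;
  }
  var warmup = [];
  var measured = [];
  for (var i = 0; i < warmupCount; i++) { warmup.push(time(action)); }
  for (var j = 0; j < measuredCount; j++) { measured.push(time(action)); }
  return { warmup: warmup, measured: measured };
}

// Example with a trivial stand-in workload:
var result = timedRuns(function () {
  var sum = 0;
  for (var k = 0; k < 100000; k++) { sum += k; }
  return sum;
}, 5, 10);
console.log(result.warmup.length, result.measured.length);
```

Comparing the two series gives a crude signal of how much the numbers depend on the JIT having warmed up, which is exactly what gets lost when everything is folded into one average.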
Ideally the above style of tests could just be part of Discourse's normal development process.
In addition to this, micro-benchmarks that cover targeted parts of the framework (Ember.Object.create, actionsFor, etc.) will continue to be valuable.
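A micro-benchmark of that shape can be quite small. A hypothetical harness sketch (in a real run the body would be a call to Ember.Object.create or actionsFor; here a plain Object.create stands in so the snippet is self-contained):

```javascript
// Hypothetical micro-benchmark harness; `fn` would be something like
// Ember.Object.create in an actual Ember benchmark.
function microBench(name, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) { fn(); }
  var elapsedMs = Date.now() - start;
  return {
    name: name,
    iterations: iterations,
    opsPerSec: elapsedMs > 0 ? (iterations / elapsedMs) * 1000 : Infinity
  };
}

// Stand-in workload for illustration only.
var report = microBench('object-create-stand-in', function () {
  return Object.create({ init: function () {} });
}, 100000);
console.log(report.name, report.iterations);
```

The per-L14 caveat still applies: numbers like these describe hot, JIT'd code, which is why they complement rather than replace the macro benchmarks.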
And finally, these tests need to be actionable:
- able to run on multiple platforms: mobile, desktop, and likely across evergreen browsers
- easily profileable
- point to real problems, not benchmark-induced GC pressure
- stable: re-runs over sufficient N should result in consistent numbers
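The "stable" bullet can even be checked mechanically: compute the spread across re-runs and flag anything too noisy to act on. A minimal sketch (the 5% threshold below is illustrative, not from any Ember tooling):

```javascript
// Hypothetical stability check: mean, standard deviation, and
// coefficient of variation across repeated benchmark runs.
function stability(samples) {
  var n = samples.length;
  var mean = samples.reduce(function (a, b) { return a + b; }, 0) / n;
  var variance = samples.reduce(function (acc, x) {
    return acc + Math.pow(x - mean, 2);
  }, 0) / n;
  var stddev = Math.sqrt(variance);
  return { mean: mean, stddev: stddev, cv: stddev / mean };
}

// Flag run sets whose relative spread exceeds a chosen threshold.
function isStable(samples, maxCv) {
  return stability(samples).cv <= maxCv;
}

console.log(isStable([100, 101, 99, 100], 0.05)); // true: tight spread
console.log(isStable([100, 200, 50, 400], 0.05)); // false: too noisy
```

A regression-test harness could refuse to compare numbers from unstable run sets, which directly addresses the "wildly inaccurate benchmark numbers" problem above.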
An early, quick, helpful + actionable effort has been GitHub - stefanpenner/perf-stress, which has allowed me and others to dramatically improve many Ember + Ember Data performance issues. (I suspect 1.8 would be in a much worse situation without that effort.) This obviously suffers from the lack of automation.
This weekend, I decided to give Angular's benchpress project a try: spike with benchpress · stefanpenner/ember.js@820a8d6 · GitHub. It is interesting, but really lacking, especially when it comes to delivering the metrics I described above. It also doesn't have a concept of asynchronous tests, so many real-world scenarios become impossible. And it suffers from the run-X-N-times-sequentially problem, which really isn't a natural way to interact with an application and doesn't give me the numbers I want.
I will likely use it as some form of inspiration and transform it into something I believe will be helpful. I have made a short todo-list: spike with benchpress · stefanpenner/ember.js@820a8d6 · GitHub
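For the asynchronous-test gap mentioned above, a runner could simply accept promise-returning tasks and time each run until it settles. A minimal sketch (this API is invented for illustration; it is not benchpress's, nor the runner I am spiking on):

```javascript
// Hypothetical async-aware runner: each task returns a promise; runs
// happen sequentially, and we time how long each takes to settle.
function measureAsync(task, runs) {
  var durations = [];
  function step(remaining) {
    if (remaining === 0) { return Promise.resolve(durations); }
    var start = Date.now();
    return Promise.resolve(task()).then(function () {
      durations.push(Date.now() - start);
      return step(remaining - 1);
    });
  }
  return step(runs);
}

// Example: a task that resolves asynchronously, like a render settling.
measureAsync(function () {
  return new Promise(function (resolve) { setTimeout(resolve, 10); });
}, 3).then(function (durations) {
  console.log(durations.length);
});
```

Because each run waits for the previous promise to settle, scenarios like "dispatch action, wait for the run loop to flush, measure" become expressible, which synchronous-only runners cannot do.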
Since then I have also begun spiking on an enhanced test runner that actually yields the numbers and metrics we care about. Hopefully I will have something working and on GH in the next few days.
/end stefan.brain_dump
TL;DR performance needs to get better.