Speed comparisons on various Hardware / Repos

tl;dr: comparison tables

ember.js at 55ffe6326 - using yarn

| runner | cpu | OS | build | no cache install | cached install |
| --- | --- | --- | --- | --- | --- |
| NullVoxPopuli | Intel i5-1135G7 | Ubuntu | 15.7s | 22.2s | 5.9s |
| evo | M1 Max | MacOS | 9.5s | 17s | 4s |
| lookingsideways | M1 | MacOS | 9.7s | 18s | 4s |
| MoeSzyslak | Intel i9 (2019, 2.3 GHz 8-Core) | MacOS | 19.7s | 20.4s | 6.7s |

limber at 6acac0f - using pnpm

| runner | cpu | OS | production build | uncached production build | cached install |
| --- | --- | --- | --- | --- | --- |
| NullVoxPopuli | Intel i5-1135G7 | Ubuntu | 26.9s | 29.0s | 4.8s |
| evo | M1 Max | MacOS | 15.8s | 16.4s | 9.7s |
| lookingsideways | M1 | MacOS | 16.1s | 17.3s | 8.5s |
| MoeSzyslak | Intel i9 (2019, 2.3 GHz 8-Core) | MacOS | 32.2s | 30s | 12.6s |



Because folks have been thinking about upgrading to an M1 Max MacOS machine for development, I wanted to share some of my own benchmarks. I don’t use MacOS, and I wanted to show that there are cheaper alternatives to the Apple ecosystem if you’re after speed (hopefully – depending on what this thread brings to light!).

So, I’m going to be testing with two OSS projects:

  • ember.js (emberjs/ember.js)
  • limber (NullVoxPopuli/limber)

And what we’ll be measuring is:

  • time for:
    • yarn install / pnpm install
    • yarn build / pnpm build
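
Every measurement below follows the same hyperfine shape; here’s a generic sketch (the placeholders are mine, not real commands):

hyperfine --runs 5 \
  --setup '<one-time warmup, e.g. an initial install/build>' \
  --prepare '<per-run reset, e.g. rm -r node_modules>' \
  '<command being timed>'

--setup runs once before the timed runs, --prepare runs before every timed run, and only the quoted command at the end is measured (reported as mean ± σ over the runs).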

Setup

  • install hyperfine
  • disable scripts (you should do this anyway for security). These settings also affect what npm/yarn/etc do after installing deps, which can dramatically affect results:
    • npm config set ignore-scripts=true
    • yarn config set ignore-scripts true
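
limber’s benches below use pnpm; as far as I know it honors the same .npmrc setting, so for completeness (mirroring the commands above):

pnpm config set ignore-scripts true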

NOTE: all of these are “high level” benchmarks, involving the whole system, CPU, RAM, disk, etc.

My System

  • frame.work
    • RAM: 32GB
    • CPU: 11th Gen Intel Core i5-1135G7 @ 2.4GHz × 8
    • OS: Ubuntu 22.04
    • Disk: WD Black SN850 NVMe

Process

Make sure you have your system monitor utility open and verify that you’re not hitting any sort of RAM limit or running out of available CPU. Hitting swap, or being thrown into some de-prioritized CPU queue, can greatly skew the results.

(I had to re-do some tests due to hitting swap :sweat_smile: )

If you only have 32GB of RAM, make sure you have at least 10GB free to run these benches (to safely avoid hitting swap)
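
If you’d rather watch from a terminal than a GUI, something like this works on Linux (macOS folks can use vm_stat or Activity Monitor):

# refresh memory/swap usage every second; re-run the bench if swap usage starts climbing
watch -n1 free -h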

ember.js

Setup

git clone https://github.com/emberjs/ember.js.git
cd ember.js
git checkout 55ffe6326

yarn build

About 15.7s

❯ hyperfine --runs 5 \
  --setup 'yarn && yarn build' \
  'yarn build'
# ..........
Benchmark 1: yarn build
  Time (mean ± σ):     15.713 s ±  0.956 s    [User: 24.479 s, System: 1.805 s]
  Range (min … max):   14.817 s … 16.831 s    5 runs

The initial build (done during setup) builds out a cache for the build, so all subsequent builds – which is what this measures – are mostly “no-op” builds: we determine that nothing has changed and all parts of the build should be 100% cache hits.

cacheless install

About 22.2s

❯ hyperfine --runs 5 \
  --prepare 'yarn cache clean && rm -r node_modules' \
  'yarn install'
# ..........
Benchmark 1: yarn install
  Time (mean ± σ):     22.225 s ±  0.362 s    [User: 20.772 s, System: 15.475 s]
  Range (min … max):   21.858 s … 22.771 s    5 runs

regular / cached install

About 5.9s

❯ hyperfine --runs 5 \
  --setup 'yarn' \
  --prepare 'rm -r node_modules' \
  'yarn install'
# ..........
Benchmark 1: yarn install
  Time (mean ± σ):      5.829 s ±  1.072 s    [User: 6.563 s, System: 6.748 s]
  Range (min … max):    4.685 s …  6.815 s    5 runs

limber

Setup

git clone https://github.com/NullVoxPopuli/limber.git
cd limber
git checkout 6acac0f

This project uses turbo to aggressively cache, so we’ll be crafting the commands to avoid turbo’s cache and focusing on the build of the “app”. For building the app, the setup phase builds the dependent libraries, and the time spent on those is omitted from the measurements.
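
Concretely, the difference is in which script gets invoked – a sketch of my understanding of the wiring here (the exact scripts may differ per commit):

pnpm build                       # root script: goes through turbo, likely a cache hit
pnpm run --filter=limber build   # runs the app package's own build script directly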

cached production build

About 26.9s

❯ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  'pnpm run --filter=limber build'
# ..........
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     26.891 s ±  2.141 s    [User: 34.278 s, System: 2.934 s]
  Range (min … max):   24.883 s … 30.121 s    5 runs

uncached production build

Because this app uses embroider, we can delete /tmp/embroider to get a less-cached build:

About 29.0s

❯ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  --prepare 'rm -r /tmp/embroider' \
  'pnpm run --filter=limber build'
# ..........
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     28.967 s ±  2.583 s    [User: 34.861 s, System: 2.869 s]
  Range (min … max):   26.493 s … 32.511 s    5 runs

cacheless install

I don’t know how to clear the cache in pnpm
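
(If anyone wants to experiment: pnpm keeps a content-addressable store, and something like the following should approximate a cacheless install – untested here, so treat it as a sketch.)

pnpm store prune            # drop packages not referenced by any project
rm -rf "$(pnpm store path)" # or nuke the whole store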

regular / cached install

About 4.8s

❯ hyperfine --runs 5 \
  --setup 'pnpm install' \
  --prepare 'rm -r node_modules' \
  'pnpm install'
# ..........
Benchmark 1: pnpm install
  Time (mean ± σ):      4.817 s ±  0.035 s    [User: 5.645 s, System: 3.617 s]
  Range (min … max):    4.785 s …  4.863 s    5 runs

NOTE: at the time of writing / measuring / testing,

  • ember.js is still a v1 addon
  • limber consumes a lot of v1 addons

Over time, as more things move to v2 addons, these benches will improve on the same hardware, with no other variables changed.


My System

  • Apple MacBook Pro 16" M1 Max
    • RAM: 64GB
    • CPU: M1 Max 10 Core (~3.22GHz)
    • OS: MacOS Monterey 12.4
    • Disk: 1TB SSD

Ember repo

yarn build

About 9.5s

$ hyperfine --runs 5 \
--setup 'yarn && yarn build' \
'yarn build'
...
Benchmark 1: yarn build
  Time (mean ± σ):      9.522 s ±  0.090 s    [User: 14.785 s, System: 2.259 s]
  Range (min … max):    9.397 s …  9.609 s    5 runs

cacheless install

About 17s

$ hyperfine --runs 5 \
  --prepare 'yarn cache clean && rm -r node_modules' \
  'yarn install'
...
Benchmark 1: yarn install
  Time (mean ± σ):     17.001 s ±  2.066 s    [User: 11.129 s, System: 19.702 s]
  Range (min … max):   15.680 s … 20.648 s    5 runs

regular / cached install

About 4s

$ hyperfine --runs 5 \
  --setup 'yarn' \
  --prepare 'rm -r node_modules' \
  'yarn install'
...
Benchmark 1: yarn install
  Time (mean ± σ):      4.028 s ±  0.043 s    [User: 3.962 s, System: 10.335 s]
  Range (min … max):    3.987 s …  4.092 s    5 runs

Limber Repo

cached production build

About 15.8s

$ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  'pnpm run --filter=limber build'
...
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     15.833 s ±  0.155 s    [User: 19.787 s, System: 3.022 s]
  Range (min … max):   15.688 s … 16.050 s    5 runs

uncached production build

About 16.4s

MacOS uses a $TMPDIR variable to locate the temp folder, so we rm -r $TMPDIR/embroider to get a cacheless build:

$ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  --prepare 'rm -r $TMPDIR/embroider' \
  'pnpm run --filter=limber build'
...
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     16.396 s ±  0.186 s    [User: 19.844 s, System: 3.000 s]
  Range (min … max):   16.138 s … 16.574 s    5 runs

cacheless install

I also don’t know how to clear the cache in pnpm

regular / cached install

About 9.7s

$ hyperfine --runs 5 \
  --setup 'pnpm install' \
  --prepare 'rm -r node_modules' \
  'pnpm install'
...
Benchmark 1: pnpm install
  Time (mean ± σ):      9.710 s ±  0.100 s    [User: 3.806 s, System: 27.534 s]
  Range (min … max):    9.591 s …  9.858 s    5 runs

Not sure why pnpm is slower here – tried a few times but always got similar results.


As for getting similar performance for cheaper, absolutely you could! So why did I go for the top shelf mac? A few reasons

  • I have some iOS projects
  • I’m a designer with a few mac only apps in my workflow
  • I work with video processing/transcoding systems on some of my projects so the media engine is super helpful with speeding up those workflows
  • I wanted to treat myself for a decent year business wise :slight_smile:

I will be super interested in seeing other peoples results!


Is it helpful to get a comparison with the cheaper end of the Apple ecosystem?

My system

  • Apple MacBook Pro 13" M1 (2020)
    • RAM: 16GB
    • CPU: M1
    • OS: macOS Monterey 12.3
    • Disk: 1TB SSD

Ember repo

yarn build

About 9.7s

❯ hyperfine --runs 5 \
  --setup 'yarn && yarn build' \
  'yarn build'
Benchmark 1: yarn build
  Time (mean ± σ):      9.723 s ±  0.147 s    [User: 15.410 s, System: 2.342 s]
  Range (min … max):    9.519 s …  9.922 s    5 runs

cacheless install

About 18s

❯ hyperfine --runs 5 \
  --prepare 'yarn cache clean && rm -r node_modules' \
  'yarn install'
Benchmark 1: yarn install
  Time (mean ± σ):     17.873 s ±  1.710 s    [User: 11.139 s, System: 16.758 s]
  Range (min … max):   16.667 s … 20.684 s    5 runs

regular / cached install

About 4s

❯ hyperfine --runs 5 \
  --setup 'yarn' \
  --prepare 'rm -r node_modules' \
  'yarn install'
Benchmark 1: yarn install
  Time (mean ± σ):      3.970 s ±  0.117 s    [User: 3.664 s, System: 7.550 s]
  Range (min … max):    3.838 s …  4.153 s    5 runs

Limber Repo

cached production build

About 16.1s

❯ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  'pnpm run --filter=limber build'
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     16.155 s ±  0.186 s    [User: 20.170 s, System: 2.982 s]
  Range (min … max):   15.966 s … 16.453 s    5 runs

uncached production build

about 17.3s

❯ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  --prepare 'rm -r $TMPDIR/embroider' \
  'pnpm run --filter=limber build'
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     17.325 s ±  0.046 s    [User: 20.750 s, System: 2.886 s]
  Range (min … max):   17.282 s … 17.396 s    5 runs

cacheless install

??

regular / cached install

About 8.5s

❯ hyperfine --runs 5 \
  --setup 'pnpm install' \
  --prepare 'rm -r node_modules' \
  'pnpm install'
Benchmark 1: pnpm install
  Time (mean ± σ):      8.458 s ±  0.132 s    [User: 3.964 s, System: 21.907 s]
  Range (min … max):    8.347 s …  8.669 s    5 runs

Based on the closeness to the M1 Max times, is it likely these tasks are testing disk speed more than CPU?


Yes! I added your numbers to the table at the top of the thread. Thank you!!

Uncertain – of all the differences between our machines, disks should more or less all be the same: some form of M.2 NVMe. Maybe there are other factors at play, too. Def need more data :smiley:

Something that I do find interesting though, is that pnpm seems faster on Linux/Intel than MacOS/M1.

Would be great if anyone with an Intel Mac wants to run some of these benches :upside_down_face:

Here are my results on an Intel MacBook Pro (although on Big Sur)

My system

  • Apple MacBook Pro 16" Intel i9 (2019)
    • RAM: 32GB
    • CPU: Intel Core i9 (2.3 GHz 8-Core)
    • OS: macOS Big Sur 11.6.5
    • Disk: 1TB SSD

Ember repo

yarn build

About 19.7s

❯ hyperfine --runs 5 \
  --setup 'yarn && yarn build' \
  'yarn build'
Benchmark 1: yarn build
  Time (mean ± σ):     19.694 s ±  0.264 s    [User: 27.289 s, System: 5.050 s]
  Range (min … max):   19.379 s … 19.931 s    5 runs

cacheless install

About 20.4s

❯ hyperfine --runs 5 \
  --prepare 'yarn cache clean && rm -r node_modules' \
  'yarn install'
Benchmark 1: yarn install
  Time (mean ± σ):     20.438 s ±  0.602 s    [User: 20.542 s, System: 29.305 s]
  Range (min … max):   19.745 s … 21.312 s    5 runs

regular / cached install

About 6.7s

❯ hyperfine --runs 5 \
  --setup 'yarn' \
  --prepare 'rm -r node_modules' \
  'yarn install'
Benchmark 1: yarn install
  Time (mean ± σ):      6.686 s ±  0.184 s    [User: 7.337 s, System: 14.850 s]
  Range (min … max):    6.564 s …  7.006 s    5 runs

Limber Repo

cached production build

About 32.2s

❯ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  'pnpm run --filter=limber build'
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     32.226 s ±  0.626 s    [User: 36.626 s, System: 7.028 s]
  Range (min … max):   31.288 s … 32.810 s    5 runs

uncached production build

about 30s

❯ hyperfine --runs 5 \
  --setup 'pnpm install && pnpm build' \
  --prepare 'rm -r $TMPDIR/embroider' \
  'pnpm run --filter=limber build'
Benchmark 1: pnpm run --filter=limber build
  Time (mean ± σ):     29.955 s ±  0.141 s    [User: 33.995 s, System: 6.606 s]
  Range (min … max):   29.722 s … 30.073 s    5 runs

cacheless install

no idea here either

regular / cached install

About 12.6s

❯ hyperfine --runs 5 \
  --setup 'pnpm install' \
  --prepare 'rm -r node_modules' \
  'pnpm install'
Benchmark 1: pnpm install
  Time (mean ± σ):     12.618 s ±  0.352 s    [User: 7.590 s, System: 32.407 s]
  Range (min … max):   12.265 s … 13.066 s    5 runs

Thanks!! <3

I added your results to the table at the top of the thread.


bah, I really don’t like how I can’t edit old posts. :frowning: cc @locks :stuck_out_tongue:

Here are the comparisons, but with good formatting:

ember.js at 55ffe6326 - using yarn

| runner | cpu | OS | build | no cache install | cached install |
| --- | --- | --- | --- | --- | --- |
| NullVoxPopuli | Intel i5-1135G7 | Ubuntu | 15.7s | 22.2s | 5.9s |
| evo | M1 Max | MacOS | 9.5s | 17s | 4s |
| lookingsideways | M1 | MacOS | 9.7s | 18s | 4s |
| MoeSzyslak | Intel i9 (2019, 2.3 GHz 8-Core) | MacOS | 19.7s | 20.4s | 6.7s |

limber at 6acac0f - using pnpm

| runner | cpu | OS | production build | uncached production build | cached install |
| --- | --- | --- | --- | --- | --- |
| NullVoxPopuli | Intel i5-1135G7 | Ubuntu | 26.9s | 29.0s | 4.8s |
| evo | M1 Max | MacOS | 15.8s | 16.4s | 9.7s |
| lookingsideways | M1 | MacOS | 16.1s | 17.3s | 8.5s |
| MoeSzyslak | Intel i9 (2019, 2.3 GHz 8-Core) | MacOS | 32.2s | 30s | 12.6s |

Been a while, but while I had another unique setup, I figured I’d add its stats to the tables.

This is my gaming computer, which runs Windows – but instead of developing on Windows directly, I develop in a VM, using VirtualBox, with direct access to an NVMe drive. I may be upgrading the CPU on this machine this year, so I wanted to record these numbers before that happens.

PC Hardware:

  • RAM: 32GB
  • CPU: i7-8700 @ 3.2GHz x 6 (12 logical processors / threads)
  • OS: Windows 11 Pro
  • Disk: Samsung SSD 970 EVO (NVMe / M.2) – this is the host disk, entirely separate from the VM

VM:

  • RAM: 17000MB
  • CPU: 6 cores (6 logical processors / threads)
  • OS: Ubuntu 22.04 / Linux 5.15
  • Disk: Samsung SSD 970 EVO (NVMe / M.2) – VirtualBox has direct access, with no virtualization for disk I/O. I used to dual boot this system, but one day got lazy and wondered if I could use my existing second-boot drive as the drive for a VirtualBox machine. Turns out it’s not that big of a deal – it works nicely (see the sketch below).
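
For the curious, the raw-disk passthrough is a one-time setup; a minimal sketch (the device path and filename are examples, not my exact setup – your user also needs read/write access to the device):

# create a VMDK that points at the physical drive, then attach it to the VM as usual
VBoxManage internalcommands createrawvmdk \
  -filename ~/VirtualBox/raw-nvme.vmdk \
  -rawdisk /dev/nvme1n1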

Stats:

  • ember.js @ 55ffe6326

    • build: 15.903s
    • cacheless install: 32.14s
    • regular / cached install: 10.2s
  • limber @ 77d67a0b6bcc9ebe6621906149de970c8a821bde (had to use a different hash, cause something with my system doesn’t work with the old hash)

    • cached production build: 20.74s
    • uncached production build: 22.37s
    • regular / cached install: 9.98s

So, based on these stats, the new AMD Ryzen CPUs are looking pretty good (4x faster? we’ll see!)


So… I upgraded some things.

PC Hardware that is different from above

  • Ram: 64GB
  • CPU: AMD Ryzen 9 7900X, 12 cores (24 logical processors / threads) (this was on sale, and cheaper than the 7900 :person_shrugging: )

VM info that is different from above:

  • RAM: 32000MB
  • CPU: 12 (12 logical processors / threads)

Stats:

ember.js @ 55ffe6326

  • build: 8.4s
  • cacheless install: 20.6s
  • regular / cached install: 6.8s

limber @ 77d67a0b6bcc9ebe6621906149de970c8a821bde (had to use a different hash, cause something with my system doesn’t work with the old hash)

  • cached production build: 13.1s
  • uncached production build: 13.4s
  • regular / cached install: 7.0s

So, what this means:

  • my disks are probably what’s holding me back now, as I didn’t upgrade those, and “they’re fast enough”.
  • AMD Ryzen 9 is faster than M1

Anyone have an M2? I need to update the instructions as hyperfine has changed :sweat_smile:


I’m setting up a new natively booted Ubuntu OS (not VM’d).

Initially, I get about 2-4s faster on all the tests, which is fantastic – it means that even with a VM with raw disk access, there is still overhead. It also confirms how much impact disk access has on builds. For these numbers, I have also undervolted my 7900X CPU to keep noise down (0.15 mV, iirc).

Limber @ 810258e

(main at the time of writing)

Setup: pnpm install && pnpm build

Test Cached build:

❯ hyperfine --runs 5 "pnpm run --filter=limber build"
Time (mean ± σ):     11.209 s ±  0.110 s

Test Uncached build:

❯ hyperfine --runs 5 --prepare "rm -r /tmp/embroider" "pnpm run --filter=limber build"
Time (mean ± σ):     11.687 s ±  0.169 s

Test Cached development build:

❯ hyperfine --runs 5 'pnpm --filter=limber exec ember build --environment=development'
Time (mean ± σ):      7.870 s ±  0.116 s

Cached install:

❯ hyperfine --runs 5 --prepare "find . -name 'node_modules' -type d -prune -exec rm -rf '{}' +" "pnpm i --ignore-scripts"
Time (mean ± σ):      3.404 s ±  0.036 s 

ember.js @ 7e482015b

(master at the time of writing)

Setup: yarn && yarn build. If you’re using volta, you’ll need to first run:

volta pin node@16
volta pin yarn@1

Build:

❯ hyperfine --runs 5   "yarn build"
Time (mean ± σ):     13.638 s ±  0.168 s

Cached Install:

❯ hyperfine --runs 5 --prepare "find . -name 'node_modules' -type d -prune -exec rm -rf '{}' +" "yarn install"
  Time (mean ± σ):      4.764 s ±  2.508 s

Couple things:

The gist is that when your long-term storage is slower than RAM, you can gain some performance by doing everything in RAM.

One important thing about the numbers to follow: my PC is now fully enclosed in a case – before, it was on a test bench and had access to more air – so I’ve changed multiple variables. (This could obviously be solved if I re-ran the tests outside a ramdisk – but I don’t expect the access-to-air to affect RAM, only CPU.)

I’m using Linux, because everything is easier on Linux. Here is how I set up my temporary ramdisk and isolated the machine-wide dependency cache to the ramdisk (because if your dependencies are read from your slow SSD, you lose out on the perf of repeat install times).

My new machine has ~64GB of RAM; I wouldn’t try this with less than 32GB of total RAM.

#!/bin/bash
# this script could live anywhere
# Setting up the ramdisk

# create a 4GB tmpfs-backed ramdisk mounted at ./ramdisk-experiment
target="$PWD/ramdisk-experiment"
mkdir -p "$target"
sudo mount -t tmpfs -o rw,size=4G tmpfs "$target"

You can verify that $PWD/ramdisk-experiment is a ramdisk via df | grep tmpfs. The output for my machine today is:

❯ df | grep tmpfs
tmpfs                       6497816      2904   6494912   1% /run
tmpfs                      32489068    383048  32106020   2% /dev/shm
tmpfs                          5120         4      5116   1% /run/lock
tmpfs                       6497812       208   6497604   1% /run/user/1000
tmpfs                       4194304         0   4194304   0% /home/nvp/Development/ramdisk-experiment

and that line at the bottom is key – we have success!!
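
A nice property of tmpfs: teardown is just an unmount (everything in the ramdisk is gone after this, or after a reboot):

sudo umount "$target"   # or: sudo umount "$PWD/ramdisk-experiment"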

JS Setup Stuff

We need to create temporary yarn and pnpm caches in this ramdisk so that we don’t use the global caches from $HOME.

cd ramdisk-experiment

# set new local locations - this is temporary
export PNPM_HOME="$PWD/pnpm" # pnpm@7.30.0
export YARN_CACHE_FOLDER="$PWD/yarn-cache" # yarn@v1

Next, clone the repos, and checkout the appropriate commits

git clone git@github.com:NullVoxPopuli/limber.git # 30 MB git history
git clone git@github.com:emberjs/ember.js.git # 75 MB git history

# parenthesis execute these commands in a sub-shell, 
# so your terminal doesn't change directories
( cd limber && git checkout 810258e )
( cd ember.js && git checkout 7e482015b )

Now verify that pnpm install and yarn install populate the in-ramdisk caches. (Prior to now, the folders we pointed the caches at don’t actually exist.)

( cd limber && pnpm i )
( cd ember.js && yarn )

You should see:

❯ ls
ember.js  limber  pnpm  yarn-cache

Now we can get back to the benches, and see if having everything in RAM improves anything.

Limber

❯ hyperfine --runs 5 'pnpm --filter=limber exec ember build --environment=development'
  Time (mean ± σ):      8.170 s ±  0.065 s

CPU-bound stuff such as builds has mostly not changed.

hyperfine --runs 5 --prepare "find . -name 'node_modules' -type d -prune -exec rm -rf '{}' +" "pnpm i --ignore-scripts"
  Time (mean ± σ):      3.411 s ±  0.008 s

Turns out M.2 SSDs are really good. (I think most high-end laptops post-2019 would have one of these)

ember.js

❯ hyperfine --runs 5   "yarn build"
  Time (mean ± σ):     13.948 s ±  0.061 s
❯ hyperfine --runs 5 --prepare "find . -name 'node_modules' -type d -prune -exec rm -rf '{}' +" "yarn install"
  Time (mean ± σ):      3.481 s ±  0.045 s

Ramdisk Conclusion

So with yarn, it appears there is a bit of a benefit to the ramdisk – probably because yarn copies files around your disk.

yarn install in the ramdisk ran about 27% faster – or, said another way, took 73% of the time it takes outside the ramdisk.

With pnpm, it seems dep management is efficient enough that ramdisks don’t matter (sym/hard links are very fast).
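
If you want to see the hard links for yourself, stat’s link count is a quick check – the glob below is illustrative (any file under node_modules/.pnpm works):

# a link count (first column) greater than 1 means the file is shared
# with pnpm's global content-addressable store
stat -c '%h %n' node_modules/.pnpm/*/node_modules/*/package.json | head -5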