That's absolutely correct: the Glimmer VM is not tree-shakeable via standard build steps with JavaScript bundlers. Since it compiles templates into an interpreted bytecode, bundlers can't tell which instructions are used and which are not, so they cannot tree-shake them the way they would standard JS code. That said, as you pointed out, the list of opcodes is very small overall, and most real-world apps will end up using all of them.
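To make the tree-shaking limitation concrete, here's a purely illustrative sketch (not the real Glimmer VM opcodes or API) of why an interpreter defeats static analysis: every opcode handler is reachable through the dispatch table, so a bundler must keep all of them, even when a given app's compiled templates never use some of them.

```typescript
// A toy bytecode interpreter. Which handler runs depends on data
// (the bytecode array) that only exists at runtime, so a bundler
// sees every handler as reachable and cannot drop any of them.
type Handler = (stack: number[], operand: number) => void;

const handlers: Record<number, Handler> = {
  0x01: (stack, operand) => { stack.push(operand); },            // PUSH
  0x02: (stack) => { stack.push(stack.pop()! + stack.pop()!); }, // ADD
  0x03: (stack) => { stack.pop(); },                             // POP
};

function run(bytecode: number[]): number[] {
  const stack: number[] = [];
  for (let pc = 0; pc < bytecode.length; pc += 2) {
    // Runtime dispatch: opaque to static analysis.
    handlers[bytecode[pc]](stack, bytecode[pc + 1]);
  }
  return stack;
}
```

A program like `run([0x01, 2, 0x01, 3, 0x02, 0])` never uses POP, but the bundler cannot prove that, so the POP handler ships anyway.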
There is one exception here, and that is deprecated functionality. Over time, features may be removed from a rendering engine and replaced with more streamlined, minimal features. For instance, Classic components actually require quite a few more capabilities than Glimmer components, and those capabilities exist even if you are writing a new Octane app with no Classic components at all.
This is what the component manager's capabilities feature is all about. As part of the conversion from the wire format to bytecode, the component's capabilities are loaded, and based on those, instructions for each capability are emitted. If a capability is not enabled, its opcodes are not added. This reduces the runtime cost for the component, and that cost is only paid the very first time it is loaded. It does not reduce the cost of shipping those instructions over the wire, though.
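A rough sketch of what capability-gated emission looks like (the names here are illustrative, not the actual Glimmer internals or the real capabilities list): when a capability is off, the corresponding opcodes are simply never emitted for that component, so the interpreter never executes them, but the handler code for them still ships.

```typescript
// Hypothetical capability flags; Classic components would enable
// more of these than Glimmer components do.
interface Capabilities {
  dynamicScope: boolean;
  createArgs: boolean;
}

enum Op { CreateComponent, BindDynamicScope, SetArgs, Invoke }

// Emit opcodes for a component based on its manager's capabilities.
// Disabled capabilities contribute zero instructions.
function compileComponent(caps: Capabilities): Op[] {
  const ops: Op[] = [Op.CreateComponent];
  if (caps.dynamicScope) ops.push(Op.BindDynamicScope);
  if (caps.createArgs) ops.push(Op.SetArgs);
  ops.push(Op.Invoke);
  return ops;
}
```

A minimal-capability component compiles to a shorter instruction sequence than a full-capability one, which is exactly the runtime saving described above.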
The idea at the moment is that we'll reduce the Intermediate Representation (IR) from the current wire format down to a minimal bytecode, and have a very thin layer in Wasm that receives the capabilities and expands a few minimal opcodes based on them, which should be much faster than today's compilation process. This will all still happen at load time in the browser, though, so we won't be able to tree-shake the instructions for unused capabilities. However, in the future we do want to explore using static analysis to determine components' capabilities at compile time, and if that works out I could also imagine being able to exclude unused opcodes.
All that to say, this is firmly in the "maybe it could work and we definitely want to explore it" space, not the "we definitely plan on shipping something like this" space. So, for your article, I would say that we are accepting the tradeoff you outlined, and unless future R&D pays off, I think that's probably what will happen. And, like you pointed out, the tradeoff isn't all that large, because the opcodes are minimal and there aren't many capabilities anyway (and most are likely to be deprecated and removed eventually).