> Some might mention that V8 moved away from the always-JIT
They did, but their always-JIT was a bit heavyweight: initially, they generated full method-JIT code on first invocation. This was a massive improvement over the existing state of the art, but it also came relatively early in the development of the "fast JS" ecosystem we live in now.
The _size_ of JS grew, and the size of functions grew, and the complexity of logic grew. The amount of cold and run-once code grew.
That issue was brought into focus by the growth of the webapp space and the size of the payloads.
In this case (the Erlang folks), they're going about the whole thing in a very good way. Their "jit-everything" is actually "JIT one instruction at a time", which shows an _amazingly_ perceptive awareness of the challenges other JIT teams (e.g. the JS JIT teams) have faced. What they're doing is tightly scoped and easy to bootstrap and test, with fallbacks to calls into the VM for slow paths or otherwise complicated cases.
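To make the "one instruction at a time" idea concrete, here's a miniature sketch of the shape of such a JIT. This is only an illustrative model, not how the Erlang JIT is actually implemented (it emits real machine code, not host-language closures, and these opcodes are made up): each bytecode instruction is translated independently into a small template, and anything complicated becomes a call back into a generic runtime handler.

```python
# Illustrative per-instruction "template JIT" sketch. Each bytecode op is
# compiled on its own, with no cross-instruction analysis; complex ops
# fall back to a call into the VM runtime (vm_call).

def vm_call(vm, op, arg):
    # Slow path: a generic runtime handler kept out of the "emitted" code.
    if op == "print":
        vm["output"].append(vm["acc"])
    else:
        raise ValueError(f"unknown op {op}")

def compile_instruction(op, arg):
    # Fast paths: one tiny template per simple instruction.
    if op == "push":
        return lambda vm: vm["stack"].append(arg)
    if op == "add":
        def add(vm):
            b = vm["stack"].pop()
            a = vm["stack"].pop()
            vm["stack"].append(a + b)
        return add
    if op == "store_acc":
        return lambda vm: vm.__setitem__("acc", vm["stack"].pop())
    # Everything else: emit a call back into the VM (the fallback).
    return lambda vm: vm_call(vm, op, arg)

def compile_program(bytecode):
    # Compile each instruction independently -- easy to bootstrap and test,
    # since any op the compiler doesn't handle still works via the fallback.
    return [compile_instruction(op, arg) for op, arg in bytecode]

def run(compiled):
    vm = {"stack": [], "acc": None, "output": []}
    for fn in compiled:
        fn(vm)
    return vm

program = [("push", 1), ("push", 2), ("add", None),
           ("store_acc", None), ("print", None)]
vm = run(compile_program(program))
```

The payoff of this structure is that the compiled and interpreted paths can coexist instruction by instruction, so the JIT can be grown and validated incrementally.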
That's a solid base you can slowly layer higher tiers on top of later, if it matters. They're keeping their abstraction layers strong with a well-specified bytecode system, and hopefully they will strive to keep it relatively independent of the runtime and avoid leaking runtime semantics into the instructions.
I was personally very impressed by their description of their approach and motivation behind each decision.