"Run a fast, sandboxed bytecode outside of the sandbox by compiling it into a binary" so just a binary. I'm not sure I understand why you would use wasm here. No one writes wasm; you compile to it (usually from llvm ir). Why couldn't you just go straight from llvm ir to a binary; skip the wasm? I suspect I'm missing something here, but it doesn't seem to make sense.
WebAssembly is portable, stable, and well-specified. LLVM IR is not portable (it is platform-specific), not stable (it changes between LLVM versions), and not well-specified (especially around undef). That's why WebAssembly is a good distribution format and LLVM IR is not.
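To make the portability point concrete, here's a minimal Rust sketch (illustrative only): the same source lowers to different LLVM IR per target, because things like pointer width and data layout are fixed before the IR is ever emitted, while a wasm32 module pins them once for every host.

    // Minimal sketch: the same source yields different LLVM IR per target,
    // because the target triple, data layout, and pointer width are baked
    // in when the IR is emitted.
    fn main() {
        // Prints 8 when compiled for x86_64-unknown-linux-gnu and
        // 4 when compiled for wasm32-unknown-unknown; the constant is
        // already fixed in the IR, so that IR can't be reused elsewhere.
        println!("usize is {} bytes", std::mem::size_of::<usize>());
    }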
Oh good point, we can just embed the JVM into browsers instead. WebAssembly’s cancelled everyone!
Jokes aside, why are you comparing these two very different virtual machines? WASM is a general-purpose VM; the JVM is not. For example, you won't find a Rust JVM target any time soon. (Not to suggest that the JVM is strictly limited to Java, which it obviously isn't, but it is not nearly as suited to being a compilation target for lower-level languages. Also, the security model is very different.)
In fact there are almost certainly more ways, too. You could probably transpile WASM to JVM bytecode. These things are most useful when you are already in the JVM anyway. I can't imagine most people writing software in pure Rust would jump for this, especially given that WASM already has a lot to offer for this use case and has good momentum. I see it as a better fit, with useful security guarantees and a simple design.
The benefits of the JVM still stand out to me personally: run anywhere, generate machine code at run time in most places, a world-class optimizing compiler, battle-tested & ready to deploy, integration with existing code bases for gradual rewrites, etc.
Wasm is a good idea, but it's going to need to reimplement a lot of existing machinery (optimizing compiler + JIT).
Doable, and maybe it will convince people to adopt what has been a great idea ever since Java first widely deployed it: a compiled language targeting an abstract machine.
The JVM had many chances to realize that goal. Java applets, J2ME, etc. I’m not sure which particular issue really kept it from keeping mindshare. I don’t think the virtual machine itself was ever really the problem.
Still, since the Java platform didn’t capture this use case of a general purpose abstract machine, it makes a whole lot of sense to develop something like WASM. It’s a much more neutral platform to build on.
In particular, we don't actually need to go through everything Java went through; we have a wealth of knowledge about what works and what doesn't work so well. Yes, it's a new JIT, but not that new: from my understanding, browsers typically reuse their JavaScript JIT machinery for the WASM JIT.
If JavaScript JITs are anything like Java JITs, then most of the performance-improving optimizations may be based on how Java/JavaScript operate, recognizing specific patterns that can be replaced with simpler instructions. The JavaScript JITs help with machine code generation, but not necessarily with the other, higher-level optimizations.
We already tried the JVM in browsers, remember Java applets? I sure remember the security and compatibility problems :)
WASM has been designed from the ground up with portability, security, and stability in mind. It is also a lower-level target than JVM bytecode, which makes it better suited to representing languages like Rust and Go. It has also been designed to take advantage of the sandboxed JIT engines browsers already have for running JavaScript. Additionally, WASM is an open standard that anyone can contribute to, which is something that is greatly valued on the web.
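As a concrete (and hedged) illustration of "lower-level target": code like the following assumes a flat, byte-addressable memory with raw pointer arithmetic. Wasm's linear memory maps onto that model directly, whereas JVM bytecode has no untyped heap or raw pointers to lower it to.

    // Sketch: raw pointer arithmetic over flat memory, the kind of code
    // Rust/C/Go toolchains need their target to express cheaply.
    unsafe fn sum_bytes(ptr: *const u8, len: usize) -> u64 {
        let mut total = 0u64;
        for i in 0..len {
            // Direct byte load at ptr + i; in wasm this is roughly a
            // plain i32.add followed by an i64.load8_u on linear memory.
            total += *ptr.add(i) as u64;
        }
        total
    }

    fn main() {
        let data = [1u8, 2, 3, 4];
        // Safe here: the pointer and length describe a live array.
        let total = unsafe { sum_bytes(data.as_ptr(), data.len()) };
        assert_eq!(total, 10);
    }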
> We already tried the JVM in browsers, remember Java applets? I sure remember the security and compatibility problems :)
Actually, we didn't really try the JVM in the browser. We tried it as a plug-in, like Flash. The JVM didn't have access to the DOM the way JavaScript does and WASM will.
So, none of them worked out-of-the-box like JavaScript. Given that it still didn't really work as late as 2005[1], ten years after JavaScript was introduced, I stand by my original statement.
Remember them? Some of us still have the unfortunate pleasure of using them. (Worse: the IT desktop folks have to support a very specific outdated version of IE to keep them functional. I remember hearing that's how a new vendor's "solution" would be delivered, and the facepalm I did at the time. The look on desktop support's face was a bit paler and filled with dread.)
Besides the point about already having tried Java applets, why would you dismiss the Oracle avoidance? Oracle has recently shown a strong preference for monetising Java, whereas lots of people on the web cannot afford any licencing just to be able to make stuff run on the client machine. Avoiding that is actually a very strong feature of WASM.
It's a really funny monetisation move, releasing everything as open source and putting it all in OpenJDK (which has been the reference implementation for a while anyway).
The only bit being monetised is the Oracle-compiled and -distributed version of the JVM. If you don't want to pay Oracle, just use OpenJDK, which has all of the same HotSpot JIT stuff.
Oracle could easily have done what Google has done with Android and made sure that the client end wouldn't run without some kind of proprietary extension[a]. They are the world leaders in deploying the Ask toolbar, and with that come the end users. It is only because we aren't using Java that we don't see this happening.
[a] Yes, you can make Android apps run on AOSP, but as many have commented regarding Huawei losing its Android licence, it removes access to a lot of API infrastructure that isn't even Google-specific.
edit: * caused formatting rather than being a note
Yeah, I know, it was in jest, as a reminder of Oracle's practices. Also, most consumers do use the JRE, and the size of that audience is what Oracle would base its decisions on.
Because LLVM IR is CPU-architecture specific. For instance, IR generated for x86 cannot be used on ARM CPUs, which is also why Apple's bitcode representation, intended to make apps portable, doesn't cross the iOS (ARM) <--> macOS (x64) boundary unless ARM ISA emulation is happening on the Mac (as in Marzipan?).
It is not so much that the IR itself is architecture-specific, apart from platform intrinsics. It is that almost any optimization pass will encode architectural details, like packing and alignment.
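A hedged sketch of what "encodes architectural details" looks like in practice: struct layout (padding and alignment) is decided per target before the optimized IR exists, so field offsets are hard-coded, target-specific constants by the time any pass runs.

    use std::mem::{align_of, size_of};

    // With repr(C), layout follows the target's C ABI rules.
    #[repr(C)]
    struct Packet {
        tag: u8,
        // Padding inserted here depends on the target's u64 alignment.
        value: u64,
    }

    fn main() {
        // 16 bytes on typical 64-bit targets (7 bytes of padding after
        // `tag`); 12 on 32-bit x86 Linux, where u64 is only 4-aligned.
        println!("size = {}, align = {}", size_of::<Packet>(), align_of::<Packet>());
    }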
That way the same binary could run on machines with different architectures. There is also work in progress to define a common system API for wasm, which would allow the same binary to run on any platform.
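Presumably that refers to what became WASI. A minimal sketch of the idea, assuming the Rust wasm32-wasi target and a WASI runtime such as wasmtime are installed (the file names here are hypothetical):

    // hello.rs: plain Rust I/O, compiled once to a single .wasm binary:
    //   rustc --target wasm32-wasi hello.rs
    //   wasmtime --dir . hello.wasm
    use std::fs;

    fn main() {
        println!("same binary, any host with a WASI runtime");
        // File access goes through WASI host calls and is limited to the
        // directories the runtime explicitly grants (--dir above).
        if let Ok(text) = fs::read_to_string("greeting.txt") {
            println!("{}", text.trim());
        }
    }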
Internal memory corruption caused by out-of-bounds accesses, in cases where the code was compiled from C-derived languages, since wasm doesn't provide memory tagging.
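A minimal sketch of that failure mode, modeling wasm linear memory as one flat byte array (the object layout is invented for illustration): bounds checks exist only at the edges of linear memory, not between the objects a C-style allocator carves out of it.

    fn main() {
        // Pretend this is the module's linear memory.
        let mut linear_memory = vec![0u8; 64];

        // Two hypothetical heap objects the allocator placed back to back.
        let obj_a = 0..8;  // bytes 0..8
        let obj_b = 8..16; // bytes 8..16
        linear_memory[obj_b.start] = 42; // obj_b's first field

        // Buggy C-derived code writes one element past the end of obj_a.
        // Index 8 is still inside linear memory, so the sandbox never traps.
        linear_memory[obj_a.end] = 0xFF;

        // obj_b has been silently corrupted; nothing escaped the sandbox.
        assert_eq!(linear_memory[obj_b.start], 0xFF);
    }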