WebAssembly, is it really the future?
- Eliminate those problems from the language; or
- Avoid the problems altogether.
That leaves us with another issue, and this one lies right at the centre of the Web.
Thankfully, there is a potential solution – adopting WebAssembly to build your web applications. And I’m going to try to convince you that you should.
What is WebAssembly?
In a nutshell, it’s the future of web development.
- It’s a new language: WebAssembly code is represented in a binary format, which converts neatly into a text format, so it’s readable, writable and, most importantly, debuggable.
- It’s a compile target: a way for other languages to get first-class binary support across the entire web platform stack.
What are the benefits?
Firstly, it’s fast and efficient – WebAssembly executes at close to native speed by taking advantage of common hardware capabilities available on a wide range of platforms, from a budget phone to the latest desktop CPU.
As mentioned above, it’s also open and debuggable. It translates directly into human-readable text, so you can ‘view source’ and still read it. Most of the latest browsers can also debug WASM, and as time goes on these tools will only grow more powerful.
Before I answer this question, let’s back up a moment. Let me take you back in time, before Node and React, before Backbone, before jQuery – and then another 22 years before that again…
The year is 1984 and I’m a young 11-year-old learning how to program games on my Commodore 64. I started off with BASIC, which was very easy to learn and allowed me to wow my family with neat little tricks. But it was slow. Very slow!
BASIC was a scripting language and, as with any scripting language of that era, each line was interpreted as the program ran rather than being compiled down to machine code first.
It quickly became apparent that if I wanted to do anything that ran faster, I’d have to learn how to code in assembly language. Sometimes you need to get to the bare metal or at least as close to it as possible.
Fast-forward to today and the same principle applies: just because our processors can handle it doesn’t mean we shouldn’t optimise our code.
But let me be clear, I’m not arguing for people to learn how to code in raw WASM, I’m simply saying that the ‘machine code’ of the Web should actually be machine code!
Why would I want to use WebAssembly?
Because it’s the right thing to do.
WebAssembly gives us access to a set of low-level building blocks from which we can construct just about anything. But the key word is low-level. It defines primitives: a range of types and the operations on those types, literal forms for them, control flow, function calls and more besides, including a heap. These are very simple primitives. Nothing fancy. No complicated object system (prototypal or otherwise) and no built-in automatic garbage collector.
WebAssembly is encoded as bytecode instructions with mnemonic opcode names. As with any other assembly language, the format is a simple opcode plus operands per instruction. Each opcode, for example ‘get_local’, translates directly to a binary code – in this case hexadecimal 0x20.
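To make this concrete, here is a complete WebAssembly module hand-assembled byte by byte (purely for illustration – in practice a compiler emits these bytes for you). It exports a single function that adds two i32 values, and you can spot the 0x20 ‘get_local’ opcode in the code section. It runs as-is in Node.js or any modern browser:

```javascript
// A complete module: (func (param i32 i32) (result i32)
//   get_local 0, get_local 1, i32.add), exported as "add".
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,             // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00,             // binary format version 1
  0x01, 0x07, 0x01,                   // type section: one type
  0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,             // function section: one func, type 0
  0x07, 0x07, 0x01,                   // export section: one export
  0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // name "add", function index 0
  0x0a, 0x09, 0x01, 0x07, 0x00,       // code section: one body, no locals
  0x20, 0x00,                         // get_local 0  <- there's our 0x20
  0x20, 0x01,                         // get_local 1
  0x6a,                               // i32.add
  0x0b,                               // end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```

Forty-four bytes for a whole module – that compactness is a big part of why WASM downloads and decodes so quickly.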
So let’s talk a bit about the structure of WebAssembly.
WebAssembly has 4 value types:
- i32: 32-bit integer
- i64: 64-bit integer
- f32: 32-bit floating point
- f64: 64-bit floating point
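One easy way to poke at these value types from JavaScript is through the WebAssembly.Global API – a small sketch, not the only way to surface them. The i32, f32 and f64 types appear in JavaScript as ordinary numbers; i64 appears as a BigInt in modern engines:

```javascript
// Each WebAssembly value type can be observed from JavaScript via
// WebAssembly.Global: the first argument describes the type, the
// second is the initial value.
const i32 = new WebAssembly.Global({ value: 'i32' }, 42);
const f32 = new WebAssembly.Global({ value: 'f32' }, 0.5); // exactly representable in f32
const f64 = new WebAssembly.Global({ value: 'f64' }, Math.PI);

console.log(i32.value, f32.value, f64.value); // 42 0.5 3.141592653589793
```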
As with any real computer, we need memory. In WebAssembly’s case, we use what’s called linear memory: a contiguous range of bytes starting at offset 0 and extending up to a varying memory size. That size is always a multiple of the WebAssembly page size, which is fixed at 64 KiB. Memory can be grown dynamically with the grow_memory operator, in multiples of the 64 KiB page size.
Memory is an array of bytes and, as such, it’s sandboxed – it can’t reach other linear memories, the execution stack, local variables or other process memory.
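You can see the paging behaviour directly from JavaScript, since linear memory is exposed as WebAssembly.Memory:

```javascript
// Linear memory is a resizable array of bytes, sized in 64 KiB pages.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page
console.log(memory.buffer.byteLength); // 65536

// grow() takes a page delta and returns the previous size in pages.
// Note: growing detaches the old ArrayBuffer, so re-read memory.buffer.
const previousPages = memory.grow(1);
console.log(previousPages, memory.buffer.byteLength); // 1 131072
```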
To access linear memory, we have the load family of opcodes. The pattern is consistent: each opcode loads a 32- or 64-bit integer or floating-point value.
- i32.load: load 4 bytes as i32
- i64.load: load 8 bytes as i64
- f32.load: load 4 bytes as f32
- f64.load: load 8 bytes as f64
And the store set of opcodes. The same pattern here applies:
- i32.store: store 4 bytes
- i64.store: store 8 bytes
- f32.store: store 4 bytes
- f64.store: store 8 bytes
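The effect of these load and store opcodes can be mimicked from JavaScript with a DataView over the same buffer – a rough analogy rather than the real instruction semantics, but it shows what the bytes look like. The trailing ‘true’ selects little-endian byte order, which is what WebAssembly mandates:

```javascript
// Mimicking i32.store / i32.load and f64.store / f64.load with a
// DataView over a WebAssembly linear memory.
const memory = new WebAssembly.Memory({ initial: 1 });
const view = new DataView(memory.buffer);

view.setInt32(0, 1234567890, true);    // like i32.store at offset 0
console.log(view.getInt32(0, true));   // like i32.load  -> 1234567890

view.setFloat64(8, 2.5, true);         // like f64.store at offset 8
console.log(view.getFloat64(8, true)); // like f64.load  -> 2.5
```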
We can get and set local variables using ‘get_local’ and ‘set_local’, and the same goes for global variables with ‘get_global’ and ‘set_global’.
We then have some control constructs – branching and comparison, for example.
We can also call functions using ‘call’ and ‘call_indirect’. Each function has a signature consisting of its argument and return types. A normal call targets a known function directly; an indirect call goes through a table of functions and performs a runtime check that the callee’s actual signature matches the one the call site expects.
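The table that ‘call_indirect’ dispatches through is itself visible from JavaScript as WebAssembly.Table – a tiny sketch just to show the object exists and starts out empty:

```javascript
// call_indirect dispatches through a table of function references.
// A signature mismatch at call time traps rather than misbehaving
// silently. 'anyfunc' is the baseline element type for function tables.
const table = new WebAssembly.Table({ element: 'anyfunc', initial: 2 });
console.log(table.length); // 2
console.log(table.get(0)); // null (no function installed yet)
```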
And finally, we have operators … which are very handy for the day-to-day things that every programmer does, such as addition, subtraction, multiplication, shifting and rotating.
So we can see here that programs are made up of all the normal parts you’d expect from a low-level language – and, more importantly, a compiler can easily be made to emit this binary format.
This makes more sense because we can actually get back to the normal pattern for developing software … write in a high-level language, compile directly to machine code, and execute. This pattern has been missing from web development for far too long!
Can I use it today?
Absolutely. The first time I spoke publicly about WASM, the support level was 75% globally.
As of today, that figure is 85% of all users globally. This is significant. There’s still a small distance to go, but if we ignore IE, Opera Mini and some older Android browsers, we can consider support almost universal.
So what’s the catch? Nothing that can’t be overcome easily.
Again, when I spoke publicly last July (2018), I said that compiler support was the biggest barrier because we were effectively limited to LLVM-based toolchains, but in the past year there has been a lot of change…
Just take a look at the list above! There has been a monumental shift in just a few months and we’ve got so much support for this from all the big players now.
One of the best things about WASM is that it makes it easy to express things like threads and SIMD (Single Instruction, Multiple Data). That means fat, parallel processing pipelines for your real-time video stream effects processor.