Modern JavaScript complements: WebAssembly and Rust

Like all programming languages, JavaScript has certain shortcomings that simply can't or won't be fixed, either because of design limitations or because of better technologies that have come to the forefront.

As far as design limitations are concerned, throughout the upcoming chapters you'll learn how ECMAScript has evolved and how even languages that produce JavaScript have emerged to address some of these JavaScript shortcomings. But as far as better technologies are concerned, it's only fair to take a short detour to talk about technologies that stand on their own without JavaScript, yet are becoming complementary to the evolution of Modern JavaScript itself.

WebAssembly

WebAssembly, or simply WASM, is an effort undertaken by the W3C (World Wide Web Consortium), the same consortium that oversees most major web standards, including CSS, DOM, HTML and XML, among many others.

To understand the 'why' behind WebAssembly let's go straight to the WebAssembly specification[1]: WebAssembly is a safe, portable, low-level code format designed for efficient execution and compact representation. Its main goal is to enable high performance applications on the Web, but it does not make any Web-specific assumptions or provide Web-specific features, so it can be employed in other environments as well.
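To get a taste of that 'compact representation', the following sketch hand-encodes a minimal WebAssembly module -- roughly 40 bytes exporting a single `add` function -- and instantiates it with the standard WebAssembly JavaScript API. The byte comments map to the module sections defined in the specification; the function name `add` is just an illustration.

```javascript
// A minimal WebAssembly module, hand-encoded as raw bytes: it exports a
// single function "add" that sums two 32-bit integers.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                    // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00,                    // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f,  // type section:
  0x01, 0x7f,                                //   (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                    // function section: 1 func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64,  // export section:
  0x00, 0x00,                                //   "add" -> function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,              // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b         //   local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate synchronously (fine for a module this small).
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

Notice there's no parsing of source text at run-time: the engine receives instructions already in a low-level, validated format, which is precisely what makes WebAssembly both compact and fast to execute.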

JavaScript's performance crux: An interpreted language

In Modern JavaScript essentials I first mentioned how JavaScript is an interpreted language. One of the main issues with interpreted languages is that they're far removed from machine language, the final instruction set required by all computers. The further away a programming language is from machine language, the more transformations it requires, which in turn means it takes more time to execute.

In an effort to alleviate this inherent performance problem in interpreted languages, the JavaScript ecosystem has devised many strategies throughout the years, such as equipping JavaScript engines with JIT (Just-in-time) compilation and relying on tools like Minifiers-Obfuscators-Compressors and Source Maps to create more compact and easily transferable JavaScript. However, no matter how much optimization goes into JIT or Minifiers-Obfuscators-Compressors and Source Maps, the undeniable fact is that prior to execution JavaScript is still an interpreted language -- a more efficient form of interpreted language thanks to these strategies, but an interpreted language nonetheless.
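As a rough illustration of what the minification strategy buys you -- the function and variable names below are invented for the example -- here is the same logic before and after the kind of rewriting a typical minifier performs:

```javascript
// Original, readable source.
function computeTotal(prices, taxRate) {
  let total = 0;
  for (const price of prices) {
    total += price * (1 + taxRate);
  }
  return total;
}

// The same function after typical minification: identifiers shortened,
// whitespace stripped. Smaller to transfer over a network, identical behavior.
function c(p,t){let o=0;for(const x of p)o+=x*(1+t);return o}

console.log(computeTotal([10, 20], 0.1) === c([10, 20], 0.1)); // true
```

The payload shrinks and obfuscation is a side effect, but note what minification does not change: the engine still receives JavaScript text that must be parsed and interpreted, which is why these tools only mitigate, rather than remove, the cost of being interpreted.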

In this sense, WebAssembly is the technology that aims to provide JavaScript with one of its longest sought-after features: high performance.

Getting to machine language from other languages: Interpreted, compiled, assembly, higher level, mid level, system level and low level

The binary files -- .exe or .bin files -- that most people mindlessly click on to install software on their computers are made up of machine language. If you open an .exe or .bin file with a hex editor, you'll notice it's made up exclusively of number sequences that represent instructions for a computer's central processing unit (CPU). CPUs, unlike humans, are designed to work with these low level instructions for efficiency reasons.

Machine language is considered the lowest level and also the best performing of all languages. But along with the upside of top performance, working at such a low level -- number sequences -- has the downside of being hard, if not impossible, to modify. Machine language is also optimized for specific types of CPUs, which is why you'll often find binary files/machine code for different processor architectures (e.g. i386, amd64, arm64, mips).

Next in the continuum of low to high level languages is assembly language. While machine language is limited to number sequences, assembly language is a little more verbose, relying on text instructions to describe its tasks. Still, assembly language is considered a low level programming language -- to the point it's even nicknamed symbolic machine language -- mainly because even its text instructions are not that obvious and must also take into account the type of CPU on which they will run. Assembly language, like all programming languages, ends up transformed into machine language, so performance wise there's not much difference; the caveat is that assembly language requires an assembler to make the transformation into machine language, making development somewhat slower. This extra step is done in the name of working with a friendlier, modifiable syntax vs. the number sequences used by machine language.

Following assembly language as higher level languages are compiled languages like C and C++. Unlike assembly language and machine language, which contain hardware/CPU specific instructions, C and C++ avoid these types of instructions, making them truly portable across different hardware/CPUs. Now, this doesn't mean C and C++ don't require working with hardware/CPU specific instructions; it simply means they work at a higher level and let a compiler worry about things like hardware/CPU specific instructions. Just like assembly language, compiled languages like C and C++ also end up transformed into machine language, so performance is also a non-issue. What does change, however, is the number of steps required to transform a compiled language like C or C++ into machine language. Compilers are much more complex tools than assemblers, producing intermediate object code, requiring header files and using linkers to achieve their end goal, among other things. The important takeaway is not so much that compilers are elaborate tools -- there are many resources that explain compilers in a very detailed way[2] -- it's that compiled languages like C and C++ introduce an extra layer, in the form of a tool, between them and machine language to facilitate working at a higher level than assembly language or machine language.

Before moving on, it's important to understand that the classification of low to high level languages is generally a contentious one. Strictly speaking, low level languages are those that require handling hardware/CPU specific instructions, which would leave only assembly language and machine language in this group. However, although languages like C and C++ don't require dealing with hardware/CPU specific instructions, they still have control over some important hardware details, chief among them memory management: the instructions to assign and unassign memory resources. So is managing memory resources low level or high level? Purists would say high level, since memory management is the same across different processor architectures. On the other hand, people who've worked with higher level languages that do memory management automatically would say memory management is low level, since it can be painstakingly time consuming and a distraction from other tasks. To a certain extent both arguments hold truth, so to settle the argument about this fuzzy boundary you'll often hear the term mid level languages to describe languages that offer both low level and high level features. Since it's not good to get lost in semantics, I would recommend the following rule of thumb: lower level languages are more difficult to implement, are more difficult for humans to read and are created for specific hardware/CPUs; whereas higher level languages are easier to implement, are easier for humans to read and are shielded from hardware/CPU specifics by relying on tools (e.g. compilers, run-times) to take care of specific hardware/CPU details.

Another important point to make before moving on is that languages like C and C++ represent a sweet spot on the low to high level language ladder. On the one hand, they're sufficiently low level that they let you manage things like memory without needing to know assembly, yet they're sufficiently high level to be portable across hardware/CPUs. In summary, they're just right for software that requires both low level control and high abstraction, such as software that serves as a foundation for other software (e.g. operating systems and run-times). For this reason, in addition to being classified as mid level languages, languages like C and C++ are also often referred to as system languages.

Although C and C++ represent a step forward from assembly language, they can be tedious to work with for application software. If you're creating an operating system or a browser then having control over something like memory management can be essential, but if you're creating software for accounting or shipping workflows then needing to deal with memory management can be a burden. For this reason, higher level languages emerged to further simplify and shield engineers from working at the levels offered by C and C++.

The strategy for higher level languages like Java and C# consists of using a run-time to take care of all the execution intricacies -- similar to how compilers and assemblers take care of the low level details in other languages. These run-times -- which are mostly built in C, C++ and assembly language -- are tasked with generating the final machine language executed by a CPU, in addition to taking care of other low level details like memory management. To accommodate this architecture, these types of languages are initially pre-compiled to a low level language -- Java bytecode in Java and Common Intermediate Language (CIL) in C# -- designed to run on their own run-time -- the Java Virtual Machine (JVM) in Java and the Common Language Runtime (CLR) in C#. By introducing this level of abstraction, the majority of the low level work (e.g. machine language compilation, memory management, support for different processor architectures) is shifted to the run-time, which is made available for multiple hardware/CPUs, so those creating the software are spared from dealing with such issues to focus on higher level programming tasks. This language strategy of using run-times is often described as "Write once, run anywhere" (WORA), a marketing theme popularized by the creators of Java.

Languages that use run-time based architectures come in two major variations: pre-compiled languages and interpreted languages. As described in the previous paragraph, languages like Java and C# need to be pre-compiled to a low level language before they're executed in their respective run-times, a process which is done for efficiency reasons. On the other hand, interpreted languages like JavaScript and Python skip this pre-compilation step in the name of practicality, so their respective run-times can execute languages 'as is'.

Since interpreted languages are executed 'as is' they tend to be slower than their pre-compiled cousins that also rely on run-times: 'as is' text is slower to process than an optimized pre-compiled format; 'as is' text is also bulkier to transfer over a network than an optimized pre-compiled format; not to mention 'as is' text is also more prone to intellectual property theft than a pre-compiled format. Although in the end all run-time based languages are transformed into machine language, because interpreted languages require the most steps to reach this state, they are among the slowest performers and it's why JavaScript uses tools like Minifiers-Obfuscators-Compressors and Source Maps to alleviate these problems.

This walkthrough of how different languages reach machine language illustrates just how far JavaScript is from machine language compared to other languages -- which in turn affects its performance. In addition, this summary also illustrates how close plain assembly language is to machine language making it one of the better performing languages.

Now imagine if JavaScript were capable of the same level of performance as plain assembly language -- extremely close to machine language -- but without the need to deal with low level details, all while allowing you to continue using JavaScript syntax as you know it. You don't need to imagine anymore: it exists, and it's called WebAssembly.

WebAssembly precursors: Browser add-ons/plug-ins, JavaScript asm.js and transpilers for low level languages

JavaScript has existed for over two decades, so it would be inaccurate to say no one ever noticed or cared to solve its lackluster performance until WebAssembly came along.

Because JavaScript was originally conceived for browsers, the first attempts to improve JavaScript's performance problems were focused more on enabling browsers to execute languages more performant than JavaScript than on improving JavaScript itself. These first attempts created the market for browser add-ons/plug-ins, which are small applications designed to enable browsers to run something other than their natively supported languages: HTML, CSS and JavaScript. It's worth pointing out that while many browser add-ons/plug-ins enable better performing languages, many are also motivated by delivering enhanced capabilities for multimedia, real-time interaction and native browser interfaces with toolbars.

Among the first browser add-ons/plug-ins were those to run Java for Java applets and Adobe Flash for ActionScript applications, both of which enjoyed widespread use for many years. What both these add-ons/plug-ins achieved was the ability to run compiled language instructions that not only ran faster than JavaScript, but also delivered features that weren't possible to replicate with JavaScript (e.g. desktop-like behavior for data workflows, real-time interactions and enriched graphical experiences).

A more ambitious approach taken by Microsoft was the creation of ActiveX to enable the execution of compiled-type languages across any networked application -- not just browsers, but software in general (e.g. Browser, Office, Media Player). Although ActiveX quickly gained support for a wide array of languages (e.g. C++, Delphi, VisualBasic) to be compiled into 'ActiveX Controls' -- its core execution components -- its limits were quick to show. Because ActiveX Controls contained compiled instructions (i.e. machine language), they were limited to running on a single processor architecture and on Windows operating systems. On top of that, ActiveX was also born with a flawed security scheme that allowed full access to a host computer (e.g. file system, applications) instead of restricted access to a sandboxed environment like that of a browser.

Based on a similar premise to ActiveX, but confined to a browser to address any security concerns, Google created Google Native Client (NaCl). Although NaCl delivered on the promise to execute compiled-type languages -- C and C++ -- securely in a browser through 'nexe executables', like ActiveX it contained compiled instructions (i.e. machine language), which limited it to a single processor architecture or forced the distribution of multiple 'nexe executables' for different processor architectures. To allow portability between processor architectures, Google created Portable Native Client (PNaCl), which has a similar design to NaCl, except that it produces processor agnostic compiled instructions through 'pexe executables'. PNaCl achieves its portability by requiring a browser to translate these agnostic compiled instructions into processor specific compiled instructions, a process pretty similar to that of other languages that rely on intermediate bytecode formats to achieve portability (e.g. Java bytecode, C# CIL). Because both NaCl and PNaCl were developed by Google, they were designed for Google's Chrome browser and have no portability to other browsers; however, with the appearance of WebAssembly this has become a moot point, as Google has begun to phase out the use of PNaCl in favor of WebAssembly[3].

All of the previous techniques are based on the premise that browsers can't execute JavaScript efficiently and thus require support for faster languages. While it's certainly true JavaScript is on the slow side compared to other languages -- as discussed in Getting to machine language from other languages -- there's another technique that tackles JavaScript's performance issues by addressing the language itself, beyond improvements to JavaScript engines with JIT and Minifiers-Obfuscators-Compressors and Source Maps.

Mozilla, another large organization with a stake in browser and JavaScript performance, devised asm.js[4]. Unlike the approaches taken by Microsoft and Google to equip browsers with new technology, Mozilla's approach with asm.js was to leverage standard JavaScript in order to make it work with browsers out-of-the-box. To understand the reasoning behind asm.js you must recall how high level a language JavaScript is compared to other languages. For example, compared to languages like C and C++, JavaScript has many more features, like object-orientation and automatic memory management/garbage collection, all of which must be distilled by JavaScript engines to reach machine language. Although these high level features make it easier to create software, they also slow down its execution.

Enter asm.js, a JavaScript subset designed to speed up JavaScript execution. By limiting the types of JavaScript constructs processed by JavaScript engines, asm.js offers two major benefits: JavaScript engines perform less work to reach machine language -- since there are fewer JavaScript variations to distill -- but more importantly, with fewer JavaScript constructs to support, it becomes possible to map lower level languages like C and C++ to asm.js and run logic originally written in C and C++ as JavaScript! Mozilla, being the creator of asm.js, even retrofitted its JavaScript engine -- used by its flagship Firefox browser -- with a special asm.js module[5]. More importantly though, since asm.js is JavaScript, it can run 'as is' on any JavaScript engine without adaptation, although both Microsoft and Google have followed suit with their own JavaScript engine optimizations for asm.js[6].
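A minimal sketch of the asm.js style makes the idea concrete -- the module name `AsmAdder` is invented for the example. The "use asm" pragma and the `|0` annotations tell an asm.js-aware engine that every value is a 32-bit integer, letting it skip most of the dynamic checks ordinary JavaScript requires; and since asm.js is a subset of JavaScript, the module also runs unchanged on engines with no asm.js support:

```javascript
// A minimal asm.js-style module: "use asm" signals the engine, and the
// "| 0" annotations pin every value to a 32-bit integer type.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // parameter type annotation: int32
    b = b | 0;
    return (a + b) | 0;   // return type annotation: int32
  }
  return { add: add };
}

// Because asm.js is plain JavaScript, this runs on any engine,
// asm.js-aware or not.
const adder = AsmAdder();
console.log(adder.add(2, 3)); // 5
```

Restricted patterns like these are exactly what a compiler can emit mechanically from C or C++ source, which is why asm.js became the compilation target for transpilers rather than something written by hand.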

So how do you write asm.js? You actually don't: you rely on a transpiler like Emscripten, Mandreel or Cheerp to generate it. So asm.js is really a means to an end, just like the Modern JavaScript languages TypeScript and JSX that produce JavaScript, except that asm.js provides a way to produce JavaScript from lower level languages like C and C++.

WebAssembly: Lessons learned from JavaScript, browsers, add-ons/plug-ins, low level languages and transpilers

As you've explored in the past sections, the path to execute better performing languages in browsers or achieve better performing JavaScript has been filled with fragmented technologies and tools. If you look at WebAssembly without losing sight of these attempts to achieve a "safe, portable, low-level code format designed for efficient execution and compact representation for web applications" you'll understand why WebAssembly is such an important complementary technology for Modern JavaScript.

WebAssembly covers practically all of the shortcomings present in the fragmented technologies and tools we just covered.

Rust

  1. https://www.w3.org/TR/wasm-core-1/    

  2. https://en.wikipedia.org/wiki/Compiler    

  3. https://developer.chrome.com/native-client/migration    

  4. http://asmjs.org/    

  5. https://blog.mozilla.org/luke/2013/03/21/asm-js-in-firefox-nightly/    

  6. https://jaxenter.com/ie-chrome-set-support-asm-js-114783.html