Modern JavaScript essentials: Concepts, syntax & tools

If there's one thing modern JavaScript has, it's variety. From the handful of JavaScript projects used by 99% of users, there are hundreds more that cover glaring omissions in these major projects. And beyond these broader JavaScript projects, there are thousands more that cover edge cases that only a small percentage of users -- or only their owners -- know about. This leads to what many in the JavaScript community call JavaScript fatigue: a daily occurrence of new JavaScript projects that creates an almost endless learning curve.

While learning curves are a natural part of technology, the reality is nobody has the time to keep up with thousands of projects of anything, no matter how good they are or what they have to offer. If you've ever felt overwhelmed or intimidated by the number of JavaScript projects, you're not alone. The good news is you don't need to give up on JavaScript just because you can't keep up with every JavaScript project that offers a better, greater or faster way of doing things. What you need to do is learn modern JavaScript essentials, not only to help you understand and write better JavaScript, but also to help you quickly weed out the thousands of JavaScript projects that don't fit your needs.

What you'll read next is a combination of concepts, syntax & tools that are essential to working with modern JavaScript. It isn't a JavaScript language treatise or specification summary -- there are many other places you can read that -- it's more of an off-the-cuff discussion on essentials you really have no excuse for not knowing if you plan to work with JavaScript on a day-to-day basis.

ECMAScript: ES5, ES5.1, ES6 (or ECMAScript 2015), ES7 (ECMAScript 2016)

ECMAScript [1] is the specification on which JavaScript is based. As a specification, ECMAScript is a blueprint to which JavaScript engines (implementations) must adhere. JavaScript engines, on the other hand, are what's included in browsers or other environments to run JavaScript code.

ECMAScript or ES versions, like any other language's versions, are a big deal because they represent new features that make a language more powerful and easier to work with (e.g. PHP 5 to PHP 7, Python 2 to Python 3). ES had been relatively stagnant up until 2015 and the appearance of ES6 -- prior to that, ES3 was published in 1999, ES4 was abandoned, ES5 was published in 2009 and ES5.1 was published in 2011. This meant JavaScript had enjoyed very long periods of feature stability.

The release of ES6 in 2015 marked significant changes to address features required by the explosive growth of JavaScript. And the speed of feature changes was so great that ES7 became a reality in 2016. So unlike the first ES/JavaScript major version cycle that took 10 years, the latest ES/JavaScript major version cycle only took 1 year! Why is this important? Because you'll constantly face situations where plain JavaScript is either ES5.1, ES6 or ES7 compliant. It will be JavaScript all the same, but it won't run on all JavaScript engines, because these are in constant flux to support different ES versions. The following is a list of major JavaScript engines and their ES support:

In addition to different ES support across JavaScript engines, another important factor related to ES versioning you'll face is when or if you use languages that produce JavaScript. In such cases you have to produce JavaScript, but you'll always need to ponder the question of whether it should be ES5.1, ES6 or ES7 compliant JavaScript.

As a rule of thumb, the lower the ES version you target, the more likely it is that your JavaScript will run everywhere. Higher ES versions are more likely to include JavaScript with 'magical'-like features compared to older versions, but be aware that when targeting higher ES versions you run the risk of JavaScript not working on older JavaScript engines, unless of course you use a shim.
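To make the version trade-off concrete, here's a small sketch of the same function written against two ES targets (the function names are made up for illustration):

```javascript
// The same greeting function written against two ES targets. The ES6 version
// uses an arrow function and a template literal, which pre-ES6 engines can't parse.

// ES5.1 target: runs on virtually any engine
var greetES5 = function(name) {
  return "Hello, " + name + "!";
};

// ES6 target: shorter syntax, but requires an ES6-capable engine (or a shim/transpiler)
var greetES6 = (name) => `Hello, ${name}!`;

console.log(greetES5("world")); // Hello, world!
console.log(greetES6("world")); // Hello, world!
```

Both produce the same result; the difference is purely which engines can parse the syntax.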

Shims and Polyfills

A shim is a generic term used in both the technology and non-technology world. If you happen to sit at a wobbly table or chair, the wooden or paper artifact they use to level it off is called a shim. In the technology world, a shim is a piece of software that allows an old component or API to remain functional in the face of new demands -- so just like a wobbly table or chair, you don't throw it away, you use a shim to keep it working. A specific JavaScript example of a shim is the ES6 shim which allows older JavaScript engines to behave as closely as possible to the new JavaScript ES6 standard. So if you have an application that uses ES6, it doesn't mean you have to scrap all older JavaScript engines, you can use the ES6 shim to allow older browsers to interpret ES6 features.

A polyfill is a piece of software that fulfills a feature you expected a browser to support natively. Although very similar to a shim, a polyfill provides functionality to fulfill something new because support for it isn't there yet, whereas a shim provides functionality to fulfill something new for something old that you don't want to scrap. This is the 'why' behind the 'fill' in polyfill: it fills in missing functionality. A lot of modern JavaScript libraries (e.g. Angular, Web Components) rely on polyfills to be able to run on browsers that don't yet support certain modern JavaScript functionality natively.
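As a concrete illustration, a minimal polyfill might look like the following sketch. It fills in ES2016's Array.prototype.includes with a simplified indexOf-based fallback -- a real polyfill handles edge cases (like NaN and a fromIndex argument) that this sketch ignores:

```javascript
// Minimal polyfill sketch: provide Array.prototype.includes (an ES2016
// feature) only when the engine doesn't already support it natively.
if (!Array.prototype.includes) {
  Array.prototype.includes = function(search) {
    // Simplified fallback; a production polyfill also handles NaN and fromIndex
    return this.indexOf(search) !== -1;
  };
}

console.log([1, 2, 3].includes(2)); // true
console.log([1, 2, 3].includes(5)); // false
```

Note the feature check: if the engine already supports includes natively, the polyfill does nothing.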

Package managers

Package managers are an essential piece of software for most languages because they help install packages, resolve package dependencies and keep track of an application's requirements (e.g. Java uses Maven, PHP uses Composer, Python uses pip and Ruby uses RubyGems). For many early or shallow JavaScript users, package managers can sound like overkill, since adding JavaScript to a project can be as simple as adding <script> tags that point to a CDN and creating JavaScript logic.

Though you may still be able to get away with using a couple of <script> tags that point to CDN resources and go straight to implementing JavaScript logic, this practice looks to have its days numbered. Even for modern projects that use only JavaScript, it isn't uncommon to have to manage dozens or hundreds of JavaScript packages, at which point package managers become a necessity.

In JavaScript, as in other languages, there are now various package managers to choose from, but the most popular JavaScript package managers are npm and Bower.

Npm emerged from Node.js, which is the dominant platform for running JavaScript on the server. With Node.js being bound to the server, it was only natural that the number and variety of JavaScript packages exploded, as has happened with most server-bound languages (e.g. Java, PHP, Python, Ruby), and so npm was born. There are now over 250,000 JavaScript packages available through npm, ranging from small utility packages for debugging and command line executables, to multiple full-fledged server MVC frameworks to build web applications. Npm has become so popular for managing JavaScript packages that it's now even used to manage projects unrelated to Node.js (e.g. seeing npm used to manage standalone JavaScript React projects -- which are bound to the browser -- is not uncommon).

Bower is another JavaScript package manager with a wider scope than npm. Unlike npm which focuses on managing JavaScript packages, Bower is designed to manage packages that include JavaScript, HTML, CSS, fonts and even image files. This last focus makes Bower a better fit for JavaScript projects that are bound to the client (i.e. browser) as it automatically manages all these additional bits -- HTML, fonts, images -- that are important in UI (User Interface) development.

There's no hard rule for using npm over Bower, or vice versa. A project's type and the JavaScript packages it requires will usually end up determining which package manager is the best fit. Npm, being the most popular, is the most referenced package manager (e.g. if you see package 'X' it will have instructions: "To use 'X' use: npm..."), but many UI bound JavaScript packages have complementary Bower instructions (e.g. "To use 'X' you can also use: bower...").

Transpiling and transpilers

Transpiling is a term that refers to converting source code into another type of source code. Technically speaking, transpiling is a specific type of compiling process. Whereas you typically compile source code to convert it to machine code so it can be executed, transpiling just converts source code from one programming language to another. In JavaScript, transpiling has become common with the emergence of languages that produce JavaScript.
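To illustrate what a transpiler does, compare the two snippets below. The ES5 output shape is an approximation of the kind of code a tool like Babel might emit, not its exact output:

```javascript
// ES6/ES7 source: arrow function plus the ES2016 exponentiation operator
const squares = [1, 2, 3].map(n => n ** 2);

// Roughly equivalent ES5 output a transpiler could produce
var squaresES5 = [1, 2, 3].map(function (n) {
  return Math.pow(n, 2);
});

console.log(squares);    // [ 1, 4, 9 ]
console.log(squaresES5); // [ 1, 4, 9 ]
```

The behavior is identical; only the syntax changes, which is why transpiled output can run on engines that don't understand the newer source.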

Transpiling inevitably introduces another step to modern JavaScript development that, while not insurmountable, must be addressed. Transpilers can be used in various places: you can add a transpiler to a web page so the transpilation process is done in an end user's browser, or you can perform the transpilation process as part of an application's build process, so everything is deployed as plain JavaScript. In addition, there are also online transpilers to convert small snippets of non-JavaScript code into JavaScript code.

It's worth mentioning that loading a transpiler into a web page should only be done for development purposes, because it places additional load on a user's browser. For production environments, it's best to use a transpiler as part of the build process to deliver plain JavaScript to a user's browser, thus reducing load times because the browser is spared the transpilation process.

The dominant JavaScript transpiler is named Babel.

Modules, namespaces & module types

Have you ever noticed JavaScript doesn't use import/export/include type statements to create modules and namespaces like other languages such as Java, PHP, Python or Ruby? JavaScript doesn't use import/export/include statements because it doesn't have any, at least not until recently. The only support for modules and namespaces in JavaScript in the early years was through an ad hoc mechanism that enclosed constructs in a variable, which created a pseudo namespace for all its contents:

    var JSHACK = { // JavaScript fields, functions...
        process: function() {
            // ...
        }
    };
    JSHACK.process(); // Call process() function inside JSHACK

Modules play an important role in programming languages because they decouple functionality into standalone units (e.g. a module for XML processing, another module for JSON processing), which in turn favors changing these units independently and plugging them into larger projects like building blocks. With modules also comes the concept of namespaces, which means every variable, function, class or construct in a module is given a unique identifier (e.g. xml.process() or json.process()). These unique identifiers or namespaces also play an important role in development because they avoid potential name clashes (e.g. using a common function name like load() without namespaces can be confusing; with a namespace it's unequivocally clear whether it's xml.load() or json.load()).

With the explosive growth of modern JavaScript, it was only a matter of time for JavaScript to get its own import/export/include like statements, similar to and with the same purpose as those in other programming languages. But it turns out JavaScript modules were such an important missing piece in modern JavaScript development that there's not one, two or three ways to implement JavaScript modules, but four!

At this point, it's futile to moan about why there are so many JavaScript module types; the point is they're out there and you'll have to deal with them, at least for the time being. In no particular order, the different JavaScript module techniques are: CommonJS modules, AMD (Asynchronous Module Definition) modules, UMD (Universal Module Definition) modules and native ES6 modules.

Module loaders and bundlers

As if having to transpile other languages into JavaScript or having four different JavaScript module systems to deal with wasn't enough, now you'll need to learn about module loaders & bundlers! Fortunately module loaders & bundlers -- which are often the same piece of software -- are used to bring a little sanity to the prospect of transpiling and dealing with various module types.

Let's take a textbook example of what module loaders do. You found this exciting package that's a bit dated, which uses plain JavaScript and CommonJS modules, but your application must use this other bleeding edge framework that's written in TypeScript and uses ES 6 modules; on top of this you need to use JSX because you want React components. The good news is you can use a module loader, the bad news is you'll need to use a module loader.

A module loader takes care of unifying whatever discrepancies might exist between JavaScript modules. This means an application can use a CommonJS module & ES 6 module and the loader makes them work together. In addition, a module loader also takes care of the transpiling process, so if an application uses TypeScript or JSX code, it gets transformed into plain JavaScript so it's understandable to a JavaScript engine.

The same piece of software that works as a module loader is also often a module bundler. A module bundler is useful because it groups an application's modules into a single module. Why would you even want to bundle modules? In other languages whose logic runs on the server, using a dozen or a hundred modules is an afterthought because all modules are loaded locally. But in JavaScript, where modules likely need to be transferred over a network to an end user's browser, producing a single module can be the difference between quick load times (i.e. one module) and long load times (i.e. a dozen or a hundred modules).

Given the amount of work a module loader/bundler performs, they often require a lot of configuration effort and it can take some time to understand all their capabilities. On top of this, module loaders/bundlers are probably one of the most fragmented segments in modern JavaScript (i.e. there are a lot of options and they all use different techniques).

Some of the more popular module loaders/bundlers include: SystemJS, RequireJS, webpack and browserify.

Callback functions

Callback functions are a big deal in JavaScript because operations are generally done on a single thread. This means that if you execute operations A, B & C on a single thread, operation B can't start until operation A finishes and operation C can't start until operation B finishes. This behavior, also characterized as blocking, is particularly detrimental in UI (User Interface) and I/O operations (e.g. reading/writing a file). You don't want a UI to block (i.e. 'freeze') while an application does another task that takes 5 or 10 seconds, just as you don't want to stop everything while an application reads or writes a large file.
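The non-blocking alternative to the A/B/C sequence above can be sketched with setTimeout, which hands work to a callback instead of blocking the main sequence (the log messages are made up):

```javascript
// Non-blocking in action: the main sequence never waits for the timer.
var order = [];

order.push("A");
console.log("A: main sequence starts");

setTimeout(function() {
  // This callback fires later, once the delay elapses and the main sequence is done
  order.push("B");
  console.log("B: callback fires after the delay");
}, 1000);

order.push("C");
console.log("C: runs immediately, without waiting for B");
// At this point order is ["A", "C"]; "B" is appended later
```

Even though B is scheduled before C runs, the output order is A, C, then B -- the single thread moves on and the callback waits its turn.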

To start a concrete discussion on callback functions, let's begin with a concept you've probably already used in past JavaScript projects: AJAX (Asynchronous JavaScript and XML). AJAX emerged as a means to get data into a browser from a remote server without the browser (i.e. UI) blocking. This means AJAX triggers a call to a remote server -- asynchronously -- while the main sequence can continue uninterrupted and callback functions are assigned the duties to wait out and process the remote server response. The following snippet illustrates a simple AJAX sequence that uses jQuery.

Callback functions in AJAX method call
// Operation 1 finishes here
// Operation 2 (AJAX) starts here
$.ajax("/remoteservice/")
  .done(function(data) {
    console.log("success");
  })
  .fail(function() {
    console.log("error");
  })
  .always(function() {
    console.log("complete");
  });
// Operation 3 starts here, right away

Notice that after operation 1 finishes, operation 2 with $.ajax starts and makes a call to a remote server. Because operation 2 is an AJAX call, it doesn't wait for the remote server to answer and execution continues immediately to operation 3. So who or what handles the remote server response? You can see that after the initial statement in operation 2 $.ajax("/remoteservice/") there are three chained functions .done(), .fail() and .always(). These last functions are callbacks and are what handle the remote server response from the AJAX call. If the remote server returns a successful response .done() is run, if the remote server sends a failed response .fail() is run, and irrespective of the remote server response .always() is always run.

An important characteristic of callback functions is that there's no assurance of when they get called -- it could be 5, 10, 15 seconds or 1 hour later, depending on how long the parent function takes to execute (in this case, however long the /remoteservice/ URL takes to send a response). The key thing is other method calls in the workflow don't have to wait for the parent function to complete.


Promises

Promises are used for deferred and asynchronous operations. The need for JavaScript promises has existed for some time, to the point various JavaScript libraries emerged to fill in this void (e.g. WinJS, RSVP.js). But it was only recently with ES6 that JavaScript got a native first-class object named Promise for this purpose -- just like String, Date and Object. With Promise becoming part of the core language, the use of promises in JavaScript has only risen, making it important to understand why and how to use promises.

While callback functions could be considered a type of promise, since they promise to run after an event -- in the previous example, a promise to process a remote server response -- there's a major drawback with callback functions when you need them to trigger yet more asynchronous processes. The following snippet shows a series of nested AJAX calls to better illustrate the problem.

Nested callback functions in AJAX method call

// Operation 1 finishes here
// Operation 2 (AJAX) starts here
$.ajax("/remoteservice/")
  .done(function(data) {
    console.log("success");
    $.ajax("/remoteservice/")
      .done(function(data) {
        console.log("nested success");
        $.ajax("/remoteservice/")
          .done(function(data) {
            console.log("nested nested success");
          });
      });
  });
// Operation 3 starts here, right away

As you can see in this last example, once the first AJAX call completes successfully, it triggers yet another AJAX call, and once this last call completes successfully, it triggers yet another AJAX call. Notice this sequence doesn't even take into account error handling (i.e. the .fail() callback used in the previous callback method example). If an error happened on the first AJAX call, it would require an additional tree of calls, and to handle an error on the second AJAX call, yet another tree of calls. This type of callback nesting can quickly get out of hand, to the point it's often called callback hell, due to how difficult it can be to work with (e.g. understand, debug, change).

If you think this type of operation nesting isn't too common in JavaScript, think again. With the demands placed on JavaScript applications, a series of dependent operations can easily be something like: an authentication service that, once successful, loads a user's profile, which once successful loads a user's pictures, or some other variation like it. Faced with these demands and to avoid confusing callback nesting, the Promise object was created.

Promise objects are passed around like any other object reference. The following example shows a function that checks if the input is a number and returns a Promise object as the result.

Promise object syntax
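The original listing for this example isn't reproduced here; based on the description that follows, a minimal sketch might look like this (the exact rejection reasons and log messages are assumed):

```javascript
// Sketch of the promiseNumberCheck example described below.
function promiseNumberCheck(value) {
  if (isNaN(value)) {
    // Not a number: return a Promise object as rejected with a reason
    return Promise.reject("'" + value + "' is not a number");
  }
  // A number: return a Promise object as resolved with the number
  return Promise.resolve(value);
}

var test1 = promiseNumberCheck(10);    // resolved Promise
var test2 = promiseNumberCheck("ten"); // rejected Promise

// Variation 1: then() with two handlers (resolved, rejected)
test1.then(
  function(number) { console.log("Resolved with: " + number); },
  function(reason) { console.log("Rejected with: " + reason); }
);

// Variation 2: then() chained with catch()
test2
  .then(function(number) { console.log("Resolved with: " + number); })
  .catch(function(reason) { console.log("Rejected with: " + reason); });
```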

The promiseNumberCheck function uses isNaN to determine if the argument is a number. If the argument is not a number the function returns a Promise object as rejected with the given reason, if the argument is a number the function returns a Promise object as resolved with the given number. Next, we call the promiseNumberCheck function two times -- once with a number, the other with a string -- and assign the Promise objects to test1 and test2.

Next, you can see the Promise objects have the then() method called on them. The purpose of the then() method is to analyze the status of a Promise object. In this case, the status of both Promise objects is clear and instant because we set them, but in other cases a Promise object's status may be neither clear nor instant (e.g. if a remote service is involved). So the purpose of the then() method is to say 'when the Promise object status is set (rejected or resolved) and whether it's instantly or in 5 minutes, then do this'.

The Promise object's then method has two variations. The first option in the test1 reference uses two functions, the first one to handle a resolved promise and the second to handle a rejected promise, in both cases the function's input is the value assigned to the promise. The second option used in the test2 reference is to chain the then() method with the catch() method, in this case the then() method handles the resolved promise with a single method and the catch() method is used to handle a rejected promise with its own method.

Now that you're familiar with the syntax and main methods in Promise objects, let's create another example that introduces a delay in Promise objects to better illustrate how they're useful.

Promise objects with delay
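The listing for this example isn't shown; a sketch consistent with the description below might be the following (the resolve/reject messages are assumed):

```javascript
function promiseRandomSeconds() {
  // Generate a random number between 1 and 10 to use as a delay
  var seconds = Math.floor(Math.random() * 10) + 1;
  // Immediately return a Promise whose status is determined later
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      if (seconds % 2 === 0) {
        resolve(seconds + " seconds is even, resolved!");
      } else {
        reject(seconds + " seconds is odd, rejected!");
      }
    }, seconds * 1000);
  });
}

var test1 = promiseRandomSeconds();
var test2 = promiseRandomSeconds();

test1
  .then(function(result) { console.log(result); })
  .catch(function(reason) { console.log(reason); });
test2
  .then(function(result) { console.log(result); })
  .catch(function(reason) { console.log(reason); });

console.log("Hey there, I didn't have to wait!");
```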

The promiseRandomSeconds() function first generates a random number between 1 and 10 to use as a delay. Next, we immediately return a Promise instance with new Promise(). Unlike the Promise objects used in the promiseNumberCheck() function in the previous example, the Promise object status in this example isn't determined until a later time, so we use the generic new Promise() syntax.

Notice that wrapped inside this new Promise() statement is the function(resolve, reject) { } function. It's inside this last function that you can perform any logic you require and then mark the Promise object with the resolve or reject references. In this case, the logic inside function(resolve, reject) { } simply introduces a delay based on the random number of seconds and, depending on this value, marks the Promise object with resolve if the value is even or with reject if the value is odd.

Next, we make two calls to the promiseRandomSeconds function and assign them to the test1 and test2 references. Because the test1 and test2 references contain Promise object instances, we then use the then() and catch() methods to determine the status of both Promise objects, just as we did in the first promise example.

The most interesting part of this last example is the final log statement console.log("Hey there, I didn't have to wait!"). Even though the logic prior to this log statement introduces potential delays between 1 and 10 seconds, this last log statement is the first thing that's sent to output. Because the prior logic is based on Promise objects, it doesn't interfere with the main sequence and the final result (resolve or reject) isn't reported until the Promise object determines it. In this case the Promise logic is pretty basic and based on a random number, but it can become a very powerful technique when used with tasks that can be delayed or return unexpected results (e.g. a REST service, reading a large file).

Finally, to close the discussion on the Promise object, the following example illustrates how to chain Promise objects.

Chained Promise objects
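The listing for this example isn't shown either; a sketch consistent with the description below might be (the function signature and words are assumed for illustration):

```javascript
function promiseWordAppender(phrase, word) {
  // Always return a Promise object as resolved with the appended word
  return Promise.resolve(phrase + " " + word);
}

var test1 = promiseWordAppender("Modern", "JavaScript");

test1
  .then(function(phrase) {
    // Returning another Promise here chains the next then()
    return promiseWordAppender(phrase, "promise");
  })
  .then(function(phrase) {
    return promiseWordAppender(phrase, "chains");
  })
  .then(function(phrase) {
    console.log(phrase); // prints "Modern JavaScript promise chains"
  });
```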

The promiseWordAppender function always returns a Promise object as resolved with the given word argument. The interesting functionality in this example comes with the chained then() methods on the test1 reference. Notice that inside the first then() method a return statement is used, which is a way to invoke the parent method again. Inside the second then() method another return statement is used to call the parent method one more time. In this case the Promise logic simply appends words, but it's a powerful construct to be able to chain multiple then() statements with more complex logic (e.g. call service A, once finished call service B, once finished call service C, all this considering the times to execute each service can vary and each call is not guaranteed to be successful).