Node is a fantastic platform for writing backends. Except when you don’t get things right.

Depending on which side of the fence you happen to be on, Node is either the best or the worst thing to happen to the web development world. But opinions notwithstanding, there's no arguing with Node's popularity. It shot up way faster than anyone expected, even its creator (he said so in an otherwise pessimistic interview)!

As of writing, it's the default platform for starting new apps, which I admit is often the result of herd mentality, but the net effect is that there are more jobs, more money, and more exciting projects in Node than in the traditional scripting languages.

It’s come to a point, unfortunately, that when someone asks me to recommend them a starting stack for web development or new startup products, Node is my #1 recommendation even though I’m well versed in PHP and Laravel.

If I might be allowed to continue the rant a little (which I will be, since I'm the one writing 😉), Node haters have a point when they say that their favorite web stack can do things just as well as Node does, but the converse is also true. And then there are things like async programming and events, which were baked into Node from day 1, and which other ecosystems are now desperately trying to copy.

Today we have async options in PHP and Python, but unfortunately, the cores of the existing, popular libraries are purely synchronous, so it almost feels like you're fighting against the system. But anyway, enough ranting for a day. 🙂

So, if you’re a Node developer (beginner or familiar), it’s likely that you’re making one of these big mistakes that negatively affect your application. It might be because you’re not familiar with a particular way of doing things better in Node, or maybe it’s simply habits you’ve carried over from some other ecosystem.

Not respecting the event loop

When a person migrates to Node, it’s partly because they’ve heard stories of how LinkedIn scales using Node, or they’ve seen benchmarks that show Node running circles around PHP, Ruby, etc. when it comes to serving requests per second or handling open socket connections.

So they build their app, expecting the same explosive response times they dreamed of — except that nothing close to it happens.

One of the prime reasons for this is not understanding the event loop properly. Consider the following code that gets a set of books from the database and then sorts them by the total number of pages:

db.Library.get(libraryId, function(err, library) {
    let books = library.books;
    books.sort(function(a, b) {
        return a.pages < b.pages ? -1 : 1;
    });
});

Granted, this code doesn't do anything with the sorted books array, but that's not the point here. The point is that such innocent-looking code is enough to blow up the event loop as soon as you start dealing with a non-trivial number of books.

The reason is that the event loop is meant to perform non-blocking I/O. A good example is that of a pizza packer at a pizza joint — the person specializes in cutting the pizza, folding covers into delivery boxes, putting the pizza in, attaching the right labels, and pushing it to the delivery guy.

Amazing, right? Just like Node!


But consider what will happen if this person also needs to mix, prepare and package the seasonings. Depending on how intricate the process is, the pizza packing rate will be cut down to one-third, or maybe even come to a complete stop.

This is what we mean by tasks that are “blocking” — as long as Node simply has to pass information around, it’s very fast and ideally the best option, but as soon as it needs to do some extensive calculations, it stops, and everything else has to wait. This happens because the event loop is single-threaded (more details here.)

So, don't perform heavy calculations within the event loop, no matter how important they are. Adding a few numbers or taking an average is fine, but crunching large data sets will make your Node app crawl.

Hoping that async code will cooperate

Consider this very simple Node example that reads data from a file and displays it:

const fs = require('fs');

let contents = fs.readFile('secret.txt', (err, data) => {
    return data;
});

console.log('File contents are: ' + contents);

Exposure to classical languages (like PHP, Python, Perl, Ruby, C++, etc.) primes you to expect that after this code runs, the variable contents will hold the contents of the file. But here's what happens when you actually execute the code:

We get undefined (<slow clap>). That’s because while you may care deeply about Node, its async nature doesn’t care about you (it’s meant to be a joke! Please don’t spam hate comments here 😛 ). Our job is to understand its async nature and work with it. readFile() is an asynchronous function, which means as soon as it’s called, the Node event loop passes off the work to the filesystem component and moves on.

Node does come back to the callback later, once the file has been read, but readFile() itself returns nothing, so contents ends up holding undefined. The correct way is to process the file data inside the callback function, but I can't go into more details as this is not a Node tutorial. 🙂

Callback that calls the callback that calls the callback that calls . . .

JavaScript is closer to functional programming than any other older, mainstream language (in fact, all said and done, it's my favorite when it comes to object-oriented design and functional capabilities — I put it above Python, PHP, Perl, Java, and even Ruby when it comes to writing "enjoyable" code).

That is, functions are first-class citizens to a degree they aren't in other languages. Couple this with the fact that asynchronous code works by having you provide a callback function, and we end up with a recipe for disaster known as Callback Hell.

Here’s some sample Electron code I came across on Quora. What do you think it does?

var options;

require('electron').app.on('ready', function () {

    options = {
        frame: false,
        height: 768,
        width: 1024,
        x: 0,
        y: 0
    };

    options.BrowserWindow = require('electron').BrowserWindow;
    options.browserWindow = new options.BrowserWindow(options);
    options.browserWindow.webContents.on('did-finish-load', function () {
        options.browserWindow.webContents.executeJavaScript(
            'document.title',
            function (data) {
                console.log(data);
            }
        );
    });
});

If you’re having a hard time, join the club!

Functions inside functions inside functions are hard to read and very hard to reason about, which is why this style has been termed "callback hell" (I suppose Hell is a confusing place to get out of!). While this technically works, you're future-proofing your code against any attempts at comprehension and maintenance.

There are many ways to avoid callback hell, including Promises and Reactive Extensions.

Not using all CPU cores

Modern processors have several cores — 2, 4, 8, 16, 32 . . . the number keeps climbing.

But this isn't what the Node creator had in mind when he released Node. As a result, Node is single-threaded: your JavaScript runs on a single thread (inside a single process — the two are not the same thing), utilizing only one CPU core.

That means if you learned Node from tutorials and friends and code snippets floating around, and have your app deployed on an 8-core server, you’re wasting 7/8 of the processing power available!

Needless to say, it's a massive waste. If you follow this path, you'll end up paying for eight servers when you only need one — spending $16,000 per month when $2,000 would do (and losing money always hurts, right? 😉). All this, when the solution is pretty simple: the cluster module.

I can't go into all the details here, but the technique is simple: detect how many cores the current machine has, launch that many Node instances, and fork a replacement whenever a worker dies. Here's how simple it is to implement (tutorial here):

var cluster = require('cluster');

if(cluster.isMaster) {
    var numWorkers = require('os').cpus().length;

    console.log('Master cluster setting up ' + numWorkers + ' workers...');

    for(var i = 0; i < numWorkers; i++) {
        cluster.fork();
    }

    cluster.on('online', function(worker) {
        console.log('Worker ' + worker.process.pid + ' is online');
    });

    cluster.on('exit', function(worker, code, signal) {
        console.log('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
        console.log('Starting a new worker');
        cluster.fork();
    });
} else {
    var app = require('express')();
    app.all('/*', function(req, res) { res.send('process ' + process.pid + ' says hello!').end(); });

    var server = app.listen(8000, function() {
        console.log('Process ' + process.pid + ' is listening to all incoming requests');
    });
}

As you can see, cluster.fork() does the magic, and the rest is simply listening to a couple of essential cluster events and doing the necessary cleanup.

Not using TypeScript

Okay, it’s not a mistake, as such, and plenty of Node applications have been and are being written without TypeScript.

That said, TypeScript offers the guarantees and peace of mind that Node always needed, and in my eyes, it’s a mistake if you’re developing for Node in 2019 and not using TypeScript (especially when the A (Angular) in the MEAN stack moved to TypeScript long ago).

The transition is gentle, and TypeScript is almost precisely like the JavaScript you know, with the safety of types, ES6 features, and a few other checks thrown in:

//   /lib/controllers/crmController.ts
import * as mongoose from 'mongoose';
import { ContactSchema } from '../models/crmModel';
import { Request, Response } from 'express';

const Contact = mongoose.model('Contact', ContactSchema);
export class ContactController {
    public addNewContact(req: Request, res: Response) {
        let newContact = new Contact(req.body);
        newContact.save((err, contact) => {
            if (err) {
                res.send(err);
            }
            res.json(contact);
        });
    }
}

I'd recommend checking out this nice and friendly TypeScript tutorial.


Node is impressive, but it’s not without its (many?) problems. That said, this applies to all technologies out there, new and old, and we’ll do better to understand Node and work with it.

I hope these five tips will prevent you from getting sucked into the tar pit of perennial bugs and performance issues. If I missed something interesting, please let me know, and I'll be more than happy (in fact, thankful!) to include it in the article. 🙂