Faye 0.8: the refactoring

Posted by James Coglan

I’m pleased to finally announce the release of Faye 0.8 after a few months of reorganising the 0.7 codebase to make it more modular, and splitting parts of it out into separate projects. Before I get to what’s changed, I’m going to get the API changes out of the way: this is the stuff you need to know if you’re upgrading.

I hate introducing API changes, but I’m afraid these couldn’t be avoided. They are really configuration changes, so you shouldn’t need to change much code. Please get on the mailing list if you have problems.

First, if you’re running the Ruby server you need to tell it which web server you’re using. Faye now supports the Thin, Rainbows and Goliath web servers, and you need to tell the WebSocket layer which set of adapters to load. For a ‘hello world’ app it looks like this:

# config.ru
require 'faye'
Faye::WebSocket.load_adapter('thin')

app = Faye::RackAdapter.new(:mount => '/faye', :timeout => 25)

run app

Depending on whether you run your application with rackup or thin, the load_adapter call might not be strictly necessary, but it’s better to have it there just in case. See the Faye Ruby docs and the faye-websocket documentation for how to run your app with different Ruby servers.

Second, if you use the Redis backend, you need to install a new library and change the engine.type setting in your server. Instead of specifying the name of the engine, this field now takes a reference to the engine object. In Ruby that looks like this:

# config.ru
# First run: gem install faye-redis

require 'faye'
require 'faye/redis'

bayeux = Faye::RackAdapter.new(
  :mount   => '/',
  :timeout => 25,
  :engine  => {
    :type  => Faye::Redis,
    :host  => 'redis.example.com',
    # more options
  }
)

And on Node:

// First run: npm install faye-redis

var faye  = require('faye'),
    redis = require('faye-redis');

var bayeux = new faye.NodeAdapter({
  mount:    '/',
  timeout:  25,
  engine: {
    type:   redis,
    host:   'redis.example.com',
    // more options
  }
});

Apart from this, Faye 0.8 should be backward-compatible with previous releases.

Having got the administrivia out of the way, what’s new? Well, the main focus of the 0.8 release is modularity. Two major components of Faye – the WebSocket support and the Redis engine – have been split off into their own packages. I’ve been blogging here already about faye-websocket (rubygem, npm package), and the major work over the last few months has gone into making it a really solid WebSocket and EventSource implementation. Because of its new-found freedom outside the main Faye project, it’s been adopted by other projects, notably SockJS, Cramp and Poltergeist. This adoption, particularly from SockJS, has brought more feedback, bug fixes and performance improvements, and the result is a WebSocket library that makes Faye itself faster.

This new library has had a beneficial impact on Faye’s transport layer. faye-websocket is much faster than Faye’s 0.7 WebSocket code, supports EventSource and new WebSocket features, and runs on more Ruby servers: Faye 0.7 was confined to Thin, whereas it now also runs on Rainbows and Goliath. On top of this, Faye 0.8 adds a new EventSource-based transport to support Opera 11 and browsers where proxies block WebSocket, and improves how it uses WebSocket. Previously, Faye’s WebSocket transport used the same polling-based /meta/connect cycle as the other transports. It was faster than HTTP, but not optimal. Faye 0.8 breaks out of this request/response pattern and pushes messages to WebSocket and EventSource connections as soon as they arrive at the server, without waiting for the /meta/connect poll to return. The result is lower latency, particularly when delivering messages at high volume.
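
The difference is easy to picture with a sketch; this is an illustration of the idea, not Faye’s actual source:

// With a polling transport, messages queue up until the client's next
// /meta/connect request arrives; a socket transport can push straight away.
function deliver(client, message) {
  if (client.socket) {
    client.socket.send(JSON.stringify(message)); // WebSocket/EventSource: send now
  } else {
    client.queue.push(message);                  // HTTP: wait for the next poll
  }
}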

The second major change is that the Redis engine is now a separate library, faye-redis (rubygem, npm package). This has two important benefits. First, the main Faye package no longer depends on Redis clients, and in particular the Node version no longer depends on packages with C extensions, so no compiler is needed to install it. Second, it means Faye’s backend is now totally pluggable and third parties can implement their own engines: the API is thoroughly documented for Ruby and Node. The engine is a small piece of code (for example here’s the Ruby in-memory engine) but it really defines Faye’s behaviour. This layer is not concerned with transport negotiation (HTTP, WebSocket, etc) or even the Bayeux protocol format, it just implements the messaging business logic and stores the state of the system. You can easily implement your own engine to run on top of another messaging/storage stack, or change the messaging semantics if you like. Faye has a complete set of tests you can run to check your engine – see the Ruby and Node projects for examples.
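
To give a feel for the engine API, here’s a stripped-down sketch of an in-memory engine in JavaScript. The method names follow the documented engine interface as I understand it, but treat the exact signatures as assumptions and check the Ruby and Node docs before building on them:

var MemoryEngine = {
  // Faye instantiates the engine through this factory, passing a server proxy
  create: function(server, options) {
    return new MemoryEngine.Engine(server, options);
  },

  Engine: function(server, options) {
    this._server   = server;       // assumed to expose generateId() and deliver()
    this._options  = options || {};
    this._clients  = {};           // clientId -> {channel: true}
    this._channels = {};           // channel  -> {clientId: true}
  }
};

MemoryEngine.Engine.prototype.createClient = function(callback, context) {
  var clientId = this._server.generateId();
  this._clients[clientId] = {};
  callback.call(context, clientId);
};

MemoryEngine.Engine.prototype.subscribe = function(clientId, channel, callback, context) {
  this._clients[clientId][channel] = true;
  (this._channels[channel] = this._channels[channel] || {})[clientId] = true;
  if (callback) callback.call(context, true);
};

MemoryEngine.Engine.prototype.publish = function(message, channels) {
  // Hand the message to the server for every client subscribed to any channel
  for (var i = 0, n = channels.length; i < n; i++) {
    var subscribers = this._channels[channels[i]] || {};
    for (var clientId in subscribers) this._server.deliver(clientId, [message]);
  }
};

// destroyClient, clientExists, unsubscribe, ping and disconnect complete the API

An engine like this would then be plugged in through the engine.type setting shown above.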

Finally, there have been a couple of changes to the client. We’ve switched from exponential backoff to a fixed interval when reconnecting after the client loses its connection to the server, and this interval is configurable using the retry setting:

// Attempts to reconnect every 5 seconds
var client = new Faye.Client('example.com/bayeux', {
  retry: 5
});

You can also set headers for long-polling requests on the client; this is useful for talking to OAuth-protected servers, for example:

client.setHeader('Authorization', 'OAuth ' + accessToken);

And finally, there’s a new server-side setting, ping, that controls how often the server sends keep-alive data over WebSocket and EventSource connections. This data is ignored by the client, but helps keep the connection open through proxies that like to kill idle connections.

// Sends pings every 10 seconds
var bayeux = new faye.NodeAdapter({
  mount: '/bayeux',
  ping:  10
});

And that just about wraps things up for this release. Since 0.7.1, the main Faye codebase has shed over 2,000 lines of code into other projects, which can ship incremental updates without affecting Faye itself. It’s faster, leaner and more modular, and I know there are already projects doing cool things with it. If you’re using Faye for interesting projects, I’d love to hear from you on the mailing list.

Posted in Evented, Faye, JavaScript, Node, Ruby

Organizing a project with JS.Packages

Posted by James Coglan

I’ve been asked by a few users of JS.Class to explain how I use it to organize projects. I’ve been meaning to write this up for quite a while, ever since we adopted it at Songkick for managing our client-side codebase. Specifically, we use JS.Packages to organize our code, and JS.Test to test it, and I’m mostly going to talk about JS.Packages here.

JS.Packages is my personal hat-throw into the ring of JavaScript module loaders. It’s designed to separate dependency metadata from source code, and be capable of loading just about anything as efficiently as possible. It works at a more abstract level than most script loaders: users specify objects they want to use, rather than scripts they want to load, allowing JS.Packages to optimize downloads for them and load modules that have their own loading strategies, all through a single interface, the JS.require() function.
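
In miniature, the whole interface looks like this; the path and object name are invented for illustration, and file() and provides() reappear in the real manifest below:

// Describe where an object lives...
JS.Packages(function() { with(this) {
  file('/javascripts/widget.js').provides('Widget');
}});

// ...then ask for the object by name; the script is only fetched if needed
JS.require('Widget', function() {
  var widget = new Widget();
});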

As an example, I’m going to show how we at Songkick use JS.Packages within our main Rails app. We manage our JavaScript and CSS by doing as much as possible in those languages, and finding simple ways to integrate with the Rails stack. JS.Packages lets us specify where our scripts live and how they depend on each other in pure JavaScript, making this information portable. We use JS.require() to load our codebase onto static pages for running unit tests without the Rails stack, and we use jsbuild and AssetHat to package it for deployment. Nowhere in our setup do we need to manage lists of script tags or worry about load order.

The first rule of our codebase is: every class/module lives in its own file, much like how we organize our Ruby code. And this means every namespace: even if a namespace has no methods of its own but just contains other classes, we give it a file so that other files don’t have to guess whether the namespace is defined or not. For example a file containing a UI widget class might look like this:

// public/javascripts/songkick/ui/widget.js

Songkick.UI.Widget = function() {
  // ...
};

This file does not have to check whether Songkick or Songkick.UI is defined, it just assumes they are. The namespaces are each defined in their own file:

// public/javascripts/songkick.js
Songkick = {};

// public/javascripts/songkick/ui.js
Songkick.UI = {};

Notice how each major class or namespace lives in a file named after the module it contains; this makes it easier to find things while hacking and lets us take advantage of the autoload() feature in JS.Packages to keep our dependency data small. It looks redundant at first, but it helps maintain predictability as the codebase grows. It results in more files, but we bundle everything for production so we keep our code browsable without sacrificing performance. I’ll cover bundling later on.

To drive out the implementation of our UI widget, we use JS.Test to write a spec for it. I’m just going to give it some random behaviour for now to demonstrate how we get everything wired up.

// test/js/songkick/ui/widget_spec.js

Songkick.UI.WidgetSpec = JS.Test.describe("Songkick.UI.Widget", function() { with(this) {
  before(function() { with(this) {
    this.widget = new Songkick.UI.Widget("foo")
  }})

  it("returns its attributes", function() { with(this) {
    assertEqual( {name: "foo"}, widget.getAttributes() )
  }})
}})

So now we’ve got a test and some skeleton source code: how do we run the tests? First, we need a static page that loads the JS.Packages loader, our manifest (which we’ll get to in a second) and a script that runs the tests:

// test/js/browser.html

<!doctype html>
<html>
  <head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8">
    <title>JavaScript tests</title>
  </head>
  <body>

    <script type="text/javascript">ROOT = '../..'</script>
    <!-- adjust these paths to wherever JS.Class and your scripts live -->
    <script type="text/javascript" src="../../vendor/jsclass/loader-browser.js"></script>
    <script type="text/javascript" src="../../public/javascripts/manifest.js"></script>
    <script type="text/javascript" src="runner.js"></script>

  </body>
</html>

The file runner.js should be very simple: ideally we just want to load Songkick.UI.WidgetSpec and run it:

// test/js/runner.js

// Don't cache files during tests
JS.cacheBust = true;

JS.require('JS.Test', function() {

  JS.require(
    'Songkick.UI.WidgetSpec',
    // more specs as the app grows...
    function() { JS.Test.autorun() });
});

The final missing piece is the manifest, the file that says where our files are stored and how they depend on each other. Let’s start with a manifest that uses autoload() to specify all our scripts’ locations; I’ll present the code and explain what each line does.

// public/javascripts/manifest.js

JS.Packages(function() { with(this) {
  var ROOT = JS.ENV.ROOT || '.';

  autoload(/^(.*)Spec$/,     {from: ROOT + '/test/js', require: '$1'});
  autoload(/^(.*)\.[^\.]+$/, {from: ROOT + '/public/javascripts', require: '$1'});
  autoload(/^(.*)$/,         {from: ROOT + '/public/javascripts'});
}});

The ROOT setting simply lets us override the root directory for the manifest, as we do on our test page. After that, we have three autoload() statements. When you call JS.require() with an object that hasn’t been explicitly configured, the autoload() rules are examined in order until a match for the name is found.

The first rule says that object names matching /^(.*)Spec$/ (that is, test files) should be loaded from the test/js directory. For example, Songkick.UI.WidgetSpec should be found in test/js/songkick/ui/widget_spec.js. The require: '$1' means that the object depends on the object captured by the regex, so Songkick.UI.WidgetSpec requires Songkick.UI.Widget to be loaded first, as you’d expect.

The second rule makes sure that the containing namespace for any object is loaded before the object itself. For example, it makes sure Songkick.UI is loaded before Songkick.UI.Widget, and Songkick before Songkick.UI. The regex captures everything up to the final . in the name, and makes sure it’s loaded using require: '$1'.

The third rule is a catch-all: any object not matched by the above rules should be loaded from public/javascripts. Because of the preceding rule, this only matches root objects, i.e. it matches Songkick but not Songkick.UI. Taken together, these rules say: load all objects from public/javascripts, and make sure any containing namespaces are loaded first.
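
To make this concrete, here’s how a single require resolves through the three rules, matching the jsbuild output we’ll see shortly:

// JS.require('Songkick.UI.WidgetSpec') resolves as:
//   rule 1: test/js/songkick/ui/widget_spec.js       requires Songkick.UI.Widget
//   rule 2: public/javascripts/songkick/ui/widget.js requires Songkick.UI
//   rule 2: public/javascripts/songkick/ui.js        requires Songkick
//   rule 3: public/javascripts/songkick.js
// ...so the scripts load in reverse order: songkick.js first, the spec last.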

Let’s implement the code needed to make the test pass. We’re going to use jQuery to do some trivial operation; the details aren’t important but it causes a dependency problem that I’ll illustrate next.

// public/javascripts/songkick/ui/widget.js

Songkick.UI.Widget = function(name) {
  this._name = name;
};

Songkick.UI.Widget.prototype.getAttributes = function() {
  return jQuery.extend({}, {name: this._name});
};

If you open the page test/js/browser.html, you’ll see an error:

[Screenshot: JS.Test reports an error running the spec]

The test doesn’t work because jQuery is not loaded; this means part of our codebase depends on it but JS.Packages doesn’t know that. Remember that runner.js just requires Songkick.UI.WidgetSpec? We can use jsbuild to see which files get loaded when we require this object. (jsbuild is a command-line tool I wrote after an internal project at Amazon that was using JS.Class decided it needed to pre-compile its code for static analysis rather than loading it dynamically at runtime. You can install it by running npm install -g jsclass.)

$ jsbuild -m public/javascripts/manifest.js -o paths Songkick.UI.WidgetSpec
public/javascripts/songkick.js
public/javascripts/songkick/ui.js
public/javascripts/songkick/ui/widget.js
test/js/songkick/ui/widget_spec.js

As expected, it loads the containing namespaces, the Widget class, and the spec, in that order. But the Widget class depends on jQuery, so we need to tell JS.Packages about this. Rather than adding jQuery as a dependency of every UI module in our application, we can use a naming convention trick: all our UI modules require Songkick.UI to be loaded first, so we can make everything in that namespace depend on jQuery by making the namespace itself depend on jQuery. We update our manifest like so:

// public/javascripts/manifest.js

JS.Packages(function() { with(this) {
  var ROOT = JS.ENV.ROOT || '.';

  file('https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js')
    .provides('jQuery', '$');

  autoload(/^(.*)Spec$/,     {from: ROOT + '/test/js', require: '$1'});
  autoload(/^(.*)\.[^\.]+$/, {from: ROOT + '/public/javascripts', require: '$1'});
  autoload(/^(.*)$/,         {from: ROOT + '/public/javascripts'});

  pkg('Songkick.UI').requires('jQuery');
}});

Running jsbuild again shows jQuery will be loaded, and if you reload the tests now they will pass:

$ jsbuild -m public/javascripts/manifest.js -o paths Songkick.UI.WidgetSpec
public/javascripts/songkick.js
https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js
public/javascripts/songkick/ui.js
public/javascripts/songkick/ui/widget.js
test/js/songkick/ui/widget_spec.js

[Screenshot: the tests now pass in the browser]

So we’ve now got a working UI widget, and we can use exactly the same approach to load it in our Rails app: load the JS.Packages library and our manifest, and call JS.require('Songkick.UI.Widget'). But in production we’d rather not download all those tiny files one at a time; it’s much more efficient to bundle them into one file.

To bundle our JavaScript and CSS for Rails, we use AssetHat, or rather a fork we made to tweak a few things. Our fork notwithstanding, AssetHat is the closest of the handful of Rails packaging solutions we tried that did everything we needed, and I highly recommend it.

AssetHat uses a file called config/assets.yml, in which you list all the bundles you want and which files should go in each one. But I’d rather specify which objects I want in each bundle: we already have tooling that figures out which files we need and in what order, so there’s no point duplicating that information. Fortunately, AssetHat lets you put ERB in your config, and we use this to shell out to jsbuild to construct our bundles for us.

First, we write a jsbuild bundles file that says which objects our application needs. We exclude jQuery from the bundle because we’ll probably load that from Google’s CDN.

// config/bundles.json
{
  "app" : {
    "exclude" : [ "jQuery" ],
    "include" : [
      "Songkick.UI.Widget"
    ]
  }
}

This is a minimal format that’s close to what the application developer works with: objects. It’s easy to figure out which objects your app needs, but less simple to make sure you load only the files you need, in the right order, in both your test pages and your application code. We can use jsbuild to tell us which files will go into this bundle:

$ jsbuild -m public/javascripts/manifest.js -b config/bundles.json -o paths app
public/javascripts/songkick.js
public/javascripts/songkick/ui.js
public/javascripts/songkick/ui/widget.js

Now all we need to do is pipe this information into AssetHat. This is easily done with a little ERB magic:

// config/assets.yml
# ...
js:
  <%  def js_bundles
        JSON.parse(File.read('config/bundles.json')).keys
      end

      def paths_for_js_bundle(name)
        jsbuild = 'jsbuild -m public/javascripts/manifest.js -b config/bundles.json'
        `#{jsbuild} -o paths -d public/javascripts #{name}`.split("\n")
      end
  %>

  bundles:
  <% js_bundles.each do |name| %>
    <%= name %>:
    <% paths_for_js_bundle(name).each do |path| %>
      - <%= path %>
    <% end %>
  <% end %>

Running the minification task takes the bundles we’ve defined in bundles.json and packages them for us:

$ rake asset_hat:minify
Minifying CSS/JS...

 Wrote JS bundle: public/javascripts/bundles/app.min.js
        contains: public/javascripts/songkick.js
        contains: public/javascripts/songkick/ui.js
        contains: public/javascripts/songkick/ui/widget.js
        MINIFIED: 14.4% (Engine: jsmin)

This bundle can now be loaded in your Rails views very easily:

<%= include_js :bundle => 'app' %>

This will render script tags for each individual file in the bundle during development, and a single script tag containing all the code in production. (You may have to disable the asset pipeline in recent Rails versions to make this work.)

So that’s our JavaScript strategy. As I said earlier, the core concern is to express dependency information in one place, away from the source code, in a portable format that can be used just as easily in a static web page as in your production web framework. Using autoload() and some simple naming conventions, you can get all these benefits while keeping the configuration very small indeed.

But wait, there’s more!

As a demonstration of how valuable it is to have portable dependency data and tests, consider the situation where we now want to run tests from the command line, or during our CI process. We can load the exact same files we load in the browser, plus a little stubbing of the jQuery API, and make our tests run on Node:

// test/js/node.js

require('jsclass');
require('../../public/javascripts/manifest');

JS.ENV.jQuery = {
  extend: function(a, b) {
    for (var k in b) a[k] = b[k];
    return a;
  }
};

JS.ENV.$ = JS.ENV.jQuery;

require('./runner');

And lo and behold, our tests run:

$ node test/js/node.js
Loaded suite Songkick.UI.Widget

Started
.
Finished in 0.003 seconds
1 tests, 1 assertions, 0 failures, 0 errors

Similarly, we can write a quick PhantomJS script to parse the log messages that JS.Test emits:

// test/js/phantom.js

var page = new WebPage();

page.onConsoleMessage = function(message) {
  try {
    var result = JSON.parse(message).jstest;
    if ('total' in result && 'fail' in result) {
      console.log(message);
      var status = (!result.fail && !result.error) ? 0 : 1;
      phantom.exit(status);
    }
  } catch (e) {}
};

page.open('test/js/browser.html');

We can now run our tests on a real WebKit instance from the command line:

$ phantomjs test/js/phantom.js
{"jstest":{"fail":0,"error":0,"total":1}}

One nice side-effect of doing as much of this as possible in JavaScript is that it improves your API design and forces you to decouple your JS from your server-side stack: if it can’t be done through HTML and JavaScript, your code doesn’t do it. This keeps your code portable and easier to reuse across applications with different server-side stacks.

Posted in JavaScript, JS.Class, Node, Rails, Testing

The cost of privacy

Posted by James Coglan

I have a bone to pick with a certain oddly prevalent piece of received wisdom in the JavaScript community. I’ve been meaning to rant about this properly for what seems like geologic amounts of time, but I finally hit a concrete example today that both broke the camel’s back and gave me something I could actually illustrate my point with.

Let’s talk about private variables, or more precisely, defining API methods inside a closure, giving them privileged access to data that code outside the closure cannot see. This is used in several JavaScript design patterns, for example when writing constructors or using the module pattern:

// Defining a constructor
var Foo = function() {
  var privateData = {hello: 'world', etc: 'etc'};

  this.publicMethod = function(key) {
    return privateData[key];
  };
};

// Defining a single object
var Bar = (function() {
  var privateData = {hello: 'world', etc: 'etc'};

  return {
    publicMethod: function(key) {
      return privateData[key];
    }
  };
})();

The reason people use this pattern, in fact the only reason for doing so as far as I can tell, is encapsulation. They want to keep the internal state of something private, so that access to it is predictable and it can’t put the object in a weird or inconsistent state, or leak implementation details to consumers of the API. These are all laudable design goals.

However, encapsulation does not need to be rigorously enforced by the machine, and this style has all sorts of annoying costs that I’ll get to in just a second. Encapsulation is something you get by deliberately designing interfaces and architectures, by communicating with your team and users, and through documentation and tests. Trying to enforce it in code shows a level of paranoia that isn’t necessary in most situations, and the costs grossly outweigh the minimal encapsulation benefit it provides.
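
For example, a naming convention gives you the encapsulation signal without the machinery; this sketch uses the underscore prefix that appears throughout the rest of this post:

// 'Private' by convention: the underscore warns consumers off, while the
// method lives on the prototype and is shared by every instance
var Foo = function() {
  this._privateData = {hello: 'world', etc: 'etc'};
};

Foo.prototype.publicMethod = function(key) {
  return this._privateData[key];
};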

So, I guess I should show you what I’m talking about. Okay, hands up who can read the jQuery.Deferred documentation and then tell me what this code does?

var DelayedTask = function() {
  jQuery.Deferred.call(this);
};
DelayedTask.prototype = new jQuery.Deferred();

var task = new DelayedTask();
task.done(function(value) { console.log('Done!', value) });

task.resolve('the value');

var again = new DelayedTask();
again.done(function(value) { console.log('Again!', value) });

I asked this on Twitter earlier and got one correct response. This code prints the following:

Done! the value
Again! the value

But from the source code, what it seems to be trying to do is as follows:

  • Create a subclass of jQuery.Deferred
  • Instantiate a copy of the subclass, and add a callback to it
  • Resolve the instance, invoking the callback
  • Create a second instance of the subclass and add a callback to it
  • Do not resolve the second instance

But it does not do this: the second instance is somehow already resolved, and the callback we add is unexpectedly invoked. What’s going on?

Well, let’s examine what we expect to happen when we try to subclass in JavaScript:

var DelayedTask = function() {
  jQuery.Deferred.call(this);
};
DelayedTask.prototype = new jQuery.Deferred();

We assign the prototype of our subclass to an instance of the superclass, putting all its API methods in our subclass’s prototype. But this not only puts the superclass’s methods into our prototype, it also puts the object’s state in there, so in our constructor we call jQuery.Deferred.call(this). This applies the superclass’s constructor function to our new instance, setting up a fresh copy of any state the object might need.

So why doesn’t this work? Well it turns out that inside the jQuery.Deferred function you’ll find code that essentially does this:

jQuery.Deferred = function() {
  var doneCallbacks = [],
      failCallbacks = [];
      // more private state

  var done = function(callback) {
    doneCallbacks.push(callback);
  };

  var fail = function(callback) {
    failCallbacks.push(callback);
  };

  return {
    done: done,
    fail: fail,
    // more API methods
  };
};

So now we see what’s really going on: jQuery.Deferred, despite its capitalized name and the docs’ instruction to invoke it with new, is not a constructor. It’s a normal function that creates and returns an object, rather than acting on the object created by new. Its methods are not stored in a prototype, they are created anew every time you create a deferred object. As such they are bound to the private data inside the Deferred closure, and cannot be reused and applied to other objects, such as objects that try to inherit this API. It also means that calling jQuery.Deferred.call(this) in our constructor is pointless, since it just returns a new object and does not modify this at all.

This concept of binding is important. All JavaScript function invocations have an implicit argument: this. It refers to the receiving object if the function is called as a method (i.e. o in o.m()) or can be set explicitly using the first argument to call() or apply(). Being able to invoke a method against any object is part of what makes them useful; a single function can be used by all the objects in a class and give different output depending on each object’s state. A function that does not use this to refer to state, but instead refers to variables inside a closure, can only act on that data; if you want to reuse its behaviour you have to reimplement it or manually delegate to it.
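
A quick illustration of the difference, not taken from jQuery:

// A function that reads state from `this` can be applied to any object...
var getName = function() { return this._name; };

getName.call({_name: 'a'});   // -> 'a'
getName.call({_name: 'b'});   // -> 'b'

// ...while a closure-bound function only ever sees the data it closed over
var makeGetName = function(name) {
  return function() { return name; };
};
var getA = makeGetName('a');
getA.call({_name: 'b'});      // -> 'a'; the receiver is ignored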

Manual delegation means that in order to implement a ‘subtype’, we keep an instance of the supertype as an instance variable and reimplement its API, delegating calls to the stored object.

var DelayedTask = function() {
  this._deferred = new jQuery.Deferred();
};

DelayedTask.prototype.always = function() {
  this._deferred.always.apply(this._deferred, arguments);
  return this;
};

// 17 more method definitions

One correspondent on Twitter suggested I do this to dynamically inherit the jQuery API:

var DelayedTask = function() {
  jQuery.extend(this, new jQuery.Deferred());
};

This makes a new instance of the superclass and copies its API onto the subclass instance. It avoids the need for manual delegation, but it introduces four new problems:

  • It assumes the methods are correctly bound to new jQuery.Deferred(), so they will work correctly if invoked as methods on another object. This happens to be true in our case but it’s a risky assumption to make about all objects.
  • These methods will return a reference to the jQuery.Deferred instance, rather than the DelayedTask object, breaking an abstraction boundary.
  • for/in loops are slow; object creation now takes O(N) time where N is the size of its API.
  • Doing this will clobber any methods you define in your prototype, so if you want to override any methods you have to put them in the constructor (see the sketch after this list).
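
Here’s a sketch of that last problem:

// jQuery.extend() copies the Deferred's methods onto the instance, and
// instance properties shadow the prototype, so this override never runs:
var DelayedTask = function() {
  jQuery.extend(this, new jQuery.Deferred());
};

DelayedTask.prototype.done = function(callback) {
  // intended override, shadowed by the copy made in the constructor
};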

So we’ve shown that defining your methods in the constructor, rather than on the prototype, hurts reusability and increases maintenance effort, especially in such a dynamic, malleable language as JavaScript. It also makes code harder to understand: if I have to read the whole bloated body of a function to figure out that it’s not really a constructor, because it explicitly returns an object after defining a ton of methods, that’s a maintenance problem. Here are several things I expect to be true if you name a function with an uppercase first letter:

  • It must be invoked using new to function correctly
  • The object returned by new Foo() gives true for object instanceof Foo
  • It can be subclassed easily using the technique shown above
  • Its public API can be seen by inspecting its prototype

One of JavaScript’s key problems is that it’s not semantically rich enough to accurately convey the author’s intent. This can largely be solved using libraries and consistent style, but any time you need to read to the end of a function (including all possible early exits) to figure out if it’s really a constructor or not, that causes maintenance problems.

I’ve shown that this style causes reusability problems and shared state, and that it makes code harder to understand, but if I know JavaScript programmers you’re probably not convinced. Okay, let’s try science:

require('jsclass');
JS.require('JS.Benchmark');

var Foo = function() {};
Foo.prototype.method = function() {};

var Bar = function() {
  this.method = function() {};
};

JS.Benchmark.measure('prototype', 1000000, {
  test: function() { new Foo() }
});

JS.Benchmark.measure('constructor', 1000000, {
  test: function() { new Bar() }
});

We have two classes: one defines a method on its prototype, and the other defines it inside its constructor. Let’s see how they perform:

$ node test.js
 BENCHMARK [47ms +/- 8%] prototype
 BENCHMARK [265ms +/- 26%] constructor

(Another pet peeve: benchmarks with no statistical error margins.)

Defining the API inside the constructor – just one no-op method – is substantially slower than if the method is defined on the prototype. Every time you invoke Bar, it has to construct its entire API from scratch, instead of simply setting a pointer to the prototype to inherit methods. It also uses more memory: all those function objects aren’t free, and are more likely to leak memory since they’re inside a closure. As the API gets bigger, this problem only gets worse.

Defining methods this way vastly increases the runtime of constructors, increases memory use, and forces manual delegation, adding an extra function dispatch to every method call in the subclass. JavaScript function calls are expensive: two of the best ways to improve the performance of JavaScript code are to inline function calls, and make constructors cheaper (a third being to aggressively remove loops). The original version of Sylvester defined methods in its constructors, and the first big performance win involved moving to prototypes. One of the factors that made faye-websocket much faster was removing unnecessary function calls and loops.

As a sweet bonus, if your instances all share the same copy of their methods, you can do useful things like test coverage analysis, which is impossible if every instance gets a fresh set of methods.

Yes, I know that most of the time object creation and function calls do not dominate the runtime of an application, but when they do, you will wish you’d written your code in a style less likely to introduce performance problems. Writing code with prototypes is no more costly than using closures in terms of development effort, it avoids all the problems I’ve listed above, and I like sticking to habits that result in less maintenance work.