November 16, 2015

JavaScript Frameworks and Mobile Performance

In response to Paul Lewis’ The Cost of Frameworks:

Critics of frameworks tend to totally miss the value that they provide for people. Paul presents this list of why people use them:

  • Frameworks are fun to use.
  • Frameworks make it quicker to build an MVP.
  • Frameworks work around lumps / bugs in the platform.
  • Knowing [insert framework] means I get paid.

Most critics miss the key one: frameworks let you manage the complexity of your application as it and the team building it grow over time. All of the other stuff is just gravy.

Ergonomics vs. User Needs

Paul frames a zero-sum tradeoff between a user’s needs, like the app not crashing and being fast, and the developer’s need to have tools that make them productive. I’ve seen this argument before and it always baffles me. These are not orthogonal concerns. Developer ergonomics often help the user with their needs, because the more productive the developer is, the more bugs they can fix, features they can add, and slow code they can profile.

Many developers have worked on a project where the complexity of the codebase swelled to be so great that every new feature felt like a slog. Frameworks exist to help tame that complexity. Ergonomics sometimes have a cost, but oftentimes they can be added at zero cost. I am a big fan of Rust’s mantra of zero-cost abstractions—tools that help the developer and help the computer understand what the developer wants, so the code can be better optimized.

Are Frameworks Worth It?

After this we have the standard discussion of whether frameworks are “worth it” or if their downsides outweigh their benefits. I doubt my post or his post will convince you one way or the other.

In my experience, it boils down to: app developers overwhelmingly use frameworks because they have to support code for years. Devrel folks love small libraries or “vanilla JS” because they don’t have to support big codebases for years and can quickly bang out demos with the latest and greatest.

The fact that frameworks continue to be so popular despite the prevalence of articles like this (for the past decade at least) pooh-poohing them should tell you something. Developers aren’t getting tricked by framework authors; they find the benefits worth it.

Data

Paul presents some benchmarks showing the execution times of the TodoMVC app implemented in several different frameworks. Paul’s takeaway:

For me the results are pretty clear: there appears to be a pretty hefty tax to using Frameworks on mobile, especially compared to writing vanilla JavaScript.

Here’s the problem: TodoMVC is a minuscule app and is in no way representative of the apps most developers work on day to day. Yes, frameworks have overhead, and often significant overhead. But the framework’s abstractions exist to make your application code less verbose.

I have heard from many developers who have told me that they accepted the argument that vanilla JavaScript or microlibraries would let them write leaner, meaner, faster apps. After a year or two, however, what they found themselves with was a slower, bigger, less documented and unmaintained in-house framework with no community. As apps grow, you tend to need the abstractions that a framework offers. Either you write that code or the community does.

So here’s what I think a more interesting study would be: what is the execution time of real world apps of approximately similar complexity? It may be that apps built with frameworks are still slower. But my hypothesis is that, for apps of any complexity, the ones that start off “vanilla” will accrete their own Frankenframework that performs similarly to, if not worse than, an off-the-shelf framework like Ember or Angular.

I pitched this idea to Paul on Twitter and he said:

@tomdale tricky that, TodoMVC is the only like-for-like I know of that is functionally equivalent across several frameworks.

— Paul Lewis (@aerotwist) November 16, 2015

Agreed that this is true, but is measuring the wrong thing better than measuring nothing at all? Using TodoMVC as the metric doesn’t make any sense. People use frameworks to tame complexity, but TodoMVC doesn’t have any. It’s important here to state the hypothesis so you know what you’re testing. Testing TodoMVC because it’s the only thing available is called availability bias.

An interesting question is: If I build a large, real-world app without a framework, will it be faster than one written with a framework, and by how much? How much productivity will I lose to having to invent, author, and maintain my own abstractions for managing complexity?

But measuring the execution time of TodoMVC apps doesn’t answer that question.

It’s important to define the hypothesis being tested because it affects the conclusions you draw. I could benchmark a bicycle versus a diesel truck, and the results will be a lot different if I am testing “is this good for commuting to the office?” instead of “how good is this for hauling lumber?”

Paul’s benchmark answers the question, “Do frameworks make small, simple apps on mobile devices load more slowly?” The answer to that is “yes” but I don’t think it’s an interesting question.

Mobile Performance

To me the issue is not about whether to use a framework or not; the issue is how atrocious JavaScript performance is on mobile, especially Chrome on Android. To quote Jeff Atwood, the state of JavaScript on Android in 2015 is… poor.

Developers need to be able to rely on abstractions. Many developers would consider it a non-starter to build a web app in 2015 without the help of a framework. Maybe you disagree with that, and that’s fine. But before judging them, consider whether their needs may be more complex than yours—do they need to support it for years? Do they have large, distributed teams building it? It’s surprising how quickly easy things become challenging when you go from a single developer hacking on a repo to a large app worked on by a massive organization. Maybe you don’t care about that use case, but I do. I like building tools that can be used by my bank or my airline to make my experience using them even just a little bit better.

Large apps tend to have a minimum of several hundred kilobytes of code—often more—whether they’re built with a framework or not. Developers need abstractions to build complex software, and whether they come from a framework or are hand-written, they negatively impact the performance of apps.

Even if you were to eliminate frameworks entirely, many apps would still have hundreds of kilobytes of JavaScript. Rather than saying “use less code,” which most people would consider unrealistic, a better strategy is to reduce the cost of code.

Thankfully, everyone is working on this from multiple angles, which is what it will take to make mobile apps as fast as their native counterparts:

  • Hardware manufacturers are making CPUs faster.
  • Browser vendors are making their JavaScript engines faster (here’s hoping Chrome catches up to Safari soon).
  • Framework authors are optimizing their code for the performance characteristics of the slowest browser they support, i.e. Chrome on Android.
  • Frameworks are working on reducing startup costs via strategies like dead code elimination and server-side rendering.

The bottom line is that I don’t think “reduce the amount of code” is a reasonable lesson for the average developer to follow. Much better to let developers write as much code as they need for the cool apps they want to build, and then have tool vendors figure out how to make that fast.

Ironically, the existence of frameworks makes keeping up with the latest and greatest easier. Frameworks can track the best practices for eking out mobile performance, and you can take advantage of that just by upgrading the version of the framework you use, rather than rewriting the app.

Consider following me on twitter dot com.

February 5, 2015

You’re Missing the Point of Server-Side Rendered JavaScript Apps

There is a lot of confusion right now about the push to render JavaScript applications on the server-side. Part of this has to do with the awful terminology, but mostly it has to do with the fact that it’s a fundamental shift in how we architect and deploy these apps, and the people peddling this idea (myself included) have not done a great job motivating the benefits.

@tomdale @robconery i really don't understand the value, here. why add another layer to do what the server already does?

— Derick Bailey (@derickbailey) February 4, 2015

First, let me make something as clear as I can: the point of running your client-side JavaScript application on the server is not to replace your existing API server.

Unfortunately, when most people think of client-JavaScript-running-on-the-server, they think of technologies like Meteor, which do ask you to write both your API server (responsible for things like authentication and authorization) and your client app in the same codebase using the same framework. Personally, I think this approach is a complexity and productivity disaster, but that deserves its own blog post. For now, just know that when I talk about server-side rendering in the context of Ember, this is not what I’m talking about.

This post was actually inspired by Old Man Conery, who yesterday on Twitter insisted that us JavaScript hipsters should get off his lawn and just use a traditional, server-rendered application if we want SEO.

@mixonic @tomdale I can't believe this conversation. Stick Ember in where it's not needed then figure out how to work SEO because Ember.

— Rob Conery (@robconery) February 4, 2015

Rob and I have a good relationship and I like to tease him, but in this case I think he represents an extremely common worldview that is helpful to study, because it can show us where the messaging around client-side JavaScript apps breaks down. “These developers just jammed JavaScript in here for no darn reason!” is something I hear very often, so it’s important to remind ourselves why people pick JavaScript.

Why Do People Love Client-Side JavaScript?

The Beauty of the Language

LOL j/k

Coherence

All modern websites, even server-rendered ones, need JavaScript. There is just a lot of dynamic stuff you need to do that can only be done in JavaScript. If you build your UI entirely in JavaScript with a sane architecture, it’s easy to reason about how the different dynamic behavior interacts.

The traditional approach of sprinkling JavaScript on top of server-rendered HTML was fine for a long time, but the more AJAX and other ad hoc dynamic behavior you have, the more it turns into a giant ball of mud.

Worst of all, you now have state and behavior for the same task—UI rendering—implemented in two languages and running on two computers separated by a high-latency network. Congratulations, you’ve just signed up to solve one of the hardest problems in distributed computing.

Performance

Client-side JavaScript applications are damn fast. In a traditional server-rendered application, every user interaction has to confer with a server far away in a data center about what to do. This can be fast under optimal scenarios, if you have a fast server in a fast data center connected to a user with a fast connection and a fast device. That’s a lot of “ifs” to be hanging the performance of your UI on.

Imagine if you were using an app on your mobile phone and after every tap or swipe, nothing happened until the app sent a request over the network and received a response. You’d be pissed! Yet this is exactly what people have historically been willing to put up with on the web.

Client-side rendered web apps are different. These apps load the entire payload upfront, so once the app has booted, it has all of the templates, business logic, etc. necessary to respond instantly to a user’s interactions. The only time it needs to confer with a server is to fetch data it hasn’t seen before, and frameworks like Ember make it trivially easy to show a loading spinner while that data loads, keeping the UI responsive so the user knows that their tap or click was received. If the data doesn’t need to be loaded, clicks feel insanely fast.
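
To make the loading-spinner piece concrete, here’s a minimal sketch of the Ember pattern I’m describing (the route and model names are hypothetical, and the data fetch assumes Ember Data): the route’s model() hook returns a promise, and while that promise is pending Ember automatically renders a loading substate template, so the user gets immediate feedback on their click.

    // app/routes/post.js: a hypothetical route, not code from a real app.
    // While the promise returned from model() is pending, Ember renders the
    // loading substate template (e.g. app/templates/loading.hbs), typically
    // a spinner, so the click is acknowledged instantly.
    import Ember from 'ember';

    export default Ember.Route.extend({
      model(params) {
        // Only confers with the API server for data the app hasn't seen yet;
        // records already in the store resolve immediately.
        return this.store.findRecord('post', params.post_id);
      }
    });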

There’s a misconception that client-rendered JavaScript apps are only useful for more “application-y” sites and not so-called “content” sites; Vine and Bustle are two frequently cited examples. But client-side routing, where you have all of the templates and logic available at the moment of the user’s click, provides performance benefits to every site, and users are becoming conditioned to the idea that interactions on the web should be near-instantaneous. If you don’t start building for client-side rendering today, my bet is your site is going to feel like a real clunker in just a few years’ time. This is going to be the norm for all sites, and in the not-too-distant future, server-rendered stuff will feel very “legacy.”

Why Do People Hate Client-Side JavaScript?

Everyone is different, and if you asked 100 different haters I’m sure they would have 100 different flavors of Hatorade™. But there are some real tradeoffs to using client-side JavaScript.

Performance

Yes, one of the biggest advantages of client-side rendering is also its biggest weakness. The performance of JavaScript apps is amazing–once all of that JavaScript has finished loading. Until then, you usually just see a loading spinner. That sucks on the web, where people are used to seeing something interesting within a second or two of clicking on a link. So while subsequent clicks will be lightning fast, unless you do a lot of manual work, the initial experience is going to suffer.

Unpredictable Performance

At TXJS many years ago, I had a long chat with Dan Webb at Twitter about client-side vs. server-side rendering. He had just finished working on migrating Twitter away from their 100% client-side app back to a more traditional server-rendered approach, and I was mad about their blog post. In my experience, client-side JavaScript apps were almost always faster, and the new Twitter site felt like a regression. I just didn’t understand where they were coming from.

Thankfully, Dan helped me see the bubble I was living in. For me, who always had a relatively modern device, this stuff was super fast. But Dan explained that they had users all around the world clicking on links to Twitter, some of them in internet cafes in remote areas running PCs from 1998. They were seeing times of over 10 seconds just to download, evaluate, and run the JavaScript before the user saw anything. (Forgive me for not having exact numbers, this was years ago and there was definitely beer involved.)

Say what you will about server-rendered apps, the performance of your server is much more predictable, and more easily upgraded, than the many, many different device configurations of your users. Server-rendering is important to ensure that users who are not on the latest-and-greatest can see your content immediately when they click a link.

Crawlers

Client-side rendered apps require, by definition, a JavaScript runtime to work correctly. There is a common misconception that screen readers and other accessibility tools don’t work with JavaScript, but this is just flat-out not true, so don’t let people shame you for using JavaScript for this reason.

That being said, there is a vast world of tools out there that consume and scrape content delivered over HTTP but don’t implement a JavaScript runtime. I don’t think it’s worth going out of our way to cater to this case, at least not at the expense of developer productivity. But if we can solve the problem and make these JavaScript apps available to JavaScript-less clients, why the hell not? Frameworks like Ember should be solving hard problems like this so developers can get back to work.

Ember FastBoot

The project I’ve been working on, Ember FastBoot, is designed to bend the curve of these tradeoffs. Rather than replace your API server, it replaces whatever you’re using now to serve static HTML and JavaScript (usually something like nginx or a CDN like CloudFront).

In a traditional client-side rendered application, your browser first requests a bunch of static assets from an asset server. Those assets make up the entirety of your app. While all of those assets load, your user usually sees a blank page.

Once they load, the application boots and renders its UI. Unfortunately, it doesn’t yet have the model data from your API server, so it renders a loading spinner while it goes and fetches it.

Finally, once the data comes in, your application’s UI is ready to use. And, of course, future interactions will be very, very snappy.


But what if we could eliminate the blank page step entirely, without sacrificing any of the benefits of client-side JavaScript?

We do this by running an instance of your Ember app in Node on the server. Again, this doesn’t replace your API server—it replaces the CDN or whatever else you were using to serve static assets.

When a request comes in, we already have the app warmed up in memory. We tell it the URL you’re trying to reach, and let it render on the server. If it needs to fetch data from your API server to render, so long as both servers are in the same data center, latency should be very low (and is under your control, unlike the public internet).

The very first thing the user sees is HTML and CSS, avoiding the blank page and loading spinner. Even users with slow JavaScript engines quickly get what they’re after, because nothing they see depends on JavaScript having loaded. Once the HTML and CSS are delivered, the static JavaScript app payload is delivered, as before. Once the JavaScript has “caught up” to the HTML and CSS, your application takes over further navigation, giving you the snappy interaction of traditional client-side apps with the snappy first load of server-rendered pages.
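
As a rough sketch of the shape of this (not FastBoot’s actual API; the renderer module and its functions are hypothetical stand-ins), the Node process keeps the app booted and turns each incoming URL into an HTML response:

    // server.js: a hypothetical sketch of the idea, not FastBoot's real API.
    // `bootApp` and `renderApp` stand in for "boot the Ember app once in Node"
    // and "render the given URL to an HTML string", talking to the API server
    // (ideally in the same data center) as needed.
    const http = require('http');
    const { bootApp, renderApp } = require('./hypothetical-renderer');

    const app = bootApp(); // warm the app up once, at process start

    http.createServer((req, res) => {
      renderApp(app, req.url)
        .then((html) => {
          // The user gets HTML and CSS immediately; the static JavaScript
          // payload loads afterward and takes over subsequent navigation.
          res.writeHead(200, { 'Content-Type': 'text/html' });
          res.end(html);
        })
        .catch(() => {
          res.writeHead(500);
          res.end('Error rendering ' + req.url);
        });
    }).listen(3000);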

And, of course, for those who don’t have any JavaScript at all (shoutout to all my Lynx users), basic functionality will just work, since Ember generates standard URLs and <a> tags and avoids the onclick= jiggery pokery. More advanced things like D3 graphs won’t work, obviously, but hey, better than nothing.


We are also investigating further optimizations, like embedding the model data right into the initial JavaScript payload, so you avoid another roundtrip to the API server for frequently accessed model data.
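
The rough idea, sketched by hand below (this isn’t a shipping Ember feature, and the element id is made up), is to serialize whatever the server-side render fetched into the HTML itself, so the client app can rehydrate from it instead of making another round trip:

    // Server side: embed the data used for the initial render into the page.
    // `modelData` is whatever the server-rendered app fetched from the API.
    function embedBootData(html, modelData) {
      // Escape "<" so a stray "</script>" in the data can't break out of the tag.
      const payload = JSON.stringify(modelData).replace(/</g, '\\u003c');
      return html.replace(
        '</body>',
        '<script id="boot-data" type="application/json">' + payload + '</script></body>'
      );
    }

    // Client side: rehydrate from the embedded payload instead of refetching.
    function readBootData() {
      const el = document.getElementById('boot-data');
      return el ? JSON.parse(el.textContent) : null;
    }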

These are early days, and making this work relies heavily on having a single, conventional architecture for your web applications, which Ember offers.

Ultimately, this isn’t about you replacing your API server with Ember. I don’t think I would ever want that.

Instead, client-side rendering and server-side rendering have always had performance tradeoffs. Ember’s FastBoot is about trying to bend the curve of those tradeoffs, giving you the best of both worlds.

Big thanks to Bustle for sponsoring our initial work on this project. Companies that sponsor indie open source work are the best.

December 8, 2014

Portland City Hall vs. Ride-sharing: The Lady Doth Protest Too Much

Last Friday, Uber started operating in Portland despite the city government’s stance that ride-sharing services remain illegal.

One of the loudest voices of fear, uncertainty and doubt coming out of City Hall about Uber has been Steve Novick. Mr. Novick is a City Commissioner, and, unfortunately, in charge of the transportation board here.

Let me start by acknowledging that Uber does not make a compelling victim. Their take-no-prisoners attitude serves them well when dealing with local bureaucrats in the pockets of special interests like Mr. Novick, but it can also manifest in much more sinister ways.

But this isn’t about Uber. It’s just as much about other services like Lyft, Sidecar, etc. More importantly, it’s about the right of Portland residents to have a choice in who provides their transportation. Uber happens to be fighting the fight, so it’s inevitable that the unsavoriness of that company becomes part of the conversation, but from my perspective this battle is about unshackling Portland from its antiquated and corrupt taxi system.

Mr. Novick’s feigned outrage that Uber has started operating without government say-so is made all the more hilarious by his assertion that, had they just been a little more patient, the regulations would have been updated to accommodate them.

“We have told Uber and Lyft that they are welcome to offer ideas for regulatory changes. Uber has chosen instead to break the law,” Mr. Novick is quoted as saying.

Mr. Novick, and Mayor Hales, the onus is on you to fix the regulations, not the people you are regulating. And you’ve had plenty of time. I’ve been using Uber in San Francisco since 2010; these services did not suddenly take you by surprise. I travel frequently and Portland is the only city where I can’t rely on Lyft or Uber to get a reliable ride.

“It’s difficult to understand why Portland is now the largest city in the country where ride sharing companies are not able to operate,” said over 40 local restaurant, bar, and hotel owners in a letter to City Hall, “At Feast Portland this year, the number one complaint among the 12,000 attendees from 50 different states and provinces was, ‘lack of reliable taxi service.’”

Cities that value the needs of their citizens over preserving taxi monopolies have already found ways to adjust regulations to protect riders while still offering them the convenience, availability, and reliability of phone-hailed car services.

In New York City, for example, drivers must have a special driver’s license, licensed vehicle, and insurance. If your concern is actually about the welfare of passengers and not just protecting your taxi company cronies, it turns out there are solutions that don’t involve wholesale bans.

While Mr. Novick claims this is about safety, I have never felt unsafe in an Uber or a Lyft. I have had some very sketchy taxi rides, including one where the cabbie nearly got out and engaged in a fist fight with another driver.

The taxi regulation in Portland is a disaster, with demand far exceeding supply on weekends. The number of cabs is capped at 460, or approximately one taxi for every 1,269 people. (Seattle’s ratio is about one taxi per 890 people; San Francisco, one per 533.)

There is a reason every other major city has welcomed these services. Being able to reliably call for a car, even on busy nights, reduces drunk driving, means fewer cars on the road, and improves access to underserved communities.

Mr. Novick says he’s going to throw the book at drivers, impounding the cars of hard-working Americans who are looking to make a little extra cash doing what Americans do best: responding to market demand.

Let him. Once Portland residents get a taste for how transformative services like Uber and Lyft can be, they’ll find it hard to stomach the image of authoritarian thugs putting their boot down on middle-class Americans trying to make a buck, all because unions happen to be Mr. Novick’s biggest donors.

Oh, did I mention that Mr. Novick is in the pocket of the Oregon AFL-CIO? Forget all the blustering about doing what’s best for Portland residents; it’s just theater to please his union masters and protect his political base.

While I wish City Hall had had the foresight to deal with this before it became a legal battle, I’m grateful that, in the story of Portland vs. ride-sharing, we have someone coming out looking like a bigger villain than Uber.

November 25, 2014

Mozilla’s Add-on Policy is Hostile to Open Source

Mozilla prides itself on being a champion of the open web, and largely it is. But one policy increasingly grates: their badly managed add-on review program.

For background, Ember.js apps rely heavily on conventional structure. We wrote the Ember Inspector as an extension of the Chrome developer tools. The inspector makes it easy to understand the structure and behavior of Ember apps, and in addition to being a great tool for debugging, is used to help people learning the framework.


Because we tried to keep the architecture browser-agnostic, Luca Greco was able to add support for the Firefox API.

We were excited to support this work because we believe an open web relies on multiple competitive browsers.

However, actually distributing this add-on to users highlights a stark difference between Mozilla and Google.

Though there was an initial review process for adding our extension to the Chrome Web Store, updates after that are approved immediately and roll out automatically to all of our users. This is wonderful, as it lets us get updates out to Ember developers despite our aggressive six-week release cycle. Basically, once we had established credibility, Google gave us the benefit of the doubt when it came to incremental updates.

Mozilla, on the other hand, mandates that each and every update, no matter how minor, be hand vetted. Unfortunately, no one at Mozilla is paid to be a reviewer; they depend on an overstretched team of volunteers.

Because of that, our last review took 35 days. After over a month, our update was sent back to us: rejected.

Why?

Your version was rejected because of the following problems: 1) Extending the prototype of native objects like Object, Array and String is not allowed because it can cause compatibility problems with other add-ons or the browser itself. (data/toolbar-button-panel.html line 40 et al.) 2) You are using an outdated version of the Add-ons SDK, which we no longer accept. Please use the latest stable version of the SDK: https://developer.mozilla.org/Add-ons/SDK 3) Use of console.log() in submitted version. Please fix them and submit again. Thank you.

This is ridiculous for a number of reasons. For one, not being able to use console.log is a silly reason for rejection; as Sam Breed points out, console.log even appears in Mozilla’s own add-on documentation. Extending prototypes should not matter at all, since the add-on runs in its own realm, and we’ve been doing it since day one. Getting rejected now feels extremely Apple-esque (or Kafka-esque, but I repeat myself).

But the bottom line is that waiting a month at a time for approval is just not tenable, especially since Google has proven that the auto-approval model works just fine. Worst of all, many Ember.js developers who prefer Firefox have accused us of treating it like a second-class citizen, since they assume the month+ delays are our doing. Losing goodwill over something you have no control over is incredibly frustrating.

At this point, I cannot in good conscience recommend that Ember developers use the Ember Inspector add-on from the Mozilla add-ons page. Either compile it from source yourself and set a reminder every few weeks to update it, or switch to Chrome until this issue is resolved.

There are many good people at Mozilla, many of whom I consider friends, and I know this policy is as upsetting to them as it is to me. Hopefully, you can use this post to start a conversation with the decision-makers that are keeping this antiquated, open-source-hostile policy in place.

Update

Some people have accused me of writing a linkbaity headline, and that this issue has nothing to do with open source. Perhaps, and apologies if the title offended. The reason it feels particularly onerous to me is that, as an open source project, we’re already stretched thin—it’s a colossal waste of resources to deal with the issues caused by this policy. If we were a for-profit company, we could just make it someone’s job. But no one in an open source community wants to deal with stuff like this on their nights and weekends.

September 4, 2013

Maybe Progressive Enhancement is the Wrong Term

Many of the responses to my last article, Progressive Enhancement: Zed’s Dead, Baby, were that I “didn’t get” progressive enhancement.

“‘Progressive enhancement’ isn’t ‘working with JavaScript disabled,’” proponents argued.

First, many, many people conflate the two. Sigh, JavaScript is an exercise in lobotomizing browsers and then expecting websites to still work with them. This is a little bit like arguing that we should still be laying out our websites using tables and inline styles in case users disable CSS. No. JavaScript is part of the web platform; you don’t get to take it away and expect the web to work.

So, maybe it is time to choose a new term that doesn’t carry with it the baggage of the past.

Second, many people argued that web applications should always be initially rendered on the server. Only once the browser has finished rendering that initial HTML payload should JavaScript then take over.

These folks agree that 100% JavaScript-driven apps (like Bustle) are faster than traditional, server-rendered apps after the initial load, but argue that any increase in time-to-initial-content is an unacceptable tradeoff.

While I think that for many apps, it is an acceptable tradeoff, it clearly isn’t for many others, like Twitter.

Unfortunately, telling people to “just render the initial HTML on the server” is a bit like saying “it’s easy to end world hunger; just give everyone some food.” You’ve described the solution while leaving out many of the important details in the middle. Doing this in a sane way that scales up to a large application requires architecting your app in such a way that you can separate out certain parts, like data fetching and HTML rendering, and do those just on the server.

Once you’ve done that, you need to somehow transfer the state that the server used to do its initial render over to the client, so it can pick up where the server left off.

You can do this by hand (see Airbnb’s Rendr), but it requires so much manual labor that the approach quickly falls apart for any application of real complexity.

That being said, I agree that this is where JavaScript applications are heading, and I think Ember is strongly positioned to be one of the first to deliver a comprehensive solution to both web spidering and initial server renders that improve the “time to first content.” We’ve been thinking about this for a long time and have designed Ember’s architecture around it. I laid out our plan last June in an interview on the Herding Code podcast. (I start talking about this around the 19:30 mark.)

I think that once we deliver server-side renders that can hand off seamlessly to JavaScript on the client, we will be able to combine the speed of initial load times of traditional web applications with the snappy performance and superior maintainability of 100% JavaScript apps like Bustle and its peers.

Of course, there are many UI issues that we still need to figure out as a community. For one thing, server-rendered apps can leave you with a non-functional Potemkin village of UI elements until the JavaScript finishes loading, leading to frustrating phantom clicks that go nowhere.

Just don’t dismiss 100% JavaScript apps because of where they are today. The future is coming fast.

September 2, 2013

Progressive Enhancement: Zed’s Dead, Baby


A few days ago, Daniel Mall launched a snarky tumblr called Sigh, JavaScript. I was reminded of law enforcement agencies that release a “wall of shame” of men who solicit prostitutes. The goal here is to publicly embarrass those who fall outside your social norms; in this case, it’s websites that don’t work with JavaScript disabled.

I’ve got bad news, though: Progressive enhancement is dead, baby. It’s dead. At least for the majority of web developers.

The religious devotion to it was useful in a time when web development was new and browsers were still more like bumbling toddlers than the confident, lively young adults they’ve grown to become.

Something happened a few years ago in web browser land. Did you notice it? I didn’t. At least not right away.

At some point recently, the browser transformed from being an awesome interactive document viewer into being the world’s most advanced, widely-distributed application runtime.

Developer communities have a habit of crafting mantras that they can repeat over and over again. These distill the many nuances of decision-making into a single rule that the majority of people can follow and do approximately the right thing. This is good.

However, the downside of a mantra is that the original context around why it was created gets lost. Mantras tend to take on a sort of religious feel. I’ve seen in-depth technical discussions get derailed because people invoke the mantra as an axiom rather than as something derived from first principles. (“Just use bcrypt” is another one.)

Mantras are useful for aligning a developer community around a set of norms, but they don’t take into account that the underlying assumptions behind them change. They tend to live on a little bit beyond their welcome.

I remember when CSS was getting big but not yet widely available, the clamor was for not using it since it "broke" the web. Now, javascript.

— Trek Glowacki (@trek) August 28, 2013

Many proponents of progressive enhancement like to frame the issue in a way that feels, to me, a little condescending. Here’s Daniel Mall again in his follow-up post:

Lots of people don’t know how to build sites that work for as many people as possible. That’s more than ok, but don’t pretend that it was your plan all along.

Actually, Daniel, I do know how to build sites that work for as many people as possible. However, I’m betting my business on the fact that, by building JavaScript apps from the ground up, I can build a better product than my competitors who chain themselves to progressive enhancement.

Take Skylight, the Rails performance monitoring tool I build as my day job. From the beginning, we architected it as though we were building a native desktop application that just happens to run in the web browser. (The one difference is that JavaScript web apps need to have good URLs to not feel broken, which is why we used Ember.js.)

To fetch data, it opens a socket to a Java backend that streams in data transmitted as protobufs. It then analyzes and recombines that data in response to the user interacting with the UI, which is powered by Ember.js and D3.
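
To give a feel for the general pattern (this is not Skylight’s actual code; the schema, message type, and field names are invented for the example), consuming a stream of binary protobuf messages in the browser looks roughly like this, here using the protobufjs library:

    // An illustrative sketch only; metrics.proto, the TraceSample message,
    // and its fields are hypothetical.
    import protobuf from 'protobufjs';

    protobuf.load('/metrics.proto').then((root) => {
      const TraceSample = root.lookupType('metrics.TraceSample');

      const socket = new WebSocket('wss://example.com/stream');
      socket.binaryType = 'arraybuffer'; // the frames are binary, not text

      socket.onmessage = (event) => {
        // Decode the binary frame into a plain object the UI (Ember + D3) can render.
        const sample = TraceSample.decode(new Uint8Array(event.data));
        console.log('endpoint:', sample.endpoint, 'duration (ms):', sample.duration);
      };
    });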

A short video probably illustrates what I’m talking about better than words can:

What we’re doing wasn’t even possible in the browser a few years ago. It’s time to revisit our received wisdom.

We live in a time where you can assume JavaScript is part of the web platform. Worrying about browsers without JavaScript is like worrying about whether you’re backwards compatible with HTML 3.2 or CSS2. At some point, you have to accept that some things are just part of the platform.
