Web App Mistakes: Condemned to repeat
www.isolani.co.uk/blog/standards/WebAppMistakesWeAreCondemnedToRepeat (2012-11-13)

The first two talks at this year's Full Frontal conference were extreme position pieces arguing about the relevance of HTML in building web apps. They were sparked by the growing number of web app frameworks that start off with an empty HTML document (or a series of nested empty div elements), such as the first version of SproutCore, used prominently by Apple's failed MobileMe website. As my good friend Steve Webster described SproutCore's development approach at that time:

SproutCore doesn't just ignore progressive enhancement - it hacks it into tiny little pieces, urinates all over them and then mails them back to you one by one

(Note that these frameworks have a track record of ignoring established web development best practices and then attempting to bolt them back in later. Meteor is heading down the same cobbled path: it added Google crawlability as a feature only in later releases.)

Enforcing imperative development

The list of web frameworks that perpetuate the empty-document technique grows; Sencha Touch and Meteor are two garnering attention today. John Allsopp notes the approach amounts to replacing a declarative set of technologies with an imperative model, so the idioms and patterns are quite different.
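
To make the contrast concrete, here is a minimal sketch of the imperative style these frameworks encourage (illustrative only, not any particular framework's API): the document ships empty, and every element is created in script.

// The HTML document ships empty; the entire UI is built imperatively.
document.addEventListener('DOMContentLoaded', function () {
    var app = document.createElement('div');
    app.id = 'app';

    var heading = document.createElement('h1');
    heading.textContent = 'Latest articles';
    app.appendChild(heading);

    document.body.appendChild(app);
    // Until this script runs, a crawler or a browser without
    // JavaScript sees nothing at all.
});

A declarative page would ship that heading as markup and enhance it with script; here, nothing exists until the script executes.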

The core of the argument is that apps built using web technologies are being left behind by app-specific development toolkits, and that web technologies need to improve to beat those toolkits. Trying to express this succinctly runs headlong into absurdity, though perhaps not quite reductio ad absurdum:

Apps built with app-specific technology are better apps than those built with non-app-specific technology.

Obvious? But the more pertinent question is: so what? The Web's declarative model is what keeps the barrier to building websites as low as possible. (And web apps are just websites with a lot more JavaScript.)

The shiny toy syndrome

And yet, James Pearce's talk at FullFrontal 2012 kept plugging in this direction, resulting in this particular example of HTML minimisation:

<script src="/img/spacer.gif"> 

That is the entire HTML document for a web application. It's hard to tell whether this is a tongue-in-cheek or a foot-in-mouth argument. Claiming it is just an extreme example of a web app and not meant for real-world use is disingenuous; people seem compelled to use these odd curiosities in real-world products.

Take the "lets reimplement the browser in Canvas from scratch" noodling of Bespin. As the HTML5 Editor (at that time) Ian Hickson noted:

Bespin is a great demo, but it's a horrible misuse of <canvas>.

Or Henri Sivonen's less succinct, yet still brutally accurate assessment:

I think Bespin can be taken as an example of using <canvas> being "doing it wrong" and the solution being not using <canvas>.

Despite this regular feedback, it still took the Bespin/Skywriter developers a few years of fighting performance and usability issues before they moved away from canvas. In that time, not least because of the initial attention Bespin received in tech circles, the "canvas as the browser" approach started to gain adoption as an acceptable way to build web applications. Yet Bespin was no more than a proof of concept, never intended for a production setting (according to its developers).

Doing it right the hard way

Of course, modern web developers using newfangled content-less HTML aren't making the same mistakes as Bespin and SproutCore. Their conceptions reflect web development best practice: separation of concerns (for stability), progressive enhancement (for wide device compatibility), graceful degradation (for robustness), accessibility (as an extreme usability utility). Right?

In the far-too-recent past, web app developers jumped on HashBang URLs as the technique that exemplified the applification of the Web. Despite it being contrary to web development best practice, single-page web app developers persisted with the technique in the name of better performance and more robust code.

Yet Twitter backtracked to progressive enhancement (in the name of better performance, reducing the time to first tweet), and Gawker Media quietly reverted their hashbang-dependent implementation in the name of customer experience and robustness.

Both companies recognised that the problems with hashbang URLs weren't in the URLs themselves, but in the complete dependence on a JavaScript bootstrap for the content experience. Instead of one document loading before content appears, it now requires one document, plus a chunk of JavaScript that simulates a browser, to load in, initialise itself, call its required content and template assets, and then render them in the content window.
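
A hedged sketch of that bootstrap sequence (the names are illustrative, not Twitter's or Gawker's actual code); nothing is visible until every step completes:

window.addEventListener('load', function () {
    // 1. Recover the real route from the fragment:
    //    http://example.com/#!/status/123 -> /status/123
    var route = window.location.hash.replace(/^#!/, '');

    // 2. Fetch the content the URL actually refers to.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api' + route);
    xhr.onload = function () {
        // 3. Render into the (until now empty) page; renderTemplate is
        //    a hypothetical templating helper.
        document.getElementById('content').innerHTML =
            renderTemplate(JSON.parse(xhr.responseText));
    };
    xhr.send();
});

Server-rendered HTML collapses all three steps into the initial response; the script above adds a network round trip and a JavaScript parse-and-execute cycle before anything appears.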

Death by first impression

Where hashbang-URL web apps fell down is that the overhead before a user experiences the first chunk of content (Twitter's "time to first tweet" metric) is too high for a responsive web experience. Yes, the single-page website is fractionally more responsive to customer interactions after the core application infrastructure has finally loaded up and run; but the time to first visible content turns out to be the more important metric.

It's the first impression, folks. Whatever the empirical evidence on the importance of first impressions, customers tend towards experiences that get them to their content and utility quicker. We've known this since the beginning of the web: a two-page form converts better than a three-page form, despite requesting the same information.

It's probably an urban legend, but it makes a good story nevertheless: Twitter brought in the high priest of High Performance JavaScript (Steve Souders) to advise them on how to make their web application faster, and he replied with: "Have you thought about putting the Tweets in the HTML instead of loading them through JavaScript?"

It turns out progressive enhancement isn't dead. It's actually still the primary technique for getting content to the customer fast. It's just continually ignored, and web apps eventually get there when they run out of non-best-practice techniques to throw at the bootstrap-time problem.

(It does make me chuckle when web developers claim progressive enhancement is hard as a reason for skipping it, and then, a little while later, after the diminishing returns of that short-sightedness have worn off, they undertake the far more difficult job of bolting progressive enhancement back in because the alternatives are even harder. All high-quality and efficient web development paths go through the forest of progressive enhancement eventually.)

Building applications with the web stack

Building applications using web technologies isn't new. We've been doing it for at least a decade. Sometimes you don't really notice.

Firefox and Thunderbird are two quintessential examples of applications built with web technologies. The entire user interface of each product is a collaboration of markup, CSS and JavaScript displayed by the Gecko rendering engine. Firefox is thus the inception point: a web application that runs other web applications.

We had a steady stream of applications built on Mozilla's XUL framework, well beyond mere proofs-of-concept (and Twitter clients): Songbird, the Komodo Edit IDE, the Cyclone3 CMS, the Flickr Uploader, ChatZilla IRC chat, the Instantbird IM client, the BlueGriffon ePub editor, StreamBase.

XUL is an XML vocabulary, but it works with HTML so cleanly that at times it can be mistaken for extra tags bolted on top of HTML. Even the extension mechanism, XBL, allows you to create extra tags that can be used as first-class elements in your structural (or declarative) documents.

The Mozilla approach to connecting the web surface to the computer's interfaces is to create an entire series of APIs that expose the inner workings of the computer and make them available through JavaScript. The developer can choose to surface that right up to an actual custom element.
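
For a flavour of those APIs, here is a minimal sketch in the XPCOM style available to chrome-privileged JavaScript in XUL applications and extensions (the contract ID is one of many; exact interfaces vary by Gecko version):

// Instantiate a Gecko file component and drive it from script.
var file = Components.classes["@mozilla.org/file/local;1"]
                     .createInstance(Components.interfaces.nsILocalFile);
file.initWithPath("/tmp/example.txt");

// The same object graph is what a XUL widget would bind to.
if (file.exists()) {
    console.log(file.fileSize + " bytes");
}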

We've sunk countless hours into working within the XUL framework; some of the best developer tools came through that route. For example, Joe Hewitt's Firebug, a tool that effectively brought web development out of the Stone Age, and its little cousin, Chris Pederick's Web Developer toolbar.

The second still-growing framework for web-technology-based apps is Adobe AIR. I still regularly see new software built by and sold to small and medium-sized businesses, as a simple way of encapsulating expert knowledge into a handy tool. I think I've bought at least three Adobe AIR based applications (not including Twitter clients) this year alone. For example, Keyword Blaze helps small businesses explore and find online niches and assists in keyword research. It's lowering the bar for entrepreneurs to handle and manage their own SEO strategy.

While webapp ninjas complain about their tools and environment, entrepreneurs create these applications with the web stack.

As an aside: there was a movement running in parallel to Mozilla's effort that focused on the idea of Rich Internet Applications (largely driven by the Open XUL Alliance), where developers collaborated on building declarative interface bindings to their pet languages. For a brief while that's where web application development sparked. That produced toolkits like Luxor-XUL for client-side Python bindings as one notable example.

Web application stack drawbacks

The typical counter-argument to these approaches is the visual look and feel. Clearly none of the platforms above look exactly like custom iPhone apps on the iPhone. They don't feel like native iPhone apps.

Developers who find this particular feature galling then go to extraordinary lengths to duplicate the feel of the iPhone interface inside their web apps. This has the side-effect of the app feeling ridiculous on an Android phone of the same technical specifications, because you get an Android experience on the outside and a sub-par iPhone-emulated experience on the inside of the app. It's common to end up with two back buttons in this scenario, each doing something different.

The crux here isn't that web technologies don't make good enough application platforms; it's that they don't match the native apps' look and feel perfectly. Nor can they, because every native platform is different, whether in its design, in the idioms and metaphors at its heart, or just in subtle differences of definitions and interfaces.

The Web is platform agnostic; its success isn't chained to the continued success of a specific platform. And so web technologies do not conform to individual platform expectations. Much like their cousins, the cross-platform application toolkits (Java, Qt, Lazarus), they don't exactly match the operating system's native widgets, because there isn't a clean mapping across the range of platforms they support.

This platform independent characteristic is a feature of the Web, not a shortcoming. It's not meant to emulate Operating System graphical widgets. That's the browser's job (or the operating system's job), not the web stack's.

And you know what, it doesn't matter. Applications written for a specific platform also don't always look the same as other apps on the same platform. WinAmp, QuickTime, Twitterific, Chrome and StickyNotes are just five applications I'm running right now that don't look like the default visual standard of the Operating Systems they are running on.

If an application's success is primarily based on looking and feeling like a native application for that platform, you really have to be an idiot to build that application using something other than a toolkit or framework designed with that goal as the primary endpoint.

GMail continues being a pioneering success for web applications. It remains steadfastly popular for its features and information management, not because it looks like a native iPhone app on the iPhone.

The Web platform

The Web's independence from the hardware and software platform people use is a feature. It's better than cross-platform frameworks which are constantly criticised for not producing exact native-feeling apps on the multitude of platforms they run on. The Web is above that pettiness.

The Web isn't an application platform. It is really a data platform (or, more precisely, a content platform). A data platform that has a very light visual and behaviour layer available: Cascading Style Sheets and JavaScript.

Take a typical brand-new iPhone. Make sure it doesn't touch the web in any way: place it in an environment where it cannot make an HTTP request or receive an HTTP response.

How useful are your native applications? Oh that's right, the only applications you can get to are the ones installed on the phone by default. The other applications are just data sitting on the Web waiting for you to request them.

And the default applications you do use because of the value they provide will probably offer you less value without that lifeline to the Web. The value of the application isn't in its being a native application, but in the data it uses, and that data is sitting on the Web.

Smartphones are useful because they participate on the Web of content, as equal citizens to desktop browsers and tablets. Applications are just a shell that offers interaction with the data. Without the data, the interactions are worthless.

The Web is ubiquitous

Native apps need the Web, the Web doesn't need native apps.

If your primary requirement is a seamless native app experience, then you need to build a native app for each platform you want to support.

If you are content to abstract away the nativeness of a platform to a wrapper (like a browser), then a web application is perfectly adequate.

But there's also a third alternative: a hybrid of both approaches, since the Web isn't (just) an application platform; it's the primary globally available data platform for your application.

We who cannot remember the past...

Ubuntu 12.04: Solving slow boot times on Toshiba NB300 Netbook
www.isolani.co.uk/blog/stuff/SlowBootTimesUbuntuToshibaNetbook (2012-11-01)

I picked up a Toshiba NB300-108 for just under £70 in an eBay auction a few weeks ago, and decided to clean-install Ubuntu 12.04. True to form, Ubuntu installed flawlessly. But there was one annoying problem: it took about 5 to 7 minutes to boot.

It’s taken a couple of days of hacking around, trashing Ubuntu and re-installing it, but I’ve gotten to the bottom of the issue. Most of the five-minute boot sequence seems to be the laptop just sitting there waiting for something to happen; there’s only a tiny amount of disk activity going on.

The first step is to figure out why the boot process is so slow. So, in a Terminal, run dmesg, which displays a timestamped log of each subsystem initialising. The timestamp counts the number of seconds from power-on. Looking through this list I saw a huge leap of about 350 seconds, at this line:

[  352.885250] ADDRCONF(NETDEV_UP): eth0: 
    link is not ready

Turns out Ubuntu can’t quite deal with the Ethernet card (possibly a Realtek network module). Opening the “Connection Information” in the Network menu (top right icon of up-and-down arrows) identifies the network card driver as r8169.

The solution turns out to be delaying the initialisation of the Ethernet card: removing it from the boot sequence and adding it to the rc.local initialisation. Here’s how to do that:

First we blacklist the card from the early boot initialisation steps (all on one line):

echo "blacklist r8169" | 
  sudo tee /etc/modprobe.d/blacklist-ethernet.conf

The second step is to initialise it later, by editing /etc/rc.local and adding modprobe r8169 just before the exit 0 line:

modprobe r8169
exit 0

Thirdly, we need to rebuild the boot image to take the newly added blacklist item into account, so run the following:

sudo dpkg-reconfigure linux-image-$(uname -r)

Once that is done, reboot the laptop.

For me that reduced the boot time to 38 seconds, a reduction of about 90%. There are still a couple more seconds to be saved by disabling ipv6 and parallel-port support, but that’s for a rainy day.

Related links

  • UbuntuForums: [SOLVED] Ubuntu 12.04 boots for more then 60s
  • UbuntuForums: Possible to blacklist lp module? Causing 3 minute boot delay!
Full Frontal 2011 - My Notes
www.isolani.co.uk/blog/javascript/FullFrontal2011MyNotes (2011-11-29)

Another November, and the third Full Frontal one-day conference took place at the Duke of York cinema in Brighton. Last year's conference was fairly interesting, if a little demo-heavy. This year's seemed geared toward development tools and web applications. The headline speaker was ex-Yahoo Nicholas Zakas, but the supporting speakers were more impressive and fresh.

The most interesting talk was Jeremy Ashkenas' CoffeeScript Design Decisions - which surprised me somewhat, considering my opinion on the impracticability of CoffeeScript in a real-world web development team. I was also pleasantly surprised by Rik Arends' talk about the Cloud9 IDE. Phil Hawksworth's talk was eminently listenable, but unfortunately impossible to live-blog. Eloquent JavaScript author Marijn Haverbeke was incredibly interesting, but perhaps too technical for a conference talk. The final two talks, from Brendan Dawes and Marcin Wichary, were both entertaining and (more importantly) inspiring. Glenn Jones provided a very useful look at almost-ready browser features for sharing data. And Zakas rolled out a two-year-old talk.

CoffeeScript Design Decisions by Jeremy Ashkenas

I covered Ashkenas on CoffeeScript Design Decisions in a separate blog post. It was that interesting and bloggable.

Respectable Code-Editing in the Browser by Marijn Haverbeke

Marijn Haverbeke talks about code editors in a browser; basically glorified textareas. Textareas themselves are too primitive to build a code editor on: indentation is unworkable, and good luck finding which line of your code you are currently on.

Code editing in a browser received a massive boost from Bespin, a code editor written entirely in canvas, basically recreating every pixel of a UI. As a demo Bespin was interesting, but using canvas is an inappropriate solution. Thankfully this abysmal mess of inaccessibility is now mostly abandoned, and the code editors that have arisen from this work are far more conducive to the web environment and more perceptive of the DOM. One significant limitation of canvas is its poor portability.

So far there are only three serious implementations of in-browser code editors:

  • ACE, which took the good parts of Bespin, and with new techniques it is much faster and more portable than Bespin.
  • Orion, which is an Eclipse project, uses content editable / design mode as a basis.
  • CodeMirror, which is Marijn's project. Currently in its second iteration.

All three are open source projects, and to date there isn't a single commercial implementation of a browser-based code editor.

Marijn is the sole developer behind CodeMirror. The first major version of CodeMirror used contenteditable (or design mode), a browser-supported mechanism for inline editing. But Marijn found the feature underspecified, and the browser support buggy or unsuitable for building a code editor. Issues ranged from Internet Explorer inserting paragraphs whenever it could, to the general unavailability of useful events.

The original version of CodeMirror grew out of Marijn's previous project, an online JavaScript book called Eloquent JavaScript. The book contains exercises and demonstrations of JavaScript, code that could be written and run inline on the page. For the book, Marijn's solution of a Firebug-like textarea console and output window worked. He later added colour syntax highlighting by overlaying the textarea with coloured text DOM nodes, but this turned out to be too slow with pure JavaScript DOM changes. This gave rise to the first version of CodeMirror.

After contenteditable proved insufficient (code folding was impossible, for example), Marijn started version 2.0 of CodeMirror, based on pure DOM. He limited the size of the DOM by creating a viewport onto the document, so only the visible part needed to be rendered on the page, speeding up the overall rendering of the document. Marijn calls this a fake editor: just lots of changing DOM nodes. The cursor is a lie, yet the text looks editable, with copy, paste and moving text all working. Marijn crafted his own text-selection algorithms, and that worked pretty well since CodeMirror has full control over the document model.
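
A minimal sketch of that viewport idea (illustrative only, not CodeMirror's actual internals): the full document stays in memory, but DOM nodes exist only for the lines scrolled into view.

function renderViewport(lines, scroller, lineHeight, output) {
    var first = Math.floor(scroller.scrollTop / lineHeight);
    var visible = Math.ceil(scroller.clientHeight / lineHeight);
    var last = Math.min(first + visible, lines.length);

    // Throw away off-screen nodes and rebuild just the visible slice.
    output.innerHTML = '';

    // Padding stands in for the off-screen lines, keeping the
    // scrollbar proportions honest.
    output.style.paddingTop = (first * lineHeight) + 'px';
    output.style.paddingBottom = ((lines.length - last) * lineHeight) + 'px';

    for (var i = first; i < last; i++) {
        var node = document.createElement('pre');
        node.textContent = lines[i];
        output.appendChild(node);
    }
}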

CodeMirror's API approach allows undo/redo, code folding, and an extensible mechanism for supporting multiple languages with both syntax highlighting and code completion. Building a language extension is about implementing one method, token(), which is called, SAX-like, for each token within the edited document. There are hooks for managing state through the document. The typical features of a good editor, such as search-in-place and text replacement, end up being scripts on top of the CodeMirror API.
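
A sketch of what such a language extension looks like, as a deliberately trivial mode (the defineMode/token shape follows CodeMirror's extension API, though details vary between versions):

CodeMirror.defineMode("shouty", function () {
    return {
        token: function (stream, state) {
            // Style runs of capital letters as keywords...
            if (stream.match(/^[A-Z]+/)) return "keyword";
            // ...and consume anything else unstyled.
            stream.next();
            return null;
        }
    };
});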

CodeMirror supports the identification of local variables, helping the developer spot potential bugs through its ability to distinguish the scope of variables as the code is being edited. This support isn't limited to JavaScript: XML-based languages have mismatched-tag detection, for example.

CodeMirror can also handle syntax highlighting of multiple languages in the same document (composed modes). Thus in an HTML page containing CSS and JavaScript - and even PHP - each piece can be independently supported. I guess this is done on a line-by-line basis, so perhaps an inline piece of JavaScript surrounded by non-JavaScript on the same line might not be syntax-highlighted in the same way as a block of JavaScript.

Internally, the document being edited is stored as a doubly-indexed B-tree, as this allows two-way mapping between the actual line of code and the line currently displayed on screen. These two can differ because of code folding, long lines being wrapped, and the displayed region being offset from the top of the document.

The editor listens for scroll events, and avoids startup freezes by rendering the DOM nodes once there's sufficient information available to render the currently visible region. So scrolling and loading large documents don't slow down the interface.

Copy and paste is a clever hack: CodeMirror detects a right-click and inserts a hidden textarea underneath the cursor. The context menu at that point then offers copy and paste options, because they are native to text fields.

With demos of the above features, and by talking through how they were implemented, Marijn delivered an interesting tech-heavy talk.

How We Architected Cloud9 IDE by Rik Arends

Rik Arends introduces Cloud9 IDE as the easiest way to work with node.js: an online IDE that itself runs on node.js.

Though, why build an IDE with JavaScript? JavaScript developers lack tooling. Each major language has an IDE geared for it; Java has Eclipse, C++ has Visual Studio, while JavaScript is added to IDEs and editors as an afterthought, always treated as a second-class language. By using JavaScript, developers will already know how to extend the IDE they use.

We evangelise the Web, but we don't develop on it. Rik accepts the challenge: if many applications can be hosted on the web, why not a developer IDE? Surely web developers should be able to work using a cloud-based IDE. If the web can self-host its entire toolchain, isn't this, he asks, the best way of pushing the web forward?

IDEs don't need to be ugly or clunky, so Cloud9 takes a designed approach to IDEs. One year ago it looked a little like Eclipse. Today it will look very familiar to TextMate users (and indeed it takes some useful cues from the Sublime Text Editor). They are even currently importing TextMate themes.

Cloud9 is betting the company on building a cloud-based IDE. It is currently funded by Accel and Atlassian. And, fittingly, Cloud9 IDE is developed using the Cloud9 IDE. The IDE is open source, and downloadable from GitHub to run on your own server. (I caught up with Rik later in the day and he confirmed it can be up and running on a VPS as long as node.js has at least 128MB of RAM available. So a cleanly configured 256MB VPS should be a useful starting point.)

Rik demos the Cloud9 IDE; projects are brought in via GitHub (after OAuthing with GitHub). The main editing interface uses the ACE code-editing component. This renders a viewport of the current code as DOM fragments (similar to CodeMirror 2, I gather). This means rendering is lightning fast, and syntax highlighting is easy to extend for different languages and markup schemas. One recently added editor feature is a mini-map of the code, a good visual overview; this is based on the finding that developers recognise code blocks by how they look, rather than by the actual code.

Cloud9 IDE has a console for running git commands. It's actually an emulation of a server shell, with a very limited range of unix commands. One feature coming soon is a link to a virtual machine, which will allow a real unix console right in the IDE.

Projects can have node.js servers configured with them (essentially on the fly, and managed through the Cloud9 IDE interface). This allows inline node.js debugging, so the developer has the whole gamut of breakpoints and means of stepping through code, peeking at stack traces and in-scope variables (much like the Firebug debugger for browser-based JavaScript debugging).

The IDE has support for Heroku-based deploys (as git commit hooks, I think). The console has code completion (for example, git commands and options).

Cloud9 IDE takes advantage of a number of publicly available libraries and toolkits, including:

  • APF - a high-performance UI library
  • ACE - the code-editing component
  • Socket.io - realtime socket networking; Rik demoed inline chat and collaborative editing within the Cloud9 IDE interface
  • jsDAV - a pure JavaScript implementation of the WebDAV protocol
  • Connect - the HTTP middleware (which Rik compares to a build-your-own customised Apache server)

Lessons learned using node.js

Rik lists some useful pointers in building applications on node.js:

  • Structured exceptions do not fit node.js; they don't make sense in an evented system, since there's no clean way of surfacing errors back up to where they can be seen. And because of the asynchronicity of node, you don't want an exception to break the whole call chain (see the sketch after this list).
  • Beware of globals. Definitely "use strict" to catch these. Beware of modifying the prototype of objects, since those changes persist; prototype changes are global, so they can cause unintended side-effects elsewhere in your code.
  • Minimise your callback nesting by grouping functions together into a more linear order. (I guess this also hints at considering promises.)
  • For dense business logic and tests, CPS transforms make sense; use a code analyser to refactor into top-down-looking code.
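
As promised above, a sketch of the convention node.js settled on in place of structured exceptions (the file name and helper are illustrative): the error travels up the asynchronous call chain as the first callback argument, rather than unwinding the stack.

var fs = require('fs');

function readConfig(path, callback) {
    fs.readFile(path, 'utf8', function (err, data) {
        if (err) return callback(err);       // surface the error to the caller
        try {
            callback(null, JSON.parse(data));
        } catch (parseErr) {
            callback(parseErr);              // parse failures become values too
        }
    });
}

readConfig('/etc/app.json', function (err, config) {
    if (err) return console.error('could not load config:', err.message);
    console.log('loaded', config);
});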

From a systems point of view:

  • Keep an eye on state; preferably have none, because it persists across requests.
  • Monitor node.js: use a piece of software called forever, which runs node apps in the background, keeps checking that they're still alive, and restarts them when they fail.
  • Log everything so you can reconstruct bugs. Use logging over UDP, for example, so it's fire-and-forget.
  • Thick client + stateless server = win.
  • Put node behind a proxy; it makes everything very manageable: load balancing, server upgrades, starting processes.
  • Do not blindly trust a library or module: node.js is pretty new, so there are many examples of "my first node.js module" that never get updated with fixes and improvements. Don't be afraid to get your hands dirty and build your own libraries and modules when necessary.

Rik's excellent talk covers useful ground in building and architecting node.js applications. It's also a great example of a supposedly impossible-to-build application implemented in a highly usable, flexible and powerful way. I guess this is the first serious node.js app I've seen under the covers of, and a great demonstration of what's possible with node.js.

Scalable JavaScript Application Architecture by Nicholas Zakas

We live in a globally connected world. Conferences happen regularly; they are recorded on video and posted online for the world to watch. The JavaScript community is small and closely knit, so we keep up with what's going on outside the UK's borders. With that in mind, I find the idea of paid conference speakers who get top billing (both local speakers and those flown in from other countries) giving the same talk multiple times at different events deplorable and unacceptable behaviour.

Nicholas Zakas' talk "Scalable JavaScript Application Architecture" is unchanged from the original talk he gave at the Bayjax conference hosted at Yahoo, Sunnyvale in September 2009. Two years ago. I've already watched the video of this talk (several times), so I was unimpressed to see him wheel it out again verbatim.

This repetition may make sense if this was a last-minute stand-in replacement for another speaker, or one to make up the numbers speaker-wise. But not for a top-billed speaker flown in from the USA, that we as attendees were paying for.

Beyond the Page by Glenn Jones

One-page web applications are appearing all over the place, and as Glenn accurately notes they are silos, but they don't need to be. Glenn shows a couple of demos he has been working on that allow data from one web application to be communicated to another.

His central application is a people store, storing vcards (or the microformat equivalent hcards). He shows how drag and drop between two separate browsers can be used to copy across the vcard information purely in the browser, meaning that this transfer can be done on any web-based application hosted anywhere.

Glenn shows further capabilities, of dragging and dropping vcards from the file system to the browser, and saving vcards from the browser back to the file system.

This shows that the drag-and-drop API and the filesystem-based APIs (including uncompressing zip files in pure JavaScript) are getting to the point of being useful to the developer. Glenn demos the applications first, and then goes back over each one showing how the demo is done, along with the APIs used and the current set of browsers those APIs work in.
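
A hedged sketch of the pattern Glenn demonstrated (the element ids and vcard payload are illustrative): the structured data is attached to the drag under a specific content type, so a drop target elsewhere can read it back.

var card = document.getElementById('contact-card');
card.setAttribute('draggable', 'true');

card.addEventListener('dragstart', function (e) {
    // Attach the structured data itself, not just text or an image.
    e.dataTransfer.setData('text/x-vcard',
        'BEGIN:VCARD\nVERSION:3.0\nFN:Example Person\nEND:VCARD');
});

var zone = document.getElementById('drop-zone');
zone.addEventListener('dragover', function (e) { e.preventDefault(); });
zone.addEventListener('drop', function (e) {
    e.preventDefault();
    console.log(e.dataTransfer.getData('text/x-vcard'));
});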

What impressed me was that the data wasn't just text, or an image, but a set of structured data plus an image. The drag-and-drop looks very useful, but Glenn notes that the HTML5 specification for drag-and-drop is a disaster - concurring with PPK's viewpoint. Currently, advances in drag-and-drop seem to be driven by GMail.

Another clever demo was copying in a chunk of HTML (using the clipboard). Glenn shows that under the covers he is using contenteditable / design mode to receive the HTML, then running a standard parser over it to extract the structured information. Certainly a sensible use of a microformats parser.

As an additional extra, Glenn uses the Google Social Graph API which, given an email address, returns a list of social network or XFN-type information about the person represented by that address.

The last part of Glenn's talk covered Web Intents. These are used to map common actions with services, such as posting a status, bookmarking a link, editing an image, picking a profile. Glenn shows the two main ways Web Intents improves the flow of information:

Many blog posts have social bookmarking links. Instead, Glenn shows that by replacing these with a single "Bookmark this link" and identifying it as such to the browser, the browser can then route that action to the service the user actually uses. So clicking the link opens the Delicious bookmarking dialogue for one user, and a Pinboard dialogue for another.
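
A sketch of how that looked with the experimental Web Intents API of the time (since abandoned; the action URL and payload are illustrative):

// The page describes the action; the browser routes it to the
// service the user has registered for it.
var intent = new Intent(
    'http://webintents.org/save',    // a "bookmark this" style action
    'text/uri-list',
    'http://example.com/interesting-article'
);

navigator.startActivity(intent, function (result) {
    // Called when the user's chosen service (Delicious, Pinboard...)
    // finishes handling the intent.
    console.log('bookmarked', result);
});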

The other direction is filling in forms - like street addresses. He demos a web intent that maps a "Fill in this address using a profile" link to his own people store. That way he can select the profile appropriate for the form, such as his personal profile for a home address, or his business profile for his office address.

A very informative talk about features that are on the way to being ready for use in websites, features that improve the user experience and thus help reduce the risk of customers disengaging from sites.

Beyond the Planet of the Geeks by Brendan Dawes

Brendan Dawes is a designer and a geek, and he delights in the design of items that escape the rest of us - such as Japanese and Brazilian paperclips (and pencils). And he lets his creativity loose. When a high-profile architecture company hired Brendan's company to design a new experience for their portfolio of buildings, and saw in Brendan that very same boundless creativity, amazing things happened.

Brendan's narrative draws you in; he shows that sometimes it's good to build things that are not user-friendly, because that way you get experiences that are memorable, enriching and deeply rewarding, as well as expressing a brand identity. One side-effect of a rich exploratory interface is that it opens the visitor up to serendipitous discovery, one of the most wonderful experiences for curious people.

Brendan's talk is about visualisation of data into an engaging experience. He talks about the ideas that worked, how the idea evolved, how the customer asked to make the logo smaller, about what it's like creating with no limits in place.

Yet he also shows that the idea is still grounded in technology. It's built in HTML5 because it's something Brendan doesn't know (Brendan is a firm believer in moving on to different technologies once you have mastered the current one).

An utterly enjoyable and inspiring talk, and a nice counter-balance to the other, more technical talks.

Google Doodles by Marcin Wichary

Marcin works for Google, and his 20% project is helping the Google Doodles team build their next doodle. He's been involved in several of the new interactive doodles that have appeared on the Google homepage over the last year: from the faithful implementation of PacMan, to the ??? machine, the magnifying-glass doodle, the Jules Verne underwater exploration doodle, and the ??? animation.

After a quick summary of the history of the Google Doodle (from the first Burning Man doodle right up to when Marcin got involved), Marcin dives into the interesting stories behind the interactive ones, covering the usability tests and technical challenges of each doodle. We are shown several versions of the Jules Verne doodle, and see the changes made and how they considered IE support.

Marcin talks about how each of the doodles evolved to its end result, the user-experience factors, the technical limitations, and the reinvention of old animation techniques. Marcin has a healthy interest in retro-game programming techniques, and it's eye-opening how some of those techniques have been adapted in these famous doodles. The Martha Graham dancing-woman doodle is a magnificent example of his Crushinator technique, which reminded me of the bit-blitting techniques of the Amiga.

He covers animation tricks like a thick lens holder to hide the coarseness of rectangular clip-regions, saving performance while still producing a jaw-dropping result - similar to the Dark Sceptre approach to masking large animated characters.

Marcin shows some of the easter eggs present in the doodles, like the Jules Verne one being controllable by the accelerometer built into MacBook Pros. Many people first learnt about that particular hardware feature because of the Jules Verne doodle. Marcin notes some of the bugs they uncovered, for example the same generation of MacBook Pro having different accelerometer chips, which resulted in the movements being completely reversed.

A fascinating exploration of animation, interactivity and rich user experience on the world's most trafficked page, showing how old animation techniques are repurposed in the ever-evolving web.

Conclusion

Full Frontal was an enjoyable day spent watching masters of JavaScript showing their efforts and taking us behind the scenes. I like Remy & Julie's choice of speakers - the people who actually ply their trade and craftsmanship, the ones who don't seek out the limelight and attention, and generally just do a wonderful job and are passionate about what they do.

Full Frontal delivers in a way no other conference does: a small audience in a homey venue, free from marketing and hype, delivering great value, inspiration, and a focus on the web developer. It is my favourite event.

Full Frontal 2011: CoffeeScript Design Decisions by Jeremy Ashkenas
www.isolani.co.uk/blog/javascript/FullFrontal2011CoffeeScriptDesignDecisions (2011-11-15)

I'm skeptical about CoffeeScript. A large chunk of that skepticism is the idea that compiling one language into JavaScript introduces additional layers of complexity that lead to maintainability issues and difficulty in debugging problematic code. It's a cognitive dissonance that distracts from the task of developing production-ready code.

Jeremy Ashkenas' guided tour of CoffeeScript is informative and useful for seeing what's available. Essentially, CoffeeScript's principle is "it's just JavaScript". Except it isn't: it's syntactically different, and requires a compilation step to transform it into JavaScript.

It's just JavaScript

Ashkenas points out that there are plentiful cross-compilers to JavaScript from almost every computer language with a modicum of current support. Those languages are not JavaScript, and their paradigms are not directly related or relevant to JavaScript. CoffeeScript is different in that it embraces "JavaScript: The Good Parts" while ignoring the parts that don't make Crockford's list, and then cleans up the syntax: removing unwanted decoration and unnecessary boilerplate, and adding new syntactic sugar. Semantically, CoffeeScript is JavaScript with a different syntax.

Essentially the aim with CoffeeScript (and it seems they are still on target) is that the generated JavaScript is what you would have written in the first place (in a perfect world, at least). This means that debugging the code should be easier. It also reinforces the guidance that CoffeeScript isn't a first language: CoffeeScript users must know JavaScript, at least the Good Parts, and be comfortable with it.

So CoffeeScript isn't about avoiding JavaScript, or pretending it's Ruby. It's as JavaScripty as possible. It smoothes over the compatibility gaps (for example, Internet Explorer not having an indexOf method on arrays; CoffeeScript polyfills that in).

Throughout his talk Ashkenas points out the multiple applicability of CoffeeScript, not just for use in the browser, but also on the server with node.js, and also for scripting Photoshop.

History of CoffeeScript

CoffeeScript is almost two years old; it was first developed on Christmas Day 2009. Initially it was built in Ruby, using its lexer and parser libraries. At version 0.5 the CoffeeScript compiler was rewritten in CoffeeScript, and it has been self-hosting since. Ashkenas takes great pride in knowing that an update to the CoffeeScript compiler is compiled with the previous version of CoffeeScript, and the resulting code (the new version of the compiler) then compiles the source code again. This self-hosting / bootstrapping process highlights how mature the CoffeeScript compiler already is.

Essentially CoffeeScript is written in CoffeeScript.

One idea with CoffeeScript is to use it as a basis for prototyping new JavaScript language features, so it's being tracked by various members of the JavaScript standards bodies (e.g. Brendan Eich). CoffeeScript's core features are a cleaner syntax, improved semantics, and a basis for new JavaScript goodies. In CoffeeScript everything is an expression. It also introduces list comprehensions over arrays (offering comprehension support alongside array methods like filter, map and reduce).

JavaScript to CoffeeScript

To get from JavaScript to CoffeeScript involves the following steps:

  • remove semi-colons
  • remove the var statement
  • remove curly braces (both of statement blocks and object literals)
  • remove the return keyword (what is returned is the last expression - since everything is an expression)
  • replace the function keyword with ->

So a JavaScript function:

var square = function(x) {
    return x*x;
};

In CoffeeScript becomes:

square = (x) -> x*x
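
For comparison, this is approximately the JavaScript the CoffeeScript compiler emits for that one-liner (exact output varies by compiler version); note the hoisted var declaration and the implicit return:

var square;

square = function(x) {
  return x * x;
};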

Ashkenas takes the audience through the various JavaScript structures and shows how they look in CoffeeScript. What is surprising is the clarity of one-to-one mapping between the two languages. Object literals are already CoffeeScript ready: just remove the curly braces and commas (so no trailing comma problems), but keep the indentation. CoffeeScript objects are whitespace dependent (like Python).

Additional Features

CoffeeScript adds list comprehensions:

# Iterating through a list of values
list = ['Curly', 'Larry', 'Moe']
for stooge in list
    console.log 'Hi ' + stooge

# Iterating through the keys of an object
for name of process
    console.log name

# Iterating through a range
for i in [0..10]
    console.log i

So a list comprehension in CoffeeScript looks like this:

list = [ 'Curly', 'Larry', 'Moe' ]
hello_stooges = for stooge in list when stooge isnt "Moe"
    "Hi " + stooge

console.log hello_stooges

In CoffeeScript everything is an expression; this means statements, and even try/catch blocks, are expressions.

CoffeeScript adds splats (...), which are a way of dealing with a variable number of arguments in JavaScript methods.

bestActor = (winner, others...) ->
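
Approximately what the compiler turns the splat into (helper and variable names vary by CoffeeScript version): the remaining arguments are sliced into a real array.

var bestActor,
  __slice = [].slice;

bestActor = function() {
  var winner, others;
  winner = arguments[0];
  others = 2 <= arguments.length ? __slice.call(arguments, 1) : [];
};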

CoffeeScript also introduces an existential operator (?), which allows short-circuiting of dot-operator variable references. When part of the dot-path isn't found, the expression returns undefined rather than the code collapsing with a "property not found" or null error:

sue =
    name: "Sue Jones"
    age: 27

# Using the existential operator
console.log sue.non?.existent.property

# The existential operator can also be used on methods:
console.log sue.name.reverse?()
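
Approximately what the compiler emits for the first of those (variable naming varies by version); the != null guard catches both null and undefined before the rest of the path is dereferenced:

var _ref;

console.log((_ref = sue.non) != null ? _ref.existent.property : void 0);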

CoffeeScript's multi-line string blocks are similar to Python's triple-quoted strings (with interpolation of variables done with #{varname}):

haiku = """
    multi-line
    string
    block. #{varname}
"""

The multi-line feature is also available for comments (start and end them with three octothorpes), and, more usefully, in regular expressions (using three forward slashes), which certainly helps the general readability of regular expressions.

Inheritance with CoffeeScript

Classes are simpler in CoffeeScript than in JavaScript, though they look more classical than prototypal at face value. This risks confusing newbies and leading to subtle bugs that are hard to isolate. But, Ashkenas is clear, CoffeeScript is for developers who are already comfortable with JavaScript. Classical-looking inheritance is a case of the extends keyword and the super() method call.
