wingolog

on generators

25 February 2013 4:17 PM (igalia | javascript | ecmascript | v8 | javascriptcore | generators | delimited continuations | scheme | guile)

Hello! Is this dog?

Hi dog this is Andy, with some more hacking nargery. Today the topic is generators.

(shift k)

By way of introduction, you might recall (with a shudder?) a couple of articles I wrote in the past on delimited continuations. I even gave a talk on delimited continuations at FrOSCon last year, a talk that was so hot that thieves broke into the FrOSCon video squad's room and stole their hard drives before the edited product could hit the tubes. Not kidding! The FrOSCon folks still have the tapes somewhere, but meanwhile time marches on without the videos; truly, the terrorists have won.

Anyway, the summary of all that is that Scheme's much-praised call-with-current-continuation isn't actually a great building block for composable abstractions, and that delimited continuations have much better properties.

So imagine my surprise when Oleg Kiselyov, in a mail to the Scheme standardization list, suggested a different replacement for call/cc:

My bigger suggestion is to remove call/cc and dynamic-wind from the base library into an optional feature. I'd like to add the note inviting the discussion for more appropriate abstractions to supersede call/cc -- such [as] generators, for example.

He elaborated in another mail:

Generators are a wide term, some versions of generators are equivalent (equi-expressible) to shift/reset. Roshan James and Amr Sabry have written a paper about all kinds of generators, arguing for the generator interface.

And you know, generators are indeed a very convenient abstraction for composing producers and consumers of values -- more so than the lower-level delimited continuations. Guile's failure to standardize a generator interface is an instance of a class of Scheme-related problems, in which generality is treated as more important than utility. It's true that you can do generators in Guile, but we shouldn't make the user build their own.

The rest of the programming world moved on and built generators into their languages. James and Sabry's paper is good because it looks at yield as a kind of mainstream form of delimited continuations, and uses that perspective to place the generators of common languages in a taxonomy. Some of them are as expressive as delimited continuations. There are also two common restrictions on generators that trade off expressive power for ease of implementation:

  • Generators can be restricted to disallow backtracking, as in Python. This allows generators to yield without allocating memory for a new saved execution state, instead updating the stored continuation in-place. This is commonly called the one-shot restriction.

  • Generators can also restrict access to their continuation. Instead of reifying generator state as external iterators, internal iterators call their consumers directly. The first generators developed by Mary Shaw in the Alphard language were like this, as are the generators in Ruby. This can be called the linear-stack restriction, after an implementation technique that they enable.

With that long introduction out of the way, I wanted to take a look at the proposal for generators in the latest ECMAScript draft reports -- to examine their expressiveness, and to see what implementations might look like. ES6 specifies generators with the first kind of restriction -- with external iterators, like Python.

Also, Kiselyov & co. have a hip new paper out that explores the expressiveness of linear-stack generators, Lazy v. Yield: Incremental, Linear Pretty-printing. I was quite skeptical of this paper's claim that internal iterators were significantly more efficient than external iterators. So this article will also look at the performance implications of the various forms of generators, including full resumable generators.

ES6 iterators

As you might know, there's an official ECMA committee beavering about on a new revision of the ECMAScript language, and they would like to include both iterators and generators.

For the purposes of ECMAScript 6 (ES6), an iterator is an object with a next method. Like this one:

var p = console.log.bind(console); // bound, so it can be called as a free function

function integers_from(n) {
  function next() {
    return this.n++;
  }
  return { n: n, next: next };
}

var ints = integers_from(0);
p(ints.next()); // prints 0
p(ints.next()); // prints 1

Here, ints is an iterator that returns successive integers from 0, all the way up to 2^53, at which point the floating-point additions stop being exact.

To indicate the end of a sequence, the current ES6 drafts follow Python's example by having the next method throw a special exception. ES6 will probably have an isStopIteration() predicate in the @iter module, but for now I'll just compare directly.

function StopIteration() {}
function isStopIteration(x) { return x === StopIteration; }

function take_n(iter, max) {
  function next() {
    if (this.n++ < max)
      return iter.next();
    else
      throw StopIteration;
  }
  return { n: 0, next: next };
}

function fold(f, iter, seed) {
  var elt;
  while (1) {
    try {
      elt = iter.next();
    } catch (e) {
      if (isStopIteration (e))
        break;
      else
        throw e;
    }
    seed = f(elt, seed);
  }
  return seed;
}

function sum(a, b) { return a + b; }

p(fold(sum, take_n(integers_from(0), 10), 0));
// prints 45

Now, I don't think that we can praise this code for being particularly elegant. The end result is pretty good: we take a limited amount of a stream of integers, and combine them with a fold, without creating intermediate data structures. However, the means of achieving the result -- the implementation of the producers and consumers -- are nasty enough to prevent most people from using this idiom.

To remedy this, ES6 does two things. One is some "sugar" over the use of iterators, the for-of form. With this sugar, we can rewrite fold much more succinctly:

function fold(f, iter, seed) {
  for (var x of iter)
    seed = f(x, seed);
  return seed;
}

The other thing offered by ES6 is generators. Generators are functions that can yield. They are defined by function*, and calling one returns an iterator over its yielded values:

function* integers_from(n) {
  while (1)
    yield n++;
}

function* take_n(iter, max) {
  var n = 0;
  while (n++ < max)
    yield iter.next();
}

These definitions are mostly equivalent to the previous iterator definitions, and are much nicer to read and write.

I say "mostly equivalent" because generators are closer to coroutines in their semantics: they can also receive values or exceptions from their caller, via the send and throw methods. This is very much as in Python. Task.js is a nice example of how a coroutine-like treatment of generators can help avoid the fraction-of-an-action callback mess in frameworks like Twisted and node.js.
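For instance, a consumer can push values back into a paused generator. Here's a minimal sketch under the draft semantics used in this article, where next and send return the yielded value directly (the method names may well change before the final standard):

function* adder() {
  var sum = 0;
  while (1)
    sum += yield sum;  // yield the running total; add whatever the caller sends back
}

var a = adder();
p(a.next());   // prints 0: runs to the first yield
p(a.send(2));  // prints 2
p(a.send(3));  // prints 5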

There is also a yield* operator to delegate to another generator, as in Python's PEP 380.
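As a sketch of how delegation fits with the definitions above, a generator can splice in all of another generator's values with a single yield* expression:

function* first_n_integers(n) {
  // delegate: every value take_n yields is re-yielded here
  yield* take_n(integers_from(0), n);
}

p(fold(sum, first_n_integers(10), 0)); // prints 45, as before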

OK, now that you know all the things, it's time to ask (and hopefully answer) a few questions.

design of ES6 iterators

Firstly, has the ES6 committee made the right decisions regarding the iterators proposal? I think they did well in choosing external iterators over internal iterators, and the choice to duck-type the iterator objects is a good one. In a language like ES6, lacking delimited continuations or cross-method generators, external iterators are more expressive than internal iterators. The for-of sugar is pretty nice, too.

Of course, expressivity isn't the only thing. For iterators to be maximally useful they have to be fast, and here I think they could do better. Terminating iteration with a StopIteration exception is not so great. It will be hard for an optimizing compiler to rewrite the throw/catch loop terminating condition into a simple goto, and with that you lose some visibility about your loop termination condition. Of course you can assume that the throw case is not taken, but you usually don't know for sure that there are no more throw statements that you might have missed.

Dart did a better job here, I think. They split the next method into two: a moveNext method to advance the iterator, and a current getter for the current value. Like this:

function dart_integers_from(n) {
  function moveNext() {
    this.current++;
    return true;
  }
  return { current: n-1, moveNext: moveNext };
}

function dart_take_n(iter, max) {
  function moveNext() {
    if (this.n++ < max && iter.moveNext()) {
      this.current = iter.current;
      return true;
    }
    else
      return false;
  }
  return {
    n: 0,
    current: undefined,
    moveNext: moveNext
  };
}

function dart_fold(f, iter, seed) {
  while (iter.moveNext())
    seed = f(iter.current, seed);
  return seed;
}
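Wired together like the earlier examples, these produce the same result:

p(dart_fold(sum, dart_take_n(dart_integers_from(0), 10), 0));
// prints 45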

I think it might be fair to say that it's easier to implement an ES6 iterator, but it is easier to use a Dart iterator. In this case, dart_fold is much nicer, and almost doesn't need any sugar at all.

Besides the aesthetics, Dart's moveNext is transparent to the compiler. A simple inlining operation makes the termination condition immediately apparent to the compiler, enabling range inference and other optimizations.

iterator performance

To measure the overhead of for-of-style iteration, we can run benchmarks on "desugared" versions of for-of loops and compare them to normal for loops. We'll throw in Dart-style iterators as well, and also a stateful closure iterator designed to simulate generators. (More on that later.)

I put up a test case on jsPerf. If you click through, you can run the benchmarks directly in your browser, and see results from other users. It's imprecise, but I believe it is accurate, more or less.

The summary of the results is that current engines are able to run a basic summing for loop in about five cycles per iteration (again, assuming 2.5GHz processors; the Epiphany 3.6 numbers, for example, are mine). That's suboptimal only by a factor of 2 or so; pretty good. High fives all around!

However, performance with iterators is not so good. I was surprised to find that most of the other tests are suboptimal by about a factor of 20. I expected the engines to do a better job at inlining.

One data point that stands out in my limited data set is the development Epiphany 3.7.90 browser, which uses JavaScriptCore from WebKit trunk. This browser did significantly better with Dart-style iterators, something I suspect is due to better inlining, though I haven't verified it.

Better inlining and stack allocation are already a development focus of modern JS engines. Dart-style iterators will get faster without being a specific focus of optimization. It seems like a much better strategy for the ES6 specification to piggy-back off this work and specify iterators in such a way that they will be fast by default, without requiring more work on the exception system, something that is universally reviled among implementors.

what about generators?

It's tough to know how ES6 generators will perform, because they're not widely implemented and don't have a straightforward "desugaring". As far as I know, there is only one practical implementation strategy for yield in JavaScript, and that is to save the function's stack and the set of live registers. Resuming a generator copies the activation back onto the stack and restores the registers. Any other strategy that involves higher-level control-flow rewriting would make debugging too difficult. This is not a terribly onerous requirement for implementations: they already have to know exactly what is live and what is not, and can reconstruct a stack frame without too much difficulty.

In practice, invoking a generator is not much different from calling a stateful closure. For this reason, I added another test to the jsPerf benchmark I mentioned before that tests stateful closures. It does slightly worse than iterators that keep state in an object, but not much worse -- and there is no essential reason why it should be slow.
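For the curious, here is roughly what such a stateful closure looks like, with take_n as the example: the iteration state lives in captured variables instead of object properties. (A sketch; the actual benchmark may differ in details.)

function take_n_closure(iter, max) {
  var n = 0;  // state captured by the closure, not stored on the object
  return {
    next: function () {
      if (n++ < max)
        return iter.next();
      throw StopIteration;  // same termination protocol as before
    }
  };
}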

On the other hand, I don't think we can expect compilers to do a good job at inlining generators, especially if they are specified as terminating with a StopIteration. Terminating with an exception has its own quirks, as Chris Leary wrote a few years ago, though I think in his conclusion he is searching for virtues where there are none.

dogs are curious about multi-shot generators

So dog, that's ECMAScript. But since we started with Scheme, I'll close this parenthesis with a note on resumable, multi-shot generators. What is the performance impact of allowing generators to be fully resumable?

Being able to snapshot your program at any time is pretty cool, but it has an allocation cost. That's pretty much the dominating factor. All of the examples that we have seen so far execute (or should execute) without allocating any memory. But if your generators are resumable -- or even if they aren't, but you build them on top of full delimited continuations -- you can't re-use the storage of an old captured continuation when reifying a new one. A test of iterating through resumable generators is a test of short-lived allocations.

I ran this test in my Guile:

(define (stateful-coroutine proc)
  (let ((tag (make-prompt-tag)))
    (define (thunk)
      (proc (lambda (val) (abort-to-prompt tag val))))
    (lambda ()
      (call-with-prompt tag
                        thunk
                        (lambda (cont ret)
                          (set! thunk cont)
                          ret)))))

(define (integers-from n)
  (stateful-coroutine
   (lambda (yield)
     (let lp ((i n))
       (yield i)
       (lp (1+ i))))))

(define (sum-values iter n)
  (let lp ((i 0) (sum 0))
    (if (< i n)
        (lp (1+ i) (+ sum (iter)))
        sum)))

(sum-values (integers-from 0) 1000000)

I get the equivalent of 2 Ops/sec (in terms of the jsPerf results), allocating about 0.5 GB/s (288 bytes/iteration). This is similar to the allocation rate I get for JavaScriptCore, which is also not generational. V8's generational collector lets it sustain about 3.5 GB/s, at least in my tests. Test case here; multiply the 8-element test by 100 to yield approximate MB/s.

Anyway these numbers are much lower than the iterator tests. ES6 did right to specify one-shot generators as a primitive. In Guile we should perhaps consider implementing some form of one-shot generators that don't have an allocation overhead.

Finally, while I enjoyed the paper from Kiselyov and the gang, I don't see the point in praising simple generators when they don't offer a significant performance advantage. I think we will see external iterators in JavaScript achieve near-optimal performance within a couple years without paying the expressivity penalty of internal iterators.

Catch you next time, dog!

opengl particle simulation in guile

16 February 2013 7:20 PM (guile | scheme | opengl | gl | ffi)

Hello, internets! Did you know that today marks two years of Guile 2? Word!

Guile 2 was a major upgrade to Guile's performance and expressiveness as a language, and I can say as a user that it is so nice to be able to rely on such a capable foundation for hacking the hack.

To celebrate, we organized a little birthday hack-feast -- a communal potluck of programs that Guilers brought together to share with each other. Like last year, we'll collect them all and publish a summary article on Planet GNU later on this week. This is an article about the hack that I worked on.

figl

Figl is a Guile binding to OpenGL and related libraries, written using Guile's dynamic FFI.

Figl is a collaboration between myself and Guile super-hacker Daniel Hartwig. It's pretty complete, with a generated binding for all of OpenGL 2.1 based on the specification files. It doesn't have a good web site yet -- it might change name, and probably we'll move to savannah -- but it does have great documentation in texinfo format, built with the source.

But that's not my potluck program. My program is a little demo particle simulation built on Figl.

Source available here. It's a fairly modern implementation using vertex buffer objects, doing client-side geometry updates.

To run, assuming you have some version of Guile 2.0 installed:

git clone git://gitorious.org/guile-figl/guile-figl.git
cd guile-figl
autoreconf -vif
./configure
make
# 5000 particles
./env guile examples/particle-system/vbo.scm 5000

You'll need OpenGL development packages installed -- for the .so links, not the header files.

performance

If you look closely at the video, you'll see that the program itself is rendering at 60 fps, using 260% CPU. (The CPU utilization drops as soon as the video starts, because of the screen capture program.) This is because the updates to the particle positions and to the vertices are done using as many processors as you have -- 4, in my case (2 cores with 2 threads each). I've gotten it up to 310% or so, which is pretty good utilization considering that the draw itself is single-threaded and there are other things that need the CPU, like the X server and the compositor.

Each particle has a position and a velocity, and is attracted to the center using a gravity-like force. If you consider that updating a particle's position should probably take about 100 cycles (for a square root, some divisions, and a few other floating point operations), then my 2.4GHz processors should allow for a particle simulation with 1.6 million particles, updated at 60fps. In that context, getting to 5K particles at 60 fps is not so impressive -- it's sub-optimal by a factor of 320.
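A quick sanity check of that arithmetic (in JavaScript, for continuity with the benchmarks in the previous article; the numbers are the ones from the paragraph above):

var hz = 2.4e9, cores = 4, fps = 60, cyclesPerParticle = 100;
var maxParticles = hz * cores / (fps * cyclesPerParticle);
console.log(maxParticles);        // 1600000
console.log(maxParticles / 5000); // 320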

This slowness is oddly exciting to me. It's been a while since I've felt Guile to be slow, and this is a great test case. The two major things to work on next in Guile are its newer register-based virtual machine, which should speed things up by a factor of 2 or so in this benchmark; and an ahead-of-time native compiler, which should add another factor of 5 perhaps. Optimizations to an ahead-of-time native compiler could add another factor of 2, and a JIT would probably make up the rest of the difference.

One thing I was really wondering about when making this experiment was how the garbage collector would affect the feeling of the animation. Back when I worked with the excellent folks at Oblong, I realized that graphics could be held to a much higher standard of smoothness than I was used to. Guile allocates floating-point numbers on the heap and uses the Boehm-Demers-Weiser conservative collector, and I was a bit skeptical that things could be smooth because of the pause times.

I was pleasantly surprised that the result was very smooth, without noticeable GC hiccups. However, GC is noticeable on an overloaded system. Since the simulation advances time by 1/60th of a second at each frame, regardless of the actual frame rate, differing frame processing times under load can show a sort of "sticky-zipper" phenomenon.

If I ever got to optimal update performance, I'd probably need to use a better graphics card than my Intel i965 embedded controller. And of course vertex shaders are really the way to go if I cared about the effect and not the compiler -- but I'm more interested in compilers than graphics, so I'm OK with the state of things.

Finally I would note that it has been pure pleasure to program with general, high-level macros, knowing that the optimizer would transform the code into really efficient low-level code. Of course I had to check the optimizations, with the ,optimize command at the REPL. I even had to add a couple of transformations to the optimizer at one point, but I was able to do that without restarting the program, which was quite excellent.

So that's my happy birthday dish for the Guile 2 potluck. More on other programs after the hack-feast is over; until then, see you in #guile!

knocking on private back doors with the web browser

4 February 2013 8:47 AM (security | javascript | guile | port scanning)

I woke up at five this morning with two things in my head.

One was, most unfortunately, Rebecca Black's Friday, except with "Monday" -- you know, "Monday, Monday, gettin' up on Monday, hack, hack, hack, hack, which thing should I hack?", et cetera.

The other was a faint echo of Patrick McKenzie's great article on the practical implications of the recent Rails vulnerabilities. In particular, he pointed out that development web servers running only on your local laptop could be accessed by malicious web pages.

Of course this is not new, strictly speaking, but it was surprising to me in a way that it shouldn't have been. For example, Guile has a command-line argument to run a REPL server on a local port. This is fantastic for interactive development and debugging, but it is also a vulnerability if you run a web server on the same machine. A malicious web page could request data from the default Guile listener port, which is a short hop away from rooting your machine.

I made a little test case this morning: a local port scanner. It tries to fetch localhost:port/favicon.ico for all ports between 1 and 10000 on your machine. (You can click that link; it doesn't run the scan until you click a button.)

If it finds a favicon, that indicates that there is a running local web application, which may or may not be vulnerable to CSRF attacks or other attacks like the Rails yaml attack.

If the request for a favicon times out, probably that port is vulnerable to some kind of attack, like the old form protocol attack.

If the request fails, the port may be closed, or the process listening at the port may have detected invalid input and closed the connection. We could run a timing loop to determine if a connection was made or not, but for now the port scanner doesn't output any information for this case.
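The probing technique is simple enough to sketch. This is hypothetical code, not the actual test page: it probes a single port by trying to load that port's favicon as an image and classifying the outcome as described above.

function probePort(port, timeoutMs) {
  var img = new Image();
  var done = false;
  var timer = setTimeout(function () {
    if (done) return;
    done = true;
    console.log(port + ': timed out (possibly vulnerable)');
  }, timeoutMs);
  img.onload = function () {
    if (done) return;
    done = true;
    clearTimeout(timer);
    console.log(port + ': served a favicon (web app running)');
  };
  img.onerror = function () {
    if (done) return;
    done = true;
    clearTimeout(timer);
    console.log(port + ': closed, or connection refused');
  };
  img.src = 'http://localhost:' + port + '/favicon.ico';
}

probePort(2628, 3000); // is something listening on dictd's port?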

In my case, the page finds that I have a probably vulnerable port 2628, which appears to be dictd.

Anyway, I hope that page is useful to someone. And if you are adding backdoors into your applications for development purposes, be careful!

an opinionated guide to scheme implementations

7 January 2013 1:43 PM (scheme | guile | gnu | chicken | racket | chibi | gambit)

Hark, the beloved programming language! But hark, also: for Scheme is a language of many implementations. If Scheme were a countryside, it would have its cosmopolitan cities, its hipster dives, its blue-collar factory towns, quaint villages, western movie sets full of façades and no buildings, shacks in the woods, and single-occupancy rent-by-the-hour slums. It's a confusing and delightful place, but you need a guide.

And so it is that there is a steady rivulet of folks coming into the #scheme IRC channel asking what implementation to use. Mostly the regulars are tired of the question, but when they do answer it's in very guarded terms -- because the topic is a bit touchy, and a bit tribal. There's a subtle tension between partisans of the different implementations, but people are polite enough to avoid controversial, categorical statements. Unfortunately the end result of this is usually silence in the channel.

Well I'm tired of it! All that silence is boring, and so I thought I'd give you my opinionated guide to Scheme implementations. This is necessarily subjective and colored, as I co-maintain the Guile implementation and do all of my hacking in Guile. But hey, to each folk their stroke, right? What I say might still be valuable to you.

So without further introduction, here are seven recommendations for seven use cases.

The embedded Scheme

So you have a big project written in C or C++ and you want to embed a Scheme interpreter, to allow you to extend this project from the inside in a higher-level, dynamic language. This is the sort of thing where Lua really shines. So what Scheme to use?

Use Chibi Scheme! It's small and from all accounts works great for static linking. Include a copy of it in your project, and save yourself the headache of adding an external dependency to something that changes.

Embedding Scheme is a great option if you need to ship an application on Windows, iOS, or Android. The other possibility is shipping a native binary created by a Scheme that compiles natively, so...

The Scheme that can make a binary executable

Sometimes what you want to do is just ship an executable, and not rely on the user to install a runtime for a particular Scheme system. Here you should use Chicken or Gambit. Both of these Schemes have compilers that produce C, which they then pass on to your system's C compiler to generate an executable.

Another possibility is to use Racket, and use its raco exe and raco dist commands to package together a program along with a dynamically-linked runtime into a distributable directory.

The dynamically linked Scheme

What if you want to extend a project written in C, C++, or the like, but you want to dynamically link to a Scheme provided with the user's system instead of statically linking? Here I recommend Guile. Its C API and ABI are stable, parallel-installable, well-documented, and binary packages are available on all Free Software systems. It's LGPL, so it doesn't impose any licensing requirements on your C program, or on the Scheme that you write.

The Scheme with an integrated development environment

What if you really like IDEs? For great justice, download you a Racket! It has a great debugger, an excellent Scheme editor, on-line help, and it's easy to run on all popular platforms. Incidentally, I would say that Racket is probably the most newbie-friendly Scheme there is, something that stems from its long association with education in US high schools and undergraduate programs. But don't let the pedagogical side of things fool you into thinking it is underpowered: besides its usefulness for language research, Racket includes a package manager with lots of real-world modules, and lots of people swear by it for getting real work done.

The Scheme for SICP

Many people come by #scheme asking which Scheme they should use for following along with Structure and Interpretation of Computer Programs. SICP was originally written for MIT Scheme, but these days it's easier to use Neil Van Dyke's SICP mode for Racket. It has nice support for SICP's "picture language". So do that!

The server-side Scheme

So you write the intertubes, do you? Well here you have a plethora of options. While I write all of my server-side software in Guile, and I think it's a fine implementation for this purpose, let me recommend Chicken for your consideration. It has loads of modules ("eggs", they call them) for all kinds of tasks -- AWS integration, great HTTP support, zeromq, excellent support for systems-level programming, and they have a good community of server-side hackers.

The Scheme for interactive development with Emacs

Install Guile! All right, this point is a bit of an advertisement, but it's my blog so that's OK. So the thing you need to do is install Paredit and Geiser. That page I linked to gives you the procedure. From there you can build up your program incrementally, starting with debugging the null program. It's a nice experience.

You and your Scheme

These recommendations are starting points. Most of these systems are multi-purpose: you can use Gambit as an interpreter for source code, Racket for server-side programming, or Chicken in Emacs via SLIME. And there are a panoply of other fine Schemes: Chez, Gauche, Kawa, and the list goes on and on (and even farther beyond that). The important thing in the beginning is to make a good starting choice, invest some time with your choice, and then (possibly) change implementations if needed.

To choose an implementation is to choose a tribe. Since Scheme is so minimal, you begin to rely on extensions that are only present in your implementation, and so through code you bind yourself to a world of code, people, and practice, loosely bound to the rest of the Scheme world through a fictional first-person-plural. This is OK! Going deep into a relationship with an implementation is the only way to do great work. The looser ties to the rest of the Scheme world in the form of the standards, the literature, the IRC channel, and the mailing lists provide refreshing conversation among fellow travellers, not marching orders for a phalanx.

So don't fear implementation lock-in -- treat your implementation's facilities as strengths, until you have more experience and know more about how your needs are served by your Scheme.

Finally...

Just do it! You wouldn't think that this point needs elaboration, but I see so many people floundering and blathering without coding, so it bears repeating. Some people get this strange nostalgic "Scheme is an ideal that I cannot reach" mental block that is completely unproductive, not to mention lame. If you're thinking about using Scheme, do it! Install your Scheme, run your REPL, and start typing stuff at it. And try to write as much of your program in Scheme as possible. Prefer extension over embedding. Use an existing implementation rather than writing your own. Get good at using your REPL (or the IDE, in the case of Racket). Read your implementation's manual. Work with other people's code so you get an idea of what kind of code experienced Schemers write. Read the source code of your implementation.

See you in #scheme, and happy hacking!

corps: bespoke text codecs

12 December 2012 4:57 PM (compression | text | unicode | scheme | corps | bytecode | huffman | igalia)

Happy 12/12/12, peoples! In honor of repeated subsequences, today I'm happy to release a new set of compression tools, Corps.

Corps is a toolkit for generating custom text codecs, specialized to particular bodies of text. You give it an example corpus to analyze, and Corps can generate codecs based on what it finds.

For example, if you want to develop a compression algorithm that operates on JavaScript source text, you probably want to use a special code to represent the multi-character sequence function.

I say "probably", because in reality you don't know what substrings are most common. Corps uses the Re-Pair algorithm to build up an optimal token set. This algorithm treats all characters in the input as tokens, and recursively creates composite tokens from the most common pair of adjacent tokens, repeating the process until there are no more repeated pairs of tokens. For full details, see "Off-Line Dictionary-Based Compression" by Larsson and Moffat.
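To make that concrete, here is a minimal, unoptimized sketch of the Re-Pair loop in JavaScript (illustrative only -- Corps itself is written in Scheme, and real implementations are far more efficient):

function rePair(text) {
  var tokens = text.split('');  // characters are the initial tokens
  var rules = [];               // rules[i] is the pair that composite token '#i' expands to
  for (;;) {
    // count each pair of adjacent tokens (assumes the input contains no NUL characters)
    var counts = {};
    for (var i = 0; i + 1 < tokens.length; i++) {
      var key = tokens[i] + '\u0000' + tokens[i + 1];
      counts[key] = (counts[key] || 0) + 1;
    }
    // find the most common pair; stop when no pair repeats
    var best = null, bestCount = 1;
    for (var k in counts)
      if (counts[k] > bestCount) { best = k; bestCount = counts[k]; }
    if (best === null)
      break;
    var pair = best.split('\u0000');
    var fresh = '#' + rules.length;  // a new composite token
    rules.push(pair);
    // rewrite the token stream, replacing each occurrence of the pair
    var out = [];
    for (var j = 0; j < tokens.length; j++) {
      if (j + 1 < tokens.length && tokens[j] === pair[0] && tokens[j + 1] === pair[1]) {
        out.push(fresh);
        j++;
      } else {
        out.push(tokens[j]);
      }
    }
    tokens = out;
  }
  return { tokens: tokens, rules: rules };
}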

Corps is mostly useful for creating special-purpose fixed codecs based on a dictionary generated by its offline analysis. Although pre-generated codecs often do not compress as well as adaptive codecs like the one used by gzip, fixed codecs are typically faster as they can use the fixed code table to generate optimized encoders and decoders. Corps currently generates encoders in C and Scheme, and decoders in C, Scheme, and JavaScript.

Special-purpose codecs can also provide some interesting properties, such as the ability to decompress a substring of the original text.

get source

Corps is written for Guile version 2.0. See gnu.org/s/guile for more information on Guile and how to install it on your system.

To build Corps from git, do:

git clone git://gitorious.org/corps/corps.git
cd corps
./autogen.sh && ./configure && make && make check

You can install using make install, but it's often more convenient to just run corps uninstalled, using the env script.

stinky cheese

Corps includes a simple command-line tool called corps. For full documentation, run corps help. You can run it uninstalled by prefixing the command with the ./env script from the build tree.

As an example, say you want to build a database of wikipedia pages on cheese. Let's fetch a page:

$ curl -s 'en.wikipedia.org/wiki/Maroilles_(cheese)' > cheese
$ ls -l cheese
-rw-r--r-- 1 wingo wingo 43123 Dec 12 15:12 cheese

Now we analyze it to determine common substrings:

./env corps extract-all cheese > cheese-tokens

This generates a list of (string,frequency) pairs. An extract-all token set is usually quite large. We can pare it down to something manageable, the 500 most common substrings:

./env corps extract-subset cheese-tokens 500 cheese > 500-tokens

With this dictionary, we can huffman-code the page:

$ ./env corps encode -t 500-tokens -m huffman cheese cheese.huff
$ ls -l cheese.huff
-rw-r--r-- 1 wingo wingo 18060 Dec 12 16:09 cheese.huff

Well that's pretty cool -- it's less than half the size of the source text. We can also pare down this set of tokens to an appropriate number of tokens for a bytecode, and try doing a byte-encode of the file:

$ ./env corps extract-subset 500-tokens 254 cheese > 254-tokens
$ ./env corps encode -t 254-tokens cheese > cheese.bc
$ ls -l cheese.bc
-rw-r--r-- 1 wingo wingo 22260 Dec 12 16:19 cheese.bc

It's larger than the huffman-code, not only because of the smaller dictionary, but also because a bytecode is less dense. In practice, though, a bytecode is good enough while being very fast, so we'll continue with the bytecode.

Now let's generate a C encoder and decoder for this token set.

$ ./env corps generate-c-byte-encoder 254-tokens > encoder.inc.c
$ ./env corps generate-c-byte-decoder 254-tokens > decoder.inc.c
$ cp ~/src/corps/corps/decoder.c .
$ cp ~/src/corps/corps/encoder.c .
$ gcc -o decoder -O3 decoder.c
$ gcc -o encoder -O3 encoder.c
$ ls -l encoder decoder
-rwxr-xr-x 1 wingo wingo 13192 Dec 12 16:23 decoder
-rwxr-xr-x 1 wingo wingo 31048 Dec 12 16:23 encoder

Nice! We could use the corps tool to decode cheese.bc, but to vary things up we can use our zippy C decoder:

$ ./decoder < cheese.bc > cheese.out
$ cmp cheese.out cheese && echo 'excellent!'
excellent!

The performance of the C encoder is pretty good:

$ for ((x=0;x<1000;x++)) do cat cheese >> megacheese; done
$ time gzip -c < megacheese > megacheese.gz
real	0m1.418s
user	0m1.396s
sys	0m0.016s
$ time ./encoder < megacheese > megacheese.bc
real	0m0.523s
user	0m0.480s
sys	0m0.044s
$ ls -l megacheese*
-rw-r--r-- 1 wingo wingo 43123000 Dec 12 17:03 megacheese
-rw-r--r-- 1 wingo wingo 22370002 Dec 12 17:04 megacheese.bc
-rw-r--r-- 1 wingo wingo 11519311 Dec 12 17:04 megacheese.gz

Gzip gets better compression results for many reasons, but our quick-and-dirty bytecode compressor does well enough and is quite fast. The decoder is also quite good:

$ time ./decoder < megacheese.bc > /dev/null
real	0m0.179s
user	0m0.160s
sys	0m0.016s
$ time gunzip -c < megacheese.gz > /dev/null
real	0m0.294s
user	0m0.284s
sys	0m0.008s

Amusingly, for this text, gzipping the bytecoded file has quite an impact:

$ gzip -c < megacheese.bc > megacheese.bc.gz
$ ls -l megacheese*
-rw-r--r-- 1 wingo wingo 43123000 Dec 12 17:03 megacheese
-rw-r--r-- 1 wingo wingo 22370002 Dec 12 17:04 megacheese.bc
-rw-r--r-- 1 wingo wingo   175246 Dec 12 17:12 megacheese.bc.gz
-rw-r--r-- 1 wingo wingo 11519311 Dec 12 17:04 megacheese.gz

It so happens that byte-compressing the original text allows it to fit within the default gzip "window size" of 32 KB, letting gzip detect the thousandfold duplication of the source text. As a codec that works on bytes, gzip tends to work quite well on bytecoded files, and poorly on bit-coding schemes like huffman codes. A gzipped bytecoded file is usually smaller than a gzipped raw file and smaller than a gzipped huffman-coded file.

hack

I'll close with a link to the 254 most common substrings in one corpus of JavaScript code that I analyzed: here. I have a set of patches to integrate a codec optimized for JavaScript source into V8, to try to reduce the memory footprint of JS source code. More on that quixotic endeavor in some future post. Until then, happy scheming!

geneva

5 December 2012 11:16 PM (geneva | switzerland | eu | spain | france | capitalism)

silence on the wire

It's been pretty quiet in this electro-rag, and for a simple but profound reason. I don't know about all of Maslow's hierarchy of needs, but for me there is definitely a hierarchy between housing and anything else.

Thing is, my partner and I drove up to Geneva almost three months ago, but we only moved in somewhere permanent last week. Everything was up in the air! Sublets and couches and moving vans and storage units. All the things in all the places.

In a way it should have been liberating, but I reckon I'm just a homebody: someone who likes to have a spiritual and physical base to come back to. It's the classic introverted/extroverted "where do you get your energy" thing -- in private, or in society? Of course it's both, but for me the private side is important and necessary.

society, nations, states

Incidentally, October made ten years since I left the US. I lived in Namibia for a couple of years, moved to Spain in 2005, and as you know, left for Switzerland in September. In that time, a third of my life, I have not been able to vote for my local government. Representative democracy isn't very democratic, of course, but not even having that nominal connection to collective decision-making is a bit unmooring.

In Spain I have what is called "EU long-term residency". In 2003, the EU standardized visas for "third-country nationals": folks like me that do not have citizenship in an EU country. After 5 years, you automatically get country-specific "long-term residency", and can optionally apply for "EU long-term residency" (as I just did), which has the additional benefit of being transferable to other EU countries.

The idea of EU long-term residency was to give third-country nationals the same rights as EU citizens, but several restrictions placed on the agreement by individual nations make it not very attractive.
