Inside 233

Existential Pontification and Generalized Abstract Digressions
March 24, 2012

Reduce Ubuntu latency by disabling mDNS

This is a very quick and easy fix that has made latency on Ubuntu servers I maintain drop from three or four seconds to essentially instantaneous. If you've noticed that you have high latency on ssh or scp (or even other software like remctl), and you have control over your server, try this on the server: aptitude remove libnss-mdns. It turns out that multicast DNS on Ubuntu has a longstanding bug where the timeouts weren't tuned correctly, which results in extremely bad performance on reverse DNS lookups when an IP has no name.

Removing multicast DNS will break some applications which rely on multicast DNS; however, if you're running Linux you probably won't notice. There are a number of other solutions listed on the bug I linked above which you're also welcome to try.
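
For example, one workaround in the same spirit (this is the general shape of it; double-check the hosts line your system actually ships with) is to keep libnss-mdns installed but edit /etc/nsswitch.conf so that the trailing mdns4 entry, the one that ends up timing out on reverse lookups of ordinary addresses, is dropped:

    # /etc/nsswitch.conf (relevant line only)
    # before: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
    hosts: files mdns4_minimal [NOTFOUND=return] dns

With that change, .local names still resolve over multicast DNS, but everything else stops at the ordinary DNS resolver instead of falling through to mdns4.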

leave a comment
March 20, 2012

Visit month: Princeton

If you haven't noticed, these are coming in the order of the visit days.

Whereas the weather at UPenn was nice and sunny, the NJ Transit dinghy rolled into a very misty Princeton. Fortunately, I had properly registered for this visit day, so the hotel was in order. I was a bit early, so I met up with an old friend who had recently penned this short story and we talked about various bits and bobs ("I hear you're up for a Hugo!") before I meandered over to the computer science building.

The Princeton campus also holds some distinctive memories for me, a patchwork of experiences of my high school self, including an unpleasant month temping for the Princeton Wind Ensemble (I've since been informed that the ensemble is much healthier now), visits to the area for various reasons, a fascinating interview in which I was told that "I should not go to Princeton", and a very pleasant, if slightly out-of-place, campus preview weekend. Of course, it is the case that graduate student life is completely different from undergraduate life, so I was prepared to shelve these experiences for my second visit.

One of the most interesting things I realized about Princeton computer science, during Andrew Appel's speech at the admits' reception dinner, was how many famous computer scientists had graced Princeton with their presence at some point or another; these included Church, Turing, Gödel, von Neumann, and others... One of the insights of his talk was this: “maybe your PhD thesis doesn’t need to be your most important work, but maybe you can have a four page aside which sets forth one of the most important techniques in computer science.” (He was referring to Alan Turing’s thesis, in which he introduced the concept of relative computability.)

Some "facts": at Princeton, your advisors work to get you out, so you don't have to worry about dallying too long on the PhD. Being a part-time student is not ok, and Princeton does have qualifying exams, but you'll probably pass. (The overall wisdom that I came away with here is that computer science quals are very different from quals in other PhD disciplines, where the qual is very much a weeder. In CS, they want you to pass.) The generals are in some sense a checkpoint, where you can figure out whether or not you're happy with the research you're doing with your advisor. You need to take six classes, four in distribution, but you can pass out of the final. You spend your second year teaching, both semesters as a "preceptor". You pick your advisor at the end of your first year, and five people serve on your thesis committee (it tends not to be too adversarial). You're relatively shielded from grants.

Andrew Appel is a very smart man. He’s been recently working with my current advisor Adam Chlipala, and their research interests have some overlap. He is working on a verified software toolchain, but he's also happy to tackle smaller problems. One of the small theoretical problems he showed us during our three-on-one meeting was the problem of calculating fixpoints of functions which are not monotonic, but instead oscillate towards some convergence (this occurs when the recursion shows up in contravariant position; as strange as this may seem, it shows up a bit in object-oriented languages!) One of his grad students told me he likes "semantic definitions", and is not afraid to take on large challenges or to tackle gritty details: the Cminor language, integer alignment, scaling up theoretical techniques to large programs. And despite being department chair, it's easy to schedule time with him, and he seems to have this uncanny ability to generate grants.

David Walker is very excited about the Frenetic programming language, which is a language for dealing with network programming. I already had a bit of context about this, because Jennifer Rexford had given an invited talk at POPL about this topic. One of the interesting things I noticed when hearing about OpenFlow for the second time was how similar its handling of unknown events was to hardware: if the efficient lookup table doesn't know what to do, you interrupt and go to a slow, general purpose computer, which figures out what to do, installs the rule in the lookup table, and sends the packet on its way. The three of us might have been in a technical mood, because we ended up asking him to give us the gory details of per-packet consistency when updating network rules. I hear in another life David Walker also did type systems, and has done a bit of work in semantics. One contrast a grad student noted between Appel and Walker was that Appel would assume you know what he is talking about (so it's your onus to interrupt him if you don't: a useful skill!), whereas Walker will always go and explain everything and always stop you if he doesn't know what you're talking about. This makes him really good at communicating. (Oh, did I mention Frenetic is written in Haskell?)

Randomly, I discovered one of the current grad students was the creator of smogon.com. He does PL research now: but it's kind of nifty to see how interests change over the years...

leave a comment
March 18, 2012

Is it better to teach formalism or intuition?

Note: this is not a discussion of Hilbert's formalism versus Brouwer's intuitionism; I'm using formalism and intuition in a more loose sense, where formalism represents symbols and formal systems which we use to rigorously define arguments (though not logic: a sliding scale of rigor is allowed here), while intuition represents a hand-wavy argument, the mental model mathematicians actually carry in their head.

Formalism and intuition should be taught together.


But as utterly uncontroversial as this statement may be, I think it is worth elaborating why this is the case: the reasons we find say important things about how to approach learning difficult new technical concepts.

Taken in isolation, formalism and intuition are remarkably poor teachers. We have all had the experience of reading a mathematical textbook written in the classic style, all of the definitions crammed up at the beginning of the chapter in what seems like a formidable, almost impenetrable barrier. One thinks, "If only the author had bothered to throw us a few sentences explaining what it is we're actually trying to do here!" But at the same time, I think we have all also had the experience of humming along to a nice, informal description, only to find that we really haven't learned much at all: when we sit down and try to actually solve some problems, we find these descriptions frustratingly vague. It's easy to conclude that there really is no royal road to intuition, that it must be hard won by climbing the mountains of formalism.

So why is the situation any different when we put them together?

The analogy I like is this: formalism is the object under study, whereas intuition is the result of such a study. Imagine an archaeologist who is attempting to explain to his colleagues a new conclusion he has drawn from a recently excavated artifact. Formalism without intuition is the equivalent of handing over the actual physical artifact without any notes. You can in principle reconstruct the conclusions, but it might take a while. Intuition without formalism, however, is the equivalent of describing your results over the phone. It conveys some information, but not with the resolution or fidelity of actually having the artifact in your hands.

This interpretation explains a bit about how I learn mathematics. For example, most mathematical formalisms aren't artifacts you can "hold in your hand"—at least, not without a little practice. Instead, they are giant mazes, of which you can see only small fragments at a time. (The parable of the blind men and the elephant comes to mind.) Being able to hold more of the formalism in your head, at once, increases your visibility. Being able to do quick inference increases your visibility. So memorization and rote exercise do have an important part to play in learning mathematics; they increase our facility in handling the bare formalism.

However, when it comes to humans, intuition is our most powerful weapon. With it, we can guess that things are true long before we have any good reason for thinking they are. But it is inextricably linked to the formalism which it describes, and a working intuition is nowhere near as easily transferable as a description of a formal system. Intuition is not something you can learn directly; it is something that assists you as you explore the formalism: "the map of intuition." You still have to explore the formalism, but you shouldn't feel bad about frequently consulting your map.

I've started using this idea for when I take notes in proof-oriented classes, e.g. this set of notes on the probabilistic method (PDF). The text in black is the formal proof; but interspersed throughout there are blue and red comments trying to explicate "what it is we're doing here." For me, these have the largest added value of anything the lecturer gives, since I find these remarks are usually nowhere to be seen in any written notes or even scribe notes. I write them down even if I have no idea what the intuition is supposed to mean (because I'll usually figure it out later.) I've also decided that my ideal note-taking setup is already having all of the formalism written down, so I can pay closer attention to any off-hand remarks the lecturer may be making. (It also means, if I'm bored, I can read ahead in the lecture, rather than disengage.)

We should encourage this style of presentation in static media, and I think a crucial step towards making this possible is easier annotation. A chalkboard is a two-dimensional medium, but when we LaTeX, our presentation goes one-dimensional, and we rely on the written word to do most of the heavy lifting. This is fine, but it is a bit of work, and it means that people don't do it as frequently as we might like. I'm not sure what the best such system would look like (I am personally a fan of hand-written notes, but this is certainly not for everyone!), but I'm optimistic about the annotated textbooks that are coming out from the recent spate of teaching startups.

For further reading, I encourage you to look at William Thurston's essay, On proof and progress in mathematics, which has had a great influence on my thoughts in this matter.

4 comments
March 16, 2012

Visit month: University of Pennsylvania

I'm hoping that this will be the beginning of a series of posts describing all of the visit days/open houses that I attended over the past month. Most of the information is being sucked out of the notes I took during the visits, so it's very stream of consciousness style. It's kind of personal, and I won't be offended if you decide not to read. You've been warned!

I arrive at the Inn at Penn shortly before midnight, and check in. Well, I attempt to; they appear to have no reservation on hand. It turns out that I hadn't actually registered for the visit weekend. Oops.

Sam tells me later that first impressions stick, and that her first impression of me was a thoroughly discombobulated PhD admit. Ruffle up my hair! Ask if they're a fellow CS PhD admit, whether or not they have a room, oh, and by the way, what is your name? (I didn't realize that was the real sticking point until she told me about it at the end of the CMU visit.) But Brent is on IRC and I ping him and he knows a guy named Antal who is also visiting UPenn so I give him a call and he lends me his room for a night and things are good. (Mike, the graduate studies coordinator, is kind enough to fit me into the schedule the next day. Thank you Mike!)

I've been to UPenn before. I visited with my Dad; Roy Vagelos was a former CEO of Merck and a large influence on UPenn, and at one point I had private aspirations of doing my undergraduate degree in something non-CS before hopping to computer science graduate school. But when the MIT and Princeton admissions came in, UPenn got soundly bumped off the table, and this experience was wrapped up and stowed away. But I recognized small pieces of the campus from that visit, stitched together with the more recent experience of Hac Phi and POPL. Our tour of Philadelphia led us to the Bed and Breakfast I stayed at during POPL, and I was Surprised™.

Benjamin Pierce is flying off to Sweden for WG 2.8 (a magical place of fairies, skiing and functional programming), so he's only around for the morning presentations, but during breakfast he talks a bit about differential privacy to a few of the candidates, and then heads up the presentations. I hear from his students that he’s gotten quite involved with the CRASH project, and is looking a lot more at the hardware side of things. One of the remarkable things that I heard in the morning talks was how programming languages research had sort of leaked into all of the CS research that was presented (or maybe that was just selection bias?) Well, not everyone: the machine learning researchers just “want to do a lot of math” (but maybe we still get a little worried about privacy.)

UPenn is well known for its PL group, where “we write a little bit of Greek every day.” It’s really quite impressive walking into the PL lunch, with a battalion of grad students lined up against the back wall cracking jokes about Coq and Haskell while the presentation is about F*.

Here are some "facts". At UPenn you spend two semesters TAing, there’s an office lottery but people in the same group tend to cluster together, you don’t have to work on grants, the professors are super understanding of graduate student's life situations, the living costs are around $500/mo, and that it is a very laid back place. Your advisor is very much in charge of your degree, except once a year when there's a department wide review to check no one falls through the cracks. The drop-out rate is low. UPenn has a nice gym which is $400/yr. Biking Philadelphia is great, but you shouldn't live in grad housing. Because there is a power trio of Pierce, Zdancewic and Weirich, when it comes time for you to form your thesis committee, the other two profs who aren't your advisor serve on your committee. The PL group is in a slightly unusual situation where there aren't any 2nd or 3rd year students (this was due to a double-sabbatical one year, and bad luck of the draw another.) But there are infinitely many 1st year students. You have to take three laid back exams. Philadelphia is a nice and large city (5th biggest!) UPenn and CMU are perhaps the biggest pure programming languages departments out of the places I visited.

Stephanie Weirich is super-interested in dependently typed languages. She wants to find out how to get functional programmers to use dependent types, and she's attacking it from two sides: you can take a language like Haskell and keep adding features that move it towards dependent types, or you can build a language from scratch, like TRELLYS, which attempts to integrate dependent types in a usable way. She’s not shy about giving 1st years really hard projects, but she's happy to do random stuff with students. She reflects on Dimitrios Vytiniotis' trajectory: he discovered his true calling with polymorphism and Haskell, and is now off designing crazy type systems with Simon Peyton Jones at Microsoft Research. “I can’t make you go crazy about a topic,” (in the good sense) but she’s interested in helping you find out what lights you on fire. I ask her about the technical skills involved in developing metatheory for type systems (after all, it’s very rare for an undergraduate to have any experience in this sort of thing): she tells me it’s all about seeing lots of examples, figuring out what’s going to get you in trouble. (In some sense, it’s not “deep”, although maybe any poor computer scientist who has tried to puzzle out the paper of a type theorist might disagree.)

Steve Zdancewic has a bit more spread of research interests, but one of the exciting things coming out is the Coq infrastructure for LLVM that they've been working on over the course of the year, and now it's time to use the infrastructure for cool things, such as doing big proofs and figuring out what's going on. There have been a lot of cases where doing just this has led to lots of interesting insights (including the realization that many proofs were instances of a general technique): mechanized verification is a good thing. He also has some side projects, including compilers for programming languages that will execute on quantum computers (all the computations have to be reversible! There is in fact a whole conference on this; it turns out you can do things like reduce heat emissions.) There's also a bit of interest in program synthesis. Steve echoes Stephanie when he says, “I am enthusiastic about anything that my students are interested in enough to convince me to be enthusiastic about.” They don’t do the chemistry model of getting a PhD, where you “use this pipette a hundred million times.”

In the words of one grad student, these are professors that are “friendly and much smarter than you!” They’re happy to play around and have fun with programming languages concepts (as opposed to the very hard core type theory research happening at CMU; but more on that later.)

We'll close with a picture, seen in the neighborhoods of Philadelphia.

7 comments
March 5, 2012

You could have invented fractional cascading

Suppose that you have k sorted arrays, each of size n. You would like to search for a single element in each of the k arrays (or its predecessor, if it doesn't exist).


Obviously you can binary search each array individually, resulting in an $O(k \log n)$ runtime. But we might think we can do better than that: after all, we're doing the same search k times, and maybe we can "reuse" the results of the first search for later searches.

Here's another obvious thing we can do: for every element in the first array, let's give it a pointer to the element with the same value in the second array (or if the value doesn't exist, the predecessor.) Then once we've found the item in the first array, we can just follow these pointers down in order to figure out where the item is in all the other arrays.


But there's a problem: sometimes, these pointers won't help us at all. In particular, if a later list falls completely "in between" two elements of the first list, we have to redo the entire search, since the pointer gave us no information that we didn't already know.


So what do we do? Consider the case where k = 2; everything would be better if only we could guarantee that the first list contained the right elements to give you useful information about the second array. We could just merge the arrays, but if we did this in the general case we'd end up with a totally merged array of size $O(nk)$, which is not so good if k is large.

But we don't need all of the elements of the second array; every other item will do!


Let's repeatedly do this. Take the last array, take every other element, and merge them into the second-to-last array. Now, with the new second-to-last array, do this to the next array. Rinse and repeat. How big does the first array end up being? You can solve the recurrence $S_k = n$, $S_i = n + S_{i+1}/2$, which gives the geometric series $n(1 + 1/2 + 1/4 + \cdots) \le 2n$. Amazingly, the new first list is at most twice as large, which is only one extra step in the binary search!


What we have just implemented is fractional cascading! A fraction of any array cascades up the rest of the arrays.

There is one more detail which has to be attended to. When I follow a pointer down, I might end up on an element which is not actually a member of the current array (it was one that was cascaded up). I need to be able to efficiently find the next element which is a member of the current array (and there might be many cascaded elements jammed between it and the next member element, so doing a left-scan could take a long time); so for every cascaded element I store a pointer to the predecessor member element.
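
To make this concrete, here is a rough sketch in Python (my own illustration, not code from any particular source; it cascades exactly half of each array and stores both kinds of pointers described above):

    import bisect

    def build(arrays):
        """Augment k sorted arrays with cascaded elements and pointers.

        Each entry of M[i] is a tuple (value, member_idx, down):
          member_idx -- index in arrays[i] of the nearest "real" member at or
                        before this position (-1 if there is none)
          down       -- index of this value's predecessor in M[i+1] (None for
                        the last list, which nothing cascades into)
        """
        k = len(arrays)
        M = [None] * k
        M[k - 1] = [(v, j, None) for j, v in enumerate(arrays[k - 1])]
        for i in range(k - 2, -1, -1):
            below = M[i + 1]
            # every other element of the list below cascades up into this one
            sample = [below[j][0] for j in range(0, len(below), 2)]
            merged, a, s, t, last_member = [], 0, 0, 0, -1
            while a < len(arrays[i]) or s < len(sample):
                take_member = s >= len(sample) or \
                    (a < len(arrays[i]) and arrays[i][a] <= sample[s])
                if take_member:
                    v, a = arrays[i][a], a + 1
                    last_member = a - 1
                else:
                    v, s = sample[s], s + 1
                while t < len(below) and below[t][0] <= v:
                    t += 1
                merged.append((v, last_member, t - 1))  # t - 1: predecessor of v below
            M[i] = merged
        return M

    def query(arrays, M, x):
        """For each array, the index of the largest element <= x (None if absent)."""
        answers = []
        # the only full binary search (in real code, precompute this key list once)
        p = bisect.bisect_right([e[0] for e in M[0]], x) - 1
        for i in range(len(arrays)):
            if p < 0:                      # x is below everything in this list, and
                answers.append(None)       # hence below everything in later lists too
                continue
            _, member_idx, down = M[i][p]
            answers.append(member_idx if member_idx >= 0 else None)
            if i + 1 < len(arrays):
                below, p = M[i + 1], down  # land near the answer in the next list...
                while p + 1 < len(below) and below[p + 1][0] <= x:
                    p += 1                 # ...then nudge forward a step or two
        return answers

    arrays = [[1, 4, 9, 16], [2, 3, 5, 7, 11], [0, 8, 10, 12]]
    M = build(arrays)
    print(query(arrays, M, 9))   # [2, 3, 1]: the predecessors are 9, 7 and 8

After the single binary search at the top, each later list costs only a pointer-follow plus a constant amount of local scanning (assuming distinct values), so a query is $O(\log n + k)$ rather than $O(k \log n)$.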


Fractional cascading is a very useful transformation, used in a variety of contexts including layered range trees and 3D orthogonal range searching. In fact, it can be generalized in several ways. The first is that we can cascade some fixed fraction α of elements, rather than the 1/2 we did here. Additionally, we don't have to limit ourselves to cascading up a list of arrays; we can cascade up an arbitrary graph, merging many lists together as long as we pick α to be less than 1/d, where d is the in-degree of the node.


Exercise. Previously, we described range trees. How can fractional cascading be used to reduce the query complexity by a factor of $O(\log n)$?

Exercise. There is actually another way we can setup the pointers in a fractionally cascaded data structure. Rather than have downward pointers for every element, we only maintain pointers between elements which are identical (that is to say, they were cascaded up.) This turns out to be more convenient when you are constructing the data structure. However, you now need to maintain another set of pointers. What are they? (Hint: Consider the case where a search lands on a non-cascaded, member element.)

19 comments
February 26, 2012

Visualizing range trees

Range trees are a data structure which lets you efficiently query a set of points and figure out what points are in some bounding box. They do so by maintaining nested trees: the first level is sorted on the x-coordinate, the second level on the y-coordinate, and so forth. Unfortunately, due to their fractal nature, range trees are a bit hard to visualize. (In the higher dimensional case, this is definitely a “Yo dawg, I heard you liked trees, so I put a tree in your tree in your tree in your...”) But we’re going to attempt to visualize them anyway, by taking advantage of the fact that a sorted list is basically the same thing as a balanced binary search tree. (We’ll also limit ourselves to two-dimensional case for sanity’s sake.) I’ll also describe a nice algorithm for building range trees.

Suppose that we have a set of points $(x_1, y_1), \ldots, (x_n, y_n)$. How do we build a range tree? The first thing we do is build a balanced binary search tree for the x-coordinate (denoted in blue). There are a number of ways we can do this, including sorting the list with your favorite sorting algorithm and then building the BBST from that; however, we can build the tree directly by using quicksort with median-finding, pictured below left.


Once we’ve sorted on x-coordinate, we now need to re-sort every x-subtree on the y-coordinates (denoted in red), the results of which will be stored in another tree we’ll store inside the x-subtree. Now, we could sort each list from scratch, but since for any node we're computing the y-sorted trees of its children, we can just merge them together à la mergesort, pictured above right. (This is where the $-1$ in the $O(n \log^{d-1} n)$ construction time comes from!)

So, when we create a range-tree, we first quicksort on the x-coordinate, and then mergesort on the y-coordinate (saving the intermediate results). This is pictured below:


We can interpret this diagram as a range tree as follows: the top-level tree is the x-coordinate BBST, and when we get to the leaves we see that all of the points are sorted by x-coordinate. However, the points that are stored inside the intermediate nodes represent the y-coordinate BBSTs; each list is sorted on the y-coordinate, and implicitly represents another BBST. I’ve also thrown in a rendering of the points being held by this range tree at the bottom.

Let’s use this as our working example. If we want to find points between the x-coordinates 1 and 4 inclusive, we search for the leaf containing 1, the leaf containing 4, and take all of the subtrees between them.


If we want to find points between the y-coordinates 2 and 4 inclusive, with no filtering on x, we can simply look at the BBST stored in the root node and do the range query there.


Things are a little more interesting when we actually want to do a bounding box (e.g. (1,2) x (4,4) inclusive): first, we locate all of the subtrees in the x-BBST; then, we do range queries in each of the y-BBSTs.


Here is another example, (4,4) x (7,7) inclusive. We get lucky this time and only need to check one y-BBST, because the x range directly corresponds to one subtree. In general, however, we will only need to check $O(\log n)$ subtrees.


It should be easy to see that query time is $O(\log^2 n)$ (since we may need to perform a 1-D range query on $O(\log n)$ trees, and each query takes $O(\log n)$ time). Perhaps less obviously, this scheme only takes up $O(n \log n)$ space. Furthermore, we can actually get the query time down to $O(\log n)$, using a trick called fractional cascading. But that’s for another post!
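
To make the whole construction concrete, here is a small sketch in Python (my own illustration; the sample points are invented, Python's sorted() stands in for the quicksort pass, and the recursive merge plays the role of the mergesort pass described above):

    import bisect

    def merge_by_y(a, b):
        """Merge two y-sorted point lists (the mergesort step)."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i][1] <= b[j][1]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    class RangeTree2D:
        """An x-coordinate BBST whose every node carries a y-sorted list of its points."""

        def __init__(self, pts, presorted=False):
            if not presorted:
                pts = sorted(pts)              # sort on x (the quicksort pass)
            self.xmin, self.xmax = pts[0][0], pts[-1][0]
            if len(pts) == 1:
                self.left = self.right = None
                self.ys = pts
            else:
                mid = len(pts) // 2
                self.left = RangeTree2D(pts[:mid], True)
                self.right = RangeTree2D(pts[mid:], True)
                # each node's y-list is the merge of its children's y-lists
                self.ys = merge_by_y(self.left.ys, self.right.ys)
            self.ykeys = [p[1] for p in self.ys]   # keys for 1-D binary searches

        def query(self, x1, x2, y1, y2):
            """All points with x1 <= x <= x2 and y1 <= y <= y2 (inclusive)."""
            if x2 < self.xmin or self.xmax < x1:
                return []                      # subtree entirely outside the x-range
            if x1 <= self.xmin and self.xmax <= x2:
                # whole subtree passes the x-filter: 1-D range query on its y-list
                lo = bisect.bisect_left(self.ykeys, y1)
                hi = bisect.bisect_right(self.ykeys, y2)
                return self.ys[lo:hi]
            # partial overlap: recurse; only O(log n) nodes end up doing y-queries
            return (self.left.query(x1, x2, y1, y2) +
                    self.right.query(x1, x2, y1, y2))

    pts = [(1, 3), (2, 1), (3, 4), (4, 2), (5, 5), (6, 0), (7, 7)]
    tree = RangeTree2D(pts)
    print(tree.query(1, 4, 2, 4))   # [(1, 3), (3, 4), (4, 2)], one y-sorted chunk per subtree

Splitting the x-sorted points at the median gives the balanced x-tree, and since each point appears in the y-list of $O(\log n)$ ancestors, the space is the $O(n \log n)$ quoted above.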

11 comments
February 23, 2012

Anatomy of “You could have invented…”

The You could have invented... article follows a particular scheme:

  1. Introduce an easy to understand problem,
  2. Attempt to solve the problem, but get stuck doing it the "obvious" way,
  3. Introduce an easy to understand insight,
  4. Methodically work out the rest of the details, arriving at the final result.

Why does framing the problem this way help?

  • While the details involved in step 4 result in a structure which is not necessarily obvious (thus giving the impression that the concept is hard to understand), the insight is very easy to understand and the rest is just "monkey-work". The method of deriving the solution is more compressible than the solution itself, so it is easier to learn.
  • Picking a very specific, easy-to-understand problem helps ground us in a concrete example, whereas the resulting structure might be too general to get a good intuition off of.

It's very important that the problem is easy to understand, and the process of "working out the details" is simple. Otherwise, the presentation feels contrived. This method is also inappropriate when the audience is in fact smart enough to just look at the end-result and understand on an intuitive level what is going on. Usually, this is because they have already seen the examples. But for the rest of us, it is a remarkably effective method of pedagogy.

I'll take this opportunity to dissect two particular examples. The first is Dan Piponi’s canonical article, You Could Have Invented Monads. Here are the four steps:

  1. Suppose we want to debug pure functions, with type signature f :: Float -> Float, so that they can also return a string message describing what happened, f' :: Float -> (Float, String).
  2. If we want to compose these functions, the obvious solution of threading the state manually is really annoying.
  3. We can abstract this pattern out as a higher-order function.
  4. We can do the same recipe on a number of other examples, and then show that it generalizes. The generalization is a Monad.
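
Piponi works in Haskell, but the shape of steps 2 and 3 fits in a few lines of almost any language; here is a rough sketch of the same pattern in Python (the names are mine, not his):

    def unit(x):
        """Wrap a plain value as a 'debuggable' value with an empty log."""
        return (x, "")

    def bind(dx, g):
        """Feed a debuggable value into a debuggable function, concatenating the logs."""
        x, log1 = dx
        y, log2 = g(x)
        return (y, log1 + log2)

    # two "debuggable" functions, f' and g' in Piponi's notation
    def f_prime(x): return (x + 1.0, "called f. ")
    def g_prime(x): return (x * 2.0, "called g. ")

    print(bind(bind(unit(3.0), f_prime), g_prime))
    # (8.0, 'called f. called g. ')

Here bind is the higher-order function of step 3: it hides the log-threading that made manual composition so annoying in step 2, and varying what the extra component is and how two of them combine is exactly the generalization of step 4.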

The second is an article of mine, You could have invented Zippers:

  1. I want to do two things: have access to both the parent and children of nodes in a tree, and do a persistent update of the tree (i.e. without mutation.)
  2. If we do the obvious thing, we have to update all the nodes in the tree.
  3. We only need to flip one pointer, so that it points to the parent.
  4. We can create a new data structure which holds this information (which, quite frankly, is a little ugly), and then show this procedure generalizes. The generalization is a Zipper.
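
For a taste of where step 4 ends up, here is a rough binary-tree sketch in Python rather than Haskell (the representation and names are my own illustration, not the article's code):

    # A tree is either None or a tuple (value, left, right).
    # The "context" is the path back to the root: each frame records which way
    # we descended, plus the pieces of the parent we left behind.

    def down_left(tree, context):
        value, left, right = tree
        return left, [("L", value, right)] + context

    def down_right(tree, context):
        value, left, right = tree
        return right, [("R", value, left)] + context

    def up(tree, context):
        (side, value, other), *rest = context
        if side == "L":
            return (value, tree, other), rest
        return (value, other, tree), rest

    def rebuild(tree, context):
        """Persistent update: zip back up, rebuilding only the spine to the root."""
        while context:
            tree, context = up(tree, context)
        return tree

    t = (1, (2, None, None), (3, None, None))
    focus, ctx = down_left(t, [])
    print(rebuild((20, None, None), ctx))   # (1, (20, None, None), (3, None, None))

The focus plus its context is the zipper; note that the untouched right subtree is shared with the original tree rather than copied.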

So the next time you try to explain something that seems complicated on the outside, but simple on the inside, give this method a spin! Next time: You could have invented fractional cascading.

Postscript. This is a common way well-written academic papers are structured, though very few of them are titled as such. One noticeable difference, however, is that often the “detail work” is not obvious, or requires some novel technical methods. Sometimes, a researcher comes across a really important technical method, and it diffuses throughout the community, to the point where it is obvious to anyone working in the field. In some respects, this is one thing that characterizes a truly successful paper.

4 comments
February 20, 2012

Transcript of “Inventing on Principle”

I've posted a partial transcript to Github of Bret Victor's "Inventing on Principle". About a quarter of the talk has been transcribed; I don't know when I will get around to doing the rest, so if someone wants to step up that would be great! (The reason I decided to do the writeup is because I am working on a technical analysis of the presentation, hopefully to appear on this blog soon.)

Below is a copy of the transcript which I will endeavor to keep up to date with the Github copy. The original content was licensed under CC-BY.


[[0:07]] So, unlike the previous session, I don't have any prizes to give up. I'm just going to tell you how to live your life.

[[0:14]] This talk is actually about a way of living your life that most people don't talk about. As you approach your career, you'll hear a lot about following your passion, or doing something you love. I'm going to talk about something kind of different. I'm going to talk about following a principle—finding a guiding principle for your work, something that you believe is important, necessary, and right, and using it to guide what you do.

[[0:46]] There are three parts to this talk. I'm first going to talk about a principle that guides a lot of my work, and try to give you a taste of what comes out of that. And, I'm going to talk about some other people who have lived that way; what their principles are, what they believe in. But these are just examples, to help you think about what you believe in, and how you want to live your life.

[[1:10]] So to begin with me: Ideas are very important. I think that bringing ideas into the world is one of the most important things people can do. And I think that great ideas, in the form of great art, stories, inventions, scientific theories, these things take on lives of their own, which give meaning to our lives as people. So, I think a lot about how people create ideas and how ideas grow. And in particular, what sorts of tools create an environment where ideas grow. Now, I've spent a lot of time over the years making creative tools, using creative tools, thinking about them a lot, and here's something I've come upon: Creators need an immediate connection with what they're creating. So that's my principle. ___ What I mean by that is when you're making something, when you make a change, or make a decision, you need to see the effect of that immediately. There can't be a delay, there can't be anything hidden. Readers have to ... do it. Now I'm going to show a series of cases where I noticed that that principle was violated and I'll show you what I did about that, and then I'm going to talk about the larger context in which this work lives.

[[2:32]] So, to begin with, let's think about coding. Here's how coding works: you type a bunch of code into a text editor, kind of imagining in your head what your code is going to do. And then you compile and run, and something comes out. In this case, this is just JavaScript, Canvas, and it draws this single little tree. But, if there's anything wrong with the scene, or if I go and make changes, if I have further ideas, if I go back to the code, I go edit the code, compile and run, see what it looks like. Anything wrong, go back to the code. Most of my time is spent working in the code, working in a text editor blindly, without an immediate connection to this thing, which is what I'm actually trying to make.

[[3:20]] So I feel that this goes against the principle that I have, that creators need immediate connection to what they're making, so I have tried to make a coding environment that would be more in line with this principle. So what I have here is a picture on the side, and the code on the side, and this part draws the sky and this part draws the mountain and this part draws the tree, and when I make any change to the code the picture changes immediately. So the code and the picture are always the same; there is no compile and run. I just change things in the code and I can see the change in the picture. And now that
