diego's weblog

there and back again

guess the world doesn’t want to be eaten…

Posted by d on January 11, 2012

My favorite quote from Cory Doctorow’s excellent Lockdown: The coming war on general-purpose computing.

This stuff matters because we’ve spent the last decade sending our best players out to fight what we thought was the final boss at the end of the game, but it turns out it’s just been an end-level guardian.

Must read.

PS: to avoid being unnecessarily cryptic, the subject comes from Marc’s Why software is eating the world from a few months back. Ok, ok, Marc was talking about something slightly different, but I can’t avoid making these connections. Anyway: light up, world. Embrace the Singularity!

  • Uncategorized

    e-readers are doomed! doomed I tellsya!

    1 Comment Posted by d on January 6, 2012

    Matt Alexander, at The Loop, resurfaces the by-now well-known idea that e-readers (specifically eInk devices like the Kindle) are doomed to quick extinction.

    The reasoning is based on the following, quoting from the article (many of the points are repeated two or three times, “technology” in particular):

    1. Technology. The technology is “[...] inherently stunted. E-ink is slow to refresh, it has ghosting issues thanks to page caching, color is not yet practical, and full motion animation is still a long way off.”
    2. Multiple devices, oh, and the technology is terrible. “[...] does anyone really need an extra device to carry around? Yes, e-readers are cheap, but the technology behind them is set to directly intersect with the development of pixel-dense, retina LCDs, and feature-rich tablets.”
    3. Tablets = e-readers. “The Fire is sharing the Kindle name not only because it helps in marketing a new product, but because the concepts are on an inevitable path toward merging.” — this is an idea that combines both 1) and 2) into a third meta-point.

    Lost in all this is what Matt also mentions a couple of times — the “spec” advantages: “They are cheap, lightweight, have long battery life, and operate well in direct sunlight, but they do little more than present traditional literature in an electronic package.” But I’ll come back to this.

    Allow me, however, to start with point 3) by allowing that, yes, e-readers aren’t tablets. And that’s a good thing. Affordances matter. If this weren’t true, wristwatches would have disappeared long ago (we all have clocks in our phones, right? Not to mention there are clocks everywhere in any industrialized nation). This is especially true in media, since different formats don’t just allow a different physical relationship between you and the object (as would be the case in the wristwatch example), but also allow different ways of expression for content creators. These two things lead us to create millions of variants of tools, each fitted to both the task and the user. So e-readers not being tablets is not a problem, it’s an advantage. Plus, “They are cheap, lightweight, have long battery life, and operate well in direct sunlight.”

    What of point 2) then? Proliferation of devices does seem to be a problem, but it’s primarily a problem that is pervasive among the nerditry (myself included). Not that nerds are the only ones that have lots of devices; what I’m saying is that we’re the only ones who think this is a gigantic problem. People, by and large, are comfortable with the idea of different devices (or “tools” in the widest sense of the word) for different things, as long as it’s clear what they do. And in terms of proliferation, well. How many people do you know that have both iPads and iPhones? Talk about duplication of function there. In any case, e-readers aren’t competing with tablets. They’re competing with books. It’s not “one e-reader vs. one tablet” but rather “one e-reader vs. dozens or hundreds of books.” Oh, and e-readers are “cheap, lightweight, have long battery life, and operate well in direct sunlight.”

    On to point 1) — the technology. This is completely irrelevant, as anyone who has used an e-reader for long periods of time can attest. The “ghosting” issue is not noticeable in real use (as opposed to watching the display while you’re writing a review). Animations are irrelevant, since this is about books. Also, before I forget: “They are cheap, lightweight, have long battery life, and operate well in direct sunlight.”

    So let’s go back to e-readers as devices that are “cheap, lightweight, have long battery life, and operate well in direct sunlight”.

    This is a point that is always mentioned in these articles, and always somehow dismissed as largely irrelevant in the long term, but it’s crucial. You can get a new Kindle today (with unobtrusive ads on the “screensaver”) for $79. The most expensive one will cost you twice that. I can easily see a point in a year or two when Amazon will simply give away Kindles to people with Prime, or maybe even to anyone with an Amazon account and enough purchasing history. Meanwhile, the iPad is at $500 and the Fire at $200. And the iPad is a far, far superior product as a true “tablet.” Tablets will continue to be heavier, more expensive, and have shorter battery life, and, yes, will continue to not be quite readable under many lighting conditions. By the way, the difference in price also leads to a difference in use. You will be far less stressed about someone stealing your $79 Kindle than your $500 iPad. You will take the Kindle to the beach, and on trips, far more often. And so on.

    Affordances matter. What something is replacing matters. A 3-5-10x difference in price matters. A 2-3x difference in weight matters. A 5-10x difference in battery life matters. Usability matters (particularly when it means “I can use this absolutely anywhere”).

    Tablets will keep growing, mostly at the expense of PCs and laptops, and e-readers will keep selling by the millions. And, yes, many people will use tablets as e-readers, but many more will get both. Perhaps the millions of eInk readers a year are just considered a niche in today’s global economies of scale, but, in my book (semantic meta-pun intended), that’s not “doomed”. Or anything even close to it.

  • technology

    mini-ode to “hello, world”

    Posted by d on January 2, 2012

    “FORTRAN begat ALGOL, which begat CPL, which begat BCPL, from whence B, and then C, arose…”

    I hadn’t looked at Scala in a while, so I head over to www.scala-lang.org and start looking through docs. First the tutorial, and sure enough the first thing they do is show a Hello, World example. It suddenly struck me that we may be at a point where a lot of people don’t know or even remember where this started.

    The first time I was exposed to “hello, world” was, appropriately enough, in the first widely published book that included it — Kernighan & Ritchie’s “The C Programming Language.” It was 1994, and until that time I had mostly dabbled with Pascal (Turbo Pascal FTW!) and minor languages like the programming language for dBase. The pre-ANSI C version of “hello, world” simply read:

    main()
    {
        printf("hello, world\n");
    }

    (the ANSI C edition I think added the obligatory #include <stdio.h> at the beginning).

    This much I remembered, but was that the first use? It seemed likely…

    According to the Wikipedia entry on the topic, it turns out that’s not the first time “hello, world” was used as the primordial program for a language — Kernighan had used it earlier in his A Tutorial Introduction to the Language B, as follows:

    main( ) {
    extrn a, b, c;
    putchar(a); putchar(b); putchar(c); putchar('!*n');
    }
    a 'hell';
    b 'o, w';
    c 'orld';

    It is fitting that C’s version was significantly simpler and clearer.

    It’s also interesting that to this day I still see “The C Programming Language” as the best book ever written to teach a language, or as an introduction to programming for that matter. The only one that comes to mind that could match it, in terms of how perfectly the book “wraps around” its topic while remaining readable, is perhaps “The TeXbook” (in which programming concepts play an important but not overriding role).

    Good times.

    ps: somewhat related — a story I heard many years ago about why C++ was called that –after starting as “C with Classes”– was that when the time came to name the successor to C, they couldn’t decide whether to call it “D” (since it would follow B and C) or “P” (since it would follow C in “BCPL”). Faced with this conundrum, the Solomonic answer was to call it “C++” by just adding the increment operator to “C”. Apocryphal, you say? Maybe so! But as fiction goes, it’s good fiction.

  • books, software

    space pilot 3000

    Posted by d on January 1, 2012

    Hey! Only a decade late.

    Happy New Year’s everyone!!!

  • personal

    hacker cooties

    Posted by d on December 24, 2011

    I am definitely enjoying this, the tail end of 2011, living largely in disconnected mode, barely looking at news, catching up on books and movies. Come January, there will be time for frenetic engagement with the electronic world.

    Something I wanted to share, though, was a fragment from Charlie Stross’s “how I got here in the end — my non-writing careers” which, if you are a fan of Stross –or, really, just a self-respecting nerd of any kind– and haven’t read it yet, you should (and if you’re not a fan of Stross, it’s probably just a matter of time… until you’ve read one of his books.)

    “But for the most part, Banking IT staff — at least, in the 1990s — didn’t have a clue about the internet. I’m pretty sure some of them hadn’t even heard of the internet. And in consequence, their reaction to a phone call from some guy in a dot-com wanting to jump through the certification and approval hoops to connect his EPOS server to their network varied from bovine placidity (“whatever, dude”), to frenetic confusion (“we’ll have to set up a committee to discuss the relevant criteria for approving connecting your — what did you call it, can you spell that for me? — S-E-R-V-E-R to our network”) to panic (“aieee! Hacker cooties are coming to get us!”)”

    Hilarious. Happy holidays everyone!

  • books, personal, technology

    steve’s last theorem

    Posted by d on December 11, 2011

    “I finally cracked it.” is what Steve Jobs told Walter Isaacson when talking about a way to revolutionize TV. Given the usual insanity, wild extrapolation and rumor-mongering that goes on regarding upcoming Apple products, it’s no surprise that this got a lot of people excited. Beyond the typical unfounded speculation, see for example Gruber’s excellent Apps are the new channels. I think everyone that works in tech and read that sentence is still wondering what, exactly, he meant.

    This reminded me of Fermat’s Last Theorem. Pierre de Fermat, in his copy of Arithmetica, wrote the following in one of the margins:

    it is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.
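In modern notation, Fermat’s claim reads:

```latex
% Fermat's Last Theorem, stated in modern notation:
a^n + b^n = c^n \quad \text{has no solutions in positive integers } a, b, c
\quad \text{for any integer } n > 2.
```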

    The proof took over 350 years to arrive, and the problem obsessed many world-class mathematicians after it was first posed. While it is certainly possible that Fermat had the proof in his head, he never published it or even wrote it down anywhere, and Occam’s Razor would dictate that he had the intuition, but not the proof, and that he left that note there, to put it simply, as a prank — or, if you find that somehow offensive, think of it as a challenge.

    Now, these two problems are as similar as oranges and toasters, but the effect is much the same. Whether he had “cracked it” or not, Jobs knew that throwing down the gauntlet like this was something that would be analyzed, parsed, processed, and argued over for a long time.

    He knew that, if Apple had a product in development in that area, he was setting the bar for its release high enough, in public, for all to see (and this is not a minor thing).

    That, whether Apple actually had a product in play or not, he was planting the seed of something larger.

    That someone, somewhere, would use it as a beacon to say “Steve cracked it, so can I,” and eventually, finally, really crack the problem.

    Just a thought.

  • software, technology

    I for one welcome our new algorithmic overlords

    2 Comments Posted by d on December 6, 2011


    In recent weeks, “markets” (more specifically, bond markets, but that’s less important for this particular argument) have been directly responsible for the change in leadership in two countries (Greece and Italy) and at least indirectly responsible for the change in another (Spain — in the form of all sorts of perceived pressure on the electorate).

    At the root of the crisis in Europe is something that is no doubt real — excessive public debt: in some cases, like Greece, plainly unsustainable, while in others (e.g. Italy) the debt was sustainable at reasonable interest rates.

    In this game, if everyone stays confident that you’ll be able to pay, and interest rates stay low, then you’ll likely be able to pay. Any loss of confidence can lead to a spiral and end in default. Again, there are real problems underlying this, and by now the fact that some countries have already taken on debt at very high interest rates means the perception has (partially) become reality: short of significant structural changes, they won’t be able to pay the money back. Once doubts reach the people on the street and they start withdrawing their money from bank accounts, well… game over.

    But I digress. My point was that a lot of this was driven by “the market(s)”. What exactly is(are) this elusive creature(s)? Is it beast or fowl? Is it a mythical monster? Snooki-like perhaps?

    No — it’s just a faceless amalgam of people, of course, but also something else: software.

    Algorithmic trading platforms, whether human-filtered or not. Automatic buy/sell triggers based on complex conditions that can execute in real-time, or near real-time. And so on.

    These systems are not in direct control of the entire money supply, but they are in control of a fairly sizable chunk of it. (By direct control I mean that a software system can, without human intervention, execute a transaction, such as buying x shares of a given stock). This, in a system in which few people understand each individual component, and no one has a view, much less an understanding, of what the system entails as a whole.

    This is not some sort of “conspiracy”; it’s local optima at the expense of global balance: for each individual, hedge fund, investment bank, etc., it’s perfectly reasonable to set up these various automatic systems and cutoff points, but when you put all of this together you a) have a lot of really jumpy and sensitive buffalo in your herd and b) are surrounded by cliffs for them to run off.

    The system both generates tiny signals of “movement” (up or down, it doesn’t really matter) and is ultra-sensitive to those same signals.

    And in the case of governments and the bond markets, well, even tiny variations are important, because they have significant ripple effects in the long term.

    This, naturally, also applies to the spiral up (followed by the spiral down) of the 2006-2008 bubble/crash.

    So while software embedded in weapons systems has been our obsession as a culture (the “Terminator scenario”), where software has really been causing problems for a while (even contributing to toppling governments!) is in finance and economics.

    So I think one could make a fairly convincing case that what we have here is a sort of mix between Skynet and the Ants from Deep Space Homer (where, incidentally, the meme “I for one welcome our X overlords” started).

    Another thought: this could be considered as an indication that the Singularity has already happened in some form, where a meta-entity that we can’t see or control is now operating and unknowingly causing impact back on our level. By ‘unknowingly’ I mean that I don’t think a meta-entity (Singularity or not) implies meta-consciousness, whether reflecting on itself or on the elements that are part of it, in this case, us+Earth. Or it could be that it is aware to some degree or fully, but perhaps it finds it impossible to control itself, or needs to do it to survive (imagine the destructive effect of many of our physical actions on our own particles — say, breathing, or eating).

    Interesting…

    Now, perhaps appropriately, I’m off to re-read Accelerando. Those lobsters crack me up every time.

  • software, technology

    dreaming it all up again

    Posted by d on December 3, 2011
    “It’s no big deal, it’s just — we have to go away and … and dream it all up again.”
     U2 at the Point Depot, Dublin, December 30, 1989
    Today was my last day at Ning.

    Seven years ago, almost to the day, I wrote on my weblog about looking for the next big thing, after clevercactus, my previous startup, ran out of funding. I had spent all the money I had pursuing an idea, to the point that for a few days in early December 2004 I couldn’t buy food (I eventually got a contracting job that tided me over for a bit), but I didn’t regret it. More to the point: I was exhausted and more than a little burned out, and I couldn’t really see clearly what I wanted to do next… except move back to the Valley, and take on a new challenge.

    The response was amazing — many people, most of whom I didn’t know, emailed me to offer advice, help, and some of them, opportunities.

    One of those emails was from one Marc Andreessen. I met him and Gina at their small office (at that time the company was still called 24hourlaundry, “meeting absolutely none of your laundry needs”) in Palo Alto. After five meetings (!) they made an offer and I accepted it “in principle” without even knowing exactly what the idea was –it was that much in stealth mode– and we agreed that if I didn’t like the idea, once they described it in detail, I could say no. Now, it may sound crazy to accept an offer sight unseen, but I’ve always valued teams more than anything, and we had had a great time talking about ideas, the state of the market, and what social could be if it was really pervasive (remember, this was late ’04/early ’05).
    I had to go back to Dublin, so we arranged to do a web meeting to go over the idea.

    We set up the call, and the first slide went up. No fancy graphics, no eye-catching copy. It just said:

         A web platform for social applications


    Holy shit, I thought.

    I got it, instantly.

    I didn’t hesitate. Before Gina could start talking, I said “where do I sign?”

    And that’s where Ning started for me.

    .o.

    I got working on it immediately, on the first piece of the system we needed to boot up the environment: the administrative interface. Funnily enough, that interface is basically the one piece of code that still runs relatively unchanged to this day.

    What followed was an incredible journey. I became Chief Architect, then VP of Engineering, then CTO, taking the job over from Marc. Throughout all this, we grew from 20 people or so in 2007 to over 120 in 2008, from 100K users in early ’07 to 1MM users in December that year, to 10MM users in December ’08, and beyond that.

    We went through a couple of years of people telling us things like “a platform for social networks? Well, I can build that in PHP in a week, why would anyone want that?” until it somehow became obvious that people would want just that (and that no, you couldn’t build it in a week), and they would even (gasp!) pay for it. In 2009/2010, when I was acting Chief Product Officer as well as CTO, we designed a new product offering and business model, and then I focused (with just a couple of brilliant engineers) on creating a new mobile product from scratch, Mogwee.

    Today, just like two years ago, Ning networks register millions of users a month, and have reached people everywhere, even if they don’t know it. A lot of them do know it, though — I’ve met a lot of people that have told me that Ning changed their life, and I can’t even begin to explain how it is that they have actually changed mine.

    When Glam Media acquired Ning in September, I decided that it was a good time to start something new. Glam+Ning have great stuff coming up and I look forward to seeing it unveiled in the coming weeks and months, but the itch to go back to being a Pirate was something I just couldn’t ignore.

    So, here I am.

    I am deeply grateful to Marc and Gina for giving me the opportunity early on to be a part of something amazing, and to the many, many people that contributed to make Ning what it is today, including its Network Creators, who are the most committed user community I’ve had the privilege of working for.

    I’ll be starting something new, and I’ll certainly be blogging more over the next few weeks. In theory, I will even be taking time off over the next week or two, but who knows how that’s going to pan out. (A friend said, “Yeah, I can see you taking one or two… days off.” Heh).

     And that’s it for now. All I need is some time to…  dream it all up again.

  • ning, personal

    too bad phone number records don’t have a TTL

    Posted by d on January 22, 2011

    It’s been almost a day since I ported my phone number from AT&T to Google Voice, and I still can’t receive SMS or international calls. Mind you, the porting process explicitly stated that SMS could take up to three days to transfer over, but while I was doing it I didn’t pay much attention to that.

    Yesterday, when someone was trying to call me from abroad and got “this number has been disconnected” as a response, the obvious struck me: phone numbers, like DNS names, clearly have a set of local caches at your local endpoint that know who is responsible for that phone number and let it figure out the routing. When the owner of the record changes, the change must propagate world-wide, and this takes time. Caching policies must vary between providers, and given how rarely phone numbers change, it’s reasonable that the cache would be fairly long-lived. SMS and international calls must both use the same name-resolution (or number-resolution) routing system.

    As I said, fairly obvious, but it’s one of those things that we rarely think about. I’ve been searching online for a while for information on these systems but can’t seem to find any. I’m curious about the protocol they use, and how much like (or unlike) DNS it may be. My bet is that, with the benefit of hindsight, it’s an arcane, bizarre system, evolved painfully and slowly over a long time, that everyone would like to change but can’t. Oh, wait, that is DNS.

    Either that, or there’s a giant building somewhere with rows of filing cabinets disappearing into the distance, with grim-looking people pushing carts full of punchcards that have to be moved from one filing cabinet to another. I’ll keep looking and see which one it is.

  • software, technology

    “because your computer is too fast”

    Posted by d on January 22, 2011

    La Brea: a really interesting tool that uses “AOP-style” cross-cutting wrappers for OS-level calls. I am generally obsessed with the idea that we place too much trust in the underlying systems we run software on, and this shows in many subtle ways in code as we write it, particularly around error detection, correction, and reporting. “Try La Brea” definitely goes on the to-do list.

  • software
