These Posts Are Meant For Reading

Posted on by Stephane Beniak

This is not a fancy blog.

Some cheap web host running WordPress with the default theme (Twenty Eleven at the time) is all I really need. Sure, I removed plenty of the preloaded cruft (tag clouds? seriously?) and hacked the CSS to hell and back, but I never put too much consideration into the actual content’s presentation.

I’m happy to announce that with my recent discovery of the Readability web-app, I have learned the error of my ways. So behold! A minor blog redesign!

What’s New

[Screenshot: Before]

[Screenshot: After]

I’ve never heard of anyone complaining about fonts being too large.

As such, the tiny type is gone in favour of bigger, bolder and higher-contrast text, along with wider line spacing and larger navigation buttons. Other minor changes include a lighter, less distracting background, horizontal rules to clearly delineate content, and more padding just about everywhere.

All of these changes were done in the name of readability; I hope this provides a better content consumption experience for everyone.

If you have any additional feedback on how I can further improve these pages, I’d be happy to hear it via the comments below.

Posted in Design | Tagged Readability

An Ode to Terrible Teachers

Posted on by Stephane Beniak

I’ll never forget my very first lecture as a University student…

(Gather round kids, story time!)

It was an overcast September morning as I walked onto McGill campus, and it was the first day of what would turn out to be the best years of my life (so far). Not that you could tell from the way our first lecture unfolded.

Ordinary differential equations, a.k.a. ODEs, taught by a certain Professor Johnson (names may have been changed, slightly). As I was heading in and getting seated, in walked a man with an uncanny resemblance to Santa Claus. Flowing white beard, portly, suspenders, the works. He took several minutes to introduce himself, read an incredibly dry course outline, and give us a spiel on “academic integrity” (the first of many such abysmal failures, but that’s a story for another time).

He then paused for a moment, giving the auditorium’s students just enough time to wonder about his intentions for this first-lecture-of-the-semester: was he going to let us go early because it was the first class, or was he going to start covering material right now?

Needless to say, it turned out to be the latter, or there really wouldn’t have been anything memorable about the whole thing…

So off he went, launching into the course material with a lengthy example on the use of integrating factors when solving first-order linear ODEs. Something I only figured out a few weeks later, actually, because I don’t recall it being mentioned by name in any understandable manner. It was complete gibberish to me at the time anyways.

There was no attempt at an explanation of what it was we were doing. Or why. No context, preamble, or introduction. His writing was furious, rapid and nearly illegible, and what little he said was not keeping up.

He filled the classroom’s six chalkboards so quickly that I never managed to write it all down, let alone absorb any information throughout the process. Several times I stared, dumbfounded, at the positively mystical proceedings unfolding at the front of the classroom… before snapping out of it to continue my frenzied note-taking, lest I fall too far behind.

Professor Johnson never once consulted his notes as he sped along solving the problem. I was briefly impressed before I realized it was because he didn’t have any, and was relying entirely on his own ability. Which would’ve been commendable, if it weren’t for his abrupt halt as we closed in on the answer.

“Hold on, that doesn’t make sense,” said Professor Johnson, in his unusually deep and commanding voice.

That was met with stony silence from the peanut gallery, save for the faint sound of pencils urgently dashing across paper. At this point I had given up any semblance of note-taking and was simply observing everyone else, in the hopes that I might not be the only one feeling so stupid… I had absolutely no idea what was going on. I tried to judge the reaction of those in my vicinity, but most were still frantically writing things down, so it was hard to tell if anyone else found the situation as objectionable as I did.

Eventually the furious scribbling died down as students peered up from their notebooks to get a look at this predicament. Professor Johnson was still quietly thinking, eyes darting about the many chalkboards. Nobody else was following along well enough to figure out what was wrong. Or, at the very least, too intimidated to speak up about it (do keep in mind: this was our first University lecture ever, very few people knew each other, and he had a really deep voice, okay?).

It took another few moments before a sudden “Aha!” broke the silence. “A simple mistake,” he proclaimed. He’d found the error somewhere near the beginning, on the second board or so. Which conveniently meant that everything derived from that point onwards was also wrong, so we’d have to redo most of the problem.

I was feeling clever for giving up on note-taking as pages were erased or torn out around me. Low grumblings of discontent began to fill what had been dead silence beforehand. But the solving process began anew, and most people kept on writing.

Only a few boards later, however, some new problem came up. A minor typo led to a negative being carried incorrectly. “Easy to fix, carry on.”

Then again. What looked like an e was actually a 9. “A trifle.”

On and on the mistakes came and went, always casually dismissed, until we eventually made our way to an answer. A real, final answer to the example we set out to solve an hour ago.

But of course it was wrong too. Professor Johnson scratched his head as the problem continued to elude him. By now, only the truly dedicated were writing anything down. Everyone else was too busy glaring in the general direction of all this questionable math.

Time was running short, and the situation did not look promising. We had no reason to believe we could reach a correct solution within the next few minutes, so Professor Johnson decided it was time to make amends. “The answer isn’t quite right, but it’s trivial to derive for yourself. Give it a try when you revise your notes,” he said, seemingly without a care in the world.

A flick of the hand indicated our dismissal. We couldn’t leave fast enough.


I was never a dumb kid. I always did well enough in the classroom to earn the honours-flavour-of-the-semester at my grade school, high school, and CEGEP. But by the end of that very first ODEs lecture, I had concluded that I must be too stupid for the mighty prestige of McGill University.

Clearly my parents weren’t kidding when they said a Bachelor’s degree would be tough; I must have vastly underestimated the challenge, and overestimated my own ability. Professor Johnson had struck the fear of failure into me with just 90 minutes of his regular morning routine.

Ultimately I went on to my second class that day, Intermediate Calculus, and had a much better time thanks to a more able and likeable professor. But that crushing fear of failure, however brief it was, taught me a very valuable lesson.

Learning is fundamentally your responsibility

Within just a few lectures, the pattern had been firmly established: I wasn’t going to learn anything at all by going to class. It was time to take things into my own hands.

I started attending another class section (illicitly, because it was already overbooked, and reached the status of serious fire hazard well before the end of term). I bought the recommended book (one of the few times I ever did so during my four years), and actually read large swaths of it. I discovered Paul’s Online Math Notes and learnt a great deal from there too. I banded together with friends and together we taught ourselves the class while working out assignments together (we all finished them legitimately too, not just splitting up the numbers and passing them around afterwards). And at the end of the day, our efforts were rewarded; we all did very well in the class, no thanks to Professor Johnson.

Or was it really? Admittedly, most of the credit belongs to the incredibly smart people I ended up collaborating with. But our professor was so astronomically terrible that it paradoxically spurred me to do better, through a mixture of fear and a need for revenge.

It motivated me to do better as a student, to prove “the authority” wrong, that the material wasn’t as hard as it was made out to be, that I could figure it out. As an avid gamer, it encouraged me to “set a high score” and “beat the game,” so to speak.

Later in my degree, it drove me to become a teaching assistant (for Applied Linear Algebra… close enough), to show all the bad teachers out there that teaching was not fundamentally difficult. I quickly learnt that a little bit of caring and a little more effort goes a long way towards garnering some respect.

And in the grand scheme of things, it ultimately taught me this: you can learn anything. You don’t need the structured learning environment of school to guide you, and you certainly don’t need a good teacher to explain it to you. Start with a little motivation, get some reliable resources (friends, books, the internet, etc), trim it down to what matters, and get to work.

Professor Johnson was right, really… it is trivial. So, in a strange way, thank you.

Posted in Thoughts | Tagged Academia, School, Teaching

Your App’s User Experience, From The Very Beginning

Posted on by Stephane Beniak

As a mobile (game) developer, it’s my job to be constantly worrying about the end user experience of what I create. It’s extremely important for your app to be finely polished, from the main menu screen down to every last nook and cranny, in order to maximize user retention. But something that’s often overlooked during user testing is the app’s startup sequence and loading process.

It’s easy to rationalize that with excuses like “users will only see it once” or “it only lasts a second or two,” but in reality first impressions matter, a lot, so it’s important for them to be as flawless as possible. I’d like to rectify some of that negligence with a few personal guidelines I’ve built up over time.

1. Does your app load as quickly as possible?

It goes without saying that your app should load instantaneously (i.e. less than one second) if possible. For every passing second beyond that, you can bet that somewhere there’s an impatient user backing out, never having tried it. Either that, or the constant frustration from the slow load times builds up over time and eventually causes them to move onto some competitor’s app.

2. Does your app show signs of life immediately?

Nobody wants to stare at a blank screen while an app boots up. It’s misleading, unprofessional, and a missed branding or tutoring opportunity. Android, iOS and Windows Phone all provide mechanisms for your app to immediately present a default image before the rest of its assets even begin to load; you should absolutely be exploiting these accordingly.

There are a few possible avenues to pursue here, depending on what you think makes the most sense to the user. Some apps prefer to load their default UI background, which gives the illusion of near-instantaneous loading when the app actually takes a moment to catch up. Others might use it as a branding opportunity by displaying a logo while the app loads. By overlapping things in this way, we’re not wasting the user’s time with a splash screen; the app is immediately usable as soon as it’s finished loading.

3. Does your app let the user know it’s loading?

For any app that takes more than about two seconds to load, you should seriously consider adding a loading animation of some kind. It doesn’t have to be a fully-fledged progress bar, because it’s extremely difficult for them to not suck. A simple spinning hourglass will do the job, because we’re trying to communicate that the app is functional, not frozen or stuck, and nothing more. A static, unresponsive image says “I’m frozen” to the user, while spinning arrows say “I’m working as hard as I can, it’ll just be a sec,” and earn you a lot more patience.

Granted, if your app loads instantaneously (see point #1 above) there’s no need for a loading animation or progress bar. In fact, if you can be sure that your app will load just as quickly across all supported devices (even the cheapest entry-level handsets), I’d argue that you should remove any traces of loading progress from your intro screen. It’s visually annoying to watch a flickering progress bar or rapid animation that quickly disappears anyways. In this case, you’re better off presenting the app’s main menu background as your default intro screen, as mentioned in point #2 above.

User Testing, Starting From The Beginning

One final note on user testing: test early, test often, and test from the very beginning of your app. You should never be handing off the device with the app already loaded to the main menu and ready to rumble. Instead, you should get the user to launch your app directly from the device’s app list by simply telling him or her the name.

That way, you’ll be able to get feedback on your app’s startup sequence, the default image presented and the load time. As a bonus, this can also reveal problems with your app’s icon or the memorability of its name. App discoverability is a serious problem in today’s app stores, so you should be doing your best to mitigate it ahead of time.

Posted in Design | Tagged Mobile Apps, User Experience

Fixing Silent Hangs When Calling Old .NET Assemblies

Posted on by Stephane Beniak

As a shameless C# junkie, it’s not often that Microsoft’s managed code gives me trouble… but due to some sort of unfortunate universal law, when trouble does come, it’s proportionally more obscure and convoluted.

The Silence of the Hangs

Case in point: it seems that modern versions of .NET don’t play nice with their ancestors. Should you ever find yourself calling some old mixed-mode assemblies (<= .NET 3.5) from some new managed code (>= .NET 4.0), your program, when run, will probably do absolutely nothing and just silently hang.

Pausing the run will reveal your code getting stuck on the very first method call to the external assembly, with the call stack unhelpfully capped with [Managed to Native Transition].

[Screenshot: the paused call stack]

(Aside for some context: I’m writing a library to get old-school DirectInput controllers to work in XNA, which otherwise only supports XInput gamepads. The problem occurs when calling Microsoft’s own DirectInput managed code wrapper, which hasn’t been updated since .NET 2.0…)

The Fixer

Much Google-Fu ensued. Partial solutions were found and compiled into one foolproof list for your viewing pleasure:

  1. Open up your App.config file
    1. If you don’t have one, simply create one by going to Project => Add new item and selecting Application configuration file
  2. Add exactly these lines to your App.config:
    <configuration>
      <startup useLegacyV2RuntimeActivationPolicy="true">
        <supportedRuntime version="v4.0"/>
      </startup>
    </configuration>
  3. If you are from the future, change the runtime version accordingly ("v4.5" for VS2012, etc.)
  4. Go to Debug => Exceptions, expand Managed Debugging Assistants and uncheck LoaderLock

And that should do it. There are probably some arcane side-effects to these hacks, but I haven’t run into any problems so far. Therefore, I can safely extrapolate my experience to mean that your computer will catch on fire the moment you attempt anything written here.

Shotgun-not-liable.

Update: Someone with much deeper technical knowledge than me provided this explanation of the situation along with a better solution, should you be interested in learning more. I wish I had found this earlier.

Posted in Programming | Tagged .NET, C#, XNA

The Computer Architecture Analogy, Part 5: Compilers

Posted on by Stephane Beniak

Back to the lab, gentle[wo]men, the computer architecture series we started many moons ago continues! Last time we went over Assembly programming in an effort to better understand how (relatively) normal humans like us can actually talk to computers. But to be frank, we can do much better, so the chef’s special today is compilers.

Throughout these posts, I’ll be using one central analogy in my attempt to bring everything together: the main assembly line in a car manufacturing plant, which is analogous to the processor (or CPU) that lies within a larger computer. As a direct corollary, we’ll be referring to ourselves as the car designers in charge of writing up the instruction manual for the plant to follow, which is analogous to the programmers in charge of writing code for the processor to execute.

Do Recall

When we discussed Assembly last week, it was with rose-coloured glasses on, but only because it was so much better than the old alternative of using a keyboard that looked like this:

[Image]

Allowing the user to type what vaguely resembled words rather than arbitrary codes (Assembly Language vs. Machine Code, as shown below) was a huge jump in usability and productivity.

[Image: Assembly Language vs. Machine Code]

As we explored last time, this is analogous to the car designer being able to write his instructions in his native language (German): he can work more comfortably, then have a team of translators convert them into the correct language for the manufacturing plant (Chinese). Sure, it takes time and money to do this, but because the designer can now work much faster and more comfortably, the ultimate result is a better, nicer and more advanced automobile.

But We Can Do Better

Okay, so Assemblers are certainly a step in the right direction: the instructions are much more legible, and everyone’s happy that better cars are being produced as a result. But once the novelty wears off and you start getting used to your new tool, it’s clear that some annoyances still remain…

Manual space management

For one, as we’re writing the instruction manual, we might start to notice some patterns in the way the more common instructions are written. Take for example the process of attaching the hood, the trunk lid, or the doors to the car. If you recall part 3 of this series, we’re constantly writing instructions like this:

  1. LOAD "hood" in shelf bin #101
  2. LOAD "attachment bolts" in shelf bin #102
  3. LOAD "attachment nuts" in shelf bin #103
  4. ADD #101, #102, #103 to "car"
  5. STORE result "car" for later

In other words, we’re manually controlling the plant’s storage space (as explained in part 1) by telling the workers exactly when to bring a part from the back-store to the shelf because they’ll need it, or when to put it back because we’re done with it.

This is, quite frankly, ridiculous when you think about it. That means we need to know the exact specifications of the auto plant we’re writing to, so we don’t accidentally make the workers run out of space because the shelf is smaller than we thought. That also means we’re wasting time writing very specific instructions when in reality they’re pretty obvious; we could just have someone else convert it for us like we did with the translators last time.

So that’s exactly what we’re going to do. We’re going to hire a new team of people and call them “converters,” and they’re going to transform the written words (in German) of the designer into the more specific intermediate instructions (still in German but more verbose) which get handed off to the translators before being shipped to the plant (in Chinese).

This will allow the car designer to write at a higher level, worry less about the details of space management, and ultimately be more productive. For example, the instructions given above don’t need to be written as five individual instructions (LOAD, LOAD, LOAD, ADD, STORE), it’s just as understandable if we write something like this:

  1. AttachToCar(Hood, using Bolts, Nuts)

All of the necessary information is contained within that one single instruction, and anything else like storing it afterwards is strongly implied (you weren’t gonna just throw out all that hard work were you?). Even details like the shelf bin numbers don’t need to be explicitly written because they can be figured out. This basically leaves it up to the team of converters to use the shelves as they see fit, because fundamentally it doesn’t matter what bin on the shelf is used so long as the car actually gets built correctly. And since there’s only one set of bolts and nuts that fits the hood anyways, it’s perfectly feasible for them to work their magic by just choosing whatever makes sense. Leaving the specific details to them means we can focus on the bigger picture.

External instructions

Another problem with the old instructions we were writing is that they sometimes get incredibly repetitive. Let’s say that you as the designer are putting the same exact engine in four different models of car. Does it really make sense to be copy-pasting identical engine assembly instructions into each of the instruction manuals? Firstly, that’s an awful lot of matching text to be repeating across four different manuals, and secondly, if you made a mistake, you’ll have to fix it four different times!

So, why not save a few trees by writing an external instruction manual for this particular engine? That way, you can refer to it directly in each car’s manual, and you as the car designer can nonchalantly write “now just build engine #001 lol” in the middle of the car’s instruction manual. Then, when you’re handing it off to the converters, just include a copy of the manual for building engine #001, and they’ll take care of bringing it all together into one complete set of instructions to be sent to the plant.

Instruction Analysis

Now that this conversion process has been put in place to address the problems above, and we have a team of people to carefully read over the code, we might as well maximize our use of them. It’s really easy to make mistakes when writing instruction manuals, so they might as well be proofreading for us.

The mistakes we’re most likely to make as designers come from the category of things that “make no goddamn sense.” For example, we might write in step #12 to attach the engine, when in reality it hasn’t been built yet because that’s actually in step #15. With low-level instructions, we might assume the engine is already in shelf bin #101 and accidentally write an instruction to attach the engine like this:

  1. ADD #101 to "car"

So, because we’re doing manual space management and accessing bin numbers directly, a worker would end up reaching into shelf bin #101, pulling out a tire, and having no idea what to do next as he wonders what kind of idiot wrote these instructions.

Now there are two advantages to having the conversion team around in this case. First, writing in a higher level language means that instead of producing the instruction above (which is unclear because it doesn’t tell us what’s actually in bin #101), we’d be writing something like this instead:

  1. Install(Engine, using DuctTape, ChewingGum)

So, because the word engine is explicitly listed, it’s much easier to notice the mistake at a glance and prevent it from reaching the converters. But let’s say that we’re having a rough day and we don’t notice before sending it off… Were we to hand off such faulty instructions to the converters, they could easily proofread it and notice the mistake themselves. Even if they didn’t see it right away, because it’s their job to figure out the best way of using the shelf space now, sooner or later they’d notice that the bin was not being filled, the mistake would surface and a catastrophe would be avoided.

Assemblers on Steroids

The team of converters, as we’ve outlined them above, is directly analogous to modern code compilers. They essentially fit above the Assembler in the stack, as shown below.

[Diagram: the compiler sitting above the assembler in the tool stack]

They eliminate even more tedium from the lives of programmers so they can focus on getting things done. More specifically, let’s look at the three big ones we covered above in Analogy-land.

Register Allocation

First, whereas working in Assembly requires the programmer to manually manage individual registers, compilers were specifically built to take care of this themselves with a technique called register allocation. This way, the programmer doesn’t have to worry about loading or storing values into low-level memory; all of that complexity is hidden away so that he (or she, come on now) can treat everything as being readily accessible immediately. This leads to code that is much simpler, much shorter, and much easier to read and write.

Also, because this hides the specific implementation of the current computer from the programmer, it allows code to become portable across many machines. Granted, this means that you’ll need a new compiler for every different instruction set (part three) you wish to target, but it nonetheless allows programmers to save a lot of time by not having to worry about what specific type of processor the code is going to run on.

Linkers

Second, our new best friend the compiler brings along his buddy called the linker to help in the fight against programmer tedium. While not strictly a part of the compiler, the linker takes the compiler’s low-level code output and joins it with whatever other libraries have been referenced to form a single executable file. So, just like the car designer can refer to an external engine manual in his instructions provided he attaches it, the programmer can refer to an external library so long as it’s stored somewhere nearby on the computer. The linker will grab whatever it needs from the library and smash it all together into a single executable for ease of use. This allows code to be written much more succinctly, as it eliminates the need for repetition across programs.

Code Analysis

Third, because compilers inherently need to “read” your code, another helpful feature they bring along is the ability to perform code analysis for the programmer. The easy first one is error-checking, as anything that doesn’t make sense will give the compiler trouble when trying to produce machine code. So, it’ll stop the compilation and report an error to the programmer so he can fix things up and have better luck next time.

But since the compiler’s getting intimate with your code, it might as well give it some more advanced analysis. The next big one is performance analysis so that the compiler can automatically remove dead code, or remove instructions that recalculate the same exact number over and over, for example. Furthermore, given that it’s aware of the target machine’s technical details, it can also perform platform-specific enhancements that go beyond the obvious ones outlined earlier. Granted, this means that a lot of work must go into ensuring the compiler’s correctness, but once it’s in place, free* performance improvements are nothing to sneeze at.

…And More

Finally, note that a compiler does a lot more than just the three things outlined above, and you can read more about it on good old Wikipedia, or in this informative slide deck. But those three are pretty big ones and serve to give you some appreciation of the work that went into designing compilers over the years. Also this post has grown into an overly long abomination, so here, let’s keep you entertained.

[Image: cat_computer.jpg]

Up, Up and Away

At last, we’ve gotten to a point where written code actually resembles English thanks to compilers, not to mention that they bring many other improvements to the table with them. Apart from newfangled visual programming shenanigans, compilers are effectively the top of the food chain when it comes to programming tools (Note that interpreters are also very similar for the purposes of our discussion).

So, this ends our little side-quest exploring the means by which programmers tell computers what to do. It was getting increasingly far from the core topic of computer architecture anyways. Next time, we’ll be back on the more traditional track, covering branch prediction.