
Agile and Predictability

by James Shore

29 Sep, 2014

Over on the AgilePDX mailing list, there's an interesting conversation on making predictions with Agile. It started off with Michael Kelly asking if Agile can help with predictability. Here's my response:

It's entirely possible to make predictions with Agile. They're just as good as predictions made with other methods, and with XP practices, they can be much better. Agile leaders talk about embracing change because that has more potential value than making predictions.

Software is inherently unpredictable. So is the weather. Forecasts (predictions) are possible in both situations, given sufficient rigor. How your team approaches predictions depends on what level of fluency they have.

One-star teams adapt their plans and work in terms of business results. However, they don't have rigorous engineering practices, which means their predictions have wide error bars, on par with typical non-Agile teams (for 90% reliability, need 4x estimate*). They believe predictions are impossible in Agile.

Two-star teams use rigorous engineering practices such as test-driven development, continuous integration, and the other good stuff in XP. They can make predictions with reasonable precision (for 90% reliability, need 1.8x estimate*). They can and do provide reliable predictions.

Three- and four-star teams conduct experiments and change direction depending on market opportunities. They can make predictions just as well as two-star teams can, but estimating and predicting has a cost, and those predictions often have no real value in the market. They often choose not to incur the waste of making predictions.

So if a company were to talk to me about improving predictability, I would talk to them about what sort of fluency they want to achieve, why, and the investments they need to make to get there. For some organizations, three-star fluency isn't desired. It's too big of a cultural shift. In those cases, a two-star team is a great fit, and can provide the predictability the organization wants.

I describe the "how to" of making predictions with Agile in "Use Risk Management to Make Solid Commitments".

*The error-bar numbers are approximate and depend on the team. See the "Use Risk Management" essay for an explanation of where they come from.
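To make the arithmetic concrete, here's a minimal sketch of how such a multiplier turns a raw estimate into a commitment. The function and numbers are illustrative assumptions, not a formula from the essay:

    // A minimal sketch, assuming a simple multiplier model: scale a raw
    // estimate by a risk multiplier to get a schedule you can commit to
    // with ~90% confidence. Multipliers are the approximate figures above.
    function commitment(estimatedWeeks, riskMultiplier) {
        return estimatedWeeks * riskMultiplier;
    }

    console.log(commitment(10, 4));    // one-star rigor: commit to ~40 weeks
    console.log(commitment(10, 1.8));  // two-star rigor: commit to ~18 weeks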


How Does TDD Affect Design?

by James Shore

17 May, 2014

(This essay was originally posted to the Let's Code JavaScript blog.)

I've heard people say TDD automatically creates good designs. More recently, I've heard David Hansson say it creates design damage. Who's right?

Neither. TDD doesn't create design. You do.

TDD Can Lead to Better Design

There are a few ways I've seen TDD lead to better design:

  1. A good test suite allows you to refactor, which allows you to improve your design over time.

  2. The TDD cycle is very detail-oriented and requires you to make some design decisions when writing tests, rather than when writing production code. I find this helps me think about design issues more deeply.

  3. TDD makes some design problems more obvious.

None of these force you to create better design, but if you're working hard to create good design, I find that these things make it easier to get there.
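As an illustration of the third point, here's the kind of signal I mean. All of the names are hypothetical; the point is that painful test setup usually indicates a design problem in the production code:

    // Hypothetical example: the awkward setup is the design feedback.
    // Needing four collaborators just to construct InvoiceReport
    // suggests it has too many dependencies.
    it("renders an invoice", function() {
        var report = new InvoiceReport(
            new Database(), new Clock(), new TaxTable(), new HtmlFormatter()
        );
        var invoice = { total: 100 };
        expect(report.render(invoice)).to.be("<p>Total: $100</p>");
    });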

TDD Can Lead to Worse Design

There are a few ways I've seen TDD lead to worse design:

  1. TDD works best with fast tests and rapid feedback. In search of speed, some people use mocks in a way that locks their production code in place. Ironically, this makes refactoring very difficult, which prevents designs from being improved.

  2. Also in search of speed, some people make very elaborate dependency-injection tools and structures, as well as unnecessary interfaces or classes, just so they can mock out dependencies for testing. This leads to overly complex, hard-to-understand code.

  3. TDD activates people's desire to get a "high score" by having a lot of tests. This can push people to write worthless or silly tests, or use multiple tests where a single test would do.

None of these are required by TDD, but they're still common. The first two are obvious solutions to the sorts of design problems TDD exposes.

They're also very poor solutions, and you can (and should) choose not to do these things. It is possible to create fast tests and rapid feedback without these mistakes, and you can see us take that approach in the screencast.
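Here's a sketch of the first mistake, using Sinon-style mocks and hypothetical names. The test pins down how save() talks to its collaborator rather than what it accomplishes, so refactoring the internals breaks the test even when behavior is unchanged:

    // Over-mocked test (hypothetical example): it asserts on the exact
    // sequence of internal calls. Restructure the database access and
    // this fails, even if every user is still saved correctly.
    it("saves a user", function() {
        var db = {
            beginTransaction: sinon.stub(),
            insert: sinon.stub(),
            commit: sinon.stub()
        };
        new UserRepository(db).save({ name: "example" });
        sinon.assert.callOrder(db.beginTransaction, db.insert, db.commit);
    });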

So Do Your Job

TDD doesn't create good design. You do. TDD can help expose design smells. You have to pay attention and fix them. TDD can push you toward facile solutions. You have to be careful not to make your design worse just to make testing better.

So pay attention. Think about design. And if TDD is pushing you in a direction that makes your code worse, stop. Take a walk. Talk to a colleague. And look for a better way.


The Lament of the Agile Practitioner

by James Shore

08 May, 2014

I got involved with Extreme Programming in 2000. Loved it. Best thing since sliced bread, yadda yadda. I was completely spoiled for other kinds of work.

So when that contract ended, I went looking for other opportunities to do XP. But guess what? In 2001, there weren't any. So I started teaching people how to do it. Bam! I'm a consultant.

Several lean years later (I don't mean Lean, I mean ramen), I'm figuring out this consulting thing. I've got a network, I've got a business entity, people actually call me, and oh, oh, and I make a real damn difference.

Then Agile starts getting really popular. Certification starts picking up. Scrum's the new hotness, XP's too "unrealistic." I start noticing some of my friends in the biz are dropping out, going back to start companies or lead teams or something real. But I stick with it. I'm thinking, "Sure, there's some bottom feeders creeping in, but Agile's still based on a core of people who really care about doing good work. Besides, if we all leave, what will keep Agile on track?"

It gets worse. Now I'm noticing that there are certain clients that simply won't be successful. I can tell in a phone screen. And it's not Scrum's fault, or certification, or anything. It's the clients. They want easy. I start getting picky, turning them down, refusing to do lucrative but ineffective short-term training.

Beck writes XP Explained, 2nd edition. People talk about Agile "crossing the chasm." I start working on the 2nd edition XP Pocket Guide with chromatic and it turns into The Art of Agile Development. We try to write it for the early majority--the pragmatics, not the innovators and early adopters that were originally attracted to Agile and are now moving on to other things. It's a big success, still is.

It gets worse. The slapdash implementations of Agile now outnumber the good ones by a huge margin. You can find two-day Scrum training everywhere. Everybody wants to get in on the certification money train. Why? Clients won't send people to anything else. The remaining idealists are either fleeing, founding new brands, or becoming Certified Scrum Trainers.

I write The Decline and Fall of Agile. Martin Fowler writes Flaccid Scrum. I write Stumbling through Mediocrity. At conferences, we early adopters console each other by saying, "The name 'Agile' will go away, but that's just because practices like TDD will just be 'the way you do software.'" I start looking very seriously for other opportunities.

That was six years ago.

...

Believe it or not, things haven't really gotten worse since then. Actually, they've gotten a bit better. See, 2-5 years is about how long a not-really-Agile Agile team can survive before things shudder to a complete halt. But not-quite-Agile was Actually. So. Much. Better. than what these terribly dysfunctional organizations were doing before (I know! Who could believe it?) that they're now interested in making Agile work. So they're finally investing in learning how to do Agile well. Those shallow training sessions and certifications I decried? They opened the door.

And so here we are, 2014. People are complaining about the state of Agile, saying it's dying. I disagree. I see these "Agile is Dying" threads as a good thing. Because they mean that the word is getting out about Agile-in-name-only. Because every time this comes up, you have a horde of commenters saying "Yeah! Agile sucks!" But... BUT... there's also a few people who say, "No, you don't understand, I've seen Agile work, and it was glorious." That's amazing. Truly. I've come to believe that no movement survives contact with the masses. After 20 years, to still have people who get it? Who are benefiting? Whose lives are being changed?

That means we have a shot.

And as for me... I found that opportunity, so I get to be even more picky about where I consult. But I continue to fight the good fight. Diana and I produced the Agile Fluency™ model, a way of understanding and talking about the investments needed, and we're launching the Agile Fluency Project later this month. We've already released the model under a permissive license, for everyone to use. Use it.

Because Agile has no definition, just a manifesto. It is what the community says it is. It always has been. Speak up.

Discuss this essay on the Let's Code JavaScript blog.


Object Playground: The Definitive Guide to Object-Oriented JavaScript

by James Shore

27 Aug, 2013

Let's Code: Test-Driven JavaScript, my screencast series on rigorous, professional JavaScript development, recently celebrated its one-year anniversary. There are over 130 episodes online now, covering topics ranging from test-driven development (of course!) to cross-browser automation to software design and abstraction. It's incredibly in-depth and I'm very proud of it.

To celebrate the one-year anniversary, we've released Object Playground, a free video and visualizer for understanding object-oriented programming. The visualizer is very cool: it runs actual JavaScript code and graphs the object relationships created by that code. There are several preset examples and you can also type in arbitrary code of your own.

[Screenshot: Object Playground in action]

Understanding how objects and inheritance work in JavaScript is a challenge even for experienced developers, so I've supplemented the tool with an in-depth video illustrating how it all fits together. The feedback has been overwhelmingly positive. Check it out.
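If you'd like something to paste into the visualizer, here's the sort of snippet it's designed for. This is my own example, not one of the site's presets:

    // Classical-style inheritance in JavaScript. Paste this into the
    // visualizer to see the prototype chain it creates.
    function Animal(name) { this.name = name; }
    Animal.prototype.speak = function() { return this.name + " makes a sound"; };

    function Dog(name) { Animal.call(this, name); }
    Dog.prototype = Object.create(Animal.prototype);
    Dog.prototype.constructor = Dog;

    var rex = new Dog("Rex");
    console.log(rex.speak());  // "Rex makes a sound", via Dog.prototype -> Animal.prototype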


Estimation and Fluency

by James Shore

25 Feb, 2013

Martin Fowler recently asked me via email if I thought there might be a relationship between Agile Fluency and how teams approach estimation. This is my response:

I definitely see a relationship between fluency and estimation. I can't say it's clear cut or that I have real data on it, but this is my gut feel:

  1. "Focus on Value" teams tend to fall into two camps: either "estimation is bad Agile" or "we're terrible at estimating." These statements are the same boy dressed by different parents. One star teams can't provide reliable estimates because their iterations are unpredictable, typically with stories drifting into later iterations (making velocity meaningless), and they have a lot of technical debt (so even if they took a rigorous approach to "done done" iterations, there would be wide variance in velocity from iteration to iteration, so their predictions' error bars would be too wide to be useful).

  2. "Deliver Value" teams tend to take a "we serve our customer" attitude. They're very good at delivering what the customer asks for (if not necessarily what he wants). Their velocity is predictable, so they can make quite accurate predictions about how long it will take to get their current backlog done. Variance primarily comes from changes to the backlog and difficulty discussing needs with customers (leading to changes down the road), but those are manageable with error bars. Some two-star teams retain the "estimation is bad Agile" philosophy, but any two-star team with a reasonably stable backlog should be capable of making useful predictions.

  3. "Optimize Value" teams are more concerned with meeting business needs than delivering a particular backlog. Although they can make predictions about when work will be done, especially if they've had a lot of practice at it during their two-star phase, they're more likely to focus on releasing the next thing as soon as possible by reducing scope and collaboratively creating clever shortcuts. (E.g., "I know you said you wanted a subscriber database, but we think we can meet our first goal faster if we just make REST calls to our credit card processor as needed. That has ramifications x, y, z; what do you think?"). They may make predictions to share with stakeholders, but those stakeholders are higher-level and more willing to accept "we're working on business goal X" rather wanting than a detailed timeline.

  4. I'm not sure how this would play out with "Optimize for System" teams. I imagine it would be the same as three-star fluency, but with a different emphasis.
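As a sketch of the two-star arithmetic in point 2: forecast from velocity, with error bars taken from iteration-to-iteration variance. The numbers are made up for illustration; this is not a formula from the fluency model:

    // A minimal sketch, assuming a stable velocity and made-up numbers.
    var backlogPoints = 120;
    var velocities = [18, 22, 19, 21, 20];  // points per iteration

    var mean = velocities.reduce(function(a, b) { return a + b; }) / velocities.length;
    var best = Math.max.apply(null, velocities);
    var worst = Math.min.apply(null, velocities);

    console.log("expected: " + Math.ceil(backlogPoints / mean) + " iterations");
    console.log("range: " + Math.ceil(backlogPoints / best) + " to " +
                Math.ceil(backlogPoints / worst) + " iterations");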


Analysis of Hacker News Traffic Surge on Let's Code TDJS Sales

by James Shore

25 Feb, 2013

A few weeks ago, my new screencast series, Let's Code: Test-Driven JavaScript, got mentioned on Hacker News. Daniel Markham asked that I share the traffic and conversion numbers. I agreed, and it's been long enough for me to collect some numbers, so here we are.

To begin with, Let's Code TDJS is a bootstrapped venture. It's a screencast series focused on rigorous, professional JavaScript development. It costs money. $19.95/month for now, $24.95/month starting March 1st, with a seven-day free trial.

Let's Code TDJS isn't exactly a startup--I already have a business as an independent Agile consultant--but I'm working on the series full-time. I've effectively "quit my day job" to work on it. I'm doing this solo.

I launched the series last year on Kickstarter. Thanks in large part to a mention on Hacker News, and even more to Peter Cooper's JavaScript Weekly, the Kickstarter was a huge success. It raised about $40K and attracted 879 backers. That confirmed the viability of the screencast and gave me runway to develop the show.

(By the way, launching on Kickstarter was fantastic. I mean, sure, it was nail-biting and hectic and scary, and running the KS campaign took *way* more work than I ever expected, but the results were worth every bit of it. As a platform for confirming the viability of an idea while simultaneously giving you seed funding, I can't imagine anything better for the bootstrapper.)

Anyway, the Kickstarter got a reasonable amount of attention. I tossed up a signup page for people who missed it, and by the time I was ready to release the series to the public, I had a little over 1500 people on my mailing list.

I announced the series' availability on February 4th. First on Sunday via Twitter, and then on Monday morning (recipient's time) via the mailing list. That's about 4,200 Twitter followers and 1,500 mailing list recipients.

Before we get into the numbers, I should tell you that I don't use Google Analytics. I don't track visitors, uniques, pageviews, none of that. I'm not sure what I would do with it. What I do track is 1) number of sales and 2) conversions/retention. That's it.

So, I announced on the weekend of the fourth. There was a corresponding surge in sales. Here's how many new subscriptions I got each day, with Monday the 4th counting as "100x." (So if I got 100,000 subscriptions on Monday--which, since you don't see me posting from a solid gold throne, you can assume I didn't--then I would have gotten 48,000 on Sunday.)

  • Sunday: 48x
  • Monday: 100x
  • Tuesday: 33x
  • Wednesday: 25x
  • (Total: 206x.)

82% of these subscribers converted to paying customers, 15% cancelled the trial, and 3% just didn't pay: although I collect credit card information at signup, those subscribers' credit cards didn't process successfully.

A week and a half later, just after midnight my time (PST) on Wednesday the 13th, the series was posted to Hacker News by Shirish Padalkar. It was on the front page for about a day, and was near the top of the front page for the critical morning hours of every major European and US time zone. It peaked at #7. That led to a shorter, sharper surge.

  • Wednesday: 140x
  • Thursday: 23x
  • (Total: 163x, or 79% of the email's surge.)

83% of these subscribers converted from the free trial. 14% cancelled and 4% didn't pay. The only real difference between the two surges was that a lot more of the HN subscribers cancelled at the last moment. About half actually cancelled after their credit card failed to process and they got a dunning email. It makes me wonder if HN tire-kickers are more inclined to "hack the system" by putting in a credit card that can be authorized but cannot be charged. A debit card with insufficient funds would do the trick.

Another interesting data point is that "background" subscriptions--that is, the steady flow of subscriptions since February 2nd, other than traffic surges--averages 10x per day. (That is, on an average day, I get one tenth the subscriptions I got on Monday the 4th). However, the conversion rate for those "background" subscriptions is 95%. I'm not sure why it's so much higher. Perhaps those subscriptions are the result of word-of-mouth recommendations? That would make sense, since I'm not advertising or doing any real traffic generation yet.

My conclusions:

  • For this service, a mention on HN is about equivalent to a mailing list of 1,200 interested potential customers and 3,330 Twitter followers. That's actually less than I would have expected.

  • The conversion behavior of HN'ers is about the same as the mailing list. I would have expected the mailing list to convert at a higher rate, since they've already expressed interest. But it's essentially the same.

  • Both surges converted at a significantly lower rate than word-of-mouth subscribers. Don't get me wrong--from my research, I'm led to believe that ~83% is excellent. But 95% is frikkin' amazing.

  • HN'ers are more likely to cancel a free trial at the last moment and use credit cards that authorize but cannot be charged.

Finally--thanks to everyone who subscribed! I hope this was interesting. You can discuss this post on Hacker News. I'm available to answer questions.

(If you're curious about the series, you can find a demo video here.)


Let's Code: Test-Driven JavaScript Now Available

by James Shore

11 Feb, 2013

I'm thrilled to announce that Let's Code: Test-Driven JavaScript is now open to the public!

I've been working on this project for over nine months. Over a thousand people have had early access, and reactions have been overwhelmingly positive. Viewers are saying things like "truly phenomenal training" (Josh Adam, via email), "highly recommended" (Theo Andersen, via Twitter), and "*the* goto reference" (anonymous, via viewer survey).

Up until last week, the show was only available to Kickstarter backers. Now I've opened it up to everyone. Come check it out! There's a demo video below.

About Let's Code: Test-Driven JavaScript

If you've programmed in JavaScript, you know that it's an... interesting... language. Don't get me wrong: I love JavaScript. I love its first-class functions, the intensive VM competition between browser makers, and how it makes the web come alive. It definitely has its good parts.

It also has some not-so-good parts. Whether it's browser DOMs, automatic semicolon insertion, or an object model with a split personality, everyone's had some part of JavaScript bite them in the ass at some point. That's why test-driven development is so important.

Let's Code: Test-Driven JavaScript is a screencast series focused on rigorous, professional web development. That means test-driven development, of course, and also techniques such as build automation, continuous integration, refactoring, and evolutionary design. We support multiple browsers and platforms, including iOS, and we use Node.js on the server. The testing tools we're using include NodeUnit, Mocha, expect.js, and Testacular.
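To give a flavor of that style, here's a minimal Mocha test using expect.js. It's a generic example written for this post, not an excerpt from the series:

    // Minimal Mocha + expect.js example (generic; not from the series).
    // Run with: mocha price_test.js
    var expect = require("expect.js");

    function parsePrice(text) {
        var result = parseFloat(text.replace(/^\$/, ""));
        if (isNaN(result)) throw new Error("not a price: " + text);
        return result;
    }

    describe("parsePrice", function() {
        it("converts a dollar string to a number", function() {
            expect(parsePrice("$19.95")).to.be(19.95);
        });

        it("rejects garbage", function() {
            expect(function() { parsePrice("free!"); }).to.throwError();
        });
    });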

You can learn more on the Let's Code TDJS home page. If you'd like to subscribe, you can sign up here.


Come Play TDD With Me at CITCON

by James Shore

18 Sep, 2012

CITCON, the Continuous Integration and Testing Conference, is coming up this weekend in Portland, and I'm going to be there and recording episodes of Let's Play TDD! I'm looking for people to pair with during the conference.

If you're interested, check out the current source code and watch a few of the most recent videos. Then, at the conference, come to my "Let's Play TDD" session and volunteer to pair! It should be a lot of fun.

There are still a few slots open at the conference, so if you haven't registered, there's still time. I hope to see you there!


Acceptance Testing Revisited

by James Shore

08 Sep, 2012

I recently had the chance to reconsider my position on acceptance testing, thanks to a question from Michal Svoboda over on the discussion forums at my Let's Code: Test-Driven JavaScript screencast. I think this new answer adds some nuances I haven't mentioned before, so I'm reproducing it here:

I think "acceptance" is actually a nuanced problem that is fuzzy, social, and negotiable. Using tests to mediate this problem is a bad idea, in my opinion. I'd rather see "acceptance" be done through face-to-face conversations before, after, and during development of code, centering around whiteboard sketches (earlier) and manual demonstrations (later) rather than automated tests.

That said, we still need to test the behavior of the software. But this is a programmer concern, and can be done with programmer tests. Tools like Cucumber shift the burden of "the software being right" to the customer, which I feel is a mistake. It's our job as programmers to (a) work with the customer, on the customer's terms, so we build the right thing, and (b) make sure it actually works, and keeps working in the future. TDD helps us do it, and it's our responsibility, not the customer's.

I don't know if this is very clear. To rephrase: "acceptance" should be a conversation, and it's one that we should allow to grow and change as the customer sees the software and refines her understanding of what she wants. Testing is too limited, and too rigid. Asking customers to read and write acceptance tests is a poor use of their time, skill, and inclinations.
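To make "programmer tests" concrete, here's a hypothetical sketch: the sort of business rule a Cucumber scenario might describe ("free shipping on orders of $50 or more"), written instead as a plain programmer test. The rule and names are invented for illustration:

    // Hypothetical sketch: a business rule as a programmer test rather
    // than a customer-written acceptance test.
    var expect = require("expect.js");

    function shippingCost(orderTotal) {
        return orderTotal >= 50 ? 0 : 5.95;
    }

    describe("shipping", function() {
        it("is free for orders of $50 or more", function() {
            expect(shippingCost(50)).to.be(0);
        });

        it("costs $5.95 for smaller orders", function() {
            expect(shippingCost(49.99)).to.be(5.95);
        });
    });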

I have more here:

  • The Problems with Acceptance Testing
  • Alternatives to Acceptance Testing