meyerweb.com


Archive: 'Standards' Category

Pricing ‘CSS: The Definitive Guide’

  • Wed 3 Oct 2012
  • 1117
  • CSS
    Projects
    Standards
    Writing
  • One response

When I announced the serial publication of CSS: The Definitive Guide, Fourth Edition, I failed to address the question of how pricing will work. Well, more accurately, I decided to break it out into its own post. As it turns out, there are two components to the answer.

The first component is the pricing of the pre-books. Roughly speaking, each pre-book will be priced according to its length. The assumed base is $2.99 for the electronic version and $7.99 for the print version, with significantly longer pre-books (say, one where two chapters are combined) priced somewhat higher. How much higher depends on the length. It’s possible that prices will drift a bit over time as production or printing costs change; there’s no way to rule that out. We’re basically pricing them as they come out.

At the end of the process, when all the chapters are written and bundled into an omnibus book edition, there will be discounts tied to the chapters you’ve already purchased. The more chapters you bought ahead, the deeper the discount. If you bought the pre-books direct from O’Reilly, then you’ll automatically get a discount code tailored to the number of pre-books you’ve already bought. If you bought them elsewhere, then O’Reilly’s customer service will work to create a comparable discount, though that will obviously be a slower process.

The second component is: how much will the codes cut the price of the final, complete book? That I cannot say. The reason is that I don’t know (nor does anyone) what minimum price O’Reilly will need to charge to cover its costs while taking into account the money already paid. I’m hopeful that if you bought all of the pre-books, then the electronic version of the final book will be very close to free, but again, we have to see where things stand once we reach that point. It might be that the production costs of the complete book mean that it’s still a couple of bucks even at the deepest discount, but we’ll see! One of the exciting things about this experiment is that even my editor and I don’t know exactly how it will all turn out. We really are forging a new trail here, one that I hope will benefit other authors—and, by direct extension, readers—in the future.

‘CSS: The Definitive Guide’, Fourth Edition

  • Mon 1 Oct 2012
  • 1342
  • CSS
    Projects
    Standards
    Writing
  • Six responses

I’m really excited to announce that CSS: The Definitive Guide, Fourth Edition, is being released one piece at a time.

As announced last week on the O’Reilly Tools of Change for Publishing blog, the next edition of CSS:TDG will be released chapter by chapter. As each one is finished, it will go into production right away instead of waiting for the entire omnibus book to be completed. You’ll be able to get each standalone as an e-book, a print-on-demand paper copy, or even as both if that’s how you roll. I’ve taken to calling these “pre-books”, which I hope isn’t too confusing or inaccurate.

There are a lot of advantages to this, which I wrote about in some detail for the TOC post. Boiled down, they are: accuracy, agility, and à la carte. If you have the e-book version, then updates can be downloaded for free as errata are corrected or rewrites are triggered by changes to CSS itself. And, of course, you can buy only the pre-books that interest you, if you don’t feel like you need the whole thing.

I should clarify that not every pre-book is a single chapter; occasionally, more than one chapter of the final product will be bundled together into a single pre-book. For example, Selectors, Specificity, and the Cascade is actually chapters 2 and 3 of the final book combined. It just made no sense to sell them separately, so we didn’t. “Values, Units, and Colors,” on the other hand, is Chapter 4 all by itself. (So if anyone was wondering about the pricing differences between those two pre-books, there’s your explanation.)

If you want to see what the e-book versions are like, CSS and Documents (otherwise known as Chapter 1) has been given the low, low price of $0.00. Give it a whirl, see if you like the way the pre-books work as bits.

My current plan is to work through the chapters sequentially, but I’m always willing to depart from that plan if it seems like a good idea. What amuses me about all this is the way the writing of CSS: The Definitive Guide has come to mirror CSS itself—split up into modules that can be tackled independently of the others, and eventually collected into a snapshot tome that reflects a point in time instead of an overarching version number.

Every pre-book is a significantly updated version of its third-edition counterpart, though of course a great deal of material has stayed the same. In some cases I rewrote or rearranged existing sections for greater clarity, and in all but “CSS and Documents” I’ve added a fair amount of new material. I think they’re just as useful today as the older editions were in their day, and I hope you’ll agree.

Just to reiterate, these are the three pre-books currently available:

  • CSS and Documents (free) — the basics of CSS and how it’s associated with HTML, covering things like link and style as well as obscure topics like HTTP header linking
  • Selectors, Specificity, and the Cascade — including all of the level 3 selectors, examples of use, and how conflicts are resolved
  • Values, Units, and Colors — fairly up to date, including HSL/HSLa/RGBa and the full run of X11-based keywords, and also the newest units except for the very, very latest—and as they firm up and gain support, we’ll add them into an update!

As future pre-books come out, I’ll definitely announce them here and in the usual social spaces. I really think this is a good move for the book and the topic, and I’m very excited to explore this method of publishing with O’Reilly!

Defining ‘ch’

  • Tue 15 May 2012
  • 0937
  • CSS
    Standards
  • 19 responses

I’m working my way through a rewrite of Two Salmon (more on that anon), and I just recently came to the ch unit. Its definition in the latest CSS Values and Units module is as follows:

ch unit

Equal to the advance measure of the “0” (ZERO, U+0030) glyph found in the font used to render it.

…and that’s it. I had never heard the term “advance measure” before, and a bit of Googling for font "advance measure" only led me to copies of the CSS Values and Units module and some configuration files for the Panda 3D game engine. So I asked the editor and it turns out that “advance measure” is a CSS-ism that corresponds to the term “advance width”, which I had also never heard before but which yielded way more Google results. Wikipedia’s entry for “Font” has this definition:

Glyph-level metrics include … the advance width (the proper distance between the glyph’s initial pen position and the next glyph’s initial pen position)…

My question for the font geeks in the room is this: is that the generally accepted definition for “advance width”? If not, is there a better definition out there; and if so, where? I’d like to be able to recommend the best known definition for inclusion in the specification; or, if there’s no agreement as to the best, then at least a good one. The Wikipedia definition certainly sounds good, assuming it’s accurate. Is it?

In CSS terms, if I’ve understood things correctly, 1ch is equal to the width of the character box for the glyph for “0”. In other words, if I were to create a floated element with just a “0” and no whitespace between it and the element’s open and close tags, then the float’s width would be precisely 1ch. But that’s if I’ve understood things correctly. If I haven’t, I hope I’ll be corrected soon!
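If that understanding holds, the unit’s most natural use is sizing boxes in character-based terms. A quick sketch (the selector and measurement are invented for illustration, and as of this writing ch support is far from universal):

```css
/* 1ch is the advance measure of the font's "0" glyph, so this box
   is roughly 60 zeros wide. Particularly handy for monospaced
   content, where every glyph shares the same advance measure. */
pre {
  width: 60ch;
}
```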

Linear Gradient Keywords

  • Thu 26 Apr 2012
  • 1454
  • CSS
    Standards
  • 10 responses

Linear gradients in CSS can lead to all kinds of wacky, wacky results—some of them, it sometimes seems, in the syntax itself.

Let me note right up front that some of what I’m talking about here isn’t widely deployed yet, nor for that matter even truly set in stone. Close, maybe, but there could still be changes. Even if nothing actually does change, this isn’t a “news you can use RIGHT NOW” article. Like so much of what I produce, it’s more of a stroll through a small corner of CSS, just to see what we might see.

For all their apparent complexity, linear gradients are pretty simple. You define a direction along which the gradient progresses, and then list as many color stops as you like. In doing so, you describe an image with text, sort of like SVG does. That’s an important point to keep in mind: a linear (or radial) gradient is an image, just as much as any GIF or PNG. That means, among other things, that you can mix raster images and gradient images in the background of an element using the multiple background syntax.
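For instance, because a gradient is an image, it can sit in the same background stack as a raster file. A quick sketch (the file name and selector are made up for illustration):

```css
.banner {
  background:
    url(logo.png) no-repeat top right,  /* raster layer, drawn on top */
    linear-gradient(45deg, red, blue);  /* gradient layer underneath */
}
```

The first listed layer is painted on top, so the gradient shows through wherever the raster image doesn’t cover.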

But back to gradients. Here’s a very simple gradient image:

linear-gradient(45deg, red, blue)

The 45deg defines the gradient line, which is the line that defines how the gradient progresses. The gradient line always passes through the center of the background area, and its specific direction is declared by you, the author. In this case, it’s been declared to point toward the 45-degree angle. red and blue are the color stops. Since the colors don’t have stop distances defined, the distances are assumed to be 0% and 100%, respectively, which means you get a gradient blend from red to blue that progresses along the gradient line.

You can create hard stops, too:

linear-gradient(45deg, green 50%, lightblue 50%)

That gets you the result shown in Figure 1, to which I’ve added (in Photoshop) an arrow showing the direction of the gradient line, as well as the centerpoint of the background area. Each individual “stripe” in the gradient is perpendicular to the gradient line; that’s why the boundary between the two colors at the 50% point is perpendicular to the gradient line. This perpendicularness is always the case.

Now, degrees are cool and all (and they’ll be changing from math angles to compass angles in order to harmonize with animations, but that’s a subject for another time), but you can also use directional keywords. Two different kinds, as it happens.

The first way to use keywords is to just declare a direction, mixing and matching from the terms top, bottom, right, and left. The funky part is that in this case, you’re declaring the direction the gradient line comes from, not that toward which it’s going; that is, you specify its origin instead of its destination. So if you want your gradient to progress from the bottom left corner to the top right corner, you actually say bottom left:

linear-gradient(bottom left, green 50%, lightblue 50%)

But bottom left does not equal 45deg, unless the background area is exactly square. If the area is not square, then the gradient line goes from one corner to another, with the boundary lines perpendicular to that, as shown in Figure 2. Again, I added a gradient line and centerpoint in Photoshop for clarity.

Of course, this means that if the background area resizes in either direction, then the angle of the gradient line will also change. Make the element taller or more narrow, and the line will rotate counter-clockwise (UK: anti-clockwise); go shorter or wider and it will rotate clockwise (UK: clockwise). This might well be exactly what you want. It’s certainly different than a degree angle value, which will never rotate due to changes in the background’s size.

The second way to use keywords looks similar, but has quite different results. You use the same top/bottom/left/right terms, but in addition you prepend the to keyword, like so:

linear-gradient(to top right, green 50%, lightblue 50%)

In this case, it’s clear that you’re declaring the gradient line’s destination and not its origin; after all, you’re saying to top right. However, when you do it this way, you are not specifying the top right corner of the background area. You’re specifying a general topward-and-rightward direction. You can see the result of the previous sample in Figure 3; once more, Photoshop was used to add the gradient line.

Notice the hard-stop boundary line. It’s actually stretching from top left to bottom right (neither of which is top right). That’s because with the to keyword in front, you’re triggering what’s been referred to as “magic corners” behavior. When you do this, no matter how the background area is (re)sized, that boundary line will always stretch from top left to bottom right. Those are the magic corners. The gradient line thus doesn’t point into the top right corner, unless the background area is perfectly square—it points into the top right quadrant (quarter) of the background area. Apparently the term “magic quadrants” was not considered better than “magic corners”.

The effect of changing the background area’s size is the same as before; decreasing the height or increasing the width of the background area will deflect the gradient line clockwise, and the opposite change to either axis will produce the opposite deflection. The only difference is the starting condition.

Beyond all this, if you want keywords that always point toward a destination corner (as in Figure 2, but specifying the destination instead of the origin), that doesn’t appear to be an option. Nor can you declare an origin quadrant. Having the gradient line always traverse from corner to corner means declaring the origin of the gradient line (Figure 2). If you want the “magic corners” effect, where the 50% color-stop line runs from corner to corner and the gradient line’s destination is flexible, then you declare a destination quadrant (Figure 3).

As for actual support: as of this writing, only Firefox and Opera support “magic corners”. All current browsers—in Explorer’s case, that means IE10—support angles and non-magic keywords, which means Opera and Firefox support both keyword behaviors. Nobody has yet switched from math angles to compass angles. (I used 45deg very intentionally, as it’s the same direction in either system.)
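In practice, that support picture means vendor prefixes. A hedged sketch of how the same gradient might be declared in mid-2012 (which prefixes you actually need varies by browser version, so treat this as illustrative):

```css
.grad {
  background: -moz-linear-gradient(45deg, red, blue);     /* Firefox */
  background: -webkit-linear-gradient(45deg, red, blue);  /* Safari, Chrome */
  background: -o-linear-gradient(45deg, red, blue);       /* Opera */
  background: -ms-linear-gradient(45deg, red, blue);      /* IE10 previews */
  background: linear-gradient(45deg, red, blue);          /* unprefixed */
}
```

(45deg is a safe choice here precisely because it means the same thing in math angles and compass angles.)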

That’s the state of things with linear gradients right now. I’m interested to know what you think of the various keyword patterns and behaviors—I know I had some initial trouble grasping them, and having rather different effects for the two patterns seems like it will be confusing. What say you?

Firefox Failing localStorage Due to Cookie Policy

  • Wed 25 Apr 2012
  • 1007
  • Browsers
    Standards
    Web
  • Seven responses

I recently stumbled over a subtle interaction between cookie policies and localStorage in Firefox. Herewith, I document it for anyone who might run into the same problem (all four of you) as well as for you JS developers who are using, or thinking about using, locally stored data. Also, there’s a Bugzilla report, so either it’ll get fixed and then this won’t be a problem or else it will get resolved WONTFIX and I’ll have to figure out what to do next.

The basic problem is, every newfangled “try code out for yourself” site I hit is just failing in Firefox 11 and 12. Dabblet, for example, just returns a big blank page with the toolbar across the top, and none of the top-right buttons work except for the Help (“?”) button. And I write all that in the present tense because the problem still exists as I write this.

What’s happening is that any attempt to access localStorage, whether writing or reading, returns a security error. Here’s an anonymized example from Firefox’s error console:

Error: uncaught exception: [Exception... "Security error" code: "1000" nsresult: "0x805303e8 (NS_ERROR_DOM_SECURITY_ERR)" location: "example.com/code.js Line: 666"]

When you go to line 666, you discover it refers to localStorage. Usually it’s a write attempt, but reading gets you the same error.

But here’s the thing: it only does this if your browser preferences are set so that, when it comes to accepting cookies, the “Keep until:” option is set to “ask me every time”. If you change that to either of the other two options, then localStorage can be written and read without incident. No security errors. Switch it back to “ask me every time”, and the security errors come back.

Just to cover all the bases regarding my configuration:

  1. Firefox is not in Private Browsing mode.
  2. dom.storage.default_quota is 5120.
  3. dom.storage.enabled is true.

Also: yes, I have my cookie policy set that way on purpose. It might not work for you, but it definitely works for me. “Just change your cookie policy” is the new “use a different browser” (which is the new “get a better OS”) and it ain’t gonna fly here.

To my way of thinking, this behavior doesn’t conform to step one of 4.3 The localStorage attribute, which states:

The user agent may throw a SecurityError exception instead of returning a Storage object if the request violates a policy decision (e.g. if the user agent is configured to not allow the page to persist data).

I haven’t configured anything to not persist data—quite the opposite—and my policy decision is not to refuse cookies, it’s to ask me about expiration times so I can decide how I want a given cookie handled. It seems to me that, given my current preferences, Firefox ought to ask me if I want to accept local storage of data whenever a script tries to write to localStorage. If that’s somehow impossible, then there should at least be a global preference for how I want to handle localStorage actions.

Of course, that’s all true only if localStorage data has expiration times. If it doesn’t, then I’ve already said I’ll accept cookies, even from third-party sites. I just want a say on their expiration times (or, if I choose, to deny the cookie through the dialog box; it’s an option). I’m not entirely clear on this, so if someone can point to hard information on whether localStorage does or doesn’t time out, that would be fantastic. I did see:

User agents should expire data from the local storage areas only for security reasons or when requested to do so by the user.

…from the same section, which to me sounds like localStorage doesn’t have expiration times, but maybe there’s another bit I haven’t seen that casts a new light on things. As always, tender application of the Clue-by-Four of Enlightenment is welcome.

Okay, so the point of all this: if you’re getting localStorage failures in Firefox, check your cookie expiration policy. If that’s the problem, then at least you know how to fix it—or, as in my case, why you’ll continue to have localStorage problems for the next little while. Furthermore, if you’re writing JS that interacts with localStorage or a similar local-data technology, please make sure you’re looking for security exceptions and other errors, and planning appropriate fallbacks.
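One way to handle that last bit is to treat localStorage access itself as something that can fail. Here’s a minimal sketch, not taken from any particular library: it feature-tests by actually writing a value, since merely touching localStorage can throw under a strict cookie policy, and it falls back to a page-lifetime in-memory store.

```javascript
// Feature-test by writing, not just probing for existence: under a
// strict cookie policy, any localStorage access can throw a security
// error, so the try/catch is doing the real work here.
function storageAvailable() {
  try {
    var key = '__storage_test__';
    window.localStorage.setItem(key, key);
    window.localStorage.removeItem(key);
    return true;
  } catch (e) {
    return false;
  }
}

// Return localStorage when it works; otherwise a same-shaped,
// in-memory stand-in that lasts only as long as the page does.
function makeStore() {
  if (typeof window !== 'undefined' && storageAvailable()) {
    return window.localStorage;
  }
  var data = {};
  return {
    getItem: function (key) {
      return Object.prototype.hasOwnProperty.call(data, key) ? data[key] : null;
    },
    setItem: function (key, value) {
      data[key] = String(value);
    },
    removeItem: function (key) {
      delete data[key];
    }
  };
}
```

The fallback obviously loses persistence, but the page keeps working, which beats a blank screen with only the Help button alive.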

Whitespace in CSS Calculations

  • Tue 10 Apr 2012
  • 1100
  • CSS
    Standards
  • 18 responses

I’ve been messing around with native calculated values in CSS, and there’s something a bit surprising buried in the value format. To quote the CSS3 Values and Units specification:

Note that the grammar requires spaces around binary ‘+’ and ‘-’ operators. The ‘*’ and ‘/’ operators do not require spaces.

In other words, two out of four calculation operators require whitespace around them, and for the other two it doesn’t matter. Nothing like consistency, eh?

This is why you see examples like this:

calc(100%/3 - 2*1em - 2*1px);

That’s actually the minimum number of characters you need to write that particular expression, so far as I can tell. Given the grammar requirements, you could legitimately rewrite that example like so:

calc(100% / 3 - 2 * 1em - 2 * 1px);

…but not like so:

calc(100%/3-2*1em-2*1px);

The removal of the spaces around the ‘-’ operators means the whole value is invalid, and will be discarded outright by browsers.

We can of course say that this last example is kind of unreadable and so it’s better to have the spaces in there, but the part that trips me up is the general inconsistency in whitespace requirements. There are apparently very good reasons, or at least very historical reasons, why the spaces around ‘+’ and ‘-’ are necessary. In which case, I personally would have required spaces around all the operators, just to keep things consistent. But maybe that’s just me.
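In a complete declaration, for a hypothetical three-column layout where each column carries 1px borders and 1em margins on both sides, the minimal-whitespace expression looks like this (the selector and box properties are invented for illustration):

```css
/* Spaces are mandatory around '-' (and '+'); optional around '/' and '*'. */
.column {
  float: left;
  margin: 0 1em;
  border: 1px solid;
  width: calc(100%/3 - 2*1em - 2*1px);
}
```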

Regardless, this is something to keep in mind as we move forward into an era of wider browser support for calc().

Oh, and by the way, the specification also says:

The ‘calc()’ expression represents the result of the mathematical calculation it contains, using standard operator precedence rules.

Unfortunately, the specification doesn’t seem to actually define these “standard operator precedence rules”. This makes for some interesting ambiguities because, as with most standards, there are so many to choose from. For example, 3em / 2 * 3 could be “three em divided by two, with the result multiplied by three” (thus 4.5em) or “three em divided by six” (thus 0.5em), depending on whether you privilege multipliers over dividers or vice versa.

I’ve looked around the Values and Units specification but haven’t found any text defining the actual rules of precedence, so I’m going to assume US rules (which would yield 4.5em) unless proven otherwise. Initial testing seems to bear this out, but not every browser line supports these sorts of expressions yet, so there’s still plenty of time for them to introduce inconsistencies. If you want to be clear about how you want your operators to be resolved, use parentheses: they trump all. If you want to be sure 3em / 2 * 3 resolves to 4.5em, then write it (3em / 2) * 3, or (3em/2)*3 if you care to use inconsistent whitespacing.

Customizing Your Markup

  • Wed 28 Mar 2012
  • 1055
  • (X)HTML
    Commentary
    HTML5
    Microformats
    semantic web
    Standards
  • 16 responses

So HTML5 allows you (at the moment) to create your own custom elements. Only, not really.

(Ed. note: this post has been corrected since its publication, and a followup clarification has been posted.)

Suppose you’re creating a super-sweet JavaScript library to improve text presentation—like, say, TypeButter—and you need to insert a bunch of elements that won’t accidentally pick up pre-existing CSS. That rules span right out the door, and anything else would be either a bad semantic match, likely to pick up CSS by mistake, or both.

Assuming you don’t want to spend the hours and lines of code necessary to push ahead with span and a whole lot of dynamic CSS rewriting, the obvious solution is to invent a new element and drop that into place. If you’re doing kerning, then a kern element makes a lot of sense, right? Right. And you can certainly do that in browsers today, as well as years back. Stuff in a new element, hit it up with some CSS, and you’re done.

Now, how does this fit with the HTML5 specification? Not at all well. HTML5 does not allow you to invent new elements and stuff them into your document willy-nilly. You can’t even do it with a prefix like x-kern, because hyphens aren’t valid characters for element names (unless I read the rules incorrectly, which is always possible).

No, here’s what you do instead (corrected 10 Apr 12):

  1. Define the element customization you want with an element element. That’s not a typo.
  2. To your element element, add an extends attribute whose value is the HTML5 element you plan to extend. We’ll use span, but you can extend any element.
  3. Now add a name attribute that names your custom “element” name, like x-kern.
  4. Okay, you’re ready! Now anywhere you want to add a customized element, drop in the element named by extends and then supply the name via an is attribute.

Did you follow all that? No? Okay, maybe this will make it a bit less unclear. (Note: the following code block was corrected 10 Apr 12.)

<element extends="span" name="x-kern"></element>
<h1>
<span is="x-kern" style="…">A</span>
<span is="x-kern" style="…">u</span>
<span is="x-kern" style="…">t</span>
<span is="x-kern" style="…">u</span>
mn
</h1>
<p>...</p>
<p>...</p>
<p>...</p>

(Based on markup taken from the TypeButter demo page. I simplified the inline style attributes that TypeButter generates for purposes of clarity.)

So that’s how you create “custom elements” in HTML5 as of now. Which is to say, you don’t. All you’re doing is attaching a label to an existing element; you’re sort of customizing an existing element, not creating a customized element. That’s not going to help prevent CSS from being mistakenly applied to those elements.

Personally, I find this a really, really, really clumsy approach—so clumsy that I don’t think I could recommend its use. Given that browsers will accept, render, and style arbitrary elements, I’d pretty much say to just go ahead and do it. Do try to name your elements so they won’t run into problems later, such as prefixing them with an “x” or your username or something, but since browsers support it, may as well capitalize on their capabilities.

I’m not in the habit of saying that sort of thing lightly, either. While I’m not the wild-eyed standards-or-be-damned radical some people think I am, I have always striven to play within the rules when possible. Yes, there are always situations where you work counter to general best practices or even the rules, but I rarely do so lightly. As an example, my co-founders and I went to some effort to play nice when we created the principles for Microformats, segregating our semantics into attribute values—but only because Tantek, Matt, and I cared a lot about long-term stability and validation. We went as far as necessary to play nice, and not one millimeter further, and all the while we wished mightily for the ability to create custom attributes and elements.

Most people aren’t going to exert that much effort: they’re going to see that something works and never stop to question if what they’re doing is valid or has long-term stability. “If the browser let me do it, it must be okay” is the background assumption that runs through our profession, and why wouldn’t it? It’s an entirely understandable assumption to make.

We need something better. My personal preference would be to expand the “foreign elements” definition to encompass any unrecognized element, and let the parser deal with any structural problems like lack of well-formedness. Perhaps also expand the rules about element names to permit hyphens, so that we could do things like x-kern or emeyer-disambiguate or whatever. I could even see my way clear to defining a way to let an author list their customized elements. Say, something like <meta name="custom-elements" content="kern lead follow embiggen shrink"/>. I just made that up off the top of my head, so feel free to ignore the syntax if it’s too limiting. The general concept is what’s important.

The creation of customized elements isn’t a common use case, but it’s an incredibly valuable ability, and people are going to do it. They’re already doing it, in fact. It’s important to figure out how to make the process of doing so simpler and more elegant.

Invented Elements

  • Fri 23 Mar 2012
  • 1016
  • (X)HTML
    Browsers
    DOM
    HTML5
    Standards
    Web
  • 14 responses

This morning I caught a pointer to TypeButter, which is a jQuery library that does “optical kerning” in an attempt to improve the appearance of type. I’m not going to get into its design utility because I’m not qualified; I only notice kerning either when it’s set insanely wide or when it crosses over into keming. I suppose I’ve been looking at web type for so many years, it looks normal to me now. (Well, almost normal, but I’m not going to get into my personal typographic idiosyncrasies now.)

My reason to bring this up is that I’m very interested by how TypeButter accomplishes its kerning: it inserts kern elements with inline style attributes that bear letter-spacing values. Not span elements, kern elements. No, you didn’t miss an HTML5 news bite; there is no kern element, nor am I aware of a plan for one. TypeButter basically invents a specific-purpose element.

I believe I understand the reasoning. Had they used span, they would’ve likely tripped over existing author styles that apply to span. Browsers these days don’t really have a problem accepting and styling arbitrary elements, and any that do would simply render type their usual way. Because the markup is script-generated, markup validation services don’t throw conniption fits. There might well be browser performance problems, particularly if you optically kern all the things, but used in moderation (say, on headings) I wouldn’t expect too much of a hit.
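To illustrate the point about styling arbitrary elements (this rule is invented for illustration; TypeButter actually writes per-element inline style attributes rather than relying on author CSS):

```css
/* Unknown elements like kern render as unstyled inline boxes, so
   they can be selected and styled like any other element. */
kern {
  letter-spacing: -0.05em;  /* invented value, for illustration only */
}
```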

The one potential drawback I can see, as articulated by Jake Archibald, is the possibility of a future kern element that might have different effects, or at least be styled by future author CSS and thus get picked up by TypeButter’s kerns. The currently accepted way to avoid that sort of problem is to prefix with x-, as in x-kern. Personally, I find it deeply unlikely that there will ever be an official kern element; it’s too presentationally focused. But, of course, one never knows.

If TypeButter shifted to generating x-kern before reaching v1.0 final, I doubt it would degrade the TypeButter experience at all, and it would indeed be more future-proof. It’s likely worth doing, if only to set a good example for libraries to follow, unless of course there’s a downside I haven’t thought of yet. It’s definitely worth discussing, because as more browser enhancements are written, this sort of issue will come up more and more. Settling on some community best practices could save us some trouble down the road.

Update 23 Mar 12: it turns out custom elements are not as simple as we might prefer; see the comment below for details. That throws a fairly large wrench into the gears, and requires further contemplation.
