February 14, 2013

Threaded HTML parser, background blending and welcoming Opera

Peter Beverloo

A wild Last Week in Chromium and WebKit appears! This update covers the 1,654 commits made last week: 958 for Chromium and 696 for WebKit.

In case you didn’t hear this elsewhere yet: Opera has announced that it will start using the Chromium port of WebKit as the rendering engine in its browsers. This is very exciting news, even though I’m sad to see an excellent engine like Presto leave the rendering engine playing field. Meanwhile, Apple has started upstreaming its iOS port to WebKit in earnest!

It’s now possible to change the dock side of Web Inspector by dragging its toolbar. There’s now an option to show composited layer borders and the JavaScript tokenizer can now detect round braces. Also recently introduced is the Continuous Painting Mode and the possible upcoming addition of a console.table() function.

Adam, Eric and Tony have been working on changing WebKit’s HTML parser to handle tokenization and parts of parsing on a background thread. On resource constrained systems, such as the Nexus 7 tablet, this yields improvements of at least 10%, with the maximum stop time going down by roughly 50%. Some of last week’s fixes focused on the preload scanner, timing of the load event and the XSS auditor.

WebKit’s Media Stream implementation has been enhanced with support for DTMF. Mike will be working on support for X-Content-Type-Options:nosniff and edit actions have been added for Bold and Italic commands. document.activeElement won’t return non-focusable elements anymore, the formenctype attribute now defaults to an empty string and FocusEvent got a constructor.

Rik started working on adding support for the background-blend-mode CSS property to WebKit, which determines the blending mode of the background image. The vmax unit has been implemented, completing the vh, vw, vmin and vmax group, and language selection for the ::cue pseudo-element is now available. Grids now recompute their logical height after row or column changes, and WebKit can now parse grid-auto-flow.

In a series of just over a dozen commits, Gregg imported a slightly modified Khronos WebGL conformance test suite into WebKit. Sharing test suites is great, and I hope we can see more of this!

Other changes that occurred last week:

  • The TestWebKitAPI test suite now works for iOS.
  • The TestRunner library has been turned into a component for the Chromium build.
  • An experimental gyp-based build has landed for the Gtk port, with build system discussions back on again.
  • Jer aligned WebKit’s Encrypted Media implementation with the latest specification updates.
  • Google’s Chief Apple Polisher, Nico Weber, became a WebKit Reviewer. Congratulations!
  • A flag has been added to Chromium for toggling composition of fixed positioned elements.
  • Some plumbing by John on the Fullscreen API for Android.

No promises, but hopefully another article next week.

By Peter Beverloo at February 14, 2013 12:44 AM

February 07, 2013

A look into Custom Filters reference implementation

Adobe Web Platform

[image: a Custom Filters effect applied to page content]

Over the past two years, my team at Adobe has been actively working on the CSS Custom Filters specification (formerly CSS Shaders), which is just one part of the greater CSS Filters specification. Alongside the spec work, we have been working on the CSS Custom Filters implementation in WebKit, so I’ve decided to write this blog post to explain how it is all being implemented, giving a high-level conceptual overview of the moving parts.

Support for OpenGL shaders on general DOM content has been one of the highlighted features of the latest Chrome releases. Users and web developers are excited about the ability to apply cinematic effects to their pages.

Shaders, Shaders, Shaders

I’ve often been asked “what is a shader?” Well, shaders are short programs, run on the GPU, that – unlike regular programs – deal solely with bitmaps (textures) and 3D meshes, performing specific operations on pixels (called fragments) and vertices. You can get a more in-depth explanation of shaders on Wikipedia.

Roughly speaking, the reason why these small programs run on your graphics card’s processor (the GPU) instead of the regular CPU is that GPUs are very good at performing such math operations in parallel. While the GPU is busy performing graphical operations, your CPU is idle and ready to perform other tasks. Everything is faster and snappier.
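To make the idea concrete, here is a tiny CPU-side sketch of what a fragment shader conceptually is: a pure function applied independently to every pixel. The `Pixel` struct and `grayscaleShader` function are illustrative inventions, not WebKit or GLSL code; the luma weights are the standard Rec. 601 ones.

```cpp
#include <cstdint>

// Conceptual CPU-side analogue of a fragment shader: a pure function that
// maps one input pixel to one output pixel. A GPU runs one such invocation
// per fragment, massively in parallel.
struct Pixel { uint8_t r, g, b; };

// Hypothetical grayscale "shader" using the standard Rec. 601 luma weights.
inline Pixel grayscaleShader(Pixel in)
{
    uint8_t luma = static_cast<uint8_t>(0.299 * in.r + 0.587 * in.g + 0.114 * in.b);
    return { luma, luma, luma };
}
```

A real fragment shader would be written in GLSL and executed once per fragment on the GPU; the point here is only the programming model: no shared state, one independent output per input.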

In our case, the texture passed to the graphics pipeline is styled DOM content: your page content. This means that with one line of CSS and a shader, you can get splendid and astonishing results; just take a look at the image at the beginning of this post.

HTML5Rocks has a great post covering the whole idea behind CSS Custom Filters, with an example of their capabilities, so I won’t spend too many words on how to use them; instead, I’ll focus on the general idea behind the implementation. Let’s start!

Parsing and Style Resolution

The life of a Custom Filter starts with its CSS declaration. With the currently implemented WebKit syntax, it might look like the following:

#myElement {
    -webkit-filter: custom(url(vertex.vs) none);
}

This CSS is first encountered by WebKit’s CSSParser, which – as the name might suggest – is the component within WebKit responsible for CSS parsing. The parsed values are then processed by WebKit’s StyleResolver and, from there, a CustomFilterOperation is created. This class contains all the major information about the Custom Filter you’ve asked to use, such as its parameters. It also references the program, a CustomFilterProgram, which is a vertex shader and fragment shader pair that will be applied to the texture (your styled DOM content).

Here’s a picture of the happy event:

PassRefPtr<CustomFilterOperation> StyleResolver::createCustomFilterOperation(WebKitCSSFilterValue* filterValue)
{
    // ...snip...
    RefPtr<StyleCustomFilterProgram> program = StyleCustomFilterProgram::create(vertexShader.release(), fragmentShader.release(), programType, mixSettings, meshType);
    return CustomFilterOperation::create(program.release(), parameterList, meshRows, meshColumns, meshType);
}

Computing and Rendering

The RenderLayer tree is one of the many trees in WebKit. It’s currently used, among other things, to deal with grouped operations such as opacity, stacking contexts and transforms, and to implement scrolling and accelerated compositing: it’s a huge piece of code that defines the word “complexity”. :-)

What happens after the parsing and the style resolution is most interesting: when a DOM element has a filter associated with it, it receives its own RenderLayer. When this happens, the relevant DOM content is painted in an image buffer and then sent to the GPU for the “shading treatment” as a regular texture. All this starts to take place thanks to RenderLayerFilterInfo which, among other things, downloads the shaders as external resources and calls the RenderLayer back to notify it that everything’s ready to repaint.

The computing magic begins when a RenderLayer starts to iterate on all the filters associated with it:

FilterOperations RenderLayer::computeFilterOperations(const RenderStyle* style)
{
    // ...snip...
    for (size_t i = 0; i < filters.size(); ++i) {
        RefPtr<FilterOperation> filterOperation = filters.operations().at(i);
        if (filterOperation->getOperationType() == FilterOperation::CUSTOM) {
            // ...snip...
            CustomFilterGlobalContext* globalContext = renderer()->view()->customFilterGlobalContext();
            RefPtr<CustomFilterValidatedProgram> validatedProgram = globalContext->getValidatedProgram(program->programInfo());
            // ...snip...
        }
    }
}

For every custom filter, a validated version of the program (see next paragraph) is requested; this is then appended to the list of the associated filter operations.

The actual rendering will take place in the FilterEffectRenderer class where the “filter building” operation will start in a surprisingly straightforward manner:

bool FilterEffectRenderer::build(RenderObject* renderer, const FilterOperations& operations)
{
    // ...snip...
    effect = createCustomFilterEffect(this, document, customFilterOperation);
    // ...snip...
}

FilterEffectRenderer::createCustomFilterEffect is the method that will rule them all. Take a look at it, it’s just gorgeous!

A word about security

Among other things, shaders deal with pixels, which means that a texture can be sampled (read). This translates into a potential security risk: we don’t really like the idea of malicious shaders reading your page content. Imagine what would happen if anyone were able to push a shader capable of taking a screenshot of your private messages on some social network.

In light of this, we enforce restrictions on shader code before it’s sent to the GPU. We do this by validating and rewriting the shaders in a controlled and secure fashion; CustomFilterValidatedProgram serves just this purpose. One of its goals is to prevent texture sampling, meaning that shaders cannot read pixels from your page content. Most of the validation work actually relies on ANGLE, an open source library used by many other projects, including WebKit and Gecko, for WebGL validation and portability.
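To give a flavor of this – and only a flavor, since the real CustomFilterValidatedProgram rewrites shaders by walking ANGLE’s parse tree rather than by scanning strings – a toy pre-check in the same spirit could reject any shader source that references a sampling built-in. The function name `shaderLooksSafe` is hypothetical:

```cpp
#include <string>

// Toy illustration, not WebKit code: reject shader sources that mention the
// GLSL sampling built-ins, since texture2D()/textureCube() are what would
// let a shader read rendered page pixels.
bool shaderLooksSafe(const std::string& source)
{
    return source.find("texture2D") == std::string::npos
        && source.find("textureCube") == std::string::npos;
}
```

A string scan like this is trivially defeated (and would also reject harmless comments), which is exactly why the real implementation validates and rewrites at the AST level instead.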

If you want to get an idea of the kind of validation we perform, pay attention to CustomFilterValidatedProgram: it might prove a very instructive read.

Wrapping up

Currently, we’re working with the CSS Working Group to make the CSS Custom Filters syntax more elegant and future-proof. As with any new web feature, you can expect changes. Stay tuned for more information on upcoming improvements!

By Michelangelo De Simone at February 07, 2013 05:00 PM

February 05, 2013

A Visual Method for Understanding WebKit Layout

Adobe Web Platform

My last post was an introduction to the WebKit layout code. It was pretty high level, with a focus on documentation. This post is much more hands on: I will explain some changes that you can make to WebKit’s C++ rendering code to be able to see which classes handle which parts of a web page.

Getting and building the code

This is entirely optional, so if you don’t feel like setting this up or don’t have a machine powerful enough to do the build, feel free to skip to the next section. I will put in links to all of the code that I reference throughout the document, so you should be able to follow along with nothing more than the web browser you are using to read this.

Note that this section does not attempt to be a complete guide to setting up a build environment and getting a working build. That would be an entire blog series of its own!

Getting the code

If you’re still with me, before you build anything, you’ll need to get a copy of the source code. WebKit uses Subversion for source control; however, most developers work with the Git mirror. If you’re on Windows, you should jump directly to the Building the code section below, as you need to have the code checked out to a specific location in order for the code to build. Otherwise, you can continue with this section.

Installing Git on Linux and Mac OS X

Installing Git is pretty straightforward on Linux and Mac OS X. On Linux, I’d suggest that you use your distribution’s package manager, but if you can’t do that for some reason, you can download Git from the Git Homepage.

For recent versions of Mac OS X, Git is included in the Xcode developer tools, and you will need those to build the code anyway. Once you’ve installed Xcode, you will need to install the command line tools, which are located under Xcode -> Preferences -> Downloads in Xcode 4.5.2, and in a similar location in earlier versions.

Downloading the source

Once you have Git installed, grabbing the source is pretty straightforward. First, make sure you have at least 4 gigs of free space. Then, open a terminal, find a suitable directory, and run the following command:

git clone git://git.webkit.org/WebKit.git WebKit

This will take a while, but when it’s done, you’ll have your very own copy of the WebKit source code.

That’s great, but I still don’t get this Git thing

Explaining how to use Git is beyond the scope of this post, but here are a couple of resources:

  • Using Git With WebKit: Documentation on using Git with WebKit
  • Pro Git: Free, professionally written book on Git

Building the code

WebKit is designed to be able to run on many different platforms. To facilitate this, functions like graphics and networking have abstraction layers that allow an implementation for a specific platform to be plugged in. This combination of the platform independent part of WebKit with the low level bindings for a specific platform is called a port.
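The pattern can be sketched in a few lines of C++; the class names below are hypothetical and only illustrate the idea, they are not WebKit’s actual abstraction layer:

```cpp
#include <string>

// Platform-independent code programs against an abstract interface...
struct GraphicsBackend {
    virtual ~GraphicsBackend() = default;
    virtual std::string name() const = 0;
};

// ...and each port plugs in its own implementation at build time.
struct CairoBackend : GraphicsBackend {          // e.g. what a Gtk port might use
    std::string name() const override { return "cairo"; }
};

struct CoreGraphicsBackend : GraphicsBackend {   // e.g. what the Mac port might use
    std::string name() const override { return "CoreGraphics"; }
};
```

In the real tree, the selection happens through the build system rather than virtual dispatch: each port compiles only its own implementation files behind a shared header.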


Note that while there are ports that are operating system specific, like the Mac and Windows ports, there are also ports for platforms like Qt, which is a library that can be used on many different operating systems. The short of it is that in order to build WebKit, you need to choose the proper port to build for your purposes.

The best place to start learning how to build WebKit is the documentation on installing the developer tools to build WebKit. It lists out the most common ports with either instructions in the page or links to the Wiki documentation on how to set up a development environment for that port. I would suggest using the Mac port if you’re on a Mac, and the Windows port if you’re on Windows. If you’re on Linux, you can choose between the Gtk, Qt, and EFL ports. If you don’t care or don’t know which one to choose, you could use the Gtk port if you’re using Gnome, and the Qt port if you’re using KDE.

If you’re building a port other than the Mac or Windows port, all of the build instructions are on the wiki page about that port, which I linked to in the paragraph above. Otherwise, you will want to read the build instructions for the Mac and Windows ports.

There are more ports than the ones I mentioned here; however, when doing WebKit development, it is probably easiest to start with one of the ports I mentioned above. If you would like to know about others, there is a list of ports on the WebKit wiki.

Help! I followed all the instructions and it still doesn’t work!

If you can’t get your chosen port to build, or if you’re unsure of which port to choose, there is a page of contact information for the WebKit project. I would suggest starting by sending a message to the webkit-help mailing list, as that will get you in contact with people that should be able to help with any build problems or questions that you may encounter. You can also contact the developers on IRC: the #webkit channel on irc.freenode.net is WebKit central.

Visualizing layout

The layout process determines what features of a web page get drawn into which locations. However, when starting out, it can be hard to link the C++ code to the visual result that you see in your browser. Probably the easiest way to make this connection in a visual way is to modify the code that draws everything to the screen. This drawing step happens after layout, and WebKit calls it painting.

Basics of painting

After layout has finished, the process of painting begins. It happens in a very similar way to layout: each rendering class has a paint method that is analogous to the layout method we discussed in the last post. This paint method is responsible for drawing the current renderer and its children onto the display.

The CSS specification defines the painting order for elements, and the WebKit implementation has painting phases for all of these steps. As with many of these things, it is too complex to explain here in its entirety, but the rules of the painting order ensure that all of CSS’s positioning rules are rendered properly. This must be taken into account when deciding where in the individual paint methods changes should be made.

The paint method can be instrumented to draw the outline of the area painted by the class. This can make it much easier to understand which parts of the code create which parts of the rendered page.

What does it look like?

[screenshot: a web page with green outlines drawn around each RenderBlock]

To generate the above screenshot, the RenderBlock::paint method (look in Source/WebCore/rendering/RenderBlock.cpp for the definition) was instrumented to draw a green border around the block itself.

Drawing the outlines

The C++ code that generated those lovely green outlines is the following:

Color color;
color.setRGB(0, 255, 0);
paintInfo.context->setStrokeColor(color, style()->colorSpace());
paintInfo.context->strokeRect(overflowBox, 2);

Let’s break this down line by line:

Color color;

This creates a Color object. It is hopefully not surprising that this object represents a color, and will be used to set the color that we are going to draw our border with.

color.setRGB(0, 255, 0);

This call sets the color for our Color object. It takes 3 numbers between 0 and 255, each one representing the amount of red, green, or blue that makes up the color. For simplicity, I just went with entirely green, but you could do any sort of fancy thing you like here. Setting the color to different values could come in handy if you want to instrument multiple classes at once: each could have a different color, making it easy to tell them apart.
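Building on that idea, a small hypothetical helper (not part of WebKit) could centralize one debug color per instrumented class, matching the green/red scheme used in this post:

```cpp
#include <cstdint>

// Hypothetical helper: one debug color per instrumented renderer class, so
// overlapping outlines are easy to tell apart on screen.
struct DebugColor { uint8_t r, g, b; };

enum class Instrumented { RenderBlock, InlineFlowBox, InlineTextBox };

inline DebugColor debugColorFor(Instrumented which)
{
    switch (which) {
    case Instrumented::RenderBlock:   return { 0, 255, 0 };   // green
    case Instrumented::InlineFlowBox: return { 255, 0, 0 };   // red
    case Instrumented::InlineTextBox: return { 0, 0, 255 };   // blue
    }
    return { 0, 0, 0 };
}
```

Each instrumented paint method would then call `debugColorFor` with its own enumerator instead of hard-coding RGB values, keeping the color assignments in one place.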

paintInfo.context->setStrokeColor(color, style()->colorSpace());

Now things get more complex. paintInfo is an instance of PaintInfo that is passed as an argument to the paint method. PaintInfo is a simple structure that contains the state of the painting operation as well as the object used to actually call the native drawing routines. That object is the context member of PaintInfo. It is a facade that provides a platform independent API for drawing. It is declared in platform/graphics/GraphicsContext.h, and there are corresponding platform specific implementations that it uses depending on which WebKit port is in use.

If you have done graphics programming before (even using HTML’s canvas element), you will probably find the use of the GraphicsContext familiar. The methods may have different names, but the underlying concepts of the API are exactly the same. For example, this setStrokeColor call sets the color that is used by subsequent drawing operations that use this context.

The last piece of this line that I should explain is the style() call. This gets the CSS style information for the RenderBlock, and queries it for the color space. This is one of the things that will need to change when modifying this code to work in paint methods in other objects, since there are some cases when a renderer delegates painting some part of its content to another class that itself is not a renderer. In those cases, that class has a reference to the renderer it is attached to, and the style() method can be called via that member. In most cases, this member is called m_renderer, so this call changes to m_renderer->style()->colorSpace(). I give an example of this case later in this post.

paintInfo.context->strokeRect(overflowBox, 2);

This is the call that actually draws the border rectangle. The overflowBox is the dimensions of the block plus any visible overflow: that’s the entire area of the page that is rendered by this block and its children. The second argument to strokeRect is the width of the line used to draw the rectangle.

Where is this code added?

Of course, this code cannot just be added at random; you need to put it in a specific place in the paint method. As of this writing, the paint method in RenderBlock.cpp looks like the following:

void RenderBlock::paint(PaintInfo& paintInfo, const LayoutPoint& paintOffset)
{
    LayoutPoint adjustedPaintOffset = paintOffset + location();

    PaintPhase phase = paintInfo.phase;

    // Check if we need to do anything at all.
    // FIXME: Could eliminate the isRoot() check if we fix background painting
    // so that the RenderView paints the root's background.
    if (!isRoot()) {
        LayoutRect overflowBox = overflowRectForPaintRejection();
        flipForWritingMode(overflowBox);
        overflowBox.inflate(maximalOutlineSize(paintInfo.phase));
        overflowBox.moveBy(adjustedPaintOffset);
        if (!overflowBox.intersects(paintInfo.rect))
            return;
    }

    bool pushedClip = pushContentsClip(paintInfo, adjustedPaintOffset);
    paintObject(paintInfo, adjustedPaintOffset);
    if (pushedClip)
        popContentsClip(paintInfo, phase, adjustedPaintOffset);

    // Our scrollbar widgets paint exactly when we tell them to,
    // so that they work properly with z-index.  We paint after we painted
    // the background/border, so that the scrollbars will sit above
    // the background/border.
    if (hasOverflowClip() && style()->visibility() == VISIBLE &&
            (phase == PaintPhaseBlockBackground ||
                phase == PaintPhaseChildBlockBackground) &&
            paintInfo.shouldPaintWithinRoot(this))
        layer()->paintOverflowControls(paintInfo.context,
            roundedIntPoint(adjustedPaintOffset), paintInfo.rect);
}

I’m not going to go over this one line at a time, as by the time you read this, this method may or may not look the same. However, I would like to share where I placed the code and the rationale behind it so that hopefully you will still be able to instrument the code yourself even if it has changed, and will be able to think about how to instrument the paint methods in other render classes as well.

When looking for a place to put the code in this method, the first thing I needed was a LayoutRect that represents the area this renderer will paint into. This is the dimensions of the block plus any visible overflow. This value is usually used early on in the paint method to determine if the current block is actually in the area to be painted (paintInfo.rect). In RenderBlock::paint, you can see this in the if statement following the helpful comment “Check if we need to do anything at all.”:

if (!isRoot()) {
    LayoutRect overflowBox = overflowRectForPaintRejection();
    flipForWritingMode(overflowBox);
    overflowBox.inflate(maximalOutlineSize(paintInfo.phase));
    overflowBox.moveBy(adjustedPaintOffset);
    if (!overflowBox.intersects(paintInfo.rect))
        return;
}

So overflowBox is our variable, which you might remember from the instrumentation code example above. Now, we need to determine where to put our instrumentation code. In this case, unless we want to move the definition of overflowBox, we don’t have much choice: it must go in the body of the if statement above. Since we don’t want to draw the outline if the block is not in the area to be painted, we should put it after the return statement at the end, like so:

if (!isRoot()) {
    LayoutRect overflowBox = overflowRectForPaintRejection();
    flipForWritingMode(overflowBox);
    overflowBox.inflate(maximalOutlineSize(paintInfo.phase));
    overflowBox.moveBy(adjustedPaintOffset);
    if (!overflowBox.intersects(paintInfo.rect))
        return;

    Color color;
    color.setRGB(0, 255, 0);
    paintInfo.context->setStrokeColor(color, style()->colorSpace());
    paintInfo.context->strokeRect(overflowBox, 2);
}

And now if we build and run the code, we should see an effect like in the screenshot above. It is also very interesting to launch the web inspector with this active.

More complex paint methods

The paint method of RenderBlock makes for a nice example because it is pretty small, and it is straightforward to see how to instrument it. (Most of the complexity is nicely factored out into helper methods.) However, this is not the case for the paint method in InlineFlowBox. InlineFlowBox is used to manage flows of inline content, like text.

I am not going to reproduce the entirety of InlineFlowBox::paint here, but you should be able to follow along by either opening the file in your favorite editor if you opted to download the code, or by looking at InlineFlowBox.cpp on Trac.

While this is a more complex paint method, the check if painting should happen is right at the top of the method:

LayoutRect overflowRect(visualOverflowRect(lineTop, lineBottom));
overflowRect.inflate(renderer()->maximalOutlineSize(paintInfo.phase));
flipForWritingMode(overflowRect);
overflowRect.moveBy(paintOffset);

if (!paintInfo.rect.intersects(pixelSnappedIntRect(overflowRect)))
    return;

From this, we can glean two pieces of information, just like we did in the RenderBlock example: that overflowRect defines our painted area, and that we should instrument after the if statement that determines if we should paint or not.

Looking at the rest of the method and the plethora of if and return statements, it is less obvious how far into the method our instrumentation code should go. All of those conditions might look very opaque, but you may be happy to know that you don’t need to understand them all to decide where to put the instrumentation code.

Most of the conditions in InlineFlowBox::paint have to do with the different phases of painting. I mentioned earlier that CSS defines the order in which painting should be done: WebKit uses paint phases to implement this. So all these if statements are doing is ensuring that only the proper things are drawn in a given phase. If we were writing production code, we would need to determine in which phase our outline should be drawn, and make sure that it is only drawn in that phase. However, since this is exploratory code, we can just draw the outline in every phase, which ensures that it shows up on top of any other content rendered by this object. The simplest way to do this is to place the instrumentation code as the first thing after it is determined that painting should happen.
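If you did want to restrict the outline to a single phase, the guard would look roughly like this. The enumerator names mirror WebKit’s PaintPhase values, but the helper function itself is hypothetical, not actual WebKit code:

```cpp
// Sketch only: the enumerator names mirror WebKit's PaintPhase, but this
// helper is illustrative rather than real WebKit code.
enum PaintPhase {
    PaintPhaseBlockBackground,
    PaintPhaseChildBlockBackground,
    PaintPhaseForeground,
    PaintPhaseOutline
};

// Drawing the debug outline in exactly one phase means it is painted once,
// instead of being redrawn (and possibly overdrawn) in every phase.
inline bool shouldDrawDebugOutline(PaintPhase phase)
{
    return phase == PaintPhaseForeground;
}
```

In the instrumented paint method you would then wrap the strokeRect call in `if (shouldDrawDebugOutline(paintInfo.phase))`.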

Great! Now we know where the code should go, and we know which variable contains our bounding box, so we can take the code from earlier, changing overflowBox to overflowRect and changing the color to red just in case we want to have this active at the same time as the changes to RenderBlock:

Color color;
color.setRGB(255, 0, 0);
paintInfo.context->setStrokeColor(color, style()->colorSpace());
paintInfo.context->strokeRect(overflowRect, 2);

That’s the right direction, but if you do this, you will find that it doesn’t compile: the compiler can’t find a style() method! That’s because we’ve discovered that InlineFlowBox is not a render object; it is a helper used by render objects like RenderInline.

Luckily, these helper objects have a reference to their render object, accessible via the m_renderer instance variable. This allows us to make one more change to our code which will allow it to compile:

Color color;
color.setRGB(255, 0, 0);
paintInfo.context->setStrokeColor(color, m_renderer->style()->colorSpace());
paintInfo.context->strokeRect(overflowRect, 2);

And then you can see the InlineFlowBoxes in all their glory:

[screenshot: a web page with red outlines drawn around each InlineFlowBox]

What’s next?

If you didn’t get a working build before reading the whole thing, I would suggest going back, setting up a build, and then trying out the code. It really is a great way to learn how layout (and even a bit of painting) works.

If you have a working build and were following along at home, you should try instrumenting the paint methods of some of the other classes in Source/WebCore/rendering (InlineBox and InlineTextBox are good places to start), and see how these classes map to the visual elements of a web page. You might be surprised by what you discover.

And if that isn’t enough for you, there are plenty of issues on bugs.webkit.org that you could dig into. (And if you can’t find something you find interesting there, I’m sure that someone on the #webkit channel on irc.freenode.net could give suggestions.) After all, there’s no better way to learn than to really work with the code.

Of course, I will also be writing more posts here. Stay tuned for my next post on WebKit’s layout tests and the infrastructure that runs them.

By Bem Jones-Bey at February 05, 2013 05:14 PM

Planet WebKit

Last updated:
February 16, 2013 12:53 PM
All times are UTC.
