Author Archives: David Schleef

Update on the GStreamer DeckLink Elements

Posted on by David Schleef

A little more than a year ago, I posted about GStreamer support for SDI and HD-SDI using DeckLink hardware from BlackMagic Design.  In the meantime, the decklinksrc and decklinksink elements have grown up a bit, and work with most devices in the DeckLink and Intensity line of hardware.  A laundry list of features:

  • Multiple device support
  • Multiple input and output support on a single device
  • HDMI, component analog, and composite input and output with Intensity Pro
  • Analog, AES/EBU, and embedded (HDMI/SDI) audio input
  • SDI, HD-SDI, and Optical SDI input and output with DeckLink
  • Works on Linux, OS X (new), and Windows
  • 8-bit and 10-bit support for SDI/HD-SDI
  • Supports most video modes in the DeckLink SDK
  • Implements GstPropertyProbe interface for proper detection as a source element
  • Lots of bug fixes from previous releases

Kudos to Blake Tregre and Joshua Doe for submitting several of the patches implementing the above list.  There are still a bunch of outstanding bug reports (some with patches) that need to be fixed.  Several of these relate to output, which is currently rather clumsy and broken.
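As a usage sketch, a minimal capture-and-display pipeline looks something like the following.  The exact property names and mode values vary by version, so check gst-inspect decklinksrc before copying this; the mode value below is a placeholder:

```shell
# Display input from a DeckLink device (GStreamer 0.10 era elements).
# The available video modes and connections are enumerated by
# `gst-inspect decklinksrc`; mode=9 here is only a placeholder.
gst-launch decklinksrc mode=9 ! ffmpegcolorspace ! xvimagesink
```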

People have asked me about automatically detecting the video mode for input.  Some DeckLink hardware has this capability, but not any of the hardware I have to test with.  However, I’ve had some success with cycling through the video modes at the application level, with a 200 ms timeout between modes, stopping when it finds a mode that generates output.  This works OK, except that it tends to confuse 60i and 30p modes (and 50i with 25p), which can be differentiated with a bit of processing on the images.  At some point I’d like to integrate this functionality into decklinksrc, but I wouldn’t be upset if someone else did it first.
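The cycling logic is simple enough to sketch.  Everything below except the general shape is hypothetical: the probe callback stands in for setting the decklinksrc mode and watching for buffers, and this sketch doesn’t attempt the 60i-vs-30p disambiguation:

```python
def autodetect_mode(modes, probe_mode, timeout=0.2):
    """Try each video mode in turn and return the first one that
    produces output within `timeout` seconds, or None if none do.

    `probe_mode(mode, timeout)` is a hypothetical callback: in a real
    application it would set the capture element's video mode, wait up
    to `timeout` seconds, and return True if any frames arrived."""
    for mode in modes:
        if probe_mode(mode, timeout):
            return mode
    return None

# Stubbed example: pretend the hardware is delivering 1080i60.
def fake_probe(mode, timeout):
    return mode == "1080i60"

print(autodetect_mode(["576i50", "720p60", "1080i60", "1080p30"], fake_probe))
# prints 1080i60
```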

Posted in entropywave, gstreamer, video | Tagged blackmagic, decklink, decklinksink, decklinksrc, gstreamer, hd-sdi, hdmi, intensity pro, pro video, sdi, video capture

HDTV Color Matrix

Posted on by David Schleef

Digital video is a time series of pictures; each picture is an array of pixels, and each pixel is three numbers representing how brightly the red, green, and blue LCD dots (or CRT phosphors, if you’re old school) glow.  The representation in memory, however, is not of RGB values but of YCbCr values, which one calculates by multiplying the RGB values by a 3×3 matrix and then adding or subtracting some offsets.  This converts the components into a gray value (Y, or luma) and Cb and Cr (chroma blue and chroma red).  This is done because the human visual system is more sensitive to variations in luma than to variations in chroma (er, actually luminance and chrominance, see below).  For the same reason, typically half or 3/4 of the chroma values are dropped and not stored; the missing ones are interpolated when converting back to RGB for display.
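As a concrete sketch of the matrixing step, here is the HD (BT.709) version in Python, with component values normalized to [0, 1] and the storage offsets left out for simplicity:

```python
def rgb_to_ycbcr_709(r, g, b):
    """Convert gamma-corrected R'G'B' (each in [0, 1]) to Y'CbCr using
    the BT.709 (HD) matrix.  Y' is in [0, 1]; Cb and Cr are centered
    on 0 (a storage format would scale and add an offset)."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma
    cb = (b - y) / 1.8556                      # chroma blue
    cr = (r - y) / 1.5748                      # chroma red
    return y, cb, cr

# A pure gray pixel (R = G = B) lands at Cb = Cr = 0, which is part of
# what makes the chroma channels cheap to subsample.
print(rgb_to_ycbcr_709(0.5, 0.5, 0.5))
```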

There are various theoretical reasons for choosing a particular matrix, and I’ve recently become interested in whether these reasons are actually valid.  For historical reasons, early digital video copied analog precedent and used a matrix that is theoretically suboptimal.  This matrix is used in standard-definition (SD) video, but was changed to the theoretically correct matrix for high-definition (HD) video.  There are other technical differences between SD and HD video, but this is the most significant for color accuracy.

For some time, I’ve been curious how much of a visual difference there is between the two matrices.  Here are two stills from Big Buck Bunny: the first is the original, correct image; the second is the same picture converted to YCbCr with the HDTV matrix and then back to RGB with the SDTV matrix.  (To best see the differences, open the images in separate browser tabs and flip between them.)

[Images: the two Big Buck Bunny stills]

If you are like me, you probably have trouble seeing the difference side by side, but flipping between them makes it fairly obvious.  I chose this image because it has relatively saturated green and greenish-yellow, which shows off some of the largest differences.
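The mismatch can also be reproduced numerically: encode a pixel with the HD (BT.709) matrix, decode it with the SD (BT.601) matrix, and compare.  Saturated green shows a large error, as expected.  Values are normalized to [0, 1] and offsets are omitted:

```python
def encode_709(r, g, b):
    """R'G'B' -> Y'CbCr with the BT.709 (HD) matrix."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return y, (b - y) / 1.8556, (r - y) / 1.5748

def decode_601(y, cb, cr):
    """Y'CbCr -> R'G'B' with the BT.601 (SD) matrix."""
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Saturated green, encoded as HD but mistakenly decoded as SD:
r, g, b = decode_601(*encode_709(0.0, 1.0, 0.0))
print(round(r, 3), round(g, 3), round(b, 3))  # prints 0.078 1.172 0.032

# Gray is unaffected: its chroma is zero, so the two matrices agree.
```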

The RGB values used in computation are not proportional to the actual amount of light output by a monitor.  This is known as gamma correction, and it is a clever byproduct of the fact that the response curve of television phosphors (the amount of light output for a given voltage) is approximately the inverse of the response curve of the eye (the perceived brightness for a given amount of light).  Thus voltage became synonymous with perceived brightness, televisions needed fewer vacuum tubes, and we’re left with that legacy.  But it’s not a bad legacy, because just like dropping chroma values, it makes images easier to compress.
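A simplified model of this relationship, assuming a pure 2.2 power law (real transfer curves such as BT.709’s also include a small linear segment near black):

```python
GAMMA = 2.2  # approximate CRT / perceptual exponent

def gamma_encode(linear):
    """Linear light (0..1) -> gamma-corrected code value (0..1)."""
    return linear ** (1.0 / GAMMA)

def gamma_decode(code):
    """Gamma-corrected code value -> linear light."""
    return code ** GAMMA

# A code value of 0.5 corresponds to far less than half the light,
# even though it looks roughly "half bright" to the eye:
print(round(gamma_decode(0.5), 3))  # prints 0.218
```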

However, color comes along and messes with that simplicity a bit.  In color theory, luminance describes how the brain interprets the brightness of a particular pixel; it is proportional to the RGB values in linear light space, i.e., the amount of light emanating from a display.  Luma, in contrast, is computed from the RGB values in gamma-corrected (actually, gamma-compressed) space.  This means that luma doesn’t depend only on luminance; it contains some variation due to color.  That undermines our idea that matrixing RGB values will cleanly separate variations in brightness from variations in color.  How visible is it?  I took the above picture and squashed the luma to a single value, leaving the chroma values the same (HD matrix):

[Image: the luma-squashed picture, HD matrix]

What you see here is that saturated areas appear brighter than the grey areas.  This is chroma (i.e., the color values we use in calculations) feeding into luminance (i.e., the perception of brightness).
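This leakage can be demonstrated numerically with a toy model: encode saturated green with the BT.709 matrix, replace its luma with a fixed value while keeping the chroma, decode back to R'G'B', and compare the relative luminance (computed in linear light, with gamma approximated as a 2.2 power law) against a gray pixel with the same luma:

```python
GAMMA = 2.2  # simplified power-law approximation of the transfer curve

def decode_709(y, cb, cr):
    """Y'CbCr -> gamma-corrected R'G'B' with the BT.709 (HD) matrix."""
    r = y + 1.5748 * cr
    b = y + 1.8556 * cb
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

def luminance(r, g, b):
    """Relative luminance: the BT.709 weights applied in linear light.
    Out-of-gamut (negative) components are clipped to zero."""
    lin = lambda v: max(v, 0.0) ** GAMMA
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

# Chroma of saturated green under BT.709 (its natural luma is 0.7152):
cb, cr = -0.7152 / 1.8556, -0.7152 / 1.5748

# Squash the luma to 0.5 while keeping the chroma:
squashed = decode_709(0.5, cb, cr)

print(round(luminance(*squashed), 3))      # about 0.42
print(round(luminance(0.5, 0.5, 0.5), 3))  # about 0.218, much darker
```

The squashed green carries nearly twice the luminance of a gray pixel with the same luma, which is exactly the "saturated areas appear brighter" effect visible in the picture.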

How much does this matter for image and video compression efficiency?  It’s a minor inefficiency of a subtle visual difference.  In other words, not very much.

Earlier I mentioned that the HD matrix was theoretically more correct than the SD matrix.  What about in practice?  Here’s the same luma-squashed image with the SD matrix.  Notice that there’s a lot more leakage from chroma into luminance, especially in the green leaves:

[Image: the luma-squashed picture, SD matrix]

Posted in video | Tagged chroma, chrominance, color conversion, color matrixing, digital video, format conversion, hd, hdtv, luma, luminance, rgb, sd, sdtv, video compression, ycbcr

New Schrödinger Release

Posted on by David Schleef

I recently added support for 10- and 16-bit encoding and decoding to Schrödinger, so I did a little release. Presenting Schrödinger-1.0.11. Also pushed changes to GStreamer to handle the new features. Although these changes have been in the works for some time, a little prompting from j-b caused me to finish this off, so this will probably appear in VLC soon, too.
This was the last piece needed to create a 10-bit master of Sintel, which I’ve been planning to do for some time.

Posted in dirac, gstreamer, video

Live GStreamer Pipeline Graphing

Posted on by David Schleef

Rather than fix bugs in my existing projects, I decided to have a “me” weekend the last few days.  So I spent some time fixing up and adding new features to a very minor pet project, gst-gtk-widgets.  (It’s a few Gtk+ widgets for doing things related to GStreamer, like a video player widget.  It also doesn’t do everything correctly, so look at the code with caution.)  Specifically, I added the ability to change the pipeline from the UI in gst-launch-gtk, a GUI version of the venerable gst-launch tool.  I also added some live pipeline graphing.  Drumroll… and screenshot.


Screenshot of gst-launch-gtk with preliminary pipeline graphing

Posted in gstreamer

GStreamer SDI Capture Plugins

Posted on by David Schleef

I’m getting ready to push several commits to the gst-plugins-bad source repository that add plugins for capturing SDI and HD-SDI using cards from two different manufacturers: BlackMagic Design’s DeckLink, and Linear Systems’ SDI Master capture card.

The Linear Systems cards are probably better known by their reseller, DVEO.  Entropy Wave uses both of these cards in the E1000 Live Encoder appliance, and we’ve found that, aside from some motherboard incompatibilities with the DeckLink cards, they both work great in Linux.  While we’re primarily interested in live capture at the moment, output has also been implemented.

We slightly prefer the Linear Systems cards, mainly because the drivers are open source, but also because the API allows lower-level access to the hardware, including SDI clocking and raw VANC and HANC data.  It also allows subframe latency; that isn’t implemented in the GStreamer plugin yet, but it will be nice to use in the future.

In comparison, the DeckLink driver and SDK are not open source (which means I can’t fix any bugs), although they conveniently provide open source headers and shim code for interfacing with the SDK.  This allows the GStreamer plugin to be completely open source and legally distributable separately from the SDK, but the plugin will only work if the SDK libraries and driver are present.  On the other hand, optical fiber connections are only available on the DeckLink, and the DeckLink cards tend to be less expensive.

It will take a few weeks for these to be available as part of a GStreamer release; however, they are available in the Media SDK now.

(Reposted from my Entropy Wave blog.)

Posted in entropywave, gstreamer, video

Snotty comment about “Vegetarian-friendly Restaurants”

Posted on by David Schleef

It was a stuffwhitepeoplelike trifecta: a conversation at a vegetarian dinner party about new restaurants in San Francisco.  Inevitably, the topic turned to vegetarian-friendly restaurants, and the non-vegetarians offered some suggestions.  Oops, party foul, bad form.  Offering something without meat in it is not sufficient to be considered “vegetarian friendly”, in the same way that offering at least one thing that is edible is not a sufficient criterion for “decent restaurant”.

Thus, in the spirit of the Bechdel test (and apologies thereto), I offer a minimal standard for a vegetarian-friendly restaurant:

  • There must be at least two main dishes on the menu that are vegetarian
  • Neither of which is butternut squash ravioli
  • Appetizers and salads cannot have gratuitous meat, except as an option

Posted in Uncategorized

FLAC code mirror

Posted on by David Schleef

Since parts of SourceForge have been down for a while, I am making my git mirror of the FLAC CVS repository available.  It is here.  I’ve also made my fairly uninteresting patches available on the Entropy Wave Open Source site, here.

Posted in video | Tagged code.entropywave.com, flac, git, mirror, sourceforge, xiph

Open Video Conference

Posted on by David Schleef

spacer

The Open Video Conference is happening again, and coming up in a few short weeks (October 1-2).  It’s a great mixture of people from both sides of open video: open content and open technology.  I’ll be giving a tutorial that ties the two sides together, explaining to content producers how to use the technology to its fullest.

In association with OVC is the Foundations of Open Media workshop (October 3-4), which is mostly a bunch of video and media hackers getting together to talk about our next steps in taking over the world.

Posted in video

This must be the Orc blog

Posted on by David Schleef

I just released Orc-0.4.6.  It has lots of improvements based on feedback from people using it with GStreamer.  This release is roughly timed for the upcoming GStreamer core/base 0.10.30 releases; it is the recommended version, but you can also use 0.4.5.

Orc roughly follows a three-release cycle timed with GStreamer releases: features, then performance, then bug fixes.  This release was roughly performance plus bug fixes.  After the GStreamer release, there will be an Orc release adding new features, and the next version of GStreamer will depend on that version.  A few weeks later will come performance improvements, then bug fixes in time for the next GStreamer release.

Many of my recent posts have been Orc related.  Must change this…

Posted in orc, Uncategorized

Orc NEON backend

Posted on by David Schleef

The Orc NEON backend is now open-source and part of the 0.4.5 release.  Thanks to Nokia for making this possible.

All of the GStreamer code that previously used liboil has been converted to Orc, and will be part of the upcoming releases.  Plugins from -base that were converted are videotestsrc, videoscale, audioconvert, volume, and adder.  The change in speed is minor for now, since liboil was not used extensively.

The way that GStreamer uses Orc is intended to be very convenient for the end user: by default when compiling from source, Orc will be used if an acceptable version is found; otherwise, backup code (in C) will be used.  This backup code is automatically generated and checked in to git by a developer, so it always exists.  The configure options '--enable-orc' and '--disable-orc' affect this: the former causes configure to require Orc, and a configure failure is a good reminder to upgrade Orc; the latter may be useful on architectures where Orc doesn't generate code yet.  After a transitional phase, Orc will become required, since the more advanced uses of Orc cannot be duplicated by backup functions.
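As a sketch of the two configurations (standard autotools usage, nothing specific to any one module):

```shell
# Require Orc: configure fails if a suitable liborc is not found.
./configure --enable-orc

# Skip Orc and use the pre-generated C backup code instead,
# e.g. on an architecture Orc doesn't target yet.
./configure --disable-orc
```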

Posted in gstreamer, orc