GStreamer Streaming Server Library

Posted on by David Schleef

Introducing the GStreamer Streaming Server Library, or GSS for short.

This post was originally intended to be a release announcement, but I wandered off to work on other projects before the release was 100% complete.  So perhaps this is a pre-announcement.  Or it’s merely an informational piece, with the bonus that the source code repository is in a pretty stable and bug-free state at the moment.  I tagged it with “gss-0.5.0”.

What it is

GSS is a standalone HTTP server implemented as a library.  Its special focus is to serve live video streams to thousands of clients, mainly for use inside an HTML5 video tag.  It’s based on GStreamer, libsoup, and json-glib, and uses Bootstrap and BrowserID in the user interface.

GSS comes with a streaming server application that is essentially a small wrapper around the library.  This application is referred to as the Entropy Wave Streaming Server (ew-stream-server); the code that is now GSS was originally split out of this application.  The app can be found in the tools/ directory in the source tree.

Features

  • Streaming formats: WebM, Ogg, MPEG-TS.  (FLV support is waiting for a flvparse element in GStreamer.)
  • Streams in different formats/sizes/bitrates are bundled into a single “program”.
  • Streaming to Flash via HTTP.
  • Authentication using BrowserID.
  • Automatic conversion from properly formed MPEG-TS to HTTP Live Streaming.
  • Automatic conversion to RTP/RTSP (experimental; works for Ogg/Theora/Vorbis only).
  • Stream upload via HTTP PUT (3 different varieties), Icecast, raw TCP socket.
  • Stream pull from another HTTP streaming server.
  • Content protection via automatic one-time URLs.
  • (Experimental) Video-on-Demand stream types.
  • Per-stream, per-program, and server metrics.
  • An HTTP configuration interface and REST API control the server, allowing standalone operation and easy integration with other web servers.
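To make the ingest side concrete, here is a minimal sketch of one plausible PUT variety: an open-ended, chunked upload as a live encoder might emit it.  The /stream path, host, and exact header set are illustrative assumptions, not documented GSS endpoints.

```python
# Serialize a chunked HTTP/1.1 PUT request for a live stream.
# NOTE: "/stream" and "encoder.example.com" are hypothetical; consult
# the GSS source for the actual upload endpoints.

def chunked_put_request(path, content_type, chunks):
    """Build the raw bytes of an open-ended PUT whose body arrives as
    chunks, the way a live encoder produces data."""
    head = (
        "PUT %s HTTP/1.1\r\n"
        "Host: encoder.example.com\r\n"
        "Content-Type: %s\r\n"
        "Transfer-Encoding: chunked\r\n"
        "Expect: 100-continue\r\n"
        "\r\n" % (path, content_type)
    ).encode("ascii")
    # Each chunk is prefixed with its length in hex; a zero-length
    # chunk terminates the stream.
    body = b"".join(b"%x\r\n%s\r\n" % (len(c), c) for c in chunks)
    return head + body + b"0\r\n\r\n"
```

Note the Expect: 100-continue header; as mentioned below, uploads from HTTP libraries that omit it are currently unsupported.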

What’s not there?

  • Other types of authentication, LDAP or other authorization.
  • RTMP support.  (Maybe some day, but there are several good open-source Flash servers out there already.)
  • Support for upload using HTTP PUT without an Expect: 100-continue header, which several HTTP libraries omit.
  • Decent VOD support, with rate-controlled streaming, burst start, and seeking.

The details

  • Main web page
  • Code repository
  • License: LGPL (same as GStreamer)

 

Posted in entropywave, firefox, gstreamer, mozilla, video | Tagged bootstrap, browserid, code.entropywave.com, firefox, flash, gstreamer, html5, icecast, json-glib, libsoup, live streaming, live video, ogg, streaming server, theora, video, video on demand, video streaming, video tag, webm

GStreamer backend for video in Firefox

Posted on by David Schleef

It’s good news that the GStreamer backend for video playback in Firefox has landed, thanks to a flurry of work by Alessandro Decina over the last few months.  Of course, this isn’t part of the standard Firefox build (but maybe some day?), but it’s very useful for putting Firefox on mobile and embedded platforms, since GStreamer has a well-established ecosystem of vendor-provided plugins for hardware decoding.

Posted in gstreamer, mozilla, video | Tagged firefox, gstreamer, hardware decoding, html5, mozilla, video decoding

OggStreamer: audio capture and streaming device

Posted on by David Schleef

I recently learned about a cool new open hardware project called OggStreamer.  They’re designing and making a small device that records an analog audio signal and streams it using Ogg/Vorbis.  Since it’s an open hardware project, all the schematics and PCB layouts are provided.

Posted in video

Update on the GStreamer DeckLink Elements

Posted on by David Schleef

A little more than a year ago, I posted about GStreamer support for SDI and HD-SDI using DeckLink hardware from BlackMagic Design.  In the meantime, the decklinksrc and decklinksink elements have grown up a bit, and work with most devices in the DeckLink and Intensity line of hardware.  A laundry list of features:

  • Multiple device support
  • Multiple input and output support on a single device
  • HDMI, component analog, and composite input and output with Intensity Pro
  • Analog, AES/EBU, and embedded (HDMI/SDI) audio input
  • SDI, HD-SDI, and Optical SDI input and output with DeckLink
  • Works on Linux, OS/X (new), and Windows
  • 8-bit and 10-bit support for SDI/HD-SDI
  • Supports most video modes in the DeckLink SDK
  • Implements GstPropertyProbe interface for proper detection as a source element
  • Lots of bug fixes from previous releases

Kudos to Blake Tregre and Joshua Doe for submitting several of the patches implementing the above list.  There are still a bunch of outstanding bug reports (some with patches) that need to be fixed.  Several of these relate to output, which is currently rather clumsy and broken.

People have asked me about automatically detecting the video mode for input.  Some DeckLink hardware has this capability, but not any of the hardware I have to test with.  However, I’ve had some success with cycling through the video modes at the application level, with a 200 ms timeout between modes, stopping when it finds a mode that generates output.  This works OK, except that it tends to confuse 60i and 30p modes (and 50i with 25p), which can be differentiated with a bit of processing on the images.  At some point I’d like to integrate this functionality into decklinksrc, but I wouldn’t be upset if someone else did it first.
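The cycling logic is simple enough to sketch.  Here the capture hardware is hidden behind a probe callback, since none of this is real decklinksrc API; the 200 ms timeout matches the number above.

```python
import time

def detect_mode(modes, probe, timeout=0.2):
    """Try each candidate video mode in turn, returning the first one
    for which the probe callback reports a captured frame before the
    timeout expires, or None if no mode produces output."""
    for mode in modes:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if probe(mode):      # did the hardware deliver a frame?
                return mode
            time.sleep(0.01)     # poll, rather than busy-wait
    return None
```

As noted above, this alone can’t tell 60i from 30p; a real implementation would follow up with some image analysis on the captured frames.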

Posted in entropywave, gstreamer, video | Tagged blackmagic, decklink, decklinksink, decklinksrc, gstreamer, hd-sdi, hdmi, intensity pro, pro video, sdi, video capture

HDTV Color Matrix

Posted on by David Schleef

Digital video is a time series of pictures; each picture is an array of pixels, and each pixel is three numbers representing how brightly the red, green, and blue LCD dots (or CRT phosphors, if you’re old school) glow.  The representation in memory, however, is not RGB values but YCbCr values, which one calculates by multiplying the RGB values by a 3×3 matrix and then adding/subtracting some offsets.  This converts the components into a gray value (Y, or luma) plus Cb and Cr (chroma blue and chroma red).  The reason for doing this is that the human visual system is more sensitive to variations in luma than to variations in chroma (er, actually luminance and chrominance, see below).  For the same reason, typically half or 3/4 of the chroma values are dropped and not stored; the missing ones are interpolated when converting back to RGB for display.
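The matrixing step is short in code.  This sketch uses the BT.709 (HDTV) luma coefficients on values normalized to [0, 1], and leaves out the offsets and integer quantization for clarity.

```python
# RGB -> YCbCr using the BT.709 coefficients.  Inputs are
# gamma-corrected R'G'B' values in [0, 1].

KR, KB = 0.2126, 0.0722      # BT.709 red and blue luma weights
KG = 1.0 - KR - KB           # green weight, 0.7152

def rgb_to_ycbcr(r, g, b):
    y = KR * r + KG * g + KB * b          # luma: weighted sum
    cb = (b - y) / (2.0 * (1.0 - KB))     # chroma blue: scaled (B' - Y')
    cr = (r - y) / (2.0 * (1.0 - KR))     # chroma red: scaled (R' - Y')
    return y, cb, cr
```

Neutral grays map to cb = cr = 0, which is part of what makes dropping chroma values so cheap.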

There are various theoretical reasons for choosing a particular matrix, and I’ve recently become interested in whether these reasons are actually valid.  For historical reasons, early digital video copied analog precedent and used a matrix that is theoretically suboptimal.  This matrix is used in standard-definition (SD) video, but was changed to the theoretically correct matrix for high-definition (HD) video.  There are other technical differences between SD and HD video, but this is the most significant one for color accuracy.

For some time, I’ve been curious how much of a visual difference there is between the two matrices.  Here are two stills from Big Buck Bunny, the first is the original, correct image, and the second is the same picture converted to YCbCr with the HDTV matrix and then back to RGB with the SDTV matrix.  (To best see the differences, open the images in separate browser tabs and flip between them.)

If you are like me, you probably have trouble seeing the difference side by side, but flipping between them makes it fairly obvious.  I chose this image because it has relatively saturated green and greenish-yellow, which shows off some of the largest differences.
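The experiment is easy to reproduce numerically: encode a pixel with the HD matrix, decode with the SD matrix, and measure the drift.  The (KR, KB) pairs below are the standard BT.709 and BT.601 constants; values are normalized to [0, 1].

```python
def encode(r, g, b, kr, kb):
    """R'G'B' -> Y'CbCr for the matrix defined by (kr, kb)."""
    y = kr * r + (1 - kr - kb) * g + kb * b
    return y, (b - y) / (2 * (1 - kb)), (r - y) / (2 * (1 - kr))

def decode(y, cb, cr, kr, kb):
    """Exact inverse of encode for the same (kr, kb)."""
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

HD = (0.2126, 0.0722)   # BT.709
SD = (0.299, 0.114)     # BT.601

# Saturated green, encoded with the HD matrix but decoded with SD:
mismatched = decode(*encode(0.0, 1.0, 0.0, *HD), *SD)
```

For pure green the mismatched decode comes back with a green component of about 1.17, out of gamut and clipped, which is why the saturated greens show the largest differences.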

The RGB values used in computation are not proportional to the actual amount of light output by a monitor.  This is known as gamma correction, and is a clever byproduct of the fact that the response curve of television phosphors (the amount of light output for a given voltage) is approximately the inverse of the response curve of the eye (the perceived brightness for a given amount of light).  Thus voltage became synonymous with perceived brightness, televisions had fewer vacuum tubes, and we’re left with that legacy.  But it’s not a bad legacy, because just like dropping chroma values, it makes it easier to compress images.
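To a first approximation the nonlinearity is just a power law; real transfer functions such as BT.709’s add a short linear segment near black, which this sketch omits.

```python
def gamma_encode(linear, gamma=2.2):
    """Linear light -> gamma-compressed code value."""
    return linear ** (1.0 / gamma)

def gamma_decode(coded, gamma=2.2):
    """Gamma-compressed code value -> linear light."""
    return coded ** gamma

# Perceptual headroom: 18% linear gray encodes to roughly mid-scale,
# so more code values are spent on the dark end, where the eye is
# most sensitive.
mid_gray = gamma_encode(0.18)   # about 0.46
```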

However, color comes along and messes with that simplicity a bit.  Luminance in color theory is used to describe how the brain interprets the brightness of a particular pixel, which is proportional to the RGB values in linear light space, i.e., the amount of light emanating from a display.  Luma is proportional to the RGB values in gamma-corrected (actually, gamma-compressed) space.  This means that luma doesn’t simply depend on luminance, and contains some variation due to color.  This messes with our idea that matrixing RGB values will separate variations in brightness from variations in color.  How visible is it?  I took the above picture and squashed the luma to one value, leaving chroma values the same (HD matrix):

[image: the picture with luma squashed to a single value, HD matrix]

What you see here is that saturated areas appear brighter than the grey areas.  This is chroma (i.e., the color values we use in calculations) feeding into luminance (i.e., the perception of brightness).
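This leakage is easy to demonstrate numerically: construct two pixels with identical luma, one gray and one saturated, and compare their luminance in linear-light space.  The display gamma of 2.4 below is an assumed simplification.

```python
GAMMA = 2.4                            # assumed display gamma
KR, KG, KB = 0.2126, 0.7152, 0.0722    # BT.709 weights

def luma(r, g, b):
    """Weighted sum of gamma-corrected R'G'B' (what the codec stores)."""
    return KR * r + KG * g + KB * b

def luminance(r, g, b):
    """Weighted sum in linear light (what the eye integrates)."""
    return KR * r**GAMMA + KG * g**GAMMA + KB * b**GAMMA

gray = (0.5, 0.5, 0.5)
green = (0.0, 0.5 / KG, 0.0)   # pure green scaled to the same luma
# Same luma, but the saturated green emits noticeably more light,
# matching the bright saturated patches in the squashed image.
```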

How much does this matter for image and video compression efficiency?  It’s a minor inefficiency caused by a subtle visual difference.  In other words, not very much.

Earlier I mentioned that the HD matrix was theoretically more correct than the SD matrix.  What about in practice?  Here’s the same luma-squashed image with the SD matrix.  Notice that there’s a lot more leakage from chroma into luminance, especially in the green leaves:

[image: the same luma-squashed picture, SD matrix]

Posted in video | Tagged chroma, chrominance, color conversion, color matrixing, digital video, format conversion, hd, hdtv, luma, luminance, rgb, sd, sdtv, video compression, ycbcr

New Schrödinger Release

Posted on by David Schleef

I recently added support for 10- and 16-bit encoding and decoding to Schrödinger, so I did a little release. Presenting Schrödinger-1.0.11. Also pushed changes to GStreamer to handle the new features. Although these changes have been in the works for some time, a little prompting from j-b caused me to finish this off, so this will probably appear in VLC soon, too.
This was the last piece needed to create a 10-bit master of Sintel, which I’ve been planning to do for some time.

Posted in dirac, gstreamer, video

Live GStreamer Pipeline Graphing

Posted on by David Schleef

Rather than fix bugs in my existing projects, I decided to have a “me” weekend the last few days.  So I spent some time fixing up and adding new features to a very minor pet project, gst-gtk-widgets.  (It’s a few Gtk+ widgets for doing things related to GStreamer, like a video player widget.  It also doesn’t do everything correctly, so look at the code with caution.)  Specifically, I added the ability to change the pipeline from the UI in gst-launch-gtk, a GUI version of the venerable gst-launch tool.  I also added some live pipeline graphing.  Drumroll… and screenshot.


Screenshot of gst-launch-gtk with preliminary pipeline graphing

Posted in gstreamer

GStreamer SDI Capture Plugins

Posted on by David Schleef

I’m getting ready to push several commits to the gst-plugins-bad source repository that add plugins for capturing SDI and HD-SDI using cards from two different manufacturers: BlackMagic Design’s DeckLink, and Linear Systems’ SDI Master capture card.

The Linear Systems cards are probably better known by their reseller, DVEO.  Entropy Wave uses both of these cards in the E1000 Live Encoder appliance, and we’ve found that, aside from some motherboard incompatibilities with the DeckLink cards, they both work great in Linux.  While we’re primarily interested in live capture at the moment, output has also been implemented.

We slightly prefer the Linear Systems cards, mainly because the drivers are open source, but also because the API allows lower-level access to the hardware, including SDI clocking and raw VANC and HANC data.  It also allows subframe latency; although this is not implemented in the GStreamer plugin, it will be nice to use in the future.

In comparison, the DeckLink driver and SDK are not open source (which means I can’t fix any bugs), although they conveniently provide open source headers and shim code for interfacing with the SDK.  This allows the GStreamer plugin to be completely open source and legally distributable separately from the SDK, but it will only work if the SDK libraries and driver are present.  Optical fiber connections are only available on the DeckLink, and the DeckLink cards tend to be less expensive.

It will take a few weeks for these to be available as part of a GStreamer release; however, they are available in the Media SDK now.

(Reposted from my Entropy Wave blog.)

Posted in entropywave, gstreamer, video

Snotty comment about “Vegetarian-friendly Restaurants”

Posted on by David Schleef

It was a stuffwhitepeoplelike trifecta: a conversation at a vegetarian dinner party about new restaurants in San Francisco.  Inevitably, the topic turned to vegetarian-friendly restaurants, and the non-vegetarians offered some suggestions.  Oops, party foul, bad form.  Offering something without meat in it is not sufficient to be considered “vegetarian friendly”, in the same way that offering at least one thing that is edible is not a sufficient criterion for “decent restaurant”.

Thus, in the spirit of the Bechdel test (and apologies thereto), I offer a minimal standard for a vegetarian-friendly restaurant:

  • There must be at least two main dishes on the menu that are vegetarian
  • Neither of which is butternut squash ravioli
  • Appetizers and salads cannot have gratuitous meat, except as an option
Posted in Uncategorized

FLAC code mirror

Posted on by David Schleef

Since parts of SourceForge have been down for a while, I am making my git mirror of the FLAC CVS repository available.  It is here.  I’ve also made my fairly uninteresting patches available on the Entropy Wave Open Source site, here.

Posted in video | Tagged code.entropywave.com, flac, git, mirror, sourceforge, xiph