November 16, 2012
A few months ago, I wrote about my attempt at reverse engineering Logitech Unifying devices. Back then, I concluded my post with high hopes for the future, after receiving a document from Logitech containing part of the HID++ 2.0 specification.
A couple of weeks ago, some of my summer work was merged into UPower, adding battery support for some Logitech devices.
As I discovered late in my first reverse engineering attempt, Logitech developed a custom HID protocol named HID++. This protocol exists in two versions, 1.0 and 2.0. Some devices talk with version 1 of the protocol (like my M705 mouse) and some others talk with version 2 of the protocol (like my K750 keyboard).
Recently, I've been in touch with a Logitech engineer who worked on the Linux support for the Unifying receiver; he has been really helpful and shared some details of this protocol with me.
Logitech decided about a year ago to publish their HID++ specification, but still hasn't done so. The internal review needed to publish such documents hasn't been done yet. The only published draft is just an extract of the specification, and it even contains a few typos, as I discovered.
Some other documents have been published recently, but I haven't had the time to review them. They contain the HID++ 1.0 specification and some details I asked for about the K750 keyboard.
It took me some time to get a full understanding of the protocol, its
different versions, etc. After reverse engineering my K750 keyboard, I've also
reverse engineered the data stream used to get my M705 mouse battery status.
I've also received some information about the HID++ 1.0 protocol, so I've
been able to discover a bit more about what the packets mean. Most of my
discoveries are now expressed as proper #defines in up-lg-unifying.c, so the
code makes more sense.
My first patch implements a new property for UPower devices, named luminosity, which is used with the K750 keyboard to report the ambient light level. The second patch adds support for Logitech Unifying devices (over USB only) and should work with at least the Logitech M705 and K750. This should be available in the next version of UPower, which should be 0.9.19.
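To give a rough idea of what those #defines encode: a HID++ 1.0 short message is a 7-byte HID report consisting of a report ID (0x10), the device index on the receiver, a sub-ID such as GET_REGISTER (0x81), a register address, and three parameter bytes. The sketch below builds such a request and parses a hypothetical reply; the battery register number and the reply layout are assumptions of mine for illustration, not taken from the official specification.

```python
# Sketch of a HID++ 1.0 short message, as exchanged over /dev/hidrawX.
# The battery register number (0x0d) and the reply layout are assumptions
# for illustration, not confirmed by the published specification.

REPORT_ID_SHORT = 0x10      # 7-byte HID++ 1.0 report
SUB_ID_GET_REGISTER = 0x81
REGISTER_BATTERY = 0x0d     # assumed battery-status register

def build_short_request(device_index, sub_id, address):
    """Build the 7-byte request sent to the Unifying receiver."""
    return bytes([REPORT_ID_SHORT, device_index, sub_id, address, 0, 0, 0])

def parse_battery_reply(reply):
    """Extract a battery level from a reply (first parameter byte, assumed)."""
    if reply[0] != REPORT_ID_SHORT or reply[2] != SUB_ID_GET_REGISTER:
        raise ValueError("not a GET_REGISTER reply")
    return reply[4]

request = build_short_request(0x01, SUB_ID_GET_REGISTER, REGISTER_BATTERY)
print(request.hex())  # -> '1001810d000000'
```

In a real client this request would be written to the receiver's hidraw device node and the answer read back from it; the parsing above is the easy part.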
So far, Logitech has been kind enough to help me understand part of the protocol, and even sent me a few devices so I can play with and test my work on them. Unfortunately, this will probably require some work and time, and so far Logitech has not been able to help with that.
There should be enough information out there to at least add battery support for HID++ 2.0 devices, and probably a few other things too. I hope I'll get the time to do this at some point, but feel free to beat me to it!
November 15, 2012
One of the most exciting conferences in the Free Software world, foss.in in Bangalore, India, is having trouble finding enough sponsorship for this year's edition. Many speakers from all around the Free Software world (including yours truly) have signed up to present at the event, and the conference would appreciate any corporate funding it can get!
Please check if your company can help and contact the organizers for details!
See you in Bangalore!
November 14, 2012
CNN is running a short video on our Reading Project in Ethiopia, which I've been working on this year alongside XO-4 software development.
The team's far larger than just OLPC staff — we've been fortunate to work on the project with Maryanne Wolf and her team at the Tufts University Center for Reading and Language Research, Cynthia Breazeal and team at the MIT Media Lab, and Sugata Mitra at Newcastle University.
(There's also a Technology Review article with slightly more information, and an OLPC SF conference talk video that goes more in-depth.)
November 13, 2012
App developers and end users both like bundled software, because it’s easy to support and easy for users to get up and running while minimizing breakage. How could we come up with an approach that also allows distributions and package-management frameworks to integrate well and deal with issues like security? I muse upon this over at my RedMonk blog.
It was something like September 2011, shortly after the last Desktop Summit in Berlin, when I started the work improving ModemManager with the new DBus API, GDBus-based DBus support, port-type-agnostic implementations, dynamic DBus interfaces, built-in org.freedesktop.DBus.ObjectManager interface support, built-in org.freedesktop.DBus.Properties.PropertiesChanged signal support, the new libmm-glib client library, the new mmcli command line tool… Took me around half a year to port most of the core stuff, and some more months to port all the plugins we had. And then, suddenly, git master wasn’t that unfinished thing any more, and we were even implementing lots of new features like GPS support in the Location interface, improved SMS Messaging capabilities, ability to switch Firmware on supported modems, and last but definitely not least, support for new QMI-enabled modems through the just officially released libqmi…
And when I thought it was all done already, I woke up from my dream and realized that not even I was really using the new ModemManager, as it wasn't integrated in the desktop… not even in NetworkManager. So there I was again, with a whole new set of things to fix…
There is already a set of patches available to integrate the new ModemManager interface in NetworkManager, pending review on the NM mailing list. This is already the second iteration, after some of the patches from the first iteration were merged into git master.
This integration is probably the most complex one in the list, as it really has to deal with quite different DBus interfaces, but the overall look of it seems quite good from my point of view. With the patches applied, NetworkManager exposes a new --with-modem-manager-1 configure switch which defaults to 'auto' (compile the MM1 support if libmm-glib is found). Note that both the old and the new ModemManager would be supported in this case, making it easier to fall back to the old version if the user experiences problems with the new one.
All in all, the required changes to NetworkManager are quite well defined, implemented mainly as a new ‘NMModemBroadband’ object and some additional bits to monitor added/removed modems through the org.freedesktop.DBus.ObjectManager interface.
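For the curious, the org.freedesktop.DBus.ObjectManager pattern mentioned above boils down to one GetManagedObjects call returning every object a service exports, keyed by object path, plus InterfacesAdded/InterfacesRemoved signals for changes. A minimal sketch of how a client could pick the modems out of such a reply; the sample dict below is invented for illustration, it is not taken from the actual NMModemBroadband code:

```python
# Minimal sketch of filtering an ObjectManager GetManagedObjects reply.
# The reply maps object paths to {interface: {property: value}} dicts;
# the sample data below is made up for illustration.

MODEM_IFACE = "org.freedesktop.ModemManager1.Modem"

def find_modems(managed_objects):
    """Return the object paths that expose the Modem interface."""
    return sorted(path for path, ifaces in managed_objects.items()
                  if MODEM_IFACE in ifaces)

sample_reply = {
    "/org/freedesktop/ModemManager1/Modem/0": {
        MODEM_IFACE: {"Manufacturer": "ExampleCorp"},
    },
    "/org/freedesktop/ModemManager1": {
        "org.freedesktop.DBus.ObjectManager": {},
    },
}

print(find_modems(sample_reply))
# -> ['/org/freedesktop/ModemManager1/Modem/0']
```

A real client would obtain the reply over DBus and then watch the added/removed signals to keep its modem list current.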
A proper integration in my desktop of choice (gnome-shell based GNOME 3) required integrating the new interface in the shell's Network indicator. As on every good day, I first had to deal with several issues I found, but I ended up submitting a patch that properly talks to the new ModemManager1 interface when needed. As with NetworkManager, this patch supports both the old and the new interfaces at the same time.
Not that this was a big change, anyway. The Network indicator only uses the ModemManager interface to grab Operator Name, MCCMNC and/or SID, and signal strength. Something that could even be included directly in the NMDevice exposed by NetworkManager actually…
Right now the only big issue pending here is to properly unlock the modem when it gets enabled, something I still need to figure out how and where to do.
Truth be told, hacking the shell has ended up being quite nice, even with my total lack of knowledge in JavaScript. My only suggestion here would be: get a new enough distribution with a recent enough GNOME3 (e.g. the unreleased Fedora 18) and jhbuild buildone from there. I think I’ve never seen a clean jhbuild from scratch succeed… maybe it’s just bad luck.
And then came the applet…
Why the hell would Bluetooth DUN not work with my implementation, I was thinking, until I found that even if GNOME3 doesn’t rely on nm-applet, it still uses some of its functionality through libnm-gtk/libnma, like the code to decide which kind of modem we’re dealing with and launch the appropriate mobile connection wizard. Ended up also submitting patches to include the missing functionality in network-manager-applet.
For those not using GNOME3 and gnome-shell: note that I didn't implement support for the new ModemManager1 interface in nm-applet itself; I just did the bluetooth-guess-my-modem-type bits required in libnma. If I feel in the mood I may even try to implement proper support in the applet, but I wouldn't mind some help with this… patches welcome! The same goes for other desktops relying on NetworkManager (KDE, Xfce…); I wouldn't mind updating those myself as well, but I truly don't have that much free time.
Ok, so gnome-shell integration is more or less ready now; we should be finished, right? Well, not just yet. gnome-control-center also talks to ModemManager, in this case to get the Operator Name and Equipment Identifier, which by the way were not getting properly loaded and updated. Once that was fixed, I finally got a new patch to have the control center talk to the new interface.
I bet you won’t go one by one to all the patches I linked before and apply them in your custom compiled NetworkManager, gnome-shell, network-manager-applet and gnome-control-center… but for all those brave Fedora 18 users, you can try with the 64-bit packages that I built for me, all available here:
If you want to rebuild these packages yourself, you'll find the source tarballs here and the packaging repositories here. The packaging is really awful – I suck at it – so you'll probably need to install the RPMs with --force. Note that the git repo for packaging has several git submodules, one for each item packaged, so remember to "git submodule init" and "git submodule update".
Are these all the patches needed to have the best ModemManager experience in GNOME3? No! The list of pending tasks for this purpose grows a bit every day…
Help!
Special thanks go to my employer Lanedo GmbH, which sponsors quite a lot of the GNOME integration work; as well as to my girlfriend's employer, which sends her 400km away from home from Monday to Thursday. Without them this job would have been impossible!
November 12, 2012
While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as:
"Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handlerfrom functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value.
These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc.
To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris.
To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement.
And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future.
Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release.
It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean.
Previously, if you had a function that was declared with a non-void return
type, lint and cc would warn if you didn't return a value, even if you called
a function like exit() or panic() that ended execution.
For instance:
#include <stdlib.h>
int
callback(int status)
{
if (status == 0)
return status;
exit(status);
}
would previously require a never executed return 0; after the
exit() to avoid lint warning "function falls off bottom without
returning value".
Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function.
If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:
#pragma does_not_return(exit)
#pragma does_not_return(panic)

Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.
November 08, 2012
One feature that would be of interest to us in the Empathy video conferencing client is the ability to record conversations. Because of that, I have been putting together a simple prototype Python test application in free moments, to verify that everything works as expected before any effort is put into Empathy itself.
The sample code below requires two webcams to be connected to your system. It basically takes the two camera video streams, puts one of them through an encode/RTP/decode process (to roughly emulate what happens in a video call) and puts a text overlay onto the video to let the conference participant know the call is being recorded. The two video streams are then mixed together and displayed. In the actual application the combined stream would of course be saved to disk instead, and audio would also be captured and mixed.
Whether we ever get around to working on this feature is an open question, but at least we can now assume that it is likely to work. Of course, getting one stream in over the network over RTP is very different from what this sample does, so that might uncover some bugs.
The sample also works with Python 3, so even though it is only a prototype it already fulfils the GNOME Goal.
import sys
import signal
from gi.repository import Gst
from gi.repository import GObject
GObject.threads_init()
Gst.init(None)

class VideoBox():
    def __init__(self):
        self.mainloop = GObject.MainLoop()
        # Create transcoding pipeline
        self.pipeline = Gst.Pipeline()
        self.v4lsrc1 = Gst.ElementFactory.make('v4l2src', None)
        self.v4lsrc1.set_property("device", "/dev/video0")
        self.pipeline.add(self.v4lsrc1)
        self.v4lsrc2 = Gst.ElementFactory.make('v4l2src', None)
        self.v4lsrc2.set_property("device", "/dev/video1")
        self.pipeline.add(self.v4lsrc2)
        # The caps strings were garbled in the original post; plain
        # "video/x-raw" is used here so the pipeline still negotiates
        camera1caps = Gst.Caps.from_string("video/x-raw")
        self.camerafilter1 = Gst.ElementFactory.make("capsfilter", "filter1")
        self.camerafilter1.set_property("caps", camera1caps)
        self.pipeline.add(self.camerafilter1)
        self.videoenc = Gst.ElementFactory.make("theoraenc", None)
        self.pipeline.add(self.videoenc)
        self.videodec = Gst.ElementFactory.make("theoradec", None)
        self.pipeline.add(self.videodec)
        self.videortppay = Gst.ElementFactory.make("rtptheorapay", None)
        self.pipeline.add(self.videortppay)
        self.videortpdepay = Gst.ElementFactory.make("rtptheoradepay", None)
        self.pipeline.add(self.videortpdepay)
        self.textoverlay = Gst.ElementFactory.make("textoverlay", None)
        self.textoverlay.set_property("text", "Talk is being recorded")
        self.pipeline.add(self.textoverlay)
        camera2caps = Gst.Caps.from_string("video/x-raw")
        self.camerafilter2 = Gst.ElementFactory.make("capsfilter", "filter2")
        self.camerafilter2.set_property("caps", camera2caps)
        self.pipeline.add(self.camerafilter2)
        self.videomixer = Gst.ElementFactory.make('videomixer', None)
        self.pipeline.add(self.videomixer)
        self.videobox1 = Gst.ElementFactory.make('videobox', None)
        self.videobox1.set_property("border-alpha", 0)
        self.videobox1.set_property("top", 0)
        self.videobox1.set_property("left", -320)
        self.pipeline.add(self.videobox1)
        self.videoformatconverter1 = Gst.ElementFactory.make('videoconvert', None)
        self.pipeline.add(self.videoformatconverter1)
        self.videoformatconverter2 = Gst.ElementFactory.make('videoconvert', None)
        self.pipeline.add(self.videoformatconverter2)
        self.videoformatconverter3 = Gst.ElementFactory.make('videoconvert', None)
        self.pipeline.add(self.videoformatconverter3)
        self.videoformatconverter4 = Gst.ElementFactory.make('videoconvert', None)
        self.pipeline.add(self.videoformatconverter4)
        self.xvimagesink = Gst.ElementFactory.make('xvimagesink', None)
        self.pipeline.add(self.xvimagesink)
        # First camera: overlay the recording notice, then into the mixer
        self.v4lsrc1.link(self.camerafilter1)
        self.camerafilter1.link(self.videoformatconverter1)
        self.videoformatconverter1.link(self.textoverlay)
        self.textoverlay.link(self.videobox1)
        self.videobox1.link(self.videomixer)
        # Second camera: encode/RTP/decode round trip, then into the mixer
        self.v4lsrc2.link(self.camerafilter2)
        self.camerafilter2.link(self.videoformatconverter2)
        self.videoformatconverter2.link(self.videoenc)
        self.videoenc.link(self.videortppay)
        self.videortppay.link(self.videortpdepay)
        self.videortpdepay.link(self.videodec)
        self.videodec.link(self.videoformatconverter3)
        self.videoformatconverter3.link(self.videomixer)
        # Mixed output to the display sink
        self.videomixer.link(self.videoformatconverter4)
        self.videoformatconverter4.link(self.xvimagesink)
        self.pipeline.set_state(Gst.State.PLAYING)

    def run(self):
        self.mainloop.run()

if __name__ == "__main__":
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    app = VideoBox()
    app.run()
November 06, 2012
I was at the OpenStack France meetup 2 yesterday evening.
It was a wonderful evening, talking about OpenStack with around 30-40 people. Nick Barcet and I presented Ceilometer and received some good feedback about it. Thanks also to Nebula, who sponsored the evening, and to Erwan Gallen for the nice organization; free beers are always enjoyable.
For people interested, the slides of our Ceilometer presentations are available. This is a lighter and fresher version of the slides used by Nick and Doug at the OpenStack Design Summit.
November 03, 2012
GStreamer makes assembling advanced video applications quite easy, in fact so easy that even I can write such an application in Python. What I have had a lot more issues with is understanding how to deal with things like USB cameras. Well, luckily the developers of Cheese realized this and created libcheese to help. libcheese is today used by Cheese itself of course, but also by Empathy for its camera handling.
Since I've been thinking about adding some kind of video recording support to Transmageddon, I wanted to test libcheese from Python. Unfortunately there were no Python examples available anywhere online, so I had to write my own.
With some pointers from David King I managed to put the following Python code together.
import sys
from gi.repository import Gtk
from gi.repository import Cheese
from gi.repository import Clutter
from gi.repository import Gst
Gst.init(None)
Clutter.init(sys.argv)

class VideoBox():
    def __init__(self):
        self.stage = Clutter.Stage()
        self.stage.set_size(400, 400)
        self.layout_manager = Clutter.BoxLayout()
        self.textures_box = Clutter.Actor(layout_manager=self.layout_manager)
        self.stage.add_actor(self.textures_box)
        self.video_texture = Clutter.Texture.new()
        self.video_texture.set_keep_aspect_ratio(True)
        self.video_texture.set_size(400, 400)
        self.layout_manager.pack(self.video_texture, expand=False,
                                 x_fill=False, y_fill=False,
                                 x_align=Clutter.BoxAlignment.CENTER,
                                 y_align=Clutter.BoxAlignment.CENTER)
        self.camera = Cheese.Camera.new(self.video_texture, None, 100, 100)
        Cheese.Camera.setup(self.camera, None)
        Cheese.Camera.play(self.camera)

        def added(signal, data):
            uuid = data.get_uuid()
            node = data.get_device_node()
            print "uuid is " + str(uuid)
            print "node is " + str(node)
            self.camera.set_device_by_device_node(node)
            self.camera.switch_camera_device()

        device_monitor = Cheese.CameraDeviceMonitor.new()
        device_monitor.connect("added", added)
        device_monitor.coldplug()
        self.stage.show()
        Clutter.main()

if __name__ == "__main__":
    app = VideoBox()
The application creates a simple Clutter window to host the stream from the webcam. So when you run the application it should display the video from the system webcam. Then, if you plug a second webcam into a USB port, it will switch the video feed to that stream. Not a very useful application in itself, but hopefully enough to get you started on using libcheese from Python. You can find the libcheese API docs here; they are for C, but the Python API from GObject Introspection follows them so closely that you should be able to find the right calls. And remember, for figuring out exact API names, ipython is your friend.
P.S. You need Cheese 3.6 installed to be able to use libcheese from Python; this version will be in Fedora starting with Fedora 18.
November 01, 2012
This is a short overview of the Intel hardware and what the GEM (graphics execution manager) in the i915 does.
GEM essentially deals with graphics buffer objects (which can contain textures, renderbuffers, shaders or all kinds of other state objects and data used by the gpu) and how to run a given workload on the gpu, commonly called command submission (CS), but in the i915.ko driver done with the execbuf ioctl (since the gpu commands themselves reside in a buffer object on Intel hardware).
So the first topic to look at is what kinds of address spaces we have, which pieces of hardware can access them, and how we bind various pieces of memory into them (i.e. where the corresponding pagetables are and what they look like). Contrary to discrete gpus, Intel gpus can only access system memory, hence the only way to make any memory available to the gpu is by binding a bunch of pages into one of these gpu pagetables, and we don't need to bother ourselves with different kinds of underlying memory as backing storage.
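To make the "binding" idea concrete, here is a toy model of an address-space manager that hands out offsets in a GTT-like address space and records which system pages back each buffer object. This is purely a conceptual illustration, written in plain Python; it has nothing to do with how the real i915.ko code works:

```python
# Toy model of binding buffer objects into a GPU address space (GTT).
# Conceptual illustration only, not how the i915 driver is written.

class ToyGTT:
    def __init__(self, size):
        self.size = size          # total size of the address space
        self.next_offset = 0      # trivial bump allocator, no reuse
        self.bindings = {}        # gtt offset -> list of backing pages

    def bind(self, pages, page_size=4096):
        """Map a buffer object's backing pages at the next free offset."""
        length = len(pages) * page_size
        if self.next_offset + length > self.size:
            raise MemoryError("GTT full")
        offset = self.next_offset
        self.bindings[offset] = list(pages)
        self.next_offset += length
        return offset

gtt = ToyGTT(size=2 << 30)            # 2GB, as on modern chips
bo_offset = gtt.bind(pages=["p0", "p1"])
print(bo_offset)  # -> 0
```

The real driver of course has to evict and rebind objects under memory pressure and keep the hardware pagetables in sync, which is where most of the complexity lives.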
The gpu itself has its own virtual address space, commonly called the GTT. On modern chips it's 2GB big, and all gpu functions (the display unit, render rings and similar global resources, but also all the actual buffer objects used for rendering) access the data they need through it. On earlier generations it's much smaller, down to a meager 32M for the i830M. On Sandybridge and newer platforms we also have a second address space called the per-process GTT (PPGTT for short), which is of the same size. This address space can only be accessed by the gpu engines (and even there we sometimes can't use it), hence scanout buffers must be in the global GTT. The original aim of PPGTT was to insulate different gpu processes, but context switch times are high, and up to Sandybridge the TLBs have errata when using different address spaces. The reason we use PPGTT now - it's been around as a hardware feature even on earlier generations - is that PPGTT PTEs