Back on December 15th, we got a look at the internals of a SICK laser rangefinder (LIDAR), a $6k device that employs a single laser diode to produce ~6,000 points per second (~600 points per scan at ~10Hz) over a 180° field-of-view. Now we can compare that to the Rolls-Royce of laser rangefinders -- the Velodyne LIDAR, a $75k device employing 64 laser diodes to produce 1.3 million data points per second with a 360° horizontal field-of-view and a 26.8° vertical field-of-view. Below is a video of Bruce Hall, President of Velodyne LIDAR, demonstrating the HDL-64E in operation and taking a look at its internals. It may not be a complete disassembly (it does cost $75,000, after all!), but it does provide some interesting insights into the device's inner workings.
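For a rough sense of scale, the per-laser rate implied by those figures is just back-of-the-envelope arithmetic (the numbers below come straight from the paragraph above; the per-laser rate is derived by division, not a published spec):

```python
# Throughput figures quoted above, as back-of-the-envelope arithmetic.
sick_points_per_scan = 600           # ~600 points per 180-degree scan
sick_scan_rate_hz = 10               # ~10 scans per second
sick_points_per_sec = sick_points_per_scan * sick_scan_rate_hz
print(sick_points_per_sec)           # -> 6000

velodyne_points_per_sec = 1_300_000  # ~1.3 million points per second
velodyne_lasers = 64
per_laser_rate = velodyne_points_per_sec / velodyne_lasers
print(per_laser_rate)                # -> 20312.5, i.e. ~20k points/s per laser
```

So each of the Velodyne's 64 lasers is, on its own, firing at several times the rate of the entire SICK unit.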
You may recall that the Velodyne (below left) is a popular fixture on DARPA Urban Grand Challenge vehicles, producing the characteristic concentric laser scans (below right) that proved useful in everything from obstacle avoidance to curb and lane detection.
So let's dig a little deeper into how this amazing sensor functions. First (below left) is an image showing the characteristic front lens assembly. Notice that there are two "blocks" -- a top and a bottom -- each containing 32 laser diodes (for a total of 64). The laser beams exit the device through the outer lenses and return to photo-detectors through the middle lenses, with time-of-flight (TOF) used to determine distance. Below right is a view of the rear of the device.
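Time-of-flight ranging itself is simple arithmetic: the pulse travels out and back at the speed of light, so range is half the round-trip time times c. A minimal sketch (the 100 ns figure is an illustrative value, not a Velodyne spec):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range from a round-trip pulse time: the pulse covers 2x the distance."""
    return C * round_trip_s / 2.0

# An illustrative 100 ns round trip corresponds to ~15 m:
print(tof_range_m(100e-9))  # -> 14.9896229
```

The practical challenge isn't the math -- it's timing the return pulse with sub-nanosecond precision, since 1 ns of timing error is ~15 cm of range error.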
There are a couple of interesting structures to note in the rear of the Velodyne. For example, there are four banks of laser diodes, each containing 16 lasers; in the image (below left), Bruce is pointing to the "top right" laser diode bank. The lasers are precisely (and painstakingly?) aligned with avalanche photodiodes (a semiconductor approximation to a photo-multiplier tube) contained on a PCB behind the central lens. Bruce is pointing to the top avalanche photodiode board in the image (below center). All of the timing, control, and reception signals are routed to a "main PCB" just under the top of the device. Finally, counter-balancing weights are employed to keep the entire (spinning) system stable -- they are being pointed to in the image (below right).
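Each return from the spinning head is naturally a (rotation angle, laser pitch, range) triple, which downstream software projects into Cartesian points. A hedged sketch of that projection (the frame conventions here are illustrative; Velodyne's actual calibration model also applies per-laser offsets and corrections):

```python
import math

def polar_to_xyz(yaw_deg: float, pitch_deg: float, range_m: float):
    """Project one LIDAR return into a sensor-centered Cartesian frame.

    yaw_deg:   head rotation angle (0-360)
    pitch_deg: the firing laser's fixed vertical angle
    range_m:   time-of-flight range
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    horiz = range_m * math.cos(pitch)  # range projected onto the ground plane
    x = horiz * math.cos(yaw)
    y = horiz * math.sin(yaw)
    z = range_m * math.sin(pitch)      # height above/below the sensor
    return x, y, z

# A level laser (pitch 0) fired along yaw 0 lands straight ahead:
print(polar_to_xyz(0.0, 0.0, 10.0))  # -> (10.0, 0.0, 0.0)
```

Sweep yaw through 360° for all 64 fixed pitches and you get exactly the concentric rings visible in the scans above.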
OK, enough chatter. You can watch the video if you like.
So, I have a few questions about the resiliency of these sensors...
Anyway, it is a very compelling sensor -- I wish I could afford one.
Credit to Robot Central for pointing out this video.
Comments
12:32 am
I am excited to see that Velodyne's official homepage links to this post (see the link for "HDL-64E Product Demonstration"). While on their homepage, I was also reminded of the very cool, Grammy-nominated Radiohead music video (see below) for the song "House of Cards" that features Velodyne-generated 3D point-clouds in addition to spatial data from Geometric Informatics' camera system.
To me, the most curious part of the video is where you can clearly make out the power lines in the urban point-clouds! Power lines are generally quite small, making them difficult to resolve at large distances, yet the Velodyne seems to see them just fine -- impressive! For those who are curious how this music video was produced, check out the video about the production below.
Finally, the processing source code and LIDAR scan data to build your own visualizations is available from Google Code, along with a fun web-app that lets you interactively explore a LIDAR scan. When I get some spare time, I'll have to go take a look at how they're doing their cloud visualization -- most of the systems I've used are based on an OpenGL desktop application.
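For anyone who wants to poke at the scan data before handing it to a renderer, loading an ASCII point dump takes only a few lines. A minimal sketch, assuming whitespace-delimited "x y z" rows (the file format is an assumption; check the dataset's own README):

```python
def load_xyz_points(path: str):
    """Read whitespace-delimited 'x y z' rows into a list of float triples."""
    points = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip blank or malformed rows
            x, y, z = (float(v) for v in fields[:3])
            points.append((x, y, z))
    return points
```

From there, a 3D scatter plot (matplotlib, OpenGL, or Processing) is straightforward, though at Velodyne scale you'll likely want to decimate the cloud first.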
While I'm at it, I think it may be prudent to include more comprehensive specifications for the Velodyne HDL-64E (also found in PDF form).
Specifications for the Velodyne HDL-64E:
Sensor:
Laser:
Mechanical:
Output:
4:31 pm
When we were working on that video, we actually discovered a lot of great applications for modern filmmaking that we've been working on putting into films. I think it's only a matter of time before we can start capturing better results with "non-participating" media like smoke and glass. On the film we're working on right now for Martin Scorsese, we're capturing most of the sets with lidar and including color values for all the vertices so we can reconstruct not just the XYZ values of geometry in the scene, but also RGB.
One of the biggest things we wanted to do after shooting that video was set up a lidar system at 24Hz synced with a traditional RGB motion picture camera via beamsplitter so we could capture RGBZ data. It's possible that we could one day automate a lot of things that are presently done in visual effects with manual labor.
Sadly it's on the back burner for lack of research funds, and we're too damn busy with the work we've already got in front of us.
Glad to see someone appreciated that video at least on a technical level. It was a pretty major gamble that we only had 5 weeks to pull off. And forget trying to explain to the "creatives" what we were doing. No one really had much of a clue what was going on; they just wanted to know if they'd get something cool at the end. Props to them for taking the chance.
-ben (VFX Supervisor on that thing)
PS: Most of the noise and static you see in the performance captures we did with the Geometrics system is the result of shooting through glass that had water drizzling on it. And when you see chunks of his head flying around, that's because we were whacking the scanner periodically because the director thought it looked too clean and real. We also ended up decimating the data set to make it look pixelated. We were recording so much data from his face that it looked like a complete mesh, and with the intensity values applied, it just looked like we had shot him with a regular camera in black and white.
PPS: We always use the metric system in VFX. ;)
3:00 pm
Would someone be kind enough to point me to a Velodyne dataset? (Preferably one that goes from simple to complex.)
Thanks
12:05 am
@ A.S.
A quick Google search for "velodyne data set" turned up one dataset from the University of Osnabrück in Germany (yeah, I had never heard of them either) and another dataset from the University of Washington. There are probably others too...