Any rvm prior to 1.8.3 can’t be used in the “rvm exec” form inside daemontools.
The problem is that rvm leaves a bash wrapper around the ruby process, where daemontools requires that its managed processes exec away any wrappers.
You know you have this problem when svc -dk doesn’t actually kill your ruby process (but does kill the wrapper), causing daemontools to think the process is down (and thereby possibly spawn a new one, ad infinitum).
Upgrading to the latest head version is a good fix for this problem, but if you can’t do that, you can just set up your environment to use ruby directly in your daemontools run script thusly:
#!/bin/bash
exec 2>&1
#load your ruby env
source /usr/local/rvm/scripts/rvm
rvm use yourruby@yourgemset
# exec, replacing the shell so daemontools supervises the ruby process directly
exec thin -e production -c . -p 7400 start
A client application receives record ids in a particular, meaningful order. We need to fetch blobs out of MySQL in that same order using an IN clause. The problem is that result order is not guaranteed unless ORDER BY is present, and MySQL has no idea how the original order was concocted.
Previously, the code selected out the target data and re-ordered the resultset in memory. This is very, very costly for large numbers of results, which all have to be held at once, whereas I would like to “stream” the result set (in order, thanks) using lazy enumerators.
What to do?
The non-obvious solution, after much googling, is that we can use the MySQL function find_in_set. It looks like this:
select x from y where y.id in (a bunch of ids here) order by find_in_set(id, 'all my ids.join(,)')
What we’re doing here is ordering by a function whose inputs are the column value and all of the ids, concatenated and comma-delimited. For each row, the function finds the position of the id within that string and returns an integer, which the ORDER BY clause then uses to sort.
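To make the semantics concrete, here’s a pure-Ruby mimic of what find_in_set computes. This is an illustration only; the real work happens inside MySQL:

```ruby
# Illustration: mimic MySQL's find_in_set in Ruby.
# Returns the 1-based position of value in a comma-delimited string,
# or 0 when the value is absent (matching MySQL's behavior).
def find_in_set(value, list)
  idx = list.split(",").index(value.to_s)
  idx ? idx + 1 : 0
end

find_in_set(7, "9,7,5")  # => 2
find_in_set(4, "9,7,5")  # => 0
```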
The net result is an explicit ordering of results, without having to do anything in application memory, and the ability to stream the result set. We do this 1000 ids at a time (a MySQL limit), and it’s plenty fast for our needs.
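The batching described above can be sketched roughly as follows. This is not the actual application code: `conn.stream` stands in for whatever streaming query API your MySQL client provides, and the table and method names are hypothetical:

```ruby
# Hypothetical sketch: fetch rows in the caller's id order, 1000 ids
# per query, ordering each batch in MySQL with find_in_set.
def ordered_query(ids)
  list = ids.join(",")
  "SELECT * FROM blobs WHERE id IN (#{list}) " \
    "ORDER BY find_in_set(id, '#{list}')"
end

def each_blob(conn, ids)
  return enum_for(:each_blob, conn, ids) unless block_given?
  ids.each_slice(1000) do |batch|
    # conn.stream is a stand-in for a streaming query API that
    # yields one row at a time, so nothing is re-sorted in memory.
    conn.stream(ordered_query(batch)) { |row| yield row }
  end
end
```

Calling `each_blob` without a block returns a plain Enumerator, so the caller can chain lazy operations on it without materializing the whole result set.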
KCachegrind is a tool for generating and viewing call-graphs for profiling code. Unfortunately, it’s not lying around in any package repositories I have configured (I’ll write a heated screed on packaging systems at some point), so I decided to compile it myself.
After much googling and random package installing (X, libx-devel, kdelibs-devel, etc.), I hit on the following:
$ ./configure --with-qt-dir=/usr/lib64/qt-3.3/lib --with-extra-includes=/usr/include/kde3 --with-qt-libraries=/usr/lib64/qt-3.3/lib --enable-libsuffix=64
Modify appropriately for your Qt and KDE library versions and locations. Interestingly, the libsuffix parameter fixes the
checking if UIC has KDE plugins available… no
configure: error: you need to install kdelibs first.
check during configure, a cause of much consternation.
Splunk is an awesome tool. Getting the web frontend (aka Splunkweb) working behind a reverse proxy with ssl enabled is not awesome, and nearly totally undocumented.
Here’s how I did it with Lighttpd (ymmv):
Edit $splunk_home/etc/system/local/web.conf, and add the following directives:
SSOMode = permissive
tools.proxy.on = True
tools.proxy.base = https://<your splunk hostname>
Note that I’m not using the Splunk single sign-on features (SSOMode).
The tools.proxy.base setting will cause Cherrypy to use the correct external hostname for redirects & such. Without this setting, you’ll always be redirected to localhost.
Inside lighttpd.conf, the following configuration did the trick:
Set up SSL:
$SERVER["socket"] == "0.0.0.0:443" {
  ssl.engine = "enable"
  ssl.pemfile = "/etc/ssl/your_ssl_cert.pem"
  server.name = "www.example.com"
  server.document-root = "/srv/www/vhosts/example.com/www/"
}
And then configure the reverse proxy:
proxy.server = ( "" =>
  ( "splunk" =>
    (
      "host" => "127.0.0.1",
      "port" => 8000,
      "fix-redirects" => 1
    )
  )
)
Note that this will serve Splunk from the root of the http space. If you want Splunk mounted somewhere else, you’re on your own.
The Mercurial DSCM system has the ability to create, read and apply “bundles”. Bundles are files containing compressed Mercurial changesets (including any binary content), and are useful for transferring changesets between disconnected, or intermittently connected, repositories.
Often, it’s a good idea to inspect the bundle contents before unbundling. Mercurial treats a bundle as a repository in and of itself, and therefore it’s possible to see the log entries in a bundle thusly:
hg -R ~/path/to_bundle.hg outgoing
When executed from within a working copy, the command will show a summary of the changesets contained within the bundle, but not contained in the working copy.
OS X .dmg files can be easily converted to raw disk dumps (for use with the venerable dd) thusly:
hdiutil convert -format UDTO -o new.dd original.dmg
Then you can ‘burn’ the image onto a thumbdrive or the like:
dd if=./new.dd of=/dev/disk2 bs=2m
Note that if you get the output file parameter wrong, you run the risk of overwriting something you might not want to, like your boot disk.
For some reason, OS X doesn’t magically mount .iso images (i.e. dvd or cdrom backups). Fortunately, you can use the following:
/usr/libexec/vndevice attach /dev/vn0 image.iso && mount_cd9660 /dev/vn0 /your/mountpoint
You will find the contents of the iso image in /your/mountpoint.