Archive | perl

Continuous Deployment to CPAN

18 Aug

Recently I was working on a refactor of one of my CPAN modules which, among other things, involved changing its name from Test::A8N to the more specific Test::Story.  Doing so made me think about the process I usually go through when I consider releasing a CPAN module.

First, let me explain something about myself: I don’t like tedious or repetitive tasks.  I hate having to do the same thing over and over again, partly because I don’t want to waste my time, but mostly because inevitably one of the following will happen:

  1. I’ll forget a crucial step, and will screw something up;
  2. I’ll forget how to do it, and in my efforts to re-learn it I’ll screw something up;
  3. I won’t care enough to go through the effort, so something will get screwed up.

I expect you’re noticing a trend here.  I suspect the only reason programmers come into this profession in the first place is that we’re so bad at doing things the normal way, we have to automate everything we’d otherwise do poorly, do lazily, or forget to do altogether.

For those of us who are programmers, we’re often so lazy that we don’t want to do the same thing within our programs more than once, so we abstract functionality into reusable modules.  By that token, the Perl community must be one of the most inventively lazy groups of people around, because CPAN is full of useful tidbits like that.  But getting a module onto CPAN requires a contributor to actually, you know, submit their project.  And that is, in itself, a somewhat manual process.

I do all of my development in a version control repository, and I write a decent number of unit tests to prove my functionality works.  So once I decide that a set of new features is worthy of a new release, this is the process I go through:

  1. Run "perl Makefile.PL && make && make test" to verify everything runs okay;
  2. Run "make dist" to create the distribution tarball;
  3. Noticing the version number is wrong, I go in and change it in the main Perl module;
  4. After running "make dist" again, I realize I forgot to change the README;
  5. Potentially after another "make dist", I'll remember I'm supposed to update the Changes file to indicate what I've added;
  6. It's at this point I realize that I forgot to add new documentation to cover this new feature;
  7. I run "make test" again, this time with TEST_POD=1 set to ensure my documentation checks are run;
  8. I'll then try to remember what the command is for uploading a new CPAN module, which involves searching on CPAN for something related to "Upload";
  9. Finding "cpan-upload", I'll have to look at its documentation to figure out how to use it;
  10. I run "cpan-upload", only to realize I forgot to set my PAUSE credentials in ~/.pause.

At this point, if I’m lucky, the upload will succeed.  None of this is meant as a negative reflection on CPAN, but rather on my own forgetfulness and need to automate.
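To give a flavour of what that automation can look like, here is a minimal sketch that chains the commands above together and aborts at the first failure. It assumes the cpan-upload that ships with CPAN::Uploader and PAUSE credentials in ~/.pause (both mentioned above); the run() helper and the tarball glob are my own invention:

#!/usr/bin/perl
use strict;
use warnings;

# Run one release step, aborting the whole release if it fails.
sub run {
    my @cmd = @_;
    print "==> @cmd\n";
    system(@cmd) == 0 or die "Step failed: @cmd\n";
}

run(qw(perl Makefile.PL));
run(qw(make));

# TEST_POD=1 turns on the documentation checks from step 7.
$ENV{TEST_POD} = 1;
run(qw(make test));
run(qw(make dist));

# Pick up the tarball that "make dist" just produced.
my ($tarball) = glob('*.tar.gz') or die "No distribution tarball found\n";

# cpan-upload reads the PAUSE credentials from ~/.pause.
run('cpan-upload', $tarball);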

Read on to find out how I managed to automate this part of my life as well.


Last minute talk on automated Perl builds using Hudson tonight

12 Nov

My friend Scott McWhirter, who heads up the Vancouver Perl Mongers group, asked me yesterday to give a last-minute talk on a topic of my choosing at tonight’s Vancouver.pm meeting. He wasn’t exactly begging, but I know he’s short on speakers this month, and he wanted something interesting to show.

So I decided I’d talk about building and testing Perl-based projects using Hudson. I’ve been planning on writing a blog post on the subject for the past month, but haven’t found the time to finish off the post properly. So if you’re interested in the topic, and you don’t want to wait for me to get around to writing about it online, please feel free to drop by tonight!


Update: The talk went well! Until I have time to put a more comprehensive post up on the topic, you can always read the slides from tonight’s talk.

Have a list of several hundred addresses to get coordinates for? Perl to the rescue!

10 Jun

I recently wanted to try my hand at writing a little iPhone app to help students find university grant funding.  It turned out to be a bit more difficult than I’d expected, but part of the app was to be a listing of all the universities near the student.  This, of course, would involve actually having a list of universities.  To make a long story short, once I got the list (Wikipedia rocks), I needed their locations so I could do a proximity search from the user’s coordinates.  Since I didn’t want to spend too much time entering every one of several hundred university names into Google, I decided to whip up a simple little script in Perl to do it for me.

#!/usr/bin/perl
use strict;
use warnings;
use Geo::Coder::Google;

# Your Google Maps API key goes here.
our $apikey = 'Your-API-Key';
my $geocoder = Geo::Coder::Google->new(apikey => $apikey);

# Read addresses one per line, from STDIN or from any files
# named on the command line.
while (<>) {
    chomp $_;
    my $location = $geocoder->geocode( location => $_ );
    print "$_: ";
    if (ref($location) eq 'HASH') {
        print "\n";
        print "    Address: $location->{address}\n";
        # Google returns Point coordinates as [longitude, latitude].
        print "    Latitude: $location->{Point}{coordinates}[1]\n";
        print "    Longitude: $location->{Point}{coordinates}[0]\n";
    } else {
        # No result (or an unexpected response) for this address.
        print "UNKNOWN\n";
    }
}

If it weren’t for Perl, this would have been fiendishly complicated.  I just threw that in my ~/bin directory and passed a list of addresses to it on STDIN.  Or you can give it a list of files that contain addresses, one per line, and it will print YAML-like output of the results.  For instance:

nachbaur$ echo "University California San Diego" | ~/bin/str2geo.pl
University California San Diego:
    Address: University City, San Diego, CA, USA
    Latitude: 32.854672
    Longitude: -117.204533

I hope you have fun with it.  It was easy to build, and writing little pipe commands like this makes using a Mac with Perl a blast.

I <3 HTTP::Engine

22 May

It’s too bad I can’t use it at work, but HTTP::Engine rocks my world. It does “The Right Thing™” for negotiating HTTP requests and their content in a wonderfully transport-agnostic way. That means that whether you’re running under mod_perl, FastCGI, plain ol’ CGI, or even as a stand-alone development-mode daemon, it will just work.
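To illustrate, here is a minimal sketch of a stand-alone HTTP::Engine daemon, written from memory; the port, the parameter name, and the handler body are my own invention, so treat the details as assumptions rather than gospel:

#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Engine;
use HTTP::Engine::Response;

# Swap 'ServerSimple' for an FCGI, ModPerl, or CGI interface and the
# request handler below stays exactly the same.
my $engine = HTTP::Engine->new(
    interface => {
        module          => 'ServerSimple',
        args            => { host => 'localhost', port => 8080 },
        request_handler => \&handle_request,
    },
);

sub handle_request {
    my $req = shift;    # an HTTP::Engine::Request
    # Query and POST parameters are already parsed for us.
    my $name = $req->param('name') || 'world';
    return HTTP::Engine::Response->new( body => "Hello, $name\n" );
}

$engine->run;

The point being that the interface block is the only part that knows anything about the transport.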

At my work, we’re building new user interface components for our email security appliance product, and for the first time we’re building a web interface that will be used by a potentially large number of public users. Up until now, all of our target users were internal to our customers, meaning system administrators and a small number of workgroup users. Now, however, our customers’ own customers, and the recipients of a potentially huge volume of email, will be able to interact with our appliance. This means we have to scale a lot higher than we’ve needed to in the past, all with limited hardware and memory.

So for the first time in eons, since before I started working with mod_perl back in the late 90s, I’m having to parse HTTP POST data, including file uploads, by hand. Catalyst does it really well, but unfortunately its list of dependencies makes it prohibitive to pull into our build. That’s when I discovered HTTP::Engine. It looks great, it does what Catalyst does for managing HTTP requests in a flexible and extensible way, it gives all sorts of great developer hooks for accessing internal information about the request, and it doesn’t require that you buy in to Catalyst’s way of doing things (of course, when I’m at home, I develop my apps using Catalyst).

Sadly though, HTTP::Engine’s list of dependencies is almost as long as Catalyst’s. Normally this wouldn’t be a problem, but given our restricted memory limits, I just can’t justify using it. Which really blows, because as CPAN modules go, this one is pretty darned sexy!

I’ve decided to simply write a subclass of HTTP::Request that does what we need, and tie it in to the third-party Nginx upload module for handling file uploads. A rough sketch of the idea follows.
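This sketch is hypothetical, not our actual code. The names, and the assumption that the Nginx upload module has been configured to replace each uploaded file with plain form fields such as "file.path" pointing at a temp file on disk, are mine; the field names are configurable on the Nginx side:

package My::Request;
use strict;
use warnings;
use base 'HTTP::Request';
use URI;

# Nginx has already streamed the upload to disk, so the body we see
# is ordinary urlencoded form data; no multipart parsing needed here.
sub upload_path {
    my ($self, $field) = @_;
    my %params = $self->_body_params;
    return $params{"$field.path"};
}

# Reuse URI's form decoder rather than parsing the body by hand.
sub _body_params {
    my ($self) = @_;
    my $u = URI->new('http:');
    $u->query($self->content);
    return $u->query_form;
}

1;

With something like that in place, $req->upload_path('file') hands back the temp file Nginx already wrote to disk, which is exactly the part Catalyst or HTTP::Engine would otherwise be doing for us.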

Maybe someday I’ll be able to use Moose, Catalyst and HTTP::Engine on our appliances deployed in customer locations around the world, but for now I’ll just have to roll my own.

My blog and I are joining the Iron Man competition!

8 May

It’s probably not the kind of Iron Man competition that you’re used to hearing about. This one is a challenge to blog at least once per week, every week, about Perl and Perl-related technologies.

My buddy Matt Trout, co-founder of Shadowcat Systems, creator of DBIx::Class, a core contributor to Catalyst, and author of all sorts of other great Perl goodies, is starting a blogging contest to try to get the word out that Perl is still alive and well, and perhaps to overthrow the public perception that Perl is just old-school “use CGI;”. He’s also a hilarious public speaker.

I’m going to be taking part from this blog. I love Perl, and have been writing software in it for nearly 15 years. The problem with a lot of people like myself is that we already know Perl is great, and that it’s used throughout the world to solve mission-critical problems on a daily basis. But we tend to forget that, unlike Java, .Net, and other crappy newcomers, Perl doesn’t have a marketing department trying to convince people that its language doesn’t suck. Anyone who’s had to deal with Java CLASSPATH problems, or with .Net’s problems (well, that it’s .Net, for starters), knows that just because there’s industry buzz about something doesn’t mean it’s lacking in the “SUCK” department.

Please check out www.enlightenedperl.org/ironman.html and join in, or alternatively email ironman@shadowcat.co.uk if you want to contribute. At the very least, I suggest you follow the Enlightened Perl Iron Man blog. If you’re following this one, you’re already following a small part of it.

