Jorge's Stompbox

Plug in, crank it to Eleven.

MAASing and Jujuing With HP Proliant Micro Servers

I’ve heard nothing but good things about HP ProLiant N40L MicroServers. Those of you who have ever spent more than 5 minutes with popey know he’s all about them. They seem like great little boxes: quiet, they hold 4+1 drives, and with an optional iLO card they can be totally headless. Add drives and some Ubuntu Server and you’re good to go. Or in Robbie’s case, a few of them with Ubuntu Server and OpenStack:

[photo: Robbie’s rack of MicroServers]

Compared to my meager setup:

[photo: my single MicroServer]

Anyway, the ProLiant comes with 4 drive bays. Unfortunately one of them is for the OS drive, yet it comes with an entire empty top section where they expect you to put an optical disc drive, which in today’s world of MAAS is a waste of space, since we’re all about managed PXE installs now. If you’d rather put the OS drive up there and keep the slick bays free for data disks, you’ll want this bit of kit. You’ll need a right-angle SATA cable and a Molex-to-SATA power adapter as well, which is why mine is still in pieces. A nice external eSATA port means I can also just reuse my old existing enclosure.

While Robbie will surely be jujuing on his OpenStack, I will be using my micro to deploy the same juju charms, except as LXC containers. Since juju abstracts the cloud provider away from me, I can prototype my little deployments on my server and be reasonably certain that my stuff will be OpenStack-ready. Except for scale, of course: I couldn’t test a 40-node Hadoop deployment on my server. But I can certainly deploy most things.

If you work at HP and are reading this, give me a little slot on the side for an SSD OS drive and I’ll be a happy clam. Preloaded with Ubuntu Server and I’d be even happier. :)

(Note, this post has nothing to do with the previous XBMC post, whatsoever. >_> )

XBMC Eden in Debian and Ubuntu

The hero of today for me is Andres Mejia, who has put XBMC Eden into Debian and it’s been synced into Ubuntu for 12.04. Thanks to everyone involved who made that happen, Eden has been really great for me.

In other XBMC news, I am finding that turning on dirty regions has improved my UI performance quite a bit, though from asking around it seems to be hit and miss for some people.
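For reference, turning this on happens in advancedsettings.xml. Here’s a rough sketch; double-check the exact tags and values against the XBMC wiki before copying, since I’m going from memory:

<!-- ~/.xbmc/userdata/advancedsettings.xml (sketch) -->
<advancedsettings>
  <gui>
    <!-- 0 disables dirty regions; 1-3 are increasingly aggressive tracking modes -->
    <algorithmdirtyregions>3</algorithmdirtyregions>
    <!-- milliseconds before forcing a full repaint anyway; see the wiki for tuning -->
    <nofliptimeout>0</nofliptimeout>
  </gui>
</advancedsettings>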

Ubuntu OpenWeek Call for Instructors

The IRC workshop extravaganza known as Ubuntu OpenWeek is ready to go. For those of you not in the know, this is a weeklong (sort of) IRC event where we have instructors and classes on various topics in Ubuntu. The wiki page is now ready to go:

https://wiki.ubuntu.com/UbuntuOpenWeek

I am looking for instructors (especially those of you who have never given a workshop before) to give classes on Ubuntu. This can range from “How do I use Unity?” to “How do I contribute to Ubuntu development?” So if you’re interested in participating, grab a slot or ping me!

Of note, we’ve shortened the “week” to Wednesday through Friday, but the number of sessions is the same; the days are just a bit longer so we can cover more time zones.

Kicking the Tires on MAAS

OK, so we’ve fleshed out the MAAS wiki page a bit today with some instructions on how to get started, so if you’ve got a few boxes around you can kick the tires (you’ll likely need at least 3 boxes: one for MAAS and two nodes for juju, though you only need 2 to test MAAS by itself). If you don’t know what MAAS is, you can check out Mark’s blog post about it. It’s a new provisioning tool that lets you quickly deploy servers.

MAAS itself is easy to install: just a sudo apt-get install maas and you’re good to go. Like other deployment tools it helps if you’re in control of your own DNS/DHCP server, though I was able to get it running by just setting them up on their own separate network.
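For the record, that really is the whole install on 12.04 (the package name is straight from the text; your DNS/DHCP arrangements are the part that will vary):

sudo apt-get install maas    # installs the MAAS server and its dependencies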

After you’ve got your shiny new MAAS server running we could use a hand with some testing. We’ve documented that here:

  • https://wiki.ubuntu.com/ServerTeam/MAAS/Testing

What you’ll do is install an updated checkbox from a PPA, run some tests, and then submit the results.

[screenshots: checkbox running the MAAS tests and submitting results]
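Roughly, the flow on each box looks like this; note that the PPA location below is a placeholder, so grab the real one from the testing page above:

sudo add-apt-repository ppa:SOME-TEAM/checkbox   # placeholder; see the testing wiki for the real PPA
sudo apt-get update
sudo apt-get install checkbox                    # the updated checkbox with the MAAS tests
# then run checkbox, pick the MAAS tests, and submit the results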

If a test fails, we take you right to Launchpad where you can file a bug, and then we’ll go fix it. So if you’ve got some boxes sitting around and want to help out, feel free to join in. But remember, this is a deployment system: blowing away your machines and installing Ubuntu Server on them is the desired behavior, so don’t sacrifice boxes you care about.

Giving Feedback

If you’d like to give feedback, comments, or even patches, get in touch with the team on Launchpad at https://launchpad.net/~maas-devel. We’re keen on learning what you’d like to see MAAS do.

Happy MAASing!

Getting Started With MAAS

MAAS is ready for people to start banging on. If you’ve been struggling with Orchestra deployments, consider this the new version of that; as you can see from the instructions, it’s already way easier. Here are the instructions to get it started, and how to point juju at your MAAS installation so you can start deploying services:

  • https://wiki.ubuntu.com/ServerTeam/MAAS

Why the Juju Charm Store Will Change the Way You Use Ubuntu Server

Yikes, quite a statement!

For the past 6 months we’ve been travelling around conferences talking about juju and charms. We’ve had charm schools and training events, but it’s been difficult to explain to people the differences between service orchestration and configuration management, especially with a tool that wasn’t so complete. Thanks to the work of some volunteers, though, we’ve managed to get 58 charms into Ubuntu so far.

But to you that means nothing, because we haven’t made it easy to get this stuff. Until now. Today the juju charm store landed, and it will monumentally change the way you use Ubuntu Server in 12.04. Now that this is in place I can hopefully show you by example why juju is awesome.

In this post I want to talk more about policy than juju itself. If you’re on Precise these commands won’t work (since we haven’t deployed the charms to Precise yet), our docs aren’t quite up to snuff yet, and two of the commands I talk about don’t even exist yet. So instead of following along, try to envision what we’re trying to accomplish policy-wise.

You start by picking a service, and then deploying it:

juju bootstrap
juju deploy zookeeper
juju expose zookeeper

juju goes off and fires up an instance, installs zookeeper, and configures it. I don’t really have to care about things that aren’t important to me, like finding which image to deploy or provisioning the OS. And if I do care, we make it easy to change in .juju/environments.yaml. Fire up juju status and wait until you see the public IP of your service, and it’s all done. You see, up to now that didn’t work; we made you manually branch things. But that’s not even the important part: the big change here isn’t juju itself, it’s the charm store.
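For the curious, “make it easy in .juju/environments.yaml” looks roughly like this for EC2. This is a sketch: the keys, bucket, and secret are placeholders you fill in yourself:

# ~/.juju/environments.yaml (sketch; values are placeholders)
environments:
  sample:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
    control-bucket: juju-some-unique-bucket
    admin-secret: some-long-random-string
    default-series: precise
    default-instance-type: m1.small   # this is where you pick instance details if you care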

The new new archive

One of the best things about apt has nothing to do with apt itself. There have been other tools that do the same thing, but nothing like apt. Why is that? Because while apt is great, you don’t really care about apt; you care about the archive at the other end of apt, that huge collection of software. But sometimes it can be limiting. While we love the convenience of a stable archive, it can slow us down, especially when trying new technology, and doubly so in the fast-paced world of the cloud. Right now the version of zookeeper in 12.04 is 3.3.5. Normally we would have that for the next 5 years. We have PPAs, but you have to know where to find them, and that’s only half the battle. There’s more to services than just packages. And what about other services? Why isn’t this just built into the OS?

Well, with the juju charm store we can have our cake and eat it too. It’s a collection of charms (scripts) that we can use in Ubuntu. Let’s rewind and instead use this command:

juju deploy zookeeper
juju set zookeeper source=dev
juju expose zookeeper

That did exactly what you think it should do: we installed the development version of zookeeper. Why? Because the charm author made it so on install you could install from the Ubuntu archive, or from a PPA. If he wanted to, he could grab the pure upstream source, or pull right from the VCS, and expose that as an option.
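An option like that is just a couple of lines in the charm’s config.yaml plus a hook that acts on it. A hypothetical sketch follows; the option name matches the example above, but the PPA is a placeholder and this isn’t the real charm’s contents:

# config.yaml (hypothetical)
options:
  source:
    type: string
    default: archive
    description: Install zookeeper from the archive or from a dev PPA.

#!/bin/sh
# hooks/config-changed (hypothetical fragment)
SOURCE=$(config-get source)            # juju hands the hook the current option value
if [ "$SOURCE" = "dev" ]; then
    add-apt-repository -y ppa:SOME-DEV-PPA/zookeeper   # placeholder PPA
    apt-get update
fi
apt-get install -y zookeeper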

Wait what?

PPAs were the first step. But there’s no structure there, and they’re only about packages. Huntin’ and peckin’ for repositories. Boo. Here’s why this works:

  • In the charm store, we allow a charm to deploy a service however the author chooses, with some caveats (see below).
  • We don’t freeze the charm store. It’s a living archive that can be as up to date as people want it to be.

This is a monumental addition to Ubuntu. Anything in the juju charm store can be deployed and updated for the life of the release. Sick of searching for “how to deploy node.js on Ubuntu” and finding some guy’s script on a pastebin? We have a charm that does that. Right now it uses Chris Lea’s PPA to install node, because that’s what everyone is doing anyway. So if this is the way the community is deploying node on Ubuntu, then we’re going to say “okay! but let’s organize this.”
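The install hook for a charm like that boils down to a few lines. This is a sketch of the idea, not the charm’s literal contents:

#!/bin/sh
# install hook (sketch)
add-apt-repository -y ppa:chris-lea/node.js   # the PPA the community already uses
apt-get update
apt-get install -y nodejs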

In fact, those scripts you see on pastebins are the first steps to getting those services charmed. So why put them in a store?

  • Community peer review. The store is designed to be shared and for you to modify stuff and push it out to people.
  • Being in the store puts you closer to your end users, no more deployment scripts on wiki pages!
  • We run automated testing on the charm store and spit out warnings if your stuff breaks.
  • Since juju is service orchestration, anything in the store runs on EC2, OpenStack, and bare metal. Test on your laptop and know it’ll deploy on OpenStack.

Since the store isn’t frozen, next year the nodejs charm will still get you what you want; you’re not stuck forever with what’s in the archive now. Will the PPA move? Will people want to install node another way? We allow the community to update the charm to whatever the current state of the art is. And not just nodejs: whatever service anyone has a charm for.

But I don’t know anything about packaging

Packaging can be hard. I can think of countless projects off the top of my head that have their own deployment scripts, or a list of instructions on their wiki page on how to get something installed. We can gather all of those and put them in the charm store. Take your deployment script (in whatever language you want), submit it to the store, and your software is ready to go. We can build on top of other charms:

juju deploy -n3 mongodb # We’d like a MongoDB replicaset, 3 nodes please!
juju deploy --config node-app.yaml node-app # Then deploy my node application from source using the node-app charm with config information
juju add-relation mongodb node-app # Hooks fire off and my application now talks to the mongodb replicaset
juju expose node-app # Open the port, serve my app! 

What if every node.js application was deployable this way? Rails. Whatever you’re into. All of it. That is what we’re building in the juju charm store: the ability for people to do just that. All of a sudden, if your project isn’t in the Ubuntu archive you can still reach your users. Publish your charm after the release is already out and keep it up to date. We move out of the way and give you a solid OS; you deploy the stack you want, how you want it. Wouldn’t it be great then if we could:

juju publish # But we don't have this yet.

So for that part you’ll have to push a branch, but you can see where we want to go. And then your application is available via juju deploy ~yourname/yourapp. Submit it to the store, get it reviewed, and it becomes juju deploy yourapp. It’s not so much about juju itself as it is about having a place where devops can collaborate around services.
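“Push a branch” today means pushing the charm to Launchpad with bzr, something like this; the branch layout follows the charm store convention of the time, so treat the exact path as an assumption:

bzr push lp:~yourname/charms/precise/yourapp/trunk   # then deploy it as described above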

Be in control of your own platform and share by default

Our default charms are pretty conservative. In fact, the charms you’ll be able to deploy with the default deploy command are generally boring right now. But maybe you don’t care about running zookeeper from the archive; you can have your charm grab the tarball, install the dependencies you need, and build it right on the spot.

Maybe your charm grabs your source from GitHub and runs it right out of there. Totally fine. We just tell you to put it in a certain place. Don’t like how James deploys zookeeper in his charm? Okay:

juju source zookeeper ~/my-local-version   # We don't have this yet either, you can just pull from VCS for now
cd my-local-version # and then make your changes
juju publish # But we don't have this yet, so you'll have to push somewhere.

And now users can juju deploy cs:~yourname/zookeeper the way you like it. That’s it.

And not just you as a person. How about your projects? Look at the instructions for Ubuntu users on the Varnish home page:

Varnish is distributed in the Ubuntu package repositories, but the version there might be out of date, and we generally recommend using the packages provided by varnish-cache.org.

There you go. This project has just told us the very first thing a varnish charm should do. We don’t have one yet. Are you deploying Varnish to the cloud? That script you’ve been using to do all this sort of stuff is the beginning of a charm. Did I mention you can write them in whatever language you want?
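A first cut at that varnish charm could literally be those instructions wrapped in an install hook. A hypothetical sketch; the repository lines are paraphrased from varnish-cache.org’s own docs, so double-check them there:

#!/bin/sh
# install hook for a hypothetical varnish charm
curl http://repo.varnish-cache.org/debian/GPG-key.txt | apt-key add -   # trust their repo key
echo "deb http://repo.varnish-cache.org/ubuntu/ precise varnish-3.0" > /etc/apt/sources.list.d/varnish.list
apt-get update
apt-get install -y varnish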

Caveats and Unicorns

However, we’re not hosting an intergalactic kegger here; we’re not going to allow charms to do bad things. We have some policies in place in the official part of the charm store; you can see them here. I won’t go into each one, but the idea follows the Ubuntu mindset, “be nice, be collaborative”, plus some technical checks to make sure a charm works: things like idempotency, the right to distribute, and so on. The peer review in the charm store will be a fun and loosely coupled process. We’ve got 58 charms right now, and some big holes to fill in.

The juju community has stepped up to the plate and nailed some useful ones like Hadoop, Cassandra, MongoDB, Jenkins, MySQL, PostgreSQL, lodgeit, OwnCloud, ThinkUp, Alice and Subway IRC, TeamSpeak, phpMyAdmin, and even an entire PaaS in Cloud Foundry. We even have game servers like Minecraft and Steam. The bits are in place, and now we’re ready to grow this.

So while I think you can have your cake and eat it too, this is only the first step. We think we have a compelling story to tell here in the cloud, where you can quickly deploy, manage, and share with others. We have some serious barriers to overcome; juju is just getting started, and as you’ll see when you play with it, there are still some rough edges and limitations we need to work out. But we have a crack team of folks working on this, and we’re ready for people to start kicking the tires and giving us feedback.

And while we’re working on baking as many cakes as we can, like many things in open source, the ecosystem of cakes is what makes it great. So if you want to deploy on the cloud, we’re working hard to eliminate deployment barriers for you. I hope you join us!

POSSCON 2012 and Charm Set Options


Marco Ceppi and I went to POSSCON last week to talk a little bit about juju and Ubuntu Server. During the charm school Marco demoed some interesting things you can do with the juju set command.

For example, the phpmyadmin charm has an option to use the upstream version of phpMyAdmin. During the demo Marco was playing with phpMyAdmin and said that sometimes he prefers to use the latest upstream version instead of the package. After all, with an LTS supported for 5 years, at some point you might want that in case there’s no backport or PPA available.

So Marco just switched to it live.

juju set phpmyadmin use-upstream=true

Then he waited a few seconds for the charm to work its magic, hit refresh, and voilà: fresh phpMyAdmin. We then continued with the demo.

I suspect use-upstream style switches will be a common feature request for many charms.

I Kind of Like Search Leakage and History

I love the idea of DuckDuckGo and having a search engine that is focused on privacy, but every time I try to switch to it I end up going back to Google after a few days because I can never find anything. I think I’ve realized why I have such a hard time switching: its best features are the ones I don’t want to use. I want my search results to be altered based on my history, and lately I like that people in my Google Circles can have their search results integrated into mine. I am far more likely to trust a result from one of them than a machine-generated one (I think); I certainly know it won’t be spam, at least.

I do love the goodies though, so every once in a while I still try to switch to it full time. Each time I last longer, but I am still having a hard time finding things when I use DDG.

Is it possible to use DDG in an “I don’t care” mode?

Iterating Deployment With Juju

I recently blogged about how we redeployed OMG!Ubuntu onto AWS with juju and Ubuntu Server.

But as I talked about in that post, I had made the mistake of “it’s not optimized, throw hardware at it!” and left the site running on 4 extra-large Amazon instances. As Marco worked on the charm and Clint and I talked about how to move forward, I kept kicking myself for going so overboard. Then Clint explained to me how the cloud lets you overcompensate, then optimize, deploy, optimize, deploy, and so on, each time cutting down on resources and saving money.

Here’s how we did this part with juju.

Capture your expertise

First things first: how did we fix the WordPress charm? Well, Marco analyzed everything we needed, branched the old one, and created this. Click on all the hook links on that page to see what the charm does. I’m not going to go into every detail here, but the basics are: we now pull WordPress from upstream to get the latest version, and the second major thing is we implemented Brandon Holtsclaw’s nginx configuration and heavy caching. It was important for us to capture every improvement so we don’t have to do it again. This can be tough when firefighting; the urge to just ssh into the node and fix things on the spot was a tempting carrot for most of this. But, determined not to do the same work over and over, we kept improving the charm.

We are now running WordPress on three mediums, load balancing between themselves via nginx, with one small for MySQL and one small for juju’s bootstrap node. Ryan Kather asked why we weren’t using Amazon Elastic Load Balancing, but it turns out that ELB needs a DNS change, and we don’t have access to DNS when Joey is asleep, so we just decided to have nginx do the load balancing. Since nginx is doing that, we don’t need ELB, and we certainly don’t need an instance for haproxy.
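The nginx side of that is just a plain upstream block; conceptually something like this, where the addresses are placeholders for the three WordPress mediums:

# nginx load balancing sketch; IPs are placeholders
upstream wordpress_pool {
    server 10.0.0.11;
    server 10.0.0.12;
    server 10.0.0.13;
}
server {
    listen 80;
    location / {
        proxy_pass http://wordpress_pool;   # spread requests across the pool
    }
}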

Now that the site is up (and still overkill) we created a staging.omgubuntu.co.uk. And then (this is the cool part) Marco redeployed the charm again, but on t1.micros. Now we have the exact same deployment on smaller machines, so we can optimize.

And then iterate again and again

Each improvement gets pushed to the charm and redeployed. On the public cloud, a site the size of OMG! takes 7 minutes to deploy. That means we can work on the charm and test it on staging quickly. If Clint wants to iterate even faster, he can just deploy on his laptop until he’s ready, and then push to AWS.

This lets us tweak staging, and then when we’re ready, either upgrade those micros or redeploy to something bigger. Right now the load on each medium instance is about 0.05, so we can start to pare the instance size down to smalls and be more elastic in growth. Again, still overkill, but since it’s so easy to test we can do this relatively quickly. I suspect our goal of getting down to 2-3 smalls is still easily attainable.

Some tools and other lessons we’ve learned:

  • siege (it’s in the archive) lets us load test the deployments to see how they perform; see the example after this list.
  • blitz.io is hosted siege that lets you test from multiple locations.
  • nginx in the hands of someone like Brandon is really epic.
  • We fired up a micro with byobu and then the team juju’ed in there, so we could collaborate in real time.
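As promised, the siege example: a quick run against staging looks like this, where -c is the number of concurrent users and -t is the duration:

siege -c 50 -t1M http://staging.omgubuntu.co.uk/   # 50 concurrent users for one minute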

And some things to fix

  • We’re caching so hard that if Joey posted right now it would take an hour to show up on the site, so we’re still figuring out how to refresh the cache when he posts.
  • Brandon wants to investigate CloudFlare. He’s used it in the past and likes it, so we’re going to play with it.
  • We’ll need to branch, generalize, and “backport” this setup to the WordPress charm in the charm store. I am now convinced that this setup (and variations the community comes up with) should be the way we deploy WordPress on Ubuntu; it will be exciting to see people deploying their blogs this way.

Redeploying OMG!Ubuntu Onto the Cloud With Juju

On St. Patrick’s Day Marco Ceppi and I were chatting about SPDY. I was wondering if we could add an option to the WordPress charm to try the experimental SPDY support in either Apache or some other web server. It’d be a cool trick. In general the WordPress charm is boring and generic; it needs to be hotrodded.

Then we noticed OMG!Ubuntu was still down. Joey had moved his server, and all it was returning was an “It works!” page. As we talked about the charm I kept following the gnashing of teeth from Ubuntu fans aching for their news. Moving a server. Takes a weekend? Over 3 days of downtime? Then it just started to really bother me.

This is 2012; we’re smarter than this! So I asked Joey if we could help. Some standalone server being set up in some rack on only one box? What happens when the site gets busy? What if the facility goes down? So we weren’t just going to move him to AWS, we were going to do it with juju so we could show people how easy this is.

Challenge Accepted.

Step One - Think about how you want to be.

Let’s get started then. I know we’re going to use juju, so:

juju bootstrap

And a node fires up on AWS and starts configuring itself with Ubuntu Server. I know we will want a three-tier system: one for MySQL, one for WordPress, and one for a load balancer so we can add more WordPress later.

juju deploy wordpress
juju deploy haproxy
juju deploy mysql

Three more instances fire up, with Ubuntu Server, and start installing all those things. OK, now let’s introduce them: HAProxy needs to know where WordPress is, and WordPress needs to talk to MySQL.

juju add-relation haproxy wordpress
juju add-relation wordpress mysql

This is where the magic comes in. The hooks know what WordPress should do when it talks to haproxy, and what to do when talking to MySQL. OK, we’re done; we have WordPress ready to go. Not bad: 7 commands to a brand new OMG!

I then did a juju expose haproxy and told Marco “Ready”. I had shared my juju environment with him and put both of our ssh keys in it. Because of this, I was able to type these commands and Marco could run juju status and check on them. After about 9 minutes or so for the instances to fire up, he was ready to go and we had WordPress up.

Step Two - Getting your old stuff

Joey provided Marco with access to his server with the wonderful ssh-import-id tool. All he needed to do was run ssh-import-id marcoceppi, and the tool went to Launchpad, grabbed his ssh keys, and added them to the server. Then Marco was able to ssh in and copy the data.

The first step was the MySQL dump. You can get to the MySQL node by just doing juju ssh mysql/0, and to the WordPress node with juju ssh wordpress/0. This copy took a bit: the MySQL database was only about 750MB compressed, but there were something like 3GB worth of images.
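The database copy itself was nothing exotic; roughly this shape, where the database name and host are placeholders:

# on the old server: dump and compress (names are placeholders)
mysqldump --single-transaction omg_db | gzip > omg.sql.gz
# copy the dump over; the address comes from juju status
scp omg.sql.gz ubuntu@NEW-MYSQL-HOST:
# then on the MySQL unit: load it
zcat omg.sql.gz | mysql omg_db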

Since this was a WordPress install, the only directory that needed to be moved was wp-content, which made updating simple with the existing charm and WordPress infrastructure.

Step Three - Getting it all ready to go.

Marco copied all the WordPress files into the right spot while I refreshed. And … it worked! Almost. Some images were missing. They were still on the old server, just in a spot we weren’t expecting. An easy fix to copy them over.

We banged on it a bit and we were good to go!

Step Four - Let’s do it.

Right around here I realized I had made a mistake. We had initially started with 4 small Amazon instances. Redirecting the traffic from the old server for just a few seconds crushed them. The load on the WordPress node alone was 90. Ah nuts, I said, should have gone large. I specified extra large in the juju config and reran those commands. When the new environment was up, Marco was able to copy the data over. Since we were already on AWS, instance-to-instance copying ran at LAN speed.

Then we pointed traffic at it again. The instances got smoked again. This time the load was around 40. How? How can one blog smoke instances like that?

Step Five - Expertise, there is no replacement.

I texted Ubuntu Server engineer and MySQL ninja Clint Byrum and explained the situation. He checked it out and had it humming in about 5 minutes. So much so that he finished with “Hey, you could have just run this on an m1.small, did you guys try that?” Thanks, Clint. Now all that was left was getting the WordPress node under control.

Marco did this a few ways: he turned on caching in WordPress and added opcode caching in PHP with APC. Eventually the load on the WordPress instance went down to 0.01. There were some additional issues with the sheer number of WordPress plugins OMG! uses, which made it hard to debug what was going on.

By now Joey was long asleep. We were about 5 hours in, but the site was FAST on AWS. So we sent him a mail with the elastic IP so he could move DNS, and went to bed.

And where the big work begins.

Right now the site is running on 4 extra-large AWS instances. This comes out to about $3.56 an hour plus bandwidth. Not ideal, especially since we know we can move back to m1.smalls. We learned a bunch; in fact, it brought to light some things we can fix in Ubuntu:

  • The first trick will be to move the images to S3 buckets. Right now the WordPress EC2 instance is serving the images via Apache, which is expensive. We can move these all into S3 buckets and pay the same amount for bandwidth, but take the load off the WordPress instance so it won’t have to be so busy, letting us move it to an m1.small. (Update: I was wrong on EC2 and S3 pricing for outgoing data; it’s currently 12 cents a gig for either one.)
  • We can also move MySQL to an m1.small.
  • haproxy just points people around, so we can move that to an m1.small.
  • And lastly we can move the juju bootstrap node to an m1.small.
  • The WordPress and MySQL charms need work. They basically just apt-get install stuff and connect the pieces to each other; they don’t do what people need them to do to deploy a real site. This setup helps us a bunch, but it would be wicked to have experts collaborating around this work. It wouldn’t hurt to make the MySQL package more scalable out of the box either.

Doing this will make OMG! run on 4 m1.smalls: 32 cents an hour instead of $3.56, plus bandwidth costs. We can cut a node soon, too; when juju supports putting the bootstrap services on another node instead of running on their own instance, that’ll take us down to 3 nodes.

Marco is making an OMG! charm that is the WordPress charm plus everything we learned today. When it’s ready, the sysadmin at OMG! will run all those commands I just showed you, and then he’ll be done, except he’ll be running on AWS, with proper S3 integration, backup snapshots to S3, and all the goodies, running smooth as butter.

Now that Marco is capturing everything he’s learned into a charm, we don’t have to do this by hand anymore. It will be trivial to deploy OMG. Joey can redeploy it into another zone for redundancy and backup, along with all the data, probably in less than an hour, depending on how long the data copy takes from zone to zone. In fact, I think I’m going to try it just to see how long it takes. The things we learned tonight we’ll put back into the MySQL and WordPress charms, so that anyone who wants to run WordPress on Ubuntu can have a sweet rig.

Conclusion and Scale-o-rama

We showed you how deploying can become easier and basically scripted. That’s not even the nicest part of juju. Every 6 months when Ubuntu releases, OMG!Ubuntu gets nearly overrun with traffic. With one server you can’t scale, unless you buy or rent another server, but he doesn’t need another server except for those few times he’s hurtin’. Want to know how easy it is to scale with juju? About a week before release Joey will go:

juju add-unit wordpress-omg

And another WordPress instance will come up. Since haproxy and WordPress already know each other, it will just work. He’ll have a 2-node WordPress service. He can run this command for as many nodes as he wants, depending on how much traffic is coming in. After the traffic surge is over, he can juju remove-unit wordpress-omg back down to one node for normal operations. Pay for exactly what you need and be able to grow at a moment’s notice; that’s how we roll.

And this is just WordPress and MySQL, a simple example; you should see how we do Hadoop! We can do this for any service we have a charm for. Interested in getting your stuff on the cloud? Here’s how you can write your own charm, and don’t forget to enter the charm contest, where we’re giving away $500 in Amazon gift cards for submitted charms!

Need a Quick Instance? Easy Juju Trick

Spinning up an instance on AWS can be tough if you’re new and not used to the tools. I quickly found myself misusing juju when I needed an instance real quick: I just fire up a bootstrap node and then ssh into it.

juju bootstrap
juju ssh 0

My brain tends to remember this much better than any other method for some reason. Then I just juju destroy-environment when I’m done.

Reddit Would Be a Cool Charm

We had alienth from Reddit hanging out during one of the Ubuntu Global Jams, and he took a moment to stop by #juju and tell us a little bit about how they deploy Reddit.

I started with the Reddit installation instructions and found out some of the technology they use: Postgres, Cassandra, Rabbit, and memcached. Hey, we just so happen to have charms for those services.

Then I learned that they have an install script that installs Reddit. Well, if this were charmed up we could do:

juju bootstrap
juju deploy reddit
juju deploy postgres
juju deploy cassandra
juju deploy memcached
juju deploy rabbitmq

Then relate them all to reddit, the hooks in the charm would then do what the installation instructions say for each service:

juju add-relation reddit postgres
juju add-relation reddit cassandra
juju add-relation reddit memcached
juju add-relation reddit rabbitmq

And then, after all the instances have fired up, you can just juju expose reddit, which opens up the port, and then you just hit the URL.

This would be an awesome charm for a bunch of reasons. First, we already have charms for everything they need; the magic would be in how to relate those services to the reddit charm. Theoretically we could juju add-unit cassandra and that would just work. We’d need to test it to find out.

The other interesting bit is that if you look at the comments on the script, people are having small problems getting it to work, things like “the script uses aptitude and that no longer works.” Well, if this were a charm it would just live in the juju charm store, where people can fix it (like they are already doing on GitHub). But the difference between living in a GitHub gist and the charm store is that we continuously and automatically check every charm so we know it works, and it’s easily findable by any Ubuntu Server user.

To me that is the best part about juju: taking our convenience scripts off of pastebins and into a place where we can keep them updated, tested, and deployable for everyone. I am hoping that someone takes Reddit and submits it to the juju charm contest so that we can make it rock.

Any takers? We’ve got a mirror of our charms on GitHub if you want to take the gist and start charming it up. Here’s how you can get started if you’re interested.

Using the Cloud for BOINC, Folding@home, and Other Projects

One of the nice things about the cloud is being able to scale horizontally and get as much capacity as you want, as long as you either have servers on the other end, or a wallet if you don’t.

I’ve always dug projects like BOINC and Folding@home. And of course, our never-ending search for ET. To me the best part is, of course, the leaderboards showing how you’re competing with everyone else. Look at this subforum on Ars Technica as an example.

“But what if I could just rent enough public cloud capacity to pass my smug competition? In the name of science of course.”

Well, let’s see: we do have HVM Compute Cluster AMIs for Ubuntu. You can just search for “hvm” in our handy-dandy AMI browser. These instances from Amazon can be pretty performant, and even offer GPU crunching. But at $2.40 an hour, too rich for my blood.

However, if you look at spot instances, the Eight Extra Large is going for 56 cents right now. Now this is much more interesting; I wonder how much bang for the buck I can get out of this. But I don’t want to spend too much (yet).

I saw a six pack of soda-pop for $1.20. That price f—s with your head, man. Because then I thought that I would start selling soda-pop. Suddenly I got things of pop with me. “What’s going on, Mitch?” “Not much, looking to buy some pop? Fifty cents a can. It’s not refrigerated, because this is a half-assed commitment.”

Mitch Hedberg, comedian

Hmm, so maybe I don’t even need the compute instances; if the work is just CPU-bound, then maybe it’s not worth the extra cost for all the other high-end things those instances provide. Or I can at least estimate my price-performance ratio with whatever BOINC project I am interested in.

Either way, I think it would be cool if we had juju charms for these projects so that people can experiment with deploying all these interesting scientific things on the cloud in an easy way. People are already running BOINC on EC2, but with a charm we can just have it in Ubuntu out of the box; no need to have someone maintain a separate AMI when we can just have it. And with juju and config options in charms, we could control the clients through the same interface we use to control any other juju service. And of course, since it would be in the charm store, it means we can deploy on bare metal and OpenStack too. What a nice way to break in all that shiny new hardware.
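If someone charms these up, I’d imagine driving them like any other service. The charm and option names below are entirely made up, but this is the shape of it:

juju deploy boinc                     # hypothetical charm
juju set boinc project=setiathome     # hypothetical config options
juju set boinc account-key=YOUR-KEY
juju add-unit boinc                   # repeat for as much science as your wallet allows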

So I’ve filed two bugs: one for BOINC, and one for Folding@home. Bruno has snagged BOINC, as he plans on entering it in the charm contest, but Folding is still up for grabs.

Though the projects in BOINC and FAH are relatively familiar to us, I think it’d be interesting to think of all the HPC software out there that needs computing power and how you could tie that into an easily deployable charm.
