Stu Radnidge

Infrastructure Strategist.

November 16, 2012 at 7:31pm


Chinwag with Mike

A couple of weeks ago (pre-‘reverse goatee’) I had the pleasure of being on Mike Laverick’s chinwag video / podcast. It’s always good chatting with Mike, and every time I do I remember those times when I was a newbie back in Australia and Mike’s site was pretty much the only (but certainly the best) source of VMware info on the web. How time flies.

I did, however, make a few slight errors during the chat, the worst of which was somehow saying "dynamic DHCP" when of course I meant to say "dynamic DNS"! Mike's brain heard the correct thing and just carried on the conversation as normal, but yeah… pretty stupid lol. Getting Geoffrey West's surname wrong was slightly more forgivable.

Anyway I refer to a few things you should watch, so here are the links for those:

Geoffrey West: The surprising math of cities and corporations

Jason Hoffman: WAR SIGNALS: INDUSTRIALIZATION, MOBILIZATION AND DISRUPTION

Towards the end I also mention something about innovation and organisational structure, which was with reference to this excellent post:

An Operations Mindset Is at Odds with Innovation

And of course, catch my interview here! Thanks Mike :)

October 25, 2012 at 6:31pm


Distributed Programming vs Distributed Programs

I had the pleasure of hanging out with VMware community legend John Troyer the other weekend, while he was sojourning in London after VMworld Barcelona.

The conversation was mostly not about technology, but one of the things we did talk about was application architectures in 2012 and beyond, which really boiled down to this: distributed programming is hard, so can we really expect distributed applications to be pervasive in the Enterprise any time soon?

It only occurred to me later that I had rather poorly communicated my position, which amounts to a perhaps subtle distinction between distributed programming and distributed programs. WTF do I mean by that? Read on…


October 9, 2012 at 10:33pm


The Accidental Standing Desk

A long time ago, in a galax…

The original idea was to end up with this www.ikea.com/gb/en/catalog/products/S99843742/#/S59865632, but along the way something went horribly wrong… to blame, as with so many things, was a combination of impulse and too much choice.

I'm not exactly an Ikea regular, so when I went to my local "store" I was overwhelmed by the choice! Expecting to find a nice little package with all the parts needed to assemble the thing in the picture, I was instead presented with an array of table tops and legs that could all work together.

Before deciding to go cheap, I was very close to getting a custom desk built in the spare room. But at the last minute I decided it would be bad if we ever decided to move and rent our place out, so I opted for something less permanent. I had the dimensions of the custom desk mapped out already, and had called for a "slimline" 600mm depth. So when I saw a 600mm deep option in Ikea (the original table I saw was 750mm deep) I went for it, naively thinking that the legs I had already picked up would be fine with my new slim table top.

My original idea of a custom desk was such that it would run along an entire length of the room, and so I ended up buying the parts to construct 2 tables. Little did I know that would matter months later.

When I got home, I eagerly unpacked everything and started putting it together… only the legs didn't fit!!! The holes in the uprights were fine so I at least had a functional desk, but it didn't have the aesthetics of the original image. I had three options. Option 1 was to trek back to Croydon and swap the legs for some plain ones; the weather and the thought of going back to Croydon (not the nicest of London suburbs!) ruled that one out. Option 2 was to find someone to cut the steel tubing to a length that would work with my configuration; being unlikely to find that kind of service in the area I live, I might as well just go back to Croydon. And I'm not doing that. Option 3 was to do nothing. Which is exactly what I did. For ~18 months.


September 19, 2012 at 8:56pm


All You Wanted to Know About ZFS

I had started to write a post about ZFS to follow up on my last SmartOS-related post, when word finally started getting out about the inaugural ZFS Day on Tuesday, October 2nd. Somewhat unbelievably, the good companies of the Illumos-centric community are putting on the event for FREE, and live streaming the whole thing!!!

So if I'm perfectly honest, there's no point in me writing something trivial compared to what is bound to be covered at the event. If you want to learn all there is to know about the only modern filesystem worth trusting (yes, I'm looking squarely at you, Btrfs), head over and register now, even if you're just going to be watching the live stream (as I will be).

September 3, 2012 at 9:20pm


On the Software Defined Data Center

Yeah, I know… it's one of the US spellings I actually like.

I guess this all kicked off a few days ago, with this tweet:

RT @theronconrey: I know this is going to sound like heresy but when hasn’t the datacenter been defined by software? < +1

— Stuart Radnidge (@sturadnidge) August 27, 2012

Don't take the following too literally… you can of course take the scenario to the nth degree; I've tried to strike a balance between too much detail and not enough.

It’s really just something to think about. Something I think about.

---

A server arrives at the datacenter. An hour later it’s in service and being used.

A server arrives at the datacenter. A facilities member removes the packaging, slaps on a QR code sticker then scans it into the inventory management system. A few milliseconds later a rack location is returned from the inventory management system. Less than an hour later, the server has ping and power. Shortly after that, it’s in service and being used.

A server arrives at the datacenter. A facilities member removes the packaging, slaps on a QR code sticker then scans it into the inventory management system along with the asset tag provided by the server manufacturer and the burn-in confirmation tag provided by the server assembly company. A few milliseconds later a rack location is returned from the inventory management system. Less than an hour later, the server has ping and power. After power on self test, the BIOS initiates a PXE boot sequence. The server provisioning system delivers the relevant operating system installer to the server. The application configuration management system delivers the relevant application dependencies, binaries, code and configuration to the server. Shortly after that, the server is in production and being used.


August 31, 2012 at 11:37pm


Using SmartOS for Node.js Development

I was fortunate enough to attend Nodejitsu's latest workshop in London today, which was totally awesome - as an infrastructure guy, I'm a total Node.js lightweight and the workshop really opened my eyes to some of the power beneath the surface of Node. The workshop was delivered by none other than Paolo and Nuno - people who need no introduction; suffice to say they are seriously freaky when it comes to pretty much anything involving 1s and 0s.

During the course of the workshop Paolo took some time to espouse the virtues of SmartOS… in my case he was preaching to the converted, but aside from the Clock guys who were in the room I wasn't sure how many others had tried out SmartOS. So this post is for you guys :)


July 3, 2012 at 9:01pm


A Tale of Two Domains - Part 1

For a while now I've been trying to build something for the web - not easy when you're as easily distracted as I am, and holding down a full-time job! Typical excuses, I know - I should JFDI.

Like many people I have worked with over the years, I have this not-so-uncommon block… I can't start something without a name. And so when two ideas sprang up I started searching for that increasingly elusive quinella of available domain and Twitter ID. In this case (or rather these cases), I was more than half fortunate - the Twitter names were available, but the domains were… in the redemption period!

For those with a similar level of knowledge to what I had a few days ago, here's how the domain expiry process works.

1. The domain 'expires', and enters a 40-day grace period. I have read things that imply this can vary from registrar to registrar, but it seems pretty standard from what I have seen.

2. After the grace period, it enters a 30-day redemption period. Again, apparently this can vary, but I have yet to see it (admittedly I have only looked at a small number of domains).

3. Finally, when the redemption period expires, the domain enters a 5-day 'pending deletion' period.

At the end of those 5 days is where things get interesting.
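
If you're curious which stage a given domain is currently in, a whois lookup will usually tell you - something along these lines (the domain name is purely illustrative):

whois somedomain.com | grep -i status

A domain in the redemption period will typically show a status of redemptionPeriod, and one awaiting deletion will show pendingDelete.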


June 14, 2012 at 7:21am


The Fallacy of vHadoop

I really hate writing these kinds of posts, especially when there is the remote possibility that it will look like I'm going after someone I have the utmost respect for, Richard McDougall. The guy is a legend and borders on deity status in my books. So keep that in mind when you're reading this… I'm not attacking anyone, just providing an 'other side of the coin' kind of view to counter what VMware marketing would have you believe. Maybe I should re-adopt my old moniker of "cutting a swath of destruction through marketing bullshit" (that's from my old vinternals.com novelty business cards :)

VMware recently announced "Project Serengeti", a tool designed to rapidly deploy Hadoop clusters on top of vSphere 5, with an accompanying whitepaper, "Virtualizing Apache Hadoop" - a whitepaper that is unfortunately not without some glaring omissions, understatements and contradictions.

Let’s take a look at the “Use Cases and advantages of virtualizing Hadoop” section of that whitepaper.


May 15, 2012 at 8:45pm


Fastest Way to an “Offline” Cloudera Install

After many months of analysis and deliberation, we finally took the plunge at work and went with Cloudera for our initial Hadoop deployment. Stoked! But deploying in a corporate environment (i.e. behind the firewall) isn't really what the automated Cloudera Manager installer is designed for (yet), and doing a manual install is kind of unwieldy. Of course you could use something like Puppet to push things out, but what if you don't have that in production yet?

The easiest way by far to get something set up is to mirror the relevant parts of archive.cloudera.com to an internal web server, hack /etc/hosts on all cluster nodes and let the Cloudera Manager installer run as it would normally. Here's how to do that.

1. Grab the relevant parts of archive.cloudera.com - the examples below are for those unfortunate enough to be running SLES internally.

wget -r --no-parent --no-host-directories --reject "mirror*,index*" archive.cloudera.com/cloudera-manager/installer/latest/

wget -r --no-parent --no-host-directories --reject "mirror*,index*" archive.cloudera.com/cloudera-manager/sles/11/x86_64/cloudera-manager/3/

wget -r --no-parent --no-host-directories --reject "mirror*,index*" archive.cloudera.com/sles/11/x86_64/cdh/3/

2. Configure your webserver with a root pointing at the directory you ran wget from (or symlink the ‘cloudera-manager’ and ‘sles’ directories from an existing webroot, whatever), so that the non-host portion of the URL is identical to what you would see on archive.cloudera.com.
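
For example, if you ran the wget commands from /srv/mirror and your existing webroot is /srv/www/htdocs (both paths are purely illustrative), symlinks along these lines would do the job:

ln -s /srv/mirror/cloudera-manager /srv/www/htdocs/cloudera-manager

ln -s /srv/mirror/sles /srv/www/htdocs/sles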

3. Hack /etc/hosts on all target hosts to resolve archive.cloudera.com to your internal webserver.
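
In other words, something like this on each node (where 10.1.2.3 is a stand-in for your internal webserver's address):

echo "10.1.2.3 archive.cloudera.com" >> /etc/hosts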

4. Add repos to all target servers using the .repo files from archive.cloudera.com - that's right, don't modify them to point to your internal environment, leave them pointing at archive.cloudera.com. So for example in a SUSE Linux environment, you might run zypper ar -t YUM http://archive.cloudera.com/cloudera-manager/sles/11/x86_64/cloudera-manager/3/ "Cloudera Manager" on all target nodes.
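
Presumably you would add the CDH repo itself in the same way, pointing at the third path mirrored above (the repo alias here is just illustrative):

zypper ar -t YUM http://archive.cloudera.com/sles/11/x86_64/cdh/3/ "CDH3"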

5. Run cloudera-manager-installer.bin on the node that will run the Cloudera Management service.

6. Profit.

Sure, it's hacky. And I have been assured Cloudera will be addressing this properly in due course; however, if you can't wait for that then go with this process - it's much better than the manual alternative!

May 13, 2012 at 6:43pm


iPXE Boot an ISO Image

Well, not all ISO images, but the only ones worth caring about :D

I've been looking at some large-scale deployment automation stuff lately, and had the need to boot into a DOS environment in order to apply firmware updates. Yes, hard to believe that in 2012 this kind of thing is still required, but someone in HP just can't let go. And before you start thinking "Stu you idiot, don't you even know about the SPP / firmware maintenance CD", this is an example of what I am talking about - it cannot be applied online, and is not available on the latest SPP / firmware maintenance CD (and by the language in the release notes, I'm assuming it never will be).

Once I'd skirted around the "deploy from USB" requirement of that particular update, booting a DOS ISO using iPXE was trivially simple… although not that easy to find via Google! So this post is as much a reminder for me as it is for you. Here's how to do it:

  1. Download a 4.x version of memdisk and stick it on a webserver.
  2. Put the .iso file on a webserver.
  3. Tell the target machine to boot an iPXE script that looks something like this:
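
A minimal sketch, assuming memdisk and the ISO live on a webserver called mywebserver.local (the hostname and filenames are just placeholders):

#!ipxe
initrd http://mywebserver.local/firmware-update.iso
chain http://mywebserver.local/memdisk iso raw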

Obviously replace the URLs in that script with whatever you have in your environment, and yes the ‘chain’ line comes after the initrd line.

This technique will work for pretty much any DOS or Linux ISO file - I have yet to come across an offline x86-based firmware update that required some other kind of environment. Obviously, if you have the option of applying the update online then you probably should.
