by Jim Perrin (noreply@blogger.com) at February 02, 2013 11:42 AM
I'm looking to put together a development cloud - a full-featured one at that, on a budget. So here is what I'm thinking about for compute nodes:
I can get one of those 'sets' for just under £190.00. For the network, an HP ProCurve J9028A / their 1800-24G should do, and it's available cheaply off eBay. For storage, I am thinking of repurposing my HP MicroServer with 4x 500GB SATA drives.
So four 'compute nodes' + switch + cables and disks for the MicroServer should clock in at £1,000.00. I still need some sort of a case or rails (I intend to host this at home).
What am I missing? What might I be better off with? £1k for 24 cores, 128GB of RAM and 4 physical nodes seems like a good deal to me, but could I do better?
- KB
by Karanbir Singh at January 22, 2013 02:57 PM
by Jim Perrin (noreply@blogger.com) at November 16, 2012 07:43 AM
Off to another DC tomorrow morning, hoping to swap out some seriously old hardware (~7ish years old) for something slightly newer (only 4 years old!)
by Karanbir Singh at November 10, 2012 09:51 PM
by Jim Perrin (noreply@blogger.com) at November 08, 2012 11:06 AM
There was a time when I spent lots of time in the Data Center. I didn't enjoy it very much then; too much noise, too many people telling me what to do, and way too many unpaid overtime hours clocked up. The last of that was about 12 years ago. These days I go to the DC maybe three or four times in a year, and it's mostly related to CentOS infrastructure or my own personal machines hosted around London. And unlike things 12 years ago, I quite enjoy my trips into the vault-like colo rooms now.
Apart from a bunch of routine things that I need to sort out tomorrow, I'm hoping to bring online a hardware RNG adapter and a Tilera server. If you haven't seen this platform before, I recommend you do, especially if lots-of-cores is something you are keen on, or have a problem domain overlap with.
- KB
by Karanbir Singh at November 04, 2012 09:36 PM
by Jim Perrin (noreply@blogger.com) at November 01, 2012 09:11 AM
Another option is to add this to /etc/sysconfig/named:
OPTIONS="-4"
by noreply@blogger.com (R P Herrold) at October 30, 2012 07:14 PM
The CentOS.org infra is pretty well spread out, but we got caught out by the Sandy storm. Especially our mail and list services, which run from an Internap facility in New York.
The outage lasted from just after 03:00 hrs UTC October 30th 2012 to 07:12 hrs UTC October 30th 2012, and we don't seem to have lost any communication. If you had bounce-backs during that time period, please do retry / resend the emails.
We don't have a machine that can be used as a hot-standby, but we do have fairly good backups. So should the machine go offline completely or suffer major damage, we would be able to bring services back up on a different machine with no real loss of data.
- KB
by Karanbir Singh at October 30, 2012 07:09 PM
You've probably read that Ansible, by default, uses paramiko for the SSH connections to the host(s) you want to manage. But since 0.5 (quite some time ago now ...) Ansible can use the plain OpenSSH binary as a transport. Why? Simple reasons: you sometimes have a complex scenario, and you can for example declare a ProxyCommand in your ~/.ssh/config if you need to use a JumpHost to reach the real host you want to connect to. That's fine, and I was using that for some of the hosts I have to manage (specifying -c ssh when calling ansible, but having switched to a bash alias containing that string and also -i /path/to/my/inventory for those hosts).
It's great, but it can lead to strange results if you don't take a full look at what's happening in the background. Here is the situation I had just yesterday: one of the remote hosts is reachable, but not on the standard port (tcp/22), so an entry in my ~/.ssh/config contained both HostName (for the known FQDN of the host I had to point to, not the host I wanted to reach) and Port.
Host myremotehost
HostName my.public.name.or.the.one.from.the.bastion.with.iptables.rule
Port 2222
With such an entry, I was able to just "ssh user@myremotehost" and land directly on the remote box. "ansible -c ssh -m ping myremotehost" was happy, but in fact it was not reaching the host I thought it was: running "ansible -c ssh -m setup myremotehost -vvv" showed me that ansible_fqdn (one of the Ansible facts) wasn't the correct one, but instead that of the host in front of that machine (the one declared with HostName in ~/.ssh/config). The verbose mode showed me that even if you specify the Port in your ~/.ssh/config, ansible will *always* use port 22:
<myremotehost> EXEC ['ssh', '-tt', '-q', '-o', 'AddressFamily=inet', '-o', 'ControlMaster=auto', '-o', 'ControlPath=/tmp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=22', '-o', 'User=root', 'myremotehost', 'mkdir -p /var/tmp/ansible-1351603527.81-16435744643257 && echo /var/tmp/ansible-1351603527.81-16435744643257']
Hmm, quickly resolved: a quick discussion with people hanging out in the #ansible IRC channel (on irc.freenode.net) explained the issue to me: Port is *never* looked at in your ~/.ssh/config, even when using -c ssh. The solution is to specify the port in your inventory file, as a variable for that host:
myremotehost ansible_ssh_port=9999
In the same vein, you can also use ansible_ssh_host, which corresponds to the HostName of your ~/.ssh/config.
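For completeness, the two overrides can sit together on a single inventory line, making the ~/.ssh/config entry unnecessary as far as Ansible is concerned (the hostname and port here are the illustrative ones from this post):

```ini
myremotehost ansible_ssh_host=my.public.name.or.the.one.from.the.bastion.with.iptables.rule ansible_ssh_port=2222
```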
Hope this can save you some time if you encounter the same "issue" one day ...
by fabian.arrotin at October 30, 2012 01:35 PM
by Jim Perrin (noreply@blogger.com) at October 30, 2012 12:26 PM
I already know that I'll be criticized for this post, but I don't care. Strangely, my last blog post (which is *very* old ...) was about a puppet dashboard, so why talk about another tool? Well, first I got a new job and some prerequisites have changed. I still like puppet (and I'd even want to be able to use puppet, but that's another story ...) but I was faced with some constraints on a new project. For that specific project, I had to configure a bunch of new Virtual Machines (RHEL6) coming as OVF files. Problem number one was that I can't alter or modify the base image, so I can't push packages (from the distro or third-party repositories). The second issue is that I can't install or have a daemon/agent running on those machines. I had a look at the different config tools available, but they all require either a daemon to be started, or at least extra packages to be installed on each managed node (so no puppetd, no puppetrun, and no invoking puppet directly through ssh, as puppet can't even be installed; same for saltstack). That's why I decided to give Ansible a try. It had been on my "to-test" list for a long time, and it really fit the bill for that specific project and its constraints: using the already-in-place ssh authorization, no packages to be installed on the managed nodes, and last but not least, a learning curve that is really shallow (compared to puppet and others, but that's my personal opinion/experience).
The other good thing with Ansible is that you can start very easily and then slowly add 'complexity' to your playbooks/tasks. For example, I'm still using a flat inventory file, but it is already organized to reflect what we can do in the future (hostnames included in groups, themselves included in parent groups - aka nested groups). Same for variable inheritance: from the group level down to the host level, host variables overwrite those defined at the group level, etc.
The YAML syntax is really easy to understand, so you can quickly have your first playbook running on a bunch of machines simultaneously (thanks to paramiko/parallel ssh). The number of modules is smaller than the number of puppet resources, but it is growing quickly. I also just tested tying the execution of an Ansible playbook to Jenkins, so that people who don't have access to the Ansible inventory/playbooks/tasks (stored in a VCS, Subversion in my case) can use it from a GUI. More to come on Ansible in the future.
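As a taste of that syntax, a minimal playbook in the 2012-era "action:" style might look like the sketch below; the group name, file and service are hypothetical examples, not from my actual setup:

```yaml
---
- hosts: webservers        # an example group from the inventory
  user: root
  tasks:
    - name: push a banner file to every host
      action: copy src=files/motd dest=/etc/motd
    - name: make sure sshd stays running
      action: service name=sshd state=started
```

You would run it with something like `ansible-playbook -i inventory site.yml`.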
by fabian.arrotin at October 26, 2012 02:02 PM
by Jim Perrin (noreply@blogger.com) at October 25, 2012 11:14 AM
by Jim Perrin (noreply@blogger.com) at October 25, 2012 09:53 AM
Most people, me included, still consider IPv6 usage to be something not worth worrying about. This comes from the fact that most services are quite happy chugging along with just IPv4 access, at both the service provider end and the service consumer end. However, what happens when an IPv6 option shows up? Here are some numbers, which many will find interesting, from the CentOS mirrorlist service.
1st June 2012 : 0 hits on IPv6
11th June 2012 : We launched IPv6 access for mirrorlist.centos.org
+ 90 days (midnight 13th Aug):
--- 12,831,352 hits to mirrorlist.centos.org were over IPv6
--- 711,014 unique IPv6 addresses
--- with a usage average of 1.4 hits/sec
--- peaking at 350 hits/sec
There are two, sometimes three, but always at least two machines that respond to mirrorlist.centos.org requests over IPv6. Looking at the stats for one of these machines that has been around since day one, we get:
Total hits : 6,324,980
Of this, 5,089,935 were CentOS-5 requests and 1,176,602 were CentOS-6 requests; the rest were invalid requests (could be either CentOS-4 or for repos that don't exist).
Of the CentOS-5 hits, 61.24% of requests were to x86_64 repos,
and of the CentOS-6 hits, 27.05% of requests were to i386 repos.
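As a sanity check, the number of invalid requests implied by those figures can be derived directly from the totals quoted above:

```shell
# Figures quoted above for the single day-one machine
c5=5089935        # CentOS-5 requests
c6=1176602        # CentOS-6 requests
total=6324980     # total hits

# Whatever is left over is the invalid-request count
invalid=$((total - c5 - c6))
echo "invalid requests: $invalid"   # prints: invalid requests: 58443
```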
Interesting stuff, fairly large numbers.
Also worth noting here is that these numbers represent quite a skewed sample set: IPv6 is only really usable in some specific setups and in some specific environments / data centers. It does not represent the overall state of the CentOS user base, so please don't use these numbers to signify that.
- KB
by Karanbir Singh at October 03, 2012 12:22 PM
by noreply@blogger.com (R P Herrold) at September 27, 2012 07:32 AM
It is not clear whether a cabal of Anonymous hackers or simple network administration issues caused the GoDaddy outage of Monday past. I guess it does not really matter.
What would really have hurt is if the root domain server constellation had been compromised, to well and truly take down the internet. A Domain Registrar sends along updates to those root servers periodically, and GoDaddy's outage, from the extent of our involvement with them, simply impaired our ability to renew domains and set new nameservers (NS records). As we had no urgent renewals pending, it did not hurt us at all.
We do not rely on GoDaddy for DNS services, and really never have relied on them for production purposes. For PMman and for our ISP and colo services, we run three geographically diverse nameservers for most of our purposes. We also run a few others for customers' needs (PTR records for a couple of datacenters we are in, testing, demonstration units).
The true 'masters' of our externally visible DNS servers are simply not accessible from the public internet. We push out updates to our public nameservers via cryptographically protected rndc transactions. Those transactions are logged, and the information driving a given rndc transaction is created by queries into a local database, with a custom-written LAMP control interface based on the FOSS tools in a stock CentOS install. Compared to manually editing zone files, checking variants in and out of a version control system, and so forth, this more readily provides us with scalability, traceability and auditability. Why, I caught a piece of lint in a zone file just last week, reading the overnight error report emails.
We also retrieve the state of the generated zone files at the client public nameservers and check them for consistency and coherency, essentially after each update, to prevent errors from propagating. ACLs, transaction logging and other checks provide more traceability, and we closed the mouse hole that the 'lint' crept in through in short order.
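A toy sketch of that coherency check, with mock files standing in for zone copies pulled from two public nameservers (in practice the copies would come from querying the servers themselves; the zone data and paths here are invented for illustration):

```shell
# Mock zone copies as fetched from two public nameservers (illustrative data).
cat > /tmp/ns1.zone <<'EOF'
example.com. 3600 IN A 192.0.2.10
www.example.com. 3600 IN A 192.0.2.10
EOF
cp /tmp/ns1.zone /tmp/ns2.zone

# Flag any drift between the two copies before it can propagate further.
if diff -q /tmp/ns1.zone /tmp/ns2.zone >/dev/null; then
  echo "zones coherent"
else
  echo "zone drift detected" >&2
fi
```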
As a result of the GoDaddy outage, a couple of our 'alumni' tech support folks, who have moved on in their careers to other employment, gave us a call on Tuesday, because they remembered how paranoid I am about making sure DNS is available. I appreciate the calls, and we have some new customers as a result.
People have strong opinions about GoDaddy, sometimes for reasons of political correctness; I like them, by and large, because they provide a workmanlike product for a price that is hard to beat. They sure beat the heck out of the old Network Solutions rates. I have something like 500 domains that I administer and renew, and most are there, although some are at other registrars for both historical and other reasons.
And while Danica Patrick is not my cup of tea, she is not hard on the eyes, either
by noreply@blogger.com (R P Herrold) at September 12, 2012 09:31 PM
The question was asked in IRC today:
hello folks, is there any way to install packages from a list written by yum list installed? I've two CentOS 6.3 hosts and I like to get them with the same packages installed (also versions)
Here is a quick (and accurate) answer:
rpm -qa --qf '%{name} \n' | sort > MANIFEST
(Note: that is a backslash n -- the HTML markup makes it hard to see the distinction)
yum -y install `cat MANIFEST`
(Note: and here, backticks around the cat to get a sub-shell)
yum -y update
(run on each unit)
For extra credit, re-run the MANIFEST creator on each unit, and use diff to find any variances
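That extra-credit step might look like this; the package names below are made-up stand-ins for real manifests generated with the rpm command above:

```shell
# Two sample manifests (on real hosts these come from
# `rpm -qa --qf '%{name} \n' | sort > MANIFEST` run on each machine).
printf 'bash\nhttpd\nyum\n' > /tmp/MANIFEST.hostA
printf 'bash\nyum\n'        > /tmp/MANIFEST.hostB

# Lines marked '<' exist only on hostA, '>' only on hostB.
diff /tmp/MANIFEST.hostA /tmp/MANIFEST.hostB || true
```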
by noreply@blogger.com (R P Herrold) at September 07, 2012 02:28 PM
by Jim Perrin (noreply@blogger.com) at July 28, 2012 09:25 PM
by Johnny Hughes (noreply@blogger.com) at July 10, 2012 06:14 AM
by Johnny Hughes (noreply@blogger.com) at July 10, 2012 06:09 AM
by Jim Perrin (noreply@blogger.com) at June 15, 2012 12:31 PM
I've never listed my mobile number anywhere on the internet, and as far as I remember I've only ever shared it with friends and family. On the other hand, I've had the same number for years, and it's possible that it has 'leaked'. But I still find it quite odd that people around the world manage to get their hands on the number with no real effort. And that means I get calls.
Calls from people in Argentina at 4am UK time, wanting to know when php-5.4 is going to be released into CentOS-5. Calls from people in the UK at 8am, wanting to know if the httpd update released last night had a fix for CVE-XXX. Calls from people in India at 10pm UK time, wanting to find out if the sound card on the motherboard they bought a few hours back is supported on CentOS. A disgruntled passenger trying to check in for their flight the next day, with the system throwing up an 'Apache on CentOS' page.
Some are a bit more alarming. E.g. a call from people at a Large Defence Contractor in the USA asking who their 'CentOS Technical Account Manager' was, and if I knew what the SLA terms were. Or the time when I got a call from a hosting company's Data Center saying there was a fire in the DC and they wanted to know if their CentOS backups were intact.
It's not something new; I've had these calls over the years, from maybe 2008 or so. At one point in 2010, when it was really hectic with almost 10 to 12 calls a week, I was seriously considering changing my number. Just doing the 'ignore if the number isn't in the address book' thing wasn't scaling for me. But I didn't; the process of changing my number with everyone I knew was too much hassle, so I started giving people an alternative number and mostly started ignoring the 'popular' old mobile number. The number of such calls has now drastically reduced. I get maybe 1 or 2 a week, and in many cases I answer them and have had the odd interesting conversation. But realistically, I think the time has come to change that number.
What I will, however, do is offer up a VoIP line: +44-207-0999389. This terminates at a phone that I have on my desk, and I will try to make sure it's turned on whenever I am doing CentOS stuff, or am in 'Open Source' mode. Go ahead, use that number - give me a call, and if I am around, I would love to have a chat. But please stop calling me on my mobile.
btw, I have tried to find my own number and failed to do so - even entering parts of the number into the various search engines does not bring up my mobile number. So I have no idea how all these people succeed in finding it.
- KB
by Karanbir Singh at June 07, 2012 08:52 PM