Part 60: My vCloud Journey – Design Thoughts – Greenfield vs Brownfield

07 Mar


Note: As ever when I was writing this blogpost, I did a Google image search for “Greenfield” and “Brownfield“. Interestingly, the second image for Greenfield came up with these ladies on the lawn. Apparently Lauren Greenfield is a famous US-based photographer and video artist. The first photo comes from her collection “Girl Culture“. Anyway, that’s my reason for selecting the photo and I’m sticking with it. The ladies are at least standing on a greenfield…

Introduction:

The terms “Greenfield” and “Brownfield” actually come from the world of planning, where a greenfield development is a new build on lush new ground unsullied by man’s interventions in nature. The term “Brownfield” is used to describe the re-development of previously developed and usually abandoned land – normally associated with the rapid decline of heavy industries such as steel, coal, shipbuilding and chemicals. We use it in IT often to talk about whether an existing environment (brownfield) should be re-used for a new project, or whether net-new infrastructure (new servers and/or new switches and/or new storage) should be procured first. Now I hope your existing datacenter looks nothing like my second picture – if it does then you should have installed VMware Site Recovery Manager!

When I was attending the vCloud Director Design Best Practises course a couple of weeks ago (hosted by the excellent Eric Sloof!), module 3 on “Cloud Architecture Models” talks about whether vCloud Director should be deployed on top of a “Greenfield” fresh install of vSphere OR whether it can be deployed to an existing “Brownfield” install of vSphere. I was quite anxious about this because the course clearly states the recommendation is for a “Greenfield” deployment. Now that does not necessarily mean buying a new site, server rooms or datacenter – it could “just” mean a new rack of servers, network and storage – and then over time porting your existing VMs in the “legacy” vSphere environment into the shiny new cloud.

The thing that irks me a little about this recommendation is a problem I’ve had with the whole “greenfield” debate that accompanies the arrival of any new technology. For me it’s kind of up there with the question “What works best – a clean install or an upgrade?”. Anyone who tried to upgrade an NT4/Exchange 5.5 box to Windows 2000/Exchange 2000 already knows the answer to this question, don’t they? It stands to reason that a greenfield deployment is going to be easier than trying to shoe-horn a new application into an existing environment for which it was never originally designed. But my real problem with the “greenfield” mentality is the impact on adoption rates.

I had a similar discussion with the folks at Xsigo before they were acquired by Oracle. They told me that where they stood the best chance of getting their technology adopted – and where they got the most traction – was in greenfield locations. But there’s a problem with that, isn’t there? If a new technology is limited/hamstrung by only being deployed in greenfield environments, you at a stroke hobble its adoption. Because, let’s face it, you’ve just raised the bar/barrier to adoption – to now having to include the upfront CAPEX cost of acquiring new kit. It’s not like that happens every day – so the other thing we do when we play the “Greenfield” wildcard is limit adoption to the maintenance and warranty cycles that afflict most hardware procurement. Overplaying the “Greenfield” approach basically chokes off uptake of a technology.

Now, I’m not rubbishing the recommendation. Nor am I intending to rubbish the courseware from my good friends at VMware Education (remember I’m a former VMware Certified Instructor). What I am doing is questioning this as a design best practise. And if I am honest I can see why the courseware makes this statement if you look at the challenges of taking an existing vSphere platform designed for virtualization, not cloud – and preparing it to be utilised by vCloud Director. But what I would ask is: just because a particular configuration introduces “challenges”, is that a sufficient reason to either walk away or approach management with a request for a purchase order?

To be fair, the courseware does map out a possible roadmap for migration/transition:

  • Create greenfield vSphere install
  • Migrate virtual appliances
  • Remove hosts from “legacy” vSphere environment
  • Redeploy hosts into shiny new infrastructure

I have no issue with that, but being the kind of gung-ho psycho I am, I’m actually more interested in whether the existing environment could be kept as is. After all, you might have plenty of free compute resource on an existing vSphere environment – and we don’t say (for instance) that if you want to deploy VMware View or SRM you have to start again from scratch. So what makes vCloud Director so special? Well, the answer as ever is in the detail – which is also where you will find our friend the devil.

The Design Best Practise course does an equally good and honest job of stating what the challenges might be of taking an existing vSphere environment – and trying to drop vCloud Director (or any cloud automation software/layer from another vendor for that matter) on top of it. The courseware outlines four areas which could put a spanner in your spokes:

  • Non-Transferrable Metadata (The stuff you already have that won’t port magically into a cloud layer)
  • Resource Pools on existing HA/DRS Cluster
  • Network Layout
  • Storage Layout

Let’s take each one in turn and discuss.

Non-Transferrable Metadata:

There’s some stuff in vCenter which isn’t transferable into vCloud Director. These include (but are not limited to, in my opinion):

  • Guest Customizations in vCenter
  • vShield Zone Configurations
  • VMSafe Configurations

I think these are relatively trivial. Guest customisations are easy to reproduce, and I think it’s unlikely that many vSphere customers had any vShield presence in their environment prior to looking at vCloud Director. Whether we like it or not, vCloud Director & vShield (now vCNS) are bundled together in the minds of a lot of people. True, you can have vCNS on its own, and there’s a truckload of advantages to that – but as we move towards a more suite-based view of the world it’s difficult to imagine that the two aren’t wedded to each other like husband and wife.

What I think is missing here is the question of what happens to your existing VMs. Yeah, those things – remember them? They are really quite important, aren’t they? If you have ever installed vCloud Director you will see that the abstraction and separation is soooo complete that after the install you don’t see any of your existing VMs. How do you get your old junk into the new shiny cloud layer? There are a couple of methods.

  • Method 1: A bad approach in my book would be to power off your VMs in vSphere; export them to OVF; import them into the vCloud Director catalog; and then deploy.
  • Method 2: Login as the system admin to vCloud Director – and use the Import from vSphere option. That’s not bad. But it’s not really very “tenant” friendly. I’m not in the habit of giving my tenants “sysadmin” rights just so they can get their VMs into their Organization.
  • Method 3: I personally think the best approach would be to use vCloud Connector to copy the vApp/VMs for you – vCC can also copy your precious templates from vSphere to vCloud Director – the only thing you lose are the Guest Customizations. But remember, whichever way you slice and dice it, the VM or vApp must be powered off to do the move (so that’s a maintenance window). It’s also a copy process – so you need temporary disk space for both the original VM/vApp and the new version in vCD. There’s a rough planning sketch just after this list.
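
To make that last point concrete, here’s a minimal, hypothetical Python sketch of the sort of pre-flight sums you might do before a vCloud Connector copy – which vApps still need a maintenance window (because they are powered on), and whether the staging datastore has room for the temporary copies. The inventory and free-space figures below are made up; in real life you’d pull them from vCenter.

# Hypothetical inventory: (vApp name, powered on?, provisioned disk in GB)
vapps = [
    ("web-tier", True, 40),
    ("db-tier", False, 120),
    ("test-rig", False, 25),
]
staging_free_gb = 200  # free space on the datastore that will hold the temporary copies

# vCC needs the source powered off, so anything still running needs a window
needs_window = [name for name, powered_on, _ in vapps if powered_on]

# it's a copy, not a move, so budget disk for the original plus the copy during the transfer
copy_space_gb = sum(disk_gb for _, _, disk_gb in vapps)

print("Needs a maintenance window:", needs_window)
print("Temporary space needed for copies: %d GB" % copy_space_gb)
print("Staging datastore big enough?", copy_space_gb <= staging_free_gb)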

Existing Resource Pools:

It is possible to use resource pools on a DRS cluster. Many people do – sadly, for ALL the wrong reasons. Many naughty vm-admins use them as if they are folders. They are not, and as such the practise is not only stupid, it’s also very dangerous. If you don’t believe me, read this article by Eric Sloof. If you do this, please stop. Really, manually created resource pools on a DRS cluster have NO role to play in vCloud Director – at best they cause confusion, at worst more problems. Avoid like the plague.
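
If you want to audit an existing “brownfield” cluster for these pools before letting vCloud Director anywhere near it, something along these lines does the job. This is a rough sketch of my own using the pyVmomi library (not anything from the courseware); the vCenter address and credentials are placeholders, and all it does is list any child resource pools hanging off each cluster’s root pool.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# lab-only shortcut: skip certificate verification
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",            # placeholder vCenter
                  user="administrator@vsphere.local",  # placeholder credentials
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)

for cluster in view.view:
    for pool in cluster.resourcePool.resourcePool:  # children of the cluster's root pool
        print("Cluster '%s' has a manual resource pool '%s' holding %d VMs"
              % (cluster.name, pool.name, len(pool.vm)))

Disconnect(si)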

The ONLY time I would use manual resource pools is in a homelab where you don’t have the luxury of a dedicated “management cluster”, separate and discrete from the DRS clusters that host the Organizations’ VMs. That’s my situation. I have an “Infrastructure” resource pool where I put all my “infrastructure VMs” – so my management layer is running on the same layer it manages. Not the smartest move in the book. But if you lack hardware resources in a homelab, what’s a boy to do?


Network Layout:

This was a biggy for me – and I remember last year fessing up to Josh Atwell at VMworld about how, from a VLAN perspective, my network was a brownfield site – dirty, polluted and contaminated. Like a lot of home-labbers I had a flat network with no VLANs at all. I mean, what’s the point in a homelab, unless you get off on 802.1Q VLAN tagging? There are places where vCloud Director expects some sort of VLAN infrastructure – for external networks, for instance – and of course if you’re using VLAN-backed Network Pools then, as the name implies, it’s pretty much mandatory. So I implemented VLANs for the first time (greenfield!) and it was much less painful than I expected. I keep one of my Dell PowerConnect gigabit switches for management, and it’s downlinked to a cheap and cheerful 48-port NetGear gigabit switch which is VLAN’d for my vCloud Director tenants. The management layer can speak to the tenant layer (for ease of lab use) but there’s no comms from the tenant layer to the management layer – unless I allow that…

But less of me. What about production environments? What are the gotchas? Well, you have to be careful you don’t have overlapping VLAN ranges – so that vCloud Director doesn’t go about making portgroups with VLAN tagging for VLANs that are already in use elsewhere (there’s a quick sketch of that check after the lists below). The same goes for IP ranges. You can have IP Pools, but a bit like with badly implemented DHCP servers – if you have overlapping ranges of IPs then there’s every possibility that a VM might get an IP address that’s in use elsewhere. My biggest problem in my lab is my rubbish IP ranges. Again this stems from it being a home lab. My core network is 192.168.3.x because that’s what my WiFi router at home used. What would make more sense would be to obey classful private IP addressing, like this:

Private Cloud:

  • External Network – 10.x.y.z
  • OrgNetwork – 172.16.x.x
  • vApp Network – 192.168.2.x

Public Cloud:

  • External Network – 81.82.83.1-81.82.83.254/24 (IP Sub-Allocated to each Organization – like 8 internet IPs each)
  • OrgNetwork – 172.16.x.x
  • vApp Network – 192.168.2.x
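
Here’s the quick overlap check mentioned above – a minimal Python sketch using nothing more than the standard-library ipaddress module. The VLAN and subnet values are purely illustrative; plug in the ranges you’ve handed to your VLAN-backed pools and the ranges already in use elsewhere on the physical network.

import ipaddress

def vlan_ranges_overlap(a, b):
    """True if two inclusive (start, end) VLAN ID ranges overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

def subnets_overlap(cidr_a, cidr_b):
    """True if two CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# VLANs already in use elsewhere vs. the range given to a VLAN-backed network pool
print(vlan_ranges_overlap((100, 199), (150, 250)))            # True  - clash
# the External Network vs. an Org network from the plan above
print(subnets_overlap("10.0.0.0/8", "172.16.0.0/16"))         # False - safe
print(subnets_overlap("192.168.2.0/24", "192.168.0.0/16"))    # True  - clash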

There are other issues to be aware of as well, such as the vmnic configuration on the DvSwitches – are they being used for other purposes such as IP storage? Are those discrete and separate networks? Can you guarantee that tenants cannot see vmkernel traffic such as vMotion, management and so on? In my view, if your VMs can see this traffic in vSphere, what you have is a vSphere design problem. Not a vCloud Director one! But that’s another story!

Storage Layout:

This is a big one. In my implementation I destroyed ALL my datastores from my previous vSphere setup – except my template, ISO and infrastructure datastores. Everything else got destroyed… Now, I’m not of the opinion that a successful meeting with management begins with the phrase “In order to implement this new technology we must destroy all our data”. So what I did isn’t an option in either a greenfield or brownfield location.

It’s fair to say that you would want to avoid a situation where storage is being used BOTH at the vCloud level and the vSphere level. vCloud Director, when it’s used as a test/dev environment, can and will create and destroy lots of VMs in a short space of time. And you don’t want your “tenants” in the vSphere layer competing for disk IOs with the “tenants” in the vCloud layer. I guess there are a couple of ways to stop that. You could have servers/storage dedicated to your vCloud, OR a judicious use of permissions on datastores could be used to “hide” the datastores from the vSphere users. That way they can’t touch the storage used by others. Of course that’s not the end of the story – if you have vSphere and vCloud users sharing the same cluster, then you could have all manner of performance problems caused by activity taking place in one area affecting performance elsewhere – and because of the abstraction it might be tricky to see the cause. Nightmare. All of this does seem to point quite heavily to separate environments, or deciding that EVERYONE has to get into the vCloud Director world and do their deployments there – with no opportunity to sneak around vCloud Director to gain access to the vSphere layer on the QT.

Finally, from a storage perspective – the datastores used for catalogs should not be the same datastores used for running VMs. For the same reason – performance could be undermined by activity on one affecting the other. That’s NOT something I did in my design: I use bronze storage to hold catalog items. In hindsight I wish I’d created a dedicated “Catalog” datastore per Organization…

 
3 Comments

Posted by Mike Laverick in Uncategorized

 

Small Linux Distros for The Home Lab

06 Mar

I recently found I needed a good, functional – but very small – OS to use in my lab. This post originally started off attached to another blog post – but it grew too long and unwieldy. So I decided to split it out into a separate post.

I’ve been using vCloud Connector in my lab – and it quickly forced me to pay more attention to the sizing of the disks in my VMs/vApps. I only have a 1Mb outbound connection on my colocation, which is monitored using the standard 95th percentile calculation. To get more bandwidth I’d have to pay for burst or add individual MB/s. Either way, it costs money to upload. In the end I used a small virtual Linux distribution which I downloaded and configured in my lab environment to reduce the pain of uploads.
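
For anyone unfamiliar with 95th-percentile monitoring: the provider samples your throughput (typically every five minutes), sorts the samples for the billing period, throws away the top 5%, and measures you against the highest value left – so short bursts are forgiven, sustained usage is not. A tiny Python sketch of the nearest-rank version, with made-up sample data:

import math
import random

def percentile_95(samples_mbps):
    """Nearest-rank 95th percentile: sort, then discard the top 5% of samples."""
    ordered = sorted(samples_mbps)
    rank = int(math.ceil(0.95 * len(ordered)))  # 1-based rank of the 95th percentile
    return ordered[rank - 1]

# A fake month of 5-minute samples: mostly idle, with a handful of big uploads
samples = [random.uniform(0.05, 0.3) for _ in range(8500)]  # background chatter (Mbps)
samples += [random.uniform(0.8, 1.0) for _ in range(140)]   # upload bursts (Mbps)

print("95th percentile: %.2f Mbps" % percentile_95(samples))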

I guess my use case is pretty tangential – because the real use for this, I think, is for home-labbers who want to spin up VMs but lack the RAM/disk resources to take the bloatware footprint of most modern operating systems… Now, you’d think in this day and age there’d be plenty of really small Linux distros to download as an .OVF, but I struggled. It seems like the big guys like SUSE, RHEL and CentOS define small as 2-4GB. Do you remember when an OS fitted on three floppy disks? Of course, back then there was no Facebook, Internet, Smartphones etc…

Now, one thing I would say about these super-skinny versions of Linux is that no two are alike, and if you’re primarily a Windows/Mac guy like me, with a modicum of Linux skills, they can deviate substantially from the more commercial versions of Linux such as RHEL and SUSE that you might be more familiar with. So if you want to add additional features or services (which will increase the VM’s memory/disk demands), prepare for some late nights of reading. [it's 5.12am while I type this...]. The other thing you will need to accept is not having access to VMware Tools within the guest. I’ve found getting VMware Tools installed into these tiny-weeny distros difficult. Often they lack a compiler, or they are based on distributions of Linux for which there’s no support. If anyone works out how to get VMware Tools installed on these sorts of distros – I’d be interested in learning that process. Mainly because it’s nice to be able to gracefully shut down these VMs from the vSphere/vCloud Director power management tools, rather than have to login to each VM and type “halt” or “shutdown”.
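
In the meantime, one crude workaround (my own bodge, not anything VMware recommend) is to script those logins yourself and issue the shutdown over SSH. The hypothetical Python sketch below assumes the guests allow root to login over SSH with key-based authentication – which is an assumption, not how my downloadable images below are configured – and simply walks a list of names and addresses:

import subprocess

# Hypothetical tiny-VM inventory: name -> IP address
tiny_vms = {"slitaz01": "192.168.3.51", "ttylinux01": "192.168.3.52"}

for name, address in tiny_vms.items():
    # assumes key-based SSH login for root is enabled on the guest (an assumption)
    result = subprocess.run(["ssh", "root@" + address, "halt"],
                            capture_output=True, text=True, timeout=30)
    if result.returncode == 0:
        print("%s: shutdown requested" % name)
    else:
        print("%s: failed - %s" % (name, result.stderr.strip()))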

In the end my stepson advised me to take a look at Damn Small Linux (or DSL, which ships as an .ISO), and go from there – if you browse to distro.ibiblio.org/damnsmall/current you should find the latest version. I would avoid the file with the name vmx.zip – this is nothing but a generic VMX that boots from the .ISO image. If you’re looking for a pre-packaged version of DSL you might like to use Mike Brown’s site VirtualMikeBrown.com, as he has one pre-made with instructions on how to do the install yourself: A Small Virtual Machine for a Test Lab.

Whilst DSL is pretty impressive and tiny, I didn’t find it terribly reliable. Often I found that DSL would enter a state where it wouldn’t boot due to errors in the file system. Upon further research it appears that DSL is now a dormant project. The last update was in 2008, the forums are closed to new registrations and it’s based on the Linux 2.4 kernel. Nonetheless, without an ultra-small VM like DSL much of my vCloud Connector work wouldn’t have been possible. The vast majority of this vCloud Connector post was written with DSL, but in the end I switched to another slim Linux distribution called SliTaz. It’s not as small as DSL but I found it more modern and up-to-date. SliTaz supports a simple SSHD and HTTPD service which gave me enough to test connections beyond just using ping.

And finally, wonderful though these ultra-slim Linux distributions are – you will find that many don’t support vCloud Director’s Static IP Pools at all. In case you don’t know, Static IP Pools are a really cool feature of vCD. They allow you to have your cake and eat it – it’s like having a DHCP scope that statically configures the guest OS – but for it to work the guest OS must be supported. So if you’re using one of these skinny-latte distros, you will need some kind of DHCP service on the network – in my case I used vCloud Director’s built-in service to do this…


In the spirit of sharing, I’ve made a couple of SliTaz 4.0 instances you can download here, rather than having to go through the rigmarole of downloading the ISO, defining the VM, partitioning the disk and installing SliTaz to the disk. The console logon is “root” and the password is “root” (which is also the default in SliTaz). SSHD has been enabled (as has HTTPD) and the SSHD login is “vmware” with a password of “vmware”, after which you can use the “su -” command to elevate yourself to root-level access if you so wish.

  • Single VM in .OVF Format (Zipped)
  • vSphere vApp in .OVA Format
  • vCloud Director Compatible .OVF Format (Zipped)

The ZIPs were all made with 7-Zip, a free Windows ZIP utility.

If you want to install SliTaz manually, I found these parameters worked best in ESXi 5.1:

  • Custom
  • Other Linux/2.6/32-bit
  • Intel E1000
  • 256MB – IDE hard disk on 0:0 (SliTaz lacks the drivers for BusLogic and LSI Logic SAS/Parallel). Remember IDE drives cannot be increased in size from the GUI. I’ve tried to go smaller than 285MB but with little success – although the SliTaz install says the install is complete, the IDE drive is not bootable. I think there’s a lack of free space for GRUB to do its work, or something like that.
  • Memory: 64MB (min), 128MB gives reasonable performance. I wouldn’t go lower than 64MB, especially if you run the desktop

 

And Finally… 

Shortly after completing this blogpost I found another super-skinny Linux distro whilst doing some work with the AutoLab. In case you don’t know, AutoLab is aimed at home-labbers, and builds a complete vSphere environment from scratch using either VMware Workstation, Fusion or a dedicated ESXi host. The whole process is scripted and automated. And in my humble opinion it is a mighty fine piece of work.

Inside the AutoLab you will find a VM called TTY Linux. TTY Linux is yet another project to create a super-small distribution of Linux – according to the TTY Linux guide it is pronounced T-T-Y Linux, and not “Titty” Linux as I first thought – which I think is a shame, as I quite like the idea of “Titty” Linux – just imagine the icons you could have if it had a graphical front-end.

Anyway, I caught up with the creator of AutoLab (Alastair Cooke) recently, and he sent me a copy of the TTYLinux VM via DropBox. It didn’t take long – it’s a mere 6MB compressed. It has a 32MB RAM allocation and a 32MB IDE drive. I’ve re-packaged this into a .OVF (zipped) and a .OVA. The only change I made to the AutoLab version of TTYLinux is that I installed a small web-server (thttpd) and configured a couple of “Hello World” files for the FTP/HTTP service. Apart from that it is the same – you can use the OVA with vSphere, and the OVF with vSphere and vCloud Director.

Oh, and a bit like Damn Small Linux, I’ve found it a little intolerant of dirty shutdowns, whereas with SliTaz you can reset it and it comes up every time (but it is a bit larger…)

  • TTYLinux OVF (7-Zip Zipped)
  • TTYLinux OVA
 
4 Comments

Posted by Mike Laverick in Cloud Journal, vSphere

 

Part 59: My vCloud Journey – Design Thoughts – Provider vDC

05 Mar


People who know me well will know it’s rare that someone says something that shuts me up. Being the congenital blabbermouth that I am, if you say something interesting to me – it’s likely to inspire me to say something back. So it’s rare that something is said that makes me take a step back, with the realization that the way I’ve been thinking has either a fatal flaw or an assumption at its heart that I’d not questioned before. That’s the other thing – by my nature I’m a very questioning person. Sometimes I wish I wasn’t, and that my brain would give the rest of me a chance to just accept the stuff around me. Sadly, that rarely happens…

Read the rest of this entry »

 
4 Comments

Posted by Mike Laverick in Cloud Journal

 

Network Health Status in Action

04 Mar

A week or so ago I was experimenting with running ESX inside a vCloud Director vApp – a process I dubbed “vINCEPTION” to describe the way ESX can virtualize itself (often referred to as vESX or “nesting”). I was experimenting with different ways of getting the VMs which run on top of vESX to speak to the outside world – such as guest VLAN tagging. Anyway, a couple of days later I cranked up my vSphere Client only to see lots of red exclamation marks on my hosts.


Read the rest of this entry »

 
No Comments

Posted by Mike Laverick in vSphere

 

VMwareWag with David Hill (@davehill99)

01 Mar


This week’s VMwareWag is with David Hill, and was recorded last week just before PEX. Before joining VMware, David was a self-employed IT Consultant and Architect for around 15 years, working on projects for large consultancies and financial institutions. He works as a Senior Solutions Architect within Services and Solutions Engineering. He tweets as @davehill99 and, like many of us, blogs at virtual-blog.com. David’s focus is on the vCloud Suite – but I spent some time quizzing him about vCloud Director, because that’s my current focus.

Q1. What for you are the stand-out aspects of vCloud Director?

Q2. Perhaps you can begin with a quick description of a Provider vDC… Now a Provider vDC can contain more than one HA/DRS cluster – what’s the logic behind where the VM gets placed?

Q3. Do you think the changes behind the Provider vDC might ultimately lead to changes in design or best practises? For many, a HA/DRS cluster represents a discrete amount of compute/storage/networking – you could almost call it a virtual silo. Do you see that changing…?

Q4. Can I ask what you are working on currently – what’s keeping you awake at night with thoughts on vCloud Director… or is your mind elsewhere?

MP3 Streaming Play Now | Play in Popup | Download (32)
 
1 Comment

Posted
