Stack Overflow running short on space

by Nick Craver on Feb 07, 2012 under Growing Pains

Stack Overflow handles a lot of traffic.  Quantcast ranks us (at the time of this writing) as the 274th largest website in the US, and that’s rising.  That means everything related to that traffic grows as well.  With growth, I like to split problems into two areas of concern: technical and non-technical.

Non-technical would be things like community management, flag queues, moderator counts, spam protection, etc.  These might (and often do) end up with technical solutions (that at least help, if not solve the problem), but I tend to think of them as “people problems.”  I won’t cover those here for the most part, unless there’s some technical solution that we can expose that may help others.

So what about the technical?  Now those are more interesting to programmers.  We have lots of things that grow along with traffic, to name a few:

  • Bandwidth (and by proxy, CDN dependency)
  • Traffic logs (HAProxy logs)
  • CPU/Memory utilization (more processing/cache involved for more users/content)
  • Performance (inefficient things take more time/space, so we have to constantly look for wins in order to stay on the same hardware)
  • Database size

I’ll probably get to all of the above in the coming months, but today let’s focus on our current issue: Stack Overflow is running out of space.  This isn’t news, and it isn’t shocking; anything that grows is going to run out of room eventually.


This story starts a little over a year ago, right after I was hired.  One of the first things I was asked to do was a growth analysis of the Stack Overflow DB. You can grab a copy of it here (Excel format).  Brent Ozar, our go-to DBA expert/database magician, told us you can’t predict growth that accurately…and he was right.  There are so many unknowns: traffic from new sources, new features, changes in usage patterns (e.g. more editing caused a great deal more space usage than ever before).  I projected that right now we’d be at about 90GB of Stack Overflow data; we are at 113GB.  That’s not a lot, right?  Unfortunately, it’s all relative…and it is a lot.  We also currently have a transaction log of 41GB.  Yes, we could shrink it…but on the next re-org of PK_Posts (and, as a result, all other indexes on the Posts table) it’ll just grow right back to the same size.
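To give a feel for the kind of projection involved, here’s a minimal sketch of a compound-growth estimate in Python.  The starting size and monthly growth rate are made-up placeholders, not the figures from the actual spreadsheet, which is exactly why this kind of projection drifts from reality once usage patterns change.

```python
# Minimal growth-projection sketch; start_gb and monthly_growth are
# illustrative placeholders, not the real numbers from the analysis.

def project_db_size(start_gb: float, monthly_growth: float, months: int) -> float:
    """Project database size assuming a fixed compound monthly growth rate."""
    return start_gb * (1 + monthly_growth) ** months

# e.g. a 55GB database growing ~5% per month, 14 months out:
print(f"{project_db_size(55, 0.05, 14):.0f} GB")  # ~109 GB
```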


These projections were used to influence our direction on scale: up vs. out.  We did a bit of both.  First let’s look at the original problems we faced when the entire network ran off one SQL Server box:

  • Memory usage (even at 96GB we were over-crammed; we couldn’t fit everything in memory)
  • CPU (to be fair, this had other factors, like Full Text Search eating most of the CPU)
  • Disk IO (this is the big one)

What happens when you have lots of databases is that all your sequential performance goes to crap, because it’s not sequential anymore.  For disk hardware, we had one array for the DB data files: a 6-drive RAID 10 array of magnetic disks.  When dozens of DBs are competing in the disk queue, all performance is effectively random performance.  That means our read/write stalls were way higher than we liked.  We tuned our indexing and trimmed as much as we could (you should always do this before looking at hardware), but it wasn’t enough.  Even if it had been enough, there were still the CPU/memory issues of the shared box.
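If you want to see how large that sequential-vs-random gap is on your own hardware, here’s a rough, hypothetical micro-benchmark sketch in Python.  The file name, file size, and block size are made up, and OS page caching will skew the results, so treat the output as illustrative only:

```python
import os
import random
import time

# Hypothetical sketch: compare sequential reads of one file against random
# reads scattered across it. Names and sizes are illustrative placeholders.
PATH = "io_test.bin"
SIZE = 256 * 1024 * 1024   # 256 MB scratch file
BLOCK = 64 * 1024          # 64 KB per read

def setup():
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

def sequential_read() -> float:
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return time.perf_counter() - start

def random_read() -> float:
    offsets = [random.randrange(0, SIZE - BLOCK) for _ in range(SIZE // BLOCK)]
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

if __name__ == "__main__":
    setup()
    print(f"sequential: {sequential_read():.2f}s")
    print(f"random:     {random_read():.2f}s")
    os.remove(PATH)
```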

Ok, so we’ve outgrown a single box; now what?  We got a new one specifically for the purpose of giving Stack Overflow its own hardware.  At the time this decision was made, Stack Overflow was a few orders of magnitude larger than any other site we have.  Performance-wise, it’s still the 800 lb. gorilla.  A very tangible problem here was that Stack Overflow was so large and “hot” that it was a bully in terms of memory, forcing lesser sites out of memory and causing slow disk loads for queries after idle periods.  Seconds to load a home page? Ouch. Unacceptable.  It wasn’t just a hardware decision though; it had a psychological component.  Many people on our team just felt that Stack Overflow, being the huge central site in the network that it is, deserved its own hardware…that’s the best I can describe it.

Now, how does that new box solve our problems?  Let’s go down the list:

  • Memory (we have another 96GB of memory just for SO, and it’s not using massive amounts on the original box, win)
  • CPU (fairly straightforward: it’s now split and we have 12 new cores to share the load, win)
  • Disk IO (what’s this? SSDs have come out, game. on.)

We looked at a lot of storage options to solve that IO problem.  In the end, we went with the fastest SSDs money could buy.  The configuration on that new server is a RAID 1 for the OS (magnetic) and a RAID 10 of 6x Intel X25-E 64GB drives, giving us 177GB of usable space.  Now let’s do the math on what’s on that new box as of today:

  • 114 GB – StackOverflow.mdf
  • 41 GB – StackOverflow.ldf

With a few other miscellaneous files on there, we’re up to about 156GB used.  156/177 leaves us only about 12% free space.  Time to panic? Not yet.  Time to plan? Absolutely.  So what is the plan?
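For the curious, the capacity and free-space numbers check out like this (a quick back-of-the-envelope in Python; the ~1GB of miscellaneous files is an assumption to round the total to 156GB):

```python
# RAID 10 mirrors drives in pairs, so 6 x 64GB yields 3 x 64GB = 192GB raw;
# the array reports roughly 177GB of formatted, usable space.
usable_gb = 177
used_gb = 114 + 41 + 1   # data file + log file + ~1GB of misc files (assumed)

free_gb = usable_gb - used_gb
print(f"used: {used_gb} GB, free: {free_gb} GB ({free_gb / usable_gb:.0%} free)")
# used: 156 GB, free: 21 GB (12% free)
```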

We’re going to be replacing those 64GB X25-E drives with 200GB Intel 710 drives.  We’re going with the 710 series mainly for the endurance they offer.  And we’re going with 200GB and not 300GB because the price difference just isn’t worth it, not with the high likelihood of rebuilding the entire server when we move to SQL Server 2012 (and possibly into a cage at that data center).  We simply don’t think we’ll need that space before we stop using these drives 12-18 months from now.

Since we’re eating an outage to do this upgrade (unknown date; those 710 drives are on back-order at the moment), why don’t we do some other upgrades?  Memory of the large-capacity DIMM variety is getting cheap, crazy cheap.  As the database grows, less and less of it fits into memory, percentage-wise.  Also, the server goes to 288GB (16GB x 18 DIMMs)…so why not?  For less than $3,000 we can take this server from 6x16GB to 18x16GB and just not worry about memory for the life of the server.  This also has the advantage of balancing all 3 memory channels on both processors, but that’s secondary.  Do we feel silly putting that much memory in a single server? Yes, we do…but it’s so cheap compared to, say, a single SQL Server license that it seems silly not to do it.
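Here’s that arithmetic sketched out; the per-DIMM price is an assumption chosen to land under the $3,000 figure above, while the slot and capacity numbers come from the post:

```python
dimm_gb = 16
total_slots = 18          # server maxes out at 18 DIMMs
current_dimms = 6
added_dimms = total_slots - current_dimms

total_memory_gb = total_slots * dimm_gb          # 288 GB fully populated
dimms_per_channel = total_slots // (2 * 3)       # 2 CPUs x 3 channels each
est_cost = added_dimms * 240                     # assumed ~$240 per 16GB DIMM

print(total_memory_gb, dimms_per_channel, est_cost)  # 288 3 2880
```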

I’ll do a follow-up on this after the upgrade (on the Server Fault main blog, with a stub here).


Growing pains and lessons learned

by Nick Craver on Jan 04, 2012 under Intro

In this blog, I aim to give you some behind-the-scenes views of what goes on at Stack Exchange and share some of the lessons we learn along the way.

Life at Stack Exchange is pretty busy at the moment; we have lots of projects in the air.  In short, we’re growing, and growing fast.  What effect does this have?

While growth is awesome (it’s what almost every company wants to do), it’s not without technical challenges.  A significant portion of our time is currently devoted to fighting fires in one way or another, whether it be software issues with community scaling (like the mod flag queue) or actual technical walls (like drive space or Ethernet limits).

Off the top of my head, these are just a few items from the past few weeks:

  • We managed to completely saturate our outbound bandwidth in New York (100Mbps).  When we took an outage a few days ago to bump a database server from 96GB to 144GB of RAM, we served error pages without the backing of our CDN…turns out that’s not something we’re quite capable of doing anymore.  There were added factors here, but the bottom line is we’ve grown too far to serve even static HTML and a few small images off that 100Mbps pipe (there’s a rough back-of-the-envelope sketch of why after this list).  We need a CDN at this point, but just to be safe we’ll be upping that connection at the datacenter as well.
  • The Stack Overflow database server is running out of space.  Those Intel X25-E SSD drives we went with have performed superbly, but a RAID 10 of 6x64GB (177GB usable) only goes so far.  We’ll be bumping those drives up to 200GB Intel 710 SSDs for the next 12-18 months of growth.  Since we have to eat an outage to do the swap and memory is incredibly cheap, we’ll be bumping that database server to 288GB as well.
  • Our original infrastructure in Oregon (which now hosts Stack Exchange chat) is too old and a bit disorganized, so we’re replacing it.  Oregon isn’t only a home for chat and Data Explorer; it’s also the emergency failover if anything catastrophic were to happen in New York.  The old hardware just has no chance of standing up to the current load of our network, so it’s being replaced with shiny new goodies.
  • We’ve changed build servers; we’re building lots of projects across the company now, and we need something that scales and is a bit more extensible.  We moved from CruiseControl.NET to TeamCity (still in progress; it will be completed with the Oregon upgrade).
  • We’re in the process of changing core architecture to continue scaling.  The tag engine that runs on each web server is doing duplicate work and running multiple times.  The search engine (built on Lucene.Net) is both running from disk (not having the entire index loaded into memory) and duplicating work.  Both of these are solvable problems, but they need a fundamental change.  I’ll discuss this further in upcoming posts; hopefully we’ll have some more open source goodness to share with the community as a result.
  • Version 2.0 of our API is rolling out (lots of SSL-related scaling fun around this behind the scenes).
  • A non-trivial amount of time has gone into our monitoring systems as of late.  We have a lot of servers running a lot of stuff, and we need to see what’s going on.  I’ll go into more detail on this later.  Since there seems to be at least some demand for open-sourcing the dashboard we’ve built, we will as soon as time permits.
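As promised above, here’s the rough bandwidth math on that 100Mbps pipe.  The page size and request rate are assumed, illustrative numbers, not measurements:

```python
link_mbps = 100
link_bytes_per_sec = link_mbps * 1_000_000 / 8   # ~12.5 MB/s of raw capacity

page_kb = 30              # assumed: error page plus a few small images
requests_per_sec = 600    # assumed peak request rate during the outage

needed_bytes_per_sec = requests_per_sec * page_kb * 1024
print(f"need {needed_bytes_per_sec / 1e6:.1f} MB/s "
      f"of {link_bytes_per_sec / 1e6:.1f} MB/s available")
# need 18.4 MB/s of 12.5 MB/s available -> the pipe saturates
```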

There are lots of things going on around here; as I get time, I’ll try to share more detailed happenings like the above examples with you as we grow.  Not many companies grow as fast as we are with as little hardware or as much passion for performance.  I don’t believe anyone runs the architecture we do at the scale we’re running at (traffic-wise, we actually have very little hardware being utilized); we’re both passionate and insane.

We’ll go through some tough technical changes coming up, both from paying down technical debt and from provisioning for the future.  I’ll try to share as much of that as I can here, for those who are merely curious what happens behind the curtain and for those who are going through the same troubles we already have; maybe our experiences can help you out.
