Monday, November 12, 2012

PostGIS 2.0 in Berkeley Tomorrow

Brian Hamlin will be presenting PostGIS 2.0 at Soda Hall in Berkeley tomorrow.  If you can't make it, it will probably be live on video (not sure about wireless quality).

Tuesday, October 30, 2012

pgConf.EU 2012 report

I just got back from my first European PostgreSQL conference since 2009.  If you missed it, you should be sorry you did.  Me, I mostly went because it was in Prague this year ... my sweetie really wanted to go, so off we went.

Nearly 300 people attended from what I heard, and the rooms were certainly full enough to support that estimate.  I got to see a lot of people I don't see at pgCon.  Highlights of the conference for me were:
  • Hallway Track: getting to hash stuff out with Joe Celko, Peter Geoghegan, Rob Napier, Hannu Krosing, and Laurenz Albe, whom I never see otherwise.  Not all at the same time, of course!
  • Playing Chess in the DB: Gianni Colli's pgChess extension and supporting presentation were excellent.  If you want evidence that you can do anything with Postgres, load up pgChess!
  • GSOC Students: Atri Sharma, Alexander Korotkov, and Qi Huang all came to the conference, got to meet the community, and presented on their projects from Google Summer of Code.
  • Lightning Talks: I love LTs, of course.  Harald did a particularly good job running them.
  • Boat Party: Heroku rented a party boat to cruise up the Vltava River while we drank lots of Budvar and talked databases.
Of course, I presented stuff too: Elephants and Windmills, about some of our work in wind power, and I rehashed Aaron's PostgreSQL Drinking Game as a lightning talk at Dave's request.  Follow links for slides.

Thanks to the whole pgConf organizing crew for a great time, and for helping get the students to the conference.

Not sure I'll make it next year; October is often a busy time for other conferences, and I think it's more important for me to speak at non-Postgres conferences than to our existing community.  However, if they hold it in Barcelona, I might have to find a way to go.

Saturday, October 13, 2012

Freezing Your Tuples Off, Part 2

Continued from Part 1.

vacuum_freeze_min_age


The vacuum_freeze_min_age setting determines the minimum age, in transactions, at which an XID will be changed to FrozenXID on data pages which are being vacuumed anyway.  The advantage of setting it low is that far more XIDs will already be frozen when the data page is finally evicted from memory.  This is ideal from a maintenance perspective, as the page may never need to be read back in from disk just for freezing.  The disadvantage of setting it too low is the additional time and CPU used during vacuum, especially if the page ends up being vacuumed several times before it's written out.

The other disadvantage of setting it low, according to the pgsql-hackers list, is that you will have less data for reconstructive database forensics if your database gets corrupted.  However, given PostgreSQL's very low incidence of corruption bugs, this is not a serious consideration when balanced against the very high cost of a vacuum freeze on a really large table.

Therefore, we want the setting to be low, but not so low that XIDs used a few minutes ago are expiring.  This is where things get difficult, and why the setting isn't auto-tunable in current Postgres.  You really want XIDs to freeze after a few hours, or maybe a day if your application frequently uses long-running transactions.  But Postgres has no single-step way to determine how many XIDs you use per hour on average.

The best way is to monitor this yourself.  In recent versions of PostgreSQL, you can get this from pg_stat_database, which has the counters xact_commit and xact_rollback.  If these are part of a monitoring scheme you already have in place (such as Nagios/Cacti, Ganglia, or Munin), then you can look at the transaction rates using those tools.  If not, you need to follow these five steps:

1. Run this query, and write down the result:

SELECT SUM(xact_commit + xact_rollback) FROM pg_stat_database;

2. Wait three or four hours (or a day, if you use long-running transactions).

3. Run the query again.

4. Subtract the result of the first query run from the result of the second.

5. Round down to a single significant digit.

So, as an example:

postgres=# select sum(xact_commit + xact_rollback) from pg_stat_database;
   sum   
---------
 1000811
(1 row)

... wait four hours ...

postgres=# select sum(xact_commit + xact_rollback) from pg_stat_database;
   sum   
---------
 2062747
(1 row)

So my transaction burn rate is 1,061,936 for four hours, which I round down to 1,000,000.  I then set vacuum_freeze_min_age to 1000000.  The approximate burn rate, 250,000 per hour, is also a handy figure to keep to figure out when XID wraparound will happen next (in about 1 year, if I were starting from XID 1).
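The arithmetic above can be sketched in a few lines of Python (the figures are the ones from the example; the exact hourly rate comes out a bit above the rounded 250,000):

```python
import math

# Two readings of SUM(xact_commit + xact_rollback) from pg_stat_database,
# taken four hours apart (the numbers from the example above).
first = 1_000_811
second = 2_062_747
hours = 4

delta = second - first       # XIDs burned in the interval: 1,061,936
per_hour = delta // hours    # approximate burn rate: 265,484/hour

# Round down to a single significant digit for a round setting value.
magnitude = 10 ** int(math.log10(delta))
setting = (delta // magnitude) * magnitude   # 1,000,000

# Rough time until XID wraparound; the XID space is about 2 billion (2^31).
wrap_hours = 2**31 // per_hour   # a bit over 8,000 hours, i.e. under a year

print(delta, per_hour, setting, wrap_hours)
```

This is only a sketch of the estimate, not a monitoring script; in practice you would feed it live numbers from pg_stat_database.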

Without doing the above, it's fairly hard to estimate a reasonable level, given that XID burn rate depends not only on how much write activity you're doing, but on how you're grouping the writes into transactions.  For example, in a data collection database, if you import each fact as a separate standalone INSERT, you could be burning a million XIDs per hour; but if you batch them a thousand rows at a time, that cuts you down to 1000 XIDs per hour.  That being said, if you really don't have time to check, here are my rule-of-thumb settings for vacuum_freeze_min_age:

  • Low write activity (100 per minute or less): 50000
  • Moderate write activity (100-500 per minute): 200000
  • High write activity (500 to 4000 per minute): 1000000
  • Very high write activity (higher than 4000 per minute): 10000000

You'll notice that all of these are lower than the default which ships in postgresql.conf.  That default, 100 million, is overly conservative, and means that preemptive freezing almost never happens on databases which run with the defaults.  Also, the default of 100m is half of 200m, the default for autovacuum_freeze_max_age, meaning that even after you've completely vacuumed an entire table, you're left with many XIDs which are 50% of freeze_max_age old.  This causes more wraparound vacuums than necessary.
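As a concrete illustration, here's what the rules of thumb above might look like in postgresql.conf for a high-write-activity database (a sketch, not a universal recommendation; pick the vacuum_freeze_min_age value matching your own write rate):

```
# postgresql.conf (sketch)
vacuum_freeze_min_age = 1000000         # high write activity: ~1 hour of XIDs
autovacuum_freeze_max_age = 200000000   # the shipped default, left alone
```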

Also, to some degree, this isn't worth worrying about below 500 writes/minute, given that it takes 8 years to reach XID wraparound at that rate.  Few PostgreSQL installations go 8 years without a dump/reload.
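To illustrate the batching point from earlier: each statement run outside an explicit transaction consumes its own XID, so grouping writes into transactions changes the burn rate dramatically.  A sketch (the facts table and its columns are hypothetical):

```sql
-- 1000 standalone INSERTs: 1000 XIDs consumed.
INSERT INTO facts (observed_at, value) VALUES (now(), 42);
-- ... repeated 999 more times ...

-- The same 1000 rows in a single transaction: one XID consumed.
BEGIN;
INSERT INTO facts (observed_at, value)
SELECT now(), v FROM generate_series(1, 1000) AS g(v);
COMMIT;
```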

To be continued in Part 3 ...

Tuesday, October 9, 2012
