Top posters
Number of posts in the past month
  • Andrew Dunstan (pgExperts) - 9
  • David Wheeler (pgExperts) - 3
  • Dimitri Fontaine (2ndQuadrant) - 3
  • gabrielle roth - 3
  • Szymon Guz (EndPoint) - 3
  • Craig Ringer (2ndQuadrant) - 2
  • Craig Ringer - 2
  • Eyðun Nielsen - 2
  • Greg Sabino Mullane (EndPoint) - 2
  • Hubert 'depesz' Lubaczewski (OmniTI) - 2
  • Josh Berkus (pgExperts) - 2
  • Leo Hsu and Regina Obe - 2
  • Michael Paquier - 2
  • Selena Deckelmann - 2
  • Abdul Yadi - 1
  • Craig Kerstiens - 1
  • Damien Clochard (Dalibo) - 1
  • David Fetter (VMware) - 1
  • Devrim GÜNDÜZ (EnterpriseDB) - 1
  • Gabriele Bartolini (2ndQuadrant) - 1
Top teams
Number of posts in the past month
  • pgExperts - 14
  • 2ndQuadrant - 7
  • EndPoint - 6
  • OmniTI - 3
  • VMware - 2
  • Dalibo - 2
  • Bull Inf Sys - 1
  • SRA OSS, Inc. - 1
  • MicroOLAP - 1
  • EnterpriseDB - 1
Feeds
  • All feeds (full entries)
  • All feeds (short entries)
Twitter
  • Follow @planetpostgres on Twitter for the latest updates!
Planet
  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.
Contact
  • Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.
PostgreSQL Extension Developer Tips - Part 3
Posted by Keith Fiske in OmniTI on 2012-11-19 at 21:39:04

My venture into PostgreSQL extension development was the first time I'd actually had to worry about the contents of a Makefile. The PostgreSQL Extension Network's How To gave enough of an example that I was able to just copy-n-paste it and it worked. The fact that my extensions have no code to actually compile has a lot to do with that. The Makefile PGXN provides assumes that all your SQL code is in a single file. At first that wasn't a big deal. But once my extensions grew past several thousand lines spread across many separate functions, maintenance started becoming more of a pain. So I started learning a bit more about Makefiles, specifically the part that builds the extension SQL file that gets installed.

sql/$(EXTENSION)--$(EXTVERSION).sql: sql/$(EXTENSION).sql
    cp $< $@

Basically, that just copies your extension SQL file (e.g. pg_jobmon.sql), which resides in the sql/ folder, to the specially formatted file required by PostgreSQL (e.g. pg_jobmon--0.1.2.sql). So it works great if all your SQL is in a single file. But I wanted to be able to have each of my functions in its own file, and maybe other folders and files for things like tables and types. Looking through the GNU make docs, I found the variables I needed to do this and how to use them:

sql/$(EXTENSION)--$(EXTVERSION).sql: sql/tables/*.sql sql/functions/*.sql
    cat $^ > $@

The $^ variable represents all the prerequisites (the part after the colon) of the make rule listed on the previous line. In my case I have two folders, tables and functions, that contain .sql files. The $@ variable represents the target of the rule (the part before the colon). For extensions, this is the specially formatted file they require. Using the cat command and the appropriate redirect, this rule just takes all the files in the given folders and dumps them into a single file; a sketch of how this fits into a complete Makefile follows this excerpt. One thing I had to be careful of was that the tables had to be created before the functions in the resulting SQL file if the install was going to run properly. The cat operation is done i

[continue reading]
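
For context, here is roughly how that rule sits in a complete PGXN-style Makefile. This is a minimal sketch based on the standard PGXS boilerplate; the EXTENSION and EXTVERSION values are illustrative, not taken from the post:

EXTENSION  = pg_jobmon
EXTVERSION = 0.1.2
DATA       = sql/$(EXTENSION)--$(EXTVERSION).sql

# Prerequisites are concatenated in the order listed, so the table
# definitions land in the output before the functions that need them.
sql/$(EXTENSION)--$(EXTVERSION).sql: sql/tables/*.sql sql/functions/*.sql
    cat $^ > $@

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)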

Our equipment went down in Sandy – what do we do next?
Posted by Jim Smith in Bull Inf Sys on 2012-11-19 at 19:01:39

Surrounded by natural disasters, you've probably paused and thought: how secure is my equipment and my database? And if you had the misfortune to suffer a loss because of Hurricane Sandy, you may be wondering how to turn misfortune into opportunity.

First, is it time to consider a move from a proprietary database? You can save millions by migrating your data from an exclusive or legacy, non-relational database to an open database such as PostgreSQL. With an open-source database such as PostgreSQL hosted in a low-risk data hosting center, your database is protected and efficiently managed at lower cost than maintaining a proprietary database.

The next stage would be keeping your data secure from future disasters.  The three questions you should ask are:

1. If a disaster hits tomorrow, how secure is my hardware?

If your facility doesn't provide a secure location, consider moving your hardware to a more secure one. The location should comply with SAS 70 Type II standards for physical security: video surveillance and secured access by authorized personnel only. It should also be monitored 24/7.

2. Do I have secure back-ups of my data in a separate location?

If you feel your hardware is secure, consider hosting back-ups of your data in a managed server data center.  The data center should be easily accessible and have extensive building management services including power and cooling systems, fire suppression, and a building management system that detects, interprets, and logs technical and security events.

3. If my network goes down, do I have a redundant connection secured?

The hardest part of a natural disaster is not being able to connect to your data remotely. If you have taken the necessary precautions and kept your data secure, make sure that you can access that data remotely. The data center you select should have a monitored, redundant, and stable network connection with multiple access options and entry points.

Bull has experts that can help you secure your data in the case of natural disas

[continue reading]

During installation, cluster initialisation fails with the message “No error” on Windows
Posted by Craig Ringer in 2ndQuadrant on 2012-11-19 at 05:53:26

Today I ran into another strange issue with the PostgreSQL installer for Windows. It turned out not to be a problem with the installer; instead it was a form of broken Windows installation that I hadn’t seen before, so I thought I’d write it up.

The installer already contains checks for several kinds of broken Windows install. For example, it tests to make sure that %TEMP% is writeable, and to make sure the vbscript interpreter actually works. These were both causes of common problem reports to the mailing lists in the past.

It turns out that some – probably rare – Windows installs also have an incorrect %COMSPEC% environment variable. This causes popen to fail with the wonderfully useful error message: "No error" when initdb tries to execute the PostgreSQL backend. The message displayed to the user is:

Problem running post-install step. Installation may not complete correctly. The database cluster initialisation failed.

… which can be caused by several different issues, of which this is only one.

If you examine the installer log file, %TEMP%\postgresql-installer.log, you’ll find that it contains:

Executing cscript //NoLogo "C:\Program Files\PostgreSQL\9.2/installer/server/initcluster.vbs" "NT AUTHORITY\NetworkService" "postgres" "****" "C:\Program Files\PostgreSQL\9.2" "C:\Program Files\PostgreSQL\9.2\data" 5432 "DEFAULT"
Script exit code: 1

then a bit further down in the log you’ll see:

creating template1 database in C:/Program Files/PostgreSQL/9.2/data/base/1 ... initdb: could not execute command ""C:/Program Files/PostgreSQL/9.2/bin/postgres.exe" --boot -x1 -F ": No error

If so, check to make sure your ComSpec environment variable is valid. Open a command prompt and run:

"%COMSPEC%" /C "echo test ok"

If it prints “test ok” then it’s likely fine and the issue is probably something else. If it fails, you’ve found the problem. Either way, check the value with:

echo %COMSPEC%

It should print something like:

C:\Windows\system32\cmd.exe

where C:\Windows is the location of your Windows install a

[continue reading]

PDXPUG: November meeting recap
Posted by gabrielle roth on 2012-11-19 at 00:07:27

We had a great meeting last week, with over 20 attendees turning out to hear Greg Smith get into some exciting details about VACUUM. Video will be posted once we filter out some feedback.

Slides are here:

2ndquadrant.com/media/cms_page_media/59/ScalingMaint.pdf

His autovacuum cost estimator spreadsheet is here (and also linked from the .pdf above):

highperformancepostgres.com/pgopen-2012/

Thanks to Iovation for hosting and providing yummy sandwiches, and to Greg for flying all the way out here.

As usual, no meeting in December. As usual, go to the 6th Annual Winter Coders’ Social instead:

calagator.org/events/1250463028

We’ll start back up on January 17. John Melesky will be giving a newer version of his talk from PgOpen, PostgreSQL in the Cloud.


Notes on index-only scans
Posted by Peter Geoghegan in 2ndQuadrant on 2012-11-16 at 12:32:48
One of the most important performance features in Postgres 9.2 is index-only scans: the ability for certain types of queries to be performed without retrieving data from tables, potentially greatly reducing the amount of I/O needed. I recently completely overhauled the Index-only scans PostgreSQL wiki page, so that the page is now targeted at experienced PostgreSQL users who hope to get the most out of the feature.

My apologies to the authors of the feature, Robert Haas, Ibrar Ahmed, Heikki Linnakangas and Tom Lane, if my handling of the topic seems to focus on the negatives. Any reasonable article about any given index-only scan implementation would have to extensively discuss that implementation's limitations. Any discussion of Postgres index-only scans that focussed on the positives would be much shorter, and would essentially just say: "Index-only scans can make some of your queries go much faster!".
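
Not from the post, but as a quick illustration of the feature it discusses, here is a minimal example of my own (table and index names invented) of a query that can use an index-only scan on 9.2:

CREATE TABLE iostest (id int, payload text);
INSERT INTO iostest SELECT g, md5(g::text) FROM generate_series(1, 100000) g;
CREATE INDEX iostest_id_idx ON iostest (id);
VACUUM ANALYZE iostest;  -- sets visibility-map bits, which index-only scans rely on
EXPLAIN SELECT id FROM iostest WHERE id BETWEEN 100 AND 200;
-- the plan should show: Index Only Scan using iostest_id_idx on iostest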
Number to words conversion in PostgreSQL
Posted by Pavel Golub in MicroOLAP on 2012-11-16 at 10:42:59

I was playing with some unit tests for PostgresDAC recently. One test case was to check the TPSQLDataset.Locate routine, which allows partial-string and case-sensitive matching options. So I wanted to have a test result set with integer and text columns. The simplest for me were numbers in both representations: using digits and using words, e.g.

  1. one
  2. two
  3. three
  4. four
  5. etc.

Since I didn't have much spare time, I used the nice VALUES command:

VALUES (1, 'one'), (2, 'two'), (3, 'three');

Since I needed specific names for the columns, I used a more complex query:

SELECT col1, col2::varchar(10) FROM 
(VALUES (1, 'one'), (2, 'two'), (3, 'three')) AS t(col1, col2);

That did my task well. However, I was wondering whether there is any built-in function in PostgreSQL that can convert an integer to such a words representation. The answer is: not exactly.

I found some references to the cash_words function. But there is no information about it in the official documentation (except an entry in the 7.3 release notes). The use case is quite simple:

SELECT cash_words(gen.i::money) FROM generate_series(1, 10) AS gen(i);
          cash_words
-------------------------------
 One dollar and zero cents
 Two dollars and zero cents
 Three dollars and zero cents
 Four dollars and zero cents
 Five dollars and zero cents
 Six dollars and zero cents
 Seven dollars and zero cents
 Eight dollars and zero cents
 Nine dollars and zero cents
 Ten dollars and zero cents
(10 rows)

I'm fine with this result set; anyone who needs only the number words can trim the values using standard functions.
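
For instance, here is a sketch of one way to do that trimming (the regular expression is mine, not from the post):

SELECT lower(regexp_replace(cash_words(gen.i::money),
             ' dollars? and zero cents$', '')) AS words
FROM generate_series(1, 10) AS gen(i);
-- one, two, three, ... ten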

PS: In case someone wants to create their own implementation in C (or plpgsql, who knows), the cash_words function can be found in the src/backend/utils/adt/cash.c file. It uses the private num_word routine.


Filed under: Coding, PostgresDAC Tagged: PostgresDAC, PostgreSQL, trick
New in PostgreSQL 9.2: format()
Posted by David Wheeler in pgExperts on 2012-11-16 at 01:31:00

There’s a new feature in PostgreSQL 9.2 that I don’t recall seeing blogged about elsewhere: the format() function. From the docs:

Read More »
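
The excerpt ends here. As a quick taste of the function it introduces, here is a small example of my own, not taken from the post or the docs:

-- format() composes strings printf-style; %s formats a plain value,
-- %I quotes an identifier, and %L quotes a literal, which makes it
-- handy for building dynamic SQL safely:
SELECT format('SELECT %I FROM %I WHERE name = %L', 'id', 'users', 'O''Reilly');
-- result: SELECT id FROM users WHERE name = 'O''Reilly'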

PostgreSQL search_path Behaviour
Posted by Szymon Guz in EndPoint on 2012-11-15 at 14:53:17

PostgreSQL has a great feature: schemas. So you have one database with multiple schemas. This is a really great solution for data separation between different applications. Each of them can use a different schema, and they can also share some schemas between them.

I have noticed that some programmers tend to name their working schema after their user name. This is not a bad idea; however, I once ran into some strange behaviour with such a solution.

I'm using the user name szymon in the database szymon.

First let's create a simple table and add some values. I will add one row with information about the table name.

# CREATE TABLE a ( t TEXT );
# INSERT INTO a(t) VALUES ('This is table a');

Let's check if the row is where it should be:

# SELECT t FROM a;

        t        
-----------------
 This is table a
(1 row)

Now let's create another schema, named like my user name.

# CREATE SCHEMA szymon;

Let's now create table a in the new schema.

# CREATE TABLE szymon.a ( t TEXT );

So there are two tables a in different schemas.

# SELECT t FROM pg_tables WHERE tablename = 'a';

 schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers 
------------+-----------+------------+------------+------------+----------+-------------
 public     | a         | szymon     | \N         | f          | f        | f
 szymon     | a         | szymon     | \N         | f          | f        | f
(2 rows)

I will just add a row similar to the previous one.

# INSERT INTO szymon.a(t) VALUES ('This is table szymon.a');

Let's check the data in the table "szymon.a".

# SELECT t FROM szymon.a;

           t            
------------------------
 This is table szymon.a
(1 row)

OK, now I have all the data prepared for showing the quite interesting behaviour. As you can see in the queries above, selecting from table "a" when there is only one schema works. What's more, selecting from "szymon.a" works as well.

What will happen when I get data from the table "a"?

# SELECT t FROM a;

           t            
------------------------
 

[continue reading]
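
The excerpt cuts off just before the punchline. For context, the behaviour hinges on the default schema search path, which you can check yourself:

# SHOW search_path;

   search_path
----------------
 "$user",public
(1 row)

Since a schema matching the current user name is searched before public, the unqualified name "a" now resolves to "szymon.a".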

A Step Forward
Posted by Jignesh Shah in VMware on 2012-11-15 at 08:23:01
Recently I upgraded my "lab" setup.
Now it currently looks as follows:

* 2x physical hosts running vSphere 5.1
* Controlled by vCenter 5.1 Server Appliance, backed by an embedded Postgres database instance
* Monitored by vCenter Operations, which has two embedded Postgres database instances in the vApp
* To monitor my VMs, I installed vFabric Hyperic 5.0, also running an embedded Postgres database VM
* DBaaS provider vFabric Data Director 2.5 is also installed, with its embedded Postgres database instance running too

Now for my VMs:
* vFabric Postgres 9.1.6.0 VM integrated with vSphere HA
* My Linux developer VM running PostgreSQL 9.2.1

If you get everything you need with Postgres, why even QA for other databases anymore?
Well, vFabric Hyperic took the first bold step forward.

How I Learned to Stop Worrying and Love the Triggers
Posted by Hubert 'depesz' Lubaczewski in OmniTI on 2012-11-14 at 14:19:06
Some people are afraid of triggers. The reasons for this are not really understandable to me, but I guess it stems from the fact that these are usually application developers, and not database admins. Or they encountered some kind of problem with triggers, and now they tend to think that triggers are inherently evil. But they [...]
A rosetta stone for Mac OS X installers for PostgreSQL
Posted by Selena Deckelmann on 2012-11-14 at 03:32:22

I’m no longer using Mac OS X for my primary desktop, but many of my coworkers and friends do. Particularly developers writing applications that use PostgreSQL (aka Postgres) for their data storage.

I've spent a lot of time over the last few years troubleshooting people's Postgres installs in the following, very common, situations:

  • A developer installed Postgres on their Mac laptop >1 year ago
  • Now they need to upgrade their Postgres to help me, or support a new application that needs new features
  • They have an old database they’d like to migrate to the new version
  • They have no idea which particular Mac OS X installer they used last time

For this exact situation, I have documented some features of the Mac OS X Installers for Postgres.

And, I felt so good to see this right after I posted the wiki page earlier today:

@zacduncan: “@selenamarie This is helpful to me at this very moment. Thank you. ”

\o/

Thinking about Changing Sqitch Change IDs
Posted by David Wheeler in pgExperts on 2012-11-14 at 01:46:00

When Sqitch (the database change management app I've been working on for the last several months) parses a deployment plan, it creates a unique ID for each change in the plan. This ID is a SHA1 hash generated from information about the change, which is a string that looks something like this:

Read More »
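
The excerpt stops before the example string. Purely as an illustration of the hashing idea (not Sqitch's actual scheme), here is how one could derive a 40-character SHA1 ID from an info string in SQL, assuming the pgcrypto extension is available:

CREATE EXTENSION pgcrypto;
SELECT encode(digest('change info string goes here', 'sha1'), 'hex');
-- returns a 40-character hex digest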

spacer
pgRouting RPM package was sponsored, and it is now available!
Posted by Devrim GÜNDÜZ in EnterpriseDB on 2012-11-13 at 09:32:49
A Norwegian government institution asked me to fix the pgRouting package for PostgreSQL 9.2 on RHEL 6. Their name is "Norsk institutt for skog og landska".

Packaging was not that easy, since they asked me to add "Traveling Salesperson functionality" and "Driving Distance functionality", which required the gaul and CGAL packages respectively. CGAL is available in atrpms.net, but the gaul-devel required by the TSP functionality needed a custom slang, which is actually compat-slang, and because of the termcap changes in RHEL 6 I had to build a custom compat-slang package in order to build gaul-devel.

Anyway, packages are now available for PostgreSQL 9.2 on RHEL 6 x86_64 and RHEL 6 i686. I am also planning to push the package for 9.1 sometime this week. Fedora 17 packages will also be available in preparation for next year's RHEL 7!

Thanks "Norsk institutt for skog og landska" for contributing to community packaging.
Improving PostgreSQL performance on AWS EC2
Posted by Craig Ringer in 2ndQuadrant on 2012-11-13 at 07:05:34

Questions periodically come up on the PostgreSQL mailing list regarding Amazon EC2 and how to get PostgreSQL to perform well on it. The general feeling on the list appears to have been that EC2 performs very poorly for database workloads, at least with PostgreSQL, and it doesn’t seem to be taken particularly seriously as a platform. I certainly thought of it as a last resort myself, for when other constraints prevent you from using a proper VPS or real hardware.

I had the chance to meet with a high level AWS support engineer last week. It’s prompted me to write up the basics of configuring EC2 instances for decent PostgreSQL performance. I haven’t had the chance to back the following advice with hard numbers and benchmarks yet, so remember: Always test everything with a simulation of your workload.

Before I can get into the configuration details, I need to outline how EC2 storage works.

EC2 storage types

EC2 instances have two very different storage options. These are explained quite well in the storage documentation, so I won't repeat the details. It's sufficient to say that the Instance Store is local to the VM and is unrecoverably lost if the VM is stopped. It is backed by local hard drive or SSD storage. EBS, by contrast, is durable and isn't lost when the VM is stopped. It is more like NBD or iSCSI than local storage; it's a network block device protocol, with the corresponding latency and throughput issues that this entails.

If you've been facing performance issues, you might've seen the High I/O instance types and thought "That sounds ideal for my database workload". I thought so myself – but they won't actually make any difference if your database storage is on EBS volumes. The High I/O instances have fast instance store storage, but aren't any different in terms of EBS. So if you've been using a High I/O instance with a database that's on EBS volumes, you're not gaining any benefit from the enhanced instance store I/O you're paying for, and are better off on an EBS-optimized large instance.

Durable databases

F

[continue reading]

Pl/Python talk at Pycon Canada
Posted by Steve Singer on 2012-11-13 at 03:54:25

I spent the weekend attending Pycon Canada where I gave a talk on Pl/Python. I want to thank the conference organizers for putting on an excellent conference. I am told that this was the first time Pycon had a regional conference in Canada and that it was put together by a group of volunteers in less than 6 months.

One of my favourite parts of local/regional conferences held on weekends is that they tend to attract attendees who are passionate about computers and technology. The people I spoke with at the conference were there because they wanted to be there, not because their boss wanted them to be there, and either loved Python or wanted to learn more about it. I've attended many great PostgreSQL conferences over the past few years, but it was nice to spend some time talking with people from broader development backgrounds.

In my discussions with people at the conference I noticed a trend. People I spoke with who are working at companies that did Python development tended to be using PostgreSQL. The ones that weren’t currently using PostgreSQL were using MySQL and talking about moving to PostgreSQL or were apologetic for still being on MySQL. The MySQL users were often apologizing before I told them that I was a PostgreSQL contributor. Some of the MySQL users also mentioned that they were using non-Oracle forks like Percona.

This was in contrast to the people at the Python conference who described their workplaces as doing primarily Java development. The Java development shops tended to be using Oracle or SQL Server. I admit that the sample size of the Java developers wasn't that big (this was a Python conference after all), but my observations are worth keeping in mind since they might be indicating a pattern. Other people have commented about the popularity of PostgreSQL in the Ruby community.

I wonder how much of this observation is because older applications written in Java are already using SQL Server/Oracle and there hasn't been a strong enough driver to change to PostgreSQL. While newer software projects are ten

[continue reading]

PostGIS 2.0 in Berkeley Tomorrow
Posted by Josh Berkus in pgExperts on 2012-11-13 at 02:52:06
Brian Hamlin will be presenting PostGIS 2.0 at Soda Hall in Berkeley tomorrow. If you can't make it, it will probably be live on video (not sure about wireless quality).