What is Page Life Expectancy (PLE) in SQL Server
Being a production DBA means being able to quickly look at a SQL Server instance and diagnose issues with it. We know that a quick look at Perfmon or Task Manager can reveal high CPU usage, but what is a good way to quickly assess memory usage?
Checking out the Page Life Expectancy (PLE) counter is a great way to check for memory pressure. A value too low for your system indicates that pages are being flushed from the buffer pool too quickly. The longer a page can stay in the buffer pool and be read from memory the better. We all know that having to constantly go back to disk to fetch data is costly.
I won’t get into the specifics of which number is too high or too low for the PLE counter, but I will say that any system in the double-digit seconds is in trouble. I have seen systems in the single digits, and other systems where I measure PLE in days. I have updated my query to reflect those situations. I also like to include an uptime counter in case I am on a system where SQL Server was recently restarted; if the server has only been up for 10 minutes, I cannot expect the PLE counter to be more than 10 minutes.
I highly recommend trending this value throughout the day to monitor usage on your servers. If you find this value consistently low, it does not mean you need to rush out and add more memory; you should first look at your queries and indexes to make sure you are not reading unnecessary data. One bad query (SELECT * FROM bigtable) can crush your PLE counter.
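As a rough illustration of the uptime sanity check described above (this is my own sketch, not part of the monitoring query), here is a minimal Python helper that converts a PLE reading in seconds into larger units and only flags memory pressure when the counter is low relative to how long the instance has been up. The 300-second threshold is an arbitrary stand-in; pick a value appropriate for your system.

```python
def describe_ple(ple_secs, uptime_secs, threshold_secs=300):
    """Summarize a Page Life Expectancy reading against instance uptime.

    threshold_secs is an arbitrary example value; tune it per system.
    """
    summary = {
        "ple_secs": ple_secs,
        "ple_mins": ple_secs // 60,
        "ple_hours": ple_secs // 3600,
        "ple_days": ple_secs // 86400,
    }
    # If SQL Server just restarted, PLE cannot exceed uptime, so a low
    # value right after a restart is expected rather than alarming.
    summary["capped_by_uptime"] = ple_secs >= uptime_secs
    summary["memory_pressure"] = (
        ple_secs < threshold_secs and uptime_secs > threshold_secs
    )
    return summary

# A server up for two weeks with a PLE of 235 minutes shows no cap
# and no pressure at this threshold:
print(describe_ple(235 * 60, 14 * 86400))
```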
I have included two screenshots showing the results from two servers, one with PLE measured in days and one in hours.
Notice on the image above that the PLE is the uptime of the server.
In the image above the server has been up for almost two weeks and has a PLE of 235 minutes.
SELECT @@SERVERNAME AS INSTANCE
, UPTIME_MIN = CASE WHEN [counter_name] = 'Page life expectancy'
       THEN (SELECT DATEDIFF(MI, MAX(login_time), GETDATE())
             FROM master.dbo.sysprocesses
             WHERE cmd = 'LAZY WRITER')
  END
, [cntr_value] AS PLE_SECS
, [cntr_value] / 60 AS PLE_MINS
, [cntr_value] / 3600 AS PLE_HOURS
, [cntr_value] / 86400 AS PLE_DAYS
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
AND [counter_name] = 'Page life expectancy'
SSIS training for a cause
On Thursday, March 14th (Pi Day), in Duluth, GA, Audrey Hammonds and Julie Smith will put on a spectacular SSIS training event to benefit the Cloverleaf School of Atlanta.
Both Audrey and Julie are SQL Server MVPs and have given countless hours of their time to the SQL Server community over the years. Visit their blog at datachix.com. They have spoken at events all over the country and always draw a crowd. I have had the privilege of working with them to plan SQL Saturday 89 and SQL Saturday 111 in Atlanta.
The day of training will be split into two parts.
Part I Dimensional Modeling with Audrey
- Why transactional databases are the way they are: Codd’s rules to live by
- Why Codd’s rules don’t work for reporting: Kimball’s rules to live by
- Fun at the whiteboard converting a transactional model to a dimensional model.
Part II Extract Transform and Load using SSIS with Julie
- A tour of SSIS
  - Control Flow Basics
  - Data Flow Basics
- How to load Dimensions with SSIS
- How to load Facts with SSIS
- Why SSIS sometimes gets such a bad rap
- What happened and when
- When things go wrong, how to know the details
- Top quirks of SSIS—what, why, and how to get around them.
You don’t want to miss out on this training. Check out the event on Eventbrite; there is an early-bird rate, so register soon. Don’t forget, the proceeds from this training go to a great cause.
How to break a high end de-duplication backup device
I have been working with a client who recently purchased a top-of-the-line de-duplication backup device in order to synchronize backups between two data centers. The device de-duplicates data to reduce storage needs, compresses it, and then synchronizes it to a secondary device hundreds of miles away.
These types of devices are not cheap and work very efficiently. My client migrated their SQL Server environment to the new backup appliance and found a notable weakness in the device. Their previous backup solution maintained the retention policies, whereas this new device did not. After using the new solution for a few months, they noticed performance decreasing almost to the point of moving back to their old solution.
After a couple of days of research into the issue, the root cause was determined to be having too many files in the backup directory. Since several of the servers contained databases with low RPOs, they were making lots of transaction log backups. Several of the tlog backup folders contained tens of thousands of backup files, all with similar names.
Once the tlog folders were purged the system returned to a normal operational level. It was explained to the client by the vendor that they shouldn’t have more than 1000 files per folder and no more than 1000 subfolders.
The first lesson is that any time you fundamentally change the solution for a critical process, you need a plan in place to account for every aspect of the previous solution. Had this client built a proper purge process, they would not have brought a top-of-the-line backup device to its knees.
The second lesson is that if you are making frequent transaction log backups for a system with multiple databases on it, you should consider writing the transaction logs to individual subfolders. A good practice is to have a tlog folder with a subfolder for each database. Having this in place will limit the number of files per directory.
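The per-database subfolder layout described above is easy to retrofit with a small script. Here is a minimal Python sketch; it assumes a hypothetical backup naming convention of DBName_YYYYMMDDhhmmss.trn (everything before the last underscore is the database name), which you would adjust to match your own backup jobs.

```python
import os
import shutil
import tempfile

def sort_tlog_backups(backup_dir):
    """Move transaction log backups into one subfolder per database.

    Assumes the hypothetical naming convention DBName_YYYYMMDDhhmmss.trn,
    where everything before the last underscore is the database name.
    """
    for name in os.listdir(backup_dir):
        path = os.path.join(backup_dir, name)
        if not (os.path.isfile(path) and name.lower().endswith(".trn")):
            continue
        db_name = name.rsplit("_", 1)[0]
        target = os.path.join(backup_dir, db_name)
        os.makedirs(target, exist_ok=True)
        shutil.move(path, os.path.join(target, name))

# Demo on a throwaway directory with a few fake backup files
root = tempfile.mkdtemp()
for f in ["SalesDB_20130101120000.trn", "SalesDB_20130101121500.trn",
          "HRDB_20130101120000.trn"]:
    open(os.path.join(root, f), "w").close()

sort_tlog_backups(root)
print(sorted(os.listdir(root)))  # ['HRDB', 'SalesDB']
```

Running this from a scheduled job after each backup window keeps every folder well under the vendor's 1,000-file limit.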
SQL Backup and Recovery – new distribution
I received an email today from my publisher, Joes 2 Pros, that my book is now available on Barnes and Noble’s website in addition to Amazon, in print and on Kindle. This is very exciting.
In just three months SQL Backup and Recovery has sold nearly 500 copies. I am still in awe that my name is on the cover of a book and I really hope this is the beginning of many more to come.
It is such an educational process to write a book.
SQL Saturday 220 Atlanta GA
The latest installment of SQL Saturday Atlanta was just announced. On May 18th, 2013, SQL Server experts from all over will come together in Atlanta, GA for a full day of free training. I have been part of the previous two years’ SQL Saturday events in Atlanta and have had a blast volunteering and helping to put on such an incredible event, one that changes people’s careers and lives.
This past year, for SQL Saturday 111, I recall various attendees and speakers referring to our event as the “SQL Family Reunion”. For SQL Saturday #220 we are aiming for the same feeling. We anticipate over 900 registrations. Last year we had over 170 sessions submitted by speakers. In just the first hour that our event has been open for registration and session submission, five speakers have already submitted sessions.
If you would like to attend or submit sessions, make sure to do so quickly.
Last year we hosted three pre-cons. We will have pre-cons again this year and those events should be live soon.
Visit our link and register today!!!
Performance Impact from Page Splits
Recently I was brought in to help troubleshoot performance issues on a database server. Going through my typical checklist, I noticed memory pressure on the box and made the recommendation of more memory, which was simple to fix. What caught my attention, though, was that blocking was occurring on a transaction that was trying to do a simple delete of a few records.
In a discussion with the vendor I learned that this system does massive inserts and deletes all day long. The system had only been in place for a few months, and one table was nearly 30 GB in size. This was the particular table being blocked on a delete of a few records. We use a standard index maintenance script that reorganizes or rebuilds based on fragmentation, so I was able to quickly rule out fragmentation as the issue; however, I still felt I should dig into the indexes to see what was happening.
In looking at the indexes, I found a clustered index on an nvarchar(255) primary key and 13 nonclustered indexes, all with the default fill factor of 100%. Digging into the number of page splits per second on this instance, it was in the several hundreds, and of the top 10 indexes by page splits, this particular table accounted for 8. With this information in hand, I went back to the vendor. They basically told me that this level of troubleshooting and tuning was beyond their knowledge, but that what I stated sounded good, so I should implement the changes. All I had asked was whether they had an issue with me rebuilding the indexes and lowering the fill factor.
By dropping the fill factor from 100% to 80%, I decreased the page splits drastically. For the overall instance I am no longer in the hundreds per second; they are still active in the 20+ range, and among the top 10 indexes this particular table now has only one entry. That said, I still have more work to do, but this system is in much better shape. The table grew quite a bit, but the tradeoff is well worth it.
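The growth tradeoff is easy to estimate with back-of-the-envelope math (the figures below are illustrative, not measurements from the client system): rebuilding at an 80% fill factor leaves 20% of each leaf page empty, so the same rows need roughly 100/80 = 1.25x the pages.

```python
def rebuilt_size_gb(data_gb, fill_factor_pct):
    """Approximate leaf-level index size after a rebuild at a given
    fill factor. Ignores row-size rounding and non-leaf levels;
    illustration only.
    """
    return data_gb * 100.0 / fill_factor_pct

# A hypothetical 30 GB index rebuilt at an 80% fill factor:
print(rebuilt_size_gb(30, 80))  # 37.5
```

In exchange for that extra 25% of space, each page has headroom for new rows, so far fewer inserts force a page split.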
PASS Summit 2012 – Networking
I was fortunate to attend my 4th PASS Summit. For the past four years I have flown out to Seattle, Washington and attended the largest Microsoft SQL Server convention in the world. My first year was all about attending sessions and learning new things. Year two was about attending technical sessions and hanging out with a few friends I had made on Twitter the year before. Year three was as much about hanging out with friends as it was about attending technical sessions. This year I attended only two sessions, plus all-day meetings on Tuesday regarding volunteer work. Since my official company training event this year was a SQL Skills Immersion event, my trip to Seattle was technically vacation time for me. I spent my days hanging out with friends and making new ones.
I was very fortunate to be able to publish a book “SQL Backup and Recovery” with Joes2Pros. We made it to print just in time for the PASS Su