Nick Haslam
I'm a UK-based BI consultant, specialising in SQL Server, C#, and Business Intelligence. I've been working with SQL Server for over 10 years, and have over 15 years in development. Nick is a DZone MVB.

T-SQL Tuesday: Taking it to the MAX - Aggregate Functions

07.16.2012
This article is part of the DZone .NET Zone, which is brought to you in collaboration with the .NET community. Visit the .NET Zone for additional tutorials, videos, opinions, and other resources on this topic.


Aggregate functions are the topic of this month's T-SQL Tuesday. It's an interesting one, and it made me think about what I've done with aggregation that could be considered interesting.

One thing that sprang to mind was some work I did on a data warehouse. I worked on a project a while back (a few years now) that included a data source from an ERP system: effectively a table populated from a series of Excel worksheets. The table was set up so that each cell in the worksheet had its own row. This resulted in 6,435 rows (cells A1 to I715, i.e. 9 columns × 715 rows) per project, per financial period; with over 200 projects and 12 periods a year, that's at least 6,435 × 200 × 12 = 15,444,000 rows per year. The code and table samples below are representative of the process we followed, and the table structures have been appropriately anonymised, but you get the general idea.
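To make the examples concrete, here's a minimal sketch of what such a cell-per-row source table might look like. The column names are taken from the queries later in this post; the data types and sample values are my assumptions, not the original schema:

```sql
-- Hypothetical reconstruction of the anonymised source table.
-- Column names come from the queries in this post; the data types
-- and sample values are assumptions, not the original schema.
CREATE TABLE dbo.xl_test (
    project_id  INT          NOT NULL,
    xl_month    TINYINT      NOT NULL,  -- financial period month
    xl_year     SMALLINT     NOT NULL,
    xl_cellref  VARCHAR(10)  NOT NULL,  -- worksheet cell, e.g. 'A1' .. 'I715'
    xl_value    VARCHAR(255) NULL       -- the raw cell contents
);

-- One row per worksheet cell, per project, per period:
INSERT INTO dbo.xl_test (project_id, xl_month, xl_year, xl_cellref, xl_value)
VALUES (101, 6, 2012, 'A1', 'Project Alpha'),
       (101, 6, 2012, 'A2', '42000');
```

Storing every cell as a string in a single value column is what makes the aggregation below necessary: the data has to be pivoted back into one row per project per period before it's usable.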

It wasn't necessary to load all the source data into the data warehouse, since there was a lot of information that we didn't need. Effectively, this was the process that we had:

[Diagram: the cell-per-row source table being loaded into the data warehouse]

To get the values out for the project, in the correct form, the following T-SQL was used:

SELECT project_id, xl_month, xl_year,
    MAX(CASE WHEN xl_cellref = 'A1' THEN xl_value END) AS [A1],
    MAX(CASE WHEN xl_cellref = 'A2' THEN xl_value END) AS [A2]
FROM dbo.xl_test
GROUP BY project_id, xl_month, xl_year;

After a bit of time with this running, we made some changes and ended up with the following:

SELECT project_id, xl_month, xl_year, [A1], [A2]
FROM (
    SELECT project_id, xl_month, xl_year, xl_cellref, xl_value
    FROM dbo.xl_test
) AS xl_test
PIVOT (
    MAX(xl_value) FOR xl_cellref IN ([A1], [A2])
) AS aPivot;

This (along with some of the other changes we made) improved the performance of the DWH load by approximately 25%, though I'd imagine a fair chunk of that was down to the fact that PIVOT is quicker than a dozen or so CASE expressions.
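With 6,435 cell references (A1 to I715), hand-writing the `IN` list isn't practical. One common approach, sketched here as my own suggestion rather than anything from the original work, is to build the column list dynamically and execute the pivot with `sp_executesql` (using `FOR XML PATH` for the aggregation, which is the era-appropriate technique; `STRING_AGG` only arrived in SQL Server 2017):

```sql
-- Hypothetical sketch: building the PIVOT column list dynamically,
-- assuming the dbo.xl_test table shown in this post.
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build a comma-separated, QUOTENAME-escaped list of distinct cell refs,
-- e.g. [A1],[A2],... QUOTENAME guards against injection from the data.
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(xl_cellref)
    FROM dbo.xl_test
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SET @sql = N'
SELECT project_id, xl_month, xl_year, ' + @cols + N'
FROM (
    SELECT project_id, xl_month, xl_year, xl_cellref, xl_value
    FROM dbo.xl_test
) AS src
PIVOT (MAX(xl_value) FOR xl_cellref IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;
```

The trade-off is that dynamic SQL is harder to debug and the plan can't be inspected as easily, but it avoids maintaining a query with thousands of hard-coded column names.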

Published at DZone with permission of Nick Haslam, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Tags:
  • TSQL
  • SQL
  • .NET & Windows