Category Archives: SQL Development

Query Tuning and Missing Index Recommendations

–By Jeffry Schwartz

Overview

Most analysts are familiar with missing index recommendations provided by SQL Server when query plans are displayed within SSMS or reported by various missing index DMVs. Several questions arise concerning these recommendations:

  1. What determines whether a column becomes a key column or an included column in the new index?
  2. Where does the ordering of the included columns come from? Does column ordering in queries affect the recommendations, e.g., will query orderings of A, B, C and C, B, A in the select column list result in one recommendation or two?
  3. How sophisticated are the missing index recommendations, i.e., does SQL Server compare recommendations and perform any kind of recommendation consolidation or optimization?

This article provides a reproducible example using six different queries that deliberately cause SQL Server to generate missing index recommendations, and it answers these questions. The example is used to study missing index recommendations in detail, including how they relate to the underlying table and how query columns affect these recommendations. The article also illustrates how a single consolidated index can address the performance needs of all six queries.

Test Table Creation & Load

To determine missing index recommendation behavior, a generic table was constructed and filled with 20 million records. Each record contained an identity column, an ID column, a text column, and 47 metric columns whose values ranged between 1 and 10,000,000. The large number of table columns was used to ensure SQL Server would choose an index option when appropriate. Six queries that incorporated various column combinations were executed (some of which differed only in column ordering). To minimize duplication of column values and skewing of query plans, the ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000 formula was used to generate values that were as random as possible. Two indices were created: a clustered index that used the identity column as its only key and a second nonclustered index that used DupID as its only key column. The scripts for the creation, loading, and initial indexing of the table are shown below.

-- ##############################################################

-- Create test table

-- ##############################################################

drop table FewDuplicates;

CREATE TABLE FewDuplicates (

IDCol bigint identity (20000000,1),

DupID bigint,

MyText varchar(10),

Metric01 bigint, Metric02 bigint, Metric03 bigint, Metric04 bigint, Metric05 bigint,

Metric06 bigint, Metric07 bigint, Metric08 bigint, Metric09 bigint, Metric10 bigint,

Metric11 bigint, Metric12 bigint, Metric13 bigint, Metric14 bigint, Metric15 bigint,

Metric16 bigint, Metric17 bigint, Metric18 bigint, Metric19 bigint, Metric20 bigint,

Metric21 bigint, Metric22 bigint, Metric23 bigint, Metric24 bigint, Metric25 bigint,

Metric26 bigint, Metric27 bigint, Metric28 bigint, Metric29 bigint, Metric30 bigint,

Metric31 bigint, Metric32 bigint, Metric33 bigint, Metric34 bigint, Metric35 bigint,

Metric36 bigint, Metric37 bigint, Metric38 bigint, Metric39 bigint, Metric40 bigint,

Metric41 bigint, Metric42 bigint, Metric43 bigint, Metric44 bigint, Metric45 bigint,

Metric46 bigint, Metric47 bigint

)

 

-- ##############################################################

-- Load original table

-- ##############################################################

declare @DupID bigint = 1

declare @NumRecs bigint = 20000000

 

truncate table FewDuplicates

set nocount on

while (@DupID <= @NumRecs)

begin

insert into [dbo].[FewDuplicates] (

[DupID], [MyText],

[Metric01], [Metric02], [Metric03], [Metric04], [Metric05], [Metric06], [Metric07],

[Metric08], [Metric09], [Metric10], [Metric11], [Metric12], [Metric13], [Metric14],

[Metric15], [Metric16], [Metric17], [Metric18], [Metric19], [Metric20], [Metric21],

[Metric22], [Metric23], [Metric24], [Metric25], [Metric26], [Metric27], [Metric28],

[Metric29], [Metric30], [Metric31], [Metric32], [Metric33], [Metric34], [Metric35],

[Metric36], [Metric37], [Metric38], [Metric39], [Metric40], [Metric41], [Metric42],

[Metric43], [Metric44], [Metric45], [Metric46], [Metric47]

)

VALUES (

@DupID, 'my text',

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000

)

set @DupID += 1

end -- while loop

set nocount off

 

-- ##############################################################

-- Create indices on the test table

-- ##############################################################

CREATE UNIQUE CLUSTERED INDEX [ci_RecID] ON [dbo].[FewDuplicates]

(

[IDCol] ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

CREATE NONCLUSTERED INDEX [ix_DupID] ON [dbo].[FewDuplicates]

(

DupID ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

Queries & Execution Plans with Missing Index Recommendations

The six queries all performed a range scan based upon DupID and Metric01. Clearly, the ix_DupID index could be used for the first portion of the where clause, but none of the existing indices could assist in the resolution of the second portion of the where clause. Note: All queries used identical where clauses to ensure that the only query differences involved the columns requested in the select clause. These variations employed different combinations of the first six metric columns and included a variety of column orderings. All of the queries returned the same 809 rows. Note: Due to the random nature of the data, the reader will not obtain identical results, but they will be functionally similar. Each query was run separately after the following commands had been executed:

dbcc dropcleanbuffers with no_infomsgs

dbcc freeproccache with no_infomsgs

dbcc freesystemcache('TokenAndPermUserStore') with no_infomsgs

These commands clear all the pertinent caches to ensure reproducibility and prevent memory-resident portions of the database from skewing the results. As shown in Table 1, each query required approximately 90 seconds to execute and approximately one million logical and physical reads to complete. The SSMS option for returning the actual execution plan was set prior to execution, and the six queries and execution plans are shown in the individual query sections below. The first five queries resulted in different recommendations, but the sixth query's plan and recommended missing index were identical to that of the fifth because the only difference between Queries #5 and #6 is the ordering of the columns (part of the answer to question #2). The differences among all the queries are summarized in Table 2 below.
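
One straightforward way to capture comparable duration and read figures for each query (not necessarily the method used to produce Table 1) is to enable the session statistics options before each run:

-- Report elapsed time and logical/physical reads for each statement in this session
SET STATISTICS TIME ON;
SET STATISTICS IO ON;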

Blog_20170601_T1

Table 1: Summary of Initial Query Executions & Timings

Blog_20170601_T2

Table 2: Summary of Query Selection Columns

Query #1

select [DupID], [MyText], [Metric01]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_1

Query #2

select [DupID], [MyText], [Metric02]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_2

Query #3

select [DupID], [MyText], [Metric03], [Metric06], [Metric04]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_3

Query #4

select [DupID], [MyText], [Metric04], [Metric06], [Metric02], [Metric01]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_4

Query #5

select [DupID], [MyText], [Metric01], [Metric03], [Metric05], [Metric02]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_5

Query #6

select [DupID], [MyText], [Metric05], [Metric02], [Metric01], [Metric03]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_6

Missing Index Recommendations Displayed via DMVs

The following query was used to provide some of the information enumerated in Table 3 below:

-- ##############################################################

-- Missing Index DMV Query

-- https://blogs.msdn.microsoft.com/bartd/2007/07/19/are-you-using-sqls-missing-index-dmvs/

-- ##############################################################

SELECT migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,

'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)

+ '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'

+ ' ON ' + mid.statement

+ ' (' + ISNULL (mid.equality_columns, '')

+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END

+ ISNULL (mid.inequality_columns, '')

+ ')'

+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,

migs.*, mid.database_id, mid.[object_id]

FROM sys.dm_db_missing_index_groups mig

INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle

INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle

WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10

ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

The following table lists the missing index recommendations as displayed by the DMVs. It also summarizes all of the missing index recommendations to make it easier to determine what a consolidated index might require. It is important to note that the missing index recommendations for Query #5 and #6 are the same, just as they were in the query plan listings above. Although discussion of the calculation and interpretation of the Improvement Measure column is beyond the scope of this article, it should be noted that the total improvement measure for this recommendation was approximately twice that of any other index recommendation, and further investigation reveals that this index would satisfy both Query #5 and Query #6.

Blog_20170601_T3

Table 3: Summary of All Missing Index Recommendations

Composite Index Recommendation for All Six Queries

Either Table 2 or Table 3 could be used to determine a composite index. In-depth investigation reveals that since DupID and Metric01 are specified in the where clauses as inequalities, these should be key columns in any index, and review of Table 3 highlights this for all index recommendations. Since MyText is used in all of the queries and Metric01 is to be used as a key column, the only remaining columns are Metric02, Metric03, Metric04, Metric05, and Metric06. An index that employs these columns as included columns can “cover” all the queries shown in this article. For further information about covering indices, please refer to the following article: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-columns. The composite (and covering) index is shown below:

CREATE NONCLUSTERED INDEX ix_ResolveMultipleQueriesNeeds ON [dbo].[FewDuplicates] (

[DupID],

[Metric01]

)

INCLUDE (

[MyText],

[Metric02],

[Metric03],

[Metric04],

[Metric05],

[Metric06]

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

Query Plans for Queries #1 – #6 with Composite Index Added

Once the combined index was implemented, each query was run as before: caches cleared and SSMS actual execution plan option on. No query ran longer than one second even when the caches had been purged. Although the queries were run individually for the purposes of obtaining accurate timings, they were also run together for the purpose of showing the similarities of the execution plans with the new index in place. Table 4 lists the execution results and clearly, performance improved greatly for all queries. For example, durations decreased from approximately 90 seconds to about one second and reads dropped from approximately one million to four thousand. Examination of the query plans shows that all six queries used the new index to obtain the observed extremely fast performance.

Blog_20170601_T4

Table 4: Summary of Query Executions & Timings After Composite Index Added

Blog_20170601_7
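
As a quick check beyond reading the individual plans, usage of the new index can also be confirmed from the index usage DMV after the six queries have been rerun (a sketch, assuming the queries were executed in the current database against the test table from this article):

-- Show seek/scan/lookup counts for every index on the test table
SELECT i.name AS IndexName, ius.user_seeks, ius.user_scans, ius.user_lookups
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats ius
ON ius.object_id = i.object_id AND ius.index_id = i.index_id AND ius.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.FewDuplicates');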

Summary

Most readers probably already knew the answer to the first question: key columns are used to filter query results. The simplest example of this involves a where clause. Columns referenced by equalities are placed first, followed by columns that are used with inequalities. Some of the missing index DMVs actually differentiate between the types of relationships involved. The example queries show that table column ordering dictates included column order, thereby answering question #2. Since table column ordering is used to specify included column ordering, SQL Server WILL NOT duplicate missing index recommendations when the columns are identical, regardless of their ordering in the queries. The answer to the last question is that SQL Server does not appear to perform any consolidations or optimizations other than those cited in the answer to question #2. This knowledge is invaluable, particularly when evaluating missing index recommendations on production systems, because unless the queries that cause SQL Server to generate the missing index recommendations have been executed recently, there is little information to associate the actual queries with the recommendations. Understanding the factors that affect SQL Server's recommendations can simplify identification of the causative queries and give an analyst more certainty that the correct associations have been made.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Query Tuning – Adding Included Columns To An Existing Index

— by Lori Brown @SQLSupahStah

For the most part, I stick to my daily DBA duties of making sure that SQL Server maintenance is running and that the proverbial lights are on in all my SQL boxes. However, sometimes it is fun to do some tuning and it’s even better when you have a success.

I have a SQL Server that hosts some Microsoft ERP databases. There was a problem with some slow performance and after collecting some performance data, I narrowed the root cause down to a view (vp_03400Exception1) that was used in a stored procedure. Out of the 15 million reads performed by the proc when I captured it, 12 million were directly from the offending view. In checking the execution plan for the view I found that a large table (APTran) was scanned in one spot and there were key lookups going on in other places (APTran & APDoc).

Bad stuff in bad plan…thick arrow indicates that almost 4 million records were scanned

Blog_20170323_1

Key lookups are not good (on small tables these might not matter, but on large tables they do) and should be eliminated if possible. Key lookups occur when you have an index seek against a table, but your query wants extra columns that are not in that index. This causes SQL Server to have to go back and get those extra columns. Lots of trips back to get extra columns results in slow performance. Of course scans are not good either, so I wanted to try to get rid of that too.

Since the key lookups have an output list in the execution plan that lists what extra columns were needed, I took a look and simply added those columns as included columns on the existing indexes that were being used in the plan. In effect, I created a covering index using included columns. The included columns are only used when specific queries need them. If other queries use the index but don't actually need the included columns, those columns won't be pulled in.
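
The actual index definitions appear only in the screenshot below, but the general pattern for adding included columns to an existing index in place is roughly the following sketch (the index, key, and included column names here are hypothetical, not the client's):

-- Rebuild an existing index with extra included columns, keeping the original keys
CREATE NONCLUSTERED INDEX ix_APTran_Existing ON dbo.APTran
(KeyCol1, KeyCol2)                  -- original key columns, unchanged
INCLUDE (ExtraCol1, ExtraCol2)      -- columns taken from the key lookup's output list
WITH (DROP_EXISTING = ON);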

Here are the modified indexes with the included columns.

Blog_20170323_4

With the new columns included in the indexes, index seeks with no key lookups are now being performed.  The thick arrow has disappeared because instead of needing to scan the whole APTran table, it knows to return only 4 records.  This means that the rest of the query does not have to perform anywhere near as much work as it did before.  See the new execution plan below:

Much better stuff in more efficient plan…small arrow indicates that only 4 records were needed from APTran so much less work is being done.

Blog_20170323_5

Below you can see that overall the new plan uses less memory, is smaller and just plain old more efficient.

Blog_20170323_6

Boom!! Mic drop!!

Blog_20170323_7

You might be thinking that I should not celebrate since my query only returned 1 row. However, what if the results were hundreds or thousands or millions of rows? The work done by the old execution plan would be significant and the users would feel it. So, what’s the bottom line here? If you have queries accessing large tables, check execution plans to see if they are being accessed efficiently. Sometimes a small change to an existing index will work wonders. Sometimes you have to create a new index to make queries efficient. The execution plan might just tell you what it needs if you know where to look.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

SSISDB – Package Execution Time in Minutes and Average Execution Time

–By Lori Brown   @SQLSupahStah

I have a client who has a lot of SSIS packages in the SSIS catalog that we are migrating to a new server. Of course there is lots of testing going on. So the question came up about how to more easily see package runtimes and compare them to the overall average runtime rather than using the Dashboard or the reports that come with Management Studio. After a little searching, I found that the SSISDB.internal.execution_info table contains what I needed.

-- Filter by date and package name

DECLARE @begindate DATE = GETDATE() - 7

DECLARE @pkgname VARCHAR(100) = 'Your Package Name' -- Ex: Patient_Update.dtsx

 

-- Package execution time

SELECT folder_name, project_name, package_name,

CAST(start_time AS datetime) AS start_time,

DATEDIFF(MINUTE, start_time, end_time) AS 'execution_time[min]'

FROM SSISDB.internal.execution_info

WHERE start_time >= @begindate

AND package_name = @pkgname -- Comment out this line for all packages

ORDER BY start_time

 

-- Average package execution time

SELECT folder_name, project_name, package_name,

AVG(DATEDIFF(MINUTE, start_time, end_time)) AS 'Avg execution_time[min]'

FROM SSISDB.internal.execution_info

WHERE start_time >= @begindate

AND package_name = @pkgname -- Comment out this line for all packages

GROUP BY folder_name,project_name,package_name

You can also use the catalog.executions view to get the same data but since I saw the internal table, I jumped in. This was tested on SQL 2016 but I think it will still work in earlier versions. Hope this helps someone out!
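
For reference, a roughly equivalent query against the supported catalog.executions view might look like the following sketch (it was not part of the testing above):

DECLARE @begindate DATE = GETDATE() - 7
DECLARE @pkgname VARCHAR(100) = 'Your Package Name'

SELECT folder_name, project_name, package_name,
       CAST(start_time AS datetime) AS start_time,
       DATEDIFF(MINUTE, start_time, end_time) AS execution_time_min
FROM SSISDB.catalog.executions
WHERE start_time >= @begindate
AND package_name = @pkgname
ORDER BY start_time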

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL 2016 SP1 USE HINT

–By Lori Brown   @SQLSupahStah

After installing Service Pack 1 for SQL 2016, I ran across USE HINT and thought I would put out what I could find to document it. Here’s where you find info from Microsoft: https://msdn.microsoft.com/en-us/library/ms181714.aspx and here is a link for all of the bugs that are fixed in SP1: https://support.microsoft.com/en-us/kb/3182545

USE HINT ( hint_name ) Provides one or more additional hints to the query processor as specified by a hint name inside single quotation marks. Hint names are case-insensitive. USE HINT can be utilized without having to be a member of the sysadmin server role.

The following hint names are supported:

  • ‘DISABLE_OPTIMIZED_NESTED_LOOP’ Instructs the query processor not to use a sort operation (batch sort) for optimized nested loop joins when generating a query plan. Equivalent to trace flag 2340.
  • ‘FORCE_LEGACY_CARDINALITY_ESTIMATION’ Forces the query optimizer to use Cardinality Estimation model of SQL Server 2012 and earlier versions. Equivalent to trace flag 9481 or Database Scoped Configuration setting LEGACY_CARDINALITY_ESTIMATION=ON.
  • ‘ENABLE_QUERY_OPTIMIZER_HOTFIXES’ Enables query optimizer hotfixes (changes released in SQL Server Cumulative Updates and Service Packs). Equivalent to trace flag 4199 or Database Scoped Configuration setting QUERY_OPTIMIZER_HOTFIXES=ON.
  • ‘DISABLE_PARAMETER_SNIFFING’ Instructs query optimizer to use average data distribution while compiling a query with one or more parameters, making the query plan independent of the parameter value which was first used when the query was compiled. Equivalent to trace flag 4136 or Database Scoped Configuration setting PARAMETER_SNIFFING=OFF.
  • ‘ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES’ Causes SQL Server to generate a plan using minimum selectivity when estimating AND predicates for filters to account for correlation. Equivalent to trace flag 4137 when used with cardinality estimation model of SQL Server 2012 and earlier versions, and has similar effect when trace flag 9471 is used with cardinality estimation model of SQL Server 2014 or higher.
  • ‘DISABLE_OPTIMIZER_ROWGOAL’ Causes SQL Server to generate a plan that does not use row goal adjustments with queries that contain TOP, OPTION (FAST N), IN, or EXISTS keywords. Equivalent to trace flag 4138.
  • ‘ENABLE_HIST_AMENDMENT_FOR_ASC_KEYS’ Enables automatically generated quick statistics (histogram amendment) for any leading index column for which cardinality estimation is needed. The histogram used to estimate cardinality will be adjusted at query compile time to account for actual maximum or minimum value of this column. Equivalent to trace flag 4139.
  • ‘ASSUME_JOIN_PREDICATE_DEPENDS_ON_FILTERS’ Causes SQL Server to generate a query plan using the Simple Containment assumption instead of the default Base Containment assumption for joins, under the query optimizer Cardinality Estimation model of SQL Server 2014 or newer. Equivalent to trace flag 9476.
  • ‘FORCE_DEFAULT_CARDINALITY_ESTIMATION’ Forces the Query Optimizer to use Cardinality Estimation model that corresponds to the current database compatibility level. Use this hint to override Database Scoped Configuration setting LEGACY_CARDINALITY_ESTIMATION=ON or trace flag 9481.

The list of all supported USE HINT names can be queried using the dynamic management view sys.dm_exec_valid_use_hints. More about this view can be found here: https://msdn.microsoft.com/en-us/library/mt791356.aspx
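
For example, the names supported by a particular build can be listed with a simple query:

SELECT name FROM sys.dm_exec_valid_use_hints;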

Some USE HINT hints may conflict with trace flags enabled at the global or session level, or database scoped configuration settings. In this case, the query level hint (USE HINT) always takes precedence. If a USE HINT conflicts with another query hint or a trace flag enabled at the query level (such as by QUERYTRACEON), SQL Server will generate an error when trying to execute the query.

Example:

DECLARE @qty INT

SET @qty = 4

SELECT h.OrderDate, h.Freight, d.OrderQty, d.UnitPrice

FROM Sales.SalesOrderDetail d

JOIN Sales.SalesOrderHeader h ON (d.SalesOrderID = h.SalesOrderID)

WHERE d.OrderQty > @qty

OPTION (USE HINT('DISABLE_PARAMETER_SNIFFING', 'DISABLE_OPTIMIZED_NESTED_LOOP'));

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Table-Valued Parameter Performance Using C# Data Tables

The following testing and resulting data is a direct result of a recent issue that was discovered at one of our client sites.  After setting up SQL monitoring that does use a SQL trace to capture long running queries, suddenly certain processes would lock up and be unable to complete.  Initially we thought it was everything except the SQL trace causing the issue until we finally turned off all trace captures and the problem went away.  This was very concerning to us since using either SQL trace or XEvents is the method that we generally use to capture query performance. Without a fix, our performance tuning attempts would be severely hamstrung and slowed down. This problem had to be fixed yesterday!

Another side effect that would happen when things locked up was that despite having the max server memory set, SQL would end up using substantially more memory than was allocated to it.  At this client, developers were importing data from an application using C# and table-valued parameters, which should not have been an issue. After a lot of searching, we finally came across some comments by another SQL expert who mostly described our issue and recommended that any C# TVP's should have their columns defined or SQL could lock up if either trace or XEvents Completed Events are captured.  When we checked the C# code we found that the string columns that were being imported were not defined in an effort to make the import code flexible.  We were able to definitively fix this very obscure problem with just one line of C# code that is in the below post.  The end result is a FASTER import of data that can be captured in trace or XEvents.  Nice!!

Since we had to put together this fix for a very undocumented issue, we sincerely hope that any DBA or developer who runs into this problem can correct it much faster than we did.  A link to download a zip file of all code displayed in this post is included at the end.  Enjoy!

********************************************

Table-Valued Parameter Performance Using C# Data Tables

— By Jeff Schwartz

Overview

Much has been written about the use of table-valued parameters (TVP) and their performance. These narratives appear to be contradictory because some state that TVPs should be used only for small datasets, whereas others assert that TVPs scale very well. Adding to the confusion is the fact that most of the performance-related discussions focused on TVP use within T-SQL. Many of these examples employ either SQL trace or Extended Events (XEvents) to evaluate performance, but almost none of them has discussed what can occur when C# data tables and SQL trace or XEvents are used simultaneously. This paper details the analysis and resolution of a situation that occurred at a customer site when an existing C# application suddenly began to run extremely slowly after a very lightweight SQL trace that utilized a several-minute duration threshold and captured no frequent events was activated.

Initial Research

Preliminary research indicated that under very specific conditions the combination of TVPs and SQL trace could result in unintended performance consequences, e.g., excessive SQL Server memory consumption to the point where SQL Server exhausted all memory on the server regardless of the maximum memory setting value. The research also suggested that string usage and lengths in unspecified locations might have an impact on this issue. The question of whether TVPs might have some record-count performance threshold above which performance would degrade was also examined. Although bulk insert was not used by the application, the research team also decided to compare bulk and TVP insertion speeds because that is another source of conflicting information.

Since very little literature exists that discusses the combination of TVPs and SQL trace, the only recourse involved creating test frameworks, performing actual data loads, and using internal application timers in conjunction with either SQL trace or XEvents to monitor performance.

Test Frameworks

1. Two frameworks were constructed: T-SQL-only and C# calling a T-SQL stored procedure.

a. The T-SQL-only test loaded data into a TVP, which then called a stored procedure to perform the insertion into the final table.

b. The development team provided C# code that illustrated how the user application read a flat file into a C# data table, which was then passed as a TVP to a T-SQL stored procedure. This code was followed as closely as possible in the test framework (a generic T-SQL sketch of this TVP pattern appears after this list).

2. A SQL Server table containing data captured from the sys.dm_db_index_operational_stats Dynamic Management View (DMV) supplied the data for testing because

a. The data table contained approximately 3.2 million records, thereby being large enough to stress the application code and SQL Server adequately. The largest data file provided by the application team was approximately 1.6 million records, so the testing could push well beyond current record levels.

b. Each data record contained 47 columns, which was wider in terms of data columns than any application table being loaded.

c. No string value columns existed in the data. This ensured that user string data could not affect any interaction between the C# application and SQL Server, and that any string-related behavior was strictly due to the manner in which the application loaded the data table or the TVP, or the way the TVP parameter-passing was recorded in the SQL trace. All data columns were smallint, int, bigint, or date. This also ensured exact record lengths so that any record count-related behavior was independent of record size. Some of the data files being loaded by the development team contained only fixed-width data types, whereas others contained at least a few variable length string columns.

3. The frameworks were designed to load various numbers of records using the same starting point to determine whether a performance threshold existed for TVPs, regardless of whether SQL trace (or XEvents) was active, as well as to determine whether even small numbers of records could be processed with the SQL trace (or XEvents) active.

4. Thirty-one different record levels, shown in Table 1, were used for testing to ensure any record-count gaps were small. A separate CSV file of each length was created from the data cited in #2, always beginning with the first record.

Blog_20160901_1

Table 1: Record Counts used in Testing

5. Several methods of loading the flat file CSV data into C# data tables were developed and tested. The C# data table configurations were as follows:

a. Unlimited string length columns (initially used in the production C# application) – herein noted as NoStringMax (String #N/A on graphs)

b. Fixed length string columns of at most 30, 100, 500, or 1000 characters each – herein noted as StringMax (String <#> on graphs). Note: the development team stated that no column would exceed 1,000 characters in length.

c. Exact data type mapping so that the data table column types matched those of the receiving SQL Server table exactly – herein noted as Map

d. A common routine was used by options a, b, and c above to send the data table to SQL Server using a TVP.

e. The section of code that loaded the data table from the flat file was timed separately from the routine cited in d, which also was surrounded by its own timers. This enabled comparison of data load, TVP passing, and combined times.

f. The name of the CSV file is passed in as a parameter along with an adjusted version of the file name so the appropriate test identifiers can be written to the application log file for later analysis.

6. The tests were run with the following SQL Server monitoring options:

a. No SQL Server monitoring at all.

b. SQL trace monitoring including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

c. SQL Extended Events (XEvents) including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

7. All tests were performed on the same server to eliminate network and hardware variability. This server had a quad-core Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz processor, 32 GB of RAM, and a 500 GB SSD HD.

8. Various combinations of cold and warm SQL Server caches, cold and warm Windows caches, and SQL Server maximum memory limit sizes were tested. Some of the earlier attempts will be discussed in the Results section below. The final specifications were as follows:

a. No applications other than SQL Server, SSMS, and the C# application were active on the test server.

b. SQL Server’s maximum memory limit was set to 12 GB. This allowed SQL to go over its limit without distorting overall system behavior by exhausting server memory entirely.

c. Hot Windows file cache, i.e., a full run that loaded ALL of the flat file data into RAM was performed before the timing tests were run. This ensured that the HD was involved minimally.

d. Hot SQL Server buffer cache, achieved by running the next series soon after the last run completed (with no intervening runs), and by running each sequence four times in immediate succession.
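
Neither the customer's table type nor the stored procedure is reproduced here, but the T-SQL side of the TVP pattern used by both frameworks has the following general shape (the type, procedure, target table, and column names below are hypothetical):

-- Hypothetical table type and stored procedure illustrating the TVP insert pattern
CREATE TYPE dbo.tt_MetricRows AS TABLE (
    Metric01 bigint,
    Metric02 bigint,
    Metric03 bigint
);
GO
CREATE PROCEDURE dbo.usp_InsertMetricRows
    @Rows dbo.tt_MetricRows READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the TVP contents into the permanent target table
    INSERT INTO dbo.MetricTarget (Metric01, Metric02, Metric03)
    SELECT Metric01, Metric02, Metric03
    FROM @Rows;
END
GO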

C# Code

The NoStringMax routine cited in #5a in the previous section is shown in Figure 1.

Blog_20160901_2

Figure 1: Test C# Data Table Load Routine Using No String Length Maximums

The StringMax routine cited in #5b in the previous section is shown in Figure 2. The ONLY difference between this routine and the previous one is the addition of the MaxLength assignment that is highlighted below. This single line sets the maximum length of each data table column to the value of the iMaxStringLen parameter. As cited in #5b, this value ranged from 30 to 1,000 during testing.

Blog_20160901_3

Figure 2: Test C# Data Table Load Routine Using String Length Maximums

The Map routine cited in #5c in the previous section is shown in Figure 3 through Figure 6. Logically, this code is identical to that of #5a and #5b, but the individual column mappings make the routine much longer. Note: since no strings are used for the data table columns, the MaxLength parameter is unnecessary.

Blog_20160901_4

Figure 3: Test C# Data Table Load Routine Using Data Type Mapping – Part 1

Blog_20160901_5

Figure 4: Test C# Data Table Load Routine Using Data Type Mapping – Part 2

Blog_20160901_6

Figure 5: Test C# Data Table Load Routine Using Data Type Mapping – Part 3

Blog_20160901_7

Figure 6: Test C# Data Table Load Routine Using Data Type Mapping – Part 4

The routine cited in #5d in the previous section is shown in Figure 7.

Blog_20160901_8

Figure 7: C# Test Routine that Sends Data Table to SQL Server Using a TVP

Results

1. The T-SQL code alone, cited in Test Frameworks #1a, did not recreate the problem observed at the customer site when either SQL trace or Extended Events (XEvents) were used, so additional testing was unnecessary.

2. No problems occurred when the C# application was used in conjunction with SQL trace or XEvents as long as no completed events were captured. Therefore, only the capture of specific trace events created the problems.

3. Further research showed that when the C# code was running, adding ad-hoc or batch completed events to SQL traces or XEvent sessions caused issues. Adding completed statement events did not change things appreciably.

4. Interestingly, the extreme misbehavior was triggered by the combination of using NoStringMax C# code and having SQL trace or XEvents capture ad-hoc or batch completed events. Although the StringMax and Map routines ran a little more slowly for certain record count levels when SQL trace or XEvents captured ad-hoc or batch completed events, the dramatic memory consumption issues did not occur at all when these methods were used.

5. Initially, testing employed a 28 GB maximum SQL Server memory setting, but as occurred at the customer site, when the problem arose, SQL Server consumed all the memory on the server, which caused the server fundamentally to stop working. Using this setting and the NoStringMax C# code, CSV files with record counts up to 25,000 could be processed without taking hours, failing, or causing the server to run completely out of memory. However, the application could not reach the 50,000 record level.

6. Since testing needed to process MUCH higher numbers of records, the decision was made to reduce SQL Server memory to 4 GB in an attempt to provide additional memory space for SQL Server memory overflow. Another reason for lowering the SQL Server memory allocation was to ensure that the tests consumed all of SQL Server's buffer pool and that SQL Server had to operate against memory limits. This consideration, as well as the run-time variations, necessitated the four successive iterations for each combination.

7. Unfortunately, using the 4 GB setting, the lower-end NoStringMax C# code runs while capturing SQL trace ad-hoc or batch completed events caused application failures at approximately the 4,000 or 8,000 record levels. This clearly indicated an increase in SQL Server memory was required, so it was increased to 8 GB. Although the runs processed more records, they still crashed well before the 25,000 record level. Finally, moving the memory limit up to 12 GB enabled everything to run as before, effectively mimicking the 28 GB testing without running the risk of exhausting Windows memory.

8. Figure 8 highlights the NoStringMax C# code runs through the 25,000 record level while SQL trace or XEvents ad-hoc or batch completed events were being captured. It is evident that the run times were unacceptable. When the same test levels are viewed in Figure 9, the problem becomes abundantly clear. With SQL trace or XEvents capturing ad-hoc or batch completed events, the run times for the 10,000 record tests using the NoStringMax C# code ranged between 483 and 584 seconds. When ad-hoc or batch completed events were not captured, the run times were approximately 0.14 SECONDS! The values for all other tests, including NoStringMax C# code with SQL trace and XEvents ad-hoc or batch completed events off, are shown in Figure 9.

Blog_20160901_9

Figure 8: TVP Load Timing when Trace On or XEvents on

9. Although many lines appear in Figure 9, three groupings exist and these are the most important. Group 1, which consists of dotted lines, includes all tests during which ad-hoc or batch completed events were not captured. Group 2, which consists of solid lines, includes all tests during which SQL trace captured ad-hoc or batch completed events. Group 3, which consists of dashed lines, includes all tests during which XEvents captured ad-hoc or batch completed events. Important note: The NoStringMax runs shown in Figure 8 are NOT in Figure 9 because of scaling considerations. Figure 9 highlights several notable issues. Most importantly, once a maximum string length is specified, performance improves even when SQL trace or XEvents ad-hoc or batch completed events are not captured. In addition, the terrible performance problems go away. Interestingly, the behaviors of the various StringMax and Map runs were almost identical through approximately 175,000 records and then again at about 800,000 records. In between, unexplained divergence occurs, but it is only a few seconds and occurred primarily between the 200,000 and 700,000 record levels. The pattern shown in Figure 9 was repeated in every test sequence conducted. It is noteworthy that the StringMax and Map versions of the routine outperformed the NoStringMax under all comparable trace or XEvent-related conditions, and at higher volumes, even when completed events were captured.

Blog_20160901_10

Figure 9: TVP Load Timing when Trace/XEvents off OR Trace or XEvents on and Mapped Data Table or Limited-Length String Values Used

10. Once the number of records exceeded two million, the load of the C# data table began to crash due to application memory exhaustion. Note: neither Windows nor SQL Server memory was consumed excessively during this phase, so the limitation was strictly application-related.

11. Figure 10 summarizes the overall insertion rates of all previously cited methods at the two million record level, in addition to the bulk insert method. This graph shows clearly that the worst performer was the unlimited string method, and that the best one was the bulk insert method. Note: these rates exclude the reading of the flat file into the C# data table as well as the creation of the file that was suitable for bulk insertion.

Blog_20160901_11

Figure 10: TVP versus Bulk Insert Records per Second

Conclusions

The use of unlimited-length C# strings with data tables not only performs worse even when no SQL trace or XEvents ad-hoc or batch completed events are captured, it performs dreadfully when they are. Their use can cause poor overall server performance if the server's memory is exhausted. Finally, their use prevents the ability to monitor SQL Server query performance using either SQL trace or XEvents. However, when a maximum string length is specified, regardless of its length, performance without SQL trace or XEvents improved and the problematic interaction with SQL trace or XEvents was mitigated almost completely. Since the Map method is inefficient and limiting from an application development perspective and performance was not substantially better than with limited-length strings, its use does not appear to be beneficial.

No table-valued parameter scalability issues were observed with the possible exception of the 200,000 to 700,000 record range when performance inexplicably, and consistently, dipped. However, from 800,000 to 2 million, performance experienced no such dip. Interestingly, standard bulk insert performance exceeded that of TVPs when the same data was used.

Recommendations

Since the code to implement the maximum-length string involves only one line and the performance monitoring and improvement benefits are substantial, the <data table name>.MaxLength = <n> statement should be included in data table load routines as shown by the highlighted line in Figure 2. <n> should be the maximum expected length of any data column. Limiting string length will allow C# TVP data loads to run more efficiently and also enable customers to conduct normal query performance monitoring.

********************************************

The code shown in this post can be downloaded from here….  SQLRX_TVP_CSharp_Code

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

List Index Columns

— By Lori Brown @SQLSupahStah

I was looking for a way to get info on the columns in indexes so that I could use the data to help enhance my index defrag routine. I found that my defrag logic did not figure out whether an index had included columns that are LOBs and was failing because it was trying to rebuild such indexes ONLINE. Of course this was happening on a SQL 2008 R2 Enterprise instance. Like it or not, I have a lot of clients with older versions of SQL that I support, so I try to account for things, but LOBs in an included column had been missed.

Blog_20160708_1

Ahhh…the good old days when SQL really started coming out of its shell.

Blog_20160708_2

Now we have new and more powerful SQL 2016. Progress!!

Anyway, I decided that it would be nice to have a query that would list index columns, complete with data type and in key order along with a way to tell if it is an included column. I added some ways to filter the query that I think are common ways to look at this type of data.

-- Returns index column info for the database you are connected to

SELECT s.name AS SchemaName,

o.name AS TableName,

i.name AS IndexName,

i.type_desc IndexType ,

c.name AS ColName,

t.name AS ColType,

ic.index_column_id,

ic.is_included_column

FROM sys.objects o

JOIN sys.schemas s ON (o.schema_id = s.schema_id)

JOIN sys.indexes i ON (o.object_id = i.object_id)

JOIN sys.index_columns ic ON (i.object_id = ic.object_id AND i.index_id = ic.index_id)

JOIN sys.columns c ON (i.object_id = c.object_id AND c.column_id = ic.column_id)

JOIN sys.types t ON (c.user_type_id = t.user_type_id)

WHERE o.type = 'U'

--AND o.name = '???' -- Uncomment to filter by table name

--AND o.object_id = 123456789 -- Uncomment to filter by object id

--AND (t.name IN ('text', 'ntext', 'image') OR c.max_length = -1) -- Uncomment this line for LOBs only

ORDER BY o.name, i.name, c.name, ic.index_column_id

Blog_20160708_3
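
As one illustration of how this information can feed a defrag routine, the same LOB test can guard the choice between an online and an offline rebuild. This is only a sketch with hypothetical table and index names; a real routine would loop over whatever fragmented indexes it finds:

-- Fall back to an offline rebuild when the index contains LOB or MAX-type columns
IF EXISTS (SELECT 1
           FROM sys.index_columns ic
           JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
           JOIN sys.types t ON t.user_type_id = c.user_type_id
           WHERE ic.object_id = OBJECT_ID('dbo.SomeTable')
           AND ic.index_id = 2
           AND (t.name IN ('text', 'ntext', 'image') OR c.max_length = -1))
    ALTER INDEX ix_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = OFF);
ELSE
    ALTER INDEX ix_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = ON);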

Even though I know that others have posted similar queries, I like to make my own. This works on SQL 2008 through SQL 2016. I hope someone finds this useful. Let me know of ways to improve it too!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL 2016 Temporal Tables

— By Lori Brown @SQLSupahStah

A temporal table is a table that holds old versions of rows from a base table. In other words it is a history table.  By having temporal tables SQL Server can automatically manage moving old row versions to the temporal table every time a row in the base table is updated.  The temporal table is physically a different table than the base table, but is linked to the base table.

If you have ever needed to make business decisions based on insights from data that has evolved, you might need temporal tables.

Reasons to use temporal tables are…

  • Auditing data changes and performing data forensics
  • Reconstructing state of the data as of any time in the past
  • Calculating trends over time
  • Maintaining a slowly changing dimension for decision support applications
  • Recovering from accidental data changes and application errors

Below I have created a base temporal table, designated the history table, and set system-versioning on, with the data delimited by the RecValidFrom & RecValidTo columns. These columns are equivalent to a start time and end time for each row version and are used to define the period during which the record is valid.

CREATE TABLE dbo.EmpTemporal

(ID INT PRIMARY KEY

,FirstName VARCHAR(30)

,LastName VARCHAR(50)

,Salary INT

,City VARCHAR(30)

,RecValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL

,RecValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL

,PERIOD FOR SYSTEM_TIME (RecValidFrom,RecValidTo)) WITH(SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmpTemporalHistory))

GO

Blog_20160630_1

Let's add some records and see what they look like in the base table.

INSERT INTO dbo.EmpTemporal(ID, FirstName, LastName, Salary)

VALUES (1, 'Lori', 'Brown', 50000)

,(2, 'Lucy', 'Lu', 50000)

,(3, 'Joe', 'Blow', 50000)

,(4, 'Bob', 'Barker', 50000)

,(5, 'Adam', 'West', 50000)

GO

 

SELECT * FROM dbo.EmpTemporal

GO

Blog_20160630_2

Now let’s make changes to the base table so that we can see how data changes over time…

UPDATE dbo.EmpTemporal     -- Lucy's last name is changed

SET LastName = 'Stewart'

WHERE ID = 2

 

UPDATE dbo.EmpTemporal     -- Lori got a raise

SET Salary = 55000

WHERE ID = 1

 

DELETE dbo.EmpTemporal     -- Bob Barker quit and was deleted

WHERE ID = 4

 

UPDATE dbo.EmpTemporal     -- Adam moved

SET City = 'West Memphis'

WHERE ID = 5

 

UPDATE dbo.EmpTemporal    -- Lucy also got a raise

SET Salary = 55000

WHERE ID = 2

We can query data in either the base table or the history table, or from both, using the FOR SYSTEM_TIME clause with a valid time range to filter on.

SELECT * FROM dbo.EmpTemporal

Blog_20160630_3

SELECT * FROM dbo.EmpTemporalHistory

Blog_20160630_4

SELECT * FROM dbo.EmpTemporal

FOR SYSTEM_TIME

BETWEEN '2016-01-01 00:00:00.0000000' AND '2016-04-01 00:00:00.0000000'

WHERE ID = 2

ORDER BY RecValidFrom

Blog_20160630_5

Limitations:

  • Temporal tables must have a primary key.
  • History table cannot have constraints (primary key, foreign key, table or column constraints).
  • INSERT and UPDATE statements cannot reference the SYSTEM_TIME period columns.
  • TRUNCATE TABLE is not supported while SYSTEM_VERSIONING is ON. You can disable it by simply setting SYSTEM_VERSIONING = OFF:

ALTER TABLE dbo.EmpTemporal SET (SYSTEM_VERSIONING = OFF)

  • Direct modification of the data in a history table is not permitted.
  • ON DELETE CASCADE and ON UPDATE CASCADE are not permitted on the current table.
  • Usage of replication technologies is limited.

More limitations can be found here…. https://msdn.microsoft.com/en-us/library/mt604468.aspx
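
Because the history table cannot be modified or dropped while versioning is on, cleaning up the example objects also takes an extra step; a minimal sketch using the tables created above:

-- Turn system-versioning off, then drop the current and history tables separately
ALTER TABLE dbo.EmpTemporal SET (SYSTEM_VERSIONING = OFF)
GO
DROP TABLE dbo.EmpTemporal
DROP TABLE dbo.EmpTemporalHistory
GO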

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

SQL 2016 COMPRESS and DECOMPRESS Functions

–By Lori Brown @SQLSupahStah

SQL Server 2016 has introduced a couple of new functions that can be very useful and save storage space to boot. COMPRESS and DECOMPRESS offer the benefit of compressing column data, thereby saving storage space. COMPRESS uses the GZIP compression algorithm and actually compresses data before it is stored in a column or variable, returning binary data. DECOMPRESS, combined with CAST, reverses the COMPRESS function and returns the decompressed version of any compressed data.

I’ll walk us through a simple example of using the new COMPRESS and DECOMPRESS functions using the AdventureWorks database. I first created both a compressed and uncompressed table to hold the exact same data so I can demonstrate the space savings using COMPRESS. Then, I loaded both tables with data from the person.person table in AdventureWorks.

 

CREATE TABLE PersonsCompressed (

FirstName nvarchar(50),

LastName nvarchar(50),

CompressedInfo varbinary(max))

GO

 

CREATE TABLE PersonsUNCompressed (

FirstName nvarchar(50),

LastName nvarchar(50),

UnCompressedInfo nvarchar(max))

GO

 

I made up a bunch of different length text values to put into the 3rd column in each table but here are my basic insert statements. I ran the insert statements many times until I had a significant amount of data in each table.

 

-- Load table with compressed data using Person.Person table in AdventureWorks

INSERT INTO PersonsCompressed (FirstName, LastName, CompressedInfo)

SELECT FirstName, LastName, COMPRESS('What a nice person they are!….') FROM Person.Person

INSERT INTO PersonsCompressed (FirstName, LastName, CompressedInfo)

SELECT FirstName, LastName, COMPRESS('highly optimistic person who likes to laugh a lot. Goofy. Intrigued by anything science and love intellectual conversations. Thinks on the logical side of things…..') FROM Person.Person

GO

 

-- Load table with UNcompressed data using Person.Person table in AdventureWorks

INSERT INTO PersonsUnCompressed (FirstName, LastName, UnCompressedInfo)

SELECT FirstName, LastName, 'What a nice person they are!….' FROM Person.Person

INSERT INTO PersonsUNCompressed (FirstName, LastName, UnCompressedInfo)

SELECT FirstName, LastName, 'highly optimistic person who likes to laugh a lot. Goofy. Intrigued by anything science and love intellectual conversations. Thinks on the logical side of things…..' FROM Person.Person

GO

 

After loading up my tables with a lot of records that are exactly the same except one has a compressed column this is what I see when I check how much space each table is taking:

Blog_20160616_1
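
The screenshot above does not show the command used; one simple way to produce the same comparison is sp_spaceused against each table:

EXEC sp_spaceused 'PersonsCompressed'
EXEC sp_spaceused 'PersonsUNCompressed'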

That is a space savings of over 60% which can be significant in larger tables!

If you query the tables you can see that the CompressedInfo column is not displayed as the text value that was put in since it was compressed and stored as binary data.

Blog_20160616_2

We have to DECOMPRESS the CompressedInfo column and CAST it to the correct data type to read it.

Blog_20160616_3
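
The query behind that result appears only as a screenshot, but it presumably takes this general form, casting back to the same character type that was compressed (a varchar literal in the inserts above):

SELECT FirstName, LastName,
       CAST(DECOMPRESS(CompressedInfo) AS varchar(max)) AS DecompressedInfo
FROM PersonsCompressed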

There are negatives to the new functions. While MSDN states that compressed columns cannot be indexed (https://msdn.microsoft.com/en-us/library/mt622775.aspx ), I was able to add the column as an included column in an index. And you will likely have to change existing code to use the new functions.

So, next I want to know how performance is affected when having to compress and decompress data in queries. I found that there is considerably more Duration and CPU used when using DECOMPRESS in a query. That seems logical to me since the CPU is where I would expect the work of decompressing data to happen.

Blog_20160616_4

Interestingly, when using COMPRESS while inserting records, performance was basically the same as inserting without compressing the data.

Blog_20160616_5

A good scenario to use compressed columns would be to use them for archived data with text columns or data in tables that are infrequently accessed.

Hopefully, my small examples will give you some valuable insight into what to expect if you want to use the new functions. Enjoy!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Finding overlapping times with the LAG Function

— by Jeffry Schwartz

Some functions in SQL are a little difficult to use and understand. I was recently using the LAG function to shed some light on a performance issue, so I thought I would show how I used it and the results it produced in a way that is easy to understand.  At one of our clients after evaluating SQL trace data, I noticed that the attention records (caused by problems with the application, connections, etc.) not only occurred around some non-yielding scheduler messages that had been appearing along with timeouts, but also that the attention records were overlapping, i.e., sometimes one began and then others occurred before the first one ended.  I’ve looked at the attention data from other clients and found that not only do they not overlap like this, they don’t last long enough to overlap, meaning that this is a special case.

To show clearly which ones overlap, I decided to use the lag function to compare the end time of an older one with the start time of the current one.  If the start time of the current record occurs before the end time of a previous record, the flag is set.  This way I don’t have to scrutinize the timestamps and compare them manually.  I partition by ServerName because I want to compare all those from the same instance and I want to order them by StartTime so when I compare adjacent records, they are in time order.  You can partition and order by anything you want.

Here is the query, which is generic enough to be understood easily.  I’ve included the timestamps themselves so you can see how it works.

SELECT StartTime, EndTime,

CASE WHEN StartTime < LAG(EndTime,1,0) OVER (PARTITION BY ServerName ORDER BY StartTime) THEN 1 ELSE 0 END AS [Overlap Lag 1],

CASE WHEN StartTime < LAG(EndTime,2,0) OVER (PARTITION BY ServerName ORDER BY StartTime) THEN 1 ELSE 0 END AS [Overlap Lag 2],

CASE WHEN StartTime < LAG(EndTime,3,0) OVER (PARTITION BY ServerName ORDER BY StartTime) THEN 1 ELSE 0 END AS [Overlap Lag 3],

CASE WHEN StartTime < LAG(EndTime,4,0) OVER (PARTITION BY ServerName ORDER BY StartTime) THEN 1 ELSE 0 END AS [Overlap Lag 4],

CASE WHEN StartTime < LAG(EndTime,5,0) OVER (PARTITION BY ServerName ORDER BY StartTime) THEN 1 ELSE 0 END AS [Overlap Lag 5],

LAG(EndTime,1,0) OVER (PARTITION BY ServerName ORDER BY StartTime) AS [lag EndTime1],

LAG(EndTime,2,0) OVER (PARTITION BY ServerName ORDER BY StartTime) AS [lag EndTime2],

LAG(EndTime,3,0) OVER (PARTITION BY ServerName ORDER BY StartTime) AS [lag EndTime3],

LAG(EndTime,4,0) OVER (PARTITION BY ServerName ORDER BY StartTime) AS [lag EndTime4],

LAG(EndTime,5,0) OVER (PARTITION BY ServerName ORDER BY StartTime) AS [lag EndTime5],

Duration / 1000.0 AS [Duration Secs], Duration / 60000.0 AS [Duration Minutes]

--, SPID, ClientProcessID, RowNumber,

--ServerName AS [Instance], LTRIM(ApplicationName) AS [Application],

--DBName AS [Database], HostName, NTUserName, LoginName AS [Login Name]

FROM dbo.SQLTraceReceiver WITH (NOLOCK)

INNER JOIN SQLStaticDatabaseInfo ON DatabaseID = [DBID]

WHERE SQLTraceReceiver.EventClass = 16 -- Attention

 

Here is the output:

Blog_20160526_1

A timestamp of 12:00:00 AM really means there was no previous record to lag to; the default value of 0 supplied to LAG converts to the base datetime of January 1, 1900, which displays as 12:00:00 AM.  I highlighted the overlaps to make them easier to see.  Obviously, you may not need to use LAG for time evaluations as I do, but this shows you how to compare values of adjacent records and report something about the comparison without having to display the actual lag values.  I left the [lag EndTime<n>] columns in the query so you could see them in action, but for the final query I would not include them and would add back the descriptive columns that are commented out.

LAG takes a little getting used to, but once you play with it a bit, as in this example, it makes perfect sense.  These functions came out in SQL Server 2012, so if you are using that version or higher, you are good to go.  Note:  I could have used LEAD to go the other way, i.e., to compare future records with the current record.
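If you want to experiment without trace tables, here is a small self-contained sketch with made-up data that shows the same overlap flag using LAG, along with the LEAD variant mentioned above. Omitting the default argument makes LAG and LEAD return NULL at the edges, so no 12:00:00 AM placeholder appears:

-- Three made-up events on one server; the second starts before the first ends.
DECLARE @Events TABLE (ServerName sysname, StartTime datetime, EndTime datetime);
INSERT INTO @Events VALUES
('SQL01', '2016-05-01 10:00:00', '2016-05-01 10:05:00'),
('SQL01', '2016-05-01 10:03:00', '2016-05-01 10:04:00'),
('SQL01', '2016-05-01 10:10:00', '2016-05-01 10:12:00');

SELECT StartTime, EndTime,
       CASE WHEN StartTime < LAG(EndTime) OVER (PARTITION BY ServerName ORDER BY StartTime)
            THEN 1 ELSE 0 END AS [Overlaps Previous],
       CASE WHEN EndTime > LEAD(StartTime) OVER (PARTITION BY ServerName ORDER BY StartTime)
            THEN 1 ELSE 0 END AS [Overlaps Next]
FROM @Events;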

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Using Ranking Functions to Partition and Order Unfamiliar Data

The Problem

How many times have you had to subdivide, group, rank, and organize data when you knew very little about the data itself except for the meanings of the columns? This dilemma is one that the members of the SQLRx team encounter frequently. When a customer requests help with a performance problem, we often do not know the number of databases, tables, or indices (or how busy any of them are) before we begin our analysis of the performance data. We also do not know beforehand which metrics will provide the most insight into a customer’s problems. The unknown nature of the data necessitates that we make no assumptions and therefore, look at all performance data.

 

To expedite our analyses, we developed Reporting Services routines that enable us to depict the data graphically and dynamically. This is particularly useful for processing data from the SQL Server DMVs, which can produce voluminous amounts of data. For example, a simple 12-hour collection can produce millions of data records that are written periodically and efficiently by our T-SQL scripts into flat files for post-processing off-site. To put the analysis problem into better perspective, it is instructive to consider how one might graph the single-record lookups metric for the 6,035 possible database/table/index combinations contained in the Index Operational DMV data used in this article. If every index were used during the data collection period and each graph contained 14 series, this would result in 432 graphs. Obviously, this is unmanageable. Unless some method is devised to place the most interesting series on the first graph or two, the graphs will be as useless as the number columns themselves. A member of our team, Jeff Schwartz, developed some T-SQL code that grouped the data by metric and any other subdivision entity, e.g., instance, database, file, table, or index, without resorting to cursors or dynamic SQL. The series were also ranked, with the largest N (specified by the user via the Reporting Services UI) maximums on the first graph, the next N series on the second graph, etc. The partitioning and ranking mechanism he devised, which employs the DENSE_RANK and ROW_NUMBER T-SQL ranking functions, is applicable to a variety of data subdivision and organization problems and is the basis for this article.

 

Initial Attempts and Experimentation

Although the NTILE function appeared to be the obvious choice, Jeff found that he could not obtain the desired metric partitioning for the graphs with this function because NTILE requires a fixed number of partitions. Some metrics had hundreds of unique series, whereas others had one, seven, or nine (see Table 1).  For some combinations, several graphs were created for a given metric, but each graph only had one or two series on it! For those with an intermediate number of combinations, NTILE divided the total number of series into equal portions. This meant that although the user specified a series-per-graph value such as seven, each graph would only contain five series (illustrated in Table 2 by the Leaf Ghosts records), thereby effectively ignoring the user specification. The query below, whose output is shown in Table 1, determines how many unique column combinations existed in the data for each metric.  Clearly, the number per metric varied greatly.  Please also note that the GroupID was assigned automatically based upon the ORDER BY clause within the DENSE_RANK function. This will become an integral part of the final partitioning strategy.

 

SELECT MetricName, DENSE_RANK() OVER(ORDER BY MetricName) AS GroupID, COUNT(*) AS [Count]

FROM MeasureBeforeUpdate

GROUP BY MetricName

ORDER BY MetricName

 

Blog_20160307_1

Method Comparisons

The next task was to reestablish the ordering for all the graphs with the worst ones on the first graph, the next worst on the second, etc.  After a fair amount of experimenting, Jeff hit upon the following combination that does the entire grouping, ranking, and splitting in one pass, as shown below (it also uses a CTE to simplify the coding and improve legibility).  The example below shows all the building blocks so you can see how the final query was derived. For the sake of comparison, ALL attempted ranking variations are shown in the query. The ones that proved useful, DR and RNDiv, are highlighted, along with the combined item named MeasureRank. To aid with interpretation, @I_GroupNbr specifies the number of series per graph provided by the user, @MaxCount holds the largest number of combinations (Count in Table 1), and @MaxGroups specifies the maximum number of possible groups for each metric. This value is computed earlier in the stored procedure to simplify the computations greatly and is established using the following formula: SET @MaxGroups = CEILING(@MaxCount * 1.0 / @I_GroupNbr). The DENSE_RANK function provided the metric groupings, and the ROW_NUMBER function partitioned the columns by metric and ordered them by the maximum value in descending order. The integral division maintains the relative ranking within a group. The query output is shown below.
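For context, before the query below runs, the supporting variables might be established along these lines. Only the @MaxGroups formula comes from the text; the DECLAREs and the @MaxCount derivation are assumptions about what the stored procedure does earlier:

-- Series per graph, supplied by the user via the Reporting Services UI.
DECLARE @I_GroupNbr int = 7;
DECLARE @MaxCount int, @MaxGroups int;

-- Largest number of column combinations for any single metric (the Count column in Table 1).
SELECT @MaxCount = MAX(Cnt)
FROM (SELECT COUNT(*) AS Cnt FROM MeasureBeforeUpdate GROUP BY MetricName) AS Combos;

-- Formula from the text: maximum number of possible groups (graphs) per metric.
SET @MaxGroups = CEILING(@MaxCount * 1.0 / @I_GroupNbr);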

WITH RankingList AS (

SELECT MsId,

@MaxGroupID + (DENSE_RANK() OVER (ORDER BY MetricName) - 1) * @MaxGroups +

NTILE(@MaxGroups) OVER (PARTITION BY MetricName ORDER BY MetricName)

AS [OldMeasureRank],

DENSE_RANK() OVER (ORDER BY MetricName) AS DR,

NTILE(@MaxGroups) OVER (PARTITION BY MetricName ORDER BY MetricName) AS NT,

ROW_NUMBER() OVER (PARTITION BY MetricName ORDER BY MaxVal DESC) AS RN,

((ROW_NUMBER() OVER (PARTITION BY MetricName ORDER BY MaxVal DESC) - 1) / @I_GroupNbr) + 1 AS RNDiv,

(ROW_NUMBER() OVER (PARTITION BY MetricName ORDER BY MaxVal DESC) - 1) % @I_GroupNbr AS RNMod,

@MaxGroupID + (DENSE_RANK() OVER (ORDER BY MetricName) -1) * @MaxGroups + ((ROW_NUMBER() OVER (PARTITION BY MetricName ORDER BY MaxVal desc) -1) / @I_GroupNbr) + 1 AS [MeasureRank]

FROM MeasureBeforeUpdate

)

SELECT GroupId = RankingList.MeasureRank, RankingList.*, MeasureBeforeUpdate.*

FROM MeasureBeforeUpdate

INNER JOIN RankingList ON RankingList.MsId = MeasureBeforeUpdate.Msid

Blog_20160307_2

Final Code and Sample Graphs

The final code compares the original partitioning code values with those of the new method. The biggest differences involve the replacement of NTILE with ROW_NUMBER and the addition of integral division. This combination automatically numbers the series for a given group and resets this number once it hits the maximum number of items to be shown on a Reporting Services page.  Since this is a parameter specified by the user, it now breaks the output up into N sections depending upon how many series are requested for each graph. Examples of the graphs with the number of series per graph specified to be seven are shown below the query. As highlighted in Table 1, only five Leaf Page Merges combinations existed, whereas there were 105 for Leaf Inserts. As a result, only one graph was needed for the first metric, but 15 were possible for the second. Since each graph set has its own ID (as displayed in the graph titles), it is trivial to terminate the graph generation process early by avoiding the calls for any group greater than a certain value. When returning large numbers of metrics, e.g., 40 for Index Operational statistics, being able to terminate graphing early, e.g., at two, can result in a reduction of more than 93 percent in graph generation duration. With this particular data, the total possible number of graphs was 907, whereas by stopping at only two graphs per metric, the total decreased to 61 (a sketch of this kind of cutoff follows the sample graphs below).

 

WITH RankingList AS (

SELECT MsId,

@MaxGroupID + (DENSE_RANK() OVER (ORDER BY MetricName) - 1) * @MaxGroups + NTILE(@MaxGroups) OVER (PARTITION BY MetricName ORDER BY MetricName) AS [OldMeasureRank],

@MaxGroupID + (DENSE_RANK() OVER (ORDER BY MetricName) -1) * @MaxGroups + ((ROW_NUMBER() OVER (PARTITION BY MetricName ORDER BY MaxVal DESC) -1) / @I_GroupNbr) + 1 AS [MeasureRank]

FROM MeasureBeforeUpdate

)

SELECT GroupId = RankingList.MeasureRank, RankingList.*, MeasureBeforeUpdate.*

FROM MeasureBeforeUpdate

INNER JOIN RankingList ON RankingList.MsId = MeasureBeforeUpdate.Msid

Blog_20160307_3

Blog_20160307_4

Blog_20160307_5
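As a sketch of the early-termination idea mentioned above, the same integral-division expression can simply be filtered so that rows belonging to graph groups beyond a cutoff are never returned to the report. The @MaxGraphsPerMetric parameter below is hypothetical:

DECLARE @I_GroupNbr int = 7;            -- series per graph
DECLARE @MaxGraphsPerMetric int = 2;    -- hypothetical cutoff: only the first two graphs per metric

WITH RankingList AS (
    SELECT MsId,
           ((ROW_NUMBER() OVER (PARTITION BY MetricName ORDER BY MaxVal DESC) - 1)
              / @I_GroupNbr) + 1 AS GraphNbrWithinMetric
    FROM MeasureBeforeUpdate
)
SELECT m.*
FROM MeasureBeforeUpdate AS m
INNER JOIN RankingList AS r ON r.MsId = m.MsId
WHERE r.GraphNbrWithinMetric <= @MaxGraphsPerMetric;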


Conclusion

This article illustrated the pros and cons of using various ranking functions, especially in combination, which is seldom shown in ranking code examples. It also offers a different perspective on the T-SQL window ranking functions and their use in solving everyday partitioning, ranking, and organization problems.

 

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!