Tag Archives: Performance Tuning

Query Tuning and Missing Index Recommendations

–By Jeffry Schwartz

Overview

Most analysts are familiar with missing index recommendations provided by SQL Server when query plans are displayed within SSMS or reported by various missing index DMVs. Several questions arise concerning these recommendations:

  1. What determines whether a column becomes a key or included in the new index?
  2. Where does the ordering of the included columns come from? Does column ordering in queries affect the recommendations, e.g., will query orderings of A, B, C and C, B, A in the selection column list result in one recommendation or two?
  3. How sophisticated are the missing index recommendations, i.e., does SQL Server compare recommendations and perform any kind of recommendation consolidation or optimization?

This article provides a reproducible example, built around six different queries that deliberately caused SQL Server to generate missing index recommendations, and answers these questions. The example is used to study missing index recommendations in detail, including how they relate to the underlying table and how query columns affect these recommendations. The article also illustrates how a single consolidated index can address the performance needs of all six queries.

Test Table Creation & Load

To determine missing index recommendation behavior, a generic table was constructed and filled with 20 million records. Each record contained an identity column, an ID column, a text column, and 47 metric columns whose values ranged between 1 and 10,000,000. The large number of table columns was used to ensure SQL Server would choose an index option when appropriate. Six queries that incorporated various column combinations were executed (some of which differed only in column ordering). To minimize duplication of column values and skewing of query plans, the ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000 formula was used to generate values that were as random as possible. Two indices were created: a clustered index that used the identity column as its only key and a nonclustered index that used DupID as its only key column. The scripts for the creation, loading, and initial indexing of the table are shown below.

-- ##############################################################

-- Create test table

-- ##############################################################

drop table FewDuplicates;

CREATE TABLE FewDuplicates (

IDCol bigint identity (20000000,1),

DupID bigint,

MyText varchar(10),

Metric01 bigint, Metric02 bigint, Metric03 bigint, Metric04 bigint, Metric05 bigint,

Metric06 bigint, Metric07 bigint, Metric08 bigint, Metric09 bigint, Metric10 bigint,

Metric11 bigint, Metric12 bigint, Metric13 bigint, Metric14 bigint, Metric15 bigint,

Metric16 bigint, Metric17 bigint, Metric18 bigint, Metric19 bigint, Metric20 bigint,

Metric21 bigint, Metric22 bigint, Metric23 bigint, Metric24 bigint, Metric25 bigint,

Metric26 bigint, Metric27 bigint, Metric28 bigint, Metric29 bigint, Metric30 bigint,

Metric31 bigint, Metric32 bigint, Metric33 bigint, Metric34 bigint, Metric35 bigint,

Metric36 bigint, Metric37 bigint, Metric38 bigint, Metric39 bigint, Metric40 bigint,

Metric41 bigint, Metric42 bigint, Metric43 bigint, Metric44 bigint, Metric45 bigint,

Metric46 bigint, Metric47 bigint

)

 

-- ##############################################################

-- Load original table

-- ##############################################################

declare @DupID bigint = 1

declare @NumRecs bigint = 20000000

 

truncate table FewDuplicates

set nocount on

while (@DupID <= @NumRecs)

begin

insert into [dbo].[FewDuplicates] (

[DupID], [MyText],

[Metric01], [Metric02], [Metric03], [Metric04], [Metric05], [Metric06], [Metric07],

[Metric08], [Metric09], [Metric10], [Metric11], [Metric12], [Metric13], [Metric14],

[Metric15], [Metric16], [Metric17], [Metric18], [Metric19], [Metric20], [Metric21],

[Metric22], [Metric23], [Metric24], [Metric25], [Metric26], [Metric27], [Metric28],

[Metric29], [Metric30], [Metric31], [Metric32], [Metric33], [Metric34], [Metric35],

[Metric36], [Metric37], [Metric38], [Metric39], [Metric40], [Metric41], [Metric42],

[Metric43], [Metric44], [Metric45], [Metric46], [Metric47]

)

VALUES (

@DupID, 'my text',

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000

)

set @DupID += 1

end -- end of while loop

set nocount off

 

-- ##############################################################

-- Create indices on the test table

-- ##############################################################

CREATE UNIQUE CLUSTERED INDEX [ci_RecID] ON [dbo].[FewDuplicates]

(

[IDCol] ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

CREATE NONCLUSTERED INDEX [ix_DupID] ON [dbo].[FewDuplicates]

(

DupID ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

Queries & Execution Plans with Missing Index Recommendations

The six queries all performed a range scan based upon DupID and Metric01. Clearly, the ix_DupID index could be used for the first portion of the where clause, but none of the existing indices could assist in the resolution of the second portion. Note: All queries used identical where clauses to ensure that the only differences among the queries were the columns requested in the select clause. These variations employed different combinations of the first six metric columns and included a variety of column orderings. All of the queries returned the same 809 rows. Note: Due to the random nature of the data, the reader will not obtain identical results, but they will be functionally similar. Each query was run separately after the following commands had been executed:

dbcc dropcleanbuffers with no_infomsgs

dbcc freeproccache with no_infomsgs

dbcc freesystemcache('TokenAndPermUserStore') with no_infomsgs

These commands clear all the pertinent caches to ensure reproducibility and prevent memory-resident portions of the database from skewing the results. As shown in Table 1, each query required approximately 90 seconds and roughly one million logical and physical reads to complete. The SSMS option for returning the actual execution plan was set prior to execution, and the six queries and execution plans are shown in the individual query sections below. The first five queries resulted in different recommendations, but the sixth query's plan and recommended missing index were identical to those of the fifth because the only difference between Queries #5 and #6 is the ordering of the columns (part of the answer to question #2). The differences among all the queries are summarized in Table 2 below.

Blog_20170601_T1

Table 1: Summary of Initial Query Executions & Timings

Blog_20170601_T2

Table 2: Summary of Query Selection Columns

Query #1

select [DupID], [MyText], [Metric01]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_1

Query #2

select [DupID], [MyText], [Metric02]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_2

Query #3

select [DupID], [MyText], [Metric03], [Metric06], [Metric04]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_3

Query #4

select [DupID], [MyText], [Metric04], [Metric06], [Metric02], [Metric01]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_4

Query #5

select [DupID], [MyText], [Metric01], [Metric03], [Metric05], [Metric02]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_5

Query #6

select [DupID], [MyText], [Metric05], [Metric02], [Metric01], [Metric03]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_6

Missing Index Recommendations Displayed via DMVs

The following query was used to provide some of the information enumerated in Table 3 below:

-- ##############################################################

-- Missing Index DMV Query

-- https://blogs.msdn.microsoft.com/bartd/2007/07/19/are-you-using-sqls-missing-index-dmvs/

-- ##############################################################

SELECT migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,

'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)

+ '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'

+ ' ON ' + mid.statement

+ ' (' + ISNULL (mid.equality_columns, '')

+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END

+ ISNULL (mid.inequality_columns, '')

+ ')'

+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,

migs.*, mid.database_id, mid.[object_id]

FROM sys.dm_db_missing_index_groups mig

INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle

INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle

WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10

ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

The following table lists the missing index recommendations as displayed by the DMVs. It also summarizes all of the missing index recommendations to make it easier to determine what a consolidated index might require. It is important to note that the missing index recommendations for Queries #5 and #6 are the same, just as they were in the query plan listings above. Although discussion of the calculation and interpretation of the Improvement Measure column is beyond the scope of this article, it should be noted that the total improvement measure for that shared recommendation was approximately twice that of any other index recommendation, and further investigation reveals that this index would satisfy both Queries #5 and #6.

Blog_20170601_T3

Table 3: Summary of All Missing Index Recommendations

Composite Index Recommendation for All Six Queries

Either Table 2 or Table 3 could be used to determine a composite index. In-depth investigation reveals that since DupID and Metric01 are specified in the where clauses as inequalities, these should be key columns in any index, and review of Table 3 highlights this for all index recommendations. Since MyText is used in all of the queries and Metric01 is already handled as a key column, the only remaining columns are Metric02, Metric03, Metric04, Metric05, and Metric06. An index that employs these columns as included columns can "cover" all the queries shown in this article. For further information about covering indices, please refer to the following article: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-columns. The composite (and covering) index is shown below:

CREATE NONCLUSTERED INDEX ix_ResolveMultipleQueriesNeeds ON [dbo].[FewDuplicates] (

[DupID],

[Metric01]

)

INCLUDE (

[MyText],

[Metric02],

[Metric03],

[Metric04],

[Metric05],

[Metric06]

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

Query Plans for Queries #1 – #6 with Composite Index Added

Once the combined index was implemented, each query was run as before: caches cleared and the SSMS actual execution plan option on. No query ran longer than one second, even when the caches had been purged. Although the queries were run individually for the purposes of obtaining accurate timings, they were also run together to show the similarities of the execution plans with the new index in place. Table 4 lists the execution results, and clearly, performance improved greatly for all queries. For example, durations decreased from approximately 90 seconds to about one second, and reads dropped from approximately one million to four thousand. Examination of the query plans shows that all six queries used the new index, which accounts for the dramatic improvement.
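Index usage can also be confirmed after the fact with the index usage DMV. The following is only a quick sketch of that check (seek counts should climb as the six queries are rerun); results will vary by system:

-- Confirm that the queries are seeking on the new composite index
SELECT i.name AS IndexName, s.user_seeks, s.user_scans, s.user_lookups
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
ON s.object_id = i.object_id AND s.index_id = i.index_id AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.FewDuplicates');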

Blog_20170601_T4

Table 4: Summary of Query Executions & Timings After Composite Index Added

Blog_20170601_7

Summary

Most readers probably already knew the answer to the first question: key columns are used to filter query results. The simplest example of this involves a where clause. Columns referenced by equalities are placed first, followed by columns that are used with inequalities. Some of the missing index DMVs actually differentiate between the types of relationships involved. The example queries show that table column ordering dictates included column order, thereby answering question #2. Since table column ordering is used to specify included column ordering, SQL Server WILL NOT duplicate missing index recommendations when the columns are identical, regardless of their ordering in the queries. The answer to the last question is that SQL Server does not appear to perform any consolidations or optimizations other than those cited in the answer to question #2. This knowledge is invaluable, particularly when evaluating missing index recommendations on production systems, because unless the queries that caused SQL Server to generate the missing index recommendations have been executed recently, there is little information to associate the actual queries with the recommendations. Understanding the factors that affect SQL Server's recommendations simplifies identifying the responsible queries and gives an analyst more certainty that the correct associations have been made.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Common Criteria Compliance and LCK_M_SCH_M

–By Lori Brown

If you have ever wondered what happens when you mistakenly enable Common Criteria compliance in SQL, well wonder no more!!

Blog_20170427_1

We have a client who has no idea how or when Common Criteria compliance was enabled on their production system. All they know is that performance has been slowly degrading. After collecting performance data, we found high LCK_M_SCH_M waits, which indicate schema modification locks that prevent access to a table while a DDL operation occurs. We also found blocked process records showing frequent waits on the LOGIN_STATS table in the master database. This table is used to hold login statistics. When there are a lot of logins and logouts, there can be contention on this table.
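If you want to see whether schema modification waits are piling up on your own system, a quick check of the wait stats DMV can help. This is just a sketch; it shows cumulative waits since the last restart or stats clear, not the waits for any single workload:

-- Check accumulated schema modification lock waits
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'LCK_M_SCH_M';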

When you enable Common Criteria compliance, something called Residual Information Protection (RIP) is enabled. RIP is an additional security measure for memory: it requires that a specific bit pattern be written to memory before that memory can be reallocated (overwritten) for a new resource or login. So with lots of logins and logouts, there is a performance hit because memory must be overwritten before it can be reused.

Keep in mind that if you enable Common Criteria compliance, you can run into slowdowns from both locking and memory. Make sure that your server is able to handle this well and that applications are designed to minimize the impact of frequent logins and logouts.

To disable Common Criteria compliance you can use sp_configure or the GUI.

sp_configure 'common criteria compliance enabled', 0

GO

RECONFIGURE

GO

However, it is not really disabled until you reboot the server (the MSDN documentation actually says to reboot the server). Restarting the instance alone will not work for this configuration change. I believe that this is because of the Residual Information Protection that secures memory: it stands to reason that without a reboot to flush memory, RIP is still doing its work and causing performance issues.
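To verify the setting, sys.configurations shows both the configured and running values. A quick check along these lines (remember that the memory behavior does not fully revert until the server is rebooted):

-- Check the configured vs. running value of the option
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'common criteria compliance enabled';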

Check out the documentation from MSDN on Common Criteria compliance here:

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/common-criteria-compliance-enabled-server-configuration-option

The biggest takeaway from this is to make sure to change your SQL configuration with good reason. Always know the side effects that can show up when you set any of the advanced configuration options.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

List Tables That May Be Over Indexed

— By Lori Brown

While not having enough indexes can be bad for query performance, having too many indexes can be just as bad. Use the query below to get a list of tables in your database that have more than 10 indexes.

-- Tables with large number of indexes

select t.name as TablesWithLargeNumInx, count(i.name) as CountIndexes

from sys.indexes i

inner join sys.tables t

on i.object_id = t.object_id

where i.index_id > 0

group by t.name

having count(i.name) > 10

If you suspect that you have too many indexes on your tables, you can also check the sys.dm_db_index_usage_stats dynamic management view to know if indexes on your heavily indexed tables are being used well. (Hint: seeks are good and scans are not so much)

select i.name as IndexName, u.user_seeks, u.user_lookups, u.user_scans

from sys.dm_db_index_usage_stats u

inner join sys.indexes i

on u.object_id = i.object_id and u.index_id = i.index_id

WHERE u.object_id = object_id('dbo.SomeTableName')

https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql

Query Tuning – Adding Included Columns To An Existing Index

— by Lori Brown @SQLSupahStah

For the most part, I stick to my daily DBA duties of making sure that SQL Server maintenance is running and that the proverbial lights are on in all my SQL boxes. However, sometimes it is fun to do some tuning and it’s even better when you have a success.

I have a SQL Server that hosts some Microsoft ERP databases. There was a problem with some slow performance and after collecting some performance data, I narrowed the root cause down to a view (vp_03400Exception1) that was used in a stored procedure. Out of the 15 million reads performed by the proc when I captured it, 12 million were directly from the offending view. In checking the execution plan for the view I found that a large table (APTran) was scanned in one spot and there were key lookups going on in other places (APTran & APDoc).

Bad stuff in bad plan…thick arrow indicates that almost 4 million records were scanned

Blog_20170323_1

Key lookups are not good (on small tables they might not matter, but on large tables they do) and should be eliminated if possible. Key lookups occur when you have an index seek against a table, but your query wants extra columns that are not in that index. This causes SQL Server to have to go back and get those extra columns. Lots of trips back to get extra columns results in slow performance. Of course, scans are not good either, so I wanted to try to get rid of that too.

Since the key lookups have an output list in the execution plan that lists which extra columns were needed, I took a look and simply added those columns as included columns on the existing indexes that were being used in the plan. In effect, I created a covering index using included columns. The included columns are only used when specific queries need them. If other queries use the index but do not actually need the included columns, those columns won't be pulled in.

Here are the modified indexes with the included columns.

Blog_20170323_4
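The screenshot shows the actual index definitions from the ERP database. As a generic illustration of the pattern only (the index and column names below are hypothetical, not the real APTran definition), rebuilding an existing index with included columns looks roughly like this:

-- Sketch only: hypothetical index and column names
CREATE NONCLUSTERED INDEX ix_APTran_Existing ON dbo.APTran
(
KeyColumn1, KeyColumn2
)
INCLUDE (ExtraCol1, ExtraCol2) -- columns taken from the key lookup's output list
WITH (DROP_EXISTING = ON);     -- replaces the existing index in place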

With the new columns included in the indexes, index seeks with no key lookups are now being performed.  The thick arrow has disappeared because instead of needing to scan the whole APTran table, it knows to return only 4 records.  This means that the rest of the query does not have to perform anywhere near as much work as it did before.  See the new execution plan below:

Much better stuff in more efficient plan…small arrow indicates that only 4 records were needed from APTran so much less work is being done.

Blog_20170323_5

Below you can see that overall the new plan uses less memory, is smaller and just plain old more efficient.

Blog_20170323_6

Boom!! Mic drop!!

Blog_20170323_7

You might be thinking that I should not celebrate since my query only returned 1 row. However, what if the results were hundreds or thousands or millions of rows? The work done by the old execution plan would be significant and the users would feel it. So, what’s the bottom line here? If you have queries accessing large tables, check execution plans to see if they are being accessed efficiently. Sometimes a small change to an existing index will work wonders. Sometimes you have to create a new index to make queries efficient. The execution plan might just tell you what it needs if you know where to look.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

SQL 2016 SP1 USE HINT

–By Lori Brown   @SQLSupahStah

After installing Service Pack 1 for SQL 2016, I ran across USE HINT and thought I would put out what I could find to document it. Here’s where you find info from Microsoft: https://msdn.microsoft.com/en-us/library/ms181714.aspx and here is a link for all of the bugs that are fixed in SP1: https://support.microsoft.com/en-us/kb/3182545

USE HINT ( hint_name ) Provides one or more additional hints to the query processor as specified by a hint name inside single quotation marks. Hint names are case-insensitive. USE HINT can be utilized without having to be a member of the sysadmin server role.

The following hint names are supported:

  • ‘DISABLE_OPTIMIZED_NESTED_LOOP’ Instructs the query processor not to use a sort operation (batch sort) for optimized nested loop joins when generating a query plan. Equivalent to trace flag 2340.
  • ‘FORCE_LEGACY_CARDINALITY_ESTIMATION’ Forces the query optimizer to use Cardinality Estimation model of SQL Server 2012 and earlier versions. Equivalent to trace flag 9481 or Database Scoped Configuration setting LEGACY_CARDINALITY_ESTIMATION=ON.
  • ‘ENABLE_QUERY_OPTIMIZER_HOTFIXES’ Enables query optimizer hotfixes (changes released in SQL Server Cumulative Updates and Service Packs). Equivalent to trace flag 4199 or Database Scoped Configuration setting QUERY_OPTIMIZER_HOTFIXES=ON.
  • ‘DISABLE_PARAMETER_SNIFFING’ Instructs query optimizer to use average data distribution while compiling a query with one or more parameters, making the query plan independent of the parameter value which was first used when the query was compiled. Equivalent to trace flag 4136 or Database Scoped Configuration setting PARAMETER_SNIFFING=OFF.
  • ‘ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES’ Causes SQL Server to generate a plan using minimum selectivity when estimating AND predicates for filters to account for correlation. Equivalent to trace flag 4137 when used with cardinality estimation model of SQL Server 2012 and earlier versions, and has similar effect when trace flag 9471 is used with cardinality estimation model of SQL Server 2014 or higher.
  • ‘DISABLE_OPTIMIZER_ROWGOAL’ Causes SQL Server to generate a plan that does not use row goal adjustments with queries that contain TOP, OPTION (FAST N), IN, or EXISTS keywords. Equivalent to trace flag 4138.
  • ‘ENABLE_HIST_AMENDMENT_FOR_ASC_KEYS’ Enables automatically generated quick statistics (histogram amendment) for any leading index column for which cardinality estimation is needed. The histogram used to estimate cardinality will be adjusted at query compile time to account for actual maximum or minimum value of this column. Equivalent to trace flag 4139.
  • ‘ASSUME_JOIN_PREDICATE_DEPENDS_ON_FILTERS’ Causes SQL Server to generate a query plan using the Simple Containment assumption instead of the default Base Containment assumption for joins, under the query optimizer Cardinality Estimation model of SQL Server 2014 or newer. Equivalent to trace flag 9476.
  • ‘FORCE_DEFAULT_CARDINALITY_ESTIMATION’ Forces the Query Optimizer to use Cardinality Estimation model that corresponds to the current database compatibility level. Use this hint to override Database Scoped Configuration setting LEGACY_CARDINALITY_ESTIMATION=ON or trace flag 9481.

The list of all supported USE HINT names can be queried using the dynamic management view sys.dm_exec_valid_use_hints. More about this view can be found here: https://msdn.microsoft.com/en-us/library/mt791356.aspx
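For example, the hint names supported on a given build can be listed with a simple query against the view:

-- List the USE HINT names available on this build
SELECT name FROM sys.dm_exec_valid_use_hints;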

Some USE HINT hints may conflict with trace flags enabled at the global or session level, or database scoped configuration settings. In this case, the query level hint (USE HINT) always takes precedence. If a USE HINT conflicts with another query hint or a trace flag enabled at the query level (such as by QUERYTRACEON), SQL Server will generate an error when trying to execute the query.

Example:

DECLARE @qty INT

SET @qty = 4

SELECT h.OrderDate, h.Freight, d.OrderQty, d.UnitPrice

FROM Sales.SalesOrderDetail d

JOIN Sales.SalesOrderHeader h ON (d.SalesOrderID = h.SalesOrderID)

WHERE d.OrderQty > @qty

OPTION (USE HINT('DISABLE_PARAMETER_SNIFFING', 'DISABLE_OPTIMIZED_NESTED_LOOP'));

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Deadlock Info From system_health Information

— by Lori Brown @SQLSupahStah

Periodically, all DBAs are going to run into code that fails due to a deadlock. Deadlocks happen when two or more sessions each hold a lock on a resource that the other needs, so neither can proceed. In short, something has to give, so SQL Server chooses one session as the deadlock victim and rolls its work back. Deadlocks can be avoided by designing your databases well and making queries as quick and lightweight as possible. The faster data is retrieved or locked and unlocked, the better.

But how do you figure out what is causing deadlocks if you suddenly run into them? An easy way is to use the information already captured by the default system_health Extended Events session that SQL Server runs automatically.

Here are queries to retrieve this information from SQL 2008 R2 and SQL 2012 and up.

-- get deadlock graphs in SQL 2008
SELECT CAST(event_data.value('(event/data/value)[1]', 'varchar(max)') AS XML) AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS event_data
       FROM ( -- Cast the target_data to XML
              SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE name = 'system_health'
                AND target_name = 'ring_buffer'
            ) AS Data -- Split out the Event Nodes
       CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData ( XEvent )
     ) AS tab ( event_data );

 

 

-- get deadlock graphs in SQL 2012 and up
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS XEvent
       FROM ( SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE s.name = 'system_health'
                AND st.target_name = 'ring_buffer'
            ) AS Data
       CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData ( XEvent )
     ) AS src;

Queries originally from Jonathan Kehayias (One super smart SQL guy!) https://www.simple-talk.com/author/jonathan-kehayias/
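As a side note, on SQL 2012 and up the system_health session also writes to an event file target, so deadlock reports that have already aged out of the ring buffer may still be available there. A sketch of reading them from the default files (adjust the file path to your instance's LOG directory if needed):

-- Read deadlock graphs from the system_health event files (SQL 2012 and up)
SELECT CAST(event_data AS XML).query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';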

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Table-Valued Parameter Performance Using C# Data Tables

The following testing and resulting data is a direct result of a recent issue that was discovered at one of our client sites. After we set up SQL monitoring that uses a SQL trace to capture long-running queries, certain processes suddenly began to lock up and could not complete. Initially we suspected everything except the SQL trace, until we finally turned off all trace captures and the problem went away. This was very concerning to us since either SQL trace or XEvents is the method we generally use to capture query performance. Without a fix, our performance tuning attempts would be severely hamstrung and slowed down. This problem had to be fixed yesterday!

Another side effect when things locked up was that despite having max server memory set, SQL would end up using substantially more memory than was allocated to it. At this client, developers were importing data from an application using C# and table-valued parameters, which should not have been an issue. After a lot of searching, we finally came across some comments by another SQL expert who mostly described our issue and recommended that any C# TVPs should have their columns defined, or SQL could lock up if either trace or XEvents completed events are captured. When we checked the C# code, we found that the string columns being imported were not defined, in an effort to make the import code flexible. We were able to definitively fix this very obscure problem with just one line of C# code that is in the post below. The end result is a FASTER import of data that can be captured in trace or XEvents. Nice!!

Since we had to put together this fix for a very undocumented issue, we sincerely hope that any DBA or developer who runs into this problem can correct it much faster than we did.  A link to download a zip file of all code displayed in this post is included at the end.  Enjoy!

********************************************

Table-Valued Parameter Performance Using C# Data Tables

— By Jeff Schwartz

Overview

Much has been written about the use of table-valued parameters (TVP) and their performance. These narratives appear to be contradictory because some state that TVPs should be used only for small datasets, whereas others assert that TVPs scale very well. Adding to the confusion is the fact that most of the performance-related discussions focused on TVP use within T-SQL. Many of these examples employ either SQL trace or Extended Events (XEvents) to evaluate performance, but almost none of them discusses what can occur when C# data tables and SQL trace or XEvents are used simultaneously. This paper details the analysis and resolution of a situation that occurred at a customer site when an existing C# application suddenly began to run extremely slowly after a very lightweight SQL trace, which used a several-minute duration threshold and captured no frequent events, was activated.

Initial Research

Preliminary research indicated that under very specific conditions the combination of TVPs and SQL trace could result in unintended performance consequences, e.g., excessive SQL Server memory consumption to the point where SQL Server exhausted all memory on the server regardless of the maximum memory setting value. The research also suggested that string usage and lengths in unspecified locations might have an impact on this issue. The question of whether TVPs might have some record-count performance threshold above which performance would degrade was also examined. Although bulk insert was not used by the application, the research team also decided to compare bulk and TVP insertion speeds because that is another source of conflicting information.

Since very little literature exists that discusses the combination of TVPs and SQL trace, the only recourse involved creating test frameworks, performing actual data loads, and using internal application timers in conjunction with either SQL trace or XEvents to monitor performance.

Test Frameworks

1.Two frameworks were constructed: T-SQL-only and C# calling a T-SQL stored procedure.

a. The T-SQL-only test loaded data into a TVP, which then called a stored procedure to perform the insertion into the final table.

b. The development team provided C# code that illustrated how the user application read a flat file into a C# data table, which was then passed as a TVP to a T-SQL stored procedure. This code was followed as closely as possible in the test framework.

2. A SQL Server table containing data captured from the sys.dm_db_index_operational_stats Dynamic Management View (DMV) supplied the data for testing because

a. The data table contained approximately 3.2 million records, thereby being large enough to stress the application code and SQL Server adequately. The largest data file provided by the application team was approximately 1.6 million records, so the testing could push well beyond current record levels.

b. Each data record contained 47 columns, which was wider in terms of data columns than any application table being loaded.

c. No string value columns existed in the data. This ensured that user string data could not affect any interaction between the C# application and SQL Server, and that any string-related behavior was strictly due to the manner in which the application loaded the data table or the TVP, or the way the TVP parameter-passing was recorded in the SQL trace. All data columns were smallint, int, bigint, or date. This also ensured exact record lengths so that any record count-related behavior was independent of record size. Some of the data files being loaded by the development team contained only fixed-width data types, whereas others contained at least a few variable-length string columns.

3.The frameworks were designed to load various numbers of records using the same starting point to determine whether a performance threshold existed for TVPs, regardless of whether SQL trace (or XEvents) was active, as well as to determine whether even small numbers of records could be processed with the SQL trace (or XEvents) active.

4. Thirty-one different record levels, shown in Table 1, were used for testing to ensure that any record-count gaps were small. A separate CSV file of each length was created from the data cited in #2, always beginning with the first record.

Blog_20160901_1

Table 1: Record Counts used in Testing

5.Several methods of loading the flat file CSV data into C# data tables were developed and tested. The C# data table configurations were as follows:

a. Unlimited string length columns (initially used in the production C# application) – herein noted as NoStringMax (String #N/A on graphs)

b. Fixed length string columns of at most 30, 100, 500, or 1000 characters each – herein noted as StringMax (String <#> on graphs). Note: the development team stated that no column would exceed 1,000 characters in length.

c. Exact data type mapping so that the data table column types matched those of the receiving SQL Server table exactly – herein noted as Map

d. A common routine was used by options a, b, and c above to send the data table to SQL Server using a TVP.

e. The section of code that loaded the data table from the flat file was timed separately from the routine cited in d, which also was surrounded by its own timers. This enabled comparison of data load, TVP passing, and combined times.

f. The name of the CSV file is passed in as a parameter along with an adjusted version of the file name so the appropriate test identifiers can be written to the application log file for later analysis.

6.The tests were run with the following SQL Server monitoring options:

a. No SQL Server monitoring at all.

b. SQL trace monitoring including all normally captured events except for completed ones. These were handled as follows:

i.Exclude all completed events

ii.Include ad-hoc and batch completed events with a two-second duration threshold

iii.Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

c. SQL Extended Events (XEvents) including all normally captured events except for completed ones. These were handled as follows:

i.Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

7.All tests were performed on the same server to eliminate network and hardware variability. This server had a quad-core Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz processor, 32 GB of RAM, and a 500 GB SSD HD.

8.Various combinations of cold and warm SQL Server caches, cold and warm Windows caches, and SQL Server maximum memory limit sizes were tested. Some of the earlier attempts will be discussed in the Results section below. The final specifications were as follows:

a. No applications other than SQL Server, SSMS, and the C# application were active on the test server.

b. SQL Server’s maximum memory limit was set to 12 GB. This allowed SQL to go over its limit without distorting overall system behavior by exhausting server memory entirely.

c. Hot Windows file cache, i.e., a full run that loaded ALL of the flat file data into RAM was performed before the timing tests were run. This ensured that the HD was involved minimally.

d. Hot SQL Server buffer cache, achieved by running the next series soon after the last run completed (with no intervening runs), and by running each sequence four times in immediate succession.

C# Code

The NoStringMax routine cited in #5a in the previous section is shown in Figure 1.

Blog_20160901_2

Figure 1: Test C# Data Table Load Routine Using No String Length Maximums

The StringMax routine cited in #5b in the previous section is shown in Figure 2. The ONLY difference between this routine and the previous one is the addition of the MaxLength assignment that is highlighted below. This single line sets the maximum length of each data table column to the value of the iMaxStringLen parameter. As cited in #5b, this value ranged from 30 to 1,000 during testing.

Blog_20160901_3

Figure 2: Test C# Data Table Load Routine Using String Length Maximums

The Map routine cited in #5c in the previous section is shown in Figure 3 through Figure 6. Logically, this code is identical to that of #5a and #5b, but the individual column mappings make the routine much longer. Note: since no strings are used for the data table columns, the MaxLength parameter is unnecessary.

Blog_20160901_4

Figure 3: Test C# Data Table Load Routine Using Data Type Mapping – Part 1

Blog_20160901_5

Figure 4: Test C# Data Table Load Routine Using Data Type Mapping – Part 2

Blog_20160901_6

Figure 5: Test C# Data Table Load Routine Using Data Type Mapping – Part 3

Blog_20160901_7

Figure 6: Test C# Data Table Load Routine Using Data Type Mapping – Part 4

The routine cited in #5d in the previous section is shown in Figure 7.

Blog_20160901_8

Figure 7: C# Test Routine that Sends Data Table to SQL Server Using a TVP
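For reference, the T-SQL side of a TVP load like the one in Figure 7 consists of a user-defined table type and a stored procedure that accepts it as a READONLY parameter. The sketch below uses hypothetical object and column names, trimmed to three columns for brevity, and is not the actual test schema:

-- Sketch only: hypothetical names, not the test framework's real objects
CREATE TYPE dbo.MetricRowType AS TABLE (
RecID bigint,
Metric01 bigint,
Metric02 bigint
);
GO
CREATE PROCEDURE dbo.InsertMetricRows
@Rows dbo.MetricRowType READONLY
AS
BEGIN
SET NOCOUNT ON;
-- Insert the rows passed in through the TVP into the target table
INSERT INTO dbo.MetricTarget (RecID, Metric01, Metric02)
SELECT RecID, Metric01, Metric02
FROM @Rows;
END
GO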

Results

1.The T-SQL code alone, cited in Test Frameworks #1a did not recreate the problem observed at the customer site when either SQL trace or Extended Events (XEvents) were used, so additional testing was unnecessary.

2.No problems occurred when the C# application was used in conjunction with SQL trace or XEvents as long as no completed events were captured. Therefore, capturing only specific trace events created the problems.

3.Further research showed that when the C# code was running, adding ad-hoc or batch completed events to SQL traces or XEvent sessions caused issues. Adding completed statement events did not change things appreciably.

4.Interestingly, the extreme misbehavior was triggered by the combination of using NoStringMax C# code and having SQL trace or XEvents capture ad-hoc or batch completed events. Although the StringMax and Map routines ran a little more slowly for certain record count levels when SQL trace or XEvents captured ad-hoc or batch completed events, the dramatic memory consumption issues did not occur at all when these methods were used.

5.Initially, testing employed a 28 GB maximum SQL Server memory setting, but as occurred at the customer site, when the problem arose, SQL Server consumed all the memory on the server, which caused the server fundamentally to stop working. Using this setting and the NoStringMax C# code, CSV files with record counts up to 25,000 could be processed without taking hours, failing, or causing the server to run completely out of memory. However, the application could not reach the 50,000 record level.

6. Since testing needed to process MUCH higher numbers of records, the decision was made to reduce SQL Server memory to 4 GB in an attempt to provide additional memory space for SQL Server memory overflow. Another reason for lowering the SQL Server memory allocation was to ensure that the tests consumed all of SQL Server's buffer pool and that SQL Server had to operate against memory limits. This consideration, as well as the run-time variations, necessitated the four successive iterations for each combination.

7.Unfortunately, using the 4 GB setting, the lower-end NoStringMax C# code runs while capturing SQL trace ad-hoc or batch completed events caused application failures at approximately the 4,000 or 8,000 record levels. This clearly indicated an increase in SQL Server memory was required, so it was increased to 8 GB. Although the runs processed more records, they still crashed well before the 25,000 record level. Finally, moving the memory limit up to 12 GB enabled everything to run as before, effectively mimicking the 28 GB testing without running the risk of exhausting Windows memory.

8.Figure 8 highlights the NoStringMax C# code runs through the 25,000 record level while SQL trace or XEvents ad-hoc or batch completed events were being captured. It is evident that the run times were unacceptable. When the same test levels are viewed in Figure 9, the problem becomes abundantly clear. With SQL trace or XEvents capturing ad-hoc or batch completed events, the run times for the 10,000 record tests using the NoStringMax C# code ranged between 483 and 584 seconds. When ad-hoc or batch completed events were not captured, the run times were approximately 0.14 SECONDS! The values for all other tests, including NoStringMax C# code with SQL trace and XEvents ad-hoc or batch completed events off, are shown in Figure 9.

Blog_20160901_9

Figure 8: TVP Load Timing when Trace On or XEvents on

9.Although many lines appear in Figure 9, three groupings exist and these are the most important. Group 1, which consists of dotted lines, includes all tests during which ad-hoc or batch completed events were not captured. Group 2, which consists of solid lines, includes all tests during which SQL trace captured ad-hoc or batch completed events. Group 3, which consists of dashed lines, includes all tests during which XEvents captured ad-hoc or batch completed events. Important note: The NoStringMax runs shown in Figure 8 are NOT in Figure 9 because of scaling considerations. Figure 9 highlights several notable issues. Most importantly, once a maximum string length is specified, performance improves even when SQL trace or XEvents ad-hoc or batch completed events are not captured. In addition, the terrible performance problems go away. Interestingly, the behaviors of the various StringMax and Map runs were almost identical through approximately 175,000 records and then again at about 800,000 records. In between, unexplained divergence occurs, but it is only a few seconds and occurred primarily between the 200,000 and 700,000 record levels. The pattern shown in Figure 9 was repeated in every test sequence conducted. It is noteworthy that the StringMax and Map versions of the routine outperformed the NoStringMax under all comparable trace or XEvent-related conditions, and at higher volumes, even when completed events were captured.

Blog_20160901_10

Figure 9: TVP Load Timing when Trace/XEvents off OR Trace or XEvents on and Mapped Data Table or Limited-Length String Values Used

10. Once the number of records exceeded two million, the load of the C# data table began to crash due to application memory exhaustion. Note: neither Windows nor SQL Server memory was consumed excessively during this phase, so the limitation was strictly application-related.

11.Figure 10 summarizes the overall insertion rates of all previously cited methods at the two million record level, in addition to the bulk insert method. This graph shows clearly that the worst performer was the unlimited string method, and that the best one was the bulk insert method. Note: these rates exclude the reading of the flat file into the C# data table as well as the creation of the file that was suitable for bulk insertion.

Blog_20160901_11

Figure 10: TVP versus Bulk Insert Records per Second

Conclusions

The use of unlimited-length C# strings with data tables not only performs worse when no SQL trace or XEvents ad-hoc or batch completed events are being captured, it performs dreadfully when they are. Their use can cause poor overall server performance if the server's memory is exhausted. Finally, their use prevents the ability to monitor SQL Server query performance using either SQL trace or XEvents. However, when a maximum string length is specified, regardless of its length, performance without SQL trace or XEvents improved and the problematic interaction with SQL trace or XEvents was mitigated almost completely. Since the Map method is inefficient and limiting from an application development perspective, and its performance was not substantially better than that of limited-length strings, its use does not appear to be beneficial.

No table-valued parameter scalability issues were observed with the possible exception of the 200,000 to 700,000 record range when performance inexplicably, and consistently, dipped. However, from 800,000 to 2 million, performance experienced no such dip. Interestingly, standard bulk insert performance exceeded that of TVPs when the same data was used.

Recommendations

Since the code to implement the maximum-length string involves only one line and the performance monitoring and improvement benefits are substantial, the <data table name>.MaxLength = <n> statement should be included in data table load routines as shown by the highlighted line in Figure 2. <n> should be the maximum expected length of any data column. Limiting string length will allow C# TVP data loads to run more efficiently and also enable customers to conduct normal query performance monitoring.

********************************************

The code shown in this post can be downloaded from here….  SQLRX_TVP_CSharp_Code

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQLRx Presenting at SQL Saturday Baton Rouge 2016

SQLRx (@SQLRX) is proud to announce that two of our resident SQL Experts will be presenting sessions at SQL Saturday #515 in Baton Rouge, LA on August 6, 2016. If you have not signed up to attend, you are going to miss out!!

Blog_20160804_4

http://www.sqlsaturday.com/515/eventhome.aspx

Lori Brown (@SQLSupahStah) will give a lesson on how to be an Actively Proactive DBA. Come have fun and learn at her session, Actively Proactive – Straightforward Ways to Manage and Diagnose SQL. Attendees will be taken through the things a proactive DBA does and will also be treated to some quick ways to troubleshoot like a SQL Super Star!

Blog_20160804_1

Scripts will be given out to all who download the presentation materials. Come and get ‘em!!

Blog_20160804_2

Jeff Schwartz will be presenting Rx for Demystifying Index Tuning Decisions. Jeff will cover ways to find poorly performing queries and tune them using various methods, including execution plans, usage statistics, and missing indexes.

Blog_20160804_3

Since Jeff has been tuning things for almost 40 years, he can show you some things that will be extremely useful and will help you tune like a pro.

If you are in Baton Rouge this weekend come by and visit with us! SQLRx is a sponsor of SQL Saturday Baton Rouge 2016 and will have a booth where you can come sit down with Lori or Jeff and talk tech. We will also be handing out goodies as well as giving away a 2 TB external drive at the end of the day. Also, one lucky attendee will win a free Ultra Light Performance Assessment from SQLRx at one of our presenter’s sessions. We will tell that lucky winner about their SQL performance for free….Winner, Winner, Chicken Dinner!!

Hope to see you in Baton Rouge!

SQL 2016 COMPRESS and DECOMPRESS Functions

–By Lori Brown @SQLSupahStah

SQL Server 2016 has introduced a couple of new functions that can be very useful and save storage space to boot. COMPRESS and DECOMPRESS offer the benefit of compressing column data, thereby saving storage space. COMPRESS uses the GZIP compression algorithm to compress data before it is stored in a column or variable and returns binary data. DECOMPRESS, combined with CAST, reverses the COMPRESS function and returns the decompressed version of any compressed data.
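As a quick illustration of the round trip (a minimal sketch only; note that for very short strings the GZIP overhead can actually make the compressed value larger than the original):

-- COMPRESS returns varbinary(max); DECOMPRESS reverses it and CAST restores the text
SELECT COMPRESS('SQLRx') AS CompressedValue,
CAST(DECOMPRESS(COMPRESS('SQLRx')) AS varchar(10)) AS RoundTripValue;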

I’ll walk us through a simple example of using the new COMPRESS and DECOMPRESS functions using the AdventureWorks database. I first created both a compressed and uncompressed table to hold the exact same data so I can demonstrate the space savings using COMPRESS. Then, I loaded both tables with data from the person.person table in AdventureWorks.

 

CREATE TABLE PersonsCompressed (

FirstName nvarchar(50),

LastName nvarchar(50),

CompressedInfo varbinary(max))

GO

 

CREATE TABLE PersonsUNCompressed (

FirstName nvarchar(50),

LastName nvarchar(50),

UnCompressedInfo nvarchar(max))

GO

 

I made up a bunch of different length text values to put into the 3rd column in each table but here are my basic insert statements. I ran the insert statements many times until I had a significant amount of data in each table.

 

-- Load table with compressed data using Person.Person table in AdventureWorks

INSERT INTO PersonsCompressed (FirstName, LastName, CompressedInfo)

SELECT FirstName, LastName, COMPRESS('What a nice person they are!....') FROM Person.Person

INSERT INTO PersonsCompressed (FirstName, LastName, CompressedInfo)

SELECT FirstName, LastName, COMPRESS('highly optimistic person who likes to laugh a lot. Goofy. Intrigued by anything science and love intellectual conversations. Thinks on the logical side of things.....') FROM Person.Person

GO

 

-- Load table with UNcompressed data using Person.Person table in AdventureWorks

INSERT INTO PersonsUnCompressed (FirstName, LastName, UnCompressedInfo)

SELECT FirstName, LastName, 'What a nice person they are!....' FROM Person.Person

INSERT INTO PersonsUNCompressed (FirstName, LastName, UnCompressedInfo)

SELECT FirstName, LastName, 'highly optimistic person who likes to laugh a lot. Goofy. Intrigued by anything science and love intellectual conversations. Thinks on the logical side of things.....' FROM Person.Person

GO

 

After loading up my tables with a lot of records that are exactly the same, except that one table has a compressed column, this is what I see when I check how much space each table is taking:

Blog_20160616_1

That is a space savings of over 60% which can be significant in larger tables!
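A comparison along these lines can be reproduced with sp_spaceused (a sketch; the screenshot above may have been generated with a different method):

-- Compare the storage used by the two tables
EXEC sp_spaceused 'PersonsCompressed';
EXEC sp_spaceused 'PersonsUNCompressed';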

If you query the tables you can see that the CompressedInfo column is not displayed as the text value that was put in since it was compressed and stored as binary data.

Blog_20160616_2

We have to DECOMPRESS the CompressedInfo column and CAST it to the correct data type to read it.

Blog_20160616_3
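The query in the screenshot follows this pattern (a sketch, assuming the varchar literals used in the inserts above; use nvarchar(max) if Unicode data was compressed):

-- CAST the DECOMPRESSed binary back to its original character type
SELECT FirstName, LastName,
CAST(DECOMPRESS(CompressedInfo) AS varchar(max)) AS Info
FROM PersonsCompressed;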

There are negatives to the new functions. While MSDN states that compressed columns cannot be indexed (https://msdn.microsoft.com/en-us/library/mt622775.aspx ), I was able to add the column as an included column in an index. And you will likely have to change existing code to use the new functions.

So, next I want to know how performance is affected when having to compress and decompress data in queries. I found that there is considerably more Duration and CPU used when using DECOMPRESS in a query. That seems logical to me since the CPU is where I would expect the work of decompressing data to happen.

Blog_20160616_4

Interestingly, when using COMPRESS and inserting records it was basically the same as inserting without compressing the data.

Blog_20160616_5

A good scenario to use compressed columns would be to use them for archived data with text columns or data in tables that are infrequently accessed.

Hopefully, my small examples will give you some valuable insight into what to expect if you want to use the new functions. Enjoy!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Correlate Performance Spikes with SQL Profiler and Perfmon

–by Ginger Keys

Have you ever watched performance metrics in Performance Monitor (Perfmon) and wondered which SQL Server activities were causing those spikes? SQLRx offers a Microsoft Partner-of-the-Year award-winning service that will correlate all performance metrics on your server with very little overhead, and will pinpoint with precision, right down to a line of code or even a network switch, what is bottlenecking your system. A real live person will then document and advise you exactly how to remedy the problem, and will walk you through each step.

However, if you don’t need a comprehensive analysis and simply want to find which TSQL statements are causing performance spikes, or want to observe a single TSQL statement and how it affects server resources, you can do this using SQL Profiler in conjunction with Performance Monitor (Perfmon). Both Profiler and Perfmon provide good information about your server, and being able to correlate the two will help you find the causes of performance spikes.

SQL Profiler is being deprecated, but it will remain available for a while longer; the specific version of SQL Server in which it will be removed has not been announced. Profiler is being replaced by Extended Events, and currently there is no straightforward way to use Extended Events to correlate performance spikes with Perfmon.

In order to correlate performance metrics with SQL activities, we will follow these steps:

  1. Create Profiler Trace
  2. Create Perfmon Data Collector Set
  3. Simulate High CPU / or Run your TSQL Statement(s)
  4. Analyze

*Warning: This article is intended to show you how to use the tools…it is highly advised not to do this on your production server.

Create Profiler Trace

Open SQL Server Management Studio (SSMS) and connect to your instance. On the menu bar, go to Tools and click on SQL Server Profiler to get started. SQL Profiler puts stress on your server, so capture only the events and columns needed to analyze the activity in order to minimize the additional workload.

We want to capture TSQL statements that might be causing performance spikes, so we will look at stored procedures, batches of queries, and individual queries. We will then filter the trace for statements that take longer than a certain Duration (e.g., 5000 milliseconds) to exclude normally fast-running statements.

In the example below, I will use a Blank template, specify the events and columns needed, and save the results to file. Make sure you have enough space on your disk to hold the collected data.

Blog_20160519_1

Go to the Events Selection tab, and select these Events:

  • Showplan XML
  • RPC:Completed
  • SP:StmtCompleted
  • SQL:BatchCompleted
  • SQL:BatchStarting

And include these Columns

  • ApplicationName
  • CPU
  • DatabaseName
  • Duration
  • EndTime
  • EventSequence
  • IntegerData

 

  • LoginName
  • ObjectName
  • Reads
  • SPID
  • StartTime
  • TextData
  • Writes

Blog_20160519_2

Then create a Filter on

  • Duration (experiment with different times: start at 1000ms, and go to 5000 or 10000 if necessary)
  • Database name, if desired
  • Group By : Duration

Blog_20160519_3

Create Perfmon Data Collector Set

While SQL Profiler is collecting the TSQL statements being executed, we also want to collect several performance metrics for the system. From Administrative Tools, open Performance Monitor (Perfmon). Drill down to Data Collector Sets > User Defined, then right-click and select New Data Collector Set.

Blog_20160519_4

Blog_20160519_5

Blog_20160519_6

Add the following Counters:

  • Memory:Available Mbytes
  • PhysicalDisk:% Idle Time (look at this as the opposite of ‘busy’ time)
  • PhysicalDisk:Avg Disk sec/Read
  • PhysicalDisk:Avg Disk sec/Write
  • Processor:% Processor Time (for each processor)
  • SQLServer:Buffer Manager: Page Life Expectancy

You can add as many performance metrics as you deem necessary to view your environment, but these are some basic memory, disk, and CPU metrics.
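As an aside, the SQL Server counters in the list above can also be spot-checked from T-SQL through sys.dm_os_performance_counters, which is handy for a quick sanity check (a minimal sketch):

-- Quick check of Page Life Expectancy without opening Perfmon
SELECT [object_name], [counter_name], cntr_value
FROM sys.dm_os_performance_counters
WHERE [counter_name] = 'Page life expectancy'
AND [object_name] LIKE '%Buffer Manager%';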

Set the sample interval to 3 seconds to smooth out one-second spikes. Save the output to a file on a disk with enough space to hold the collected data.

Simulate High CPU Usage

A few pointers before you begin…start SQL Profiler and Perfmon, making sure to start and stop both about the same time. If you are able to run these on a computer other than the server you are monitoring, that is preferred. If not, just be mindful that Profiler creates its own performance stress, so don’t do this if your system is in a critical state. Whatever you do, DO NOT DO THIS ON A PRODUCTION SYSTEM!!

If you already have specific times of the day when you know your server’s resources are peaking, run these tools during that time. Be sure to monitor the size of the data files output so that you don’t consume too much space on your drive.

If you already have some queries you want to test against the performance of your system, you can start and stop the tools during the time you are testing your queries.

If you want to artificially stress resources on your system for practice purposes, there are many TSQL statements available online to stress CPU and memory. Here are a few statements you can run to stress CPU:

-- Query to Keep CPU Busy for 30 Seconds

DECLARE    @T DATETIME, @F BIGINT;

SET @T = GETDATE();

WHILE DATEADD(SECOND,30,@T)>GETDATE()

SET   @F=POWER(2,30);

 

-- Query to Keep CPU Busy for 60 Seconds

DECLARE   @T DATETIME, @F BIGINT;

SET @T = GETDATE();

WHILE DATEADD(SECOND,60,@T)>GETDATE()

SET   @F=POWER(2,30);

 

-- Query to stress CPU

SELECT SUM(CONVERT(BIGINT, o1.object_id)

+ CONVERT(BIGINT, o2.object_id)

+ CONVERT(BIGINT, o3.object_id)

+ CONVERT(BIGINT, o4.object_id))

FROM sys.objects o1

CROSS JOIN sys.objects o2

CROSS JOIN sys.objects o3

CROSS JOIN sys.objects o4

 

-- Another Query to stress CPU

SELECT * INTO #columns

FROM sys.columns AS C

 

DECLARE @RCNT INT

SET @RCNT = 0

SET NOCOUNT ON;

WHILE @RCNT = 0

BEGIN

UPDATE #columns

SET column_id = 0

WHERE ISNUMERIC(CONVERT(FLOAT, CONVERT(NVARCHAR(20), CONVERT(VARCHAR(20),  column_id)))) = 0

SELECT @RCNT = @@ROWCOUNT

END

 

-- And another Query to stress CPU using the AdventureWorks database

USE AdventureWorks2014

GO

DECLARE @Flag INT

SET @Flag = 1

WHILE(@Flag < 1000)

BEGIN

ALTER INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]

ON [Sales].[SalesOrderDetail] REBUILD

SET @Flag = @Flag + 1

END

GO

 

I started the SQL Profiler trace and the Perfmon data collector. After executing the queries above, Task Manager shows that CPU usage spiked from around 30% up to around 85%.

Blog_20160519_7

Analyze and Correlate the Results

In order to analyze the results, both sets of data (the Profiler trace and the Perfmon collection) need to be loaded into SQL Profiler:

Open the Profiler Trace (trc) file from where the output was saved.

Blog_20160519_8

Then click on File > Import Performance Data > and select the Perfmon file saved.

Blog_20160519_9

The results below show all TSQL statements run during the time period, along with the Perfmon graph and numerical values. The top section shows the profiler events in the order in which they occurred. The middle section shows the Perfmon graph for the duration of the collection period with the min, max, and average values. The bottom section displays either the text of the SQL statement or an execution plan.

You want to look for spikes or bottlenecks in the performance metrics, and then identify the profiler events that occurred around that time.

Click anywhere on the Perfmon graph and the TSQL statement will be pulled up in the Profiler event list. If I click on the Perfmon graph where I see the CPU was 100% (vertical red line below), the chart takes me to the general location (time period) in the Profiler events. You will need to scroll up or down in the list of events during that time period to find the actual event that caused the CPU spike. Clicking on the CPU spike in the Perfmon graph takes us to a TSQL statement around the time frame of that spike and provides the execution plan as shown below:

Blog_20160519_10

You can also click on a line in the Profiler events to see where the performance counters were at that point in time on the Perfmon graph (indicated with a red line below). I clicked on a high-Duration SQL:BatchCompleted event in the TSQL statement list, and the Perfmon graph shows 89% processor time and 2% disk idle (which means the disk was 98% busy).

Blog_20160519_11

In our example, the memory metrics did not show any pressure, but the CPU and disks were being stressed: the average CPU usage was around 70%, and the average disk idle time was 25% (meaning the disk was 75% busy).

Blog_20160519_12

Beware of some ‘Gotchas’!

Don’t run Profiler in a production environment! Profiler puts significant stress on a system, so if you have access to another server from which you can run the tool, that is preferred. If you don’t, take great care not to do this during critical periods on your production machine, and be sure to set filters so you don’t collect everything that is running.

Clicking on a Perfmon spike will only take you to the general area in the list of trace events. As shown in the example above, you may or may not be directed to the exact Profiler event that caused the spike in performance. You will need to look around that time period in the trace to find the event responsible.

There could be non-SQL processes causing your performance issues. If there are other non-SQL applications running on your production server that are causing performance stress, you won’t see those events by using the SQL Profiler tool.

There could be multiple statements causing performance spikes that individually don’t exceed your Duration filter. Each of these low-to-medium resource users isn’t a problem on its own, but collectively they can cause trouble when they run simultaneously, and because they don’t meet your filter you won’t see them in the trace.
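One way to check for this is to collect a trace with a lower (or no) Duration filter, load the .trc file with sys.fn_trace_gettable, and aggregate the smaller statements (a sketch; the file path is just a placeholder):

-- Aggregate trace events by statement text to surface many small resource users
SELECT CAST(TextData AS nvarchar(max)) AS QueryText,
       COUNT(*) AS Executions,
       SUM(Duration)/1000 AS TotalDurationMs,  -- Duration is stored in microseconds
       SUM(CPU) AS TotalCPUMs
FROM sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT)
WHERE EventClass IN (10, 12)  -- RPC:Completed, SQL:BatchCompleted
GROUP BY CAST(TextData AS nvarchar(max))
ORDER BY TotalDurationMs DESC;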

Capturing Showplan XML will catch everything happening in SQL Server because that event doesn’t have a Duration column to filter on, which will bloat your trace file. Again, it is better to run these tools from a server other than your production server.

* Disclaimer: Our SQLRx performance analysis process vanquishes all these ‘gotchas’…just sayin…give us a call

Conclusion

Being able to correlate TSQL statements with performance metrics can help you determine why your system is under stress, pinpoint queries that may need tuning, and show how your queries affect hardware and resource usage. This can be done during a period of performance spikes, especially if you know beforehand when the spikes occur. However, keep in mind that running SQL Profiler adds to the stress on your system, so take care not to do this during a critical performance spike, and as mentioned before, it is best to do this from a non-production server.

This article describes the general methods for using the SQL Profiler tool in conjunction with the Windows Performance Monitor. Correlating performance issues with this process is not 100% accurate, but will give you a great starting point to investigate performance stress on your system. Finding performance bottlenecks and knowing how to correct them is an in-depth process that usually requires years of experience. If you need guidance or assistance with this, contact us. We are experts in this, we have over 40 years of experience, and have helped over 700 clients. We can help you!

If you would like assistance configuring, maintaining, or monitoring your SQL Server and instances please contact us at SQLRx@sqlrx.com. We would be happy to answer any question you have, and would be interested to hear about your SQL experiences as well! Visit us at www.sqlrx.com.