Get Index Column Info with Includes for One or Many Tables

— by Lori Brown @SQLSupahStah

I was recently working with one of my clients on some low-hanging-fruit query tuning. We had checked the plan cache for plans with missing index warnings in them and were trying to see if we could tweak an existing index or add a new index to speed things up. If you ever work with missing indexes, you have surely seen the optimizer recommend crazy things: duplicates of existing indexes, or columns that could simply be added to an existing index.

The bottom line for missing index recommendations is that you should NEVER, EVER create a missing index unless you know for absolutely certain that the index does not exist and that it will really help a known performance issue. For instance, you can have missing index warnings on small tables that only return a couple of rows to the query. Those are usually not worth working on since scanning a small table can usually be done very quickly by SQL. To this day, I still find many databases that are way over-indexed, with indexes that were implemented simply because someone found a missing index warning and did not do their homework, or because they ran the dreaded Database Tuning Advisor, which shoved a bunch of duplicate indexes into tables.

If you are wondering how to get a list of missing indexes, please check out Jeff Schwartz’s blog post on how to do this (https://blog.sqlrx.com/2017/06/02/query-tuning-and-missing-index-recommendations/). This will give you a place to start. It is better if you know which query is throwing the missing index warning, so it is a good idea to collect those either in a trace or in extended events. Jeff builds on his first post in his second post on the subject (https://blog.sqlrx.com/2017/07/20/handling-multiple-missing-index-recommendations-for-the-same-table/), which also goes over the fun of having multiple missing index recommendations for a single table and how to deal with them.

Here’s a handy set of links for some of Jeff’s great index tuning work that you should really check out:

https://blog.sqlrx.com/2016/01/28/sql-server-20122014-extended-events-for-developers-part-1/

https://blog.sqlrx.com/2017/06/02/query-tuning-and-missing-index-recommendations/

https://blog.sqlrx.com/2017/07/20/handling-multiple-missing-index-recommendations-for-the-same-table/

https://blog.sqlrx.com/2017/08/10/how-indexing-affects-deletion-queries/

One of the things that I usually need when performance tuning is to know information about the existing indexes on specific tables. I always want to know what columns are in the indexes along with the included columns so that I can compare the existing indexes to the missing recommendations. This way I can better figure out if a recommendation is something that can be added to an existing index (like an included column) or if I really need to create a brand new index if it does not exist at all.

Like most DBA’s, I keep a toolkit with all kinds of handy scripts. However, I did not have one that would give me index included columns. I also wanted the query to be able to return info from one or many tables at the same time. This would be useful when dealing with things with lots of joins. I know that there are a few bloggers who have posted something similar but I wanted to have the ability to filter on one or multiple tables. So, here is what I came up with:

/********************************

Returns index columns with included columns

plus other needed index info for tables

in @tablelist variable

*********************************/

 

DECLARE @tablelist VARCHAR(1000)
DECLARE @sqlstr NVARCHAR(MAX)

SET @tablelist = 'InvoiceLines,OrderLines,StockItemHoldings' -- comma delimited list of tables, can be one or multiples EX: 'mytable' or 'mytable,nothertable,thirdtable'

-- Query the tables
IF @tablelist <> '' AND @tablelist <> '?,?,?'
BEGIN
SET @tablelist = REPLACE(QUOTENAME(@tablelist,''''), ',', ''',''') -- Add quotes so the IN clause will work

SET @sqlstr = 'SELECT SCHEMA_NAME(o.schema_id) AS SchemaName
       ,o.name AS TableName
       ,i.name AS IndexName
       ,i.type_desc AS IndexType
       ,c.name AS ColumnName
       ,ic.is_included_column
       ,ic.index_column_id
       ,t.name AS DataType
       ,c.max_length
       ,c.precision
       ,c.scale
FROM sys.indexes i
JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns c ON ic.object_id = c.object_id AND ic.column_id = c.column_id
JOIN sys.objects o ON o.object_id = i.object_id
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE o.name IN ('+@tablelist+')
ORDER BY SchemaName, TableName, IndexName, index_column_id'

EXEC sp_executesql @sqlstr
--PRINT @sqlstr
END

All you have to provide is a comma separated list of the table(s) you are interested in for the @tablelist variable and it will do the rest. The output looks like this:

Blog_20170928_1

I found several bloggers who had made queries that would concatenate the columns together but truthfully I found those hard to read so I settled for a list with some extra info on the data and index types. Hope this is useful to someone out there.
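For anyone who prefers the concatenated style, here is a rough sketch of that approach, using FOR XML PATH to roll the key and included columns up into comma-separated lists. This is just an illustration, not the query I used above, so adjust the table name filter to suit.

-- Sketch: one row per index with key and included columns as comma separated lists
SELECT SCHEMA_NAME(o.schema_id) AS SchemaName
       ,o.name AS TableName
       ,i.name AS IndexName
       ,i.type_desc AS IndexType
       ,STUFF((SELECT ',' + c.name
               FROM sys.index_columns ic
               JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
               WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.is_included_column = 0
               ORDER BY ic.key_ordinal
               FOR XML PATH('')), 1, 1, '') AS KeyColumns
       ,STUFF((SELECT ',' + c.name
               FROM sys.index_columns ic
               JOIN sys.columns c ON c.object_id = ic.object_id AND c.column_id = ic.column_id
               WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.is_included_column = 1
               FOR XML PATH('')), 1, 1, '') AS IncludedColumns
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
WHERE o.name IN ('InvoiceLines','OrderLines','StockItemHoldings')
ORDER BY SchemaName, TableName, IndexName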

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Find Tables That Have Special Features Enabled

Find out if any of the tables in your database have special features enabled using the queries below. These features need to be understood and carefully managed.

-- CDC Enabled Tables
select distinct t.name AS CDCTables
from sys.tables t
where t.is_tracked_by_cdc = 1

-- File Tables -- SQL 2012 +
select distinct t.name AS FileTables
from sys.tables t
where t.is_filetable = 1

-- Temporal Tables -- SQL 2016 +
select distinct t.name AS TemporalTables
from sys.tables t
where t.temporal_type > 0

-- Stretch Enabled Tables -- SQL 2016 +
select distinct t.name AS StretchTables
from sys.tables t
where t.is_remote_data_archive_enabled > 0

-- External Tables -- SQL 2016 +
select distinct t.name AS ExternalTables
from sys.tables t
where t.is_external > 0
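If you would rather check all of these flags in a single pass, a quick combined version works too. This is a sketch that assumes SQL 2016 or later, since it references the newer columns:

-- One row per table that has at least one special feature enabled (SQL 2016+)
select t.name AS TableName,
       t.is_tracked_by_cdc AS IsCDC,
       t.is_filetable AS IsFileTable,
       t.temporal_type AS TemporalType,
       t.is_remote_data_archive_enabled AS IsStretch,
       t.is_external AS IsExternal
from sys.tables t
where t.is_tracked_by_cdc = 1
   or t.is_filetable = 1
   or t.temporal_type > 0
   or t.is_remote_data_archive_enabled = 1
   or t.is_external = 1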

https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server

https://docs.microsoft.com/en-us/sql/relational-databases/blob/filetables-sql-server

https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables

https://docs.microsoft.com/en-us/sql/sql-server/stretch-database/enable-stretch-database-for-a-table

https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-table-transact-sql

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Find Out Who Changed the Database Recovery Model

— By Lori Brown @SQLSupahStah

I ran into a situation where we were working on a migration and had been directed to put all databases into FULL recovery model in anticipation of using log shipping to push databases to the new server. Once we are ready to go live on the new server the plan was to ship the last transaction logs and then restore them WITH RECOVERY in an effort to make the final cutover as quick as possible. Of course this means that we had to make sure that all databases were having regular log backups, which we did. Things were going along nicely until we started receiving log backup failure notifications.

Upon checking things, we found that one of the databases had been changed to the SIMPLE recovery model. You can find this type of information in the default trace, or you can simply scroll through the SQL error logs until you find the entry that you are looking for. If you have a busy instance with a lot of entries in the error log, this can be a bit time consuming, so I came up with a set of queries that grab the error log entry and attempt to tie it to the info in the default trace, making it easier to identify WHO made an unauthorized change to the database properties.

 

DECLARE @tracefile VARCHAR(500)
DECLARE @ProcessInfoSPID VARCHAR(20)

CREATE TABLE [dbo].[#SQLerrorlog](
[LogDate] DATETIME NULL,
[ProcessInfo] VARCHAR(10) NULL,
[Text] VARCHAR(MAX) NULL
)

/*
Valid parameters for sp_readerrorlog
1 - Error log: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
2 - Log file type: 1 or NULL = error log, 2 = SQL Agent log
3 - Search string 1
4 - Search string 2

Change parameters to meet your needs
*/
-- Read error log looking for the words RECOVERY
-- and either FULL, SIMPLE or BULK_LOGGED indicating a change from prior state
INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'FULL'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'SIMPLE'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'BULK_LOGGED'

UPDATE #SQLerrorlog
SET ProcessInfo = SUBSTRING(ProcessInfo,5,20)
FROM #SQLerrorlog
WHERE ProcessInfo LIKE 'spid%'

-- Get path of default trace file
SELECT @tracefile = CAST(value AS VARCHAR(500))
FROM sys.fn_trace_getinfo(DEFAULT)
WHERE traceid = 1
AND property = 2

-- Get objects altered from the default trace
SELECT IDENTITY(int, 1, 1) AS RowNumber, *
INTO #temp_trc
FROM sys.fn_trace_gettable(@tracefile, default) g -- default = read all trace files
WHERE g.EventClass = 164

SELECT t.DatabaseID, t.DatabaseName, t.NTUserName, t.NTDomainName,
t.HostName, t.ApplicationName, t.LoginName, t.SPID, t.StartTime, l.Text
FROM #temp_trc t
JOIN #SQLerrorlog l ON t.SPID = l.ProcessInfo
WHERE t.StartTime > GETDATE()-1 -- filter by time within the last 24 hours
ORDER BY t.StartTime DESC

DROP TABLE #temp_trc
DROP TABLE #SQLerrorlog
GO
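If the change happened before the current error log was cycled, the same search works against an archived log by changing the first parameter of sp_readerrorlog. For example, to search archive #1:

-- Search archived error log #1 instead of the current log
INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 1, 1, 'RECOVERY', 'SIMPLE'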

 

You can find more on the following:

sp_readerrorlog is an undocumented procedure that actually uses xp_readerrorlog – https://www.mssqltips.com/sqlservertip/1476/reading-the-sql-server-log-files-using-tsql/

sys.fn_trace_getinfo – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-getinfo-transact-sql

sys.fn_trace_gettable – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-gettable-transact-sql

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

Links for New DMVs in SQL 2017

Since we at SQLRX have been super busy this week and still need to do something for a blog post, I thought that doing a link round-up of the new dynamic management views that are going to be available in SQL 2017 would be a quick and good idea. I am really interested in sys.dm_db_log_info since it gives VLF info and can be used for monitoring and alerting.

Enjoy!

Change:

A column on sys.dm_db_file_space_usage has been changed: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-file-space-usage-transact-sql

Brand new DMVs:

sys.dm_db_log_stats https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-stats-transact-sql

sys.dm_db_log_info https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-info-transact-sql

sys.dm_db_stats_histogram https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-stats-histogram-transact-sql

sys.dm_tran_version_store_space_usage https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-version-store-space-usage

sys.dm_os_host_info https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-host-info-transact-sql
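Since sys.dm_db_log_info returns one row per VLF, a quick VLF count per database is easy to put together for monitoring. Here is a minimal sketch, assuming SQL 2017 or later:

-- VLF count for every database; unusually high counts are worth an alert
SELECT d.name AS DatabaseName, COUNT(*) AS VLFCount
FROM sys.databases d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY VLFCount DESC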

Blog_20170901_1

Yee Haw!!!!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Log Shipping Backup Failure

–By Ginger Keys

For years the SQL Agent service account on my client’s SQL Server instance ran the Maintenance Plans and SQL Agent jobs with no issues. Many of the SQL databases were set to the Full recovery model, and Tlog backups executed on a regular basis with no problems.

The client decided to migrate to new hardware in a new datacenter, and decided that log shipping the databases over until the go-live date would be the best option for them in their circumstances. We took the databases out of the regular Tlog backup routines and created Transaction Log Shipping routines in their place. The connections to the new instance were seamless, and the Copy & Restore jobs were executing fine. However, the Backup jobs were failing!

Why would this be, since the SQL Agent service account had been executing Tlog backups for years!? As it turns out, log shipping uses a different executable for performing tlog backups: sqllogship.exe. The SQL Agent service account must have permissions to the folder location where this executable is located, as shown below. You can locate your executable file by opening the LS_Backup job properties and viewing the job step.

Blog_20170824_1
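You can also pull the command line (and therefore the executable path) straight out of msdb instead of clicking through the job properties. Here is a minimal sketch:

-- Find the sqllogship.exe command used by the log shipping backup job(s)
SELECT j.name AS JobName, js.step_name, js.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps js ON js.job_id = j.job_id
WHERE js.command LIKE '%sqllogship%'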

Once we granted Full permissions to this location for the SQL Agent service account, everything worked as intended. This was the first occasion with log shipping that I have run into permissions issues for the service account on a primary server. Hopefully this is an uncommon occurrence, but it is certainly simple to fix once you understand what is happening.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

List Tables That Contain Special Indexes

Find out if any of your tables contain special columnstore or spatial indexes. Columnstore indexes organize the data in columns instead of rows like traditional indexes and can increase performance on large data sets such as those found in data warehouses. Spatial indexes are a special type of index on a spatial column such as geometry or geography.

-- Tables with columnstore indexes -- SQL 2012 +
select t.name as TablesWithColumnstoreInx
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.type = 5 or i.type = 6

-- Tables with spatial indexes -- SQL 2008 +
select t.name as TablesWithSpatialInx
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.type = 4
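If you also want to see which indexes they are and whether a columnstore index is clustered or nonclustered, i.type_desc spells it out. A quick variant:

-- Table, index name and index type for columnstore and spatial indexes
select t.name as TableName,
       i.name as IndexName,
       i.type_desc as IndexType
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.type in (4, 5, 6) -- 4 = spatial, 5 = clustered columnstore, 6 = nonclustered columnstore
order by t.name, i.name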

 

https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview

https://docs.microsoft.com/en-us/sql/relational-databases/spatial/spatial-indexes-overview

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

How Indexing Affects Deletion Queries

— by Jeff Schwartz

The Problem

Many articles concerning SQL Server discuss how record insertion overhead increases with each additional index. They discuss b-tree manipulations and page splits in addition to leaf and non-leaf levels. However, few discuss the fact that deletion overhead increases as well, especially when large numbers of records are deleted by individual queries. Recently, I was working with a client who regularly purged large numbers of records from tables that ranged in size from large to gigantic. For example, one table contained over 6.5 billion records. I added an index (4th overall) to one table expressly for the purpose of expediting the large deletion process, and the deletion run ran longer, despite using the new index! To determine how the numbers of indices and records to be deleted interact, I conducted an experiment to test several combinations. The specifics of the tests and their corresponding results are summarized below.

Test Table Creation & Load

To determine deletion/index behavior, a generic table was constructed and filled with 20 million records. Each record contained an identity column, an ID column, a text column, and 47 metric columns whose random values ranged between 1 and 1,000,000,000. The large number of table columns was used to ensure SQL Server would choose an index option when appropriate. To minimize duplication of column values and create a uniform distribution of values, the ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase formula was used to generate values that were as random as possible, where @ModBase was set to one billion. Three indices were created initially: a clustered index that used the identity column as its only key, a nonclustered index that used DupID as its only column, and a nonclustered index that used Metric43 and Metric14 as keys with Metric01, Metric02, Metric03, and Metric04 as included columns. The scripts for the creation, loading, and initial indexing of the table are shown below.

-- ##############################################################
-- Create test table
-- ##############################################################

drop table FewDuplicates;

CREATE TABLE FewDuplicates (

IDCol bigint identity (20000000,1),

DupID bigint,

MyText varchar(10),

Metric01 bigint, Metric02 bigint, Metric03 bigint, Metric04 bigint,

Metric05 bigint, Metric06 bigint, Metric07 bigint, Metric08 bigint,

Metric09 bigint, Metric10 bigint, Metric11 bigint, Metric12 bigint,

Metric13 bigint, Metric14 bigint, Metric15 bigint, Metric16 bigint,

Metric17 bigint, Metric18 bigint, Metric19 bigint, Metric20 bigint,

Metric21 bigint, Metric22 bigint, Metric23 bigint, Metric24 bigint,

Metric25 bigint, Metric26 bigint, Metric27 bigint, Metric28 bigint,

Metric29 bigint, Metric30 bigint, Metric31 bigint, Metric32 bigint,

Metric33 bigint, Metric34 bigint, Metric35 bigint, Metric36 bigint,

Metric37 bigint, Metric38 bigint, Metric39 bigint, Metric40 bigint,

Metric41 bigint, Metric42 bigint, Metric43 bigint, Metric44 bigint,

Metric45 bigint, Metric46 bigint, Metric47 bigint

)

-- ##############################################################
-- Load original table
-- ##############################################################

declare @DupID bigint = 1

declare @NumRecs bigint = 20000000

declare @ModBase bigint = 1000000000

truncate table FewDuplicates

set nocount on

while (@DupID <= @NumRecs)

begin

insert into [dbo].[FewDuplicates] (

[DupID], [MyText]

,[Metric01], [Metric02], [Metric03], [Metric04], [Metric05], [Metric06]

,[Metric07], [Metric08], [Metric09], [Metric10], [Metric11], [Metric12]

,[Metric13], [Metric14], [Metric15], [Metric16], [Metric17], [Metric18]

,[Metric19], [Metric20], [Metric21], [Metric22], [Metric23], [Metric24]

,[Metric25], [Metric26], [Metric27], [Metric28], [Metric29], [Metric30]

,[Metric31], [Metric32], [Metric33], [Metric34], [Metric35], [Metric36]

,[Metric37], [Metric38], [Metric39], [Metric40], [Metric41], [Metric42]

,[Metric43], [Metric44], [Metric45], [Metric46], [Metric47]

)

VALUES (

@DupID, 'my text',

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

 ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase,

ABS(cast(CHECKSUM(NewId()) as bigint)) % @ModBase

)

set @DupID += 1

end -- group option loop

set nocount off

-- ##############################################################
-- Create indices on the test table
-- ##############################################################

CREATE UNIQUE CLUSTERED INDEX [ci_RecID] ON [dbo].[FewDuplicates]

(

[IDCol] ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

CREATE NONCLUSTERED INDEX [ix_DupID] ON [dbo].[FewDuplicates]

(

DupID ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

CREATE NONCLUSTERED INDEX ix_CombinedIndex ON [dbo].[FewDuplicates]

(

[Metric43],

[Metric14]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric03],

[Metric04]

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

Queries that Perform Varying Numbers of Record Deletions

To illustrate the issue, six queries were created that deleted various numbers of records and they are listed in Table 1. The deletions were designed to use the ix_CombinedIndex index, thereby emulating the client situation in which the optimal index for deletions was used. The only differences involved the numbers of records deleted and these are highlighted in the table below for easier comparison. The selected values were chosen so that small, medium, and large numbers of records would be deleted.

Blog_20170810_1

Table 1: All Six Deletion Queries
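The exact predicates appear in Table 1 above. Purely as a hypothetical illustration of the pattern, each deletion query had the following general shape, with only the range boundaries (and therefore the number of affected records) changing from query to query:

-- Hypothetical shape of the test deletion queries (actual ranges are in Table 1);
-- the predicate on the ix_CombinedIndex keys is what allows that index to be used
DELETE FROM [dbo].[FewDuplicates]
WHERE Metric43 BETWEEN 40000000 AND 49999999
AND Metric14 BETWEEN 10000000 AND 19999999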

The plan for Query #4 with three indices is shown below in Figure 1 and it is fairly simple, as one would expect. Things become more complicated with additional indices, as discussed in the next section.

Blog_20170810_2

Figure 1: Query Plan for Deletion Query #4 with Three Indices

Indices that Cause Additional Deletion Overhead

To examine the effects of additional indices on the deletion queries, up to eight were added to the table using different keys. None of the indices was used for direct data access and the index definitions are shown below, followed by the commands to delete them. Only 4 of these indices were used for the 7-index test, whereas all 8 were used for the 11-index test.

-- create indices

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber1 ON [dbo].[FewDuplicates]

(

[Metric01],

[Metric02]

)

INCLUDE (

[Metric03],

[Metric04],

[Metric43],

[Metric14]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber2 ON [dbo].[FewDuplicates]

(

[Metric03],

[Metric04]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric14]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber3 ON [dbo].[FewDuplicates]

(

[Metric05],

[Metric06]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric14]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber4 ON [dbo].[FewDuplicates]

(

[Metric07],

[Metric08]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric14]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber5 ON [dbo].[FewDuplicates]

(

[Metric09],

[Metric10]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric14]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber6 ON [dbo].[FewDuplicates]

(

[Metric11],

[Metric12]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric14]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber7 ON [dbo].[FewDuplicates]

(

[Metric13],

[Metric14]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric44]

)

CREATE NONCLUSTERED INDEX ix_AddedIndexNumber8 ON [dbo].[FewDuplicates]

(

[Metric15],

[Metric16]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric43],

[Metric14]

)

-- drop indices

drop INDEX ix_AddedIndexNumber1 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber2 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber3 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber4 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber5 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber6 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber7 ON [dbo].[FewDuplicates]

drop INDEX ix_AddedIndexNumber8 ON [dbo].[FewDuplicates]

Figure 2 shows the plan for the same query that was shown in Figure 1, but this time with 11 indices on the table instead of 3. Clearly, the plan is MUCH more complex and the effects of the additional indices are obvious. This suggests what the test metrics will ultimately confirm: work grows demonstrably as the number of indices increases.

Blog_20170810_3

Blog_20170810_4

Figure 2: Query Plan for Deletion Query #4 with 11 Indices

Test Results

A total of 18 tests were conducted. Three index configurations were used with the following numbers of indices: 3, 7, and 11. Queries #1 – #6 were run against each configuration. As shown in Figure 3 through Figure 6, most of the metrics are comparable until the 100,000-record level is exceeded, at which point great divergence occurs. One of the most interesting findings involves writing, whose curve shape differs completely from those of the other metrics. These figures graphically illustrate why deletions of small-to-moderate numbers of records often go unnoticed as the number of indices increases. However, they also illustrate clearly how the size of the deletion and the number of indices can combine to negate the improvement provided by a specialty deletion index.

Blog_20170810_5

Figure 3: Deletion Queries – Duration (Seconds)

Blog_20170810_6

Figure 4: Deletion Queries – CPU (Seconds)

Blog_20170810_7

Figure 5: Deletion Queries – Reads

Blog_20170810_8

Figure 6: Deletion Queries – Writes

Conclusion

This article illustrated a situation in which the amount of work performed by deletion queries increased dramatically as the number of records to be deleted increased beyond approximately 100,000. Although this particular value is obviously applicable only to the test case, the overall message is clear: as the number of records to be deleted increases well beyond 100,000, the work performed by the query increases dramatically. In addition, this situation worsens considerably as the number of indices spanning a table increases. Therefore, although the overhead associated with deleting low numbers of records may not be noticed as indices are added, the performance of queries that delete large numbers of records will degrade noticeably as the number of indices increases. In some cases, the overhead may negate any improvement that might be gained by adding an index whose purpose is to expedite the deletion process. Therefore, before adding an index to improve deletion performance, ensure that the batch of deleted records is not too large and that the number of indices on the table is small.
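One practical way to keep each deletion batch small is to loop through limited-size chunks instead of issuing a single massive DELETE. A minimal sketch follows; the batch size is only an illustrative number and should be tuned for your environment:

-- Delete in bounded batches so each statement touches a limited number of rows
DECLARE @BatchSize int = 50000
DECLARE @RowsDeleted int = 1

WHILE @RowsDeleted > 0
BEGIN
       DELETE TOP (@BatchSize) FROM [dbo].[FewDuplicates]
       WHERE Metric43 BETWEEN 40000000 AND 49999999

       SET @RowsDeleted = @@ROWCOUNT
END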

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

AlwaysOn – Endpoint Ownership

— By Ginger Keys

It is not uncommon for a DBA or other IT staff to set up AlwaysOn in a SQL environment and later leave the company. We ran into this recently with a client and were asked to delete the previous employee’s login from everything SQL related. We were able to remove the login from all databases and server roles, however we were not able to delete the login because it was the owner of an endpoint.

When creating an AlwaysOn Availability Group, you have the option of using the wizard or you can create it using TSQL statements. The wizard is very intuitive and easy to use and with the exception of a few settings you can specify, default configurations are deployed using this method. One of the default configurations is the endpoint owner. Whoever creates the AlwaysOn group is by default the owner of the endpoint.

This is generally not a problem…unless that person leaves the company and you need to delete the login! You will get an error message that says “The server principal owns one or more endpoint(s) and cannot be dropped (Microsoft SQL Server, Error: 15141)”.

To see who the owners of your endpoints are, run this statement:

USE master
GO
SELECT e.name as EndpointName,
sp.name AS EndpointOwner,
et.PayloadType,
e.state_desc
FROM sys.endpoints e
INNER JOIN sys.server_principals sp
ON e.principal_id = sp.principal_id
RIGHT OUTER JOIN ( VALUES ( 2, 'TSQL'),
( 3, 'SERVICE_BROKER'), ( 4, 'DATABASE_MIRRORING') )
AS et ( typeid, PayloadType )
ON et.typeid = e.type

The AlwaysOn endpoint will have the name Hadr_endpoint and will have a DATABASE_MIRRORING payload type as shown below.

Blog_20170803_1

If your AlwaysOn group has already been created and there is a domain login as the owner, you can change the ownership to sa. Run the following statement to make the change:

USE master

GO

ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa

This will allow you to delete any login that might have owned the endpoint if it’s ever necessary.
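For example, with the endpoint now owned by sa, a statement like the following (hypothetical login name) no longer fails with error 15141:

-- Hypothetical login name; the drop succeeds once the endpoint is owned by sa
DROP LOGIN [MyDomain\FormerDBA]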

If you are creating an AlwaysOn Availability Group and want to use TSQL statements instead of the wizard, you have the ability to specify the endpoint owner. For complete instructions on how to set up the AlwaysOn group with TSQL, click here: https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/create-an-availability-group-transact-sql

In order to create the endpoint with a specific user, run the following statement:

CREATE ENDPOINT endpoint_mirroring

AUTHORIZATION loginname

STATE = STARTED

AS TCP (LISTENER_PORT = 5022)

FOR DATABASE_MIRRORING (

AUTHENTICATION = WINDOWS KERBEROS,

ENCRYPTION = SUPPORTED,

ROLE=ALL);

GO

In the statement above, if AUTHORIZATION is not specified with a SQL or Windows login, the caller will become the owner of the newly created endpoint. To use AUTHORIZATION and assign ownership to a login, the caller must have IMPERSONATE permission on the specified login.

Endpoints are a fundamental piece of SQL Server that allow a connection or point of entry into your instance. Knowing who owns these endpoints and how to change the owner will potentially save you some headaches down the road in the event of IT staffing changes in your organization.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Use MSDB to Get Database Backup Size and Total Time For Each

— by Lori Brown

We recently started using third-party software to do our in-house SQL backups so that the backup files are stored in a redundant and safe place. To confirm that the software was indeed compressing backups as it stated it would, we wanted to see what each backup size actually was in SQL so that we could compare it to what the software was telling us.

SQL stores lots of handy backup information in msdb in the backupset and backupmediafamily tables.

https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupset-transact-sql

https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupmediafamily-transact-sql

Here is my query. I only want the information from the last 24 hours, so I have filtered the start date by subtracting 1 day from today. I have also provided some commented-out options in case someone needs them.

-- database backup size and how long it took to do backup
SELECT bs.database_name AS DatabaseName
, CAST(bs.backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS BackupSizeGB
, CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB
--, CAST(bs.compressed_backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS CompressedSizeGB
--, CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB
, bs.backup_start_date AS BackupStartDate
, bs.backup_finish_date AS BackupEndDate
, CAST(bs.backup_finish_date - bs.backup_start_date AS TIME) AS AmtTimeToBkup
, bmf.physical_device_name AS BackupDeviceName
FROM msdb.dbo.backupset bs JOIN msdb.dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
WHERE
--bs.database_name = 'MyDatabase' and   -- uncomment to filter by database name
bs.backup_start_date > DATEADD(dd, -1, GETDATE()) and
bs.type = 'D' -- change to L for transaction logs
ORDER BY bs.database_name, bs.backup_start_date

And, here is the output.

Blog_20170726_1

It turned out that the software was indeed compressing all backups so that was a good thing.

There is a lot more info that can be pulled from msdb regarding backups. Have some fun and experiment with getting information that you need from there. Here are some links to some other backup related topics that we have blogged about already.
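For example, since backupset also stores compressed_backup_size (the column commented out in the query above), a quick way to sanity-check compression is to compare it to backup_size. A sketch along the same lines:

-- Compare native size to compressed size for full backups in the last day
SELECT bs.database_name AS DatabaseName
, CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB
, CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB
, CAST(bs.backup_size/NULLIF(bs.compressed_backup_size, 0) AS DECIMAL(10, 2)) AS CompressionRatio
FROM msdb.dbo.backupset bs
WHERE bs.backup_start_date > DATEADD(dd, -1, GETDATE())
AND bs.type = 'D'
ORDER BY bs.database_name, bs.backup_start_date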

https://blog.sqlrx.com/2017/03/02/sql-server-backup-and-restore-primer/

https://blog.sqlrx.com/2013/04/23/backup-database-commands/

https://blog.sqlrx.com/2013/05/28/be-prepared-for-a-crisis/

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Handling Multiple Missing Index Recommendations for the Same Table

— By Jeff Schwartz

The Problem

Many articles concerning SQL Server missing index recommendations demonstrate the mechanics for obtaining them and often highlight whether the suggested key columns are used in equality or inequality relationships. Most of these examples emphasize a single index that can be implemented to improve performance. However, real-life situations often involve multiple or many suggested indices. For example, during a recent customer study, I observed 28 recommended indices for one table and 52 for another. Clearly, metrics such as improvement measure and user impact frequently can be used to determine the most important recommendations, but sometimes there are either too many similar recommendations – or – improvement measures are almost the same for several proposed indices, which makes determining an optimal index difficult. A large number of recommendations also often results in index proliferation, i.e., missing index recommendations are implemented piecemeal with no overall strategy resulting in tables with 14, 17, or 24 indices as the author observed in a recent customer performance study. The size of the table compounds this problem because it is especially desirable to limit the number of indices on these tables. For example, the table that had 24 indices on it contained over 30 million records.

The following examples of proposed indices that inspired this article illustrate the multiple recommendation phenomenon – RecIndex1: Keys (DateVal), Included Columns (Metric, ReptCat, LocationID, Total_Amount) and RecIndex2: Keys (LocationID, DateVal) Included Columns (Metric, ReptCat, Total_Amount). Clearly, without additional information, it is difficult to determine whether these recommendations must remain separate or could be combined into a single index. In this situation, knowing whether the proposed key columns are used in equality or inequality where clauses can be critical. This article will discuss how to use knowledge of equality and inequality relationships to determine an appropriate course of action.

Test Table Creation & Load

To determine missing index recommendation behavior, a generic table was constructed and filled with 20 million records. Each record contained an identity column, an ID column, a text column, and 47 metric columns whose values ranged between 1 and 10,000,000. The large number of table columns was used to ensure SQL Server would choose an index option when appropriate. Six queries that incorporated various column combinations were executed (some of which differed only in column ordering). To minimize duplication of column values and skewing of query plans, the ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000 formula was used to generate values that were as random as possible. Two indices were created: a clustered index that used the identity column as its only key and a second nonclustered index that used DupID as its only column. The scripts for the creation, loading, and initial indexing of the table are shown below.

-- ##############################################################
-- Create test table
-- ##############################################################

DROP TABLE FewDuplicates;

CREATE TABLE FewDuplicates (

IDCol bigint identity (20000000,1),

DupID bigint,

MyText varchar(10),

Metric01 bigint, Metric02 bigint, Metric03 bigint, Metric04 bigint,

Metric05 bigint, Metric06 bigint, Metric07 bigint, Metric08 bigint,

Metric09 bigint, Metric10 bigint, Metric11 bigint, Metric12 bigint,

Metric13 bigint, Metric14 bigint, Metric15 bigint, Metric16 bigint,

Metric17 bigint, Metric18 bigint, Metric19 bigint, Metric20 bigint,

Metric21 bigint, Metric22 bigint, Metric23 bigint, Metric24 bigint,

Metric25 bigint, Metric26 bigint, Metric27 bigint, Metric28 bigint,

Metric29 bigint, Metric30 bigint, Metric31 bigint, Metric32 bigint,

Metric33 bigint, Metric34 bigint, Metric35 bigint, Metric36 bigint,

Metric37 bigint, Metric38 bigint, Metric39 bigint, Metric40 bigint,

Metric41 bigint, Metric42 bigint, Metric43 bigint, Metric44 bigint,

Metric45 bigint, Metric46 bigint, Metric47 bigint

)

 

-- ##############################################################
-- Load original table
-- ##############################################################

declare @DupID bigint = 1

declare @NumRecs bigint = 20000000

 

truncate table FewDuplicates

set nocount on

while (@DupID <= @NumRecs)

begin

insert into [dbo].[FewDuplicates] (

[DupID], [MyText],

[Metric01], [Metric02], [Metric03], [Metric04], [Metric05], [Metric06], [Metric07], [Metric08], [Metric09], [Metric10], [Metric11], [Metric12], [Metric13], [Metric14], [Metric15], [Metric16], [Metric17], [Metric18], [Metric19], [Metric20], [Metric21], [Metric22], [Metric23], [Metric24], [Metric25], [Metric26], [Metric27], [Metric28], [Metric29], [Metric30], [Metric31], [Metric32], [Metric33], [Metric34], [Metric35], [Metric36], [Metric37], [Metric38], [Metric39], [Metric40], [Metric41], [Metric42],

[Metric43], [Metric44], [Metric45], [Metric46], [Metric47]

)

VALUES (

@DupID, 'my text',

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000

)

set @DupID += 1

end -- group option loop

set nocount off

 

-- ##############################################################
-- Create indices on the test table
-- ##############################################################

CREATE UNIQUE CLUSTERED INDEX [ci_RecID] ON [dbo].[FewDuplicates]

(

[IDCol] ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

CREATE NONCLUSTERED INDEX [ix_DupID] ON [dbo].[FewDuplicates]

(

DupID ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

Queries that Create Missing Index Recommendations

To illustrate the issue, two queries were created that were guaranteed to generate missing index recommendations as well as mimic the behavior of the ones cited in The Problem section. They are listed in Table 1 and the differences are highlighted for easier comparison. The query plans for the two queries are displayed in Table 2 and Table 3. Both queries performed full clustered index scans and generated missing index recommendations. The recommendations are shown below in two pieces within each table. The most important points are that Query Plan #1 specifies Metric14 first and Metric43 second, whereas Query Plan #2 specifies Metric43 alone with Metric14 as an included column. At first glance, these appear to be contradictory and potentially incompatible differences.

Blog_20170720_1

Blog_20170720_2

Blog_20170720_3
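The actual query text appears in Table 1 above. Purely as a hypothetical illustration of the pattern being described (one query with equality predicates on both Metric14 and Metric43, the other with an equality predicate on Metric43 only and Metric14 returned as data), the two queries would look something like this:

-- Hypothetical reconstruction of the two test queries (the real ones are in Table 1)
-- Query #1: equality predicates on both Metric14 and Metric43
SELECT Metric01, Metric02, Metric03, Metric04
FROM [dbo].[FewDuplicates]
WHERE Metric14 = 1234567 AND Metric43 = 7654321

-- Query #2: equality predicate on Metric43 only, with Metric14 returned as data
SELECT Metric14, Metric01, Metric02, Metric03, Metric04
FROM [dbo].[FewDuplicates]
WHERE Metric43 = 7654321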

Table 4 summarizes the metrics, columns, and full index definitions suggested by SQL Server. Given the very large improvement measure values, the key ordering of the proposed indices, and the somewhat different included columns, it may be tempting to implement both indices despite the fact that the table contained 20 million records. However, more detailed analysis using the data in Table 5 shows that a single index that is constructed properly can accommodate both recommendations and, therefore, both queries. The most important metrics are displayed in column_usage because one uses two equality comparisons and the other only uses one. Therefore, if we specify the equality column used in both queries first and specify the equality/included column second, both queries will be satisfied. VERY IMPORTANT NOTE: Although the key and included column ordering appear obvious because of the column names used in this example table, i.e., suffixes in numerical order, when normal column names like DateVal or LocationID are used, ordering is much less obvious. As cited in my previous blog entitled Query Tuning and Missing Index Recommendations, when ordering is not crucial, e.g., when only equality operations or included columns are specified, SQL Server uses the ordering of the columns in the table itself rather than the ordering specified in the query.

In most cases SQL Server attempts to create covering indices, which are defined to be indices that contain all the columns of a particular query. Please reference the following web page for further information regarding covering indices: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-columns. In the author’s experience, implementing the keys of a suggested index without the corresponding included columns often results in SQL Server ignoring the new index. Therefore, the included columns are vital to any missing index strategy. Clearly, a point of diminishing returns exists when the number of included columns approaches the total number of columns in the table (especially for very large tables), but as long as the number of columns is reasonable, included columns should always be considered. [The queries used to obtain the data shown in Table 4 and Table 5 are provided in Table 6.]

Blog_20170720_4

Blog_20170720_5

SELECT avg_total_user_cost * avg_user_impact * (user_seeks + user_scans)
as [Improvement Measure],
avg_user_impact as [Avg User Impact], user_seeks as Executions,
[statement] as TableName,
equality_columns as [Equality Usage], inequality_columns as [Inequality Usage], included_columns as [Included Columns],
'Create Nonclustered Index ix_SQLRx_' + PARSENAME([statement],1) + '_' + CONVERT(varchar, group_handle) + '_' + CONVERT(varchar, g.index_handle) + ' ON ' +
[statement] + ' (' + ISNULL(REPLACE(equality_columns,' ',''),'') +
CASE WHEN equality_columns IS NOT NULL AND inequality_columns IS NOT NULL THEN ',' ELSE '' END +
ISNULL(REPLACE(inequality_columns,' ',''),'') + ')' +
CASE WHEN included_columns IS NOT NULL THEN ' INCLUDE (' + included_columns + ')' ELSE '' END AS [Create Index Statement]
FROM sys.dm_db_missing_index_groups g
INNER JOIN sys.dm_db_missing_index_group_stats s ON
s.group_handle = g.index_group_handle
INNER JOIN sys.dm_db_missing_index_details d ON d.index_handle = g.index_handle
ORDER BY avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) DESC;

SELECT statement AS [Table], column_id, column_name, column_usage,
migs.user_seeks as Executions, migs.avg_user_impact as [Avg User Impact]
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS migs ON mig.index_group_handle = migs.group_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id

Blog_20170720_8

Table 7 contains the composite index that satisfies both recommendations and queries. It is important to note that Metric43 is the FIRST key and Metric14 is the second. If this ordering is not followed, Query #1 will perform a full scan. Reviewing both queries demonstrates the following:

  1. The key ordering enables the equality operators to apply in both cases.
  2. Having Metric14 as the second key satisfies both the equality condition and the included column condition.
  3. The other columns specified by the queries are supplied so the table data need never be referenced.

 

CREATE NONCLUSTERED INDEX ix_CombinedIndex ON [dbo].[FewDuplicates]

(

[Metric43],

[Metric14]

)

INCLUDE (

[Metric01],

[Metric02],

[Metric03],

[Metric04]

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

Blog_20170720_9

Table 8 and Table 9 contain the updated query plans, which illustrate that the clustered index scans have been replaced by index seek operators and that the clustered index is not accessed to satisfy either query. Note also that parallelism was present in the query plans shown in Table 2 and Table 3, but is absent in the new query plans displayed in Table 8 and Table 9.

Blog_20170720_6

Blog_20170720_7

Conclusion

This article illustrated a situation in which multiple missing index recommendations at first appeared to necessitate separate indices. Careful examination of the key relationships used in the queries and specified by the missing index recommendations enabled the author to develop a single composite index that satisfied both recommendations and delivered optimal query performance.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!