Query to Get SQL Job Info from sysprocesses

— By Lori Brown @SQLSupahStah

We were recently troubleshooting an issue where a client reported that their SQL instance was suddenly very slow. One of the first things we do is check to see what queries are currently running in SQL by using the following query:

-- Finds info on queries that are running NOW
SELECT GETDATE() AS CurrentTime,
    SUBSTRING(ExecSQLText.[text],
        CASE WHEN ((ExecReq.statement_start_offset / 2) + 1) < 1 THEN 1
             ELSE (ExecReq.statement_start_offset / 2) + 1 END,
        ((CASE ExecReq.statement_end_offset
              WHEN -1 THEN DATALENGTH(ExecSQLText.[text])
              ELSE ExecReq.statement_end_offset END
          - ExecReq.statement_start_offset) / 2) + 1) AS statement_text,
    ExecSQLText.[text],
    ExecReq.database_id,
    ExecReq.session_id AS [SPID],
    ExecReq.[reads],
    ExecReq.logical_reads,
    ExecReq.writes,
    ExecReq.total_elapsed_time / 1000.0 AS [total_elapsed_time (secs)],
    ExecReq.wait_time / 1000.0 AS [wait_time (secs)],
    ExecReq.cpu_time / 1000.0 AS [cpu_time (secs)],
    ExecReq.start_time,
    ExecReq.granted_query_memory,
    ExecReq.last_wait_type,
    ExecReq.blocking_session_id AS [BlockingSPID]
FROM sys.dm_exec_requests ExecReq
OUTER APPLY sys.dm_exec_sql_text(ExecReq.sql_handle) ExecSQLText
WHERE ExecReq.session_id > 50
    AND ExecReq.session_id <> @@SPID
    AND ExecReq.last_wait_type <> 'BROKER_RECEIVE_WAITFOR'
ORDER BY ExecReq.cpu_time DESC
-- total_elapsed_time desc
-- wait_time desc
-- logical_reads desc

After running this a few times, we can see whether any one query consistently shows up in the list as using lots of resources, blocking, or being blocked. We can also see whether a lot of normally fast queries are being listed, which might indicate some kind of issue with an execution plan or statistics. Here is an example of what the output looks like:

Blog_20180216_1

If I were trying to further figure out what SPID 709 was doing (as in what user and what application), I might take a look at sp_who2 or sp_WhoIsActive from Adam Machanic (http://whoisactive.com/) and see something like this in the program name:

SQLAgent - TSQL JobStep (Job 0xCC1842F477AA1A4E84CD91228FFC799B : Step 1)

We know a job is running, but which job is it? I know that we can get the job name from the hex value, but instead of copying the value from program_name into a variable, I wanted a query that would decipher any running jobs on the fly. This keeps me from having to open job activity and hunt for which job was running. If you have hundreds of jobs to look through, or if more than one job is running, you might otherwise have to guess which one is the culprit. I also wanted to know which job step the job was on, so that I knew where it was and was better prepared to provide meaningful information back to my client.
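For reference, the manual approach would be to copy the hex value out of program_name and convert it back to the job's uniqueidentifier yourself. A minimal sketch using the job ID from the example above (the variable names are just placeholders):

DECLARE @JobIDHex nvarchar(34) = N'0xCC1842F477AA1A4E84CD91228FFC799B';
-- style 1 treats the string as hex with a 0x prefix
DECLARE @JobID uniqueidentifier = CONVERT(uniqueidentifier, CONVERT(varbinary(16), @JobIDHex, 1));

SELECT j.name AS JobName
FROM msdb.dbo.sysjobs j
WHERE j.job_id = @JobID;

The query below avoids that copy-and-paste step by doing the conversion inline for every running job step.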

SELECT p.spid, j.name AS 'Job Name', js.step_name AS 'Job Step',
    p.blocked, p.lastwaittype, p.dbid, p.cpu, p.physical_io, p.memusage, p.last_batch
FROM master.dbo.sysprocesses p
JOIN msdb.dbo.sysjobs j
    ON master.dbo.fn_varbintohexstr(convert(varbinary(16), j.job_id)) COLLATE Latin1_General_CI_AI
       = substring(replace(p.program_name, 'SQLAgent - TSQL JobStep (Job ', ''), 1, 34)
JOIN msdb.dbo.sysjobsteps js
    ON j.job_id = js.job_id
WHERE p.program_name like 'SQLAgent - TSQL%'
AND js.step_id = (substring(p.program_name, (CHARINDEX(': Step', p.program_name, 1) + 7),
    ((CHARINDEX(')', p.program_name, 1)) - (CHARINDEX(': Step', p.program_name, 1) + 7))))

Blog_20180216_2

It is handy to have both of the above queries in one query window so that the results are returned in one place, making it easier to associate running queries with their jobs. Hope this helps someone out as much as it did me.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

 

 

 

Notification for Restored Database

— by Ginger Keys

According to best practices for a SQL environment, you should never place your test or development databases on a production server. However, sometimes best practices are ignored, sometimes the DBA’s advice is disregarded (ahem…), or sometimes an organization simply does not have the financial resources to spin up separate servers for testing or development.

Whatever the reason, if your production server happens to have test or development databases on it, you need to make sure these databases are managed properly. Usually those databases get periodically restored with a copy of the production database, and you need to ensure there is enough room on the disk to accommodate all of the data and log files. Typically, the production database will be in Full recovery mode, possibly with a large transaction log file to accommodate daily transactions. The test or development database normally does not need the large transaction log file and usually does not need to be in Full recovery mode.
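If a restored copy does land on the production server in Full recovery mode, a common cleanup is to switch it to Simple recovery and shrink the oversized log. A minimal sketch, where TestDB and TestDB_log are placeholder names for your database and its log file:

ALTER DATABASE TestDB SET RECOVERY SIMPLE;
GO
USE TestDB;
GO
DBCC SHRINKFILE (TestDB_log, 512);   -- target log size in MB
GO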

Instead of manually checking each database to see when it might have been restored, create a SQL Agent job that will notify you when this happens so that you can go in proactively and manage the environment.

The following statement can be inserted in the SQL Agent Job step, and will provide a basic notification in the event your databases have been restored. Create the job to execute however often you need for your environment… (I have mine running once per week and have set the parameter for [restore_date] to check the last 7 days of activity).

--create a temp table to hold your data
create table #Restores
(
    [destination_database_name] nvarchar(50),
    [restore_date] date,
    [restore_type] nvarchar(20),
    [user_name] nvarchar(50)
)

--insert data into your temp table from the msdb.dbo.restorehistory table
;with LastRestores As
(
    select [destination_database_name],
        [restore_date],
        CASE [restore_type]
            WHEN 'D' THEN 'Database'
            WHEN 'F' THEN 'File'
            WHEN 'G' THEN 'Filegroup'
            WHEN 'I' THEN 'Diff'
            WHEN 'L' THEN 'Log'
            WHEN 'V' THEN 'VerifyOnly'
            ELSE NULL
        END AS restore_type,
        [user_name]
    from msdb.dbo.restorehistory
    where ([destination_database_name] like '%dev' OR [destination_database_name] like '%test')
        AND [restore_date] >= (GETDATE() - 7) --change to a timeframe appropriate to your environment
)
insert into #Restores
select [destination_database_name], [restore_date], [restore_type], [user_name]
from LastRestores

--if the temp table returns any rows, send a notification email
if (select count(*) from #Restores) > 0
begin
    exec msdb.dbo.sp_send_dbmail
        @profile_name = 'DBMailProfile',
        @recipients = 'SomeoneImportant@yourcompany.com',
        @subject = 'Test Database Restored',
        @body = 'A test or training database has been restored on InstanceName. Please make necessary adjustments.'
end

--drop your temp table
if (OBJECT_ID('tempdb..#Restores') is not null)
begin
    drop table #Restores
end

 

This is a simplistic statement to notify you of restore activity for databases you specify. You can get much more sophisticated if necessary by sending the query output as a file or html, or by triggering an immediate notification whenever the database is restored.
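For example, if you want the actual restore details included rather than just a generic message, sp_send_dbmail can attach query output. This is only a sketch, assuming the same DBMailProfile profile and naming convention used above:

exec msdb.dbo.sp_send_dbmail
    @profile_name = 'DBMailProfile',
    @recipients = 'SomeoneImportant@yourcompany.com',
    @subject = 'Test Database Restored',
    @body = 'Test or dev databases restored in the last 7 days are listed in the attached file.',
    @query = N'select destination_database_name, restore_date, user_name
               from msdb.dbo.restorehistory
               where (destination_database_name like ''%dev'' or destination_database_name like ''%test'')
               and restore_date >= dateadd(day, -7, getdate())',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'RestoredDatabases.txt';

Note that the @query text runs in its own session, which is why it reads msdb.dbo.restorehistory directly instead of the temp table.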

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Large MSDB Database From sysmaintplan_logdetail Table

— By Lori Brown @SQLSupahStah

I am recycling this blog post from 2015….”Why?”, you might ask…..Well, I am still running into instances with this exact issue. Soooo, here we go….again.  Enjoy.

I recently received a panicked call from a client who had a SQL instance go down because the server’s C drive was full. When he looked, he found that the msdb database file was 31 GB and was consuming all of the free space on the OS drive, causing SQL to shut down. He cleaned up some other old files so that SQL would work again, but did not know what to do about msdb.
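If you run into the same situation, a quick way to see which table inside msdb is actually consuming the space is to total up sys.dm_db_partition_stats. A minimal sketch:

USE msdb
GO
SELECT OBJECT_NAME(ps.object_id) AS TableName,
       SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS TotalRows   -- rows counted once, from the heap or clustered index
FROM sys.dm_db_partition_stats ps
GROUP BY ps.object_id
ORDER BY ReservedMB DESC;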

As we looked at it together, I found that the sysmaintplan_logdetail table was taking up nearly all of the space in the database. The SQL Agent had been set to keep only about 10,000 rows of history, but for some unknown reason the history was never being removed from the table. After consulting MSDN, I found that this code did the trick for truncating the table.

USE msdb

GO

ALTER TABLE [dbo].[sysmaintplan_log] DROP CONSTRAINT [FK_sysmaintplan_log_subplan_id];

GO

ALTER TABLE [dbo].[sysmaintplan_logdetail] DROP CONSTRAINT [FK_sysmaintplan_log_detail_task_id];

GO

TRUNCATE TABLE msdb.dbo.sysmaintplan_logdetail;

GO

TRUNCATE TABLE msdb.dbo.sysmaintplan_log;

GO

ALTER TABLE [dbo].[sysmaintplan_log] WITH CHECK ADD CONSTRAINT [FK_sysmaintplan_log_subplan_id] FOREIGN KEY([subplan_id])

REFERENCES [dbo].[sysmaintplan_subplans] ([subplan_id]);

GO

ALTER TABLE [dbo].[sysmaintplan_logdetail] WITH CHECK ADD CONSTRAINT [FK_sysmaintplan_log_detail_task_id] FOREIGN KEY([task_detail_id])

REFERENCES [dbo].[sysmaintplan_log] ([task_detail_id]) ON DELETE CASCADE;

GO

 

After the table was truncated, we were able to shrink the database to about 1 GB. For the record: I hate, hate, hate to shrink databases, but there were no other options left to us and we had to clear out some room on the drive.
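If you end up having to do the same shrink, it can be done with DBCC SHRINKFILE. This sketch assumes the default logical data file name MSDBData and a 1 GB target; adjust both for your instance:

USE msdb
GO
DBCC SHRINKFILE (MSDBData, 1024);   -- target size in MB
GO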

Now with the crisis averted we checked the SQL Agent settings and found that the box to remove agent history was not checked.

Blog_20150702_1

We checked it, hit OK, then opened the SQL Agent properties again only to find that the box was unchecked. After doing some research, I found that this is a bug that has not been resolved even in SQL 2014 (https://connect.microsoft.com/SQLServer/feedback/details/172026/ssms-vs-sqlagent-automatically-remove-agent-history-bugs). Awesome, huh?!

If you check the link there is a workaround posted. I have tested it and found that it takes a super long time to run sp_purge_jobhistory and my test server only has 2 jobs that would have any history at all. So, use the workaround if you feel brave. Hopefully Microsoft will actually fix this some time. Until then, keep an eye on your msdb database size.
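For reference, the purge in that workaround boils down to a call to sp_purge_jobhistory with a cutoff date. A minimal sketch (the 30-day window is just an example; widen or narrow it to taste):

DECLARE @OldestDate datetime = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @OldestDate;   -- removes job history older than the cutoff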

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at sqlrx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

 

 

List All Extended Properties For All Objects In A Database

— By Lori Brown @SQLSupahStah

I was recently looking at a way to easily identify if any objects in a database were In-Memory. Since many times I am tasked with taking on systems that I have never seen before, I have learned that it is important to have ways of checking to see if some of the more specialized features of SQL are in use. With this information, I am better prepared to manage a new SQL instance without inadvertently causing issues or ignoring something that should be on my DBA radar. So, finding all things that are In-Memory is important since more and more of my clients are using newer versions of SQL.

I can easily find tables and indexes that are columnstore by using this query:

— Tables with columnstore indexes — SQL 2012 +

select t.name as TablesWithColumnstoreInx, i.name as ColumnStoreIndex

from sys.indexes i

inner join sys.tables t

on i.object_id = t.object_id

where i.type = 5 or i.type = 6

But I was kind of stumped to find a good way to identify natively compiled stored procedures since, as far as I could tell, those are identified only through extended properties. After doing some research, I found someone who had the idea of loading all of the extended properties into a table so that it could be joined to sys.objects, passing object_ids into the OBJECTPROPERTYEX function and thereby returning a list of all extended properties for all objects. Unfortunately, I cannot find the original forum post that was so helpful and am unable to provide a hat tip to the correct person. But here is what I have settled on:

USE WideWorldImporters

GO

— Find extended properties on objects

CREATE TABLE #ObjectPropertyEXList (PropertyName nvarchar(50), PropertyAppliesTo nvarchar(256), PropertyDesc nvarchar(3000));

 

INSERT INTO #ObjectPropertyEXList (PropertyName, PropertyAppliesTo, PropertyDesc)

SELECT [PropertyName] = N’BaseType’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Identifies the base type of the object. When the specified object is a SYNONYM, the base type of the underlying object is returned. Nonnull = Object type Base data type: char(2)’ 

UNION ALL SELECT [PropertyName] = N’CnstIsClustKey’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’PRIMARY KEY constraint with a clustered index. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’CnstIsColumn’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’CHECK, DEFAULT, or FOREIGN KEY constraint on a single column. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’CnstIsDeleteCascade’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’FOREIGN KEY constraint with the ON DELETE CASCADE option. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’CnstIsDisabled’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’Disabled constraint. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’CnstIsNonclustKey’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’PRIMARY KEY constraint with a nonclustered index. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’CnstIsNotRepl’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’Constraint is defined by using the NOT FOR REPLICATION keywords. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’CnstIsNotTrusted’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’Constraint was enabled without checking existing rows. Therefore, the constraint may not hold for all rows. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’CnstIsUpdateCascade’, [PropertyAppliesToType] = N’Constraint’, [PropertyDesc] = N’FOREIGN KEY constraint with the ON UPDATE CASCADE option. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsAfterTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’AFTER trigger. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsAnsiNullsOn’, [PropertyAppliesToType] = N’Transact-SQL function, Transact-SQL procedure, Transact-SQL trigger, view’, [PropertyDesc] = N’The setting of ANSI_NULLS at creation time. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’ExecIsDeleteTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’DELETE trigger. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’ExecIsFirstDeleteTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’The first trigger fired when a DELETE is executed against the table. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsFirstInsertTrigger’, [PropertyAppliesToType] = N’Trigger‘, [PropertyDesc] = N’The first trigger fired when an INSERT is executed against the table. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsFirstUpdateTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’The first trigger fired when an UPDATE is executed against the table. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsInsertTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’INSERT trigger. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsInsteadOfTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’INSTEAD OF trigger. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’ExecIsLastDeleteTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’Last trigger fired when a DELETE is executed against the table. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsLastInsertTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’Last trigger fired when an INSERT is executed against the table. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsLastUpdateTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’Last trigger fired when an UPDATE is executed against the table. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’ExecIsQuotedIdentOn’, [PropertyAppliesToType] = N’Transact-SQL function, Transact-SQL procedure, Transact-SQL trigger, view’, [PropertyDesc] = N’Setting of QUOTED_IDENTIFIER at creation time. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’ExecIsStartup’, [PropertyAppliesToType] = N’Procedure’, [PropertyDesc] = N’Startup procedure. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’ExecIsTriggerDisabled’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’Disabled trigger. 1 = True 0 = False Base data type: int’    

UNION ALL SELECT [PropertyName] = N’ExecIsTriggerNotForRepl’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’Trigger defined as NOT FOR REPLICATION. 1 = True 0 = False Base data type: int’    

UNION ALL SELECT [PropertyName] = N’ExecIsUpdateTrigger’, [PropertyAppliesToType] = N’Trigger’, [PropertyDesc] = N’UPDATE trigger. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’ExecIsWithNativeCompilation’, [PropertyAppliesToType] = N’Transact-SQL Procedure’, [PropertyDesc] = N’Procedure is natively compiled. 1 = True 0 = False Base data type: int Applies to: SQL Server 2014 through SQL Server 2017.’  

UNION ALL SELECT [PropertyName] = N’HasAfterTrigger’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’Table or view has an AFTER trigger. 1 = True 0 = False Base data type: int’    

UNION ALL SELECT [PropertyName] = N’HasDeleteTrigger’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’Table or view has a DELETE trigger. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’HasInsertTrigger’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’Table or view has an INSERT trigger. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’HasInsteadOfTrigger’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’Table or view has an INSTEAD OF trigger. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’HasUpdateTrigger’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’Table or view has an UPDATE trigger. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsAnsiNullsOn’, [PropertyAppliesToType] = N’Transact-SQL function, Transact-SQL procedure, table, Transact-SQL trigger, view’, [PropertyDesc] = N’Specifies that the ANSI NULLS option setting for the table is ON, meaning all comparisons against a null value evaluate to UNKNOWN. This setting applies to all expressions in the table definition, including computed columns and constraints, for as long as the table exists. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsCheckCnst’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’CHECK constraint. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsConstraint’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Constraint. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsDefault’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Bound default. 1 = True 0 = False Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’IsDefaultCnst’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’DEFAULT constraint. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsDeterministic’, [PropertyAppliesToType] = N’Scalar and table-valued functions, view’, [PropertyDesc] = N’The determinism property of the function or view. 1 = Deterministic 0 = Not Deterministic Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsEncrypted’, [PropertyAppliesToType] = N’Transact-SQL function, Transact-SQL procedure, table, Transact-SQL trigger, view’, [PropertyDesc] = N’Indicates that the original text of the module statement was converted to an obfuscated format. The output of the obfuscation is not directly visible in any of the catalog views in SQL Server 2005. Users without access to system tables or database files cannot retrieve the obfuscated text. However, the text is available to users that can either access system tables over the DAC port or directly access database files. Also, users that can attach a debugger to the server process can retrieve the original procedure from memory at run time. 1 = Encrypted 0 = Not encrypted Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsExecuted’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Specifies the object can be executed (view, procedure, function, or trigger). 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsExtendedProc’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Extended procedure. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsForeignKey’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’FOREIGN KEY constraint. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsIndexed’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’A table or view with an index. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsIndexable’, [PropertyAppliesToType] = N’Table, view’, [PropertyDesc] = N’A table or view on which an index may be created. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsInlineFunction’, [PropertyAppliesToType] = N’Function’, [PropertyDesc] = N’Inline function. 1 = Inline function 0 = Not inline function Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsMSShipped’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’An object created during installation of SQL Server. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsPrecise’, [PropertyAppliesToType] = N’Computed column, function, user-defined type, view’, [PropertyDesc] = N’Indicates whether the object contains an imprecise computation, such as floating point operations. 1 = Precise 0 = Imprecise Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsPrimaryKey’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’PRIMARY KEY constraint. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsProcedure’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Procedure. 1 = True 0 = False Base data type: int’    

UNION ALL SELECT [PropertyName] = N’IsQuotedIdentOn’, [PropertyAppliesToType] = N’CHECK constraint, DEFAULT definition, Transact-SQL function, Transact-SQL procedure, table, Transact-SQL trigger, view’, [PropertyDesc] = N’Specifies that the quoted identifier setting for the object is ON, meaning double quotation marks delimit identifiers in all expressions involved in the object definition. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsQueue’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Service Broker Queue 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsReplProc’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Replication procedure. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsRule’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Bound rule. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’IsScalarFunction’, [PropertyAppliesToType] = N’Function’, [PropertyDesc] = N’Scalar-valued function. 1 = Scalar-valued function 0 = Not scalar-valued function Base data type: int’   

UNION ALL SELECT [PropertyName] = N’IsSchemaBound’, [PropertyAppliesToType] = N’Function, Procedure, view’, [PropertyDesc] = N’A schema bound function or view created by using SCHEMABINDING. 1 = Schema-bound 0 = Not schema-bound Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsSystemTable’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’System table. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsSystemVerified’, [PropertyAppliesToType] = N’Computed column, function, user-defined type, view’, [PropertyDesc] = N’The precision and determinism properties of the object can be verified by SQL Server. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsTable’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsTableFunction’, [PropertyAppliesToType] = N’Function’, [PropertyDesc] = N’Table-valued function. 1 = Table-valued function 0 = Not table-valued function Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsTrigger’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Trigger. 1 = True 0 = False Base data type: int’    

UNION ALL SELECT [PropertyName] = N’IsUniqueCnst’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’UNIQUE constraint. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’IsUserTable’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’User-defined table. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’IsView’, [PropertyAppliesToType] = N’View’, [PropertyDesc] = N’View. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’OwnerId’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’Owner of the object. System_CAPS_noteNote The schema owner is not necessarily the object owner. For example, child objects (those where parent_object_id is nonnull) will always return the same owner ID as the parent. Nonnull = Database user ID of the object owner. NULL = Unsupported object type, or object ID is not valid. Base data type: int’

UNION ALL SELECT [PropertyName] = N’SchemaId’, [PropertyAppliesToType] = N’Any schema-scoped object’, [PropertyDesc] = N’The ID of the schema associated with the object. Nonnull = Schema ID of the object. Base data type: int’  

UNION ALL SELECT [PropertyName] = N’SystemDataAccess’, [PropertyAppliesToType] = N’Function, view’, [PropertyDesc] = N’Object accesses system data, system catalogs or virtual system tables, in the local instance of SQL Server. 0 = None 1 = Read Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableDeleteTrigger’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a DELETE trigger. >1 = ID of first trigger with the specified type. Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableDeleteTriggerCount’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’The table has the specified number of DELETE triggers. Nonnull = Number of DELETE triggers Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableFullTextMergeStatus’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Whether a table that has a full-text index that is currently in merging. 0 = Table does not have a full-text index, or the full-text index is not in merging. 1 = The full-text index is in merging. Applies to: SQL Server 2008 through SQL Server 2016.’ 

UNION ALL SELECT [PropertyName] = N’TableFullTextBackgroundUpdateIndexOn’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’The table has full-text background update index (autochange tracking) enabled. 1 = TRUE 0 = FALSE Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’TableFulltextCatalogId’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’ID of the full-text catalog in which the full-text index data for the table resides. Nonzero = Full-text catalog ID, associated with the unique index that identifies the rows in a full-text indexed table. 0 = Table does not have a full-text index. Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’TableFullTextChangeTrackingOn’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has full-text change-tracking enabled. 1 = TRUE 0 = FALSE Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’TableFulltextDocsProcessed’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Number of rows processed since the start of full-text indexing. In a table that is being indexed for full-text search, all the columns of one row are considered as part of one document to be indexed. 0 = No active crawl or full-text indexing is completed. > 0 = One of the following: The number of documents processed by insert or update operations since the start of full, incremental, or manual change tracking population. The number of rows processed by insert or update operations since change tracking with background update index population was enabled, the full-text index schema changed, the full-text catalog rebuilt, or the instance of SQL Server restarted, and so on. NULL = Table does not have a full-text index. Base data type: int Note   This property does not monitor or count deleted rows. Applies to: SQL Server 2008 through SQL Server 2016.’

UNION ALL SELECT [PropertyName] = N’TableFulltextFailCount’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’The number of rows that full-text search did not index. 0 = The population has completed. >0 = One of the following: The number of documents that were not indexed since the start of Full, Incremental, and Manual Update change tracking population. For change tracking with background update index, the number of rows that were not indexed since the start of the population, or the restart of the population. This could be caused by a schema change, rebuild of the catalog, server restart, and so on NULL = Table does not have a Full-Text index. Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’ 

UNION ALL SELECT [PropertyName] = N’TableFulltextItemCount’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Nonnull = Number of rows that were full-text indexed successfully. NULL = Table does not have a full-text index. Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’ 

UNION ALL SELECT [PropertyName] = N’TableFulltextKeyColumn’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’ID of the column associated with the single-column unique index that is part of the definition of a full-text index and semantic index. 0 = Table does not have a full-text index. Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’TableFulltextPendingChanges’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Number of pending change tracking entries to process. 0 = change tracking is not enabled. NULL = Table does not have a full-text index. Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’    

UNION ALL SELECT [PropertyName] = N’TableFulltextPopulateStatus’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’0 = Idle. 1 = Full population is in progress. 2 = Incremental population is in progress. 3 = Propagation of tracked changes is in progress. 4 = Background update index is in progress, such as autochange tracking. 5 = Full-text indexing is throttled or paused. 6 = An error has occurred. Examine the crawl log for details. For more information, see the Troubleshooting Errors in a Full-Text Population (Crawl) section of Populate Full-Text Indexes. Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’TableFullTextSemanticExtraction’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is enabled for semantic indexing. 1 = True 0 = False Base data type: int Applies to: SQL Server 2012 through SQL Server 2016.’

UNION ALL SELECT [PropertyName] = N’TableHasActiveFulltextIndex’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has an active full-text index. 1 = True 0 = False Base data type: int Applies to: SQL Server 2008 through SQL Server 2016.’   

UNION ALL SELECT [PropertyName] = N’TableHasCheckCnst’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a CHECK constraint. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasClustIndex’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a clustered index. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasDefaultCnst’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a DEFAULT constraint. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasDeleteTrigger’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a DELETE trigger. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableHasForeignKey’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a FOREIGN KEY constraint. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableHasForeignRef’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is referenced by a FOREIGN KEY constraint. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasIdentity’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has an identity column. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableHasIndex’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has an index of any type. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasInsertTrigger’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Object has an INSERT trigger. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableHasNonclustIndex’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’The table has a nonclustered index. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableHasPrimaryKey’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a primary key. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasRowGuidCol’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a ROWGUIDCOL for a uniqueidentifier column. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableHasTextImage’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a text, ntext, or image column. 1 = True 0 = False Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableHasTimestamp’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a timestamp column. 1 = True 0 = False Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableHasUniqueCnst’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a UNIQUE constraint. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableHasUpdateTrigger’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’The object has an UPDATE trigger. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableHasVarDecimalStorageFormat’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is enabled for vardecimal storage format. 1 = True 0 = False’  

UNION ALL SELECT [PropertyName] = N’TableInsertTrigger’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has an INSERT trigger. >1 = ID of first trigger with the specified type. Base data type: int’    

UNION ALL SELECT [PropertyName] = N’TableInsertTriggerCount’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’The table has the specified number of INSERT triggers. >0 = The number of INSERT triggers. Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableIsFake’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is not real. It is materialized internally on demand by the Database Engine. 1 = True 0 = False Base data type: int’   

UNION ALL SELECT [PropertyName] = N’TableIsLockedOnBulkLoad’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is locked because a bcp or BULK INSERT job. 1 = True 0 = False Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableIsMemoryOptimized’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is memory optimized 1 = True 0 = False Base data type: int For more information, see In-Memory OLTP (In-Memory Optimization). Applies to: SQL Server 2014 through SQL Server 2016.’  

UNION ALL SELECT [PropertyName] = N’TableIsPinned’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table is pinned to be held in the data cache. 0 = False This feature is not supported in SQL Server 2005 and later versions.’  

UNION ALL SELECT [PropertyName] = N’TableTextInRowLimit’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has text in row option set. > 0 = Maximum bytes allowed for text in row. 0 = text in row option is not set. Base data type: int’

UNION ALL SELECT [PropertyName] = N’TableUpdateTrigger’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has an UPDATE trigger. > 1 = ID of first trigger with the specified type. Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’TableUpdateTriggerCount’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has the specified number of UPDATE triggers. > 0 = The number of UPDATE triggers. Base data type: int’ 

UNION ALL SELECT [PropertyName] = N’UserDataAccess’, [PropertyAppliesToType] = N’Function, View’, [PropertyDesc] = N’Indicates the object accesses user data, user tables, in the local instance of SQL Server. 1 = Read 0 = None Base data type: int’  

UNION ALL SELECT [PropertyName] = N’TableHasColumnSet’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Table has a column set. 0 = False 1 = True For more information, see Use Column Sets.’  

UNION ALL SELECT [PropertyName] = N’Cardinality’, [PropertyAppliesToType] = N’Table (system or user-defined), view, or index’, [PropertyDesc] = N’The number of rows in the specified object. Applies to: SQL Server 2012 through SQL Server 2016.’ 

UNION ALL SELECT [PropertyName] = N’TableTemporalType’, [PropertyAppliesToType] = N’Table’, [PropertyDesc] = N’Specifies the type of table. 0 = non-temporal table 1 = history table for system-versioned table 2 = system-versioned temporal table Applies to: SQL Server 2016 Community Technology Preview 2 (CTP2) through SQL Server 2016.’    

;

 

--List all object properties for a given object
-- Uncomment filters to find specific things
SELECT op.PropertyName
    , o.*
FROM sys.objects o
CROSS APPLY #ObjectPropertyEXList op
WHERE OBJECTPROPERTYEX(o.object_id, op.PropertyName) = 1
-- AND op.PropertyName = 'ExecIsWithNativeCompilation' -- uncomment this to look for a specific property
-- AND o.type = 'P' -- uncomment this to look for a specific object type
-- AND o.name = 'Colors_Archive' -- uncomment this to look for a specific object by name
ORDER BY o.name

 

/* List of object types

 

AF = Aggregate function (CLR)

C = CHECK constraint

D = DEFAULT (constraint or stand-alone)

F = FOREIGN KEY constraint

FN = SQL scalar function

FS = Assembly (CLR) scalar-function

FT = Assembly (CLR) table-valued function

IF = SQL inline table-valued function

IT = Internal table

P = SQL Stored Procedure

PC = Assembly (CLR) stored-procedure

PG = Plan guide

PK = PRIMARY KEY constraint

R = Rule (old-style, stand-alone)

RF = Replication-filter-procedure

S = System base table

SN = Synonym

SO = Sequence object

U = Table (user-defined)

V = View

SQ = Service queue

TA = Assembly (CLR) DML trigger

TF = SQL table-valued-function

TR = SQL DML trigger

TT = Table type

UQ = UNIQUE constraint

X = Extended stored procedure

ET = External Table

*/

 

DROP TABLE #ObjectPropertyEXList

 

I have verified that all of the properties loaded into #ObjectPropertyEXList come from the most recent list in the Microsoft documentation (https://docs.microsoft.com/en-us/sql/t-sql/functions/objectpropertyex-transact-sql) and have also tried to provide some extra ways to filter the data, so read the comments carefully.

Here are my results when I look only for objects that are natively compiled:

Blog_20180125_1

And here are the results when I list all extended properties on a single object:

Blog_20180125_2

Hey look!! The Colors_Archive table is a temporal table!! Good to know!!
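As a side note, on newer versions some of these specialized features can also be spotted directly from the catalog views, without the property table. A minimal sketch (the memory-optimized and natively compiled checks need SQL 2014+, the temporal check needs SQL 2016+):

USE WideWorldImporters
GO
-- Memory-optimized tables
SELECT name AS MemoryOptimizedTable
FROM sys.tables
WHERE is_memory_optimized = 1;

-- Natively compiled modules
SELECT OBJECT_NAME(object_id) AS NativelyCompiledModule
FROM sys.sql_modules
WHERE uses_native_compilation = 1;

-- Temporal tables
SELECT name AS TemporalTable, temporal_type_desc
FROM sys.tables
WHERE temporal_type <> 0;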

If the person who was so helpful in creating the extended properties table will let me know who they are, I will be happy to extend proper credit where it is due. I really could not find the source again despite several tries.

Hopefully you, the reader, will find this helpful. If you make improvements and would like to pass them along, please feel free to let me know and I will update this post with your ideas.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

 

Inconsistent Tempdb Growth

— by Ginger Keys

The tempdb database is a system database that is shared across all databases and all connections in SQL Server. As a general rule, I configure all of my clients’ servers to have multiple tempdb files with consistent sizes and consistent autogrowth, set according to best practices. The number of tempdb files created equals the number of cores or 8, whichever is lower, and the size set for each file depends on the individual SQL instance and what it takes to accommodate a typical workload.
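A quick way to check the core count and the current tempdb file layout against that rule is a sketch like this:

-- Logical processors visible to SQL Server
SELECT cpu_count FROM sys.dm_os_sys_info;

-- Current tempdb data files, sizes, and growth settings
SELECT name, physical_name,
       size * 8 / 1024 AS SizeMB,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS GrowthSetting
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';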

The Problem

Recently during a routine review of a client’s server, I noticed in the SQL Server Logs that there were messages indicating I/O pressure for the tempdb database.

Blog_20180111_1

I also noticed in Windows Explorer that my tempdb files were no longer uniform in size. In fact the first data file had grown way out of proportion compared to all the other data files:

Blog_20180111_2

After double-checking the database properties, I confirmed that these files should have grown in consistent amounts:

Blog_20180111_3

So what happened to cause this inconsistent growth and how should I fix it?

Tempdb is used (and can grow) when users explicitly create objects like temp tables, variables, cursors, or stored procedures. Tempdb is also used when the database engine creates work tables for sorts or spooling, and row versioning operations.

It’s often difficult to diagnose what happened on a SQL Server after the fact. There are many resources online for determining what is currently using tempdb, but not much information about processes that may have run days ago.

What Happened?

First, I want to see what is currently using tempdb to find out if there are any processes using all that space in the database. The TSQL statements in the following links are helpful in determining who is currently using tempdb:

https://littlekendra.com/2009/08/27/whos-using-all-that-space-in-tempdb-and-whats-their-plan/

http://www.sqlservercentral.com/scripts/tempdb/72007/

https://blog.sqlauthority.com/2015/01/23/sql-server-who-is-consuming-my-tempdb-now/
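In addition to the scripts in those links, a quick breakdown of what kind of object is consuming tempdb space right now can be pulled from sys.dm_db_file_space_usage. A minimal sketch:

USE tempdb
GO
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS UserObjectsMB,     -- temp tables, table variables
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS InternalObjectsMB, -- sorts, spools, work tables
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS VersionStoreMB,    -- row versioning
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS FreeMB
FROM sys.dm_db_file_space_usage;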

If you find plans that are using tempdb and causing substantial growth, it is probably a good opportunity for some performance tuning.

In my situation there was nothing significant currently using tempdb, so whatever caused the growth had happened before I discovered the problem. Since I could not see any active queries creating temp tables, stored procedures, cursors, or variables, I wanted to see if there were row versioning operations occurring.

If snapshot isolation is enabled on a database, performance can improve because readers no longer block on locks held against the underlying tables. However, in order to do this SQL Server keeps row versions of the data (as it existed at the start of the transaction) in the version store in tempdb, so that all queries in the transaction see the same version (or snapshot) of the database. This can take up a large amount of space in tempdb, depending on the size of the tables in the database being queried.

Run this statement to find any user databases with snapshot isolation enabled:

select * from sys.databases

where (snapshot_isolation_state = 1 or is_read_committed_snapshot_on = 1)

and database_id > 4

My client had a few databases with snapshot isolation enabled, so we had a conversation about the reasoning behind these settings. However it was still unclear as to why the tempdb grew and why the files grew out of proportion. As stated earlier, unless you are actively collecting performance data for this type of behavior, it is often very difficult to troubleshoot the cause of the problem after the fact. So what do we do about it?

Solution

When your tempdb files grow beyond the original size set in the properties and grow out of proportion, the best solution is to grow all of the files to a uniform size, as long as there is room on your disk. This is the preferred approach because if the tempdb files grew, the workload apparently needed that much room to perform the tasks that caused the growth. You can adjust the size of your tempdb files using the GUI, or you can run the following statement for each data file:

USE master

GO

ALTER DATABASE tempdb

MODIFY FILE (NAME = tempdev, SIZE = 2000MB)

If there is not enough space on your drive and you need to resolve the issue right now, shrink the tempdb files down to a uniform size. Keeping your tempdb data files the same size helps to avoid page contention issues.

This can be done through the GUI by selecting each individual data file and specifying the desired size, or you can run this statement for each data file:

USE tempdb

GO

DBCC SHRINKFILE (tempdev, 1000)

GO

 

If for some reason the data files don’t shrink to the size you specified, try running this

DBCC FREEPROCCACHE

then execute the SHRINKFILE statement again. Do this during a maintenance window, of course, and not during production hours while users are connected.

Prevention and Best Practice

Put your tempdb on its own volume if possible. If your tempdb is on the same drive as your user database files and that drive runs out of space, it will be difficult to restart SQL Server. If your tempdb is on its own drive you can utilize all of the space for your tempdb files and also reduce I/O contention.

Create as many data files as you have processors, up to a maximum of 8. (Note that a dual-core CPU counts as 2 CPUs.) This will reduce allocation contention and is more scalable. There is much debate about the necessity of this practice; treat it as a general guideline.

Make each data file the same size. This helps to avoid SGAM page contention issues, and will allow for optimal proportional-fill performance. Set the size of your data files large enough to accommodate a typical workload. And if you have tempdb on its own volume, size your files to use the majority of the space on the volume. This way SQL doesn’t have to stop activity in order to grow the files.

Set the autogrowth to a fixed amount (not a percentage) and to a reasonable size. If the growth increment is too small compared to the amount of data written to tempdb, the files will have to expand too often. While a file is auto-growing, transactions that need the new space must wait until the growth process completes. Note: every time the SQL Server instance is restarted, tempdb is recreated at the original size specified in the database properties.
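Both the fixed growth setting and any additional data files can be scripted rather than set through the GUI. A sketch, where the logical names, path, and sizes are placeholders for your environment:

-- Set a fixed autogrowth increment on an existing tempdb data file
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILEGROWTH = 256MB);

-- Add another data file of the same size with the same growth setting
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2000MB, FILEGROWTH = 256MB);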

Enable Trace Flag 1117. If you are on a version earlier than SQL Server 2016, enabling trace flag 1117 will ensure that all of your tempdb files grow together when any one of them reaches the autogrow threshold. (Starting with SQL Server 2016 this behavior is built in for tempdb and the flag is unnecessary.) Trace flag 1117 is a global setting and will affect every database in your instance, not just tempdb. You can enable the trace flag by going to SQL Server Configuration Manager > SQL Server Services > right-click SQL Server (MSSQLSERVER) > Properties > Startup Parameters > enter -T1117 and click Add. You must restart the service for the change to take effect.

Blog_20180111_4
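If a service restart has to wait, the flag can also be turned on for the currently running instance; note that enabling it this way does not persist across restarts:

DBCC TRACEON (1117, -1);    -- -1 enables the flag globally
DBCC TRACESTATUS (1117);    -- verify it is on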

Finally, you can minimize tempdb utilization by practicing the following:

  • Avoid SORT_IN_TEMPDB option
  • Avoid unnecessary cursors
  • Avoid spooling (CTEs referenced multiple times)
  • Avoid using MARS
  • Don’t enable snapshot isolation unless there is a very compelling reason to do so
  • Avoid using triggers for bulk operations
  • Avoid using LOBs as local variables

Reference: https://dba.stackexchange.com/questions/19870/how-to-identify-which-query-is-filling-up-the-tempdb-transaction-log

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

AlwaysOn – Connection Handshake Failure

— by Ginger Keys

While doing some work on a test environment recently, I realized the AlwaysOn Group was failing. (On a production system I would have had several alerts set up to notify me of any issues.) The dashboard on my test instance showed that the secondary replica was not synchronizing, and clicking the blue warning links showed that the secondary was actually not connected:

Blog_20171221_1

Blog_20171221_2

Blog_20171221_3

I confirmed that both the primary and secondary instances were up and running, but the secondary Availability Replica and Databases were not connected to the AlwaysOn Group:

Blog_20171221_4

Problem

The AlwaysOn Health Events were consistent with the SQL Server Logs. I discovered that the Windows Server Failover Cluster had been down (but was back online), and that the AlwaysOn Group had failed over and back. But the culprit behind my secondary replica and databases being disconnected was an endpoint issue. I found in my SQL Server logs that the service account running AlwaysOn had, for some reason, lost its permission to connect to the endpoint:

Blog_20171221_5

Message

Database Mirroring login attempt by user 'Domain\user.' failed with error: 'Connection handshake failed. The login 'Domain\user' does not have CONNECT permission on the endpoint. State 84.'. [CLIENT: 10.0.0.0]

Solution

Thankfully the solution to this issue was pretty simple. Run the following script on both the primary and secondary instances to grant the service account running AlwaysOn connect permission:

GRANT CONNECT ON ENDPOINT::hadr_endpoint TO [Domain\user]

GO

This immediately corrected the problem, and the secondary replica and databases were connected to the AlwaysOn Group without any further action.
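If you want to double-check which accounts hold CONNECT permission on the endpoint after granting it, a query along these lines should show the grants (a sketch):

SELECT ep.name AS EndpointName,
       sp.permission_name,
       sp.state_desc,
       pr.name AS Grantee
FROM sys.server_permissions sp
JOIN sys.server_principals pr ON sp.grantee_principal_id = pr.principal_id
JOIN sys.endpoints ep ON sp.major_id = ep.endpoint_id
WHERE sp.class_desc = 'ENDPOINT'
  AND sp.permission_name = 'CONNECT';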

Unfortunately, the cause of issues on a SQL Server is not always apparent after the fact, which is why it is important to be proactive and set up robust alerts and notifications. For more information on setting up alerting and monitoring for your AlwaysOn Group, see https://blog.sqlrx.com/2015/08/27/alwayson-monitoring-and-alerting/ .

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

 

 

 

Rx for Demystifying Index Tuning Decisions – Part 8

— by Jeffry Schwartz

Review

Due to the holiday week, this part will be shorter than usual. In Parts 1 through 7 of this blog series the author discussed various aspects of SQL Server indices such as:

  • Types of indices and their variations, e.g., clustered, nonclustered, filtered, and covering
  • Physical versus logical index access methods, e.g., seeks, scans, key/RID lookups, singleton lookups, and range scans
  • Why index tuning should begin with queries
  • Determining an Appropriate Strategy for Index Tuning
  • Determining Queries that Need Indices the Most
  • Capturing query performance information using Extended Events
  • Using query plans to determine relationships between queries and indices
  • Query Optimizer
  • Statistics and how they affect index usage and query performance
  • Consequences of too many indices
  • Overlapping Indices
  • Overview of Dynamic Management Functions and a detailed discussion of the sys.dm_db_index_usage_stats DMV
  • Detailed discussion of the sys.dm_db_index_operational_stats DMF (and how to use it)

 

https://blog.sqlrx.com/2017/10/26/rx-for-demystifying-index-tuning-decisions-part-1/

https://blog.sqlrx.com/2017/11/02/rx-for-demystifying-index-tuning-decisions-part-2/

https://blog.sqlrx.com/2017/11/09/rx-for-demystifying-index-tuning-decisions-part-3/

https://blog.sqlrx.com/2017/11/16/rx-for-demystifying-index-tuning-decisions-part-4/

https://blog.sqlrx.com/2017/11/30/rx-for-demystifying-index-tuning-decisions-part-5/

https://blog.sqlrx.com/2017/11/30/rx-for-demystifying-index-tuning-decisions-part-6/

https://blog.sqlrx.com/2017/12/07/rx-for-demystifying-index-tuning-decisions-part-7/

 

Part 8 discusses how to determine missing indices and their estimated impact if implemented, as well as how to incorporate missing index recommendations either into existing indices or into as few new indices as possible.

Missing Index Information

SQL Server monitors missing index warnings (same ones as shown in query plans) and records the estimates of the relative cost of not having the recommended index as queries execute. I/O plays a dominant role in these statistics because the expected I/O amounts affect estimated cost and improvement factors. Execution counts also affect these numbers. The impact shown in Figure 9 is quite high and indicates that the query could make good use of the recommended index if it were implemented. In this example, the key lookup accounts for all the work performed by the query and is no doubt the reason for the missing index recommendation.

Blog_20171214_1

Figure 9: Missing Index Example

 

Missing Index DM Views

Four views are used to derive the missing index statistics recommendations:

  • sys.dm_db_missing_index_columns
  • sys.dm_db_missing_index_details
  • sys.dm_db_missing_index_group_stats
  • sys.dm_db_missing_index_groups

These four DMV/DMFs are dependent upon the missing index warnings that would appear in SSMS query plans. Although Table 13 does not show the actual index recommendations, it does show the numerics involved. Improvement Measure and User Impact are frequently good indicators of potentially useful indices. The number of query executions, referred to as user seeks in the DMV, is also a good indicator of an index that would benefit multiple query executions. This metric can also be useful for locating specific queries that might need this index because the counts can sometimes limit the search candidates. The reader should note that the names of some tables are repeated multiple times, indicating that multiple index recommendations were suggested by SQL Server. Generally, improvement measures of one million or higher should be considered seriously. The formula for improvement measure is as follows: avg_total_user_cost * avg_user_impact * (user_seeks + user_scans). Note: User_scans are almost always zero. Although the 894 million value shown in Table 13 may seem very high, the values shown in Table 14 emphasize how bad things can get.

Blog_20171214_2

Table 13: Consolidated Missing Index DM Views Sample Report

Blog_20171214_2a

Table 14: Extremely High Missing Index Values
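Reports like Tables 13 and 14 can be produced by joining the four missing index DMVs and computing the improvement measure formula given above. A minimal sketch (not the exact report query used here):

SELECT TOP 25
       mid.statement AS TableName,
       migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) AS ImprovementMeasure,
       migs.user_seeks,
       migs.avg_total_user_cost,
       migs.avg_user_impact,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns
FROM sys.dm_db_missing_index_group_stats migs
INNER JOIN sys.dm_db_missing_index_groups mig
        ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid
        ON mig.index_handle = mid.index_handle
ORDER BY ImprovementMeasure DESC;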

 

Table 15 summarizes the recommendations by table and shows another way this data can be viewed. The numbers of recommended indices for the jkl_abc.dbo.dly_st_mtc_smy, jkl_abc.dbo.hr_st_mtc_smy, and jkl_def.dbo.nglentUS2017 tables are instructive because they are 28, 52, and 51, respectively. Clearly, these cannot all be implemented without creating significant overhead and repeating the excessive indexing problems cited earlier in this blog series. As indicated by the Total Improvement Measure and the Recommended Index Count, the jkl_abc.dbo.dly_st_mtc_smy table needs a great deal of index tuning help. The jkl_def.dbo.KDS_spofsvc table’s indexing issues should be simpler to resolve because SQL Server only recommended three indices.

Blog_20171214_3

Table 15: Missing Index Improvement by Table

 

It is also important to realize that these recommendations are driven by the queries, so SQL Server will very often provide many similar recommendations since it does not compare existing indices with recommended ones. The recommendations often make extensive use of included columns, even to the extent that sometimes almost all of the columns in the table are recommended for inclusion. SQL Server treats every recommended index as if it were new, even if an existing index could be modified slightly to accommodate the recommendation. The duplication issue is shown clearly in Table 16. All of these recommendations are for the same table, and only those recommendations that begin with CID as the first key are shown. Most of these recommendations are not worth implementing given their low improvement measures, but the 274,170 one might be worth considering. The table also shows that 35 indices already exist on this table, so adding more indices is not advisable unless absolutely necessary. Even then, every opportunity to drop an existing index to make room for a new one should be exercised. Using a cross-tabular format like the one below makes comparing the various recommendations much easier. In addition, the reader should note the operator in parentheses (= or <). The equality ones make things much easier because, as long as both CID and CO are present, the order is not as critical. Therefore, an existing index that already had CO as the first key might be useful if the CID key were added. Using the color-coded cross-tab format also makes it much easier to identify the included columns that are common to or different from those of the other missing index recommendations.

Blog_20171214_4

Table 16: Missing Index Cross-Tab Example

 

Some articles suggest that the included columns in missing index recommendations are in no particular order. As discussed in the article at the following link, the included columns are actually produced in TABLE column order, regardless of the order specified in the query that generated the missing index recommendation: https://blog.sqlrx.com/2017/06/02/query-tuning-and-missing-index-recommendations. Two other important points should be made regarding these recommendations and the DMVs. The cited DMVs also provide the data for the Database Engine Tuning Advisor, although Microsoft says the tool is more comprehensive and “much better.” It is often tempting to assume that any poorly performing query will generate some sort of warning, especially those that perform full scans. As cited previously, versions of SQL Server prior to SQL Server 2014 never seemed to generate any missing index recommendations for queries that performed full table or index scans. Under certain conditions, the newer optimizer does seem to generate missing index recommendations for some full scans. However, if the query makes extensive use of numerous nested loops, recommendations will often still not be produced by SQL Server. For further information regarding the missing index DMVs, please consult the following links:

http://blogs.msdn.com/b/bartd/archive/2007/07/19/are-you-using-sql-s-missing-index-dmvs.aspx and http://www.mssqltips.com/tip.asp?tip=1634.

Database Engine Tuning Advisor (DTA)

This tool replays a single, previously captured workload and enables the analyst to determine how that workload might perform on a new server, with a new version of SQL Server, or with a revised index structure. Unfortunately, the analyst does not have a great deal of control over how the workload is replayed; it is most often driven by a single T-SQL script or a trace table replay. The analyst can choose to implement the recommendations manually or automatically. Since the tool uses the missing index DMVs, it often implements many of the indices with the highest improvement factors without regard to existing indices, i.e., no consideration is given to whether functionality would be duplicated or whether any existing index could be adjusted to satisfy the query’s needs. This often results in significant duplication of indices, i.e., the overlapping indices that were discussed earlier. Since it often favors covering indices, it frequently implements indices that duplicate large portions of the table in order to make the index a covering one. The new indices use the _DTA prefix to distinguish them from user-generated indices.
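
If the tool has been allowed to implement its recommendations automatically, the following sketch can be used to inventory what it created; it assumes the default _dta naming prefix was not overridden.

-- List indices created by the Database Engine Tuning Advisor (default _dta prefix)
SELECT OBJECT_SCHEMA_NAME(i.[object_id]) AS SchemaName,
       OBJECT_NAME(i.[object_id]) AS TableName,
       i.name AS IndexName,
       i.type_desc
FROM sys.indexes i
WHERE i.name LIKE N'[_]dta%'
ORDER BY SchemaName, TableName, IndexName;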

Alternatives to Database Engine Tuning Advisor

Several methods exist for replaying a workload. One method is to replay the workload using Profiler and reload the database after each test. Other methods must be used when the databases being tested are huge and would therefore require many hours to reload. One such method is to use a read-only database, but this approach does not work as well as it once did. On earlier releases of SQL Server, a query that changed the database would execute up to the point at which the database change was attempted. Attempting this method on SQL Server 2016 does not work because the query fails immediately. Another method is to run the inquiry-only queries in parallel while controlling thread levels, and to run the update queries sequentially, wrapping each query that changes the database in a begin tran/rollback tran pair. This causes the updates to take longer, not only because the queries are run sequentially, but also because work is required to roll the transaction back. Fortunately, the work performed by the query is recorded separately from the rollback portion, so before-and-after query performance comparisons are still easy to make. A variation of the last method is to execute only the queries that work with specific tables, which allows the testing to be more surgical.
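
A minimal sketch of the begin tran/rollback tran approach is shown below; dbo.SomeTable and the UPDATE statement are placeholders for whichever captured query changes the database.

-- Replay a data-changing query without permanently altering the test database.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

BEGIN TRANSACTION;

UPDATE dbo.SomeTable            -- placeholder for the captured workload query
SET SomeColumn = SomeColumn + 1
WHERE SomeFilterColumn = 42;

ROLLBACK TRANSACTION;           -- undo the change; the rollback work is reported separately from the query itself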

Conclusion

Index tuning can be difficult, but it definitely can be accomplished. It is essential to understand how queries and indices interact, as well as how to read and understand query plans, Index Usage and Operational Stats metrics, and Missing Index recommendations and DMVs. Capturing performance data from a SQL Server instance that has been running for as long as possible is critical to ensure that index usage is fully understood; the longer the instance has been running, the more comprehensive these metrics will be. Using some form of SQL trace replay is the ONLY way to perform comprehensive index tuning because it is surprising how many times index usage will be counterintuitive. Therefore, the analyst MUST see it in action! Significant index redesign can often be completed with 2-5 weeks’ worth of effort, and the results can be quite dramatic as shown in this series.

This concludes the Rx for Demystifying Index Tuning Decisions blog series. Hopefully, the reader has found the series enlightening and useful. Please let us know if you have any questions or suggestions for additional articles. Best of luck with all your tuning efforts and please contact us with any questions.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Rx for Demystifying Index Tuning Decisions – Part 7

— by Jeffry Schwartz

Review

In Parts 1 through 6 of this blog series the author discussed various aspects of SQL Server indices such as:

  • Types of indices and their variations, e.g., clustered, nonclustered, filtered, and covering
  • Physical versus logical index access methods, e.g., seeks, scans, key/RID lookups, singleton lookups, and range scans
  • Why index tuning should begin with queries
  • Determining an Appropriate Strategy for Index Tuning
  • Determining Queries that Need Indices the Most
  • Capturing query performance information using Extended Events
  • Using query plans to determine relationships between queries and indices
  • Query Optimizer
  • Statistics and how they affect index usage and query performance
  • Consequences of too many indices
  • Overlapping Indices
  • Overview of Dynamic Management Functions and a detailed discussion of the sys.dm_db_index_usage_stats DMV

https://blog.sqlrx.com/2017/10/26/rx-for-demystifying-index-tuning-decisions-part-1/

https://blog.sqlrx.com/2017/11/02/rx-for-demystifying-index-tuning-decisions-part-2/

https://blog.sqlrx.com/2017/11/09/rx-for-demystifying-index-tuning-decisions-part-3/

https://blog.sqlrx.com/2017/11/16/rx-for-demystifying-index-tuning-decisions-part-4/

https://blog.sqlrx.com/2017/11/30/rx-for-demystifying-index-tuning-decisions-part-5/

https://blog.sqlrx.com/2017/11/30/rx-for-demystifying-index-tuning-decisions-part-6/

 

Part 7 contains a detailed discussion of the sys.dm_db_index_operational_stats DMF (and how to use it).

sys.dm_db_index_operational_stats DMF

The sys.dm_db_index_operational_stats DMF requires four parameters and returns one row per index or heap partition. It provides additional information regarding how the seeks and scans reported by the sys.dm_db_index_usage_stats DMV are actually implemented within SQL Server. For example, this DMF records how many range and table scans or single record retrievals were performed against an index or heap. Note: although heaps are not indices, their activity is recorded by this DMF using an index ID of zero. Other important information regarding leaf and non-leaf level page and row lock and latch wait times is also provided. Finally, page split information for both the leaf and non-leaf levels is provided via this DMF as well.

Four parameters are required:

{ database_id | NULL | 0 | DEFAULT } (use db_id() for current db)

{ object_id | NULL | 0 | DEFAULT }

{ index_id | NULL | -1 | DEFAULT }

{ partition_number | NULL | 0 | DEFAULT }

 

Use NULL parameters to obtain information for all available entities for a given level, e.g., databases or tables. The following command will return information for all databases, tables, indices, and partitions that are attached to a particular SQL Server instance: select * from sys.dm_db_index_operational_stats (NULL, NULL, NULL, NULL).
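
When only a single table is of interest, the scope can be narrowed with db_id() and object_id(); dbo.SomeTable below is a placeholder table name.

-- Operational stats for one table in the current database (all of its indices and partitions)
SELECT *
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID(N'dbo.SomeTable'), NULL, NULL);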

Combining the outputs of sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats enables the analyst to determine whether the majority of usage seeks actually resulted in index range scans or single record lookups. This combination can also be used to identify the approximate percentage of heap or clustered single record accesses that resulted from RID/Key lookups because these are usually performed in loops of single record accesses. Another excellent source of information involves row, page, page latch, and page I/O latch wait times because these can point to specific tables and indices (and indirectly, keys) that are involved in application delays. Finally, invaluable information involving page splits is also available via this DMF. Note: all of this information is available for both leaf and non-leaf levels. The lists below are provided to aid the reader in addressing specific performance problems, and a query sketch that combines the two DMVs follows them.

To analyze common table or index partition access patterns, use:

  • leaf_insert_count
  • leaf_delete_count
  • leaf_update_count
  • leaf_ghost_count
  • range_scan_count
  • singleton_lookup_count

To identify overhead and contention caused by latching and locking, use:

  • row_lock_count and page_lock_count

These values indicate how many times the Database Engine tried to acquire row and page locks (overhead).

  • row_lock_wait_in_ms and page_lock_wait_in_ms

These values indicate whether lock contention exists on the index or heap, and how significant that contention is.

  • page_latch_wait_count and page_latch_wait_in_ms

These values indicate whether latch contention exists on the index or heap, and how significant that contention is.

To analyze statistics of physical I/Os on an index or heap partition, use:

  • page_io_latch_wait_count and page_io_latch_wait_in_ms

These values indicate how many physical I/Os were issued to bring index or heap pages into memory and how much waiting was involved.
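
Building on the combination described above, the following is a minimal sketch that joins the two DMVs for the current database; it is illustrative only, and partitioned objects will return one row per partition from the DMF.

-- Compare logical usage (seeks/scans/lookups) with physical access patterns
-- (range scans vs. singleton lookups) for each index in the current database
SELECT OBJECT_NAME(us.[object_id]) AS TableName,
       i.name AS IndexName,
       us.user_seeks, us.user_scans, us.user_lookups,
       os.range_scan_count, os.singleton_lookup_count,
       os.page_io_latch_wait_in_ms
FROM sys.dm_db_index_usage_stats us
JOIN sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) os
     ON os.database_id = us.database_id
    AND os.[object_id] = us.[object_id]
    AND os.index_id = us.index_id
JOIN sys.indexes i
     ON i.[object_id] = us.[object_id]
    AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY os.range_scan_count DESC;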

Table 12 demonstrates the kind of report that is available using this data. The highlighted SQL Server tables in this example show that the ix_ci_RowNumEvent index is used almost exclusively for single record lookups, whereas the ix_RowNumEvent index is only range scanned. ix_CETRowNum and ix_Checksum are used similarly. This kind of report can be extended easily to include various wait types as well.

Blog_20171207_1

Table 12: Index Operational Stats Summary

 

The next and final article in this blog series will cover determining missing indices and their estimated impact if implemented, as well as incorporating missing index recommendations either into existing indices or into as few new indices as possible. Until then…good luck with your tuning efforts and please contact us with any questions.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Rx for Demystifying Index Tuning Decisions – Part 6

— by Jeffry Schwartz

Review

In Parts 1 through 5 of this blog series the author discussed various aspects of SQL Server indices such as:

  • Types of indices and their variations, e.g., clustered, nonclustered, filtered, and covering
  • Physical versus logical index access methods, e.g., seeks, scans, key/RID lookups, singleton lookups, and range scans
  • Why index tuning should begin with queries
  • Determining an Appropriate Strategy for Index Tuning
  • Determining Queries that Need Indices the Most
  • Capturing query performance information using Extended Events
  • Using query plans to determine relationships between queries and indices
  • Query Optimizer
  • Statistics and how they affect index usage and query performance
  • Consequences of too many indices
  • Overlapping Indices

 

Part 6 contains an overview of dynamic management functions as well as a detailed discussion of the sys.dm_db_index_usage_stats DMV (and how to use it).

 

Dynamic Management Views and Functions

Dynamic Management Views and Functions are “designed to give you a window into what’s going on inside SQL Server.” There are two types: DMVs, which are pure views that require no parameters, and DMFs, which are table-valued functions that require parameters. The parameters usually specify the database, table, index, partition, etc. They provide a significant amount of information regarding the system, databases, performance, and internal workings of SQL Server. Most DMVs and DMFs are simple to use, but one must reconcile the numeric IDs they return with the static views that contain the corresponding textual names.
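
The distinction is easiest to see side by side; the two statements below use the index-related DMV and DMF that are covered in this series.

-- DMV: a pure view, queried with no parameters
SELECT * FROM sys.dm_db_index_usage_stats;

-- DMF: a table-valued function requiring parameters (NULLs request all entities)
SELECT * FROM sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL);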

Most values are accumulated from the last SQL Server instance restart. To determine interval-specific values, one must calculate the differences between individual sample records; one must be careful to difference records with the same database ID, object ID, index ID, and, possibly, partition ID. This data is perfect for periodic or intermittent sampling because no data is lost during the sampling process. Note: sys.dm_db_index_operational_stats may be the exception if a table is used intermittently, because when tables have not been used for a period of time they are removed from the cache and their metrics are cleared. Capture rates can range from every 30 seconds to a few times per day; the need for granular analysis normally dictates the collection frequency.

Although several methods for collecting data exist, two general usage scenarios are instructive.

Method 1

  1. Capture a snapshot before monitored activities begin and store the results in a SQL table or a spreadsheet
  2. Execute the workload (whether natural production or test)
  3. Capture a snapshot again and compare value differences
  4. Load into spreadsheets for further analysis, if necessary

Method 2

  1. Capture a snapshot every <n> minutes and store results in a flat file
  2. After the workload and capture processes are complete, load data into SQL tables
  3. Use SQL Server to compare incremental value differences and store the results (see the sketch after this list)
  4. Extract interval data to spreadsheets for further analysis
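
To make the differencing step concrete, the following is a minimal sketch that compares two snapshots of sys.dm_db_index_usage_stats; dbo.IndexUsageSnapshot1 and dbo.IndexUsageSnapshot2 are placeholder tables populated from the DMV at the two capture times.

-- Calculate interval-specific values by differencing records with the same
-- database ID, object ID, and index ID.
SELECT s2.database_id,
       s2.[object_id],
       s2.index_id,
       s2.user_seeks   - ISNULL(s1.user_seeks, 0)   AS user_seeks_delta,
       s2.user_scans   - ISNULL(s1.user_scans, 0)   AS user_scans_delta,
       s2.user_lookups - ISNULL(s1.user_lookups, 0) AS user_lookups_delta,
       s2.user_updates - ISNULL(s1.user_updates, 0) AS user_updates_delta
FROM dbo.IndexUsageSnapshot2 s2
LEFT JOIN dbo.IndexUsageSnapshot1 s1
       ON s1.database_id = s2.database_id
      AND s1.[object_id] = s2.[object_id]
      AND s1.index_id = s2.index_id
ORDER BY s2.[object_id], s2.index_id;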

Several general and static views are required for converting numeric DMV and DMF identifiers into understandable text:

sys.databases

Lists all databases and their IDs so proper associations can be made

sys.partitions

The only way to decode HOBT (Heap or B-Tree) IDs returned by lock-specific information, e.g., blocked process records and sys.dm_os_waiting_tasks

sys.configurations

Provides information regarding OS and SQL Server configurations

Several database-specific views are needed to convert various database-specific IDs into understandable text. Each of these must be interrogated for each database separately.

sys.objects

Lists all database objects such as tables, views, stored procedures, etc.

sys.indexes

Lists all indices and their associated table IDs

Does not provide row counts as sysindexes does

sys.filegroups

Lists all file groups and their IDs

sys.database_files

Lists all physical database files and their IDs

sys.schemas

Lists all database schemas
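
As a simple illustration of how these views resolve numeric IDs, the query below lists every index in the current database with its schema and table names; it is a sketch only and ignores partitions and row counts.

-- Resolve object, schema, and index IDs into readable names for user tables
SELECT s.name AS SchemaName,
       o.name AS TableName,
       i.index_id,
       i.name AS IndexName,
       i.type_desc
FROM sys.indexes i
JOIN sys.objects o ON o.[object_id] = i.[object_id]
JOIN sys.schemas s ON s.[schema_id] = o.[schema_id]
WHERE o.[type] = 'U'            -- user tables only
ORDER BY SchemaName, TableName, i.index_id;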

Index Related Dynamic Management Views and Functions

Two DMV/DMFs are used to obtain index-related performance information, and they are BOTH needed to obtain an accurate and comprehensive perspective regarding index usage. The differences between physical and logical index access metrics were discussed earlier in this series. sys.dm_db_index_usage_stats returns information regarding the query code usage of tables and indices, e.g., inserts, updates, deletes, random accesses, and sequential accesses. These metrics most closely match Query Plan operators, and the information is retained much longer than the operational (physical) stats, so it is quite useful for long-term evaluation of index usage, i.e., whether indices are used a great deal or not at all. sys.dm_db_index_operational_stats “returns current low-level I/O, locking, latching, and access method activity for each partition of a table or index in the database.” These metrics track actual index and table operations, e.g., singleton lookups and range scans. Information for certain structures may be deallocated when a table is no longer in the cache, so this is not a reliable source of long-term index usage information.

sys.dm_db_index_usage_stats DMV

The sys.dm_db_index_usage_stats DMV requires no input parameters and returns one row per index, reporting the number of seeks, scans, lookups, and updates for user and system queries. As discussed earlier in this series, seeks are random or pseudo-random (a single record or a range of records, as long as a key is used). Scans are fully sequential. The time of the last seek, scan, lookup, and update for user queries is also provided by this view. The metrics are divided into two categories, system and user. The system category is comprised of work generated by maintenance activities, e.g., statistics updates. The user category is comprised of insert, update, delete, and select operations. The metrics report user statements, not record counts. Therefore, one insert statement can result in millions of rows being inserted, yet the metric will show one, not millions. This DMV helps determine index and table usage patterns and is particularly useful for identifying indices that are seldom, if ever, used by queries, especially if the SQL Server instance has been running for a very long time, e.g., months. As cited previously, these metrics are much more trustworthy for long-term evaluations than the operational stats because of possible cache removal issues. Table 8 illustrates the type of information that is available from this view. Each of the highlighted indices was usually accessed sequentially, i.e., a full scan. Since one of the tables contained 71 million rows and the other contained almost 34 million rows, these statistics clearly demonstrate tuning opportunities. Reminder: random accesses can be either single record lookups or a filtered range scan. Table 9 shows data from an actual customer’s SQL Server instance that had been up for 271 days, i.e., approximately nine months. This view highlights the indices that are fully scanned and those that are hardly accessed. The highlighted row in Table 9 shows an index that is updated, but never used. It is easy to become distracted by all the update activity and lose sight of the fact that the index had not been used for inquiry purposes in 271 days.

Blog_20171130_1b

Table 8: Index Usage Stats Summary

 

Blog_20171130_2b

Table 9: Index Usage To-Date Summary (271 Days)

 

The following code uses sys.dm_db_index_usage_stats to list rarely-used indices for a specific database with resolved names and its output appears in Table 10.

declare @dbid int = db_id()

-- List each index on user tables in the current database, least-used first
select objectname = object_name(inxusage.[object_id]), inxusage.[object_id],
       indexname = sysinx.name, sysinx.index_id,
       user_seeks, user_scans, user_lookups, user_updates
from sys.dm_db_index_usage_stats inxusage
join sys.indexes sysinx
  on sysinx.[object_id] = inxusage.[object_id]
 and sysinx.index_id = inxusage.index_id
where database_id = @dbid
  and objectproperty(inxusage.[object_id], 'IsUserTable') = 1
order by (user_seeks + user_scans + user_lookups + user_updates) asc

Blog_20171130_3b

Table 10: Example – Unused Tbl1 Indices

 

Table 11 illustrates the kind of overall summary report that this view can generate when it is combined with static index information and the missing index DMVs that will be discussed later. This table shows clearly which SQL Server tables have several indices, how many are duplicated or unused, and how many others were suggested by SQL Server. The row counts are often invaluable for this kind of overall analysis, which can direct an index tuning study.

Blog_20171130_4b

Table 11: Index/Table Overall Summary

 

The next article in this blog series will cover the sys.dm_db_index_operational_stats DMF in detail. Until then…good luck with your tuning efforts and please contact us with any questions.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Rx for Demystifying Index Tuning Decisions – Part 5

— by Jeffry Schwartz

Review

Due to the holiday week, this part will be shorter than usual. In Parts 1 through 4 of this blog series the author discussed various aspects of SQL Server indices such as:

  • Types of indices and their variations, e.g., clustered, nonclustered, filtered, and covering
  • Physical versus logical index access methods, e.g., seeks, scans, key/RID lookups, singleton lookups, and range scans
  • Why index tuning should begin with queries
  • Determining an Appropriate Strategy for Index Tuning
  • Determining Queries that Need Indices the Most
  • Capturing query performance information using Extended Events
  • Using query plans to determine relationships between queries and indices
  • Query Optimizer
  • Statistics and how they affect index usage and query performance
  • Consequences of too many indices

 

Part 5 covers the following topic:

  • Overlapping Indices

Overlapping Indices

Overlapping indices duplicate the functionality of one another in some way. The easiest way to understand this concept is to see an actual example as shown in Table 6. The simplest example is Inx4, which completely duplicates Inx3. Since the table contained almost 17 million records, this is a very undesirable situation. Table 7 highlights the fact that SQL Server only used Inx3 ONCE, so this provides even more incentive to eliminate Inx3. This data also indicates that Inx1 can be removed because it was never used. Note: the usage data was obtained using two different index DMVs/DMFs, which will be discussed later.

Blog_20171130_1a

Table 6: Overlapping Index Examples

 

Blog_20171130_2a

Table 7: Overlapping Index Usage

 

Examining the index layouts further, we can see clearly that Inx1 through Inx4 could be consolidated into one index that would handle all usage. The composite index is shown below. It may be possible to exclude CurrCode and ECID from the composite index, but more research would need to be conducted before that change could be made safely. Specifically, since the number of User Lookups on the table was quite high, some queries are not finding all the columns they require in the existing indices, and two of those columns may be CurrCode and ECID. Query plans are invaluable in answering these kinds of questions.

Blog_20171130_3a

At first glance, it appears that Inx7 and Inx8 might be combined as well. However, since the secondary keys are different and both are used for range scanning only (see Table 7), there is no certain way to combine these and also handle the queries that use them properly. Therefore, it is best to leave these alone.

The information above was determined using the following system views: sys.all_columns, sys.databases, sys.dm_db_partition_stats, sys.index_columns, sys.indexes, and sys.tables.
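
A simplified sketch of that kind of query appears below; it uses only a subset of the views named above (omitting database names and the row counts available from sys.dm_db_partition_stats) and lists each index’s key and included columns in order so that overlaps like those in Table 6 can be spotted.

-- List each index's key columns (in key order) and included columns
SELECT t.name AS TableName,
       i.name AS IndexName,
       ic.is_included_column,
       ic.key_ordinal,
       c.name AS ColumnName
FROM sys.tables t
JOIN sys.indexes i        ON i.[object_id] = t.[object_id]
JOIN sys.index_columns ic ON ic.[object_id] = i.[object_id] AND ic.index_id = i.index_id
JOIN sys.all_columns c    ON c.[object_id] = ic.[object_id] AND c.column_id = ic.column_id
ORDER BY TableName, IndexName, ic.is_included_column, ic.key_ordinal;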

The next article in this blog series will cover dynamic management views. Until then…happy tuning and please contact us with any questions.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!