Category Archives: Tip of the Month

Get a List of Tables That Are Compressed

Use the query below to find out whether any tables in your database are compressed. Compressed tables can save space and can make queries run faster, since fewer pages have to be read from disk.

-- Compressed Tables
select distinct t.name as CompressedTables
from sys.partitions p
inner join sys.tables t
on p.object_id = t.object_id
where p.data_compression > 0

If you don’t have any compressed tables but think you might want to compress some, you can estimate the space savings by running the stored procedure sp_estimate_data_compression_savings against your targeted table.

USE WideWorldImporters;
EXEC sp_estimate_data_compression_savings 'Sales', 'Invoices', NULL, NULL, 'ROW';
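If the estimate looks worthwhile, compression can then be applied with a rebuild. A minimal sketch, reusing the Sales.Invoices table from the estimate above (choose ROW or PAGE based on what the estimate showed):

```sql
-- Apply row compression to the table itself
ALTER TABLE Sales.Invoices REBUILD WITH (DATA_COMPRESSION = ROW);

-- Or apply page compression to all indexes on the table
ALTER INDEX ALL ON Sales.Invoices REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Remember that a rebuild takes locks and uses CPU, so schedule it for a quiet window on large tables.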

List Partitioned Tables And Other Info About Them

— By Lori Brown @SQLSupahStah

Use the query below to find out whether any tables in your database are partitioned. It is important to know which tables are partitioned so that you can then dig into how they are partitioned.

-- Partitioned Tables
SELECT DISTINCT t.name AS PartitionedTables
FROM sys.partitions p
INNER JOIN sys.tables t
ON p.object_id = t.object_id
WHERE p.partition_number <> 1

If you have partitioned tables, here is a good way to find out how they are partitioned.

-- Get partition info
SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]
, p.partition_number AS [p#]
, fg.name AS [filegroup]
, p.rows
, au.total_pages AS pages
, CASE boundary_value_on_right
WHEN 1 THEN 'less than'
ELSE 'less than or equal to' END AS comparison
, rv.value
, CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +
SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),
CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +
SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +
SUBSTRING (au.first_page, 1, 1))) AS first_page
FROM sys.partitions p
INNER JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
INNER JOIN sys.objects o
ON p.object_id = o.object_id
INNER JOIN sys.system_internals_allocation_units au
ON p.partition_id = au.container_id
INNER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions f
ON f.function_id = ps.function_id
INNER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
LEFT OUTER JOIN sys.partition_range_values rv
ON f.function_id = rv.function_id
AND p.partition_number = rv.boundary_id
WHERE i.index_id < 2
AND o.object_id = OBJECT_ID('dbo.SomeTableName');

List Tables That May Be Over Indexed

— By Lori Brown

While not having enough indexes can be bad for query performance, having too many indexes can be just as bad. Use the query below to get a list of tables in your database that have more than 10 indexes.

-- Tables with large number of indexes
select t.name as TablesWithLargeNumInx, count(i.index_id) as CountIndexes
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.index_id > 0
group by t.name
having count(i.index_id) > 10

If you suspect that you have too many indexes on your tables, you can also check the sys.dm_db_index_usage_stats dynamic management view to see whether the indexes on your heavily indexed tables are being used well. (Hint: seeks are good; scans, not so much.)

select i.name, u.user_seeks, u.user_lookups, u.user_scans
from sys.dm_db_index_usage_stats u
inner join sys.indexes i
on u.object_id = i.object_id and u.index_id = i.index_id
where u.object_id = OBJECT_ID('dbo.SomeTableName')

March 2016 Tip of the Month: Full Text Indexing – Clean up those log files!

If you have Full Text catalogs set up in your databases, you may notice that your log folder contains lots of files named SQLFTxxxxxxxxxxxx.LOG. These are known as crawl logs, and they record information about full text indexing activity. These logs can really pile up, and there is no automated process that trims older files. Unfortunately, this means a DBA may have to manually delete old files, but be very careful not to delete any of the other files that SQL uses in that folder, such as the error log, default trace, and system health session files.


You can move the location of these files by changing the default error log path in your startup parameters, but you have to plan on a short outage while SQL is stopped and restarted. Sometimes these files can become quite large and hard to open. You may need to run EXEC sp_fulltext_recycle_crawl_log @ftcat = 'FTCatalogName' to make SQL close the current crawl log and start a new one.
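As a sketch, you can first list the full text catalogs in a database and then recycle the crawl log for each one (the catalog name below is an example):

```sql
-- List full text catalogs in the current database
SELECT name FROM sys.fulltext_catalogs;

-- Close the current crawl log for a catalog and start a new one
EXEC sp_fulltext_recycle_crawl_log @ftcat = 'FTCatalogName';
```

Once the crawl log has been recycled, the older SQLFT files are no longer in use and can be archived or deleted.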


For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us! We will be happy to help. Leave a comment and feel free to track back to us.

Feb 2016 Tip of the Month – New function in SQL 2016: sys.dm_exec_function_stats


This DMV returns performance statistics for cached functions. It returns information about scalar functions, including in-memory and CLR scalar functions, but not table-valued functions. The view returns one row for each cached function plan, and the row lives as long as the function remains in the plan cache. When a function is removed from the cache, the corresponding row is no longer available in the view.

SELECT [object_id],
OBJECT_NAME(object_id, database_id) 'function name',
execution_count,
total_elapsed_time/execution_count AS [avg_elapsed_time],
total_worker_time,
total_logical_reads,
total_physical_reads
FROM sys.dm_exec_function_stats AS fs
ORDER BY [total_logical_reads] DESC


More information from Microsoft can be found here:


Jan 2016 Tip of the Month: sys.dm_server_memory_dumps

January 2016 Tip of the Month:  Using sys.dm_server_memory_dumps to find SQL memory dumps.

When SQL experiences a serious issue, it will often dump the contents of its memory to disk. This is known as a memory dump, and the files it generates are usually located in the Logs folder. I often find that many SQL DBAs are unaware that SQL may be experiencing issues until they start seeing some scary looking messages in their error logs. It is possible for SQL to have an issue that is serious but not bad enough to cause a shutdown. I have also found that memory dump files can be quite large and may need to be cleaned up periodically. If you query the dynamic management view sys.dm_server_memory_dumps, you will get a list of memory dumps along with the location of the files.

SELECT [filename], creation_time, size_in_bytes
FROM sys.dm_server_memory_dumps


If you find that SQL is producing memory dumps, you should be alarmed and may need to send the files off to Microsoft for evaluation. You can find more on the sys.dm_server_memory_dumps on MSDN at:


How to connect using the Dedicated Administrator Connection

The SQL dedicated administrator connection (DAC) has been around since SQL 2005. It is a diagnostic connection designed to allow an administrator to troubleshoot SQL Server when the instance is having problems. Only members of the sysadmin role can connect using the DAC. You can connect using the DAC through SQLCMD (sqlcmd -A -d master) and in SQL Server Management Studio (ADMIN:<instance_name>).



If you don’t have remote admin connections enabled on the instance, you will only be able to connect to the DAC while logged into the server. It’s pretty simple to enable the setting. Just run this:

EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;




While you can run many T-SQL statements while connected using the DAC, there are limitations. For instance, you can’t run backup or restore commands while connected using the DAC. You should stick to queries that check DMVs, such as sys.dm_tran_locks, sys.dm_os_schedulers, sys.dm_exec_requests, and sys.dm_exec_sessions, as well as some DBCC statements such as DBCC FREEPROCCACHE and DBCC SQLPERF. More on the DAC and its limitations here:
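As a sketch, once connected through the DAC you might run lightweight DMV queries like these to find what is tying up the instance (nothing here is specific to the DAC; these are simply the kinds of cheap queries that are safe to run over it):

```sql
-- Requests that are currently blocked and who is blocking them
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;

-- Scheduler health at a glance (user schedulers only)
SELECT scheduler_id, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;
```

Keep DAC queries small and avoid anything that allocates heavily, since the DAC runs on a single dedicated scheduler with limited resources.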


SQL 2014 Backup Encryption

Available in SQL Server 2014 Enterprise Edition and Standard Edition

Backup encryption was introduced in SQL 2014 to allow encrypting backups without being forced to encrypt the entire database. Several encryption algorithms are supported: Advanced Encryption Standard (AES) 128, AES 192, AES 256, and Triple DES. You can also compress the encrypted backup. Encryption causes higher CPU utilization, so you must make sure your CPUs can handle the backup along with the regular workload.

Here is what to do to encrypt your backup:

  1. Create a master key with a password
  2. Create a certificate or asymmetric key
  3. Back up the database, setting your algorithm and the certificate that was created.

-- (certificate, password, and database names below are examples)
USE master;
GO

--Create a master key with a password
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>';
GO

--Create a certificate or asymmetric key
USE master;
GO

CREATE CERTIFICATE MyDBBackupCert
WITH SUBJECT = 'Backup Cert For My Database';
GO

--Backup the database
BACKUP DATABASE MYDB
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\MYDB.bak'
WITH
COMPRESSION,
ENCRYPTION
(
ALGORITHM = AES_256,
SERVER CERTIFICATE = MyDBBackupCert
),
STATS = 10;
GO


Tip of the Month: Buffer Pool Extension

— By Lori Brown  @SQLSupahStah

Available in SQL Server 2014 Enterprise Edition and Standard Edition x64 only

Buffer Pool Extension is a way to use solid state disks (SSDs) as nonvolatile RAM (NvRAM) to extend the buffer pool beyond physical memory. It reduces latency. It stores clean, committed pages only, so there is no risk of data loss, and it gives faster access to data that would otherwise be slow because it would have to be read from disk. Buffer Pool Extension can provide performance gains for read-heavy OLTP workloads.

Turn on Buffer Pool Extension with an ALTER SERVER CONFIGURATION statement. FILENAME is the path to the .BPE file that will be located on the SSD disk, and SIZE is the size of the file in GB.
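A minimal sketch (the file path and size are examples; the path must be on the SSD volume):

```sql
-- Enable Buffer Pool Extension
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'E:\SSDCACHE\MYBPEFILE.BPE', SIZE = 50 GB);

-- Verify the current state
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
```

Microsoft recommends sizing the extension at no more than a few multiples of max server memory; to turn it off, use SET BUFFER POOL EXTENSION OFF.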




Tip of the Month: WAIT_AT_LOW_PRIORITY option for Index Rebuilds

— By Lori Brown  @SQLSupahStah

Available in SQL Server 2014 Enterprise Edition, since it can only be used with ONLINE rebuilds.

Index rebuilds have been enhanced with the WAIT_AT_LOW_PRIORITY option. It is used to set the duration in minutes to wait, as well as the behavior when blocking happens. When setting WAIT_AT_LOW_PRIORITY you must also set MAX_DURATION and ABORT_AFTER_WAIT to tell SQL how long to wait for blocking and what to do after that amount of time has expired if blocking is still happening.

ALTER INDEX ix_RoadRaces_State
ON dbo.RoadRaces
REBUILD WITH (ONLINE = ON (
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = NONE)));

The ABORT_AFTER_WAIT options are:
  • NONE = Keep waiting for the lock with normal priority
  • SELF = Abort the online index rebuild itself, giving way to the user queries
  • BLOCKERS = Kill all user transactions that block the online index rebuild