Category Archives: Miscellaneous

Log Connections to SQL Instance

— by Ginger Keys

If you ever need to monitor connections to your SQL Server, along with related information about those connections such as database, login, etc., there are some DMVs that can give you tons of information. Previously you might have used the sys.sysprocesses table to derive much of this information, but that view is deprecated in the most recent versions of SQL Server.

Instead, you can collect valuable information from these DMVs:

  • sys.dm_exec_sessions
  • sys.dm_exec_connections
In order to capture and retain connection information for my SQL server, I will create a small database and a table to hold some basic information. Of course you can alter the script to include more, less, or different data than what I am demonstrating below, to better fit your specific information needs.

I will create a database and a table, then insert data from two of the DMVs listed above.

Step 1 – Create a table to hold login activity

-- Create a database

USE master
GO

CREATE DATABASE [Connections]
ON PRIMARY

( NAME = N'Connections', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Connections.mdf' ,

SIZE = 1024MB , FILEGROWTH = 512MB )

LOG ON

( NAME = N'Connections_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Connections_log.ldf' )
GO

— Create table to hold Login info

USE [Connections]
GO

CREATE TABLE [dbo].[LoginActivity]
(

host_name [nvarchar](128) NULL,

program_name [nvarchar](128) NULL,

login_name [nvarchar](128) NOT NULL,

client_net_address [nvarchar](48) NULL,

DatabaseName [nvarchar](128) NOT NULL,

login_time [datetime] NOT NULL,

status [nvarchar](30) NOT NULL,

date_time [datetime] NOT NULL
)
GO


Step 2 – Insert Data into Table

If you need to retain or archive this connection information, you can create a database which will hold the information, or export the results to a spreadsheet or other file. Otherwise you can simply select the information from the DMV below if you only need to see current data.


USE Connections
GO

INSERT INTO LoginActivity

-- run the following select statement by itself to see connection info if you don't want to save the output

SELECT
s.host_name,

s.program_name,

s.login_name,

c.client_net_address,

d.name AS DatabaseName,

s.login_time,

s.status,

GETDATE() AS date_time

FROM sys.dm_exec_sessions s

JOIN sys.dm_exec_connections c ON s.session_id = c.session_id

JOIN sys.databases d ON d.database_id = s.database_id

--WHERE d.name = 'ABCompany' --can specify databases if needed

WHERE s.login_time >= DATEADD(hh, -10, GETDATE()) --date range can be adjusted


Step 3 – View/Save Output Results

After inserting the data into my table, I can see the current connections from the last 10 hours (as per my insert statement). On a production server, this list would be far greater.

SELECT * FROM LoginActivity


From the columns I have included in my table:

host_name – gives you the name of the workstation connecting; shows NULL for internal sessions.

program_name – tells you the name of the client program or application connecting.

client_net_address – provides the host address of each client connecting.

login_name, DatabaseName, and login_time – self-explanatory.

date_time – the current day and time the query is run.

status – gives the status of the session, which will be running, sleeping, dormant, or preconnect.

This information can also be output to a text or Excel file if preferred.



Being able to see users or applications making connections to your SQL Server can be useful or necessary for many reasons. The steps outlined above provide a general guideline for deriving connection information that can be altered to fit your organizational needs.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at We will be happy to help! Leave a comment and feel free to track back to us. Visit us at!

Find Tables That Have Special Features Enabled

Find out if any of the tables in your database have special features enabled using the queries below. These features need to be understood and carefully managed.

-- CDC Enabled Tables

select distinct t.name AS CDCTables

from sys.tables t

where t.is_tracked_by_cdc = 1


-- File Tables -- SQL 2012 +

select distinct t.name AS FileTables

from sys.tables t

where t.is_filetable = 1


-- Temporal Tables -- SQL 2016 +

select distinct t.name AS TemporalTables

from sys.tables t

where t.temporal_type > 0


-- Stretch Enabled Tables -- SQL 2016 +

select distinct t.name AS StretchTables

from sys.tables t

where t.is_remote_data_archive_enabled > 0


-- External Tables -- SQL 2016 +

select distinct t.name AS ExternalTables

from sys.tables t

where t.is_external > 0


Links for new DMVs in SQL 2017

Since we at SQLRX have been super busy this week and still need to do something for a blog post, I thought that doing a link round up of new dynamic management views that will be available in SQL 2017 would be a quick and good idea. I am really interested in sys.dm_db_log_info since it gives VLF info and can be used for monitoring and alerting.
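To sketch why sys.dm_db_log_info is handy for monitoring, something like the following returns a VLF count per database that you could alert on. Treat this as a sketch: the HAVING threshold of 100 is an example value, not a recommendation from the original post.

```sql
-- Sketch: VLF count per database using the SQL 2017 sys.dm_db_log_info DMF.
-- The threshold (100) is just an example alerting value.
SELECT d.name AS DBName,
       COUNT(*) AS VLFCount
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
HAVING COUNT(*) > 100   -- only flag databases with lots of VLFs
ORDER BY VLFCount DESC;
```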



A column on sys.dm_db_file_space_usage has been changed:

Brand new DMVs:







Yee Haw!!!!


Is This Database Being Used?

— By Ginger Keys

During the migration of a client’s SQL environment to a new hosting provider it was questioned whether all of the databases needed to be migrated. Nobody seemed to know whether several of the databases were used or what they were for. One person suggested we just not move them, and see if anybody screamed or if it broke anything. Well, I guess that’s one way to find out if a database is being used. But there should be more definitive ways to do this, right?

There really is no straightforward way to determine this, but there are several things we can look at to get a better idea.

  • Look at any current connections
  • Capture logins/connections over a period of time
  • Observe index usage
  • Look at transaction count
  • Find dependencies with Jobs or other DBs

Current connections

sp_who2 is a well-known stored procedure that returns information about current users, sessions and processes. You can run exec sp_who2 to return all sessions belonging to the instance, or you can filter to return only the active sessions:

--Find active connections to the instance

USE master;
GO

EXEC sp_who2 'active';


Information about processes on your instance can also be derived from sys.sysprocesses, though this system view will be deprecated in future releases of SQL Server. The view returns data about both client and system processes running on the instance. The following statements filter data relating to a specific database:

--Find number of active connections to database

USE master;
GO

SELECT DB_NAME(dbid) AS DBName,

COUNT(dbid) AS NumberOfConnections,

spid, loginame, login_time, last_batch, status

FROM sys.sysprocesses

WHERE DB_NAME(dbid) = 'AdventureWorks2016' --insert your database name here

GROUP BY dbid, spid, loginame, login_time, last_batch, status



--Active Connections to Database with connecting IP address

SELECT s.session_id,

s.host_name,

s.login_name,

c.client_net_address,

db_name(s.database_id) AS DBName,

s.status,

GETDATE() AS date_time

FROM sys.dm_exec_sessions AS s

INNER JOIN sys.dm_exec_connections AS c ON s.session_id = c.session_id

INNER JOIN sys.sysprocesses AS p ON s.session_id = p.spid

WHERE DB_NAME(p.dbid) = 'AdventureWorks2016' --insert your database name here


Connections over time

It might be more beneficial to watch the connections to a database over a period of time instead of looking at only the current connections. In order to gather this data over time, you could create a trace through SQL Server Profiler. Simply run the trace, export it as a definition file, and import it into a table to query the results. You can also create a SQL Server Audit to record successful logins, but these are at the server level, not the database level. For a good explanation on how to perform both of these tasks, click here. Depending upon how long you run the Profiler trace or Audit, these files can take up a considerable amount of space, so be sure to monitor your disk space carefully.
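On SQL 2012 and later, a server-side Extended Events session is a lighter-weight alternative to a Profiler trace for capturing connections over time. The sketch below is an assumption-laden example, not from the original post: the session name and target file path are placeholders you would adjust.

```sql
-- Sketch: capture logins to a file target (session name and path are examples)
CREATE EVENT SESSION [CaptureLogins] ON SERVER
ADD EVENT sqlserver.login
    (ACTION (sqlserver.client_hostname, sqlserver.username, sqlserver.nt_username))
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\CaptureLogins.xel');
GO
ALTER EVENT SESSION [CaptureLogins] ON SERVER STATE = START;
```

The .xel files can later be read with sys.fn_xe_file_target_read_file and loaded into a table for querying, much like the imported Profiler trace.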


Index Usage

Another way to see if your database is in use is to look and see if the indexes are being used. Information on index usage is held in the sys.dm_db_index_usage_stats view since the last server reboot, and can be queried using this statement, which can be tailored to select the data you need.


SELECT DB_NAME(database_id) DatabaseName,

index_id,

last_user_seek,

last_user_scan,

last_user_lookup,

last_user_update

FROM sys.dm_db_index_usage_stats

WHERE db_name(database_id) = 'AdventureWorks2016' --insert your database name here



The statement above will show you the date and time the indexes for your database were last used. For the reverse of that, if you want to see which databases have not had their indexes used since the last server reboot, run this statement:

SELECT name AS DatabaseName

FROM sys.databases

WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')

EXCEPT

SELECT DB_NAME(database_id) AS DatabaseName

FROM sys.dm_db_index_usage_stats

Transaction Count for the Database

Checking to see whether the number of transactions is increasing for a database is another way to see if it is being used. You can query sys.dm_os_performance_counters for Transactions/sec and run this several times to see if the count is increasing or not. Or you can open Perfmon and watch it there as well.

--Transaction count increasing?

SELECT cntr_value AS TransactionCount

FROM sys.dm_os_performance_counters

WHERE counter_name LIKE 'Transactions/sec%'

AND instance_name LIKE 'AdventureWorks2016%' --insert your database name here



--I waited a few minutes and executed the select statement again
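The "wait and run it again" step can be scripted. Transactions/sec is stored as a cumulative counter (Perfmon derives the per-second rate from successive samples), so sampling it twice and taking the difference shows activity during the interval. A sketch, with an example one-minute wait:

```sql
-- Sketch: sample the cumulative Transactions/sec counter twice and compare
DECLARE @t1 BIGINT, @t2 BIGINT;

SELECT @t1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%';   -- insert your database name here

WAITFOR DELAY '00:01:00';   -- example interval

SELECT @t2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%';

SELECT @t2 - @t1 AS TransactionsDuringInterval;  -- zero suggests no activity
```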


Database Dependencies

Occasionally other databases or linked servers will connect to your database. To see objects in your database that reference other databases, run this statement:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,

referenced_database_name,

referenced_schema_name,

referenced_entity_name

FROM sys.sql_expression_dependencies

WHERE referenced_database_name IS NOT NULL

AND is_ambiguous = 0;


For finding objects that reference linked servers, use this:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,

referenced_server_name,

referenced_database_name,

referenced_schema_name,

referenced_entity_name

FROM sys.sql_expression_dependencies

WHERE referenced_server_name IS NOT NULL

AND is_ambiguous = 0;

Database dependencies can be a very in-depth topic, and the statements above are only meant for high-level information about connections to your database. For more information about this topic click here.


There are countless ways to determine if your database is being used. Other methods include checking whether any execution plans in the cache reference the database, seeing if reads/writes are happening, looking at lock_acquired events, and many other methods I haven't thought of. The methods outlined above provide a useful starting point to investigate who/what is connecting and whether your database is active. Yes, you can just take the database offline or detach it, or even delete it and see if anyone fusses. But it's much more prudent to take a look around at some simple things first to make that determination.
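One of those other signals (whether reads are happening) can be approximated from the buffer pool: if a database has pages cached, something has been reading it since the last restart. This query is my addition, a sketch rather than part of the original checklist:

```sql
-- Sketch: buffer pool usage per database as a rough "is anything reading it?" signal
SELECT DB_NAME(database_id) AS DBName,
       COUNT(*) * 8 / 1024 AS BufferMB   -- data pages are 8KB each
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY BufferMB DESC;
```

As with index usage stats, an empty result for a database only means nothing has touched it since the last service restart, not that it is never used.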

Feel free to comment with other methods you have used…we would love to hear from you.



Login Failures Bloating SQL Server Log

–By Ginger Keys

After migrating a client’s SQL instances to VMs from physical servers, I noticed the following week that the SQL Server Log files had tons of failed login attempts from what looked like an application. These attempts were happening several times per second, and had totally bloated the SQL Server logs.


This is not desirable because 1) obviously something should not be attempting and failing to connect that much, and 2) it makes it super cumbersome to monitor more critical messages in the SQL logs when so many login errors are bloating the log. Too many failed login attempts could mean that the application has not been set with the correct permissions to access the databases needed, or it could be an indication of a hacking attempt.

I ran the script below to determine how many failed login attempts had occurred in the last 7 days, and the count came back with over 3 million failed attempts!

-- Create a temporary table

CREATE TABLE #FailedLogins

(Logdate DATETIME,

Process VARCHAR(20),

Text VARCHAR(4000))


-- Insert data from SQL Server Log into temp table

INSERT INTO #FailedLogins

EXEC xp_readerrorlog 1, 1, N'Login Failed', N'LoginName', '20170309', '20170316'


-- For more info about xp_readerrorlog click here

--Count number of occurrences

SELECT COUNT(Text) AS LoginFailures, Text

FROM #FailedLogins

GROUP BY Text


--Drop temp table

DROP TABLE #FailedLogins


As it turns out it was a legacy system moved from the old environment that needed to be adjusted to connect to the new environment. I didn’t exactly get to put on my superwoman cape and solve the problem, but at least I was able to identify the issue and direct it to the appropriate IT staff to correct.

As part of your everyday duties as a DBA it is prudent to monitor your SQL Server Logs to catch hacking attempts or malicious activity with your instance, and also to look for errors that could be occurring. By default, SQL contains 7 error log files (one current, and six archives). Error logs contain critical information about your SQL server. A new log is created each time the SQL service starts, and the oldest archived log gets deleted. For security reasons it is best practice to keep a large number of error logs, depending on the space you have available. Someone with access to your SQL server could execute sp_cycle_errorlog and regenerate the logs at a rate that could ‘delete’ their malicious activity or cover up the fact that they have gained access to your system. If you have highly sensitive data or stringent auditing requirements you can increase the number of retained error logs up to 99, just be sure to monitor your disk space.
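Increasing the number of retained error logs can be done in SSMS (right-click SQL Server Logs and choose Configure) or, equivalently, by writing the registry value that dialog sets. A sketch; the retention count of 30 is just an example:

```sql
-- Sketch: retain 30 error logs (same setting as the SSMS "Configure SQL Server Error Logs" dialog)
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 30;
```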


SQL Server Memory In Task Manager – Tell Your Manager to Not Worry!

-by Ginger Keys

We are going to re-promote this subject because it is still so relevant. All too often we run into people who are in a panic after looking at Task Manager to try to diagnose SQL stress. They seem to become very worried when looking at how much memory SQL uses, not realizing that is as expected.

Task Manager is just that, a task manager. If you are looking at it to get accurate values for memory (RAM) usage or CPU time, you will get not-so-accurate information: false alarms, and information that doesn't exactly mean what it implies.

What I usually see when I look at Task Manager is that CPU is low, disk usage is low, and memory used is high – which is what we expect on a SQL machine. SQL will use as much memory as it can get, to improve performance by caching tons of data in memory in case it needs it, to avoid having to go to disk. It will immediately release any memory to the OS if needed. SQL is a memory hog, and will hold onto all the memory we allocate to it, whether it is actually using it or not. And this is what we want it to do!

SQL will take all the memory that is allocated to it when it is needed, store data and execution plans in the buffer, and when it is through using it, it doesn’t release it…it caches that data into memory until either 1) SQL needs it again, or 2) Windows OS requests the memory.

This isn’t an indication of performance stress…it’s just showing that SQL has reserved all the memory that we told it to use (through setting min and max memory on the SQL instance properties [1]).  SQL doesn’t let go of memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).  As long as the OS doesn’t need to use it for anything, SQL will hold on to it for future use.

The default memory setting when you install SQL is for it to use all of the memory. When Windows asks SQL to give memory back, SQL gives it back in huge amounts, and then proceeds to build back up again to the maximum amount. This is not optimal for stability or performance, and it is highly recommended as a best practice to set the limit in the Max Server Memory setting. That way the Windows OS is always guaranteed to have a certain amount.

Click here for more info on setting Max Memory:

Another good reference for setting Max Memory for SQL is here:
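Setting Max Server Memory can be done on the Memory page of the instance properties in SSMS, or with sp_configure. A sketch; the 12 GB value is an example only and should be sized for your server:

```sql
-- Sketch: cap SQL Server at 12 GB (example value -- size it to leave the OS adequate memory)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```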


And when looking at Resource Monitor, it is a mistake to use the Available and Free metrics as the key indicators for determining overall system health [2]. The Available MB of memory is comprised of both the Standby and the Free… the stuff in Standby is what Windows has put in the cache (but can be booted out if needed), and Free means it is not even being used at all. The combination is what Windows calls Available. So there is actually data in the cache that Windows says is available. Some of the categories in task manager/resource monitor can be very confusing and even misleading, which is why we shouldn’t use these metrics alone to decide if we are in danger of being out of memory.

Instead of only relying on task manager or resource monitor, we really need to look at other metrics to find out if there is a memory related performance issue on the machine.

Some of the metrics to pay attention to are:

  • Page Life Expectancy
  • Available Bytes
  • Buffer Cache Hit Ratio
  • Target & Total Server Memory
  • Memory Grants Pending
  • Pages/sec (Hard Page Faults)
  • Batch Requests/sec & Compilations/sec

Some of these metrics can be queried from TSQL, or can be observed with Perfmon (Windows Performance Monitor – located in Administrative Tools).

 Page Life Expectancy (PLE)

Found in Perfmon at SQL Server: Buffer Manager – Page Life Expectancy, this counter measures the average amount of time unlocked data pages are staying in the buffer pool.   During peak production hours this number may dip into lower numbers, but we generally want this number to remain above 300 seconds (so data is staying in the cache for 5 minutes or longer). The longer data stays in the buffer, the less likely SQL will have to go to disk for I/O operations.

There is an interesting occurrence with page life expectancy… when SQL really does run out of memory, PLE drops very quickly, but it grows back very slowly. You're probably not still having memory issues during the coming-back stage, even though it may look like it's struggling. If PLE stays down, however, then you have a problem. So be careful, because PLE can be misleading when it takes a drop… there's a difference between it going down and staying down (you have a memory problem), and going down once and crawling back up (which is normal). If it stays below 300 seconds consistently, you may need to add more RAM.

You can monitor this through Perfmon, or run the following query:

SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Buffer Manager%'

AND [counter_name] = 'Page life expectancy'

Available MBytes

Found in Perfmon as Memory: Available MBytes, this counter detects how much memory is available for use, and whether Windows is experiencing memory pressure. We generally start investigating if this number consistently goes below 500MB, which would indicate low memory, and Windows may start memory management operations. If Available MBytes is consistently low, you may need to add more RAM.

This counter cannot be queried through TSQL; it has to be observed through Perfmon only.

Buffer Cache Hit Ratio

Found in Perfmon as SQL Server: Buffer Manager: Buffer Cache Hit Ratio. This counter averages (since the last restart of your SQL instance) how often SQL Server goes to the buffer pool to get data, and actually finds that data in memory, instead of having to go to disk. We want to see this ratio high in OLTP servers – around 90-95%. The higher the ratio, the less often SQL has to go to disk for I/O operations, which translates into better performance for your end users. If this number is consistently below the 90% mark, you may need to add more RAM to your server to boost performance.

SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Buffer Manager%'

AND [counter_name] = 'Buffer cache hit ratio'

 Target & Total Server Memory

Found in Perfmon at SQL Server: Memory Manager – Total Server Memory, and SQL Server: Memory Manager – Target Server Memory. The Total Server Memory is the current amount of memory that SQL Server is using.  The Total Server memory will be quite a bit lower than the Target memory during the initial buffer pool ramp up… it is trying to populate the cache and get pages loaded into memory.  Performance might be a little slower during this time since more disk I/O is required, but this is normal.  After it ramps up, and normal operations resume, Total Server Memory should be very close to Target Server Memory. (The ratio should be close to 1). If Total Server Memory does not increase much, but stays significantly less than Target, this could indicate a couple of things…

1) You may have allocated much more memory than SQL can use – SQL could cache the entire databases into memory, and if the databases are smaller than the amount of memory on the machine, the data won’t take up all the space allocated to SQL. So in this case Total Memory (actually memory being used by SQL) will never reach Target Memory (amount allocated to SQL). Or,

2) SQL cannot grow the buffer pool because of memory pressure from outside of SQL. If this is the case, you need to either increase the Max Server Memory, or add more RAM to boost performance.

SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Memory Manager%'

AND [counter_name] IN ('Total Server Memory (KB)','Target Server Memory (KB)')

 Memory Grants Pending

Found in Perfmon as SQL Server: Memory Manager – Memory Grant Pending, this counter measures the total number of SQL processes waiting for a workspace memory grant. The general recommendation for this measurement should be 1 or less. Anything above 1 indicates there are SQL processes waiting for memory in order to operate.

Memory grants pending could be due to bad queries, missing indexes, sorts or hashes. To investigate this, you can query the sys.dm_exec_query_memory_grants view, which will show which queries (if any) require a memory grant to execute [5].

If the Memory Grants Pending are not due to the above mentioned conditions, then more memory should be allocated to SQL Server by adjusting Max Server Memory. Adding more RAM should be the last resort in this case.

SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Memory Manager%'

AND [counter_name] = 'Memory Grants Pending'

Pages/sec (Hard Page Faults)

Found in Perfmon as Memory: Pages/sec, this measures the number of pages read from or written to disk. Pages/sec is the sum of Pages Input/sec and Pages Output/sec and reports the number of requested pages that were not immediately available in RAM and had to be read from the disk (resulting in hard page faults). A hard page fault occurs when Windows has to use the swap file on the disk [3]. It's when the address in memory that's part of a program is no longer in main memory, but has instead been swapped out to the paging file, making the system go looking for it on the hard disk. Since disk storage is much slower than RAM, performance can be greatly degraded.

We want to see the Pages/sec number below 50, and closer to 0. If you see a high value of Pages/sec on a regular basis, you might have performance degradation, but not necessarily. A high Pages/sec value can happen while doing database backups or restores, importing or exporting data, or by reading a file mapped in memory [4].

Because a hard fault doesn't necessarily indicate a critical error condition depending upon what's normal for your environment, it's a good idea to measure a baseline value, and monitor this number based on that. If the values are consistently higher than your baseline value, you should consider adding more RAM.

Batch Requests & Compilations

SQL Server: SQL Statistics – Batch Requests/sec. This is the number of incoming queries.

SQL Server: SQL Statistics – SQL Compilations/sec. This is the number of new execution plans SQL had to build.

If Compilations/sec is 25% or higher relative to Batch Requests/sec, SQL Server is putting execution plans in the cache, but never actually reusing them. Your valuable memory is being used up to cache query execution plans that will never be used again – instead of caching data. This is bad. We don't want this to happen. And a high Compilations/sec value (like over 100) indicates there are a lot of ad-hoc (one-hit-wonder) queries being run. You can enable the "optimize for ad hoc workloads" setting if this is the case, and SQL will then cache only a plan stub at first, storing the full execution plan once a query has been used a second time.
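Enabling that setting is a one-time sp_configure change:

```sql
-- Enable "optimize for ad hoc workloads" so single-use plans are stubbed, not fully cached
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

Note this only reduces memory wasted on single-use plans; it does not reduce the compilations themselves.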

To query these metrics with TSQL:

SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%'

AND [counter_name] = 'Batch Requests/sec'

SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%'

AND [counter_name] = 'SQL Compilations/sec'

To get the ratio of Compilations to Batch Requests:


SELECT ROUND(100.0 *

(SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%'

AND [counter_name] = 'SQL Compilations/sec')

/

(SELECT [cntr_value]

FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%'

AND [counter_name] = 'Batch Requests/sec')

,2) as [Ratio]


Looking at the counters in Perfmon (Report View) below, we can see that this particular server is in a healthy state, with over 11GB of Available Bytes, no page faults (so I/O is operating from buffer, not disk), a Buffer cache hit ratio of 100%, a PLE of 21,515 seconds (that's almost 6 hours of data in the buffer), no pending memory grants (so no SQL process is having to wait for memory), ample Total memory, and a low compilation ratio (11%) compared to all the queries being run.


These measurements should be observed and understood, instead of using Task Manager to make critical decisions about resources on your SQL Server.

If we make changes based only upon what we see in Task Manager and reduce the amount of memory SQL can use, then whenever SQL needs more to execute processes, it will have to read/write from disk, which will degrade performance. Reducing the amount of memory SQL can use will slow performance and will most likely increase complaints from end users. The lower you set your Maximum Server Memory, the lower your performance will go.

No matter how much memory you put in a system, SQL Server will use all it can get until it's caching entire databases in memory, and then some. This isn't an accident. SQL Server caches data in memory on purpose. That means it is healthy, and enhancing performance for your end users.

Now send a link to this blog post to your favorite person on staff who needs to be educated about Task Manager and SQL Server. You will be glad you did!


[1] “Glenn Berry’s SQL Server Performance”

[2] “Use Resource Monitor for Memory Monitoring” by Scott Lowe.

[3] “Swap File”. TechTarget.


List All Objects and Indexes in a Filegroup or Partition

— by Lori Brown @SQLSupahStah

One of my clients has a rather large database with multiple filegroups and multiple database files in them. I happened to notice that a specific drive, occupied by 2 specific files that are supposed to hold indexes, is getting utilized at a much higher rate than any of the other files. The index filegroup has 8 files in it. I was anticipating gathering some performance data to try to figure out what indexes are located on those specific files and needed a way to figure that out. My searching for a script led me to TechNet, where there are a bunch of scripts that are freely downloadable. The one I needed came from Olaf Helper. I added a join on the database files so that I could see what file they are on and filter on that.

Since I need to join my query to the actual database files, I needed to figure out what the file_id was for the specific files that I am interested in.

SELECT * FROM sys.database_files

This is a very modified output so you can see the different filegroups and the files in them.


Checking the data in sys.database_files will give you the answer.

Anyway, put it all together and mine looks like this:

SELECT DS.name AS DataSpaceName

,f.name AS FileName

,AU.type_desc AS AllocationDesc

,AU.total_pages / 128 AS TotalSizeMB

,AU.used_pages / 128 AS UsedSizeMB

,AU.data_pages / 128 AS DataSizeMB

,SCH.name AS SchemaName

,OBJ.type_desc AS ObjectType

,OBJ.name AS ObjectName

,IDX.type_desc AS IndexType

,IDX.name AS IndexName

FROM sys.data_spaces AS DS

INNER JOIN sys.allocation_units AS AU

ON DS.data_space_id = AU.data_space_id

INNER JOIN sys.partitions AS PA

ON (AU.type IN (1, 3)

AND AU.container_id = PA.hobt_id)

OR

(AU.type = 2

AND AU.container_id = PA.partition_id)

JOIN sys.database_files f

ON AU.data_space_id = f.data_space_id

INNER JOIN sys.objects AS OBJ

ON PA.object_id = OBJ.object_id

INNER JOIN sys.schemas AS SCH

ON OBJ.schema_id = SCH.schema_id

LEFT JOIN sys.indexes AS IDX

ON PA.object_id = IDX.object_id

AND PA.index_id = IDX.index_id

WHERE f.file_id IN (13,14) AND AU.total_pages > 0 -- Look at specific files, could also filter on file group

ORDER BY AU.total_pages desc -- Order by size





The output looks like this…I sorted on size but left the columns needed to sort by name:


Now I am ready to collect performance data to see what stressful queries are using the indexes located on these files.

I don’t have the results from the performance data just yet but will likely follow up later on putting all the pieces together to solve this very specific problem.



Get SQL Start Up Parameters and Other Info From sys.dm_server_registry

–By Lori Brown @SQLSupahStah

The dynamic management view sys.dm_server_registry has been around since SQL 2008 R2 SP1, so it is not new, but it's relatively new to me. Since I have a ton of SQL Servers to monitor, it is always good to have some way of easily knowing if there are any startup parameters configured for the instance. I found that this and more can be pulled from the registry using sys.dm_server_registry.

If you look at all values returned by the dmv, you will see something like this:


In my local instance there are 191 rows returned so this can get to be a little difficult to read through. I wanted to just report on certain things so I have narrowed what is returned. Most of the time I am not interested in the actual registry key, so here is what I did:

-- Startup Parameters

SELECT value_name, value_data

FROM sys.dm_server_registry

WHERE registry_key LIKE N'%Parameters'

Which returns this as the results:


Notice SQLArg3, which shows that I have added the T1118 trace flag to the startup parameters. Of course I can get some of the same info by opening up the configuration manager and checking the Startup Parameters tab of the SQL Server service, but I wanted my results sent to me periodically in a report so that I can know if someone has added or changed anything. I'll show my report later.

I also have found that it is good to know what protocols are enabled and what port SQL is listening on. Since I have a lot of servers to manage, with some on the default port and others not, I thought that it would be good to report on this. My clients also have different protocols enabled for different servers. So, since nothing is standardized, it’s good that this info be put into a report so that I can easily refer to it should I ever need it.

-- List protocols enabled
SELECT sr.value_data AS EnabledProtocol
FROM sys.dm_server_registry sr
WHERE sr.registry_key IN (SELECT k.registry_key
                          FROM sys.dm_server_registry k
                          WHERE k.value_name = 'Enabled' AND k.value_data = 1)
AND sr.value_name = 'DisplayName'


-- List TCP port
SELECT value_name AS PortName, value_data AS PortValue
FROM sys.dm_server_registry
WHERE registry_key LIKE N'%SuperSocketNetLib\Tcp\IPAll'
AND value_name NOT LIKE N'DisplayName'


And, here is what my report looks like. I like to make small HTML formatted reports that I know I can generate on pretty much any server.


With a little bit of research and creativity, you can come up with ways to get at information in SQL easily.



SQL Server Statistics and Trace Flag 2371

— by Lori Brown @SQLSupahStah

Statistics are pretty important to SQL Server. Statistics are small objects that hold information about the distribution of data in a table. The SQL optimizer uses this information to create efficient ways to get data from the table. So, if statistics are not taken care of, SQL may have a hard time running queries. It really needs this info to do its job.

I am a fan of updating statistics at least once a day. I prefer to use sp_updatestats since by default it will skip tables that have not had any changes. Sp_updatestats will, however, update stats if there have been ANY changes to a table, so some people think that it can incur too much overhead (CPU, IO), but it has served most databases well. However, (here comes the “But, but, but…”) if you don’t have a long enough window to run sp_updatestats, or if the default sample of 25% is not enough to give an accurate distribution of the data, then you need to come up with other solutions…especially for large tables. Large tables are always a problem when trying to set up maintenance.

If you have AUTO_UPDATE_STATISTICS enabled on your database (as you should unless you have a prevailing reason not to), statistics are updated automatically when enough data has changed and if a table is large enough. Here is what triggers an automatic update of stats:

  • The table size has gone from 0 to >0 rows
  • The number of rows in the table when the statistics were gathered was 500 or less, and the colmodctr (hidden system column) of the leading column of the statistics object has changed by more than 500 since then
  • The table had more than 500 rows when the statistics were gathered, and the colmodctr (hidden system column) of the leading column of the statistics object has changed by more than 500 + 20% of the number of rows in the table when the statistics were gathered

If you think about this a bit, you can see how this is pretty good for small tables but large tables….those might run into a sticky wicket. When you have a really large table it can take a significant amount of changes before statistics are automatically updated.


While researching some things in SQL 2016, I found that I have been unaware of trace flag 2371. In SQL 2016, TF 2371 is now on by default. However, as far back as SQL 2008 R2, you can set this trace flag and get the benefits.

So, what is it? Trace flag 2371 lowers the threshold that tells SQL to automatically update statistics. This is especially important on larger tables, where statistics updates are not triggered fast enough because of the high number of changes that normally must happen before SQL considers statistics to be old. Old stats and SQL do not get along well. Apparently this was a HUGE problem for SAP and it is the reason that the trace flag exists. With TF 2371, the threshold that triggers an automatic statistics update goes down dynamically as a table grows (roughly the square root of 1,000 times the number of rows), meaning there will be more frequent stats updates for larger tables, which should really help SQL create efficient execution plans. For example, a table with one billion rows would normally need about 200 million changes to trigger an update; with TF 2371, roughly one million changes will do it. Yay!


By now you must be thinking…I am gonna turn on TF 2371 everywhere!!!


But, I really would not recommend it. If you have not been having performance problems without TF 2371 then you probably don’t need it. If it ain’t broke, don’t fix it! Turning on trace flags means that you are telling SQL to not use the regular paths through the SQL engine. When you turn on a trace flag it is a global setting and cannot be disabled for specific databases. And, if you have to open a ticket with Microsoft, they are likely going to be asking questions about why you have it turned on. Okay…so the risk on this one seems pretty low but you really should have a reason to enable it.

Sadly, the only way to know whether statistics are the cause of slow queries seems to be to run into performance issues and then notice that performance improves when statistics are updated. You can take the time to capture execution plans both before and after the stats update so that you can verify, but that is about it.

Don’t forget that in SQL 2016, TF 2371 is enabled by default. That could be something that helps you decide to move to it.

I have to give a trackback to Kendra Little for an excellent post on statistics. You can get more on the whys and ways to take care of your statistics from her post.


Table-Valued Parameter Performance Using C# Data Tables

The following testing and resulting data is a direct result of a recent issue that was discovered at one of our client sites. After setting up SQL monitoring that uses a SQL trace to capture long running queries, suddenly certain processes would lock up and be unable to complete. Initially we thought it was everything except the SQL trace causing the issue, until we finally turned off all trace captures and the problem went away. This was very concerning to us, since using either SQL trace or XEvents is the method that we generally use to capture query performance. Without a fix, our performance tuning attempts would be severely hamstrung and slowed down. This problem had to be fixed yesterday!

Another side effect that would happen when things locked up was that, despite having max server memory set, SQL would end up using substantially more memory than was allocated to it. At this client, developers were importing data from an application using C# and table-valued parameters, which should not have been an issue. After a lot of searching, we finally came across some comments by another SQL expert who described an issue much like ours and recommended that any C# TVPs should have their columns defined, or SQL could lock up if either trace or XEvents completed events are captured. When we checked the C# code, we found that the string columns being imported were not defined, in an effort to make the import code flexible. We were able to definitively fix this very obscure problem with just one line of C# code that is in the below post. The end result is a FASTER import of data that can be captured in trace or XEvents. Nice!!

Since we had to put together this fix for a very undocumented issue, we sincerely hope that any DBA or developer who runs into this problem can correct it much faster than we did.  A link to download a zip file of all code displayed in this post is included at the end.  Enjoy!


Table-Valued Parameter Performance Using C# Data Tables

— By Jeff Schwartz


Much has been written about the use of table-valued parameters (TVPs) and their performance. These narratives appear to be contradictory because some state that TVPs should be used only for small datasets, whereas others assert that TVPs scale very well. Adding to the confusion is the fact that most of the performance-related discussions focus on TVP use within T-SQL. Many of these examples employ either SQL trace or Extended Events (XEvents) to evaluate performance, but almost none of them discusses what can occur when C# data tables and SQL trace or XEvents are used simultaneously. This paper details the analysis and resolution of a situation that occurred at a customer site when an existing C# application suddenly began to run extremely slowly after a very lightweight SQL trace, which utilized a several-minute duration threshold and captured no frequent events, was activated.

Initial Research

Preliminary research indicated that under very specific conditions the combination of TVPs and SQL trace could result in unintended performance consequences, e.g., excessive SQL Server memory consumption to the point where SQL Server exhausted all memory on the server regardless of the maximum memory setting value. The research also suggested that string usage and lengths in unspecified locations might have an impact on this issue. The question of whether TVPs might have some record-count performance threshold above which performance would degrade was also examined. Although bulk insert was not used by the application, the research team also decided to compare bulk and TVP insertion speeds because that is another source of conflicting information.

Since very little literature exists that discusses the combination of TVPs and SQL trace, the only recourse involved creating test frameworks, performing actual data loads, and using internal application timers in conjunction with either SQL trace or XEvents to monitor performance.

Test Frameworks

1. Two frameworks were constructed: T-SQL-only and C# calling a T-SQL stored procedure.

a. The T-SQL-only test loaded data into a TVP, which then called a stored procedure to perform the insertion into the final table.

b. The development team provided C# code that illustrated how the user application read a flat file into a C# data table, which was then passed as a TVP to a T-SQL stored procedure. This code was followed as closely as possible in the test framework.

2. A SQL Server table containing data captured from the sys.dm_db_index_operational_stats Dynamic Management View (DMV) supplied the data for testing because

a. The data table contained approximately 3.2 million records, thereby being large enough to stress the application code and SQL Server adequately. The largest data file provided by the application team was approximately 1.6 million records, so the testing could push well beyond current record levels.

b. Each data record contained 47 columns, which was wider in terms of data columns than any application table being loaded.

c. No string value columns existed in the data. This ensured that user string data could not affect any interaction between the C# application and SQL Server, and that any string-related behavior was strictly due to the manner in which the application loaded the data table or the TVP, or the way the TVP parameter-passing was recorded in the SQL trace. All data columns were smallint, int, bigint, or date. This also ensured exact record lengths so that any record count-related behavior was independent of record size. Some of the data files being loaded by the development team contained only fixed-width data types, whereas others contained at least a few variable-length string columns.

3. The frameworks were designed to load various numbers of records using the same starting point to determine whether a performance threshold existed for TVPs, regardless of whether SQL trace (or XEvents) was active, as well as to determine whether even small numbers of records could be processed with the SQL trace (or XEvents) active.

4. Thirty-one different record levels, shown in Table 1, were used for testing to ensure any record-count gaps were small. A separate CSV file of each length was created from the data cited in #2, always beginning with the first record.


Table 1: Record Counts used in Testing

5. Several methods of loading the flat file CSV data into C# data tables were developed and tested. The C# data table configurations were as follows:

a. Unlimited string length columns (initially used in the production C# application) – herein noted as NoStringMax (String #N/A on graphs)

b. Fixed length string columns of at most 30, 100, 500, or 1000 characters each – herein noted as StringMax (String <#> on graphs). Note: the development team stated that no column would exceed 1,000 characters in length.

c. Exact data type mapping so that the data table column types matched those of the receiving SQL Server table exactly – herein noted as Map

d. A common routine was used by options a, b, and c above to send the data table to SQL Server using a TVP.

e. The section of code that loaded the data table from the flat file was timed separately from the routine cited in d, which also was surrounded by its own timers. This enabled comparison of data load, TVP passing, and combined times.

f. The name of the CSV file is passed in as a parameter along with an adjusted version of the file name so the appropriate test identifiers can be written to the application log file for later analysis.

6. The tests were run with the following SQL Server monitoring options:

a. No SQL Server monitoring at all.

b. SQL trace monitoring including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

c. SQL Extended Events (XEvents) including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

7. All tests were performed on the same server to eliminate network and hardware variability. This server had a quad-core Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz processor, 32 GB of RAM, and a 500 GB SSD HD.

8. Various combinations of cold and warm SQL Server caches, cold and warm Windows caches, and SQL Server maximum memory limit sizes were tested. Some of the earlier attempts will be discussed in the Results section below. The final specifications were as follows:

a. No applications other than SQL Server, SSMS, and the C# application were active on the test server.

b. SQL Server’s maximum memory limit was set to 12 GB. This allowed SQL to go over its limit without distorting overall system behavior by exhausting server memory entirely.

c. Hot Windows file cache, i.e., a full run that loaded ALL of the flat file data into RAM was performed before the timing tests were run. This ensured that the HD was involved minimally.

d. Hot SQL Server buffer cache, achieved by running the next series soon after the last run completed (with no intervening runs), and by running each sequence four times in immediate succession.

C# Code

The NoStringMax routine cited in #5a in the previous section is shown in Figure 1.


Figure 1: Test C# Data Table Load Routine Using No String Length Maximums

The StringMax routine cited in #5b in the previous section is shown in Figure 2. The ONLY difference between this routine and the previous one is the addition of the MaxLength assignment that is highlighted below. This single line sets the maximum length of each data table column to the value of the iMaxStringLen parameter. As cited in #5b, this value ranged from 30 to 1,000 during testing.


Figure 2: Test C# Data Table Load Routine Using String Length Maximums
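A rough sketch of what such a StringMax load routine looks like follows. The file layout, names, and parsing here are hypothetical and simplified, not the production code; the point is where the MaxLength assignment sits:

```csharp
// Sketch: load a CSV flat file into a DataTable whose string columns are
// capped at iMaxStringLen characters. Column names come from a header row
// (a hypothetical layout for illustration).
using System;
using System.Data;
using System.IO;

public static class TvpLoader
{
    public static DataTable LoadCsv(string filePath, int iMaxStringLen)
    {
        var table = new DataTable("tvpSource");
        bool firstLine = true;

        foreach (string line in File.ReadLines(filePath))
        {
            string[] fields = line.Split(',');

            if (firstLine)
            {
                foreach (string header in fields)
                {
                    DataColumn col = table.Columns.Add(header, typeof(string));
                    // The one-line fix: without this, the column is an
                    // unlimited-length string (MaxLength = -1), which is the
                    // configuration that misbehaved under trace/XEvents.
                    col.MaxLength = iMaxStringLen;
                }
                firstLine = false;
                continue;
            }

            table.Rows.Add(fields);
        }

        return table;
    }
}
```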

The Map routine cited in #5c in the previous section is shown in Figure 3 through Figure 6. Logically, this code is identical to that of #5a and #5b, but the individual column mappings make the routine much longer. Note: since no strings are used for the data table columns, the MaxLength parameter is unnecessary.


Figure 3: Test C# Data Table Load Routine Using Data Type Mapping – Part 1


Figure 4: Test C# Data Table Load Routine Using Data Type Mapping – Part 2


Figure 5: Test C# Data Table Load Routine Using Data Type Mapping – Part 3


Figure 6: Test C# Data Table Load Routine Using Data Type Mapping – Part 4
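The Map idea can be sketched briefly as follows. The column names and .NET-to-SQL type pairings here are hypothetical stand-ins (the real table had 47 columns), but they show why the routine grows so long and why MaxLength is irrelevant for it:

```csharp
// Sketch: a Map-style DataTable whose column types exactly match the
// receiving SQL Server table. No string columns, so MaxLength does not apply.
using System;
using System.Data;

public static class MappedTableBuilder
{
    public static DataTable Build()
    {
        var table = new DataTable("indexStats");

        // One explicitly typed column per SQL column (hypothetical names):
        table.Columns.Add("database_id", typeof(short));      // smallint
        table.Columns.Add("object_id", typeof(int));          // int
        table.Columns.Add("range_scan_count", typeof(long));  // bigint
        table.Columns.Add("capture_date", typeof(DateTime));  // date
        // ...and so on for every remaining column.

        return table;
    }
}
```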

The routine cited in #5d in the previous section is shown in Figure 7.


Figure 7: C# Test Routine that Sends Data Table to SQL Server Using a TVP
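A rough sketch of such a send routine uses a Structured SqlParameter. The procedure, parameter, and user-defined table type names below are hypothetical:

```csharp
// Sketch: pass a loaded DataTable to a stored procedure as a TVP.
using System.Data;
using System.Data.SqlClient;

public static class TvpSender
{
    public static void Send(string connectionString, DataTable table)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.usp_InsertIndexStats", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Structured marks the parameter as a table-valued parameter;
            // TypeName must match the user-defined table type in SQL Server.
            SqlParameter p = cmd.Parameters.AddWithValue("@tvpData", table);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.IndexStatsType";

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```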


Results

1. The T-SQL code alone, cited in Test Frameworks #1a, did not recreate the problem observed at the customer site when either SQL trace or Extended Events (XEvents) were used, so additional testing was unnecessary.

2. No problems occurred when the C# application was used in conjunction with SQL trace or XEvents as long as no completed events were captured. Therefore, it was the capture of specific trace events that created the problems.

3. Further research showed that when the C# code was running, adding ad-hoc or batch completed events to SQL traces or XEvent sessions caused issues. Adding completed statement events did not change things appreciably.

4. Interestingly, the extreme misbehavior was triggered by the combination of using NoStringMax C# code and having SQL trace or XEvents capture ad-hoc or batch completed events. Although the StringMax and Map routines ran a little more slowly for certain record count levels when SQL trace or XEvents captured ad-hoc or batch completed events, the dramatic memory consumption issues did not occur at all when these methods were used.

5. Initially, testing employed a 28 GB maximum SQL Server memory setting, but as occurred at the customer site, when the problem arose, SQL Server consumed all the memory on the server, which caused the server fundamentally to stop working. Using this setting and the NoStringMax C# code, CSV files with record counts up to 25,000 could be processed without taking hours, failing, or causing the server to run completely out of memory. However, the application could not reach the 50,000 record level.

6. Since testing needed to process MUCH higher numbers of records, the decision was made to reduce SQL Server memory to 4 GB in an attempt to provide additional memory space for SQL Server memory overflow. Another reason for lowering the SQL Server memory allocation was to ensure that the tests consumed all of SQL Server’s buffer pool and that SQL Server had to operate against memory limits. This consideration, as well as the run-time variations, necessitated the four successive iterations for each combination.

7. Unfortunately, using the 4 GB setting, the lower-end NoStringMax C# code runs while capturing SQL trace ad-hoc or batch completed events caused application failures at approximately the 4,000 or 8,000 record levels. This clearly indicated an increase in SQL Server memory was required, so it was increased to 8 GB. Although the runs processed more records, they still crashed well before the 25,000 record level. Finally, moving the memory limit up to 12 GB enabled everything to run as before, effectively mimicking the 28 GB testing without running the risk of exhausting Windows memory.

8. Figure 8 highlights the NoStringMax C# code runs through the 25,000 record level while SQL trace or XEvents ad-hoc or batch completed events were being captured. It is evident that the run times were unacceptable. When the same test levels are viewed in Figure 9, the problem becomes abundantly clear. With SQL trace or XEvents capturing ad-hoc or batch completed events, the run times for the 10,000 record tests using the NoStringMax C# code ranged between 483 and 584 seconds. When ad-hoc or batch completed events were not captured, the run times were approximately 0.14 SECONDS! The values for all other tests, including NoStringMax C# code with SQL trace and XEvents ad-hoc or batch completed events off, are shown in Figure 9.


Figure 8: TVP Load Timing when Trace On or XEvents on

9. Although many lines appear in Figure 9, three groupings exist and these are the most important. Group 1, which consists of dotted lines, includes all tests during which ad-hoc or batch completed events were not captured. Group 2, which consists of solid lines, includes all tests during which SQL trace captured ad-hoc or batch completed events. Group 3, which consists of dashed lines, includes all tests during which XEvents captured ad-hoc or batch completed events. Important note: The NoStringMax runs shown in Figure 8 are NOT in Figure 9 because of scaling considerations. Figure 9 highlights several notable issues. Most importantly, once a maximum string length is specified, performance improves even when SQL trace or XEvents ad-hoc or batch completed events are not captured. In addition, the terrible performance problems go away. Interestingly, the behaviors of the various StringMax and Map runs were almost identical through approximately 175,000 records and then again at about 800,000 records. In between, unexplained divergence occurs, but it is only a few seconds and occurred primarily between the 200,000 and 700,000 record levels. The pattern shown in Figure 9 was repeated in every test sequence conducted. It is noteworthy that the StringMax and Map versions of the routine outperformed the NoStringMax under all comparable trace or XEvent-related conditions, and at higher volumes, even when completed events were captured.


Figure 9: TVP Load Timing when Trace/XEvents off OR Trace or XEvents on and Mapped Data Table or Limited-Length String Values Used

10. Once the number of records exceeded two million, the load of the C# data table began to crash due to application memory exhaustion. Note: neither Windows nor SQL Server memory was consumed excessively during this phase, so the limitation was strictly application-related.

11. Figure 10 summarizes the overall insertion rates of all previously cited methods at the two million record level, in addition to the bulk insert method. This graph shows clearly that the worst performer was the unlimited string method, and that the best one was the bulk insert method. Note: these rates exclude the reading of the flat file into the C# data table as well as the creation of the file that was suitable for bulk insertion.


Figure 10: TVP versus Bulk Insert Records per Second


The use of unlimited-length C# strings with data tables not only performs worse when no SQL trace or XEvents ad-hoc or batch completed events are captured, it performs dreadfully when they are. Their use can cause poor overall server performance if the server’s memory is exhausted. Finally, their use prevents monitoring SQL Server query performance using either SQL trace or XEvents. However, when a maximum string length is specified, regardless of its length, performance without SQL trace or XEvents improved, and the problematic interaction with SQL trace or XEvents was mitigated almost completely. Since the Map method is inefficient and limiting from an application development perspective, and its performance was not substantially better than that of limited-length strings, its use does not appear to be beneficial.

No table-valued parameter scalability issues were observed with the possible exception of the 200,000 to 700,000 record range when performance inexplicably, and consistently, dipped. However, from 800,000 to 2 million, performance experienced no such dip. Interestingly, standard bulk insert performance exceeded that of TVPs when the same data was used.


Since the code to implement the maximum-length string involves only one line and the performance monitoring and improvement benefits are substantial, the <data table column>.MaxLength = <n> statement should be included in data table load routines as shown by the highlighted line in Figure 2. <n> should be the maximum expected length of any data column. Limiting string length will allow C# TVP data loads to run more efficiently and will also enable customers to conduct normal query performance monitoring.


The code shown in this post can be downloaded from here….  SQLRX_TVP_CSharp_Code
