Tag Archives: SQL Administration

Log Connections to SQL Instance

— by Ginger Keys

If you ever have a need to monitor connections to your SQL Server, and any related information about the connections such as database, logins, etc., there are some DMVs that can give you tons of information. Previously you might have used the sys.sysprocesses table to derive much of this information, but it has been deprecated in recent versions of SQL Server.

Instead, you can collect valuable information from these DMVs:

sys.dm_exec_sessions   https://msdn.microsoft.com/en-us/library/ms176013.aspx

sys.dm_exec_connections   https://msdn.microsoft.com/en-us/library/ms181509.aspx

sys.dm_exec_requests   https://msdn.microsoft.com/en-us/library/ms177648.aspx

In order to capture and retain connection information for my SQL Server, I will create a small database and a table to hold some basic information. Of course you can alter the script to include more, less, or different data than what I am demonstrating below, to better fit your specific information needs.

I will create a database and a table, then insert data from two of the DMVs listed above.

Step 1 – Create a table to hold login activity

-- Create a database
USE master
GO
CREATE DATABASE [Connections]
ON PRIMARY
( NAME = N'Connections', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Connections.mdf' ,
SIZE = 1024MB , FILEGROWTH = 512MB )
LOG ON
( NAME = N'Connections_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Connections_log.ldf' ,
SIZE = 1024MB , FILEGROWTH = 512MB)
GO

 

-- Create table to hold Login info
USE [Connections]
GO
CREATE TABLE [dbo].[LoginActivity]
(
host_name [nvarchar](128) NULL,
program_name [nvarchar](128) NULL,
login_name [nvarchar](128) NOT NULL,
client_net_address [nvarchar](48) NULL,
DatabaseName [nvarchar](128) NOT NULL,
login_time [datetime] NOT NULL,
status [nvarchar](30) NOT NULL,
date_time [datetime] NOT NULL
) ON [PRIMARY]
GO

 

Step 2 – Insert Data into Table

If you need to retain or archive this connection information, you can insert the results into the table created above, or export them to a spreadsheet or other file. Otherwise you can simply run the SELECT statement below by itself if you only need to see current data.

 

USE Connections
GO
INSERT INTO LoginActivity
(host_name,
program_name,
login_name,
client_net_address,
DatabaseName,
login_time,
status,
date_time)
-- run the following select statement by itself to see connection info if you don't want to save the output
SELECT
s.host_name,
s.program_name,
s.login_name,
c.client_net_address,
d.name AS DatabaseName,
s.login_time,
s.status,
GETDATE() AS date_time
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c ON s.session_id = c.session_id
JOIN sys.databases d ON d.database_id = s.database_id
--WHERE d.name = 'ABCompany' --can specify databases if needed
WHERE s.login_time >= DATEADD(hh, -10, GETDATE()) --date range can be adjusted

 

Step 3 – View/Save Output Results

After inserting the data into my table, I can see the current connections from the last 10 hours (as per my insert statement). On a production server, this list would be far greater.

SELECT * FROM LoginActivity

[Image: Blog_20171005_1 – LoginActivity query results]

From the columns I have included in my table:

host_name – the name of the workstation connecting; shows NULL for internal sessions.

program_name – the name of the client program or application connecting.

client_net_address – the host address of each client connecting.

login_name, DatabaseName, and login_time – self-explanatory.

date_time – the current date and time the query is run.

status – the status of the session, which will be running, sleeping, dormant, or preconnect.
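
Once the table has accumulated some history, a quick rollup can make the raw rows easier to digest. Here is a minimal sketch against the table created above (the column picks and sort order are just my suggestions):

-- Summarize collected connection history per login and database
SELECT login_name,
       DatabaseName,
       COUNT(*) AS ConnectionCount,
       MAX(date_time) AS LastCollected
FROM dbo.LoginActivity
GROUP BY login_name, DatabaseName
ORDER BY ConnectionCount DESC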

This information can also be output to a text or Excel file if preferred.

[Image: Blog_20171005_2]

Conclusion

Being able to see users or applications making connections to your SQL Server can be useful or necessary for many reasons. The steps outlined above provide a general guideline for deriving connection information that can be altered to fit your organizational needs.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Get Index Column Info with Includes for One or Many Tables

— by Lori Brown @SQLSupahStah

I was recently working with one of my clients on some low-hanging-fruit query tuning. We had checked the cache for plans with missing index warnings in them and were trying to see if we could tweak an existing index or add a new index to speed things up. If you ever work with missing indexes, you have surely seen them recommend crazy things: duplicates of existing indexes, or something that could simply be added to an existing index.

The bottom line for missing index recommendations is that you should NEVER, EVER create a missing index unless you know for absolutely sure that the index does not exist and that it will really help a known performance issue. For instance, you can have missing index warnings on small tables that only return a couple of rows to the query. Those are usually not worth working on since scanning a small table can usually be done very quickly by SQL. To this day, I still find many databases that are way over indexed with indexes that were implemented simply because someone found a missing index warning and did not do their homework or they ran the dreaded Database Tuning Advisor which shoved a bunch of duplicate indexes into tables.

If you are wondering how to get a list of missing indexes, please check out Jeff Schwartz's blog post on how to do this (https://blog.sqlrx.com/2017/06/02/query-tuning-and-missing-index-recommendations/); a quick starter query against the missing index DMVs also follows the links below. It is better if you know which query is throwing the missing index warning, so it is a good idea to capture those in a trace or in extended events. Jeff builds on his first post and in his second post on the subject (https://blog.sqlrx.com/2017/07/20/handling-multiple-missing-index-recommendations-for-the-same-table/) also goes over the fun of having multiple missing index recommendations for a single table and how to deal with them.

Here’s a handy set of links for some of Jeff’s great index tuning work that you should really check out:

https://blog.sqlrx.com/2016/01/28/sql-server-20122014-extended-events-for-developers-part-1/

https://blog.sqlrx.com/2017/06/02/query-tuning-and-missing-index-recommendations/

https://blog.sqlrx.com/2017/07/20/handling-multiple-missing-index-recommendations-for-the-same-table/

https://blog.sqlrx.com/2017/08/10/how-indexing-affects-deletion-queries/
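
For a quick first look at what SQL thinks is missing, here is a minimal sketch against the missing index DMVs (the column picks and the rough priority ordering are my own; treat the output with the skepticism described above):

-- A minimal sketch of the missing index DMVs; ALWAYS vet these recommendations by hand
SELECT d.statement AS TableName,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_total_user_cost,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.avg_total_user_cost * s.user_seeks DESC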

One of the things that I usually need when performance tuning is to know information about the existing indexes on specific tables. I always want to know what columns are in the indexes along with the included columns so that I can compare the existing indexes to the missing recommendations. This way I can better figure out if a recommendation is something that can be added to an existing index (like an included column) or if I really need to create a brand new index if it does not exist at all.

Like most DBAs, I keep a toolkit with all kinds of handy scripts. However, I did not have one that would give me index included columns. I also wanted the query to be able to return info from one or many tables at the same time, which is useful when dealing with queries that have lots of joins. I know that there are a few bloggers who have posted something similar, but I wanted the ability to filter on one or multiple tables. So, here is what I came up with:

/********************************
Returns index columns with included columns
plus other needed index info for tables
in @tablelist variable
*********************************/

DECLARE @tablelist VARCHAR(1000)
DECLARE @sqlstr NVARCHAR(MAX)

SET @tablelist = 'InvoiceLines,OrderLines,StockItemHoldings' -- comma delimited list of tables, can be one or multiples EX: 'mytable' or 'mytable,nothertable,thirdtable'

-- Query the tables
IF @tablelist <> '' AND @tablelist <> '?,?,?'
BEGIN
SET @tablelist = REPLACE(QUOTENAME(@tablelist,''''), ',', ''',''') -- Add quotes so the IN clause will work

SET @sqlstr = 'SELECT SCHEMA_NAME(o.schema_id) AS SchemaName
       ,o.name AS TableName
       ,i.name AS IndexName
       ,i.type_desc AS IndexType
       ,c.name AS ColumnName
       ,ic.is_included_column
       ,ic.index_column_id
       ,t.name
       ,c.max_length
       ,c.precision
       ,c.scale
FROM sys.indexes i
JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
JOIN sys.columns c ON ic.column_id = c.column_id AND ic.object_id = c.object_id
JOIN sys.objects o ON o.object_id = i.object_id
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE o.name IN ('+@tablelist+')
ORDER BY SchemaName, TableName, IndexName, index_column_id'

EXEC sp_executesql @sqlstr
--PRINT @sqlstr
END

All you have to provide is a comma-separated list of the table(s) you are interested in for the @tablelist variable and it will do the rest. The output looks like this:

[Image: Blog_20170928_1 – index column query output]

I found several bloggers who had made queries that would concatenate the columns together but truthfully I found those hard to read so I settled for a list with some extra info on the data and index types. Hope this is useful to someone out there.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Find Tables That Have Special Features Enabled

Find out if any of the tables in your database have special features enabled using the queries below. These features need to be understood and carefully managed.

-- CDC Enabled Tables
select distinct t.name AS CDCTables
from sys.tables t
where t.is_tracked_by_cdc = 1

-- File Tables -- SQL 2012 +
select distinct t.name AS FileTables
from sys.tables t
where t.is_filetable = 1

-- Temporal Tables -- SQL 2016 +
select distinct t.name AS TemporalTables
from sys.tables t
where t.temporal_type > 0

-- Stretch Enabled Tables -- SQL 2016 +
select distinct t.name AS StretchTables
from sys.tables t
where t.is_remote_data_archive_enabled > 0

-- External Tables -- SQL 2016 +
select distinct t.name AS ExternalTables
from sys.tables t
where t.is_external > 0
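
If you would rather see everything in one pass, here is a consolidated sketch; it assumes SQL 2016+ since some of these columns do not exist in older versions:

-- One result set flagging all special features per table (SQL 2016+ only)
select t.name AS TableName,
t.is_tracked_by_cdc,
t.is_filetable,
t.temporal_type_desc,
t.is_remote_data_archive_enabled,
t.is_external
from sys.tables t
where t.is_tracked_by_cdc = 1
or t.is_filetable = 1
or t.temporal_type > 0
or t.is_remote_data_archive_enabled > 0
or t.is_external = 1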

https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server

https://docs.microsoft.com/en-us/sql/relational-databases/blob/filetables-sql-server

https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables

https://docs.microsoft.com/en-us/sql/sql-server/stretch-database/enable-stretch-database-for-a-table

https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-table-transact-sql

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Find Out Who Changed the Database Recovery Model

— By Lori Brown @SQLSupahStah

I ran into a situation where we were working on a migration and had been directed to put all databases into FULL recovery model in anticipation of using log shipping to push databases to the new server. Once we were ready to go live on the new server, the plan was to ship the last transaction logs and restore them WITH RECOVERY to make the final cutover as quick as possible. Of course this meant that we had to make sure that all databases were having regular log backups, which we did. Things were going along nicely until we started receiving log backup failure notifications.

Upon checking things, we found that one of the databases had been changed to the SIMPLE recovery model. You can find this type of information in the default trace, or you can simply scroll through the SQL error logs until you find the entry you are looking for. If you have a busy instance with a lot of entries in the error log, this can be a bit time consuming, so I came up with a set of queries that grab the error log entry and attempt to tie it to the info in the default trace, making it easier to identify WHO made an unauthorized change to the database properties.
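
As a side note, a quick way to spot any database that has drifted out of FULL recovery (before digging into who changed it) is a simple check against sys.databases:

-- List any database not currently in FULL recovery
SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model_desc <> 'FULL'
ORDER BY name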

 

DECLARE @tracefile VARCHAR(500)
DECLARE @ProcessInfoSPID VARCHAR(20)

CREATE TABLE [dbo].[#SQLerrorlog](
[LogDate] DATETIME NULL,
[ProcessInfo] VARCHAR(10) NULL,
[Text] VARCHAR(MAX) NULL
)

/*
Valid parameters for sp_readerrorlog
1 - Error log: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
2 - Log file type: 1 or NULL = error log, 2 = SQL Agent log
3 - Search string 1
4 - Search string 2

Change parameters to meet your needs
*/
-- Read error log looking for the words RECOVERY
-- and either FULL, SIMPLE or BULK_LOGGED indicating a change from prior state
INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'FULL'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'SIMPLE'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'BULK_LOGGED'

UPDATE #SQLerrorlog
SET ProcessInfo = SUBSTRING(ProcessInfo,5,20)
FROM #SQLerrorlog
WHERE ProcessInfo LIKE 'spid%'

-- Get path of default trace file
SELECT @tracefile = CAST(value AS VARCHAR(500))
FROM sys.fn_trace_getinfo(DEFAULT)
WHERE traceid = 1
AND property = 2

-- Get objects altered from the default trace
SELECT IDENTITY(int, 1, 1) AS RowNumber, *
INTO #temp_trc
FROM sys.fn_trace_gettable(@tracefile, default) g -- default = read all trace files
WHERE g.EventClass = 164

SELECT t.DatabaseID, t.DatabaseName, t.NTUserName, t.NTDomainName,
t.HostName, t.ApplicationName, t.LoginName, t.SPID, t.StartTime, l.Text
FROM #temp_trc t
JOIN #SQLerrorlog l ON t.SPID = l.ProcessInfo
WHERE t.StartTime > GETDATE()-1 -- filter by time within the last 24 hours
ORDER BY t.StartTime DESC

DROP TABLE #temp_trc
DROP TABLE #SQLerrorlog
GO

 

You can find more on the following:

sp_readerrorlog is an undocumented procedure that actually uses xp_readerrorlog – https://www.mssqltips.com/sqlservertip/1476/reading-the-sql-server-log-files-using-tsql/

sys.fn_trace_getinfo – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-getinfo-transact-sql

sys.fn_trace_gettable – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-gettable-transact-sql

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

Links for New DMVs in SQL 2017

Since we at SQLRX have been super busy this week and still need to do something for a blog post, I thought that a link roundup of the new dynamic management views available in SQL 2017 would be a quick and useful post. I am really interested in sys.dm_db_log_info since it gives VLF info and can be used for monitoring and alerting (see the sketch after the list of links below).

Enjoy!

Change:

A column on sys.dm_db_file_space_usage has been changed: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-file-space-usage-transact-sql

Brand new DMVs:

sys.dm_db_log_stats https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-stats-transact-sql

sys.dm_db_log_info https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-info-transact-sql

sys.dm_db_stats_histogram https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-stats-histogram-transact-sql

sys.dm_tran_version_store_space_usage https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-version-store-space-usage

sys.dm_os_host_info https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-host-info-transact-sql
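
As a taste of why sys.dm_db_log_info is interesting, here is a hedged sketch (based on the documented result set) of a VLF-count check that could feed monitoring or alerting; the 100-VLF threshold is just an arbitrary example:

-- Count VLFs per database and flag the chatty ones (SQL 2017+)
SELECT DB_NAME(li.database_id) AS DatabaseName,
       COUNT(*) AS VLFCount
FROM sys.databases d
CROSS APPLY sys.dm_db_log_info(d.database_id) li
GROUP BY li.database_id
HAVING COUNT(*) > 100 -- arbitrary threshold, tune for your environment
ORDER BY VLFCount DESC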

[Image: Blog_20170901_1]

Yee Haw!!!!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Log Shipping Backup Failure

— by Ginger Keys

For years the SQL Agent service account on my client's SQL Server instance ran the Maintenance Plans and SQL Agent jobs with no issues. Many of the SQL databases were set to the Full recovery model, and Tlog backups executed on a regular basis with no problems.

The client decided to migrate to new hardware in a new datacenter, and decided that log shipping the databases over until the go-live date would be their best option in their circumstances. We took the databases out of the regular Tlog backup routines and created Transaction Log Shipping routines in their place. The connections to the new instance were seamless, and the Copy & Restore jobs were executing fine. However the Backup jobs were failing!

Why would this be, since the SQL Agent service account had been executing Tlog backups for years!? As it turns out, log shipping uses a different executable for performing tlog backups: sqllogship.exe. The SQL Agent service account must have permissions to the folder location where this executable is located, as shown below. You can locate your executable file by opening the LS_Backup job properties and viewing the job step.

[Image: Blog_20170824_1 – LS_Backup job step showing the sqllogship.exe location]
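
If you would rather not click through each job to find the path, a quick query against the job steps in msdb will surface it; this is a sketch and assumes the log shipping jobs call sqllogship.exe as usual:

-- Locate log shipping backup job steps and the sqllogship.exe command they run
SELECT j.name AS JobName, js.step_name, js.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps js ON js.job_id = j.job_id
WHERE js.command LIKE '%sqllogship%'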

Once we granted Full permissions to this location for the SQL Agent service account, everything worked as intended. This was the first occasion with log shipping that I have run into permissions issues for the service account on a primary server. Hopefully this is an uncommon occurrence, but it is certainly simple to fix once you understand what is happening.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

List Tables That Contain Special Indexes

Find out if any of your tables contain special columnstore or spatial indexes using the queries below. Columnstore indexes organize data in columns instead of rows, unlike traditional rowstore indexes, and can increase performance on large data sets such as those found in data warehouses. Spatial indexes are a special type of index on a spatial column such as geometry or geography.

-- Tables with columnstore indexes -- SQL 2012 +
select t.name as TablesWithColumnstoreInx
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.type = 5 or i.type = 6

-- Tables with spatial indexes -- SQL 2008 +
select t.name as TablesWithSpatialInx
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.type = 4

 

https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview

https://docs.microsoft.com/en-us/sql/relational-databases/spatial/spatial-indexes-overview

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

AlwaysOn – Endpoint Ownership

— By Ginger Keys

It is not uncommon for a DBA or other IT staff to set up AlwaysOn in a SQL environment and later leave the company. We ran into this recently with a client and were asked to delete the previous employee’s login from everything SQL related. We were able to remove the login from all databases and server roles, however we were not able to delete the login because it was the owner of an endpoint.

When creating an AlwaysOn Availability Group, you have the option of using the wizard or you can create it using TSQL statements. The wizard is very intuitive and easy to use; with the exception of a few settings you can specify, default configurations are deployed using this method. One of the default configurations is the endpoint owner: whoever creates the AlwaysOn group is by default the owner of the endpoint.

This is generally not a problem…unless that person leaves the company and you need to delete the login! You will get an error message that says “The server principal owns one or more endpoint(s) and cannot be dropped (Microsoft SQL Server, Error: 15141)”.

To see who owns your endpoints, run this statement:

USE master
GO
SELECT e.name AS EndpointName,
sp.name AS EndpointOwner,
et.PayloadType,
e.state_desc
FROM sys.endpoints e
INNER JOIN sys.server_principals sp
ON e.principal_id = sp.principal_id
RIGHT OUTER JOIN ( VALUES ( 2, 'TSQL'),
( 3, 'SERVICE_BROKER'), ( 4, 'DATABASE_MIRRORING') )
AS et ( typeid, PayloadType )
ON et.typeid = e.type

The AlwaysOn endpoint will have the name Hadr_endpoint and will have a DATABASE_MIRRORING payload type as shown below.

[Image: Blog_20170803_1 – endpoint owner query output]

If your AlwaysOn group has already been created and there is a domain login as the owner, you can change the ownership to sa. Run the following statement to make the change:

USE master
GO
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa

This will allow you to delete any login who might have owned the endpoint if it's ever necessary.

If you are creating an AlwaysOn Availability Group and want to use TSQL statements instead of the wizard, you have the ability to specify the endpoint owner. For complete instructions on how to set up the AlwaysOn group with TSQL click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/create-an-availability-group-transact-sql

In order to create the endpoint with a specific user, run the following statement:

CREATE ENDPOINT endpoint_mirroring
AUTHORIZATION loginname
STATE = STARTED
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
AUTHENTICATION = WINDOWS KERBEROS,
ENCRYPTION = SUPPORTED,
ROLE = ALL);
GO

In the statement above, if AUTHORIZATION is not specified with a SQL or Windows login, the caller will become the owner of the newly created endpoint. To use AUTHORIZATION and assign ownership to a login, the caller must have IMPERSONATE permission on the specified login.
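
As a hypothetical example, if DOMAIN\DBAdmin will run the CREATE ENDPOINT statement but the endpoint should be owned by DOMAIN\HADROwner, the permission grant would look something like this (both login names are made up for illustration):

-- Allow the caller to assign endpoint ownership to another login (names are hypothetical)
GRANT IMPERSONATE ON LOGIN::[DOMAIN\HADROwner] TO [DOMAIN\DBAdmin];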

Endpoints are a fundamental piece of SQL Server, allowing a connection or point of entry into your instance. Knowing who owns these endpoints and how to change the owner will potentially save you some headaches down the road in the event of IT staffing changes in your organization.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Use MSDB to Get Database Backup Size and Total Time For Each

— by Lori Brown

We recently started using third-party software to do our in-house SQL backups so that the backup files are stored in a redundant and safe place. To confirm that the software was indeed compressing backups as it claimed, we wanted to see what each backup size actually was in SQL so that we could compare it to what the software was telling us.

SQL stores lots of handy backup information in msdb in the backupset and backupmediafamily tables.

https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupset-transact-sql

https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupmediafamily-transact-sql

Here is my query. I only want the information from the last 24 hours, so I have filtered the start date by subtracting 1 day from today. I have also provided some commented-out options in case someone needs them.

-- database backup size and how long it took to do backup
SELECT bs.database_name AS DatabaseName
, CAST(bs.backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS BackupSizeGB
, CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB
--, CAST(bs.compressed_backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS CompressedSizeGB
--, CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB
, bs.backup_start_date AS BackupStartDate
, bs.backup_finish_date AS BackupEndDate
, CAST(bs.backup_finish_date - bs.backup_start_date AS TIME) AS AmtTimeToBkup
, bmf.physical_device_name AS BackupDeviceName
FROM msdb.dbo.backupset bs JOIN msdb.dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
WHERE
--bs.database_name = 'MyDatabase' and -- uncomment to filter by database name
bs.backup_start_date > DATEADD(dd, -1, GETDATE()) and
bs.type = 'D' -- change to L for transaction logs
ORDER BY bs.database_name, bs.backup_start_date

And, here is the output.

[Image: Blog_20170726_1 – backup size query output]

It turned out that the software was indeed compressing all backups so that was a good thing.
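
If you want msdb to do that comparison for you, the commented-out compressed_backup_size column above can be turned into a rough compression ratio; here is a sketch:

-- Rough backup compression ratio per database from msdb (SQL 2008+)
SELECT bs.database_name,
       CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB,
       CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB,
       CAST(bs.backup_size * 1.0 / NULLIF(bs.compressed_backup_size, 0) AS DECIMAL(10, 2)) AS CompressionRatio
FROM msdb.dbo.backupset bs
WHERE bs.backup_start_date > DATEADD(dd, -1, GETDATE())
AND bs.type = 'D'
ORDER BY bs.database_name, bs.backup_start_date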

There is a lot more info that can be pulled from msdb regarding backups. Have some fun and experiment with getting information that you need from there. Here are some links to some other backup related topics that we have blogged about already.

https://blog.sqlrx.com/2017/03/02/sql-server-backup-and-restore-primer/

https://blog.sqlrx.com/2013/04/23/backup-database-commands/

https://blog.sqlrx.com/2013/05/28/be-prepared-for-a-crisis/

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

AlwaysOn 2016 – Encrypted Databases and Performance

— by Ginger Keys

It is common knowledge that encrypting a database degrades performance in SQL Server. In an AlwaysOn environment performance can be even more sluggish because of the extra element of data replication latency. How much difference does it really make? Of course the answer is "it depends" on your environment and your workload. I was curious to see for myself what kind of performance hit encryption would have on one of my test databases, so this post will look at CPU usage of an encrypted vs. non-encrypted database.

Microsoft says that turning on TDE (Transparent Data Encryption) for a database will result in a 2-4% performance penalty, which is actually not too bad given the benefits of having your data more secure. There is even more of a performance hit when enabling cell level or column level encryption. When encrypting any of your databases, keep in mind that the tempdb database will also be encrypted. This could have a performance impact on your other non-encrypted databases on the same instance.
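
If you are not sure which databases on an instance already have TDE turned on, a quick check against sys.databases shows the flag (sys.dm_database_encryption_keys gives more detail, including tempdb's encryption state):

-- Which databases have TDE enabled (is_encrypted = 1)
SELECT name, is_encrypted
FROM sys.databases
ORDER BY name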

In a previous post I demonstrated how to add an encrypted database to an AlwaysOn group in SQL2016. In this article I will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group compared to the same database not-encrypted.

I have 3 identical test databases I will use to look at performance metrics.

  • GKTestDB is TDE encrypted, and is part of the AlwaysOn group
  • GKTestDB2 is not encrypted, and not part of AlwaysOn group
  • GKTestDB3 is not encrypted, but is part of AlwaysOn group

[Image: Blog_20170713_1]

There are some good open-source tools for performing stress tests against your SQL database. You can find one here that uses the AdventureWorks2014 database: http://www.sqlballs.com/2016/05/sql-server-random-workload-generator.html. There is also one here that you can use on any database: https://github.com/ErikEJ/SqlQueryStress/wiki. I will be using the SQLQueryStress tool for my demonstration.

Step 1 – Test performance on non-encrypted DB not in AG

To measure performance metrics, create a User Defined Data Collector Set in Performance Monitor (Perfmon). There are many metrics that can be measured, but I will only be looking at CPU % Processor Time.

[Image: Blog_20170713_2 – Perfmon data collector set]

Download and open the SQLQueryStress tool, and create a statement to run against your database. In my test I will first look at the performance of running a select query for 5000 iterations on a database that has not been added to the AlwaysOn group and has not been encrypted (GKTestDB2):

[Image: Blog_20170713_3 – SQLQueryStress query setup]

Be sure to clear the buffers and cache before performing each stress test. Select your database, the server name, the number of iterations, the number of threads and the delay between queries in milliseconds.
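
For reference, "clearing the buffers and cache" is typically done with something like the following sequence; these commands flush cached data and plans instance-wide, so run them only on a test system:

-- Flush dirty pages, then drop clean buffers and the plan cache (TEST SYSTEMS ONLY)
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DBCC FREEPROCCACHE;
GO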

[Image: Blog_20170713_4 – SQLQueryStress test settings]

Start your user defined Perfmon data collector set, then start the stress test in the SQLQueryStress tool.

[Image: Blog_20170713_5 – stress test running]

At the end of each stress test you will need to manually stop your Perfmon collection.

Step 2 – Test performance on encrypted DB in the AlwaysOn Group

Now I will perform the same stress test to see performance on the identical but Encrypted database in the AlwaysOn group (GKTestDB). Be sure to clear the buffers and cache, and change the database in the SQLQueryStress tool.

[Image: Blog_20170713_6]

[Image: Blog_20170713_7]

Step 3 – Test performance on non-encrypted DB in the AlwaysOn Group

Just for curiosity's sake, I am also going to test the identical database that is not encrypted but is included in the AlwaysOn group (GKTestDB3):

[Image: Blog_20170713_8]

[Image: Blog_20170713_9]

Step 4 – Compare results

I set the output files of my Perfmon results to land in a location on my local drive so that I can open up the results of each test and compare.

The CPU usage for the database that is not encrypted and not in my AlwaysOn group averaged 43% for the duration of the test, as shown by the dark line on the graph below.

Not Encrypted / Not in AG database CPU usage:

[Image: Blog_20170713_10 – CPU usage graph]

The CPU usage for the identical database that is encrypted and is in the AlwaysOn group averaged 57.5% during the stress test, as shown on the graph below. This is quite a bit more than the non-encrypted/non-AG database, especially given the simple statement that was being run.

TDE Encrypted / Joined to AG Database CPU usage:

[Image: Blog_20170713_11 – CPU usage graph]

And finally, the CPU usage for the identical database that is not encrypted but is included in my AlwaysOn group averaged 43.4%, which is not much different from the non-encrypted/non-AG database above.

Not Encrypted / Joined to AG Database CPU usage:

[Image: Blog_20170713_12 – CPU usage graph]

[Image: Blog_20170713_13]

Conclusion

Having an encrypted database creates a noticeable CPU performance hit as compared to a non-encrypted database. Microsoft provides many options for protecting your data, transparent data encryption (TDE) being one of them. The benefits and assurance of securing your data outweigh the performance cost; however, it may be useful to see how much of a performance hit your system will encounter before deciding which security options your organization will deploy.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!