Notification for Restored Database

— by Ginger Keys

According to best practices for a SQL environment, you should never place your test or development databases on a production server. However, sometimes best practices are ignored, sometimes the DBA’s advice is disregarded (ahem…), and sometimes an organization simply does not have the financial resources to spin up separate servers for testing or development.

Whatever the reason, if your production server happens to have test or development databases on it, you need to make sure these databases are managed properly. Usually those databases are periodically restored with a copy of the production database, and you need to ensure there is enough room on disk to accommodate all data and log files. Typically the production database will be in Full recovery mode, possibly with a large transaction log file to accommodate daily transactions. The test or development database normally does not need the large transaction log file and usually does not need to be in Full recovery mode.

Instead of manually checking each database to see when it might have been restored, create a SQL Agent job that will notify you when this happens so that you can go in proactively and manage the environment.

The following statement can be inserted in the SQL Agent Job step, and will provide a basic notification in the event your databases have been restored. Create the job to execute however often you need for your environment… (I have mine running once per week and have set the parameter for [restore_date] to check the last 7 days of activity).

--create a temp table to hold your data
CREATE TABLE #Restores
(
    [destination_database_name] nvarchar(50),
    [restore_date] date,
    [restore_type] nvarchar(20),
    [user_name] nvarchar(50)
)

--insert data into your temp table from msdb.dbo.restorehistory table
;WITH LastRestores AS
(
    SELECT [destination_database_name],
           [restore_date],
           CASE [restore_type]
               WHEN 'D' THEN 'Database'
               WHEN 'F' THEN 'File'
               WHEN 'G' THEN 'Filegroup'
               WHEN 'I' THEN 'Diff'
               WHEN 'V' THEN 'VerifyOnly'
           END AS [restore_type],
           [user_name]
    FROM msdb.dbo.restorehistory
    WHERE ([destination_database_name] LIKE '%dev' OR [destination_database_name] LIKE '%test')
      AND [restore_date] >= (GETDATE() - 7) --change to timeframe appropriate to your environment
)
INSERT INTO #Restores
SELECT [destination_database_name], [restore_date], [restore_type], [user_name]
FROM LastRestores

--if temp table returns any rows, send notification email
IF (SELECT COUNT(*) FROM #Restores) > 0
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBMailProfile',
        @recipients = 'SomeoneImportant@yourcompany.com',
        @subject = 'Test Database Restored',
        @body = 'A test or training database has been restored on InstanceName. Please make necessary adjustments.'
END

--drop your temp table
IF (OBJECT_ID('tempdb..#Restores') IS NOT NULL)
    DROP TABLE #Restores


This is a simplistic statement to notify you of restore activity for databases you specify. You can get much more sophisticated if necessary by sending the query output as a file or html, or by triggering an immediate notification whenever the database is restored.
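For example, a sketch of the file-attachment variant (using the same hypothetical mail profile and recipient as above; note that @query runs in its own session, so it reads msdb directly rather than the #temp table):

--sketch: email the restore details as an attached result set
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBMailProfile',   --hypothetical profile, as above
    @recipients = 'SomeoneImportant@yourcompany.com',
    @subject = 'Test Database Restored',
    @query = N'SELECT destination_database_name, restore_date, [user_name]
               FROM msdb.dbo.restorehistory
               WHERE restore_date >= GETDATE() - 7',
    @attach_query_result_as_file = 1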

For more information about blog posts, concepts and definitions, further explanations, or questions you may have, please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Inconsistent Tempdb Growth

— by Ginger Keys

The tempdb database is a system database that is shared across all databases and all connections in SQL Server. As a general rule, I configure all my clients’ servers to have multiple tempdb files with consistent sizes and consistent autogrowth set according to best practices. The number of tempdb files created equals the number of cores or 8, whichever is lower. And the size set for each file depends on each individual SQL instance to accommodate a typical workload.
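As a rough sketch of that baseline configuration (file names, sizes, growth increments, and the drive path are all illustrative, not prescriptions):

--example: four equally sized tempdb data files with fixed autogrowth
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB)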

The Problem

Recently during a routine review of a client’s server, I noticed in the SQL Server Logs that there were messages indicating I/O pressure for the tempdb database.


I also noticed in Windows Explorer that my tempdb files were no longer uniform in size. In fact the first data file had grown way out of proportion compared to all the other data files:


After double-checking the database properties, I confirmed that these files should have grown in consistent amounts:


So what happened to cause this inconsistent growth and how should I fix it?

Tempdb is used (and can grow) when users explicitly create objects like temp tables, variables, cursors, or stored procedures. Tempdb is also used when the database engine creates work tables for sorts or spooling, and row versioning operations.

It’s often difficult to diagnose what happened on a SQL Server after the fact. There are many resources online to determine what is currently using tempdb but not a lot of information from processes that may have run days ago.

What Happened?

First, I want to see what is currently using tempdb, to find out if there are any processes using all that space in the database. There are a number of TSQL scripts available online for determining who is currently using tempdb.
If you find plans that are using tempdb and causing substantial growth, it is probably a good opportunity for some performance tuning.
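For a quick look at which sessions currently hold tempdb space, a query along these lines can help (a sketch against the space-usage DMVs; adjust the ordering and filters to your needs):

--sketch: sessions currently holding space in tempdb
SELECT su.session_id,
       su.user_objects_alloc_page_count * 8 AS user_objects_kb,
       su.internal_objects_alloc_page_count * 8 AS internal_objects_kb,
       s.login_name,
       s.program_name
FROM tempdb.sys.dm_db_session_space_usage su
JOIN sys.dm_exec_sessions s ON s.session_id = su.session_id
ORDER BY su.internal_objects_alloc_page_count DESC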

In my situation there was nothing significant currently using tempdb, so my issue happened prior to my discovering the problem. Since I was not able to see any active queries creating temp tables, stored procs, cursors, or variables, I wanted to see if there were row versioning operations occurring.

If snapshot isolation is enabled on a database, performance can improve because it eliminates locking on the underlying tables. However, in order to do this SQL Server keeps row versions of the data (as it existed at the start of the transaction) in the tempdb version store, so that all queries in the transaction see the same version (or snapshot) of the database. This can take up a large amount of space in tempdb, depending on the size of the tables in the database being queried.

Run this statement to find any user databases with snapshot isolation enabled:

select * from sys.databases

where (snapshot_isolation_state = 1 or is_read_committed_snapshot_on = 1)

and database_id > 4

My client had a few databases with snapshot isolation enabled, so we had a conversation about the reasoning behind these settings. However, it was still unclear why tempdb grew and why the files grew out of proportion. As stated earlier, unless you are actively collecting performance data for this type of behavior, it is often very difficult to determine the cause of the problem after the fact. So what do we do about it?


When your tempdb files grow beyond the original size set in the properties, and they grow out of proportion, the preferred and best solution is to grow the files to a uniform size, as long as there is room on your disk. This is the preferred solution because if the tempdb files grew, the workload apparently needed that much room to perform the tasks that caused the growth. You can adjust the size of your tempdb files using the GUI, or you can run the following statement for each data file:

USE master
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2000MB)
GO

If there is not enough space on your drive and you need to resolve the issue right now, shrink the tempdb files down to a uniform size. Keeping your tempdb data files the same size helps to avoid page contention issues.

This can be done through the GUI by selecting each individual data file and specifying the desired size, or you can run this statement for each data file:

USE tempdb
GO
DBCC SHRINKFILE (tempdev, 1000)
GO



If for some reason the data files don’t shrink to the size you specified, try clearing the caches that can pin pages in tempdb, then execute the SHRINKFILE statement again. Do this during a maintenance window of course, and not during production hours while users are connected.
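A common sequence for clearing those caches is sketched below; these commands flush cached plans and clean buffer pages instance-wide, so treat them strictly as maintenance-window tools:

--flush cached plans, system caches, and clean buffers that can keep tempdb pages pinned
DBCC FREEPROCCACHE
DBCC FREESYSTEMCACHE ('ALL')
DBCC DROPCLEANBUFFERS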

Prevention and Best Practice

Put your tempdb on its own volume if possible. If your tempdb is on the same drive as your user database files and that drive runs out of space, it will be difficult to restart SQL Server. If your tempdb is on its own drive you can utilize all of the space for your tempdb files and also reduce I/O contention.

Create as many data files as you have processors, up to a maximum of 8. (Note that a dual-core CPU counts as 2 CPUs.) This will reduce allocation contention and is more scalable. There is much debate on the necessity of this practice…this is a general guideline.

Make each data file the same size. This helps to avoid SGAM page contention issues, and will allow for optimal proportional-fill performance. Set the size of your data files large enough to accommodate a typical workload. And if you have tempdb on its own volume, size your files to use the majority of the space on the volume. This way SQL doesn’t have to stop activity in order to grow the files.

Set the autogrowth to a fixed amount (not a percentage) and to a reasonable size. If the growth increment is set too low compared to the amount of data written to tempdb, the files will have to expand too often. While a file is autogrowing, transactions that need the new space must wait until the growth process completes. Note – every time the SQL Server instance is restarted, tempdb is recreated at the original size specified in the database properties.

Enable Trace Flag 1117 – If you are not yet using SQL Server 2016 and are on an earlier version, enabling trace flag 1117 will ensure that your tempdb files all grow when any of the files reach the autogrow threshold. (Starting with SQL Server 2016 this flag is unnecessary.) Trace flag 1117 is a global setting, and will impact every database in your instance, not just tempdb. You can enable the trace flag by going to SQL Server Configuration Manager > SQL Server Services > Right click on SQL Server (MSSQLSERVER) > Properties > Startup Parameters > enter -t1117 and click Add. You must restart the service for the change to take effect.
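After the restart, a quick check confirms the flag is active globally:

--verify trace flag 1117 is enabled instance-wide
DBCC TRACESTATUS (1117, -1)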


Finally, you can minimize tempdb utilization by practicing the following:

  • Avoid SORT_IN_TEMPDB option
  • Avoid unnecessary cursors
  • Avoid spooling (CTEs referenced multiple times)
  • Avoid using MARS
  • Don’t enable snapshot isolation unless there is a very compelling reason to do so
  • Avoid using triggers for bulk operations
  • Avoid using LOBs as local variables

Reference: https://dba.stackexchange.com/questions/19870/how-to-identify-which-query-is-filling-up-the-tempdb-transaction-log




AlwaysOn – Connection Handshake Failure

— by Ginger Keys

While doing some work on a test environment recently, I realized the AlwaysOn group was failing. (On a production system I would have had several alerts set up to notify me of any issues.) The dashboard on my test instance showed that the secondary replica was not synchronizing, and clicking on the blue warning links showed that the secondary was actually not connected:




I confirmed that both the primary and secondary instances were up and running, but the secondary Availability Replica and Databases were not connected to the AlwaysOn Group:



The AlwaysOn Health Events were consistent with the SQL Server logs. I discovered that the Windows Server Failover Cluster had been down (but was back online), and that the AlwaysOn group had failed over and back. But the culprit behind my secondary replica and databases being disconnected was an endpoint issue. I found in my SQL Server logs that the service account running AlwaysOn had for some reason lost its permission to connect to the endpoint:



Database Mirroring login attempt by user 'Domain\user.' failed with error: 'Connection handshake failed. The login 'Domain\user' does not have CONNECT permission on the endpoint. State 84.'. [CLIENT:]


Thankfully the solution to this issue was pretty simple. Run the following script on both the primary and secondary instances to grant the service account running AlwaysOn connect permission:

GRANT CONNECT ON ENDPOINT::hadr_endpoint TO [Domain\user]


This immediately corrected the problem, and the secondary replica and databases were connected to the AlwaysOn Group without any further action.
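To confirm the grant is in place on each replica, a query like this (a sketch over the permission catalog views) lists who holds permissions on your endpoints:

--list endpoint permissions and their grantees
SELECT ep.name AS endpoint_name,
       pr.name AS grantee,
       perm.permission_name,
       perm.state_desc
FROM sys.server_permissions perm
JOIN sys.endpoints ep ON perm.major_id = ep.endpoint_id
JOIN sys.server_principals pr ON perm.grantee_principal_id = pr.principal_id
WHERE perm.class_desc = 'ENDPOINT'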

Unfortunately the causes of issues on a SQL Server are not always apparent after the fact, which is why it is important to be proactive and set up robust alerts and notifications. For more information on setting up alerting and monitoring for your AlwaysOn group click here https://blog.sqlrx.com/2015/08/27/alwayson-monitoring-and-alerting/ .







Log Connections to SQL Instance

— by Ginger Keys

If you ever have a need to monitor connections to your SQL Server, and any related information about the connections such as database, logins, etc., there are some DMVs that can give you tons of information. Previously you might have used the sys.sysprocesses view to derive much of this information, but it has been deprecated in recent versions of SQL Server.

Instead, you can collect valuable information from these DMVs:

sys.dm_exec_sessions   https://msdn.microsoft.com/en-us/library/ms176013.aspx

sys.dm_exec_connections   https://msdn.microsoft.com/en-us/library/ms181509.aspx

sys.dm_exec_requests   https://msdn.microsoft.com/en-us/library/ms177648.aspx

In order to capture and retain connection information for my SQL server, I will create a small database and a table to hold some basic information. Of course you can alter the script to include more, less, or different data than what I am demonstrating below, to better fit your specific information needs.

I will create a database and a table, then insert data from two of the DMVs listed above.

Step 1 – Create a table to hold login activity

-- Create a database
USE master
GO
CREATE DATABASE [Connections]
ON PRIMARY
( NAME = N'Connections', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Connections.mdf' ,
SIZE = 1024MB , FILEGROWTH = 512MB )
LOG ON
( NAME = N'Connections_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\Connections_log.ldf' ,
SIZE = 512MB , FILEGROWTH = 256MB ) --log size/growth values are placeholders; set to suit your environment
GO

-- Create table to hold Login info
USE [Connections]
GO
CREATE TABLE [dbo].[LoginActivity]
(
    host_name [nvarchar](128) NULL,
    program_name [nvarchar](128) NULL,
    login_name [nvarchar](128) NOT NULL,
    client_net_address [nvarchar](48) NULL,
    DatabaseName [nvarchar](128) NOT NULL,
    login_time [datetime] NOT NULL,
    status [nvarchar](30) NOT NULL,
    date_time [datetime] NOT NULL
)
GO

Step 2 – Insert Data into Table

If you need to retain or archive this connection information, you can create a database which will hold the information, or export the results to a spreadsheet or other file. Otherwise you can simply select the information from the DMV below if you only need to see current data.


USE Connections
GO
-- run the select statement by itself to see connection info if you don't want to save the output
INSERT INTO LoginActivity
SELECT s.host_name,
       s.program_name,
       s.login_name,
       c.client_net_address,
       d.name AS DatabaseName,
       s.login_time,
       s.status,
       GETDATE() AS date_time
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c ON s.session_id = c.session_id
JOIN sys.databases d ON d.database_id = s.database_id
--WHERE d.name = 'ABCompany' --can specify databases if needed
WHERE s.login_time >= DATEADD(hh, -10, GETDATE()) --date range can be adjusted
GO


Step 3 – View/Save Output Results

After inserting the data into my table, I can see the current connections from the last 10 hours (as per my insert statement). On a production server, this list would be far greater.

SELECT * FROM LoginActivity


From the columns I have included in my table:

host_name – gives you the name of the workstation connecting; shows NULL for internal sessions.

program_name – tells you the name of the client program or application connecting.

client_net_address – provides the host address of each client connecting.

login_name, DatabaseName, and login_time – self-explanatory.

date_time – the current date and time the query is run.

status – gives the status of the session, which will be running, sleeping, dormant, or preconnect.

This information can also be output to a text or Excel file if preferred.



Being able to see users or applications making connections to your SQL Server can be useful or necessary for many reasons. The steps outlined above provide a general guideline for deriving connection information that can be altered to fit your organizational needs.


Log Shipping Backup Failure

–By Ginger Keys

For years the SQL Agent service account on my client’s SQL Server instance ran the Maintenance Plans and SQL Agent jobs with no issues. Many of the SQL databases were set to the Full recovery model, and Tlog backups executed on a regular basis with no problems.

The client decided to migrate to new hardware in a new datacenter, and decided that log shipping the databases over until the go-live date was the best option in their circumstances. We took the databases out of the regular Tlog backup routines and created Transaction Log Shipping routines in their place. The connections to the new instance were seamless, and the Copy & Restore jobs were executing fine. However, the Backup jobs were failing!

Why would this be, since the SQL Agent service account had been executing Tlog backups for years!? As it turns out, log shipping uses a different executable for performing tlog backups: sqllogship.exe. The SQL Agent service account must have permissions to the folder location where this executable is located, as shown below. You can locate your executable file by opening the LS_Backup job properties and viewing the job step.
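Alternatively, a quick query against the job catalog will pull the backup job’s command line, which shows exactly where sqllogship.exe lives (the LSBackup naming pattern is the log shipping default; your job names may differ):

--find the command line (and executable path) used by the log shipping backup job
SELECT j.name AS job_name, js.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps js ON js.job_id = j.job_id
WHERE j.name LIKE 'LSBackup%'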


Once we granted Full permissions to this location for the SQL Agent service account, everything worked as intended. This was the first occasion with log shipping that I have run into permissions issues for the service account on a primary server. Hopefully this is an uncommon occurrence, but it is certainly simple to fix once you understand what is happening.


AlwaysOn – Endpoint Ownership

— By Ginger Keys

It is not uncommon for a DBA or other IT staff to set up AlwaysOn in a SQL environment and later leave the company. We ran into this recently with a client and were asked to delete the previous employee’s login from everything SQL related. We were able to remove the login from all databases and server roles, however we were not able to delete the login because it was the owner of an endpoint.

When creating an AlwaysOn Availability Group, you have the option of using the wizard or you can create it using TSQL statements. The wizard is very intuitive and easy to use and with the exception of a few settings you can specify, default configurations are deployed using this method. One of the default configurations is the endpoint owner. Whoever creates the AlwaysOn group is by default the owner of the endpoint.

This is generally not a problem…unless that person leaves the company and you need to delete the login! You will get an error message that says “The server principal owns one or more endpoint(s) and cannot be dropped (Microsoft SQL Server, Error: 15141)”.

To check and see who the owners of your endpoints are, run this statement:

USE master
GO
SELECT e.name AS EndpointName,
       sp.name AS EndpointOwner,
       et.PayloadType,
       e.state_desc
FROM sys.endpoints e
INNER JOIN sys.server_principals sp
    ON e.principal_id = sp.principal_id
INNER JOIN (VALUES (1, 'SOAP'), (2, 'TSQL'), (3, 'SERVICE_BROKER'), (4, 'DATABASE_MIRRORING')) --sys.endpoints type codes
    AS et ( typeid, PayloadType )
    ON et.typeid = e.type
The AlwaysOn endpoint will have the name Hadr_endpoint and will have a DATABASE_MIRRORING payload type as shown below.


If your AlwaysOn group has already been created and a domain login is the owner, you can change the ownership to sa. Run the following statement to make the change:

USE master
GO
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa
GO
This will allow you to delete any login who might have owned the endpoint, if it’s ever necessary.

If you are creating an AlwaysOn Availability Group and want to use TSQL statements instead of the wizard, you have the ability to specify the endpoint owner. For complete instructions on how to set up the AlwaysOn group with TSQL click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/create-an-availability-group-transact-sql

In order to create the endpoint with a specific user, run the following statement:

CREATE ENDPOINT endpoint_mirroring









In the statement above, if AUTHORIZATION is not specified with a SQL or Windows login, the caller will become the owner of the newly created endpoint. To use AUTHORIZATION and assign ownership to a login, the caller must have IMPERSONATE permission on the specified login.
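As a sketch (both login names are placeholders), granting that permission looks like this:

--allow the caller to assign endpoint ownership to another login
GRANT IMPERSONATE ON LOGIN::[Domain\user] TO [Domain\caller]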

Endpoints are a fundamental piece of SQL Server that allow a connection or point of entry into your instance. Knowing who owns these endpoints and how to change the owner will potentially save you some headaches down the road in the event of IT staffing changes in your organization.



AlwaysOn 2016 – Encrypted Databases and Performance

–By Ginger Keys

It is common knowledge that encrypting a database causes degraded performance in SQL Server. In an AlwaysOn environment performance can be even more sluggish because there is the extra element of data replication latency. How much difference does it really make? Of course the answer is “it depends” on your environment and your workload. I was curious to see for myself what kind of performance hit encryption would have on one of my test databases, so this post will look at CPU usage of an encrypted vs. non-encrypted database.

Microsoft says that turning on TDE (Transparent Data Encryption) for a database will result in a 2-4% performance penalty, which is actually not too bad given the benefits of having your data more secure. There is even more of a performance hit when enabling cell level or column level encryption. When encrypting any of your databases, keep in mind that the tempdb database will also be encrypted. This could have a performance impact on your other non-encrypted databases on the same instance.

In a previous post I demonstrated how to add an encrypted database to an AlwaysOn group in SQL2016. In this article I will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group compared to the same database not-encrypted.

I have 3 identical test databases I will use to look at performance metrics.

  • GKTestDB is TDE encrypted, and is part of the AlwaysOn group
  • GKTestDB2 is not encrypted, and not part of AlwaysOn group
  • GKTestDB3 is not encrypted, but is part of AlwaysOn group


There are some good open source tools for performing stress tests on your SQL database. You can find one here that uses the AdventureWorks2014 database: http://www.sqlballs.com/2016/05/sql-server-random-workload-generator.html. There is also one here that you can use on any database: https://github.com/ErikEJ/SqlQueryStress/wiki. I will be using the SQLQueryStress tool for my demonstration.

Step 1 – Test performance on non-encrypted DB not in AG

To measure performance metrics, create a User Defined Data Collector Set in Performance Monitor (Perfmon). There are many metrics that can be measured, but I will only be looking at CPU % Processor Time.


Download and open the SQLQueryStress tool, and create a statement to run against your database. In my test I will first look at the performance of running a select query for 5000 iterations on a database that has not been added to the AlwaysOn group, and has not been encrypted: (GKTestDB2)


Be sure to clear the buffers and cache before performing each stress test. Select your database, the server name, the number of iterations, the number of threads, and the delay between queries in milliseconds.
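On a test box, clearing the buffers and cache can be done with something like the following; never run these against a production instance:

--write dirty pages, then drop clean buffers and cached plans (test systems only)
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
GO
DBCC FREEPROCCACHE
GO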


Start your user defined Perfmon data collector set, then start the stress test in the SQLQueryStress tool.


At the end of each stress test you will need to manually stop your Perfmon collection.

Step 2 – Test performance on encrypted DB in the AlwaysOn Group

Now I will perform the same stress test to see performance on the identical but Encrypted database in the AlwaysOn group (GKTestDB). Be sure to clear the buffers and cache, and change the database in the SQLQueryStress tool.



Step 3 – Test performance on non – encrypted DB in the AlwaysOn Group

Just for curiosity sake, I’m also going to test the identical database that is not encrypted, but is included in the AlwaysOn group (GKTestDB3):



Step 4 – Compare results

I set the output files of my Perfmon results to land in a location on my local drive so that I can open up the results of each test and compare.

The CPU usage for the database that was not encrypted and not in my AlwaysOn group averaged 43% for the duration of the test, as shown by the dark line on the graph below.

Not Encrypted / Not in AG database CPU usage:


The CPU usage for the identical database that is encrypted and is in the AlwaysOn group averaged 57.5% during the stress test, as shown on the graph below. This is quite a bit more than the non-encrypted/non-AG database, especially given the simple statement that was being run.

TDE Encrypted / Joined to AG Database CPU usage:


And finally, the CPU usage for the identical database that is not encrypted, but is included in my AlwaysOn group, averaged 43.4%, which is not much different than the non-encrypted/non-AG database above.

Not Encrypted / Joined to AG Database CPU usage:




Having an encrypted database creates a noticeable CPU performance hit compared to a non-encrypted database. Microsoft provides many options for protecting your data, transparent data encryption (TDE) being one of them. The benefits and assurance of securing your data outweigh the performance cost; however, it may be useful to see how much of a performance hit your system will encounter prior to deciding which security options your organization will deploy.







Stretch Databases – SQL Server 2016

— by Ginger Keys


Microsoft has added another cool feature in SQL Server 2016 – Stretch databases – that provides a cost effective way to store historical or ‘cold’ data. This feature is ideal for transactional databases with large amounts of cold data, typically stored in a small number of tables. Stretching is better than the traditional method of archiving your old data to a remote location, because stretched data can still be queried and accessed through applications. The data is always online and there is no need to change anything in your application in order to query it. The end user doesn’t see any difference other than possibly a small amount of latency when querying the stretched data.

Features and Advantages:

Some features and benefits of stretching your data to Microsoft Azure:

  • Useful for ‘cold’ or historical data.
  • Your data is still online and available for query.
  • No changes to applications or queries are required.
  • Data stretched to Azure is encrypted and safe – Always Encrypted, Row Level Security, and other SQL security features are still available.
  • Useful for reducing the storage space needed on local disk.
  • Useful for reducing the size of backups and the time required to perform them.
  • Stretching applies to individual tables – you can stretch either the entire table or selected rows from the table.
  • Select and Insert are supported for stretched data; Update and Delete are not.



There are some limitations to stretch databases, and not all databases or tables will be eligible for stretching. Some features and datatypes in a table will prohibit stretching a table in your database. These include:

  • Tables with FILESTREAM data
  • Tables replicated or using Change Tracking or Change Data Capture
  • Memory optimized tables
  • Text, ntext, image, timestamp datatypes
  • Sql_variant, geometry or geography, CLR, hierarchyid datatypes
  • COLUMN_SET or computed column types
  • Default or check constraints, or foreign key constraints that reference the table
  • Full text, XML, spatial indexes, or indexed views that reference the table

Certain limitations also exist for tables that have been stretched:

  • Uniqueness is not enforced with UNIQUE or PRIMARY KEY constraints in the stretched table in Azure
  • You can’t Update or Delete rows in a stretch enabled table or view
  • You can’t Insert rows into a stretched table on a linked server
  • You can’t create an index for a view that includes stretch enabled tables
  • Filters on indexes do not propagate to the remote table.

Steps to set up Stretch Database

  1. Get an Azure account with Microsoft – https://azure.microsoft.com/en-us/free/
    1. Create Azure server
    2. Establish user credentials
    3. Configure firewall rules
  2. Run DMA (Data Migration Assistant) to see if your database is eligible
    1. Download Stretch Database Advisor (DMA) – https://www.microsoft.com/en-us/download/details.aspx?id=53595
  3. Enable Instance
  4. Enable Database
  5. Monitor state of stretch progress

I’m going to assume you have already completed the first two steps and will not go over these. So beginning with step 3, to implement a stretch database you must enable the feature on both the instance and database.

Enable Instance

To enable Stretch Database on your instance run the following statement:

--enable instance for stretch
EXEC sp_configure 'remote data archive', 1
GO
RECONFIGURE
GO

Enable Database

From SQL Server Management Studio (SSMS) right click on the database which holds the table(s) you want to stretch to Azure and select Tasks > Stretch > Enable.


This will open the Enable Database for Stretch Wizard. First step will be to select the table(s) you want to stretch to Azure. You can select the entire table contents, or…


…if you click on the ‘Entire Table’ link, you can select specific rows to stretch as shown below.
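(If you prefer TSQL for this step, the per-table equivalent looks something like the sketch below; this assumes the database has already been enabled for stretch, and the Orders table is the one used later in this post.)

--enable stretch migration for a single table
ALTER TABLE [dbo].[Orders]
    SET ( REMOTE_DATA_ARCHIVE = ON ( MIGRATION_STATE = OUTBOUND ) )
GO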


The next step is to Configure Azure. You will be prompted to sign in to your Azure account:


After you have signed in to Azure, select the subscription to use (if you have more than one), select the correct region, select the server you have created (or create a new one), and provide the login credentials. Currently Azure only supports SQL Server Authentication for stretch databases.


In order to stretch a database table to Azure, the database must have a database master key (DMK). Specify (and save) the password for the DMK by creating the credential in the Wizard as follows on the Secure Credentials page.


If you already have a DMK for your database you will enter the password on this same screen as shown.


On the Select IP Address page, you can select the ‘Use source SQL Server Public IP’ button or specify the range of subnet IP addresses to use.


After specifying the IP addresses, click next through the Summary and Results pages.



Thank you Microsoft, we will enjoy it!

Monitor your Stretch Progress

In order to see the migration status of your data to the Azure cloud server, run this TSQL statement to show the number of rows migrated:

select * from sys.dm_db_rda_migration_status

order by start_time_utc desc

You can also monitor your stretched database by right-clicking on your database > Tasks > Stretch > Monitor, as shown:


This will display a report with your Source server info, the Azure server info and information about the tables that were stretched:


You will also notice in Object Explorer that the stretched database has a different icon beside it:


Under the External Resources of your database, the stretched server will be listed in the External Data Sources folder.


When connecting to your Azure server, you can now see the stretched database, which gets assigned a database name and table name automatically.


Now What?

Let’s see what happens now when we query data from the Orders table we just stretched.

From your local server instance, select rows from your table as shown. You will notice that the execution plan shows a ‘Remote Query’ operation because it is pulling data from the remote Azure server.


I only stretched certain rows of data to the Azure cloud, not the entire table. When I run a select statement that only pulls data from my local database and not the stretched Azure data, you will notice the execution plan changes. No Remote Query or concatenation operation is involved because it is only pulling data from the local table.


Disable Stretch Database

Now if for some reason you need to un-stretch your database, you can disable Stretch Database for the entire database or just for an individual table. Disabling Stretch Database does not delete the table or data from Azure; you have to manually drop it using the Azure management portal, and the remote table in Azure will continue to incur costs until you delete it.

To disable Stretch Database for your database, right-click on the database in Object Explorer in your local SSMS and select Tasks > Stretch > Disable, as shown:


To disable Stretch Database for a table, right click on the table name and select Stretch > Disable > and either Bring data back from Azure, or Leave data in Azure. Bringing data back from Azure will incur data transfer costs.



Migrating cold or historical data to Microsoft’s Azure can be a cost effective way to retain inactive data, while still having access to it when needed. For information on Azure Stretch Database pricing go here https://azure.microsoft.com/en-us/pricing/details/sql-server-stretch-database/ . This article covered very basic information on how to set up a Stretch Database, but there are several more aspects such as performance implications and backup/restore processes that we did not cover. Hopefully this gives a starting point from which you can begin to test and explore this new feature.







AlwaysOn 2016 – Encrypted Databases in an AG

–By Ginger Keys

In the AlwaysOn 2016 release, there are several enhancements that have improved manageability, scalability, and availability. One of the enhancements is the ability to add encrypted databases to an AlwaysOn group.

Encrypted databases were allowed in earlier versions of AlwaysOn; however, they could not be accessed after a failover to a secondary replica because the primary and secondary replicas had different service master keys. When the secondary tried to decrypt the database master key using its own service master key, the master key would fail to open because of an invalid password.

In SQL 2016 the database remains accessible on the secondaries in the event of failover, since it is no longer a requirement for each replica to have the same Database Master Key (DMK) and password on each instance. There is no need to back up the DMK from the primary and restore it to the secondary(s); it is only necessary that a DMK exist on each replica. As long as each replica has a DMK created, even without the same password, there will not be any encryption/decryption issues because of different keys.

Transparent Data Encryption (TDE) is a method of protecting your data ‘at rest’. There are several methods of protecting your data in addition to TDE, but this article will only discuss TDE encrypted databases in an AlwaysOn Availability Group. If your physical media (drives or backup tapes) are stolen, TDE will encrypt the sensitive data in the data and log files, and will also protect the keys used to encrypt the data with a certificate. TDE provides for real-time I/O encryption and decryption, but it does not protect data ‘in-flight’ flowing from the server to the end user. Also it does not protect data from a user who has privileges in the SQL instance.

The steps below demonstrate how to add an encrypted database to an AlwaysOn Availability Group

  1. Encrypt your database (if not already encrypted)
  2. Create/verify Database Master Key (DMK) on secondary replica(s)
  3. Create the TDE Certificate on secondary replica(s)
  4. Perform Full and Tlog backup of encrypted database on primary replica
  5. Add encrypted database to AlwaysOn group on primary replica
  6. Restore Full and Tlog backups on secondary replica(s)
  7. Join the encrypted database to the AlwaysOn group on secondary replica(s)

Step 1 – Encrypt Database

If your database has not already been encrypted, follow these steps on your primary replica to create a master key (DMK), back up the certificate, create a database encryption key (DEK), and enable encryption on the database.

--create DMK database master key
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>' --password is a placeholder
GO

--check to make sure master key is set up
SELECT b.name, a.crypt_type_desc
FROM sys.key_encryptions a
INNER JOIN sys.symmetric_keys b
ON a.key_id = b.symmetric_key_id
WHERE b.name = '##MS_DatabaseMasterKey##';
GO

--create and backup certificate
CREATE CERTIFICATE Cert4TDE WITH SUBJECT = 'TDE Certificate'
GO
BACKUP CERTIFICATE Cert4TDE
TO FILE = 'C:\SQLRX\KeyBkup\Cert4TDE.cer'
WITH PRIVATE KEY (
FILE = 'C:\SQLRX\KeyBkup\TDEPvtKey.pvk',
ENCRYPTION BY PASSWORD = '<StrongPassword>' ) --password is a placeholder
GO

--create DEK database encryption key with certificate
USE GKTestDB
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256 -- AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY
ENCRYPTION BY SERVER CERTIFICATE Cert4TDE
GO

--enable TDE for the database
USE master
GO
ALTER DATABASE GKTestDB SET ENCRYPTION ON
GO

--monitor encryption progress
SELECT db_name(database_id), encryption_state, percent_complete, key_algorithm, key_length
FROM sys.dm_database_encryption_keys
GO

Step 2 – Verify Database Master Key on Secondary Replica(s)

On each secondary replica, verify that a Database Master Key (DMK) exists in the master database.


--check to make sure master key is set up
SELECT b.name, a.crypt_type_desc
FROM sys.key_encryptions a
INNER JOIN sys.symmetric_keys b
ON a.key_id = b.symmetric_key_id
WHERE b.name = '##MS_DatabaseMasterKey##';


If a record is returned, that means a DMK exists on the secondary instance. If no record is returned, then create one as follows:

--create DMK database master key
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>' --password is a placeholder
GO
Step 3 – Create TDE Certificate on Secondary Replica(s)

On each secondary replica instance, create/restore the TDE Certificate from the server certificate and private key that you backed up on the primary. Copy the certificate as well as the private key from the primary over to a file path on the secondary(s). The following will install the certificate onto the secondary replica(s).

--create TDE Certificate from the certificate backed up on primary
CREATE CERTIFICATE Cert4TDE
FROM FILE = 'C:\SQLRX\KeyBkup\Cert4TDE.cer'
WITH PRIVATE KEY (
FILE = 'C:\SQLRX\KeyBkup\TDEPvtKey.pvk',
DECRYPTION BY PASSWORD = '<StrongPassword>' ) --password used when the private key was backed up
GO
Step 4 – Create Full & Tlog Backups on the Primary Replica

Create a full and tlog backup of the TDE encrypted database on the primary replica.

--create Full Backup of encrypted database on primary
BACKUP DATABASE GKTestDB
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestDB.bak'
GO

--create Tlog backup of encrypted database on primary
BACKUP LOG GKTestDB
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestTlogBackup.trn'
GO

Step 5 – Add Encrypted Database to AlwaysOn Group

On the primary replica instance, add the TDE encrypted database to the Availability Group. As of the current release of SQL Server 2016, you cannot do this yet through the wizard.


Instead add the TDE database to your AlwaysOn group using TSQL:

USE master
GO
ALTER AVAILABILITY GROUP [YourAGName] --availability group name is a placeholder
ADD DATABASE [GKTestDB]
GO

Step 6 – Restore Full & Tlog Backups on Secondary Replica(s)

On each secondary replica instance, restore the full and tlog backups of the encrypted database with no recovery. Copy the backup files from the primary replica to a file location on your secondary and restore from that file path as follows:

--Restore Full backup of encrypted database on secondary with no recovery
USE [master]
GO
RESTORE DATABASE [GKTestDB]
FROM DISK = N'C:\SQLRX\Backup\GKTestDB.bak' --path on the secondary is a placeholder
WITH NORECOVERY
GO

--Restore the Tlog backup of encrypted database on secondary with no recovery
RESTORE LOG [GKTestDB]
FROM DISK = N'C:\SQLRX\Backup\GKTestTlogBackup.trn' --path on the secondary is a placeholder
WITH NORECOVERY
GO

Step 7 – Join the Encrypted Database to AlwaysOn Group on Secondary Replica

On each secondary replica instance, join the database to the availability group.

USE master
GO
ALTER DATABASE [GKTestDB]
SET HADR AVAILABILITY GROUP = [YourAGName] --availability group name is a placeholder
GO
After joining the encrypted database to the AlwaysOn group, you should now be able to fail over the group to the secondary replica and be able to access the database without any issues. It is common knowledge that encrypting a database causes degraded performance, so the next article will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group.



AlwaysOn 2016 – Direct Seeding

–By Ginger Keys

SQL Server 2016 provides some awesome new enhancements and features, especially in Enterprise Edition.  One of the new features relating to AlwaysOn is Direct Seeding.

Good stuff – No more time consuming backup and restore

  • Direct Seeding allows you to initialize the databases to your secondary replica(s) without having to configure a network share or perform backup and restore operations.  It will send the database directly over to the secondary replica without you needing to do anything.  Woo hoo!!

Gotchas – Some with regards to Large Databases

  • Direct seeding does not use compression by default, so adding a very large database to your AlwaysOn group could take a while and can use a lot of your network bandwidth. Compression can be enabled by turning on Trace Flag 9567 (see the sketch after this list). However, know that adding a large database with compression enabled will cause high CPU utilization during the seeding process. For more info click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/tune-compression-for-availability-group .
  • Also during direct seeding the transaction log will not truncate, so in a highly transactional database your transaction log could potentially grow very large during this time.
  • For large databases being initialized to secondary replicas using direct seeding, be mindful of your network capacity (as the entire database will be sent across the network to the secondary(s)), workload amount which could bloat your transaction log, CPU pressure if you enable compression, and database size which will affect the seeding time.
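A minimal sketch for enabling that compression (the trace flag can also be added as a startup parameter; -1 makes it global):

--turn on compression for availability group seeding streams
DBCC TRACEON (9567, -1)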

You may want to use direct seeding during afterhours to prevent your Network or System Admin from hunting you down for the network hit that will happen.

If you are using SQL Server Management Studio version 17, Direct Seeding of your databases is now available in the GUI! You can download this version of SSMS here https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms .

Let’s Do This!

I will assume you already have your environment ready (i.e. failover cluster created and configured, SQL installed and configured, AlwaysOn enabled, replicas and logins configured, etc.). So from the New Availability Group Wizard, you will specify your AG name:


Just a side note… in earlier versions of AlwaysOn (SQL 2012 & 2014), failover would occur if there was a problem with the health of the instance. If one of your databases was in trouble however, the AlwaysOn group would not fail over as long as the instance was okay. So if you had a database that was offline, suspect or corrupt, no failover would occur. In AlwaysOn 2016, failover will occur not only if the instance is in trouble, but also if one or more of your databases is in trouble. This is not a default setting however. You have to specify this when creating your AlwaysOn group, by selecting the ‘Database Level Health Detection’ checkbox in the setup wizard.

Next, select your database(s) to add to the AlwaysOn group:


Then add the replicas to be included in your AG and what type of failover (automatic or manual), data replication (synchronous or asynchronous), and whether they will allow for read only access:


Next click on the Endpoints tab and make sure URL and port numbers are correct:


Click on the Backup Preferences tab and specify how you want your backups to occur:


Click on the Listener tab and specify the Listener (DNS) name, port number and IP address:


And THIS is where you can specify Automatic (Direct) Seeding!


Click Next through the next screens and THAT’S IT!


You’re done, simple as that.


If you do not have SSMS v17 you can still initialize your databases to secondary replicas using direct seeding, but you will need to do this through TSQL from the primary replica:



CREATE AVAILABILITY GROUP [YourAGName]  --AG and database names are placeholders
FOR DATABASE [YourDatabase]
REPLICA ON N'<*primary_server*>'
WITH (ENDPOINT_URL = N'TCP://<*primary_server*>.<*fully_qualified_domain_name*>:5022',
    FAILOVER_MODE = AUTOMATIC,
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    SEEDING_MODE = AUTOMATIC),
N'<*secondary_server*>' WITH (ENDPOINT_URL = N'TCP://<*secondary_server*>.<*fully_qualified_domain_name*>:5022',
    FAILOVER_MODE = AUTOMATIC,
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    SEEDING_MODE = AUTOMATIC)
GO

--then, on each secondary replica, join and allow the AG to create the seeded database
ALTER AVAILABILITY GROUP [YourAGName] JOIN
GO
ALTER AVAILABILITY GROUP [YourAGName] GRANT CREATE ANY DATABASE
GO

For more information click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/automatically-initialize-always-on-availability-group .


SQL Server 2016 AlwaysOn gives you the ability to join databases to your AlwaysOn group and initialize them on your secondaries without having to backup and restore, or establish a network share. This ultimately saves much time and effort, but can take a while if your database is large. And using compression during the seeding process can save some time, but your CPU will take a performance hit. Also keep in mind that your transaction log will grow during this time as it will not truncate during the seeding process. Of course, be sure to test this first before doing it in your production environment!
