Tag Archives: SQL 2016

AlwaysOn 2016 – Encrypted Databases and Performance

–By Ginger Keys

It is common knowledge that encrypting a database degrades performance on a SQL Server instance. In an AlwaysOn environment performance can be even more sluggish because of the extra element of data replication latency. How much difference does it really make? Of course the answer is “it depends” on your environment and your workload. I was curious to see for myself what kind of performance hit encryption would have on one of my test databases, so this post will compare CPU usage of an encrypted vs. non-encrypted database.

Microsoft says that turning on TDE (Transparent Data Encryption) for a database will result in a 2-4% performance penalty, which is actually not too bad given the benefits of having your data more secure. There is even more of a performance hit when enabling cell level or column level encryption. When encrypting any of your databases, keep in mind that the tempdb database will also be encrypted. This could have a performance impact on your other non-encrypted databases on the same instance.

In a previous post I demonstrated how to add an encrypted database to an AlwaysOn group in SQL2016. In this article I will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group compared to the same database not-encrypted.

I have 3 identical test databases I will use to look at performance metrics.

  • GKTestDB is TDE encrypted, and is part of the AlwaysOn group
  • GKTestDB2 is not encrypted, and not part of AlwaysOn group
  • GKTestDB3 is not encrypted, but is part of AlwaysOn group


There are some good open source tools for performing stress tests on your SQL database. You can find one here that uses the AdventureWorks2014 database: http://www.sqlballs.com/2016/05/sql-server-random-workload-generator.html. There is also one here that you can use on any database: https://github.com/ErikEJ/SqlQueryStress/wiki. I will be using the SQLQueryStress tool for my demonstration.

Step 1 – Test performance on non-encrypted DB not in AG

To measure performance metrics, create a User Defined Data Collector Set in Performance Monitor (Perfmon). There are many metrics that can be measured, but I will only be looking at CPU % Processor Time.


Download and open the SQLQueryStress tool, and create a statement to run against your database. In my test I will first look at the performance of running a select query for 5000 iterations on a database that has not been added to the AlwaysOn group, and has not been encrypted: (GKTestDB2)


Be sure to clear the buffers and cache before performing each stress test. Select your database, the server name, the number of iterations, the number of threads and the delay between queries in milliseconds.
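On my test instance, clearing the buffers and plan cache before each run looks roughly like this (a sketch for a test server only; never run these DBCC commands against production):

```sql
-- flush dirty pages so DROPCLEANBUFFERS can empty the whole buffer pool
CHECKPOINT;
-- remove all pages from the buffer pool so each test starts cold
DBCC DROPCLEANBUFFERS;
-- clear the plan cache for the entire instance
DBCC FREEPROCCACHE;
```

This way each iteration of the stress test has to read from disk and compile plans fresh, so the runs are comparable.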


Start your user defined Perfmon data collector set, then start the stress test in the SQLQueryStress tool.


At the end of each stress test you will need to manually stop your Perfmon collection.

Step 2 – Test performance on encrypted DB in the AlwaysOn Group

Now I will perform the same stress test to see performance on the identical but Encrypted database in the AlwaysOn group (GKTestDB). Be sure to clear the buffers and cache, and change the database in the SQLQueryStress tool.



Step 3 – Test performance on non-encrypted DB in the AlwaysOn Group

Just for curiosity’s sake, I’m also going to test the identical database that is not encrypted but is included in the AlwaysOn group (GKTestDB3):



Step 4 – Compare results

I set the output files of my Perfmon results to land in a location on my local drive so that I can open up the results of each test and compare.

The CPU usage for the database that is not encrypted and not in my AlwaysOn group averaged 43% for the duration of the test, as shown by the dark line on the graph below.

Not Encrypted / Not in AG database CPU usage:


The CPU usage for the identical database that is encrypted and is in the AlwaysOn group averaged 57.5% during the stress test as shown on the graph below. This is quite a bit more than the non-encrypted/non AG database, especially given the simple statement that was being run.

TDE Encrypted / Joined to AG Database CPU usage:


And finally, the CPU usage for the identical database that is not encrypted, but is included in my AlwaysOn group averaged 43.4%, which is not much different than the non-encrypted /non-AG database above.

Not Encrypted / Joined to AG Database CPU usage:




Having an encrypted database creates a noticeable CPU performance hit as compared to a non-encrypted database. Microsoft provides many options for protecting your data, transparent data encryption (TDE) being one of them. The benefits and assurance of securing your data outweigh the performance cost; however, it may be useful to see how much of a performance hit your system will take before deciding which security options your organization will deploy.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!






SQL 2016 Database Mail Issues

If you are planning on installing SQL Server 2016, take note that you need SP1 or .NET 3.5 for Database Mail to work. Database Mail is a very important and widely used part of SQL Server that allows us to send e-mail. I found this out when mail could not be sent and the Database Mail log was empty after it was set up. It has also been noted that SP1 + CU1 can break Database Mail again, so be careful when applying Cumulative Updates.
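Once the prerequisite is in place, a quick way to verify that Database Mail is actually working is to check the mail system status and send a test message; a sketch where the profile name and recipient are placeholders for your own configuration:

```sql
-- confirm the Database Mail queue is STARTED
EXEC msdb.dbo.sysmail_help_status_sp;

-- send a test message through an existing profile (names below are hypothetical)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',
    @recipients   = 'dba@example.com',
    @subject      = 'Database Mail test',
    @body         = 'If you can read this, Database Mail is working.';
```

If the message never arrives, msdb.dbo.sysmail_event_log is the first place to look.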



Stretch Databases – SQL Server 2016

— by Ginger Keys


Microsoft has added another cool feature in SQL Server 2016 – Stretch databases – that provides a cost-effective way to store historical or ‘cold’ data. This feature is ideal for transactional databases with large amounts of cold data, typically stored in a small number of tables. Stretching is better than the traditional method of archiving your old data to a remote location because stretched data can still be queried and accessed through applications. The data is always online and there is no need to change anything in your application in order to query it. The end user doesn’t see any difference other than possibly a small amount of latency when querying the stretched data.

Features and Advantages:

Some features and benefits of stretching your data to Microsoft Azure:

  • Useful for ‘cold’ or historical data.
  • Your data is still online and available for query.
  • No changes to applications or queries required.
  • Stretching your data to Azure is encrypted and safe – Always Encrypted, Row Level Security, and other SQL security features are still available.
  • Useful for reducing storage space needed on local disk.
  • Useful for reducing the size of backups and the time required to perform backups.
  • Stretching applies to individual tables – you can stretch either the entire table or selected rows from the table.
  • Select and Insert are supported for stretched data; Update and Delete are not.



There are some limitations to stretch databases, and not all databases or tables will be eligible for stretching. Certain features and datatypes in a table will prohibit stretching. These include:

  • Tables with FILESTREAM data
  • Tables replicated or using Change Tracking or Change Data Capture
  • Memory optimized tables
  • Text, ntext, image, timestamp datatypes
  • Sql_variant, geometry or geography, CLR, hierarchyid datatypes
  • COLUMN_SET or computed column types
  • Default or check constraints, or foreign key constraints that reference the table
  • Full text, XML, spatial indexes, or index views that reference the table

Certain limitations also exist for tables that have been stretched:

  • Uniqueness is not enforced with UNIQUE or PRIMARY KEY constraints in the stretched table in Azure
  • You can’t Update or Delete rows in a stretch enabled table or view
  • You can’t Insert rows into a stretched table on a linked server
  • You can’t create an index for a view that includes stretch enabled tables
  • Filters on indexes do not propagate to the remote table.

Steps to set up Stretch Database

  1. Get an Azure account with Microsoft – https://azure.microsoft.com/en-us/free/
    1. Create Azure server
    2. Establish user credentials
    3. Configure firewall rules
  2. Run DMA (Data Migration Assistant) to see if your database is eligible
    1. Download Stretch Database Advisor (DMA) – https://www.microsoft.com/en-us/download/details.aspx?id=53595
  3. Enable Instance
  4. Enable Database
  5. Monitor state of stretch progress

I’m going to assume you have already completed the first two steps and will not go over these. So beginning with step 3, to implement a stretch database you must enable the feature on both the instance and database.

Enable Instance

To enable Stretch Database on your instance run the following statement:

--enable instance for stretch

EXEC sp_configure 'remote data archive', '1'

RECONFIGURE




Enable Database

From SQL Server Management Studio (SSMS) right click on the database which holds the table(s) you want to stretch to Azure and select Tasks > Stretch > Enable.


This will open the Enable Database for Stretch Wizard. First step will be to select the table(s) you want to stretch to Azure. You can select the entire table contents, or…


…if you click on the ‘Entire Table’ link, you can select specific rows to stretch as shown below.


The next step is to Configure Azure. You will be prompted to sign in to your Azure account:


After you have signed in to Azure, select the subscription to use (if you have more than one), select the correct region, select the server you have created (or create a new one), and provide the login credentials. Currently Azure only supports SQL Server Authentication for stretch databases.


In order to stretch a database table to Azure, the database must have a database master key (DMK). Specify (and save) the password for the DMK by creating the credential in the Wizard as follows on the Secure Credentials page.


If you already have a DMK for your database you will enter the password on this same screen as shown.
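Outside the wizard, the DMK can also be created manually with T-SQL before enabling stretch; a minimal sketch, where the database name and password are placeholders:

```sql
USE [GKTestDB];  -- your database

-- create the database master key used to protect the stretch credential
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere!1>';
```

Save that password somewhere safe; you will need it again if the database is restored elsewhere.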


On the Select IP Address page, you can select the ‘Use source SQL Server Public IP’ button or specify the range of subnet IP addresses to use.


After specifying the IP addresses, click next through the Summary and Results pages.
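If you prefer scripting to the wizard, the same enablement can be done in T-SQL; a sketch under the assumption that a database scoped credential for the Azure server already exists (the server, credential, and table names below are hypothetical):

```sql
-- point the database at the Azure stretch server
ALTER DATABASE [GKTestDB]
SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = N'mystretchserver.database.windows.net',
     CREDENTIAL = [AzureStretchCred]);

-- begin migrating an individual table's rows outbound to Azure
ALTER TABLE dbo.Orders
SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```

The wizard generates essentially these statements (plus a filter function when you stretch only selected rows).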



Thank you Microsoft, we will enjoy it!

Monitor your Stretch Progress

In order to see the migration status of your data to the Azure cloud server, run this T-SQL statement to show the number of rows migrated:

SELECT * FROM sys.dm_db_rda_migration_status
ORDER BY start_time_utc DESC

You can also monitor your stretched database by right clicking on your database > Tasks > Stretch > Monitor, as shown.


This will display a report with your Source server info, the Azure server info and information about the tables that were stretched:


You will also notice in Object Explorer that the stretched database has a different icon beside it.


You will also notice under the External Resources of your database, the Stretched server will be listed in the External Data Sources folder.


When connecting to your Azure server, you can now see the stretched database, which is assigned a database name and table name automatically.


Now What?

Let’s see what happens now when we query data from the Orders table we just stretched.

From your local server instance, select rows from your table as shown. You will notice that the execution plan shows a ‘Remote Query’ operation because it is pulling data from the remote Azure server.


I only stretched certain rows of data to the Azure cloud, not the entire table. When I run a select statement that only pulls data from my local database and not the stretched Azure data, you will notice the execution plan changes. No Remote Query or concatenation operation is involved because it is only pulling data from the local table.


Disable Stretch Database

Now if for some reason you need to un-stretch your database you can disable Stretch Database for your entire database or just for an individual table. Disabling stretch database does not delete the table or data from Azure. You have to manually drop it using the Azure management portal, and the remote table in Azure will continue to incur costs until you delete it.

To disable Stretch Database for your database right click on your database in Object Explorer on your local SSMS, select Tasks > Stretch > Disable as shown


To disable Stretch Database for a table, right click on the table name and select Stretch > Disable > and either Bring data back from Azure, or Leave data in Azure. Bringing data back from Azure will incur data transfer costs.
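The table-level disable can likewise be scripted; a sketch using the same hypothetical dbo.Orders table, showing both choices the dialog offers:

```sql
-- option 1: bring the migrated rows back from Azure first (incurs data transfer cost)
ALTER TABLE dbo.Orders
SET (REMOTE_DATA_ARCHIVE (MIGRATION_STATE = INBOUND));

-- option 2: disconnect and leave the data in Azure
ALTER TABLE dbo.Orders
SET (REMOTE_DATA_ARCHIVE = OFF_WITHOUT_DATA_RECOVERY (MIGRATION_STATE = PAUSED));
```

Either way, remember that the remote Azure table itself must still be dropped through the Azure portal to stop the charges.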



Migrating cold or historical data to Microsoft Azure can be a cost-effective way to retain inactive data while still having access to it when needed. For information on Azure Stretch Database pricing go here: https://azure.microsoft.com/en-us/pricing/details/sql-server-stretch-database/. This article covered very basic information on how to set up a Stretch Database, but there are several more aspects, such as performance implications and backup/restore processes, that we did not cover. Hopefully this gives you a starting point from which to test and explore this new feature.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!






SQL 2016 – Partitioning An Existing Table With A Columnstore Index

— by Lori Brown @SQLSupahStah

I recently ran across a data warehouse that had its largest table partitioned. We were migrating up to SQL 2016 and wanted to make sure that the partitions were still working and in place. Since I felt I needed to refresh my table partitioning skills, I decided to conduct a small scale test of partitioning an existing table by year and, to make it more fun, I wanted a columnstore index present to see how interesting things could get.

I am using the Fact.Purchase table in the new WideWorldImportersDW database. WideWorldImporters replaces AdventureWorks and can be researched and downloaded from here: https://blogs.technet.microsoft.com/dataplatforminsider/2016/06/09/wideworldimporters-the-new-sql-server-sample-database/

I did a little querying in the Fact.Purchase table and found that the Date Key column is a good candidate for partitioning by year. The dates in the table generally run from 2013 through 2016.

First you have to add the filegroups that will be used for the table once it is partitioned. I have added 4 filegroups for years 2013 – 2016.


Once that is done, I have to add an .ndf file for each filegroup and map each file to the correct filegroup.
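In T-SQL those two steps look roughly like this; a sketch for one year (the file name and path are placeholders, and the same pair of statements repeats for Purch2014 through Purch2016):

```sql
-- add the filegroup for the 2013 slice
ALTER DATABASE WideWorldImportersDW ADD FILEGROUP Purch2013;

-- add a data file and map it to that filegroup
ALTER DATABASE WideWorldImportersDW
ADD FILE (NAME = N'Purch2013Data',
          FILENAME = N'D:\SQLData\Purch2013.ndf',
          SIZE = 256MB)
TO FILEGROUP Purch2013;
```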



Next I need a partition function and a partition scheme.

USE [WideWorldImportersDW]

CREATE PARTITION FUNCTION [PF_Purch_Year](date) AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01', '2016-01-01')

CREATE PARTITION SCHEME [PS_Purch_Year] AS PARTITION [PF_Purch_Year] TO (Purch2013, Purch2014, Purch2015, Purch2016)


What I am basically doing is setting parameters or boundaries for objects that use the PF_Purch_Year function to place date values older than 01/01/2014 into the Purch2013 filegroup, values between 01/01/2014 and 01/01/2015 into the Purch2014 filegroup, values between 01/01/2015 and 01/01/2016 into the Purch2015 filegroup and finally everything newer than or equal to 01/01/2016 into the Purch2016 filegroup.
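You can sanity-check those boundaries with the $PARTITION function, which returns the partition number a given value would land in. A quick sketch (with RANGE RIGHT, each boundary value itself belongs to the partition on its right):

```sql
SELECT $PARTITION.PF_Purch_Year('2013-06-15') AS P1,  -- partition 1 (Purch2013)
       $PARTITION.PF_Purch_Year('2014-01-01') AS P2,  -- partition 2: the boundary goes right
       $PARTITION.PF_Purch_Year('2016-07-04') AS P4;  -- partition 4 (Purch2016)
```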


Now that I have everything set up, I have to move the data to it. However, I cannot do that with an existing clustered index. I have to drop the clustered index and rebuild it with the partition scheme. This does get a bit confusing if you are trying to move existing things as you will see.

I first dropped all my indexes. Of course I had scripted them all out for rebuild but ran into trouble when I tried to build them again.

USE [WideWorldImportersDW]


DROP INDEX [CCX_Fact_Purchase] ON [Fact].[Purchase]


ALTER TABLE [Fact].[Purchase] DROP CONSTRAINT [PK_Fact_Purchase]


DROP INDEX [FK_Fact_Purchase_Date_Key] ON [Fact].[Purchase]


DROP INDEX [FK_Fact_Purchase_Stock_Item_Key] ON [Fact].[Purchase]


DROP INDEX [FK_Fact_Purchase_Supplier_Key] ON [Fact].[Purchase]



Here we go trying to rebuild….this is where it gets interesting…

USE [WideWorldImportersDW]

CREATE CLUSTERED COLUMNSTORE INDEX [CCX_Fact_Purchase] ON [Fact].[Purchase]

ON PS_Purch_Year ([Date Key])


I received an error:

Msg 35316, Level 16, State 1, Line 3

The statement failed because a columnstore index must be partition-aligned with the base table. Create the columnstore index using the same partition function and same (or equivalent) partition scheme as the base table. If the base table is not partitioned, create a nonpartitioned columnstore index.


Uh oh! This meant that the table had already been part of a different partition scheme. I did not think to look for that before I started. Ugh! I thought that it must be easy to change the base table partition alignment and it is but it is not super intuitive. Since I had already dropped all my indexes, I figured that I needed to recreate the original clustered index on the old partition since that would be how the base table is associated with anything. I rebuilt it using the old partition. So far so good. When I tried rebuilding it with DROP_EXISTING = ON I received the same error as before. After thinking about it for a bit, I dropped everything again but this time created a regular clustered index on the new partition to align the base table. This worked!



ALTER TABLE [Fact].[Purchase] ADD CONSTRAINT [PK_Fact_Purchase] PRIMARY KEY CLUSTERED
(
[Purchase Key] ASC,

[Date Key] ASC

)ON PS_Purch_Year ([Date Key])



Now the base table is aligned with the partition scheme that I wanted it on but I need the clustered index to be the columnstore index. We have to drop and create again and everything falls into place.

ALTER TABLE [Fact].[Purchase] DROP CONSTRAINT [PK_Fact_Purchase]


USE [WideWorldImportersDW]

CREATE CLUSTERED COLUMNSTORE INDEX [CCX_Fact_Purchase] ON [Fact].[Purchase]

ON PS_Purch_Year ([Date Key])


ALTER TABLE [Fact].[Purchase] ADD CONSTRAINT [PK_Fact_Purchase] PRIMARY KEY NONCLUSTERED
(
[Purchase Key] ASC,

[Date Key] ASC

)ON PS_Purch_Year ([Date Key])


CREATE NONCLUSTERED INDEX [FK_Fact_Purchase_Date_Key] ON [Fact].[Purchase]
(
[Date Key] ASC

)ON PS_Purch_Year ([Date Key])


CREATE NONCLUSTERED INDEX [FK_Fact_Purchase_Stock_Item_Key] ON [Fact].[Purchase]
(
[Stock Item Key] ASC

)ON PS_Purch_Year ([Date Key])


CREATE NONCLUSTERED INDEX [FK_Fact_Purchase_Supplier_Key] ON [Fact].[Purchase]
(
[Supplier Key] ASC

)ON PS_Purch_Year ([Date Key])



Success!!!! Now I just needed to make sure that things are where I expect them to be. I pilfered the following queries from MSDN…


SELECT t.name AS TableName, i.name AS IndexName, ps.name AS PartitionScheme

FROM sys.tables AS t

JOIN sys.indexes AS i

ON t.[object_id] = i.[object_id]

JOIN sys.partition_schemes ps

ON i.data_space_id = ps.data_space_id

WHERE t.name = 'Purchase';




As you scroll across the above results you can see that all of my indexes are on the PS_Purch_Year partition scheme.

SELECT t.name AS TableName, i.name AS IndexName, p.partition_number, p.partition_id, i.data_space_id,

f.function_id, f.type_desc, r.boundary_id, r.value AS BoundaryValue

FROM sys.tables AS t

JOIN sys.indexes AS i

ON t.object_id = i.object_id

JOIN sys.partitions AS p

ON i.object_id = p.object_id AND i.index_id = p.index_id

JOIN sys.partition_schemes AS s

ON i.data_space_id = s.data_space_id

JOIN sys.partition_functions AS f

ON s.function_id = f.function_id

LEFT JOIN sys.partition_range_values AS r

ON f.function_id = r.function_id and r.boundary_id = p.partition_number

WHERE t.name = 'Purchase'

ORDER BY p.partition_number;


This query shows the boundaries.


I also found a really handy query created by David Peter Hansen that gives you info on the partitions of a table much more concisely: https://davidpeterhansen.com/view-partitions-in-sql-server/. As they say… go read the whole thing.


I hope this helps someone who wants to partition an existing table. I did this on my own system with a small table and not a production system so please be sure to test your process against a test system before you accidentally cause an issue in production.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!



SQL 2016 SP1 – USE HINT

–By Lori Brown   @SQLSupahStah

After installing Service Pack 1 for SQL 2016, I ran across USE HINT and thought I would put out what I could find to document it. Here’s where you find info from Microsoft: https://msdn.microsoft.com/en-us/library/ms181714.aspx and here is a link for all of the bugs that are fixed in SP1: https://support.microsoft.com/en-us/kb/3182545

USE HINT ( hint_name ) Provides one or more additional hints to the query processor as specified by a hint name inside single quotation marks. Hint names are case-insensitive. USE HINT can be utilized without having to be a member of the sysadmin server role.

The following hint names are supported:

  • 'DISABLE_OPTIMIZED_NESTED_LOOP' Instructs the query processor not to use a sort operation (batch sort) for optimized nested loop joins when generating a query plan. Equivalent to trace flag 2340.
  • 'FORCE_LEGACY_CARDINALITY_ESTIMATION' Forces the query optimizer to use the Cardinality Estimation model of SQL Server 2012 and earlier versions. Equivalent to trace flag 9481 or Database Scoped Configuration setting LEGACY_CARDINALITY_ESTIMATION=ON.
  • 'ENABLE_QUERY_OPTIMIZER_HOTFIXES' Enables query optimizer hotfixes (changes released in SQL Server Cumulative Updates and Service Packs). Equivalent to trace flag 4199 or Database Scoped Configuration setting QUERY_OPTIMIZER_HOTFIXES=ON.
  • 'DISABLE_PARAMETER_SNIFFING' Instructs the query optimizer to use average data distribution while compiling a query with one or more parameters, making the query plan independent of the parameter value which was first used when the query was compiled. Equivalent to trace flag 4136 or Database Scoped Configuration setting PARAMETER_SNIFFING=OFF.
  • 'ASSUME_MIN_SELECTIVITY_FOR_FILTER_ESTIMATES' Causes SQL Server to generate a plan using minimum selectivity when estimating AND predicates for filters to account for correlation. Equivalent to trace flag 4137 when used with the cardinality estimation model of SQL Server 2012 and earlier versions, and has a similar effect when trace flag 9471 is used with the cardinality estimation model of SQL Server 2014 or higher.
  • 'DISABLE_OPTIMIZER_ROWGOAL' Causes SQL Server to generate a plan that does not use row goal adjustments with queries that contain TOP, OPTION (FAST N), IN, or EXISTS keywords. Equivalent to trace flag 4138.
  • 'ENABLE_HIST_AMENDMENT_FOR_ASC_KEYS' Enables automatically generated quick statistics (histogram amendment) for any leading index column for which cardinality estimation is needed. The histogram used to estimate cardinality will be adjusted at query compile time to account for the actual maximum or minimum value of this column. Equivalent to trace flag 4139.
  • 'ASSUME_JOIN_PREDICATE_DEPENDS_ON_FILTERS' Causes SQL Server to generate a query plan using the Simple Containment assumption instead of the default Base Containment assumption for joins, under the query optimizer Cardinality Estimation model of SQL Server 2014 or newer. Equivalent to trace flag 9476.
  • 'FORCE_DEFAULT_CARDINALITY_ESTIMATION' Forces the Query Optimizer to use the Cardinality Estimation model that corresponds to the current database compatibility level. Use this hint to override Database Scoped Configuration setting LEGACY_CARDINALITY_ESTIMATION=ON or trace flag 9481.

The list of all supported USE HINT names can be queried using the dynamic management view sys.dm_exec_valid_use_hints. More on this view can be found here: https://msdn.microsoft.com/en-us/library/mt791356.aspx
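For example, to list every hint name your build accepts:

```sql
-- one row per supported USE HINT name on this instance
SELECT name FROM sys.dm_exec_valid_use_hints;
```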

Some USE HINT hints may conflict with trace flags enabled at the global or session level, or database scoped configuration settings. In this case, the query level hint (USE HINT) always takes precedence. If a USE HINT conflicts with another query hint or a trace flag enabled at the query level (such as by QUERYTRACEON), SQL Server will generate an error when trying to execute the query.



DECLARE @qty INT

SET @qty = 4

SELECT h.OrderDate, h.Freight, d.OrderQty, d.UnitPrice

FROM Sales.SalesOrderDetail d

JOIN Sales.SalesOrderHeader h ON (d.SalesOrderID = h.SalesOrderID)

WHERE d.OrderQty > @qty

OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'))


For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Script to De-Orphan SQL Logins

–By Lori Brown   @SQLSupahStah

I am working with a client who is migrating from SQL 2012 to SQL 2016. It just so happens that while they have just a few databases to move, they also have a ton of SQL logins that need to be migrated. This is not an issue since I know to use sp_help_revlogin to script logins and passwords into a script that can be applied to the new SQL 2016 instance. However, inevitably, there are SQL logins that have been deleted from SQL but not from the database, or databases are refreshed while testing on SQL 2016 and logins become orphaned. I needed a script that would look at each database and give me statements that could be used to create or de-orphan these logins.

I figured that with a little Google-fu I would find someone who had already created a way to do this. And, while I did find a solution, I decided to give it my own twist. My solution is to have a script that I can run when needed that will output the statements that I need to create or de-orphan logins from all databases in an instance. I combined things from these two blogs:



Here is my code and a quick look at the output:

USE master
GO

DECLARE @usercount INT
DECLARE @sqlstr1 NVARCHAR(1000)
DECLARE @sqlstr2 NVARCHAR(500)
DECLARE @dbname VARCHAR(128)
DECLARE @username SYSNAME
DECLARE @OrphUsers CURSOR

DECLARE db CURSOR FOR
SELECT name FROM sys.databases
WHERE name NOT IN ('master','tempdb', 'msdb', 'model')
AND is_read_only = 0 AND state = 0

OPEN db
FETCH NEXT FROM db INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN

       SET @sqlstr1 = 'SET @orphanedusers = CURSOR FOR
       SELECT a.name
       FROM ['+@dbname+'].sys.database_principals a
       LEFT OUTER JOIN sys.server_principals b
             ON a.sid = b.sid
       WHERE (b.sid IS NULL)
             AND (a.principal_id > 4)
             AND (a.type_desc = ''SQL_USER'')

       OPEN @orphanedusers'

       EXEC sys.sp_executesql @sqlstr1, N'@orphanedusers cursor output', @OrphUsers OUTPUT

       FETCH NEXT FROM @OrphUsers INTO @username
       WHILE @@FETCH_STATUS = 0
       BEGIN

             SELECT @usercount = COUNT(*) FROM sys.server_principals WHERE name = @username

             IF @usercount = 0
             BEGIN
                    PRINT '--User '+@username+' does not exist and must be created. Please use a strong password.'
                    SET @sqlstr2 = 'USE [master] CREATE LOGIN ' + @username + ' WITH PASSWORD = ''ABC123'''
             END
             ELSE
             BEGIN
                    PRINT '--User '+@username+' is an orphan.'
                    SET @sqlstr2 = 'USE ['+@dbname+'] ALTER USER ' + @username + ' WITH LOGIN = ' + @username
             END

             PRINT (@sqlstr2)

             FETCH NEXT FROM @OrphUsers INTO @username
       END
       CLOSE @OrphUsers

       FETCH NEXT FROM db INTO @dbname
END
CLOSE db
DEALLOCATE db
The output labels what needs to be done to make sure the users have access to their databases. If the user does not exist in SQL at all, a CREATE LOGIN statement is generated. If the user is simply orphaned, an ALTER USER statement is generated.

The only thing I did not do yet (mostly because my needs were centered around SQL logins) is to include Windows domain accounts in my logic. As soon as I add that to my script, I will repost. Or, if someone out there adds it, please send an update.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!


SQL 2016 Service Pack 1 Released

— By Lori Brown   @SQLSupahStah

SQL 2016 SP1 has been released. I might be a little late to this announcement but thought I would at least give the download link and some highlights on things that are fixed in it.

Download it from here: https://www.microsoft.com/en-us/download/details.aspx?id=54276

Some of the highlights are:

  • New CREATE OR ALTER syntax that allows functions, triggers, procedures or views to be created without having to test for existence first.
  • New error log message that indicates if tempdb data files are not all the same size.
  • New lightweight query profiling via trace flag 7412.
  • Enterprise features now available in Standard Edition.
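The new CREATE OR ALTER syntax, for example, replaces the usual IF EXISTS/DROP dance with a single statement; a quick sketch using a hypothetical procedure:

```sql
-- creates dbo.usp_Demo if it does not exist, or alters it in place if it does
CREATE OR ALTER PROCEDURE dbo.usp_Demo
AS
BEGIN
    SELECT GETDATE() AS CurrentTime;
END
```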


For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Consistency Check an Analysis Services Database

— By Lori Brown @SQLSupahStah

I am migrating a client to SQL 2016 and had to restore the SSAS databases and run what amounts to a DBCC CHECKDB against them. The command is run while connected to SSAS in Management Studio. You open an XMLA query (not MDX)…


…and run the following:

<DBCC xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Object>
<DatabaseID>Work Metrics</DatabaseID>
<CubeID>Hourly Insights</CubeID>
</Object>
</DBCC>

The output looks like this:


As usual more info can be found from MSDN: https://msdn.microsoft.com/en-us/library/mt156975.aspx

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Upcoming SQLRX Lunch and Learn

Please join SQLRX’s Lori Brown and the SQLRX team at the Microsoft Campus in Irving, Texas this coming Wednesday November 16, 2016 for an informative take on reasons to upgrade to SQL Server 2016.

Register Now!


There are a lot of really good reasons to upgrade to SQL 2016, even if you’re still on SQL 2008 R2 or older and are worried about what might break. Either way, it’s time to upgrade! Join the experts at SQLRx for reasons why to upgrade and how to do it with less risk using SQLRx.

Presentation Topics will include:

  • What’s NEW and really Improved.
  • Using ReplayableTrace to know before you go.
  • Already Upgraded but nothing to show – SQLRx can fix that.
  • BI, AlwaysOn, The Query Store and more…

Who Should Attend: Anyone managing critical business applications running the SQL Server Platform.

Wednesday, Nov 16

Lunch and registration: 11:30

Presentation: 12:00 – 1:00 pm CST

Take this opportunity to meet our experts face to face and see what all the buzz is about in SQL 2016’s new features that improve availability, compatibility, scalability and manageability.


Microsoft Campus 7000 State Hwy 161 (Bush Turnpike) Bldg LC 1 (Bldg on the right) Irving, TX 75039

Register Now!


AlwaysOn 2016 – Maintenance

— by Ginger Keys

SQL Server AlwaysOn provides a fairly robust environment for disaster recovery; however, you can never be too careful when it comes to technology and protecting your data. Regular maintenance routines (i.e. backups, etc.) need to be set up and performed on an ongoing basis, even when your databases are in an AlwaysOn Availability Group.

AlwaysOn allows some maintenance routines to be performed on the secondary replicas, but determining which maintenance to perform on what replica can be somewhat confusing. We will discuss the following basic maintenance routines and where to execute them.

  • Backups
  • Integrity Checks
  • Index Maintenance
  • Update Statistics


Even though there is a copy of your database on the secondary replica(s), you should still back up your databases and transaction logs on a regular basis. Backups can be very resource intensive, putting strain on I/O and CPU, especially when using compression. Because of this, offloading backup operations to a secondary replica can greatly improve the performance of your primary production replica.

With an AlwaysOn group, you have several options of where to back up your databases as shown below. You can choose your backup preference when you set up your AlwaysOn group, or you can go back later and set your backup preferences through the AG properties.


Prefer Secondary – backups should occur on a secondary replica except when the primary replica is the only replica online. In that case, the backup should occur on the primary replica. This is the default behavior.

Secondary Only – backups will happen only on a replica that is designated as a secondary, and will execute in the order of the backup priority you set. Backups will not occur on the primary replica.

Primary – backups will only occur on the replica currently designated as primary.

Any Replica – backups can occur on any replica; when more than one replica is a candidate, the one with the highest backup priority setting is preferred.
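As a sketch, these preferences can also be set with T-SQL; the availability group name [AG1] and replica name SQLNODE2 below are hypothetical placeholders for your own environment:

```sql
-- Set the automated backup preference for an AG named [AG1].
-- SECONDARY corresponds to "Prefer Secondary" (the default);
-- SECONDARY_ONLY, PRIMARY, and NONE ("Any Replica") are the other options.
ALTER AVAILABILITY GROUP [AG1]
    SET (AUTOMATED_BACKUP_PREFERENCE = SECONDARY);

-- Per-replica backup priority (0-100; higher = more preferred):
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON 'SQLNODE2'
    WITH (BACKUP_PRIORITY = 75);
```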

If you perform backups on your primary replica, Full, Differential, and Tlog backups can be run normally. If you perform backups on a secondary replica, there are some differences to be aware of:

  • You must configure your AlwaysOn group to specify where you want your backups to occur. You can do this through the New Availability Group setup GUI (as shown in picture above), or by using a T-SQL script (for more info click here https://msdn.microsoft.com/en-us/library/ff878399.aspx). These backup preferences will apply to both Full and Tlog backups.
  • The automated backup preference setting has no impact on ad-hoc backups. Manual ad-hoc backups will occur on whichever replica you run them.
  • If you create your backup jobs through the Maintenance Plan, create the jobs on every replica that might perform backups. The job will automatically include the scripting logic that calls and checks the sys.fn_hadr_backup_is_preferred_replica function, and will not execute if the replica is not the preferred backup location. The job will not fail; it simply won’t execute. And in the event of failover the job will already exist on the replica the AlwaysOn Group has failed over to, and will execute accordingly.
  • Full Backups run on secondary replicas are copy only backups. Regular backups are not allowed. Copy-only backups do not impact the log chain or clear the differential bitmap.
  • Differential backups are not allowed on secondary replicas.
  • Tlog Backups only support regular backups on a secondary replica (copy only is not allowed). Tlog backups can be taken on any replica regardless of synchronization mode, and the log chain will remain consistent.
  • If databases exist in your SQL instance that are not part of the AlwaysOn Group, create separate maintenance jobs for those databases apart from the AlwaysOn databases so that the sys.fn_hadr_backup_is_preferred_replica function is not invoked. This way your non-AlwaysOn databases will not get lumped in with the AlwaysOn databases. Having separate maintenance routines for separate groups of databases, and managing the jobs that need to run on each replica, can get a little hairy…for excellent guidance on this click here https://blog.sqlrx.com/2015/10/08/alwayson-availability-groups-enable-or-disable-jobs-on-failover/
  • Since the instances on the AlwaysOn replicas do not share an msdb database, backups taken on one replica will not show up in the msdb backup tables of the others. This is important to know in the event you have to perform audits, or need to research backup history.
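The preferred-replica check and copy-only behavior described above can be sketched in a backup job like the following; the database name, backup share, and file names are hypothetical:

```sql
-- Only run the backup if this replica is the preferred backup location.
IF (sys.fn_hadr_backup_is_preferred_replica(N'GKTestDB') = 1)
BEGIN
    -- On a secondary replica, full backups must be COPY_ONLY
    -- (copy-only backups do not affect the log chain or differential bitmap).
    BACKUP DATABASE [GKTestDB]
        TO DISK = N'\\BackupShare\GKTestDB_Full.bak'
        WITH COPY_ONLY, COMPRESSION;

    -- Tlog backups on a secondary are regular backups (COPY_ONLY is not
    -- allowed there) and keep the log chain consistent across replicas.
    BACKUP LOG [GKTestDB]
        TO DISK = N'\\BackupShare\GKTestDB_Log.trn'
        WITH COMPRESSION;
END
-- If this replica is not the preferred backup location, the job exits
-- quietly without failing, just like the maintenance-plan logic above.
```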


Integrity Checks

Perform Integrity Checks on all replicas if possible.

There are many differing opinions in the IT world on where to run DBCC CHECKDB in an AlwaysOn Group. Some believe integrity checks should be run on the secondary only, some believe they should be run wherever your database backups are happening, and some believe they should be run on all replicas.

SQL allows integrity checks to be offloaded to any of the secondary replicas in order to prevent resource contention on the primary replica, which keeps performance optimized. This is a good idea in theory; however, if there is corruption in the data files of the primary database (assuming you have offloaded your integrity checks to a secondary), it will not be detected by the integrity check. Even though the data in the databases is the same across all replicas, the actual data files reside on the separate disk subsystems of each replica. The integrity check runs against the data files on each disk subsystem, and if your hardware or a file is corrupt on one replica, an integrity check run on another replica will not catch it.

Corruption on a primary does not propagate to the secondary because complete database pages are not sent; only the tlog records are, and they describe the physical changes made to the pages rather than containing the pages themselves. Performing an integrity check on a primary replica doesn’t detect corruption on a secondary, and vice versa. They are separate databases with their own data files, kept in sync through transaction log records that describe the physical changes.

If your organization is extremely sensitive to performance for your AlwaysOn group, we recommend you restore a copy of your database backups to a server not in your AlwaysOn group (like a test server) and run the integrity checks there. This will provide a consistency check of your data, will ensure your backups are viable, and will eliminate overhead of running CHECKDB on the primary.
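That restore-and-check approach can be sketched as follows; the restored database name, backup path, logical file names, and drive letters are all hypothetical and would need to match your own backup:

```sql
-- Restore the latest full backup to a non-AlwaysOn test server
-- and run the integrity check there.
RESTORE DATABASE [GKTestDB_Check]
    FROM DISK = N'\\BackupShare\GKTestDB_Full.bak'
    WITH MOVE N'GKTestDB'     TO N'D:\Data\GKTestDB_Check.mdf',
         MOVE N'GKTestDB_log' TO N'E:\Logs\GKTestDB_Check.ldf',
         REPLACE;

-- Check for corruption; NO_INFOMSGS limits output to actual errors.
-- A clean run also confirms the backup itself is viable.
DBCC CHECKDB (N'GKTestDB_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```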

Index Maintenance

Perform Index Maintenance on the primary replica.

Because secondary databases are read-only, index maintenance must be performed on the primary replica. Index maintenance generates large amounts of logged changes by design and consequently will result in large transaction logs being replicated to the secondaries. Also, index maintenance can cause the primary log files to grow large because the log cannot be truncated until redo has completed the changes on all secondary replicas.

Some ways to reduce resource contention for index maintenance are:

  1. Schedule during off-peak times if possible
  2. Break up maintenance into smaller phases, spread out over a period of time
  3. Use MAXDOP to limit the number of processors used for the operation
  4. Rebuild/reorganize based upon specified criteria:
    • Ignore heaps or small tables
    • Rebuild/reorganize based upon fragmentation levels acceptable to your environment
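A minimal sketch of criteria-based maintenance using the fragmentation DMV; the 1000-page floor, 5%/30% cutoffs, MAXDOP value, and the index/table names in the ALTER statements are common starting points and placeholders, not prescriptions (ONLINE = ON also requires Enterprise Edition):

```sql
-- Find fragmented indexes worth maintaining, ignoring heaps and small tables.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0          -- ignore heaps
  AND ips.page_count > 1000;    -- ignore small tables

-- Then, based on the results, for example:
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable]
    REORGANIZE;                                  -- 5-30% fragmentation
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable]
    REBUILD WITH (MAXDOP = 2, ONLINE = ON);      -- over 30% fragmentation
```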

If you are fortunate enough to have a large maintenance window to be able to rebuild your indexes on a daily basis, there is less need to update statistics, since rebuilding an index also updates that index’s statistics by design (column statistics not associated with an index still need separate updates). The frequency of your maintenance routines will depend on each individual SQL environment. As a general rule with our clients, we rebuild indexes once per week and update statistics every day…but again, your environment may require something different.

Update Statistics

Update Statistics on the primary replica.

SQL Server creates statistics on database tables and indexes that give the query optimizer information about the distribution of data. This allows SQL to compile the most efficient query plan in order to improve query performance and run queries as quickly as possible.

In an AlwaysOn Group of databases, statistics that are created and updated on the primary replica are persisted to the secondary replicas as part of the data replication of the transaction log records.

But what if you’re using your secondaries for read-only operations and the secondary databases need different statistics than the primary? Databases on the secondary replicas are restricted to read-only access so statistics cannot be created on them! If the secondary replicas cannot create statistics, then performance of read-only workloads can suffer.

Thankfully, SQL will create and maintain statistics on the secondary replicas in tempdb, using the suffix _readonly_database_statistic appended to the name of the temporary statistic. This keeps them separate from the permanent statistics that have been sent over from the primary. These temporary statistics can only be created by SQL Server, and auto-create and auto-update of statistics must be enabled.
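As a sketch, you can see which of these temporary statistics SQL Server has created by querying sys.stats on a readable secondary:

```sql
-- Run against a database on a readable secondary replica.
-- is_temporary = 1 flags statistics SQL Server created in tempdb
-- for read-only workloads (the _readonly_database_statistic ones).
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       s.is_temporary
FROM sys.stats AS s
WHERE s.is_temporary = 1
   OR s.name LIKE N'%_readonly_database_statistic%';
```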

For more information on SQL statistics, click here https://msdn.microsoft.com/en-us/library/ms190397.aspx


This article covers a high-level overview for performing some basic maintenance on your AlwaysOn databases. These are suggestions based on a simple AlwaysOn environment. There are always exceptions to these rules and circumstances where a system is set up in a more complex manner. For example, you may want Tlog backups to occur on a different replica than the Full backups. You may have your AlwaysOn group also involved in log shipping or some other functionality that requires more or different maintenance routines. There are methods to accommodate every environment, but they are beyond the scope of this article. Hopefully these topics have given you a basic foundation for maintaining your AlwaysOn environment.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!