Category Archives: SQL Server

Installing and Connecting to SQL 2017 on Ubuntu (Linux)

–By Lori Brown

SQL Server 2017 is available as a CTP for testing (https://www.microsoft.com/en-us/sql-server/sql-server-2017) and can be installed on a system running Linux. Because I have done all of my professional work on Windows systems, the leap to a Linux system was a bit of a stretch. I learned that Linux is the OS kernel, and that Red Hat, SUSE and Ubuntu are distributions (flavors) built on top of it; SQL Server 2017 can also run in a Docker container. A rough analogy might be that a distribution like Ubuntu is sort of like a version of Windows. Regardless of how you think of it, it is a truly interesting experience to get SQL 2017 installed.

The first thing to do is get your friendly System Admin to stand up a Linux server. We used Ubuntu 16.04, but other flavors of Linux can also be used, as listed below. Your Admin should provide you with an administrator login and password so that you can get SQL running.

Blog_20170621_1

Since everything I have ever done on a computer has had a GUI, I decided to install Gnome to try to shorten my learning curve on Ubuntu. (https://ubuntugnome.org/download/ ) Truthfully, navigating in Gnome was not super intuitive to me but I was able to bumble around and get things done. When following the install instructions from Microsoft, there are places where commands are used to get things done that are labeled “bash”. It took me a bit to figure out that bash is a Unix shell or command prompt that is used to execute commands. I found a bash guide if anyone is interested. (http://www.tldp.org/LDP/Bash-Beginners-Guide/html/Bash-Beginners-Guide.html) Apparently you can use bash on Windows….I learn something new every day. (https://blogs.windows.com/buildingapps/2016/03/30/run-bash-on-ubuntu-on-windows/#igsmYPLj6ovEWf7Y.97)

In Gnome, I had to search for terminal to get the command prompt (bash) window. To get SQL installed, I followed the instructions from Microsoft and have taken screenshots of what it looks like after running commands. (https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-ubuntu)

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

Blog_20170621_2

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

Blog_20170621_3

sudo apt-get update

Blog_20170621_4

sudo apt-get install -y mssql-server

Blog_20170621_5

During the setup completion step, be ready to provide the SA password when prompted for the SQL Server system administrator password.

sudo /opt/mssql/bin/mssql-conf setup

Blog_20170621_6

systemctl status mssql-server

Blog_20170621_7

Now install the SQLCMD and BCP tools. (https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools#ubuntu)  The mssql-tools package provides both; I only used SQLCMD, but the BCP commands are similar.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

Blog_20170621_8Blog_20170621_9

sudo apt-get update

Blog_20170621_10

sudo apt-get install mssql-tools unixodbc-dev

You have to answer licensing questions here so be ready.

Blog_20170621_11

Blog_20170621_12

I also installed the SQL Server Agent. The commands are very similar to previous commands so I did not take any extra screen shots of those. https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-sql-agent

Once the tools are installed, you must use SQLCMD to query. You cannot use SSMS on Ubuntu.

Blog_20170621_13
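For example, after connecting with sqlcmd (sqlcmd -S localhost -U SA), a couple of quick sanity checks will confirm the instance is up. This is a minimal sketch; the queries are just examples of what you might run first.

-- Verify the version and edition running on Linux
SELECT @@VERSION;
GO
-- List the databases on the new instance
SELECT name FROM sys.databases;
GO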

Since I am a creature of the Microsoft world and really want to use SSMS to do things, I need to be able to connect to SQL on Ubuntu using SSMS. So that I could connect remotely, I made sure to open the SQL Server ports in Ubuntu. First you have to enable the firewall:

Blog_20170621_14

Then open ports 1433 & 1434:

Blog_20170621_15

To connect and query from a Windows machine….install SQL 2017 SSMS. (https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms)

Connect to SQL as normal. I tried many times to connect using my Ubuntu hostname but did not have any luck. I was able to connect using the IP address, however.

Blog_20170621_16

The Ubuntu hostname appears in the terminal connection. Use ifconfig to get the IP address. (http://www.aboutlinux.info/2006/11/ifconfig-dissected-and-demystified.html)

Blog_20170621_17

Once connected you can do most things as usual in SSMS.

Blog_20170621_18

Blog_20170621_19

Blog_20170621_20

Blog_20170621_21

As of this writing, I found several places where Microsoft employees warned that some things will likely break and that they are working on bugs. So, if you run into anything that you can’t find a way around, let them know. So far Microsoft seems to be very open to information and suggestions which makes me very hopeful that SQL 2017 will be released soon and will be much faster than previous versions.

For a taste of some of the new things in SQL 2017 see: https://blog.sqlrx.com/2017/05/19/sql-2017-is-on-the-way/

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

What You Need to Know Before Hosting Your SQL Server in the Cloud

–By Lisa Dominy


Whether you currently host your SQL Server in the Cloud or are considering moving it to the cloud, here are a few guidelines that can lead you to select the right hosting provider for your specific requirements.

  1. Set the stage for a tech discussion. Familiarize yourself with your current technology profile so you can have an efficient technical conversation with hosting providers. If you already know your needs in these areas…

CPU-Processors

RAM-memory

Disk Space-Storage

Applications

Database Software Requirements

…you are all set. If not, be sure the hosting provider can ask the right questions needed to provide optimum performance for your company.

  2. Judge whether they will take the time to find out about your business. Find a hosting provider that will take the time to ask the right questions to get to know your business. They should work with you to create a relationship of success. The right hosting provider should be able to provide the right technology and services that meet your business objectives.
  3. Confirm that they provide quick response time. Make sure the hosting provider will provide prompt response from knowledgeable support staff when needed. What is the process they use if an issue comes up? This is a very important step that is sometimes overlooked, and never a good lesson to learn during the move.
  4. Look for hard evidence that they solve problems and issues promptly. It is very important for your hosting provider to solve problems quickly. If a server is down or running slow, this could cause a lot of damage in the long run, as well as lost revenue.
  5. Ask if they have access to SQL experts like ISI’s SQLRx team. Make sure your hosting provider has access to SQL experts. This is very important to the success of your implementation. Why?
  • You’ll need performance information specific to a SQL workload that indicates that hardware may need to be expanded in a specific way.
  • Without a partnership with a SQL expert like SQLRx, it may be difficult and costly to provide enough hardware to support some companies’ workloads. There have been times when our SQLRx team found that the hosting company had provided great hardware but SQL needed tuning or configuration changes.
  • Applications and database software can cause server conflicts no matter how well architected they appear. This is where the right partnership is critical.

Are you ready to go to the cloud? The right hosting company must have access to the right experts and deep experience in the Cloud. This is why we’ve built a strong, trusted relationship with VAZATA. They are the hardware/hosting expert and we are their trusted SQL partner. VAZATA is a very responsive and knowledgeable hosting provider.

Here is an actual client example:

Despite having a very important SQL Server backend, an information services company did not have a SQL Database Administrator, and they had multiple application servers in their environment heavily utilizing that SQL backend. Plus, many of the applications they were running were outdated, but could benefit from better hardware. Based on their needs, there was no cost justification to bring in a full-time SQL Database Administrator. This company leveraged ISI’s SQLRx to provide that service.  VAZATA enhanced their experience when they moved all of the company’s virtual servers into the VAZATA private cloud environment, and the customer immediately started to see better performance. Equally important, based on recommendations from ISI’s SQLRx team, they adjusted some of the infrastructure resources (CPU, RAM, and Storage) to obtain the best performance possible in the most cost-effective way for the amount of users accessing the system.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com or visit www.vazata.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Get a List of Tables That Are Compressed

Find out if any of the tables in your database are compressed using the query below. Compressed tables can save space, and for I/O bound workloads compression can make queries run faster at the cost of some additional CPU.

-- Compressed Tables
select distinct t.name AS CompressedTables
from sys.partitions p
inner join sys.tables t
on p.object_id = t.object_id
where p.data_compression > 0

If you don’t have any tables compressed but think you might want to compress some, you can check your data compression savings by running the stored procedure sp_estimate_data_compression_savings for your targeted table.

USE WideWorldImporters;
GO
EXEC sp_estimate_data_compression_savings 'Sales', 'Invoices', NULL, NULL, 'ROW';
GO

https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql
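If the estimate looks worthwhile, compression can then be enabled by rebuilding the table with the desired compression setting. This is a minimal sketch, assuming the same Sales.Invoices table used above; run it during a maintenance window since the rebuild takes locks and resources.

USE WideWorldImporters;
GO
-- Enable row compression on the table by rebuilding it
ALTER TABLE Sales.Invoices REBUILD WITH (DATA_COMPRESSION = ROW);
GO

Nonclustered indexes are compressed separately with ALTER INDEX … REBUILD WITH (DATA_COMPRESSION = ROW).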

Query Tuning and Missing Index Recommendations

–By Jeffry Schwartz

Overview

Most analysts are familiar with missing index recommendations provided by SQL Server when query plans are displayed within SSMS or reported by various missing index DMVs. Several questions arise concerning these recommendations:

  1. What determines whether a column becomes a key column or an included column in the new index?
  2. Where does the ordering of the included columns come from? Does column ordering in queries affect the recommendations, e.g., will query orderings of A, B, C and C, B, A in the selection column list result in one recommendation or two?
  3. How sophisticated are the missing index recommendations, i.e., does SQL Server compare recommendations and perform any kind of recommendation consolidation or optimization?

This article provides a reproducible example using six different queries that deliberately cause SQL Server to generate missing index recommendations, and it provides answers to these questions. The example is used to study missing index recommendations in detail, including how they relate to the underlying table and how query columns affect these recommendations. The article also illustrates how a single consolidated index can address the performance needs of all six queries.

Test Table Creation & Load

To determine missing index recommendation behavior, a generic table was constructed and filled with 20 million records. Each record contained an identity column, an ID column, a text column, and 47 metric columns whose values ranged between 1 and 10,000,000. The large number of table columns was used to ensure SQL Server would choose an index option when appropriate. Six queries that incorporated various column combinations were executed (some of which differed only in column ordering). To minimize duplication of column values and skewing of query plans, the ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000 formula was used to generate values that were as random as possible. Two indices were created: a clustered index that used the identity column as its only key and a second nonclustered index that used DupID as its only column. The scripts for the creation, loading, and initial indexing of the table are shown below.

-- ##############################################################

-- Create test table

-- ##############################################################

drop table FewDuplicates;

CREATE TABLE FewDuplicates (

IDCol bigint identity (20000000,1),

DupID bigint,

MyText varchar(10),

Metric01 bigint, Metric02 bigint, Metric03 bigint, Metric04 bigint, Metric05 bigint,

Metric06 bigint, Metric07 bigint, Metric08 bigint, Metric09 bigint, Metric10 bigint,

Metric11 bigint, Metric12 bigint, Metric13 bigint, Metric14 bigint, Metric15 bigint,

Metric16 bigint, Metric17 bigint, Metric18 bigint, Metric19 bigint, Metric20 bigint,

Metric21 bigint, Metric22 bigint, Metric23 bigint, Metric24 bigint, Metric25 bigint,

Metric26 bigint, Metric27 bigint, Metric28 bigint, Metric29 bigint, Metric30 bigint,

Metric31 bigint, Metric32 bigint, Metric33 bigint, Metric34 bigint, Metric35 bigint,

Metric36 bigint, Metric37 bigint, Metric38 bigint, Metric39 bigint, Metric40 bigint,

Metric41 bigint, Metric42 bigint, Metric43 bigint, Metric44 bigint, Metric45 bigint,

Metric46 bigint, Metric47 bigint

)

 

-- ##############################################################

-- Load original table

-- ##############################################################

declare @DupID bigint = 1

declare @NumRecs bigint = 20000000

 

truncate table FewDuplicates

set nocount on

while (@DupID <= @NumRecs)

begin

insert into [dbo].[FewDuplicates] (

[DupID], [MyText],

[Metric01], [Metric02], [Metric03], [Metric04], [Metric05], [Metric06], [Metric07],

[Metric08], [Metric09], [Metric10], [Metric11], [Metric12], [Metric13], [Metric14],

[Metric15], [Metric16], [Metric17], [Metric18], [Metric19], [Metric20], [Metric21],

[Metric22], [Metric23], [Metric24], [Metric25], [Metric26], [Metric27], [Metric28],

[Metric29], [Metric30], [Metric31], [Metric32], [Metric33], [Metric34], [Metric35],

[Metric36], [Metric37], [Metric38], [Metric39], [Metric40], [Metric41], [Metric42],

[Metric43], [Metric44], [Metric45], [Metric46], [Metric47]

)

VALUES (

@DupID, 'my text',

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000,

ABS(cast(CHECKSUM(NewId()) as bigint)) % 10000000

)

set @DupID += 1

end -- group option loop

set nocount off

 

-- ##############################################################

-- Create indices on the test table

-- ##############################################################

CREATE UNIQUE CLUSTERED INDEX [ci_RecID] ON [dbo].[FewDuplicates]

(

[IDCol] ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

CREATE NONCLUSTERED INDEX [ix_DupID] ON [dbo].[FewDuplicates]

(

DupID ASC

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

 

Queries & Execution Plans with Missing Index Recommendations

The six queries all performed a range scan based upon DupID and Metric01. Clearly, the ix_DupID index could be used for the first portion of the where clause, but none of the existing indices could assist in the resolution of the second portion of the where clause. Note: All queries used identical where clauses to ensure that the only query differences involved the columns that were requested in the select clause. These variations employed different combinations of the first six metric columns and included a variety of column orderings. All of the queries returned the same 809 rows. Note: Due to the random nature of the data, the reader will not obtain identical results, but they will be functionally similar. Each query was run separately after the following commands had been executed:

dbcc dropcleanbuffers with no_infomsgs

dbcc freeproccache with no_infomsgs

dbcc freesystemcache('TokenAndPermUserStore') with no_infomsgs

These commands clear all the pertinent caches to ensure reproducibility and prevent memory-resident portions of the database from skewing the results. As shown in Table 1, each query required approximately 90 seconds to execute and approximately one million logical and physical reads to complete. The SSMS option for returning the actual execution plan was set prior to execution, and the six queries and execution plans are shown in the individual query sections below. The first five queries resulted in different recommendations, but the sixth query’s plan and recommended missing index were identical to those of the fifth because the only difference between Queries #5 and #6 is the ordering of the columns (part of the answer to question #2). The differences among all the queries are summarized in Table 2 below.

Blog_20170601_T1

Table 1: Summary of Initial Query Executions & Timings

Blog_20170601_T2

Table 2: Summary of Query Selection Columns

Query #1

select [DupID], [MyText], [Metric01]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_1

Query #2

select [DupID], [MyText], [Metric02]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_2

Query #3

select [DupID], [MyText], [Metric03], [Metric06], [Metric04]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_3

Query #4

select [DupID], [MyText], [Metric04], [Metric06], [Metric02], [Metric01]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_4

Query #5

select [DupID], [MyText], [Metric01], [Metric03], [Metric05], [Metric02]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_5

Query #6

select [DupID], [MyText], [Metric05], [Metric02], [Metric01], [Metric03]

from FewDuplicates

where DupID between 100000 and 500000 and Metric01 between 10000 and 30000

Blog_20170601_6

Missing Index Recommendations Displayed via DMVs

The following query was used to provide some of the information enumerated in Table 3 below:

-- ##############################################################

-- Missing Index DMV Query

-- https://blogs.msdn.microsoft.com/bartd/2007/07/19/are-you-using-sqls-missing-index-dmvs/

-- ##############################################################

SELECT migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,

'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)

+ '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'

+ ' ON ' + mid.statement

+ ' (' + ISNULL (mid.equality_columns, '')

+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END

+ ISNULL (mid.inequality_columns, '')

+ ')'

+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,

migs.*, mid.database_id, mid.[object_id]

FROM sys.dm_db_missing_index_groups mig

INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle

INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle

WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10

ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

The following table lists the missing index recommendations as displayed by the DMVs. It also summarizes all of the missing index recommendations to make it easier to determine what a consolidated index might require. It is important to note that the missing index recommendations for Query #5 and #6 are the same, just as they were in the query plan listings above. Although discussion of the calculation and interpretation of the Improvement Measure column is beyond the scope of this article, it should be noted that the total improvement measure was approximately twice that of any other index recommendation and further investigation reveals that this index would satisfy both Query #5 and #6.

Blog_20170601_T3

Table 3: Summary of All Missing Index Recommendations

Composite Index Recommendation for All Six Queries

Either Table 2 or Table 3 could be used to determine a composite index. In-depth investigation reveals that since DupID and Metric01 are specified in the where clauses as inequalities, these should be key columns in any index, and review of Table 3 highlights this for all index recommendations. Since MyText is used in all of the queries it must be an included column, and since Metric01 will already be a key column, the only remaining columns to include are Metric02, Metric03, Metric04, Metric05, and Metric06. An index that employs these columns as included columns can “cover” all the queries shown in this article. For further information about covering indices, please refer to the following article: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-columns. The composite (and covering) index is shown below:

CREATE NONCLUSTERED INDEX ix_ResolveMultipleQueriesNeeds ON [dbo].[FewDuplicates] (

[DupID],

[Metric01]

)

INCLUDE (

[MyText],

[Metric02],

[Metric03],

[Metric04],

[Metric05],

[Metric06]

)

WITH (fillfactor = 100, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)

ON [PRIMARY]

Query Plans for Queries #1 – #6 with Composite Index Added

Once the combined index was implemented, each query was run as before: caches cleared and SSMS actual execution plan option on. No query ran longer than one second even when the caches had been purged. Although the queries were run individually for the purposes of obtaining accurate timings, they were also run together for the purpose of showing the similarities of the execution plans with the new index in place. Table 4 lists the execution results and clearly, performance improved greatly for all queries. For example, durations decreased from approximately 90 seconds to about one second and reads dropped from approximately one million to four thousand. Examination of the query plans shows that all six queries used the new index to obtain the observed extremely fast performance.

Blog_20170601_T4

Table 4: Summary of Query Executions & Timings After Composite Index Added

Blog_20170601_7

Summary

Most readers probably already knew the answer to the first question: key columns are used to filter query results. The simplest example of this involves a where clause. Columns referenced by equalities are placed first, followed by columns that are used with inequalities. Some of the missing index DMVs actually differentiate between the types of relationships involved. The example queries show that table column ordering dictates included column order, thereby answering question #2. Since table column ordering is used to specify included column ordering, SQL Server WILL NOT duplicate missing index recommendations when the columns are identical, regardless of their ordering in the queries. The answer to the last question is that SQL Server does not appear to perform any consolidations or optimizations other than those cited in the answer to question #2. This knowledge is invaluable, particularly when evaluating missing index recommendations on production systems, because unless the queries that cause SQL Server to generate the missing index recommendations have been executed recently, there is little information to associate the actual queries with the recommendations. Understanding the factors that affect SQL Server’s recommendations simplifies identifying the causing queries and gives an analyst more certainty that the correct associations have been made.
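To make the answers to questions #1 and #2 concrete, here is a hypothetical query against the same test table along with the general shape of the index SQL Server would be expected to recommend for it. This is a sketch only; the CREATE INDEX text is illustrative and was not copied from an actual recommendation.

-- Hypothetical query: equality predicate on DupID, inequality predicate on Metric01
select [MyText], [Metric02], [Metric01]
from FewDuplicates
where DupID = 250000 and Metric01 between 10000 and 30000

-- Expected recommendation shape: equality column first, inequality column second,
-- selected columns included in table-column order (not select-list order)
CREATE NONCLUSTERED INDEX ix_Hypothetical ON [dbo].[FewDuplicates] ([DupID], [Metric01])
INCLUDE ([MyText], [Metric02])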

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

AlwaysOn 2016 – Encrypted Databases in an AG

–By Ginger Keys

In the AlwaysOn 2016 release, there are several enhancements that have improved manageability, scalability, and availability. One of the enhancements is the ability to add encrypted databases to an AlwaysOn group.

Encrypted databases were allowed in earlier versions of AlwaysOn; however, they could not be accessed after a failover to a secondary replica because the primary and secondary replicas had different service master keys. When the secondary tried to decrypt the database master key using its own service master key, the master key would fail to open because of an invalid password.

In SQL 2016 the database remains accessible on the secondaries in the event of failover, since it is no longer a requirement for each replica to have the same Database Master Key (DMK) and password on each instance. There is no need to back up the DMK from the primary and restore it to the secondary(s); it is only necessary that a DMK exists on each replica. As long as each replica has a DMK created, even without the same password, there will not be any encryption/decryption issues because of different keys.

Transparent Data Encryption (TDE) is a method of protecting your data ‘at rest’. There are several methods of protecting your data in addition to TDE, but this article will only discuss TDE encrypted databases in an AlwaysOn Availability Group. TDE encrypts the sensitive data in the data and log files, so if your physical media (drives or backup tapes) are stolen the data cannot be read; the keys used to encrypt the data are in turn protected with a certificate. TDE provides for real-time I/O encryption and decryption, but it does not protect data ‘in-flight’ flowing from the server to the end user, and it does not protect data from a user who has privileges in the SQL instance.

The steps below demonstrate how to add an encrypted database to an AlwaysOn Availability Group:

  1. Encrypt your database (if not already encrypted)
  2. Create/verify Database Master Key (DMK) on secondary replica(s)
  3. Create the TDE Certificate on secondary replica(s)
  4. Perform Full and Tlog backup of encrypted database on primary replica
  5. Add encrypted database to AlwaysOn group on primary replica
  6. Restore Full and Tlog backups on secondary replica(s)
  7. Join the encrypted database to the AlwaysOn group on secondary replica(s)

Step 1 – Encrypt Database

If your database has not already been encrypted, follow these steps to create a master key (DMK), backup the certificate, create a database encryption key (DEK), and enable encryption on the database on your primary replica.

--create DMK database master key

USE master

GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'DMKP@ssw0rd'

GO

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMKP@ssw0rd'

GO

BACKUP MASTER KEY TO FILE = 'C:\SQLRX\KeyBkup\SQLDMK.dmk'

ENCRYPTION BY PASSWORD = 'G00dP@ssw0rd'

GO

 

--check to make sure master key is set up

SELECT b.name, a.crypt_type_desc

FROM sys.key_encryptions a

INNER JOIN sys.symmetric_keys b

ON a.key_id = b.symmetric_key_id

WHERE b.name = '##MS_DatabaseMasterKey##';

GO

 

--create and backup certificate

CREATE CERTIFICATE TDECert WITH SUBJECT = 'Cert used for TDE';

GO

BACKUP CERTIFICATE TDECert

TO FILE = 'C:\SQLRX\KeyBkup\Cert4TDE.cer'

WITH PRIVATE KEY

(

FILE = 'C:\SQLRX\KeyBkup\TDEPvtKey.pvk',

ENCRYPTION BY PASSWORD = '*DBA$RC00l'

);

GO

 

--create DEK database encryption key with certificate

USE GKTestDB

GO

CREATE DATABASE ENCRYPTION KEY

WITH ALGORITHM = AES_256 -- AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY

ENCRYPTION BY SERVER CERTIFICATE TDECert

GO

USE master

GO

 

--enable TDE for the database

ALTER DATABASE GKTestDB SET ENCRYPTION ON

GO

 

--monitor encryption progress

SELECT db_name(database_id), encryption_state, percent_complete, key_algorithm, key_length

FROM sys.dm_database_encryption_keys

GO    

 

Step 2 – Verify Database Master Key on Secondary Replica(s)

On each secondary replica, verify that a Database Master Key (DMK) exists in the master database.

 

--check to make sure master key is set up

SELECT b.name, a.crypt_type_desc

FROM sys.key_encryptions a

INNER JOIN sys.symmetric_keys b

ON a.key_id = b.symmetric_key_id

WHERE b.name = '##MS_DatabaseMasterKey##';

GO

If a record is returned, a DMK already exists on the secondary instance. If no record is returned, create one as follows:

--create DMK database master key

USE master

GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'DMKP@ssw0rd2'

GO

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMKP@ssw0rd2'

GO

BACKUP MASTER KEY TO FILE = 'C:\SQLRX\KeyBkup\SQLDMK.dmk'

ENCRYPTION BY PASSWORD = 'G00dP@ssw0rd'

GO

 

Step 3 – Create TDE Certificate on Secondary Replica(s)

On each secondary replica instance, create/restore the TDE Certificate from the server certificate and private key that you backed up on the primary. Copy the certificate as well as the private key from the primary over to a file path on the secondary(s). The following will install the certificate onto the secondary replica(s).

--create TDE Certificate from the certificate backed up on primary

CREATE CERTIFICATE TDECert

FROM FILE = 'C:\SQLRX\KeyBkup\Cert4TDE.cer'

WITH PRIVATE KEY

(

FILE = 'C:\SQLRX\KeyBkup\TDEPvtKey.pvk',

DECRYPTION BY PASSWORD = '*DBA$RC00l'

)

Step 4 – Create Full & Tlog Backups on the Primary Replica

Create a full and tlog backup of the TDE encrypted database on the primary replica.

--create Full Backup of encrypted database on primary

BACKUP DATABASE [GKTestDB]

TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestFullBackup.bak'

WITH NOFORMAT, NOINIT, NAME = N'GKTestDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10

GO

--create Tlog backup of encrypted database on primary

BACKUP LOG [GKTestDB]

TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestTlogBackup.trn'

WITH NOFORMAT, NOINIT, NAME = N'GKTestDB-Tlog Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

GO

Step 5 – Add Encrypted Database to AlwaysOn Group

On the primary replica instance, add the TDE encrypted database to the Availability Group. As of the current release of SQL Server 2016, you cannot do this yet through the wizard:

Blog_20170525_1

Instead add the TDE database to your AlwaysOn group using TSQL:

USE master

GO

ALTER AVAILABILITY GROUP AGroup_SQLRx

ADD DATABASE GKTestDB

GO

Step 6 – Restore Full & Tlog Backups on Secondary Replica(s)

On each secondary replica instance, restore the full and tlog backups of the encrypted database with no recovery. Copy the backup files from the primary replica to a file location on your secondary and restore from that file path as follows:

--Restore Full backup of encrypted database on secondary with no recovery

USE [master]

RESTORE DATABASE [GKTestDB] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestFullBackup.bak' WITH FILE = 2, NORECOVERY, NOUNLOAD, STATS = 5

GO

--Restore the Tlog backup of encrypted database on secondary with no recovery

RESTORE LOG [GKTestDB] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestTlogBackup.trn' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 5

GO

Step 7 – Join the Encrypted Database to AlwaysOn Group on Secondary Replica

On each secondary replica instance, join the database to the availability group.

USE master

GO

ALTER DATABASE GKTestDB

SET HADR AVAILABILITY GROUP = AGroup_SQLRx

GO

Blog_20170525_2

Conclusion

After joining the encrypted database to the AlwaysOn group, you should now be able to fail over the group to the secondary replica and access the database without any issues. Encrypting a database does add some performance overhead, so the next article will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group.
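To verify, a manual failover can be performed from the secondary replica and the encrypted database queried there. This is a minimal sketch, assuming the group and database names used above and a synchronized, synchronous-commit secondary.

-- Run on the secondary replica to make it the new primary
ALTER AVAILABILITY GROUP AGroup_SQLRx FAILOVER;
GO
-- The encrypted database should now be readable on the new primary
USE GKTestDB;
GO
SELECT TOP 5 * FROM sys.objects;
GO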

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

SQL 2017 Is On The Way!!

–By Lori Brown  @SQLSupahStah

SQL Server 2017 is rolling our way like a big ole train. And, with the new features that are going to be available there may be quite a few shops that want to jump onboard.

Blog_20170519_1

As of this writing SQL Server 2017 CTP 2.1 is available for download and testing. (https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2017-ctp/ ) I recently attended a presentation on some of the new features in SQL 2017 by Denzil Ribeiro, a manager with the SQLCAT team. I tend to pay attention to the things that get the guys on the inside excited, so here are some of the highlights, with links that I could find and my notes on SQL 2017.

Resumable Online Index Rebuild

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/20/resumable-online-index-rebuild-is-in-public-preview-for-sql-server-2017-ctp-2-0/

Index rebuilds can be paused and restarted. Since the index must be rebuilt with ONLINE = ON, it is likely that this is an Enterprise Edition feature.

ALTER INDEX IDX_MyInx ON SomeTable

REBUILD WITH (RESUMABLE = ON, ONLINE = ON, MAX_DURATION = 1)

-- Pause

ALTER INDEX IDX_MyInx ON SomeTable PAUSE

Use sys.index_resumable_operations to view the status of resumable index rebuilds. https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-index-resumable-operations
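A paused rebuild can later be picked back up where it left off. This is a minimal sketch using the same hypothetical index name as above.

-- Check progress of any resumable index operations
SELECT name, sql_text, state_desc, percent_complete
FROM sys.index_resumable_operations;

-- Resume the paused rebuild
ALTER INDEX IDX_MyInx ON SomeTable RESUME;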

Wait Stats Per Query Execution

Available by default. Statistics info is now in query plans. Use sys.query_store_wait_stats to see wait info for a query plan. https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-query-store-wait-stats-transact-sql
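A quick way to look at the captured waits is to query the new catalog view directly. This is a minimal sketch and assumes the Query Store is enabled on the database.

-- Top wait categories captured per query plan in the Query Store
SELECT ws.plan_id,
       ws.wait_category_desc,
       ws.total_query_wait_time_ms,
       ws.avg_query_wait_time_ms
FROM sys.query_store_wait_stats ws
ORDER BY ws.total_query_wait_time_ms DESC;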

Automatic Query Tuning

https://docs.microsoft.com/en-us/sql/relational-databases/automatic-tuning/automatic-tuning

Recognizes plan regressions and will automatically force a good plan to be used. Database must be in 140 (SQL2017) compatibility. Must enable the Query Store for the database. Can then use sys.dm_db_tuning_recommendations (https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-tuning-recommendations-transact-sql ) to get info on plans and recommendations that were used to fix regressions. Won’t force a good plan unless AUTOMATIC_TUNING = ON.

ALTER DATABASE current

SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );
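Once the feature is on, the tuning recommendations and the reason behind each one can be pulled from the DMV mentioned above. A minimal sketch:

-- Review plan regression recommendations and their current state
SELECT reason,
       score,
       state,
       details
FROM sys.dm_db_tuning_recommendations;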

Adaptive Query Processing

Interleaved Execution

https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/04/19/introducing-interleaved-execution-for-multi-statement-table-valued-functions/

Available by default. Plans are adjusted by executing the multi-statement table valued function portion first and then reoptimizing the rest of the plan based on the actual row counts, all while the query is executing.

Batch Mode Memory Grant Feedback

https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/11/29/introducing-batch-mode-adaptive-memory-grant-feedback/

Adjusts the plan in cache if the memory grant was not right (either too much or not enough).

Batch Mode Adaptive Joins

https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/04/19/introducing-batch-mode-adaptive-joins/

After the first input of the join is scanned, a better decision can be made to choose either a Hash Join or a Nested Loop Join. If the number of rows is small, a Nested Loop Join will likely be used.

Graph Data Processing

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/20/graph-data-processing-with-sql-server-2017/

Nodes and edges are stored as tables. Supports many-to-many modeling. New MATCH clause in queries. https://docs.microsoft.com/en-us/sql/t-sql/statements/match-sql-graph

Blog_20170519_2

SELECT Person2.Name

FROM Person Person1, Friends, Person Person2

WHERE MATCH(Person1-(Friends)->Person2)

AND Person1.Name = 'John';

Python

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/python-in-sql-server-2017-enhanced-in-database-machine-learning/

Python is now integrated into SQL just like R Services and is used for advanced analytics. I have to admit that Python is way beyond my skillset right now but wanted to mention it.
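For anyone who wants to see what the integration looks like, Python runs through the same external script mechanism as R. This is a minimal sketch; it assumes Machine Learning Services with Python is installed and that 'external scripts enabled' has been turned on.

-- Enable external scripts (requires a restart of the SQL Server service to take effect)
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;
GO
-- Pass a simple result set through a Python script and back to SQL
EXEC sp_execute_external_script
     @language = N'Python',
     @script = N'OutputDataSet = InputDataSet',
     @input_data_1 = N'SELECT 1 AS TestValue';
GO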

Conclusion

A lot of these new features for SQL 2017 are still evolving, and I am sure more will come out. I am in the process of getting a VM with Linux installed on it so that I can check out installing and running SQL Server on Linux, so expect a post on that soon.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

AlwaysOn 2016 – Direct Seeding

–By Ginger Keys

SQL Server 2016 provides some awesome new enhancements and features, especially in Enterprise Edition.  One of the new features relating to AlwaysOn is Direct Seeding.

Good stuff – No more time consuming backup and restore

  • Direct Seeding allows you to initialize the databases to your secondary replica(s) without having to configure a network share or perform backup and restore operations.  It will send the database directly over to the secondary replica without you needing to do anything.  Woo hoo!!

Gotchas – Some with regards to Large Databases

  • Direct seeding does not use compression by default, so adding a very large database to your AlwaysOn group could take a while and can use a lot of your network bandwidth.  Compression can be enabled by turning on Trace Flag 9567.   However, know that adding a large database with compression enabled will cause high CPU utilization during the seeding process.   For more info click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/tune-compression-for-availability-group .
  • Also during direct seeding the transaction log will not truncate, so in a highly transactional database your transaction log could potentially grow very large during this time.
  • For large databases being initialized to secondary replicas using direct seeding, be mindful of your network capacity (as the entire database will be sent across the network to the secondary(s)), workload amount which could bloat your transaction log, CPU pressure if you enable compression, and database size which will affect the seeding time.

You may want to use direct seeding during afterhours to prevent your Network or System Admin from hunting you down for the network hit that will happen.

If you are using SQL Server Management Studio version 17, Direct Seeding of your databases is now available in the GUI! You can download this version of SSMS here https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms .

Let’s Do This!

I will assume you already have your environment ready (i.e. failover cluster created and configured, SQL installed and configured, AlwaysOn enabled, replicas and logins configured, etc.). So from the New Availability Group Wizard, you will specify your AG name:

Blog_20170511_1

Just a side note… in earlier versions of AlwaysOn (SQL 2012 & 2014), failover would occur if there was a problem with the health of the instance. If one of your databases was in trouble however, the AlwaysOn group would not fail over as long as the instance was okay. So if you had a database that was offline, suspect or corrupt, no failover would occur. In AlwaysOn 2016, failover will occur not only if the instance is in trouble, but also if one or more of your databases is in trouble. This is not a default setting however. You have to specify this when creating your AlwaysOn group, by selecting the ‘Database Level Health Detection’ checkbox in the setup wizard.
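For reference, the same behavior can also be turned on with TSQL; it maps to the DB_FAILOVER option on the availability group. This is a minimal sketch using a placeholder group name.

-- Turn on database level health detection for an existing availability group
ALTER AVAILABILITY GROUP [AGroup_SQLRx] SET (DB_FAILOVER = ON);
GO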

Next, select your database(s) to add to the AlwaysOn group:

Blog_20170511_2

Then add the replicas to be included in your AG and what type of failover (automatic or manual), data replication (synchronous or asynchronous), and whether they will allow for read only access:

Blog_20170511_3

Next click on the Endpoints tab and make sure URL and port numbers are correct:

Blog_20170511_4

Click on the Backup Preferences tab and specify how you want your backups to occur:

Blog_20170511_5

Click on the Listener tab and specify the Listener (DNS) name, port number and IP address:

Blog_20170511_6

And THIS is where you can specify Automatic (Direct) Seeding!

Blog_20170511_7

Click Next through the next screens and THAT’S IT!

Blog_20170511_8

You’re done, simple as that.

Blog_20170511_9

If you do not have SSMS v17 you can still initialize your databases to secondary replicas using direct seeding, but you will need to do this through TSQL from the primary replica:

CREATE AVAILABILITY GROUP [<availability_group_name>]

FOR DATABASE db1

REPLICA ON '<primary_server>'

WITH (ENDPOINT_URL = N'TCP://<primary_server>.<fully_qualified_domain_name>:5022',

FAILOVER_MODE = AUTOMATIC,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

BACKUP_PRIORITY = 50,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),

SEEDING_MODE = AUTOMATIC),

N'<secondary_server>' WITH (ENDPOINT_URL = N'TCP://<secondary_server>.<fully_qualified_domain_name>:5022',

FAILOVER_MODE = AUTOMATIC,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

BACKUP_PRIORITY = 50,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),

SEEDING_MODE = AUTOMATIC);

GO

For more information click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/automatically-initialize-always-on-availability-group .
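While seeding is in progress, its state can be watched from the primary replica with the automatic seeding DMV. This is a minimal sketch; only a few of the available columns are selected.

-- Check the state of automatic seeding operations on the primary replica
SELECT start_time,
       completion_time,
       current_state,
       failure_state_desc
FROM sys.dm_hadr_automatic_seeding;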

Conclusion

SQL Server 2016 AlwaysOn gives you the ability to join databases to your AlwaysOn group and initialize them on your secondaries without having to backup and restore, or establish a network share. This ultimately saves much time and effort, but can take a while if your database is large. And using compression during the seeding process can save some time, but your CPU will take a performance hit. Also keep in mind that your transaction log will grow during this time as it will not truncate during the seeding process. Of course, be sure to test this first before doing it in your production environment!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

List Partitioned Tables And Other Info About Them

— By Lori Brown @SQLSupahStah

The query below is a good way to find out if any of the tables in your database are partitioned. This is important to know so that you can then dig into how those tables are partitioned.

-- Partitioned Tables

SELECT DISTINCT t.name AS PartitionedTables

FROM sys.partitions p

INNER JOIN sys.tables t

ON p.object_id = t.object_id

WHERE p.partition_number <> 1

If you have partitioned tables, here is a good way to find out how they are partitioned.

-- Get partition info

SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]

, p.partition_number AS [p#]

, fg.name AS [filegroup]

, p.rows

, au.total_pages AS pages

, CASE boundary_value_on_right

WHEN 1 THEN 'less than'

ELSE 'less than or equal to' END AS comparison

, rv.value

, CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +

SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),

CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +

SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +

SUBSTRING (au.first_page, 1, 1))) AS first_page

FROM sys.partitions p

INNER JOIN sys.indexes i

ON p.object_id = i.object_id

AND p.index_id = i.index_id

INNER JOIN sys.objects o

ON p.object_id = o.object_id

INNER JOIN sys.system_internals_allocation_units au

ON p.partition_id = au.container_id

INNER JOIN sys.partition_schemes ps

ON ps.data_space_id = i.data_space_id

INNER JOIN sys.partition_functions f

ON f.function_id = ps.function_id

INNER JOIN sys.destination_data_spaces dds

ON dds.partition_scheme_id = ps.data_space_id

AND dds.destination_id = p.partition_number

INNER JOIN sys.filegroups fg

ON dds.data_space_id = fg.data_space_id

LEFT OUTER JOIN sys.partition_range_values rv

ON f.function_id = rv.function_id

AND p.partition_number = rv.boundary_id

WHERE i.index_id < 2

AND o.object_id = OBJECT_ID('dbo.SomeTableName');

Common Criteria Compliance and LCK_M_SCH_M

–By Lori Brown

If you have ever wondered what happens when you mistakenly enable Common Criteria compliance in SQL, well wonder no more!!

Blog_20170427_1

We have a client who has no idea how or when Common Criteria compliance was enabled on their production system. All they know is that performance has been slowly degrading. After collecting performance data, we found high LCK_M_SCH_M waits; this is a schema modification lock that prevents access to a table while a DDL operation occurs. We also found blocked process records in which a LOGIN_STATS table in the master database was frequently waiting. This table is used to hold login statistics. When there are a lot of logins and logouts, there can be contention in this table.
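If you suspect the same thing on one of your systems, a quick check of the accumulated schema modification lock waits and of the configuration setting itself might look like this (a minimal sketch):

-- Accumulated schema modification lock waits since the last restart
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'LCK_M_SCH_M';

-- Confirm whether Common Criteria compliance is currently enabled
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'common criteria compliance enabled';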

When you enable Common Criteria compliance, something called Residual Information Protection (RIP) is enabled. RIP is an additional security measure for memory: it requires that a specific bit pattern be written to memory before that memory can be reallocated (overwritten) for a new resource or login. So with lots of logins and logouts, there is a performance hit in memory because the memory allocations constantly have to be overwritten.

Keep in mind that if you enable Common Criteria compliance, you can run into slowdowns from locking and memory. Make sure that your server is able to handle this well and that applications are designed to minimize the impact of frequent logins/logouts.

To disable Common Criteria compliance you can use sp_configure or the GUI.

sp_configure 'common criteria compliance enabled', 0

GO

RECONFIGURE

GO

However, it is not really disabled until you reboot the server (the MSDN documentation actually says to reboot the server). Restarting the instance will not work for this configuration change. I believe that this is because of the Residual Information Protection that secures memory: without a reboot that flushes memory, RIP is still doing its work and causing performance issues.

Check out the documentation from MSDN on Common Criteria compliance here:

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/common-criteria-compliance-enabled-server-configuration-option

The biggest takeaway from this is to make sure to change your SQL configuration with good reason. Always know the side effects that can show up when you set any of the advanced configuration options.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL 2016 Automatic Seeding of Availability Groups

–By Lori Brown

I came across something today that I did not know existed in AGs: Automatic Seeding. This allows SQL to automatically create the secondary replicas for all databases in an AG. Pretty cool!! So if you set things up correctly when you create your AGs, you don’t have to worry about backing up databases and tlogs, taking them to the secondary replica, restoring, and then getting the AG fully set up. It is a one-stop shop. This is new in SQL 2016 only, as far as I can tell.

https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/automatically-initialize-always-on-availability-group

Blog_20170420_1

You have to set up your AG by script but that is not too difficult.

--Run On Primary

CREATE AVAILABILITY GROUP [<availability_group_name>]

FOR DATABASE db1

REPLICA ON '<primary_server>'

WITH (ENDPOINT_URL = N'TCP://<primary_server>.<fully_qualified_domain_name>:5022',

FAILOVER_MODE = AUTOMATIC,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

BACKUP_PRIORITY = 50,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),

SEEDING_MODE = AUTOMATIC),

N'<secondary_server>' WITH (ENDPOINT_URL = N'TCP://<secondary_server>.<fully_qualified_domain_name>:5022',

FAILOVER_MODE = AUTOMATIC,

AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,

BACKUP_PRIORITY = 50,

SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),

SEEDING_MODE = AUTOMATIC);

GO

Of course, be aware that if you set this on an AG with large databases, it could cause an issue since SQL would be pushing an entire database across the network. Trace flag 9567 can help compress the data stream for AGs using Automatic Seeding, but there is a side effect of increased processor load that you need to keep in mind.

https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/tune-compression-for-availability-group
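If you decide the compression is worth the CPU cost, the trace flag can be turned on globally before adding the database to the AG. A minimal sketch; remember it affects automatic seeding for every availability group on the instance.

-- Enable compression of the automatic seeding data stream instance-wide
DBCC TRACEON (9567, -1);
GO
-- Turn it back off after seeding completes
DBCC TRACEOFF (9567, -1);
GO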

We are setting up some new VMs here at SQLRx and will be blogging later in much more depth on how this works.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!