
AlwaysOn – Endpoint Ownership

— By Ginger Keys

It is not uncommon for a DBA or other IT staff to set up AlwaysOn in a SQL environment and later leave the company. We ran into this recently with a client and were asked to delete the previous employee’s login from everything SQL related. We were able to remove the login from all databases and server roles, however we were not able to delete the login because it was the owner of an endpoint.

When creating an AlwaysOn Availability Group, you have the option of using the wizard or creating it with TSQL statements. The wizard is intuitive and easy to use, and apart from the few settings you can specify, it deploys default configurations. One of those defaults is the endpoint owner: whoever creates the AlwaysOn group is by default the owner of the endpoint.

This is generally not a problem…unless that person leaves the company and you need to delete the login! You will get an error message that says “The server principal owns one or more endpoint(s) and cannot be dropped (Microsoft SQL Server, Error: 15141)”.
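For example, attempting to drop such a login (the login name below is made up for illustration) fails immediately:

USE master
GO
-- Fails with error 15141 because the login still owns the Hadr_endpoint
DROP LOGIN [MyDomain\FormerDBA]
GO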

To see who owns your endpoints, run this statement:

USE master
GO
SELECT e.name AS EndpointName,
       sp.name AS EndpointOwner,
       et.PayloadType,
       e.state_desc
FROM sys.endpoints e
INNER JOIN sys.server_principals sp
    ON e.principal_id = sp.principal_id
RIGHT OUTER JOIN ( VALUES ( 2, 'TSQL'),
    ( 3, 'SERVICE_BROKER'), ( 4, 'DATABASE_MIRRORING') )
    AS et ( typeid, PayloadType )
    ON et.typeid = e.type

The AlwaysOn endpoint will have the name Hadr_endpoint and will have a DATABASE_MIRRORING payload type as shown below.

Blog_20170803_1

If your AlwaysOn group has already been created and there is a domain login as the owner, you can change the ownership to sa. Run the following statement to make the change:

USE master
GO
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa

This will allow you to delete any login that previously owned the endpoint, if it's ever necessary.
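Once ownership has been transferred to sa, dropping the old login (again, a hypothetical name) succeeds:

USE master
GO
DROP LOGIN [MyDomain\FormerDBA]
GO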

If you are creating an AlwaysOn Availability Group and want to use TSQL statements instead of the wizard, you have the ability to specify the endpoint owner. For complete instructions on how to set up the AlwaysOn group with TSQL click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/create-an-availability-group-transact-sql

In order to create the endpoint with a specific user, run the following statement:

CREATE ENDPOINT endpoint_mirroring
    AUTHORIZATION loginname
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = WINDOWS KERBEROS,
        ENCRYPTION = SUPPORTED,
        ROLE = ALL);
GO

In the statement above, if AUTHORIZATION is not specified with a SQL or Windows login, the caller will become the owner of the newly created endpoint. To use AUTHORIZATION and assign ownership to a login, the caller must have IMPERSONATE permission on the specified login.

Endpoints are a fundamental part of SQL Server that provide a connection, or point of entry, into your SQL Server instance. Knowing who owns these endpoints and how to change the owner will potentially save you some headaches down the road in the event of IT staffing changes in your organization.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

AlwaysOn 2016 – Encrypted Databases and Performance

–By Ginger Keys

It is common knowledge that encrypting a database causes degraded performance in a SQL server. In an AlwaysOn environment performance can be even more sluggish because there is the extra element of data replication latency. How much difference does it really make? Of course the answer is “it depends” on your environment and your workload. I was curious to see for myself what kind of performance hit encryption would have on one of my test databases, so this post will look at CPU usage of an encrypted vs non-encrypted database.

Microsoft says that turning on TDE (Transparent Data Encryption) for a database will result in a 2-4% performance penalty, which is actually not too bad given the benefits of having your data more secure. There is even more of a performance hit when enabling cell level or column level encryption. When encrypting any of your databases, keep in mind that the tempdb database will also be encrypted. This could have a performance impact on your other non-encrypted databases on the same instance.
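If you want to see which databases on an instance actually have encryption keys (tempdb will appear here once TDE is enabled for any user database), a quick check could look like this:

SELECT DB_NAME(database_id) AS DatabaseName,
    encryption_state,   -- 3 = encrypted
    percent_complete
FROM sys.dm_database_encryption_keys
GO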

In a previous post I demonstrated how to add an encrypted database to an AlwaysOn group in SQL2016. In this article I will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group compared to the same database not-encrypted.

I have 3 identical test databases I will use to look at performance metrics.

  • GKTestDB is TDE encrypted, and is part of the AlwaysOn group
  • GKTestDB2 is not encrypted, and not part of AlwaysOn group
  • GKTestDB3 is not encrypted, but is part of AlwaysOn group

Blog_20170713_1

There are some good open source tools for performing stress tests on your SQL database. You can find one that uses the AdventureWorks2014 database here: http://www.sqlballs.com/2016/05/sql-server-random-workload-generator.html. There is also one here that you can use on any database: https://github.com/ErikEJ/SqlQueryStress/wiki. I will be using the SQLQueryStress tool for my demonstration.

Step 1 – Test performance on non-encrypted DB not in AG

To measure performance metrics, create a User Defined Data Collector Set in Performance Monitor (Perfmon). There are many metrics that can be measured, but I will only be looking at CPU % Processor Time.

Blog_20170713_2

Download and open the SQLQueryStress tool, and create a statement to run against your database. In my test I will first look at the performance of running a select query for 5000 iterations against the database that has not been added to the AlwaysOn group and has not been encrypted (GKTestDB2):

Blog_20170713_3

Be sure to clear the buffers and cache before performing each stress test. Select your database, the server name, the number of iterations, the number of threads and the delay between queries in milliseconds.

Blog_20170713_4
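The article does not show the exact commands used to clear the buffers and cache, but on a test system one common approach is something like the following (do not run this on a production instance):

CHECKPOINT
GO
DBCC DROPCLEANBUFFERS   -- flush clean pages from the buffer pool
DBCC FREEPROCCACHE      -- clear the plan cache
GO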

Start your user defined Perfmon data collector set, then start the stress test in the SQLQueryStress tool.

Blog_20170713_5

At the end of each stress test you will need to manually stop your Perfmon collection.

Step 2 – Test performance on encrypted DB in the AlwaysOn Group

Now I will perform the same stress test to see performance on the identical but Encrypted database in the AlwaysOn group (GKTestDB). Be sure to clear the buffers and cache, and change the database in the SQLQueryStress tool.

Blog_20170713_6

Blog_20170713_7

Step 3 – Test performance on non – encrypted DB in the AlwaysOn Group

Just for curiosity's sake, I'm also going to test the identical database that is not encrypted, but is included in the AlwaysOn group (GKTestDB3):

Blog_20170713_8

Blog_20170713_9

Step 4 – Compare results

I set the output files of my Perfmon results to land in a location on my local drive so that I can open up the results of each test and compare.

The CPU usage for the database that is not encrypted and not in my AlwaysOn group averaged 43% for the duration of the test, as shown by the dark line on the graph below.

Not Encrypted / Not in AG database CPU usage:

Blog_20170713_10

The CPU usage for the identical database that is encrypted and is in the AlwaysOn group averaged 57.5% during the stress test, as shown on the graph below. This is quite a bit more than the non-encrypted/non-AG database, especially given the simple statement that was being run.

TDE Encrypted / Joined to AG Database CPU usage:

Blog_20170713_11

And finally, the CPU usage for the identical database that is not encrypted, but is included in my AlwaysOn group, averaged 43.4%, which is not much different than the non-encrypted/non-AG database above.

Not Encrypted / Joined to AG Database CPU usage:

Blog_20170713_12

Blog_20170713_13

Conclusion

Having an encrypted database creates a noticeable CPU performance hit as compared to a non-encrypted database. Microsoft provides many options for protecting your data, transparent data encryption (TDE) being one of them. The benefits and assurance of securing your data outweigh the performance cost; however, it may be useful to see how much of a performance hit your system will encounter prior to deciding which security options your organization will deploy.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

 

 

 

Stretch Databases – SQL Server 2016

— by Ginger Keys

Blog_20170629_1

Microsoft has added another cool feature in SQL Server 2016 – Stretch databases – that provides a cost effective way to store historical or ‘cold’ data. This feature is ideal for transactional databases with large amounts of cold data, typically stored in a small number of tables. Stretching is better than the traditional method of archiving your old data to a remote location, because stretched data can still be queried and accessed through applications. The data is always online and there is no need to change anything in your application in order to query it. The end user doesn’t see any difference other than possibly a small amount of latency when querying the stretched data.

Features and Advantages:

Some features and benefits of stretching your data to Microsoft Azure:

  • Useful for ‘cold’ or historical data.
  • Your data is still online and available for query.
  • No changes to applications or queries required.
  • Stretching your data to Azure is encrypted and safe – and Always Encrypted, Row Level Security, and other SQL security features are still available
  • Useful for reducing storage space needed on local disk
  • Useful for reducing size of backups and the time required to perform backups.
  • Stretching applies to individual tables – you can stretch either the entire table or selected rows from the table.
  • Select and Insert are supported for stretched data, while Update and Delete are not.

 

Limitations:

There are some limitations to stretch databases, and not all databases or tables will be eligible for stretching. Some features and datatypes in a table will prevent it from being stretched. These include:

  • Tables with FILESTREAM data
  • Tables replicated or using Change Tracking or Change Data Capture
  • Memory optimized tables
  • Text, ntext, image, timestamp datatypes
  • Sql_variant, geometry or geography, CLR, hierarchyid datatypes
  • COLUMN_SET or computed column types
  • Default or check constraints, or foreign key constraints that reference the table
  • Full text, XML, or spatial indexes, or indexed views that reference the table

Certain limitations also exist for tables that have been stretched:

  • Uniqueness is not enforced with UNIQUE or PRIMARY KEY constraints in the stretched table in Azure
  • You can’t Update or Delete rows in a stretch enabled table or view
  • You can’t Insert rows into a stretched table on a linked server
  • You can’t create an index for a view that includes stretch enabled tables
  • Filters on indexes do not propagate to the remote table.

Steps to set up Stretch Database

  1. Get an Azure account with Microsoft – https://azure.microsoft.com/en-us/free/
    1. Create Azure server
    2. Establish user credentials
    3. Configure firewall rules
  2. Run DMA (Data Migration Assistant) to see if your database is eligible
    1. Download Stretch Database Advisor (DMA) – https://www.microsoft.com/en-us/download/details.aspx?id=53595
  3. Enable Instance
  4. Enable Database
  5. Monitor state of stretch progress

I’m going to assume you have already completed the first two steps and will not go over these. So beginning with step 3, to implement a stretch database you must enable the feature on both the instance and database.

Enable Instance

To enable Stretch Database on your instance run the following statement:

--enable instance for stretch
EXEC sp_configure 'remote data archive', '1'
GO
RECONFIGURE
GO

Enable Database

From SQL Server Management Studio (SSMS) right click on the database which holds the table(s) you want to stretch to Azure and select Tasks > Stretch > Enable.

Blog_20170629_2

This will open the Enable Database for Stretch Wizard. The first step is to select the table(s) you want to stretch to Azure. You can select the entire table contents, or…

Blog_20170629_3

…if you click on the ‘Entire Table’ link, you can select specific rows to stretch as shown below.

Blog_20170629_4

The next step is to Configure Azure. You will be prompted to sign in to your Azure account:

Blog_20170629_5

After you have signed in to Azure, select the subscription to use (if you have more than one), select the correct region, select the server you have created (or create a new one), and provide the login credentials. Currently Azure only supports SQL Server Authentication for stretch databases.

Blog_20170629_6

In order to stretch a database table to Azure, the database must have a database master key (DMK). Specify (and save) the password for the DMK by creating the credential in the Wizard as follows on the Secure Credentials page.

Blog_20170629_7

If you already have a DMK for your database you will enter the password on this same screen as shown.

Blog_20170629_8

On the Select IP Address page, you can select the ‘Use source SQL Server Public IP’ button or specify the range of subnet IP addresses to use.

Blog_20170629_9

After specifying the IP addresses, click next through the Summary and Results pages.

Blog_20170629_10

Blog_20170629_11

Thank you Microsoft, we will enjoy it!

Monitor your Stretch Progress

In order to see the migration status of your data to the Azure cloud server, run this TSQL statement to show the number of rows migrated:

SELECT * FROM sys.dm_db_rda_migration_status
ORDER BY start_time_utc DESC

You can also monitor your stretched database by right clicking on your database > Tasks > Stretch > Monitor, as shown below.

Blog_20170629_12

This will display a report with your Source server info, the Azure server info and information about the tables that were stretched:

Blog_20170629_13

You will also notice in Object Explorer that the stretched database has a different icon beside it.

Blog_20170629_14

You will also notice that, under the External Resources of your database, the stretch server is listed in the External Data Sources folder.

Blog_20170629_15

When connecting to your Azure server, you can now see the stretched database, which gets assigned a database name and table name automatically.

Blog_20170629_16

Now What?

Let’s see what happens now when we query data from the Orders table we just stretched.

From your local server instance, select rows from your table as shown. You will notice that the execution plan shows a ‘Remote Query’ operation because it is pulling data from the remote Azure server.

Blog_20170629_17

I only stretched certain rows of data to the Azure cloud, not the entire table. When I run a select statement that only pulls data from my local database and not the stretched Azure data, you will notice the execution plan changes. No Remote Query or concatenation operation is involved because it is only pulling data from the local table.

Blog_20170629_18

Disable Stretch Database

Now if for some reason you need to un-stretch your database you can disable Stretch Database for your entire database or just for an individual table. Disabling stretch database does not delete the table or data from Azure. You have to manually drop it using the Azure management portal, and the remote table in Azure will continue to incur costs until you delete it.

To disable Stretch Database for your database, right click on your database in Object Explorer on your local SSMS and select Tasks > Stretch > Disable, as shown below.

Blog_20170629_19

To disable Stretch Database for a table, right click on the table name and select Stretch > Disable > and either Bring data back from Azure, or Leave data in Azure. Bringing data back from Azure will incur data transfer costs.

Blog_20170629_20
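If you prefer TSQL, the equivalent table-level commands look roughly like the following (the Orders table name follows the example above; syntax per the SQL Server 2016 documentation):

-- Disable Stretch and bring the migrated data back from Azure (incurs data transfer costs)
ALTER TABLE [dbo].[Orders]
    SET (REMOTE_DATA_ARCHIVE (MIGRATION_STATE = INBOUND));
GO
-- Disable Stretch and leave the migrated data in Azure
ALTER TABLE [dbo].[Orders]
    SET (REMOTE_DATA_ARCHIVE = OFF_WITHOUT_DATA_RECOVERY (MIGRATION_STATE = PAUSED));
GO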

Conclusion

Migrating cold or historical data to Microsoft’s Azure can be a cost effective way to retain inactive data while still having access to it when needed. For information on Azure Stretch Database pricing go here https://azure.microsoft.com/en-us/pricing/details/sql-server-stretch-database/ . This article covered very basic information on how to set up a Stretch Database, but there are several more aspects such as performance implications and backup/restore processes that we did not cover. Hopefully this gives a starting point from which you can begin to test and explore this new feature.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

 

 

 

AlwaysOn 2016 – Encrypted Databases in an AG

–By Ginger Keys

In the AlwaysOn 2016 release, there are several enhancements that have improved manageability, scalability, and availability. One of the enhancements is the ability to add encrypted databases to an AlwaysOn group.

Encrypted databases were allowed in earlier versions of AlwaysOn; however, they could not be accessed if there was a failover to a secondary replica because the primary and secondary replicas had different service master keys. So when the secondary tried to decrypt the database master key using its own service master key, the master key would fail to open because of an invalid password.

In SQL 2016 the database remains accessible on the secondaries in the event of failover, since it is no longer a requirement for each replica to have the same Database Master Key (DMK) and password on each instance. There is no need to back up the DMK from the primary and restore it to the secondary(s); it is only necessary that a DMK exist on each replica. As long as each replica has a DMK created, even without the same password, there will not be any encryption/decryption issues caused by different keys.

Transparent Data Encryption (TDE) is a method of protecting your data ‘at rest’. There are several methods of protecting your data in addition to TDE, but this article will only discuss TDE encrypted databases in an AlwaysOn Availability Group. TDE provides real-time I/O encryption and decryption of the data and log files, so if your physical media (drives or backup tapes) are stolen, the sensitive data cannot be read; the keys used to encrypt the data are in turn protected with a certificate. TDE does not, however, protect data ‘in-flight’ flowing from the server to the end user, and it does not protect data from a user who has privileges in the SQL instance.

The steps below demonstrate how to add an encrypted database to an AlwaysOn Availability Group:

  1. Encrypt your database (if not already encrypted)
  2. Create/verify Database Master Key (DMK) on secondary replica(s)
  3. Create the TDE Certificate on secondary replica(s)
  4. Perform Full and Tlog backup of encrypted database on primary replica
  5. Add encrypted database to AlwaysOn group on primary replica
  6. Restore Full and Tlog backups on secondary replica(s)
  7. Join the encrypted database to the AlwaysOn group on secondary replica(s)

Step 1 – Encrypt Database

If your database has not already been encrypted, follow these steps to create a master key (DMK), backup the certificate, create a database encryption key (DEK), and enable encryption on the database on your primary replica.

--create DMK database master key
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'DMKP@ssw0rd'
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMKP@ssw0rd'
GO
BACKUP MASTER KEY TO FILE = 'C:\SQLRX\KeyBkup\SQLDMK.dmk'
ENCRYPTION BY PASSWORD = 'G00dP@ssw0rd'
GO

 

--check to make sure master key is set up
SELECT b.name, a.crypt_type_desc
FROM sys.key_encryptions a
INNER JOIN sys.symmetric_keys b
    ON a.key_id = b.symmetric_key_id
WHERE b.name = '##MS_DatabaseMasterKey##';
GO

 

--create and backup certificate
CREATE CERTIFICATE TDECert WITH SUBJECT = 'Cert used for TDE';
GO
BACKUP CERTIFICATE TDECert
TO FILE = 'C:\SQLRX\KeyBkup\Cert4TDE.cer'
WITH PRIVATE KEY
(
    FILE = 'C:\SQLRX\KeyBkup\TDEPvtKey.pvk',
    ENCRYPTION BY PASSWORD = '*DBA$RC00l'
);
GO

 

--create DEK database encryption key with certificate
USE GKTestDB
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256 -- AES_128 | AES_192 | AES_256 | TRIPLE_DES_3KEY
ENCRYPTION BY SERVER CERTIFICATE TDECert
GO
USE master
GO

 

--enable TDE for the database
ALTER DATABASE GKTestDB SET ENCRYPTION ON
GO

 

--monitor encryption progress
SELECT db_name(database_id), encryption_state, percent_complete, key_algorithm, key_length
FROM sys.dm_database_encryption_keys
GO

 

Step 2 – Verify Database Master Key on Secondary Replica(s)

On each secondary replica, verify that a Database Master Key (DMK) exists in the master database.

 

--check to make sure master key is set up
SELECT b.name, a.crypt_type_desc
FROM sys.key_encryptions a
INNER JOIN sys.symmetric_keys b
    ON a.key_id = b.symmetric_key_id
WHERE b.name = '##MS_DatabaseMasterKey##';
GO

If a record is returned, that means a DMK exists on the secondary instance. If no record is returned, then create one as follows:

--create DMK database master key
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'DMKP@ssw0rd2'
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'DMKP@ssw0rd2'
GO
BACKUP MASTER KEY TO FILE = 'C:\SQLRX\KeyBkup\SQLDMK.dmk'
ENCRYPTION BY PASSWORD = 'G00dP@ssw0rd'
GO

 

Step 3 – Create TDE Certificate on Secondary Replica(s)

On each secondary replica instance, create/restore the TDE Certificate from the server certificate and private key that you backed up on the primary. Copy the certificate as well as the private key from the primary over to a file path on the secondary(s). The following will install the certificate onto the secondary replica(s).

--create TDE Certificate from the certificate backed up on primary
CREATE CERTIFICATE TDECert
FROM FILE = 'C:\SQLRX\KeyBkup\Cert4TDE.cer'
WITH PRIVATE KEY
(
    FILE = 'C:\SQLRX\KeyBkup\TDEPvtKey.pvk',
    DECRYPTION BY PASSWORD = '*DBA$RC00l'
)

Step 4 – Create Full & Tlog Backups on the Primary Replica

Create a full and tlog backup of the TDE encrypted database on the primary replica.

--create Full Backup of encrypted database on primary
BACKUP DATABASE [GKTestDB]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestFullBackup.bak'
WITH NOFORMAT, NOINIT, NAME = N'GKTestDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
GO

--create Tlog backup of encrypted database on primary
BACKUP LOG [GKTestDB]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestTlogBackup.trn'
WITH NOFORMAT, NOINIT, NAME = N'GKTestDB-Tlog Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

Step 5 – Add Encrypted Database to AlwaysOn Group

On the primary replica instance, add the TDE encrypted database to the Availability Group. As of the current release of SQL Server 2016, you cannot do this yet through the wizard.

Blog_20170525_1

Instead add the TDE database to your AlwaysOn group using TSQL:

USE master
GO
ALTER AVAILABILITY GROUP AGroup_SQLRx
ADD DATABASE GKTestDB
GO

Step 6 – Restore Full & Tlog Backups on Secondary Replica(s)

On each secondary replica instance, restore the full and tlog backups of the encrypted database with no recovery. Copy the backup files from the primary replica to a file location on your secondary and restore from that file path as follows:

--Restore Full backup of encrypted database on secondary with no recovery
USE [master]
RESTORE DATABASE [GKTestDB] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestFullBackup.bak' WITH FILE = 2, NORECOVERY, NOUNLOAD, STATS = 5
GO

--Restore the Tlog backup of encrypted database on secondary with no recovery
RESTORE LOG [GKTestDB] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\GKTestTlogBackup.trn' WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 5
GO

Step 7 – Join the Encrypted Database to AlwaysOn Group on Secondary Replica

On each secondary replica instance, join the database to the availability group.

USE master
GO
ALTER DATABASE GKTestDB
SET HADR AVAILABILITY GROUP = AGroup_SQLRx
GO

Blog_20170525_2

Conclusion

After joining the encrypted database to the AlwaysOn group, you should now be able to fail over the group to the secondary replica and be able to access the database without any issues. It is common knowledge that encrypting a database causes degraded performance, so the next article will demonstrate the performance effects of having an encrypted database in your AlwaysOn Group.
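If you want to verify this yourself, a planned manual failover can be initiated from the secondary replica (assuming synchronous-commit mode and a synchronized database) with:

-- Run on the secondary replica that should become the new primary
ALTER AVAILABILITY GROUP AGroup_SQLRx FAILOVER;
GO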

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

AlwaysOn 2016 – Direct Seeding

–By Ginger Keys

SQL Server 2016 provides some awesome new enhancements and features, especially in Enterprise Edition.  One of the new features relating to AlwaysOn is Direct Seeding.

Good stuff – No more time consuming backup and restore

  • Direct Seeding allows you to initialize the databases to your secondary replica(s) without having to configure a network share or perform backup and restore operations.  It will send the database directly over to the secondary replica without you needing to do anything.  Woo hoo!!

Gotchas – Some with regards to Large Databases

  • Direct seeding does not use compression by default, so adding a very large database to your AlwaysOn group could take a while and can use a lot of your network bandwidth. Compression can be enabled by turning on Trace Flag 9567 (see the sketch after this list). However, know that adding a large database with compression enabled will cause high CPU utilization during the seeding process. For more info click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/tune-compression-for-availability-group .
  • Also during direct seeding the transaction log will not truncate, so in a highly transactional database your transaction log could potentially grow very large during this time.
  • For large databases being initialized to secondary replicas using direct seeding, be mindful of your network capacity (as the entire database will be sent across the network to the secondary(s)), workload amount which could bloat your transaction log, CPU pressure if you enable compression, and database size which will affect the seeding time.
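The sketch below shows one way to enable the compression trace flag globally; it affects the whole instance, so test it first and consider turning it off once seeding is finished:

-- Enable compression for availability group direct seeding (instance-wide)
DBCC TRACEON (9567, -1);
GO
-- Turn it back off after the database has finished seeding
DBCC TRACEOFF (9567, -1);
GO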

You may want to run direct seeding after hours to prevent your Network or System Admin from hunting you down over the network hit that will happen.

If you are using SQL Server Management Studio version 17, Direct Seeding of your databases is now available in the GUI! You can download this version of SSMS here https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms .

Let’s Do This!

I will assume you already have your environment ready (i.e. failover cluster created and configured, SQL installed and configured, AlwaysOn enabled, replicas and logins configured, etc.). So from the New Availability Group Wizard, you will specify your AG name:

Blog_20170511_1

Just a side note… in earlier versions of AlwaysOn (SQL 2012 & 2014), failover would occur if there was a problem with the health of the instance. If one of your databases was in trouble however, the AlwaysOn group would not fail over as long as the instance was okay. So if you had a database that was offline, suspect or corrupt, no failover would occur. In AlwaysOn 2016, failover will occur not only if the instance is in trouble, but also if one or more of your databases is in trouble. This is not a default setting however. You have to specify this when creating your AlwaysOn group, by selecting the ‘Database Level Health Detection’ checkbox in the setup wizard.

Next, select your database(s) to add to the AlwaysOn group:

Blog_20170511_2

Then add the replicas to be included in your AG and what type of failover (automatic or manual), data replication (synchronous or asynchronous), and whether they will allow for read only access:

Blog_20170511_3

Next click on the Endpoints tab and make sure URL and port numbers are correct:

Blog_20170511_4

Click on the Backup Preferences tab and specify how you want your backups to occur:

Blog_20170511_5

Click on the Listener tab and specify the Listener (DNS) name, port number and IP address:

Blog_20170511_6

And THIS is where you can specify Automatic (Direct) Seeding!

Blog_20170511_7

Click Next through the next screens and THAT’S IT!

Blog_20170511_8

You’re done, simple as that.

Blog_20170511_9

If you do not have SSMS v17 you can still initialize your databases to secondary replicas using direct seeding, but you will need to do this through TSQL from the primary replica:

CREATE AVAILABILITY GROUP [<*AG_name*>]
FOR DATABASE db1
REPLICA ON N'<*primary_server*>'
WITH (ENDPOINT_URL = N'TCP://<*primary_server*>.<*fully_qualified_domain_name*>:5022',
    FAILOVER_MODE = AUTOMATIC,
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    BACKUP_PRIORITY = 50,
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),
    SEEDING_MODE = AUTOMATIC),
N'<*secondary_server*>' WITH (ENDPOINT_URL = N'TCP://<*secondary_server*>.<*fully_qualified_domain_name*>:5022',
    FAILOVER_MODE = AUTOMATIC,
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    BACKUP_PRIORITY = 50,
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),
    SEEDING_MODE = AUTOMATIC);
GO

For more information click here https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/automatically-initialize-always-on-availability-group .
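One additional detail: when SEEDING_MODE = AUTOMATIC, each secondary replica must also allow the availability group to create the seeded database. Per Microsoft's documentation this is granted on the secondary instance (the AG name is a placeholder):

-- Run on each secondary replica instance
ALTER AVAILABILITY GROUP [<*AG_name*>] GRANT CREATE ANY DATABASE;
GO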

Conclusion

SQL Server 2016 AlwaysOn gives you the ability to join databases to your AlwaysOn group and initialize them on your secondaries without having to backup and restore, or establish a network share. This ultimately saves much time and effort, but can take a while if your database is large. And using compression during the seeding process can save some time, but your CPU will take a performance hit. Also keep in mind that your transaction log will grow during this time as it will not truncate during the seeding process. Of course, be sure to test this first before doing it in your production environment!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Is This Database Being Used?

— By Ginger Keys

During the migration of a client’s SQL environment to a new hosting provider it was questioned whether all of the databases needed to be migrated. Nobody seemed to know whether several of the databases were used or what they were for. One person suggested we just not move them, and see if anybody screamed or if it broke anything. Well, I guess that’s one way to find out if a database is being used. But there should be more definitive ways to do this, right?

There really is no straightforward way to determine this, but there are several things we can look at to get a better idea.

  • Look at any current connections
  • Capture login/connections over period of time
  • Observe index usage
  • Look at transaction count
  • Find dependencies with Jobs or other DBs

Current connections

sp_who2 is a well-known stored procedure that returns information about current users, sessions and processes. You can run exec sp_who2 to return all sessions belonging to the instance, or you can filter to return only the active sessions:

--Find active connections to the instance
USE master;
GO
EXEC sp_who2 'active';
GO

Information about processes on your instance can also be derived from sys.sysprocesses. This system view will be deprecated in future releases of SQL. The info in this view returns data about both client and system processes running on the instance. The following statements will filter data relating to a specific database:

--Find number of active connections to database
USE master;
GO
SELECT DB_NAME(dbid) AS DBName,
    spid,
    COUNT(dbid) AS NumberOfConnections,
    loginame,
    login_time,
    last_batch,
    status
FROM sys.sysprocesses
WHERE DB_NAME(dbid) = 'AdventureWorks2016' --insert your database name here
GROUP BY dbid, spid, loginame, login_time, last_batch, status
ORDER BY DB_NAME(dbid)

 

--Active Connections to Database with connecting IP address
SELECT
    s.host_name,
    s.program_name,
    s.login_name,
    c.client_net_address,
    db_name(s.database_id) AS DBName,
    s.login_time,
    s.status,
    GETDATE() AS date_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS c ON s.session_id = c.session_id
INNER JOIN sys.sysprocesses AS p ON s.session_id = p.spid
WHERE DB_NAME(p.dbid) = 'AdventureWorks2016' --insert your database name here

 

Connections over time

It might be more beneficial to watch the connections to a database over a period of time instead of looking at the current connections only. In order to gather this data over time, you could create a trace through SQL Server Profiler. Simply run the trace, export it as a definition file, and import it into a table to query the results.   You can also create a SQL Server Audit to record successful logins, but these are at the server level, not the database level.   For a good explanation on how to perform both of these tasks click here https://mssqltalks.wordpress.com/2013/02/25/how-to-audit-login-to-my-sql-server-both-failed-and-successful/ . Depending upon how long you run the Profiler or Audit, these files can take up a considerable amount of space so be sure to monitor your disk space carefully.
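As a lighter-weight alternative to a Profiler trace, an Extended Events session can capture successful logins along with the database being connected to. This is only a sketch; the session name and file path are placeholders you would adjust:

CREATE EVENT SESSION [CaptureLogins] ON SERVER
ADD EVENT sqlserver.login
    (ACTION (sqlserver.client_hostname, sqlserver.username, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'C:\SQLRX\XE\CaptureLogins.xel');
GO
ALTER EVENT SESSION [CaptureLogins] ON SERVER STATE = START;
GO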

 

Index Usage

Another way to see if your database is in use is to look and see if its indexes are being used. Information on index usage is held in the sys.dm_db_index_usage_stats DMV since the last server reboot, and can be queried using this statement, which can be tailored to select the data you need.

SELECT
    DB_NAME(database_id) DatabaseName,
    last_user_seek,
    last_user_scan,
    last_user_lookup,
    last_user_update
FROM sys.dm_db_index_usage_stats
WHERE db_name(database_id) = 'AdventureWorks2016' --insert your database name here

 

Blog_20170330_1

The statement above will show you the date and time the indexes for your database were last used. For the reverse of that, if you want to see which databases have not had their indexes used since the last server reboot, run this statement:

SELECT name AS DatabaseName
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
EXCEPT
SELECT DISTINCT
    DB_NAME(database_id) AS DatabaseName
FROM sys.dm_db_index_usage_stats
ORDER BY 1

Blog_20170330_2

Transaction Count for the Database

Checking to see if the number of transactions is increasing for a database is another way to see if it is being used. You can query sys.dm_os_performance_counters for Transactions/sec and run this several times to see if the count is increasing or not. Or you can open Perfmon and watch it there as well.

--Transaction count increasing?
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%' --insert your database name here
GO

Blog_20170330_3

--I waited a few minutes and executed the select statement again

Blog_20170330_4

Database Dependencies

Occasionally other databases or linked servers will connect to your database.   To see objects in your database that are referenced by other databases, run this statement:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,
    referenced_database_name,
    referenced_schema_name,
    referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
AND is_ambiguous = 0;

 

To find objects referencing linked servers, use this:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,
    referenced_server_name,
    referenced_database_name,
    referenced_schema_name,
    referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_server_name IS NOT NULL
AND is_ambiguous = 0;

Database dependencies can be a very in-depth topic, and the statements above are only meant for high-level information about connections to your database. For more information about this topic click here https://www.mssqltips.com/sqlservertip/2999/different-ways-to-find-sql-server-object-dependencies/ .

Conclusion

There are countless ways to determine if your database is being used. Other methods include checking whether any execution plans in the cache reference the database, seeing if reads/writes are occurring, looking at lock_acquired events, and many other methods I haven’t thought of. The methods outlined above provide a useful starting point to investigate who/what is connecting and whether your database is active or not. Yes, you can just take the database offline or detach it, or even delete it and see if anyone fusses. But it’s much more prudent to take a look around at some simple things to make that determination.

Feel free to comment with other methods you have used….we would love to hear from you. For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

Login Failures Bloating SQL Server Log

–By Ginger Keys

After migrating a client’s SQL instances to VMs from physical servers, I noticed the following week that the SQL Server Log files had tons of failed login attempts from what looked like an application. These attempts were happening several times per second, and had totally bloated the SQL Server logs.

Blog_20170316_1

This is not desirable because 1) obviously something should not be attempting and failing to connect that much, and 2) it makes it super cumbersome to monitor more critical messages in the SQL logs when so many login errors are bloating the log. Too many failed login attempts could mean that the application has not been set with the correct permissions to access the databases needed, or it could be an indication of a hacking attempt.

I ran the script below to determine how many failed login attempts had occurred in the last 7 days, and the count came back with over 3 million failed attempts!

-- Create a temporary table
CREATE TABLE #FailedLogins
(Logdate DATETIME,
 Process VARCHAR(20),
 Text VARCHAR(4000))

-- Insert data from SQL Server Log into temp table
INSERT INTO #FailedLogins
EXEC xp_readerrorlog 1, 1, N'Login Failed', N'LoginName', '20170309', '20170316'
-- For more info about xp_readerrorlog click here https://sqlandme.com/2012/01/25/sql-server-reading-errorlog-with-xp_readerrorlog/

--Count number of occurrences
SELECT COUNT(Text) AS LoginFailures, Text
FROM #FailedLogins
GROUP BY Text

--Drop temp table
DROP TABLE #FailedLogins

 

As it turns out it was a legacy system moved from the old environment that needed to be adjusted to connect to the new environment. I didn’t exactly get to put on my superwoman cape and solve the problem, but at least I was able to identify the issue and direct it to the appropriate IT staff to correct.

As part of your everyday duties as a DBA it is prudent to monitor your SQL Server Logs to catch hacking attempts or malicious activity with your instance, and also to look for errors that could be occurring. By default, SQL contains 7 error log files (one current, and six archives). Error logs contain critical information about your SQL server. A new log is created each time the SQL service starts, and the oldest archived log gets deleted. For security reasons it is best practice to keep a large number of error logs, depending on the space you have available. Someone with access to your SQL server could execute sp_cycle_errorlog and regenerate the logs at a rate that could ‘delete’ their malicious activity or cover up the fact that they have gained access to your system. If you have highly sensitive data or stringent auditing requirements you can increase the number of retained error logs up to 99, just be sure to monitor your disk space.
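The number of retained error logs can be changed in SSMS (right click SQL Server Logs > Configure), or with a script along these lines (a sketch; the value of 30 is just an example):

-- Increase the number of retained SQL Server error logs (6 archives is the default, 99 is the max)
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 30;
GO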

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Moving Non-clustered Indexes To New Data File

-by Ginger Keys

It is not uncommon for database files to grow to the point you need more disk space. But what if it’s not possible or not in the budget to add more space? There are several creative options you have to move or shrink files, or delete unneeded items from the disk. One solution we recently implemented with a client was to move the indexes of a large database to another drive.

Create File/Filegroup

Only non-clustered indexes can be moved, and they must reside in a different filegroup from the primary filegroup. If you don’t already have a separate file and filegroup created, you must do this first. Make sure you have created a folder path on your system to the drive that will be holding your index files.

blog_20170209_1
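The exact script is not shown here, but a filegroup and index data file can be added with something like the following (the file path, size, and logical file name are assumptions for illustration; the filegroup name matches the 'INDEX' filegroup referenced later):

USE master
GO
ALTER DATABASE AdventureWorks2016 ADD FILEGROUP [INDEX];
GO
ALTER DATABASE AdventureWorks2016
ADD FILE ( NAME = N'AdventureWorks2016_Indexes',
    FILENAME = N'E:\SQLIndexes\AdventureWorks2016_Indexes.ndf',
    SIZE = 1024MB, FILEGROWTH = 256MB )
TO FILEGROUP [INDEX];
GO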

Get Space Requirement for Indexes

Determine the non-clustered indexes you have in your database and their size by running this script to ensure you have enough space allocated on the target drive to house the index files.

USE AdventureWorks2016
GO
SELECT
    OBJECT_NAME(i.OBJECT_ID) AS TableName,
    i.name AS IndexName,
    8 * SUM(a.used_pages) AS 'Indexsize(KB)'
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
JOIN sys.tables AS t ON i.object_id = t.object_id
WHERE i.type_desc = 'NONCLUSTERED'
GROUP BY i.OBJECT_ID, i.index_id, i.name
ORDER BY OBJECT_NAME(i.OBJECT_ID), i.index_id

blog_20170209_2

Create and Execute Stored Procedure

The following script will create a stored procedure called MoveIndexToFileGroup that will move an index from one file group to another, and even supports the INCLUDE columns. Run this script against the master database to create the stored procedure: https://gist.github.com/dalenewman/6377911

In order to get a list of tables for your database, run the following script.

USE AdventureWorks2016 --use your database name
GO
SELECT * FROM INFORMATION_SCHEMA.TABLES

blog_20170209_3

After the above stored procedure is created and you have gotten a list of the schemas/tables/indexes you want to move, execute the following:

EXEC MoveIndexToFileGroup
    @DBName = 'AdventureWorks2016',   --your database name
    @SchemaName = 'HumanResources',   --schema name that defaults to dbo
    @ObjectNameList = 'Shift,
                       Department,
                       EmployeeDepartmentHistory,
                       EmployeePayHistory,
                       Employee,
                       JobCandidate', --a table or list of tables
    @IndexName = NULL,                --an index or NULL for all of them
    @FileGroupName = 'INDEX';         --the target file group

 

Your indexes will appear in the target destination you created.

blog_20170209_4
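Alternatively, if you only have one or two indexes to relocate and prefer not to use the stored procedure, an individual nonclustered index can be rebuilt directly onto the new filegroup. The index and column names below are illustrative:

-- Rebuild an existing nonclustered index onto the [INDEX] filegroup
CREATE NONCLUSTERED INDEX [IX_Employee_OrganizationLevel_OrganizationNode]
    ON [HumanResources].[Employee] ([OrganizationLevel], [OrganizationNode])
    WITH (DROP_EXISTING = ON)
    ON [INDEX];
GO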

Conclusion

There are many useful reasons for moving your non-clustered indexes to a separate drive…in this case ours was simply to create more space for the drive holding the data files. Whatever your reason, the steps above should give you a simplistic way to achieve this.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

 

 

 

 

SQL Server Security – Controlling Access

— by Ginger Keys

Installing SQL following best practices is an important first step in securing your SQL server. The next step in securing your SQL server is to decide who can access the SQL instance, what databases and other objects they need access to, and what kind of permission to give them to those objects. So what is involved in securing access to your SQL server? We will go over the basic components, and discuss best practices to help keep your server secure.

blog_20161020_1

There are three main areas of security within your SQL server instance.

  • Principals – these are the persons or entities needing to access your SQL server.
  • Securables – these are the databases, objects, and other resources in your SQL server instance.
  • Permissions – these determine which resources a principal can access and what actions they are allowed to perform.

There are also some other important areas to consider in controlling access to your SQL server:

  • Schemas and Ownership
  • SQL server service accounts
  • Administrative accounts

 

Principals

Principals are persons, applications, and entities that can be authenticated to access the SQL server resources. In SQL server, principals include logins, users, and roles.

LOGINS – Principals require a login in order to connect to the SQL server. Logins are at the server level only, and provide for the entry point, or the initial connection to the server. Logins are validated against the master database and the connection is to the instance, not the databases or other components.   SQL supports two methods for authenticating logins: Windows and SQL authentication. Mixed mode authentication allows for the use of either Windows or SQL logins.

Windows logins can include individual users, groups, domain accounts, or local accounts. A group login grants access to all Windows accounts that are members of the group. SQL relies on Windows to authenticate these accounts.

SQL logins are specific to the instance, and are stored in SQL, with the username and hash of the password in the master database. SQL uses internal authentication to validate login attempts. This type of login may be necessary for users not associated with a Windows domain.
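For reference, a quick sketch of creating each type of login (the group name, login name, and password below are placeholders):

-- Windows-authenticated login for an Active Directory group
CREATE LOGIN [MYDOMAIN\SQLReaders] FROM WINDOWS;
GO
-- SQL-authenticated login (only when Windows authentication is not an option)
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd',
    CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
GO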

Best Practices for Logins:

  • Use Windows authentication whenever possible
  • Create Windows groups in Active Directory with the appropriate access/permissions to the SQL server, then add individual users to the appropriate groups
  • Do not use SQL authentication for logins if possible – when SQL Server logins are used, SQL Server login names and passwords are passed across the network, which makes them less secure
  • Audit SQL Server failed login attempts to monitor for possible hacking activity

USERS – Once logged in to a SQL instance, a user account is necessary in order to connect to a database and its components. Users are created within a database, and mapped back to the server login.

User accounts that are not mapped to a login account are known as orphaned users. An exception is contained database users; they do not need to map to a login.

Guest user – this account is a built-in account in SQL server, and is disabled in new databases by default. The guest user allows a login to access databases without being mapped to a specific database user, and it inherits the ‘public’ database role with its permissions.

dbo user – this account has implied permissions to perform all activities in the database. Any principals belonging to the sysadmin fixed server role are mapped to the dbo user account automatically. The dbo user is in every database and is a member of the db_owner database role.

Best Practices for Users:

  • Disable the guest user in every user database (not system DBs)
  • If you must use the guest account, grant it minimum permissions
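A short sketch tying the above together, mapping a database user to a login and disabling the guest user (the database, login, and user names are placeholders):

USE AdventureWorks2016;
GO
-- Map a database user to an existing server login
CREATE USER SQLReaders FOR LOGIN [MYDOMAIN\SQLReaders];
GO
-- Disable the guest user in a user database
REVOKE CONNECT FROM guest;
GO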

ROLES – Roles exists at both the server and database level. Permissions can be assigned to a role which makes it more efficient to manage principals’ access to securables. Permissions are given to roles, then logins and users can be added to (or removed from) roles.

Server roles – server-level roles can be fixed or user defined. Members of fixed server roles have permissions to server-level securables, and those permissions cannot be changed or revoked. Logins can be assigned to fixed server roles without having a user account in a database.

For complete list and description of server roles click here https://msdn.microsoft.com/en-us/library/ms188659.aspx

Database roles – These roles have a pre-defined set of permissions. Logins must be mapped to database user accounts in order to work with database objects. Database users can then be added to database roles, inheriting any permission sets associated with those roles.

For complete list and description of database roles click here https://msdn.microsoft.com/library/ms189121.aspx

Public role – The public role is contained in every database including system databases. It cannot be dropped and you can’t add or remove users from it. Permissions granted to the public role are inherited by all users because they belong to the public role by default.
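For example, adding a login to a server role and a database user to a database role might look like this (the principal names are placeholders):

-- Add a login to a fixed server role
ALTER SERVER ROLE dbcreator ADD MEMBER [MYDOMAIN\SQLAdmins];
GO
-- Add a database user to a fixed database role
USE AdventureWorks2016;
GO
ALTER ROLE db_datareader ADD MEMBER SQLReaders;
GO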

Best Practices for Roles:

  • Be very cautious when adding users to fixed server roles:
    • Do not add principals to the sysadmin role unless they are highly trusted.
    • Membership in the securityadmin role allows principals to control server permissions, and should be treated with the same caution as the sysadmin role.
    • Be very cautious in adding members to the bulkadmin role. This role can insert data from any local file into a table, which could put your data at risk. For more information click here https://msdn.microsoft.com/library/ms188659.aspx

 

 

  • Grant public role only the permissions you want all users to have, and revoke unnecessary privileges.

Securables

SQL Server securables are the resources that can be accessed by a principal. SQL server resources operate within a hierarchy, with the server at the top of the hierarchy. Below the server instance lies the databases, and below the databases are a collection of objects (schemas, tables, views, etc.). Access to securables is controlled by granting or denying permissions to principals, or by adding or removing principals (logins and users) to roles which have access. All securables have an owner. The owner of a securable has absolute control over the securable and cannot be denied any privilege. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).

blog_20161215_1

Permissions:

Permissions determine the type of access granted on a securable to a specific principal and what tasks a principal can perform on securables. The TSQL permission statements are

GRANT

REVOKE

DENY

Granting permission to a principal removes any DENY or REVOKE on that securable. A permission given at a higher scope in the hierarchy that contains that securable will take precedence over the lower level permission statement. Database level permissions only apply to that specific database.

Owners of securables can grant permissions on the objects they own, and any principal with CONTROL permissions can grant permissions on that securable as well.
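A brief sketch of the three statements (the object and principal names are placeholders):

USE AdventureWorks2016;
GO
GRANT SELECT ON OBJECT::HumanResources.Employee TO SQLReaders;
DENY UPDATE ON OBJECT::HumanResources.Employee TO SQLReaders;
-- REVOKE removes a previously granted or denied permission
REVOKE UPDATE ON OBJECT::HumanResources.Employee FROM SQLReaders;
GO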

Best Practices for Permissions:

  • Always use the principal of least privilege, which limits users by granting the minimum permissions necessary to accomplish a task. For more information click here https://msdn.microsoft.com/library/ms191291.aspx
  • Document any elevated user permission and request managerial approval.
  • When developing an application use a least-privileged user account (LUA), which may be more difficult – but will eliminate the temptation to grant elevated privileges as a quick fix when an end user cannot perform certain tasks that the administrative developer could. Granting elevated permissions to users in order to acquire lost functionality after the development phase can leave your application vulnerable to attack.
  • Grant permissions to roles rather than to users. It is easier to add and remove users from roles than to manage separate permission sets for individual users.
  • Don’t grant individual users access to SQL Server, instead create groups for specific servers with specific permissions, and add individual users to the appropriate groups.

Ownership & Schemas 

Owners of objects have irrevocable permissions to administer those objects. The owner of a securable has absolute control over the securable and cannot be denied any privilege. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).

A schema is a named container for similar database objects, and can be owned by any principal. You can assign security rules for a schema which will then be inherited by all objects in that schema.

Objects can be moved between schemas, and schema ownership can be transferred between principals.
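For instance, a schema can be created with an explicit owner and permissions granted at the schema level (the names are placeholders):

USE AdventureWorks2016;
GO
CREATE SCHEMA Reporting AUTHORIZATION dbo;
GO
GRANT SELECT ON SCHEMA::Reporting TO SQLReaders;
GO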

Best Practices for Schemas:

  • In your TSQL, refer to the objects using a fully qualified name. At the very least, use the schema name followed by the object name, separated by a period (.). Example: databasename.schemaname.tablename.
  • Use synonyms to obfuscate the schema ownership of objects.
  • Use schemas to protect the base database object from being altered or removed from the database by users without sysadmin permissions.
  • Use schemas to combine related, logical entities to reduce administration overhead.
  • For more info – https://technet.microsoft.com/en-us/library/dd283095(v=sql.100).aspx

SQL Server Service Accounts 

Depending on what features and components you decide to install, SQL Server has several services that are used to manage authentication of SQL Server with Windows. These services need user accounts associated with them to start and run the services. The service accounts should have exactly the privileges that it needs to do its job and no more. For a list of all available SQL services click here https://msdn.microsoft.com/en-us/library/ms143504.aspx .

Best Practices for Service Accounts:

  • Assign service accounts low-level permissions using the principle of least privilege.
  • Don’t use built-in Windows system accounts (e.g., Network Service, Local System) for SQL Server service accounts – the built-in system accounts inherit certain elevated rights in Active Directory that aren’t required by SQL Server.
  • Use a Managed Service Account (MSA) if resources external to the SQL Server will be used.
  • When MSA are not possible, use a specific low-privilege non-user domain account.
  • Use separate accounts for different SQL Server services. The service accounts should not only be different from one another, they should not be used by any other service on the same server.
  • SQL Server Agent account is the only service account that requires membership in the sysadmin server role.
  • Do not grant additional permissions to the SQL Server service account or the service groups – SQL Server installation program will grant them the local rights that they need during the installation.
  • Always use SQL Server Configuration Manager to change service accounts or passwords.
  • Don’t add service accounts to the Local Administrator group
  • Use a service account for applications, and restrict the applications access only to data required – not every object in the database

Administrative Accounts

sa Account – SQL Server creates this server level login during installation by default. The sa account has full administrative rights in your SQL Server instance, and is well known and often targeted by hackers. The sa login maps to the sysadmin fixed server role, which has irrevocable administrative privileges on the whole server.

BUILTIN\Administrators – All members of the Windows BUILTIN\Administrators group are members of the sysadmin role by default. This built in group is no longer included in SQL Server 2008 and later.

Best Practices for Administrative Accounts

  • Disable or rename the sa account (see the sketch after this list) – make sure another account exists with admin privileges before doing this.
  • If you must use the sa account, assign a strong password and enforce password policies
  • Do not use the sa account for day-to-day administration or logging on to the server remotely
  • Do not allow applications to use the sa account to connect to SQL
  • Remove the BUILTIN/Administrators group from the SQL Server Logins if it’s present in your instance.
  • Restrict users with system admin privileges, using server roles instead.
  • Restrict members of local administrator group. Limit the administrative access only to those who really require it.
  • Assign every administrator a named login, and don’t allow shared logins so that you can identify the users behind each and every database change.
  • Document all users with administrative rights, and any elevated user permissions
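
As a rough illustration of reviewing sysadmin membership and then disabling and renaming the sa account (the replacement name [NotSA] is hypothetical; verify that another working sysadmin login exists before disabling sa):

-- Review current sysadmin membership
SELECT sp.name, sp.type_desc, sp.is_disabled
FROM sys.server_role_members rm
JOIN sys.server_principals r ON rm.role_principal_id = r.principal_id
JOIN sys.server_principals sp ON rm.member_principal_id = sp.principal_id
WHERE r.name = 'sysadmin';

-- Disable and rename the sa login
ALTER LOGIN sa DISABLE;
ALTER LOGIN sa WITH NAME = [NotSA];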

Conclusion

Controlling access to your SQL Server should be well thought out ahead of time. Every organization needs to have an information security policy in place, and the role of the DBA is to enforce these policies and protect the SQL Server instances along with the data in them. The suggestions provided above give a basic guideline for controlling who has access to your SQL Server. Utilizing Windows groups and database roles, applying the principle of least privilege, and implementing the other suggestions above can be instrumental in keeping your SQL environments safe.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL Server Memory In Task Manager – Tell Your Manager to Not Worry!

-by Ginger Keys

blog_20160929_1

We are going to re-promote this subject because it is still so relevant. All too often we run into people who are in a panic after looking at Task Manager, trying to diagnose SQL Server stress. They become very worried when they see how much memory SQL Server is using, not realizing that this is exactly what we expect.

Task Manager is just that, a task manager. If you are looking at it to get accurate values for memory (RAM) usage or CPU time, you will get not-so-accurate information. False alarms. Information that doesn’t exactly mean what it implies.

Blog_20150507_2

What I usually see when I look at Task Manager is that CPU is low, disk usage is low, and memory usage is high – which is what we expect on a SQL Server machine. SQL Server will use as much memory as it can get to improve performance, caching tons of data in memory in case it needs it so it can avoid going to disk. It will release memory back to the OS if the OS needs it. SQL Server is a memory hog, and will hold onto all the memory we allocate to it, whether it is actually using it or not. And this is what we want it to do!

SQL will take all the memory that is allocated to it when it is needed, store data and execution plans in the buffer, and when it is through using it, it doesn’t release it…it caches that data into memory until either 1) SQL needs it again, or 2) Windows OS requests the memory.

This isn’t an indication of performance stress…it’s just showing that SQL has reserved all the memory that we told it to use (through setting min and max memory on the SQL instance properties [1]).  SQL doesn’t let go of memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).  As long as the OS doesn’t need to use it for anything, SQL will hold on to it for future use.

The default memory setting when you install SQL Server allows it to use all of the memory on the machine. When Windows asks SQL Server to give memory back, SQL Server releases it in huge amounts, and then proceeds to build back up to the maximum again. This is not optimal for stability or performance, and it is highly recommended as a best practice to set a limit with the Max Server Memory setting. That way the Windows OS is always guaranteed a certain amount of memory.

Click here for more info on setting Max Memory: https://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

Another good reference for setting Max Memory for SQL is here: http://www.sqlservercentral.com/blogs/glennberry/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008/
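
As a minimal sketch of capping SQL Server memory with sp_configure (the 8192 MB value is only an illustration; choose a limit appropriate for your server's total RAM and workload):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;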

Blog_20150507_3

And when looking at Resource Monitor, it is a mistake to use the Available and Free metrics as the key indicators for determining overall system health [2]. The Available MB of memory is made up of both Standby and Free memory… the Standby portion is what Windows has put in its cache (but can evict if needed), and Free means it is not being used at all. The combination is what Windows calls Available. So there is actually data in the cache that Windows reports as available. Some of the categories in Task Manager and Resource Monitor can be very confusing and even misleading, which is why we shouldn’t use these metrics alone to decide whether we are in danger of running out of memory.

Instead of only relying on task manager or resource monitor, we really need to look at other metrics to find out if there is a memory related performance issue on the machine.

Some of the metrics to pay attention to are:

  • Page Life Expectancy
  • Available MBytes
  • Buffer Cache Hit Ratio
  • Target & Total Server Memory
  • Memory Grants Pending
  • Pages/sec (Hard Page Faults)
  • Batch Requests/sec & Compilations/sec

Some of these metrics can be queried from TSQL, or can be observed with Perfmon (Windows Performance Monitor – located in Administrative Tools).

Page Life Expectancy (PLE)

Found in Perfmon at SQL Server: Buffer Manager – Page Life Expectancy, this counter measures the average amount of time unlocked data pages are staying in the buffer pool.   During peak production hours this number may dip into lower numbers, but we generally want this number to remain above 300 seconds (so data is staying in the cache for 5 minutes or longer). The longer data stays in the buffer, the less likely SQL will have to go to disk for I/O operations.

There is an interesting occurrence with page life expectancy… when SQL Server really does run out of memory, PLE drops very quickly, but it grows back very slowly. You’re probably not still having memory issues during the recovery stage, even though it may look like it’s struggling. If PLE stays down, however, then you have a problem. So be careful, because PLE can be misleading when it takes a drop… there’s a difference between it going down and staying down (you have a memory problem), and going down once and crawling back up (which is normal). If it stays below 300 seconds consistently, you may need to add more RAM.

You can monitor this through Perfmon, or run the following query:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Page life expectancy'

Available MBytes

Found in Perfmon as Memory: Available MBytes, this counter detects how much memory is available for use, and whether Windows is experiencing memory pressure. We generally start investigating if this number consistently goes below 500MB, which would indicate low memory and may cause Windows to start its own memory management operations. If Available MBytes is consistently low, you may need to add more RAM.

This counter is not exposed in sys.dm_os_performance_counters, so it has to be observed through Perfmon. A rough equivalent can, however, be pulled from the sys.dm_os_sys_memory DMV, as sketched below.
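
A minimal sketch of checking total and available physical memory from TSQL:

SELECT total_physical_memory_kb / 1024 AS TotalMB,
       available_physical_memory_kb / 1024 AS AvailableMB,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;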

Buffer Cache Hit Ratio

Found in Perfmon as SQL Server: Buffer Manager: Buffer Cache Hit Ratio. This counter averages (since the last restart of your SQL instance) how often SQL Server goes to the buffer pool to get data, and actually finds that data in memory, instead of having to go to disk. We want to see this ratio high in OLTP servers – around 90-95%. The higher the ratio, the less often SQL has to go to disk for I/O operations, which translates into better performance for your end users. If this number is consistently below the 90% mark, you may need to add more RAM to your server to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Buffer cache hit ratio'

Target & Total Server Memory

Found in Perfmon at SQL Server: Memory Manager – Total Server Memory, and SQL Server: Memory Manager – Target Server Memory. The Total Server Memory is the current amount of memory that SQL Server is using.  The Total Server memory will be quite a bit lower than the Target memory during the initial buffer pool ramp up… it is trying to populate the cache and get pages loaded into memory.  Performance might be a little slower during this time since more disk I/O is required, but this is normal.  After it ramps up, and normal operations resume, Total Server Memory should be very close to Target Server Memory. (The ratio should be close to 1). If Total Server Memory does not increase much, but stays significantly less than Target, this could indicate a couple of things…

1) You may have allocated much more memory than SQL Server can use – SQL Server can cache entire databases in memory, and if your databases are smaller than the amount of memory allocated, the data won’t take up all the space given to SQL Server. In this case Total Server Memory (the memory actually being used by SQL Server) will never reach Target Server Memory (the amount allocated to SQL Server). Or,

2) SQL cannot grow the buffer pool because of memory pressure from outside of SQL. If this is the case, you need to either increase the Max Server Memory, or add more RAM to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] IN ('Total Server Memory (KB)','Target Server Memory (KB)')

Memory Grants Pending

Found in Perfmon as SQL Server: Memory Manager – Memory Grants Pending, this counter measures the total number of SQL processes waiting for a workspace memory grant. The general recommendation is for this value to be 1 or less. Anything above 1 indicates there are SQL processes waiting for memory in order to operate.

Memory grants pending could be due to bad queries, missing indexes, or sorts and hashes. To investigate this, you can query the sys.dm_exec_query_memory_grants view, which shows which queries (if any) are waiting on a memory grant in order to execute [5].
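
A minimal sketch of listing any requests still waiting on a memory grant (grant_time stays NULL while a request waits):

SELECT mg.session_id,
       mg.requested_memory_kb,
       mg.granted_memory_kb,
       mg.wait_time_ms,
       st.[text] AS query_text
FROM sys.dm_exec_query_memory_grants mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) st
WHERE mg.grant_time IS NULL;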

If the pending memory grants are not due to the conditions mentioned above, then more memory should be allocated to SQL Server by adjusting Max Server Memory. Adding more RAM should be the last resort in this case.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] = 'Memory Grants Pending'

Pages/sec (Hard Page Faults)

Found in Perfmon as Memory: Pages/sec, this measures the number of pages read from or written to disk. Pages/sec is the sum of Pages Input/sec and Pages Output/sec and reports the number of requested pages that were not immediately available in RAM and had to be read from disk (resulting in hard page faults). A hard page fault occurs when Windows has to use the swap file on disk [3]. It happens when an address in memory that is part of a program is no longer in main memory, but has instead been swapped out to the paging file, making the system go looking for it on the hard disk. Since disk storage is much slower than RAM, performance can be greatly degraded.

We want to see the Pages/sec number below 50, and closer to 0. If you see a high value of Pages/sec on a regular basis, you might have performance degradation, but not necessarily. A high Pages/sec value can happen while doing database backups or restores, importing or exporting data, or reading a file mapped in memory [4].

Because a hard fault doesn’t necessarily indicate a critical error condition (it depends on what’s normal for your environment), it’s a good idea to measure a baseline value and monitor this number against it. If the values are consistently higher than your baseline value, you should consider adding more RAM.
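
Pages/sec itself is an OS-level counter that is only visible in Perfmon, but as a rough, SQL-process-only approximation you can watch the cumulative page_fault_count in sys.dm_os_process_memory (note that it includes soft faults, so treat it only as a trend indicator):

SELECT physical_memory_in_use_kb / 1024 AS PhysicalMemoryInUseMB,
       page_fault_count,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;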

Batch Requests & Compilations

SQL Server: SQL Statistics – Batch Requests/sec. This is the number of incoming queries.

SQL Server: SQL Statistics – SQL Compilations/sec. This is the number of new execution plans SQL Server had to build.

If Compilations/sec is 25% or higher relative to Batch Requests/sec, SQL Server is putting execution plans in the cache but never actually reusing them. Your valuable memory is being used up to cache query execution plans that will never be used again – instead of caching data. This is bad. We don’t want this to happen. And a high Compilations/sec value (like over 100) indicates there are a lot of ad-hoc (one-hit-wonder) queries being run. You can enable the “optimize for ad hoc workloads” setting if this is the case; with it enabled, SQL Server caches only a small plan stub the first time a query runs, and stores the full execution plan only if the query is executed a second time.
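
A minimal sketch of enabling the setting with sp_configure (it is an advanced option, so show advanced options must be on first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;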

To query these metrics with TSQL:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'Batch Requests/sec'

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'SQL Compilations/sec'

To get the ratio of Compilations to Batch Requests:

SELECT ROUND(100.0 *
    (SELECT [cntr_value]
     FROM sys.dm_os_performance_counters
     WHERE [object_name] LIKE '%SQL Statistics%'
     AND [counter_name] = 'SQL Compilations/sec')
    /
    (SELECT [cntr_value]
     FROM sys.dm_os_performance_counters
     WHERE [object_name] LIKE '%SQL Statistics%'
     AND [counter_name] = 'Batch Requests/sec')
  , 2) AS [Ratio]

 

Looking at the counters in Perfmon (Report View) below, we can see that this particular server is in a healthy state, with over 11GB of available memory (Available MBytes), no page faults (so I/O is operating from the buffer, not disk), a Buffer cache hit ratio of 100%, a PLE of 21,515 seconds (data is staying in the buffer for almost 6 hours), no pending memory grants (so no SQL process is waiting for memory), ample Total Server Memory, and a low compilation ratio (11%) compared to all the queries being run.

Blog_20150507_4

These measurements should be observed and understood, instead of using Task Manager to make critical decisions about resources on your SQL Server.

If we decide to make changes based only on what we see in Task Manager and reduce the amount of memory SQL Server can use, then whenever it needs more memory to execute processes it will have to read from and write to disk, which will degrade performance. Reducing the amount of memory SQL Server can use will slow performance and will most likely increase complaints from end users. The lower you set your Maximum Server Memory, the lower your performance will go.

No matter how much memory you put in a system, SQL Server will use all it can get until it is caching entire databases in memory and then some. This isn’t an accident. SQL Server caches data in memory. On purpose. That means it is healthy, and it is enhancing performance for your end users.

Now send a link to this blog post to your favorite person on staff who needs to be educated about Task Manager and SQL Server. You will be glad you did!

References

[1] “Glenn Berry’s SQL Server Performance” https://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

[2] “Use Resource Monitor for Memory Monitoring” by Scott Lowe.   http://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-for-memory-monitoring/

[3] “Swap File”. TechTarget. http://searchwindowsserver.techtarget.com/definition/swap-file-swap-space-or-pagefile