Category Archives: Beginner

Get a List of Tables That Are Compressed

Find out whether any of the tables in your database are compressed using the query below. Compressed tables save space and can make I/O-bound queries run faster, at the cost of some extra CPU.

-- Compressed Tables
select distinct t.name AS CompressedTables
from sys.partitions p
inner join sys.tables t
on p.object_id = t.object_id
where p.data_compression > 0

If you don’t have any tables compressed but think you might want to compress some, you can check your data compression savings by running the stored procedure sp_estimate_data_compression_savings for your targeted table.

USE WideWorldImporters;
GO
EXEC sp_estimate_data_compression_savings 'Sales', 'Invoices', NULL, NULL, 'ROW';
GO

https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql

List Partitioned Tables And Other Info About Them

— By Lori Brown @SQLSupahStah

Here is a good way to find out whether any of the tables in your database are partitioned, using the query below. It is important to know which tables are partitioned so that you can then examine how they are partitioned.

-- Partitioned Tables
SELECT DISTINCT t.name AS PartitionedTables
FROM sys.partitions p
INNER JOIN sys.tables t
ON p.object_id = t.object_id
WHERE p.partition_number <> 1

If you have partitioned tables, here is a good way to find out how they are partitioned.

-- Get partition info
SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]
, p.partition_number AS [p#]
, fg.name AS [filegroup]
, p.rows
, au.total_pages AS pages
, CASE boundary_value_on_right
WHEN 1 THEN 'less than'
ELSE 'less than or equal to' END AS comparison
, rv.value
, CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +
SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),
CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +
SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +
SUBSTRING (au.first_page, 1, 1))) AS first_page
FROM sys.partitions p
INNER JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
INNER JOIN sys.objects o
ON p.object_id = o.object_id
INNER JOIN sys.system_internals_allocation_units au
ON p.partition_id = au.container_id
INNER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions f
ON f.function_id = ps.function_id
INNER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
LEFT OUTER JOIN sys.partition_range_values rv
ON f.function_id = rv.function_id
AND p.partition_number = rv.boundary_id
WHERE i.index_id < 2
AND o.object_id = OBJECT_ID('dbo.SomeTableName');

List Tables That May Be Over Indexed

— By Lori Brown

While not having enough indexes can be bad for query performance, having too many indexes can be just as bad. Use the query below to get a list of tables in your database that have more than 10 indexes.

-- Tables with large number of indexes
select t.name as TablesWithLargeNumInx, count(i.name) as CountIndexes
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.index_id > 0
group by t.name
having count(i.name) > 10

If you suspect that you have too many indexes on your tables, you can also check the sys.dm_db_index_usage_stats dynamic management view to see whether the indexes on your heavily indexed tables are being used well. (Hint: seeks are good; scans not so much.)

select u.user_seeks, u.user_lookups, u.user_scans
from sys.dm_db_index_usage_stats u
inner join sys.indexes i
on u.object_id = i.object_id and u.index_id = i.index_id
WHERE u.object_id = object_id('dbo.SomeTableName')

https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql

Is This Database Being Used?

— By Ginger Keys

During the migration of a client’s SQL environment to a new hosting provider it was questioned whether all of the databases needed to be migrated. Nobody seemed to know whether several of the databases were used or what they were for. One person suggested we just not move them, and see if anybody screamed or if it broke anything. Well, I guess that’s one way to find out if a database is being used. But there should be more definitive ways to do this, right?

There really is no straightforward way to determine this, but there are several things we can look at to get a better idea.

  • Look at any current connections
  • Capture login/connections over period of time
  • Observe index usage
  • Look at transaction count
  • Find dependencies with Jobs or other DBs

Current connections

sp_who2 is a well-known stored procedure that returns information about current users, sessions and processes. You can run exec sp_who2 to return all sessions belonging to the instance, or you can filter to return only the active sessions:

-- Find active connections to the instance
USE master;
GO
EXEC sp_who2 'active';
GO

Information about processes on your instance can also be derived from sys.sysprocesses, although this system view will be deprecated in future releases of SQL Server. The view returns data about both client and system processes running on the instance. The following statements filter the data for a specific database:

-- Find number of active connections to database
USE master;
GO
SELECT DB_NAME(dbid) AS DBName,
spid,
COUNT(dbid) AS NumberOfConnections,
loginame,
login_time,
last_batch,
status
FROM sys.sysprocesses
WHERE DB_NAME(dbid) = 'AdventureWorks2016' -- insert your database name here
GROUP BY dbid, spid, loginame, login_time, last_batch, status
ORDER BY DB_NAME(dbid)

 

-- Active connections to database with connecting IP address
SELECT
s.host_name,
s.program_name,
s.login_name,
c.client_net_address,
db_name(s.database_id) AS DBName,
s.login_time,
s.status,
GETDATE() AS date_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS c ON s.session_id = c.session_id
INNER JOIN sys.sysprocesses AS p ON s.session_id = p.spid
WHERE DB_NAME(p.dbid) = 'AdventureWorks2016' -- insert your database name here

 

Connections over time

It might be more beneficial to watch the connections to a database over a period of time instead of looking at the current connections only. In order to gather this data over time, you could create a trace through SQL Server Profiler. Simply run the trace, export it as a definition file, and import it into a table to query the results.   You can also create a SQL Server Audit to record successful logins, but these are at the server level, not the database level.   For a good explanation on how to perform both of these tasks click here https://mssqltalks.wordpress.com/2013/02/25/how-to-audit-login-to-my-sql-server-both-failed-and-successful/ . Depending upon how long you run the Profiler or Audit, these files can take up a considerable amount of space so be sure to monitor your disk space carefully.
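If a Profiler trace feels too heavy, an Extended Events session is a lower-overhead way to collect login activity over a period of time. This is only a sketch; the session name, file path, and collected actions below are examples and should be adjusted for your environment.

```sql
-- Capture successful logins to an event file over time (names/paths are examples)
CREATE EVENT SESSION [CaptureLogins] ON SERVER
ADD EVENT sqlserver.login
    (ACTION (sqlserver.client_hostname, sqlserver.username, sqlserver.database_name))
ADD TARGET package0.event_file
    (SET filename = N'D:\XEvents\CaptureLogins.xel', max_file_size = 50);
GO
ALTER EVENT SESSION [CaptureLogins] ON SERVER STATE = START;
GO
-- When finished collecting, stop the session and read the file:
-- ALTER EVENT SESSION [CaptureLogins] ON SERVER STATE = STOP;
-- SELECT * FROM sys.fn_xe_file_target_read_file(N'D:\XEvents\CaptureLogins*.xel', NULL, NULL, NULL);
```

Just like the Profiler and Audit output, the .xel files consume disk space, so monitor space carefully here too.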

 

Index Usage

Another way to see if your database is in use is to look at whether its indexes are being used. Index usage information is held in the sys.dm_db_index_usage_stats DMV, which accumulates data since the last server restart, and can be queried using this statement, which can be tailored to select the data you need.

SELECT
DB_NAME(database_id) DatabaseName,
last_user_seek,
last_user_scan,
last_user_lookup,
last_user_update
FROM sys.dm_db_index_usage_stats
WHERE db_name(database_id) = 'AdventureWorks2016' -- insert your database name here

 


The statement above will show you the date and time the indexes for your database were last used. For the reverse of that, if you want to see which databases have not had their indexes used since the last server reboot, run this statement:

SELECT name AS DatabaseName
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
EXCEPT
SELECT DISTINCT
DB_NAME(database_id) AS DatabaseName
FROM sys.dm_db_index_usage_stats
ORDER BY 1


Transaction Count for the Database

Checking to see if the number of transactions is increasing for a database is another way to see if it is being used. You can query sys.dm_os_performance_counters for Transactions/sec and run the query several times to see whether the count is increasing. Or you can open Perfmon and watch it there as well.

-- Transaction count increasing?
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%' -- insert your database name here
GO

-- I waited a few minutes and executed the select statement again

Database Dependencies

Occasionally other databases or linked servers will connect to your database.   To see objects in your database that are referenced by other databases, run this statement:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,
referenced_database_name,
referenced_schema_name,
referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
AND is_ambiguous = 0;

 

To find objects referencing linked servers, use this:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,
referenced_server_name,
referenced_database_name,
referenced_schema_name,
referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_server_name IS NOT NULL
AND is_ambiguous = 0;

Database dependencies can be a very in-depth topic, and the statements above are only meant for high-level information about connections to your database. For more information about this topic click here https://www.mssqltips.com/sqlservertip/2999/different-ways-to-find-sql-server-object-dependencies/ .

Conclusion

There are countless ways to determine if your database is being used. Other methods include checking whether execution plans referencing the database are in the cache, seeing if reads/writes are happening, looking at lock_acquired events, and many other methods I haven’t thought of. The methods outlined above provide a useful starting point to investigate who/what is connecting and whether your database is active. Yes, you can just take the database offline, detach it, or even delete it and see if anyone fusses. But it’s much more prudent to take a look around at some simple things to make that determination.

Feel free to comment with other methods you have used….we would love to hear from you. For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

Login Failures Bloating SQL Server Log

–By Ginger Keys

After migrating a client’s SQL instances to VMs from physical servers, I noticed the following week that the SQL Server Log files had tons of failed login attempts from what looked like an application. These attempts were happening several times per second, and had totally bloated the SQL Server logs.


This is not desirable because 1) obviously something should not be attempting and failing to connect that much, and 2) it makes it super cumbersome to monitor more critical messages in the SQL logs when so many login errors are bloating the log. Too many failed login attempts could mean that the application has not been set with the correct permissions to access the databases needed, or it could be an indication of a hacking attempt.

I ran the script below to determine how many failed login attempts had occurred in the last 7 days, and the count came back with over 3 million failed attempts!

-- Create a temporary table
CREATE TABLE #FailedLogins
(Logdate DATETIME,
Process VARCHAR(20),
Text VARCHAR(4000))

-- Insert data from SQL Server Log into temp table
INSERT INTO #FailedLogins
EXEC xp_readerrorlog 1, 1, N'Login Failed', N'LoginName', '20170309', '20170316'

 

-- For more info about xp_readerrorlog click here https://sqlandme.com/2012/01/25/sql-server-reading-errorlog-with-xp_readerrorlog/

-- Count number of occurrences
SELECT COUNT(Text) AS LoginFailures, Text
FROM #FailedLogins
GROUP BY Text

-- Drop temp table
DROP TABLE #FailedLogins

 

As it turns out it was a legacy system moved from the old environment that needed to be adjusted to connect to the new environment. I didn’t exactly get to put on my superwoman cape and solve the problem, but at least I was able to identify the issue and direct it to the appropriate IT staff to correct.

As part of your everyday duties as a DBA it is prudent to monitor your SQL Server Logs to catch hacking attempts or malicious activity with your instance, and also to look for errors that could be occurring. By default, SQL contains 7 error log files (one current, and six archives). Error logs contain critical information about your SQL server. A new log is created each time the SQL service starts, and the oldest archived log gets deleted. For security reasons it is best practice to keep a large number of error logs, depending on the space you have available. Someone with access to your SQL server could execute sp_cycle_errorlog and regenerate the logs at a rate that could ‘delete’ their malicious activity or cover up the fact that they have gained access to your system. If you have highly sensitive data or stringent auditing requirements you can increase the number of retained error logs up to 99, just be sure to monitor your disk space.
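If you decide to retain more error logs, the setting can be changed with T-SQL as well as through the SSMS GUI. A sketch is below; it writes the same NumErrorLogs registry value that the SSMS “Configure SQL Server Error Logs” dialog sets, and the value of 30 is just an example.

```sql
-- Retain 30 error logs instead of the default 6 archives (value is an example)
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 30;
GO
```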

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL Server Backup and Restore Primer

— by Lori Brown @SQLSupahStah

Back it up!!

…. BEEP! BEEP! BEEP! … Oh wait! Not that kind of backup…

Since I spent a lot of time trying to creatively move a database from one server to another last night, I thought that a review of ways to backup and restore would be a good topic for today.

I had someone who needed me to move a database from one server to another in an effort to consolidate servers. The database backup file was a significant size and could not fit onto any of the local drives of the new server. And to make things more fun, the SQL Server service account did not have permission to view the old server at all. I think the inability to see the old server had something to do with the fact that it had been in a different domain at one point, and an issue that was never fixed kept the service account from getting access. Nonetheless, the move had to be completed, so restoring had to be performed from a location that the SQL Server service account could access. I could not use a UNC path for restoring the database.

The new server was really tight on space and the admins were very reluctant to add more space if that space was not needed permanently. I had several drives with differing amounts of free space so if I could create enough small backup files, I could put those files in several different places on the new server and could then restore since the SQL Server service account did have access to all local drives.

You can backup databases to one file or multiple files. Here are the basics of backing up to a single file. Since I am giving the T-SQL way to do things, I am also including a way to make the file name unique with a timestamp.

-- Backup a database to a single backup file
DECLARE @BackupFile NVARCHAR(200)
DECLARE @Timestamp VARCHAR(50)
-- create a timestamp for a unique name
SET @Timestamp = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 20), '-', ''), ':', ''), ' ', '')
-- create backup file string
SET @BackupFile = 'D:\SQLBackups\WideWorldImporters_Full_' + @Timestamp + '.bak'
-- backup the database
BACKUP DATABASE WideWorldImporters
TO DISK = @BackupFile
WITH NOFORMAT, INIT, NAME = N'WideWorldImporters-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;
GO

Backing up to multiple files is good to use for large databases and will break up a single large backup file to several smaller files. This is what I had to do last night when a large single database file would not fit on any local drive.

-- Backup a database to multiple backup files
DECLARE @BackupFile1 NVARCHAR(200)
DECLARE @BackupFile2 NVARCHAR(200)
DECLARE @BackupFile3 NVARCHAR(200)
DECLARE @BackupFile4 NVARCHAR(200)
DECLARE @Timestamp VARCHAR(50)
-- create a timestamp for a unique name
SET @Timestamp = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 20), '-', ''), ':', ''), ' ', '')
-- create backup file strings
SET @BackupFile1 = 'D:\SQLBackups\WideWorldImporters_Full1_' + @Timestamp + '.bak'
SET @BackupFile2 = 'D:\SQLBackups\WideWorldImporters_Full2_' + @Timestamp + '.bak'
SET @BackupFile3 = 'D:\SQLBackups\WideWorldImporters_Full3_' + @Timestamp + '.bak'
SET @BackupFile4 = 'D:\SQLBackups\WideWorldImporters_Full4_' + @Timestamp + '.bak'
-- backup the database
BACKUP DATABASE WideWorldImporters
TO DISK = @BackupFile1,
DISK = @BackupFile2,
DISK = @BackupFile3,
DISK = @BackupFile4
WITH NOFORMAT, INIT, NAME = N'WideWorldImporters-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;
GO

Here is the MSDN link for all things backup…. https://msdn.microsoft.com/en-us/library/ms186865.aspx

Some backup options that are good to know about are BLOCKSIZE, BUFFERCOUNT and MAXTRANSFERSIZE. These can help speed up backups immensely but should be thoroughly tested. I recently had a 1.8 TB database that without setting those backup options, caused the database backup to take around 4 hours. With them set, the backup time was reduced to 50 minutes.
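As a sketch only, those options go in the WITH clause of the backup statement. The values shown below are illustrative starting points, not recommendations; test them against your own hardware before relying on them.

```sql
-- Backup with tuning options (values are examples; test before relying on them)
BACKUP DATABASE WideWorldImporters
TO DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH COMPRESSION, STATS = 10,
     BLOCKSIZE = 65536,         -- physical block size in bytes
     BUFFERCOUNT = 50,          -- number of I/O buffers used by the backup
     MAXTRANSFERSIZE = 4194304; -- largest unit of transfer (4 MB)
GO
```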

Restore it!!

… Reunited and it feels so goooood… Yeah, no one wants to hear me sing. Somehow I never win at karaoke.

Practicing restoring databases proves that the backup files are usable and will polish up your skills so that you are ready to restore in the event of a crisis. If you have to restore database files to different drives than are available on the production server, or if you have to restore often, as in the case of refreshing a database, create a script; it will be easier to use than restoring through the GUI and navigating to the new file locations. Don’t forget that databases from a newer version of SQL Server cannot be restored to an older version.
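One quick way to check that a backup file is readable without actually restoring it is RESTORE VERIFYONLY; RESTORE FILELISTONLY is also handy for inspecting what is inside a file before a restore. A sketch, using the same example file name as the backups above:

```sql
-- Verify the backup is complete and readable (does not restore anything)
RESTORE VERIFYONLY
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak';
GO
-- List the database files contained in the backup (useful before a MOVE restore)
RESTORE FILELISTONLY
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak';
GO
```

Note that VERIFYONLY is a media-level check; a periodic test restore is still the only real proof that a backup is usable.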

Here are some basic examples of restore statements.

-- Restore a database from a single backup file
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH RECOVERY
GO

 

-- Restore a database from a single database backup file
-- and a transaction log backup file
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH NORECOVERY -- NORECOVERY allows more backup files to be restored
GO
RESTORE LOG WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_tlog.trn'
WITH RECOVERY
GO

In my case, since I had backed up the database to multiple files, I moved those files to several different local drives and was able to restore from there. I don’t like it when space is that tight but some shops run a very tight ship and it is just something I have to live with.

-- Restore a database from multiple backup files
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full_1.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_2.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_3.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_4.bak'
WITH REPLACE, -- REPLACE will overwrite existing files
RECOVERY
GO

 

-- Restore a database from multiple backup files
-- and move the database and log files to new drives
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full_1.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_2.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_3.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_4.bak'
WITH MOVE N'WideWorldImporters' TO N'H:\MSSQL\WideWorldImporters.mdf',
MOVE N'WideWorldImporters_log' TO N'L:\MSSQL\WideWorldImporters_log.ldf',
RECOVERY
GO

Restore WITH RECOVERY to make the database ready to use. Restore WITH NORECOVERY if you want to restore more backups. Use REPLACE only when you are certain that you want to overwrite the existing files, so be careful with it.

More on RESTORE can be found at https://msdn.microsoft.com/en-us/library/ms186858.aspx

Some good things to know about in RESTORE are the ability to restore pages, verifying backups, NORECOVERY and STANDBY, and STOPAT. Some of those options are specific to restoring transaction logs but are good to know anyway.
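For example, STOPAT lets you roll a database forward to a specific point in time during a log restore. A sketch, reusing the file names from the earlier examples; the timestamp is an example only:

```sql
-- Point-in-time restore: restore the full backup, then stop the log restore
-- at a specific moment (timestamp is an example)
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH NORECOVERY;
GO
RESTORE LOG WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_tlog.trn'
WITH STOPAT = N'2017-03-02 14:30:00', RECOVERY;
GO
```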

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Shrinking Database Files – Let’s Not And Say We Did

— by Lori Brown  @SQLSupahStah

I recently ran into someone who swears that shrinking database files on a production server is a good thing and proudly said they did it on all systems they work on.  OMG!!  Let me post a quote directly from MSDN regarding whether or not shrinking database files causes fragmentation (https://msdn.microsoft.com/en-us/library/ms189493.aspx):

[Screenshot: MSDN documentation quote on shrink operations and fragmentation]

Please notice I am talking about production SQL Servers and not development, test, QA, staging or anything that is not production.  Of course if you shrink database files in any environment you are still going to cause more fragmentation but since the use of those systems is often vastly different than a production system, you as the DBA have to judge the ramifications on those non-production systems.


Have I ever had to shrink database files on the production systems I monitor…Uhhh, yes.  But, I do so very infrequently and with reason.  I even have one system where the guy who is in charge of it refuses to allow me to disable the nightly job that shrinks some of the databases.  I can only do so much but he can’t say I have not duly warned him.

 

There are tons of good articles and even rants of why you should not shrink your database files.  Here are some links in case you think I am the only one telling you to NOT shrink your database files.

 

https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/  – Gotta love some Ozar!

 

http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/ – From THE man!

 

http://www.sqlservercentral.com/blogs/simple-sql-server/2016/01/19/shrinking-database-data-files/ – Good post from a new blogger with the appropriate warnings.

 

https://www.am2.co/2016/04/shrink-database-4-easy-steps/ – Lots of warnings throughout the post.

 

https://www.mssqltips.com/sqlservertip/3008/solving-sql-server-database-physical-file-fragmentation/ – Shows a tool from sysinternals that allows you to see the physical fragmentation and states that once db files are physically fragmented the only way to undo the damage is to take your databases offline and defrag them at the disk level.

 

If you go to the links, you may be thinking that some of those posts are old.  However, look through the comments…there are new ones up to the present and they are still getting the same answers. 

If after all that you are still not convinced that shrinking database files causes physical fragmentation, well… I tried.  But go ahead and keep at it.  If you are lucky you may not ever have an issue or experience things getting slow because of fragmentation.  Roll those dice!


 

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

 

SQL Server Security – Controlling Access

— by Ginger Keys

Installing SQL following best practices is an important first step in securing your SQL server. The next step in securing your SQL server is to decide who can access the SQL instance, what databases and other objects they need access to, and what kind of permission to give them to those objects. So what is involved in securing access to your SQL server? We will go over the basic components, and discuss best practices to help keep your server secure.


There are three main areas of security within your SQL server instance.

  • Principals – these are the persons or entities needing to access your SQL server.
  • Securables – these are the databases, objects, and other resources in your SQL server instance.
  • Permissions – these determine which resources a principal can access and what actions they are allowed to perform.

There are also some other important areas to consider in controlling access to your SQL server:

  • Schemas and Ownership

 

  • SQL server service accounts
  • Administrative accounts

 

Principals

Principals are persons, applications, and entities that can be authenticated to access the SQL server resources. In SQL server, principals include logins, users, and roles.

LOGINS – Principals require a login in order to connect to the SQL server. Logins are at the server level only, and provide for the entry point, or the initial connection to the server. Logins are validated against the master database and the connection is to the instance, not the databases or other components.   SQL supports two methods for authenticating logins: Windows and SQL authentication. Mixed mode authentication allows for the use of either Windows or SQL logins.

Windows logins can include individual users, groups, domains accounts, or local accounts. Group accounts are granted login access to all logins that are members of the group. SQL relies on Windows to authenticate these accounts.

SQL logins are specific to the instance, and are stored in SQL, with the username and hash of the password in the master database. SQL uses internal authentication to validate login attempts. This type of login may be necessary for users not associated with a Windows domain.
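For illustration, creating the two kinds of logins looks like this. The domain, group, login name, and password below are all made up; substitute your own.

```sql
-- Windows-authenticated login for an Active Directory group (name is an example)
CREATE LOGIN [MYDOMAIN\SQLUsers] FROM WINDOWS;
GO
-- SQL-authenticated login; the username and password hash are stored in master
CREATE LOGIN AppLogin
WITH PASSWORD = N'UseAStrongPasswordHere!1',
     CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
GO
```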

Best Practices for Logins:

  • Use Windows authentication whenever possible
  • create Windows groups in Active Directory set with appropriate access/permissions to the SQL server, then add individual users to the appropriate groups
  • Do not use SQL authentication for logins if possible –when SQL Server logins are used, SQL Server login names and passwords are passed across the network, which makes them less secure
  • Audit SQL Server failed login attempts to monitor for possible hacking activity

USERS – Once logged in to a SQL instance, a user account is necessary in order to connect to a database and its components. Users are created within a database, and mapped back to the server login.

User accounts that are not mapped to a login account are known as orphaned users. An exception is contained database users; they do not need to map to a login.
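A common way to spot orphaned users is to compare database user SIDs against server login SIDs. This is a sketch; run it in the database you are checking, and tailor the exclusion list to your environment.

```sql
-- Database users whose SID has no matching server login (run in the database)
SELECT dp.name AS OrphanedUser, dp.type_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type IN ('S', 'U')  -- SQL users and Windows users
  AND dp.name NOT IN ('guest', 'INFORMATION_SCHEMA', 'sys');
```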

Guest user – this account is a built-in account in SQL server, and is disabled in new databases by default. The guest user allows a login to access databases without being mapped to a specific database user, and it inherits the ‘public’ database role with its permissions.

dbo user – this account has implied permissions to perform all activities in the database. Any principals belonging to the sysadmin fixed server role are mapped to the dbo user account automatically. The dbo user is in every database and is a member of the db_owner database role.

Best Practices for Users:

  • Disable the guest user in every user database (not system DBs)
  • If you must use the guest account, grant it minimum permissions
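Disabling the guest user is simply a matter of revoking its permission to connect. A sketch, to be run in each user database; the database name is an example.

```sql
-- Disable the guest user in a user database (not in the system databases)
USE AdventureWorks2016;  -- example database name
GO
REVOKE CONNECT FROM guest;
GO
```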

ROLES – Roles exists at both the server and database level. Permissions can be assigned to a role which makes it more efficient to manage principals’ access to securables. Permissions are given to roles, then logins and users can be added to (or removed from) roles.

Server roles – server level roles can be fixed or user defined. Members of fixed server roles have permissions to server-level securables, and those permissions cannot be changed or revoked. Logins can be assigned to fixed server roles without having a user account in a database.

For complete list and description of server roles click here https://msdn.microsoft.com/en-us/library/ms188659.aspx

Database roles – These roles have a pre-defined set of permissions. Logins must be mapped to database user accounts in order to work with database objects. Database users can then be added to database roles, inheriting any permission sets associated with those roles.

For complete list and description of database roles click here https://msdn.microsoft.com/library/ms189121.aspx
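Tying the pieces together, a login is mapped to a database user and the user is then added to a role. The names below are examples, and the ALTER ROLE syntax shown is for SQL Server 2012 and later.

```sql
USE AdventureWorks2016;  -- example database and principal names throughout
GO
CREATE USER AppUser FOR LOGIN AppLogin;
GO
ALTER ROLE db_datareader ADD MEMBER AppUser;  -- read access to all user tables
GO
-- Fixed server roles work the same way at the instance level:
-- ALTER SERVER ROLE dbcreator ADD MEMBER AppLogin;
```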

Public role – The public role is contained in every database including system databases. It cannot be dropped and you can’t add or remove users from it. Permissions granted to the public role are inherited by all users because they belong to the public role by default.

Best Practices for Roles:

  • Be very cautious when adding users to fixed server roles:
    • Do not add principals to the sysadmin role unless they are highly trusted.
    • Membership in the securityadmin role allows principals to control server permissions, and should be treated with the same caution as the sysadmin role.
    • Be very cautious in adding members to the bulkadmin role. This role can insert data from any local file into a table, which could put your data at risk. For more information click here https://msdn.microsoft.com/library/ms188659.aspx

 

 

  • Grant public role only the permissions you want all users to have, and revoke unnecessary privileges.

Securables

SQL Server securables are the resources that can be accessed by a principal. SQL Server resources operate within a hierarchy, with the server at the top. Below the server instance lie the databases, and below the databases are collections of objects (schemas, tables, views, etc.). Access to securables is controlled by granting or denying permissions to principals, or by adding or removing principals (logins and users) to roles which have access. All securables have an owner. The owner of a securable has absolute control over the securable and cannot be denied any privilege. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).


Permissions:

Permissions determine the type of access granted on a securable to a specific principal and what tasks a principal can perform on securables. The TSQL permission statements are

GRANT

REVOKE

DENY

Granting permission to a principal removes any DENY or REVOKE on that securable. A permission given at a higher scope in the hierarchy that contains that securable will take precedence over the lower level permission statement. Database level permissions only apply to that specific database.

Owners of securables can grant permissions on the objects they own, and any principal with CONTROL permissions can grant permissions on that securable as well.
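As a sketch with made-up schema, table, and user names, the three statements look like this:

```sql
-- Allow reading everything in the Sales schema (names are examples)
GRANT SELECT ON SCHEMA::Sales TO AppUser;
GO
-- Explicitly block one sensitive table; DENY overrides the schema-level GRANT
DENY SELECT ON OBJECT::Sales.CreditCards TO AppUser;
GO
-- Remove a previously granted (or denied) permission on the table
REVOKE SELECT ON OBJECT::Sales.CreditCards FROM AppUser;
GO
```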

Best Practices for Permissions:

  • Always use the principle of least privilege, which limits users by granting the minimum permissions necessary to accomplish a task. For more information click here https://msdn.microsoft.com/library/ms191291.aspx
  • Document any elevated user permission and request managerial approval.
  • When developing an application use a least-privileged user account (LUA). This may be more difficult, but it eliminates the temptation to grant elevated privileges as a quick fix when an end user cannot perform certain tasks that the administrative developer could. Granting elevated permissions to users in order to restore lost functionality after the development phase can leave your application vulnerable to attack.
  • Grant permissions to roles rather than to users. It is easier to add and remove users from roles than to manage separate permission sets for individual users.
  • Don’t grant individual users access to SQL Server; instead, create groups for specific servers with specific permissions, and add individual users to the appropriate groups.
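For example (the role and Windows group names below are made up for illustration), permissions are granted once to a role and a Windows group is added as a member:

-- create a database role and grant it the needed permission once
CREATE ROLE ReportReaders;
GRANT SELECT ON SCHEMA::Sales TO ReportReaders;
-- add a Windows group (already created as a database user) to the role
ALTER ROLE ReportReaders ADD MEMBER [MYDOMAIN\ReportingUsers];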

Ownership & Schemas 

Owners of objects have irrevocable permissions to administer those objects: you cannot remove privileges from an object owner, and you cannot drop a user from a database if that user owns objects in it.

A schema is a named container for similar database objects, and can be owned by any principal. You can assign security rules for a schema which will then be inherited by all objects in that schema.

Objects can be moved between schemas, and schema ownership can be transferred between principals.
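As a quick sketch (the schema, table, and principal names are hypothetical), creating a schema, moving an object into it, and transferring ownership look like this:

-- create a schema owned by dbo
CREATE SCHEMA Sales AUTHORIZATION dbo;
-- move an existing table from dbo into the Sales schema
ALTER SCHEMA Sales TRANSFER dbo.Invoices;
-- transfer ownership of the schema to another principal
ALTER AUTHORIZATION ON SCHEMA::Sales TO SalesAdmin;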

Best Practices for Schemas:

  • In your TSQL, refer to the objects using a fully qualified name. At the very least, use the schema name followed by the object name, separated by a period (.). Example: databasename.schemaname.tablename.
  • Use synonyms to obfuscate the schema ownership of objects.
  • Use schemas to protect the base database object from being altered or removed from the database by users without sysadmin permissions.
  • Use schemas to combine related, logical entities to reduce administration overhead.
  • For more info – https://technet.microsoft.com/en-us/library/dd283095(v=sql.100).aspx
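To illustrate the synonym suggestion above (the names are made up), a synonym hides the real schema and object name from callers:

-- callers query the synonym without knowing the underlying schema
CREATE SYNONYM dbo.Orders FOR Sales.OrderDetail;
SELECT TOP (10) * FROM dbo.Orders;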

SQL Server Service Accounts 

Depending on what features and components you decide to install, SQL Server has several services that are used to manage authentication of SQL Server with Windows. These services need user accounts associated with them to start and run the services. The service accounts should have exactly the privileges that it needs to do its job and no more. For a list of all available SQL services click here https://msdn.microsoft.com/en-us/library/ms143504.aspx .

Best Practices for Service Accounts:

  • Assign service accounts low-level permissions using the principle of least privilege.
  • Don’t use built-in Windows system accounts (e.g., Network Service, Local System) for SQL Server service accounts – the built-in system accounts inherit certain elevated rights in Active Directory that aren’t required by SQL Server.
  • Use a Managed Service Account (MSA) if resources external to the SQL Server will be used.
  • When an MSA is not possible, use a specific low-privilege, non-user domain account.
  • Use separate accounts for different SQL Server services. The service accounts should not only be different from one another, they should not be used by any other service on the same server.
  • The SQL Server Agent account is the only service account that requires membership in the sysadmin fixed server role.
  • Do not grant additional permissions to the SQL Server service account or the service groups – the SQL Server installation program grants them the local rights they need during installation.
  • Always use SQL Server Configuration Manager to change service accounts or passwords.
  • Don’t add service accounts to the local Administrators group.
  • Use a service account for applications, and restrict the application’s access to only the data required – not every object in the database.

Administrative Accounts

sa Account – SQL Server creates this server level login during installation by default. The sa account has full administrative rights in your SQL Server instance, and is well known and often targeted by hackers. The sa login maps to the sysadmin fixed server role, which has irrevocable administrative privileges on the whole server.

BUILTIN\Administrators – In older versions, all members of the Windows BUILTIN\Administrators group were members of the sysadmin role by default. This built-in group is no longer added as a login by default in SQL Server 2008 and later.

Best Practices for Administrative Accounts

  • Disable or rename the sa account – make sure another account with admin privileges exists before doing this.
  • If you must use the sa account, assign a strong password and enforce password policies.
  • Do not use the sa account for day-to-day administration or for logging on to the server remotely.
  • Do not allow applications to use the sa account to connect to SQL.
  • Remove the BUILTIN\Administrators group from the SQL Server logins if it’s present in your instance.
  • Restrict users with sysadmin privileges, using server roles instead.
  • Restrict membership in the local Administrators group. Limit administrative access to only those who really require it.
  • Assign every administrator a named login, and don’t allow shared logins, so that you can identify the users behind each and every database change.
  • Document all users with administrative rights, and any elevated user permissions.
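As a sketch of the first two bullets (run these only after verifying another sysadmin login exists; the new name is a placeholder):

-- disable the sa login
ALTER LOGIN sa DISABLE;
-- or rename it to make it a less obvious target
ALTER LOGIN sa WITH NAME = [admin_renamed];
-- if sa must stay enabled, enforce password policy and expiration
ALTER LOGIN sa WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;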

Conclusion

Controlling access to your SQL Server should be well thought out ahead of time. Every organization needs to have an information security policy in place, and the role of the DBA is to enforce these policies and protect the SQL Server instances along with the data in them. The suggestions provided above give a basic guideline for controlling who has access to your SQL Server. Utilizing windows groups, database roles, applying the principle of least privilege, and implementing the other suggestions above can be instrumental in keeping your SQL environments safe.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Deadlock Info From system_health Information

— by Lori Brown @SQLSupahStah

Periodically, all DBAs are going to run into code that fails due to a deadlock. Deadlocks happen when two or more sessions (SPIDs) each hold a lock on a resource that the other needs. Something has to give, so SQL Server chooses one session as the deadlock victim and rolls it back. Deadlocks can be avoided by designing your databases well and making queries as quick and lightweight as possible. The faster data is retrieved or locked and unlocked, the better.

But how do you figure out what is causing deadlocks if you suddenly run into them? An easy way is to use the deadlock information that SQL Server already captures in the default system_health Extended Events session.

Here are queries to retrieve this information from SQL 2008 R2 and SQL 2012 and up.

-- get deadlock graphs in SQL 2008
SELECT CAST(event_data.value('(event/data/value)[1]',
            'varchar(max)') AS XML) AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS event_data
       FROM ( -- Cast the target_data to XML
              SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE s.name = 'system_health'
                AND st.target_name = 'ring_buffer'
            ) AS Data -- Split out the Event Nodes
       CROSS APPLY TargetData.nodes('RingBufferTarget/
                      event[@name="xml_deadlock_report"]')
            AS XEventData ( XEvent )
     ) AS tab ( event_data );

-- get deadlock graphs in SQL 2012 and up
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS XEvent
       FROM ( SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE s.name = 'system_health'
                AND st.target_name = 'ring_buffer'
            ) AS Data
       CROSS APPLY TargetData.nodes
            ('RingBufferTarget/event[@name="xml_deadlock_report"]')
            AS XEventData ( XEvent )
     ) AS src;

Queries originally from Jonathan Kehayias (One super smart SQL guy!) https://www.simple-talk.com/author/jonathan-kehayias/


AlwaysOn 2016 – New and Improved

–by Ginger Keys

 

AlwaysOn Availability Groups made its debut in SQL Server 2012 as a new feature that enhanced the existing technologies of database mirroring and failover clustering to provide an option for high availability and disaster recovery. AlwaysOn gives you the ability to scale out several secondary Servers (replicas) for high availability or disaster recovery purposes. The secondaries are usable for read-only operations (like reporting) and maintenance tasks (like backups), and AlwaysOn provides for zero data loss protection and automatic page repair.

Since the AlwaysOn 2012 release, there have been several enhancements that have improved manageability, scalability, and availability. This article will discuss some of these improvements and why they’re important.

Some of the enhancements to the AlwaysOn feature in SQL Server 2016 are only available if your SQL Server is running on the Windows Server 2016 platform. However, even if you are still on Windows Server 2012 R2, many robust improvements are available.

Some of the new features and enhancements for AlwaysOn 2016 are:

  • More failover targets
  • Better log transport performance
  • Load balancing for readable secondaries
  • DTC support
  • Database-level health monitoring
  • Group Managed Service Account (gMSA) support
  • Basic Availability Groups
  • BI enhancements
  • Non-domain replicas
  • Encrypted database support
  • SSIS catalog support

More Failover Targets

In AlwaysOn 2012 and 2014, you could designate a maximum of two replicas for automatic failover. AlwaysOn 2016 allows three. You must have synchronous data replication and automatic failover set between the primary and the secondaries.

Automatic failover is generally used to support high availability, and because of synchronous data flow there is near zero data loss in the event of failover.
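As a hedged sketch (the group and replica names are placeholders, reused from the examples later in this post), an existing replica is made an automatic-failover target by setting synchronous commit first, then automatic failover:

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST2'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST2'
WITH (FAILOVER_MODE = AUTOMATIC);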


Better Log Transport Performance

The increased use of solid-state disks (SSDs) has provided users with high-speed hardware, enabling very fast throughput. This, however, can overwhelm a system trying to write transactions to a secondary server. Because of this, Microsoft has revamped the data synchronization process for AlwaysOn, streamlining the pipeline for better throughput and less stress on the CPU. Bottlenecks are most likely to occur during the Log Capture and Redo steps. Previously these steps each used a single thread to process the log; now they use multiple threads and run in parallel, which greatly improves performance.

The steps of data replication are illustrated below.

Transaction Occurs –> Log Flush –> Log Capture –> Send –> Log Received –> Log Cached –> Log Hardened –> Acknowledgement Sent –> Redo


Load Balancing for Readable Secondaries

One of the great features of AlwaysOn is the ability to use the secondary replicas for read-only operations. Prior to SQL Server 2016, the listener would direct any read-only request to the first available replica, even though you might have several secondary replicas available and might have preferred that read operations go to secondary #3 or #4 instead of #2. In SQL Server 2016 the list of readable secondaries is presented to the listener in a round-robin fashion. By executing TSQL statements similar to the following, the workload will adjust in the event of failover:

-- designate servers with read-only access when in the secondary role
ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST1'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST3'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- provide a read-only routing URL for each replica
ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST1'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQLSRVTST1.domain.com:1433'));

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST2'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQLSRVTST2.domain.com:1433'));

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST3'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://SQLSRVTST3.domain.com:1433'));

-- designate the priority of the read-only routing list for each server when in the primary role
ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('SQLSRVTST3','SQLSRVTST2','SQLSRVTST1')));

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST2'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('SQLSRVTST1','SQLSRVTST3','SQLSRVTST2')));

ALTER AVAILABILITY GROUP AGSQL2016
MODIFY REPLICA ON N'SQLSRVTST3'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('SQLSRVTST2','SQLSRVTST1','SQLSRVTST3')));
GO

In addition to configuring your availability group routing lists, you must also ensure that the client application's connection string specifies ApplicationIntent=ReadOnly when connecting to the AG listener. If this is not set, the connection will automatically be directed to the primary replica. Following is an example of a connection string:

Server=tcp:AGListener,1433;Database=AdventureWorks;Integrated Security=SSPI;ApplicationIntent=ReadOnly

Also, it’s best not to mix synchronous and asynchronous replicas in the same load-balancing group.

DTC Support

*Only available with Windows Server 2016 or Windows Server 2012 R2 with update KB3090973

Distributed Transaction Coordinator (DTC) is necessary if your client application needs to perform transactions across multiple instances. DTC is part of the operating system and ensures consistency when the database engine makes multi-server transactions. Applications can use DTC when they connect to the database engine, or a distributed transaction can be started in TSQL with the BEGIN DISTRIBUTED TRANSACTION command.

USE AdventureWorks2012;
GO
BEGIN DISTRIBUTED TRANSACTION;
-- your tsql statement here
DELETE FROM AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
GO
COMMIT TRANSACTION;
GO

Not only can your application perform transactions between multiple SQL Server instances, but also with other DTC-compliant servers such as WebSphere or Oracle.

DTC is not supported in AlwaysOn 2014 and earlier. You cannot add DTC support to an already existing AlwaysOn availability group. For complete information click here https://msdn.microsoft.com/en-us/library/mt748186.aspx .

To implement this in AlwaysOn 2016, Availability Groups must be created with the CREATE AVAILABILITY GROUP command and the WITH DTC_SUPPORT = PER_DB clause.

CREATE AVAILABILITY GROUP AGSQL2016
WITH (DTC_SUPPORT = PER_DB)
FOR DATABASE [Database1], [Database2], [Database3]
REPLICA ON
N'SQLSRVTST1' WITH -- substitute node name
(
ENDPOINT_URL = N'TCP://SQLSRVTST1.<domain>:7022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC
),
N'SQLSRVTST2' WITH -- substitute node name
(
ENDPOINT_URL = N'TCP://SQLSRVTST2.<domain>:7022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = AUTOMATIC
);
GO

The clauses in the TSQL script above are only a partial list of available options, just for demonstration purposes. For more information on creating availability groups click here https://msdn.microsoft.com/en-us/library/ff878399.aspx .

Database-Level Health Monitoring

In earlier versions of AlwaysOn (SQL 2012 & 2014), failover would occur if there was a problem with the health of the instance. If one of your databases was in trouble however, the AlwaysOn group would not fail over as long as the instance was okay. So if you had a database that was offline, suspect or corrupt, no failover would occur. In AlwaysOn 2016, failover will occur not only if the instance is in trouble, but also if one or more of your databases is in trouble. This is not a default setting however. You have to specify this when creating your AlwaysOn group, by selecting the Database Level Health Detection checkbox in the setup wizard.


You can also adjust the setting for what triggers a database failover, by setting the property value in the Failover Condition Level properties. You can increase or reduce the values from the default level if necessary. For more information click here https://msdn.microsoft.com/en-us/library/ff878667.aspx.  
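If you prefer TSQL to the wizard, database-level health detection and the failover condition level can also be set on an existing group with ALTER AVAILABILITY GROUP (the group name here is a placeholder; level 3 is the default condition level):

-- enable database-level health detection
ALTER AVAILABILITY GROUP AGSQL2016 SET (DB_FAILOVER = ON);
-- adjust what instance-level conditions trigger failover (1 through 5)
ALTER AVAILABILITY GROUP AGSQL2016 SET (FAILOVER_CONDITION_LEVEL = 3);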

Group Managed Service Account (gMSA) Support

In SQL Server 2012, Microsoft added the gMSA enhancement so that service account passwords can be more easily managed. You can now create a single service account for your SQL Server instances, manage the password in Active Directory, and also delegate permissions to each of your servers. This feature can be useful for AlwaysOn groups because passwords and permissions to access certain resources, like shared files, can be managed for one account instead of each instance individually. Using a gMSA is also more secure than using a regular domain user account to manage services in your AG.

Basic Availability Groups

AlwaysOn Basic Availability Groups (BAGs) are available with SQL Server 2016 Standard Edition. The functionality is generally the same as database mirroring (which has been deprecated). A BAG provides a failover environment for only one database, there can only be two replicas in the group, replication can be synchronous or asynchronous, and there is no read access and no backups on the secondary. To create a Basic Availability Group, use the CREATE AVAILABILITY GROUP TSQL command and specify WITH BASIC. Below is a partial script (again for demonstration purposes) for creating a Basic Availability Group. For more detailed information click here https://msdn.microsoft.com/en-us/library/ff878399.aspx .

 

CREATE AVAILABILITY GROUP BAGSQL2016
WITH (AUTOMATED_BACKUP_PREFERENCE = PRIMARY,
      BASIC,
      DB_FAILOVER = OFF,
      DTC_SUPPORT = NONE)
FOR DATABASE [Database1] -- a Basic Availability Group can contain only one database
REPLICA ON
N'SQLSRVTST1' WITH -- substitute node name
(
ENDPOINT_URL = N'TCP://SQLSRVTST1.<domain>.com:5022',
FAILOVER_MODE = AUTOMATIC,
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
SECONDARY_ROLE (ALLOW_CONNECTIONS = NO)
),
N'SQLSRVTST2' WITH -- substitute node name
(
ENDPOINT_URL = N'TCP://SQLSRVTST2.<domain>.com:5022',
FAILOVER_MODE = AUTOMATIC,
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
SECONDARY_ROLE (ALLOW_CONNECTIONS = NO)
);
GO

BI Enhancements

By using AlwaysOn Availability Groups, a data warehouse workload can be offloaded to a single readable secondary replica — or even across multiple readable secondary replicas — leaving the resources on the primary replica to efficiently support the mission-critical business process. Reporting and data analysis can be very resource intensive on a server, so offloading to a non-production server can enhance overall performance. Another added benefit is that because Microsoft revamped the data synchronization process, there is very low latency on the data feeding the data warehouse, so that near real-time analytics can be a reality.

Domain Replicas No Longer Necessary

*Only available with Windows Server 2016

Most companies operate in the context of a single Active Directory domain, but some organizations are set up with multiple domains and could benefit from spreading an AlwaysOn Group across domains so that multiple servers can host DR replicas. And then there are other organizations that operate without Active Directory domains at all.

With the Windows Server 2016 operating system, WSFC no longer requires that cluster nodes be in the same domain, or in any domain at all (nodes can be in a workgroup). SQL Server 2016 is now able to deploy AlwaysOn Availability Groups in environments with:

  • All nodes in a single domain
  • Nodes in multiple domains with full trust
  • Nodes in multiple domains with no trust
  • Nodes in no domain at all

This gives improved flexibility by removing domain specific constraints for SQL clusters. For more info click here https://blogs.msdn.microsoft.com/clustering/2015/08/17/workgroup-and-multi-domain-clusters-in-windows-server-2016/

Encrypted Database Support

Encrypted databases were allowed in earlier versions of AlwaysOn, however they couldn’t be added using the New Availability Group wizard, and they could not be accessed if there was a failover to a secondary replica. In SQL Server 2016, it is possible to add an encrypted database via the wizard, and there is now access to the database in the event of failover. This is because during the creation of the Availability Group, the wizard executes sp_control_dbmasterkey_password for each replica, and consequently creates the credentials associated with the database master key for each instance. In the event of failover SQL will search for the correct credentials until it is able to decrypt the database master key.

There are some restrictions with adding an encrypted database to an AlwaysOn group. For more information on how to make an encrypted database eligible to add to an availability group, click here https://msdn.microsoft.com/en-us/library/hh510178.aspx .
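Outside the wizard, the same credential setup can be done by hand: sp_control_dbmasterkey_password records the database master key password on a replica so the key can be decrypted after failover (the database name and password below are placeholders):

-- run on each replica that may host the encrypted database
EXEC sp_control_dbmasterkey_password
     @db_name = N'Database1',
     @password = N'<database master key password>',
     @action = N'add';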

SSIS Catalog Support

In SQL Server 2016, you can add your SSIS catalog (SSISDB) and its contents (projects, packages, etc.) to an AlwaysOn Availability Group like any other database, in order to enhance high availability and disaster recovery.

There are some special prerequisites and configurations for adding the SSISDB to an AlwaysOn group. For more information click here https://msdn.microsoft.com/en-us/library/mt163864.aspx

Conclusion

Many new features and enhancements have been made to SQL Server 2016. This article discussed basic information about some of the enhancements to the AlwaysOn feature. The latest version of AlwaysOn Availability Groups 2016 has improved functionality, scalability, and manageability, and continues to be a robust enhancement for high availability and disaster recovery.
