Category Archives: Beginner

Use MSDB to Get Database Backup Size and Total Time For Each

— by Lori Brown

We recently started using a third party software to do our in-house SQL backups so that the backup files are stored in a redundant and safe place. To confirm that the software was indeed compressing backups as it stated it would, we wanted to see what each backup size actually was in SQL so that we could compare that to what the software was telling us.

SQL stores lots of handy backup information in msdb in the backupset and backupmediafamily tables.

https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupset-transact-sql

https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/backupmediafamily-transact-sql

Here is my query. I only want the information from the last 24 hours, so I have filtered the start date by subtracting 1 day from today. I have also provided some commented-out options in case someone needs them.

-- database backup size and how long it took to do backup
SELECT bs.database_name AS DatabaseName
, CAST(bs.backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS BackupSizeGB
, CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB
--, CAST(bs.compressed_backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS CompressedSizeGB
--, CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB
, bs.backup_start_date AS BackupStartDate
, bs.backup_finish_date AS BackupEndDate
, CAST(bs.backup_finish_date - bs.backup_start_date AS TIME) AS AmtTimeToBkup
, bmf.physical_device_name AS BackupDeviceName
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE
--bs.database_name = 'MyDatabase' and   -- uncomment to filter by database name
bs.backup_start_date > DATEADD(dd, -1, GETDATE()) and
bs.type = 'D' -- change to L for transaction logs
ORDER BY bs.database_name, bs.backup_start_date

And, here is the output.

Blog_20170726_1

It turned out that the software was indeed compressing all backups so that was a good thing.
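If you want to check the compression in one pass rather than comparing the size columns by eye, a small variation on the query above (a sketch using the same msdb columns) computes the ratio directly:

-- compression ratio per backup over the last 24 hours (sketch)
SELECT bs.database_name AS DatabaseName
, CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB
, CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB
, CAST(bs.backup_size*1.0/NULLIF(bs.compressed_backup_size, 0) AS DECIMAL(10, 2)) AS CompressionRatio
FROM msdb.dbo.backupset bs
WHERE bs.backup_start_date > DATEADD(dd, -1, GETDATE())
AND bs.type = 'D'
ORDER BY bs.database_name, bs.backup_start_date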

There is a lot more info that can be pulled from msdb regarding backups. Have some fun and experiment with getting information that you need from there. Here are some links to some other backup related topics that we have blogged about already.

https://blog.sqlrx.com/2017/03/02/sql-server-backup-and-restore-primer/

https://blog.sqlrx.com/2013/04/23/backup-database-commands/

https://blog.sqlrx.com/2013/05/28/be-prepared-for-a-crisis/

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Get a List of Tables That Are Compressed

Find out if any of your tables are compressed in your database using the query below. Compressed tables can save space and make queries run faster.

— Compressed Tables
select distinct t.name AS CompressedTables
from sys.partitions p
inner join sys.tables t
on p.object_id = t.object_id
where p.data_compression > 0

If you don’t have any tables compressed but think you might want to compress some, you can check your data compression savings by running the stored procedure sp_estimate_data_compression_savings for your targeted table.

USE WideWorldImporters;
GO
EXEC sp_estimate_data_compression_savings 'Sales', 'Invoices', NULL, NULL, 'ROW';
GO

https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql
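If the estimate looks worthwhile, the table or its indexes can be rebuilt with compression turned on. This is a minimal sketch using the same WideWorldImporters table from the example above; the choice of ROW vs PAGE is yours and should be tested first:

-- enable row compression on the table (sketch)
ALTER TABLE Sales.Invoices REBUILD WITH (DATA_COMPRESSION = ROW);
GO
-- or compress all indexes on the table with page compression
ALTER INDEX ALL ON Sales.Invoices REBUILD WITH (DATA_COMPRESSION = PAGE);
GO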

List Partitioned Tables And Other Info About Them

— By Lori Brown @SQLSupahStah

Here is a good way to find out if any of the tables in your database are partitioned using the query below. It is important to know this so that you can then dig into how those tables are partitioned.

-- Partitioned Tables
SELECT DISTINCT t.name AS PartitionedTables
FROM sys.partitions p
INNER JOIN sys.tables t
ON p.object_id = t.object_id
WHERE p.partition_number <> 1

If you have partitioned tables, here is a good way to find out how each one is partitioned.

-- Get partition info
SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]
, p.partition_number AS [p#]
, fg.name AS [filegroup]
, p.rows
, au.total_pages AS pages
, CASE boundary_value_on_right
WHEN 1 THEN 'less than'
ELSE 'less than or equal to' END AS comparison
, rv.value
, CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +
SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),
CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +
SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +
SUBSTRING (au.first_page, 1, 1))) AS first_page
FROM sys.partitions p
INNER JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
INNER JOIN sys.objects o
ON p.object_id = o.object_id
INNER JOIN sys.system_internals_allocation_units au
ON p.partition_id = au.container_id
INNER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions f
ON f.function_id = ps.function_id
INNER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
LEFT OUTER JOIN sys.partition_range_values rv
ON f.function_id = rv.function_id
AND p.partition_number = rv.boundary_id
WHERE i.index_id < 2
AND o.object_id = OBJECT_ID('dbo.SomeTableName');

List Tables That May Be Over Indexed

— By Lori Brown

While not having enough indexes can be bad for query performance, having too many indexes can also be just as bad. Use the query below to get a list of tables in your database that have more than 10 indexes.

-- Tables with large number of indexes
select t.name as TablesWithLargeNumInx, count(i.name) as CountIndexes
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.index_id > 0
group by t.name
having count(i.name) > 10

If you suspect that you have too many indexes on your tables, you can also check the sys.dm_db_index_usage_stats dynamic management view to know if indexes on your heavily indexed tables are being used well. (Hint: seeks are good and scans are not so much)

select u.user_seeks, u.user_lookups, u.user_scans
from sys.dm_db_index_usage_stats u
inner join sys.indexes i
on u.object_id = i.object_id and u.index_id = i.index_id
WHERE u.object_id = object_id('dbo.SomeTableName')

https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql
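Building on that DMV, a quick way to spot likely dead weight is to look for indexes that are being written to but never read. This is only a minimal sketch, not a definitive "drop these" list (dbo.SomeTableName is the same placeholder as above, and remember the DMV is cleared at every restart):

-- indexes with writes but no reads since the last restart (sketch)
select i.name as IndexName, u.user_updates,
u.user_seeks, u.user_scans, u.user_lookups
from sys.indexes i
inner join sys.dm_db_index_usage_stats u
on u.object_id = i.object_id and u.index_id = i.index_id
and u.database_id = DB_ID()
where i.object_id = object_id('dbo.SomeTableName')
and i.index_id > 0
and u.user_seeks + u.user_scans + u.user_lookups = 0
-- note: indexes with no row in the DMV at all have had no activity of any kind
order by u.user_updates desc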

Is This Database Being Used?

— By Ginger Keys

During the migration of a client’s SQL environment to a new hosting provider it was questioned whether all of the databases needed to be migrated. Nobody seemed to know whether several of the databases were used or what they were for. One person suggested we just not move them, and see if anybody screamed or if it broke anything. Well, I guess that’s one way to find out if a database is being used. But there should be more definitive ways to do this, right?

There really is no straightforward way to determine this, but there are several things we can look at to get a better idea.

  • Look at any current connections
  • Capture login/connections over period of time
  • Observe index usage
  • Look at transaction count
  • Find dependencies with Jobs or other DBs

Current connections

sp_who2 is a well-known stored procedure that returns information about current users, sessions and processes. You can run exec sp_who2 to return all sessions belonging to the instance, or you can filter to return only the active sessions:

--Find active connections to the instance
USE master;
GO
EXEC sp_who2 'active';
GO

Information about processes on your instance can also be derived from sys.sysprocesses. This system view will be deprecated in future releases of SQL. It returns data about both client and system processes running on the instance. The following statements will filter data relating to a specific database:

--Find number of active connections to database
USE master;
GO
SELECT DB_NAME(dbid) AS DBName,
spid,
COUNT(dbid) AS NumberOfConnections,
loginame,
login_time,
last_batch,
status
FROM sys.sysprocesses
WHERE DB_NAME(dbid) = 'AdventureWorks2016' --insert your database name here
GROUP BY dbid, spid, loginame, login_time, last_batch, status
ORDER BY DB_NAME(dbid)

 

--Active Connections to Database with connecting IP address
SELECT
s.host_name,
s.program_name,
s.login_name,
c.client_net_address,
db_name(s.database_id) AS DBName,
s.login_time,
s.status,
GETDATE() AS date_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS c ON s.session_id = c.session_id
INNER JOIN sys.sysprocesses AS p ON s.session_id = p.spid
WHERE DB_NAME(p.dbid) = 'AdventureWorks2016' --insert your database name here

 

Connections over time

It might be more beneficial to watch the connections to a database over a period of time instead of looking at the current connections only. In order to gather this data over time, you could create a trace through SQL Server Profiler. Simply run the trace, export it as a definition file, and import it into a table to query the results.   You can also create a SQL Server Audit to record successful logins, but these are at the server level, not the database level.   For a good explanation on how to perform both of these tasks click here https://mssqltalks.wordpress.com/2013/02/25/how-to-audit-login-to-my-sql-server-both-failed-and-successful/ . Depending upon how long you run the Profiler or Audit, these files can take up a considerable amount of space so be sure to monitor your disk space carefully.
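On SQL 2012 and later, a lightweight Extended Events session is another way to collect this over time without the overhead of Profiler. This is a minimal sketch rather than a tuned solution: the session name and file path are made up for the example, so adjust them, and the disk space caveat above still applies.

--Capture successful logins over time with Extended Events (sketch)
CREATE EVENT SESSION [CaptureLogins] ON SERVER
ADD EVENT sqlserver.login(
ACTION (sqlserver.client_hostname, sqlserver.database_name, sqlserver.username))
ADD TARGET package0.event_file (SET filename = N'D:\XE\CaptureLogins.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION [CaptureLogins] ON SERVER STATE = START;
GO
--Read the collected events later
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'D:\XE\CaptureLogins*.xel', NULL, NULL, NULL);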

 

Index Usage

Another way to see if your database is in use is to look and see if the indexes are being used. Information on index usage since the last server reboot is held in the sys.dm_db_index_usage_stats dynamic management view, and can be queried using this statement, which can be tailored to select the data you need.

SELECT
DB_NAME(database_id) DatabaseName,
last_user_seek,
last_user_scan,
last_user_lookup,
last_user_update
FROM sys.dm_db_index_usage_stats
WHERE db_name(database_id) = 'AdventureWorks2016' --insert your database name here

 

Blog_20170330_1

The statement above will show you the date and time the indexes for your database were last used. For the reverse of that, if you want to see which databases have not had their indexes used since the last server reboot, run this statement:

SELECT name AS DatabaseName
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
EXCEPT
SELECT DISTINCT
DB_NAME(database_id) AS DatabaseName
FROM sys.dm_db_index_usage_stats
ORDER BY 1

Blog_20170330_2

Transaction Count for the Database

Checking to see if the number of transactions is increasing for a database is another way to see if it is being used. You can query sys.dm_os_performance_counters for Transactions/sec and run this several times to see if the count is increasing or not. Or you can open Perfmon and watch it there as well.

--Transaction count increasing?
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%' --insert your database name here
GO

Blog_20170330_3

--I waited a few minutes and executed the select statement again

Blog_20170330_4
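Because Transactions/sec is a cumulative counter (cntr_value only ever goes up), the raw number is not a rate; what matters is the difference between two samples, which is why the query was run twice above. Here is a minimal sketch of taking that difference in one batch, assuming a 10-second wait is acceptable:

--Sample the transaction counter twice and compute the difference (sketch)
DECLARE @Count1 BIGINT, @Count2 BIGINT;

SELECT @Count1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%'; --insert your database name here

WAITFOR DELAY '00:00:10'; -- wait 10 seconds between samples

SELECT @Count2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%'; --insert your database name here

SELECT @Count2 - @Count1 AS TransactionsInLast10Seconds;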

Database Dependencies

Occasionally other databases or linked servers will connect to your database.   To see objects in your database that are referenced by other databases, run this statement:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,
referenced_database_name,
referenced_schema_name,
referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
AND is_ambiguous = 0;

 

For finding objects referencing linked servers, use this:

SELECT OBJECT_NAME (referencing_id) AS referencing_object,
referenced_server_name,
referenced_database_name,
referenced_schema_name,
referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_server_name IS NOT NULL
AND is_ambiguous = 0;

Database dependencies can be a very in-depth topic, and the statements above are only meant for high-level information about connections to your database. For more information about this topic click here https://www.mssqltips.com/sqlservertip/2999/different-ways-to-find-sql-server-object-dependencies/ .

Conclusion

There are countless ways to determine if your database is being used. Other methods include checking whether execution plans in the cache reference the database, seeing if reads/writes are happening, looking at lock_acquired events, and many other methods I haven't thought of. The methods outlined above provide a useful starting point to investigate who/what is connecting and whether your database is active or not. Yes, you can just take the database offline or detach it, or even delete it and see if anyone fusses. But it's much more prudent to take a look around at some simple things to make that determination.

Feel free to comment with other methods you have used….we would love to hear from you. For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

Login Failures Bloating SQL Server Log

–By Ginger Keys

After migrating a client’s SQL instances to VMs from physical servers, I noticed the following week that the SQL Server Log files had tons of failed login attempts from what looked like an application. These attempts were happening several times per second, and had totally bloated the SQL Server logs.

Blog_20170316_1

This is not desirable because 1) obviously something should not be attempting and failing to connect that much, and 2) it makes it super cumbersome to monitor more critical messages in the SQL logs when so many login errors are bloating the log. Too many failed login attempts could mean that the application has not been set with the correct permissions to access the databases needed, or it could be an indication of a hacking attempt.

I ran the script below to determine how many failed login attempts had occurred in the last 7 days, and the count came back with over 3 million failed attempts!

-- Create a temporary table
CREATE TABLE #FailedLogins
(Logdate DATETIME,
Process VARCHAR(20),
Text VARCHAR(4000))

-- Insert data from SQL Server Log into temp table
INSERT INTO #FailedLogins
EXEC xp_readerrorlog 1, 1, N'Login Failed', N'LoginName', '20170309', '20170316'

-- For more info about xp_readerrorlog click here https://sqlandme.com/2012/01/25/sql-server-reading-errorlog-with-xp_readerrorlog/

--Count number of occurrences
SELECT COUNT(Text) AS LoginFailures, Text
FROM #FailedLogins
GROUP BY TEXT

--Drop temp table
DROP TABLE #FailedLogins

 

As it turns out it was a legacy system moved from the old environment that needed to be adjusted to connect to the new environment. I didn’t exactly get to put on my superwoman cape and solve the problem, but at least I was able to identify the issue and direct it to the appropriate IT staff to correct.

As part of your everyday duties as a DBA it is prudent to monitor your SQL Server Logs to catch hacking attempts or malicious activity with your instance, and also to look for errors that could be occurring. By default, SQL contains 7 error log files (one current, and six archives). Error logs contain critical information about your SQL server. A new log is created each time the SQL service starts, and the oldest archived log gets deleted. For security reasons it is best practice to keep a large number of error logs, depending on the space you have available. Someone with access to your SQL server could execute sp_cycle_errorlog and regenerate the logs at a rate that could ‘delete’ their malicious activity or cover up the fact that they have gained access to your system. If you have highly sensitive data or stringent auditing requirements you can increase the number of retained error logs up to 99, just be sure to monitor your disk space.
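For reference, the number of retained error logs can be raised in SSMS (right-click SQL Server Logs and choose Configure) or with a script. This is a minimal sketch using the registry procedure behind that dialog; 30 is only an example value, and xp_instance_regwrite resolves the instance-specific registry path for you:

-- Increase the number of retained SQL Server error logs to 30 (sketch)
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
N'Software\Microsoft\MSSQLServer\MSSQLServer',
N'NumErrorLogs', REG_DWORD, 30;
GO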

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL Server Backup and Restore Primer

— by Lori Brown @SQLSupahStah

Back it up!!

blog_20170302_1  …. BEEP! BEEP! BEEP! … Oh wait! Not that kind of backup…

Since I spent a lot of time trying to creatively move a database from one server to another last night, I thought that a review of ways to backup and restore would be a good topic for today.

I had someone who needed me to move a database from one server to another in an effort to consolidate servers. The database backup file was a significant size and could not fit onto any of the local drives of the new server. And to make things more fun, the SQL Server service account did not have permission to view the old server at all. I think the inability to see the old server had something to do with the old server having been in a different domain at one point, and there was some issue that was never fixed that would have given the service account access. Nonetheless, the move had to be completed. So, restoring would have to be performed from a location that the SQL Server service account could access; I could not use a UNC path for restoring the database.

The new server was really tight on space and the admins were very reluctant to add more space if that space was not needed permanently. I had several drives with differing amounts of free space so if I could create enough small backup files, I could put those files in several different places on the new server and could then restore since the SQL Server service account did have access to all local drives.

You can back up databases to one file or multiple files. Here are the basics of backing up to a single file. Since I am giving the T-SQL way to do things, I am also including a way to make the file name unique with a timestamp.

-- Backup a database to a single backup file
DECLARE @BackupFile NVARCHAR(200)
DECLARE @Timestamp VARCHAR(50)

-- create a timestamp for a unique name
SET @Timestamp = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 20), '-', ''), ':', ''), ' ', '')

-- create backup file string
SET @BackupFile = 'D:\SQLBackups\WideWorldImporters_Full_'+@Timestamp+'.bak'

-- backup the database
BACKUP DATABASE WideWorldImporters
TO DISK = @BackupFile
WITH NOFORMAT, INIT, NAME = N'WideWorldImporters-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;
GO

Backing up to multiple files is good to use for large databases and will break up a single large backup file into several smaller files. This is what I had to do last night when a single large backup file would not fit on any local drive.

-- Backup a database to multiple backup files
DECLARE @BackupFile1 NVARCHAR(200)
DECLARE @BackupFile2 NVARCHAR(200)
DECLARE @BackupFile3 NVARCHAR(200)
DECLARE @BackupFile4 NVARCHAR(200)
DECLARE @Timestamp VARCHAR(50)

-- create a timestamp for a unique name
SET @Timestamp = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 20), '-', ''), ':', ''), ' ', '')

-- create backup file strings
SET @BackupFile1 = 'D:\SQLBackups\WideWorldImporters_Full1_'+@Timestamp+'.bak'
SET @BackupFile2 = 'D:\SQLBackups\WideWorldImporters_Full2_'+@Timestamp+'.bak'
SET @BackupFile3 = 'D:\SQLBackups\WideWorldImporters_Full3_'+@Timestamp+'.bak'
SET @BackupFile4 = 'D:\SQLBackups\WideWorldImporters_Full4_'+@Timestamp+'.bak'

-- backup the database
BACKUP DATABASE WideWorldImporters
TO DISK = @BackupFile1,
DISK = @BackupFile2,
DISK = @BackupFile3,
DISK = @BackupFile4
WITH NOFORMAT, INIT, NAME = N'WideWorldImporters-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;
GO

Here is the MSDN link for all things backup…. https://msdn.microsoft.com/en-us/library/ms186865.aspx

Some backup options that are good to know about are BLOCKSIZE, BUFFERCOUNT and MAXTRANSFERSIZE. These can help speed up backups immensely but should be thoroughly tested. I recently had a 1.8 TB database that without setting those backup options, caused the database backup to take around 4 hours. With them set, the backup time was reduced to 50 minutes.
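For reference, here is what the single-file backup above might look like with those options added. The values below are only illustrative starting points, not the settings from that 1.8 TB system, so test them against your own storage:

-- Backup with tuned transfer options (sketch - test the values on your system)
BACKUP DATABASE WideWorldImporters
TO DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH INIT, COMPRESSION, STATS = 10,
BLOCKSIZE = 65536,          -- largest allowed block size
BUFFERCOUNT = 50,           -- number of I/O buffers used for the backup
MAXTRANSFERSIZE = 4194304;  -- 4 MB per transfer (the maximum)
GO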

Restore it!!

blog_20170302_2  … Reunited and it feels so goooood… Yeah, no one wants to hear me sing.  Somehow I never win at karaoke.

Practicing restoring databases proves that the backup files are usable and will polish up your skills so that you are ready to restore in the event of a crisis. If you have to restore database files to different drives than those available on the production server, or if you have to restore often as in the case of refreshing a database, create a script that will be easier to use than restoring using the GUI and navigating to the new file locations. Don't forget that databases from newer versions of SQL cannot be restored to an older version.

Here are some basic examples of restore statements.

-- Restore a database from a single backup file
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH RECOVERY
GO

 

-- Restore a database from a single database backup file
-- and a transaction log backup file
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH NORECOVERY -- NORECOVERY allows more backup files to be restored
GO
RESTORE LOG WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_tlog.trn'
WITH RECOVERY
GO

In my case, since I had backed up the database to multiple files, I moved those files to several different local drives and was able to restore from there. I don’t like it when space is that tight but some shops run a very tight ship and it is just something I have to live with.

-- Restore a database from multiple backup files
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full_1.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_2.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_3.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_4.bak'
WITH REPLACE, -- REPLACE WILL OVERWRITE EXISTING FILES
RECOVERY
GO

 

-- Restore a database from multiple backup files
-- and move the database and log files to new drives
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full_1.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_2.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_3.bak',
DISK = N'D:\SQLBackups\WideWorldImporters_Full_4.bak'
WITH MOVE N'WideWorldImporters' TO N'H:\MSSQL\WideWorldImporters.mdf',
MOVE N'WideWorldImporters_log' TO N'L:\MSSQL\WideWorldImporters_log.ldf',
RECOVERY
GO

Restore WITH RECOVERY to make the database ready to use. Restore WITH NORECOVERY if you want to restore more backups.   Use REPLACE only when you are certain that you want to overwrite the existing files so be careful with it.

More on RESTORE can be found at https://msdn.microsoft.com/en-us/library/ms186858.aspx

Some good things to know about in RESTORE are the ability to restore individual pages, verifying backups, NORECOVERY and STANDBY, and STOPAT. Some of those options apply specifically to restoring transaction logs but are good to know anyway.
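To illustrate a couple of those, here is a minimal sketch of verifying a backup file and of a point-in-time restore with STOPAT. The file names match the hypothetical examples above and the timestamp is made up, so substitute your own:

-- Verify that a backup file is readable without restoring it (sketch)
RESTORE VERIFYONLY
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak';
GO

-- Point-in-time restore: full backup with NORECOVERY, then the log with STOPAT
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH NORECOVERY;
GO
RESTORE LOG WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_tlog.trn'
WITH STOPAT = N'2017-03-02T12:00:00', RECOVERY;
GO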

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Shrinking Database Files – Let’s Not And Say We Did

— by Lori Brown  @SQLSupahStah

I recently ran into someone who swears that shrinking database files on a production server is a good thing and proudly said they did it on all systems they work on.  OMG!!  Let me post a quote directly from MSDN regarding whether or not shrinking database files causes fragmentation (https://msdn.microsoft.com/en-us/library/ms189493.aspx) :

blog_20170216_1

Please notice I am talking about production SQL Servers and not development, test, QA, staging or anything that is not production.  Of course if you shrink database files in any environment you are still going to cause more fragmentation but since the use of those systems is often vastly different than a production system, you as the DBA have to judge the ramifications on those non-production systems.

blog_20170216_2

Have I ever had to shrink database files on the production systems I monitor…Uhhh, yes.  But, I do so very infrequently and with reason.  I even have one system where the guy who is in charge of it refuses to allow me to disable the nightly job that shrinks some of the databases.  I can only do so much but he can’t say I have not duly warned him.

 

There are tons of good articles and even rants of why you should not shrink your database files.  Here are some links in case you think I am the only one telling you to NOT shrink your database files.

 

https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/  – Gotta love some Ozar!

 

http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/ – From THE man!

 

http://www.sqlservercentral.com/blogs/simple-sql-server/2016/01/19/shrinking-database-data-files/ – Good post from a new blogger with the appropriate warnings.

 

https://www.am2.co/2016/04/shrink-database-4-easy-steps/ – Lots of warnings throughout the post.

 

https://www.mssqltips.com/sqlservertip/3008/solving-sql-server-database-physical-file-fragmentation/ – Shows a tool from sysinternals that allows you to see the physical fragmentation and states that once db files are physically fragmented the only way to undo the damage is to take your databases offline and defrag them at the disk level.

 

If you go to the links, you may be thinking that some of those posts are old.  However, look through the comments…there are new ones up to the present and they are still getting the same answers. 

If after all that you are still not convinced that shrinking database files causes physical fragmentation, well… I tried.  But go ahead and keep at it.  If you are lucky you may not ever have an issue or experience things getting slow because of fragmentation.  Roll those dice!

blog_20170216_3

 

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

 

SQL Server Security – Controlling Access

— by Ginger Keys

Installing SQL following best practices is an important first step in securing your SQL server. The next step in securing your SQL server is to decide who can access the SQL instance, what databases and other objects they need access to, and what kind of permission to give them to those objects. So what is involved in securing access to your SQL server? We will go over the basic components, and discuss best practices to help keep your server secure.

blog_20161020_1

There are three main areas of security within your SQL server instance.

  • Principals – these are the persons or entities needing to access your SQL server.
  • Securables – these are the databases, objects, and other resources in your SQL server instance.
  • Permissions – these determine which resources a principal can access and what actions they are allowed to perform.

There are also some other important areas to consider in controlling access to your SQL server:

  • Schemas and Ownership
  • SQL server service accounts
  • Administrative accounts

 

Principals

Principals are persons, applications, and entities that can be authenticated to access the SQL server resources. In SQL server, principals include logins, users, and roles.

LOGINS – Principals require a login in order to connect to the SQL server. Logins are at the server level only, and provide for the entry point, or the initial connection to the server. Logins are validated against the master database and the connection is to the instance, not the databases or other components.   SQL supports two methods for authenticating logins: Windows and SQL authentication. Mixed mode authentication allows for the use of either Windows or SQL logins.

Windows logins can include individual users, groups, domains accounts, or local accounts. Group accounts are granted login access to all logins that are members of the group. SQL relies on Windows to authenticate these accounts.

SQL logins are specific to the instance, and are stored in SQL, with the username and hash of the password in the master database. SQL uses internal authentication to validate login attempts. This type of login may be necessary for users not associated with a Windows domain.
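As a quick illustration of the two login types, here is a minimal sketch; the domain group and login names are made up, and CHECK_POLICY/CHECK_EXPIRATION enforce the Windows password policy on the SQL login:

-- Windows-authenticated login for an Active Directory group (hypothetical names)
CREATE LOGIN [MYDOMAIN\SQLReportUsers] FROM WINDOWS;
GO
-- SQL-authenticated login, only when Windows authentication is not an option
CREATE LOGIN AppLogin
WITH PASSWORD = N'UseAStrongPasswordHere!1',
CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
GO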

Best Practices for Logins:

  • Use Windows authentication whenever possible
  • Create Windows groups in Active Directory set with appropriate access/permissions to the SQL server, then add individual users to the appropriate groups
  • Do not use SQL authentication for logins if possible – when SQL Server logins are used, SQL Server login names and passwords are passed across the network, which makes them less secure
  • Audit SQL Server failed login attempts to monitor for possible hacking activity

USERS – Once logged in to a SQL instance, a user account is necessary in order to connect to a database and its components. Users are created within a database, and mapped back to the server login.

User accounts that are not mapped to a login account are known as orphaned users. An exception is contained database users; they do not need to map to a login.

Guest user – this account is a built-in account in SQL server, and is disabled in new databases by default. The guest user allows a login to access databases without being mapped to a specific database user, and it inherits the ‘public’ database role with its permissions.
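For reference, checking and disabling the guest user comes down to revoking its CONNECT permission. A minimal sketch, run per user database (not against the system databases); the database name is just a placeholder:

-- Is guest enabled in this database? hasdbaccess = 1 means yes
USE AdventureWorks2016; --insert your database name here
GO
SELECT name, hasdbaccess FROM sys.sysusers WHERE name = 'guest';
GO
-- Disable guest by revoking its permission to connect
REVOKE CONNECT FROM GUEST;
GO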

dbo user – this account has implied permissions to perform all activities in the database. Any principals belonging to the sysadmin fixed server role are mapped to the dbo user account automatically. The dbo user is in every database and is a member of the db_owner database role.

Best Practices for Users:

  • Disable the guest user in every user database (not system DBs)
  • If you must use the guest account, grant it minimum permissions

ROLES – Roles exist at both the server and database level. Permissions can be assigned to a role, which makes it more efficient to manage principals’ access to securables. Permissions are given to roles, then logins and users can be added to (or removed from) roles.

Server roles – server level roles can be fixed or user defined. Members of fixed server roles have permissions to server-level securables, and those permissions cannot be changed or revoked. Logins can be assigned to fixed server roles without having a user account in a database.

For complete list and description of server roles click here https://msdn.microsoft.com/en-us/library/ms188659.aspx

Database roles – These roles have a pre-defined set of permissions. Logins must be mapped to database user accounts in order to work with database objects. Database users can then be added to database roles, inheriting any permission sets associated with those roles.

For complete list and description of database roles click here https://msdn.microsoft.com/library/ms189121.aspx

Public role – The public role is contained in every database including system databases. It cannot be dropped and you can’t add or remove users from it. Permissions granted to the public role are inherited by all users because they belong to the public role by default.

Best Practices for Roles:

  • Be very cautious when adding users to fixed server roles:
    • Do not add principals to the sysadmin role unless they are highly trusted.
    • Membership in the securityadmin role allows principals to control server permissions, and should be treated with the same caution as the sysadmin role.
    • Be very cautious in adding members to the bulkadmin role. This role can insert data from any local file into a table, which could put your data at risk. For more information click here https://msdn.microsoft.com/library/ms188659.aspx
  • Grant public role only the permissions you want all users to have, and revoke unnecessary privileges.

Securables

SQL Server securables are the resources that can be accessed by a principal. SQL server resources operate within a hierarchy, with the server at the top. Below the server instance lie the databases, and below the databases is a collection of objects (schemas, tables, views, etc.). Access to securables is controlled by granting or denying permissions to principals, or by adding or removing principals (logins and users) to roles which have access. All securables have an owner. The owner of a securable has absolute control over the securable and cannot be denied any privilege. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).

blog_20161215_1

Permissions:

Permissions determine the type of access granted on a securable to a specific principal and what tasks a principal can perform on securables. The TSQL permission statements are

GRANT

REVOKE

DENY

Granting permission to a principal removes any DENY or REVOKE on that securable. A permission given at a higher scope in the hierarchy that contains that securable will take precedence over the lower level permission statement. Database level permissions only apply to that specific database.

Owners of securables can grant permissions on the objects they own, and any principal with CONTROL permissions can grant permissions on that securable as well.
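Here is a small sketch of the three statements in combination; the user, schema, and table names are hypothetical. In this example the DENY on the individual table wins, so ReportUser can read the rest of the Sales schema but not Sales.Invoices, and the REVOKE simply removes that stated DENY without granting anything by itself.

-- Hypothetical names; GRANT at schema scope, DENY on one object inside it
GRANT SELECT ON SCHEMA::Sales TO ReportUser;         -- ReportUser can read Sales tables
DENY SELECT ON OBJECT::Sales.Invoices TO ReportUser; -- ...except Sales.Invoices
GO
-- Later, remove the deny (does not grant anything by itself)
REVOKE SELECT ON OBJECT::Sales.Invoices FROM ReportUser;
GO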

Best Practices for Permissions:

  • Always use the principal of least privilege, which limits users by granting the minimum permissions necessary to accomplish a task. For more information click here https://msdn.microsoft.com/library/ms191291.aspx
  • Document any elevated user permission and request managerial approval.
  • When developing an application use a least-privileged user account (LUA), which may be more difficult – but will eliminate the temptation to grant elevated privileges as a quick fix when an end user cannot perform certain tasks that the administrative developer could. Granting elevated permissions to users in order to acquire lost functionality after the development phase can leave your application vulnerable to attack.
  • Grant permissions to roles rather than to users. It is easier to add and remove users from roles than to manage separate permission sets for individual users.
  • Don’t grant individual users access to SQL Server, instead create groups for specific servers with specific permissions, and add individual users to the appropriate groups.

Ownership & Schemas 

Owners of objects have irrevocable permissions to administer those objects. The owner of a securable has absolute control over the securable and cannot be denied any privilege. You cannot remove privileges from an object owner, and you cannot drop users from a database if they own objects in it. Server level securables are owned by server principals (logins), and database level securables are owned by database principals (users).

A schema is a named container for similar database objects, and can be owned by any principal. You can assign security rules for a schema which will then be inherited by all objects in that schema.

Objects can be moved between schemas, and schema ownership can be transferred between principals.
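A short sketch of those operations with hypothetical names: granting at the schema scope, moving an object into a schema, and transferring schema ownership.

-- Hypothetical schema, object, and principal names
CREATE SCHEMA Reporting AUTHORIZATION dbo;
GO
GRANT SELECT ON SCHEMA::Reporting TO ReportingRole;   -- inherited by objects in the schema
GO
ALTER SCHEMA Reporting TRANSFER dbo.MonthlyTotals;    -- move a table into the schema
GO
ALTER AUTHORIZATION ON SCHEMA::Reporting TO ReportAdminUser; -- transfer schema ownership
GO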

Best Practices for Schemas:

  • In your TSQL, refer to the objects using a fully qualified name. At the very least, use the schema name followed by the object name, separated by a period (.). Example: databasename.schemaname.tablename.
  • Use synonyms to obfuscate the schema ownership of objects.
  • Use schemas to protect the base database object from being altered or removed from the database by users without sysadmin permissions.
  • Use schemas to combine related, logical entities to reduce administration overhead.
  • For more info – https://technet.microsoft.com/en-us/library/dd283095(v=sql.100).aspx

SQL Server Service Accounts 

Depending on what features and components you decide to install, SQL Server has several services that are used to manage authentication of SQL Server with Windows. These services need user accounts associated with them to start and run the services. Each service account should have exactly the privileges that it needs to do its job and no more. For a list of all available SQL services click here https://msdn.microsoft.com/en-us/library/ms143504.aspx .

Best Practices for Service Accounts:

  • Assign service accounts low-level permissions using the principle of least privilege.
  • Don’t use built-in Windows system accounts (e.g., Network Service, Local System) for SQL Server service accounts – the built-in system accounts inherit certain elevated rights in Active Directory that aren’t required by SQL Server.
  • Use a Managed Service Account (MSA) if resources external to the SQL Server will be used.
  • When MSAs are not possible, use a specific low-privilege non-user domain account.
  • Use separate accounts for different SQL Server services. The service accounts should not only be different from one another, they should not be used by any other service on the same server.
  • The SQL Server Agent account is the only service account that requires membership in the sysadmin server role.
  • Do not grant additional permissions to the SQL Server service account or the service groups – SQL Server installation program will grant them the local rights that they need during the installation.
  • Always use SQL Server Configuration Manager to change service accounts or passwords.
  • Don’t add service accounts to the Local Administrator group
  • Use a service account for applications, and restrict the applications access only to data required – not every object in the database

Administrative Accounts

sa Account – SQL Server creates this server level login during installation by default. The sa account has full administrative rights in your SQL Server instance, and is well known and often targeted by hackers. The sa login maps to the sysadmin fixed server role, which has irrevocable administrative privileges on the whole server.
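For reference, disabling or renaming sa only takes a couple of statements. This is a minimal sketch; make sure another sysadmin login exists and works before running it, and the new name shown is only an example:

-- Confirm another sysadmin login exists first, then:
ALTER LOGIN sa DISABLE;
GO
-- Optionally rename it as well (example name)
ALTER LOGIN sa WITH NAME = [sa_disabled_2017];
GO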

BUILTIN\Administrators – All members of the Windows BUILTIN\Administrators group are members of the sysadmin role by default. This built in group is no longer included in SQL Server 2008 and later.

Best Practices for Administrative Accounts

  • Disable or rename the sa account – make sure another account exists with admin privileges before doing this.
  • If you must use the sa account, assign a strong password and enforce password policies
  • Do not use the sa account for day-to-day administration or logging on to the server remotely
  • Do not allow applications to use the sa account to connect to SQL
  • Remove the BUILTIN\Administrators group from the SQL Server Logins if it’s present in your instance.
  • Restrict users with system admin privileges, using server roles instead.
  • Restrict members of local administrator group. Limit the administrative access only to those who really require it.
  • Assign every administrator a named login, and don’t allow shared logins so that you can identify the users behind each and every database change.
  • Document all users with administrative rights, and any elevated user permissions

Conclusion

Controlling access to your SQL Server should be well thought out ahead of time. Every organization needs to have an information security policy in place, and the role of the DBA is to enforce these policies and protect the SQL Server instances along with the data in them. The suggestions provided above give a basic guideline for controlling who has access to your SQL Server. Utilizing windows groups, database roles, applying the principle of least privilege, and implementing the other suggestions above can be instrumental in keeping your SQL environments safe.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Deadlock Info From system_health Information

— by Lori Brown @SQLSupahStah

Periodically, all DBAs are going to run into code that fails due to a deadlock. Deadlocks happen when two or more SPIDs/sessions each try to access a resource that is locked by the other. In short, something has to give, so one is usually chosen by SQL to be the victim, meaning one is the loser. Deadlocks can be avoided by designing your databases well and making queries as quick and lightweight as possible. The faster data is retrieved or locked and unlocked, the better.

But, how do you figure out what is causing deadlocks if you suddenly run into them? An easy way is to use the information that is already captured in the default system_health session information that is already being gathered by SQL.

Here are queries to retrieve this information from SQL 2008 R2 and SQL 2012 and up.

-- get deadlock graphs in SQL 2008
SELECT CAST(event_data.value('(event/data/value)[1]',
                              'varchar(max)') AS XML) AS DeadlockGraph
FROM   ( SELECT XEvent.query('.') AS event_data
FROM     ( -- Cast the target_data to XML
SELECT   CAST(target_data AS XML) AS TargetData
FROM     sys.dm_xe_session_targets st
JOIN sys.dm_xe_sessions s
ON s.address = st.event_session_address
WHERE    name = 'system_health'
AND target_name = 'ring_buffer'
) AS Data -- Split out the Event Nodes
CROSS APPLY TargetData.nodes('RingBufferTarget/
                              event[@name="xml_deadlock_report"]')
AS XEventData ( XEvent )
) AS tab ( event_data );

 

 

-- get deadlock graphs in SQL 2012 and up
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM   ( SELECT XEvent.query('.') AS XEvent
FROM     ( SELECT CAST(target_data AS XML) AS TargetData
FROM     sys.dm_xe_session_targets st
JOIN sys.dm_xe_sessions s
ON s.address = st.event_session_address
WHERE    s.name = 'system_health'
AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes
('RingBufferTarget/event[@name="xml_deadlock_report"]')
AS XEventData ( XEvent )
) AS src;

Queries originally from Jonathan Kehayias (One super smart SQL guy!) https://www.simple-talk.com/author/jonathan-kehayias/
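One thing to keep in mind is that the ring_buffer target only holds a limited amount of recent data. On SQL 2012 and up, the system_health session also writes to .xel files in the instance's LOG directory, so older deadlock graphs can be pulled from there as well. A minimal sketch, assuming the default file naming:

-- get deadlock graphs from the system_health event files (SQL 2012 and up)
SELECT CAST(event_data AS XML).query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';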

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!