Monthly Archives: March 2017

Is This Database Being Used?

— By Ginger Keys

During the migration of a client’s SQL environment to a new hosting provider it was questioned whether all of the databases needed to be migrated. Nobody seemed to know whether several of the databases were used or what they were for. One person suggested we just not move them, and see if anybody screamed or if it broke anything. Well, I guess that’s one way to find out if a database is being used. But there should be more definitive ways to do this, right?

There really is no straightforward way to determine this, but there are several things we can look at to get a better idea.

  • Look at any current connections
  • Capture login/connections over period of time
  • Observe index usage
  • Look at transaction count
  • Find dependencies with Jobs or other DBs

Current connections

sp_who2 is a well-known stored procedure that returns information about current users, sessions and processes. You can run exec sp_who2 to return all sessions belonging to the instance, or you can filter to return only the active sessions:

--Find active connections to the instance
USE master;
GO
EXEC sp_who2 'active';
GO

Information about processes on your instance can also be derived from sys.sysprocesses. This system view is deprecated and will be removed in a future version of SQL Server, but it still returns data about both client and system processes running on the instance. The following statements will filter data relating to a specific database:

--Find number of active connections to database
USE master;
GO
SELECT DB_NAME(dbid) AS DBName,
       spid,
       COUNT(dbid) AS NumberOfConnections,
       loginame,
       login_time,
       last_batch,
       status
FROM sys.sysprocesses
WHERE DB_NAME(dbid) = 'AdventureWorks2016' --insert your database name here
GROUP BY dbid, spid, loginame, login_time, last_batch, status
ORDER BY DB_NAME(dbid)

 

--Active connections to database with connecting IP address
SELECT s.host_name,
       s.program_name,
       s.login_name,
       c.client_net_address,
       DB_NAME(s.database_id) AS DBName,
       s.login_time,
       s.status,
       GETDATE() AS date_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS c ON s.session_id = c.session_id
INNER JOIN sys.sysprocesses AS p ON s.session_id = p.spid
WHERE DB_NAME(p.dbid) = 'AdventureWorks2016' --insert your database name here

 

Connections over time

It might be more beneficial to watch the connections to a database over a period of time instead of looking only at the current connections. To gather this data over time, you could create a trace through SQL Server Profiler: run the trace, save the results, and import them into a table to query. You can also create a SQL Server Audit to record successful logins, but keep in mind these are recorded at the server level, not the database level. For a good explanation of how to perform both of these tasks click here https://mssqltalks.wordpress.com/2013/02/25/how-to-audit-login-to-my-sql-server-both-failed-and-successful/ . Depending upon how long you run the Profiler trace or Audit, these files can take up a considerable amount of space, so be sure to monitor your disk space carefully.
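
A lighter-weight alternative is to snapshot session data into a table on a schedule, for example from a SQL Agent job. This is a minimal sketch of my own, not from the original post; the table name is made up, and the database_id column of sys.dm_exec_sessions requires SQL Server 2012 or later:

--Sketch: log who connects to a database over time (table name is hypothetical)
CREATE TABLE dbo.ConnectionLog
(CaptureTime  DATETIME NOT NULL DEFAULT (GETDATE()),
 login_name   SYSNAME NOT NULL,
 host_name    NVARCHAR(128) NULL,
 program_name NVARCHAR(128) NULL);
GO
--Schedule this INSERT to run every few minutes, then review the table later
INSERT INTO dbo.ConnectionLog (login_name, host_name, program_name)
SELECT login_name, host_name, program_name
FROM sys.dm_exec_sessions
WHERE DB_NAME(database_id) = 'AdventureWorks2016'; --insert your database name here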

 

Index Usage

Another way to see if your database is in use is to check whether its indexes are being used. Information on index usage is held in the sys.dm_db_index_usage_stats dynamic management view, accumulated since the last restart of the SQL Server service, and can be queried using this statement, which can be tailored to select the data you need.

SELECT DB_NAME(database_id) AS DatabaseName,
       last_user_seek,
       last_user_scan,
       last_user_lookup,
       last_user_update
FROM sys.dm_db_index_usage_stats
WHERE DB_NAME(database_id) = 'AdventureWorks2016' --insert your database name here

 

[Screenshot: last index usage dates for the database]

The statement above will show you the date and time the indexes for your database were last used. For the reverse of that, if you want to see which databases have not had their indexes used since the last service restart, run this statement:

SELECT name AS DatabaseName
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb')
EXCEPT
SELECT DISTINCT DB_NAME(database_id) AS DatabaseName
FROM sys.dm_db_index_usage_stats
ORDER BY 1

[Screenshot: databases with no index usage since the last restart]

Transaction Count for the Database

Checking to see if the number of transactions is increasing for a database is another way to tell whether it is being used. You can query sys.dm_os_performance_counters for Transactions/sec and run the query several times to see if the count is increasing, or you can open Perfmon and watch the counter there.

--Transaction count increasing?
SELECT *
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%' --insert your database name here
GO

[Screenshot: Transactions/sec counter value]

--I waited a few minutes and executed the select statement again

[Screenshot: Transactions/sec counter value after waiting]
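
Since Transactions/sec is a cumulative counter that only ever increases, you can also compute the difference between two samples in a single script. This is a small sketch of my own (the 10-second delay is arbitrary):

--Sample the cumulative transaction counter twice and compare
DECLARE @FirstCount BIGINT, @SecondCount BIGINT;

SELECT @FirstCount = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%'; --insert your database name here

WAITFOR DELAY '00:00:10'; --wait 10 seconds between samples

SELECT @SecondCount = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
AND instance_name LIKE 'AdventureWorks2016%';

SELECT @SecondCount - @FirstCount AS TransactionsDuringSample;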

Database Dependencies

Occasionally other databases or linked servers will connect to your database. To see objects in your database that reference objects in other databases, run this statement (to find out whether other databases reference yours, run it in those databases):

SELECT OBJECT_NAME(referencing_id) AS referencing_object,
       referenced_database_name,
       referenced_schema_name,
       referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_database_name IS NOT NULL
AND is_ambiguous = 0;

 

To find objects referencing linked servers, use this:

SELECT OBJECT_NAME(referencing_id) AS referencing_object,
       referenced_server_name,
       referenced_database_name,
       referenced_schema_name,
       referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_server_name IS NOT NULL
AND is_ambiguous = 0;

Database dependencies can be a very in-depth topic, and the statements above are only meant for high-level information about connections to your database. For more information about this topic click here https://www.mssqltips.com/sqlservertip/2999/different-ways-to-find-sql-server-object-dependencies/ .

Conclusion

There are countless ways to determine whether your database is being used. Other methods include checking whether execution plans in the cache reference the database, seeing if reads and writes are happening against its files, watching lock_acquired events, and many others I haven't thought of. The methods outlined above provide a useful starting point for investigating who or what is connecting and whether your database is active. Yes, you can just take the database offline, detach it, or even delete it and see if anyone fusses. But it's much more prudent to look at some simple things first to make that determination.
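
As one example of the reads/writes check mentioned above, here is a quick sketch of my own (not from the original post) using sys.dm_io_virtual_file_stats. Keep in mind these counters also reset when the service restarts and include maintenance activity such as backups:

--Cumulative reads and writes against the database files since the last restart
SELECT DB_NAME(vfs.database_id) AS DBName,
       SUM(vfs.num_of_reads) AS TotalReads,
       SUM(vfs.num_of_writes) AS TotalWrites
FROM sys.dm_io_virtual_file_stats(DB_ID('AdventureWorks2016'), NULL) AS vfs --insert your database name here
GROUP BY vfs.database_id;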

Feel free to comment with other methods you have used….we would love to hear from you. For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

 

Query Tuning – Adding Included Columns To An Existing Index

— by Lori Brown @SQLSupahStah

For the most part, I stick to my daily DBA duties of making sure that SQL Server maintenance is running and that the proverbial lights are on in all my SQL boxes. However, sometimes it is fun to do some tuning and it’s even better when you have a success.

I have a SQL Server that hosts some Microsoft ERP databases. There was a problem with some slow performance and after collecting some performance data, I narrowed the root cause down to a view (vp_03400Exception1) that was used in a stored procedure. Out of the 15 million reads performed by the proc when I captured it, 12 million were directly from the offending view. In checking the execution plan for the view I found that a large table (APTran) was scanned in one spot and there were key lookups going on in other places (APTran & APDoc).

Bad stuff in bad plan…thick arrow indicates that almost 4 million records were scanned

[Screenshot: original execution plan with the table scan and key lookups]

Key lookups are not good (on small tables they might not matter, but on large tables they do) and should be eliminated if possible. Key lookups occur when you have an index seek against a table, but your query wants extra columns that are not in that index. This causes SQL Server to have to go back to the table to get those extra columns, and lots of trips back for extra columns results in slow performance. Of course, scans are not good either, so I wanted to try to get rid of that too.

Since the key lookups have an output list in the execution plan showing exactly which extra columns were needed, I took a look and simply added those columns as included columns on the existing indexes that were being used in the plan. In effect, I created a covering index using included columns. The included columns are only used by queries that need them; if other queries use the index but don't actually need the included columns, those columns won't be pulled in.

Here are the modified indexes with the included columns.

[Screenshot: modified index definitions with the included columns]
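
The real definitions are in the screenshot above. As a purely hypothetical illustration of the pattern (the index, key, and included column names here are invented, since the post doesn't list the actual ones), adding included columns to an existing index looks like this:

--Hypothetical example: recreate an existing index with INCLUDE columns
CREATE NONCLUSTERED INDEX IX_APTran_RefNbr
ON dbo.APTran (RefNbr, BatNbr)
INCLUDE (Acct, Sub, TranAmt) --columns taken from the key lookup's output list
WITH (DROP_EXISTING = ON);
GO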

With the new columns included in the indexes, index seeks with no key lookups are now being performed.  The thick arrow has disappeared because instead of needing to scan the whole APTran table, it knows to return only 4 records.  This means that the rest of the query does not have to perform anywhere near as much work as it did before.  See the new execution plan below:

Much better stuff in more efficient plan…small arrow indicates that only 4 records were needed from APTran so much less work is being done.

[Screenshot: new execution plan with index seeks and no key lookups]

Below you can see that overall the new plan uses less memory, is smaller and just plain old more efficient.

[Screenshot: plan comparison showing the new plan uses less memory and is smaller]

Boom!! Mic drop!!


You might be thinking that I should not celebrate since my query only returned 1 row. However, what if the results were hundreds, thousands, or millions of rows? The work done by the old execution plan would be significant, and the users would feel it. So, what's the bottom line here? If you have queries accessing large tables, check the execution plans to see whether the tables are being accessed efficiently. Sometimes a small change to an existing index will work wonders; sometimes you have to create a new index to make queries efficient. The execution plan might just tell you what it needs if you know where to look.
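
If you want to hunt for this pattern proactively, here is a sketch of my own (not from the original post) that searches the plan cache for cached plans containing key lookups. The XML query can be expensive on a busy server, so run it sparingly:

--Find cached plans that contain key lookups (Lookup="1" in the plan XML)
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20) st.text, qs.execution_count, qs.total_logical_reads, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qp.query_plan.exist('//RelOp/IndexScan[@Lookup="1"]') = 1
ORDER BY qs.total_logical_reads DESC;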

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Login Failures Bloating SQL Server Log

–By Ginger Keys

After migrating a client’s SQL instances to VMs from physical servers, I noticed the following week that the SQL Server Log files had tons of failed login attempts from what looked like an application. These attempts were happening several times per second, and had totally bloated the SQL Server logs.

[Screenshot: SQL Server Log filled with repeated failed login messages]

This is not desirable because 1) obviously something should not be attempting and failing to connect that much, and 2) it makes it super cumbersome to monitor more critical messages in the SQL logs when so many login errors are bloating the log. Too many failed login attempts could mean that the application has not been set with the correct permissions to access the databases needed, or it could be an indication of a hacking attempt.

I ran the script below to determine how many failed login attempts had occurred in the last 7 days, and the count came back with over 3 million failed attempts!

-- Create a temporary table
CREATE TABLE #FailedLogins
(Logdate DATETIME,
 Process VARCHAR(20),
 Text VARCHAR(4000))

-- Insert data from SQL Server Log into temp table
-- (parameters: log file number, log type (1 = SQL error log),
--  two search strings, start date, end date)
INSERT INTO #FailedLogins
EXEC xp_readerrorlog 1, 1, N'Login Failed', N'LoginName', '20170309', '20170316'

-- For more info about xp_readerrorlog click here https://sqlandme.com/2012/01/25/sql-server-reading-errorlog-with-xp_readerrorlog/

--Count number of occurrences
SELECT COUNT(Text) AS LoginFailures, Text
FROM #FailedLogins
GROUP BY Text

--Drop temp table
DROP TABLE #FailedLogins

 

As it turns out it was a legacy system moved from the old environment that needed to be adjusted to connect to the new environment. I didn’t exactly get to put on my superwoman cape and solve the problem, but at least I was able to identify the issue and direct it to the appropriate IT staff to correct.

As part of your everyday duties as a DBA, it is prudent to monitor your SQL Server Logs to catch hacking attempts or malicious activity against your instance, and also to look for errors that could be occurring. By default, SQL Server retains 7 error log files (one current, and six archives). Error logs contain critical information about your SQL Server. A new log is created each time the SQL service starts, and the oldest archived log gets deleted. For security reasons it is best practice to keep a larger number of error logs, depending on the space you have available. Someone with access to your SQL Server could execute sp_cycle_errorlog and regenerate the logs at a rate that could 'delete' their malicious activity or cover up the fact that they have gained access to your system. If you have highly sensitive data or stringent auditing requirements, you can increase the number of retained error logs up to 99; just be sure to monitor your disk space.
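
You can change the retention in SSMS by right-clicking SQL Server Logs and choosing Configure, or with T-SQL. This sketch is my addition, not from the original post; it uses xp_instance_regwrite, which writes to the instance's registry keys, so test it outside production first:

--Increase the number of retained error logs to 30 (value is an example)
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 30;
GO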

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL Server Backup and Restore Primer

— by Lori Brown @SQLSupahStah

Back it up!!

[Image] …. BEEP! BEEP! BEEP! … Oh wait! Not that kind of backup…

Since I spent a lot of time trying to creatively move a database from one server to another last night, I thought that a review of ways to backup and restore would be a good topic for today.

I had someone who needed me to move a database from one server to another in an effort to consolidate servers. The database backup file was a significant size and could not fit onto any of the local drives of the new server. And to make things more fun, the SQL Server service account did not have permission to view the old server at all. I think this was because the old server had been in a different domain at one point, and the issue that would have given the service account access was never fixed. Nonetheless, the move had to be completed, so the restore had to be performed from a location that the SQL Server service account could access; I could not use a UNC path to the old server for restoring the database.

The new server was really tight on space and the admins were very reluctant to add more space if that space was not needed permanently. I had several drives with differing amounts of free space so if I could create enough small backup files, I could put those files in several different places on the new server and could then restore since the SQL Server service account did have access to all local drives.

You can backup databases to one file or multiple files. Here are the basics of backing up to a single file. Since I am giving the T-SQL way to do things, I am also including a way to make the file name unique with a timestamp.

-- Backup a database to a single backup file
DECLARE @BackupFile NVARCHAR(200)
DECLARE @Timestamp VARCHAR(50)

-- create a timestamp for a unique name
SET @Timestamp = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 20),'-',''),':',''),' ','')

-- create backup file string
SET @BackupFile = 'D:\SQLBackups\WideWorldImporters_Full_'+@Timestamp+'.bak'

-- backup the database
BACKUP DATABASE WideWorldImporters
TO DISK = @BackupFile
WITH NOFORMAT, INIT, NAME = N'WideWorldImporters-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;
GO

Backing up to multiple files is good to use for large databases and will break up a single large backup file to several smaller files. This is what I had to do last night when a large single database file would not fit on any local drive.

-- Backup a database to multiple backup files
DECLARE @BackupFile1 NVARCHAR(200)
DECLARE @BackupFile2 NVARCHAR(200)
DECLARE @BackupFile3 NVARCHAR(200)
DECLARE @BackupFile4 NVARCHAR(200)
DECLARE @Timestamp VARCHAR(50)

-- create a timestamp for a unique name
SET @Timestamp = REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 20),'-',''),':',''),' ','')

-- create backup file strings
SET @BackupFile1 = 'D:\SQLBackups\WideWorldImporters_Full1_'+@Timestamp+'.bak'
SET @BackupFile2 = 'D:\SQLBackups\WideWorldImporters_Full2_'+@Timestamp+'.bak'
SET @BackupFile3 = 'D:\SQLBackups\WideWorldImporters_Full3_'+@Timestamp+'.bak'
SET @BackupFile4 = 'D:\SQLBackups\WideWorldImporters_Full4_'+@Timestamp+'.bak'

-- backup the database
BACKUP DATABASE WideWorldImporters
TO DISK = @BackupFile1,
   DISK = @BackupFile2,
   DISK = @BackupFile3,
   DISK = @BackupFile4
WITH NOFORMAT, INIT, NAME = N'WideWorldImporters-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;
GO

Here is the MSDN link for all things backup…. https://msdn.microsoft.com/en-us/library/ms186865.aspx

Some backup options that are good to know about are BLOCKSIZE, BUFFERCOUNT and MAXTRANSFERSIZE. These can help speed up backups immensely but should be thoroughly tested. I recently had a 1.8 TB database that without setting those backup options, caused the database backup to take around 4 hours. With them set, the backup time was reduced to 50 minutes.
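
As a sketch of what that looks like (my addition; the values shown are examples to tune through testing, not recommendations):

--Example syntax only; tune BLOCKSIZE, BUFFERCOUNT and MAXTRANSFERSIZE
--through testing in your own environment
BACKUP DATABASE WideWorldImporters
TO DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH COMPRESSION, INIT, STATS = 10,
     BLOCKSIZE = 65536,
     BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304;
GO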

Restore it!!

[Image] … Reunited and it feels so goooood… Yeah, no one wants to hear me sing. Somehow I never win at karaoke.

Practicing restoring databases proves that your backup files are usable and polishes up your skills so that you are ready to restore in the event of a crisis. If you have to restore database files to different drives than those available on the production server, or if you have to restore often, as in the case of refreshing a database, create a script; it will be easier to use than restoring through the GUI and navigating to the new file locations. Don't forget that databases from a newer version of SQL Server cannot be restored to an older version.

Here are some basic examples of restore statements.

-- Restore a database from a single backup file
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH RECOVERY
GO

 

-- Restore a database from a single database backup file
-- and a transaction log backup file
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak'
WITH NORECOVERY -- NORECOVERY allows more backup files to be restored
GO
RESTORE LOG WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_tlog.trn'
WITH RECOVERY
GO

In my case, since I had backed up the database to multiple files, I moved those files to several different local drives and was able to restore from there. I don’t like it when space is that tight but some shops run a very tight ship and it is just something I have to live with.

-- Restore a database from multiple backup files
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full_1.bak',
     DISK = N'D:\SQLBackups\WideWorldImporters_Full_2.bak',
     DISK = N'D:\SQLBackups\WideWorldImporters_Full_3.bak',
     DISK = N'D:\SQLBackups\WideWorldImporters_Full_4.bak'
WITH REPLACE, -- REPLACE WILL OVERWRITE EXISTING FILES
     RECOVERY
GO

 

-- Restore a database from multiple backup files
-- and move the database and log files to new drives
USE master
GO
RESTORE DATABASE WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full_1.bak',
     DISK = N'D:\SQLBackups\WideWorldImporters_Full_2.bak',
     DISK = N'D:\SQLBackups\WideWorldImporters_Full_3.bak',
     DISK = N'D:\SQLBackups\WideWorldImporters_Full_4.bak'
WITH MOVE N'WideWorldImporters' TO N'H:\MSSQL\WideWorldImporters.mdf',
     MOVE N'WideWorldImporters_log' TO N'L:\MSSQL\WideWorldImporters_log.ldf',
     RECOVERY
GO

Restore WITH RECOVERY to make the database ready to use. Restore WITH NORECOVERY if you want to restore more backups. Use REPLACE only when you are certain that you want to overwrite the existing files, so be careful with it.

More on RESTORE can be found at https://msdn.microsoft.com/en-us/library/ms186858.aspx

Some other good RESTORE capabilities to know about are page restore, backup verification, NORECOVERY and STANDBY, and STOPAT. Some of those options are specific to restoring transaction logs, but they are good to know anyway.
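
To show two of those in action, here is a short sketch of my own (not from the original post). RESTORE VERIFYONLY checks that a backup is readable and complete without restoring it, and STOPAT recovers a log restore only to a point in time (the database must first have been restored WITH NORECOVERY, as in the earlier example):

--Verify a backup file without restoring it
RESTORE VERIFYONLY
FROM DISK = N'D:\SQLBackups\WideWorldImporters_Full.bak';
GO

--Restore a log backup only up to a point in time (time shown is an example)
RESTORE LOG WideWorldImporters
FROM DISK = N'D:\SQLBackups\WideWorldImporters_tlog.trn'
WITH STOPAT = N'2017-03-02 12:00:00', RECOVERY;
GO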

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!