Tag Archives: Script

Find Out Who Changed the Database Recovery Model

— By Lori Brown @SQLSupahStah

I ran into a situation where we were working on a migration and had been directed to put all databases into the FULL recovery model in anticipation of using log shipping to push databases to the new server. Once we were ready to go live on the new server, the plan was to ship the final transaction logs and restore them WITH RECOVERY so that the cutover would be as quick as possible. Of course, this meant that we had to make sure that all databases were getting regular log backups, which we did. Things were going along nicely until we started receiving log backup failure notifications.
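For reference, the final cutover step described above boils down to restoring the last log backup WITH RECOVERY on the new server. A minimal hedged sketch follows; the database name and backup path are placeholders, not details from the actual migration.

-- Final cutover sketch: restore the last log backup and bring the database online
RESTORE LOG [YourDatabase]
FROM DISK = N'\\BackupShare\YourDatabase_Final.trn'
WITH RECOVERY;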

Upon checking things, we found that one of the databases had been changed to the SIMPLE recovery model. You can find this type of information in the default trace, or you can simply scroll through the SQL error logs until you find the entry that you are looking for. On a busy instance with a lot of entries in the error log, that can be time consuming, so I came up with a set of queries that grab the error log entries and tie them to the info in the default trace, making it easier to identify WHO made the unauthorized change to the database properties.

 

DECLARE @tracefile VARCHAR(500)
DECLARE @ProcessInfoSPID VARCHAR(20)

CREATE TABLE [dbo].[#SQLerrorlog](
[LogDate] DATETIME NULL,
[ProcessInfo] VARCHAR(10) NULL,
[Text] VARCHAR(MAX) NULL
)

/*
Valid parameters for sp_readerrorlog
1 - Error log: 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
2 - Log file type: 1 or NULL = error log, 2 = SQL Agent log
3 - Search string 1
4 - Search string 2

Change parameters to meet your needs
*/
-- Read error log looking for the words RECOVERY
-- and either FULL, SIMPLE or BULK_LOGGED indicating a change from prior state
INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'FULL'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'SIMPLE'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'BULK_LOGGED'

UPDATE #SQLerrorlog
SET ProcessInfo = SUBSTRING(ProcessInfo,5,20)
FROM #SQLerrorlog
WHERE ProcessInfo LIKE 'spid%'

-- Get path of default trace file
SELECT @tracefile = CAST(value AS VARCHAR(500))
FROM sys.fn_trace_getinfo(DEFAULT)
WHERE traceid = 1
AND property = 2

-- Get objects altered from the default trace
SELECT IDENTITY(int, 1, 1) AS RowNumber, *
INTO #temp_trc
FROM sys.fn_trace_gettable(@tracefile, default) g -- default = read all trace files
WHERE g.EventClass = 164

SELECT t.DatabaseID, t.DatabaseName, t.NTUserName, t.NTDomainName,
t.HostName, t.ApplicationName, t.LoginName, t.SPID, t.StartTime, l.Text
FROM #temp_trc t
JOIN #SQLerrorlog l ON t.SPID = l.ProcessInfo
WHERE t.StartTime > GETDATE()-1 -- filter by time within the last 24 hours
ORDER BY t.StartTime DESC

DROP TABLE #temp_trc
DROP TABLE #SQLerrorlog
GO

 

You can find more on the following:

sp_readerrorlog is an undocumented procedure that actually uses xp_readerrorlog – https://www.mssqltips.com/sqlservertip/1476/reading-the-sql-server-log-files-using-tsql/

sys.fn_trace_getinfo – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-getinfo-transact-sql

sys.fn_trace_gettable – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-gettable-transact-sql

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

List Partitioned Tables And Other Info About Them

— By Lori Brown @SQLSupahStah

The query below is a good way to find out whether any of the tables in your database are partitioned. It is important to know which tables are partitioned so that you can then dig into how they are partitioned.

-- Partitioned Tables
SELECT DISTINCT t.name AS PartitionedTables
FROM sys.partitions p
INNER JOIN sys.tables t
ON p.object_id = t.object_id
WHERE p.partition_number <> 1

If you have partitioned tables, here is a good way to find out how they are partitioned.

-- Get partition info
SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]
, p.partition_number AS [p#]
, fg.name AS [filegroup]
, p.rows
, au.total_pages AS pages
, CASE boundary_value_on_right
WHEN 1 THEN 'less than'
ELSE 'less than or equal to' END AS comparison
, rv.value
, CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +
SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),
CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +
SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +
SUBSTRING (au.first_page, 1, 1))) AS first_page
FROM sys.partitions p
INNER JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
INNER JOIN sys.objects o
ON p.object_id = o.object_id
INNER JOIN sys.system_internals_allocation_units au
ON p.partition_id = au.container_id
INNER JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions f
ON f.function_id = ps.function_id
INNER JOIN sys.destination_data_spaces dds
ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg
ON dds.data_space_id = fg.data_space_id
LEFT OUTER JOIN sys.partition_range_values rv
ON f.function_id = rv.function_id
AND p.partition_number = rv.boundary_id
WHERE i.index_id < 2
AND o.object_id = OBJECT_ID('dbo.SomeTableName');

List Tables That May Be Over Indexed

— By Lori Brown

While not having enough indexes can be bad for query performance, having too many indexes can be just as bad. Use the query below to get a list of tables in your database that have more than 10 indexes.

-- Tables with large number of indexes
select t.name as TablesWithLargeNumInx, count(i.name) as CountIndexes
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where i.index_id > 0
group by t.name
having count(i.name) > 10

If you suspect that you have too many indexes on your tables, you can also check the sys.dm_db_index_usage_stats dynamic management view to see whether the indexes on your heavily indexed tables are being used well. (Hint: seeks are good; scans, not so much.)

select u.user_seeks, u.user_lookups, u.user_scans
from sys.dm_db_index_usage_stats u
inner join sys.indexes i
on u.object_id = i.object_id and u.index_id = i.index_id
WHERE u.object_id=object_id('dbo.SomeTableName')
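If you want the index names next to those usage counts, here is a hedged variation of the query above; dbo.SomeTableName is still a placeholder, and keep in mind that the usage stats reset when SQL Server restarts.

-- Usage counts per index for one table in the current database
select i.name as IndexName, u.user_seeks, u.user_scans, u.user_lookups, u.user_updates
from sys.indexes i
left join sys.dm_db_index_usage_stats u
on u.object_id = i.object_id and u.index_id = i.index_id and u.database_id = DB_ID()
where i.object_id = object_id('dbo.SomeTableName')
and i.index_id > 0
order by (isnull(u.user_seeks,0) + isnull(u.user_scans,0) + isnull(u.user_lookups,0))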

https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql

SQL Toolkit – Get List of Database Options and Features

–By Lori Brown

I am going to start up a thread with some queries that I consider to be important to DBAs. As a consultant, many times we have to start managing SQL instances without any knowledge of why they have been configured in various ways. We usually try to work with our clients to figure things out, but many times even they don't know what has been done to the SQL configuration and database options or features. This can happen when a software vendor is allowed to set up the SQL instance or when a developer or DBA enables certain things so that work can be done. I have some scripts that I keep in my Toolkit and can use when I need to better understand the environment. At least armed with a little knowledge, I can ask questions and help my clients better understand things.

Here is a query that I use to find out what database options have been set as well as what advanced features may be enabled. I tried to make the output display in plain English so that if I have to pass the info on to a client, they can understand it as well. Please note that at the end of each column I have put a comment indicating whether the feature is version specific. If you don't have that version of SQL, simply comment out the lines that don't apply to you. As new versions of SQL come out, I'll try to test and keep things in the Toolkit up to date.

-- get a list of interesting database settings
SELECT d.name AS DBName
,(CASE d.compatibility_level WHEN 80 THEN 'SQL 2000'
WHEN 90 THEN 'SQL 2005'
WHEN 100 THEN 'SQL 2008'
WHEN 110 THEN 'SQL 2012'
WHEN 120 THEN 'SQL 2014'
WHEN 130 THEN 'SQL 2016' END) AS Compatibility
,d.user_access_desc AS UserAccessSetting
,(CASE d.is_auto_close_on WHEN 0 THEN 'OFF' ELSE 'ON' END) AS AutoCloseSetting
,(CASE d.is_auto_shrink_on WHEN 0 THEN 'OFF' ELSE 'ON' END) AS AutoShrinkSetting
,d.recovery_model_desc AS RecoveryModel
,d.page_verify_option_desc AS PageVerifySetting
,(CASE d.is_auto_create_stats_on WHEN 0 THEN 'OFF' ELSE 'ON' END) AS AutoCreateStats
,(CASE d.is_auto_update_stats_on WHEN 0 THEN 'OFF' ELSE 'ON' END) AS AutoUpdateStats
,(CASE d.is_cdc_enabled WHEN 0 THEN 'OFF' ELSE 'ON' END) AS ChangeDataCaptureSetting
,(CASE d.is_encrypted WHEN 0 THEN 'OFF' ELSE 'ON' END) AS TDESetting
-- Always On info
,ag.name AS AGName
,rs.role_desc AS AGRole
,UPPER(ag.automated_backup_preference_desc) AS AGBkupPref
-- Other interesting stuff
,d.containment_desc AS ContainmentSetting -- SQL 2012 +
,d.delayed_durability_desc AS DelDurabilitySetting -- SQL 2014 +
,(CASE d.is_remote_data_archive_enabled WHEN 0 THEN 'Stretch-disabled' ELSE 'Stretch-enabled' END) AS StretchDBSetting -- SQL 2016 +
,(CASE d.is_query_store_on WHEN 0 THEN 'Disabled' ELSE 'Enabled' END) AS QueryStoreSetting -- SQL 2016 +
FROM sys.databases d
LEFT JOIN sys.availability_databases_cluster adc ON d.group_database_id = adc.group_database_id
LEFT JOIN sys.availability_groups ag ON adc.group_id = ag.group_id
LEFT JOIN sys.dm_hadr_availability_replica_states rs ON ag.group_id = rs.group_id AND d.replica_id = rs.replica_id

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Moving Non-clustered Indexes To New Data File

-by Ginger Keys

It is not uncommon for database files to grow to the point that you need more disk space. But what if it is not possible, or not in the budget, to add more space? You have several creative options to move or shrink files, or to delete unneeded items from the disk. One solution we recently implemented with a client was to move the indexes of a large database to another drive.

Create File/Filegroup

Only non-clustered indexes can be moved, and they must reside in a different filegroup from the primary filegroup. If you don’t already have a separate file and filegroup created, you must do this first. Make sure you have created a folder path on your system to the drive that will be holding your index files.

blog_20170209_1
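The screenshot above shows this step in SSMS. If you prefer T-SQL, a hedged equivalent would look something like the following; the file path, size, and growth settings are placeholders to adjust for your environment, and the filegroup name matches the INDEX filegroup used later in this post.

-- Create a filegroup for indexes and add a data file on the new drive
ALTER DATABASE AdventureWorks2016
ADD FILEGROUP [INDEX];
GO
ALTER DATABASE AdventureWorks2016
ADD FILE (
NAME = N'AdventureWorks2016_Index',
FILENAME = N'I:\SQLIndexes\AdventureWorks2016_Index.ndf', -- placeholder path on the new drive
SIZE = 1024MB,
FILEGROWTH = 256MB
) TO FILEGROUP [INDEX];
GO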

Get Space Requirement for Indexes

Determine which non-clustered indexes you have in your database and their sizes by running this script, so you can ensure you have enough space allocated on the target drive to house the index files.

USE AdventureWorks2016
GO
SELECT
OBJECT_NAME(i.OBJECT_ID) AS TableName,
i.name AS IndexName,
8 * SUM(a.used_pages) AS 'Indexsize(KB)'
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
JOIN sys.tables AS t ON i.object_id = t.object_id
WHERE i.type_desc = 'NONCLUSTERED'
GROUP BY i.OBJECT_ID,i.index_id,i.name
ORDER BY OBJECT_NAME(i.OBJECT_ID),i.index_id

blog_20170209_2

Create and Execute Stored Procedure

The following script will create a stored procedure called MoveIndexToFileGroup that will move an index from one file group to another, and even supports the INCLUDE columns. Run this script against the master database to create the stored procedure: https://gist.github.com/dalenewman/6377911

In order to get a list of tables for your database, run the following script.

Use AdventureWorks2016 -- use your database name
GO
SELECT * FROM INFORMATION_SCHEMA.TABLES

blog_20170209_3

After the above stored procedure is created and you have gotten a list of the schemas/tables/indexes you want to move, execute the following:

EXEC MoveIndexToFileGroup
@DBName = 'AdventureWorks2016',   -- your database name
@SchemaName = 'HumanResources',   -- schema name that defaults to dbo
@ObjectNameList = 'Shift,
                   Department,
                   EmployeeDepartmentHistory,
                   EmployeePayHistory,
                   Employee,
                   JobCandidate',  -- a table or list of tables
@IndexName = NULL,                 -- an index or NULL for all of them
@FileGroupName = 'INDEX';          -- the target file group

 

Your indexes will appear in the target destination you created.

blog_20170209_4
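If you want to verify the move with a query rather than the screenshot above, a hedged check of which filegroup each non-clustered index now lives in could look like this; partitioned indexes will show their partition scheme name instead of a filegroup.

-- List each non-clustered index and the data space (filegroup or partition scheme) holding it
USE AdventureWorks2016
GO
SELECT OBJECT_NAME(i.object_id) AS TableName,
i.name AS IndexName,
ds.name AS DataSpaceName
FROM sys.indexes i
JOIN sys.data_spaces ds ON i.data_space_id = ds.data_space_id
WHERE i.type_desc = 'NONCLUSTERED'
ORDER BY TableName, IndexName;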

Conclusion

There are many useful reasons for moving your non-clustered indexes to a separate drive…in this case ours was simply to create more space on the drive holding the data files. Whatever your reason, the steps above should give you a straightforward way to achieve this.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SSISDB – Package Execution Time in Minutes and Average Execution Time

–By Lori Brown   @SQLSupahStah

I have a client who has a lot of SSIS packages in the SSIS catalog that we are migrating to a new server. Of course, there is a lot of testing going on, so the question came up of how to more easily see package runtimes and compare them to the overall average runtime, rather than using the Dashboard or the reports that come with Management Studio. After a little searching, I found that the SSISDB.internal.execution_info table contains what I needed.

-- Filter by date and package name
DECLARE @begindate DATE = GETDATE() - 7
DECLARE @pkgname VARCHAR(100) = 'Your Package Name' -- Ex: Patient_Update.dtsx

-- Package execution time
SELECT folder_name, project_name, package_name,
CAST(start_time AS datetime) AS start_time,
DATEDIFF(MINUTE, start_time, end_time) AS 'execution_time[min]'
FROM SSISDB.internal.execution_info
WHERE start_time >= @begindate
AND package_name = @pkgname -- Comment out this line for all packages
ORDER BY start_time

-- Average package execution time
SELECT folder_name, project_name, package_name,
AVG(DATEDIFF(MINUTE, start_time, end_time)) AS 'Avg execution_time[min]'
FROM SSISDB.internal.execution_info
WHERE start_time >= @begindate
AND package_name = @pkgname -- Comment out this line for all packages
GROUP BY folder_name,project_name,package_name

You can also use the catalog.executions view to get the same data, but since I saw the internal table, I jumped in. This was tested on SQL 2016, but I think it will still work in earlier versions. Hope this helps someone out!
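For anyone who would rather stay on the documented surface area, here is a hedged equivalent of the first query written against catalog.executions; the package name is the same placeholder used above.

-- Same runtime report using the documented catalog.executions view
DECLARE @begindate DATE = GETDATE() - 7
DECLARE @pkgname VARCHAR(100) = 'Your Package Name' -- Ex: Patient_Update.dtsx

SELECT e.folder_name, e.project_name, e.package_name,
CAST(e.start_time AS datetime) AS start_time,
DATEDIFF(MINUTE, e.start_time, e.end_time) AS 'execution_time[min]'
FROM SSISDB.catalog.executions e
WHERE e.start_time >= @begindate
AND e.package_name = @pkgname -- Comment out this line for all packages
ORDER BY e.start_time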

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Find Currently Executing Jobs Running Longer Than 30 Day Average

–By Lori Brown   @SQLSupahStah

I needed to update some of our long running job monitoring code to improve on the version that we have right now. I like this version because it uses msdb.dbo.syssessions (https://msdn.microsoft.com/en-us/library/ms175016.aspx) to validate that a job is actually running. I also wanted to know the percent difference between the current run duration and the average duration per job over the past 30 days. I decided to place the calculated average into a table variable and then join on it to get my results. I also used the IIF function (https://msdn.microsoft.com/en-us/library/hh213574.aspx) to help me avoid the divide by zero error that comes up when the average duration equals 0.

I also learned about a function that I had never heard of before, msdb.dbo.agent_datetime. Interestingly, I cannot find documentation for this function on MSDN; however, it has apparently been around for a while. I don't feel too bad about not knowing about this one since even super smart Pinal Dave only recently found it too (https://blog.sqlauthority.com/2015/03/13/sql-server-interesting-function-agent_datetime/).
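For context, agent_datetime simply converts the integer run_date and run_time columns stored in sysjobhistory into a real datetime. A quick hedged illustration:

-- Convert sysjobhistory's integer date/time columns into a datetime
SELECT TOP (5) j.name AS JobName, h.run_date, h.run_time,
msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_datetime
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
ORDER BY msdb.dbo.agent_datetime(h.run_date, h.run_time) DESC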

-- currently executing jobs running longer than 30 day avg runtime
declare @startdt date
declare @enddt date
declare @JobsAvgDuration table (job_id uniqueidentifier, JobName varchar(250), ThirtyDayAvgDurMin int)

set @startdt = getdate()-30
set @enddt = getdate()

insert into @JobsAvgDuration
select j.job_id, j.name as JobName,
AVG((h.run_duration/10000*3600 + (h.run_duration/100)%100*60 + h.run_duration%100 + 31 ) / 60) as ThirtyDayAvgDurMin
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobhistory h on (j.job_id = h.job_id)
where j.enabled = 1
and msdb.dbo.agent_datetime(h.run_date, 0) between @startdt and @enddt
group by j.job_id, j.name
order by j.name

select j.name as LongRunningJobName,
datediff(mi, a.start_execution_date, getdate()) as MinsRunning,
d.ThirtyDayAvgDurMin,
(datediff(mi, a.start_execution_date, getdate())-d.ThirtyDayAvgDurMin)*100/IIF(d.ThirtyDayAvgDurMin=0,1,d.ThirtyDayAvgDurMin) as PercentDiff
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobactivity a on (j.job_id = a.job_id)
join @JobsAvgDuration d on (d.job_id = j.job_id)
where a.session_id in (select session_id from msdb.dbo.syssessions) -- active session
and (a.start_execution_date IS NOT NULL and a.stop_execution_date IS NULL)
and datediff(mi, a.start_execution_date, getdate()) > d.ThirtyDayAvgDurMin

I had to force one of my monitoring jobs to run long so I could show you the results.

blog_20170119_1

All that is left is to put the results into a table or a report and I will be done. Let me know if you have a unique way of monitoring for jobs that run extraordinarily long. We would love to see other solutions!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Synchronize Logins on AlwaysOn Replicas

–By Lori Brown   @SQLSupahStah

At one of my clients who has an AlwaysOn failover cluster, I noticed while checking up on the secondary replica that it had several fewer Windows and SQL logins than the primary. I really did not want to tediously compare the logins on each, so I came up with a script that can be run periodically to script out any logins that need to be synched up between the replicas. I actually found most of the script that I needed on the SQLSoldier blog (http://www.sqlsoldier.com/wp/) in a stored procedure that was intended to synch logins between mirrored partners. Since AlwaysOn is Mirroring on steroids, I thought that it would work, and it did. I put my own touches on the proc by turning it into a standalone script that uses OPENROWSET to query the AG primary replica, and I changed it from automatically executing the statements for missing logins to simply printing them into a script that can be applied as needed.

And now….here is what you came here for….

-- Original logic from http://www.sqlsoldier.com/wp/sqlserver/transferring-logins-to-a-database-mirror
-- Sync Logins to AlwaysOn Replicas
--     Inputs: @PartnerServer - Target Instance (InstName or Machine\NamedInst or Instname,port)
--     Output: All Statements to create logins with SID and Password for both Windows and SQL logins
--             Will also add logins to server roles
--     Person executing this must be sysadmin
--     Ad Hoc Distributed Queries must be enabled for this to work without setting up a linked server
/*
-- Turn on Ad Hoc Distributed Queries so we don't have to set up a linked server
sp_configure 'show advanced options', 1
GO
reconfigure with override
go
sp_configure 'Ad Hoc Distributed Queries', 1
GO
reconfigure with override
go
*/

 

Use master;
Go
Declare @MaxID int,
@CurrID int,
@PartnerServer sysname,
@SQL nvarchar(max),
@LoginName sysname,
@IsDisabled int,
@Type char(1),
@SID varbinary(85),
@SIDString nvarchar(100),
@PasswordHash varbinary(256),
@PasswordHashString nvarchar(300),
@RoleName sysname,
@Machine sysname,
@PermState nvarchar(60),
@PermName sysname,
@Class tinyint,
@MajorID int,
@ErrNumber int,
@ErrSeverity int,
@ErrState int,
@ErrProcedure sysname,
@ErrLine int,
@ErrMsg nvarchar(2048)

SET @PartnerServer = 'InstName'

Declare @Logins Table (LoginID int identity(1, 1) not null primary key,
[Name] sysname not null,
[SID] varbinary(85) not null,
IsDisabled int not null,
[Type] char(1) not null,
PasswordHash varbinary(256) null)

Declare @Roles Table (RoleID int identity(1, 1) not null primary key,
RoleName sysname not null,
LoginName sysname not null)

Declare @Perms Table (PermID int identity(1, 1) not null primary key,
LoginName sysname not null,
PermState nvarchar(60) not null,
PermName sysname not null,
Class tinyint not null,
ClassDesc nvarchar(60) not null,
MajorID int not null,
SubLoginName sysname null,
SubEndPointName sysname null)

Set NoCount On;

If CharIndex('\', @PartnerServer) > 0 -- Check for Named Instance
Begin
Set @Machine = LEFT(@PartnerServer, CharIndex('\', @PartnerServer) - 1);
End
Else If CharIndex(',', @PartnerServer) > 0 -- Check for Instance with port in connection string
Begin
Set @Machine = LEFT(@PartnerServer, CharIndex(',', @PartnerServer) - 1);
End
Else
Begin
Set @Machine = @PartnerServer;
End

 

-- Get all Windows logins from principal server
-- using OPENROWSET and Windows Authentication
Set @SQL = 'Select a.* From OPENROWSET (''SQLNCLI'', ''Server='+@PartnerServer+';Trusted_Connection=yes;'', ''Select P.name, P.sid, P.is_disabled, P.type, L.password_hash
             From master.sys.server_principals P
             Left Join master.sys.sql_logins L On L.principal_id = P.principal_id
             Where P.type In (''''U'''', ''''G'''', ''''S'''')
             And P.name <> ''''sa''''
             And P.name Not Like ''''##%''''
             And CharIndex(''''' + @Machine + '\'''', P.name) = 0'')as a;'

--print @SQL

Insert Into @Logins (Name, SID, IsDisabled, Type, PasswordHash)
Exec sp_executesql @SQL;

-- Get all roles from principal server
-- using OPENROWSET and Windows Authentication
Set @SQL = 'Select a.* From OPENROWSET (''SQLNCLI'', ''Server='+@PartnerServer+';Trusted_Connection=yes;'', ''Select RoleP.name as RoleName, LoginP.name as LoginName
             From master.sys.server_role_members RM
             Inner Join master.sys.server_principals RoleP
             On RoleP.principal_id = RM.role_principal_id
             Inner Join master.sys.server_principals LoginP
             On LoginP.principal_id = RM.member_principal_id
             Where LoginP.type In (''''U'''', ''''G'''', ''''S'''')
             And LoginP.name <> ''''sa''''
             And LoginP.name Not Like ''''##%''''
             And RoleP.type = ''''R''''
             And CharIndex(''''' + @Machine + '\'''', LoginP.name) = 0'')as a;'

--print @SQL

Insert Into @Roles (RoleName, LoginName)
Exec sp_executesql @SQL;

-- Get all explicitly granted permissions
-- using OPENROWSET and Windows Authentication
Set @SQL = 'Select a.* From OPENROWSET (''SQLNCLI'', ''Server='+@PartnerServer+';Trusted_Connection=yes;'', ''Select P.name Collate database_default,
                    SP.state_desc, SP.permission_name, SP.class, SP.class_desc, SP.major_id,
                    SubP.name Collate database_default,
                    SubEP.name Collate database_default
             From master.sys.server_principals P
             Inner Join master.sys.server_permissions SP
             On SP.grantee_principal_id = P.principal_id
             Left Join master.sys.server_principals SubP
             On SubP.principal_id = SP.major_id And SP.class = 101
             Left Join master.sys.endpoints SubEP
             On SubEP.endpoint_id = SP.major_id And SP.class = 105
             Where P.type In (''''U'''', ''''G'''', ''''S'''')
             And P.name <> ''''sa''''
             And P.name Not Like ''''##%''''
             And CharIndex(''''' + @Machine + '\'''', P.name) = 0'')as a;'

--print @SQL

Insert Into @Perms (LoginName, PermState, PermName, Class, ClassDesc, MajorID, SubLoginName, SubEndPointName)
Exec sp_executesql @SQL;

 

Select @MaxID = Max(LoginID), @CurrID = 1
From @Logins;

While @CurrID <= @MaxID
Begin
Select @LoginName = Name,
@IsDisabled = IsDisabled,
@Type = [Type],
@SID = [SID],
@PasswordHash = PasswordHash
From @Logins
Where LoginID = @CurrID;

If Not Exists (Select 1 From sys.server_principals
Where name = @LoginName)
Begin
Set @SQL = 'Create Login ' + quotename(@LoginName)
If @Type In ('U', 'G')
Begin
Set @SQL = @SQL + ' From Windows;'
End
Else
Begin
Set @PasswordHashString = '0x' +
Cast('' As XML).value('xs:hexBinary(sql:variable("@PasswordHash"))', 'nvarchar(300)');

Set @SQL = @SQL + ' With Password = ' + @PasswordHashString + ' HASHED, ';

Set @SIDString = '0x' +
Cast('' As XML).value('xs:hexBinary(sql:variable("@SID"))', 'nvarchar(100)');

Set @SQL = @SQL + 'SID = ' + @SIDString + ';';
End

Print @SQL;

If @IsDisabled = 1
Begin
Set @SQL = 'Alter Login ' + quotename(@LoginName) + ' Disable;'
Print @SQL;
End
End
Set @CurrID = @CurrID + 1;
End

 

Select @MaxID = Max(RoleID), @CurrID = 1
From @Roles;

While @CurrID <= @MaxID
Begin
Select @LoginName = LoginName,
@RoleName = RoleName
From @Roles
Where RoleID = @CurrID;

If Not Exists (Select 1 From sys.server_role_members RM
Inner Join sys.server_principals RoleP
On RoleP.principal_id = RM.role_principal_id
Inner Join sys.server_principals LoginP
On LoginP.principal_id = RM.member_principal_id
Where LoginP.type In ('U', 'G', 'S')
And RoleP.type = 'R'
And RoleP.name = @RoleName
And LoginP.name = @LoginName)
Begin
Print 'Exec sp_addsrvrolemember @rolename = ''' + @RoleName + ''', @loginame = ''' + @LoginName + ''';';
End
Set @CurrID = @CurrID + 1;
End

 

Select @MaxID = Max(PermID), @CurrID = 1
From @Perms;

While @CurrID <= @MaxID
Begin
Select @PermState = PermState,
@PermName = PermName,
@Class = Class,
@LoginName = LoginName,
@MajorID = MajorID,
@SQL = PermState + space(1) + PermName + SPACE(1) +
Case Class When 101 Then 'On Login::' + QUOTENAME(SubLoginName)
When 105 Then 'On ' + ClassDesc + '::' + QUOTENAME(SubEndPointName)
Else '' End +
' To ' + QUOTENAME(LoginName) + ';'
From @Perms
Where PermID = @CurrID;

If Not Exists (Select 1 From sys.server_principals P
Inner Join sys.server_permissions SP On SP.grantee_principal_id = P.principal_id
Where SP.state_desc = @PermState
And SP.permission_name = @PermName
And SP.class = @Class
And P.name = @LoginName
And SP.major_id = @MajorID)
Begin
Print @SQL;
End
Set @CurrID = @CurrID + 1;
End

Set NoCount Off;
GO

 

/*
-- Turn off Ad Hoc Distributed Queries
sp_configure 'Ad Hoc Distributed Queries', 0
GO
reconfigure with override
go
*/

On my test AG replicas, you can see that there are two logins that are not present on the secondary. I need to add them so that if a failover occurs there will not be any login failures.

blog_20170105_1

Using the script above on SQLSRVTST2, I get the following output and can use that to create my users with their correct permissions. I had to set the @PartnerServer variable to SQLSRVTST1 in the script and enable Ad Hoc Distributed Queries.

blog_20170105_2

……

blog_20170105_3

blog_20170105_4

The nice thing is that the passwords and SIDs are also scripted for you. There is no need to install sp_hexadecimal and sp_help_revlogin in master. If you have AG replicas with a lot of logins, this can be a real timesaver for you. Hope you enjoy, and if you make any improvements, please share them with us!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Script to De-Orphan SQL Logins

–By Lori Brown   @SQLSupahStah

I am working with a client who is migrating from SQL 2012 to SQL 2016. It just so happens that while they have just a few databases to move, they also have a ton of SQL logins that need to be migrated. This is not an issue since I know to use sp_help_revlogin to script logins and passwords into a script that can be applied to the new SQL 2016. However, inevitably there are SQL logins that have been deleted from SQL but not from the database, or the databases get refreshed while testing on SQL 2016 and their users are left orphaned. I needed a script that would look at each database and give me a script that could be used to create or de-orphan these logins.

I figured that with a little Google-fu I would find someone who had already created a way to do this. And, while I did find a solution, I decided to give it my own twist. My solution is to have a script that I can run when needed that will output the statements that I need to create or de-orphan logins from all databases in an instance. I combined things from these two blogs:

http://www.sqlshack.com/creating-a-stored-procedure-to-fix-orphaned-database-users/

https://www.codeproject.com/articles/489617/create-a-cursor-using-dynamic-sql-query

Here is my code and a quick look at the output:

USE master
SET NOCOUNT ON
DECLARE @usercount INT
DECLARE @sqlstr1 NVARCHAR(1000)
DECLARE @sqlstr2 NVARCHAR(500)
DECLARE @dbname VARCHAR(128)
DECLARE @username SYSNAME
DECLARE @OrphUsers AS CURSOR

DECLARE db CURSOR FOR
SELECT name FROM sys.databases
WHERE name NOT IN ('master','tempdb', 'msdb', 'model')
AND is_read_only = 0 AND state = 0

OPEN db
FETCH NEXT FROM db INTO @dbname
WHILE @@FETCH_STATUS=0
BEGIN

SET @sqlstr1 = 'SET @orphanedusers = CURSOR
       FOR SELECT a.NAME
       FROM ['+@dbname+'].sys.database_principals a
       LEFT OUTER JOIN sys.server_principals b
             ON a.sid = b.sid
       WHERE (b.sid IS NULL)
             AND (a.principal_id > 4)
             AND (a.type_desc = ''SQL_USER'')

       OPEN @orphanedusers'

EXEC sys.sp_executesql @sqlstr1, N'@orphanedusers cursor output', @OrphUsers OUTPUT

FETCH NEXT FROM @OrphUsers INTO @username
WHILE @@FETCH_STATUS=0
BEGIN

SELECT @usercount = COUNT(*) FROM sys.server_principals WHERE name = @username

IF @usercount = 0
BEGIN
PRINT '--User '+@username+' does not exist and must be created. Please use a strong password.'
SET @sqlstr2 = 'USE [master] CREATE LOGIN ' + @username + ' WITH PASSWORD = ''ABC123'''
END
ELSE
BEGIN
PRINT '--User '+@username+' is an orphan.'
SET @sqlstr2 = 'USE ['+@dbname+'] ALTER USER ' + @username + ' WITH LOGIN = ' + @username
END

PRINT (@sqlstr2)
FETCH NEXT FROM @OrphUsers INTO @username
END
CLOSE @OrphUsers
DEALLOCATE @OrphUsers

FETCH NEXT FROM db INTO @dbname
END
CLOSE db
DEALLOCATE db

SET NOCOUNT OFF

blog_20161229_1

The output labels what needs to be done to make sure the users have access to their databases. If the user does not exist in SQL at all, a CREATE LOGIN statement is generated. If the user is simply orphaned, an ALTER USER statement is generated.

The only thing I did not do yet (mostly because my needs were centered around SQL logins) is to include Windows domain accounts in my logic. As soon as I add that to my script, I will repost. Or, if someone out there adds it, please send an update.
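In the meantime, if you want to experiment with that yourself, a hedged starting point is to widen the orphan check to Windows users and groups; note that orphaned Windows accounts would need CREATE LOGIN ... FROM WINDOWS rather than a password. This sketch is not part of the original script:

-- Orphaned users of any type (SQL, Windows user, Windows group) in the current database
SELECT dp.name AS OrphanedUser, dp.type_desc
FROM sys.database_principals dp
LEFT OUTER JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
AND dp.principal_id > 4
AND dp.type_desc IN ('SQL_USER', 'WINDOWS_USER', 'WINDOWS_GROUP');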

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Table-Valued Parameter Performance Using C# Data Tables

The following testing and resulting data came directly from a recent issue that was discovered at one of our client sites. After setting up SQL monitoring that uses a SQL trace to capture long running queries, certain processes would suddenly lock up and be unable to complete. Initially we thought the cause was everything except the SQL trace, until we finally turned off all trace captures and the problem went away. This was very concerning to us since either SQL trace or XEvents is the method that we generally use to capture query performance. Without a fix, our performance tuning attempts would be severely hamstrung and slowed down. This problem had to be fixed yesterday!

Another side effect that would happen when things locked up was that, despite having max server memory set, SQL would end up using substantially more memory than was allocated to it. At this client, developers were importing data from an application using C# and table-valued parameters, which should not have been an issue. After a lot of searching, we finally came across some comments by another SQL expert who mostly described our issue and recommended that any C# TVPs have their columns defined, or SQL could lock up if either trace or XEvents completed events are captured. When we checked the C# code, we found that the string columns being imported were not defined, in an effort to make the import code flexible. We were able to definitively fix this very obscure problem with just one line of C# code, shown in the post below. The end result is a FASTER import of data that can be captured in trace or XEvents. Nice!!

Since we had to put together this fix for a largely undocumented issue, we sincerely hope that any DBA or developer who runs into this problem can correct it much faster than we did. A link to download a zip file of all the code displayed in this post is included at the end. Enjoy!

********************************************

Table-Valued Parameter Performance Using C# Data Tables

— By Jeff Schwartz

Overview

Much has been written about the use of table-valued parameters (TVPs) and their performance. These narratives appear to be contradictory because some state that TVPs should be used only for small datasets, whereas others assert that TVPs scale very well. Adding to the confusion is the fact that most of the performance-related discussions focus on TVP use within T-SQL. Many of these examples employ either SQL trace or Extended Events (XEvents) to evaluate performance, but almost none of them discusses what can occur when C# data tables and SQL trace or XEvents are used simultaneously. This paper details the analysis and resolution of a situation that occurred at a customer site when an existing C# application suddenly began to run extremely slowly after a very lightweight SQL trace, which utilized a several-minute duration threshold and captured no frequent events, was activated.

Initial Research

Preliminary research indicated that under very specific conditions the combination of TVPs and SQL trace could result in unintended performance consequences, e.g., excessive SQL Server memory consumption to the point where SQL Server exhausted all memory on the server regardless of the maximum memory setting value. The research also suggested that string usage and lengths in unspecified locations might have an impact on this issue. The question of whether TVPs might have some record-count performance threshold above which performance would degrade was also examined. Although bulk insert was not used by the application, the research team also decided to compare bulk and TVP insertion speeds because that is another source of conflicting information.

Since very little literature exists that discusses the combination of TVPs and SQL trace, the only recourse involved creating test frameworks, performing actual data loads, and using internal application timers in conjunction with either SQL trace or XEvents to monitor performance.

Test Frameworks

1. Two frameworks were constructed: T-SQL-only, and C# calling a T-SQL stored procedure.

a. The T-SQL-only test loaded data into a TVP, which then called a stored procedure to perform the insertion into the final table (a minimal sketch of this pattern appears after this list).

b. The development team provided C# code that illustrated how the user application read a flat file into a C# data table, which was then passed as a TVP to a T-SQL stored procedure. This code was followed as closely as possible in the test framework.

2. A SQL Server table containing data captured from the sys.dm_db_index_operational_stats dynamic management view (DMV) supplied the data for testing because

a. The data table contained approximately 3.2 million records, thereby being large enough to stress the application code and SQL Server adequately. The largest data file provided by the application team was approximately 1.6 million records, so the testing could push well beyond current record levels.

b. Each data record contained 47 columns, which was wider in terms of data columns than any application table being loaded.

c. No string value columns existed in the data. This insured that user string data could not affect any interaction between the C# application and SQL Server, and that any string-related behavior was strictly due to the manner in which the application loaded the data table or the TVP, or the way the TVP parameter-passing was recorded in the SQL trace. All data columns were smallint, int, bigint, or date. This also insured exact record lengths so that any record count-related behavior was independent of record size. Some of the data files being loaded by the development team contained only fixed-width data types, whereas others contained at least a few variable length string columns.

3. The frameworks were designed to load various numbers of records using the same starting point to determine whether a performance threshold existed for TVPs, regardless of whether SQL trace (or XEvents) was active, as well as to determine whether even small numbers of records could be processed with the SQL trace (or XEvents) active.

4. Thirty-one different record levels, shown in Table 1, were used for testing to insure any record-count gaps were small. A separate CSV file of each length was created from the data cited in #2, always beginning with the first record.

Blog_20160901_1

Table 1: Record Counts used in Testing

5. Several methods of loading the flat file CSV data into C# data tables were developed and tested. The C# data table configurations were as follows:

a. Unlimited string length columns (initially used in the production C# application) – herein noted as NoStringMax (String #N/A on graphs)

b. Fixed length string columns of at most 30, 100, 500, or 1000 characters each – herein noted as StringMax (String <#> on graphs). Note: the development team stated that no column would exceed 1,000 characters in length.

c. Exact data type mapping so that the data table column types matched those of the receiving SQL Server table exactly – herein noted as Map

d. A common routine was used by options a, b, and c above to send the data table to SQL Server using a TVP.

e. The section of code that loaded the data table from the flat file was timed separately from the routine cited in d, which also was surrounded by its own timers. This enabled comparison of data load, TVP passing, and combined times.

f. The name of the CSV file is passed in as a parameter along with an adjusted version of the file name so the appropriate test identifiers can be written to the application log file for later analysis.

6. The tests were run with the following SQL Server monitoring options:

a. No SQL Server monitoring at all.

b. SQL trace monitoring including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

c. SQL Extended Events (XEvents) including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

7. All tests were performed on the same server to eliminate network and hardware variability. This server had a quad-core Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz processor, 32 GB of RAM, and a 500 GB SSD HD.

8. Various combinations of cold and warm SQL Server caches, cold and warm Windows caches, and SQL Server maximum memory limit sizes were tested. Some of the earlier attempts will be discussed in the Results section below. The final specifications were as follows:

a. No applications other than SQL Server, SSMS, and the C# application were active on the test server.

b. SQL Server’s maximum memory limit was set to 12 GB. This allowed SQL to go over its limit without distorting overall system behavior by exhausting server memory entirely.

c. Hot Windows file cache, i.e., a full run that loaded ALL of the flat file data into RAM was performed before the timing tests were run. This insured that the HD was involved minimally.

d. Hot SQL Server buffer cache, achieved by running the next series soon after the last run completed (with no intervening runs), and by running each sequence four times in immediate succession.
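To make the T-SQL-only framework from item 1a concrete (see the note there), here is a minimal hedged sketch of the pattern: a user-defined table type, a stored procedure that accepts it READONLY, and a batch that loads the TVP and passes it in. The object names, the column list, and the target table dbo.IndexStatsFinal are illustrative assumptions, not the objects used in the actual tests.

-- Hedged sketch of the TVP pattern used by the T-SQL-only framework (illustrative names only)
CREATE TYPE dbo.IndexStatsTVP AS TABLE (
database_id INT,
object_id INT,
index_id INT,
range_scan_count BIGINT
);
GO
CREATE PROCEDURE dbo.InsertIndexStats
@Rows dbo.IndexStatsTVP READONLY
AS
BEGIN
SET NOCOUNT ON;
-- dbo.IndexStatsFinal is an assumed destination table with matching columns
INSERT INTO dbo.IndexStatsFinal (database_id, object_id, index_id, range_scan_count)
SELECT database_id, object_id, index_id, range_scan_count
FROM @Rows;
END
GO
-- Load the TVP from the DMV and call the procedure
DECLARE @tvp dbo.IndexStatsTVP;
INSERT INTO @tvp (database_id, object_id, index_id, range_scan_count)
SELECT database_id, object_id, index_id, range_scan_count
FROM sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL);
EXEC dbo.InsertIndexStats @Rows = @tvp;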

C# Code

The NoStringMax routine cited in #5a in the previous section is shown in Figure 1.

Blog_20160901_2

Figure 1: Test C# Data Table Load Routine Using No String Length Maximums

The StringMax routine cited in #5b in the previous section is shown in Figure 2. The ONLY difference between this routine and the previous one is the addition of the MaxLength assignment that is highlighted below. This single line sets the maximum length of each data table column to the value of the iMaxStringLen parameter. As cited in #5b, this value ranged from 30 to 1,000 during testing.

Blog_20160901_3

Figure 2: Test C# Data Table Load Routine Using String Length Maximums

The Map routine cited in #5c in the previous section is shown in Figure 3 through Figure 6. Logically, this code is identical to that of #5a and #5b, but the individual column mappings make the routine much longer. Note: since no strings are used for the data table columns, the MaxLength parameter is unnecessary.

Blog_20160901_4

Figure 3: Test C# Data Table Load Routine Using Data Type Mapping – Part 1

Blog_20160901_5

Figure 4: Test C# Data Table Load Routine Using Data Type Mapping – Part 2

Blog_20160901_6

Figure 5: Test C# Data Table Load Routine Using Data Type Mapping – Part 3

Blog_20160901_7

Figure 6: Test C# Data Table Load Routine Using Data Type Mapping – Part 4

The routine cited in #5d in the previous section is shown in Figure 7.

Blog_20160901_8

Figure 7: C# Test Routine that Sends Data Table to SQL Server Using a TVP

Results

1. The T-SQL code alone, cited in Test Frameworks #1a, did not recreate the problem observed at the customer site when either SQL trace or Extended Events (XEvents) were used, so additional testing was unnecessary.

2. No problems occurred when the C# application was used in conjunction with SQL trace or XEvents as long as no completed events were captured. Therefore, capturing only specific trace events created the problems.

3. Further research showed that when the C# code was running, adding ad-hoc or batch completed events to SQL traces or XEvent sessions caused issues. Adding completed statement events did not change things appreciably.

4. Interestingly, the extreme misbehavior was triggered by the combination of using NoStringMax C# code and having SQL trace or XEvents capture ad-hoc or batch completed events. Although the StringMax and Map routines ran a little more slowly for certain record count levels when SQL trace or XEvents captured ad-hoc or batch completed events, the dramatic memory consumption issues did not occur at all when these methods were used.

5. Initially, testing employed a 28 GB maximum SQL Server memory setting, but as occurred at the customer site, when the problem arose, SQL Server consumed all the memory on the server, which caused the server fundamentally to stop working. Using this setting and the NoStringMax C# code, CSV files with record counts up to 25,000 could be processed without taking hours, failing, or causing the server to run completely out of memory. However, the application could not reach the 50,000 record level.

6. Since testing needed to process MUCH higher numbers of records, the decision was made to reduce SQL Server memory to 4 GB in an attempt to provide additional memory space for SQL Server memory overflow. Another reason for lowering the SQL Server memory allocation was to insure that the tests consumed all of SQL Server's buffer pool and that SQL Server had to operate against memory limits. This consideration, as well as the run-time variations, necessitated the four successive iterations for each combination.

7. Unfortunately, using the 4 GB setting, the lower-end NoStringMax C# code runs while capturing SQL trace ad-hoc or batch completed events caused application failures at approximately the 4,000 or 8,000 record levels. This clearly indicated an increase in SQL Server memory was required, so it was increased to 8 GB. Although the runs processed more records, they still crashed well before the 25,000 record level. Finally, moving the memory limit up to 12 GB enabled everything to run as before, effectively mimicking the 28 GB testing without running the risk of exhausting Windows memory.

8. Figure 8 highlights the NoStringMax C# code runs through the 25,000 record level while SQL trace or XEvents ad-hoc or batch completed events were being captured. It is evident that the run times were unacceptable. When the same test levels are viewed in Figure 9, the problem becomes abundantly clear. With SQL trace or XEvents capturing ad-hoc or batch completed events, the run times for the 10,000 record tests using the NoStringMax C# code ranged between 483 and 584 seconds. When ad-hoc or batch completed events were not captured, the run times were approximately 0.14 SECONDS! The values for all other tests, including NoStringMax C# code with SQL trace and XEvents ad-hoc or batch completed events off, are shown in Figure 9.

Blog_20160901_9

Figure 8: TVP Load Timing when Trace On or XEvents on

9. Although many lines appear in Figure 9, three groupings exist, and these are the most important. Group 1, which consists of dotted lines, includes all tests during which ad-hoc or batch completed events were not captured. Group 2, which consists of solid lines, includes all tests during which SQL trace captured ad-hoc or batch completed events. Group 3, which consists of dashed lines, includes all tests during which XEvents captured ad-hoc or batch completed events. Important note: The NoStringMax runs shown in Figure 8 are NOT in Figure 9 because of scaling considerations. Figure 9 highlights several notable issues. Most importantly, once a maximum string length is specified, performance improves even when SQL trace or XEvents ad-hoc or batch completed events are not captured. In addition, the terrible performance problems go away. Interestingly, the behaviors of the various StringMax and Map runs were almost identical through approximately 175,000 records and then again at about 800,000 records. In between, unexplained divergence occurs, but it is only a few seconds and occurred primarily between the 200,000 and 700,000 record levels. The pattern shown in Figure 9 was repeated in every test sequence conducted. It is noteworthy that the StringMax and Map versions of the routine outperformed the NoStringMax under all comparable trace or XEvent-related conditions, and at higher volumes, even when completed events were captured.

Blog_20160901_10

Figure 9: TVP Load Timing when Trace/XEvents off OR Trace or XEvents on and Mapped Data Table or Limited-Length String Values Used

10. Once the number of records exceeded two million, the load of the C# data table began to crash due to application memory exhaustion. Note: neither Windows nor SQL Server memory was consumed excessively during this phase, so the limitation was strictly application-related.

11. Figure 10 summarizes the overall insertion rates of all previously cited methods at the two million record level, in addition to the bulk insert method. This graph shows clearly that the worst performer was the unlimited string method, and that the best one was the bulk insert method. Note: these rates exclude the reading of the flat file into the C# data table as well as the creation of the file that was suitable for bulk insertion.

Blog_20160901_11

Figure 10: TVP versus Bulk Insert Records per Second

Conclusions

The use of unlimited-length C# strings in data tables not only performs worse when no SQL trace or XEvents ad-hoc or batch completed events are being captured, it performs dreadfully when they are. Their use can cause poor overall server performance if the server's memory is exhausted. Finally, their use prevents the ability to monitor SQL Server query performance using either SQL trace or XEvents. However, when a maximum string length is specified, regardless of its length, performance without SQL trace or XEvents improved, and the problematic interaction with SQL trace or XEvents was mitigated almost completely. Since the Map method is inefficient and limiting from an application development perspective, and its performance was not substantially better than with limited-length strings, its use does not appear to be beneficial.

No table-valued parameter scalability issues were observed with the possible exception of the 200,000 to 700,000 record range when performance inexplicably, and consistently, dipped. However, from 800,000 to 2 million, performance experienced no such dip. Interestingly, standard bulk insert performance exceeded that of TVPs when the same data was used.

Recommendations

Since the code to implement the maximum-length string involves only one line and the performance monitoring and improvement benefits are substantial, the <data table name>.MaxLength = <n> statement should be included in data table load routines as shown by the highlighted line in Figure 2. <n> should be the maximum expected length of any data column. Limiting string length will allow C# TVP data loads to run more efficiently and also enable customers to conduct normal query performance monitoring.

********************************************

The code shown in this post can be downloaded from here….  SQLRX_TVP_CSharp_Code

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!