
SQL Trace Flag 4199

— by Lori Brown @SQLSupahStah

Once again, I find things about SQL Server that I had no idea were even available when I actually have time to do some in-depth reading and research. Most of the time, my regular duties keep me so busy that I don’t have time to play with new SQL functionality, so it is very easy to miss things like all the interesting trace flags that are available.


I found TF 4199 while doing some research on SQL 2016. Trace flag 4199 is used to turn on all hotfix and cumulative update (CU) functionality after it is installed. So, if you had an issue that should have been resolved by installing a CU or SP but did not see that happen, you could turn on trace flag 4199 to force all of the fixes to become active. Fixes in CUs are apparently not turned on by default since they could affect execution plans. If you installed a specific hotfix, most of the time it came with its own trace flag that was needed to enable the fix. TF 4199 turns them all on.

To turn on a TF globally, add -T4199 to your SQL Server startup parameters.


To turn on a TF for your session use DBCC TRACEON (4199).

TF 4199 can be enabled in a specific query by using the OPTION clause with QUERYTRACEON.

SELECT col1, col2, f4 FROM MyTable WHERE f1 = 0 AND f2 = 1 OPTION (QUERYTRACEON 4199)
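
If you want to experiment, DBCC makes it easy to flip TF 4199 on globally without a restart and to check exactly which trace flags are active. A quick sketch (run from a sysadmin connection; flags enabled this way do not survive a restart unless you also add the startup parameter):

-- Enable TF 4199 for all sessions without a restart
DBCC TRACEON (4199, -1);

-- List every trace flag that is currently enabled and its scope
DBCC TRACESTATUS (-1);

-- Turn it back off globally if needed
DBCC TRACEOFF (4199, -1);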

In my defense, I have rarely installed hotfixes or CUs and have tended to install service packs, and so far (knock on wood) I have not had to use a trace flag to get the expected benefits.

In SQL 2016, TF 4199 behavior is enabled by default for databases at compatibility level 130.


For those of you who have not upgraded to SQL 2016, please be very cautious about enabling trace flags globally. Even with TF 4199, you can cause unexpected behavior to show up. If you do enable a trace flag, please have a good reason to do so. I have seen places (ahem** Microsoft ** ahem) that recommend TF 4199 be set globally but I would be very skeptical of doing this.

More info on trace flag 4199 can be found here: https://support.microsoft.com/en-us/kb/974006


SQL Server Statistics and Trace Flag 2371

— by Lori Brown @SQLSupahStah

Statistics are pretty important to SQL Server. Statistics are small objects that hold information about the distribution of data in a table. The SQL optimizer uses this information to create efficient ways to get data from the table, so if statistics are not taken care of, SQL may have a hard time running queries. It really needs this info to do its job.

I am a fan of updating statistics at least once a day. I prefer to use sp_updatestats (https://msdn.microsoft.com/en-us/library/ms173804.aspx) since by default it skips tables that have not had any changes. However, sp_updatestats will update stats if there have been ANY changes to a table, so some people think that it can incur too much overhead (CPU, IO); even so, it has served most databases well. However, (here comes the “But, but, but…”) if you don’t have a long enough window to run sp_updatestats, or if the default sample of 25% is not enough to give an accurate distribution of the data, then you need to come up with other solutions…especially for large tables. Large tables are always a problem when trying to set up maintenance.
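
For those large tables, one workable approach is a targeted job that updates only the statistics on the problem tables with a bigger sample. A minimal sketch (dbo.MyBigTable is a placeholder name, not from any real system):

-- Update all statistics on one large table with a full scan (most accurate, most IO)
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;

-- Or trade some accuracy for speed with an explicit sample size
UPDATE STATISTICS dbo.MyBigTable WITH SAMPLE 50 PERCENT;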

If you have AUTO_UPDATE_STATISTICS enabled on your database (as you should, unless you have a prevailing reason not to), statistics are updated automatically when enough data has changed and the table is large enough. Here is what triggers an automatic update of stats:

  • The table size has gone from 0 to >0 rows
  • The number of rows in the table when the statistics were gathered was 500 or less, and the colmodctr (hidden system column) of the leading column of the statistics object has changed by more than 500 since then
  • The table had more than 500 rows when the statistics were gathered, and the colmodctr (hidden system column) of the leading column of the statistics object has changed by more than 500 + 20% of the number of rows in the table when the statistics were gathered

If you think about this a bit, you can see how this is pretty good for small tables, but large tables…those might run into a sticky wicket. When you have a really large table, it can take a significant amount of changes before statistics are automatically updated.
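
To see how close your tables actually are to those thresholds, you can inspect the modification counters yourself. Here is a sketch using sys.dm_db_stats_properties, which is available in SQL 2008 R2 SP2 and SQL 2012 SP1 and up:

-- Show row counts, modification counters and last update time for user table stats
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatName,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;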


While researching some things in SQL 2016, I found that I had been unaware of trace flag 2371. In SQL 2016, TF 2371 is now on by default. However, as far back as SQL 2008 R2, you can set this trace flag and get the benefits.

So, what is it? Trace flag 2371 lowers the threshold at which SQL automatically updates statistics. This is especially important on larger tables, where statistics updates are not triggered fast enough because of the high number of changes that normally have to happen on a table before statistics are considered old by SQL. Old stats and SQL do not get along well. Apparently this was a HUGE problem for SAP, and it is the reason that the trace flag exists. With TF 2371, the threshold that triggers an automatic statistics update goes down dynamically, meaning there will be more frequent stats updates for larger tables, which should really help SQL create efficient execution plans. Yay!
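
Enabling it works like any other trace flag. A quick sketch (global only; add -T2371 to the startup parameters if you want it to survive a restart):

-- Turn on TF 2371 for the whole instance until the next restart
DBCC TRACEON (2371, -1);

-- Verify that it is on
DBCC TRACESTATUS (2371);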


By now you must be thinking…I am gonna turn on TF 2371 everywhere!!!


But, I really would not recommend it. If you have not been having performance problems without TF 2371 then you probably don’t need it. If it ain’t broke, don’t fix it! Turning on trace flags means that you are telling SQL to not use the regular paths through the SQL engine. When you turn on a trace flag it is a global setting and cannot be disabled for specific databases. And, if you have to open a ticket with Microsoft, they are likely going to be asking questions about why you have it turned on. Okay…so the risk on this one seems pretty low but you really should have a reason to enable it.

Sadly, the only way to know if statistics are the cause of slow queries seems to be to run into performance issues and then notice that performance improves when statistics are updated. You can take the time to capture execution plans both before and after the stats update so that you can verify, but that is about it.

Don’t forget that in SQL 2016, TF 2371 is enabled by default. That could be something that helps you decide to move to it.

I have to give a trackback to Kendra Little for an excellent post on statistics. You can get more on whys and ways to take care of your statistics from http://www.littlekendra.com/2016/04/18/updating-statistics-in-sql-server-maintenance-answers/


Setting up Disconnected Log Shipping

— by Lori Brown @SQLSupahStah

I have a few clients who have set up SQL instances to hold copies of databases from either vendors or other companies with whom they are actively sharing data. In these cases, since the source databases are on servers that are part of another company’s domain, the best way to make sure that data is kept up to date is to set up what I call disconnected log shipping. It is disconnected because the primary and secondary SQL instances have no access to each other, so setting up regular log shipping will not work.

The owners of the primary instance, in each of the times I have worked on something like this, have set up regular uploads of transaction log backups to an FTP site. My task is to download the .trn files and apply them to the database on our secondary instance. So far, I have had good luck using WinSCP (https://winscp.net/eng/index.php) and setting up some custom code to create my own log shipping, complete with a monitoring job that will alert if logs have not been applied within a specified time. WinSCP is free (but you really should consider donating) and has a very good knowledge base that will show you how to use it fully.

For the record, I know that I can use SQL’s log shipping stored procedures to set things up too, but each time I tried, something went wrong, forcing me to come up with a workaround. For instance, in my latest attempt, the restore job would constantly fail saying that something was NULL that was expecting a date. I don’t have the error anymore, so I am doing that one from memory. No matter how I changed things in the log shipping tables, I could not get past the error. After several days of no luck with the SQL log shipping procedures, I took a big step back and decided to just go around it.

After doing some research to see if anyone had posted a better way of setting up and monitoring disconnected log shipping, I found that almost everyone talks about using SQL’s functionality and there is just not much documented that covers how to set up your own solution. So, here is what I did.

WinSCP

First of all, I need to start with a disclaimer that I am NOT a WinSCP scripting expert. I was fortunate at one project where something like this had already been created so I used that for a template. For assistance on WinSCP, please look through their forums and online help.

Download and install WinSCP. I used all defaults and things are working great. Open WinSCP and use it to connect to the FTP site that contains the tlogs and full backups that need to be restored on your SQL instance.


Make sure to put the host key in the cache because you will need it later. You can download your full database backups then so that you will be ready to restore and get log shipping set up.

You will need some folders set up to hold your .trn files, WinSCP scripts and other things. I set up an FTPImport folder and subfolders to hold scripts, archives, the downloaded tlogs and some WinSCP logs too.


You can see that I have two scripts in the Scripts folder. Call_WinSCP.cmd is set up with the path to the WinSCP executable and provides paths to scripts and logs for troubleshooting. In mine, I am only giving the path to the scripts. I tried to put a comment on each command so that you will know what each line is going to do, so read through things carefully before trying the same on your server. I will eventually set up a job that will call the batch file and tell it to use Get_SQLRXDB_TlogBackups.scp.

Call_WinSCP.cmd

@Echo Off
REM Script to call WinSCP from Current Folder
REM Parameter %1 is name of WinSCP script file to run
REM Parameter %2 is uniquifier for log file names (e.g. date string YYYYMMDDHHmm)

Echo Running WinSCP with command Script

Date /t
Time /t

Echo WinSCP Command /script="E:\FTPImport\Scripts\%1.scp"
Echo WinSCP Command /Log="E:\FTPImport\WinSCPLogs\%2.log"

C:\"Program Files (x86)\WinSCP\winscp.com" /script="E:\FTPImport\Scripts\%1.scp" /log="E:\FTPImport\WinSCPLogs\%2.log"

Date /t
Time /t

 

Get_SQLRXDB_TlogBackups.scp

# WINSCP script file to connect to FTP site and download current T Log Backups

# Set script options for this transfer
Option echo on
Option batch on
Option confirm off
Option transfer binary

# Import folder for T-Log backups = E:\FTPImport\TLogs
# Source folder for T-Log backups = company.ftpsite.com /LogShipping/SQLRXDB
# Account name = MyFTPAccount
# Password = $tr0ngPwdH3r3

# Connect to server
Open sftp://MyFTPAccount:$tr0ngPwdH3r3@company.ftpsite.com -hostkey="ssh-rsa 2048 xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"

# Change remote working directory to the folder with the log backups and list it
cd LogShipping/SQLRXDB
pwd

# Change local working directory to the folder for log backups and list it
lcd E:\FTPImport\TLogs\SQLRXDB
lpwd

# Copy and then delete the T-Log files from the remote location
get -delete SQLRXDB_backup_*.trn

# Close connection and exit
Close
Exit

Set up Log Shipping

Now that I have WinSCP ready, I just need to do some normal set up for Log Shipping. That means that I have to restore my database, set up a job that will download tlog backup files from the FTP site, set up a job that will restore the downloaded tlog backup files and finally, set up a job that will let someone know if log shipping is out of sync.

First, restore the downloaded backup of the database and leave it in norecovery so that the transaction log backups can be applied to it later.

-- Restore database with norecovery
USE [master]
GO
RESTORE DATABASE [SQLRXDB]
FROM DISK = N'E:\FTPImport\FullBackups\SQLRXDB_full_backup.bak' WITH FILE = 1,
MOVE N'SQLRXDB_Data' TO N'E:\MSSQL\DATA\SQLRXDB.mdf',
MOVE N'SQLRXDB_Log' TO N'F:\MSSQL\LOGS\SQLRXDB_log.ldf',
NORECOVERY, NOUNLOAD, REPLACE, STATS = 5
GO

Next, set up a “copy” job that is very similar to the one created by the Log Shipping stored procedures. The copy job calls WinSCP in a CmdExec step and tells it to use the Get_SQLRXDB_TlogBackups.scp script. The command to call WinSCP is pretty straightforward.

Call E:\FTPImport\Scripts\Call_WinSCP.cmd Get_SQLRXDB_TLogBackups Get_TLogs_WinSCPSession > E:\FTPImport\WinSCPLogs\Call_WinSCP_JobStep.log

And, here is the job that I created. At this client, they want .trn files downloaded every hour.

-- Create job to download tlogs from FTP site
USE [msdb]
GO

/****** Object: Job [SQLRXDB_GetTLogs]   ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'SQLRXDB_GetTLogs',
@enabled=0,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Download SQLRXDB tlogs from FTP site',
@category_name=N'Log Shipping',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object: Step [Get SQLRXDB Tlog Files]   ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Get SQLRXDB Tlog Files',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'CmdExec',
@command=N'Call E:\FTPImport\Scripts\Call_WinSCP.cmd Get_SQLRXDB_TLogBackups Get_TLogs_WinSCPSession > E:\FTPImport\WinSCPLogs\Call_WinSCP_JobStep.log',
@output_file_name=N'E:\FTPImport\SQLRXDB_GetTLogs.log',
@flags=32,
@proxy_name=N'RunCmdExec'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'SQLRXDB_GetTlogs',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=8,
@freq_subday_interval=1,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20160813,
@active_end_date=99991231,
@active_start_time=500,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

I have some code that I have been using for a long time to read the contents of a folder and do things with the files it finds. In this case, I am using the code below to read the files that were downloaded from FTP and make a list so that some dynamic SQL can try to restore those files to the database. I put in some error handling that will hopefully tell me when a specific file cannot be applied, for troubleshooting purposes. In my case, the client wants to use the database for reporting, so I am restoring the files and then placing the database in STANDBY. Then, as a final step so that only newer files will be worked on, I move the restored files to an archive folder.

SET NOCOUNT ON

-- Variable declarations
DECLARE @CMD1 VARCHAR(5000)
DECLARE @CMD2 VARCHAR(5000)
DECLARE @CMD3 NVARCHAR(1000)
DECLARE @FilePath VARCHAR(200)
DECLARE @ArchivePath VARCHAR(200)
DECLARE @UndoFilePath VARCHAR(200)
DECLARE @File2Restore VARCHAR(128)
DECLARE @File2Move VARCHAR(128)
DECLARE @RestoreStmt NVARCHAR(1000)
DECLARE @error_msg VARCHAR(500)
DECLARE @ErrMsg VARCHAR(500)

-- Create the #OriginalFileList temporary table to hold the un-cleansed file list
CREATE TABLE #OriginalFileList (
Col1 VARCHAR(1000) NULL
)

-- Create the #ParsedFileList temporary table to hold the cleansed file list
CREATE TABLE #ParsedFileList (
PFLID INT PRIMARY KEY IDENTITY (1,1) NOT NULL,
DateTimeStamp DATETIME NOT NULL,
FileSize VARCHAR(50) NOT NULL,
FileName1 VARCHAR(255) NOT NULL
)

-- Initialize the variables
SELECT @CMD1 = ''
SELECT @CMD2 = ''
SELECT @CMD3 = ''
SELECT @FilePath = 'E:\FTPImport\TLogs\SQLRXDB\'
SELECT @ArchivePath = 'E:\FTPImport\Archives\SQLRXDB\'
SELECT @UndoFilePath = 'E:\MSSQL\Backup\ROLLBACK_UNDO_SQLRXDB.BAK'

-- Build the string to capture the file names in the restore location
SELECT @CMD1 = 'master.dbo.xp_cmdshell ' + char(39) + 'dir ' + @FilePath + '\*.*' + char(39)

-- Build the string to populate the #OriginalFileList temporary table
SELECT @CMD2 = 'INSERT INTO #OriginalFileList(Col1)' + char(13) + 'EXEC ' + @CMD1

-- Execute the string to populate the #OriginalFileList table
EXEC (@CMD2)

-- Delete unneeded data from the #OriginalFileList
DELETE FROM #OriginalFileList WHERE COL1 IS NULL
DELETE FROM #OriginalFileList WHERE COL1 LIKE '%Volume%'
DELETE FROM #OriginalFileList WHERE COL1 LIKE '%Directory%'
DELETE FROM #OriginalFileList WHERE COL1 LIKE '%<DIR>%'
DELETE FROM #OriginalFileList WHERE COL1 LIKE '%bytes%'

-- Populate the #ParsedFileList table with the final data
INSERT INTO #ParsedFileList (DateTimeStamp, FileSize, FileName1)
SELECT LTRIM(SUBSTRING(Col1, 1, 20)) AS 'DateTimeStamp',
LTRIM(SUBSTRING(Col1, 21, 18)) AS 'FileSize',
LTRIM(SUBSTRING(Col1, 40, 1000)) AS 'FileName1'
FROM #OriginalFileList

DECLARE TLogs2Restore CURSOR FOR
SELECT FileName1 FROM #ParsedFileList ORDER BY FileName1 ASC

-- Open the cursor
OPEN TLogs2Restore

FETCH NEXT FROM TLogs2Restore INTO @File2Restore

-- Loop through
WHILE (@@FETCH_STATUS = 0) BEGIN
BEGIN TRY
       SET @RestoreStmt = 'RESTORE LOG [SQLRXDB] FROM DISK = N'''+@FilePath+@File2Restore+''' WITH STANDBY = '''+@UndoFilePath+''''
       --print @RestoreStmt
       EXEC sp_executesql @RestoreStmt
END TRY
BEGIN CATCH
       SET @error_msg = error_message()
       SET @ErrMsg = 'Log file '+@File2Restore+' cannot be restored due to error: '+@error_msg
       PRINT @ErrMsg
END CATCH
FETCH NEXT FROM TLogs2Restore INTO @File2Restore
END

-- Close and deallocate the cursor
CLOSE TLogs2Restore
DEALLOCATE TLogs2Restore

-- Move restored tlogs to archive folder
DECLARE TLogs2Archive CURSOR FOR
SELECT FileName1 FROM #ParsedFileList ORDER BY FileName1 ASC

-- Open the cursor
OPEN TLogs2Archive

-- Loop through
WHILE (1=1)
BEGIN
       FETCH NEXT FROM TLogs2Archive INTO @File2Move
       IF @@FETCH_STATUS < 0 BREAK

       SET @CMD3 = 'EXEC master.dbo.xp_cmdshell ''ROBOCOPY.EXE '+@FilePath+' '+@ArchivePath+' '+@File2Move+' /MOV /XO /NP'''
       --print @CMD3
       EXEC sp_executesql @CMD3
END

-- Close and deallocate the cursor
CLOSE TLogs2Archive
DEALLOCATE TLogs2Archive

-- Drop the temporary tables
DROP TABLE #OriginalFileList
DROP TABLE #ParsedFileList

SET NOCOUNT OFF
GO
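
One prerequisite worth calling out: the script above shells out with xp_cmdshell, which is disabled by default. If it is off on your secondary instance, enable it first (requires sysadmin, and weigh it against your security policies):

-- Enable xp_cmdshell so the dir and ROBOCOPY calls will work
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;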

And, here is the above code put into a job….

-- Create job to restore downloaded tlogs and archive the tlogs when done
USE [msdb]
GO

/****** Object: Job [SQLRXDB_RestoreTlogs]   ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'SQLRXDB_RestoreTlogs',
@enabled=0,
@notify_level_eventlog=0,
@notify_level_email=0,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'No description available.',
@category_name=N'Log Shipping',
@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object: Step [Restore]   ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Restore',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'SET NOCOUNT ON

-- Variable declarations
DECLARE @CMD1 VARCHAR(5000)
DECLARE @CMD2 VARCHAR(5000)
DECLARE @CMD3 NVARCHAR(1000)
DECLARE @FilePath VARCHAR(200)
DECLARE @ArchivePath VARCHAR(200)
DECLARE @UndoFilePath VARCHAR(200)
DECLARE @File2Restore VARCHAR(128)
DECLARE @File2Move VARCHAR(128)
DECLARE @RestoreStmt NVARCHAR(1000)
DECLARE @error_msg VARCHAR(500)
DECLARE @ErrMsg VARCHAR(500)

-- Create the #OriginalFileList temporary table to hold the un-cleansed file list
CREATE TABLE #OriginalFileList (
Col1 VARCHAR(1000) NULL
)

-- Create the #ParsedFileList temporary table to hold the cleansed file list
CREATE TABLE #ParsedFileList (
PFLID INT PRIMARY KEY IDENTITY (1,1) NOT NULL,
DateTimeStamp DATETIME NOT NULL,
FileSize VARCHAR(50) NOT NULL,
FileName1 VARCHAR(255) NOT NULL
)

-- Initialize the variables
SELECT @CMD1 = ''''
SELECT @CMD2 = ''''
SELECT @CMD3 = ''''
SELECT @FilePath = ''E:\FTPImport\TLogs\SQLRXDB\''
SELECT @ArchivePath = ''E:\FTPImport\Archives\SQLRXDB\''
SELECT @UndoFilePath = ''E:\MSSQL\Backup\ROLLBACK_UNDO_SQLRXDB.BAK''

-- Build the string to capture the file names in the restore location
SELECT @CMD1 = ''master.dbo.xp_cmdshell '' + char(39) + ''dir '' + @FilePath + ''\*.*'' + char(39)

-- Build the string to populate the #OriginalFileList temporary table
SELECT @CMD2 = ''INSERT INTO #OriginalFileList(Col1)'' + char(13) + ''EXEC '' + @CMD1

-- Execute the string to populate the #OriginalFileList table
EXEC (@CMD2)

-- Delete unneeded data from the #OriginalFileList
DELETE FROM #OriginalFileList WHERE COL1 IS NULL
DELETE FROM #OriginalFileList WHERE COL1 LIKE ''%Volume%''
DELETE FROM #OriginalFileList WHERE COL1 LIKE ''%Directory%''
DELETE FROM #OriginalFileList WHERE COL1 LIKE ''%<DIR>%''
DELETE FROM #OriginalFileList WHERE COL1 LIKE ''%bytes%''

-- Populate the #ParsedFileList table with the final data
INSERT INTO #ParsedFileList (DateTimeStamp, FileSize, FileName1)
SELECT LTRIM(SUBSTRING(Col1, 1, 20)) AS ''DateTimeStamp'',
LTRIM(SUBSTRING(Col1, 21, 18)) AS ''FileSize'',
LTRIM(SUBSTRING(Col1, 40, 1000)) AS ''FileName1''
FROM #OriginalFileList

DECLARE TLogs2Restore CURSOR FOR
SELECT FileName1 FROM #ParsedFileList ORDER BY FileName1 ASC

-- Open the cursor
OPEN TLogs2Restore

FETCH NEXT FROM TLogs2Restore INTO @File2Restore

-- Loop through
WHILE (@@FETCH_STATUS = 0) BEGIN
BEGIN TRY
       SET @RestoreStmt = ''RESTORE LOG [SQLRXDB] FROM DISK = N''''''+@FilePath+@File2Restore+'''''' WITH STANDBY = ''''''+@UndoFilePath+''''''''
       --print @RestoreStmt
       EXEC sp_executesql @RestoreStmt
END TRY
BEGIN CATCH
       SET @error_msg = error_message()
       SET @ErrMsg = ''Log file ''+@File2Restore+'' cannot be restored due to error: ''+@error_msg
       PRINT @ErrMsg
END CATCH
FETCH NEXT FROM TLogs2Restore INTO @File2Restore
END

-- Close and deallocate the cursor
CLOSE TLogs2Restore
DEALLOCATE TLogs2Restore

-- Move restored tlogs to archive folder
DECLARE TLogs2Archive CURSOR FOR
SELECT FileName1 FROM #ParsedFileList ORDER BY FileName1 ASC

-- Open the cursor
OPEN TLogs2Archive

-- Loop through
WHILE (1=1)
BEGIN
       FETCH NEXT FROM TLogs2Archive INTO @File2Move
       IF @@FETCH_STATUS < 0 BREAK

       SET @CMD3 = ''EXEC master.dbo.xp_cmdshell ''''ROBOCOPY.EXE ''+@FilePath+'' ''+@ArchivePath+'' ''+@File2Move+'' /MOV /XO /NP''''''
       --print @CMD3
       EXEC sp_executesql @CMD3
END

-- Close and deallocate the cursor
CLOSE TLogs2Archive
DEALLOCATE TLogs2Archive

-- Drop the temporary tables
DROP TABLE #OriginalFileList
DROP TABLE #ParsedFileList

SET NOCOUNT OFF',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'RestoreSched',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=8,
@freq_subday_interval=1,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20160824,
@active_end_date=99991231,
@active_start_time=1500,
@active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

The last thing I need is something that will tell me when transaction logs have not been restored. This is supposed to resemble the LS_Alert job that is normally created for log shipping. In my client’s case, they wanted to know if data was out of synch by more than 2 hours. Since regular log shipping is not set up and nothing is automatically being logged, I instead use xp_readerrorlog to look for messages saying that logs have been restored for the target database within a 2-hour window. If it does not find those messages in the logs, an e-mail notification is sent.

-- Create a homegrown log shipping monitoring job to let you know when it is out of synch
USE [msdb]
GO

/****** Object: Job [SQLRXDB LogShipping Monitor]   ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'SQLRXDB LogShipping Monitor',
@enabled=0,
@notify_level_eventlog=0,
@notify_level_email=2,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Scrapes the SQL log looking for log restored statements. If none found in the past N hours email alert will be sent.',
@category_name=N'Log Shipping',
@owner_login_name=N'sa',
@job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/****** Object: Step [Monitor Log Shipping]   ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Monitor SQLRXDB Log Shipping',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'DECLARE @TimeStart DATETIME;
DECLARE @TimeEnd DATETIME;
DECLARE @CntLogRestoredMsgs INT;
DECLARE @msgstr VARCHAR(500);
SET @TimeStart = DATEADD(hh, -2, GETDATE()); -- 2 hours
SET @TimeEnd = GETDATE();

CREATE TABLE #sqlerrlog
       (PK_Col int IDENTITY(1,1) PRIMARY KEY
       , LogDate datetime
       , ProcessInfo varchar(15)
       , [Text] varchar(max) );

-- Read the current sql server log
INSERT INTO #sqlerrlog
       EXEC xp_readerrorlog 0, 1, N''Log was restored'', N''SQLRXDB'', @TimeStart, @TimeEnd;

-- Read archive #1 sql server log in case of recent restart
INSERT INTO #sqlerrlog
       EXEC xp_readerrorlog 1, 1, N''Log was restored'', N''SQLRXDB'', @TimeStart, @TimeEnd;

SELECT @CntLogRestoredMsgs = COUNT(*) FROM #sqlerrlog

IF @CntLogRestoredMsgs = 0
BEGIN
       SET @msgstr = ''No transaction logs have been restored for the SQLRXDB database on OurServerName in the past N hours.''
              + CHAR(10) + ''Investigate as soon as possible!''

       EXEC msdb.dbo.sp_send_dbmail
              @profile_name = ''DBMail'',
              @recipients = ''Admins@company.com'',
              @subject = ''OurServerName Log Shipping Alert'',
              @body = @msgstr
END

-- Cleanup
DROP TABLE #sqlerrlog',
@database_name=N'master',
@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'LSMonitorSchedule',
@enabled=1,
@freq_type=4,
@freq_interval=1,
@freq_subday_type=8,
@freq_subday_interval=1,
@freq_relative_interval=0,
@freq_recurrence_factor=0,
@active_start_date=20160710,
@active_end_date=99991231,
@active_start_time=80000,
@active_end_time=20001
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

And…BADABING!! You now have disconnected log shipping set up. Set off the jobs and work out any path or naming issues. Once I had all the pieces going, it has continued to work like a charm! I hope that someone finds this helpful since I could not find anywhere that had all parts of this process documented.


Deadlock Info From system_health Information

— by Lori Brown @SQLSupahStah

Periodically, all DBAs are going to run into code that fails due to a deadlock. Deadlocks happen when two or more sessions each try to access a resource that the other has locked. In short, something has to give, so SQL usually chooses one session to be the victim, meaning that one is the loser. Deadlocks can be avoided by designing your databases well and making queries as quick and lightweight as possible. The faster data is retrieved or locked and unlocked, the better.

But, how do you figure out what is causing deadlocks if you suddenly run into them? An easy way is to use the information that SQL is already capturing in the default system_health session.

Here are queries to retrieve this information from SQL 2008 R2 and from SQL 2012 and up.

-- Get deadlock graphs in SQL 2008
SELECT CAST(event_data.value('(event/data/value)[1]',
            'varchar(max)') AS XML) AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS event_data
       FROM ( -- Cast the target_data to XML
              SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE name = 'system_health'
                AND target_name = 'ring_buffer'
            ) AS Data -- Split out the event nodes
       CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]')
            AS XEventData ( XEvent )
     ) AS tab ( event_data );

 

 

-- Get deadlock graphs in SQL 2012 and up
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM ( SELECT XEvent.query('.') AS XEvent
       FROM ( SELECT CAST(target_data AS XML) AS TargetData
              FROM sys.dm_xe_session_targets st
              JOIN sys.dm_xe_sessions s
                ON s.address = st.event_session_address
              WHERE s.name = 'system_health'
                AND st.target_name = 'ring_buffer'
            ) AS Data
       CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]')
            AS XEventData ( XEvent )
     ) AS src;
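
One more thing worth knowing: the ring buffer only holds so much, so older deadlocks eventually age out of it. On SQL 2012 and up, system_health also writes to event files, and the same deadlock graphs can usually still be pulled from those. A sketch (the files are read from the default LOG directory):

-- Get deadlock graphs from the system_health event files in SQL 2012 and up
SELECT xed.event_xml.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM ( SELECT CAST(event_data AS XML) AS event_xml
       FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
       WHERE object_name = 'xml_deadlock_report'
     ) AS xed;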

Queries originally from Jonathan Kehayias (One super smart SQL guy!) https://www.simple-talk.com/author/jonathan-kehayias/


Table-Valued Parameter Performance Using C# Data Tables

The following testing and resulting data is a direct result of a recent issue that was discovered at one of our client sites.  After we set up SQL monitoring that uses a SQL trace to capture long-running queries, certain processes would suddenly lock up and be unable to complete.  Initially we thought it was everything except the SQL trace causing the issue, until we finally turned off all trace captures and the problem went away.  This was very concerning to us since either SQL trace or XEvents is the method that we generally use to capture query performance. Without a fix, our performance tuning attempts would be severely hamstrung and slowed down. This problem had to be fixed yesterday!

Another side effect when things locked up was that, despite having max server memory set, SQL would end up using substantially more memory than was allocated to it.  At this client, developers were importing data from an application using C# and table-valued parameters, which should not have been an issue. After a lot of searching, we finally came across some comments by another SQL expert who mostly described our issue and recommended that any C# TVPs should have their columns defined, or SQL could lock up if either trace or XEvents completed events are captured.  When we checked the C# code, we found that the string columns being imported were not defined, in an effort to make the import code flexible.  We were able to definitively fix this very obscure problem with just one line of C# code, which is in the post below.  The end result is a FASTER import of data that can be captured in trace or XEvents.  Nice!!

Since we had to put together this fix for a very undocumented issue, we sincerely hope that any DBA or developer who runs into this problem can correct it much faster than we did.  A link to download a zip file of all code displayed in this post is included at the end.  Enjoy!

********************************************

Table-Valued Parameter Performance Using C# Data Tables

— By Jeff Schwartz

Overview

Much has been written about the use of table-valued parameters (TVP) and their performance. These narratives appear to be contradictory because some state that TVPs should be used only for small datasets, whereas others assert that TVPs scale very well. Adding to the confusion is the fact that most of the performance-related discussions have focused on TVP use within T-SQL. Many of these examples employ either SQL trace or Extended Events (XEvents) to evaluate performance, but almost none of them discusses what can occur when C# data tables and SQL trace or XEvents are used simultaneously. This paper details the analysis and resolution of a situation that occurred at a customer site when an existing C# application suddenly began to run extremely slowly after a very lightweight SQL trace, one that utilized a several-minute duration threshold and captured no frequent events, was activated.

Initial Research

Preliminary research indicated that under very specific conditions the combination of TVPs and SQL trace could result in unintended performance consequences, e.g., excessive SQL Server memory consumption to the point where SQL Server exhausted all memory on the server regardless of the maximum memory setting value. The research also suggested that string usage and lengths in unspecified locations might have an impact on this issue. The question of whether TVPs might have some record-count performance threshold above which performance would degrade was also examined. Although bulk insert was not used by the application, the research team also decided to compare bulk and TVP insertion speeds because that is another source of conflicting information.

Since very little literature exists that discusses the combination of TVPs and SQL trace, the only recourse involved creating test frameworks, performing actual data loads, and using internal application timers in conjunction with either SQL trace or XEvents to monitor performance.

Test Frameworks

1. Two frameworks were constructed: T-SQL-only and C# calling a T-SQL stored procedure.

a. The T-SQL-only test loaded data into a TVP, which then called a stored procedure to perform the insertion into the final table.

b. The development team provided C# code that illustrated how the user application read a flat file into a C# data table, which was then passed as a TVP to a T-SQL stored procedure. This code was followed as closely as possible in the test framework (the general shape of the T-SQL side is sketched just after this list).

2. A SQL Server table containing data captured from the sys.dm_db_index_operational_stats Data Management View (DMV) supplied the data for testing because

a. The data table contained approximately 3.2 million records, thereby being large enough to stress the application code and SQL Server adequately. The largest data file provided by the application team was approximately 1.6 million records, so the testing could push well beyond current record levels.

b. Each data record contained 47 columns, which was wider in terms of data columns than any application table being loaded.

c. No string value columns existed in the data. This ensured that user string data could not affect any interaction between the C# application and SQL Server, and that any string-related behavior was strictly due to the manner in which the application loaded the data table or the TVP, or the way the TVP parameter-passing was recorded in the SQL trace. All data columns were smallint, int, bigint, or date. This also ensured exact record lengths so that any record count-related behavior was independent of record size. Some of the data files being loaded by the development team contained only fixed-width data types, whereas others contained at least a few variable-length string columns.

3. The frameworks were designed to load various numbers of records using the same starting point to determine whether a performance threshold existed for TVPs, regardless of whether SQL trace (or XEvents) was active, as well as to determine whether even small numbers of records could be processed with the SQL trace (or XEvents) active.

4. Thirty-one different record levels, shown in Table 1, were used for testing to ensure any record-count gaps were small. A separate CSV file of each length was created from the data cited in #2, always beginning with the first record.


Table 1: Record Counts used in Testing

5. Several methods of loading the flat file CSV data into C# data tables were developed and tested. The C# data table configurations were as follows:

a. Unlimited string length columns (initially used in the production C# application) – herein noted as NoStringMax (String #N/A on graphs)

b. Fixed length string columns of at most 30, 100, 500, or 1000 characters each – herein noted as StringMax (String <#> on graphs). Note: the development team stated that no column would exceed 1,000 characters in length.

c. Exact data type mapping so that the data table column types matched those of the receiving SQL Server table exactly – herein noted as Map

d. A common routine was used by options a, b, and c above to send the data table to SQL Server using a TVP.

e. The section of code that loaded the data table from the flat file was timed separately from the routine cited in d, which also was surrounded by its own timers. This enabled comparison of data load, TVP passing, and combined times.

f. The name of the CSV file is passed in as a parameter along with an adjusted version of the file name so the appropriate test identifiers can be written to the application log file for later analysis.

6. The tests were run with the following SQL Server monitoring options:

a. No SQL Server monitoring at all.

b. SQL trace monitoring including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

c. SQL Extended Events (XEvents) including all normally captured events except for completed ones. These were handled as follows:

i. Exclude all completed events

ii. Include ad-hoc and batch completed events with a two-second duration threshold

iii. Include ad-hoc and batch completed events as well as completed statement events with a two-second duration threshold

7. All tests were performed on the same server to eliminate network and hardware variability. This server had a quad-core Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz processor, 32 GB of RAM, and a 500 GB SSD HD.

8. Various combinations of cold and warm SQL Server caches, cold and warm Windows caches, and SQL Server maximum memory limit sizes were tested. Some of the earlier attempts will be discussed in the Results section below. The final specifications were as follows:

a. No applications other than SQL Server, SSMS, and the C# application were active on the test server.

b. SQL Server’s maximum memory limit was set to 12 GB. This allowed SQL to go over its limit without distorting overall system behavior by exhausting server memory entirely.

c. Hot Windows file cache, i.e., a full run that loaded ALL of the flat file data into RAM was performed before the timing tests were run. This ensured that the HD was involved minimally.

d. Hot SQL Server buffer cache, achieved by running the next series soon after the last run completed (with no intervening runs), and by running each sequence four times in immediate succession.
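
For reference, the T-SQL side of a TVP load is small. The sketch below shows the general shape of the table type and stored procedure that the frameworks called; the object names and the three-column layout are placeholders for illustration, not the actual 47-column test objects:

-- User-defined table type that the C# data table is mapped to
CREATE TYPE dbo.IndexStatsTVP AS TABLE
( DatabaseID  INT    NOT NULL,
  ObjectID    BIGINT NOT NULL,
  CaptureDate DATE   NOT NULL );
GO

-- Stored procedure that receives the TVP (TVP parameters must be READONLY)
CREATE PROCEDURE dbo.InsertIndexStats
       @Rows dbo.IndexStatsTVP READONLY
AS
BEGIN
       SET NOCOUNT ON;
       INSERT INTO dbo.IndexStats (DatabaseID, ObjectID, CaptureDate)
       SELECT DatabaseID, ObjectID, CaptureDate
       FROM @Rows;
END
GO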

C# Code

The NoStringMax routine cited in #5a in the previous section is shown in Figure 1.


Figure 1: Test C# Data Table Load Routine Using No String Length Maximums

The StringMax routine cited in #5b in the previous section is shown in Figure 2. The ONLY difference between this routine and the previous one is the addition of the MaxLength assignment that is highlighted below. This single line sets the maximum length of each data table column to the value of the iMaxStringLen parameter. As cited in #5b, this value ranged from 30 to 1,000 during testing.


Figure 2: Test C# Data Table Load Routine Using String Length Maximums

The Map routine cited in #5c in the previous section is shown in Figure 3 through Figure 6. Logically, this code is identical to that of #5a and #5b, but the individual column mappings make the routine much longer. Note: since no strings are used for the data table columns, the MaxLength parameter is unnecessary.


Figure 3: Test C# Data Table Load Routine Using Data Type Mapping – Part 1


Figure 4: Test C# Data Table Load Routine Using Data Type Mapping – Part 2


Figure 5: Test C# Data Table Load Routine Using Data Type Mapping – Part 3


Figure 6: Test C# Data Table Load Routine Using Data Type Mapping – Part 4

The routine cited in #5d in the previous section is shown in Figure 7.


Figure 7: C# Test Routine that Sends Data Table to SQL Server Using a TVP

Results

1. The T-SQL code alone, cited in Test Frameworks #1a, did not recreate the problem observed at the customer site when either SQL trace or Extended Events (XEvents) were used, so additional testing was unnecessary.

2. No problems occurred when the C# application was used in conjunction with SQL trace or XEvents as long as no completed events were captured. Therefore, capturing only specific trace events created the problems.

3. Further research showed that when the C# code was running, adding ad-hoc or batch completed events to SQL traces or XEvent sessions caused issues. Adding completed statement events did not change things appreciably.

4. Interestingly, the extreme misbehavior was triggered by the combination of using NoStringMax C# code and having SQL trace or XEvents capture ad-hoc or batch completed events. Although the StringMax and Map routines ran a little more slowly for certain record count levels when SQL trace or XEvents captured ad-hoc or batch completed events, the dramatic memory consumption issues did not occur at all when these methods were used.

5. Initially, testing employed a 28 GB maximum SQL Server memory setting, but as occurred at the customer site, when the problem arose, SQL Server consumed all the memory on the server, which caused the server fundamentally to stop working. Using this setting and the NoStringMax C# code, CSV files with record counts up to 25,000 could be processed without taking hours, failing, or causing the server to run completely out of memory. However, the application could not reach the 50,000 record level.

6. Since testing needed to process MUCH higher numbers of records, the decision was made to reduce SQL Server memory to 4 GB in an attempt to provide additional memory space for SQL Server memory overflow. Another reason for lowering the SQL Server memory allocation was to ensure that the tests consumed all of SQL Server’s buffer pool and that SQL Server had to operate against memory limits. This consideration, as well as the run-time variations, necessitated the four successive iterations for each combination.

7. Unfortunately, using the 4 GB setting, the lower-end NoStringMax C# code runs while capturing SQL trace ad-hoc or batch completed events caused application failures at approximately the 4,000 or 8,000 record levels. This clearly indicated an increase in SQL Server memory was required, so it was increased to 8 GB. Although the runs processed more records, they still crashed well before the 25,000 record level. Finally, moving the memory limit up to 12 GB enabled everything to run as before, effectively mimicking the 28 GB testing without running the risk of exhausting Windows memory.

8. Figure 8 highlights the NoStringMax C# code runs through the 25,000 record level while SQL trace or XEvents ad-hoc or batch completed events were being captured. It is evident that the run times were unacceptable. When the same test levels are viewed in Figure 9, the problem becomes abundantly clear. With SQL trace or XEvents capturing ad-hoc or batch completed events, the run times for the 10,000 record tests using the NoStringMax C# code ranged between 483 and 584 seconds. When ad-hoc or batch completed events were not captured, the run times were approximately 0.14 SECONDS! The values for all other tests, including NoStringMax C# code with SQL trace and XEvents ad-hoc or batch completed events off, are shown in Figure 9.

Figure 8: TVP Load Timing when Trace On or XEvents on

9. Although many lines appear in Figure 9, three groupings exist and these are the most important. Group 1, which consists of dotted lines, includes all tests during which ad-hoc or batch completed events were not captured. Group 2, which consists of solid lines, includes all tests during which SQL trace captured ad-hoc or batch completed events. Group 3, which consists of dashed lines, includes all tests during which XEvents captured ad-hoc or batch completed events. Important note: the NoStringMax runs shown in Figure 8 are NOT in Figure 9 because of scaling considerations. Figure 9 highlights several notable issues. Most importantly, once a maximum string length is specified, performance improves even when SQL trace or XEvents ad-hoc or batch completed events are not captured. In addition, the terrible performance problems go away. Interestingly, the behaviors of the various StringMax and Map runs were almost identical through approximately 175,000 records and then again at about 800,000 records. In between, unexplained divergence occurs, but it is only a few seconds and occurred primarily between the 200,000 and 700,000 record levels. The pattern shown in Figure 9 was repeated in every test sequence conducted. It is noteworthy that the StringMax and Map versions of the routine outperformed the NoStringMax version under all comparable trace or XEvent-related conditions, and at higher volumes, even when completed events were captured.

Figure 9: TVP Load Timing when Trace/XEvents off OR Trace or XEvents on and Mapped Data Table or Limited-Length String Values Used

10. Once the number of records exceeded two million, the load of the C# data table began to crash due to application memory exhaustion. Note: neither Windows nor SQL Server memory was consumed excessively during this phase, so the limitation was strictly application-related.

11. Figure 10 summarizes the overall insertion rates of all previously cited methods at the two million record level, in addition to the bulk insert method. This graph shows clearly that the worst performer was the unlimited string method, and that the best one was the bulk insert method. Note: these rates exclude the reading of the flat file into the C# data table as well as the creation of the file that was suitable for bulk insertion.

Figure 10: TVP versus Bulk Insert Records per Second

Conclusions

The use of unlimited-length C# strings with data tables not only performs worse when no SQL trace or XEvents ad-hoc or batch completed events are being captured, it performs dreadfully when they are. Their use can cause poor overall server performance if the server’s memory is exhausted. Finally, their use prevents the ability to monitor SQL Server query performance using either SQL trace or XEvents. However, when a maximum string length is specified, regardless of its length, performance without SQL trace or XEvents improved and the problematic interaction with SQL trace or XEvents was mitigated almost completely. Since the Map method is inefficient and limiting from an application development perspective, and its performance was not substantially better than with limited-length strings, its use does not appear to be beneficial.

No table-valued parameter scalability issues were observed with the possible exception of the 200,000 to 700,000 record range when performance inexplicably, and consistently, dipped. However, from 800,000 to 2 million, performance experienced no such dip. Interestingly, standard bulk insert performance exceeded that of TVPs when the same data was used.

Recommendations

Since the code to implement the maximum-length string involves only one line and the performance monitoring and improvement benefits are substantial, the <data table name>.MaxLength = <n> statement should be included in data table load routines as shown by the highlighted line in Figure 2. <n> should be the maximum expected length of any data column. Limiting string length will allow C# TVP data loads to run more efficiently and also enable customers to conduct normal query performance monitoring.

********************************************

The code shown in this post can be downloaded from here….  SQLRX_TVP_CSharp_Code

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!