
How Enabling Change Tracking Can Cause Backup Failure

— By Lori Brown @SQLSupahStah

Recently I had a client contact me because, all of a sudden, the backups for a couple of databases started to fail. While those backups had been running successfully in the past, they suddenly started failing with this message:


With a little more digging he found more info. Here is more verbiage from the error message:


Now we have gotten somewhere. This error is commonly associated with databases that have change tracking enabled. I found that Microsoft Dynamics and Management Reporter application databases require change tracking, and it is commonly enabled on about 21 tables. Sure enough, the databases that had the backup failures were Dynamics databases. Apparently the data in the sys.syscommittab system table can become corrupted in some way. The recommended fix is to disable change tracking, take a backup of the databases, then enable change tracking once again. Backups should be successful after that.


Since change tracking is not just for Dynamics databases, and could potentially be found on tons of tables, I thought that I would put together a bit of code that finds all the tables that have change tracking enabled and creates all of the disable\enable statements necessary to fix this issue and get backups working again.

I tested this out on my local system. Here is how I did it.

Enable change tracking on my database.
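In T-SQL, enabling change tracking at the database level looks something like this (the database name and retention settings below are examples only; use your own values):

-- enable change tracking for the test database
ALTER DATABASE [MyTestDB] SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);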


Create a few tables. Some have change tracking enabled and some do not. I also used different settings for TRACK_COLUMNS_UPDATED so that I could be sure that all settings were correctly captured and that the enable statements would put everything back as it had been initially found.



CREATE TABLE [dbo].[CTTest](
[Column1] [int] NOT NULL,
[Column2] [varchar](50) NULL,
[Column3] [date] NULL,
CONSTRAINT [PK_CTTest] PRIMARY KEY CLUSTERED
([Column1] ASC)
)
GO

CREATE TABLE [dbo].[CTTest2](
[Column1] [int] NOT NULL,
[Column2] [varchar](50) NULL,
[Column3] [date] NULL,
CONSTRAINT [PK_CTTest2] PRIMARY KEY CLUSTERED
([Column1] ASC)
)
GO

CREATE TABLE [dbo].[CTTest3](
[Column1] [int] NOT NULL,
[Column2] [varchar](50) NULL,
[Column3] [date] NULL,
CONSTRAINT [PK_CTTest3] PRIMARY KEY CLUSTERED
([Column1] ASC)
)
GO

I know…I am not very creative when making test tables. Each table must have a primary key in order to enable change tracking. After that, I went to the CTTest, CTTest3 & CTTest4 table properties and set change tracking up. I turned on TRACK_COLUMNS_UPDATED for CTTest & CTTest4 but turned it off for CTTest3.
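If you would rather script it than click through the table properties dialog, the equivalent statements look something like this (CTTest4 is set up the same way as the other test tables):

ALTER TABLE [dbo].[CTTest] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)
ALTER TABLE [dbo].[CTTest3] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF)
ALTER TABLE [dbo].[CTTest4] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON)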


Here is my code, designed to find the tables with change tracking enabled and create the disable\enable statements needed to turn it all off and then turn it all back on again. No statements are executed against the database; print statements are used instead to create the statements you need, so that you can control what happens and when. Run this while connected to the target user database.

DECLARE @dbname VARCHAR(128)
DECLARE @AutoClnup BIT
DECLARE @AutoClnupStr VARCHAR(3)
DECLARE @RetPer INT
DECLARE @RetPerUnitsDesc VARCHAR(60)
DECLARE @schema VARCHAR(128)
DECLARE @tblname VARCHAR(128)
DECLARE @ColUpdtFlg BIT
DECLARE @ColUpdt VARCHAR(3)
DECLARE @sqlstr1 NVARCHAR(2000)
DECLARE @sqlstr2 NVARCHAR(2000)
DECLARE @sqlstr3 NVARCHAR(2000)
DECLARE @sqlstr4 NVARCHAR(2000)

-- cursor over every table that has change tracking enabled
DECLARE tblcur CURSOR FOR
SELECT S.name AS SchemaName,
OBJECT_NAME(T.object_id) AS TableName,
T.is_track_columns_updated_on
FROM sys.change_tracking_tables T
INNER JOIN sys.tables TT ON TT.object_id = T.object_id
INNER JOIN sys.schemas S ON S.schema_id = TT.schema_id
ORDER BY SchemaName, TableName

SET @dbname = DB_NAME()

-- capture the database-level settings so the enable statements
-- put everything back exactly as it was found
SELECT @AutoClnup = is_auto_cleanup_on,
@RetPer = retention_period,
@RetPerUnitsDesc = retention_period_units_desc
FROM sys.change_tracking_databases WHERE database_id = DB_ID()

IF @AutoClnup = 1
SET @AutoClnupStr = 'ON'
ELSE
SET @AutoClnupStr = 'OFF'

OPEN tblcur

FETCH NEXT FROM tblcur INTO @schema, @tblname, @ColUpdtFlg

-- disable statements
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sqlstr1 = 'ALTER TABLE ['+@schema+'].['+@tblname+'] DISABLE CHANGE_TRACKING'
PRINT @sqlstr1

FETCH NEXT FROM tblcur INTO @schema, @tblname, @ColUpdtFlg
END

CLOSE tblcur

SET @sqlstr2 = 'ALTER DATABASE ['+@dbname+'] SET CHANGE_TRACKING = OFF'
PRINT @sqlstr2

-- open it again to create enable statements
OPEN tblcur

FETCH NEXT FROM tblcur INTO @schema, @tblname, @ColUpdtFlg

-- enable statements
SET @sqlstr3 = 'ALTER DATABASE ['+@dbname+'] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = '+CAST(@RetPer AS VARCHAR(4))+' '+@RetPerUnitsDesc+', AUTO_CLEANUP = '+@AutoClnupStr+')'
PRINT @sqlstr3

WHILE @@FETCH_STATUS = 0
BEGIN
IF @ColUpdtFlg = 1
SET @ColUpdt = 'ON'
ELSE
SET @ColUpdt = 'OFF'

SET @sqlstr4 = 'ALTER TABLE ['+@schema+'].['+@tblname+'] ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = '+@ColUpdt+')'
PRINT @sqlstr4

FETCH NEXT FROM tblcur INTO @schema, @tblname, @ColUpdtFlg
END

CLOSE tblcur
DEALLOCATE tblcur

Here is the output:


You can copy and paste the statements you need into a query window and run them to disable\enable change tracking. Or you can save the output to a file so that you have a script on hand in case you need it again.

I decided to also put a bit of data into one of the tables so that I could check to see what happens to the hidden tables when you go about disabling and enabling change tracking.

insert into CTTest values (1, 'Value1', '1-1-2018')
insert into CTTest values (2, 'Value2', '1-2-2018')
insert into CTTest values (3, 'Value3', '1-3-2018')
insert into CTTest values (4, 'Value4', '1-4-2018')
insert into CTTest values (150, 'Value5', '1-5-2018')

update CTTest set Column2 = 'Val1' where Column3 = '1-1-2018'

delete from CTTest where Column3 = '1-5-2018'

Using a query from super smart Kendra Little (@Kendra_Little or https://littlekendra.com/ ), I can see that my internal tables are being tracked.



select sct1.name as CT_schema,
sot1.name as CT_table,
ps1.row_count as CT_rows,
ps1.reserved_page_count*8./1024. as CT_reserved_MB,
sct2.name as tracked_schema,
sot2.name as tracked_name,
ps2.row_count as tracked_rows,
ps2.reserved_page_count*8./1024. as tracked_base_table_MB,
change_tracking_min_valid_version(sot2.object_id) as min_valid_version
FROM sys.internal_tables it
JOIN sys.objects sot1 on it.object_id=sot1.object_id
JOIN sys.schemas AS sct1 on sot1.schema_id=sct1.schema_id
JOIN sys.dm_db_partition_stats ps1 on it.object_id = ps1.object_id
and ps1.index_id in (0,1)
LEFT JOIN sys.objects sot2 on it.parent_object_id=sot2.object_id
LEFT JOIN sys.schemas AS sct2 on sot2.schema_id=sct2.schema_id
LEFT JOIN sys.dm_db_partition_stats ps2 on sot2.object_id = ps2.object_id
and ps2.index_id in (0,1)
WHERE it.internal_type IN (209, 210);



When I ran the statements to disable change tracking the same query shows that no tables are being tracked:


My client reported that disabling\enabling change tracking did indeed fix his backup problem. The fun part is that while several databases had change tracking enabled, only a few had the backup issue.






For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

Query to Get SQL Job Info from sysprocesses

— By Lori Brown @SQLSupahStah

We were recently troubleshooting an issue where a client reported that the SQL instance was suddenly very slow. One of the things we do is check to see what queries are currently running in SQL by using the following query:

-- Finds info on queries that are running NOW
SELECT SUBSTRING(text, CASE WHEN ((ExecReq.statement_start_offset/2) + 1) < 1 THEN 1 ELSE (ExecReq.statement_start_offset/2) + 1 END,
((CASE ExecReq.statement_end_offset WHEN -1 THEN DATALENGTH(text) ELSE ExecReq.statement_end_offset END
- ExecReq.statement_start_offset)/2) + 1) AS statement_text,
ExecReq.session_id AS [SPID],
ExecReq.total_elapsed_time / 1000.0 AS [total_elapsed_time (secs)],
ExecReq.wait_time / 1000.0 AS [wait_time (secs)],
ExecReq.cpu_time / 1000.0 AS [cpu_time (secs)],
ExecReq.logical_reads,
ExecReq.blocking_session_id AS [BlockingSPID]
FROM sys.dm_exec_requests ExecReq
OUTER APPLY sys.dm_exec_sql_text(ExecReq.sql_handle) ExecSQLText
WHERE ExecReq.session_id > 50
AND ExecReq.session_id <> @@spid
AND ExecReq.last_wait_type <> 'BROKER_RECEIVE_WAITFOR'
ORDER BY ExecReq.cpu_time DESC
-- Alternate ORDER BY options:
-- total_elapsed_time desc
-- wait_time desc
-- logical_reads desc

After running this a few times, we can see if any one query consistently shows up in the list as using lots of resources, or as blocking or being blocked. We can also see if a lot of queries that are usually fast are listed, which might indicate some kind of issue with the execution plan or statistics. Here is an example of what the output looks like:


If I were trying to further figure out what SPID 709 was doing (as in what user and what application), I might take a look at sp_who2 or sp_WhoIsActive from Adam Machanic (http://whoisactive.com/), and I might see something like this in the program name:

SQLAgent - TSQL JobStep (Job 0xCC1842F477AA1A4E84CD91228FFC799B : Step 1)

We know a job is running, but which job is it? I know that we can get the job name from the hex value, but instead of needing to copy the value from program_name into a variable, I wanted a query that would decipher any jobs found on the fly. This keeps me from needing to open the job activity and try to find which job is running. If you have hundreds of jobs to look through, or if more than one job is running, you might have to guess which one is the culprit you are looking for. I also wanted to know which job step the job was on, so that I knew where it was and was better prepared to provide meaningful information back to my client.

SELECT p.spid, j.name AS 'Job Name', js.step_name AS 'Job Step',
p.blocked, p.lastwaittype, p.dbid, p.cpu, p.physical_io, p.memusage, p.last_batch
FROM master.dbo.sysprocesses p JOIN msdb.dbo.sysjobs j
ON master.dbo.fn_varbintohexstr(CONVERT(VARBINARY(16), j.job_id))
COLLATE Latin1_General_CI_AI = SUBSTRING(REPLACE(p.program_name, 'SQLAgent - TSQL JobStep (Job ', ''), 1, 34)
JOIN msdb.dbo.sysjobsteps js ON j.job_id = js.job_id
WHERE p.program_name LIKE 'SQLAgent - TSQL%'
AND js.step_id = (SUBSTRING(p.program_name, (CHARINDEX(': Step', p.program_name, 1)+7),
((CHARINDEX(')', p.program_name, 1))-(CHARINDEX(': Step', p.program_name, 1)+7))))


It is handy to have both of the above queries in one query window so that results are returned in one place, making it easier to associate running queries with their jobs. Hope this helps someone out as much as it did me.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!




Large MSDB Database From sysmaintplan_logdetail Table

— By Lori Brown @SQLSupahStah

I am recycling this blog post from 2015… “Why?”, you might ask… Well, I am still running into instances with this exact issue. Soooo, here we go…again. Enjoy.

I recently received a panicked call from a client whose SQL instance had gone down because the server’s C drive was full. When he looked, he found that the msdb database file was 31 GB and was consuming all of the free space on the OS drive, causing SQL to shut down. He cleaned up some other old files so that SQL would work again but did not know what to do about msdb.

As we looked at it together, I found that the sysmaintplan_logdetail table was taking all the space in the database. The SQL Agent had been set to only keep about 10000 rows of history, but for some unknown reason the table never removed history. After consulting MSDN, I found this code did the trick for truncating this table.

USE msdb
GO

ALTER TABLE [dbo].[sysmaintplan_log] DROP CONSTRAINT [FK_sysmaintplan_log_subplan_id];

ALTER TABLE [dbo].[sysmaintplan_logdetail] DROP CONSTRAINT [FK_sysmaintplan_log_detail_task_id];

TRUNCATE TABLE msdb.dbo.sysmaintplan_logdetail;

TRUNCATE TABLE msdb.dbo.sysmaintplan_log;

ALTER TABLE [dbo].[sysmaintplan_log] WITH CHECK ADD CONSTRAINT [FK_sysmaintplan_log_subplan_id] FOREIGN KEY([subplan_id])
REFERENCES [dbo].[sysmaintplan_subplans] ([subplan_id]);

ALTER TABLE [dbo].[sysmaintplan_logdetail] WITH CHECK ADD CONSTRAINT [FK_sysmaintplan_log_detail_task_id] FOREIGN KEY([task_detail_id])
REFERENCES [dbo].[sysmaintplan_log] ([task_detail_id]) ON DELETE CASCADE;



After the table was truncated, we were able to shrink the database to about 1 GB. For the record – I hate, hate, hate to shrink databases – but there were no other options left to us and we had to clear out some room on the drive.
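If you find yourself in the same spot, the shrink itself is a one-liner. MSDBData below is the default logical file name for the msdb data file; verify yours in sys.database_files before running it:

USE msdb
GO
-- shrink the msdb data file to a 1 GB target (size is in MB)
DBCC SHRINKFILE (MSDBData, 1024)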

Now, with the crisis averted, we checked the SQL Agent settings and found that the box to remove agent history was not checked.


We checked it, hit OK, then opened the SQL Agent properties again only to find that the box was unchecked. After doing some research, I found that this is a bug that has not been resolved even in SQL 2014. https://connect.microsoft.com/SQLServer/feedback/details/172026/ssms-vs-sqlagent-automatically-remove-agent-history-bugs Awesome, huh?!

If you check the link, there is a workaround posted. I have tested it and found that sp_purge_jobhistory takes a super long time to run, and my test server only has 2 jobs that would have any history at all. So, use the workaround if you feel brave. Hopefully Microsoft will actually fix this some time. Until then, keep an eye on your msdb database size.
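For reference, the workaround boils down to purging job history yourself on a schedule with sp_purge_jobhistory, something like this (the 30 day cutoff is just an example):

-- purge agent job history older than 30 days
DECLARE @cutoff DATETIME
SET @cutoff = DATEADD(DAY, -30, GETDATE())
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff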

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at sqlrx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!



Get Index Column Info with Includes for One or Many Tables

— by Lori Brown @SQLSupahStah

I was recently working with one of my clients on some low-hanging-fruit query tuning. We had checked the cache for plans with missing index warnings in them and were trying to see if we could tweak an existing index or add a new index to speed things up. If you ever work with missing indexes, you have surely seen SQL recommend crazy things: duplicates of existing indexes, or something that could simply be added to an existing index.

The bottom line for missing index recommendations is that you should NEVER, EVER create a missing index unless you know for absolutely sure that the index does not exist and that it will really help a known performance issue. For instance, you can have missing index warnings on small tables that only return a couple of rows to the query. Those are usually not worth working on, since scanning a small table can usually be done very quickly by SQL. To this day, I still find many databases that are way over-indexed, with indexes that were implemented simply because someone found a missing index warning and did not do their homework, or because they ran the dreaded Database Tuning Advisor, which shoved a bunch of duplicate indexes into tables.

If you are wondering how to get a list of missing indexes, please check out Jeff Schwartz’s blog post on how to do this. (https://blog.sqlrx.com/2017/06/02/query-tuning-and-missing-index-recommendations/ ) This will give you a place to start. It is better if you know what query is throwing the missing index warning so it is a good idea to collect those either in a trace or extended events. Jeff builds on his first post and in his second post on the subject (https://blog.sqlrx.com/2017/07/20/handling-multiple-missing-index-recommendations-for-the-same-table/ ) also goes over the fun of having multiple missing index recommendations for a single table and how to deal with them.






One of the things that I usually need when performance tuning is to know information about the existing indexes on specific tables. I always want to know what columns are in the indexes along with the included columns so that I can compare the existing indexes to the missing recommendations. This way I can better figure out if a recommendation is something that can be added to an existing index (like an included column) or if I really need to create a brand new index if it does not exist at all.

Like most DBAs, I keep a toolkit with all kinds of handy scripts. However, I did not have one that would give me index included columns. I also wanted the query to be able to return info from one or many tables at the same time, which is useful when dealing with things with lots of joins. I know that there are a few bloggers who have posted something similar, but I wanted to have the ability to filter on one or multiple tables. So, here is what I came up with:


/*
Returns index columns with included columns
plus other needed index info for tables
in @tablelist variable
*/


DECLARE @tablelist VARCHAR(1000)
DECLARE @sqlstr NVARCHAR(2000)

SET @tablelist = 'InvoiceLines,OrderLines,StockItemHoldings' -- comma delimited list of tables, can be one or multiples EX: 'mytable' or 'mytable,nothertable,thirdtable'

-- Query the tables
IF @tablelist <> '' AND @tablelist <> '?,?,?'
BEGIN
SET @tablelist = REPLACE(QUOTENAME(@tablelist,''''), ',', ''',''') -- Add quotes so the IN clause will work

SET @sqlstr = 'SELECT SCHEMA_NAME(o.schema_id) AS SchemaName
,o.name AS TableName
,i.name AS IndexName
,i.type_desc AS IndexType
,c.name AS ColumnName
,ic.index_column_id
,t.name AS DataType
,ic.is_included_column
FROM sys.indexes i
JOIN sys.index_columns ic ON i.index_id = ic.index_id AND i.object_id = ic.object_id
JOIN sys.columns c ON ic.column_id = c.column_id AND ic.object_id = c.object_id
JOIN sys.objects o ON o.object_id = i.object_id AND c.object_id = i.object_id
JOIN sys.types t ON t.user_type_id = c.user_type_id
WHERE o.name IN ('+@tablelist+')
ORDER BY SchemaName, TableName, IndexName, index_column_id'

EXEC sp_executesql @sqlstr
--PRINT @sqlstr
END


All you have to provide is a comma separated list of the table(s) you are interested in for the @tablelist variable and it will do the rest. The output looks like this:


I found several bloggers who had made queries that concatenate the columns together, but truthfully I found those hard to read, so I settled for a list with some extra info on the data and index types. Hope this is useful to someone out there.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!


Find Out Who Changed the Database Recovery Model

— By Lori Brown @SQLSupahStah

I ran into a situation where we were working on a migration and had been directed to put all databases into the FULL recovery model in anticipation of using log shipping to push databases to the new server. Once we were ready to go live on the new server, the plan was to ship the last transaction logs and restore them WITH RECOVERY, making the final cutover as quick as possible. Of course, this meant that we had to make sure that all databases were getting regular log backups, which we did.
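The final cutover step for each database would look something like this (the database name and backup path are placeholders):

-- ship the last transaction log backup, then restore it and bring the database online
RESTORE LOG [MyDB]
FROM DISK = N'\\backupserver\logs\MyDB_tail.trn'
WITH RECOVERY;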

Things were going along nicely until we started receiving log backup failure notifications. Upon checking things, we found that one of the databases had been changed to the SIMPLE recovery model. You can find this type of information in the default trace, or you can simply scroll through the SQL error logs until you find the entry you are looking for. If you have a busy instance with a lot of entries in the error log, this can be a bit time consuming, so I came up with a set of queries that grab the error log entry and attempt to tie it to the info in the default trace, making it easier to identify WHO made an unauthorized change to the database properties.


DECLARE @tracefile VARCHAR(500)

CREATE TABLE [dbo].[#SQLerrorlog](
[LogDate] DATETIME NULL,
[ProcessInfo] VARCHAR(10) NULL,
[Text] VARCHAR(MAX) NULL
)

/*
Valid parameters for sp_readerrorlog
1 - Error log: 0 = current, 1 = Archive #1, 2 = Archive #2, etc...
2 - Log file type: 1 or NULL = error log, 2 = SQL Agent log
3 - Search string 1
4 - Search string 2

Change parameters to meet your needs
*/


-- Read error log looking for the word RECOVERY
-- and either FULL, SIMPLE or BULK_LOGGED indicating a change from prior state
INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'FULL'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'SIMPLE'

INSERT INTO #SQLerrorlog
EXEC sp_readerrorlog 0, 1, 'RECOVERY', 'BULK_LOGGED'


UPDATE #SQLerrorlog
SET ProcessInfo = SUBSTRING(ProcessInfo,5,20)
FROM #SQLerrorlog
WHERE ProcessInfo LIKE 'spid%'


-- Get path of default trace file

SELECT @tracefile = CAST(value AS VARCHAR(500))

FROM sys.fn_trace_getinfo(DEFAULT)

WHERE traceid = 1

AND property = 2


-- Get objects altered from the default trace

SELECT IDENTITY(int, 1, 1) AS RowNumber, *

INTO #temp_trc

FROM sys.fn_trace_gettable(@tracefile, default) g -- default = read all trace files

WHERE g.EventClass = 164


SELECT t.DatabaseID, t.DatabaseName, t.NTUserName, t.NTDomainName,

t.HostName, t.ApplicationName, t.LoginName, t.SPID, t.StartTime, l.Text

FROM #temp_trc t

JOIN #SQLerrorlog l ON t.SPID = l.ProcessInfo

WHERE t.StartTime > GETDATE()-1 -- filter by time within the last 24 hours



DROP TABLE #temp_trc
DROP TABLE #SQLerrorlog




You can find more on the following:

sp_readerrorlog is an undocumented procedure that actually uses xp_readerrorlog – https://www.mssqltips.com/sqlservertip/1476/reading-the-sql-server-log-files-using-tsql/

sys.fn_trace_getinfo – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-getinfo-transact-sql

sys.fn_trace_gettable – https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-trace-gettable-transact-sql

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

Use MSDB to Get Database Backup Size and Total Time For Each

— by Lori Brown

We recently started using a third party software to do our in-house SQL backups so that the backup files are stored in a redundant and safe place. To confirm that the software was indeed compressing backups as it stated it would, we wanted to see what each backup size actually was in SQL so that we could compare that to what the software was telling us.

SQL stores lots of handy backup information in msdb in the backupset and backupmediafamily tables.



Here is my query. I only want the information from the last 24 hours, so I have filtered the start date by subtracting 1 day from today. I have also provided some commented-out options in case someone needs them.

-- database backup size and how long it took to do backup
SELECT bs.database_name AS DatabaseName
, CAST(bs.backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS BackupSizeGB
, CAST(bs.backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS BackupSizeMB
--, CAST(bs.compressed_backup_size/1024.0/1024/1024 AS DECIMAL(10, 2)) AS CompressedSizeGB
--, CAST(bs.compressed_backup_size/1024.0/1024 AS DECIMAL(10, 2)) AS CompressedSizeMB
, bs.backup_start_date AS BackupStartDate
, bs.backup_finish_date AS BackupEndDate
, CAST(bs.backup_finish_date - bs.backup_start_date AS TIME) AS AmtTimeToBkup
, bmf.physical_device_name AS BackupDeviceName
FROM msdb.dbo.backupset bs JOIN msdb.dbo.backupmediafamily bmf
ON bs.media_set_id = bmf.media_set_id
WHERE
--bs.database_name = 'MyDatabase' and   -- uncomment to filter by database name
bs.backup_start_date > DATEADD(dd, -1, GETDATE()) and
bs.type = 'D' -- change to L for transaction logs
ORDER BY bs.database_name, bs.backup_start_date

And, here is the output.


It turned out that the software was indeed compressing all backups so that was a good thing.

There is a lot more info that can be pulled from msdb regarding backups. Have some fun and experiment with getting information that you need from there.




For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!


Installing and Connecting to SQL 2017 on Ubuntu (Linux)

–By Lori Brown

SQL Server 2017 is available as a CTP for testing (https://www.microsoft.com/en-us/sql-server/sql-server-2017) and is able to be installed on a system running Linux. Because I have done all of my professional work on Windows systems, the leap to a Linux system was a bit of a stretch. I learned that while Linux is the OS kernel, Red Hat, SUSE and Ubuntu are flavors (distributions) of Linux, with Docker Engine as another supported platform. A rough analogy might be that something like Ubuntu is sort of like a version of Windows. Regardless of how you think of it, it is a truly interesting experience to get SQL 2017 installed.

The first thing to do is get your friendly System Admin to stand up a Linux server. We used Ubuntu 16.04, but the other flavors of Linux mentioned above can also be used. Your Admin should provide you with a login and password for an administrator account so that you can get SQL running.


Since everything I have ever done on a computer has had a GUI, I decided to install Gnome to try to shorten my learning curve on Ubuntu. (https://ubuntugnome.org/download/) Truthfully, navigating in Gnome was not super intuitive to me, but I was able to bumble around and get things done. When following the install instructions from Microsoft, there are places where commands labeled “bash” are used to get things done. It took me a bit to figure out that bash is a Unix shell or command prompt that is used to execute commands. I found a bash guide if anyone is interested. (http://www.tldp.org/LDP/Bash-Beginners-Guide/html/Bash-Beginners-Guide.html) Apparently you can use bash on Windows…I learn something new every day. (https://blogs.windows.com/buildingapps/2016/03/30/run-bash-on-ubuntu-on-windows/#igsmYPLj6ovEWf7Y.97)

In Gnome, I had to search for terminal to get the command prompt (bash) window. To get SQL installed, I followed the instructions from Microsoft and have taken screenshots of what it looks like after running commands. (https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-ubuntu)

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -


curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list


sudo apt-get update


sudo apt-get install -y mssql-server


During the setup completion step, you should be ready to provide the SA password when prompted for the SQL Server system administrator password.

sudo /opt/mssql/bin/mssql-conf setup


systemctl status mssql-server


Now install SQLCMD and BCP tools. (https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools#ubuntu)  I only installed SQLCMD but the BCP commands are similar.

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -


sudo apt-get update


sudo apt-get install mssql-tools unixodbc-dev

You have to answer licensing questions here so be ready.



I also installed the SQL Server Agent. The commands are very similar to previous commands so I did not take any extra screen shots of those. https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-sql-agent

Once the tools are installed, you must use SQLCMD to query. You cannot use SSMS on Ubuntu.
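A basic SQLCMD session from the terminal looks something like this (substitute your own SA password):

sqlcmd -S localhost -U SA -P '<YourPassword>'
1> SELECT @@VERSION;
2> GO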


Since I am a creature of the Microsoft world and really want to use SSMS to do things, I need to be able to connect to SQL on Ubuntu using SSMS. So that you can connect remotely, I made sure to open SQL ports in Ubuntu. First you have to enable the firewall:


Then open ports 1433 & 1434:


To connect and query from a Windows machine…install SQL 2017 SSMS. (https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms)

Connect to SQL as normal. I tried many times to get connected using my Ubuntu hostname but did not have any luck. I was able to connect using the IP address, however.


The Ubuntu hostname appears in the terminal connection. Use ifconfig to get the IP address. (http://www.aboutlinux.info/2006/11/ifconfig-dissected-and-demystified.html)


Once connected you can do most things as usual in SSMS.





As of this writing, I found several places where Microsoft employees warned that some things will likely break and that they are working on bugs. So, if you run into anything that you can’t find a way around, let them know. So far Microsoft seems to be very open to information and suggestions which makes me very hopeful that SQL 2017 will be released soon and will be much faster than previous versions.

For a taste of some of the new things in SQL 2017 see: https://blog.sqlrx.com/2017/05/19/sql-2017-is-on-the-way/

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL 2017 Is On The Way!!

–By Lori Brown  @SQLSupahStah

SQL Server 2017 is rolling our way like a big ole train. And with the new features that are going to be available, there may be quite a few shops that want to jump on board.


As of this writing SQL Server 2017 CTP 2.1 is available for download and testing. (https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2017-ctp/ ) I managed to recently attend a presentation by Denzil Ribeiro who is a manager with the SQLCAT team on some of the new features in SQL 2017. I tend to pay attention to the things that get the guys on the inside excited so here are some of the highlights with links that I could find and my notes on SQL 2017.

Resumable Online Index Rebuild


Index rebuilds can be paused and restarted. Since the index must be rebuilt with ONLINE = ON, it is likely that this is an Enterprise Edition feature.
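The syntax looks something like this (the table and index names are placeholders):

-- start a resumable online index rebuild
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON, RESUMABLE = ON);

-- Pause
ALTER INDEX IX_MyIndex ON dbo.MyTable PAUSE;

-- Resume
ALTER INDEX IX_MyIndex ON dbo.MyTable RESUME;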





Use sys.index_resumable_operations to view the status of resumable index rebuilds. https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-index-resumable-operations

Wait Stats Per Query Execution

Available by default. Wait statistics info is now in query plans. Use sys.query_store_wait_stats to see wait info for a query plan. https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-query-store-wait-stats-transact-sql
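For example, to see which plans have racked up the most waits, something like this works against the Query Store:

-- top waits per plan from the Query Store
SELECT TOP (10) ws.plan_id, ws.wait_category_desc,
SUM(ws.total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats ws
GROUP BY ws.plan_id, ws.wait_category_desc
ORDER BY total_wait_ms DESC;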

Automatic Query Tuning


Recognizes plan regressions and will automatically force a good plan to be used. The database must be in 140 (SQL 2017) compatibility level, and the Query Store must be enabled for the database. You can then use sys.dm_db_tuning_recommendations (https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-tuning-recommendations-transact-sql) to get info on plans and recommendations that were used to fix regressions. It won’t force a good plan unless AUTOMATIC_TUNING = ON.
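Turning that on is a database-level setting:

-- enable automatic plan correction for the current database
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);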



Adaptive Query Processing

Interleaved Execution


Available by default. Adjusts plans by executing part of the plan and then redesigning the rest based on better row estimates, all while the query is executing.

Batch Mode Memory Grant Feedback


Adjusts the plan in cache if the memory grant was not good (either too much or not enough).

Batch Mode Adaptive Joins


After the first join in a plan, a better decision can be made about whether to use a Hash Join or a Nested Loop Join. If the number of rows is small, a Nested Loop Join will likely be used.

Graph Data Processing


Nodes and edges are stored as tables, which enables many-to-many modeling. There is a new MATCH clause for queries. https://docs.microsoft.com/en-us/sql/t-sql/statements/match-sql-graph


SELECT Person2.Name
FROM Person Person1, Friends, Person Person2
WHERE MATCH(Person1-(Friends)->Person2)
AND Person1.Name = 'John';



Python is now integrated into SQL just like R Services and is used for advanced analytics. I have to admit that Python is way beyond my skillset right now but wanted to mention it.


A lot of these new features for SQL 2017 are evolving and more will come out I am sure. I am in the process of getting a VM with Linux installed on it so that I can check out installing and running SQL Server on Linux so expect a post on that soon.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

List Partitioned Tables And Other Info About Them

— By Lori Brown @SQLSupahStah

Here is a good way to find out if any of the tables in your database are partitioned, using the query below. It is important to know whether tables are partitioned so that you can find out how they are laid out.

-- Partitioned Tables
SELECT DISTINCT t.name AS PartitionedTables
FROM sys.partitions p
INNER JOIN sys.tables t ON p.object_id = t.object_id
WHERE p.partition_number <> 1

If you have partitioned tables, here is a good way to find out how they are partitioned.

-- Get partition info
SELECT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(i.object_id) AS [object]
, p.partition_number AS [p#]
, fg.name AS [filegroup]
, p.rows
, au.total_pages AS pages
, CASE boundary_value_on_right
WHEN 1 THEN 'less than'
ELSE 'less than or equal to' END AS comparison
, rv.value
, CONVERT (VARCHAR(6), CONVERT (INT, SUBSTRING (au.first_page, 6, 1) +
SUBSTRING (au.first_page, 5, 1))) + ':' + CONVERT (VARCHAR(20),
CONVERT (INT, SUBSTRING (au.first_page, 4, 1) +
SUBSTRING (au.first_page, 3, 1) + SUBSTRING (au.first_page, 2, 1) +
SUBSTRING (au.first_page, 1, 1))) AS first_page
FROM sys.partitions p
INNER JOIN sys.indexes i ON p.object_id = i.object_id
AND p.index_id = i.index_id
INNER JOIN sys.objects o ON p.object_id = o.object_id
INNER JOIN sys.system_internals_allocation_units au ON p.partition_id = au.container_id
INNER JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
INNER JOIN sys.partition_functions f ON f.function_id = ps.function_id
INNER JOIN sys.destination_data_spaces dds ON dds.partition_scheme_id = ps.data_space_id
AND dds.destination_id = p.partition_number
INNER JOIN sys.filegroups fg ON dds.data_space_id = fg.data_space_id
LEFT OUTER JOIN sys.partition_range_values rv ON f.function_id = rv.function_id
AND p.partition_number = rv.boundary_id
WHERE i.index_id < 2
AND o.object_id = OBJECT_ID('dbo.SomeTableName');

Common Criteria Compliance and LCK_M_SCH_M

–By Lori Brown

If you have ever wondered what happens when you mistakenly enable Common Criteria compliance in SQL, well wonder no more!!


We have a client who has no idea how or when Common Criteria compliance was enabled on their production system. All they know is that performance has been slowly degrading. After collecting performance data, we found that there were high LCK_M_SCH_M waits; this is a schema modification lock that prevents access to a table while a DDL operation occurs. We also found blocked process records where a LOGIN_STATS table in the master database was being waited on a lot. This table is used to hold login statistics. When there are a lot of logins and logouts, there can be contention on this table.
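By the way, if you want to check whether this option is enabled on an instance, sys.configurations will tell you:

-- check the current Common Criteria compliance setting
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'common criteria compliance enabled';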

When you enable Common Criteria compliance, something called Residual Information Protection (RIP) is enabled. RIP is an additional security measure for memory: it requires that a specific bit pattern be written to memory before that memory can be reallocated (overwritten) to a new resource or login. So with lots of logins and logouts, there is a performance hit in memory because of the overwriting that has to be done.

Keep in mind if you enable Common Criteria compliance, you can run into slowdowns from locking and memory. Make sure that your server is able to handle this well and that applications are designed to minimize the impact of high logins\logouts.

To disable Common Criteria compliance you can use sp_configure or the GUI.

-- common criteria compliance enabled is an advanced option
sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'common criteria compliance enabled', 0
RECONFIGURE
GO




However, it is not really disabled until you reboot the server (it actually says to reboot the server in MSDN); restarting just the SQL instance will not work for this configuration change. I believe that this is because of the Residual Information Protection that secures memory. It stands to reason that without a reboot that flushes memory, RIP is still doing its work and causing performance issues.



The biggest takeaway from this is to make sure that you change your SQL configuration only with good reason. Always know the side effects that can show up when you set any of the advanced configuration options.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!