Monthly Archives: May 2015

Delayed Durability

— By Lori Brown

Available in SQL Server 2014 – All Editions

Delayed Durability is a way to defer the acknowledgement of transaction log records being written to disk, i.e. transactions can continue without waiting, and SQL Server assumes that the log record *will* be written. You may know the ACID properties (Atomicity, Consistency, Isolation, Durability) that guarantee that when a transaction is committed it stays committed. Allowing delayed durability basically removes the D. Normally, durability is achieved through write-ahead logging (WAL), which means that log records are written to disk before the transaction is acknowledged as complete. With delayed durability enabled you are trading durability for performance. So, be aware that you can lose data.

SQL Server buffers log records in 60KB chunks and will attempt to flush the log to disk when a 60KB block is full, so any committed transactions still sitting in the buffer can be lost if the server fails. The Delayed Durability option can be set at the database level, at the individual transaction level, or (for natively compiled procedures in In-Memory OLTP) at the procedure level. It is best enabled when your transaction log is a bottleneck and you are experiencing high WRITELOG waits.

If you need to, you can force the transaction log to flush delayed transactions by executing sys.sp_flush_log.
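For example, a manual flush of any pending delayed-durable log records looks like this:

```sql
-- Flush all delayed-durable transaction log records to disk
EXEC sys.sp_flush_log;
```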




At the database level, DELAYED_DURABILITY can be set to one of three values:

  • ALLOWED = individual transactions can use Delayed Durability.
  • FORCED = all transactions that can use Delayed Durability will.
  • DISABLED = the default; committing a transaction with the delayed durability option will be ignored with no error message, and the transaction is fully durable.
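These database-level settings are applied with ALTER DATABASE; a sketch, assuming a database named SalesDB:

```sql
-- Let individual transactions request delayed durability
ALTER DATABASE [SalesDB] SET DELAYED_DURABILITY = ALLOWED;

-- Or force it for all transactions in the database
-- ALTER DATABASE [SalesDB] SET DELAYED_DURABILITY = FORCED;
```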


Add to the transaction commit….
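At the transaction level, delayed durability is requested on the COMMIT itself; a minimal sketch (the table and values are illustrative):

```sql
BEGIN TRANSACTION;

INSERT INTO dbo.RoadRaces (RaceID, RaceName, LengthMiles, [State])
VALUES (102, 'Sample 5K', 3, 'TX');

-- Acknowledge the commit without waiting for the log flush
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```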


In-Memory OLTP (EE only)

Add to the BEGIN ATOMIC block…

CREATE PROCEDURE [dbo].[NewRoadRace] @RaceID INT, @RaceName VARCHAR(500), @LengthMiles INT, @State CHAR(2)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english', DELAYED_DURABILITY = ON)

INSERT INTO dbo.RoadRaces (RaceID, RaceName, LengthMiles, [State]) VALUES (@RaceID, @RaceName, @LengthMiles, @State)

END



In-Memory OLTP (Hekaton)

-by Lori Brown @SQLSupahStah

 Available in SQL Server 2014 Enterprise Edition

In-Memory OLTP is a new lock-free, latch-free, optimistic concurrency approach to storing data in memory. While it may seem reminiscent of DBCC PINTABLE, it is definitely not the same, since data in memory-optimized tables is not organized in pages and does not use the buffer pool. It is designed to eliminate blocking delays by incurring far less latching and locking for high-volume insert workloads, which can drastically improve performance. It gives you a way to get around locking and latching issues without implementing partitioning, which requires code changes.

Create a MEMORY OPTIMIZED DATA filegroup:

Note: You can have only one filestream filegroup for memory optimized tables but you can create multiple files and place them into the filegroup.

In SSMS, go to the properties of a database that you want to enable to use In-Memory OLTP and click on the Filegroups page. Under MEMORY OPTIMIZED DATA, click Add filegroup and then enter a name for the filegroup.


Then add a file to the filegroup, click the Files page, click Add and then set up the new file with a Logical Name, File Type = FILESTREAM Data and Filegroup = the Memory Optimized filegroup that was just created.
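The same setup can be scripted with ALTER DATABASE; a sketch, assuming a database named SalesDB and a data path of C:\SQLData (note that for a memory-optimized filegroup the FILENAME is a folder, not a single file):

```sql
-- Add the memory-optimized filegroup (only one is allowed per database)
ALTER DATABASE [SalesDB]
ADD FILEGROUP [SalesDB_InMemOLTP] CONTAINS MEMORY_OPTIMIZED_DATA;

-- Add a container to the filegroup
ALTER DATABASE [SalesDB]
ADD FILE (NAME = N'SalesDB_InMemOLTP_1', FILENAME = N'C:\SQLData\SalesDB_InMemOLTP_1')
TO FILEGROUP [SalesDB_InMemOLTP];
```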


Create Memory Optimized Tables:

Create memory-optimized tables by specifying the MEMORY_OPTIMIZED = ON clause to differentiate them from regular disk-based tables. Specifying the MEMORY_OPTIMIZED = ON clause tells the In-Memory OLTP engine to handle this table instead of the standard SQL database engine. You might wonder how data is put back into memory, especially after a server restart or crash… SQL Server does this on recovery by reading data back into memory from the transaction log and checkpoint files. BUCKET_COUNT and DURABILITY must be set when you create the In-Memory table.



CREATE TABLE dbo.RoadRaces (

[RaceID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 500000),

[RaceName] VARCHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [RName] HASH WITH (BUCKET_COUNT = 500000),

[LengthMiles] INT NULL,

[State] CHAR(2) NULL

) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)




  • BUCKET_COUNT should be about 2X the number of unique values expected in the column.
  • DURABILITY = SCHEMA_AND_DATA means that you want both the data and the table schema persisted and recoverable after a server restart or crash.
  • DURABILITY = SCHEMA_ONLY means that only the schema is persisted; the data is lost on a server restart or after a crash.

When done, a DLL and other files will be created. You can find them in this folder: <Drive letter>:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\DATA\xtp\<database id>\ You should notice that there is a new xtp folder and a subfolder whose name is the database id. The number on the end of the DLL name is the object id of the table that was just created.



Memory-optimized tables can have up to 8 non-clustered indexes but no clustered index, and all indexes must be created when the table is created. Indexes can be created on string columns, but only if those columns use a BIN2 collation. You can query the list of BIN2 collations to choose from by checking sys.fn_helpcollations. Once created, an index cannot be dropped or modified, so you must plan well before you create it. There are 2 types of indexes (Hash Index and Range Index), and all are inherently covering indexes, meaning each index virtually contains all of the columns in the table.

  • Hash Index – best for joins using the “=” operator
  • Range Index – best for queries with “>” or “<” operators.
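A quick way to list the BIN2 collations mentioned above:

```sql
-- BIN2 collations are the only ones allowed on indexed string columns
SELECT name, description
FROM sys.fn_helpcollations()
WHERE name LIKE '%BIN2';
```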


Create Natively Compiled Stored Procedures:

You can also create natively compiled stored procedures that use the In-Memory OLTP engine. These stored procedures can only access memory optimized tables.   Creating a natively compiled stored procedure is very much like creating an In-Memory OLTP table. The WITH NATIVE_COMPILATION option tells SQL that the procedure is to be handled by the In-Memory OLTP engine.

CREATE PROCEDURE [dbo].[NewRoadRace] @RaceID INT, @RaceName VARCHAR(500), @LengthMiles INT, @State CHAR(2)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

INSERT INTO dbo.RoadRaces (RaceID, RaceName, LengthMiles, [State]) VALUES (@RaceID, @RaceName, @LengthMiles, @State)

END



Now let’s put all the parts into motion. Execute the stored proc just like you would any other stored proc.

EXECUTE [dbo].[NewRoadRace] @RaceID = 100, @RaceName = 'Run The Country 5K', @LengthMiles = 3.1, @State = 'TX'

EXECUTE [dbo].[NewRoadRace] @RaceID = 101, @RaceName = 'Run The Country 10K', @LengthMiles = 6.2, @State = 'TX'

And query the In-Memory OLTP table as usual.
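Querying works like any other table; for instance:

```sql
-- Memory-optimized tables are queried with ordinary T-SQL
SELECT RaceID, RaceName, LengthMiles, [State]
FROM dbo.RoadRaces;
```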


When considering using In-Memory OLTP, be aware that code changes may be needed, especially for stored procedures, since natively compiled procedures support only a subset of T-SQL.

SQL Server Memory in Task Manager – Should I Be Worried?


–by Ginger Keys

Recently, a client of mine contacted me in a panic because almost all of the memory on the SQL server was used up…according to Task Manager, that is.

Don’t panic…task manager is probably lying.  Well, a little bit anyway.  And Task Manager is just that, a task manager. If you are looking at it to get accurate values for memory (RAM) usage or CPU time, you will get not-so-accurate information.  False alarms.  Information that doesn’t exactly mean what it implies.


After speaking with my very worried client, I checked around on the servers and looked at many performance indicators.  What I usually see is that CPU is low, disk usage is low, and memory used is high – which is what we expect on a SQL machine.  SQL will use as much memory as it can get, to improve performance by caching tons of data in memory in case it needs it, to avoid having to go to disk.  It will release memory back to the OS if needed.  SQL is a memory hog, and will hold onto all the memory we allocate to it, whether it is actually using it or not.  And this is what we want it to do!


SQL will take all the memory that is allocated to it when it is needed, store data and execution plans in the buffer, and when it is through using it, it doesn’t release it…it caches that data into memory until either 1) SQL needs it again, or 2) Windows OS requests the memory.

This isn’t an indication of performance stress…it’s just showing that SQL has reserved all the memory that we told it to use (through setting min and max memory on the SQL instance properties [1]).  SQL doesn’t let go of memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).  As long as the OS doesn’t need to use it for anything, SQL will hold on to it for future use.

The default memory setting when you install SQL Server allows it to use all of the memory on the machine. When Windows asks SQL to give memory back, SQL releases it in huge amounts, and then proceeds to build back up to the maximum again. This is not optimal for stability or performance, and it is highly recommended as a best practice to set a limit in the Max Server Memory setting. That way the Windows OS is always guaranteed to have a certain amount.
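A sketch of capping SQL Server's memory with sp_configure (the 8192MB value is illustrative; size it for your server and workload):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the memory SQL Server will use so Windows always keeps some for itself
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```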



And when looking at Resource Monitor, it is a mistake to use the Available and Free metrics as the key indicators for determining overall system health [2]. The Available MB of memory is the sum of Standby and Free… the stuff in Standby is what Windows has put in the cache (but can evict if needed), and Free means it is not being used at all. The combination is what Windows calls Available. So there is actually data in the cache that Windows reports as available. Some of the categories in Task Manager and Resource Monitor can be very confusing and even misleading, which is why we shouldn't use these metrics alone to decide whether we are in danger of running out of memory.

Instead of only relying on task manager or resource monitor, we really need to look at other metrics to find out if there is a memory related performance issue on the machine.

Some of the metrics to pay attention to are:

  • Page Life Expectancy
  • Available Bytes
  • Buffer Cache Hit Ratio
  • Target & Total Server Memory
  • Memory Grants Pending
  • Pages/sec (Hard Page Faults)
  • Batch Requests/sec & Compilations/sec

Some of these metrics can be queried from TSQL, or can be observed with Perfmon (Windows Performance Monitor – located in Administrative Tools).

Page Life Expectancy (PLE)

Found in Perfmon at SQL Server: Buffer Manager – Page Life Expectancy, this counter measures how long, in seconds, a data page is expected to stay in the buffer pool without being referenced. During peak production hours this number may dip lower, but we generally want it to remain above 300 seconds (so data is staying in the cache for 5 minutes or longer). The longer data stays in the buffer, the less likely SQL will have to go to disk for I/O operations.

There is an interesting occurrence with page life expectancy… when SQL really does run out of memory, PLE drops very quickly, but it grows back very slowly. You're probably not still having memory issues during the coming-back stage, even though it may look like it's struggling. If PLE stays down, however, then you have a problem. So be careful, because PLE can be misleading when it takes a drop… there's a difference between it going down and staying down (you have a memory problem), and going down once and crawling back up (which is normal). If it stays below 300 seconds consistently, you may need to add more RAM.

You can monitor this through Perfmon, or run the following query:

SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'

Available MBytes

Found in Perfmon as Memory: Available MBytes, this counter detects how much memory is available for use, and whether Windows is experiencing memory pressure. We generally start investigating if this number consistently goes below 500MB, which would indicate low memory, a point at which Windows may start memory management operations. If Available MBytes is consistently low, you may need to add more RAM.

This counter is not exposed in sys.dm_os_performance_counters, so it is usually observed through Perfmon.
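That said, a close approximation is available from a DMV; a sketch:

```sql
-- Approximates the Perfmon "Available MBytes" counter
SELECT available_physical_memory_kb / 1024 AS [Available MBytes]
FROM sys.dm_os_sys_memory;
```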

 Buffer Cache Hit Ratio

Found in Perfmon as SQL Server: Buffer Manager: Buffer Cache Hit Ratio. This counter averages (since the last restart of your SQL instance) how often SQL Server goes to the buffer pool to get data, and actually finds that data in memory, instead of having to go to disk. We want to see this ratio high in OLTP servers – around 90-95%. The higher the ratio, the less often SQL has to go to disk for I/O operations, which translates into better performance for your end users. If this number is consistently below the 90% mark, you may need to add more RAM to your server to boost performance.

SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Buffer cache hit ratio'

Target & Total Server Memory

Found in Perfmon at SQL Server: Memory Manager – Total Server Memory, and SQL Server: Memory Manager – Target Server Memory. The Total Server Memory is the current amount of memory that SQL Server is using.  The Total Server memory will be quite a bit lower than the Target memory during the initial buffer pool ramp up… it is trying to populate the cache and get pages loaded into memory.  Performance might be a little slower during this time since more disk I/O is required, but this is normal.  After it ramps up, and normal operations resume, Total Server Memory should be very close to Target Server Memory. (The ratio should be close to 1). If Total Server Memory does not increase much, but stays significantly less than Target, this could indicate a couple of things…

1) You may have allocated much more memory than SQL can use – SQL could cache the entire databases into memory, and if the databases are smaller than the amount of memory on the machine, the data won’t take up all the space allocated to SQL. So in this case Total Memory (actually memory being used by SQL) will never reach Target Memory (amount allocated to SQL).   Or,

2) SQL cannot grow the buffer pool because of memory pressure from outside of SQL. If this is the case, you need to either increase the Max Server Memory, or add more RAM to boost performance.

SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Memory Manager%' AND [counter_name] IN ('Total Server Memory (KB)','Target Server Memory (KB)')

Memory Grants Pending

Found in Perfmon as SQL Server: Memory Manager – Memory Grants Pending, this counter measures the total number of SQL processes waiting for a workspace memory grant. The general recommendation is for this value to be 1 or less. Anything above 1 indicates there are SQL processes waiting for memory in order to operate.

Memory grants pending could be due to bad queries, missing indexes, sorts or hashes. To investigate this, you can query the sys.dm_exec_query_memory_grants view, which will show which queries (if any) that require a memory grant to execute [5].
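A sketch of checking that view for waiting queries (a NULL grant_time means the grant has been requested but not yet given, i.e. the query is still waiting):

```sql
-- Sessions still waiting on a workspace memory grant
SELECT session_id, requested_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;
```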

If the Memory Grants Pending are not due to the above mentioned conditions, then more memory should be allocated to SQL Server by adjusting Max Server Memory. Adding more RAM should be the last resort in this case.

SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%Memory Manager%' AND [counter_name] = 'Memory Grants Pending'

Pages/sec (Hard Page Faults)

Found in Perfmon as Memory: Pages/sec, this measures the number of pages read from or written to disk. Pages/sec is the sum of Pages Input/sec and Pages Output/sec and reports the number of requested pages that were not immediately available in RAM and had to be read from disk (resulting in hard page faults). A hard page fault occurs when Windows has to use the swap file on disk [3]: an address in memory that is part of a program is no longer in main memory but has been swapped out to the paging file, making the system go looking for it on the hard disk. Since disk storage is much slower than RAM, performance can be greatly degraded.

We want to see the Pages/sec number below 50, and closer to 0. If you see a high value of Pages/sec on a regular basis, you might have performance degradation, but not necessarily. A high Pages/sec value can occur while doing database backups or restores, importing or exporting data, or reading a file mapped in memory [4].

Because a hard fault doesn't necessarily indicate a critical error condition, and what is normal depends upon your environment, it's a good idea to measure a baseline value and monitor this number against it. If the values are consistently higher than your baseline, you should consider adding more RAM.

Batch Request & Compilations

SQL Server: SQL Statistics – Batch Requests/sec. This is the number of incoming queries.

SQL Server: SQL Statistics – SQL Compilations/sec. This is the number of new execution plans SQL had to build.

If Compilations/sec is 25% or higher relative to Batch Requests/sec, SQL Server is putting execution plans in the cache but never actually reusing them.  Your valuable memory is being used up to cache query execution plans that will never be used again, instead of caching data.  This is bad. We don't want this to happen. And a high Compilations/sec value (like over 100) indicates there are a lot of ad-hoc (one-hit-wonder) queries being run. You can enable the "optimize for ad hoc workloads" setting if this is the case; it stores only a small plan stub on a query's first execution and caches the full execution plan only if the query runs a second time.
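Enabling that setting is a one-time configuration change; a sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cache only a plan stub for first-time ad hoc queries
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```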

To query these metrics with TSQL:

SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'Batch Requests/sec'

SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'SQL Compilations/sec'

To get the ratio of Compilations to Batch Requests:


SELECT ROUND(100.0 *

(SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'SQL Compilations/sec')

/

(SELECT [cntr_value] FROM sys.dm_os_performance_counters

WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'Batch Requests/sec')

,2) AS [Ratio]

Looking at the counters in Perfmon (Report View) below, we can see that this particular server is in a healthy state, with over 11GB of Available Bytes, no page faults (so I/O is operating from the buffer, not disk), a 100% Buffer Cache Hit Ratio, a PLE of 21,515 seconds (almost 6 hours that data stays in the buffer), no pending memory grants (so no SQL process is having to wait for memory), ample Total Server Memory, and a low compilation ratio (11%) compared to all the queries being run.


These measurements should be observed and understood, instead of using Task Manager to make critical decisions about resources on your SQL Server.

If we make changes based only upon what we see in Task Manager and reduce the amount of memory SQL can use, then whenever SQL needs more memory to execute processes it will have to read from and write to disk, which will degrade performance.  Reducing the amount of memory SQL can use will slow performance and will most likely increase complaints from end users. The lower you set your Maximum Server Memory, the lower your performance will go.

No matter how much memory you put in a system, SQL Server will use all it can get until it's caching entire databases in memory and then some.  This isn't an accident. SQL Server caches data in memory.  On purpose. That means it is healthy, and enhancing performance for your end users.


[1] “Glenn Berry’s SQL Server Performance”

[2] “Use Resource Monitor for Memory Monitoring” by Scott Lowe.

[3] “Swap File”. TechTarget.

[4] “SQL Server memory performance metrics – Part 1 – Memory pages/sec and Memory page faults/sec”. Milena Petrovic.

[5] "SQL Server memory performance metrics – Part 5 – understanding Lazy Writes, Free List Stalls/sec, and Memory Grants Pending". Milena Petrovic.

Generate Random Column Values to Populate a Table for Testing

— By Lori Brown  @SQLSupahStah

Recently I had to do some R&D for a project and needed to generate a large table to test performance on. Since it is important to test queries on large data sets I thought that I would post how I populated a few data types in my table. To test query results I decided that I needed some values in the table that were not random so I simply salted my random data with known data at irregular intervals that would allow for testing. With a little bit of work I ended up with over 4 million records in a table with 45 columns. You can take the code below and can expand it to fit your needs. Just change the value for the @Loop variable to control how many records you want to insert into your table.

CREATE TABLE RandomLoad ([NumberColumn] int,[BitColumn] bit,[VarcharColumn] varchar(20),[CharColumn] char(1),[DateColumn] date)


-- The below code will populate the RandomLoad table with random records

DECLARE @Loop INT, @Num1 INT, @Length INT
DECLARE @Bit1 BIT, @Varchar1 VARCHAR(20), @Char1 CHAR(1), @Date1 DATE

SET @Loop = 0

WHILE @Loop < 1000  -- number of records to generate
BEGIN

-- Generate Number
SET @Num1 = ROUND(RAND() * 10000, 0)

-- Generate Bit
SET @Bit1 = CAST(ROUND(RAND(), 0) AS BIT)

-- Generate Varchar
SET @Varchar1 = ''
SET @Length = CAST(RAND() * 20 AS INT) -- Up to 20 characters long
WHILE @Length <> 0
BEGIN
SET @Varchar1 = @Varchar1 + CHAR(CAST(RAND() * 96 + 32 AS INT))
SET @Length = @Length - 1
END

-- Generate Char
SET @Char1 = LEFT(NEWID(), 1)

-- Generate Date
SET @Date1 = CAST(GETDATE() + (365 * 2 * RAND() - 365) AS DATE)

INSERT INTO [RandomLoad] VALUES (@Num1, @Bit1, @Varchar1, @Char1, @Date1)

SET @Loop = @Loop + 1

END




Need more info or have questions? Please comment or contact us and we will be happy to help!