Category Archives: SQLRX

What You Need to Know Before Hosting Your SQL Server in the Cloud

–By Lisa Dominy


Whether you currently host your SQL Server in the Cloud or are considering moving it to the cloud, here are a few guidelines that can lead you to select the right hosting provider for your specific requirements.

  1. Set the stage for a tech discussion. Familiarize yourself with your current technology profile so you can have an efficient technical conversation with hosting providers. If you already know your needs in these areas…

CPU (processors)

RAM (memory)

Disk space (storage)

Applications

Database Software Requirements

…you are all set. If not, make sure the hosting provider asks the right questions needed to deliver optimum performance for your company.

  2. Judge whether they will take the time to learn about your business. Find a hosting provider that asks the right questions to get to know your business and works with you to build a relationship of success. The right hosting provider should be able to provide the right technology and services to meet your business objectives.
  3. Confirm that they provide quick response times. Make sure the hosting provider offers prompt responses from knowledgeable support staff when needed. What process do they follow when an issue comes up? This is a very important step that is sometimes overlooked, and the middle of a move is never a good time to learn that lesson.
  4. Look for hard evidence that they solve problems and issues promptly. It is very important for your hosting provider to solve problems quickly. A server that is down or running slowly can cause a lot of damage in the long run, as well as lost revenue.
  5. Ask if they have access to SQL experts like ISI’s SQLRx team. Make sure your hosting provider has access to SQL experts. This is very important to the success of your implementation. Why?
  • You’ll need performance information specific to a SQL workload to know whether, and how, the hardware needs to be expanded.
  • Without a partnership with a SQL expert like SQLRx, it may be difficult and costly to provide enough hardware to support some companies’ workloads. There have been times when our SQLRx team found that the hosting company had provided great hardware but SQL needed tuning or configuration changes.
  • Applications and database software can cause server conflicts no matter how well architected they appear. This is where the right partnership is critical.

Are you ready to go to the cloud? The right hosting company must have access to the right experts and deep experience in the cloud. This is why we’ve built a strong, trusted relationship with VAZATA: they are the hardware/hosting expert, and we are their trusted SQL partner. VAZATA is a very responsive and knowledgeable hosting provider.

Here is an actual client example:

Despite having a very important SQL Server backend, an information services company did not have a SQL Database Administrator, and they had multiple application servers in their environment heavily utilizing that SQL backend. In addition, many of the applications they were running were outdated but could still benefit from better hardware. Given their needs, there was no cost justification for bringing in a full-time SQL Database Administrator, so the company leveraged ISI’s SQLRx to provide that service. VAZATA enhanced their experience when they moved all of the company’s virtual servers into the VAZATA private cloud environment, and the customer immediately started to see better performance. Equally important, based on recommendations from ISI’s SQLRx team, they adjusted some of the infrastructure resources (CPU, RAM, and Storage) to obtain the best performance possible in the most cost-effective way for the number of users accessing the system.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com or visit www.vazata.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL Server Memory In Task Manager – Tell Your Manager to Not Worry!

-by Ginger Keys

We are re-promoting this subject because it is still so relevant. All too often we run into people who are in a panic after looking at Task Manager to try to diagnose SQL stress. They become very worried when looking at how much memory SQL uses, not realizing that this is expected behavior.

Task Manager is just that, a task manager. If you are looking at it to get accurate values for memory (RAM) usage or CPU time, you will get not-so-accurate information. False alarms. Information that doesn’t exactly mean what it implies.

What I usually see when I look at Task Manager is that CPU is low, disk usage is low, and memory used is high – which is what we expect on a SQL machine. SQL will use as much memory as it can get, to improve performance by caching tons of data in memory in case it needs it, to avoid having to go to disk. It will immediately release any memory to the OS if needed. SQL is a memory hog, and will hold onto all the memory we allocate to it, whether it is actually using it or not. And this is what we want it to do!

SQL will take all the memory that is allocated to it when it is needed, store data and execution plans in the buffer, and when it is through using it, it doesn’t release it…it caches that data into memory until either 1) SQL needs it again, or 2) Windows OS requests the memory.

This isn’t an indication of performance stress…it’s just showing that SQL has reserved all the memory that we told it to use (through setting min and max memory on the SQL instance properties [1]).  SQL doesn’t let go of memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).  As long as the OS doesn’t need to use it for anything, SQL will hold on to it for future use.

The default memory setting when you install SQL is for it to use all of the memory on the machine. When Windows asks SQL to give memory back, SQL releases it in huge amounts, and then proceeds to build back up to the maximum amount again. This is not optimal for stability or performance, and setting a limit in the Max Memory setting is highly recommended as a best practice. That way the Windows OS is always guaranteed to have a certain amount.
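As a minimal sketch, Max Server Memory can also be set through TSQL with sp_configure (the 4096 below is only a placeholder value; size it for your server using the guidance in the links that follow):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Placeholder value: leave enough memory for the OS and any other processes
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;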

Click here for more info on setting Max Memory: https://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

Another good reference for setting Max Memory for SQL is here: http://www.sqlservercentral.com/blogs/glennberry/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008/


And when looking at Resource Monitor, it is a mistake to use the Available and Free metrics as the key indicators for determining overall system health [2]. The Available MB of memory is made up of both Standby and Free memory… the stuff in Standby is what Windows has put in the cache (but can be booted out if needed), and Free means it is not being used at all. The combination is what Windows calls Available. So there is actually data in the cache that Windows says is available. Some of the categories in Task Manager/Resource Monitor can be very confusing and even misleading, which is why we shouldn’t use these metrics alone to decide whether we are in danger of running out of memory.

Instead of only relying on task manager or resource monitor, we really need to look at other metrics to find out if there is a memory related performance issue on the machine.

Some of the metrics to pay attention to are:

  • Page Life Expectancy
  • Available Bytes
  • Buffer Cache Hit Ratio
  • Target & Total Server Memory
  • Memory Grants Pending
  • Pages/sec (Hard Page Faults)
  • Batch Requests/sec & Compilations/sec

Some of these metrics can be queried from TSQL, or can be observed with Perfmon (Windows Performance Monitor – located in Administrative Tools).

 Page Life Expectancy (PLE)

Found in Perfmon at SQL Server: Buffer Manager – Page Life Expectancy, this counter measures the average amount of time unlocked data pages are staying in the buffer pool.   During peak production hours this number may dip into lower numbers, but we generally want this number to remain above 300 seconds (so data is staying in the cache for 5 minutes or longer). The longer data stays in the buffer, the less likely SQL will have to go to disk for I/O operations.

There is an interesting occurrence with page life expectancy… when SQL really does run out of memory, PLE drops very quickly, but it grows back very slowly. You’re probably not still having memory issues during the coming-back stage, even though it may look like it’s struggling. If PLE stays down, however, then you have a problem. So be careful, because PLE can be misleading when it takes a drop… there’s a difference between it going down and staying down (you have a memory problem), and going down once and crawling back up (which is normal). If it stays below 300 seconds consistently, you may need to add more RAM.

You can monitor this through Perfmon, or run the following query:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Page life expectancy'

Available MBytes

Found in Perfmon as Memory: Available MBytes, this counter shows how much memory is available for use, and whether Windows is experiencing memory pressure. We generally start investigating if this number consistently goes below 500MB, which indicates low memory, at which point Windows may start memory management operations. If Available MBytes is consistently low, you may need to add more RAM.

This Perfmon counter is not exposed in sys.dm_os_performance_counters, so it is normally observed through Perfmon.
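That said, a close equivalent can be pulled from TSQL (SQL 2008 and later) through the sys.dm_os_sys_memory DMV:

SELECT available_physical_memory_kb / 1024 AS [Available MBytes],
system_memory_state_desc
FROM sys.dm_os_sys_memory;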

Buffer Cache Hit Ratio

Found in Perfmon as SQL Server: Buffer Manager: Buffer Cache Hit Ratio. This counter averages (since the last restart of your SQL instance) how often SQL Server goes to the buffer pool to get data, and actually finds that data in memory, instead of having to go to disk. We want to see this ratio high in OLTP servers – around 90-95%. The higher the ratio, the less often SQL has to go to disk for I/O operations, which translates into better performance for your end users. If this number is consistently below the 90% mark, you may need to add more RAM to your server to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Buffer cache hit ratio'

 Target & Total Server Memory

Found in Perfmon at SQL Server: Memory Manager – Total Server Memory, and SQL Server: Memory Manager – Target Server Memory. The Total Server Memory is the current amount of memory that SQL Server is using.  The Total Server memory will be quite a bit lower than the Target memory during the initial buffer pool ramp up… it is trying to populate the cache and get pages loaded into memory.  Performance might be a little slower during this time since more disk I/O is required, but this is normal.  After it ramps up, and normal operations resume, Total Server Memory should be very close to Target Server Memory. (The ratio should be close to 1). If Total Server Memory does not increase much, but stays significantly less than Target, this could indicate a couple of things…

1) You may have allocated much more memory than SQL can use – SQL can cache entire databases in memory, and if the databases are smaller than the amount of memory on the machine, the data won’t take up all the space allocated to SQL. In this case Total Server Memory (the memory actually being used by SQL) will never reach Target Server Memory (the amount allocated to SQL). Or,

2) SQL cannot grow the buffer pool because of memory pressure from outside of SQL. If this is the case, you need to either increase the Max Server Memory, or add more RAM to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] IN ('Total Server Memory (KB)','Target Server Memory (KB)')

 Memory Grants Pending

Found in Perfmon as SQL Server: Memory Manager – Memory Grants Pending, this counter measures the total number of SQL processes waiting for a workspace memory grant. The general recommendation is 1 or less. Anything above that indicates there are SQL processes waiting for memory in order to operate.

Memory grants pending can be caused by bad queries, missing indexes, sorts, or hashes. To investigate, you can query the sys.dm_exec_query_memory_grants view, which shows which queries (if any) are waiting on a memory grant to execute [5].
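A minimal sketch of that check, listing only the sessions still waiting (a NULL grant_time means the memory has not been granted yet):

SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms, query_cost
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;  -- still waiting on a memory grant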

If the Memory Grants Pending are not due to the above mentioned conditions, then more memory should be allocated to SQL Server by adjusting Max Server Memory. Adding more RAM should be the last resort in this case.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] = 'Memory Grants Pending'

Pages/sec (Hard Page Faults)

Found in Perfmon as Memory: Pages/sec, this measures the number of pages read from or written to disk. Pages/sec is the sum of Pages Input/sec and Pages Output/sec and reports the number of requested pages that were not immediately available in RAM and had to be read from disk (resulting in hard page faults). A hard page fault occurs when Windows has to use the swap file on the disk [3]. It happens when an address in memory that is part of a program is no longer in main memory, but has instead been swapped out to the paging file, making the system go looking for it on the hard disk. Since disk storage is much slower than RAM, performance can be greatly degraded.

We want to see the Pages/sec number below 50, and closer to 0. If you see a high value of Pages/sec on a regular basis, you might have performance degradation, but not necessarily. A high Pages/sec value can also occur while doing database backups or restores, importing or exporting data, or reading a file mapped in memory [4].

Because a hard page fault doesn’t necessarily indicate a critical error condition, and what is normal depends on your environment, it’s a good idea to measure a baseline value and monitor this number against it. If the values are consistently higher than your baseline, you should consider adding more RAM.

Batch Requests & Compilations

SQL Server: SQL Statistics – Batch Requests/sec. This is the number of incoming queries.

SQL Server: SQL Statistics – SQL Compilations/sec. This is the number of new execution plans SQL had to build.

If Compilations/sec is 25% or higher relative to Batch Requests/sec, SQL Server is putting execution plans in the cache but never actually reusing them. Your valuable memory is being used up to cache query execution plans that will never be used again – instead of caching data. This is bad. We don’t want this to happen. And a high Compilations/sec value (like over 100) indicates there are a lot of ad-hoc (one-hit-wonder) queries being run. You can enable the “optimize for ad hoc workloads” setting if this is the case; with it on, the full execution plan is only cached the second time a query runs, and the first run stores only a small stub.
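If you decide to turn that setting on, here is a minimal sketch (it is an advanced option, so 'show advanced options' must be enabled first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;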

To query these metrics with TSQL:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'Batch Requests/sec'

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'SQL Compilations/sec'

To get the ratio of Compilations to Batch Requests:

SELECT ROUND(100.0 *
(SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'SQL Compilations/sec')
/
(SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'Batch Requests/sec')
, 2) AS [Ratio]
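One caveat: these counters are cumulative since the last restart of the instance, so the ratio above is a lifetime average. For a current rate, a quick sketch is to sample the counters twice and compare the deltas (the 10-second window below is arbitrary):

DECLARE @batches1 BIGINT, @compiles1 BIGINT, @batches2 BIGINT, @compiles2 BIGINT;
SELECT @batches1 = [cntr_value] FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'Batch Requests/sec';
SELECT @compiles1 = [cntr_value] FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'SQL Compilations/sec';
WAITFOR DELAY '00:00:10';  -- arbitrary sample window
SELECT @batches2 = [cntr_value] FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'Batch Requests/sec';
SELECT @compiles2 = [cntr_value] FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%' AND [counter_name] = 'SQL Compilations/sec';
SELECT ROUND(100.0 * (@compiles2 - @compiles1) / NULLIF(@batches2 - @batches1, 0), 2) AS [Current Ratio];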

 

Looking at the counters in Perfmon (Report View) below, we can see that this particular server is in a healthy state, with over 11GB of Available Bytes, no page faults (so I/O is operating from the buffer, not disk), a Buffer Cache Hit Ratio of 100%, a PLE of 21,515 seconds (almost 6 hours that data spends in the buffer), no pending memory grants (so no SQL process is having to wait for memory), ample Total Server Memory, and a low compilation ratio (11%) compared to all the queries being run.

[Screenshot: Perfmon Report View showing the healthy counter values described above]

These measurements should be observed and understood, instead of using Task Manager to make critical decisions about resources on your SQL Server.

If we make changes based only on what we see in Task Manager and reduce the amount of memory SQL can use, then whenever SQL needs more memory to execute processes it will have to read from and write to disk, which degrades performance. Reducing the amount of memory SQL can use will slow performance and will most likely increase complaints from end users. The lower you set your Maximum Server Memory, the lower your performance will go.

No matter how much memory you put in a system, SQL Server will use all it can get, until it’s caching entire databases in memory and then some. This isn’t an accident. SQL Server caches data in memory. On purpose. It means the server is healthy and performance is being enhanced for your end users.

Now send a link to this blog post to your favorite person on staff who needs to be educated about Task Manager and SQL Server. You will be glad you did!

References

[1] “Glenn Berry’s SQL Server Performance” https://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

[2] “Use Resource Monitor for Memory Monitoring” by Scott Lowe.   http://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-for-memory-monitoring/

[3] “Swap File”. TechTarget. http://searchwindowsserver.techtarget.com/definition/swap-file-swap-space-or-pagefile

 

SQLRx Presenting at SQL Saturday Baton Rouge 2016

SQLRx (@SQLRX) is proud to announce that two of our resident SQL Experts will be presenting sessions at SQL Saturday #515 in Baton Rouge, LA on August 6, 2016. If you have not signed up to attend, you are going to miss out!!


http://www.sqlsaturday.com/515/eventhome.aspx

Lori Brown (@SQLSupahStah) will give a lesson on how to be an Actively Proactive DBA. Come have fun and learn at her session, Actively Proactive – Straightforward Ways to Manage and Diagnose SQL. Attendees will be taken through the things a proactive DBA does and will also be treated to some quick ways to troubleshoot like a SQL Super Star!


Scripts will be given out to all who download the presentation materials. Come and get ‘em!!


Jeff Schwartz will be presenting Rx for Demystifying Index Tuning Decisions. Jeff will cover ways to find poor-performing queries and tune them using various methods, including execution plans, usage statistics, and missing indexes.


Since Jeff has been tuning things for almost 40 years, he can show you some things that will be extremely useful and will help you tune like a pro.

If you are in Baton Rouge this weekend, come by and visit with us! SQLRx is a sponsor of SQL Saturday Baton Rouge 2016 and will have a booth where you can sit down with Lori or Jeff and talk tech. We will be handing out goodies and giving away a 2 TB external drive at the end of the day. Also, one lucky attendee will win a free Ultra Light Performance Assessment from SQLRx at one of our presenters’ sessions. We will tell that lucky winner about their SQL performance for free….Winner, Winner, Chicken Dinner!!

Hope to see you in Baton Rouge!

Correlate Performance Spikes with SQL Profiler and Perfmon

–by Ginger Keys

Have you ever watched performance metrics in Performance Monitor (Perfmon) and wondered which SQL Server activities were causing those spikes? SQLRx offers a Microsoft Partner-of-the-Year award-winning service that will correlate all performance metrics on your server with very little overhead, and will pinpoint with precision, right down to a line of code or even a network switch, what is bottlenecking your system. A real live person will then document and advise you exactly how to remedy the problem, and will walk you through each step.

However, if you don’t need a comprehensive analysis and simply want to find which TSQL statements are causing performance spikes, or if you want to observe a single TSQL statement and how it affects server resources, this can be done using SQL Profiler in conjunction with Performance Monitor (Perfmon). Both Profiler and Perfmon provide good information about your server, and being able to correlate the two will help find the causes of performance spikes.

SQL Profiler is deprecated and will eventually be removed, though the specific version of SQL Server in which that will happen has not been announced. Profiler is being replaced by Extended Events, and currently there is no straightforward way to use Extended Events to correlate performance spikes with Perfmon.

In order to correlate performance metrics with SQL activities, we will follow these steps:

  1. Create Profiler Trace
  2. Create Perfmon Data Collector Set
  3. Simulate High CPU / or Run your TSQL Statement(s)
  4. Analyze

*Warning: This article is intended to show you how to use the tools…it is highly advised not to do this on your production server.

Create Profiler Trace

Open SQL Server Management Studio (SSMS) and connect to your instance. On the menu bar, go to Tools and click on SQL Server Profiler to get started. SQL Profiler puts stress on your server, so focus only on the events and columns needed to analyze the activities, so as to minimize the workload on your server.

We want to capture TSQL statements that might be causing performance spikes, so we will look at stored procedures, batches of queries, and individual queries. We will then filter the trace for statements that take longer than a certain Duration (ex. 5000 milliseconds), to filter out the normal running statements.

In the example below, I will use a Blank template, specify the events and columns needed, and save the results to file. Make sure you have enough space on your disk to hold the collected data.

[Screenshot: Profiler Trace Properties using the Blank template, with results saved to file]

Go to the Events Selection tab, and select these Events:

  • Showplan XML
  • RPC:Completed
  • SP:StmtCompleted
  • SQL:BatchCompleted
  • SQL:BatchStarting

And include these Columns:

  • ApplicationName
  • CPU
  • DatabaseName
  • Duration
  • EndTime
  • EventSequence
  • IntegerData
  • LoginName
  • ObjectName
  • Reads
  • SPID
  • StartTime
  • TextData
  • Writes


Then create a Filter on

  • Duration (experiment with different times: start at 1000ms, and go to 5000 or 10000 if necessary)
  • Database name, if desired
  • Group By : Duration


Create Perfmon Data Collector Set

While the SQL Profiler is collecting TSQL statements that are executing, we want to also collect data on several performance metrics on the system.   From Administrative Tools, open Performance Monitor (Perfmon). Drill down to Data Collector Sets > User Defined. Right click and select New Data Collector Set.


Add the following Counters:

  • Memory:Available Mbytes
  • PhysicalDisk:% Idle Time (look at this as the opposite of ‘busy’ time)
  • PhysicalDisk:Avg Disk sec/Read
  • PhysicalDisk:Avg Disk sec/Write
  • Processor:% Processor Time (for each processor)
  • SQLServer:Buffer Manager: Page Life Expectancy

You can add as many performance metrics as you deem necessary to view your environment, but these are some basic memory, disk, and CPU metrics.

Set the sample interval to 3 seconds, so that one-second spikes are averaged out rather than captured individually. Save your output to a file on a disk with enough space to hold the data collected.

Simulate High CPU Usage

A few pointers before you begin…start SQL Profiler and Perfmon, making sure to start and stop both about the same time. If you are able to run these on a computer other than the server you are monitoring, that is preferred. If not, just be mindful that Profiler creates its own performance stress, so don’t do this if your system is in a critical state. Whatever you do, DO NOT DO THIS ON A PRODUCTION SYSTEM!!

If you already have specific times of the day when you know your server’s resources are peaking, run these tools during that time. Be sure to monitor the size of the data files output so that you don’t consume too much space on your drive.

If you already have some queries you want to test against the performance of your system, you can start and stop the tools during the time you are testing your queries.

If you want to artificially stress resources on your system for practice purposes, there are many TSQL statements available online to stress CPU and memory. Here are a few statements you can run to stress CPU:

-- Query to keep CPU busy for 30 seconds
DECLARE @T DATETIME, @F BIGINT;
SET @T = GETDATE();
WHILE DATEADD(SECOND, 30, @T) > GETDATE()
SET @F = POWER(2, 30);

 

-- Query to keep CPU busy for 60 seconds
DECLARE @T DATETIME, @F BIGINT;
SET @T = GETDATE();
WHILE DATEADD(SECOND, 60, @T) > GETDATE()
SET @F = POWER(2, 30);

 

-- Query to stress CPU
SELECT SUM(CONVERT(BIGINT, o1.object_id)
+ CONVERT(BIGINT, o2.object_id)
+ CONVERT(BIGINT, o3.object_id)
+ CONVERT(BIGINT, o4.object_id))
FROM sys.objects o1
CROSS JOIN sys.objects o2
CROSS JOIN sys.objects o3
CROSS JOIN sys.objects o4

 

-- Another query to stress CPU (this loop runs until you cancel the query)
SELECT * INTO #columns
FROM sys.columns AS C

DECLARE @RCNT INT
SET @RCNT = 0
SET NOCOUNT ON;
WHILE @RCNT = 0
BEGIN
UPDATE #columns
SET column_id = 0
WHERE ISNUMERIC(CONVERT(FLOAT, CONVERT(NVARCHAR(20), CONVERT(VARCHAR(20), column_id)))) = 0
SELECT @RCNT = @@ROWCOUNT
END

 

-- And another query to stress CPU, using the AdventureWorks database
USE AdventureWorks2014
GO
DECLARE @Flag INT
SET @Flag = 1
WHILE (@Flag < 1000)
BEGIN
ALTER INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
ON [Sales].[SalesOrderDetail] REBUILD
SET @Flag = @Flag + 1
END
GO
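While the stress queries run, a quick way to confirm from another connection that they are the sessions consuming CPU is to query sys.dm_exec_requests (a minimal sketch):

SELECT session_id, status, command, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50  -- skip system sessions
ORDER BY cpu_time DESC;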

 

I started the SQL Profiler trace and the Perfmon data collector. After executing the queries above, we can see in Task Manager that CPU has spiked from the 30% range up to the 85% range.

[Screenshot: Task Manager showing the CPU spike]

Analyze and Correlate the Results

In order to analyze the results, both sets of data (the Profiler trace and the Perfmon collection) need to be loaded into SQL Profiler:

Open the Profiler Trace (trc) file from where the output was saved.


Then click on File > Import Performance Data > and select the Perfmon file saved.


The results below show all TSQL statements run during the time period, along with the Perfmon graph and numerical values. The top section shows the profiler events in the order in which they occurred. The middle section shows the Perfmon graph for the duration of the collection period with the min, max, and average values. The bottom section displays either the text of the SQL statement or an execution plan.

You want to look for spikes or bottlenecks in the performance metrics, and then identify the profiler events that occurred around that time.

Click anywhere on the Perfmon graph and the TSQL statement will be pulled up in the Profiler event list. If I click on the Perfmon graph where I see the CPU was 100% (vertical red line below), the chart takes me to the general location (time period) in the Profiler events. You will need to scroll up or down in the list of events during that time period to find the actual event that caused the CPU spike. Clicking on the CPU spike in the Perfmon graph takes us to a TSQL statement around the time frame of that spike and provides the execution plan as shown below:

[Screenshot: Profiler events correlated with the Perfmon graph, with the execution plan displayed]

You can also click on a line in the Profiler events to see where the performance counters were at that time in the Perfmon graph (indicated with a red line below). I have clicked on a high CPU duration in the TSQL statement list where SQL: BatchCompleted occurred, and the Perfmon metrics on the graph shows 89% processor time and 2% disk idle (which means the disk was 98% busy).

[Screenshot: selected Profiler event with the corresponding point marked on the Perfmon graph]

In our example, the memory performance metrics were not substantial, but the CPU and disks were being stressed, with the average value for CPU being around 70%, and the average disk idle time at 25% (which means it was 75% busy).

[Screenshot: Perfmon graph with min, max, and average values for the collection period]

Beware of some ‘Gotchas’!

Don’t run Profiler in a production environment! Profiler causes a lot of stress on a system, so if you have access to another server from which you can run this tool, that is preferred. If you don’t have another server to run Profiler from, take great care not to do this during critical periods on your production machine, and be sure to set filters so you don’t collect everything that is running.

Clicking on a Perfmon spike will only take you to the general area in the list of trace events. As shown in the example above, you might or might not be directed to the exact Profiler event that caused the spike in performance. You will need to look around in the general area of that time period in the trace to find the actual event that caused the spike.

There could be non-SQL processes causing your performance issues. If there are other non-SQL applications running on your production server that are causing performance stress, you won’t see those events by using the SQL Profiler tool.

There could be multiple processes causing performance spikes without any of them exceeding your Duration filter. Individually, these low-to-medium resource users aren’t a problem, but collectively they can cause trouble when they occur simultaneously. If there are lots of smaller resource users that don’t meet your filter, you won’t see them, but together they could be causing performance issues.

Catching Showplan XML will catch everything happening in SQL because it doesn’t have a Duration column to filter on…which will bloat your trace file. Again, it is better to run these tools on a server apart from your production server.

* Disclaimer: Our SQLRx performance analysis process vanquishes all these ‘gotchas’…just sayin…give us a call

Conclusion

Being able to correlate TSQL statements with performance metrics can help you determine why you are seeing stress on your system, pinpoint queries that may need tuning, and show how your queries affect hardware performance and resource usage. This can be done during a period of performance spikes, especially if you know beforehand when those spikes occur. However, keep in mind that running SQL Profiler contributes to the stress on your system, so take care not to do this during a critical performance spike and, as mentioned before, it is most ideal to do this from a non-production server.

This article describes the general methods for using the SQL Profiler tool in conjunction with the Windows Performance Monitor. Correlating performance issues with this process is not 100% accurate, but will give you a great starting point to investigate performance stress on your system. Finding performance bottlenecks and knowing how to correct them is an in-depth process that usually requires years of experience. If you need guidance or assistance with this, contact us. We are experts in this, we have over 40 years of experience, and have helped over 700 clients. We can help you!

If you would like assistance configuring, maintaining, or monitoring your SQL Server and instances please contact us at SQLRx@sqlrx.com. We would be happy to answer any question you have, and would be interested to hear about your SQL experiences as well! Visit us at www.sqlrx.com.


How SQLRX Can Help You When Tools Do Not

 

For those of you who visit our humble blog from time to time and have never actually checked out our website (www.sqlrx.com) to get acquainted with what we do, please allow me to check that box for you.

 

SQLRX is part of a Dallas, Texas based consulting company called Integrated Services Inc (ISI). The SQLRX division is tasked with providing consulting services for everything SQL Server. We do it all, from installations to migrations to performance tuning and RemoteDBA services. Since we have been in business for a while, we have worked on systems from small one-server shops to Fortune 100 high-transaction enterprise systems.

 

One of the things we commonly see is that many shops have SQL monitoring software installed that they regularly use to watch the systems we are asked to work on. Interestingly, we always seem to find root causes of performance issues that the monitoring software has either been missing or generating data on without providing a solution. I ran across a blog post from Jonathan Kehayias warning about severe performance issues that can be directly attributed to a very well-known monitoring product whose vendor has yet to fix it. https://www.sqlskills.com/blogs/jonathan/a-warning-about-diagnostic-managers-query-monitor-feature-in-v9/ I sent this link to a client of mine who uses this software, and he said that the next version was also buggy and that the vendor had tried to patch it but had not fixed it.

 

So, why use expensive tools to monitor SQL performance if they don’t fix the problems they find? I realize that some places have only one person acting as the DBA, and the software helps them keep an eye on things. However, when you look at how most SQL monitoring software companies promote themselves, they make it sound like they are the answer to all your issues. I know that is not true of anyone or anything.

 RemoteDBA

We offer a DBA subscription service that is VERY reasonably priced (about one-third the cost of an FTE) and can be used either as the sole DBA for up to 4 SQL Servers or as backup to a staff DBA who needs some help. SQLRX does not install any executable on any server, so you don’t have to worry about applying updates or planning an outage for software maintenance. We strictly use SQL Server functionality and have developed maintenance and proactive monitoring routines, complete with reporting, designed to watch for only the things that really need to be monitored and to alert only when there really is an issue. And we don’t just monitor; we also fix things. You can’t get that from software.

 

Here are a few examples of the things we monitor and alert on.

[Screenshots: examples of SQLRx monitoring reports and alerts]

Performance Tuning

Another service we offer that is SUPER valuable is our performance tuning assessment. While these are fixed-fee reports, they are a complete assessment of a SQL Server that looks at both SQL and Windows hardware performance. We call them holistic because we look at the entire system, not just one part. We don’t just tell you what is wrong; we also provide recommended fixes in the order we would apply them. Numerous companies with badly performing systems have used our assessments to fix everything from queries to hardware that had been incorrectly configured by a major hardware vendor. We even had one company ask for our assistance in completely re-engineering all indexes in their main database, which we did quite successfully.


If you are looking for access to a SQL DBA or have software that is monitoring your SQL Servers but not providing the answers you are looking for, please contact us. We would be happy to provide a free of charge proof of concept and would love to help you out. Just send us an e-mail to get started: SQLRX@sqlrx.com

 

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

Find Permission Changes In The Default Trace

— By Lori Brown  @SQLSupahStah

I recently had someone contact me who needed to figure out when a user’s database permissions were changed. Apparently, a domain account that had been working for months doing things in a specific database suddenly could not do its normal work. On checking, we found that all of the user’s permissions had been removed from the database. The permissions were reset, but management wanted to know when the change was made and who made it. All the people with access to assign permissions were asked about it, and everyone swore that no changes had been made. Uh huh…

Fortunately, the default trace in SQL Server does contain auditing events for when objects and logins are changed.


A quick query of the default trace files using fn_trace_getinfo and fn_trace_gettable will give you what you need.

DECLARE @tracefile VARCHAR(500)

-- Get path of default trace file
SELECT @tracefile = CAST(value AS VARCHAR(500))
FROM ::fn_trace_getinfo(DEFAULT)
WHERE traceid = 1
AND property = 2

-- Get security changes from the default trace
SELECT *
FROM ::fn_trace_gettable(@tracefile, DEFAULT) trcdata  -- DEFAULT means all trace files will be read
INNER JOIN sys.trace_events evt
ON trcdata.EventClass = evt.trace_event_id
WHERE trcdata.EventClass IN (102, 103, 104, 105, 106, 108, 109, 110, 111)
ORDER BY trcdata.StartTime
--trcdata.DatabaseID
--trcdata.TargetLoginName
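Note that this approach assumes the default trace is still enabled; it is on by default, but it can be turned off. A quick way to check:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'default trace enabled';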

More info on fn_trace_getinfo: https://msdn.microsoft.com/en-us/library/ms173875.aspx

More info on fn_trace_gettable: https://msdn.microsoft.com/en-us/library/ms188425.aspx

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

Tip of the Month: Resource Governor for I/O in SQL 2014

Available in SQL Server 2014 Enterprise Edition

The Resource Governor can now restrict the number of I/Os sent to the disk subsystem. In the past, only CPU and memory could be throttled, but now DBAs can also throttle I/O. This gives DBAs the ability to further control those users who insist on running reports or other queries that can bring all other activity on the instance to a grinding halt.

Here is what you can now do with the Resource Governor:

  • Set constraints on physical I/O operations
  • Throttle I/O by controlling the number of I/Os sent to the disk subsystem
  • Throttled I/Os can be reads or writes of any size
  • Limit the physical I/Os issued for user threads in a resource pool

MAX_OUTSTANDING_IO_PER_VOLUME sets the maximum outstanding I/O operations per disk volume. It can act as a safety net even when MAX_IOPS_PER_VOLUME and MIN_IOPS_PER_VOLUME have been set.

ALTER RESOURCE GOVERNOR
WITH (MAX_OUTSTANDING_IO_PER_VOLUME = 20)

 

MAX_IOPS_PER_VOLUME and MIN_IOPS_PER_VOLUME set the maximum and minimum I/O operations per second, respectively:

CREATE RESOURCE POOL ReportPool
WITH (MIN_IOPS_PER_VOLUME = 20, MAX_IOPS_PER_VOLUME = 60)
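A pool by itself throttles nothing until sessions are routed into it and the configuration is applied. Here is a minimal, hypothetical sketch of that plumbing (the ReportGroup, dbo.fnClassifier, and ReportUser names are placeholders, not from the original tip):

USE master;
GO
CREATE WORKLOAD GROUP ReportGroup
USING ReportPool;
GO
-- Classifier functions must live in master and be schema-bound
CREATE FUNCTION dbo.fnClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
-- Route the hypothetical ReportUser login into the throttled group
RETURN CASE WHEN SUSER_SNAME() = 'ReportUser'
THEN 'ReportGroup' ELSE 'default' END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;  -- changes take effect only after RECONFIGURE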

More info can be found at:

https://msdn.microsoft.com/en-us/bb933866.aspx

http://www.databasejournal.com/features/mssql/restricting-io-using-sql-server-2014-resource-governor.html

Do You Know When To Ask For Help?

— By Lori Brown  @SQLSupahStah

This is a bit off from our normal technical blog posts but, hey, if you can’t go out on a limb on your own blog, what good is it? After doing cleanup on systems over the years, I started thinking about why some servers seem to be so poorly configured. Usually systems are poorly architected out of inexperience, and sometimes out of sheer ego. I always wonder why the system architect didn’t look things up or ask for help, since so much information is available at our fingertips through a decent search engine. However, humans being what we are, none of us like to admit to shortcomings, but we all need help at times.

Know when to ask for help

We all know that we need help sometimes, but it is difficult and sometimes embarrassing to admit it. When I was a staff DBA, there were times when I needed to bring in outside consultants for help. For instance, if a project required skills that I did not have (C# programming, for one) or more time than I was able to commit, then consulting was a good option. As a mid-level DBA at the time, I sought out help and mentoring from more experienced DBAs. So, I have raised my hand for help, and I always grew when I did. I was fortunate to have found people who could mentor me and grow my skills.

Eh….who am I kidding….my skills are always having to grow, even now!! It takes time and lots of work to keep current on SQL Server, let alone know all of the coding and hardware pitfalls that are out there. Now that I have been a consultant for quite a while, I have found that the smartest DBAs, and IT people in general, are the ones who know when to ask for help.

The vendor trap


Many times we are called to assist with systems that were not installed or configured by anyone on the company’s staff. Instead, the company listened to a software vendor who seemed to know what they were doing and allowed the vendor to handle all provisioning and configuration of the server. Unfortunately, many software vendors give hardware recommendations and programming advice based on the small test database they used for light load testing. Over time a customer’s data usually grows beyond what the vendor tested with, or customizations are added, and the customer is faced with hardware that cannot handle the work presented to it and code that runs very slowly. The software vendor, when approached for help, often does not have the expertise to assist with anything complex, and companies are left high and dry with a frustrating situation and the feeling that the money invested may have been wasted.

IT managers are then faced with the prospect of incurring consulting costs and may be reluctant to do so. We can assess both SQL and hardware performance to expose areas of weakness quickly. Sometimes solutions can be as simple as adding a little more memory or as complex as a complete index overhaul of a database. The important thing is that we can find a problem when your software vendor is not super cooperative out of fear that poor coding may be revealed. We often provide performance data to software vendors and will work with them directly to get a system back on its feet.

Hitting the knowledge wall

There seem to be several types of personalities that hit the knowledge wall at bad times. There is the “No one knows our systems like I do” person, the “My way or the highway” person, the “I don’t have time to deal with this” person and finally the “I can fake my way through this” person. The last thing you want is for a project to go south, leaving you to explain to your boss what happened or, even worse, to look for a job. No single person can know absolutely everything about hardware, application performance, SQL Server administration, SQL Server performance, etc. In some cases, people are prevented from knowing by organizational boundaries. I have been doing this for a while and have seen systems that were very poorly architected, generally because someone was too proud to ask for advice or input.

If you feel like you are in over your head on a SQL related project, raise your hand and we will be there to help you out. We are not the type of consultants who look to make a staff IT person look or feel bad about their work. We are often brought in to bridge the gap created by the long-standing adversarial relationship that can exist between database and system administrators. We try to point out ways to improve things, and we are ultimately hoping to make the IT staff look good when their system performs well and the boss is not calling them on the carpet for anything.

Hitting the budget wall

Sadly, I also run into a lot of people who want to get help but don’t think they can justify the budget for it. At least think about reaching out to a consulting firm to see if they can work within your budget. There are some cost-effective ways to get expert help if you look for them.

Some places think they do not have the budget or the amount of work to justify having a full-time DBA available for their SQL Server(s); however, mission-critical information is often hosted in a SQL database that is not performing all that well. We offer a fractional DBA service (RemoteDBA) that costs one-third of what it would cost to have a mid-level DBA on staff. We are senior-level DBAs who can help at a very reasonable cost. We can work side-by-side with an overworked staff DBA, or we can fully administer and monitor SQL instances for shops that do not have a DBA on staff.

If you have project work or performance problems that are overwhelming you, look into asking about short term monthly consulting or mentoring. We provide those types of services as well. We often do short term contracting with shops that just need root causes of issues identified and then they go away to do the actual work. And, we always try to educate our customers on how to do things better.

Watch all about it!

ISI has been in the consulting business since 1985; SQLRx is our division that specializes in RemoteDBA services and SQL Server performance tuning. Not too long ago, we started compiling video testimonials from some of our clients who have volunteered to tell the world about their experiences with us. The videos are not long and, to me, are better than reading some dry case study as you normally see from most IT consultants.

ISI specializes in Business Intelligence and Application Development including mobile development for all platforms. We also founded and manage the DFW (Dallas – Fort Worth area) Business Intelligence Community and host monthly meetings featuring various BI experts and/or products, as well as case studies and user testimonials.

Visit our YouTube channel for testimonials on how we have helped our clients.

https://www.youtube.com/user/ISI85DFW/videos

Who knows…..maybe it’s time for you to raise your hand and get some help.

If you think you need help but are not sure (maybe you just want a sanity check), let us know!! We offer a free SQL HealthCheck done via a one-on-one call and GoToMeeting to take a look at your system. It only takes about 30 minutes and is FREE, as in no strings attached. E-mail us at SQLRxSupport@sqlrx.com to get a checkup scheduled.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!