Category Archives: Advice

What You Need to Know Before Hosting Your SQL Server in the Cloud

–By Lisa Dominy


Whether you currently host your SQL Server in the Cloud or are considering moving it to the cloud, here are a few guidelines that can lead you to select the right hosting provider for your specific requirements.

  1. Set the stage for a tech discussion. Familiarize yourself with your current technology profile so you can have an efficient technical conversation with hosting providers. If you already know your needs in these areas…

  • CPU (processors)
  • RAM (memory)
  • Disk space (storage)
  • Applications
  • Database software requirements

…you are all set. If not, be sure you are confident that the hosting provider can ask the right questions to provide optimum performance for your company.

  2. Judge whether they will take the time to find out about your business. Find a hosting provider that will take the time to ask the right questions to get to know your business. They should work with you to build a successful relationship. The right hosting provider should be able to provide the right technology and services to meet your business objectives.
  3. Confirm that they provide quick response times. Make sure the hosting provider will provide prompt responses from knowledgeable support staff when needed. What is the process they use if an issue comes up? This is a very important step that is sometimes overlooked, and never a good lesson to learn during the move.
  4. Look for hard evidence that they solve problems and issues promptly. It is very important for your hosting provider to solve problems quickly. If a server is down or running slow, this can cause a lot of damage in the long run, as well as lost revenue.
  5. Ask if they have access to SQL experts like ISI’s SQLRx team. Make sure your hosting provider has access to SQL experts. This is very important to the success of your implementation. Why?
  • You’ll need performance information specific to a SQL workload that indicates that hardware may need to be expanded in a specific way.
  • Without a partnership with a SQL expert like SQLRx, it may be difficult and costly to provide enough hardware to support some companies’ workloads. There have been times when our SQLRx team found that the hosting company had provided great hardware but SQL needed tuning or configuration changes.
  • Applications and database software can cause server conflicts no matter how well architected they appear. This is where the right partnership is critical.

Are you ready to go to the cloud? The right hosting company must have access to the right experts and deep experience in the Cloud. This is why we’ve built a strong, trusted relationship with VAZATA. They are the hardware/hosting expert and we are their trusted SQL partner. VAZATA is a very responsive and knowledgeable hosting provider.

Here is an actual client example:

Despite having a very important SQL Server backend, an information services company did not have a SQL Database Administrator, and they had multiple application servers in their environment heavily utilizing that SQL backend. In addition, many of the applications they were running were outdated but could benefit from better hardware. Based on their needs, there was no cost justification for bringing in a full-time SQL Database Administrator, so the company leveraged ISI’s SQLRx to provide that service. VAZATA enhanced their experience when they moved all of the company’s virtual servers into the VAZATA private cloud environment, and the customer immediately started to see better performance. Equally important, based on recommendations from ISI’s SQLRx team, they adjusted some of the infrastructure resources (CPU, RAM, and storage) to obtain the best performance possible in the most cost-effective way for the number of users accessing the system.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com or visit www.vazata.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Shrinking Database Files – Let’s Not And Say We Did

— by Lori Brown  @SQLSupahStah

I recently ran into someone who swears that shrinking database files on a production server is a good thing and proudly said they did it on all systems they work on.  OMG!!  MSDN itself warns (https://msdn.microsoft.com/en-us/library/ms189493.aspx) that data moved during a shrink is scattered to whatever free space is available in the file, fragmenting indexes and potentially slowing queries that scan a range of an index.


Please notice I am talking about production SQL Servers and not development, test, QA, staging or anything that is not production.  Of course, if you shrink database files in any environment you are still going to cause more fragmentation, but since the use of those systems is often vastly different from a production system, you as the DBA have to judge the ramifications on those non-production systems.
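If you want to see what shrinking has done to the indexes in a database, here is a quick sketch using sys.dm_db_index_physical_stats (run it in the database in question; the 30 percent cutoff is just a common rule of thumb, not gospel):

SELECT OBJECT_NAME(ips.[object_id]) AS TableName
,i.[name] AS IndexName
,ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i ON ips.[object_id] = i.[object_id] AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30 -- show only the badly fragmented indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;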


Have I ever had to shrink database files on the production systems I monitor? Uhhh, yes.  But I do so very infrequently and with reason.  I even have one system where the guy who is in charge of it refuses to allow me to disable the nightly job that shrinks some of the databases.  I can only do so much, but he can’t say I have not duly warned him.

There are tons of good articles and even rants on why you should not shrink your database files.  Here are some links in case you think I am the only one telling you NOT to shrink your database files.

https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/ – Gotta love some Ozar!

http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/ – From THE man!

http://www.sqlservercentral.com/blogs/simple-sql-server/2016/01/19/shrinking-database-data-files/ – Good post from a new blogger with the appropriate warnings.

https://www.am2.co/2016/04/shrink-database-4-easy-steps/ – Lots of warnings throughout the post.

https://www.mssqltips.com/sqlservertip/3008/solving-sql-server-database-physical-file-fragmentation/ – Shows a tool from Sysinternals that lets you see physical fragmentation, and states that once database files are physically fragmented, the only way to undo the damage is to take your databases offline and defrag them at the disk level.

If you go to the links, you may be thinking that some of those posts are old.  However, look through the comments…there are new ones up to the present, and they are still getting the same answers.

If after all that you are still not convinced that shrinking database files causes physical fragmentation, well… I tried.  But go ahead and keep at it.  If you are lucky you may not ever have an issue or experience things getting slow because of fragmentation.  Roll those dice!


For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

SQL Server Memory In Task Manager – Tell Your Manager to Not Worry!

-by Ginger Keys

We are going to re-promote this subject because it is still so relevant. All too often we run into people who are in a panic after looking at Task Manager to try to diagnose SQL stress. They become very worried when they see how much memory SQL uses, not realizing that this is expected behavior.

Task Manager is just that, a task manager. If you are looking at it to get accurate values for memory (RAM) usage or CPU time, you will get not-so-accurate information. False alarms. Information that doesn’t exactly mean what it implies.

What I usually see when I look at Task Manager is that CPU is low, disk usage is low, and memory used is high – which is what we expect on a SQL machine.  SQL will use as much memory as it can get, to improve performance by caching tons of data in memory in case it needs it, to avoid having to go to disk.  It will immediately release any memory to the OS if needed.  SQL is a memory hog, and will hold onto all the memory we allocate to it, whether it is actually using it or not.  And this is what we want it to do!

SQL will take all the memory that is allocated to it when it is needed, store data and execution plans in the buffer, and when it is through using it, it doesn’t release it…it caches that data into memory until either 1) SQL needs it again, or 2) Windows OS requests the memory.

This isn’t an indication of performance stress…it’s just showing that SQL has reserved all the memory that we told it to use (through setting min and max memory on the SQL instance properties [1]).  SQL doesn’t let go of memory unless the server comes under memory pressure (like if other apps need memory and Windows sends out a memory pressure notification).  As long as the OS doesn’t need to use it for anything, SQL will hold on to it for future use.

The default memory setting when you install SQL is for it to use all of the memory on the machine. When Windows asks SQL to give memory back, SQL releases it in huge amounts and then proceeds to build back up to the maximum again. This is not optimal for stability or performance, and it is highly recommended to set a limit via the Max Memory setting as a best practice. That way the Windows OS is always guaranteed a certain amount.
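If you prefer a script to the instance properties dialog, here is a minimal sketch using sp_configure (the 8192 MB value is purely illustrative; size it to leave headroom for the OS and anything else on the box):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192; -- illustrative value; leave headroom for the OS
RECONFIGURE;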

Click here for more info on setting Max Memory: https://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

Another good reference for setting Max Memory for SQL is here: http://www.sqlservercentral.com/blogs/glennberry/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008/


And when looking at Resource Monitor, it is a mistake to use the Available and Free metrics as the key indicators for determining overall system health [2]. The Available MB of memory is made up of both Standby and Free memory… the stuff in Standby is what Windows has put in the cache (but can be booted out if needed), and Free means it is not being used at all. The combination is what Windows calls Available. So there is actually data in the cache that Windows says is available. Some of the categories in Task Manager/Resource Monitor can be very confusing and even misleading, which is why we shouldn’t use these metrics alone to decide if we are in danger of running out of memory.

Instead of only relying on task manager or resource monitor, we really need to look at other metrics to find out if there is a memory related performance issue on the machine.

Some of the metrics to pay attention to are:

  • Page Life Expectancy
  • Available Bytes
  • Buffer Cache Hit Ratio
  • Target & Total Server Memory
  • Memory Grants Pending
  • Pages/sec (Hard Page Faults)
  • Batch Requests/sec & Compilations/sec

Some of these metrics can be queried from TSQL, or can be observed with Perfmon (Windows Performance Monitor – located in Administrative Tools).

 Page Life Expectancy (PLE)

Found in Perfmon at SQL Server: Buffer Manager – Page Life Expectancy, this counter measures the average amount of time unlocked data pages are staying in the buffer pool.   During peak production hours this number may dip into lower numbers, but we generally want this number to remain above 300 seconds (so data is staying in the cache for 5 minutes or longer). The longer data stays in the buffer, the less likely SQL will have to go to disk for I/O operations.

There is an interesting occurrence with page life expectancy… when SQL really does run out of memory, PLE drops very quickly, but it grows back very slowly. You’re probably not still having memory issues during the coming-back stage, even though it may look like it’s struggling. If PLE stays down, however, then you have a problem. So be careful, because PLE can be misleading when it takes a drop… there’s a difference between it going down and staying down (you have a memory problem), and going down once and crawling back up (which is normal). If it stays below 300 seconds consistently, you may need to add more RAM.

You can monitor this through Perfmon, or run the following query:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Page life expectancy'

Available MBytes

Found in Perfmon as Memory: Available MBytes, this counter detects how much memory is available for use, and whether Windows is experiencing memory pressure. We generally start investigating if this number consistently goes below 500MB, which would indicate low memory and that Windows may start memory management operations. If Available MBytes is consistently low, you may need to add more RAM.

This counter is not exposed through sys.dm_os_performance_counters, so it is generally observed through Perfmon.
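If you want a rough TSQL cross-check anyway, sys.dm_os_sys_memory reports the OS-level view (this is a different source than the Perfmon counter, so treat it as a sanity check rather than an exact match):

SELECT available_physical_memory_kb / 1024 AS [Available MBytes]
,system_memory_state_desc
FROM sys.dm_os_sys_memory;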

Buffer Cache Hit Ratio

Found in Perfmon as SQL Server: Buffer Manager: Buffer Cache Hit Ratio. This counter averages (since the last restart of your SQL instance) how often SQL Server goes to the buffer pool to get data, and actually finds that data in memory, instead of having to go to disk. We want to see this ratio high in OLTP servers – around 90-95%. The higher the ratio, the less often SQL has to go to disk for I/O operations, which translates into better performance for your end users. If this number is consistently below the 90% mark, you may need to add more RAM to your server to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Buffer cache hit ratio'

 Target & Total Server Memory

Found in Perfmon at SQL Server: Memory Manager – Total Server Memory, and SQL Server: Memory Manager – Target Server Memory. The Total Server Memory is the current amount of memory that SQL Server is using.  The Total Server memory will be quite a bit lower than the Target memory during the initial buffer pool ramp up… it is trying to populate the cache and get pages loaded into memory.  Performance might be a little slower during this time since more disk I/O is required, but this is normal.  After it ramps up, and normal operations resume, Total Server Memory should be very close to Target Server Memory. (The ratio should be close to 1). If Total Server Memory does not increase much, but stays significantly less than Target, this could indicate a couple of things…

1) You may have allocated much more memory than SQL can use – SQL could cache the entire databases into memory, and if the databases are smaller than the amount of memory on the machine, the data won’t take up all the space allocated to SQL. So in this case Total Memory (actually memory being used by SQL) will never reach Target Memory (amount allocated to SQL). Or,

2) SQL cannot grow the buffer pool because of memory pressure from outside of SQL. If this is the case, you need to either increase the Max Server Memory, or add more RAM to boost performance.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] IN ('Total Server Memory (KB)','Target Server Memory (KB)')

 Memory Grants Pending

Found in Perfmon as SQL Server: Memory Manager – Memory Grant Pending, this counter measures the total number of SQL processes waiting for a workspace memory grant. The general recommendation for this measurement should be 1 or less. Anything above 1 indicates there are SQL processes waiting for memory in order to operate.

Memory grants pending could be due to bad queries, missing indexes, sorts or hashes. To investigate this, you can query the sys.dm_exec_query_memory_grants view, which shows which queries (if any) require a memory grant to execute [5].
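As a quick sketch, this shows any requests currently waiting on a grant (grant_time stays NULL while a request waits):

SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms, query_cost
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL; -- still waiting for workspace memory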

If the Memory Grants Pending are not due to the above mentioned conditions, then more memory should be allocated to SQL Server by adjusting Max Server Memory. Adding more RAM should be the last resort in this case.

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
AND [counter_name] = 'Memory Grants Pending'

Pages/sec (Hard Page Faults)

Found in Perfmon as Memory: Pages/sec, this measures the number of pages read from or written to disk. Pages/sec is the sum of Pages Input/sec and Pages Output/sec and reports the number of requested pages that were not immediately available in RAM and had to be read from disk (resulting in hard page faults). A hard page fault occurs when Windows has to use the swap file on disk [3]: an address in memory that is part of a program is no longer in main memory but has been swapped out to the paging file, making the system go looking for it on the hard disk. Since disk storage is much slower than RAM, performance can be greatly degraded.

We want to see the Pages/sec number below 50, and closer to 0. If you see a high value of Pages/sec on a regular basis, you might have performance degradation, but not necessarily. A high Pages/sec value can happen while doing database backups or restores, importing or exporting data, or by reading a file mapped in memory [4].

Because a hard fault doesn’t necessarily indicate a critical error condition, depending upon what’s normal for your environment, it’s a good idea to measure a baseline value and monitor this number against it. If the values are consistently higher than your baseline value, you should consider adding more RAM.

Batch Requests & Compilations

SQL Server: SQL Statistics – Batch Requests/sec. This is the number of incoming queries.

SQL Server: SQL Statistics – Compilations/sec. This is the number of new execution plans SQL had to build.

If Compilations/sec is 25% or higher relative to Batch Requests/sec, SQL Server is putting execution plans in the cache but never actually reusing them.  Your valuable memory is being used up to cache query execution plans that will never be used again – instead of caching data.  This is bad. We don’t want this to happen. And a high Compilations/sec value (like over 100) indicates there are a lot of ad-hoc (one-hit-wonder) queries being run. You can enable the “optimize for ad hoc workloads” setting if this is the case; it caches only a small plan stub on first execution and stores the full execution plan only after a query is executed a second time.
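A minimal sketch of turning that setting on (it is an advanced option, so test the change before rolling it out everywhere):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1; -- cache plan stubs for one-off queries
RECONFIGURE;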

To query these metrics with TSQL:

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'Batch Requests/sec'

SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'SQL Compilations/sec'

To get the ratio of Compilations to Batch Requests:

SELECT ROUND(100.0 *
(SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'SQL Compilations/sec')
/
(SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] = 'Batch Requests/sec')
, 2) AS [Ratio]

 

Looking at the counters in a Perfmon report view on one particular server, we saw it in a healthy state: over 11GB of Available MBytes, no hard page faults (so I/O was operating from buffer, not disk), Buffer Cache Hit Ratio at 100%, a PLE of 21,515 seconds (that’s almost 6 hours data stays in the buffer), no pending memory grants (so no SQL process was having to wait for memory), ample Total memory, and a low compilation ratio (11%) compared to all the queries being run.

These measurements should be observed and understood, instead of using Task Manager to make critical decisions about resources on your SQL Server.

If we make changes based only upon what we see in Task Manager and reduce the amount of memory SQL can use, then whenever SQL needs more memory to execute processes it will have to read from and write to disk, which will degrade performance.  Reducing the amount of memory SQL can use will slow performance and will most likely increase complaints from end users. The lower you set your Maximum Server Memory, the lower your performance will go.

No matter how much memory you put in a system, SQL Server will use all it can get until it’s caching entire databases in memory and then some.  This isn’t an accident. SQL Server caches data in memory.  On purpose. That means it is healthy, and enhancing performance for your end users.

Now send a link to this blog post to your favorite person on staff who needs to be educated about Task Manager and SQL Server. You will be glad you did!

References

[1] “Glenn Berry’s SQL Server Performance” https://sqlserverperformance.wordpress.com/2009/10/29/suggested-max-memory-settings-for-sql-server-20052008/

[2] “Use Resource Monitor for Memory Monitoring” by Scott Lowe.   http://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-for-memory-monitoring/

[3] “Swap File”. TechTarget. http://searchwindowsserver.techtarget.com/definition/swap-file-swap-space-or-pagefile

 

SQL Server 2016 RTM Available!!

In case you live under a rock or were at the dentist, Microsoft announced that SQL Server 2016 became generally available today, June 1, 2016. See: https://blogs.technet.microsoft.com/dataplatforminsider/2016/06/01/sql-server-2016-is-generally-available-today/


A handy feature comparison of previous versions of SQL can be located here:

https://www.microsoft.com/en-us/server-cloud/products/sql-server/comparison.aspx

SQL 2016 feature comparison between editions can be found here:

https://msdn.microsoft.com/en-us/library/cc645993.aspx

Let the upgrades begin!! Everything upgrade related can be found here: https://msdn.microsoft.com/library/bb677622.aspx

We are excited to be working with this new version! Give us a shout with questions and stories of your SQL 2016 experiences!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

How SQLRX Can Help You When Tools Do Not

 

For those of you who visit our humble blog from time to time and have never actually checked out our website (www.sqlrx.com) to get acquainted with what we do, please allow me to check that box for you.

 

SQLRX is part of a Dallas, Texas-based consulting company called Integrated Services, Inc. (ISI). The SQLRX division is tasked with providing consulting services for everything SQL Server. We do it all, from installation to migrations to performance tuning and RemoteDBA services. Since we have been in business for a while, we have worked on systems from small one-server shops to Fortune 100 high-transaction enterprise systems.

 

One of the things we commonly see is that many shops have SQL monitoring software installed that they regularly use to monitor the systems that we are asked to work on. Interestingly, we always seem to find root causes of performance issues that the monitoring software was either missing or generating data on but not providing a solution for. I ran across a blog post from Jonathan Kehayias warning about severe performance issues that can be directly attributed to a very well-known monitoring software vendor, who has yet to fix them. https://www.sqlskills.com/blogs/jonathan/a-warning-about-diagnostic-managers-query-monitor-feature-in-v9/   I sent this link to a client of mine that uses this software, and he said that the next version was also buggy and that the vendor had tried to patch it but had not fixed it.

 

So, why use expensive tools to monitor SQL performance if they don’t fix the problems they find? I realize that some places only have one person acting as the DBA, and they use the software to help them keep an eye on things. However, when you look at how most of the SQL monitoring software companies promote themselves, they make it sound like they are the answer to all your issues. I know that is not true for anyone or anything.

RemoteDBA

We offer a DBA subscription service that is VERY reasonably priced (about 1/3 the cost of an FTE) and can be used either as the sole DBA for up to 4 SQL Servers or as backup to a staff DBA who needs some help. SQLRX does not install any executable on any server, so you don’t have to worry about applying updates or planning an outage for software maintenance. We strictly use SQL Server functionality and have developed maintenance and proactive monitoring routines, complete with reporting, designed to watch for only the things that really need to be monitored and to alert only when there really is an issue. And we don’t just monitor, we also fix things. You can’t get that from software.

 


Performance Tuning

Another service that we offer that is SUPER valuable is our performance tuning assessments. While these are fixed-fee engagements, they are a complete assessment of a SQL Server that looks at both SQL and Windows hardware performance. We call them holistic because we look at the entire system, not just one part. We don’t just tell you what is wrong, we also provide the recommended fixes in the order that we would apply them. Numerous companies with systems that were performing very badly have used our assessments to fix everything from queries to hardware that was incorrectly configured by a major hardware vendor. We even had one company ask for our assistance in completely re-engineering all indexes in their main database, which we did quite successfully.


If you are looking for access to a SQL DBA or have software that is monitoring your SQL Servers but not providing the answers you are looking for, please contact us. We would be happy to provide a free of charge proof of concept and would love to help you out. Just send us an e-mail to get started: SQLRX@sqlrx.com

 

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

 

March 2016 Tip of the Month – Full Text Indexing – Clean up those log files!

If you have Full Text catalogs set up in your databases, you may notice that your log folder contains lots of files named SQLFTxxxxxxxxxxxx.LOG. These are known as crawl logs, which record information on full text indexing activity. These logs can really pile up, and there is no automated process that will trim older files. Unfortunately, this can mean that a DBA has to manually delete old files, being very careful not to delete any of the other files that SQL uses in that folder, such as the error log, default trace and system health files.


You can move the location of these files by changing the default location of the error dump path in your system configuration parameters, but you have to plan on a short outage while SQL is stopped and restarted. Sometimes these files can become quite large and hard to open. You may need to run EXEC sp_fulltext_recycle_crawl_log @ftcat = 'FTCatalogName' to make SQL close the current crawl log and start a new one.


For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!

Using Common Table Expressions to Improve Query Performance

-by Jeffry Schwartz

Queries with multiple join clauses create particularly difficult issues when they perform poorly. Join ordering is extremely important because if a join creates a huge interim result set at the beginning of the query execution, SQL Server must continue to handle this result set until it can be trimmed down with filtering or additional join clauses. If this does not occur until the end of query execution, the query will usually perform poorly. However, if the number of records from a large table can be limited early in the execution or the join to the large table is added at the end of the query, the query will usually perform better. Therefore, when large tables are involved the ordering and the types of joins can dramatically affect the performance of a query. When multiple large tables are involved, it is often quite difficult to determine which one caused the poor performance.

One way to influence SQL Server’s ordering of joins is to use Common Table Expressions (CTEs), which were introduced in SQL Server 2005. Using CTEs also has the benefit of allowing a developer to break up a very complex query into its component parts, which can simplify the logic as well as assessment of exactly which joins dramatically affect performance. In addition, since the query is broken up into component parts, development, maintenance, and readability can be simplified greatly. A CTE is defined to be a “temporary named result set, derived from a simple query and defined within the execution scope of a SELECT, INSERT, UPDATE, or DELETE statement.” “CTEs can be thought of as alternatives to derived tables (subquery), views, and inline user-defined functions.” Source: http://en.wikipedia.org/wiki/Common_table_expressions. The following query provides an illustration of a CTE in its most basic form:

with Top30000 as ( -- just check the first 30,000 entries
select top 30000 Duration
from <table> with (nolock)
where Duration is not null
)
select min(Duration) as [Min Duration]
from Top30000

Clearly, this query could be written in one statement and the query plan would undoubtedly be the same regardless of whether it used a CTE. One key thing to remember about CTEs is that they only exist within the scope of the query. For example, any subsequent T-SQL statements will not have access to the Top30000 entity, so it is NOT a temporary table. A slightly more sophisticated example is shown below:

use AdventureWorks2008R2;
with ProductReviewID as (
SELECT p.ProductID, p.Name, pr.ProductReviewID, p.ListPrice
FROM Production.Product p
LEFT OUTER JOIN Production.ProductReview pr ON p.ProductID = pr.ProductID
WHERE p.ProductID = 718
)
SELECT DISTINCT p.ProductID, p.Name, p.ListPrice, sd.UnitPrice AS [Selling Price]
FROM Sales.SalesOrderDetail AS sd
INNER JOIN ProductReviewID AS p ON sd.ProductID = p.ProductID

Again, this query could be constructed without using a CTE. In this particular case, I selected two sample join queries from BOL and combined them into a single query using a CTE. Although the filter is hard-wired in this example, it is illustrative of filtering the result set early in the query to reduce the number of records being used by the last part of the query. Note also that the last portion of the query had to join to the CTE in order for the CTE to be visible. A subquery can also accomplish this.

Most CTE examples illustrate recursion or utilize the following layout, which is similar to the AdventureWorks example above:

CTE1
CTE2 selects from CTE1
Select from CTE2 joining other tables

However, this layout is also possible:

CTE1
CTE2
Select from CTE2 joining CTE1 joining other tables

As is this form:

CTE1
CTE2
CTE3, which selects from CTE2 joining CTE1
Select from CTE3 joining other tables
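To make that last layout concrete, here is a small sketch against AdventureWorks2008R2 (the tables come from the sample database, but the business logic is purely illustrative):

use AdventureWorks2008R2;
with RecentOrders as ( -- CTE1: filter the large table early
select SalesOrderID, CustomerID
from Sales.SalesOrderHeader
where OrderDate >= '2008-01-01'
),
OrderLines as ( -- CTE2: just the detail columns we need
select SalesOrderID, LineTotal
from Sales.SalesOrderDetail
),
CustomerTotals as ( -- CTE3: selects from CTE2 joining CTE1
select ro.CustomerID, sum(ol.LineTotal) as TotalSales
from OrderLines ol
inner join RecentOrders ro on ol.SalesOrderID = ro.SalesOrderID
group by ro.CustomerID
)
select ct.CustomerID, c.AccountNumber, ct.TotalSales -- final select joins CTE3 to another table
from CustomerTotals ct
inner join Sales.Customer c on ct.CustomerID = c.CustomerID
order by ct.TotalSales desc;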

One method that I have used extensively to develop efficient queries is to build complex queries gradually using the last layout, i.e., select from tables and gradually introduce joins while maintaining excellent performance. It is critical that representative data be used during this process, i.e., the table sizes used for testing should be representative of what the production database contains. For example, most of the large tables in the AdventureWorks database contain approximately 20,000 records. With table sizes this small, almost any query design works well. However, if the tables contain 200,000 or 2,000,000 records the behavior would be quite different. In these situations, joins can provide substantial filtering or actually expand the result set. The advantage of the gradual addition approach is that the developer can identify immediately those joins that significantly degrade performance and then attempt to utilize other joins until only the most troublesome joins remain. Although a very efficient form of the query can often be devised using only CTEs, sometimes, interim results must be stored in a temporary table, which is then joined with the large troublesome table.

In summary, although using CTEs will not force SQL Server to perform joins in the order specified in the T-SQL code, it can often influence SQL Server to adhere more closely to a desired join ordering sequence, especially if filtering clauses are used early in the CTE sequence. Adding joins against large tables at the end of a query can also achieve a similar result. Regardless, using a gradual approach to building queries with CTEs enables developers to understand exactly what the effect of every join is and, therefore, to create the most efficient query possible.

This is the beginning of a series on CTEs and performance. Stay tuned for more CTE posts to come. I’ll show some examples of how to take an existing query with joins and convert it to a CTE with performance improvement.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRX@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

Troubleshooting SQL Server CPU

–by Ginger Keys

Recently, a client of mine contacted me in a panic because CPU was running at 99-100% over an extended period of time, which was affecting everything on the machine.

We’ve all seen it at one time or another. And there are several reasons this could be happening. It could be SQL consuming most of the CPU, or it could be something outside of SQL. As a DBA, you don’t want to spend a lot of time troubleshooting high CPU usage if SQL isn’t the culprit. If the issue is caused by processes outside of SQL, you can take a high level look around, but should probably defer that to your Sys Admin or your IT team. If it is SQL that is causing the issues though, we need to dig in deeper.

This article discusses basic CPU troubleshooting. We will take a very ‘high level’ approach to a topic that can potentially get very in-depth and complex. The goal is not only to provide immediate relief but also to help prevent future stress to your CPU caused by SQL.

Is it SQL or System Processes causing high CPU Usage?

The first step is to determine if it is SQL or not causing CPU pressure. There are several ways you can make this determination.

1. Task Manager/Resource Monitor

The quick way is to open Task Manager and/or Resource Monitor and look to see if sqlservr.exe is the culprit. Be cautious when using Task Manager to troubleshoot, as this tool records data once per second, so it will show momentary spikes in performance (which are normal) instead of an average over a longer time span (which is a better indicator of stress).


2. Perfmon Counters  

You can also determine if SQL is the culprit for high CPU usage by using Performance Monitor (Perfmon). Unlike task manager which records data once/second, you can adjust Perfmon to poll the data every 3-5 seconds to get a better indication of CPU stress. There are four counters you will want to look at:

    • Processor: % Privileged Time (_Total) – measures the amount of time your computer is executing I/O operations and other System services (aka Kernel mode).  If this metric is high (>20%) it’s possible that something like system backups are running, your driver is bad, or large files are being copied into or out of the system.   Either way, it’s time to get your Sys Admin involved.
    • Processor: % User Time (_Total) – measures amount of time your computer is executing user applications, including SQL.  If user time is consuming most of the time, we need to look at what is running on SQL to see what is causing the high CPU usage.
    • Process: % Privileged Time (sqlservr.exe instance) – measures the percentage of elapsed time that the process threads spent executing code in privileged (Kernel/System) mode
    • Process: % User Time (sqlservr.exe instance) – measures the percentage of elapsed time that the process threads spent executing code in user (application/SQL) mode.


3. Query DMVs

Another method to find out what is causing high CPU usage is to query the DMVs with the following script:

--Get CPU Utilization % from SQL, NonSQL (System), and Idle
DECLARE @ms_ticks_now BIGINT
SELECT @ms_ticks_now = ms_ticks
FROM sys.dm_os_sys_info;
SELECT TOP 10 record_id
,dateadd(ms, -1 * (@ms_ticks_now - [timestamp]), GetDate()) AS EventTime
,SQL_CPU_Use
,100 - SystemIdle - SQL_CPU_Use AS NonSQL_CPU_Use
,SystemIdle
FROM (
SELECT record.value('(./Record/@id)[1]', 'int') AS record_id
,record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdle
,record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQL_CPU_Use
,[timestamp]
FROM (
SELECT [timestamp]
,convert(XML, record) AS record
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
AND record LIKE '%<SystemHealth>%'
) AS x
) AS y
ORDER BY record_id DESC

(from: http://sqlblog.com/blogs/ben_nevarez/archive/2009/07/26/getting-cpu-utilization-data-from-sql-server.aspx)

The query results break down CPU usage as SQL usage, NonSQL (or System) usage, and Idle. If the SQL_CPU_Use has predominantly higher percentages than NonSQL or Idle, then we know that SQL is causing the pressure on CPU.



System Process causing high CPU usage

If, after your preliminary investigation, you discover that SQL is not causing CPU pressure but rather it is a system issue, the problem could be any number of things. It’s good to have a general understanding of these other issues, but as a SQL DBA, you may not be able to spend a lot of time troubleshooting them. Some system, or non-SQL, issues that could cause CPU stress are:

1. Architecture

2. Virtual Environment

  • Host CPU settings
  • Host processor usage
  • vCPUs allocated among guests
  • # guests
  • # other machines on host, and how busy are they
  • Resource governor settings

3. CPU Power Settings

  • Should be set to high performance on SQL server.  However, this has had inconsistent results, and may or may not make a significant difference.

4. Hardware & Software Interrupts – these should be looked at over a period of time, (at least 5 seconds per sample over 30 minutes or more), and should also be interpreted in relation to a baseline of what’s normal for your system

  • Processor: % Privileged Time (Kernel Time + DPC + Interrupts) – over 15-20% consistently is bad.  High Kernel time could indicate Windows backups are happening, or could result from a bad driver.
  • Processor: % Interrupt Time – over 5-10% consistently is a problem.  This shows the percentage of time the processor spent servicing hardware interrupts.  A high rate could indicate disk or network adapter issues.
  • Processor: % DPC Time – over 5-10% consistently is a problem.  This shows the percentage of time the processor handles deferred procedure calls.  These are interrupts with a lower priority that have been deferred to some kind of queue.  Software or hardware could cause this, and it might be necessary to redistribute interrupts or upgrade to a faster processor.

5. Antivirus Scans – temporarily disable anti-virus software to rule this out, or locate the antivirus process in task manager/resource monitor.

6. SvcHost.exe – this is a generic process name, and includes individual services that Windows uses to perform various tasks.  There can be multiple occurrences of this process running on your computer at the same time.  


SQL causing high CPU usage

If you determine that it is definitely SQL causing the CPU stress, you first need to find the problem and correct or stop it. After you provide some immediate relief to your system, you then need to troubleshoot the issue so that you can make necessary adjustments and prevent it from happening in the future.

1. sys.dm_exec_requests

First you want to find what in SQL is monopolizing the CPU.  To do this, look at the currently executing queries along with the CPU usage.  Run this script to identify the queries using the most CPU time:

SELECT sessions.session_id
,requests.status
,requests.blocking_session_id
,requests.wait_type
,requests.wait_time / (1000.0) AS [wait_time(sec)]
,requests.cpu_time
,requests.total_elapsed_time / (1000.0) AS [total_elapsed_time(sec)]
,Substring(sqltext.TEXT, (requests.statement_start_offset / 2) + 1,
((CASE requests.statement_end_offset
WHEN -1 THEN Datalength(sqltext.TEXT)
ELSE requests.statement_end_offset
END - requests.statement_start_offset) / 2) + 1) AS statement_text
,requests.command
,sessions.login_name
,sessions.host_name
,sessions.program_name
FROM sys.dm_exec_sessions AS sessions
INNER JOIN sys.dm_exec_requests AS requests ON requests.session_id = sessions.session_id
CROSS APPLY sys.dm_exec_sql_text(requests.sql_handle) AS sqltext
WHERE requests.session_id != @@SPID
ORDER BY requests.cpu_time DESC

 

When you identify the query consuming the most CPU time, you should be able to observe the TSQL command that SQL is processing, the user issuing the command, and determine whether it can be safely terminated or not.

2. sys.dm_os_wait_stats

This provides historical (from the time that the SQL instance started), cumulative data and records waits related to the workload of your server.  We can use this DMV to investigate the most prevalent wait types over a period of time, and specifically see CPU waits (signal time) which might indicate CPU pressure.  Since this data is cumulative, current waits might not be evident and adjacent samples should be differenced to determine actual resource and signal wait times.  It is best to track these statistics over time and establish a baseline on which to compare and look for anomalies.
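A minimal sketch for eyeballing the top waits, including signal wait time (the CPU portion); remember these values are cumulative since the last restart, so difference two samples to see current behavior:

SELECT TOP 10 wait_type
,waiting_tasks_count
,wait_time_ms
,signal_wait_time_ms -- time spent waiting for CPU after the resource was ready
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;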

3. Activity Monitor

Using Activity Monitor in SSMS displays both DMVs above in an easy-to-read report, but running it can itself create a performance hit…maybe not a good idea if your CPU is already under extreme stress.

Blog_20151001_5

4. Stored Proc / Query

After you have taken care of the query that was causing the CPU pressure, you can run the following to get recent CPU utilization by query.  This will show the most expensive queries that have already completed, along with their execution plans, so that you can check for ways to prevent them from beating up your CPU in the future.

-- Get CPU utilization by query
SELECT TOP 10 st.text
,st.dbid
,st.objectid
,qs.total_worker_time
,qs.last_worker_time
,qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC

(from: http://sqlskills.com/blogs/glenn)

Some of the steps you might need to take in order to improve performance for these queries are:

  • Check execution plan and tune query.  For a great explanation click here https://blog.sqlrx.com/2015/03/16/introduction-to-sql-server-execution-plans/
  • Update statistics on tables called – statistics are what SQL uses to decide how to execute your query in the most efficient way.
  • Identify high I/O queries
  • Create/rebuild indexes – be sure not to create indexes without thoroughly testing first!

5. Database

To find out if one or more of your databases is predominantly causing CPU pressure, the following query will help determine which database(s) have used the most CPU resources on your instance:

-- Get CPU utilization by database
WITH DB_CPU_Stats
AS
(SELECT DatabaseID, DB_Name(DatabaseID) AS [Database Name], SUM(total_worker_time) AS [CPU_Time_Ms]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY (SELECT CONVERT(int, value) AS [DatabaseID]
FROM sys.dm_exec_plan_attributes(qs.plan_handle)
WHERE attribute = N'dbid') AS F_DB
GROUP BY DatabaseID)
SELECT ROW_NUMBER() OVER(ORDER BY [CPU_Time_Ms] DESC) AS [CPU Rank],
[Database Name], [CPU_Time_Ms] AS [CPU Time (ms)],
CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER() * 100.0 AS DECIMAL(5, 2)) AS [CPU Percent]
FROM DB_CPU_Stats
WHERE DatabaseID <> 32767 -- ResourceDB
ORDER BY [CPU Rank] OPTION (RECOMPILE);

(from: http://sqlskills.com/blogs/glenn)

This information should be gathered over a period of time to determine if it is a regular occurrence. If you see that a database is consistently using most of the CPU resources, you can (a quick sketch follows the list):

  • Update statistics – to ensure SQL is executing queries in the most efficient way
  • Create/rebuild indexes – to help eliminate fragmentation and forwarded records, as these cause SQL to work harder to find records
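For example, a minimal maintenance sketch (the database and table names are hypothetical; in practice, target the specific fragmented indexes you find rather than rebuilding everything blindly):

USE SalesDB; -- hypothetical database name
EXEC sp_updatestats; -- refresh statistics database-wide
ALTER INDEX ALL ON dbo.Orders REBUILD; -- rebuild indexes on a heavily fragmented table (hypothetical table)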

6. Excessive compilations/recompilations

When a query is submitted to SQL, it has to create or reuse a query plan. Creating a query plan can be very CPU intensive, so reusing them is easier on the CPU. Query plans may have to be recompiled if the object schema changes, or if the statistics change on the underlying table or database.

You can view these metrics through Perfmon using these counters:

  • SQLServer:SQL Statistics: SQL Compilations/sec
  • SQLServer:SQL Statistics: SQL Re-Compilations/sec

The ideal value for Re-compilations per second is zero.  And the ideal value for Compilations is less than 25% of SQLServer:SQL Statistics: Batch Requests/sec.
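The same counters can be pulled from TSQL if you would rather not open Perfmon; the values are cumulative, so take two samples and difference them to get per-second rates:

SELECT [counter_name], [cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
AND [counter_name] IN ('Batch Requests/sec','SQL Compilations/sec','SQL Re-Compilations/sec')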


If your compilations or recompilations are at a higher than optimal number, consider the following (a configuration sketch follows the list):

  • Optimize for ad-hoc
  • Enable parameterization – Query plan reuse is critical for optimal performance on highly transactional systems and it is most often achieved by parameterizing your queries or stored procedures.
  • Avoid statistics based recompilations – by using KEEP PLAN hint or KEEPFIXED PLAN hint
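A hedged sketch of the first two items (SalesDB is a hypothetical name; test both changes before making them in production):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1; -- cache plan stubs for one-off queries
RECONFIGURE;

ALTER DATABASE SalesDB SET PARAMETERIZATION FORCED; -- force query parameterization (hypothetical database)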

7. SQL Server Configuration

Making sure your SQL Server is configured for optimal performance and best practices can make a huge difference in performance.  Some of the things you can look at are:

    • Max Degree of Parallelism – if you are seeing a lot of CXPACKET wait types, you may need to adjust your MAXDOP (a sketch follows this list).
    • Affinity Mask – this may or may not make a big difference.
    • Priority Boost (don’t enable) – this too, may or may not make a big difference.
    • Max worker threads – adjusting this could help, but is beyond the scope of this article.
    • File organization on underlying disks – choice of RAID array, and the organization and placement of file types, make a big difference in the performance of your server.
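As an example of the first item, MAXDOP can be changed with sp_configure (the value 4 is purely illustrative; base yours on cores per NUMA node and on testing):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4; -- illustrative value only
RECONFIGURE;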


Conclusion

Troubleshooting CPU issues can be a very painstaking process as there are multiple combinations of issues that can be causing stress. The methods outlined above provide a basic approach to troubleshooting CPU pressure. It could be hardware, software, system, SQL, web, network, and a host of other problems, as well as a complicated combination of these! As a DBA, your main concern is to find and correct any SQL related issues causing this pressure on the CPU.

If you would like assistance configuring, maintaining, or monitoring your SQL Server and instances please contact us at SQLRx@sqlrx.com. We would be happy to answer any question you have, and would be interested to hear about your SQL experiences as well!


For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!

The Pros & Cons of Hyperthreading

— By Jeff Schwartz

Hyperthreading is one of those features that most IT personnel have heard about, but many do not understand exactly what it does or why it may improve or degrade the performance of their servers. Although hyperthreaded processors have been available for many years, the confusion surrounding it is as commonplace today as it was when the technology was introduced in 2002. Server, processor, and motherboard manufacturers have contributed to the confusion by activating the feature by default for several years after it was introduced, then deactivating it by default for a few years, and recently by activating it again by default. In addition, for many years, determining whether it was on or off could be accomplished only by reviewing the BIOS settings during a reboot, which is extremely difficult to schedule for most production servers.

What is Hyperthreading?

Hyperthreading involves having two distinct pipelines into a single processor execution core. Research into code sequences showed that code executing on a processor frequently, and naturally, pauses because of things such as main memory fetches into the processor cache and improper code branch prediction. The rationale is to provide the execution portion of an idling core processor with work to do while these kinds of activities are handled. These two pipelines are presented to the outside world as two distinct processors: one physical and one “logical.” Therefore, as far as any operating system is concerned, twice as many processors are available when hyperthreading is active compared to when it is not.

Why not use it all the time?

In theory, utilizing available processor cycles is a great idea and therefore, this mechanism should be used all the time. However, in practice, some problems arise. Two main issues are involved: processor schedulers and the length of time a unit of work stays on a processor without interruption. Since the execution portion of a processor is still a single entity, scheduling appropriately is critically important, i.e., two threads from the same process should NOT be scheduled on the same core. If they are, each thread will take turns stopping the other and this degrades performance. The difficulty, particularly in the SQL Server environment, is that SQL Server schedules its own threads and depending upon the SQL Server version as well as the Windows version on which it is running, SQL Server may or may not be aware of the physical/logical distinction or know how to handle this distinction properly. Therefore, there is a good chance that it may schedule two dependent threads on the same core. The other issue is that the threads of a large or parallelized SQL Server query usually consume their full time allotment on a processor before they are suspended. Any thread sitting in the other pipe of a hyperthreaded processor will wait the full time before it is allowed to run again. If that thread is from the same query, the two threads will stop each other from running and make the query run substantially longer. Therefore, under normal conditions, most SQL Server customers do not benefit from employing hyperthreading.

One type of workload, high volume small transaction processing, DOES benefit from hyperthreading, but even then it does not benefit as much as one might expect. Benchmarks and actual user experiences have demonstrated that at most, a 30 percent gain can be obtained when transactions are extremely small, e.g., they execute in under 200 milliseconds. Therefore, if a server only handles small high volume transactions regardless of whether they are inquiry, insert, or update, it will benefit from hyperthreading. As soon as any large queries arrive at the server, the benefit is gone and the problems cited in the previous paragraph arise.

How can I tell whether Hyperthreading is active?

Historically, this has been difficult to determine, but in the past few years, Windows and WMI have improved to provide this information without interrogating the BIOS. Older versions did not provide adequate or accurate information, but beginning with Windows 2008, the information is quite useful. This results from the fact that older versions of Windows either did not know or report the physical/logical processor distinction. The following PowerShell (other languages that can access WMI will work also) commands provide the necessary information. The first command returns the number of sockets on the system followed by the number of logical processors, and the second command returns information regarding the processors themselves. The first output was obtained on a quad-core single-socket system with hyperthreading turned off, and the second output was obtained from the same system with hyperthreading turned on. Clearly, the number of logical processors doubles when hyperthreading is turned on.

gwmi win32_computersystem | fl NumberOfProcessors, NumberOfLogicalProcessors
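The second command, which returns the per-processor detail, was presumably a query against the Win32_Processor WMI class; a likely reconstruction (not the original) is:

gwmi win32_processor | fl Name, NumberOfCores, NumberOfLogicalProcessors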


Conclusion

Hyperthreading can be useful in a SQL Server environment if the workload is comprised solely of extremely lightweight transactions. Under these conditions, the most one can expect to obtain from hyperthreading is a 30 percent improvement. If the server is running the latest versions of Windows and SQL Server, it may be possible to obtain some improvement for workloads that are highly transactional with a few small reports mixed in. However, in most SQL Server environments, hyperthreading will most likely degrade performance.

Do You Know When To Ask For Help?

— By Lori Brown  @SQLSupahStah

This is a bit off from our normal technical blog posts but, hey, if you can’t go out on a limb on your own blog, what good is it? After doing cleanup on systems over the years, I started thinking about why some servers seem to be so poorly configured. Usually it seems that systems are poorly architected out of inexperience and sometimes out of sheer ego. I always wonder why the system architect didn’t look things up or ask for help since so much information is available at our fingertips through a decent search engine. However, humans being what we are, none of us like to admit to shortcomings, but we all at times need help.

Know when to ask for help

We all know that we need help sometimes, but it is difficult and sometimes embarrassing to admit it. When I was a staff DBA I did have times when I needed to bring in outside consultants for help. For instance, if a project required skills that I did not have (C# programming for one) or required more time than I was able to commit to, then consulting was a good option. As a mid-level DBA at the time, I did seek out help and mentoring from more experienced DBAs. So, I have raised my hand for help but always grew when I did. I was fortunate to have found people who could mentor me and grow my skills.

Eh….who am I kidding….my skills are always having to grow, even now!! It takes time and lots of work to keep current on SQL Server, let alone know all of the coding and hardware pitfalls that are out there. Now that I have been a consultant for quite a while, I have found that the smartest DBAs and IT people in general are the ones who know when to ask for help.

The vendor trap


Many times we are called to assist with systems that were not installed or configured by anyone on a company’s staff. Instead, the company listened to a software vendor who seemed to know what they were doing and allowed the vendor to handle all provisioning and configuration of their server. Unfortunately, many software vendors give hardware recommendations and programming advice based on the small testing database they used for light load tests. Over time, a customer will usually grow their data beyond what the software vendor tested with, or will add customizations, and will be faced with hardware that cannot handle the work it is presented with and code that runs very slowly. The software vendor, when approached for help, often does not have the expertise to assist with anything complex, and companies are left high and dry with a frustrating situation and the feeling that the money they invested may have been wasted.

IT managers are then faced with the prospect of incurring consulting costs and may be reluctant to do so. We can assess both SQL and hardware performance to expose areas of weakness quickly. Sometimes solutions can be as simple as adding a little more memory or as complex as a complete index overhaul of a database. The important thing is that we can find a problem when your software vendor is not super cooperative out of fear that poor coding may be revealed. We often provide performance data to software vendors and will work with them directly to get a system back on its feet.

Hitting the knowledge wall

There seem to be several types of personalities that hit the knowledge wall at bad times. There is the “No one knows our systems like I do” person, the “My way or the highway” person, the “I don’t have time to deal with this” person and finally the “I can fake my way through this” person. The last thing you want is for a project to go south and to need to explain to your boss what happened, or even worse, to have to look for a job. No single person can know absolutely everything about hardware, application performance, SQL Server administration, SQL Server performance, etc.  In some cases, people are prevented from knowing by organizational boundaries. I have been doing this for a while and have seen systems that are very poorly architected, generally because someone was too proud to ask for advice or input.

If you feel like you are in over your head on a SQL-related project, raise your hand and we will be there to help you out. We are the type of consultants who are not looking to make a staff IT person look or feel bad about their work. We are often brought in to bridge the gap that results from the long-standing adversarial relationship that has existed between database and system administrators. We actually try to point out ways to improve things, and we are ultimately hoping to make the IT staff look good when their system performs well and the boss is not calling them on the carpet for anything.

Hitting the budget wall

Sadly, I also run into a lot of people who want to get help but don’t think they can justify the budget for it. At least think about reaching out to a consulting firm to see if they can work within your budget. There are some cost-effective ways to get expert help if you look for them.

Some places think they do not have the budget or the amount of work to justify having a full-time DBA available for their SQL Server(s); however, many times mission-critical information is being hosted in a SQL database that is not performing all that well. We offer a fractional DBA service (RemoteDBA) that costs one-third of what it would cost to have a mid-level DBA on staff. We are senior-level DBAs who can help at a very reasonable cost. We can work side-by-side with an overworked staff DBA or can fully administer and monitor SQL instances for shops that do not have a DBA on staff.

If you have project work or performance problems that are overwhelming you, look into asking about short term monthly consulting or mentoring. We provide those types of services as well. We often do short term contracting with shops that just need root causes of issues identified and then they go away to do the actual work. And, we always try to educate our customers on how to do things better.

Watch all about it!

ISI has been in the consulting business since 1985; SQLRx is the ISI division that specializes in RemoteDBA services and SQL Server performance tuning. Not too long ago, we started compiling video testimonials from some of our clients who have volunteered to tell the world about their experiences with us. The videos are not long and, to me, are better than reading some dry case study as you normally see from most IT consultants.

ISI specializes in Business Intelligence and Application Development including mobile development for all platforms. We also founded and manage the DFW (Dallas – Fort Worth area) Business Intelligence Community and host monthly meetings featuring various BI experts and/or products, as well as case studies and user testimonials.

Visit our YouTube channel for testimonials on how we have helped our clients.

https://www.youtube.com/user/ISI85DFW/videos

Who knows…..maybe it’s time for you to raise your hand and get some help.

If you think you need help but are not sure (maybe you just want a sanity check) let us know!!  We offer a free SQL HealthCheck that is done with a one to one call and GoToMeeting to take a look at your system.  It only takes about 30 minutes and is FREE as in no strings attached.  E-mail us at SQLRxSupport@sqlrx.com to get a checkup scheduled.

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRxSupport@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. We love to talk tech with anyone in our SQL family!