
Log File is Too Big!

–by Ginger Keys

Recently I had a client who, after speaking with their software vendor, became overly fearful of their database log files growing too big and taking up all of the disk space on their server. The software vendor is apparently used to having clients without a DBA on staff, and made recommendations that would not have allowed my client to recover data to a point in time (i.e. switch to Simple recovery, shrink the log file, switch back to Full recovery). I informed my client that this is a very bad idea, since they needed to be able to recover as much data as possible if something happened to the database, and promptly undid these changes. I then took a look around their systems and saw that the log file for one of the production databases was very large… but there were valid reasons it needed to be that large.

Hopefully you will never have to deal with an ill-informed vendor like this, but if you should find that your log file has grown very big, there are usually some common reasons for this.

Most common reasons:

  1. Your database is set to Full Recovery and no Tlog backups are happening
  2. Large inserts or deletes are happening (and it really needs to be that big)
  3. Long-running transactions are occurring (e.g. index maintenance or bulk imports)

It’s important to have enough room in your log file for SQL to do its work. Transactions in your database (i.e. changes to data) are logged, and these records are written sequentially to the log file. Once a transaction has committed and either a checkpoint has written the changes to disk (Simple recovery) or a backup of the tlog has occurred (Full recovery), those records are no longer needed and SQL will mark that portion of the log file as ready for re-use. The ideal situation is to size your log file correctly from the beginning.
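If your log keeps growing even though you expect it to be clearing out, a quick first check is the log_reuse_wait_desc column in sys.databases, which reports what (if anything) is currently preventing log reuse:

-- Show the recovery model and what is currently preventing log reuse, per database
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;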

Think of your log file as being like a reusable grocery bag that you use and fill each week.


When you take the groceries out of the bag (truncate), the size of the bag is still the same, and it’s ready for more groceries (same amount) to fit in. Shrinking the bag will make it smaller, and assuming you need to put the same amount of groceries in the following week, you will have less storage space. The groceries won’t fit, so don’t shrink your grocery bag if you know you need the space!

The obvious solution to prevent log files from growing too large is managing it correctly on the front end (i.e., planning correctly for data and log space, and recovery requirements), but sometimes events occur that require us to be reactive instead of proactive. It happens to the best of us!

Solutions

1. If your database is set to Full Recovery and no Tlog backups are happening

  • Set to Simple recovery – do this only if you don’t care about point-in-time recovery in the event of a disaster. This would most likely be used on a test or dev database, and is also suitable for data warehouses with read-only data.
  • Leave in Full recovery, but schedule tlog backups on a frequent basis, depending upon your tolerance for possible data loss. In a perfect world, production databases should be in Full recovery mode for disaster recovery reasons. Recovery model and frequency of backups will always depend on a business’ tolerance for data loss. Backing up your log file only once a day (or less often) won’t really help with log size growth, especially in a highly transactional database, because the log file will continue to grow until the log is backed up. (I have clients that back up tlogs anywhere from every 5 minutes to every 2 hours… it just depends on your business needs.) A minimal example of both options follows.
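Both options are plain T-SQL. This is only a sketch – the database name matches the examples later in this post, and the backup path is purely illustrative; in practice the log backup would run from a scheduled job:

-- Option 1: switch to Simple recovery (test/dev or read-only data only)
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
GO
-- Option 2: stay in Full recovery and back up the log frequently
-- (the path below is illustrative; schedule this as a recurring job)
BACKUP LOG MyDatabase
TO DISK = N'D:\SQLBackups\MyDatabase_log.trn'
WITH CHECKSUM;
GO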

2. If large inserts or deletes are happening, and it really needs to be that big

  •  Consider breaking up large transactions into smaller batch jobs
  • Consider switching to Bulk Logged Recovery during the large bulk operations. Switching between Full and Bulk Logged will not break the transaction log backup chain.
  • Properly size your log file. For large transactions SQL has to keep the log records all the way back to the beginning of the transaction in case something goes wrong and it needs to roll back. The size of your log file should accommodate the largest transaction, or the largest sum of simultaneous transactions, that regularly occurs.
  • Properly size the autogrowth of your log file. When your log needs more room for large transactions and the autogrowth increment is set too small, too many Virtual Log Files (VLFs) will be created, which makes database recovery take longer. To see how many VLFs are in your database, check how many records are returned after running this statement:

DBCC LOGINFO
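If you want to capture that count programmatically rather than eyeballing the result grid, one approach is to load the DBCC LOGINFO output into a temp table and count the rows. This sketch assumes the SQL 2012+ column layout; SQL 2008/2008 R2 does not return the RecoveryUnitId column, so remove that line there:

-- Count VLFs by capturing DBCC LOGINFO output into a temp table
CREATE TABLE #vlf
(
RecoveryUnitId INT, -- SQL 2012 and later only
FileId INT,
FileSize BIGINT,
StartOffset BIGINT,
FSeqNo INT,
[Status] INT,
Parity INT,
CreateLSN NUMERIC(38,0)
);
INSERT INTO #vlf EXEC (N'DBCC LOGINFO');
SELECT COUNT(*) AS VLFCount FROM #vlf;
DROP TABLE #vlf;

On SQL Server 2016 SP2 and later, sys.dm_db_log_info(DB_ID()) returns one row per VLF and can be counted directly.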

As a common rule of thumb, the VLF count should stay under roughly 500, depending on the size of your database. If your VLFs greatly exceed this amount, take these steps:

1. Take a backup of your log file (you may need to do this twice)

2. Shrink the log file

USE MyDatabase;
GO
DBCC SHRINKFILE (MyDatabase_log, 1);
GO
-- shrink the log file as small as it can go, preferably close to 1MB
-- may have to shrink it gradually, in smaller chunks

3. Re-grow the log file to a practical size, which should accommodate the largest transaction, or the largest sum of simultaneous transactions, that regularly occurs. Depending on the size you want your log file to be, consider growing it in multiple chunks. Kimberly Tripp recommends growing your log file in 8GB chunks… read this for more info: http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

USE MyDatabase;
GO
ALTER DATABASE MyDatabase
MODIFY FILE
(NAME = MyDatabase_log, SIZE = 8000MB);
GO
-- grow the log in multiple chunks (i.e. 8GB, then 16GB, etc.)
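If the size you ultimately need is much larger than 8GB, the same statement can simply be repeated with increasing sizes rather than jumping straight to the final number (the sizes below are only illustrative):

-- Grow the log in successive chunks instead of one large jump
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);
GO
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 16000MB);
GO
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 24000MB);
GO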

4. Set the autogrowth size to larger chunks of space, so fewer VLFs are created during autogrows

USE [master];
GO
ALTER DATABASE MyDatabase
MODIFY FILE
(NAME = MyDatabase_log, FILEGROWTH = 1000MB);
GO
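To verify the new size and autogrowth settings, you can run a quick check against sys.database_files from inside the database:

-- Current size and growth settings for the log file of the database you are connected to
USE MyDatabase;
GO
SELECT name,
size/128 AS SizeMB,
CASE WHEN is_percent_growth = 1
THEN CAST(growth AS VARCHAR(10)) + '%'
ELSE CAST(growth/128 AS VARCHAR(10)) + ' MB'
END AS GrowthSetting
FROM sys.database_files
WHERE type_desc = 'LOG';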

3. If long-running transactions are running (index maintenance or bulk inserts)

  • Again, properly size your log file (see the steps above). Log files should be created at the desired size when the database is created, rather than allowed to slowly grow over time. However, if this was not done initially, then shrinking the log file and manually re-growing it to the desired size is your next option.
  • If possible, consider breaking up large transactions into smaller batch jobs (a sketch of this approach follows below)
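Here is a minimal sketch of that batching approach for a large delete. The table name, filter, and batch size are purely illustrative; the point is that each small batch commits on its own, so frequent tlog backups can clear the log while the purge runs:

-- Delete in small batches instead of one huge transaction (illustrative names and sizes)
DECLARE @BatchSize INT = 5000,
@RowsDeleted INT = 1;

WHILE @RowsDeleted > 0
BEGIN
DELETE TOP (@BatchSize)
FROM dbo.MyLargeTable -- hypothetical table
WHERE CreatedDate < '20150101'; -- hypothetical purge condition
SET @RowsDeleted = @@ROWCOUNT;
END;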

 

Conclusion

Being proactive in managing your SQL environment is always the best way to prevent problems, but occasionally things happen that require us to deal with issues reactively. Instead of creating your database log file with a small amount of space and allowing it to grow automatically in small increments, create it with enough space from the start so that it only grows on rare occasions. Properly managing your database file sizes and autogrowth will help ensure your disk space doesn’t run out, and it will also help improve the performance of your database.


List Index Columns

— By Lori Brown @SQLSupahStah

I was looking for a way to get info on the columns in indexes so that I could use the data to help enhance my index defrag routine. I found that my defrag logic did not figure out whether an index has included columns that are LOBs, and it was failing because it was trying to rebuild such an index ONLINE. Of course this was happening on a SQL 2008 R2 Enterprise instance. Like it or not, I have a lot of clients on older versions of SQL that I support, so I try to account for things, but LOBs in an included column had been missed.

Ahhh…the good old days when SQL really started coming out of its shell.


Now we have new and more powerful SQL 2016. Progress!!

Anyway, I decided that it would be nice to have a query that would list index columns, complete with data type and in key order along with a way to tell if it is an included column. I added some ways to filter the query that I think are common ways to look at this type of data.

-- Returns index column info for the database you are connected to
SELECT s.name AS SchemaName,
o.name AS TableName,
i.name AS IndexName,
i.type_desc AS IndexType,
c.name AS ColName,
t.name AS ColType,
ic.index_column_id,
ic.is_included_column
FROM sys.objects o
JOIN sys.schemas s ON (o.schema_id = s.schema_id)
JOIN sys.indexes i ON (o.object_id = i.object_id)
JOIN sys.index_columns ic ON (i.object_id = ic.object_id AND i.index_id = ic.index_id)
JOIN sys.columns c ON (i.object_id = c.object_id AND c.column_id = ic.column_id)
JOIN sys.types t ON (c.user_type_id = t.user_type_id)
WHERE o.type = 'U'
--AND o.name = '???' -- Uncomment to filter by table name
--AND o.object_id = 123456789 -- Uncomment to filter by object id
--AND (t.name IN ('text', 'ntext', 'image') OR c.max_length = -1) -- Uncomment for LOBs only; c.max_length = -1 catches (n)varchar(max), varbinary(max) and xml columns
ORDER BY o.name, i.name, ic.index_column_id -- key order within each index
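As one possible way to use this in a defrag routine (just a sketch – the schema, table, and index names below are hypothetical, and it assumes an edition that supports ONLINE rebuilds): prior to SQL 2012, an index containing LOB columns cannot be rebuilt ONLINE, so the same LOB check can drive which rebuild option gets generated.

-- Hypothetical example: choose ONLINE vs. offline rebuild based on LOB columns in the index
DECLARE @SchemaName SYSNAME = N'dbo',
@TableName SYSNAME = N'MyTable',
@IndexName SYSNAME = N'IX_MyTable_Col1',
@sql NVARCHAR(MAX);

IF EXISTS (SELECT 1
FROM sys.indexes i
JOIN sys.index_columns ic ON (i.object_id = ic.object_id AND i.index_id = ic.index_id)
JOIN sys.columns c ON (ic.object_id = c.object_id AND ic.column_id = c.column_id)
JOIN sys.types t ON (c.user_type_id = t.user_type_id)
WHERE i.object_id = OBJECT_ID(QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName))
AND i.name = @IndexName
AND (t.name IN (N'text', N'ntext', N'image') OR c.max_length = -1))
SET @sql = N'ALTER INDEX ' + QUOTENAME(@IndexName) + N' ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N' REBUILD' -- offline: LOB columns block ONLINE rebuilds before SQL 2012
ELSE
SET @sql = N'ALTER INDEX ' + QUOTENAME(@IndexName) + N' ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N' REBUILD WITH (ONLINE = ON)' -- online rebuild otherwise

EXEC sys.sp_executesql @sql;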


Even though I know that others have posted similar queries, I like to make my own. This works on SQL 2008 through SQL 2016. I hope someone finds this useful. Let me know of ways to improve it too!

For more information about blog posts, concepts and definitions, further explanations, or questions you may have…please contact us at SQLRx@sqlrx.com. We will be happy to help! Leave a comment and feel free to track back to us. Visit us at www.sqlrx.com!