Script to Check TempDB Speed

The following script is adapted from Brent Ozar's blog, from his TempDB Performance and Configuration topic. I strongly encourage you to pay a visit to his site.

This query hits the dynamic management function (DMF) sys.dm_io_virtual_file_stats for all of the TempDB data files and lists out how fast they’re responding to write (and read) requests:

-- Average read/write stall time (ms) per tempdb data file; database_id 2 is tempdb.
-- NULLIF avoids a divide-by-zero error for a file that has not been read or written yet.
SELECT files.physical_name, files.name,
    stats.num_of_writes, (1.0 * stats.io_stall_write_ms / NULLIF(stats.num_of_writes, 0)) AS avg_write_stall_ms,
    stats.num_of_reads, (1.0 * stats.io_stall_read_ms / NULLIF(stats.num_of_reads, 0)) AS avg_read_stall_ms
FROM sys.dm_io_virtual_file_stats(2, NULL) AS stats
INNER JOIN master.sys.master_files AS files
    ON stats.database_id = files.database_id
    AND stats.file_id = files.file_id
WHERE files.type_desc = 'ROWS';

Brent Ozar suggests looking for two things in the results generated by the script:

  • Are writes being evenly distributed between data files?
  • Are writes finishing in 20ms or less?

If the answer to either of those questions is no, then some performance tuning work needs to kick in.
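
To check the first question, a small variation of the query above can show each data file's share of the total writes. This is only a sketch built on the same DMF and join as the script above; in a well-balanced configuration each file should carry roughly the same percentage:

-- Sketch: each tempdb data file's share of total writes (ideally roughly equal).
SELECT files.name,
    stats.num_of_writes,
    100.0 * stats.num_of_writes
        / NULLIF(SUM(stats.num_of_writes) OVER (), 0) AS pct_of_total_writes
FROM sys.dm_io_virtual_file_stats(2, NULL) AS stats
INNER JOIN master.sys.master_files AS files
    ON stats.database_id = files.database_id
    AND stats.file_id = files.file_id
WHERE files.type_desc = 'ROWS';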

How to: Change The Tempdb File Size

You could have accidentally increased your tempdb data file to a size far larger than it actually needs, and now have difficulty reducing it. Perhaps you have tried to resize it from the database properties dialog or to shrink the file, but unfortunately it doesn't make any change; the file size remains big.

For instance, your tempdb is 1GB and you accidentally increase it to 81GB when you actually planned to increase it to 8GB. Here is the solution.

First, shrink your tempdb data file to a size smaller than your desired size.

DBCC SHRINKFILE (N'tempdev', 1024);  -- target size is specified in MB, so 1024 = 1GB

After that, you can change the tempdb file size to your desired size.

ALTER DATABASE [tempdb] MODIFY FILE (
	NAME = N'tempdev', 
	SIZE = 8388608KB );  --8GB
GO
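
To confirm the change took effect, you can check the current tempdb file sizes. This is just a quick sketch; sys.database_files reports size in 8KB pages, so it is converted to MB here:

USE tempdb;
-- Current size of each tempdb file, converted from 8KB pages to MB.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files;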

TEMPDB Should Always Have One Data File Per Processor Core

According to Microsoft’s Storage Top 10 Best Practices, it is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server (a dual core counts as 2 CPUs). This is especially true for TEMPDB, where the recommendation is 1 data file per CPU.
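
As a quick sanity check against that guideline, you can compare the number of tempdb data files with the number of logical CPUs SQL Server sees. A minimal sketch:

-- Number of tempdb data files vs. logical CPUs visible to SQL Server.
SELECT
    (SELECT COUNT(*)
     FROM tempdb.sys.database_files
     WHERE type_desc = 'ROWS') AS tempdb_data_files,
    cpu_count AS logical_cpus
FROM sys.dm_os_sys_info;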

It’s also recommended to move TEMPDB to adequate storage and pre-size it after installing SQL Server. Performance may benefit if TEMPDB is placed on RAID 1+0 (dependent on TEMPDB usage). The data files should be of equal size, as SQL Server uses a proportional fill algorithm that favors allocations in files with more free space.
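
If you do add tempdb data files, keep them the same size (and with the same autogrowth setting) so the proportional fill stays balanced. A hedged example; the file name, path and sizes below are placeholders, not a recommendation:

-- Example only: the name, path and sizes are placeholders for your own values.
ALTER DATABASE [tempdb] ADD FILE (
	NAME = N'tempdev2',
	FILENAME = N'D:\SQLData\tempdev2.ndf',
	SIZE = 2048MB,
	FILEGROWTH = 512MB );
GO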

On the other hand, the recommendation might not always be correct. Paul shared a very good example in his blog post:

I heard just last week of a customer whose tempdb workload was so high that they had to use 64 tempdb data files on a system with 32 processor cores – and that was the only way for them to alleviate contention. Does this mean it’s a best practice? Absolutely not!

So, why is one-to-one not always a good idea? Too many tempdb data files can cause performance problems for another reason. If you have a workload that uses query plan operators that require lots of memory (e.g. sorts), the odds are that there won’t be enough memory on the server to accommodate the operation, and it will spill out to tempdb. If there are too many tempdb data files, then the writing out of the temporarily-spilled data can be really slowed down while the allocation system does round-robin allocation. The same thing can happen with very large temp tables in tempdb too.

In summary,

  • It is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server (a dual core counts as 2 CPUs).
  • Sometimes too many data files can also cause performance problems, for the reasons explained above.