According to Microsoft’s Storage Top 10 Best Practices, it is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server (a dual-core processor counts as 2 CPUs). This is especially true for TEMPDB, where the recommendation is 1 data file per CPU.
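As a quick illustration of the guideline above, a minimal sketch (the helper name and rounding choice are my own, not part of any Microsoft tooling) that turns a CPU count into the suggested range of data files:

```python
# Hypothetical helper applying the "0.25 to 1 data files per CPU"
# guideline. Returns (minimum, maximum) suggested file counts;
# the tempdb-specific recommendation is the upper bound (1 per CPU).
def recommended_file_range(cpu_count):
    low = max(1, round(cpu_count * 0.25))  # never fewer than one file
    return low, cpu_count

# e.g. an 8-core server: 2 to 8 files per filegroup,
# and 8 tempdb data files under the 1-per-CPU recommendation.
print(recommended_file_range(8))  # → (2, 8)
```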
It’s also recommended to move TEMPDB to adequate storage and pre-size it after installing SQL Server. Performance may benefit if TEMPDB is placed on RAID 1+0 (depending on TEMPDB usage). The data files should be of equal size, because SQL Server uses a proportional fill algorithm that favors allocations in files with more free space.
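To see why equal file sizes matter, here is a minimal Python sketch of the proportional-fill idea (a deliberate simplification of SQL Server’s real weighted round-robin: each allocation simply goes to the file with the most free space):

```python
# Simplified model of proportional fill: allocations gravitate toward
# the data file with the most free space, so unequal files skew writes.
def proportional_fill(files, pages):
    """files: dict of file name -> free pages.
    Allocates `pages` one at a time to the file with the most free
    space and returns how many pages each file received."""
    alloc = {name: 0 for name in files}
    free = dict(files)
    for _ in range(pages):
        target = max(free, key=free.get)  # most free space wins
        alloc[target] += 1
        free[target] -= 1
    return alloc

# Two equal files share the allocations evenly...
print(proportional_fill({"tempdb1": 100, "tempdb2": 100}, 50))
# → {'tempdb1': 25, 'tempdb2': 25}

# ...but a larger file absorbs all of the writes, creating a hotspot.
print(proportional_fill({"tempdb1": 100, "tempdb2": 400}, 50))
# → {'tempdb1': 0, 'tempdb2': 50}
```

With equal files the writes stay balanced; with one oversized file, that file receives the bulk of new allocations, which defeats the purpose of having multiple files.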
On the other hand, the recommendation is not always correct. Paul shared a very good example in his blog post:
> I heard just last week of a customer whose tempdb workload was so high that they had to use 64 tempdb data files on a system with 32 processor cores – and that was the only way for them to alleviate contention. Does this mean it’s a best practice? Absolutely not!
So, why is one-to-one not always a good idea? Too many tempdb data files can cause performance problems for another reason. If a workload uses query plan operators that require lots of memory (e.g. sorts), the odds are that there won’t be enough memory on the server to accommodate the operation, and it will spill to tempdb. If there are too many tempdb data files, writing out the spilled data can be significantly slowed down while the allocation system does round-robin allocation across the files. The same thing can happen with very large temp tables in tempdb.
- It is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server (dual core counts as 2 CPUs).
- Sometimes too many data files can also cause performance problems for another reason.