Clustered vs. Non-Clustered Index

A clustered index affects the way the rows of a table are stored on disk. When a clustered index is used, rows are stored in sequential order according to the index column value; for this reason, a table can contain only one clustered index, which is usually created on the primary key.

A non-clustered index does not affect the way data is physically stored; it creates a new object for the index and stores the column(s) designated for indexing with a pointer back to the row containing the indexed values.

You can think of a clustered index as a dictionary in alphabetical order, and a non-clustered index as a book’s index.
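As a brief sketch of the two index types (the table and index names below are hypothetical, chosen only for illustration):

```sql
-- Hypothetical table for illustration
CREATE TABLE dbo.Employees (
    EmployeeID INT NOT NULL,
    LastName   NVARCHAR(50) NOT NULL,
    HireDate   DATE NOT NULL
);

-- Clustered index: the table's rows are physically ordered by EmployeeID.
-- Only one clustered index is allowed per table.
CREATE CLUSTERED INDEX IX_Employees_EmployeeID
    ON dbo.Employees (EmployeeID);

-- Non-clustered index: a separate structure storing LastName values
-- with pointers back to the underlying rows.
CREATE NONCLUSTERED INDEX IX_Employees_LastName
    ON dbo.Employees (LastName);
```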

TechRepublic – SQL Server Database Developer Interview Questions and Answers

What is SQL Server CLR Integration and its Benefits?

CLR stands for “Common Language Runtime”. It is the heart of Microsoft .NET Framework and provides the execution environment for all .NET Framework code.

Benefits of this integration:

  1. A better programming model
  2. Improved safety and security
  3. Ability to define data types and aggregate functions
  4. Streamlined development through a standard environment
  5. Potential for improved performance and scalability
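CLR integration is disabled by default at the instance level; it can be enabled with sp_configure:

```sql
-- Enable CLR integration on the instance (off by default)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```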

What is Data Collector?

Data Collector (DC) supports three data sources: T-SQL queries, SQL Trace, and Performance Monitor counters.

The two key databases that drive the data collector subsystem are msdb and the MDW (Management Data Warehouse).

msdb contains most of the data collector's metadata and configuration, while the MDW contains (mostly) the actual collected data in "snapshot" tables.
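For example, the configured collection sets can be inspected directly in msdb (this view is documented for SQL Server 2008 and later):

```sql
-- List the data collector's collection sets and whether they are running
SELECT name, is_running, collection_mode
FROM msdb.dbo.syscollector_collection_sets;
```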

What is: Detect Virtual Log Files (VLF) in LDF

An LDF file is made up of multiple smaller Virtual Log Files, commonly known as VLFs. VLFs are written sequentially, so the LDF file is written sequentially as well; this is also why, in most cases, a database does not need more than one log file.

In short, you can use the following DBCC command to find out how many Virtual Log Files, or VLFs, are present in your log file.
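The command referred to here is presumably DBCC LOGINFO, whose output matches the Status column values (2 and 0) described below:

```sql
-- Returns one row per Virtual Log File in the current database's log
DBCC LOGINFO;
```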


After you execute the command, look at the Status column: a value of 2 marks an active VLF, and a value of 0 marks an inactive VLF.

What is: DMV (Dynamic Management Views) and Functions

Dynamic management views (DMVs) and dynamic management functions return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.

DMVs and functions are divided into two types:

  • Server-Scope. These require VIEW SERVER STATE permission on the server.
  • Database-Scope. These require VIEW DATABASE STATE permission on the database.
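For example, querying a server-scoped DMV, and granting the permission it requires (the login name below is hypothetical):

```sql
-- Server-scoped DMV: requires VIEW SERVER STATE
SELECT session_id, status, command
FROM sys.dm_exec_requests;

-- Grant the required permission (login name is hypothetical)
GRANT VIEW SERVER STATE TO [MonitoringLogin];
```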

For the record, there are a total of 136 DMVs and functions available in SQL Server 2008 R2. Let's execute the T-SQL script below to obtain the list.

--Get a list of grouped DMVs
SELECT
	name AS [DMV/DMF Name],
	type_desc AS [Type],
	[GroupName] =
		CASE
			WHEN name LIKE 'dm_exec%' THEN 'Query/Session Execution'
			WHEN name LIKE 'dm_os%' THEN 'SQL Server Operating System'
			WHEN name LIKE 'dm_io%' THEN 'Disk I/O'
			WHEN name LIKE 'dm_tran%' THEN 'Database Transactions and Locks'
			WHEN name LIKE 'dm_db_%index%' THEN 'Indexes'
			-- dm_db_mirror% must be tested before the broader dm_db% pattern
			WHEN name LIKE 'dm_db_mirror%' THEN 'Database Mirroring'
			WHEN name LIKE 'dm_db%' THEN 'Database and Database Objects'
			WHEN name LIKE 'dm_resou%' THEN 'Resource Governor'
			WHEN name LIKE 'dm_clr%' THEN 'CLR Based'
			ELSE 'Others'
		END
FROM sys.system_objects
WHERE name LIKE 'dm[_]%'
ORDER BY [GroupName], name;

The DMVs and functions are organized into several categories; please visit BOL (SQL Server Books Online) for more information.

Also, you can download the detailed system views map from the Microsoft download site, which shows the key views involved and the relationships between them.

Understanding SQL Server Recovery Model


Recovery models in SQL Server are designed to control transaction log maintenance and to help you recover your data after a disaster. The choice of a specific recovery model depends on the criticality of the data stored within the database. There are three recovery models available in SQL Server 2000 and later versions: Simple, Full, and Bulk-Logged.

These models each address different needs for performance, disk and tape space, and protection against data loss. For example, when you choose a recovery model, you must consider the tradeoffs between the following business requirements:

  • Performance of large-scale operation (for example, index creation or bulk loads).
  • Data loss exposure (for example, the loss of committed transactions).
  • Transaction log space consumption.
  • Simplicity of backup and recovery procedures.

Depending on what operations you are performing, more than one model may be appropriate. After you have chosen a recovery model or models, plan the required backup and recovery procedures.

This table provides an overview of the benefits and implications of the three recovery models.

Simple
  • Benefits: Permits high-performance bulk copy operations. Reclaims log space to keep space requirements small.
  • Work loss exposure: Changes since the most recent database or differential backup must be redone.
  • Recover to point in time? No. Can recover to the end of any backup; then changes must be redone.

Full
  • Benefits: No work is lost due to a lost or damaged data file. Can recover to an arbitrary point in time (for example, prior to application or user error).
  • Work loss exposure: Normally none. If the log is damaged, changes since the most recent log backup must be redone.
  • Recover to point in time? Yes, to any point in time.

Bulk-Logged
  • Benefits: Permits high-performance bulk copy operations. Minimal log space is used by bulk operations.
  • Work loss exposure: If the log is damaged, or bulk operations occurred since the most recent log backup, changes since that last backup must be redone. Otherwise, no work is lost.
  • Recover to point in time? Can recover to the end of any backup; then changes must be redone.

For more information, see Selecting a Recovery Model.

Let us take a look at each of these recovery models in detail.

Simple Recovery Model

In the Simple recovery model, SQL Server automatically truncates the transaction log file in the following scenarios:

  • Whenever the transaction log file is 70% full.
  • Whenever a CHECKPOINT is issued, either internally or manually.
  • Whenever the active portion of the transaction log exceeds the size that SQL Server could recover within the time specified in the recovery interval (min) server option, set via sp_configure.

However, when a database is configured to use the Simple recovery model, you cannot perform transaction log backups; this is by design.

Since you cannot take transaction log backups, point-in-time recovery is not possible; you can restore the database only to the last available Full or Differential backup. Hence, this recovery model is best suited for databases running in development or testing environments, or for databases configured as read-only.
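Checking and changing a database's recovery model (the database name below is hypothetical):

```sql
-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDev';  -- hypothetical database name

-- Switch a development database to the Simple recovery model
ALTER DATABASE SalesDev SET RECOVERY SIMPLE;
```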

Simple Recovery is not an appropriate choice for production systems where loss of recent changes is unacceptable.

For more information, see Simple Recovery.

Full Recovery Model

In the Full recovery model, all transactions are retained in the transaction log file until the log is backed up. Bulk operations such as SELECT INTO, BULK INSERT, bcp, CREATE INDEX, and ALTER INDEX ... REBUILD are fully logged and can be recovered using the available backups. When using the Full recovery model, the database administrator should use a combination of Full, Differential, and Transaction Log backups as part of the database backup plan to avoid data loss.

By default, every new database is created with the Full recovery model, because the model database is configured to use the Full recovery model.

Best Practice
Database administrators should make sure that all user databases in a production environment use the Full recovery model, and that a combination of Full, Differential, and Transaction Log backups is part of the database backup plan to avoid data loss. Point-in-time recovery of a database is only possible if you have all the valid database backups along with a tail-log backup of the transaction log.
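A minimal sketch of such a backup plan (the database name and file paths are hypothetical):

```sql
-- Periodic full backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_full.bak';

-- More frequent differential backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_diff.bak'
WITH DIFFERENTIAL;

-- Frequent transaction log backups enable point-in-time recovery
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_log.trn';
```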

Bulk Logged Recovery Model

When a database is configured to use the Bulk-Logged recovery model, SQL Server logs a minimal amount of information for operations such as SELECT INTO, BULK INSERT, bcp, and CREATE INDEX, thereby reducing log file growth. It is advisable to switch the recovery model from Full to Bulk-Logged while performing bulk operations, as this helps reduce log file growth and can improve performance. However, as in the Full recovery model, the transaction log will continue to grow until it is backed up.
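A common pattern is to switch to Bulk-Logged only for the duration of the bulk operation (all names and paths below are hypothetical):

```sql
-- Take a log backup, then switch to Bulk-Logged for the bulk load
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_prebulk.trn';
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.StagingOrders              -- hypothetical target table
FROM N'D:\Data\orders.csv'                 -- hypothetical data file
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Switch back to Full and back up the log immediately
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_postbulk.trn';
```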

For more information, see the Bulk-Logged recovery model topic in BOL.

SQL Server Index Types

  • SQL Server maintains indexes by using a B-tree structure
  • Clustered
    • Data is physically ordered around the keys
    • Actual data is stored at the leaf level
  • Nonclustered
    • An independent index structure that points to the underlying records
  • A heap is a table without a clustered index
    • Nonclustered indexes can exist on such a table