How to: Start SQL Server Agent When Agent XPs Shows Disabled


SQL Server Agent appears stopped and the status shows “(Agent XPs disabled)” beside SQL Server Agent in SSMS; the funny thing is that the service is running according to the Services console. Trying to start the service from SQL Server Management Studio doesn’t work either.

This issue appears when the ‘Agent XPs’ advanced configuration option is disabled, i.e. set to 0, on the SQL Server instance.

Agent XPs is an advanced configuration option that enables the SQL Server Agent extended stored procedures on the server. When it is not enabled, SQL Server Agent will not be active in SSMS. Most of the time, starting the SQL Server Agent service automatically enables ‘Agent XPs’, but sometimes it fails to enable the option or the value is reset to 0, and then this issue appears.


To fix this issue, first set ‘Agent XPs’ to 1 and then run RECONFIGURE to bring the change into effect. Execute the following query and try to start the Agent service again. SQL Server Agent should be online now.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE WITH OVERRIDE;

The output should look like this:
Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
Configuration option 'Agent XPs' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 1 to 0. Run the RECONFIGURE statement to install.
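To confirm the change took effect, you can also check the current value through the sys.configurations catalog view (a quick sanity check; value_in_use is the running value):

```sql
-- Check the running value of 'Agent XPs' (1 = enabled)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'Agent XPs';
```

If value_in_use is still 0 after running RECONFIGURE, the change has not been applied.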


Log Shipping Error “Could not find a log backup file that could be applied to secondary database”

A simple explanation of how log shipping works could be “Backup-Transfer-Restore”: the transaction log is backed up on the primary database, the log backup file is transferred to the secondary server, and finally it is restored to the secondary database. Thus, any changes on the primary database are reflected on the secondary database.

How do you troubleshoot when log shipping fails and the error message below is found in the job history?

Could not find a log backup file that could be applied to secondary database '<DATABASE_NAME>'.

This means the log backup file currently being restored is too recent; a backup from an earlier point in time needs to be applied first. Since the logs are backed up in sequence on the primary server, they must be restored in the same sequence on the secondary database.
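The in-sequence restore can be sketched as follows (a minimal example; the database name, paths, and file names are hypothetical, and the timestamps in the file names show the required order):

```sql
-- Log backups must be restored in the order they were taken,
-- leaving the database in the restoring state between files.
RESTORE LOG [MyDatabase]
FROM DISK = N'\\Secondary\LogShip\MyDatabase_20240101_0100.trn'
WITH NORECOVERY;  -- keep the database ready for the next log

RESTORE LOG [MyDatabase]
FROM DISK = N'\\Secondary\LogShip\MyDatabase_20240101_0115.trn'
WITH NORECOVERY;
```

Attempting to restore the 01:15 file before the 01:00 file raises exactly the kind of "too recent" error this post describes.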

Use the following query to check which log backup file was last copied:

SELECT * FROM [msdb].[dbo].[log_shipping_secondary]

And also check which log backup file was last restored:

SELECT * FROM [msdb].[dbo].[log_shipping_secondary_databases]
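Rather than SELECT *, you can pull just the relevant columns from both tables side by side to compare the last copied file against the last restored file (a sketch against the standard msdb log shipping tables):

```sql
-- Compare the last copied log backup with the last restored one
SELECT s.primary_server,
       s.primary_database,
       s.last_copied_file,
       s.last_copied_date,
       d.secondary_database,
       d.last_restored_file,
       d.last_restored_date
FROM msdb.dbo.log_shipping_secondary AS s
JOIN msdb.dbo.log_shipping_secondary_databases AS d
  ON s.secondary_id = d.secondary_id;
```

A gap between last_copied_file and last_restored_file tells you which files the restore job still has to work through.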

Sometimes a larger transaction log file takes longer to copy over to the secondary server while a newer file has already arrived, so the files look out of order on the secondary. In that case, wait for the big file to finish copying and restart the log shipping restore job. Failing that, check the retention period of the backup job that runs on the primary server.

If you see a file on the primary server that has not yet been copied, you may need to copy it over manually and then start log shipping again. If the file is no longer there, you may need to reconfigure the log shipping process from scratch. You can refer to this site on how to configure log shipping for your database.

How to: Performance Tuning on Long-Wait Transactions

Most of the time, a DBA will fire the “sp_who2 active” command to find the active sessions on the SQL Server instance and quickly kill a deadlocked session as soon as he/she finds it blocking others. However, there are always other ways to learn more about a transaction before you pull the trigger and kill it, so you can avoid possible loss of data after killing the running transaction.

You can look at wait statistics to understand specifically what is causing the server to slow down; they are great metrics for understanding the system as a whole. You could run a query against sys.dm_os_wait_stats and order by the number of currently waiting tasks, the cumulative wait time, or the max wait time to see what, in general, is causing the server to run slow.
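A query along those lines might look like this (a sketch; the excluded wait types are just a few common benign ones, not an exhaustive filter):

```sql
-- Top waits on the instance by cumulative wait time,
-- skipping a few background wait types that are usually harmless
SELECT TOP (10)
       wait_type,
       waiting_tasks_count, -- number of waits on this wait type
       wait_time_ms,        -- cumulative wait time
       max_wait_time_ms     -- longest single wait
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

Swap the ORDER BY column for waiting_tasks_count or max_wait_time_ms depending on which angle you want to investigate.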

To see what’s currently running on the server, you could run a query against the Dynamic Management Object (DMO) sys.dm_exec_requests.

SELECT
  der.session_id , --internal identifier for the running session
  der.status , --determines if the query is active or waiting
  der.start_time , --gives you an idea when the query started
  der.command , --the type of command involved
  der.database_id , --which database you're connected to
  der.user_id , --which login is running the command
  der.blocking_session_id , --session id of blocking session
  der.wait_type , --what the session is currently waiting on
  der.wait_time , --how long has it been waiting
  der.last_wait_type , --what caused it to last wait
  der.cpu_time , --how much of the CPU has been used
  der.total_elapsed_time , --how long has the command been running
  der.reads , --has the command hit the disk for information
  der.writes , --how much information was written to the disk
  der.logical_reads --how many reads came out of memory
FROM sys.dm_exec_requests AS der;

Once you have captured the long-running transaction that is probably consuming plenty of resources, you need to figure out which query the session is running and what its execution plan looks like. The query below will help you pull all of this information together in a hurry.

SELECT SUBSTRING(dest.text, ( der.statement_start_offset / 2 ) + 1,
         ( CASE der.statement_end_offset
             WHEN -1 THEN DATALENGTH(dest.text) --whole batch still running
             ELSE der.statement_end_offset
           END - der.statement_start_offset ) / 2 + 1) AS statement_text ,
       deqp.query_plan --the execution plan, viewable as XML
FROM sys.dm_exec_requests AS der
CROSS APPLY sys.dm_exec_query_plan(der.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS dest
WHERE der.session_id = 442;

Reference: The Top 5 Hard-earned Lessons of a DBA