Replication Agent Profiles explained Part 1

As we know, there are three agents needed for Transactional Replication to work: the Log Reader, Snapshot and Distribution Agents, each with a different role. Each of these runs under a given Agent Profile. What is an Agent Profile? In short: it is the set of settings that our Replication Agents run with, such as batch sizes, history logging levels, etc. Let's go into the details.

Agent Profiles GUI (Replication Monitor)

You can see and change profile settings by going to Replication Monitor and opening a given agent (Snapshot, Log Reader, Distribution):

[Screenshot: Agent Profiles in Replication Monitor]

Once you open it you can see the stock-standard SQL Server agent profiles, with the Default profile as the current one if nothing has been changed since replication was configured:

[Screenshot: agent profile settings]

You can create a new custom profile by clicking New and using one of the system profiles as the base for that user profile:

[Screenshot: New Agent Profile dialog]

In order to see ALL settings you can set, you need to untick the checkbox. Otherwise you will see only the settings used by the profile you chose at the beginning:

[Screenshot: all profile settings]

In order to change an agent profile, you just need to choose the profile and restart the given Replication Agent (from SQL 2014 SP3 onwards, restarting the agent is not necessary).

Agent Profile T-SQL

As always, you can do exactly the same thing using T-SQL.

Here are some useful procedures that can help you manage these. All of them need to be executed on the Distributor, against the distribution database:

exec sp_help_agent_profile – lists all profiles present on a given Distributor. It also shows which profile is the default for newly created agents.

exec sp_help_agent_parameter – shows all parameters set for all profiles. @profile_id can be provided to show only the settings for a given profile.

Here are some system tables that can be queried in the msdb database on the Distributor; these are pretty much the base tables used by the procedures mentioned above:

select * from [dbo].[MSagent_profiles]
select * from [dbo].[MSagent_parameters]
select * from [dbo].[MSagentparameterlist]

If you want to change profile settings, or change the profile assigned to a given agent, you need to use the following procedures:

exec sp_add_agent_parameter – adds a parameter to an agent profile
exec sp_add_agent_profile – adds a new profile
exec sp_change_agent_parameter – changes a parameter of a given profile
exec sp_change_agent_profile – changes the description of a given profile
exec sp_drop_agent_parameter – drops a parameter from a given profile
exec sp_drop_agent_profile – drops an agent profile

exec sp_update_agent_profile – assigns a new profile to a given Replication Agent
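
To put these together, here is a minimal sketch (the profile name, description, parameter value and agent_id are made-up examples) that creates a custom Distribution Agent profile, adds a parameter to it and assigns it to a given agent:

declare @profile_id int;

-- create a custom Distribution Agent profile (@agent_type 3 = Distribution Agent)
exec sp_add_agent_profile
    @profile_id = @profile_id output,
    @profile_name = N'MyCustomDistributionProfile',
    @agent_type = 3,
    @profile_type = 1, -- custom profile
    @description = N'Custom profile with a bigger commit batch size';

-- add/override a parameter on the new profile
exec sp_add_agent_parameter
    @profile_id = @profile_id,
    @parameter_name = N'CommitBatchSize',
    @parameter_value = N'200';

-- assign the profile to a given Distribution Agent (agent_id comes from MSdistribution_agents)
exec sp_update_agent_profile
    @agent_type = 3,
    @agent_id = 1, -- hypothetical agent_id
    @profile_id = @profile_id;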

Here is a small script that will give you the distribution agents mapped to agent profiles and the parameters they have set.

select a.name, p.profile_name, p.def_profile, par.parameter_name, par.value
from MSdistribution_agents a
join msdb.dbo.MSagent_profiles p
    on a.profile_id = p.profile_id
join msdb.dbo.MSagent_parameters par
    on p.profile_id = par.profile_id

As you can see, you can add/modify agent profiles either via the GUI or via T-SQL. The same applies to assigning profiles to given agents (Distribution, Log Reader, etc.); however, in this case you can also “hard-code” the agent profile into a given replication agent job. You can read more about replication jobs here: Understanding Replication Agents Part 2. You can add -ProfileName to the replication agent parameters, and that setting will override the one you see in the GUI or via T-SQL. You can also specify batch sizes and much more. You can check all agent parameters over here: https://docs.microsoft.com/en-us/sql/relational-databases/replication/agents/replication-distribution-agent?view=sql-server-2017
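
For illustration, a hard-coded Distribution Agent job step command line might look something like this (all server, database and publication names here are made up):

-Publisher [PUB-SRV] -PublisherDB [SalesDB] -Publication [Publication1]
-Distributor [DIST-SRV] -DistributorSecurityMode 1
-Subscriber [SUB-SRV] -SubscriberDB [SalesDB_Replica]
-ProfileName [Default agent profile]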

I would not recommend that, as you then lose visibility of what is really going on and which profile the agents are running under. Here is an example… I put -ProfileName [Default agent profile] in the job parameters and restarted the job:

I then changed the profile using the GUI to “Continue on data consistency errors”:

Then I deleted one row on the Subscriber and tried to delete the same row on the Publisher. Running on the “Continue on data consistency errors” profile, the agent should skip the error. But in our case we hard-coded the profile in the job… and that is why we get the error anyway:

I never put the profile name (or any other profile-related parameter) directly into the job, and I strongly recommend the same to everyone.

In the next blog post I will go into more detail on some of the parameters used in Agent Profiles. In some cases you will want to play around with these settings, as they can improve the performance of your replication.

The row was not found at the Subscriber when applying the replicated command

In the next few posts I will try to cover the most common Transactional Replication errors.

The first one I want to focus on is “Row not found at the Subscriber”. That error comes up any time data is missing at the Subscriber and the given row has been modified (UPDATE or DELETE) on the Publisher. Only UPDATE and DELETE can cause this error. Since SQL 2014 SP2, Microsoft added better error handling, so you won't need to browse the distribution db to get more information. More about this in the update section here: sp_Msreplraiseerror (2014 SP2)

Here is my error (as you can see I am using SQL 2014 with SP2).

[Screenshot: row not found error]

Troubleshooting

Let's assume we are still on an older version and the only thing we get is the seqno and command_id of the failing command/transaction. The first thing you need to do is browse the distribution db to get the full transaction.
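
A minimal sketch using sp_browsereplcmds in the distribution database (the LSN below is a made-up placeholder; a fuller version of this script is in the [TSQL Script] Browse replication commands section later in this post):

declare @publisher_database_id int;

select @publisher_database_id = id
from MSpublisher_databases
where publisher_db = N'<published db>';

exec sp_browsereplcmds
    @xact_seqno_start = N'0x0000002A000000B9000A', -- hypothetical LSN from Replication Monitor / the error
    @xact_seqno_end   = N'0x0000002A000000B9000A',
    @publisher_database_id = @publisher_database_id;

Here are my results: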

[Screenshot: browsing the failing command in the distribution db]

As you can see, it is an UPDATE to TableC where the PK value = 1. After checking the Subscriber we can confirm that the row is missing there.

Now there are a few ways of fixing it:

1. Skip

By default all agents run on the “Default agent profile”, which does not skip ANY errors. However, if you do not care about the data on your Subscriber being consistent, you can skip that error by changing the Agent Profile. You can see that this error has code 20598; you can change the agent profile to “Continue on data consistency errors.” or create your own Agent Profile with -SkipErrors set to 20598 and restart the agent (since 2017 CU3 you won't need to restart agents). After that change you will see the Distribution Agent skip that command and continue with the next transactions. Note: this means ALL errors with that error code will be skipped for ALL published tables for that subscription!
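
If you go the custom-profile route, here is a sketch of adding the parameter (the profile_id is hypothetical; look yours up with sp_help_agent_profile):

exec sp_add_agent_parameter
    @profile_id = 15, -- hypothetical custom profile id from sp_help_agent_profile
    @parameter_name = N'SkipErrors',
    @parameter_value = N'20598';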

If you want to be more selective and skip “row not found” only for a specific table, you can modify the UPDATE replication proc for that table on the Subscriber. You just comment out the last part of the proc and put RETURN(1), like this:

[Screenshot: skipping via the replication proc]
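
Here is a minimal sketch of that idea; the table and column names are made up, so script out your actual sp_MSupd_% proc and edit that instead:

alter procedure [dbo].[sp_MSupd_dboTableC]
    @c1 int = NULL,           -- made-up column list for the sketch
    @c2 nvarchar(100) = NULL,
    @pkc1 int = NULL,
    @bitmap binary(1) = NULL
as
begin
    update [dbo].[TableC]
    set [TableC_text] = case substring(@bitmap, 1, 1) & 2 when 2 then @c2 else [TableC_text] end
    where [TableC_ID] = @pkc1

    if @@rowcount = 0
        -- the original proc calls sp_MSreplraiserror with error 20598 here;
        -- comment that out and return success so the agent skips the missing row
        return(1)
end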

After the error has been skipped, you can revert the changes.

2. Insert missing data

I would assume we do not want to be missing data on the Subscriber. The easiest way to fix a smaller amount of data is to use the sp_generate_inserts tool. I use it a lot and it is very useful, with plenty of settings (include columns, identity, etc.). However, if we have a bigger set of data, you could export the table (or just the relevant set of data) to the Subscriber under a different name and use a left/right join to find the missing data. It is quite a bit of work, but this way you avoid running a snapshot and overwriting the entire table on the Subscriber. An alternative is to create a separate snapshot replication on the side for that one table (with a different destination name), deliver the snapshot to the Subscriber, and then do the join to find the missing data. Just note that if you decide to overwrite the table on the Subscriber with a new snapshot, you will get other errors on that Distribution Agent.
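
A sketch of the join step, assuming the exported data landed on the Subscriber as dbo.TableC_copy (a made-up name) and TableC has just two columns:

-- insert rows that exist in the exported copy but are missing from the replicated table
insert into dbo.TableC (TableC_ID, TableC_text)
select c.TableC_ID, c.TableC_text
from dbo.TableC_copy c
left join dbo.TableC t
    on t.TableC_ID = c.TableC_ID
where t.TableC_ID is null;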

Here is an example. We assume TableC is still being populated with inserts/updates/deletes… but our Distribution Agent is failing with an error (row not found), therefore our pending commands start to climb:

[Screenshot: pending commands climbing]

Let's create a snapshot replication for that one table:

[Screenshots: creating the snapshot publication, steps 1-3]

Once the publication is created, you can go and create the subscription (remember to leave Initialize ticked).

[Screenshots: creating the snapshot subscription, steps 1-4]

Now we can generate the snapshot for our TableC_fix publication:

[Screenshot: snapshot generated]

After the snapshot has been delivered, we stop getting the “row not found” error and start getting PK violations… simply because the commands in the pending queue of our transactional replication contain INSERTs for data that has already been delivered by our snapshot replication.

[Screenshot: error after snapshot]

That pretty much means our data is already there on the Subscriber. To fix it, just create an Agent Profile with a -SkipErrors value of 2627 and restart the agent. Once transactional replication gets past the point at which we generated the snapshot, the error will disappear and normal replication will resume delivering data.

[Screenshot: skipping the PK violations]

3. Snapshot

In my world I am not able to do this, and it can be a bit of a pain as well. However, if the table is big and you want to be 100% sure you are not missing ANY data, you may have no other option. To do this you need to drop the article from the subscription first by running sp_dropsubscription with the @article parameter specified, then run sp_droparticle for that article. Then re-add the table to the publication and run EXEC sp_refreshsubscriptions @publication = '<publication name>'. (Just remember you need to have the immediate_sync option set to false, otherwise you will generate an entire snapshot.)
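
A sketch of that sequence, run on the Publisher in the published database (publication and article names are placeholders):

-- 1. drop the subscription for the article, then the article itself
exec sp_dropsubscription
    @publication = N'Publication1',
    @article = N'TableC',
    @subscriber = N'all';

exec sp_droparticle
    @publication = N'Publication1',
    @article = N'TableC';

-- 2. re-add the table and refresh the subscriptions (immediate_sync must be false)
exec sp_addarticle
    @publication = N'Publication1',
    @article = N'TableC',
    @source_owner = N'dbo',
    @source_object = N'TableC';

exec sp_refreshsubscriptions @publication = N'Publication1';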

DELETE

How should you proceed if it turns out to be a DELETE statement? In my experience you can just skip it, either via the agent profile or, more selectively, by changing the DELETE replication procedure on the Subscriber (point 1). The data is already gone from the Publisher and there is no way to fix it…

Lastly, the most important thing is to find out WHY we are getting these errors (whether UPDATE or DELETE) and get to the root cause of the issue. Ideally this error should NEVER happen, especially in a 1-to-1 setup (one Publisher to one Subscriber).

Most of you have experienced this error, for sure. Do you have any other ideas for fixing it? Another approach when handling it? If so, let's discuss in the comments section.

How changes are replicated in transactional replication (CALL vs SCALL)

As I already mentioned before, replication uses replication procedures to replicate data changes. Today I will focus on how the changes can be sent down to the Subscriber, especially for UPDATEs.

By default these start with sp_MSins_%, sp_MSupd_% and sp_MSdel_%. You may have already seen CALL or SCALL associated with these replication procedures. You can see them in the GUI under article properties:

[Screenshot: article properties showing CALL/SCALL]

or by just selecting from sysarticles in the Publisher db, or MSarticles in the distribution db:

[Screenshot: sysarticles output]
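
For example, run in the published database:

select name, ins_cmd, upd_cmd, del_cmd
from dbo.sysarticles;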

You can change these either via the drop-down list in SSMS under article properties, or by using the sp_changearticle stored procedure (executed on the Publisher).

The most interesting one is UPDATE, as that DML can be sent down in a few different ways. I will focus on the two most common ones: CALL and SCALL.

Let's go into more detail…

SCALL (default)

SCALL is the default setting when you set up a publication and add articles to it. The update will pass only the columns that have been changed, plus the PK values and, lastly, a bitmask with information about which columns have changed. Here is an example of how this looks in the distribution database (Browse replication commands).

I updated one column in TableD:

[Screenshot: update on TableD (SCALL)]

After browsing the distribution db we can find this:

[Screenshot: browsing the SCALL command, with comments]

As you can see, the first parameter is NULL; this is the column that has not been changed. Here is how the procedure looks on the Subscriber:

USE [RC] 

go 

/****** Object:  StoredProcedure [dbo].[sp_MSupd_dboTableD]    Script Date: 2/27/2018 6:00:49 AM ******/ 
SET ansi_nulls ON 

go 

SET quoted_identifier ON 

go 

ALTER PROCEDURE [dbo].[Sp_msupd_dbotabled] @c1     INT = NULL, 
                                           @c2     NVARCHAR(100) = NULL, 
                                           @c3     NVARCHAR(200) = NULL, 
                                           @pkc1   INT = NULL, 
                                           @bitmap BINARY(1) 
AS 
  BEGIN 
      DECLARE @primarykey_text NVARCHAR(100) = '' 

      IF ( Substring(@bitmap, 1, 1) & 1 = 1 ) 
        BEGIN 
            UPDATE [dbo].[tabled] 
            SET    [tabled_id] = CASE Substring(@bitmap, 1, 1) & 1 
                                   WHEN 1 THEN @c1 
                                   ELSE [tabled_id] 
                                 END, 
                   [tabled_text] = CASE Substring(@bitmap, 1, 1) & 2 
                                     WHEN 2 THEN @c2 
                                     ELSE [tabled_text] 
                                   END, 
                   [text2] = CASE Substring(@bitmap, 1, 1) & 4 
                               WHEN 4 THEN @c3 
                               ELSE [text2] 
                             END 
            WHERE  [tabled_id] = @pkc1 

            IF @@rowcount = 0 
              IF @@microsoftversion > 0x07320000 
                BEGIN 
                    IF EXISTS (SELECT * 
                               FROM   sys.all_parameters 
                               WHERE  object_id = Object_id('sp_MSreplraiserror' 
                                                  ) 
                                      AND [name] = '@param3') 
                      BEGIN 
                          SET @primarykey_text = 
                          @primarykey_text + '[TableD_ID] = ' 
                          + CONVERT(NVARCHAR(100), @pkc1, 1) 

                          EXEC Sp_msreplraiserror 
                            @errorid=20598, 
                            @param1=N'[dbo].[TableD]', 
                            @param2=@primarykey_text, 
                            @param3=13233 
                      END 
                    ELSE 
                      EXEC Sp_msreplraiserror 
                        @errorid=20598 
                END 
        END 
      ELSE 
        BEGIN 
            UPDATE [dbo].[tabled] 
            SET    [tabled_text] = CASE Substring(@bitmap, 1, 1) & 2 
                                     WHEN 2 THEN @c2 
                                     ELSE [tabled_text] 
                                   END, 
                   [text2] = CASE Substring(@bitmap, 1, 1) & 4 
                               WHEN 4 THEN @c3 
                               ELSE [text2] 
                             END 
            WHERE  [tabled_id] = @pkc1 

            IF @@rowcount = 0 
              IF @@microsoftversion > 0x07320000 
                BEGIN 
                    IF EXISTS (SELECT * 
                               FROM   sys.all_parameters 
                               WHERE  object_id = Object_id('sp_MSreplraiserror' 
                                                  ) 
                                      AND [name] = '@param3') 
                      BEGIN 
                          SET @primarykey_text = 
                          @primarykey_text + '[TableD_ID] = ' 
                          + CONVERT(NVARCHAR(100), @pkc1, 1) 

                          EXEC Sp_msreplraiserror 
                            @errorid=20598, 
                            @param1=N'[dbo].[TableD]', 
                            @param2=@primarykey_text, 
                            @param3=13233 
                      END 
                    ELSE 
                      EXEC Sp_msreplraiserror 
                        @errorid=20598 
                END 
        END 
  END --  

You can see how the bitmask is used to determine whether a column actually changed or not.

CALL

First, let's change SCALL to CALL for TableD. You can do it in SSMS by going to Publication Properties and then Article Properties, or just run T-SQL like this:

exec sp_changearticle 
@publication ='Publication1',
@article = 'TableD',
@property ='upd_cmd',
@value='CALL [sp_MSupd_dboTableD]',
@force_invalidate_snapshot=0

After you run this you do not need to run a snapshot. The new update procedure will be delivered like normal replicated commands, and now the code for the update procedure looks like this:

create procedure [sp_MSupd_dboTableD]
    @c1 int,
    @c2 nvarchar(100),
    @c3 nvarchar(200),
    @pkc1 int
as
begin
    declare @primarykey_text nvarchar(100) = ''

    if not (@c1 = @pkc1)
    begin
        update [dbo].[TableD]
        set [TableD_ID] = @c1,
            [TableD_text] = @c2,
            [text2] = @c3
        where [TableD_ID] = @pkc1

        if @@rowcount = 0
            if @@microsoftversion > 0x07320000
            begin
                if exists (select * from sys.all_parameters
                           where object_id = object_id('sp_MSreplraiserror') and [name] = '@param3')
                begin
                    set @primarykey_text = @primarykey_text + '[TableD_ID] = ' + convert(nvarchar(100), @pkc1, 1)
                    exec sp_MSreplraiserror @errorid=20598, @param1=N'[dbo].[TableD]', @param2=@primarykey_text, @param3=13233
                end
                else
                    exec sp_MSreplraiserror @errorid=20598
            end
    end
    else
    begin
        update [dbo].[TableD]
        set [TableD_text] = @c2,
            [text2] = @c3
        where [TableD_ID] = @pkc1

        if @@rowcount = 0
            if @@microsoftversion > 0x07320000
            begin
                if exists (select * from sys.all_parameters
                           where object_id = object_id('sp_MSreplraiserror') and [name] = '@param3')
                begin
                    set @primarykey_text = @primarykey_text + '[TableD_ID] = ' + convert(nvarchar(100), @pkc1, 1)
                    exec sp_MSreplraiserror @errorid=20598, @param1=N'[dbo].[TableD]', @param2=@primarykey_text, @param3=13233
                end
                else
                    exec sp_MSreplraiserror @errorid=20598
            end
    end
end

As you can see, there is no bitmap mask, just a simple update to ALL columns in the table; the proc does not check whether values have changed.

Now let's see how the update looks in the distribution db. As before, we just update one column in the table:

update TableD
set text2='test_2222'
where TableD_ID=2

If you check the distribution database, you will see this:

[Screenshot: browsing the CALL command]

As you can see, it provides all columns and values even if they have not changed. It will always send everything.

CALL vs SCALL

Now you may wonder why this is important. Imagine you have a large table with 30 columns, some of them big ones like varchar(max). Your application updates only one column in that table… How do you think this will affect your distribution db? Let's have a look.

I cleaned up the distribution db and will populate a 10-column table with 1 million rows, then update those 1 million rows first with the SCALL setting and then with CALL.

The table structure looks like this:

CREATE TABLE [dbo].[TableA](
	[TableA_ID] [int] NOT NULL,
	[Col1] [nvarchar](100) NULL,
	[Col2] [nvarchar](100) NULL,
	[Col3] bigint NULL,
	[Col4] int NULL,
	[Col5] int NULL,
	[Col6] int NULL,
	[Col7] datetime NULL,
	[Col8] int NULL,
	[Col9] bigint NULL,
	[Col10] [nvarchar](max) NULL,
PRIMARY KEY CLUSTERED 
(
	[TableA_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

After creating it, I will populate the table with random data and then publish it. Once the snapshot of TableA with 1 million rows is delivered to the Subscriber, I check the space used by msrepl_commands in the distribution database for that publication (nothing else is getting into that distribution db at this point).

[Screenshot: snapshot delivered]

Right now msrepl_commands has 35 rows and 40 KB reserved. It contains only the initial snapshot. The article is set to use SCALL for updates. Let's now send 1 million updates for just one column, Col4, which is just an INT. I will increase that column by 100, that is it.
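
For reference, the update was a single set-based statement along these lines; replication breaks it into one command per affected row in msrepl_commands:

update dbo.TableA
set Col4 = Col4 + 100;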

[Screenshot: 1 million SCALL updates sent down]

After sending 1 million updates with SCALL, my msrepl_commands table went to 1000035 rows and 183368 KB reserved.

Let's change the article to normal CALL and send 1 million updates for Col5.

[Screenshot: 1 million CALL updates sent down]

Now my msrepl_commands table went to 2000088 rows and 759624 KB reserved on disk. Just to explain why the row count is not 2000035: it is because I changed the article using sp_changearticle, which added a few extra rows, still very small ones (they should not take more than 40 KB).

Summary

As you can see, using normal CALL takes far more space inside your distribution db:

CALL = 576 256 KB

SCALL = 183 328 KB

The same transactions, but with CALL we are using almost 3 times more storage space. Before you change the default SCALL setting to CALL, think twice and check the storage on your Distributor.

If you have any questions, just use the comments section.

Have you used CALL in some cases, and if yes, what were they? What setting do you normally use for UPDATEs?

sp_MSupd_% has too many arguments, sp_MSupd_% expects parameter. Replication Procedures part 2

In the previous post I explained the basics of replication procedures. Now let's look at some common issues with them, especially when you publish the same tables from two (or more) sources into the same destination (Subscriber).

Here is an example:

[Screenshot: articles from two Publishers]

As you can see, both articles from two different Publishers use exactly the same procedure. If you check the Subscriber, you will see only 3 (not 6) procedures for that table:

[Screenshot: replication procedures on the Subscriber]

In that specific scenario you need to be careful with any schema changes on the Publishers. Why?

Let's assume you have a Subscriber that receives data from two (or more) different sources (Publishers): TableA on the Subscriber gets data from Publisher1 and Publisher2. By default the replication procedures will have the same names, e.g. “sp_MSupd_dboTableA”, which means both Publishers will use exactly the same procedures for inserts, updates and deletes. Now you receive a request from DEV to modify that table, to be exact, to drop a column. If you deploy the change to Publisher1 only, the replication procedure will be changed and the new version of the procedure (without the dropped column) will be applied on the Subscriber. The new procedure won't have that old column… but what about Publisher2? Publisher2 will still try to execute the procedure with the old column, and you will receive “Procedure or function sp_MSins_dboTableA has too many arguments specified.”.

Here is an example. We dropped column id_temp on Publisher1 and then inserted one row on Publisher2:

[Screenshot: column dropped]

Once we did that, the Distribution Agent for Publisher2 started failing:

[Screenshot: dropped column error]

How do you solve it, and how do you prevent it from happening?

Solution

Once you get to the point where you receive such an error, you should follow these steps:

  1. Apply the change (in our case, drop the column) on the second Publisher. This won't solve the issue straight away, but it will modify the replication procedure code to match what we have from Publisher1. The DDL change will wait in the queue for delivery.
  2. As the procedure on the Subscriber has 2 instead of 3 parameters, we need to modify that procedure manually and include the 3rd parameter (see the sketch after this list). [Screenshot: modifying the proc]
  3. Completing Step 2 will cause any inserts coming from Publisher1 to fail, as the modified procedure expects @c3; meanwhile Publisher2 will deliver the command that was failing before. You can also give @c3 a default NULL value in the proc (@c3 bigint = NULL), and then it won't cause errors for Publisher1. [Screenshot: error from Publisher1]
  4. Wait until the change from Step 1 is replicated down to the Subscriber. It will change the procedure back to the version without the old column. This will automatically fix the Distribution Agent failures caused by Step 2. Both Publishers will then have the same schema and the same replication procedure code for that table.
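
Here is a sketch of the Step 2/3 fix; the table and column names are made up, and the point is the manually re-added @c3 with a NULL default:

alter procedure [dbo].[sp_MSins_dboTableA]
    @c1 int,
    @c2 nvarchar(100),
    @c3 bigint = NULL -- re-added parameter for the dropped column; the NULL default keeps Publisher1 working
as
begin
    insert into [dbo].[TableA] ([TableA_ID], [SomeText])
    values (@c1, @c2)
    -- @c3 is accepted but ignored, as the column no longer exists on the Subscriber
end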

You need to follow similar steps if you add a new column to a replicated table. Just, instead of adding a parameter, you apply the old procedure (without the new column) from the second Publisher (Scripting out Replication Procedures).

Preventing it

You can prevent this from happening in two ways.

  1. While configuring the publication. When you add articles to the publication, you can specify the INS/UPD/DEL procedure names. Just use different names for Publisher1 and Publisher2. Below you can see that both use different procedures. [Screenshot: replication procs changed]
  2. You can always modify the procedure names using the sp_changearticle procedure on a Publisher. However, if you are already receiving the error on the Distribution Agent, this change won't help: since we are using Transactional Replication, the “change article” command will sit in the queue behind the failing command. You need to be proactive and make this change before the error occurs. Here is an example of modifying the insert procedure:

    exec sp_changearticle
    @publication='Publication2',
    @article='TableA',
    @property='ins_cmd',
    @value='CALL sp_MSins_TableA_Publisher2'

Now, the important thing is that you can't prevent this issue for dropped columns, as the old procedure (from the second Publisher) will try to insert data into the old column anyway, and the column will be missing. If that happens, follow the steps I provided in this post.

The best way to avoid issues with replication procedures is to work closely with DEV when doing schema changes and to configure your publications as mentioned above.

[TSQL Script] Browse replication commands

While you can use just a simple stored proc to browse commands (How to “Browse” Distributor (sp_browsereplcmds explained)), you may notice a few issues with that proc… for example, commands are not ordered by command_id. Here is a script I created. I find it useful especially when I want to see the order of commands applied to the Subscriber :)

You just need to provide the published database name, the seqno of the transaction you want to check (you can get it from Replication Monitor, by querying msrepl_transactions, etc.) and the command id you want to check. Leave it NULL if you want to see ALL commands in the given transaction.

--BROWSE REPL COMMANDS

declare @publisher_db nvarchar(150)
declare @publisher_db_id int
declare @seqno nvarchar(500)
declare @seqno_bin varbinary(16)
declare @error_id int
declare @error_txt nvarchar(1000)
declare @command_id int


/*-----------------PARAMETERS---------------------*/
set @publisher_db='<Published db>'
set @seqno_bin=<LSN>
set @seqno='<LSN>'
set @command_id = <command id> -- leave NULL if you want to get ALL commands from that transaction
/*-----------------------------------------------*/

create table #temp_commands (
    xact_seqno                 varbinary(16) NULL,
    originator_srvname         varchar(100) NULL,
    originator_db              varchar(100) NULL,
    article_id                 int NULL,
    type                       int NULL,
    partial_command            bit NULL,
    hashkey                    int NULL,
    originator_publication_id  int NULL,
    originator_db_version      int NULL,
    originator_lsn             varbinary(16) NULL,
    command                    nvarchar(1024) NULL,
    command_id                 int NULL)

select @publisher_db_id=id from mspublisher_databases where publisher_db=@publisher_db
select @error_id=error_id from msdistribution_history where xact_seqno=@seqno_bin
select top 1 @error_txt=error_text from  dbo.MSrepl_errors where id = @error_id

insert into #temp_commands
exec sp_browsereplcmds 
@xact_seqno_start=@seqno,
@xact_seqno_end=@seqno,
@publisher_database_id=@publisher_db_id

if @command_id is NULL
BEGIN
select command,command_id, @error_txt as 'Error' from #temp_commands order by command_id
END
ELSE
BEGIN
select command,command_id, @error_txt as 'Error' from #temp_commands where command_id=@command_id order by command_id
END
drop table #temp_commands

Execs and desktop heap

Agent Processing Account

In a previous post I explained how the replication agents work. Each one is pretty much an exe running on the Distributor (or, in pull replication, distrib.exe running on the Subscriber). As always, each exe runs under some kind of Windows account. Like here:

[Screenshot: replication agent executables]

As you can see, distrib.exe and logread.exe are running under the SQL Server service account, and you can also see that each of them is using RAM. You choose that account when you configure the publication (configuring the Snapshot and Log Reader processing accounts) and the subscriber (configuring the Distribution Agent). Every time you configure one of these, you need to provide security settings. Two things must be provided: first the processing account and second the authentication account. In some cases you can use the same account for both, like here:

[Screenshot: security settings]

As you can see, we will be using the SQL Server service account for running logread.exe, and we will also use the same account to connect to the Publisher (in order to read transactions). Side note: if you choose to do so, you need to make sure this account has all the necessary permissions on the Publisher server.

Side note 2: you can always modify these settings by going to the publication settings/properties (to modify the Snapshot and Log Reader security settings) and the subscription settings/properties (to modify the Distribution Agent settings).

We are focusing now only on the first part: the processing account, the one the exe runs under.

You are probably wondering why I'm writing about this. Well, it has to do with the “noninteractive desktop heap”.

Noninteractive desktop heap

Now I would like to add a small disclaimer: you may never get to the point where you hit this issue. What is the exact issue?

Let's assume you have one remote Distributor that runs 50, 60, 100 or more distribution agents. Each of these agents is configured to run under the SQL Server service account. Each agent runs an exe under that account, and each process takes some amount of memory on that instance. It also consumes the so-called “noninteractive desktop heap” that allows processes to run concurrently. That heap is limited per processing account, which means once you reach the limit for the SQL Server service account, you won't be able to run any more replication agents.

Once you get to this point, the replication agents just won't run… the problem is that they won't give you ANY error; they just fail without any description. An agent will simply get stuck on “Starting agent” and finally fail after some time.

You can test whether you already have this issue by running the exe command manually under your own account (if you have all the necessary permissions on the server); if the agent then runs fine, you have most probably reached that limit. How to run replication agents manually I described here:

Understanding Replication Agents Part 2

Solution:

  1. Change the processing account to a different domain account. Remember, it needs to have the proper permissions on the Distributor. You will need to restart the agent after you change the processing account.
  2. Increase the desktop heap size. However, this requires a server reboot: Increasing the noninteractive desktop heap size

I normally go with solution no. 1: create a few more domain accounts and run the agents under those.

When Distribution Cleanup is “not behaving”

Distribution Cleanup running every minute and completing with success? Check.

Distributor has plenty of resources and everything seems fine? Check.

But msrepl_commands is getting bigger (and bigger), and Cleanup is fighting locks with the Log Reader… why?

That happens quite often in two cases:

  1. We are using one distribution db for multiple PUBLISHERS.
  2. The tables we publish are very busy and the cleanup just cannot handle the load.

I would like to focus only on the second point. Let's assume the first point has been fixed and there is only one Publisher going through that one distribution database.

Retention:

I like to keep retention on “fixed” values, meaning min and max are the same:

EXEC dbo.sp_MSdistribution_cleanup @min_distretention = 24, @max_distretention = 24

Microsoft's defaults are min=0 and max=72, which means everything that has been delivered will be deleted on the next cleanup run, and if replication fails to deliver, the replicated data will be removed after 72 hours. I like to set both to the same value, as it has saved my “life” a few times (more about “disaster recovery” using LSNs and transactional replication later).

I mention retention because if you set it too high, the msrepl_commands and msrepl_transactions tables will become huge and hard to maintain. Check your retention settings on the Distributor and set them to values that won't cause disk space issues but still meet your and the business's needs.

Cleanup procedures:

This is quite interesting… I have noticed that in very big and busy replication environments some of the default settings do not work well. The reason? The default batch size for deletes is 2000, which is a bit low for a system that can get 30K (or more) commands every minute. In such cases you may think the cleanup is working fine and running, but it is actually not coping with the load. You will see msrepl_commands grow very big, holding data well beyond the retention period you set on the distribution db.

Now, just a note… it is not the perfect approach, but in some cases it needs to be done: we need to modify two system procedures on the distribution db. Here are the changes I suggest, changes that have helped me on large distribution dbs (a sketch of the pattern follows the list):

  1. Modifications to sp_MSdelete_publisherdb_trans (the proc that cleans up msrepl_commands):
  • Batch size: increase the DELETE TOP() value to one that suits you. Remember to also change, at the end of the loop, IF @row_count < [same value as in DELETE TOP]. There are two loops where you need to change it.
  • Change the PAGLOCK hint to ROWLOCK. This will help with locking.
  • Additionally, you can modify the MAXDOP setting (if it is set differently globally on the server). I have never needed to modify it so far; it works well with maxdop=1.

2. Modifications to sp_MSdelete_dodelete (the proc that cleans up msrepl_transactions):

  • Batch size: increase the DELETE TOP() value to one that suits you. Remember to also change, at the end of the loop, IF @row_count < [same value as in DELETE TOP]. There are two loops where you need to change it.
  • Change the PAGLOCK hint to ROWLOCK. This will help with locking.

As you can see, the two modifications are exactly the same for both procs; the only difference may be the values in the TOP part. For sp_MSdelete_dodelete you can set these values lower (or not); you just need to see what works best on your system.
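
To illustrate the pattern (this is NOT the actual Microsoft proc code, just a minimal sketch of the batched-delete shape you are tuning):

declare @batch_size int = 10000;             -- increased from the default 2000
declare @cutoff_seqno varbinary(16) = 0x00;  -- hypothetical cutoff; the real proc derives it from retention
declare @row_count int = @batch_size;

while @row_count >= @batch_size
begin
    delete top (@batch_size)
    from MSrepl_commands with (ROWLOCK)      -- changed from PAGLOCK to reduce blocking
    where xact_seqno <= @cutoff_seqno;

    set @row_count = @@rowcount;             -- the loop exits after a partial (final) batch
end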

One last thing about this: if you make these changes on SQL 2014 and later apply SP2 CU2, all the changes will be replaced with the default code again.

Articles/Publications:

At this point we need to ask some questions… Do we really need to replicate all the tables? Do we really need so many publications? And, most importantly, do we have articles that are published in multiple publications?

The first two questions you need to answer yourself… however, if the answer to the last question is “yes, I do have articles published in more than one publication”, this is the part you should read.

Let's assume you have TableA published in publication Pub1, but you also have a publication named Pub2. 99% of the tables published in these two are different, but one table that gets 1 million transactions a day is published in both… now, what happens? Just by doing this, you multiply the entries in msrepl_commands for that article. Here is an example. The query below shows the commands generated by ONE insert into TableA.

[Screenshot: multiple commands for one insert]

As you can see, because the table is published in two different publications, you get two entries for each transaction on that table.

How can you solve it? If possible, send that table in one publication only. Remember, you can decide which articles go to which subscription: you can have two subscriptions going out from one publication but with different sets of articles. But that is a topic for a different article.
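
A quick way to spot such articles (run in the distribution database) is to count how many publications each published object appears in:

select article, count(distinct publication_id) as publications
from dbo.MSarticles
group by article
having count(distinct publication_id) > 1;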

Distribution Cleanup Job:

The cleanup job is set to run every 10 minutes by default. You can decrease that to 1 minute, but this won't always help; your cleanup can still fail with a deadlock. On busy systems I often add retry attempts to the cleanup step, with 0 minutes between retries. That saves time, as the cleanup will try to run again straight away. Note: do not set the retry attempts too high, otherwise you won't know when there are real problems with the cleanup job (because it will never fail, or will fail only when it is too late and you have run out of disk space).

SQL Server configurations:

Is our Distributor following best practices (multiple tempdb files, maxdop, etc.)? Is our Distributor running on default settings (the default 1 MB autogrowth on the distribution db, low memory settings, etc.)? This is just a reminder, as I'm assuming you have already checked these things and ran out of ideas, which is why you are here :)

One last thing, about running update stats on Distributors: you may or may not know that it is not really needed… as the cleanup procs run UPDATE STATISTICS (on msrepl_commands and msrepl_transactions) at the beginning and at the end of the cleanup proc execution.

Conclusion:

Check your Distributor configuration. Check retention on your distribution dbs. Modify the cleanup procedures accordingly. Review your publications and articles. Add retry attempts to the cleanup step.

These are my suggestions. If you have any other ideas or comments, let me know.