Push vs. Pull Subscription

As I mentioned already in How to create new Subscription?, you can choose between a push and a pull subscription. Both solutions have benefits, but also some drawbacks.

The main difference between push and pull is the location of the Distribution Agent (an actual SQL job). With push, the Distribution Agent is located on the Distributor, where you will find it as a local SQL job (category: REPL-Distribution). With pull, the job is located on the Subscriber. Here are the pros and cons of both.

Pull (Pros)

  • Faster when going over a WAN. In this case the agent is “downloading” the data instead of having it pushed to it. You may need to consider this if the data is going to the other side of the world.
  • The load is moved from the Distributor to the Subscriber, as the agent uses the Subscriber’s resources. This can be important if you have multiple agents running from one Distributor.

Pull (Cons)

  • Assuming you have multiple Distribution Agents to maintain, it will get harder to manage them, as the SQL jobs will be located on the Subscribers. The big disadvantage for me is that you can’t stop/start an agent while on the Distributor. Again, if you have multiple Distribution Agents to maintain, it is much better to do it from a central location, which is the Distributor. If you are logged on to the Distributor and have Replication Monitor open, you won’t be able to stop/start such a Distribution Agent (screenshot below). Also, with pull the job_id in MSdistribution_agents in the distribution db is different from the job id on the Subscriber, which makes it a bit harder to map a SQL job to the actual Distribution Agent (see the query after this list).

[Screenshot: PullSub]

  • If the Subscriber is already really busy and under pressure, it won’t be good to add more load with a Distribution Agent. One Distribution Agent won’t make a big difference, but it could be a different story with multiple agents running on that Subscriber.
  • In some cases it can be a security issue, especially when the Subscriber is located at a 3rd-party or remote location. The Distribution Agent needs access to the Distributor, so you will have to open port 1433 from the Subscriber to your Distributor.
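
You can see the push/pull split, and the job-mapping problem, from the Distributor with a query along these lines. This is only a sketch run in the distribution database; the subscription_type decoding and the binary-to-uniqueidentifier cast are assumptions to verify on your build:

-- run in the distribution database on the Distributor
SELECT da.name,
       da.publication,
       CASE da.subscription_type
            WHEN 0 THEN 'push'
            WHEN 1 THEN 'pull'
            ELSE 'anonymous'
       END AS subscription_type,
       j.name AS local_sql_job   -- NULL for pull: the job lives on the Subscriber
FROM dbo.MSdistribution_agents AS da
LEFT JOIN msdb.dbo.sysjobs AS j
       ON j.job_id = CAST(da.job_id AS uniqueidentifier);

For pull subscriptions the join finds no local job, which is exactly the mapping problem described above.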

Push (Pros)

  • Everything can be managed from one location. It is much easier to apply changes to Distribution Agents, monitor them, and stop/start agents in bulk.

Push (Cons)

  • Slower than pull when going over a WAN.
  • If there are multiple agents going through one Distributor, you may need to consider a separate box just for a remote Distributor (which means additional costs).

What should you use? As always… IT DEPENDS :). It depends on:

  • how many Distribution Agents you have
  • how fast the connection to your Subscribers is
  • what the security policies in your company are
  • how you prefer to manage your replication environment

The choice is yours :) You just need to be aware of the limitations of both solutions.

Execs and desktop heap

Agent Processing Account

In a previous post I explained how the replication agents work. Each one is pretty much an exe running on the Distributor (or, in pull replication, distrib.exe running on the Subscriber). As always, each exe runs under some kind of Windows account. Like here:

[Screenshot: execs]

As you can see, distrib.exe and logread.exe are running under the SQL Service account, and you will also notice that each of these is using RAM. You choose that account when you configure the publication (the Snapshot and Log Reader processing accounts) and the subscription (the Distribution Agent account). Every time you configure one of these you need to provide security settings, and there are two things to provide: first the processing account and second the authentication account. In some cases you can use the same account for both, like here:

[Screenshot: SecuritySettings]

As you can see, we will be using the SQL Service account to run logread.exe, and we will use the same account to connect to the Publisher (in order to read transactions). Side note: if you choose to do so, you need to make sure that this account has all the necessary permissions on the Publisher server.

Side note 2: you can always modify these settings by going to the Publication properties (to modify the Snapshot and Log Reader security settings) and the Subscription properties (to modify the Distribution Agent settings).
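
The same changes can also be scripted. For the Log Reader it looks roughly like this; a minimal sketch run at the Publisher on the published database, where the account name and password are placeholders:

EXEC sp_changelogreader_agent
     @job_login               = N'DOMAIN\ReplLogReader',  -- processing account (placeholder)
     @job_password            = N'<password>',
     @publisher_security_mode = 1;  -- 1 = Windows authentication when connecting to the Publisher

The agent needs a restart to pick the change up.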

For now we will focus only on the first part: the processing account, the one that the exe will be running under.

You are probably wondering why I’m writing about this. Well, it has to do with the “noninteractive desktop heap”.

Noninteractive desktop heap

First, a small disclaimer: you may never get to the point where you hit this issue. So what is the exact issue?

Let’s assume you have one remote Distributor that is used to run 50, 60, 100 or more Distribution Agents, each configured to run under the SQL Service account. Each agent runs an exe under that account, and each process takes some amount of memory on that instance. It also takes some of the so-called “noninteractive desktop heap” that allows processes to run concurrently. That heap is limited per processing account, which means that once you reach the limit for the SQL Service account you won’t be able to run any more replication agents under it.

Once you get to this point the replication agents just won’t run… and the problem is that they won’t give you ANY error, or will fail without any description. An agent will just get stuck on “Starting agent” and finally fail after some time.

You can test whether you already have this issue by running the exe command manually under your own account (if you have all the necessary permissions on the server); if the agent runs fine that way, you have most probably reached the limit. How to run replication agents manually is described here:

Understanding Replication Agents Part 2

Solution:

  1. Change the processing account to a different domain account. Remember it needs the proper permissions on the Distributor, and you will need to restart the agent after you change the processing account.
  2. Increase the desktop heap size. This, however, needs a server reboot: Increasing the noninteractive desktop heap size

I normally go with solution no. 1 and create a few more domain accounts, then run the agents under these accounts.
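
For a push subscription, solution no. 1 can be scripted with sp_changesubscription; a sketch run at the Publisher on the published database, where the publication, Subscriber, and account names are placeholders:

-- point the Distribution Agent job at a different processing account
EXEC sp_changesubscription
     @publication    = N'Pub1',
     @article        = N'all',
     @subscriber     = N'SUBSRV1',
     @destination_db = N'SubDB',
     @property       = N'distrib_job_login',
     @value          = N'DOMAIN\ReplAgent02';

EXEC sp_changesubscription
     @publication    = N'Pub1',
     @article        = N'all',
     @subscriber     = N'SUBSRV1',
     @destination_db = N'SubDB',
     @property       = N'distrib_job_password',
     @value          = N'<password>';

Restart the Distribution Agent afterwards, as mentioned above.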

Transactional Replication and Transaction Log Management

As you know by now, Transactional Replication uses the transaction log of the published database.

But wait! Do I need to run my DB in the Full recovery model? What about checkpoints? The entire beauty of this is that you do not need to run your db in the Full recovery model; once replication is up and configured, no transactions for the published articles will be lost. How does it work?

Let’s assume we published a table, we will be replicating INSERTs/UPDATEs/DELETEs, and we created a subscription (a publication needs at least one active subscription in order to start picking up data for replication). From now on, ALL transactions for that table will be marked in the transaction log as “for replication”. A transaction is kept in the log until the Log Reader has read it and put it into the distribution database. If for some reason the Log Reader is failing or stopped, the log will keep growing, and growing, and growing… until you finally run out of space. In this case no log backups, no checkpoints, no switching the recovery model around will help. OK, one thing will help: dropping the entire replication on the published database :) But that is not a solution for us. Keep in mind that the Log Reader Agent is crucial and needs to run at all times; any errors on that agent can affect your published database and the entire instance. Keep monitoring that job/agent.
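
A quick way to confirm that it is replication holding your log is a check along these lines; a sketch where the database name is a placeholder:

-- run on the Publisher
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyPublishedDB';  -- REPLICATION here means records are waiting for the Log Reader

-- reports the oldest replicated transaction not yet picked up into the distribution database
DBCC OPENTRAN (N'MyPublishedDB');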

Sequential reads

The transaction log is sequential, the Log Reader reads it sequentially, and Transactional Replication is sequential… That means various things or actions can affect our replication: pretty much everything that touches the transaction log heavily will impact it.

Index Maintenance/Checkdb?

It will affect your replication mostly when you are using online rebuilds. An index rebuild does most of its job in the transaction log, which will keep growing. Even though replication really CANNOT replicate these “transactions”, the Log Reader still needs to read the entire transaction log from the beginning. Let’s assume that after a large index rebuild your transaction log is 10 GB in size: now your Log Reader needs to go through and check the entire log in order to find all transactions that are marked for replication. You either need to be very careful with rebuilds on a published database (add a WAITFOR DELAY between rebuilds and let the Log Reader work through the log, as sketched below), or you and the business accept the replication delay during index maintenance. You will often get a message like this from the Log Reader:

[Screenshot: LogReader_Indexrebuild]

As you can see, it scans the log and returns a message that no transactions were marked for replication.
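
If you take the careful route, the maintenance script can pause between rebuilds; a sketch where the table names and the delay are placeholders:

-- run on the published database during the maintenance window
ALTER INDEX ALL ON dbo.BigTable1 REBUILD WITH (ONLINE = ON);
WAITFOR DELAY '00:05:00';  -- give the Log Reader time to work through the log records just generated
ALTER INDEX ALL ON dbo.BigTable2 REBUILD WITH (ONLINE = ON);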

One thing you can tweak is the Log Reader Agent profile: you can increase -LogScanThreshold and -ReadBatchSize. The sweet spot? Start by adding a 0 to each value and see if it helps in your situation. You should also increase -LoginTimeout and -QueryTimeout, as in some cases your connection can break due to a timeout and the Log Reader will need to start from the beginning, and we DO NOT want that.
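
You can keep such values in a custom agent profile on the Distributor; a minimal sketch, assuming the parameter names below are right for your build (check msdb.dbo.MSagent_parameters for the exact spelling, and treat the numbers as starting points only):

DECLARE @profile_id int;
EXEC msdb.dbo.sp_add_agent_profile
     @profile_id   = @profile_id OUTPUT,
     @profile_name = N'LogReader - busy publisher',
     @agent_type   = 2,  -- 2 = Log Reader Agent
     @description  = N'Bigger read batches, longer timeouts';

EXEC msdb.dbo.sp_add_agent_parameter @profile_id = @profile_id, @parameter_name = N'ReadBatchSize', @parameter_value = N'5000';
EXEC msdb.dbo.sp_add_agent_parameter @profile_id = @profile_id, @parameter_name = N'LoginTimeout',  @parameter_value = N'60';
EXEC msdb.dbo.sp_add_agent_parameter @profile_id = @profile_id, @parameter_name = N'QueryTimeout',  @parameter_value = N'7200';

Assign the new profile to the Log Reader Agent (e.g. in Replication Monitor) and restart the agent.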

Mirroring

I know that at some point mirroring will be replaced, but it still needs to be mentioned. Let’s assume the database you published is also mirrored… If for any reason mirroring falls behind (slow connection, errors, etc.), your replication will do the same. Transactional Replication (the Log Reader, to be exact) will wait for mirroring to catch up and will only read the part of the transaction log that has already been sent to the secondary by mirroring.

Everything ELSE:

The problem with Transactional Replication is that even large updates on tables that ARE NOT published can affect it. Those updates still go to the transaction log, and the Log Reader still needs to scan them.

Conclusion:

You do not need the Full recovery model in order to set up Transactional Replication, and replication won’t lose any data because of it. You do need to be aware of anything heavy happening on your database that touches the transaction log. Keep monitoring your transaction log and the Log Reader.

When Distribution Cleanup is “not behaving”

Distribution Cleanup running every minute and completing with success? – check

Distributor has plenty of resources and everything seems fine? – check

But msrepl_commands is getting bigger (and bigger), and the cleanup runs into locking issues with the Log Reader… why?

That happens quite often in two cases:

  1. we are using one distribution db for multiple PUBLISHERS.
  2. the tables we are publishing are very busy and the cleanup just cannot handle the load.

I would like to focus only on the second point. Let’s assume the first point has been fixed and there is only one Publisher going through that one distribution database.

Retention:

I like to keep retention on “fixed” values, meaning min and max are the same:

EXEC dbo.sp_MSdistribution_cleanup @min_distretention = 24, @max_distretention = 24

Microsoft’s defaults are min=0 and max=72, which means everything that has been delivered will be deleted on the next cleanup run, and if replication fails to deliver, the replicated data will be removed after 72 hours. I like to set both to the same value, as it has saved my “life” a few times (more about “disaster recovery” using LSNs and transactional replication later).

I’m mentioning retention because if you set it too high, the msrepl_commands and msrepl_transactions tables will become huge and hard to maintain. Check your retention settings on the Distributor and set them to values that won’t cause disk space issues but still meet your and the business’s needs.
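
To check whether the cleanup is actually keeping up with the retention window, compare the oldest rows in the distribution database against it; a quick sketch:

-- run in the distribution database
SELECT COUNT_BIG(*) AS total_commands
FROM dbo.MSrepl_commands WITH (NOLOCK);

SELECT MIN(entry_time) AS oldest_transaction
FROM dbo.MSrepl_transactions WITH (NOLOCK);
-- rows much older than your retention window mean the cleanup is not coping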

Cleanup procedures:

This is quite interesting… I have noticed that in very big and busy replication environments some of the default settings do not work well. The reason? The default batch size for the delete is 2000, which is a bit low for a system that can get 30K (or more) commands every minute. In such cases you think the cleanup is working fine and running, but it is actually not coping with the load, and you will see msrepl_commands grow very big, way beyond the retention period you set on the distribution db.

Now just a note: it is not the perfect way, but in some cases it needs to be done. We need to modify two system procedures in the distribution db. Here are the changes I suggest, changes that have helped me on large distribution dbs:

  1. Modifications to sp_MSdelete_publisherdb_trans (the proc that cleans up msrepl_commands):
  • Increase the batch size (DELETE TOP(…)) to a value that suits you. Remember to also change, at the end of the loop, IF @row_count < [same value as in DELETE TOP]. There are two loops where you need to change it.
  • Change PAGLOCK to ROWLOCK. This will help with the locking.
  • Additionally, you can modify the MAXDOP setting (if it is set differently globally on the server). I have never needed to modify it so far; it works well with maxdop=1.

  2. Modifications to sp_MSdelete_dodelete (the proc that cleans up msrepl_transactions):

  • Increase the batch size (DELETE TOP(…)) to a value that suits you. Remember to also change, at the end of the loop, IF @row_count < [same value as in DELETE TOP]. There are two loops where you need to change it.
  • Change PAGLOCK to ROWLOCK. This will help with the locking.

As you can see, the two modifications are exactly the same for both procedures; the only difference will be the values in the TOP part. For sp_MSdelete_dodelete you can set these values lower (or not); you just need to see what works best on your system.
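
To make the edit concrete, this is the shape of the change inside sp_MSdelete_publisherdb_trans (the exact procedure text varies between SQL Server versions, and 100000 is only an illustrative value):

-- as shipped (roughly):
--   DELETE TOP(2000) MSrepl_commands WITH (PAGLOCK) ...
--   IF @row_count < 2000 ...
-- after the change (illustrative batch size):
--   DELETE TOP(100000) MSrepl_commands WITH (ROWLOCK) ...
--   IF @row_count < 100000 ...
-- the same pattern appears in two loops inside the procedure; change both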

One last thing about this: if you make these changes on SQL 2014 and then apply SP2 CU2, all your changes will be replaced with the default code again.

Articles/Publications:

At this point we need to ask a few questions… Do we really need to replicate all tables? Do we really need so many publications? And, most importantly, do we have articles that are published in multiple publications???

The first two questions you need to answer yourself… however, if the answer to the last question is “yes, I do have articles published in more than one publication”, this is the part you should read.

Let’s assume you have TableA published in publication Pub1, but you also have a publication named Pub2. 99% of the tables published in these two are different, but one table that gets 1 million transactions a day is published in both publications… Just by doing that, you multiply the entries in msrepl_commands for that article. Here is an example: the query below shows how the commands look for ONE insert into TableA.

[Screenshot: multiple_commands]

As you can see, because the table is published in two different publications, you get two entries for each transaction on that table.
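
You can count the duplication yourself with something along these lines; a sketch run in the distribution database, where TableA is the hypothetical table from the example (on a multi-publisher Distributor you would also match on the publisher database):

SELECT p.publication, a.article, COUNT_BIG(*) AS commands
FROM dbo.MSrepl_commands AS c
JOIN dbo.MSarticles      AS a ON a.article_id = c.article_id
JOIN dbo.MSpublications  AS p ON p.publication_id = a.publication_id
WHERE a.source_object = N'TableA'  -- hypothetical table from the example
GROUP BY p.publication, a.article;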

How can you solve it? If possible, send that table with one publication only. Remember, you can decide which articles go to which subscription: you can have two subscriptions going out from one publication but with different sets of articles. But that is a topic for a different article.

Distribution Cleanup Job:

The cleanup job is set to run every 10 minutes by default. You can decrease that to 1 minute, but this won’t always help; your cleanup can still fail with a deadlock. On busy systems I often add retry attempts to the cleanup step and put 0 between the retries. That saves us time, as the cleanup will try to run again straight away. Note: do not set the retry attempts too high, otherwise you won’t know when there are real problems with the cleanup job (because it will never fail, or will fail when it is too late and you have run out of disk space).
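
A sketch of adding those retries from T-SQL; the job name below is the default one and the step id is an assumption, so verify both on your Distributor:

EXEC msdb.dbo.sp_update_jobstep
     @job_name       = N'Distribution clean up: distribution',  -- default cleanup job name
     @step_id        = 1,   -- verify which step runs the cleanup proc on your system
     @retry_attempts = 3,
     @retry_interval = 0;   -- minutes between retries; 0 = retry straight away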

SQL Server configurations:

Is our Distributor following best practices (multiple tempdb files, maxdop, etc.)? Is our Distributor running on default settings (default 1 MB autogrowth on the distribution db, low memory settings, etc.)? This is just a reminder, as I’m assuming you have already checked these things and ran out of ideas, which is why you are here :).

The last thing is about running update stats on Distributors. You may or may not know that it is not really needed… as the cleanup procs run UPDATE STATISTICS (on msrepl_commands and msrepl_transactions) at the beginning and at the end of the cleanup proc execution.

Conclusion:

Check your Distributor configuration. Check the retention on your distribution dbs. Modify the cleanup procedures accordingly. Review your publications and articles. Add retry attempts to the cleanup step.

These are my suggestions. If you have any other ideas or comments, let me know.