
Does Replica Server need p4?


#1 peanutsofdoom




Posted 03 September 2019 - 05:47 PM

I've been following the docs to create a replica server. The end goal is essentially to have a mirror that my users in another region can sync from, so they get better download speeds on their first sync of a very large depot.

I think I have the replica configured correctly. It seems to be able to log in with the service user's ticket, and p4 pull -lj doesn't show any errors and shows the sequence numbers advancing.

However, running p4 pull -u from my local machine against the replica server doesn't seem to do anything. I would expect it to start syncing revisions from the depot to the replica. Is that not how it's supposed to work?

Do I need p4 configured on the replica server itself? That seems like an odd question to me, but here I am!  I only have p4d on the replica currently and have been doing configuration locally from my client.

Perhaps I'm misunderstanding how p4 pull -u is supposed to work.
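For reference, here's roughly what I've been running from my client against the replica (the host and port are placeholders for my actual setup):

```shell
# Sanity checks run remotely against the replica; "replica-host:1666" is a placeholder.
p4 -p replica-host:1666 info       # should report the server as a replica of the master
p4 -p replica-host:1666 pull -lj   # journal replication state; sequence numbers advance
p4 -p replica-host:1666 pull -u    # the command that doesn't seem to do anything
```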


#2 Matt Janulewicz


Posted 23 September 2019 - 05:58 PM

I'm not sure if 'p4 pull -u' is designed to be run on the command line. Maybe, but we don't do it that way. :)

Typically you'll have pull threads running on the server, one for the journal and additional ones for archive/library content. One example from one of our servers looks like this:

$ p4 configure show | grep "^startup"
startup.1=pull -P sfo-edge1 -i 1 (configure)
startup.2=pull -i 5 -u (configure)
startup.3=pull -i 5 -u (configure)
startup.4=pull -i 5 -u (configure)
startup.5=pull -i 5 -u (configure)
startup.6=pull -i 5 -u (configure)
startup.7=pull -i 5 -u (configure)
startup.8=pull -i 5 -u (configure)
startup.9=pull -i 5 -u (configure)
startup.10=pull -i 5 -u (configure)
startup.11=pull -i 5 -u (configure)
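In case it helps, a minimal sketch of how startup.N configurables like those get set in the first place ("replica1" is a placeholder for your replica's ServerID, and you'd run this as a super user):

```shell
# Hypothetical sketch; "replica1" is a placeholder ServerID.
p4 configure set replica1#startup.1="pull -i 1"     # metadata (journal) pull thread
p4 configure set replica1#startup.2="pull -u -i 5"  # archive (file content) pull thread
p4 configure set replica1#startup.3="pull -u -i 5"  # extra -u threads transfer files in parallel
# startup.N changes only take effect after the replica p4d is restarted,
# e.g. with 'p4 admin restart' run against the replica.
```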

The way we 'seed' archives, then, is to run a 'p4 verify -t' on the replica for the files you want transferred. This adds anything that needs to be transferred from the upstream server to the pull thread queue:

$ p4 verify -qt //some/path/...

One caveat is that the pull thread queue can get clogged, for lack of a better term. Don't let it get too big. It has something to do with the tracking file ($P4ROOT/rdb.lbr) and too many threads connecting to it or something. Not sure. As a rule of thumb (ymmv) I keep it under a million. :) 'p4 pull -ls' will show you a summary of what's in the queue.
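To put commands to that, a rough sketch of inspecting (and, very carefully, trimming) the queue:

```shell
p4 pull -ls     # one-line summary of the archive transfer queue
p4 pull -l      # full list of pending transfers (can be very long)
# A stuck entry can be removed by hand; use sparingly, only for known-bad entries:
# p4 pull -d -f //depot/path/to/file -r <revision>
```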

One other thing you'll want to look into is controlling which projects/paths are automatically sent to the replica. If you don't configure that, then everything will be added to the replica's queue as it's created (new adds, edits, etc.). For some shops this might be fine, but in ours we have a few far-flung edge servers that don't need 30 TB worth of data on them.

In the server spec, you'll want to add an ArchiveDataFilter: section and add specific paths that you want automatically transferred. Any unmentioned paths will be transferred on-demand.
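A filtered server spec excerpt might look something like this (the ServerID and depot path are made up; you'd edit yours with 'p4 server <serverid>'):

```
ServerID:       replica1
Type:           server
Services:       replica
ArchiveDataFilter:
        //big-depot/...
```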
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA

#3 Miles O'Neal


Posted 23 September 2019 - 08:58 PM

Matt's right. It sounds like you might have things configured correctly, but simply haven't done anything to force files across. The verify will do that, but only if you have "pull -u" threads configured.

BUT, if you have a good backup where the depots match a checkpoint or full set of metadata files, copying them over and starting from that will bring everything into sync, provided the replica was configured properly before the backup/checkpoint.

If you haven't got the "pull -u" threads that Matt showed, you need those in the server configs. We typically run 5 on the LAN and 10 on the WAN. We used to have to run 20 on the WAN, but Perforce added some network speedups and we were able to cut back. But we have people check in thousands of files at a time, including large ones, so we need more threads for trans-Atlantic speed. There's nothing like seeing 50k files totaling 50 GB show up and having all the write-based commands waiting in line, with a horde of frustrated engineers waving pitchforks and torches behind them.
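Roughly, seeding from a checkpoint looks like this (paths and the checkpoint number are placeholders; verify against your own layout):

```shell
# On the master: take a checkpoint (this also rotates the journal).
p4d -r /p4/master/root -jc
# Copy the checkpoint and the versioned file tree (the depot archives) to the
# replica host, e.g. with rsync, then replay the metadata into the replica root:
p4d -r /p4/replica/root -jr checkpoint.1234
# Start the replica p4d; its pull threads then only have to catch up on the
# changes made since the checkpoint.
```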
