


WAN performance configurables


#1 Miles O'Neal

Advanced Member · 128 posts

Posted 03 May 2019 - 08:39 PM

We have a trans-ocean link with round-trip ping times at the OS level of ~130 ms. Big submits, clean syncs, and some other operations can be really slow. I've been doing timing tests with smaller data sets and found results that surprised me.

I've played with tcpsize, replica compression, parallel syncs/submits, and client compression. Here are my results so far (rough commands are sketched below).
- net.autotune and net.tcpsize=2M (or 1M or 4M) all return similar results.
- The single biggest help, beyond not running the default tcpsize, was setting rpl.compress to 3. (A value of 1 actually made it worse.)
- Adding parallelism, client compression, or both made things worse. In fact, just turning parallelism on in the server made things worse!
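
For anyone who wants to poke at the same knobs, this is roughly what I ran against the master. It's from memory, so double-check names and values with p4 help configurables on your release; the net.parallel.max and client-spec "compress" bits are what I believe correspond to the parallelism and client-compression settings, not something I've re-verified here.

    p4 configure set net.autotune=1    # or a fixed buffer: p4 configure set net.tcpsize=2M
    p4 configure set rpl.compress=3    # compress master<->replica traffic; the biggest win for us
    p4 configure show                  # confirm what actually took effect
    # Parallelism was net.parallel.max > 1 on the server; client compression is
    # the "compress" option in the client spec (p4 client).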

I'm still waiting for engineering to provide a test case that I can use for real-world numbers.

Has anyone else delved into this? What did you find?

My testing has been with 2018.1; I need to install 2019.1 and try it.

#2 Matt Janulewicz

Advanced Member · 176 posts · San Francisco, CA

Posted 04 May 2019 - 12:57 AM

Sorry, I can't offer much in the way of tuning experience. We do all the things we're supposed to do, but we also have a lot of WAN optimization going on that tends to make the various compression settings and whatnot moot; the bottleneck (or lack thereof) tends to be in other areas outside Perforce's purview.

What I can recommend is putting an edge server wherever your far-away friends are. Or at the very least a proxy with a process to pre-cache files. We did this a few years ago and it's been a huge boon to our organization.
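
The proxy pre-cache process can be as simple as a cron job on a machine next to the proxy that syncs a dedicated workspace through it, so the proxy has file content cached before anyone asks for it. A rough sketch; the host, workspace, and depot path are placeholders:

    # warm the proxy cache nightly via a workspace that lives near the proxy
    0 5 * * * p4 -p p4proxy.example.com:1666 -c prewarm-ws sync //depot/main/... >/dev/null 2>&1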

By default an edge server populates file content on demand (just like a proxy), but you can set up paths to be pulled as soon as they are submitted. Everything is queued and gets there when it gets there, and then syncs against the edge server only have to go that far to get updates.
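
If memory serves, the knob for that is the ArchiveDataFilter field in the edge's server spec (p4 server), which tells the edge to pull archive content for matching paths as part of replication instead of waiting for the first sync. A rough sketch with only the relevant fields shown, and the server ID and depot paths made up for illustration:

    ServerID:           edge-ams
    Services:           edge-server
    ArchiveDataFilter:
        //depot/main/...
        //depot/tools/...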

We're lucky in that a majority of our development takes place in California and most of our edge servers are in Europe, so we don't have a lot of overlap as far as collaborating simultaneously. Even if someone in the States submits a 2 GB file that takes an hour to get to Amsterdam, those folks are asleep while that hour passes, but the file is there waiting for them when they get to work.

Anyway, I can't advocate strongly enough for a commit->edge architecture when you have a global workforce; it's worth the hardware investment and is infinitely expandable.
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#3 Miles O'Neal

Advanced Member · 128 posts

Posted 06 May 2019 - 03:37 PM

Matt Janulewicz, on 04 May 2019 - 12:57 AM, said:

Anyway, I can't advocate strongly enough for a commit->edge architecture when you have a global workforce; it's worth the hardware investment and is infinitely expandable.

We currently have a master near HQ and a replica on the other end of the link. Replica compression (rpl.compress=3) was the biggest help so far. We plan to move to commit/edge, but that's still a few months off (there are still decisions to be made on configuration vs. our processes, and then we have to get downtime scheduled).
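
For what it's worth, configurables can be scoped to a single server with the name# prefix, so the compression setting can apply to just the replica rather than globally; the server name here is made up:

    p4 configure set ams-replica#rpl.compress=3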



