Commit-Edge: submit slow / timeout


#1 leo.cd

Posted 18 November 2015 - 03:55 PM

Hello,

We have at least 3 sites around the world working on the same code and asset base. We are investigating the idea of using the P4 commit-edge architecture, with a commit server at our main site in the EU and edge servers at each other location.

For that purpose, we set up a basic commit-edge configuration from scratch, following the steps exposed here: https://www.perforce...stributed.setup

We first tried an EU / North America configuration, but observed huge latency when submitting a large number of files.

Suspecting the network, we tried running both the commit and edge servers on the same local machine (on two different ports), but the issue remains.

Here is what we observe with P4V, from a computer on the same local network:
- We submit 16,000 new binary files of 525 B each to the edge server from workspace A
- The progress bar completes within a few seconds
- The progress bar hangs at 100% for about 30 minutes

During this time:
- A client connected to the commit server sees everything as if the changelist had been submitted normally; it can get latest and sync those files.
- A client connected to the edge server won't see the submitted files until the progress bar stops hanging.

If we cancel the submit while it hangs at 100%:
- Files still appear as submitted and are available from "get latest" on the commit server
- On workspace A:
    * the submitted changelist has a "shelve" icon, but we do not see any shelved files, or it no longer contains any files.
    * we can't remove the changelist, because "it contains some shelved files"
    * the added files are no longer "marked for add".
- about 30 minutes later, workspace A returns to normal, in the same state as the commit server, as if everything had been submitted successfully.

We tried with fewer files: it's faster, but still very slow:
30 files: 2 seconds
100 files: 6 seconds
500 files: 34 seconds

However, it took only 115 ms to submit those same 500 files directly on the commit server. The edge server still takes minutes to show the files.

That makes us think pulling is the bottleneck here. "p4 pull" shows the following during the inconsistent state (the output of "pull -l" varies over time):

perforce@perforceproxy:~$ p4 pull -lj
Current replica journal state is:	   Journal 0,	  Sequence 3983733.
Current master journal state is:		Journal 0,	  Sequence 8309721.
The statefile was last modified at:	 2015/11/18 15:40:01.
The replica server time is currently:   2015/11/18 15:41:23 +0100 CET
perforce@perforceproxy:~$ p4 pull -l
//depot/test4000/1a5d33d5-1a43-467a-8985-9e2309ee6d3e.bin 1.1 binary active add 8C8B1FBB2FD5F825C1014A3907962CDA 525 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4296 2015/11/18 15:41:30 1 0
//depot/test4000/1a69015b-0be9-4446-a795-3eebd5b0feda.bin 1.1 binary new add 4CF1EFF5A5E1A96EC3FB3C2ECFE07422 525 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:29 1 0
//depot/test4000/1a6c905f-1b08-47c5-8e68-5d492b40c739.bin 1.1 binary new add BBD5DAA2B714526B4684B2D9BD3FA9FF 524 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:29 1 0
//depot/test4000/1a6daaa1-f677-43c8-af15-594b05af581f.bin 1.1 binary new add 02BA9C6DB142974782222F600A7D1CD5 524 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:30 1 0
//depot/test4000/1a70188c-a4e0-43eb-96ad-ad2bcb886d6c.bin 1.1 binary new add FB8F53634F4AB013944508ED0DC8E90F 524 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:30 1 0
//depot/test4000/1a70fad0-0592-4308-ac7f-d806ecc5759b.bin 1.1 binary new add 0AFB84C2407BA75A398B5268E8BFE48A 525 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:30 1 0
//depot/test4000/1a710263-ea15-4a3e-b5ee-921ee87d8f3b.bin 1.1 binary new add 3B35FF33A7A70B6F2E8EF1BC57918F58 523 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:30 1 0
//depot/test4000/1a729830-8098-46ec-84a8-c3f4e3116db1.bin 1.1 binary new add F01C2341E63E64A353B24BDDA24129DF 524 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:30 1 0
//depot/test4000/1a76c6e8-187f-461c-bf3e-a14b6ecf890a.bin 1.1 binary new add AB8AD36DA09992DF06644F2B6BF19747 524 1 2015/11/18 15:40:01 2015/11/16 23:13:18 4295 2015/11/18 15:41:30 1 0

Other information:
- p4 server is P4D/LINUX26X86_64/2015.2/1252060 (2015/10/22)
- the server is a Debian Linux machine
- the edge and commit refer to each other as localhost:1667 and localhost:1668

What we tried:
* with and without unicode
* with and without db.peeking=2
* with pull -i 1 and pull -i 0 (0 seems faster)
* with or without exclusive checkout

What we didn't try (yet):
* text files
* testing it on an isolated VM, outside the network
* probably 1,000 things we don't even know about


Questions:
* why would the edge server need to pull everything when it is the one that submitted the files?
* if pulling is the issue, why is it so fast to transfer the files one way, and so slow the other way?
* we are about to test a simple forwarding replica, but I guess everything tends to say that we'll have the same issue with pulling?
* most of all: any tips/ideas/guesses on what could be wrong here?

Additional questions:
* In the tutorial linked above, it seems that step 4 of "Create and start the edge server" has a wrong ticket path; it should be "/tokyo/p4root/.p4tickets" instead of "/chicago/p4root/.p4tickets", since we are creating the edge ticket. Am I right?
* Also, could you explain the double "pull -u -i 1" in the edge configuration?
* We were very surprised by the "same timezone" condition. What is the reason behind it? Do you plan to allow different time zones in the future?

Thanks for reading.

#2 ThatGuy

Posted 19 November 2015 - 04:06 PM

Hi,

Can I suggest increasing the number of pull startup threads and then rerunning your test case? This might help, as more pull threads would assist with the high volume of metadata being replicated. This is just a thought, but I think it may well be worth it. So, to answer one of your questions: yes, it might be a pulling issue, but could you please run "p4 configure show allservers" so we can see your setup?
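For example, extra archive pull threads can be added with the per-server configurable syntax (a sketch only, assuming your edge's serverid is something like p4_edge; adjust to your own, and note that new startup.N entries typically only take effect after the edge is restarted):

p4 configure show allservers
p4 configure set "p4_edge#startup.4=pull -u -i 1"
p4 configure set "p4_edge#startup.5=pull -u -i 1"

startup.1 is usually the metadata pull thread; the startup.N entries with -u are the archive pull threads.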

I cannot answer all of your questions, but for the time zone it makes sense for the replicas and the master instance to share a common time zone because, as far as I know, the pull command will not convert timestamps from the P4TARGET's time zone to that of the replica being replicated to.


Thanks,

Tunga.
Certified P4.

#3 Matt Janulewicz

Posted 19 November 2015 - 05:58 PM

I can confirm your suspicions about the time zone. We have four edge servers sprinkled around the world and have set them all to the same time zone as our commit server. We mainly did that, though, so that the timestamps in the logs would align. Easier for troubleshooting.

What stands out in the original post is how far diverged the journals are. They should generally be pretty close to equal at all times. Something seems to be causing it to stall, which could be network latency or any number of things.

During times when this happens, monitoring 'p4 pull -ls' is helpful, too, as it'll give you an idea of what's in the queue, and whether that's changing over time. It still seems like this small-ish amount of data should arrive in the queue pretty fast, and perhaps send pretty fast, as it's around 1 GB of binaries, if my math is correct. It seems that a halfway decent network should send that much data in well under half an hour.
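Something like this against the edge port will print the transfer queue summary every few seconds (a rough sketch; localhost:1668 is assumed from your description, and pull -ls needs a suitably privileged user):

while true; do p4 -p localhost:1668 pull -ls; sleep 5; done

If the count barely moves, the archive threads look stuck; if it drains steadily but slowly, it's more of a throughput problem.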

One last comment, commit->edge architectures use shelving on the back end to transfer files between servers when they're submitted, so that's why you see a report of the changelist having shelved files. That's normal. I'm not 100% sure exactly how the whole thing works and if it might be getting tripped up by having global shelves set as the default?

Another one last comment for real. We've experimented with different numbers of pull threads and found that, at least in our environment, having more than about a dozen pull threads for archive files doesn't offer us any improvement. We've standardized on 10 and it seems to work pretty well. If you only have one archive pull thread, then that 1 GB of data is being pushed through it. I'd consider setting that number of startup threads higher than 1, if that's what you currently have set up.
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#4 Mailman Sync

Posted 19 November 2015 - 06:45 PM

Originally posted to the perforce-user mailing list by: Michael Mirman


Hi Matt -

Quote

We've experimented with different numbers of pull threads and found that, at least in our environment, having more than about a dozen pull threads for archive files doesn't offer us any improvement. We've standardized on 10 and it seems to work pretty well. If you only have one archive pull thread, then that 1 GB of data is being pushed through it. I'd consider setting that number of startup threads higher than 1, if that's what you currently have set up.

We settled on 4 but haven't experimented with other numbers.
I wonder if you use 10 for overseas edge servers or within the US.
My understanding is that the bandwidth is very different for different countries, and I don’t know when more stops being better.

--
Michael Mirman
MathWorks, Inc.
3 Apple Hill Drive, Natick, MA 01760
508-647-7555




#5 Matt Janulewicz

Posted 19 November 2015 - 07:03 PM

Mailman Sync, on 19 November 2015 - 06:45 PM, said:

Originally posted to the perforce-user mailing list by: Michael Mirman


Hi Matt -



We settled on 4 but haven't experimented with other numbers.
I wonder if you use 10 for overseas edge servers or within the US.
My understanding is that the bandwidth is very different for different countries, and I don’t know when more stops being better.


That's definitely the case, though I tend to set all our edge servers the same for aesthetic/ADD purposes. Our edge that's in the same rack as our commit has 10, but it probably only needs 2 or 3.
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#6 leo.cd

Posted 19 November 2015 - 07:59 PM

Hello all,

Thanks for your answers.

Tunga Mavengere, on 19 November 2015 - 04:06 PM, said:

Can I suggest increasing the number of pull startup threads and then rerunning your test case? This might help, as more pull threads would assist with the high volume of metadata being replicated. This is just a thought, but I think it may well be worth it.

Matt Janulewicz, on 19 November 2015 - 05:58 PM, said:

Another one last comment for real. We've experimented with different numbers of pull threads and found that, at least in our environment, having more than about a dozen pull threads for archive files doesn't offer us any improvement. We've standardized on 10 and it seems to work pretty well. If you only have one archive pull thread, then that 1 GB of data is being pushed through it. I'd consider setting that number of startup threads higher than 1, if that's what you currently have set up.

We tried 1, 2, 10 and 20 pull threads. Performance is always the same.


Tunga Mavengere, on 19 November 2015 - 04:06 PM, said:

So, to answer one of your questions: yes, it might be a pulling issue, but could you please run "p4 configure show allservers" so we can see your setup?

Here is the commit configuration:
perforce@perforce:~$ p4 configure show
P4ROOT=/depot_commit
P4PORT=1667
P4JOURNAL=/log/journal_commit
P4NAME=p4_commit (serverid)
P4LOG=/p4err_commit
P4TICKETS=/depot_commit/.p4tickets (configure)
monitor=3 (configure)
journalPrefix=/backup/p4d_backup_commit (configure)
p4zk.log.file=p4zk.log (default)
auth.default.method=perforce (default)
zk.connect.timeout=300 (default)
server: 1 (P4DEBUG)
serverid=p4_commit (serverid)

And here is the edge configuration:
perforce@perforce:~$ p4 configure show
P4ROOT=/depot_edge
P4PORT=1668
P4JOURNAL=/log/journal_edge
P4NAME=p4_edge (serverid)
P4LOG=/log/edge_local.log (configure)
P4TICKETS=/depot_edge/.p4tickets (configure)
serviceUser=svc_edge (configure)
P4TARGET=localhost:1667 (configure)
monitor=3 (configure)
startup.1=pull -i 0 (configure)
startup.2=pull -u -i 0 (configure)
startup.3=pull -u -i 0 (configure)
startup.4=pull -u -i 0 (configure)
startup.5=pull -u -i 0 (configure)
startup.6=pull -u -i 0 (configure)
startup.7=pull -u -i 0 (configure)
startup.8=pull -u -i 0 (configure)
startup.9=pull -u -i 0 (configure)
startup.10=pull -u -i 0 (configure)
db.replication=readonly (configure)
lbr.replication=readonly (configure)
journalPrefix=/backup/p4d_backup_edge (configure)
p4zk.log.file=p4zk.log (default)
auth.default.method=perforce (default)
zk.connect.timeout=300 (default)
server: 1 (P4DEBUG)
serverid=p4_edge (serverid)

Matt Janulewicz, on 19 November 2015 - 05:58 PM, said:

What stands out in the original post is how far diverged the journals are. They should generally be pretty close to equal at all times. Something seems to be causing it to stall, which could be network latency or any number of things.

Because the two Perforce servers (commit and edge) are running on the same computer, communicating via localhost, I guess we can rule out network latency, can't we? Moreover, it's really fast to submit directly on the commit server, so only the pull is affected.

Matt Janulewicz, on 19 November 2015 - 05:58 PM, said:

During times when this happens, monitoring 'p4 pull -ls' is helpful, too, as it'll give you an idea of what's in the queue, and whether that's changing over time. It still seems like this small-ish amount of data should arrive in the queue pretty fast, and perhaps send pretty fast, as it's around 1 GB of binaries, if my math is correct. It seems that a halfway decent network should send that much data in well under half an hour.

It's only 8 MB ;) ... which, as you said, makes it perfect nonsense. And we're just transferring from a commit to an edge on the same computer.

Matt Janulewicz, on 19 November 2015 - 05:58 PM, said:

One last comment, commit->edge architectures use shelving on the back end to transfer files between servers when they're submitted, so that's why you see a report of the changelist having shelved files. That's normal. I'm not 100% sure exactly how the whole thing works and if it might be getting tripped up by having global shelves set as the default?

It's a good point, so I tested the "dm.shelve.promote=1" option; nothing changes.
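For reference, we enabled it with the standard configurable syntax against the commit server (a sketch; the port is from our setup above):

p4 -p localhost:1667 configure set dm.shelve.promote=1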

But I did some tests around shelving that are interesting (roughly the sequence sketched below):
- shelving locally on the edge is instantaneous
- promoting the shelf to global is instantaneous
- submitting the globally shelved files is freaking slow
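In p4 commands against the edge it looks roughly like this (a sketch with a hypothetical pending changelist 1234):

p4 -p localhost:1668 shelve -c 1234      (shelve locally on the edge: instantaneous)
p4 -p localhost:1668 shelve -p -c 1234   (promote the shelf to the commit server: instantaneous)
p4 -p localhost:1668 submit -e 1234      (submit the shelved change: very slow)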

We're now 100% sure that the only slow thing is the pull command. If we call it manually, it's slow, and of course the submit operation only completes when the pull command finishes.

So maybe I can redefine the problem as follows:

Is it normal that it takes 34 seconds to pull 500 tiny binary files (256 KB in total) between two p4 servers running on the same machine, whereas it takes only about 115 ms to submit those same files from a computer on the local network to the commit server?

Maybe pulling works this way by design. But since write commands on the edge wait for the pull to complete, this makes the submit command much, much slower than working directly on the commit server.

Thanks

#7 P4Nathan

Posted 27 November 2015 - 05:46 PM

This does sound like it could be related to journal contention, or an I/O bottleneck on your journal and potentially the logs. Where are you hosting the respective journal and log files?

Does the edge server log file show any 'Journal Wait' entries around the slow submits?
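Something along these lines can help spot it (a rough sketch, assuming the log path from your configure output; the exact wording in the log may vary):

grep -i "journal" /log/edge_local.log | grep -i "wait"
iostat -x 5     (watch the disks holding /log and /depot_edge during a submit)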

-Nathan

#8 leo.cd

Posted 01 December 2015 - 04:45 PM

Hello,

We tested some other configurations, and it turns out that the problem comes from the Linux machine on which we tested the edge. But we are still trying to find out what causes the problem.
The machine is perfectly capable of hosting the commit server or a proxy, but it cannot handle the pull as an edge or replica, whether the commit is on the same machine, on another machine on the same network, or in another country.

Here is the slow part of the edge log (dm-SubmitChangeFlush):

From what I understand, we just see that the edge is holding the workspace lock for 86,209 ms (probably just waiting for the pull to end)

Perforce server info:
2015/12/01 16:34:16 pid 11556 leo@workspace 192.168.0.104 [P4V/NTX64/2014.3/1007540/v77] 'dm-SubmitChangeFlush'
Perforce server info:
2015/12/01 16:34:24 pid 11666 perforce@perforce 127.0.0.1 [p4/2015.1/LINUX26X86_64/1024208] 'user-pull -lj'
Perforce server info:
2015/12/01 16:34:24 pid 11666 completed .005s 0+0us 0+0io 0+0net 2252k 0pf
Perforce server info:
2015/12/01 16:34:24 pid 11666 perforce@perforce 127.0.0.1 [p4/2015.1/LINUX26X86_64/1024208] 'user-pull -lj'
--- lapse .005s
--- rpc msgs/size in+out 0+4/0mb+0mb himarks 195906/195906 snd/rcv .000s/.000s
--- db.counters
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 1+0+0 0+0
--- db.server
---   pages in+out+cached 1+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 1+0+0 0+0
--- db.svrview
---   pages in+out+cached 5+0+2
---   locks read/write 3/0 rows get+pos+scan put+del 0+3+3 0+0
--- db.user.rp
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 1+0+0 0+0
--- db.user
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 1+0+0 0+0
--- db.group
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 0+1+1 0+0
--- db.domain
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 1+0+0 0+0
--- db.trigger
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 0+1+1 0+0
--- db.protect
---   pages in+out+cached 3+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 0+1+5 0+0
Perforce server info:
2015/12/01 16:35:42 pid 11556 completed 86.2s 152+56us 0+15376io 0+0net 9652k 0pf
Perforce server info:
2015/12/01 16:34:16 pid 11556 leo@workspace 192.168.0.104 [P4V/NTX64/2014.3/1007540/v77] 'dm-SubmitChangeFlush'
--- lapse 86.2s
--- usage 152+56us 0+15384io 0+0net 9652k 0pf
--- rpc msgs/size in+out 4337+3340/2mb+0mb himarks 195906/523588 snd/rcv .000s/.011s
--- db.counters
---   pages in+out+cached 4+0+2
---   locks read/write 1/1 rows get+pos+scan put+del 2+0+0 0+0
--- db.logger
---   pages in+out+cached 2+0+1
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 0+0
--- db.domain
---   pages in+out+cached 2+0+2
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+0ms/0ms+18ms
--- db.template
---   pages in+out+cached 7+0+2
---   locks read/write 3/1 rows get+pos+scan put+del 0+3+3 0+0
---   total lock wait+held read/write 0ms+7ms/0ms+18ms
---   max lock wait+held read/write 0ms+7ms/0ms+18ms
--- db.view
---   pages in+out+cached 3+0+2
---   locks read/write 3/0 rows get+pos+scan put+del 0+3+6 0+0
--- db.have
---   pages in+out+cached 16+39+16
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 1000+0
---   total lock wait+held read/write 0ms+0ms/0ms+18ms
--- db.integed
---   pages in+out+cached 1+0+1
---   locks read/write 1/0 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+11ms/0ms+0ms
--- db.resolve
---   pages in+out+cached 6+0+2
---   locks read/write 2/2 rows get+pos+scan put+del 0+6000+6000 0+0
---   total lock wait+held read/write 0ms+10ms/0ms+29ms
---   max lock wait+held read/write 0ms+5ms/0ms+18ms
--- db.resolvex
---   pages in+out+cached 9+0+2
---   locks read/write 2/3 rows get+pos+scan put+del 0+2000+2000 0+0
---   total lock wait+held read/write 0ms+12ms/0ms+29ms
---   max lock wait+held read/write 0ms+7ms/0ms+18ms
--- db.revdx
---   pages in+out+cached 3+0+2
---   locks read/write 3/0 rows get+pos+scan put+del 0+1002+1002 0+0
---   total lock wait+held read/write 0ms+52ms/0ms+0ms
---   max lock wait+held read/write 0ms+23ms/0ms+0ms
--- db.revhx
---   pages in+out+cached 28+0+26
---   locks read/write 3/0 rows get+pos+scan put+del 0+1002+1002 0+0
---   total lock wait+held read/write 0ms+52ms/0ms+0ms
---   max lock wait+held read/write 0ms+23ms/0ms+0ms
--- db.revsx
---   pages in+out+cached 1+0+1
---   locks read/write 3/0 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+22ms/0ms+0ms
---   max lock wait+held read/write 0ms+11ms/0ms+0ms
--- db.revsh
---   pages in+out+cached 32+101+27
---   locks read/write 2/2 rows get+pos+scan put+del 1000+1000+1000 1000+1000
---   total lock wait+held read/write 0ms+12ms/0ms+11ms
---   max lock wait+held read/write 0ms+7ms/0ms+6ms
--- db.rev
---   locks read/write 5/0 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+62ms/0ms+0ms
---   max lock wait+held read/write 0ms+23ms/0ms+0ms
--- db.revtx
---   locks read/write 2/0 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+41ms/0ms+0ms
---   max lock wait+held read/write 0ms+23ms/0ms+0ms
--- db.locks
---   pages in+out+cached 26+33+16
---   locks read/write 2/12 rows get+pos+scan put+del 0+1000+2000 1000+1000
---   total lock wait+held read/write 0ms+10ms/0ms+33ms
---   max lock wait+held read/write 0ms+5ms/0ms+17ms
--- db.working
---   pages in+out+cached 30+129+31
---   locks read/write 6/12 rows get+pos+scan put+del 2000+5+5005 2000+1000
---   total lock wait+held read/write 0ms+36ms/0ms+33ms
---   max lock wait+held read/write 0ms+23ms/0ms+17ms
--- db.workingx
---   pages in+out+cached 38+113+30
---   locks read/write 3/3 rows get+pos+scan put+del 0+6+4006 1000+1000
---   total lock wait+held read/write 16ms+31ms/0ms+16ms
---   max lock wait+held read/write 16ms+17ms/0ms+6ms
--- db.traits
---   pages in+out+cached 1+0+1
---   locks read/write 1/0 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+6ms/0ms+0ms
--- db.trigger
---   pages in+out+cached 1+0+2
---   locks read/write 1/0 rows get+pos+scan put+del 0+1+1 0+0
--- db.change
---   pages in+out+cached 16+8+2
---   locks read/write 7/3 rows get+pos+scan put+del 7+0+0 2+0
---   total lock wait+held read/write 1ms+7ms/0ms+18ms
---   max lock wait+held read/write 1ms+7ms/0ms+17ms
--- db.changex
---   pages in+out+cached 12+8+2
---   locks read/write 1/4 rows get+pos+scan put+del 2+0+0 2+1
---   total lock wait+held read/write 0ms+7ms/0ms+18ms
---   max lock wait+held read/write 0ms+7ms/0ms+17ms
--- db.changeidx
---   pages in+out+cached 2+0+1
---   locks read/write 0/1 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+0ms/0ms+17ms
--- db.desc
---   pages in+out+cached 4+0+2
---   locks read/write 2/0 rows get+pos+scan put+del 2+0+0 0+0
---   total lock wait+held read/write 0ms+7ms/0ms+0ms
---   max lock wait+held read/write 0ms+7ms/0ms+0ms
--- db.fix
---   pages in+out+cached 1+0+1
---   locks read/write 1/0 rows get+pos+scan put+del 0+0+0 0+0
---   total lock wait+held read/write 0ms+7ms/0ms+0ms
--- db.fixrev
---   pages in+out+cached 2+0+2
---   locks read/write 2/0 rows get+pos+scan put+del 0+2+2 0+0
---   total lock wait+held read/write 0ms+7ms/0ms+0ms
---   max lock wait+held read/write 0ms+7ms/0ms+0ms
--- clients/workspace(W)
---   total lock wait+held read/write 0ms+0ms/0ms+86209ms

The Linux machine is not that bad; I managed to get a correct edge running on a virtual machine with less RAM/CPU and a similar disk.
I still have to compare the file systems and deactivate all the other services that could be running on that Linux machine (there is not much: a license server and maybe a Perforce proxy).
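To compare the disks I will probably run something like this on both machines (a rough sketch; the target path is wherever P4ROOT/P4JOURNAL live, and oflag=dsync forces synchronous writes roughly similar to journal writes):

dd if=/dev/zero of=/depot_edge/ddtest bs=8k count=10000 oflag=dsync
rm /depot_edge/ddtest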

Any idea of something that could cause that problem on a specific computer?




