

Corrupted during transfer


4 replies to this topic

#1 RobMcLachlan

    Newbie

  • Members
  • 1 posts

Posted 04 December 2017 - 02:12 PM

Hello,

I have a particular set of binary files which consistently fail when submitting to our remote server, from my machine only. Retrying the submit will eventually clear the error and the file goes through successfully (and p4 verify reports no errors), but it's frustrating because the files are 60MB+ in size and each retry costs about 30 seconds of upload time. I have resorted to submitting this file separately so that I only need to retry a single-file changelist (rather than an even larger changelist, which would have to be resubmitted and waste more time). However, this opens up a gap between connected binaries, so other users may effectively get only a partial checkin.

The file is an Unreal Engine .umap file containing a specific type of game asset (a landscape) - it might just be coincidence but only .umap files containing this same asset type have failed in this way.

Checksums seem to be different each time I submit as you can see below. I am wondering if this could be a local network error here as nobody else on the project has experienced this.

p4 submit -f submitunchanged -i
Change 931 created with 1 open file(s).
Locking 1 file(s)...
edit //depot/Content/Maps/2017/2017_World.umap#75
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs DBA74048E4490C875C1A0C6C653B3334
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
2 errors reported
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs DBA74048E4490C875C1A0C6C653B3334
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
p4 submit -f submitunchanged -i
edit //depot/Content/Maps/2017/2017_World.umap#75
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs 273DBA06A7538763222D692A5CBB0D23
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
2 errors reported
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs 273DBA06A7538763222D692A5CBB0D23
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
p4 submit -f submitunchanged -i
edit //depot/Content/Maps/2017/2017_World.umap#75
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs 083313E36D9AB082192B14E97D70B61F
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
2 errors reported
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs 083313E36D9AB082192B14E97D70B61F
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
p4 submit -f submitunchanged -i
edit //depot/Content/Maps/2017/2017_World.umap#75
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs 36F1E9F80B2F7874D5B8B97B81839DEC
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
2 errors reported
//rob/WHGame/Content/Maps/2017/2017_World.umap corrupted during transfer 4192591DF1F7A3F1540F1A0B1B02EE2A vs 36F1E9F80B2F7874D5B8B97B81839DEC
Submit aborted -- fix problems then use 'p4 submit -c 931'.
Some file(s) could not be transferred from client.
p4 submit -f submitunchanged -i
edit //depot/Content/Maps/2017/2017_World.umap#75
Submitted change 931
1 file edited


#2 p4rfong

    Advanced Member

  • Staff Moderators
  • 216 posts

Posted 15 December 2017 - 05:38 PM

For some reason,  the file on your client is different from what Perforce expects.  You can run

md5sum <filename>

to see what your workspace has and compare it to the digest in

p4 fstat -Ol //<depot>/<dir>/<filename>

If the files are actually fine and you are using a "local" line ending in your client workspace, you can remove the error with

p4 verify -v //<depot>/<dir>/<filename>
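A minimal sketch of that comparison in Python, for anyone who wants to script it: compute the same uppercase hex MD5 that md5sum would print and compare it against the digest pasted in by hand from the fstat output. The depot digest value and the workspace path below are hypothetical placeholders, not taken from a real check.

```python
import hashlib

def file_md5(path):
    """Uppercase hex MD5 of a file, matching the digest format Perforce reports."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large binaries don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Hypothetical: paste the digest line from `p4 fstat -Ol <depotFile>` here.
depot_digest = "4192591DF1F7A3F1540F1A0B1B02EE2A"
# file_md5("Content/Maps/2017/2017_World.umap") == depot_digest
```

If the two digests differ, the workspace copy really is different from what the server has stored for that revision.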

#3 Sambwise

    Advanced Member

  • Members
  • 518 posts

Posted 15 December 2017 - 09:52 PM

I don't think "verify -v" will fix this, nor is comparing against the checksum reported by fstat useful, since the comparison is happening against a revision that doesn't exist yet (i.e. the one being written during the submit).  The differing server checksums each time are very interesting -- if it were some sort of translation error it'd at least be consistent!

There aren't $DateTime$ keywords at play here, are there?  What's the filetype?  (It *shouldn't* matter, but...)

Is anyone else submitting files that are this size or larger?  That seems like the most likely factor to be significant -- if it's specific to files above a certain threshold I might be inclined to suspect the server filesystem (e.g. you're writing these files to a RAID that mangles any file that's larger than some multiple of its stripe size).

If other users are regularly editing and submitting files that size then the network is one possibility, but it's a little weird that it'd deliver a scrambled file to the server without perturbing anything else and throwing some more low-level error.  Do you have anything running on your system that might be messing with reads of the files under some circumstances (an overly aggressive virus scanner, a file syncing/backup service that uses a kernel extension, etc)?

#4 Matt Janulewicz

    Advanced Member

  • Members
  • 127 posts
  • LocationSan Francisco, CA

Posted 15 December 2017 - 10:08 PM

It's an outside chance, but this smells a tiny bit like this bug that was fixed in 2017.2:

#1525293 (Bug #90697) **
    Parallel submit from an Edge Server could corrupt the
    archives of non-ktext files if they contain RCS keywords.
    This has been fixed.


The nature of it was that we were submitting certain Windows binaries that happened to have the character sequence "$Version$" in their metadata. The server was finding this and munging it. The difference here is that in our case the submit would go through on the first try, but you'd get weird failures when syncing. Our workaround was to disable parallel submit using a trigger (we haven't upgraded yet).

Are you in a commit->edge architecture, running a server older than 2017.2 and using parallel submit? If not, ignore message. :)

If so, perhaps disable parallel submit explicitly on your next submit and see what happens.

Even though your submit eventually succeeds, I'd be wary of assuming it's actually correct. It might be fun to p4 verify the last revision and compare it to the locally run md5sum to see if they're the same.
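That spot-check can be scripted. A minimal sketch, assuming a connected client and the (hypothetical) paths from the log above: `p4 print -q` streams the stored head revision, whose MD5 should match the workspace copy's.

```python
import hashlib
import subprocess

def md5_of(data: bytes) -> str:
    """Uppercase hex MD5, same format as the digests in the submit errors above."""
    return hashlib.md5(data).hexdigest().upper()

def head_matches_workspace(depot_path: str, local_path: str) -> bool:
    """Fetch the head revision's bytes via `p4 print -q` and compare digests
    with the local workspace file. Requires a connected p4 client."""
    stored = subprocess.run(["p4", "print", "-q", depot_path],
                            check=True, capture_output=True).stdout
    with open(local_path, "rb") as f:
        local = f.read()
    return md5_of(stored) == md5_of(local)

# Hypothetical paths modeled on the log above:
# head_matches_workspace("//depot/Content/Maps/2017/2017_World.umap",
#                        "Content/Maps/2017/2017_World.umap")
```

One caveat: this byte-for-byte comparison is only reliable for binary filetypes, since `p4 print` may apply line-ending translation to text revisions.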

BTW, we probably have files bigger than 60 MB submitted every day, often much larger (1 GB or more). Until we ran into this bug we had never had a problem with bigger binary files.
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#5 Sambwise

    Advanced Member

  • Members
  • 518 posts

Posted 16 December 2017 - 01:47 AM

Matt Janulewicz, on 15 December 2017 - 10:08 PM, said:

It's an outside chance, but this smells a tiny bit like this bug that was fixed in 2017.2:

#1525293 (Bug #90697) **
Parallel submit from an Edge Server could corrupt the
archives of non-ktext files if they contain RCS keywords.
This has been fixed.

Oh, that's an excellent spot and could easily explain this!  I bet this particular file format has one of the keywords in it, and maybe Rob is the only one submitting them via this edge server (or maybe the only one with the parallel submit option enabled)?



