Cleaning server up after a partial obliterate

obliterate server

4 replies to this topic

#1 Kumputer

Kumputer

    Member

  • Members
  • 10 posts

Posted 23 January 2016 - 03:31 AM

I recently ran an obliterate on a large tree (100,000 files, 80 GB) with significant history. It was a partially botched scripted series of commits meant to transfer history from another (horrible) version control system, and I needed to redo it from scratch. When I ran obliterate on the tree, it appeared to hang for about 20 minutes after wiping out most of the files, at which point I logged into the server and killed the process. When I tried running the obliterate again, it appeared to hang again, along with a bunch of warnings. I tried the -h flag and it had the same effect. Lastly I tried using -a, and it completed quickly, but now I can see that there are still file remnants taking up a bit of space on the server. They end in ,d. If I manually delete these files, will it have an adverse effect? Is there a better way to clean up?

As a side note, I tried transferring the history again using the scripted commits (to a different location in the depot), but found that I screwed it up again in a different way, so I obliterated this folder, too, but this time waited patiently for about an hour, and it finished. Inspecting the server structure, I see that all the files were wiped clean, but some empty directories were left behind.

#2 Matt Janulewicz

Matt Janulewicz

    Advanced Member

  • Members
  • 222 posts
  • Location: San Francisco, CA

Posted 25 January 2016 - 05:48 PM

I think you've gotten yourself into an odd situation (with the obliterate -a) where the files are removed from the database but not from disk on the server. In a perfect world, you'd have dumped a list of where all the archive files are first, then removed them manually after the 'obliterate -a'. The big warning here is you have to be careful about removing files that might be a lazy copy. It sounds like you're importing into a discrete location, so it's probably okay to delete those files, but I personally proceed with an abundance of caution any time I use the obliterate command.

Note that the ,d does mean the files are binary, likely stored with one archive file per revision, so deleting them should be okay. You never want to delete a ,v file, as those hold multiple revisions in a single archive.
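If you want to see what kinds of archives you're dealing with before deleting anything, a quick listing on the server filesystem helps. This is just a sketch of my own (the function name is mine; point it at whatever your depot root actually is):

```shell
#!/bin/bash
# list_archives DIR: tag each archive under DIR so you can tell the
# per-revision binary directories (,d) apart from the multi-revision
# RCS text archives (,v) before deleting anything.
list_archives() {
    find "$1" -type d -name '*,d' | sed 's/^/BIN  /'
    find "$1" -type f -name '*,v' | sed 's/^/RCS  /'
}
```

Usage would be something like `list_archives /p4root/depot/imported_stuff` — never delete anything tagged RCS.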

If I found myself in this situation, and as painful as it is, I'd generate a list of where all my other archive files are, and compare that to what I want to delete so I don't accidentally wipe a lazy copy. Given any depot path, you can see where an archive is, and whether or not it's a lazy copy, with fstat. When I'm running a very large obliterate, I typically run something like this on all the files I want to obliterate first:


$ p4 fstat -Oc -Of -F lbrIsLazy=0 -T lbrFile //spec/client/syd-build-voice-ios.p4s
... lbrFile //spec/client/82,d/syd-build-voice-ios.p4s
... lbrFile //spec/client/82,d/syd-build-voice-ios.p4s
... lbrFile //spec/client/82,d/syd-build-voice-ios.p4s
... lbrFile //spec/client/82,d/syd-build-voice-ios.p4s
...

This filters out lazy copies. So now I could run "p4 obliterate -ay //spec/client/syd-build-voice-ios.p4s" then remove <depot_root>/spec/client/82,d/syd-build-voice-ios.p4s manually from the server.

You might want to do this in reverse: get the listing of all your current files, then ensure you don't delete those when you're cleaning up. Not sure how many files you have in your database/depot, so it may take a while, but it will likely be the safest route.
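The "reverse" comparison can be sketched with comm. Assumptions flagged: the function name is mine, and both input lists must already be in the same path syntax (i.e., you'd normalize the depot-style lbrFile paths and your on-disk find output to a common form first):

```shell
#!/bin/bash
# safe_remnants KEEP_LIST REMNANT_LIST
# Prints the archive paths that appear in REMNANT_LIST but are NOT
# referenced by any surviving revision in KEEP_LIST, i.e. candidates
# for manual deletion. comm -23 prints lines unique to the first file.
safe_remnants() {
    comm -23 <(sort -u "$2") <(sort -u "$1")
}
```

Anything the function prints is, under those assumptions, safe to remove; anything it filters out is still referenced somewhere and must stay.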

One other thought is that perhaps re-doing the same import will overwrite the old files. Not sure if that's the case, though, never tried it.

One last comment on obliterates, since I have some recent history doing this same kind of thing on a much grander scale. :) If you have a particularly large db.archmap, obliterates will take a really long time as the server searches through it for the archives to remove. Including '-a' in the obliterate skips that step, but you'll have to remove things manually as described above. My guess is that optimizing/compressing your database would make this step shorter if you don't skip the archive search, but in cases like our server(s), the database is large enough that compressing it doesn't give us much of an advantage (for obliterate purposes). So for me it's standard operating procedure to dump the archive paths first, then obliterate using -a, deleting the archive files manually when I'm done.
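That procedure — dump the archive paths, obliterate with -a, delete manually — needs the lbrFile output turned into a plain path list first. A minimal sketch (the function name is mine; it assumes the "... lbrFile <path>" output format shown in the fstat example above):

```shell
#!/bin/bash
# extract_lbr_paths: read 'p4 fstat -Oc -Of -F lbrIsLazy=0 -T lbrFile'
# output on stdin (lines like "... lbrFile //depot/82,d/foo.c") and
# print the unique archive paths, ready to be mapped onto the server
# filesystem and removed after the 'obliterate -ay'.
extract_lbr_paths() {
    awk '$2 == "lbrFile" {print $3}' | sort -u
}
```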
-Matt Janulewicz
Currently unemployed, looking for work in Boise, ID!

#3 P4REB

P4REB

    Advanced Member

  • Members
  • 3 posts

Posted 25 January 2016 - 07:10 PM

Kumputer, on 23 January 2016 - 03:31 AM, said:

Lastly I tried using -a and it completed quickly, but now I can see that there are still file remnants taking up a bit of space on the server. They end in ,d. If I manually delete these files, will it have an adverse effect? Is there a better way to clean up?

You might try using the undocumented "p4 snap" command.  This is designed to undo lazy copies in particular depot paths. The command to use here would be this:

   p4 snap -n //current_depot_path/... //obliterate_path/...

This will look for files in "//current_depot_path/..." with librarian paths that reside under "//obliterate_path/..." and copy any file revisions that match to the proper place within the "//current_depot_path/..." location in the versioned file tree. Note that I included the "-n" flag here, which simply reports what would be done if the command were run without "-n". If you run "p4 snap" over your entire depot like this, you should be able to identify and undo any lazy copy file revisions remaining in the obliterated path. You should then be able to remove any files remaining in that path from the versioned file tree.

Details on this are contained in "p4 help snap".  Please do NOT run more than one "p4 snap" command at a time!  There is no protection within the server from your use of multiple overlapping "p4 snap" commands.
He who has no hands
Perforce must use his tongue;
Foxes are so cunning
Because they are not strong.
-- Ralph Waldo Emerson

#4 Robert Cowham

Robert Cowham

    Advanced Member

  • PCP
  • 271 posts
  • Location: London, UK

Posted 25 January 2016 - 08:02 PM

Kumputer, on 23 January 2016 - 03:31 AM, said:

I ran an obliterate on a large tree (100,000 files, 80GB) with significant history recently because it was a partially botched scripted series of commits to transfer a history from another (horrible) version control system, and I needed to do it over again from scratch. When I ran obliterate on the tree, it appeared to hang after wiping out most of the files, for about 20 minutes, at which point, I logged into the server and killed the process.
In addition to Matt's and REB's suggestions for cleaning up: next time, you might want to consider the obliterate script I wrote, which does it incrementally:

https://swarm.worksh...l_obliterate.py
Co-Author of "Learning Perforce SCM", PACKT Publishing, 25 September 2013, ISBN 9781849687645

"It's wonderful to see a new book about Perforce, especially one written by Robert Cowham and Neal Firth. No one can teach Perforce better than these seasoned subject matter experts"
  • Laura Wingerd, author of Practical Perforce, former VP of Product Technology at Perforce

#5 Kumputer

Kumputer

    Member

  • Members
  • PipPip
  • 10 posts

Posted 06 February 2016 - 12:46 AM

Thanks Robert. So, essentially, this script just obliterates every changelist in reverse order?




