Posted 21 November 2018 - 09:10 AM
To stop certain old branches from being backed up nightly, I would like to move them to a separate, manually backed-up depot; ideally, however, I need to keep their commit history too.
If I do a copy/move/integrate, the files in the new depot/branch start at revision one with no commit history. Furthermore, to stop the old location from lingering and still being backed up, I would need to obliterate it.
Is what I want possible?
Since the majority of our branches have been integrated/branched from earlier branches, will obliterating a branch earlier in the chain upset the integrity of later branches?
Also, my understanding from the docs is that because these branches have been integrated, I'm not able to archive them. Is that right?
Posted 21 November 2018 - 06:01 PM
What are you trying to accomplish by doing this? Saving space in your archive backups? Making checkpoints faster?
I don't think what you're describing will accomplish either of those, but understanding your ultimate goal will make it easier to suggest an alternative.
Also, what level of availability do you need for the branches that you're potentially getting rid of? Do you want it to be possible to put them right back where they were and resume work with no downtime, or are you fine with them being effectively read-only forever, or somewhere in between?
Posted 22 November 2018 - 11:06 AM
I need everything to remain accessible with history. The chance of these files changing is low, they can be manually backed up if necessary, and they don't need to go back to their original depot.
My thinking was that if I can move them to a different depot, I can exclude that depot from the rsync.
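Assuming the new depot lives under its own directory beneath the server root (the paths below are hypothetical; your P4ROOT layout will differ), the nightly rsync could skip it with an exclude pattern, something like:

```shell
# Sketch only: '/p4root/' and the 'archived' depot directory are
# placeholder names. Excludes the manually backed-up depot from
# the nightly sync of the live server's files.
rsync -a --delete \
    --exclude '/archived/' \
    /p4root/ backuphost:/backups/p4root/
```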
Posted 22 November 2018 - 05:33 PM
Have you done any tests (e.g. on a backup server) to validate that obliterating these branches yields a significant enough saving in versioned-file storage to be worth it?
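For what it's worth, `p4 obliterate` runs as a preview by default (nothing is deleted until you add `-y`), so you can measure the impact safely. The branch path below is hypothetical:

```shell
# Preview only: reports which revisions and archive files WOULD be removed.
p4 obliterate //depot/old-branch/...

# Only after reviewing the preview output (ideally against a restored
# backup) would you run it for real:
# p4 obliterate -y //depot/old-branch/...
```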
Posted 13 December 2018 - 01:26 AM
It sounds like you may be backing up, or rsyncing from, a live server. One alternate route would be to create a full, read-only replica of your server that is used exclusively for backups. The db and archive files are updated incrementally in real time, so there is no need to rsync. You could also, perhaps, shut the p4d service down during the backup so the db matches the archives. Since folks won't be connecting to the server you can afford some downtime on it.
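A minimal sketch of the configurables involved, assuming a replica server ID of `backup_replica` and a master at `perforce:1666` (both names are placeholders; see the Perforce replication docs for the full procedure, including seeding the replica from a checkpoint):

```shell
# Run against the master; server ID and addresses are hypothetical.
p4 configure set backup_replica#P4TARGET=perforce:1666
p4 configure set backup_replica#db.replication=readonly
p4 configure set backup_replica#lbr.replication=readonly
# Pull threads: one for metadata (journal records), one for archive files.
p4 configure set "backup_replica#startup.1=pull -i 1"
p4 configure set "backup_replica#startup.2=pull -u -i 1"
```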
I would also suggest looking into the SDP (Server Deployment Package), or at least the concepts behind it, and consider maintaining a second, offline copy of your database. Even in a read-only environment, if something nutty happens on the main server, any corrupt/insane journal entries proliferate throughout the system and render all the other dbs equally corrupt. Maintaining an offline db that's updated periodically gives you a base to (re)start from without having to spend lots of time recovering a checkpoint. This allows you the time/space to do all kinds of offline things (the aforementioned checkpointing/balancing, backups, 'p4 verify', etc.).
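The offline-db pattern boils down to replaying rotated journals into a copy of the db that the live server never touches, then checkpointing there instead of on the master. Roughly (all paths and sequence numbers below are hypothetical):

```shell
# On the master: rotate the journal (cheap, brief lock, no checkpoint).
p4d -r /p4/1/root -jj

# On the offline copy: replay the rotated journal into the offline db...
p4d -r /p4/1/offline_db -jr /p4/1/checkpoints/journal.42

# ...then dump the checkpoint from the offline db, with zero impact
# on the live server.
p4d -r /p4/1/offline_db -jd /p4/1/checkpoints/p4_1.ckp.43
```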
So anyway, in a nutshell:
1. Don't use obliterate for this purpose.
2. Maintain a read-only replica for backups.
3. Maintain a second, offline db on every server.
* If you want to go completely nuts like I do, you might put ZFS on the backup server and take daily snapshots, then incrementally 'zfs send' those offsite for safekeeping. The size of our main server backup (~1/2 TB db, >100,000,000 files) makes 'classic' backup schemes lose their minds so we're basically tape/storage/agent free on our backups. I know I did a presentation at the last Merge conference telling everyone how bad ZFS on Linux is, but in this case where performance isn't an issue, it's been a godsend.
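For the curious, the ZFS side of that is only a couple of commands per day (dataset names, snapshot dates, and the offsite host are all hypothetical):

```shell
# Daily snapshot of the backup server's Perforce dataset.
zfs snapshot tank/p4@2018-12-13

# Send only the delta since yesterday's snapshot to the offsite box.
zfs send -i tank/p4@2018-12-12 tank/p4@2018-12-13 | \
    ssh offsite zfs receive -F backup/p4
```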
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA