Helix Core Backup Script

perforce backup

7 replies to this topic

#1 ian.morris@trad.fi

ian.morris@trad.fi

    Newbie

  • Members
  • 5 posts

Posted 29 January 2019 - 02:57 PM

Hi

I'm trying to write a simple backup script to do the following with Helix Core

1. Create a journal (p4d -jc)
2. Stop Perforce
3. Copy the files in the Perforce root directory to another location
4. Start Perforce

I thought this would be easy, but being new to Perforce I'm struggling a little with it. I end up with .db files in the wrong place when I start Perforce, permission issues, refused connections, etc. Too many issues to go into detail about. If anyone already has a script that does something similar, it could save me some time.

Thanks

#2 Sambwise

Sambwise

    Advanced Member

  • Members
  • 894 posts

Posted 29 January 2019 - 10:50 PM

As far as I know the Perforce SDP is the standard if you want a premade script: https://swarm.worksh...e-software-sdp/

If you want to just fix up your own script, though, it sounds like all that's happening is that you aren't setting the server environment variables.  This works a bit differently on Windows vs Unix; on Windows you need to do stuff like "p4 set -S Perforce P4ROOT=C:\your\root", and on Unix you can do it via environment variables but I think more typically you just embed it in your startup script like "p4d -r /usr/perforce/root -p 1666".  If you're passing those flags, make sure to pass them to every invocation of p4d (including "p4d -jc" and similar commands).
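To illustrate the point about passing the same flags to every invocation, a Unix-style sketch (the root path and port here are made-up examples, not from this thread):

```shell
# Every p4d invocation names the same -r; otherwise a command like
# "p4d -jc" run against a different directory quietly creates a
# fresh, empty set of db files there.
P4ROOT=/usr/perforce/root      # hypothetical server root

p4d -r "$P4ROOT" -p 1666 -d    # start the daemon in the background
p4d -r "$P4ROOT" -jc           # checkpoint the SAME database
```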

If you start Perforce in the wrong root location, it'll just make a new empty set of db files and essentially start up a whole new server instance.  Doesn't really hurt anything permanently but obviously you can't do anything with your "real" server instance when the server daemon itself is serving up a whole different one.  "p4 info" is a useful debugging tool since it'll tell you what server root p4d is currently using (among a bunch of other pertinent data).

#3 ian.morris@trad.fi

ian.morris@trad.fi

    Newbie

  • Members
  • 5 posts

Posted 30 January 2019 - 09:17 AM

The Perforce SDP looks like a great resource, and I hadn't seen it before. I installed it on a clean Linux server, but when I ran any p4 commands the shell couldn't find them. I didn't want to spend time getting that working, but I will revisit it another time.

For now I have a basic backup script which I will put in the cron.daily folder.

p4 verify -q //...
p4 admin checkpoint
p4dctl stop -a
day=$(date +"%d-%m-%Y")
aws s3 cp /opt/perforce/servers/master/ s3://mys3bucket/perforce_backup/"$day"/ --recursive
p4dctl start -a

#4 Matt Janulewicz

Matt Janulewicz

    Advanced Member

  • Members
  • 176 posts
  • LocationSan Francisco, CA

Posted 08 March 2019 - 12:20 AM

I second the SDP idea. It's great. Everyone should use it, no matter how small your server.

We can't afford the downtime for running a checkpoint (it's an hours-long process for us) so we do everything in a second, offline copy of the database.

Once a week we 'swap in' the clean offline database, giving us about 10 seconds of downtime per server a week, no matter how big our db files get.
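The offline-database approach described above can be sketched roughly like this. Directory names and the journal file number are hypothetical, and the SDP automates all of these steps; this is just the shape of the idea:

```shell
# Rough shape of an offline checkpoint: rotate the live journal,
# replay it into a second copy of the db.* files, and checkpoint
# that copy -- the live server never stops.
LIVE=/p4/root            # hypothetical live P4ROOT
OFFLINE=/p4/offline_db   # hypothetical offline copy of the db files

p4d -r "$LIVE" -jj                         # rotate the live journal only
p4d -r "$OFFLINE" -jr "$LIVE"/journal.123  # replay rotated journal offline
p4d -r "$OFFLINE" -jd checkpoint.124       # dump a checkpoint offline
```

The weekly 'swap in' then amounts to stopping p4d, swapping the offline db.* files into the live root, and restarting, which is where the roughly-10-seconds figure comes from.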

One thing that's probably tripping you up right off the bat is that the SDP doesn't touch what I'd call the 'standard' environment, at least not on Linux. It doesn't copy binaries to /usr/bin or anything like that, so you have to put the SDP bin path in your own $PATH. I just stick this in all my server users' ~/.bash_profile:

export PATH=/depotdata/p4/common/bin:$PATH
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#5 P4TomT

P4TomT

    Member

  • Staff Moderators
  • 10 posts
  • LocationNashua, NH, USA

Posted 25 May 2019 - 01:59 AM

If you want to play with the SDP, try out the Helix Installer. Spin up a fresh VM (various OSes are supported: CentOS 6/7, among others).

See:  https://swarm.worksh...helix-installer

It creates the OS user, installs the SDP along with the Sample Depot, and configures the systemd or SysV init scripts (depending on OS version).

BEWARE - this software includes a script named reset_sdp.sh that should NEVER find its way near a production Perforce server (as noted in the docs).

Just as a case study of its usage, our Battle School training course's lab environment uses a slightly customized version of the Helix Installer. Where the Helix Installer builds a single machine, the custom version builds out a five-machine simulated global topology in about 90 seconds, and then just as quickly blasts it away and resets it, letting students work through a series of labs, each from the same starting condition.
C. Thomas Tyler
Perforce Software
Consulting Services
Phone: +1 (603) 595-9670
Mobile: +1 (617) 513-2414
Email: consulting@perforce.com
Twitter: @cttyler

P4Blog: http://blog.perforce.com

Get peace of mind with Perforce experts on your team!
Perforce Remote Administration Program: http://bit.ly/PerforceRA

#6 Miles O'Neal

Miles O'Neal

    Advanced Member

  • Members
  • 128 posts

Posted 28 May 2019 - 02:24 PM

We use SDP; among other things it provides those scripts to handle journal rotation and offline metadata rebuilds that others have referenced. Much less effort than rolling your own, if you want it done right.
For a true backup, we run a R/O replica server. We then have a simple script that once a day:
- halts the p4d service
- backs up everything to a backup storage appliance
- restarts the service
This guarantees that we have a recent backup where the versioned files match the metadata (and logs for that matter).
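The daily replica backup described above might look something like this. The replica service name, paths, and backup mount are placeholders, not from the post:

```shell
#!/bin/sh
# Sketch of a daily read-only-replica backup: stop the replica's
# p4d so db, depot, and log files are all quiesced, copy everything,
# then bring the replica back up. The master is never touched.
set -e

p4dctl stop replica1                              # halt the replica
rsync -a /p4/replica1/ /mnt/backup/p4/replica1/   # db + depots + logs
p4dctl start replica1                             # bring it back up
```

Because the replica is stopped during the copy, the versioned files and metadata in the backup are guaranteed to be consistent with each other.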

#7 Matt Janulewicz

Matt Janulewicz

    Advanced Member

  • Members
  • 176 posts
  • LocationSan Francisco, CA

Posted 04 June 2019 - 11:01 PM

Miles O'Neal, on 28 May 2019 - 02:24 PM, said:

We use SDP; among other things it provides those scripts to handle journal rotation and offline metadata rebuilds that others have referenced. Much less effort than rolling your own, if you want it done right.
For a true backup, we run a R/O replica server. We then have a simple script that once a day:
- halts the p4d service
- backs up everything to a backup storage appliance
- restarts the service
This guarantees that we have a recent backup where the versioned files match the metadata (and logs for that matter).

We do a similar, albeit more elaborate, process (we have six edge servers to worry about). Our backup/replicator runs ZFS; we snapshot the filesystems, then 'zfs send' the deltas to a backup/storage appliance (AWS, actually).
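A rough sketch of that snapshot-and-send flow, with made-up pool, dataset, and host names (and GNU date for the previous snapshot's name):

```shell
# Snapshot the dataset holding the Perforce files, then ship only
# the delta since yesterday's snapshot to the backup host.
SNAP="tank/p4@daily-$(date +%F)"
PREV="tank/p4@daily-$(date -d yesterday +%F)"   # GNU date syntax

zfs snapshot "$SNAP"                        # point-in-time snapshot
zfs send -i "$PREV" "$SNAP" | \
    ssh backuphost zfs receive tank/p4      # incremental receive
```

Each received snapshot remains browsable on the backup side, which is what gives the historical access to archives mentioned above.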

I know I railed against ZFS a few years ago at MERGE, but for something like this where you don't care about performance that much, it's a really nice way to have historical access to all your archives and whatnot.
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#8 Miles O'Neal

Miles O'Neal

    Advanced Member

  • Members
  • 128 posts

Posted 04 June 2019 - 11:34 PM

I am guessing when you say, "don't care about performance that much", you are referring to the snapshot and backup portions, not p4d performance using ZFS?



