

Max journal sequence number?

Tags: journal, maximum, sequence, rollover


#1 Miles O'Neal (Advanced Member, 87 posts)

Posted 16 August 2018 - 07:23 PM

What is the max journal sequence number before it rolls over? Thanks.
2017.1 on Linux if it matters.

#2 Miles O'Neal (Advanced Member, 87 posts)

Posted 16 August 2018 - 07:52 PM

Wait. Is this just the current size of the master journal file? Is there any way to get that via a p4 command (for a previous journal), or do we need to look at the archived journal?
We're working on monitoring software.

#3 Sambwise (Advanced Member, 684 posts)

Posted 16 August 2018 - 08:13 PM

I'd expect the max journal number to be some kind of MAXINT value (at least 2^31).  I've never seen it roll over.

The number is just an incrementing counter and has nothing to do with the file size. As far as I know, the file size of the journal isn't stored in the journal anywhere, but IIRC recent versions of the server do produce checksums of the generated checkpoint and journal files.
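For reference, the counter itself can be read like any other counter, which is a quick way to confirm that it goes up by one per journal rotation rather than by bytes (assuming a reasonably recent server):

    # Show the "journal" counter, i.e. the number of the journal the server
    # is currently writing; it increments by 1 each time the journal is rotated.
    p4 counter journal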

#4 dave.foglesong (Newbie, 7 posts)

Posted 17 August 2018 - 02:05 AM

In recent versions of Perforce, you can get the size of the current journal (and of the various logs) with "p4 logstat", and details of old checkpoints/journals (including size) with "p4 journals".
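A rough sketch of how that looks from the command line (the commands are as described above; the -m flag on "p4 journals" is my assumption that it takes the usual max-rows option, so check "p4 help journals" on your release first):

    # Size (in bytes) of the live journal, plus the sizes of the server logs:
    p4 logstat

    # Records of rotated checkpoints/journals, including their sizes;
    # available in recent server releases. -m limits the number of rows
    # (assumed to behave like the max-rows option on other list commands).
    p4 journals -m 5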

#5 Miles O'Neal (Advanced Member, 87 posts)

Posted 12 September 2018 - 09:50 PM

We have a replica we use for backups. It halts during the backup process so that we get a known state (metadata and depot files) for a clean restore. It's down for a few hours every night during the quietest period we have. A difference of 200M to 240M in the journal sequence number isn't unusual when it starts back up. That's ~10% of 2^31, so it would roll over fairly frequently if it were a 32-bit value.
But if it's just a byte count into the journal, then there is no set maximum, and there may not be an easy way to compute the true difference when there is more than one journal involved.
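For what it's worth, here is a rough outline of how the true difference could be computed when one or more rotations happened in between. This is only a sketch; the journal sizes come from "p4 journals" on the master, the positions from "p4 pull -lj" on the replica, and the exact output varies by release:

    # If the replica last applied journal J at byte offset S_r, and the master
    # is currently writing journal K at byte offset S_m (with K > J), then:
    #
    #   bytes behind = (size of journal J - S_r)        rest of the journal the replica was in
    #                + (sizes of journals J+1 .. K-1)    every fully rotated journal in between
    #                + S_m                               what has been written to the current one
    #
    # Sizes of rotated journals (run on the master):
    p4 journals -m 10
    # Replica vs. master journal number and byte offset (run on the replica):
    p4 pull -lj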

#6 Sambwise (Advanced Member, 684 posts)

Posted 12 September 2018 - 09:59 PM

I'm not sure I'm clear on which number you're talking about, but the "journal" counter should only auto-increment by 1 each time a new journal is taken.  If it's incrementing by millions, something else is incrementing it.

#7 Miles O'Neal (Advanced Member, 87 posts)

Posted 17 September 2018 - 03:42 PM

The journal sequence number, as reported in "p4 pull -lj" on replicas.

#8 p4rfong (Advanced Member, Staff Moderators, 294 posts)

Posted 18 September 2018 - 01:29 AM

The journal sequence number in "p4 pull -lj" is simply the number of bytes written since the last journal rotation; it grows until the journal is truncated. If you want to know why it grows as fast as it does, you can look at the journal file on the replica to see what it contains.
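Since the original question came out of monitoring work, a minimal way to watch that byte offset is the tagged form of the same command. The field names listed below are from memory and may differ between server releases, so treat them as assumptions and check the actual output of "p4 -ztag pull -lj" on your own replica:

    # Tagged output is easier for a monitoring script to parse:
    p4 -ztag pull -lj
    # Fields of interest (names assumed; verify on your server):
    #   replicaJournalCounter / replicaJournalNumber  - journal the replica is applying
    #   replicaJournalSequence                        - byte offset the replica has reached
    #   masterJournalNumber / masterJournalSequence   - where the master currently is
    # When both servers are on the same journal number, the replication lag in
    # bytes is simply masterJournalSequence - replicaJournalSequence.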

#9 Miles O'Neal (Advanced Member, 87 posts)

Posted 24 September 2018 - 02:59 PM

That was the conclusion I had finally arrived at. Thanks!

#10 Matt Janulewicz (Advanced Member, 140 posts, San Francisco, CA)

Posted 15 November 2018 - 06:28 PM

Slightly off topic, but I just wanted to mention a couple of things you could do to prevent the downtime for the backup.

We use the SDP and maintain a second, offline copy of the db on all our servers. Nightly journal truncation and replay into the offline DB; at that moment your offline db is as up to date as it can be, and it can be backed up without taking the live server down. You might get additional library files arriving during the backup that the db won't refer to yet, but at the very least you have a consistent point-in-time backup immediately after the journal replay.
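A rough illustration of that cycle, with placeholder paths in the usual SDP layout (the SDP scripts wrap these p4d calls, so this is only to show what happens underneath):

    # Rotate (truncate) the live journal; the rotated file gets the next number:
    p4d -r /p4/1/root -jj /p4/1/checkpoints/p4_1

    # Replay the rotated journal into the offline copy of the metadata:
    p4d -r /p4/1/offline_db -jr /p4/1/checkpoints/p4_1.jnl.NNN

    # At this point /p4/1/offline_db is a consistent, point-in-time copy of the
    # db and can be checkpointed or backed up without touching the live server.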

The offline db also allows you to take snapshots without downtime, even on live, production servers. Our snapshots take an average of 3 hours to generate, and even trying to coordinate downtime on all our backup instances would not be something I'd want to do. (I'm lazy.)

You could also consider using a filesystem on your backup server that can do snapshots. Our backup servers run on ZFS. I'm still not convinced that ZFS on Linux is performant enough to put into production (again) given the size of our data set, but it's awesome for backups. We replicate all our edges and the commit to one server, daily ZFS snapshots, incremental 'zfs send' offsite, boom. Done. Added bonus is that a restore is just an rsync from the live filesystem. No agents. No muss. No fuss.
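The zfs side of that can be as simple as the sketch below; the commands are standard ZFS, but the pool/dataset names, snapshot names, and the ssh target are made up for illustration:

    # Take today's snapshot of the dataset holding the Perforce data:
    zfs snapshot tank/p4@2018-11-15

    # Send only the delta since the previous snapshot to the offsite box:
    zfs send -i tank/p4@2018-11-14 tank/p4@2018-11-15 | \
        ssh offsite-host zfs receive -F backup/p4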
-Matt Janulewicz
Staff SCM Engineer, Perforce Administrator
Dolby Laboratories, Inc.
1275 Market St.
San Francisco, CA 94103, USA
majanu@dolby.com

#11 Miles O'Neal (Advanced Member, 87 posts)

Posted 19 November 2018 - 10:05 PM

Thanks, Matt. We want the server down so that we can back up *everything* in Helix from a known state: metadata and versioned files in particular, but also journals, logs, etc. This server is dedicated to backups.
We already run the SDP, including the offline/primary metadata swaps.
When we spun the current servers up, ZFS was not ready for prime time (we were moving from Solaris, where we used snapshots, to Linux). We tried doing LVM snapshots with XFS, but at least on the RHEL kernels we had available at the time, removing an older snapshot would often lock up, requiring a reboot, which took a long time because of the wedged volume manager. Since we have not yet gone to edge/commit (for several reasons), we only have to back up the one server.

The initial question was part of a broader monitoring effort, since links can go down, a remote server can go offline for many reasons, etc.




