Using git as an emergency backup?
Posted 08 December 2019 - 06:57 PM
I have about 20 student teams with final project presentations due on Wednesday, and I'm not sure if our IT department will be able to look at this in time or how long it'll take to resolve. I have P4 admin privileges but not filesystem-level access so I don't think there's much I can do on my own.
I'm suggesting that students switch to Git and use GitHub or Bitbucket so they can continue working.
If they do, would it work to use their existing P4 workspace directory as their local git repository too, so that when the Helix server is back online they can submit the changes? Or would it be safer to have them use a clean directory for their git repository?
Posted 09 December 2019 - 05:20 AM
As an admin you can reclaim disk space by obliterating files, but this will only be effective if you have large files that are unnecessary (e.g. if somebody's been submitting giant binary files to the Perforce server). If:
1) this disk is dedicated to the Perforce server
2) it filled up very suddenly
3) you don't have any limits in place to prevent users from submitting giant files
it's very possible that one student has (unwittingly?) denial-of-serviced the Perforce server by submitting enough data to it to fill up the disk. You could do a quick check of the student directories (I assume you've got permissions set up so that each student is at least confined to a particular part of the depot) to see if any of them are abnormally large. Here's an example of a query like that run against the //guest depot on public.perforce.com:1666:
% p4 -F %dirName%/... dirs "//guest/*" | p4 -x - sizes -s
...
//guest/yael_stern/... 53 files 574669 bytes
//guest/yariv_sheizaf/... 118 files 400574 bytes
//guest/ydatoor/... 15 files 523797 bytes
//guest/yonas_jongkind/... 155 files 3769823 bytes
//guest/zach_helke/... 1 files 3133 bytes
//guest/zachwhaley/... 42 files 54005 bytes
//guest/zardlove/... 1 files 16 bytes
//guest/zynthar/... 405 files 159849886 bytes
If one user seems to be taking up all the space, you can go into their directory and obliterate whatever seems superfluous.
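If you go that route, note that obliterate is a dry run by default; nothing is removed until you add -y. A minimal sketch, using a hypothetical depot path (substitute the real one):

```shell
# Preview what would be removed (obliterate without -y only reports).
# //depot/students/team07/assets/... is a hypothetical path.
p4 obliterate //depot/students/team07/assets/...

# When the preview looks right, add -y to actually purge the revisions
# and reclaim archive space on the server.
p4 obliterate -y //depot/students/team07/assets/...
```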
If the disk isn't dedicated to the Perforce server, or if it's been creeping up for a while and nobody noticed until it hit the failure point, those are both good things to raise to the IT department as fixable problems.
Using the workspace directly is fine, but they should make sure to exclude the .git directory from their workspace (or add it to their P4IGNORE file) since they probably don't want to add git's repo metadata to the Perforce depot.
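For example, assuming they use a .p4ignore file in the workspace root (the filename is whatever P4IGNORE points at, so that's just a convention):

```shell
# Keep git's metadata out of the Perforce depot.
echo '.git/' >> .p4ignore

# Point the client at the ignore file; on Windows use:  p4 set P4IGNORE=.p4ignore
export P4IGNORE=.p4ignore

# You can then confirm it's working with:  p4 ignores -i .git/config
```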
Note also that students should be able to simply use p4 clone to create personal Perforce servers on their own machines. That way, when the central server comes back online, they'll be able to push their local history to it directly (whereas history accumulated in git is probably not going to be easy to convert back into Perforce).
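Sketching that workflow (the server address and depot path below are placeholders; this assumes the central server can still service the initial read for the clone):

```shell
# Create a personal micro-server seeded from the team's depot path.
p4 clone -p helix.example.edu:1666 -f //depot/team07/... -d team07-local
cd team07-local

# Work and submit locally while the central server is down.
p4 submit -d "local work during outage"

# Later, push the accumulated local history back to the central server.
p4 push
```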
If the clone command still fails with the disk space error, a temporary workaround would be to bump down the filesys.P4LOG.min configurable (obviously you only have 250M worth of wiggle room, so this is not a long term solution; you might want to disable write access until the root cause is resolved).
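Concretely, that might look like the following (the 10M value is only an illustration; the default threshold is 250M):

```shell
# Inspect the current threshold.
p4 configure show filesys.P4LOG.min

# Temporarily lower it so the server keeps accepting commands closer to full.
p4 configure set filesys.P4LOG.min=10M
```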
Posted 10 December 2019 - 10:44 PM
> p4 logstat
> p4 diskspace
If the log is on its own partition/volume with nothing else on it, then without direct filesystem access you're sorta ... out of luck.
If the log volume is shared with depot data, the above commands will at least tell you whether it's _just_ the logs or whether some other depot storage has grown larger than expected.
Aside from the aforementioned obliterate, which is very destructive and final, there's not much you can do to free up space without admin intervention on the server machine itself.
One other thing I just thought of is that shelves can unexpectedly take up a lot of space. If your main storage and logs are on the same volume, and if you have a lot of unneeded shelves, deleting said shelves should free up some space. You'd want to run this to see your total shelf space:
> p4 sizes -sSh //...
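If that shows significant shelf storage, you could then list the shelved changelists and delete the unneeded ones. Change 1234 below is a placeholder, and as admin the -f flag lets you delete other users' shelves:

```shell
# List every shelved changelist on the server.
p4 changes -s shelved

# Delete the shelved files from a given change (the pending change itself remains).
p4 shelve -f -d -c 1234
```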