

Member Since 12 Sep 2016

Posts I've Made

In Topic: Protects table usage statistics

16 August 2019 - 05:43 PM

Miles O, on 16 August 2019 - 05:13 PM, said:

We do have some, some of which seem more inevitable than Thanos's confidence. I actually have a ticket open regarding particular cases; at some number of lines, it has to be more efficient to check (for example) //foo/bar_*/... than X lines of /foo/bar_<number>_final. But what is X? 3? 10? 100? (Not 3, that much I'm sure of.)

The ceiling on that number is pretty high (thousands if not millions).  The catch is that the protection table isn't interpreted in a vacuum; it gets joined with every other mapping involved in running a given command (which usually includes the client view).

So in some situations your multiple wildcards behave fine, and then one person puts a couple of multi-wildcard lines in their own client (or a branch view, or a command argument, or all of the above), and suddenly the computed mapping is five million lines long because things went from O(n) to O(k^n) or whatever (I'm fuzzy on the math, but it's bad).  Every time someone hits a map.joinmax error (or crashes their server because they overrode map.joinmax and overflowed memory), it's because they had multiple wildcards in their protection table and it was fine until it suddenly wasn't.
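To make the blowup concrete, here's a toy back-of-the-envelope calculation. The numbers and the formula are invented for illustration (the real join math is more subtle than this), but it shows how quickly the line counts multiply:

```shell
# Invented illustration, NOT p4's actual algorithm: joining an n-line
# protections table against an m-line client view considers on the order
# of n*m line pairs, and each extra wildcard in play can multiply the
# candidate joined lines again.
n=100; m=100; w=4              # hypothetical: 100x100 lines, 4 wildcards
echo $(( n * m * (1 << w) ))   # order-of-magnitude count: 160000
```

Even modest-looking tables can reach map.joinmax territory this way, which is why eliminating multi-wildcard lines pays off so disproportionately.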


Where is -vmap documented?


Here's what the different levels correspond to: https://swarm.worksh.../map/mapdebug.h

You can look at the debugging statements in the rest of the map/ folder that are gated on those flags to get an idea of what they're dumping out.  This line seems like the most likely one to be useful for answering the question of "how often does this mapping line get referenced?":  https://swarm.worksh.../maphalf.cc#419

I'm not sure what you'd actually do with that data or if it would be in any way useful, but that's about the only window there is into what the mapping code is thinking.
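If you do capture that trace output to a log, counting references per mapping line is just text processing. The log lines below are made up (the real format is whatever those debug statements actually print), but the counting idea is the same:

```shell
# Hypothetical trace lines (invented format); in practice you'd grep
# these out of the server log after enabling the map debug flags.
printf '%s\n' \
  'maphalf match //depot/proj_a/...' \
  'maphalf match //depot/proj_b/...' \
  'maphalf match //depot/proj_a/...' \
  | sort | uniq -c | sort -rn
```

The most-referenced mapping line floats to the top (here proj_a, with a count of 2).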

In Topic: Re-connect existing folder with changes

16 August 2019 - 05:24 PM

Oh dear.  Command line time!

To start, revert the files that you opened with that reconcile, because they'll gum everything up.  Use the "-k" flag to make sure you don't delete any of your local changes:

p4 revert -k //...

Come up with a best guess of when you synced the workspace from the depot.  It's better to err on the side of too early.  Then run:

p4 flush @DATE

e.g. "p4 flush @2019/08/03".  This will tell the server "pretend I synced to the head revision as of this date".

Now go ahead and run:

p4 reconcile

It will use the "flushed" revisions as the base.  If you guessed too late a date, the files in your workspace will implicitly back out changes that you didn't really sync from the depot, so be careful!  Run:

p4 diff

to inspect your changes and make sure they look like the work you did (or the work you did plus changes from the depot that you did actually sync).  Keep a particular eye out for deletions that don't look like yours, because that probably means you flushed to too late a date and are now backing out someone else's work.

Everything you've done up to this point can be trivially undone by going back to the start (i.e. you can "revert -k" and flush to a different date at this point).  Be fairly confident that you got the right date (or an earlier one) before proceeding to the "resolve" step, since that will modify your workspace files, and after that it might be difficult to rewind.  Note that if you went TOO early you'll have more changes to resolve and therefore a higher chance of conflicts, but at least nothing will get silently dropped.

If everything looks good you can now run:

p4 sync
p4 resolve
p4 submit
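For reference, here's the whole sequence wrapped in a dry-run guard. This is my own sketch, not official tooling; SYNC_DATE and the run() wrapper are inventions, and you should leave DRY_RUN set until you trust the date:

```shell
#!/bin/sh
# Hedged sketch of the recovery sequence above.  With DRY_RUN non-empty
# it only prints the commands; clear it to actually run them.
SYNC_DATE="2019/08/03"   # your best guess at the last sync; earlier is safer
DRY_RUN=1

run() { if [ -n "$DRY_RUN" ]; then echo "would run: $*"; else "$@"; fi; }

run p4 revert -k //...       # forget the opens but keep local edits
run p4 flush @"$SYNC_DATE"   # "pretend I synced as of this date"
run p4 reconcile             # re-detect local work against that base
run p4 diff                  # inspect before going any further
# ...and only once the diff looks right:
run p4 sync
run p4 resolve
run p4 submit
```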

In Topic: Re-connect existing folder with changes

16 August 2019 - 04:14 PM

RavenAB, on 16 August 2019 - 03:11 PM, said:

To get back in sync with the Perforce project, I created a new workspace

You made this too complicated.  :)  Switch back to your original workspace and "reconcile".  Done.

Reconcile uses the record of what you had synced to your workspace before you went offline to figure out what changes you've made since then (i.e. it's diffing what you have now against what you synced).  It's designed to handle this exact situation in a one-shot command.  If you create a new workspace, that information is lost, and everything gets harder.  All your files showed up as "new" because you never synced anything to the new workspace, so relative to that empty state everything is new (and will conflict with the existing files).

As long as you haven't done something really destructive like delete your original workspace, you can just switch back to it, do the reconcile (which will now work correctly), and pick up from there as if you were connected all along.  If any resolves are needed they'll use your last-synced files as the common base so merging will be as easy as possible.

In Topic: Protects table usage statistics

16 August 2019 - 02:24 PM

I don't think there's any good tooling for this -- you might be able to do *something* with the -vmap=N debugging flags, but I don't know how useful it would be for what you're trying to accomplish.

Does your protection table have any lines with multiple wildcards?  If so, I can shortcut a lot of investigation for you and tell you that's the thing to optimize out.  :)  Here's a script I wrote in large part to make it easier to eliminate expressions like "//depot/*/foo/..." which tend to be the biggest performance killers:  https://swarm.worksh...main/protexp.pl
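As a rough first pass before reaching for that script, you can flag the offending lines with a few lines of shell. The sample table here is made up; in real use you'd feed it the output of "p4 protect -o" instead:

```shell
# Hypothetical protections table for illustration
# (fields: mode, type, name, host, path):
protections='write user * * //depot/...
read user alice * //depot/*/foo/...
write group dev * //depot/*/bar_*/...'

# Print lines whose *path* field contains two or more wildcards,
# counting each "*" and each "..." as one wildcard:
printf '%s\n' "$protections" \
  | awk '{ p = $NF
           n = gsub(/\*/, "*", p) + gsub(/\.\.\./, "", p)
           if (n >= 2) print }'
```

Only the alice and dev lines get flagged; the plain //depot/... line has a single wildcard and joins cheaply.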

In Topic: File status icons for p4exp?

15 August 2019 - 06:11 PM

p4bill, on 15 August 2019 - 05:36 PM, said:

the commands run to get that information can be expensive to run

Thinking this over -- back when P4EXP was originally written there wasn't any version of p4 diff that didn't involve reading the full file from disk, which is somewhat prohibitive, but leveraging the synced modtime (similar to reconcile -m) and/or the read-only bit would make it a little more reasonable to do in-place.  I'd think that just visually flagging files that the user has made writable would do about 95% of the job.