mob, on 21 February 2019 - 11:48 AM, said:
What is the drawback of always using populate -f? I can see that more and more deleted files would be kept around with every cleanup and re-sorting, but is that actually an issue? We hide those in P4V.
If you always hide them in P4V it's probably not much of an issue. In theory there'll be some performance degradation from all the extra db records (which will keep duplicating on every branch), but that's a function of how often you delete/refactor files; if that's relatively rare (so deleted files aren't significant relative to active files), or if your database is relatively small to begin with, you likely won't notice the difference.
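To put a rough number on that for your own depot (paths here are just placeholders), you could compare deleted head revisions against active files in a branch, something like:

    p4 files //depot/rel1/... | grep -Ec ' - (move/)?delete '    # files deleted at head
    p4 files -e //depot/rel1/... | wc -l                         # files not deleted at head

If the first number stays small relative to the second, the extra records from always branching deletes are unlikely to be something you'd ever notice.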
For integrations, the resolve could do that. It has to find the common base anyway, and if one branch has a deletion on its path back to that base and the other doesn't, then the deletion should be the result of the resolve. That's my view of it, anyway. Wrong?
"Common base" between what?
Essentially the issue is that the target file doesn't exist, so there is no common base calculation; that calculation generally only kicks in when you're integrating into a file that already exists. In theory, every time you create a branch there could be an exhaustive search of all branches of the source file to see whether there was ever a deletion in a sibling branch, and then some heuristic could look at other files in the branch to "guess" whether the file should be propagated. But since each file has its own history it would indeed have to be a guess, and it would also increase the performance cost of branching by orders of magnitude.
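To make that concrete with a hedged sketch (the paths and the file are hypothetical): say foo.c was deleted in //depot/main before //depot/rel1 was cut with a plain populate, and then re-added in main later.

    p4 populate -d "cut rel1" //depot/main/... //depot/rel1/...
    # foo.c is skipped because it's deleted at head, so //depot/rel1/foo.c never exists

    # ...later, after foo.c has been re-added in main:
    p4 integrate -n //depot/main/foo.c //depot/rel1/foo.c
    # the preview shows a plain branch action -- with no target revisions there is
    # nothing for a common base calculation to work from and no resolve is scheduled;
    # the file would simply be (re)created in rel1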
There is actually one special case where intermediate files are examined during a branch into a nonexistent path, which is when there's a renamed source. Check out page 14 here: https://swarm.worksh...ase Picking.pdf
In this example B2 is a candidate for integration into A2, which would normally be a "branch" action since A2 does not exist (and in this rename case we'd end up "recreating" a file that was moved). To avoid that, we look within the B* namespace (it's important that we only look within that namespace, since otherwise we're back in that "orders of magnitude" performance problem) specifically for "moved" records, and if we find one we do the common base analysis on the move ancestor to make everything line up.
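Roughly speaking (hypothetical paths again), the records it goes looking for are the ones you'd see in the filelog of the moved pair on the B side:

    p4 filelog //depot/B/new_name.c    # head revision carries a move/add action
    p4 filelog //depot/B/old_name.c    # head revision carries a move/delete action
    # those move records tie B2 back to an ancestor it shares with the A* namespace,
    # which is what lets the server run a real common base analysis instead of
    # blindly (re)creating A2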
For the delete case you describe, though, those signposts don't exist. Using "populate -f" works around the problem by adding one in the form of the "branched delete", which then serves as a starting point for the common base calculation to figure out whether it's actually appropriate to (re)branch the source into the target.
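In command terms (placeholder paths once more), the workaround looks something like:

    p4 populate -f -d "cut rel1" //depot/main/... //depot/rel1/...
    # -f forces deleted files to be branched as well, so //depot/rel1/foo.c is
    # created as a deleted revision with an integration record back to main

    # on a later integration between the branches, that branched delete gives the
    # base-picking logic an ancestor to anchor on, so it can decide whether
    # (re)branching foo.c into rel1 is actually appropriate
    p4 integrate //depot/main/... //depot/rel1/...
    p4 resolve -n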