On śro, 2017-06-14 at 11:06 +0200, Alexis Ballier wrote:
> On Wed, 14 Jun 2017 00:13:42 +0200
> Michał Górny wrote:
> 
> > On wto, 2017-06-13 at 12:27 +0200, Alexis Ballier wrote:
> > > On Mon, 12 Jun 2017 21:17:16 +0200
> > > Michał Górny wrote:
> > > 
> > > > I've actually started typing the initial specification yesterday
> > > > [1]. As you can see, banning the extra constraints has made the
> > > > algorithms much simpler. In particular:
> > > > 
> > > > 1. You do not have to define 'falsify' for anything other than
> > > > pure flags -- which makes it easy to inline it.
> > > > 
> > > > 2. ||, ??, ^^ groups are only flat lists of flags -- which makes
> > > > reordering and processing them trivial.
> > > > 
> > > > 3. The algorithm is recursive only on USE-conditional groups. This
> > > > makes it trivial to make it iterative. Optimizations become
> > > > trivially possible.
> > > 
> > > While you're right in one sense, you're mixing two different things
> > > here. What you wrote *is* recursive. It does not recurse just
> > > because you're assuming a restricted syntax. You're only saving two
> > > things: you don't need to define how to enforce to false (that is 3
> > > lines not 3 pages :=) ) and you're avoiding the nested use
> > > conditionals that are already ill defined per the current spec
> > > (foo? bar is equivalent to && ( foo bar ) when nested) which I
> > > believe is already another problem.
> > > 
> > > Then, remember how I wanted to be much more drastic than you in the
> > > beginning by killing all ||,&&,^^ etc. and keep only use
> > > conditionals in REQUIRED_USE ? Well, that's where the complexity
> > > comes. The whole deal then is to define rewriting rules for the AST
> > > so that the algorithm you describe executes the exact same
> > > instructions but the new AST only has use conditionals. This is
> > > more like writing a compiler for the spec, so this does not belong
> > > to the spec and there is no issue here.
> > 
> > I'm looking for a compromise here. Killing those groups completely is
> > going to make things harder for users. Keeping them with functionality
> > limited to what's used in ~99.9% ebuilds (based on your numbers) is
> > IMO a better choice.
> 
> I already said I see the limited syntax as a good thing because it
> forces devs to write constraints that have a natural interpretation in
> how it is solved. However, you can't limit the syntax without a new
> EAPI, and more importantly, properly solving does not even require
> limiting the syntax.

Actually, you can, via a Gentoo policy. Since solving is not required
by the PMS, there is no rule saying it has to work for every constraint
allowed by the PMS. Much like you can't force a particular ordering or
forbid circular constraints without a new EAPI -- yet you do it because
it gives a practical improvement.

> BTW, I don't know how you get that info from my data because I never
> voluntarily checked for a restricted syntax :)

I took the totals from your data, and subtracted the counts for invalid
constraints from mine ;-).
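To make the three numbered points quoted from [1] above concrete, here
is a rough Python sketch of what one enforcement pass over that
restricted syntax could look like. The names and the AST encoding are
made up for illustration only; this is not the actual solve.py or
nsolve code.

# Illustrative only -- not the actual solve.py/nsolve code.
# 'flags' is a mutable dict mapping a USE flag name to a bool.
# Constraints are encoded as:
#   'foo' / '!foo'           -- pure flag
#   ('||', [flags...])       -- likewise '??' and '^^', flat flag lists
#   ('use?', [constraints])  -- USE-conditional group

def enforce(constraints, flags):
    """Single left-to-right enforcement pass over the restricted syntax."""
    for item in constraints:
        if isinstance(item, str):
            # Pure flag: trueify 'foo', falsify '!foo'.
            flags[item.lstrip('!')] = not item.startswith('!')
        elif item[0] == '||':
            # At least one flag must be on; if none is, pick the first.
            # (The spec draft would reorder the list by user preference
            # first; that step is skipped here.)
            if not any(flags.get(f) for f in item[1]):
                flags[item[1][0]] = True
        elif item[0] == '??':
            # At most one flag may be on; keep the first enabled one.
            enabled = [f for f in item[1] if flags.get(f)]
            for f in enabled[1:]:
                flags[f] = False
        elif item[0] == '^^':
            # Exactly one: '??' followed by '||'.
            enforce([('??', item[1]), ('||', item[1])], flags)
        else:
            # USE-conditional group -- the only place we recurse.
            cond, nested = item
            if flags.get(cond.rstrip('?')):
                enforce(nested, flags)

E.g. with flags = {'b': True}, the constraint '^^ ( a b )' leaves b
enabled and does not touch a; with no flags set at all it would enable
a, the leftmost flag.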
> > > [BTW: checking the rewrite rules behave properly is what I meant by
> > > rebasing solve() on top of it and being happy with it]
> > 
> > Could you reiterate the current solving rules (trueify/falsify)? Are
> > they equal to the ones you listed last, or does the current
> > implementation change anything?
> 
> Let's recap a bit. nsolve() is poorly named and does not solve
> anything. It translates to implications and checks whether the
> implications solver will always provide a valid result in one pass.
> So, if you only care about solving rules, read your spec man. For the
> more general case it should behave like those trueify/falsify with
> the change that nested implications are interpreted as && (so no
> more !(a -> b) crap to worry about).

How are && ( a b... ) falsified now? Leftmost only?

> If you take solve() as an implementation of your spec, you have:
> solve(x) <=> solve(to_impl.convert_to_implications(x)) when solve(x)
> is defined; with the added benefit that
> 'solve(to_impl.convert_to_implications(x))' is defined and should
> provide proper results on the whole REQUIRED_USE syntax as currently
> defined (granted that nsolve(x) does not report anything wrong).

The point is, solve() is supposed to work without any additional
transformations. So the rules need to be consistent. As a matter of
fact, I want to add a little extra test to solve.py that verifies that
the result is the same with and without the transformation.

> > > > [1]: https://wiki.gentoo.org/wiki/User:MGorny/GLEP:ReqUse
> > > 
> > > I really don't like the reordering thing. Even the restricted
> > > syntax does not fix the issue with '^^ ( a b ) b? ( a )' already
> > > mentioned here. It'd be much better and simpler for the spec just to
> > > assign a fixed value and use the solving rules with those.
> > 
> > You're not going to convince me by providing examples that are utterly
> > broken by design and meaningless ;-).
> 
> Well... if it's so obvious that the example is broken by design that
> you don't even bother to explain why, I assume you have an algorithm for
> that. Where is the code ? What are the numbers ? How many ebuilds might
> fail after reordering ? How can this be improved ?

Are you arguing for the sake of arguing here? I just presumed that this
example is so obviously broken that there is no point wasting any more
time on it. The code of nsolve clearly detects that, so I don't really
understand what you're trying to prove here.

> Extra question: Is there *really* a point in pushing user preferences
> that way, esp. when developers can write '!b? ( a )' instead of '|| ( a
> b )' and just kill any possibility of changing the order ?

Yes, that is by design. If you use ||, ^^, ?? you do not explicitly
enforce a particular order -- you give the user a choice. I know it
ain't perfect, but it's a simple enough solution to a few problems.

> As for a real world example, I'll let you find some more interesting
> ones, but this one will probably be interesting to you and is a
> good start:
> 
> app-text/wklej-0.2.1-r1
>   ^^ ( python_single_target_pypy python_single_target_pypy3
>        python_single_target_python2_7 python_single_target_python3_4
>        python_single_target_python3_5 python_single_target_python3_6 )
>   python_single_target_pypy? ( python_targets_pypy )
>   python_single_target_pypy3? ( python_targets_pypy3 )
>   python_single_target_python2_7? ( python_targets_python2_7 )
>   python_single_target_python3_4? ( python_targets_python3_4 )
>   python_single_target_python3_5? ( python_targets_python3_5 )
>   python_single_target_python3_6? ( python_targets_python3_6 )
>   vim? ( ^^ ( python_single_target_python2_7 ) )
> 
> Hint: It loops as written here. Reordering the ^^ in a proper way makes
> it solvable. Putting the 'vim? ( ... )' part first makes it solvable in
> one pass.

And that's the exact case where I was considering the problem of
ebuild-eclass ordering. And yes, putting the 'vim?' first is an
acceptable solution here. What's the problem here? Yes, without
reordering you get more 'reliable' results. However, AFAICS nsolve
detects a problem in all of the cases: if py2.7 is sorted first, it
detects an ordering issue; in all other cases, it detects a circular
dep. Am I missing something?
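For the sake of having something concrete to poke at, here is a toy
version of that case (shortened to three implementations), built on the
illustrative enforce() sketch from earlier in this mail -- again, this
is not nsolve, just an illustration with made-up names:

# Reuses enforce() from the sketch above; flag names are shortened
# stand-ins for the PYTHON_SINGLE_TARGET/PYTHON_TARGETS flags.

def satisfied(constraints, flags):
    """Check whether a restricted REQUIRED_USE constraint holds."""
    for item in constraints:
        if isinstance(item, str):
            if flags.get(item.lstrip('!'), False) == item.startswith('!'):
                return False
        elif item[0] in ('||', '??', '^^'):
            n = sum(bool(flags.get(f)) for f in item[1])
            if (item[0] == '||' and n < 1) or \
               (item[0] == '??' and n > 1) or \
               (item[0] == '^^' and n != 1):
                return False
        else:
            cond, nested = item
            if flags.get(cond.rstrip('?')) and not satisfied(nested, flags):
                return False
    return True

single = ['pst_pypy', 'pst_py27', 'pst_py34']
as_written = [
    ('^^', single),
    ('pst_pypy?', ['pt_pypy']),
    ('pst_py27?', ['pt_py27']),
    ('pst_py34?', ['pt_py34']),
    ('vim?', [('^^', ['pst_py27'])]),
]
vim_first = [as_written[-1]] + as_written[:-1]

flags = {'vim': True}                # everything else off
enforce(as_written, flags)           # one pass in the ebuild's order
print(satisfied(as_written, flags))  # False: the ^^ picked pst_pypy,
                                     # then vim? forced pst_py27 on too

flags = {'vim': True}
enforce(vim_first, flags)            # one pass with the vim? group first
print(satisfied(vim_first, flags))   # True: pst_py27 is chosen before
                                     # the ^^ gets enforced

Re-running the pass on the as-written order doesn't help either: each
pass turns pst_py27 off in the ^^ group and back on in the vim? group,
so it never settles on a valid assignment.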
-- 
Best regards,
Michał Górny