On Tue, 9 Aug 2016 22:05:45 +1200
Kent Fredric wrote:

> On Tue, 9 Aug 2016 01:59:35 -0400
> Rich Freeman wrote:
> 
> > While I think your proposal is a great one, I think this is actually
> > the biggest limitation.  A lot of our packages (most?) don't
> > actually have tests that can be run on every build (most don't have
> > tests, some have tests that take forever to run or can't be used on
> > a clean install).
> 
> IMHO, that's not "ideal", but we don't need idealism to be useful
> here.
> 
> Tests passing give one useful kind of quality signal.
> 
> But "hey, it compiles" gives useful data in itself.
> 
> By easy counter-example, "it doesn't compile" is in itself useful
> information (and the predominant supply of bugs filed are compilation
> failures).
> 
> Hell, sometimes I hit a compile failure and I just go "eeh, I'll look
> into it next week".  How many people are doing the same?
> 
> The beauty of the automated datapoint is that it doesn't have to be
> "awesome quality" to be useful; it's just guidance for further
> investigation.
> 
> > While runtime testing doesn't HAVE to be extensive, we do want
> > somebody to at least take a glance at it.
> 
> Indeed, I'm not hugely in favour of abolishing manual stabilization
> entirely, but sometimes it just gets to a point where it's a bit
> beyond a joke, with requests languishing untouched for months.
> 
> If there were even data saying "hey, look, it's obvious this isn't
> ready for stabilization", we could *remove* or otherwise mark for
> postponement stabilization requests that were failing according to
> crowd-sourced metrics.
> 
> This means it can also be used to focus existing stabilization efforts
> and reduce the number of things being thrown in the face of manual
> stabilizers.
> 
> > If everything you're proposing is just on top of what we're already
> > doing, then we have the issue that people aren't keeping up with the
> > current workload, and even if that report is ultra-nice it is
> > actually one more step than we have today.  The workload would only
> > go down if a machine could look at the report and stabilize things
> > without input at least some of the time.
> 
> Indeed, it would require the crowd service to be automated, and the
> relevant usage of the data to be as automated as possible; humans
> would only go looking at the data when interested.
> 
> For instance, when somebody manually files a stable request, some
> watcher could run off and scour the reports in a given window and
> comment "Warning: above-threshold failure rates for this target in
> the last n days, proceed with caution", and it would only enhance
> the existing stabilization workflow.

This whole thing you are proposing has been attempted as a stats
project several times in Gentoo's GSOC.  The last time, it produced a
decent, functional system that __NEVER__ got deployed and turned on.
It ran for several years on the Gentoo GSOC student server (vulture),
but it never gained traction with the infra team due to a lack of
infra resources and personnel to maintain it.

Perhaps with the new hardware recently purchased to replace the server
that failed earlier this year, we should now have the hardware
resources.  If you can dedicate some time to work on the code, which
I'm sure will need some updating by now, I would help as well (not
that I can already keep up with all the project coding I'm involved
in).  This is, of course, assuming we can get a green light from our
infra team to run a stats VM on the new ganeti system.
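To make the watcher idea above a little more concrete, here is a very
rough sketch of the kind of check it could run when a stable request is
filed.  Everything in it (the report structure, field names, thresholds)
is a made-up illustration, not an existing API or part of any current
Gentoo tool:

#!/usr/bin/env python
# Hypothetical sketch only: flag a stabilization request when the
# crowd-sourced build reports for the target show an above-threshold
# failure rate within the last N days.  The report format and the
# numbers below are assumptions for illustration.

import datetime

FAILURE_THRESHOLD = 0.10   # warn if more than 10% of reports failed
WINDOW_DAYS = 30

def check_stable_request(cpv, reports):
    """reports: iterable of dicts like
       {'cpv': 'app-foo/bar-1.2', 'ok': True, 'date': datetime.date(...)}"""
    cutoff = datetime.date.today() - datetime.timedelta(days=WINDOW_DAYS)
    recent = [r for r in reports if r['cpv'] == cpv and r['date'] >= cutoff]
    if not recent:
        return "No crowd-sourced reports for %s in the last %d days." % (
            cpv, WINDOW_DAYS)
    rate = sum(1 for r in recent if not r['ok']) / float(len(recent))
    if rate > FAILURE_THRESHOLD:
        return ("Warning: %.0f%% of %d reports for %s failed in the last "
                "%d days, proceed with caution."
                % (rate * 100, len(recent), cpv, WINDOW_DAYS))
    return ("%d reports, %.0f%% failure rate for %s: nothing obviously "
            "blocking stabilization." % (len(recent), rate * 100, cpv))

A bot could post that string as a comment on the stable request bug; the
real data source would be whatever the stats service ends up exposing.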
We will also need some help from security people to ensure the system
is secure, nginx/lighttpd configuration, etc.

So, are you up for it?

Any Gentoo dev willing to help admin such a system, please reply with
your area of expertise and ability to help.  Maybe we can finally get a
working and deployed stats system.

-- 
Brian Dolbec