Date: Sat, 9 Aug 2014 09:33:30 +1200
Subject: Re: [gentoo-dev] minimalistic emerge
From: Kent Fredric
To: gentoo-dev@lists.gentoo.org

On 9 August 2014 08:52, Igor wrote:

> Hello Kent,
>
> Friday, August 8, 2014, 9:29:54 PM, you wrote:
>
> But it's possible to fix many problems even now!
>
> What would you tell if something VERY simple were implemented, like
> reporting every emerge that failed due to a slot conflict back home,
> with details for inspection?
>
> --
> Best regards,
> Igor
> mailto:lanthruster@gmail.com

Yes. As I said, INSTALLATION metrics reporting is easy enough to do.

I use those sorts of tools EXTENSIVELY with the CPAN platform, and I have
valuable reports on what failed, what the interacting components were, and
what systems the failures and passes occur on.

So I greatly appreciate this utility.

Automated bug reports, however, prove to be a waste of time: a lot of
failures are in fact entirely spurious, the result of user error.

So a metrics system that simply aggregates automated reports from end
users, and is observed as a side channel to bugs, proves more beneficial
in practice.

Usually all maintainers need here is a daily or even weekly digest mail
summarising all the packages they're involved with, with their failure
summaries and links to the failures. (For example, here is one of the
report digests I received: http://i.imgur.com/WISqv15.png , and one of the
reports it links to: http://www.cpantesters.org/cpan/report/ed7a4d9f-6bf3-1014-93f0-e557a945bbef )
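(To make the shape of such a report concrete, here is a minimal sketch of
what an opt-in, fire-and-forget install-failure reporter might look like.
The endpoint URL, the report_install_failure helper, and the payload
fields are all hypothetical illustrations; nothing like this exists in
portage today.)

#!/usr/bin/env python3
# Hypothetical sketch only: the endpoint and the hook are invented for
# illustration. It shows the kind of small, anonymous, aggregate-friendly
# payload an opt-in install-metrics reporter could send after a failure.
import json
import platform
import urllib.request

# Assumed collection endpoint -- purely illustrative, not a real service.
REPORT_ENDPOINT = "https://stats.example.org/gentoo/install-reports"

def report_install_failure(package, phase, log_excerpt):
    """Send a small, anonymous failure report for later aggregation."""
    payload = {
        "package": package,                  # e.g. "www-client/firefox-31.0"
        "phase": phase,                      # e.g. "dependency-resolution"
        "log_excerpt": log_excerpt[-4000:],  # tail of the resolver/build log
        "arch": platform.machine(),
        "kernel": platform.release(),
    }
    req = urllib.request.Request(
        REPORT_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Fire-and-forget: a lost report is harmless, it is only a metric.
    try:
        urllib.request.urlopen(req, timeout=10)
    except OSError:
        pass

if __name__ == "__main__":
    report_install_failure(
        "dev-lang/perl-5.20.0",
        "dependency-resolution",
        "slot conflict: dev-lang/perl:0/5.18 required by ...",
    )

The point is only that the payload is tiny and anonymous: a package name,
a phase, a log tail, and just enough platform detail to aggregate on.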
And for such reports, you don't need to apply rate limiting, because
multiple reports from a single individual prove to be entirely
inconsequential: you're not forced to deal with them like normal bugs,
they are simply out-of-band feedback you can read when you have the time.

And you can then make sense of the content of that report using your
inside expertise, and potentially file a relevant bug report based on the
extracted information, or use the context of that report to request more
context from its submitter.

But the point remains that this technology is _ONLY_ effective for
install-time metrics, and is utterly useless for tracking any kind of
failure that emanates from the *USE* of software.

If my firefox installation segfaults, nothing is there watching for that
to file a report.

If firefox does something odd like rendering characters incorrectly due to
some bug in GPU drivers (an actual issue I had once), nothing will be
capable of detecting and reporting that.

Those things are still "bugs", are still "bugs in packages", and are still
"bugs in packages that can be resolved by changing dependencies", but they
are completely impossible to test for in advance as part of the
installation toolchain.

But I'm still very much on board with "have the statistics system". I use
it extensively, as I've said, and it is very much one of the best tools I
have for solving problems. (The very distribution of the problems can
itself be used to isolate bugs.)

For instance: http://matrix.cpantesters.org/?dist=Color-Swatch-ASE-Reader%200.001000

Those red lights told me that I had a bug on platforms where perl's
floating point precision is reduced.

In fact, *automated* factor analysis pinpointed the probable cause faster
than I ever could: http://analysis.cpantesters.org/solved?distv=Color-Swatch-ASE-Reader-0.001000

The main blockers are just:

- Somebody has to implement this technology
- That requires time and effort
- People have to be convinced of its value
- Integration must happen at some level, somehow, somewhere in the portage toolchain(s)
- People must opt in to this technology in order for the reports to happen
- And only then can this start to deliver meaningful results.

--
Kent

*KENTNL* - https://metacpan.org/author/KENTNL