Subject: [gentoo-soc] Week 5 Report for Big Data Infrastructure and H2O ebuilds Project
From: Yuan Liao (Leo)
Date: 2021-07-12  3:56 UTC
To: gentoo-soc

Hi folks,

This week, I have moved on to the next part of my project: an ebuild
testing framework.  For sophisticated ebuilds with many dependencies
and many USE flags, a framework that facilitates automated ebuild
installation tests can be very useful.  Such a testing framework can
also integrate with a CI service so the tests run periodically and
automatically, alerting developers to ebuild breakages early.

Maintainers of ebuilds for Java packages -- especially those ebuilds
in the Spark overlay -- might want to test installing those ebuilds
periodically and with different USE flag configurations.  To start
with, Java packages usually have multiple dependencies, and every one
of those dependencies must install successfully before the package
itself can.  Some packages in the Spark overlay depend on ebuilds in
::gentoo, but those ::gentoo ebuilds may be updated or removed,
preventing the ebuilds in the overlay from being merged properly.  In
fact, some ebuilds in the Spark overlay are already uninstallable
because some Java packages have been removed from ::gentoo.  If a set
of ebuild installation tests runs every day or every week, problems
like this can be discovered as soon as they emerge.

Besides dependencies, USE flags are another kind of parameter that
can determine whether an ebuild installs, because a USE flag can
change the dependency list and/or how the ebuild is compiled and
installed.  Many ebuilds in the Spark overlay have a 'binary' USE
flag (just like any ebuild created by java-ebuilder) which, when
enabled, installs a pre-built binary JAR from Maven Central.  Some of
these ebuilds can be installed with USE="binary" but not with
USE="-binary" due to errors in Java source compilation.  If the goal
is to support both binary and source installation, the same ebuild
should be tested under both USE flag configurations.
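
For illustration, the two configurations could be expressed as
/etc/portage/package.use entries along these lines, one per test run
(the atom below is a placeholder, not necessarily a real package in
the overlay):

    # Test run 1: install the pre-built JAR from Maven Central
    dev-java/example-library binary

    # Test run 2: build and install from Java sources instead
    dev-java/example-library -binary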

My original plan for ebuild testing was to use an existing solution
called ebuildtester [1], a program that can install a specified
package in a stage3 Docker container.  After doing some research on
it, I found that it cannot satisfy some of my testing requirements,
at least for the Kotlin ebuilds I created in the previous weeks.  I
would like to customize USE flags per package, just as I could in
/etc/portage/package.use, but ebuildtester only supports a global USE
flag setting plus USE configuration for the package being tested
itself, so the granularity is too coarse.  My Kotlin ebuilds also
need to be installed with multiple emerge commands, which is
admittedly uncommon for ebuilds, but with ebuildtester it does not
seem possible to run more than one emerge command.  However, I still
liked the idea of testing ebuild installation in a Docker container,
so based on ebuildtester's concept, I created my own tool that tests
ebuilds in a container: ebuild-commander [2].  This tool does not
take an atom to be installed as input; instead, it takes a list of
commands to execute in the container (hence the name
"ebuild-commander").  It also supports copying a specified
directory's contents to the container's /etc/portage, allowing full
customization of Portage settings in ebuild tests.  In effect,
ebuild-commander is a rewrite of ebuildtester that strives to allow
full control over ebuild test execution through a simpler interface.
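
As a rough sketch, a test case is essentially a list of commands like
the following, which ebuild-commander runs inside the container (the
atoms are placeholders, and the exact way the command list is passed
to the tool is not shown here):

    # Hypothetical test case: build up a Kotlin toolchain in several
    # emerge steps, then install a package that uses it.  USE flags
    # come from the /etc/portage directory copied into the container.
    emerge --verbose dev-java/example-kotlin-compiler
    emerge --verbose dev-java/example-kotlin-stdlib
    emerge --verbose dev-java/example-library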

Automated and periodic ebuild testing with ebuild-commander has now
been set up for my fork of the Spark overlay [3].  I have only
created two test cases for my Kotlin ebuilds so far, but more tests
can be added quickly and easily later.  Next, I am planning to
implement a mechanism that automatically computes a minimal set of
packages to pass to emerge so that every package in the Spark overlay
is installed at least once.  Because a test case can take a very long
time to run, the tests are configured to run only once a day instead
of on every new commit.
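
For reference, a daily schedule like this can be expressed with a
GitHub Actions trigger along the lines of the sketch below (the
actual workflow in [3] may be configured differently):

    # Minimal sketch of a once-a-day schedule in a GitHub Actions
    # workflow file; times are in UTC.
    on:
      schedule:
        - cron: '0 0 * * *'   # run once a day at 00:00 UTC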

That is all for this week.  See you in the next weekly report!

Have a good week,
Leo

[1]: https://github.com/nicolasbock/ebuildtester
[2]: https://github.com/Leo3418/ebuild-commander
[3]: https://github.com/Leo3418/spark-overlay/actions/workflows/docker.yml



Subject: Re: [gentoo-soc] Week 5 Report for Big Data Infrastructure and H2O ebuilds Project
From: Benda Xu
Date: 2021-07-17  4:20 UTC
To: Yuan Liao (Leo)
Cc: gentoo-soc

"Yuan Liao (Leo)" <liaoyuan@gmail.com> writes:

> [...]
>
> That is all for this week.  See you in the next weekly report!

Glad to see ebuild-commander finally works on GitHub Actions.  This
lays the foundation for the long-term stability of the overlay.
Great job!

Cheers,
Benda

