From: Justin Bronder <jsbronder@gentoo.org>
To: gentoo-cluster@lists.gentoo.org
Subject: [gentoo-cluster] Re: Installing and using multiple MPI implementations at the same time.
Date: Mon, 10 Mar 2008 21:07:32 -0400 [thread overview]
Message-ID: <20080311010732.GA26618@mejis.cold-front> (raw)
In-Reply-To: <Pine.LNX.4.64.0803101700430.24263@lesbinux>
On 10/03/08 18:31 +0200, Alexander Piavka wrote:
>
> Hi Justin,
>
> I've started playing with your empi implementation.
>
> Some problems & suggestions:
>
> 1) 'eselect mpi set ...' does not check for the existence of the ~/.env.d
> dir and fails if it does not exist.
Fixed in eselect-mpi-0.0.2
>
> It creates ~/.env.d/mpi which looks like this:
> ----------------------
> PATH="/usr/lib64/mpi/mpi-openmpi/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.1.2:/opt/blackdown-jdk-1.4.2.03/bin:/opt/blackdown-jdk-1.4.2.03/jre/bin"
> MANPATH="/usr/lib64/mpi/mpi-openmpi/usr/share/man:/etc/java-config-2/current-system-vm/man:/usr/local/share/man:/usr/share/man:/usr/share/binutils-data/x86_64-pc-linux-gnu/2.18/man:/usr/share/gcc-data/x86_64-pc-linux-gnu/4.1.2/man:/opt/blackdown-jdk-1.4.2.03/man:/etc/java-config/system-vm/man/"
> LD_LIBRARY_PATH="/usr/lib64/mpi/mpi-openmpi/usr/lib64:"
> ESELECT_MPI_IMP="mpi-openmpi"
> export LD_LIBRARY_PATH
> export PATH
> export MANPATH
> export ESELECT_MPI_IMP
> ----------------------
>
> while the following would be better:
> ----------------------
> PATH="/usr/lib64/mpi/mpi-openmpi/usr/bin:${PATH}"
> MANPATH="/usr/lib64/mpi/mpi-openmpi/usr/share/man:${MANPATH}"
> LD_LIBRARY_PATH="/usr/lib64/mpi/mpi-openmpi/usr/lib64:${LD_LIBRARY_PATH}"
> ESELECT_MPI_IMP="mpi-openmpi"
> export LD_LIBRARY_PATH
> export PATH
> export MANPATH
> export ESELECT_MPI_IMP
> ----------------------
>
> maybe even
> ----------------------
> if [ "X${PATH}" != "X" ]; then
> export PATH="/usr/lib64/mpi/mpi-openmpi/usr/bin:${PATH}"
> else
> export PATH="/usr/lib64/mpi/mpi-openmpi/usr/bin"
> fi
> if [ "X${MANPATH}" != "X" ]; then
> export MANPATH="/usr/lib64/mpi/mpi-openmpi/usr/share/man:${MANPATH}"
> else
> export MANPATH="/usr/lib64/mpi/mpi-openmpi/usr/share/man"
> fi
> if [ "X${LD_LIBRARY_PATH}" != "X" ]; then
> export LD_LIBRARY_PATH="/usr/lib64/mpi/mpi-openmpi/usr/lib64:${LD_LIBRARY_PATH}"
> else
> export LD_LIBRARY_PATH="/usr/lib64/mpi/mpi-openmpi/usr/lib64"
> fi
> export ESELECT_MPI_IMP
> ----------------------
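For what it's worth, the if/else guards above can be collapsed with POSIX
${var:+...} expansion, which appends the old value only when it is non-empty.
A minimal sketch (MPI_PREFIX is just illustrative shorthand, not something
eselect-mpi defines):

```shell
# Equivalent to the if/else guards: the ":${VAR}" suffix is expanded only
# when VAR is set and non-empty, so no stray leading/trailing colons appear.
MPI_PREFIX="/usr/lib64/mpi/mpi-openmpi/usr"
export PATH="${MPI_PREFIX}/bin${PATH:+:${PATH}}"
export MANPATH="${MPI_PREFIX}/share/man${MANPATH:+:${MANPATH}}"
export LD_LIBRARY_PATH="${MPI_PREFIX}/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
export ESELECT_MPI_IMP="mpi-openmpi"
```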
Yeah, you're probably right. However, I need a way to deal with cleaning out
the environment when the user calls the unset action, or changes from one
implementation to another. Using what you have above, if the user then
called 'eselect mpi set mpi-lam' and sourced ~/.env.d/mpi, they would end up
with the paths for mpi-lam alongside the stale mpi-openmpi paths still in
their environment. See below for why this scares me.
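The stripping I have in mind could look roughly like this. It's only a
sketch of the idea, not eselect-mpi code: strip_mpi_paths and MPI_ROOT are
hypothetical names, and the assumption is that every implementation installs
under a common /usr/lib64/mpi prefix that can be filtered out.

```shell
#!/bin/sh
# Before prepending a new implementation's paths, drop any entries under
# ${MPI_ROOT} left over from a previous 'eselect mpi set' or a stale file.
MPI_ROOT="/usr/lib64/mpi"

strip_mpi_paths() {
    # $1: colon-separated list; prints it with ${MPI_ROOT}/* entries removed
    old_IFS=$IFS; IFS=:
    out=""
    for d in $1; do
        case "$d" in
            "${MPI_ROOT}"/*) ;;           # drop stale MPI entries
            "") ;;                        # drop empty components
            *) out="${out:+$out:}$d" ;;
        esac
    done
    IFS=$old_IFS
    printf '%s\n' "$out"
}

# Switching implementations: strip first, then prepend the new one.
PATH=$(strip_mpi_paths "$PATH")
PATH="${MPI_ROOT}/mpi-lam/usr/bin${PATH:+:$PATH}"
export PATH
```

With something like that in place, 'eselect mpi unset' is just the strip
step without the prepend.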
>
> Also, besides /etc/env.d/mpi/mpi-openmpi, the /etc/env.d/XXmpi file
> should probably also be created with the default empi profile when
> 'eselect mpi set <mpi-implementation>' is run.
I'm willing to be told why I'm wrong, but I left out the above for what I
believe is a good reason. If you set, say, openmpi as the default
implementation at the system level, and then a user eselects lam-mpi, the
user will still have mpif90 in their path. This is a big deal because
lam-mpi does not provide f90 bindings, so the user could quite quickly
become confused as to why their code using f90 and C is in shambles when
they try to compile.
The above can still happen if openmpi is emerged normally. I have no clue
how to deal with that yet either.
If we keep the ugly ~/.env.d/mpi file, along with the environment stripping
ability, there is no reason that a global mpi profile couldn't be used. What
do you think?
>
> 2) Another problem is a failure to install a binpkg of openmpi on other
> identical systems; the error is:
>
> *
> * ERROR: mpi-openmpi/openmpi-1.2.5-r1 failed.
> * Call stack:
> * ebuild.sh, line 1717: Called dyn_setup
> * ebuild.sh, line 768: Called qa_call 'pkg_setup'
> * ebuild.sh, line 44: Called pkg_setup
> * openmpi-1.2.5-r1.ebuild, line 23: Called mpi_pkg_setup
> * mpi.eclass, line 306: Called die
> * The specific snippet of code:
> * [[ -f "${FILESDIR}"/${MPI_ESELECT_FILE} ]] \
> * || die "MPI_ESELECT_FILE is not defined/found. ${MPI_ESELECT_FILE}"
> * The die message:
> * MPI_ESELECT_FILE is not defined/found. eselect.mpi.openmpi
> *
> * If you need support, post the topmost build error, and the call stack if relevant.
> * A complete build log is located at '/var/tmp/portage/mpi-openmpi/openmpi-1.2.5-r1/temp/build.log'.
> *
>
> I think this is due to MPI_ESELECT_FILE being defined in pkg_setup() of
> the openmpi ebuild and not at the top of the ebuild (will check whether
> this helps later).
Foolish mistake on my part. MPI_ESELECT_FILE can be defined in pkg_setup,
as that always gets called (I believe). However, I can't check for that
file there, as emerging binpkgs doesn't give access to FILESDIR. I've
committed a fix to the overlay.
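The shape of the fix, sketched outside Portage (this is hypothetical, not
the actual overlay commit; mpi_verify_eselect_file is an invented helper
name, and die() is stubbed so the logic can run standalone): keep the
MPI_ESELECT_FILE default in pkg_setup, but defer the file-existence check
to install time, since FILESDIR is only valid when building from source.

```shell
# Stub of Portage's die() so the functions below can run outside an ebuild.
die() { echo "die: $*" >&2; return 1; }

mpi_pkg_setup() {
    # Binpkg-safe: only sets a default, never touches FILESDIR.
    : "${MPI_ESELECT_FILE:=eselect.mpi.${PN}}"
}

mpi_verify_eselect_file() {
    # Call from src_install, where FILESDIR is guaranteed to be populated;
    # binary package merges never enter src_*, so the check is skipped.
    [ -f "${FILESDIR}/${MPI_ESELECT_FILE}" ] \
        || die "MPI_ESELECT_FILE is not defined/found. ${MPI_ESELECT_FILE}"
}
```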
>
> 3) If I have PORTDIR_OVERLAY="/usr/local/overlays/csbgu /tmp/empi", then
> empi --create --implementation mpi-openmpi =sys-cluster/openmpi-1.2.5-r1
> creates the mpi-openmpi category tree under
> /usr/local/overlays/csbgu/mpi-openmpi,
> since that is the first overlay in PORTDIR_OVERLAY. It would be nice if it
> were ALWAYS created under the empi overlay, i.e. /tmp/empi/mpi-openmpi.
> Of course I can put the empi overlay first in PORTDIR_OVERLAY instead,
> but I want to avoid manual tweaking as much as possible.
> With all MPI implementations residing in the same overlay tree as empi,
> it would be much more convenient, for me, to auto-distribute a single
> overlay among the cluster hosts and avoid the possible need for commands
> like 'empi --create --implementation mpi-openmpi ...'.
Also fixed: I added an --overlaydir option to the command line arguments.
Thanks for trying this out, it makes me feel useful :)
--
Justin Bronder
Thread overview: 5+ messages
2008-02-08 3:19 [gentoo-cluster] Installing and using multiple MPI implementations at the same time Justin Bronder
2008-02-10 6:56 ` Donnie Berkholz
2008-02-11 14:32 ` [gentoo-cluster] " Justin Bronder
2008-03-10 16:31 ` [gentoo-cluster] " Alexander Piavka
2008-03-11 1:07 ` Justin Bronder [this message]