From: Justin Bronder <jsbronder@gentoo.org>
To: gentoo-cluster@lists.gentoo.org
Subject: [gentoo-cluster] Re: Installing and using multiple MPI implementations at the same time.
Date: Mon, 11 Feb 2008 09:32:26 -0500
Message-ID: <20080211143226.GA22670@mejis.cold-front>
In-Reply-To: <20080210065605.GA16842@comet.romhat.net>
On 09/02/08 22:56 -0800, Donnie Berkholz wrote:
...
> > sys-cluster/empi: Does the same stuff that crossdev does. You create a new
> > implementation root by specifying a name and an MPI implementation package
> > to build it with. [2] Empi adds these to an overlay under a new category
> > with the name you gave. The ebuilds inherit mpi.eclass, which handles
> > pushing all the files to /usr/lib/mpi/<name> and providing files for
> > eselect-mpi.
>
> lib == $(get_libdir) ?
>
Yup, it's actually grabbed when the implementation is added to eselect-mpi.
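For example, after adding an implementation, the env snippet eselect-mpi
ends up with would look roughly like this (paths illustrative only; assume
libdir=lib64 and an implementation named mpi-openmpi):

    PATH="/usr/lib64/mpi/mpi-openmpi/usr/bin"
    LD_LIBRARY_PATH="/usr/lib64/mpi/mpi-openmpi/usr/lib64"
    MANPATH="/usr/lib64/mpi/mpi-openmpi/usr/share/man"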
> > A couple of final words: hpl and mpi-examples currently wouldn't work
> > without empi, mainly because I'm lazy :) Also, I still haven't figured out
> > a good way to handle do{man,doc,www} etc.; ideas are welcome.
>
> Do the same thing as gcc. Install to a path under
> /usr/libdir/mpi/..../{man,doc} and use environment variables (MANPATH
> etc) and/or symlinks.
>
That's the general idea; however, it's not as simple as just setting *into
unless I mess with $D first. The plan is to have something like mpi_doX,
which would set the correct install path and restore global variables
(like $D) on exit. Not sure if this is the best method; comments are
appreciated :)
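To illustrate, a rough sketch of what such a wrapper might look like (the
name mpi_doman and the variable MPI_IMP_ROOT are placeholders, not what the
eclass actually ships):

    # Sketch only; mpi_doman and MPI_IMP_ROOT are assumed names.
    # Save ${D}, point it below the implementation root so the stock
    # helper installs there, then restore ${D} afterwards.
    mpi_doman() {
        local saved_D=${D}
        # MPI_IMP_ROOT would be /usr/$(get_libdir)/mpi/<name>
        D="${D%/}${MPI_IMP_ROOT}/"
        doman "$@" || die "doman failed"
        D=${saved_D}
    }

The same pattern would apply to dodoc and friends.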
> > There's still a fair amount of work to be done, but I wanted to see if
> > anyone had any feedback regarding the general concept first before
> > pushing on.
> >
> > You can pull the overlay from rsync://cold-front.ath.cx/mpi (for now...)
>
> Is this in layman? The file's at
> gentoo/xml/htdocs/proj/en/overlays/layman-global.txt.
>
Not yet; it's being hosted off my home machine, which doesn't always have
the best connection. If a consensus can be reached that this is a worthy
solution, I presume I can talk to the overlay/infra guys then, or maybe
even the sci team, which already has an overlay with some MPI applications
in it.
Thanks,
--
Justin Bronder