* [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-01-20 19:44 UTC
To: gentoo-science
Hello, I've noticed that the InfiniBand drivers and other related packages,
such as MVAPICH2 and openib, are a bit outdated. I recently started
administering an HP HPC cluster for molecular dynamics simulations (e.g.
GROMACS) and found that it works best on Gentoo ;). But I need the most
recent, fastest InfiniBand and MPI software for my work.
As far as InfiniBand goes, the gentoo-science overlay doesn't work at all;
I had to change some ebuilds to make them compile and run, and I still have
some problems with MVAPICH2.
Dear sys-cluster developers, if it isn't too much work, please update the
ebuilds to OpenFabrics OFED 1.4, MVAPICH 1.2, etc. (including all
dependencies). Or please give me some instructions on how to publish my
modified ebuilds, which at least work (in contrast to openib-1.1.ebuild from
the overlay ;)).
Best regards, Janusz
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Alexey Shvetsov @ 2009-01-20 20:10 UTC
To: gentoo-science
Janusz Mordarski wrote:
> Hello, I've noticed that the InfiniBand drivers and other related packages,
> such as MVAPICH2 and openib, are a bit outdated. I recently started
> administering an HP HPC cluster for molecular dynamics simulations (e.g.
> GROMACS) and found that it works best on Gentoo ;). But I need the most
> recent, fastest InfiniBand and MPI software for my work.
> As far as InfiniBand goes, the gentoo-science overlay doesn't work at all;
> I had to change some ebuilds to make them compile and run, and I still have
> some problems with MVAPICH2.
>
> Dear sys-cluster developers, if it isn't too much work, please update the
> ebuilds to OpenFabrics OFED 1.4, MVAPICH 1.2, etc. (including all
> dependencies). Or please give me some instructions on how to publish my
> modified ebuilds, which at least work (in contrast to openib-1.1.ebuild from
> the overlay ;)).
>
> Best regards, Janusz
Hi
I'm going to update openib for Gentoo to 1.4. I also use a Gentoo cluster with
IB for molecular dynamics with GROMACS. =)
BTW, Open MPI seems to work with IB too, so I'm going to update it to 1.3.
--
Alexey 'Alexxy' Shvetsov
Gentoo Team Ru
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-01-20 22:58 UTC
To: gentoo-science
Great news, thanks. I can't wait to see those packages updated.
AFAIK the most important ones that need updating are openib,
openib-drivers, openib-files, and their deps.
What about openib-mvapich2? Is it something different from the plain
'mvapich2' package? Do we need another MVAPICH?
And one last thing: should I enable any InfiniBand-related options in the
kernel configuration, or should I turn off all the kernel IB-related options
and use only those from OFED?
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Alexey Shvetsov @ 2009-01-20 23:09 UTC
To: gentoo-science
On Wednesday, 21 January 2009 01:58:52 Janusz Mordarski wrote:
> Great news, thanks. I can't wait to see those packages updated.
> AFAIK the most important ones that need updating are openib,
> openib-drivers, openib-files, and their deps.
>
> What about openib-mvapich2? Is it something different from the plain
> 'mvapich2' package? Do we need another MVAPICH?
>
> And one last thing: should I enable any InfiniBand-related options in the
> kernel configuration, or should I turn off all the kernel IB-related options
> and use only those from OFED?
I personally use the in-kernel drivers.
They work at least with kernels 2.6.24-2.6.28.
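For reference, the in-kernel bits I rely on look roughly like this (a sketch;
option names are the 2.6.2x ones, and reading /proc/config.gz needs
CONFIG_IKCONFIG_PROC):
  zgrep -E 'CONFIG_(INFINIBAND|MLX4)' /proc/config.gz
  # typically built as modules:
  #   CONFIG_INFINIBAND=m
  #   CONFIG_INFINIBAND_USER_MAD=m
  #   CONFIG_INFINIBAND_USER_ACCESS=m
  #   CONFIG_MLX4_CORE=m
  #   CONFIG_MLX4_INFINIBAND=m    (mlx4_ib, for ConnectX HCAs)
  #   CONFIG_INFINIBAND_IPOIB=m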
--
Alexey 'Alexxy' Shvetsov
Gentoo Team Ru
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-01-21 18:47 UTC
To: gentoo-science
I couldn't wait, so I prepared my own up-to-date ebuilds for various
InfiniBand-related software:
openib
openib-mvapich2
I also prepared an updated version of virtual/mpi - if we want to use
openib-mvapich2 with, for example, GROMACS, we need to tell it to use the
MVAPICH libs, so I think the package should be listed in virtual/mpi as an
RDEPEND.
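For example, the change could be as small as adding the new package to the ||
block of virtual/mpi's RDEPEND (a sketch; the other entries are only
placeholders for whatever the current ebuild already lists):
  RDEPEND="|| (
      sys-cluster/openmpi
      sys-cluster/mpich2
      sys-cluster/openib-mvapich2
  )"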
How can I share my work with you? I'm new to Gentoo development, so I don't
know whether I need to install SVN, CVS, git, or some other software.
I put all my recently updated ebuilds here:
http://bioinfo.mol.uj.edu.pl/gentoo-science-JM.tar.bz2
It would be great if someone with direct access to the gentoo-science overlay
could take a look at these and, if they are correct, add them to the official
gentoo-science overlay.
I spent a lot of time on openib-mvapich2, so I added myself to the ChangeLog
as the person who added version 1.2.
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Bryan Green @ 2009-01-21 19:12 UTC
To: gentoo-science; +Cc: Janusz Mordarski
Janusz Mordarski writes:
> I couldn't wait, so I prepared my own up-to-date ebuilds for various
> InfiniBand-related software:
> openib
> openib-mvapich2
>
> I also prepared an updated version of virtual/mpi - if we want to use
> openib-mvapich2 with, for example, GROMACS, we need to tell it to use the
> MVAPICH libs, so I think the package should be listed in virtual/mpi as an
> RDEPEND.
>
> How can I share my work with you? I'm new to Gentoo development, so I don't
> know whether I need to install SVN, CVS, git, or some other software.
> I put all my recently updated ebuilds here:
> http://bioinfo.mol.uj.edu.pl/gentoo-science-JM.tar.bz2
>
> It would be great if someone with direct access to the gentoo-science
> overlay could take a look at these and, if they are correct, add them to
> the official gentoo-science overlay.
>
> I spent a lot of time on openib-mvapich2, so I added myself to the
> ChangeLog as the person who added version 1.2.
Hi Janusz,
I wish I had more time to work on updating the openib ebuilds, but
unfortunately I don't, especially with the move to git, which is a tool I'm
not very familiar with. I'm not actively administering a cluster at the
moment, so I don't have the justification. It sounds like Alexey is working
on updates. Is that right, Alexey?
Sorry I can't be of more assistance,
-bryan
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Alexey Shvetsov @ 2009-01-21 19:31 UTC
To: gentoo-science
Bryan Green wrote:
>
> Hi Janusz,
>
> I wish I had more time to work on updating the openib ebuilds, but
> unfortunately I don't, especially with the move to git, which is a tool I'm
> not very familiar with. I'm not actively administering a cluster at the
> moment, so I don't have the justification. It sounds like Alexey is working
> on updates. Is that right, Alexey?
>
> Sorry I can't be of more assistance,
>
> -bryan
Hi Bryan!
Yes, I'm working on an eclass for the openib packages.
Once I finish it, I'll add split OFED-1.4 ebuilds.
Can you help me test them?
--
Alexey 'Alexxy' Shvetsov
Gentoo Team Ru
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Bryan Green @ 2009-01-21 19:36 UTC
To: gentoo-science
Alexey Shvetsov writes:
>
> Hi Bryan!
>
> Yes, I'm working on an eclass for the openib packages.
> Once I finish it, I'll add split OFED-1.4 ebuilds.
> Can you help me test them?
>
Hi Alexey,
Yes, I believe I will be able to test them, and I will be happy to. :)
Good luck, by the way, on the ofa-general list. I hope they will consider a
little more packaging-friendliness.
-bryan
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-01-21 19:46 UTC
To: gentoo-science
Hi
I can test some ebuilds on my cluster too; it uses Mellanox ConnectX
MT25418 DDR PCI-E cards.
Speaking of openib-mvapich2, Bryan, I think I can take charge of your ebuilds
and continue working on the improvements if you're out of time. I now have
MVAPICH2 1.2p1 emerged and working on my Gentoo boxes.
* [gentoo-science] Re: sys-cluster infiniband , mpi related soft. update request
From: Justin Bronder @ 2009-01-21 19:51 UTC
To: gentoo-science
On 21/01/09 19:47 +0100, Janusz Mordarski wrote:
> I couldn't wait, so I prepared my own up-to-date ebuilds for various
> InfiniBand-related software:
> openib
> openib-mvapich2
>
> I also prepared an updated version of virtual/mpi - if we want to use
> openib-mvapich2 with, for example, GROMACS, we need to tell it to use the
> MVAPICH libs, so I think the package should be listed in virtual/mpi as an
> RDEPEND.
mpi.eclass is aware of openib-mvapich2. Ideally, we'll be able to move away
from virtual/mpi and start replacing its use with the eclass instead. This
has a number of advantages such as being able to specify a subset of
compatible implementations and using EAPI 2 use deps.
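For example (a sketch; the packages and the fortran flag are only
illustrative), instead of RDEPEND="virtual/mpi" a consumer could depend on a
subset of implementations and use EAPI 2 use deps:
  EAPI=2
  RDEPEND="|| (
      sys-cluster/openmpi[fortran]
      sys-cluster/openib-mvapich2[fortran]
  )"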
>
> How can I share my work with you? I'm new to Gentoo development, so I don't
> know whether I need to install SVN, CVS, git, or some other software.
> I put all my recently updated ebuilds here:
> http://bioinfo.mol.uj.edu.pl/gentoo-science-JM.tar.bz2
>
> It would be great if someone with direct access to the gentoo-science
> overlay could take a look at these and, if they are correct, add them to
> the official gentoo-science overlay.
>
> I spent a lot of time on openib-mvapich2, so I added myself to the
> ChangeLog as the person who added version 1.2.
>
--
Justin Bronder
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Donnie Berkholz @ 2009-01-21 22:26 UTC
To: gentoo-science
On 19:47 Wed 21 Jan, Janusz Mordarski wrote:
> I couldn't wait, so I prepared my own up-to-date ebuilds for various
> InfiniBand-related software:
> openib
> openib-mvapich2
>
> I also prepared an updated version of virtual/mpi - if we want to use
> openib-mvapich2 with, for example, GROMACS, we need to tell it to use the
> MVAPICH libs, so I think the package should be listed in virtual/mpi as an
> RDEPEND.
>
> How can I share my work with you? I'm new to Gentoo development, so I don't
> know whether I need to install SVN, CVS, git, or some other software.
> I put all my recently updated ebuilds here:
> http://bioinfo.mol.uj.edu.pl/gentoo-science-JM.tar.bz2
Since git is distributed, you can grab a clone (checkout) of the
overlay, check in your changes, and upload the entire repository
somewhere. Someone else can then pull it in.
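Roughly like this (a sketch; the overlay URL here is from memory, so
double-check it):
  git clone git://git.overlays.gentoo.org/proj/sci.git
  cd sci
  # add or edit your ebuilds, then
  git add sys-cluster/openib-mvapich2
  git commit -m "sys-cluster/openib-mvapich2: version bump to 1.2p1"
  # publish the whole repository somewhere public, or send the changes as patches:
  git format-patch origin/master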
--
Thanks,
Donnie
Donnie Berkholz
Developer, Gentoo Linux
Blog: http://dberkholz.wordpress.com
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Alexey Shvetsov @ 2009-01-21 23:33 UTC
To: gentoo-science
On Thursday, 22 January 2009 01:26:58 Donnie Berkholz wrote:
> On 19:47 Wed 21 Jan, Janusz Mordarski wrote:
> > I couldn't wait, so I prepared my own up-to-date ebuilds for various
> > InfiniBand-related software:
> > openib
> > openib-mvapich2
> >
> > I also prepared an updated version of virtual/mpi - if we want to use
> > openib-mvapich2 with, for example, GROMACS, we need to tell it to use
> > the MVAPICH libs, so I think the package should be listed in virtual/mpi
> > as an RDEPEND.
> >
> > How can I share my work with you? I'm new to Gentoo development, so I
> > don't know whether I need to install SVN, CVS, git, or some other
> > software. I put all my recently updated ebuilds here:
> > http://bioinfo.mol.uj.edu.pl/gentoo-science-JM.tar.bz2
>
> Since git is distributed, you can grab a clone (checkout) of the
> overlay, check in your changes, and upload the entire repository
> somewhere. Someone else can then pull it in.
Hi!
The basic OFED-1.4 ebuilds have been added and need testing:
sys-cluster/openib-1.4 and its deps.
I'll add all the misc stuff soon.
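Testing should be as simple as this (assuming the science overlay is already
added via layman; keywording may be needed):
  layman -S
  echo "sys-cluster/openib ~amd64" >> /etc/portage/package.keywords
  emerge -av sys-cluster/openib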
--
Alexey 'Alexxy' Shvetsov
Gentoo Team Ru
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: janek @ 2009-01-23 17:42 UTC
To: gentoo-science
OK, I'll test your new ebuilds in the next few days.
Is mvapich2 still at 1.0.1? I published my ebuild for 1.2p1 earlier in this
thread; it compiles and works.
It needs an option to select the SDR or DDR link type, maybe via a USE flag?
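Something like this in src_compile() would probably do it (just a sketch; the
USE flag name is made up):
  # IUSE="... ib-ddr"
  if use ib-ddr; then
      c="${c} --with-link=DDR"
  else
      c="${c} --with-link=SDR"
  fi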
--
Janusz Mordarski, MSc
Dept of Computational Biophysics and Bioinformatics,
Faculty of Biochemistry, Biophysics and Biotechnology,
Jagiellonian University,
ul. Gronostajowa 7,
30-387 Krakow, Poland.
Tel: (+48-12)-664-6380
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: janek @ 2009-01-23 18:12 UTC
To: gentoo-science
libmlx4 fails to compile
--
Janusz Mordarski, MSc
Dept of Computational Biophysics and Bioinformatics,
Faculty of Biochemistry, Biophysics and Biotechnology,
Jagiellonian University,
ul. Gronostajowa 7,
30-387 Krakow, Poland.
Tel: (+48-12)-664-6380
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Alexey Shvetsov @ 2009-01-25 20:14 UTC
To: gentoo-science
On Friday, 23 January 2009 21:12:30 janek wrote:
> libmlx4 fails to compile
OK,
I'll test it and fix it =)
/me just came back home from Sweden
--
Alexey 'Alexxy' Shvetsov
Gentoo Team Ru
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-01-30 10:30 UTC
To: gentoo-science
Hi, could someone finally add a new ebuild for openib-mvapich2? I'm sending
my ebuild for this package below; it works well. One thing that could still
be added is a USE flag to choose between SDR and DDR InfiniBand.
Could mvapich2 also be registered in virtual/mpi as one of the 'official'
MPIs to select? I think that is easy and takes no effort (just add
sys-cluster/openib-mvapich2 as an RDEPEND).
Also, something is wrong with the new package that provides programs such as
ibdiagnet and ibtrace...: the paths are wrong. emerge installs them into
/usr/bin, but when I start those programs, they look for the others in
/usr/local/bin. This needs to be fixed.
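A quick way to see the problem (a sketch; ibdiagnet is the only name I'm sure
about here):
  # list the installed ibdiag* scripts that hard-code /usr/local
  grep -l '/usr/local/bin' /usr/bin/ibdiag* 2>/dev/null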
openib-mvapich2.ebuild attachment:
--
Dept of Computational Biophysics & Bioinformatics,
Faculty of Biochemistry, Biophysics and Biotechnology,
Jagiellonian University,
ul. Gronostajowa 7,
30-387 Krakow, Poland.
Tel: (+48-12)-664-6380
[-- Attachment #2: openib-mvapich2-1.2.ebuild --]
[-- Type: text/plain, Size: 3929 bytes --]
# Copyright 1999-2008 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $
inherit mpi fortran flag-o-matic eutils multilib toolchain-funcs
SLOT="0"
LICENSE="BSD"
KEYWORDS="~x86 ~amd64"
DESCRIPTION="MVAPICH2 MPI-over-infiniband package auto-configured for OpenIB."
HOMEPAGE="http://mvapich.cse.ohio-state.edu/"
SRC_URI="${HOMEPAGE}/download/mvapich2/mvapich2-${PV/_/-}p1.tgz"
S="${WORKDIR}/mvapich2-${PV/_/-}p1"
IUSE="debug medium-cluster large-cluster rdma romio threads fortran"
RDEPEND="
|| ( ( sys-cluster/libibverbs
sys-cluster/libibumad
sys-cluster/libibmad
rdma? ( sys-cluster/librdmacm ) )
sys-cluster/openib-userspace )
$(mpi_imp_deplist)"
DEPEND="${RDEPEND}"
pkg_setup() {
MPI_ESELECT_FILE="eselect.mpi.mvapich2"
if [ -z "${MVAPICH_HCA_TYPE}" ]; then
elog "${PN} needs to know which HCA it should optimize for. This is"
elog "passed to the ebuild with the variable, \${MVAPICH_HCA_TYPE}."
elog "Please choose one of: _MLX_PCI_EX_SDR_, _MLX_PCI_EX_DDR_,"
elog "_MLX_PCI_X, _PATH_HT_, or _IBM_EHCA_."
elog "See make.mvapich2.detect in ${S} for more information."
die "MVAPICH_HCA_TYPE undefined"
fi
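	# Example (sketch): pass the HCA type on the emerge command line, e.g.
	#   MVAPICH_HCA_TYPE=_MLX_PCI_EX_DDR_ emerge -av sys-cluster/openib-mvapich2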
case ${ARCH} in
amd64)
if grep Intel /proc/cpuinfo &>/dev/null; then
BUILD_ARCH=-D_EM64T_
else
BUILD_ARCH=-D_X86_64_
fi
;;
x86)
BUILD_ARCH=-D_IA32_
;;
ia64)
BUILD_ARCH=-D_IA64_
;;
ppc64)
BUILD_ARCH=-D_PPC64_
;;
*)
die "unsupported architecture: ${ARCH}"
;;
esac
use fortran && fortran_pkg_setup
}
src_unpack() {
unpack ${A}
cd "${S}"
einfo "Disabling examples"
# Examples are always compiled with the default 'all' target. This
# causes problems when we don't build support for everything, including
# threads, mpe2, etc. So we're not going to build them.
sed -i 's:.*cd examples && ${MAKE} all.*::' Makefile.in
}
src_compile() {
local vcluster="small"
use large-cluster && vcluster="large"
use medium-cluster && vcluster="medium"
local c="--with-device=ch3:sock
--with-link=DDR
$(use_enable romio)
--with-cluster-size=${vcluster}
--enable-sharedlibs=gcc"
local enable_srq
[ "${MVAPICH_HCA_TYPE}" == "_MLX_PCI_X_" ] && enable_srq="-DSRQ"
append-flags "${BUILD_ARCH}"
append-flags "${enable_srq}"
append-flags "-D${MVAPICH_HCA_TYPE}"
use debug && c="${c} --enable-g=all --enable-debuginfo"
if use threads; then
c="${c} --enable-threads=multiple --with-thread-package=pthreads"
else
c="${c} --with-thread-package=none"
fi
# enable f90 support for appropriate compilers
if use fortran; then
case "${FORTRANC}" in
gfortran|ifc|ifort|f95)
c="${c} --enable-f77 --enable-f90";;
g77|f77|f2c)
c="${c} --enable-f77 --disable-f90";;
esac
else
c="${c} --disable-f77 --disable-f90"
fi
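	# Prefix DESTDIR onto exec_prefix/libdir/bindir in the Makefile templates
	# so that 'make install' below respects DESTDIR.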
sed -i \
-e 's/ ${exec_prefix}/ ${DESTDIR}${exec_prefix}/' \
-e 's/ ${libdir}/ ${DESTDIR}${libdir}/' \
		"${S}"/Makefile.in
	sed -i '/bindir/s/ ${bindir}/ ${DESTDIR}${bindir}/' "${S}"/src/pm/mpd/Makefile.in
	cd "${S}"
! mpi_classed && c="${c} --sysconfdir=/etc/${PN}"
econf $(mpi_econf_args) ${c}
# http://www.mcs.anl.gov/research/projects/mpich2/support/index.php?s=faqs#parmake
# https://trac.mcs.anl.gov/projects/mpich2/ticket/297
emake -j1 || die "emake failed"
#emake || die "emake failed"
}
src_install() {
emake DESTDIR="${D}" install || die "make install failed"
mpi_imp_add_eselect
}
pkg_postinst() {
einfo "To allow normal users to use infiniband, it is necessary to"
einfo "increase the system limits on locked memory."
einfo "You must increase the kernel.shmmax sysctl value, and increase"
einfo "the memlock limits in /etc/security/limits.conf. i.e.:"
echo
einfo "echo 'kernel.shmmax = 512000000' >> /etc/sysctl.conf"
einfo "echo 512000000 > /proc/sys/kernel/shmmax"
einfo "echo -e '* soft memlock unlimited\n* hard memlock unlimited' > /etc/security/limits.conf"
}
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Alexey Shvetsov @ 2009-01-30 11:44 UTC
To: gentoo-science
Yes, it's possible. I'll add this ebuild after I test it.
I'll also add InfiniBand support for openmpi.
2009/1/30 Janusz Mordarski <janek@mol.uj.edu.pl>
> [Janusz's message and the attached openib-mvapich2-1.2.ebuild quoted in full; see the previous message]
--
Gentoo GNU/Linux 2.6.25
Mail to
alexxyum@gmail.com
alexxy@gentoo.ru
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-02-05 15:14 UTC
To: gentoo-science
Can you tell me which of your openib packages provides the mlx4_ib module and
the other InfiniBand drivers? I installed the whole openib package with the
mlx4 USE flag, but there is no mlx4_ib module, since I removed it from the
kernel (I wanted to use the OFED drivers, not the in-kernel ones).
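This is roughly how I'm checking for it (a sketch):
  # should list the module file if anything installed provides it
  find /lib/modules/$(uname -r) -name 'mlx4_ib.ko'
  modprobe mlx4_ib && echo loaded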
--
Dept of Computational Biophysics & Bioinformatics,
Faculty of Biochemistry, Biophysics and Biotechnology,
Jagiellonian University,
ul. Gronostajowa 7,
30-387 Krakow, Poland.
Tel: (+48-12)-664-6380
* Re: [gentoo-science] sys-cluster infiniband , mpi related soft. update request
From: Janusz Mordarski @ 2009-02-05 18:48 UTC
To: gentoo-science
Well, OK, I've made my own ebuild for openib-drivers, v1.4 (in the
attachment). I tried it and it compiles, except with the ehca flag (it won't
compile with that enabled). If someone's interested, please check it and put
it into the overlay.
I have another question: I have big PERFORMANCE problems with InfiniBand on
Gentoo, no matter whether I use the kernel drivers or openib-drivers. Here is
what the OSU latency benchmark says.
First, OSU over Gigabit Ethernet (for comparison):
# OSU MPI Latency Test v3.1.1
# Size Latency (us)
0 76.37
1 77.77
2 77.66
4 77.92
8 77.95
16 77.98
InfiniBand - drivers from the openib-drivers ebuild:
# OSU MPI Latency Test v3.1.1
# Size Latency (us)
0 50.23
1 52.10
2 51.99
4 51.62
8 51.79
16 52.17
InfiniBand - drivers from kernel 2.6.27: the results are almost the same,
except that wherever there is 5x (fifty) above, it is 4x (forty)
InfiniBand on CentOS (OFED installed from RPMs):
# OSU MPI Latency Test v3.1.1
# Size Latency (us)
0 1.65
1 1.82
2 1.81
4 1.72
8 1.60
16 1.61
So the last one is perfect; there, InfiniBand over IP works as expected.
In the earlier tests my benchmarks suffered a terrible performance loss over
InfiniBand on Gentoo (the same goes for the OSU bandwidth benchmark: 500-750
MB/s on Gentoo vs. 1400 MB/s on CentOS).
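For reference, runs like the ones above can be launched with MVAPICH2 roughly
like this (a sketch; the host names are placeholders, and mpiexec via mpd
works too):
  mpirun_rsh -np 2 node01 node02 ./osu_latency
  mpirun_rsh -np 2 node01 node02 ./osu_bw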
Any suggestions? Maybe some packages from this overlay are doing something
wrong?
--
Dept of Computational Biophysics & Bioinformatics,
Faculty of Biochemistry, Biophysics and Biotechnology,
Jagiellonian University,
ul. Gronostajowa 7,
30-387 Krakow, Poland.
Tel: (+48-12)-664-6380
Thread overview: 19+ messages
2009-01-20 19:44 [gentoo-science] sys-cluster infiniband , mpi related soft. update request Janusz Mordarski
2009-01-20 20:10 ` Alexey Shvetsov
2009-01-20 22:58 ` Janusz Mordarski
2009-01-20 23:09 ` Alexey Shvetsov
2009-01-21 18:47 ` Janusz Mordarski
2009-01-21 19:12 ` Bryan Green
2009-01-21 19:31 ` Alexey Shvetsov
2009-01-21 19:36 ` Bryan Green
2009-01-21 19:46 ` Janusz Mordarski
2009-01-21 19:51 ` [gentoo-science] " Justin Bronder
2009-01-21 22:26 ` [gentoo-science] " Donnie Berkholz
2009-01-21 23:33 ` Alexey Shvetsov
2009-01-23 17:42 ` janek
2009-01-23 18:12 ` janek
2009-01-25 20:14 ` Alexey Shvetsov
2009-01-30 10:30 ` Janusz Mordarski
2009-01-30 11:44 ` Alexey Shvetsov
2009-02-05 15:14 ` Janusz Mordarski
2009-02-05 18:48 ` Janusz Mordarski