* [gentoo-commits] proj/portage:prefix commit in: /
@ 2016-03-20 19:31 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2016-03-20 19:31 UTC (permalink / raw)
To: gentoo-commits
commit: 08d48efbbdc7f7e36b5be16884cd72f5ef52e3d6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 20 19:31:06 2016 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Mar 20 19:31:06 2016 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=08d48efb
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.travis.yml | 3 +++
RELEASE-NOTES | 20 +++++++++++++++++++
bin/egencache | 6 +++---
bin/misc-functions.sh | 2 +-
bin/phase-functions.sh | 7 +------
bin/phase-helpers.sh | 7 ++++++-
bin/portageq | 33 ++++++++++++++------------------
man/portage.5 | 6 ++++++
pym/portage/dbapi/porttree.py | 3 ++-
pym/portage/package/ebuild/doebuild.py | 2 +-
pym/portage/sync/modules/git/__init__.py | 5 ++++-
pym/portage/sync/modules/git/git.py | 4 ++++
pym/portage/tests/repoman/test_simple.py | 9 +++------
pym/repoman/actions.py | 29 +++++++++++++++++++---------
pym/repoman/scanner.py | 2 +-
runtests | 4 ++--
setup.py | 2 +-
17 files changed, 92 insertions(+), 52 deletions(-)
diff --cc bin/misc-functions.sh
index 080e366,58755a1..a930604
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2024-02-25 9:40 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2024-02-25 9:40 UTC (permalink / raw)
To: gentoo-commits
commit: 451552da1dcebde961cbc1ce7a95180d9e1c073f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 25 09:39:18 2024 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Feb 25 09:39:18 2024 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=451552da
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
NEWS | 36 +++
bin/install-qa-check.d/90gcc-warnings | 18 +-
bin/socks5-server.py | 34 ++-
lib/_emerge/PollScheduler.py | 9 +-
lib/_emerge/depgraph.py | 15 +-
lib/portage/dbapi/__init__.py | 11 +-
lib/portage/dbapi/porttree.py | 2 +-
lib/portage/process.py | 254 ++++++++++++---------
lib/portage/tests/ebuild/test_doebuild_spawn.py | 3 +-
lib/portage/tests/ebuild/test_ipc_daemon.py | 3 +-
lib/portage/tests/emerge/conftest.py | 4 +-
lib/portage/tests/process/test_spawn_returnproc.py | 2 +-
lib/portage/tests/resolver/test_depth.py | 8 +-
.../test_emptytree_reinstall_unsatisfiability.py | 137 +++++++++++
lib/portage/tests/resolver/test_useflags.py | 6 +-
lib/portage/tests/util/test_socks5.py | 94 ++++++--
lib/portage/util/_async/SchedulerInterface.py | 9 +-
lib/portage/util/_eventloop/asyncio_event_loop.py | 44 +++-
lib/portage/util/futures/_asyncio/__init__.py | 28 ++-
lib/portage/util/socks5.py | 42 +++-
meson.build | 2 +-
21 files changed, 574 insertions(+), 187 deletions(-)
diff --cc lib/portage/process.py
index ead11e3184,cc9ed7bf78..2365778e6a
--- a/lib/portage/process.py
+++ b/lib/portage/process.py
@@@ -35,9 -35,8 +35,10 @@@ portage.proxy.lazyimport.lazyimport
)
from portage.const import BASH_BINARY, SANDBOX_BINARY, FAKEROOT_BINARY
+# PREFIX LOCAL
+from portage.const import MACOSSANDBOX_BINARY
from portage.exception import CommandNotFound
+ from portage.proxy.objectproxy import ObjectProxy
from portage.util._ctypes import find_library, LoadLibrary, ctypes
try:
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2024-02-22 7:27 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2024-02-22 7:27 UTC (permalink / raw)
To: gentoo-commits
commit: 07e60cd2a4f67f0b4207fb8150f9d7a1689cb295
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 22 07:27:11 2024 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 22 07:27:11 2024 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=07e60cd2
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.git-blame-ignore-revs | 2 +
.github/workflows/ci.yml | 47 +-
.github/workflows/lint.yml | 2 +-
.pre-commit-config.yaml | 8 +-
NEWS | 173 +++++
bin/dispatch-conf | 2 +-
bin/ebuild-helpers/dohtml | 2 +-
bin/ebuild-helpers/fowners | 2 +-
bin/ebuild-helpers/fperms | 2 +-
bin/ebuild.sh | 8 +-
bin/estrip | 35 +-
bin/fixpackages | 71 +-
bin/install-qa-check.d/05prefix | 10 +-
bin/install-qa-check.d/60bash-completion | 4 +-
bin/install-qa-check.d/90bad-bin-group-write | 2 +-
bin/install-qa-check.d/90bad-bin-owner | 2 +-
bin/install-qa-check.d/90cmake-warnings | 2 +-
bin/install-qa-check.d/90world-writable | 2 +-
bin/install-qa-check.d/95empty-dirs | 2 +-
bin/phase-functions.sh | 3 +-
bin/phase-helpers.sh | 10 +-
bin/quickpkg | 13 +-
cnf/make.conf.example.arc.diff | 46 ++
cnf/make.globals | 10 +-
lib/_emerge/AbstractDepPriority.py | 3 +-
lib/_emerge/AsynchronousTask.py | 6 +-
lib/_emerge/Binpkg.py | 34 +-
lib/_emerge/BinpkgFetcher.py | 120 ++--
lib/_emerge/BinpkgPrefetcher.py | 36 +-
lib/_emerge/BinpkgVerifier.py | 22 +-
lib/_emerge/DepPriority.py | 4 +-
lib/_emerge/DepPriorityNormalRange.py | 4 +-
lib/_emerge/DepPrioritySatisfiedRange.py | 1 +
lib/_emerge/EbuildBinpkg.py | 37 +-
lib/_emerge/EbuildBuild.py | 39 +-
lib/_emerge/EbuildFetchonly.py | 10 +-
lib/_emerge/EbuildMetadataPhase.py | 52 +-
lib/_emerge/EbuildPhase.py | 29 +-
lib/_emerge/MergeListItem.py | 1 -
lib/_emerge/MetadataRegen.py | 16 +-
lib/_emerge/PipeReader.py | 1 -
lib/_emerge/Scheduler.py | 65 +-
lib/_emerge/SpawnProcess.py | 50 +-
lib/_emerge/SubProcess.py | 28 +-
lib/_emerge/UnmergeDepPriority.py | 38 +-
lib/_emerge/actions.py | 51 +-
lib/_emerge/depgraph.py | 238 +++++--
lib/_emerge/resolver/circular_dependency.py | 21 +-
lib/_emerge/resolver/slot_collision.py | 12 +-
.../{binpkg_compression.py => binpkg_format.py} | 16 +-
lib/portage/_compat_upgrade/meson.build | 1 +
lib/portage/_emirrordist/DeletionIterator.py | 8 +-
lib/portage/_emirrordist/FetchIterator.py | 18 +-
lib/portage/_global_updates.py | 15 +-
lib/portage/_selinux.py | 17 +-
lib/portage/_sets/dbapi.py | 2 +-
lib/portage/_sets/libs.py | 1 -
lib/portage/binpkg.py | 2 +-
lib/portage/cache/anydbm.py | 17 +-
lib/portage/const.py | 4 +-
lib/portage/dbapi/__init__.py | 14 +-
lib/portage/dbapi/bintree.py | 187 +++--
lib/portage/dbapi/porttree.py | 130 ++--
lib/portage/dbapi/vartree.py | 51 +-
lib/portage/dep/__init__.py | 24 +-
lib/portage/dep/_slot_operator.py | 1 +
lib/portage/dep/dep_check.py | 17 +-
lib/portage/dep/libc.py | 83 +++
lib/portage/dep/meson.build | 1 +
lib/portage/dep/soname/multilib_category.py | 10 +
lib/portage/emaint/main.py | 2 +-
lib/portage/emaint/modules/merges/merges.py | 4 +-
lib/portage/exception.py | 10 +-
lib/portage/gpg.py | 14 +-
lib/portage/gpkg.py | 34 +-
lib/portage/output.py | 15 +-
lib/portage/package/ebuild/config.py | 67 +-
lib/portage/package/ebuild/doebuild.py | 214 +++---
lib/portage/package/ebuild/fetch.py | 1 +
lib/portage/process.py | 766 +++++++++++++++------
lib/portage/proxy/objectproxy.py | 1 -
lib/portage/sync/modules/git/git.py | 32 +-
lib/portage/sync/modules/rsync/rsync.py | 40 +-
lib/portage/tests/__init__.py | 12 +-
lib/portage/tests/bin/setup_env.py | 7 +-
lib/portage/tests/dbapi/test_auxdb.py | 71 +-
lib/portage/tests/dbapi/test_portdb_cache.py | 6 +-
lib/portage/tests/dep/meson.build | 1 +
lib/portage/tests/dep/test_libc.py | 81 +++
lib/portage/tests/dep/test_overlap_dnf.py | 49 +-
lib/portage/tests/ebuild/test_fetch.py | 24 +-
lib/portage/tests/emerge/conftest.py | 21 +-
lib/portage/tests/emerge/meson.build | 2 +
lib/portage/tests/emerge/test_actions.py | 23 +-
lib/portage/tests/emerge/test_baseline.py | 2 +-
...cker_file_collision.py => test_binpkg_fetch.py} | 185 ++---
lib/portage/tests/emerge/test_config_protect.py | 2 +-
.../emerge/test_emerge_blocker_file_collision.py | 2 +-
lib/portage/tests/emerge/test_emerge_slot_abi.py | 2 +-
lib/portage/tests/emerge/test_libc_dep_inject.py | 552 +++++++++++++++
.../tests/env/config/test_PortageModulesFile.py | 3 +-
lib/portage/tests/glsa/test_security_set.py | 4 +-
lib/portage/tests/gpkg/test_gpkg_gpg.py | 23 +-
.../tests/gpkg/test_gpkg_metadata_update.py | 2 +-
lib/portage/tests/gpkg/test_gpkg_metadata_url.py | 45 +-
lib/portage/tests/gpkg/test_gpkg_path.py | 3 +-
lib/portage/tests/locks/test_lock_nonblock.py | 55 +-
lib/portage/tests/news/test_NewsItem.py | 3 +-
lib/portage/tests/process/meson.build | 1 +
lib/portage/tests/process/test_AsyncFunction.py | 27 +-
lib/portage/tests/process/test_spawn_fail_e2big.py | 7 +-
lib/portage/tests/process/test_spawn_returnproc.py | 39 ++
lib/portage/tests/resolver/ResolverPlayground.py | 96 ++-
lib/portage/tests/resolver/meson.build | 2 +
.../tests/resolver/soname/test_skip_update.py | 17 +-
lib/portage/tests/resolver/test_broken_deps.py | 76 ++
.../tests/resolver/test_cross_dep_priority.py | 164 +++++
lib/portage/tests/resolver/test_depclean_order.py | 117 +++-
lib/portage/tests/resolver/test_eapi.py | 4 +-
lib/portage/tests/sets/base/test_variable_set.py | 8 +
.../tests/sets/files/test_config_file_set.py | 3 +-
.../tests/sets/files/test_static_file_set.py | 3 +-
lib/portage/tests/sets/shell/test_shell.py | 4 +-
lib/portage/tests/sync/test_sync_local.py | 2 +-
lib/portage/tests/update/test_move_ent.py | 205 +++++-
lib/portage/tests/update/test_move_slot_ent.py | 157 ++++-
lib/portage/tests/update/test_update_dbentry.py | 217 +++++-
lib/portage/tests/util/dyn_libs/meson.build | 1 +
.../tests/util/dyn_libs/test_installed_dynlibs.py | 65 ++
lib/portage/tests/util/futures/asyncio/meson.build | 1 -
.../util/futures/asyncio/test_child_watcher.py | 50 --
lib/portage/tests/util/futures/test_retry.py | 43 +-
lib/portage/tests/util/test_manifest.py | 7 +-
lib/portage/util/_async/AsyncTaskFuture.py | 8 +-
lib/portage/util/_async/BuildLogger.py | 39 +-
lib/portage/util/_async/ForkProcess.py | 191 ++---
lib/portage/util/_async/PipeLogger.py | 1 -
lib/portage/util/_async/PopenProcess.py | 5 +-
lib/portage/util/_async/TaskScheduler.py | 1 -
lib/portage/util/_dyn_libs/LinkageMapELF.py | 3 -
lib/portage/util/_dyn_libs/dyn_libs.py | 43 +-
lib/portage/util/elf/constants.py | 5 +
lib/portage/util/file_copy/__init__.py | 7 +-
lib/portage/util/futures/_asyncio/__init__.py | 32 +-
lib/portage/util/futures/_sync_decorator.py | 8 +-
lib/portage/util/futures/executor/fork.py | 6 +-
lib/portage/util/locale.py | 73 +-
lib/portage/util/socks5.py | 36 +-
man/ebuild.5 | 8 +
man/emerge.1 | 5 +-
man/make.conf.5 | 30 +-
meson.build | 2 +-
misc/emerge-delta-webrsync | 1 +
tox.ini | 3 +-
154 files changed, 4862 insertions(+), 1338 deletions(-)
diff --cc lib/portage/const.py
index 8769ab2707,2154213b7b..1909199ef3
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -1,14 -1,7 +1,14 @@@
# portage: Constants
- # Copyright 1998-2023 Gentoo Authors
+ # Copyright 1998-2024 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
+# BEGIN PREFIX LOCAL
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+# END PREFIX LOCAL
+
import os
from portage import installation
diff --cc lib/portage/package/ebuild/doebuild.py
index c627077a27,bc51fdff2d..7994394bdd
--- a/lib/portage/package/ebuild/doebuild.py
+++ b/lib/portage/package/ebuild/doebuild.py
@@@ -19,10 -19,9 +19,11 @@@ import sy
import tempfile
from textwrap import wrap
import time
+ from typing import Union
import warnings
import zlib
+# PREFIX LOCAL
+import platform
import portage
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2024-01-18 10:22 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2024-01-18 10:22 UTC (permalink / raw)
To: gentoo-commits
commit: 0e101ec3f0dede6bc28a87be38afab8f9df34c09
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 18 10:21:51 2024 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jan 18 10:21:51 2024 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=0e101ec3
subst-install: expand all replacements from const.py
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
subst-install.in | 2 ++
1 file changed, 2 insertions(+)
diff --git a/subst-install.in b/subst-install.in
index 423dfad515..318e113e1c 100644
--- a/subst-install.in
+++ b/subst-install.in
@@ -24,10 +24,12 @@ sedexp=(
-r
-e "s,${at}PORTAGE_BASE${at},@PORTAGE_BASE@,g"
-e "s,${at}PORTAGE_BASE_PATH${at},@PORTAGE_BASE@,g"
+ -e "s,${at}PORTAGE_BIN_PATH${at},@PORTAGE_BASE@/bin,g"
-e "s,${at}PORTAGE_BASH${at},@PORTAGE_BASH@,g"
-e "s,${at}PORTAGE_EPREFIX${at},@PORTAGE_EPREFIX@,g"
-e "s,${at}PORTAGE_MV${at},@PORTAGE_MV@,g"
-e "s,${at}PREFIX_PORTAGE_PYTHON${at},@PREFIX_PORTAGE_PYTHON@,g"
+ -e "s,${at}EPREFIX${at},@PORTAGE_EPREFIX@,g"
-e "s,${at}datadir${at},@datadir@,g"
-e "s,${at}portagegroup${at},${portagegroup},g"
-e "s,${at}portageuser${at},${portageuser},g"
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2024-01-18 9:36 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2024-01-18 9:36 UTC (permalink / raw)
To: gentoo-commits
commit: ded2ef9b14efaebe080d3c228832cac7b48639e7
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 18 09:35:25 2024 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jan 18 09:35:25 2024 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=ded2ef9b
buildsys: fix hprefixify replacement logic for subst-inst
We don't need to escape \ in single-quoted strings.
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
autogen.sh | 3 ---
subst-install.in | 4 ++--
2 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/autogen.sh b/autogen.sh
index ec510e01b0..463753c459 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -13,7 +13,4 @@ autoconf || die "failed autoconf"
touch ChangeLog
automake -a -c || die "failed automake"
-if [ -x ./test.sh ] ; then
- exec ./test.sh "$@"
-fi
echo "finished"
diff --git a/subst-install.in b/subst-install.in
index acc7824c4b..423dfad515 100644
--- a/subst-install.in
+++ b/subst-install.in
@@ -35,8 +35,8 @@ sedexp=(
-e "s,${at}rootuid${at},@rootuid@,g"
-e "s,${at}rootuser${at},${rootuser},g"
-e "s,${at}sysconfdir${at},@sysconfdir@,g"
- -e 's,([^[:alnum:]}\\)\\.])'"${dirs}"',\\1@PORTAGE_EPREFIX@/\\2,g'
- -e 's,^'"${dirs}"',@PORTAGE_EPREFIX@/\\1,'
+ -e 's,([^[:alnum:]}\)\.])'"${dirs}"',\1@PORTAGE_EPREFIX@/\2,g'
+ -e 's,^'"${dirs}"',@PORTAGE_EPREFIX@/\1,'
)
sources=( )
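For the record, a small illustration of the quoting behaviour fixed here (GNU sed assumed): bash passes backslashes through single quotes untouched, so \1 already reaches sed as a backreference, whereas \\1 makes sed emit a literal "\1":

    printf 'abc\n' | sed -r -e 's,(a)bc,[\1],'    # prints: [a]
    printf 'abc\n' | sed -r -e 's,(a)bc,[\\1],'   # prints: [\1]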
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-12-03 10:10 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-12-03 10:10 UTC (permalink / raw)
To: gentoo-commits
commit: 2368a1f82ef4dcef2bd107714a5c81257c913b0d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 3 10:10:08 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 3 10:10:08 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=2368a1f8
configure: drop --with-extra-path handling
EXTRA_PATH has been obsolete for a while.
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
configure.ac | 6 ------
1 file changed, 6 deletions(-)
diff --git a/configure.ac b/configure.ac
index 9def5210cd..12b669efb3 100644
--- a/configure.ac
+++ b/configure.ac
@@ -97,18 +97,12 @@ then
PORTAGE_EPREFIX=`${PREFIX_PORTAGE_PYTHON} -c "import os; print(os.path.normpath('$PORTAGE_EPREFIX'))"`
fi
-AC_ARG_WITH(extra-path,
-AS_HELP_STRING([--with-extra-path],[specify additional PATHs available to the portage build environment (use with care)]),
-[EXTRA_PATH="$withval"],
-[EXTRA_PATH=""])
-
AC_SUBST(portageuser)
AC_SUBST(portagegroup)
AC_SUBST(rootuser)
AC_SUBST(rootuid)
AC_SUBST(rootgid)
AC_SUBST(PORTAGE_EPREFIX)
-AC_SUBST(EXTRA_PATH)
AC_SUBST(PORTAGE_BASE,['${exec_prefix}/lib/portage'])
AC_SUBST(PORTAGE_RM)
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-12-03 9:54 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-12-03 9:54 UTC (permalink / raw)
To: gentoo-commits
commit: 06d123dd5bbe32bcb3c9b1df65344acc870201cb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 3 09:48:01 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 3 09:48:01 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=06d123dd
subst-install: use hprefixify heuristic like meson
Now that @PORTAGE_EPREFIX@ is no longer used in most places, ensure we still
get the replacement done for Prefix.
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
subst-install.in | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/subst-install.in b/subst-install.in
index 332b77b656..d65db48042 100644
--- a/subst-install.in
+++ b/subst-install.in
@@ -19,7 +19,9 @@ portageuser=${portageuser//\\/\\\\\\\\}
# there are many ways to do this all dynamic, but we only care for raw
# speed here, so let configure fill in this list and be done with it
at='@'
+dirs='/(usr|lib(|[onx]?32|n?64)|etc|bin|sbin|var|opt|run)'
sedexp=(
+ -r
-e "s,${at}EXTRA_PATH${at},@EXTRA_PATH@,g"
-e "s,${at}PORTAGE_BASE${at},@PORTAGE_BASE@,g"
-e "s,${at}PORTAGE_BASE_PATH${at},@PORTAGE_BASE@,g"
@@ -34,7 +36,8 @@ sedexp=(
-e "s,${at}rootuid${at},@rootuid@,g"
-e "s,${at}rootuser${at},${rootuser},g"
-e "s,${at}sysconfdir${at},@sysconfdir@,g"
- -e "1s,/usr/bin/env ,@PORTAGE_EPREFIX@/usr/bin/,"
+ -e 's,([^[:alnum:]}\\)\\.])'"${dirs}"',\\1@PORTAGE_EPREFIX@/\\2,g'
+ -e 's,^'"${dirs}"',@PORTAGE_EPREFIX@/\\1,'
)
sources=( )
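To make the intent concrete, here is a minimal sketch of the transformation these two rules aim for, written with plain \1/\2 backreferences (the backslash escaping above is revisited in the 2024-01-18 "fix hprefixify replacement logic" commit earlier in this thread); the input lines are hypothetical:

    dirs='/(usr|lib(|[onx]?32|n?64)|etc|bin|sbin|var|opt|run)'
    printf '%s\n' '#!/bin/bash' 'PATH="/usr/sbin:/usr/bin"' \
        | sed -r \
              -e 's,([^[:alnum:]}\)\.])'"${dirs}"',\1@PORTAGE_EPREFIX@/\2,g' \
              -e 's,^'"${dirs}"',@PORTAGE_EPREFIX@/\1,'
    # prints: #!@PORTAGE_EPREFIX@/bin/bash
    # prints: PATH="@PORTAGE_EPREFIX@/usr/sbin:@PORTAGE_EPREFIX@/usr/bin"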
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-12-03 9:54 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-12-03 9:54 UTC (permalink / raw)
To: gentoo-commits
commit: 09f6074e959f5c1bd86d5a31cd64d9e81c7d3a95
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 3 09:39:37 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 3 09:39:37 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=09f6074e
travis: remove obsolete file
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
travis.sh | 31 -------------------------------
1 file changed, 31 deletions(-)
diff --git a/travis.sh b/travis.sh
deleted file mode 100755
index bcb95a9cb1..0000000000
--- a/travis.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/usr/bin/env bash
-
-# this script runs the tests as Travis would do (.travis.yml) and can be
-# used to test the Prefix branch of portage on a non-Prefix system
-
-: ${TMPDIR=/var/tmp}
-
-HERE=$(dirname $(realpath ${BASH_SOURCE[0]}))
-REPO=${HERE##*/}.$$
-
-cd ${TMPDIR}
-git clone ${HERE} ${REPO}
-
-cd ${REPO}
-printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
-find . -type f -exec \
- sed -e "s|@PORTAGE_EPREFIX@||" \
- -e "s|@PORTAGE_BASE@|${PWD}|" \
- -e "s|@PORTAGE_MV@|$(type -P mv)|" \
- -e "s|@PORTAGE_BASH@|$(type -P bash)|" \
- -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|" \
- -e "s|@EXTRA_PATH@|${EPREFIX}/usr/sbin:${EPREFIX}/sbin|" \
- -e "s|@portagegroup@|$(id -gn)|" \
- -e "s|@portageuser@|$(id -un)|" \
- -e "s|@rootuser@|$(id -un)|" \
- -e "s|@rootuid@|$(id -u)|" \
- -e "s|@rootgid@|$(id -g)|" \
- -e "s|@sysconfdir@|${EPREFIX}/etc|" \
- -i '{}' +
-unset EPREFIX
-./setup.py test
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-12-03 9:54 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-12-03 9:54 UTC (permalink / raw)
To: gentoo-commits
commit: a05b863dba5fb23207ccf3f79aac9b290163c11a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 3 09:30:55 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 3 09:30:55 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=a05b863d
Merge branch 'master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
NEWS | 30 +++-
bin/ebuild | 4 -
lib/_emerge/AbstractEbuildProcess.py | 6 -
lib/_emerge/Binpkg.py | 6 +-
lib/_emerge/CompositeTask.py | 8 +-
lib/_emerge/DepPriority.py | 19 +--
lib/_emerge/DepPriorityNormalRange.py | 8 +-
lib/_emerge/DepPrioritySatisfiedRange.py | 13 +-
lib/_emerge/EbuildBuild.py | 4 +
lib/_emerge/Scheduler.py | 18 ++-
lib/_emerge/actions.py | 5 -
lib/_emerge/depgraph.py | 65 +++++++--
lib/portage/tests/resolver/ResolverPlayground.py | 3 -
.../tests/resolver/test_alternatives_gzip.py | 5 +-
lib/portage/tests/resolver/test_merge_order.py | 22 +--
.../tests/resolver/test_rebuild_ghostscript.py | 6 +-
.../resolver/test_runtime_cycle_merge_order.py | 153 ++++++++++++++++++++-
meson.build | 2 +-
18 files changed, 311 insertions(+), 66 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-11-24 20:18 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-11-24 20:18 UTC (permalink / raw)
To: gentoo-commits
commit: 7f7688a31231ba51f89090c3983e92ecb7f96854
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 24 20:18:08 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Nov 24 20:18:08 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=7f7688a3
tarball: setup.py is gone, meson.build is now here
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
tarball.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tarball.sh b/tarball.sh
index 7d35dacf86..a8f52555da 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -26,7 +26,7 @@ cd "${DEST}"
# expand version
sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}_prefix'"/' \
lib/portage/__init__.py
-sed -i -e "/version = /s/'[^']\+'/'${V}-prefix'/" setup.py
+sed -i -e "/\<version : /s/'[^']\+'/'${V}-prefix'/" meson.build
sed -i -e "1s/VERSION/${V}-prefix/" man/{,ru/}*.[15]
sed -i -e "s/@version@/${V}/" configure.ac
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-11-24 20:06 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-11-24 20:06 UTC (permalink / raw)
To: gentoo-commits
commit: df8959465f680edf4c37f0c41b9301b6cdc7eb54
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 24 19:54:29 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Nov 24 19:54:29 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=df895946
Merge branch 'master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.builds/ci.yml | 38 +-
.builds/lint.yml | 12 +-
.builds/setup-python.sh | 2 +
.github/workflows/ci.yml | 38 +-
.github/workflows/lint.yml | 15 +-
.pre-commit-config.yaml | 10 +-
DEVELOPING | 16 +-
MANIFEST.in | 30 -
NEWS | 228 +++++
README.md | 2 +-
TEST-NOTES | 2 +-
bin/cgroup-release-agent | 2 -
bin/dispatch-conf | 27 +
bin/ebuild | 102 +--
bin/ebuild-helpers/portageq | 8 -
bin/ebuild-helpers/prepall | 2 +-
bin/ebuild-helpers/prepalldocs | 9 +-
bin/ebuild-helpers/prepallinfo | 9 +-
bin/ebuild-helpers/prepallman | 9 +-
bin/ebuild-helpers/prepallstrip | 18 +-
bin/ebuild-helpers/prepinfo | 41 +-
bin/ebuild-helpers/prepman | 13 +-
bin/ebuild-helpers/prepstrip | 14 +-
bin/emaint | 27 +-
bin/emerge | 151 ++--
bin/emerge-webrsync | 40 +-
bin/env-update | 19 +-
bin/estrip | 4 +-
bin/etc-update | 13 +-
bin/install-qa-check.d/05double-D | 2 +-
bin/install-qa-check.d/05prefix | 4 +-
bin/install-qa-check.d/60openrc | 2 +-
bin/meson.build | 73 ++
bin/misc-functions.sh | 13 -
bin/phase-functions.sh | 5 +-
bin/phase-helpers.sh | 8 +-
bin/portageq-wrapper | 1 -
bin/save-ebuild-env.sh | 2 +-
cnf/make.globals | 5 +-
cnf/meson.build | 144 ++++
cnf/repos.conf | 2 +-
doc/api/meson.build | 43 +
doc/config/sets.docbook | 6 +-
doc/fragment/meson.build | 5 +
doc/fragment/version.in | 1 +
doc/meson.build | 57 ++
doc/package/ebuild/eapi/4.docbook | 2 +-
doc/portage.docbook | 2 -
doc/qa.docbook | 4 +-
lib/_emerge/AbstractEbuildProcess.py | 95 +--
lib/_emerge/BinpkgVerifier.py | 4 +-
lib/_emerge/EbuildFetcher.py | 31 +-
lib/_emerge/Package.py | 4 +-
lib/_emerge/Scheduler.py | 24 +-
lib/_emerge/SpawnProcess.py | 177 ++--
lib/_emerge/UseFlagDisplay.py | 2 +-
lib/_emerge/actions.py | 85 +-
lib/_emerge/create_depgraph_params.py | 10 +-
lib/_emerge/depgraph.py | 322 +++++--
lib/_emerge/main.py | 18 +-
lib/_emerge/meson.build | 101 +++
lib/_emerge/resolver/meson.build | 14 +
lib/_emerge/search.py | 14 +-
lib/meson.build | 2 +
lib/portage/__init__.py | 55 +-
lib/portage/_compat_upgrade/meson.build | 10 +
lib/portage/_emirrordist/ContentDB.py | 4 +-
lib/portage/_emirrordist/DeletionTask.py | 14 +-
lib/portage/_emirrordist/FetchIterator.py | 2 +-
lib/portage/_emirrordist/FetchTask.py | 3 +-
lib/portage/_emirrordist/MirrorDistTask.py | 24 +-
lib/portage/_emirrordist/main.py | 14 +-
lib/portage/_emirrordist/meson.build | 15 +
lib/portage/_sets/__init__.py | 10 +-
lib/portage/_sets/files.py | 13 +-
lib/portage/_sets/meson.build | 15 +
lib/portage/_sets/shell.py | 2 +-
lib/portage/binrepo/config.py | 2 +-
lib/portage/binrepo/meson.build | 8 +
lib/portage/cache/index/meson.build | 9 +
lib/portage/cache/mappings.py | 322 +++----
lib/portage/cache/meson.build | 20 +
lib/portage/cache/sql_template.py | 4 +-
lib/portage/cache/sqlite.py | 19 +-
lib/portage/cache/template.py | 10 +-
lib/portage/const.py | 70 +-
lib/portage/dbapi/_MergeProcess.py | 105 ++-
lib/portage/dbapi/_SyncfsProcess.py | 15 +-
lib/portage/dbapi/__init__.py | 45 +-
lib/portage/dbapi/bintree.py | 145 +++-
lib/portage/dbapi/meson.build | 22 +
lib/portage/dbapi/porttree.py | 41 +-
lib/portage/dbapi/vartree.py | 108 +--
lib/portage/dep/__init__.py | 17 +-
lib/portage/dep/_slot_operator.py | 8 +-
lib/portage/dep/dep_check.py | 6 +-
lib/portage/dep/meson.build | 12 +
lib/portage/dep/soname/SonameAtom.py | 6 +-
lib/portage/dep/soname/meson.build | 10 +
lib/portage/elog/meson.build | 16 +
lib/portage/elog/mod_echo.py | 16 +-
lib/portage/emaint/main.py | 2 +-
lib/portage/emaint/meson.build | 11 +
lib/portage/emaint/modules/binhost/meson.build | 8 +
lib/portage/emaint/modules/config/meson.build | 8 +
lib/portage/emaint/modules/logs/meson.build | 8 +
lib/portage/emaint/modules/merges/merges.py | 2 +-
lib/portage/emaint/modules/merges/meson.build | 8 +
lib/portage/emaint/modules/meson.build | 16 +
lib/portage/emaint/modules/move/meson.build | 8 +
lib/portage/emaint/modules/resume/meson.build | 8 +
lib/portage/emaint/modules/sync/meson.build | 8 +
lib/portage/emaint/modules/sync/sync.py | 2 +-
lib/portage/emaint/modules/world/meson.build | 8 +
lib/portage/env/config.py | 2 +-
lib/portage/env/meson.build | 10 +
lib/portage/gpg.py | 4 +-
lib/portage/gpkg.py | 172 +---
lib/portage/installation.py | 21 +
lib/portage/locks.py | 52 +-
lib/portage/meson.build | 74 ++
lib/portage/news.py | 10 +-
lib/portage/output.py | 4 +-
.../package/ebuild/_config/VirtualsManager.py | 4 +-
.../package/ebuild/_config/env_var_validation.py | 2 +-
lib/portage/package/ebuild/_config/helper.py | 2 +-
lib/portage/package/ebuild/_config/meson.build | 17 +
.../package/ebuild/_config/special_env_vars.py | 2 +
lib/portage/package/ebuild/_ipc/meson.build | 10 +
lib/portage/package/ebuild/_metadata_invalid.py | 2 +-
.../ebuild/_parallel_manifest/ManifestProcess.py | 30 +-
.../ebuild/_parallel_manifest/ManifestScheduler.py | 2 +-
.../package/ebuild/_parallel_manifest/meson.build | 10 +
lib/portage/package/ebuild/config.py | 12 +-
lib/portage/package/ebuild/digestgen.py | 2 +-
lib/portage/package/ebuild/doebuild.py | 50 +-
lib/portage/package/ebuild/fetch.py | 4 +-
lib/portage/package/ebuild/meson.build | 23 +
lib/portage/package/meson.build | 9 +
lib/portage/process.py | 27 +-
lib/portage/proxy/meson.build | 9 +
lib/portage/repository/config.py | 4 +-
lib/portage/repository/meson.build | 10 +
.../repository/storage/hardlink_quarantine.py | 2 +-
lib/portage/repository/storage/hardlink_rcu.py | 6 +-
lib/portage/repository/storage/meson.build | 11 +
lib/portage/sync/controller.py | 11 +-
lib/portage/sync/meson.build | 14 +
lib/portage/sync/modules/cvs/cvs.py | 4 +-
lib/portage/sync/modules/cvs/meson.build | 8 +
lib/portage/sync/modules/git/__init__.py | 1 +
lib/portage/sync/modules/git/git.py | 143 +++-
lib/portage/sync/modules/git/meson.build | 8 +
lib/portage/sync/modules/mercurial/meson.build | 8 +
lib/portage/sync/modules/meson.build | 14 +
lib/portage/sync/modules/rsync/meson.build | 8 +
lib/portage/sync/modules/rsync/rsync.py | 13 +-
lib/portage/sync/modules/svn/meson.build | 8 +
lib/portage/sync/modules/webrsync/meson.build | 8 +
lib/portage/sync/syncbase.py | 16 +-
lib/portage/tests/__init__.py | 269 +-----
lib/portage/tests/bin/meson.build | 14 +
lib/portage/tests/bin/test_doins.py | 8 +-
lib/portage/tests/conftest.py | 26 +-
lib/portage/tests/dbapi/meson.build | 12 +
lib/portage/tests/dbapi/test_auxdb.py | 103 ++-
lib/portage/tests/dbapi/test_bintree.py | 86 +-
lib/portage/tests/dbapi/test_portdb_cache.py | 2 +-
lib/portage/tests/dep/meson.build | 28 +
.../tests/dep/{testAtom.py => test_atom.py} | 0
...ckRequiredUse.py => test_check_required_use.py} | 0
...endedAtomDict.py => test_extended_atom_dict.py} | 0
...fectingUSE.py => test_extract_affecting_use.py} | 0
.../dep/{testStandalone.py => test_standalone.py} | 0
lib/portage/tests/ebuild/meson.build | 17 +
lib/portage/tests/ebuild/test_doebuild_fd_pipes.py | 42 +-
lib/portage/tests/ebuild/test_fetch.py | 26 +-
lib/portage/tests/ebuild/test_ipc_daemon.py | 25 +-
lib/portage/tests/ebuild/test_spawn.py | 2 +-
lib/portage/tests/emerge/conftest.py | 845 +++++++++++++++++++
lib/portage/tests/emerge/meson.build | 14 +
lib/portage/tests/emerge/test_actions.py | 4 +-
lib/portage/tests/emerge/test_baseline.py | 221 +++++
lib/portage/tests/emerge/test_config_protect.py | 14 +-
.../emerge/test_emerge_blocker_file_collision.py | 7 +-
lib/portage/tests/emerge/test_emerge_slot_abi.py | 14 +-
lib/portage/tests/emerge/test_simple.py | 724 ----------------
lib/portage/tests/env/config/meson.build | 12 +
lib/portage/tests/env/meson.build | 10 +
lib/portage/tests/glsa/meson.build | 9 +
lib/portage/tests/glsa/test_security_set.py | 16 +-
lib/portage/tests/gpkg/meson.build | 15 +
lib/portage/tests/gpkg/test_gpkg_checksum.py | 2 +-
lib/portage/tests/gpkg/test_gpkg_gpg.py | 2 +-
.../tests/gpkg/test_gpkg_metadata_update.py | 2 +-
lib/portage/tests/gpkg/test_gpkg_metadata_url.py | 2 +-
lib/portage/tests/gpkg/test_gpkg_path.py | 2 +-
lib/portage/tests/gpkg/test_gpkg_size.py | 2 +-
lib/portage/tests/gpkg/test_gpkg_stream.py | 2 +-
lib/portage/tests/lafilefixer/meson.build | 9 +
lib/portage/tests/lafilefixer/test_lafilefixer.py | 4 +-
lib/portage/tests/lazyimport/meson.build | 10 +
.../test_lazy_import_portage_baseline.py | 1 +
lib/portage/tests/lint/meson.build | 12 +
lib/portage/tests/locks/meson.build | 10 +
lib/portage/tests/meson.build | 31 +
lib/portage/tests/news/meson.build | 9 +
lib/portage/tests/news/test_NewsItem.py | 8 +-
lib/portage/tests/process/meson.build | 18 +
lib/portage/tests/process/test_AsyncFunction.py | 92 +-
lib/portage/tests/process/test_ForkProcess.py | 46 +
lib/portage/tests/process/test_pickle.py | 43 +
lib/portage/tests/process/test_poll.py | 1 -
lib/portage/tests/process/test_unshare_net.py | 13 -
lib/portage/tests/resolver/ResolverPlayground.py | 10 +-
.../resolver/binpkg_multi_instance/meson.build | 10 +
lib/portage/tests/resolver/meson.build | 98 +++
lib/portage/tests/resolver/soname/meson.build | 19 +
.../tests/resolver/soname/test_skip_update.py | 19 +-
.../tests/resolver/test_alternatives_gzip.py | 245 ++++++
.../tests/resolver/test_autounmask_binpkg_use.py | 2 +-
.../tests/resolver/test_autounmask_multilib_use.py | 2 -
.../resolver/test_autounmask_use_slot_conflict.py | 2 -
.../tests/resolver/test_circular_choices_rust.py | 2 +-
.../resolver/test_depclean_slot_unavailable.py | 2 +-
lib/portage/tests/resolver/test_merge_order.py | 24 +-
lib/portage/tests/resolver/test_or_choices.py | 2 -
.../tests/resolver/test_rebuild_ghostscript.py | 160 ++++
lib/portage/tests/resolver/test_useflags.py | 176 ++++
lib/portage/tests/runTests.py | 84 --
lib/portage/tests/sets/base/meson.build | 10 +
...lPackageSet.py => test_internal_package_set.py} | 0
.../{testVariableSet.py => test_variable_set.py} | 0
lib/portage/tests/sets/files/meson.build | 10 +
...estConfigFileSet.py => test_config_file_set.py} | 0
...estStaticFileSet.py => test_static_file_set.py} | 0
lib/portage/tests/sets/meson.build | 12 +
lib/portage/tests/sets/shell/meson.build | 9 +
.../sets/shell/{testShell.py => test_shell.py} | 0
lib/portage/tests/sync/meson.build | 9 +
lib/portage/tests/sync/test_sync_local.py | 15 +-
lib/portage/tests/unicode/meson.build | 9 +
lib/portage/tests/unicode/test_string_format.py | 2 +-
lib/portage/tests/update/meson.build | 11 +
lib/portage/tests/util/dyn_libs/meson.build | 9 +
lib/portage/tests/util/eventloop/meson.build | 9 +
lib/portage/tests/util/file_copy/meson.build | 9 +
lib/portage/tests/util/file_copy/test_copyfile.py | 11 +-
lib/portage/tests/util/futures/asyncio/meson.build | 15 +
.../tests/util/futures/asyncio/test_pipe_closed.py | 4 +-
.../util/futures/asyncio/test_subprocess_exec.py | 4 +-
lib/portage/tests/util/futures/meson.build | 15 +
.../tests/util/futures/test_iter_completed.py | 17 +-
lib/portage/tests/util/meson.build | 31 +
lib/portage/tests/util/test_getconfig.py | 2 +-
lib/portage/tests/util/test_uniqueArray.py | 3 +-
lib/portage/tests/versions/meson.build | 10 +
lib/portage/tests/xpak/meson.build | 9 +
lib/portage/util/ExtractKernelVersion.py | 4 -
lib/portage/util/_async/AsyncFunction.py | 32 +-
lib/portage/util/_async/FileCopier.py | 18 +-
lib/portage/util/_async/FileDigester.py | 71 +-
lib/portage/util/_async/ForkProcess.py | 187 ++++-
lib/portage/util/_async/PipeLogger.py | 2 +-
lib/portage/util/_async/meson.build | 20 +
lib/portage/util/_dyn_libs/meson.build | 14 +
lib/portage/util/_eventloop/asyncio_event_loop.py | 40 +-
lib/portage/util/_eventloop/meson.build | 9 +
lib/portage/util/bin_entry_point.py | 18 +-
lib/portage/util/elf/meson.build | 9 +
lib/portage/util/endian/meson.build | 8 +
lib/portage/util/env_update.py | 67 +-
lib/portage/util/file_copy/meson.build | 7 +
lib/portage/util/futures/_asyncio/meson.build | 8 +
lib/portage/util/futures/executor/meson.build | 8 +
lib/portage/util/futures/extendedfutures.py | 2 +-
lib/portage/util/futures/meson.build | 17 +
lib/portage/util/futures/retry.py | 2 +-
lib/portage/util/iterators/meson.build | 8 +
lib/portage/util/lafilefixer.py | 2 +-
lib/portage/util/meson.build | 49 ++
lib/portage/util/movefile.py | 5 +-
lib/portage/util/mtimedb.py | 2 +-
lib/portage/util/socks5.py | 2 +-
lib/portage/util/writeable_check.py | 2 +-
lib/portage/versions.py | 91 +-
lib/portage/xml/meson.build | 8 +
lib/portage/xpak.py | 7 +-
man/color.map.5 | 8 +-
man/dispatch-conf.1 | 2 +-
man/ebuild.1 | 2 +-
man/ebuild.5 | 34 +-
man/egencache.1 | 2 +-
man/emaint.1 | 2 +-
man/emerge.1 | 20 +-
man/emirrordist.1 | 4 +-
man/env-update.1 | 2 +-
man/etc-update.1 | 2 +-
man/fixpackages.1 | 2 +-
man/glsa-check.1 | 2 +-
man/make.conf.5 | 53 +-
man/meson.build | 31 +
man/portage.5 | 40 +-
man/quickpkg.1 | 2 +-
man/ru/color.map.5 | 2 +-
man/ru/dispatch-conf.1 | 2 +-
man/ru/ebuild.1 | 2 +-
man/ru/env-update.1 | 2 +-
man/ru/etc-update.1 | 2 +-
man/ru/fixpackages.1 | 2 +-
man/ru/meson.build | 19 +
meson.build | 126 +++
meson_options.txt | 59 ++
misc/emerge-delta-webrsync | 303 +++++--
pyproject.toml | 44 +-
runtests | 184 ----
setup.py | 925 ---------------------
src/meson.build | 50 ++
src/portage_util_file_copy_reflink_linux.c | 1 +
tox.ini | 7 +-
320 files changed, 6795 insertions(+), 4202 deletions(-)
diff --cc bin/emerge-webrsync
index db4bb6d1e8,99da05543a..9959b6a5a7
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -71,32 -71,21 +71,29 @@@ die()
argv0=$0
- # Use portageq from the same directory/prefix as the current script, so
- # that we don't have to rely on PATH including the current EPREFIX.
- scriptpath=${BASH_SOURCE[0]}
- if [[ -x "${scriptpath%/*}/portageq" ]]; then
- portageq=${scriptpath%/*}/portageq
- elif type -P portageq > /dev/null ; then
- portageq=portageq
- else
- die "could not find 'portageq'; aborting"
- fi
+ # Use emerge and portageq from the same directory/prefix as the current script,
+ # so that we don't have to rely on PATH including the current EPREFIX.
+ emerge=$(PATH="${BASH_SOURCE[0]%/*}:${PATH}" type -P emerge)
+ [[ -n ${emerge} ]] || die "could not find 'emerge'; aborting"
+ portageq=$(PATH="${BASH_SOURCE[0]%/*}:${PATH}" type -P portageq)
+ [[ -n ${portageq} ]] || die "could not find 'portageq'; aborting"
+# PREFIX LOCAL: retrieve PORTAGE_USER/PORTAGE_GROUP
eval "$("${portageq}" envvar -v DISTDIR EPREFIX FEATURES \
FETCHCOMMAND GENTOO_MIRRORS \
PORTAGE_BIN_PATH PORTAGE_CONFIGROOT PORTAGE_GPG_DIR \
PORTAGE_NICENESS PORTAGE_REPOSITORIES PORTAGE_RSYNC_EXTRA_OPTS \
PORTAGE_RSYNC_OPTS PORTAGE_TEMP_GPG_DIR PORTAGE_TMPDIR \
- USERLAND http_proxy https_proxy ftp_proxy)"
+ USERLAND http_proxy ftp_proxy \
++ USERLAND http_proxy https_proxy ftp_proxy
+ PORTAGE_USER PORTAGE_GROUP)"
- export http_proxy ftp_proxy
+ export http_proxy https_proxy ftp_proxy
+# PREFIX LOCAL: use Prefix servers, just because we want this and infra
+# can't support us yet
+GENTOO_MIRRORS="http://rsync.prefix.bitzolder.nl"
+# END PREFIX LOCAL
+
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
repo_name=gentoo
diff --cc bin/install-qa-check.d/05prefix
index 8a83893f14,28f2c06afe..8cf3728627
--- a/bin/install-qa-check.d/05prefix
+++ b/bin/install-qa-check.d/05prefix
@@@ -80,14 -76,12 +80,12 @@@ install_qa_check_prefix()
fi
continue
fi
- # BEGIN PREFIX LOCAL: also check init scripts
- # unprefixed shebang, is the script directly in ${PATH} or an init
- # script?
+ # unprefixed shebang, is the script directly in ${PATH} or an init script?
if [[ ":${PATH}:${EPREFIX}/etc/init.d:" == *":${fp}:"* ]] ; then
if [[ -e ${EROOT}${line[0]} || -e ${ED}${line[0]} ]] ; then
- # is it unprefixed, but we can just fix it because a
- # prefixed variant exists
- eqawarn "prefixing shebang of ${fn#${D}}"
+ # is it unprefixed, but we can just fix it because an
+ # eprefixed variant exists
+ eqawarn "eprefixing shebang of ${fn#${D%/}/}"
# statement is made idempotent on purpose, because
# symlinks may point to the same target, and hence the
# same real file may be sedded multiple times since we
diff --cc bin/save-ebuild-env.sh
index 294cab2555,3a2560aabf..c3c83c91d2
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
diff --cc lib/portage/const.py
index 8aa7e557ff,bf310bb6e0..8769ab2707
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -2,15 -2,10 +2,17 @@@
# Copyright 1998-2023 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
+# BEGIN PREFIX LOCAL
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+# END PREFIX LOCAL
+
import os
+ from portage import installation
+
# ===========================================================================
# START OF CONSTANTS -- START OF CONSTANTS -- START OF CONSTANTS -- START OF
# ===========================================================================
@@@ -79,27 -98,12 +105,26 @@@ PORTAGE_PYM_PATH = os.path.realpath(os.
LOCALE_DATA_PATH = f"{PORTAGE_BASE_PATH}/locale" # FIXME: not used
EBUILD_SH_BINARY = f"{PORTAGE_BIN_PATH}/ebuild.sh"
MISC_SH_BINARY = f"{PORTAGE_BIN_PATH}/misc-functions.sh"
- # BEGIN PREFIX LOCAL: use EPREFIX for binaries
- SANDBOX_BINARY = f"{EPREFIX}/usr/bin/sandbox"
- FAKEROOT_BINARY = f"{EPREFIX}/usr/bin/fakeroot"
- # END PREFIX LOCAL
- BASH_BINARY = "/bin/bash"
- MOVE_BINARY = "/bin/mv"
- PRELINK_BINARY = "/usr/sbin/prelink"
+ SANDBOX_BINARY = f"{BINARY_PREFIX}/usr/bin/sandbox"
+ FAKEROOT_BINARY = f"{BINARY_PREFIX}/usr/bin/fakeroot"
+ BASH_BINARY = f"{BINARY_PREFIX}/bin/bash"
+ MOVE_BINARY = f"{BINARY_PREFIX}/bin/mv"
+ PRELINK_BINARY = f"{BINARY_PREFIX}/usr/sbin/prelink"
+
+# BEGIN PREFIX LOCAL: macOS sandbox
+MACOSSANDBOX_BINARY = "/usr/bin/sandbox-exec"
+MACOSSANDBOX_PROFILE = '''(version 1)
+(allow default)
+(deny file-write*)
+(allow file-write* file-write-setugid
+@@MACOSSANDBOX_PATHS@@)
+(allow file-write-data
+@@MACOSSANDBOX_PATHS_CONTENT_ONLY@@)'''
+
+PORTAGE_GROUPNAME = portagegroup
+PORTAGE_USERNAME = portageuser
+# END PREFIX LOCAL
+
INVALID_ENV_FILE = "/etc/spork/is/not/valid/profile.env"
MERGING_IDENTIFIER = "-MERGING-"
REPO_NAME_FILE = "repo_name"
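For context on the macOS bits above: MACOSSANDBOX_PROFILE is a macOS sandbox profile template, and its @@MACOSSANDBOX_PATHS@@ placeholders presumably get expanded to path filters such as (subpath ...) before use. A minimal, hypothetical sketch of how such a profile can be handed to /usr/bin/sandbox-exec (not Portage's actual invocation):

    # allow writes only under /tmp/image, deny them everywhere else
    mkdir -p /tmp/image
    profile='(version 1) (allow default) (deny file-write*) (allow file-write* (subpath "/tmp/image"))'
    /usr/bin/sandbox-exec -p "${profile}" touch /tmp/image/ok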
diff --cc lib/portage/dbapi/bintree.py
index 78f7604441,6446fde95a..9d09db3fe2
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -128,11 -128,30 +128,32 @@@ class bindbapi(fakedbapi)
"SLOT",
"USE",
"_mtime_",
+ # PREFIX LOCAL
+ "EPREFIX",
}
- self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
+ self._aux_cache_slot_dict_cache = None
+
+ @property
+ def _aux_cache_slot_dict(self):
+ if self._aux_cache_slot_dict_cache is None:
+ self._aux_cache_slot_dict_cache = slot_dict_class(self._aux_cache_keys)
+ return self._aux_cache_slot_dict_cache
+
+ def __getstate__(self):
+ state = self.__dict__.copy()
+ # These attributes are not picklable, so they are automatically
+ # regenerated after unpickling.
+ state["_aux_cache_slot_dict_cache"] = None
+ state["_instance_key"] = None
+ return state
+
+ def __setstate__(self, state):
+ self.__dict__.update(state)
+ if self._multi_instance:
+ self._instance_key = self._instance_key_multi_instance
+ else:
+ self._instance_key = self._instance_key_cpv
@property
def writable(self):
diff --cc lib/portage/package/ebuild/fetch.py
index 49cce7f063,5f970fe62d..d67d3115fe
--- a/lib/portage/package/ebuild/fetch.py
+++ b/lib/portage/package/ebuild/fetch.py
@@@ -234,10 -232,9 +234,10 @@@ def _ensure_distdir(settings, distdir)
if "FAKED_MODE" in settings:
# When inside fakeroot, directories with portage's gid appear
# to have root's gid. Therefore, use root's gid instead of
- # portage's gid to avoid spurrious permissions adjustments
+ # portage's gid to avoid spurious permissions adjustments
# when inside fakeroot.
- dir_gid = 0
+ # PREFIX LOCAL: do not assume root to be 0
+ dir_gid = rootgid
userfetch = portage.data.secpass >= 2 and "userfetch" in settings.features
userpriv = portage.data.secpass >= 2 and "userpriv" in settings.features
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-11-24 20:06 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-11-24 20:06 UTC (permalink / raw)
To: gentoo-commits
commit: e7cab3eb02f760ee91b93cf9a823944b1051cfbd
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 24 19:25:15 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Nov 24 20:05:51 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=e7cab3eb
subst-install: replace PORTAGE_BASE_PATH
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
subst-install.in | 1 +
1 file changed, 1 insertion(+)
diff --git a/subst-install.in b/subst-install.in
index b73a23c1a1..332b77b656 100644
--- a/subst-install.in
+++ b/subst-install.in
@@ -22,6 +22,7 @@ at='@'
sedexp=(
-e "s,${at}EXTRA_PATH${at},@EXTRA_PATH@,g"
-e "s,${at}PORTAGE_BASE${at},@PORTAGE_BASE@,g"
+ -e "s,${at}PORTAGE_BASE_PATH${at},@PORTAGE_BASE@,g"
-e "s,${at}PORTAGE_BASH${at},@PORTAGE_BASH@,g"
-e "s,${at}PORTAGE_EPREFIX${at},@PORTAGE_EPREFIX@,g"
-e "s,${at}PORTAGE_MV${at},@PORTAGE_MV@,g"
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-06-22 8:47 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-06-22 8:47 UTC (permalink / raw)
To: gentoo-commits
commit: 9ac8c8b9062b316d266f8a5ed678143eee9d2d1e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 22 08:46:28 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jun 22 08:47:08 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=9ac8c8b9
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
NEWS | 4 +---
lib/_emerge/depgraph.py | 4 +++-
lib/portage/tests/resolver/test_slot_conflict_blocked_prune.py | 2 +-
...st_unecessary_slot_upgrade.py => test_unnecessary_slot_upgrade.py} | 0
setup.py | 2 +-
5 files changed, 6 insertions(+), 6 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-06-17 9:04 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-06-17 9:04 UTC (permalink / raw)
To: gentoo-commits
commit: ac357a05d5aa49f43504a4ef2acaabc9e8d8daa5
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 17 09:03:56 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Jun 17 09:03:56 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=ac357a05
tarball: cleanup script
make it flow a bit more naturally
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
tarball.sh | 56 ++++++++++++++++++++++++++++++++------------------------
1 file changed, 32 insertions(+), 24 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index 216c1d17f..7d35dacf8 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -9,38 +9,46 @@ if [ -z "$1" ]; then
fi
export PKG="prefix-portage"
-export TMP="/var/tmp"
+export TMP="/var/tmp/${PKG}-build.$$"
export V="$1"
export DEST="${TMP}/${PKG}-${V}"
+export TARFILE="/var/tmp/${PKG}-${V}.tar.bz2"
-if [[ -e ${DEST} ]]; then
- echo ${DEST} already exists, please remove first
- exit 1
-fi
+# hypothetically it can exist
+rm -Rf "${TMP}"
+
+# create copy of source
+install -d -m0755 "${DEST}"
+rsync -a --exclude='.git' --exclude='.hg' --exclude="repoman/" . "${DEST}"
-install -d -m0755 ${DEST}
-rsync -a --exclude='.git' --exclude='.hg' --exclude="repoman/" . ${DEST}
+cd "${DEST}"
+
+# expand version
sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}_prefix'"/' \
- ${DEST}/lib/portage/__init__.py
-sed -i -e "/version = /s/'[^']\+'/'${V}-prefix'/" ${DEST}/setup.py
-sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/{,ru/}*.[15]
-sed -i -e "s/@version@/${V}/" ${DEST}/configure.ac
+ lib/portage/__init__.py
+sed -i -e "/version = /s/'[^']\+'/'${V}-prefix'/" setup.py
+sed -i -e "1s/VERSION/${V}-prefix/" man/{,ru/}*.[15]
+sed -i -e "s/@version@/${V}/" configure.ac
-cd ${DEST}
+# cleanup cruft
find -name '*~' | xargs --no-run-if-empty rm -f
+find -name '*.#*' | xargs --no-run-if-empty rm -f
find -name '*.pyc' | xargs --no-run-if-empty rm -f
find -name '*.pyo' | xargs --no-run-if-empty rm -f
-cd $TMP
-rm -f \
- ${PKG}-${V}/bin/emerge.py \
- ${PKG}-${V}/bin/{pmake,sandbox} \
- ${PKG}-${V}/{bin,lib}/'.#'* \
- ${PKG}-${V}/{bin,lib}/*.{orig,diff} \
- ${PKG}-${V}/{bin,lib}/*.py[oc]
-cd $TMP/${PKG}-${V}
+find -name '*.orig' | xargs --no-run-if-empty rm -f
+rm -Rf autom4te.cache
+
+# we don't need these (why?)
+rm -f bin/emerge.py bin/{pmake,sandbox}
+
+# generate a configure file
chmod a+x autogen.sh && ./autogen.sh || { echo "autogen failed!"; exit -1; };
rm -f autogen.sh tabcheck.py tarball.sh commit
-cd $TMP
-tar --numeric-owner -jcf ${TMP}/${PKG}-${V}.tar.bz2 ${PKG}-${V}
-rm -R ${TMP}/${PKG}-${V}
-ls -la ${TMP}/${PKG}-${V}.tar.bz2
+
+# produce final tarball
+cd "${TMP}"
+tar --numeric-owner -jcf "${TARFILE}" ${PKG}-${V}
+
+cd /
+rm -Rf "${TMP}"
+ls -la "${TARFILE}"
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2023-06-17 8:41 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2023-06-17 8:41 UTC (permalink / raw)
To: gentoo-commits
commit: 46fc4d8a2205d98584c4198070e42933c1cd1e62
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 17 08:40:27 2023 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Jun 17 08:40:27 2023 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=46fc4d8a
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.builds/ci.yml | 39 +
.builds/lint.yml | 47 +
.builds/setup-python.sh | 30 +
.editorconfig | 4 +
.git-blame-ignore-revs | 6 +
.github/workflows/black.yml | 18 -
.github/workflows/ci.yml | 64 +-
.github/workflows/lint.yml | 49 +
.github/workflows/pre-commit.yml | 14 +
.gitignore | 6 +
.pre-commit-config.yaml | 21 +
DEVELOPING | 47 +-
NEWS | 695 ++++-
README.md | 11 +
bin/archive-conf | 23 +-
bin/bashrc-functions.sh | 18 +-
bin/binhost-snapshot | 33 +-
bin/check-implicit-pointer-usage.py | 79 -
bin/chmod-lite.py | 1 -
bin/chpathtool.py | 20 +-
bin/clean_locks | 6 +-
bin/deprecated-path | 28 -
bin/dispatch-conf | 51 +-
bin/dohtml.py | 26 +-
bin/doins.py | 24 +-
bin/eapi.sh | 4 +-
bin/ebuild | 773 +++---
bin/ebuild-helpers/bsd/sed | 3 +-
bin/ebuild-helpers/dobin | 16 +-
bin/ebuild-helpers/doconfd | 6 +-
bin/ebuild-helpers/dodir | 4 +-
bin/ebuild-helpers/dodoc | 8 +-
bin/ebuild-helpers/doenvd | 6 +-
bin/ebuild-helpers/doexe | 23 +-
bin/ebuild-helpers/dohard | 4 +-
bin/ebuild-helpers/doheader | 6 +-
bin/ebuild-helpers/dohtml | 10 +-
bin/ebuild-helpers/doinfo | 6 +-
bin/ebuild-helpers/doinitd | 4 +-
bin/ebuild-helpers/doins | 20 +-
bin/ebuild-helpers/dolib | 22 +-
bin/ebuild-helpers/doman | 10 +-
bin/ebuild-helpers/domo | 20 +-
bin/ebuild-helpers/dosbin | 14 +-
bin/ebuild-helpers/dosed | 8 +-
bin/ebuild-helpers/dosym | 20 +-
bin/ebuild-helpers/fowners | 36 +-
bin/ebuild-helpers/fperms | 33 +-
bin/ebuild-helpers/newins | 14 +-
bin/ebuild-helpers/portageq | 27 +-
bin/ebuild-helpers/prepallstrip | 6 +-
bin/ebuild-helpers/prepinfo | 11 +-
bin/ebuild-helpers/prepman | 6 +-
bin/ebuild-helpers/prepstrip | 6 +-
bin/ebuild-helpers/unprivileged/chown | 3 +-
bin/ebuild-helpers/xattr/install | 2 +-
bin/ebuild-ipc.py | 526 ++--
bin/ebuild-pyhelper | 4 +-
bin/ebuild.sh | 192 +-
bin/ecompress | 25 +-
bin/ecompress-file | 15 +-
bin/egencache | 2433 +++++++++--------
bin/emaint | 87 +-
bin/emerge | 62 +-
bin/emerge-webrsync | 490 ++--
bin/env-update | 2 +-
bin/estrip | 108 +-
bin/etc-update | 32 +-
bin/filter-bash-environment.py | 2 +-
bin/fixpackages | 4 +-
bin/glsa-check | 37 +-
bin/gpkg-helper.py | 27 +-
bin/gpkg-sign | 77 +
bin/install-qa-check.d/05prefix | 8 +-
bin/install-qa-check.d/10executable-issues | 2 +-
bin/install-qa-check.d/10ignored-flags | 2 +-
bin/install-qa-check.d/20deprecated-directories | 6 +-
bin/install-qa-check.d/60pkgconfig | 61 +-
bin/install-qa-check.d/60udev | 5 +-
bin/install-qa-check.d/80libraries | 2 +-
bin/install-qa-check.d/90bad-bin-owner | 2 +-
bin/install-qa-check.d/90config-impl-decl | 141 +
bin/install-qa-check.d/90cython-dep | 45 +
bin/install-qa-check.d/90gcc-warnings | 132 +-
bin/install-qa-check.d/90world-writable | 10 +-
bin/install.py | 3 +-
bin/isolated-functions.sh | 80 +-
bin/lock-helper.py | 1 -
bin/misc-functions.sh | 133 +-
bin/phase-functions.sh | 278 +-
bin/phase-helpers.sh | 246 +-
bin/pid-ns-init | 6 +-
bin/portageq | 2854 ++++++++++----------
bin/portageq-wrapper | 19 +
bin/postinst-qa-check.d/50xdg-utils | 21 +-
bin/quickpkg | 102 +-
bin/regenworld | 14 +-
bin/save-ebuild-env.sh | 12 +-
bin/socks5-server.py | 2 +-
bin/xattr-helper.py | 7 +-
bin/xpak-helper.py | 14 +-
cnf/make.conf.example | 17 +
cnf/make.conf.example.loong.diff | 56 +
cnf/sets/portage.conf | 12 +-
doc/api/conf.py | 6 +-
lib/_emerge/AbstractEbuildProcess.py | 94 +-
lib/_emerge/AbstractPollTask.py | 5 +-
lib/_emerge/AsynchronousLock.py | 40 +-
lib/_emerge/AsynchronousTask.py | 3 +-
lib/_emerge/AtomArg.py | 1 -
lib/_emerge/Binpkg.py | 88 +-
lib/_emerge/BinpkgEnvExtractor.py | 6 +-
lib/_emerge/BinpkgExtractorAsync.py | 71 +-
lib/_emerge/BinpkgFetcher.py | 50 +-
lib/_emerge/BinpkgPrefetcher.py | 10 +-
lib/_emerge/BinpkgVerifier.py | 22 +-
lib/_emerge/Blocker.py | 1 -
lib/_emerge/BlockerCache.py | 5 +-
lib/_emerge/BlockerDB.py | 5 +-
lib/_emerge/CompositeTask.py | 3 +-
lib/_emerge/DepPriority.py | 1 -
lib/_emerge/DependencyArg.py | 3 +-
lib/_emerge/EbuildBinpkg.py | 36 +-
lib/_emerge/EbuildBuild.py | 34 +-
lib/_emerge/EbuildBuildDir.py | 3 +-
lib/_emerge/EbuildExecuter.py | 7 +-
lib/_emerge/EbuildFetcher.py | 17 +-
lib/_emerge/EbuildFetchonly.py | 5 +-
lib/_emerge/EbuildIpcDaemon.py | 3 +-
lib/_emerge/EbuildMerge.py | 11 +-
lib/_emerge/EbuildMetadataPhase.py | 5 +-
lib/_emerge/EbuildPhase.py | 33 +-
lib/_emerge/EbuildProcess.py | 2 -
lib/_emerge/EbuildSpawnProcess.py | 1 -
lib/_emerge/FakeVartree.py | 7 +-
lib/_emerge/FifoIpcDaemon.py | 1 -
lib/_emerge/JobStatusDisplay.py | 23 +-
lib/_emerge/MergeListItem.py | 13 +-
lib/_emerge/MetadataRegen.py | 21 +-
lib/_emerge/Package.py | 50 +-
lib/_emerge/PackageMerge.py | 65 +-
lib/_emerge/PackagePhase.py | 6 +-
lib/_emerge/PackageUninstall.py | 10 +-
lib/_emerge/PipeReader.py | 2 -
lib/_emerge/PollScheduler.py | 8 +-
lib/_emerge/Scheduler.py | 105 +-
lib/_emerge/SequentialTaskQueue.py | 2 -
lib/_emerge/SetArg.py | 1 -
lib/_emerge/SpawnProcess.py | 27 +-
lib/_emerge/SubProcess.py | 9 +-
lib/_emerge/Task.py | 7 +-
lib/_emerge/UseFlagDisplay.py | 7 +-
lib/_emerge/UserQuery.py | 7 +-
lib/_emerge/actions.py | 496 ++--
lib/_emerge/chk_updated_cfg_files.py | 8 +-
lib/_emerge/countdown.py | 6 +-
lib/_emerge/create_depgraph_params.py | 8 +-
lib/_emerge/create_world_atom.py | 4 +-
lib/_emerge/depgraph.py | 1060 ++++----
lib/_emerge/emergelog.py | 10 +-
lib/_emerge/getloadavg.py | 2 +-
lib/_emerge/is_valid_package_atom.py | 2 +-
lib/_emerge/main.py | 22 +-
lib/_emerge/post_emerge.py | 9 +-
lib/_emerge/resolver/backtracking.py | 8 +-
lib/_emerge/resolver/circular_dependency.py | 24 +-
lib/_emerge/resolver/output.py | 59 +-
lib/_emerge/resolver/output_helpers.py | 50 +-
lib/_emerge/resolver/package_tracker.py | 6 +-
lib/_emerge/resolver/slot_collision.py | 95 +-
lib/_emerge/search.py | 25 +-
lib/_emerge/show_invalid_depstring_notice.py | 11 +-
lib/_emerge/stdout_spinner.py | 1 +
lib/_emerge/unmerge.py | 49 +-
lib/portage/__init__.py | 31 +-
lib/portage/_compat_upgrade/binpkg_compression.py | 4 +-
.../_compat_upgrade/binpkg_multi_instance.py | 8 +-
lib/portage/_compat_upgrade/default_locations.py | 12 +-
lib/portage/_emirrordist/Config.py | 17 +-
lib/portage/_emirrordist/ContentDB.py | 16 +-
lib/portage/_emirrordist/DeletionIterator.py | 18 +-
lib/portage/_emirrordist/DeletionTask.py | 40 +-
lib/portage/_emirrordist/FetchIterator.py | 21 +-
lib/portage/_emirrordist/FetchTask.py | 118 +-
lib/portage/_emirrordist/MirrorDistTask.py | 55 +-
lib/portage/_emirrordist/main.py | 12 +-
lib/portage/_global_updates.py | 4 +-
lib/portage/_selinux.py | 4 +-
lib/portage/_sets/ProfilePackageSet.py | 6 +-
lib/portage/_sets/__init__.py | 42 +-
lib/portage/_sets/base.py | 16 +-
lib/portage/_sets/dbapi.py | 74 +-
lib/portage/_sets/files.py | 38 +-
lib/portage/_sets/libs.py | 6 +-
lib/portage/_sets/profiles.py | 12 +-
lib/portage/_sets/security.py | 4 +-
lib/portage/_sets/shell.py | 4 +-
lib/portage/binpkg.py | 32 +-
lib/portage/cache/anydbm.py | 5 +-
lib/portage/cache/cache_errors.py | 15 +-
lib/portage/cache/ebuild_xattr.py | 17 +-
lib/portage/cache/flat_hash.py | 22 +-
lib/portage/cache/fs_template.py | 7 +-
lib/portage/cache/index/IndexStreamIterator.py | 4 -
lib/portage/cache/index/pkg_desc_index.py | 3 +-
lib/portage/cache/mappings.py | 3 -
lib/portage/cache/metadata.py | 20 +-
lib/portage/cache/sql_template.py | 18 +-
lib/portage/cache/sqlite.py | 25 +-
lib/portage/cache/template.py | 35 +-
lib/portage/cache/volatile.py | 3 +-
lib/portage/checksum.py | 235 +-
lib/portage/const.py | 10 +-
lib/portage/cvstree.py | 12 +-
lib/portage/data.py | 7 +-
lib/portage/dbapi/IndexedPortdb.py | 8 +-
.../dbapi/_ContentsCaseSensitivityManager.py | 8 +-
lib/portage/dbapi/_MergeProcess.py | 15 +-
lib/portage/dbapi/_SyncfsProcess.py | 2 -
lib/portage/dbapi/_VdbMetadataDelta.py | 11 +-
lib/portage/dbapi/__init__.py | 19 +-
lib/portage/dbapi/_similar_name_search.py | 1 -
lib/portage/dbapi/bintree.py | 717 +++--
lib/portage/dbapi/porttree.py | 67 +-
lib/portage/dbapi/vartree.py | 378 ++-
lib/portage/dbapi/virtual.py | 21 +-
lib/portage/debug.py | 12 +-
lib/portage/dep/__init__.py | 51 +-
lib/portage/dep/_dnf.py | 10 +-
lib/portage/dep/_slot_operator.py | 9 +-
lib/portage/dep/dep_check.py | 36 +-
lib/portage/dep/soname/SonameAtom.py | 11 +-
lib/portage/dep/soname/multilib_category.py | 7 +-
lib/portage/dispatch_conf.py | 7 +-
lib/portage/elog/__init__.py | 9 +-
lib/portage/elog/messages.py | 4 +-
lib/portage/elog/mod_mail.py | 2 +-
lib/portage/elog/mod_mail_summary.py | 4 +-
lib/portage/elog/mod_save.py | 10 +-
lib/portage/elog/mod_save_summary.py | 9 +-
lib/portage/elog/mod_syslog.py | 2 +-
lib/portage/emaint/main.py | 40 +-
lib/portage/emaint/modules/binhost/binhost.py | 8 +-
lib/portage/emaint/modules/config/config.py | 5 +-
lib/portage/emaint/modules/logs/logs.py | 5 +-
lib/portage/emaint/modules/merges/merges.py | 26 +-
lib/portage/emaint/modules/move/move.py | 13 +-
lib/portage/emaint/modules/resume/resume.py | 9 +-
lib/portage/emaint/modules/sync/sync.py | 6 +-
lib/portage/emaint/modules/world/world.py | 9 +-
lib/portage/env/config.py | 16 +-
lib/portage/env/loaders.py | 8 +-
lib/portage/exception.py | 2 +-
lib/portage/getbinpkg.py | 15 +-
lib/portage/glsa.py | 33 +-
lib/portage/gpkg.py | 386 ++-
lib/portage/locks.py | 32 +-
lib/portage/mail.py | 5 +-
lib/portage/manifest.py | 36 +-
lib/portage/module.py | 16 +-
lib/portage/news.py | 137 +-
lib/portage/output.py | 29 +-
.../package/ebuild/_config/LicenseManager.py | 1 -
.../package/ebuild/_config/LocationsManager.py | 29 +-
.../package/ebuild/_config/special_env_vars.py | 545 ++--
lib/portage/package/ebuild/_ipc/ExitCommand.py | 2 -
lib/portage/package/ebuild/_ipc/IpcCommand.py | 1 -
lib/portage/package/ebuild/_ipc/QueryCommand.py | 29 +-
lib/portage/package/ebuild/_metadata_invalid.py | 3 +-
.../ebuild/_parallel_manifest/ManifestProcess.py | 7 +-
.../ebuild/_parallel_manifest/ManifestScheduler.py | 5 +-
.../ebuild/_parallel_manifest/ManifestTask.py | 3 +-
lib/portage/package/ebuild/config.py | 74 +-
.../package/ebuild/deprecated_profile_check.py | 4 +-
lib/portage/package/ebuild/digestcheck.py | 2 +-
lib/portage/package/ebuild/digestgen.py | 8 +-
lib/portage/package/ebuild/doebuild.py | 277 +-
lib/portage/package/ebuild/fetch.py | 70 +-
lib/portage/package/ebuild/getmaskingstatus.py | 13 +-
lib/portage/package/ebuild/prepare_build_dirs.py | 32 +-
lib/portage/process.py | 87 +-
lib/portage/proxy/lazyimport.py | 12 +-
lib/portage/repository/config.py | 80 +-
.../repository/storage/hardlink_quarantine.py | 6 +-
lib/portage/repository/storage/hardlink_rcu.py | 4 +-
lib/portage/sync/controller.py | 13 +-
lib/portage/sync/modules/git/git.py | 258 +-
lib/portage/sync/modules/mercurial/mercurial.py | 40 +-
lib/portage/sync/modules/rsync/rsync.py | 58 +-
lib/portage/sync/modules/svn/svn.py | 10 +-
lib/portage/sync/modules/webrsync/__init__.py | 4 +-
lib/portage/sync/modules/webrsync/webrsync.py | 56 +-
lib/portage/sync/old_tree_timestamp.py | 4 +-
lib/portage/sync/syncbase.py | 33 +-
lib/portage/tests/__init__.py | 23 +-
lib/portage/tests/bin/test_doins.py | 4 +-
lib/portage/tests/bin/test_eapi7_ver_funcs.py | 30 +-
lib/portage/tests/bin/test_filter_bash_env.py | 23 +-
lib/portage/tests/conftest.py | 89 +
lib/portage/tests/dbapi/test_auxdb.py | 5 +-
lib/portage/tests/dbapi/test_bintree.py | 155 ++
lib/portage/tests/dbapi/test_fakedbapi.py | 2 +-
lib/portage/tests/dbapi/test_portdb_cache.py | 32 +-
lib/portage/tests/dep/testAtom.py | 27 +-
lib/portage/tests/dep/testStandalone.py | 5 +-
lib/portage/tests/dep/test_dep_getcpv.py | 1 -
lib/portage/tests/dep/test_dep_getrepo.py | 1 -
lib/portage/tests/dep/test_dep_getslot.py | 1 -
lib/portage/tests/dep/test_dep_getusedeps.py | 3 +-
lib/portage/tests/dep/test_dnf_convert.py | 1 -
lib/portage/tests/dep/test_get_operator.py | 3 +-
.../tests/dep/test_get_required_use_flags.py | 2 +-
lib/portage/tests/dep/test_isjustname.py | 5 +-
lib/portage/tests/dep/test_isvalidatom.py | 3 +-
lib/portage/tests/dep/test_match_from_list.py | 2 +-
lib/portage/tests/dep/test_overlap_dnf.py | 1 -
lib/portage/tests/dep/test_paren_reduce.py | 1 -
lib/portage/tests/dep/test_soname_atom_pickle.py | 1 -
lib/portage/tests/dep/test_use_reduce.py | 3 +-
.../tests/ebuild/test_array_fromfile_eof.py | 2 +-
lib/portage/tests/ebuild/test_config.py | 10 +-
lib/portage/tests/ebuild/test_doebuild_fd_pipes.py | 2 -
lib/portage/tests/ebuild/test_doebuild_spawn.py | 2 -
lib/portage/tests/ebuild/test_fetch.py | 48 +-
lib/portage/tests/ebuild/test_ipc_daemon.py | 1 -
lib/portage/tests/ebuild/test_shell_quote.py | 2 +-
lib/portage/tests/ebuild/test_spawn.py | 8 +-
.../tests/ebuild/test_use_expand_incremental.py | 8 +-
lib/portage/tests/emerge/test_actions.py | 45 +
lib/portage/tests/emerge/test_config_protect.py | 12 +-
.../emerge/test_emerge_blocker_file_collision.py | 10 +-
lib/portage/tests/emerge/test_emerge_slot_abi.py | 10 +-
lib/portage/tests/emerge/test_simple.py | 19 +-
.../tests/env/config/test_PackageKeywordsFile.py | 3 +-
.../tests/env/config/test_PackageUseFile.py | 3 +-
.../tests/env/config/test_PortageModulesFile.py | 3 +-
lib/portage/tests/glsa/test_security_set.py | 139 +-
lib/portage/tests/gpkg/test_gpkg_checksum.py | 46 +-
lib/portage/tests/gpkg/test_gpkg_gpg.py | 48 +-
.../tests/gpkg/test_gpkg_metadata_update.py | 3 -
lib/portage/tests/gpkg/test_gpkg_metadata_url.py | 13 -
lib/portage/tests/gpkg/test_gpkg_path.py | 20 -
lib/portage/tests/gpkg/test_gpkg_size.py | 4 -
lib/portage/tests/gpkg/test_gpkg_stream.py | 19 -
.../test_lazy_import_portage_baseline.py | 1 -
lib/portage/tests/lint/test_compile_modules.py | 2 +-
lib/portage/tests/lint/test_import_modules.py | 2 +-
lib/portage/tests/locks/test_asynchronous_lock.py | 31 +-
lib/portage/tests/news/test_NewsItem.py | 442 ++-
lib/portage/tests/process/test_PipeLogger.py | 2 +-
lib/portage/tests/process/test_PopenProcess.py | 15 +-
.../tests/process/test_PopenProcessBlockingIO.py | 22 +-
lib/portage/tests/process/test_poll.py | 15 +-
lib/portage/tests/process/test_spawn_fail_e2big.py | 30 +
.../tests/process/test_spawn_warn_large_env.py | 46 +
lib/portage/tests/process/test_unshare_net.py | 38 +-
lib/portage/tests/resolver/ResolverPlayground.py | 78 +-
.../test_build_id_profile_format.py | 4 +-
.../binpkg_multi_instance/test_rebuilt_binaries.py | 4 +-
.../tests/resolver/soname/test_autounmask.py | 4 +-
lib/portage/tests/resolver/soname/test_depclean.py | 1 -
.../tests/resolver/soname/test_downgrade.py | 7 +-
.../tests/resolver/soname/test_or_choices.py | 4 +-
.../tests/resolver/soname/test_reinstall.py | 4 +-
.../tests/resolver/soname/test_skip_update.py | 4 +-
.../soname/test_slot_conflict_reinstall.py | 18 +-
.../resolver/soname/test_slot_conflict_update.py | 4 +-
.../tests/resolver/soname/test_soname_provided.py | 4 +-
.../tests/resolver/soname/test_unsatisfiable.py | 4 +-
.../tests/resolver/soname/test_unsatisfied.py | 4 +-
.../test_aggressive_backtrack_downgrade.py | 1 -
lib/portage/tests/resolver/test_autounmask.py | 26 +-
.../tests/resolver/test_autounmask_binpkg_use.py | 3 +-
.../tests/resolver/test_autounmask_multilib_use.py | 6 +-
.../tests/resolver/test_autounmask_parent.py | 1 -
.../tests/resolver/test_autounmask_use_breakage.py | 1 -
.../resolver/test_autounmask_use_slot_conflict.py | 5 +-
lib/portage/tests/resolver/test_bdeps.py | 4 +-
.../resolver/test_binary_pkg_ebuild_visibility.py | 4 +-
lib/portage/tests/resolver/test_changed_deps.py | 4 +-
.../tests/resolver/test_circular_choices.py | 5 -
.../tests/resolver/test_circular_choices_rust.py | 1 -
.../tests/resolver/test_circular_dependencies.py | 2 -
...test_complete_if_new_subslot_without_revbump.py | 4 +-
lib/portage/tests/resolver/test_depclean.py | 3 -
lib/portage/tests/resolver/test_depclean_order.py | 1 -
lib/portage/tests/resolver/test_depth.py | 1 -
.../resolver/test_disjunctive_depend_order.py | 3 +-
lib/portage/tests/resolver/test_eapi.py | 44 +-
.../resolver/test_imagemagick_graphicsmagick.py | 1 -
lib/portage/tests/resolver/test_multirepo.py | 5 +-
lib/portage/tests/resolver/test_onlydeps_ideps.py | 172 ++
.../tests/resolver/test_onlydeps_minimal.py | 25 +
lib/portage/tests/resolver/test_or_choices.py | 8 +-
lib/portage/tests/resolver/test_package_tracker.py | 3 +-
.../tests/resolver/test_perl_rebuild_bug.py | 121 +
.../tests/resolver/test_profile_default_eapi.py | 6 +-
.../tests/resolver/test_profile_package_set.py | 6 +-
.../test_regular_slot_change_without_revbump.py | 4 +-
lib/portage/tests/resolver/test_required_use.py | 1 -
lib/portage/tests/resolver/test_simple.py | 3 +-
lib/portage/tests/resolver/test_slot_abi.py | 9 +-
.../tests/resolver/test_slot_abi_downgrade.py | 7 +-
.../resolver/test_slot_change_without_revbump.py | 4 +-
lib/portage/tests/resolver/test_slot_collisions.py | 1 -
.../resolver/test_slot_conflict_blocked_prune.py | 78 +
.../resolver/test_slot_conflict_force_rebuild.py | 1 -
.../tests/resolver/test_slot_conflict_rebuild.py | 10 +-
.../test_slot_conflict_unsatisfied_deep_deps.py | 1 -
.../tests/resolver/test_slot_conflict_update.py | 1 -
.../resolver/test_slot_conflict_update_virt.py | 1 -
.../resolver/test_slot_operator_autounmask.py | 5 +-
.../tests/resolver/test_slot_operator_bdeps.py | 4 +-
.../resolver/test_slot_operator_complete_graph.py | 1 -
.../resolver/test_slot_operator_exclusive_slots.py | 1 -
.../resolver/test_slot_operator_missed_update.py | 1 -
.../tests/resolver/test_slot_operator_rebuild.py | 4 +-
.../resolver/test_slot_operator_required_use.py | 1 -
.../resolver/test_slot_operator_reverse_deps.py | 1 -
.../test_slot_operator_runtime_pkg_mask.py | 1 -
.../resolver/test_slot_operator_unsatisfied.py | 1 -
.../tests/resolver/test_slot_operator_unsolved.py | 5 +-
..._slot_operator_update_probe_parent_downgrade.py | 1 -
.../test_solve_non_slot_operator_slot_conflicts.py | 1 -
lib/portage/tests/resolver/test_update.py | 106 +
.../tests/resolver/test_use_dep_defaults.py | 1 -
lib/portage/tests/resolver/test_useflags.py | 3 +-
lib/portage/tests/resolver/test_virtual_slot.py | 4 -
lib/portage/tests/runTests.py | 14 +-
.../tests/sets/base/testInternalPackageSet.py | 4 +-
.../base/testVariableSet.py} | 24 +-
lib/portage/tests/sets/shell/testShell.py | 3 +-
lib/portage/tests/sync/test_sync_local.py | 21 +-
lib/portage/tests/unicode/test_string_format.py | 15 +-
lib/portage/tests/update/test_move_ent.py | 7 +-
lib/portage/tests/update/test_move_slot_ent.py | 4 +-
lib/portage/tests/update/test_update_dbentry.py | 4 +-
.../tests/util/eventloop/test_call_soon_fifo.py | 1 -
lib/portage/tests/util/file_copy/test_copyfile.py | 16 +-
.../util/futures/asyncio/test_child_watcher.py | 1 -
.../tests/util/futures/asyncio/test_pipe_closed.py | 4 +-
.../util/futures/asyncio/test_subprocess_exec.py | 2 -
.../util/futures/asyncio/test_wakeup_fd_sigchld.py | 2 +-
.../tests/util/futures/test_compat_coroutine.py | 1 -
.../tests/util/futures/test_done_callback.py | 1 -
.../tests/util/futures/test_iter_completed.py | 2 -
lib/portage/tests/util/futures/test_retry.py | 16 +-
lib/portage/tests/util/test_checksum.py | 49 +-
lib/portage/tests/util/test_digraph.py | 60 +-
lib/portage/tests/util/test_file_copier.py | 1 -
lib/portage/tests/util/test_getconfig.py | 4 +-
lib/portage/tests/util/test_install_mask.py | 6 +-
lib/portage/tests/util/test_manifest.py | 31 +
lib/portage/tests/util/test_normalizedPath.py | 1 -
lib/portage/tests/util/test_shelve.py | 4 +-
lib/portage/tests/util/test_socks5.py | 4 +-
lib/portage/tests/util/test_stackDicts.py | 2 -
lib/portage/tests/util/test_stackLists.py | 1 -
lib/portage/tests/util/test_varExpand.py | 17 +-
lib/portage/tests/util/test_whirlpool.py | 49 +-
lib/portage/tests/util/test_xattr.py | 2 +-
lib/portage/tests/versions/test_cpv_sort_key.py | 1 -
lib/portage/tests/versions/test_vercmp.py | 11 +-
lib/portage/tests/xpak/test_decodeint.py | 1 -
lib/portage/update.py | 23 +-
lib/portage/util/ExtractKernelVersion.py | 10 +-
lib/portage/util/__init__.py | 163 +-
lib/portage/util/_async/AsyncScheduler.py | 4 +-
lib/portage/util/_async/BuildLogger.py | 2 +-
lib/portage/util/_async/FileCopier.py | 2 +-
lib/portage/util/_async/FileDigester.py | 1 -
lib/portage/util/_async/ForkProcess.py | 9 +-
lib/portage/util/_async/PipeLogger.py | 1 -
lib/portage/util/_async/PipeReaderBlockingIO.py | 23 +-
lib/portage/util/_async/PopenProcess.py | 2 -
lib/portage/util/_async/SchedulerInterface.py | 4 +-
lib/portage/util/_dyn_libs/LinkageMapELF.py | 46 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 6 +-
.../util/_dyn_libs/display_preserved_libs.py | 13 +-
lib/portage/util/_dyn_libs/soname_deps.py | 4 +-
lib/portage/util/_dyn_libs/soname_deps_qa.py | 9 +-
lib/portage/util/_get_vm_info.py | 2 -
lib/portage/util/_info_files.py | 13 +-
lib/portage/util/_path.py | 4 +-
lib/portage/util/_pty.py | 4 +-
lib/portage/util/_xattr.py | 10 +-
lib/portage/util/backoff.py | 2 +-
lib/portage/util/bin_entry_point.py | 2 +-
lib/portage/util/changelog.py | 2 +-
lib/portage/util/compression_probe.py | 9 +-
lib/portage/util/configparser.py | 7 +-
lib/portage/util/cpuinfo.py | 22 +-
lib/portage/util/digraph.py | 12 +-
lib/portage/util/elf/constants.py | 1 +
lib/portage/util/elf/header.py | 1 -
lib/portage/util/env_update.py | 69 +-
lib/portage/util/file_copy/__init__.py | 2 -
lib/portage/util/futures/_asyncio/__init__.py | 6 +-
lib/portage/util/futures/_asyncio/streams.py | 2 +-
lib/portage/util/futures/_sync_decorator.py | 1 -
lib/portage/util/futures/compat_coroutine.py | 1 +
lib/portage/util/futures/executor/fork.py | 13 +-
lib/portage/util/futures/extendedfutures.py | 10 +-
lib/portage/util/futures/iter_completed.py | 3 +-
lib/portage/util/futures/unix_events.py | 4 +-
lib/portage/util/hooks.py | 2 +-
lib/portage/util/iterators/MultiIterGroupBy.py | 5 -
lib/portage/util/listdir.py | 6 +-
lib/portage/util/locale.py | 7 +-
lib/portage/util/movefile.py | 77 +-
lib/portage/util/mtimedb.py | 4 +-
lib/portage/util/netlink.py | 2 +-
lib/portage/util/shelve.py | 4 +-
lib/portage/util/socks5.py | 2 +-
lib/portage/util/whirlpool.py | 81 +-
lib/portage/util/writeable_check.py | 7 +-
lib/portage/versions.py | 21 +-
lib/portage/xml/metadata.py | 12 +-
lib/portage/xpak.py | 23 +-
man/ebuild.1 | 9 +-
man/ebuild.5 | 100 +-
man/emerge.1 | 53 +-
man/make.conf.5 | 70 +-
man/portage.5 | 51 +-
man/ru/ebuild.1 | 10 +-
man/xpak.5 | 9 +-
runtests | 17 +-
setup.py | 50 +-
src/portage_util__whirlpool.c | 1141 ++++++++
src/portage_util_file_copy_reflink_linux.c | 25 +-
src/portage_util_libc.c | 27 +-
tox.ini | 30 +-
532 files changed, 13827 insertions(+), 10224 deletions(-)
diff --cc bin/ebuild-helpers/dohtml
index e061bc173,55339238e..5384eeb8b
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -16,10 -16,8 +16,10 @@@ f
# Use safe cwd, avoiding unsafe import for bug #469338.
export __PORTAGE_HELPER_CWD=${PWD}
cd "${PORTAGE_PYM_PATH}" || die
+# BEGIN PREFIX LOCAL: use Prefix Python fallback
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}/dohtml.py" "$@"
++ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}/dohtml.py" "$@"
+# END PREFIX LOCAL
ret=$?
# Restore cwd for display by __helpers_die
diff --cc bin/emerge-webrsync
index 85a4d15d7,3835977fc..db4bb6d1e
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -43,10 -79,9 +79,10 @@@ if [[ -x "${scriptpath%/*}/portageq" ]]
elif type -P portageq > /dev/null ; then
portageq=portageq
else
- eecho "could not find 'portageq'; aborting"
- exit 1
+ die "could not find 'portageq'; aborting"
fi
+
+# PREFIX LOCAL: retrieve PORTAGE_USER/PORTAGE_GROUP
eval "$("${portageq}" envvar -v DISTDIR EPREFIX FEATURES \
FETCHCOMMAND GENTOO_MIRRORS \
PORTAGE_BIN_PATH PORTAGE_CONFIGROOT PORTAGE_GPG_DIR \
@@@ -252,11 -411,9 +418,11 @@@ get_snapshot_timestamp()
sync_local() {
local file="$1"
- __vecho "Syncing local tree ..."
+ [[ ${PORTAGE_QUIET} -eq 1 ]] || einfo "Syncing local repository ..."
- local ownership="portage:portage"
+ # PREFIX LOCAL: use PORTAGE_USER and PORTAGE_GROUP
+ local ownership="${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage}"
+ # END PREFIX LOCAL
if has usersync ${FEATURES} ; then
case "${USERLAND}" in
BSD)
diff --cc bin/install-qa-check.d/05prefix
index ac059723f,7488ad9e4..8a83893f1
--- a/bin/install-qa-check.d/05prefix
+++ b/bin/install-qa-check.d/05prefix
@@@ -80,14 -76,12 +80,14 @@@ install_qa_check_prefix()
fi
continue
fi
- # unprefixed shebang, is the script directly in ${PATH}?
- if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
+ # BEGIN PREFIX LOCAL: also check init scripts
- # unprefixed shebang, is the script directly in $PATH or an init
++ # unprefixed shebang, is the script directly in ${PATH} or an init
+ # script?
+ if [[ ":${PATH}:${EPREFIX}/etc/init.d:" == *":${fp}:"* ]] ; then
if [[ -e ${EROOT}${line[0]} || -e ${ED}${line[0]} ]] ; then
- # is it unprefixed, but we can just fix it because a
- # prefixed variant exists
- eqawarn "prefixing shebang of ${fn#${D}}"
+ # is it unprefixed, but we can just fix it because an
+ # eprefixed variant exists
+ eqawarn "eprefixing shebang of ${fn#${D%/}/}"
# statement is made idempotent on purpose, because
# symlinks may point to the same target, and hence the
# same real file may be sedded multiple times since we
diff --cc bin/isolated-functions.sh
index 20e9c735c,06be030fb..47e18cc6b
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -439,26 -444,22 +444,30 @@@ if [[ -z ${NO_COLOR} ]] ; the
no|false)
__set_colors
;;
- esac
+ esac
+ else
+ __unset_colors
+ fi
+
-if [[ -z ${USERLAND} ]] ; then
- case $(uname -s) in
- *BSD|DragonFly)
- export USERLAND="BSD"
- ;;
- *)
- export USERLAND="GNU"
- ;;
- esac
-fi
+# BEGIN PREFIX LOCAL
+# In Prefix every platform has USERLAND=GNU, even FreeBSD. Since I
+# don't know how to reliably figure out that we are in a Prefix
+# instance of portage here, I am disabling this check for now and
+# hardcoding it to GNU. It also seems strange to me that this code is
+# in this file; non-ebuilds/eclasses should never rely on USERLAND and
+# XARGS, should they?
+#if [[ -z ${USERLAND} ]] ; then
+# case $(uname -s) in
+# *BSD|DragonFly)
+# export USERLAND="BSD"
+# ;;
+# *)
+# export USERLAND="GNU"
+# ;;
+# esac
+#fi
+[[ -z ${USERLAND} ]] && USERLAND="GNU"
+# END PREFIX LOCAL
if [[ -z ${XARGS} ]] ; then
case ${USERLAND} in
@@@ -644,12 -653,12 +661,13 @@@ debug-print()
printf 'debug: %s\n' "${@}" >> "${ECLASS_DEBUG_OUTPUT}"
fi
- if [[ -w $T ]] ; then
- # default target
+ if [[ -w ${T} ]] ; then
+ # Default target
printf '%s\n' "${@}" >> "${T}/eclass-debug.log"
- # let the portage user own/write to this file
+
+ # Let the portage user own/write to this file
- chgrp "${PORTAGE_GRPNAME:-portage}" "${T}/eclass-debug.log"
+ # PREFIX LOCAL: fallback to configured group
+ chgrp "${PORTAGE_GRPNAME:-${PORTAGE_GROUP}}" "${T}/eclass-debug.log"
chmod g+w "${T}/eclass-debug.log"
fi
}
diff --cc bin/misc-functions.sh
index 91010ec65,d9319d5af..b32d06483
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -275,163 -265,14 +292,160 @@@ install_qa_check_misc()
"${PORTAGE_BIN_PATH}"/estrip --ignore "${PORTAGE_DOSTRIP_SKIP[@]}"
"${PORTAGE_BIN_PATH}"/estrip --dequeue
else
- prepallstrip
+ "${PORTAGE_BIN_PATH}"/estrip --prepallstrip
fi
fi
-
- # Portage regenerates this on the installed system.
- rm -f "${ED%/}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+		# version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ install_name_is_relative() {
+ case $1 in
+ "@executable_path/"*) return 0 ;;
+ "@loader_path"/*) return 0 ;;
+ "@rpath/"*) return 0 ;;
+ *) return 1 ;;
+ esac
+ }
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ ignore=
+ qa_var="QA_IGNORE_INSTALL_NAME_FILES_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] &&
+ QA_IGNORE_INSTALL_NAME_FILES=(\"\${${qa_var}[@]}\")"
+
+ if [[ ${#QA_IGNORE_INSTALL_NAME_FILES[@]} -gt 1 ]] ; then
+ for x in "${QA_IGNORE_INSTALL_NAME_FILES[@]}" ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_IGNORE_INSTALL_NAME_FILES} ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if install_name_is_relative ${install_name} ; then
+ # try to locate the library in the installed image
+ local inpath=${install_name#@*/}
+ local libl
+ for libl in $(find "${ED}" -name "${inpath##*/}") ; do
+ if [[ ${libl} == */${inpath} ]] ; then
+ install_name=/${libl#${D}}
+ break
+ fi
+ done
+ fi
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif ! install_name_is_relative ${lib} ; then
+ local isok=no
+ if [[ -e ${lib} || -e ${D}${lib} ]] ; then
+ isok=yes # yay, we're ok
+ elif [[ -e "${EROOT}"/MacOSX.sdk ]] ; then
+ # trigger SDK mode, at least since Big Sur (11.0)
+ # there are no libraries in /usr/lib any more, but
+				# there are references to it (some library cache is
+				# in place), yet we can validate that the reference is sane
+ # by looking at the SDK metacaches, TAPI-files, .tbd
+ # text versions of libraries, so just look there
+ local tbd=${lib%.*}.tbd
+ if [[ -e ${EROOT}/MacOSX.sdk/${lib%.*}.tbd ]] ; then
+ isok=yes # it's in the SDK, so ok
+ elif [[ -e ${EROOT}/MacOSX.sdk/${lib}.tbd ]] ; then
+ isok=yes # this happens in case of Framework refs
+ fi
+ fi
+ if [[ ${isok} == no ]] ; then
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && \
+ touch "${T}"/.install_name_check_failed
+ fi
+ fi
+ done
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
__dyn_instprep() {
if [[ -e ${PORTAGE_BUILDDIR}/.instprepped ]] ; then
- __vecho ">>> It appears that '$PF' is already instprepped; skipping."
+ __vecho ">>> It appears that '${PF}' is already instprepped; skipping."
__vecho ">>> Remove '${PORTAGE_BUILDDIR}/.instprepped' to force instprep."
return 0
fi
@@@ -686,20 -526,25 +699,27 @@@ __dyn_package()
die "PORTAGE_BINPKG_TMPFILE is unset"
mkdir -p "${PORTAGE_BINPKG_TMPFILE%/*}" || die "mkdir failed"
+ if [[ ! -z "${BUILD_ID}" ]]; then
+ echo -n "${BUILD_ID}" > "${PORTAGE_BUILDDIR}"/build-info/BUILD_ID
+ fi
+
if [[ "${BINPKG_FORMAT}" == "xpak" ]]; then
local tar_options=""
- [[ $PORTAGE_VERBOSE = 1 ]] && tar_options+=" -v"
+
+ [[ ${PORTAGE_VERBOSE} = 1 ]] && tar_options+=" -v"
has xattr ${FEATURES} && [[ $(tar --help 2> /dev/null) == *--xattrs* ]] && tar_options+=" --xattrs"
- [[ -z "${PORTAGE_COMPRESSION_COMMAND}" ]] && \
- die "PORTAGE_COMPRESSION_COMMAND is unset"
- tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${D}" . | \
- $PORTAGE_COMPRESSION_COMMAND > "$PORTAGE_BINPKG_TMPFILE"
- assert "failed to pack binary package: '$PORTAGE_BINPKG_TMPFILE'"
+
+ [[ -z "${PORTAGE_COMPRESSION_COMMAND}" ]] && die "PORTAGE_COMPRESSION_COMMAND is unset"
+
+ tar ${tar_options} -cf - ${PORTAGE_BINPKG_TAR_OPTS} -C "${D}" . | \
+ ${PORTAGE_COMPRESSION_COMMAND} > "${PORTAGE_BINPKG_TMPFILE}"
+ assert "failed to pack binary package: '${PORTAGE_BINPKG_TMPFILE}'"
+
+ # BEGIN PREFIX LOCAL: use PREFIX_PORTAGE_PYTHON fallback
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
- "$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/xpak-helper.py recompose \
++ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}"/xpak-helper.py recompose \
+ "${PORTAGE_BINPKG_TMPFILE}" "${PORTAGE_BUILDDIR}/build-info"
+ # END PREFIX LOCAL
if [[ $? -ne 0 ]]; then
rm -f "${PORTAGE_BINPKG_TMPFILE}"
die "Failed to append metadata to the tbz2 file"
@@@ -717,11 -563,9 +738,11 @@@
__vecho ">>> Done."
elif [[ "${BINPKG_FORMAT}" == "gpkg" ]]; then
+ # BEGIN PREFIX LOCAL: use PREFIX_PORTAGE_PYTHON fallback
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/gpkg-helper.py compress \
- "${CATEGORY}/${PF}" "$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info" "${D}"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/gpkg-helper.py compress \
++ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}"/gpkg-helper.py compress \
+ "${PF}${BUILD_ID:+-${BUILD_ID}}" "${PORTAGE_BINPKG_TMPFILE}" "${PORTAGE_BUILDDIR}/build-info" "${D}"
+ # END PREFIX LOCAL
if [[ $? -ne 0 ]]; then
rm -f "${PORTAGE_BINPKG_TMPFILE}"
die "Failed to create binpkg file"
diff --cc bin/phase-functions.sh
index 8efd12f22,071941ff7..d0f3a0f85
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1099,15 -1108,17 +1109,17 @@@ __ebuild_main()
# Use safe cwd, avoiding unsafe import for bug #469338.
cd "${PORTAGE_PYM_PATH}"
__save_ebuild_env | __filter_readonly_variables \
- --filter-features > "$T/environment"
+ --filter-features > "${T}/environment"
assert "__save_ebuild_env failed"
- # PREFIX LOCAL: use configure group
- chgrp "${PORTAGE_GRPNAME:-${PORTAGE_GROUP}}" "$T/environment"
- chmod g+w "$T/environment"
-
- chgrp "${PORTAGE_GRPNAME:-portage}" "${T}/environment"
++ # PREFIX LOCAL: use configured group
++ chgrp "${PORTAGE_GRPNAME:-${PORTAGE_GROUP}}" "${T}/environment"
+ chmod g+w "${T}/environment"
fi
- [[ -n $PORTAGE_EBUILD_EXIT_FILE ]] && > "$PORTAGE_EBUILD_EXIT_FILE"
- if [[ -n $PORTAGE_IPC_DAEMON ]] ; then
- [[ ! -s $SANDBOX_LOG ]]
- "$PORTAGE_BIN_PATH"/ebuild-ipc exit $?
+
+ [[ -n ${PORTAGE_EBUILD_EXIT_FILE} ]] && > "${PORTAGE_EBUILD_EXIT_FILE}"
+ if [[ -n ${PORTAGE_IPC_DAEMON} ]] ; then
+ [[ ! -s ${SANDBOX_LOG} ]]
+
+ "${PORTAGE_BIN_PATH}"/ebuild-ipc exit $?
fi
}
diff --cc bin/save-ebuild-env.sh
index 3520d0416,bba468da1..36eedc7fa
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
diff --cc lib/_emerge/depgraph.py
index c5d0343b7,28acfed9d..c6b63738a
--- a/lib/_emerge/depgraph.py
+++ b/lib/_emerge/depgraph.py
@@@ -11869,20 -11801,8 +11801,20 @@@ def _get_masking_status(pkg, pkgsetting
if not pkg.installed:
if not pkgsettings._accept_chost(pkg.cpv, pkg._metadata):
- mreasons.append(_MaskReason("CHOST", "CHOST: %s" % pkg._metadata["CHOST"]))
+ mreasons.append(_MaskReason("CHOST", f"CHOST: {pkg._metadata['CHOST']}"))
+ # BEGIN PREFIX LOCAL: check EPREFIX
+ eprefix = pkgsettings["EPREFIX"]
+ if len(eprefix.rstrip('/')) > 0 and pkg.built and not pkg.installed:
+ if not "EPREFIX" in pkg._metadata:
+ mreasons.append(_MaskReason("EPREFIX",
+ "missing EPREFIX"))
+ elif len(pkg._metadata["EPREFIX"].strip()) < len(eprefix):
+ mreasons.append(_MaskReason("EPREFIX",
+ "EPREFIX: '%s' too small" % \
+ pkg._metadata["EPREFIX"]))
+ # END PREFIX LOCAL
+
if pkg.invalid:
for msgs in pkg.invalid.values():
for msg in msgs:
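
In effect, when portage runs inside a non-empty EPREFIX, a built (binary) package is masked if its metadata lacks EPREFIX entirely or records a prefix shorter than the configured one. A standalone sketch of that comparison; the function name and the plain-dict input are invented for illustration:

from typing import Optional

def eprefix_mask_reason(configured_eprefix: str, metadata: dict) -> Optional[str]:
    # Only relevant when running inside a non-empty prefix.
    if not configured_eprefix.rstrip("/"):
        return None
    pkg_eprefix = metadata.get("EPREFIX")
    if pkg_eprefix is None:
        return "missing EPREFIX"
    if len(pkg_eprefix.strip()) < len(configured_eprefix):
        return f"EPREFIX: '{pkg_eprefix}' too small"
    return None

# A binpkg built for a shorter prefix cannot simply be dropped into this one:
assert eprefix_mask_reason("/home/user/gentoo", {"EPREFIX": "/gnu"}) is not None
assert eprefix_mask_reason("", {"EPREFIX": "/gnu"}) is None
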
diff --cc lib/portage/__init__.py
index ab3017a5d,aa7e69920..ea1ebc2bb
--- a/lib/portage/__init__.py
+++ b/lib/portage/__init__.py
@@@ -53,18 -51,7 +53,17 @@@ except ImportError as e
sys.stderr.write(f" {e}\n\n")
raise
+# BEGIN PREFIX LOCAL
+# for bug #758230, on macOS the default was switched from fork to spawn,
+# the latter causing issues because all kinds of things can't be
+# pickled, so force fork mode for now
+try:
+ multiprocessing.set_start_method('fork')
+except RuntimeError:
+ pass
+# END PREFIX LOCAL
+
try:
-
import portage.proxy.lazyimport
import portage.proxy as proxy
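
The try/except matters because multiprocessing.set_start_method() may only be called once per interpreter; a second call raises RuntimeError. A minimal standalone sketch of the same guard (not part of the commit; the ValueError case is added only to keep the sketch portable):

import multiprocessing

def force_fork_start_method() -> None:
    try:
        # RuntimeError: a start method was already fixed elsewhere.
        # ValueError: "fork" is unavailable on this platform (e.g. Windows).
        multiprocessing.set_start_method("fork")
    except (RuntimeError, ValueError):
        pass

if __name__ == "__main__":
    force_fork_start_method()
    print(multiprocessing.get_start_method())
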
diff --cc lib/portage/const.py
index 980b40b4b,10a208ceb..8aa7e557f
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -1,16 -1,8 +1,15 @@@
# portage: Constants
- # Copyright 1998-2021 Gentoo Authors
+ # Copyright 1998-2023 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
+# BEGIN PREFIX LOCAL
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+# END PREFIX LOCAL
+
import os
- import sys
# ===========================================================================
# START OF CONSTANTS -- START OF CONSTANTS -- START OF CONSTANTS -- START OF
diff --cc lib/portage/dbapi/bintree.py
index 986ffeb3d,5f58c548d..78f760444
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -104,36 -104,31 +104,33 @@@ class bindbapi(fakedbapi)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
# Selectively cache metadata in order to optimize dep matching.
- self._aux_cache_keys = set(
- [
- "BDEPEND",
- "BINPKG_FORMAT",
- "BUILD_ID",
- "BUILD_TIME",
- "CHOST",
- "DEFINED_PHASES",
- "DEPEND",
- "EAPI",
- "IDEPEND",
- "IUSE",
- "KEYWORDS",
- "LICENSE",
- "MD5",
- "PDEPEND",
- "PROPERTIES",
- "PROVIDES",
- "RDEPEND",
- "repository",
- "REQUIRES",
- "RESTRICT",
- "SIZE",
- "SLOT",
- "USE",
- "_mtime_",
- # PREFIX LOCAL
- "EPREFIX",
- ]
- )
+ self._aux_cache_keys = {
+ "BDEPEND",
+ "BUILD_ID",
+ "BUILD_TIME",
+ "CHOST",
+ "DEFINED_PHASES",
+ "DEPEND",
+ "EAPI",
+ "IDEPEND",
+ "IUSE",
+ "KEYWORDS",
+ "LICENSE",
+ "MD5",
+ "PDEPEND",
+ "PROPERTIES",
+ "PROVIDES",
+ "RDEPEND",
+ "repository",
+ "REQUIRES",
+ "RESTRICT",
+ "SIZE",
+ "SLOT",
+ "USE",
+ "_mtime_",
++ # PREFIX LOCAL
++ "EPREFIX",
+ }
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -474,150 -468,134 +470,140 @@@ class binarytree
stacklevel=2,
)
- if True:
- self.pkgdir = normalize_path(pkgdir)
- # NOTE: Event if binpkg-multi-instance is disabled, it's
- # still possible to access a PKGDIR which uses the
- # binpkg-multi-instance layout (or mixed layout).
- self._multi_instance = "binpkg-multi-instance" in settings.features
- if self._multi_instance:
- self._allocate_filename = self._allocate_filename_multi
- self.dbapi = bindbapi(self, settings=settings)
- self.update_ents = self.dbapi.update_ents
- self.move_slot_ent = self.dbapi.move_slot_ent
- self.populated = 0
- self.tree = {}
- self._binrepos_conf = None
- self._remote_has_index = False
- self._remotepkgs = None # remote metadata indexed by cpv
- self._additional_pkgs = {}
- self.invalids = []
- self.settings = settings
- self._pkg_paths = {}
- self._populating = False
- self._all_directory = os.path.isdir(os.path.join(self.pkgdir, "All"))
- self._pkgindex_version = 0
- self._pkgindex_hashes = ["MD5", "SHA1"]
- self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
- self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "SIZE"])
- self._pkgindex_aux_keys = [
- "BASE_URI",
- "BDEPEND",
- "BINPKG_FORMAT",
- "BUILD_ID",
- "BUILD_TIME",
- "CHOST",
- "DEFINED_PHASES",
- "DEPEND",
- "DESCRIPTION",
- "EAPI",
- "FETCHCOMMAND",
- "IDEPEND",
- "IUSE",
- "KEYWORDS",
- "LICENSE",
- "PDEPEND",
- "PKGINDEX_URI",
- "PROPERTIES",
- "PROVIDES",
- "RDEPEND",
- "repository",
- "REQUIRES",
- "RESTRICT",
- "RESUMECOMMAND",
- "SIZE",
- "SLOT",
- "USE",
- # PREFIX LOCAL
- "EPREFIX",
- ]
- self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
- self._pkgindex_use_evaluated_keys = (
- "BDEPEND",
- "DEPEND",
- "IDEPEND",
- "LICENSE",
- "RDEPEND",
- "PDEPEND",
- "PROPERTIES",
- "RESTRICT",
- )
- self._pkgindex_header = None
- self._pkgindex_header_keys = set(
- [
- "ACCEPT_KEYWORDS",
- "ACCEPT_LICENSE",
- "ACCEPT_PROPERTIES",
- "ACCEPT_RESTRICT",
- "BINPKG_FORMAT",
- "CBUILD",
- "CONFIG_PROTECT",
- "CONFIG_PROTECT_MASK",
- "FEATURES",
- "GENTOO_MIRRORS",
- "INSTALL_MASK",
- "IUSE_IMPLICIT",
- "USE",
- "USE_EXPAND",
- "USE_EXPAND_HIDDEN",
- "USE_EXPAND_IMPLICIT",
- "USE_EXPAND_UNPREFIXED",
- # PREFIX LOCAL
- "EPREFIX",
- ]
- )
- self._pkgindex_default_pkg_data = {
- "BDEPEND": "",
- "BUILD_ID": "",
- "BUILD_TIME": "",
- "DEFINED_PHASES": "",
- "DEPEND": "",
- "EAPI": "0",
- "IDEPEND": "",
- "IUSE": "",
- "KEYWORDS": "",
- "LICENSE": "",
- "PATH": "",
- "PDEPEND": "",
- "PROPERTIES": "",
- "PROVIDES": "",
- "RDEPEND": "",
- "REQUIRES": "",
- "RESTRICT": "",
- "SLOT": "0",
- "USE": "",
- }
- self._pkgindex_inherited_keys = ["BINPKG_FORMAT", "CHOST",
- # PREFIX LOCAL
- "EPREFIX",
- "repository"]
-
- # Populate the header with appropriate defaults.
- self._pkgindex_default_header_data = {
- "BINPKG_FORMAT": self.settings.get(
- "BINPKG_FORMAT", SUPPORTED_GENTOO_BINPKG_FORMATS[0]
- ),
- "CHOST": self.settings.get("CHOST", ""),
- "repository": "",
- }
-
- self._pkgindex_translated_keys = (
- ("DESCRIPTION", "DESC"),
- ("_mtime_", "MTIME"),
- ("repository", "REPO"),
- )
+ self.pkgdir = normalize_path(pkgdir)
+        # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = "binpkg-multi-instance" in settings.features
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
+ self.dbapi = bindbapi(self, settings=settings)
+ self.update_ents = self.dbapi.update_ents
+ self.move_slot_ent = self.dbapi.move_slot_ent
+ self.populated = 0
+ self.tree = {}
+ self._binrepos_conf = None
+ self._remote_has_index = False
+ self._remotepkgs = None # remote metadata indexed by cpv
+ self._additional_pkgs = {}
+ self.invalids = []
+ self.settings = settings
+ self._pkg_paths = {}
+ self._populating = False
+ self._all_directory = os.path.isdir(os.path.join(self.pkgdir, "All"))
+ self._pkgindex_version = 0
+ self._pkgindex_hashes = ["MD5", "SHA1"]
+ self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
+ self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
+ self._pkgindex_keys.update(["CPV", "SIZE"])
+ self._pkgindex_aux_keys = [
+ "BASE_URI",
+ "BDEPEND",
+ "BUILD_ID",
+ "BUILD_TIME",
+ "CHOST",
+ "DEFINED_PHASES",
+ "DEPEND",
+ "DESCRIPTION",
+ "EAPI",
+ "FETCHCOMMAND",
+ "IDEPEND",
+ "IUSE",
+ "KEYWORDS",
+ "LICENSE",
+ "PDEPEND",
+ "PKGINDEX_URI",
+ "PROPERTIES",
+ "PROVIDES",
+ "RDEPEND",
+ "repository",
+ "REQUIRES",
+ "RESTRICT",
+ "RESUMECOMMAND",
+ "SIZE",
+ "SLOT",
+ "USE",
++ # PREFIX LOCAL
++ "EPREFIX",
+ ]
+ self._pkgindex_use_evaluated_keys = (
+ "BDEPEND",
+ "DEPEND",
+ "IDEPEND",
+ "LICENSE",
+ "RDEPEND",
+ "PDEPEND",
+ "PROPERTIES",
+ "RESTRICT",
+ )
+ self._pkgindex_header = None
+ self._pkgindex_header_keys = {
+ "ACCEPT_KEYWORDS",
+ "ACCEPT_LICENSE",
+ "ACCEPT_PROPERTIES",
+ "ACCEPT_RESTRICT",
+ "CBUILD",
+ "CONFIG_PROTECT",
+ "CONFIG_PROTECT_MASK",
+ "FEATURES",
+ "GENTOO_MIRRORS",
+ "INSTALL_MASK",
+ "IUSE_IMPLICIT",
+ "USE",
+ "USE_EXPAND",
+ "USE_EXPAND_HIDDEN",
+ "USE_EXPAND_IMPLICIT",
+ "USE_EXPAND_UNPREFIXED",
++ # PREFIX LOCAL
++ "EPREFIX",
+ }
+ self._pkgindex_default_pkg_data = {
+ "BDEPEND": "",
+ "BUILD_ID": "",
+ "BUILD_TIME": "",
+ "DEFINED_PHASES": "",
+ "DEPEND": "",
+ "EAPI": "0",
+ "IDEPEND": "",
+ "IUSE": "",
+ "KEYWORDS": "",
+ "LICENSE": "",
+ "PATH": "",
+ "PDEPEND": "",
+ "PROPERTIES": "",
+ "PROVIDES": "",
+ "RDEPEND": "",
+ "REQUIRES": "",
+ "RESTRICT": "",
+ "SLOT": "0",
+ "USE": "",
+ }
+ self._pkgindex_inherited_keys = ["CHOST", "repository"]
++ # PREFIX LOCAL
++ self._pkgindex_inherited_keys += ["EPREFIX"]
+
+ # Populate the header with appropriate defaults.
+ self._pkgindex_default_header_data = {
+ "CHOST": self.settings.get("CHOST", ""),
+ "repository": "",
+ }
+
+ self._pkgindex_translated_keys = (
+ ("DESCRIPTION", "DESC"),
+ ("_mtime_", "MTIME"),
+ ("repository", "REPO"),
+ )
- self._pkgindex_allowed_pkg_keys = set(
- chain(
- self._pkgindex_keys,
- self._pkgindex_aux_keys,
- self._pkgindex_hashes,
- self._pkgindex_default_pkg_data,
- self._pkgindex_inherited_keys,
- chain(*self._pkgindex_translated_keys),
- )
+ self._pkgindex_allowed_pkg_keys = set(
+ chain(
+ self._pkgindex_keys,
+ self._pkgindex_aux_keys,
+ self._pkgindex_hashes,
+ self._pkgindex_default_pkg_data,
+ self._pkgindex_inherited_keys,
+ chain(*self._pkgindex_translated_keys),
)
+ )
@property
def root(self):
diff --cc lib/portage/package/ebuild/_config/special_env_vars.py
index d974325e0,beb411188..b41c952e2
--- a/lib/portage/package/ebuild/_config/special_env_vars.py
+++ b/lib/portage/package/ebuild/_config/special_env_vars.py
@@@ -82,174 -80,167 +80,172 @@@ env_blacklist = frozenset
# important to set our special BASH_ENV variable in the ebuild
# environment in order to prevent sandbox from sourcing /etc/profile
# in its bashrc (causing major leakage).
- environ_whitelist += [
- "ACCEPT_LICENSE",
- "BASH_ENV",
- "BASH_FUNC____in_portage_iuse%%",
- "BINPKG_FORMAT",
- "BROOT",
- "BUILD_PREFIX",
- "COLUMNS",
- "D",
- "DISTDIR",
- "DOC_SYMLINKS_DIR",
- "EAPI",
- "EBUILD",
- "EBUILD_FORCE_TEST",
- "EBUILD_PHASE",
- "EBUILD_PHASE_FUNC",
- "ECLASSDIR",
- "ECLASS_DEPTH",
- "ED",
- "EMERGE_FROM",
- "ENV_UNSET",
- "EPREFIX",
- "EROOT",
- "ESYSROOT",
- "FEATURES",
- "FILESDIR",
- "HOME",
- "MERGE_TYPE",
- "NOCOLOR",
- "PATH",
- "PKGDIR",
- "PKGUSE",
- "PKG_LOGDIR",
- "PKG_TMPDIR",
- "PORTAGE_ACTUAL_DISTDIR",
- "PORTAGE_ARCHLIST",
- "PORTAGE_BASHRC_FILES",
- "PORTAGE_BASHRC",
- "PM_EBUILD_HOOK_DIR",
- "PORTAGE_BINPKG_FILE",
- "PORTAGE_BINPKG_TAR_OPTS",
- "PORTAGE_BINPKG_TMPFILE",
- "PORTAGE_BIN_PATH",
- "PORTAGE_BUILDDIR",
- "PORTAGE_BUILD_GROUP",
- "PORTAGE_BUILD_USER",
- "PORTAGE_BUNZIP2_COMMAND",
- "PORTAGE_BZIP2_COMMAND",
- "PORTAGE_COLORMAP",
- "PORTAGE_COMPRESS",
- "PORTAGE_COMPRESSION_COMMAND",
- "PORTAGE_COMPRESS_EXCLUDE_SUFFIXES",
- "PORTAGE_CONFIGROOT",
- "PORTAGE_DEBUG",
- "PORTAGE_DEPCACHEDIR",
- "PORTAGE_DOHTML_UNWARNED_SKIPPED_EXTENSIONS",
- "PORTAGE_DOHTML_UNWARNED_SKIPPED_FILES",
- "PORTAGE_DOHTML_WARN_ON_SKIPPED_FILES",
- "PORTAGE_EBUILD_EXIT_FILE",
- "PORTAGE_FEATURES",
- "PORTAGE_GID",
- "PORTAGE_GRPNAME",
- "PORTAGE_INTERNAL_CALLER",
- "PORTAGE_INST_GID",
- "PORTAGE_INST_UID",
- "PORTAGE_IPC_DAEMON",
- "PORTAGE_IUSE",
- "PORTAGE_ECLASS_LOCATIONS",
- "PORTAGE_LOG_FILE",
- "PORTAGE_OVERRIDE_EPREFIX",
- "PORTAGE_PIPE_FD",
- "PORTAGE_PROPERTIES",
- "PORTAGE_PYM_PATH",
- "PORTAGE_PYTHON",
- "PORTAGE_PYTHONPATH",
- "PORTAGE_QUIET",
- "PORTAGE_REPO_NAME",
- "PORTAGE_REPOSITORIES",
- "PORTAGE_RESTRICT",
- "PORTAGE_SIGPIPE_STATUS",
- "PORTAGE_SOCKS5_PROXY",
- "PORTAGE_TMPDIR",
- "PORTAGE_UPDATE_ENV",
- "PORTAGE_USERNAME",
- "PORTAGE_VERBOSE",
- "PORTAGE_WORKDIR_MODE",
- "PORTAGE_XATTR_EXCLUDE",
- "PORTDIR",
- "PORTDIR_OVERLAY",
- "PREROOTPATH",
- "PYTHONDONTWRITEBYTECODE",
- "REPLACING_VERSIONS",
- "REPLACED_BY_VERSION",
- "ROOT",
- "ROOTPATH",
- "SANDBOX_LOG",
- "SYSROOT",
- "T",
- "TMP",
- "TMPDIR",
- "USE_EXPAND",
- "USE_ORDER",
- "WORKDIR",
- "XARGS",
- "__PORTAGE_TEST_HARDLINK_LOCKS",
- # BEGIN PREFIX LOCAL
- "EXTRA_PATH",
- "PORTAGE_GROUP",
- "PORTAGE_USER",
- # END PREFIX LOCAL
- ]
-
- # user config variables
- environ_whitelist += ["DOC_SYMLINKS_DIR", "INSTALL_MASK", "PKG_INSTALL_MASK"]
-
- environ_whitelist += ["A", "AA", "CATEGORY", "P", "PF", "PN", "PR", "PV", "PVR"]
-
- # misc variables inherited from the calling environment
- environ_whitelist += [
- "COLORTERM",
- "DISPLAY",
- "EDITOR",
- "LESS",
- "LESSOPEN",
- "LOGNAME",
- "LS_COLORS",
- "PAGER",
- "TERM",
- "TERMCAP",
- "USER",
- "ftp_proxy",
- "http_proxy",
- "no_proxy",
- ]
-
- # tempdir settings
- environ_whitelist += [
- "TMPDIR",
- "TEMP",
- "TMP",
- ]
-
- # localization settings
- environ_whitelist += [
- "LANG",
- "LC_COLLATE",
- "LC_CTYPE",
- "LC_MESSAGES",
- "LC_MONETARY",
- "LC_NUMERIC",
- "LC_TIME",
- "LC_PAPER",
- "LC_ALL",
- ]
-
- # other variables inherited from the calling environment
- environ_whitelist += [
- "CVS_RSH",
- "ECHANGELOG_USER",
- "GPG_AGENT_INFO",
- "SSH_AGENT_PID",
- "SSH_AUTH_SOCK",
- "STY",
- "WINDOW",
- "XAUTHORITY",
- ]
-
- environ_whitelist = frozenset(environ_whitelist)
+ environ_whitelist = frozenset(
+ (
+ "A",
+ "AA",
+ "ACCEPT_LICENSE",
+ "BASH_ENV",
+ "BASH_FUNC____in_portage_iuse%%",
+ "BINPKG_FORMAT",
+ "BROOT",
+ "BUILD_ID",
+ "BUILD_PREFIX",
+ "CATEGORY",
+ "COLUMNS",
+ "D",
+ "DISTDIR",
+ "DOC_SYMLINKS_DIR",
+ "EAPI",
+ "EBUILD",
+ "EBUILD_FORCE_TEST",
+ "EBUILD_PHASE",
+ "EBUILD_PHASE_FUNC",
+ "ECLASSDIR",
+ "ECLASS_DEPTH",
+ "ED",
+ "EMERGE_FROM",
+ "ENV_UNSET",
+ "EPREFIX",
+ "EROOT",
+ "ESYSROOT",
+ "FEATURES",
+ "FILESDIR",
+ "HOME",
+ "MERGE_TYPE",
+ "NOCOLOR",
+ "NO_COLOR",
+ "P",
+ "PATH",
+ "PF",
+ "PKGDIR",
+ "PKGUSE",
+ "PKG_LOGDIR",
+ "PKG_TMPDIR",
+ "PM_EBUILD_HOOK_DIR",
+ "PN",
+ "PORTAGE_ACTUAL_DISTDIR",
+ "PORTAGE_ARCHLIST",
+ "PORTAGE_BASHRC_FILES",
+ "PORTAGE_BASHRC",
+ "PORTAGE_BINPKG_FILE",
+ "PORTAGE_BINPKG_TAR_OPTS",
+ "PORTAGE_BINPKG_TMPFILE",
+ "PORTAGE_BIN_PATH",
+ "PORTAGE_BUILDDIR",
+ "PORTAGE_BUILD_GROUP",
+ "PORTAGE_BUILD_USER",
+ "PORTAGE_BUNZIP2_COMMAND",
+ "PORTAGE_BZIP2_COMMAND",
+ "PORTAGE_COLORMAP",
+ "PORTAGE_COMPRESS",
+ "PORTAGE_COMPRESSION_COMMAND",
+ "PORTAGE_COMPRESS_EXCLUDE_SUFFIXES",
+ "PORTAGE_CONFIGROOT",
+ "PORTAGE_DEBUG",
+ "PORTAGE_DEPCACHEDIR",
+ "PORTAGE_DOHTML_UNWARNED_SKIPPED_EXTENSIONS",
+ "PORTAGE_DOHTML_UNWARNED_SKIPPED_FILES",
+ "PORTAGE_DOHTML_WARN_ON_SKIPPED_FILES",
+ "PORTAGE_EBUILD_EXIT_FILE",
+ "PORTAGE_FEATURES",
+ "PORTAGE_GID",
+ "PORTAGE_GRPNAME",
+ "PORTAGE_INTERNAL_CALLER",
+ "PORTAGE_INST_GID",
+ "PORTAGE_INST_UID",
+ "PORTAGE_IPC_DAEMON",
+ "PORTAGE_IUSE",
+ "PORTAGE_ECLASS_LOCATIONS",
+ "PORTAGE_LOG_FILE",
+ "PORTAGE_OVERRIDE_EPREFIX",
+ "PORTAGE_PIPE_FD",
+ "PORTAGE_PROPERTIES",
+ "PORTAGE_PYM_PATH",
+ "PORTAGE_PYTHON",
+ "PORTAGE_PYTHONPATH",
+ "PORTAGE_QUIET",
+ "PORTAGE_REPO_NAME",
+ "PORTAGE_REPOSITORIES",
+ "PORTAGE_RESTRICT",
+ "PORTAGE_SIGPIPE_STATUS",
+ "PORTAGE_SOCKS5_PROXY",
+ "PORTAGE_TMPDIR",
+ "PORTAGE_UPDATE_ENV",
+ "PORTAGE_USERNAME",
+ "PORTAGE_VERBOSE",
+ "PORTAGE_WORKDIR_MODE",
+ "PORTAGE_XATTR_EXCLUDE",
+ "PORTDIR",
+ "PORTDIR_OVERLAY",
+ "PR",
+ "PREROOTPATH",
+ "PV",
+ "PVR",
+ "PYTHONDONTWRITEBYTECODE",
+ "REPLACING_VERSIONS",
+ "REPLACED_BY_VERSION",
+ "ROOT",
+ "ROOTPATH",
+ "SANDBOX_LOG",
+ "SYSROOT",
+ "T",
+ "TMP",
+ "TMPDIR",
+ "USE_EXPAND",
+ "USE_ORDER",
+ "WORKDIR",
+ "XARGS",
+ "__PORTAGE_TEST_HARDLINK_LOCKS",
+ # user config variables
+ "DOC_SYMLINKS_DIR",
+ "INSTALL_MASK",
+ "PKG_INSTALL_MASK",
+ # misc variables inherited from the calling environment
+ "COLORTERM",
+ "DISPLAY",
+ "EDITOR",
+ "LESS",
+ "LESSOPEN",
+ "LOGNAME",
+ "LS_COLORS",
+ "PAGER",
+ "TERM",
+ "TERMCAP",
+ "USER",
+ "ftp_proxy",
+ "http_proxy",
+ "no_proxy",
+ # tempdir settings
+ "TMPDIR",
+ "TEMP",
+ "TMP",
+ # localization settings
+ "LANG",
+ "LC_COLLATE",
+ "LC_CTYPE",
+ "LC_MESSAGES",
+ "LC_MONETARY",
+ "LC_NUMERIC",
+ "LC_TIME",
+ "LC_PAPER",
+ "LC_ALL",
+ # other variables inherited from the calling environment
+ "CVS_RSH",
+ "ECHANGELOG_USER",
+ "GPG_AGENT_INFO",
+ "SSH_AGENT_PID",
+ "SSH_AUTH_SOCK",
+ "STY",
+ "WINDOW",
+ "XAUTHORITY",
++ # BEGIN PREFIX LOCAL
++ "EXTRA_PATH",
++ "PORTAGE_GROUP",
++ "PORTAGE_USER",
++ # END PREFIX LOCAL
+ )
+ )
environ_whitelist_re = re.compile(r"^(CCACHE_|DISTCC_).*")
@@@ -258,116 -249,91 +254,104 @@@
# Exclude anything that could be extremely long here (like SRC_URI)
# since that could cause execve() calls to fail with E2BIG errors. For
# example, see bug #262647.
- environ_filter += [
- "DEPEND",
- "RDEPEND",
- "PDEPEND",
- "SRC_URI",
- "BDEPEND",
- "IDEPEND",
- ]
-
- # misc variables inherited from the calling environment
- environ_filter += [
- "INFOPATH",
- "MANPATH",
- "USER",
- # BEGIN PREFIX LOCAL
- "HOST",
- "GROUP",
- "LOGNAME",
- "MAIL",
- "REMOTEHOST",
- "SECURITYSESSIONID",
- "TERMINFO",
- "TERM_PROGRAM",
- "TERM_PROGRAM_VERSION",
- "VENDOR",
- "__CF_USER_TEXT_ENCODING",
- # END PREFIX LOCAL
- ]
-
- # variables that break bash
- environ_filter += [
- "HISTFILE",
- "POSIXLY_CORRECT",
- ]
-
- # portage config variables and variables set directly by portage
- environ_filter += [
- "ACCEPT_CHOSTS",
- "ACCEPT_KEYWORDS",
- "ACCEPT_PROPERTIES",
- "ACCEPT_RESTRICT",
- "AUTOCLEAN",
- "BINPKG_COMPRESS",
- "BINPKG_COMPRESS_FLAGS",
- "CLEAN_DELAY",
- "COLLISION_IGNORE",
- "CONFIG_PROTECT",
- "CONFIG_PROTECT_MASK",
- "EGENCACHE_DEFAULT_OPTS",
- "EMERGE_DEFAULT_OPTS",
- "EMERGE_LOG_DIR",
- "EMERGE_WARNING_DELAY",
- "FETCHCOMMAND",
- "FETCHCOMMAND_FTP",
- "FETCHCOMMAND_HTTP",
- "FETCHCOMMAND_HTTPS",
- "FETCHCOMMAND_RSYNC",
- "FETCHCOMMAND_SFTP",
- "GENTOO_MIRRORS",
- "NOCONFMEM",
- "O",
- "PORTAGE_BACKGROUND",
- "PORTAGE_BACKGROUND_UNMERGE",
- "PORTAGE_BINHOST",
- "PORTAGE_BINPKG_FORMAT",
- "PORTAGE_BUILDDIR_LOCKED",
- "PORTAGE_CHECKSUM_FILTER",
- "PORTAGE_ELOG_CLASSES",
- "PORTAGE_ELOG_MAILFROM",
- "PORTAGE_ELOG_MAILSUBJECT",
- "PORTAGE_ELOG_MAILURI",
- "PORTAGE_ELOG_SYSTEM",
- "PORTAGE_FETCH_CHECKSUM_TRY_MIRRORS",
- "PORTAGE_FETCH_RESUME_MIN_SIZE",
- "PORTAGE_GPG_DIR",
- "PORTAGE_GPG_KEY",
- "PORTAGE_GPG_SIGNING_COMMAND",
- "PORTAGE_IONICE_COMMAND",
- "PORTAGE_PACKAGE_EMPTY_ABORT",
- "PORTAGE_REPO_DUPLICATE_WARN",
- "PORTAGE_RO_DISTDIRS",
- "PORTAGE_RSYNC_EXTRA_OPTS",
- "PORTAGE_RSYNC_OPTS",
- "PORTAGE_RSYNC_RETRIES",
- "PORTAGE_SSH_OPTS",
- "PORTAGE_SYNC_STALE",
- "PORTAGE_USE",
- "PORTAGE_LOG_FILTER_FILE_CMD",
- "PORTAGE_LOGDIR",
- "PORTAGE_LOGDIR_CLEAN",
- "QUICKPKG_DEFAULT_OPTS",
- "REPOMAN_DEFAULT_OPTS",
- "RESUMECOMMAND",
- "RESUMECOMMAND_FTP",
- "RESUMECOMMAND_HTTP",
- "RESUMECOMMAND_HTTPS",
- "RESUMECOMMAND_RSYNC",
- "RESUMECOMMAND_SFTP",
- "UNINSTALL_IGNORE",
- "USE_EXPAND_HIDDEN",
- "USE_ORDER",
- "__PORTAGE_HELPER",
- ]
-
- # No longer supported variables
- environ_filter += ["SYNC"]
-
- environ_filter = frozenset(environ_filter)
+ environ_filter = frozenset(
+ (
+ "DEPEND",
+ "RDEPEND",
+ "PDEPEND",
+ "SRC_URI",
+ "BDEPEND",
+ "IDEPEND",
+ # misc variables inherited from the calling environment
+ "INFOPATH",
+ "MANPATH",
+ "USER",
++ # BEGIN PREFIX LOCAL
++ "HOST",
++ "GROUP",
++ "LOGNAME",
++ "MAIL",
++ "REMOTEHOST",
++ "SECURITYSESSIONID",
++ "TERMINFO",
++ "TERM_PROGRAM",
++ "TERM_PROGRAM_VERSION",
++ "VENDOR",
++ "__CF_USER_TEXT_ENCODING",
++ # END PREFIX LOCAL
+ # variables that break bash
+ "HISTFILE",
+ "POSIXLY_CORRECT",
+ # portage config variables and variables set directly by portage
+ "ACCEPT_CHOSTS",
+ "ACCEPT_KEYWORDS",
+ "ACCEPT_PROPERTIES",
+ "ACCEPT_RESTRICT",
+ "AUTOCLEAN",
+ "BINPKG_COMPRESS",
+ "BINPKG_COMPRESS_FLAGS",
+ "CLEAN_DELAY",
+ "COLLISION_IGNORE",
+ "CONFIG_PROTECT",
+ "CONFIG_PROTECT_MASK",
+ "EGENCACHE_DEFAULT_OPTS",
+ "EMERGE_DEFAULT_OPTS",
+ "EMERGE_LOG_DIR",
+ "EMERGE_WARNING_DELAY",
+ "FETCHCOMMAND",
+ "FETCHCOMMAND_FTP",
+ "FETCHCOMMAND_HTTP",
+ "FETCHCOMMAND_HTTPS",
+ "FETCHCOMMAND_RSYNC",
+ "FETCHCOMMAND_SFTP",
+ "GENTOO_MIRRORS",
+ "NOCONFMEM",
+ "O",
+ "PORTAGE_BACKGROUND",
+ "PORTAGE_BACKGROUND_UNMERGE",
+ "PORTAGE_BINHOST",
+ "PORTAGE_BINPKG_FORMAT",
+ "PORTAGE_BUILDDIR_LOCKED",
+ "PORTAGE_CHECKSUM_FILTER",
+ "PORTAGE_ELOG_CLASSES",
+ "PORTAGE_ELOG_MAILFROM",
+ "PORTAGE_ELOG_MAILSUBJECT",
+ "PORTAGE_ELOG_MAILURI",
+ "PORTAGE_ELOG_SYSTEM",
+ "PORTAGE_FETCH_CHECKSUM_TRY_MIRRORS",
+ "PORTAGE_FETCH_RESUME_MIN_SIZE",
+ "PORTAGE_GPG_DIR",
+ "PORTAGE_GPG_KEY",
+ "PORTAGE_GPG_SIGNING_COMMAND",
+ "PORTAGE_IONICE_COMMAND",
+ "PORTAGE_PACKAGE_EMPTY_ABORT",
+ "PORTAGE_REPO_DUPLICATE_WARN",
+ "PORTAGE_RO_DISTDIRS",
+ "PORTAGE_RSYNC_EXTRA_OPTS",
+ "PORTAGE_RSYNC_OPTS",
+ "PORTAGE_RSYNC_RETRIES",
+ "PORTAGE_SSH_OPTS",
+ "PORTAGE_SYNC_STALE",
+ "PORTAGE_USE",
+ "PORTAGE_LOG_FILTER_FILE_CMD",
+ "PORTAGE_LOGDIR",
+ "PORTAGE_LOGDIR_CLEAN",
+ "QUICKPKG_DEFAULT_OPTS",
+ "REPOMAN_DEFAULT_OPTS",
+ "RESUMECOMMAND",
+ "RESUMECOMMAND_FTP",
+ "RESUMECOMMAND_HTTP",
+ "RESUMECOMMAND_HTTPS",
+ "RESUMECOMMAND_RSYNC",
+ "RESUMECOMMAND_SFTP",
+ "UNINSTALL_IGNORE",
+ "USE_EXPAND_HIDDEN",
+ "USE_ORDER",
+ "__PORTAGE_HELPER",
+ # No longer supported variables
+ "SYNC",
+ )
+ )
# Variables that are not allowed to have per-repo or per-package
# settings.
diff --cc lib/portage/package/ebuild/doebuild.py
index e9d7a858f,8b65a7862..48e9c9896
--- a/lib/portage/package/ebuild/doebuild.py
+++ b/lib/portage/package/ebuild/doebuild.py
@@@ -324,30 -305,20 +311,25 @@@ def _doebuild_path(settings, eapi=None)
for x in portage_bin_path:
path.append(os.path.join(x, "ebuild-helpers"))
- path.extend(prerootpath)
-
- for prefix in prefixes:
- prefix = prefix if prefix else "/"
- for x in (
- "usr/local/sbin",
- "usr/local/bin",
- "usr/sbin",
- "usr/bin",
- "sbin",
- "bin",
- ):
- # Respect order defined in ROOTPATH
- x_abs = os.path.join(prefix, x)
- if x_abs not in rootpath_set:
- path.append(x_abs)
- path.extend(rootpath)
+ # If PATH is set in env.d, ignore PATH from the calling environment.
+ # This allows packages to update our PATH as they get installed.
+ if "PATH" in settings.configdict["env.d"]:
+ settings.configdict["env"].pop("PATH", None)
+
+ if "PATH" in settings:
+ pathset = set(path)
+ for p in settings["PATH"].split(":"):
+ # Avoid duplicate entries.
+ if p not in pathset:
+ path.append(p)
+ pathset.add(p)
+ # BEGIN PREFIX LOCAL: append EXTRA_PATH from make.globals
+ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
+ path.extend(extrapath)
+ # END PREFIX LOCAL
+
settings["PATH"] = ":".join(path)
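
Summarising the new PATH assembly: the ebuild helper directories come first, then the caller's PATH with duplicate entries skipped, and finally (prefix-local) any EXTRA_PATH entries from make.globals. A rough standalone sketch with invented names:

import os

def assemble_path(helper_dirs, environ_path="", extra_path=""):
    path = list(helper_dirs)
    seen = set(path)
    for p in environ_path.split(":"):
        if p and p not in seen:  # avoid duplicate entries
            path.append(p)
            seen.add(p)
    # PREFIX LOCAL behaviour: EXTRA_PATH entries are appended last.
    path.extend(x for x in extra_path.split(":") if x)
    return ":".join(path)

print(assemble_path(["/prefix/usr/lib/portage/bin"],
                    os.environ.get("PATH", ""), "/opt/bin:/opt/sbin"))
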
diff --cc lib/portage/tests/runTests.py
index da6d83e85,bf4c2a7c5..17c244dbe
mode 100755,100644..100644
--- a/lib/portage/tests/runTests.py
+++ b/lib/portage/tests/runTests.py
diff --cc lib/portage/util/__init__.py
index 6855a87f1,1f8c9e94f..0be05ab03
--- a/lib/portage/util/__init__.py
+++ b/lib/portage/util/__init__.py
@@@ -51,8 -72,9 +72,11 @@@ import strin
import sys
import traceback
import glob
+ from typing import Optional, TextIO
import portage
++# PREFIX LOCAL
++from portage.const import EPREFIX
portage.proxy.lazyimport.lazyimport(
globals(),
@@@ -2015,16 -2007,19 +2009,25 @@@ def getlibpaths(root, env=None)
""" Return a list of paths that are used for library lookups """
if env is None:
env = os.environ
+ # BEGIN PREFIX LOCAL:
+ # For Darwin, match LD_LIBRARY_PATH with DYLD_LIBRARY_PATH.
+ # We don't need any host OS lib paths in Prefix, so just going with
+ # the prefixed one is fine.
# the following is based on the information from ld.so(8)
rval = env.get("LD_LIBRARY_PATH", "").split(":")
- rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
- rval.append("/usr/lib")
- rval.append("/lib")
+ rval.extend(env.get("DYLD_LIBRARY_PATH", "").split(":"))
+ rval.extend(read_ld_so_conf(os.path.join(root, EPREFIX, "etc", "ld.so.conf")))
+ rval.append(f"{EPREFIX}/usr/lib")
+ rval.append(f"{EPREFIX}/lib")
+ # END PREFIX LOCAL
return [normalize_path(x) for x in rval if x]
+
+
+ def no_color(settings: Optional[dict]) -> bool:
+ # In several years (2026+), we can cleanup NOCOLOR support, and just support NO_COLOR.
+ has_color: str = settings.get("NO_COLOR")
+ nocolor: str = settings.get("NOCOLOR", "false").lower()
+ if has_color is None:
+ return nocolor in ("yes", "true")
+ return bool(has_color)
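
The precedence implemented by the new no_color() helper: a present NO_COLOR wins outright (an empty value counts as "use colour", even overriding a legacy NOCOLOR=true), otherwise the legacy NOCOLOR=yes/true spelling is honoured. Restated as a tiny runnable sketch (the _sketch suffix is invented):

def no_color_sketch(settings: dict) -> bool:
    no_color_val = settings.get("NO_COLOR")
    if no_color_val is None:
        return settings.get("NOCOLOR", "false").lower() in ("yes", "true")
    return bool(no_color_val)

assert no_color_sketch({"NO_COLOR": "1"}) is True
assert no_color_sketch({"NOCOLOR": "true"}) is True
assert no_color_sketch({"NO_COLOR": "", "NOCOLOR": "true"}) is False
assert no_color_sketch({}) is False
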
diff --cc lib/portage/util/_info_files.py
index 5d4aff1d7,45d674b9b..a5b2b30b4
--- a/lib/portage/util/_info_files.py
+++ b/lib/portage/util/_info_files.py
@@@ -14,9 -12,7 +14,8 @@@ from portage.const import EPREFI
def chk_updated_info_files(root, infodirs, prev_mtimes):
-
- if os.path.exists("/usr/bin/install-info"):
+ # PREFIX LOCAL
+ if os.path.exists(EPREFIX + "/usr/bin/install-info"):
out = portage.output.EOutput()
regen_infodirs = []
for z in infodirs:
@@@ -80,9 -75,8 +79,9 @@@
try:
proc = subprocess.Popen(
[
- "/usr/bin/install-info",
+ # PREFIX LOCAL
- "%s/usr/bin/install-info", EPREFIX,
- "--dir-file=%s" % os.path.join(inforoot, "dir"),
++ f"{EPREFIX}/usr/bin/install-info",
+ f"--dir-file={os.path.join(inforoot, 'dir')}",
os.path.join(inforoot, x),
],
env=dict(os.environ, LANG="C", LANGUAGE="C"),
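
The pattern shown here recurs throughout the prefix branch: host tools are invoked relative to EPREFIX rather than a hard-coded /usr/bin. A minimal sketch of the same call; taking EPREFIX from the environment and the function name are purely illustrative:

import os
import subprocess

EPREFIX = os.environ.get("EPREFIX", "")

def run_install_info(inforoot: str, page: str) -> int:
    cmd = [
        f"{EPREFIX}/usr/bin/install-info",
        f"--dir-file={os.path.join(inforoot, 'dir')}",
        os.path.join(inforoot, page),
    ]
    return subprocess.call(cmd, env=dict(os.environ, LANG="C", LANGUAGE="C"))
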
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-28 17:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-28 17:38 UTC (permalink / raw
To: gentoo-commits
commit: 6395181bf2c17bb898706e02ad5a5f384a7deee6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 28 17:38:10 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jul 28 17:38:10 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=6395181b
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
bin/install-qa-check.d/80multilib-strict | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-27 19:20 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-27 19:20 UTC (permalink / raw
To: gentoo-commits
commit: 72667001dd965003703da07d706ca672e32ee58e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 27 19:19:30 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Jul 27 19:19:30 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=72667001
Merge remote-tracking branch 'origin/master' into prefix
Prefix changes pushed to master
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
bin/ebuild-helpers/doins | 5 +----
bin/install-qa-check.d/80libraries | 10 +++++-----
bin/install-qa-check.d/90world-writable | 4 ----
bin/misc-functions.sh | 5 -----
bin/phase-functions.sh | 2 +-
5 files changed, 7 insertions(+), 19 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-26 19:39 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-26 19:39 UTC (permalink / raw
To: gentoo-commits
commit: 81c069293c43005a0bc1ee526b140c464dffee4b
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 26 19:38:36 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jul 26 19:38:36 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=81c06929
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
lib/portage/tests/resolver/ResolverPlayground.py | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-25 15:20 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-25 15:20 UTC (permalink / raw
To: gentoo-commits
commit: 345ba84cbbe7a808785f9840d6d5e14841967add
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 25 15:20:06 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jul 25 15:20:31 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=345ba84c
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
bin/deprecated-path | 2 +-
bin/eapi7-ver-funcs.sh | 2 +-
bin/ebuild-helpers/nonfatal | 2 +-
bin/ecompress | 2 +-
bin/ecompress-file | 2 +-
bin/estrip | 2 +-
cnf/repo.postsync.d/example | 2 +-
make.conf.example-repatch.sh | 2 +-
8 files changed, 8 insertions(+), 8 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-24 19:27 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-24 19:27 UTC (permalink / raw
To: gentoo-commits
commit: e11f7c6065bae3f44f06a8e36ee6b7b4cf7ee52d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 24 19:24:35 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jul 24 19:24:35 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=e11f7c60
subst-install: replace /usr/bin/env with EPREFIX during install
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
subst-install.in | 1 +
1 file changed, 1 insertion(+)
diff --git a/subst-install.in b/subst-install.in
index e9f375d76..b73a23c1a 100644
--- a/subst-install.in
+++ b/subst-install.in
@@ -33,6 +33,7 @@ sedexp=(
-e "s,${at}rootuid${at},@rootuid@,g"
-e "s,${at}rootuser${at},${rootuser},g"
-e "s,${at}sysconfdir${at},@sysconfdir@,g"
+ -e "1s,/usr/bin/env ,@PORTAGE_EPREFIX@/usr/bin/,"
)
sources=( )
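For illustration only, the effect of the added sed rule on a script's first line, written as a small Python equivalent; the prefix value stands in for whatever @PORTAGE_EPREFIX@ expands to at build time.
# Rough Python equivalent of: 1s,/usr/bin/env ,@PORTAGE_EPREFIX@/usr/bin/,
eprefix = "/home/user/gentoo"  # hypothetical @PORTAGE_EPREFIX@ value
line = "#!/usr/bin/env python\n"
rewritten = line.replace("/usr/bin/env ", eprefix + "/usr/bin/", 1)
print(rewritten, end="")  # -> #!/home/user/gentoo/usr/bin/python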
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-24 14:01 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-24 14:01 UTC (permalink / raw
To: gentoo-commits
commit: f5f17c3bec00998b4383097a19f943f94d6d3fbc
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 24 14:01:14 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jul 24 14:01:14 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=f5f17c3b
tarball: drop tabcheck invocation (was removed)
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
tarball.sh | 8 --------
1 file changed, 8 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index 761e1e8af..216c1d17f 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -18,14 +18,6 @@ if [[ -e ${DEST} ]]; then
exit 1
fi
-./tabcheck.py $(
- find ./ -name .git -prune -o -type f ! -name '*.py' -print \
- | xargs grep -l "#\!@PREFIX_PORTAGE_PYTHON@" \
- | grep -v "^\./repoman/"
- find ./ -name .git -prune -o -type f -name '*.py' -print \
- | grep -v "^\./repoman/"
-)
-
install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' --exclude="repoman/" . ${DEST}
sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}_prefix'"/' \
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-07-24 9:45 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-07-24 9:45 UTC (permalink / raw
To: gentoo-commits
commit: 6bb0b79ecb88e536b2cdea570b0972c798170c4f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 24 09:44:45 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jul 24 09:45:31 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=6bb0b79e
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.gitignorerevs => .git-blame-ignore-revs | 4 +
.github/workflows/black.yml | 8 +
.github/workflows/ci.yml | 6 +-
.gitignore | 2 -
DEVELOPING | 40 +-
MANIFEST.in | 3 +
NEWS | 215 +-
README.md | 9 +-
RELEASE-NOTES | 3 +
bin/archive-conf | 46 +-
bin/binhost-snapshot | 264 ++-
bin/check-implicit-pointer-usage.py | 2 +-
bin/chmod-lite.py | 2 +-
bin/chpathtool.py | 2 +-
bin/clean_locks | 58 +-
bin/dispatch-conf | 347 +--
bin/dohtml.py | 4 +-
bin/doins.py | 2 +-
bin/eapi.sh | 134 +-
bin/ebuild | 514 ++--
bin/ebuild-helpers/dosym | 2 +-
bin/ebuild-ipc.py | 2 +-
bin/ebuild.sh | 120 +-
bin/egencache | 2437 ++++++++++---------
bin/emaint | 45 +-
bin/emerge | 130 +-
bin/emirrordist | 20 +-
bin/env-update | 46 +-
bin/estrip | 22 +-
bin/filter-bash-environment.py | 20 +-
bin/fixpackages | 42 +-
bin/glsa-check | 705 +++---
bin/{xpak-helper.py => gpkg-helper.py} | 33 +-
bin/install-qa-check.d/05prefix | 4 +-
bin/install-qa-check.d/10ignored-flags | 5 +-
bin/install-qa-check.d/20runtime-directories | 4 +-
bin/install-qa-check.d/60pkgconfig | 118 +-
bin/install-qa-check.d/95empty-dirs | 2 +-
bin/install.py | 2 +-
bin/isolated-functions.sh | 31 +-
bin/lock-helper.py | 2 +-
bin/misc-functions.sh | 140 +-
bin/phase-functions.sh | 42 +-
bin/phase-helpers.sh | 230 +-
bin/pid-ns-init | 42 +-
bin/portageq | 2451 +++++++++++---------
bin/quickpkg | 718 +++---
bin/regenworld | 209 +-
bin/save-ebuild-env.sh | 7 -
bin/shelve-utils | 40 +-
bin/socks5-server.py | 4 +-
bin/xattr-helper.py | 4 +-
bin/xpak-helper.py | 2 +-
cnf/dispatch-conf.conf | 6 +-
cnf/make.conf.example | 36 +
cnf/make.globals | 25 +-
doc/package/ebuild.docbook | 2 -
doc/package/ebuild/eapi/4-python.docbook | 160 --
doc/package/ebuild/eapi/5-progress.docbook | 247 --
doc/portage.docbook | 2 -
lib/_emerge/Binpkg.py | 13 +-
lib/_emerge/BinpkgExtractorAsync.py | 9 +
lib/_emerge/BinpkgFetcher.py | 31 +-
lib/_emerge/EbuildBinpkg.py | 16 +-
lib/_emerge/EbuildMetadataPhase.py | 17 -
lib/_emerge/EbuildPhase.py | 33 +-
lib/_emerge/MiscFunctionsProcess.py | 11 +
lib/_emerge/Package.py | 60 +-
lib/_emerge/Scheduler.py | 2 +-
lib/_emerge/actions.py | 107 +-
lib/_emerge/depgraph.py | 48 +-
lib/_emerge/is_valid_package_atom.py | 2 +-
lib/_emerge/main.py | 105 +-
lib/_emerge/resolver/output.py | 2 +-
lib/_emerge/resolver/package_tracker.py | 8 +-
lib/_emerge/resolver/slot_collision.py | 32 +-
lib/portage/__init__.py | 74 +-
lib/portage/_global_updates.py | 74 +-
lib/portage/_sets/__init__.py | 2 +-
lib/portage/_sets/base.py | 2 +-
lib/portage/_sets/files.py | 8 +-
lib/portage/binpkg.py | 56 +
lib/portage/cache/anydbm.py | 4 +-
lib/portage/cache/cache_errors.py | 2 +-
lib/portage/cache/mappings.py | 4 +-
lib/portage/cache/sql_template.py | 6 +-
lib/portage/checksum.py | 39 +-
lib/portage/const.py | 72 +-
lib/portage/data.py | 65 +-
lib/portage/dbapi/_MergeProcess.py | 22 +
lib/portage/dbapi/__init__.py | 40 +-
lib/portage/dbapi/bintree.py | 529 ++++-
lib/portage/dbapi/porttree.py | 6 +-
lib/portage/dbapi/vartree.py | 176 +-
lib/portage/dep/__init__.py | 152 +-
lib/portage/dep/dep_check.py | 8 +-
lib/portage/dispatch_conf.py | 99 +-
lib/portage/eapi.py | 495 ++--
lib/portage/eclass_cache.py | 11 +-
lib/portage/exception.py | 26 +-
lib/portage/getbinpkg.py | 245 +-
lib/portage/glsa.py | 177 +-
lib/portage/gpg.py | 106 +
lib/portage/gpkg.py | 2016 ++++++++++++++++
lib/portage/localization.py | 8 +-
lib/portage/mail.py | 16 +-
lib/portage/manifest.py | 412 ++--
lib/portage/metadata.py | 34 +-
lib/portage/module.py | 76 +-
lib/portage/news.py | 75 +-
lib/portage/output.py | 75 +-
lib/portage/package/ebuild/_config/UseManager.py | 176 --
.../package/ebuild/_config/special_env_vars.py | 2 +-
.../package/ebuild/_config/unpack_dependencies.py | 55 -
lib/portage/package/ebuild/config.py | 112 +-
lib/portage/package/ebuild/doebuild.py | 22 +-
lib/portage/package/ebuild/fetch.py | 2 +-
lib/portage/sync/modules/git/git.py | 43 +-
.../06B3A311BD775C280D22A9305D90EA06352177F6.rev | 37 +
.../8DEDA2CDED49C8809287B89D8812797DDF1DD192.rev | 37 +
.../273B030399E7BEA66A9AD42216DE7CA17BA5D42E.key | Bin 0 -> 2055 bytes
.../C99796FB85B0C3DF03314A11B5850C51167D6282.key | Bin 0 -> 2055 bytes
lib/portage/tests/.gnupg/pubring.kbx | Bin 0 -> 2774 bytes
lib/portage/tests/.gnupg/trustdb.gpg | Bin 0 -> 1360 bytes
lib/portage/tests/__init__.py | 37 +-
lib/portage/tests/bin/test_filter_bash_env.py | 4 +-
lib/portage/tests/dep/testAtom.py | 34 +-
lib/portage/tests/dep/test_isvalidatom.py | 9 -
lib/portage/tests/emerge/test_simple.py | 47 +-
.../portage/tests/gpkg}/__init__.py | 0
.../tests => lib/portage/tests/gpkg}/__test__.py | 0
lib/portage/tests/gpkg/test_gpkg_checksum.py | 396 ++++
lib/portage/tests/gpkg/test_gpkg_gpg.py | 398 ++++
.../tests/gpkg/test_gpkg_metadata_update.py | 59 +
lib/portage/tests/gpkg/test_gpkg_metadata_url.py | 173 ++
lib/portage/tests/gpkg/test_gpkg_path.py | 390 ++++
lib/portage/tests/gpkg/test_gpkg_size.py | 58 +
lib/portage/tests/gpkg/test_gpkg_stream.py | 112 +
.../test_lazy_import_portage_baseline.py | 2 +-
lib/portage/tests/process/test_PipeLogger.py | 14 +-
lib/portage/tests/process/test_PopenProcess.py | 2 +-
.../tests/process/test_PopenProcessBlockingIO.py | 2 +-
lib/portage/tests/process/test_poll.py | 2 +-
lib/portage/tests/resolver/ResolverPlayground.py | 71 +-
.../test_build_id_profile_format.py | 50 +-
.../binpkg_multi_instance/test_rebuilt_binaries.py | 44 +-
.../tests/resolver/soname/test_autounmask.py | 38 +-
.../tests/resolver/soname/test_downgrade.py | 84 +-
.../tests/resolver/soname/test_or_choices.py | 39 +-
.../tests/resolver/soname/test_reinstall.py | 40 +-
.../tests/resolver/soname/test_skip_update.py | 39 +-
.../soname/test_slot_conflict_reinstall.py | 135 +-
.../resolver/soname/test_slot_conflict_update.py | 38 +-
.../tests/resolver/soname/test_soname_provided.py | 45 +-
.../tests/resolver/soname/test_unsatisfiable.py | 40 +-
.../tests/resolver/soname/test_unsatisfied.py | 40 +-
.../tests/resolver/test_autounmask_binpkg_use.py | 38 +-
lib/portage/tests/resolver/test_bdeps.py | 44 +-
.../resolver/test_binary_pkg_ebuild_visibility.py | 35 +-
lib/portage/tests/resolver/test_changed_deps.py | 41 +-
...test_complete_if_new_subslot_without_revbump.py | 40 +-
.../resolver/test_disjunctive_depend_order.py | 34 +-
lib/portage/tests/resolver/test_installkernel.py | 93 +
lib/portage/tests/resolver/test_multirepo.py | 62 +-
.../test_regular_slot_change_without_revbump.py | 41 +-
lib/portage/tests/resolver/test_simple.py | 34 +-
lib/portage/tests/resolver/test_slot_abi.py | 113 +-
.../tests/resolver/test_slot_abi_downgrade.py | 77 +-
.../resolver/test_slot_change_without_revbump.py | 40 +-
.../resolver/test_slot_operator_autounmask.py | 40 +-
.../tests/resolver/test_slot_operator_bdeps.py | 74 +-
.../tests/resolver/test_slot_operator_rebuild.py | 40 +-
.../tests/resolver/test_slot_operator_unsolved.py | 41 +-
.../tests/resolver/test_unecessary_slot_upgrade.py | 11 -
lib/portage/tests/resolver/test_useflags.py | 37 +-
lib/portage/tests/runTests.py | 14 +-
lib/portage/tests/update/test_move_ent.py | 129 +-
lib/portage/tests/update/test_move_slot_ent.py | 139 +-
lib/portage/tests/update/test_update_dbentry.py | 168 +-
lib/portage/tests/util/file_copy/test_copyfile.py | 6 +-
lib/portage/tests/util/test_install_mask.py | 33 +-
lib/portage/tests/util/test_mtimedb.py | 362 +++
lib/portage/tests/xpak/test_decodeint.py | 2 +-
lib/portage/util/_urlopen.py | 4 +-
lib/portage/util/backoff.py | 2 +-
lib/portage/util/changelog.py | 2 +-
lib/portage/util/install_mask.py | 18 +-
lib/portage/util/lafilefixer.py | 6 +-
lib/portage/util/movefile.py | 16 +-
lib/portage/util/mtimedb.py | 102 +-
lib/portage/util/whirlpool.py | 16 +-
lib/portage/versions.py | 82 +-
lib/portage/xml/metadata.py | 2 +-
man/ebuild.1 | 35 +-
man/ebuild.5 | 86 +-
man/make.conf.5 | 100 +-
man/portage.5 | 52 +-
pylintrc | 1 -
repoman/.repoman_not_installed | 0
repoman/MANIFEST.in | 4 -
repoman/NEWS | 14 -
repoman/README | 49 -
repoman/RELEASE-NOTES | 213 --
repoman/TEST-NOTES | 45 -
repoman/bin/repoman | 52 -
repoman/cnf/linechecks/linechecks.yaml | 34 -
repoman/cnf/metadata.xsd | 548 -----
repoman/cnf/qa_data/qa_data.yaml | 139 --
repoman/cnf/repository/linechecks.yaml | 251 --
repoman/cnf/repository/qa_data.yaml | 163 --
repoman/cnf/repository/repository.yaml | 76 -
repoman/lib/repoman/__init__.py | 103 -
repoman/lib/repoman/_portage.py | 26 -
repoman/lib/repoman/_subprocess.py | 58 -
repoman/lib/repoman/actions.py | 828 -------
repoman/lib/repoman/argparser.py | 388 ----
repoman/lib/repoman/check_missingslot.py | 39 -
repoman/lib/repoman/checks/__init__.py | 0
repoman/lib/repoman/config.py | 172 --
repoman/lib/repoman/copyrights.py | 143 --
repoman/lib/repoman/errors.py | 21 -
repoman/lib/repoman/gpg.py | 73 -
repoman/lib/repoman/main.py | 255 --
repoman/lib/repoman/metadata.py | 89 -
repoman/lib/repoman/modules/__init__.py | 0
repoman/lib/repoman/modules/commit/__init__.py | 0
repoman/lib/repoman/modules/commit/manifest.py | 122 -
repoman/lib/repoman/modules/commit/repochecks.py | 44 -
repoman/lib/repoman/modules/linechecks/__init__.py | 0
.../modules/linechecks/assignment/__init__.py | 27 -
.../modules/linechecks/assignment/assignment.py | 38 -
repoman/lib/repoman/modules/linechecks/base.py | 115 -
repoman/lib/repoman/modules/linechecks/config.py | 149 --
.../lib/repoman/modules/linechecks/controller.py | 164 --
.../repoman/modules/linechecks/depend/__init__.py | 21 -
.../repoman/modules/linechecks/depend/implicit.py | 38 -
.../modules/linechecks/deprecated/__init__.py | 46 -
.../modules/linechecks/deprecated/deprecated.py | 35 -
.../modules/linechecks/deprecated/inherit.py | 67 -
.../lib/repoman/modules/linechecks/do/__init__.py | 21 -
repoman/lib/repoman/modules/linechecks/do/dosym.py | 20 -
.../repoman/modules/linechecks/eapi/__init__.py | 51 -
.../lib/repoman/modules/linechecks/eapi/checks.py | 79 -
.../repoman/modules/linechecks/eapi/definition.py | 35 -
.../repoman/modules/linechecks/emake/__init__.py | 27 -
.../lib/repoman/modules/linechecks/emake/emake.py | 25 -
.../modules/linechecks/gentoo_header/__init__.py | 21 -
.../modules/linechecks/gentoo_header/header.py | 56 -
.../repoman/modules/linechecks/helpers/__init__.py | 21 -
.../repoman/modules/linechecks/helpers/offset.py | 21 -
.../repoman/modules/linechecks/nested/__init__.py | 21 -
.../repoman/modules/linechecks/nested/nested.py | 14 -
.../repoman/modules/linechecks/nested/nesteddie.py | 9 -
.../repoman/modules/linechecks/patches/__init__.py | 21 -
.../repoman/modules/linechecks/patches/patches.py | 24 -
.../repoman/modules/linechecks/phases/__init__.py | 40 -
.../lib/repoman/modules/linechecks/phases/phase.py | 188 --
.../repoman/modules/linechecks/portage/__init__.py | 27 -
.../repoman/modules/linechecks/portage/internal.py | 32 -
.../repoman/modules/linechecks/quotes/__init__.py | 27 -
.../repoman/modules/linechecks/quotes/quoteda.py | 15 -
.../repoman/modules/linechecks/quotes/quotes.py | 92 -
.../lib/repoman/modules/linechecks/uri/__init__.py | 21 -
repoman/lib/repoman/modules/linechecks/uri/uri.py | 30 -
.../lib/repoman/modules/linechecks/use/__init__.py | 21 -
.../repoman/modules/linechecks/use/builtwith.py | 9 -
.../repoman/modules/linechecks/useless/__init__.py | 27 -
.../lib/repoman/modules/linechecks/useless/cd.py | 24 -
.../repoman/modules/linechecks/useless/dodoc.py | 17 -
.../modules/linechecks/whitespace/__init__.py | 27 -
.../repoman/modules/linechecks/whitespace/blank.py | 24 -
.../modules/linechecks/whitespace/whitespace.py | 20 -
.../modules/linechecks/workaround/__init__.py | 21 -
.../modules/linechecks/workaround/workarounds.py | 11 -
repoman/lib/repoman/modules/scan/__init__.py | 0
.../lib/repoman/modules/scan/depend/__init__.py | 43 -
.../repoman/modules/scan/depend/_depend_checks.py | 260 ---
.../lib/repoman/modules/scan/depend/_gen_arches.py | 67 -
repoman/lib/repoman/modules/scan/depend/profile.py | 427 ----
.../repoman/modules/scan/directories/__init__.py | 53 -
.../lib/repoman/modules/scan/directories/files.py | 99 -
.../lib/repoman/modules/scan/directories/mtime.py | 30 -
repoman/lib/repoman/modules/scan/eapi/__init__.py | 28 -
repoman/lib/repoman/modules/scan/eapi/eapi.py | 50 -
.../lib/repoman/modules/scan/ebuild/__init__.py | 66 -
repoman/lib/repoman/modules/scan/ebuild/ebuild.py | 263 ---
.../lib/repoman/modules/scan/ebuild/multicheck.py | 62 -
.../lib/repoman/modules/scan/eclasses/__init__.py | 49 -
repoman/lib/repoman/modules/scan/eclasses/live.py | 77 -
repoman/lib/repoman/modules/scan/eclasses/ruby.py | 49 -
repoman/lib/repoman/modules/scan/fetch/__init__.py | 37 -
repoman/lib/repoman/modules/scan/fetch/fetches.py | 205 --
.../lib/repoman/modules/scan/keywords/__init__.py | 37 -
.../lib/repoman/modules/scan/keywords/keywords.py | 179 --
.../lib/repoman/modules/scan/manifest/__init__.py | 34 -
.../lib/repoman/modules/scan/manifest/manifests.py | 56 -
.../lib/repoman/modules/scan/metadata/__init__.py | 89 -
.../repoman/modules/scan/metadata/description.py | 44 -
.../modules/scan/metadata/ebuild_metadata.py | 88 -
.../repoman/modules/scan/metadata/pkgmetadata.py | 221 --
.../lib/repoman/modules/scan/metadata/restrict.py | 58 -
.../lib/repoman/modules/scan/metadata/use_flags.py | 103 -
repoman/lib/repoman/modules/scan/module.py | 127 -
.../lib/repoman/modules/scan/options/__init__.py | 28 -
.../lib/repoman/modules/scan/options/options.py | 27 -
repoman/lib/repoman/modules/scan/scan.py | 67 -
repoman/lib/repoman/modules/scan/scanbase.py | 79 -
repoman/lib/repoman/modules/vcs/None/__init__.py | 32 -
repoman/lib/repoman/modules/vcs/None/changes.py | 50 -
repoman/lib/repoman/modules/vcs/None/status.py | 52 -
repoman/lib/repoman/modules/vcs/__init__.py | 12 -
repoman/lib/repoman/modules/vcs/bzr/__init__.py | 32 -
repoman/lib/repoman/modules/vcs/bzr/changes.py | 77 -
repoman/lib/repoman/modules/vcs/bzr/status.py | 72 -
repoman/lib/repoman/modules/vcs/changes.py | 170 --
repoman/lib/repoman/modules/vcs/cvs/__init__.py | 32 -
repoman/lib/repoman/modules/vcs/cvs/changes.py | 134 --
repoman/lib/repoman/modules/vcs/cvs/status.py | 134 --
repoman/lib/repoman/modules/vcs/git/__init__.py | 32 -
repoman/lib/repoman/modules/vcs/git/changes.py | 145 --
repoman/lib/repoman/modules/vcs/git/status.py | 80 -
repoman/lib/repoman/modules/vcs/hg/__init__.py | 32 -
repoman/lib/repoman/modules/vcs/hg/changes.py | 109 -
repoman/lib/repoman/modules/vcs/hg/status.py | 68 -
repoman/lib/repoman/modules/vcs/settings.py | 113 -
repoman/lib/repoman/modules/vcs/svn/__init__.py | 32 -
repoman/lib/repoman/modules/vcs/svn/changes.py | 156 --
repoman/lib/repoman/modules/vcs/svn/status.py | 151 --
repoman/lib/repoman/modules/vcs/vcs.py | 145 --
repoman/lib/repoman/profile.py | 94 -
repoman/lib/repoman/qa_data.py | 210 --
repoman/lib/repoman/qa_tracker.py | 46 -
repoman/lib/repoman/repos.py | 377 ---
repoman/lib/repoman/scanner.py | 484 ----
repoman/lib/repoman/tests/__init__.py | 328 ---
repoman/lib/repoman/tests/changelog/__test__.py | 0
.../lib/repoman/tests/changelog/test_echangelog.py | 169 --
repoman/lib/repoman/tests/commit/__init__.py | 2 -
repoman/lib/repoman/tests/commit/__test__.py | 0
repoman/lib/repoman/tests/commit/test_commitmsg.py | 155 --
repoman/lib/repoman/tests/runTests.py | 75 -
repoman/lib/repoman/tests/simple/__init__.py | 2 -
repoman/lib/repoman/tests/simple/__test__.py | 0
repoman/lib/repoman/tests/simple/test_simple.py | 512 ----
repoman/lib/repoman/utilities.py | 590 -----
repoman/man/repoman.1 | 478 ----
repoman/runtests | 182 --
repoman/setup.py | 515 ----
run-pylint | 2 +
runtests | 110 +-
setup.py | 20 +-
tabcheck.py | 7 -
tox.ini | 8 +-
353 files changed, 13289 insertions(+), 22599 deletions(-)
diff --cc bin/archive-conf
index 11e1d25b7,3f7d186fe..ecc8d8a1c
--- a/bin/archive-conf
+++ b/bin/archive-conf
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/binhost-snapshot
index fbecfa8bb,4022cb32c..0788e2704
--- a/bin/binhost-snapshot
+++ b/bin/binhost-snapshot
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2010-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/check-implicit-pointer-usage.py
index 4c736fcd7,06b666c88..1457a3c9f
--- a/bin/check-implicit-pointer-usage.py
+++ b/bin/check-implicit-pointer-usage.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Ripped from HP and updated from Debian
# Update by Gentoo to support unicode output
diff --cc bin/chmod-lite.py
index 642a6a544,517a55bd9..baec75c40
--- a/bin/chmod-lite.py
+++ b/bin/chmod-lite.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/chpathtool.py
index f3842bdd5,d658e5012..1b30bc272
--- a/bin/chpathtool.py
+++ b/bin/chpathtool.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2011-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/clean_locks
index 7959486ac,b80213911..4ff410aa3
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/dispatch-conf
index c2587639d,9490197d3..52f065b91
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/dohtml.py
index 30e685d00,7aebd2af7..198d63799
--- a/bin/dohtml.py
+++ b/bin/dohtml.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/eapi.sh
index 1aaaa19e8,689e09b10..59ce6d9de
--- a/bin/eapi.sh
+++ b/bin/eapi.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2012-2021 Gentoo Authors
+ # Copyright 2012-2022 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# PHASES
diff --cc bin/ebuild
index 6f70ee4bf,546ab9d1c..9cf4afd5a
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/ebuild-ipc.py
index 2f5790ee4,bc5dda27d..4a6a9468a
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2010-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/egencache
index 99028203b,842f453ea..31a555102
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2009-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/emaint
index af5234183,b9a129ed0..38a2c6896
--- a/bin/emaint
+++ b/bin/emaint
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2005-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/emerge
index d952840ef,d1a8d9f52..9a0a570b5
--- a/bin/emerge
+++ b/bin/emerge
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2006-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/emirrordist
index 866a2be65,9f8db8292..36bc7611a
--- a/bin/emirrordist
+++ b/bin/emirrordist
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2013-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/env-update
index 5c2df8544,8e597b03d..7f1ae90ab
--- a/bin/env-update
+++ b/bin/env-update
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/filter-bash-environment.py
index a3cf7191a,86cb22948..09a45e509
--- a/bin/filter-bash-environment.py
+++ b/bin/filter-bash-environment.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/fixpackages
index 5c4185071,6f78b174d..ae28b216c
--- a/bin/fixpackages
+++ b/bin/fixpackages
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/glsa-check
index 04312a236,431590cf8..2b2976100
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/install.py
index 6729778a1,4bdffd255..e8933606a
--- a/bin/install.py
+++ b/bin/install.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2013-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/lock-helper.py
index f09e4f2a7,6619d625a..e7acd1938
--- a/bin/lock-helper.py
+++ b/bin/lock-helper.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2010-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/misc-functions.sh
index 6c86952b4,41340e3f7..887af7a23
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -19,17 -19,13 +19,17 @@@ source "${PORTAGE_BIN_PATH}/ebuild.sh"
install_symlink_html_docs() {
if ! ___eapi_has_prefix_variables; then
local ED=${D}
+ else
+ # PREFIX LOCAL: ED need not exist, whereas D does
+ [[ ! -d ${ED} && -d ${D} ]] && dodir /
+ # END PREFIX LOCAL
fi
cd "${ED}" || die "cd failed"
- #symlink the html documentation (if DOC_SYMLINKS_DIR is set in make.conf)
- if [ -n "${DOC_SYMLINKS_DIR}" ] ; then
+ # Symlink the html documentation (if DOC_SYMLINKS_DIR is set in make.conf)
+ if [[ -n "${DOC_SYMLINKS_DIR}" ]]; then
local mydocdir docdir
for docdir in "${HTMLDOC_DIR:-does/not/exist}" "${PF}/html" "${PF}/HTML" "${P}/html" "${P}/HTML" ; do
- if [ -d "usr/share/doc/${docdir}" ] ; then
+ if [[ -d "usr/share/doc/${docdir}" ]]; then
mydocdir="/usr/share/doc/${docdir}"
fi
done
@@@ -287,9 -234,22 +273,24 @@@ install_qa_check_elf()
fi
fi
fi
+}
+install_qa_check_misc() {
+ # If binpkg-dostrip is enabled, apply stripping before creating
+ # the binary package.
+ # Note: disabling it won't help with packages calling prepstrip directly.
+ # We do this after the scanelf bits so that we can reuse the data. bug #749624.
+ if has binpkg-dostrip ${FEATURES}; then
+ export STRIP_MASK
+ if ___eapi_has_dostrip; then
+ "${PORTAGE_BIN_PATH}"/estrip --queue "${PORTAGE_DOSTRIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --ignore "${PORTAGE_DOSTRIP_SKIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --dequeue
+ else
+ prepallstrip
+ fi
+ fi
+
# Portage regenerates this on the installed system.
rm -f "${ED%/}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
@@@ -1203,35 -507,50 +1204,54 @@@ __dyn_package()
# Sandbox is disabled in case the user wants to use a symlink
# for $PKGDIR and/or $PKGDIR/All.
export SANDBOX_ON="0"
- [ -z "${PORTAGE_BINPKG_TMPFILE}" ] && \
+ [[ -z "${PORTAGE_BINPKG_TMPFILE}" ]] && \
die "PORTAGE_BINPKG_TMPFILE is unset"
mkdir -p "${PORTAGE_BINPKG_TMPFILE%/*}" || die "mkdir failed"
- [ -z "${PORTAGE_COMPRESSION_COMMAND}" ] && \
- die "PORTAGE_COMPRESSION_COMMAND is unset"
- tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${D}" . | \
- $PORTAGE_COMPRESSION_COMMAND > "$PORTAGE_BINPKG_TMPFILE"
- assert "failed to pack binary package: '$PORTAGE_BINPKG_TMPFILE'"
- PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
- "$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info"
- if [ $? -ne 0 ]; then
- rm -f "${PORTAGE_BINPKG_TMPFILE}"
- die "Failed to append metadata to the tbz2 file"
- fi
- local md5_hash=""
- if type md5sum &>/dev/null ; then
- md5_hash=$(md5sum "${PORTAGE_BINPKG_TMPFILE}")
- md5_hash=${md5_hash%% *}
- elif type md5 &>/dev/null ; then
- md5_hash=$(md5 "${PORTAGE_BINPKG_TMPFILE}")
- md5_hash=${md5_hash##* }
+
+ if [[ "${BINPKG_FORMAT}" == "xpak" ]]; then
+ local tar_options=""
+ [[ $PORTAGE_VERBOSE = 1 ]] && tar_options+=" -v"
+ has xattr ${FEATURES} && [[ $(tar --help 2> /dev/null) == *--xattrs* ]] && tar_options+=" --xattrs"
+ [[ -z "${PORTAGE_COMPRESSION_COMMAND}" ]] && \
+ die "PORTAGE_COMPRESSION_COMMAND is unset"
+ tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${D}" . | \
+ $PORTAGE_COMPRESSION_COMMAND > "$PORTAGE_BINPKG_TMPFILE"
+ assert "failed to pack binary package: '$PORTAGE_BINPKG_TMPFILE'"
++ # BEGIN PREFIX LOCAL: use PREFIX_PORTAGE_PYTHON fallback
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
++ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
+ "$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info"
++ # END PREFIX LOCAL
+ if [[ $? -ne 0 ]]; then
+ rm -f "${PORTAGE_BINPKG_TMPFILE}"
+ die "Failed to append metadata to the tbz2 file"
+ fi
+ local md5_hash=""
+ if type md5sum &>/dev/null ; then
+ md5_hash=$(md5sum "${PORTAGE_BINPKG_TMPFILE}")
+ md5_hash=${md5_hash%% *}
+ elif type md5 &>/dev/null ; then
+ md5_hash=$(md5 "${PORTAGE_BINPKG_TMPFILE}")
+ md5_hash=${md5_hash##* }
+ fi
+ [[ -n "${md5_hash}" ]] && \
+ echo ${md5_hash} > "${PORTAGE_BUILDDIR}"/build-info/BINPKGMD5
+ __vecho ">>> Done."
+
+ elif [[ "${BINPKG_FORMAT}" == "gpkg" ]]; then
++ # BEGIN PREFIX LOCAL: use PREFIX_PORTAGE_PYTHON fallback
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH"/gpkg-helper.py compress \
++ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/gpkg-helper.py compress \
+ "${CATEGORY}/${PF}" "$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info" "${D}"
++ # END PREFIX LOCAL
+ if [[ $? -ne 0 ]]; then
+ rm -f "${PORTAGE_BINPKG_TMPFILE}"
+ die "Failed to create binpkg file"
+ fi
+ __vecho ">>> Done."
+ else
+ die "Unknown BINPKG_FORMAT ${BINPKG_FORMAT}"
fi
- [ -n "${md5_hash}" ] && \
- echo ${md5_hash} > "${PORTAGE_BUILDDIR}"/build-info/BINPKGMD5
- __vecho ">>> Done."
cd "${PORTAGE_BUILDDIR}"
>> "$PORTAGE_BUILDDIR/.packaged" || \
diff --cc bin/phase-functions.sh
index 0b6b93038,84a5c1ec3..5a653fa64
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -766,9 -770,8 +770,9 @@@ __dyn_help()
echo "production (stripped)"
fi
echo " merge to : ${ROOT}"
+ echo " offset : ${EPREFIX}"
echo
- if [ -n "$USE" ]; then
+ if [[ -n "$USE" ]]; then
echo "Additionally, support for the following optional features will be enabled:"
echo
echo " ${USE}"
diff --cc bin/portageq
index a178ff0b0,6d12c98dd..4fe254d5d
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/quickpkg
index 1bcbda8ba,773c1c07e..7e0445dc3
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/regenworld
index b4f8509cb,7927dd237..6d381fad6
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/save-ebuild-env.sh
index b3d4c7363,17e4b1b3e..5c7fd4d7d
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -81,21 -81,10 +81,14 @@@ __save_ebuild_env()
${QA_INTERCEPTORS}
___eapi_has_usex && unset -f usex
- ___eapi_has_master_repositories && unset -f master_repositories
- ___eapi_has_repository_path && unset -f repository_path
- ___eapi_has_available_eclasses && unset -f available_eclasses
- ___eapi_has_eclass_path && unset -f eclass_path
- ___eapi_has_license_path && unset -f license_path
- ___eapi_has_package_manager_build_user && unset -f package_manager_build_user
- ___eapi_has_package_manager_build_group && unset -f package_manager_build_group
- # Clear out the triple underscore namespace as it is reserved by the PM.
- unset -f $(compgen -A function ___)
- unset ${!___*}
+ # BEGIN PREFIX LOCAL: compgen is not compiled in during bootstrap
+ if type compgen >& /dev/null ; then
+ # Clear out the triple underscore namespace as it is reserved by the PM.
+ unset -f $(compgen -A function ___)
+ unset ${!___*}
+ fi
+ # END PREFIX LOCAL
# portage config variables and variables set directly by portage
unset ACCEPT_LICENSE BUILD_PREFIX COLS \
diff --cc bin/xattr-helper.py
index fb39ae9df,6e50ac487..8ad94c4ab
--- a/bin/xattr-helper.py
+++ b/bin/xattr-helper.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2012-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/xpak-helper.py
index d92a3f7d8,ac29995e7..8ffa9c747
--- a/bin/xpak-helper.py
+++ b/bin/xpak-helper.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -b
-#!/usr/bin/env python
++#!@PREFIX_PORTAGE_PYTHON@
# Copyright 2009-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc cnf/make.globals
index 4840dc354,f951bb317..613f58ff0
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -145,11 -143,11 +166,11 @@@ PORTAGE_WORKDIR_MODE="0700
PORTAGE_ELOG_CLASSES="log warn error"
PORTAGE_ELOG_SYSTEM="save_summary:log,warn,error,qa echo"
-PORTAGE_ELOG_MAILURI="root"
+PORTAGE_ELOG_MAILURI="@rootuser@"
PORTAGE_ELOG_MAILSUBJECT="[portage] ebuild log for \${PACKAGE} on \${HOST}"
-PORTAGE_ELOG_MAILFROM="portage@localhost"
+PORTAGE_ELOG_MAILFROM="@portageuser@@localhost"
- # Signing command used by repoman
+ # Signing command used by egencache
PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
# btrfs.* attributes are irrelevant, see bug #527636.
diff --cc lib/_emerge/EbuildPhase.py
index 3fe4c8f3a,9a04f9c1f..df166ad47
--- a/lib/_emerge/EbuildPhase.py
+++ b/lib/_emerge/EbuildPhase.py
@@@ -29,8 -29,8 +29,10 @@@ from portage.util._async.AsyncTaskFutur
from portage.util._async.BuildLogger import BuildLogger
from portage.util.futures import asyncio
from portage.util.futures.executor.fork import ForkExecutor
+ from portage.exception import InvalidBinaryPackageFormat
+ from portage.const import SUPPORTED_GENTOO_BINPKG_FORMATS
+# PREFIX LOCAL
+from portage.const import EPREFIX
try:
from portage.xml.metadata import MetaDataXML
diff --cc lib/portage/__init__.py
index 1e4c68b13,a4a2c8865..ab3017a5d
--- a/lib/portage/__init__.py
+++ b/lib/portage/__init__.py
@@@ -50,19 -48,9 +50,19 @@@ except ImportError as e
sys.stderr.write(
"!!! gone wrong. Here is the information we got for this exception:\n"
)
- sys.stderr.write(" " + str(e) + "\n\n")
+ sys.stderr.write(f" {e}\n\n")
raise
+# BEGIN PREFIX LOCAL
+# for bug #758230, on macOS the default was switched from fork to spawn,
+# the latter causing issues because all kinds of things can't be
+# pickled, so force fork mode for now
+try:
+ multiprocessing.set_start_method('fork')
+except RuntimeError:
+ pass
+# END PREFIX LOCAL
+
try:
import portage.proxy.lazyimport
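A short illustration (not portage code) of why the comment above matters: with the 'spawn' start method the target callable must be picklable, which fails for lambdas and many other objects, while 'fork' never pickles it.
import multiprocessing
def run(ctx):
    # A lambda cannot be pickled, so this only works where no pickling happens.
    p = ctx.Process(target=lambda: print("hello from child"))
    p.start()
    p.join()
if __name__ == "__main__":
    run(multiprocessing.get_context("fork"))  # fine (POSIX only)
    try:
        run(multiprocessing.get_context("spawn"))
    except Exception as exc:  # pickling of the local lambda fails
        print("spawn failed:", exc)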
diff --cc lib/portage/const.py
index f2c69a4bb,f0f57067a..3038ce285
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -2,12 -2,8 +2,13 @@@
# Copyright 1998-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+
import os
+ import sys
# ===========================================================================
# START OF CONSTANTS -- START OF CONSTANTS -- START OF CONSTANTS -- START OF
@@@ -62,40 -58,22 +63,40 @@@ DEPCACHE_PATH = f"/{CACHE_PATH}/dep
GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
# these variables are not used with target_root or config_root
+PORTAGE_BASE_PATH = PORTAGE_BASE
# NOTE: Use realpath(__file__) so that python module symlinks in site-packages
# are followed back to the real location of the whole portage installation.
+#PREFIX: below should work, but I'm not sure how it affects other places
# NOTE: Please keep PORTAGE_BASE_PATH in one line to help substitutions.
# fmt:off
-PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
+# PREFIX LOCAL (from const_autotools)
+#PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
# fmt:on
- PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
+ PORTAGE_BIN_PATH = f"{PORTAGE_BASE_PATH}/bin"
PORTAGE_PYM_PATH = os.path.realpath(os.path.join(__file__, "../.."))
- LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale" # FIXME: not used
- EBUILD_SH_BINARY = PORTAGE_BIN_PATH + "/ebuild.sh"
- MISC_SH_BINARY = PORTAGE_BIN_PATH + "/misc-functions.sh"
- # BEGIN PREFIX LOCAL
- SANDBOX_BINARY = EPREFIX + "/usr/bin/sandbox"
- FAKEROOT_BINARY = EPREFIX + "/usr/bin/fakeroot"
+ LOCALE_DATA_PATH = f"{PORTAGE_BASE_PATH}/locale" # FIXME: not used
+ EBUILD_SH_BINARY = f"{PORTAGE_BIN_PATH}/ebuild.sh"
+ MISC_SH_BINARY = f"{PORTAGE_BIN_PATH}/misc-functions.sh"
-SANDBOX_BINARY = "/usr/bin/sandbox"
-FAKEROOT_BINARY = "/usr/bin/fakeroot"
++# BEGIN PREFIX LOCAL: use EPREFIX for binaries
++SANDBOX_BINARY = f"{EPREFIX}/usr/bin/sandbox"
++FAKEROOT_BINARY = f"{EPREFIX}/usr/bin/fakeroot"
+# END PREFIX LOCAL
BASH_BINARY = "/bin/bash"
MOVE_BINARY = "/bin/mv"
PRELINK_BINARY = "/usr/sbin/prelink"
- # BEGIN PREFIX LOCAL
++# BEGIN PREFIX LOCAL: macOS sandbox
+MACOSSANDBOX_BINARY = "/usr/bin/sandbox-exec"
+MACOSSANDBOX_PROFILE = '''(version 1)
+(allow default)
+(deny file-write*)
+(allow file-write* file-write-setugid
+@@MACOSSANDBOX_PATHS@@)
+(allow file-write-data
+@@MACOSSANDBOX_PATHS_CONTENT_ONLY@@)'''
+
+PORTAGE_GROUPNAME = portagegroup
+PORTAGE_USERNAME = portageuser
+# END PREFIX LOCAL
INVALID_ENV_FILE = "/etc/spork/is/not/valid/profile.env"
MERGING_IDENTIFIER = "-MERGING-"
@@@ -235,9 -217,7 +240,9 @@@ SUPPORTED_FEATURES = frozenset
"usersync",
"webrsync-gpg",
"xattr",
+ # PREFIX LOCAL
+ "stacked-prefix",
- ]
+ )
)
EAPI = 8
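As a rough, hypothetical sketch of how such a profile template can be consumed (the real substitution happens in doebuild.py and may differ): the placeholders are replaced with sandbox path expressions and the result is handed to sandbox-exec via -p. The writable path and the (subpath ...) form are assumptions.
# Hypothetical illustration only; paths and substitution format are assumptions.
from portage.const import MACOSSANDBOX_BINARY, MACOSSANDBOX_PROFILE
writable = '(subpath "/var/tmp/portage")'  # hypothetical writable location
profile = (
    MACOSSANDBOX_PROFILE
    .replace("@@MACOSSANDBOX_PATHS@@", writable)
    .replace("@@MACOSSANDBOX_PATHS_CONTENT_ONLY@@", writable)
)
cmd = [MACOSSANDBOX_BINARY, "-p", profile, "/bin/sh", "-c", "echo sandboxed"]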
diff --cc lib/portage/data.py
index 6848751fe,1ef8d4aef..5d34db187
--- a/lib/portage/data.py
+++ b/lib/portage/data.py
@@@ -6,9 -6,9 +6,10 @@@ import gr
import os
import platform
import pwd
+from portage.const import PORTAGE_GROUPNAME, PORTAGE_USERNAME, EPREFIX
import portage
+ from portage.localization import _
portage.proxy.lazyimport.lazyimport(
globals(),
@@@ -17,17 -17,11 +18,12 @@@
"portage.util.path:first_existing",
"subprocess",
)
- from portage.localization import _
ostype = platform.system()
- userland = None
- # Prefix always has USERLAND=GNU, even on
- # FreeBSD, OpenBSD and Darwin (thank the lord!).
- # Hopefully this entire USERLAND hack can go once
+ userland = "GNU"
-if ostype == "DragonFly" or ostype.endswith("BSD"):
++# PREFIX LOCAL: Prefix always has USERLAND=GNU
+if EPREFIX == "" and (ostype == "DragonFly" or ostype.endswith("BSD")):
userland = "BSD"
- else:
- userland = "GNU"
lchown = getattr(os, "lchown", None)
diff --cc lib/portage/dbapi/bintree.py
index 8b008a93d,b441fff9a..986ffeb3d
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -514,9 -582,7 +588,10 @@@ class binarytree
"SLOT": "0",
"USE": "",
}
- self._pkgindex_inherited_keys = ["CHOST", "repository",
- self._pkgindex_inherited_keys = ["BINPKG_FORMAT", "CHOST", "repository"]
++ self._pkgindex_inherited_keys = ["BINPKG_FORMAT", "CHOST",
+ # PREFIX LOCAL
- "EPREFIX"]
++ "EPREFIX",
++ "repository"]
# Populate the header with appropriate defaults.
self._pkgindex_default_header_data = {
diff --cc lib/portage/dbapi/vartree.py
index 7e68b3f8c,a95d60691..6c668726d
--- a/lib/portage/dbapi/vartree.py
+++ b/lib/portage/dbapi/vartree.py
@@@ -59,8 -55,7 +60,9 @@@ from portage.const import
PORTAGE_PACKAGE_ATOM,
PRIVATE_PATH,
VDB_PATH,
+ SUPPORTED_GENTOO_BINPKG_FORMATS,
+ # PREFIX LOCAL
+ EPREFIX,
)
from portage.dbapi import dbapi
from portage.exception import (
diff --cc lib/portage/getbinpkg.py
index aaf0bcf81,ea9ee1d0a..3cad401af
--- a/lib/portage/getbinpkg.py
+++ b/lib/portage/getbinpkg.py
@@@ -19,12 -19,8 +19,10 @@@ import socke
import time
import tempfile
import base64
+# PREFIX LOCAL
+from portage.const import CACHE_PATH
import warnings
- _all_errors = [NotImplementedError, ValueError, socket.error]
-
from html.parser import HTMLParser as html_parser_HTMLParser
from urllib.parse import unquote as urllib_parse_unquote
@@@ -621,18 -603,15 +605,17 @@@ def dir_get_metadata
stacklevel=2,
)
- if not conn:
+ keepconnection = 1
+ if conn:
keepconnection = 0
- else:
- keepconnection = 1
- cache_path = "/var/cache/edb"
+ # PREFIX LOCAL
+ cache_path = CACHE_PATH
metadatafilename = os.path.join(cache_path, "remote_metadata.pickle")
- if makepickle is None:
- # PREFIX LOCAL
- makepickle = CACHE_PATH + "/metadata.idx.most_recent"
+ if not makepickle:
- makepickle = "/var/cache/edb/metadata.idx.most_recent"
++ # PREFIX LOCAL: use CACHE_PATH for EPREFIX
++ makepickle = os.path.join(cache_path, "metadata.idx.most_recent")
try:
conn = create_conn(baseurl, conn)[0]
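Illustration of the intent of this hunk, assuming that in the prefix branch CACHE_PATH already resolves inside the offset prefix: both cache file names are derived from the one constant instead of a hard-coded /var/cache/edb. The values shown are hypothetical.
# Sketch only; the CACHE_PATH value shown is a hypothetical prefix-aware example.
import os
EPREFIX = "/opt/gentoo"                  # hypothetical offset prefix
CACHE_PATH = EPREFIX + "/var/cache/edb"  # assumed prefix-aware constant
cache_path = CACHE_PATH
metadatafilename = os.path.join(cache_path, "remote_metadata.pickle")
makepickle = os.path.join(cache_path, "metadata.idx.most_recent")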
diff --cc lib/portage/package/ebuild/doebuild.py
index af8845f34,8ee9f73c6..3c1998889
--- a/lib/portage/package/ebuild/doebuild.py
+++ b/lib/portage/package/ebuild/doebuild.py
@@@ -67,10 -66,7 +67,11 @@@ from portage.const import
INVALID_ENV_FILE,
MISC_SH_BINARY,
PORTAGE_PYM_PACKAGES,
+ SUPPORTED_GENTOO_BINPKG_FORMATS,
+ # BEGIN PREFIX LOCAL
+ EPREFIX,
+ MACOSSANDBOX_PROFILE,
+ # END PREFIX LOCAL
)
from portage.data import portage_gid, portage_uid, secpass, uid, userpriv_groups
from portage.dbapi.porttree import _parse_uri_map
diff --cc lib/portage/tests/resolver/ResolverPlayground.py
index 969d8f2fb,ec69ee068..81ce9178b
--- a/lib/portage/tests/resolver/ResolverPlayground.py
+++ b/lib/portage/tests/resolver/ResolverPlayground.py
@@@ -356,8 -360,7 +360,9 @@@ class ResolverPlayground
metadata["repository"] = repo
metadata["CATEGORY"] = cat
metadata["PF"] = pf
+ metadata["BINPKG_FORMAT"] = binpkg_format
+ # PREFIX LOCAL
+ metadata["EPREFIX"] = self.eprefix
repo_dir = self.pkgdir
category_dir = os.path.join(repo_dir, cat)
diff --cc lib/portage/tests/runTests.py
index f2e799c65,00a8ad7bb..2a11381b0
--- a/lib/portage/tests/runTests.py
+++ b/lib/portage/tests/runTests.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bWd
-#!/usr/bin/env python -Wd
++#!@PREFIX_PORTAGE_PYTHON@ -Wd
# runTests.py -- Portage Unit Test Functionality
# Copyright 2006-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-01-14 10:40 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-01-14 10:40 UTC (permalink / raw
To: gentoo-commits
commit: 4220da1565d996547bd8bb50ee0aa0fb404da120
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 14 10:40:17 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jan 14 10:40:17 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=4220da15
configure.ac: ran autoupdate
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
configure.ac | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/configure.ac b/configure.ac
index 9083824eb..9def5210c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1,7 +1,7 @@
dnl Process this file with autoconf to produce a configure script.
-AC_INIT(portage-prefix, @version@, prefix@gentoo.org)
+AC_INIT([portage-prefix],[@version@],[prefix@gentoo.org])
-AC_PREREQ([2.61])
+AC_PREREQ([2.71])
case "${prefix}" in
'') AC_MSG_ERROR([bad value ${prefix} for --prefix, must not be empty]) ;;
@@ -46,7 +46,7 @@ GENTOO_PATH_GNUPROG(PORTAGE_XARGS, [xargs])
GENTOO_PATH_GNUPROG(PORTAGE_GREP, [grep])
AC_ARG_WITH(portage-user,
-AC_HELP_STRING([--with-portage-user=myuser],[use user 'myuser' as portage owner (default portage)]),
+AS_HELP_STRING([--with-portage-user=myuser],[use user 'myuser' as portage owner (default portage)]),
[case "${withval}" in
""|yes) AC_MSG_ERROR(bad value ${withval} for --with-portage-user);;
*) portageuser="${withval}";;
@@ -54,7 +54,7 @@ esac],
[portageuser="portage"])
AC_ARG_WITH(portage-group,
-AC_HELP_STRING([--with-portage-group=mygroup],[use group 'mygroup' as portage users group (default portage)]),
+AS_HELP_STRING([--with-portage-group=mygroup],[use group 'mygroup' as portage users group (default portage)]),
[case "${withval}" in
""|yes) AC_MSG_ERROR(bad value ${withval} for --with-portage-group);;
*) portagegroup="${withval}";;
@@ -62,7 +62,7 @@ esac],
[portagegroup="portage"])
AC_ARG_WITH(root-user,
-AC_HELP_STRING([--with-root-user=myuser],[uses 'myuser' as owner of installed files (default is portage-user)]),
+AS_HELP_STRING([--with-root-user=myuser],[uses 'myuser' as owner of installed files (default is portage-user)]),
[case "${withval}" in
""|yes) AC_MSG_ERROR(bad value ${withval} for --with-root-user);;
*) rootuser="${withval}";;
@@ -88,8 +88,7 @@ else
fi
AC_ARG_WITH(offset-prefix,
-AC_HELP_STRING([--with-offset-prefix],
- [specify the installation prefix for all packages, defaults to an empty string]),
+AS_HELP_STRING([--with-offset-prefix],[specify the installation prefix for all packages, defaults to an empty string]),
[PORTAGE_EPREFIX=$withval],
[PORTAGE_EPREFIX=''])
@@ -99,7 +98,7 @@ then
fi
AC_ARG_WITH(extra-path,
-AC_HELP_STRING([--with-extra-path], [specify additional PATHs available to the portage build environment (use with care)]),
+AS_HELP_STRING([--with-extra-path],[specify additional PATHs available to the portage build environment (use with care)]),
[EXTRA_PATH="$withval"],
[EXTRA_PATH=""])
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2022-01-14 10:32 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2022-01-14 10:32 UTC (permalink / raw
To: gentoo-commits
commit: 9d0d47eed1ed7b5e2bba49b1d79ca3e9fc7fb7ec
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 14 10:32:01 2022 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jan 14 10:32:01 2022 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=9d0d47ee
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.editorconfig | 2 +-
.github/workflows/black.yml | 10 +
.github/workflows/ci.yml | 4 +-
.gitignorerevs | 4 +
DEVELOPING | 19 +-
MANIFEST.in | 2 +
NEWS | 56 +
README => README.md | 59 +-
RELEASE-NOTES | 27 +
bin/check-implicit-pointer-usage.py | 52 +-
bin/chmod-lite.py | 25 +-
bin/chpathtool.py | 323 +-
bin/dispatch-conf | 13 +-
bin/dohtml.py | 424 +-
bin/doins.py | 1007 +-
bin/ebuild-ipc.py | 519 +-
bin/ebuild.sh | 83 +-
bin/estrip | 103 +-
bin/filter-bash-environment.py | 266 +-
bin/install-qa-check.d/10ignored-flags | 4 +-
bin/install.py | 379 +-
bin/isolated-functions.sh | 90 +-
bin/lock-helper.py | 34 +-
bin/misc-functions.sh | 64 +-
bin/phase-functions.sh | 6 +-
bin/phase-helpers.sh | 14 +-
bin/pid-ns-init | 248 +-
bin/portageq | 3 +-
bin/save-ebuild-env.sh | 18 +-
bin/socks5-server.py | 449 +-
bin/xattr-helper.py | 226 +-
bin/xpak-helper.py | 81 +-
cnf/make.conf.example.riscv.diff | 61 +
cnf/make.globals | 2 +-
doc/api/conf.py | 24 +-
lib/_emerge/AbstractDepPriority.py | 38 +-
lib/_emerge/AbstractEbuildProcess.py | 869 +-
lib/_emerge/AbstractPollTask.py | 205 +-
lib/_emerge/AsynchronousLock.py | 571 +-
lib/_emerge/AsynchronousTask.py | 391 +-
lib/_emerge/AtomArg.py | 11 +-
lib/_emerge/Binpkg.py | 1003 +-
lib/_emerge/BinpkgEnvExtractor.py | 123 +-
lib/_emerge/BinpkgExtractorAsync.py | 190 +-
lib/_emerge/BinpkgFetcher.py | 447 +-
lib/_emerge/BinpkgPrefetcher.py | 74 +-
lib/_emerge/BinpkgVerifier.py | 232 +-
lib/_emerge/Blocker.py | 15 +-
lib/_emerge/BlockerCache.py | 343 +-
lib/_emerge/BlockerDB.py | 226 +-
lib/_emerge/BlockerDepPriority.py | 14 +-
lib/_emerge/CompositeTask.py | 234 +-
lib/_emerge/DepPriority.py | 100 +-
lib/_emerge/DepPriorityNormalRange.py | 86 +-
lib/_emerge/DepPrioritySatisfiedRange.py | 183 +-
lib/_emerge/Dependency.py | 38 +-
lib/_emerge/DependencyArg.py | 48 +-
lib/_emerge/EbuildBinpkg.py | 94 +-
lib/_emerge/EbuildBuild.py | 1142 +-
lib/_emerge/EbuildBuildDir.py | 295 +-
lib/_emerge/EbuildExecuter.py | 156 +-
lib/_emerge/EbuildFetcher.py | 741 +-
lib/_emerge/EbuildFetchonly.py | 58 +-
lib/_emerge/EbuildIpcDaemon.py | 164 +-
lib/_emerge/EbuildMerge.py | 139 +-
lib/_emerge/EbuildMetadataPhase.py | 431 +-
lib/_emerge/EbuildPhase.py | 1043 +-
lib/_emerge/EbuildProcess.py | 31 +-
lib/_emerge/EbuildSpawnProcess.py | 23 +-
lib/_emerge/FakeVartree.py | 621 +-
lib/_emerge/FifoIpcDaemon.py | 105 +-
lib/_emerge/JobStatusDisplay.py | 556 +-
lib/_emerge/MergeListItem.py | 262 +-
lib/_emerge/MetadataRegen.py | 292 +-
lib/_emerge/MiscFunctionsProcess.py | 91 +-
lib/_emerge/Package.py | 1869 +-
lib/_emerge/PackageArg.py | 18 +-
lib/_emerge/PackageMerge.py | 90 +-
lib/_emerge/PackagePhase.py | 176 +-
lib/_emerge/PackageUninstall.py | 274 +-
lib/_emerge/PackageVirtualDbapi.py | 274 +-
lib/_emerge/PipeReader.py | 169 +-
lib/_emerge/PollScheduler.py | 346 +-
lib/_emerge/ProgressHandler.py | 30 +-
lib/_emerge/RootConfig.py | 64 +-
lib/_emerge/Scheduler.py | 4260 ++--
lib/_emerge/SequentialTaskQueue.py | 157 +-
lib/_emerge/SetArg.py | 12 +-
lib/_emerge/SpawnProcess.py | 545 +-
lib/_emerge/SubProcess.py | 157 +-
lib/_emerge/Task.py | 91 +-
lib/_emerge/TaskSequence.py | 95 +-
lib/_emerge/UninstallFailure.py | 21 +-
lib/_emerge/UnmergeDepPriority.py | 68 +-
lib/_emerge/UseFlagDisplay.py | 188 +-
lib/_emerge/UserQuery.py | 108 +-
lib/_emerge/_find_deep_system_runtime_deps.py | 54 +-
lib/_emerge/_flush_elog_mod_echo.py | 19 +-
lib/_emerge/actions.py | 7077 +++---
lib/_emerge/chk_updated_cfg_files.py | 74 +-
lib/_emerge/clear_caches.py | 21 +-
lib/_emerge/countdown.py | 24 +-
lib/_emerge/create_depgraph_params.py | 398 +-
lib/_emerge/create_world_atom.py | 213 +-
lib/_emerge/depgraph.py | 21924 ++++++++++---------
lib/_emerge/emergelog.py | 68 +-
lib/_emerge/getloadavg.py | 51 +-
lib/_emerge/help.py | 168 +-
lib/_emerge/is_valid_package_atom.py | 30 +-
lib/_emerge/main.py | 2568 +--
lib/_emerge/post_emerge.py | 292 +-
lib/_emerge/resolver/DbapiProvidesIndex.py | 191 +-
lib/_emerge/resolver/backtracking.py | 548 +-
lib/_emerge/resolver/circular_dependency.py | 576 +-
lib/_emerge/resolver/output.py | 1973 +-
lib/_emerge/resolver/output_helpers.py | 1110 +-
lib/_emerge/resolver/package_tracker.py | 740 +-
lib/_emerge/resolver/slot_collision.py | 2466 ++-
lib/_emerge/search.py | 1071 +-
lib/_emerge/show_invalid_depstring_notice.py | 49 +-
lib/_emerge/stdout_spinner.py | 148 +-
lib/_emerge/unmerge.py | 1268 +-
lib/portage/__init__.py | 1200 +-
lib/portage/_compat_upgrade/binpkg_compression.py | 67 +-
.../_compat_upgrade/binpkg_multi_instance.py | 44 +-
lib/portage/_compat_upgrade/default_locations.py | 174 +-
lib/portage/_emirrordist/Config.py | 274 +-
lib/portage/_emirrordist/ContentDB.py | 371 +-
lib/portage/_emirrordist/DeletionIterator.py | 204 +-
lib/portage/_emirrordist/DeletionTask.py | 284 +-
lib/portage/_emirrordist/FetchIterator.py | 539 +-
lib/portage/_emirrordist/FetchTask.py | 1354 +-
lib/portage/_emirrordist/MirrorDistTask.py | 458 +-
lib/portage/_emirrordist/main.py | 882 +-
lib/portage/_global_updates.py | 504 +-
lib/portage/_legacy_globals.py | 145 +-
lib/portage/_selinux.py | 228 +-
lib/portage/_sets/ProfilePackageSet.py | 65 +-
lib/portage/_sets/__init__.py | 619 +-
lib/portage/_sets/base.py | 462 +-
lib/portage/_sets/dbapi.py | 1073 +-
lib/portage/_sets/files.py | 757 +-
lib/portage/_sets/libs.py | 172 +-
lib/portage/_sets/profiles.py | 110 +-
lib/portage/_sets/security.py | 151 +-
lib/portage/_sets/shell.py | 65 +-
lib/portage/binrepo/config.py | 249 +-
lib/portage/cache/anydbm.py | 159 +-
lib/portage/cache/cache_errors.py | 103 +-
lib/portage/cache/ebuild_xattr.py | 298 +-
lib/portage/cache/flat_hash.py | 263 +-
lib/portage/cache/fs_template.py | 130 +-
lib/portage/cache/index/IndexStreamIterator.py | 32 +-
lib/portage/cache/index/pkg_desc_index.py | 67 +-
lib/portage/cache/mappings.py | 780 +-
lib/portage/cache/metadata.py | 292 +-
lib/portage/cache/sql_template.py | 618 +-
lib/portage/cache/sqlite.py | 633 +-
lib/portage/cache/template.py | 674 +-
lib/portage/cache/volatile.py | 35 +-
lib/portage/checksum.py | 948 +-
lib/portage/const.py | 389 +-
lib/portage/cvstree.py | 551 +-
lib/portage/data.py | 564 +-
lib/portage/dbapi/DummyTree.py | 24 +-
lib/portage/dbapi/IndexedPortdb.py | 313 +-
lib/portage/dbapi/IndexedVardb.py | 209 +-
.../dbapi/_ContentsCaseSensitivityManager.py | 179 +-
lib/portage/dbapi/_MergeProcess.py | 465 +-
lib/portage/dbapi/_SyncfsProcess.py | 91 +-
lib/portage/dbapi/_VdbMetadataDelta.py | 333 +-
lib/portage/dbapi/__init__.py | 878 +-
lib/portage/dbapi/_expand_new_virt.py | 129 +-
lib/portage/dbapi/_similar_name_search.py | 96 +-
lib/portage/dbapi/bintree.py | 3835 ++--
lib/portage/dbapi/cpv_expand.py | 183 +-
lib/portage/dbapi/dep_expand.py | 84 +-
lib/portage/dbapi/porttree.py | 3230 +--
lib/portage/dbapi/vartree.py | 12207 ++++++-----
lib/portage/dbapi/virtual.py | 442 +-
lib/portage/debug.py | 215 +-
lib/portage/dep/__init__.py | 5919 ++---
lib/portage/dep/_dnf.py | 159 +-
lib/portage/dep/_slot_operator.py | 206 +-
lib/portage/dep/dep_check.py | 2019 +-
lib/portage/dep/soname/SonameAtom.py | 96 +-
lib/portage/dep/soname/multilib_category.py | 268 +-
lib/portage/dep/soname/parse.py | 68 +-
lib/portage/dispatch_conf.py | 706 +-
lib/portage/eapi.py | 466 +-
lib/portage/eclass_cache.py | 322 +-
lib/portage/elog/__init__.py | 326 +-
lib/portage/elog/filtering.py | 21 +-
lib/portage/elog/messages.py | 311 +-
lib/portage/elog/mod_custom.py | 25 +-
lib/portage/elog/mod_echo.py | 103 +-
lib/portage/elog/mod_mail.py | 63 +-
lib/portage/elog/mod_mail_summary.py | 150 +-
lib/portage/elog/mod_save.py | 128 +-
lib/portage/elog/mod_save_summary.py | 134 +-
lib/portage/elog/mod_syslog.py | 37 +-
lib/portage/emaint/defaults.py | 39 +-
lib/portage/emaint/main.py | 431 +-
lib/portage/emaint/modules/binhost/__init__.py | 26 +-
lib/portage/emaint/modules/binhost/binhost.py | 351 +-
lib/portage/emaint/modules/config/__init__.py | 26 +-
lib/portage/emaint/modules/config/config.py | 129 +-
lib/portage/emaint/modules/logs/__init__.py | 79 +-
lib/portage/emaint/modules/logs/logs.py | 186 +-
lib/portage/emaint/modules/merges/__init__.py | 69 +-
lib/portage/emaint/modules/merges/merges.py | 543 +-
lib/portage/emaint/modules/move/__init__.py | 46 +-
lib/portage/emaint/modules/move/move.py | 361 +-
lib/portage/emaint/modules/resume/__init__.py | 26 +-
lib/portage/emaint/modules/resume/resume.py | 97 +-
lib/portage/emaint/modules/sync/__init__.py | 99 +-
lib/portage/emaint/modules/sync/sync.py | 936 +-
lib/portage/emaint/modules/world/__init__.py | 26 +-
lib/portage/emaint/modules/world/world.py | 158 +-
lib/portage/env/config.py | 162 +-
lib/portage/env/loaders.py | 588 +-
lib/portage/env/validators.py | 23 +-
lib/portage/exception.py | 248 +-
lib/portage/getbinpkg.py | 1737 +-
lib/portage/glsa.py | 1432 +-
lib/portage/localization.py | 62 +-
lib/portage/locks.py | 1419 +-
lib/portage/mail.py | 238 +-
lib/portage/manifest.py | 1446 +-
lib/portage/metadata.py | 417 +-
lib/portage/module.py | 449 +-
lib/portage/news.py | 862 +-
lib/portage/output.py | 1589 +-
.../package/ebuild/_config/KeywordsManager.py | 658 +-
.../package/ebuild/_config/LicenseManager.py | 450 +-
.../package/ebuild/_config/LocationsManager.py | 752 +-
lib/portage/package/ebuild/_config/MaskManager.py | 585 +-
lib/portage/package/ebuild/_config/UseManager.py | 1297 +-
.../package/ebuild/_config/VirtualsManager.py | 444 +-
.../package/ebuild/_config/env_var_validation.py | 31 +-
lib/portage/package/ebuild/_config/features_set.py | 237 +-
lib/portage/package/ebuild/_config/helper.py | 103 +-
.../package/ebuild/_config/special_env_vars.py | 452 +-
.../package/ebuild/_config/unpack_dependencies.py | 67 +-
lib/portage/package/ebuild/_ipc/ExitCommand.py | 36 +-
lib/portage/package/ebuild/_ipc/IpcCommand.py | 7 +-
lib/portage/package/ebuild/_ipc/QueryCommand.py | 264 +-
lib/portage/package/ebuild/_metadata_invalid.py | 68 +-
.../ebuild/_parallel_manifest/ManifestProcess.py | 74 +-
.../ebuild/_parallel_manifest/ManifestScheduler.py | 146 +-
.../ebuild/_parallel_manifest/ManifestTask.py | 403 +-
lib/portage/package/ebuild/_spawn_nofetch.py | 206 +-
lib/portage/package/ebuild/config.py | 6259 +++---
.../package/ebuild/deprecated_profile_check.py | 172 +-
lib/portage/package/ebuild/digestcheck.py | 296 +-
lib/portage/package/ebuild/digestgen.py | 401 +-
lib/portage/package/ebuild/doebuild.py | 5730 ++---
lib/portage/package/ebuild/fetch.py | 3521 +--
lib/portage/package/ebuild/getmaskingreason.py | 205 +-
lib/portage/package/ebuild/getmaskingstatus.py | 333 +-
lib/portage/package/ebuild/prepare_build_dirs.py | 914 +-
lib/portage/package/ebuild/profile_iuse.py | 49 +-
lib/portage/process.py | 1870 +-
lib/portage/progress.py | 87 +-
lib/portage/proxy/lazyimport.py | 385 +-
lib/portage/proxy/objectproxy.py | 120 +-
lib/portage/repository/config.py | 2743 +--
.../repository/storage/hardlink_quarantine.py | 181 +-
lib/portage/repository/storage/hardlink_rcu.py | 491 +-
lib/portage/repository/storage/inplace.py | 61 +-
lib/portage/repository/storage/interface.py | 127 +-
lib/portage/sync/__init__.py | 57 +-
lib/portage/sync/config_checks.py | 131 +-
lib/portage/sync/controller.py | 731 +-
lib/portage/sync/getaddrinfo_validate.py | 37 +-
lib/portage/sync/modules/cvs/__init__.py | 64 +-
lib/portage/sync/modules/cvs/cvs.py | 116 +-
lib/portage/sync/modules/git/__init__.py | 115 +-
lib/portage/sync/modules/git/git.py | 584 +-
lib/portage/sync/modules/mercurial/__init__.py | 52 +-
lib/portage/sync/modules/mercurial/mercurial.py | 320 +-
lib/portage/sync/modules/rsync/__init__.py | 52 +-
lib/portage/sync/modules/rsync/rsync.py | 1519 +-
lib/portage/sync/modules/svn/__init__.py | 38 +-
lib/portage/sync/modules/svn/svn.py | 152 +-
lib/portage/sync/modules/webrsync/__init__.py | 60 +-
lib/portage/sync/modules/webrsync/webrsync.py | 236 +-
lib/portage/sync/old_tree_timestamp.py | 167 +-
lib/portage/sync/syncbase.py | 657 +-
lib/portage/tests/__init__.py | 550 +-
lib/portage/tests/bin/setup_env.py | 131 +-
lib/portage/tests/bin/test_dobin.py | 19 +-
lib/portage/tests/bin/test_dodir.py | 23 +-
lib/portage/tests/bin/test_doins.py | 636 +-
lib/portage/tests/bin/test_eapi7_ver_funcs.py | 460 +-
lib/portage/tests/bin/test_filter_bash_env.py | 70 +-
lib/portage/tests/dbapi/test_auxdb.py | 197 +-
lib/portage/tests/dbapi/test_fakedbapi.py | 163 +-
lib/portage/tests/dbapi/test_portdb_cache.py | 349 +-
lib/portage/tests/dep/testAtom.py | 997 +-
lib/portage/tests/dep/testCheckRequiredUse.py | 414 +-
lib/portage/tests/dep/testExtendedAtomDict.py | 20 +-
lib/portage/tests/dep/testExtractAffectingUSE.py | 158 +-
lib/portage/tests/dep/testStandalone.py | 66 +-
lib/portage/tests/dep/test_best_match_to_list.py | 168 +-
lib/portage/tests/dep/test_dep_getcpv.py | 56 +-
lib/portage/tests/dep/test_dep_getrepo.py | 40 +-
lib/portage/tests/dep/test_dep_getslot.py | 35 +-
lib/portage/tests/dep/test_dep_getusedeps.py | 47 +-
lib/portage/tests/dep/test_dnf_convert.py | 94 +-
lib/portage/tests/dep/test_get_operator.py | 53 +-
.../tests/dep/test_get_required_use_flags.py | 76 +-
lib/portage/tests/dep/test_isjustname.py | 32 +-
lib/portage/tests/dep/test_isvalidatom.py | 407 +-
lib/portage/tests/dep/test_match_from_list.py | 410 +-
lib/portage/tests/dep/test_overlap_dnf.py | 57 +-
lib/portage/tests/dep/test_paren_reduce.py | 121 +-
lib/portage/tests/dep/test_soname_atom_pickle.py | 20 +-
lib/portage/tests/dep/test_use_reduce.py | 1391 +-
.../tests/ebuild/test_array_fromfile_eof.py | 73 +-
lib/portage/tests/ebuild/test_config.py | 724 +-
lib/portage/tests/ebuild/test_doebuild_fd_pipes.py | 273 +-
lib/portage/tests/ebuild/test_doebuild_spawn.py | 176 +-
lib/portage/tests/ebuild/test_fetch.py | 1518 +-
lib/portage/tests/ebuild/test_ipc_daemon.py | 289 +-
lib/portage/tests/ebuild/test_shell_quote.py | 218 +-
lib/portage/tests/ebuild/test_spawn.py | 79 +-
.../tests/ebuild/test_use_expand_incremental.py | 212 +-
lib/portage/tests/emerge/test_config_protect.py | 451 +-
.../emerge/test_emerge_blocker_file_collision.py | 304 +-
lib/portage/tests/emerge/test_emerge_slot_abi.py | 335 +-
lib/portage/tests/emerge/test_global_updates.py | 33 +-
lib/portage/tests/emerge/test_simple.py | 1054 +-
.../tests/env/config/test_PackageKeywordsFile.py | 59 +-
.../tests/env/config/test_PackageMaskFile.py | 34 +-
.../tests/env/config/test_PackageUseFile.py | 44 +-
.../tests/env/config/test_PortageModulesFile.py | 57 +-
lib/portage/tests/glsa/test_security_set.py | 174 +-
lib/portage/tests/lafilefixer/test_lafilefixer.py | 187 +-
.../test_lazy_import_portage_baseline.py | 135 +-
.../lazyimport/test_preload_portage_submodules.py | 18 +-
lib/portage/tests/lint/metadata.py | 9 +-
lib/portage/tests/lint/test_bash_syntax.py | 81 +-
lib/portage/tests/lint/test_compile_modules.py | 103 +-
lib/portage/tests/lint/test_import_modules.py | 59 +-
lib/portage/tests/locks/test_asynchronous_lock.py | 340 +-
lib/portage/tests/locks/test_lock_nonblock.py | 120 +-
lib/portage/tests/news/test_NewsItem.py | 121 +-
lib/portage/tests/process/test_AsyncFunction.py | 91 +-
lib/portage/tests/process/test_PipeLogger.py | 106 +-
lib/portage/tests/process/test_PopenProcess.py | 156 +-
.../tests/process/test_PopenProcessBlockingIO.py | 103 +-
lib/portage/tests/process/test_poll.py | 182 +-
lib/portage/tests/process/test_unshare_net.py | 38 +-
lib/portage/tests/resolver/ResolverPlayground.py | 1950 +-
.../test_build_id_profile_format.py | 271 +-
.../binpkg_multi_instance/test_rebuilt_binaries.py | 193 +-
.../tests/resolver/soname/test_autounmask.py | 176 +-
lib/portage/tests/resolver/soname/test_depclean.py | 106 +-
.../tests/resolver/soname/test_downgrade.py | 463 +-
.../tests/resolver/soname/test_or_choices.py | 166 +-
.../tests/resolver/soname/test_reinstall.py | 150 +-
.../tests/resolver/soname/test_skip_update.py | 148 +-
.../soname/test_slot_conflict_reinstall.py | 671 +-
.../resolver/soname/test_slot_conflict_update.py | 203 +-
.../tests/resolver/soname/test_soname_provided.py | 131 +-
.../tests/resolver/soname/test_unsatisfiable.py | 117 +-
.../tests/resolver/soname/test_unsatisfied.py | 148 +-
.../test_aggressive_backtrack_downgrade.py | 153 +-
lib/portage/tests/resolver/test_autounmask.py | 1346 +-
.../tests/resolver/test_autounmask_binpkg_use.py | 115 +-
.../resolver/test_autounmask_keep_keywords.py | 122 +-
.../tests/resolver/test_autounmask_multilib_use.py | 147 +-
.../tests/resolver/test_autounmask_parent.py | 65 +-
.../resolver/test_autounmask_use_backtrack.py | 146 +-
.../tests/resolver/test_autounmask_use_breakage.py | 174 +-
.../resolver/test_autounmask_use_slot_conflict.py | 76 +-
lib/portage/tests/resolver/test_backtracking.py | 360 +-
lib/portage/tests/resolver/test_bdeps.py | 399 +-
.../resolver/test_binary_pkg_ebuild_visibility.py | 262 +-
lib/portage/tests/resolver/test_blocker.py | 240 +-
lib/portage/tests/resolver/test_changed_deps.py | 213 +-
.../tests/resolver/test_circular_choices.py | 401 +-
.../tests/resolver/test_circular_choices_rust.py | 160 +-
.../tests/resolver/test_circular_dependencies.py | 197 +-
lib/portage/tests/resolver/test_complete_graph.py | 291 +-
...test_complete_if_new_subslot_without_revbump.py | 120 +-
lib/portage/tests/resolver/test_depclean.py | 552 +-
lib/portage/tests/resolver/test_depclean_order.py | 105 +-
.../resolver/test_depclean_slot_unavailable.py | 127 +-
lib/portage/tests/resolver/test_depth.py | 586 +-
.../resolver/test_disjunctive_depend_order.py | 145 +-
lib/portage/tests/resolver/test_eapi.py | 298 +-
.../tests/resolver/test_features_test_use.py | 146 +-
.../resolver/test_imagemagick_graphicsmagick.py | 183 +-
lib/portage/tests/resolver/test_keywords.py | 664 +-
lib/portage/tests/resolver/test_merge_order.py | 1237 +-
.../test_missing_iuse_and_evaluated_atoms.py | 53 +-
lib/portage/tests/resolver/test_multirepo.py | 779 +-
lib/portage/tests/resolver/test_multislot.py | 99 +-
.../tests/resolver/test_old_dep_chain_display.py | 67 +-
lib/portage/tests/resolver/test_onlydeps.py | 57 +-
.../tests/resolver/test_onlydeps_circular.py | 87 +-
.../tests/resolver/test_onlydeps_minimal.py | 83 +-
lib/portage/tests/resolver/test_or_choices.py | 1474 +-
.../tests/resolver/test_or_downgrade_installed.py | 152 +-
.../tests/resolver/test_or_upgrade_installed.py | 418 +-
lib/portage/tests/resolver/test_output.py | 183 +-
lib/portage/tests/resolver/test_package_tracker.py | 509 +-
.../tests/resolver/test_profile_default_eapi.py | 214 +-
.../tests/resolver/test_profile_package_set.py | 217 +-
lib/portage/tests/resolver/test_rebuild.py | 324 +-
.../test_regular_slot_change_without_revbump.py | 104 +-
lib/portage/tests/resolver/test_required_use.py | 435 +-
.../resolver/test_runtime_cycle_merge_order.py | 127 +-
lib/portage/tests/resolver/test_simple.py | 144 +-
lib/portage/tests/resolver/test_slot_abi.py | 907 +-
.../tests/resolver/test_slot_abi_downgrade.py | 425 +-
.../resolver/test_slot_change_without_revbump.py | 150 +-
lib/portage/tests/resolver/test_slot_collisions.py | 606 +-
.../resolver/test_slot_conflict_force_rebuild.py | 125 +-
.../resolver/test_slot_conflict_mask_update.py | 63 +-
.../tests/resolver/test_slot_conflict_rebuild.py | 937 +-
.../test_slot_conflict_unsatisfied_deep_deps.py | 351 +-
.../tests/resolver/test_slot_conflict_update.py | 156 +-
.../resolver/test_slot_conflict_update_virt.py | 129 +-
.../resolver/test_slot_operator_autounmask.py | 232 +-
.../tests/resolver/test_slot_operator_bdeps.py | 395 +-
.../resolver/test_slot_operator_complete_graph.py | 250 +-
.../resolver/test_slot_operator_exclusive_slots.py | 266 +-
.../resolver/test_slot_operator_missed_update.py | 196 +-
.../tests/resolver/test_slot_operator_rebuild.py | 196 +-
.../resolver/test_slot_operator_required_use.py | 114 +-
.../resolver/test_slot_operator_reverse_deps.py | 540 +-
.../test_slot_operator_runtime_pkg_mask.py | 240 +-
.../resolver/test_slot_operator_unsatisfied.py | 121 +-
.../tests/resolver/test_slot_operator_unsolved.py | 147 +-
..._slot_operator_update_probe_parent_downgrade.py | 112 +-
.../test_solve_non_slot_operator_slot_conflicts.py | 113 +-
lib/portage/tests/resolver/test_targetroot.py | 178 +-
.../tests/resolver/test_unecessary_slot_upgrade.py | 62 +
lib/portage/tests/resolver/test_unmerge_order.py | 394 +-
.../tests/resolver/test_use_dep_defaults.py | 80 +-
lib/portage/tests/resolver/test_useflags.py | 214 +-
.../resolver/test_virtual_minimize_children.py | 548 +-
lib/portage/tests/resolver/test_virtual_slot.py | 458 +-
lib/portage/tests/resolver/test_with_test_deps.py | 150 +-
lib/portage/tests/runTests.py | 36 +-
.../tests/sets/base/testInternalPackageSet.py | 67 +-
lib/portage/tests/sets/files/testConfigFileSet.py | 37 +-
lib/portage/tests/sets/files/testStaticFileSet.py | 27 +-
lib/portage/tests/sets/shell/testShell.py | 31 +-
lib/portage/tests/sync/test_sync_local.py | 836 +-
lib/portage/tests/unicode/test_string_format.py | 78 +-
lib/portage/tests/update/test_move_ent.py | 188 +-
lib/portage/tests/update/test_move_slot_ent.py | 259 +-
lib/portage/tests/update/test_update_dbentry.py | 560 +-
.../tests/util/dyn_libs/test_soname_deps.py | 37 +-
.../tests/util/eventloop/test_call_soon_fifo.py | 28 +-
lib/portage/tests/util/file_copy/test_copyfile.py | 98 +-
.../util/futures/asyncio/test_child_watcher.py | 78 +-
.../futures/asyncio/test_event_loop_in_fork.py | 68 +-
.../tests/util/futures/asyncio/test_pipe_closed.py | 251 +-
.../asyncio/test_policy_wrapper_recursion.py | 22 +-
.../futures/asyncio/test_run_until_complete.py | 44 +-
.../util/futures/asyncio/test_subprocess_exec.py | 356 +-
.../util/futures/asyncio/test_wakeup_fd_sigchld.py | 62 +-
.../tests/util/futures/test_compat_coroutine.py | 392 +-
.../tests/util/futures/test_done_callback.py | 41 +-
.../util/futures/test_done_callback_after_exit.py | 69 +-
.../tests/util/futures/test_iter_completed.py | 141 +-
lib/portage/tests/util/futures/test_retry.py | 462 +-
lib/portage/tests/util/test_checksum.py | 236 +-
lib/portage/tests/util/test_digraph.py | 495 +-
lib/portage/tests/util/test_file_copier.py | 63 +-
lib/portage/tests/util/test_getconfig.py | 111 +-
lib/portage/tests/util/test_grabdict.py | 9 +-
lib/portage/tests/util/test_install_mask.py | 315 +-
lib/portage/tests/util/test_normalizedPath.py | 11 +-
lib/portage/tests/util/test_shelve.py | 90 +-
lib/portage/tests/util/test_socks5.py | 314 +-
lib/portage/tests/util/test_stackDictList.py | 28 +-
lib/portage/tests/util/test_stackDicts.py | 41 +-
lib/portage/tests/util/test_stackLists.py | 22 +-
lib/portage/tests/util/test_uniqueArray.py | 33 +-
lib/portage/tests/util/test_varExpand.py | 182 +-
lib/portage/tests/util/test_whirlpool.py | 17 +-
lib/portage/tests/util/test_xattr.py | 278 +-
lib/portage/tests/versions/test_cpv_sort_key.py | 18 +-
lib/portage/tests/versions/test_vercmp.py | 151 +-
lib/portage/tests/xpak/test_decodeint.py | 12 +-
lib/portage/update.py | 805 +-
lib/portage/util/ExtractKernelVersion.py | 135 +-
lib/portage/util/SlotObject.py | 101 +-
lib/portage/util/__init__.py | 3605 +--
lib/portage/util/_async/AsyncFunction.py | 116 +-
lib/portage/util/_async/AsyncScheduler.py | 194 +-
lib/portage/util/_async/AsyncTaskFuture.py | 48 +-
lib/portage/util/_async/BuildLogger.py | 201 +-
lib/portage/util/_async/FileCopier.py | 32 +-
lib/portage/util/_async/FileDigester.py | 140 +-
lib/portage/util/_async/ForkProcess.py | 295 +-
lib/portage/util/_async/PipeLogger.py | 359 +-
lib/portage/util/_async/PipeReaderBlockingIO.py | 130 +-
lib/portage/util/_async/PopenProcess.py | 56 +-
lib/portage/util/_async/SchedulerInterface.py | 233 +-
lib/portage/util/_async/TaskScheduler.py | 23 +-
lib/portage/util/_async/run_main_scheduler.py | 66 +-
lib/portage/util/_compare_files.py | 179 +-
lib/portage/util/_ctypes.py | 65 +-
lib/portage/util/_desktop_entry.py | 117 +-
lib/portage/util/_dyn_libs/LinkageMapELF.py | 1855 +-
lib/portage/util/_dyn_libs/NeededEntry.py | 129 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 461 +-
.../util/_dyn_libs/display_preserved_libs.py | 166 +-
lib/portage/util/_dyn_libs/dyn_libs.py | 28 +
lib/portage/util/_dyn_libs/soname_deps.py | 310 +-
lib/portage/util/_dyn_libs/soname_deps_qa.py | 161 +-
lib/portage/util/_eventloop/asyncio_event_loop.py | 264 +-
lib/portage/util/_eventloop/global_event_loop.py | 2 +-
lib/portage/util/_get_vm_info.py | 140 +-
lib/portage/util/_info_files.py | 237 +-
lib/portage/util/_path.py | 34 +-
lib/portage/util/_pty.py | 98 +-
lib/portage/util/_urlopen.py | 157 +-
lib/portage/util/_xattr.py | 349 +-
lib/portage/util/backoff.py | 80 +-
lib/portage/util/bin_entry_point.py | 44 +-
lib/portage/util/changelog.py | 103 +-
lib/portage/util/compression_probe.py | 191 +-
lib/portage/util/configparser.py | 106 +-
lib/portage/util/cpuinfo.py | 43 +-
lib/portage/util/digraph.py | 747 +-
lib/portage/util/elf/constants.py | 91 +-
lib/portage/util/elf/header.py | 116 +-
lib/portage/util/endian/decode.py | 70 +-
lib/portage/util/env_update.py | 801 +-
lib/portage/util/file_copy/__init__.py | 37 +-
lib/portage/util/formatter.py | 95 +-
lib/portage/util/futures/__init__.py | 4 +-
lib/portage/util/futures/_asyncio/__init__.py | 486 +-
lib/portage/util/futures/_asyncio/streams.py | 139 +-
lib/portage/util/futures/_sync_decorator.py | 75 +-
lib/portage/util/futures/compat_coroutine.py | 220 +-
lib/portage/util/futures/executor/fork.py | 228 +-
lib/portage/util/futures/extendedfutures.py | 115 +-
lib/portage/util/futures/futures.py | 16 +-
lib/portage/util/futures/iter_completed.py | 342 +-
lib/portage/util/futures/retry.py | 377 +-
lib/portage/util/futures/unix_events.py | 95 +-
lib/portage/util/hooks.py | 52 +
lib/portage/util/install_mask.py | 340 +-
lib/portage/util/iterators/MultiIterGroupBy.py | 164 +-
lib/portage/util/lafilefixer.py | 331 +-
lib/portage/util/listdir.py | 251 +-
lib/portage/util/locale.py | 240 +-
lib/portage/util/movefile.py | 665 +-
lib/portage/util/mtimedb.py | 221 +-
lib/portage/util/netlink.py | 139 +-
lib/portage/util/path.py | 82 +-
lib/portage/util/shelve.py | 86 +-
lib/portage/util/socks5.py | 199 +-
lib/portage/util/whirlpool.py | 2736 ++-
lib/portage/util/writeable_check.py | 199 +-
lib/portage/versions.py | 1100 +-
lib/portage/xml/metadata.py | 842 +-
lib/portage/xpak.py | 943 +-
man/color.map.5 | 6 +-
man/dispatch-conf.1 | 5 +
man/emerge.1 | 9 +-
man/make.conf.5 | 15 +-
repoman/bin/repoman | 46 +-
repoman/lib/repoman/__init__.py | 153 +-
repoman/lib/repoman/_portage.py | 7 +-
repoman/lib/repoman/_subprocess.py | 89 +-
repoman/lib/repoman/actions.py | 1467 +-
repoman/lib/repoman/argparser.py | 621 +-
repoman/lib/repoman/check_missingslot.py | 43 +-
repoman/lib/repoman/config.py | 284 +-
repoman/lib/repoman/copyrights.py | 221 +-
repoman/lib/repoman/errors.py | 17 +-
repoman/lib/repoman/gpg.py | 106 +-
repoman/lib/repoman/main.py | 369 +-
repoman/lib/repoman/metadata.py | 130 +-
repoman/lib/repoman/modules/commit/manifest.py | 189 +-
repoman/lib/repoman/modules/commit/repochecks.py | 54 +-
.../modules/linechecks/assignment/__init__.py | 34 +-
.../modules/linechecks/assignment/assignment.py | 40 +-
repoman/lib/repoman/modules/linechecks/base.py | 177 +-
repoman/lib/repoman/modules/linechecks/config.py | 195 +-
.../lib/repoman/modules/linechecks/controller.py | 277 +-
.../repoman/modules/linechecks/depend/__init__.py | 22 +-
.../repoman/modules/linechecks/depend/implicit.py | 63 +-
.../modules/linechecks/deprecated/__init__.py | 70 +-
.../modules/linechecks/deprecated/deprecated.py | 39 +-
.../modules/linechecks/deprecated/inherit.py | 113 +-
.../lib/repoman/modules/linechecks/do/__init__.py | 22 +-
repoman/lib/repoman/modules/linechecks/do/dosym.py | 24 +-
.../repoman/modules/linechecks/eapi/__init__.py | 82 +-
.../lib/repoman/modules/linechecks/eapi/checks.py | 100 +-
.../repoman/modules/linechecks/eapi/definition.py | 55 +-
.../repoman/modules/linechecks/emake/__init__.py | 34 +-
.../lib/repoman/modules/linechecks/emake/emake.py | 30 +-
.../modules/linechecks/gentoo_header/__init__.py | 22 +-
.../modules/linechecks/gentoo_header/header.py | 95 +-
.../repoman/modules/linechecks/helpers/__init__.py | 22 +-
.../repoman/modules/linechecks/helpers/offset.py | 31 +-
.../repoman/modules/linechecks/nested/__init__.py | 22 +-
.../repoman/modules/linechecks/nested/nested.py | 13 +-
.../repoman/modules/linechecks/nested/nesteddie.py | 14 +-
.../repoman/modules/linechecks/patches/__init__.py | 22 +-
.../repoman/modules/linechecks/patches/patches.py | 29 +-
.../repoman/modules/linechecks/phases/__init__.py | 58 +-
.../lib/repoman/modules/linechecks/phases/phase.py | 305 +-
.../repoman/modules/linechecks/portage/__init__.py | 34 +-
.../repoman/modules/linechecks/portage/internal.py | 44 +-
.../repoman/modules/linechecks/quotes/__init__.py | 34 +-
.../repoman/modules/linechecks/quotes/quoteda.py | 15 +-
.../repoman/modules/linechecks/quotes/quotes.py | 147 +-
.../lib/repoman/modules/linechecks/uri/__init__.py | 22 +-
repoman/lib/repoman/modules/linechecks/uri/uri.py | 50 +-
.../lib/repoman/modules/linechecks/use/__init__.py | 22 +-
.../repoman/modules/linechecks/use/builtwith.py | 9 +-
.../repoman/modules/linechecks/useless/__init__.py | 34 +-
.../lib/repoman/modules/linechecks/useless/cd.py | 32 +-
.../repoman/modules/linechecks/useless/dodoc.py | 19 +-
.../modules/linechecks/whitespace/__init__.py | 34 +-
.../repoman/modules/linechecks/whitespace/blank.py | 31 +-
.../modules/linechecks/whitespace/whitespace.py | 23 +-
.../modules/linechecks/workaround/__init__.py | 22 +-
.../modules/linechecks/workaround/workarounds.py | 12 +-
.../lib/repoman/modules/scan/depend/__init__.py | 57 +-
.../repoman/modules/scan/depend/_depend_checks.py | 448 +-
.../lib/repoman/modules/scan/depend/_gen_arches.py | 112 +-
repoman/lib/repoman/modules/scan/depend/profile.py | 748 +-
.../repoman/modules/scan/directories/__init__.py | 83 +-
.../lib/repoman/modules/scan/directories/files.py | 155 +-
.../lib/repoman/modules/scan/directories/mtime.py | 46 +-
repoman/lib/repoman/modules/scan/eapi/__init__.py | 38 +-
repoman/lib/repoman/modules/scan/eapi/eapi.py | 87 +-
.../lib/repoman/modules/scan/ebuild/__init__.py | 106 +-
repoman/lib/repoman/modules/scan/ebuild/ebuild.py | 464 +-
.../lib/repoman/modules/scan/ebuild/multicheck.py | 100 +-
.../lib/repoman/modules/scan/eclasses/__init__.py | 78 +-
repoman/lib/repoman/modules/scan/eclasses/live.py | 119 +-
repoman/lib/repoman/modules/scan/eclasses/ruby.py | 86 +-
repoman/lib/repoman/modules/scan/fetch/__init__.py | 51 +-
repoman/lib/repoman/modules/scan/fetch/fetches.py | 369 +-
.../lib/repoman/modules/scan/keywords/__init__.py | 51 +-
.../lib/repoman/modules/scan/keywords/keywords.py | 331 +-
.../lib/repoman/modules/scan/manifest/__init__.py | 45 +-
.../lib/repoman/modules/scan/manifest/manifests.py | 92 +-
.../lib/repoman/modules/scan/metadata/__init__.py | 158 +-
.../repoman/modules/scan/metadata/description.py | 73 +-
.../modules/scan/metadata/ebuild_metadata.py | 128 +-
.../repoman/modules/scan/metadata/pkgmetadata.py | 374 +-
.../lib/repoman/modules/scan/metadata/restrict.py | 96 +-
.../lib/repoman/modules/scan/metadata/use_flags.py | 155 +-
repoman/lib/repoman/modules/scan/module.py | 173 +-
.../lib/repoman/modules/scan/options/__init__.py | 37 +-
.../lib/repoman/modules/scan/options/options.py | 48 +-
repoman/lib/repoman/modules/scan/scan.py | 99 +-
repoman/lib/repoman/modules/scan/scanbase.py | 106 +-
repoman/lib/repoman/modules/vcs/None/__init__.py | 46 +-
repoman/lib/repoman/modules/vcs/None/changes.py | 88 +-
repoman/lib/repoman/modules/vcs/None/status.py | 96 +-
repoman/lib/repoman/modules/vcs/__init__.py | 5 +-
repoman/lib/repoman/modules/vcs/bzr/__init__.py | 46 +-
repoman/lib/repoman/modules/vcs/bzr/changes.py | 115 +-
repoman/lib/repoman/modules/vcs/bzr/status.py | 104 +-
repoman/lib/repoman/modules/vcs/changes.py | 311 +-
repoman/lib/repoman/modules/vcs/cvs/__init__.py | 46 +-
repoman/lib/repoman/modules/vcs/cvs/changes.py | 218 +-
repoman/lib/repoman/modules/vcs/cvs/status.py | 239 +-
repoman/lib/repoman/modules/vcs/git/__init__.py | 46 +-
repoman/lib/repoman/modules/vcs/git/changes.py | 252 +-
repoman/lib/repoman/modules/vcs/git/status.py | 115 +-
repoman/lib/repoman/modules/vcs/hg/__init__.py | 46 +-
repoman/lib/repoman/modules/vcs/hg/changes.py | 193 +-
repoman/lib/repoman/modules/vcs/hg/status.py | 95 +-
repoman/lib/repoman/modules/vcs/settings.py | 176 +-
repoman/lib/repoman/modules/vcs/svn/__init__.py | 46 +-
repoman/lib/repoman/modules/vcs/svn/changes.py | 266 +-
repoman/lib/repoman/modules/vcs/svn/status.py | 268 +-
repoman/lib/repoman/modules/vcs/vcs.py | 259 +-
repoman/lib/repoman/profile.py | 133 +-
repoman/lib/repoman/qa_data.py | 374 +-
repoman/lib/repoman/qa_tracker.py | 81 +-
repoman/lib/repoman/repos.py | 624 +-
repoman/lib/repoman/scanner.py | 852 +-
repoman/lib/repoman/tests/__init__.py | 539 +-
.../lib/repoman/tests/changelog/test_echangelog.py | 251 +-
repoman/lib/repoman/tests/commit/test_commitmsg.py | 156 +-
repoman/lib/repoman/tests/runTests.py | 36 +-
repoman/lib/repoman/tests/simple/test_simple.py | 903 +-
repoman/lib/repoman/utilities.py | 1019 +-
repoman/runtests | 258 +-
repoman/setup.py | 768 +-
runtests | 252 +-
setup.py | 1427 +-
src/portage_util_file_copy_reflink_linux.c | 7 +-
tabcheck.py | 3 +-
tox.ini | 18 +-
703 files changed, 142061 insertions(+), 124529 deletions(-)
diff --cc README.md
index d9e004269,e75b430c6..2c17d09b2
--- a/README.md
+++ b/README.md
@@@ -1,20 -1,41 +1,46 @@@
-[![CI](https://github.com/gentoo/portage/actions/workflows/ci.yml/badge.svg)](https://github.com/gentoo/portage/actions/workflows/ci.yml)
-
+ About Portage
+ =============
+
+ Portage is a package management system based on ports collections. The
+ Package Manager Specification Project (PMS) standardises and documents
+ the behaviour of Portage so that ebuild repositories can be used by
+ other package managers.
+
+This is the prefix branch of Portage: the branch that deals with
+setting up Portage as a package manager for a given offset (prefix) in
+the filesystem, running with user privileges.
+
+If you are not looking for something Gentoo Prefix-like, then this
+is not the right place.
+
+ Contributing
+ ============
- =======
- About Portage
- =============
+ Contributions are always welcome! We've started using
+ [black](https://pypi.org/project/black/) to format the code base. Please make
+ sure you run it against any PRs prior to submitting (otherwise we'll probably
+ reject it).
- Portage is a package management system based on ports collections. The
- Package Manager Specification Project (PMS) standardises and documents
- the behaviour of Portage so that ebuild repositories can be used by
- other package managers.
+ There are [ways to
+ integrate](https://black.readthedocs.io/en/stable/integrations/editors.html)
+ black into your text editor and/or IDE.
+ You can also set up a git hook to check your commits, in case you don't want
+ editor integration. Something like this:
+
+ ```sh
+ #!/bin/bash
+ # .git/hooks/pre-commit (don't forget to chmod +x)
+ black --check --diff .
+ ```
+
+ To make git blame ignore commit 1bb64ff452 - the massive commit that simply
+ reformatted the code base using black - you can do the following:
+
+ ```sh
+ git config blame.ignoreRevsFile .gitignorerevs
+ ```
Dependencies
============
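As a side note on the blame.ignoreRevsFile setting in the README hunk above: git expects that file to list one unabbreviated (full) commit hash per line, and lines starting with `#` are treated as comments. A minimal sketch of what `.gitignorerevs` could contain follows; the full hash is left as a placeholder here, since only the abbreviated 1bb64ff452 appears in the text above.

```sh
# .gitignorerevs -- revisions that git blame should skip
# black reformatting of the whole tree (abbreviated above as 1bb64ff452)
<full unabbreviated commit hash goes here>
```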
diff --cc bin/save-ebuild-env.sh
index 7d7235f23,98808814b..b3d4c7363
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# @FUNCTION: __save_ebuild_env
@@@ -89,16 -89,12 +89,16 @@@ __save_ebuild_env()
___eapi_has_package_manager_build_user && unset -f package_manager_build_user
___eapi_has_package_manager_build_group && unset -f package_manager_build_group
- # Clear out the triple underscore namespace as it is reserved by the PM.
- unset -f $(compgen -A function ___)
- unset ${!___*}
+ # BEGIN PREFIX LOCAL: compgen is not compiled in during bootstrap
+ if type compgen >& /dev/null ; then
+ # Clear out the triple underscore namespace as it is reserved by the PM.
+ unset -f $(compgen -A function ___)
+ unset ${!___*}
+ fi
+ # END PREFIX LOCAL
# portage config variables and variables set directly by portage
- unset ACCEPT_LICENSE BAD BRACKET BUILD_PREFIX COLS \
+ unset ACCEPT_LICENSE BUILD_PREFIX COLS \
DISTDIR DOC_SYMLINKS_DIR \
EBUILD_FORCE_TEST EBUILD_MASTER_PID \
ECLASS_DEPTH ENDCOL FAKEROOTKEY \
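For context on the PREFIX LOCAL guard in the save-ebuild-env.sh hunk above: `compgen -A function ___` lists the names of all shell functions starting with `___`, and `${!___*}` expands to the names of all variables starting with `___`, so both can be unset in one pass once `compgen` is known to be available. A minimal, self-contained bash sketch of the same pattern; the `___helper` and `___STATE` names are made up for illustration and are not part of the commit.

```sh
#!/bin/bash
# Hypothetical demonstration of the cleanup pattern used above.
___helper() { :; }   # a function in the reserved ___ namespace
___STATE="x"         # a variable in the reserved ___ namespace

if type compgen >& /dev/null ; then
    # list and drop all functions whose names start with ___
    unset -f $(compgen -A function ___)
    # expand to, and drop, all variable names starting with ___
    unset ${!___*}
fi

declare -F ___helper || echo "___ namespace cleared"
```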
diff --cc lib/_emerge/EbuildPhase.py
index 8eaa57497,12326fffd..d07cff7bd
--- a/lib/_emerge/EbuildPhase.py
+++ b/lib/_emerge/EbuildPhase.py
@@@ -26,29 -29,29 +29,31 @@@ from portage.util._async.AsyncTaskFutur
from portage.util._async.BuildLogger import BuildLogger
from portage.util.futures import asyncio
from portage.util.futures.executor.fork import ForkExecutor
+# PREFIX LOCAL
+from portage.const import EPREFIX
try:
- from portage.xml.metadata import MetaDataXML
+ from portage.xml.metadata import MetaDataXML
except (SystemExit, KeyboardInterrupt):
- raise
+ raise
except (ImportError, SystemError, RuntimeError, Exception):
- # broken or missing xml support
- # https://bugs.python.org/issue14988
- MetaDataXML = None
+ # broken or missing xml support
+ # https://bugs.python.org/issue14988
+ MetaDataXML = None
import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.elog:messages@elog_messages',
- 'portage.package.ebuild.doebuild:_check_build_log,' + \
- '_post_phase_cmds,_post_phase_userpriv_perms,' + \
- '_post_phase_emptydir_cleanup,' +
- '_post_src_install_soname_symlinks,' + \
- '_post_src_install_uid_fix,_postinst_bsdflags,' + \
- '_post_src_install_write_metadata,' + \
- '_preinst_bsdflags',
- 'portage.util.futures.unix_events:_set_nonblocking',
+
+ portage.proxy.lazyimport.lazyimport(
+ globals(),
+ "portage.elog:messages@elog_messages",
+ "portage.package.ebuild.doebuild:_check_build_log,"
+ + "_post_phase_cmds,_post_phase_userpriv_perms,"
+ + "_post_phase_emptydir_cleanup,"
+ + "_post_src_install_soname_symlinks,"
+ + "_post_src_install_uid_fix,_postinst_bsdflags,"
+ + "_post_src_install_write_metadata,"
+ + "_preinst_bsdflags",
+ "portage.util.futures.unix_events:_set_nonblocking",
)
from portage import os
from portage import _encodings
@@@ -450,84 -521,93 +523,110 @@@ class EbuildPhase(CompositeTask)
class _PostPhaseCommands(CompositeTask):
- __slots__ = ("commands", "elog", "fd_pipes", "logfile", "phase", "settings")
-
- def _start(self):
- if isinstance(self.commands, list):
- cmds = [({}, self.commands)]
- else:
- cmds = list(self.commands)
-
- if 'selinux' not in self.settings.features:
- cmds = [(kwargs, commands) for kwargs, commands in
- cmds if not kwargs.get('selinux_only')]
-
- tasks = TaskSequence()
- for kwargs, commands in cmds:
- # Select args intended for MiscFunctionsProcess.
- kwargs = dict((k, v) for k, v in kwargs.items()
- if k in ('ld_preload_sandbox',))
- tasks.add(MiscFunctionsProcess(background=self.background,
- commands=commands, fd_pipes=self.fd_pipes,
- logfile=self.logfile, phase=self.phase,
- scheduler=self.scheduler, settings=self.settings, **kwargs))
-
- self._start_task(tasks, self._commands_exit)
-
- def _commands_exit(self, task):
-
- if self._default_exit(task) != os.EX_OK:
- self._async_wait()
- return
-
- if self.phase == 'install':
- out = io.StringIO()
- _post_src_install_soname_symlinks(self.settings, out)
- msg = out.getvalue()
- if msg:
- self.scheduler.output(msg, log_path=self.settings.get("PORTAGE_LOG_FILE"))
-
- if 'qa-unresolved-soname-deps' in self.settings.features:
- # This operates on REQUIRES metadata generated by the above function call.
- future = asyncio.ensure_future(self._soname_deps_qa(), loop=self.scheduler)
- # If an unexpected exception occurs, then this will raise it.
- future.add_done_callback(lambda future: future.cancelled() or future.result())
- self._start_task(AsyncTaskFuture(future=future), self._default_final_exit)
- else:
- self._default_final_exit(task)
- else:
- self._default_final_exit(task)
-
- async def _soname_deps_qa(self):
-
- vardb = QueryCommand.get_db()[self.settings['EROOT']]['vartree'].dbapi
-
- all_provides = (await self.scheduler.run_in_executor(ForkExecutor(loop=self.scheduler), _get_all_provides, vardb))
-
- unresolved = _get_unresolved_soname_deps(os.path.join(self.settings['PORTAGE_BUILDDIR'], 'build-info'), all_provides)
+ __slots__ = ("commands", "elog", "fd_pipes", "logfile", "phase", "settings")
+
+ def _start(self):
+ if isinstance(self.commands, list):
+ cmds = [({}, self.commands)]
+ else:
+ cmds = list(self.commands)
+
+ if "selinux" not in self.settings.features:
+ cmds = [
+ (kwargs, commands)
+ for kwargs, commands in cmds
+ if not kwargs.get("selinux_only")
+ ]
+
+ tasks = TaskSequence()
+ for kwargs, commands in cmds:
+ # Select args intended for MiscFunctionsProcess.
+ kwargs = dict(
+ (k, v) for k, v in kwargs.items() if k in ("ld_preload_sandbox",)
+ )
+ tasks.add(
+ MiscFunctionsProcess(
+ background=self.background,
+ commands=commands,
+ fd_pipes=self.fd_pipes,
+ logfile=self.logfile,
+ phase=self.phase,
+ scheduler=self.scheduler,
+ settings=self.settings,
+ **kwargs
+ )
+ )
+
+ self._start_task(tasks, self._commands_exit)
+
+ def _commands_exit(self, task):
+
+ if self._default_exit(task) != os.EX_OK:
+ self._async_wait()
+ return
+
+ if self.phase == "install":
+ out = io.StringIO()
+ _post_src_install_soname_symlinks(self.settings, out)
+ msg = out.getvalue()
+ if msg:
+ self.scheduler.output(
+ msg, log_path=self.settings.get("PORTAGE_LOG_FILE")
+ )
+
+ if "qa-unresolved-soname-deps" in self.settings.features:
+ # This operates on REQUIRES metadata generated by the above function call.
+ future = asyncio.ensure_future(
+ self._soname_deps_qa(), loop=self.scheduler
+ )
+ # If an unexpected exception occurs, then this will raise it.
+ future.add_done_callback(
+ lambda future: future.cancelled() or future.result()
+ )
+ self._start_task(
+ AsyncTaskFuture(future=future), self._default_final_exit
+ )
+ else:
+ self._default_final_exit(task)
+ else:
+ self._default_final_exit(task)
+
+ async def _soname_deps_qa(self):
+
+ vardb = QueryCommand.get_db()[self.settings["EROOT"]]["vartree"].dbapi
+
+ all_provides = await self.scheduler.run_in_executor(
+ ForkExecutor(loop=self.scheduler), _get_all_provides, vardb
+ )
+
+ unresolved = _get_unresolved_soname_deps(
+ os.path.join(self.settings["PORTAGE_BUILDDIR"], "build-info"), all_provides
+ )
+ # BEGIN PREFIX LOCAL
+ if EPREFIX != "" and unresolved:
+ # in prefix, consider the host libs for any unresolved libs,
+ # so we kill warnings about missing libc.so.1, etc.
+ for obj, libs in list(unresolved):
+ unresolved.remove((obj, libs))
+ libs = list(libs)
+ for lib in list(libs):
+ for path in ["/lib64", "/lib/64", "/lib",
+ "/usr/lib64", "/usr/lib/64", "/usr/lib"]:
+ if os.path.exists(os.path.join(path, lib)):
+ libs.remove(lib)
+ break
+ if len(libs) > 0:
+ unresolved.append((obj, tuple(libs)))
+ # END PREFIX LOCAL
+
- if unresolved:
- unresolved.sort()
- qa_msg = ["QA Notice: Unresolved soname dependencies:"]
- qa_msg.append("")
- qa_msg.extend("\t%s: %s" % (filename, " ".join(sorted(soname_deps)))
- for filename, soname_deps in unresolved)
- qa_msg.append("")
- await self.elog("eqawarn", qa_msg)
+ if unresolved:
+ unresolved.sort()
+ qa_msg = ["QA Notice: Unresolved soname dependencies:"]
+ qa_msg.append("")
+ qa_msg.extend(
+ "\t%s: %s" % (filename, " ".join(sorted(soname_deps)))
+ for filename, soname_deps in unresolved
+ )
+ qa_msg.append("")
+ await self.elog("eqawarn", qa_msg)
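The PREFIX LOCAL block in `_soname_deps_qa` above filters the QA warning list: for every unresolved (object, sonames) pair it drops sonames that already exist in a fixed set of host library directories, and keeps the pair only if anything is left. A standalone sketch of that filtering follows; the names `HOST_LIB_DIRS` and `filter_host_resolved` are hypothetical and not part of the diff, while the directory list is taken from the hunk above.

```python
import os

# Host directories consulted by the prefix-local filter above.
HOST_LIB_DIRS = ["/lib64", "/lib/64", "/lib", "/usr/lib64", "/usr/lib/64", "/usr/lib"]


def filter_host_resolved(unresolved):
    """Drop sonames that the host system already provides."""
    result = []
    for obj, sonames in unresolved:
        remaining = [
            lib
            for lib in sonames
            if not any(os.path.exists(os.path.join(d, lib)) for d in HOST_LIB_DIRS)
        ]
        if remaining:
            result.append((obj, tuple(remaining)))
    return result


# Example: libc.so.6 is usually resolvable on the host, libdoesnotexist.so.1 is not.
print(filter_host_resolved([("a.out", ("libc.so.6", "libdoesnotexist.so.1"))]))
```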
diff --cc lib/_emerge/Package.py
index 40e595f36,90dfccdef..79f5459b3
--- a/lib/_emerge/Package.py
+++ b/lib/_emerge/Package.py
@@@ -16,783 -22,884 +22,886 @@@ from portage.exception import InvalidDa
from portage.localization import _
from _emerge.Task import Task
+
class Package(Task):
- __hash__ = Task.__hash__
- __slots__ = ("built", "cpv", "depth",
- "installed", "onlydeps", "operation",
- "root_config", "type_name",
- "category", "counter", "cp", "cpv_split",
- "inherited", "iuse", "mtime",
- "pf", "root", "slot", "sub_slot", "slot_atom", "version") + \
- ("_invalid", "_masks", "_metadata", "_provided_cps",
- "_raw_metadata", "_provides", "_requires", "_use",
- "_validated_atoms", "_visible")
-
- metadata_keys = [
- "BDEPEND",
- "BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
- "DEPEND", "EAPI", "IDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "MD5", "PDEPEND", "PROVIDES",
- "RDEPEND", "repository", "REQUIRED_USE",
- "PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
- "SLOT", "USE", "_mtime_", "EPREFIX"]
-
- _dep_keys = ('BDEPEND', 'DEPEND', 'IDEPEND', 'PDEPEND', 'RDEPEND')
- _buildtime_keys = ('BDEPEND', 'DEPEND')
- _runtime_keys = ('IDEPEND', 'PDEPEND', 'RDEPEND')
- _use_conditional_misc_keys = ('LICENSE', 'PROPERTIES', 'RESTRICT')
- UNKNOWN_REPO = _unknown_repo
-
- def __init__(self, **kwargs):
- metadata = _PackageMetadataWrapperBase(kwargs.pop('metadata'))
- Task.__init__(self, **kwargs)
- # the SlotObject constructor assigns self.root_config from keyword args
- # and is an instance of a '_emerge.RootConfig.RootConfig class
- self.root = self.root_config.root
- self._raw_metadata = metadata
- self._metadata = _PackageMetadataWrapper(self, metadata)
- if not self.built:
- self._metadata['CHOST'] = self.root_config.settings.get('CHOST', '')
- eapi_attrs = _get_eapi_attrs(self.eapi)
-
- try:
- db = self.cpv._db
- except AttributeError:
- if self.built:
- # For independence from the source ebuild repository and
- # profile implicit IUSE state, require the _db attribute
- # for built packages.
- raise
- db = self.root_config.trees['porttree'].dbapi
-
- self.cpv = _pkg_str(self.cpv, metadata=self._metadata,
- settings=self.root_config.settings, db=db)
- if hasattr(self.cpv, 'slot_invalid'):
- self._invalid_metadata('SLOT.invalid',
- "SLOT: invalid value: '%s'" % self._metadata["SLOT"])
- self.cpv_split = self.cpv.cpv_split
- self.category, self.pf = portage.catsplit(self.cpv)
- self.cp = self.cpv.cp
- self.version = self.cpv.version
- self.slot = self.cpv.slot
- self.sub_slot = self.cpv.sub_slot
- self.slot_atom = Atom("%s%s%s" % (self.cp, _slot_separator, self.slot))
- # sync metadata with validated repo (may be UNKNOWN_REPO)
- self._metadata['repository'] = self.cpv.repo
-
- if self.root_config.settings.local_config:
- implicit_match = db._iuse_implicit_cnstr(self.cpv, self._metadata)
- else:
- implicit_match = db._repoman_iuse_implicit_cnstr(self.cpv, self._metadata)
- usealiases = self.root_config.settings._use_manager.getUseAliases(self)
- self.iuse = self._iuse(self, self._metadata["IUSE"].split(),
- implicit_match, usealiases, self.eapi)
-
- if (self.iuse.enabled or self.iuse.disabled) and \
- not eapi_attrs.iuse_defaults:
- if not self.installed:
- self._invalid_metadata('EAPI.incompatible',
- "IUSE contains defaults, but EAPI doesn't allow them")
- if self.inherited is None:
- self.inherited = frozenset()
-
- if self.operation is None:
- if self.onlydeps or self.installed:
- self.operation = "nomerge"
- else:
- self.operation = "merge"
-
- self._hash_key = Package._gen_hash_key(cpv=self.cpv,
- installed=self.installed, onlydeps=self.onlydeps,
- operation=self.operation, repo_name=self.cpv.repo,
- root_config=self.root_config,
- type_name=self.type_name)
- self._hash_value = hash(self._hash_key)
-
- @property
- def eapi(self):
- return self._metadata["EAPI"]
-
- @property
- def build_id(self):
- return self.cpv.build_id
-
- @property
- def build_time(self):
- if not self.built:
- raise AttributeError('build_time')
- return self.cpv.build_time
-
- @property
- def defined_phases(self):
- return self._metadata.defined_phases
-
- @property
- def properties(self):
- return self._metadata.properties
-
- @property
- def provided_cps(self):
- return (self.cp,)
-
- @property
- def restrict(self):
- return self._metadata.restrict
-
- @property
- def metadata(self):
- warnings.warn("_emerge.Package.Package.metadata is deprecated",
- DeprecationWarning, stacklevel=3)
- return self._metadata
-
- # These are calculated on-demand, so that they are calculated
- # after FakeVartree applies its metadata tweaks.
- @property
- def invalid(self):
- if self._invalid is None:
- self._validate_deps()
- if self._invalid is None:
- self._invalid = False
- return self._invalid
-
- @property
- def masks(self):
- if self._masks is None:
- self._masks = self._eval_masks()
- return self._masks
-
- @property
- def visible(self):
- if self._visible is None:
- self._visible = self._eval_visiblity(self.masks)
- return self._visible
-
- @property
- def validated_atoms(self):
- """
- Returns *all* validated atoms from the deps, regardless
- of USE conditionals, with USE conditionals inside
- atoms left unevaluated.
- """
- if self._validated_atoms is None:
- self._validate_deps()
- return self._validated_atoms
-
- @property
- def stable(self):
- return self.cpv.stable
-
- @property
- def provides(self):
- self.invalid
- return self._provides
-
- @property
- def requires(self):
- self.invalid
- return self._requires
-
- @classmethod
- def _gen_hash_key(cls, cpv=None, installed=None, onlydeps=None,
- operation=None, repo_name=None, root_config=None,
- type_name=None, **kwargs):
-
- if operation is None:
- if installed or onlydeps:
- operation = "nomerge"
- else:
- operation = "merge"
-
- root = None
- if root_config is not None:
- root = root_config.root
- else:
- raise TypeError("root_config argument is required")
-
- elements = [type_name, root, str(cpv), operation]
-
- # For installed (and binary) packages we don't care for the repo
- # when it comes to hashing, because there can only be one cpv.
- # So overwrite the repo_key with type_name.
- if type_name is None:
- raise TypeError("type_name argument is required")
- elif type_name == "ebuild":
- if repo_name is None:
- raise AssertionError(
- "Package._gen_hash_key() " + \
- "called without 'repo_name' argument")
- elements.append(repo_name)
- elif type_name == "binary":
- # Including a variety of fingerprints in the hash makes
- # it possible to simultaneously consider multiple similar
- # packages. Note that digests are not included here, since
- # they are relatively expensive to compute, and they may
- # not necessarily be available.
- elements.extend([cpv.build_id, cpv.file_size,
- cpv.build_time, cpv.mtime])
- else:
- # For installed (and binary) packages we don't care for the repo
- # when it comes to hashing, because there can only be one cpv.
- # So overwrite the repo_key with type_name.
- elements.append(type_name)
-
- return tuple(elements)
-
- def _validate_deps(self):
- """
- Validate deps. This does not trigger USE calculation since that
- is expensive for ebuilds and therefore we want to avoid doing
- it unnecessarily (like for masked packages).
- """
- eapi = self.eapi
- dep_eapi = eapi
- dep_valid_flag = self.iuse.is_valid_flag
- if self.installed:
- # Ignore EAPI.incompatible and conditionals missing
- # from IUSE for installed packages since these issues
- # aren't relevant now (re-evaluate when new EAPIs are
- # deployed).
- dep_eapi = None
- dep_valid_flag = None
-
- validated_atoms = []
- for k in self._dep_keys:
- v = self._metadata.get(k)
- if not v:
- continue
- try:
- atoms = use_reduce(v, eapi=dep_eapi,
- matchall=True, is_valid_flag=dep_valid_flag,
- token_class=Atom, flat=True)
- except InvalidDependString as e:
- self._metadata_exception(k, e)
- else:
- validated_atoms.extend(atoms)
- if not self.built:
- for atom in atoms:
- if not isinstance(atom, Atom):
- continue
- if atom.slot_operator_built:
- e = InvalidDependString(
- _("Improper context for slot-operator "
- "\"built\" atom syntax: %s") %
- (atom.unevaluated_atom,))
- self._metadata_exception(k, e)
-
- self._validated_atoms = tuple(set(atom for atom in
- validated_atoms if isinstance(atom, Atom)))
-
- for k in self._use_conditional_misc_keys:
- v = self._metadata.get(k)
- if not v:
- continue
- try:
- use_reduce(v, eapi=dep_eapi, matchall=True,
- is_valid_flag=dep_valid_flag)
- except InvalidDependString as e:
- self._metadata_exception(k, e)
-
- k = 'REQUIRED_USE'
- v = self._metadata.get(k)
- if v and not self.built:
- if not _get_eapi_attrs(eapi).required_use:
- self._invalid_metadata('EAPI.incompatible',
- "REQUIRED_USE set, but EAPI='%s' doesn't allow it" % eapi)
- else:
- try:
- check_required_use(v, (),
- self.iuse.is_valid_flag, eapi=eapi)
- except InvalidDependString as e:
- self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
-
- k = 'SRC_URI'
- v = self._metadata.get(k)
- if v:
- try:
- use_reduce(v, is_src_uri=True, eapi=eapi, matchall=True,
- is_valid_flag=self.iuse.is_valid_flag)
- except InvalidDependString as e:
- if not self.installed:
- self._metadata_exception(k, e)
-
- if self.built:
- k = 'PROVIDES'
- try:
- self._provides = frozenset(
- parse_soname_deps(self._metadata[k]))
- except InvalidData as e:
- self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
-
- k = 'REQUIRES'
- try:
- self._requires = frozenset(
- parse_soname_deps(self._metadata[k]))
- except InvalidData as e:
- self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
-
- def copy(self):
- return Package(built=self.built, cpv=self.cpv, depth=self.depth,
- installed=self.installed, metadata=self._raw_metadata,
- onlydeps=self.onlydeps, operation=self.operation,
- root_config=self.root_config, type_name=self.type_name)
-
- def _eval_masks(self):
- masks = {}
- settings = self.root_config.settings
-
- if self.invalid is not False:
- masks['invalid'] = self.invalid
-
- if not settings._accept_chost(self.cpv, self._metadata):
- masks['CHOST'] = self._metadata['CHOST']
-
- eapi = self.eapi
- if not portage.eapi_is_supported(eapi):
- masks['EAPI.unsupported'] = eapi
- if portage._eapi_is_deprecated(eapi):
- masks['EAPI.deprecated'] = eapi
-
- missing_keywords = settings._getMissingKeywords(
- self.cpv, self._metadata)
- if missing_keywords:
- masks['KEYWORDS'] = missing_keywords
-
- try:
- missing_properties = settings._getMissingProperties(
- self.cpv, self._metadata)
- if missing_properties:
- masks['PROPERTIES'] = missing_properties
- except InvalidDependString:
- # already recorded as 'invalid'
- pass
-
- try:
- missing_restricts = settings._getMissingRestrict(
- self.cpv, self._metadata)
- if missing_restricts:
- masks['RESTRICT'] = missing_restricts
- except InvalidDependString:
- # already recorded as 'invalid'
- pass
-
- mask_atom = settings._getMaskAtom(self.cpv, self._metadata)
- if mask_atom is not None:
- masks['package.mask'] = mask_atom
-
- try:
- missing_licenses = settings._getMissingLicenses(
- self.cpv, self._metadata)
- if missing_licenses:
- masks['LICENSE'] = missing_licenses
- except InvalidDependString:
- # already recorded as 'invalid'
- pass
-
- if not masks:
- masks = False
-
- return masks
-
- def _eval_visiblity(self, masks):
-
- if masks is not False:
-
- if 'EAPI.unsupported' in masks:
- return False
-
- if 'invalid' in masks:
- return False
-
- if not self.installed and ( \
- 'CHOST' in masks or \
- 'EAPI.deprecated' in masks or \
- 'KEYWORDS' in masks or \
- 'PROPERTIES' in masks or \
- 'RESTRICT' in masks):
- return False
-
- if 'package.mask' in masks or \
- 'LICENSE' in masks:
- return False
-
- return True
-
- def get_keyword_mask(self):
- """returns None, 'missing', or 'unstable'."""
-
- missing = self.root_config.settings._getRawMissingKeywords(
- self.cpv, self._metadata)
-
- if not missing:
- return None
-
- if '**' in missing:
- return 'missing'
-
- global_accept_keywords = frozenset(
- self.root_config.settings.get("ACCEPT_KEYWORDS", "").split())
-
- for keyword in missing:
- if keyword.lstrip("~") in global_accept_keywords:
- return 'unstable'
-
- return 'missing'
-
- def isHardMasked(self):
- """returns a bool if the cpv is in the list of
- expanded pmaskdict[cp] available ebuilds"""
- pmask = self.root_config.settings._getRawMaskAtom(
- self.cpv, self._metadata)
- return pmask is not None
-
- def _metadata_exception(self, k, e):
-
- if k.endswith('DEPEND'):
- qacat = 'dependency.syntax'
- else:
- qacat = k + ".syntax"
-
- if not self.installed:
- categorized_error = False
- if e.errors:
- for error in e.errors:
- if getattr(error, 'category', None) is None:
- continue
- categorized_error = True
- self._invalid_metadata(error.category,
- "%s: %s" % (k, error))
-
- if not categorized_error:
- self._invalid_metadata(qacat,"%s: %s" % (k, e))
- else:
- # For installed packages, show the path of the file
- # containing the invalid metadata, since the user may
- # want to fix the deps by hand.
- vardb = self.root_config.trees['vartree'].dbapi
- path = vardb.getpath(self.cpv, filename=k)
- self._invalid_metadata(qacat, "%s: %s in '%s'" % (k, e, path))
-
- def _invalid_metadata(self, msg_type, msg):
- if self._invalid is None:
- self._invalid = {}
- msgs = self._invalid.get(msg_type)
- if msgs is None:
- msgs = []
- self._invalid[msg_type] = msgs
- msgs.append(msg)
-
- def __str__(self):
- if self.operation == "merge":
- if self.type_name == "binary":
- cpv_color = "PKG_BINARY_MERGE"
- else:
- cpv_color = "PKG_MERGE"
- elif self.operation == "uninstall":
- cpv_color = "PKG_UNINSTALL"
- else:
- cpv_color = "PKG_NOMERGE"
-
- build_id_str = ""
- if isinstance(self.cpv.build_id, int) and self.cpv.build_id > 0:
- build_id_str = "-%s" % self.cpv.build_id
-
- s = "(%s, %s" \
- % (portage.output.colorize(cpv_color, self.cpv +
- build_id_str + _slot_separator + self.slot + "/" +
- self.sub_slot + _repo_separator + self.repo),
- self.type_name)
-
- if self.type_name == "installed":
- if self.root_config.settings['ROOT'] != "/":
- s += " in '%s'" % self.root_config.settings['ROOT']
- if self.operation == "uninstall":
- s += " scheduled for uninstall"
- else:
- if self.operation == "merge":
- s += " scheduled for merge"
- if self.root_config.settings['ROOT'] != "/":
- s += " to '%s'" % self.root_config.settings['ROOT']
- s += ")"
- return s
-
- class _use_class:
-
- __slots__ = ("enabled", "_expand", "_expand_hidden",
- "_force", "_pkg", "_mask")
-
- # Share identical frozenset instances when available.
- _frozensets = {}
-
- def __init__(self, pkg, enabled_flags):
- self._pkg = pkg
- self._expand = None
- self._expand_hidden = None
- self._force = None
- self._mask = None
- if eapi_has_use_aliases(pkg.eapi):
- for enabled_flag in enabled_flags:
- enabled_flags.extend(pkg.iuse.alias_mapping.get(enabled_flag, []))
- self.enabled = frozenset(enabled_flags)
- if pkg.built:
- # Use IUSE to validate USE settings for built packages,
- # in case the package manager that built this package
- # failed to do that for some reason (or in case of
- # data corruption).
- missing_iuse = pkg.iuse.get_missing_iuse(self.enabled)
- if missing_iuse:
- self.enabled = self.enabled.difference(missing_iuse)
-
- def _init_force_mask(self):
- pkgsettings = self._pkg._get_pkgsettings()
- frozensets = self._frozensets
- s = frozenset(
- pkgsettings.get("USE_EXPAND", "").lower().split())
- self._expand = frozensets.setdefault(s, s)
- s = frozenset(
- pkgsettings.get("USE_EXPAND_HIDDEN", "").lower().split())
- self._expand_hidden = frozensets.setdefault(s, s)
- s = pkgsettings.useforce
- self._force = frozensets.setdefault(s, s)
- s = pkgsettings.usemask
- self._mask = frozensets.setdefault(s, s)
-
- @property
- def expand(self):
- if self._expand is None:
- self._init_force_mask()
- return self._expand
-
- @property
- def expand_hidden(self):
- if self._expand_hidden is None:
- self._init_force_mask()
- return self._expand_hidden
-
- @property
- def force(self):
- if self._force is None:
- self._init_force_mask()
- return self._force
-
- @property
- def mask(self):
- if self._mask is None:
- self._init_force_mask()
- return self._mask
-
- @property
- def repo(self):
- return self._metadata['repository']
-
- @property
- def repo_priority(self):
- repo_info = self.root_config.settings.repositories.prepos.get(self.repo)
- if repo_info is None:
- return None
- return repo_info.priority
-
- @property
- def use(self):
- if self._use is None:
- self._init_use()
- return self._use
-
- def _get_pkgsettings(self):
- pkgsettings = self.root_config.trees[
- 'porttree'].dbapi.doebuild_settings
- pkgsettings.setcpv(self)
- return pkgsettings
-
- def _init_use(self):
- if self.built:
- # Use IUSE to validate USE settings for built packages,
- # in case the package manager that built this package
- # failed to do that for some reason (or in case of
- # data corruption). The enabled flags must be consistent
- # with implicit IUSE, in order to avoid potential
- # inconsistencies in USE dep matching (see bug #453400).
- use_str = self._metadata['USE']
- is_valid_flag = self.iuse.is_valid_flag
- enabled_flags = [x for x in use_str.split() if is_valid_flag(x)]
- use_str = " ".join(enabled_flags)
- self._use = self._use_class(
- self, enabled_flags)
- else:
- try:
- use_str = _PackageMetadataWrapperBase.__getitem__(
- self._metadata, 'USE')
- except KeyError:
- use_str = None
- calculated_use = False
- if not use_str:
- use_str = self._get_pkgsettings()["PORTAGE_USE"]
- calculated_use = True
- self._use = self._use_class(
- self, use_str.split())
- # Initialize these now, since USE access has just triggered
- # setcpv, and we want to cache the result of the force/mask
- # calculations that were done.
- if calculated_use:
- self._use._init_force_mask()
-
- _PackageMetadataWrapperBase.__setitem__(
- self._metadata, 'USE', use_str)
-
- return use_str
-
- class _iuse:
-
- __slots__ = ("__weakref__", "_iuse_implicit_match", "_pkg", "alias_mapping",
- "all", "all_aliases", "enabled", "disabled", "tokens")
-
- def __init__(self, pkg, tokens, iuse_implicit_match, aliases, eapi):
- self._pkg = pkg
- self.tokens = tuple(tokens)
- self._iuse_implicit_match = iuse_implicit_match
- enabled = []
- disabled = []
- other = []
- enabled_aliases = []
- disabled_aliases = []
- other_aliases = []
- aliases_supported = eapi_has_use_aliases(eapi)
- self.alias_mapping = {}
- for x in tokens:
- prefix = x[:1]
- if prefix == "+":
- enabled.append(x[1:])
- if aliases_supported:
- self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
- enabled_aliases.extend(self.alias_mapping[x[1:]])
- elif prefix == "-":
- disabled.append(x[1:])
- if aliases_supported:
- self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
- disabled_aliases.extend(self.alias_mapping[x[1:]])
- else:
- other.append(x)
- if aliases_supported:
- self.alias_mapping[x] = aliases.get(x, [])
- other_aliases.extend(self.alias_mapping[x])
- self.enabled = frozenset(chain(enabled, enabled_aliases))
- self.disabled = frozenset(chain(disabled, disabled_aliases))
- self.all = frozenset(chain(enabled, disabled, other))
- self.all_aliases = frozenset(chain(enabled_aliases, disabled_aliases, other_aliases))
-
- def is_valid_flag(self, flags):
- """
- @return: True if all flags are valid USE values which may
- be specified in USE dependencies, False otherwise.
- """
- if isinstance(flags, str):
- flags = [flags]
-
- for flag in flags:
- if not flag in self.all and not flag in self.all_aliases and \
- not self._iuse_implicit_match(flag):
- return False
- return True
-
- def get_missing_iuse(self, flags):
- """
- @return: A list of flags missing from IUSE.
- """
- if isinstance(flags, str):
- flags = [flags]
- missing_iuse = []
- for flag in flags:
- if not flag in self.all and not flag in self.all_aliases and \
- not self._iuse_implicit_match(flag):
- missing_iuse.append(flag)
- return missing_iuse
-
- def get_real_flag(self, flag):
- """
- Returns the flag's name within the scope of this package
- (accounting for aliases), or None if the flag is unknown.
- """
- if flag in self.all:
- return flag
-
- if flag in self.all_aliases:
- for k, v in self.alias_mapping.items():
- if flag in v:
- return k
-
- if self._iuse_implicit_match(flag):
- return flag
-
- return None
-
- def __len__(self):
- return 4
-
- def __iter__(self):
- """
- This is used to generate mtimedb resume mergelist entries, so we
- limit it to 4 items for backward compatibility.
- """
- return iter(self._hash_key[:4])
-
- def __lt__(self, other):
- if other.cp != self.cp:
- return self.cp < other.cp
- result = portage.vercmp(self.version, other.version)
- if result < 0:
- return True
- if result == 0 and self.built and other.built:
- return self.build_time < other.build_time
- return False
-
- def __le__(self, other):
- if other.cp != self.cp:
- return self.cp <= other.cp
- result = portage.vercmp(self.version, other.version)
- if result <= 0:
- return True
- if result == 0 and self.built and other.built:
- return self.build_time <= other.build_time
- return False
-
- def __gt__(self, other):
- if other.cp != self.cp:
- return self.cp > other.cp
- result = portage.vercmp(self.version, other.version)
- if result > 0:
- return True
- if result == 0 and self.built and other.built:
- return self.build_time > other.build_time
- return False
-
- def __ge__(self, other):
- if other.cp != self.cp:
- return self.cp >= other.cp
- result = portage.vercmp(self.version, other.version)
- if result >= 0:
- return True
- if result == 0 and self.built and other.built:
- return self.build_time >= other.build_time
- return False
-
- def with_use(self, use):
- """
- Return an Package instance with the specified USE flags. The
- current instance may be returned if it has identical USE flags.
- @param use: a set of USE flags
- @type use: frozenset
- @return: A package with the specified USE flags
- @rtype: Package
- """
- if use is not self.use.enabled:
- pkg = self.copy()
- pkg._metadata["USE"] = " ".join(use)
- else:
- pkg = self
- return pkg
-
- _all_metadata_keys = set(x for x in portage.auxdbkeys \
- if not x.startswith("UNUSED_"))
+ __hash__ = Task.__hash__
+ __slots__ = (
+ "built",
+ "cpv",
+ "depth",
+ "installed",
+ "onlydeps",
+ "operation",
+ "root_config",
+ "type_name",
+ "category",
+ "counter",
+ "cp",
+ "cpv_split",
+ "inherited",
+ "iuse",
+ "mtime",
+ "pf",
+ "root",
+ "slot",
+ "sub_slot",
+ "slot_atom",
+ "version",
+ ) + (
+ "_invalid",
+ "_masks",
+ "_metadata",
+ "_provided_cps",
+ "_raw_metadata",
+ "_provides",
+ "_requires",
+ "_use",
+ "_validated_atoms",
+ "_visible",
+ )
+
+ metadata_keys = [
+ "BDEPEND",
+ "BUILD_ID",
+ "BUILD_TIME",
+ "CHOST",
+ "COUNTER",
+ "DEFINED_PHASES",
+ "DEPEND",
+ "EAPI",
+ "IDEPEND",
+ "INHERITED",
+ "IUSE",
+ "KEYWORDS",
+ "LICENSE",
+ "MD5",
+ "PDEPEND",
+ "PROVIDES",
+ "RDEPEND",
+ "repository",
+ "REQUIRED_USE",
+ "PROPERTIES",
+ "REQUIRES",
+ "RESTRICT",
+ "SIZE",
+ "SLOT",
+ "USE",
+ "_mtime_",
++ # PREFIX LOCAL
++ "EPREFIX",
+ ]
+
+ _dep_keys = ("BDEPEND", "DEPEND", "IDEPEND", "PDEPEND", "RDEPEND")
+ _buildtime_keys = ("BDEPEND", "DEPEND")
+ _runtime_keys = ("IDEPEND", "PDEPEND", "RDEPEND")
+ _use_conditional_misc_keys = ("LICENSE", "PROPERTIES", "RESTRICT")
+ UNKNOWN_REPO = _unknown_repo
+
+ def __init__(self, **kwargs):
+ metadata = _PackageMetadataWrapperBase(kwargs.pop("metadata"))
+ Task.__init__(self, **kwargs)
+ # the SlotObject constructor assigns self.root_config from keyword args
+ # and is an instance of the '_emerge.RootConfig.RootConfig' class
+ self.root = self.root_config.root
+ self._raw_metadata = metadata
+ self._metadata = _PackageMetadataWrapper(self, metadata)
+ if not self.built:
+ self._metadata["CHOST"] = self.root_config.settings.get("CHOST", "")
+ eapi_attrs = _get_eapi_attrs(self.eapi)
+
+ try:
+ db = self.cpv._db
+ except AttributeError:
+ if self.built:
+ # For independence from the source ebuild repository and
+ # profile implicit IUSE state, require the _db attribute
+ # for built packages.
+ raise
+ db = self.root_config.trees["porttree"].dbapi
+
+ self.cpv = _pkg_str(
+ self.cpv, metadata=self._metadata, settings=self.root_config.settings, db=db
+ )
+ if hasattr(self.cpv, "slot_invalid"):
+ self._invalid_metadata(
+ "SLOT.invalid", "SLOT: invalid value: '%s'" % self._metadata["SLOT"]
+ )
+ self.cpv_split = self.cpv.cpv_split
+ self.category, self.pf = portage.catsplit(self.cpv)
+ self.cp = self.cpv.cp
+ self.version = self.cpv.version
+ self.slot = self.cpv.slot
+ self.sub_slot = self.cpv.sub_slot
+ self.slot_atom = Atom("%s%s%s" % (self.cp, _slot_separator, self.slot))
+ # sync metadata with validated repo (may be UNKNOWN_REPO)
+ self._metadata["repository"] = self.cpv.repo
+
+ if self.root_config.settings.local_config:
+ implicit_match = db._iuse_implicit_cnstr(self.cpv, self._metadata)
+ else:
+ implicit_match = db._repoman_iuse_implicit_cnstr(self.cpv, self._metadata)
+ usealiases = self.root_config.settings._use_manager.getUseAliases(self)
+ self.iuse = self._iuse(
+ self, self._metadata["IUSE"].split(), implicit_match, usealiases, self.eapi
+ )
+
+ if (self.iuse.enabled or self.iuse.disabled) and not eapi_attrs.iuse_defaults:
+ if not self.installed:
+ self._invalid_metadata(
+ "EAPI.incompatible",
+ "IUSE contains defaults, but EAPI doesn't allow them",
+ )
+ if self.inherited is None:
+ self.inherited = frozenset()
+
+ if self.operation is None:
+ if self.onlydeps or self.installed:
+ self.operation = "nomerge"
+ else:
+ self.operation = "merge"
+
+ self._hash_key = Package._gen_hash_key(
+ cpv=self.cpv,
+ installed=self.installed,
+ onlydeps=self.onlydeps,
+ operation=self.operation,
+ repo_name=self.cpv.repo,
+ root_config=self.root_config,
+ type_name=self.type_name,
+ )
+ self._hash_value = hash(self._hash_key)
+
+ @property
+ def eapi(self):
+ return self._metadata["EAPI"]
+
+ @property
+ def build_id(self):
+ return self.cpv.build_id
+
+ @property
+ def build_time(self):
+ if not self.built:
+ raise AttributeError("build_time")
+ return self.cpv.build_time
+
+ @property
+ def defined_phases(self):
+ return self._metadata.defined_phases
+
+ @property
+ def properties(self):
+ return self._metadata.properties
+
+ @property
+ def provided_cps(self):
+ return (self.cp,)
+
+ @property
+ def restrict(self):
+ return self._metadata.restrict
+
+ @property
+ def metadata(self):
+ warnings.warn(
+ "_emerge.Package.Package.metadata is deprecated",
+ DeprecationWarning,
+ stacklevel=3,
+ )
+ return self._metadata
+
+ # These are calculated on-demand, so that they are calculated
+ # after FakeVartree applies its metadata tweaks.
+ @property
+ def invalid(self):
+ if self._invalid is None:
+ self._validate_deps()
+ if self._invalid is None:
+ self._invalid = False
+ return self._invalid
+
+ @property
+ def masks(self):
+ if self._masks is None:
+ self._masks = self._eval_masks()
+ return self._masks
+
+ @property
+ def visible(self):
+ if self._visible is None:
+ self._visible = self._eval_visiblity(self.masks)
+ return self._visible
+
+ @property
+ def validated_atoms(self):
+ """
+ Returns *all* validated atoms from the deps, regardless
+ of USE conditionals, with USE conditionals inside
+ atoms left unevaluated.
+ """
+ if self._validated_atoms is None:
+ self._validate_deps()
+ return self._validated_atoms
+
+ @property
+ def stable(self):
+ return self.cpv.stable
+
+ @property
+ def provides(self):
+ self.invalid
+ return self._provides
+
+ @property
+ def requires(self):
+ self.invalid
+ return self._requires
+
+ @classmethod
+ def _gen_hash_key(
+ cls,
+ cpv=None,
+ installed=None,
+ onlydeps=None,
+ operation=None,
+ repo_name=None,
+ root_config=None,
+ type_name=None,
+ **kwargs
+ ):
+
+ if operation is None:
+ if installed or onlydeps:
+ operation = "nomerge"
+ else:
+ operation = "merge"
+
+ root = None
+ if root_config is not None:
+ root = root_config.root
+ else:
+ raise TypeError("root_config argument is required")
+
+ elements = [type_name, root, str(cpv), operation]
+
+ # For installed (and binary) packages we don't care for the repo
+ # when it comes to hashing, because there can only be one cpv.
+ # So overwrite the repo_key with type_name.
+ if type_name is None:
+ raise TypeError("type_name argument is required")
+ elif type_name == "ebuild":
+ if repo_name is None:
+ raise AssertionError(
+ "Package._gen_hash_key() " + "called without 'repo_name' argument"
+ )
+ elements.append(repo_name)
+ elif type_name == "binary":
+ # Including a variety of fingerprints in the hash makes
+ # it possible to simultaneously consider multiple similar
+ # packages. Note that digests are not included here, since
+ # they are relatively expensive to compute, and they may
+ # not necessarily be available.
+ elements.extend([cpv.build_id, cpv.file_size, cpv.build_time, cpv.mtime])
+ else:
+ # For installed (and binary) packages we don't care for the repo
+ # when it comes to hashing, because there can only be one cpv.
+ # So overwrite the repo_key with type_name.
+ elements.append(type_name)
+
+ return tuple(elements)
+
+ def _validate_deps(self):
+ """
+ Validate deps. This does not trigger USE calculation since that
+ is expensive for ebuilds and therefore we want to avoid doing
+ it unnecessarily (like for masked packages).
+ """
+ eapi = self.eapi
+ dep_eapi = eapi
+ dep_valid_flag = self.iuse.is_valid_flag
+ if self.installed:
+ # Ignore EAPI.incompatible and conditionals missing
+ # from IUSE for installed packages since these issues
+ # aren't relevant now (re-evaluate when new EAPIs are
+ # deployed).
+ dep_eapi = None
+ dep_valid_flag = None
+
+ validated_atoms = []
+ for k in self._dep_keys:
+ v = self._metadata.get(k)
+ if not v:
+ continue
+ try:
+ atoms = use_reduce(
+ v,
+ eapi=dep_eapi,
+ matchall=True,
+ is_valid_flag=dep_valid_flag,
+ token_class=Atom,
+ flat=True,
+ )
+ except InvalidDependString as e:
+ self._metadata_exception(k, e)
+ else:
+ validated_atoms.extend(atoms)
+ if not self.built:
+ for atom in atoms:
+ if not isinstance(atom, Atom):
+ continue
+ if atom.slot_operator_built:
+ e = InvalidDependString(
+ _(
+ "Improper context for slot-operator "
+ '"built" atom syntax: %s'
+ )
+ % (atom.unevaluated_atom,)
+ )
+ self._metadata_exception(k, e)
+
+ self._validated_atoms = tuple(
+ set(atom for atom in validated_atoms if isinstance(atom, Atom))
+ )
+
+ for k in self._use_conditional_misc_keys:
+ v = self._metadata.get(k)
+ if not v:
+ continue
+ try:
+ use_reduce(
+ v, eapi=dep_eapi, matchall=True, is_valid_flag=dep_valid_flag
+ )
+ except InvalidDependString as e:
+ self._metadata_exception(k, e)
+
+ k = "REQUIRED_USE"
+ v = self._metadata.get(k)
+ if v and not self.built:
+ if not _get_eapi_attrs(eapi).required_use:
+ self._invalid_metadata(
+ "EAPI.incompatible",
+ "REQUIRED_USE set, but EAPI='%s' doesn't allow it" % eapi,
+ )
+ else:
+ try:
+ check_required_use(v, (), self.iuse.is_valid_flag, eapi=eapi)
+ except InvalidDependString as e:
+ self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
+
+ k = "SRC_URI"
+ v = self._metadata.get(k)
+ if v:
+ try:
+ use_reduce(
+ v,
+ is_src_uri=True,
+ eapi=eapi,
+ matchall=True,
+ is_valid_flag=self.iuse.is_valid_flag,
+ )
+ except InvalidDependString as e:
+ if not self.installed:
+ self._metadata_exception(k, e)
+
+ if self.built:
+ k = "PROVIDES"
+ try:
+ self._provides = frozenset(parse_soname_deps(self._metadata[k]))
+ except InvalidData as e:
+ self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
+
+ k = "REQUIRES"
+ try:
+ self._requires = frozenset(parse_soname_deps(self._metadata[k]))
+ except InvalidData as e:
+ self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
+
+ def copy(self):
+ return Package(
+ built=self.built,
+ cpv=self.cpv,
+ depth=self.depth,
+ installed=self.installed,
+ metadata=self._raw_metadata,
+ onlydeps=self.onlydeps,
+ operation=self.operation,
+ root_config=self.root_config,
+ type_name=self.type_name,
+ )
+
+ def _eval_masks(self):
+ masks = {}
+ settings = self.root_config.settings
+
+ if self.invalid is not False:
+ masks["invalid"] = self.invalid
+
+ if not settings._accept_chost(self.cpv, self._metadata):
+ masks["CHOST"] = self._metadata["CHOST"]
+
+ eapi = self.eapi
+ if not portage.eapi_is_supported(eapi):
+ masks["EAPI.unsupported"] = eapi
+ if portage._eapi_is_deprecated(eapi):
+ masks["EAPI.deprecated"] = eapi
+
+ missing_keywords = settings._getMissingKeywords(self.cpv, self._metadata)
+ if missing_keywords:
+ masks["KEYWORDS"] = missing_keywords
+
+ try:
+ missing_properties = settings._getMissingProperties(
+ self.cpv, self._metadata
+ )
+ if missing_properties:
+ masks["PROPERTIES"] = missing_properties
+ except InvalidDependString:
+ # already recorded as 'invalid'
+ pass
+
+ try:
+ missing_restricts = settings._getMissingRestrict(self.cpv, self._metadata)
+ if missing_restricts:
+ masks["RESTRICT"] = missing_restricts
+ except InvalidDependString:
+ # already recorded as 'invalid'
+ pass
+
+ mask_atom = settings._getMaskAtom(self.cpv, self._metadata)
+ if mask_atom is not None:
+ masks["package.mask"] = mask_atom
+
+ try:
+ missing_licenses = settings._getMissingLicenses(self.cpv, self._metadata)
+ if missing_licenses:
+ masks["LICENSE"] = missing_licenses
+ except InvalidDependString:
+ # already recorded as 'invalid'
+ pass
+
+ if not masks:
+ masks = False
+
+ return masks
+
+ def _eval_visiblity(self, masks):
+
+ if masks is not False:
+
+ if "EAPI.unsupported" in masks:
+ return False
+
+ if "invalid" in masks:
+ return False
+
+ if not self.installed and (
+ "CHOST" in masks
+ or "EAPI.deprecated" in masks
+ or "KEYWORDS" in masks
+ or "PROPERTIES" in masks
+ or "RESTRICT" in masks
+ ):
+ return False
+
+ if "package.mask" in masks or "LICENSE" in masks:
+ return False
+
+ return True
+
+ def get_keyword_mask(self):
+ """returns None, 'missing', or 'unstable'."""
+
+ missing = self.root_config.settings._getRawMissingKeywords(
+ self.cpv, self._metadata
+ )
+
+ if not missing:
+ return None
+
+ if "**" in missing:
+ return "missing"
+
+ global_accept_keywords = frozenset(
+ self.root_config.settings.get("ACCEPT_KEYWORDS", "").split()
+ )
+
+ for keyword in missing:
+ if keyword.lstrip("~") in global_accept_keywords:
+ return "unstable"
+
+ return "missing"
+
+ def isHardMasked(self):
+ """returns a bool if the cpv is in the list of
+ expanded pmaskdict[cp] available ebuilds"""
+ pmask = self.root_config.settings._getRawMaskAtom(self.cpv, self._metadata)
+ return pmask is not None
+
+ def _metadata_exception(self, k, e):
+
+ if k.endswith("DEPEND"):
+ qacat = "dependency.syntax"
+ else:
+ qacat = k + ".syntax"
+
+ if not self.installed:
+ categorized_error = False
+ if e.errors:
+ for error in e.errors:
+ if getattr(error, "category", None) is None:
+ continue
+ categorized_error = True
+ self._invalid_metadata(error.category, "%s: %s" % (k, error))
+
+ if not categorized_error:
+ self._invalid_metadata(qacat, "%s: %s" % (k, e))
+ else:
+ # For installed packages, show the path of the file
+ # containing the invalid metadata, since the user may
+ # want to fix the deps by hand.
+ vardb = self.root_config.trees["vartree"].dbapi
+ path = vardb.getpath(self.cpv, filename=k)
+ self._invalid_metadata(qacat, "%s: %s in '%s'" % (k, e, path))
+
+ def _invalid_metadata(self, msg_type, msg):
+ if self._invalid is None:
+ self._invalid = {}
+ msgs = self._invalid.get(msg_type)
+ if msgs is None:
+ msgs = []
+ self._invalid[msg_type] = msgs
+ msgs.append(msg)
+
+ def __str__(self):
+ if self.operation == "merge":
+ if self.type_name == "binary":
+ cpv_color = "PKG_BINARY_MERGE"
+ else:
+ cpv_color = "PKG_MERGE"
+ elif self.operation == "uninstall":
+ cpv_color = "PKG_UNINSTALL"
+ else:
+ cpv_color = "PKG_NOMERGE"
+
+ build_id_str = ""
+ if isinstance(self.cpv.build_id, int) and self.cpv.build_id > 0:
+ build_id_str = "-%s" % self.cpv.build_id
+
+ s = "(%s, %s" % (
+ portage.output.colorize(
+ cpv_color,
+ self.cpv
+ + build_id_str
+ + _slot_separator
+ + self.slot
+ + "/"
+ + self.sub_slot
+ + _repo_separator
+ + self.repo,
+ ),
+ self.type_name,
+ )
+
+ if self.type_name == "installed":
+ if self.root_config.settings["ROOT"] != "/":
+ s += " in '%s'" % self.root_config.settings["ROOT"]
+ if self.operation == "uninstall":
+ s += " scheduled for uninstall"
+ else:
+ if self.operation == "merge":
+ s += " scheduled for merge"
+ if self.root_config.settings["ROOT"] != "/":
+ s += " to '%s'" % self.root_config.settings["ROOT"]
+ s += ")"
+ return s
+
+ class _use_class:
+
+ __slots__ = ("enabled", "_expand", "_expand_hidden", "_force", "_pkg", "_mask")
+
+ # Share identical frozenset instances when available.
+ _frozensets = {}
+
+ def __init__(self, pkg, enabled_flags):
+ self._pkg = pkg
+ self._expand = None
+ self._expand_hidden = None
+ self._force = None
+ self._mask = None
+ if eapi_has_use_aliases(pkg.eapi):
+ for enabled_flag in enabled_flags:
+ enabled_flags.extend(pkg.iuse.alias_mapping.get(enabled_flag, []))
+ self.enabled = frozenset(enabled_flags)
+ if pkg.built:
+ # Use IUSE to validate USE settings for built packages,
+ # in case the package manager that built this package
+ # failed to do that for some reason (or in case of
+ # data corruption).
+ missing_iuse = pkg.iuse.get_missing_iuse(self.enabled)
+ if missing_iuse:
+ self.enabled = self.enabled.difference(missing_iuse)
+
+ def _init_force_mask(self):
+ pkgsettings = self._pkg._get_pkgsettings()
+ frozensets = self._frozensets
+ s = frozenset(pkgsettings.get("USE_EXPAND", "").lower().split())
+ self._expand = frozensets.setdefault(s, s)
+ s = frozenset(pkgsettings.get("USE_EXPAND_HIDDEN", "").lower().split())
+ self._expand_hidden = frozensets.setdefault(s, s)
+ s = pkgsettings.useforce
+ self._force = frozensets.setdefault(s, s)
+ s = pkgsettings.usemask
+ self._mask = frozensets.setdefault(s, s)
+
+ @property
+ def expand(self):
+ if self._expand is None:
+ self._init_force_mask()
+ return self._expand
+
+ @property
+ def expand_hidden(self):
+ if self._expand_hidden is None:
+ self._init_force_mask()
+ return self._expand_hidden
+
+ @property
+ def force(self):
+ if self._force is None:
+ self._init_force_mask()
+ return self._force
+
+ @property
+ def mask(self):
+ if self._mask is None:
+ self._init_force_mask()
+ return self._mask
+
+ @property
+ def repo(self):
+ return self._metadata["repository"]
+
+ @property
+ def repo_priority(self):
+ repo_info = self.root_config.settings.repositories.prepos.get(self.repo)
+ if repo_info is None:
+ return None
+ return repo_info.priority
+
+ @property
+ def use(self):
+ if self._use is None:
+ self._init_use()
+ return self._use
+
+ def _get_pkgsettings(self):
+ pkgsettings = self.root_config.trees["porttree"].dbapi.doebuild_settings
+ pkgsettings.setcpv(self)
+ return pkgsettings
+
+ def _init_use(self):
+ if self.built:
+ # Use IUSE to validate USE settings for built packages,
+ # in case the package manager that built this package
+ # failed to do that for some reason (or in case of
+ # data corruption). The enabled flags must be consistent
+ # with implicit IUSE, in order to avoid potential
+ # inconsistencies in USE dep matching (see bug #453400).
+ use_str = self._metadata["USE"]
+ is_valid_flag = self.iuse.is_valid_flag
+ enabled_flags = [x for x in use_str.split() if is_valid_flag(x)]
+ use_str = " ".join(enabled_flags)
+ self._use = self._use_class(self, enabled_flags)
+ else:
+ try:
+ use_str = _PackageMetadataWrapperBase.__getitem__(self._metadata, "USE")
+ except KeyError:
+ use_str = None
+ calculated_use = False
+ if not use_str:
+ use_str = self._get_pkgsettings()["PORTAGE_USE"]
+ calculated_use = True
+ self._use = self._use_class(self, use_str.split())
+ # Initialize these now, since USE access has just triggered
+ # setcpv, and we want to cache the result of the force/mask
+ # calculations that were done.
+ if calculated_use:
+ self._use._init_force_mask()
+
+ _PackageMetadataWrapperBase.__setitem__(self._metadata, "USE", use_str)
+
+ return use_str
+
+ class _iuse:
+
+ __slots__ = (
+ "__weakref__",
+ "_iuse_implicit_match",
+ "_pkg",
+ "alias_mapping",
+ "all",
+ "all_aliases",
+ "enabled",
+ "disabled",
+ "tokens",
+ )
+
+ def __init__(self, pkg, tokens, iuse_implicit_match, aliases, eapi):
+ self._pkg = pkg
+ self.tokens = tuple(tokens)
+ self._iuse_implicit_match = iuse_implicit_match
+ enabled = []
+ disabled = []
+ other = []
+ enabled_aliases = []
+ disabled_aliases = []
+ other_aliases = []
+ aliases_supported = eapi_has_use_aliases(eapi)
+ self.alias_mapping = {}
+ for x in tokens:
+ prefix = x[:1]
+ if prefix == "+":
+ enabled.append(x[1:])
+ if aliases_supported:
+ self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
+ enabled_aliases.extend(self.alias_mapping[x[1:]])
+ elif prefix == "-":
+ disabled.append(x[1:])
+ if aliases_supported:
+ self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
+ disabled_aliases.extend(self.alias_mapping[x[1:]])
+ else:
+ other.append(x)
+ if aliases_supported:
+ self.alias_mapping[x] = aliases.get(x, [])
+ other_aliases.extend(self.alias_mapping[x])
+ self.enabled = frozenset(chain(enabled, enabled_aliases))
+ self.disabled = frozenset(chain(disabled, disabled_aliases))
+ self.all = frozenset(chain(enabled, disabled, other))
+ self.all_aliases = frozenset(
+ chain(enabled_aliases, disabled_aliases, other_aliases)
+ )
+
+ def is_valid_flag(self, flags):
+ """
+ @return: True if all flags are valid USE values which may
+ be specified in USE dependencies, False otherwise.
+ """
+ if isinstance(flags, str):
+ flags = [flags]
+
+ for flag in flags:
+ if (
+ not flag in self.all
+ and not flag in self.all_aliases
+ and not self._iuse_implicit_match(flag)
+ ):
+ return False
+ return True
+
+ def get_missing_iuse(self, flags):
+ """
+ @return: A list of flags missing from IUSE.
+ """
+ if isinstance(flags, str):
+ flags = [flags]
+ missing_iuse = []
+ for flag in flags:
+ if (
+ not flag in self.all
+ and not flag in self.all_aliases
+ and not self._iuse_implicit_match(flag)
+ ):
+ missing_iuse.append(flag)
+ return missing_iuse
+
+ def get_real_flag(self, flag):
+ """
+ Returns the flag's name within the scope of this package
+ (accounting for aliases), or None if the flag is unknown.
+ """
+ if flag in self.all:
+ return flag
+
+ if flag in self.all_aliases:
+ for k, v in self.alias_mapping.items():
+ if flag in v:
+ return k
+
+ if self._iuse_implicit_match(flag):
+ return flag
+
+ return None
+
+ def __len__(self):
+ return 4
+
+ def __iter__(self):
+ """
+ This is used to generate mtimedb resume mergelist entries, so we
+ limit it to 4 items for backward compatibility.
+ """
+ return iter(self._hash_key[:4])
+
+ def __lt__(self, other):
+ if other.cp != self.cp:
+ return self.cp < other.cp
+ result = portage.vercmp(self.version, other.version)
+ if result < 0:
+ return True
+ if result == 0 and self.built and other.built:
+ return self.build_time < other.build_time
+ return False
+
+ def __le__(self, other):
+ if other.cp != self.cp:
+ return self.cp <= other.cp
+ result = portage.vercmp(self.version, other.version)
+ if result <= 0:
+ return True
+ if result == 0 and self.built and other.built:
+ return self.build_time <= other.build_time
+ return False
+
+ def __gt__(self, other):
+ if other.cp != self.cp:
+ return self.cp > other.cp
+ result = portage.vercmp(self.version, other.version)
+ if result > 0:
+ return True
+ if result == 0 and self.built and other.built:
+ return self.build_time > other.build_time
+ return False
+
+ def __ge__(self, other):
+ if other.cp != self.cp:
+ return self.cp >= other.cp
+ result = portage.vercmp(self.version, other.version)
+ if result >= 0:
+ return True
+ if result == 0 and self.built and other.built:
+ return self.build_time >= other.build_time
+ return False
+
+ def with_use(self, use):
+ """
+ Return a Package instance with the specified USE flags. The
+ current instance may be returned if it has identical USE flags.
+ @param use: a set of USE flags
+ @type use: frozenset
+ @return: A package with the specified USE flags
+ @rtype: Package
+ """
+ if use is not self.use.enabled:
+ pkg = self.copy()
+ pkg._metadata["USE"] = " ".join(use)
+ else:
+ pkg = self
+ return pkg
+
+
+ _all_metadata_keys = set(x for x in portage.auxdbkeys)
_all_metadata_keys.update(Package.metadata_keys)
_all_metadata_keys = frozenset(_all_metadata_keys)
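
For orientation while reading the Package class above: _gen_hash_key() keys packages differently per type, so ebuilds from different repositories, binary packages with different build fingerprints, and installed packages never collide. A minimal standalone sketch of that idea, using hypothetical values (not the code from this commit):

    # Simplified sketch of the hash-key layout used by Package._gen_hash_key();
    # names and values here are illustrative only.
    def gen_hash_key(type_name, root, cpv, operation, repo_name=None, fingerprints=()):
        elements = [type_name, root, str(cpv), operation]
        if type_name == "ebuild":
            # source packages must carry their repository to stay distinct
            elements.append(repo_name)
        elif type_name == "binary":
            # binary packages add cheap fingerprints (build id, size, times)
            elements.extend(fingerprints)
        else:
            # installed packages: only one instance per cpv can exist
            elements.append(type_name)
        return tuple(elements)

    # The same cpv from two repositories yields two distinct keys:
    k1 = gen_hash_key("ebuild", "/", "app-misc/foo-1.0", "merge", repo_name="gentoo")
    k2 = gen_hash_key("ebuild", "/", "app-misc/foo-1.0", "merge", repo_name="guru")
    assert k1 != k2
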
diff --cc lib/_emerge/actions.py
index 7eea1c93a,515b22b66..0ed90cd71
--- a/lib/_emerge/actions.py
+++ b/lib/_emerge/actions.py
@@@ -2479,151 -2842,124 +2843,153 @@@ def getportageversion(portdir, _unused
class _emerge_config(SlotObject):
- __slots__ = ('action', 'args', 'opts',
- 'running_config', 'target_config', 'trees')
+ __slots__ = ("action", "args", "opts", "running_config", "target_config", "trees")
- # Support unpack as tuple, for load_emerge_config backward compatibility.
- def __iter__(self):
- yield self.target_config.settings
- yield self.trees
- yield self.target_config.mtimedb
+ # Support unpack as tuple, for load_emerge_config backward compatibility.
+ def __iter__(self):
+ yield self.target_config.settings
+ yield self.trees
+ yield self.target_config.mtimedb
- def __getitem__(self, index):
- return list(self)[index]
+ def __getitem__(self, index):
+ return list(self)[index]
- def __len__(self):
- return 3
+ def __len__(self):
+ return 3
- def load_emerge_config(emerge_config=None, env=None, **kargs):
- if emerge_config is None:
- emerge_config = _emerge_config(**kargs)
-
- env = os.environ if env is None else env
- kwargs = {'env': env}
- for k, envvar in (("config_root", "PORTAGE_CONFIGROOT"), ("target_root", "ROOT"),
- ("sysroot", "SYSROOT"), ("eprefix", "EPREFIX")):
- v = env.get(envvar)
- if v is not None:
- kwargs[k] = v
- emerge_config.trees = portage.create_trees(trees=emerge_config.trees,
- **kwargs)
-
- for root_trees in emerge_config.trees.values():
- settings = root_trees["vartree"].settings
- settings._init_dirs()
- setconfig = load_default_config(settings, root_trees)
- root_config = RootConfig(settings, root_trees, setconfig)
- if "root_config" in root_trees:
- # Propagate changes to the existing instance,
- # which may be referenced by a depgraph.
- root_trees["root_config"].update(root_config)
- else:
- root_trees["root_config"] = root_config
+ def load_emerge_config(emerge_config=None, env=None, **kargs):
- target_eroot = emerge_config.trees._target_eroot
- emerge_config.target_config = \
- emerge_config.trees[target_eroot]['root_config']
- emerge_config.target_config.mtimedb = portage.MtimeDB(
- os.path.join(target_eroot, portage.CACHE_PATH, "mtimedb"))
- emerge_config.running_config = emerge_config.trees[
- emerge_config.trees._running_eroot]['root_config']
- QueryCommand._db = emerge_config.trees
+ if emerge_config is None:
+ emerge_config = _emerge_config(**kargs)
+
+ env = os.environ if env is None else env
+ kwargs = {"env": env}
+ for k, envvar in (
+ ("config_root", "PORTAGE_CONFIGROOT"),
+ ("target_root", "ROOT"),
+ ("sysroot", "SYSROOT"),
+ ("eprefix", "EPREFIX"),
+ ):
+ v = env.get(envvar)
+ if v is not None:
+ kwargs[k] = v
+ emerge_config.trees = portage.create_trees(trees=emerge_config.trees, **kwargs)
+
+ for root_trees in emerge_config.trees.values():
+ settings = root_trees["vartree"].settings
+ settings._init_dirs()
+ setconfig = load_default_config(settings, root_trees)
+ root_config = RootConfig(settings, root_trees, setconfig)
+ if "root_config" in root_trees:
+ # Propagate changes to the existing instance,
+ # which may be referenced by a depgraph.
+ root_trees["root_config"].update(root_config)
+ else:
+ root_trees["root_config"] = root_config
+
+ target_eroot = emerge_config.trees._target_eroot
+ emerge_config.target_config = emerge_config.trees[target_eroot]["root_config"]
+ emerge_config.target_config.mtimedb = portage.MtimeDB(
+ os.path.join(target_eroot, portage.CACHE_PATH, "mtimedb")
+ )
+ emerge_config.running_config = emerge_config.trees[
+ emerge_config.trees._running_eroot
+ ]["root_config"]
+ QueryCommand._db = emerge_config.trees
+
+ return emerge_config
- return emerge_config
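
The reformatted load_emerge_config() above still maps the same four environment variables onto create_trees() keyword arguments. A small sketch of just that mapping step (hypothetical helper name, not part of the commit):

    import os

    # Environment overrides honoured by load_emerge_config(), in the order shown above.
    _ENV_TO_KWARG = (
        ("config_root", "PORTAGE_CONFIGROOT"),
        ("target_root", "ROOT"),
        ("sysroot", "SYSROOT"),
        ("eprefix", "EPREFIX"),
    )

    def collect_tree_kwargs(env=None):
        env = os.environ if env is None else env
        kwargs = {"env": env}
        for kwarg, envvar in _ENV_TO_KWARG:
            value = env.get(envvar)
            if value is not None:
                kwargs[kwarg] = value
        return kwargs

    # e.g. collect_tree_kwargs({"ROOT": "/mnt/target", "EPREFIX": "/home/me/gentoo"})
    # -> {'env': {...}, 'target_root': '/mnt/target', 'eprefix': '/home/me/gentoo'}
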
def getgccversion(chost=None):
- """
- rtype: C{str}
- return: the current in-use gcc version
- """
+ """
+ rtype: C{str}
+ return: the current in-use gcc version
+ """
- gcc_ver_command = ['gcc', '-dumpversion']
- gcc_ver_prefix = 'gcc-'
+ gcc_ver_command = ["gcc", "-dumpversion"]
+ gcc_ver_prefix = "gcc-"
+ clang_ver_command = ['clang', '--version']
+ clang_ver_prefix = 'clang-'
+
+ ubinpath = os.path.join('/', portage.const.EPREFIX, 'usr', 'bin')
+
- gcc_not_found_error = red(
- "!!! No gcc found. You probably need to 'source /etc/profile'\n" +
- "!!! to update the environment of this terminal and possibly\n" +
- "!!! other terminals also.\n"
- )
+ gcc_not_found_error = red(
+ "!!! No gcc found. You probably need to 'source /etc/profile'\n"
+ + "!!! to update the environment of this terminal and possibly\n"
+ + "!!! other terminals also.\n"
+ )
+ def getclangversion(output):
+ version = re.search('clang version ([0-9.]+) ', output)
+ if version:
+ return version.group(1)
+ return "unknown"
+
- if chost:
- try:
- proc = subprocess.Popen(["gcc-config", "-c"],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myoutput = None
- mystatus = 1
- else:
- myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- mystatus = proc.wait()
- if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
- return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
+ if chost:
+ try:
+ proc = subprocess.Popen(
- ["gcc-config", "-c"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
++ [ubinpath + "/" + "gcc-config", "-c"],
++ stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+ )
+ except OSError:
+ myoutput = None
+ mystatus = 1
+ else:
+ myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+ mystatus = proc.wait()
+ if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
+ return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
+
+ try:
+ proc = subprocess.Popen(
- [chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
++ [ubinpath + "/" + chost + "-" + gcc_ver_command[0]]
++ + gcc_ver_command[1:],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ )
+ except OSError:
+ myoutput = None
+ mystatus = 1
+ else:
+ myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+ mystatus = proc.wait()
+ if mystatus == os.EX_OK:
+ return gcc_ver_prefix + myoutput
+ try:
+ proc = subprocess.Popen(
- [chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myoutput = None
- mystatus = 1
- else:
- myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- mystatus = proc.wait()
- if mystatus == os.EX_OK:
- return gcc_ver_prefix + myoutput
-
- try:
- proc = subprocess.Popen([ubinpath + "/" + gcc_ver_command[0]] + gcc_ver_command[1:],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myoutput = None
- mystatus = 1
- else:
- myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- mystatus = proc.wait()
- if mystatus == os.EX_OK:
- return gcc_ver_prefix + myoutput
-
- if chost:
- try:
- proc = subprocess.Popen(
- [ubinpath + "/" + chost + "-" + clang_ver_command[0]] + clang_ver_command[1:],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
++ [ubinpath + "/" + chost + "-" + clang_ver_command[0]]
++ + clang_ver_command[1:],
++ stdout=subprocess.PIPE,
++ stderr=subprocess.STDOUT,
++ )
+ except OSError:
+ myoutput = None
+ mystatus = 1
+ else:
+ myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+ mystatus = proc.wait()
+ if mystatus == os.EX_OK:
+ return clang_ver_prefix + getclangversion(myoutput)
+
- try:
- proc = subprocess.Popen([ubinpath + "/" + clang_ver_command[0]] + clang_ver_command[1:],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myoutput = None
- mystatus = 1
- else:
- myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- mystatus = proc.wait()
- if mystatus == os.EX_OK:
- return clang_ver_prefix + getclangversion(myoutput)
-
- portage.writemsg(gcc_not_found_error, noiselevel=-1)
- return "[unavailable]"
+ try:
+ proc = subprocess.Popen(
+ gcc_ver_command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+ )
+ except OSError:
+ myoutput = None
+ mystatus = 1
+ else:
+ myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+ mystatus = proc.wait()
+ if mystatus == os.EX_OK:
+ return gcc_ver_prefix + myoutput
+
+ portage.writemsg(gcc_not_found_error, noiselevel=-1)
+ return "[unavailable]"
+
# Warn about features that may confuse users and
# lead them to report invalid bugs.
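
The prefix-local changes to getgccversion() above mainly reroute the compiler lookups through EPREFIX/usr/bin and add a clang fallback. A rough, self-contained sketch of the resulting probe order (hypothetical helper, not the merged code itself):

    import os
    import subprocess

    def probe_compiler_version(eprefix, chost=None):
        # Condensed probe order: gcc-config, then ${CHOST}-gcc, then plain gcc,
        # with the prefixed binaries taken from EPREFIX/usr/bin.
        ubinpath = os.path.join("/", eprefix, "usr", "bin")
        candidates = []
        if chost:
            candidates.append([os.path.join(ubinpath, "gcc-config"), "-c"])
            candidates.append([os.path.join(ubinpath, chost + "-gcc"), "-dumpversion"])
        candidates.append(["gcc", "-dumpversion"])
        for cmd in candidates:
            try:
                result = subprocess.run(cmd, capture_output=True, text=True)
            except OSError:
                continue  # binary not installed; try the next candidate
            if result.returncode == 0:
                return result.stdout.strip()
        return "[unavailable]"
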
diff --cc lib/_emerge/depgraph.py
index 4d578d557,f6549eba6..61be9d02b
--- a/lib/_emerge/depgraph.py
+++ b/lib/_emerge/depgraph.py
@@@ -10083,234 -11596,273 +11596,283 @@@ def _backtrack_depgraph(settings, trees
def resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
- """
- Raises PackageSetNotFound if myfiles contains a missing package set.
- """
- _spinner_start(spinner, myopts)
- try:
- return _resume_depgraph(settings, trees, mtimedb, myopts,
- myparams, spinner)
- finally:
- _spinner_stop(spinner)
+ """
+ Raises PackageSetNotFound if myfiles contains a missing package set.
+ """
+ _spinner_start(spinner, myopts)
+ try:
+ return _resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner)
+ finally:
+ _spinner_stop(spinner)
+
def _resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
- """
- Construct a depgraph for the given resume list. This will raise
- PackageNotFound or depgraph.UnsatisfiedResumeDep when necessary.
- TODO: Return reasons for dropped_tasks, for display/logging.
- @rtype: tuple
- @return: (success, depgraph, dropped_tasks)
- """
- skip_masked = True
- skip_unsatisfied = True
- mergelist = mtimedb["resume"]["mergelist"]
- dropped_tasks = {}
- frozen_config = _frozen_depgraph_config(settings, trees,
- myopts, myparams, spinner)
- while True:
- mydepgraph = depgraph(settings, trees,
- myopts, myparams, spinner, frozen_config=frozen_config)
- try:
- success = mydepgraph._loadResumeCommand(mtimedb["resume"],
- skip_masked=skip_masked)
- except depgraph.UnsatisfiedResumeDep as e:
- if not skip_unsatisfied:
- raise
-
- graph = mydepgraph._dynamic_config.digraph
- unsatisfied_parents = {}
- traversed_nodes = set()
- unsatisfied_stack = [(dep.parent, dep.atom) for dep in e.value]
- while unsatisfied_stack:
- pkg, atom = unsatisfied_stack.pop()
- if atom is not None and \
- mydepgraph._select_pkg_from_installed(
- pkg.root, atom)[0] is not None:
- continue
- atoms = unsatisfied_parents.get(pkg)
- if atoms is None:
- atoms = []
- unsatisfied_parents[pkg] = atoms
- if atom is not None:
- atoms.append(atom)
- if pkg in traversed_nodes:
- continue
- traversed_nodes.add(pkg)
-
- # If this package was pulled in by a parent
- # package scheduled for merge, removing this
- # package may cause the parent package's
- # dependency to become unsatisfied.
- for parent_node, atom in \
- mydepgraph._dynamic_config._parent_atoms.get(pkg, []):
- if not isinstance(parent_node, Package) \
- or parent_node.operation not in ("merge", "nomerge"):
- continue
- # We need to traverse all priorities here, in order to
- # ensure that a package with an unsatisfied depenedency
- # won't get pulled in, even indirectly via a soft
- # dependency.
- unsatisfied_stack.append((parent_node, atom))
-
- unsatisfied_tuples = frozenset(tuple(parent_node)
- for parent_node in unsatisfied_parents
- if isinstance(parent_node, Package))
- pruned_mergelist = []
- for x in mergelist:
- if isinstance(x, list) and \
- tuple(x) not in unsatisfied_tuples:
- pruned_mergelist.append(x)
-
- # If the mergelist doesn't shrink then this loop is infinite.
- if len(pruned_mergelist) == len(mergelist):
- # This happens if a package can't be dropped because
- # it's already installed, but it has unsatisfied PDEPEND.
- raise
- mergelist[:] = pruned_mergelist
-
- # Exclude installed packages that have been removed from the graph due
- # to failure to build/install runtime dependencies after the dependent
- # package has already been installed.
- dropped_tasks.update((pkg, atoms) for pkg, atoms in \
- unsatisfied_parents.items() if pkg.operation != "nomerge")
-
- del e, graph, traversed_nodes, \
- unsatisfied_parents, unsatisfied_stack
- continue
- else:
- break
- return (success, mydepgraph, dropped_tasks)
-
- def get_mask_info(root_config, cpv, pkgsettings,
- db, pkg_type, built, installed, db_keys, myrepo = None, _pkg_use_enabled=None):
- try:
- metadata = dict(zip(db_keys,
- db.aux_get(cpv, db_keys, myrepo=myrepo)))
- except KeyError:
- metadata = None
-
- if metadata is None:
- mreasons = ["corruption"]
- else:
- eapi = metadata['EAPI']
- if not portage.eapi_is_supported(eapi):
- mreasons = ['EAPI %s' % eapi]
- else:
- pkg = Package(type_name=pkg_type, root_config=root_config,
- cpv=cpv, built=built, installed=installed, metadata=metadata)
-
- modified_use = None
- if _pkg_use_enabled is not None:
- modified_use = _pkg_use_enabled(pkg)
-
- mreasons = get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=modified_use)
-
- return metadata, mreasons
+ """
+ Construct a depgraph for the given resume list. This will raise
+ PackageNotFound or depgraph.UnsatisfiedResumeDep when necessary.
+ TODO: Return reasons for dropped_tasks, for display/logging.
+ @rtype: tuple
+ @return: (success, depgraph, dropped_tasks)
+ """
+ skip_masked = True
+ skip_unsatisfied = True
+ mergelist = mtimedb["resume"]["mergelist"]
+ dropped_tasks = {}
+ frozen_config = _frozen_depgraph_config(settings, trees, myopts, myparams, spinner)
+ while True:
+ mydepgraph = depgraph(
+ settings, trees, myopts, myparams, spinner, frozen_config=frozen_config
+ )
+ try:
+ success = mydepgraph._loadResumeCommand(
+ mtimedb["resume"], skip_masked=skip_masked
+ )
+ except depgraph.UnsatisfiedResumeDep as e:
+ if not skip_unsatisfied:
+ raise
+
+ graph = mydepgraph._dynamic_config.digraph
+ unsatisfied_parents = {}
+ traversed_nodes = set()
+ unsatisfied_stack = [(dep.parent, dep.atom) for dep in e.value]
+ while unsatisfied_stack:
+ pkg, atom = unsatisfied_stack.pop()
+ if (
+ atom is not None
+ and mydepgraph._select_pkg_from_installed(pkg.root, atom)[0]
+ is not None
+ ):
+ continue
+ atoms = unsatisfied_parents.get(pkg)
+ if atoms is None:
+ atoms = []
+ unsatisfied_parents[pkg] = atoms
+ if atom is not None:
+ atoms.append(atom)
+ if pkg in traversed_nodes:
+ continue
+ traversed_nodes.add(pkg)
+
+ # If this package was pulled in by a parent
+ # package scheduled for merge, removing this
+ # package may cause the parent package's
+ # dependency to become unsatisfied.
+ for parent_node, atom in mydepgraph._dynamic_config._parent_atoms.get(
+ pkg, []
+ ):
+ if not isinstance(
+ parent_node, Package
+ ) or parent_node.operation not in ("merge", "nomerge"):
+ continue
+ # We need to traverse all priorities here, in order to
+ # ensure that a package with an unsatisfied dependency
+ # won't get pulled in, even indirectly via a soft
+ # dependency.
+ unsatisfied_stack.append((parent_node, atom))
+
+ unsatisfied_tuples = frozenset(
+ tuple(parent_node)
+ for parent_node in unsatisfied_parents
+ if isinstance(parent_node, Package)
+ )
+ pruned_mergelist = []
+ for x in mergelist:
+ if isinstance(x, list) and tuple(x) not in unsatisfied_tuples:
+ pruned_mergelist.append(x)
+
+ # If the mergelist doesn't shrink then this loop is infinite.
+ if len(pruned_mergelist) == len(mergelist):
+ # This happens if a package can't be dropped because
+ # it's already installed, but it has unsatisfied PDEPEND.
+ raise
+ mergelist[:] = pruned_mergelist
+
+ # Exclude installed packages that have been removed from the graph due
+ # to failure to build/install runtime dependencies after the dependent
+ # package has already been installed.
+ dropped_tasks.update(
+ (pkg, atoms)
+ for pkg, atoms in unsatisfied_parents.items()
+ if pkg.operation != "nomerge"
+ )
+
+ del e, graph, traversed_nodes, unsatisfied_parents, unsatisfied_stack
+ continue
+ else:
+ break
+ return (success, mydepgraph, dropped_tasks)
+
+
+ def get_mask_info(
+ root_config,
+ cpv,
+ pkgsettings,
+ db,
+ pkg_type,
+ built,
+ installed,
+ db_keys,
+ myrepo=None,
+ _pkg_use_enabled=None,
+ ):
+ try:
+ metadata = dict(zip(db_keys, db.aux_get(cpv, db_keys, myrepo=myrepo)))
+ except KeyError:
+ metadata = None
+
+ if metadata is None:
+ mreasons = ["corruption"]
+ else:
+ eapi = metadata["EAPI"]
+ if not portage.eapi_is_supported(eapi):
+ mreasons = ["EAPI %s" % eapi]
+ else:
+ pkg = Package(
+ type_name=pkg_type,
+ root_config=root_config,
+ cpv=cpv,
+ built=built,
+ installed=installed,
+ metadata=metadata,
+ )
+
+ modified_use = None
+ if _pkg_use_enabled is not None:
+ modified_use = _pkg_use_enabled(pkg)
+
+ mreasons = get_masking_status(
+ pkg, pkgsettings, root_config, myrepo=myrepo, use=modified_use
+ )
+
+ return metadata, mreasons
+
def show_masked_packages(masked_packages):
- shown_licenses = set()
- shown_comments = set()
- # Maybe there is both an ebuild and a binary. Only
- # show one of them to avoid redundant appearance.
- shown_cpvs = set()
- have_eapi_mask = False
- for (root_config, pkgsettings, cpv, repo,
- metadata, mreasons) in masked_packages:
- output_cpv = cpv
- if repo:
- output_cpv += _repo_separator + repo
- if output_cpv in shown_cpvs:
- continue
- shown_cpvs.add(output_cpv)
- eapi_masked = metadata is not None and \
- not portage.eapi_is_supported(metadata["EAPI"])
- if eapi_masked:
- have_eapi_mask = True
- # When masked by EAPI, metadata is mostly useless since
- # it doesn't contain essential things like SLOT.
- metadata = None
- comment, filename = None, None
- if not eapi_masked and \
- "package.mask" in mreasons:
- comment, filename = \
- portage.getmaskingreason(
- cpv, metadata=metadata,
- settings=pkgsettings,
- portdb=root_config.trees["porttree"].dbapi,
- return_location=True)
- missing_licenses = []
- if not eapi_masked and metadata is not None:
- try:
- missing_licenses = \
- pkgsettings._getMissingLicenses(
- cpv, metadata)
- except portage.exception.InvalidDependString:
- # This will have already been reported
- # above via mreasons.
- pass
-
- writemsg("- "+output_cpv+" (masked by: "+", ".join(mreasons)+")\n",
- noiselevel=-1)
-
- if comment and comment not in shown_comments:
- writemsg(filename + ":\n" + comment + "\n",
- noiselevel=-1)
- shown_comments.add(comment)
- portdb = root_config.trees["porttree"].dbapi
- for l in missing_licenses:
- if l in shown_licenses:
- continue
- l_path = portdb.findLicensePath(l)
- if l_path is None:
- continue
- msg = ("A copy of the '%s' license" + \
- " is located at '%s'.\n\n") % (l, l_path)
- writemsg(msg, noiselevel=-1)
- shown_licenses.add(l)
- return have_eapi_mask
+ shown_licenses = set()
+ shown_comments = set()
+ # Maybe there is both an ebuild and a binary. Only
+ # show one of them to avoid redundant appearance.
+ shown_cpvs = set()
+ have_eapi_mask = False
+ for (root_config, pkgsettings, cpv, repo, metadata, mreasons) in masked_packages:
+ output_cpv = cpv
+ if repo:
+ output_cpv += _repo_separator + repo
+ if output_cpv in shown_cpvs:
+ continue
+ shown_cpvs.add(output_cpv)
+ eapi_masked = metadata is not None and not portage.eapi_is_supported(
+ metadata["EAPI"]
+ )
+ if eapi_masked:
+ have_eapi_mask = True
+ # When masked by EAPI, metadata is mostly useless since
+ # it doesn't contain essential things like SLOT.
+ metadata = None
+ comment, filename = None, None
+ if not eapi_masked and "package.mask" in mreasons:
+ comment, filename = portage.getmaskingreason(
+ cpv,
+ metadata=metadata,
+ settings=pkgsettings,
+ portdb=root_config.trees["porttree"].dbapi,
+ return_location=True,
+ )
+ missing_licenses = []
+ if not eapi_masked and metadata is not None:
+ try:
+ missing_licenses = pkgsettings._getMissingLicenses(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ # This will have already been reported
+ # above via mreasons.
+ pass
+
+ writemsg(
+ "- " + output_cpv + " (masked by: " + ", ".join(mreasons) + ")\n",
+ noiselevel=-1,
+ )
+
+ if comment and comment not in shown_comments:
+ writemsg(filename + ":\n" + comment + "\n", noiselevel=-1)
+ shown_comments.add(comment)
+ portdb = root_config.trees["porttree"].dbapi
+ for l in missing_licenses:
+ if l in shown_licenses:
+ continue
+ l_path = portdb.findLicensePath(l)
+ if l_path is None:
+ continue
+ msg = ("A copy of the '%s' license" + " is located at '%s'.\n\n") % (
+ l,
+ l_path,
+ )
+ writemsg(msg, noiselevel=-1)
+ shown_licenses.add(l)
+ return have_eapi_mask
+
def show_mask_docs():
- writemsg("For more information, see the MASKED PACKAGES "
- "section in the emerge\n", noiselevel=-1)
- writemsg("man page or refer to the Gentoo Handbook.\n", noiselevel=-1)
+ writemsg(
+ "For more information, see the MASKED PACKAGES " "section in the emerge\n",
+ noiselevel=-1,
+ )
+ writemsg("man page or refer to the Gentoo Handbook.\n", noiselevel=-1)
+
def show_blocker_docs_link():
- writemsg("\nFor more information about " + bad("Blocked Packages") + ", please refer to the following\n", noiselevel=-1)
- writemsg("section of the Gentoo Linux x86 Handbook (architecture is irrelevant):\n\n", noiselevel=-1)
- writemsg("https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Blocked_packages\n\n", noiselevel=-1)
+ writemsg(
+ "\nFor more information about "
+ + bad("Blocked Packages")
+ + ", please refer to the following\n",
+ noiselevel=-1,
+ )
+ writemsg(
+ "section of the Gentoo Linux x86 Handbook (architecture is irrelevant):\n\n",
+ noiselevel=-1,
+ )
+ writemsg(
+ "https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Blocked_packages\n\n",
+ noiselevel=-1,
+ )
+
def get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
- return [mreason.message for \
- mreason in _get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=use)]
+ return [
+ mreason.message
+ for mreason in _get_masking_status(
+ pkg, pkgsettings, root_config, myrepo=myrepo, use=use
+ )
+ ]
+
def _get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
- mreasons = _getmaskingstatus(
- pkg, settings=pkgsettings,
- portdb=root_config.trees["porttree"].dbapi, myrepo=myrepo)
+ mreasons = _getmaskingstatus(
+ pkg,
+ settings=pkgsettings,
+ portdb=root_config.trees["porttree"].dbapi,
+ myrepo=myrepo,
+ )
- if not pkg.installed:
- if not pkgsettings._accept_chost(pkg.cpv, pkg._metadata):
- mreasons.append(_MaskReason("CHOST", "CHOST: %s" % \
- pkg._metadata["CHOST"]))
+ if not pkg.installed:
+ if not pkgsettings._accept_chost(pkg.cpv, pkg._metadata):
+ mreasons.append(_MaskReason("CHOST", "CHOST: %s" % pkg._metadata["CHOST"]))
+ eprefix = pkgsettings["EPREFIX"]
+ if len(eprefix.rstrip('/')) > 0 and pkg.built and not pkg.installed:
+ if not "EPREFIX" in pkg._metadata:
+ mreasons.append(_MaskReason("EPREFIX",
+ "missing EPREFIX"))
+ elif len(pkg._metadata["EPREFIX"].strip()) < len(eprefix):
+ mreasons.append(_MaskReason("EPREFIX",
+ "EPREFIX: '%s' too small" % \
+ pkg._metadata["EPREFIX"]))
+
- if pkg.invalid:
- for msgs in pkg.invalid.values():
- for msg in msgs:
- mreasons.append(
- _MaskReason("invalid", "invalid: %s" % (msg,)))
+ if pkg.invalid:
+ for msgs in pkg.invalid.values():
+ for msg in msgs:
+ mreasons.append(_MaskReason("invalid", "invalid: %s" % (msg,)))
- if not pkg._metadata["SLOT"]:
- mreasons.append(
- _MaskReason("invalid", "SLOT: undefined"))
+ if not pkg._metadata["SLOT"]:
+ mreasons.append(_MaskReason("invalid", "SLOT: undefined"))
- return mreasons
+ return mreasons
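
The prefix-local tail of _get_masking_status() above masks a built, not-yet-installed package whose recorded EPREFIX is missing or shorter than the offset it is about to be installed into. That predicate in isolation (illustrative names only, not the merged code):

    def eprefix_mask_reason(current_eprefix, pkg_metadata, built, installed):
        # Only built packages that are not installed yet are checked, and only
        # when the running instance actually has a non-empty EPREFIX offset.
        if not (current_eprefix.rstrip("/") and built and not installed):
            return None
        recorded = pkg_metadata.get("EPREFIX")
        if recorded is None:
            return "missing EPREFIX"
        if len(recorded.strip()) < len(current_eprefix):
            return "EPREFIX: '%s' too small" % recorded
        return None

    # A binpkg built for /tmp/px cannot satisfy an install into /home/me/gentoo:
    assert eprefix_mask_reason("/home/me/gentoo", {"EPREFIX": "/tmp/px"}, True, False)
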
diff --cc lib/_emerge/emergelog.py
index 3562f8eb3,14439da6e..a891f5b54
--- a/lib/_emerge/emergelog.py
+++ b/lib/_emerge/emergelog.py
@@@ -16,7 -15,8 +16,9 @@@ from portage.const import EPREFI
# dblink.merge() and we don't want that to trigger log writes
# unless it's really called via emerge.
_disable = True
+_emerge_log_dir = EPREFIX + '/var/log'
+ _emerge_log_dir = "/var/log"
+
def emergelog(xterm_titles, mystr, short_msg=None):
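
In the emergelog.py hunk the prefix branch keeps the log directory under the offset prefix instead of the bare /var/log, so with a hypothetical EPREFIX the effective location is:

    # Illustration only; the value of EPREFIX here is hypothetical.
    EPREFIX = "/home/me/gentoo"
    _emerge_log_dir = EPREFIX + "/var/log"
    assert _emerge_log_dir == "/home/me/gentoo/var/log"  # emerge.log lives below this
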
diff --cc lib/portage/__init__.py
index de2dbfc05,13af8da09..b6b00a8c2
--- a/lib/portage/__init__.py
+++ b/lib/portage/__init__.py
@@@ -9,146 -9,182 +9,198 @@@ VERSION = "HEAD
# ===========================================================================
try:
- import asyncio
- import sys
- import errno
- if not hasattr(errno, 'ESTALE'):
- # ESTALE may not be defined on some systems, such as interix.
- errno.ESTALE = -1
- import multiprocessing.util
- import re
- import types
- import platform
+ import asyncio
+ import sys
+ import errno
+ # PREFIX LOCAL
+ import multiprocessing
- # Temporarily delete these imports, to ensure that only the
- # wrapped versions are imported by portage internals.
- import os
- del os
- import shutil
- del shutil
+ if not hasattr(errno, "ESTALE"):
+ # ESTALE may not be defined on some systems, such as interix.
+ errno.ESTALE = -1
+ import multiprocessing.util
+ import re
+ import types
+ import platform
- except ImportError as e:
- sys.stderr.write("\n\n")
- sys.stderr.write("!!! Failed to complete python imports. These are internal modules for\n")
- sys.stderr.write("!!! python and failure here indicates that you have a problem with python\n")
- sys.stderr.write("!!! itself and thus portage is not able to continue processing.\n\n")
+ # Temporarily delete these imports, to ensure that only the
+ # wrapped versions are imported by portage internals.
+ import os
+
+ del os
+ import shutil
+
+ del shutil
- sys.stderr.write("!!! You might consider starting python with verbose flags to see what has\n")
- sys.stderr.write("!!! gone wrong. Here is the information we got for this exception:\n")
- sys.stderr.write(" "+str(e)+"\n\n")
- raise
+ except ImportError as e:
+ sys.stderr.write("\n\n")
+ sys.stderr.write(
+ "!!! Failed to complete python imports. These are internal modules for\n"
+ )
+ sys.stderr.write(
+ "!!! python and failure here indicates that you have a problem with python\n"
+ )
+ sys.stderr.write(
+ "!!! itself and thus portage is not able to continue processing.\n\n"
+ )
+
+ sys.stderr.write(
+ "!!! You might consider starting python with verbose flags to see what has\n"
+ )
+ sys.stderr.write(
+ "!!! gone wrong. Here is the information we got for this exception:\n"
+ )
+ sys.stderr.write(" " + str(e) + "\n\n")
+ raise
+# BEGIN PREFIX LOCAL
+# for bug #758230, on macOS the default was switched from fork to spawn,
+# the latter causing issues because all kinds of things can't be
+# pickled, so force fork mode for now
+try:
+ multiprocessing.set_start_method('fork')
+except RuntimeError:
+ pass
+# END PREFIX LOCAL
+
try:
- import portage.proxy.lazyimport
- import portage.proxy as proxy
- proxy.lazyimport.lazyimport(globals(),
- 'portage.cache.cache_errors:CacheError',
- 'portage.checksum',
- 'portage.checksum:perform_checksum,perform_md5,prelink_capable',
- 'portage.cvstree',
- 'portage.data',
- 'portage.data:lchown,ostype,portage_gid,portage_uid,secpass,' + \
- 'uid,userland,userpriv_groups,wheelgid',
- 'portage.dbapi',
- 'portage.dbapi.bintree:bindbapi,binarytree',
- 'portage.dbapi.cpv_expand:cpv_expand',
- 'portage.dbapi.dep_expand:dep_expand',
- 'portage.dbapi.porttree:close_portdbapi_caches,FetchlistDict,' + \
- 'portagetree,portdbapi',
- 'portage.dbapi.vartree:dblink,merge,unmerge,vardbapi,vartree',
- 'portage.dbapi.virtual:fakedbapi',
- 'portage.debug',
- 'portage.dep',
- 'portage.dep:best_match_to_list,dep_getcpv,dep_getkey,' + \
- 'flatten,get_operator,isjustname,isspecific,isvalidatom,' + \
- 'match_from_list,match_to_list',
- 'portage.dep.dep_check:dep_check,dep_eval,dep_wordreduce,dep_zapdeps',
- 'portage.eclass_cache',
- 'portage.elog',
- 'portage.exception',
- 'portage.getbinpkg',
- 'portage.locks',
- 'portage.locks:lockdir,lockfile,unlockdir,unlockfile',
- 'portage.mail',
- 'portage.manifest:Manifest',
- 'portage.output',
- 'portage.output:bold,colorize',
- 'portage.package.ebuild.doebuild:doebuild,' + \
- 'doebuild_environment,spawn,spawnebuild',
- 'portage.package.ebuild.config:autouse,best_from_dict,' + \
- 'check_config_instance,config',
- 'portage.package.ebuild.deprecated_profile_check:' + \
- 'deprecated_profile_check',
- 'portage.package.ebuild.digestcheck:digestcheck',
- 'portage.package.ebuild.digestgen:digestgen',
- 'portage.package.ebuild.fetch:fetch',
- 'portage.package.ebuild.getmaskingreason:getmaskingreason',
- 'portage.package.ebuild.getmaskingstatus:getmaskingstatus',
- 'portage.package.ebuild.prepare_build_dirs:prepare_build_dirs',
- 'portage.process',
- 'portage.process:atexit_register,run_exitfuncs',
- 'portage.update:dep_transform,fixdbentries,grab_updates,' + \
- 'parse_updates,update_config_files,update_dbentries,' + \
- 'update_dbentry',
- 'portage.util',
- 'portage.util:atomic_ofstream,apply_secpass_permissions,' + \
- 'apply_recursive_permissions,dump_traceback,getconfig,' + \
- 'grabdict,grabdict_package,grabfile,grabfile_package,' + \
- 'map_dictlist_vals,new_protect_filename,normalize_path,' + \
- 'pickle_read,pickle_write,stack_dictlist,stack_dicts,' + \
- 'stack_lists,unique_array,varexpand,writedict,writemsg,' + \
- 'writemsg_stdout,write_atomic',
- 'portage.util.digraph:digraph',
- 'portage.util.env_update:env_update',
- 'portage.util.ExtractKernelVersion:ExtractKernelVersion',
- 'portage.util.listdir:cacheddir,listdir',
- 'portage.util.movefile:movefile',
- 'portage.util.mtimedb:MtimeDB',
- 'portage.versions',
- 'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,' + \
- 'cpv_getkey@getCPFromCPV,endversion_keys,' + \
- 'suffix_value@endversion,pkgcmp,pkgsplit,vercmp,ververify',
- 'portage.xpak',
- 'subprocess',
- 'time',
- )
-
- from collections import OrderedDict
-
- import portage.const
- from portage.const import VDB_PATH, PRIVATE_PATH, CACHE_PATH, DEPCACHE_PATH, \
- USER_CONFIG_PATH, MODULES_FILE_PATH, CUSTOM_PROFILE_PATH, PORTAGE_BASE_PATH, \
- PORTAGE_BIN_PATH, PORTAGE_PYM_PATH, PROFILE_PATH, LOCALE_DATA_PATH, \
- EBUILD_SH_BINARY, SANDBOX_BINARY, BASH_BINARY, \
- MOVE_BINARY, PRELINK_BINARY, WORLD_FILE, MAKE_CONF_FILE, MAKE_DEFAULTS_FILE, \
- DEPRECATED_PROFILE_FILE, USER_VIRTUALS_FILE, EBUILD_SH_ENV_FILE, \
- INVALID_ENV_FILE, CUSTOM_MIRRORS_FILE, CONFIG_MEMORY_FILE,\
- INCREMENTALS, EAPI, MISC_SH_BINARY, REPO_NAME_LOC, REPO_NAME_FILE, \
- EPREFIX, rootuid
+ import portage.proxy.lazyimport
+ import portage.proxy as proxy
+
+ proxy.lazyimport.lazyimport(
+ globals(),
+ "portage.cache.cache_errors:CacheError",
+ "portage.checksum",
+ "portage.checksum:perform_checksum,perform_md5,prelink_capable",
+ "portage.cvstree",
+ "portage.data",
+ "portage.data:lchown,ostype,portage_gid,portage_uid,secpass,"
+ + "uid,userland,userpriv_groups,wheelgid",
+ "portage.dbapi",
+ "portage.dbapi.bintree:bindbapi,binarytree",
+ "portage.dbapi.cpv_expand:cpv_expand",
+ "portage.dbapi.dep_expand:dep_expand",
+ "portage.dbapi.porttree:close_portdbapi_caches,FetchlistDict,"
+ + "portagetree,portdbapi",
+ "portage.dbapi.vartree:dblink,merge,unmerge,vardbapi,vartree",
+ "portage.dbapi.virtual:fakedbapi",
+ "portage.debug",
+ "portage.dep",
+ "portage.dep:best_match_to_list,dep_getcpv,dep_getkey,"
+ + "flatten,get_operator,isjustname,isspecific,isvalidatom,"
+ + "match_from_list,match_to_list",
+ "portage.dep.dep_check:dep_check,dep_eval,dep_wordreduce,dep_zapdeps",
+ "portage.eclass_cache",
+ "portage.elog",
+ "portage.exception",
+ "portage.getbinpkg",
+ "portage.locks",
+ "portage.locks:lockdir,lockfile,unlockdir,unlockfile",
+ "portage.mail",
+ "portage.manifest:Manifest",
+ "portage.output",
+ "portage.output:bold,colorize",
+ "portage.package.ebuild.doebuild:doebuild,"
+ + "doebuild_environment,spawn,spawnebuild",
+ "portage.package.ebuild.config:autouse,best_from_dict,"
+ + "check_config_instance,config",
+ "portage.package.ebuild.deprecated_profile_check:" + "deprecated_profile_check",
+ "portage.package.ebuild.digestcheck:digestcheck",
+ "portage.package.ebuild.digestgen:digestgen",
+ "portage.package.ebuild.fetch:fetch",
+ "portage.package.ebuild.getmaskingreason:getmaskingreason",
+ "portage.package.ebuild.getmaskingstatus:getmaskingstatus",
+ "portage.package.ebuild.prepare_build_dirs:prepare_build_dirs",
+ "portage.process",
+ "portage.process:atexit_register,run_exitfuncs",
+ "portage.update:dep_transform,fixdbentries,grab_updates,"
+ + "parse_updates,update_config_files,update_dbentries,"
+ + "update_dbentry",
+ "portage.util",
+ "portage.util:atomic_ofstream,apply_secpass_permissions,"
+ + "apply_recursive_permissions,dump_traceback,getconfig,"
+ + "grabdict,grabdict_package,grabfile,grabfile_package,"
+ + "map_dictlist_vals,new_protect_filename,normalize_path,"
+ + "pickle_read,pickle_write,stack_dictlist,stack_dicts,"
+ + "stack_lists,unique_array,varexpand,writedict,writemsg,"
+ + "writemsg_stdout,write_atomic",
+ "portage.util.digraph:digraph",
+ "portage.util.env_update:env_update",
+ "portage.util.ExtractKernelVersion:ExtractKernelVersion",
+ "portage.util.listdir:cacheddir,listdir",
+ "portage.util.movefile:movefile",
+ "portage.util.mtimedb:MtimeDB",
+ "portage.versions",
+ "portage.versions:best,catpkgsplit,catsplit,cpv_getkey,"
+ + "cpv_getkey@getCPFromCPV,endversion_keys,"
+ + "suffix_value@endversion,pkgcmp,pkgsplit,vercmp,ververify",
+ "portage.xpak",
+ "subprocess",
+ "time",
+ )
+
+ from collections import OrderedDict
+
+ import portage.const
+ from portage.const import (
+ VDB_PATH,
+ PRIVATE_PATH,
+ CACHE_PATH,
+ DEPCACHE_PATH,
+ USER_CONFIG_PATH,
+ MODULES_FILE_PATH,
+ CUSTOM_PROFILE_PATH,
+ PORTAGE_BASE_PATH,
+ PORTAGE_BIN_PATH,
+ PORTAGE_PYM_PATH,
+ PROFILE_PATH,
+ LOCALE_DATA_PATH,
+ EBUILD_SH_BINARY,
+ SANDBOX_BINARY,
+ BASH_BINARY,
+ MOVE_BINARY,
+ PRELINK_BINARY,
+ WORLD_FILE,
+ MAKE_CONF_FILE,
+ MAKE_DEFAULTS_FILE,
+ DEPRECATED_PROFILE_FILE,
+ USER_VIRTUALS_FILE,
+ EBUILD_SH_ENV_FILE,
+ INVALID_ENV_FILE,
+ CUSTOM_MIRRORS_FILE,
+ CONFIG_MEMORY_FILE,
+ INCREMENTALS,
+ EAPI,
+ MISC_SH_BINARY,
+ REPO_NAME_LOC,
+ REPO_NAME_FILE,
++ # BEGIN PREFIX LOCAL
++ EPREFIX,
++ rootuid,
++ # END PREFIX LOCAL
+ )
except ImportError as e:
- sys.stderr.write("\n\n")
- sys.stderr.write("!!! Failed to complete portage imports. There are internal modules for\n")
- sys.stderr.write("!!! portage and failure here indicates that you have a problem with your\n")
- sys.stderr.write("!!! installation of portage. Please try a rescue portage located in the ebuild\n")
- sys.stderr.write("!!! repository under '/var/db/repos/gentoo/sys-apps/portage/files/' (default).\n")
- sys.stderr.write("!!! There is a README.RESCUE file that details the steps required to perform\n")
- sys.stderr.write("!!! a recovery of portage.\n")
- sys.stderr.write(" "+str(e)+"\n\n")
- raise
+ sys.stderr.write("\n\n")
+ sys.stderr.write(
+ "!!! Failed to complete portage imports. There are internal modules for\n"
+ )
+ sys.stderr.write(
+ "!!! portage and failure here indicates that you have a problem with your\n"
+ )
+ sys.stderr.write(
+ "!!! installation of portage. Please try a rescue portage located in the ebuild\n"
+ )
+ sys.stderr.write(
+ "!!! repository under '/var/db/repos/gentoo/sys-apps/portage/files/' (default).\n"
+ )
+ sys.stderr.write(
+ "!!! There is a README.RESCUE file that details the steps required to perform\n"
+ )
+ sys.stderr.write("!!! a recovery of portage.\n")
+ sys.stderr.write(" " + str(e) + "\n\n")
+ raise
# We use utf_8 encoding everywhere. Previously, we used
diff --cc lib/portage/const.py
index 892766c68,1edc5fcf1..f2c69a4bb
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -58,185 -53,176 +58,196 @@@ NEWS_LIB_PATH = "var/lib/gentoo
# these variables get EPREFIX prepended automagically when they are
# translated into their lowercase variants
- DEPCACHE_PATH = "/var/cache/edb/dep"
- GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
+ DEPCACHE_PATH = "/var/cache/edb/dep"
+ GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
# these variables are not used with target_root or config_root
+PORTAGE_BASE_PATH = PORTAGE_BASE
# NOTE: Use realpath(__file__) so that python module symlinks in site-packages
# are followed back to the real location of the whole portage installation.
+#PREFIX: below should work, but I'm not sure how it affects other places
# NOTE: Please keep PORTAGE_BASE_PATH in one line to help substitutions.
- #PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
- PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
- PORTAGE_PYM_PATH = os.path.realpath(os.path.join(__file__, '../..'))
- LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale" # FIXME: not used
- EBUILD_SH_BINARY = PORTAGE_BIN_PATH + "/ebuild.sh"
- MISC_SH_BINARY = PORTAGE_BIN_PATH + "/misc-functions.sh"
- SANDBOX_BINARY = EPREFIX + "/usr/bin/sandbox"
- FAKEROOT_BINARY = EPREFIX + "/usr/bin/fakeroot"
- BASH_BINARY = PORTAGE_BASH
- MOVE_BINARY = PORTAGE_MV
- PRELINK_BINARY = "/usr/sbin/prelink"
+ # fmt:off
-PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
++# PREFIX LOCAL (from const_autotools)
++#PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
+ # fmt:on
+ PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
+ PORTAGE_PYM_PATH = os.path.realpath(os.path.join(__file__, "../.."))
+ LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale" # FIXME: not used
+ EBUILD_SH_BINARY = PORTAGE_BIN_PATH + "/ebuild.sh"
+ MISC_SH_BINARY = PORTAGE_BIN_PATH + "/misc-functions.sh"
-SANDBOX_BINARY = "/usr/bin/sandbox"
-FAKEROOT_BINARY = "/usr/bin/fakeroot"
++# BEGIN PREFIX LOCAL
++SANDBOX_BINARY = EPREFIX + "/usr/bin/sandbox"
++FAKEROOT_BINARY = EPREFIX + "/usr/bin/fakeroot"
++# END PREFIX LOCAL
+ BASH_BINARY = "/bin/bash"
+ MOVE_BINARY = "/bin/mv"
+ PRELINK_BINARY = "/usr/sbin/prelink"
++# BEGIN PREFIX LOCAL
+MACOSSANDBOX_BINARY = "/usr/bin/sandbox-exec"
+MACOSSANDBOX_PROFILE = '''(version 1)
+(allow default)
+(deny file-write*)
+(allow file-write* file-write-setugid
+@@MACOSSANDBOX_PATHS@@)
+(allow file-write-data
+@@MACOSSANDBOX_PATHS_CONTENT_ONLY@@)'''
+
+PORTAGE_GROUPNAME = portagegroup
+PORTAGE_USERNAME = portageuser
++# END PREFIX LOCAL
- INVALID_ENV_FILE = "/etc/spork/is/not/valid/profile.env"
- MERGING_IDENTIFIER = "-MERGING-"
- REPO_NAME_FILE = "repo_name"
- REPO_NAME_LOC = "profiles" + "/" + REPO_NAME_FILE
+ INVALID_ENV_FILE = "/etc/spork/is/not/valid/profile.env"
+ MERGING_IDENTIFIER = "-MERGING-"
+ REPO_NAME_FILE = "repo_name"
+ REPO_NAME_LOC = "profiles" + "/" + REPO_NAME_FILE
- PORTAGE_PACKAGE_ATOM = "sys-apps/portage"
- LIBC_PACKAGE_ATOM = "virtual/libc"
- OS_HEADERS_PACKAGE_ATOM = "virtual/os-headers"
- CVS_PACKAGE_ATOM = "dev-vcs/cvs"
- GIT_PACKAGE_ATOM = "dev-vcs/git"
- HG_PACKAGE_ATOM = "dev-vcs/mercurial"
- RSYNC_PACKAGE_ATOM = "net-misc/rsync"
+ PORTAGE_PACKAGE_ATOM = "sys-apps/portage"
+ LIBC_PACKAGE_ATOM = "virtual/libc"
+ OS_HEADERS_PACKAGE_ATOM = "virtual/os-headers"
+ CVS_PACKAGE_ATOM = "dev-vcs/cvs"
+ GIT_PACKAGE_ATOM = "dev-vcs/git"
+ HG_PACKAGE_ATOM = "dev-vcs/mercurial"
+ RSYNC_PACKAGE_ATOM = "net-misc/rsync"
- INCREMENTALS = (
- "ACCEPT_KEYWORDS",
- "CONFIG_PROTECT",
- "CONFIG_PROTECT_MASK",
- "ENV_UNSET",
- "FEATURES",
- "IUSE_IMPLICIT",
- "PRELINK_PATH",
- "PRELINK_PATH_MASK",
- "PROFILE_ONLY_VARIABLES",
- "USE",
- "USE_EXPAND",
- "USE_EXPAND_HIDDEN",
- "USE_EXPAND_IMPLICIT",
- "USE_EXPAND_UNPREFIXED",
+ INCREMENTALS = (
+ "ACCEPT_KEYWORDS",
+ "CONFIG_PROTECT",
+ "CONFIG_PROTECT_MASK",
+ "ENV_UNSET",
+ "FEATURES",
+ "IUSE_IMPLICIT",
+ "PRELINK_PATH",
+ "PRELINK_PATH_MASK",
+ "PROFILE_ONLY_VARIABLES",
+ "USE",
+ "USE_EXPAND",
+ "USE_EXPAND_HIDDEN",
+ "USE_EXPAND_IMPLICIT",
+ "USE_EXPAND_UNPREFIXED",
)
- EBUILD_PHASES = (
- "pretend",
- "setup",
- "unpack",
- "prepare",
- "configure",
- "compile",
- "test",
- "install",
- "package",
- "instprep",
- "preinst",
- "postinst",
- "prerm",
- "postrm",
- "nofetch",
- "config",
- "info",
- "other",
+ EBUILD_PHASES = (
+ "pretend",
+ "setup",
+ "unpack",
+ "prepare",
+ "configure",
+ "compile",
+ "test",
+ "install",
+ "package",
+ "instprep",
+ "preinst",
+ "postinst",
+ "prerm",
+ "postrm",
+ "nofetch",
+ "config",
+ "info",
+ "other",
+ )
+ SUPPORTED_FEATURES = frozenset(
+ [
+ "assume-digests",
+ "binpkg-docompress",
+ "binpkg-dostrip",
+ "binpkg-logs",
+ "binpkg-multi-instance",
+ "buildpkg",
+ "buildpkg-live",
+ "buildsyspkg",
+ "candy",
+ "case-insensitive-fs",
+ "ccache",
+ "cgroup",
+ "chflags",
+ "clean-logs",
+ "collision-protect",
+ "compress-build-logs",
+ "compressdebug",
+ "compress-index",
+ "config-protect-if-modified",
+ "digest",
+ "distcc",
+ "distlocks",
+ "downgrade-backup",
+ "ebuild-locks",
+ "fail-clean",
+ "fakeroot",
+ "fixlafiles",
+ "force-mirror",
+ "force-prefix",
+ "getbinpkg",
+ "icecream",
+ "installsources",
+ "ipc-sandbox",
+ "keeptemp",
+ "keepwork",
+ "lmirror",
+ "merge-sync",
+ "metadata-transfer",
+ "mirror",
+ "mount-sandbox",
+ "multilib-strict",
+ "network-sandbox",
+ "network-sandbox-proxy",
+ "news",
+ "noauto",
+ "noclean",
+ "nodoc",
+ "noinfo",
+ "noman",
+ "nostrip",
+ "notitles",
+ "parallel-fetch",
+ "parallel-install",
+ "pid-sandbox",
+ "pkgdir-index-trusted",
+ "prelink-checksums",
+ "preserve-libs",
+ "protect-owned",
+ "python-trace",
+ "qa-unresolved-soname-deps",
+ "sandbox",
+ "selinux",
+ "sesandbox",
+ "sfperms",
+ "sign",
+ "skiprocheck",
+ "splitdebug",
+ "split-elog",
+ "split-log",
+ "strict",
+ "strict-keepdir",
+ "stricter",
+ "suidctl",
+ "test",
+ "test-fail-continue",
+ "unknown-features-filter",
+ "unknown-features-warn",
+ "unmerge-backup",
+ "unmerge-logs",
+ "unmerge-orphans",
+ "unprivileged",
+ "userfetch",
+ "userpriv",
+ "usersandbox",
+ "usersync",
+ "webrsync-gpg",
+ "xattr",
++ # PREFIX LOCAL
++ "stacked-prefix",
+ ]
)
- SUPPORTED_FEATURES = frozenset([
- "assume-digests",
- "binpkg-docompress",
- "binpkg-dostrip",
- "binpkg-logs",
- "binpkg-multi-instance",
- "buildpkg",
- "buildsyspkg",
- "candy",
- "case-insensitive-fs",
- "ccache",
- "cgroup",
- "chflags",
- "clean-logs",
- "collision-protect",
- "compress-build-logs",
- "compressdebug",
- "compress-index",
- "config-protect-if-modified",
- "digest",
- "distcc",
- "distlocks",
- "downgrade-backup",
- "ebuild-locks",
- "fail-clean",
- "fakeroot",
- "fixlafiles",
- "force-mirror",
- "force-prefix",
- "getbinpkg",
- "icecream",
- "installsources",
- "ipc-sandbox",
- "keeptemp",
- "keepwork",
- "lmirror",
- "merge-sync",
- "metadata-transfer",
- "mirror",
- "mount-sandbox",
- "multilib-strict",
- "network-sandbox",
- "network-sandbox-proxy",
- "news",
- "noauto",
- "noclean",
- "nodoc",
- "noinfo",
- "noman",
- "nostrip",
- "notitles",
- "parallel-fetch",
- "parallel-install",
- "pid-sandbox",
- "pkgdir-index-trusted",
- "prelink-checksums",
- "preserve-libs",
- "protect-owned",
- "python-trace",
- "qa-unresolved-soname-deps",
- "sandbox",
- "selinux",
- "sesandbox",
- "sfperms",
- "sign",
- "skiprocheck",
- "splitdebug",
- "split-elog",
- "split-log",
- "stacked-prefix", # PREFIX LOCAL
- "strict",
- "strict-keepdir",
- "stricter",
- "suidctl",
- "test",
- "test-fail-continue",
- "unknown-features-filter",
- "unknown-features-warn",
- "unmerge-backup",
- "unmerge-logs",
- "unmerge-orphans",
- "unprivileged",
- "userfetch",
- "userpriv",
- "usersandbox",
- "usersync",
- "webrsync-gpg",
- "xattr",
- ])
- EAPI = 8
+ EAPI = 8
- HASHING_BLOCKSIZE = 32768
+ HASHING_BLOCKSIZE = 32768
MANIFEST2_HASH_DEFAULTS = frozenset(["BLAKE2B", "SHA512"])
- MANIFEST2_HASH_DEFAULT = "BLAKE2B"
+ MANIFEST2_HASH_DEFAULT = "BLAKE2B"
- MANIFEST2_IDENTIFIERS = ("AUX", "MISC", "DIST", "EBUILD")
+ MANIFEST2_IDENTIFIERS = ("AUX", "MISC", "DIST", "EBUILD")
# The EPREFIX for the current install is hardcoded here, but access to this
# constant should be minimal, in favor of access via the EPREFIX setting of
diff --cc lib/portage/data.py
index d2d356f95,09a4dd079..0dac72845
--- a/lib/portage/data.py
+++ b/lib/portage/data.py
@@@ -6,26 -6,24 +6,28 @@@ import gr
import os
import platform
import pwd
+from portage.const import PORTAGE_GROUPNAME, PORTAGE_USERNAME, EPREFIX
import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.output:colorize',
- 'portage.util:writemsg',
- 'portage.util.path:first_existing',
- 'subprocess'
+
+ portage.proxy.lazyimport.lazyimport(
+ globals(),
+ "portage.output:colorize",
+ "portage.util:writemsg",
+ "portage.util.path:first_existing",
+ "subprocess",
)
from portage.localization import _
ostype = platform.system()
userland = None
-if ostype == "DragonFly" or ostype.endswith("BSD"):
+# Prefix always has USERLAND=GNU, even on
+# FreeBSD, OpenBSD and Darwin (thank the lord!).
+# Hopefully this entire USERLAND hack can go once
+if EPREFIX == "" and (ostype == "DragonFly" or ostype.endswith("BSD")):
- userland = "BSD"
+ userland = "BSD"
else:
- userland = "GNU"
+ userland = "GNU"
lchown = getattr(os, "lchown", None)
@@@ -119,221 -134,231 +138,236 @@@ except KeyError
# configurations with different constants could be used simultaneously.
_initialized_globals = set()
+
def _get_global(k):
- if k in _initialized_globals:
- return globals()[k]
-
- if k == 'secpass':
-
- unprivileged = False
- if hasattr(portage, 'settings'):
- unprivileged = "unprivileged" in portage.settings.features
- else:
- # The config class has equivalent code, but we also need to
- # do it here if _disable_legacy_globals() has been called.
- eroot_or_parent = first_existing(os.path.join(
- _target_root(), _target_eprefix().lstrip(os.sep)))
- try:
- eroot_st = os.stat(eroot_or_parent)
- except OSError:
- pass
- else:
- unprivileged = _unprivileged_mode(
- eroot_or_parent, eroot_st)
-
- v = 0
- if uid == 0:
- v = 2
- elif unprivileged:
- v = 2
- elif _get_global('portage_gid') in os.getgroups():
- v = 1
-
- elif k in ('portage_gid', 'portage_uid'):
-
- #Discover the uid and gid of the portage user/group
- keyerror = False
- try:
- username = str(_get_global('_portage_username'))
- portage_uid = pwd.getpwnam(username).pw_uid
- except KeyError:
- # PREFIX LOCAL: some sysadmins are insane, bug #344307
- if username.isdigit():
- portage_uid = int(username)
- else:
- keyerror = True
- portage_uid = 0
- # END PREFIX LOCAL
-
- try:
- grpname = str(_get_global('_portage_grpname'))
- portage_gid = grp.getgrnam(grpname).gr_gid
- except KeyError:
- # PREFIX LOCAL: some sysadmins are insane, bug #344307
- if grpname.isdigit():
- portage_gid = int(grpname)
- else:
- keyerror = True
- portage_gid = 0
- # END PREFIX LOCAL
-
- # Suppress this error message if both PORTAGE_GRPNAME and
- # PORTAGE_USERNAME are set to "root", for things like
- # Android (see bug #454060).
- if keyerror and not (_get_global('_portage_username') == "root" and
- _get_global('_portage_grpname') == "root"):
- # PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
- writemsg(colorize("BAD",
- _("portage: '%s' user or '%s' group missing." % (_get_global('_portage_username'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
- writemsg(colorize("BAD",
- _(" In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
- writemsg(colorize("BAD",
- _(" since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
- writemsg(colorize("BAD",
- _(" Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
- # END PREFIX LOCAL
- portage_group_warning()
-
- globals()['portage_gid'] = portage_gid
- _initialized_globals.add('portage_gid')
- globals()['portage_uid'] = portage_uid
- _initialized_globals.add('portage_uid')
-
- if k == 'portage_gid':
- return portage_gid
- if k == 'portage_uid':
- return portage_uid
- raise AssertionError('unknown name: %s' % k)
-
- elif k == 'userpriv_groups':
- v = [_get_global('portage_gid')]
- if secpass >= 2:
- # Get a list of group IDs for the portage user. Do not use
- # grp.getgrall() since it is known to trigger spurious
- # SIGPIPE problems with nss_ldap.
- cmd = ["id", "-G", _portage_username]
-
- encoding = portage._encodings['content']
- cmd = [portage._unicode_encode(x,
- encoding=encoding, errors='strict') for x in cmd]
- proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT)
- myoutput = proc.communicate()[0]
- status = proc.wait()
- if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
- for x in portage._unicode_decode(myoutput,
- encoding=encoding, errors='strict').split():
- try:
- v.append(int(x))
- except ValueError:
- pass
- v = sorted(set(v))
-
- # Avoid instantiating portage.settings when the desired
- # variable is set in os.environ.
- elif k in ('_portage_grpname', '_portage_username'):
- v = None
- if k == '_portage_grpname':
- env_key = 'PORTAGE_GRPNAME'
- else:
- env_key = 'PORTAGE_USERNAME'
-
- if env_key in os.environ:
- v = os.environ[env_key]
- elif hasattr(portage, 'settings'):
- v = portage.settings.get(env_key)
- else:
- # The config class has equivalent code, but we also need to
- # do it here if _disable_legacy_globals() has been called.
- eroot_or_parent = first_existing(os.path.join(
- _target_root(), _target_eprefix().lstrip(os.sep)))
- try:
- eroot_st = os.stat(eroot_or_parent)
- except OSError:
- pass
- else:
- if _unprivileged_mode(eroot_or_parent, eroot_st):
- if k == '_portage_grpname':
- try:
- grp_struct = grp.getgrgid(eroot_st.st_gid)
- except KeyError:
- pass
- else:
- v = grp_struct.gr_name
- else:
- try:
- pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- except KeyError:
- pass
- else:
- v = pwd_struct.pw_name
-
- if v is None:
- # PREFIX LOCAL: use var iso hardwired 'portage'
- if k == '_portage_grpname':
- v = PORTAGE_GROUPNAME
- else:
- v = PORTAGE_USERNAME
- # END PREFIX LOCAL
- else:
- raise AssertionError('unknown name: %s' % k)
-
- globals()[k] = v
- _initialized_globals.add(k)
- return v
+ if k in _initialized_globals:
+ return globals()[k]
+
+ if k == "secpass":
+
+ unprivileged = False
+ if hasattr(portage, "settings"):
+ unprivileged = "unprivileged" in portage.settings.features
+ else:
+ # The config class has equivalent code, but we also need to
+ # do it here if _disable_legacy_globals() has been called.
+ eroot_or_parent = first_existing(
+ os.path.join(_target_root(), _target_eprefix().lstrip(os.sep))
+ )
+ try:
+ eroot_st = os.stat(eroot_or_parent)
+ except OSError:
+ pass
+ else:
+ unprivileged = _unprivileged_mode(eroot_or_parent, eroot_st)
+
+ v = 0
+ if uid == 0:
+ v = 2
+ elif unprivileged:
+ v = 2
+ elif _get_global("portage_gid") in os.getgroups():
+ v = 1
+
+ elif k in ("portage_gid", "portage_uid"):
+
+ # Discover the uid and gid of the portage user/group
+ keyerror = False
+ try:
+ portage_uid = pwd.getpwnam(_get_global("_portage_username")).pw_uid
+ except KeyError:
- keyerror = True
- portage_uid = 0
++ # PREFIX LOCAL: some sysadmins are insane, bug #344307
++            username = str(_get_global('_portage_username'))
++            if username.isdigit():
++ portage_uid = int(username)
++ else:
++ keyerror = True
++ portage_uid = 0
++ # END PREFIX LOCAL
+
+ try:
+ portage_gid = grp.getgrnam(_get_global("_portage_grpname")).gr_gid
+ except KeyError:
- keyerror = True
- portage_gid = 0
++ # PREFIX LOCAL: some sysadmins are insane, bug #344307
++            grpname = str(_get_global('_portage_grpname'))
++            if grpname.isdigit():
++ portage_gid = int(grpname)
++ else:
++ keyerror = True
++ portage_gid = 0
++ # END PREFIX LOCAL
+
+ # Suppress this error message if both PORTAGE_GRPNAME and
+ # PORTAGE_USERNAME are set to "root", for things like
+ # Android (see bug #454060).
- if keyerror and not (
- _get_global("_portage_username") == "root"
- and _get_global("_portage_grpname") == "root"
- ):
- writemsg(
- colorize("BAD", _("portage: 'portage' user or group missing.")) + "\n",
- noiselevel=-1,
- )
- writemsg(
- _(
- " For the defaults, line 1 goes into passwd, "
- "and 2 into group.\n"
- ),
- noiselevel=-1,
- )
- writemsg(
- colorize(
- "GOOD",
- " portage:x:250:250:portage:/var/tmp/portage:/bin/false",
- )
- + "\n",
- noiselevel=-1,
- )
- writemsg(
- colorize("GOOD", " portage::250:portage") + "\n", noiselevel=-1
- )
++ if keyerror and not (_get_global('_portage_username') == "root" and
++ _get_global('_portage_grpname') == "root"):
++ # PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
++ writemsg(colorize("BAD",
++ _("portage: '%s' user or '%s' group missing." % (_get_global('_portage_username'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
++ writemsg(colorize("BAD",
++ _(" In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
++ writemsg(colorize("BAD",
++ _(" since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
++ writemsg(colorize("BAD",
++ _(" Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
++ # END PREFIX LOCAL
+ portage_group_warning()
+
+ globals()["portage_gid"] = portage_gid
+ _initialized_globals.add("portage_gid")
+ globals()["portage_uid"] = portage_uid
+ _initialized_globals.add("portage_uid")
+
+ if k == "portage_gid":
+ return portage_gid
+ if k == "portage_uid":
+ return portage_uid
+ raise AssertionError("unknown name: %s" % k)
+
+ elif k == "userpriv_groups":
+ v = [_get_global("portage_gid")]
+ if secpass >= 2:
+ # Get a list of group IDs for the portage user. Do not use
+ # grp.getgrall() since it is known to trigger spurious
+ # SIGPIPE problems with nss_ldap.
+ cmd = ["id", "-G", _portage_username]
+
+ encoding = portage._encodings["content"]
+ cmd = [
+ portage._unicode_encode(x, encoding=encoding, errors="strict")
+ for x in cmd
+ ]
+ proc = subprocess.Popen(
+ cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+ )
+ myoutput = proc.communicate()[0]
+ status = proc.wait()
+ if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
+ for x in portage._unicode_decode(
+ myoutput, encoding=encoding, errors="strict"
+ ).split():
+ try:
+ v.append(int(x))
+ except ValueError:
+ pass
+ v = sorted(set(v))
+
+ # Avoid instantiating portage.settings when the desired
+ # variable is set in os.environ.
+ elif k in ("_portage_grpname", "_portage_username"):
+ v = None
+ if k == "_portage_grpname":
+ env_key = "PORTAGE_GRPNAME"
+ else:
+ env_key = "PORTAGE_USERNAME"
+
+ if env_key in os.environ:
+ v = os.environ[env_key]
+ elif hasattr(portage, "settings"):
+ v = portage.settings.get(env_key)
+ else:
+ # The config class has equivalent code, but we also need to
+ # do it here if _disable_legacy_globals() has been called.
+ eroot_or_parent = first_existing(
+ os.path.join(_target_root(), _target_eprefix().lstrip(os.sep))
+ )
+ try:
+ eroot_st = os.stat(eroot_or_parent)
+ except OSError:
+ pass
+ else:
+ if _unprivileged_mode(eroot_or_parent, eroot_st):
+ if k == "_portage_grpname":
+ try:
+ grp_struct = grp.getgrgid(eroot_st.st_gid)
+ except KeyError:
+ pass
+ else:
+ v = grp_struct.gr_name
+ else:
+ try:
+ pwd_struct = pwd.getpwuid(eroot_st.st_uid)
+ except KeyError:
+ pass
+ else:
+ v = pwd_struct.pw_name
+
+ if v is None:
- v = "portage"
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ if k == '_portage_grpname':
++ v = PORTAGE_GROUPNAME
++ else:
++ v = PORTAGE_USERNAME
++ # END PREFIX LOCAL
+ else:
+ raise AssertionError("unknown name: %s" % k)
+
+ globals()[k] = v
+ _initialized_globals.add(k)
+ return v
+
class _GlobalProxy(portage.proxy.objectproxy.ObjectProxy):
- __slots__ = ('_name',)
+ __slots__ = ("_name",)
- def __init__(self, name):
- portage.proxy.objectproxy.ObjectProxy.__init__(self)
- object.__setattr__(self, '_name', name)
+ def __init__(self, name):
+ portage.proxy.objectproxy.ObjectProxy.__init__(self)
+ object.__setattr__(self, "_name", name)
- def _get_target(self):
- return _get_global(object.__getattribute__(self, '_name'))
+ def _get_target(self):
+ return _get_global(object.__getattribute__(self, "_name"))
- for k in ('portage_gid', 'portage_uid', 'secpass', 'userpriv_groups',
- '_portage_grpname', '_portage_username'):
- globals()[k] = _GlobalProxy(k)
+
+ for k in (
+ "portage_gid",
+ "portage_uid",
+ "secpass",
+ "userpriv_groups",
+ "_portage_grpname",
+ "_portage_username",
+ ):
+ globals()[k] = _GlobalProxy(k)
del k
+
def _init(settings):
- """
- Use config variables like PORTAGE_GRPNAME and PORTAGE_USERNAME to
- initialize global variables. This allows settings to come from make.conf
- instead of requiring them to be set in the calling environment.
- """
- if '_portage_grpname' not in _initialized_globals and \
- '_portage_username' not in _initialized_globals:
-
- # Prevents "TypeError: expected string" errors
- # from grp.getgrnam() with PyPy
- native_string = platform.python_implementation() == 'PyPy'
-
- # PREFIX LOCAL: use var iso hardwired 'portage'
- v = settings.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
- # END PREFIX LOCAL
- if native_string:
- v = portage._native_string(v)
- globals()['_portage_grpname'] = v
- _initialized_globals.add('_portage_grpname')
-
- # PREFIX LOCAL: use var iso hardwired 'portage'
- v = settings.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
- # END PREFIX LOCAL
- if native_string:
- v = portage._native_string(v)
- globals()['_portage_username'] = v
- _initialized_globals.add('_portage_username')
-
- if 'secpass' not in _initialized_globals:
- v = 0
- if uid == 0:
- v = 2
- elif "unprivileged" in settings.features:
- v = 2
- elif portage_gid in os.getgroups():
- v = 1
- globals()['secpass'] = v
- _initialized_globals.add('secpass')
+ """
+ Use config variables like PORTAGE_GRPNAME and PORTAGE_USERNAME to
+ initialize global variables. This allows settings to come from make.conf
+ instead of requiring them to be set in the calling environment.
+ """
+ if (
+ "_portage_grpname" not in _initialized_globals
+ and "_portage_username" not in _initialized_globals
+ ):
+
+ # Prevents "TypeError: expected string" errors
+ # from grp.getgrnam() with PyPy
+ native_string = platform.python_implementation() == "PyPy"
+
- v = settings.get("PORTAGE_GRPNAME", "portage")
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ v = settings.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
++ # END PREFIX LOCAL
+ if native_string:
+ v = portage._native_string(v)
- globals()["_portage_grpname"] = v
- _initialized_globals.add("_portage_grpname")
++ globals()['_portage_grpname'] = v
++ _initialized_globals.add('_portage_grpname')
+
- v = settings.get("PORTAGE_USERNAME", "portage")
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ v = settings.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
++ # END PREFIX LOCAL
+ if native_string:
+ v = portage._native_string(v)
- globals()["_portage_username"] = v
- _initialized_globals.add("_portage_username")
++ globals()['_portage_username'] = v
++ _initialized_globals.add('_portage_username')
+
+ if "secpass" not in _initialized_globals:
+ v = 0
+ if uid == 0:
+ v = 2
+ elif "unprivileged" in settings.features:
+ v = 2
+ elif portage_gid in os.getgroups():
+ v = 1
+ globals()["secpass"] = v
+ _initialized_globals.add("secpass")
diff --cc lib/portage/dbapi/bintree.py
index 7e81b7879,9dbf9ee8b..8b008a93d
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -57,1806 -62,1994 +62,2002 @@@ from urllib.parse import urlpars
class UseCachedCopyOfRemoteIndex(Exception):
- # If the local copy is recent enough
- # then fetching the remote index can be skipped.
- pass
+ # If the local copy is recent enough
+ # then fetching the remote index can be skipped.
+ pass
+
class bindbapi(fakedbapi):
- _known_keys = frozenset(list(fakedbapi._known_keys) + \
- ["CHOST", "repository", "USE"])
- _pkg_str_aux_keys = fakedbapi._pkg_str_aux_keys + ("BUILD_ID", "BUILD_TIME", "_mtime_")
-
- def __init__(self, mybintree=None, **kwargs):
- # Always enable multi_instance mode for bindbapi indexing. This
- # does not affect the local PKGDIR file layout, since that is
- # controlled independently by FEATURES=binpkg-multi-instance.
- # The multi_instance mode is useful for the following reasons:
- # * binary packages with the same cpv from multiple binhosts
- # can be considered simultaneously
- # * if binpkg-multi-instance is disabled, it's still possible
- # to properly access a PKGDIR which has binpkg-multi-instance
- # layout (or mixed layout)
- fakedbapi.__init__(self, exclusive_slots=False,
- multi_instance=True, **kwargs)
- self.bintree = mybintree
- self.move_ent = mybintree.move_ent
- # Selectively cache metadata in order to optimize dep matching.
- self._aux_cache_keys = set(
- ["BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
- "DEPEND", "EAPI", "IDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "MD5", "PDEPEND", "PROPERTIES",
- "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE", "_mtime_", "EPREFIX"
- ])
- self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
- self._aux_cache = {}
-
- @property
- def writable(self):
- """
- Check if PKGDIR is writable, or permissions are sufficient
- to create it if it does not exist yet.
- @rtype: bool
- @return: True if PKGDIR is writable or can be created,
- False otherwise
- """
- return os.access(first_existing(self.bintree.pkgdir), os.W_OK)
-
- def match(self, *pargs, **kwargs):
- if self.bintree and not self.bintree.populated:
- self.bintree.populate()
- return fakedbapi.match(self, *pargs, **kwargs)
-
- def cpv_exists(self, cpv, myrepo=None):
- if self.bintree and not self.bintree.populated:
- self.bintree.populate()
- return fakedbapi.cpv_exists(self, cpv)
-
- def cpv_inject(self, cpv, **kwargs):
- if not self.bintree.populated:
- self.bintree.populate()
- fakedbapi.cpv_inject(self, cpv,
- metadata=cpv._metadata, **kwargs)
-
- def cpv_remove(self, cpv):
- if not self.bintree.populated:
- self.bintree.populate()
- fakedbapi.cpv_remove(self, cpv)
-
- def aux_get(self, mycpv, wants, myrepo=None):
- if self.bintree and not self.bintree.populated:
- self.bintree.populate()
- # Support plain string for backward compatibility with API
- # consumers (including portageq, which passes in a cpv from
- # a command-line argument).
- instance_key = self._instance_key(mycpv,
- support_string=True)
- if not self._known_keys.intersection(
- wants).difference(self._aux_cache_keys):
- aux_cache = self.cpvdict[instance_key]
- if aux_cache is not None:
- return [aux_cache.get(x, "") for x in wants]
- mysplit = mycpv.split("/")
- mylist = []
- add_pkg = self.bintree._additional_pkgs.get(instance_key)
- if add_pkg is not None:
- return add_pkg._db.aux_get(add_pkg, wants)
- if not self.bintree._remotepkgs or \
- not self.bintree.isremote(mycpv):
- try:
- tbz2_path = self.bintree._pkg_paths[instance_key]
- except KeyError:
- raise KeyError(mycpv)
- tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
- try:
- st = os.lstat(tbz2_path)
- except OSError:
- raise KeyError(mycpv)
- metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
- def getitem(k):
- if k == "_mtime_":
- return str(st[stat.ST_MTIME])
- if k == "SIZE":
- return str(st.st_size)
- v = metadata_bytes.get(_unicode_encode(k,
- encoding=_encodings['repo.content'],
- errors='backslashreplace'))
- if v is not None:
- v = _unicode_decode(v,
- encoding=_encodings['repo.content'], errors='replace')
- return v
- else:
- getitem = self.cpvdict[instance_key].get
- mydata = {}
- mykeys = wants
- for x in mykeys:
- myval = getitem(x)
- # myval is None if the key doesn't exist
- # or the tbz2 is corrupt.
- if myval:
- mydata[x] = " ".join(myval.split())
-
- if not mydata.setdefault('EAPI', '0'):
- mydata['EAPI'] = '0'
-
- return [mydata.get(x, '') for x in wants]
-
- def aux_update(self, cpv, values):
- if not self.bintree.populated:
- self.bintree.populate()
- build_id = None
- try:
- build_id = cpv.build_id
- except AttributeError:
- if self.bintree._multi_instance:
- # The cpv.build_id attribute is required if we are in
- # multi-instance mode, since otherwise we won't know
- # which instance to update.
- raise
- else:
- cpv = self._instance_key(cpv, support_string=True)[0]
- build_id = cpv.build_id
-
- tbz2path = self.bintree.getname(cpv)
- if not os.path.exists(tbz2path):
- raise KeyError(cpv)
- mytbz2 = portage.xpak.tbz2(tbz2path)
- mydata = mytbz2.get_data()
-
- for k, v in values.items():
- k = _unicode_encode(k,
- encoding=_encodings['repo.content'], errors='backslashreplace')
- v = _unicode_encode(v,
- encoding=_encodings['repo.content'], errors='backslashreplace')
- mydata[k] = v
-
- for k, v in list(mydata.items()):
- if not v:
- del mydata[k]
- mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
- # inject will clear stale caches via cpv_inject.
- self.bintree.inject(cpv)
-
-
- @coroutine
- def unpack_metadata(self, pkg, dest_dir, loop=None):
- """
- Unpack package metadata to a directory. This method is a coroutine.
-
- @param pkg: package to unpack
- @type pkg: _pkg_str or portage.config
- @param dest_dir: destination directory
- @type dest_dir: str
- """
- loop = asyncio._wrap_loop(loop)
- if isinstance(pkg, _pkg_str):
- cpv = pkg
- else:
- cpv = pkg.mycpv
- key = self._instance_key(cpv)
- add_pkg = self.bintree._additional_pkgs.get(key)
- if add_pkg is not None:
- yield add_pkg._db.unpack_metadata(pkg, dest_dir, loop=loop)
- else:
- tbz2_file = self.bintree.getname(cpv)
- yield loop.run_in_executor(ForkExecutor(loop=loop),
- portage.xpak.tbz2(tbz2_file).unpackinfo, dest_dir)
-
- @coroutine
- def unpack_contents(self, pkg, dest_dir, loop=None):
- """
- Unpack package contents to a directory. This method is a coroutine.
-
- @param pkg: package to unpack
- @type pkg: _pkg_str or portage.config
- @param dest_dir: destination directory
- @type dest_dir: str
- """
- loop = asyncio._wrap_loop(loop)
- if isinstance(pkg, _pkg_str):
- settings = self.settings
- cpv = pkg
- else:
- settings = pkg
- cpv = settings.mycpv
-
- pkg_path = self.bintree.getname(cpv)
- if pkg_path is not None:
-
- extractor = BinpkgExtractorAsync(
- background=settings.get('PORTAGE_BACKGROUND') == '1',
- env=settings.environ(),
- features=settings.features,
- image_dir=dest_dir,
- pkg=cpv, pkg_path=pkg_path,
- logfile=settings.get('PORTAGE_LOG_FILE'),
- scheduler=SchedulerInterface(loop))
-
- extractor.start()
- yield extractor.async_wait()
- if extractor.returncode != os.EX_OK:
- raise PortageException("Error Extracting '{}'".format(pkg_path))
-
- else:
- instance_key = self._instance_key(cpv)
- add_pkg = self.bintree._additional_pkgs.get(instance_key)
- if add_pkg is None:
- raise portage.exception.PackageNotFound(cpv)
- yield add_pkg._db.unpack_contents(pkg, dest_dir, loop=loop)
-
- def cp_list(self, *pargs, **kwargs):
- if not self.bintree.populated:
- self.bintree.populate()
- return fakedbapi.cp_list(self, *pargs, **kwargs)
-
- def cp_all(self, sort=False):
- if not self.bintree.populated:
- self.bintree.populate()
- return fakedbapi.cp_all(self, sort=sort)
-
- def cpv_all(self):
- if not self.bintree.populated:
- self.bintree.populate()
- return fakedbapi.cpv_all(self)
-
- def getfetchsizes(self, pkg):
- """
- This will raise MissingSignature if SIZE signature is not available,
- or InvalidSignature if SIZE signature is invalid.
- """
-
- if not self.bintree.populated:
- self.bintree.populate()
-
- pkg = getattr(pkg, 'cpv', pkg)
-
- filesdict = {}
- if not self.bintree.isremote(pkg):
- pass
- else:
- metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
- try:
- size = int(metadata["SIZE"])
- except KeyError:
- raise portage.exception.MissingSignature("SIZE")
- except ValueError:
- raise portage.exception.InvalidSignature(
- "SIZE: %s" % metadata["SIZE"])
- else:
- filesdict[os.path.basename(self.bintree.getname(pkg))] = size
-
- return filesdict
+ _known_keys = frozenset(
+ list(fakedbapi._known_keys) + ["CHOST", "repository", "USE"]
+ )
+ _pkg_str_aux_keys = fakedbapi._pkg_str_aux_keys + (
+ "BUILD_ID",
+ "BUILD_TIME",
+ "_mtime_",
+ )
+
+ def __init__(self, mybintree=None, **kwargs):
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False, multi_instance=True, **kwargs)
+ self.bintree = mybintree
+ self.move_ent = mybintree.move_ent
+ # Selectively cache metadata in order to optimize dep matching.
+ self._aux_cache_keys = set(
+ [
+ "BDEPEND",
+ "BUILD_ID",
+ "BUILD_TIME",
+ "CHOST",
+ "DEFINED_PHASES",
+ "DEPEND",
+ "EAPI",
+ "IDEPEND",
+ "IUSE",
+ "KEYWORDS",
+ "LICENSE",
+ "MD5",
+ "PDEPEND",
+ "PROPERTIES",
+ "PROVIDES",
+ "RDEPEND",
+ "repository",
+ "REQUIRES",
+ "RESTRICT",
+ "SIZE",
+ "SLOT",
+ "USE",
+ "_mtime_",
++ # PREFIX LOCAL
++ "EPREFIX",
+ ]
+ )
+ self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
+ self._aux_cache = {}
+
+ @property
+ def writable(self):
+ """
+ Check if PKGDIR is writable, or permissions are sufficient
+ to create it if it does not exist yet.
+ @rtype: bool
+ @return: True if PKGDIR is writable or can be created,
+ False otherwise
+ """
+ return os.access(first_existing(self.bintree.pkgdir), os.W_OK)
+
+ def match(self, *pargs, **kwargs):
+ if self.bintree and not self.bintree.populated:
+ self.bintree.populate()
+ return fakedbapi.match(self, *pargs, **kwargs)
+
+ def cpv_exists(self, cpv, myrepo=None):
+ if self.bintree and not self.bintree.populated:
+ self.bintree.populate()
+ return fakedbapi.cpv_exists(self, cpv)
+
+ def cpv_inject(self, cpv, **kwargs):
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_inject(self, cpv, metadata=cpv._metadata, **kwargs)
+
+ def cpv_remove(self, cpv):
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_remove(self, cpv)
+
+ def aux_get(self, mycpv, wants, myrepo=None):
+ if self.bintree and not self.bintree.populated:
+ self.bintree.populate()
+ # Support plain string for backward compatibility with API
+ # consumers (including portageq, which passes in a cpv from
+ # a command-line argument).
+ instance_key = self._instance_key(mycpv, support_string=True)
+ if not self._known_keys.intersection(wants).difference(self._aux_cache_keys):
+ aux_cache = self.cpvdict[instance_key]
+ if aux_cache is not None:
+ return [aux_cache.get(x, "") for x in wants]
+ mysplit = mycpv.split("/")
+ mylist = []
+ add_pkg = self.bintree._additional_pkgs.get(instance_key)
+ if add_pkg is not None:
+ return add_pkg._db.aux_get(add_pkg, wants)
+ if not self.bintree._remotepkgs or not self.bintree.isremote(mycpv):
+ try:
+ tbz2_path = self.bintree._pkg_paths[instance_key]
+ except KeyError:
+ raise KeyError(mycpv)
+ tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+ try:
+ st = os.lstat(tbz2_path)
+ except OSError:
+ raise KeyError(mycpv)
+ metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
+
+ def getitem(k):
+ if k == "_mtime_":
+ return str(st[stat.ST_MTIME])
+ if k == "SIZE":
+ return str(st.st_size)
+ v = metadata_bytes.get(
+ _unicode_encode(
+ k,
+ encoding=_encodings["repo.content"],
+ errors="backslashreplace",
+ )
+ )
+ if v is not None:
+ v = _unicode_decode(
+ v, encoding=_encodings["repo.content"], errors="replace"
+ )
+ return v
+
+ else:
+ getitem = self.cpvdict[instance_key].get
+ mydata = {}
+ mykeys = wants
+ for x in mykeys:
+ myval = getitem(x)
+ # myval is None if the key doesn't exist
+ # or the tbz2 is corrupt.
+ if myval:
+ mydata[x] = " ".join(myval.split())
+
+ if not mydata.setdefault("EAPI", "0"):
+ mydata["EAPI"] = "0"
+
+ return [mydata.get(x, "") for x in wants]
+
+ def aux_update(self, cpv, values):
+ if not self.bintree.populated:
+ self.bintree.populate()
+ build_id = None
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ if self.bintree._multi_instance:
+ # The cpv.build_id attribute is required if we are in
+ # multi-instance mode, since otherwise we won't know
+ # which instance to update.
+ raise
+ else:
+ cpv = self._instance_key(cpv, support_string=True)[0]
+ build_id = cpv.build_id
+
+ tbz2path = self.bintree.getname(cpv)
+ if not os.path.exists(tbz2path):
+ raise KeyError(cpv)
+ mytbz2 = portage.xpak.tbz2(tbz2path)
+ mydata = mytbz2.get_data()
+
+ for k, v in values.items():
+ k = _unicode_encode(
+ k, encoding=_encodings["repo.content"], errors="backslashreplace"
+ )
+ v = _unicode_encode(
+ v, encoding=_encodings["repo.content"], errors="backslashreplace"
+ )
+ mydata[k] = v
+
+ for k, v in list(mydata.items()):
+ if not v:
+ del mydata[k]
+ mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
+ # inject will clear stale caches via cpv_inject.
+ self.bintree.inject(cpv)
+
+ async def unpack_metadata(self, pkg, dest_dir, loop=None):
+ """
+ Unpack package metadata to a directory. This method is a coroutine.
+
+ @param pkg: package to unpack
+ @type pkg: _pkg_str or portage.config
+ @param dest_dir: destination directory
+ @type dest_dir: str
+ """
+ loop = asyncio._wrap_loop(loop)
+ if isinstance(pkg, _pkg_str):
+ cpv = pkg
+ else:
+ cpv = pkg.mycpv
+ key = self._instance_key(cpv)
+ add_pkg = self.bintree._additional_pkgs.get(key)
+ if add_pkg is not None:
+ await add_pkg._db.unpack_metadata(pkg, dest_dir, loop=loop)
+ else:
+ tbz2_file = self.bintree.getname(cpv)
+ await loop.run_in_executor(
+ ForkExecutor(loop=loop),
+ portage.xpak.tbz2(tbz2_file).unpackinfo,
+ dest_dir,
+ )
+
+ async def unpack_contents(self, pkg, dest_dir, loop=None):
+ """
+ Unpack package contents to a directory. This method is a coroutine.
+
+ @param pkg: package to unpack
+ @type pkg: _pkg_str or portage.config
+ @param dest_dir: destination directory
+ @type dest_dir: str
+ """
+ loop = asyncio._wrap_loop(loop)
+ if isinstance(pkg, _pkg_str):
+ settings = self.settings
+ cpv = pkg
+ else:
+ settings = pkg
+ cpv = settings.mycpv
+
+ pkg_path = self.bintree.getname(cpv)
+ if pkg_path is not None:
+
+ extractor = BinpkgExtractorAsync(
+ background=settings.get("PORTAGE_BACKGROUND") == "1",
+ env=settings.environ(),
+ features=settings.features,
+ image_dir=dest_dir,
+ pkg=cpv,
+ pkg_path=pkg_path,
+ logfile=settings.get("PORTAGE_LOG_FILE"),
+ scheduler=SchedulerInterface(loop),
+ )
+
+ extractor.start()
+ await extractor.async_wait()
+ if extractor.returncode != os.EX_OK:
+ raise PortageException("Error Extracting '{}'".format(pkg_path))
+
+ else:
+ instance_key = self._instance_key(cpv)
+ add_pkg = self.bintree._additional_pkgs.get(instance_key)
+ if add_pkg is None:
+ raise portage.exception.PackageNotFound(cpv)
+ await add_pkg._db.unpack_contents(pkg, dest_dir, loop=loop)
+
+ def cp_list(self, *pargs, **kwargs):
+ if not self.bintree.populated:
+ self.bintree.populate()
+ return fakedbapi.cp_list(self, *pargs, **kwargs)
+
+ def cp_all(self, sort=False):
+ if not self.bintree.populated:
+ self.bintree.populate()
+ return fakedbapi.cp_all(self, sort=sort)
+
+ def cpv_all(self):
+ if not self.bintree.populated:
+ self.bintree.populate()
+ return fakedbapi.cpv_all(self)
+
+ def getfetchsizes(self, pkg):
+ """
+ This will raise MissingSignature if SIZE signature is not available,
+ or InvalidSignature if SIZE signature is invalid.
+ """
+
+ if not self.bintree.populated:
+ self.bintree.populate()
+
+ pkg = getattr(pkg, "cpv", pkg)
+
+ filesdict = {}
+ if not self.bintree.isremote(pkg):
+ pass
+ else:
+ metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
+ try:
+ size = int(metadata["SIZE"])
+ except KeyError:
+ raise portage.exception.MissingSignature("SIZE")
+ except ValueError:
+ raise portage.exception.InvalidSignature("SIZE: %s" % metadata["SIZE"])
+ else:
+ filesdict[os.path.basename(self.bintree.getname(pkg))] = size
+
+ return filesdict
class binarytree:
- "this tree scans for a list of all packages available in PKGDIR"
- def __init__(self, _unused=DeprecationWarning, pkgdir=None,
- virtual=DeprecationWarning, settings=None):
-
- if pkgdir is None:
- raise TypeError("pkgdir parameter is required")
-
- if settings is None:
- raise TypeError("settings parameter is required")
-
- if _unused is not DeprecationWarning:
- warnings.warn("The first parameter of the "
- "portage.dbapi.bintree.binarytree"
- " constructor is now unused. Instead "
- "settings['ROOT'] is used.",
- DeprecationWarning, stacklevel=2)
-
- if virtual is not DeprecationWarning:
- warnings.warn("The 'virtual' parameter of the "
- "portage.dbapi.bintree.binarytree"
- " constructor is unused",
- DeprecationWarning, stacklevel=2)
-
- if True:
- self.pkgdir = normalize_path(pkgdir)
- 			# NOTE: Even if binpkg-multi-instance is disabled, it's
- # still possible to access a PKGDIR which uses the
- # binpkg-multi-instance layout (or mixed layout).
- self._multi_instance = ("binpkg-multi-instance" in
- settings.features)
- if self._multi_instance:
- self._allocate_filename = self._allocate_filename_multi
- self.dbapi = bindbapi(self, settings=settings)
- self.update_ents = self.dbapi.update_ents
- self.move_slot_ent = self.dbapi.move_slot_ent
- self.populated = 0
- self.tree = {}
- self._binrepos_conf = None
- self._remote_has_index = False
- self._remotepkgs = None # remote metadata indexed by cpv
- self._additional_pkgs = {}
- self.invalids = []
- self.settings = settings
- self._pkg_paths = {}
- self._populating = False
- self._all_directory = os.path.isdir(
- os.path.join(self.pkgdir, "All"))
- self._pkgindex_version = 0
- self._pkgindex_hashes = ["MD5","SHA1"]
- self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
- self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "SIZE"])
- self._pkgindex_aux_keys = \
- ["BASE_URI", "BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST",
- "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI", "FETCHCOMMAND",
- "IDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
- "PKGINDEX_URI", "PROPERTIES", "PROVIDES",
- "RDEPEND", "repository", "REQUIRES", "RESTRICT", "RESUMECOMMAND",
- "SIZE", "SLOT", "USE", "EPREFIX"]
- self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
- self._pkgindex_use_evaluated_keys = \
- ("BDEPEND", "DEPEND", "IDEPEND", "LICENSE", "RDEPEND",
- "PDEPEND", "PROPERTIES", "RESTRICT")
- self._pkgindex_header = None
- self._pkgindex_header_keys = set([
- "ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
- "ACCEPT_PROPERTIES", "ACCEPT_RESTRICT", "CBUILD",
- "CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
- "GENTOO_MIRRORS", "INSTALL_MASK", "IUSE_IMPLICIT", "USE",
- "USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
- "USE_EXPAND_UNPREFIXED",
- "EPREFIX"])
- self._pkgindex_default_pkg_data = {
- "BDEPEND" : "",
- "BUILD_ID" : "",
- "BUILD_TIME" : "",
- "DEFINED_PHASES" : "",
- "DEPEND" : "",
- "EAPI" : "0",
- "IDEPEND" : "",
- "IUSE" : "",
- "KEYWORDS": "",
- "LICENSE" : "",
- "PATH" : "",
- "PDEPEND" : "",
- "PROPERTIES" : "",
- "PROVIDES": "",
- "RDEPEND" : "",
- "REQUIRES": "",
- "RESTRICT": "",
- "SLOT" : "0",
- "USE" : "",
- }
- self._pkgindex_inherited_keys = ["CHOST", "repository", "EPREFIX"]
-
- # Populate the header with appropriate defaults.
- self._pkgindex_default_header_data = {
- "CHOST" : self.settings.get("CHOST", ""),
- "repository" : "",
- }
-
- self._pkgindex_translated_keys = (
- ("DESCRIPTION" , "DESC"),
- ("_mtime_" , "MTIME"),
- ("repository" , "REPO"),
- )
-
- self._pkgindex_allowed_pkg_keys = set(chain(
- self._pkgindex_keys,
- self._pkgindex_aux_keys,
- self._pkgindex_hashes,
- self._pkgindex_default_pkg_data,
- self._pkgindex_inherited_keys,
- chain(*self._pkgindex_translated_keys)
- ))
-
- @property
- def root(self):
- warnings.warn("The root attribute of "
- "portage.dbapi.bintree.binarytree"
- " is deprecated. Use "
- "settings['ROOT'] instead.",
- DeprecationWarning, stacklevel=3)
- return self.settings['ROOT']
-
- def move_ent(self, mylist, repo_match=None):
- if not self.populated:
- self.populate()
- origcp = mylist[1]
- newcp = mylist[2]
- # sanity check
- for atom in (origcp, newcp):
- if not isjustname(atom):
- raise InvalidPackageName(str(atom))
- mynewcat = catsplit(newcp)[0]
- origmatches=self.dbapi.cp_list(origcp)
- moves = 0
- if not origmatches:
- return moves
- for mycpv in origmatches:
- mycpv_cp = mycpv.cp
- if mycpv_cp != origcp:
- # Ignore PROVIDE virtual match.
- continue
- if repo_match is not None \
- and not repo_match(mycpv.repo):
- continue
-
- # Use isvalidatom() to check if this move is valid for the
- # EAPI (characters allowed in package names may vary).
- if not isvalidatom(newcp, eapi=mycpv.eapi):
- continue
-
- mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
- myoldpkg = catsplit(mycpv)[1]
- mynewpkg = catsplit(mynewcpv)[1]
-
- # If this update has already been applied to the same
- # package build then silently continue.
- applied = False
- for maybe_applied in self.dbapi.match('={}'.format(mynewcpv)):
- if maybe_applied.build_time == mycpv.build_time:
- applied = True
- break
-
- if applied:
- continue
-
- if (mynewpkg != myoldpkg) and self.dbapi.cpv_exists(mynewcpv):
- writemsg(_("!!! Cannot update binary: Destination exists.\n"),
- noiselevel=-1)
- writemsg("!!! "+mycpv+" -> "+mynewcpv+"\n", noiselevel=-1)
- continue
-
- tbz2path = self.getname(mycpv)
- if os.path.exists(tbz2path) and not os.access(tbz2path,os.W_OK):
- writemsg(_("!!! Cannot update readonly binary: %s\n") % mycpv,
- noiselevel=-1)
- continue
-
- moves += 1
- mytbz2 = portage.xpak.tbz2(tbz2path)
- mydata = mytbz2.get_data()
- updated_items = update_dbentries([mylist], mydata, parent=mycpv)
- mydata.update(updated_items)
- mydata[b'PF'] = \
- _unicode_encode(mynewpkg + "\n",
- encoding=_encodings['repo.content'])
- mydata[b'CATEGORY'] = \
- _unicode_encode(mynewcat + "\n",
- encoding=_encodings['repo.content'])
- if mynewpkg != myoldpkg:
- ebuild_data = mydata.pop(_unicode_encode(myoldpkg + '.ebuild',
- encoding=_encodings['repo.content']), None)
- if ebuild_data is not None:
- mydata[_unicode_encode(mynewpkg + '.ebuild',
- encoding=_encodings['repo.content'])] = ebuild_data
-
- metadata = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- v = mydata.get(_unicode_encode(k))
- if v is not None:
- v = _unicode_decode(v)
- metadata[k] = " ".join(v.split())
-
- # Create a copy of the old version of the package and
- # apply the update to it. Leave behind the old version,
- # assuming that it will be deleted by eclean-pkg when its
- # time comes.
- mynewcpv = _pkg_str(mynewcpv, metadata=metadata, db=self.dbapi)
- update_path = self.getname(mynewcpv, allocate_new=True) + ".partial"
- self._ensure_dir(os.path.dirname(update_path))
- update_path_lock = None
- try:
- update_path_lock = lockfile(update_path, wantnewlockfile=True)
- copyfile(tbz2path, update_path)
- mytbz2 = portage.xpak.tbz2(update_path)
- mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
- self.inject(mynewcpv, filename=update_path)
- finally:
- if update_path_lock is not None:
- try:
- os.unlink(update_path)
- except OSError:
- pass
- unlockfile(update_path_lock)
-
- return moves
-
- def prevent_collision(self, cpv):
- warnings.warn("The "
- "portage.dbapi.bintree.binarytree.prevent_collision "
- "method is deprecated.",
- DeprecationWarning, stacklevel=2)
-
- def _ensure_dir(self, path):
- """
- Create the specified directory. Also, copy gid and group mode
- bits from self.pkgdir if possible.
- @param cat_dir: Absolute path of the directory to be created.
- @type cat_dir: String
- """
- try:
- pkgdir_st = os.stat(self.pkgdir)
- except OSError:
- ensure_dirs(path)
- return
- pkgdir_gid = pkgdir_st.st_gid
- pkgdir_grp_mode = 0o2070 & pkgdir_st.st_mode
- try:
- ensure_dirs(path, gid=pkgdir_gid, mode=pkgdir_grp_mode, mask=0)
- except PortageException:
- if not os.path.isdir(path):
- raise
-
- def _file_permissions(self, path):
- try:
- pkgdir_st = os.stat(self.pkgdir)
- except OSError:
- pass
- else:
- pkgdir_gid = pkgdir_st.st_gid
- pkgdir_grp_mode = 0o0060 & pkgdir_st.st_mode
- try:
- portage.util.apply_permissions(path, gid=pkgdir_gid,
- mode=pkgdir_grp_mode, mask=0)
- except PortageException:
- pass
-
- def populate(self, getbinpkgs=False, getbinpkg_refresh=True, add_repos=()):
- """
- Populates the binarytree with package metadata.
-
- @param getbinpkgs: include remote packages
- @type getbinpkgs: bool
- @param getbinpkg_refresh: attempt to refresh the cache
- of remote package metadata if getbinpkgs is also True
- @type getbinpkg_refresh: bool
- @param add_repos: additional binary package repositories
- @type add_repos: sequence
- """
-
- if self._populating:
- return
-
- if not os.path.isdir(self.pkgdir) and not (getbinpkgs or add_repos):
- self.populated = True
- return
-
- # Clear all caches in case populate is called multiple times
- # as may be the case when _global_updates calls populate()
- # prior to performing package moves since it only wants to
- # operate on local packages (getbinpkgs=0).
- self._remotepkgs = None
-
- self._populating = True
- try:
- update_pkgindex = self._populate_local(
- reindex='pkgdir-index-trusted' not in self.settings.features)
-
- if update_pkgindex and self.dbapi.writable:
- # If the Packages file needs to be updated, then _populate_local
- # needs to be called once again while the file is locked, so
- # that changes made by a concurrent process cannot be lost. This
- # case is avoided when possible, in order to minimize lock
- # contention.
- pkgindex_lock = None
- try:
- pkgindex_lock = lockfile(self._pkgindex_file,
- wantnewlockfile=True)
- update_pkgindex = self._populate_local()
- if update_pkgindex:
- self._pkgindex_write(update_pkgindex)
- finally:
- if pkgindex_lock:
- unlockfile(pkgindex_lock)
-
- if add_repos:
- self._populate_additional(add_repos)
-
- if getbinpkgs:
- config_path = os.path.join(self.settings['PORTAGE_CONFIGROOT'], BINREPOS_CONF_FILE)
- self._binrepos_conf = BinRepoConfigLoader((config_path,), self.settings)
- if not self._binrepos_conf:
- writemsg(_("!!! %s is missing (or PORTAGE_BINHOST is unset), but use is requested.\n") % (config_path,),
- noiselevel=-1)
- else:
- self._populate_remote(getbinpkg_refresh=getbinpkg_refresh)
-
- finally:
- self._populating = False
-
- self.populated = True
-
- def _populate_local(self, reindex=True):
- """
- Populates the binarytree with local package metadata.
-
- @param reindex: detect added / modified / removed packages and
- regenerate the index file if necessary
- @type reindex: bool
- """
- self.dbapi.clear()
- _instance_key = self.dbapi._instance_key
- # In order to minimize disk I/O, we never compute digests here.
- # Therefore we exclude hashes from the minimum_keys, so that
- # the Packages file will not be needlessly re-written due to
- # missing digests.
- minimum_keys = self._pkgindex_keys.difference(self._pkgindex_hashes)
- if True:
- pkg_paths = {}
- self._pkg_paths = pkg_paths
- dir_files = {}
- if reindex:
- for parent, dir_names, file_names in os.walk(self.pkgdir):
- relative_parent = parent[len(self.pkgdir)+1:]
- dir_files[relative_parent] = file_names
-
- pkgindex = self._load_pkgindex()
- if not self._pkgindex_version_supported(pkgindex):
- pkgindex = self._new_pkgindex()
- metadata = {}
- basename_index = {}
- for d in pkgindex.packages:
- cpv = _pkg_str(d["CPV"], metadata=d,
- settings=self.settings, db=self.dbapi)
- d["CPV"] = cpv
- metadata[_instance_key(cpv)] = d
- path = d.get("PATH")
- if not path:
- path = cpv + ".tbz2"
-
- if reindex:
- basename = os.path.basename(path)
- basename_index.setdefault(basename, []).append(d)
- else:
- instance_key = _instance_key(cpv)
- pkg_paths[instance_key] = path
- self.dbapi.cpv_inject(cpv)
-
- update_pkgindex = False
- for mydir, file_names in dir_files.items():
- try:
- mydir = _unicode_decode(mydir,
- encoding=_encodings["fs"], errors="strict")
- except UnicodeDecodeError:
- continue
- for myfile in file_names:
- try:
- myfile = _unicode_decode(myfile,
- encoding=_encodings["fs"], errors="strict")
- except UnicodeDecodeError:
- continue
- if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
- continue
- mypath = os.path.join(mydir, myfile)
- full_path = os.path.join(self.pkgdir, mypath)
- s = os.lstat(full_path)
-
- if not stat.S_ISREG(s.st_mode):
- continue
-
- # Validate data from the package index and try to avoid
- # reading the xpak if possible.
- possibilities = basename_index.get(myfile)
- if possibilities:
- match = None
- for d in possibilities:
- try:
- if int(d["_mtime_"]) != s[stat.ST_MTIME]:
- continue
- except (KeyError, ValueError):
- continue
- try:
- if int(d["SIZE"]) != int(s.st_size):
- continue
- except (KeyError, ValueError):
- continue
- if not minimum_keys.difference(d):
- match = d
- break
- if match:
- mycpv = match["CPV"]
- instance_key = _instance_key(mycpv)
- pkg_paths[instance_key] = mypath
- # update the path if the package has been moved
- oldpath = d.get("PATH")
- if oldpath and oldpath != mypath:
- update_pkgindex = True
- # Omit PATH if it is the default path for
- # the current Packages format version.
- if mypath != mycpv + ".tbz2":
- d["PATH"] = mypath
- if not oldpath:
- update_pkgindex = True
- else:
- d.pop("PATH", None)
- if oldpath:
- update_pkgindex = True
- self.dbapi.cpv_inject(mycpv)
- continue
- if not os.access(full_path, os.R_OK):
- writemsg(_("!!! Permission denied to read " \
- "binary package: '%s'\n") % full_path,
- noiselevel=-1)
- self.invalids.append(myfile[:-5])
- continue
- pkg_metadata = self._read_metadata(full_path, s,
- keys=chain(self.dbapi._aux_cache_keys,
- ("PF", "CATEGORY")))
- mycat = pkg_metadata.get("CATEGORY", "")
- mypf = pkg_metadata.get("PF", "")
- slot = pkg_metadata.get("SLOT", "")
- mypkg = myfile[:-5]
- if not mycat or not mypf or not slot:
- #old-style or corrupt package
- writemsg(_("\n!!! Invalid binary package: '%s'\n") % full_path,
- noiselevel=-1)
- missing_keys = []
- if not mycat:
- missing_keys.append("CATEGORY")
- if not mypf:
- missing_keys.append("PF")
- if not slot:
- missing_keys.append("SLOT")
- msg = []
- if missing_keys:
- missing_keys.sort()
- msg.append(_("Missing metadata key(s): %s.") % \
- ", ".join(missing_keys))
- msg.append(_(" This binary package is not " \
- "recoverable and should be deleted."))
- for line in textwrap.wrap("".join(msg), 72):
- writemsg("!!! %s\n" % line, noiselevel=-1)
- self.invalids.append(mypkg)
- continue
-
- multi_instance = False
- invalid_name = False
- build_id = None
- if myfile.endswith(".xpak"):
- multi_instance = True
- build_id = self._parse_build_id(myfile)
- if build_id < 1:
- invalid_name = True
- elif myfile != "%s-%s.xpak" % (
- mypf, build_id):
- invalid_name = True
- else:
- mypkg = mypkg[:-len(str(build_id))-1]
- elif myfile != mypf + ".tbz2":
- invalid_name = True
-
- if invalid_name:
- writemsg(_("\n!!! Binary package name is "
- "invalid: '%s'\n") % full_path,
- noiselevel=-1)
- continue
-
- if pkg_metadata.get("BUILD_ID"):
- try:
- build_id = int(pkg_metadata["BUILD_ID"])
- except ValueError:
- writemsg(_("!!! Binary package has "
- "invalid BUILD_ID: '%s'\n") %
- full_path, noiselevel=-1)
- continue
- else:
- build_id = None
-
- if multi_instance:
- name_split = catpkgsplit("%s/%s" %
- (mycat, mypf))
- if (name_split is None or
- tuple(catsplit(mydir)) != name_split[:2]):
- continue
- elif mycat != mydir and mydir != "All":
- continue
- if mypkg != mypf.strip():
- continue
- mycpv = mycat + "/" + mypkg
- if not self.dbapi._category_re.match(mycat):
- writemsg(_("!!! Binary package has an " \
- "unrecognized category: '%s'\n") % full_path,
- noiselevel=-1)
- writemsg(_("!!! '%s' has a category that is not" \
- " listed in %setc/portage/categories\n") % \
- (mycpv, self.settings["PORTAGE_CONFIGROOT"]),
- noiselevel=-1)
- continue
- if build_id is not None:
- pkg_metadata["BUILD_ID"] = str(build_id)
- pkg_metadata["SIZE"] = str(s.st_size)
- # Discard items used only for validation above.
- pkg_metadata.pop("CATEGORY")
- pkg_metadata.pop("PF")
- mycpv = _pkg_str(mycpv,
- metadata=self.dbapi._aux_cache_slot_dict(pkg_metadata),
- db=self.dbapi)
- pkg_paths[_instance_key(mycpv)] = mypath
- self.dbapi.cpv_inject(mycpv)
- update_pkgindex = True
- d = metadata.get(_instance_key(mycpv),
- pkgindex._pkg_slot_dict())
- if d:
- try:
- if int(d["_mtime_"]) != s[stat.ST_MTIME]:
- d.clear()
- except (KeyError, ValueError):
- d.clear()
- if d:
- try:
- if int(d["SIZE"]) != int(s.st_size):
- d.clear()
- except (KeyError, ValueError):
- d.clear()
-
- for k in self._pkgindex_allowed_pkg_keys:
- v = pkg_metadata.get(k)
- if v:
- d[k] = v
- d["CPV"] = mycpv
-
- try:
- self._eval_use_flags(mycpv, d)
- except portage.exception.InvalidDependString:
- writemsg(_("!!! Invalid binary package: '%s'\n") % \
- self.getname(mycpv), noiselevel=-1)
- self.dbapi.cpv_remove(mycpv)
- del pkg_paths[_instance_key(mycpv)]
-
- # record location if it's non-default
- if mypath != mycpv + ".tbz2":
- d["PATH"] = mypath
- else:
- d.pop("PATH", None)
- metadata[_instance_key(mycpv)] = d
-
- if reindex:
- for instance_key in list(metadata):
- if instance_key not in pkg_paths:
- del metadata[instance_key]
-
- if update_pkgindex:
- del pkgindex.packages[:]
- pkgindex.packages.extend(iter(metadata.values()))
- self._update_pkgindex_header(pkgindex.header)
-
- self._pkgindex_header = {}
- self._merge_pkgindex_header(pkgindex.header,
- self._pkgindex_header)
-
- return pkgindex if update_pkgindex else None
-
- def _populate_remote(self, getbinpkg_refresh=True):
-
- self._remote_has_index = False
- self._remotepkgs = {}
- # Order by descending priority.
- for repo in reversed(list(self._binrepos_conf.values())):
- base_url = repo.sync_uri
- parsed_url = urlparse(base_url)
- host = parsed_url.netloc
- port = parsed_url.port
- user = None
- passwd = None
- user_passwd = ""
- if "@" in host:
- user, host = host.split("@", 1)
- user_passwd = user + "@"
- if ":" in user:
- user, passwd = user.split(":", 1)
-
- if port is not None:
- port_str = ":%s" % (port,)
- if host.endswith(port_str):
- host = host[:-len(port_str)]
- pkgindex_file = os.path.join(self.settings["EROOT"], CACHE_PATH, "binhost",
- host, parsed_url.path.lstrip("/"), "Packages")
- pkgindex = self._new_pkgindex()
- try:
- f = io.open(_unicode_encode(pkgindex_file,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace')
- try:
- pkgindex.read(f)
- finally:
- f.close()
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- local_timestamp = pkgindex.header.get("TIMESTAMP", None)
- try:
- download_timestamp = \
- float(pkgindex.header.get("DOWNLOAD_TIMESTAMP", 0))
- except ValueError:
- download_timestamp = 0
- remote_timestamp = None
- rmt_idx = self._new_pkgindex()
- proc = None
- tmp_filename = None
- try:
- # urlparse.urljoin() only works correctly with recognized
- # protocols and requires the base url to have a trailing
- # slash, so join manually...
- url = base_url.rstrip("/") + "/Packages"
- f = None
-
- if not getbinpkg_refresh and local_timestamp:
- raise UseCachedCopyOfRemoteIndex()
-
- try:
- ttl = float(pkgindex.header.get("TTL", 0))
- except ValueError:
- pass
- else:
- if download_timestamp and ttl and \
- download_timestamp + ttl > time.time():
- raise UseCachedCopyOfRemoteIndex()
-
- # Set proxy settings for _urlopen -> urllib_request
- proxies = {}
- for proto in ('http', 'https'):
- value = self.settings.get(proto + '_proxy')
- if value is not None:
- proxies[proto] = value
-
- # Don't use urlopen for https, unless
- # PEP 476 is supported (bug #469888).
- if repo.fetchcommand is None and (parsed_url.scheme not in ('https',) or _have_pep_476()):
- try:
- f = _urlopen(url, if_modified_since=local_timestamp, proxies=proxies)
- if hasattr(f, 'headers') and f.headers.get('timestamp', ''):
- remote_timestamp = f.headers.get('timestamp')
- except IOError as err:
- if hasattr(err, 'code') and err.code == 304: # not modified (since local_timestamp)
- raise UseCachedCopyOfRemoteIndex()
-
- if parsed_url.scheme in ('ftp', 'http', 'https'):
- # This protocol is supposedly supported by urlopen,
- # so apparently there's a problem with the url
- # or a bug in urlopen.
- if self.settings.get("PORTAGE_DEBUG", "0") != "0":
- traceback.print_exc()
-
- raise
- except ValueError:
- raise ParseError("Invalid Portage BINHOST value '%s'"
- % url.lstrip())
-
- if f is None:
-
- path = parsed_url.path.rstrip("/") + "/Packages"
-
- if repo.fetchcommand is None and parsed_url.scheme == 'ssh':
- # Use a pipe so that we can terminate the download
- # early if we detect that the TIMESTAMP header
- # matches that of the cached Packages file.
- ssh_args = ['ssh']
- if port is not None:
- ssh_args.append("-p%s" % (port,))
- # NOTE: shlex evaluates embedded quotes
- ssh_args.extend(portage.util.shlex_split(
- self.settings.get("PORTAGE_SSH_OPTS", "")))
- ssh_args.append(user_passwd + host)
- ssh_args.append('--')
- ssh_args.append('cat')
- ssh_args.append(path)
-
- proc = subprocess.Popen(ssh_args,
- stdout=subprocess.PIPE)
- f = proc.stdout
- else:
- if repo.fetchcommand is None:
- setting = 'FETCHCOMMAND_' + parsed_url.scheme.upper()
- fcmd = self.settings.get(setting)
- if not fcmd:
- fcmd = self.settings.get('FETCHCOMMAND')
- if not fcmd:
- raise EnvironmentError("FETCHCOMMAND is unset")
- else:
- fcmd = repo.fetchcommand
-
- fd, tmp_filename = tempfile.mkstemp()
- tmp_dirname, tmp_basename = os.path.split(tmp_filename)
- os.close(fd)
-
- fcmd_vars = {
- "DISTDIR": tmp_dirname,
- "FILE": tmp_basename,
- "URI": url
- }
-
- for k in ("PORTAGE_SSH_OPTS",):
- v = self.settings.get(k)
- if v is not None:
- fcmd_vars[k] = v
-
- success = portage.getbinpkg.file_get(
- fcmd=fcmd, fcmd_vars=fcmd_vars)
- if not success:
- raise EnvironmentError("%s failed" % (setting,))
- f = open(tmp_filename, 'rb')
-
- f_dec = codecs.iterdecode(f,
- _encodings['repo.content'], errors='replace')
- try:
- rmt_idx.readHeader(f_dec)
- if not remote_timestamp: # in case it had not been read from HTTP header
- remote_timestamp = rmt_idx.header.get("TIMESTAMP", None)
- if not remote_timestamp:
- # no timestamp in the header, something's wrong
- pkgindex = None
- writemsg(_("\n\n!!! Binhost package index " \
- " has no TIMESTAMP field.\n"), noiselevel=-1)
- else:
- if not self._pkgindex_version_supported(rmt_idx):
- writemsg(_("\n\n!!! Binhost package index version" \
- " is not supported: '%s'\n") % \
- rmt_idx.header.get("VERSION"), noiselevel=-1)
- pkgindex = None
- elif local_timestamp != remote_timestamp:
- rmt_idx.readBody(f_dec)
- pkgindex = rmt_idx
- finally:
- # Timeout after 5 seconds, in case close() blocks
- # indefinitely (see bug #350139).
- try:
- try:
- AlarmSignal.register(5)
- f.close()
- finally:
- AlarmSignal.unregister()
- except AlarmSignal:
- writemsg("\n\n!!! %s\n" % \
- _("Timed out while closing connection to binhost"),
- noiselevel=-1)
- except UseCachedCopyOfRemoteIndex:
- writemsg_stdout("\n")
- writemsg_stdout(
- colorize("GOOD", _("Local copy of remote index is up-to-date and will be used.")) + \
- "\n")
- rmt_idx = pkgindex
- except EnvironmentError as e:
- # This includes URLError which is raised for SSL
- # certificate errors when PEP 476 is supported.
- writemsg(_("\n\n!!! Error fetching binhost package" \
- " info from '%s'\n") % _hide_url_passwd(base_url))
- # With Python 2, the EnvironmentError message may
- # contain bytes or unicode, so use str to ensure
- # safety with all locales (bug #532784).
- try:
- error_msg = str(e)
- except UnicodeDecodeError as uerror:
- error_msg = str(uerror.object,
- encoding='utf_8', errors='replace')
- writemsg("!!! %s\n\n" % error_msg)
- del e
- pkgindex = None
- if proc is not None:
- if proc.poll() is None:
- proc.kill()
- proc.wait()
- proc = None
- if tmp_filename is not None:
- try:
- os.unlink(tmp_filename)
- except OSError:
- pass
- if pkgindex is rmt_idx:
- pkgindex.modified = False # don't update the header
- pkgindex.header["DOWNLOAD_TIMESTAMP"] = "%d" % time.time()
- try:
- ensure_dirs(os.path.dirname(pkgindex_file))
- f = atomic_ofstream(pkgindex_file)
- pkgindex.write(f)
- f.close()
- except (IOError, PortageException):
- if os.access(os.path.dirname(pkgindex_file), os.W_OK):
- raise
- # The current user doesn't have permission to cache the
- # file, but that's alright.
- if pkgindex:
- remote_base_uri = pkgindex.header.get("URI", base_url)
- for d in pkgindex.packages:
- cpv = _pkg_str(d["CPV"], metadata=d,
- settings=self.settings, db=self.dbapi)
- # Local package instances override remote instances
- # with the same instance_key.
- if self.dbapi.cpv_exists(cpv):
- continue
-
- d["CPV"] = cpv
- d["BASE_URI"] = remote_base_uri
- d["PKGINDEX_URI"] = url
- # FETCHCOMMAND and RESUMECOMMAND may be specified
- # by binrepos.conf, and otherwise ensure that they
- # do not propagate from the Packages index since
- # it may be unsafe to execute remotely specified
- # commands.
- if repo.fetchcommand is None:
- d.pop('FETCHCOMMAND', None)
- else:
- d['FETCHCOMMAND'] = repo.fetchcommand
- if repo.resumecommand is None:
- d.pop('RESUMECOMMAND', None)
- else:
- d['RESUMECOMMAND'] = repo.resumecommand
- self._remotepkgs[self.dbapi._instance_key(cpv)] = d
- self.dbapi.cpv_inject(cpv)
-
- self._remote_has_index = True
- self._merge_pkgindex_header(pkgindex.header,
- self._pkgindex_header)
-
- def _populate_additional(self, repos):
- for repo in repos:
- aux_keys = list(set(chain(repo._aux_cache_keys, repo._pkg_str_aux_keys)))
- for cpv in repo.cpv_all():
- metadata = dict(zip(aux_keys, repo.aux_get(cpv, aux_keys)))
- pkg = _pkg_str(cpv, metadata=metadata, settings=repo.settings, db=repo)
- instance_key = self.dbapi._instance_key(pkg)
- self._additional_pkgs[instance_key] = pkg
- self.dbapi.cpv_inject(pkg)
-
- def inject(self, cpv, filename=None):
- """Add a freshly built package to the database. This updates
- $PKGDIR/Packages with the new package metadata (including MD5).
- @param cpv: The cpv of the new package to inject
- @type cpv: string
- @param filename: File path of the package to inject, or None if it's
- already in the location returned by getname()
- @type filename: string
- @rtype: _pkg_str or None
- @return: A _pkg_str instance on success, or None on failure.
- """
- mycat, mypkg = catsplit(cpv)
- if not self.populated:
- self.populate()
- if filename is None:
- full_path = self.getname(cpv)
- else:
- full_path = filename
- try:
- s = os.stat(full_path)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
- noiselevel=-1)
- return
- metadata = self._read_metadata(full_path, s)
- invalid_depend = False
- try:
- self._eval_use_flags(cpv, metadata)
- except portage.exception.InvalidDependString:
- invalid_depend = True
- if invalid_depend or not metadata.get("SLOT"):
- writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
- noiselevel=-1)
- return
-
- fetched = False
- try:
- build_id = cpv.build_id
- except AttributeError:
- build_id = None
- else:
- instance_key = self.dbapi._instance_key(cpv)
- if instance_key in self.dbapi.cpvdict:
- # This means we've been called by aux_update (or
- # similar). The instance key typically changes (due to
- # file modification), so we need to discard existing
- # instance key references.
- self.dbapi.cpv_remove(cpv)
- self._pkg_paths.pop(instance_key, None)
- if self._remotepkgs is not None:
- fetched = self._remotepkgs.pop(instance_key, None)
-
- cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings,
- db=self.dbapi)
-
- # Reread the Packages index (in case it's been changed by another
- # process) and then update it, all while holding a lock.
- pkgindex_lock = None
- try:
- os.makedirs(self.pkgdir, exist_ok=True)
- pkgindex_lock = lockfile(self._pkgindex_file,
- wantnewlockfile=1)
- if filename is not None:
- new_filename = self.getname(cpv, allocate_new=True)
- try:
- samefile = os.path.samefile(filename, new_filename)
- except OSError:
- samefile = False
- if not samefile:
- self._ensure_dir(os.path.dirname(new_filename))
- _movefile(filename, new_filename, mysettings=self.settings)
- full_path = new_filename
-
- basename = os.path.basename(full_path)
- pf = catsplit(cpv)[1]
- if (build_id is None and not fetched and
- basename.endswith(".xpak")):
- # Apply the newly assigned BUILD_ID. This is intended
- # to occur only for locally built packages. If the
- # package was fetched, we want to preserve its
- # attributes, so that we can later distinguish that it
- # is identical to its remote counterpart.
- build_id = self._parse_build_id(basename)
- metadata["BUILD_ID"] = str(build_id)
- cpv = _pkg_str(cpv, metadata=metadata,
- settings=self.settings, db=self.dbapi)
- binpkg = portage.xpak.tbz2(full_path)
- binary_data = binpkg.get_data()
- binary_data[b"BUILD_ID"] = _unicode_encode(
- metadata["BUILD_ID"])
- binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
-
- self._file_permissions(full_path)
- pkgindex = self._load_pkgindex()
- if not self._pkgindex_version_supported(pkgindex):
- pkgindex = self._new_pkgindex()
-
- d = self._inject_file(pkgindex, cpv, full_path)
- self._update_pkgindex_header(pkgindex.header)
- self._pkgindex_write(pkgindex)
-
- finally:
- if pkgindex_lock:
- unlockfile(pkgindex_lock)
-
- # This is used to record BINPKGMD5 in the installed package
- # database, for a package that has just been built.
- cpv._metadata["MD5"] = d["MD5"]
-
- return cpv
-
- def _read_metadata(self, filename, st, keys=None):
- """
- Read metadata from a binary package. The returned metadata
- dictionary will contain empty strings for any values that
- are undefined (this is important because the _pkg_str class
- distinguishes between missing and undefined values).
-
- @param filename: File path of the binary package
- @type filename: string
- @param st: stat result for the binary package
- @type st: os.stat_result
- @param keys: optional list of specific metadata keys to retrieve
- @type keys: iterable
- @rtype: dict
- @return: package metadata
- """
- if keys is None:
- keys = self.dbapi._aux_cache_keys
- metadata = self.dbapi._aux_cache_slot_dict()
- else:
- metadata = {}
- binary_metadata = portage.xpak.tbz2(filename).get_data()
- for k in keys:
- if k == "_mtime_":
- metadata[k] = str(st[stat.ST_MTIME])
- elif k == "SIZE":
- metadata[k] = str(st.st_size)
- else:
- v = binary_metadata.get(_unicode_encode(k))
- if v is None:
- if k == "EAPI":
- metadata[k] = "0"
- else:
- metadata[k] = ""
- else:
- v = _unicode_decode(v)
- metadata[k] = " ".join(v.split())
- return metadata
-
- def _inject_file(self, pkgindex, cpv, filename):
- """
- Add a package to internal data structures, and add an
- entry to the given pkgindex.
- @param pkgindex: The PackageIndex instance to which an entry
- will be added.
- @type pkgindex: PackageIndex
- @param cpv: A _pkg_str instance corresponding to the package
- being injected.
- @type cpv: _pkg_str
- @param filename: Absolute file path of the package to inject.
- @type filename: string
- @rtype: dict
- @return: A dict corresponding to the new entry which has been
- added to pkgindex. This may be used to access the checksums
- which have just been generated.
- """
- # Update state for future isremote calls.
- instance_key = self.dbapi._instance_key(cpv)
- if self._remotepkgs is not None:
- self._remotepkgs.pop(instance_key, None)
-
- self.dbapi.cpv_inject(cpv)
- self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
- d = self._pkgindex_entry(cpv)
-
- # If found, remove package(s) with duplicate path.
- path = d.get("PATH", "")
- for i in range(len(pkgindex.packages) - 1, -1, -1):
- d2 = pkgindex.packages[i]
- if path and path == d2.get("PATH"):
- # Handle path collisions in $PKGDIR/All
- # when CPV is not identical.
- del pkgindex.packages[i]
- elif cpv == d2.get("CPV"):
- if path == d2.get("PATH", ""):
- del pkgindex.packages[i]
-
- pkgindex.packages.append(d)
- return d
-
- def _pkgindex_write(self, pkgindex):
- contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
- pkgindex.write(contents)
- contents = contents.getvalue()
- atime = mtime = int(pkgindex.header["TIMESTAMP"])
- output_files = [(atomic_ofstream(self._pkgindex_file, mode="wb"),
- self._pkgindex_file, None)]
-
- if "compress-index" in self.settings.features:
- gz_fname = self._pkgindex_file + ".gz"
- fileobj = atomic_ofstream(gz_fname, mode="wb")
- output_files.append((GzipFile(filename='', mode="wb",
- fileobj=fileobj, mtime=mtime), gz_fname, fileobj))
-
- for f, fname, f_close in output_files:
- f.write(contents)
- f.close()
- if f_close is not None:
- f_close.close()
- self._file_permissions(fname)
- # some seconds might have elapsed since TIMESTAMP
- os.utime(fname, (atime, mtime))
-
- def _pkgindex_entry(self, cpv):
- """
- Performs checksums, and gets size and mtime via lstat.
- Raises InvalidDependString if necessary.
- @rtype: dict
- @return: a dict containing an entry for the given cpv.
- """
-
- pkg_path = self.getname(cpv)
-
- d = dict(cpv._metadata.items())
- d.update(perform_multiple_checksums(
- pkg_path, hashes=self._pkgindex_hashes))
-
- d["CPV"] = cpv
- st = os.lstat(pkg_path)
- d["_mtime_"] = str(st[stat.ST_MTIME])
- d["SIZE"] = str(st.st_size)
-
- rel_path = pkg_path[len(self.pkgdir)+1:]
- # record location if it's non-default
- if rel_path != cpv + ".tbz2":
- d["PATH"] = rel_path
-
- return d
-
- def _new_pkgindex(self):
- return portage.getbinpkg.PackageIndex(
- allowed_pkg_keys=self._pkgindex_allowed_pkg_keys,
- default_header_data=self._pkgindex_default_header_data,
- default_pkg_data=self._pkgindex_default_pkg_data,
- inherited_keys=self._pkgindex_inherited_keys,
- translated_keys=self._pkgindex_translated_keys)
-
- @staticmethod
- def _merge_pkgindex_header(src, dest):
- """
- Merge Packages header settings from src to dest, in order to
- propagate implicit IUSE and USE_EXPAND settings for use with
- binary and installed packages. Values are appended, so the
- result is a union of elements from src and dest.
-
- Pull in ARCH if it's not defined, since it's used for validation
- by emerge's profile_check function, and also for KEYWORDS logic
- in the _getmaskingstatus function.
-
- @param src: source mapping (read only)
- @type src: Mapping
- @param dest: destination mapping
- @type dest: MutableMapping
- """
- for k, v in iter_iuse_vars(src):
- v_before = dest.get(k)
- if v_before is not None:
- merged_values = set(v_before.split())
- merged_values.update(v.split())
- v = ' '.join(sorted(merged_values))
- dest[k] = v
-
- if 'ARCH' not in dest and 'ARCH' in src:
- dest['ARCH'] = src['ARCH']
-
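For reference, the union merge that _merge_pkgindex_header performs can be reproduced with a small standalone sketch; the key filter below is a hypothetical stand-in for iter_iuse_vars and the sample values are made up:

    def merge_iuse_headers(src, dest, keys=("IUSE_IMPLICIT", "USE_EXPAND", "ARCH")):
        # Union of space-separated values, with ARCH only pulled in when missing.
        for k in keys:
            if k not in src:
                continue
            if k == "ARCH":
                dest.setdefault(k, src[k])
                continue
            merged = set(dest.get(k, "").split())
            merged.update(src[k].split())
            dest[k] = " ".join(sorted(merged))
        return dest

    header = {"IUSE_IMPLICIT": "prefix prefix-guest", "ARCH": "amd64"}
    defaults = {"IUSE_IMPLICIT": "abi_x86_64 prefix"}
    print(merge_iuse_headers(header, defaults))
    # {'IUSE_IMPLICIT': 'abi_x86_64 prefix prefix-guest', 'ARCH': 'amd64'}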
- def _propagate_config(self, config):
- """
- Propagate implicit IUSE and USE_EXPAND settings from the binary
- package database to a config instance. If settings are not
- available to propagate, then this will do nothing and return
- False.
-
- @param config: config instance
- @type config: portage.config
- @rtype: bool
- @return: True if settings successfully propagated, False if settings
- were not available to propagate.
- """
- if self._pkgindex_header is None:
- return False
-
- self._merge_pkgindex_header(self._pkgindex_header,
- config.configdict['defaults'])
- config.regenerate()
- config._init_iuse()
- return True
-
- def _update_pkgindex_header(self, header):
- """
- Add useful settings to the Packages file header, for use by
- binhost clients.
-
- This will return silently if the current profile is invalid or
- does not have an IUSE_IMPLICIT variable, since it's useful to
- maintain a cache of implicit IUSE settings for use with binary
- packages.
- """
- if not (self.settings.profile_path and
- "IUSE_IMPLICIT" in self.settings):
- header.setdefault("VERSION", str(self._pkgindex_version))
- return
-
- portdir = normalize_path(os.path.realpath(self.settings["PORTDIR"]))
- profiles_base = os.path.join(portdir, "profiles") + os.path.sep
- if self.settings.profile_path:
- profile_path = normalize_path(
- os.path.realpath(self.settings.profile_path))
- if profile_path.startswith(profiles_base):
- profile_path = profile_path[len(profiles_base):]
- header["PROFILE"] = profile_path
- header["VERSION"] = str(self._pkgindex_version)
- base_uri = self.settings.get("PORTAGE_BINHOST_HEADER_URI")
- if base_uri:
- header["URI"] = base_uri
- else:
- header.pop("URI", None)
- for k in list(self._pkgindex_header_keys) + \
- self.settings.get("USE_EXPAND_IMPLICIT", "").split() + \
- self.settings.get("USE_EXPAND_UNPREFIXED", "").split():
- v = self.settings.get(k, None)
- if v:
- header[k] = v
- else:
- header.pop(k, None)
-
- # These values may be useful for using a binhost without
- # having a local copy of the profile (bug #470006).
- for k in self.settings.get("USE_EXPAND_IMPLICIT", "").split():
- k = "USE_EXPAND_VALUES_" + k
- v = self.settings.get(k)
- if v:
- header[k] = v
- else:
- header.pop(k, None)
-
- def _pkgindex_version_supported(self, pkgindex):
- version = pkgindex.header.get("VERSION")
- if version:
- try:
- if int(version) <= self._pkgindex_version:
- return True
- except ValueError:
- pass
- return False
-
- def _eval_use_flags(self, cpv, metadata):
- use = frozenset(metadata.get("USE", "").split())
- for k in self._pkgindex_use_evaluated_keys:
- if k.endswith('DEPEND'):
- token_class = Atom
- else:
- token_class = None
-
- deps = metadata.get(k)
- if deps is None:
- continue
- try:
- deps = use_reduce(deps, uselist=use, token_class=token_class)
- deps = paren_enclose(deps)
- except portage.exception.InvalidDependString as e:
- writemsg("%s: %s\n" % (k, e), noiselevel=-1)
- raise
- metadata[k] = deps
-
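The USE-conditional flattening done by _eval_use_flags builds directly on portage's dependency helpers; a minimal sketch, assuming an importable portage installation and a made-up RDEPEND string:

    from portage.dep import Atom, paren_enclose, use_reduce

    rdepend = "ssl? ( dev-libs/openssl ) dev-libs/libfoo"
    use = frozenset(["ssl"])

    # Evaluate the conditionals for the given USE set and flatten the result
    # back into the plain string form stored in the Packages entry.
    reduced = use_reduce(rdepend, uselist=use, token_class=Atom)
    print(paren_enclose(reduced))  # dev-libs/openssl dev-libs/libfoo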
- def exists_specific(self, cpv):
- if not self.populated:
- self.populate()
- return self.dbapi.match(
- dep_expand("="+cpv, mydb=self.dbapi, settings=self.settings))
-
- def dep_bestmatch(self, mydep):
- "compatibility method -- all matches, not just visible ones"
- if not self.populated:
- self.populate()
- writemsg("\n\n", 1)
- writemsg("mydep: %s\n" % mydep, 1)
- mydep = dep_expand(mydep, mydb=self.dbapi, settings=self.settings)
- writemsg("mydep: %s\n" % mydep, 1)
- mykey = dep_getkey(mydep)
- writemsg("mykey: %s\n" % mykey, 1)
- mymatch = best(match_from_list(mydep,self.dbapi.cp_list(mykey)))
- writemsg("mymatch: %s\n" % mymatch, 1)
- if mymatch is None:
- return ""
- return mymatch
-
- def getname(self, cpv, allocate_new=None):
- """Returns a file location for this package.
- If cpv has both build_time and build_id attributes, then the
- path to the specific corresponding instance is returned.
- Otherwise, allocate a new path and return that. When allocating
- a new path, behavior depends on the binpkg-multi-instance
- FEATURES setting.
- """
- if not self.populated:
- self.populate()
-
- try:
- cpv.cp
- except AttributeError:
- cpv = _pkg_str(cpv)
-
- filename = None
- if allocate_new:
- filename = self._allocate_filename(cpv)
- elif self._is_specific_instance(cpv):
- instance_key = self.dbapi._instance_key(cpv)
- path = self._pkg_paths.get(instance_key)
- if path is not None:
- filename = os.path.join(self.pkgdir, path)
-
- if filename is None and not allocate_new:
- try:
- instance_key = self.dbapi._instance_key(cpv,
- support_string=True)
- except KeyError:
- pass
- else:
- filename = self._pkg_paths.get(instance_key)
- if filename is not None:
- filename = os.path.join(self.pkgdir, filename)
- elif instance_key in self._additional_pkgs:
- return None
-
- if filename is None:
- if self._multi_instance:
- pf = catsplit(cpv)[1]
- filename = "%s-%s.xpak" % (
- os.path.join(self.pkgdir, cpv.cp, pf), "1")
- else:
- filename = os.path.join(self.pkgdir, cpv + ".tbz2")
-
- return filename
-
- def _is_specific_instance(self, cpv):
- specific = True
- try:
- build_time = cpv.build_time
- build_id = cpv.build_id
- except AttributeError:
- specific = False
- else:
- if build_time is None or build_id is None:
- specific = False
- return specific
-
- def _max_build_id(self, cpv):
- max_build_id = 0
- for x in self.dbapi.cp_list(cpv.cp):
- if (x == cpv and x.build_id is not None and
- x.build_id > max_build_id):
- max_build_id = x.build_id
- return max_build_id
-
- def _allocate_filename(self, cpv):
- return os.path.join(self.pkgdir, cpv + ".tbz2")
-
- def _allocate_filename_multi(self, cpv):
-
- # First, get the max build_id found when _populate was
- # called.
- max_build_id = self._max_build_id(cpv)
-
- # A new package may have been added concurrently since the
- # last _populate call, so increment build_id until
- # we locate an unused id.
- pf = catsplit(cpv)[1]
- build_id = max_build_id + 1
-
- while True:
- filename = "%s-%s.xpak" % (
- os.path.join(self.pkgdir, cpv.cp, pf), build_id)
- if os.path.exists(filename):
- build_id += 1
- else:
- return filename
-
- @staticmethod
- def _parse_build_id(filename):
- build_id = -1
- suffixlen = len(".xpak")
- hyphen = filename.rfind("-", 0, -(suffixlen + 1))
- if hyphen != -1:
- build_id = filename[hyphen+1:-suffixlen]
- try:
- build_id = int(build_id)
- except ValueError:
- pass
- return build_id
-
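The basename convention handled by _parse_build_id can be exercised with a standalone copy of the same logic (the filenames below are hypothetical):

    def parse_build_id(basename):
        # Trailing build id of a *.xpak basename, or -1 if absent/invalid.
        build_id = -1
        suffixlen = len(".xpak")
        hyphen = basename.rfind("-", 0, -(suffixlen + 1))
        if hyphen != -1:
            try:
                build_id = int(basename[hyphen + 1:-suffixlen])
            except ValueError:
                pass
        return build_id

    print(parse_build_id("libfoo-1.2.3-7.xpak"))  # 7
    print(parse_build_id("libfoo-1.2.3.xpak"))    # -1 (no numeric suffix)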
- def isremote(self, pkgname):
- """Returns true if the package is kept remotely and it has not been
- downloaded (or it is only partially downloaded)."""
- if self._remotepkgs is None:
- return False
- instance_key = self.dbapi._instance_key(pkgname)
- if instance_key not in self._remotepkgs:
- return False
- if instance_key in self._additional_pkgs:
- return False
- # Presence in self._remotepkgs implies that it's remote. When a
- # package is downloaded, state is updated by self.inject().
- return True
-
- def get_pkgindex_uri(self, cpv):
- """Returns the URI to the Packages file for a given package."""
- uri = None
- if self._remotepkgs is not None:
- metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
- if metadata is not None:
- uri = metadata["PKGINDEX_URI"]
- return uri
-
- def gettbz2(self, pkgname):
- """Fetches the package from a remote site, if necessary. Attempts to
- resume if the file appears to be partially downloaded."""
- instance_key = self.dbapi._instance_key(pkgname)
- tbz2_path = self.getname(pkgname)
- tbz2name = os.path.basename(tbz2_path)
- resume = False
- if os.path.exists(tbz2_path):
- if tbz2name[:-5] not in self.invalids:
- return
-
- resume = True
- writemsg(_("Resuming download of this tbz2, but it is possible that it is corrupt.\n"),
- noiselevel=-1)
-
- mydest = os.path.dirname(self.getname(pkgname))
- self._ensure_dir(mydest)
- # urljoin doesn't work correctly with unrecognized protocols like sftp
- if self._remote_has_index:
- rel_url = self._remotepkgs[instance_key].get("PATH")
- if not rel_url:
- rel_url = pkgname + ".tbz2"
- remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
- url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
- else:
- url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
- protocol = urlparse(url)[0]
- fcmd_prefix = "FETCHCOMMAND"
- if resume:
- fcmd_prefix = "RESUMECOMMAND"
- fcmd = self.settings.get(fcmd_prefix + "_" + protocol.upper())
- if not fcmd:
- fcmd = self.settings.get(fcmd_prefix)
- success = portage.getbinpkg.file_get(url, mydest, fcmd=fcmd)
- if not success:
- try:
- os.unlink(self.getname(pkgname))
- except OSError:
- pass
- raise portage.exception.FileNotFound(mydest)
- self.inject(pkgname)
-
- def _load_pkgindex(self):
- pkgindex = self._new_pkgindex()
- try:
- f = io.open(_unicode_encode(self._pkgindex_file,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace')
- except EnvironmentError:
- pass
- else:
- try:
- pkgindex.read(f)
- finally:
- f.close()
- return pkgindex
-
- def _get_digests(self, pkg):
-
- try:
- cpv = pkg.cpv
- except AttributeError:
- cpv = pkg
-
- _instance_key = self.dbapi._instance_key
- instance_key = _instance_key(cpv)
- digests = {}
- metadata = (None if self._remotepkgs is None else
- self._remotepkgs.get(instance_key))
- if metadata is None:
- for d in self._load_pkgindex().packages:
- if (d["CPV"] == cpv and
- instance_key == _instance_key(_pkg_str(d["CPV"],
- metadata=d, settings=self.settings))):
- metadata = d
- break
-
- if metadata is None:
- return digests
-
- for k in get_valid_checksum_keys():
- v = metadata.get(k)
- if not v:
- continue
- digests[k] = v
-
- if "SIZE" in metadata:
- try:
- digests["size"] = int(metadata["SIZE"])
- except ValueError:
- writemsg(_("!!! Malformed SIZE attribute in remote " \
- "metadata for '%s'\n") % cpv)
-
- return digests
-
- def digestCheck(self, pkg):
- """
- Verify digests for the given package and raise DigestException
- if verification fails.
- @rtype: bool
- @return: True if digests could be located, False otherwise.
- """
-
- digests = self._get_digests(pkg)
-
- if not digests:
- return False
-
- try:
- cpv = pkg.cpv
- except AttributeError:
- cpv = pkg
-
- pkg_path = self.getname(cpv)
- hash_filter = _hash_filter(
- self.settings.get("PORTAGE_CHECKSUM_FILTER", ""))
- if not hash_filter.transparent:
- digests = _apply_hash_filter(digests, hash_filter)
- eout = EOutput()
- eout.quiet = self.settings.get("PORTAGE_QUIET") == "1"
- ok, st = _check_distfile(pkg_path, digests, eout, show_errors=0)
- if not ok:
- ok, reason = verify_all(pkg_path, digests)
- if not ok:
- raise portage.exception.DigestException(
- (pkg_path,) + tuple(reason))
-
- return True
-
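The digest round trip behind _get_digests/digestCheck uses portage.checksum; a minimal sketch, assuming portage is importable and that the package path below is a hypothetical existing file:

    from portage.checksum import perform_multiple_checksums, verify_all

    pkg_path = "/var/cache/binpkgs/dev-libs/libfoo/libfoo-1.0-1.xpak"  # hypothetical

    # Compute the hashes binarytree records in the Packages index ...
    digests = perform_multiple_checksums(pkg_path, hashes=["MD5", "SHA1"])

    # ... and later verify the file against them, as digestCheck() does.
    ok, reason = verify_all(pkg_path, digests)
    if not ok:
        raise RuntimeError("digest verification failed: %s" % (reason,))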
- def getslot(self, mycatpkg):
- "Get a slot for a catpkg; assume it exists."
- myslot = ""
- try:
- myslot = self.dbapi._pkg_str(mycatpkg, None).slot
- except KeyError:
- pass
- return myslot
+ "this tree scans PKGDIR for all available binary packages"
+
+ def __init__(
+ self,
+ _unused=DeprecationWarning,
+ pkgdir=None,
+ virtual=DeprecationWarning,
+ settings=None,
+ ):
+
+ if pkgdir is None:
+ raise TypeError("pkgdir parameter is required")
+
+ if settings is None:
+ raise TypeError("settings parameter is required")
+
+ if _unused is not DeprecationWarning:
+ warnings.warn(
+ "The first parameter of the "
+ "portage.dbapi.bintree.binarytree"
+ " constructor is now unused. Instead "
+ "settings['ROOT'] is used.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if virtual is not DeprecationWarning:
+ warnings.warn(
+ "The 'virtual' parameter of the "
+ "portage.dbapi.bintree.binarytree"
+ " constructor is unused",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if True:
+ self.pkgdir = normalize_path(pkgdir)
+ # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = "binpkg-multi-instance" in settings.features
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
+ self.dbapi = bindbapi(self, settings=settings)
+ self.update_ents = self.dbapi.update_ents
+ self.move_slot_ent = self.dbapi.move_slot_ent
+ self.populated = 0
+ self.tree = {}
+ self._binrepos_conf = None
+ self._remote_has_index = False
+ self._remotepkgs = None # remote metadata indexed by cpv
+ self._additional_pkgs = {}
+ self.invalids = []
+ self.settings = settings
+ self._pkg_paths = {}
+ self._populating = False
+ self._all_directory = os.path.isdir(os.path.join(self.pkgdir, "All"))
+ self._pkgindex_version = 0
+ self._pkgindex_hashes = ["MD5", "SHA1"]
+ self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
+ self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
+ self._pkgindex_keys.update(["CPV", "SIZE"])
+ self._pkgindex_aux_keys = [
+ "BASE_URI",
+ "BDEPEND",
+ "BUILD_ID",
+ "BUILD_TIME",
+ "CHOST",
+ "DEFINED_PHASES",
+ "DEPEND",
+ "DESCRIPTION",
+ "EAPI",
+ "FETCHCOMMAND",
+ "IDEPEND",
+ "IUSE",
+ "KEYWORDS",
+ "LICENSE",
+ "PDEPEND",
+ "PKGINDEX_URI",
+ "PROPERTIES",
+ "PROVIDES",
+ "RDEPEND",
+ "repository",
+ "REQUIRES",
+ "RESTRICT",
+ "RESUMECOMMAND",
+ "SIZE",
+ "SLOT",
+ "USE",
++ # PREFIX LOCAL
++ "EPREFIX",
+ ]
+ self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
+ self._pkgindex_use_evaluated_keys = (
+ "BDEPEND",
+ "DEPEND",
+ "IDEPEND",
+ "LICENSE",
+ "RDEPEND",
+ "PDEPEND",
+ "PROPERTIES",
+ "RESTRICT",
+ )
+ self._pkgindex_header = None
+ self._pkgindex_header_keys = set(
+ [
+ "ACCEPT_KEYWORDS",
+ "ACCEPT_LICENSE",
+ "ACCEPT_PROPERTIES",
+ "ACCEPT_RESTRICT",
+ "CBUILD",
+ "CONFIG_PROTECT",
+ "CONFIG_PROTECT_MASK",
+ "FEATURES",
+ "GENTOO_MIRRORS",
+ "INSTALL_MASK",
+ "IUSE_IMPLICIT",
+ "USE",
+ "USE_EXPAND",
+ "USE_EXPAND_HIDDEN",
+ "USE_EXPAND_IMPLICIT",
+ "USE_EXPAND_UNPREFIXED",
++ # PREFIX LOCAL
++ "EPREFIX",
+ ]
+ )
+ self._pkgindex_default_pkg_data = {
+ "BDEPEND": "",
+ "BUILD_ID": "",
+ "BUILD_TIME": "",
+ "DEFINED_PHASES": "",
+ "DEPEND": "",
+ "EAPI": "0",
+ "IDEPEND": "",
+ "IUSE": "",
+ "KEYWORDS": "",
+ "LICENSE": "",
+ "PATH": "",
+ "PDEPEND": "",
+ "PROPERTIES": "",
+ "PROVIDES": "",
+ "RDEPEND": "",
+ "REQUIRES": "",
+ "RESTRICT": "",
+ "SLOT": "0",
+ "USE": "",
+ }
- self._pkgindex_inherited_keys = ["CHOST", "repository"]
++ self._pkgindex_inherited_keys = ["CHOST", "repository",
++ # PREFIX LOCAL
++ "EPREFIX"]
+
+ # Populate the header with appropriate defaults.
+ self._pkgindex_default_header_data = {
+ "CHOST": self.settings.get("CHOST", ""),
+ "repository": "",
+ }
+
+ self._pkgindex_translated_keys = (
+ ("DESCRIPTION", "DESC"),
+ ("_mtime_", "MTIME"),
+ ("repository", "REPO"),
+ )
+
+ self._pkgindex_allowed_pkg_keys = set(
+ chain(
+ self._pkgindex_keys,
+ self._pkgindex_aux_keys,
+ self._pkgindex_hashes,
+ self._pkgindex_default_pkg_data,
+ self._pkgindex_inherited_keys,
+ chain(*self._pkgindex_translated_keys),
+ )
+ )
+
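The PREFIX LOCAL hunks above add EPREFIX to the inherited keys, so a value stated once in the Packages header applies to every entry that does not override it. A hypothetical helper (not portage API) illustrating that lookup order:

    def resolve_key(key, pkg_entry, header,
                    inherited_keys=("CHOST", "repository", "EPREFIX")):
        # Per-package value wins; inherited keys fall back to the index header.
        if pkg_entry.get(key):
            return pkg_entry[key]
        if key in inherited_keys:
            return header.get(key, "")
        return ""

    header = {"CHOST": "x86_64-pc-linux-gnu", "EPREFIX": "/home/user/gentoo"}
    entry = {"CPV": "dev-libs/libfoo-1.0", "SLOT": "0"}
    print(resolve_key("EPREFIX", entry, header))  # /home/user/gentoo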
+ @property
+ def root(self):
+ warnings.warn(
+ "The root attribute of "
+ "portage.dbapi.bintree.binarytree"
+ " is deprecated. Use "
+ "settings['ROOT'] instead.",
+ DeprecationWarning,
+ stacklevel=3,
+ )
+ return self.settings["ROOT"]
+
+ def move_ent(self, mylist, repo_match=None):
+ if not self.populated:
+ self.populate()
+ origcp = mylist[1]
+ newcp = mylist[2]
+ # sanity check
+ for atom in (origcp, newcp):
+ if not isjustname(atom):
+ raise InvalidPackageName(str(atom))
+ mynewcat = catsplit(newcp)[0]
+ origmatches = self.dbapi.cp_list(origcp)
+ moves = 0
+ if not origmatches:
+ return moves
+ for mycpv in origmatches:
+ mycpv_cp = mycpv.cp
+ if mycpv_cp != origcp:
+ # Ignore PROVIDE virtual match.
+ continue
+ if repo_match is not None and not repo_match(mycpv.repo):
+ continue
+
+ # Use isvalidatom() to check if this move is valid for the
+ # EAPI (characters allowed in package names may vary).
+ if not isvalidatom(newcp, eapi=mycpv.eapi):
+ continue
+
+ mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
+ myoldpkg = catsplit(mycpv)[1]
+ mynewpkg = catsplit(mynewcpv)[1]
+
+ # If this update has already been applied to the same
+ # package build then silently continue.
+ applied = False
+ for maybe_applied in self.dbapi.match("={}".format(mynewcpv)):
+ if maybe_applied.build_time == mycpv.build_time:
+ applied = True
+ break
+
+ if applied:
+ continue
+
+ if (mynewpkg != myoldpkg) and self.dbapi.cpv_exists(mynewcpv):
+ writemsg(
+ _("!!! Cannot update binary: Destination exists.\n"), noiselevel=-1
+ )
+ writemsg("!!! " + mycpv + " -> " + mynewcpv + "\n", noiselevel=-1)
+ continue
+
+ tbz2path = self.getname(mycpv)
+ if os.path.exists(tbz2path) and not os.access(tbz2path, os.W_OK):
+ writemsg(
+ _("!!! Cannot update readonly binary: %s\n") % mycpv, noiselevel=-1
+ )
+ continue
+
+ moves += 1
+ mytbz2 = portage.xpak.tbz2(tbz2path)
+ mydata = mytbz2.get_data()
+ updated_items = update_dbentries([mylist], mydata, parent=mycpv)
+ mydata.update(updated_items)
+ mydata[b"PF"] = _unicode_encode(
+ mynewpkg + "\n", encoding=_encodings["repo.content"]
+ )
+ mydata[b"CATEGORY"] = _unicode_encode(
+ mynewcat + "\n", encoding=_encodings["repo.content"]
+ )
+ if mynewpkg != myoldpkg:
+ ebuild_data = mydata.pop(
+ _unicode_encode(
+ myoldpkg + ".ebuild", encoding=_encodings["repo.content"]
+ ),
+ None,
+ )
+ if ebuild_data is not None:
+ mydata[
+ _unicode_encode(
+ mynewpkg + ".ebuild", encoding=_encodings["repo.content"]
+ )
+ ] = ebuild_data
+
+ metadata = self.dbapi._aux_cache_slot_dict()
+ for k in self.dbapi._aux_cache_keys:
+ v = mydata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+
+ # Create a copy of the old version of the package and
+ # apply the update to it. Leave behind the old version,
+ # assuming that it will be deleted by eclean-pkg when its
+ # time comes.
+ mynewcpv = _pkg_str(mynewcpv, metadata=metadata, db=self.dbapi)
+ update_path = self.getname(mynewcpv, allocate_new=True) + ".partial"
+ self._ensure_dir(os.path.dirname(update_path))
+ update_path_lock = None
+ try:
+ update_path_lock = lockfile(update_path, wantnewlockfile=True)
+ copyfile(tbz2path, update_path)
+ mytbz2 = portage.xpak.tbz2(update_path)
+ mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
+ self.inject(mynewcpv, filename=update_path)
+ finally:
+ if update_path_lock is not None:
+ try:
+ os.unlink(update_path)
+ except OSError:
+ pass
+ unlockfile(update_path_lock)
+
+ return moves
+
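The xpak rewrite that move_ent applies to a copied .partial file can be seen in isolation with portage.xpak; a minimal sketch, assuming portage is importable and the binary package path below is hypothetical:

    import portage.xpak
    from portage import _encodings, _unicode_encode

    binpkg_path = "/var/cache/binpkgs/old-cat/libfoo-1.0.tbz2"  # hypothetical

    binpkg = portage.xpak.tbz2(binpkg_path)
    data = binpkg.get_data()  # dict with bytes keys and values

    # Rewrite the category recorded inside the package, then recompose the
    # xpak segment in place, as move_ent does before calling inject().
    data[b"CATEGORY"] = _unicode_encode("new-cat\n", encoding=_encodings["repo.content"])
    binpkg.recompose_mem(portage.xpak.xpak_mem(data))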
+ def prevent_collision(self, cpv):
+ warnings.warn(
+ "The "
+ "portage.dbapi.bintree.binarytree.prevent_collision "
+ "method is deprecated.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ def _ensure_dir(self, path):
+ """
+ Create the specified directory. Also, copy gid and group mode
+ bits from self.pkgdir if possible.
+ @param path: Absolute path of the directory to be created.
+ @type path: String
+ """
+ try:
+ pkgdir_st = os.stat(self.pkgdir)
+ except OSError:
+ ensure_dirs(path)
+ return
+ pkgdir_gid = pkgdir_st.st_gid
+ pkgdir_grp_mode = 0o2070 & pkgdir_st.st_mode
+ try:
+ ensure_dirs(path, gid=pkgdir_gid, mode=pkgdir_grp_mode, mask=0)
+ except PortageException:
+ if not os.path.isdir(path):
+ raise
+
+ def _file_permissions(self, path):
+ try:
+ pkgdir_st = os.stat(self.pkgdir)
+ except OSError:
+ pass
+ else:
+ pkgdir_gid = pkgdir_st.st_gid
+ pkgdir_grp_mode = 0o0060 & pkgdir_st.st_mode
+ try:
+ portage.util.apply_permissions(
+ path, gid=pkgdir_gid, mode=pkgdir_grp_mode, mask=0
+ )
+ except PortageException:
+ pass
+
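_ensure_dir and _file_permissions propagate the group id and group mode bits of ${PKGDIR} to anything created beneath it; a sketch of the same propagation with portage.util, against a hypothetical layout that must already exist and be writable:

    import os
    from portage.util import apply_permissions, ensure_dirs

    pkgdir = "/var/cache/binpkgs"  # hypothetical
    st = os.stat(pkgdir)

    # Reuse PKGDIR's gid plus its setgid/group permission bits (0o2070 mask)
    # for directories, and the group read/write bits (0o0060 mask) for files.
    ensure_dirs(os.path.join(pkgdir, "dev-libs"),
                gid=st.st_gid, mode=0o2070 & st.st_mode, mask=0)
    apply_permissions(os.path.join(pkgdir, "dev-libs", "libfoo-1.0.tbz2"),
                      gid=st.st_gid, mode=0o0060 & st.st_mode, mask=0)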
+ def populate(self, getbinpkgs=False, getbinpkg_refresh=True, add_repos=()):
+ """
+ Populates the binarytree with package metadata.
+
+ @param getbinpkgs: include remote packages
+ @type getbinpkgs: bool
+ @param getbinpkg_refresh: attempt to refresh the cache
+ of remote package metadata if getbinpkgs is also True
+ @type getbinpkg_refresh: bool
+ @param add_repos: additional binary package repositories
+ @type add_repos: sequence
+ """
+
+ if self._populating:
+ return
+
+ if not os.path.isdir(self.pkgdir) and not (getbinpkgs or add_repos):
+ self.populated = True
+ return
+
+ # Clear all caches in case populate is called multiple times
+ # as may be the case when _global_updates calls populate()
+ # prior to performing package moves since it only wants to
+ # operate on local packages (getbinpkgs=0).
+ self._remotepkgs = None
+
+ self._populating = True
+ try:
+ update_pkgindex = self._populate_local(
+ reindex="pkgdir-index-trusted" not in self.settings.features
+ )
+
+ if update_pkgindex and self.dbapi.writable:
+ # If the Packages file needs to be updated, then _populate_local
+ # needs to be called once again while the file is locked, so
+ # that changes made by a concurrent process cannot be lost. This
+ # case is avoided when possible, in order to minimize lock
+ # contention.
+ pkgindex_lock = None
+ try:
+ pkgindex_lock = lockfile(self._pkgindex_file, wantnewlockfile=True)
+ update_pkgindex = self._populate_local()
+ if update_pkgindex:
+ self._pkgindex_write(update_pkgindex)
+ finally:
+ if pkgindex_lock:
+ unlockfile(pkgindex_lock)
+
+ if add_repos:
+ self._populate_additional(add_repos)
+
+ if getbinpkgs:
+ config_path = os.path.join(
+ self.settings["PORTAGE_CONFIGROOT"], BINREPOS_CONF_FILE
+ )
+ self._binrepos_conf = BinRepoConfigLoader((config_path,), self.settings)
+ if not self._binrepos_conf:
+ writemsg(
+ _(
+ "!!! %s is missing (or PORTAGE_BINHOST is unset), but use is requested.\n"
+ )
+ % (config_path,),
+ noiselevel=-1,
+ )
+ else:
+ self._populate_remote(getbinpkg_refresh=getbinpkg_refresh)
+
+ finally:
+ self._populating = False
+
+ self.populated = True
+
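As a usage sketch of the populate() entry point, assuming a regular Portage install so that portage.settings is available, and reading local packages only:

    import portage
    from portage.dbapi.bintree import binarytree

    settings = portage.config(clone=portage.settings)
    bt = binarytree(pkgdir=settings["PKGDIR"], settings=settings)
    bt.populate(getbinpkgs=False)  # local PKGDIR only, no binhost fetch

    for cpv in bt.dbapi.cpv_all():
        print(cpv, bt.getname(cpv))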
+ def _populate_local(self, reindex=True):
+ """
+ Populates the binarytree with local package metadata.
+
+ @param reindex: detect added / modified / removed packages and
+ regenerate the index file if necessary
+ @type reindex: bool
+ """
+ self.dbapi.clear()
+ _instance_key = self.dbapi._instance_key
+ # In order to minimize disk I/O, we never compute digests here.
+ # Therefore we exclude hashes from the minimum_keys, so that
+ # the Packages file will not be needlessly re-written due to
+ # missing digests.
+ minimum_keys = self._pkgindex_keys.difference(self._pkgindex_hashes)
+ if True:
+ pkg_paths = {}
+ self._pkg_paths = pkg_paths
+ dir_files = {}
+ if reindex:
+ for parent, dir_names, file_names in os.walk(self.pkgdir):
+ relative_parent = parent[len(self.pkgdir) + 1 :]
+ dir_files[relative_parent] = file_names
+
+ pkgindex = self._load_pkgindex()
+ if not self._pkgindex_version_supported(pkgindex):
+ pkgindex = self._new_pkgindex()
+ metadata = {}
+ basename_index = {}
+ for d in pkgindex.packages:
+ cpv = _pkg_str(
+ d["CPV"], metadata=d, settings=self.settings, db=self.dbapi
+ )
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ path = d.get("PATH")
+ if not path:
+ path = cpv + ".tbz2"
+
+ if reindex:
+ basename = os.path.basename(path)
+ basename_index.setdefault(basename, []).append(d)
+ else:
+ instance_key = _instance_key(cpv)
+ pkg_paths[instance_key] = path
+ self.dbapi.cpv_inject(cpv)
+
+ update_pkgindex = False
+ for mydir, file_names in dir_files.items():
+ try:
+ mydir = _unicode_decode(
+ mydir, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeDecodeError:
+ continue
+ for myfile in file_names:
+ try:
+ myfile = _unicode_decode(
+ myfile, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeDecodeError:
+ continue
+ if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
+ continue
+ mypath = os.path.join(mydir, myfile)
+ full_path = os.path.join(self.pkgdir, mypath)
+ s = os.lstat(full_path)
+
+ if not stat.S_ISREG(s.st_mode):
+ continue
+
+ # Validate data from the package index and try to avoid
+ # reading the xpak if possible.
+ possibilities = basename_index.get(myfile)
+ if possibilities:
+ match = None
+ for d in possibilities:
+ try:
+ if int(d["_mtime_"]) != s[stat.ST_MTIME]:
+ continue
+ except (KeyError, ValueError):
+ continue
+ try:
+ if int(d["SIZE"]) != int(s.st_size):
+ continue
+ except (KeyError, ValueError):
+ continue
+ if not minimum_keys.difference(d):
+ match = d
+ break
+ if match:
+ mycpv = match["CPV"]
+ instance_key = _instance_key(mycpv)
+ pkg_paths[instance_key] = mypath
+ # update the path if the package has been moved
+ oldpath = d.get("PATH")
+ if oldpath and oldpath != mypath:
+ update_pkgindex = True
+ # Omit PATH if it is the default path for
+ # the current Packages format version.
+ if mypath != mycpv + ".tbz2":
+ d["PATH"] = mypath
+ if not oldpath:
+ update_pkgindex = True
+ else:
+ d.pop("PATH", None)
+ if oldpath:
+ update_pkgindex = True
+ self.dbapi.cpv_inject(mycpv)
+ continue
+ if not os.access(full_path, os.R_OK):
+ writemsg(
+ _("!!! Permission denied to read " "binary package: '%s'\n")
+ % full_path,
+ noiselevel=-1,
+ )
+ self.invalids.append(myfile[:-5])
+ continue
+ pkg_metadata = self._read_metadata(
+ full_path,
+ s,
+ keys=chain(self.dbapi._aux_cache_keys, ("PF", "CATEGORY")),
+ )
+ mycat = pkg_metadata.get("CATEGORY", "")
+ mypf = pkg_metadata.get("PF", "")
+ slot = pkg_metadata.get("SLOT", "")
+ mypkg = myfile[:-5]
+ if not mycat or not mypf or not slot:
+ # old-style or corrupt package
+ writemsg(
+ _("\n!!! Invalid binary package: '%s'\n") % full_path,
+ noiselevel=-1,
+ )
+ missing_keys = []
+ if not mycat:
+ missing_keys.append("CATEGORY")
+ if not mypf:
+ missing_keys.append("PF")
+ if not slot:
+ missing_keys.append("SLOT")
+ msg = []
+ if missing_keys:
+ missing_keys.sort()
+ msg.append(
+ _("Missing metadata key(s): %s.")
+ % ", ".join(missing_keys)
+ )
+ msg.append(
+ _(
+ " This binary package is not "
+ "recoverable and should be deleted."
+ )
+ )
+ for line in textwrap.wrap("".join(msg), 72):
+ writemsg("!!! %s\n" % line, noiselevel=-1)
+ self.invalids.append(mypkg)
+ continue
+
+ multi_instance = False
+ invalid_name = False
+ build_id = None
+ if myfile.endswith(".xpak"):
+ multi_instance = True
+ build_id = self._parse_build_id(myfile)
+ if build_id < 1:
+ invalid_name = True
+ elif myfile != "%s-%s.xpak" % (mypf, build_id):
+ invalid_name = True
+ else:
+ mypkg = mypkg[: -len(str(build_id)) - 1]
+ elif myfile != mypf + ".tbz2":
+ invalid_name = True
+
+ if invalid_name:
+ writemsg(
+ _("\n!!! Binary package name is " "invalid: '%s'\n")
+ % full_path,
+ noiselevel=-1,
+ )
+ continue
+
+ if pkg_metadata.get("BUILD_ID"):
+ try:
+ build_id = int(pkg_metadata["BUILD_ID"])
+ except ValueError:
+ writemsg(
+ _("!!! Binary package has " "invalid BUILD_ID: '%s'\n")
+ % full_path,
+ noiselevel=-1,
+ )
+ continue
+ else:
+ build_id = None
+
+ if multi_instance:
+ name_split = catpkgsplit("%s/%s" % (mycat, mypf))
+ if (
+ name_split is None
+ or tuple(catsplit(mydir)) != name_split[:2]
+ ):
+ continue
+ elif mycat != mydir and mydir != "All":
+ continue
+ if mypkg != mypf.strip():
+ continue
+ mycpv = mycat + "/" + mypkg
+ if not self.dbapi._category_re.match(mycat):
+ writemsg(
+ _(
+ "!!! Binary package has an "
+ "unrecognized category: '%s'\n"
+ )
+ % full_path,
+ noiselevel=-1,
+ )
+ writemsg(
+ _(
+ "!!! '%s' has a category that is not"
+ " listed in %setc/portage/categories\n"
+ )
+ % (mycpv, self.settings["PORTAGE_CONFIGROOT"]),
+ noiselevel=-1,
+ )
+ continue
+ if build_id is not None:
+ pkg_metadata["BUILD_ID"] = str(build_id)
+ pkg_metadata["SIZE"] = str(s.st_size)
+ # Discard items used only for validation above.
+ pkg_metadata.pop("CATEGORY")
+ pkg_metadata.pop("PF")
+ mycpv = _pkg_str(
+ mycpv,
+ metadata=self.dbapi._aux_cache_slot_dict(pkg_metadata),
+ db=self.dbapi,
+ )
+ pkg_paths[_instance_key(mycpv)] = mypath
+ self.dbapi.cpv_inject(mycpv)
+ update_pkgindex = True
+ d = metadata.get(_instance_key(mycpv), pkgindex._pkg_slot_dict())
+ if d:
+ try:
+ if int(d["_mtime_"]) != s[stat.ST_MTIME]:
+ d.clear()
+ except (KeyError, ValueError):
+ d.clear()
+ if d:
+ try:
+ if int(d["SIZE"]) != int(s.st_size):
+ d.clear()
+ except (KeyError, ValueError):
+ d.clear()
+
+ for k in self._pkgindex_allowed_pkg_keys:
+ v = pkg_metadata.get(k)
+ if v:
+ d[k] = v
+ d["CPV"] = mycpv
+
+ try:
+ self._eval_use_flags(mycpv, d)
+ except portage.exception.InvalidDependString:
+ writemsg(
+ _("!!! Invalid binary package: '%s'\n")
+ % self.getname(mycpv),
+ noiselevel=-1,
+ )
+ self.dbapi.cpv_remove(mycpv)
+ del pkg_paths[_instance_key(mycpv)]
+
+ # record location if it's non-default
+ if mypath != mycpv + ".tbz2":
+ d["PATH"] = mypath
+ else:
+ d.pop("PATH", None)
+ metadata[_instance_key(mycpv)] = d
+
+ if reindex:
+ for instance_key in list(metadata):
+ if instance_key not in pkg_paths:
+ del metadata[instance_key]
+
+ if update_pkgindex:
+ del pkgindex.packages[:]
+ pkgindex.packages.extend(iter(metadata.values()))
+ self._update_pkgindex_header(pkgindex.header)
+
+ self._pkgindex_header = {}
+ self._merge_pkgindex_header(pkgindex.header, self._pkgindex_header)
+
+ return pkgindex if update_pkgindex else None
+
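The index-reuse test in _populate_local (skip reading the xpak when the recorded _mtime_ and SIZE still match the file on disk) amounts to the following check; a standalone sketch against a throwaway file:

    import os
    import stat
    import tempfile

    def index_entry_is_current(entry, path):
        # True when the recorded _mtime_ and SIZE still match the file on disk,
        # mirroring the validation loop in _populate_local().
        st = os.lstat(path)
        try:
            return (int(entry["_mtime_"]) == st[stat.ST_MTIME]
                    and int(entry["SIZE"]) == st.st_size)
        except (KeyError, ValueError):
            return False

    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"dummy binpkg payload")
    st = os.lstat(f.name)
    entry = {"_mtime_": str(st[stat.ST_MTIME]), "SIZE": str(st.st_size)}
    print(index_entry_is_current(entry, f.name))  # True
    os.unlink(f.name)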
+ def _populate_remote(self, getbinpkg_refresh=True):
+
+ self._remote_has_index = False
+ self._remotepkgs = {}
+ # Order by descending priority.
+ for repo in reversed(list(self._binrepos_conf.values())):
+ base_url = repo.sync_uri
+ parsed_url = urlparse(base_url)
+ host = parsed_url.netloc
+ port = parsed_url.port
+ user = None
+ passwd = None
+ user_passwd = ""
+ if "@" in host:
+ user, host = host.split("@", 1)
+ user_passwd = user + "@"
+ if ":" in user:
+ user, passwd = user.split(":", 1)
+
+ if port is not None:
+ port_str = ":%s" % (port,)
+ if host.endswith(port_str):
+ host = host[: -len(port_str)]
+ pkgindex_file = os.path.join(
+ self.settings["EROOT"],
+ CACHE_PATH,
+ "binhost",
+ host,
+ parsed_url.path.lstrip("/"),
+ "Packages",
+ )
+ pkgindex = self._new_pkgindex()
+ try:
+ f = io.open(
+ _unicode_encode(
+ pkgindex_file, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ )
+ try:
+ pkgindex.read(f)
+ finally:
+ f.close()
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ local_timestamp = pkgindex.header.get("TIMESTAMP", None)
+ try:
+ download_timestamp = float(pkgindex.header.get("DOWNLOAD_TIMESTAMP", 0))
+ except ValueError:
+ download_timestamp = 0
+ remote_timestamp = None
+ rmt_idx = self._new_pkgindex()
+ proc = None
+ tmp_filename = None
+ try:
+ # urlparse.urljoin() only works correctly with recognized
+ # protocols and requires the base url to have a trailing
+ # slash, so join manually...
+ url = base_url.rstrip("/") + "/Packages"
+ f = None
+
+ if not getbinpkg_refresh and local_timestamp:
+ raise UseCachedCopyOfRemoteIndex()
+
+ try:
+ ttl = float(pkgindex.header.get("TTL", 0))
+ except ValueError:
+ pass
+ else:
+ if (
+ download_timestamp
+ and ttl
+ and download_timestamp + ttl > time.time()
+ ):
+ raise UseCachedCopyOfRemoteIndex()
+
+ # Set proxy settings for _urlopen -> urllib_request
+ proxies = {}
+ for proto in ("http", "https"):
+ value = self.settings.get(proto + "_proxy")
+ if value is not None:
+ proxies[proto] = value
+
+ # Don't use urlopen for https, unless
+ # PEP 476 is supported (bug #469888).
+ if repo.fetchcommand is None and (
+ parsed_url.scheme not in ("https",) or _have_pep_476()
+ ):
+ try:
+ f = _urlopen(
+ url, if_modified_since=local_timestamp, proxies=proxies
+ )
+ if hasattr(f, "headers") and f.headers.get("timestamp", ""):
+ remote_timestamp = f.headers.get("timestamp")
+ except IOError as err:
+ if (
+ hasattr(err, "code") and err.code == 304
+ ): # not modified (since local_timestamp)
+ raise UseCachedCopyOfRemoteIndex()
+
+ if parsed_url.scheme in ("ftp", "http", "https"):
+ # This protocol is supposedly supported by urlopen,
+ # so apparently there's a problem with the url
+ # or a bug in urlopen.
+ if self.settings.get("PORTAGE_DEBUG", "0") != "0":
+ traceback.print_exc()
+
+ raise
+ except ValueError:
+ raise ParseError(
+ "Invalid Portage BINHOST value '%s'" % url.lstrip()
+ )
+
+ if f is None:
+
+ path = parsed_url.path.rstrip("/") + "/Packages"
+
+ if repo.fetchcommand is None and parsed_url.scheme == "ssh":
+ # Use a pipe so that we can terminate the download
+ # early if we detect that the TIMESTAMP header
+ # matches that of the cached Packages file.
+ ssh_args = ["ssh"]
+ if port is not None:
+ ssh_args.append("-p%s" % (port,))
+ # NOTE: shlex evaluates embedded quotes
+ ssh_args.extend(
+ portage.util.shlex_split(
+ self.settings.get("PORTAGE_SSH_OPTS", "")
+ )
+ )
+ ssh_args.append(user_passwd + host)
+ ssh_args.append("--")
+ ssh_args.append("cat")
+ ssh_args.append(path)
+
+ proc = subprocess.Popen(ssh_args, stdout=subprocess.PIPE)
+ f = proc.stdout
+ else:
+ if repo.fetchcommand is None:
+ setting = "FETCHCOMMAND_" + parsed_url.scheme.upper()
+ fcmd = self.settings.get(setting)
+ if not fcmd:
+ fcmd = self.settings.get("FETCHCOMMAND")
+ if not fcmd:
+ raise EnvironmentError("FETCHCOMMAND is unset")
+ else:
+ fcmd = repo.fetchcommand
+
+ fd, tmp_filename = tempfile.mkstemp()
+ tmp_dirname, tmp_basename = os.path.split(tmp_filename)
+ os.close(fd)
+
+ fcmd_vars = {
+ "DISTDIR": tmp_dirname,
+ "FILE": tmp_basename,
+ "URI": url,
+ }
+
+ for k in ("PORTAGE_SSH_OPTS",):
+ v = self.settings.get(k)
+ if v is not None:
+ fcmd_vars[k] = v
+
+ success = portage.getbinpkg.file_get(
+ fcmd=fcmd, fcmd_vars=fcmd_vars
+ )
+ if not success:
+ raise EnvironmentError("%s failed" % (setting,))
+ f = open(tmp_filename, "rb")
+
+ f_dec = codecs.iterdecode(
+ f, _encodings["repo.content"], errors="replace"
+ )
+ try:
+ rmt_idx.readHeader(f_dec)
+ if (
+ not remote_timestamp
+ ): # in case it had not been read from HTTP header
+ remote_timestamp = rmt_idx.header.get("TIMESTAMP", None)
+ if not remote_timestamp:
+ # no timestamp in the header, something's wrong
+ pkgindex = None
+ writemsg(
+ _(
+ "\n\n!!! Binhost package index "
+ "has no TIMESTAMP field.\n"
+ ),
+ noiselevel=-1,
+ )
+ else:
+ if not self._pkgindex_version_supported(rmt_idx):
+ writemsg(
+ _(
+ "\n\n!!! Binhost package index version"
+ " is not supported: '%s'\n"
+ )
+ % rmt_idx.header.get("VERSION"),
+ noiselevel=-1,
+ )
+ pkgindex = None
+ elif local_timestamp != remote_timestamp:
+ rmt_idx.readBody(f_dec)
+ pkgindex = rmt_idx
+ finally:
+ # Timeout after 5 seconds, in case close() blocks
+ # indefinitely (see bug #350139).
+ try:
+ try:
+ AlarmSignal.register(5)
+ f.close()
+ finally:
+ AlarmSignal.unregister()
+ except AlarmSignal:
+ writemsg(
+ "\n\n!!! %s\n"
+ % _("Timed out while closing connection to binhost"),
+ noiselevel=-1,
+ )
+ except UseCachedCopyOfRemoteIndex:
+ writemsg_stdout("\n")
+ writemsg_stdout(
+ colorize(
+ "GOOD",
+ _("Local copy of remote index is up-to-date and will be used."),
+ )
+ + "\n"
+ )
+ rmt_idx = pkgindex
+ except EnvironmentError as e:
+ # This includes URLError which is raised for SSL
+ # certificate errors when PEP 476 is supported.
+ writemsg(
+ _("\n\n!!! Error fetching binhost package" " info from '%s'\n")
+ % _hide_url_passwd(base_url)
+ )
+ # With Python 2, the EnvironmentError message may
+ # contain bytes or unicode, so use str to ensure
+ # safety with all locales (bug #532784).
+ try:
+ error_msg = str(e)
+ except UnicodeDecodeError as uerror:
+ error_msg = str(uerror.object, encoding="utf_8", errors="replace")
+ writemsg("!!! %s\n\n" % error_msg)
+ del e
+ pkgindex = None
+ if proc is not None:
+ if proc.poll() is None:
+ proc.kill()
+ proc.wait()
+ proc = None
+ if tmp_filename is not None:
+ try:
+ os.unlink(tmp_filename)
+ except OSError:
+ pass
+ if pkgindex is rmt_idx:
+ pkgindex.modified = False # don't update the header
+ pkgindex.header["DOWNLOAD_TIMESTAMP"] = "%d" % time.time()
+ try:
+ ensure_dirs(os.path.dirname(pkgindex_file))
+ f = atomic_ofstream(pkgindex_file)
+ pkgindex.write(f)
+ f.close()
+ except (IOError, PortageException):
+ if os.access(os.path.dirname(pkgindex_file), os.W_OK):
+ raise
+ # The current user doesn't have permission to cache the
+ # file, but that's alright.
+ if pkgindex:
+ remote_base_uri = pkgindex.header.get("URI", base_url)
+ for d in pkgindex.packages:
+ cpv = _pkg_str(
+ d["CPV"], metadata=d, settings=self.settings, db=self.dbapi
+ )
+ # Local package instances override remote instances
+ # with the same instance_key.
+ if self.dbapi.cpv_exists(cpv):
+ continue
+
+ d["CPV"] = cpv
+ d["BASE_URI"] = remote_base_uri
+ d["PKGINDEX_URI"] = url
+ # FETCHCOMMAND and RESUMECOMMAND may be specified
+ # by binrepos.conf, and otherwise ensure that they
+ # do not propagate from the Packages index since
+ # it may be unsafe to execute remotely specified
+ # commands.
+ if repo.fetchcommand is None:
+ d.pop("FETCHCOMMAND", None)
+ else:
+ d["FETCHCOMMAND"] = repo.fetchcommand
+ if repo.resumecommand is None:
+ d.pop("RESUMECOMMAND", None)
+ else:
+ d["RESUMECOMMAND"] = repo.resumecommand
+ self._remotepkgs[self.dbapi._instance_key(cpv)] = d
+ self.dbapi.cpv_inject(cpv)
+
+ self._remote_has_index = True
+ self._merge_pkgindex_header(pkgindex.header, self._pkgindex_header)
+
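The cache-reuse decision in _populate_remote (honour the remote index's TTL header against the recorded DOWNLOAD_TIMESTAMP) reduces to this check; a standalone sketch with hypothetical header values:

    import time

    def can_reuse_cached_index(header, now=None):
        # Reuse the cached Packages file while DOWNLOAD_TIMESTAMP + TTL is
        # still in the future, as _populate_remote() does.
        now = time.time() if now is None else now
        try:
            download_timestamp = float(header.get("DOWNLOAD_TIMESTAMP", 0))
        except ValueError:
            download_timestamp = 0
        try:
            ttl = float(header.get("TTL", 0))
        except ValueError:
            return False
        return bool(download_timestamp and ttl and download_timestamp + ttl > now)

    header = {"DOWNLOAD_TIMESTAMP": str(time.time() - 600), "TTL": "3600"}
    print(can_reuse_cached_index(header))  # True: fetched 10 minutes ago, 1 hour TTL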
+ def _populate_additional(self, repos):
+ for repo in repos:
+ aux_keys = list(set(chain(repo._aux_cache_keys, repo._pkg_str_aux_keys)))
+ for cpv in repo.cpv_all():
+ metadata = dict(zip(aux_keys, repo.aux_get(cpv, aux_keys)))
+ pkg = _pkg_str(cpv, metadata=metadata, settings=repo.settings, db=repo)
+ instance_key = self.dbapi._instance_key(pkg)
+ self._additional_pkgs[instance_key] = pkg
+ self.dbapi.cpv_inject(pkg)
+
+ def inject(self, cpv, filename=None):
+ """Add a freshly built package to the database. This updates
+ $PKGDIR/Packages with the new package metadata (including MD5).
+ @param cpv: The cpv of the new package to inject
+ @type cpv: string
+ @param filename: File path of the package to inject, or None if it's
+ already in the location returned by getname()
+ @type filename: string
+ @rtype: _pkg_str or None
+ @return: A _pkg_str instance on success, or None on failure.
+ """
+ mycat, mypkg = catsplit(cpv)
+ if not self.populated:
+ self.populate()
+ if filename is None:
+ full_path = self.getname(cpv)
+ else:
+ full_path = filename
+ try:
+ s = os.stat(full_path)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ writemsg(
+ _("!!! Binary package does not exist: '%s'\n") % full_path,
+ noiselevel=-1,
+ )
+ return
+ metadata = self._read_metadata(full_path, s)
+ invalid_depend = False
+ try:
+ self._eval_use_flags(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ invalid_depend = True
+ if invalid_depend or not metadata.get("SLOT"):
+ writemsg(_("!!! Invalid binary package: '%s'\n") % full_path, noiselevel=-1)
+ return
+
+ fetched = False
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ build_id = None
+ else:
+ instance_key = self.dbapi._instance_key(cpv)
+ if instance_key in self.dbapi.cpvdict:
+ # This means we've been called by aux_update (or
+ # similar). The instance key typically changes (due to
+ # file modification), so we need to discard existing
+ # instance key references.
+ self.dbapi.cpv_remove(cpv)
+ self._pkg_paths.pop(instance_key, None)
+ if self._remotepkgs is not None:
+ fetched = self._remotepkgs.pop(instance_key, None)
+
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings, db=self.dbapi)
+
+ # Reread the Packages index (in case it's been changed by another
+ # process) and then update it, all while holding a lock.
+ pkgindex_lock = None
+ try:
+ os.makedirs(self.pkgdir, exist_ok=True)
+ pkgindex_lock = lockfile(self._pkgindex_file, wantnewlockfile=1)
+ if filename is not None:
+ new_filename = self.getname(cpv, allocate_new=True)
+ try:
+ samefile = os.path.samefile(filename, new_filename)
+ except OSError:
+ samefile = False
+ if not samefile:
+ self._ensure_dir(os.path.dirname(new_filename))
+ _movefile(filename, new_filename, mysettings=self.settings)
+ full_path = new_filename
+
+ basename = os.path.basename(full_path)
+ pf = catsplit(cpv)[1]
+ if build_id is None and not fetched and basename.endswith(".xpak"):
+ # Apply the newly assigned BUILD_ID. This is intended
+ # to occur only for locally built packages. If the
+ # package was fetched, we want to preserve its
+ # attributes, so that we can later distinguish that it
+ # is identical to its remote counterpart.
+ build_id = self._parse_build_id(basename)
+ metadata["BUILD_ID"] = str(build_id)
+ cpv = _pkg_str(
+ cpv, metadata=metadata, settings=self.settings, db=self.dbapi
+ )
+ binpkg = portage.xpak.tbz2(full_path)
+ binary_data = binpkg.get_data()
+ binary_data[b"BUILD_ID"] = _unicode_encode(metadata["BUILD_ID"])
+ binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
+
+ self._file_permissions(full_path)
+ pkgindex = self._load_pkgindex()
+ if not self._pkgindex_version_supported(pkgindex):
+ pkgindex = self._new_pkgindex()
+
+ d = self._inject_file(pkgindex, cpv, full_path)
+ self._update_pkgindex_header(pkgindex.header)
+ self._pkgindex_write(pkgindex)
+
+ finally:
+ if pkgindex_lock:
+ unlockfile(pkgindex_lock)
+
+ # This is used to record BINPKGMD5 in the installed package
+ # database, for a package that has just been built.
+ cpv._metadata["MD5"] = d["MD5"]
+
+ return cpv
+
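(Hedged usage sketch, not part of the diff: how inject() is typically driven
through the Portage API on a system with portage installed. The cpv and the
file path are placeholders.)

    import portage

    eroot = portage.settings["EROOT"]
    bintree = portage.db[eroot]["bintree"]   # binarytree instance

    # Register an already-built package file with $PKGDIR/Packages;
    # inject() loads the Packages index on demand if needed.
    pkg = bintree.inject("app-misc/hello-1.0", filename="/tmp/hello-1.0.tbz2")
    if pkg is None:
        print("injection failed: missing or invalid binary package")
    else:
        print("injected", pkg, "with MD5", pkg._metadata["MD5"])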
+ def _read_metadata(self, filename, st, keys=None):
+ """
+ Read metadata from a binary package. The returned metadata
+ dictionary will contain empty strings for any values that
+ are undefined (this is important because the _pkg_str class
+ distinguishes between missing and undefined values).
+
+ @param filename: File path of the binary package
+ @type filename: string
+ @param st: stat result for the binary package
+ @type st: os.stat_result
+ @param keys: optional list of specific metadata keys to retrieve
+ @type keys: iterable
+ @rtype: dict
+ @return: package metadata
+ """
+ if keys is None:
+ keys = self.dbapi._aux_cache_keys
+ metadata = self.dbapi._aux_cache_slot_dict()
+ else:
+ metadata = {}
+ binary_metadata = portage.xpak.tbz2(filename).get_data()
+ for k in keys:
+ if k == "_mtime_":
+ metadata[k] = str(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ metadata[k] = str(st.st_size)
+ else:
+ v = binary_metadata.get(_unicode_encode(k))
+ if v is None:
+ if k == "EAPI":
+ metadata[k] = "0"
+ else:
+ metadata[k] = ""
+ else:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ return metadata
+
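(Illustrative sketch, assuming a real binary package exists at the hypothetical
path below: raw xpak values can be inspected the same way _read_metadata does,
including its fallback of EAPI to "0".)

    import portage.xpak
    from portage import _unicode_decode

    data = portage.xpak.tbz2("/var/cache/binpkgs/app-misc/hello-1.0.tbz2").get_data()
    raw_eapi = data.get(b"EAPI")
    eapi = "0" if raw_eapi is None else " ".join(_unicode_decode(raw_eapi).split())
    print("EAPI:", eapi)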
+ def _inject_file(self, pkgindex, cpv, filename):
+ """
+ Add a package to internal data structures, and add an
+ entry to the given pkgindex.
+ @param pkgindex: The PackageIndex instance to which an entry
+ will be added.
+ @type pkgindex: PackageIndex
+ @param cpv: A _pkg_str instance corresponding to the package
+ being injected.
+ @type cpv: _pkg_str
+ @param filename: Absolute file path of the package to inject.
+ @type filename: string
+ @rtype: dict
+ @return: A dict corresponding to the new entry which has been
+ added to pkgindex. This may be used to access the checksums
+ which have just been generated.
+ """
+ # Update state for future isremote calls.
+ instance_key = self.dbapi._instance_key(cpv)
+ if self._remotepkgs is not None:
+ self._remotepkgs.pop(instance_key, None)
+
+ self.dbapi.cpv_inject(cpv)
+ self._pkg_paths[instance_key] = filename[len(self.pkgdir) + 1 :]
+ d = self._pkgindex_entry(cpv)
+
+ # If found, remove package(s) with duplicate path.
+ path = d.get("PATH", "")
+ for i in range(len(pkgindex.packages) - 1, -1, -1):
+ d2 = pkgindex.packages[i]
+ if path and path == d2.get("PATH"):
+ # Handle path collisions in $PKGDIR/All
+ # when CPV is not identical.
+ del pkgindex.packages[i]
+ elif cpv == d2.get("CPV"):
+ if path == d2.get("PATH", ""):
+ del pkgindex.packages[i]
+
+ pkgindex.packages.append(d)
+ return d
+
+ def _pkgindex_write(self, pkgindex):
+ contents = codecs.getwriter(_encodings["repo.content"])(io.BytesIO())
+ pkgindex.write(contents)
+ contents = contents.getvalue()
+ atime = mtime = int(pkgindex.header["TIMESTAMP"])
+ output_files = [
+ (atomic_ofstream(self._pkgindex_file, mode="wb"), self._pkgindex_file, None)
+ ]
+
+ if "compress-index" in self.settings.features:
+ gz_fname = self._pkgindex_file + ".gz"
+ fileobj = atomic_ofstream(gz_fname, mode="wb")
+ output_files.append(
+ (
+ GzipFile(filename="", mode="wb", fileobj=fileobj, mtime=mtime),
+ gz_fname,
+ fileobj,
+ )
+ )
+
+ for f, fname, f_close in output_files:
+ f.write(contents)
+ f.close()
+ if f_close is not None:
+ f_close.close()
+ self._file_permissions(fname)
+ # some seconds might have elapsed since TIMESTAMP
+ os.utime(fname, (atime, mtime))
+
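(Standalone sketch of the compress-index detail above, with made-up values:
pinning the gzip header mtime to the index TIMESTAMP keeps Packages.gz
reproducible for identical index contents.)

    import gzip
    import io

    timestamp = 1700000000
    payload = b"TIMESTAMP: 1700000000\n\nCPV: app-misc/hello-1.0\n"

    buf = io.BytesIO()
    with gzip.GzipFile(filename="", mode="wb", fileobj=buf, mtime=timestamp) as gz:
        gz.write(payload)
    print(len(buf.getvalue()), "compressed bytes, header mtime pinned")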
+ def _pkgindex_entry(self, cpv):
+ """
+ Performs checksums, and gets size and mtime via lstat.
+ Raises InvalidDependString if necessary.
+ @rtype: dict
+ @return: a dict containing the entry for the given cpv.
+ """
+
+ pkg_path = self.getname(cpv)
+
+ d = dict(cpv._metadata.items())
+ d.update(perform_multiple_checksums(pkg_path, hashes=self._pkgindex_hashes))
+
+ d["CPV"] = cpv
+ st = os.lstat(pkg_path)
+ d["_mtime_"] = str(st[stat.ST_MTIME])
+ d["SIZE"] = str(st.st_size)
+
+ rel_path = pkg_path[len(self.pkgdir) + 1 :]
+ # record location if it's non-default
+ if rel_path != cpv + ".tbz2":
+ d["PATH"] = rel_path
+
+ return d
+
+ def _new_pkgindex(self):
+ return portage.getbinpkg.PackageIndex(
+ allowed_pkg_keys=self._pkgindex_allowed_pkg_keys,
+ default_header_data=self._pkgindex_default_header_data,
+ default_pkg_data=self._pkgindex_default_pkg_data,
+ inherited_keys=self._pkgindex_inherited_keys,
+ translated_keys=self._pkgindex_translated_keys,
+ )
+
+ @staticmethod
+ def _merge_pkgindex_header(src, dest):
+ """
+ Merge Packages header settings from src to dest, in order to
+ propagate implicit IUSE and USE_EXPAND settings for use with
+ binary and installed packages. Values are appended, so the
+ result is a union of elements from src and dest.
+
+ Pull in ARCH if it's not defined, since it's used for validation
+ by emerge's profile_check function, and also for KEYWORDS logic
+ in the _getmaskingstatus function.
+
+ @param src: source mapping (read only)
+ @type src: Mapping
+ @param dest: destination mapping
+ @type dest: MutableMapping
+ """
+ for k, v in iter_iuse_vars(src):
+ v_before = dest.get(k)
+ if v_before is not None:
+ merged_values = set(v_before.split())
+ merged_values.update(v.split())
+ v = " ".join(sorted(merged_values))
+ dest[k] = v
+
+ if "ARCH" not in dest and "ARCH" in src:
+ dest["ARCH"] = src["ARCH"]
+
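(Standalone illustration of the union merge above, with made-up values:
whitespace-separated tokens from src and dest are combined and sorted, so
repeated merges are stable.)

    def merge_value(dest, src, key):
        before = dest.get(key)
        value = src.get(key, "")
        if before is not None:
            merged = set(before.split())
            merged.update(value.split())
            value = " ".join(sorted(merged))
        dest[key] = value

    dest = {"IUSE_IMPLICIT": "prefix split-usr"}
    src = {"IUSE_IMPLICIT": "abi_x86_64 prefix"}
    merge_value(dest, src, "IUSE_IMPLICIT")
    print(dest["IUSE_IMPLICIT"])   # abi_x86_64 prefix split-usr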
+ def _propagate_config(self, config):
+ """
+ Propagate implicit IUSE and USE_EXPAND settings from the binary
+ package database to a config instance. If settings are not
+ available to propagate, then this will do nothing and return
+ False.
+
+ @param config: config instance
+ @type config: portage.config
+ @rtype: bool
+ @return: True if settings successfully propagated, False if settings
+ were not available to propagate.
+ """
+ if self._pkgindex_header is None:
+ return False
+
+ self._merge_pkgindex_header(
+ self._pkgindex_header, config.configdict["defaults"]
+ )
+ config.regenerate()
+ config._init_iuse()
+ return True
+
+ def _update_pkgindex_header(self, header):
+ """
+ Add useful settings to the Packages file header, for use by
+ binhost clients.
+
+ This will return silently if the current profile is invalid or
+ does not have an IUSE_IMPLICIT variable, since it's useful to
+ maintain a cache of implicit IUSE settings for use with binary
+ packages.
+ """
+ if not (self.settings.profile_path and "IUSE_IMPLICIT" in self.settings):
+ header.setdefault("VERSION", str(self._pkgindex_version))
+ return
+
+ portdir = normalize_path(os.path.realpath(self.settings["PORTDIR"]))
+ profiles_base = os.path.join(portdir, "profiles") + os.path.sep
+ if self.settings.profile_path:
+ profile_path = normalize_path(os.path.realpath(self.settings.profile_path))
+ if profile_path.startswith(profiles_base):
+ profile_path = profile_path[len(profiles_base) :]
+ header["PROFILE"] = profile_path
+ header["VERSION"] = str(self._pkgindex_version)
+ base_uri = self.settings.get("PORTAGE_BINHOST_HEADER_URI")
+ if base_uri:
+ header["URI"] = base_uri
+ else:
+ header.pop("URI", None)
+ for k in (
+ list(self._pkgindex_header_keys)
+ + self.settings.get("USE_EXPAND_IMPLICIT", "").split()
+ + self.settings.get("USE_EXPAND_UNPREFIXED", "").split()
+ ):
+ v = self.settings.get(k, None)
+ if v:
+ header[k] = v
+ else:
+ header.pop(k, None)
+
+ # These values may be useful for using a binhost without
+ # having a local copy of the profile (bug #470006).
+ for k in self.settings.get("USE_EXPAND_IMPLICIT", "").split():
+ k = "USE_EXPAND_VALUES_" + k
+ v = self.settings.get(k)
+ if v:
+ header[k] = v
+ else:
+ header.pop(k, None)
+
+ def _pkgindex_version_supported(self, pkgindex):
+ version = pkgindex.header.get("VERSION")
+ if version:
+ try:
+ if int(version) <= self._pkgindex_version:
+ return True
+ except ValueError:
+ pass
+ return False
+
+ def _eval_use_flags(self, cpv, metadata):
+ use = frozenset(metadata.get("USE", "").split())
+ for k in self._pkgindex_use_evaluated_keys:
+ if k.endswith("DEPEND"):
+ token_class = Atom
+ else:
+ token_class = None
+
+ deps = metadata.get(k)
+ if deps is None:
+ continue
+ try:
+ deps = use_reduce(deps, uselist=use, token_class=token_class)
+ deps = paren_enclose(deps)
+ except portage.exception.InvalidDependString as e:
+ writemsg("%s: %s\n" % (k, e), noiselevel=-1)
+ raise
+ metadata[k] = deps
+
+ def exists_specific(self, cpv):
+ if not self.populated:
+ self.populate()
+ return self.dbapi.match(
+ dep_expand("=" + cpv, mydb=self.dbapi, settings=self.settings)
+ )
+
+ def dep_bestmatch(self, mydep):
+ "compatibility method -- all matches, not just visible ones"
+ if not self.populated:
+ self.populate()
+ writemsg("\n\n", 1)
+ writemsg("mydep: %s\n" % mydep, 1)
+ mydep = dep_expand(mydep, mydb=self.dbapi, settings=self.settings)
+ writemsg("mydep: %s\n" % mydep, 1)
+ mykey = dep_getkey(mydep)
+ writemsg("mykey: %s\n" % mykey, 1)
+ mymatch = best(match_from_list(mydep, self.dbapi.cp_list(mykey)))
+ writemsg("mymatch: %s\n" % mymatch, 1)
+ if mymatch is None:
+ return ""
+ return mymatch
+
+ def getname(self, cpv, allocate_new=None):
+ """Returns a file location for this package.
+ If cpv has both build_time and build_id attributes, then the
+ path to the specific corresponding instance is returned.
+ Otherwise, allocate a new path and return that. When allocating
+ a new path, behavior depends on the binpkg-multi-instance
+ FEATURES setting.
+ """
+ if not self.populated:
+ self.populate()
+
+ try:
+ cpv.cp
+ except AttributeError:
+ cpv = _pkg_str(cpv)
+
+ filename = None
+ if allocate_new:
+ filename = self._allocate_filename(cpv)
+ elif self._is_specific_instance(cpv):
+ instance_key = self.dbapi._instance_key(cpv)
+ path = self._pkg_paths.get(instance_key)
+ if path is not None:
+ filename = os.path.join(self.pkgdir, path)
+
+ if filename is None and not allocate_new:
+ try:
+ instance_key = self.dbapi._instance_key(cpv, support_string=True)
+ except KeyError:
+ pass
+ else:
+ filename = self._pkg_paths.get(instance_key)
+ if filename is not None:
+ filename = os.path.join(self.pkgdir, filename)
+ elif instance_key in self._additional_pkgs:
+ return None
+
+ if filename is None:
+ if self._multi_instance:
+ pf = catsplit(cpv)[1]
+ filename = "%s-%s.xpak" % (os.path.join(self.pkgdir, cpv.cp, pf), "1")
+ else:
+ filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ return filename
+
+ def _is_specific_instance(self, cpv):
+ specific = True
+ try:
+ build_time = cpv.build_time
+ build_id = cpv.build_id
+ except AttributeError:
+ specific = False
+ else:
+ if build_time is None or build_id is None:
+ specific = False
+ return specific
+
+ def _max_build_id(self, cpv):
+ max_build_id = 0
+ for x in self.dbapi.cp_list(cpv.cp):
+ if x == cpv and x.build_id is not None and x.build_id > max_build_id:
+ max_build_id = x.build_id
+ return max_build_id
+
+ def _allocate_filename(self, cpv):
+ return os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ def _allocate_filename_multi(self, cpv):
+
+ # First, get the max build_id found when _populate was
+ # called.
+ max_build_id = self._max_build_id(cpv)
+
+ # A new package may have been added concurrently since the
+ # last _populate call, so increment build_id until
+ # we locate an unused id.
+ pf = catsplit(cpv)[1]
+ build_id = max_build_id + 1
+
+ while True:
+ filename = "%s-%s.xpak" % (os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+ if os.path.exists(filename):
+ build_id += 1
+ else:
+ return filename
+
+ @staticmethod
+ def _parse_build_id(filename):
+ build_id = -1
+ suffixlen = len(".xpak")
+ hyphen = filename.rfind("-", 0, -(suffixlen + 1))
+ if hyphen != -1:
+ try:
+ build_id = int(filename[hyphen + 1 : -suffixlen])
+ except ValueError:
+ pass
+ return build_id
+
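(Standalone check of the filename rule above, using hypothetical names: the
build id is the integer between the last hyphen and the ".xpak" suffix, and
-1 means no build id could be parsed.)

    def parse_build_id(filename):
        suffixlen = len(".xpak")
        hyphen = filename.rfind("-", 0, -(suffixlen + 1))
        if hyphen != -1:
            try:
                return int(filename[hyphen + 1 : -suffixlen])
            except ValueError:
                pass
        return -1

    print(parse_build_id("hello-2.12-3.xpak"))   # 3
    print(parse_build_id("hello-2.12.xpak"))     # -1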
+ def isremote(self, pkgname):
+ """Returns true if the package is kept remotely and it has not been
+ downloaded (or it is only partially downloaded)."""
+ if self._remotepkgs is None:
+ return False
+ instance_key = self.dbapi._instance_key(pkgname)
+ if instance_key not in self._remotepkgs:
+ return False
+ if instance_key in self._additional_pkgs:
+ return False
+ # Presence in self._remotepkgs implies that it's remote. When a
+ # package is downloaded, state is updated by self.inject().
+ return True
+
+ def get_pkgindex_uri(self, cpv):
+ """Returns the URI to the Packages file for a given package."""
+ uri = None
+ if self._remotepkgs is not None:
+ metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+ if metadata is not None:
+ uri = metadata["PKGINDEX_URI"]
+ return uri
+
+ def gettbz2(self, pkgname):
+ """Fetches the package from a remote site, if necessary. Attempts to
+ resume if the file appears to be partially downloaded."""
+ instance_key = self.dbapi._instance_key(pkgname)
+ tbz2_path = self.getname(pkgname)
+ tbz2name = os.path.basename(tbz2_path)
+ resume = False
+ if os.path.exists(tbz2_path):
+ if tbz2name[:-5] not in self.invalids:
+ return
+
+ resume = True
+ writemsg(
+ _(
+ "Resuming download of this tbz2, but it is possible that it is corrupt.\n"
+ ),
+ noiselevel=-1,
+ )
+
+ mydest = os.path.dirname(self.getname(pkgname))
+ self._ensure_dir(mydest)
+ # urljoin doesn't work correctly with unrecognized protocols like sftp
+ if self._remote_has_index:
+ rel_url = self._remotepkgs[instance_key].get("PATH")
+ if not rel_url:
+ rel_url = pkgname + ".tbz2"
+ remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
+ url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
+ else:
+ url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
+ protocol = urlparse(url)[0]
+ fcmd_prefix = "FETCHCOMMAND"
+ if resume:
+ fcmd_prefix = "RESUMECOMMAND"
+ fcmd = self.settings.get(fcmd_prefix + "_" + protocol.upper())
+ if not fcmd:
+ fcmd = self.settings.get(fcmd_prefix)
+ success = portage.getbinpkg.file_get(url, mydest, fcmd=fcmd)
+ if not success:
+ try:
+ os.unlink(self.getname(pkgname))
+ except OSError:
+ pass
+ raise portage.exception.FileNotFound(mydest)
+ self.inject(pkgname)
+
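(Standalone illustration of the manual URL join used above, with hypothetical
URLs: joining by hand behaves the same for any scheme, including ones like
sftp that generic URL libraries may not special-case.)

    def join_url(base, rel):
        return base.rstrip("/") + "/" + rel.lstrip("/")

    print(join_url("sftp://host/binpkgs", "app-misc/hello-1.0.tbz2"))
    # sftp://host/binpkgs/app-misc/hello-1.0.tbz2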
+ def _load_pkgindex(self):
+ pkgindex = self._new_pkgindex()
+ try:
+ f = io.open(
+ _unicode_encode(
+ self._pkgindex_file, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ )
+ except EnvironmentError:
+ pass
+ else:
+ try:
+ pkgindex.read(f)
+ finally:
+ f.close()
+ return pkgindex
+
+ def _get_digests(self, pkg):
+
+ try:
+ cpv = pkg.cpv
+ except AttributeError:
+ cpv = pkg
+
+ _instance_key = self.dbapi._instance_key
+ instance_key = _instance_key(cpv)
+ digests = {}
+ metadata = (
+ None if self._remotepkgs is None else self._remotepkgs.get(instance_key)
+ )
+ if metadata is None:
+ for d in self._load_pkgindex().packages:
+ if d["CPV"] == cpv and instance_key == _instance_key(
+ _pkg_str(d["CPV"], metadata=d, settings=self.settings)
+ ):
+ metadata = d
+ break
+
+ if metadata is None:
+ return digests
+
+ for k in get_valid_checksum_keys():
+ v = metadata.get(k)
+ if not v:
+ continue
+ digests[k] = v
+
+ if "SIZE" in metadata:
+ try:
+ digests["size"] = int(metadata["SIZE"])
+ except ValueError:
+ writemsg(
+ _("!!! Malformed SIZE attribute in remote " "metadata for '%s'\n")
+ % cpv
+ )
+
+ return digests
+
+ def digestCheck(self, pkg):
+ """
+ Verify digests for the given package and raise DigestException
+ if verification fails.
+ @rtype: bool
+ @return: True if digests could be located, False otherwise.
+ """
+
+ digests = self._get_digests(pkg)
+
+ if not digests:
+ return False
+
+ try:
+ cpv = pkg.cpv
+ except AttributeError:
+ cpv = pkg
+
+ pkg_path = self.getname(cpv)
+ hash_filter = _hash_filter(self.settings.get("PORTAGE_CHECKSUM_FILTER", ""))
+ if not hash_filter.transparent:
+ digests = _apply_hash_filter(digests, hash_filter)
+ eout = EOutput()
+ eout.quiet = self.settings.get("PORTAGE_QUIET") == "1"
+ ok, st = _check_distfile(pkg_path, digests, eout, show_errors=0)
+ if not ok:
+ ok, reason = verify_all(pkg_path, digests)
+ if not ok:
+ raise portage.exception.DigestException((pkg_path,) + tuple(reason))
+
+ return True
+
+ def getslot(self, mycatpkg):
+ "Get a slot for a catpkg; assume it exists."
+ myslot = ""
+ try:
+ myslot = self.dbapi._pkg_str(mycatpkg, None).slot
+ except KeyError:
+ pass
+ return myslot
diff --cc lib/portage/dbapi/vartree.py
index 749963fa9,8ffb23b1c..73202f625
--- a/lib/portage/dbapi/vartree.py
+++ b/lib/portage/dbapi/vartree.py
@@@ -1,62 -1,70 +1,72 @@@
# Copyright 1998-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- __all__ = [
- "vardbapi", "vartree", "dblink"] + \
- ["write_contents", "tar_contents"]
+ __all__ = ["vardbapi", "vartree", "dblink"] + ["write_contents", "tar_contents"]
import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 'hashlib:md5',
- 'portage.checksum:_perform_md5_merge@perform_md5',
- 'portage.data:portage_gid,portage_uid,secpass',
- 'portage.dbapi.dep_expand:dep_expand',
- 'portage.dbapi._MergeProcess:MergeProcess',
- 'portage.dbapi._SyncfsProcess:SyncfsProcess',
- 'portage.dep:dep_getkey,isjustname,isvalidatom,match_from_list,' + \
- 'use_reduce,_slot_separator,_repo_separator',
- 'portage.eapi:_get_eapi_attrs',
- 'portage.elog:collect_ebuild_messages,collect_messages,' + \
- 'elog_process,_merge_logentries',
- 'portage.locks:lockdir,unlockdir,lockfile,unlockfile',
- 'portage.output:bold,colorize',
- 'portage.package.ebuild.doebuild:doebuild_environment,' + \
- '_merge_unicode_error',
- 'portage.package.ebuild.prepare_build_dirs:prepare_build_dirs',
- 'portage.package.ebuild._ipc.QueryCommand:QueryCommand',
- 'portage.process:find_binary',
- 'portage.util:apply_secpass_permissions,ConfigProtect,ensure_dirs,' + \
- 'writemsg,writemsg_level,write_atomic,atomic_ofstream,writedict,' + \
- 'grabdict,normalize_path,new_protect_filename',
- 'portage.util._compare_files:compare_files',
- 'portage.util.digraph:digraph',
- 'portage.util.env_update:env_update',
- 'portage.util.install_mask:install_mask_dir,InstallMask,_raise_exc',
- 'portage.util.listdir:dircache,listdir',
- 'portage.util.movefile:movefile',
- 'portage.util.path:first_existing,iter_parents',
- 'portage.util.writeable_check:get_ro_checker',
- 'portage.util._xattr:xattr',
- 'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
- 'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
- 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
- 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
- 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
- 'portage.util._dyn_libs.NeededEntry:NeededEntry',
- 'portage.util._async.SchedulerInterface:SchedulerInterface',
- 'portage.util._eventloop.global_event_loop:global_event_loop',
- 'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
- '_get_slot_re,_pkgsplit@pkgsplit,_pkg_str,_unknown_repo',
- 'subprocess',
- 'tarfile',
+
+ portage.proxy.lazyimport.lazyimport(
+ globals(),
+ "hashlib:md5",
+ "portage.checksum:_perform_md5_merge@perform_md5",
+ "portage.data:portage_gid,portage_uid,secpass",
+ "portage.dbapi.dep_expand:dep_expand",
+ "portage.dbapi._MergeProcess:MergeProcess",
+ "portage.dbapi._SyncfsProcess:SyncfsProcess",
+ "portage.dep:dep_getkey,isjustname,isvalidatom,match_from_list,"
+ + "use_reduce,_slot_separator,_repo_separator",
+ "portage.eapi:_get_eapi_attrs",
+ "portage.elog:collect_ebuild_messages,collect_messages,"
+ + "elog_process,_merge_logentries",
+ "portage.locks:lockdir,unlockdir,lockfile,unlockfile",
+ "portage.output:bold,colorize",
+ "portage.package.ebuild.doebuild:doebuild_environment," + "_merge_unicode_error",
+ "portage.package.ebuild.prepare_build_dirs:prepare_build_dirs",
+ "portage.package.ebuild._ipc.QueryCommand:QueryCommand",
+ "portage.process:find_binary",
+ "portage.util:apply_secpass_permissions,ConfigProtect,ensure_dirs,"
+ + "writemsg,writemsg_level,write_atomic,atomic_ofstream,writedict,"
+ + "grabdict,normalize_path,new_protect_filename",
+ "portage.util._compare_files:compare_files",
+ "portage.util.digraph:digraph",
+ "portage.util.env_update:env_update",
+ "portage.util.install_mask:install_mask_dir,InstallMask,_raise_exc",
+ "portage.util.listdir:dircache,listdir",
+ "portage.util.movefile:movefile",
+ "portage.util.path:first_existing,iter_parents",
+ "portage.util.writeable_check:get_ro_checker",
+ "portage.util._xattr:xattr",
+ "portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry",
+ "portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap",
+ "portage.util._dyn_libs.NeededEntry:NeededEntry",
+ "portage.util._async.SchedulerInterface:SchedulerInterface",
+ "portage.util._eventloop.global_event_loop:global_event_loop",
+ "portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,"
+ + "_get_slot_re,_pkgsplit@pkgsplit,_pkg_str,_unknown_repo",
+ "subprocess",
+ "tarfile",
)
- from portage.const import CACHE_PATH, CONFIG_MEMORY_FILE, \
- MERGING_IDENTIFIER, PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH, EPREFIX
+ from portage.const import (
+ CACHE_PATH,
+ CONFIG_MEMORY_FILE,
+ MERGING_IDENTIFIER,
+ PORTAGE_PACKAGE_ATOM,
+ PRIVATE_PATH,
+ VDB_PATH,
++ # PREFIX LOCAL
++ EPREFIX,
+ )
from portage.dbapi import dbapi
- from portage.exception import CommandNotFound, \
- InvalidData, InvalidLocation, InvalidPackageName, \
- FileNotFound, PermissionDenied, UnsupportedAPIException
+ from portage.exception import (
+ CommandNotFound,
+ InvalidData,
+ InvalidLocation,
+ InvalidPackageName,
+ FileNotFound,
+ PermissionDenied,
+ UnsupportedAPIException,
+ )
from portage.localization import _
from portage.util.futures import asyncio
@@@ -105,5697 -112,6397 +114,6415 @@@ import warning
class vardbapi(dbapi):
- _excluded_dirs = ["CVS", "lost+found"]
- _excluded_dirs = [re.escape(x) for x in _excluded_dirs]
- _excluded_dirs = re.compile(r'^(\..*|' + MERGING_IDENTIFIER + '.*|' + \
- "|".join(_excluded_dirs) + r')$')
-
- _aux_cache_version = "1"
- _owners_cache_version = "1"
-
- # Number of uncached packages to trigger cache update, since
- # it's wasteful to update it for every vdb change.
- _aux_cache_threshold = 5
-
- _aux_cache_keys_re = re.compile(r'^NEEDED\..*$')
- _aux_multi_line_re = re.compile(r'^(CONTENTS|NEEDED\..*)$')
- _pkg_str_aux_keys = dbapi._pkg_str_aux_keys + ("BUILD_ID", "BUILD_TIME", "_mtime_")
-
- def __init__(self, _unused_param=DeprecationWarning,
- categories=None, settings=None, vartree=None):
- """
- The categories parameter is unused since the dbapi class
- now has a categories property that is generated from the
- available packages.
- """
-
- # Used by emerge to check whether any packages
- # have been added or removed.
- self._pkgs_changed = False
-
- # The _aux_cache_threshold doesn't work as designed
- # if the cache is flushed from a subprocess, so we
- # use this to avoid wasteful vdb cache updates.
- self._flush_cache_enabled = True
-
- #cache for category directory mtimes
- self.mtdircache = {}
-
- #cache for dependency checks
- self.matchcache = {}
-
- #cache for cp_list results
- self.cpcache = {}
-
- self.blockers = None
- if settings is None:
- settings = portage.settings
- self.settings = settings
-
- if _unused_param is not DeprecationWarning:
- warnings.warn("The first parameter of the "
- "portage.dbapi.vartree.vardbapi"
- " constructor is now unused. Instead "
- "settings['ROOT'] is used.",
- DeprecationWarning, stacklevel=2)
-
- self._eroot = settings['EROOT']
- self._dbroot = self._eroot + VDB_PATH
- self._lock = None
- self._lock_count = 0
-
- self._conf_mem_file = self._eroot + CONFIG_MEMORY_FILE
- self._fs_lock_obj = None
- self._fs_lock_count = 0
- self._slot_locks = {}
-
- if vartree is None:
- vartree = portage.db[settings['EROOT']]['vartree']
- self.vartree = vartree
- self._aux_cache_keys = set(
- ["BDEPEND", "BUILD_TIME", "CHOST", "COUNTER", "DEPEND",
- "DESCRIPTION", "EAPI", "HOMEPAGE",
- "BUILD_ID", "IDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "RDEPEND",
- "repository", "RESTRICT" , "SLOT", "USE", "DEFINED_PHASES",
- "PROVIDES", "REQUIRES"
- ])
- self._aux_cache_obj = None
- self._aux_cache_filename = os.path.join(self._eroot,
- CACHE_PATH, "vdb_metadata.pickle")
- self._cache_delta_filename = os.path.join(self._eroot,
- CACHE_PATH, "vdb_metadata_delta.json")
- self._cache_delta = VdbMetadataDelta(self)
- self._counter_path = os.path.join(self._eroot,
- CACHE_PATH, "counter")
-
- self._plib_registry = PreservedLibsRegistry(settings["ROOT"],
- os.path.join(self._eroot, PRIVATE_PATH, "preserved_libs_registry"))
- self._linkmap = LinkageMap(self)
- chost = self.settings.get('CHOST')
- if not chost:
- chost = 'lunix?' # this happens when profiles are not available
- if chost.find('darwin') >= 0:
- self._linkmap = LinkageMapMachO(self)
- elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
- self._linkmap = LinkageMapPeCoff(self)
- elif chost.find('aix') >= 0:
- self._linkmap = LinkageMapXCoff(self)
- else:
- self._linkmap = LinkageMap(self)
- self._owners = self._owners_db(self)
-
- self._cached_counter = None
-
- @property
- def writable(self):
- """
- Check if var/db/pkg is writable, or permissions are sufficient
- to create it if it does not exist yet.
- @rtype: bool
- @return: True if var/db/pkg is writable or can be created,
- False otherwise
- """
- return os.access(first_existing(self._dbroot), os.W_OK)
-
- @property
- def root(self):
- warnings.warn("The root attribute of "
- "portage.dbapi.vartree.vardbapi"
- " is deprecated. Use "
- "settings['ROOT'] instead.",
- DeprecationWarning, stacklevel=3)
- return self.settings['ROOT']
-
- def getpath(self, mykey, filename=None):
- # This is an optimized hotspot, so don't use unicode-wrapped
- # os module and don't use os.path.join().
- rValue = self._eroot + VDB_PATH + _os.sep + mykey
- if filename is not None:
- # If filename is always relative, we can do just
- # rValue += _os.sep + filename
- rValue = _os.path.join(rValue, filename)
- return rValue
-
- def lock(self):
- """
- Acquire a reentrant lock, blocking, for cooperation with concurrent
- processes. State is inherited by subprocesses, allowing subprocesses
- to reenter a lock that was acquired by a parent process. However,
- a lock can be released only by the same process that acquired it.
- """
- if self._lock_count:
- self._lock_count += 1
- else:
- if self._lock is not None:
- raise AssertionError("already locked")
- # At least the parent needs to exist for the lock file.
- ensure_dirs(self._dbroot)
- self._lock = lockdir(self._dbroot)
- self._lock_count += 1
-
- def unlock(self):
- """
- Release a lock, decrementing the recursion level. Each unlock() call
- must be matched with a prior lock() call, or else an AssertionError
- will be raised if unlock() is called while not locked.
- """
- if self._lock_count > 1:
- self._lock_count -= 1
- else:
- if self._lock is None:
- raise AssertionError("not locked")
- self._lock_count = 0
- unlockdir(self._lock)
- self._lock = None
-
- def _fs_lock(self):
- """
- Acquire a reentrant lock, blocking, for cooperation with concurrent
- processes.
- """
- if self._fs_lock_count < 1:
- if self._fs_lock_obj is not None:
- raise AssertionError("already locked")
- try:
- self._fs_lock_obj = lockfile(self._conf_mem_file)
- except InvalidLocation:
- self.settings._init_dirs()
- self._fs_lock_obj = lockfile(self._conf_mem_file)
- self._fs_lock_count += 1
-
- def _fs_unlock(self):
- """
- Release a lock, decrementing the recursion level.
- """
- if self._fs_lock_count <= 1:
- if self._fs_lock_obj is None:
- raise AssertionError("not locked")
- unlockfile(self._fs_lock_obj)
- self._fs_lock_obj = None
- self._fs_lock_count -= 1
-
- def _slot_lock(self, slot_atom):
- """
- Acquire a slot lock (reentrant).
-
- WARNING: The vardbapi._slot_lock method is not safe to call
- in the main process when that process is scheduling
- install/uninstall tasks in parallel, since the locks would
- be inherited by child processes. In order to avoid this sort
- of problem, this method should be called in a subprocess
- (typically spawned by the MergeProcess class).
- """
- lock, counter = self._slot_locks.get(slot_atom, (None, 0))
- if lock is None:
- lock_path = self.getpath("%s:%s" % (slot_atom.cp, slot_atom.slot))
- ensure_dirs(os.path.dirname(lock_path))
- lock = lockfile(lock_path, wantnewlockfile=True)
- self._slot_locks[slot_atom] = (lock, counter + 1)
-
- def _slot_unlock(self, slot_atom):
- """
- Release a slot lock (or decrementing recursion level).
- """
- lock, counter = self._slot_locks.get(slot_atom, (None, 0))
- if lock is None:
- raise AssertionError("not locked")
- counter -= 1
- if counter == 0:
- unlockfile(lock)
- del self._slot_locks[slot_atom]
- else:
- self._slot_locks[slot_atom] = (lock, counter)
-
- def _bump_mtime(self, cpv):
- """
- This is called before and after any modifications, so that consumers
- can use directory mtimes to validate caches. See bug #290428.
- """
- base = self._eroot + VDB_PATH
- cat = catsplit(cpv)[0]
- catdir = base + _os.sep + cat
- t = time.time()
- t = (t, t)
- try:
- for x in (catdir, base):
- os.utime(x, t)
- except OSError:
- ensure_dirs(catdir)
-
- def cpv_exists(self, mykey, myrepo=None):
- "Tells us whether an actual ebuild exists on disk (no masking)"
- return os.path.exists(self.getpath(mykey))
-
- def cpv_counter(self, mycpv):
- "This method will grab the COUNTER. Returns a counter value."
- try:
- return int(self.aux_get(mycpv, ["COUNTER"])[0])
- except (KeyError, ValueError):
- pass
- writemsg_level(_("portage: COUNTER for %s was corrupted; " \
- "resetting to value of 0\n") % (mycpv,),
- level=logging.ERROR, noiselevel=-1)
- return 0
-
- def cpv_inject(self, mycpv):
- "injects a real package into our on-disk database; assumes mycpv is valid and doesn't already exist"
- ensure_dirs(self.getpath(mycpv))
- counter = self.counter_tick(mycpv=mycpv)
- # write local package counter so that emerge clean does the right thing
- write_atomic(self.getpath(mycpv, filename="COUNTER"), str(counter))
-
- def isInjected(self, mycpv):
- if self.cpv_exists(mycpv):
- if os.path.exists(self.getpath(mycpv, filename="INJECTED")):
- return True
- if not os.path.exists(self.getpath(mycpv, filename="CONTENTS")):
- return True
- return False
-
- def move_ent(self, mylist, repo_match=None):
- origcp = mylist[1]
- newcp = mylist[2]
-
- # sanity check
- for atom in (origcp, newcp):
- if not isjustname(atom):
- raise InvalidPackageName(str(atom))
- origmatches = self.match(origcp, use_cache=0)
- moves = 0
- if not origmatches:
- return moves
- for mycpv in origmatches:
- mycpv_cp = mycpv.cp
- if mycpv_cp != origcp:
- # Ignore PROVIDE virtual match.
- continue
- if repo_match is not None \
- and not repo_match(mycpv.repo):
- continue
-
- # Use isvalidatom() to check if this move is valid for the
- # EAPI (characters allowed in package names may vary).
- if not isvalidatom(newcp, eapi=mycpv.eapi):
- continue
-
- mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
- mynewcat = catsplit(newcp)[0]
- origpath = self.getpath(mycpv)
- if not os.path.exists(origpath):
- continue
- moves += 1
- if not os.path.exists(self.getpath(mynewcat)):
- #create the directory
- ensure_dirs(self.getpath(mynewcat))
- newpath = self.getpath(mynewcpv)
- if os.path.exists(newpath):
- #dest already exists; keep this puppy where it is.
- continue
- _movefile(origpath, newpath, mysettings=self.settings)
- self._clear_pkg_cache(self._dblink(mycpv))
- self._clear_pkg_cache(self._dblink(mynewcpv))
-
- # We need to rename the ebuild now.
- old_pf = catsplit(mycpv)[1]
- new_pf = catsplit(mynewcpv)[1]
- if new_pf != old_pf:
- try:
- os.rename(os.path.join(newpath, old_pf + ".ebuild"),
- os.path.join(newpath, new_pf + ".ebuild"))
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- write_atomic(os.path.join(newpath, "PF"), new_pf+"\n")
- write_atomic(os.path.join(newpath, "CATEGORY"), mynewcat+"\n")
-
- return moves
-
- def cp_list(self, mycp, use_cache=1):
- mysplit=catsplit(mycp)
- if mysplit[0] == '*':
- mysplit[0] = mysplit[0][1:]
- try:
- mystat = os.stat(self.getpath(mysplit[0])).st_mtime_ns
- except OSError:
- mystat = 0
- if use_cache and mycp in self.cpcache:
- cpc = self.cpcache[mycp]
- if cpc[0] == mystat:
- return cpc[1][:]
- cat_dir = self.getpath(mysplit[0])
- try:
- dir_list = os.listdir(cat_dir)
- except EnvironmentError as e:
- if e.errno == PermissionDenied.errno:
- raise PermissionDenied(cat_dir)
- del e
- dir_list = []
-
- returnme = []
- for x in dir_list:
- if self._excluded_dirs.match(x) is not None:
- continue
- ps = pkgsplit(x)
- if not ps:
- self.invalidentry(os.path.join(self.getpath(mysplit[0]), x))
- continue
- if len(mysplit) > 1:
- if ps[0] == mysplit[1]:
- cpv = "%s/%s" % (mysplit[0], x)
- metadata = dict(zip(self._aux_cache_keys,
- self.aux_get(cpv, self._aux_cache_keys)))
- returnme.append(_pkg_str(cpv, metadata=metadata,
- settings=self.settings, db=self))
- self._cpv_sort_ascending(returnme)
- if use_cache:
- self.cpcache[mycp] = [mystat, returnme[:]]
- elif mycp in self.cpcache:
- del self.cpcache[mycp]
- return returnme
-
- def cpv_all(self, use_cache=1):
- """
- Set use_cache=0 to bypass the portage.cachedir() cache in cases
- when the accuracy of mtime staleness checks should not be trusted
- (generally this is only necessary in critical sections that
- involve merge or unmerge of packages).
- """
- return list(self._iter_cpv_all(use_cache=use_cache))
-
- def _iter_cpv_all(self, use_cache=True, sort=False):
- returnme = []
- basepath = os.path.join(self._eroot, VDB_PATH) + os.path.sep
-
- if use_cache:
- from portage import listdir
- else:
- def listdir(p, **kwargs):
- try:
- return [x for x in os.listdir(p) \
- if os.path.isdir(os.path.join(p, x))]
- except EnvironmentError as e:
- if e.errno == PermissionDenied.errno:
- raise PermissionDenied(p)
- del e
- return []
-
- catdirs = listdir(basepath, EmptyOnError=1, ignorecvs=1, dirsonly=1)
- if sort:
- catdirs.sort()
-
- for x in catdirs:
- if self._excluded_dirs.match(x) is not None:
- continue
- if not self._category_re.match(x):
- continue
-
- pkgdirs = listdir(basepath + x, EmptyOnError=1, dirsonly=1)
- if sort:
- pkgdirs.sort()
-
- for y in pkgdirs:
- if self._excluded_dirs.match(y) is not None:
- continue
- subpath = x + "/" + y
- # -MERGING- should never be a cpv, nor should files.
- try:
- subpath = _pkg_str(subpath, db=self)
- except InvalidData:
- self.invalidentry(self.getpath(subpath))
- continue
-
- yield subpath
-
- def cp_all(self, use_cache=1, sort=False):
- mylist = self.cpv_all(use_cache=use_cache)
- d={}
- for y in mylist:
- if y[0] == '*':
- y = y[1:]
- try:
- mysplit = catpkgsplit(y)
- except InvalidData:
- self.invalidentry(self.getpath(y))
- continue
- if not mysplit:
- self.invalidentry(self.getpath(y))
- continue
- d[mysplit[0]+"/"+mysplit[1]] = None
- return sorted(d) if sort else list(d)
-
- def checkblockers(self, origdep):
- pass
-
- def _clear_cache(self):
- self.mtdircache.clear()
- self.matchcache.clear()
- self.cpcache.clear()
- self._aux_cache_obj = None
-
- def _add(self, pkg_dblink):
- self._pkgs_changed = True
- self._clear_pkg_cache(pkg_dblink)
-
- def _remove(self, pkg_dblink):
- self._pkgs_changed = True
- self._clear_pkg_cache(pkg_dblink)
-
- def _clear_pkg_cache(self, pkg_dblink):
- # Due to 1 second mtime granularity in <python-2.5, mtime checks
- # are not always sufficient to invalidate vardbapi caches. Therefore,
- # the caches need to be actively invalidated here.
- self.mtdircache.pop(pkg_dblink.cat, None)
- self.matchcache.pop(pkg_dblink.cat, None)
- self.cpcache.pop(pkg_dblink.mysplit[0], None)
- dircache.pop(pkg_dblink.dbcatdir, None)
-
- def match(self, origdep, use_cache=1):
- "caching match function"
- mydep = dep_expand(
- origdep, mydb=self, use_cache=use_cache, settings=self.settings)
- cache_key = (mydep, mydep.unevaluated_atom)
- mykey = dep_getkey(mydep)
- mycat = catsplit(mykey)[0]
- if not use_cache:
- if mycat in self.matchcache:
- del self.mtdircache[mycat]
- del self.matchcache[mycat]
- return list(self._iter_match(mydep,
- self.cp_list(mydep.cp, use_cache=use_cache)))
- try:
- curmtime = os.stat(os.path.join(self._eroot, VDB_PATH, mycat)).st_mtime_ns
- except (IOError, OSError):
- curmtime=0
-
- if mycat not in self.matchcache or \
- self.mtdircache[mycat] != curmtime:
- # clear cache entry
- self.mtdircache[mycat] = curmtime
- self.matchcache[mycat] = {}
- if mydep not in self.matchcache[mycat]:
- mymatch = list(self._iter_match(mydep,
- self.cp_list(mydep.cp, use_cache=use_cache)))
- self.matchcache[mycat][cache_key] = mymatch
- return self.matchcache[mycat][cache_key][:]
-
- def findname(self, mycpv, myrepo=None):
- return self.getpath(str(mycpv), filename=catsplit(mycpv)[1]+".ebuild")
-
- def flush_cache(self):
- """If the current user has permission and the internal aux_get cache has
- been updated, save it to disk and mark it unmodified. This is called
- by emerge after it has loaded the full vdb for use in dependency
- calculations. Currently, the cache is only written if the user has
- superuser privileges (since that's required to obtain a lock), but all
- users have read access and benefit from faster metadata lookups (as
- long as at least part of the cache is still valid)."""
- if self._flush_cache_enabled and \
- self._aux_cache is not None and \
- secpass >= 2 and \
- (len(self._aux_cache["modified"]) >= self._aux_cache_threshold or
- not os.path.exists(self._cache_delta_filename)):
-
- ensure_dirs(os.path.dirname(self._aux_cache_filename))
-
- self._owners.populate() # index any unindexed contents
- valid_nodes = set(self.cpv_all())
- for cpv in list(self._aux_cache["packages"]):
- if cpv not in valid_nodes:
- del self._aux_cache["packages"][cpv]
- del self._aux_cache["modified"]
- timestamp = time.time()
- self._aux_cache["timestamp"] = timestamp
-
- with atomic_ofstream(self._aux_cache_filename, 'wb') as f:
- pickle.dump(self._aux_cache, f, protocol=2)
-
- apply_secpass_permissions(
- self._aux_cache_filename, mode=0o644)
-
- self._cache_delta.initialize(timestamp)
- apply_secpass_permissions(
- self._cache_delta_filename, mode=0o644)
-
- self._aux_cache["modified"] = set()
-
- @property
- def _aux_cache(self):
- if self._aux_cache_obj is None:
- self._aux_cache_init()
- return self._aux_cache_obj
-
- def _aux_cache_init(self):
- aux_cache = None
- open_kwargs = {}
- try:
- with open(_unicode_encode(self._aux_cache_filename,
- encoding=_encodings['fs'], errors='strict'),
- mode='rb', **open_kwargs) as f:
- mypickle = pickle.Unpickler(f)
- try:
- mypickle.find_global = None
- except AttributeError:
- # TODO: If py3k, override Unpickler.find_class().
- pass
- aux_cache = mypickle.load()
- except (SystemExit, KeyboardInterrupt):
- raise
- except Exception as e:
- if isinstance(e, EnvironmentError) and \
- getattr(e, 'errno', None) in (errno.ENOENT, errno.EACCES):
- pass
- else:
- writemsg(_("!!! Error loading '%s': %s\n") % \
- (self._aux_cache_filename, e), noiselevel=-1)
- del e
-
- if not aux_cache or \
- not isinstance(aux_cache, dict) or \
- aux_cache.get("version") != self._aux_cache_version or \
- not aux_cache.get("packages"):
- aux_cache = {"version": self._aux_cache_version}
- aux_cache["packages"] = {}
-
- owners = aux_cache.get("owners")
- if owners is not None:
- if not isinstance(owners, dict):
- owners = None
- elif "version" not in owners:
- owners = None
- elif owners["version"] != self._owners_cache_version:
- owners = None
- elif "base_names" not in owners:
- owners = None
- elif not isinstance(owners["base_names"], dict):
- owners = None
-
- if owners is None:
- owners = {
- "base_names" : {},
- "version" : self._owners_cache_version
- }
- aux_cache["owners"] = owners
-
- aux_cache["modified"] = set()
- self._aux_cache_obj = aux_cache
-
- def aux_get(self, mycpv, wants, myrepo = None):
- """This automatically caches selected keys that are frequently needed
- by emerge for dependency calculations. The cached metadata is
- considered valid if the mtime of the package directory has not changed
- since the data was cached. The cache is stored in a pickled dict
- object with the following format:
-
- {version:"1", "packages":{cpv1:(mtime,{k1,v1, k2,v2, ...}), cpv2...}}
-
- If an error occurs while loading the cache pickle or the version is
- unrecognized, the cache will simply be recreated from scratch (it is
- completely disposable).
- """
- cache_these_wants = self._aux_cache_keys.intersection(wants)
- for x in wants:
- if self._aux_cache_keys_re.match(x) is not None:
- cache_these_wants.add(x)
-
- if not cache_these_wants:
- mydata = self._aux_get(mycpv, wants)
- return [mydata[x] for x in wants]
-
- cache_these = set(self._aux_cache_keys)
- cache_these.update(cache_these_wants)
-
- mydir = self.getpath(mycpv)
- mydir_stat = None
- try:
- mydir_stat = os.stat(mydir)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- raise KeyError(mycpv)
- # Use float mtime when available.
- mydir_mtime = mydir_stat.st_mtime
- pkg_data = self._aux_cache["packages"].get(mycpv)
- pull_me = cache_these.union(wants)
- mydata = {"_mtime_" : mydir_mtime}
- cache_valid = False
- cache_incomplete = False
- cache_mtime = None
- metadata = None
- if pkg_data is not None:
- if not isinstance(pkg_data, tuple) or len(pkg_data) != 2:
- pkg_data = None
- else:
- cache_mtime, metadata = pkg_data
- if not isinstance(cache_mtime, (float, int)) or \
- not isinstance(metadata, dict):
- pkg_data = None
-
- if pkg_data:
- cache_mtime, metadata = pkg_data
- if isinstance(cache_mtime, float):
- if cache_mtime == mydir_stat.st_mtime:
- cache_valid = True
-
- # Handle truncated mtime in order to avoid cache
- # invalidation for livecd squashfs (bug 564222).
- elif int(cache_mtime) == mydir_stat.st_mtime:
- cache_valid = True
- else:
- # Cache may contain integer mtime.
- cache_valid = cache_mtime == mydir_stat[stat.ST_MTIME]
-
- if cache_valid:
- # Migrate old metadata to unicode.
- for k, v in metadata.items():
- metadata[k] = _unicode_decode(v,
- encoding=_encodings['repo.content'], errors='replace')
-
- mydata.update(metadata)
- pull_me.difference_update(mydata)
-
- if pull_me:
- # pull any needed data and cache it
- aux_keys = list(pull_me)
- mydata.update(self._aux_get(mycpv, aux_keys, st=mydir_stat))
- if not cache_valid or cache_these.difference(metadata):
- cache_data = {}
- if cache_valid and metadata:
- cache_data.update(metadata)
- for aux_key in cache_these:
- cache_data[aux_key] = mydata[aux_key]
- self._aux_cache["packages"][str(mycpv)] = \
- (mydir_mtime, cache_data)
- self._aux_cache["modified"].add(mycpv)
-
- eapi_attrs = _get_eapi_attrs(mydata['EAPI'])
- if _get_slot_re(eapi_attrs).match(mydata['SLOT']) is None:
- # Empty or invalid slot triggers InvalidAtom exceptions when
- # generating slot atoms for packages, so translate it to '0' here.
- mydata['SLOT'] = '0'
-
- return [mydata[x] for x in wants]
-
- def _aux_get(self, mycpv, wants, st=None):
- mydir = self.getpath(mycpv)
- if st is None:
- try:
- st = os.stat(mydir)
- except OSError as e:
- if e.errno == errno.ENOENT:
- raise KeyError(mycpv)
- elif e.errno == PermissionDenied.errno:
- raise PermissionDenied(mydir)
- else:
- raise
- if not stat.S_ISDIR(st.st_mode):
- raise KeyError(mycpv)
- results = {}
- env_keys = []
- for x in wants:
- if x == "_mtime_":
- results[x] = st[stat.ST_MTIME]
- continue
- try:
- with io.open(
- _unicode_encode(os.path.join(mydir, x),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace') as f:
- myd = f.read()
- except IOError:
- if x not in self._aux_cache_keys and \
- self._aux_cache_keys_re.match(x) is None:
- env_keys.append(x)
- continue
- myd = ''
-
- # Preserve \n for metadata that is known to
- # contain multiple lines.
- if self._aux_multi_line_re.match(x) is None:
- myd = " ".join(myd.split())
-
- results[x] = myd
-
- if env_keys:
- env_results = self._aux_env_search(mycpv, env_keys)
- for k in env_keys:
- v = env_results.get(k)
- if v is None:
- v = ''
- if self._aux_multi_line_re.match(k) is None:
- v = " ".join(v.split())
- results[k] = v
-
- if results.get("EAPI") == "":
- results["EAPI"] = '0'
-
- return results
-
- def _aux_env_search(self, cpv, variables):
- """
- Search environment.bz2 for the specified variables. Returns
- a dict mapping variables to values, and any variables not
- found in the environment will not be included in the dict.
- This is useful for querying variables like ${SRC_URI} and
- ${A}, which are not saved in separate files but are available
- in environment.bz2 (see bug #395463).
- """
- env_file = self.getpath(cpv, filename="environment.bz2")
- if not os.path.isfile(env_file):
- return {}
- bunzip2_cmd = portage.util.shlex_split(
- self.settings.get("PORTAGE_BUNZIP2_COMMAND", ""))
- if not bunzip2_cmd:
- bunzip2_cmd = portage.util.shlex_split(
- self.settings["PORTAGE_BZIP2_COMMAND"])
- bunzip2_cmd.append("-d")
- args = bunzip2_cmd + ["-c", env_file]
- try:
- proc = subprocess.Popen(args, stdout=subprocess.PIPE)
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- raise portage.exception.CommandNotFound(args[0])
-
- # Parts of the following code are borrowed from
- # filter-bash-environment.py (keep them in sync).
- var_assign_re = re.compile(r'(^|^declare\s+-\S+\s+|^declare\s+|^export\s+)([^=\s]+)=("|\')?(.*)$')
- close_quote_re = re.compile(r'(\\"|"|\')\s*$')
- def have_end_quote(quote, line):
- close_quote_match = close_quote_re.search(line)
- return close_quote_match is not None and \
- close_quote_match.group(1) == quote
-
- variables = frozenset(variables)
- results = {}
- for line in proc.stdout:
- line = _unicode_decode(line,
- encoding=_encodings['content'], errors='replace')
- var_assign_match = var_assign_re.match(line)
- if var_assign_match is not None:
- key = var_assign_match.group(2)
- quote = var_assign_match.group(3)
- if quote is not None:
- if have_end_quote(quote,
- line[var_assign_match.end(2)+2:]):
- value = var_assign_match.group(4)
- else:
- value = [var_assign_match.group(4)]
- for line in proc.stdout:
- line = _unicode_decode(line,
- encoding=_encodings['content'],
- errors='replace')
- value.append(line)
- if have_end_quote(quote, line):
- break
- value = ''.join(value)
- # remove trailing quote and whitespace
- value = value.rstrip()[:-1]
- else:
- value = var_assign_match.group(4).rstrip()
-
- if key in variables:
- results[key] = value
-
- proc.wait()
- proc.stdout.close()
- return results
-
- def aux_update(self, cpv, values):
- mylink = self._dblink(cpv)
- if not mylink.exists():
- raise KeyError(cpv)
- self._bump_mtime(cpv)
- self._clear_pkg_cache(mylink)
- for k, v in values.items():
- if v:
- mylink.setfile(k, v)
- else:
- try:
- os.unlink(os.path.join(self.getpath(cpv), k))
- except EnvironmentError:
- pass
- self._bump_mtime(cpv)
-
- @coroutine
- def unpack_metadata(self, pkg, dest_dir, loop=None):
- """
- Unpack package metadata to a directory. This method is a coroutine.
-
- @param pkg: package to unpack
- @type pkg: _pkg_str or portage.config
- @param dest_dir: destination directory
- @type dest_dir: str
- """
- loop = asyncio._wrap_loop(loop)
- if not isinstance(pkg, portage.config):
- cpv = pkg
- else:
- cpv = pkg.mycpv
- dbdir = self.getpath(cpv)
- def async_copy():
- for parent, dirs, files in os.walk(dbdir, onerror=_raise_exc):
- for key in files:
- shutil.copy(os.path.join(parent, key),
- os.path.join(dest_dir, key))
- break
- yield loop.run_in_executor(ForkExecutor(loop=loop), async_copy)
-
- @coroutine
- def unpack_contents(self, pkg, dest_dir,
- include_config=None, include_unmodified_config=None, loop=None):
- """
- Unpack package contents to a directory. This method is a coroutine.
-
- This copies files from the installed system, in the same way
- as the quickpkg(1) command. Default behavior for handling
- of protected configuration files is controlled by the
- QUICKPKG_DEFAULT_OPTS variable. The relevant quickpkg options
- are --include-config and --include-unmodified-config. When
- a configuration file is not included because it is protected,
- an ewarn message is logged.
-
- @param pkg: package to unpack
- @type pkg: _pkg_str or portage.config
- @param dest_dir: destination directory
- @type dest_dir: str
- @param include_config: Include all files protected by
- CONFIG_PROTECT (as a security precaution, default is False
- unless modified by QUICKPKG_DEFAULT_OPTS).
- @type include_config: bool
- @param include_unmodified_config: Include files protected by
- CONFIG_PROTECT that have not been modified since installation
- (as a security precaution, default is False unless modified
- by QUICKPKG_DEFAULT_OPTS).
- @type include_unmodified_config: bool
- """
- loop = asyncio._wrap_loop(loop)
- if not isinstance(pkg, portage.config):
- settings = self.settings
- cpv = pkg
- else:
- settings = pkg
- cpv = settings.mycpv
-
- scheduler = SchedulerInterface(loop)
- parser = argparse.ArgumentParser()
- parser.add_argument('--include-config',
- choices=('y', 'n'),
- default='n')
- parser.add_argument('--include-unmodified-config',
- choices=('y', 'n'),
- default='n')
-
- # Method parameters may override QUICKPKG_DEFAULT_OPTS.
- opts_list = portage.util.shlex_split(settings.get('QUICKPKG_DEFAULT_OPTS', ''))
- if include_config is not None:
- opts_list.append('--include-config={}'.format(
- 'y' if include_config else 'n'))
- if include_unmodified_config is not None:
- opts_list.append('--include-unmodified-config={}'.format(
- 'y' if include_unmodified_config else 'n'))
-
- opts, args = parser.parse_known_args(opts_list)
-
- tar_cmd = ('tar', '-x', '--xattrs', '--xattrs-include=*', '-C', dest_dir)
- pr, pw = os.pipe()
- proc = (yield asyncio.create_subprocess_exec(*tar_cmd, stdin=pr))
- os.close(pr)
- with os.fdopen(pw, 'wb', 0) as pw_file:
- excluded_config_files = (yield loop.run_in_executor(ForkExecutor(loop=loop),
- functools.partial(self._dblink(cpv).quickpkg,
- pw_file,
- include_config=opts.include_config == 'y',
- include_unmodified_config=opts.include_unmodified_config == 'y')))
- yield proc.wait()
- if proc.returncode != os.EX_OK:
- raise PortageException('command failed: {}'.format(tar_cmd))
-
- if excluded_config_files:
- log_lines = ([_("Config files excluded by QUICKPKG_DEFAULT_OPTS (see quickpkg(1) man page):")] +
- ['\t{}'.format(name) for name in excluded_config_files])
- out = io.StringIO()
- for line in log_lines:
- portage.elog.messages.ewarn(line, phase='install', key=cpv, out=out)
- scheduler.output(out.getvalue(),
- background=self.settings.get("PORTAGE_BACKGROUND") == "1",
- log_path=settings.get("PORTAGE_LOG_FILE"))
-
- def counter_tick(self, myroot=None, mycpv=None):
- """
- @param myroot: ignored, self._eroot is used instead
- """
- return self.counter_tick_core(incrementing=1, mycpv=mycpv)
-
- def get_counter_tick_core(self, myroot=None, mycpv=None):
- """
- Use this method to retrieve the counter instead
- of having to trust the value of a global counter
- file that can lead to invalid COUNTER
- generation. When cache is valid, the package COUNTER
- files are not read and we rely on the timestamp of
- the package directory to validate cache. The stat
- calls should only take a short time, so performance
- is sufficient without having to rely on a potentially
- corrupt global counter file.
-
- The global counter file located at
- $CACHE_PATH/counter serves to record the
- counter of the last installed package and
- it also corresponds to the total number of
- installation actions that have occurred in
- the history of this package database.
-
- @param myroot: ignored, self._eroot is used instead
- """
- del myroot
- counter = -1
- try:
- with io.open(
- _unicode_encode(self._counter_path,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace') as f:
- try:
- counter = int(f.readline().strip())
- except (OverflowError, ValueError) as e:
- writemsg(_("!!! COUNTER file is corrupt: '%s'\n") %
- self._counter_path, noiselevel=-1)
- writemsg("!!! %s\n" % (e,), noiselevel=-1)
- except EnvironmentError as e:
- # Silently allow ENOENT since files under
- # /var/cache/ are allowed to disappear.
- if e.errno != errno.ENOENT:
- writemsg(_("!!! Unable to read COUNTER file: '%s'\n") % \
- self._counter_path, noiselevel=-1)
- writemsg("!!! %s\n" % str(e), noiselevel=-1)
- del e
-
- if self._cached_counter == counter:
- max_counter = counter
- else:
- # We must ensure that we return a counter
- # value that is at least as large as the
- # highest one from the installed packages,
- # since having a corrupt value that is too low
- # can trigger incorrect AUTOCLEAN behavior due
- # to newly installed packages having lower
- # COUNTERs than the previous version in the
- # same slot.
- max_counter = counter
- for cpv in self.cpv_all():
- try:
- pkg_counter = int(self.aux_get(cpv, ["COUNTER"])[0])
- except (KeyError, OverflowError, ValueError):
- continue
- if pkg_counter > max_counter:
- max_counter = pkg_counter
-
- return max_counter + 1
-
- def counter_tick_core(self, myroot=None, incrementing=1, mycpv=None):
- """
- This method will grab the next COUNTER value and record it back
- to the global file. Note that every package install must have
- a unique counter, since a slotmove update can move two packages
- into the same SLOT and in that case it's important that both
- packages have different COUNTER metadata.
-
- @param myroot: ignored, self._eroot is used instead
- @param mycpv: ignored
- @rtype: int
- @return: new counter value
- """
- myroot = None
- mycpv = None
- self.lock()
- try:
- counter = self.get_counter_tick_core() - 1
- if incrementing:
- #increment counter
- counter += 1
- # update new global counter file
- try:
- write_atomic(self._counter_path, str(counter))
- except InvalidLocation:
- self.settings._init_dirs()
- write_atomic(self._counter_path, str(counter))
- self._cached_counter = counter
-
- # Since we hold a lock, this is a good opportunity
- # to flush the cache. Note that this will only
- # flush the cache periodically in the main process
- # when _aux_cache_threshold is exceeded.
- self.flush_cache()
- finally:
- self.unlock()
-
- return counter
-
- def _dblink(self, cpv):
- category, pf = catsplit(cpv)
- return dblink(category, pf, settings=self.settings,
- vartree=self.vartree, treetype="vartree")
-
- def removeFromContents(self, pkg, paths, relative_paths=True):
- """
- @param pkg: cpv for an installed package
- @type pkg: string
- @param paths: paths of files to remove from contents
- @type paths: iterable
- """
- if not hasattr(pkg, "getcontents"):
- pkg = self._dblink(pkg)
- root = self.settings['ROOT']
- root_len = len(root) - 1
- new_contents = pkg.getcontents().copy()
- removed = 0
-
- for filename in paths:
- filename = _unicode_decode(filename,
- encoding=_encodings['content'], errors='strict')
- filename = normalize_path(filename)
- if relative_paths:
- relative_filename = filename
- else:
- relative_filename = filename[root_len:]
- contents_key = pkg._match_contents(relative_filename)
- if contents_key:
- # It's possible for two different paths to refer to the same
- # contents_key, due to directory symlinks. Therefore, pass a
- # default value to pop, in order to avoid a KeyError which
- # could otherwise be triggered (see bug #454400).
- new_contents.pop(contents_key, None)
- removed += 1
-
- if removed:
-			# Also remove corresponding NEEDED lines, so that they do
-			# not corrupt LinkageMap data for preserve-libs.
- needed_filename = os.path.join(pkg.dbdir, LinkageMap._needed_aux_key)
- new_needed = None
- try:
- with io.open(_unicode_encode(needed_filename,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace') as f:
- needed_lines = f.readlines()
- except IOError as e:
- if e.errno not in (errno.ENOENT, errno.ESTALE):
- raise
- else:
- new_needed = []
- for l in needed_lines:
- l = l.rstrip("\n")
- if not l:
- continue
- try:
- entry = NeededEntry.parse(needed_filename, l)
- except InvalidData as e:
- writemsg_level("\n%s\n\n" % (e,),
- level=logging.ERROR, noiselevel=-1)
- continue
-
- filename = os.path.join(root, entry.filename.lstrip(os.sep))
- if filename in new_contents:
- new_needed.append(entry)
-
- self.writeContentsToContentsFile(pkg, new_contents, new_needed=new_needed)
-
- def writeContentsToContentsFile(self, pkg, new_contents, new_needed=None):
- """
- @param pkg: package to write contents file for
- @type pkg: dblink
- @param new_contents: contents to write to CONTENTS file
- @type new_contents: contents dictionary of the form
- {u'/path/to/file' : (contents_attribute 1, ...), ...}
- @param new_needed: new NEEDED entries
- @type new_needed: list of NeededEntry
- """
- root = self.settings['ROOT']
- self._bump_mtime(pkg.mycpv)
- if new_needed is not None:
- f = atomic_ofstream(os.path.join(pkg.dbdir, LinkageMap._needed_aux_key))
- for entry in new_needed:
- f.write(str(entry))
- f.close()
- f = atomic_ofstream(os.path.join(pkg.dbdir, "CONTENTS"))
- write_contents(new_contents, root, f)
- f.close()
- self._bump_mtime(pkg.mycpv)
- pkg._clear_contents_cache()
-
- class _owners_cache:
- """
-		This class maintains a hash table that serves to index package
-		contents by mapping the basename of a file to a list of possible
- packages that own it. This is used to optimize owner lookups
- by narrowing the search down to a smaller number of packages.
- """
- _new_hash = md5
- _hash_bits = 16
- _hex_chars = _hash_bits // 4
-
- def __init__(self, vardb):
- self._vardb = vardb
-
- def add(self, cpv):
- eroot_len = len(self._vardb._eroot)
- pkg_hash = self._hash_pkg(cpv)
- db = self._vardb._dblink(cpv)
- if not db.getcontents():
- # Empty path is a code used to represent empty contents.
- self._add_path("", pkg_hash)
-
- for x in db._contents.keys():
- self._add_path(x[eroot_len:], pkg_hash)
-
- self._vardb._aux_cache["modified"].add(cpv)
-
- def _add_path(self, path, pkg_hash):
- """
- Empty path is a code that represents empty contents.
- """
- if path:
- name = os.path.basename(path.rstrip(os.path.sep))
- if not name:
- return
- else:
- name = path
- name_hash = self._hash_str(name)
- base_names = self._vardb._aux_cache["owners"]["base_names"]
- pkgs = base_names.get(name_hash)
- if pkgs is None:
- pkgs = {}
- base_names[name_hash] = pkgs
- pkgs[pkg_hash] = None
-
- def _hash_str(self, s):
- h = self._new_hash()
- # Always use a constant utf_8 encoding here, since
- # the "default" encoding can change.
- h.update(_unicode_encode(s,
- encoding=_encodings['repo.content'],
- errors='backslashreplace'))
- h = h.hexdigest()
- h = h[-self._hex_chars:]
- h = int(h, 16)
- return h
-
- def _hash_pkg(self, cpv):
- counter, mtime = self._vardb.aux_get(
- cpv, ["COUNTER", "_mtime_"])
- try:
- counter = int(counter)
- except ValueError:
- counter = 0
- return (str(cpv), counter, mtime)
-
- class _owners_db:
-
- def __init__(self, vardb):
- self._vardb = vardb
-
- def populate(self):
- self._populate()
-
- def _populate(self):
- owners_cache = vardbapi._owners_cache(self._vardb)
- cached_hashes = set()
- base_names = self._vardb._aux_cache["owners"]["base_names"]
-
- # Take inventory of all cached package hashes.
- for name, hash_values in list(base_names.items()):
- if not isinstance(hash_values, dict):
- del base_names[name]
- continue
- cached_hashes.update(hash_values)
-
- # Create sets of valid package hashes and uncached packages.
- uncached_pkgs = set()
- hash_pkg = owners_cache._hash_pkg
- valid_pkg_hashes = set()
- for cpv in self._vardb.cpv_all():
- hash_value = hash_pkg(cpv)
- valid_pkg_hashes.add(hash_value)
- if hash_value not in cached_hashes:
- uncached_pkgs.add(cpv)
-
- # Cache any missing packages.
- for cpv in uncached_pkgs:
- owners_cache.add(cpv)
-
- # Delete any stale cache.
- stale_hashes = cached_hashes.difference(valid_pkg_hashes)
- if stale_hashes:
- for base_name_hash, bucket in list(base_names.items()):
- for hash_value in stale_hashes.intersection(bucket):
- del bucket[hash_value]
- if not bucket:
- del base_names[base_name_hash]
-
- return owners_cache
-
- def get_owners(self, path_iter):
- """
- @return the owners as a dblink -> set(files) mapping.
- """
- owners = {}
- for owner, f in self.iter_owners(path_iter):
- owned_files = owners.get(owner)
- if owned_files is None:
- owned_files = set()
- owners[owner] = owned_files
- owned_files.add(f)
- return owners
-
- def getFileOwnerMap(self, path_iter):
- owners = self.get_owners(path_iter)
- file_owners = {}
- for pkg_dblink, files in owners.items():
- for f in files:
- owner_set = file_owners.get(f)
- if owner_set is None:
- owner_set = set()
- file_owners[f] = owner_set
- owner_set.add(pkg_dblink)
- return file_owners
-
- def iter_owners(self, path_iter):
- """
- Iterate over tuples of (dblink, path). In order to avoid
- consuming too many resources for too much time, resources
- are only allocated for the duration of a given iter_owners()
- call. Therefore, to maximize reuse of resources when searching
- for multiple files, it's best to search for them all in a single
- call.
- """
-
- if not isinstance(path_iter, list):
- path_iter = list(path_iter)
- owners_cache = self._populate()
- vardb = self._vardb
- root = vardb._eroot
- hash_pkg = owners_cache._hash_pkg
- hash_str = owners_cache._hash_str
- base_names = self._vardb._aux_cache["owners"]["base_names"]
- case_insensitive = "case-insensitive-fs" \
- in vardb.settings.features
-
- dblink_cache = {}
-
- def dblink(cpv):
- x = dblink_cache.get(cpv)
- if x is None:
- if len(dblink_cache) > 20:
- # Ensure that we don't run out of memory.
- raise StopIteration()
- x = self._vardb._dblink(cpv)
- dblink_cache[cpv] = x
- return x
-
- while path_iter:
-
- path = path_iter.pop()
- if case_insensitive:
- path = path.lower()
- is_basename = os.sep != path[:1]
- if is_basename:
- name = path
- else:
- name = os.path.basename(path.rstrip(os.path.sep))
-
- if not name:
- continue
-
- name_hash = hash_str(name)
- pkgs = base_names.get(name_hash)
- owners = []
- if pkgs is not None:
- try:
- for hash_value in pkgs:
- if not isinstance(hash_value, tuple) or \
- len(hash_value) != 3:
- continue
- cpv, counter, mtime = hash_value
- if not isinstance(cpv, str):
- continue
- try:
- current_hash = hash_pkg(cpv)
- except KeyError:
- continue
-
- if current_hash != hash_value:
- continue
-
- if is_basename:
- for p in dblink(cpv)._contents.keys():
- if os.path.basename(p) == name:
- owners.append((cpv, dblink(cpv).
- _contents.unmap_key(
- p)[len(root):]))
- else:
- key = dblink(cpv)._match_contents(path)
- if key is not False:
- owners.append(
- (cpv, key[len(root):]))
-
- except StopIteration:
- path_iter.append(path)
- del owners[:]
- dblink_cache.clear()
- gc.collect()
- for x in self._iter_owners_low_mem(path_iter):
- yield x
- return
- else:
- for cpv, p in owners:
- yield (dblink(cpv), p)
-
- def _iter_owners_low_mem(self, path_list):
- """
-			This implementation will make a short-lived dblink instance (and
-			parse CONTENTS) for every single installed package. This is
-			slower but uses less memory than the method which uses the
- basename cache.
- """
-
- if not path_list:
- return
-
- case_insensitive = "case-insensitive-fs" \
- in self._vardb.settings.features
- path_info_list = []
- for path in path_list:
- if case_insensitive:
- path = path.lower()
- is_basename = os.sep != path[:1]
- if is_basename:
- name = path
- else:
- name = os.path.basename(path.rstrip(os.path.sep))
- path_info_list.append((path, name, is_basename))
-
- # Do work via the global event loop, so that it can be used
- # for indication of progress during the search (bug #461412).
- event_loop = asyncio._safe_loop()
- root = self._vardb._eroot
-
- def search_pkg(cpv, search_future):
- dblnk = self._vardb._dblink(cpv)
- results = []
- for path, name, is_basename in path_info_list:
- if is_basename:
- for p in dblnk._contents.keys():
- if os.path.basename(p) == name:
- results.append((dblnk,
- dblnk._contents.unmap_key(
- p)[len(root):]))
- else:
- key = dblnk._match_contents(path)
- if key is not False:
- results.append(
- (dblnk, key[len(root):]))
- search_future.set_result(results)
-
- for cpv in self._vardb.cpv_all():
- search_future = event_loop.create_future()
- event_loop.call_soon(search_pkg, cpv, search_future)
- event_loop.run_until_complete(search_future)
- for result in search_future.result():
- yield result
+ _excluded_dirs = ["CVS", "lost+found"]
+ _excluded_dirs = [re.escape(x) for x in _excluded_dirs]
+ _excluded_dirs = re.compile(
+ r"^(\..*|" + MERGING_IDENTIFIER + ".*|" + "|".join(_excluded_dirs) + r")$"
+ )
+
+ _aux_cache_version = "1"
+ _owners_cache_version = "1"
+
+ # Number of uncached packages to trigger cache update, since
+ # it's wasteful to update it for every vdb change.
+ _aux_cache_threshold = 5
+
+ _aux_cache_keys_re = re.compile(r"^NEEDED\..*$")
+ _aux_multi_line_re = re.compile(r"^(CONTENTS|NEEDED\..*)$")
+ _pkg_str_aux_keys = dbapi._pkg_str_aux_keys + ("BUILD_ID", "BUILD_TIME", "_mtime_")
+
+ def __init__(
+ self,
+ _unused_param=DeprecationWarning,
+ categories=None,
+ settings=None,
+ vartree=None,
+ ):
+ """
+ The categories parameter is unused since the dbapi class
+ now has a categories property that is generated from the
+ available packages.
+ """
+
+ # Used by emerge to check whether any packages
+ # have been added or removed.
+ self._pkgs_changed = False
+
+ # The _aux_cache_threshold doesn't work as designed
+ # if the cache is flushed from a subprocess, so we
+        # use this to avoid wasteful vdb cache updates.
+ self._flush_cache_enabled = True
+
+ # cache for category directory mtimes
+ self.mtdircache = {}
+
+ # cache for dependency checks
+ self.matchcache = {}
+
+ # cache for cp_list results
+ self.cpcache = {}
+
+ self.blockers = None
+ if settings is None:
+ settings = portage.settings
+ self.settings = settings
+
+ if _unused_param is not DeprecationWarning:
+ warnings.warn(
+ "The first parameter of the "
+ "portage.dbapi.vartree.vardbapi"
+ " constructor is now unused. Instead "
+ "settings['ROOT'] is used.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ self._eroot = settings["EROOT"]
+ self._dbroot = self._eroot + VDB_PATH
+ self._lock = None
+ self._lock_count = 0
+
+ self._conf_mem_file = self._eroot + CONFIG_MEMORY_FILE
+ self._fs_lock_obj = None
+ self._fs_lock_count = 0
+ self._slot_locks = {}
+
+ if vartree is None:
+ vartree = portage.db[settings["EROOT"]]["vartree"]
+ self.vartree = vartree
+ self._aux_cache_keys = set(
+ [
+ "BDEPEND",
+ "BUILD_TIME",
+ "CHOST",
+ "COUNTER",
+ "DEPEND",
+ "DESCRIPTION",
+ "EAPI",
+ "HOMEPAGE",
+ "BUILD_ID",
+ "IDEPEND",
+ "IUSE",
+ "KEYWORDS",
+ "LICENSE",
+ "PDEPEND",
+ "PROPERTIES",
+ "RDEPEND",
+ "repository",
+ "RESTRICT",
+ "SLOT",
+ "USE",
+ "DEFINED_PHASES",
+ "PROVIDES",
+ "REQUIRES",
+ ]
+ )
+ self._aux_cache_obj = None
+ self._aux_cache_filename = os.path.join(
+ self._eroot, CACHE_PATH, "vdb_metadata.pickle"
+ )
+ self._cache_delta_filename = os.path.join(
+ self._eroot, CACHE_PATH, "vdb_metadata_delta.json"
+ )
+ self._cache_delta = VdbMetadataDelta(self)
+ self._counter_path = os.path.join(self._eroot, CACHE_PATH, "counter")
+
+ self._plib_registry = PreservedLibsRegistry(
+ settings["ROOT"],
+ os.path.join(self._eroot, PRIVATE_PATH, "preserved_libs_registry"),
+ )
- self._linkmap = LinkageMap(self)
++ chost = self.settings.get('CHOST')
++ if not chost:
++ chost = 'lunix?' # this happens when profiles are not available
++ if chost.find('darwin') >= 0:
++ self._linkmap = LinkageMapMachO(self)
++ elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
++ self._linkmap = LinkageMapPeCoff(self)
++ elif chost.find('aix') >= 0:
++ self._linkmap = LinkageMapXCoff(self)
++ else:
++ self._linkmap = LinkageMap(self)
+ self._owners = self._owners_db(self)
+
+ self._cached_counter = None
+
+ @property
+ def writable(self):
+ """
+ Check if var/db/pkg is writable, or permissions are sufficient
+ to create it if it does not exist yet.
+ @rtype: bool
+ @return: True if var/db/pkg is writable or can be created,
+ False otherwise
+ """
+ return os.access(first_existing(self._dbroot), os.W_OK)
+
+ @property
+ def root(self):
+ warnings.warn(
+ "The root attribute of "
+ "portage.dbapi.vartree.vardbapi"
+ " is deprecated. Use "
+ "settings['ROOT'] instead.",
+ DeprecationWarning,
+ stacklevel=3,
+ )
+ return self.settings["ROOT"]
+
+ def getpath(self, mykey, filename=None):
+ # This is an optimized hotspot, so don't use unicode-wrapped
+ # os module and don't use os.path.join().
+ rValue = self._eroot + VDB_PATH + _os.sep + mykey
+ if filename is not None:
+ # If filename is always relative, we can do just
+ # rValue += _os.sep + filename
+ rValue = _os.path.join(rValue, filename)
+ return rValue
+
+ def lock(self):
+ """
+ Acquire a reentrant lock, blocking, for cooperation with concurrent
+ processes. State is inherited by subprocesses, allowing subprocesses
+ to reenter a lock that was acquired by a parent process. However,
+ a lock can be released only by the same process that acquired it.
+ """
+ if self._lock_count:
+ self._lock_count += 1
+ else:
+ if self._lock is not None:
+ raise AssertionError("already locked")
+ # At least the parent needs to exist for the lock file.
+ ensure_dirs(self._dbroot)
+ self._lock = lockdir(self._dbroot)
+ self._lock_count += 1
+
+ def unlock(self):
+ """
+ Release a lock, decrementing the recursion level. Each unlock() call
+ must be matched with a prior lock() call, or else an AssertionError
+ will be raised if unlock() is called while not locked.
+ """
+ if self._lock_count > 1:
+ self._lock_count -= 1
+ else:
+ if self._lock is None:
+ raise AssertionError("not locked")
+ self._lock_count = 0
+ unlockdir(self._lock)
+ self._lock = None
+
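The lock()/unlock() pair above is reentrant within a single process, so nested critical sections only bump a recursion count. A minimal usage sketch, assuming a configured Portage environment and sufficient privileges to lock the vdb (the access pattern is illustrative):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    vardb.lock()           # takes the var/db/pkg directory lock
    try:
        vardb.lock()       # reentrant: only increments the lock count
        try:
            pass           # vdb modifications go here
        finally:
            vardb.unlock()
    finally:
        vardb.unlock()     # last unlock releases the directory lock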
+ def _fs_lock(self):
+ """
+ Acquire a reentrant lock, blocking, for cooperation with concurrent
+ processes.
+ """
+ if self._fs_lock_count < 1:
+ if self._fs_lock_obj is not None:
+ raise AssertionError("already locked")
+ try:
+ self._fs_lock_obj = lockfile(self._conf_mem_file)
+ except InvalidLocation:
+ self.settings._init_dirs()
+ self._fs_lock_obj = lockfile(self._conf_mem_file)
+ self._fs_lock_count += 1
+
+ def _fs_unlock(self):
+ """
+ Release a lock, decrementing the recursion level.
+ """
+ if self._fs_lock_count <= 1:
+ if self._fs_lock_obj is None:
+ raise AssertionError("not locked")
+ unlockfile(self._fs_lock_obj)
+ self._fs_lock_obj = None
+ self._fs_lock_count -= 1
+
+ def _slot_lock(self, slot_atom):
+ """
+ Acquire a slot lock (reentrant).
+
+        WARNING: The vardbapi._slot_lock method is not safe to call
+ in the main process when that process is scheduling
+ install/uninstall tasks in parallel, since the locks would
+ be inherited by child processes. In order to avoid this sort
+ of problem, this method should be called in a subprocess
+ (typically spawned by the MergeProcess class).
+ """
+ lock, counter = self._slot_locks.get(slot_atom, (None, 0))
+ if lock is None:
+ lock_path = self.getpath("%s:%s" % (slot_atom.cp, slot_atom.slot))
+ ensure_dirs(os.path.dirname(lock_path))
+ lock = lockfile(lock_path, wantnewlockfile=True)
+ self._slot_locks[slot_atom] = (lock, counter + 1)
+
+ def _slot_unlock(self, slot_atom):
+ """
+        Release a slot lock (or decrement the recursion level).
+ """
+ lock, counter = self._slot_locks.get(slot_atom, (None, 0))
+ if lock is None:
+ raise AssertionError("not locked")
+ counter -= 1
+ if counter == 0:
+ unlockfile(lock)
+ del self._slot_locks[slot_atom]
+ else:
+ self._slot_locks[slot_atom] = (lock, counter)
+
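Per the warning above, slot locks are intended to be taken in a subprocess (normally the MergeProcess class). A hedged sketch of the acquire/release pairing, assuming slot_atom is an Atom exposing the cp and slot attributes that _slot_lock reads:

    # Sketch only: in real merges MergeProcess drives this in a child process.
    def with_slot_lock(vardb, slot_atom, action):
        vardb._slot_lock(slot_atom)    # blocks until the <cp>:<slot> lock is held
        try:
            return action()
        finally:
            vardb._slot_unlock(slot_atom)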
+ def _bump_mtime(self, cpv):
+ """
+        This is called before and after any modifications, so that consumers
+ can use directory mtimes to validate caches. See bug #290428.
+ """
+ base = self._eroot + VDB_PATH
+ cat = catsplit(cpv)[0]
+ catdir = base + _os.sep + cat
+ t = time.time()
+ t = (t, t)
+ try:
+ for x in (catdir, base):
+ os.utime(x, t)
+ except OSError:
+ ensure_dirs(catdir)
+
+ def cpv_exists(self, mykey, myrepo=None):
+ "Tells us whether an actual ebuild exists on disk (no masking)"
+ return os.path.exists(self.getpath(mykey))
+
+ def cpv_counter(self, mycpv):
+ "This method will grab the COUNTER. Returns a counter value."
+ try:
+ return int(self.aux_get(mycpv, ["COUNTER"])[0])
+ except (KeyError, ValueError):
+ pass
+ writemsg_level(
+ _("portage: COUNTER for %s was corrupted; " "resetting to value of 0\n")
+ % (mycpv,),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return 0
+
+ def cpv_inject(self, mycpv):
+ "injects a real package into our on-disk database; assumes mycpv is valid and doesn't already exist"
+ ensure_dirs(self.getpath(mycpv))
+ counter = self.counter_tick(mycpv=mycpv)
+ # write local package counter so that emerge clean does the right thing
+ write_atomic(self.getpath(mycpv, filename="COUNTER"), str(counter))
+
+ def isInjected(self, mycpv):
+ if self.cpv_exists(mycpv):
+ if os.path.exists(self.getpath(mycpv, filename="INJECTED")):
+ return True
+ if not os.path.exists(self.getpath(mycpv, filename="CONTENTS")):
+ return True
+ return False
+
+ def move_ent(self, mylist, repo_match=None):
+ origcp = mylist[1]
+ newcp = mylist[2]
+
+ # sanity check
+ for atom in (origcp, newcp):
+ if not isjustname(atom):
+ raise InvalidPackageName(str(atom))
+ origmatches = self.match(origcp, use_cache=0)
+ moves = 0
+ if not origmatches:
+ return moves
+ for mycpv in origmatches:
+ mycpv_cp = mycpv.cp
+ if mycpv_cp != origcp:
+ # Ignore PROVIDE virtual match.
+ continue
+ if repo_match is not None and not repo_match(mycpv.repo):
+ continue
+
+ # Use isvalidatom() to check if this move is valid for the
+ # EAPI (characters allowed in package names may vary).
+ if not isvalidatom(newcp, eapi=mycpv.eapi):
+ continue
+
+ mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
+ mynewcat = catsplit(newcp)[0]
+ origpath = self.getpath(mycpv)
+ if not os.path.exists(origpath):
+ continue
+ moves += 1
+ if not os.path.exists(self.getpath(mynewcat)):
+ # create the directory
+ ensure_dirs(self.getpath(mynewcat))
+ newpath = self.getpath(mynewcpv)
+ if os.path.exists(newpath):
+ # dest already exists; keep this puppy where it is.
+ continue
+ _movefile(origpath, newpath, mysettings=self.settings)
+ self._clear_pkg_cache(self._dblink(mycpv))
+ self._clear_pkg_cache(self._dblink(mynewcpv))
+
+ # We need to rename the ebuild now.
+ old_pf = catsplit(mycpv)[1]
+ new_pf = catsplit(mynewcpv)[1]
+ if new_pf != old_pf:
+ try:
+ os.rename(
+ os.path.join(newpath, old_pf + ".ebuild"),
+ os.path.join(newpath, new_pf + ".ebuild"),
+ )
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ write_atomic(os.path.join(newpath, "PF"), new_pf + "\n")
+ write_atomic(os.path.join(newpath, "CATEGORY"), mynewcat + "\n")
+
+ return moves
+
+ def cp_list(self, mycp, use_cache=1):
+ mysplit = catsplit(mycp)
+ if mysplit[0] == "*":
+ mysplit[0] = mysplit[0][1:]
+ try:
+ mystat = os.stat(self.getpath(mysplit[0])).st_mtime_ns
+ except OSError:
+ mystat = 0
+ if use_cache and mycp in self.cpcache:
+ cpc = self.cpcache[mycp]
+ if cpc[0] == mystat:
+ return cpc[1][:]
+ cat_dir = self.getpath(mysplit[0])
+ try:
+ dir_list = os.listdir(cat_dir)
+ except EnvironmentError as e:
+ if e.errno == PermissionDenied.errno:
+ raise PermissionDenied(cat_dir)
+ del e
+ dir_list = []
+
+ returnme = []
+ for x in dir_list:
+ if self._excluded_dirs.match(x) is not None:
+ continue
+ ps = pkgsplit(x)
+ if not ps:
+ self.invalidentry(os.path.join(self.getpath(mysplit[0]), x))
+ continue
+ if len(mysplit) > 1:
+ if ps[0] == mysplit[1]:
+ cpv = "%s/%s" % (mysplit[0], x)
+ metadata = dict(
+ zip(
+ self._aux_cache_keys,
+ self.aux_get(cpv, self._aux_cache_keys),
+ )
+ )
+ returnme.append(
+ _pkg_str(
+ cpv, metadata=metadata, settings=self.settings, db=self
+ )
+ )
+ self._cpv_sort_ascending(returnme)
+ if use_cache:
+ self.cpcache[mycp] = [mystat, returnme[:]]
+ elif mycp in self.cpcache:
+ del self.cpcache[mycp]
+ return returnme
+
+ def cpv_all(self, use_cache=1):
+ """
+ Set use_cache=0 to bypass the portage.cachedir() cache in cases
+ when the accuracy of mtime staleness checks should not be trusted
+ (generally this is only necessary in critical sections that
+ involve merge or unmerge of packages).
+ """
+ return list(self._iter_cpv_all(use_cache=use_cache))
+
+ def _iter_cpv_all(self, use_cache=True, sort=False):
+ returnme = []
+ basepath = os.path.join(self._eroot, VDB_PATH) + os.path.sep
+
+ if use_cache:
+ from portage import listdir
+ else:
+
+ def listdir(p, **kwargs):
+ try:
+ return [
+ x for x in os.listdir(p) if os.path.isdir(os.path.join(p, x))
+ ]
+ except EnvironmentError as e:
+ if e.errno == PermissionDenied.errno:
+ raise PermissionDenied(p)
+ del e
+ return []
+
+ catdirs = listdir(basepath, EmptyOnError=1, ignorecvs=1, dirsonly=1)
+ if sort:
+ catdirs.sort()
+
+ for x in catdirs:
+ if self._excluded_dirs.match(x) is not None:
+ continue
+ if not self._category_re.match(x):
+ continue
+
+ pkgdirs = listdir(basepath + x, EmptyOnError=1, dirsonly=1)
+ if sort:
+ pkgdirs.sort()
+
+ for y in pkgdirs:
+ if self._excluded_dirs.match(y) is not None:
+ continue
+ subpath = x + "/" + y
+ # -MERGING- should never be a cpv, nor should files.
+ try:
+ subpath = _pkg_str(subpath, db=self)
+ except InvalidData:
+ self.invalidentry(self.getpath(subpath))
+ continue
+
+ yield subpath
+
+ def cp_all(self, use_cache=1, sort=False):
+ mylist = self.cpv_all(use_cache=use_cache)
+ d = {}
+ for y in mylist:
+ if y[0] == "*":
+ y = y[1:]
+ try:
+ mysplit = catpkgsplit(y)
+ except InvalidData:
+ self.invalidentry(self.getpath(y))
+ continue
+ if not mysplit:
+ self.invalidentry(self.getpath(y))
+ continue
+ d[mysplit[0] + "/" + mysplit[1]] = None
+ return sorted(d) if sort else list(d)
+
+ def checkblockers(self, origdep):
+ pass
+
+ def _clear_cache(self):
+ self.mtdircache.clear()
+ self.matchcache.clear()
+ self.cpcache.clear()
+ self._aux_cache_obj = None
+
+ def _add(self, pkg_dblink):
+ self._pkgs_changed = True
+ self._clear_pkg_cache(pkg_dblink)
+
+ def _remove(self, pkg_dblink):
+ self._pkgs_changed = True
+ self._clear_pkg_cache(pkg_dblink)
+
+ def _clear_pkg_cache(self, pkg_dblink):
+ # Due to 1 second mtime granularity in <python-2.5, mtime checks
+ # are not always sufficient to invalidate vardbapi caches. Therefore,
+ # the caches need to be actively invalidated here.
+ self.mtdircache.pop(pkg_dblink.cat, None)
+ self.matchcache.pop(pkg_dblink.cat, None)
+ self.cpcache.pop(pkg_dblink.mysplit[0], None)
+ dircache.pop(pkg_dblink.dbcatdir, None)
+
+ def match(self, origdep, use_cache=1):
+ "caching match function"
+ mydep = dep_expand(
+ origdep, mydb=self, use_cache=use_cache, settings=self.settings
+ )
+ cache_key = (mydep, mydep.unevaluated_atom)
+ mykey = dep_getkey(mydep)
+ mycat = catsplit(mykey)[0]
+ if not use_cache:
+ if mycat in self.matchcache:
+ del self.mtdircache[mycat]
+ del self.matchcache[mycat]
+ return list(
+ self._iter_match(mydep, self.cp_list(mydep.cp, use_cache=use_cache))
+ )
+ try:
+ curmtime = os.stat(os.path.join(self._eroot, VDB_PATH, mycat)).st_mtime_ns
+ except (IOError, OSError):
+ curmtime = 0
+
+ if mycat not in self.matchcache or self.mtdircache[mycat] != curmtime:
+ # clear cache entry
+ self.mtdircache[mycat] = curmtime
+ self.matchcache[mycat] = {}
+ if mydep not in self.matchcache[mycat]:
+ mymatch = list(
+ self._iter_match(mydep, self.cp_list(mydep.cp, use_cache=use_cache))
+ )
+ self.matchcache[mycat][cache_key] = mymatch
+ return self.matchcache[mycat][cache_key][:]
+
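Because the cache key includes the category directory's mtime, repeated match() calls against an unchanged category are answered from matchcache. A small usage sketch (the atom is illustrative):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    # First call stats the category directory and fills matchcache;
    # the second identical call is served from the cache.
    installed = vardb.match("dev-lang/python")
    installed_again = vardb.match("dev-lang/python")

    # use_cache=0 bypasses the cache and drops the category's entry.
    fresh = vardb.match("dev-lang/python", use_cache=0)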
+ def findname(self, mycpv, myrepo=None):
+ return self.getpath(str(mycpv), filename=catsplit(mycpv)[1] + ".ebuild")
+
+ def flush_cache(self):
+ """If the current user has permission and the internal aux_get cache has
+ been updated, save it to disk and mark it unmodified. This is called
+ by emerge after it has loaded the full vdb for use in dependency
+ calculations. Currently, the cache is only written if the user has
+ superuser privileges (since that's required to obtain a lock), but all
+ users have read access and benefit from faster metadata lookups (as
+ long as at least part of the cache is still valid)."""
+ if (
+ self._flush_cache_enabled
+ and self._aux_cache is not None
+ and secpass >= 2
+ and (
+ len(self._aux_cache["modified"]) >= self._aux_cache_threshold
+ or not os.path.exists(self._cache_delta_filename)
+ )
+ ):
+
+ ensure_dirs(os.path.dirname(self._aux_cache_filename))
+
+ self._owners.populate() # index any unindexed contents
+ valid_nodes = set(self.cpv_all())
+ for cpv in list(self._aux_cache["packages"]):
+ if cpv not in valid_nodes:
+ del self._aux_cache["packages"][cpv]
+ del self._aux_cache["modified"]
+ timestamp = time.time()
+ self._aux_cache["timestamp"] = timestamp
+
+ with atomic_ofstream(self._aux_cache_filename, "wb") as f:
+ pickle.dump(self._aux_cache, f, protocol=2)
+
+ apply_secpass_permissions(self._aux_cache_filename, mode=0o644)
+
+ self._cache_delta.initialize(timestamp)
+ apply_secpass_permissions(self._cache_delta_filename, mode=0o644)
+
+ self._aux_cache["modified"] = set()
+
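For reference, the pickle written by flush_cache() can be inspected out of band. This is a sketch only, assuming the conventional CACHE_PATH of /var/cache/edb; the file layout is an internal detail and may change:

    import pickle

    with open("/var/cache/edb/vdb_metadata.pickle", "rb") as f:
        aux_cache = pickle.load(f)

    print(aux_cache["version"])          # cache format version, currently "1"
    print(len(aux_cache["packages"]))    # number of cached cpv entries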
+ @property
+ def _aux_cache(self):
+ if self._aux_cache_obj is None:
+ self._aux_cache_init()
+ return self._aux_cache_obj
+
+ def _aux_cache_init(self):
+ aux_cache = None
+ open_kwargs = {}
+ try:
+ with open(
+ _unicode_encode(
+ self._aux_cache_filename, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode="rb",
+ **open_kwargs
+ ) as f:
+ mypickle = pickle.Unpickler(f)
+ try:
+ mypickle.find_global = None
+ except AttributeError:
+ # TODO: If py3k, override Unpickler.find_class().
+ pass
+ aux_cache = mypickle.load()
+ except (SystemExit, KeyboardInterrupt):
+ raise
+ except Exception as e:
+ if isinstance(e, EnvironmentError) and getattr(e, "errno", None) in (
+ errno.ENOENT,
+ errno.EACCES,
+ ):
+ pass
+ else:
+ writemsg(
+ _("!!! Error loading '%s': %s\n") % (self._aux_cache_filename, e),
+ noiselevel=-1,
+ )
+ del e
+
+ if (
+ not aux_cache
+ or not isinstance(aux_cache, dict)
+ or aux_cache.get("version") != self._aux_cache_version
+ or not aux_cache.get("packages")
+ ):
+ aux_cache = {"version": self._aux_cache_version}
+ aux_cache["packages"] = {}
+
+ owners = aux_cache.get("owners")
+ if owners is not None:
+ if not isinstance(owners, dict):
+ owners = None
+ elif "version" not in owners:
+ owners = None
+ elif owners["version"] != self._owners_cache_version:
+ owners = None
+ elif "base_names" not in owners:
+ owners = None
+ elif not isinstance(owners["base_names"], dict):
+ owners = None
+
+ if owners is None:
+ owners = {"base_names": {}, "version": self._owners_cache_version}
+ aux_cache["owners"] = owners
+
+ aux_cache["modified"] = set()
+ self._aux_cache_obj = aux_cache
+
+ def aux_get(self, mycpv, wants, myrepo=None):
+ """This automatically caches selected keys that are frequently needed
+ by emerge for dependency calculations. The cached metadata is
+ considered valid if the mtime of the package directory has not changed
+ since the data was cached. The cache is stored in a pickled dict
+ object with the following format:
+
+        {"version": "1", "packages": {cpv1: (mtime, {k1: v1, k2: v2, ...}), cpv2: ...}}
+
+ If an error occurs while loading the cache pickle or the version is
+        unrecognized, the cache will simply be recreated from scratch (it is
+ completely disposable).
+ """
+ cache_these_wants = self._aux_cache_keys.intersection(wants)
+ for x in wants:
+ if self._aux_cache_keys_re.match(x) is not None:
+ cache_these_wants.add(x)
+
+ if not cache_these_wants:
+ mydata = self._aux_get(mycpv, wants)
+ return [mydata[x] for x in wants]
+
+ cache_these = set(self._aux_cache_keys)
+ cache_these.update(cache_these_wants)
+
+ mydir = self.getpath(mycpv)
+ mydir_stat = None
+ try:
+ mydir_stat = os.stat(mydir)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ raise KeyError(mycpv)
+ # Use float mtime when available.
+ mydir_mtime = mydir_stat.st_mtime
+ pkg_data = self._aux_cache["packages"].get(mycpv)
+ pull_me = cache_these.union(wants)
+ mydata = {"_mtime_": mydir_mtime}
+ cache_valid = False
+ cache_incomplete = False
+ cache_mtime = None
+ metadata = None
+ if pkg_data is not None:
+ if not isinstance(pkg_data, tuple) or len(pkg_data) != 2:
+ pkg_data = None
+ else:
+ cache_mtime, metadata = pkg_data
+ if not isinstance(cache_mtime, (float, int)) or not isinstance(
+ metadata, dict
+ ):
+ pkg_data = None
+
+ if pkg_data:
+ cache_mtime, metadata = pkg_data
+ if isinstance(cache_mtime, float):
+ if cache_mtime == mydir_stat.st_mtime:
+ cache_valid = True
+
+ # Handle truncated mtime in order to avoid cache
+ # invalidation for livecd squashfs (bug 564222).
+ elif int(cache_mtime) == mydir_stat.st_mtime:
+ cache_valid = True
+ else:
+ # Cache may contain integer mtime.
+ cache_valid = cache_mtime == mydir_stat[stat.ST_MTIME]
+
+ if cache_valid:
+ # Migrate old metadata to unicode.
+ for k, v in metadata.items():
+ metadata[k] = _unicode_decode(
+ v, encoding=_encodings["repo.content"], errors="replace"
+ )
+
+ mydata.update(metadata)
+ pull_me.difference_update(mydata)
+
+ if pull_me:
+ # pull any needed data and cache it
+ aux_keys = list(pull_me)
+ mydata.update(self._aux_get(mycpv, aux_keys, st=mydir_stat))
+ if not cache_valid or cache_these.difference(metadata):
+ cache_data = {}
+ if cache_valid and metadata:
+ cache_data.update(metadata)
+ for aux_key in cache_these:
+ cache_data[aux_key] = mydata[aux_key]
+ self._aux_cache["packages"][str(mycpv)] = (mydir_mtime, cache_data)
+ self._aux_cache["modified"].add(mycpv)
+
+ eapi_attrs = _get_eapi_attrs(mydata["EAPI"])
+ if _get_slot_re(eapi_attrs).match(mydata["SLOT"]) is None:
+ # Empty or invalid slot triggers InvalidAtom exceptions when
+ # generating slot atoms for packages, so translate it to '0' here.
+ mydata["SLOT"] = "0"
+
+ return [mydata[x] for x in wants]
+
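A minimal usage sketch of the cached lookup described above (the requested keys are examples; values come back in the same order as the keys):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    for cpv in vardb.cpv_all():
        # Cached keys such as SLOT and EAPI are served from
        # vdb_metadata.pickle while the package dir mtime still matches.
        slot, eapi, repo = vardb.aux_get(cpv, ["SLOT", "EAPI", "repository"])
        print(cpv, slot, eapi, repo)
        break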
+ def _aux_get(self, mycpv, wants, st=None):
+ mydir = self.getpath(mycpv)
+ if st is None:
+ try:
+ st = os.stat(mydir)
+ except OSError as e:
+ if e.errno == errno.ENOENT:
+ raise KeyError(mycpv)
+ elif e.errno == PermissionDenied.errno:
+ raise PermissionDenied(mydir)
+ else:
+ raise
+ if not stat.S_ISDIR(st.st_mode):
+ raise KeyError(mycpv)
+ results = {}
+ env_keys = []
+ for x in wants:
+ if x == "_mtime_":
+ results[x] = st[stat.ST_MTIME]
+ continue
+ try:
+ with io.open(
+ _unicode_encode(
+ os.path.join(mydir, x),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ myd = f.read()
+ except IOError:
+ if (
+ x not in self._aux_cache_keys
+ and self._aux_cache_keys_re.match(x) is None
+ ):
+ env_keys.append(x)
+ continue
+ myd = ""
+
+ # Preserve \n for metadata that is known to
+ # contain multiple lines.
+ if self._aux_multi_line_re.match(x) is None:
+ myd = " ".join(myd.split())
+
+ results[x] = myd
+
+ if env_keys:
+ env_results = self._aux_env_search(mycpv, env_keys)
+ for k in env_keys:
+ v = env_results.get(k)
+ if v is None:
+ v = ""
+ if self._aux_multi_line_re.match(k) is None:
+ v = " ".join(v.split())
+ results[k] = v
+
+ if results.get("EAPI") == "":
+ results["EAPI"] = "0"
+
+ return results
+
+ def _aux_env_search(self, cpv, variables):
+ """
+ Search environment.bz2 for the specified variables. Returns
+ a dict mapping variables to values, and any variables not
+ found in the environment will not be included in the dict.
+ This is useful for querying variables like ${SRC_URI} and
+ ${A}, which are not saved in separate files but are available
+ in environment.bz2 (see bug #395463).
+ """
+ env_file = self.getpath(cpv, filename="environment.bz2")
+ if not os.path.isfile(env_file):
+ return {}
+ bunzip2_cmd = portage.util.shlex_split(
+ self.settings.get("PORTAGE_BUNZIP2_COMMAND", "")
+ )
+ if not bunzip2_cmd:
+ bunzip2_cmd = portage.util.shlex_split(
+ self.settings["PORTAGE_BZIP2_COMMAND"]
+ )
+ bunzip2_cmd.append("-d")
+ args = bunzip2_cmd + ["-c", env_file]
+ try:
+ proc = subprocess.Popen(args, stdout=subprocess.PIPE)
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ raise portage.exception.CommandNotFound(args[0])
+
+ # Parts of the following code are borrowed from
+ # filter-bash-environment.py (keep them in sync).
+ var_assign_re = re.compile(
+ r'(^|^declare\s+-\S+\s+|^declare\s+|^export\s+)([^=\s]+)=("|\')?(.*)$'
+ )
+ close_quote_re = re.compile(r'(\\"|"|\')\s*$')
+
+ def have_end_quote(quote, line):
+ close_quote_match = close_quote_re.search(line)
+ return close_quote_match is not None and close_quote_match.group(1) == quote
+
+ variables = frozenset(variables)
+ results = {}
+ for line in proc.stdout:
+ line = _unicode_decode(
+ line, encoding=_encodings["content"], errors="replace"
+ )
+ var_assign_match = var_assign_re.match(line)
+ if var_assign_match is not None:
+ key = var_assign_match.group(2)
+ quote = var_assign_match.group(3)
+ if quote is not None:
+ if have_end_quote(quote, line[var_assign_match.end(2) + 2 :]):
+ value = var_assign_match.group(4)
+ else:
+ value = [var_assign_match.group(4)]
+ for line in proc.stdout:
+ line = _unicode_decode(
+ line, encoding=_encodings["content"], errors="replace"
+ )
+ value.append(line)
+ if have_end_quote(quote, line):
+ break
+ value = "".join(value)
+ # remove trailing quote and whitespace
+ value = value.rstrip()[:-1]
+ else:
+ value = var_assign_match.group(4).rstrip()
+
+ if key in variables:
+ results[key] = value
+
+ proc.wait()
+ proc.stdout.close()
+ return results
+
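Since SRC_URI and A have no dedicated vdb files, aux_get falls back to this environment.bz2 search for them. A hedged sketch of a direct call (this is a private helper; the cpv is illustrative):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    # Only variables actually found in environment.bz2 appear in the result.
    env = vardb._aux_env_search("app-shells/bash-5.2_p26", ["SRC_URI", "A"])
    src_uri = env.get("SRC_URI", "")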
+ def aux_update(self, cpv, values):
+ mylink = self._dblink(cpv)
+ if not mylink.exists():
+ raise KeyError(cpv)
+ self._bump_mtime(cpv)
+ self._clear_pkg_cache(mylink)
+ for k, v in values.items():
+ if v:
+ mylink.setfile(k, v)
+ else:
+ try:
+ os.unlink(os.path.join(self.getpath(cpv), k))
+ except EnvironmentError:
+ pass
+ self._bump_mtime(cpv)
+
+ async def unpack_metadata(self, pkg, dest_dir, loop=None):
+ """
+ Unpack package metadata to a directory. This method is a coroutine.
+
+ @param pkg: package to unpack
+ @type pkg: _pkg_str or portage.config
+ @param dest_dir: destination directory
+ @type dest_dir: str
+ """
+ loop = asyncio._wrap_loop(loop)
+ if not isinstance(pkg, portage.config):
+ cpv = pkg
+ else:
+ cpv = pkg.mycpv
+ dbdir = self.getpath(cpv)
+
+ def async_copy():
+ for parent, dirs, files in os.walk(dbdir, onerror=_raise_exc):
+ for key in files:
+ shutil.copy(os.path.join(parent, key), os.path.join(dest_dir, key))
+ break
+
+ await loop.run_in_executor(ForkExecutor(loop=loop), async_copy)
+
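unpack_metadata() is a coroutine, so it needs an event loop to drive it. A usage sketch, assuming dest_dir already exists (the cpv and path are illustrative):

    import asyncio
    import portage

    async def dump_metadata(cpv, dest_dir):
        vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi
        # Copies the flat metadata files (SLOT, COUNTER, CONTENTS, ...)
        # from the package's vdb directory into dest_dir.
        await vardb.unpack_metadata(cpv, dest_dir)

    asyncio.run(dump_metadata("app-shells/bash-5.2_p26", "/tmp/bash-vdb"))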
+ async def unpack_contents(
+ self,
+ pkg,
+ dest_dir,
+ include_config=None,
+ include_unmodified_config=None,
+ loop=None,
+ ):
+ """
+ Unpack package contents to a directory. This method is a coroutine.
+
+ This copies files from the installed system, in the same way
+ as the quickpkg(1) command. Default behavior for handling
+ of protected configuration files is controlled by the
+ QUICKPKG_DEFAULT_OPTS variable. The relevant quickpkg options
+ are --include-config and --include-unmodified-config. When
+ a configuration file is not included because it is protected,
+ an ewarn message is logged.
+
+ @param pkg: package to unpack
+ @type pkg: _pkg_str or portage.config
+ @param dest_dir: destination directory
+ @type dest_dir: str
+ @param include_config: Include all files protected by
+ CONFIG_PROTECT (as a security precaution, default is False
+ unless modified by QUICKPKG_DEFAULT_OPTS).
+ @type include_config: bool
+ @param include_unmodified_config: Include files protected by
+ CONFIG_PROTECT that have not been modified since installation
+ (as a security precaution, default is False unless modified
+ by QUICKPKG_DEFAULT_OPTS).
+ @type include_unmodified_config: bool
+ """
+ loop = asyncio._wrap_loop(loop)
+ if not isinstance(pkg, portage.config):
+ settings = self.settings
+ cpv = pkg
+ else:
+ settings = pkg
+ cpv = settings.mycpv
+
+ scheduler = SchedulerInterface(loop)
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--include-config", choices=("y", "n"), default="n")
+ parser.add_argument(
+ "--include-unmodified-config", choices=("y", "n"), default="n"
+ )
+
+ # Method parameters may override QUICKPKG_DEFAULT_OPTS.
+ opts_list = portage.util.shlex_split(settings.get("QUICKPKG_DEFAULT_OPTS", ""))
+ if include_config is not None:
+ opts_list.append(
+ "--include-config={}".format("y" if include_config else "n")
+ )
+ if include_unmodified_config is not None:
+ opts_list.append(
+ "--include-unmodified-config={}".format(
+ "y" if include_unmodified_config else "n"
+ )
+ )
+
+ opts, args = parser.parse_known_args(opts_list)
+
+ tar_cmd = ("tar", "-x", "--xattrs", "--xattrs-include=*", "-C", dest_dir)
+ pr, pw = os.pipe()
+ proc = await asyncio.create_subprocess_exec(*tar_cmd, stdin=pr)
+ os.close(pr)
+ with os.fdopen(pw, "wb", 0) as pw_file:
+ excluded_config_files = await loop.run_in_executor(
+ ForkExecutor(loop=loop),
+ functools.partial(
+ self._dblink(cpv).quickpkg,
+ pw_file,
+ include_config=opts.include_config == "y",
+ include_unmodified_config=opts.include_unmodified_config == "y",
+ ),
+ )
+ await proc.wait()
+ if proc.returncode != os.EX_OK:
+ raise PortageException("command failed: {}".format(tar_cmd))
+
+ if excluded_config_files:
+ log_lines = [
+ _(
+ "Config files excluded by QUICKPKG_DEFAULT_OPTS (see quickpkg(1) man page):"
+ )
+ ] + ["\t{}".format(name) for name in excluded_config_files]
+ out = io.StringIO()
+ for line in log_lines:
+ portage.elog.messages.ewarn(line, phase="install", key=cpv, out=out)
+ scheduler.output(
+ out.getvalue(),
+ background=self.settings.get("PORTAGE_BACKGROUND") == "1",
+ log_path=settings.get("PORTAGE_LOG_FILE"),
+ )
+
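A corresponding sketch for unpack_contents(), which streams the installed files through tar into the destination. It assumes a tar binary with xattr support, read access to the installed files, and an existing dest_dir (names are illustrative):

    import asyncio
    import portage

    async def extract_installed(cpv, dest_dir):
        vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi
        # Protected config files are excluded unless explicitly requested,
        # mirroring quickpkg's --include-config behaviour.
        await vardb.unpack_contents(cpv, dest_dir, include_config=False)

    asyncio.run(extract_installed("app-shells/bash-5.2_p26", "/tmp/bash-image"))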
+ def counter_tick(self, myroot=None, mycpv=None):
+ """
+ @param myroot: ignored, self._eroot is used instead
+ """
+ return self.counter_tick_core(incrementing=1, mycpv=mycpv)
+
+ def get_counter_tick_core(self, myroot=None, mycpv=None):
+ """
+ Use this method to retrieve the counter instead
+ of having to trust the value of a global counter
+ file that can lead to invalid COUNTER
+ generation. When cache is valid, the package COUNTER
+ files are not read and we rely on the timestamp of
+ the package directory to validate cache. The stat
+ calls should only take a short time, so performance
+ is sufficient without having to rely on a potentially
+ corrupt global counter file.
+
+ The global counter file located at
+ $CACHE_PATH/counter serves to record the
+ counter of the last installed package and
+ it also corresponds to the total number of
+ installation actions that have occurred in
+ the history of this package database.
+
+ @param myroot: ignored, self._eroot is used instead
+ """
+ del myroot
+ counter = -1
+ try:
+ with io.open(
+ _unicode_encode(
+ self._counter_path, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ try:
+ counter = int(f.readline().strip())
+ except (OverflowError, ValueError) as e:
+ writemsg(
+ _("!!! COUNTER file is corrupt: '%s'\n") % self._counter_path,
+ noiselevel=-1,
+ )
+ writemsg("!!! %s\n" % (e,), noiselevel=-1)
+ except EnvironmentError as e:
+ # Silently allow ENOENT since files under
+ # /var/cache/ are allowed to disappear.
+ if e.errno != errno.ENOENT:
+ writemsg(
+ _("!!! Unable to read COUNTER file: '%s'\n") % self._counter_path,
+ noiselevel=-1,
+ )
+ writemsg("!!! %s\n" % str(e), noiselevel=-1)
+ del e
+
+ if self._cached_counter == counter:
+ max_counter = counter
+ else:
+ # We must ensure that we return a counter
+ # value that is at least as large as the
+ # highest one from the installed packages,
+ # since having a corrupt value that is too low
+ # can trigger incorrect AUTOCLEAN behavior due
+ # to newly installed packages having lower
+ # COUNTERs than the previous version in the
+ # same slot.
+ max_counter = counter
+ for cpv in self.cpv_all():
+ try:
+ pkg_counter = int(self.aux_get(cpv, ["COUNTER"])[0])
+ except (KeyError, OverflowError, ValueError):
+ continue
+ if pkg_counter > max_counter:
+ max_counter = pkg_counter
+
+ return max_counter + 1
+
+ def counter_tick_core(self, myroot=None, incrementing=1, mycpv=None):
+ """
+ This method will grab the next COUNTER value and record it back
+ to the global file. Note that every package install must have
+ a unique counter, since a slotmove update can move two packages
+ into the same SLOT and in that case it's important that both
+ packages have different COUNTER metadata.
+
+ @param myroot: ignored, self._eroot is used instead
+ @param mycpv: ignored
+ @rtype: int
+ @return: new counter value
+ """
+ myroot = None
+ mycpv = None
+ self.lock()
+ try:
+ counter = self.get_counter_tick_core() - 1
+ if incrementing:
+ # increment counter
+ counter += 1
+ # update new global counter file
+ try:
+ write_atomic(self._counter_path, str(counter))
+ except InvalidLocation:
+ self.settings._init_dirs()
+ write_atomic(self._counter_path, str(counter))
+ self._cached_counter = counter
+
+ # Since we hold a lock, this is a good opportunity
+ # to flush the cache. Note that this will only
+ # flush the cache periodically in the main process
+ # when _aux_cache_threshold is exceeded.
+ self.flush_cache()
+ finally:
+ self.unlock()
+
+ return counter
+
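The counter machinery above is normally exercised through counter_tick() during a merge. A minimal sketch of the read and tick paths (ticking writes $CACHE_PATH/counter and takes the vdb lock, so it generally needs superuser privileges):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    # Peek at the next counter value without recording it.
    next_value = vardb.get_counter_tick_core()

    # Record the next counter atomically (locks the vdb internally).
    new_value = vardb.counter_tick()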
+ def _dblink(self, cpv):
+ category, pf = catsplit(cpv)
+ return dblink(
+ category,
+ pf,
+ settings=self.settings,
+ vartree=self.vartree,
+ treetype="vartree",
+ )
+
+ def removeFromContents(self, pkg, paths, relative_paths=True):
+ """
+ @param pkg: cpv for an installed package
+ @type pkg: string
+ @param paths: paths of files to remove from contents
+ @type paths: iterable
+ """
+ if not hasattr(pkg, "getcontents"):
+ pkg = self._dblink(pkg)
+ root = self.settings["ROOT"]
+ root_len = len(root) - 1
+ new_contents = pkg.getcontents().copy()
+ removed = 0
+
+ for filename in paths:
+ filename = _unicode_decode(
+ filename, encoding=_encodings["content"], errors="strict"
+ )
+ filename = normalize_path(filename)
+ if relative_paths:
+ relative_filename = filename
+ else:
+ relative_filename = filename[root_len:]
+ contents_key = pkg._match_contents(relative_filename)
+ if contents_key:
+ # It's possible for two different paths to refer to the same
+ # contents_key, due to directory symlinks. Therefore, pass a
+ # default value to pop, in order to avoid a KeyError which
+ # could otherwise be triggered (see bug #454400).
+ new_contents.pop(contents_key, None)
+ removed += 1
+
+ if removed:
+            # Also remove corresponding NEEDED lines, so that they do
+            # not corrupt LinkageMap data for preserve-libs.
+ needed_filename = os.path.join(pkg.dbdir, LinkageMap._needed_aux_key)
+ new_needed = None
+ try:
+ with io.open(
+ _unicode_encode(
+ needed_filename, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ needed_lines = f.readlines()
+ except IOError as e:
+ if e.errno not in (errno.ENOENT, errno.ESTALE):
+ raise
+ else:
+ new_needed = []
+ for l in needed_lines:
+ l = l.rstrip("\n")
+ if not l:
+ continue
+ try:
+ entry = NeededEntry.parse(needed_filename, l)
+ except InvalidData as e:
+ writemsg_level(
+ "\n%s\n\n" % (e,), level=logging.ERROR, noiselevel=-1
+ )
+ continue
+
+ filename = os.path.join(root, entry.filename.lstrip(os.sep))
+ if filename in new_contents:
+ new_needed.append(entry)
+
+ self.writeContentsToContentsFile(pkg, new_contents, new_needed=new_needed)
+
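A usage sketch for removeFromContents(); by default the given paths are treated as already relative to ROOT, while relative_paths=False strips the ROOT prefix first. Matching NEEDED entries are pruned as described above (the cpv and path are illustrative, and rewriting CONTENTS requires appropriate permissions):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    # Accepts either a cpv string or an existing dblink instance.
    vardb.removeFromContents(
        "app-shells/bash-5.2_p26",
        ["/usr/share/doc/bash-5.2_p26/README"],
    )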
+ def writeContentsToContentsFile(self, pkg, new_contents, new_needed=None):
+ """
+ @param pkg: package to write contents file for
+ @type pkg: dblink
+ @param new_contents: contents to write to CONTENTS file
+ @type new_contents: contents dictionary of the form
+ {u'/path/to/file' : (contents_attribute 1, ...), ...}
+ @param new_needed: new NEEDED entries
+ @type new_needed: list of NeededEntry
+ """
+ root = self.settings["ROOT"]
+ self._bump_mtime(pkg.mycpv)
+ if new_needed is not None:
+ f = atomic_ofstream(os.path.join(pkg.dbdir, LinkageMap._needed_aux_key))
+ for entry in new_needed:
+ f.write(str(entry))
+ f.close()
+ f = atomic_ofstream(os.path.join(pkg.dbdir, "CONTENTS"))
+ write_contents(new_contents, root, f)
+ f.close()
+ self._bump_mtime(pkg.mycpv)
+ pkg._clear_contents_cache()
+
+ class _owners_cache:
+ """
+        This class maintains a hash table that serves to index package
+        contents by mapping the basename of a file to a list of possible
+ packages that own it. This is used to optimize owner lookups
+ by narrowing the search down to a smaller number of packages.
+ """
+
+ _new_hash = md5
+ _hash_bits = 16
+ _hex_chars = _hash_bits // 4
+
+ def __init__(self, vardb):
+ self._vardb = vardb
+
+ def add(self, cpv):
+ eroot_len = len(self._vardb._eroot)
+ pkg_hash = self._hash_pkg(cpv)
+ db = self._vardb._dblink(cpv)
+ if not db.getcontents():
+ # Empty path is a code used to represent empty contents.
+ self._add_path("", pkg_hash)
+
+ for x in db._contents.keys():
+ self._add_path(x[eroot_len:], pkg_hash)
+
+ self._vardb._aux_cache["modified"].add(cpv)
+
+ def _add_path(self, path, pkg_hash):
+ """
+ Empty path is a code that represents empty contents.
+ """
+ if path:
+ name = os.path.basename(path.rstrip(os.path.sep))
+ if not name:
+ return
+ else:
+ name = path
+ name_hash = self._hash_str(name)
+ base_names = self._vardb._aux_cache["owners"]["base_names"]
+ pkgs = base_names.get(name_hash)
+ if pkgs is None:
+ pkgs = {}
+ base_names[name_hash] = pkgs
+ pkgs[pkg_hash] = None
+
+ def _hash_str(self, s):
+ h = self._new_hash()
+ # Always use a constant utf_8 encoding here, since
+ # the "default" encoding can change.
+ h.update(
+ _unicode_encode(
+ s, encoding=_encodings["repo.content"], errors="backslashreplace"
+ )
+ )
+ h = h.hexdigest()
+ h = h[-self._hex_chars :]
+ h = int(h, 16)
+ return h
+
+ def _hash_pkg(self, cpv):
+ counter, mtime = self._vardb.aux_get(cpv, ["COUNTER", "_mtime_"])
+ try:
+ counter = int(counter)
+ except ValueError:
+ counter = 0
+ return (str(cpv), counter, mtime)
+
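The owners cache buckets every basename under a 16-bit truncation of its MD5 digest, so a bucket can hold several candidate packages and every hit is re-verified against the real CONTENTS data. A standalone sketch of the hashing scheme used by _hash_str:

    import hashlib

    def basename_hash(name, hash_bits=16):
        # Mirrors _owners_cache._hash_str: hash the UTF-8 bytes and keep
        # the last hash_bits/4 hex digits as an integer bucket key.
        hex_chars = hash_bits // 4
        digest = hashlib.md5(name.encode("utf_8", "backslashreplace")).hexdigest()
        return int(digest[-hex_chars:], 16)

    print(basename_hash("libc.so.6"))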
+ class _owners_db:
+ def __init__(self, vardb):
+ self._vardb = vardb
+
+ def populate(self):
+ self._populate()
+
+ def _populate(self):
+ owners_cache = vardbapi._owners_cache(self._vardb)
+ cached_hashes = set()
+ base_names = self._vardb._aux_cache["owners"]["base_names"]
+
+ # Take inventory of all cached package hashes.
+ for name, hash_values in list(base_names.items()):
+ if not isinstance(hash_values, dict):
+ del base_names[name]
+ continue
+ cached_hashes.update(hash_values)
+
+ # Create sets of valid package hashes and uncached packages.
+ uncached_pkgs = set()
+ hash_pkg = owners_cache._hash_pkg
+ valid_pkg_hashes = set()
+ for cpv in self._vardb.cpv_all():
+ hash_value = hash_pkg(cpv)
+ valid_pkg_hashes.add(hash_value)
+ if hash_value not in cached_hashes:
+ uncached_pkgs.add(cpv)
+
+ # Cache any missing packages.
+ for cpv in uncached_pkgs:
+ owners_cache.add(cpv)
+
+ # Delete any stale cache.
+ stale_hashes = cached_hashes.difference(valid_pkg_hashes)
+ if stale_hashes:
+ for base_name_hash, bucket in list(base_names.items()):
+ for hash_value in stale_hashes.intersection(bucket):
+ del bucket[hash_value]
+ if not bucket:
+ del base_names[base_name_hash]
+
+ return owners_cache
+
+ def get_owners(self, path_iter):
+ """
+ @return the owners as a dblink -> set(files) mapping.
+ """
+ owners = {}
+ for owner, f in self.iter_owners(path_iter):
+ owned_files = owners.get(owner)
+ if owned_files is None:
+ owned_files = set()
+ owners[owner] = owned_files
+ owned_files.add(f)
+ return owners
+
+ def getFileOwnerMap(self, path_iter):
+ owners = self.get_owners(path_iter)
+ file_owners = {}
+ for pkg_dblink, files in owners.items():
+ for f in files:
+ owner_set = file_owners.get(f)
+ if owner_set is None:
+ owner_set = set()
+ file_owners[f] = owner_set
+ owner_set.add(pkg_dblink)
+ return file_owners
+
+ def iter_owners(self, path_iter):
+ """
+ Iterate over tuples of (dblink, path). In order to avoid
+ consuming too many resources for too much time, resources
+ are only allocated for the duration of a given iter_owners()
+ call. Therefore, to maximize reuse of resources when searching
+ for multiple files, it's best to search for them all in a single
+ call.
+ """
+
+ if not isinstance(path_iter, list):
+ path_iter = list(path_iter)
+ owners_cache = self._populate()
+ vardb = self._vardb
+ root = vardb._eroot
+ hash_pkg = owners_cache._hash_pkg
+ hash_str = owners_cache._hash_str
+ base_names = self._vardb._aux_cache["owners"]["base_names"]
+ case_insensitive = "case-insensitive-fs" in vardb.settings.features
+
+ dblink_cache = {}
+
+ def dblink(cpv):
+ x = dblink_cache.get(cpv)
+ if x is None:
+ if len(dblink_cache) > 20:
+ # Ensure that we don't run out of memory.
+ raise StopIteration()
+ x = self._vardb._dblink(cpv)
+ dblink_cache[cpv] = x
+ return x
+
+ while path_iter:
+
+ path = path_iter.pop()
+ if case_insensitive:
+ path = path.lower()
+ is_basename = os.sep != path[:1]
+ if is_basename:
+ name = path
+ else:
+ name = os.path.basename(path.rstrip(os.path.sep))
+
+ if not name:
+ continue
+
+ name_hash = hash_str(name)
+ pkgs = base_names.get(name_hash)
+ owners = []
+ if pkgs is not None:
+ try:
+ for hash_value in pkgs:
+ if (
+ not isinstance(hash_value, tuple)
+ or len(hash_value) != 3
+ ):
+ continue
+ cpv, counter, mtime = hash_value
+ if not isinstance(cpv, str):
+ continue
+ try:
+ current_hash = hash_pkg(cpv)
+ except KeyError:
+ continue
+
+ if current_hash != hash_value:
+ continue
+
+ if is_basename:
+ for p in dblink(cpv)._contents.keys():
+ if os.path.basename(p) == name:
+ owners.append(
+ (
+ cpv,
+ dblink(cpv)._contents.unmap_key(p)[
+ len(root) :
+ ],
+ )
+ )
+ else:
+ key = dblink(cpv)._match_contents(path)
+ if key is not False:
+ owners.append((cpv, key[len(root) :]))
+
+ except StopIteration:
+ path_iter.append(path)
+ del owners[:]
+ dblink_cache.clear()
+ gc.collect()
+ for x in self._iter_owners_low_mem(path_iter):
+ yield x
+ return
+ else:
+ for cpv, p in owners:
+ yield (dblink(cpv), p)
+
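Since the per-call resources (dblink cache, populated owners index) are released when iter_owners() returns, batching all paths into one call is the efficient pattern, as the docstring notes. A usage sketch (paths are illustrative; bare basenames are also accepted):

    import portage

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi

    paths = ["/bin/bash", "/usr/bin/python3", "libncursesw.so.6"]
    for pkg_dblink, matched_path in vardb._owners.iter_owners(paths):
        print(pkg_dblink.mycpv, matched_path)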
+ def _iter_owners_low_mem(self, path_list):
+ """
+            This implementation will make a short-lived dblink instance (and
+            parse CONTENTS) for every single installed package. This is
+            slower but uses less memory than the method which uses the
+ basename cache.
+ """
+
+ if not path_list:
+ return
+
+ case_insensitive = "case-insensitive-fs" in self._vardb.settings.features
+ path_info_list = []
+ for path in path_list:
+ if case_insensitive:
+ path = path.lower()
+ is_basename = os.sep != path[:1]
+ if is_basename:
+ name = path
+ else:
+ name = os.path.basename(path.rstrip(os.path.sep))
+ path_info_list.append((path, name, is_basename))
+
+ # Do work via the global event loop, so that it can be used
+ # for indication of progress during the search (bug #461412).
+ event_loop = asyncio._safe_loop()
+ root = self._vardb._eroot
+
+ def search_pkg(cpv, search_future):
+ dblnk = self._vardb._dblink(cpv)
+ results = []
+ for path, name, is_basename in path_info_list:
+ if is_basename:
+ for p in dblnk._contents.keys():
+ if os.path.basename(p) == name:
+ results.append(
+ (dblnk, dblnk._contents.unmap_key(p)[len(root) :])
+ )
+ else:
+ key = dblnk._match_contents(path)
+ if key is not False:
+ results.append((dblnk, key[len(root) :]))
+ search_future.set_result(results)
+
+ for cpv in self._vardb.cpv_all():
+ search_future = event_loop.create_future()
+ event_loop.call_soon(search_pkg, cpv, search_future)
+ event_loop.run_until_complete(search_future)
+ for result in search_future.result():
+ yield result
+
class vartree:
- "this tree will scan a var/db/pkg database located at root (passed to init)"
- def __init__(self, root=None, virtual=DeprecationWarning, categories=None,
- settings=None):
-
- if settings is None:
- settings = portage.settings
-
- if root is not None and root != settings['ROOT']:
- warnings.warn("The 'root' parameter of the "
- "portage.dbapi.vartree.vartree"
- " constructor is now unused. Use "
- "settings['ROOT'] instead.",
- DeprecationWarning, stacklevel=2)
-
- if virtual is not DeprecationWarning:
- warnings.warn("The 'virtual' parameter of the "
- "portage.dbapi.vartree.vartree"
- " constructor is unused",
- DeprecationWarning, stacklevel=2)
-
- self.settings = settings
- self.dbapi = vardbapi(settings=settings, vartree=self)
- self.populated = 1
-
- @property
- def root(self):
- warnings.warn("The root attribute of "
- "portage.dbapi.vartree.vartree"
- " is deprecated. Use "
- "settings['ROOT'] instead.",
- DeprecationWarning, stacklevel=3)
- return self.settings['ROOT']
-
- def getpath(self, mykey, filename=None):
- return self.dbapi.getpath(mykey, filename=filename)
-
- def zap(self, mycpv):
- return
-
- def inject(self, mycpv):
- return
-
- def get_provide(self, mycpv):
- return []
-
- def get_all_provides(self):
- return {}
-
- def dep_bestmatch(self, mydep, use_cache=1):
- "compatibility method -- all matches, not just visible ones"
- #mymatch=best(match(dep_expand(mydep,self.dbapi),self.dbapi))
- mymatch = best(self.dbapi.match(
- dep_expand(mydep, mydb=self.dbapi, settings=self.settings),
- use_cache=use_cache))
- if mymatch is None:
- return ""
- return mymatch
-
- def dep_match(self, mydep, use_cache=1):
- "compatibility method -- we want to see all matches, not just visible ones"
- #mymatch = match(mydep,self.dbapi)
- mymatch = self.dbapi.match(mydep, use_cache=use_cache)
- if mymatch is None:
- return []
- return mymatch
-
- def exists_specific(self, cpv):
- return self.dbapi.cpv_exists(cpv)
-
- def getallcpv(self):
- """temporary function, probably to be renamed --- Gets a list of all
- category/package-versions installed on the system."""
- return self.dbapi.cpv_all()
-
- def getallnodes(self):
-		"""new behavior: these are all *unmasked* nodes. There may or may not be an
-		available masked package for the nodes in this list."""
- return self.dbapi.cp_all()
-
- def getebuildpath(self, fullpackage):
- cat, package = catsplit(fullpackage)
- return self.getpath(fullpackage, filename=package+".ebuild")
-
- def getslot(self, mycatpkg):
- "Get a slot for a catpkg; assume it exists."
- try:
- return self.dbapi._pkg_str(mycatpkg, None).slot
- except KeyError:
- return ""
-
- def populate(self):
- self.populated=1
+ "this tree will scan a var/db/pkg database located at root (passed to init)"
+
+ def __init__(
+ self, root=None, virtual=DeprecationWarning, categories=None, settings=None
+ ):
+
+ if settings is None:
+ settings = portage.settings
+
+ if root is not None and root != settings["ROOT"]:
+ warnings.warn(
+ "The 'root' parameter of the "
+ "portage.dbapi.vartree.vartree"
+ " constructor is now unused. Use "
+ "settings['ROOT'] instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if virtual is not DeprecationWarning:
+ warnings.warn(
+ "The 'virtual' parameter of the "
+ "portage.dbapi.vartree.vartree"
+ " constructor is unused",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ self.settings = settings
+ self.dbapi = vardbapi(settings=settings, vartree=self)
+ self.populated = 1
+
+ @property
+ def root(self):
+ warnings.warn(
+ "The root attribute of "
+ "portage.dbapi.vartree.vartree"
+ " is deprecated. Use "
+ "settings['ROOT'] instead.",
+ DeprecationWarning,
+ stacklevel=3,
+ )
+ return self.settings["ROOT"]
+
+ def getpath(self, mykey, filename=None):
+ return self.dbapi.getpath(mykey, filename=filename)
+
+ def zap(self, mycpv):
+ return
+
+ def inject(self, mycpv):
+ return
+
+ def get_provide(self, mycpv):
+ return []
+
+ def get_all_provides(self):
+ return {}
+
+ def dep_bestmatch(self, mydep, use_cache=1):
+ "compatibility method -- all matches, not just visible ones"
+ # mymatch=best(match(dep_expand(mydep,self.dbapi),self.dbapi))
+ mymatch = best(
+ self.dbapi.match(
+ dep_expand(mydep, mydb=self.dbapi, settings=self.settings),
+ use_cache=use_cache,
+ )
+ )
+ if mymatch is None:
+ return ""
+ return mymatch
+
+ def dep_match(self, mydep, use_cache=1):
+ "compatibility method -- we want to see all matches, not just visible ones"
+ # mymatch = match(mydep,self.dbapi)
+ mymatch = self.dbapi.match(mydep, use_cache=use_cache)
+ if mymatch is None:
+ return []
+ return mymatch
+
+ def exists_specific(self, cpv):
+ return self.dbapi.cpv_exists(cpv)
+
+ def getallcpv(self):
+ """temporary function, probably to be renamed --- Gets a list of all
+ category/package-versions installed on the system."""
+ return self.dbapi.cpv_all()
+
+ def getallnodes(self):
+        """new behavior: these are all *unmasked* nodes. There may or may not be
+        masked packages available for the nodes in this list."""
+ return self.dbapi.cp_all()
+
+ def getebuildpath(self, fullpackage):
+ cat, package = catsplit(fullpackage)
+ return self.getpath(fullpackage, filename=package + ".ebuild")
+
+ def getslot(self, mycatpkg):
+ "Get a slot for a catpkg; assume it exists."
+ try:
+ return self.dbapi._pkg_str(mycatpkg, None).slot
+ except KeyError:
+ return ""
+
+ def populate(self):
+ self.populated = 1
+
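The vartree methods above (dep_match, dep_bestmatch, getslot and friends) are thin compatibility wrappers around vardbapi. A minimal query sketch, assuming a configured Portage installation and using sys-apps/portage purely as an example atom:

    import portage

    # The installed-package tree that Portage builds for the running EROOT.
    vdb = portage.db[portage.settings["EROOT"]]["vartree"]

    # All installed versions matching the atom, and the single best one;
    # both simply delegate to vardbapi.match().
    print(vdb.dep_match("sys-apps/portage"))
    cpv = vdb.dep_bestmatch("sys-apps/portage")   # "" if nothing is installed
    if cpv:
        print(cpv, "SLOT =", vdb.getslot(cpv))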
class dblink:
- """
- This class provides an interface to the installed package database
- At present this is implemented as a text backend in /var/db/pkg.
- """
-
- _normalize_needed = re.compile(r'//|^[^/]|./$|(^|/)\.\.?(/|$)')
-
- _contents_re = re.compile(r'^(' + \
- r'(?P<dir>(dev|dir|fif) (.+))|' + \
- r'(?P<obj>(obj) (.+) (\S+) (\d+))|' + \
- r'(?P<sym>(sym) (.+) -> (.+) ((\d+)|(?P<oldsym>(' + \
- r'\(\d+, \d+L, \d+L, \d+, \d+, \d+, \d+L, \d+, (\d+), \d+\)))))' + \
- r')$'
- )
-
- # These files are generated by emerge, so we need to remove
- # them when they are the only thing left in a directory.
- _infodir_cleanup = frozenset(["dir", "dir.old"])
-
- _ignored_unlink_errnos = (
- errno.EBUSY, errno.ENOENT,
- errno.ENOTDIR, errno.EISDIR)
-
- _ignored_rmdir_errnos = (
- errno.EEXIST, errno.ENOTEMPTY,
- errno.EBUSY, errno.ENOENT,
- errno.ENOTDIR, errno.EISDIR,
- errno.EPERM)
-
- def __init__(self, cat, pkg, myroot=None, settings=None, treetype=None,
- vartree=None, blockers=None, scheduler=None, pipe=None):
- """
- Creates a DBlink object for a given CPV.
- The given CPV may not be present in the database already.
-
- @param cat: Category
- @type cat: String
- @param pkg: Package (PV)
- @type pkg: String
- @param myroot: ignored, settings['ROOT'] is used instead
- @type myroot: String (Path)
- @param settings: Typically portage.settings
- @type settings: portage.config
- @param treetype: one of ['porttree','bintree','vartree']
- @type treetype: String
- @param vartree: an instance of vartree corresponding to myroot.
- @type vartree: vartree
- """
-
- if settings is None:
- raise TypeError("settings argument is required")
-
- mysettings = settings
- self._eroot = mysettings['EROOT']
- self.cat = cat
- self.pkg = pkg
- self.mycpv = self.cat + "/" + self.pkg
- if self.mycpv == settings.mycpv and \
- isinstance(settings.mycpv, _pkg_str):
- self.mycpv = settings.mycpv
- else:
- self.mycpv = _pkg_str(self.mycpv)
- self.mysplit = list(self.mycpv.cpv_split[1:])
- self.mysplit[0] = self.mycpv.cp
- self.treetype = treetype
- if vartree is None:
- vartree = portage.db[self._eroot]["vartree"]
- self.vartree = vartree
- self._blockers = blockers
- self._scheduler = scheduler
- self.dbroot = normalize_path(os.path.join(self._eroot, VDB_PATH))
- self.dbcatdir = self.dbroot+"/"+cat
- self.dbpkgdir = self.dbcatdir+"/"+pkg
- self.dbtmpdir = self.dbcatdir+"/"+MERGING_IDENTIFIER+pkg
- self.dbdir = self.dbpkgdir
- self.settings = mysettings
- self._verbose = self.settings.get("PORTAGE_VERBOSE") == "1"
-
- self.myroot = self.settings['ROOT']
- self._installed_instance = None
- self.contentscache = None
- self._contents_inodes = None
- self._contents_basenames = None
- self._linkmap_broken = False
- self._device_path_map = {}
- self._hardlink_merge_map = {}
- self._hash_key = (self._eroot, self.mycpv)
- self._protect_obj = None
- self._pipe = pipe
- self._postinst_failure = False
-
- # When necessary, this attribute is modified for
- # compliance with RESTRICT=preserve-libs.
- self._preserve_libs = "preserve-libs" in mysettings.features
- self._contents = ContentsCaseSensitivityManager(self)
- self._slot_locks = []
-
- def __hash__(self):
- return hash(self._hash_key)
-
- def __eq__(self, other):
- return isinstance(other, dblink) and \
- self._hash_key == other._hash_key
-
- def _get_protect_obj(self):
-
- if self._protect_obj is None:
- self._protect_obj = ConfigProtect(self._eroot,
- portage.util.shlex_split(
- self.settings.get("CONFIG_PROTECT", "")),
- portage.util.shlex_split(
- self.settings.get("CONFIG_PROTECT_MASK", "")),
- case_insensitive=("case-insensitive-fs"
- in self.settings.features))
-
- return self._protect_obj
-
- def isprotected(self, obj):
- return self._get_protect_obj().isprotected(obj)
-
- def updateprotect(self):
- self._get_protect_obj().updateprotect()
-
- def lockdb(self):
- self.vartree.dbapi.lock()
-
- def unlockdb(self):
- self.vartree.dbapi.unlock()
-
- def _slot_locked(f):
- """
- A decorator function which, when parallel-install is enabled,
- acquires and releases slot locks for the current package and
- blocked packages. This is required in order to account for
- interactions with blocked packages (involving resolution of
- file collisions).
- """
- def wrapper(self, *args, **kwargs):
- if "parallel-install" in self.settings.features:
- self._acquire_slot_locks(
- kwargs.get("mydbapi", self.vartree.dbapi))
- try:
- return f(self, *args, **kwargs)
- finally:
- self._release_slot_locks()
- return wrapper
-
- def _acquire_slot_locks(self, db):
- """
- Acquire slot locks for the current package and blocked packages.
- """
-
- slot_atoms = []
-
- try:
- slot = self.mycpv.slot
- except AttributeError:
- slot, = db.aux_get(self.mycpv, ["SLOT"])
- slot = slot.partition("/")[0]
-
- slot_atoms.append(portage.dep.Atom(
- "%s:%s" % (self.mycpv.cp, slot)))
-
- for blocker in self._blockers or []:
- slot_atoms.append(blocker.slot_atom)
-
- # Sort atoms so that locks are acquired in a predictable
- # order, preventing deadlocks with competitors that may
- # be trying to acquire overlapping locks.
- slot_atoms.sort()
- for slot_atom in slot_atoms:
- self.vartree.dbapi._slot_lock(slot_atom)
- self._slot_locks.append(slot_atom)
-
- def _release_slot_locks(self):
- """
- Release all slot locks.
- """
- while self._slot_locks:
- self.vartree.dbapi._slot_unlock(self._slot_locks.pop())
-
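_slot_locked is a plain decorator: when the parallel-install feature is active it takes the slot locks before the wrapped call and always drops them in a finally block. A standalone sketch of the same acquire/try/finally pattern, with hypothetical lock helpers that are not part of the Portage API:

    import functools

    def with_locks(method):
        # Acquire locks before the call, release them even on error.
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            self.acquire_locks()          # hypothetical helper
            try:
                return method(self, *args, **kwargs)
            finally:
                self.release_locks()      # hypothetical helper
        return wrapper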
- def getpath(self):
- "return path to location of db information (for >>> informational display)"
- return self.dbdir
-
- def exists(self):
- "does the db entry exist? boolean."
- return os.path.exists(self.dbdir)
-
- def delete(self):
- """
- Remove this entry from the database
- """
- try:
- os.lstat(self.dbdir)
- except OSError as e:
- if e.errno not in (errno.ENOENT, errno.ENOTDIR, errno.ESTALE):
- raise
- return
-
- # Check validity of self.dbdir before attempting to remove it.
- if not self.dbdir.startswith(self.dbroot):
- writemsg(_("portage.dblink.delete(): invalid dbdir: %s\n") % \
- self.dbdir, noiselevel=-1)
- return
-
- if self.dbdir is self.dbpkgdir:
- counter, = self.vartree.dbapi.aux_get(
- self.mycpv, ["COUNTER"])
- self.vartree.dbapi._cache_delta.recordEvent(
- "remove", self.mycpv,
- self.settings["SLOT"].split("/")[0], counter)
-
- shutil.rmtree(self.dbdir)
- # If empty, remove parent category directory.
- try:
- os.rmdir(os.path.dirname(self.dbdir))
- except OSError:
- pass
- self.vartree.dbapi._remove(self)
-
- # Use self.dbroot since we need an existing path for syncfs.
- try:
- self._merged_path(self.dbroot, os.lstat(self.dbroot))
- except OSError:
- pass
-
- self._post_merge_sync()
-
- def clearcontents(self):
- """
- For a given db entry (self), erase the CONTENTS values.
- """
- self.lockdb()
- try:
- if os.path.exists(self.dbdir+"/CONTENTS"):
- os.unlink(self.dbdir+"/CONTENTS")
- finally:
- self.unlockdb()
-
- def _clear_contents_cache(self):
- self.contentscache = None
- self._contents_inodes = None
- self._contents_basenames = None
- self._contents.clear_cache()
-
- def getcontents(self):
- """
- Get the installed files of a given package (aka what that package installed)
- """
- if self.contentscache is not None:
- return self.contentscache
- contents_file = os.path.join(self.dbdir, "CONTENTS")
- pkgfiles = {}
- try:
- with io.open(_unicode_encode(contents_file,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace') as f:
- mylines = f.readlines()
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- self.contentscache = pkgfiles
- return pkgfiles
-
- null_byte = "\0"
- normalize_needed = self._normalize_needed
- contents_re = self._contents_re
- obj_index = contents_re.groupindex['obj']
- dir_index = contents_re.groupindex['dir']
- sym_index = contents_re.groupindex['sym']
- # The old symlink format may exist on systems that have packages
- # which were installed many years ago (see bug #351814).
- oldsym_index = contents_re.groupindex['oldsym']
- # CONTENTS files already contain EPREFIX
- myroot = self.settings['ROOT']
- if myroot == os.path.sep:
- myroot = None
- # used to generate parent dir entries
- dir_entry = ("dir",)
- eroot_split_len = len(self.settings["EROOT"].split(os.sep)) - 1
- pos = 0
- errors = []
- for pos, line in enumerate(mylines):
- if null_byte in line:
- # Null bytes are a common indication of corruption.
- errors.append((pos + 1, _("Null byte found in CONTENTS entry")))
- continue
- line = line.rstrip("\n")
- m = contents_re.match(line)
- if m is None:
- errors.append((pos + 1, _("Unrecognized CONTENTS entry")))
- continue
-
- if m.group(obj_index) is not None:
- base = obj_index
- #format: type, mtime, md5sum
- data = (m.group(base+1), m.group(base+4), m.group(base+3))
- elif m.group(dir_index) is not None:
- base = dir_index
- #format: type
- data = (m.group(base+1),)
- elif m.group(sym_index) is not None:
- base = sym_index
- if m.group(oldsym_index) is None:
- mtime = m.group(base+5)
- else:
- mtime = m.group(base+8)
- #format: type, mtime, dest
- data = (m.group(base+1), mtime, m.group(base+3))
- else:
- # This won't happen as long the regular expression
- # is written to only match valid entries.
- raise AssertionError(_("required group not found " + \
- "in CONTENTS entry: '%s'") % line)
-
- path = m.group(base+2)
- if normalize_needed.search(path) is not None:
- path = normalize_path(path)
- if not path.startswith(os.path.sep):
- path = os.path.sep + path
-
- if myroot is not None:
- path = os.path.join(myroot, path.lstrip(os.path.sep))
-
- # Implicitly add parent directories, since we can't necessarily
- # assume that they are explicitly listed in CONTENTS, and it's
- # useful for callers if they can rely on parent directory entries
- # being generated here (crucial for things like dblink.isowner()).
- path_split = path.split(os.sep)
- path_split.pop()
- while len(path_split) > eroot_split_len:
- parent = os.sep.join(path_split)
- if parent in pkgfiles:
- break
- pkgfiles[parent] = dir_entry
- path_split.pop()
-
- pkgfiles[path] = data
-
- if errors:
- writemsg(_("!!! Parse error in '%s'\n") % contents_file, noiselevel=-1)
- for pos, e in errors:
- writemsg(_("!!! line %d: %s\n") % (pos, e), noiselevel=-1)
- self.contentscache = pkgfiles
- return pkgfiles
-
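getcontents() parses the plain-text CONTENTS file into a dict keyed by absolute path, and implicitly adds parent-directory entries so that callers such as isowner() can rely on them. A hedged sketch, assuming a configured Portage installation and app-shells/bash as an example installed package:

    # Typical CONTENTS lines (paths and checksums invented for illustration):
    #   dir /usr/share/doc/foo-1.0
    #   obj /usr/bin/foo d41d8cd98f00b204e9800998ecf8427e 1700000000
    #   sym /usr/lib/libfoo.so -> libfoo.so.1 1700000000
    import portage
    from portage.versions import catsplit
    from portage.dbapi.vartree import dblink

    vdb = portage.db[portage.settings["EROOT"]]["vartree"]
    cpv = vdb.dep_bestmatch("app-shells/bash")
    cat, pkgv = catsplit(cpv)
    link = dblink(cat, pkgv, settings=portage.settings,
                  treetype="vartree", vartree=vdb)
    for path, entry in sorted(link.getcontents().items()):
        print(entry[0], path)             # entry[0] is dir/obj/sym/dev/fif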
- def quickpkg(self, output_file, include_config=False, include_unmodified_config=False):
- """
- Create a tar file appropriate for use by quickpkg.
-
- @param output_file: Write binary tar stream to file.
- @type output_file: file
- @param include_config: Include all files protected by CONFIG_PROTECT
- (as a security precaution, default is False).
- @type include_config: bool
- @param include_unmodified_config: Include files protected by CONFIG_PROTECT
- that have not been modified since installation (as a security precaution,
- default is False).
- @type include_unmodified_config: bool
- @rtype: list
- @return: Paths of protected configuration files which have been omitted.
- """
- settings = self.settings
- cpv = self.mycpv
- xattrs = 'xattr' in settings.features
- contents = self.getcontents()
- excluded_config_files = []
- protect = None
-
- if not include_config:
- confprot = ConfigProtect(settings['EROOT'],
- portage.util.shlex_split(settings.get('CONFIG_PROTECT', '')),
- portage.util.shlex_split(settings.get('CONFIG_PROTECT_MASK', '')),
- case_insensitive=('case-insensitive-fs' in settings.features))
-
- def protect(filename):
- if not confprot.isprotected(filename):
- return False
- if include_unmodified_config:
- file_data = contents[filename]
- if file_data[0] == 'obj':
- orig_md5 = file_data[2].lower()
- cur_md5 = perform_md5(filename, calc_prelink=1)
- if orig_md5 == cur_md5:
- return False
- excluded_config_files.append(filename)
- return True
-
- # The tarfile module will write pax headers holding the
- # xattrs only if PAX_FORMAT is specified here.
- with tarfile.open(fileobj=output_file, mode='w|',
- format=tarfile.PAX_FORMAT if xattrs else tarfile.DEFAULT_FORMAT) as tar:
- tar_contents(contents, settings['ROOT'], tar, protect=protect, xattrs=xattrs)
-
- return excluded_config_files
-
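quickpkg() above streams the installed image straight out of the live filesystem as a tar archive, skipping CONFIG_PROTECTed files unless asked otherwise. A short sketch, assuming the dblink instance `link` built in the previous sketch and an output path chosen only for the example:

    # `link` is the dblink for the installed package from the sketch above;
    # /tmp/bash-image.tar is just an example destination.
    with open("/tmp/bash-image.tar", "wb") as out:
        excluded = link.quickpkg(out)     # list of skipped CONFIG_PROTECT files
    print("omitted protected files:", excluded)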
- def _prune_plib_registry(self, unmerge=False,
- needed=None, preserve_paths=None):
- # remove preserved libraries that don't have any consumers left
- if not (self._linkmap_broken or
- self.vartree.dbapi._linkmap is None or
- self.vartree.dbapi._plib_registry is None):
- self.vartree.dbapi._fs_lock()
- plib_registry = self.vartree.dbapi._plib_registry
- plib_registry.lock()
- try:
- plib_registry.load()
-
- unmerge_with_replacement = \
- unmerge and preserve_paths is not None
- if unmerge_with_replacement:
- # If self.mycpv is about to be unmerged and we
- # have a replacement package, we want to exclude
- # the irrelevant NEEDED data that belongs to
- # files which are being unmerged now.
- exclude_pkgs = (self.mycpv,)
- else:
- exclude_pkgs = None
-
- self._linkmap_rebuild(exclude_pkgs=exclude_pkgs,
- include_file=needed, preserve_paths=preserve_paths)
-
- if unmerge:
- unmerge_preserve = None
- if not unmerge_with_replacement:
- unmerge_preserve = \
- self._find_libs_to_preserve(unmerge=True)
- counter = self.vartree.dbapi.cpv_counter(self.mycpv)
- try:
- slot = self.mycpv.slot
- except AttributeError:
- slot = _pkg_str(self.mycpv, slot=self.settings["SLOT"]).slot
- plib_registry.unregister(self.mycpv, slot, counter)
- if unmerge_preserve:
- for path in sorted(unmerge_preserve):
- contents_key = self._match_contents(path)
- if not contents_key:
- continue
- obj_type = self.getcontents()[contents_key][0]
- self._display_merge(_(">>> needed %s %s\n") % \
- (obj_type, contents_key), noiselevel=-1)
- plib_registry.register(self.mycpv,
- slot, counter, unmerge_preserve)
- # Remove the preserved files from our contents
- # so that they won't be unmerged.
- self.vartree.dbapi.removeFromContents(self,
- unmerge_preserve)
-
- unmerge_no_replacement = \
- unmerge and not unmerge_with_replacement
- cpv_lib_map = self._find_unused_preserved_libs(
- unmerge_no_replacement)
- if cpv_lib_map:
- self._remove_preserved_libs(cpv_lib_map)
- self.vartree.dbapi.lock()
- try:
- for cpv, removed in cpv_lib_map.items():
- if not self.vartree.dbapi.cpv_exists(cpv):
- continue
- self.vartree.dbapi.removeFromContents(cpv, removed)
- finally:
- self.vartree.dbapi.unlock()
-
- plib_registry.store()
- finally:
- plib_registry.unlock()
- self.vartree.dbapi._fs_unlock()
-
- @_slot_locked
- def unmerge(self, pkgfiles=None, trimworld=None, cleanup=True,
- ldpath_mtimes=None, others_in_slot=None, needed=None,
- preserve_paths=None):
- """
- Calls prerm
- Unmerges a given package (CPV)
- calls postrm
- calls cleanrm
- calls env_update
-
- @param pkgfiles: files to unmerge (generally self.getcontents() )
- @type pkgfiles: Dictionary
- @param trimworld: Unused
- @type trimworld: Boolean
- @param cleanup: cleanup to pass to doebuild (see doebuild)
- @type cleanup: Boolean
- @param ldpath_mtimes: mtimes to pass to env_update (see env_update)
- @type ldpath_mtimes: Dictionary
- @param others_in_slot: all dblink instances in this slot, excluding self
- @type others_in_slot: list
- @param needed: Filename containing libraries needed after unmerge.
- @type needed: String
- @param preserve_paths: Libraries preserved by a package instance that
- is currently being merged. They need to be explicitly passed to the
- LinkageMap, since they are not registered in the
- PreservedLibsRegistry yet.
- @type preserve_paths: set
- @rtype: Integer
- @return:
- 1. os.EX_OK if everything went well.
- 2. return code of the failed phase (for prerm, postrm, cleanrm)
- """
-
- if trimworld is not None:
- warnings.warn("The trimworld parameter of the " + \
- "portage.dbapi.vartree.dblink.unmerge()" + \
- " method is now unused.",
- DeprecationWarning, stacklevel=2)
-
- background = False
- log_path = self.settings.get("PORTAGE_LOG_FILE")
- if self._scheduler is None:
- # We create a scheduler instance and use it to
- # log unmerge output separately from merge output.
- self._scheduler = SchedulerInterface(asyncio._safe_loop())
- if self.settings.get("PORTAGE_BACKGROUND") == "subprocess":
- if self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "1":
- self.settings["PORTAGE_BACKGROUND"] = "1"
- self.settings.backup_changes("PORTAGE_BACKGROUND")
- background = True
- elif self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "0":
- self.settings["PORTAGE_BACKGROUND"] = "0"
- self.settings.backup_changes("PORTAGE_BACKGROUND")
- elif self.settings.get("PORTAGE_BACKGROUND") == "1":
- background = True
-
- self.vartree.dbapi._bump_mtime(self.mycpv)
- showMessage = self._display_merge
- if self.vartree.dbapi._categories is not None:
- self.vartree.dbapi._categories = None
-
- # When others_in_slot is not None, the backup has already been
- # handled by the caller.
- caller_handles_backup = others_in_slot is not None
-
- # When others_in_slot is supplied, the security check has already been
- # done for this slot, so it shouldn't be repeated until the next
- # replacement or unmerge operation.
- if others_in_slot is None:
- slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
- slot_matches = self.vartree.dbapi.match(
- "%s:%s" % (portage.cpv_getkey(self.mycpv), slot))
- others_in_slot = []
- for cur_cpv in slot_matches:
- if cur_cpv == self.mycpv:
- continue
- others_in_slot.append(dblink(self.cat, catsplit(cur_cpv)[1],
- settings=self.settings, vartree=self.vartree,
- treetype="vartree", pipe=self._pipe))
-
- retval = self._security_check([self] + others_in_slot)
- if retval:
- return retval
-
- contents = self.getcontents()
- # Now, don't assume that the name of the ebuild is the same as the
- # name of the dir; the package may have been moved.
- myebuildpath = os.path.join(self.dbdir, self.pkg + ".ebuild")
- failures = 0
- ebuild_phase = "prerm"
- mystuff = os.listdir(self.dbdir)
- for x in mystuff:
- if x.endswith(".ebuild"):
- if x[:-7] != self.pkg:
- # Clean up after vardbapi.move_ent() breakage in
- # portage versions before 2.1.2
- os.rename(os.path.join(self.dbdir, x), myebuildpath)
- write_atomic(os.path.join(self.dbdir, "PF"), self.pkg+"\n")
- break
-
- if self.mycpv != self.settings.mycpv or \
- "EAPI" not in self.settings.configdict["pkg"]:
- # We avoid a redundant setcpv call here when
- # the caller has already taken care of it.
- self.settings.setcpv(self.mycpv, mydb=self.vartree.dbapi)
-
- eapi_unsupported = False
- try:
- doebuild_environment(myebuildpath, "prerm",
- settings=self.settings, db=self.vartree.dbapi)
- except UnsupportedAPIException as e:
- eapi_unsupported = e
-
- if self._preserve_libs and "preserve-libs" in \
- self.settings["PORTAGE_RESTRICT"].split():
- self._preserve_libs = False
-
- builddir_lock = None
- scheduler = self._scheduler
- retval = os.EX_OK
- try:
- # Only create builddir_lock if the caller
- # has not already acquired the lock.
- if "PORTAGE_BUILDDIR_LOCKED" not in self.settings:
- builddir_lock = EbuildBuildDir(
- scheduler=scheduler,
- settings=self.settings)
- scheduler.run_until_complete(builddir_lock.async_lock())
- prepare_build_dirs(settings=self.settings, cleanup=True)
- log_path = self.settings.get("PORTAGE_LOG_FILE")
-
- # Do this before the following _prune_plib_registry call, since
- # that removes preserved libraries from our CONTENTS, and we
- # may want to backup those libraries first.
- if not caller_handles_backup:
- retval = self._pre_unmerge_backup(background)
- if retval != os.EX_OK:
- showMessage(_("!!! FAILED prerm: quickpkg: %s\n") % retval,
- level=logging.ERROR, noiselevel=-1)
- return retval
-
- self._prune_plib_registry(unmerge=True, needed=needed,
- preserve_paths=preserve_paths)
-
- # Log the error after PORTAGE_LOG_FILE is initialized
- # by prepare_build_dirs above.
- if eapi_unsupported:
- # Sometimes this happens due to corruption of the EAPI file.
- failures += 1
- showMessage(_("!!! FAILED prerm: %s\n") % \
- os.path.join(self.dbdir, "EAPI"),
- level=logging.ERROR, noiselevel=-1)
- showMessage("%s\n" % (eapi_unsupported,),
- level=logging.ERROR, noiselevel=-1)
- elif os.path.isfile(myebuildpath):
- phase = EbuildPhase(background=background,
- phase=ebuild_phase, scheduler=scheduler,
- settings=self.settings)
- phase.start()
- retval = phase.wait()
-
- # XXX: Decide how to handle failures here.
- if retval != os.EX_OK:
- failures += 1
- showMessage(_("!!! FAILED prerm: %s\n") % retval,
- level=logging.ERROR, noiselevel=-1)
-
- self.vartree.dbapi._fs_lock()
- try:
- self._unmerge_pkgfiles(pkgfiles, others_in_slot)
- finally:
- self.vartree.dbapi._fs_unlock()
- self._clear_contents_cache()
-
- if not eapi_unsupported and os.path.isfile(myebuildpath):
- ebuild_phase = "postrm"
- phase = EbuildPhase(background=background,
- phase=ebuild_phase, scheduler=scheduler,
- settings=self.settings)
- phase.start()
- retval = phase.wait()
-
- # XXX: Decide how to handle failures here.
- if retval != os.EX_OK:
- failures += 1
- showMessage(_("!!! FAILED postrm: %s\n") % retval,
- level=logging.ERROR, noiselevel=-1)
-
- finally:
- self.vartree.dbapi._bump_mtime(self.mycpv)
- try:
- if not eapi_unsupported and os.path.isfile(myebuildpath):
- if retval != os.EX_OK:
- msg_lines = []
- msg = _("The '%(ebuild_phase)s' "
- "phase of the '%(cpv)s' package "
- "has failed with exit value %(retval)s.") % \
- {"ebuild_phase":ebuild_phase, "cpv":self.mycpv,
- "retval":retval}
- from textwrap import wrap
- msg_lines.extend(wrap(msg, 72))
- msg_lines.append("")
-
- ebuild_name = os.path.basename(myebuildpath)
- ebuild_dir = os.path.dirname(myebuildpath)
- msg = _("The problem occurred while executing "
- "the ebuild file named '%(ebuild_name)s' "
- "located in the '%(ebuild_dir)s' directory. "
- "If necessary, manually remove "
- "the environment.bz2 file and/or the "
- "ebuild file located in that directory.") % \
- {"ebuild_name":ebuild_name, "ebuild_dir":ebuild_dir}
- msg_lines.extend(wrap(msg, 72))
- msg_lines.append("")
-
- msg = _("Removal "
- "of the environment.bz2 file is "
- "preferred since it may allow the "
- "removal phases to execute successfully. "
- "The ebuild will be "
- "sourced and the eclasses "
- "from the current ebuild repository will be used "
- "when necessary. Removal of "
- "the ebuild file will cause the "
- "pkg_prerm() and pkg_postrm() removal "
- "phases to be skipped entirely.")
- msg_lines.extend(wrap(msg, 72))
-
- self._eerror(ebuild_phase, msg_lines)
-
- self._elog_process(phasefilter=("prerm", "postrm"))
-
- if retval == os.EX_OK:
- try:
- doebuild_environment(myebuildpath, "cleanrm",
- settings=self.settings, db=self.vartree.dbapi)
- except UnsupportedAPIException:
- pass
- phase = EbuildPhase(background=background,
- phase="cleanrm", scheduler=scheduler,
- settings=self.settings)
- phase.start()
- retval = phase.wait()
- finally:
- if builddir_lock is not None:
- scheduler.run_until_complete(
- builddir_lock.async_unlock())
-
- if log_path is not None:
-
- if not failures and 'unmerge-logs' not in self.settings.features:
- try:
- os.unlink(log_path)
- except OSError:
- pass
-
- try:
- st = os.stat(log_path)
- except OSError:
- pass
- else:
- if st.st_size == 0:
- try:
- os.unlink(log_path)
- except OSError:
- pass
-
- if log_path is not None and os.path.exists(log_path):
- # Restore this since it gets lost somewhere above and it
- # needs to be set for _display_merge() to be able to log.
- # Note that the log isn't necessarily supposed to exist
- # since if PORTAGE_LOGDIR is unset then it's a temp file
- # so it gets cleaned above.
- self.settings["PORTAGE_LOG_FILE"] = log_path
- else:
- self.settings.pop("PORTAGE_LOG_FILE", None)
-
- env_update(target_root=self.settings['ROOT'],
- prev_mtimes=ldpath_mtimes,
- contents=contents, env=self.settings,
- writemsg_level=self._display_merge, vardbapi=self.vartree.dbapi)
-
- unmerge_with_replacement = preserve_paths is not None
- if not unmerge_with_replacement:
- # When there's a replacement package which calls us via treewalk,
- # treewalk will automatically call _prune_plib_registry for us.
- # Otherwise, we need to call _prune_plib_registry ourselves.
- # Don't pass in the "unmerge=True" flag here, since that flag
- # is intended to be used _prior_ to unmerge, not after.
- self._prune_plib_registry()
-
- return os.EX_OK
-
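Near the top of unmerge(), PORTAGE_BACKGROUND_UNMERGE only matters when PORTAGE_BACKGROUND is "subprocess"; otherwise PORTAGE_BACKGROUND alone decides whether output goes to the log. A standalone sketch of that precedence, with a plain dict standing in for self.settings:

    def unmerge_runs_in_background(settings):
        # Mirrors the branch at the start of unmerge() above.
        if settings.get("PORTAGE_BACKGROUND") == "subprocess":
            return settings.get("PORTAGE_BACKGROUND_UNMERGE") == "1"
        return settings.get("PORTAGE_BACKGROUND") == "1"

    print(unmerge_runs_in_background(
        {"PORTAGE_BACKGROUND": "subprocess",
         "PORTAGE_BACKGROUND_UNMERGE": "1"}))                    # True
    print(unmerge_runs_in_background({"PORTAGE_BACKGROUND": "0"}))  # False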
- def _display_merge(self, msg, level=0, noiselevel=0):
- if not self._verbose and noiselevel >= 0 and level < logging.WARN:
- return
- if self._scheduler is None:
- writemsg_level(msg, level=level, noiselevel=noiselevel)
- else:
- log_path = None
- if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
- log_path = self.settings.get("PORTAGE_LOG_FILE")
- background = self.settings.get("PORTAGE_BACKGROUND") == "1"
-
- if background and log_path is None:
- if level >= logging.WARN:
- writemsg_level(msg, level=level, noiselevel=noiselevel)
- else:
- self._scheduler.output(msg,
- log_path=log_path, background=background,
- level=level, noiselevel=noiselevel)
-
- def _show_unmerge(self, zing, desc, file_type, file_name):
- self._display_merge("%s %s %s %s\n" % \
- (zing, desc.ljust(8), file_type, file_name))
-
- def _unmerge_pkgfiles(self, pkgfiles, others_in_slot):
- """
-
- Unmerges the contents of a package from the liveFS
- Removes the VDB entry for self
-
- @param pkgfiles: typically self.getcontents()
- @type pkgfiles: Dictionary { filename: [ 'type', '?', 'md5sum' ] }
- @param others_in_slot: all dblink instances in this slot, excluding self
- @type others_in_slot: list
- @rtype: None
- """
-
- os = _os_merge
- perf_md5 = perform_md5
- showMessage = self._display_merge
- show_unmerge = self._show_unmerge
- ignored_unlink_errnos = self._ignored_unlink_errnos
- ignored_rmdir_errnos = self._ignored_rmdir_errnos
-
- if not pkgfiles:
- showMessage(_("No package files given... Grabbing a set.\n"))
- pkgfiles = self.getcontents()
-
- if others_in_slot is None:
- others_in_slot = []
- slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
- slot_matches = self.vartree.dbapi.match(
- "%s:%s" % (portage.cpv_getkey(self.mycpv), slot))
- for cur_cpv in slot_matches:
- if cur_cpv == self.mycpv:
- continue
- others_in_slot.append(dblink(self.cat, catsplit(cur_cpv)[1],
- settings=self.settings,
- vartree=self.vartree, treetype="vartree", pipe=self._pipe))
-
- cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
- stale_confmem = []
- protected_symlinks = {}
-
- unmerge_orphans = "unmerge-orphans" in self.settings.features
- calc_prelink = "prelink-checksums" in self.settings.features
-
- if pkgfiles:
- self.updateprotect()
- mykeys = list(pkgfiles)
- mykeys.sort()
- mykeys.reverse()
-
- #process symlinks second-to-last, directories last.
- mydirs = set()
-
- uninstall_ignore = portage.util.shlex_split(
- self.settings.get("UNINSTALL_IGNORE", ""))
-
- def unlink(file_name, lstatobj):
- if bsd_chflags:
- if lstatobj.st_flags != 0:
- bsd_chflags.lchflags(file_name, 0)
- parent_name = os.path.dirname(file_name)
- # Use normal stat/chflags for the parent since we want to
- # follow any symlinks to the real parent directory.
- pflags = os.stat(parent_name).st_flags
- if pflags != 0:
- bsd_chflags.chflags(parent_name, 0)
- try:
- if not stat.S_ISLNK(lstatobj.st_mode):
- # Remove permissions to ensure that any hardlinks to
- # suid/sgid files are rendered harmless.
- os.chmod(file_name, 0)
- os.unlink(file_name)
- except OSError as ose:
- # If the chmod or unlink fails, you are in trouble.
- # With Prefix this can be because the file is owned
- # by someone else (a screwup by root?), on a normal
- # system maybe filesystem corruption. In any case,
- # if we backtrace and die here, we leave the system
- # in a totally undefined state, hence we just bleed
- # like hell and continue to hopefully finish all our
- # administrative and pkg_postinst stuff.
- self._eerror("postrm",
- ["Could not chmod or unlink '%s': %s" % \
- (file_name, ose)])
- else:
-
- # Even though the file no longer exists, we log it
- # here so that _unmerge_dirs can see that we've
- # removed a file from this device, and will record
- # the parent directory for a syncfs call.
- self._merged_path(file_name, lstatobj, exists=False)
-
- finally:
- if bsd_chflags and pflags != 0:
- # Restore the parent flags we saved before unlinking
- bsd_chflags.chflags(parent_name, pflags)
-
- unmerge_desc = {}
- unmerge_desc["cfgpro"] = _("cfgpro")
- unmerge_desc["replaced"] = _("replaced")
- unmerge_desc["!dir"] = _("!dir")
- unmerge_desc["!empty"] = _("!empty")
- unmerge_desc["!fif"] = _("!fif")
- unmerge_desc["!found"] = _("!found")
- unmerge_desc["!md5"] = _("!md5")
- unmerge_desc["!mtime"] = _("!mtime")
- unmerge_desc["!obj"] = _("!obj")
- unmerge_desc["!sym"] = _("!sym")
- unmerge_desc["!prefix"] = _("!prefix")
-
- real_root = self.settings['ROOT']
- real_root_len = len(real_root) - 1
- eroot = self.settings["EROOT"]
-
- infodirs = frozenset(infodir for infodir in chain(
- self.settings.get("INFOPATH", "").split(":"),
- self.settings.get("INFODIR", "").split(":")) if infodir)
- infodirs_inodes = set()
- for infodir in infodirs:
- infodir = os.path.join(real_root, infodir.lstrip(os.sep))
- try:
- statobj = os.stat(infodir)
- except OSError:
- pass
- else:
- infodirs_inodes.add((statobj.st_dev, statobj.st_ino))
-
- for i, objkey in enumerate(mykeys):
-
- obj = normalize_path(objkey)
- if os is _os_merge:
- try:
- _unicode_encode(obj,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- _unicode_encode(obj,
- encoding=_encodings['fs'], errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os = portage.os
- perf_md5 = portage.checksum.perform_md5
-
- file_data = pkgfiles[objkey]
- file_type = file_data[0]
-
- # don't try to unmerge the prefix offset itself
- if len(obj) <= len(eroot) or not obj.startswith(eroot):
- show_unmerge("---", unmerge_desc["!prefix"], file_type, obj)
- continue
-
- statobj = None
- try:
- statobj = os.stat(obj)
- except OSError:
- pass
- lstatobj = None
- try:
- lstatobj = os.lstat(obj)
- except (OSError, AttributeError):
- pass
- islink = lstatobj is not None and stat.S_ISLNK(lstatobj.st_mode)
- if lstatobj is None:
- show_unmerge("---", unmerge_desc["!found"], file_type, obj)
- continue
-
- f_match = obj[len(eroot)-1:]
- ignore = False
- for pattern in uninstall_ignore:
- if fnmatch.fnmatch(f_match, pattern):
- ignore = True
- break
-
- if not ignore:
- if islink and f_match in \
- ("/lib", "/usr/lib", "/usr/local/lib"):
- # Ignore libdir symlinks for bug #423127.
- ignore = True
-
- if ignore:
- show_unmerge("---", unmerge_desc["cfgpro"], file_type, obj)
- continue
-
- # don't use EROOT, CONTENTS entries already contain EPREFIX
- if obj.startswith(real_root):
- relative_path = obj[real_root_len:]
- is_owned = False
- for dblnk in others_in_slot:
- if dblnk.isowner(relative_path):
- is_owned = True
- break
-
- if is_owned and islink and \
- file_type in ("sym", "dir") and \
- statobj and stat.S_ISDIR(statobj.st_mode):
- # A new instance of this package claims the file, so
- # don't unmerge it. If the file is symlink to a
- # directory and the unmerging package installed it as
- # a symlink, but the new owner has it listed as a
- # directory, then we'll produce a warning since the
- # symlink is a sort of orphan in this case (see
- # bug #326685).
- symlink_orphan = False
- for dblnk in others_in_slot:
- parent_contents_key = \
- dblnk._match_contents(relative_path)
- if not parent_contents_key:
- continue
- if not parent_contents_key.startswith(
- real_root):
- continue
- if dblnk.getcontents()[
- parent_contents_key][0] == "dir":
- symlink_orphan = True
- break
-
- if symlink_orphan:
- protected_symlinks.setdefault(
- (statobj.st_dev, statobj.st_ino),
- []).append(relative_path)
-
- if is_owned:
- show_unmerge("---", unmerge_desc["replaced"], file_type, obj)
- continue
- elif relative_path in cfgfiledict:
- stale_confmem.append(relative_path)
-
- # Don't unlink symlinks to directories here since that can
- # remove /lib and /usr/lib symlinks.
- if unmerge_orphans and \
- lstatobj and not stat.S_ISDIR(lstatobj.st_mode) and \
- not (islink and statobj and stat.S_ISDIR(statobj.st_mode)) and \
- not self.isprotected(obj):
- try:
- unlink(obj, lstatobj)
- except EnvironmentError as e:
- if e.errno not in ignored_unlink_errnos:
- raise
- del e
- show_unmerge("<<<", "", file_type, obj)
- continue
-
- lmtime = str(lstatobj[stat.ST_MTIME])
- if (pkgfiles[objkey][0] not in ("dir", "fif", "dev")) and (lmtime != pkgfiles[objkey][1]):
- show_unmerge("---", unmerge_desc["!mtime"], file_type, obj)
- continue
-
- if file_type == "dir" and not islink:
- if lstatobj is None or not stat.S_ISDIR(lstatobj.st_mode):
- show_unmerge("---", unmerge_desc["!dir"], file_type, obj)
- continue
- mydirs.add((obj, (lstatobj.st_dev, lstatobj.st_ino)))
- elif file_type == "sym" or (file_type == "dir" and islink):
- if not islink:
- show_unmerge("---", unmerge_desc["!sym"], file_type, obj)
- continue
-
- # If this symlink points to a directory then we don't want
- # to unmerge it if there are any other packages that
- # installed files into the directory via this symlink
- # (see bug #326685).
- # TODO: Resolving a symlink to a directory will require
- # simulation if $ROOT != / and the link is not relative.
- if islink and statobj and stat.S_ISDIR(statobj.st_mode) \
- and obj.startswith(real_root):
-
- relative_path = obj[real_root_len:]
- try:
- target_dir_contents = os.listdir(obj)
- except OSError:
- pass
- else:
- if target_dir_contents:
- # If all the children are regular files owned
- # by this package, then the symlink should be
- # safe to unmerge.
- all_owned = True
- for child in target_dir_contents:
- child = os.path.join(relative_path, child)
- if not self.isowner(child):
- all_owned = False
- break
- try:
- child_lstat = os.lstat(os.path.join(
- real_root, child.lstrip(os.sep)))
- except OSError:
- continue
-
- if not stat.S_ISREG(child_lstat.st_mode):
- # Nested symlinks or directories make
- # the issue very complex, so just
- # preserve the symlink in order to be
- # on the safe side.
- all_owned = False
- break
-
- if not all_owned:
- protected_symlinks.setdefault(
- (statobj.st_dev, statobj.st_ino),
- []).append(relative_path)
- show_unmerge("---", unmerge_desc["!empty"],
- file_type, obj)
- continue
-
- # Go ahead and unlink symlinks to directories here when
- # they're actually recorded as symlinks in the contents.
- # Normally, symlinks such as /lib -> lib64 are not recorded
- # as symlinks in the contents of a package. If a package
- # installs something into ${D}/lib/, it is recorded in the
- # contents as a directory even if it happens to correspond
- # to a symlink when it's merged to the live filesystem.
- try:
- unlink(obj, lstatobj)
- show_unmerge("<<<", "", file_type, obj)
- except (OSError, IOError) as e:
- if e.errno not in ignored_unlink_errnos:
- raise
- del e
- show_unmerge("!!!", "", file_type, obj)
- elif pkgfiles[objkey][0] == "obj":
- if statobj is None or not stat.S_ISREG(statobj.st_mode):
- show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
- continue
- mymd5 = None
- try:
- mymd5 = perf_md5(obj, calc_prelink=calc_prelink)
- except FileNotFound as e:
- # the file has disappeared between now and our stat call
- show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
- continue
-
- # string.lower is needed because db entries used to be in upper-case. The
- # string.lower allows for backwards compatibility.
- if mymd5 != pkgfiles[objkey][2].lower():
- show_unmerge("---", unmerge_desc["!md5"], file_type, obj)
- continue
- try:
- unlink(obj, lstatobj)
- except (OSError, IOError) as e:
- if e.errno not in ignored_unlink_errnos:
- raise
- del e
- show_unmerge("<<<", "", file_type, obj)
- elif pkgfiles[objkey][0] == "fif":
- if not stat.S_ISFIFO(lstatobj[stat.ST_MODE]):
- show_unmerge("---", unmerge_desc["!fif"], file_type, obj)
- continue
- show_unmerge("---", "", file_type, obj)
- elif pkgfiles[objkey][0] == "dev":
- show_unmerge("---", "", file_type, obj)
-
- self._unmerge_dirs(mydirs, infodirs_inodes,
- protected_symlinks, unmerge_desc, unlink, os)
- mydirs.clear()
-
- if protected_symlinks:
- self._unmerge_protected_symlinks(others_in_slot, infodirs_inodes,
- protected_symlinks, unmerge_desc, unlink, os)
-
- if protected_symlinks:
- msg = "One or more symlinks to directories have been " + \
- "preserved in order to ensure that files installed " + \
- "via these symlinks remain accessible. " + \
- "This indicates that the mentioned symlink(s) may " + \
- "be obsolete remnants of an old install, and it " + \
- "may be appropriate to replace a given symlink " + \
- "with the directory that it points to."
- lines = textwrap.wrap(msg, 72)
- lines.append("")
- flat_list = set()
- flat_list.update(*protected_symlinks.values())
- flat_list = sorted(flat_list)
- for f in flat_list:
- lines.append("\t%s" % (os.path.join(real_root,
- f.lstrip(os.sep))))
- lines.append("")
- self._elog("elog", "postrm", lines)
-
- # Remove stale entries from config memory.
- if stale_confmem:
- for filename in stale_confmem:
- del cfgfiledict[filename]
- writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
-
- #remove self from vartree database so that our own virtual gets zapped if we're the last node
- self.vartree.zap(self.mycpv)
-
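_unmerge_pkgfiles() filters every CONTENTS path against UNINSTALL_IGNORE with fnmatch before deciding whether to remove it. A standalone sketch of that filter; the patterns and paths are example values, not Portage defaults:

    import fnmatch

    uninstall_ignore = ["/lib/modules/*", "/var/lib/portage/*"]  # example patterns

    def ignored(root_relative_path):
        # Same test as in _unmerge_pkgfiles(): any matching pattern wins.
        return any(fnmatch.fnmatch(root_relative_path, pat)
                   for pat in uninstall_ignore)

    print(ignored("/lib/modules/6.6.8/kernel/fs/foo.ko"))  # True
    print(ignored("/usr/bin/foo"))                         # False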
- def _unmerge_protected_symlinks(self, others_in_slot, infodirs_inodes,
- protected_symlinks, unmerge_desc, unlink, os):
-
- real_root = self.settings['ROOT']
- show_unmerge = self._show_unmerge
- ignored_unlink_errnos = self._ignored_unlink_errnos
-
- flat_list = set()
- flat_list.update(*protected_symlinks.values())
- flat_list = sorted(flat_list)
-
- for f in flat_list:
- for dblnk in others_in_slot:
- if dblnk.isowner(f):
- # If another package in the same slot installed
- # a file via a protected symlink, return early
- # and don't bother searching for any other owners.
- return
-
- msg = []
- msg.append("")
- msg.append(_("Directory symlink(s) may need protection:"))
- msg.append("")
-
- for f in flat_list:
- msg.append("\t%s" % \
- os.path.join(real_root, f.lstrip(os.path.sep)))
-
- msg.append("")
- msg.append("Use the UNINSTALL_IGNORE variable to exempt specific symlinks")
- msg.append("from the following search (see the make.conf man page).")
- msg.append("")
- msg.append(_("Searching all installed"
- " packages for files installed via above symlink(s)..."))
- msg.append("")
- self._elog("elog", "postrm", msg)
-
- self.lockdb()
- try:
- owners = self.vartree.dbapi._owners.get_owners(flat_list)
- self.vartree.dbapi.flush_cache()
- finally:
- self.unlockdb()
-
- for owner in list(owners):
- if owner.mycpv == self.mycpv:
- owners.pop(owner, None)
-
- if not owners:
- msg = []
- msg.append(_("The above directory symlink(s) are all "
- "safe to remove. Removing them now..."))
- msg.append("")
- self._elog("elog", "postrm", msg)
- dirs = set()
- for unmerge_syms in protected_symlinks.values():
- for relative_path in unmerge_syms:
- obj = os.path.join(real_root,
- relative_path.lstrip(os.sep))
- parent = os.path.dirname(obj)
- while len(parent) > len(self._eroot):
- try:
- lstatobj = os.lstat(parent)
- except OSError:
- break
- else:
- dirs.add((parent,
- (lstatobj.st_dev, lstatobj.st_ino)))
- parent = os.path.dirname(parent)
- try:
- unlink(obj, os.lstat(obj))
- show_unmerge("<<<", "", "sym", obj)
- except (OSError, IOError) as e:
- if e.errno not in ignored_unlink_errnos:
- raise
- del e
- show_unmerge("!!!", "", "sym", obj)
-
- protected_symlinks.clear()
- self._unmerge_dirs(dirs, infodirs_inodes,
- protected_symlinks, unmerge_desc, unlink, os)
- dirs.clear()
-
- def _unmerge_dirs(self, dirs, infodirs_inodes,
- protected_symlinks, unmerge_desc, unlink, os):
-
- show_unmerge = self._show_unmerge
- infodir_cleanup = self._infodir_cleanup
- ignored_unlink_errnos = self._ignored_unlink_errnos
- ignored_rmdir_errnos = self._ignored_rmdir_errnos
- real_root = self.settings['ROOT']
-
- dirs = sorted(dirs)
- revisit = {}
-
- while True:
- try:
- obj, inode_key = dirs.pop()
- except IndexError:
- break
- # Treat any directory named "info" as a candidate here,
- # since it might have been in INFOPATH previously even
- # though it may not be there now.
- if inode_key in infodirs_inodes or \
- os.path.basename(obj) == "info":
- try:
- remaining = os.listdir(obj)
- except OSError:
- pass
- else:
- cleanup_info_dir = ()
- if remaining and \
- len(remaining) <= len(infodir_cleanup):
- if not set(remaining).difference(infodir_cleanup):
- cleanup_info_dir = remaining
-
- for child in cleanup_info_dir:
- child = os.path.join(obj, child)
- try:
- lstatobj = os.lstat(child)
- if stat.S_ISREG(lstatobj.st_mode):
- unlink(child, lstatobj)
- show_unmerge("<<<", "", "obj", child)
- except EnvironmentError as e:
- if e.errno not in ignored_unlink_errnos:
- raise
- del e
- show_unmerge("!!!", "", "obj", child)
-
- try:
- parent_name = os.path.dirname(obj)
- parent_stat = os.stat(parent_name)
-
- if bsd_chflags:
- lstatobj = os.lstat(obj)
- if lstatobj.st_flags != 0:
- bsd_chflags.lchflags(obj, 0)
-
- # Use normal stat/chflags for the parent since we want to
- # follow any symlinks to the real parent directory.
- pflags = parent_stat.st_flags
- if pflags != 0:
- bsd_chflags.chflags(parent_name, 0)
- try:
- os.rmdir(obj)
- finally:
- if bsd_chflags and pflags != 0:
- # Restore the parent flags we saved before unlinking
- bsd_chflags.chflags(parent_name, pflags)
-
- # Record the parent directory for use in syncfs calls.
- # Note that we use a realpath and a regular stat here, since
- # we want to follow any symlinks back to the real device where
- # the real parent directory resides.
- self._merged_path(os.path.realpath(parent_name), parent_stat)
-
- show_unmerge("<<<", "", "dir", obj)
- except EnvironmentError as e:
- if e.errno not in ignored_rmdir_errnos:
- raise
- if e.errno != errno.ENOENT:
- show_unmerge("---", unmerge_desc["!empty"], "dir", obj)
- revisit[obj] = inode_key
-
- # Since we didn't remove this directory, record the directory
- # itself for use in syncfs calls, if we have removed another
- # file from the same device.
- # Note that we use a realpath and a regular stat here, since
- # we want to follow any symlinks back to the real device where
- # the real directory resides.
- try:
- dir_stat = os.stat(obj)
- except OSError:
- pass
- else:
- if dir_stat.st_dev in self._device_path_map:
- self._merged_path(os.path.realpath(obj), dir_stat)
-
- else:
- # When a directory is successfully removed, there's
- # no need to protect symlinks that point to it.
- unmerge_syms = protected_symlinks.pop(inode_key, None)
- if unmerge_syms is not None:
- parents = []
- for relative_path in unmerge_syms:
- obj = os.path.join(real_root,
- relative_path.lstrip(os.sep))
- try:
- unlink(obj, os.lstat(obj))
- show_unmerge("<<<", "", "sym", obj)
- except (OSError, IOError) as e:
- if e.errno not in ignored_unlink_errnos:
- raise
- del e
- show_unmerge("!!!", "", "sym", obj)
- else:
- parents.append(os.path.dirname(obj))
-
- if parents:
- # Revisit parents recursively (bug 640058).
- recursive_parents = []
- for parent in set(parents):
- while parent in revisit:
- recursive_parents.append(parent)
- parent = os.path.dirname(parent)
- if parent == '/':
- break
-
- for parent in sorted(set(recursive_parents)):
- dirs.append((parent, revisit.pop(parent)))
-
- def isowner(self, filename, destroot=None):
- """
- Check if a file belongs to this package. This may
- result in a stat call for the parent directory of
- every installed file, since the inode numbers are
- used to work around the problem of ambiguous paths
- caused by symlinked directories. The results of
- stat calls are cached to optimize multiple calls
- to this method.
-
- @param filename:
- @type filename:
- @param destroot:
- @type destroot:
- @rtype: Boolean
- @return:
- 1. True if this package owns the file.
- 2. False if this package does not own the file.
- """
-
- if destroot is not None and destroot != self._eroot:
- warnings.warn("The second parameter of the " + \
- "portage.dbapi.vartree.dblink.isowner()" + \
- " is now unused. Instead " + \
- "self.settings['EROOT'] will be used.",
- DeprecationWarning, stacklevel=2)
-
- return bool(self._match_contents(filename))
-
- def _match_contents(self, filename, destroot=None):
- """
- The matching contents entry is returned, which is useful
- since the path may differ from the one given by the caller,
- due to symlinks.
-
- @rtype: String
- @return: the contents entry corresponding to the given path, or False
- if the file is not owned by this package.
- """
-
- filename = _unicode_decode(filename,
- encoding=_encodings['content'], errors='strict')
-
- if destroot is not None and destroot != self._eroot:
- warnings.warn("The second parameter of the " + \
- "portage.dbapi.vartree.dblink._match_contents()" + \
- " is now unused. Instead " + \
- "self.settings['ROOT'] will be used.",
- DeprecationWarning, stacklevel=2)
-
- # don't use EROOT here, image already contains EPREFIX
- destroot = self.settings['ROOT']
-
- # The given filename argument might have a different encoding than the
- # the filenames contained in the contents, so use separate wrapped os
- # modules for each. The basename is more likely to contain non-ascii
- # characters than the directory path, so use os_filename_arg for all
- # operations involving the basename of the filename arg.
- os_filename_arg = _os_merge
- os = _os_merge
-
- try:
- _unicode_encode(filename,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- _unicode_encode(filename,
- encoding=_encodings['fs'], errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os_filename_arg = portage.os
-
- destfile = normalize_path(
- os_filename_arg.path.join(destroot,
- filename.lstrip(os_filename_arg.path.sep)))
-
- if "case-insensitive-fs" in self.settings.features:
- destfile = destfile.lower()
-
- if self._contents.contains(destfile):
- return self._contents.unmap_key(destfile)
-
- if self.getcontents():
- basename = os_filename_arg.path.basename(destfile)
- if self._contents_basenames is None:
-
- try:
- for x in self._contents.keys():
- _unicode_encode(x,
- encoding=_encodings['merge'],
- errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- for x in self._contents.keys():
- _unicode_encode(x,
- encoding=_encodings['fs'],
- errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os = portage.os
-
- self._contents_basenames = set(
- os.path.basename(x) for x in self._contents.keys())
- if basename not in self._contents_basenames:
- # This is a shortcut that, in most cases, allows us to
- # eliminate this package as an owner without the need
- # to examine inode numbers of parent directories.
- return False
-
- # Use stat rather than lstat since we want to follow
- # any symlinks to the real parent directory.
- parent_path = os_filename_arg.path.dirname(destfile)
- try:
- parent_stat = os_filename_arg.stat(parent_path)
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- return False
- if self._contents_inodes is None:
-
- if os is _os_merge:
- try:
- for x in self._contents.keys():
- _unicode_encode(x,
- encoding=_encodings['merge'],
- errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- for x in self._contents.keys():
- _unicode_encode(x,
- encoding=_encodings['fs'],
- errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os = portage.os
-
- self._contents_inodes = {}
- parent_paths = set()
- for x in self._contents.keys():
- p_path = os.path.dirname(x)
- if p_path in parent_paths:
- continue
- parent_paths.add(p_path)
- try:
- s = os.stat(p_path)
- except OSError:
- pass
- else:
- inode_key = (s.st_dev, s.st_ino)
- # Use lists of paths in case multiple
- # paths reference the same inode.
- p_path_list = self._contents_inodes.get(inode_key)
- if p_path_list is None:
- p_path_list = []
- self._contents_inodes[inode_key] = p_path_list
- if p_path not in p_path_list:
- p_path_list.append(p_path)
-
- p_path_list = self._contents_inodes.get(
- (parent_stat.st_dev, parent_stat.st_ino))
- if p_path_list:
- for p_path in p_path_list:
- x = os_filename_arg.path.join(p_path, basename)
- if self._contents.contains(x):
- return self._contents.unmap_key(x)
-
- return False
-
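The inode trick described in isowner()/_match_contents() above works around symlinked directories: instead of comparing parent-directory strings, it compares their (st_dev, st_ino) pairs. A standalone sketch of that comparison, with example paths:

    import os

    def same_parent_dir(path_a, path_b):
        # Follow symlinks with stat() and compare device/inode of the parents.
        sa = os.stat(os.path.dirname(path_a))
        sb = os.stat(os.path.dirname(path_b))
        return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

    # True even if one path reaches the directory through a symlink.
    print(same_parent_dir("/usr/bin/env", "/usr/bin/python3"))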
- def _linkmap_rebuild(self, **kwargs):
- """
- Rebuild the self._linkmap if it's not broken due to missing
- scanelf binary. Also, return early if preserve-libs is disabled
- and the preserve-libs registry is empty.
- """
- if self._linkmap_broken or \
- self.vartree.dbapi._linkmap is None or \
- self.vartree.dbapi._plib_registry is None or \
- ("preserve-libs" not in self.settings.features and \
- not self.vartree.dbapi._plib_registry.hasEntries()):
- return
- try:
- self.vartree.dbapi._linkmap.rebuild(**kwargs)
- except CommandNotFound as e:
- self._linkmap_broken = True
- self._display_merge(_("!!! Disabling preserve-libs " \
- "due to error: Command Not Found: %s\n") % (e,),
- level=logging.ERROR, noiselevel=-1)
-
- def _find_libs_to_preserve(self, unmerge=False):
- """
- Get set of relative paths for libraries to be preserved. When
- unmerge is False, file paths to preserve are selected from
- self._installed_instance. Otherwise, paths are selected from
- self.
- """
- if self._linkmap_broken or \
- self.vartree.dbapi._linkmap is None or \
- self.vartree.dbapi._plib_registry is None or \
- (not unmerge and self._installed_instance is None) or \
- not self._preserve_libs:
- return set()
-
- os = _os_merge
- linkmap = self.vartree.dbapi._linkmap
- if unmerge:
- installed_instance = self
- else:
- installed_instance = self._installed_instance
- old_contents = installed_instance.getcontents()
- root = self.settings['ROOT']
- root_len = len(root) - 1
- lib_graph = digraph()
- path_node_map = {}
-
- def path_to_node(path):
- node = path_node_map.get(path)
- if node is None:
- node = linkmap._LibGraphNode(linkmap._obj_key(path))
- alt_path_node = lib_graph.get(node)
- if alt_path_node is not None:
- node = alt_path_node
- node.alt_paths.add(path)
- path_node_map[path] = node
- return node
-
- consumer_map = {}
- provider_nodes = set()
- # Create provider nodes and add them to the graph.
- for f_abs in old_contents:
-
- if os is _os_merge:
- try:
- _unicode_encode(f_abs,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- _unicode_encode(f_abs,
- encoding=_encodings['fs'], errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os = portage.os
-
- f = f_abs[root_len:]
- try:
- consumers = linkmap.findConsumers(f,
- exclude_providers=(installed_instance.isowner,))
- except KeyError:
- continue
- if not consumers:
- continue
- provider_node = path_to_node(f)
- lib_graph.add(provider_node, None)
- provider_nodes.add(provider_node)
- consumer_map[provider_node] = consumers
-
- # Create consumer nodes and add them to the graph.
- # Note that consumers can also be providers.
- for provider_node, consumers in consumer_map.items():
- for c in consumers:
- consumer_node = path_to_node(c)
- if installed_instance.isowner(c) and \
- consumer_node not in provider_nodes:
- # This is not a provider, so it will be uninstalled.
- continue
- lib_graph.add(provider_node, consumer_node)
-
- # Locate nodes which should be preserved. They consist of all
- # providers that are reachable from consumers that are not
- # providers themselves.
- preserve_nodes = set()
- for consumer_node in lib_graph.root_nodes():
- if consumer_node in provider_nodes:
- continue
- # Preserve all providers that are reachable from this consumer.
- node_stack = lib_graph.child_nodes(consumer_node)
- while node_stack:
- provider_node = node_stack.pop()
- if provider_node in preserve_nodes:
- continue
- preserve_nodes.add(provider_node)
- node_stack.extend(lib_graph.child_nodes(provider_node))
-
- preserve_paths = set()
- for preserve_node in preserve_nodes:
- # Preserve the library itself, and also preserve the
- # soname symlink which is the only symlink that is
- # strictly required.
- hardlinks = set()
- soname_symlinks = set()
- soname = linkmap.getSoname(next(iter(preserve_node.alt_paths)))
- have_replacement_soname_link = False
- have_replacement_hardlink = False
- for f in preserve_node.alt_paths:
- f_abs = os.path.join(root, f.lstrip(os.sep))
- try:
- if stat.S_ISREG(os.lstat(f_abs).st_mode):
- hardlinks.add(f)
- if not unmerge and self.isowner(f):
- have_replacement_hardlink = True
- if os.path.basename(f) == soname:
- have_replacement_soname_link = True
- elif os.path.basename(f) == soname:
- soname_symlinks.add(f)
- if not unmerge and self.isowner(f):
- have_replacement_soname_link = True
- except OSError:
- pass
-
- if have_replacement_hardlink and have_replacement_soname_link:
- continue
-
- if hardlinks:
- preserve_paths.update(hardlinks)
- preserve_paths.update(soname_symlinks)
-
- return preserve_paths
-
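_find_libs_to_preserve() builds a provider/consumer graph and keeps every provider that is still reachable from a consumer which is not itself a provider. A standalone sketch of that reachability walk over a toy graph (library names are invented):

    # Maps each file to the libraries it links against (its providers).
    graph = {
        "/usr/bin/app": ["libfoo.so.1"],
        "libfoo.so.1": ["libbar.so.2"],
        "libbar.so.2": [],
    }
    providers = {"libfoo.so.1", "libbar.so.2"}

    preserve = set()
    # Start from consumers that are not providers themselves.
    stack = [dep for node, deps in graph.items()
             if node not in providers for dep in deps]
    while stack:
        lib = stack.pop()
        if lib in preserve:
            continue
        preserve.add(lib)
        stack.extend(graph.get(lib, []))
    print(sorted(preserve))  # ['libbar.so.2', 'libfoo.so.1']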
- def _add_preserve_libs_to_contents(self, preserve_paths):
- """
- Preserve libs returned from _find_libs_to_preserve().
- """
-
- if not preserve_paths:
- return
-
- os = _os_merge
- showMessage = self._display_merge
- root = self.settings['ROOT']
-
- # Copy contents entries from the old package to the new one.
- new_contents = self.getcontents().copy()
- old_contents = self._installed_instance.getcontents()
- for f in sorted(preserve_paths):
- f = _unicode_decode(f,
- encoding=_encodings['content'], errors='strict')
- f_abs = os.path.join(root, f.lstrip(os.sep))
- contents_entry = old_contents.get(f_abs)
- if contents_entry is None:
- # This will probably never happen, but it might if one of the
- # paths returned from findConsumers() refers to one of the libs
- # that should be preserved yet the path is not listed in the
- # contents. Such a path might belong to some other package, so
- # it shouldn't be preserved here.
- showMessage(_("!!! File '%s' will not be preserved "
- "due to missing contents entry\n") % (f_abs,),
- level=logging.ERROR, noiselevel=-1)
- preserve_paths.remove(f)
- continue
- new_contents[f_abs] = contents_entry
- obj_type = contents_entry[0]
- showMessage(_(">>> needed %s %s\n") % (obj_type, f_abs),
- noiselevel=-1)
- # Add parent directories to contents if necessary.
- parent_dir = os.path.dirname(f_abs)
- while len(parent_dir) > len(root):
- new_contents[parent_dir] = ["dir"]
- prev = parent_dir
- parent_dir = os.path.dirname(parent_dir)
- if prev == parent_dir:
- break
- outfile = atomic_ofstream(os.path.join(self.dbtmpdir, "CONTENTS"))
- write_contents(new_contents, root, outfile)
- outfile.close()
- self._clear_contents_cache()
-
- def _find_unused_preserved_libs(self, unmerge_no_replacement):
- """
- Find preserved libraries that don't have any consumers left.
- """
-
- if self._linkmap_broken or \
- self.vartree.dbapi._linkmap is None or \
- self.vartree.dbapi._plib_registry is None or \
- not self.vartree.dbapi._plib_registry.hasEntries():
- return {}
-
- # Since preserved libraries can be consumers of other preserved
- # libraries, use a graph to track consumer relationships.
- plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
- linkmap = self.vartree.dbapi._linkmap
- lib_graph = digraph()
- preserved_nodes = set()
- preserved_paths = set()
- path_cpv_map = {}
- path_node_map = {}
- root = self.settings['ROOT']
-
- def path_to_node(path):
- node = path_node_map.get(path)
- if node is None:
- chost = self.settings.get('CHOST')
- if chost.find('darwin') >= 0:
- node = LinkageMapMachO._LibGraphNode(linkmap._obj_key(path))
- elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
- node = LinkageMapPeCoff._LibGraphNode(linkmap._obj_key(path))
- elif chost.find('aix') >= 0:
- node = LinkageMapXCoff._LibGraphNode(linkmap._obj_key(path))
- else:
- node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
- alt_path_node = lib_graph.get(node)
- if alt_path_node is not None:
- node = alt_path_node
- node.alt_paths.add(path)
- path_node_map[path] = node
- return node
-
- for cpv, plibs in plib_dict.items():
- for f in plibs:
- path_cpv_map[f] = cpv
- preserved_node = path_to_node(f)
- if not preserved_node.file_exists():
- continue
- lib_graph.add(preserved_node, None)
- preserved_paths.add(f)
- preserved_nodes.add(preserved_node)
- for c in self.vartree.dbapi._linkmap.findConsumers(f):
- consumer_node = path_to_node(c)
- if not consumer_node.file_exists():
- continue
- # Note that consumers may also be providers.
- lib_graph.add(preserved_node, consumer_node)
-
- # Eliminate consumers having providers with the same soname as an
- # installed library that is not preserved. This eliminates
- # libraries that are erroneously preserved due to a move from one
- # directory to another.
- # Also eliminate consumers that are going to be unmerged if
- # unmerge_no_replacement is True.
- provider_cache = {}
- for preserved_node in preserved_nodes:
- soname = linkmap.getSoname(preserved_node)
- for consumer_node in lib_graph.parent_nodes(preserved_node):
- if consumer_node in preserved_nodes:
- continue
- if unmerge_no_replacement:
- will_be_unmerged = True
- for path in consumer_node.alt_paths:
- if not self.isowner(path):
- will_be_unmerged = False
- break
- if will_be_unmerged:
- # This consumer is not preserved and it is
- # being unmerged, so drop this edge.
- lib_graph.remove_edge(preserved_node, consumer_node)
- continue
-
- providers = provider_cache.get(consumer_node)
- if providers is None:
- providers = linkmap.findProviders(consumer_node)
- provider_cache[consumer_node] = providers
- providers = providers.get(soname)
- if providers is None:
- continue
- for provider in providers:
- if provider in preserved_paths:
- continue
- provider_node = path_to_node(provider)
- if not provider_node.file_exists():
- continue
- if provider_node in preserved_nodes:
- continue
- # An alternative provider seems to be
- # installed, so drop this edge.
- lib_graph.remove_edge(preserved_node, consumer_node)
- break
-
- cpv_lib_map = {}
- while lib_graph:
- root_nodes = preserved_nodes.intersection(lib_graph.root_nodes())
- if not root_nodes:
- break
- lib_graph.difference_update(root_nodes)
- unlink_list = set()
- for node in root_nodes:
- unlink_list.update(node.alt_paths)
- unlink_list = sorted(unlink_list)
- for obj in unlink_list:
- cpv = path_cpv_map.get(obj)
- if cpv is None:
- # This means that a symlink is in the preserved libs
- # registry, but the actual lib it points to is not.
- self._display_merge(_("!!! symlink to lib is preserved, "
- "but not the lib itself:\n!!! '%s'\n") % (obj,),
- level=logging.ERROR, noiselevel=-1)
- continue
- removed = cpv_lib_map.get(cpv)
- if removed is None:
- removed = set()
- cpv_lib_map[cpv] = removed
- removed.add(obj)
-
- return cpv_lib_map
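# A minimal sketch (hypothetical cpv and library paths, not taken from a
# real vdb) of the mapping returned by _find_unused_preserved_libs() above;
# _remove_preserved_libs() consumes exactly this shape.
example_cpv_lib_map = {
    "dev-libs/libfoo-1.0": {
        "/usr/lib64/libfoo.so.1",
        "/usr/lib64/libfoo.so.1.0.0",
    },
}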
-
- def _remove_preserved_libs(self, cpv_lib_map):
- """
- Remove files returned from _find_unused_preserved_libs().
- """
-
- os = _os_merge
-
- files_to_remove = set()
- for files in cpv_lib_map.values():
- files_to_remove.update(files)
- files_to_remove = sorted(files_to_remove)
- showMessage = self._display_merge
- root = self.settings['ROOT']
-
- parent_dirs = set()
- for obj in files_to_remove:
- obj = os.path.join(root, obj.lstrip(os.sep))
- parent_dirs.add(os.path.dirname(obj))
- if os.path.islink(obj):
- obj_type = _("sym")
- else:
- obj_type = _("obj")
- try:
- os.unlink(obj)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- else:
- showMessage(_("<<< !needed %s %s\n") % (obj_type, obj),
- noiselevel=-1)
-
- # Remove empty parent directories if possible.
- while parent_dirs:
- x = parent_dirs.pop()
- while True:
- try:
- os.rmdir(x)
- except OSError:
- break
- prev = x
- x = os.path.dirname(x)
- if x == prev:
- break
-
- self.vartree.dbapi._plib_registry.pruneNonExisting()
-
- def _collision_protect(self, srcroot, destroot, mypkglist,
- file_list, symlink_list):
-
- os = _os_merge
-
- real_relative_paths = {}
-
- collision_ignore = []
- for x in portage.util.shlex_split(
- self.settings.get("COLLISION_IGNORE", "")):
- if os.path.isdir(os.path.join(self._eroot, x.lstrip(os.sep))):
- x = normalize_path(x)
- x += "/*"
- collision_ignore.append(x)
-
- # For collisions with preserved libraries, the current package
- # will assume ownership and the libraries will be unregistered.
- if self.vartree.dbapi._plib_registry is None:
- # preserve-libs is entirely disabled
- plib_cpv_map = None
- plib_paths = None
- plib_inodes = {}
- else:
- plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
- plib_cpv_map = {}
- plib_paths = set()
- for cpv, paths in plib_dict.items():
- plib_paths.update(paths)
- for f in paths:
- plib_cpv_map[f] = cpv
- plib_inodes = self._lstat_inode_map(plib_paths)
-
- plib_collisions = {}
-
- showMessage = self._display_merge
- stopmerge = False
- collisions = []
- dirs = set()
- dirs_ro = set()
- symlink_collisions = []
- destroot = self.settings['ROOT']
- totfiles = len(file_list) + len(symlink_list)
- previous = time.monotonic()
- progress_shown = False
- report_interval = 1.7 # seconds
- falign = len("%d" % totfiles)
- showMessage(_(" %s checking %d files for package collisions\n") % \
- (colorize("GOOD", "*"), totfiles))
- for i, (f, f_type) in enumerate(chain(
- ((f, "reg") for f in file_list),
- ((f, "sym") for f in symlink_list))):
- current = time.monotonic()
- if current - previous > report_interval:
- showMessage(_("%3d%% done, %*d files remaining ...\n") %
- (i * 100 / totfiles, falign, totfiles - i))
- previous = current
- progress_shown = True
-
- dest_path = normalize_path(os.path.join(destroot, f.lstrip(os.path.sep)))
-
- # Relative path with symbolic links resolved only in parent directories
- real_relative_path = os.path.join(os.path.realpath(os.path.dirname(dest_path)),
- os.path.basename(dest_path))[len(destroot):]
-
- real_relative_paths.setdefault(real_relative_path, []).append(f.lstrip(os.path.sep))
-
- parent = os.path.dirname(dest_path)
- if parent not in dirs:
- for x in iter_parents(parent):
- if x in dirs:
- break
- dirs.add(x)
- if os.path.isdir(x):
- if not os.access(x, os.W_OK):
- dirs_ro.add(x)
- break
-
- try:
- dest_lstat = os.lstat(dest_path)
- except EnvironmentError as e:
- if e.errno == errno.ENOENT:
- del e
- continue
- elif e.errno == errno.ENOTDIR:
- del e
- # A non-directory is in a location where this package
- # expects to have a directory.
- dest_lstat = None
- parent_path = dest_path
- while len(parent_path) > len(destroot):
- parent_path = os.path.dirname(parent_path)
- try:
- dest_lstat = os.lstat(parent_path)
- break
- except EnvironmentError as e:
- if e.errno != errno.ENOTDIR:
- raise
- del e
- if not dest_lstat:
- raise AssertionError(
- "unable to find non-directory " + \
- "parent for '%s'" % dest_path)
- dest_path = parent_path
- f = os.path.sep + dest_path[len(destroot):]
- if f in collisions:
- continue
- else:
- raise
- if f[0] != "/":
- f="/"+f
-
- if stat.S_ISDIR(dest_lstat.st_mode):
- if f_type == "sym":
- # This case is explicitly banned
- # by PMS (see bug #326685).
- symlink_collisions.append(f)
- collisions.append(f)
- continue
-
- plibs = plib_inodes.get((dest_lstat.st_dev, dest_lstat.st_ino))
- if plibs:
- for path in plibs:
- cpv = plib_cpv_map[path]
- paths = plib_collisions.get(cpv)
- if paths is None:
- paths = set()
- plib_collisions[cpv] = paths
- paths.add(path)
- # The current package will assume ownership and the
- # libraries will be unregistered, so exclude this
- # path from the normal collisions.
- continue
-
- isowned = False
- full_path = os.path.join(destroot, f.lstrip(os.path.sep))
- for ver in mypkglist:
- if ver.isowner(f):
- isowned = True
- break
- if not isowned and self.isprotected(full_path):
- isowned = True
- if not isowned:
- f_match = full_path[len(self._eroot)-1:]
- stopmerge = True
- for pattern in collision_ignore:
- if fnmatch.fnmatch(f_match, pattern):
- stopmerge = False
- break
- if stopmerge:
- collisions.append(f)
-
- internal_collisions = {}
- for real_relative_path, files in real_relative_paths.items():
- # Detect internal collisions between non-identical files.
- if len(files) >= 2:
- files.sort()
- for i in range(len(files) - 1):
- file1 = normalize_path(os.path.join(srcroot, files[i]))
- file2 = normalize_path(os.path.join(srcroot, files[i+1]))
- # Compare files, ignoring differences in times.
- differences = compare_files(file1, file2, skipped_types=("atime", "mtime", "ctime"))
- if differences:
- internal_collisions.setdefault(real_relative_path, {})[(files[i], files[i+1])] = differences
-
- if progress_shown:
- showMessage(_("100% done\n"))
-
- return collisions, internal_collisions, dirs_ro, symlink_collisions, plib_collisions
-
- def _lstat_inode_map(self, path_iter):
- """
- Use lstat to create a map of the form:
- {(st_dev, st_ino) : set([path1, path2, ...])}
- Multiple paths may reference the same inode due to hardlinks.
- All lstat() calls are relative to self.myroot.
- """
-
- os = _os_merge
-
- root = self.settings['ROOT']
- inode_map = {}
- for f in path_iter:
- path = os.path.join(root, f.lstrip(os.sep))
- try:
- st = os.lstat(path)
- except OSError as e:
- if e.errno not in (errno.ENOENT, errno.ENOTDIR):
- raise
- del e
- continue
- key = (st.st_dev, st.st_ino)
- paths = inode_map.get(key)
- if paths is None:
- paths = set()
- inode_map[key] = paths
- paths.add(f)
- return inode_map
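# For illustration only: the {(st_dev, st_ino): set(paths)} map built by
# _lstat_inode_map(), with hypothetical device/inode numbers; the two paths
# in the first entry are hardlinks to the same inode.
example_inode_map = {
    (2049, 131072): {"/usr/lib64/libbar.so.2", "/usr/lib64/libbar.so.2.0.0"},
    (2049, 131073): {"/usr/lib64/libbaz.so.3"},
}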
-
- def _security_check(self, installed_instances):
- if not installed_instances:
- return 0
-
- os = _os_merge
-
- showMessage = self._display_merge
-
- file_paths = set()
- for dblnk in installed_instances:
- file_paths.update(dblnk.getcontents())
- inode_map = {}
- real_paths = set()
- for i, path in enumerate(file_paths):
-
- if os is _os_merge:
- try:
- _unicode_encode(path,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- _unicode_encode(path,
- encoding=_encodings['fs'], errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os = portage.os
-
- try:
- s = os.lstat(path)
- except OSError as e:
- if e.errno not in (errno.ENOENT, errno.ENOTDIR):
- raise
- del e
- continue
- if not stat.S_ISREG(s.st_mode):
- continue
- path = os.path.realpath(path)
- if path in real_paths:
- continue
- real_paths.add(path)
- if s.st_nlink > 1 and \
- s.st_mode & (stat.S_ISUID | stat.S_ISGID):
- k = (s.st_dev, s.st_ino)
- inode_map.setdefault(k, []).append((path, s))
- suspicious_hardlinks = []
- for path_list in inode_map.values():
- path, s = path_list[0]
- if len(path_list) == s.st_nlink:
- # All hardlinks seem to be owned by this package.
- continue
- suspicious_hardlinks.append(path_list)
- if not suspicious_hardlinks:
- return 0
-
- msg = []
- msg.append(_("suid/sgid file(s) "
- "with suspicious hardlink(s):"))
- msg.append("")
- for path_list in suspicious_hardlinks:
- for path, s in path_list:
- msg.append("\t%s" % path)
- msg.append("")
- msg.append(_("See the Gentoo Security Handbook "
- "guide for advice on how to proceed."))
-
- self._eerror("preinst", msg)
-
- return 1
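# Sketch of the heuristic above with hypothetical numbers: an inode is only
# suspicious when the installed instances account for fewer links than
# st_nlink reports, i.e. some hardlink lives outside the package(s).
st_nlink = 3
owned_paths = ["/usr/bin/frob", "/usr/bin/frobedit"]  # hypothetical suid binaries
suspicious = len(owned_paths) != st_nlink  # True -> reported via _eerror("preinst", ...)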
-
- def _eqawarn(self, phase, lines):
- self._elog("eqawarn", phase, lines)
-
- def _eerror(self, phase, lines):
- self._elog("eerror", phase, lines)
-
- def _elog(self, funcname, phase, lines):
- func = getattr(portage.elog.messages, funcname)
- if self._scheduler is None:
- for l in lines:
- func(l, phase=phase, key=self.mycpv)
- else:
- background = self.settings.get("PORTAGE_BACKGROUND") == "1"
- log_path = None
- if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
- log_path = self.settings.get("PORTAGE_LOG_FILE")
- out = io.StringIO()
- for line in lines:
- func(line, phase=phase, key=self.mycpv, out=out)
- msg = out.getvalue()
- self._scheduler.output(msg,
- background=background, log_path=log_path)
-
- def _elog_process(self, phasefilter=None):
- cpv = self.mycpv
- if self._pipe is None:
- elog_process(cpv, self.settings, phasefilter=phasefilter)
- else:
- logdir = os.path.join(self.settings["T"], "logging")
- ebuild_logentries = collect_ebuild_messages(logdir)
- # phasefilter is irrelevant for the above collect_ebuild_messages
- # call, since this package instance has a private logdir. However,
- # it may be relevant for the following collect_messages call.
- py_logentries = collect_messages(key=cpv, phasefilter=phasefilter).get(cpv, {})
- logentries = _merge_logentries(py_logentries, ebuild_logentries)
- funcnames = {
- "INFO": "einfo",
- "LOG": "elog",
- "WARN": "ewarn",
- "QA": "eqawarn",
- "ERROR": "eerror"
- }
- str_buffer = []
- for phase, messages in logentries.items():
- for key, lines in messages:
- funcname = funcnames[key]
- if isinstance(lines, str):
- lines = [lines]
- for line in lines:
- for line in line.split('\n'):
- fields = (funcname, phase, cpv, line)
- str_buffer.append(' '.join(fields))
- str_buffer.append('\n')
- if str_buffer:
- str_buffer = _unicode_encode(''.join(str_buffer))
- while str_buffer:
- str_buffer = str_buffer[os.write(self._pipe, str_buffer):]
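# The records written to self._pipe above are newline-terminated,
# space-joined "funcname phase cpv message" fields; a hypothetical example
# (package name and message invented for illustration):
example_record = " ".join(
    ("elog", "postinst", "app-misc/frobnicate-1.2.3", "Run frobnicate --rebuild")
)
# -> "elog postinst app-misc/frobnicate-1.2.3 Run frobnicate --rebuild"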
-
- def _emerge_log(self, msg):
- emergelog(False, msg)
-
- def treewalk(self, srcroot, destroot, inforoot, myebuild, cleanup=0,
- mydbapi=None, prev_mtimes=None, counter=None):
- """
-
- This function does the following:
-
- calls doebuild(mydo=instprep)
- calls get_ro_checker to retrieve a function for checking whether Portage
- will write to a read-only filesystem, then runs it against the directory list
- calls self._preserve_libs if FEATURES=preserve-libs
- calls self._collision_protect if FEATURES=collision-protect
- calls doebuild(mydo=pkg_preinst)
- Merges the package to the livefs
- unmerges old version (if required)
- calls doebuild(mydo=pkg_postinst)
- calls env_update
-
- @param srcroot: Typically this is ${D}
- @type srcroot: String (Path)
- @param destroot: ignored, self.settings['ROOT'] is used instead
- @type destroot: String (Path)
- @param inforoot: root of the vardb entry ?
- @type inforoot: String (Path)
- @param myebuild: path to the ebuild that we are processing
- @type myebuild: String (Path)
- @param mydbapi: dbapi which is handed to doebuild.
- @type mydbapi: portdbapi instance
- @param prev_mtimes: { Filename:mtime } mapping for env_update
- @type prev_mtimes: Dictionary
- @rtype: Boolean
- @return:
- 1. 0 on success
- 2. 1 on failure
-
- secondhand is a list of symlinks that have been skipped due to their target
- not existing; we will merge these symlinks at a later time.
- """
-
- os = _os_merge
-
- srcroot = _unicode_decode(srcroot,
- encoding=_encodings['content'], errors='strict')
- destroot = self.settings['ROOT']
- inforoot = _unicode_decode(inforoot,
- encoding=_encodings['content'], errors='strict')
- myebuild = _unicode_decode(myebuild,
- encoding=_encodings['content'], errors='strict')
-
- showMessage = self._display_merge
- srcroot = normalize_path(srcroot).rstrip(os.path.sep) + os.path.sep
-
- if not os.path.isdir(srcroot):
- showMessage(_("!!! Directory Not Found: D='%s'\n") % srcroot,
- level=logging.ERROR, noiselevel=-1)
- return 1
-
- # run instprep internal phase
- doebuild_environment(myebuild, "instprep",
- settings=self.settings, db=mydbapi)
- phase = EbuildPhase(background=False, phase="instprep",
- scheduler=self._scheduler, settings=self.settings)
- phase.start()
- if phase.wait() != os.EX_OK:
- showMessage(_("!!! instprep failed\n"),
- level=logging.ERROR, noiselevel=-1)
- return 1
-
- is_binpkg = self.settings.get("EMERGE_FROM") == "binary"
- slot = ''
- for var_name in ('CHOST', 'SLOT'):
- try:
- with io.open(_unicode_encode(
- os.path.join(inforoot, var_name),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace') as f:
- val = f.readline().strip()
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- val = ''
-
- if var_name == 'SLOT':
- slot = val
-
- if not slot.strip():
- slot = self.settings.get(var_name, '')
- if not slot.strip():
- showMessage(_("!!! SLOT is undefined\n"),
- level=logging.ERROR, noiselevel=-1)
- return 1
- write_atomic(os.path.join(inforoot, var_name), slot + '\n')
-
- # This check only applies when built from source, since
- # inforoot values are written just after src_install.
- if not is_binpkg and val != self.settings.get(var_name, ''):
- self._eqawarn('preinst',
- [_("QA Notice: Expected %(var_name)s='%(expected_value)s', got '%(actual_value)s'\n") % \
- {"var_name":var_name, "expected_value":self.settings.get(var_name, ''), "actual_value":val}])
-
- def eerror(lines):
- self._eerror("preinst", lines)
-
- if not os.path.exists(self.dbcatdir):
- ensure_dirs(self.dbcatdir)
-
- # NOTE: We use SLOT obtained from the inforoot
- # directory, in order to support USE=multislot.
-        # Use _pkg_str to discard the sub-slot part if necessary.
- slot = _pkg_str(self.mycpv, slot=slot).slot
- cp = self.mysplit[0]
- slot_atom = "%s:%s" % (cp, slot)
-
- self.lockdb()
- try:
- # filter any old-style virtual matches
- slot_matches = [cpv for cpv in self.vartree.dbapi.match(slot_atom)
- if cpv_getkey(cpv) == cp]
-
- if self.mycpv not in slot_matches and \
- self.vartree.dbapi.cpv_exists(self.mycpv):
- # handle multislot or unapplied slotmove
- slot_matches.append(self.mycpv)
-
- others_in_slot = []
- for cur_cpv in slot_matches:
- # Clone the config in case one of these has to be unmerged,
- # since we need it to have private ${T} etc... for things
- # like elog.
- settings_clone = portage.config(clone=self.settings)
- # This reset ensures that there is no unintended leakage
- # of variables which should not be shared.
- settings_clone.reset()
- settings_clone.setcpv(cur_cpv, mydb=self.vartree.dbapi)
- if self._preserve_libs and "preserve-libs" in \
- settings_clone["PORTAGE_RESTRICT"].split():
- self._preserve_libs = False
- others_in_slot.append(dblink(self.cat, catsplit(cur_cpv)[1],
- settings=settings_clone,
- vartree=self.vartree, treetype="vartree",
- scheduler=self._scheduler, pipe=self._pipe))
- finally:
- self.unlockdb()
-
- # If any instance has RESTRICT=preserve-libs, then
- # restrict it for all instances.
- if not self._preserve_libs:
- for dblnk in others_in_slot:
- dblnk._preserve_libs = False
-
- retval = self._security_check(others_in_slot)
- if retval:
- return retval
-
- if slot_matches:
- # Used by self.isprotected().
- max_dblnk = None
- max_counter = -1
- for dblnk in others_in_slot:
- cur_counter = self.vartree.dbapi.cpv_counter(dblnk.mycpv)
- if cur_counter > max_counter:
- max_counter = cur_counter
- max_dblnk = dblnk
- self._installed_instance = max_dblnk
-
- # Apply INSTALL_MASK before collision-protect, since it may
- # be useful to avoid collisions in some scenarios.
- # We cannot detect if this is needed or not here as INSTALL_MASK can be
- # modified by bashrc files.
- phase = MiscFunctionsProcess(background=False,
- commands=["preinst_mask"], phase="preinst",
- scheduler=self._scheduler, settings=self.settings)
- phase.start()
- phase.wait()
- try:
- with io.open(_unicode_encode(os.path.join(inforoot, "INSTALL_MASK"),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'],
- errors='replace') as f:
- install_mask = InstallMask(f.read())
- except EnvironmentError:
- install_mask = None
-
- if install_mask:
- install_mask_dir(self.settings["ED"], install_mask)
- if any(x in self.settings.features for x in ('nodoc', 'noman', 'noinfo')):
- try:
- os.rmdir(os.path.join(self.settings["ED"], 'usr', 'share'))
- except OSError:
- pass
-
- # We check for unicode encoding issues after src_install. However,
- # the check must be repeated here for binary packages (it's
- # inexpensive since we call os.walk() here anyway).
- unicode_errors = []
- line_ending_re = re.compile('[\n\r]')
- srcroot_len = len(srcroot)
- ed_len = len(self.settings["ED"])
- eprefix_len = len(self.settings["EPREFIX"])
-
- while True:
-
- unicode_error = False
- eagain_error = False
-
- filelist = []
- linklist = []
- paths_with_newlines = []
- def onerror(e):
- raise
- walk_iter = os.walk(srcroot, onerror=onerror)
- while True:
- try:
- parent, dirs, files = next(walk_iter)
- except StopIteration:
- break
- except OSError as e:
- if e.errno != errno.EAGAIN:
- raise
- # Observed with PyPy 1.8.
- eagain_error = True
- break
-
- try:
- parent = _unicode_decode(parent,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeDecodeError:
- new_parent = _unicode_decode(parent,
- encoding=_encodings['merge'], errors='replace')
- new_parent = _unicode_encode(new_parent,
- encoding='ascii', errors='backslashreplace')
- new_parent = _unicode_decode(new_parent,
- encoding=_encodings['merge'], errors='replace')
- os.rename(parent, new_parent)
- unicode_error = True
- unicode_errors.append(new_parent[ed_len:])
- break
-
- for fname in files:
- try:
- fname = _unicode_decode(fname,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeDecodeError:
- fpath = portage._os.path.join(
- parent.encode(_encodings['merge']), fname)
- new_fname = _unicode_decode(fname,
- encoding=_encodings['merge'], errors='replace')
- new_fname = _unicode_encode(new_fname,
- encoding='ascii', errors='backslashreplace')
- new_fname = _unicode_decode(new_fname,
- encoding=_encodings['merge'], errors='replace')
- new_fpath = os.path.join(parent, new_fname)
- os.rename(fpath, new_fpath)
- unicode_error = True
- unicode_errors.append(new_fpath[ed_len:])
- fname = new_fname
- fpath = new_fpath
- else:
- fpath = os.path.join(parent, fname)
-
- relative_path = fpath[srcroot_len:]
-
- if line_ending_re.search(relative_path) is not None:
- paths_with_newlines.append(relative_path)
-
- file_mode = os.lstat(fpath).st_mode
- if stat.S_ISREG(file_mode):
- filelist.append(relative_path)
- elif stat.S_ISLNK(file_mode):
- # Note: os.walk puts symlinks to directories in the "dirs"
- # list and it does not traverse them since that could lead
- # to an infinite recursion loop.
- linklist.append(relative_path)
-
- myto = _unicode_decode(
- _os.readlink(_unicode_encode(fpath,
- encoding=_encodings['merge'], errors='strict')),
- encoding=_encodings['merge'], errors='replace')
- if line_ending_re.search(myto) is not None:
- paths_with_newlines.append(relative_path)
-
- if unicode_error:
- break
-
- if not (unicode_error or eagain_error):
- break
-
- if unicode_errors:
- self._elog("eqawarn", "preinst",
- _merge_unicode_error(unicode_errors))
-
- if paths_with_newlines:
- msg = []
- msg.append(_("This package installs one or more files containing line ending characters:"))
- msg.append("")
- paths_with_newlines.sort()
- for f in paths_with_newlines:
- msg.append("\t/%s" % (f.replace("\n", "\\n").replace("\r", "\\r")))
- msg.append("")
- msg.append(_("package %s NOT merged") % self.mycpv)
- msg.append("")
- eerror(msg)
- return 1
-
- # If there are no files to merge, and an installed package in the same
- # slot has files, it probably means that something went wrong.
- if self.settings.get("PORTAGE_PACKAGE_EMPTY_ABORT") == "1" and \
- not filelist and not linklist and others_in_slot:
- installed_files = None
- for other_dblink in others_in_slot:
- installed_files = other_dblink.getcontents()
- if not installed_files:
- continue
- from textwrap import wrap
- wrap_width = 72
- msg = []
- d = {
- "new_cpv":self.mycpv,
- "old_cpv":other_dblink.mycpv
- }
- msg.extend(wrap(_("The '%(new_cpv)s' package will not install "
- "any files, but the currently installed '%(old_cpv)s'"
- " package has the following files: ") % d, wrap_width))
- msg.append("")
- msg.extend(sorted(installed_files))
- msg.append("")
- msg.append(_("package %s NOT merged") % self.mycpv)
- msg.append("")
- msg.extend(wrap(
- _("Manually run `emerge --unmerge =%s` if you "
- "really want to remove the above files. Set "
- "PORTAGE_PACKAGE_EMPTY_ABORT=\"0\" in "
- "/etc/portage/make.conf if you do not want to "
- "abort in cases like this.") % other_dblink.mycpv,
- wrap_width))
- eerror(msg)
- if installed_files:
- return 1
-
- # Make sure the ebuild environment is initialized and that ${T}/elog
- # exists for logging of collision-protect eerror messages.
- if myebuild is None:
- myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
- doebuild_environment(myebuild, "preinst",
- settings=self.settings, db=mydbapi)
- self.settings["REPLACING_VERSIONS"] = " ".join(
- [portage.versions.cpv_getversion(other.mycpv)
- for other in others_in_slot])
- prepare_build_dirs(settings=self.settings, cleanup=cleanup)
-
- # check for package collisions
- blockers = []
- for blocker in self._blockers or []:
- blocker = self.vartree.dbapi._dblink(blocker.cpv)
- # It may have been unmerged before lock(s)
-            # were acquired.
- if blocker.exists():
- blockers.append(blocker)
-
- collisions, internal_collisions, dirs_ro, symlink_collisions, plib_collisions = \
- self._collision_protect(srcroot, destroot,
- others_in_slot + blockers, filelist, linklist)
-
- # Check for read-only filesystems.
- ro_checker = get_ro_checker()
- rofilesystems = ro_checker(dirs_ro)
-
- if rofilesystems:
- msg = _("One or more files installed to this package are "
- "set to be installed to read-only filesystems. "
- "Please mount the following filesystems as read-write "
- "and retry.")
- msg = textwrap.wrap(msg, 70)
- msg.append("")
- for f in rofilesystems:
- msg.append("\t%s" % f)
- msg.append("")
- self._elog("eerror", "preinst", msg)
-
- msg = _("Package '%s' NOT merged due to read-only file systems.") % \
- self.settings.mycpv
- msg += _(" If necessary, refer to your elog "
- "messages for the whole content of the above message.")
- msg = textwrap.wrap(msg, 70)
- eerror(msg)
- return 1
-
- if internal_collisions:
- msg = _("Package '%s' has internal collisions between non-identical files "
- "(located in separate directories in the installation image (${D}) "
- "corresponding to merged directories in the target "
- "filesystem (${ROOT})):") % self.settings.mycpv
- msg = textwrap.wrap(msg, 70)
- msg.append("")
- for k, v in sorted(internal_collisions.items(), key=operator.itemgetter(0)):
- msg.append("\t%s" % os.path.join(destroot, k.lstrip(os.path.sep)))
- for (file1, file2), differences in sorted(v.items()):
- msg.append("\t\t%s" % os.path.join(destroot, file1.lstrip(os.path.sep)))
- msg.append("\t\t%s" % os.path.join(destroot, file2.lstrip(os.path.sep)))
- msg.append("\t\t\tDifferences: %s" % ", ".join(differences))
- msg.append("")
- self._elog("eerror", "preinst", msg)
-
- msg = _("Package '%s' NOT merged due to internal collisions "
- "between non-identical files.") % self.settings.mycpv
- msg += _(" If necessary, refer to your elog messages for the whole "
- "content of the above message.")
- eerror(textwrap.wrap(msg, 70))
- return 1
-
- if symlink_collisions:
- # Symlink collisions need to be distinguished from other types
- # of collisions, in order to avoid confusion (see bug #409359).
- msg = _("Package '%s' has one or more collisions "
- "between symlinks and directories, which is explicitly "
- "forbidden by PMS section 13.4 (see bug #326685):") % \
- (self.settings.mycpv,)
- msg = textwrap.wrap(msg, 70)
- msg.append("")
- for f in symlink_collisions:
- msg.append("\t%s" % os.path.join(destroot,
- f.lstrip(os.path.sep)))
- msg.append("")
- self._elog("eerror", "preinst", msg)
-
- if collisions:
- collision_protect = "collision-protect" in self.settings.features
- protect_owned = "protect-owned" in self.settings.features
- msg = _("This package will overwrite one or more files that"
- " may belong to other packages (see list below).")
- if not (collision_protect or protect_owned):
- msg += _(" Add either \"collision-protect\" or"
- " \"protect-owned\" to FEATURES in"
- " make.conf if you would like the merge to abort"
- " in cases like this. See the make.conf man page for"
- " more information about these features.")
- if self.settings.get("PORTAGE_QUIET") != "1":
- msg += _(" You can use a command such as"
- " `portageq owners / <filename>` to identify the"
- " installed package that owns a file. If portageq"
- " reports that only one package owns a file then do NOT"
- " file a bug report. A bug report is only useful if it"
- " identifies at least two or more packages that are known"
- " to install the same file(s)."
- " If a collision occurs and you"
- " can not explain where the file came from then you"
- " should simply ignore the collision since there is not"
- " enough information to determine if a real problem"
- " exists. Please do NOT file a bug report at"
- " https://bugs.gentoo.org/ unless you report exactly which"
- " two packages install the same file(s). See"
- " https://wiki.gentoo.org/wiki/Knowledge_Base:Blockers"
- " for tips on how to solve the problem. And once again,"
- " please do NOT file a bug report unless you have"
- " completely understood the above message.")
-
- self.settings["EBUILD_PHASE"] = "preinst"
- from textwrap import wrap
- msg = wrap(msg, 70)
- if collision_protect:
- msg.append("")
- msg.append(_("package %s NOT merged") % self.settings.mycpv)
- msg.append("")
- msg.append(_("Detected file collision(s):"))
- msg.append("")
-
- for f in collisions:
- msg.append("\t%s" % \
- os.path.join(destroot, f.lstrip(os.path.sep)))
-
- eerror(msg)
-
- owners = None
- if collision_protect or protect_owned or symlink_collisions:
- msg = []
- msg.append("")
- msg.append(_("Searching all installed"
- " packages for file collisions..."))
- msg.append("")
- msg.append(_("Press Ctrl-C to Stop"))
- msg.append("")
- eerror(msg)
-
- if len(collisions) > 20:
- # get_owners is slow for large numbers of files, so
- # don't look them all up.
- collisions = collisions[:20]
-
- pkg_info_strs = {}
- self.lockdb()
- try:
- owners = self.vartree.dbapi._owners.get_owners(collisions)
- self.vartree.dbapi.flush_cache()
-
- for pkg in owners:
- pkg = self.vartree.dbapi._pkg_str(pkg.mycpv, None)
- pkg_info_str = "%s%s%s" % (pkg,
- _slot_separator, pkg.slot)
- if pkg.repo != _unknown_repo:
- pkg_info_str += "%s%s" % (_repo_separator,
- pkg.repo)
- pkg_info_strs[pkg] = pkg_info_str
-
- finally:
- self.unlockdb()
-
- for pkg, owned_files in owners.items():
- msg = []
- msg.append(pkg_info_strs[pkg.mycpv])
- for f in sorted(owned_files):
- msg.append("\t%s" % os.path.join(destroot,
- f.lstrip(os.path.sep)))
- msg.append("")
- eerror(msg)
-
- if not owners:
- eerror([_("None of the installed"
- " packages claim the file(s)."), ""])
-
-            symlink_abort_msg = _("Package '%s' NOT merged since it has "
- "one or more collisions between symlinks and directories, "
- "which is explicitly forbidden by PMS section 13.4 "
- "(see bug #326685).")
-
- # The explanation about the collision and how to solve
- # it may not be visible via a scrollback buffer, especially
- # if the number of file collisions is large. Therefore,
- # show a summary at the end.
- abort = False
- if symlink_collisions:
- abort = True
- msg = symlink_abort_msg % (self.settings.mycpv,)
- elif collision_protect:
- abort = True
- msg = _("Package '%s' NOT merged due to file collisions.") % \
- self.settings.mycpv
- elif protect_owned and owners:
- abort = True
- msg = _("Package '%s' NOT merged due to file collisions.") % \
- self.settings.mycpv
- else:
- msg = _("Package '%s' merged despite file collisions.") % \
- self.settings.mycpv
- msg += _(" If necessary, refer to your elog "
- "messages for the whole content of the above message.")
- eerror(wrap(msg, 70))
-
- if abort:
- return 1
-
- # The merge process may move files out of the image directory,
- # which causes invalidation of the .installed flag.
- try:
- os.unlink(os.path.join(
- os.path.dirname(normalize_path(srcroot)), ".installed"))
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
-
- self.dbdir = self.dbtmpdir
- self.delete()
- ensure_dirs(self.dbtmpdir)
-
- downgrade = False
- if self._installed_instance is not None and \
- vercmp(self.mycpv.version,
- self._installed_instance.mycpv.version) < 0:
- downgrade = True
-
- if self._installed_instance is not None:
- rval = self._pre_merge_backup(self._installed_instance, downgrade)
- if rval != os.EX_OK:
- showMessage(_("!!! FAILED preinst: ") +
- "quickpkg: %s\n" % rval,
- level=logging.ERROR, noiselevel=-1)
- return rval
-
- # run preinst script
- showMessage(_(">>> Merging %(cpv)s to %(destroot)s\n") % \
- {"cpv":self.mycpv, "destroot":destroot})
- phase = EbuildPhase(background=False, phase="preinst",
- scheduler=self._scheduler, settings=self.settings)
- phase.start()
- a = phase.wait()
-
- # XXX: Decide how to handle failures here.
- if a != os.EX_OK:
- showMessage(_("!!! FAILED preinst: ")+str(a)+"\n",
- level=logging.ERROR, noiselevel=-1)
- return a
-
- # copy "info" files (like SLOT, CFLAGS, etc.) into the database
- for x in os.listdir(inforoot):
- self.copyfile(inforoot+"/"+x)
-
- # write local package counter for recording
- if counter is None:
- counter = self.vartree.dbapi.counter_tick(mycpv=self.mycpv)
- with io.open(_unicode_encode(os.path.join(self.dbtmpdir, 'COUNTER'),
- encoding=_encodings['fs'], errors='strict'),
- mode='w', encoding=_encodings['repo.content'],
- errors='backslashreplace') as f:
- f.write("%s" % counter)
-
- self.updateprotect()
-
- #if we have a file containing previously-merged config file md5sums, grab it.
- self.vartree.dbapi._fs_lock()
- try:
- # This prunes any libraries from the registry that no longer
- # exist on disk, in case they have been manually removed.
- # This has to be done prior to merge, since after merge it
- # is non-trivial to distinguish these files from files
- # that have just been merged.
- plib_registry = self.vartree.dbapi._plib_registry
- if plib_registry:
- plib_registry.lock()
- try:
- plib_registry.load()
- plib_registry.store()
- finally:
- plib_registry.unlock()
-
- # Always behave like --noconfmem is enabled for downgrades
- # so that people who don't know about this option are less
- # likely to get confused when doing upgrade/downgrade cycles.
- cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
- if "NOCONFMEM" in self.settings or downgrade:
- cfgfiledict["IGNORE"]=1
- else:
- cfgfiledict["IGNORE"]=0
-
- rval = self._merge_contents(srcroot, destroot, cfgfiledict)
- if rval != os.EX_OK:
- return rval
- finally:
- self.vartree.dbapi._fs_unlock()
-
- # These caches are populated during collision-protect and the data
- # they contain is now invalid. It's very important to invalidate
- # the contents_inodes cache so that FEATURES=unmerge-orphans
- # doesn't unmerge anything that belongs to this package that has
- # just been merged.
- for dblnk in others_in_slot:
- dblnk._clear_contents_cache()
- self._clear_contents_cache()
-
- linkmap = self.vartree.dbapi._linkmap
- plib_registry = self.vartree.dbapi._plib_registry
- # We initialize preserve_paths to an empty set rather
- # than None here because it plays an important role
- # in prune_plib_registry logic by serving to indicate
- # that we have a replacement for a package that's
- # being unmerged.
-
- preserve_paths = set()
- needed = None
- if not (self._linkmap_broken or linkmap is None or
- plib_registry is None):
- self.vartree.dbapi._fs_lock()
- plib_registry.lock()
- try:
- plib_registry.load()
- needed = os.path.join(inforoot, linkmap._needed_aux_key)
- self._linkmap_rebuild(include_file=needed)
-
- # Preserve old libs if they are still in use
- # TODO: Handle cases where the previous instance
- # has already been uninstalled but it still has some
- # preserved libraries in the registry that we may
- # want to preserve here.
- preserve_paths = self._find_libs_to_preserve()
- finally:
- plib_registry.unlock()
- self.vartree.dbapi._fs_unlock()
-
- if preserve_paths:
- self._add_preserve_libs_to_contents(preserve_paths)
-
- # If portage is reinstalling itself, remove the old
- # version now since we want to use the temporary
- # PORTAGE_BIN_PATH that will be removed when we return.
- reinstall_self = False
- if self.myroot == "/" and \
- match_from_list(PORTAGE_PACKAGE_ATOM, [self.mycpv]):
- reinstall_self = True
-
- emerge_log = self._emerge_log
-
- # If we have any preserved libraries then autoclean
- # is forced so that preserve-libs logic doesn't have
- # to account for the additional complexity of the
- # AUTOCLEAN=no mode.
- autoclean = self.settings.get("AUTOCLEAN", "yes") == "yes" \
- or preserve_paths
-
- if autoclean:
- emerge_log(_(" >>> AUTOCLEAN: %s") % (slot_atom,))
-
- others_in_slot.append(self) # self has just been merged
- for dblnk in list(others_in_slot):
- if dblnk is self:
- continue
- if not (autoclean or dblnk.mycpv == self.mycpv or reinstall_self):
- continue
- showMessage(_(">>> Safely unmerging already-installed instance...\n"))
- emerge_log(_(" === Unmerging... (%s)") % (dblnk.mycpv,))
- others_in_slot.remove(dblnk) # dblnk will unmerge itself now
- dblnk._linkmap_broken = self._linkmap_broken
- dblnk.settings["REPLACED_BY_VERSION"] = portage.versions.cpv_getversion(self.mycpv)
- dblnk.settings.backup_changes("REPLACED_BY_VERSION")
- unmerge_rval = dblnk.unmerge(ldpath_mtimes=prev_mtimes,
- others_in_slot=others_in_slot, needed=needed,
- preserve_paths=preserve_paths)
- dblnk.settings.pop("REPLACED_BY_VERSION", None)
-
- if unmerge_rval == os.EX_OK:
- emerge_log(_(" >>> unmerge success: %s") % (dblnk.mycpv,))
- else:
- emerge_log(_(" !!! unmerge FAILURE: %s") % (dblnk.mycpv,))
-
- self.lockdb()
- try:
- # TODO: Check status and abort if necessary.
- dblnk.delete()
- finally:
- self.unlockdb()
- showMessage(_(">>> Original instance of package unmerged safely.\n"))
-
- if len(others_in_slot) > 1:
- showMessage(colorize("WARN", _("WARNING:"))
- + _(" AUTOCLEAN is disabled. This can cause serious"
- " problems due to overlapping packages.\n"),
- level=logging.WARN, noiselevel=-1)
-
- # We hold both directory locks.
- self.dbdir = self.dbpkgdir
- self.lockdb()
- try:
- self.delete()
- _movefile(self.dbtmpdir, self.dbpkgdir, mysettings=self.settings)
- self._merged_path(self.dbpkgdir, os.lstat(self.dbpkgdir))
- self.vartree.dbapi._cache_delta.recordEvent(
- "add", self.mycpv, slot, counter)
- finally:
- self.unlockdb()
-
- # Check for file collisions with blocking packages
- # and remove any colliding files from their CONTENTS
- # since they now belong to this package.
- self._clear_contents_cache()
- contents = self.getcontents()
- destroot_len = len(destroot) - 1
- self.lockdb()
- try:
- for blocker in blockers:
- self.vartree.dbapi.removeFromContents(blocker, iter(contents),
- relative_paths=False)
- finally:
- self.unlockdb()
-
- plib_registry = self.vartree.dbapi._plib_registry
- if plib_registry:
- self.vartree.dbapi._fs_lock()
- plib_registry.lock()
- try:
- plib_registry.load()
-
- if preserve_paths:
- # keep track of the libs we preserved
- plib_registry.register(self.mycpv, slot, counter,
- sorted(preserve_paths))
-
- # Unregister any preserved libs that this package has overwritten
- # and update the contents of the packages that owned them.
- plib_dict = plib_registry.getPreservedLibs()
- for cpv, paths in plib_collisions.items():
- if cpv not in plib_dict:
- continue
- has_vdb_entry = False
- if cpv != self.mycpv:
- # If we've replaced another instance with the
- # same cpv then the vdb entry no longer belongs
- # to it, so we'll have to get the slot and counter
- # from plib_registry._data instead.
- self.vartree.dbapi.lock()
- try:
- try:
- slot = self.vartree.dbapi._pkg_str(cpv, None).slot
- counter = self.vartree.dbapi.cpv_counter(cpv)
- except (KeyError, InvalidData):
- pass
- else:
- has_vdb_entry = True
- self.vartree.dbapi.removeFromContents(
- cpv, paths)
- finally:
- self.vartree.dbapi.unlock()
-
- if not has_vdb_entry:
- # It's possible for previously unmerged packages
- # to have preserved libs in the registry, so try
- # to retrieve the slot and counter from there.
- has_registry_entry = False
- for plib_cps, (plib_cpv, plib_counter, plib_paths) in \
- plib_registry._data.items():
- if plib_cpv != cpv:
- continue
- try:
- cp, slot = plib_cps.split(":", 1)
- except ValueError:
- continue
- counter = plib_counter
- has_registry_entry = True
- break
-
- if not has_registry_entry:
- continue
-
- remaining = [f for f in plib_dict[cpv] if f not in paths]
- plib_registry.register(cpv, slot, counter, remaining)
-
- plib_registry.store()
- finally:
- plib_registry.unlock()
- self.vartree.dbapi._fs_unlock()
-
- self.vartree.dbapi._add(self)
- contents = self.getcontents()
-
- #do postinst script
- self.settings["PORTAGE_UPDATE_ENV"] = \
- os.path.join(self.dbpkgdir, "environment.bz2")
- self.settings.backup_changes("PORTAGE_UPDATE_ENV")
- try:
- phase = EbuildPhase(background=False, phase="postinst",
- scheduler=self._scheduler, settings=self.settings)
- phase.start()
- a = phase.wait()
- if a == os.EX_OK:
- showMessage(_(">>> %s merged.\n") % self.mycpv)
- finally:
- self.settings.pop("PORTAGE_UPDATE_ENV", None)
-
- if a != os.EX_OK:
- # It's stupid to bail out here, so keep going regardless of
- # phase return code.
- self._postinst_failure = True
- self._elog("eerror", "postinst", [
- _("FAILED postinst: %s") % (a,),
- ])
-
- #update environment settings, library paths. DO NOT change symlinks.
- env_update(
- target_root=self.settings['ROOT'], prev_mtimes=prev_mtimes,
- contents=contents, env=self.settings,
- writemsg_level=self._display_merge, vardbapi=self.vartree.dbapi)
-
-        # For gcc upgrades, preserved libs have to be removed after
-        # the library path has been updated.
- self._prune_plib_registry()
- self._post_merge_sync()
-
- return os.EX_OK
-
- def _new_backup_path(self, p):
- """
-        This works for any type of path, such as a regular file, symlink,
- or directory. The parent directory is assumed to exist.
- The returned filename is of the form p + '.backup.' + x, where
- x guarantees that the returned path does not exist yet.
- """
- os = _os_merge
-
- x = -1
- while True:
- x += 1
- backup_p = '%s.backup.%04d' % (p, x)
- try:
- os.lstat(backup_p)
- except OSError:
- break
-
- return backup_p
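# Hypothetical usage of the helper above: if '/etc/frob.conf' is blocked and
# '/etc/frob.conf.backup.0000' already exists, the next free name is chosen.
# backup_dest = self._new_backup_path("/etc/frob.conf")
# -> "/etc/frob.conf.backup.0001"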
-
- def _merge_contents(self, srcroot, destroot, cfgfiledict):
-
- cfgfiledict_orig = cfgfiledict.copy()
-
- # open CONTENTS file (possibly overwriting old one) for recording
- # Use atomic_ofstream for automatic coercion of raw bytes to
- # unicode, in order to prevent TypeError when writing raw bytes
- # to TextIOWrapper with python2.
- outfile = atomic_ofstream(_unicode_encode(
- os.path.join(self.dbtmpdir, 'CONTENTS'),
- encoding=_encodings['fs'], errors='strict'),
- mode='w', encoding=_encodings['repo.content'],
- errors='backslashreplace')
-
-        # Don't bump mtimes on merge since some applications require
-        # preservation of timestamps. This means that the unmerge phase must
-        # check to see if a file belongs to an installed instance in the same
-        # slot.
- mymtime = None
-
-        # set umask to 0 for merging; save the old umask in prevmask (since this is a global change)
- prevmask = os.umask(0)
- secondhand = []
-
- # we do a first merge; this will recurse through all files in our srcroot but also build up a
- # "second hand" of symlinks to merge later
- if self.mergeme(srcroot, destroot, outfile, secondhand,
- self.settings["EPREFIX"].lstrip(os.sep), cfgfiledict, mymtime):
- return 1
-
-        # now it's time to deal with our second hand; we'll loop until we can't
-        # merge anymore. The rest are broken symlinks. We'll merge them too.
- lastlen = 0
- while len(secondhand) and len(secondhand)!=lastlen:
- # clear the thirdhand. Anything from our second hand that
- # couldn't get merged will be added to thirdhand.
-
- thirdhand = []
- if self.mergeme(srcroot, destroot, outfile, thirdhand,
- secondhand, cfgfiledict, mymtime):
- return 1
-
- #swap hands
- lastlen = len(secondhand)
-
- # our thirdhand now becomes our secondhand. It's ok to throw
- # away secondhand since thirdhand contains all the stuff that
- # couldn't be merged.
- secondhand = thirdhand
-
- if len(secondhand):
- # force merge of remaining symlinks (broken or circular; oh well)
- if self.mergeme(srcroot, destroot, outfile, None,
- secondhand, cfgfiledict, mymtime):
- return 1
-
- #restore umask
- os.umask(prevmask)
-
- #if we opened it, close it
- outfile.flush()
- outfile.close()
-
- # write out our collection of md5sums
- if cfgfiledict != cfgfiledict_orig:
- cfgfiledict.pop("IGNORE", None)
- try:
- writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
- except InvalidLocation:
- self.settings._init_dirs()
- writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
-
- return os.EX_OK
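# Standalone sketch of the "second hand" retry logic above, assuming a
# hypothetical merge_one(item, force) helper that returns True on success:
def example_merge_with_retries(items, merge_one):
    deferred = [x for x in items if not merge_one(x, force=False)]
    lastlen = None
    while deferred and len(deferred) != lastlen:
        lastlen = len(deferred)
        deferred = [x for x in deferred if not merge_one(x, force=False)]
    # whatever is left is broken or circular; force-merge it anyway
    for x in deferred:
        merge_one(x, force=True)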
-
- def mergeme(self, srcroot, destroot, outfile, secondhand, stufftomerge, cfgfiledict, thismtime):
- """
-
- This function handles actual merging of the package contents to the livefs.
- It also handles config protection.
-
- @param srcroot: Where are we copying files from (usually ${D})
- @type srcroot: String (Path)
- @param destroot: Typically ${ROOT}
- @type destroot: String (Path)
- @param outfile: File to log operations to
- @type outfile: File Object
-        @param secondhand: A list of items to merge in pass two (usually
-        symlinks that point to non-existing files that may get merged later)
- @type secondhand: List
-        @param stufftomerge: Either a directory to merge, or a list of items.
- @type stufftomerge: String or List
- @param cfgfiledict: { File:mtime } mapping for config_protected files
- @type cfgfiledict: Dictionary
- @param thismtime: None or new mtime for merged files (expressed in seconds
- in Python <3.3 and nanoseconds in Python >=3.3)
- @type thismtime: None or Int
- @rtype: None or Boolean
- @return:
- 1. True on failure
- 2. None otherwise
-
- """
-
- showMessage = self._display_merge
- writemsg = self._display_merge
-
- os = _os_merge
- sep = os.sep
- join = os.path.join
- srcroot = normalize_path(srcroot).rstrip(sep) + sep
- destroot = normalize_path(destroot).rstrip(sep) + sep
- calc_prelink = "prelink-checksums" in self.settings.features
-
- protect_if_modified = \
- "config-protect-if-modified" in self.settings.features and \
- self._installed_instance is not None
-
- # this is supposed to merge a list of files. There will be 2 forms of argument passing.
- if isinstance(stufftomerge, str):
- #A directory is specified. Figure out protection paths, listdir() it and process it.
- mergelist = [join(stufftomerge, child) for child in \
- os.listdir(join(srcroot, stufftomerge))]
- else:
- mergelist = stufftomerge[:]
-
- while mergelist:
-
- relative_path = mergelist.pop()
- mysrc = join(srcroot, relative_path)
- mydest = join(destroot, relative_path)
- # myrealdest is mydest without the $ROOT prefix (makes a difference if ROOT!="/")
- myrealdest = join(sep, relative_path)
- # stat file once, test using S_* macros many times (faster that way)
- mystat = os.lstat(mysrc)
- mymode = mystat[stat.ST_MODE]
- mymd5 = None
- myto = None
-
- mymtime = mystat.st_mtime_ns
-
- if stat.S_ISREG(mymode):
- mymd5 = perform_md5(mysrc, calc_prelink=calc_prelink)
- elif stat.S_ISLNK(mymode):
- # The file name of mysrc and the actual file that it points to
- # will have earlier been forcefully converted to the 'merge'
- # encoding if necessary, but the content of the symbolic link
- # may need to be forcefully converted here.
- myto = _os.readlink(_unicode_encode(mysrc,
- encoding=_encodings['merge'], errors='strict'))
- try:
- myto = _unicode_decode(myto,
- encoding=_encodings['merge'], errors='strict')
- except UnicodeDecodeError:
- myto = _unicode_decode(myto, encoding=_encodings['merge'],
- errors='replace')
- myto = _unicode_encode(myto, encoding='ascii',
- errors='backslashreplace')
- myto = _unicode_decode(myto, encoding=_encodings['merge'],
- errors='replace')
- os.unlink(mysrc)
- os.symlink(myto, mysrc)
-
- mymd5 = md5(_unicode_encode(myto)).hexdigest()
-
- protected = False
- if stat.S_ISLNK(mymode) or stat.S_ISREG(mymode):
- protected = self.isprotected(mydest)
-
- if stat.S_ISREG(mymode) and \
- mystat.st_size == 0 and \
- os.path.basename(mydest).startswith(".keep"):
- protected = False
-
- destmd5 = None
- mydest_link = None
- # handy variables; mydest is the target object on the live filesystems;
- # mysrc is the source object in the temporary install dir
- try:
- mydstat = os.lstat(mydest)
- mydmode = mydstat.st_mode
- if protected:
- if stat.S_ISLNK(mydmode):
- # Read symlink target as bytes, in case the
- # target path has a bad encoding.
- mydest_link = _os.readlink(
- _unicode_encode(mydest,
- encoding=_encodings['merge'],
- errors='strict'))
- mydest_link = _unicode_decode(mydest_link,
- encoding=_encodings['merge'],
- errors='replace')
-
- # For protection of symlinks, the md5
- # of the link target path string is used
- # for cfgfiledict (symlinks are
- # protected since bug #485598).
- destmd5 = md5(_unicode_encode(mydest_link)).hexdigest()
-
- elif stat.S_ISREG(mydmode):
- destmd5 = perform_md5(mydest,
- calc_prelink=calc_prelink)
- except (FileNotFound, OSError) as e:
- if isinstance(e, OSError) and e.errno != errno.ENOENT:
- raise
- #dest file doesn't exist
- mydstat = None
- mydmode = None
- mydest_link = None
- destmd5 = None
-
- moveme = True
- if protected:
- mydest, protected, moveme = self._protect(cfgfiledict,
- protect_if_modified, mymd5, myto, mydest,
- myrealdest, mydmode, destmd5, mydest_link)
-
- zing = "!!!"
- if not moveme:
- # confmem rejected this update
- zing = "---"
-
- if stat.S_ISLNK(mymode):
- # we are merging a symbolic link
- # Pass in the symlink target in order to bypass the
- # os.readlink() call inside abssymlink(), since that
- # call is unsafe if the merge encoding is not ascii
- # or utf_8 (see bug #382021).
- myabsto = abssymlink(mysrc, target=myto)
-
- if myabsto.startswith(srcroot):
- myabsto = myabsto[len(srcroot):]
- myabsto = myabsto.lstrip(sep)
- if self.settings and self.settings["D"]:
- if myto.startswith(self.settings["D"]):
- myto = myto[len(self.settings["D"])-1:]
- # myrealto contains the path of the real file to which this symlink points.
- # we can simply test for existence of this file to see if the target has been merged yet
- myrealto = normalize_path(os.path.join(destroot, myabsto))
- if mydmode is not None and stat.S_ISDIR(mydmode):
- if not protected:
- # we can't merge a symlink over a directory
- newdest = self._new_backup_path(mydest)
- msg = []
- msg.append("")
- msg.append(_("Installation of a symlink is blocked by a directory:"))
- msg.append(" '%s'" % mydest)
- msg.append(_("This symlink will be merged with a different name:"))
- msg.append(" '%s'" % newdest)
- msg.append("")
- self._eerror("preinst", msg)
- mydest = newdest
-
- # if secondhand is None it means we're operating in "force" mode and should not create a second hand.
- if (secondhand != None) and (not os.path.exists(myrealto)):
- # either the target directory doesn't exist yet or the target file doesn't exist -- or
- # the target is a broken symlink. We will add this file to our "second hand" and merge
- # it later.
- secondhand.append(mysrc[len(srcroot):])
- continue
- # unlinking no longer necessary; "movefile" will overwrite symlinks atomically and correctly
- if moveme:
- zing = ">>>"
- mymtime = movefile(mysrc, mydest, newmtime=thismtime,
- sstat=mystat, mysettings=self.settings,
- encoding=_encodings['merge'])
-
- try:
- self._merged_path(mydest, os.lstat(mydest))
- except OSError:
- pass
-
- if mymtime != None:
- # Use lexists, since if the target happens to be a broken
- # symlink then that should trigger an independent warning.
- if not (os.path.lexists(myrealto) or
- os.path.lexists(join(srcroot, myabsto))):
- self._eqawarn('preinst',
- [_("QA Notice: Symbolic link /%s points to /%s which does not exist.")
- % (relative_path, myabsto)])
-
- showMessage("%s %s -> %s\n" % (zing, mydest, myto))
- outfile.write(
- self._format_contents_line(
- node_type="sym",
- abs_path=myrealdest,
- symlink_target=myto,
- mtime_ns=mymtime,
- )
- )
- else:
- showMessage(_("!!! Failed to move file.\n"),
- level=logging.ERROR, noiselevel=-1)
- showMessage("!!! %s -> %s\n" % (mydest, myto),
- level=logging.ERROR, noiselevel=-1)
- return 1
- elif stat.S_ISDIR(mymode):
- # we are merging a directory
- if mydmode != None:
- # destination exists
-
- if bsd_chflags:
- # Save then clear flags on dest.
- dflags = mydstat.st_flags
- if dflags != 0:
- bsd_chflags.lchflags(mydest, 0)
-
- if not stat.S_ISLNK(mydmode) and \
- not os.access(mydest, os.W_OK):
- pkgstuff = pkgsplit(self.pkg)
- writemsg(_("\n!!! Cannot write to '%s'.\n") % mydest, noiselevel=-1)
- writemsg(_("!!! Please check permissions and directories for broken symlinks.\n"))
- writemsg(_("!!! You may start the merge process again by using ebuild:\n"))
- writemsg("!!! ebuild "+self.settings["PORTDIR"]+"/"+self.cat+"/"+pkgstuff[0]+"/"+self.pkg+".ebuild merge\n")
- writemsg(_("!!! And finish by running this: env-update\n\n"))
- return 1
-
- if stat.S_ISDIR(mydmode) or \
- (stat.S_ISLNK(mydmode) and os.path.isdir(mydest)):
- # a symlink to an existing directory will work for us; keep it:
- showMessage("--- %s/\n" % mydest)
- if bsd_chflags:
- bsd_chflags.lchflags(mydest, dflags)
- else:
- # a non-directory and non-symlink-to-directory. Won't work for us. Move out of the way.
- backup_dest = self._new_backup_path(mydest)
- msg = []
- msg.append("")
- msg.append(_("Installation of a directory is blocked by a file:"))
- msg.append(" '%s'" % mydest)
- msg.append(_("This file will be renamed to a different name:"))
- msg.append(" '%s'" % backup_dest)
- msg.append("")
- self._eerror("preinst", msg)
- if movefile(mydest, backup_dest,
- mysettings=self.settings,
- encoding=_encodings['merge']) is None:
- return 1
- showMessage(_("bak %s %s.backup\n") % (mydest, mydest),
- level=logging.ERROR, noiselevel=-1)
- #now create our directory
- try:
- if self.settings.selinux_enabled():
- _selinux_merge.mkdir(mydest, mysrc)
- else:
- os.mkdir(mydest)
- except OSError as e:
- # Error handling should be equivalent to
- # portage.util.ensure_dirs() for cases
- # like bug #187518.
- if e.errno in (errno.EEXIST,):
- pass
- elif os.path.isdir(mydest):
- pass
- else:
- raise
- del e
-
- if bsd_chflags:
- bsd_chflags.lchflags(mydest, dflags)
- os.chmod(mydest, mystat[0])
- os.chown(mydest, mystat[4], mystat[5])
- showMessage(">>> %s/\n" % mydest)
- else:
- try:
- #destination doesn't exist
- if self.settings.selinux_enabled():
- _selinux_merge.mkdir(mydest, mysrc)
- else:
- os.mkdir(mydest)
- except OSError as e:
- # Error handling should be equivalent to
- # portage.util.ensure_dirs() for cases
- # like bug #187518.
- if e.errno in (errno.EEXIST,):
- pass
- elif os.path.isdir(mydest):
- pass
- else:
- raise
- del e
- os.chmod(mydest, mystat[0])
- os.chown(mydest, mystat[4], mystat[5])
- showMessage(">>> %s/\n" % mydest)
-
- try:
- self._merged_path(mydest, os.lstat(mydest))
- except OSError:
- pass
-
- outfile.write(
- self._format_contents_line(node_type="dir", abs_path=myrealdest)
- )
- # recurse and merge this directory
- mergelist.extend(join(relative_path, child) for child in
- os.listdir(join(srcroot, relative_path)))
-
- elif stat.S_ISREG(mymode):
- # we are merging a regular file
- if not protected and \
- mydmode is not None and stat.S_ISDIR(mydmode):
- # install of destination is blocked by an existing directory with the same name
- newdest = self._new_backup_path(mydest)
- msg = []
- msg.append("")
- msg.append(_("Installation of a regular file is blocked by a directory:"))
- msg.append(" '%s'" % mydest)
- msg.append(_("This file will be merged with a different name:"))
- msg.append(" '%s'" % newdest)
- msg.append("")
- self._eerror("preinst", msg)
- mydest = newdest
-
-                # whether config protection applies or not, we merge the new file
-                # the same way, unless moveme=0 (blocking directory)
- if moveme:
- # Create hardlinks only for source files that already exist
- # as hardlinks (having identical st_dev and st_ino).
- hardlink_key = (mystat.st_dev, mystat.st_ino)
-
- hardlink_candidates = self._hardlink_merge_map.get(hardlink_key)
- if hardlink_candidates is None:
- hardlink_candidates = []
- self._hardlink_merge_map[hardlink_key] = hardlink_candidates
-
- mymtime = movefile(mysrc, mydest, newmtime=thismtime,
- sstat=mystat, mysettings=self.settings,
- hardlink_candidates=hardlink_candidates,
- encoding=_encodings['merge'])
- if mymtime is None:
- return 1
- hardlink_candidates.append(mydest)
- zing = ">>>"
-
- try:
- self._merged_path(mydest, os.lstat(mydest))
- except OSError:
- pass
-
- if mymtime != None:
- outfile.write(
- self._format_contents_line(
- node_type="obj",
- abs_path=myrealdest,
- md5_digest=mymd5,
- mtime_ns=mymtime,
- )
- )
- showMessage("%s %s\n" % (zing,mydest))
- else:
- # we are merging a fifo or device node
- zing = "!!!"
- if mydmode is None:
- # destination doesn't exist
- if movefile(mysrc, mydest, newmtime=thismtime,
- sstat=mystat, mysettings=self.settings,
- encoding=_encodings['merge']) is not None:
- zing = ">>>"
-
- try:
- self._merged_path(mydest, os.lstat(mydest))
- except OSError:
- pass
-
- else:
- return 1
- if stat.S_ISFIFO(mymode):
- outfile.write(
- self._format_contents_line(node_type="fif", abs_path=myrealdest)
- )
- else:
- outfile.write(
- self._format_contents_line(node_type="dev", abs_path=myrealdest)
- )
- showMessage(zing + " " + mydest + "\n")
-
- def _protect(self, cfgfiledict, protect_if_modified, src_md5,
- src_link, dest, dest_real, dest_mode, dest_md5, dest_link):
-
- move_me = True
- protected = True
- force = False
- k = False
- if self._installed_instance is not None:
- k = self._installed_instance._match_contents(dest_real)
- if k is not False:
- if dest_mode is None:
- # If the file doesn't exist, then it may
- # have been deleted or renamed by the
- # admin. Therefore, force the file to be
- # merged with a ._cfg name, so that the
- # admin will be prompted for this update
- # (see bug #523684).
- force = True
-
- elif protect_if_modified:
- data = self._installed_instance.getcontents()[k]
- if data[0] == "obj" and data[2] == dest_md5:
- protected = False
- elif data[0] == "sym" and data[2] == dest_link:
- protected = False
-
- if protected and dest_mode is not None:
- # we have a protection path; enable config file management.
- if src_md5 == dest_md5:
- protected = False
-
- elif src_md5 == cfgfiledict.get(dest_real, [None])[0]:
- # An identical update has previously been
- # merged. Skip it unless the user has chosen
- # --noconfmem.
- move_me = protected = bool(cfgfiledict["IGNORE"])
-
- if protected and \
- (dest_link is not None or src_link is not None) and \
- dest_link != src_link:
- # If either one is a symlink, and they are not
- # identical symlinks, then force config protection.
- force = True
-
- if move_me:
- # Merging a new file, so update confmem.
- cfgfiledict[dest_real] = [src_md5]
- elif dest_md5 == cfgfiledict.get(dest_real, [None])[0]:
- # A previously remembered update has been
- # accepted, so it is removed from confmem.
- del cfgfiledict[dest_real]
-
- if protected and move_me:
- dest = new_protect_filename(dest,
- newmd5=(dest_link or src_md5),
- force=force)
-
- return dest, protected, move_me
-
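For orientation, a comment-only sketch of what the config-protection decision in _protect() above means on disk. The file names are made up and the ._cfg digits vary with how many updates are already pending:

    # Illustrative only: when _protect() decides a destination is protected
    # but must still be merged, new_protect_filename() gives the update a
    # ._cfgXXXX_ prefix instead of overwriting the original, e.g.
    #
    #   /etc/hello.conf              <- existing file, left untouched
    #   /etc/._cfg0000_hello.conf    <- newly merged version (digits vary),
    #                                   later reviewed with dispatch-conf
    #                                   or etc-update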
- def _format_contents_line(
- self, node_type, abs_path, md5_digest=None, symlink_target=None, mtime_ns=None
- ):
- fields = [node_type, abs_path]
- if md5_digest is not None:
- fields.append(md5_digest)
- elif symlink_target is not None:
- fields.append("-> {}".format(symlink_target))
- if mtime_ns is not None:
- fields.append(str(mtime_ns // 1000000000))
- return "{}\n".format(" ".join(fields))
-
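For reference, the entries this formatter emits into a package's /var/db/pkg/<cat>/<pkg>/CONTENTS file look like the lines below. Paths, digest and timestamp are made-up examples; the trailing field is whole seconds because of the // 1000000000 above:

    dir /usr/share/hello
    obj /usr/bin/hello d41d8cd98f00b204e9800998ecf8427e 1700000000
    sym /usr/lib/libhello.so -> libhello.so.1 1700000000
    fif /run/hello.fifo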
- def _merged_path(self, path, lstatobj, exists=True):
- previous_path = self._device_path_map.get(lstatobj.st_dev)
- if previous_path is None or previous_path is False or \
- (exists and len(path) < len(previous_path)):
- if exists:
- self._device_path_map[lstatobj.st_dev] = path
- else:
- # This entry is used to indicate that we've unmerged
- # a file from this device, and later, this entry is
- # replaced by a parent directory.
- self._device_path_map[lstatobj.st_dev] = False
-
- def _post_merge_sync(self):
- """
- Call this after merge or unmerge, in order to sync relevant files to
- disk and avoid data-loss in the event of a power failure. This method
- does nothing if FEATURES=merge-sync is disabled.
- """
- if not self._device_path_map or \
- "merge-sync" not in self.settings.features:
- return
-
- returncode = None
- if platform.system() == "Linux":
-
- paths = []
- for path in self._device_path_map.values():
- if path is not False:
- paths.append(path)
- paths = tuple(paths)
-
- proc = SyncfsProcess(paths=paths,
- scheduler=(self._scheduler or asyncio._safe_loop()))
- proc.start()
- returncode = proc.wait()
-
- if returncode is None or returncode != os.EX_OK:
- try:
- proc = subprocess.Popen(["sync"])
- except EnvironmentError:
- pass
- else:
- proc.wait()
-
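_post_merge_sync() is a no-op unless FEATURES contains merge-sync; on Linux it syncs the affected filesystems via SyncfsProcess and otherwise falls back to spawning sync(1). A trivial check of whether the feature is active in the current configuration (assumes a normal Portage environment):

    import portage

    # True only when FEATURES="... merge-sync ..." is set in the active config
    print("merge-sync" in portage.settings.features)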
- @_slot_locked
- def merge(self, mergeroot, inforoot, myroot=None, myebuild=None, cleanup=0,
- mydbapi=None, prev_mtimes=None, counter=None):
- """
- @param myroot: ignored, self._eroot is used instead
- """
- myroot = None
- retval = -1
- parallel_install = "parallel-install" in self.settings.features
- if not parallel_install:
- self.lockdb()
- self.vartree.dbapi._bump_mtime(self.mycpv)
- if self._scheduler is None:
- self._scheduler = SchedulerInterface(asyncio._safe_loop())
- try:
- retval = self.treewalk(mergeroot, myroot, inforoot, myebuild,
- cleanup=cleanup, mydbapi=mydbapi, prev_mtimes=prev_mtimes,
- counter=counter)
-
- # If PORTAGE_BUILDDIR doesn't exist, then it probably means
- # fail-clean is enabled, and the success/die hooks have
- # already been called by EbuildPhase.
- if os.path.isdir(self.settings['PORTAGE_BUILDDIR']):
-
- if retval == os.EX_OK:
- phase = 'success_hooks'
- else:
- phase = 'die_hooks'
-
- ebuild_phase = MiscFunctionsProcess(
- background=False, commands=[phase],
- scheduler=self._scheduler, settings=self.settings)
- ebuild_phase.start()
- ebuild_phase.wait()
- self._elog_process()
-
- if 'noclean' not in self.settings.features and \
- (retval == os.EX_OK or \
- 'fail-clean' in self.settings.features):
- if myebuild is None:
- myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
-
- doebuild_environment(myebuild, "clean",
- settings=self.settings, db=mydbapi)
- phase = EbuildPhase(background=False, phase="clean",
- scheduler=self._scheduler, settings=self.settings)
- phase.start()
- phase.wait()
- finally:
- self.settings.pop('REPLACING_VERSIONS', None)
- if self.vartree.dbapi._linkmap is None:
- # preserve-libs is entirely disabled
- pass
- else:
- self.vartree.dbapi._linkmap._clear_cache()
- self.vartree.dbapi._bump_mtime(self.mycpv)
- if not parallel_install:
- self.unlockdb()
-
- if retval == os.EX_OK and self._postinst_failure:
- retval = portage.const.RETURNCODE_POSTINST_FAILURE
-
- return retval
-
- def getstring(self,name):
- "returns contents of a file with whitespace converted to spaces"
- if not os.path.exists(self.dbdir+"/"+name):
- return ""
- with io.open(
- _unicode_encode(os.path.join(self.dbdir, name),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'], errors='replace'
- ) as f:
- mydata = f.read().split()
- return " ".join(mydata)
-
- def copyfile(self,fname):
- shutil.copyfile(fname,self.dbdir+"/"+os.path.basename(fname))
-
- def getfile(self,fname):
- if not os.path.exists(self.dbdir+"/"+fname):
- return ""
- with io.open(_unicode_encode(os.path.join(self.dbdir, fname),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'], errors='replace'
- ) as f:
- return f.read()
-
- def setfile(self,fname,data):
- kwargs = {}
- if fname == 'environment.bz2' or not isinstance(data, str):
- kwargs['mode'] = 'wb'
- else:
- kwargs['mode'] = 'w'
- kwargs['encoding'] = _encodings['repo.content']
- write_atomic(os.path.join(self.dbdir, fname), data, **kwargs)
-
- def getelements(self,ename):
- if not os.path.exists(self.dbdir+"/"+ename):
- return []
- with io.open(_unicode_encode(
- os.path.join(self.dbdir, ename),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'], errors='replace'
- ) as f:
- mylines = f.readlines()
- myreturn = []
- for x in mylines:
- for y in x[:-1].split():
- myreturn.append(y)
- return myreturn
-
- def setelements(self,mylist,ename):
- with io.open(_unicode_encode(
- os.path.join(self.dbdir, ename),
- encoding=_encodings['fs'], errors='strict'),
- mode='w', encoding=_encodings['repo.content'],
- errors='backslashreplace') as f:
- for x in mylist:
- f.write("%s\n" % x)
-
- def isregular(self):
- "Is this a regular package (does it have a CATEGORY file? A dblink can be virtual *and* regular)"
- return os.path.exists(os.path.join(self.dbdir, "CATEGORY"))
-
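getstring(), getfile(), getelements(), setfile() and setelements() are thin accessors for the metadata files inside a package's /var/db/pkg entry. A small read-only usage sketch; the CPV is a placeholder and is assumed to be installed:

    import portage
    from portage.dbapi.vartree import dblink

    dbl = dblink("app-misc", "hello-1.0", settings=portage.settings,
                 treetype="vartree",
                 vartree=portage.db[portage.settings["EROOT"]]["vartree"])

    use_flags = dbl.getelements("USE")      # e.g. ["ssl", "zlib"]
    slot = dbl.getfile("SLOT").strip()      # e.g. "0"
    print(dbl.isregular(), use_flags, slot)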
- def _pre_merge_backup(self, backup_dblink, downgrade):
-
- if ("unmerge-backup" in self.settings.features or
- (downgrade and "downgrade-backup" in self.settings.features)):
- return self._quickpkg_dblink(backup_dblink, False, None)
-
- return os.EX_OK
-
- def _pre_unmerge_backup(self, background):
-
- if "unmerge-backup" in self.settings.features :
- logfile = None
- if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
- logfile = self.settings.get("PORTAGE_LOG_FILE")
- return self._quickpkg_dblink(self, background, logfile)
-
- return os.EX_OK
-
- def _quickpkg_dblink(self, backup_dblink, background, logfile):
-
- build_time = backup_dblink.getfile('BUILD_TIME')
- try:
- build_time = int(build_time.strip())
- except ValueError:
- build_time = 0
-
- trees = QueryCommand.get_db()[self.settings["EROOT"]]
- bintree = trees["bintree"]
-
- for binpkg in reversed(
- bintree.dbapi.match('={}'.format(backup_dblink.mycpv))):
- if binpkg.build_time == build_time:
- return os.EX_OK
-
- self.lockdb()
- try:
-
- if not backup_dblink.exists():
- # It got unmerged by a concurrent process.
- return os.EX_OK
-
- # Call quickpkg for support of QUICKPKG_DEFAULT_OPTS and stuff.
- quickpkg_binary = os.path.join(self.settings["PORTAGE_BIN_PATH"],
- "quickpkg")
-
- if not os.access(quickpkg_binary, os.X_OK):
- # If not running from the source tree, use PATH.
- quickpkg_binary = find_binary("quickpkg")
- if quickpkg_binary is None:
- self._display_merge(
- _("%s: command not found") % "quickpkg",
- level=logging.ERROR, noiselevel=-1)
- return 127
-
- # Let quickpkg inherit the global vartree config's env.
- env = dict(self.vartree.settings.items())
- env["__PORTAGE_INHERIT_VARDB_LOCK"] = "1"
-
- pythonpath = [x for x in env.get('PYTHONPATH', '').split(":") if x]
- if not pythonpath or \
- not os.path.samefile(pythonpath[0], portage._pym_path):
- pythonpath.insert(0, portage._pym_path)
- env['PYTHONPATH'] = ":".join(pythonpath)
-
- quickpkg_proc = SpawnProcess(
- args=[portage._python_interpreter, quickpkg_binary,
- "=%s" % (backup_dblink.mycpv,)],
- background=background, env=env,
- scheduler=self._scheduler, logfile=logfile)
- quickpkg_proc.start()
-
- return quickpkg_proc.wait()
-
- finally:
- self.unlockdb()
-
- def merge(mycat, mypkg, pkgloc, infloc,
- myroot=None, settings=None, myebuild=None,
- mytree=None, mydbapi=None, vartree=None, prev_mtimes=None, blockers=None,
- scheduler=None, fd_pipes=None):
- """
- @param myroot: ignored, settings['EROOT'] is used instead
- """
- myroot = None
- if settings is None:
- raise TypeError("settings argument is required")
- if not os.access(settings['EROOT'], os.W_OK):
- writemsg(_("Permission denied: access('%s', W_OK)\n") % settings['EROOT'],
- noiselevel=-1)
- return errno.EACCES
- background = (settings.get('PORTAGE_BACKGROUND') == '1')
- merge_task = MergeProcess(
- mycat=mycat, mypkg=mypkg, settings=settings,
- treetype=mytree, vartree=vartree,
- scheduler=(scheduler or asyncio._safe_loop()),
- background=background, blockers=blockers, pkgloc=pkgloc,
- infloc=infloc, myebuild=myebuild, mydbapi=mydbapi,
- prev_mtimes=prev_mtimes, logfile=settings.get('PORTAGE_LOG_FILE'),
- fd_pipes=fd_pipes)
- merge_task.start()
- retcode = merge_task.wait()
- return retcode
-
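A hypothetical call sketch for the module-level merge() helper above. The CPV and paths are placeholders, and in practice this is driven by Portage's build machinery (doebuild), which also supplies mydbapi and related context, so treat this purely as an illustration of the argument shape:

    import os
    import portage
    from portage.dbapi.vartree import merge

    settings = portage.config(clone=portage.settings)
    ret = merge(
        "app-misc", "hello-1.0",
        pkgloc="/var/tmp/portage/app-misc/hello-1.0/image",       # ${D}
        infloc="/var/tmp/portage/app-misc/hello-1.0/build-info",
        settings=settings,                                        # required
        vartree=portage.db[settings["EROOT"]]["vartree"],
    )
    print(ret == os.EX_OK)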
- def unmerge(cat, pkg, myroot=None, settings=None,
- mytrimworld=None, vartree=None,
- ldpath_mtimes=None, scheduler=None):
- """
- @param myroot: ignored, settings['EROOT'] is used instead
- @param mytrimworld: ignored
- """
- myroot = None
- if settings is None:
- raise TypeError("settings argument is required")
- mylink = dblink(cat, pkg, settings=settings, treetype="vartree",
- vartree=vartree, scheduler=scheduler)
- vartree = mylink.vartree
- parallel_install = "parallel-install" in settings.features
- if not parallel_install:
- mylink.lockdb()
- try:
- if mylink.exists():
- retval = mylink.unmerge(ldpath_mtimes=ldpath_mtimes)
- if retval == os.EX_OK:
- mylink.lockdb()
- try:
- mylink.delete()
- finally:
- mylink.unlockdb()
- return retval
- return os.EX_OK
- finally:
- if vartree.dbapi._linkmap is None:
- # preserve-libs is entirely disabled
- pass
- else:
- vartree.dbapi._linkmap._clear_cache()
- if not parallel_install:
- mylink.unlockdb()
+ """
+    This class provides an interface to the installed package database.
+ At present this is implemented as a text backend in /var/db/pkg.
+ """
+
+ _normalize_needed = re.compile(r"//|^[^/]|./$|(^|/)\.\.?(/|$)")
+
+ _contents_re = re.compile(
+ r"^("
+ + r"(?P<dir>(dev|dir|fif) (.+))|"
+ + r"(?P<obj>(obj) (.+) (\S+) (\d+))|"
+ + r"(?P<sym>(sym) (.+) -> (.+) ((\d+)|(?P<oldsym>("
+ + r"\(\d+, \d+L, \d+L, \d+, \d+, \d+, \d+L, \d+, (\d+), \d+\)))))"
+ + r")$"
+ )
+
+ # These files are generated by emerge, so we need to remove
+ # them when they are the only thing left in a directory.
+ _infodir_cleanup = frozenset(["dir", "dir.old"])
+
+ _ignored_unlink_errnos = (errno.EBUSY, errno.ENOENT, errno.ENOTDIR, errno.EISDIR)
+
+ _ignored_rmdir_errnos = (
+ errno.EEXIST,
+ errno.ENOTEMPTY,
+ errno.EBUSY,
+ errno.ENOENT,
+ errno.ENOTDIR,
+ errno.EISDIR,
+ errno.EPERM,
+ )
+
+ def __init__(
+ self,
+ cat,
+ pkg,
+ myroot=None,
+ settings=None,
+ treetype=None,
+ vartree=None,
+ blockers=None,
+ scheduler=None,
+ pipe=None,
+ ):
+ """
+ Creates a DBlink object for a given CPV.
+        The given CPV need not already be present in the database.
+
+ @param cat: Category
+ @type cat: String
+ @param pkg: Package (PV)
+ @type pkg: String
+ @param myroot: ignored, settings['ROOT'] is used instead
+ @type myroot: String (Path)
+ @param settings: Typically portage.settings
+ @type settings: portage.config
+ @param treetype: one of ['porttree','bintree','vartree']
+ @type treetype: String
+ @param vartree: an instance of vartree corresponding to myroot.
+ @type vartree: vartree
+ """
+
+ if settings is None:
+ raise TypeError("settings argument is required")
+
+ mysettings = settings
+ self._eroot = mysettings["EROOT"]
+ self.cat = cat
+ self.pkg = pkg
+ self.mycpv = self.cat + "/" + self.pkg
+ if self.mycpv == settings.mycpv and isinstance(settings.mycpv, _pkg_str):
+ self.mycpv = settings.mycpv
+ else:
+ self.mycpv = _pkg_str(self.mycpv)
+ self.mysplit = list(self.mycpv.cpv_split[1:])
+ self.mysplit[0] = self.mycpv.cp
+ self.treetype = treetype
+ if vartree is None:
+ vartree = portage.db[self._eroot]["vartree"]
+ self.vartree = vartree
+ self._blockers = blockers
+ self._scheduler = scheduler
+ self.dbroot = normalize_path(os.path.join(self._eroot, VDB_PATH))
+ self.dbcatdir = self.dbroot + "/" + cat
+ self.dbpkgdir = self.dbcatdir + "/" + pkg
+ self.dbtmpdir = self.dbcatdir + "/" + MERGING_IDENTIFIER + pkg
+ self.dbdir = self.dbpkgdir
+ self.settings = mysettings
+ self._verbose = self.settings.get("PORTAGE_VERBOSE") == "1"
+
+ self.myroot = self.settings["ROOT"]
+ self._installed_instance = None
+ self.contentscache = None
+ self._contents_inodes = None
+ self._contents_basenames = None
+ self._linkmap_broken = False
+ self._device_path_map = {}
+ self._hardlink_merge_map = {}
+ self._hash_key = (self._eroot, self.mycpv)
+ self._protect_obj = None
+ self._pipe = pipe
+ self._postinst_failure = False
+
+ # When necessary, this attribute is modified for
+ # compliance with RESTRICT=preserve-libs.
+ self._preserve_libs = "preserve-libs" in mysettings.features
+ self._contents = ContentsCaseSensitivityManager(self)
+ self._slot_locks = []
+
+ def __hash__(self):
+ return hash(self._hash_key)
+
+ def __eq__(self, other):
+ return isinstance(other, dblink) and self._hash_key == other._hash_key
+
+ def _get_protect_obj(self):
+
+ if self._protect_obj is None:
+ self._protect_obj = ConfigProtect(
+ self._eroot,
+ portage.util.shlex_split(self.settings.get("CONFIG_PROTECT", "")),
+ portage.util.shlex_split(self.settings.get("CONFIG_PROTECT_MASK", "")),
+ case_insensitive=("case-insensitive-fs" in self.settings.features),
+ )
+
+ return self._protect_obj
+
+ def isprotected(self, obj):
+ return self._get_protect_obj().isprotected(obj)
+
+ def updateprotect(self):
+ self._get_protect_obj().updateprotect()
+
+ def lockdb(self):
+ self.vartree.dbapi.lock()
+
+ def unlockdb(self):
+ self.vartree.dbapi.unlock()
+
+ def _slot_locked(f):
+ """
+ A decorator function which, when parallel-install is enabled,
+ acquires and releases slot locks for the current package and
+ blocked packages. This is required in order to account for
+ interactions with blocked packages (involving resolution of
+ file collisions).
+ """
+
+ def wrapper(self, *args, **kwargs):
+ if "parallel-install" in self.settings.features:
+ self._acquire_slot_locks(kwargs.get("mydbapi", self.vartree.dbapi))
+ try:
+ return f(self, *args, **kwargs)
+ finally:
+ self._release_slot_locks()
+
+ return wrapper
+
+ def _acquire_slot_locks(self, db):
+ """
+ Acquire slot locks for the current package and blocked packages.
+ """
+
+ slot_atoms = []
+
+ try:
+ slot = self.mycpv.slot
+ except AttributeError:
+ (slot,) = db.aux_get(self.mycpv, ["SLOT"])
+ slot = slot.partition("/")[0]
+
+ slot_atoms.append(portage.dep.Atom("%s:%s" % (self.mycpv.cp, slot)))
+
+ for blocker in self._blockers or []:
+ slot_atoms.append(blocker.slot_atom)
+
+ # Sort atoms so that locks are acquired in a predictable
+ # order, preventing deadlocks with competitors that may
+ # be trying to acquire overlapping locks.
+ slot_atoms.sort()
+ for slot_atom in slot_atoms:
+ self.vartree.dbapi._slot_lock(slot_atom)
+ self._slot_locks.append(slot_atom)
+
+ def _release_slot_locks(self):
+ """
+ Release all slot locks.
+ """
+ while self._slot_locks:
+ self.vartree.dbapi._slot_unlock(self._slot_locks.pop())
+
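The slot-atom construction used by _acquire_slot_locks() can be reproduced standalone; a small sketch (the CPV is a placeholder and must be installed for aux_get to succeed):

    import portage
    from portage.dep import Atom

    vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi
    cpv = "dev-lang/python-3.11.8"            # placeholder
    (slot,) = vardb.aux_get(cpv, ["SLOT"])    # e.g. "3.11" or "3.11/3.11m"
    slot = slot.partition("/")[0]             # keep the slot, drop any sub-slot
    atom = Atom("%s:%s" % (portage.cpv_getkey(cpv), slot))
    print(atom)                               # dev-lang/python:3.11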
+ def getpath(self):
+ "return path to location of db information (for >>> informational display)"
+ return self.dbdir
+
+ def exists(self):
+ "does the db entry exist? boolean."
+ return os.path.exists(self.dbdir)
+
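A minimal construction sketch for dblink, assuming a standard Portage environment; the category and package are placeholders:

    import portage
    from portage.dbapi.vartree import dblink

    settings = portage.settings
    dbl = dblink("app-misc", "hello-1.0", settings=settings,
                 treetype="vartree",
                 vartree=portage.db[settings["EROOT"]]["vartree"])
    if dbl.exists():
        print(dbl.getpath())        # .../var/db/pkg/app-misc/hello-1.0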
+ def delete(self):
+ """
+ Remove this entry from the database
+ """
+ try:
+ os.lstat(self.dbdir)
+ except OSError as e:
+ if e.errno not in (errno.ENOENT, errno.ENOTDIR, errno.ESTALE):
+ raise
+ return
+
+ # Check validity of self.dbdir before attempting to remove it.
+ if not self.dbdir.startswith(self.dbroot):
+ writemsg(
+ _("portage.dblink.delete(): invalid dbdir: %s\n") % self.dbdir,
+ noiselevel=-1,
+ )
+ return
+
+ if self.dbdir is self.dbpkgdir:
+ (counter,) = self.vartree.dbapi.aux_get(self.mycpv, ["COUNTER"])
+ self.vartree.dbapi._cache_delta.recordEvent(
+ "remove", self.mycpv, self.settings["SLOT"].split("/")[0], counter
+ )
+
+ shutil.rmtree(self.dbdir)
+ # If empty, remove parent category directory.
+ try:
+ os.rmdir(os.path.dirname(self.dbdir))
+ except OSError:
+ pass
+ self.vartree.dbapi._remove(self)
+
+ # Use self.dbroot since we need an existing path for syncfs.
+ try:
+ self._merged_path(self.dbroot, os.lstat(self.dbroot))
+ except OSError:
+ pass
+
+ self._post_merge_sync()
+
+ def clearcontents(self):
+ """
+ For a given db entry (self), erase the CONTENTS values.
+ """
+ self.lockdb()
+ try:
+ if os.path.exists(self.dbdir + "/CONTENTS"):
+ os.unlink(self.dbdir + "/CONTENTS")
+ finally:
+ self.unlockdb()
+
+ def _clear_contents_cache(self):
+ self.contentscache = None
+ self._contents_inodes = None
+ self._contents_basenames = None
+ self._contents.clear_cache()
+
+ def getcontents(self):
+ """
+ Get the installed files of a given package (aka what that package installed)
+ """
+ if self.contentscache is not None:
+ return self.contentscache
+ contents_file = os.path.join(self.dbdir, "CONTENTS")
+ pkgfiles = {}
+ try:
+ with io.open(
+ _unicode_encode(
+ contents_file, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ mylines = f.readlines()
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ self.contentscache = pkgfiles
+ return pkgfiles
+
+ null_byte = "\0"
+ normalize_needed = self._normalize_needed
+ contents_re = self._contents_re
+ obj_index = contents_re.groupindex["obj"]
+ dir_index = contents_re.groupindex["dir"]
+ sym_index = contents_re.groupindex["sym"]
+ # The old symlink format may exist on systems that have packages
+ # which were installed many years ago (see bug #351814).
+ oldsym_index = contents_re.groupindex["oldsym"]
+ # CONTENTS files already contain EPREFIX
+ myroot = self.settings["ROOT"]
+ if myroot == os.path.sep:
+ myroot = None
+ # used to generate parent dir entries
+ dir_entry = ("dir",)
+ eroot_split_len = len(self.settings["EROOT"].split(os.sep)) - 1
+ pos = 0
+ errors = []
+ for pos, line in enumerate(mylines):
+ if null_byte in line:
+ # Null bytes are a common indication of corruption.
+ errors.append((pos + 1, _("Null byte found in CONTENTS entry")))
+ continue
+ line = line.rstrip("\n")
+ m = contents_re.match(line)
+ if m is None:
+ errors.append((pos + 1, _("Unrecognized CONTENTS entry")))
+ continue
+
+ if m.group(obj_index) is not None:
+ base = obj_index
+ # format: type, mtime, md5sum
+ data = (m.group(base + 1), m.group(base + 4), m.group(base + 3))
+ elif m.group(dir_index) is not None:
+ base = dir_index
+ # format: type
+ data = (m.group(base + 1),)
+ elif m.group(sym_index) is not None:
+ base = sym_index
+ if m.group(oldsym_index) is None:
+ mtime = m.group(base + 5)
+ else:
+ mtime = m.group(base + 8)
+ # format: type, mtime, dest
+ data = (m.group(base + 1), mtime, m.group(base + 3))
+ else:
+                # This won't happen as long as the regular expression
+ # is written to only match valid entries.
+ raise AssertionError(
+ _("required group not found " + "in CONTENTS entry: '%s'") % line
+ )
+
+ path = m.group(base + 2)
+ if normalize_needed.search(path) is not None:
+ path = normalize_path(path)
+ if not path.startswith(os.path.sep):
+ path = os.path.sep + path
+
+ if myroot is not None:
+ path = os.path.join(myroot, path.lstrip(os.path.sep))
+
+ # Implicitly add parent directories, since we can't necessarily
+ # assume that they are explicitly listed in CONTENTS, and it's
+ # useful for callers if they can rely on parent directory entries
+ # being generated here (crucial for things like dblink.isowner()).
+ path_split = path.split(os.sep)
+ path_split.pop()
+ while len(path_split) > eroot_split_len:
+ parent = os.sep.join(path_split)
+ if parent in pkgfiles:
+ break
+ pkgfiles[parent] = dir_entry
+ path_split.pop()
+
+ pkgfiles[path] = data
+
+ if errors:
+ writemsg(_("!!! Parse error in '%s'\n") % contents_file, noiselevel=-1)
+ for pos, e in errors:
+ writemsg(_("!!! line %d: %s\n") % (pos, e), noiselevel=-1)
+ self.contentscache = pkgfiles
+ return pkgfiles
+
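The dictionary built by getcontents() maps absolute (ROOT-prefixed) paths to type-keyed tuples; an illustrative shape with made-up values (mtimes stay strings, and parent directories are added implicitly):

    pkgfiles = {
        "/usr": ("dir",),                   # implicit parent entry
        "/usr/bin": ("dir",),
        "/usr/bin/hello":                   # ("obj", mtime, md5)
            ("obj", "1700000000", "d41d8cd98f00b204e9800998ecf8427e"),
        "/usr/lib/libhello.so":             # ("sym", mtime, target)
            ("sym", "1700000000", "libhello.so.1"),
    }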
+ def quickpkg(
+ self, output_file, include_config=False, include_unmodified_config=False
+ ):
+ """
+ Create a tar file appropriate for use by quickpkg.
+
+ @param output_file: Write binary tar stream to file.
+ @type output_file: file
+ @param include_config: Include all files protected by CONFIG_PROTECT
+ (as a security precaution, default is False).
+ @type include_config: bool
+ @param include_unmodified_config: Include files protected by CONFIG_PROTECT
+ that have not been modified since installation (as a security precaution,
+ default is False).
+ @type include_unmodified_config: bool
+ @rtype: list
+ @return: Paths of protected configuration files which have been omitted.
+ """
+ settings = self.settings
+ cpv = self.mycpv
+ xattrs = "xattr" in settings.features
+ contents = self.getcontents()
+ excluded_config_files = []
+ protect = None
+
+ if not include_config:
+ confprot = ConfigProtect(
+ settings["EROOT"],
+ portage.util.shlex_split(settings.get("CONFIG_PROTECT", "")),
+ portage.util.shlex_split(settings.get("CONFIG_PROTECT_MASK", "")),
+ case_insensitive=("case-insensitive-fs" in settings.features),
+ )
+
+ def protect(filename):
+ if not confprot.isprotected(filename):
+ return False
+ if include_unmodified_config:
+ file_data = contents[filename]
+ if file_data[0] == "obj":
+ orig_md5 = file_data[2].lower()
+ cur_md5 = perform_md5(filename, calc_prelink=1)
+ if orig_md5 == cur_md5:
+ return False
+ excluded_config_files.append(filename)
+ return True
+
+ # The tarfile module will write pax headers holding the
+ # xattrs only if PAX_FORMAT is specified here.
+ with tarfile.open(
+ fileobj=output_file,
+ mode="w|",
+ format=tarfile.PAX_FORMAT if xattrs else tarfile.DEFAULT_FORMAT,
+ ) as tar:
+ tar_contents(
+ contents, settings["ROOT"], tar, protect=protect, xattrs=xattrs
+ )
+
+ return excluded_config_files
+
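A usage sketch for dblink.quickpkg(): it streams the installed files into an uncompressed tar written to the given file object and returns the protected config files it skipped. The CPV and output path are placeholders:

    import portage
    from portage.dbapi.vartree import dblink

    dbl = dblink("app-misc", "hello-1.0", settings=portage.settings,
                 treetype="vartree",
                 vartree=portage.db[portage.settings["EROOT"]]["vartree"])

    with open("/tmp/hello-1.0.tar", "wb") as out:
        omitted = dbl.quickpkg(out, include_config=False)
    for path in omitted:
        print("omitted protected config file:", path)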
+ def _prune_plib_registry(self, unmerge=False, needed=None, preserve_paths=None):
+ # remove preserved libraries that don't have any consumers left
+ if not (
+ self._linkmap_broken
+ or self.vartree.dbapi._linkmap is None
+ or self.vartree.dbapi._plib_registry is None
+ ):
+ self.vartree.dbapi._fs_lock()
+ plib_registry = self.vartree.dbapi._plib_registry
+ plib_registry.lock()
+ try:
+ plib_registry.load()
+
+ unmerge_with_replacement = unmerge and preserve_paths is not None
+ if unmerge_with_replacement:
+ # If self.mycpv is about to be unmerged and we
+ # have a replacement package, we want to exclude
+ # the irrelevant NEEDED data that belongs to
+ # files which are being unmerged now.
+ exclude_pkgs = (self.mycpv,)
+ else:
+ exclude_pkgs = None
+
+ self._linkmap_rebuild(
+ exclude_pkgs=exclude_pkgs,
+ include_file=needed,
+ preserve_paths=preserve_paths,
+ )
+
+ if unmerge:
+ unmerge_preserve = None
+ if not unmerge_with_replacement:
+ unmerge_preserve = self._find_libs_to_preserve(unmerge=True)
+ counter = self.vartree.dbapi.cpv_counter(self.mycpv)
+ try:
+ slot = self.mycpv.slot
+ except AttributeError:
+ slot = _pkg_str(self.mycpv, slot=self.settings["SLOT"]).slot
+ plib_registry.unregister(self.mycpv, slot, counter)
+ if unmerge_preserve:
+ for path in sorted(unmerge_preserve):
+ contents_key = self._match_contents(path)
+ if not contents_key:
+ continue
+ obj_type = self.getcontents()[contents_key][0]
+ self._display_merge(
+ _(">>> needed %s %s\n") % (obj_type, contents_key),
+ noiselevel=-1,
+ )
+ plib_registry.register(
+ self.mycpv, slot, counter, unmerge_preserve
+ )
+ # Remove the preserved files from our contents
+ # so that they won't be unmerged.
+ self.vartree.dbapi.removeFromContents(self, unmerge_preserve)
+
+ unmerge_no_replacement = unmerge and not unmerge_with_replacement
+ cpv_lib_map = self._find_unused_preserved_libs(unmerge_no_replacement)
+ if cpv_lib_map:
+ self._remove_preserved_libs(cpv_lib_map)
+ self.vartree.dbapi.lock()
+ try:
+ for cpv, removed in cpv_lib_map.items():
+ if not self.vartree.dbapi.cpv_exists(cpv):
+ continue
+ self.vartree.dbapi.removeFromContents(cpv, removed)
+ finally:
+ self.vartree.dbapi.unlock()
+
+ plib_registry.store()
+ finally:
+ plib_registry.unlock()
+ self.vartree.dbapi._fs_unlock()
+
+ @_slot_locked
+ def unmerge(
+ self,
+ pkgfiles=None,
+ trimworld=None,
+ cleanup=True,
+ ldpath_mtimes=None,
+ others_in_slot=None,
+ needed=None,
+ preserve_paths=None,
+ ):
+ """
+ Calls prerm
+ Unmerges a given package (CPV)
+        Calls postrm
+        Calls cleanrm
+        Calls env_update
+
+ @param pkgfiles: files to unmerge (generally self.getcontents() )
+ @type pkgfiles: Dictionary
+ @param trimworld: Unused
+ @type trimworld: Boolean
+ @param cleanup: cleanup to pass to doebuild (see doebuild)
+ @type cleanup: Boolean
+ @param ldpath_mtimes: mtimes to pass to env_update (see env_update)
+ @type ldpath_mtimes: Dictionary
+ @param others_in_slot: all dblink instances in this slot, excluding self
+ @type others_in_slot: list
+ @param needed: Filename containing libraries needed after unmerge.
+ @type needed: String
+ @param preserve_paths: Libraries preserved by a package instance that
+ is currently being merged. They need to be explicitly passed to the
+ LinkageMap, since they are not registered in the
+ PreservedLibsRegistry yet.
+ @type preserve_paths: set
+ @rtype: Integer
+ @return:
+ 1. os.EX_OK if everything went well.
+ 2. return code of the failed phase (for prerm, postrm, cleanrm)
+ """
+
+ if trimworld is not None:
+ warnings.warn(
+ "The trimworld parameter of the "
+ + "portage.dbapi.vartree.dblink.unmerge()"
+ + " method is now unused.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ background = False
+ log_path = self.settings.get("PORTAGE_LOG_FILE")
+ if self._scheduler is None:
+ # We create a scheduler instance and use it to
+ # log unmerge output separately from merge output.
+ self._scheduler = SchedulerInterface(asyncio._safe_loop())
+ if self.settings.get("PORTAGE_BACKGROUND") == "subprocess":
+ if self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "1":
+ self.settings["PORTAGE_BACKGROUND"] = "1"
+ self.settings.backup_changes("PORTAGE_BACKGROUND")
+ background = True
+ elif self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "0":
+ self.settings["PORTAGE_BACKGROUND"] = "0"
+ self.settings.backup_changes("PORTAGE_BACKGROUND")
+ elif self.settings.get("PORTAGE_BACKGROUND") == "1":
+ background = True
+
+ self.vartree.dbapi._bump_mtime(self.mycpv)
+ showMessage = self._display_merge
+ if self.vartree.dbapi._categories is not None:
+ self.vartree.dbapi._categories = None
+
+ # When others_in_slot is not None, the backup has already been
+ # handled by the caller.
+ caller_handles_backup = others_in_slot is not None
+
+ # When others_in_slot is supplied, the security check has already been
+ # done for this slot, so it shouldn't be repeated until the next
+ # replacement or unmerge operation.
+ if others_in_slot is None:
+ slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
+ slot_matches = self.vartree.dbapi.match(
+ "%s:%s" % (portage.cpv_getkey(self.mycpv), slot)
+ )
+ others_in_slot = []
+ for cur_cpv in slot_matches:
+ if cur_cpv == self.mycpv:
+ continue
+ others_in_slot.append(
+ dblink(
+ self.cat,
+ catsplit(cur_cpv)[1],
+ settings=self.settings,
+ vartree=self.vartree,
+ treetype="vartree",
+ pipe=self._pipe,
+ )
+ )
+
+ retval = self._security_check([self] + others_in_slot)
+ if retval:
+ return retval
+
+ contents = self.getcontents()
+ # Now, don't assume that the name of the ebuild is the same as the
+ # name of the dir; the package may have been moved.
+ myebuildpath = os.path.join(self.dbdir, self.pkg + ".ebuild")
+ failures = 0
+ ebuild_phase = "prerm"
+ mystuff = os.listdir(self.dbdir)
+ for x in mystuff:
+ if x.endswith(".ebuild"):
+ if x[:-7] != self.pkg:
+ # Clean up after vardbapi.move_ent() breakage in
+ # portage versions before 2.1.2
+ os.rename(os.path.join(self.dbdir, x), myebuildpath)
+ write_atomic(os.path.join(self.dbdir, "PF"), self.pkg + "\n")
+ break
+
+ if (
+ self.mycpv != self.settings.mycpv
+ or "EAPI" not in self.settings.configdict["pkg"]
+ ):
+ # We avoid a redundant setcpv call here when
+ # the caller has already taken care of it.
+ self.settings.setcpv(self.mycpv, mydb=self.vartree.dbapi)
+
+ eapi_unsupported = False
+ try:
+ doebuild_environment(
+ myebuildpath, "prerm", settings=self.settings, db=self.vartree.dbapi
+ )
+ except UnsupportedAPIException as e:
+ eapi_unsupported = e
+
+ if (
+ self._preserve_libs
+ and "preserve-libs" in self.settings["PORTAGE_RESTRICT"].split()
+ ):
+ self._preserve_libs = False
+
+ builddir_lock = None
+ scheduler = self._scheduler
+ retval = os.EX_OK
+ try:
+ # Only create builddir_lock if the caller
+ # has not already acquired the lock.
+ if "PORTAGE_BUILDDIR_LOCKED" not in self.settings:
+ builddir_lock = EbuildBuildDir(
+ scheduler=scheduler, settings=self.settings
+ )
+ scheduler.run_until_complete(builddir_lock.async_lock())
+ prepare_build_dirs(settings=self.settings, cleanup=True)
+ log_path = self.settings.get("PORTAGE_LOG_FILE")
+
+ # Do this before the following _prune_plib_registry call, since
+ # that removes preserved libraries from our CONTENTS, and we
+ # may want to backup those libraries first.
+ if not caller_handles_backup:
+ retval = self._pre_unmerge_backup(background)
+ if retval != os.EX_OK:
+ showMessage(
+ _("!!! FAILED prerm: quickpkg: %s\n") % retval,
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return retval
+
+ self._prune_plib_registry(
+ unmerge=True, needed=needed, preserve_paths=preserve_paths
+ )
+
+ # Log the error after PORTAGE_LOG_FILE is initialized
+ # by prepare_build_dirs above.
+ if eapi_unsupported:
+ # Sometimes this happens due to corruption of the EAPI file.
+ failures += 1
+ showMessage(
+ _("!!! FAILED prerm: %s\n") % os.path.join(self.dbdir, "EAPI"),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ showMessage(
+ "%s\n" % (eapi_unsupported,), level=logging.ERROR, noiselevel=-1
+ )
+ elif os.path.isfile(myebuildpath):
+ phase = EbuildPhase(
+ background=background,
+ phase=ebuild_phase,
+ scheduler=scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ retval = phase.wait()
+
+ # XXX: Decide how to handle failures here.
+ if retval != os.EX_OK:
+ failures += 1
+ showMessage(
+ _("!!! FAILED prerm: %s\n") % retval,
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+
+ self.vartree.dbapi._fs_lock()
+ try:
+ self._unmerge_pkgfiles(pkgfiles, others_in_slot)
+ finally:
+ self.vartree.dbapi._fs_unlock()
+ self._clear_contents_cache()
+
+ if not eapi_unsupported and os.path.isfile(myebuildpath):
+ ebuild_phase = "postrm"
+ phase = EbuildPhase(
+ background=background,
+ phase=ebuild_phase,
+ scheduler=scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ retval = phase.wait()
+
+ # XXX: Decide how to handle failures here.
+ if retval != os.EX_OK:
+ failures += 1
+ showMessage(
+ _("!!! FAILED postrm: %s\n") % retval,
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+
+ finally:
+ self.vartree.dbapi._bump_mtime(self.mycpv)
+ try:
+ if not eapi_unsupported and os.path.isfile(myebuildpath):
+ if retval != os.EX_OK:
+ msg_lines = []
+ msg = _(
+ "The '%(ebuild_phase)s' "
+ "phase of the '%(cpv)s' package "
+ "has failed with exit value %(retval)s."
+ ) % {
+ "ebuild_phase": ebuild_phase,
+ "cpv": self.mycpv,
+ "retval": retval,
+ }
+ from textwrap import wrap
+
+ msg_lines.extend(wrap(msg, 72))
+ msg_lines.append("")
+
+ ebuild_name = os.path.basename(myebuildpath)
+ ebuild_dir = os.path.dirname(myebuildpath)
+ msg = _(
+ "The problem occurred while executing "
+ "the ebuild file named '%(ebuild_name)s' "
+ "located in the '%(ebuild_dir)s' directory. "
+ "If necessary, manually remove "
+ "the environment.bz2 file and/or the "
+ "ebuild file located in that directory."
+ ) % {"ebuild_name": ebuild_name, "ebuild_dir": ebuild_dir}
+ msg_lines.extend(wrap(msg, 72))
+ msg_lines.append("")
+
+ msg = _(
+ "Removal "
+ "of the environment.bz2 file is "
+ "preferred since it may allow the "
+ "removal phases to execute successfully. "
+ "The ebuild will be "
+ "sourced and the eclasses "
+ "from the current ebuild repository will be used "
+ "when necessary. Removal of "
+ "the ebuild file will cause the "
+ "pkg_prerm() and pkg_postrm() removal "
+ "phases to be skipped entirely."
+ )
+ msg_lines.extend(wrap(msg, 72))
+
+ self._eerror(ebuild_phase, msg_lines)
+
+ self._elog_process(phasefilter=("prerm", "postrm"))
+
+ if retval == os.EX_OK:
+ try:
+ doebuild_environment(
+ myebuildpath,
+ "cleanrm",
+ settings=self.settings,
+ db=self.vartree.dbapi,
+ )
+ except UnsupportedAPIException:
+ pass
+ phase = EbuildPhase(
+ background=background,
+ phase="cleanrm",
+ scheduler=scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ retval = phase.wait()
+ finally:
+ if builddir_lock is not None:
+ scheduler.run_until_complete(builddir_lock.async_unlock())
+
+ if log_path is not None:
+
+ if not failures and "unmerge-logs" not in self.settings.features:
+ try:
+ os.unlink(log_path)
+ except OSError:
+ pass
+
+ try:
+ st = os.stat(log_path)
+ except OSError:
+ pass
+ else:
+ if st.st_size == 0:
+ try:
+ os.unlink(log_path)
+ except OSError:
+ pass
+
+ if log_path is not None and os.path.exists(log_path):
+ # Restore this since it gets lost somewhere above and it
+ # needs to be set for _display_merge() to be able to log.
+            # Note that the log isn't necessarily supposed to exist: if
+            # PORTAGE_LOGDIR is unset it is a temporary file, which has
+            # already been cleaned up above.
+ self.settings["PORTAGE_LOG_FILE"] = log_path
+ else:
+ self.settings.pop("PORTAGE_LOG_FILE", None)
+
+ env_update(
+ target_root=self.settings["ROOT"],
+ prev_mtimes=ldpath_mtimes,
+ contents=contents,
+ env=self.settings,
+ writemsg_level=self._display_merge,
+ vardbapi=self.vartree.dbapi,
+ )
+
+ unmerge_with_replacement = preserve_paths is not None
+ if not unmerge_with_replacement:
+ # When there's a replacement package which calls us via treewalk,
+ # treewalk will automatically call _prune_plib_registry for us.
+ # Otherwise, we need to call _prune_plib_registry ourselves.
+ # Don't pass in the "unmerge=True" flag here, since that flag
+ # is intended to be used _prior_ to unmerge, not after.
+ self._prune_plib_registry()
+
+ return os.EX_OK
+
+ def _display_merge(self, msg, level=0, noiselevel=0):
+ if not self._verbose and noiselevel >= 0 and level < logging.WARN:
+ return
+ if self._scheduler is None:
+ writemsg_level(msg, level=level, noiselevel=noiselevel)
+ else:
+ log_path = None
+ if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
+ log_path = self.settings.get("PORTAGE_LOG_FILE")
+ background = self.settings.get("PORTAGE_BACKGROUND") == "1"
+
+ if background and log_path is None:
+ if level >= logging.WARN:
+ writemsg_level(msg, level=level, noiselevel=noiselevel)
+ else:
+ self._scheduler.output(
+ msg,
+ log_path=log_path,
+ background=background,
+ level=level,
+ noiselevel=noiselevel,
+ )
+
+ def _show_unmerge(self, zing, desc, file_type, file_name):
+ self._display_merge(
+ "%s %s %s %s\n" % (zing, desc.ljust(8), file_type, file_name)
+ )
+
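For orientation, the unmerge log lines produced through _show_unmerge() look roughly like the lines below (paths are placeholders; column widths come from the ljust(8) above, and the second column is one of the unmerge_desc codes defined further down):

    <<<          obj /usr/bin/hello
    --- !mtime   obj /etc/hello.conf
    --- !empty   dir /usr/share/hello
    --- replaced sym /usr/lib/libhello.so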
+ def _unmerge_pkgfiles(self, pkgfiles, others_in_slot):
+ """
+
+ Unmerges the contents of a package from the liveFS
+ Removes the VDB entry for self
+
+ @param pkgfiles: typically self.getcontents()
+ @type pkgfiles: Dictionary { filename: [ 'type', '?', 'md5sum' ] }
+ @param others_in_slot: all dblink instances in this slot, excluding self
+ @type others_in_slot: list
+ @rtype: None
+ """
+
+ os = _os_merge
+ perf_md5 = perform_md5
+ showMessage = self._display_merge
+ show_unmerge = self._show_unmerge
+ ignored_unlink_errnos = self._ignored_unlink_errnos
+ ignored_rmdir_errnos = self._ignored_rmdir_errnos
+
+ if not pkgfiles:
+ showMessage(_("No package files given... Grabbing a set.\n"))
+ pkgfiles = self.getcontents()
+
+ if others_in_slot is None:
+ others_in_slot = []
+ slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
+ slot_matches = self.vartree.dbapi.match(
+ "%s:%s" % (portage.cpv_getkey(self.mycpv), slot)
+ )
+ for cur_cpv in slot_matches:
+ if cur_cpv == self.mycpv:
+ continue
+ others_in_slot.append(
+ dblink(
+ self.cat,
+ catsplit(cur_cpv)[1],
+ settings=self.settings,
+ vartree=self.vartree,
+ treetype="vartree",
+ pipe=self._pipe,
+ )
+ )
+
+ cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
+ stale_confmem = []
+ protected_symlinks = {}
+
+ unmerge_orphans = "unmerge-orphans" in self.settings.features
+ calc_prelink = "prelink-checksums" in self.settings.features
+
+ if pkgfiles:
+ self.updateprotect()
+ mykeys = list(pkgfiles)
+ mykeys.sort()
+ mykeys.reverse()
+
+ # process symlinks second-to-last, directories last.
+ mydirs = set()
+
+ uninstall_ignore = portage.util.shlex_split(
+ self.settings.get("UNINSTALL_IGNORE", "")
+ )
+
+ def unlink(file_name, lstatobj):
+ if bsd_chflags:
+ if lstatobj.st_flags != 0:
+ bsd_chflags.lchflags(file_name, 0)
+ parent_name = os.path.dirname(file_name)
+ # Use normal stat/chflags for the parent since we want to
+ # follow any symlinks to the real parent directory.
+ pflags = os.stat(parent_name).st_flags
+ if pflags != 0:
+ bsd_chflags.chflags(parent_name, 0)
+ try:
+ if not stat.S_ISLNK(lstatobj.st_mode):
+ # Remove permissions to ensure that any hardlinks to
+ # suid/sgid files are rendered harmless.
+ os.chmod(file_name, 0)
+ os.unlink(file_name)
+ except OSError as ose:
+ # If the chmod or unlink fails, you are in trouble.
+ # With Prefix this can be because the file is owned
+ # by someone else (a screwup by root?), on a normal
+ # system maybe filesystem corruption. In any case,
+ # if we backtrace and die here, we leave the system
+ # in a totally undefined state, hence we just bleed
+ # like hell and continue to hopefully finish all our
+ # administrative and pkg_postinst stuff.
+ self._eerror(
+ "postrm",
+ ["Could not chmod or unlink '%s': %s" % (file_name, ose)],
+ )
+ else:
+
+ # Even though the file no longer exists, we log it
+ # here so that _unmerge_dirs can see that we've
+ # removed a file from this device, and will record
+ # the parent directory for a syncfs call.
+ self._merged_path(file_name, lstatobj, exists=False)
+
+ finally:
+ if bsd_chflags and pflags != 0:
+ # Restore the parent flags we saved before unlinking
+ bsd_chflags.chflags(parent_name, pflags)
+
+ unmerge_desc = {}
+ unmerge_desc["cfgpro"] = _("cfgpro")
+ unmerge_desc["replaced"] = _("replaced")
+ unmerge_desc["!dir"] = _("!dir")
+ unmerge_desc["!empty"] = _("!empty")
+ unmerge_desc["!fif"] = _("!fif")
+ unmerge_desc["!found"] = _("!found")
+ unmerge_desc["!md5"] = _("!md5")
+ unmerge_desc["!mtime"] = _("!mtime")
+ unmerge_desc["!obj"] = _("!obj")
+ unmerge_desc["!sym"] = _("!sym")
+ unmerge_desc["!prefix"] = _("!prefix")
+
+ real_root = self.settings["ROOT"]
+ real_root_len = len(real_root) - 1
+ eroot = self.settings["EROOT"]
+
+ infodirs = frozenset(
+ infodir
+ for infodir in chain(
+ self.settings.get("INFOPATH", "").split(":"),
+ self.settings.get("INFODIR", "").split(":"),
+ )
+ if infodir
+ )
+ infodirs_inodes = set()
+ for infodir in infodirs:
+ infodir = os.path.join(real_root, infodir.lstrip(os.sep))
+ try:
+ statobj = os.stat(infodir)
+ except OSError:
+ pass
+ else:
+ infodirs_inodes.add((statobj.st_dev, statobj.st_ino))
+
+ for i, objkey in enumerate(mykeys):
+
+ obj = normalize_path(objkey)
+ if os is _os_merge:
+ try:
+ _unicode_encode(
+ obj, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ _unicode_encode(
+ obj, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+ perf_md5 = portage.checksum.perform_md5
+
+ file_data = pkgfiles[objkey]
+ file_type = file_data[0]
+
+ # don't try to unmerge the prefix offset itself
+ if len(obj) <= len(eroot) or not obj.startswith(eroot):
+ show_unmerge("---", unmerge_desc["!prefix"], file_type, obj)
+ continue
+
+ statobj = None
+ try:
+ statobj = os.stat(obj)
+ except OSError:
+ pass
+ lstatobj = None
+ try:
+ lstatobj = os.lstat(obj)
+ except (OSError, AttributeError):
+ pass
+ islink = lstatobj is not None and stat.S_ISLNK(lstatobj.st_mode)
+ if lstatobj is None:
+ show_unmerge("---", unmerge_desc["!found"], file_type, obj)
+ continue
+
+ f_match = obj[len(eroot) - 1 :]
+ ignore = False
+ for pattern in uninstall_ignore:
+ if fnmatch.fnmatch(f_match, pattern):
+ ignore = True
+ break
+
+ if not ignore:
+ if islink and f_match in ("/lib", "/usr/lib", "/usr/local/lib"):
+ # Ignore libdir symlinks for bug #423127.
+ ignore = True
+
+ if ignore:
+ show_unmerge("---", unmerge_desc["cfgpro"], file_type, obj)
+ continue
+
+ # don't use EROOT, CONTENTS entries already contain EPREFIX
+ if obj.startswith(real_root):
+ relative_path = obj[real_root_len:]
+ is_owned = False
+ for dblnk in others_in_slot:
+ if dblnk.isowner(relative_path):
+ is_owned = True
+ break
+
+ if (
+ is_owned
+ and islink
+ and file_type in ("sym", "dir")
+ and statobj
+ and stat.S_ISDIR(statobj.st_mode)
+ ):
+ # A new instance of this package claims the file, so
+ # don't unmerge it. If the file is symlink to a
+ # directory and the unmerging package installed it as
+ # a symlink, but the new owner has it listed as a
+ # directory, then we'll produce a warning since the
+ # symlink is a sort of orphan in this case (see
+ # bug #326685).
+ symlink_orphan = False
+ for dblnk in others_in_slot:
+ parent_contents_key = dblnk._match_contents(relative_path)
+ if not parent_contents_key:
+ continue
+ if not parent_contents_key.startswith(real_root):
+ continue
+ if dblnk.getcontents()[parent_contents_key][0] == "dir":
+ symlink_orphan = True
+ break
+
+ if symlink_orphan:
+ protected_symlinks.setdefault(
+ (statobj.st_dev, statobj.st_ino), []
+ ).append(relative_path)
+
+ if is_owned:
+ show_unmerge("---", unmerge_desc["replaced"], file_type, obj)
+ continue
+ elif relative_path in cfgfiledict:
+ stale_confmem.append(relative_path)
+
+ # Don't unlink symlinks to directories here since that can
+ # remove /lib and /usr/lib symlinks.
+ if (
+ unmerge_orphans
+ and lstatobj
+ and not stat.S_ISDIR(lstatobj.st_mode)
+ and not (islink and statobj and stat.S_ISDIR(statobj.st_mode))
+ and not self.isprotected(obj)
+ ):
+ try:
+ unlink(obj, lstatobj)
+ except EnvironmentError as e:
+ if e.errno not in ignored_unlink_errnos:
+ raise
+ del e
+ show_unmerge("<<<", "", file_type, obj)
+ continue
+
+ lmtime = str(lstatobj[stat.ST_MTIME])
+ if (pkgfiles[objkey][0] not in ("dir", "fif", "dev")) and (
+ lmtime != pkgfiles[objkey][1]
+ ):
+ show_unmerge("---", unmerge_desc["!mtime"], file_type, obj)
+ continue
+
+ if file_type == "dir" and not islink:
+ if lstatobj is None or not stat.S_ISDIR(lstatobj.st_mode):
+ show_unmerge("---", unmerge_desc["!dir"], file_type, obj)
+ continue
+ mydirs.add((obj, (lstatobj.st_dev, lstatobj.st_ino)))
+ elif file_type == "sym" or (file_type == "dir" and islink):
+ if not islink:
+ show_unmerge("---", unmerge_desc["!sym"], file_type, obj)
+ continue
+
+ # If this symlink points to a directory then we don't want
+ # to unmerge it if there are any other packages that
+ # installed files into the directory via this symlink
+ # (see bug #326685).
+ # TODO: Resolving a symlink to a directory will require
+ # simulation if $ROOT != / and the link is not relative.
+ if (
+ islink
+ and statobj
+ and stat.S_ISDIR(statobj.st_mode)
+ and obj.startswith(real_root)
+ ):
+
+ relative_path = obj[real_root_len:]
+ try:
+ target_dir_contents = os.listdir(obj)
+ except OSError:
+ pass
+ else:
+ if target_dir_contents:
+ # If all the children are regular files owned
+ # by this package, then the symlink should be
+ # safe to unmerge.
+ all_owned = True
+ for child in target_dir_contents:
+ child = os.path.join(relative_path, child)
+ if not self.isowner(child):
+ all_owned = False
+ break
+ try:
+ child_lstat = os.lstat(
+ os.path.join(
+ real_root, child.lstrip(os.sep)
+ )
+ )
+ except OSError:
+ continue
+
+ if not stat.S_ISREG(child_lstat.st_mode):
+ # Nested symlinks or directories make
+ # the issue very complex, so just
+ # preserve the symlink in order to be
+ # on the safe side.
+ all_owned = False
+ break
+
+ if not all_owned:
+ protected_symlinks.setdefault(
+ (statobj.st_dev, statobj.st_ino), []
+ ).append(relative_path)
+ show_unmerge(
+ "---", unmerge_desc["!empty"], file_type, obj
+ )
+ continue
+
+ # Go ahead and unlink symlinks to directories here when
+ # they're actually recorded as symlinks in the contents.
+ # Normally, symlinks such as /lib -> lib64 are not recorded
+ # as symlinks in the contents of a package. If a package
+ # installs something into ${D}/lib/, it is recorded in the
+ # contents as a directory even if it happens to correspond
+ # to a symlink when it's merged to the live filesystem.
+ try:
+ unlink(obj, lstatobj)
+ show_unmerge("<<<", "", file_type, obj)
+ except (OSError, IOError) as e:
+ if e.errno not in ignored_unlink_errnos:
+ raise
+ del e
+ show_unmerge("!!!", "", file_type, obj)
+ elif pkgfiles[objkey][0] == "obj":
+ if statobj is None or not stat.S_ISREG(statobj.st_mode):
+ show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
+ continue
+ mymd5 = None
+ try:
+ mymd5 = perf_md5(obj, calc_prelink=calc_prelink)
+ except FileNotFound as e:
+ # the file has disappeared between now and our stat call
+ show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
+ continue
+
+                    # Lower-casing is needed because db entries used to be
+                    # stored in upper case; comparing in lower case keeps
+                    # backwards compatibility.
+ if mymd5 != pkgfiles[objkey][2].lower():
+ show_unmerge("---", unmerge_desc["!md5"], file_type, obj)
+ continue
+ try:
+ unlink(obj, lstatobj)
+ except (OSError, IOError) as e:
+ if e.errno not in ignored_unlink_errnos:
+ raise
+ del e
+ show_unmerge("<<<", "", file_type, obj)
+ elif pkgfiles[objkey][0] == "fif":
+ if not stat.S_ISFIFO(lstatobj[stat.ST_MODE]):
+ show_unmerge("---", unmerge_desc["!fif"], file_type, obj)
+ continue
+ show_unmerge("---", "", file_type, obj)
+ elif pkgfiles[objkey][0] == "dev":
+ show_unmerge("---", "", file_type, obj)
+
+ self._unmerge_dirs(
+ mydirs, infodirs_inodes, protected_symlinks, unmerge_desc, unlink, os
+ )
+ mydirs.clear()
+
+ if protected_symlinks:
+ self._unmerge_protected_symlinks(
+ others_in_slot,
+ infodirs_inodes,
+ protected_symlinks,
+ unmerge_desc,
+ unlink,
+ os,
+ )
+
+ if protected_symlinks:
+ msg = (
+ "One or more symlinks to directories have been "
+ + "preserved in order to ensure that files installed "
+ + "via these symlinks remain accessible. "
+ + "This indicates that the mentioned symlink(s) may "
+ + "be obsolete remnants of an old install, and it "
+ + "may be appropriate to replace a given symlink "
+ + "with the directory that it points to."
+ )
+ lines = textwrap.wrap(msg, 72)
+ lines.append("")
+ flat_list = set()
+ flat_list.update(*protected_symlinks.values())
+ flat_list = sorted(flat_list)
+ for f in flat_list:
+ lines.append("\t%s" % (os.path.join(real_root, f.lstrip(os.sep))))
+ lines.append("")
+ self._elog("elog", "postrm", lines)
+
+ # Remove stale entries from config memory.
+ if stale_confmem:
+ for filename in stale_confmem:
+ del cfgfiledict[filename]
+ writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
+
+ # remove self from vartree database so that our own virtual gets zapped if we're the last node
+ self.vartree.zap(self.mycpv)
+
+ def _unmerge_protected_symlinks(
+ self,
+ others_in_slot,
+ infodirs_inodes,
+ protected_symlinks,
+ unmerge_desc,
+ unlink,
+ os,
+ ):
+
+ real_root = self.settings["ROOT"]
+ show_unmerge = self._show_unmerge
+ ignored_unlink_errnos = self._ignored_unlink_errnos
+
+ flat_list = set()
+ flat_list.update(*protected_symlinks.values())
+ flat_list = sorted(flat_list)
+
+ for f in flat_list:
+ for dblnk in others_in_slot:
+ if dblnk.isowner(f):
+ # If another package in the same slot installed
+ # a file via a protected symlink, return early
+ # and don't bother searching for any other owners.
+ return
+
+ msg = []
+ msg.append("")
+ msg.append(_("Directory symlink(s) may need protection:"))
+ msg.append("")
+
+ for f in flat_list:
+ msg.append("\t%s" % os.path.join(real_root, f.lstrip(os.path.sep)))
+
+ msg.append("")
+ msg.append("Use the UNINSTALL_IGNORE variable to exempt specific symlinks")
+ msg.append("from the following search (see the make.conf man page).")
+ msg.append("")
+ msg.append(
+ _(
+ "Searching all installed"
+ " packages for files installed via above symlink(s)..."
+ )
+ )
+ msg.append("")
+ self._elog("elog", "postrm", msg)
+
+ self.lockdb()
+ try:
+ owners = self.vartree.dbapi._owners.get_owners(flat_list)
+ self.vartree.dbapi.flush_cache()
+ finally:
+ self.unlockdb()
+
+ for owner in list(owners):
+ if owner.mycpv == self.mycpv:
+ owners.pop(owner, None)
+
+ if not owners:
+ msg = []
+ msg.append(
+ _(
+ "The above directory symlink(s) are all "
+ "safe to remove. Removing them now..."
+ )
+ )
+ msg.append("")
+ self._elog("elog", "postrm", msg)
+ dirs = set()
+ for unmerge_syms in protected_symlinks.values():
+ for relative_path in unmerge_syms:
+ obj = os.path.join(real_root, relative_path.lstrip(os.sep))
+ parent = os.path.dirname(obj)
+ while len(parent) > len(self._eroot):
+ try:
+ lstatobj = os.lstat(parent)
+ except OSError:
+ break
+ else:
+ dirs.add((parent, (lstatobj.st_dev, lstatobj.st_ino)))
+ parent = os.path.dirname(parent)
+ try:
+ unlink(obj, os.lstat(obj))
+ show_unmerge("<<<", "", "sym", obj)
+ except (OSError, IOError) as e:
+ if e.errno not in ignored_unlink_errnos:
+ raise
+ del e
+ show_unmerge("!!!", "", "sym", obj)
+
+ protected_symlinks.clear()
+ self._unmerge_dirs(
+ dirs, infodirs_inodes, protected_symlinks, unmerge_desc, unlink, os
+ )
+ dirs.clear()
+
+ def _unmerge_dirs(
+ self, dirs, infodirs_inodes, protected_symlinks, unmerge_desc, unlink, os
+ ):
+
+ show_unmerge = self._show_unmerge
+ infodir_cleanup = self._infodir_cleanup
+ ignored_unlink_errnos = self._ignored_unlink_errnos
+ ignored_rmdir_errnos = self._ignored_rmdir_errnos
+ real_root = self.settings["ROOT"]
+
+ dirs = sorted(dirs)
+ revisit = {}
+
+ while True:
+ try:
+ obj, inode_key = dirs.pop()
+ except IndexError:
+ break
+ # Treat any directory named "info" as a candidate here,
+ # since it might have been in INFOPATH previously even
+ # though it may not be there now.
+ if inode_key in infodirs_inodes or os.path.basename(obj) == "info":
+ try:
+ remaining = os.listdir(obj)
+ except OSError:
+ pass
+ else:
+ cleanup_info_dir = ()
+ if remaining and len(remaining) <= len(infodir_cleanup):
+ if not set(remaining).difference(infodir_cleanup):
+ cleanup_info_dir = remaining
+
+ for child in cleanup_info_dir:
+ child = os.path.join(obj, child)
+ try:
+ lstatobj = os.lstat(child)
+ if stat.S_ISREG(lstatobj.st_mode):
+ unlink(child, lstatobj)
+ show_unmerge("<<<", "", "obj", child)
+ except EnvironmentError as e:
+ if e.errno not in ignored_unlink_errnos:
+ raise
+ del e
+ show_unmerge("!!!", "", "obj", child)
+
+ try:
+ parent_name = os.path.dirname(obj)
+ parent_stat = os.stat(parent_name)
+
+ if bsd_chflags:
+ lstatobj = os.lstat(obj)
+ if lstatobj.st_flags != 0:
+ bsd_chflags.lchflags(obj, 0)
+
+ # Use normal stat/chflags for the parent since we want to
+ # follow any symlinks to the real parent directory.
+ pflags = parent_stat.st_flags
+ if pflags != 0:
+ bsd_chflags.chflags(parent_name, 0)
+ try:
+ os.rmdir(obj)
+ finally:
+ if bsd_chflags and pflags != 0:
+ # Restore the parent flags we saved before unlinking
+ bsd_chflags.chflags(parent_name, pflags)
+
+ # Record the parent directory for use in syncfs calls.
+ # Note that we use a realpath and a regular stat here, since
+ # we want to follow any symlinks back to the real device where
+ # the real parent directory resides.
+ self._merged_path(os.path.realpath(parent_name), parent_stat)
+
+ show_unmerge("<<<", "", "dir", obj)
+ except EnvironmentError as e:
+ if e.errno not in ignored_rmdir_errnos:
+ raise
+ if e.errno != errno.ENOENT:
+ show_unmerge("---", unmerge_desc["!empty"], "dir", obj)
+ revisit[obj] = inode_key
+
+ # Since we didn't remove this directory, record the directory
+ # itself for use in syncfs calls, if we have removed another
+ # file from the same device.
+ # Note that we use a realpath and a regular stat here, since
+ # we want to follow any symlinks back to the real device where
+ # the real directory resides.
+ try:
+ dir_stat = os.stat(obj)
+ except OSError:
+ pass
+ else:
+ if dir_stat.st_dev in self._device_path_map:
+ self._merged_path(os.path.realpath(obj), dir_stat)
+
+ else:
+ # When a directory is successfully removed, there's
+ # no need to protect symlinks that point to it.
+ unmerge_syms = protected_symlinks.pop(inode_key, None)
+ if unmerge_syms is not None:
+ parents = []
+ for relative_path in unmerge_syms:
+ obj = os.path.join(real_root, relative_path.lstrip(os.sep))
+ try:
+ unlink(obj, os.lstat(obj))
+ show_unmerge("<<<", "", "sym", obj)
+ except (OSError, IOError) as e:
+ if e.errno not in ignored_unlink_errnos:
+ raise
+ del e
+ show_unmerge("!!!", "", "sym", obj)
+ else:
+ parents.append(os.path.dirname(obj))
+
+ if parents:
+ # Revisit parents recursively (bug 640058).
+ recursive_parents = []
+ for parent in set(parents):
+ while parent in revisit:
+ recursive_parents.append(parent)
+ parent = os.path.dirname(parent)
+ if parent == "/":
+ break
+
+ for parent in sorted(set(recursive_parents)):
+ dirs.append((parent, revisit.pop(parent)))
+
+ def isowner(self, filename, destroot=None):
+ """
+ Check if a file belongs to this package. This may
+ result in a stat call for the parent directory of
+ every installed file, since the inode numbers are
+ used to work around the problem of ambiguous paths
+ caused by symlinked directories. The results of
+ stat calls are cached to optimize multiple calls
+ to this method.
+
+ @param filename:
+ @type filename:
+ @param destroot:
+ @type destroot:
+ @rtype: Boolean
+ @return:
+ 1. True if this package owns the file.
+ 2. False if this package does not own the file.
+ """
+
+ if destroot is not None and destroot != self._eroot:
+ warnings.warn(
+ "The second parameter of the "
+ + "portage.dbapi.vartree.dblink.isowner()"
+ + " is now unused. Instead "
+ + "self.settings['EROOT'] will be used.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ return bool(self._match_contents(filename))
+
+ def _match_contents(self, filename, destroot=None):
+ """
+ The matching contents entry is returned, which is useful
+ since the path may differ from the one given by the caller,
+ due to symlinks.
+
+ @rtype: String
+ @return: the contents entry corresponding to the given path, or False
+ if the file is not owned by this package.
+ """
+
+ filename = _unicode_decode(
+ filename, encoding=_encodings["content"], errors="strict"
+ )
+
+ if destroot is not None and destroot != self._eroot:
+ warnings.warn(
+ "The second parameter of the "
+ + "portage.dbapi.vartree.dblink._match_contents()"
+ + " is now unused. Instead "
+ + "self.settings['ROOT'] will be used.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ # don't use EROOT here, image already contains EPREFIX
+ destroot = self.settings["ROOT"]
+
+ # The given filename argument might have a different encoding than the
+ # the filenames contained in the contents, so use separate wrapped os
+ # modules for each. The basename is more likely to contain non-ascii
+ # characters than the directory path, so use os_filename_arg for all
+ # operations involving the basename of the filename arg.
+ os_filename_arg = _os_merge
+ os = _os_merge
+
+ try:
+ _unicode_encode(filename, encoding=_encodings["merge"], errors="strict")
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ _unicode_encode(filename, encoding=_encodings["fs"], errors="strict")
+ except UnicodeEncodeError:
+ pass
+ else:
+ os_filename_arg = portage.os
+
+ destfile = normalize_path(
+ os_filename_arg.path.join(
+ destroot, filename.lstrip(os_filename_arg.path.sep)
+ )
+ )
+
+ if "case-insensitive-fs" in self.settings.features:
+ destfile = destfile.lower()
+
+ if self._contents.contains(destfile):
+ return self._contents.unmap_key(destfile)
+
+ if self.getcontents():
+ basename = os_filename_arg.path.basename(destfile)
+ if self._contents_basenames is None:
+
+ try:
+ for x in self._contents.keys():
+ _unicode_encode(
+ x, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ for x in self._contents.keys():
+ _unicode_encode(
+ x, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+
+ self._contents_basenames = set(
+ os.path.basename(x) for x in self._contents.keys()
+ )
+ if basename not in self._contents_basenames:
+ # This is a shortcut that, in most cases, allows us to
+ # eliminate this package as an owner without the need
+ # to examine inode numbers of parent directories.
+ return False
+
+ # Use stat rather than lstat since we want to follow
+ # any symlinks to the real parent directory.
+ parent_path = os_filename_arg.path.dirname(destfile)
+ try:
+ parent_stat = os_filename_arg.stat(parent_path)
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ return False
+ if self._contents_inodes is None:
+
+ if os is _os_merge:
+ try:
+ for x in self._contents.keys():
+ _unicode_encode(
+ x, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ for x in self._contents.keys():
+ _unicode_encode(
+ x, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+
+ self._contents_inodes = {}
+ parent_paths = set()
+ for x in self._contents.keys():
+ p_path = os.path.dirname(x)
+ if p_path in parent_paths:
+ continue
+ parent_paths.add(p_path)
+ try:
+ s = os.stat(p_path)
+ except OSError:
+ pass
+ else:
+ inode_key = (s.st_dev, s.st_ino)
+ # Use lists of paths in case multiple
+ # paths reference the same inode.
+ p_path_list = self._contents_inodes.get(inode_key)
+ if p_path_list is None:
+ p_path_list = []
+ self._contents_inodes[inode_key] = p_path_list
+ if p_path not in p_path_list:
+ p_path_list.append(p_path)
+
+ p_path_list = self._contents_inodes.get(
+ (parent_stat.st_dev, parent_stat.st_ino)
+ )
+ if p_path_list:
+ for p_path in p_path_list:
+ x = os_filename_arg.path.join(p_path, basename)
+ if self._contents.contains(x):
+ return self._contents.unmap_key(x)
+
+ return False
+
+ def _linkmap_rebuild(self, **kwargs):
+ """
+ Rebuild the self._linkmap if it's not broken due to missing
+ scanelf binary. Also, return early if preserve-libs is disabled
+ and the preserve-libs registry is empty.
+ """
+ if (
+ self._linkmap_broken
+ or self.vartree.dbapi._linkmap is None
+ or self.vartree.dbapi._plib_registry is None
+ or (
+ "preserve-libs" not in self.settings.features
+ and not self.vartree.dbapi._plib_registry.hasEntries()
+ )
+ ):
+ return
+ try:
+ self.vartree.dbapi._linkmap.rebuild(**kwargs)
+ except CommandNotFound as e:
+ self._linkmap_broken = True
+ self._display_merge(
+ _(
+ "!!! Disabling preserve-libs "
+ "due to error: Command Not Found: %s\n"
+ )
+ % (e,),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+
+ def _find_libs_to_preserve(self, unmerge=False):
+ """
+ Get set of relative paths for libraries to be preserved. When
+ unmerge is False, file paths to preserve are selected from
+ self._installed_instance. Otherwise, paths are selected from
+ self.
+ """
+ if (
+ self._linkmap_broken
+ or self.vartree.dbapi._linkmap is None
+ or self.vartree.dbapi._plib_registry is None
+ or (not unmerge and self._installed_instance is None)
+ or not self._preserve_libs
+ ):
+ return set()
+
+ os = _os_merge
+ linkmap = self.vartree.dbapi._linkmap
+ if unmerge:
+ installed_instance = self
+ else:
+ installed_instance = self._installed_instance
+ old_contents = installed_instance.getcontents()
+ root = self.settings["ROOT"]
+ root_len = len(root) - 1
+ lib_graph = digraph()
+ path_node_map = {}
+
+ def path_to_node(path):
+ node = path_node_map.get(path)
+ if node is None:
+ node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
+ alt_path_node = lib_graph.get(node)
+ if alt_path_node is not None:
+ node = alt_path_node
+ node.alt_paths.add(path)
+ path_node_map[path] = node
+ return node
+
+ consumer_map = {}
+ provider_nodes = set()
+ # Create provider nodes and add them to the graph.
+ for f_abs in old_contents:
+
+ if os is _os_merge:
+ try:
+ _unicode_encode(
+ f_abs, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ _unicode_encode(
+ f_abs, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+
+ f = f_abs[root_len:]
+ try:
+ consumers = linkmap.findConsumers(
+ f, exclude_providers=(installed_instance.isowner,)
+ )
+ except KeyError:
+ continue
+ if not consumers:
+ continue
+ provider_node = path_to_node(f)
+ lib_graph.add(provider_node, None)
+ provider_nodes.add(provider_node)
+ consumer_map[provider_node] = consumers
+
+ # Create consumer nodes and add them to the graph.
+ # Note that consumers can also be providers.
+ for provider_node, consumers in consumer_map.items():
+ for c in consumers:
+ consumer_node = path_to_node(c)
+ if (
+ installed_instance.isowner(c)
+ and consumer_node not in provider_nodes
+ ):
+ # This is not a provider, so it will be uninstalled.
+ continue
+ lib_graph.add(provider_node, consumer_node)
+
+ # Locate nodes which should be preserved. They consist of all
+ # providers that are reachable from consumers that are not
+ # providers themselves.
+ preserve_nodes = set()
+ for consumer_node in lib_graph.root_nodes():
+ if consumer_node in provider_nodes:
+ continue
+ # Preserve all providers that are reachable from this consumer.
+ node_stack = lib_graph.child_nodes(consumer_node)
+ while node_stack:
+ provider_node = node_stack.pop()
+ if provider_node in preserve_nodes:
+ continue
+ preserve_nodes.add(provider_node)
+ node_stack.extend(lib_graph.child_nodes(provider_node))
+
+ preserve_paths = set()
+ for preserve_node in preserve_nodes:
+ # Preserve the library itself, and also preserve the
+ # soname symlink which is the only symlink that is
+ # strictly required.
+ hardlinks = set()
+ soname_symlinks = set()
+ soname = linkmap.getSoname(next(iter(preserve_node.alt_paths)))
+ have_replacement_soname_link = False
+ have_replacement_hardlink = False
+ for f in preserve_node.alt_paths:
+ f_abs = os.path.join(root, f.lstrip(os.sep))
+ try:
+ if stat.S_ISREG(os.lstat(f_abs).st_mode):
+ hardlinks.add(f)
+ if not unmerge and self.isowner(f):
+ have_replacement_hardlink = True
+ if os.path.basename(f) == soname:
+ have_replacement_soname_link = True
+ elif os.path.basename(f) == soname:
+ soname_symlinks.add(f)
+ if not unmerge and self.isowner(f):
+ have_replacement_soname_link = True
+ except OSError:
+ pass
+
+ if have_replacement_hardlink and have_replacement_soname_link:
+ continue
+
+ if hardlinks:
+ preserve_paths.update(hardlinks)
+ preserve_paths.update(soname_symlinks)
+
+ return preserve_paths
+
+ def _add_preserve_libs_to_contents(self, preserve_paths):
+ """
+ Preserve libs returned from _find_libs_to_preserve().
+ """
+
+ if not preserve_paths:
+ return
+
+ os = _os_merge
+ showMessage = self._display_merge
+ root = self.settings["ROOT"]
+
+ # Copy contents entries from the old package to the new one.
+ new_contents = self.getcontents().copy()
+ old_contents = self._installed_instance.getcontents()
+ for f in sorted(preserve_paths):
+ f = _unicode_decode(f, encoding=_encodings["content"], errors="strict")
+ f_abs = os.path.join(root, f.lstrip(os.sep))
+ contents_entry = old_contents.get(f_abs)
+ if contents_entry is None:
+ # This will probably never happen, but it might if one of the
+ # paths returned from findConsumers() refers to one of the libs
+ # that should be preserved yet the path is not listed in the
+ # contents. Such a path might belong to some other package, so
+ # it shouldn't be preserved here.
+ showMessage(
+ _(
+ "!!! File '%s' will not be preserved "
+ "due to missing contents entry\n"
+ )
+ % (f_abs,),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ preserve_paths.remove(f)
+ continue
+ new_contents[f_abs] = contents_entry
+ obj_type = contents_entry[0]
+ showMessage(_(">>> needed %s %s\n") % (obj_type, f_abs), noiselevel=-1)
+ # Add parent directories to contents if necessary.
+ parent_dir = os.path.dirname(f_abs)
+ while len(parent_dir) > len(root):
+ new_contents[parent_dir] = ["dir"]
+ prev = parent_dir
+ parent_dir = os.path.dirname(parent_dir)
+ if prev == parent_dir:
+ break
+ outfile = atomic_ofstream(os.path.join(self.dbtmpdir, "CONTENTS"))
+ write_contents(new_contents, root, outfile)
+ outfile.close()
+ self._clear_contents_cache()
+
+ def _find_unused_preserved_libs(self, unmerge_no_replacement):
+ """
+ Find preserved libraries that don't have any consumers left.
+ """
+
+ if (
+ self._linkmap_broken
+ or self.vartree.dbapi._linkmap is None
+ or self.vartree.dbapi._plib_registry is None
+ or not self.vartree.dbapi._plib_registry.hasEntries()
+ ):
+ return {}
+
+ # Since preserved libraries can be consumers of other preserved
+ # libraries, use a graph to track consumer relationships.
+ plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
+ linkmap = self.vartree.dbapi._linkmap
+ lib_graph = digraph()
+ preserved_nodes = set()
+ preserved_paths = set()
+ path_cpv_map = {}
+ path_node_map = {}
+ root = self.settings["ROOT"]
+
+ def path_to_node(path):
+ node = path_node_map.get(path)
+ if node is None:
- node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
++ chost = self.settings.get('CHOST')
++ if chost.find('darwin') >= 0:
++ node = LinkageMapMachO._LibGraphNode(linkmap._obj_key(path))
++ elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
++ node = LinkageMapPeCoff._LibGraphNode(linkmap._obj_key(path))
++ elif chost.find('aix') >= 0:
++ node = LinkageMapXCoff._LibGraphNode(linkmap._obj_key(path))
++ else
++ node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
+ alt_path_node = lib_graph.get(node)
+ if alt_path_node is not None:
+ node = alt_path_node
+ node.alt_paths.add(path)
+ path_node_map[path] = node
+ return node
+
+ for cpv, plibs in plib_dict.items():
+ for f in plibs:
+ path_cpv_map[f] = cpv
+ preserved_node = path_to_node(f)
+ if not preserved_node.file_exists():
+ continue
+ lib_graph.add(preserved_node, None)
+ preserved_paths.add(f)
+ preserved_nodes.add(preserved_node)
+ for c in self.vartree.dbapi._linkmap.findConsumers(f):
+ consumer_node = path_to_node(c)
+ if not consumer_node.file_exists():
+ continue
+ # Note that consumers may also be providers.
+ lib_graph.add(preserved_node, consumer_node)
+
+ # Eliminate consumers having providers with the same soname as an
+ # installed library that is not preserved. This eliminates
+ # libraries that are erroneously preserved due to a move from one
+ # directory to another.
+ # Also eliminate consumers that are going to be unmerged if
+ # unmerge_no_replacement is True.
+ provider_cache = {}
+ for preserved_node in preserved_nodes:
+ soname = linkmap.getSoname(preserved_node)
+ for consumer_node in lib_graph.parent_nodes(preserved_node):
+ if consumer_node in preserved_nodes:
+ continue
+ if unmerge_no_replacement:
+ will_be_unmerged = True
+ for path in consumer_node.alt_paths:
+ if not self.isowner(path):
+ will_be_unmerged = False
+ break
+ if will_be_unmerged:
+ # This consumer is not preserved and it is
+ # being unmerged, so drop this edge.
+ lib_graph.remove_edge(preserved_node, consumer_node)
+ continue
+
+ providers = provider_cache.get(consumer_node)
+ if providers is None:
+ providers = linkmap.findProviders(consumer_node)
+ provider_cache[consumer_node] = providers
+ providers = providers.get(soname)
+ if providers is None:
+ continue
+ for provider in providers:
+ if provider in preserved_paths:
+ continue
+ provider_node = path_to_node(provider)
+ if not provider_node.file_exists():
+ continue
+ if provider_node in preserved_nodes:
+ continue
+ # An alternative provider seems to be
+ # installed, so drop this edge.
+ lib_graph.remove_edge(preserved_node, consumer_node)
+ break
+
+ cpv_lib_map = {}
+ while lib_graph:
+ root_nodes = preserved_nodes.intersection(lib_graph.root_nodes())
+ if not root_nodes:
+ break
+ lib_graph.difference_update(root_nodes)
+ unlink_list = set()
+ for node in root_nodes:
+ unlink_list.update(node.alt_paths)
+ unlink_list = sorted(unlink_list)
+ for obj in unlink_list:
+ cpv = path_cpv_map.get(obj)
+ if cpv is None:
+ # This means that a symlink is in the preserved libs
+ # registry, but the actual lib it points to is not.
+ self._display_merge(
+ _(
+ "!!! symlink to lib is preserved, "
+ "but not the lib itself:\n!!! '%s'\n"
+ )
+ % (obj,),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ continue
+ removed = cpv_lib_map.get(cpv)
+ if removed is None:
+ removed = set()
+ cpv_lib_map[cpv] = removed
+ removed.add(obj)
+
+ return cpv_lib_map
+
+ def _remove_preserved_libs(self, cpv_lib_map):
+ """
+ Remove files returned from _find_unused_preserved_libs().
+ """
+
+ os = _os_merge
+
+ files_to_remove = set()
+ for files in cpv_lib_map.values():
+ files_to_remove.update(files)
+ files_to_remove = sorted(files_to_remove)
+ showMessage = self._display_merge
+ root = self.settings["ROOT"]
+
+ parent_dirs = set()
+ for obj in files_to_remove:
+ obj = os.path.join(root, obj.lstrip(os.sep))
+ parent_dirs.add(os.path.dirname(obj))
+ if os.path.islink(obj):
+ obj_type = _("sym")
+ else:
+ obj_type = _("obj")
+ try:
+ os.unlink(obj)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ else:
+ showMessage(_("<<< !needed %s %s\n") % (obj_type, obj), noiselevel=-1)
+
+ # Remove empty parent directories if possible.
+ while parent_dirs:
+ x = parent_dirs.pop()
+ while True:
+ try:
+ os.rmdir(x)
+ except OSError:
+ break
+ prev = x
+ x = os.path.dirname(x)
+ if x == prev:
+ break
+
+ self.vartree.dbapi._plib_registry.pruneNonExisting()
+
+ def _collision_protect(self, srcroot, destroot, mypkglist, file_list, symlink_list):
+
+ os = _os_merge
+
+ real_relative_paths = {}
+
+ collision_ignore = []
+ for x in portage.util.shlex_split(self.settings.get("COLLISION_IGNORE", "")):
+ if os.path.isdir(os.path.join(self._eroot, x.lstrip(os.sep))):
+ x = normalize_path(x)
+ x += "/*"
+ collision_ignore.append(x)
+
+ # For collisions with preserved libraries, the current package
+ # will assume ownership and the libraries will be unregistered.
+ if self.vartree.dbapi._plib_registry is None:
+ # preserve-libs is entirely disabled
+ plib_cpv_map = None
+ plib_paths = None
+ plib_inodes = {}
+ else:
+ plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
+ plib_cpv_map = {}
+ plib_paths = set()
+ for cpv, paths in plib_dict.items():
+ plib_paths.update(paths)
+ for f in paths:
+ plib_cpv_map[f] = cpv
+ plib_inodes = self._lstat_inode_map(plib_paths)
+
+ plib_collisions = {}
+
+ showMessage = self._display_merge
+ stopmerge = False
+ collisions = []
+ dirs = set()
+ dirs_ro = set()
+ symlink_collisions = []
+ destroot = self.settings["ROOT"]
+ totfiles = len(file_list) + len(symlink_list)
+ previous = time.monotonic()
+ progress_shown = False
+ report_interval = 1.7 # seconds
+ falign = len("%d" % totfiles)
+ showMessage(
+ _(" %s checking %d files for package collisions\n")
+ % (colorize("GOOD", "*"), totfiles)
+ )
+ for i, (f, f_type) in enumerate(
+ chain(((f, "reg") for f in file_list), ((f, "sym") for f in symlink_list))
+ ):
+ current = time.monotonic()
+ if current - previous > report_interval:
+ showMessage(
+ _("%3d%% done, %*d files remaining ...\n")
+ % (i * 100 / totfiles, falign, totfiles - i)
+ )
+ previous = current
+ progress_shown = True
+
+ dest_path = normalize_path(os.path.join(destroot, f.lstrip(os.path.sep)))
+
+ # Relative path with symbolic links resolved only in parent directories
+ real_relative_path = os.path.join(
+ os.path.realpath(os.path.dirname(dest_path)),
+ os.path.basename(dest_path),
+ )[len(destroot) :]
+
+ real_relative_paths.setdefault(real_relative_path, []).append(
+ f.lstrip(os.path.sep)
+ )
+
+ parent = os.path.dirname(dest_path)
+ if parent not in dirs:
+ for x in iter_parents(parent):
+ if x in dirs:
+ break
+ dirs.add(x)
+ if os.path.isdir(x):
+ if not os.access(x, os.W_OK):
+ dirs_ro.add(x)
+ break
+
+ try:
+ dest_lstat = os.lstat(dest_path)
+ except EnvironmentError as e:
+ if e.errno == errno.ENOENT:
+ del e
+ continue
+ elif e.errno == errno.ENOTDIR:
+ del e
+ # A non-directory is in a location where this package
+ # expects to have a directory.
+ dest_lstat = None
+ parent_path = dest_path
+ while len(parent_path) > len(destroot):
+ parent_path = os.path.dirname(parent_path)
+ try:
+ dest_lstat = os.lstat(parent_path)
+ break
+ except EnvironmentError as e:
+ if e.errno != errno.ENOTDIR:
+ raise
+ del e
+ if not dest_lstat:
+ raise AssertionError(
+ "unable to find non-directory "
+ + "parent for '%s'" % dest_path
+ )
+ dest_path = parent_path
+ f = os.path.sep + dest_path[len(destroot) :]
+ if f in collisions:
+ continue
+ else:
+ raise
+ if f[0] != "/":
+ f = "/" + f
+
+ if stat.S_ISDIR(dest_lstat.st_mode):
+ if f_type == "sym":
+ # This case is explicitly banned
+ # by PMS (see bug #326685).
+ symlink_collisions.append(f)
+ collisions.append(f)
+ continue
+
+ plibs = plib_inodes.get((dest_lstat.st_dev, dest_lstat.st_ino))
+ if plibs:
+ for path in plibs:
+ cpv = plib_cpv_map[path]
+ paths = plib_collisions.get(cpv)
+ if paths is None:
+ paths = set()
+ plib_collisions[cpv] = paths
+ paths.add(path)
+ # The current package will assume ownership and the
+ # libraries will be unregistered, so exclude this
+ # path from the normal collisions.
+ continue
+
+ isowned = False
+ full_path = os.path.join(destroot, f.lstrip(os.path.sep))
+ for ver in mypkglist:
+ if ver.isowner(f):
+ isowned = True
+ break
+ if not isowned and self.isprotected(full_path):
+ isowned = True
+ if not isowned:
+ f_match = full_path[len(self._eroot) - 1 :]
+ stopmerge = True
+ for pattern in collision_ignore:
+ if fnmatch.fnmatch(f_match, pattern):
+ stopmerge = False
+ break
+ if stopmerge:
+ collisions.append(f)
+
+ internal_collisions = {}
+ for real_relative_path, files in real_relative_paths.items():
+ # Detect internal collisions between non-identical files.
+ if len(files) >= 2:
+ files.sort()
+ for i in range(len(files) - 1):
+ file1 = normalize_path(os.path.join(srcroot, files[i]))
+ file2 = normalize_path(os.path.join(srcroot, files[i + 1]))
+ # Compare files, ignoring differences in times.
+ differences = compare_files(
+ file1, file2, skipped_types=("atime", "mtime", "ctime")
+ )
+ if differences:
+ internal_collisions.setdefault(real_relative_path, {})[
+ (files[i], files[i + 1])
+ ] = differences
+
+ if progress_shown:
+ showMessage(_("100% done\n"))
+
+ return (
+ collisions,
+ internal_collisions,
+ dirs_ro,
+ symlink_collisions,
+ plib_collisions,
+ )
+
+ def _lstat_inode_map(self, path_iter):
+ """
+ Use lstat to create a map of the form:
+ {(st_dev, st_ino) : set([path1, path2, ...])}
+ Multiple paths may reference the same inode due to hardlinks.
+ All lstat() calls are relative to self.myroot.
+ """
+
+ os = _os_merge
+
+ root = self.settings["ROOT"]
+ inode_map = {}
+ for f in path_iter:
+ path = os.path.join(root, f.lstrip(os.sep))
+ try:
+ st = os.lstat(path)
+ except OSError as e:
+ if e.errno not in (errno.ENOENT, errno.ENOTDIR):
+ raise
+ del e
+ continue
+ key = (st.st_dev, st.st_ino)
+ paths = inode_map.get(key)
+ if paths is None:
+ paths = set()
+ inode_map[key] = paths
+ paths.add(f)
+ return inode_map
+
+ def _security_check(self, installed_instances):
+ if not installed_instances:
+ return 0
+
+ os = _os_merge
+
+ showMessage = self._display_merge
+
+ file_paths = set()
+ for dblnk in installed_instances:
+ file_paths.update(dblnk.getcontents())
+ inode_map = {}
+ real_paths = set()
+ for i, path in enumerate(file_paths):
+
+ if os is _os_merge:
+ try:
+ _unicode_encode(path, encoding=_encodings["merge"], errors="strict")
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ _unicode_encode(
+ path, encoding=_encodings["fs"], errors="strict"
+ )
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+
+ try:
+ s = os.lstat(path)
+ except OSError as e:
+ if e.errno not in (errno.ENOENT, errno.ENOTDIR):
+ raise
+ del e
+ continue
+ if not stat.S_ISREG(s.st_mode):
+ continue
+ path = os.path.realpath(path)
+ if path in real_paths:
+ continue
+ real_paths.add(path)
+ if s.st_nlink > 1 and s.st_mode & (stat.S_ISUID | stat.S_ISGID):
+ k = (s.st_dev, s.st_ino)
+ inode_map.setdefault(k, []).append((path, s))
+ suspicious_hardlinks = []
+ for path_list in inode_map.values():
+ path, s = path_list[0]
+ if len(path_list) == s.st_nlink:
+ # All hardlinks seem to be owned by this package.
+ continue
+ suspicious_hardlinks.append(path_list)
+ if not suspicious_hardlinks:
+ return 0
+
+ msg = []
+ msg.append(_("suid/sgid file(s) " "with suspicious hardlink(s):"))
+ msg.append("")
+ for path_list in suspicious_hardlinks:
+ for path, s in path_list:
+ msg.append("\t%s" % path)
+ msg.append("")
+ msg.append(
+ _("See the Gentoo Security Handbook " "guide for advice on how to proceed.")
+ )
+
+ self._eerror("preinst", msg)
+
+ return 1
+
+ def _eqawarn(self, phase, lines):
+ self._elog("eqawarn", phase, lines)
+
+ def _eerror(self, phase, lines):
+ self._elog("eerror", phase, lines)
+
+ def _elog(self, funcname, phase, lines):
+ func = getattr(portage.elog.messages, funcname)
+ if self._scheduler is None:
+ for l in lines:
+ func(l, phase=phase, key=self.mycpv)
+ else:
+ background = self.settings.get("PORTAGE_BACKGROUND") == "1"
+ log_path = None
+ if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
+ log_path = self.settings.get("PORTAGE_LOG_FILE")
+ out = io.StringIO()
+ for line in lines:
+ func(line, phase=phase, key=self.mycpv, out=out)
+ msg = out.getvalue()
+ self._scheduler.output(msg, background=background, log_path=log_path)
+
+ def _elog_process(self, phasefilter=None):
+ cpv = self.mycpv
+ if self._pipe is None:
+ elog_process(cpv, self.settings, phasefilter=phasefilter)
+ else:
+ logdir = os.path.join(self.settings["T"], "logging")
+ ebuild_logentries = collect_ebuild_messages(logdir)
+ # phasefilter is irrelevant for the above collect_ebuild_messages
+ # call, since this package instance has a private logdir. However,
+ # it may be relevant for the following collect_messages call.
+ py_logentries = collect_messages(key=cpv, phasefilter=phasefilter).get(
+ cpv, {}
+ )
+ logentries = _merge_logentries(py_logentries, ebuild_logentries)
+ funcnames = {
+ "INFO": "einfo",
+ "LOG": "elog",
+ "WARN": "ewarn",
+ "QA": "eqawarn",
+ "ERROR": "eerror",
+ }
+ str_buffer = []
+ for phase, messages in logentries.items():
+ for key, lines in messages:
+ funcname = funcnames[key]
+ if isinstance(lines, str):
+ lines = [lines]
+ for line in lines:
+ for line in line.split("\n"):
+ fields = (funcname, phase, cpv, line)
+ str_buffer.append(" ".join(fields))
+ str_buffer.append("\n")
+ if str_buffer:
+ str_buffer = _unicode_encode("".join(str_buffer))
+ while str_buffer:
+ str_buffer = str_buffer[os.write(self._pipe, str_buffer) :]
+
+ def _emerge_log(self, msg):
+ emergelog(False, msg)
+
+ def treewalk(
+ self,
+ srcroot,
+ destroot,
+ inforoot,
+ myebuild,
+ cleanup=0,
+ mydbapi=None,
+ prev_mtimes=None,
+ counter=None,
+ ):
+ """
+
+ This function does the following:
+
+ calls doebuild(mydo=instprep)
+ calls get_ro_checker to retrieve a function for checking whether Portage
+ will write to a read-only filesystem, then runs it against the directory list
+ calls self._preserve_libs if FEATURES=preserve-libs
+ calls self._collision_protect if FEATURES=collision-protect
+ calls doebuild(mydo=pkg_preinst)
+ Merges the package to the livefs
+ unmerges old version (if required)
+ calls doebuild(mydo=pkg_postinst)
+ calls env_update
+
+ @param srcroot: Typically this is ${D}
+ @type srcroot: String (Path)
+ @param destroot: ignored, self.settings['ROOT'] is used instead
+ @type destroot: String (Path)
+ @param inforoot: root of the vardb entry ?
+ @type inforoot: String (Path)
+ @param myebuild: path to the ebuild that we are processing
+ @type myebuild: String (Path)
+ @param mydbapi: dbapi which is handed to doebuild.
+ @type mydbapi: portdbapi instance
+ @param prev_mtimes: { Filename:mtime } mapping for env_update
+ @type prev_mtimes: Dictionary
+ @rtype: Boolean
+ @return:
+ 1. 0 on success
+ 2. 1 on failure
+
+ secondhand is a list of symlinks that have been skipped due to their target
+ not existing; we will merge these symlinks at a later time.
+ """
+
+ os = _os_merge
+
+ srcroot = _unicode_decode(
+ srcroot, encoding=_encodings["content"], errors="strict"
+ )
+ destroot = self.settings["ROOT"]
+ inforoot = _unicode_decode(
+ inforoot, encoding=_encodings["content"], errors="strict"
+ )
+ myebuild = _unicode_decode(
+ myebuild, encoding=_encodings["content"], errors="strict"
+ )
+
+ showMessage = self._display_merge
+ srcroot = normalize_path(srcroot).rstrip(os.path.sep) + os.path.sep
+
+ if not os.path.isdir(srcroot):
+ showMessage(
+ _("!!! Directory Not Found: D='%s'\n") % srcroot,
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return 1
+
+ # run instprep internal phase
+ doebuild_environment(myebuild, "instprep", settings=self.settings, db=mydbapi)
+ phase = EbuildPhase(
+ background=False,
+ phase="instprep",
+ scheduler=self._scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ if phase.wait() != os.EX_OK:
+ showMessage(_("!!! instprep failed\n"), level=logging.ERROR, noiselevel=-1)
+ return 1
+
+ is_binpkg = self.settings.get("EMERGE_FROM") == "binary"
+ slot = ""
+ for var_name in ("CHOST", "SLOT"):
+ try:
+ with io.open(
+ _unicode_encode(
+ os.path.join(inforoot, var_name),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ val = f.readline().strip()
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ val = ""
+
+ if var_name == "SLOT":
+ slot = val
+
+ if not slot.strip():
+ slot = self.settings.get(var_name, "")
+ if not slot.strip():
+ showMessage(
+ _("!!! SLOT is undefined\n"),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return 1
+ write_atomic(os.path.join(inforoot, var_name), slot + "\n")
+
+ # This check only applies when built from source, since
+ # inforoot values are written just after src_install.
+ if not is_binpkg and val != self.settings.get(var_name, ""):
+ self._eqawarn(
+ "preinst",
+ [
+ _(
+ "QA Notice: Expected %(var_name)s='%(expected_value)s', got '%(actual_value)s'\n"
+ )
+ % {
+ "var_name": var_name,
+ "expected_value": self.settings.get(var_name, ""),
+ "actual_value": val,
+ }
+ ],
+ )
+
+ def eerror(lines):
+ self._eerror("preinst", lines)
+
+ if not os.path.exists(self.dbcatdir):
+ ensure_dirs(self.dbcatdir)
+
+ # NOTE: We use SLOT obtained from the inforoot
+ # directory, in order to support USE=multislot.
+ # Use _pkg_str discard the sub-slot part if necessary.
+ slot = _pkg_str(self.mycpv, slot=slot).slot
+ cp = self.mysplit[0]
+ slot_atom = "%s:%s" % (cp, slot)
+
+ self.lockdb()
+ try:
+ # filter any old-style virtual matches
+ slot_matches = [
+ cpv
+ for cpv in self.vartree.dbapi.match(slot_atom)
+ if cpv_getkey(cpv) == cp
+ ]
+
+ if self.mycpv not in slot_matches and self.vartree.dbapi.cpv_exists(
+ self.mycpv
+ ):
+ # handle multislot or unapplied slotmove
+ slot_matches.append(self.mycpv)
+
+ others_in_slot = []
+ for cur_cpv in slot_matches:
+ # Clone the config in case one of these has to be unmerged,
+ # since we need it to have private ${T} etc... for things
+ # like elog.
+ settings_clone = portage.config(clone=self.settings)
+ # This reset ensures that there is no unintended leakage
+ # of variables which should not be shared.
+ settings_clone.reset()
+ settings_clone.setcpv(cur_cpv, mydb=self.vartree.dbapi)
+ if (
+ self._preserve_libs
+ and "preserve-libs" in settings_clone["PORTAGE_RESTRICT"].split()
+ ):
+ self._preserve_libs = False
+ others_in_slot.append(
+ dblink(
+ self.cat,
+ catsplit(cur_cpv)[1],
+ settings=settings_clone,
+ vartree=self.vartree,
+ treetype="vartree",
+ scheduler=self._scheduler,
+ pipe=self._pipe,
+ )
+ )
+ finally:
+ self.unlockdb()
+
+ # If any instance has RESTRICT=preserve-libs, then
+ # restrict it for all instances.
+ if not self._preserve_libs:
+ for dblnk in others_in_slot:
+ dblnk._preserve_libs = False
+
+ retval = self._security_check(others_in_slot)
+ if retval:
+ return retval
+
+ if slot_matches:
+ # Used by self.isprotected().
+ max_dblnk = None
+ max_counter = -1
+ for dblnk in others_in_slot:
+ cur_counter = self.vartree.dbapi.cpv_counter(dblnk.mycpv)
+ if cur_counter > max_counter:
+ max_counter = cur_counter
+ max_dblnk = dblnk
+ self._installed_instance = max_dblnk
+
+ # Apply INSTALL_MASK before collision-protect, since it may
+ # be useful to avoid collisions in some scenarios.
+ # We cannot detect if this is needed or not here as INSTALL_MASK can be
+ # modified by bashrc files.
+ phase = MiscFunctionsProcess(
+ background=False,
+ commands=["preinst_mask"],
+ phase="preinst",
+ scheduler=self._scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ phase.wait()
+ try:
+ with io.open(
+ _unicode_encode(
+ os.path.join(inforoot, "INSTALL_MASK"),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ install_mask = InstallMask(f.read())
+ except EnvironmentError:
+ install_mask = None
+
+ if install_mask:
+ install_mask_dir(self.settings["ED"], install_mask)
+ if any(x in self.settings.features for x in ("nodoc", "noman", "noinfo")):
+ try:
+ os.rmdir(os.path.join(self.settings["ED"], "usr", "share"))
+ except OSError:
+ pass
+
+ # We check for unicode encoding issues after src_install. However,
+ # the check must be repeated here for binary packages (it's
+ # inexpensive since we call os.walk() here anyway).
+ unicode_errors = []
+ line_ending_re = re.compile("[\n\r]")
+ srcroot_len = len(srcroot)
+ ed_len = len(self.settings["ED"])
+ eprefix_len = len(self.settings["EPREFIX"])
+
+ while True:
+
+ unicode_error = False
+ eagain_error = False
+
+ filelist = []
+ linklist = []
+ paths_with_newlines = []
+
+ def onerror(e):
+ raise
+
+ walk_iter = os.walk(srcroot, onerror=onerror)
+ while True:
+ try:
+ parent, dirs, files = next(walk_iter)
+ except StopIteration:
+ break
+ except OSError as e:
+ if e.errno != errno.EAGAIN:
+ raise
+ # Observed with PyPy 1.8.
+ eagain_error = True
+ break
+
+ try:
+ parent = _unicode_decode(
+ parent, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeDecodeError:
+ new_parent = _unicode_decode(
+ parent, encoding=_encodings["merge"], errors="replace"
+ )
+ new_parent = _unicode_encode(
+ new_parent, encoding="ascii", errors="backslashreplace"
+ )
+ new_parent = _unicode_decode(
+ new_parent, encoding=_encodings["merge"], errors="replace"
+ )
+ os.rename(parent, new_parent)
+ unicode_error = True
+ unicode_errors.append(new_parent[ed_len:])
+ break
+
+ for fname in files:
+ try:
+ fname = _unicode_decode(
+ fname, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeDecodeError:
+ fpath = portage._os.path.join(
+ parent.encode(_encodings["merge"]), fname
+ )
+ new_fname = _unicode_decode(
+ fname, encoding=_encodings["merge"], errors="replace"
+ )
+ new_fname = _unicode_encode(
+ new_fname, encoding="ascii", errors="backslashreplace"
+ )
+ new_fname = _unicode_decode(
+ new_fname, encoding=_encodings["merge"], errors="replace"
+ )
+ new_fpath = os.path.join(parent, new_fname)
+ os.rename(fpath, new_fpath)
+ unicode_error = True
+ unicode_errors.append(new_fpath[ed_len:])
+ fname = new_fname
+ fpath = new_fpath
+ else:
+ fpath = os.path.join(parent, fname)
+
+ relative_path = fpath[srcroot_len:]
+
+ if line_ending_re.search(relative_path) is not None:
+ paths_with_newlines.append(relative_path)
+
+ file_mode = os.lstat(fpath).st_mode
+ if stat.S_ISREG(file_mode):
+ filelist.append(relative_path)
+ elif stat.S_ISLNK(file_mode):
+ # Note: os.walk puts symlinks to directories in the "dirs"
+ # list and it does not traverse them since that could lead
+ # to an infinite recursion loop.
+ linklist.append(relative_path)
+
+ myto = _unicode_decode(
+ _os.readlink(
+ _unicode_encode(
+ fpath, encoding=_encodings["merge"], errors="strict"
+ )
+ ),
+ encoding=_encodings["merge"],
+ errors="replace",
+ )
+ if line_ending_re.search(myto) is not None:
+ paths_with_newlines.append(relative_path)
+
+ if unicode_error:
+ break
+
+ if not (unicode_error or eagain_error):
+ break
+
+ if unicode_errors:
+ self._elog("eqawarn", "preinst", _merge_unicode_error(unicode_errors))
+
+ if paths_with_newlines:
+ msg = []
+ msg.append(
+ _(
+ "This package installs one or more files containing line ending characters:"
+ )
+ )
+ msg.append("")
+ paths_with_newlines.sort()
+ for f in paths_with_newlines:
+ msg.append("\t/%s" % (f.replace("\n", "\\n").replace("\r", "\\r")))
+ msg.append("")
+ msg.append(_("package %s NOT merged") % self.mycpv)
+ msg.append("")
+ eerror(msg)
+ return 1
+
+ # If there are no files to merge, and an installed package in the same
+ # slot has files, it probably means that something went wrong.
+ if (
+ self.settings.get("PORTAGE_PACKAGE_EMPTY_ABORT") == "1"
+ and not filelist
+ and not linklist
+ and others_in_slot
+ ):
+ installed_files = None
+ for other_dblink in others_in_slot:
+ installed_files = other_dblink.getcontents()
+ if not installed_files:
+ continue
+ from textwrap import wrap
+
+ wrap_width = 72
+ msg = []
+ d = {"new_cpv": self.mycpv, "old_cpv": other_dblink.mycpv}
+ msg.extend(
+ wrap(
+ _(
+ "The '%(new_cpv)s' package will not install "
+ "any files, but the currently installed '%(old_cpv)s'"
+ " package has the following files: "
+ )
+ % d,
+ wrap_width,
+ )
+ )
+ msg.append("")
+ msg.extend(sorted(installed_files))
+ msg.append("")
+ msg.append(_("package %s NOT merged") % self.mycpv)
+ msg.append("")
+ msg.extend(
+ wrap(
+ _(
+ "Manually run `emerge --unmerge =%s` if you "
+ "really want to remove the above files. Set "
+ 'PORTAGE_PACKAGE_EMPTY_ABORT="0" in '
+ "/etc/portage/make.conf if you do not want to "
+ "abort in cases like this."
+ )
+ % other_dblink.mycpv,
+ wrap_width,
+ )
+ )
+ eerror(msg)
+ if installed_files:
+ return 1
+
+ # Make sure the ebuild environment is initialized and that ${T}/elog
+ # exists for logging of collision-protect eerror messages.
+ if myebuild is None:
+ myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
+ doebuild_environment(myebuild, "preinst", settings=self.settings, db=mydbapi)
+ self.settings["REPLACING_VERSIONS"] = " ".join(
+ [portage.versions.cpv_getversion(other.mycpv) for other in others_in_slot]
+ )
+ prepare_build_dirs(settings=self.settings, cleanup=cleanup)
+
+ # check for package collisions
+ blockers = []
+ for blocker in self._blockers or []:
+ blocker = self.vartree.dbapi._dblink(blocker.cpv)
+ # It may have been unmerged before lock(s)
+ # were aquired.
+ if blocker.exists():
+ blockers.append(blocker)
+
+ (
+ collisions,
+ internal_collisions,
+ dirs_ro,
+ symlink_collisions,
+ plib_collisions,
+ ) = self._collision_protect(
+ srcroot, destroot, others_in_slot + blockers, filelist, linklist
+ )
+
+ # Check for read-only filesystems.
+ ro_checker = get_ro_checker()
+ rofilesystems = ro_checker(dirs_ro)
+
+ if rofilesystems:
+ msg = _(
+ "One or more files installed to this package are "
+ "set to be installed to read-only filesystems. "
+ "Please mount the following filesystems as read-write "
+ "and retry."
+ )
+ msg = textwrap.wrap(msg, 70)
+ msg.append("")
+ for f in rofilesystems:
+ msg.append("\t%s" % f)
+ msg.append("")
+ self._elog("eerror", "preinst", msg)
+
+ msg = (
+ _("Package '%s' NOT merged due to read-only file systems.")
+ % self.settings.mycpv
+ )
+ msg += _(
+ " If necessary, refer to your elog "
+ "messages for the whole content of the above message."
+ )
+ msg = textwrap.wrap(msg, 70)
+ eerror(msg)
+ return 1
+
+ if internal_collisions:
+ msg = (
+ _(
+ "Package '%s' has internal collisions between non-identical files "
+ "(located in separate directories in the installation image (${D}) "
+ "corresponding to merged directories in the target "
+ "filesystem (${ROOT})):"
+ )
+ % self.settings.mycpv
+ )
+ msg = textwrap.wrap(msg, 70)
+ msg.append("")
+ for k, v in sorted(internal_collisions.items(), key=operator.itemgetter(0)):
+ msg.append("\t%s" % os.path.join(destroot, k.lstrip(os.path.sep)))
+ for (file1, file2), differences in sorted(v.items()):
+ msg.append(
+ "\t\t%s" % os.path.join(destroot, file1.lstrip(os.path.sep))
+ )
+ msg.append(
+ "\t\t%s" % os.path.join(destroot, file2.lstrip(os.path.sep))
+ )
+ msg.append("\t\t\tDifferences: %s" % ", ".join(differences))
+ msg.append("")
+ self._elog("eerror", "preinst", msg)
+
+ msg = (
+ _(
+ "Package '%s' NOT merged due to internal collisions "
+ "between non-identical files."
+ )
+ % self.settings.mycpv
+ )
+ msg += _(
+ " If necessary, refer to your elog messages for the whole "
+ "content of the above message."
+ )
+ eerror(textwrap.wrap(msg, 70))
+ return 1
+
+ if symlink_collisions:
+ # Symlink collisions need to be distinguished from other types
+ # of collisions, in order to avoid confusion (see bug #409359).
+ msg = _(
+ "Package '%s' has one or more collisions "
+ "between symlinks and directories, which is explicitly "
+ "forbidden by PMS section 13.4 (see bug #326685):"
+ ) % (self.settings.mycpv,)
+ msg = textwrap.wrap(msg, 70)
+ msg.append("")
+ for f in symlink_collisions:
+ msg.append("\t%s" % os.path.join(destroot, f.lstrip(os.path.sep)))
+ msg.append("")
+ self._elog("eerror", "preinst", msg)
+
+ if collisions:
+ collision_protect = "collision-protect" in self.settings.features
+ protect_owned = "protect-owned" in self.settings.features
+ msg = _(
+ "This package will overwrite one or more files that"
+ " may belong to other packages (see list below)."
+ )
+ if not (collision_protect or protect_owned):
+ msg += _(
+ ' Add either "collision-protect" or'
+ ' "protect-owned" to FEATURES in'
+ " make.conf if you would like the merge to abort"
+ " in cases like this. See the make.conf man page for"
+ " more information about these features."
+ )
+ if self.settings.get("PORTAGE_QUIET") != "1":
+ msg += _(
+ " You can use a command such as"
+ " `portageq owners / <filename>` to identify the"
+ " installed package that owns a file. If portageq"
+ " reports that only one package owns a file then do NOT"
+ " file a bug report. A bug report is only useful if it"
+ " identifies at least two or more packages that are known"
+ " to install the same file(s)."
+ " If a collision occurs and you"
+ " can not explain where the file came from then you"
+ " should simply ignore the collision since there is not"
+ " enough information to determine if a real problem"
+ " exists. Please do NOT file a bug report at"
+ " https://bugs.gentoo.org/ unless you report exactly which"
+ " two packages install the same file(s). See"
+ " https://wiki.gentoo.org/wiki/Knowledge_Base:Blockers"
+ " for tips on how to solve the problem. And once again,"
+ " please do NOT file a bug report unless you have"
+ " completely understood the above message."
+ )
+
+ self.settings["EBUILD_PHASE"] = "preinst"
+ from textwrap import wrap
+
+ msg = wrap(msg, 70)
+ if collision_protect:
+ msg.append("")
+ msg.append(_("package %s NOT merged") % self.settings.mycpv)
+ msg.append("")
+ msg.append(_("Detected file collision(s):"))
+ msg.append("")
+
+ for f in collisions:
+ msg.append("\t%s" % os.path.join(destroot, f.lstrip(os.path.sep)))
+
+ eerror(msg)
+
+ owners = None
+ if collision_protect or protect_owned or symlink_collisions:
+ msg = []
+ msg.append("")
+ msg.append(
+ _("Searching all installed" " packages for file collisions...")
+ )
+ msg.append("")
+ msg.append(_("Press Ctrl-C to Stop"))
+ msg.append("")
+ eerror(msg)
+
+ if len(collisions) > 20:
+ # get_owners is slow for large numbers of files, so
+ # don't look them all up.
+ collisions = collisions[:20]
+
+ pkg_info_strs = {}
+ self.lockdb()
+ try:
+ owners = self.vartree.dbapi._owners.get_owners(collisions)
+ self.vartree.dbapi.flush_cache()
+
+ for pkg in owners:
+ pkg = self.vartree.dbapi._pkg_str(pkg.mycpv, None)
+ pkg_info_str = "%s%s%s" % (pkg, _slot_separator, pkg.slot)
+ if pkg.repo != _unknown_repo:
+ pkg_info_str += "%s%s" % (_repo_separator, pkg.repo)
+ pkg_info_strs[pkg] = pkg_info_str
+
+ finally:
+ self.unlockdb()
+
+ for pkg, owned_files in owners.items():
+ msg = []
+ msg.append(pkg_info_strs[pkg.mycpv])
+ for f in sorted(owned_files):
+ msg.append(
+ "\t%s" % os.path.join(destroot, f.lstrip(os.path.sep))
+ )
+ msg.append("")
+ eerror(msg)
+
+ if not owners:
+ eerror(
+ [_("None of the installed" " packages claim the file(s)."), ""]
+ )
+
+ symlink_abort_msg = _(
+ "Package '%s' NOT merged since it has "
+ "one or more collisions between symlinks and directories, "
+ "which is explicitly forbidden by PMS section 13.4 "
+ "(see bug #326685)."
+ )
+
+ # The explanation about the collision and how to solve
+ # it may not be visible via a scrollback buffer, especially
+ # if the number of file collisions is large. Therefore,
+ # show a summary at the end.
+ abort = False
+ if symlink_collisions:
+ abort = True
+ msg = symlink_abort_msg % (self.settings.mycpv,)
+ elif collision_protect:
+ abort = True
+ msg = (
+ _("Package '%s' NOT merged due to file collisions.")
+ % self.settings.mycpv
+ )
+ elif protect_owned and owners:
+ abort = True
+ msg = (
+ _("Package '%s' NOT merged due to file collisions.")
+ % self.settings.mycpv
+ )
+ else:
+ msg = (
+ _("Package '%s' merged despite file collisions.")
+ % self.settings.mycpv
+ )
+ msg += _(
+ " If necessary, refer to your elog "
+ "messages for the whole content of the above message."
+ )
+ eerror(wrap(msg, 70))
+
+ if abort:
+ return 1
+
+ # The merge process may move files out of the image directory,
+ # which causes invalidation of the .installed flag.
+ try:
+ os.unlink(
+ os.path.join(os.path.dirname(normalize_path(srcroot)), ".installed")
+ )
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+
+ self.dbdir = self.dbtmpdir
+ self.delete()
+ ensure_dirs(self.dbtmpdir)
+
+ downgrade = False
+ if (
+ self._installed_instance is not None
+ and vercmp(self.mycpv.version, self._installed_instance.mycpv.version) < 0
+ ):
+ downgrade = True
+
+ if self._installed_instance is not None:
+ rval = self._pre_merge_backup(self._installed_instance, downgrade)
+ if rval != os.EX_OK:
+ showMessage(
+ _("!!! FAILED preinst: ") + "quickpkg: %s\n" % rval,
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return rval
+
+ # run preinst script
+ showMessage(
+ _(">>> Merging %(cpv)s to %(destroot)s\n")
+ % {"cpv": self.mycpv, "destroot": destroot}
+ )
+ phase = EbuildPhase(
+ background=False,
+ phase="preinst",
+ scheduler=self._scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ a = phase.wait()
+
+ # XXX: Decide how to handle failures here.
+ if a != os.EX_OK:
+ showMessage(
+ _("!!! FAILED preinst: ") + str(a) + "\n",
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return a
+
+ # copy "info" files (like SLOT, CFLAGS, etc.) into the database
+ for x in os.listdir(inforoot):
+ self.copyfile(inforoot + "/" + x)
+
+ # write local package counter for recording
+ if counter is None:
+ counter = self.vartree.dbapi.counter_tick(mycpv=self.mycpv)
+ with io.open(
+ _unicode_encode(
+ os.path.join(self.dbtmpdir, "COUNTER"),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="w",
+ encoding=_encodings["repo.content"],
+ errors="backslashreplace",
+ ) as f:
+ f.write("%s" % counter)
+
+ self.updateprotect()
+
+ # if we have a file containing previously-merged config file md5sums, grab it.
+ self.vartree.dbapi._fs_lock()
+ try:
+ # This prunes any libraries from the registry that no longer
+ # exist on disk, in case they have been manually removed.
+ # This has to be done prior to merge, since after merge it
+ # is non-trivial to distinguish these files from files
+ # that have just been merged.
+ plib_registry = self.vartree.dbapi._plib_registry
+ if plib_registry:
+ plib_registry.lock()
+ try:
+ plib_registry.load()
+ plib_registry.store()
+ finally:
+ plib_registry.unlock()
+
+ # Always behave like --noconfmem is enabled for downgrades
+ # so that people who don't know about this option are less
+ # likely to get confused when doing upgrade/downgrade cycles.
+ cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
+ if "NOCONFMEM" in self.settings or downgrade:
+ cfgfiledict["IGNORE"] = 1
+ else:
+ cfgfiledict["IGNORE"] = 0
+
+ rval = self._merge_contents(srcroot, destroot, cfgfiledict)
+ if rval != os.EX_OK:
+ return rval
+ finally:
+ self.vartree.dbapi._fs_unlock()
+
+ # These caches are populated during collision-protect and the data
+ # they contain is now invalid. It's very important to invalidate
+ # the contents_inodes cache so that FEATURES=unmerge-orphans
+ # doesn't unmerge anything that belongs to this package that has
+ # just been merged.
+ for dblnk in others_in_slot:
+ dblnk._clear_contents_cache()
+ self._clear_contents_cache()
+
+ linkmap = self.vartree.dbapi._linkmap
+ plib_registry = self.vartree.dbapi._plib_registry
+ # We initialize preserve_paths to an empty set rather
+ # than None here because it plays an important role
+ # in prune_plib_registry logic by serving to indicate
+ # that we have a replacement for a package that's
+ # being unmerged.
+
+ preserve_paths = set()
+ needed = None
+ if not (self._linkmap_broken or linkmap is None or plib_registry is None):
+ self.vartree.dbapi._fs_lock()
+ plib_registry.lock()
+ try:
+ plib_registry.load()
+ needed = os.path.join(inforoot, linkmap._needed_aux_key)
+ self._linkmap_rebuild(include_file=needed)
+
+ # Preserve old libs if they are still in use
+ # TODO: Handle cases where the previous instance
+ # has already been uninstalled but it still has some
+ # preserved libraries in the registry that we may
+ # want to preserve here.
+ preserve_paths = self._find_libs_to_preserve()
+ finally:
+ plib_registry.unlock()
+ self.vartree.dbapi._fs_unlock()
+
+ if preserve_paths:
+ self._add_preserve_libs_to_contents(preserve_paths)
+
+ # If portage is reinstalling itself, remove the old
+ # version now since we want to use the temporary
+ # PORTAGE_BIN_PATH that will be removed when we return.
+ reinstall_self = False
+ if self.myroot == "/" and match_from_list(PORTAGE_PACKAGE_ATOM, [self.mycpv]):
+ reinstall_self = True
+
+ emerge_log = self._emerge_log
+
+ # If we have any preserved libraries then autoclean
+ # is forced so that preserve-libs logic doesn't have
+ # to account for the additional complexity of the
+ # AUTOCLEAN=no mode.
+ autoclean = self.settings.get("AUTOCLEAN", "yes") == "yes" or preserve_paths
+
+ if autoclean:
+ emerge_log(_(" >>> AUTOCLEAN: %s") % (slot_atom,))
+
+ others_in_slot.append(self) # self has just been merged
+ for dblnk in list(others_in_slot):
+ if dblnk is self:
+ continue
+ if not (autoclean or dblnk.mycpv == self.mycpv or reinstall_self):
+ continue
+ showMessage(_(">>> Safely unmerging already-installed instance...\n"))
+ emerge_log(_(" === Unmerging... (%s)") % (dblnk.mycpv,))
+ others_in_slot.remove(dblnk) # dblnk will unmerge itself now
+ dblnk._linkmap_broken = self._linkmap_broken
+ dblnk.settings["REPLACED_BY_VERSION"] = portage.versions.cpv_getversion(
+ self.mycpv
+ )
+ dblnk.settings.backup_changes("REPLACED_BY_VERSION")
+ unmerge_rval = dblnk.unmerge(
+ ldpath_mtimes=prev_mtimes,
+ others_in_slot=others_in_slot,
+ needed=needed,
+ preserve_paths=preserve_paths,
+ )
+ dblnk.settings.pop("REPLACED_BY_VERSION", None)
+
+ if unmerge_rval == os.EX_OK:
+ emerge_log(_(" >>> unmerge success: %s") % (dblnk.mycpv,))
+ else:
+ emerge_log(_(" !!! unmerge FAILURE: %s") % (dblnk.mycpv,))
+
+ self.lockdb()
+ try:
+ # TODO: Check status and abort if necessary.
+ dblnk.delete()
+ finally:
+ self.unlockdb()
+ showMessage(_(">>> Original instance of package unmerged safely.\n"))
+
+ if len(others_in_slot) > 1:
+ showMessage(
+ colorize("WARN", _("WARNING:"))
+ + _(
+ " AUTOCLEAN is disabled. This can cause serious"
+ " problems due to overlapping packages.\n"
+ ),
+ level=logging.WARN,
+ noiselevel=-1,
+ )
+
+ # We hold both directory locks.
+ self.dbdir = self.dbpkgdir
+ self.lockdb()
+ try:
+ self.delete()
+ _movefile(self.dbtmpdir, self.dbpkgdir, mysettings=self.settings)
+ self._merged_path(self.dbpkgdir, os.lstat(self.dbpkgdir))
+ self.vartree.dbapi._cache_delta.recordEvent(
+ "add", self.mycpv, slot, counter
+ )
+ finally:
+ self.unlockdb()
+
+ # Check for file collisions with blocking packages
+ # and remove any colliding files from their CONTENTS
+ # since they now belong to this package.
+ self._clear_contents_cache()
+ contents = self.getcontents()
+ destroot_len = len(destroot) - 1
+ self.lockdb()
+ try:
+ for blocker in blockers:
+ self.vartree.dbapi.removeFromContents(
+ blocker, iter(contents), relative_paths=False
+ )
+ finally:
+ self.unlockdb()
+
+ plib_registry = self.vartree.dbapi._plib_registry
+ if plib_registry:
+ self.vartree.dbapi._fs_lock()
+ plib_registry.lock()
+ try:
+ plib_registry.load()
+
+ if preserve_paths:
+ # keep track of the libs we preserved
+ plib_registry.register(
+ self.mycpv, slot, counter, sorted(preserve_paths)
+ )
+
+ # Unregister any preserved libs that this package has overwritten
+ # and update the contents of the packages that owned them.
+ plib_dict = plib_registry.getPreservedLibs()
+ for cpv, paths in plib_collisions.items():
+ if cpv not in plib_dict:
+ continue
+ has_vdb_entry = False
+ if cpv != self.mycpv:
+ # If we've replaced another instance with the
+ # same cpv then the vdb entry no longer belongs
+ # to it, so we'll have to get the slot and counter
+ # from plib_registry._data instead.
+ self.vartree.dbapi.lock()
+ try:
+ try:
+ slot = self.vartree.dbapi._pkg_str(cpv, None).slot
+ counter = self.vartree.dbapi.cpv_counter(cpv)
+ except (KeyError, InvalidData):
+ pass
+ else:
+ has_vdb_entry = True
+ self.vartree.dbapi.removeFromContents(cpv, paths)
+ finally:
+ self.vartree.dbapi.unlock()
+
+ if not has_vdb_entry:
+ # It's possible for previously unmerged packages
+ # to have preserved libs in the registry, so try
+ # to retrieve the slot and counter from there.
+ has_registry_entry = False
+ for plib_cps, (
+ plib_cpv,
+ plib_counter,
+ plib_paths,
+ ) in plib_registry._data.items():
+ if plib_cpv != cpv:
+ continue
+ try:
+ cp, slot = plib_cps.split(":", 1)
+ except ValueError:
+ continue
+ counter = plib_counter
+ has_registry_entry = True
+ break
+
+ if not has_registry_entry:
+ continue
+
+ remaining = [f for f in plib_dict[cpv] if f not in paths]
+ plib_registry.register(cpv, slot, counter, remaining)
+
+ plib_registry.store()
+ finally:
+ plib_registry.unlock()
+ self.vartree.dbapi._fs_unlock()
+
+ self.vartree.dbapi._add(self)
+ contents = self.getcontents()
+
+ # do postinst script
+ self.settings["PORTAGE_UPDATE_ENV"] = os.path.join(
+ self.dbpkgdir, "environment.bz2"
+ )
+ self.settings.backup_changes("PORTAGE_UPDATE_ENV")
+ try:
+ phase = EbuildPhase(
+ background=False,
+ phase="postinst",
+ scheduler=self._scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ a = phase.wait()
+ if a == os.EX_OK:
+ showMessage(_(">>> %s merged.\n") % self.mycpv)
+ finally:
+ self.settings.pop("PORTAGE_UPDATE_ENV", None)
+
+ if a != os.EX_OK:
+ # It's stupid to bail out here, so keep going regardless of
+ # phase return code.
+ self._postinst_failure = True
+ self._elog(
+ "eerror",
+ "postinst",
+ [
+ _("FAILED postinst: %s") % (a,),
+ ],
+ )
+
+ # update environment settings, library paths. DO NOT change symlinks.
+ env_update(
+ target_root=self.settings["ROOT"],
+ prev_mtimes=prev_mtimes,
+ contents=contents,
+ env=self.settings,
+ writemsg_level=self._display_merge,
+ vardbapi=self.vartree.dbapi,
+ )
+
+ # For gcc upgrades, preserved libs have to be removed after the
+ # the library path has been updated.
+ self._prune_plib_registry()
+ self._post_merge_sync()
+
+ return os.EX_OK
+
+ def _new_backup_path(self, p):
+ """
+ The works for any type path, such as a regular file, symlink,
+ or directory. The parent directory is assumed to exist.
+ The returned filename is of the form p + '.backup.' + x, where
+ x guarantees that the returned path does not exist yet.
+ """
+ os = _os_merge
+
+ x = -1
+ while True:
+ x += 1
+ backup_p = "%s.backup.%04d" % (p, x)
+ try:
+ os.lstat(backup_p)
+ except OSError:
+ break
+
+ return backup_p
+
+ def _merge_contents(self, srcroot, destroot, cfgfiledict):
+
+ cfgfiledict_orig = cfgfiledict.copy()
+
+ # open CONTENTS file (possibly overwriting old one) for recording
+ # Use atomic_ofstream for automatic coercion of raw bytes to
+ # unicode, in order to prevent TypeError when writing raw bytes
+ # to TextIOWrapper with python2.
+ outfile = atomic_ofstream(
+ _unicode_encode(
+ os.path.join(self.dbtmpdir, "CONTENTS"),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="w",
+ encoding=_encodings["repo.content"],
+ errors="backslashreplace",
+ )
+
+ # Don't bump mtimes on merge since some application require
+ # preservation of timestamps. This means that the unmerge phase must
+ # check to see if file belongs to an installed instance in the same
+ # slot.
+ mymtime = None
+
+ # set umask to 0 for merging; back up umask, save old one in prevmask (since this is a global change)
+ prevmask = os.umask(0)
+ secondhand = []
+
+ # we do a first merge; this will recurse through all files in our srcroot but also build up a
+ # "second hand" of symlinks to merge later
+ if self.mergeme(
+ srcroot,
+ destroot,
+ outfile,
+ secondhand,
+ self.settings["EPREFIX"].lstrip(os.sep),
+ cfgfiledict,
+ mymtime,
+ ):
+ return 1
+
+ # now, it's time for dealing our second hand; we'll loop until we can't merge anymore. The rest are
+ # broken symlinks. We'll merge them too.
+ lastlen = 0
+ while len(secondhand) and len(secondhand) != lastlen:
+ # clear the thirdhand. Anything from our second hand that
+ # couldn't get merged will be added to thirdhand.
+
+ thirdhand = []
+ if self.mergeme(
+ srcroot, destroot, outfile, thirdhand, secondhand, cfgfiledict, mymtime
+ ):
+ return 1
+
+ # swap hands
+ lastlen = len(secondhand)
+
+ # our thirdhand now becomes our secondhand. It's ok to throw
+ # away secondhand since thirdhand contains all the stuff that
+ # couldn't be merged.
+ secondhand = thirdhand
+
+ if len(secondhand):
+ # force merge of remaining symlinks (broken or circular; oh well)
+ if self.mergeme(
+ srcroot, destroot, outfile, None, secondhand, cfgfiledict, mymtime
+ ):
+ return 1
+
+ # restore umask
+ os.umask(prevmask)
+
+ # if we opened it, close it
+ outfile.flush()
+ outfile.close()
+
+ # write out our collection of md5sums
+ if cfgfiledict != cfgfiledict_orig:
+ cfgfiledict.pop("IGNORE", None)
+ try:
+ writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
+ except InvalidLocation:
+ self.settings._init_dirs()
+ writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
+
+ return os.EX_OK
+
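The two-pass structure above (merge everything once, keep retrying the deferred "second hand" until a pass makes no progress, then force whatever is left) is the core of _merge_contents(). A minimal sketch of that fixed-point loop, assuming a hypothetical merge_one(item, deferred) callback that defers items by appending to deferred and returns False only on hard failure:

def merge_with_retry(items, merge_one):
    # Schematic version of the "second hand" loop: keep retrying the
    # deferred items until a pass makes no progress, then force-merge
    # the remainder (broken or circular symlinks in the real code).
    secondhand = list(items)
    lastlen = 0
    while secondhand and len(secondhand) != lastlen:
        thirdhand = []
        for item in secondhand:
            if merge_one(item, thirdhand) is False:
                return 1
        lastlen = len(secondhand)
        secondhand = thirdhand
    for item in secondhand:
        if merge_one(item, None) is False:
            return 1
    return 0

# Toy example: a symlink can only be merged once its target is present.
present = {"y"}

def merge_symlink(item, deferred):
    name, target = item
    if target not in present and deferred is not None:
        deferred.append(item)  # target not merged yet, try again later
        return None
    present.add(name)
    return True

print(merge_with_retry([("z", "x"), ("x", "y")], merge_symlink))  # prints 0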
+ def mergeme(
+ self,
+ srcroot,
+ destroot,
+ outfile,
+ secondhand,
+ stufftomerge,
+ cfgfiledict,
+ thismtime,
+ ):
+ """
+
+ This function handles actual merging of the package contents to the livefs.
+ It also handles config protection.
+
+ @param srcroot: Where are we copying files from (usually ${D})
+ @type srcroot: String (Path)
+ @param destroot: Typically ${ROOT}
+ @type destroot: String (Path)
+ @param outfile: File to log operations to
+ @type outfile: File Object
+ @param secondhand: A list of items to merge in pass two (usually
+ symlinks that point to non-existent files and may get merged later)
+ @type secondhand: List
+ @param stufftomerge: Either a directory to merge, or a list of items.
+ @type stufftomerge: String or List
+ @param cfgfiledict: { File:mtime } mapping for config_protected files
+ @type cfgfiledict: Dictionary
+ @param thismtime: None or new mtime for merged files (expressed in seconds
+ in Python <3.3 and nanoseconds in Python >=3.3)
+ @type thismtime: None or Int
+ @rtype: None or Boolean
+ @return:
+ 1. True on failure
+ 2. None otherwise
+
+ """
+
+ showMessage = self._display_merge
+ writemsg = self._display_merge
+
+ os = _os_merge
+ sep = os.sep
+ join = os.path.join
+ srcroot = normalize_path(srcroot).rstrip(sep) + sep
+ destroot = normalize_path(destroot).rstrip(sep) + sep
+ calc_prelink = "prelink-checksums" in self.settings.features
+
+ protect_if_modified = (
+ "config-protect-if-modified" in self.settings.features
+ and self._installed_instance is not None
+ )
+
+ # this is supposed to merge a list of files. There will be 2 forms of argument passing.
+ if isinstance(stufftomerge, str):
+ # A directory is specified. Figure out protection paths, listdir() it and process it.
+ mergelist = [
+ join(stufftomerge, child)
+ for child in os.listdir(join(srcroot, stufftomerge))
+ ]
+ else:
+ mergelist = stufftomerge[:]
+
+ while mergelist:
+
+ relative_path = mergelist.pop()
+ mysrc = join(srcroot, relative_path)
+ mydest = join(destroot, relative_path)
+ # myrealdest is mydest without the $ROOT prefix (makes a difference if ROOT!="/")
+ myrealdest = join(sep, relative_path)
+ # stat file once, test using S_* macros many times (faster that way)
+ mystat = os.lstat(mysrc)
+ mymode = mystat[stat.ST_MODE]
+ mymd5 = None
+ myto = None
+
+ mymtime = mystat.st_mtime_ns
+
+ if stat.S_ISREG(mymode):
+ mymd5 = perform_md5(mysrc, calc_prelink=calc_prelink)
+ elif stat.S_ISLNK(mymode):
+ # The file name of mysrc and the actual file that it points to
+ # will have earlier been forcefully converted to the 'merge'
+ # encoding if necessary, but the content of the symbolic link
+ # may need to be forcefully converted here.
+ myto = _os.readlink(
+ _unicode_encode(
+ mysrc, encoding=_encodings["merge"], errors="strict"
+ )
+ )
+ try:
+ myto = _unicode_decode(
+ myto, encoding=_encodings["merge"], errors="strict"
+ )
+ except UnicodeDecodeError:
+ myto = _unicode_decode(
+ myto, encoding=_encodings["merge"], errors="replace"
+ )
+ myto = _unicode_encode(
+ myto, encoding="ascii", errors="backslashreplace"
+ )
+ myto = _unicode_decode(
+ myto, encoding=_encodings["merge"], errors="replace"
+ )
+ os.unlink(mysrc)
+ os.symlink(myto, mysrc)
+
+ mymd5 = md5(_unicode_encode(myto)).hexdigest()
+
+ protected = False
+ if stat.S_ISLNK(mymode) or stat.S_ISREG(mymode):
+ protected = self.isprotected(mydest)
+
+ if (
+ stat.S_ISREG(mymode)
+ and mystat.st_size == 0
+ and os.path.basename(mydest).startswith(".keep")
+ ):
+ protected = False
+
+ destmd5 = None
+ mydest_link = None
+ # handy variables; mydest is the target object on the live filesystem;
+ # mysrc is the source object in the temporary install dir
+ try:
+ mydstat = os.lstat(mydest)
+ mydmode = mydstat.st_mode
+ if protected:
+ if stat.S_ISLNK(mydmode):
+ # Read symlink target as bytes, in case the
+ # target path has a bad encoding.
+ mydest_link = _os.readlink(
+ _unicode_encode(
+ mydest, encoding=_encodings["merge"], errors="strict"
+ )
+ )
+ mydest_link = _unicode_decode(
+ mydest_link, encoding=_encodings["merge"], errors="replace"
+ )
+
+ # For protection of symlinks, the md5
+ # of the link target path string is used
+ # for cfgfiledict (symlinks are
+ # protected since bug #485598).
+ destmd5 = md5(_unicode_encode(mydest_link)).hexdigest()
+
+ elif stat.S_ISREG(mydmode):
+ destmd5 = perform_md5(mydest, calc_prelink=calc_prelink)
+ except (FileNotFound, OSError) as e:
+ if isinstance(e, OSError) and e.errno != errno.ENOENT:
+ raise
+ # dest file doesn't exist
+ mydstat = None
+ mydmode = None
+ mydest_link = None
+ destmd5 = None
+
+ moveme = True
+ if protected:
+ mydest, protected, moveme = self._protect(
+ cfgfiledict,
+ protect_if_modified,
+ mymd5,
+ myto,
+ mydest,
+ myrealdest,
+ mydmode,
+ destmd5,
+ mydest_link,
+ )
+
+ zing = "!!!"
+ if not moveme:
+ # confmem rejected this update
+ zing = "---"
+
+ if stat.S_ISLNK(mymode):
+ # we are merging a symbolic link
+ # Pass in the symlink target in order to bypass the
+ # os.readlink() call inside abssymlink(), since that
+ # call is unsafe if the merge encoding is not ascii
+ # or utf_8 (see bug #382021).
+ myabsto = abssymlink(mysrc, target=myto)
+
+ if myabsto.startswith(srcroot):
+ myabsto = myabsto[len(srcroot) :]
+ myabsto = myabsto.lstrip(sep)
+ if self.settings and self.settings["D"]:
+ if myto.startswith(self.settings["D"]):
+ myto = myto[len(self.settings["D"]) - 1 :]
+ # myrealto contains the path of the real file to which this symlink points.
+ # we can simply test for existence of this file to see if the target has been merged yet
+ myrealto = normalize_path(os.path.join(destroot, myabsto))
+ if mydmode is not None and stat.S_ISDIR(mydmode):
+ if not protected:
+ # we can't merge a symlink over a directory
+ newdest = self._new_backup_path(mydest)
+ msg = []
+ msg.append("")
+ msg.append(
+ _("Installation of a symlink is blocked by a directory:")
+ )
+ msg.append(" '%s'" % mydest)
+ msg.append(
+ _("This symlink will be merged with a different name:")
+ )
+ msg.append(" '%s'" % newdest)
+ msg.append("")
+ self._eerror("preinst", msg)
+ mydest = newdest
+
+ # if secondhand is None it means we're operating in "force" mode and should not create a second hand.
+ if (secondhand != None) and (not os.path.exists(myrealto)):
+ # either the target directory doesn't exist yet or the target file doesn't exist -- or
+ # the target is a broken symlink. We will add this file to our "second hand" and merge
+ # it later.
+ secondhand.append(mysrc[len(srcroot) :])
+ continue
+ # unlinking no longer necessary; "movefile" will overwrite symlinks atomically and correctly
+ if moveme:
+ zing = ">>>"
+ mymtime = movefile(
+ mysrc,
+ mydest,
+ newmtime=thismtime,
+ sstat=mystat,
+ mysettings=self.settings,
+ encoding=_encodings["merge"],
+ )
+
+ try:
+ self._merged_path(mydest, os.lstat(mydest))
+ except OSError:
+ pass
+
+ if mymtime != None:
+ # Use lexists, since if the target happens to be a broken
+ # symlink then that should trigger an independent warning.
+ if not (
+ os.path.lexists(myrealto)
+ or os.path.lexists(join(srcroot, myabsto))
+ ):
+ self._eqawarn(
+ "preinst",
+ [
+ _(
+ "QA Notice: Symbolic link /%s points to /%s which does not exist."
+ )
+ % (relative_path, myabsto)
+ ],
+ )
+
+ showMessage("%s %s -> %s\n" % (zing, mydest, myto))
+ outfile.write(
+ self._format_contents_line(
+ node_type="sym",
+ abs_path=myrealdest,
+ symlink_target=myto,
+ mtime_ns=mymtime,
+ )
+ )
+ else:
+ showMessage(
+ _("!!! Failed to move file.\n"),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ showMessage(
+ "!!! %s -> %s\n" % (mydest, myto),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return 1
+ elif stat.S_ISDIR(mymode):
+ # we are merging a directory
+ if mydmode != None:
+ # destination exists
+
+ if bsd_chflags:
+ # Save then clear flags on dest.
+ dflags = mydstat.st_flags
+ if dflags != 0:
+ bsd_chflags.lchflags(mydest, 0)
+
+ if not stat.S_ISLNK(mydmode) and not os.access(mydest, os.W_OK):
+ pkgstuff = pkgsplit(self.pkg)
+ writemsg(
+ _("\n!!! Cannot write to '%s'.\n") % mydest, noiselevel=-1
+ )
+ writemsg(
+ _(
+ "!!! Please check permissions and directories for broken symlinks.\n"
+ )
+ )
+ writemsg(
+ _(
+ "!!! You may start the merge process again by using ebuild:\n"
+ )
+ )
+ writemsg(
+ "!!! ebuild "
+ + self.settings["PORTDIR"]
+ + "/"
+ + self.cat
+ + "/"
+ + pkgstuff[0]
+ + "/"
+ + self.pkg
+ + ".ebuild merge\n"
+ )
+ writemsg(_("!!! And finish by running this: env-update\n\n"))
+ return 1
+
+ if stat.S_ISDIR(mydmode) or (
+ stat.S_ISLNK(mydmode) and os.path.isdir(mydest)
+ ):
+ # a symlink to an existing directory will work for us; keep it:
+ showMessage("--- %s/\n" % mydest)
+ if bsd_chflags:
+ bsd_chflags.lchflags(mydest, dflags)
+ else:
+ # a non-directory and non-symlink-to-directory. Won't work for us. Move out of the way.
+ backup_dest = self._new_backup_path(mydest)
+ msg = []
+ msg.append("")
+ msg.append(
+ _("Installation of a directory is blocked by a file:")
+ )
+ msg.append(" '%s'" % mydest)
+ msg.append(_("This file will be renamed to a different name:"))
+ msg.append(" '%s'" % backup_dest)
+ msg.append("")
+ self._eerror("preinst", msg)
+ if (
+ movefile(
+ mydest,
+ backup_dest,
+ mysettings=self.settings,
+ encoding=_encodings["merge"],
+ )
+ is None
+ ):
+ return 1
+ showMessage(
+ _("bak %s %s.backup\n") % (mydest, mydest),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ # now create our directory
+ try:
+ if self.settings.selinux_enabled():
+ _selinux_merge.mkdir(mydest, mysrc)
+ else:
+ os.mkdir(mydest)
+ except OSError as e:
+ # Error handling should be equivalent to
+ # portage.util.ensure_dirs() for cases
+ # like bug #187518.
+ if e.errno in (errno.EEXIST,):
+ pass
+ elif os.path.isdir(mydest):
+ pass
+ else:
+ raise
+ del e
+
+ if bsd_chflags:
+ bsd_chflags.lchflags(mydest, dflags)
+ os.chmod(mydest, mystat[0])
+ os.chown(mydest, mystat[4], mystat[5])
+ showMessage(">>> %s/\n" % mydest)
+ else:
+ try:
+ # destination doesn't exist
+ if self.settings.selinux_enabled():
+ _selinux_merge.mkdir(mydest, mysrc)
+ else:
+ os.mkdir(mydest)
+ except OSError as e:
+ # Error handling should be equivalent to
+ # portage.util.ensure_dirs() for cases
+ # like bug #187518.
+ if e.errno in (errno.EEXIST,):
+ pass
+ elif os.path.isdir(mydest):
+ pass
+ else:
+ raise
+ del e
+ os.chmod(mydest, mystat[0])
+ os.chown(mydest, mystat[4], mystat[5])
+ showMessage(">>> %s/\n" % mydest)
+
+ try:
+ self._merged_path(mydest, os.lstat(mydest))
+ except OSError:
+ pass
+
+ outfile.write(
+ self._format_contents_line(node_type="dir", abs_path=myrealdest)
+ )
+ # recurse and merge this directory
+ mergelist.extend(
+ join(relative_path, child)
+ for child in os.listdir(join(srcroot, relative_path))
+ )
+
+ elif stat.S_ISREG(mymode):
+ # we are merging a regular file
+ if not protected and mydmode is not None and stat.S_ISDIR(mydmode):
+ # install of destination is blocked by an existing directory with the same name
+ newdest = self._new_backup_path(mydest)
+ msg = []
+ msg.append("")
+ msg.append(
+ _("Installation of a regular file is blocked by a directory:")
+ )
+ msg.append(" '%s'" % mydest)
+ msg.append(_("This file will be merged with a different name:"))
+ msg.append(" '%s'" % newdest)
+ msg.append("")
+ self._eerror("preinst", msg)
+ mydest = newdest
+
+ # whether config protected or not, we merge the new file the
+ # same way, unless moveme=0 (blocked by a directory)
+ if moveme:
+ # Create hardlinks only for source files that already exist
+ # as hardlinks (having identical st_dev and st_ino).
+ hardlink_key = (mystat.st_dev, mystat.st_ino)
+
+ hardlink_candidates = self._hardlink_merge_map.get(hardlink_key)
+ if hardlink_candidates is None:
+ hardlink_candidates = []
+ self._hardlink_merge_map[hardlink_key] = hardlink_candidates
+
+ mymtime = movefile(
+ mysrc,
+ mydest,
+ newmtime=thismtime,
+ sstat=mystat,
+ mysettings=self.settings,
+ hardlink_candidates=hardlink_candidates,
+ encoding=_encodings["merge"],
+ )
+ if mymtime is None:
+ return 1
+ hardlink_candidates.append(mydest)
+ zing = ">>>"
+
+ try:
+ self._merged_path(mydest, os.lstat(mydest))
+ except OSError:
+ pass
+
+ if mymtime != None:
+ outfile.write(
+ self._format_contents_line(
+ node_type="obj",
+ abs_path=myrealdest,
+ md5_digest=mymd5,
+ mtime_ns=mymtime,
+ )
+ )
+ showMessage("%s %s\n" % (zing, mydest))
+ else:
+ # we are merging a fifo or device node
+ zing = "!!!"
+ if mydmode is None:
+ # destination doesn't exist
+ if (
+ movefile(
+ mysrc,
+ mydest,
+ newmtime=thismtime,
+ sstat=mystat,
+ mysettings=self.settings,
+ encoding=_encodings["merge"],
+ )
+ is not None
+ ):
+ zing = ">>>"
+
+ try:
+ self._merged_path(mydest, os.lstat(mydest))
+ except OSError:
+ pass
+
+ else:
+ return 1
+ if stat.S_ISFIFO(mymode):
+ outfile.write(
+ self._format_contents_line(node_type="fif", abs_path=myrealdest)
+ )
+ else:
+ outfile.write(
+ self._format_contents_line(node_type="dev", abs_path=myrealdest)
+ )
+ showMessage(zing + " " + mydest + "\n")
+
+ def _protect(
+ self,
+ cfgfiledict,
+ protect_if_modified,
+ src_md5,
+ src_link,
+ dest,
+ dest_real,
+ dest_mode,
+ dest_md5,
+ dest_link,
+ ):
+
+ move_me = True
+ protected = True
+ force = False
+ k = False
+ if self._installed_instance is not None:
+ k = self._installed_instance._match_contents(dest_real)
+ if k is not False:
+ if dest_mode is None:
+ # If the file doesn't exist, then it may
+ # have been deleted or renamed by the
+ # admin. Therefore, force the file to be
+ # merged with a ._cfg name, so that the
+ # admin will be prompted for this update
+ # (see bug #523684).
+ force = True
+
+ elif protect_if_modified:
+ data = self._installed_instance.getcontents()[k]
+ if data[0] == "obj" and data[2] == dest_md5:
+ protected = False
+ elif data[0] == "sym" and data[2] == dest_link:
+ protected = False
+
+ if protected and dest_mode is not None:
+ # we have a protection path; enable config file management.
+ if src_md5 == dest_md5:
+ protected = False
+
+ elif src_md5 == cfgfiledict.get(dest_real, [None])[0]:
+ # An identical update has previously been
+ # merged. Skip it unless the user has chosen
+ # --noconfmem.
+ move_me = protected = bool(cfgfiledict["IGNORE"])
+
+ if (
+ protected
+ and (dest_link is not None or src_link is not None)
+ and dest_link != src_link
+ ):
+ # If either one is a symlink, and they are not
+ # identical symlinks, then force config protection.
+ force = True
+
+ if move_me:
+ # Merging a new file, so update confmem.
+ cfgfiledict[dest_real] = [src_md5]
+ elif dest_md5 == cfgfiledict.get(dest_real, [None])[0]:
+ # A previously remembered update has been
+ # accepted, so it is removed from confmem.
+ del cfgfiledict[dest_real]
+
+ if protected and move_me:
+ dest = new_protect_filename(
+ dest, newmd5=(dest_link or src_md5), force=force
+ )
+
+ return dest, protected, move_me
+
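When _protect() keeps a file protected but still installable, new_protect_filename() places the update next to the destination under a "._cfgNNNN_<name>" sibling so that tools like etc-update or dispatch-conf can merge it later. A simplified sketch of just that naming convention (the real helper also reuses an existing ._cfg file whose md5 already matches, which is omitted here):

import os

def next_cfg_name(dest):
    # Pick the first free "._cfgNNNN_" name next to the destination.
    dirname, basename = os.path.split(dest)
    n = 0
    while True:
        candidate = os.path.join(dirname, "._cfg%04d_%s" % (n, basename))
        if not os.path.lexists(candidate):
            return candidate
        n += 1

# e.g. next_cfg_name("/etc/foo.conf") -> "/etc/._cfg0000_foo.conf"
# when no pending update for foo.conf exists yet.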
+ def _format_contents_line(
+ self, node_type, abs_path, md5_digest=None, symlink_target=None, mtime_ns=None
+ ):
+ fields = [node_type, abs_path]
+ if md5_digest is not None:
+ fields.append(md5_digest)
+ elif symlink_target is not None:
+ fields.append("-> {}".format(symlink_target))
+ if mtime_ns is not None:
+ fields.append(str(mtime_ns // 1000000000))
+ return "{}\n".format(" ".join(fields))
+
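_format_contents_line() produces the lines stored in the installed package's CONTENTS file, with mtimes rounded down to whole seconds. A self-contained copy of the formatter and a few example entries (the digest shown is just an illustrative value):

def format_contents_line(node_type, abs_path, md5_digest=None,
                         symlink_target=None, mtime_ns=None):
    # Standalone copy of dblink._format_contents_line() for illustration.
    fields = [node_type, abs_path]
    if md5_digest is not None:
        fields.append(md5_digest)
    elif symlink_target is not None:
        fields.append("-> {}".format(symlink_target))
    if mtime_ns is not None:
        fields.append(str(mtime_ns // 1000000000))
    return "{}\n".format(" ".join(fields))

print(format_contents_line("dir", "/usr/bin"), end="")
print(format_contents_line("obj", "/usr/bin/foo",
                           md5_digest="d41d8cd98f00b204e9800998ecf8427e",
                           mtime_ns=1700000000123456789), end="")
print(format_contents_line("sym", "/usr/bin/bar", symlink_target="foo",
                           mtime_ns=1700000000123456789), end="")
# dir /usr/bin
# obj /usr/bin/foo d41d8cd98f00b204e9800998ecf8427e 1700000000
# sym /usr/bin/bar -> foo 1700000000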
+ def _merged_path(self, path, lstatobj, exists=True):
+ previous_path = self._device_path_map.get(lstatobj.st_dev)
+ if (
+ previous_path is None
+ or previous_path is False
+ or (exists and len(path) < len(previous_path))
+ ):
+ if exists:
+ self._device_path_map[lstatobj.st_dev] = path
+ else:
+ # This entry is used to indicate that we've unmerged
+ # a file from this device, and later, this entry is
+ # replaced by a parent directory.
+ self._device_path_map[lstatobj.st_dev] = False
+
+ def _post_merge_sync(self):
+ """
+ Call this after merge or unmerge, in order to sync relevant files to
+ disk and avoid data-loss in the event of a power failure. This method
+ does nothing if FEATURES=merge-sync is disabled.
+ """
+ if not self._device_path_map or "merge-sync" not in self.settings.features:
+ return
+
+ returncode = None
+ if platform.system() == "Linux":
+
+ paths = []
+ for path in self._device_path_map.values():
+ if path is not False:
+ paths.append(path)
+ paths = tuple(paths)
+
+ proc = SyncfsProcess(
+ paths=paths, scheduler=(self._scheduler or asyncio._safe_loop())
+ )
+ proc.start()
+ returncode = proc.wait()
+
+ if returncode is None or returncode != os.EX_OK:
+ try:
+ proc = subprocess.Popen(["sync"])
+ except EnvironmentError:
+ pass
+ else:
+ proc.wait()
+
+ @_slot_locked
+ def merge(
+ self,
+ mergeroot,
+ inforoot,
+ myroot=None,
+ myebuild=None,
+ cleanup=0,
+ mydbapi=None,
+ prev_mtimes=None,
+ counter=None,
+ ):
+ """
+ @param myroot: ignored, self._eroot is used instead
+ """
+ myroot = None
+ retval = -1
+ parallel_install = "parallel-install" in self.settings.features
+ if not parallel_install:
+ self.lockdb()
+ self.vartree.dbapi._bump_mtime(self.mycpv)
+ if self._scheduler is None:
+ self._scheduler = SchedulerInterface(asyncio._safe_loop())
+ try:
+ retval = self.treewalk(
+ mergeroot,
+ myroot,
+ inforoot,
+ myebuild,
+ cleanup=cleanup,
+ mydbapi=mydbapi,
+ prev_mtimes=prev_mtimes,
+ counter=counter,
+ )
+
+ # If PORTAGE_BUILDDIR doesn't exist, then it probably means
+ # fail-clean is enabled, and the success/die hooks have
+ # already been called by EbuildPhase.
+ if os.path.isdir(self.settings["PORTAGE_BUILDDIR"]):
+
+ if retval == os.EX_OK:
+ phase = "success_hooks"
+ else:
+ phase = "die_hooks"
+
+ ebuild_phase = MiscFunctionsProcess(
+ background=False,
+ commands=[phase],
+ scheduler=self._scheduler,
+ settings=self.settings,
+ )
+ ebuild_phase.start()
+ ebuild_phase.wait()
+ self._elog_process()
+
+ if "noclean" not in self.settings.features and (
+ retval == os.EX_OK or "fail-clean" in self.settings.features
+ ):
+ if myebuild is None:
+ myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
+
+ doebuild_environment(
+ myebuild, "clean", settings=self.settings, db=mydbapi
+ )
+ phase = EbuildPhase(
+ background=False,
+ phase="clean",
+ scheduler=self._scheduler,
+ settings=self.settings,
+ )
+ phase.start()
+ phase.wait()
+ finally:
+ self.settings.pop("REPLACING_VERSIONS", None)
+ if self.vartree.dbapi._linkmap is None:
+ # preserve-libs is entirely disabled
+ pass
+ else:
+ self.vartree.dbapi._linkmap._clear_cache()
+ self.vartree.dbapi._bump_mtime(self.mycpv)
+ if not parallel_install:
+ self.unlockdb()
+
+ if retval == os.EX_OK and self._postinst_failure:
+ retval = portage.const.RETURNCODE_POSTINST_FAILURE
+
+ return retval
+
+ def getstring(self, name):
+ "returns contents of a file with whitespace converted to spaces"
+ if not os.path.exists(self.dbdir + "/" + name):
+ return ""
+ with io.open(
+ _unicode_encode(
+ os.path.join(self.dbdir, name),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ mydata = f.read().split()
+ return " ".join(mydata)
+
+ def copyfile(self, fname):
+ shutil.copyfile(fname, self.dbdir + "/" + os.path.basename(fname))
+
+ def getfile(self, fname):
+ if not os.path.exists(self.dbdir + "/" + fname):
+ return ""
+ with io.open(
+ _unicode_encode(
+ os.path.join(self.dbdir, fname),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ return f.read()
+
+ def setfile(self, fname, data):
+ kwargs = {}
+ if fname == "environment.bz2" or not isinstance(data, str):
+ kwargs["mode"] = "wb"
+ else:
+ kwargs["mode"] = "w"
+ kwargs["encoding"] = _encodings["repo.content"]
+ write_atomic(os.path.join(self.dbdir, fname), data, **kwargs)
+
+ def getelements(self, ename):
+ if not os.path.exists(self.dbdir + "/" + ename):
+ return []
+ with io.open(
+ _unicode_encode(
+ os.path.join(self.dbdir, ename),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="r",
+ encoding=_encodings["repo.content"],
+ errors="replace",
+ ) as f:
+ mylines = f.readlines()
+ myreturn = []
+ for x in mylines:
+ for y in x[:-1].split():
+ myreturn.append(y)
+ return myreturn
+
+ def setelements(self, mylist, ename):
+ with io.open(
+ _unicode_encode(
+ os.path.join(self.dbdir, ename),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="w",
+ encoding=_encodings["repo.content"],
+ errors="backslashreplace",
+ ) as f:
+ for x in mylist:
+ f.write("%s\n" % x)
+
+ def isregular(self):
+ "Is this a regular package (does it have a CATEGORY file? A dblink can be virtual *and* regular)"
+ return os.path.exists(os.path.join(self.dbdir, "CATEGORY"))
+
+ def _pre_merge_backup(self, backup_dblink, downgrade):
+
+ if "unmerge-backup" in self.settings.features or (
+ downgrade and "downgrade-backup" in self.settings.features
+ ):
+ return self._quickpkg_dblink(backup_dblink, False, None)
+
+ return os.EX_OK
+
+ def _pre_unmerge_backup(self, background):
+
+ if "unmerge-backup" in self.settings.features:
+ logfile = None
+ if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
+ logfile = self.settings.get("PORTAGE_LOG_FILE")
+ return self._quickpkg_dblink(self, background, logfile)
+
+ return os.EX_OK
+
+ def _quickpkg_dblink(self, backup_dblink, background, logfile):
+
+ build_time = backup_dblink.getfile("BUILD_TIME")
+ try:
+ build_time = int(build_time.strip())
+ except ValueError:
+ build_time = 0
+
+ trees = QueryCommand.get_db()[self.settings["EROOT"]]
+ bintree = trees["bintree"]
+
+ for binpkg in reversed(bintree.dbapi.match("={}".format(backup_dblink.mycpv))):
+ if binpkg.build_time == build_time:
+ return os.EX_OK
+
+ self.lockdb()
+ try:
+
+ if not backup_dblink.exists():
+ # It got unmerged by a concurrent process.
+ return os.EX_OK
+
+ # Call quickpkg so that QUICKPKG_DEFAULT_OPTS and related settings are honored.
+ quickpkg_binary = os.path.join(
+ self.settings["PORTAGE_BIN_PATH"], "quickpkg"
+ )
+
+ if not os.access(quickpkg_binary, os.X_OK):
+ # If not running from the source tree, use PATH.
+ quickpkg_binary = find_binary("quickpkg")
+ if quickpkg_binary is None:
+ self._display_merge(
+ _("%s: command not found") % "quickpkg",
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ return 127
+
+ # Let quickpkg inherit the global vartree config's env.
+ env = dict(self.vartree.settings.items())
+ env["__PORTAGE_INHERIT_VARDB_LOCK"] = "1"
+
+ pythonpath = [x for x in env.get("PYTHONPATH", "").split(":") if x]
+ if not pythonpath or not os.path.samefile(pythonpath[0], portage._pym_path):
+ pythonpath.insert(0, portage._pym_path)
+ env["PYTHONPATH"] = ":".join(pythonpath)
+
+ quickpkg_proc = SpawnProcess(
+ args=[
+ portage._python_interpreter,
+ quickpkg_binary,
+ "=%s" % (backup_dblink.mycpv,),
+ ],
+ background=background,
+ env=env,
+ scheduler=self._scheduler,
+ logfile=logfile,
+ )
+ quickpkg_proc.start()
+
+ return quickpkg_proc.wait()
+
+ finally:
+ self.unlockdb()
+
+
+ def merge(
+ mycat,
+ mypkg,
+ pkgloc,
+ infloc,
+ myroot=None,
+ settings=None,
+ myebuild=None,
+ mytree=None,
+ mydbapi=None,
+ vartree=None,
+ prev_mtimes=None,
+ blockers=None,
+ scheduler=None,
+ fd_pipes=None,
+ ):
+ """
+ @param myroot: ignored, settings['EROOT'] is used instead
+ """
+ myroot = None
+ if settings is None:
+ raise TypeError("settings argument is required")
+ if not os.access(settings["EROOT"], os.W_OK):
+ writemsg(
+ _("Permission denied: access('%s', W_OK)\n") % settings["EROOT"],
+ noiselevel=-1,
+ )
+ return errno.EACCES
+ background = settings.get("PORTAGE_BACKGROUND") == "1"
+ merge_task = MergeProcess(
+ mycat=mycat,
+ mypkg=mypkg,
+ settings=settings,
+ treetype=mytree,
+ vartree=vartree,
+ scheduler=(scheduler or asyncio._safe_loop()),
+ background=background,
+ blockers=blockers,
+ pkgloc=pkgloc,
+ infloc=infloc,
+ myebuild=myebuild,
+ mydbapi=mydbapi,
+ prev_mtimes=prev_mtimes,
+ logfile=settings.get("PORTAGE_LOG_FILE"),
+ fd_pipes=fd_pipes,
+ )
+ merge_task.start()
+ retcode = merge_task.wait()
+ return retcode
+
+
+ def unmerge(
+ cat,
+ pkg,
+ myroot=None,
+ settings=None,
+ mytrimworld=None,
+ vartree=None,
+ ldpath_mtimes=None,
+ scheduler=None,
+ ):
+ """
+ @param myroot: ignored, settings['EROOT'] is used instead
+ @param mytrimworld: ignored
+ """
+ myroot = None
+ if settings is None:
+ raise TypeError("settings argument is required")
+ mylink = dblink(
+ cat,
+ pkg,
+ settings=settings,
+ treetype="vartree",
+ vartree=vartree,
+ scheduler=scheduler,
+ )
+ vartree = mylink.vartree
+ parallel_install = "parallel-install" in settings.features
+ if not parallel_install:
+ mylink.lockdb()
+ try:
+ if mylink.exists():
+ retval = mylink.unmerge(ldpath_mtimes=ldpath_mtimes)
+ if retval == os.EX_OK:
+ mylink.lockdb()
+ try:
+ mylink.delete()
+ finally:
+ mylink.unlockdb()
+ return retval
+ return os.EX_OK
+ finally:
+ if vartree.dbapi._linkmap is None:
+ # preserve-libs is entirely disabled
+ pass
+ else:
+ vartree.dbapi._linkmap._clear_cache()
+ if not parallel_install:
+ mylink.unlockdb()
+
def write_contents(contents, root, f):
- """
- Write contents to any file like object. The file will be left open.
- """
- root_len = len(root) - 1
- for filename in sorted(contents):
- entry_data = contents[filename]
- entry_type = entry_data[0]
- relative_filename = filename[root_len:]
- if entry_type == "obj":
- entry_type, mtime, md5sum = entry_data
- line = "%s %s %s %s\n" % \
- (entry_type, relative_filename, md5sum, mtime)
- elif entry_type == "sym":
- entry_type, mtime, link = entry_data
- line = "%s %s -> %s %s\n" % \
- (entry_type, relative_filename, link, mtime)
- else: # dir, dev, fif
- line = "%s %s\n" % (entry_type, relative_filename)
- f.write(line)
-
- def tar_contents(contents, root, tar, protect=None, onProgress=None,
- xattrs=False):
- os = _os_merge
- encoding = _encodings['merge']
-
- try:
- for x in contents:
- _unicode_encode(x,
- encoding=_encodings['merge'],
- errors='strict')
- except UnicodeEncodeError:
- # The package appears to have been merged with a
- # different value of sys.getfilesystemencoding(),
- # so fall back to utf_8 if appropriate.
- try:
- for x in contents:
- _unicode_encode(x,
- encoding=_encodings['fs'],
- errors='strict')
- except UnicodeEncodeError:
- pass
- else:
- os = portage.os
- encoding = _encodings['fs']
-
- tar.encoding = encoding
- root = normalize_path(root).rstrip(os.path.sep) + os.path.sep
- id_strings = {}
- maxval = len(contents)
- curval = 0
- if onProgress:
- onProgress(maxval, 0)
- paths = list(contents)
- paths.sort()
- for path in paths:
- curval += 1
- try:
- lst = os.lstat(path)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- if onProgress:
- onProgress(maxval, curval)
- continue
- contents_type = contents[path][0]
- if path.startswith(root):
- arcname = "./" + path[len(root):]
- else:
- raise ValueError("invalid root argument: '%s'" % root)
- live_path = path
- if 'dir' == contents_type and \
- not stat.S_ISDIR(lst.st_mode) and \
- os.path.isdir(live_path):
- # Even though this was a directory in the original ${D}, it exists
- # as a symlink to a directory in the live filesystem. It must be
- # recorded as a real directory in the tar file to ensure that tar
- # can properly extract it's children.
- live_path = os.path.realpath(live_path)
- lst = os.lstat(live_path)
-
- # Since os.lstat() inside TarFile.gettarinfo() can trigger a
- # UnicodeEncodeError when python has something other than utf_8
- # return from sys.getfilesystemencoding() (as in bug #388773),
- # we implement the needed functionality here, using the result
- # of our successful lstat call. An alternative to this would be
- # to pass in the fileobj argument to TarFile.gettarinfo(), so
- # that it could use fstat instead of lstat. However, that would
- # have the unwanted effect of dereferencing symlinks.
-
- tarinfo = tar.tarinfo()
- tarinfo.name = arcname
- tarinfo.mode = lst.st_mode
- tarinfo.uid = lst.st_uid
- tarinfo.gid = lst.st_gid
- tarinfo.size = 0
- tarinfo.mtime = lst.st_mtime
- tarinfo.linkname = ""
- if stat.S_ISREG(lst.st_mode):
- inode = (lst.st_ino, lst.st_dev)
- if (lst.st_nlink > 1 and
- inode in tar.inodes and
- arcname != tar.inodes[inode]):
- tarinfo.type = tarfile.LNKTYPE
- tarinfo.linkname = tar.inodes[inode]
- else:
- tar.inodes[inode] = arcname
- tarinfo.type = tarfile.REGTYPE
- tarinfo.size = lst.st_size
- elif stat.S_ISDIR(lst.st_mode):
- tarinfo.type = tarfile.DIRTYPE
- elif stat.S_ISLNK(lst.st_mode):
- tarinfo.type = tarfile.SYMTYPE
- tarinfo.linkname = os.readlink(live_path)
- else:
- continue
- try:
- tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0]
- except KeyError:
- pass
- try:
- tarinfo.gname = grp.getgrgid(tarinfo.gid)[0]
- except KeyError:
- pass
-
- if stat.S_ISREG(lst.st_mode):
- if protect and protect(path):
- # Create an empty file as a place holder in order to avoid
- # potential collision-protect issues.
- f = tempfile.TemporaryFile()
- f.write(_unicode_encode(
- "# empty file because --include-config=n " + \
- "when `quickpkg` was used\n"))
- f.flush()
- f.seek(0)
- tarinfo.size = os.fstat(f.fileno()).st_size
- tar.addfile(tarinfo, f)
- f.close()
- else:
- path_bytes = _unicode_encode(path,
- encoding=encoding,
- errors='strict')
-
- if xattrs:
- # Compatible with GNU tar, which saves the xattrs
- # under the SCHILY.xattr namespace.
- for k in xattr.list(path_bytes):
- tarinfo.pax_headers['SCHILY.xattr.' +
- _unicode_decode(k)] = _unicode_decode(
- xattr.get(path_bytes, _unicode_encode(k)))
-
- with open(path_bytes, 'rb') as f:
- tar.addfile(tarinfo, f)
-
- else:
- tar.addfile(tarinfo)
- if onProgress:
- onProgress(maxval, curval)
+ """
+ Write contents to any file-like object. The file will be left open.
+ """
+ root_len = len(root) - 1
+ for filename in sorted(contents):
+ entry_data = contents[filename]
+ entry_type = entry_data[0]
+ relative_filename = filename[root_len:]
+ if entry_type == "obj":
+ entry_type, mtime, md5sum = entry_data
+ line = "%s %s %s %s\n" % (entry_type, relative_filename, md5sum, mtime)
+ elif entry_type == "sym":
+ entry_type, mtime, link = entry_data
+ line = "%s %s -> %s %s\n" % (entry_type, relative_filename, link, mtime)
+ else: # dir, dev, fif
+ line = "%s %s\n" % (entry_type, relative_filename)
+ f.write(line)
+
+
+ def tar_contents(contents, root, tar, protect=None, onProgress=None, xattrs=False):
+ os = _os_merge
+ encoding = _encodings["merge"]
+
+ try:
+ for x in contents:
+ _unicode_encode(x, encoding=_encodings["merge"], errors="strict")
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ for x in contents:
+ _unicode_encode(x, encoding=_encodings["fs"], errors="strict")
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+ encoding = _encodings["fs"]
+
+ tar.encoding = encoding
+ root = normalize_path(root).rstrip(os.path.sep) + os.path.sep
+ id_strings = {}
+ maxval = len(contents)
+ curval = 0
+ if onProgress:
+ onProgress(maxval, 0)
+ paths = list(contents)
+ paths.sort()
+ for path in paths:
+ curval += 1
+ try:
+ lst = os.lstat(path)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ if onProgress:
+ onProgress(maxval, curval)
+ continue
+ contents_type = contents[path][0]
+ if path.startswith(root):
+ arcname = "./" + path[len(root) :]
+ else:
+ raise ValueError("invalid root argument: '%s'" % root)
+ live_path = path
+ if (
+ "dir" == contents_type
+ and not stat.S_ISDIR(lst.st_mode)
+ and os.path.isdir(live_path)
+ ):
+ # Even though this was a directory in the original ${D}, it exists
+ # as a symlink to a directory in the live filesystem. It must be
+ # recorded as a real directory in the tar file to ensure that tar
+ # can properly extract its children.
+ live_path = os.path.realpath(live_path)
+ lst = os.lstat(live_path)
+
+ # Since os.lstat() inside TarFile.gettarinfo() can trigger a
+ # UnicodeEncodeError when sys.getfilesystemencoding() returns
+ # something other than utf_8 (as in bug #388773),
+ # we implement the needed functionality here, using the result
+ # of our successful lstat call. An alternative to this would be
+ # to pass in the fileobj argument to TarFile.gettarinfo(), so
+ # that it could use fstat instead of lstat. However, that would
+ # have the unwanted effect of dereferencing symlinks.
+
+ tarinfo = tar.tarinfo()
+ tarinfo.name = arcname
+ tarinfo.mode = lst.st_mode
+ tarinfo.uid = lst.st_uid
+ tarinfo.gid = lst.st_gid
+ tarinfo.size = 0
+ tarinfo.mtime = lst.st_mtime
+ tarinfo.linkname = ""
+ if stat.S_ISREG(lst.st_mode):
+ inode = (lst.st_ino, lst.st_dev)
+ if (
+ lst.st_nlink > 1
+ and inode in tar.inodes
+ and arcname != tar.inodes[inode]
+ ):
+ tarinfo.type = tarfile.LNKTYPE
+ tarinfo.linkname = tar.inodes[inode]
+ else:
+ tar.inodes[inode] = arcname
+ tarinfo.type = tarfile.REGTYPE
+ tarinfo.size = lst.st_size
+ elif stat.S_ISDIR(lst.st_mode):
+ tarinfo.type = tarfile.DIRTYPE
+ elif stat.S_ISLNK(lst.st_mode):
+ tarinfo.type = tarfile.SYMTYPE
+ tarinfo.linkname = os.readlink(live_path)
+ else:
+ continue
+ try:
+ tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0]
+ except KeyError:
+ pass
+ try:
+ tarinfo.gname = grp.getgrgid(tarinfo.gid)[0]
+ except KeyError:
+ pass
+
+ if stat.S_ISREG(lst.st_mode):
+ if protect and protect(path):
+ # Create an empty file as a place holder in order to avoid
+ # potential collision-protect issues.
+ f = tempfile.TemporaryFile()
+ f.write(
+ _unicode_encode(
+ "# empty file because --include-config=n "
+ + "when `quickpkg` was used\n"
+ )
+ )
+ f.flush()
+ f.seek(0)
+ tarinfo.size = os.fstat(f.fileno()).st_size
+ tar.addfile(tarinfo, f)
+ f.close()
+ else:
+ path_bytes = _unicode_encode(path, encoding=encoding, errors="strict")
+
+ if xattrs:
+ # Compatible with GNU tar, which saves the xattrs
+ # under the SCHILY.xattr namespace.
+ for k in xattr.list(path_bytes):
+ tarinfo.pax_headers[
+ "SCHILY.xattr." + _unicode_decode(k)
+ ] = _unicode_decode(xattr.get(path_bytes, _unicode_encode(k)))
+
+ with open(path_bytes, "rb") as f:
+ tar.addfile(tarinfo, f)
+
+ else:
+ tar.addfile(tarinfo)
+ if onProgress:
+ onProgress(maxval, curval)
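The hard-link handling in tar_contents() records the first archive name seen for each (st_ino, st_dev) pair and emits later occurrences as LNKTYPE members. The same idea can be sketched with the standard tarfile module alone; a minimal, self-contained example (add_with_hardlinks is a hypothetical helper; the real code builds TarInfo objects by hand to sidestep the encoding issue described in the comments above):

import os
import stat
import tarfile
import tempfile

def add_with_hardlinks(tar, paths, inodes):
    # First occurrence of an inode becomes a regular member; any later
    # path with the same (st_ino, st_dev) becomes a hard-link member
    # pointing at that first archive name.
    for path in paths:
        st = os.lstat(path)
        info = tar.gettarinfo(path, arcname="./" + os.path.basename(path))
        key = (st.st_ino, st.st_dev)
        if stat.S_ISREG(st.st_mode) and st.st_nlink > 1 and key in inodes:
            info.type = tarfile.LNKTYPE
            info.linkname = inodes[key]
            info.size = 0
            tar.addfile(info)
        else:
            inodes[key] = info.name
            with open(path, "rb") as f:
                tar.addfile(info, f)

with tempfile.TemporaryDirectory() as tmp:
    a = os.path.join(tmp, "a")
    b = os.path.join(tmp, "b")
    with open(a, "w") as f:
        f.write("hello\n")
    os.link(a, b)  # b is a hard link to a
    with tarfile.open(os.path.join(tmp, "pkg.tar"), "w") as tar:
        add_with_hardlinks(tar, [a, b], {})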
diff --cc lib/portage/getbinpkg.py
index 3eb9479f2,6aa8f1de1..aaf0bcf81
--- a/lib/portage/getbinpkg.py
+++ b/lib/portage/getbinpkg.py
@@@ -19,7 -19,6 +19,8 @@@ import socke
import time
import tempfile
import base64
++# PREFIX LOCAL
+from portage.const import CACHE_PATH
import warnings
_all_errors = [NotImplementedError, ValueError, socket.error]
@@@ -348,561 -386,616 +388,618 @@@ def match_in_array(array, prefix="", su
def dir_get_list(baseurl, conn=None):
- """Takes a base url to connect to and read from.
- URI should be in the form <proto>://<site>[:port]<path>
- Connection is used for persistent connection instances."""
-
- warnings.warn("portage.getbinpkg.dir_get_list() is deprecated",
- DeprecationWarning, stacklevel=2)
-
- if not conn:
- keepconnection = 0
- else:
- keepconnection = 1
-
- conn, protocol, address, params, headers = create_conn(baseurl, conn)
-
- listing = None
- if protocol in ["http","https"]:
- if not address.endswith("/"):
- # http servers can return a 400 error here
- # if the address doesn't end with a slash.
- address += "/"
- page, rc, msg = make_http_request(conn, address, params, headers)
-
- if page:
- parser = ParseLinks()
- parser.feed(_unicode_decode(page))
- del page
- listing = parser.get_anchors()
- else:
- import portage.exception
- raise portage.exception.PortageException(
- _("Unable to get listing: %s %s") % (rc,msg))
- elif protocol in ["ftp"]:
- if address[-1] == '/':
- olddir = conn.pwd()
- conn.cwd(address)
- listing = conn.nlst()
- conn.cwd(olddir)
- del olddir
- else:
- listing = conn.nlst(address)
- elif protocol == "sftp":
- listing = conn.listdir(address)
- else:
- raise TypeError(_("Unknown protocol. '%s'") % protocol)
-
- if not keepconnection:
- conn.close()
-
- return listing
+ """Takes a base url to connect to and read from.
+ URI should be in the form <proto>://<site>[:port]<path>
+ Connection is used for persistent connection instances."""
+
+ warnings.warn(
+ "portage.getbinpkg.dir_get_list() is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if not conn:
+ keepconnection = 0
+ else:
+ keepconnection = 1
+
+ conn, protocol, address, params, headers = create_conn(baseurl, conn)
+
+ listing = None
+ if protocol in ["http", "https"]:
+ if not address.endswith("/"):
+ # http servers can return a 400 error here
+ # if the address doesn't end with a slash.
+ address += "/"
+ page, rc, msg = make_http_request(conn, address, params, headers)
+
+ if page:
+ parser = ParseLinks()
+ parser.feed(_unicode_decode(page))
+ del page
+ listing = parser.get_anchors()
+ else:
+ import portage.exception
+
+ raise portage.exception.PortageException(
+ _("Unable to get listing: %s %s") % (rc, msg)
+ )
+ elif protocol in ["ftp"]:
+ if address[-1] == "/":
+ olddir = conn.pwd()
+ conn.cwd(address)
+ listing = conn.nlst()
+ conn.cwd(olddir)
+ del olddir
+ else:
+ listing = conn.nlst(address)
+ elif protocol == "sftp":
+ listing = conn.listdir(address)
+ else:
+ raise TypeError(_("Unknown protocol. '%s'") % protocol)
+
+ if not keepconnection:
+ conn.close()
+
+ return listing
+
def file_get_metadata(baseurl, conn=None, chunk_size=3000):
- """Takes a base url to connect to and read from.
- URI should be in the form <proto>://<site>[:port]<path>
- Connection is used for persistent connection instances."""
-
- warnings.warn("portage.getbinpkg.file_get_metadata() is deprecated",
- DeprecationWarning, stacklevel=2)
-
- if not conn:
- keepconnection = 0
- else:
- keepconnection = 1
-
- conn, protocol, address, params, headers = create_conn(baseurl, conn)
-
- if protocol in ["http","https"]:
- headers["Range"] = "bytes=-%s" % str(chunk_size)
- data, _x, _x = make_http_request(conn, address, params, headers)
- elif protocol in ["ftp"]:
- data, _x, _x = make_ftp_request(conn, address, -chunk_size)
- elif protocol == "sftp":
- f = conn.open(address)
- try:
- f.seek(-chunk_size, 2)
- data = f.read()
- finally:
- f.close()
- else:
- raise TypeError(_("Unknown protocol. '%s'") % protocol)
-
- if data:
- xpaksize = portage.xpak.decodeint(data[-8:-4])
- if (xpaksize + 8) > chunk_size:
- myid = file_get_metadata(baseurl, conn, xpaksize + 8)
- if not keepconnection:
- conn.close()
- return myid
- xpak_data = data[len(data) - (xpaksize + 8):-8]
- del data
-
- myid = portage.xpak.xsplit_mem(xpak_data)
- if not myid:
- myid = None, None
- del xpak_data
- else:
- myid = None, None
-
- if not keepconnection:
- conn.close()
-
- return myid
-
-
- def file_get(baseurl=None, dest=None, conn=None, fcmd=None, filename=None,
- fcmd_vars=None):
- """Takes a base url to connect to and read from.
- URI should be in the form <proto>://[user[:pass]@]<site>[:port]<path>"""
-
- if not fcmd:
-
- warnings.warn("Use of portage.getbinpkg.file_get() without the fcmd "
- "parameter is deprecated", DeprecationWarning, stacklevel=2)
-
- return file_get_lib(baseurl, dest, conn)
-
- variables = {}
-
- if fcmd_vars is not None:
- variables.update(fcmd_vars)
-
- if "DISTDIR" not in variables:
- if dest is None:
- raise portage.exception.MissingParameter(
- _("%s is missing required '%s' key") %
- ("fcmd_vars", "DISTDIR"))
- variables["DISTDIR"] = dest
-
- if "URI" not in variables:
- if baseurl is None:
- raise portage.exception.MissingParameter(
- _("%s is missing required '%s' key") %
- ("fcmd_vars", "URI"))
- variables["URI"] = baseurl
-
- if "FILE" not in variables:
- if filename is None:
- filename = os.path.basename(variables["URI"])
- variables["FILE"] = filename
-
- from portage.util import varexpand
- from portage.process import spawn
- myfetch = portage.util.shlex_split(fcmd)
- myfetch = [varexpand(x, mydict=variables) for x in myfetch]
- fd_pipes = {
- 0: portage._get_stdin().fileno(),
- 1: sys.__stdout__.fileno(),
- 2: sys.__stdout__.fileno()
- }
- sys.__stdout__.flush()
- sys.__stderr__.flush()
- retval = spawn(myfetch, env=os.environ.copy(), fd_pipes=fd_pipes)
- if retval != os.EX_OK:
- sys.stderr.write(_("Fetcher exited with a failure condition.\n"))
- return 0
- return 1
+ """Takes a base url to connect to and read from.
+ URI should be in the form <proto>://<site>[:port]<path>
+ Connection is used for persistent connection instances."""
+
+ warnings.warn(
+ "portage.getbinpkg.file_get_metadata() is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if not conn:
+ keepconnection = 0
+ else:
+ keepconnection = 1
+
+ conn, protocol, address, params, headers = create_conn(baseurl, conn)
+
+ if protocol in ["http", "https"]:
+ headers["Range"] = "bytes=-%s" % str(chunk_size)
+ data, _x, _x = make_http_request(conn, address, params, headers)
+ elif protocol in ["ftp"]:
+ data, _x, _x = make_ftp_request(conn, address, -chunk_size)
+ elif protocol == "sftp":
+ f = conn.open(address)
+ try:
+ f.seek(-chunk_size, 2)
+ data = f.read()
+ finally:
+ f.close()
+ else:
+ raise TypeError(_("Unknown protocol. '%s'") % protocol)
+
+ if data:
+ xpaksize = portage.xpak.decodeint(data[-8:-4])
+ if (xpaksize + 8) > chunk_size:
+ myid = file_get_metadata(baseurl, conn, xpaksize + 8)
+ if not keepconnection:
+ conn.close()
+ return myid
+ xpak_data = data[len(data) - (xpaksize + 8) : -8]
+ del data
+
+ myid = portage.xpak.xsplit_mem(xpak_data)
+ if not myid:
+ myid = None, None
+ del xpak_data
+ else:
+ myid = None, None
+
+ if not keepconnection:
+ conn.close()
+
+ return myid
+
+
+ def file_get(
+ baseurl=None, dest=None, conn=None, fcmd=None, filename=None, fcmd_vars=None
+ ):
+ """Takes a base url to connect to and read from.
+ URI should be in the form <proto>://[user[:pass]@]<site>[:port]<path>"""
+
+ if not fcmd:
+
+ warnings.warn(
+ "Use of portage.getbinpkg.file_get() without the fcmd "
+ "parameter is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ return file_get_lib(baseurl, dest, conn)
+
+ variables = {}
+
+ if fcmd_vars is not None:
+ variables.update(fcmd_vars)
+
+ if "DISTDIR" not in variables:
+ if dest is None:
+ raise portage.exception.MissingParameter(
+ _("%s is missing required '%s' key") % ("fcmd_vars", "DISTDIR")
+ )
+ variables["DISTDIR"] = dest
+
+ if "URI" not in variables:
+ if baseurl is None:
+ raise portage.exception.MissingParameter(
+ _("%s is missing required '%s' key") % ("fcmd_vars", "URI")
+ )
+ variables["URI"] = baseurl
+
+ if "FILE" not in variables:
+ if filename is None:
+ filename = os.path.basename(variables["URI"])
+ variables["FILE"] = filename
+
+ from portage.util import varexpand
+ from portage.process import spawn
+
+ myfetch = portage.util.shlex_split(fcmd)
+ myfetch = [varexpand(x, mydict=variables) for x in myfetch]
+ fd_pipes = {
+ 0: portage._get_stdin().fileno(),
+ 1: sys.__stdout__.fileno(),
+ 2: sys.__stdout__.fileno(),
+ }
+ sys.__stdout__.flush()
+ sys.__stderr__.flush()
+ retval = spawn(myfetch, env=os.environ.copy(), fd_pipes=fd_pipes)
+ if retval != os.EX_OK:
+ sys.stderr.write(_("Fetcher exited with a failure condition.\n"))
+ return 0
+ return 1
+
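file_get() above hands fetching off to the configured fetch command after expanding the DISTDIR, URI and FILE placeholders. A rough, self-contained sketch of that expansion step (expand_fetch_command is a hypothetical helper; the real code uses portage.util.shlex_split and varexpand before spawning the command):

import shlex

def expand_fetch_command(fcmd, variables):
    # Split the command template, then substitute ${VAR} / $VAR tokens.
    args = shlex.split(fcmd)
    out = []
    for arg in args:
        for key, val in variables.items():
            arg = arg.replace("${%s}" % key, val).replace("$%s" % key, val)
        out.append(arg)
    return out

print(expand_fetch_command(
    'wget -O "${DISTDIR}/${FILE}" "${URI}"',
    {"DISTDIR": "/tmp/distdir",
     "URI": "https://example.org/pkg-1.0.tbz2",
     "FILE": "pkg-1.0.tbz2"}))
# ['wget', '-O', '/tmp/distdir/pkg-1.0.tbz2', 'https://example.org/pkg-1.0.tbz2']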
def file_get_lib(baseurl, dest, conn=None):
- """Takes a base url to connect to and read from.
- URI should be in the form <proto>://<site>[:port]<path>
- Connection is used for persistent connection instances."""
-
- warnings.warn("portage.getbinpkg.file_get_lib() is deprecated",
- DeprecationWarning, stacklevel=2)
-
- if not conn:
- keepconnection = 0
- else:
- keepconnection = 1
-
- conn, protocol, address, params, headers = create_conn(baseurl, conn)
-
- sys.stderr.write("Fetching '" + str(os.path.basename(address)) + "'\n")
- if protocol in ["http", "https"]:
- data, rc, _msg = make_http_request(conn, address, params, headers, dest=dest)
- elif protocol in ["ftp"]:
- data, rc, _msg = make_ftp_request(conn, address, dest=dest)
- elif protocol == "sftp":
- rc = 0
- try:
- f = conn.open(address)
- except SystemExit:
- raise
- except Exception:
- rc = 1
- else:
- try:
- if dest:
- bufsize = 8192
- while True:
- data = f.read(bufsize)
- if not data:
- break
- dest.write(data)
- finally:
- f.close()
- else:
- raise TypeError(_("Unknown protocol. '%s'") % protocol)
-
- if not keepconnection:
- conn.close()
-
- return rc
-
-
- def dir_get_metadata(baseurl, conn=None, chunk_size=3000, verbose=1, usingcache=1, makepickle=None):
-
- warnings.warn("portage.getbinpkg.dir_get_metadata() is deprecated",
- DeprecationWarning, stacklevel=2)
-
- if not conn:
- keepconnection = 0
- else:
- keepconnection = 1
-
- cache_path = CACHE_PATH
- metadatafilename = os.path.join(cache_path, 'remote_metadata.pickle')
-
- if makepickle is None:
- makepickle = CACHE_PATH+"/metadata.idx.most_recent"
-
- try:
- conn = create_conn(baseurl, conn)[0]
- except _all_errors as e:
- # ftplib.FTP(host) can raise errors like this:
- # socket.error: (111, 'Connection refused')
- sys.stderr.write("!!! %s\n" % (e,))
- return {}
-
- out = sys.stdout
- try:
- metadatafile = open(_unicode_encode(metadatafilename,
- encoding=_encodings['fs'], errors='strict'), 'rb')
- mypickle = pickle.Unpickler(metadatafile)
- try:
- mypickle.find_global = None
- except AttributeError:
- # TODO: If py3k, override Unpickler.find_class().
- pass
- metadata = mypickle.load()
- out.write(_("Loaded metadata pickle.\n"))
- out.flush()
- metadatafile.close()
- except (SystemExit, KeyboardInterrupt):
- raise
- except Exception:
- metadata = {}
- if baseurl not in metadata:
- metadata[baseurl] = {}
- if "indexname" not in metadata[baseurl]:
- metadata[baseurl]["indexname"] = ""
- if "timestamp" not in metadata[baseurl]:
- metadata[baseurl]["timestamp"] = 0
- if "unmodified" not in metadata[baseurl]:
- metadata[baseurl]["unmodified"] = 0
- if "data" not in metadata[baseurl]:
- metadata[baseurl]["data"] = {}
-
- if not os.access(cache_path, os.W_OK):
- sys.stderr.write(_("!!! Unable to write binary metadata to disk!\n"))
- sys.stderr.write(_("!!! Permission denied: '%s'\n") % cache_path)
- return metadata[baseurl]["data"]
-
- import portage.exception
- try:
- filelist = dir_get_list(baseurl, conn)
- except portage.exception.PortageException as e:
- sys.stderr.write(_("!!! Error connecting to '%s'.\n") %
- _hide_url_passwd(baseurl))
- sys.stderr.write("!!! %s\n" % str(e))
- del e
- return metadata[baseurl]["data"]
- tbz2list = match_in_array(filelist, suffix=".tbz2")
- metalist = match_in_array(filelist, prefix="metadata.idx")
- del filelist
-
- # Determine if our metadata file is current.
- metalist.sort()
- metalist.reverse() # makes the order new-to-old.
- for mfile in metalist:
- if usingcache and \
- ((metadata[baseurl]["indexname"] != mfile) or \
- (metadata[baseurl]["timestamp"] < int(time.time() - (60 * 60 * 24)))):
- # Try to download new cache until we succeed on one.
- data = ""
- for trynum in [1, 2, 3]:
- mytempfile = tempfile.TemporaryFile()
- try:
- file_get(baseurl + "/" + mfile, mytempfile, conn)
- if mytempfile.tell() > len(data):
- mytempfile.seek(0)
- data = mytempfile.read()
- except ValueError as e:
- sys.stderr.write("--- %s\n" % str(e))
- if trynum < 3:
- sys.stderr.write(_("Retrying...\n"))
- sys.stderr.flush()
- mytempfile.close()
- continue
- if match_in_array([mfile], suffix=".gz"):
- out.write("gzip'd\n")
- out.flush()
- try:
- import gzip
- mytempfile.seek(0)
- gzindex = gzip.GzipFile(mfile[:-3], 'rb', 9, mytempfile)
- data = gzindex.read()
- except SystemExit as e:
- raise
- except Exception as e:
- mytempfile.close()
- sys.stderr.write(_("!!! Failed to use gzip: ") + str(e) + "\n")
- sys.stderr.flush()
- mytempfile.close()
- try:
- metadata[baseurl]["data"] = pickle.loads(data)
- del data
- metadata[baseurl]["indexname"] = mfile
- metadata[baseurl]["timestamp"] = int(time.time())
- metadata[baseurl]["modified"] = 0 # It's not, right after download.
- out.write(_("Pickle loaded.\n"))
- out.flush()
- break
- except SystemExit as e:
- raise
- except Exception as e:
- sys.stderr.write(_("!!! Failed to read data from index: ") + str(mfile) + "\n")
- sys.stderr.write("!!! %s" % str(e))
- sys.stderr.flush()
- try:
- metadatafile = open(_unicode_encode(metadatafilename,
- encoding=_encodings['fs'], errors='strict'), 'wb')
- pickle.dump(metadata, metadatafile, protocol=2)
- metadatafile.close()
- except SystemExit as e:
- raise
- except Exception as e:
- sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
- sys.stderr.write("!!! %s\n" % str(e))
- sys.stderr.flush()
- break
- # We may have metadata... now we run through the tbz2 list and check.
-
- class CacheStats:
- from time import time
- def __init__(self, out):
- self.misses = 0
- self.hits = 0
- self.last_update = 0
- self.out = out
- self.min_display_latency = 0.2
- def update(self):
- cur_time = self.time()
- if cur_time - self.last_update >= self.min_display_latency:
- self.last_update = cur_time
- self.display()
- def display(self):
- self.out.write("\r"+colorize("WARN",
- _("cache miss: '") + str(self.misses) + "'") + \
- " --- " + colorize("GOOD", _("cache hit: '") + str(self.hits) + "'"))
- self.out.flush()
-
- cache_stats = CacheStats(out)
- have_tty = os.environ.get('TERM') != 'dumb' and out.isatty()
- if have_tty:
- cache_stats.display()
- binpkg_filenames = set()
- for x in tbz2list:
- x = os.path.basename(x)
- binpkg_filenames.add(x)
- if x not in metadata[baseurl]["data"]:
- cache_stats.misses += 1
- if have_tty:
- cache_stats.update()
- metadata[baseurl]["modified"] = 1
- myid = None
- for _x in range(3):
- try:
- myid = file_get_metadata(
- "/".join((baseurl.rstrip("/"), x.lstrip("/"))),
- conn, chunk_size)
- break
- except http_client_BadStatusLine:
- # Sometimes this error is thrown from conn.getresponse() in
- # make_http_request(). The docstring for this error in
- # httplib.py says "Presumably, the server closed the
- # connection before sending a valid response".
- conn = create_conn(baseurl)[0]
- except http_client_ResponseNotReady:
- # With some http servers this error is known to be thrown
- # from conn.getresponse() in make_http_request() when the
- # remote file does not have appropriate read permissions.
- # Maybe it's possible to recover from this exception in
- # cases though, so retry.
- conn = create_conn(baseurl)[0]
-
- if myid and myid[0]:
- metadata[baseurl]["data"][x] = make_metadata_dict(myid)
- elif verbose:
- sys.stderr.write(colorize("BAD",
- _("!!! Failed to retrieve metadata on: ")) + str(x) + "\n")
- sys.stderr.flush()
- else:
- cache_stats.hits += 1
- if have_tty:
- cache_stats.update()
- cache_stats.display()
- # Cleanse stale cache for files that don't exist on the server anymore.
- stale_cache = set(metadata[baseurl]["data"]).difference(binpkg_filenames)
- if stale_cache:
- for x in stale_cache:
- del metadata[baseurl]["data"][x]
- metadata[baseurl]["modified"] = 1
- del stale_cache
- del binpkg_filenames
- out.write("\n")
- out.flush()
-
- try:
- if "modified" in metadata[baseurl] and metadata[baseurl]["modified"]:
- metadata[baseurl]["timestamp"] = int(time.time())
- metadatafile = open(_unicode_encode(metadatafilename,
- encoding=_encodings['fs'], errors='strict'), 'wb')
- pickle.dump(metadata, metadatafile, protocol=2)
- metadatafile.close()
- if makepickle:
- metadatafile = open(_unicode_encode(makepickle,
- encoding=_encodings['fs'], errors='strict'), 'wb')
- pickle.dump(metadata[baseurl]["data"], metadatafile, protocol=2)
- metadatafile.close()
- except SystemExit as e:
- raise
- except Exception as e:
- sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
- sys.stderr.write("!!! "+str(e)+"\n")
- sys.stderr.flush()
-
- if not keepconnection:
- conn.close()
-
- return metadata[baseurl]["data"]
+ """Takes a base url to connect to and read from.
+ URI should be in the form <proto>://<site>[:port]<path>
+ Connection is used for persistent connection instances."""
+
+ warnings.warn(
+ "portage.getbinpkg.file_get_lib() is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if not conn:
+ keepconnection = 0
+ else:
+ keepconnection = 1
+
+ conn, protocol, address, params, headers = create_conn(baseurl, conn)
+
+ sys.stderr.write("Fetching '" + str(os.path.basename(address)) + "'\n")
+ if protocol in ["http", "https"]:
+ data, rc, _msg = make_http_request(conn, address, params, headers, dest=dest)
+ elif protocol in ["ftp"]:
+ data, rc, _msg = make_ftp_request(conn, address, dest=dest)
+ elif protocol == "sftp":
+ rc = 0
+ try:
+ f = conn.open(address)
+ except SystemExit:
+ raise
+ except Exception:
+ rc = 1
+ else:
+ try:
+ if dest:
+ bufsize = 8192
+ while True:
+ data = f.read(bufsize)
+ if not data:
+ break
+ dest.write(data)
+ finally:
+ f.close()
+ else:
+ raise TypeError(_("Unknown protocol. '%s'") % protocol)
+
+ if not keepconnection:
+ conn.close()
+
+ return rc
+
+
+ def dir_get_metadata(
+ baseurl, conn=None, chunk_size=3000, verbose=1, usingcache=1, makepickle=None
+ ):
+
+ warnings.warn(
+ "portage.getbinpkg.dir_get_metadata() is deprecated",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if not conn:
+ keepconnection = 0
+ else:
+ keepconnection = 1
+
- cache_path = "/var/cache/edb"
++ # PREFIX LOCAL
++ cache_path = CACHE_PATH
+ metadatafilename = os.path.join(cache_path, "remote_metadata.pickle")
+
+ if makepickle is None:
- makepickle = "/var/cache/edb/metadata.idx.most_recent"
++ # PREFIX LOCAL
++ makepickle = CACHE_PATH + "/metadata.idx.most_recent"
+
+ try:
+ conn = create_conn(baseurl, conn)[0]
+ except _all_errors as e:
+ # ftplib.FTP(host) can raise errors like this:
+ # socket.error: (111, 'Connection refused')
+ sys.stderr.write("!!! %s\n" % (e,))
+ return {}
+
+ out = sys.stdout
+ try:
+ metadatafile = open(
+ _unicode_encode(
+ metadatafilename, encoding=_encodings["fs"], errors="strict"
+ ),
+ "rb",
+ )
+ mypickle = pickle.Unpickler(metadatafile)
+ try:
+ mypickle.find_global = None
+ except AttributeError:
+ # TODO: If py3k, override Unpickler.find_class().
+ pass
+ metadata = mypickle.load()
+ out.write(_("Loaded metadata pickle.\n"))
+ out.flush()
+ metadatafile.close()
+ except (SystemExit, KeyboardInterrupt):
+ raise
+ except Exception:
+ metadata = {}
+ if baseurl not in metadata:
+ metadata[baseurl] = {}
+ if "indexname" not in metadata[baseurl]:
+ metadata[baseurl]["indexname"] = ""
+ if "timestamp" not in metadata[baseurl]:
+ metadata[baseurl]["timestamp"] = 0
+ if "unmodified" not in metadata[baseurl]:
+ metadata[baseurl]["unmodified"] = 0
+ if "data" not in metadata[baseurl]:
+ metadata[baseurl]["data"] = {}
+
+ if not os.access(cache_path, os.W_OK):
+ sys.stderr.write(_("!!! Unable to write binary metadata to disk!\n"))
+ sys.stderr.write(_("!!! Permission denied: '%s'\n") % cache_path)
+ return metadata[baseurl]["data"]
+
+ import portage.exception
+
+ try:
+ filelist = dir_get_list(baseurl, conn)
+ except portage.exception.PortageException as e:
+ sys.stderr.write(
+ _("!!! Error connecting to '%s'.\n") % _hide_url_passwd(baseurl)
+ )
+ sys.stderr.write("!!! %s\n" % str(e))
+ del e
+ return metadata[baseurl]["data"]
+ tbz2list = match_in_array(filelist, suffix=".tbz2")
+ metalist = match_in_array(filelist, prefix="metadata.idx")
+ del filelist
+
+ # Determine if our metadata file is current.
+ metalist.sort()
+ metalist.reverse() # makes the order new-to-old.
+ for mfile in metalist:
+ if usingcache and (
+ (metadata[baseurl]["indexname"] != mfile)
+ or (metadata[baseurl]["timestamp"] < int(time.time() - (60 * 60 * 24)))
+ ):
+ # Try to download new cache until we succeed on one.
+ data = ""
+ for trynum in [1, 2, 3]:
+ mytempfile = tempfile.TemporaryFile()
+ try:
+ file_get(baseurl + "/" + mfile, mytempfile, conn)
+ if mytempfile.tell() > len(data):
+ mytempfile.seek(0)
+ data = mytempfile.read()
+ except ValueError as e:
+ sys.stderr.write("--- %s\n" % str(e))
+ if trynum < 3:
+ sys.stderr.write(_("Retrying...\n"))
+ sys.stderr.flush()
+ mytempfile.close()
+ continue
+ if match_in_array([mfile], suffix=".gz"):
+ out.write("gzip'd\n")
+ out.flush()
+ try:
+ import gzip
+
+ mytempfile.seek(0)
+ gzindex = gzip.GzipFile(mfile[:-3], "rb", 9, mytempfile)
+ data = gzindex.read()
+ except SystemExit as e:
+ raise
+ except Exception as e:
+ mytempfile.close()
+ sys.stderr.write(_("!!! Failed to use gzip: ") + str(e) + "\n")
+ sys.stderr.flush()
+ mytempfile.close()
+ try:
+ metadata[baseurl]["data"] = pickle.loads(data)
+ del data
+ metadata[baseurl]["indexname"] = mfile
+ metadata[baseurl]["timestamp"] = int(time.time())
+ metadata[baseurl]["modified"] = 0 # It's not, right after download.
+ out.write(_("Pickle loaded.\n"))
+ out.flush()
+ break
+ except SystemExit as e:
+ raise
+ except Exception as e:
+ sys.stderr.write(
+ _("!!! Failed to read data from index: ") + str(mfile) + "\n"
+ )
+ sys.stderr.write("!!! %s" % str(e))
+ sys.stderr.flush()
+ try:
+ metadatafile = open(
+ _unicode_encode(
+ metadatafilename, encoding=_encodings["fs"], errors="strict"
+ ),
+ "wb",
+ )
+ pickle.dump(metadata, metadatafile, protocol=2)
+ metadatafile.close()
+ except SystemExit as e:
+ raise
+ except Exception as e:
+ sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
+ sys.stderr.write("!!! %s\n" % str(e))
+ sys.stderr.flush()
+ break
+ # We may have metadata... now we run through the tbz2 list and check.
+
+ class CacheStats:
+ from time import time
+
+ def __init__(self, out):
+ self.misses = 0
+ self.hits = 0
+ self.last_update = 0
+ self.out = out
+ self.min_display_latency = 0.2
+
+ def update(self):
+ cur_time = self.time()
+ if cur_time - self.last_update >= self.min_display_latency:
+ self.last_update = cur_time
+ self.display()
+
+ def display(self):
+ self.out.write(
+ "\r"
+ + colorize("WARN", _("cache miss: '") + str(self.misses) + "'")
+ + " --- "
+ + colorize("GOOD", _("cache hit: '") + str(self.hits) + "'")
+ )
+ self.out.flush()
+
+ cache_stats = CacheStats(out)
+ have_tty = os.environ.get("TERM") != "dumb" and out.isatty()
+ if have_tty:
+ cache_stats.display()
+ binpkg_filenames = set()
+ for x in tbz2list:
+ x = os.path.basename(x)
+ binpkg_filenames.add(x)
+ if x not in metadata[baseurl]["data"]:
+ cache_stats.misses += 1
+ if have_tty:
+ cache_stats.update()
+ metadata[baseurl]["modified"] = 1
+ myid = None
+ for _x in range(3):
+ try:
+ myid = file_get_metadata(
+ "/".join((baseurl.rstrip("/"), x.lstrip("/"))), conn, chunk_size
+ )
+ break
+ except http_client_BadStatusLine:
+ # Sometimes this error is thrown from conn.getresponse() in
+ # make_http_request(). The docstring for this error in
+ # httplib.py says "Presumably, the server closed the
+ # connection before sending a valid response".
+ conn = create_conn(baseurl)[0]
+ except http_client_ResponseNotReady:
+ # With some http servers this error is known to be thrown
+ # from conn.getresponse() in make_http_request() when the
+ # remote file does not have appropriate read permissions.
+ # Maybe it's possible to recover from this exception in
+ # some cases, though, so retry.
+ conn = create_conn(baseurl)[0]
+
+ if myid and myid[0]:
+ metadata[baseurl]["data"][x] = make_metadata_dict(myid)
+ elif verbose:
+ sys.stderr.write(
+ colorize("BAD", _("!!! Failed to retrieve metadata on: "))
+ + str(x)
+ + "\n"
+ )
+ sys.stderr.flush()
+ else:
+ cache_stats.hits += 1
+ if have_tty:
+ cache_stats.update()
+ cache_stats.display()
+ # Cleanse stale cache for files that don't exist on the server anymore.
+ stale_cache = set(metadata[baseurl]["data"]).difference(binpkg_filenames)
+ if stale_cache:
+ for x in stale_cache:
+ del metadata[baseurl]["data"][x]
+ metadata[baseurl]["modified"] = 1
+ del stale_cache
+ del binpkg_filenames
+ out.write("\n")
+ out.flush()
+
+ try:
+ if "modified" in metadata[baseurl] and metadata[baseurl]["modified"]:
+ metadata[baseurl]["timestamp"] = int(time.time())
+ metadatafile = open(
+ _unicode_encode(
+ metadatafilename, encoding=_encodings["fs"], errors="strict"
+ ),
+ "wb",
+ )
+ pickle.dump(metadata, metadatafile, protocol=2)
+ metadatafile.close()
+ if makepickle:
+ metadatafile = open(
+ _unicode_encode(makepickle, encoding=_encodings["fs"], errors="strict"),
+ "wb",
+ )
+ pickle.dump(metadata[baseurl]["data"], metadatafile, protocol=2)
+ metadatafile.close()
+ except SystemExit as e:
+ raise
+ except Exception as e:
+ sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
+ sys.stderr.write("!!! " + str(e) + "\n")
+ sys.stderr.flush()
+
+ if not keepconnection:
+ conn.close()
+
+ return metadata[baseurl]["data"]
+
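For context: dir_get_metadata() above keeps a pickled cache of remote binpkg metadata and refreshes an entry when its stored timestamp is more than a day old. A minimal standalone sketch of that load/staleness-check/save pattern, using hypothetical file names rather than portage's CACHE_PATH, might look like this:

# Minimal sketch of the pickle-cache pattern used above
# (hypothetical filenames; not the portage API).
import os
import pickle
import time

CACHE_FILE = "remote_metadata.pickle"  # assumption: any writable path
MAX_AGE = 60 * 60 * 24  # refresh entries older than one day

def load_cache(path=CACHE_FILE):
    # Fall back to an empty cache when the file is missing or unreadable.
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.UnpicklingError, EOFError):
        return {}

def is_stale(entry):
    # An entry is stale when its recorded timestamp is older than MAX_AGE.
    return entry.get("timestamp", 0) < int(time.time()) - MAX_AGE

def save_cache(cache, path=CACHE_FILE):
    with open(path, "wb") as f:
        pickle.dump(cache, f, protocol=2)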
def _cmp_cpv(d1, d2):
- cpv1 = d1["CPV"]
- cpv2 = d2["CPV"]
- if cpv1 > cpv2:
- return 1
- if cpv1 == cpv2:
- return 0
- return -1
+ cpv1 = d1["CPV"]
+ cpv2 = d2["CPV"]
+ if cpv1 > cpv2:
+ return 1
+ if cpv1 == cpv2:
+ return 0
+ return -1
- class PackageIndex:
- def __init__(self,
- allowed_pkg_keys=None,
- default_header_data=None,
- default_pkg_data=None,
- inherited_keys=None,
- translated_keys=None):
-
- self._pkg_slot_dict = None
- if allowed_pkg_keys is not None:
- self._pkg_slot_dict = slot_dict_class(allowed_pkg_keys)
-
- self._default_header_data = default_header_data
- self._default_pkg_data = default_pkg_data
- self._inherited_keys = inherited_keys
- self._write_translation_map = {}
- self._read_translation_map = {}
- if translated_keys:
- self._write_translation_map.update(translated_keys)
- self._read_translation_map.update(((y, x) for (x, y) in translated_keys))
- self.header = {}
- if self._default_header_data:
- self.header.update(self._default_header_data)
- self.packages = []
- self.modified = True
-
- def _readpkgindex(self, pkgfile, pkg_entry=True):
-
- allowed_keys = None
- if self._pkg_slot_dict is None or not pkg_entry:
- d = {}
- else:
- d = self._pkg_slot_dict()
- allowed_keys = d.allowed_keys
-
- for line in pkgfile:
- line = line.rstrip("\n")
- if not line:
- break
- line = line.split(":", 1)
- if not len(line) == 2:
- continue
- k, v = line
- if v:
- v = v[1:]
- k = self._read_translation_map.get(k, k)
- if allowed_keys is not None and \
- k not in allowed_keys:
- continue
- d[k] = v
- return d
-
- def _writepkgindex(self, pkgfile, items):
- for k, v in items:
- pkgfile.write("%s: %s\n" % \
- (self._write_translation_map.get(k, k), v))
- pkgfile.write("\n")
-
- def read(self, pkgfile):
- self.readHeader(pkgfile)
- self.readBody(pkgfile)
-
- def readHeader(self, pkgfile):
- self.header.update(self._readpkgindex(pkgfile, pkg_entry=False))
-
- def readBody(self, pkgfile):
- while True:
- d = self._readpkgindex(pkgfile)
- if not d:
- break
- mycpv = d.get("CPV")
- if not mycpv:
- continue
- if self._default_pkg_data:
- for k, v in self._default_pkg_data.items():
- d.setdefault(k, v)
- if self._inherited_keys:
- for k in self._inherited_keys:
- v = self.header.get(k)
- if v is not None:
- d.setdefault(k, v)
- self.packages.append(d)
-
- def write(self, pkgfile):
- if self.modified:
- self.header["TIMESTAMP"] = str(int(time.time()))
- self.header["PACKAGES"] = str(len(self.packages))
- keys = list(self.header)
- keys.sort()
- self._writepkgindex(pkgfile, [(k, self.header[k]) \
- for k in keys if self.header[k]])
- for metadata in sorted(self.packages,
- key=portage.util.cmp_sort_key(_cmp_cpv)):
- metadata = metadata.copy()
- if self._inherited_keys:
- for k in self._inherited_keys:
- v = self.header.get(k)
- if v is not None and v == metadata.get(k):
- del metadata[k]
- if self._default_pkg_data:
- for k, v in self._default_pkg_data.items():
- if metadata.get(k) == v:
- metadata.pop(k, None)
- keys = list(metadata)
- keys.sort()
- self._writepkgindex(pkgfile,
- [(k, metadata[k]) for k in keys if metadata[k]])
+ class PackageIndex:
+ def __init__(
+ self,
+ allowed_pkg_keys=None,
+ default_header_data=None,
+ default_pkg_data=None,
+ inherited_keys=None,
+ translated_keys=None,
+ ):
+
+ self._pkg_slot_dict = None
+ if allowed_pkg_keys is not None:
+ self._pkg_slot_dict = slot_dict_class(allowed_pkg_keys)
+
+ self._default_header_data = default_header_data
+ self._default_pkg_data = default_pkg_data
+ self._inherited_keys = inherited_keys
+ self._write_translation_map = {}
+ self._read_translation_map = {}
+ if translated_keys:
+ self._write_translation_map.update(translated_keys)
+ self._read_translation_map.update(((y, x) for (x, y) in translated_keys))
+ self.header = {}
+ if self._default_header_data:
+ self.header.update(self._default_header_data)
+ self.packages = []
+ self.modified = True
+
+ def _readpkgindex(self, pkgfile, pkg_entry=True):
+
+ allowed_keys = None
+ if self._pkg_slot_dict is None or not pkg_entry:
+ d = {}
+ else:
+ d = self._pkg_slot_dict()
+ allowed_keys = d.allowed_keys
+
+ for line in pkgfile:
+ line = line.rstrip("\n")
+ if not line:
+ break
+ line = line.split(":", 1)
+ if not len(line) == 2:
+ continue
+ k, v = line
+ if v:
+ v = v[1:]
+ k = self._read_translation_map.get(k, k)
+ if allowed_keys is not None and k not in allowed_keys:
+ continue
+ d[k] = v
+ return d
+
+ def _writepkgindex(self, pkgfile, items):
+ for k, v in items:
+ pkgfile.write("%s: %s\n" % (self._write_translation_map.get(k, k), v))
+ pkgfile.write("\n")
+
+ def read(self, pkgfile):
+ self.readHeader(pkgfile)
+ self.readBody(pkgfile)
+
+ def readHeader(self, pkgfile):
+ self.header.update(self._readpkgindex(pkgfile, pkg_entry=False))
+
+ def readBody(self, pkgfile):
+ while True:
+ d = self._readpkgindex(pkgfile)
+ if not d:
+ break
+ mycpv = d.get("CPV")
+ if not mycpv:
+ continue
+ if self._default_pkg_data:
+ for k, v in self._default_pkg_data.items():
+ d.setdefault(k, v)
+ if self._inherited_keys:
+ for k in self._inherited_keys:
+ v = self.header.get(k)
+ if v is not None:
+ d.setdefault(k, v)
+ self.packages.append(d)
+
+ def write(self, pkgfile):
+ if self.modified:
+ self.header["TIMESTAMP"] = str(int(time.time()))
+ self.header["PACKAGES"] = str(len(self.packages))
+ keys = list(self.header)
+ keys.sort()
+ self._writepkgindex(
+ pkgfile, [(k, self.header[k]) for k in keys if self.header[k]]
+ )
+ for metadata in sorted(self.packages, key=portage.util.cmp_sort_key(_cmp_cpv)):
+ metadata = metadata.copy()
+ if self._inherited_keys:
+ for k in self._inherited_keys:
+ v = self.header.get(k)
+ if v is not None and v == metadata.get(k):
+ del metadata[k]
+ if self._default_pkg_data:
+ for k, v in self._default_pkg_data.items():
+ if metadata.get(k) == v:
+ metadata.pop(k, None)
+ keys = list(metadata)
+ keys.sort()
+ self._writepkgindex(
+ pkgfile, [(k, metadata[k]) for k in keys if metadata[k]]
+ )
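The PackageIndex class reformatted above reads and writes the plain-text Packages index: a header stanza of "KEY: value" lines, a blank separator, then one stanza per package keyed by CPV. A self-contained sketch of that stanza round-trip (illustration only, not the portage implementation) could be:

# Standalone sketch of the "KEY: value" stanza format handled by
# PackageIndex above (illustration only, not the portage code).
import io

def read_stanza(fobj):
    # Read one blank-line-terminated block of "KEY: value" pairs.
    d = {}
    for line in fobj:
        line = line.rstrip("\n")
        if not line:
            break
        key, sep, value = line.partition(":")
        if sep:
            d[key] = value.lstrip(" ")
    return d

def write_stanza(fobj, items):
    for key, value in items:
        fobj.write("%s: %s\n" % (key, value))
    fobj.write("\n")

buf = io.StringIO()
write_stanza(buf, [("CPV", "app-misc/example-1.0"), ("SLOT", "0")])
buf.seek(0)
assert read_stanza(buf) == {"CPV": "app-misc/example-1.0", "SLOT": "0"}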
diff --cc lib/portage/package/ebuild/_config/special_env_vars.py
index 682bea62b,06ae3aa39..9331bf451
--- a/lib/portage/package/ebuild/_config/special_env_vars.py
+++ b/lib/portage/package/ebuild/_config/special_env_vars.py
@@@ -40,49 -83,109 +83,114 @@@ environ_whitelist = [
# environment in order to prevent sandbox from sourcing /etc/profile
# in its bashrc (causing major leakage).
environ_whitelist += [
- "ACCEPT_LICENSE", "BASH_ENV", "BASH_FUNC____in_portage_iuse%%",
- "BROOT", "BUILD_PREFIX", "COLUMNS", "D",
- "DISTDIR", "DOC_SYMLINKS_DIR", "EAPI", "EBUILD",
- "EBUILD_FORCE_TEST",
- "EBUILD_PHASE", "EBUILD_PHASE_FUNC", "ECLASSDIR", "ECLASS_DEPTH", "ED",
- "EMERGE_FROM", "ENV_UNSET", "EPREFIX", "EROOT", "ESYSROOT",
- "FEATURES", "FILESDIR", "HOME", "MERGE_TYPE", "NOCOLOR", "PATH",
- "PKGDIR",
- "PKGUSE", "PKG_LOGDIR", "PKG_TMPDIR",
- "PORTAGE_ACTUAL_DISTDIR", "PORTAGE_ARCHLIST", "PORTAGE_BASHRC_FILES",
- "PORTAGE_BASHRC", "PM_EBUILD_HOOK_DIR",
- "PORTAGE_BINPKG_FILE", "PORTAGE_BINPKG_TAR_OPTS",
- "PORTAGE_BINPKG_TMPFILE",
- "PORTAGE_BIN_PATH",
- "PORTAGE_BUILDDIR", "PORTAGE_BUILD_GROUP", "PORTAGE_BUILD_USER",
- "PORTAGE_BUNZIP2_COMMAND", "PORTAGE_BZIP2_COMMAND",
- "PORTAGE_COLORMAP", "PORTAGE_COMPRESS", "PORTAGE_COMPRESSION_COMMAND",
- "PORTAGE_COMPRESS_EXCLUDE_SUFFIXES",
- "PORTAGE_CONFIGROOT", "PORTAGE_DEBUG", "PORTAGE_DEPCACHEDIR",
- "PORTAGE_DOHTML_UNWARNED_SKIPPED_EXTENSIONS",
- "PORTAGE_DOHTML_UNWARNED_SKIPPED_FILES",
- "PORTAGE_DOHTML_WARN_ON_SKIPPED_FILES",
- "PORTAGE_EBUILD_EXIT_FILE", "PORTAGE_FEATURES",
- "PORTAGE_GID", "PORTAGE_GRPNAME",
- "PORTAGE_INTERNAL_CALLER",
- "PORTAGE_INST_GID", "PORTAGE_INST_UID",
- "PORTAGE_IPC_DAEMON", "PORTAGE_IUSE", "PORTAGE_ECLASS_LOCATIONS",
- "PORTAGE_LOG_FILE", "PORTAGE_OVERRIDE_EPREFIX", "PORTAGE_PIPE_FD",
- "PORTAGE_PROPERTIES",
- "PORTAGE_PYM_PATH", "PORTAGE_PYTHON",
- "PORTAGE_PYTHONPATH", "PORTAGE_QUIET",
- "PORTAGE_REPO_NAME", "PORTAGE_REPOSITORIES", "PORTAGE_RESTRICT",
- "PORTAGE_SIGPIPE_STATUS", "PORTAGE_SOCKS5_PROXY",
- "PORTAGE_TMPDIR", "PORTAGE_UPDATE_ENV", "PORTAGE_USERNAME",
- "PORTAGE_VERBOSE", "PORTAGE_WORKDIR_MODE", "PORTAGE_XATTR_EXCLUDE",
- "PORTDIR", "PORTDIR_OVERLAY", "PREROOTPATH", "PYTHONDONTWRITEBYTECODE",
- "REPLACING_VERSIONS", "REPLACED_BY_VERSION",
- "ROOT", "ROOTPATH", "SANDBOX_LOG", "SYSROOT", "T", "TMP", "TMPDIR",
- "USE_EXPAND", "USE_ORDER", "WORKDIR",
- "XARGS", "__PORTAGE_TEST_HARDLINK_LOCKS",
- # PREFIX LOCAL
- "EXTRA_PATH", "PORTAGE_GROUP", "PORTAGE_USER",
- # END PREFIX LOCAL
+ "ACCEPT_LICENSE",
+ "BASH_ENV",
+ "BASH_FUNC____in_portage_iuse%%",
+ "BROOT",
+ "BUILD_PREFIX",
+ "COLUMNS",
+ "D",
+ "DISTDIR",
+ "DOC_SYMLINKS_DIR",
+ "EAPI",
+ "EBUILD",
+ "EBUILD_FORCE_TEST",
+ "EBUILD_PHASE",
+ "EBUILD_PHASE_FUNC",
+ "ECLASSDIR",
+ "ECLASS_DEPTH",
+ "ED",
+ "EMERGE_FROM",
+ "ENV_UNSET",
+ "EPREFIX",
+ "EROOT",
+ "ESYSROOT",
+ "FEATURES",
+ "FILESDIR",
+ "HOME",
+ "MERGE_TYPE",
+ "NOCOLOR",
+ "PATH",
+ "PKGDIR",
+ "PKGUSE",
+ "PKG_LOGDIR",
+ "PKG_TMPDIR",
+ "PORTAGE_ACTUAL_DISTDIR",
+ "PORTAGE_ARCHLIST",
+ "PORTAGE_BASHRC_FILES",
+ "PORTAGE_BASHRC",
+ "PM_EBUILD_HOOK_DIR",
+ "PORTAGE_BINPKG_FILE",
+ "PORTAGE_BINPKG_TAR_OPTS",
+ "PORTAGE_BINPKG_TMPFILE",
+ "PORTAGE_BIN_PATH",
+ "PORTAGE_BUILDDIR",
+ "PORTAGE_BUILD_GROUP",
+ "PORTAGE_BUILD_USER",
+ "PORTAGE_BUNZIP2_COMMAND",
+ "PORTAGE_BZIP2_COMMAND",
+ "PORTAGE_COLORMAP",
+ "PORTAGE_COMPRESS",
+ "PORTAGE_COMPRESSION_COMMAND",
+ "PORTAGE_COMPRESS_EXCLUDE_SUFFIXES",
+ "PORTAGE_CONFIGROOT",
+ "PORTAGE_DEBUG",
+ "PORTAGE_DEPCACHEDIR",
+ "PORTAGE_DOHTML_UNWARNED_SKIPPED_EXTENSIONS",
+ "PORTAGE_DOHTML_UNWARNED_SKIPPED_FILES",
+ "PORTAGE_DOHTML_WARN_ON_SKIPPED_FILES",
+ "PORTAGE_EBUILD_EXIT_FILE",
+ "PORTAGE_FEATURES",
+ "PORTAGE_GID",
+ "PORTAGE_GRPNAME",
+ "PORTAGE_INTERNAL_CALLER",
+ "PORTAGE_INST_GID",
+ "PORTAGE_INST_UID",
+ "PORTAGE_IPC_DAEMON",
+ "PORTAGE_IUSE",
+ "PORTAGE_ECLASS_LOCATIONS",
+ "PORTAGE_LOG_FILE",
+ "PORTAGE_OVERRIDE_EPREFIX",
+ "PORTAGE_PIPE_FD",
+ "PORTAGE_PROPERTIES",
+ "PORTAGE_PYM_PATH",
+ "PORTAGE_PYTHON",
+ "PORTAGE_PYTHONPATH",
+ "PORTAGE_QUIET",
+ "PORTAGE_REPO_NAME",
+ "PORTAGE_REPOSITORIES",
+ "PORTAGE_RESTRICT",
+ "PORTAGE_SIGPIPE_STATUS",
+ "PORTAGE_SOCKS5_PROXY",
+ "PORTAGE_TMPDIR",
+ "PORTAGE_UPDATE_ENV",
+ "PORTAGE_USERNAME",
+ "PORTAGE_VERBOSE",
+ "PORTAGE_WORKDIR_MODE",
+ "PORTAGE_XATTR_EXCLUDE",
+ "PORTDIR",
+ "PORTDIR_OVERLAY",
+ "PREROOTPATH",
+ "PYTHONDONTWRITEBYTECODE",
+ "REPLACING_VERSIONS",
+ "REPLACED_BY_VERSION",
+ "ROOT",
+ "ROOTPATH",
+ "SANDBOX_LOG",
+ "SYSROOT",
+ "T",
+ "TMP",
+ "TMPDIR",
+ "USE_EXPAND",
+ "USE_ORDER",
+ "WORKDIR",
+ "XARGS",
+ "__PORTAGE_TEST_HARDLINK_LOCKS",
++ # BEGIN PREFIX LOCAL
++ "EXTRA_PATH",
++ "PORTAGE_GROUP",
++ "PORTAGE_USER",
++ # END PREFIX LOCAL
]
# user config variables
@@@ -115,13 -232,15 +237,18 @@@ environ_whitelist +=
]
# other variables inherited from the calling environment
+# UNIXMODE is necessary for MiNT
environ_whitelist += [
- "CVS_RSH", "ECHANGELOG_USER",
- "GPG_AGENT_INFO",
- "SSH_AGENT_PID", "SSH_AUTH_SOCK",
- "STY", "WINDOW", "XAUTHORITY",
- "UNIXMODE",
+ "CVS_RSH",
+ "ECHANGELOG_USER",
+ "GPG_AGENT_INFO",
+ "SSH_AGENT_PID",
+ "SSH_AUTH_SOCK",
+ "STY",
+ "WINDOW",
+ "XAUTHORITY",
++ # PREFIX LOCAL
++ "UNIXMODE",
]
environ_whitelist = frozenset(environ_whitelist)
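environ_whitelist collects the calling-environment variables that are allowed through into the ebuild environment, while environ_filter below lists variables to strip. A rough sketch of applying such sets to a process environment, with made-up variable names and without portage's actual assembly logic, might be:

# Rough sketch of applying whitelist/filter sets to an environment
# (illustration only; not how portage itself builds the ebuild env).
import os

WHITELIST = frozenset({"PATH", "HOME", "DISTDIR", "FEATURES"})
FILTERED = frozenset({"MANPATH", "INFOPATH", "USER"})

def build_env(source=None):
    source = os.environ if source is None else source
    # Keep only whitelisted variables, and drop filtered ones defensively.
    return {
        k: v
        for k, v in source.items()
        if k in WHITELIST and k not in FILTERED
    }

print(sorted(build_env().keys()))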
@@@ -141,11 -265,9 +273,22 @@@ environ_filter +=
# misc variables inherited from the calling environment
environ_filter += [
- "INFOPATH", "MANPATH", "USER",
- "HOST", "GROUP", "LOGNAME", "MAIL", "REMOTEHOST",
- "SECURITYSESSIONID",
- "TERMINFO", "TERM_PROGRAM", "TERM_PROGRAM_VERSION",
- "VENDOR", "__CF_USER_TEXT_ENCODING",
+ "INFOPATH",
+ "MANPATH",
+ "USER",
++ # BEGIN PREFIX LOCAL
++ "HOST",
++ "GROUP",
++ "LOGNAME",
++ "MAIL",
++ "REMOTEHOST",
++ "SECURITYSESSIONID",
++ "TERMINFO",
++ "TERM_PROGRAM",
++ "TERM_PROGRAM_VERSION",
++ "VENDOR",
++ "__CF_USER_TEXT_ENCODING",
++ # END PREFIX LOCAL
]
# variables that break bash
diff --cc lib/portage/package/ebuild/config.py
index 59972af76,b4d6862a3..625a1be49
--- a/lib/portage/package/ebuild/config.py
+++ b/lib/portage/package/ebuild/config.py
@@@ -41,15 -64,28 +64,28 @@@ from portage.env.loaders import KeyValu
from portage.exception import InvalidDependString, PortageException
from portage.localization import _
from portage.output import colorize
-from portage.process import fakeroot_capable, sandbox_capable
+from portage.process import fakeroot_capable, sandbox_capable, macossandbox_capable
from portage.repository.config import (
- allow_profile_repo_deps,
- load_repository_config,
+ allow_profile_repo_deps,
+ load_repository_config,
+ )
+ from portage.util import (
+ ensure_dirs,
+ getconfig,
+ grabdict,
+ grabdict_package,
+ grabfile,
+ grabfile_package,
+ LazyItemsDict,
+ normalize_path,
+ shlex_split,
+ stack_dictlist,
+ stack_dicts,
+ stack_lists,
+ writemsg,
+ writemsg_level,
+ _eapi_cache,
)
- from portage.util import ensure_dirs, getconfig, grabdict, \
- grabdict_package, grabfile, grabfile_package, LazyItemsDict, \
- normalize_path, shlex_split, stack_dictlist, stack_dicts, stack_lists, \
- writemsg, writemsg_level, _eapi_cache
from portage.util.install_mask import _raise_exc
from portage.util.path import first_existing
from portage.util._path import exists_raise_eaccess, isdir_raise_eaccess
@@@ -70,2889 -111,3307 +111,3315 @@@ from portage.package.ebuild._config.unp
_feature_flags_cache = {}
+
def _get_feature_flags(eapi_attrs):
- cache_key = (eapi_attrs.feature_flag_test,)
- flags = _feature_flags_cache.get(cache_key)
- if flags is not None:
- return flags
+ cache_key = (eapi_attrs.feature_flag_test,)
+ flags = _feature_flags_cache.get(cache_key)
+ if flags is not None:
+ return flags
+
+ flags = []
+ if eapi_attrs.feature_flag_test:
+ flags.append("test")
- flags = []
- if eapi_attrs.feature_flag_test:
- flags.append("test")
+ flags = frozenset(flags)
+ _feature_flags_cache[cache_key] = flags
+ return flags
- flags = frozenset(flags)
- _feature_flags_cache[cache_key] = flags
- return flags
def autouse(myvartree, use_cache=1, mysettings=None):
- warnings.warn("portage.autouse() is deprecated",
- DeprecationWarning, stacklevel=2)
- return ""
+ warnings.warn("portage.autouse() is deprecated", DeprecationWarning, stacklevel=2)
+ return ""
+
def check_config_instance(test):
- if not isinstance(test, config):
- raise TypeError("Invalid type for config object: %s (should be %s)" % (test.__class__, config))
+ if not isinstance(test, config):
+ raise TypeError(
+ "Invalid type for config object: %s (should be %s)"
+ % (test.__class__, config)
+ )
+
def best_from_dict(key, top_dict, key_order, EmptyOnError=1, FullCopy=1, AllowEmpty=1):
- for x in key_order:
- if x in top_dict and key in top_dict[x]:
- if FullCopy:
- return copy.deepcopy(top_dict[x][key])
- return top_dict[x][key]
- if EmptyOnError:
- return ""
- raise KeyError("Key not found in list; '%s'" % key)
+ for x in key_order:
+ if x in top_dict and key in top_dict[x]:
+ if FullCopy:
+ return copy.deepcopy(top_dict[x][key])
+ return top_dict[x][key]
+ if EmptyOnError:
+ return ""
+ raise KeyError("Key not found in list; '%s'" % key)
+
def _lazy_iuse_regex(iuse_implicit):
- """
- The PORTAGE_IUSE value is lazily evaluated since re.escape() is slow
- and the value is only used when an ebuild phase needs to be executed
- (it's used only to generate QA notices).
- """
- # Escape anything except ".*" which is supposed to pass through from
- # _get_implicit_iuse().
- regex = sorted(re.escape(x) for x in iuse_implicit)
- regex = "^(%s)$" % "|".join(regex)
- regex = regex.replace("\\.\\*", ".*")
- return regex
+ """
+ The PORTAGE_IUSE value is lazily evaluated since re.escape() is slow
+ and the value is only used when an ebuild phase needs to be executed
+ (it's used only to generate QA notices).
+ """
+ # Escape anything except ".*" which is supposed to pass through from
+ # _get_implicit_iuse().
+ regex = sorted(re.escape(x) for x in iuse_implicit)
+ regex = "^(%s)$" % "|".join(regex)
+ regex = regex.replace("\\.\\*", ".*")
+ return regex
+
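The escape-then-unescape step in _lazy_iuse_regex() above keeps the ".*" wildcards coming from _get_implicit_iuse() functional while every other character is matched literally. A quick standalone check of the resulting pattern, using hypothetical implicit IUSE entries, might look like:

# Quick check of the pattern shape produced by the logic above
# (hypothetical flag names).
import re

iuse_implicit = ["prefix", "kernel_.*", "userland_.*"]
parts = sorted(re.escape(x) for x in iuse_implicit)
regex = "^(%s)$" % "|".join(parts)
regex = regex.replace("\\.\\*", ".*")

print(regex)  # ^(kernel_.*|prefix|userland_.*)$
assert re.match(regex, "kernel_linux")
assert not re.match(regex, "doc")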
class _iuse_implicit_match_cache:
+ def __init__(self, settings):
+ self._iuse_implicit_re = re.compile(
+ "^(%s)$" % "|".join(settings._get_implicit_iuse())
+ )
+ self._cache = {}
+
+ def __call__(self, flag):
+ """
+ Returns True if the flag is matched, False otherwise.
+ """
+ try:
+ return self._cache[flag]
+ except KeyError:
+ m = self._iuse_implicit_re.match(flag) is not None
+ self._cache[flag] = m
+ return m
- def __init__(self, settings):
- self._iuse_implicit_re = re.compile("^(%s)$" % \
- "|".join(settings._get_implicit_iuse()))
- self._cache = {}
-
- def __call__(self, flag):
- """
- Returns True if the flag is matched, False otherwise.
- """
- try:
- return self._cache[flag]
- except KeyError:
- m = self._iuse_implicit_re.match(flag) is not None
- self._cache[flag] = m
- return m
class config:
- """
- This class encompasses the main portage configuration. Data is pulled from
- ROOT/PORTDIR/profiles/, from ROOT/etc/make.profile incrementally through all
- parent profiles as well as from ROOT/PORTAGE_CONFIGROOT/* for user specified
- overrides.
-
- Generally if you need data like USE flags, FEATURES, environment variables,
- virtuals, etc., you look in here.
- """
-
- _constant_keys = frozenset(['PORTAGE_BIN_PATH', 'PORTAGE_GID',
- 'PORTAGE_PYM_PATH', 'PORTAGE_PYTHONPATH'])
-
- _deprecated_keys = {'PORTAGE_LOGDIR': 'PORT_LOGDIR',
- 'PORTAGE_LOGDIR_CLEAN': 'PORT_LOGDIR_CLEAN',
- 'SIGNED_OFF_BY': 'DCO_SIGNED_OFF_BY'}
-
- _setcpv_aux_keys = ('BDEPEND', 'DEFINED_PHASES', 'DEPEND', 'EAPI', 'IDEPEND',
- 'INHERITED', 'IUSE', 'REQUIRED_USE', 'KEYWORDS', 'LICENSE', 'PDEPEND',
- 'PROPERTIES', 'RDEPEND', 'SLOT',
- 'repository', 'RESTRICT', 'LICENSE',)
-
- _module_aliases = {
- "cache.metadata_overlay.database" : "portage.cache.flat_hash.mtime_md5_database",
- "portage.cache.metadata_overlay.database" : "portage.cache.flat_hash.mtime_md5_database",
- }
-
- _case_insensitive_vars = special_env_vars.case_insensitive_vars
- _default_globals = special_env_vars.default_globals
- _env_blacklist = special_env_vars.env_blacklist
- _environ_filter = special_env_vars.environ_filter
- _environ_whitelist = special_env_vars.environ_whitelist
- _environ_whitelist_re = special_env_vars.environ_whitelist_re
- _global_only_vars = special_env_vars.global_only_vars
-
- def __init__(self, clone=None, mycpv=None, config_profile_path=None,
- config_incrementals=None, config_root=None, target_root=None,
- sysroot=None, eprefix=None, local_config=True, env=None,
- _unmatched_removal=False, repositories=None):
- """
- @param clone: If provided, init will use deepcopy to copy by value the instance.
- @type clone: Instance of config class.
- @param mycpv: CPV to load up (see setcpv), this is the same as calling init with mycpv=None
- and then calling instance.setcpv(mycpv).
- @type mycpv: String
- @param config_profile_path: Configurable path to the profile (usually PROFILE_PATH from portage.const)
- @type config_profile_path: String
- @param config_incrementals: List of incremental variables
- (defaults to portage.const.INCREMENTALS)
- @type config_incrementals: List
- @param config_root: path to read local config from (defaults to "/", see PORTAGE_CONFIGROOT)
- @type config_root: String
- @param target_root: the target root, which typically corresponds to the
- value of the $ROOT env variable (default is /)
- @type target_root: String
- @param sysroot: the sysroot to build against, which typically corresponds
- to the value of the $SYSROOT env variable (default is /)
- @type sysroot: String
- @param eprefix: set the EPREFIX variable (default is portage.const.EPREFIX)
- @type eprefix: String
- @param local_config: Enables loading of local config (/etc/portage); used most by repoman to
- ignore local config (keywording and unmasking)
- @type local_config: Boolean
- @param env: The calling environment which is used to override settings.
- Defaults to os.environ if unspecified.
- @type env: dict
- @param _unmatched_removal: Enabled by repoman when the
- --unmatched-removal option is given.
- @type _unmatched_removal: Boolean
- @param repositories: Configuration of repositories.
- Defaults to portage.repository.config.load_repository_config().
- @type repositories: Instance of portage.repository.config.RepoConfigLoader class.
- """
-
- # This is important when config is reloaded after emerge --sync.
- _eapi_cache.clear()
-
- # When initializing the global portage.settings instance, avoid
- # raising exceptions whenever possible since exceptions thrown
- # from 'import portage' or 'import portage.exceptions' statements
- # can practically render the api unusable for api consumers.
- tolerant = hasattr(portage, '_initializing_globals')
- self._tolerant = tolerant
- self._unmatched_removal = _unmatched_removal
-
- self.locked = 0
- self.mycpv = None
- self._setcpv_args_hash = None
- self.puse = ""
- self._penv = []
- self.modifiedkeys = []
- self.uvlist = []
- self._accept_chost_re = None
- self._accept_properties = None
- self._accept_restrict = None
- self._features_overrides = []
- self._make_defaults = None
- self._parent_stable = None
- self._soname_provided = None
-
- # _unknown_features records unknown features that
- # have triggered warning messages, and ensures that
- # the same warning isn't shown twice.
- self._unknown_features = set()
-
- self.local_config = local_config
-
- if clone:
- # For immutable attributes, use shallow copy for
- # speed and memory conservation.
- self._tolerant = clone._tolerant
- self._unmatched_removal = clone._unmatched_removal
- self.categories = clone.categories
- self.depcachedir = clone.depcachedir
- self.incrementals = clone.incrementals
- self.module_priority = clone.module_priority
- self.profile_path = clone.profile_path
- self.profiles = clone.profiles
- self.packages = clone.packages
- self.repositories = clone.repositories
- self.unpack_dependencies = clone.unpack_dependencies
- self._default_features_use = clone._default_features_use
- self._iuse_effective = clone._iuse_effective
- self._iuse_implicit_match = clone._iuse_implicit_match
- self._non_user_variables = clone._non_user_variables
- self._env_d_blacklist = clone._env_d_blacklist
- self._pbashrc = clone._pbashrc
- self._repo_make_defaults = clone._repo_make_defaults
- self.usemask = clone.usemask
- self.useforce = clone.useforce
- self.puse = clone.puse
- self.user_profile_dir = clone.user_profile_dir
- self.local_config = clone.local_config
- self.make_defaults_use = clone.make_defaults_use
- self.mycpv = clone.mycpv
- self._setcpv_args_hash = clone._setcpv_args_hash
- self._soname_provided = clone._soname_provided
- self._profile_bashrc = clone._profile_bashrc
-
- # immutable attributes (internal policy ensures lack of mutation)
- self._locations_manager = clone._locations_manager
- self._use_manager = clone._use_manager
- # force instantiation of lazy immutable objects when cloning, so
- # that they're not instantiated more than once
- self._keywords_manager_obj = clone._keywords_manager
- self._mask_manager_obj = clone._mask_manager
-
- # shared mutable attributes
- self._unknown_features = clone._unknown_features
-
- self.modules = copy.deepcopy(clone.modules)
- self._penv = copy.deepcopy(clone._penv)
-
- self.configdict = copy.deepcopy(clone.configdict)
- self.configlist = [
- self.configdict['env.d'],
- self.configdict['repo'],
- self.configdict['features'],
- self.configdict['pkginternal'],
- self.configdict['globals'],
- self.configdict['defaults'],
- self.configdict['conf'],
- self.configdict['pkg'],
- self.configdict['env'],
- ]
- self.lookuplist = self.configlist[:]
- self.lookuplist.reverse()
- self._use_expand_dict = copy.deepcopy(clone._use_expand_dict)
- self.backupenv = self.configdict["backupenv"]
- self.prevmaskdict = copy.deepcopy(clone.prevmaskdict)
- self.pprovideddict = copy.deepcopy(clone.pprovideddict)
- self.features = features_set(self)
- self.features._features = copy.deepcopy(clone.features._features)
- self._features_overrides = copy.deepcopy(clone._features_overrides)
-
- #Strictly speaking _license_manager is not immutable. Users need to ensure that
- #extract_global_changes() is called right after __init__ (if at all).
- #It also has the mutable member _undef_lic_groups. It is used to track
- #undefined license groups, to not display an error message for the same
- #group again and again. Because of this, it's useful to share it between
- #all LicenseManager instances.
- self._license_manager = clone._license_manager
-
- # force instantiation of lazy objects when cloning, so
- # that they're not instantiated more than once
- self._virtuals_manager_obj = copy.deepcopy(clone._virtuals_manager)
-
- self._accept_properties = copy.deepcopy(clone._accept_properties)
- self._ppropertiesdict = copy.deepcopy(clone._ppropertiesdict)
- self._accept_restrict = copy.deepcopy(clone._accept_restrict)
- self._paccept_restrict = copy.deepcopy(clone._paccept_restrict)
- self._penvdict = copy.deepcopy(clone._penvdict)
- self._pbashrcdict = copy.deepcopy(clone._pbashrcdict)
- self._expand_map = copy.deepcopy(clone._expand_map)
-
- else:
- # lazily instantiated objects
- self._keywords_manager_obj = None
- self._mask_manager_obj = None
- self._virtuals_manager_obj = None
-
- locations_manager = LocationsManager(config_root=config_root,
- config_profile_path=config_profile_path, eprefix=eprefix,
- local_config=local_config, target_root=target_root,
- sysroot=sysroot)
- self._locations_manager = locations_manager
-
- eprefix = locations_manager.eprefix
- config_root = locations_manager.config_root
- sysroot = locations_manager.sysroot
- esysroot = locations_manager.esysroot
- broot = locations_manager.broot
- abs_user_config = locations_manager.abs_user_config
- make_conf_paths = [
- os.path.join(config_root, 'etc', 'make.conf'),
- os.path.join(config_root, MAKE_CONF_FILE)
- ]
- try:
- if os.path.samefile(*make_conf_paths):
- make_conf_paths.pop()
- except OSError:
- pass
-
- make_conf_count = 0
- make_conf = {}
- for x in make_conf_paths:
- mygcfg = getconfig(x,
- tolerant=tolerant, allow_sourcing=True,
- expand=make_conf, recursive=True)
- if mygcfg is not None:
- make_conf.update(mygcfg)
- make_conf_count += 1
-
- if make_conf_count == 2:
- writemsg("!!! %s\n" %
- _("Found 2 make.conf files, using both '%s' and '%s'") %
- tuple(make_conf_paths), noiselevel=-1)
-
- # __* variables set in make.conf are local and are not propagated.
- make_conf = {k: v for k, v in make_conf.items() if not k.startswith("__")}
-
- # Allow ROOT setting to come from make.conf if it's not overridden
- # by the constructor argument (from the calling environment).
- locations_manager.set_root_override(make_conf.get("ROOT"))
- target_root = locations_manager.target_root
- eroot = locations_manager.eroot
- self.global_config_path = locations_manager.global_config_path
-
- # The expand_map is used for variable substitution
- # in getconfig() calls, and the getconfig() calls
- # update expand_map with the value of each variable
- # assignment that occurs. Variable substitution occurs
- # in the following order, which corresponds to the
- # order of appearance in self.lookuplist:
- #
- # * env.d
- # * make.globals
- # * make.defaults
- # * make.conf
- #
- # Notably absent is "env", since we want to avoid any
- # interaction with the calling environment that might
- # lead to unexpected results.
-
- env_d = getconfig(os.path.join(eroot, "etc", "profile.env"),
- tolerant=tolerant, expand=False) or {}
- expand_map = env_d.copy()
- self._expand_map = expand_map
-
- # Allow make.globals and make.conf to set paths relative to vars like ${EPREFIX}.
- expand_map["BROOT"] = broot
- expand_map["EPREFIX"] = eprefix
- expand_map["EROOT"] = eroot
- expand_map["ESYSROOT"] = esysroot
- expand_map["PORTAGE_CONFIGROOT"] = config_root
- expand_map["ROOT"] = target_root
- expand_map["SYSROOT"] = sysroot
-
- if portage._not_installed:
- make_globals_path = os.path.join(PORTAGE_BASE_PATH, "cnf", "make.globals")
- else:
- make_globals_path = os.path.join(self.global_config_path, "make.globals")
- old_make_globals = os.path.join(config_root, "etc", "make.globals")
- if os.path.isfile(old_make_globals) and \
- not os.path.samefile(make_globals_path, old_make_globals):
- # Don't warn if they refer to the same path, since
- # that can be used for backward compatibility with
- # old software.
- writemsg("!!! %s\n" %
- _("Found obsolete make.globals file: "
- "'%s', (using '%s' instead)") %
- (old_make_globals, make_globals_path),
- noiselevel=-1)
-
- make_globals = getconfig(make_globals_path,
- tolerant=tolerant, expand=expand_map)
- if make_globals is None:
- make_globals = {}
-
- for k, v in self._default_globals.items():
- make_globals.setdefault(k, v)
-
- if config_incrementals is None:
- self.incrementals = INCREMENTALS
- else:
- self.incrementals = config_incrementals
- if not isinstance(self.incrementals, frozenset):
- self.incrementals = frozenset(self.incrementals)
-
- self.module_priority = ("user", "default")
- self.modules = {}
- modules_file = os.path.join(config_root, MODULES_FILE_PATH)
- modules_loader = KeyValuePairFileLoader(modules_file, None, None)
- modules_dict, modules_errors = modules_loader.load()
- self.modules["user"] = modules_dict
- if self.modules["user"] is None:
- self.modules["user"] = {}
- user_auxdbmodule = \
- self.modules["user"].get("portdbapi.auxdbmodule")
- if user_auxdbmodule is not None and \
- user_auxdbmodule in self._module_aliases:
- warnings.warn("'%s' is deprecated: %s" %
- (user_auxdbmodule, modules_file))
-
- self.modules["default"] = {
- "portdbapi.auxdbmodule": "portage.cache.flat_hash.mtime_md5_database",
- }
-
- self.configlist=[]
-
- # back up our incremental variables:
- self.configdict={}
- self._use_expand_dict = {}
- # configlist will contain: [ env.d, globals, features, defaults, conf, pkg, backupenv, env ]
- self.configlist.append({})
- self.configdict["env.d"] = self.configlist[-1]
-
- self.configlist.append({})
- self.configdict["repo"] = self.configlist[-1]
-
- self.configlist.append({})
- self.configdict["features"] = self.configlist[-1]
-
- self.configlist.append({})
- self.configdict["pkginternal"] = self.configlist[-1]
-
- # env_d will be None if profile.env doesn't exist.
- if env_d:
- self.configdict["env.d"].update(env_d)
-
- # backupenv is used for calculating incremental variables.
- if env is None:
- env = os.environ
-
- # Avoid potential UnicodeDecodeError exceptions later.
- env_unicode = dict((_unicode_decode(k), _unicode_decode(v))
- for k, v in env.items())
-
- self.backupenv = env_unicode
-
- if env_d:
- # Remove duplicate values so they don't override updated
- # profile.env values later (profile.env is reloaded in each
- # call to self.regenerate).
- for k, v in env_d.items():
- try:
- if self.backupenv[k] == v:
- del self.backupenv[k]
- except KeyError:
- pass
- del k, v
-
- self.configdict["env"] = LazyItemsDict(self.backupenv)
-
- self.configlist.append(make_globals)
- self.configdict["globals"]=self.configlist[-1]
-
- self.make_defaults_use = []
-
- #Loading Repositories
- self["PORTAGE_CONFIGROOT"] = config_root
- self["ROOT"] = target_root
- self["SYSROOT"] = sysroot
- self["EPREFIX"] = eprefix
- self["EROOT"] = eroot
- self["ESYSROOT"] = esysroot
- self["BROOT"] = broot
- known_repos = []
- portdir = ""
- portdir_overlay = ""
- portdir_sync = None
- for confs in [make_globals, make_conf, self.configdict["env"]]:
- v = confs.get("PORTDIR")
- if v is not None:
- portdir = v
- known_repos.append(v)
- v = confs.get("PORTDIR_OVERLAY")
- if v is not None:
- portdir_overlay = v
- known_repos.extend(shlex_split(v))
- v = confs.get("SYNC")
- if v is not None:
- portdir_sync = v
- if 'PORTAGE_RSYNC_EXTRA_OPTS' in confs:
- self['PORTAGE_RSYNC_EXTRA_OPTS'] = confs['PORTAGE_RSYNC_EXTRA_OPTS']
-
- self["PORTDIR"] = portdir
- self["PORTDIR_OVERLAY"] = portdir_overlay
- if portdir_sync:
- self["SYNC"] = portdir_sync
- self.lookuplist = [self.configdict["env"]]
- if repositories is None:
- self.repositories = load_repository_config(self)
- else:
- self.repositories = repositories
-
- known_repos.extend(repo.location for repo in self.repositories)
- known_repos = frozenset(known_repos)
-
- self['PORTAGE_REPOSITORIES'] = self.repositories.config_string()
- self.backup_changes('PORTAGE_REPOSITORIES')
-
- #filling PORTDIR and PORTDIR_OVERLAY variable for compatibility
- main_repo = self.repositories.mainRepo()
- if main_repo is not None:
- self["PORTDIR"] = main_repo.location
- self.backup_changes("PORTDIR")
- expand_map["PORTDIR"] = self["PORTDIR"]
-
- # repoman controls PORTDIR_OVERLAY via the environment, so no
- # special cases are needed here.
- portdir_overlay = list(self.repositories.repoLocationList())
- if portdir_overlay and portdir_overlay[0] == self["PORTDIR"]:
- portdir_overlay = portdir_overlay[1:]
-
- new_ov = []
- if portdir_overlay:
- for ov in portdir_overlay:
- ov = normalize_path(ov)
- if isdir_raise_eaccess(ov) or portage._sync_mode:
- new_ov.append(portage._shell_quote(ov))
- else:
- writemsg(_("!!! Invalid PORTDIR_OVERLAY"
- " (not a dir): '%s'\n") % ov, noiselevel=-1)
-
- self["PORTDIR_OVERLAY"] = " ".join(new_ov)
- self.backup_changes("PORTDIR_OVERLAY")
- expand_map["PORTDIR_OVERLAY"] = self["PORTDIR_OVERLAY"]
-
- locations_manager.set_port_dirs(self["PORTDIR"], self["PORTDIR_OVERLAY"])
- locations_manager.load_profiles(self.repositories, known_repos)
-
- profiles_complex = locations_manager.profiles_complex
- self.profiles = locations_manager.profiles
- self.profile_path = locations_manager.profile_path
- self.user_profile_dir = locations_manager.user_profile_dir
-
- try:
- packages_list = [grabfile_package(
- os.path.join(x.location, "packages"),
- verify_eapi=True, eapi=x.eapi, eapi_default=None,
- allow_repo=allow_profile_repo_deps(x),
- allow_build_id=x.allow_build_id)
- for x in profiles_complex]
- except EnvironmentError as e:
- _raise_exc(e)
-
- self.packages = tuple(stack_lists(packages_list, incremental=1))
-
- # revmaskdict
- self.prevmaskdict={}
- for x in self.packages:
- # Negative atoms are filtered by the above stack_lists() call.
- if not isinstance(x, Atom):
- x = Atom(x.lstrip('*'))
- self.prevmaskdict.setdefault(x.cp, []).append(x)
-
- self.unpack_dependencies = load_unpack_dependencies_configuration(self.repositories)
-
- mygcfg = {}
- if profiles_complex:
- mygcfg_dlists = []
- for x in profiles_complex:
- # Prevent accidents triggered by USE="${USE} ..." settings
- # at the top of make.defaults which caused parent profile
- # USE to override parent profile package.use settings.
- # It would be nice to guard USE_EXPAND variables like
- # this too, but unfortunately USE_EXPAND is not known
- # until after make.defaults has been evaluated, so that
- # will require some form of make.defaults preprocessing.
- expand_map.pop("USE", None)
- mygcfg_dlists.append(
- getconfig(os.path.join(x.location, "make.defaults"),
- tolerant=tolerant, expand=expand_map,
- recursive=x.portage1_directories))
- self._make_defaults = mygcfg_dlists
- mygcfg = stack_dicts(mygcfg_dlists,
- incrementals=self.incrementals)
- if mygcfg is None:
- mygcfg = {}
- self.configlist.append(mygcfg)
- self.configdict["defaults"]=self.configlist[-1]
-
- mygcfg = {}
- for x in make_conf_paths:
- mygcfg.update(getconfig(x,
- tolerant=tolerant, allow_sourcing=True,
- expand=expand_map, recursive=True) or {})
-
- # __* variables set in make.conf are local and are not propagated.
- mygcfg = {k: v for k, v in mygcfg.items() if not k.startswith("__")}
-
- # Don't allow the user to override certain variables in make.conf
- profile_only_variables = self.configdict["defaults"].get(
- "PROFILE_ONLY_VARIABLES", "").split()
- profile_only_variables = stack_lists([profile_only_variables])
- non_user_variables = set()
- non_user_variables.update(profile_only_variables)
- non_user_variables.update(self._env_blacklist)
- non_user_variables.update(self._global_only_vars)
- non_user_variables = frozenset(non_user_variables)
- self._non_user_variables = non_user_variables
-
- self._env_d_blacklist = frozenset(chain(
- profile_only_variables,
- self._env_blacklist,
- ))
- env_d = self.configdict["env.d"]
- for k in self._env_d_blacklist:
- env_d.pop(k, None)
-
- for k in profile_only_variables:
- mygcfg.pop(k, None)
-
- self.configlist.append(mygcfg)
- self.configdict["conf"]=self.configlist[-1]
-
- self.configlist.append(LazyItemsDict())
- self.configdict["pkg"]=self.configlist[-1]
-
- self.configdict["backupenv"] = self.backupenv
-
- # Don't allow the user to override certain variables in the env
- for k in profile_only_variables:
- self.backupenv.pop(k, None)
-
- self.configlist.append(self.configdict["env"])
-
- # make lookuplist for loading package.*
- self.lookuplist=self.configlist[:]
- self.lookuplist.reverse()
-
- # Blacklist vars that could interfere with portage internals.
- for blacklisted in self._env_blacklist:
- for cfg in self.lookuplist:
- cfg.pop(blacklisted, None)
- self.backupenv.pop(blacklisted, None)
- del blacklisted, cfg
-
- self["PORTAGE_CONFIGROOT"] = config_root
- self.backup_changes("PORTAGE_CONFIGROOT")
- self["ROOT"] = target_root
- self.backup_changes("ROOT")
- self["SYSROOT"] = sysroot
- self.backup_changes("SYSROOT")
- self["EPREFIX"] = eprefix
- self.backup_changes("EPREFIX")
- self["EROOT"] = eroot
- self.backup_changes("EROOT")
- self["ESYSROOT"] = esysroot
- self.backup_changes("ESYSROOT")
- self["BROOT"] = broot
- self.backup_changes("BROOT")
-
- # The prefix of the running portage instance is used in the
- # ebuild environment to implement the --host-root option for
- # best_version and has_version.
- self["PORTAGE_OVERRIDE_EPREFIX"] = portage.const.EPREFIX
- self.backup_changes("PORTAGE_OVERRIDE_EPREFIX")
-
- self._ppropertiesdict = portage.dep.ExtendedAtomDict(dict)
- self._paccept_restrict = portage.dep.ExtendedAtomDict(dict)
- self._penvdict = portage.dep.ExtendedAtomDict(dict)
- self._pbashrcdict = {}
- self._pbashrc = ()
-
- self._repo_make_defaults = {}
- for repo in self.repositories.repos_with_profiles():
- d = getconfig(os.path.join(repo.location, "profiles", "make.defaults"),
- tolerant=tolerant, expand=self.configdict["globals"].copy(), recursive=repo.portage1_profiles) or {}
- if d:
- for k in chain(self._env_blacklist,
- profile_only_variables, self._global_only_vars):
- d.pop(k, None)
- self._repo_make_defaults[repo.name] = d
-
- #Read all USE related files from profiles and optionally from user config.
- self._use_manager = UseManager(self.repositories, profiles_complex,
- abs_user_config, self._isStable, user_config=local_config)
- #Initialize all USE related variables we track ourselves.
- self.usemask = self._use_manager.getUseMask()
- self.useforce = self._use_manager.getUseForce()
- self.configdict["conf"]["USE"] = \
- self._use_manager.extract_global_USE_changes( \
- self.configdict["conf"].get("USE", ""))
-
- #Read license_groups and optionally license_groups and package.license from user config
- self._license_manager = LicenseManager(locations_manager.profile_locations, \
- abs_user_config, user_config=local_config)
- #Extract '*/*' entries from package.license
- self.configdict["conf"]["ACCEPT_LICENSE"] = \
- self._license_manager.extract_global_changes( \
- self.configdict["conf"].get("ACCEPT_LICENSE", ""))
-
- # profile.bashrc
- self._profile_bashrc = tuple(os.path.isfile(os.path.join(profile.location, 'profile.bashrc'))
- for profile in profiles_complex)
-
- if local_config:
- #package.properties
- propdict = grabdict_package(os.path.join(
- abs_user_config, "package.properties"), recursive=1, allow_wildcard=True, \
- allow_repo=True, verify_eapi=False,
- allow_build_id=True)
- v = propdict.pop("*/*", None)
- if v is not None:
- if "ACCEPT_PROPERTIES" in self.configdict["conf"]:
- self.configdict["conf"]["ACCEPT_PROPERTIES"] += " " + " ".join(v)
- else:
- self.configdict["conf"]["ACCEPT_PROPERTIES"] = " ".join(v)
- for k, v in propdict.items():
- self._ppropertiesdict.setdefault(k.cp, {})[k] = v
-
- # package.accept_restrict
- d = grabdict_package(os.path.join(
- abs_user_config, "package.accept_restrict"),
- recursive=True, allow_wildcard=True,
- allow_repo=True, verify_eapi=False,
- allow_build_id=True)
- v = d.pop("*/*", None)
- if v is not None:
- if "ACCEPT_RESTRICT" in self.configdict["conf"]:
- self.configdict["conf"]["ACCEPT_RESTRICT"] += " " + " ".join(v)
- else:
- self.configdict["conf"]["ACCEPT_RESTRICT"] = " ".join(v)
- for k, v in d.items():
- self._paccept_restrict.setdefault(k.cp, {})[k] = v
-
- #package.env
- penvdict = grabdict_package(os.path.join(
- abs_user_config, "package.env"), recursive=1, allow_wildcard=True, \
- allow_repo=True, verify_eapi=False,
- allow_build_id=True)
- v = penvdict.pop("*/*", None)
- if v is not None:
- global_wildcard_conf = {}
- self._grab_pkg_env(v, global_wildcard_conf)
- incrementals = self.incrementals
- conf_configdict = self.configdict["conf"]
- for k, v in global_wildcard_conf.items():
- if k in incrementals:
- if k in conf_configdict:
- conf_configdict[k] = \
- conf_configdict[k] + " " + v
- else:
- conf_configdict[k] = v
- else:
- conf_configdict[k] = v
- expand_map[k] = v
-
- for k, v in penvdict.items():
- self._penvdict.setdefault(k.cp, {})[k] = v
-
- # package.bashrc
- for profile in profiles_complex:
- if not 'profile-bashrcs' in profile.profile_formats:
- continue
- self._pbashrcdict[profile] = \
- portage.dep.ExtendedAtomDict(dict)
- bashrc = grabdict_package(os.path.join(profile.location,
- "package.bashrc"), recursive=1, allow_wildcard=True,
- allow_repo=allow_profile_repo_deps(profile),
- verify_eapi=True,
- eapi=profile.eapi, eapi_default=None,
- allow_build_id=profile.allow_build_id)
- if not bashrc:
- continue
-
- for k, v in bashrc.items():
- envfiles = [os.path.join(profile.location,
- "bashrc",
- envname) for envname in v]
- self._pbashrcdict[profile].setdefault(k.cp, {})\
- .setdefault(k, []).extend(envfiles)
-
- #getting categories from an external file now
- self.categories = [grabfile(os.path.join(x, "categories")) \
- for x in locations_manager.profile_and_user_locations]
- category_re = dbapi._category_re
- # categories used to be a tuple, but now we use a frozenset
- # for hashed category validation in pordbapi.cp_list()
- self.categories = frozenset(
- x for x in stack_lists(self.categories, incremental=1)
- if category_re.match(x) is not None)
-
- archlist = [grabfile(os.path.join(x, "arch.list")) \
- for x in locations_manager.profile_and_user_locations]
- archlist = sorted(stack_lists(archlist, incremental=1))
- self.configdict["conf"]["PORTAGE_ARCHLIST"] = " ".join(archlist)
-
- pkgprovidedlines = []
- for x in profiles_complex:
- provpath = os.path.join(x.location, "package.provided")
- if os.path.exists(provpath):
- if _get_eapi_attrs(x.eapi).allows_package_provided:
- pkgprovidedlines.append(grabfile(provpath,
- recursive=x.portage1_directories))
- else:
- # TODO: bail out?
- writemsg((_("!!! package.provided not allowed in EAPI %s: ")
- %x.eapi)+x.location+"\n",
- noiselevel=-1)
-
- pkgprovidedlines = stack_lists(pkgprovidedlines, incremental=1)
- has_invalid_data = False
- for x in range(len(pkgprovidedlines)-1, -1, -1):
- myline = pkgprovidedlines[x]
- if not isvalidatom("=" + myline):
- writemsg(_("Invalid package name in package.provided: %s\n") % \
- myline, noiselevel=-1)
- has_invalid_data = True
- del pkgprovidedlines[x]
- continue
- cpvr = catpkgsplit(pkgprovidedlines[x])
- if not cpvr or cpvr[0] == "null":
- writemsg(_("Invalid package name in package.provided: ")+pkgprovidedlines[x]+"\n",
- noiselevel=-1)
- has_invalid_data = True
- del pkgprovidedlines[x]
- continue
- if has_invalid_data:
- writemsg(_("See portage(5) for correct package.provided usage.\n"),
- noiselevel=-1)
- self.pprovideddict = {}
- for x in pkgprovidedlines:
- x_split = catpkgsplit(x)
- if x_split is None:
- continue
- mycatpkg = cpv_getkey(x)
- if mycatpkg in self.pprovideddict:
- self.pprovideddict[mycatpkg].append(x)
- else:
- self.pprovideddict[mycatpkg]=[x]
-
- # reasonable defaults; this is important as without USE_ORDER,
- # USE will always be "" (nothing set)!
- if "USE_ORDER" not in self:
- self["USE_ORDER"] = "env:pkg:conf:defaults:pkginternal:features:repo:env.d"
- self.backup_changes("USE_ORDER")
-
- if "CBUILD" not in self and "CHOST" in self:
- self["CBUILD"] = self["CHOST"]
- self.backup_changes("CBUILD")
-
- if "USERLAND" not in self:
- # Set default USERLAND so that our test cases can assume that
- # it's always set. This allows isolated-functions.sh to avoid
- # calling uname -s when sourced.
- system = platform.system()
- if system is not None and \
- (system.endswith("BSD") or system == "DragonFly"):
- self["USERLAND"] = "BSD"
- else:
- self["USERLAND"] = "GNU"
- self.backup_changes("USERLAND")
-
- default_inst_ids = {
- "PORTAGE_INST_GID": "0",
- "PORTAGE_INST_UID": "0",
- }
-
- # PREFIX LOCAL: inventing UID/GID based on a path is a very
- # bad idea; it breaks almost everything, since group ids
- # don't have to match when a user belongs to many groups.
- # In particular this breaks the configure-set portage
- # group and user (in portage/data.py)
- eroot_or_parent = first_existing(eroot)
- unprivileged = True
- # try:
- # eroot_st = os.stat(eroot_or_parent)
- # except OSError:
- # pass
- # else:
- #
- # if portage.data._unprivileged_mode(
- # eroot_or_parent, eroot_st):
- # unprivileged = True
- #
- # default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
- # default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
- #
- # if "PORTAGE_USERNAME" not in self:
- # try:
- # pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- # except KeyError:
- # pass
- # else:
- # self["PORTAGE_USERNAME"] = pwd_struct.pw_name
- # self.backup_changes("PORTAGE_USERNAME")
- #
- # if "PORTAGE_GRPNAME" not in self:
- # try:
- # grp_struct = grp.getgrgid(eroot_st.st_gid)
- # except KeyError:
- # pass
- # else:
- # self["PORTAGE_GRPNAME"] = grp_struct.gr_name
- # self.backup_changes("PORTAGE_GRPNAME")
- # END PREFIX LOCAL
-
- for var, default_val in default_inst_ids.items():
- try:
- self[var] = str(int(self.get(var, default_val)))
- except ValueError:
- writemsg(_("!!! %s='%s' is not a valid integer. "
- "Falling back to %s.\n") % (var, self[var], default_val),
- noiselevel=-1)
- self[var] = default_val
- self.backup_changes(var)
-
- self.depcachedir = self.get("PORTAGE_DEPCACHEDIR")
- if self.depcachedir is None:
- self.depcachedir = os.path.join(os.sep,
- portage.const.EPREFIX, DEPCACHE_PATH.lstrip(os.sep))
- if unprivileged and target_root != os.sep:
- # In unprivileged mode, automatically make
- # depcachedir relative to target_root if the
- # default depcachedir is not writable.
- if not os.access(first_existing(self.depcachedir),
- os.W_OK):
- self.depcachedir = os.path.join(eroot,
- DEPCACHE_PATH.lstrip(os.sep))
-
- self["PORTAGE_DEPCACHEDIR"] = self.depcachedir
- self.backup_changes("PORTAGE_DEPCACHEDIR")
-
- if portage._internal_caller:
- self["PORTAGE_INTERNAL_CALLER"] = "1"
- self.backup_changes("PORTAGE_INTERNAL_CALLER")
-
- # initialize self.features
- self.regenerate()
- feature_use = []
- if "test" in self.features:
- feature_use.append("test")
- self.configdict["features"]["USE"] = self._default_features_use = " ".join(feature_use)
- if feature_use:
- # Regenerate USE so that the initial "test" flag state is
- # correct for evaluation of !test? conditionals in RESTRICT.
- self.regenerate()
-
- if unprivileged:
- self.features.add('unprivileged')
-
- if bsd_chflags:
- self.features.add('chflags')
-
- self._init_iuse()
-
- self._validate_commands()
-
- for k in self._case_insensitive_vars:
- if k in self:
- self[k] = self[k].lower()
- self.backup_changes(k)
-
- # The first constructed config object initializes these modules,
- # and subsequent calls to the _init() functions have no effect.
- portage.output._init(config_root=self['PORTAGE_CONFIGROOT'])
- portage.data._init(self)
-
- if mycpv:
- self.setcpv(mycpv)
-
- def _init_iuse(self):
- self._iuse_effective = self._calc_iuse_effective()
- self._iuse_implicit_match = _iuse_implicit_match_cache(self)
-
- @property
- def mygcfg(self):
- warnings.warn("portage.config.mygcfg is deprecated", stacklevel=3)
- return {}
-
- def _validate_commands(self):
- for k in special_env_vars.validate_commands:
- v = self.get(k)
- if v is not None:
- valid, v_split = validate_cmd_var(v)
-
- if not valid:
- if v_split:
- writemsg_level(_("%s setting is invalid: '%s'\n") % \
- (k, v), level=logging.ERROR, noiselevel=-1)
-
- # before deleting the invalid setting, backup
- # the default value if available
- v = self.configdict['globals'].get(k)
- if v is not None:
- default_valid, v_split = validate_cmd_var(v)
- if not default_valid:
- if v_split:
- writemsg_level(
- _("%s setting from make.globals" + \
- " is invalid: '%s'\n") % \
- (k, v), level=logging.ERROR, noiselevel=-1)
- # make.globals seems corrupt, so try for
- # a hardcoded default instead
- v = self._default_globals.get(k)
-
- # delete all settings for this key,
- # including the invalid one
- del self[k]
- self.backupenv.pop(k, None)
- if v:
- # restore validated default
- self.configdict['globals'][k] = v
-
- def _init_dirs(self):
- """
- Create a few directories that are critical to portage operation
- """
- if not os.access(self["EROOT"], os.W_OK):
- return
-
- # gid, mode, mask, preserve_perms
- dir_mode_map = {
- "tmp" : ( -1, 0o1777, 0, True),
- "var/tmp" : ( -1, 0o1777, 0, True),
- PRIVATE_PATH : (portage_gid, 0o2750, 0o2, False),
- CACHE_PATH : (portage_gid, 0o755, 0o2, False)
- }
-
- for mypath, (gid, mode, modemask, preserve_perms) \
- in dir_mode_map.items():
- mydir = os.path.join(self["EROOT"], mypath)
- if preserve_perms and os.path.isdir(mydir):
- # Only adjust permissions on some directories if
- # they don't exist yet. This gives freedom to the
- # user to adjust permissions to suit their taste.
- continue
- try:
- ensure_dirs(mydir, gid=gid, mode=mode, mask=modemask)
- except PortageException as e:
- writemsg(_("!!! Directory initialization failed: '%s'\n") % mydir,
- noiselevel=-1)
- writemsg("!!! %s\n" % str(e),
- noiselevel=-1)
-
- @property
- def _keywords_manager(self):
- if self._keywords_manager_obj is None:
- self._keywords_manager_obj = KeywordsManager(
- self._locations_manager.profiles_complex,
- self._locations_manager.abs_user_config,
- self.local_config,
- global_accept_keywords=self.configdict["defaults"].get("ACCEPT_KEYWORDS", ""))
- return self._keywords_manager_obj
-
- @property
- def _mask_manager(self):
- if self._mask_manager_obj is None:
- self._mask_manager_obj = MaskManager(self.repositories,
- self._locations_manager.profiles_complex,
- self._locations_manager.abs_user_config,
- user_config=self.local_config,
- strict_umatched_removal=self._unmatched_removal)
- return self._mask_manager_obj
-
- @property
- def _virtuals_manager(self):
- if self._virtuals_manager_obj is None:
- self._virtuals_manager_obj = VirtualsManager(self.profiles)
- return self._virtuals_manager_obj
-
- @property
- def pkeywordsdict(self):
- result = self._keywords_manager.pkeywordsdict.copy()
- for k, v in result.items():
- result[k] = v.copy()
- return result
-
- @property
- def pmaskdict(self):
- return self._mask_manager._pmaskdict.copy()
-
- @property
- def punmaskdict(self):
- return self._mask_manager._punmaskdict.copy()
-
- @property
- def soname_provided(self):
- if self._soname_provided is None:
- d = stack_dictlist((grabdict(
- os.path.join(x, "soname.provided"), recursive=True)
- for x in self.profiles), incremental=True)
- self._soname_provided = frozenset(SonameAtom(cat, soname)
- for cat, sonames in d.items() for soname in sonames)
- return self._soname_provided
-
- def expandLicenseTokens(self, tokens):
- """ Take a token from ACCEPT_LICENSE or package.license and expand it
- if it's a group token (indicated by @) or just return it if it's not a
- group. If a group is negated then negate all group elements."""
- return self._license_manager.expandLicenseTokens(tokens)
-
- def validate(self):
- """Validate miscellaneous settings and display warnings if necessary.
- (This code was previously in the global scope of portage.py)"""
-
- groups = self.get("ACCEPT_KEYWORDS", "").split()
- archlist = self.archlist()
- if not archlist:
- writemsg(_("--- 'profiles/arch.list' is empty or "
- "not available. Empty ebuild repository?\n"), noiselevel=1)
- else:
- for group in groups:
- if group not in archlist and \
- not (group.startswith("-") and group[1:] in archlist) and \
- group not in ("*", "~*", "**"):
- writemsg(_("!!! INVALID ACCEPT_KEYWORDS: %s\n") % str(group),
- noiselevel=-1)
-
- profile_broken = False
-
- # getmaskingstatus requires ARCH for ACCEPT_KEYWORDS support
- arch = self.get('ARCH')
- if not self.profile_path or not arch:
- profile_broken = True
- else:
- # If any one of these files exists, then
- # the profile is considered valid.
- for x in ("make.defaults", "parent",
- "packages", "use.force", "use.mask"):
- if exists_raise_eaccess(os.path.join(self.profile_path, x)):
- break
- else:
- profile_broken = True
-
- if profile_broken and not portage._sync_mode:
- abs_profile_path = None
- for x in (PROFILE_PATH, 'etc/make.profile'):
- x = os.path.join(self["PORTAGE_CONFIGROOT"], x)
- try:
- os.lstat(x)
- except OSError:
- pass
- else:
- abs_profile_path = x
- break
-
- if abs_profile_path is None:
- abs_profile_path = os.path.join(self["PORTAGE_CONFIGROOT"],
- PROFILE_PATH)
-
- writemsg(_("\n\n!!! %s is not a symlink and will probably prevent most merges.\n") % abs_profile_path,
- noiselevel=-1)
- writemsg(_("!!! It should point into a profile within %s/profiles/\n") % self["PORTDIR"])
- writemsg(_("!!! (You can safely ignore this message when syncing. It's harmless.)\n\n\n"))
-
- abs_user_virtuals = os.path.join(self["PORTAGE_CONFIGROOT"],
- USER_VIRTUALS_FILE)
- if os.path.exists(abs_user_virtuals):
- writemsg("\n!!! /etc/portage/virtuals is deprecated in favor of\n")
- writemsg("!!! /etc/portage/profile/virtuals. Please move it to\n")
- writemsg("!!! this new location.\n\n")
-
- if not sandbox_capable and not macossandbox_capable and \
- ("sandbox" in self.features or "usersandbox" in self.features):
- if self.profile_path is not None and \
- os.path.realpath(self.profile_path) == \
- os.path.realpath(os.path.join(
- self["PORTAGE_CONFIGROOT"], PROFILE_PATH)):
- # Don't show this warning when running repoman and the
- # sandbox feature came from a profile that doesn't belong
- # to the user.
- writemsg(colorize("BAD", _("!!! Problem with sandbox"
- " binary. Disabling...\n\n")), noiselevel=-1)
-
- if "fakeroot" in self.features and \
- not fakeroot_capable:
- writemsg(_("!!! FEATURES=fakeroot is enabled, but the "
- "fakeroot binary is not installed.\n"), noiselevel=-1)
-
- if "webrsync-gpg" in self.features:
- writemsg(_("!!! FEATURES=webrsync-gpg is deprecated, see the make.conf(5) man page.\n"),
- noiselevel=-1)
-
- if os.getuid() == 0 and not hasattr(os, "setgroups"):
- warning_shown = False
-
- if "userpriv" in self.features:
- writemsg(_("!!! FEATURES=userpriv is enabled, but "
- "os.setgroups is not available.\n"), noiselevel=-1)
- warning_shown = True
-
- if "userfetch" in self.features:
- writemsg(_("!!! FEATURES=userfetch is enabled, but "
- "os.setgroups is not available.\n"), noiselevel=-1)
- warning_shown = True
-
- if warning_shown and platform.python_implementation() == 'PyPy':
- writemsg(_("!!! See https://bugs.pypy.org/issue833 for details.\n"),
- noiselevel=-1)
-
- binpkg_compression = self.get("BINPKG_COMPRESS")
- if binpkg_compression:
- try:
- compression = _compressors[binpkg_compression]
- except KeyError as e:
- writemsg("!!! BINPKG_COMPRESS contains invalid or "
- "unsupported compression method: %s" % e.args[0],
- noiselevel=-1)
- else:
- try:
- compression_binary = shlex_split(
- portage.util.varexpand(compression["compress"],
- mydict=self))[0]
- except IndexError as e:
- writemsg("!!! BINPKG_COMPRESS contains invalid or "
- "unsupported compression method: %s" % e.args[0],
- noiselevel=-1)
- else:
- if portage.process.find_binary(
- compression_binary) is None:
- missing_package = compression["package"]
- writemsg("!!! BINPKG_COMPRESS unsupported %s. "
- "Missing package: %s" %
- (binpkg_compression, missing_package),
- noiselevel=-1)
-
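# A hedged, standard-library-only sketch of the BINPKG_COMPRESS check above.
# The compressor table is a made-up stand-in for portage's internal
# _compressors mapping; only the "compress" and "package" keys visibly used
# above are modelled here.
import shlex
import shutil

example_compressors = {
    "bzip2": {"compress": "bzip2 ${BINPKG_COMPRESS_FLAGS}", "package": "app-arch/bzip2"},
    "zstd": {"compress": "zstd ${BINPKG_COMPRESS_FLAGS}", "package": "app-arch/zstd"},
}

def check_binpkg_compress(name, compressors=example_compressors):
    info = compressors.get(name)
    if info is None:
        return "unsupported compression method: %s" % name
    binary = shlex.split(info["compress"])[0]
    if shutil.which(binary) is None:
        return "missing package: %s" % info["package"]
    return None  # the configured compressor is usable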
- def load_best_module(self,property_string):
- best_mod = best_from_dict(property_string,self.modules,self.module_priority)
- mod = None
- try:
- mod = load_mod(best_mod)
- except ImportError:
- if best_mod in self._module_aliases:
- mod = load_mod(self._module_aliases[best_mod])
- elif not best_mod.startswith("cache."):
- raise
- else:
- best_mod = "portage." + best_mod
- try:
- mod = load_mod(best_mod)
- except ImportError:
- raise
- return mod
-
- def lock(self):
- self.locked = 1
-
- def unlock(self):
- self.locked = 0
-
- def modifying(self):
- if self.locked:
- raise Exception(_("Configuration is locked."))
-
- def backup_changes(self,key=None):
- self.modifying()
- if key and key in self.configdict["env"]:
- self.backupenv[key] = copy.deepcopy(self.configdict["env"][key])
- else:
- raise KeyError(_("No such key defined in environment: %s") % key)
-
- def reset(self, keeping_pkg=0, use_cache=None):
- """
- Restore environment from self.backupenv, call self.regenerate()
- @param keeping_pkg: Should we keep the setcpv() data or delete it.
- @type keeping_pkg: Boolean
- 		@rtype: None
- """
-
- if use_cache is not None:
- warnings.warn("The use_cache parameter for config.reset() is deprecated and without effect.",
- DeprecationWarning, stacklevel=2)
-
- self.modifying()
- self.configdict["env"].clear()
- self.configdict["env"].update(self.backupenv)
-
- self.modifiedkeys = []
- if not keeping_pkg:
- self.mycpv = None
- self._setcpv_args_hash = None
- self.puse = ""
- del self._penv[:]
- self.configdict["pkg"].clear()
- self.configdict["pkginternal"].clear()
- self.configdict["features"]["USE"] = self._default_features_use
- self.configdict["repo"].clear()
- self.configdict["defaults"]["USE"] = \
- " ".join(self.make_defaults_use)
- self.usemask = self._use_manager.getUseMask()
- self.useforce = self._use_manager.getUseForce()
- self.regenerate()
-
- class _lazy_vars:
-
- __slots__ = ('built_use', 'settings', 'values')
-
- def __init__(self, built_use, settings):
- self.built_use = built_use
- self.settings = settings
- self.values = None
-
- def __getitem__(self, k):
- if self.values is None:
- self.values = self._init_values()
- return self.values[k]
-
- def _init_values(self):
- values = {}
- settings = self.settings
- use = self.built_use
- if use is None:
- use = frozenset(settings['PORTAGE_USE'].split())
-
- values['ACCEPT_LICENSE'] = settings._license_manager.get_prunned_accept_license( \
- settings.mycpv, use, settings.get('LICENSE', ''), settings.get('SLOT'), settings.get('PORTAGE_REPO_NAME'))
- values['PORTAGE_PROPERTIES'] = self._flatten('PROPERTIES', use, settings)
- values['PORTAGE_RESTRICT'] = self._flatten('RESTRICT', use, settings)
- return values
-
- def _flatten(self, var, use, settings):
- try:
- restrict = set(use_reduce(settings.get(var, ''), uselist=use, flat=True))
- except InvalidDependString:
- restrict = set()
- return ' '.join(sorted(restrict))
-
- class _lazy_use_expand:
- """
- Lazily evaluate USE_EXPAND variables since they are only needed when
- 	an ebuild shell is spawned. Variable values are made consistent with
- the previously calculated USE settings.
- """
-
- def __init__(self, settings, unfiltered_use,
- use, usemask, iuse_effective,
- use_expand_split, use_expand_dict):
- self._settings = settings
- self._unfiltered_use = unfiltered_use
- self._use = use
- self._usemask = usemask
- self._iuse_effective = iuse_effective
- self._use_expand_split = use_expand_split
- self._use_expand_dict = use_expand_dict
-
- def __getitem__(self, key):
- prefix = key.lower() + '_'
- prefix_len = len(prefix)
- expand_flags = set( x[prefix_len:] for x in self._use \
- if x[:prefix_len] == prefix )
- var_split = self._use_expand_dict.get(key, '').split()
- # Preserve the order of var_split because it can matter for things
- # like LINGUAS.
- var_split = [ x for x in var_split if x in expand_flags ]
- var_split.extend(expand_flags.difference(var_split))
- has_wildcard = '*' in expand_flags
- if has_wildcard:
- var_split = [ x for x in var_split if x != "*" ]
- has_iuse = set()
- for x in self._iuse_effective:
- if x[:prefix_len] == prefix:
- has_iuse.add(x[prefix_len:])
- if has_wildcard:
- # * means to enable everything in IUSE that's not masked
- if has_iuse:
- usemask = self._usemask
- for suffix in has_iuse:
- x = prefix + suffix
- if x not in usemask:
- if suffix not in expand_flags:
- var_split.append(suffix)
- else:
- # If there is a wildcard and no matching flags in IUSE then
- # LINGUAS should be unset so that all .mo files are
- # installed.
- var_split = []
- # Make the flags unique and filter them according to IUSE.
- # Also, continue to preserve order for things like LINGUAS
- # and filter any duplicates that variable may contain.
- filtered_var_split = []
- remaining = has_iuse.intersection(var_split)
- for x in var_split:
- if x in remaining:
- remaining.remove(x)
- filtered_var_split.append(x)
- var_split = filtered_var_split
-
- return ' '.join(var_split)
-
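# Simplified standalone sketch of the USE_EXPAND regrouping performed above:
# recover a USE_EXPAND variable's value from the final USE flags while
# preserving any profile-provided ordering (the LINGUAS case).  Wildcards,
# masking and IUSE filtering are deliberately ignored here.
def regroup_use_expand(var, use_flags, profile_value=""):
    prefix = var.lower() + "_"
    enabled = {f[len(prefix):] for f in use_flags if f.startswith(prefix)}
    ordered = [v for v in profile_value.split() if v in enabled]
    ordered.extend(sorted(enabled.difference(ordered)))
    return " ".join(ordered)

# regroup_use_expand("LINGUAS", {"linguas_de", "linguas_en"}, "en de fr")
# yields "en de"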
- def _setcpv_recursion_gate(f):
- """
- Raise AssertionError for recursive setcpv calls.
- """
- def wrapper(self, *args, **kwargs):
- if hasattr(self, '_setcpv_active'):
- raise AssertionError('setcpv recursion detected')
- self._setcpv_active = True
- try:
- return f(self, *args, **kwargs)
- finally:
- del self._setcpv_active
- return wrapper
-
- @_setcpv_recursion_gate
- def setcpv(self, mycpv, use_cache=None, mydb=None):
- """
- Load a particular CPV into the config, this lets us see the
- Default USE flags for a particular ebuild as well as the USE
- flags from package.use.
-
- @param mycpv: A cpv to load
- @type mycpv: string
- @param mydb: a dbapi instance that supports aux_get with the IUSE key.
- @type mydb: dbapi or derivative.
- @rtype: None
- """
-
- if use_cache is not None:
- warnings.warn("The use_cache parameter for config.setcpv() is deprecated and without effect.",
- DeprecationWarning, stacklevel=2)
-
- self.modifying()
-
- pkg = None
- built_use = None
- explicit_iuse = None
- if not isinstance(mycpv, str):
- pkg = mycpv
- mycpv = pkg.cpv
- mydb = pkg._metadata
- explicit_iuse = pkg.iuse.all
- args_hash = (mycpv, id(pkg))
- if pkg.built:
- built_use = pkg.use.enabled
- else:
- args_hash = (mycpv, id(mydb))
-
- if args_hash == self._setcpv_args_hash:
- return
- self._setcpv_args_hash = args_hash
-
- has_changed = False
- self.mycpv = mycpv
- cat, pf = catsplit(mycpv)
- cp = cpv_getkey(mycpv)
- cpv_slot = self.mycpv
- pkginternaluse = ""
- pkginternaluse_list = []
- feature_use = []
- iuse = ""
- pkg_configdict = self.configdict["pkg"]
- previous_iuse = pkg_configdict.get("IUSE")
- previous_iuse_effective = pkg_configdict.get("IUSE_EFFECTIVE")
- previous_features = pkg_configdict.get("FEATURES")
- previous_penv = self._penv
-
- aux_keys = self._setcpv_aux_keys
-
- # Discard any existing metadata and package.env settings from
- # the previous package instance.
- pkg_configdict.clear()
-
- pkg_configdict["CATEGORY"] = cat
- pkg_configdict["PF"] = pf
- repository = None
- eapi = None
- if mydb:
- if not hasattr(mydb, "aux_get"):
- for k in aux_keys:
- if k in mydb:
- # Make these lazy, since __getitem__ triggers
- # evaluation of USE conditionals which can't
- # occur until PORTAGE_USE is calculated below.
- pkg_configdict.addLazySingleton(k,
- mydb.__getitem__, k)
- else:
- # When calling dbapi.aux_get(), grab USE for built/installed
- 				# packages since we want to save it as PORTAGE_BUILT_USE for
- # evaluating conditional USE deps in atoms passed via IPC to
- # helpers like has_version and best_version.
- aux_keys = set(aux_keys)
- if hasattr(mydb, '_aux_cache_keys'):
- aux_keys = aux_keys.intersection(mydb._aux_cache_keys)
- aux_keys.add('USE')
- aux_keys = list(aux_keys)
- for k, v in zip(aux_keys, mydb.aux_get(self.mycpv, aux_keys)):
- pkg_configdict[k] = v
- built_use = frozenset(pkg_configdict.pop('USE').split())
- if not built_use:
- # Empty USE means this dbapi instance does not contain
- # built packages.
- built_use = None
- eapi = pkg_configdict['EAPI']
-
- repository = pkg_configdict.pop("repository", None)
- if repository is not None:
- pkg_configdict["PORTAGE_REPO_NAME"] = repository
- iuse = pkg_configdict["IUSE"]
- if pkg is None:
- self.mycpv = _pkg_str(self.mycpv, metadata=pkg_configdict,
- settings=self)
- cpv_slot = self.mycpv
- else:
- cpv_slot = pkg
- for x in iuse.split():
- if x.startswith("+"):
- pkginternaluse_list.append(x[1:])
- elif x.startswith("-"):
- pkginternaluse_list.append(x)
- pkginternaluse = " ".join(pkginternaluse_list)
-
- eapi_attrs = _get_eapi_attrs(eapi)
-
- if pkginternaluse != self.configdict["pkginternal"].get("USE", ""):
- self.configdict["pkginternal"]["USE"] = pkginternaluse
- has_changed = True
-
- repo_env = []
- if repository and repository != Package.UNKNOWN_REPO:
- repos = []
- try:
- repos.extend(repo.name for repo in
- self.repositories[repository].masters)
- except KeyError:
- pass
- repos.append(repository)
- for repo in repos:
- d = self._repo_make_defaults.get(repo)
- if d is None:
- d = {}
- else:
- # make a copy, since we might modify it with
- # package.use settings
- d = d.copy()
- cpdict = self._use_manager._repo_puse_dict.get(repo, {}).get(cp)
- if cpdict:
- repo_puse = ordered_by_atom_specificity(cpdict, cpv_slot)
- if repo_puse:
- for x in repo_puse:
- d["USE"] = d.get("USE", "") + " " + " ".join(x)
- if d:
- repo_env.append(d)
-
- if repo_env or self.configdict["repo"]:
- self.configdict["repo"].clear()
- self.configdict["repo"].update(stack_dicts(repo_env,
- incrementals=self.incrementals))
- has_changed = True
-
- defaults = []
- for i, pkgprofileuse_dict in enumerate(self._use_manager._pkgprofileuse):
- if self.make_defaults_use[i]:
- defaults.append(self.make_defaults_use[i])
- cpdict = pkgprofileuse_dict.get(cp)
- if cpdict:
- pkg_defaults = ordered_by_atom_specificity(cpdict, cpv_slot)
- if pkg_defaults:
- defaults.extend(pkg_defaults)
- defaults = " ".join(defaults)
- if defaults != self.configdict["defaults"].get("USE",""):
- self.configdict["defaults"]["USE"] = defaults
- has_changed = True
-
- useforce = self._use_manager.getUseForce(cpv_slot)
- if useforce != self.useforce:
- self.useforce = useforce
- has_changed = True
-
- usemask = self._use_manager.getUseMask(cpv_slot)
- if usemask != self.usemask:
- self.usemask = usemask
- has_changed = True
-
- oldpuse = self.puse
- self.puse = self._use_manager.getPUSE(cpv_slot)
- if oldpuse != self.puse:
- has_changed = True
- self.configdict["pkg"]["PKGUSE"] = self.puse[:] # For saving to PUSE file
- self.configdict["pkg"]["USE"] = self.puse[:] # this gets appended to USE
-
- if previous_features:
- # The package from the previous setcpv call had package.env
- # settings which modified FEATURES. Therefore, trigger a
- # regenerate() call in order to ensure that self.features
- # is accurate.
- has_changed = True
- # Prevent stale features USE from corrupting the evaluation
- # of USE conditional RESTRICT.
- self.configdict["features"]["USE"] = self._default_features_use
-
- self._penv = []
- cpdict = self._penvdict.get(cp)
- if cpdict:
- penv_matches = ordered_by_atom_specificity(cpdict, cpv_slot)
- if penv_matches:
- for x in penv_matches:
- self._penv.extend(x)
-
- bashrc_files = []
-
- for profile, profile_bashrc in zip(self._locations_manager.profiles_complex, self._profile_bashrc):
- if profile_bashrc:
- bashrc_files.append(os.path.join(profile.location, 'profile.bashrc'))
- if profile in self._pbashrcdict:
- cpdict = self._pbashrcdict[profile].get(cp)
- if cpdict:
- bashrc_matches = \
- ordered_by_atom_specificity(cpdict, cpv_slot)
- for x in bashrc_matches:
- bashrc_files.extend(x)
-
- self._pbashrc = tuple(bashrc_files)
-
- protected_pkg_keys = set(pkg_configdict)
- protected_pkg_keys.discard('USE')
-
- # If there are _any_ package.env settings for this package
- # then it automatically triggers config.reset(), in order
- # to account for possible incremental interaction between
- # package.use, package.env, and overrides from the calling
- # environment (configdict['env']).
- if self._penv:
- has_changed = True
- # USE is special because package.use settings override
- # it. Discard any package.use settings here and they'll
- # be added back later.
- pkg_configdict.pop('USE', None)
- self._grab_pkg_env(self._penv, pkg_configdict,
- protected_keys=protected_pkg_keys)
-
- # Now add package.use settings, which override USE from
- # package.env
- if self.puse:
- if 'USE' in pkg_configdict:
- pkg_configdict['USE'] = \
- pkg_configdict['USE'] + " " + self.puse
- else:
- pkg_configdict['USE'] = self.puse
-
- elif previous_penv:
- has_changed = True
-
- if not (previous_iuse == iuse and
- previous_iuse_effective is not None == eapi_attrs.iuse_effective):
- has_changed = True
-
- if has_changed:
- # This can modify self.features due to package.env settings.
- self.reset(keeping_pkg=1)
-
- if "test" in self.features:
- # This is independent of IUSE and RESTRICT, so that the same
- # value can be shared between packages with different settings,
- # which is important when evaluating USE conditional RESTRICT.
- feature_use.append("test")
-
- feature_use = " ".join(feature_use)
- if feature_use != self.configdict["features"]["USE"]:
- # Regenerate USE for evaluation of conditional RESTRICT.
- self.configdict["features"]["USE"] = feature_use
- self.reset(keeping_pkg=1)
- has_changed = True
-
- if explicit_iuse is None:
- explicit_iuse = frozenset(x.lstrip("+-") for x in iuse.split())
- if eapi_attrs.iuse_effective:
- iuse_implicit_match = self._iuse_effective_match
- else:
- iuse_implicit_match = self._iuse_implicit_match
-
- if pkg is None:
- raw_properties = pkg_configdict.get("PROPERTIES")
- raw_restrict = pkg_configdict.get("RESTRICT")
- else:
- raw_properties = pkg._raw_metadata["PROPERTIES"]
- raw_restrict = pkg._raw_metadata["RESTRICT"]
-
- restrict_test = False
- if raw_restrict:
- try:
- if built_use is not None:
- properties = use_reduce(raw_properties,
- uselist=built_use, flat=True)
- restrict = use_reduce(raw_restrict,
- uselist=built_use, flat=True)
- else:
- properties = use_reduce(raw_properties,
- uselist=frozenset(x for x in self['USE'].split()
- if x in explicit_iuse or iuse_implicit_match(x)),
- flat=True)
- restrict = use_reduce(raw_restrict,
- uselist=frozenset(x for x in self['USE'].split()
- if x in explicit_iuse or iuse_implicit_match(x)),
- flat=True)
- except PortageException:
- pass
- else:
- allow_test = self.get('ALLOW_TEST', '').split()
- restrict_test = (
- "test" in restrict and not "all" in allow_test and
- not ("test_network" in properties and "network" in allow_test))
-
- if restrict_test and "test" in self.features:
- # Handle it like IUSE="-test", since features USE is
- # independent of RESTRICT.
- pkginternaluse_list.append("-test")
- pkginternaluse = " ".join(pkginternaluse_list)
- self.configdict["pkginternal"]["USE"] = pkginternaluse
- # TODO: can we avoid that?
- self.reset(keeping_pkg=1)
- has_changed = True
-
- env_configdict = self.configdict['env']
-
- # Ensure that "pkg" values are always preferred over "env" values.
- # This must occur _after_ the above reset() call, since reset()
- # copies values from self.backupenv.
- for k in protected_pkg_keys:
- env_configdict.pop(k, None)
-
- lazy_vars = self._lazy_vars(built_use, self)
- env_configdict.addLazySingleton('ACCEPT_LICENSE',
- lazy_vars.__getitem__, 'ACCEPT_LICENSE')
- env_configdict.addLazySingleton('PORTAGE_PROPERTIES',
- lazy_vars.__getitem__, 'PORTAGE_PROPERTIES')
- env_configdict.addLazySingleton('PORTAGE_RESTRICT',
- lazy_vars.__getitem__, 'PORTAGE_RESTRICT')
-
- if built_use is not None:
- pkg_configdict['PORTAGE_BUILT_USE'] = ' '.join(built_use)
-
- # If reset() has not been called, it's safe to return
- # early if IUSE has not changed.
- if not has_changed:
- return
-
- # Filter out USE flags that aren't part of IUSE. This has to
- # be done for every setcpv() call since practically every
- # package has different IUSE.
- use = set(self["USE"].split())
- unfiltered_use = frozenset(use)
-
- if eapi_attrs.iuse_effective:
- portage_iuse = set(self._iuse_effective)
- portage_iuse.update(explicit_iuse)
- if built_use is not None:
- # When the binary package was built, the profile may have
- # had different IUSE_IMPLICIT settings, so any member of
- # the built USE setting is considered to be a member of
- # IUSE_EFFECTIVE (see bug 640318).
- portage_iuse.update(built_use)
- self.configdict["pkg"]["IUSE_EFFECTIVE"] = \
- " ".join(sorted(portage_iuse))
-
- self.configdict["env"]["BASH_FUNC____in_portage_iuse%%"] = (
- "() { "
- "if [[ ${#___PORTAGE_IUSE_HASH[@]} -lt 1 ]]; then "
- " declare -gA ___PORTAGE_IUSE_HASH=(%s); "
- "fi; "
- "[[ -n ${___PORTAGE_IUSE_HASH[$1]} ]]; "
- "}" ) % " ".join('["%s"]=1' % x for x in portage_iuse)
- else:
- portage_iuse = self._get_implicit_iuse()
- portage_iuse.update(explicit_iuse)
-
- # The _get_implicit_iuse() returns a regular expression
- # so we can't use the (faster) map. Fall back to
- # implementing ___in_portage_iuse() the older/slower way.
-
- # PORTAGE_IUSE is not always needed so it's lazily evaluated.
- self.configdict["env"].addLazySingleton(
- "PORTAGE_IUSE", _lazy_iuse_regex, portage_iuse)
- self.configdict["env"]["BASH_FUNC____in_portage_iuse%%"] = \
- "() { [[ $1 =~ ${PORTAGE_IUSE} ]]; }"
-
- ebuild_force_test = not restrict_test and \
- self.get("EBUILD_FORCE_TEST") == "1"
-
- if "test" in explicit_iuse or iuse_implicit_match("test"):
- if "test" in self.features:
- if ebuild_force_test and "test" in self.usemask:
- self.usemask = \
- frozenset(x for x in self.usemask if x != "test")
- if restrict_test or \
- ("test" in self.usemask and not ebuild_force_test):
- # "test" is in IUSE and USE=test is masked, so execution
- # of src_test() probably is not reliable. Therefore,
- # temporarily disable FEATURES=test just for this package.
- self["FEATURES"] = " ".join(x for x in self.features \
- if x != "test")
-
- # Allow _* flags from USE_EXPAND wildcards to pass through here.
- use.difference_update([x for x in use \
- if (x not in explicit_iuse and \
- not iuse_implicit_match(x)) and x[-2:] != '_*'])
-
- # Use the calculated USE flags to regenerate the USE_EXPAND flags so
- # that they are consistent. For optimal performance, use slice
- # comparison instead of startswith().
- use_expand_split = set(x.lower() for \
- x in self.get('USE_EXPAND', '').split())
- lazy_use_expand = self._lazy_use_expand(
- self, unfiltered_use, use, self.usemask,
- portage_iuse, use_expand_split, self._use_expand_dict)
-
- use_expand_iuses = dict((k, set()) for k in use_expand_split)
- for x in portage_iuse:
- x_split = x.split('_')
- if len(x_split) == 1:
- continue
- for i in range(len(x_split) - 1):
- k = '_'.join(x_split[:i+1])
- if k in use_expand_split:
- use_expand_iuses[k].add(x)
- break
-
- for k, use_expand_iuse in use_expand_iuses.items():
- if k + '_*' in use:
- use.update( x for x in use_expand_iuse if x not in usemask )
- k = k.upper()
- self.configdict['env'].addLazySingleton(k,
- lazy_use_expand.__getitem__, k)
-
- for k in self.get("USE_EXPAND_UNPREFIXED", "").split():
- var_split = self.get(k, '').split()
- var_split = [ x for x in var_split if x in use ]
- if var_split:
- self.configlist[-1][k] = ' '.join(var_split)
- elif k in self:
- self.configlist[-1][k] = ''
-
- # Filtered for the ebuild environment. Store this in a separate
- # attribute since we still want to be able to see global USE
- # settings for things like emerge --info.
-
- self.configdict["env"]["PORTAGE_USE"] = \
- " ".join(sorted(x for x in use if x[-2:] != '_*'))
-
- # Clear the eapi cache here rather than in the constructor, since
- # setcpv triggers lazy instantiation of things like _use_manager.
- _eapi_cache.clear()
-
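# Hedged usage sketch of setcpv(): resolve a visible ebuild with the porttree
# dbapi and inspect the per-package USE computed above.  Assumes a regular
# portage installation with at least one configured ebuild repository; the
# sys-apps/portage atom is only an example.
import portage

settings = portage.config(clone=portage.settings)
portdb = portage.db[portage.root]["porttree"].dbapi
cpv = portdb.xmatch("bestmatch-visible", "sys-apps/portage")
if cpv:
    settings.setcpv(cpv, mydb=portdb)
    print(settings.get("PORTAGE_USE", ""))       # USE filtered against IUSE
    print(settings.get("PORTAGE_RESTRICT", ""))  # RESTRICT after USE conditionals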
- def _grab_pkg_env(self, penv, container, protected_keys=None):
- if protected_keys is None:
- protected_keys = ()
- abs_user_config = os.path.join(
- self['PORTAGE_CONFIGROOT'], USER_CONFIG_PATH)
- non_user_variables = self._non_user_variables
- # Make a copy since we don't want per-package settings
- # to pollute the global expand_map.
- expand_map = self._expand_map.copy()
- incrementals = self.incrementals
- for envname in penv:
- penvfile = os.path.join(abs_user_config, "env", envname)
- penvconfig = getconfig(penvfile, tolerant=self._tolerant,
- allow_sourcing=True, expand=expand_map)
- if penvconfig is None:
- writemsg("!!! %s references non-existent file: %s\n" % \
- (os.path.join(abs_user_config, 'package.env'), penvfile),
- noiselevel=-1)
- else:
- for k, v in penvconfig.items():
- if k in protected_keys or \
- k in non_user_variables:
- writemsg("!!! Illegal variable " + \
- "'%s' assigned in '%s'\n" % \
- (k, penvfile), noiselevel=-1)
- elif k in incrementals:
- if k in container:
- container[k] = container[k] + " " + v
- else:
- container[k] = v
- else:
- container[k] = v
-
- def _iuse_effective_match(self, flag):
- return flag in self._iuse_effective
-
- def _calc_iuse_effective(self):
- """
- Beginning with EAPI 5, IUSE_EFFECTIVE is defined by PMS.
- """
- iuse_effective = []
- iuse_effective.extend(self.get("IUSE_IMPLICIT", "").split())
-
- # USE_EXPAND_IMPLICIT should contain things like ARCH, ELIBC,
- # KERNEL, and USERLAND.
- use_expand_implicit = frozenset(
- self.get("USE_EXPAND_IMPLICIT", "").split())
-
- # USE_EXPAND_UNPREFIXED should contain at least ARCH, and
- # USE_EXPAND_VALUES_ARCH should contain all valid ARCH flags.
- for v in self.get("USE_EXPAND_UNPREFIXED", "").split():
- if v not in use_expand_implicit:
- continue
- iuse_effective.extend(
- self.get("USE_EXPAND_VALUES_" + v, "").split())
-
- use_expand = frozenset(self.get("USE_EXPAND", "").split())
- for v in use_expand_implicit:
- if v not in use_expand:
- continue
- lower_v = v.lower()
- for x in self.get("USE_EXPAND_VALUES_" + v, "").split():
- iuse_effective.append(lower_v + "_" + x)
-
- return frozenset(iuse_effective)
-
- def _get_implicit_iuse(self):
- """
- Prior to EAPI 5, these flags are considered to
- be implicit members of IUSE:
- * Flags derived from ARCH
- * Flags derived from USE_EXPAND_HIDDEN variables
- * Masked flags, such as those from {,package}use.mask
- * Forced flags, such as those from {,package}use.force
- * build and bootstrap flags used by bootstrap.sh
- """
- iuse_implicit = set()
- # Flags derived from ARCH.
- arch = self.configdict["defaults"].get("ARCH")
- if arch:
- iuse_implicit.add(arch)
- iuse_implicit.update(self.get("PORTAGE_ARCHLIST", "").split())
-
- # Flags derived from USE_EXPAND_HIDDEN variables
- # such as ELIBC, KERNEL, and USERLAND.
- use_expand_hidden = self.get("USE_EXPAND_HIDDEN", "").split()
- for x in use_expand_hidden:
- iuse_implicit.add(x.lower() + "_.*")
-
- # Flags that have been masked or forced.
- iuse_implicit.update(self.usemask)
- iuse_implicit.update(self.useforce)
-
- # build and bootstrap flags used by bootstrap.sh
- iuse_implicit.add("build")
- iuse_implicit.add("bootstrap")
-
- return iuse_implicit
-
- def _getUseMask(self, pkg, stable=None):
- return self._use_manager.getUseMask(pkg, stable=stable)
-
- def _getUseForce(self, pkg, stable=None):
- return self._use_manager.getUseForce(pkg, stable=stable)
-
- def _getMaskAtom(self, cpv, metadata):
- """
- Take a package and return a matching package.mask atom, or None if no
- such atom exists or it has been cancelled by package.unmask.
-
- @param cpv: The package name
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: String
- @return: A matching atom string or None if one is not found.
- """
- return self._mask_manager.getMaskAtom(cpv, metadata["SLOT"], metadata.get('repository'))
-
- def _getRawMaskAtom(self, cpv, metadata):
- """
- Take a package and return a matching package.mask atom, or None if no
- such atom exists or it has been cancelled by package.unmask.
-
- @param cpv: The package name
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: String
- @return: A matching atom string or None if one is not found.
- """
- return self._mask_manager.getRawMaskAtom(cpv, metadata["SLOT"], metadata.get('repository'))
-
-
- def _getProfileMaskAtom(self, cpv, metadata):
- """
- Take a package and return a matching profile atom, or None if no
- such atom exists. Note that a profile atom may or may not have a "*"
- prefix.
-
- @param cpv: The package name
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: String
- @return: A matching profile atom string or None if one is not found.
- """
-
- warnings.warn("The config._getProfileMaskAtom() method is deprecated.",
- DeprecationWarning, stacklevel=2)
-
- cp = cpv_getkey(cpv)
- profile_atoms = self.prevmaskdict.get(cp)
- if profile_atoms:
- pkg = "".join((cpv, _slot_separator, metadata["SLOT"]))
- repo = metadata.get("repository")
- if repo and repo != Package.UNKNOWN_REPO:
- pkg = "".join((pkg, _repo_separator, repo))
- pkg_list = [pkg]
- for x in profile_atoms:
- if match_from_list(x, pkg_list):
- continue
- return x
- return None
-
- def _isStable(self, pkg):
- return self._keywords_manager.isStable(pkg,
- self.get("ACCEPT_KEYWORDS", ""),
- self.configdict["backupenv"].get("ACCEPT_KEYWORDS", ""))
-
- def _getKeywords(self, cpv, metadata):
- return self._keywords_manager.getKeywords(cpv, metadata["SLOT"], \
- metadata.get("KEYWORDS", ""), metadata.get("repository"))
-
- def _getMissingKeywords(self, cpv, metadata):
- """
- Take a package and return a list of any KEYWORDS that the user may
- need to accept for the given package. If the KEYWORDS are empty
- and the ** keyword has not been accepted, the returned list will
- contain ** alone (in order to distinguish from the case of "none
- missing").
-
- @param cpv: The package name (for package.keywords support)
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: List
- @return: A list of KEYWORDS that have not been accepted.
- """
-
- # Hack: Need to check the env directly here as otherwise stacking
- # doesn't work properly as negative values are lost in the config
- # object (bug #139600)
- backuped_accept_keywords = self.configdict["backupenv"].get("ACCEPT_KEYWORDS", "")
- global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
-
- return self._keywords_manager.getMissingKeywords(cpv, metadata["SLOT"], \
- metadata.get("KEYWORDS", ""), metadata.get('repository'), \
- global_accept_keywords, backuped_accept_keywords)
-
- def _getRawMissingKeywords(self, cpv, metadata):
- """
- Take a package and return a list of any KEYWORDS that the user may
- need to accept for the given package. If the KEYWORDS are empty,
- the returned list will contain ** alone (in order to distinguish
- from the case of "none missing"). This DOES NOT apply any user config
- package.accept_keywords acceptance.
-
- @param cpv: The package name (for package.keywords support)
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: List
- @return: lists of KEYWORDS that have not been accepted
- and the keywords it looked for.
- """
- return self._keywords_manager.getRawMissingKeywords(cpv, metadata["SLOT"], \
- metadata.get("KEYWORDS", ""), metadata.get('repository'), \
- self.get("ACCEPT_KEYWORDS", ""))
-
- def _getPKeywords(self, cpv, metadata):
- global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
-
- return self._keywords_manager.getPKeywords(cpv, metadata["SLOT"], \
- metadata.get('repository'), global_accept_keywords)
-
- def _getMissingLicenses(self, cpv, metadata):
- """
- Take a LICENSE string and return a list of any licenses that the user
- may need to accept for the given package. The returned list will not
- contain any licenses that have already been accepted. This method
- can throw an InvalidDependString exception.
-
- @param cpv: The package name (for package.license support)
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: List
- @return: A list of licenses that have not been accepted.
- """
- return self._license_manager.getMissingLicenses( \
- cpv, metadata["USE"], metadata["LICENSE"], metadata["SLOT"], metadata.get('repository'))
-
- def _getMissingProperties(self, cpv, metadata):
- """
- Take a PROPERTIES string and return a list of any properties the user
- may need to accept for the given package. The returned list will not
- contain any properties that have already been accepted. This method
- can throw an InvalidDependString exception.
-
- @param cpv: The package name (for package.properties support)
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: List
- @return: A list of properties that have not been accepted.
- """
- accept_properties = self._accept_properties
- try:
- cpv.slot
- except AttributeError:
- cpv = _pkg_str(cpv, metadata=metadata, settings=self)
- cp = cpv_getkey(cpv)
- cpdict = self._ppropertiesdict.get(cp)
- if cpdict:
- pproperties_list = ordered_by_atom_specificity(cpdict, cpv)
- if pproperties_list:
- accept_properties = list(self._accept_properties)
- for x in pproperties_list:
- accept_properties.extend(x)
-
- properties_str = metadata.get("PROPERTIES", "")
- properties = set(use_reduce(properties_str, matchall=1, flat=True))
-
- acceptable_properties = set()
- for x in accept_properties:
- if x == '*':
- acceptable_properties.update(properties)
- elif x == '-*':
- acceptable_properties.clear()
- elif x[:1] == '-':
- acceptable_properties.discard(x[1:])
- else:
- acceptable_properties.add(x)
-
- if "?" in properties_str:
- use = metadata["USE"].split()
- else:
- use = []
-
- return [x for x in use_reduce(properties_str, uselist=use, flat=True)
- if x not in acceptable_properties]
-
- def _getMissingRestrict(self, cpv, metadata):
- """
- Take a RESTRICT string and return a list of any tokens the user
- may need to accept for the given package. The returned list will not
- contain any tokens that have already been accepted. This method
- can throw an InvalidDependString exception.
-
- @param cpv: The package name (for package.accept_restrict support)
- @type cpv: String
- @param metadata: A dictionary of raw package metadata
- @type metadata: dict
- @rtype: List
- @return: A list of tokens that have not been accepted.
- """
- accept_restrict = self._accept_restrict
- try:
- cpv.slot
- except AttributeError:
- cpv = _pkg_str(cpv, metadata=metadata, settings=self)
- cp = cpv_getkey(cpv)
- cpdict = self._paccept_restrict.get(cp)
- if cpdict:
- paccept_restrict_list = ordered_by_atom_specificity(cpdict, cpv)
- if paccept_restrict_list:
- accept_restrict = list(self._accept_restrict)
- for x in paccept_restrict_list:
- accept_restrict.extend(x)
-
- restrict_str = metadata.get("RESTRICT", "")
- all_restricts = set(use_reduce(restrict_str, matchall=1, flat=True))
-
- acceptable_restricts = set()
- for x in accept_restrict:
- if x == '*':
- acceptable_restricts.update(all_restricts)
- elif x == '-*':
- acceptable_restricts.clear()
- elif x[:1] == '-':
- acceptable_restricts.discard(x[1:])
- else:
- acceptable_restricts.add(x)
-
- if "?" in restrict_str:
- use = metadata["USE"].split()
- else:
- use = []
-
- return [x for x in use_reduce(restrict_str, uselist=use, flat=True)
- if x not in acceptable_restricts]
-
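# Standalone sketch of the ACCEPT_PROPERTIES / ACCEPT_RESTRICT stacking used
# by the two methods above: '*' accepts every token seen in the metadata,
# '-*' resets the set, '-tok' revokes a single token, and anything else
# accepts one token.
def stack_accept(accept_tokens, all_tokens):
    acceptable = set()
    for tok in accept_tokens:
        if tok == "*":
            acceptable.update(all_tokens)
        elif tok == "-*":
            acceptable.clear()
        elif tok.startswith("-"):
            acceptable.discard(tok[1:])
        else:
            acceptable.add(tok)
    return acceptable

# stack_accept(["*", "-interactive"], {"interactive", "live"}) yields {"live"},
# so "interactive" would still be reported as unaccepted.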
- def _accept_chost(self, cpv, metadata):
- """
- @return True if pkg CHOST is accepted, False otherwise.
- """
- if self._accept_chost_re is None:
- accept_chost = self.get("ACCEPT_CHOSTS", "").split()
- if not accept_chost:
- chost = self.get("CHOST")
- if chost:
- accept_chost.append(chost)
- if not accept_chost:
- self._accept_chost_re = re.compile(".*")
- elif len(accept_chost) == 1:
- try:
- self._accept_chost_re = re.compile(r'^%s$' % accept_chost[0])
- except re.error as e:
- writemsg(_("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n") % \
- (accept_chost[0], e), noiselevel=-1)
- self._accept_chost_re = re.compile("^$")
- else:
- try:
- self._accept_chost_re = re.compile(
- r'^(%s)$' % "|".join(accept_chost))
- except re.error as e:
- writemsg(_("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n") % \
- (" ".join(accept_chost), e), noiselevel=-1)
- self._accept_chost_re = re.compile("^$")
-
- pkg_chost = metadata.get('CHOST', '')
- return not pkg_chost or \
- self._accept_chost_re.match(pkg_chost) is not None
-
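# Minimal sketch of the ACCEPT_CHOSTS matching above: each accepted value is
# treated as a regular expression anchored at both ends, and several values
# are joined with '|'.  The CHOST strings are illustrative only.
import re

def accept_chost_re(accept_chosts):
    if not accept_chosts:
        return re.compile(".*")
    if len(accept_chosts) == 1:
        return re.compile(r"^%s$" % accept_chosts[0])
    return re.compile(r"^(%s)$" % "|".join(accept_chosts))

pattern = accept_chost_re(["x86_64-pc-linux-gnu", "i.86-pc-linux-gnu"])
print(bool(pattern.match("i686-pc-linux-gnu")))  # True: '.' matches the CPU digit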
- def setinst(self, mycpv, mydbapi):
- """This used to update the preferences for old-style virtuals.
- It is no-op now."""
- pass
-
- def reload(self):
- """Reload things like /etc/profile.env that can change during runtime."""
- env_d_filename = os.path.join(self["EROOT"], "etc", "profile.env")
- self.configdict["env.d"].clear()
- env_d = getconfig(env_d_filename,
- tolerant=self._tolerant, expand=False)
- if env_d:
- # env_d will be None if profile.env doesn't exist.
- for k in self._env_d_blacklist:
- env_d.pop(k, None)
- self.configdict["env.d"].update(env_d)
-
- def regenerate(self, useonly=0, use_cache=None):
- """
- Regenerate settings
- This involves regenerating valid USE flags, re-expanding USE_EXPAND flags
- re-stacking USE flags (-flag and -*), as well as any other INCREMENTAL
- variables. This also updates the env.d configdict; useful in case an ebuild
- changes the environment.
-
- If FEATURES has already stacked, it is not stacked twice.
-
- @param useonly: Only regenerate USE flags (not any other incrementals)
- @type useonly: Boolean
- @rtype: None
- """
-
- if use_cache is not None:
- warnings.warn("The use_cache parameter for config.regenerate() is deprecated and without effect.",
- DeprecationWarning, stacklevel=2)
-
- self.modifying()
-
- if useonly:
- myincrementals=["USE"]
- else:
- myincrementals = self.incrementals
- myincrementals = set(myincrementals)
-
- # Process USE last because it depends on USE_EXPAND which is also
- # an incremental!
- myincrementals.discard("USE")
-
- mydbs = self.configlist[:-1]
- mydbs.append(self.backupenv)
-
- # ACCEPT_LICENSE is a lazily evaluated incremental, so that * can be
- 		# used to match all licenses without ever having to explicitly expand
- # it to all licenses.
- if self.local_config:
- mysplit = []
- for curdb in mydbs:
- mysplit.extend(curdb.get('ACCEPT_LICENSE', '').split())
- mysplit = prune_incremental(mysplit)
- accept_license_str = ' '.join(mysplit) or '* -@EULA'
- self.configlist[-1]['ACCEPT_LICENSE'] = accept_license_str
- self._license_manager.set_accept_license_str(accept_license_str)
- else:
- # repoman will accept any license
- self._license_manager.set_accept_license_str("*")
-
- # ACCEPT_PROPERTIES works like ACCEPT_LICENSE, without groups
- if self.local_config:
- mysplit = []
- for curdb in mydbs:
- mysplit.extend(curdb.get('ACCEPT_PROPERTIES', '').split())
- mysplit = prune_incremental(mysplit)
- self.configlist[-1]['ACCEPT_PROPERTIES'] = ' '.join(mysplit)
- if tuple(mysplit) != self._accept_properties:
- self._accept_properties = tuple(mysplit)
- else:
- # repoman will accept any property
- self._accept_properties = ('*',)
-
- if self.local_config:
- mysplit = []
- for curdb in mydbs:
- mysplit.extend(curdb.get('ACCEPT_RESTRICT', '').split())
- mysplit = prune_incremental(mysplit)
- self.configlist[-1]['ACCEPT_RESTRICT'] = ' '.join(mysplit)
- if tuple(mysplit) != self._accept_restrict:
- self._accept_restrict = tuple(mysplit)
- else:
- 			# repoman will accept any restrict token
- self._accept_restrict = ('*',)
-
- increment_lists = {}
- for k in myincrementals:
- incremental_list = []
- increment_lists[k] = incremental_list
- for curdb in mydbs:
- v = curdb.get(k)
- if v is not None:
- incremental_list.append(v.split())
-
- if 'FEATURES' in increment_lists:
- increment_lists['FEATURES'].append(self._features_overrides)
-
- myflags = set()
- for mykey, incremental_list in increment_lists.items():
-
- myflags.clear()
- for mysplit in incremental_list:
-
- for x in mysplit:
- if x=="-*":
- # "-*" is a special "minus" var that means "unset all settings".
- # so USE="-* gnome" will have *just* gnome enabled.
- myflags.clear()
- continue
-
- if x[0]=="+":
- # Not legal. People assume too much. Complain.
- writemsg(colorize("BAD",
- _("%s values should not start with a '+': %s") % (mykey,x)) \
- + "\n", noiselevel=-1)
- x=x[1:]
- if not x:
- continue
-
- if x[0] == "-":
- myflags.discard(x[1:])
- continue
-
- # We got here, so add it now.
- myflags.add(x)
-
- #store setting in last element of configlist, the original environment:
- if myflags or mykey in self:
- self.configlist[-1][mykey] = " ".join(sorted(myflags))
-
- # Do the USE calculation last because it depends on USE_EXPAND.
- use_expand = self.get("USE_EXPAND", "").split()
- use_expand_dict = self._use_expand_dict
- use_expand_dict.clear()
- for k in use_expand:
- v = self.get(k)
- if v is not None:
- use_expand_dict[k] = v
-
- use_expand_unprefixed = self.get("USE_EXPAND_UNPREFIXED", "").split()
-
- 		# In order to best accommodate the long-standing practice of
- # setting default USE_EXPAND variables in the profile's
- # make.defaults, we translate these variables into their
- # equivalent USE flags so that useful incremental behavior
- # is enabled (for sub-profiles).
- configdict_defaults = self.configdict['defaults']
- if self._make_defaults is not None:
- for i, cfg in enumerate(self._make_defaults):
- if not cfg:
- self.make_defaults_use.append("")
- continue
- use = cfg.get("USE", "")
- expand_use = []
-
- for k in use_expand_unprefixed:
- v = cfg.get(k)
- if v is not None:
- expand_use.extend(v.split())
-
- for k in use_expand_dict:
- v = cfg.get(k)
- if v is None:
- continue
- prefix = k.lower() + '_'
- for x in v.split():
- if x[:1] == '-':
- expand_use.append('-' + prefix + x[1:])
- else:
- expand_use.append(prefix + x)
-
- if expand_use:
- expand_use.append(use)
- use = ' '.join(expand_use)
- self.make_defaults_use.append(use)
- self.make_defaults_use = tuple(self.make_defaults_use)
- # Preserve both positive and negative flags here, since
- # negative flags may later interact with other flags pulled
- # in via USE_ORDER.
- configdict_defaults['USE'] = ' '.join(
- filter(None, self.make_defaults_use))
- # Set to None so this code only runs once.
- self._make_defaults = None
-
- if not self.uvlist:
- for x in self["USE_ORDER"].split(":"):
- if x in self.configdict:
- self.uvlist.append(self.configdict[x])
- self.uvlist.reverse()
-
- # For optimal performance, use slice
- # comparison instead of startswith().
- iuse = self.configdict["pkg"].get("IUSE")
- if iuse is not None:
- iuse = [x.lstrip("+-") for x in iuse.split()]
- myflags = set()
- for curdb in self.uvlist:
-
- for k in use_expand_unprefixed:
- v = curdb.get(k)
- if v is None:
- continue
- for x in v.split():
- if x[:1] == "-":
- myflags.discard(x[1:])
- else:
- myflags.add(x)
-
- cur_use_expand = [x for x in use_expand if x in curdb]
- mysplit = curdb.get("USE", "").split()
- if not mysplit and not cur_use_expand:
- continue
- for x in mysplit:
- if x == "-*":
- myflags.clear()
- continue
-
- if x[0] == "+":
- writemsg(colorize("BAD", _("USE flags should not start "
- "with a '+': %s\n") % x), noiselevel=-1)
- x = x[1:]
- if not x:
- continue
-
- if x[0] == "-":
- if x[-2:] == '_*':
- prefix = x[1:-1]
- prefix_len = len(prefix)
- myflags.difference_update(
- [y for y in myflags if \
- y[:prefix_len] == prefix])
- myflags.discard(x[1:])
- continue
-
- if iuse is not None and x[-2:] == '_*':
- # Expand wildcards here, so that cases like
- # USE="linguas_* -linguas_en_US" work correctly.
- prefix = x[:-1]
- prefix_len = len(prefix)
- has_iuse = False
- for y in iuse:
- if y[:prefix_len] == prefix:
- has_iuse = True
- myflags.add(y)
- if not has_iuse:
- # There are no matching IUSE, so allow the
- # wildcard to pass through. This allows
- # linguas_* to trigger unset LINGUAS in
- # cases when no linguas_ flags are in IUSE.
- myflags.add(x)
- else:
- myflags.add(x)
-
- if curdb is configdict_defaults:
- # USE_EXPAND flags from make.defaults are handled
- # earlier, in order to provide useful incremental
- # behavior (for sub-profiles).
- continue
-
- for var in cur_use_expand:
- var_lower = var.lower()
- is_not_incremental = var not in myincrementals
- if is_not_incremental:
- prefix = var_lower + "_"
- prefix_len = len(prefix)
- for x in list(myflags):
- if x[:prefix_len] == prefix:
- myflags.remove(x)
- for x in curdb[var].split():
- if x[0] == "+":
- if is_not_incremental:
- writemsg(colorize("BAD", _("Invalid '+' "
- "operator in non-incremental variable "
- "'%s': '%s'\n") % (var, x)), noiselevel=-1)
- continue
- else:
- writemsg(colorize("BAD", _("Invalid '+' "
- "operator in incremental variable "
- "'%s': '%s'\n") % (var, x)), noiselevel=-1)
- x = x[1:]
- if x[0] == "-":
- if is_not_incremental:
- writemsg(colorize("BAD", _("Invalid '-' "
- "operator in non-incremental variable "
- "'%s': '%s'\n") % (var, x)), noiselevel=-1)
- continue
- myflags.discard(var_lower + "_" + x[1:])
- continue
- myflags.add(var_lower + "_" + x)
-
- if hasattr(self, "features"):
- self.features._features.clear()
- else:
- self.features = features_set(self)
- self.features._features.update(self.get('FEATURES', '').split())
- self.features._sync_env_var()
- self.features._validate()
-
- myflags.update(self.useforce)
- arch = self.configdict["defaults"].get("ARCH")
- if arch:
- myflags.add(arch)
-
- myflags.difference_update(self.usemask)
- self.configlist[-1]["USE"]= " ".join(sorted(myflags))
-
- if self.mycpv is None:
- # Generate global USE_EXPAND variables settings that are
- # consistent with USE, for display by emerge --info. For
- # package instances, these are instead generated via
- # setcpv().
- for k in use_expand:
- prefix = k.lower() + '_'
- prefix_len = len(prefix)
- expand_flags = set( x[prefix_len:] for x in myflags \
- if x[:prefix_len] == prefix )
- var_split = use_expand_dict.get(k, '').split()
- var_split = [ x for x in var_split if x in expand_flags ]
- var_split.extend(sorted(expand_flags.difference(var_split)))
- if var_split:
- self.configlist[-1][k] = ' '.join(var_split)
- elif k in self:
- self.configlist[-1][k] = ''
-
- for k in use_expand_unprefixed:
- var_split = self.get(k, '').split()
- var_split = [ x for x in var_split if x in myflags ]
- if var_split:
- self.configlist[-1][k] = ' '.join(var_split)
- elif k in self:
- self.configlist[-1][k] = ''
-
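# Standalone sketch of the incremental flag stacking in regenerate(): later
# configuration layers extend earlier ones, '-*' clears everything seen so
# far, '-flag' drops one flag, and a leading '+' is tolerated (the real code
# additionally prints a warning for it).
def stack_incremental(layers):
    flags = set()
    for layer in layers:
        for flag in layer.split():
            if flag == "-*":
                flags.clear()
                continue
            if flag.startswith("+"):
                flag = flag[1:]
            if not flag:
                continue
            if flag.startswith("-"):
                flags.discard(flag[1:])
            else:
                flags.add(flag)
    return flags

# stack_incremental(["X alsa", "-* gnome", "-gnome qt5"]) yields {"qt5"}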
- @property
- def virts_p(self):
- warnings.warn("portage config.virts_p attribute " + \
- "is deprecated, use config.get_virts_p()",
- DeprecationWarning, stacklevel=2)
- return self.get_virts_p()
-
- @property
- def virtuals(self):
- warnings.warn("portage config.virtuals attribute " + \
- "is deprecated, use config.getvirtuals()",
- DeprecationWarning, stacklevel=2)
- return self.getvirtuals()
-
- def get_virts_p(self):
- # Ensure that we don't trigger the _treeVirtuals
- # assertion in VirtualsManager._compile_virtuals().
- self.getvirtuals()
- return self._virtuals_manager.get_virts_p()
-
- def getvirtuals(self):
- if self._virtuals_manager._treeVirtuals is None:
- #Hack around the fact that VirtualsManager needs a vartree
- #and vartree needs a config instance.
- #This code should be part of VirtualsManager.getvirtuals().
- if self.local_config:
- temp_vartree = vartree(settings=self)
- self._virtuals_manager._populate_treeVirtuals(temp_vartree)
- else:
- self._virtuals_manager._treeVirtuals = {}
-
- return self._virtuals_manager.getvirtuals()
-
- def _populate_treeVirtuals_if_needed(self, vartree):
- """Reduce the provides into a list by CP."""
- if self._virtuals_manager._treeVirtuals is None:
- if self.local_config:
- self._virtuals_manager._populate_treeVirtuals(vartree)
- else:
- self._virtuals_manager._treeVirtuals = {}
-
- def __delitem__(self,mykey):
- self.pop(mykey)
-
- def __getitem__(self, key):
- try:
- return self._getitem(key)
- except KeyError:
- if portage._internal_caller:
- stack = traceback.format_stack()[:-1] + traceback.format_exception(*sys.exc_info())[1:]
- try:
- # Ensure that output is written to terminal.
- with open("/dev/tty", "w") as f:
- f.write("=" * 96 + "\n")
- f.write("=" * 8 + " Traceback for invalid call to portage.package.ebuild.config.config.__getitem__ " + "=" * 8 + "\n")
- f.writelines(stack)
- f.write("=" * 96 + "\n")
- except Exception:
- pass
- raise
- else:
- warnings.warn(_("Passing nonexistent key %r to %s is deprecated. Use %s instead.") %
- (key, "portage.package.ebuild.config.config.__getitem__",
- "portage.package.ebuild.config.config.get"), DeprecationWarning, stacklevel=2)
- return ""
-
- def _getitem(self, mykey):
-
- if mykey in self._constant_keys:
- # These two point to temporary values when
- # portage plans to update itself.
- if mykey == "PORTAGE_BIN_PATH":
- return portage._bin_path
- if mykey == "PORTAGE_PYM_PATH":
- return portage._pym_path
-
- if mykey == "PORTAGE_PYTHONPATH":
- value = [x for x in \
- self.backupenv.get("PYTHONPATH", "").split(":") if x]
- need_pym_path = True
- if value:
- try:
- need_pym_path = not os.path.samefile(value[0],
- portage._pym_path)
- except OSError:
- pass
- if need_pym_path:
- value.insert(0, portage._pym_path)
- return ":".join(value)
-
- if mykey == "PORTAGE_GID":
- return "%s" % portage_gid
-
- for d in self.lookuplist:
- try:
- return d[mykey]
- except KeyError:
- pass
-
- deprecated_key = self._deprecated_keys.get(mykey)
- if deprecated_key is not None:
- value = self._getitem(deprecated_key)
- #warnings.warn(_("Key %s has been renamed to %s. Please ",
- # "update your configuration") % (deprecated_key, mykey),
- # UserWarning)
- return value
-
- raise KeyError(mykey)
-
- def get(self, k, x=None):
- try:
- return self._getitem(k)
- except KeyError:
- return x
-
- def pop(self, key, *args):
- self.modifying()
- if len(args) > 1:
- raise TypeError(
- "pop expected at most 2 arguments, got " + \
- repr(1 + len(args)))
- v = self
- for d in reversed(self.lookuplist):
- v = d.pop(key, v)
- if v is self:
- if args:
- return args[0]
- raise KeyError(key)
- return v
-
- def __contains__(self, mykey):
- """Called to implement membership test operators (in and not in)."""
- try:
- self._getitem(mykey)
- except KeyError:
- return False
- else:
- return True
-
- def setdefault(self, k, x=None):
- v = self.get(k)
- if v is not None:
- return v
- self[k] = x
- return x
-
- def __iter__(self):
- keys = set()
- keys.update(self._constant_keys)
- for d in self.lookuplist:
- keys.update(d)
- return iter(keys)
-
- def iterkeys(self):
- return iter(self)
-
- def iteritems(self):
- for k in self:
- yield (k, self._getitem(k))
-
- def __setitem__(self,mykey,myvalue):
- "set a value; will be thrown away at reset() time"
- if not isinstance(myvalue, str):
- raise ValueError("Invalid type being used as a value: '%s': '%s'" % (str(mykey),str(myvalue)))
-
- # Avoid potential UnicodeDecodeError exceptions later.
- mykey = _unicode_decode(mykey)
- myvalue = _unicode_decode(myvalue)
-
- self.modifying()
- self.modifiedkeys.append(mykey)
- self.configdict["env"][mykey]=myvalue
-
- def environ(self):
- "return our locally-maintained environment"
- mydict={}
- environ_filter = self._environ_filter
-
- eapi = self.get('EAPI')
- eapi_attrs = _get_eapi_attrs(eapi)
- phase = self.get('EBUILD_PHASE')
- emerge_from = self.get('EMERGE_FROM')
- filter_calling_env = False
- if self.mycpv is not None and \
- not (emerge_from == 'ebuild' and phase == 'setup') and \
- phase not in ('clean', 'cleanrm', 'depend', 'fetch'):
- temp_dir = self.get('T')
- if temp_dir is not None and \
- os.path.exists(os.path.join(temp_dir, 'environment')):
- filter_calling_env = True
-
- environ_whitelist = self._environ_whitelist
- for x, myvalue in self.iteritems():
- if x in environ_filter:
- continue
- if not isinstance(myvalue, str):
- writemsg(_("!!! Non-string value in config: %s=%s\n") % \
- (x, myvalue), noiselevel=-1)
- continue
- if filter_calling_env and \
- x not in environ_whitelist and \
- not self._environ_whitelist_re.match(x):
- # Do not allow anything to leak into the ebuild
- # environment unless it is explicitly whitelisted.
- # This ensures that variables unset by the ebuild
- # remain unset (bug #189417).
- continue
- mydict[x] = myvalue
- if "HOME" not in mydict and "BUILD_PREFIX" in mydict:
- writemsg("*** HOME not set. Setting to "+mydict["BUILD_PREFIX"]+"\n")
- mydict["HOME"]=mydict["BUILD_PREFIX"][:]
-
- if filter_calling_env:
- if phase:
- whitelist = []
- if "rpm" == phase:
- whitelist.append("RPMDIR")
- for k in whitelist:
- v = self.get(k)
- if v is not None:
- mydict[k] = v
-
- # At some point we may want to stop exporting FEATURES to the ebuild
- # environment, in order to prevent ebuilds from abusing it. In
- # preparation for that, export it as PORTAGE_FEATURES so that bashrc
- # users will be able to migrate any FEATURES conditional code to
- # use this alternative variable.
- mydict["PORTAGE_FEATURES"] = self["FEATURES"]
-
- # Filtered by IUSE and implicit IUSE.
- mydict["USE"] = self.get("PORTAGE_USE", "")
-
- # Don't export AA to the ebuild environment in EAPIs that forbid it
- if not eapi_exports_AA(eapi):
- mydict.pop("AA", None)
-
- if not eapi_exports_merge_type(eapi):
- mydict.pop("MERGE_TYPE", None)
-
- src_like_phase = (phase == 'setup' or
- _phase_func_map.get(phase, '').startswith('src_'))
-
- if not (src_like_phase and eapi_attrs.sysroot):
- mydict.pop("ESYSROOT", None)
-
- if not (src_like_phase and eapi_attrs.broot):
- mydict.pop("BROOT", None)
-
- # Prefix variables are supported beginning with EAPI 3, or when
- # force-prefix is in FEATURES, since older EAPIs would otherwise be
- # useless with prefix configurations. This brings compatibility with
- # the prefix branch of portage, which also supports EPREFIX for all
- # EAPIs (for obvious reasons).
- if phase == 'depend' or \
- ('force-prefix' not in self.features and
- eapi is not None and not eapi_supports_prefix(eapi)):
- mydict.pop("ED", None)
- mydict.pop("EPREFIX", None)
- mydict.pop("EROOT", None)
- mydict.pop("ESYSROOT", None)
-
- if phase not in ("pretend", "setup", "preinst", "postinst") or \
- not eapi_exports_replace_vars(eapi):
- mydict.pop("REPLACING_VERSIONS", None)
-
- if phase not in ("prerm", "postrm") or \
- not eapi_exports_replace_vars(eapi):
- mydict.pop("REPLACED_BY_VERSION", None)
-
- if phase is not None and eapi_attrs.exports_EBUILD_PHASE_FUNC:
- phase_func = _phase_func_map.get(phase)
- if phase_func is not None:
- mydict["EBUILD_PHASE_FUNC"] = phase_func
-
- if eapi_attrs.posixish_locale:
- split_LC_ALL(mydict)
- mydict["LC_COLLATE"] = "C"
- 			# check_locale() returns None when the check cannot be executed.
- if check_locale(silent=True, env=mydict) is False:
- # try another locale
- for l in ("C.UTF-8", "en_US.UTF-8", "en_GB.UTF-8", "C"):
- mydict["LC_CTYPE"] = l
- if check_locale(silent=True, env=mydict):
- # TODO: output the following only once
- # writemsg(_("!!! LC_CTYPE unsupported, using %s instead\n")
- # % mydict["LC_CTYPE"])
- break
- else:
- raise AssertionError("C locale did not pass the test!")
-
- if not eapi_attrs.exports_PORTDIR:
- mydict.pop("PORTDIR", None)
- if not eapi_attrs.exports_ECLASSDIR:
- mydict.pop("ECLASSDIR", None)
-
- if not eapi_attrs.path_variables_end_with_trailing_slash:
- for v in ("D", "ED", "ROOT", "EROOT", "ESYSROOT", "BROOT"):
- if v in mydict:
- mydict[v] = mydict[v].rstrip(os.path.sep)
-
- # Since SYSROOT=/ interacts badly with autotools.eclass (bug 654600),
- # and no EAPI expects SYSROOT to have a trailing slash, always strip
- # the trailing slash from SYSROOT.
- if 'SYSROOT' in mydict:
- mydict['SYSROOT'] = mydict['SYSROOT'].rstrip(os.sep)
-
- try:
- builddir = mydict["PORTAGE_BUILDDIR"]
- distdir = mydict["DISTDIR"]
- except KeyError:
- pass
- else:
- mydict["PORTAGE_ACTUAL_DISTDIR"] = distdir
- mydict["DISTDIR"] = os.path.join(builddir, "distdir")
-
- return mydict
-
- def thirdpartymirrors(self):
- if getattr(self, "_thirdpartymirrors", None) is None:
- thirdparty_lists = []
- for repo_name in reversed(self.repositories.prepos_order):
- thirdparty_lists.append(grabdict(os.path.join(
- self.repositories[repo_name].location,
- "profiles", "thirdpartymirrors")))
- self._thirdpartymirrors = stack_dictlist(thirdparty_lists, incremental=True)
- return self._thirdpartymirrors
-
- def archlist(self):
- _archlist = []
- for myarch in self["PORTAGE_ARCHLIST"].split():
- _archlist.append(myarch)
- _archlist.append("~" + myarch)
- return _archlist
-
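# Tiny sketch of archlist(): every arch in PORTAGE_ARCHLIST is valid both as
# a stable keyword and as its ~arch (testing) variant.
def expand_archlist(portage_archlist):
    out = []
    for arch in portage_archlist.split():
        out.extend((arch, "~" + arch))
    return out

# expand_archlist("amd64 arm64") yields ["amd64", "~amd64", "arm64", "~arm64"]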
- def selinux_enabled(self):
- if getattr(self, "_selinux_enabled", None) is None:
- self._selinux_enabled = 0
- if "selinux" in self["USE"].split():
- if selinux:
- if selinux.is_selinux_enabled() == 1:
- self._selinux_enabled = 1
- else:
- self._selinux_enabled = 0
- else:
- writemsg(_("!!! SELinux module not found. Please verify that it was installed.\n"),
- noiselevel=-1)
- self._selinux_enabled = 0
-
- return self._selinux_enabled
-
- keys = __iter__
- items = iteritems
++
+ """
+ This class encompasses the main portage configuration. Data is pulled from
+ ROOT/PORTDIR/profiles/, from ROOT/etc/make.profile incrementally through all
+ parent profiles, as well as from ROOT/PORTAGE_CONFIGROOT/* for user-specified
+ overrides.
+
+ Generally, if you need data like USE flags, FEATURES, environment variables,
+ virtuals, etc., you look in here.
+ """
+
+ _constant_keys = frozenset(
+ ["PORTAGE_BIN_PATH", "PORTAGE_GID", "PORTAGE_PYM_PATH", "PORTAGE_PYTHONPATH"]
+ )
+
+ _deprecated_keys = {
+ "PORTAGE_LOGDIR": "PORT_LOGDIR",
+ "PORTAGE_LOGDIR_CLEAN": "PORT_LOGDIR_CLEAN",
+ "SIGNED_OFF_BY": "DCO_SIGNED_OFF_BY",
+ }
+
+ _setcpv_aux_keys = (
+ "BDEPEND",
+ "DEFINED_PHASES",
+ "DEPEND",
+ "EAPI",
+ "IDEPEND",
+ "INHERITED",
+ "IUSE",
+ "REQUIRED_USE",
+ "KEYWORDS",
+ "LICENSE",
+ "PDEPEND",
+ "PROPERTIES",
+ "RDEPEND",
+ "SLOT",
+ "repository",
+ "RESTRICT",
+ "LICENSE",
+ )
+
+ _module_aliases = {
+ "cache.metadata_overlay.database": "portage.cache.flat_hash.mtime_md5_database",
+ "portage.cache.metadata_overlay.database": "portage.cache.flat_hash.mtime_md5_database",
+ }
+
+ _case_insensitive_vars = special_env_vars.case_insensitive_vars
+ _default_globals = special_env_vars.default_globals
+ _env_blacklist = special_env_vars.env_blacklist
+ _environ_filter = special_env_vars.environ_filter
+ _environ_whitelist = special_env_vars.environ_whitelist
+ _environ_whitelist_re = special_env_vars.environ_whitelist_re
+ _global_only_vars = special_env_vars.global_only_vars
+
+ def __init__(
+ self,
+ clone=None,
+ mycpv=None,
+ config_profile_path=None,
+ config_incrementals=None,
+ config_root=None,
+ target_root=None,
+ sysroot=None,
+ eprefix=None,
+ local_config=True,
+ env=None,
+ _unmatched_removal=False,
+ repositories=None,
+ ):
+ """
+ @param clone: If provided, init will use deepcopy to copy by value the instance.
+ @type clone: Instance of config class.
+ @param mycpv: CPV to load up (see setcpv), this is the same as calling init with mycpv=None
+ and then calling instance.setcpv(mycpv).
+ @type mycpv: String
+ @param config_profile_path: Configurable path to the profile (usually PROFILE_PATH from portage.const)
+ @type config_profile_path: String
+ @param config_incrementals: List of incremental variables
+ (defaults to portage.const.INCREMENTALS)
+ @type config_incrementals: List
+ @param config_root: path to read local config from (defaults to "/", see PORTAGE_CONFIGROOT)
+ @type config_root: String
+ @param target_root: the target root, which typically corresponds to the
+ value of the $ROOT env variable (default is /)
+ @type target_root: String
+ @param sysroot: the sysroot to build against, which typically corresponds
+ to the value of the $SYSROOT env variable (default is /)
+ @type sysroot: String
+ @param eprefix: set the EPREFIX variable (default is portage.const.EPREFIX)
+ @type eprefix: String
+ @param local_config: Enables loading of local config (/etc/portage); used most by repoman to
+ ignore local config (keywording and unmasking)
+ @type local_config: Boolean
+ @param env: The calling environment which is used to override settings.
+ Defaults to os.environ if unspecified.
+ @type env: dict
+ @param _unmatched_removal: Enabled by repoman when the
+ --unmatched-removal option is given.
+ @type _unmatched_removal: Boolean
+ @param repositories: Configuration of repositories.
+ Defaults to portage.repository.config.load_repository_config().
+ @type repositories: Instance of portage.repository.config.RepoConfigLoader class.
+ """
+
+ # This is important when config is reloaded after emerge --sync.
+ _eapi_cache.clear()
+
+ # When initializing the global portage.settings instance, avoid
+ # raising exceptions whenever possible since exceptions thrown
+ # from 'import portage' or 'import portage.exceptions' statements
+ # can practically render the api unusable for api consumers.
+ tolerant = hasattr(portage, "_initializing_globals")
+ self._tolerant = tolerant
+ self._unmatched_removal = _unmatched_removal
+
+ self.locked = 0
+ self.mycpv = None
+ self._setcpv_args_hash = None
+ self.puse = ""
+ self._penv = []
+ self.modifiedkeys = []
+ self.uvlist = []
+ self._accept_chost_re = None
+ self._accept_properties = None
+ self._accept_restrict = None
+ self._features_overrides = []
+ self._make_defaults = None
+ self._parent_stable = None
+ self._soname_provided = None
+
+ # _unknown_features records unknown features that
+ # have triggered warning messages, and ensures that
+ # the same warning isn't shown twice.
+ self._unknown_features = set()
+
+ self.local_config = local_config
+
+ if clone:
+ # For immutable attributes, use shallow copy for
+ # speed and memory conservation.
+ self._tolerant = clone._tolerant
+ self._unmatched_removal = clone._unmatched_removal
+ self.categories = clone.categories
+ self.depcachedir = clone.depcachedir
+ self.incrementals = clone.incrementals
+ self.module_priority = clone.module_priority
+ self.profile_path = clone.profile_path
+ self.profiles = clone.profiles
+ self.packages = clone.packages
+ self.repositories = clone.repositories
+ self.unpack_dependencies = clone.unpack_dependencies
+ self._default_features_use = clone._default_features_use
+ self._iuse_effective = clone._iuse_effective
+ self._iuse_implicit_match = clone._iuse_implicit_match
+ self._non_user_variables = clone._non_user_variables
+ self._env_d_blacklist = clone._env_d_blacklist
+ self._pbashrc = clone._pbashrc
+ self._repo_make_defaults = clone._repo_make_defaults
+ self.usemask = clone.usemask
+ self.useforce = clone.useforce
+ self.puse = clone.puse
+ self.user_profile_dir = clone.user_profile_dir
+ self.local_config = clone.local_config
+ self.make_defaults_use = clone.make_defaults_use
+ self.mycpv = clone.mycpv
+ self._setcpv_args_hash = clone._setcpv_args_hash
+ self._soname_provided = clone._soname_provided
+ self._profile_bashrc = clone._profile_bashrc
+
+ # immutable attributes (internal policy ensures lack of mutation)
+ self._locations_manager = clone._locations_manager
+ self._use_manager = clone._use_manager
+ # force instantiation of lazy immutable objects when cloning, so
+ # that they're not instantiated more than once
+ self._keywords_manager_obj = clone._keywords_manager
+ self._mask_manager_obj = clone._mask_manager
+
+ # shared mutable attributes
+ self._unknown_features = clone._unknown_features
+
+ self.modules = copy.deepcopy(clone.modules)
+ self._penv = copy.deepcopy(clone._penv)
+
+ self.configdict = copy.deepcopy(clone.configdict)
+ self.configlist = [
+ self.configdict["env.d"],
+ self.configdict["repo"],
+ self.configdict["features"],
+ self.configdict["pkginternal"],
+ self.configdict["globals"],
+ self.configdict["defaults"],
+ self.configdict["conf"],
+ self.configdict["pkg"],
+ self.configdict["env"],
+ ]
+ self.lookuplist = self.configlist[:]
+ self.lookuplist.reverse()
+ self._use_expand_dict = copy.deepcopy(clone._use_expand_dict)
+ self.backupenv = self.configdict["backupenv"]
+ self.prevmaskdict = copy.deepcopy(clone.prevmaskdict)
+ self.pprovideddict = copy.deepcopy(clone.pprovideddict)
+ self.features = features_set(self)
+ self.features._features = copy.deepcopy(clone.features._features)
+ self._features_overrides = copy.deepcopy(clone._features_overrides)
+
+ # Strictly speaking _license_manager is not immutable. Users need to ensure that
+ # extract_global_changes() is called right after __init__ (if at all).
+ # It also has the mutable member _undef_lic_groups. It is used to track
+ # undefined license groups, to not display an error message for the same
+ # group again and again. Because of this, it's useful to share it between
+ # all LicenseManager instances.
+ self._license_manager = clone._license_manager
+
+ # force instantiation of lazy objects when cloning, so
+ # that they're not instantiated more than once
+ self._virtuals_manager_obj = copy.deepcopy(clone._virtuals_manager)
+
+ self._accept_properties = copy.deepcopy(clone._accept_properties)
+ self._ppropertiesdict = copy.deepcopy(clone._ppropertiesdict)
+ self._accept_restrict = copy.deepcopy(clone._accept_restrict)
+ self._paccept_restrict = copy.deepcopy(clone._paccept_restrict)
+ self._penvdict = copy.deepcopy(clone._penvdict)
+ self._pbashrcdict = copy.deepcopy(clone._pbashrcdict)
+ self._expand_map = copy.deepcopy(clone._expand_map)
+
+ else:
+ # lazily instantiated objects
+ self._keywords_manager_obj = None
+ self._mask_manager_obj = None
+ self._virtuals_manager_obj = None
+
+ locations_manager = LocationsManager(
+ config_root=config_root,
+ config_profile_path=config_profile_path,
+ eprefix=eprefix,
+ local_config=local_config,
+ target_root=target_root,
+ sysroot=sysroot,
+ )
+ self._locations_manager = locations_manager
+
+ eprefix = locations_manager.eprefix
+ config_root = locations_manager.config_root
+ sysroot = locations_manager.sysroot
+ esysroot = locations_manager.esysroot
+ broot = locations_manager.broot
+ abs_user_config = locations_manager.abs_user_config
+ make_conf_paths = [
+ os.path.join(config_root, "etc", "make.conf"),
+ os.path.join(config_root, MAKE_CONF_FILE),
+ ]
+ try:
+ if os.path.samefile(*make_conf_paths):
+ make_conf_paths.pop()
+ except OSError:
+ pass
+
+ make_conf_count = 0
+ make_conf = {}
+ for x in make_conf_paths:
+ mygcfg = getconfig(
+ x,
+ tolerant=tolerant,
+ allow_sourcing=True,
+ expand=make_conf,
+ recursive=True,
+ )
+ if mygcfg is not None:
+ make_conf.update(mygcfg)
+ make_conf_count += 1
+
+ if make_conf_count == 2:
+ writemsg(
+ "!!! %s\n"
+ % _("Found 2 make.conf files, using both '%s' and '%s'")
+ % tuple(make_conf_paths),
+ noiselevel=-1,
+ )
+
+ # __* variables set in make.conf are local and are not propagated.
+ make_conf = {k: v for k, v in make_conf.items() if not k.startswith("__")}
+
+ # Allow ROOT setting to come from make.conf if it's not overridden
+ # by the constructor argument (from the calling environment).
+ locations_manager.set_root_override(make_conf.get("ROOT"))
+ target_root = locations_manager.target_root
+ eroot = locations_manager.eroot
+ self.global_config_path = locations_manager.global_config_path
+
+ # The expand_map is used for variable substitution
+ # in getconfig() calls, and the getconfig() calls
+ # update expand_map with the value of each variable
+ # assignment that occurs. Variable substitution occurs
+ # in the following order, which corresponds to the
+ # order of appearance in self.lookuplist:
+ #
+ # * env.d
+ # * make.globals
+ # * make.defaults
+ # * make.conf
+ #
+ # Notably absent is "env", since we want to avoid any
+ # interaction with the calling environment that might
+ # lead to unexpected results.
+
+ env_d = (
+ getconfig(
+ os.path.join(eroot, "etc", "profile.env"),
+ tolerant=tolerant,
+ expand=False,
+ )
+ or {}
+ )
+ expand_map = env_d.copy()
+ self._expand_map = expand_map
+
+ # Allow make.globals and make.conf to set paths relative to vars like ${EPREFIX}.
+ expand_map["BROOT"] = broot
+ expand_map["EPREFIX"] = eprefix
+ expand_map["EROOT"] = eroot
+ expand_map["ESYSROOT"] = esysroot
+ expand_map["PORTAGE_CONFIGROOT"] = config_root
+ expand_map["ROOT"] = target_root
+ expand_map["SYSROOT"] = sysroot
+
+ if portage._not_installed:
+ make_globals_path = os.path.join(
+ PORTAGE_BASE_PATH, "cnf", "make.globals"
+ )
+ else:
+ make_globals_path = os.path.join(
+ self.global_config_path, "make.globals"
+ )
+ old_make_globals = os.path.join(config_root, "etc", "make.globals")
+ if os.path.isfile(old_make_globals) and not os.path.samefile(
+ make_globals_path, old_make_globals
+ ):
+ # Don't warn if they refer to the same path, since
+ # that can be used for backward compatibility with
+ # old software.
+ writemsg(
+ "!!! %s\n"
+ % _(
+ "Found obsolete make.globals file: "
+ "'%s', (using '%s' instead)"
+ )
+ % (old_make_globals, make_globals_path),
+ noiselevel=-1,
+ )
+
+ make_globals = getconfig(
+ make_globals_path, tolerant=tolerant, expand=expand_map
+ )
+ if make_globals is None:
+ make_globals = {}
+
+ for k, v in self._default_globals.items():
+ make_globals.setdefault(k, v)
+
+ if config_incrementals is None:
+ self.incrementals = INCREMENTALS
+ else:
+ self.incrementals = config_incrementals
+ if not isinstance(self.incrementals, frozenset):
+ self.incrementals = frozenset(self.incrementals)
+
+ self.module_priority = ("user", "default")
+ self.modules = {}
+ modules_file = os.path.join(config_root, MODULES_FILE_PATH)
+ modules_loader = KeyValuePairFileLoader(modules_file, None, None)
+ modules_dict, modules_errors = modules_loader.load()
+ self.modules["user"] = modules_dict
+ if self.modules["user"] is None:
+ self.modules["user"] = {}
+ user_auxdbmodule = self.modules["user"].get("portdbapi.auxdbmodule")
+ if (
+ user_auxdbmodule is not None
+ and user_auxdbmodule in self._module_aliases
+ ):
+ warnings.warn(
+ "'%s' is deprecated: %s" % (user_auxdbmodule, modules_file)
+ )
+
+ self.modules["default"] = {
+ "portdbapi.auxdbmodule": "portage.cache.flat_hash.mtime_md5_database",
+ }
+
+ self.configlist = []
+
+ # back up our incremental variables:
+ self.configdict = {}
+ self._use_expand_dict = {}
+ # configlist will contain: [ env.d, repo, features, pkginternal, globals, defaults, conf, pkg, env ]
+ self.configlist.append({})
+ self.configdict["env.d"] = self.configlist[-1]
+
+ self.configlist.append({})
+ self.configdict["repo"] = self.configlist[-1]
+
+ self.configlist.append({})
+ self.configdict["features"] = self.configlist[-1]
+
+ self.configlist.append({})
+ self.configdict["pkginternal"] = self.configlist[-1]
+
+ # env_d will be empty if profile.env doesn't exist.
+ if env_d:
+ self.configdict["env.d"].update(env_d)
+
+ # backupenv is used for calculating incremental variables.
+ if env is None:
+ env = os.environ
+
+ # Avoid potential UnicodeDecodeError exceptions later.
+ env_unicode = dict(
+ (_unicode_decode(k), _unicode_decode(v)) for k, v in env.items()
+ )
+
+ self.backupenv = env_unicode
+
+ if env_d:
+ # Remove duplicate values so they don't override updated
+ # profile.env values later (profile.env is reloaded in each
+ # call to self.regenerate).
+ for k, v in env_d.items():
+ try:
+ if self.backupenv[k] == v:
+ del self.backupenv[k]
+ except KeyError:
+ pass
+ del k, v
+
+ self.configdict["env"] = LazyItemsDict(self.backupenv)
+
+ self.configlist.append(make_globals)
+ self.configdict["globals"] = self.configlist[-1]
+
+ self.make_defaults_use = []
+
+ # Loading Repositories
+ self["PORTAGE_CONFIGROOT"] = config_root
+ self["ROOT"] = target_root
+ self["SYSROOT"] = sysroot
+ self["EPREFIX"] = eprefix
+ self["EROOT"] = eroot
+ self["ESYSROOT"] = esysroot
+ self["BROOT"] = broot
+ known_repos = []
+ portdir = ""
+ portdir_overlay = ""
+ portdir_sync = None
+ for confs in [make_globals, make_conf, self.configdict["env"]]:
+ v = confs.get("PORTDIR")
+ if v is not None:
+ portdir = v
+ known_repos.append(v)
+ v = confs.get("PORTDIR_OVERLAY")
+ if v is not None:
+ portdir_overlay = v
+ known_repos.extend(shlex_split(v))
+ v = confs.get("SYNC")
+ if v is not None:
+ portdir_sync = v
+ if "PORTAGE_RSYNC_EXTRA_OPTS" in confs:
+ self["PORTAGE_RSYNC_EXTRA_OPTS"] = confs["PORTAGE_RSYNC_EXTRA_OPTS"]
+
+ self["PORTDIR"] = portdir
+ self["PORTDIR_OVERLAY"] = portdir_overlay
+ if portdir_sync:
+ self["SYNC"] = portdir_sync
+ self.lookuplist = [self.configdict["env"]]
+ if repositories is None:
+ self.repositories = load_repository_config(self)
+ else:
+ self.repositories = repositories
+
+ known_repos.extend(repo.location for repo in self.repositories)
+ known_repos = frozenset(known_repos)
+
+ self["PORTAGE_REPOSITORIES"] = self.repositories.config_string()
+ self.backup_changes("PORTAGE_REPOSITORIES")
+
+ # filling PORTDIR and PORTDIR_OVERLAY variable for compatibility
+ main_repo = self.repositories.mainRepo()
+ if main_repo is not None:
+ self["PORTDIR"] = main_repo.location
+ self.backup_changes("PORTDIR")
+ expand_map["PORTDIR"] = self["PORTDIR"]
+
+ # repoman controls PORTDIR_OVERLAY via the environment, so no
+ # special cases are needed here.
+ portdir_overlay = list(self.repositories.repoLocationList())
+ if portdir_overlay and portdir_overlay[0] == self["PORTDIR"]:
+ portdir_overlay = portdir_overlay[1:]
+
+ new_ov = []
+ if portdir_overlay:
+ for ov in portdir_overlay:
+ ov = normalize_path(ov)
+ if isdir_raise_eaccess(ov) or portage._sync_mode:
+ new_ov.append(portage._shell_quote(ov))
+ else:
+ writemsg(
+ _("!!! Invalid PORTDIR_OVERLAY" " (not a dir): '%s'\n")
+ % ov,
+ noiselevel=-1,
+ )
+
+ self["PORTDIR_OVERLAY"] = " ".join(new_ov)
+ self.backup_changes("PORTDIR_OVERLAY")
+ expand_map["PORTDIR_OVERLAY"] = self["PORTDIR_OVERLAY"]
+
+ locations_manager.set_port_dirs(self["PORTDIR"], self["PORTDIR_OVERLAY"])
+ locations_manager.load_profiles(self.repositories, known_repos)
+
+ profiles_complex = locations_manager.profiles_complex
+ self.profiles = locations_manager.profiles
+ self.profile_path = locations_manager.profile_path
+ self.user_profile_dir = locations_manager.user_profile_dir
+
+ try:
+ packages_list = [
+ grabfile_package(
+ os.path.join(x.location, "packages"),
+ verify_eapi=True,
+ eapi=x.eapi,
+ eapi_default=None,
+ allow_repo=allow_profile_repo_deps(x),
+ allow_build_id=x.allow_build_id,
+ )
+ for x in profiles_complex
+ ]
+ except EnvironmentError as e:
+ _raise_exc(e)
+
+ self.packages = tuple(stack_lists(packages_list, incremental=1))
+
+ # prevmaskdict
+ self.prevmaskdict = {}
+ for x in self.packages:
+ # Negative atoms are filtered by the above stack_lists() call.
+ if not isinstance(x, Atom):
+ x = Atom(x.lstrip("*"))
+ self.prevmaskdict.setdefault(x.cp, []).append(x)
+
+ self.unpack_dependencies = load_unpack_dependencies_configuration(
+ self.repositories
+ )
+
+ mygcfg = {}
+ if profiles_complex:
+ mygcfg_dlists = []
+ for x in profiles_complex:
+ # Prevent accidents triggered by USE="${USE} ..." settings
+ # at the top of make.defaults which caused parent profile
+ # USE to override parent profile package.use settings.
+ # It would be nice to guard USE_EXPAND variables like
+ # this too, but unfortunately USE_EXPAND is not known
+ # until after make.defaults has been evaluated, so that
+ # will require some form of make.defaults preprocessing.
+ expand_map.pop("USE", None)
+ mygcfg_dlists.append(
+ getconfig(
+ os.path.join(x.location, "make.defaults"),
+ tolerant=tolerant,
+ expand=expand_map,
+ recursive=x.portage1_directories,
+ )
+ )
+ self._make_defaults = mygcfg_dlists
+ mygcfg = stack_dicts(mygcfg_dlists, incrementals=self.incrementals)
+ if mygcfg is None:
+ mygcfg = {}
+ self.configlist.append(mygcfg)
+ self.configdict["defaults"] = self.configlist[-1]
+
+ mygcfg = {}
+ for x in make_conf_paths:
+ mygcfg.update(
+ getconfig(
+ x,
+ tolerant=tolerant,
+ allow_sourcing=True,
+ expand=expand_map,
+ recursive=True,
+ )
+ or {}
+ )
+
+ # __* variables set in make.conf are local and are not propagated.
+ mygcfg = {k: v for k, v in mygcfg.items() if not k.startswith("__")}
+
+ # Don't allow the user to override certain variables in make.conf
+ profile_only_variables = (
+ self.configdict["defaults"].get("PROFILE_ONLY_VARIABLES", "").split()
+ )
+ profile_only_variables = stack_lists([profile_only_variables])
+ non_user_variables = set()
+ non_user_variables.update(profile_only_variables)
+ non_user_variables.update(self._env_blacklist)
+ non_user_variables.update(self._global_only_vars)
+ non_user_variables = frozenset(non_user_variables)
+ self._non_user_variables = non_user_variables
+
+ self._env_d_blacklist = frozenset(
+ chain(
+ profile_only_variables,
+ self._env_blacklist,
+ )
+ )
+ env_d = self.configdict["env.d"]
+ for k in self._env_d_blacklist:
+ env_d.pop(k, None)
+
+ for k in profile_only_variables:
+ mygcfg.pop(k, None)
+
+ self.configlist.append(mygcfg)
+ self.configdict["conf"] = self.configlist[-1]
+
+ self.configlist.append(LazyItemsDict())
+ self.configdict["pkg"] = self.configlist[-1]
+
+ self.configdict["backupenv"] = self.backupenv
+
+ # Don't allow the user to override certain variables in the env
+ for k in profile_only_variables:
+ self.backupenv.pop(k, None)
+
+ self.configlist.append(self.configdict["env"])
+
+ # make lookuplist for loading package.*
+ self.lookuplist = self.configlist[:]
+ self.lookuplist.reverse()
+
+ # Blacklist vars that could interfere with portage internals.
+ for blacklisted in self._env_blacklist:
+ for cfg in self.lookuplist:
+ cfg.pop(blacklisted, None)
+ self.backupenv.pop(blacklisted, None)
+ del blacklisted, cfg
+
+ self["PORTAGE_CONFIGROOT"] = config_root
+ self.backup_changes("PORTAGE_CONFIGROOT")
+ self["ROOT"] = target_root
+ self.backup_changes("ROOT")
+ self["SYSROOT"] = sysroot
+ self.backup_changes("SYSROOT")
+ self["EPREFIX"] = eprefix
+ self.backup_changes("EPREFIX")
+ self["EROOT"] = eroot
+ self.backup_changes("EROOT")
+ self["ESYSROOT"] = esysroot
+ self.backup_changes("ESYSROOT")
+ self["BROOT"] = broot
+ self.backup_changes("BROOT")
+
+ # The prefix of the running portage instance is used in the
+ # ebuild environment to implement the --host-root option for
+ # best_version and has_version.
+ self["PORTAGE_OVERRIDE_EPREFIX"] = portage.const.EPREFIX
+ self.backup_changes("PORTAGE_OVERRIDE_EPREFIX")
+
+ self._ppropertiesdict = portage.dep.ExtendedAtomDict(dict)
+ self._paccept_restrict = portage.dep.ExtendedAtomDict(dict)
+ self._penvdict = portage.dep.ExtendedAtomDict(dict)
+ self._pbashrcdict = {}
+ self._pbashrc = ()
+
+ self._repo_make_defaults = {}
+ for repo in self.repositories.repos_with_profiles():
+ d = (
+ getconfig(
+ os.path.join(repo.location, "profiles", "make.defaults"),
+ tolerant=tolerant,
+ expand=self.configdict["globals"].copy(),
+ recursive=repo.portage1_profiles,
+ )
+ or {}
+ )
+ if d:
+ for k in chain(
+ self._env_blacklist,
+ profile_only_variables,
+ self._global_only_vars,
+ ):
+ d.pop(k, None)
+ self._repo_make_defaults[repo.name] = d
+
+ # Read all USE related files from profiles and optionally from user config.
+ self._use_manager = UseManager(
+ self.repositories,
+ profiles_complex,
+ abs_user_config,
+ self._isStable,
+ user_config=local_config,
+ )
+ # Initialize all USE related variables we track ourselves.
+ self.usemask = self._use_manager.getUseMask()
+ self.useforce = self._use_manager.getUseForce()
+ self.configdict["conf"][
+ "USE"
+ ] = self._use_manager.extract_global_USE_changes(
+ self.configdict["conf"].get("USE", "")
+ )
+
+ # Read license_groups from profiles and, optionally, license_groups and package.license from user config
+ self._license_manager = LicenseManager(
+ locations_manager.profile_locations,
+ abs_user_config,
+ user_config=local_config,
+ )
+ # Extract '*/*' entries from package.license
+ self.configdict["conf"][
+ "ACCEPT_LICENSE"
+ ] = self._license_manager.extract_global_changes(
+ self.configdict["conf"].get("ACCEPT_LICENSE", "")
+ )
+
+ # profile.bashrc
+ self._profile_bashrc = tuple(
+ os.path.isfile(os.path.join(profile.location, "profile.bashrc"))
+ for profile in profiles_complex
+ )
+
+ if local_config:
+ # package.properties
+ propdict = grabdict_package(
+ os.path.join(abs_user_config, "package.properties"),
+ recursive=1,
+ allow_wildcard=True,
+ allow_repo=True,
+ verify_eapi=False,
+ allow_build_id=True,
+ )
+ v = propdict.pop("*/*", None)
+ if v is not None:
+ if "ACCEPT_PROPERTIES" in self.configdict["conf"]:
+ self.configdict["conf"]["ACCEPT_PROPERTIES"] += " " + " ".join(
+ v
+ )
+ else:
+ self.configdict["conf"]["ACCEPT_PROPERTIES"] = " ".join(v)
+ for k, v in propdict.items():
+ self._ppropertiesdict.setdefault(k.cp, {})[k] = v
+
+ # package.accept_restrict
+ d = grabdict_package(
+ os.path.join(abs_user_config, "package.accept_restrict"),
+ recursive=True,
+ allow_wildcard=True,
+ allow_repo=True,
+ verify_eapi=False,
+ allow_build_id=True,
+ )
+ v = d.pop("*/*", None)
+ if v is not None:
+ if "ACCEPT_RESTRICT" in self.configdict["conf"]:
+ self.configdict["conf"]["ACCEPT_RESTRICT"] += " " + " ".join(v)
+ else:
+ self.configdict["conf"]["ACCEPT_RESTRICT"] = " ".join(v)
+ for k, v in d.items():
+ self._paccept_restrict.setdefault(k.cp, {})[k] = v
+
+ # package.env
+ penvdict = grabdict_package(
+ os.path.join(abs_user_config, "package.env"),
+ recursive=1,
+ allow_wildcard=True,
+ allow_repo=True,
+ verify_eapi=False,
+ allow_build_id=True,
+ )
+ v = penvdict.pop("*/*", None)
+ if v is not None:
+ global_wildcard_conf = {}
+ self._grab_pkg_env(v, global_wildcard_conf)
+ incrementals = self.incrementals
+ conf_configdict = self.configdict["conf"]
+ for k, v in global_wildcard_conf.items():
+ if k in incrementals:
+ if k in conf_configdict:
+ conf_configdict[k] = conf_configdict[k] + " " + v
+ else:
+ conf_configdict[k] = v
+ else:
+ conf_configdict[k] = v
+ expand_map[k] = v
+
+ for k, v in penvdict.items():
+ self._penvdict.setdefault(k.cp, {})[k] = v
+
+ # package.bashrc
+ for profile in profiles_complex:
+ if not "profile-bashrcs" in profile.profile_formats:
+ continue
+ self._pbashrcdict[profile] = portage.dep.ExtendedAtomDict(dict)
+ bashrc = grabdict_package(
+ os.path.join(profile.location, "package.bashrc"),
+ recursive=1,
+ allow_wildcard=True,
+ allow_repo=allow_profile_repo_deps(profile),
+ verify_eapi=True,
+ eapi=profile.eapi,
+ eapi_default=None,
+ allow_build_id=profile.allow_build_id,
+ )
+ if not bashrc:
+ continue
+
+ for k, v in bashrc.items():
+ envfiles = [
+ os.path.join(profile.location, "bashrc", envname)
+ for envname in v
+ ]
+ self._pbashrcdict[profile].setdefault(k.cp, {}).setdefault(
+ k, []
+ ).extend(envfiles)
+
+ # getting categories from an external file now
+ self.categories = [
+ grabfile(os.path.join(x, "categories"))
+ for x in locations_manager.profile_and_user_locations
+ ]
+ category_re = dbapi._category_re
+ # categories used to be a tuple, but now we use a frozenset
+ # for hashed category validation in portdbapi.cp_list()
+ self.categories = frozenset(
+ x
+ for x in stack_lists(self.categories, incremental=1)
+ if category_re.match(x) is not None
+ )
+
+ archlist = [
+ grabfile(os.path.join(x, "arch.list"))
+ for x in locations_manager.profile_and_user_locations
+ ]
+ archlist = sorted(stack_lists(archlist, incremental=1))
+ self.configdict["conf"]["PORTAGE_ARCHLIST"] = " ".join(archlist)
+
+ pkgprovidedlines = []
+ for x in profiles_complex:
+ provpath = os.path.join(x.location, "package.provided")
+ if os.path.exists(provpath):
+ if _get_eapi_attrs(x.eapi).allows_package_provided:
+ pkgprovidedlines.append(
+ grabfile(provpath, recursive=x.portage1_directories)
+ )
+ else:
+ # TODO: bail out?
+ writemsg(
+ (
+ _("!!! package.provided not allowed in EAPI %s: ")
+ % x.eapi
+ )
+ + x.location
+ + "\n",
+ noiselevel=-1,
+ )
+
+ pkgprovidedlines = stack_lists(pkgprovidedlines, incremental=1)
+ has_invalid_data = False
+ for x in range(len(pkgprovidedlines) - 1, -1, -1):
+ myline = pkgprovidedlines[x]
+ if not isvalidatom("=" + myline):
+ writemsg(
+ _("Invalid package name in package.provided: %s\n") % myline,
+ noiselevel=-1,
+ )
+ has_invalid_data = True
+ del pkgprovidedlines[x]
+ continue
+ cpvr = catpkgsplit(pkgprovidedlines[x])
+ if not cpvr or cpvr[0] == "null":
+ writemsg(
+ _("Invalid package name in package.provided: ")
+ + pkgprovidedlines[x]
+ + "\n",
+ noiselevel=-1,
+ )
+ has_invalid_data = True
+ del pkgprovidedlines[x]
+ continue
+ if has_invalid_data:
+ writemsg(
+ _("See portage(5) for correct package.provided usage.\n"),
+ noiselevel=-1,
+ )
+ self.pprovideddict = {}
+ for x in pkgprovidedlines:
+ x_split = catpkgsplit(x)
+ if x_split is None:
+ continue
+ mycatpkg = cpv_getkey(x)
+ if mycatpkg in self.pprovideddict:
+ self.pprovideddict[mycatpkg].append(x)
+ else:
+ self.pprovideddict[mycatpkg] = [x]
+
+ # reasonable defaults; this is important as without USE_ORDER,
+ # USE will always be "" (nothing set)!
+ if "USE_ORDER" not in self:
+ self[
+ "USE_ORDER"
+ ] = "env:pkg:conf:defaults:pkginternal:features:repo:env.d"
+ self.backup_changes("USE_ORDER")
+
+ if "CBUILD" not in self and "CHOST" in self:
+ self["CBUILD"] = self["CHOST"]
+ self.backup_changes("CBUILD")
+
+ if "USERLAND" not in self:
+ # Set default USERLAND so that our test cases can assume that
+ # it's always set. This allows isolated-functions.sh to avoid
+ # calling uname -s when sourced.
+ system = platform.system()
+ if system is not None and (
+ system.endswith("BSD") or system == "DragonFly"
+ ):
+ self["USERLAND"] = "BSD"
+ else:
+ self["USERLAND"] = "GNU"
+ self.backup_changes("USERLAND")
+
+ default_inst_ids = {
+ "PORTAGE_INST_GID": "0",
+ "PORTAGE_INST_UID": "0",
+ }
+
+ eroot_or_parent = first_existing(eroot)
+ unprivileged = False
+ try:
++                # PREFIX LOCAL: inventing UID/GID values based on a path is a
++                # very bad idea: group ids do not have to match when a user is
++                # a member of many groups, so it breaks almost everything.
++                # In particular it breaks the configure-set portage group and
++                # user (see portage/data.py).
++ raise OSError(2, "No such file or directory")
+ eroot_st = os.stat(eroot_or_parent)
+ except OSError:
+ pass
+ else:
+
+ if portage.data._unprivileged_mode(eroot_or_parent, eroot_st):
+ unprivileged = True
+
+ default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
+ default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
+
+ if "PORTAGE_USERNAME" not in self:
+ try:
+ pwd_struct = pwd.getpwuid(eroot_st.st_uid)
+ except KeyError:
+ pass
+ else:
+ self["PORTAGE_USERNAME"] = pwd_struct.pw_name
+ self.backup_changes("PORTAGE_USERNAME")
+
+ if "PORTAGE_GRPNAME" not in self:
+ try:
+ grp_struct = grp.getgrgid(eroot_st.st_gid)
+ except KeyError:
+ pass
+ else:
+ self["PORTAGE_GRPNAME"] = grp_struct.gr_name
+ self.backup_changes("PORTAGE_GRPNAME")
+
+ for var, default_val in default_inst_ids.items():
+ try:
+ self[var] = str(int(self.get(var, default_val)))
+ except ValueError:
+ writemsg(
+ _(
+ "!!! %s='%s' is not a valid integer. "
+ "Falling back to %s.\n"
+ )
+ % (var, self[var], default_val),
+ noiselevel=-1,
+ )
+ self[var] = default_val
+ self.backup_changes(var)
+
+ self.depcachedir = self.get("PORTAGE_DEPCACHEDIR")
+ if self.depcachedir is None:
+ self.depcachedir = os.path.join(
+ os.sep, portage.const.EPREFIX, DEPCACHE_PATH.lstrip(os.sep)
+ )
+ if unprivileged and target_root != os.sep:
+ # In unprivileged mode, automatically make
+ # depcachedir relative to target_root if the
+ # default depcachedir is not writable.
+ if not os.access(first_existing(self.depcachedir), os.W_OK):
+ self.depcachedir = os.path.join(
+ eroot, DEPCACHE_PATH.lstrip(os.sep)
+ )
+
+ self["PORTAGE_DEPCACHEDIR"] = self.depcachedir
+ self.backup_changes("PORTAGE_DEPCACHEDIR")
+
+ if portage._internal_caller:
+ self["PORTAGE_INTERNAL_CALLER"] = "1"
+ self.backup_changes("PORTAGE_INTERNAL_CALLER")
+
+ # initialize self.features
+ self.regenerate()
+ feature_use = []
+ if "test" in self.features:
+ feature_use.append("test")
+ self.configdict["features"]["USE"] = self._default_features_use = " ".join(
+ feature_use
+ )
+ if feature_use:
+ # Regenerate USE so that the initial "test" flag state is
+ # correct for evaluation of !test? conditionals in RESTRICT.
+ self.regenerate()
+
+ if unprivileged:
+ self.features.add("unprivileged")
+
+ if bsd_chflags:
+ self.features.add("chflags")
+
+ self._init_iuse()
+
+ self._validate_commands()
+
+ for k in self._case_insensitive_vars:
+ if k in self:
+ self[k] = self[k].lower()
+ self.backup_changes(k)
+
+ # The first constructed config object initializes these modules,
+ # and subsequent calls to the _init() functions have no effect.
+ portage.output._init(config_root=self["PORTAGE_CONFIGROOT"])
+ portage.data._init(self)
+
+ if mycpv:
+ self.setcpv(mycpv)
+
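# Editorial sketch, not part of this commit: constructing a standalone
# config and reading a few of the values assembled by __init__ above.
# Assumes a regular installed Portage where the default paths resolve.
import portage

settings = portage.config(config_root="/", target_root="/")
print(settings["EPREFIX"], settings["EROOT"])
print(sorted(settings.features))
print("USE =", settings["USE"])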
+ def _init_iuse(self):
+ self._iuse_effective = self._calc_iuse_effective()
+ self._iuse_implicit_match = _iuse_implicit_match_cache(self)
+
+ @property
+ def mygcfg(self):
+ warnings.warn("portage.config.mygcfg is deprecated", stacklevel=3)
+ return {}
+
+ def _validate_commands(self):
+ for k in special_env_vars.validate_commands:
+ v = self.get(k)
+ if v is not None:
+ valid, v_split = validate_cmd_var(v)
+
+ if not valid:
+ if v_split:
+ writemsg_level(
+ _("%s setting is invalid: '%s'\n") % (k, v),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+
+ # before deleting the invalid setting, backup
+ # the default value if available
+ v = self.configdict["globals"].get(k)
+ if v is not None:
+ default_valid, v_split = validate_cmd_var(v)
+ if not default_valid:
+ if v_split:
+ writemsg_level(
+ _(
+ "%s setting from make.globals"
+ + " is invalid: '%s'\n"
+ )
+ % (k, v),
+ level=logging.ERROR,
+ noiselevel=-1,
+ )
+ # make.globals seems corrupt, so try for
+ # a hardcoded default instead
+ v = self._default_globals.get(k)
+
+ # delete all settings for this key,
+ # including the invalid one
+ del self[k]
+ self.backupenv.pop(k, None)
+ if v:
+ # restore validated default
+ self.configdict["globals"][k] = v
+
+ def _init_dirs(self):
+ """
+ Create a few directories that are critical to portage operation
+ """
+ if not os.access(self["EROOT"], os.W_OK):
+ return
+
+ # gid, mode, mask, preserve_perms
+ dir_mode_map = {
+ "tmp": (-1, 0o1777, 0, True),
+ "var/tmp": (-1, 0o1777, 0, True),
+ PRIVATE_PATH: (portage_gid, 0o2750, 0o2, False),
+ CACHE_PATH: (portage_gid, 0o755, 0o2, False),
+ }
+
+ for mypath, (gid, mode, modemask, preserve_perms) in dir_mode_map.items():
+ mydir = os.path.join(self["EROOT"], mypath)
+ if preserve_perms and os.path.isdir(mydir):
+ # Only adjust permissions on some directories if
+ # they don't exist yet. This gives freedom to the
+ # user to adjust permissions to suit their taste.
+ continue
+ try:
+ ensure_dirs(mydir, gid=gid, mode=mode, mask=modemask)
+ except PortageException as e:
+ writemsg(
+ _("!!! Directory initialization failed: '%s'\n") % mydir,
+ noiselevel=-1,
+ )
+ writemsg("!!! %s\n" % str(e), noiselevel=-1)
+
+ @property
+ def _keywords_manager(self):
+ if self._keywords_manager_obj is None:
+ self._keywords_manager_obj = KeywordsManager(
+ self._locations_manager.profiles_complex,
+ self._locations_manager.abs_user_config,
+ self.local_config,
+ global_accept_keywords=self.configdict["defaults"].get(
+ "ACCEPT_KEYWORDS", ""
+ ),
+ )
+ return self._keywords_manager_obj
+
+ @property
+ def _mask_manager(self):
+ if self._mask_manager_obj is None:
+ self._mask_manager_obj = MaskManager(
+ self.repositories,
+ self._locations_manager.profiles_complex,
+ self._locations_manager.abs_user_config,
+ user_config=self.local_config,
+ strict_umatched_removal=self._unmatched_removal,
+ )
+ return self._mask_manager_obj
+
+ @property
+ def _virtuals_manager(self):
+ if self._virtuals_manager_obj is None:
+ self._virtuals_manager_obj = VirtualsManager(self.profiles)
+ return self._virtuals_manager_obj
+
+ @property
+ def pkeywordsdict(self):
+ result = self._keywords_manager.pkeywordsdict.copy()
+ for k, v in result.items():
+ result[k] = v.copy()
+ return result
+
+ @property
+ def pmaskdict(self):
+ return self._mask_manager._pmaskdict.copy()
+
+ @property
+ def punmaskdict(self):
+ return self._mask_manager._punmaskdict.copy()
+
+ @property
+ def soname_provided(self):
+ if self._soname_provided is None:
+ d = stack_dictlist(
+ (
+ grabdict(os.path.join(x, "soname.provided"), recursive=True)
+ for x in self.profiles
+ ),
+ incremental=True,
+ )
+ self._soname_provided = frozenset(
+ SonameAtom(cat, soname)
+ for cat, sonames in d.items()
+ for soname in sonames
+ )
+ return self._soname_provided
+
+ def expandLicenseTokens(self, tokens):
+ """Take a token from ACCEPT_LICENSE or package.license and expand it
+ if it's a group token (indicated by @) or just return it if it's not a
+ group. If a group is negated then negate all group elements."""
+ return self._license_manager.expandLicenseTokens(tokens)
+
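# Editorial sketch, not part of this commit: group tokens start with "@" and
# are expanded via profiles/license_groups, so the result depends entirely on
# the repositories configured on the host; plain tokens pass through.
import portage

print(portage.settings.expandLicenseTokens(["@FSF-APPROVED", "MIT"]))
print(portage.settings.expandLicenseTokens(["-@EULA"]))  # negated group: members are returned negated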
+ def validate(self):
+ """Validate miscellaneous settings and display warnings if necessary.
+ (This code was previously in the global scope of portage.py)"""
+
+ groups = self.get("ACCEPT_KEYWORDS", "").split()
+ archlist = self.archlist()
+ if not archlist:
+ writemsg(
+ _(
+ "--- 'profiles/arch.list' is empty or "
+ "not available. Empty ebuild repository?\n"
+ ),
+ noiselevel=1,
+ )
+ else:
+ for group in groups:
+ if (
+ group not in archlist
+ and not (group.startswith("-") and group[1:] in archlist)
+ and group not in ("*", "~*", "**")
+ ):
+ writemsg(
+ _("!!! INVALID ACCEPT_KEYWORDS: %s\n") % str(group),
+ noiselevel=-1,
+ )
+
+ profile_broken = False
+
+ # getmaskingstatus requires ARCH for ACCEPT_KEYWORDS support
+ arch = self.get("ARCH")
+ if not self.profile_path or not arch:
+ profile_broken = True
+ else:
+ # If any one of these files exists, then
+ # the profile is considered valid.
+ for x in ("make.defaults", "parent", "packages", "use.force", "use.mask"):
+ if exists_raise_eaccess(os.path.join(self.profile_path, x)):
+ break
+ else:
+ profile_broken = True
+
+ if profile_broken and not portage._sync_mode:
+ abs_profile_path = None
+ for x in (PROFILE_PATH, "etc/make.profile"):
+ x = os.path.join(self["PORTAGE_CONFIGROOT"], x)
+ try:
+ os.lstat(x)
+ except OSError:
+ pass
+ else:
+ abs_profile_path = x
+ break
+
+ if abs_profile_path is None:
+ abs_profile_path = os.path.join(
+ self["PORTAGE_CONFIGROOT"], PROFILE_PATH
+ )
+
+ writemsg(
+ _(
+ "\n\n!!! %s is not a symlink and will probably prevent most merges.\n"
+ )
+ % abs_profile_path,
+ noiselevel=-1,
+ )
+ writemsg(
+ _("!!! It should point into a profile within %s/profiles/\n")
+ % self["PORTDIR"]
+ )
+ writemsg(
+ _(
+ "!!! (You can safely ignore this message when syncing. It's harmless.)\n\n\n"
+ )
+ )
+
+ abs_user_virtuals = os.path.join(self["PORTAGE_CONFIGROOT"], USER_VIRTUALS_FILE)
+ if os.path.exists(abs_user_virtuals):
+ writemsg("\n!!! /etc/portage/virtuals is deprecated in favor of\n")
+ writemsg("!!! /etc/portage/profile/virtuals. Please move it to\n")
+ writemsg("!!! this new location.\n\n")
+
+ if not sandbox_capable and (
+ "sandbox" in self.features or "usersandbox" in self.features
+ ):
+ if self.profile_path is not None and os.path.realpath(
+ self.profile_path
+ ) == os.path.realpath(
+ os.path.join(self["PORTAGE_CONFIGROOT"], PROFILE_PATH)
+ ):
+ # Don't show this warning when running repoman and the
+ # sandbox feature came from a profile that doesn't belong
+ # to the user.
+ writemsg(
+ colorize(
+ "BAD", _("!!! Problem with sandbox" " binary. Disabling...\n\n")
+ ),
+ noiselevel=-1,
+ )
+
+ if "fakeroot" in self.features and not fakeroot_capable:
+ writemsg(
+ _(
+ "!!! FEATURES=fakeroot is enabled, but the "
+ "fakeroot binary is not installed.\n"
+ ),
+ noiselevel=-1,
+ )
+
+ if "webrsync-gpg" in self.features:
+ writemsg(
+ _(
+ "!!! FEATURES=webrsync-gpg is deprecated, see the make.conf(5) man page.\n"
+ ),
+ noiselevel=-1,
+ )
+
+ if os.getuid() == 0 and not hasattr(os, "setgroups"):
+ warning_shown = False
+
+ if "userpriv" in self.features:
+ writemsg(
+ _(
+ "!!! FEATURES=userpriv is enabled, but "
+ "os.setgroups is not available.\n"
+ ),
+ noiselevel=-1,
+ )
+ warning_shown = True
+
+ if "userfetch" in self.features:
+ writemsg(
+ _(
+ "!!! FEATURES=userfetch is enabled, but "
+ "os.setgroups is not available.\n"
+ ),
+ noiselevel=-1,
+ )
+ warning_shown = True
+
+ if warning_shown and platform.python_implementation() == "PyPy":
+ writemsg(
+ _("!!! See https://bugs.pypy.org/issue833 for details.\n"),
+ noiselevel=-1,
+ )
+
+ binpkg_compression = self.get("BINPKG_COMPRESS")
+ if binpkg_compression:
+ try:
+ compression = _compressors[binpkg_compression]
+ except KeyError as e:
+ writemsg(
+ "!!! BINPKG_COMPRESS contains invalid or "
+ "unsupported compression method: %s" % e.args[0],
+ noiselevel=-1,
+ )
+ else:
+ try:
+ compression_binary = shlex_split(
+ portage.util.varexpand(compression["compress"], mydict=self)
+ )[0]
+ except IndexError as e:
+ writemsg(
+ "!!! BINPKG_COMPRESS contains invalid or "
+ "unsupported compression method: %s" % e.args[0],
+ noiselevel=-1,
+ )
+ else:
+ if portage.process.find_binary(compression_binary) is None:
+ missing_package = compression["package"]
+ writemsg(
+ "!!! BINPKG_COMPRESS unsupported %s. "
+ "Missing package: %s"
+ % (binpkg_compression, missing_package),
+ noiselevel=-1,
+ )
+
+ def load_best_module(self, property_string):
+ best_mod = best_from_dict(property_string, self.modules, self.module_priority)
+ mod = None
+ try:
+ mod = load_mod(best_mod)
+ except ImportError:
+ if best_mod in self._module_aliases:
+ mod = load_mod(self._module_aliases[best_mod])
+ elif not best_mod.startswith("cache."):
+ raise
+ else:
+ best_mod = "portage." + best_mod
+ try:
+ mod = load_mod(best_mod)
+ except ImportError:
+ raise
+ return mod
+
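# Editorial sketch, not part of this commit: resolving the metadata cache
# module the same way portdbapi does, honouring any /etc/portage/modules
# override or alias listed in _module_aliases above.
import portage

mod = portage.settings.load_best_module("portdbapi.auxdbmodule")
print(mod)  # defaults to portage.cache.flat_hash.mtime_md5_database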
+ def lock(self):
+ self.locked = 1
+
+ def unlock(self):
+ self.locked = 0
+
+ def modifying(self):
+ if self.locked:
+ raise Exception(_("Configuration is locked."))
+
+ def backup_changes(self, key=None):
+ self.modifying()
+ if key and key in self.configdict["env"]:
+ self.backupenv[key] = copy.deepcopy(self.configdict["env"][key])
+ else:
+ raise KeyError(_("No such key defined in environment: %s") % key)
+
+ def reset(self, keeping_pkg=0, use_cache=None):
+ """
+ Restore environment from self.backupenv, call self.regenerate()
+ @param keeping_pkg: Should we keep the setcpv() data or delete it.
+ @type keeping_pkg: Boolean
+ @rtype: None
+ """
+
+ if use_cache is not None:
+ warnings.warn(
+ "The use_cache parameter for config.reset() is deprecated and without effect.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ self.modifying()
+ self.configdict["env"].clear()
+ self.configdict["env"].update(self.backupenv)
+
+ self.modifiedkeys = []
+ if not keeping_pkg:
+ self.mycpv = None
+ self._setcpv_args_hash = None
+ self.puse = ""
+ del self._penv[:]
+ self.configdict["pkg"].clear()
+ self.configdict["pkginternal"].clear()
+ self.configdict["features"]["USE"] = self._default_features_use
+ self.configdict["repo"].clear()
+ self.configdict["defaults"]["USE"] = " ".join(self.make_defaults_use)
+ self.usemask = self._use_manager.getUseMask()
+ self.useforce = self._use_manager.getUseForce()
+ self.regenerate()
+
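# Editorial sketch, not part of this commit: backup_changes() copies a value
# set on the live config into backupenv, so a later reset() preserves it.
import portage

cfg = portage.config(clone=portage.settings)
cfg["PORTAGE_TMPDIR"] = "/var/tmp/mytmp"  # hypothetical override
cfg.backup_changes("PORTAGE_TMPDIR")
cfg.reset()
print(cfg["PORTAGE_TMPDIR"])  # still "/var/tmp/mytmp"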
+ class _lazy_vars:
+
+ __slots__ = ("built_use", "settings", "values")
+
+ def __init__(self, built_use, settings):
+ self.built_use = built_use
+ self.settings = settings
+ self.values = None
+
+ def __getitem__(self, k):
+ if self.values is None:
+ self.values = self._init_values()
+ return self.values[k]
+
+ def _init_values(self):
+ values = {}
+ settings = self.settings
+ use = self.built_use
+ if use is None:
+ use = frozenset(settings["PORTAGE_USE"].split())
+
+ values[
+ "ACCEPT_LICENSE"
+ ] = settings._license_manager.get_prunned_accept_license(
+ settings.mycpv,
+ use,
+ settings.get("LICENSE", ""),
+ settings.get("SLOT"),
+ settings.get("PORTAGE_REPO_NAME"),
+ )
+ values["PORTAGE_PROPERTIES"] = self._flatten("PROPERTIES", use, settings)
+ values["PORTAGE_RESTRICT"] = self._flatten("RESTRICT", use, settings)
+ return values
+
+ def _flatten(self, var, use, settings):
+ try:
+ restrict = set(
+ use_reduce(settings.get(var, ""), uselist=use, flat=True)
+ )
+ except InvalidDependString:
+ restrict = set()
+ return " ".join(sorted(restrict))
+
+ class _lazy_use_expand:
+ """
+ Lazily evaluate USE_EXPAND variables since they are only needed when
+ an ebuild shell is spawned. Variable values are made consistent with
+ the previously calculated USE settings.
+ """
+
+ def __init__(
+ self,
+ settings,
+ unfiltered_use,
+ use,
+ usemask,
+ iuse_effective,
+ use_expand_split,
+ use_expand_dict,
+ ):
+ self._settings = settings
+ self._unfiltered_use = unfiltered_use
+ self._use = use
+ self._usemask = usemask
+ self._iuse_effective = iuse_effective
+ self._use_expand_split = use_expand_split
+ self._use_expand_dict = use_expand_dict
+
+ def __getitem__(self, key):
+ prefix = key.lower() + "_"
+ prefix_len = len(prefix)
+ expand_flags = set(
+ x[prefix_len:] for x in self._use if x[:prefix_len] == prefix
+ )
+ var_split = self._use_expand_dict.get(key, "").split()
+ # Preserve the order of var_split because it can matter for things
+ # like LINGUAS.
+ var_split = [x for x in var_split if x in expand_flags]
+ var_split.extend(expand_flags.difference(var_split))
+ has_wildcard = "*" in expand_flags
+ if has_wildcard:
+ var_split = [x for x in var_split if x != "*"]
+ has_iuse = set()
+ for x in self._iuse_effective:
+ if x[:prefix_len] == prefix:
+ has_iuse.add(x[prefix_len:])
+ if has_wildcard:
+ # * means to enable everything in IUSE that's not masked
+ if has_iuse:
+ usemask = self._usemask
+ for suffix in has_iuse:
+ x = prefix + suffix
+ if x not in usemask:
+ if suffix not in expand_flags:
+ var_split.append(suffix)
+ else:
+ # If there is a wildcard and no matching flags in IUSE then
+ # LINGUAS should be unset so that all .mo files are
+ # installed.
+ var_split = []
+ # Make the flags unique and filter them according to IUSE.
+ # Also, continue to preserve order for things like LINGUAS
+ # and filter any duplicates that variable may contain.
+ filtered_var_split = []
+ remaining = has_iuse.intersection(var_split)
+ for x in var_split:
+ if x in remaining:
+ remaining.remove(x)
+ filtered_var_split.append(x)
+ var_split = filtered_var_split
+
+ return " ".join(var_split)
+
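# Editorial toy example, not part of this commit, of the naming convention
# that _lazy_use_expand relies on: a USE_EXPAND variable FOO with value "bar"
# corresponds to the USE flag "foo_bar", so expansion is prefix stripping
# over the already-computed USE set.
use = {"python_targets_python3_11", "l10n_de", "test"}
prefix = "python_targets_"
print(" ".join(sorted(x[len(prefix):] for x in use if x.startswith(prefix))))
# -> python3_11  (i.e. PYTHON_TARGETS="python3_11")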
+ def _setcpv_recursion_gate(f):
+ """
+ Raise AssertionError for recursive setcpv calls.
+ """
+
+ def wrapper(self, *args, **kwargs):
+ if hasattr(self, "_setcpv_active"):
+ raise AssertionError("setcpv recursion detected")
+ self._setcpv_active = True
+ try:
+ return f(self, *args, **kwargs)
+ finally:
+ del self._setcpv_active
+
+ return wrapper
+
+ @_setcpv_recursion_gate
+ def setcpv(self, mycpv, use_cache=None, mydb=None):
+ """
+ Load a particular CPV into the config, this lets us see the
+ Default USE flags for a particular ebuild as well as the USE
+ flags from package.use.
+
+ @param mycpv: A cpv to load
+ @type mycpv: string
+ @param mydb: a dbapi instance that supports aux_get with the IUSE key.
+ @type mydb: dbapi or derivative.
+ @rtype: None
+ """
+
+ if use_cache is not None:
+ warnings.warn(
+ "The use_cache parameter for config.setcpv() is deprecated and without effect.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ self.modifying()
+
+ pkg = None
+ built_use = None
+ explicit_iuse = None
+ if not isinstance(mycpv, str):
+ pkg = mycpv
+ mycpv = pkg.cpv
+ mydb = pkg._metadata
+ explicit_iuse = pkg.iuse.all
+ args_hash = (mycpv, id(pkg))
+ if pkg.built:
+ built_use = pkg.use.enabled
+ else:
+ args_hash = (mycpv, id(mydb))
+
+ if args_hash == self._setcpv_args_hash:
+ return
+ self._setcpv_args_hash = args_hash
+
+ has_changed = False
+ self.mycpv = mycpv
+ cat, pf = catsplit(mycpv)
+ cp = cpv_getkey(mycpv)
+ cpv_slot = self.mycpv
+ pkginternaluse = ""
+ pkginternaluse_list = []
+ feature_use = []
+ iuse = ""
+ pkg_configdict = self.configdict["pkg"]
+ previous_iuse = pkg_configdict.get("IUSE")
+ previous_iuse_effective = pkg_configdict.get("IUSE_EFFECTIVE")
+ previous_features = pkg_configdict.get("FEATURES")
+ previous_penv = self._penv
+
+ aux_keys = self._setcpv_aux_keys
+
+ # Discard any existing metadata and package.env settings from
+ # the previous package instance.
+ pkg_configdict.clear()
+
+ pkg_configdict["CATEGORY"] = cat
+ pkg_configdict["PF"] = pf
+ repository = None
+ eapi = None
+ if mydb:
+ if not hasattr(mydb, "aux_get"):
+ for k in aux_keys:
+ if k in mydb:
+ # Make these lazy, since __getitem__ triggers
+ # evaluation of USE conditionals which can't
+ # occur until PORTAGE_USE is calculated below.
+ pkg_configdict.addLazySingleton(k, mydb.__getitem__, k)
+ else:
+ # When calling dbapi.aux_get(), grab USE for built/installed
+ # packages since we want to save it in PORTAGE_BUILT_USE for
+ # evaluating conditional USE deps in atoms passed via IPC to
+ # helpers like has_version and best_version.
+ aux_keys = set(aux_keys)
+ if hasattr(mydb, "_aux_cache_keys"):
+ aux_keys = aux_keys.intersection(mydb._aux_cache_keys)
+ aux_keys.add("USE")
+ aux_keys = list(aux_keys)
+ for k, v in zip(aux_keys, mydb.aux_get(self.mycpv, aux_keys)):
+ pkg_configdict[k] = v
+ built_use = frozenset(pkg_configdict.pop("USE").split())
+ if not built_use:
+ # Empty USE means this dbapi instance does not contain
+ # built packages.
+ built_use = None
+ eapi = pkg_configdict["EAPI"]
+
+ repository = pkg_configdict.pop("repository", None)
+ if repository is not None:
+ pkg_configdict["PORTAGE_REPO_NAME"] = repository
+ iuse = pkg_configdict["IUSE"]
+ if pkg is None:
+ self.mycpv = _pkg_str(
+ self.mycpv, metadata=pkg_configdict, settings=self
+ )
+ cpv_slot = self.mycpv
+ else:
+ cpv_slot = pkg
+ for x in iuse.split():
+ if x.startswith("+"):
+ pkginternaluse_list.append(x[1:])
+ elif x.startswith("-"):
+ pkginternaluse_list.append(x)
+ pkginternaluse = " ".join(pkginternaluse_list)
+
+ eapi_attrs = _get_eapi_attrs(eapi)
+
+ if pkginternaluse != self.configdict["pkginternal"].get("USE", ""):
+ self.configdict["pkginternal"]["USE"] = pkginternaluse
+ has_changed = True
+
+ repo_env = []
+ if repository and repository != Package.UNKNOWN_REPO:
+ repos = []
+ try:
+ repos.extend(
+ repo.name for repo in self.repositories[repository].masters
+ )
+ except KeyError:
+ pass
+ repos.append(repository)
+ for repo in repos:
+ d = self._repo_make_defaults.get(repo)
+ if d is None:
+ d = {}
+ else:
+ # make a copy, since we might modify it with
+ # package.use settings
+ d = d.copy()
+ cpdict = self._use_manager._repo_puse_dict.get(repo, {}).get(cp)
+ if cpdict:
+ repo_puse = ordered_by_atom_specificity(cpdict, cpv_slot)
+ if repo_puse:
+ for x in repo_puse:
+ d["USE"] = d.get("USE", "") + " " + " ".join(x)
+ if d:
+ repo_env.append(d)
+
+ if repo_env or self.configdict["repo"]:
+ self.configdict["repo"].clear()
+ self.configdict["repo"].update(
+ stack_dicts(repo_env, incrementals=self.incrementals)
+ )
+ has_changed = True
+
+ defaults = []
+ for i, pkgprofileuse_dict in enumerate(self._use_manager._pkgprofileuse):
+ if self.make_defaults_use[i]:
+ defaults.append(self.make_defaults_use[i])
+ cpdict = pkgprofileuse_dict.get(cp)
+ if cpdict:
+ pkg_defaults = ordered_by_atom_specificity(cpdict, cpv_slot)
+ if pkg_defaults:
+ defaults.extend(pkg_defaults)
+ defaults = " ".join(defaults)
+ if defaults != self.configdict["defaults"].get("USE", ""):
+ self.configdict["defaults"]["USE"] = defaults
+ has_changed = True
+
+ useforce = self._use_manager.getUseForce(cpv_slot)
+ if useforce != self.useforce:
+ self.useforce = useforce
+ has_changed = True
+
+ usemask = self._use_manager.getUseMask(cpv_slot)
+ if usemask != self.usemask:
+ self.usemask = usemask
+ has_changed = True
+
+ oldpuse = self.puse
+ self.puse = self._use_manager.getPUSE(cpv_slot)
+ if oldpuse != self.puse:
+ has_changed = True
+ self.configdict["pkg"]["PKGUSE"] = self.puse[:] # For saving to PUSE file
+ self.configdict["pkg"]["USE"] = self.puse[:] # this gets appended to USE
+
+ if previous_features:
+ # The package from the previous setcpv call had package.env
+ # settings which modified FEATURES. Therefore, trigger a
+ # regenerate() call in order to ensure that self.features
+ # is accurate.
+ has_changed = True
+ # Prevent stale features USE from corrupting the evaluation
+ # of USE conditional RESTRICT.
+ self.configdict["features"]["USE"] = self._default_features_use
+
+ self._penv = []
+ cpdict = self._penvdict.get(cp)
+ if cpdict:
+ penv_matches = ordered_by_atom_specificity(cpdict, cpv_slot)
+ if penv_matches:
+ for x in penv_matches:
+ self._penv.extend(x)
+
+ bashrc_files = []
+
+ for profile, profile_bashrc in zip(
+ self._locations_manager.profiles_complex, self._profile_bashrc
+ ):
+ if profile_bashrc:
+ bashrc_files.append(os.path.join(profile.location, "profile.bashrc"))
+ if profile in self._pbashrcdict:
+ cpdict = self._pbashrcdict[profile].get(cp)
+ if cpdict:
+ bashrc_matches = ordered_by_atom_specificity(cpdict, cpv_slot)
+ for x in bashrc_matches:
+ bashrc_files.extend(x)
+
+ self._pbashrc = tuple(bashrc_files)
+
+ protected_pkg_keys = set(pkg_configdict)
+ protected_pkg_keys.discard("USE")
+
+ # If there are _any_ package.env settings for this package
+ # then it automatically triggers config.reset(), in order
+ # to account for possible incremental interaction between
+ # package.use, package.env, and overrides from the calling
+ # environment (configdict['env']).
+ if self._penv:
+ has_changed = True
+ # USE is special because package.use settings override
+ # it. Discard any package.use settings here and they'll
+ # be added back later.
+ pkg_configdict.pop("USE", None)
+ self._grab_pkg_env(
+ self._penv, pkg_configdict, protected_keys=protected_pkg_keys
+ )
+
+ # Now add package.use settings, which override USE from
+ # package.env
+ if self.puse:
+ if "USE" in pkg_configdict:
+ pkg_configdict["USE"] = pkg_configdict["USE"] + " " + self.puse
+ else:
+ pkg_configdict["USE"] = self.puse
+
+ elif previous_penv:
+ has_changed = True
+
+ if not (
+ previous_iuse == iuse
+ and previous_iuse_effective is not None == eapi_attrs.iuse_effective
+ ):
+ has_changed = True
+
+ if has_changed:
+ # This can modify self.features due to package.env settings.
+ self.reset(keeping_pkg=1)
+
+ if "test" in self.features:
+ # This is independent of IUSE and RESTRICT, so that the same
+ # value can be shared between packages with different settings,
+ # which is important when evaluating USE conditional RESTRICT.
+ feature_use.append("test")
+
+ feature_use = " ".join(feature_use)
+ if feature_use != self.configdict["features"]["USE"]:
+ # Regenerate USE for evaluation of conditional RESTRICT.
+ self.configdict["features"]["USE"] = feature_use
+ self.reset(keeping_pkg=1)
+ has_changed = True
+
+ if explicit_iuse is None:
+ explicit_iuse = frozenset(x.lstrip("+-") for x in iuse.split())
+ if eapi_attrs.iuse_effective:
+ iuse_implicit_match = self._iuse_effective_match
+ else:
+ iuse_implicit_match = self._iuse_implicit_match
+
+ if pkg is None:
+ raw_properties = pkg_configdict.get("PROPERTIES")
+ raw_restrict = pkg_configdict.get("RESTRICT")
+ else:
+ raw_properties = pkg._raw_metadata["PROPERTIES"]
+ raw_restrict = pkg._raw_metadata["RESTRICT"]
+
+ restrict_test = False
+ if raw_restrict:
+ try:
+ if built_use is not None:
+ properties = use_reduce(
+ raw_properties, uselist=built_use, flat=True
+ )
+ restrict = use_reduce(raw_restrict, uselist=built_use, flat=True)
+ else:
+ properties = use_reduce(
+ raw_properties,
+ uselist=frozenset(
+ x
+ for x in self["USE"].split()
+ if x in explicit_iuse or iuse_implicit_match(x)
+ ),
+ flat=True,
+ )
+ restrict = use_reduce(
+ raw_restrict,
+ uselist=frozenset(
+ x
+ for x in self["USE"].split()
+ if x in explicit_iuse or iuse_implicit_match(x)
+ ),
+ flat=True,
+ )
+ except PortageException:
+ pass
+ else:
+ allow_test = self.get("ALLOW_TEST", "").split()
+ restrict_test = (
+ "test" in restrict
+ and not "all" in allow_test
+ and not ("test_network" in properties and "network" in allow_test)
+ )
+
+ if restrict_test and "test" in self.features:
+ # Handle it like IUSE="-test", since features USE is
+ # independent of RESTRICT.
+ pkginternaluse_list.append("-test")
+ pkginternaluse = " ".join(pkginternaluse_list)
+ self.configdict["pkginternal"]["USE"] = pkginternaluse
+ # TODO: can we avoid that?
+ self.reset(keeping_pkg=1)
+ has_changed = True
+
+ env_configdict = self.configdict["env"]
+
+ # Ensure that "pkg" values are always preferred over "env" values.
+ # This must occur _after_ the above reset() call, since reset()
+ # copies values from self.backupenv.
+ for k in protected_pkg_keys:
+ env_configdict.pop(k, None)
+
+ lazy_vars = self._lazy_vars(built_use, self)
+ env_configdict.addLazySingleton(
+ "ACCEPT_LICENSE", lazy_vars.__getitem__, "ACCEPT_LICENSE"
+ )
+ env_configdict.addLazySingleton(
+ "PORTAGE_PROPERTIES", lazy_vars.__getitem__, "PORTAGE_PROPERTIES"
+ )
+ env_configdict.addLazySingleton(
+ "PORTAGE_RESTRICT", lazy_vars.__getitem__, "PORTAGE_RESTRICT"
+ )
+
+ if built_use is not None:
+ pkg_configdict["PORTAGE_BUILT_USE"] = " ".join(built_use)
+
+ # If reset() has not been called, it's safe to return
+ # early if IUSE has not changed.
+ if not has_changed:
+ return
+
+ # Filter out USE flags that aren't part of IUSE. This has to
+ # be done for every setcpv() call since practically every
+ # package has different IUSE.
+ use = set(self["USE"].split())
+ unfiltered_use = frozenset(use)
+
+ if eapi_attrs.iuse_effective:
+ portage_iuse = set(self._iuse_effective)
+ portage_iuse.update(explicit_iuse)
+ if built_use is not None:
+ # When the binary package was built, the profile may have
+ # had different IUSE_IMPLICIT settings, so any member of
+ # the built USE setting is considered to be a member of
+ # IUSE_EFFECTIVE (see bug 640318).
+ portage_iuse.update(built_use)
+ self.configdict["pkg"]["IUSE_EFFECTIVE"] = " ".join(sorted(portage_iuse))
+
+ self.configdict["env"]["BASH_FUNC____in_portage_iuse%%"] = (
+ "() { "
+ "if [[ ${#___PORTAGE_IUSE_HASH[@]} -lt 1 ]]; then "
+ " declare -gA ___PORTAGE_IUSE_HASH=(%s); "
+ "fi; "
+ "[[ -n ${___PORTAGE_IUSE_HASH[$1]} ]]; "
+ "}"
+ ) % " ".join('["%s"]=1' % x for x in portage_iuse)
+ else:
+ portage_iuse = self._get_implicit_iuse()
+ portage_iuse.update(explicit_iuse)
+
+ # _get_implicit_iuse() returns regular-expression patterns,
+ # so we can't use the (faster) hash lookup. Fall back to
+ # implementing ___in_portage_iuse() the older/slower way.
+
+ # PORTAGE_IUSE is not always needed so it's lazily evaluated.
+ self.configdict["env"].addLazySingleton(
+ "PORTAGE_IUSE", _lazy_iuse_regex, portage_iuse
+ )
+ self.configdict["env"][
+ "BASH_FUNC____in_portage_iuse%%"
+ ] = "() { [[ $1 =~ ${PORTAGE_IUSE} ]]; }"
+
+ ebuild_force_test = not restrict_test and self.get("EBUILD_FORCE_TEST") == "1"
+
+ if "test" in explicit_iuse or iuse_implicit_match("test"):
+ if "test" in self.features:
+ if ebuild_force_test and "test" in self.usemask:
+ self.usemask = frozenset(x for x in self.usemask if x != "test")
+ if restrict_test or ("test" in self.usemask and not ebuild_force_test):
+ # "test" is in IUSE and USE=test is masked, so execution
+ # of src_test() is probably not reliable. Therefore,
+ # temporarily disable FEATURES=test just for this package.
+ self["FEATURES"] = " ".join(x for x in self.features if x != "test")
+
+ # Allow _* flags from USE_EXPAND wildcards to pass through here.
+ use.difference_update(
+ [
+ x
+ for x in use
+ if (x not in explicit_iuse and not iuse_implicit_match(x))
+ and x[-2:] != "_*"
+ ]
+ )
+
+ # Use the calculated USE flags to regenerate the USE_EXPAND flags so
+ # that they are consistent. For optimal performance, use slice
+ # comparison instead of startswith().
+ use_expand_split = set(x.lower() for x in self.get("USE_EXPAND", "").split())
+ lazy_use_expand = self._lazy_use_expand(
+ self,
+ unfiltered_use,
+ use,
+ self.usemask,
+ portage_iuse,
+ use_expand_split,
+ self._use_expand_dict,
+ )
+
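+ # Group the effective IUSE flags by their USE_EXPAND prefix so that
+ # wildcard entries such as "<prefix>_*" in USE can be expanded below
+ # to the matching (unmasked) flags.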
+ use_expand_iuses = dict((k, set()) for k in use_expand_split)
+ for x in portage_iuse:
+ x_split = x.split("_")
+ if len(x_split) == 1:
+ continue
+ for i in range(len(x_split) - 1):
+ k = "_".join(x_split[: i + 1])
+ if k in use_expand_split:
+ use_expand_iuses[k].add(x)
+ break
+
+ for k, use_expand_iuse in use_expand_iuses.items():
+ if k + "_*" in use:
+ use.update(x for x in use_expand_iuse if x not in usemask)
+ k = k.upper()
+ self.configdict["env"].addLazySingleton(k, lazy_use_expand.__getitem__, k)
+
+ for k in self.get("USE_EXPAND_UNPREFIXED", "").split():
+ var_split = self.get(k, "").split()
+ var_split = [x for x in var_split if x in use]
+ if var_split:
+ self.configlist[-1][k] = " ".join(var_split)
+ elif k in self:
+ self.configlist[-1][k] = ""
+
+ # USE is filtered for the ebuild environment. Store the result in a
+ # separate variable (PORTAGE_USE), since we still want to be able to
+ # see the global USE settings for things like emerge --info.
+
+ self.configdict["env"]["PORTAGE_USE"] = " ".join(
+ sorted(x for x in use if x[-2:] != "_*")
+ )
+
+ # Clear the eapi cache here rather than in the constructor, since
+ # setcpv triggers lazy instantiation of things like _use_manager.
+ _eapi_cache.clear()
+
+ def _grab_pkg_env(self, penv, container, protected_keys=None):
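+ """
+ Merge settings from the package.env-referenced files listed in penv
+ (looked up in the env/ subdirectory of the user config directory)
+ into container: incremental variables are appended, while protected
+ and non-user variables are rejected with a warning.
+ """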
+ if protected_keys is None:
+ protected_keys = ()
+ abs_user_config = os.path.join(self["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
+ non_user_variables = self._non_user_variables
+ # Make a copy since we don't want per-package settings
+ # to pollute the global expand_map.
+ expand_map = self._expand_map.copy()
+ incrementals = self.incrementals
+ for envname in penv:
+ penvfile = os.path.join(abs_user_config, "env", envname)
+ penvconfig = getconfig(
+ penvfile,
+ tolerant=self._tolerant,
+ allow_sourcing=True,
+ expand=expand_map,
+ )
+ if penvconfig is None:
+ writemsg(
+ "!!! %s references non-existent file: %s\n"
+ % (os.path.join(abs_user_config, "package.env"), penvfile),
+ noiselevel=-1,
+ )
+ else:
+ for k, v in penvconfig.items():
+ if k in protected_keys or k in non_user_variables:
+ writemsg(
+ "!!! Illegal variable "
+ + "'%s' assigned in '%s'\n" % (k, penvfile),
+ noiselevel=-1,
+ )
+ elif k in incrementals:
+ if k in container:
+ container[k] = container[k] + " " + v
+ else:
+ container[k] = v
+ else:
+ container[k] = v
+
+ def _iuse_effective_match(self, flag):
+ return flag in self._iuse_effective
+
+ def _calc_iuse_effective(self):
+ """
+ Beginning with EAPI 5, IUSE_EFFECTIVE is defined by PMS.
+ """
+ iuse_effective = []
+ iuse_effective.extend(self.get("IUSE_IMPLICIT", "").split())
+
+ # USE_EXPAND_IMPLICIT should contain things like ARCH, ELIBC,
+ # KERNEL, and USERLAND.
+ use_expand_implicit = frozenset(self.get("USE_EXPAND_IMPLICIT", "").split())
+
+ # USE_EXPAND_UNPREFIXED should contain at least ARCH, and
+ # USE_EXPAND_VALUES_ARCH should contain all valid ARCH flags.
+ for v in self.get("USE_EXPAND_UNPREFIXED", "").split():
+ if v not in use_expand_implicit:
+ continue
+ iuse_effective.extend(self.get("USE_EXPAND_VALUES_" + v, "").split())
+
+ use_expand = frozenset(self.get("USE_EXPAND", "").split())
+ for v in use_expand_implicit:
+ if v not in use_expand:
+ continue
+ lower_v = v.lower()
+ for x in self.get("USE_EXPAND_VALUES_" + v, "").split():
+ iuse_effective.append(lower_v + "_" + x)
+
+ return frozenset(iuse_effective)
+
+ def _get_implicit_iuse(self):
+ """
+ Prior to EAPI 5, these flags are considered to
+ be implicit members of IUSE:
+ * Flags derived from ARCH
+ * Flags derived from USE_EXPAND_HIDDEN variables
+ * Masked flags, such as those from {,package}use.mask
+ * Forced flags, such as those from {,package}use.force
+ * build and bootstrap flags used by bootstrap.sh
+ """
+ iuse_implicit = set()
+ # Flags derived from ARCH.
+ arch = self.configdict["defaults"].get("ARCH")
+ if arch:
+ iuse_implicit.add(arch)
+ iuse_implicit.update(self.get("PORTAGE_ARCHLIST", "").split())
+
+ # Flags derived from USE_EXPAND_HIDDEN variables
+ # such as ELIBC, KERNEL, and USERLAND.
+ use_expand_hidden = self.get("USE_EXPAND_HIDDEN", "").split()
+ for x in use_expand_hidden:
+ iuse_implicit.add(x.lower() + "_.*")
+
+ # Flags that have been masked or forced.
+ iuse_implicit.update(self.usemask)
+ iuse_implicit.update(self.useforce)
+
+ # build and bootstrap flags used by bootstrap.sh
+ iuse_implicit.add("build")
+ iuse_implicit.add("bootstrap")
+
+ return iuse_implicit
+
+ def _getUseMask(self, pkg, stable=None):
+ return self._use_manager.getUseMask(pkg, stable=stable)
+
+ def _getUseForce(self, pkg, stable=None):
+ return self._use_manager.getUseForce(pkg, stable=stable)
+
+ def _getMaskAtom(self, cpv, metadata):
+ """
+ Take a package and return a matching package.mask atom, or None if no
+ such atom exists or it has been cancelled by package.unmask.
+
+ @param cpv: The package name
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: String
+ @return: A matching atom string or None if one is not found.
+ """
+ return self._mask_manager.getMaskAtom(
+ cpv, metadata["SLOT"], metadata.get("repository")
+ )
+
+ def _getRawMaskAtom(self, cpv, metadata):
+ """
+ Take a package and return a matching package.mask atom, or None if no
+ such atom exists or it has been cancelled by package.unmask.
+
+ @param cpv: The package name
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: String
+ @return: A matching atom string or None if one is not found.
+ """
+ return self._mask_manager.getRawMaskAtom(
+ cpv, metadata["SLOT"], metadata.get("repository")
+ )
+
+ def _getProfileMaskAtom(self, cpv, metadata):
+ """
+ Take a package and return a matching profile atom, or None if no
+ such atom exists. Note that a profile atom may or may not have a "*"
+ prefix.
+
+ @param cpv: The package name
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: String
+ @return: A matching profile atom string or None if one is not found.
+ """
+
+ warnings.warn(
+ "The config._getProfileMaskAtom() method is deprecated.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ cp = cpv_getkey(cpv)
+ profile_atoms = self.prevmaskdict.get(cp)
+ if profile_atoms:
+ pkg = "".join((cpv, _slot_separator, metadata["SLOT"]))
+ repo = metadata.get("repository")
+ if repo and repo != Package.UNKNOWN_REPO:
+ pkg = "".join((pkg, _repo_separator, repo))
+ pkg_list = [pkg]
+ for x in profile_atoms:
+ if match_from_list(x, pkg_list):
+ continue
+ return x
+ return None
+
+ def _isStable(self, pkg):
+ return self._keywords_manager.isStable(
+ pkg,
+ self.get("ACCEPT_KEYWORDS", ""),
+ self.configdict["backupenv"].get("ACCEPT_KEYWORDS", ""),
+ )
+
+ def _getKeywords(self, cpv, metadata):
+ return self._keywords_manager.getKeywords(
+ cpv,
+ metadata["SLOT"],
+ metadata.get("KEYWORDS", ""),
+ metadata.get("repository"),
+ )
+
+ def _getMissingKeywords(self, cpv, metadata):
+ """
+ Take a package and return a list of any KEYWORDS that the user may
+ need to accept for the given package. If the KEYWORDS are empty
+ and the ** keyword has not been accepted, the returned list will
+ contain ** alone (in order to distinguish from the case of "none
+ missing").
+
+ @param cpv: The package name (for package.keywords support)
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: List
+ @return: A list of KEYWORDS that have not been accepted.
+ """
+
+ # Hack: Need to check the env directly here, since otherwise stacking
+ # doesn't work properly because negative values are lost in the config
+ # object (bug #139600).
+ backuped_accept_keywords = self.configdict["backupenv"].get(
+ "ACCEPT_KEYWORDS", ""
+ )
+ global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
+
+ return self._keywords_manager.getMissingKeywords(
+ cpv,
+ metadata["SLOT"],
+ metadata.get("KEYWORDS", ""),
+ metadata.get("repository"),
+ global_accept_keywords,
+ backuped_accept_keywords,
+ )
+
+ def _getRawMissingKeywords(self, cpv, metadata):
+ """
+ Take a package and return a list of any KEYWORDS that the user may
+ need to accept for the given package. If the KEYWORDS are empty,
+ the returned list will contain ** alone (in order to distinguish
+ from the case of "none missing"). This DOES NOT apply any user config
+ package.accept_keywords acceptance.
+
+ @param cpv: The package name (for package.keywords support)
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: List
+ @return: lists of KEYWORDS that have not been accepted
+ and the keywords it looked for.
+ """
+ return self._keywords_manager.getRawMissingKeywords(
+ cpv,
+ metadata["SLOT"],
+ metadata.get("KEYWORDS", ""),
+ metadata.get("repository"),
+ self.get("ACCEPT_KEYWORDS", ""),
+ )
+
+ def _getPKeywords(self, cpv, metadata):
+ global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
+
+ return self._keywords_manager.getPKeywords(
+ cpv, metadata["SLOT"], metadata.get("repository"), global_accept_keywords
+ )
+
+ def _getMissingLicenses(self, cpv, metadata):
+ """
+ Take a LICENSE string and return a list of any licenses that the user
+ may need to accept for the given package. The returned list will not
+ contain any licenses that have already been accepted. This method
+ can throw an InvalidDependString exception.
+
+ @param cpv: The package name (for package.license support)
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: List
+ @return: A list of licenses that have not been accepted.
+ """
+ return self._license_manager.getMissingLicenses(
+ cpv,
+ metadata["USE"],
+ metadata["LICENSE"],
+ metadata["SLOT"],
+ metadata.get("repository"),
+ )
+
+ def _getMissingProperties(self, cpv, metadata):
+ """
+ Take a PROPERTIES string and return a list of any properties the user
+ may need to accept for the given package. The returned list will not
+ contain any properties that have already been accepted. This method
+ can throw an InvalidDependString exception.
+
+ @param cpv: The package name (for package.properties support)
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: List
+ @return: A list of properties that have not been accepted.
+ """
+ accept_properties = self._accept_properties
+ try:
+ cpv.slot
+ except AttributeError:
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self)
+ cp = cpv_getkey(cpv)
+ cpdict = self._ppropertiesdict.get(cp)
+ if cpdict:
+ pproperties_list = ordered_by_atom_specificity(cpdict, cpv)
+ if pproperties_list:
+ accept_properties = list(self._accept_properties)
+ for x in pproperties_list:
+ accept_properties.extend(x)
+
+ properties_str = metadata.get("PROPERTIES", "")
+ properties = set(use_reduce(properties_str, matchall=1, flat=True))
+
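+ # Fold the acceptance tokens: "*" accepts every listed property,
+ # "-*" resets the set, "-x" revokes a single property, and a bare
+ # token accepts it.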
+ acceptable_properties = set()
+ for x in accept_properties:
+ if x == "*":
+ acceptable_properties.update(properties)
+ elif x == "-*":
+ acceptable_properties.clear()
+ elif x[:1] == "-":
+ acceptable_properties.discard(x[1:])
+ else:
+ acceptable_properties.add(x)
+
+ if "?" in properties_str:
+ use = metadata["USE"].split()
+ else:
+ use = []
+
+ return [
+ x
+ for x in use_reduce(properties_str, uselist=use, flat=True)
+ if x not in acceptable_properties
+ ]
+
+ def _getMissingRestrict(self, cpv, metadata):
+ """
+ Take a RESTRICT string and return a list of any tokens the user
+ may need to accept for the given package. The returned list will not
+ contain any tokens that have already been accepted. This method
+ can throw an InvalidDependString exception.
+
+ @param cpv: The package name (for package.accept_restrict support)
+ @type cpv: String
+ @param metadata: A dictionary of raw package metadata
+ @type metadata: dict
+ @rtype: List
+ @return: A list of tokens that have not been accepted.
+ """
+ accept_restrict = self._accept_restrict
+ try:
+ cpv.slot
+ except AttributeError:
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self)
+ cp = cpv_getkey(cpv)
+ cpdict = self._paccept_restrict.get(cp)
+ if cpdict:
+ paccept_restrict_list = ordered_by_atom_specificity(cpdict, cpv)
+ if paccept_restrict_list:
+ accept_restrict = list(self._accept_restrict)
+ for x in paccept_restrict_list:
+ accept_restrict.extend(x)
+
+ restrict_str = metadata.get("RESTRICT", "")
+ all_restricts = set(use_reduce(restrict_str, matchall=1, flat=True))
+
+ acceptable_restricts = set()
+ for x in accept_restrict:
+ if x == "*":
+ acceptable_restricts.update(all_restricts)
+ elif x == "-*":
+ acceptable_restricts.clear()
+ elif x[:1] == "-":
+ acceptable_restricts.discard(x[1:])
+ else:
+ acceptable_restricts.add(x)
+
+ if "?" in restrict_str:
+ use = metadata["USE"].split()
+ else:
+ use = []
+
+ return [
+ x
+ for x in use_reduce(restrict_str, uselist=use, flat=True)
+ if x not in acceptable_restricts
+ ]
+
+ def _accept_chost(self, cpv, metadata):
+ """
+ @return True if pkg CHOST is accepted, False otherwise.
+ """
+ if self._accept_chost_re is None:
+ accept_chost = self.get("ACCEPT_CHOSTS", "").split()
+ if not accept_chost:
+ chost = self.get("CHOST")
+ if chost:
+ accept_chost.append(chost)
+ if not accept_chost:
+ self._accept_chost_re = re.compile(".*")
+ elif len(accept_chost) == 1:
+ try:
+ self._accept_chost_re = re.compile(r"^%s$" % accept_chost[0])
+ except re.error as e:
+ writemsg(
+ _("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n")
+ % (accept_chost[0], e),
+ noiselevel=-1,
+ )
+ self._accept_chost_re = re.compile("^$")
+ else:
+ try:
+ self._accept_chost_re = re.compile(
+ r"^(%s)$" % "|".join(accept_chost)
+ )
+ except re.error as e:
+ writemsg(
+ _("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n")
+ % (" ".join(accept_chost), e),
+ noiselevel=-1,
+ )
+ self._accept_chost_re = re.compile("^$")
+
+ pkg_chost = metadata.get("CHOST", "")
+ return not pkg_chost or self._accept_chost_re.match(pkg_chost) is not None
+
+ def setinst(self, mycpv, mydbapi):
+ """This used to update the preferences for old-style virtuals.
+ It is no-op now."""
+ pass
+
+ def reload(self):
+ """Reload things like /etc/profile.env that can change during runtime."""
+ env_d_filename = os.path.join(self["EROOT"], "etc", "profile.env")
+ self.configdict["env.d"].clear()
+ env_d = getconfig(env_d_filename, tolerant=self._tolerant, expand=False)
+ if env_d:
+ # env_d will be None if profile.env doesn't exist.
+ for k in self._env_d_blacklist:
+ env_d.pop(k, None)
+ self.configdict["env.d"].update(env_d)
+
+ def regenerate(self, useonly=0, use_cache=None):
+ """
+ Regenerate settings
+ This involves regenerating valid USE flags, re-expanding USE_EXPAND flags
+ re-stacking USE flags (-flag and -*), as well as any other INCREMENTAL
+ variables. This also updates the env.d configdict; useful in case an ebuild
+ changes the environment.
+
+ If FEATURES has already been stacked, it is not stacked twice.
+
+ @param useonly: Only regenerate USE flags (not any other incrementals)
+ @type useonly: Boolean
+ @rtype: None
+ """
+
+ if use_cache is not None:
+ warnings.warn(
+ "The use_cache parameter for config.regenerate() is deprecated and without effect.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ self.modifying()
+
+ if useonly:
+ myincrementals = ["USE"]
+ else:
+ myincrementals = self.incrementals
+ myincrementals = set(myincrementals)
+
+ # Process USE last because it depends on USE_EXPAND which is also
+ # an incremental!
+ myincrementals.discard("USE")
+
+ mydbs = self.configlist[:-1]
+ mydbs.append(self.backupenv)
+
+ # ACCEPT_LICENSE is a lazily evaluated incremental, so that * can be
+ # used to match all licenses without ever having to explicitly expand
+ # it to all licenses.
+ if self.local_config:
+ mysplit = []
+ for curdb in mydbs:
+ mysplit.extend(curdb.get("ACCEPT_LICENSE", "").split())
+ mysplit = prune_incremental(mysplit)
+ accept_license_str = " ".join(mysplit) or "* -@EULA"
+ self.configlist[-1]["ACCEPT_LICENSE"] = accept_license_str
+ self._license_manager.set_accept_license_str(accept_license_str)
+ else:
+ # repoman will accept any license
+ self._license_manager.set_accept_license_str("*")
+
+ # ACCEPT_PROPERTIES works like ACCEPT_LICENSE, without groups
+ if self.local_config:
+ mysplit = []
+ for curdb in mydbs:
+ mysplit.extend(curdb.get("ACCEPT_PROPERTIES", "").split())
+ mysplit = prune_incremental(mysplit)
+ self.configlist[-1]["ACCEPT_PROPERTIES"] = " ".join(mysplit)
+ if tuple(mysplit) != self._accept_properties:
+ self._accept_properties = tuple(mysplit)
+ else:
+ # repoman will accept any property
+ self._accept_properties = ("*",)
+
+ if self.local_config:
+ mysplit = []
+ for curdb in mydbs:
+ mysplit.extend(curdb.get("ACCEPT_RESTRICT", "").split())
+ mysplit = prune_incremental(mysplit)
+ self.configlist[-1]["ACCEPT_RESTRICT"] = " ".join(mysplit)
+ if tuple(mysplit) != self._accept_restrict:
+ self._accept_restrict = tuple(mysplit)
+ else:
+ # repoman will accept any restriction
+ self._accept_restrict = ("*",)
+
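+ # Collect the split values of every incremental variable from each
+ # configuration layer, in stacking order, so they can be folded
+ # together below.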
+ increment_lists = {}
+ for k in myincrementals:
+ incremental_list = []
+ increment_lists[k] = incremental_list
+ for curdb in mydbs:
+ v = curdb.get(k)
+ if v is not None:
+ incremental_list.append(v.split())
+
+ if "FEATURES" in increment_lists:
+ increment_lists["FEATURES"].append(self._features_overrides)
+
+ myflags = set()
+ for mykey, incremental_list in increment_lists.items():
+
+ myflags.clear()
+ for mysplit in incremental_list:
+
+ for x in mysplit:
+ if x == "-*":
+ # "-*" is a special "minus" var that means "unset all settings".
+ # so USE="-* gnome" will have *just* gnome enabled.
+ myflags.clear()
+ continue
+
+ if x[0] == "+":
+ # Not legal. People assume too much. Complain.
+ writemsg(
+ colorize(
+ "BAD",
+ _("%s values should not start with a '+': %s")
+ % (mykey, x),
+ )
+ + "\n",
+ noiselevel=-1,
+ )
+ x = x[1:]
+ if not x:
+ continue
+
+ if x[0] == "-":
+ myflags.discard(x[1:])
+ continue
+
+ # We got here, so add it now.
+ myflags.add(x)
+
+ # store setting in last element of configlist, the original environment:
+ if myflags or mykey in self:
+ self.configlist[-1][mykey] = " ".join(sorted(myflags))
+
+ # Do the USE calculation last because it depends on USE_EXPAND.
+ use_expand = self.get("USE_EXPAND", "").split()
+ use_expand_dict = self._use_expand_dict
+ use_expand_dict.clear()
+ for k in use_expand:
+ v = self.get(k)
+ if v is not None:
+ use_expand_dict[k] = v
+
+ use_expand_unprefixed = self.get("USE_EXPAND_UNPREFIXED", "").split()
+
+ # In order to best accommodate the long-standing practice of
+ # setting default USE_EXPAND variables in the profile's
+ # make.defaults, we translate these variables into their
+ # equivalent USE flags so that useful incremental behavior
+ # is enabled (for sub-profiles).
+ configdict_defaults = self.configdict["defaults"]
+ if self._make_defaults is not None:
+ for i, cfg in enumerate(self._make_defaults):
+ if not cfg:
+ self.make_defaults_use.append("")
+ continue
+ use = cfg.get("USE", "")
+ expand_use = []
+
+ for k in use_expand_unprefixed:
+ v = cfg.get(k)
+ if v is not None:
+ expand_use.extend(v.split())
+
+ for k in use_expand_dict:
+ v = cfg.get(k)
+ if v is None:
+ continue
+ prefix = k.lower() + "_"
+ for x in v.split():
+ if x[:1] == "-":
+ expand_use.append("-" + prefix + x[1:])
+ else:
+ expand_use.append(prefix + x)
+
+ if expand_use:
+ expand_use.append(use)
+ use = " ".join(expand_use)
+ self.make_defaults_use.append(use)
+ self.make_defaults_use = tuple(self.make_defaults_use)
+ # Preserve both positive and negative flags here, since
+ # negative flags may later interact with other flags pulled
+ # in via USE_ORDER.
+ configdict_defaults["USE"] = " ".join(filter(None, self.make_defaults_use))
+ # Set to None so this code only runs once.
+ self._make_defaults = None
+
+ if not self.uvlist:
+ for x in self["USE_ORDER"].split(":"):
+ if x in self.configdict:
+ self.uvlist.append(self.configdict[x])
+ self.uvlist.reverse()
+
+ # For optimal performance, use slice
+ # comparison instead of startswith().
+ iuse = self.configdict["pkg"].get("IUSE")
+ if iuse is not None:
+ iuse = [x.lstrip("+-") for x in iuse.split()]
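+ # Stack USE across the USE_ORDER sources in self.uvlist: expand the
+ # unprefixed USE_EXPAND values, honor "-*" resets and wildcard
+ # removals/additions, and finally fold the prefixed USE_EXPAND
+ # variables into their corresponding USE flags.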
+ myflags = set()
+ for curdb in self.uvlist:
+
+ for k in use_expand_unprefixed:
+ v = curdb.get(k)
+ if v is None:
+ continue
+ for x in v.split():
+ if x[:1] == "-":
+ myflags.discard(x[1:])
+ else:
+ myflags.add(x)
+
+ cur_use_expand = [x for x in use_expand if x in curdb]
+ mysplit = curdb.get("USE", "").split()
+ if not mysplit and not cur_use_expand:
+ continue
+ for x in mysplit:
+ if x == "-*":
+ myflags.clear()
+ continue
+
+ if x[0] == "+":
+ writemsg(
+ colorize(
+ "BAD",
+ _("USE flags should not start " "with a '+': %s\n") % x,
+ ),
+ noiselevel=-1,
+ )
+ x = x[1:]
+ if not x:
+ continue
+
+ if x[0] == "-":
+ if x[-2:] == "_*":
+ prefix = x[1:-1]
+ prefix_len = len(prefix)
+ myflags.difference_update(
+ [y for y in myflags if y[:prefix_len] == prefix]
+ )
+ myflags.discard(x[1:])
+ continue
+
+ if iuse is not None and x[-2:] == "_*":
+ # Expand wildcards here, so that cases like
+ # USE="linguas_* -linguas_en_US" work correctly.
+ prefix = x[:-1]
+ prefix_len = len(prefix)
+ has_iuse = False
+ for y in iuse:
+ if y[:prefix_len] == prefix:
+ has_iuse = True
+ myflags.add(y)
+ if not has_iuse:
+ # There are no matching IUSE, so allow the
+ # wildcard to pass through. This allows
+ # linguas_* to trigger unset LINGUAS in
+ # cases when no linguas_ flags are in IUSE.
+ myflags.add(x)
+ else:
+ myflags.add(x)
+
+ if curdb is configdict_defaults:
+ # USE_EXPAND flags from make.defaults are handled
+ # earlier, in order to provide useful incremental
+ # behavior (for sub-profiles).
+ continue
+
+ for var in cur_use_expand:
+ var_lower = var.lower()
+ is_not_incremental = var not in myincrementals
+ if is_not_incremental:
+ prefix = var_lower + "_"
+ prefix_len = len(prefix)
+ for x in list(myflags):
+ if x[:prefix_len] == prefix:
+ myflags.remove(x)
+ for x in curdb[var].split():
+ if x[0] == "+":
+ if is_not_incremental:
+ writemsg(
+ colorize(
+ "BAD",
+ _(
+ "Invalid '+' "
+ "operator in non-incremental variable "
+ "'%s': '%s'\n"
+ )
+ % (var, x),
+ ),
+ noiselevel=-1,
+ )
+ continue
+ else:
+ writemsg(
+ colorize(
+ "BAD",
+ _(
+ "Invalid '+' "
+ "operator in incremental variable "
+ "'%s': '%s'\n"
+ )
+ % (var, x),
+ ),
+ noiselevel=-1,
+ )
+ x = x[1:]
+ if x[0] == "-":
+ if is_not_incremental:
+ writemsg(
+ colorize(
+ "BAD",
+ _(
+ "Invalid '-' "
+ "operator in non-incremental variable "
+ "'%s': '%s'\n"
+ )
+ % (var, x),
+ ),
+ noiselevel=-1,
+ )
+ continue
+ myflags.discard(var_lower + "_" + x[1:])
+ continue
+ myflags.add(var_lower + "_" + x)
+
+ if hasattr(self, "features"):
+ self.features._features.clear()
+ else:
+ self.features = features_set(self)
+ self.features._features.update(self.get("FEATURES", "").split())
+ self.features._sync_env_var()
+ self.features._validate()
+
+ myflags.update(self.useforce)
+ arch = self.configdict["defaults"].get("ARCH")
+ if arch:
+ myflags.add(arch)
+
+ myflags.difference_update(self.usemask)
+ self.configlist[-1]["USE"] = " ".join(sorted(myflags))
+
+ if self.mycpv is None:
+ # Generate global USE_EXPAND variables settings that are
+ # consistent with USE, for display by emerge --info. For
+ # package instances, these are instead generated via
+ # setcpv().
+ for k in use_expand:
+ prefix = k.lower() + "_"
+ prefix_len = len(prefix)
+ expand_flags = set(
+ x[prefix_len:] for x in myflags if x[:prefix_len] == prefix
+ )
+ var_split = use_expand_dict.get(k, "").split()
+ var_split = [x for x in var_split if x in expand_flags]
+ var_split.extend(sorted(expand_flags.difference(var_split)))
+ if var_split:
+ self.configlist[-1][k] = " ".join(var_split)
+ elif k in self:
+ self.configlist[-1][k] = ""
+
+ for k in use_expand_unprefixed:
+ var_split = self.get(k, "").split()
+ var_split = [x for x in var_split if x in myflags]
+ if var_split:
+ self.configlist[-1][k] = " ".join(var_split)
+ elif k in self:
+ self.configlist[-1][k] = ""
+
+ @property
+ def virts_p(self):
+ warnings.warn(
+ "portage config.virts_p attribute "
+ + "is deprecated, use config.get_virts_p()",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.get_virts_p()
+
+ @property
+ def virtuals(self):
+ warnings.warn(
+ "portage config.virtuals attribute "
+ + "is deprecated, use config.getvirtuals()",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return self.getvirtuals()
+
+ def get_virts_p(self):
+ # Ensure that we don't trigger the _treeVirtuals
+ # assertion in VirtualsManager._compile_virtuals().
+ self.getvirtuals()
+ return self._virtuals_manager.get_virts_p()
+
+ def getvirtuals(self):
+ if self._virtuals_manager._treeVirtuals is None:
+ # Hack around the fact that VirtualsManager needs a vartree
+ # and vartree needs a config instance.
+ # This code should be part of VirtualsManager.getvirtuals().
+ if self.local_config:
+ temp_vartree = vartree(settings=self)
+ self._virtuals_manager._populate_treeVirtuals(temp_vartree)
+ else:
+ self._virtuals_manager._treeVirtuals = {}
+
+ return self._virtuals_manager.getvirtuals()
+
+ def _populate_treeVirtuals_if_needed(self, vartree):
+ """Reduce the provides into a list by CP."""
+ if self._virtuals_manager._treeVirtuals is None:
+ if self.local_config:
+ self._virtuals_manager._populate_treeVirtuals(vartree)
+ else:
+ self._virtuals_manager._treeVirtuals = {}
+
+ def __delitem__(self, mykey):
+ self.pop(mykey)
+
+ def __getitem__(self, key):
+ try:
+ return self._getitem(key)
+ except KeyError:
+ if portage._internal_caller:
+ stack = (
+ traceback.format_stack()[:-1]
+ + traceback.format_exception(*sys.exc_info())[1:]
+ )
+ try:
+ # Ensure that output is written to terminal.
+ with open("/dev/tty", "w") as f:
+ f.write("=" * 96 + "\n")
+ f.write(
+ "=" * 8
+ + " Traceback for invalid call to portage.package.ebuild.config.config.__getitem__ "
+ + "=" * 8
+ + "\n"
+ )
+ f.writelines(stack)
+ f.write("=" * 96 + "\n")
+ except Exception:
+ pass
+ raise
+ else:
+ warnings.warn(
+ _("Passing nonexistent key %r to %s is deprecated. Use %s instead.")
+ % (
+ key,
+ "portage.package.ebuild.config.config.__getitem__",
+ "portage.package.ebuild.config.config.get",
+ ),
+ DeprecationWarning,
+ stacklevel=2,
+ )
+ return ""
+
+ def _getitem(self, mykey):
+
+ if mykey in self._constant_keys:
+ # These two point to temporary values when
+ # portage plans to update itself.
+ if mykey == "PORTAGE_BIN_PATH":
+ return portage._bin_path
+ if mykey == "PORTAGE_PYM_PATH":
+ return portage._pym_path
+
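+ # PORTAGE_PYTHONPATH is synthesized from the calling environment's
+ # PYTHONPATH, with portage's own pym path prepended unless it is
+ # already the first component.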
+ if mykey == "PORTAGE_PYTHONPATH":
+ value = [
+ x for x in self.backupenv.get("PYTHONPATH", "").split(":") if x
+ ]
+ need_pym_path = True
+ if value:
+ try:
+ need_pym_path = not os.path.samefile(
+ value[0], portage._pym_path
+ )
+ except OSError:
+ pass
+ if need_pym_path:
+ value.insert(0, portage._pym_path)
+ return ":".join(value)
+
+ if mykey == "PORTAGE_GID":
+ return "%s" % portage_gid
+
+ for d in self.lookuplist:
+ try:
+ return d[mykey]
+ except KeyError:
+ pass
+
+ deprecated_key = self._deprecated_keys.get(mykey)
+ if deprecated_key is not None:
+ value = self._getitem(deprecated_key)
+ # warnings.warn(_("Key %s has been renamed to %s. Please ",
+ # "update your configuration") % (deprecated_key, mykey),
+ # UserWarning)
+ return value
+
+ raise KeyError(mykey)
+
+ def get(self, k, x=None):
+ try:
+ return self._getitem(k)
+ except KeyError:
+ return x
+
+ def pop(self, key, *args):
+ self.modifying()
+ if len(args) > 1:
+ raise TypeError(
+ "pop expected at most 2 arguments, got " + repr(1 + len(args))
+ )
+ v = self
+ for d in reversed(self.lookuplist):
+ v = d.pop(key, v)
+ if v is self:
+ if args:
+ return args[0]
+ raise KeyError(key)
+ return v
+
+ def __contains__(self, mykey):
+ """Called to implement membership test operators (in and not in)."""
+ try:
+ self._getitem(mykey)
+ except KeyError:
+ return False
+ else:
+ return True
+
+ def setdefault(self, k, x=None):
+ v = self.get(k)
+ if v is not None:
+ return v
+ self[k] = x
+ return x
+
+ def __iter__(self):
+ keys = set()
+ keys.update(self._constant_keys)
+ for d in self.lookuplist:
+ keys.update(d)
+ return iter(keys)
+
+ def iterkeys(self):
+ return iter(self)
+
+ def iteritems(self):
+ for k in self:
+ yield (k, self._getitem(k))
+
+ def __setitem__(self, mykey, myvalue):
+ "set a value; will be thrown away at reset() time"
+ if not isinstance(myvalue, str):
+ raise ValueError(
+ "Invalid type being used as a value: '%s': '%s'"
+ % (str(mykey), str(myvalue))
+ )
+
+ # Avoid potential UnicodeDecodeError exceptions later.
+ mykey = _unicode_decode(mykey)
+ myvalue = _unicode_decode(myvalue)
+
+ self.modifying()
+ self.modifiedkeys.append(mykey)
+ self.configdict["env"][mykey] = myvalue
+
+ def environ(self):
+ "return our locally-maintained environment"
+ mydict = {}
+ environ_filter = self._environ_filter
+
+ eapi = self.get("EAPI")
+ eapi_attrs = _get_eapi_attrs(eapi)
+ phase = self.get("EBUILD_PHASE")
+ emerge_from = self.get("EMERGE_FROM")
+ filter_calling_env = False
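+ # Filter the calling environment once a package is set and a saved
+ # environment file exists in ${T}, except for the setup phase of
+ # "ebuild" invocations and the clean/cleanrm/depend/fetch phases.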
+ if (
+ self.mycpv is not None
+ and not (emerge_from == "ebuild" and phase == "setup")
+ and phase not in ("clean", "cleanrm", "depend", "fetch")
+ ):
+ temp_dir = self.get("T")
+ if temp_dir is not None and os.path.exists(
+ os.path.join(temp_dir, "environment")
+ ):
+ filter_calling_env = True
+
+ environ_whitelist = self._environ_whitelist
+ for x, myvalue in self.iteritems():
+ if x in environ_filter:
+ continue
+ if not isinstance(myvalue, str):
+ writemsg(
+ _("!!! Non-string value in config: %s=%s\n") % (x, myvalue),
+ noiselevel=-1,
+ )
+ continue
+ if (
+ filter_calling_env
+ and x not in environ_whitelist
+ and not self._environ_whitelist_re.match(x)
+ ):
+ # Do not allow anything to leak into the ebuild
+ # environment unless it is explicitly whitelisted.
+ # This ensures that variables unset by the ebuild
+ # remain unset (bug #189417).
+ continue
+ mydict[x] = myvalue
+ if "HOME" not in mydict and "BUILD_PREFIX" in mydict:
+ writemsg("*** HOME not set. Setting to " + mydict["BUILD_PREFIX"] + "\n")
+ mydict["HOME"] = mydict["BUILD_PREFIX"][:]
+
+ if filter_calling_env:
+ if phase:
+ whitelist = []
+ if "rpm" == phase:
+ whitelist.append("RPMDIR")
+ for k in whitelist:
+ v = self.get(k)
+ if v is not None:
+ mydict[k] = v
+
+ # At some point we may want to stop exporting FEATURES to the ebuild
+ # environment, in order to prevent ebuilds from abusing it. In
+ # preparation for that, export it as PORTAGE_FEATURES so that bashrc
+ # users will be able to migrate any FEATURES conditional code to
+ # use this alternative variable.
+ mydict["PORTAGE_FEATURES"] = self["FEATURES"]
+
+ # Filtered by IUSE and implicit IUSE.
+ mydict["USE"] = self.get("PORTAGE_USE", "")
+
+ # Don't export AA to the ebuild environment in EAPIs that forbid it
+ if not eapi_exports_AA(eapi):
+ mydict.pop("AA", None)
+
+ if not eapi_exports_merge_type(eapi):
+ mydict.pop("MERGE_TYPE", None)
+
+ src_like_phase = phase == "setup" or _phase_func_map.get(phase, "").startswith(
+ "src_"
+ )
+
+ if not (src_like_phase and eapi_attrs.sysroot):
+ mydict.pop("ESYSROOT", None)
+
+ if not (src_like_phase and eapi_attrs.broot):
+ mydict.pop("BROOT", None)
+
+ # Prefix variables are supported beginning with EAPI 3, or when
+ # force-prefix is in FEATURES, since older EAPIs would otherwise be
+ # useless with prefix configurations. This brings compatibility with
+ # the prefix branch of portage, which also supports EPREFIX for all
+ # EAPIs (for obvious reasons).
+ if phase == "depend" or (
+ "force-prefix" not in self.features
+ and eapi is not None
+ and not eapi_supports_prefix(eapi)
+ ):
+ mydict.pop("ED", None)
+ mydict.pop("EPREFIX", None)
+ mydict.pop("EROOT", None)
+ mydict.pop("ESYSROOT", None)
+
+ if (
+ phase
+ not in (
+ "pretend",
+ "setup",
+ "preinst",
+ "postinst",
+ )
+ or not eapi_exports_replace_vars(eapi)
+ ):
+ mydict.pop("REPLACING_VERSIONS", None)
+
+ if phase not in ("prerm", "postrm") or not eapi_exports_replace_vars(eapi):
+ mydict.pop("REPLACED_BY_VERSION", None)
+
+ if phase is not None and eapi_attrs.exports_EBUILD_PHASE_FUNC:
+ phase_func = _phase_func_map.get(phase)
+ if phase_func is not None:
+ mydict["EBUILD_PHASE_FUNC"] = phase_func
+
+ if eapi_attrs.posixish_locale:
+ split_LC_ALL(mydict)
+ mydict["LC_COLLATE"] = "C"
+ # check_locale() returns None when the check cannot be executed.
+ if check_locale(silent=True, env=mydict) is False:
+ # try another locale
+ for l in ("C.UTF-8", "en_US.UTF-8", "en_GB.UTF-8", "C"):
+ mydict["LC_CTYPE"] = l
+ if check_locale(silent=True, env=mydict):
+ # TODO: output the following only once
+ # writemsg(_("!!! LC_CTYPE unsupported, using %s instead\n")
+ # % mydict["LC_CTYPE"])
+ break
+ else:
+ raise AssertionError("C locale did not pass the test!")
+
+ if not eapi_attrs.exports_PORTDIR:
+ mydict.pop("PORTDIR", None)
+ if not eapi_attrs.exports_ECLASSDIR:
+ mydict.pop("ECLASSDIR", None)
+
+ if not eapi_attrs.path_variables_end_with_trailing_slash:
+ for v in ("D", "ED", "ROOT", "EROOT", "ESYSROOT", "BROOT"):
+ if v in mydict:
+ mydict[v] = mydict[v].rstrip(os.path.sep)
+
+ # Since SYSROOT=/ interacts badly with autotools.eclass (bug 654600),
+ # and no EAPI expects SYSROOT to have a trailing slash, always strip
+ # the trailing slash from SYSROOT.
+ if "SYSROOT" in mydict:
+ mydict["SYSROOT"] = mydict["SYSROOT"].rstrip(os.sep)
+
+ try:
+ builddir = mydict["PORTAGE_BUILDDIR"]
+ distdir = mydict["DISTDIR"]
+ except KeyError:
+ pass
+ else:
+ mydict["PORTAGE_ACTUAL_DISTDIR"] = distdir
+ mydict["DISTDIR"] = os.path.join(builddir, "distdir")
+
+ return mydict
+
+ def thirdpartymirrors(self):
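+ """
+ Lazily build and cache the mirror dictionary by stacking the
+ profiles/thirdpartymirrors files of all configured repositories.
+ """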
+ if getattr(self, "_thirdpartymirrors", None) is None:
+ thirdparty_lists = []
+ for repo_name in reversed(self.repositories.prepos_order):
+ thirdparty_lists.append(
+ grabdict(
+ os.path.join(
+ self.repositories[repo_name].location,
+ "profiles",
+ "thirdpartymirrors",
+ )
+ )
+ )
+ self._thirdpartymirrors = stack_dictlist(thirdparty_lists, incremental=True)
+ return self._thirdpartymirrors
+
+ def archlist(self):
+ _archlist = []
+ for myarch in self["PORTAGE_ARCHLIST"].split():
+ _archlist.append(myarch)
+ _archlist.append("~" + myarch)
+ return _archlist
+
+ def selinux_enabled(self):
+ if getattr(self, "_selinux_enabled", None) is None:
+ self._selinux_enabled = 0
+ if "selinux" in self["USE"].split():
+ if selinux:
+ if selinux.is_selinux_enabled() == 1:
+ self._selinux_enabled = 1
+ else:
+ self._selinux_enabled = 0
+ else:
+ writemsg(
+ _(
+ "!!! SELinux module not found. Please verify that it was installed.\n"
+ ),
+ noiselevel=-1,
+ )
+ self._selinux_enabled = 0
+
+ return self._selinux_enabled
+
+ keys = __iter__
+ items = iteritems
++>>>>>>> origin/master
diff --cc lib/portage/package/ebuild/doebuild.py
index 69132e651,ac627f555..af8845f34
--- a/lib/portage/package/ebuild/doebuild.py
+++ b/lib/portage/package/ebuild/doebuild.py
@@@ -22,48 -22,81 +22,86 @@@ from textwrap import wra
import time
import warnings
import zlib
+import platform
import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.package.ebuild.config:check_config_instance',
- 'portage.package.ebuild.digestcheck:digestcheck',
- 'portage.package.ebuild.digestgen:digestgen',
- 'portage.package.ebuild.fetch:_drop_privs_userfetch,_want_userfetch,fetch',
- 'portage.package.ebuild.prepare_build_dirs:_prepare_fake_distdir',
- 'portage.package.ebuild._ipc.QueryCommand:QueryCommand',
- 'portage.dep._slot_operator:evaluate_slot_operator_equal_deps',
- 'portage.package.ebuild._spawn_nofetch:spawn_nofetch',
- 'portage.util.elf.header:ELFHeader',
- 'portage.dep.soname.multilib_category:compute_multilib_category',
- 'portage.util._desktop_entry:validate_desktop_entry',
- 'portage.util._dyn_libs.NeededEntry:NeededEntry',
- 'portage.util._dyn_libs.soname_deps:SonameDepsProcessor',
- 'portage.util._async.SchedulerInterface:SchedulerInterface',
- 'portage.util._eventloop.global_event_loop:global_event_loop',
- 'portage.util.ExtractKernelVersion:ExtractKernelVersion'
+
+ portage.proxy.lazyimport.lazyimport(
+ globals(),
+ "portage.package.ebuild.config:check_config_instance",
+ "portage.package.ebuild.digestcheck:digestcheck",
+ "portage.package.ebuild.digestgen:digestgen",
+ "portage.package.ebuild.fetch:_drop_privs_userfetch,_want_userfetch,fetch",
+ "portage.package.ebuild.prepare_build_dirs:_prepare_fake_distdir",
+ "portage.package.ebuild._ipc.QueryCommand:QueryCommand",
+ "portage.dep._slot_operator:evaluate_slot_operator_equal_deps",
+ "portage.package.ebuild._spawn_nofetch:spawn_nofetch",
+ "portage.util.elf.header:ELFHeader",
+ "portage.dep.soname.multilib_category:compute_multilib_category",
+ "portage.util._desktop_entry:validate_desktop_entry",
+ "portage.util._dyn_libs.NeededEntry:NeededEntry",
+ "portage.util._dyn_libs.soname_deps:SonameDepsProcessor",
+ "portage.util._async.SchedulerInterface:SchedulerInterface",
+ "portage.util._eventloop.global_event_loop:global_event_loop",
+ "portage.util.ExtractKernelVersion:ExtractKernelVersion",
)
- from portage import bsd_chflags, \
- eapi_is_supported, merge, os, selinux, shutil, \
- unmerge, _encodings, _os_merge, \
- _shell_quote, _unicode_decode, _unicode_encode
- from portage.const import EBUILD_SH_ENV_FILE, EBUILD_SH_ENV_DIR, \
- EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, PORTAGE_PYM_PACKAGES, EPREFIX, MACOSSANDBOX_PROFILE
- from portage.data import portage_gid, portage_uid, secpass, \
- uid, userpriv_groups
+ from portage import (
+ bsd_chflags,
+ eapi_is_supported,
+ merge,
+ os,
+ selinux,
+ shutil,
+ unmerge,
+ _encodings,
+ _os_merge,
+ _shell_quote,
+ _unicode_decode,
+ _unicode_encode,
+ )
+ from portage.const import (
+ EBUILD_SH_ENV_FILE,
+ EBUILD_SH_ENV_DIR,
+ EBUILD_SH_BINARY,
+ INVALID_ENV_FILE,
+ MISC_SH_BINARY,
+ PORTAGE_PYM_PACKAGES,
++ # BEGIN PREFIX LOCAL
++ EPREFIX,
++ MACOSSANDBOX_PROFILE,
++ # END PREFIX LOCAL
+ )
+ from portage.data import portage_gid, portage_uid, secpass, uid, userpriv_groups
from portage.dbapi.porttree import _parse_uri_map
- from portage.dep import Atom, check_required_use, \
- human_readable_required_use, paren_enclose, use_reduce
- from portage.eapi import (eapi_exports_KV, eapi_exports_merge_type,
- eapi_exports_replace_vars, eapi_exports_REPOSITORY,
- eapi_has_required_use, eapi_has_src_prepare_and_src_configure,
- eapi_has_pkg_pretend, _get_eapi_attrs)
+ from portage.dep import (
+ Atom,
+ check_required_use,
+ human_readable_required_use,
+ paren_enclose,
+ use_reduce,
+ )
+ from portage.eapi import (
+ eapi_exports_KV,
+ eapi_exports_merge_type,
+ eapi_exports_replace_vars,
+ eapi_exports_REPOSITORY,
+ eapi_has_required_use,
+ eapi_has_src_prepare_and_src_configure,
+ eapi_has_pkg_pretend,
+ _get_eapi_attrs,
+ )
from portage.elog import elog_process, _preload_elog_modules
from portage.elog.messages import eerror, eqawarn
- from portage.exception import (DigestException, FileNotFound,
- IncorrectParameter, InvalidData, InvalidDependString,
- PermissionDenied, UnsupportedAPIException)
+ from portage.exception import (
+ DigestException,
+ FileNotFound,
+ IncorrectParameter,
+ InvalidData,
+ InvalidDependString,
+ PermissionDenied,
+ UnsupportedAPIException,
+ )
from portage.localization import _
from portage.output import colormap
from portage.package.ebuild.prepare_build_dirs import prepare_build_dirs
@@@ -92,1326 -127,1660 +132,1667 @@@ from _emerge.EbuildSpawnProcess import
from _emerge.Package import Package
from _emerge.RootConfig import RootConfig
-
- _unsandboxed_phases = frozenset([
- "clean", "cleanrm", "config",
- "help", "info", "postinst",
- "preinst", "pretend", "postrm",
- "prerm", "setup"
- ])
+ _unsandboxed_phases = frozenset(
+ [
+ "clean",
+ "cleanrm",
+ "config",
+ "help",
+ "info",
+ "postinst",
+ "preinst",
+ "pretend",
+ "postrm",
+ "prerm",
+ "setup",
+ ]
+ )
# phases in which IPC with host is allowed
- _ipc_phases = frozenset([
- "setup", "pretend", "config", "info",
- "preinst", "postinst", "prerm", "postrm",
- ])
+ _ipc_phases = frozenset(
+ [
+ "setup",
+ "pretend",
+ "config",
+ "info",
+ "preinst",
+ "postinst",
+ "prerm",
+ "postrm",
+ ]
+ )
# phases which execute in the global PID namespace
- _global_pid_phases = frozenset([
- 'config', 'depend', 'preinst', 'prerm', 'postinst', 'postrm'])
+ _global_pid_phases = frozenset(
+ ["config", "depend", "preinst", "prerm", "postinst", "postrm"]
+ )
_phase_func_map = {
- "config": "pkg_config",
- "setup": "pkg_setup",
- "nofetch": "pkg_nofetch",
- "unpack": "src_unpack",
- "prepare": "src_prepare",
- "configure": "src_configure",
- "compile": "src_compile",
- "test": "src_test",
- "install": "src_install",
- "preinst": "pkg_preinst",
- "postinst": "pkg_postinst",
- "prerm": "pkg_prerm",
- "postrm": "pkg_postrm",
- "info": "pkg_info",
- "pretend": "pkg_pretend",
+ "config": "pkg_config",
+ "setup": "pkg_setup",
+ "nofetch": "pkg_nofetch",
+ "unpack": "src_unpack",
+ "prepare": "src_prepare",
+ "configure": "src_configure",
+ "compile": "src_compile",
+ "test": "src_test",
+ "install": "src_install",
+ "preinst": "pkg_preinst",
+ "postinst": "pkg_postinst",
+ "prerm": "pkg_prerm",
+ "postrm": "pkg_postrm",
+ "info": "pkg_info",
+ "pretend": "pkg_pretend",
}
- _vdb_use_conditional_keys = Package._dep_keys + \
- ('LICENSE', 'PROPERTIES', 'RESTRICT',)
+ _vdb_use_conditional_keys = Package._dep_keys + (
+ "LICENSE",
+ "PROPERTIES",
+ "RESTRICT",
+ )
+
def _doebuild_spawn(phase, settings, actionmap=None, **kwargs):
- """
- All proper ebuild phases which execute ebuild.sh are spawned
- via this function. No exceptions.
- """
-
- if phase in _unsandboxed_phases:
- kwargs['free'] = True
-
- kwargs['ipc'] = 'ipc-sandbox' not in settings.features or \
- phase in _ipc_phases
- kwargs['mountns'] = 'mount-sandbox' in settings.features
- kwargs['networked'] = (
- 'network-sandbox' not in settings.features or
- (phase == 'unpack' and
- 'live' in settings['PORTAGE_PROPERTIES'].split()) or
- (phase == 'test' and
- 'test_network' in settings['PORTAGE_PROPERTIES'].split()) or
- phase in _ipc_phases or
- 'network-sandbox' in settings['PORTAGE_RESTRICT'].split())
- kwargs['pidns'] = ('pid-sandbox' in settings.features and
- phase not in _global_pid_phases)
-
- if phase == 'depend':
- kwargs['droppriv'] = 'userpriv' in settings.features
- # It's not necessary to close_fds for this phase, since
- # it should not spawn any daemons, and close_fds is
- # best avoided since it can interact badly with some
- # garbage collectors (see _setup_pipes docstring).
- kwargs['close_fds'] = False
-
- if actionmap is not None and phase in actionmap:
- kwargs.update(actionmap[phase]["args"])
- cmd = actionmap[phase]["cmd"] % phase
- else:
- if phase == 'cleanrm':
- ebuild_sh_arg = 'clean'
- else:
- ebuild_sh_arg = phase
-
- cmd = "%s %s" % (_shell_quote(
- os.path.join(settings["PORTAGE_BIN_PATH"],
- os.path.basename(EBUILD_SH_BINARY))),
- ebuild_sh_arg)
-
- settings['EBUILD_PHASE'] = phase
- try:
- return spawn(cmd, settings, **kwargs)
- finally:
- settings.pop('EBUILD_PHASE', None)
-
- def _spawn_phase(phase, settings, actionmap=None, returnpid=False,
- logfile=None, **kwargs):
-
- if returnpid:
- return _doebuild_spawn(phase, settings, actionmap=actionmap,
- returnpid=returnpid, logfile=logfile, **kwargs)
-
- # The logfile argument is unused here, since EbuildPhase uses
- # the PORTAGE_LOG_FILE variable if set.
- ebuild_phase = EbuildPhase(actionmap=actionmap, background=False,
- phase=phase, scheduler=SchedulerInterface(asyncio._safe_loop()),
- settings=settings, **kwargs)
-
- ebuild_phase.start()
- ebuild_phase.wait()
- return ebuild_phase.returncode
+ """
+ All proper ebuild phases which execute ebuild.sh are spawned
+ via this function. No exceptions.
+ """
+
+ if phase in _unsandboxed_phases:
+ kwargs["free"] = True
+
+ kwargs["ipc"] = "ipc-sandbox" not in settings.features or phase in _ipc_phases
+ kwargs["mountns"] = "mount-sandbox" in settings.features
+ kwargs["networked"] = (
+ "network-sandbox" not in settings.features
+ or (phase == "unpack" and "live" in settings["PORTAGE_PROPERTIES"].split())
+ or (
+ phase == "test" and "test_network" in settings["PORTAGE_PROPERTIES"].split()
+ )
+ or phase in _ipc_phases
+ or "network-sandbox" in settings["PORTAGE_RESTRICT"].split()
+ )
+ kwargs["pidns"] = (
+ "pid-sandbox" in settings.features and phase not in _global_pid_phases
+ )
+
+ if phase == "depend":
+ kwargs["droppriv"] = "userpriv" in settings.features
+ # It's not necessary to close_fds for this phase, since
+ # it should not spawn any daemons, and close_fds is
+ # best avoided since it can interact badly with some
+ # garbage collectors (see _setup_pipes docstring).
+ kwargs["close_fds"] = False
+
+ if actionmap is not None and phase in actionmap:
+ kwargs.update(actionmap[phase]["args"])
+ cmd = actionmap[phase]["cmd"] % phase
+ else:
+ if phase == "cleanrm":
+ ebuild_sh_arg = "clean"
+ else:
+ ebuild_sh_arg = phase
+
+ cmd = "%s %s" % (
+ _shell_quote(
+ os.path.join(
+ settings["PORTAGE_BIN_PATH"], os.path.basename(EBUILD_SH_BINARY)
+ )
+ ),
+ ebuild_sh_arg,
+ )
+
+ settings["EBUILD_PHASE"] = phase
+ try:
+ return spawn(cmd, settings, **kwargs)
+ finally:
+ settings.pop("EBUILD_PHASE", None)
+
+
+ def _spawn_phase(
+ phase, settings, actionmap=None, returnpid=False, logfile=None, **kwargs
+ ):
+
+ if returnpid:
+ return _doebuild_spawn(
+ phase,
+ settings,
+ actionmap=actionmap,
+ returnpid=returnpid,
+ logfile=logfile,
+ **kwargs
+ )
+
+ # The logfile argument is unused here, since EbuildPhase uses
+ # the PORTAGE_LOG_FILE variable if set.
+ ebuild_phase = EbuildPhase(
+ actionmap=actionmap,
+ background=False,
+ phase=phase,
+ scheduler=SchedulerInterface(asyncio._safe_loop()),
+ settings=settings,
+ **kwargs
+ )
+
+ ebuild_phase.start()
+ ebuild_phase.wait()
+ return ebuild_phase.returncode
+
def _doebuild_path(settings, eapi=None):
- """
- Generate the PATH variable.
- """
-
- # Note: PORTAGE_BIN_PATH may differ from the global constant
- # when portage is reinstalling itself.
- portage_bin_path = [settings["PORTAGE_BIN_PATH"]]
- if portage_bin_path[0] != portage.const.PORTAGE_BIN_PATH:
- # Add a fallback path for restarting failed builds (bug 547086)
- portage_bin_path.append(portage.const.PORTAGE_BIN_PATH)
- prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
- rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
- rootpath_set = frozenset(rootpath)
- overrides = [x for x in settings.get(
- "__PORTAGE_TEST_PATH_OVERRIDE", "").split(":") if x]
-
- prefixes = []
- # settings["EPREFIX"] should take priority over portage.const.EPREFIX
- if portage.const.EPREFIX != settings["EPREFIX"] and settings["ROOT"] == os.sep:
- prefixes.append(settings["EPREFIX"])
- prefixes.append(portage.const.EPREFIX)
-
- path = overrides
-
- if "xattr" in settings.features:
- for x in portage_bin_path:
- path.append(os.path.join(x, "ebuild-helpers", "xattr"))
-
- if uid != 0 and \
- "unprivileged" in settings.features and \
- "fakeroot" not in settings.features:
- for x in portage_bin_path:
- path.append(os.path.join(x,
- "ebuild-helpers", "unprivileged"))
-
- if settings.get("USERLAND", "GNU") != "GNU":
- for x in portage_bin_path:
- path.append(os.path.join(x, "ebuild-helpers", "bsd"))
-
- for x in portage_bin_path:
- path.append(os.path.join(x, "ebuild-helpers"))
- path.extend(prerootpath)
-
- for prefix in prefixes:
- prefix = prefix if prefix else "/"
- for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin", "usr/bin", "sbin", "bin"):
- # Respect order defined in ROOTPATH
- x_abs = os.path.join(prefix, x)
- if x_abs not in rootpath_set:
- path.append(x_abs)
-
- path.extend(rootpath)
-
- # PREFIX LOCAL: append EXTRA_PATH from make.globals
- extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
- path.extend(extrapath)
- # END PREFIX LOCAL
-
- settings["PATH"] = ":".join(path)
-
- def doebuild_environment(myebuild, mydo, myroot=None, settings=None,
- debug=False, use_cache=None, db=None):
- """
- Create and store environment variable in the config instance
- that's passed in as the "settings" parameter. This will raise
- UnsupportedAPIException if the given ebuild has an unsupported
- EAPI. All EAPI dependent code comes last, so that essential
- variables like PORTAGE_BUILDDIR are still initialized even in
- cases when UnsupportedAPIException needs to be raised, which
- can be useful when uninstalling a package that has corrupt
- EAPI metadata.
- The myroot and use_cache parameters are unused.
- """
-
- if settings is None:
- raise TypeError("settings argument is required")
-
- if db is None:
- raise TypeError("db argument is required")
-
- mysettings = settings
- mydbapi = db
- ebuild_path = os.path.abspath(myebuild)
- pkg_dir = os.path.dirname(ebuild_path)
- mytree = os.path.dirname(os.path.dirname(pkg_dir))
- mypv = os.path.basename(ebuild_path)[:-7]
- mysplit = _pkgsplit(mypv, eapi=mysettings.configdict["pkg"].get("EAPI"))
- if mysplit is None:
- raise IncorrectParameter(
- _("Invalid ebuild path: '%s'") % myebuild)
-
- if mysettings.mycpv is not None and \
- mysettings.configdict["pkg"].get("PF") == mypv and \
- "CATEGORY" in mysettings.configdict["pkg"]:
- # Assume that PF is enough to assume that we've got
- # the correct CATEGORY, though this is not really
- # a solid assumption since it's possible (though
- # unlikely) that two packages in different
- # categories have the same PF. Callers should call
- # setcpv or create a clean clone of a locked config
- # instance in order to ensure that this assumption
- # does not fail like in bug #408817.
- cat = mysettings.configdict["pkg"]["CATEGORY"]
- mycpv = mysettings.mycpv
- elif os.path.basename(pkg_dir) in (mysplit[0], mypv):
- # portdbapi or vardbapi
- cat = os.path.basename(os.path.dirname(pkg_dir))
- mycpv = cat + "/" + mypv
- else:
- raise AssertionError("unable to determine CATEGORY")
-
- # Make a backup of PORTAGE_TMPDIR prior to calling config.reset()
- # so that the caller can override it.
- tmpdir = mysettings["PORTAGE_TMPDIR"]
-
- if mydo == 'depend':
- if mycpv != mysettings.mycpv:
- # Don't pass in mydbapi here since the resulting aux_get
- # call would lead to infinite 'depend' phase recursion.
- mysettings.setcpv(mycpv)
- else:
- # If EAPI isn't in configdict["pkg"], it means that setcpv()
- # hasn't been called with the mydb argument, so we have to
- # call it here (portage code always calls setcpv properly,
- # but api consumers might not).
- if mycpv != mysettings.mycpv or \
- "EAPI" not in mysettings.configdict["pkg"]:
- # Reload env.d variables and reset any previous settings.
- mysettings.reload()
- mysettings.reset()
- mysettings.setcpv(mycpv, mydb=mydbapi)
-
- # config.reset() might have reverted a change made by the caller,
- # so restore it to its original value. Sandbox needs canonical
- # paths, so realpath it.
- mysettings["PORTAGE_TMPDIR"] = os.path.realpath(tmpdir)
-
- mysettings.pop("EBUILD_PHASE", None) # remove from backupenv
- mysettings["EBUILD_PHASE"] = mydo
-
- # Set requested Python interpreter for Portage helpers.
- mysettings['PORTAGE_PYTHON'] = portage._python_interpreter
-
- # This is used by assert_sigpipe_ok() that's used by the ebuild
- # unpack() helper. SIGPIPE is typically 13, but its better not
- # to assume that.
- mysettings['PORTAGE_SIGPIPE_STATUS'] = str(128 + signal.SIGPIPE)
-
- # We are disabling user-specific bashrc files.
- mysettings["BASH_ENV"] = INVALID_ENV_FILE
-
- if debug: # Otherwise it overrides emerge's settings.
- # We have no other way to set debug... debug can't be passed in
- # due to how it's coded... Don't overwrite this so we can use it.
- mysettings["PORTAGE_DEBUG"] = "1"
-
- mysettings["EBUILD"] = ebuild_path
- mysettings["O"] = pkg_dir
- mysettings.configdict["pkg"]["CATEGORY"] = cat
- mysettings["PF"] = mypv
-
- if hasattr(mydbapi, 'repositories'):
- repo = mydbapi.repositories.get_repo_for_location(mytree)
- mysettings['PORTDIR'] = repo.eclass_db.porttrees[0]
- mysettings['PORTAGE_ECLASS_LOCATIONS'] = repo.eclass_db.eclass_locations_string
- mysettings.configdict["pkg"]["PORTAGE_REPO_NAME"] = repo.name
-
- mysettings["PORTDIR"] = os.path.realpath(mysettings["PORTDIR"])
- mysettings.pop("PORTDIR_OVERLAY", None)
- mysettings["DISTDIR"] = os.path.realpath(mysettings["DISTDIR"])
- mysettings["RPMDIR"] = os.path.realpath(mysettings["RPMDIR"])
-
- mysettings["ECLASSDIR"] = mysettings["PORTDIR"]+"/eclass"
-
- mysettings["PORTAGE_BASHRC_FILES"] = "\n".join(mysettings._pbashrc)
-
- mysettings["P"] = mysplit[0]+"-"+mysplit[1]
- mysettings["PN"] = mysplit[0]
- mysettings["PV"] = mysplit[1]
- mysettings["PR"] = mysplit[2]
-
- if noiselimit < 0:
- mysettings["PORTAGE_QUIET"] = "1"
-
- if mysplit[2] == "r0":
- mysettings["PVR"]=mysplit[1]
- else:
- mysettings["PVR"]=mysplit[1]+"-"+mysplit[2]
-
- # All temporary directories should be subdirectories of
- # $PORTAGE_TMPDIR/portage, since it's common for /tmp and /var/tmp
- # to be mounted with the "noexec" option (see bug #346899).
- mysettings["BUILD_PREFIX"] = mysettings["PORTAGE_TMPDIR"]+"/portage"
- mysettings["PKG_TMPDIR"] = mysettings["BUILD_PREFIX"]+"/._unmerge_"
-
- # Package {pre,post}inst and {pre,post}rm may overlap, so they must have separate
- # locations in order to prevent interference.
- if mydo in ("unmerge", "prerm", "postrm", "cleanrm"):
- mysettings["PORTAGE_BUILDDIR"] = os.path.join(
- mysettings["PKG_TMPDIR"],
- mysettings["CATEGORY"], mysettings["PF"])
- else:
- mysettings["PORTAGE_BUILDDIR"] = os.path.join(
- mysettings["BUILD_PREFIX"],
- mysettings["CATEGORY"], mysettings["PF"])
-
- mysettings["HOME"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "homedir")
- mysettings["WORKDIR"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "work")
- mysettings["D"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "image") + os.sep
- mysettings["T"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "temp")
- mysettings["SANDBOX_LOG"] = os.path.join(mysettings["T"], "sandbox.log")
- mysettings["FILESDIR"] = os.path.join(settings["PORTAGE_BUILDDIR"], "files")
-
- # Prefix forward compatibility
- eprefix_lstrip = mysettings["EPREFIX"].lstrip(os.sep)
- mysettings["ED"] = os.path.join(
- mysettings["D"], eprefix_lstrip).rstrip(os.sep) + os.sep
-
- mysettings["PORTAGE_BASHRC"] = os.path.join(
- mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_FILE)
- mysettings["PM_EBUILD_HOOK_DIR"] = os.path.join(
- mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_DIR)
-
- # Allow color.map to control colors associated with einfo, ewarn, etc...
- mysettings["PORTAGE_COLORMAP"] = colormap()
-
- if "COLUMNS" not in mysettings:
- # Set COLUMNS, in order to prevent unnecessary stty calls
- # inside the set_colors function of isolated-functions.sh.
- # We cache the result in os.environ, in order to avoid
- # multiple stty calls in cases when get_term_size() falls
- # back to stty due to a missing or broken curses module.
- columns = os.environ.get("COLUMNS")
- if columns is None:
- rows, columns = portage.output.get_term_size()
- if columns < 1:
- # Force a sane value for COLUMNS, so that tools
- # like ls don't complain (see bug #394091).
- columns = 80
- columns = str(columns)
- os.environ["COLUMNS"] = columns
- mysettings["COLUMNS"] = columns
-
- # EAPI is always known here, even for the "depend" phase, because
- # EbuildMetadataPhase gets it from _parse_eapi_ebuild_head().
- eapi = mysettings.configdict['pkg']['EAPI']
- _doebuild_path(mysettings, eapi=eapi)
-
- # All EAPI dependent code comes last, so that essential variables like
- # PATH and PORTAGE_BUILDDIR are still initialized even in cases when
- # UnsupportedAPIException needs to be raised, which can be useful
- # when uninstalling a package that has corrupt EAPI metadata.
- if not eapi_is_supported(eapi):
- raise UnsupportedAPIException(mycpv, eapi)
-
- if eapi_exports_REPOSITORY(eapi) and "PORTAGE_REPO_NAME" in mysettings.configdict["pkg"]:
- mysettings.configdict["pkg"]["REPOSITORY"] = mysettings.configdict["pkg"]["PORTAGE_REPO_NAME"]
-
- if mydo != "depend":
- if hasattr(mydbapi, "getFetchMap") and \
- ("A" not in mysettings.configdict["pkg"] or \
- "AA" not in mysettings.configdict["pkg"]):
- src_uri = mysettings.configdict["pkg"].get("SRC_URI")
- if src_uri is None:
- src_uri, = mydbapi.aux_get(mysettings.mycpv,
- ["SRC_URI"], mytree=mytree)
- metadata = {
- "EAPI" : eapi,
- "SRC_URI" : src_uri,
- }
- use = frozenset(mysettings["PORTAGE_USE"].split())
- try:
- uri_map = _parse_uri_map(mysettings.mycpv, metadata, use=use)
- except InvalidDependString:
- mysettings.configdict["pkg"]["A"] = ""
- else:
- mysettings.configdict["pkg"]["A"] = " ".join(uri_map)
-
- try:
- uri_map = _parse_uri_map(mysettings.mycpv, metadata)
- except InvalidDependString:
- mysettings.configdict["pkg"]["AA"] = ""
- else:
- mysettings.configdict["pkg"]["AA"] = " ".join(uri_map)
-
- ccache = "ccache" in mysettings.features
- distcc = "distcc" in mysettings.features
- icecream = "icecream" in mysettings.features
-
- if ccache or distcc or icecream:
- libdir = None
- default_abi = mysettings.get("DEFAULT_ABI")
- if default_abi:
- libdir = mysettings.get("LIBDIR_" + default_abi)
- if not libdir:
- libdir = "lib"
-
- # The installation locations used to vary between versions...
- # Safer to look them up rather than assuming
- possible_libexecdirs = (libdir, "lib", "libexec")
- masquerades = []
- if distcc:
- masquerades.append(("distcc", "distcc"))
- if icecream:
- masquerades.append(("icecream", "icecc"))
- if ccache:
- masquerades.append(("ccache", "ccache"))
-
- for feature, m in masquerades:
- for l in possible_libexecdirs:
- p = os.path.join(os.sep, eprefix_lstrip,
- "usr", l, m, "bin")
- if os.path.isdir(p):
- mysettings["PATH"] = p + ":" + mysettings["PATH"]
- break
- else:
- writemsg(("Warning: %s requested but no masquerade dir "
- "can be found in /usr/lib*/%s/bin\n") % (m, m))
- mysettings.features.remove(feature)
-
- if 'MAKEOPTS' not in mysettings:
- nproc = get_cpu_count()
- if nproc:
- mysettings['MAKEOPTS'] = '-j%d' % (nproc)
-
- if not eapi_exports_KV(eapi):
- # Discard KV for EAPIs that don't support it. Cached KV is restored
- # from the backupenv whenever config.reset() is called.
- mysettings.pop('KV', None)
- elif 'KV' not in mysettings and \
- mydo in ('compile', 'config', 'configure', 'info',
- 'install', 'nofetch', 'postinst', 'postrm', 'preinst',
- 'prepare', 'prerm', 'setup', 'test', 'unpack'):
- mykv, err1 = ExtractKernelVersion(
- os.path.join(mysettings['EROOT'], "usr/src/linux"))
- if mykv:
- # Regular source tree
- mysettings["KV"] = mykv
- else:
- mysettings["KV"] = ""
- mysettings.backup_changes("KV")
-
- binpkg_compression = mysettings.get("BINPKG_COMPRESS", "bzip2")
- try:
- compression = _compressors[binpkg_compression]
- except KeyError as e:
- if binpkg_compression:
- writemsg("Warning: Invalid or unsupported compression method: %s\n" % e.args[0])
- else:
- # Empty BINPKG_COMPRESS disables compression.
- mysettings['PORTAGE_COMPRESSION_COMMAND'] = 'cat'
- else:
- try:
- compression_binary = shlex_split(varexpand(compression["compress"], mydict=settings))[0]
- except IndexError as e:
- writemsg("Warning: Invalid or unsupported compression method: %s\n" % e.args[0])
- else:
- if find_binary(compression_binary) is None:
- missing_package = compression["package"]
- writemsg("Warning: File compression unsupported %s. Missing package: %s\n" % (binpkg_compression, missing_package))
- else:
- cmd = [varexpand(x, mydict=settings) for x in shlex_split(compression["compress"])]
- # Filter empty elements
- cmd = [x for x in cmd if x != ""]
- mysettings['PORTAGE_COMPRESSION_COMMAND'] = ' '.join(cmd)
+ """
+ Generate the PATH variable.
+ """
+
+ # Note: PORTAGE_BIN_PATH may differ from the global constant
+ # when portage is reinstalling itself.
+ portage_bin_path = [settings["PORTAGE_BIN_PATH"]]
+ if portage_bin_path[0] != portage.const.PORTAGE_BIN_PATH:
+ # Add a fallback path for restarting failed builds (bug 547086)
+ portage_bin_path.append(portage.const.PORTAGE_BIN_PATH)
+ prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
+ rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
+ rootpath_set = frozenset(rootpath)
+ overrides = [
+ x for x in settings.get("__PORTAGE_TEST_PATH_OVERRIDE", "").split(":") if x
+ ]
+
+ prefixes = []
+ # settings["EPREFIX"] should take priority over portage.const.EPREFIX
+ if portage.const.EPREFIX != settings["EPREFIX"] and settings["ROOT"] == os.sep:
+ prefixes.append(settings["EPREFIX"])
+ prefixes.append(portage.const.EPREFIX)
+
+ path = overrides
+
+ if "xattr" in settings.features:
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers", "xattr"))
+
+ if (
+ uid != 0
+ and "unprivileged" in settings.features
+ and "fakeroot" not in settings.features
+ ):
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers", "unprivileged"))
+
+ if settings.get("USERLAND", "GNU") != "GNU":
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers", "bsd"))
+
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers"))
+ path.extend(prerootpath)
+
+ for prefix in prefixes:
+ prefix = prefix if prefix else "/"
+ for x in (
+ "usr/local/sbin",
+ "usr/local/bin",
+ "usr/sbin",
+ "usr/bin",
+ "sbin",
+ "bin",
+ ):
+ # Respect order defined in ROOTPATH
+ x_abs = os.path.join(prefix, x)
+ if x_abs not in rootpath_set:
+ path.append(x_abs)
+
+ path.extend(rootpath)
++
++ # BEGIN PREFIX LOCAL: append EXTRA_PATH from make.globals
++ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
++ path.extend(extrapath)
++ # END PREFIX LOCAL
++
+ settings["PATH"] = ":".join(path)
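
The assembled PATH layers, in order: test overrides, the xattr/unprivileged/bsd ebuild-helpers directories, PREROOTPATH, the standard system directories under each prefix (skipping entries already listed in ROOTPATH so their order is respected), ROOTPATH itself, and finally the Prefix-local EXTRA_PATH from make.globals. A minimal standalone sketch of that ordering, with hypothetical inputs and the helper directories omitted for brevity:

    import os

    def build_path(prerootpath, rootpath, extra_path, eprefix=""):
        # Simplified re-creation of the ordering above; not portage's code.
        rootpath_set = frozenset(rootpath)
        path = list(prerootpath)
        for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin",
                  "usr/bin", "sbin", "bin"):
            x_abs = os.path.join(eprefix if eprefix else "/", x)
            if x_abs not in rootpath_set:
                path.append(x_abs)
        path.extend(rootpath)
        path.extend(extra_path)  # PREFIX LOCAL: EXTRA_PATH comes last
        return ":".join(path)

    # build_path([], ["/usr/bin", "/bin"], ["/opt/tools/bin"])
    # -> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/sbin:/usr/bin:/bin:/opt/tools/bin'
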
+
+
+ def doebuild_environment(
+ myebuild, mydo, myroot=None, settings=None, debug=False, use_cache=None, db=None
+ ):
+ """
+ Create and store environment variables in the config instance
+ that's passed in as the "settings" parameter. This will raise
+ UnsupportedAPIException if the given ebuild has an unsupported
+ EAPI. All EAPI dependent code comes last, so that essential
+ variables like PORTAGE_BUILDDIR are still initialized even in
+ cases when UnsupportedAPIException needs to be raised, which
+ can be useful when uninstalling a package that has corrupt
+ EAPI metadata.
+ The myroot and use_cache parameters are unused.
+ """
+
+ if settings is None:
+ raise TypeError("settings argument is required")
+
+ if db is None:
+ raise TypeError("db argument is required")
+
+ mysettings = settings
+ mydbapi = db
+ ebuild_path = os.path.abspath(myebuild)
+ pkg_dir = os.path.dirname(ebuild_path)
+ mytree = os.path.dirname(os.path.dirname(pkg_dir))
+ mypv = os.path.basename(ebuild_path)[:-7]
+ mysplit = _pkgsplit(mypv, eapi=mysettings.configdict["pkg"].get("EAPI"))
+ if mysplit is None:
+ raise IncorrectParameter(_("Invalid ebuild path: '%s'") % myebuild)
+
+ if (
+ mysettings.mycpv is not None
+ and mysettings.configdict["pkg"].get("PF") == mypv
+ and "CATEGORY" in mysettings.configdict["pkg"]
+ ):
+ # Assume that PF is enough to conclude that we've got
+ # the correct CATEGORY, though this is not really
+ # a solid assumption since it's possible (though
+ # unlikely) that two packages in different
+ # categories have the same PF. Callers should call
+ # setcpv or create a clean clone of a locked config
+ # instance in order to ensure that this assumption
+ # does not fail like in bug #408817.
+ cat = mysettings.configdict["pkg"]["CATEGORY"]
+ mycpv = mysettings.mycpv
+ elif os.path.basename(pkg_dir) in (mysplit[0], mypv):
+ # portdbapi or vardbapi
+ cat = os.path.basename(os.path.dirname(pkg_dir))
+ mycpv = cat + "/" + mypv
+ else:
+ raise AssertionError("unable to determine CATEGORY")
+
+ # Make a backup of PORTAGE_TMPDIR prior to calling config.reset()
+ # so that the caller can override it.
+ tmpdir = mysettings["PORTAGE_TMPDIR"]
+
+ if mydo == "depend":
+ if mycpv != mysettings.mycpv:
+ # Don't pass in mydbapi here since the resulting aux_get
+ # call would lead to infinite 'depend' phase recursion.
+ mysettings.setcpv(mycpv)
+ else:
+ # If EAPI isn't in configdict["pkg"], it means that setcpv()
+ # hasn't been called with the mydb argument, so we have to
+ # call it here (portage code always calls setcpv properly,
+ # but api consumers might not).
+ if mycpv != mysettings.mycpv or "EAPI" not in mysettings.configdict["pkg"]:
+ # Reload env.d variables and reset any previous settings.
+ mysettings.reload()
+ mysettings.reset()
+ mysettings.setcpv(mycpv, mydb=mydbapi)
+
+ # config.reset() might have reverted a change made by the caller,
+ # so restore it to its original value. Sandbox needs canonical
+ # paths, so realpath it.
+ mysettings["PORTAGE_TMPDIR"] = os.path.realpath(tmpdir)
+
+ mysettings.pop("EBUILD_PHASE", None) # remove from backupenv
+ mysettings["EBUILD_PHASE"] = mydo
+
+ # Set requested Python interpreter for Portage helpers.
+ mysettings["PORTAGE_PYTHON"] = portage._python_interpreter
+
+ # This is used by assert_sigpipe_ok() that's used by the ebuild
+ # unpack() helper. SIGPIPE is typically 13, but it's better not
+ # to assume that.
+ mysettings["PORTAGE_SIGPIPE_STATUS"] = str(128 + signal.SIGPIPE)
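
The exported status is computed rather than hard-coded because the numeric value of SIGPIPE is platform-dependent; a quick standalone check (not part of the diff):

    import signal

    # On Linux SIGPIPE is 13, so a process killed by it conventionally
    # exits with status 128 + 13 == 141; other platforms may differ.
    print(128 + signal.SIGPIPE)  # typically 141
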
+
+ # We are disabling user-specific bashrc files.
+ mysettings["BASH_ENV"] = INVALID_ENV_FILE
+
+ if debug: # Otherwise it overrides emerge's settings.
+ # We have no other way to set debug... debug can't be passed in
+ # due to how it's coded... Don't overwrite this so we can use it.
+ mysettings["PORTAGE_DEBUG"] = "1"
+
+ mysettings["EBUILD"] = ebuild_path
+ mysettings["O"] = pkg_dir
+ mysettings.configdict["pkg"]["CATEGORY"] = cat
+ mysettings["PF"] = mypv
+
+ if hasattr(mydbapi, "repositories"):
+ repo = mydbapi.repositories.get_repo_for_location(mytree)
+ mysettings["PORTDIR"] = repo.eclass_db.porttrees[0]
+ mysettings["PORTAGE_ECLASS_LOCATIONS"] = repo.eclass_db.eclass_locations_string
+ mysettings.configdict["pkg"]["PORTAGE_REPO_NAME"] = repo.name
+
+ mysettings["PORTDIR"] = os.path.realpath(mysettings["PORTDIR"])
+ mysettings.pop("PORTDIR_OVERLAY", None)
+ mysettings["DISTDIR"] = os.path.realpath(mysettings["DISTDIR"])
+ mysettings["RPMDIR"] = os.path.realpath(mysettings["RPMDIR"])
+
+ mysettings["ECLASSDIR"] = mysettings["PORTDIR"] + "/eclass"
+
+ mysettings["PORTAGE_BASHRC_FILES"] = "\n".join(mysettings._pbashrc)
+
+ mysettings["P"] = mysplit[0] + "-" + mysplit[1]
+ mysettings["PN"] = mysplit[0]
+ mysettings["PV"] = mysplit[1]
+ mysettings["PR"] = mysplit[2]
+
+ if noiselimit < 0:
+ mysettings["PORTAGE_QUIET"] = "1"
+
+ if mysplit[2] == "r0":
+ mysettings["PVR"] = mysplit[1]
+ else:
+ mysettings["PVR"] = mysplit[1] + "-" + mysplit[2]
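
The name and version variables come from splitting the ebuild file name: for foo-1.2.3-r1.ebuild the split gives PN=foo, PV=1.2.3 and PR=r1, so P=foo-1.2.3 and PVR=1.2.3-r1, while an unrevised ebuild gets PR=r0 and PVR equal to PV. A simplified, illustrative splitter (not portage's _pkgsplit, which also validates version syntax):

    import re

    def simple_pkgsplit(mypv):
        # Toy split of "name-version[-rN]" into (PN, PV, PR).
        m = re.match(r"(?P<pn>.+)-(?P<pv>\d[\d.]*\w*)(?:-(?P<pr>r\d+))?$", mypv)
        if m is None:
            return None
        return m.group("pn"), m.group("pv"), m.group("pr") or "r0"

    pn, pv, pr = simple_pkgsplit("foo-1.2.3-r1")    # ('foo', '1.2.3', 'r1')
    pvr = pv if pr == "r0" else "%s-%s" % (pv, pr)  # '1.2.3-r1'
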
+
+ # All temporary directories should be subdirectories of
+ # $PORTAGE_TMPDIR/portage, since it's common for /tmp and /var/tmp
+ # to be mounted with the "noexec" option (see bug #346899).
+ mysettings["BUILD_PREFIX"] = mysettings["PORTAGE_TMPDIR"] + "/portage"
+ mysettings["PKG_TMPDIR"] = mysettings["BUILD_PREFIX"] + "/._unmerge_"
+
+ # Package {pre,post}inst and {pre,post}rm may overlap, so they must have separate
+ # locations in order to prevent interference.
+ if mydo in ("unmerge", "prerm", "postrm", "cleanrm"):
+ mysettings["PORTAGE_BUILDDIR"] = os.path.join(
+ mysettings["PKG_TMPDIR"], mysettings["CATEGORY"], mysettings["PF"]
+ )
+ else:
+ mysettings["PORTAGE_BUILDDIR"] = os.path.join(
+ mysettings["BUILD_PREFIX"], mysettings["CATEGORY"], mysettings["PF"]
+ )
+
+ mysettings["HOME"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "homedir")
+ mysettings["WORKDIR"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "work")
+ mysettings["D"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "image") + os.sep
+ mysettings["T"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "temp")
+ mysettings["SANDBOX_LOG"] = os.path.join(mysettings["T"], "sandbox.log")
+ mysettings["FILESDIR"] = os.path.join(settings["PORTAGE_BUILDDIR"], "files")
+
+ # Prefix forward compatibility
+ eprefix_lstrip = mysettings["EPREFIX"].lstrip(os.sep)
+ mysettings["ED"] = (
+ os.path.join(mysettings["D"], eprefix_lstrip).rstrip(os.sep) + os.sep
+ )
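
Every per-build location hangs off PORTAGE_BUILDDIR, and ED is simply D with EPREFIX appended, which is what lets prefix-aware ebuilds install into ${D}${EPREFIX}. A small sketch with assumed example values (not taken from the diff):

    import os

    eprefix = "/home/user/gentoo"                   # hypothetical EPREFIX
    builddir = "/var/tmp/portage/app-misc/foo-1.0"  # hypothetical PORTAGE_BUILDDIR

    d = os.path.join(builddir, "image") + os.sep
    ed = os.path.join(d, eprefix.lstrip(os.sep)).rstrip(os.sep) + os.sep
    # d  == '/var/tmp/portage/app-misc/foo-1.0/image/'
    # ed == '/var/tmp/portage/app-misc/foo-1.0/image/home/user/gentoo/'
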
+
+ mysettings["PORTAGE_BASHRC"] = os.path.join(
+ mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_FILE
+ )
+ mysettings["PM_EBUILD_HOOK_DIR"] = os.path.join(
+ mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_DIR
+ )
+
+ # Allow color.map to control colors associated with einfo, ewarn, etc...
+ mysettings["PORTAGE_COLORMAP"] = colormap()
+
+ if "COLUMNS" not in mysettings:
+ # Set COLUMNS, in order to prevent unnecessary stty calls
+ # inside the set_colors function of isolated-functions.sh.
+ # We cache the result in os.environ, in order to avoid
+ # multiple stty calls in cases when get_term_size() falls
+ # back to stty due to a missing or broken curses module.
+ columns = os.environ.get("COLUMNS")
+ if columns is None:
+ rows, columns = portage.output.get_term_size()
+ if columns < 1:
+ # Force a sane value for COLUMNS, so that tools
+ # like ls don't complain (see bug #394091).
+ columns = 80
+ columns = str(columns)
+ os.environ["COLUMNS"] = columns
+ mysettings["COLUMNS"] = columns
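
Caching COLUMNS in os.environ means the terminal is probed at most once per process, even when curses is unavailable and stty has to be used. The standard library provides an equivalent fallback chain, shown here only as an analogy to the block above:

    import os
    import shutil

    columns = os.environ.get("COLUMNS")
    if columns is None:
        # shutil.get_terminal_size() plays the role of
        # portage.output.get_term_size() and already falls back to 80x24.
        size = shutil.get_terminal_size(fallback=(80, 24))
        columns = str(size.columns if size.columns >= 1 else 80)
        os.environ["COLUMNS"] = columns  # cache to avoid repeated stty calls
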
+
+ # EAPI is always known here, even for the "depend" phase, because
+ # EbuildMetadataPhase gets it from _parse_eapi_ebuild_head().
+ eapi = mysettings.configdict["pkg"]["EAPI"]
+ _doebuild_path(mysettings, eapi=eapi)
+
+ # All EAPI dependent code comes last, so that essential variables like
+ # PATH and PORTAGE_BUILDDIR are still initialized even in cases when
+ # UnsupportedAPIException needs to be raised, which can be useful
+ # when uninstalling a package that has corrupt EAPI metadata.
+ if not eapi_is_supported(eapi):
+ raise UnsupportedAPIException(mycpv, eapi)
+
+ if (
+ eapi_exports_REPOSITORY(eapi)
+ and "PORTAGE_REPO_NAME" in mysettings.configdict["pkg"]
+ ):
+ mysettings.configdict["pkg"]["REPOSITORY"] = mysettings.configdict["pkg"][
+ "PORTAGE_REPO_NAME"
+ ]
+
+ if mydo != "depend":
+ if hasattr(mydbapi, "getFetchMap") and (
+ "A" not in mysettings.configdict["pkg"]
+ or "AA" not in mysettings.configdict["pkg"]
+ ):
+ src_uri = mysettings.configdict["pkg"].get("SRC_URI")
+ if src_uri is None:
+ (src_uri,) = mydbapi.aux_get(
+ mysettings.mycpv, ["SRC_URI"], mytree=mytree
+ )
+ metadata = {
+ "EAPI": eapi,
+ "SRC_URI": src_uri,
+ }
+ use = frozenset(mysettings["PORTAGE_USE"].split())
+ try:
+ uri_map = _parse_uri_map(mysettings.mycpv, metadata, use=use)
+ except InvalidDependString:
+ mysettings.configdict["pkg"]["A"] = ""
+ else:
+ mysettings.configdict["pkg"]["A"] = " ".join(uri_map)
+
+ try:
+ uri_map = _parse_uri_map(mysettings.mycpv, metadata)
+ except InvalidDependString:
+ mysettings.configdict["pkg"]["AA"] = ""
+ else:
+ mysettings.configdict["pkg"]["AA"] = " ".join(uri_map)
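
A ends up holding only the distfiles selected by the active USE flags, while AA is computed without a USE filter and therefore names every file in SRC_URI. A toy illustration of that difference (ignoring nesting and renames; not portage's _parse_uri_map):

    def toy_uri_map(tokens, use):
        files, i = [], 0
        while i < len(tokens):
            tok = tokens[i]
            if tok.endswith("?"):          # USE conditional, e.g. "doc?"
                enabled = tok[:-1] in use
                i += 2                     # skip the opening "("
                while tokens[i] != ")":
                    if enabled:
                        files.append(tokens[i])
                    i += 1
            else:
                files.append(tok)
            i += 1
        return files

    src_uri = "foo-1.0.tar.gz doc? ( foo-docs-1.0.tar.gz )".split()
    a = " ".join(toy_uri_map(src_uri, use=set()))        # 'foo-1.0.tar.gz'
    aa = " ".join(toy_uri_map(src_uri, use={"doc"}))     # both files
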
+
+ ccache = "ccache" in mysettings.features
+ distcc = "distcc" in mysettings.features
+ icecream = "icecream" in mysettings.features
+
+ if ccache or distcc or icecream:
+ libdir = None
+ default_abi = mysettings.get("DEFAULT_ABI")
+ if default_abi:
+ libdir = mysettings.get("LIBDIR_" + default_abi)
+ if not libdir:
+ libdir = "lib"
+
+ # The installation locations used to vary between versions...
+ # Safer to look them up rather than assuming
+ possible_libexecdirs = (libdir, "lib", "libexec")
+ masquerades = []
+ if distcc:
+ masquerades.append(("distcc", "distcc"))
+ if icecream:
+ masquerades.append(("icecream", "icecc"))
+ if ccache:
+ masquerades.append(("ccache", "ccache"))
+
+ for feature, m in masquerades:
+ for l in possible_libexecdirs:
+ p = os.path.join(os.sep, eprefix_lstrip, "usr", l, m, "bin")
+ if os.path.isdir(p):
+ mysettings["PATH"] = p + ":" + mysettings["PATH"]
+ break
+ else:
+ writemsg(
+ (
+ "Warning: %s requested but no masquerade dir "
+ "can be found in /usr/lib*/%s/bin\n"
+ )
+ % (m, m)
+ )
+ mysettings.features.remove(feature)
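
The masquerade handling only prepends the first existing .../<libdir>/<tool>/bin directory to PATH so that the ccache/distcc/icecream wrappers shadow the real compilers, and drops the feature with a warning if no such directory exists. A condensed sketch of the lookup (the libdir candidates here are hypothetical):

    import os

    def find_masquerade_dir(tool, eprefix="", libdirs=("lib64", "lib", "libexec")):
        # Return the first existing wrapper dir, e.g. /usr/lib/ccache/bin.
        for libdir in libdirs:
            candidate = os.path.join(
                os.sep, eprefix.lstrip(os.sep), "usr", libdir, tool, "bin")
            if os.path.isdir(candidate):
                return candidate
        return None

    wrapper = find_masquerade_dir("ccache")
    if wrapper is not None:
        os.environ["PATH"] = wrapper + ":" + os.environ.get("PATH", "")
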
+
+ if "MAKEOPTS" not in mysettings:
+ nproc = get_cpu_count()
+ if nproc:
+ mysettings["MAKEOPTS"] = "-j%d" % (nproc)
+
+ if not eapi_exports_KV(eapi):
+ # Discard KV for EAPIs that don't support it. Cached KV is restored
+ # from the backupenv whenever config.reset() is called.
+ mysettings.pop("KV", None)
+ elif "KV" not in mysettings and mydo in (
+ "compile",
+ "config",
+ "configure",
+ "info",
+ "install",
+ "nofetch",
+ "postinst",
+ "postrm",
+ "preinst",
+ "prepare",
+ "prerm",
+ "setup",
+ "test",
+ "unpack",
+ ):
+ mykv, err1 = ExtractKernelVersion(
+ os.path.join(mysettings["EROOT"], "usr/src/linux")
+ )
+ if mykv:
+ # Regular source tree
+ mysettings["KV"] = mykv
+ else:
+ mysettings["KV"] = ""
+ mysettings.backup_changes("KV")
+
+ binpkg_compression = mysettings.get("BINPKG_COMPRESS", "bzip2")
+ try:
+ compression = _compressors[binpkg_compression]
+ except KeyError as e:
+ if binpkg_compression:
+ writemsg(
+ "Warning: Invalid or unsupported compression method: %s\n"
+ % e.args[0]
+ )
+ else:
+ # Empty BINPKG_COMPRESS disables compression.
+ mysettings["PORTAGE_COMPRESSION_COMMAND"] = "cat"
+ else:
+ try:
+ compression_binary = shlex_split(
+ varexpand(compression["compress"], mydict=settings)
+ )[0]
+ except IndexError as e:
+ writemsg(
+ "Warning: Invalid or unsupported compression method: %s\n"
+ % e.args[0]
+ )
+ else:
+ if find_binary(compression_binary) is None:
+ missing_package = compression["package"]
+ writemsg(
+ "Warning: File compression unsupported %s. Missing package: %s\n"
+ % (binpkg_compression, missing_package)
+ )
+ else:
+ cmd = [
+ varexpand(x, mydict=settings)
+ for x in shlex_split(compression["compress"])
+ ]
+ # Filter empty elements
+ cmd = [x for x in cmd if x != ""]
+ mysettings["PORTAGE_COMPRESSION_COMMAND"] = " ".join(cmd)
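
BINPKG_COMPRESS is resolved to a command template, variable-expanded, and accepted only if the resulting binary is actually installed; an empty value disables compression by falling back to cat. A rough standalone sketch of that flow (the table below is a hypothetical stand-in for portage's _compressors):

    import shlex
    import shutil

    compressors = {  # hypothetical stand-in entries
        "bzip2": {"compress": "bzip2 -9", "package": "app-arch/bzip2"},
        "zstd": {"compress": "zstd -T0", "package": "app-arch/zstd"},
    }

    def compression_command(name):
        if not name:
            return "cat"  # empty BINPKG_COMPRESS disables compression
        entry = compressors.get(name)
        if entry is None:
            raise ValueError("unsupported compression method: %s" % name)
        cmd = shlex.split(entry["compress"])
        if shutil.which(cmd[0]) is None:
            raise ValueError("missing package: %s" % entry["package"])
        return " ".join(cmd)
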
+
_doebuild_manifest_cache = None
_doebuild_broken_ebuilds = set()
_doebuild_broken_manifests = set()
_doebuild_commands_without_builddir = (
- 'clean', 'cleanrm', 'depend', 'digest',
- 'fetch', 'fetchall', 'help', 'manifest'
+ "clean",
+ "cleanrm",
+ "depend",
+ "digest",
+ "fetch",
+ "fetchall",
+ "help",
+ "manifest",
)
- def doebuild(myebuild, mydo, _unused=DeprecationWarning, settings=None, debug=0, listonly=0,
- fetchonly=0, cleanup=0, dbkey=DeprecationWarning, use_cache=1, fetchall=0, tree=None,
- mydbapi=None, vartree=None, prev_mtimes=None,
- fd_pipes=None, returnpid=False):
- """
- Wrapper function that invokes specific ebuild phases through the spawning
- of ebuild.sh
-
- @param myebuild: name of the ebuild to invoke the phase on (CPV)
- @type myebuild: String
- @param mydo: Phase to run
- @type mydo: String
- @param _unused: Deprecated (use settings["ROOT"] instead)
- @type _unused: String
- @param settings: Portage Configuration
- @type settings: instance of portage.config
- @param debug: Turns on various debug information (eg, debug for spawn)
- @type debug: Boolean
- @param listonly: Used to wrap fetch(); passed such that fetch only lists files required.
- @type listonly: Boolean
- @param fetchonly: Used to wrap fetch(); passed such that files are only fetched (no other actions)
- @type fetchonly: Boolean
- @param cleanup: Passed to prepare_build_dirs (TODO: what does it do?)
- @type cleanup: Boolean
- @param dbkey: A file path where metadata generated by the 'depend' phase
- will be written.
- @type dbkey: String
- @param use_cache: Enables the cache
- @type use_cache: Boolean
- @param fetchall: Used to wrap fetch(), fetches all URIs (even ones invalid due to USE conditionals)
- @type fetchall: Boolean
- @param tree: Which tree to use ('vartree','porttree','bintree', etc..), defaults to 'porttree'
- @type tree: String
- @param mydbapi: a dbapi instance to pass to various functions; this should be a portdbapi instance.
- @type mydbapi: portdbapi instance
- @param vartree: An instance of vartree; used for aux_get calls, defaults to db[myroot]['vartree']
- @type vartree: vartree instance
- @param prev_mtimes: A dict of { filename:mtime } keys used by merge() to do config_protection
- @type prev_mtimes: dictionary
- @param fd_pipes: A dict of mapping for pipes, { '0': stdin, '1': stdout }
- for example.
- @type fd_pipes: Dictionary
- @param returnpid: Return a list of process IDs for a successful spawn, or
- an integer value if spawn is unsuccessful. NOTE: This requires the
- caller clean up all returned PIDs.
- @type returnpid: Boolean
- @rtype: Boolean
- @return:
- 1. 0 for success
- 2. 1 for error
-
- Most errors have an accompanying error message.
-
- listonly and fetchonly are only really necessary for operations involving 'fetch'
- prev_mtimes are only necessary for merge operations.
- Other variables may not be strictly required, many have defaults that are set inside of doebuild.
-
- """
-
- if settings is None:
- raise TypeError("settings parameter is required")
- mysettings = settings
- myroot = settings['EROOT']
-
- if _unused is not DeprecationWarning:
- warnings.warn("The third parameter of the "
- "portage.doebuild() is deprecated. Instead "
- "settings['EROOT'] is used.",
- DeprecationWarning, stacklevel=2)
-
- if dbkey is not DeprecationWarning:
- warnings.warn("portage.doebuild() called "
- "with deprecated dbkey argument.",
- DeprecationWarning, stacklevel=2)
-
- if not tree:
- writemsg("Warning: tree not specified to doebuild\n")
- tree = "porttree"
-
- # chunked out deps for each phase, so that ebuild binary can use it
- # to collapse targets down.
- actionmap_deps={
- "pretend" : [],
- "setup": ["pretend"],
- "unpack": ["setup"],
- "prepare": ["unpack"],
- "configure": ["prepare"],
- "compile":["configure"],
- "test": ["compile"],
- "install":["test"],
- "instprep":["install"],
- "rpm": ["install"],
- "package":["install"],
- "merge" :["install"],
- }
-
- if mydbapi is None:
- mydbapi = portage.db[myroot][tree].dbapi
-
- if vartree is None and mydo in ("merge", "qmerge", "unmerge"):
- vartree = portage.db[myroot]["vartree"]
-
- features = mysettings.features
-
- clean_phases = ("clean", "cleanrm")
- validcommands = ["help","clean","prerm","postrm","cleanrm","preinst","postinst",
- "config", "info", "setup", "depend", "pretend",
- "fetch", "fetchall", "digest",
- "unpack", "prepare", "configure", "compile", "test",
- "install", "instprep", "rpm", "qmerge", "merge",
- "package", "unmerge", "manifest", "nofetch"]
-
- if mydo not in validcommands:
- validcommands.sort()
- writemsg("!!! doebuild: '%s' is not one of the following valid commands:" % mydo,
- noiselevel=-1)
- for vcount in range(len(validcommands)):
- if vcount%6 == 0:
- writemsg("\n!!! ", noiselevel=-1)
- writemsg(validcommands[vcount].ljust(11), noiselevel=-1)
- writemsg("\n", noiselevel=-1)
- return 1
-
- if returnpid and mydo != 'depend':
- # This case is not supported, since it bypasses the EbuildPhase class
- # which implements important functionality (including post phase hooks
- # and IPC for things like best/has_version and die).
- warnings.warn("portage.doebuild() called "
- "with returnpid parameter enabled. This usage will "
- "not be supported in the future.",
- DeprecationWarning, stacklevel=2)
-
- if mydo == "fetchall":
- fetchall = 1
- mydo = "fetch"
-
- if mydo not in clean_phases and not os.path.exists(myebuild):
- writemsg("!!! doebuild: %s not found for %s\n" % (myebuild, mydo),
- noiselevel=-1)
- return 1
-
- global _doebuild_manifest_cache
- pkgdir = os.path.dirname(myebuild)
- manifest_path = os.path.join(pkgdir, "Manifest")
- if tree == "porttree":
- repo_config = mysettings.repositories.get_repo_for_location(
- os.path.dirname(os.path.dirname(pkgdir)))
- else:
- repo_config = None
-
- mf = None
- if "strict" in features and \
- "digest" not in features and \
- tree == "porttree" and \
- not repo_config.thin_manifest and \
- mydo not in ("digest", "manifest", "help") and \
- not portage._doebuild_manifest_exempt_depend and \
- not (repo_config.allow_missing_manifest and not os.path.exists(manifest_path)):
- # Always verify the ebuild checksums before executing it.
- global _doebuild_broken_ebuilds
-
- if myebuild in _doebuild_broken_ebuilds:
- return 1
-
- # Avoid checking the same Manifest several times in a row during a
- # regen with an empty cache.
- if _doebuild_manifest_cache is None or \
- _doebuild_manifest_cache.getFullname() != manifest_path:
- _doebuild_manifest_cache = None
- if not os.path.exists(manifest_path):
- out = portage.output.EOutput()
- out.eerror(_("Manifest not found for '%s'") % (myebuild,))
- _doebuild_broken_ebuilds.add(myebuild)
- return 1
- mf = repo_config.load_manifest(pkgdir, mysettings["DISTDIR"])
-
- else:
- mf = _doebuild_manifest_cache
-
- try:
- mf.checkFileHashes("EBUILD", os.path.basename(myebuild))
- except KeyError:
- if not (mf.allow_missing and
- os.path.basename(myebuild) not in mf.fhashdict["EBUILD"]):
- out = portage.output.EOutput()
- out.eerror(_("Missing digest for '%s'") % (myebuild,))
- _doebuild_broken_ebuilds.add(myebuild)
- return 1
- except FileNotFound:
- out = portage.output.EOutput()
- out.eerror(_("A file listed in the Manifest "
- "could not be found: '%s'") % (myebuild,))
- _doebuild_broken_ebuilds.add(myebuild)
- return 1
- except DigestException as e:
- out = portage.output.EOutput()
- out.eerror(_("Digest verification failed:"))
- out.eerror("%s" % e.value[0])
- out.eerror(_("Reason: %s") % e.value[1])
- out.eerror(_("Got: %s") % e.value[2])
- out.eerror(_("Expected: %s") % e.value[3])
- _doebuild_broken_ebuilds.add(myebuild)
- return 1
-
- if mf.getFullname() in _doebuild_broken_manifests:
- return 1
-
- if mf is not _doebuild_manifest_cache and not mf.allow_missing:
-
- # Make sure that all of the ebuilds are
- # actually listed in the Manifest.
- for f in os.listdir(pkgdir):
- pf = None
- if f[-7:] == '.ebuild':
- pf = f[:-7]
- if pf is not None and not mf.hasFile("EBUILD", f):
- f = os.path.join(pkgdir, f)
- if f not in _doebuild_broken_ebuilds:
- out = portage.output.EOutput()
- out.eerror(_("A file is not listed in the "
- "Manifest: '%s'") % (f,))
- _doebuild_broken_manifests.add(manifest_path)
- return 1
-
- # We cache it only after all above checks succeed.
- _doebuild_manifest_cache = mf
-
- logfile=None
- builddir_lock = None
- tmpdir = None
- tmpdir_orig = None
-
- try:
- if mydo in ("digest", "manifest", "help"):
- # Temporarily exempt the depend phase from manifest checks, in case
- # aux_get calls trigger cache generation.
- portage._doebuild_manifest_exempt_depend += 1
-
- # If we don't need much space and we don't need a constant location,
- # we can temporarily override PORTAGE_TMPDIR with a random temp dir
- # so that there's no need for locking and it can be used even if the
- # user isn't in the portage group.
- if not returnpid and mydo in ("info",):
- tmpdir = tempfile.mkdtemp()
- tmpdir_orig = mysettings["PORTAGE_TMPDIR"]
- mysettings["PORTAGE_TMPDIR"] = tmpdir
-
- doebuild_environment(myebuild, mydo, myroot, mysettings, debug,
- use_cache, mydbapi)
-
- if mydo in clean_phases:
- builddir_lock = None
- if not returnpid and \
- 'PORTAGE_BUILDDIR_LOCKED' not in mysettings:
- builddir_lock = EbuildBuildDir(
- scheduler=asyncio._safe_loop(),
- settings=mysettings)
- builddir_lock.scheduler.run_until_complete(
- builddir_lock.async_lock())
- try:
- return _spawn_phase(mydo, mysettings,
- fd_pipes=fd_pipes, returnpid=returnpid)
- finally:
- if builddir_lock is not None:
- builddir_lock.scheduler.run_until_complete(
- builddir_lock.async_unlock())
-
- # get possible slot information from the deps file
- if mydo == "depend":
- writemsg("!!! DEBUG: dbkey: %s\n" % str(dbkey), 2)
- if returnpid:
- return _spawn_phase(mydo, mysettings,
- fd_pipes=fd_pipes, returnpid=returnpid)
- if dbkey and dbkey is not DeprecationWarning:
- mysettings["dbkey"] = dbkey
- else:
- mysettings["dbkey"] = \
- os.path.join(mysettings.depcachedir, "aux_db_key_temp")
-
- return _spawn_phase(mydo, mysettings,
- fd_pipes=fd_pipes, returnpid=returnpid)
-
- if mydo == "nofetch":
-
- if returnpid:
- writemsg("!!! doebuild: %s\n" %
- _("returnpid is not supported for phase '%s'\n" % mydo),
- noiselevel=-1)
-
- return spawn_nofetch(mydbapi, myebuild, settings=mysettings,
- fd_pipes=fd_pipes)
-
- if tree == "porttree":
-
- if not returnpid:
- # Validate dependency metadata here to ensure that ebuilds with
- # invalid data are never installed via the ebuild command. Skip
- # this when returnpid is True (assume the caller handled it).
- rval = _validate_deps(mysettings, myroot, mydo, mydbapi)
- if rval != os.EX_OK:
- return rval
-
- else:
- # FEATURES=noauto only makes sense for porttree, and we don't want
- # it to trigger redundant sourcing of the ebuild for API consumers
- # that are using binary packages
- if "noauto" in mysettings.features:
- mysettings.features.discard("noauto")
-
- # If we are not using a private temp dir, then check access
- # to the global temp dir.
- if tmpdir is None and \
- mydo not in _doebuild_commands_without_builddir:
- rval = _check_temp_dir(mysettings)
- if rval != os.EX_OK:
- return rval
-
- if mydo == "unmerge":
- if returnpid:
- writemsg("!!! doebuild: %s\n" %
- _("returnpid is not supported for phase '%s'\n" % mydo),
- noiselevel=-1)
- return unmerge(mysettings["CATEGORY"],
- mysettings["PF"], myroot, mysettings, vartree=vartree)
-
- phases_to_run = set()
- if returnpid or \
- "noauto" in mysettings.features or \
- mydo not in actionmap_deps:
- phases_to_run.add(mydo)
- else:
- phase_stack = [mydo]
- while phase_stack:
- x = phase_stack.pop()
- if x in phases_to_run:
- continue
- phases_to_run.add(x)
- phase_stack.extend(actionmap_deps.get(x, []))
- del phase_stack
-
- alist = set(mysettings.configdict["pkg"].get("A", "").split())
-
- unpacked = False
- if tree != "porttree" or \
- mydo in _doebuild_commands_without_builddir:
- pass
- elif "unpack" not in phases_to_run:
- unpacked = os.path.exists(os.path.join(
- mysettings["PORTAGE_BUILDDIR"], ".unpacked"))
- else:
- try:
- workdir_st = os.stat(mysettings["WORKDIR"])
- except OSError:
- pass
- else:
- newstuff = False
- if not os.path.exists(os.path.join(
- mysettings["PORTAGE_BUILDDIR"], ".unpacked")):
- writemsg_stdout(_(
- ">>> Not marked as unpacked; recreating WORKDIR...\n"))
- newstuff = True
- else:
- for x in alist:
- writemsg_stdout(">>> Checking %s's mtime...\n" % x)
- try:
- x_st = os.stat(os.path.join(
- mysettings["DISTDIR"], x))
- except OSError:
- # file deleted
- x_st = None
-
- if x_st is not None and x_st.st_mtime > workdir_st.st_mtime:
- writemsg_stdout(_(">>> Timestamp of "
- "%s has changed; recreating WORKDIR...\n") % x)
- newstuff = True
- break
-
- if newstuff:
- if builddir_lock is None and \
- 'PORTAGE_BUILDDIR_LOCKED' not in mysettings:
- builddir_lock = EbuildBuildDir(
- scheduler=asyncio._safe_loop(),
- settings=mysettings)
- builddir_lock.scheduler.run_until_complete(
- builddir_lock.async_lock())
- try:
- _spawn_phase("clean", mysettings)
- finally:
- if builddir_lock is not None:
- builddir_lock.scheduler.run_until_complete(
- builddir_lock.async_unlock())
- builddir_lock = None
- else:
- writemsg_stdout(_(">>> WORKDIR is up-to-date, keeping...\n"))
- unpacked = True
-
- # Build directory creation isn't required for any of these.
- # In the fetch phase, the directory is needed only for RESTRICT=fetch
- # in order to satisfy the sane $PWD requirement (from bug #239560)
- # when pkg_nofetch is spawned.
- have_build_dirs = False
- if mydo not in ('digest', 'fetch', 'help', 'manifest'):
- if not returnpid and \
- 'PORTAGE_BUILDDIR_LOCKED' not in mysettings:
- builddir_lock = EbuildBuildDir(
- scheduler=asyncio._safe_loop(),
- settings=mysettings)
- builddir_lock.scheduler.run_until_complete(
- builddir_lock.async_lock())
- mystatus = prepare_build_dirs(myroot, mysettings, cleanup)
- if mystatus:
- return mystatus
- have_build_dirs = True
-
- # emerge handles logging externally
- if not returnpid:
- # PORTAGE_LOG_FILE is set by the
- # above prepare_build_dirs() call.
- logfile = mysettings.get("PORTAGE_LOG_FILE")
-
- if have_build_dirs:
- rval = _prepare_env_file(mysettings)
- if rval != os.EX_OK:
- return rval
-
- if eapi_exports_merge_type(mysettings["EAPI"]) and \
- "MERGE_TYPE" not in mysettings.configdict["pkg"]:
- if tree == "porttree":
- mysettings.configdict["pkg"]["MERGE_TYPE"] = "source"
- elif tree == "bintree":
- mysettings.configdict["pkg"]["MERGE_TYPE"] = "binary"
-
- if tree == "porttree":
- mysettings.configdict["pkg"]["EMERGE_FROM"] = "ebuild"
- elif tree == "bintree":
- mysettings.configdict["pkg"]["EMERGE_FROM"] = "binary"
-
- # NOTE: It's not possible to set REPLACED_BY_VERSION for prerm
- # and postrm here, since we don't necessarily know what
- # versions are being installed. This could be a problem
- # for API consumers if they don't use dblink.treewalk()
- # to execute prerm and postrm.
- if eapi_exports_replace_vars(mysettings["EAPI"]) and \
- (mydo in ("postinst", "preinst", "pretend", "setup") or \
- ("noauto" not in features and not returnpid and \
- (mydo in actionmap_deps or mydo in ("merge", "package", "qmerge")))):
- if not vartree:
- writemsg("Warning: vartree not given to doebuild. " + \
- "Cannot set REPLACING_VERSIONS in pkg_{pretend,setup}\n")
- else:
- vardb = vartree.dbapi
- cpv = mysettings.mycpv
- cpv_slot = "%s%s%s" % \
- (cpv.cp, portage.dep._slot_separator, cpv.slot)
- mysettings["REPLACING_VERSIONS"] = " ".join(
- set(portage.versions.cpv_getversion(match) \
- for match in vardb.match(cpv_slot) + \
- vardb.match('='+cpv)))
-
- # if any of these are being called, handle them -- running them out of
- # the sandbox -- and stop now.
- if mydo in ("config", "help", "info", "postinst",
- "preinst", "pretend", "postrm", "prerm"):
- if mydo in ("preinst", "postinst"):
- env_file = os.path.join(os.path.dirname(mysettings["EBUILD"]),
- "environment.bz2")
- if os.path.isfile(env_file):
- mysettings["PORTAGE_UPDATE_ENV"] = env_file
- try:
- return _spawn_phase(mydo, mysettings,
- fd_pipes=fd_pipes, logfile=logfile, returnpid=returnpid)
- finally:
- mysettings.pop("PORTAGE_UPDATE_ENV", None)
-
- mycpv = "/".join((mysettings["CATEGORY"], mysettings["PF"]))
-
- # Only try and fetch the files if we are going to need them ...
- # otherwise, if user has FEATURES=noauto and they run `ebuild clean
- # unpack compile install`, we will try and fetch 4 times :/
- need_distfiles = tree == "porttree" and not unpacked and \
- (mydo in ("fetch", "unpack") or \
- mydo not in ("digest", "manifest") and "noauto" not in features)
- if need_distfiles:
-
- src_uri = mysettings.configdict["pkg"].get("SRC_URI")
- if src_uri is None:
- src_uri, = mydbapi.aux_get(mysettings.mycpv,
- ["SRC_URI"], mytree=os.path.dirname(os.path.dirname(
- os.path.dirname(myebuild))))
- metadata = {
- "EAPI" : mysettings["EAPI"],
- "SRC_URI" : src_uri,
- }
- use = frozenset(mysettings["PORTAGE_USE"].split())
- try:
- alist = _parse_uri_map(mysettings.mycpv, metadata, use=use)
- aalist = _parse_uri_map(mysettings.mycpv, metadata)
- except InvalidDependString as e:
- writemsg("!!! %s\n" % str(e), noiselevel=-1)
- writemsg(_("!!! Invalid SRC_URI for '%s'.\n") % mycpv,
- noiselevel=-1)
- del e
- return 1
-
- if "mirror" in features or fetchall:
- fetchme = aalist
- else:
- fetchme = alist
-
- dist_digests = None
- if mf is not None:
- dist_digests = mf.getTypeDigests("DIST")
-
- def _fetch_subprocess(fetchme, mysettings, listonly, dist_digests):
- # For userfetch, drop privileges for the entire fetch call, in
- # order to handle DISTDIR on NFS with root_squash for bug 601252.
- if _want_userfetch(mysettings):
- _drop_privs_userfetch(mysettings)
-
- return fetch(fetchme, mysettings, listonly=listonly,
- fetchonly=fetchonly, allow_missing_digests=False,
- digests=dist_digests)
-
- loop = asyncio._safe_loop()
- if loop.is_running():
- # Called by EbuildFetchonly for emerge --pretend --fetchonly.
- success = fetch(fetchme, mysettings, listonly=listonly,
- fetchonly=fetchonly, allow_missing_digests=False,
- digests=dist_digests)
- else:
- success = loop.run_until_complete(
- loop.run_in_executor(ForkExecutor(loop=loop),
- _fetch_subprocess, fetchme, mysettings, listonly, dist_digests))
- if not success:
- # Since listonly mode is called by emerge --pretend in an
- # asynchronous context, spawn_nofetch would trigger event loop
- # recursion here, therefore delegate execution of pkg_nofetch
- # to the caller (bug 657360).
- if not listonly:
- spawn_nofetch(mydbapi, myebuild, settings=mysettings,
- fd_pipes=fd_pipes)
- return 1
-
- if need_distfiles:
- # Files are already checked inside fetch(),
- # so do not check them again.
- checkme = []
- elif unpacked:
- # The unpack phase is marked as complete, so it
- # would be wasteful to check distfiles again.
- checkme = []
- else:
- checkme = alist
-
- if mydo == "fetch" and listonly:
- return 0
-
- try:
- if mydo == "manifest":
- mf = None
- _doebuild_manifest_cache = None
- return not digestgen(mysettings=mysettings, myportdb=mydbapi)
- if mydo == "digest":
- mf = None
- _doebuild_manifest_cache = None
- return not digestgen(mysettings=mysettings, myportdb=mydbapi)
- if "digest" in mysettings.features:
- mf = None
- _doebuild_manifest_cache = None
- digestgen(mysettings=mysettings, myportdb=mydbapi)
- except PermissionDenied as e:
- writemsg(_("!!! Permission Denied: %s\n") % (e,), noiselevel=-1)
- if mydo in ("digest", "manifest"):
- return 1
-
- if mydo == "fetch":
- # Return after digestgen for FEATURES=digest support.
- # Return before digestcheck, since fetch() already
- # checked any relevant digests.
- return 0
-
- # See above comment about fetching only when needed
- if tree == 'porttree' and \
- not digestcheck(checkme, mysettings, "strict" in features, mf=mf):
- return 1
-
- # remove PORTAGE_ACTUAL_DISTDIR once cvs/svn is supported via SRC_URI
- if tree == 'porttree' and \
- ((mydo != "setup" and "noauto" not in features) \
- or mydo in ("install", "unpack")):
- _prepare_fake_distdir(mysettings, alist)
-
- #initial dep checks complete; time to process main commands
- actionmap = _spawn_actionmap(mysettings)
-
- # merge the deps in so we have again a 'full' actionmap
- # be glad when this can die.
- for x in actionmap:
- if len(actionmap_deps.get(x, [])):
- actionmap[x]["dep"] = ' '.join(actionmap_deps[x])
-
- regular_actionmap_phase = mydo in actionmap
-
- if regular_actionmap_phase:
- bintree = None
- if mydo == "package":
- # Make sure the package directory exists before executing
- # this phase. This can raise PermissionDenied if
- # the current user doesn't have write access to $PKGDIR.
- if hasattr(portage, 'db'):
- bintree = portage.db[mysettings['EROOT']]['bintree']
- binpkg_tmpfile_dir = os.path.join(bintree.pkgdir, mysettings["CATEGORY"])
- bintree._ensure_dir(binpkg_tmpfile_dir)
- with tempfile.NamedTemporaryFile(
- prefix=mysettings["PF"],
- suffix=".tbz2." + str(portage.getpid()),
- dir=binpkg_tmpfile_dir,
- delete=False) as binpkg_tmpfile:
- mysettings["PORTAGE_BINPKG_TMPFILE"] = binpkg_tmpfile.name
- else:
- parent_dir = os.path.join(mysettings["PKGDIR"],
- mysettings["CATEGORY"])
- portage.util.ensure_dirs(parent_dir)
- if not os.access(parent_dir, os.W_OK):
- raise PermissionDenied(
- "access('%s', os.W_OK)" % parent_dir)
- retval = spawnebuild(mydo,
- actionmap, mysettings, debug, logfile=logfile,
- fd_pipes=fd_pipes, returnpid=returnpid)
-
- if returnpid and isinstance(retval, list):
- return retval
-
- if retval == os.EX_OK:
- if mydo == "package" and bintree is not None:
- pkg = bintree.inject(mysettings.mycpv,
- filename=mysettings["PORTAGE_BINPKG_TMPFILE"])
- if pkg is not None:
- infoloc = os.path.join(
- mysettings["PORTAGE_BUILDDIR"], "build-info")
- build_info = {
- "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
- }
- if pkg.build_id is not None:
- build_info["BUILD_ID"] = "%s\n" % pkg.build_id
- for k, v in build_info.items():
- with io.open(_unicode_encode(
- os.path.join(infoloc, k),
- encoding=_encodings['fs'], errors='strict'),
- mode='w', encoding=_encodings['repo.content'],
- errors='strict') as f:
- f.write(v)
- else:
- if "PORTAGE_BINPKG_TMPFILE" in mysettings:
- try:
- os.unlink(mysettings["PORTAGE_BINPKG_TMPFILE"])
- except OSError:
- pass
-
- elif returnpid:
- writemsg("!!! doebuild: %s\n" %
- _("returnpid is not supported for phase '%s'\n" % mydo),
- noiselevel=-1)
-
- if regular_actionmap_phase:
- # handled above
- pass
- elif mydo == "qmerge":
- # check to ensure install was run. this *only* pops up when users
- # forget it and are using ebuild
- if not os.path.exists(
- os.path.join(mysettings["PORTAGE_BUILDDIR"], ".installed")):
- writemsg(_("!!! mydo=qmerge, but the install phase has not been run\n"),
- noiselevel=-1)
- return 1
- # qmerge is a special phase that implies noclean.
- if "noclean" not in mysettings.features:
- mysettings.features.add("noclean")
- _handle_self_update(mysettings, vartree.dbapi)
- #qmerge is specifically not supposed to do a runtime dep check
- retval = merge(
- mysettings["CATEGORY"], mysettings["PF"], mysettings["D"],
- os.path.join(mysettings["PORTAGE_BUILDDIR"], "build-info"),
- myroot, mysettings, myebuild=mysettings["EBUILD"], mytree=tree,
- mydbapi=mydbapi, vartree=vartree, prev_mtimes=prev_mtimes,
- fd_pipes=fd_pipes)
- elif mydo=="merge":
- retval = spawnebuild("install", actionmap, mysettings, debug,
- alwaysdep=1, logfile=logfile, fd_pipes=fd_pipes,
- returnpid=returnpid)
- if retval != os.EX_OK:
- # The merge phase handles this already. Callers don't know how
- # far this function got, so we have to call elog_process() here
- # so that it's only called once.
- elog_process(mysettings.mycpv, mysettings)
- if retval == os.EX_OK:
- _handle_self_update(mysettings, vartree.dbapi)
- retval = merge(mysettings["CATEGORY"], mysettings["PF"],
- mysettings["D"], os.path.join(mysettings["PORTAGE_BUILDDIR"],
- "build-info"), myroot, mysettings,
- myebuild=mysettings["EBUILD"], mytree=tree, mydbapi=mydbapi,
- vartree=vartree, prev_mtimes=prev_mtimes,
- fd_pipes=fd_pipes)
-
- else:
- writemsg_stdout(_("!!! Unknown mydo: %s\n") % mydo, noiselevel=-1)
- return 1
-
- return retval
-
- finally:
-
- if builddir_lock is not None:
- builddir_lock.scheduler.run_until_complete(
- builddir_lock.async_unlock())
- if tmpdir:
- mysettings["PORTAGE_TMPDIR"] = tmpdir_orig
- shutil.rmtree(tmpdir)
-
- mysettings.pop("REPLACING_VERSIONS", None)
-
- if logfile and not returnpid:
- try:
- if os.stat(logfile).st_size == 0:
- os.unlink(logfile)
- except OSError:
- pass
-
- if mydo in ("digest", "manifest", "help"):
- # If necessary, depend phase has been triggered by aux_get calls
- # and the exemption is no longer needed.
- portage._doebuild_manifest_exempt_depend -= 1
+
+ def doebuild(
+ myebuild,
+ mydo,
+ _unused=DeprecationWarning,
+ settings=None,
+ debug=0,
+ listonly=0,
+ fetchonly=0,
+ cleanup=0,
+ use_cache=1,
+ fetchall=0,
+ tree=None,
+ mydbapi=None,
+ vartree=None,
+ prev_mtimes=None,
+ fd_pipes=None,
+ returnpid=False,
+ ):
+ """
+ Wrapper function that invokes specific ebuild phases through the spawning
+ of ebuild.sh
+
+ @param myebuild: name of the ebuild to invoke the phase on (CPV)
+ @type myebuild: String
+ @param mydo: Phase to run
+ @type mydo: String
+ @param _unused: Deprecated (use settings["EROOT"] instead)
+ @type _unused: String
+ @param settings: Portage Configuration
+ @type settings: instance of portage.config
+ @param debug: Turns on various debug information (eg, debug for spawn)
+ @type debug: Boolean
+ @param listonly: Used to wrap fetch(); passed such that fetch only lists files required.
+ @type listonly: Boolean
+ @param fetchonly: Used to wrap fetch(); passed such that files are only fetched (no other actions)
+ @type fetchonly: Boolean
+ @param cleanup: Passed to prepare_build_dirs (TODO: what does it do?)
+ @type cleanup: Boolean
+ @param use_cache: Enables the cache
+ @type use_cache: Boolean
+ @param fetchall: Used to wrap fetch(), fetches all URIs (even ones invalid due to USE conditionals)
+ @type fetchall: Boolean
+ @param tree: Which tree to use ('vartree','porttree','bintree', etc..), defaults to 'porttree'
+ @type tree: String
+ @param mydbapi: a dbapi instance to pass to various functions; this should be a portdbapi instance.
+ @type mydbapi: portdbapi instance
+ @param vartree: An instance of vartree; used for aux_get calls, defaults to db[myroot]['vartree']
+ @type vartree: vartree instance
+ @param prev_mtimes: A dict of { filename:mtime } keys used by merge() to do config_protection
+ @type prev_mtimes: dictionary
+ @param fd_pipes: A dict of mapping for pipes, { '0': stdin, '1': stdout }
+ for example.
+ @type fd_pipes: Dictionary
+ @param returnpid: Return a list of process IDs for a successful spawn, or
+ an integer value if spawn is unsuccessful. NOTE: This requires that
+ the caller clean up all returned PIDs.
+ @type returnpid: Boolean
+ @rtype: Boolean
+ @return:
+ 1. 0 for success
+ 2. 1 for error
+
+ Most errors have an accompanying error message.
+
+ listonly and fetchonly are only really necessary for operations involving 'fetch'
+ prev_mtimes are only necessary for merge operations.
+ Other variables may not be strictly required, many have defaults that are set inside of doebuild.
+
+ """
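
As a usage illustration only (an API-consumer sketch with a hypothetical package atom; the real callers are _emerge and the ebuild(1) front-end), driving a single phase could look roughly like this:

    import portage

    eroot = portage.settings["EROOT"]
    portdb = portage.db[eroot]["porttree"].dbapi
    mysettings = portage.config(clone=portage.settings)

    ebuild_path = portdb.findname("app-misc/foo-1.0")  # falsy if not in the tree
    if ebuild_path:
        rval = portage.doebuild(ebuild_path, "clean", settings=mysettings,
                                tree="porttree", mydbapi=portdb)
        # rval is 0 on success, 1 on error, per the docstring above
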
+
+ if settings is None:
+ raise TypeError("settings parameter is required")
+ mysettings = settings
+ myroot = settings["EROOT"]
+
+ if _unused is not DeprecationWarning:
+ warnings.warn(
+ "The third parameter of the "
+ "portage.doebuild() is deprecated. Instead "
+ "settings['EROOT'] is used.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if not tree:
+ writemsg("Warning: tree not specified to doebuild\n")
+ tree = "porttree"
+
+ # chunked out deps for each phase, so that ebuild binary can use it
+ # to collapse targets down.
+ actionmap_deps = {
+ "pretend": [],
+ "setup": ["pretend"],
+ "unpack": ["setup"],
+ "prepare": ["unpack"],
+ "configure": ["prepare"],
+ "compile": ["configure"],
+ "test": ["compile"],
+ "install": ["test"],
+ "instprep": ["install"],
+ "rpm": ["install"],
+ "package": ["install"],
+ "merge": ["install"],
+ }
+
+ if mydbapi is None:
+ mydbapi = portage.db[myroot][tree].dbapi
+
+ if vartree is None and mydo in ("merge", "qmerge", "unmerge"):
+ vartree = portage.db[myroot]["vartree"]
+
+ features = mysettings.features
+
+ clean_phases = ("clean", "cleanrm")
+ validcommands = [
+ "help",
+ "clean",
+ "prerm",
+ "postrm",
+ "cleanrm",
+ "preinst",
+ "postinst",
+ "config",
+ "info",
+ "setup",
+ "depend",
+ "pretend",
+ "fetch",
+ "fetchall",
+ "digest",
+ "unpack",
+ "prepare",
+ "configure",
+ "compile",
+ "test",
+ "install",
+ "instprep",
+ "rpm",
+ "qmerge",
+ "merge",
+ "package",
+ "unmerge",
+ "manifest",
+ "nofetch",
+ ]
+
+ if mydo not in validcommands:
+ validcommands.sort()
+ writemsg(
+ "!!! doebuild: '%s' is not one of the following valid commands:" % mydo,
+ noiselevel=-1,
+ )
+ for vcount in range(len(validcommands)):
+ if vcount % 6 == 0:
+ writemsg("\n!!! ", noiselevel=-1)
+ writemsg(validcommands[vcount].ljust(11), noiselevel=-1)
+ writemsg("\n", noiselevel=-1)
+ return 1
+
+ if returnpid and mydo != "depend":
+ # This case is not supported, since it bypasses the EbuildPhase class
+ # which implements important functionality (including post phase hooks
+ # and IPC for things like best/has_version and die).
+ warnings.warn(
+ "portage.doebuild() called "
+ "with returnpid parameter enabled. This usage will "
+ "not be supported in the future.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if mydo == "fetchall":
+ fetchall = 1
+ mydo = "fetch"
+
+ if mydo not in clean_phases and not os.path.exists(myebuild):
+ writemsg(
+ "!!! doebuild: %s not found for %s\n" % (myebuild, mydo), noiselevel=-1
+ )
+ return 1
+
+ global _doebuild_manifest_cache
+ pkgdir = os.path.dirname(myebuild)
+ manifest_path = os.path.join(pkgdir, "Manifest")
+ if tree == "porttree":
+ repo_config = mysettings.repositories.get_repo_for_location(
+ os.path.dirname(os.path.dirname(pkgdir))
+ )
+ else:
+ repo_config = None
+
+ mf = None
+ if (
+ "strict" in features
+ and "digest" not in features
+ and tree == "porttree"
+ and not repo_config.thin_manifest
+ and mydo not in ("digest", "manifest", "help")
+ and not portage._doebuild_manifest_exempt_depend
+ and not (
+ repo_config.allow_missing_manifest and not os.path.exists(manifest_path)
+ )
+ ):
+ # Always verify the ebuild checksums before executing it.
+ global _doebuild_broken_ebuilds
+
+ if myebuild in _doebuild_broken_ebuilds:
+ return 1
+
+ # Avoid checking the same Manifest several times in a row during a
+ # regen with an empty cache.
+ if (
+ _doebuild_manifest_cache is None
+ or _doebuild_manifest_cache.getFullname() != manifest_path
+ ):
+ _doebuild_manifest_cache = None
+ if not os.path.exists(manifest_path):
+ out = portage.output.EOutput()
+ out.eerror(_("Manifest not found for '%s'") % (myebuild,))
+ _doebuild_broken_ebuilds.add(myebuild)
+ return 1
+ mf = repo_config.load_manifest(pkgdir, mysettings["DISTDIR"])
+
+ else:
+ mf = _doebuild_manifest_cache
+
+ try:
+ mf.checkFileHashes("EBUILD", os.path.basename(myebuild))
+ except KeyError:
+ if not (
+ mf.allow_missing
+ and os.path.basename(myebuild) not in mf.fhashdict["EBUILD"]
+ ):
+ out = portage.output.EOutput()
+ out.eerror(_("Missing digest for '%s'") % (myebuild,))
+ _doebuild_broken_ebuilds.add(myebuild)
+ return 1
+ except FileNotFound:
+ out = portage.output.EOutput()
+ out.eerror(
+ _("A file listed in the Manifest " "could not be found: '%s'")
+ % (myebuild,)
+ )
+ _doebuild_broken_ebuilds.add(myebuild)
+ return 1
+ except DigestException as e:
+ out = portage.output.EOutput()
+ out.eerror(_("Digest verification failed:"))
+ out.eerror("%s" % e.value[0])
+ out.eerror(_("Reason: %s") % e.value[1])
+ out.eerror(_("Got: %s") % e.value[2])
+ out.eerror(_("Expected: %s") % e.value[3])
+ _doebuild_broken_ebuilds.add(myebuild)
+ return 1
+
+ if mf.getFullname() in _doebuild_broken_manifests:
+ return 1
+
+ if mf is not _doebuild_manifest_cache and not mf.allow_missing:
+
+ # Make sure that all of the ebuilds are
+ # actually listed in the Manifest.
+ for f in os.listdir(pkgdir):
+ pf = None
+ if f[-7:] == ".ebuild":
+ pf = f[:-7]
+ if pf is not None and not mf.hasFile("EBUILD", f):
+ f = os.path.join(pkgdir, f)
+ if f not in _doebuild_broken_ebuilds:
+ out = portage.output.EOutput()
+ out.eerror(
+ _("A file is not listed in the " "Manifest: '%s'") % (f,)
+ )
+ _doebuild_broken_manifests.add(manifest_path)
+ return 1
+
+ # We cache it only after all above checks succeed.
+ _doebuild_manifest_cache = mf
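
Strict-mode verification amounts to checking the recorded hashes of the .ebuild before it is ever sourced, and the parsed Manifest is cached so repeated calls against the same package directory do not re-read it. A reduced, hashlib-based sketch of a single-file digest check (not the portage Manifest class):

    import hashlib

    def verify_file(path, expected_hashes):
        # expected_hashes, e.g. {"SHA512": "...", "BLAKE2B": "..."}, mirrors
        # what a Manifest EBUILD entry records (values here are placeholders).
        with open(path, "rb") as f:
            data = f.read()
        for algo, expected in expected_hashes.items():
            if hashlib.new(algo.lower(), data).hexdigest() != expected:
                raise ValueError("%s digest mismatch for %s" % (algo, path))
        return True
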
+
+ logfile = None
+ builddir_lock = None
+ tmpdir = None
+ tmpdir_orig = None
+
+ try:
+ if mydo in ("digest", "manifest", "help"):
+ # Temporarily exempt the depend phase from manifest checks, in case
+ # aux_get calls trigger cache generation.
+ portage._doebuild_manifest_exempt_depend += 1
+
+ # If we don't need much space and we don't need a constant location,
+ # we can temporarily override PORTAGE_TMPDIR with a random temp dir
+ # so that there's no need for locking and it can be used even if the
+ # user isn't in the portage group.
+ if not returnpid and mydo in ("info",):
+ tmpdir = tempfile.mkdtemp()
+ tmpdir_orig = mysettings["PORTAGE_TMPDIR"]
+ mysettings["PORTAGE_TMPDIR"] = tmpdir
+
+ doebuild_environment(
+ myebuild, mydo, myroot, mysettings, debug, use_cache, mydbapi
+ )
+
+ if mydo in clean_phases:
+ builddir_lock = None
+ if not returnpid and "PORTAGE_BUILDDIR_LOCKED" not in mysettings:
+ builddir_lock = EbuildBuildDir(
+ scheduler=asyncio._safe_loop(), settings=mysettings
+ )
+ builddir_lock.scheduler.run_until_complete(builddir_lock.async_lock())
+ try:
+ return _spawn_phase(
+ mydo, mysettings, fd_pipes=fd_pipes, returnpid=returnpid
+ )
+ finally:
+ if builddir_lock is not None:
+ builddir_lock.scheduler.run_until_complete(
+ builddir_lock.async_unlock()
+ )
+
+ # get possible slot information from the deps file
+ if mydo == "depend":
+ if not returnpid:
+ raise TypeError("returnpid must be True for depend phase")
+ return _spawn_phase(
+ mydo, mysettings, fd_pipes=fd_pipes, returnpid=returnpid
+ )
+
+ if mydo == "nofetch":
+
+ if returnpid:
+ writemsg(
+ "!!! doebuild: %s\n"
+ % _("returnpid is not supported for phase '%s'\n" % mydo),
+ noiselevel=-1,
+ )
+
+ return spawn_nofetch(
+ mydbapi, myebuild, settings=mysettings, fd_pipes=fd_pipes
+ )
+
+ if tree == "porttree":
+
+ if not returnpid:
+ # Validate dependency metadata here to ensure that ebuilds with
+ # invalid data are never installed via the ebuild command. Skip
+ # this when returnpid is True (assume the caller handled it).
+ rval = _validate_deps(mysettings, myroot, mydo, mydbapi)
+ if rval != os.EX_OK:
+ return rval
+
+ else:
+ # FEATURES=noauto only makes sense for porttree, and we don't want
+ # it to trigger redundant sourcing of the ebuild for API consumers
+ # that are using binary packages
+ if "noauto" in mysettings.features:
+ mysettings.features.discard("noauto")
+
+ # If we are not using a private temp dir, then check access
+ # to the global temp dir.
+ if tmpdir is None and mydo not in _doebuild_commands_without_builddir:
+ rval = _check_temp_dir(mysettings)
+ if rval != os.EX_OK:
+ return rval
+
+ if mydo == "unmerge":
+ if returnpid:
+ writemsg(
+ "!!! doebuild: %s\n"
+ % _("returnpid is not supported for phase '%s'\n" % mydo),
+ noiselevel=-1,
+ )
+ return unmerge(
+ mysettings["CATEGORY"],
+ mysettings["PF"],
+ myroot,
+ mysettings,
+ vartree=vartree,
+ )
+
+ phases_to_run = set()
+ if returnpid or "noauto" in mysettings.features or mydo not in actionmap_deps:
+ phases_to_run.add(mydo)
+ else:
+ phase_stack = [mydo]
+ while phase_stack:
+ x = phase_stack.pop()
+ if x in phases_to_run:
+ continue
+ phases_to_run.add(x)
+ phase_stack.extend(actionmap_deps.get(x, []))
+ del phase_stack
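
Unless returnpid or FEATURES=noauto short-circuits it, the requested phase is expanded transitively through actionmap_deps, so asking for "install" also schedules test, compile, configure, prepare, unpack, setup and pretend. The same expansion in isolation:

    actionmap_deps = {
        "pretend": [], "setup": ["pretend"], "unpack": ["setup"],
        "prepare": ["unpack"], "configure": ["prepare"],
        "compile": ["configure"], "test": ["compile"], "install": ["test"],
    }

    def expand_phases(mydo):
        phases, stack = set(), [mydo]
        while stack:
            x = stack.pop()
            if x in phases:
                continue
            phases.add(x)
            stack.extend(actionmap_deps.get(x, []))
        return phases

    # expand_phases("install") == {"install", "test", "compile", "configure",
    #                              "prepare", "unpack", "setup", "pretend"}
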
+
+ alist = set(mysettings.configdict["pkg"].get("A", "").split())
+
+ unpacked = False
+ if tree != "porttree" or mydo in _doebuild_commands_without_builddir:
+ pass
+ elif "unpack" not in phases_to_run:
+ unpacked = os.path.exists(
+ os.path.join(mysettings["PORTAGE_BUILDDIR"], ".unpacked")
+ )
+ else:
+ try:
+ workdir_st = os.stat(mysettings["WORKDIR"])
+ except OSError:
+ pass
+ else:
+ newstuff = False
+ if not os.path.exists(
+ os.path.join(mysettings["PORTAGE_BUILDDIR"], ".unpacked")
+ ):
+ writemsg_stdout(
+ _(">>> Not marked as unpacked; recreating WORKDIR...\n")
+ )
+ newstuff = True
+ else:
+ for x in alist:
+ writemsg_stdout(">>> Checking %s's mtime...\n" % x)
+ try:
+ x_st = os.stat(os.path.join(mysettings["DISTDIR"], x))
+ except OSError:
+ # file deleted
+ x_st = None
+
+ if x_st is not None and x_st.st_mtime > workdir_st.st_mtime:
+ writemsg_stdout(
+ _(
+ ">>> Timestamp of "
+ "%s has changed; recreating WORKDIR...\n"
+ )
+ % x
+ )
+ newstuff = True
+ break
+
+ if newstuff:
+ if (
+ builddir_lock is None
+ and "PORTAGE_BUILDDIR_LOCKED" not in mysettings
+ ):
+ builddir_lock = EbuildBuildDir(
+ scheduler=asyncio._safe_loop(), settings=mysettings
+ )
+ builddir_lock.scheduler.run_until_complete(
+ builddir_lock.async_lock()
+ )
+ try:
+ _spawn_phase("clean", mysettings)
+ finally:
+ if builddir_lock is not None:
+ builddir_lock.scheduler.run_until_complete(
+ builddir_lock.async_unlock()
+ )
+ builddir_lock = None
+ else:
+ writemsg_stdout(_(">>> WORKDIR is up-to-date, keeping...\n"))
+ unpacked = True
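
The staleness test is purely mtime-based: WORKDIR is kept only when the .unpacked marker exists and no distfile in A is newer than WORKDIR, otherwise a clean phase is forced first. Reduced to a predicate (illustrative, with assumed paths):

    import os

    def workdir_is_fresh(builddir, distdir, distfiles):
        # True if the existing WORKDIR can be reused for these distfiles.
        if not os.path.exists(os.path.join(builddir, ".unpacked")):
            return False
        try:
            workdir_mtime = os.stat(os.path.join(builddir, "work")).st_mtime
        except OSError:
            return False  # no WORKDIR at all
        for name in distfiles:
            try:
                if os.stat(os.path.join(distdir, name)).st_mtime > workdir_mtime:
                    return False
            except OSError:
                pass  # distfile deleted; ignored, like the code above
        return True
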
+
+ # Build directory creation isn't required for any of these.
+ # In the fetch phase, the directory is needed only for RESTRICT=fetch
+ # in order to satisfy the sane $PWD requirement (from bug #239560)
+ # when pkg_nofetch is spawned.
+ have_build_dirs = False
+ if mydo not in ("digest", "fetch", "help", "manifest"):
+ if not returnpid and "PORTAGE_BUILDDIR_LOCKED" not in mysettings:
+ builddir_lock = EbuildBuildDir(
+ scheduler=asyncio._safe_loop(), settings=mysettings
+ )
+ builddir_lock.scheduler.run_until_complete(builddir_lock.async_lock())
+ mystatus = prepare_build_dirs(myroot, mysettings, cleanup)
+ if mystatus:
+ return mystatus
+ have_build_dirs = True
+
+ # emerge handles logging externally
+ if not returnpid:
+ # PORTAGE_LOG_FILE is set by the
+ # above prepare_build_dirs() call.
+ logfile = mysettings.get("PORTAGE_LOG_FILE")
+
+ if have_build_dirs:
+ rval = _prepare_env_file(mysettings)
+ if rval != os.EX_OK:
+ return rval
+
+ if (
+ eapi_exports_merge_type(mysettings["EAPI"])
+ and "MERGE_TYPE" not in mysettings.configdict["pkg"]
+ ):
+ if tree == "porttree":
+ mysettings.configdict["pkg"]["MERGE_TYPE"] = "source"
+ elif tree == "bintree":
+ mysettings.configdict["pkg"]["MERGE_TYPE"] = "binary"
+
+ if tree == "porttree":
+ mysettings.configdict["pkg"]["EMERGE_FROM"] = "ebuild"
+ elif tree == "bintree":
+ mysettings.configdict["pkg"]["EMERGE_FROM"] = "binary"
+
+ # NOTE: It's not possible to set REPLACED_BY_VERSION for prerm
+ # and postrm here, since we don't necessarily know what
+ # versions are being installed. This could be a problem
+ # for API consumers if they don't use dblink.treewalk()
+ # to execute prerm and postrm.
+ if eapi_exports_replace_vars(mysettings["EAPI"]) and (
+ mydo in ("postinst", "preinst", "pretend", "setup")
+ or (
+ "noauto" not in features
+ and not returnpid
+ and (mydo in actionmap_deps or mydo in ("merge", "package", "qmerge"))
+ )
+ ):
+ if not vartree:
+ writemsg(
+ "Warning: vartree not given to doebuild. "
+ + "Cannot set REPLACING_VERSIONS in pkg_{pretend,setup}\n"
+ )
+ else:
+ vardb = vartree.dbapi
+ cpv = mysettings.mycpv
+ cpv_slot = "%s%s%s" % (cpv.cp, portage.dep._slot_separator, cpv.slot)
+ mysettings["REPLACING_VERSIONS"] = " ".join(
+ set(
+ portage.versions.cpv_getversion(match)
+ for match in vardb.match(cpv_slot) + vardb.match("=" + cpv)
+ )
+ )
+
+ # if any of these are being called, handle them -- running them out of
+ # the sandbox -- and stop now.
+ if mydo in (
+ "config",
+ "help",
+ "info",
+ "postinst",
+ "preinst",
+ "pretend",
+ "postrm",
+ "prerm",
+ ):
+ if mydo in ("preinst", "postinst"):
+ env_file = os.path.join(
+ os.path.dirname(mysettings["EBUILD"]), "environment.bz2"
+ )
+ if os.path.isfile(env_file):
+ mysettings["PORTAGE_UPDATE_ENV"] = env_file
+ try:
+ return _spawn_phase(
+ mydo,
+ mysettings,
+ fd_pipes=fd_pipes,
+ logfile=logfile,
+ returnpid=returnpid,
+ )
+ finally:
+ mysettings.pop("PORTAGE_UPDATE_ENV", None)
+
+ mycpv = "/".join((mysettings["CATEGORY"], mysettings["PF"]))
+
+ # Only try and fetch the files if we are going to need them ...
+ # otherwise, if user has FEATURES=noauto and they run `ebuild clean
+ # unpack compile install`, we will try and fetch 4 times :/
+ need_distfiles = (
+ tree == "porttree"
+ and not unpacked
+ and (
+ mydo in ("fetch", "unpack")
+ or mydo not in ("digest", "manifest")
+ and "noauto" not in features
+ )
+ )
+ if need_distfiles:
+
+ src_uri = mysettings.configdict["pkg"].get("SRC_URI")
+ if src_uri is None:
+ (src_uri,) = mydbapi.aux_get(
+ mysettings.mycpv,
+ ["SRC_URI"],
+ mytree=os.path.dirname(os.path.dirname(os.path.dirname(myebuild))),
+ )
+ metadata = {
+ "EAPI": mysettings["EAPI"],
+ "SRC_URI": src_uri,
+ }
+ use = frozenset(mysettings["PORTAGE_USE"].split())
+ try:
+ alist = _parse_uri_map(mysettings.mycpv, metadata, use=use)
+ aalist = _parse_uri_map(mysettings.mycpv, metadata)
+ except InvalidDependString as e:
+ writemsg("!!! %s\n" % str(e), noiselevel=-1)
+ writemsg(_("!!! Invalid SRC_URI for '%s'.\n") % mycpv, noiselevel=-1)
+ del e
+ return 1
+
+ if "mirror" in features or fetchall:
+ fetchme = aalist
+ else:
+ fetchme = alist
+
+ dist_digests = None
+ if mf is not None:
+ dist_digests = mf.getTypeDigests("DIST")
+
+ def _fetch_subprocess(fetchme, mysettings, listonly, dist_digests):
+ # For userfetch, drop privileges for the entire fetch call, in
+ # order to handle DISTDIR on NFS with root_squash for bug 601252.
+ if _want_userfetch(mysettings):
+ _drop_privs_userfetch(mysettings)
+
+ return fetch(
+ fetchme,
+ mysettings,
+ listonly=listonly,
+ fetchonly=fetchonly,
+ allow_missing_digests=False,
+ digests=dist_digests,
+ )
+
+ loop = asyncio._safe_loop()
+ if loop.is_running():
+ # Called by EbuildFetchonly for emerge --pretend --fetchonly.
+ success = fetch(
+ fetchme,
+ mysettings,
+ listonly=listonly,
+ fetchonly=fetchonly,
+ allow_missing_digests=False,
+ digests=dist_digests,
+ )
+ else:
+ success = loop.run_until_complete(
+ loop.run_in_executor(
+ ForkExecutor(loop=loop),
+ _fetch_subprocess,
+ fetchme,
+ mysettings,
+ listonly,
+ dist_digests,
+ )
+ )
+ if not success:
+ # Since listonly mode is called by emerge --pretend in an
+ # asynchronous context, spawn_nofetch would trigger event loop
+ # recursion here, therefore delegate execution of pkg_nofetch
+ # to the caller (bug 657360).
+ if not listonly:
+ spawn_nofetch(
+ mydbapi, myebuild, settings=mysettings, fd_pipes=fd_pipes
+ )
+ return 1
+
+ if need_distfiles:
+ # Files are already checked inside fetch(),
+ # so do not check them again.
+ checkme = []
+ elif unpacked:
+ # The unpack phase is marked as complete, so it
+ # would be wasteful to check distfiles again.
+ checkme = []
+ else:
+ checkme = alist
+
+ if mydo == "fetch" and listonly:
+ return 0
+
+ try:
+ if mydo == "manifest":
+ mf = None
+ _doebuild_manifest_cache = None
+ return not digestgen(mysettings=mysettings, myportdb=mydbapi)
+ if mydo == "digest":
+ mf = None
+ _doebuild_manifest_cache = None
+ return not digestgen(mysettings=mysettings, myportdb=mydbapi)
+ if "digest" in mysettings.features:
+ mf = None
+ _doebuild_manifest_cache = None
+ digestgen(mysettings=mysettings, myportdb=mydbapi)
+ except PermissionDenied as e:
+ writemsg(_("!!! Permission Denied: %s\n") % (e,), noiselevel=-1)
+ if mydo in ("digest", "manifest"):
+ return 1
+
+ if mydo == "fetch":
+ # Return after digestgen for FEATURES=digest support.
+ # Return before digestcheck, since fetch() already
+ # checked any relevant digests.
+ return 0
+
+ # See above comment about fetching only when needed
+ if tree == "porttree" and not digestcheck(
+ checkme, mysettings, "strict" in features, mf=mf
+ ):
+ return 1
+
+ # remove PORTAGE_ACTUAL_DISTDIR once cvs/svn is supported via SRC_URI
+ if tree == "porttree" and (
+ (mydo != "setup" and "noauto" not in features)
+ or mydo in ("install", "unpack")
+ ):
+ _prepare_fake_distdir(mysettings, alist)
+
+ # initial dep checks complete; time to process main commands
+ actionmap = _spawn_actionmap(mysettings)
+
+ # merge the deps in so we have again a 'full' actionmap
+ # be glad when this can die.
+ for x in actionmap:
+ if len(actionmap_deps.get(x, [])):
+ actionmap[x]["dep"] = " ".join(actionmap_deps[x])
+
+ regular_actionmap_phase = mydo in actionmap
+
+ if regular_actionmap_phase:
+ bintree = None
+ if mydo == "package":
+ # Make sure the package directory exists before executing
+ # this phase. This can raise PermissionDenied if
+ # the current user doesn't have write access to $PKGDIR.
+ if hasattr(portage, "db"):
+ bintree = portage.db[mysettings["EROOT"]]["bintree"]
+ binpkg_tmpfile_dir = os.path.join(
+ bintree.pkgdir, mysettings["CATEGORY"]
+ )
+ bintree._ensure_dir(binpkg_tmpfile_dir)
+ with tempfile.NamedTemporaryFile(
+ prefix=mysettings["PF"],
+ suffix=".tbz2." + str(portage.getpid()),
+ dir=binpkg_tmpfile_dir,
+ delete=False,
+ ) as binpkg_tmpfile:
+ mysettings["PORTAGE_BINPKG_TMPFILE"] = binpkg_tmpfile.name
+ else:
+ parent_dir = os.path.join(
+ mysettings["PKGDIR"], mysettings["CATEGORY"]
+ )
+ portage.util.ensure_dirs(parent_dir)
+ if not os.access(parent_dir, os.W_OK):
+ raise PermissionDenied("access('%s', os.W_OK)" % parent_dir)
+ retval = spawnebuild(
+ mydo,
+ actionmap,
+ mysettings,
+ debug,
+ logfile=logfile,
+ fd_pipes=fd_pipes,
+ returnpid=returnpid,
+ )
+
+ if returnpid and isinstance(retval, list):
+ return retval
+
+ if retval == os.EX_OK:
+ if mydo == "package" and bintree is not None:
+ pkg = bintree.inject(
+ mysettings.mycpv, filename=mysettings["PORTAGE_BINPKG_TMPFILE"]
+ )
+ if pkg is not None:
+ infoloc = os.path.join(
+ mysettings["PORTAGE_BUILDDIR"], "build-info"
+ )
+ build_info = {
+ "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+ }
+ if pkg.build_id is not None:
+ build_info["BUILD_ID"] = "%s\n" % pkg.build_id
+ for k, v in build_info.items():
+ with io.open(
+ _unicode_encode(
+ os.path.join(infoloc, k),
+ encoding=_encodings["fs"],
+ errors="strict",
+ ),
+ mode="w",
+ encoding=_encodings["repo.content"],
+ errors="strict",
+ ) as f:
+ f.write(v)
+ else:
+ if "PORTAGE_BINPKG_TMPFILE" in mysettings:
+ try:
+ os.unlink(mysettings["PORTAGE_BINPKG_TMPFILE"])
+ except OSError:
+ pass
+
+ elif returnpid:
+ writemsg(
+ "!!! doebuild: %s\n"
+ % _("returnpid is not supported for phase '%s'\n" % mydo),
+ noiselevel=-1,
+ )
+
+ if regular_actionmap_phase:
+ # handled above
+ pass
+ elif mydo == "qmerge":
+ # check to ensure install was run. this *only* pops up when users
+ # forget it and are using ebuild
+ if not os.path.exists(
+ os.path.join(mysettings["PORTAGE_BUILDDIR"], ".installed")
+ ):
+ writemsg(
+ _("!!! mydo=qmerge, but the install phase has not been run\n"),
+ noiselevel=-1,
+ )
+ return 1
+ # qmerge is a special phase that implies noclean.
+ if "noclean" not in mysettings.features:
+ mysettings.features.add("noclean")
+ _handle_self_update(mysettings, vartree.dbapi)
+ # qmerge is specifically not supposed to do a runtime dep check
+ retval = merge(
+ mysettings["CATEGORY"],
+ mysettings["PF"],
+ mysettings["D"],
+ os.path.join(mysettings["PORTAGE_BUILDDIR"], "build-info"),
+ myroot,
+ mysettings,
+ myebuild=mysettings["EBUILD"],
+ mytree=tree,
+ mydbapi=mydbapi,
+ vartree=vartree,
+ prev_mtimes=prev_mtimes,
+ fd_pipes=fd_pipes,
+ )
+ elif mydo == "merge":
+ retval = spawnebuild(
+ "install",
+ actionmap,
+ mysettings,
+ debug,
+ alwaysdep=1,
+ logfile=logfile,
+ fd_pipes=fd_pipes,
+ returnpid=returnpid,
+ )
+ if retval != os.EX_OK:
+ # The merge phase handles this already. Callers don't know how
+ # far this function got, so we have to call elog_process() here
+ # so that it's only called once.
+ elog_process(mysettings.mycpv, mysettings)
+ if retval == os.EX_OK:
+ _handle_self_update(mysettings, vartree.dbapi)
+ retval = merge(
+ mysettings["CATEGORY"],
+ mysettings["PF"],
+ mysettings["D"],
+ os.path.join(mysettings["PORTAGE_BUILDDIR"], "build-info"),
+ myroot,
+ mysettings,
+ myebuild=mysettings["EBUILD"],
+ mytree=tree,
+ mydbapi=mydbapi,
+ vartree=vartree,
+ prev_mtimes=prev_mtimes,
+ fd_pipes=fd_pipes,
+ )
+
+ else:
+ writemsg_stdout(_("!!! Unknown mydo: %s\n") % mydo, noiselevel=-1)
+ return 1
+
+ return retval
+
+ finally:
+
+ if builddir_lock is not None:
+ builddir_lock.scheduler.run_until_complete(builddir_lock.async_unlock())
+ if tmpdir:
+ mysettings["PORTAGE_TMPDIR"] = tmpdir_orig
+ shutil.rmtree(tmpdir)
+
+ mysettings.pop("REPLACING_VERSIONS", None)
+
+ if logfile and not returnpid:
+ try:
+ if os.stat(logfile).st_size == 0:
+ os.unlink(logfile)
+ except OSError:
+ pass
+
+ if mydo in ("digest", "manifest", "help"):
+ # If necessary, depend phase has been triggered by aux_get calls
+ # and the exemption is no longer needed.
+ portage._doebuild_manifest_exempt_depend -= 1
+
def _check_temp_dir(settings):
- if "PORTAGE_TMPDIR" not in settings or \
- not os.path.isdir(settings["PORTAGE_TMPDIR"]):
- writemsg(_("The directory specified in your "
- "PORTAGE_TMPDIR variable, '%s',\n"
- "does not exist. Please create this directory or "
- "correct your PORTAGE_TMPDIR setting.\n") % \
- settings.get("PORTAGE_TMPDIR", ""), noiselevel=-1)
- return 1
-
- # as some people use a separate PORTAGE_TMPDIR mount
- # we prefer that as the checks below would otherwise be pointless
- # for those people.
- checkdir = first_existing(os.path.join(settings["PORTAGE_TMPDIR"], "portage"))
-
- if not os.access(checkdir, os.W_OK):
- writemsg(_("%s is not writable.\n"
- "Likely cause is that you've mounted it as readonly.\n") % checkdir,
- noiselevel=-1)
- return 1
-
- with tempfile.NamedTemporaryFile(prefix="exectest-", dir=checkdir) as fd:
- os.chmod(fd.name, 0o755)
- if not os.access(fd.name, os.X_OK):
- writemsg(_("Can not execute files in %s\n"
- "Likely cause is that you've mounted it with one of the\n"
- "following mount options: 'noexec', 'user', 'users'\n\n"
- "Please make sure that portage can execute files in this directory.\n") % checkdir,
- noiselevel=-1)
- return 1
-
- return os.EX_OK
+ if "PORTAGE_TMPDIR" not in settings or not os.path.isdir(
+ settings["PORTAGE_TMPDIR"]
+ ):
+ writemsg(
+ _(
+ "The directory specified in your "
+ "PORTAGE_TMPDIR variable, '%s',\n"
+ "does not exist. Please create this directory or "
+ "correct your PORTAGE_TMPDIR setting.\n"
+ )
+ % settings.get("PORTAGE_TMPDIR", ""),
+ noiselevel=-1,
+ )
+ return 1
+
+ # as some people use a separate PORTAGE_TMPDIR mount
+ # we prefer that as the checks below would otherwise be pointless
+ # for those people.
+ checkdir = first_existing(os.path.join(settings["PORTAGE_TMPDIR"], "portage"))
+
+ if not os.access(checkdir, os.W_OK):
+ writemsg(
+ _(
+ "%s is not writable.\n"
+ "Likely cause is that you've mounted it as readonly.\n"
+ )
+ % checkdir,
+ noiselevel=-1,
+ )
+ return 1
+
+ with tempfile.NamedTemporaryFile(prefix="exectest-", dir=checkdir) as fd:
+ os.chmod(fd.name, 0o755)
+ if not os.access(fd.name, os.X_OK):
+ writemsg(
+ _(
+ "Can not execute files in %s\n"
+ "Likely cause is that you've mounted it with one of the\n"
+ "following mount options: 'noexec', 'user', 'users'\n\n"
+ "Please make sure that portage can execute files in this directory.\n"
+ )
+ % checkdir,
+ noiselevel=-1,
+ )
+ return 1
+
+ return os.EX_OK
+
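The probe that _check_temp_dir performs can be illustrated on its own; a minimal sketch, assuming an arbitrary existing directory path (not Portage's actual checkdir selection via first_existing):

    import os
    import tempfile

    def usable_build_tmpdir(directory):
        """Return True if 'directory' is writable and files created in it
        can be made executable (i.e. not mounted readonly or noexec)."""
        if not os.access(directory, os.W_OK):
            return False
        with tempfile.NamedTemporaryFile(prefix="exectest-", dir=directory) as probe:
            os.chmod(probe.name, 0o755)
            return os.access(probe.name, os.X_OK)

    # usable_build_tmpdir("/var/tmp/portage")  -> True on a typical setup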
def _prepare_env_file(settings):
- """
- Extract environment.bz2 if it exists, but only if the destination
- environment file doesn't already exist. There are lots of possible
- states when doebuild() calls this function, and we want to avoid
- clobbering an existing environment file.
- """
-
- env_extractor = BinpkgEnvExtractor(background=False,
- scheduler=asyncio._safe_loop(),
- settings=settings)
-
- if env_extractor.dest_env_exists():
- # There are lots of possible states when doebuild()
- # calls this function, and we want to avoid
- # clobbering an existing environment file.
- return os.EX_OK
-
- if not env_extractor.saved_env_exists():
- # If the environment.bz2 doesn't exist, then ebuild.sh will
- # source the ebuild as a fallback.
- return os.EX_OK
-
- env_extractor.start()
- env_extractor.wait()
- return env_extractor.returncode
+ """
+ Extract environment.bz2 if it exists, but only if the destination
+ environment file doesn't already exist. There are lots of possible
+ states when doebuild() calls this function, and we want to avoid
+ clobbering an existing environment file.
+ """
+
+ env_extractor = BinpkgEnvExtractor(
+ background=False, scheduler=asyncio._safe_loop(), settings=settings
+ )
+
+ if env_extractor.dest_env_exists():
+ # There are lots of possible states when doebuild()
+ # calls this function, and we want to avoid
+ # clobbering an existing environment file.
+ return os.EX_OK
+
+ if not env_extractor.saved_env_exists():
+ # If the environment.bz2 doesn't exist, then ebuild.sh will
+ # source the ebuild as a fallback.
+ return os.EX_OK
+
+ env_extractor.start()
+ env_extractor.wait()
+ return env_extractor.returncode
+
def _spawn_actionmap(settings):
- features = settings.features
- restrict = settings["PORTAGE_RESTRICT"].split()
- nosandbox = (("userpriv" in features) and \
- ("usersandbox" not in features) and \
- "userpriv" not in restrict and \
- "nouserpriv" not in restrict)
-
- if not (portage.process.sandbox_capable or \
- portage.process.macossandbox_capable):
- nosandbox = True
-
- sesandbox = settings.selinux_enabled() and \
- "sesandbox" in features
-
- droppriv = "userpriv" in features and \
- "userpriv" not in restrict and \
- secpass >= 2
-
- fakeroot = "fakeroot" in features
-
- portage_bin_path = settings["PORTAGE_BIN_PATH"]
- ebuild_sh_binary = os.path.join(portage_bin_path,
- os.path.basename(EBUILD_SH_BINARY))
- misc_sh_binary = os.path.join(portage_bin_path,
- os.path.basename(MISC_SH_BINARY))
- ebuild_sh = _shell_quote(ebuild_sh_binary) + " %s"
- misc_sh = _shell_quote(misc_sh_binary) + " __dyn_%s"
-
- # args are for the to spawn function
- actionmap = {
- "pretend": {"cmd":ebuild_sh, "args":{"droppriv":0, "free":1, "sesandbox":0, "fakeroot":0}},
- "setup": {"cmd":ebuild_sh, "args":{"droppriv":0, "free":1, "sesandbox":0, "fakeroot":0}},
- "unpack": {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":0, "sesandbox":sesandbox, "fakeroot":0}},
- "prepare": {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":0, "sesandbox":sesandbox, "fakeroot":0}},
- "configure":{"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":nosandbox, "sesandbox":sesandbox, "fakeroot":0}},
- "compile": {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":nosandbox, "sesandbox":sesandbox, "fakeroot":0}},
- "test": {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":nosandbox, "sesandbox":sesandbox, "fakeroot":0}},
- "install": {"cmd":ebuild_sh, "args":{"droppriv":0, "free":0, "sesandbox":sesandbox, "fakeroot":fakeroot}},
- "instprep": {"cmd":misc_sh, "args":{"droppriv":0, "free":0, "sesandbox":sesandbox, "fakeroot":fakeroot}},
- "rpm": {"cmd":misc_sh, "args":{"droppriv":0, "free":0, "sesandbox":0, "fakeroot":fakeroot}},
- "package": {"cmd":misc_sh, "args":{"droppriv":0, "free":0, "sesandbox":0, "fakeroot":fakeroot}},
- }
-
- return actionmap
+ features = settings.features
+ restrict = settings["PORTAGE_RESTRICT"].split()
+ nosandbox = (
+ ("userpriv" in features)
+ and ("usersandbox" not in features)
+ and "userpriv" not in restrict
+ and "nouserpriv" not in restrict
+ )
+
- if not portage.process.sandbox_capable:
++ if not (portage.process.sandbox_capable
++ or portage.process.macossandbox_capable):
+ nosandbox = True
+
+ sesandbox = settings.selinux_enabled() and "sesandbox" in features
+
+ droppriv = "userpriv" in features and "userpriv" not in restrict and secpass >= 2
+
+ fakeroot = "fakeroot" in features
+
+ portage_bin_path = settings["PORTAGE_BIN_PATH"]
+ ebuild_sh_binary = os.path.join(
+ portage_bin_path, os.path.basename(EBUILD_SH_BINARY)
+ )
+ misc_sh_binary = os.path.join(portage_bin_path, os.path.basename(MISC_SH_BINARY))
+ ebuild_sh = _shell_quote(ebuild_sh_binary) + " %s"
+ misc_sh = _shell_quote(misc_sh_binary) + " __dyn_%s"
+
+ # args are passed to the spawn function
+ actionmap = {
+ "pretend": {
+ "cmd": ebuild_sh,
+ "args": {"droppriv": 0, "free": 1, "sesandbox": 0, "fakeroot": 0},
+ },
+ "setup": {
+ "cmd": ebuild_sh,
+ "args": {"droppriv": 0, "free": 1, "sesandbox": 0, "fakeroot": 0},
+ },
+ "unpack": {
+ "cmd": ebuild_sh,
+ "args": {
+ "droppriv": droppriv,
+ "free": 0,
+ "sesandbox": sesandbox,
+ "fakeroot": 0,
+ },
+ },
+ "prepare": {
+ "cmd": ebuild_sh,
+ "args": {
+ "droppriv": droppriv,
+ "free": 0,
+ "sesandbox": sesandbox,
+ "fakeroot": 0,
+ },
+ },
+ "configure": {
+ "cmd": ebuild_sh,
+ "args": {
+ "droppriv": droppriv,
+ "free": nosandbox,
+ "sesandbox": sesandbox,
+ "fakeroot": 0,
+ },
+ },
+ "compile": {
+ "cmd": ebuild_sh,
+ "args": {
+ "droppriv": droppriv,
+ "free": nosandbox,
+ "sesandbox": sesandbox,
+ "fakeroot": 0,
+ },
+ },
+ "test": {
+ "cmd": ebuild_sh,
+ "args": {
+ "droppriv": droppriv,
+ "free": nosandbox,
+ "sesandbox": sesandbox,
+ "fakeroot": 0,
+ },
+ },
+ "install": {
+ "cmd": ebuild_sh,
+ "args": {
+ "droppriv": 0,
+ "free": 0,
+ "sesandbox": sesandbox,
+ "fakeroot": fakeroot,
+ },
+ },
+ "instprep": {
+ "cmd": misc_sh,
+ "args": {
+ "droppriv": 0,
+ "free": 0,
+ "sesandbox": sesandbox,
+ "fakeroot": fakeroot,
+ },
+ },
+ "rpm": {
+ "cmd": misc_sh,
+ "args": {"droppriv": 0, "free": 0, "sesandbox": 0, "fakeroot": fakeroot},
+ },
+ "package": {
+ "cmd": misc_sh,
+ "args": {"droppriv": 0, "free": 0, "sesandbox": 0, "fakeroot": fakeroot},
+ },
+ }
+
+ return actionmap
+
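How one of the actionmap entries above ends up driving a spawn call, shown as a simplified illustration only; the real path goes through spawnebuild() and _spawn_phase(), and 'actionmap' and 'settings' are assumed to come from _spawn_actionmap() and a portage.config instance:

    def run_single_phase(phase, actionmap, settings):
        """Format the per-phase command and hand the sandbox/privilege flags
        (droppriv, free, sesandbox, fakeroot) straight to spawn()."""
        entry = actionmap[phase]
        command = entry["cmd"] % phase   # e.g. "<path to ebuild.sh> compile"
        return spawn(command, settings, **entry["args"])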
def _validate_deps(mysettings, myroot, mydo, mydbapi):
@@@ -1482,382 -1881,363 +1893,451 @@@
# XXX This would be to replace getstatusoutput completely.
# XXX Issue: cannot block execution. Deadlock condition.
- def spawn(mystring, mysettings, debug=False, free=False, droppriv=False,
- sesandbox=False, fakeroot=False, networked=True, ipc=True,
- mountns=False, pidns=False, **keywords):
- """
- Spawn a subprocess with extra portage-specific options.
- Optiosn include:
-
- Sandbox: Sandbox means the spawned process will be limited in its ability t
- read and write files (normally this means it is restricted to ${D}/)
- SElinux Sandbox: Enables sandboxing on SElinux
- Reduced Privileges: Drops privilages such that the process runs as portage:portage
- instead of as root.
-
- Notes: os.system cannot be used because it messes with signal handling. Instead we
- use the portage.process spawn* family of functions.
-
- This function waits for the process to terminate.
-
- @param mystring: Command to run
- @type mystring: String
- @param mysettings: Either a Dict of Key,Value pairs or an instance of portage.config
- @type mysettings: Dictionary or config instance
- @param debug: Ignored
- @type debug: Boolean
- @param free: Enable sandboxing for this process
- @type free: Boolean
- @param droppriv: Drop to portage:portage when running this command
- @type droppriv: Boolean
- @param sesandbox: Enable SELinux Sandboxing (toggles a context switch)
- @type sesandbox: Boolean
- @param fakeroot: Run this command with faked root privileges
- @type fakeroot: Boolean
- @param networked: Run this command with networking access enabled
- @type networked: Boolean
- @param ipc: Run this command with host IPC access enabled
- @type ipc: Boolean
- @param mountns: Run this command inside mount namespace
- @type mountns: Boolean
- @param pidns: Run this command in isolated PID namespace
- @type pidns: Boolean
- @param keywords: Extra options encoded as a dict, to be passed to spawn
- @type keywords: Dictionary
- @rtype: Integer
- @return:
- 1. The return code of the spawned process.
- """
-
- check_config_instance(mysettings)
-
- fd_pipes = keywords.get("fd_pipes")
- if fd_pipes is None:
- fd_pipes = {
- 0:portage._get_stdin().fileno(),
- 1:sys.__stdout__.fileno(),
- 2:sys.__stderr__.fileno(),
- }
- # In some cases the above print statements don't flush stdout, so
- # it needs to be flushed before allowing a child process to use it
- # so that output always shows in the correct order.
- stdout_filenos = (sys.__stdout__.fileno(), sys.__stderr__.fileno())
- for fd in fd_pipes.values():
- if fd in stdout_filenos:
- sys.__stdout__.flush()
- sys.__stderr__.flush()
- break
-
- features = mysettings.features
-
- # Use Linux namespaces if available
- if uid == 0 and platform.system() == 'Linux':
- keywords['unshare_net'] = not networked
- keywords['unshare_ipc'] = not ipc
- keywords['unshare_mount'] = mountns
- keywords['unshare_pid'] = pidns
-
- if not networked and mysettings.get("EBUILD_PHASE") != "nofetch" and \
- ("network-sandbox-proxy" in features or "distcc" in features):
- # Provide a SOCKS5-over-UNIX-socket proxy to escape sandbox
- # Don't do this for pkg_nofetch, since the spawn_nofetch
- # function creates a private PORTAGE_TMPDIR.
- try:
- proxy = get_socks5_proxy(mysettings)
- except NotImplementedError:
- pass
- else:
- mysettings['PORTAGE_SOCKS5_PROXY'] = proxy
- mysettings['DISTCC_SOCKS_PROXY'] = proxy
-
- # TODO: Enable fakeroot to be used together with droppriv. The
- # fake ownership/permissions will have to be converted to real
- # permissions in the merge phase.
- fakeroot = fakeroot and uid != 0 and portage.process.fakeroot_capable
- portage_build_uid = os.getuid()
- portage_build_gid = os.getgid()
- logname = None
- if uid == 0 and portage_uid and portage_gid and hasattr(os, "setgroups"):
- if droppriv:
- logname = portage.data._portage_username
- keywords.update({
- "uid": portage_uid,
- "gid": portage_gid,
- "groups": userpriv_groups,
- "umask": 0o22
- })
-
- # Adjust pty ownership so that subprocesses
- # can directly access /dev/fd/{1,2}.
- stdout_fd = fd_pipes.get(1)
- if stdout_fd is not None:
- try:
- subprocess_tty = _os.ttyname(stdout_fd)
- except OSError:
- pass
- else:
- try:
- parent_tty = _os.ttyname(sys.__stdout__.fileno())
- except OSError:
- parent_tty = None
-
- if subprocess_tty != parent_tty:
- _os.chown(subprocess_tty,
- int(portage_uid), int(portage_gid))
-
- if "userpriv" in features and "userpriv" not in mysettings["PORTAGE_RESTRICT"].split() and secpass >= 2:
- # Since Python 3.4, getpwuid and getgrgid
- # require int type (no proxies).
- portage_build_uid = int(portage_uid)
- portage_build_gid = int(portage_gid)
-
- if "PORTAGE_BUILD_USER" not in mysettings:
- user = None
- try:
- user = pwd.getpwuid(portage_build_uid).pw_name
- except KeyError:
- if portage_build_uid == 0:
- user = "root"
- elif portage_build_uid == portage_uid:
- user = portage.data._portage_username
- # PREFIX LOCAL: accept numeric uid
- else:
- user = portage_uid
- # END PREFIX LOCAL
- if user is not None:
- mysettings["PORTAGE_BUILD_USER"] = user
-
- if "PORTAGE_BUILD_GROUP" not in mysettings:
- group = None
- try:
- group = grp.getgrgid(portage_build_gid).gr_name
- except KeyError:
- if portage_build_gid == 0:
- group = "root"
- elif portage_build_gid == portage_gid:
- group = portage.data._portage_grpname
- # PREFIX LOCAL: accept numeric gid
- else:
- group = portage_gid
- # END PREFIX LOCAL
- if group is not None:
- mysettings["PORTAGE_BUILD_GROUP"] = group
-
- if not free:
- free=((droppriv and "usersandbox" not in features) or \
- (not droppriv and "sandbox" not in features and \
- "usersandbox" not in features and not fakeroot))
-
- if not free and not (fakeroot or portage.process.sandbox_capable or \
- portage.process.macossandbox_capable):
- free = True
-
- if mysettings.mycpv is not None:
- keywords["opt_name"] = "[%s]" % mysettings.mycpv
- else:
- keywords["opt_name"] = "[%s/%s]" % \
- (mysettings.get("CATEGORY",""), mysettings.get("PF",""))
-
- if free or "SANDBOX_ACTIVE" in os.environ:
- keywords["opt_name"] += " bash"
- spawn_func = portage.process.spawn_bash
- elif fakeroot:
- keywords["opt_name"] += " fakeroot"
- keywords["fakeroot_state"] = os.path.join(mysettings["T"], "fakeroot.state")
- spawn_func = portage.process.spawn_fakeroot
- elif "sandbox" in features and platform.system() == 'Darwin':
- keywords["opt_name"] += " macossandbox"
- sbprofile = MACOSSANDBOX_PROFILE
-
- # determine variable names from profile: split
- # "text@@VARNAME@@moretext@@OTHERVAR@@restoftext" into
- # ("text", # "VARNAME", "moretext", "OTHERVAR", "restoftext")
- # and extract variable named by reading every second item.
- variables = []
- for line in sbprofile.split("\n"):
- variables.extend(line.split("@@")[1:-1:2])
-
- for var in variables:
- paths = ""
- if var in mysettings:
- paths = mysettings[var]
- else:
- writemsg("Warning: sandbox profile references variable %s "
- "which is not set.\nThe rule using it will have no "
- "effect, which is most likely not the intended "
- "result.\nPlease check make.conf/make.globals.\n" %
- var)
-
- # not set or empty value
- if not paths:
- sbprofile = sbprofile.replace("@@%s@@" % var, "")
- continue
-
- rules_literal = ""
- rules_regex = ""
-
- # FIXME: Allow for quoting inside the variable
- # to allow paths with spaces in them?
- for path in paths.split(" "):
- # do a second round of token
- # replacements to be able to reference
- # settings like EPREFIX or
- # PORTAGE_BUILDDIR.
- for token in path.split("@@")[1:-1:2]:
- if token not in mysettings:
- continue
-
- path = path.replace("@@%s@@" % token, mysettings[token])
-
- if "@@" in path:
- # unreplaced tokens left -
- # silently ignore path - needed
- # for PORTAGE_ACTUAL_DISTDIR
- # which isn't always set
- pass
- elif path[-1] == os.sep:
- # path ends in slash - make it a
- # regex and allow access
- # recursively.
- path = path.replace(r'+', r'\+')
- path = path.replace(r'*', r'\*')
- path = path.replace(r'[', r'\[')
- path = path.replace(r']', r'\]')
- rules_regex += " #\"^%s\"\n" % path
- else:
- rules_literal += " #\"%s\"\n" % path
-
- rules = ""
- if rules_literal:
- rules += " (literal\n" + rules_literal + " )\n"
- if rules_regex:
- rules += " (regex\n" + rules_regex + " )\n"
- sbprofile = sbprofile.replace("@@%s@@" % var, rules)
-
- keywords["profile"] = sbprofile
- spawn_func = portage.process.spawn_macossandbox
- else:
- keywords["opt_name"] += " sandbox"
- spawn_func = portage.process.spawn_sandbox
-
- if sesandbox:
- spawn_func = selinux.spawn_wrapper(spawn_func,
- mysettings["PORTAGE_SANDBOX_T"])
-
- logname_backup = None
- if logname is not None:
- logname_backup = mysettings.configdict["env"].get("LOGNAME")
- mysettings.configdict["env"]["LOGNAME"] = logname
-
- try:
- if keywords.get("returnpid"):
- return spawn_func(mystring, env=mysettings.environ(),
- **keywords)
-
- proc = EbuildSpawnProcess(
- background=False, args=mystring,
- scheduler=SchedulerInterface(asyncio._safe_loop()),
- spawn_func=spawn_func,
- settings=mysettings, **keywords)
-
- proc.start()
- proc.wait()
-
- return proc.returncode
-
- finally:
- if logname is None:
- pass
- elif logname_backup is None:
- mysettings.configdict["env"].pop("LOGNAME", None)
- else:
- mysettings.configdict["env"]["LOGNAME"] = logname_backup
+
+
+ def spawn(
+ mystring,
+ mysettings,
+ debug=False,
+ free=False,
+ droppriv=False,
+ sesandbox=False,
+ fakeroot=False,
+ networked=True,
+ ipc=True,
+ mountns=False,
+ pidns=False,
+ **keywords
+ ):
+ """
+ Spawn a subprocess with extra portage-specific options.
+ Options include:
+
+ Sandbox: Sandbox means the spawned process will be limited in its ability to
+ read and write files (normally this means it is restricted to ${D}/)
+ SELinux Sandbox: Enables sandboxing on SELinux
+ Reduced Privileges: Drops privileges such that the process runs as portage:portage
+ instead of as root.
+
+ Notes: os.system cannot be used because it messes with signal handling. Instead we
+ use the portage.process spawn* family of functions.
+
+ This function waits for the process to terminate.
+
+ @param mystring: Command to run
+ @type mystring: String
+ @param mysettings: Either a Dict of Key,Value pairs or an instance of portage.config
+ @type mysettings: Dictionary or config instance
+ @param debug: Ignored
+ @type debug: Boolean
+ @param free: Skip sandbox wrappers for this process (run it unrestricted)
+ @type free: Boolean
+ @param droppriv: Drop to portage:portage when running this command
+ @type droppriv: Boolean
+ @param sesandbox: Enable SELinux Sandboxing (toggles a context switch)
+ @type sesandbox: Boolean
+ @param fakeroot: Run this command with faked root privileges
+ @type fakeroot: Boolean
+ @param networked: Run this command with networking access enabled
+ @type networked: Boolean
+ @param ipc: Run this command with host IPC access enabled
+ @type ipc: Boolean
+ @param mountns: Run this command inside mount namespace
+ @type mountns: Boolean
+ @param pidns: Run this command in isolated PID namespace
+ @type pidns: Boolean
+ @param keywords: Extra options encoded as a dict, to be passed to spawn
+ @type keywords: Dictionary
+ @rtype: Integer
+ @return:
+ 1. The return code of the spawned process.
+ """
+
+ check_config_instance(mysettings)
+
+ fd_pipes = keywords.get("fd_pipes")
+ if fd_pipes is None:
+ fd_pipes = {
+ 0: portage._get_stdin().fileno(),
+ 1: sys.__stdout__.fileno(),
+ 2: sys.__stderr__.fileno(),
+ }
+ # In some cases the above print statements don't flush stdout, so
+ # it needs to be flushed before allowing a child process to use it
+ # so that output always shows in the correct order.
+ stdout_filenos = (sys.__stdout__.fileno(), sys.__stderr__.fileno())
+ for fd in fd_pipes.values():
+ if fd in stdout_filenos:
+ sys.__stdout__.flush()
+ sys.__stderr__.flush()
+ break
+
+ features = mysettings.features
+
+ # Use Linux namespaces if available
+ if uid == 0 and platform.system() == "Linux":
+ keywords["unshare_net"] = not networked
+ keywords["unshare_ipc"] = not ipc
+ keywords["unshare_mount"] = mountns
+ keywords["unshare_pid"] = pidns
+
+ if (
+ not networked
+ and mysettings.get("EBUILD_PHASE") != "nofetch"
+ and ("network-sandbox-proxy" in features or "distcc" in features)
+ ):
+ # Provide a SOCKS5-over-UNIX-socket proxy to escape sandbox
+ # Don't do this for pkg_nofetch, since the spawn_nofetch
+ # function creates a private PORTAGE_TMPDIR.
+ try:
+ proxy = get_socks5_proxy(mysettings)
+ except NotImplementedError:
+ pass
+ else:
+ mysettings["PORTAGE_SOCKS5_PROXY"] = proxy
+ mysettings["DISTCC_SOCKS_PROXY"] = proxy
+
+ # TODO: Enable fakeroot to be used together with droppriv. The
+ # fake ownership/permissions will have to be converted to real
+ # permissions in the merge phase.
+ fakeroot = fakeroot and uid != 0 and portage.process.fakeroot_capable
+ portage_build_uid = os.getuid()
+ portage_build_gid = os.getgid()
+ logname = None
+ if uid == 0 and portage_uid and portage_gid and hasattr(os, "setgroups"):
+ if droppriv:
+ logname = portage.data._portage_username
+ keywords.update(
+ {
+ "uid": portage_uid,
+ "gid": portage_gid,
+ "groups": userpriv_groups,
+ "umask": 0o22,
+ }
+ )
+
+ # Adjust pty ownership so that subprocesses
+ # can directly access /dev/fd/{1,2}.
+ stdout_fd = fd_pipes.get(1)
+ if stdout_fd is not None:
+ try:
+ subprocess_tty = _os.ttyname(stdout_fd)
+ except OSError:
+ pass
+ else:
+ try:
+ parent_tty = _os.ttyname(sys.__stdout__.fileno())
+ except OSError:
+ parent_tty = None
+
+ if subprocess_tty != parent_tty:
+ _os.chown(subprocess_tty, int(portage_uid), int(portage_gid))
+
+ if (
+ "userpriv" in features
+ and "userpriv" not in mysettings["PORTAGE_RESTRICT"].split()
+ and secpass >= 2
+ ):
+ # Since Python 3.4, getpwuid and getgrgid
+ # require int type (no proxies).
+ portage_build_uid = int(portage_uid)
+ portage_build_gid = int(portage_gid)
+
+ if "PORTAGE_BUILD_USER" not in mysettings:
+ user = None
+ try:
+ user = pwd.getpwuid(portage_build_uid).pw_name
+ except KeyError:
+ if portage_build_uid == 0:
+ user = "root"
+ elif portage_build_uid == portage_uid:
+ user = portage.data._portage_username
++ # BEGIN PREFIX LOCAL: accept numeric uid
++ else:
++ user = portage_uid
++ # END PREFIX LOCAL
+ if user is not None:
+ mysettings["PORTAGE_BUILD_USER"] = user
+
+ if "PORTAGE_BUILD_GROUP" not in mysettings:
+ group = None
+ try:
+ group = grp.getgrgid(portage_build_gid).gr_name
+ except KeyError:
+ if portage_build_gid == 0:
+ group = "root"
+ elif portage_build_gid == portage_gid:
+ group = portage.data._portage_grpname
++ # BEGIN PREFIX LOCAL: accept numeric gid
++ else:
++ group = portage_gid
++ # END PREFIX LOCAL
+ if group is not None:
+ mysettings["PORTAGE_BUILD_GROUP"] = group
+
+ if not free:
+ free = (droppriv and "usersandbox" not in features) or (
+ not droppriv
+ and "sandbox" not in features
+ and "usersandbox" not in features
+ and not fakeroot
+ )
+
- if not free and not (fakeroot or portage.process.sandbox_capable):
++ if not free and not (fakeroot or portage.process.sandbox_capable
++ or portage.process.macossandbox_capable): # PREFIX LOCAL
+ free = True
+
+ if mysettings.mycpv is not None:
+ keywords["opt_name"] = "[%s]" % mysettings.mycpv
+ else:
+ keywords["opt_name"] = "[%s/%s]" % (
+ mysettings.get("CATEGORY", ""),
+ mysettings.get("PF", ""),
+ )
+
+ if free or "SANDBOX_ACTIVE" in os.environ:
+ keywords["opt_name"] += " bash"
+ spawn_func = portage.process.spawn_bash
+ elif fakeroot:
+ keywords["opt_name"] += " fakeroot"
+ keywords["fakeroot_state"] = os.path.join(mysettings["T"], "fakeroot.state")
+ spawn_func = portage.process.spawn_fakeroot
++ # BEGIN PREFIX LOCAL
++ elif "sandbox" in features and platform.system() == 'Darwin':
++ keywords["opt_name"] += " macossandbox"
++ sbprofile = MACOSSANDBOX_PROFILE
++
++ # determine variable names from profile: split
++ # "text@@VARNAME@@moretext@@OTHERVAR@@restoftext" into
++ # ("text", # "VARNAME", "moretext", "OTHERVAR", "restoftext")
++ # and extract variable named by reading every second item.
++ variables = []
++ for line in sbprofile.split("\n"):
++ variables.extend(line.split("@@")[1:-1:2])
++
++ for var in variables:
++ paths = ""
++ if var in mysettings:
++ paths = mysettings[var]
++ else:
++ writemsg("Warning: sandbox profile references variable %s "
++ "which is not set.\nThe rule using it will have no "
++ "effect, which is most likely not the intended "
++ "result.\nPlease check make.conf/make.globals.\n" %
++ var)
++
++ # not set or empty value
++ if not paths:
++ sbprofile = sbprofile.replace("@@%s@@" % var, "")
++ continue
++
++ rules_literal = ""
++ rules_regex = ""
++
++ # FIXME: Allow for quoting inside the variable
++ # to allow paths with spaces in them?
++ for path in paths.split(" "):
++ # do a second round of token
++ # replacements to be able to reference
++ # settings like EPREFIX or
++ # PORTAGE_BUILDDIR.
++ for token in path.split("@@")[1:-1:2]:
++ if token not in mysettings:
++ continue
++
++ path = path.replace("@@%s@@" % token, mysettings[token])
++
++ if "@@" in path:
++ # unreplaced tokens left -
++ # silently ignore path - needed
++ # for PORTAGE_ACTUAL_DISTDIR
++ # which isn't always set
++ pass
++ elif path[-1] == os.sep:
++ # path ends in slash - make it a
++ # regex and allow access
++ # recursively.
++ path = path.replace(r'+', r'\+')
++ path = path.replace(r'*', r'\*')
++ path = path.replace(r'[', r'\[')
++ path = path.replace(r']', r'\]')
++ rules_regex += " #\"^%s\"\n" % path
++ else:
++ rules_literal += " #\"%s\"\n" % path
++
++ rules = ""
++ if rules_literal:
++ rules += " (literal\n" + rules_literal + " )\n"
++ if rules_regex:
++ rules += " (regex\n" + rules_regex + " )\n"
++ sbprofile = sbprofile.replace("@@%s@@" % var, rules)
++
++ keywords["profile"] = sbprofile
++ spawn_func = portage.process.spawn_macossandbox
++ # END PREFIX LOCAL
+ else:
+ keywords["opt_name"] += " sandbox"
+ spawn_func = portage.process.spawn_sandbox
+
+ if sesandbox:
+ spawn_func = selinux.spawn_wrapper(spawn_func, mysettings["PORTAGE_SANDBOX_T"])
+
+ logname_backup = None
+ if logname is not None:
+ logname_backup = mysettings.configdict["env"].get("LOGNAME")
+ mysettings.configdict["env"]["LOGNAME"] = logname
+
+ try:
+ if keywords.get("returnpid"):
+ return spawn_func(mystring, env=mysettings.environ(), **keywords)
+
+ proc = EbuildSpawnProcess(
+ background=False,
+ args=mystring,
+ scheduler=SchedulerInterface(asyncio._safe_loop()),
+ spawn_func=spawn_func,
+ settings=mysettings,
+ **keywords
+ )
+
+ proc.start()
+ proc.wait()
+
+ return proc.returncode
+
+ finally:
+ if logname is None:
+ pass
+ elif logname_backup is None:
+ mysettings.configdict["env"].pop("LOGNAME", None)
+ else:
+ mysettings.configdict["env"]["LOGNAME"] = logname_backup
+
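The "@@VARNAME@@" handling in the macOS sandbox branch above boils down to slicing every second element of a split on "@@"; a tiny self-contained example, with the profile line being placeholder text only loosely modelled on a sandbox rule:

    line = '(allow file-write* (subpath "@@PORTAGE_BUILDDIR@@") (subpath "@@T@@"))'
    variables = line.split("@@")[1:-1:2]
    # variables == ['PORTAGE_BUILDDIR', 'T']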
# parse actionmap to spawn ebuild with the appropriate args
- def spawnebuild(mydo, actionmap, mysettings, debug, alwaysdep=0,
- logfile=None, fd_pipes=None, returnpid=False):
-
- if returnpid:
- warnings.warn("portage.spawnebuild() called "
- "with returnpid parameter enabled. This usage will "
- "not be supported in the future.",
- DeprecationWarning, stacklevel=2)
-
- if not returnpid and \
- (alwaysdep or "noauto" not in mysettings.features):
- # process dependency first
- if "dep" in actionmap[mydo]:
- retval = spawnebuild(actionmap[mydo]["dep"], actionmap,
- mysettings, debug, alwaysdep=alwaysdep, logfile=logfile,
- fd_pipes=fd_pipes, returnpid=returnpid)
- if retval:
- return retval
-
- eapi = mysettings["EAPI"]
-
- if mydo in ("configure", "prepare") and not eapi_has_src_prepare_and_src_configure(eapi):
- return os.EX_OK
-
- if mydo == "pretend" and not eapi_has_pkg_pretend(eapi):
- return os.EX_OK
-
- if not (mydo == "install" and "noauto" in mysettings.features):
- check_file = os.path.join(
- mysettings["PORTAGE_BUILDDIR"], ".%sed" % mydo.rstrip('e'))
- if os.path.exists(check_file):
- writemsg_stdout(_(">>> It appears that "
- "'%(action)s' has already executed for '%(pkg)s'; skipping.\n") %
- {"action":mydo, "pkg":mysettings["PF"]})
- writemsg_stdout(_(">>> Remove '%(file)s' to force %(action)s.\n") %
- {"file":check_file, "action":mydo})
- return os.EX_OK
-
- return _spawn_phase(mydo, mysettings,
- actionmap=actionmap, logfile=logfile,
- fd_pipes=fd_pipes, returnpid=returnpid)
- _post_phase_cmds = {
- "install" : [
- "install_qa_check",
- "install_symlink_html_docs",
- "install_hooks"],
-
- "preinst" : (
- (
- # Since SELinux does not allow LD_PRELOAD across domain transitions,
- # disable the LD_PRELOAD sandbox for preinst_selinux_labels.
- {
- "ld_preload_sandbox": False,
- "selinux_only": True,
- },
- [
- "preinst_selinux_labels",
- ],
- ),
- (
- {},
- [
- "preinst_aix",
- "preinst_sfperms",
- "preinst_suid_scan",
- "preinst_qa_check",
- ],
- ),
- ),
- "postinst" : [
- "postinst_aix",
- "postinst_qa_check"],
+ def spawnebuild(
+ mydo,
+ actionmap,
+ mysettings,
+ debug,
+ alwaysdep=0,
+ logfile=None,
+ fd_pipes=None,
+ returnpid=False,
+ ):
+
+ if returnpid:
+ warnings.warn(
+ "portage.spawnebuild() called "
+ "with returnpid parameter enabled. This usage will "
+ "not be supported in the future.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ if not returnpid and (alwaysdep or "noauto" not in mysettings.features):
+ # process dependency first
+ if "dep" in actionmap[mydo]:
+ retval = spawnebuild(
+ actionmap[mydo]["dep"],
+ actionmap,
+ mysettings,
+ debug,
+ alwaysdep=alwaysdep,
+ logfile=logfile,
+ fd_pipes=fd_pipes,
+ returnpid=returnpid,
+ )
+ if retval:
+ return retval
+
+ eapi = mysettings["EAPI"]
+
+ if mydo in ("configure", "prepare") and not eapi_has_src_prepare_and_src_configure(
+ eapi
+ ):
+ return os.EX_OK
+
+ if mydo == "pretend" and not eapi_has_pkg_pretend(eapi):
+ return os.EX_OK
+
+ if not (mydo == "install" and "noauto" in mysettings.features):
+ check_file = os.path.join(
+ mysettings["PORTAGE_BUILDDIR"], ".%sed" % mydo.rstrip("e")
+ )
+ if os.path.exists(check_file):
+ writemsg_stdout(
+ _(
+ ">>> It appears that "
+ "'%(action)s' has already executed for '%(pkg)s'; skipping.\n"
+ )
+ % {"action": mydo, "pkg": mysettings["PF"]}
+ )
+ writemsg_stdout(
+ _(">>> Remove '%(file)s' to force %(action)s.\n")
+ % {"file": check_file, "action": mydo}
+ )
+ return os.EX_OK
+
+ return _spawn_phase(
+ mydo,
+ mysettings,
+ actionmap=actionmap,
+ logfile=logfile,
+ fd_pipes=fd_pipes,
+ returnpid=returnpid,
+ )
+
+
+ _post_phase_cmds = {
+ "install": ["install_qa_check", "install_symlink_html_docs", "install_hooks"],
+ "preinst": (
+ (
+ # Since SELinux does not allow LD_PRELOAD across domain transitions,
+ # disable the LD_PRELOAD sandbox for preinst_selinux_labels.
+ {
+ "ld_preload_sandbox": False,
+ "selinux_only": True,
+ },
+ [
+ "preinst_selinux_labels",
+ ],
+ ),
+ (
+ {},
+ [
++ # PREFIX LOCAL
++ "preinst_aix",
+ "preinst_sfperms",
+ "preinst_suid_scan",
+ "preinst_qa_check",
+ ],
+ ),
+ ),
- "postinst": ["postinst_qa_check"],
++ "postinst": [
++ # PREFIX LOCAL
++ "postinst_aix",
++ "postinst_qa_check",
++ ],
}
+
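The mixed structure above (a plain list for "install"/"postinst", a tuple of (options, commands) groups for "preinst") can be normalised with a small helper; this is a sketch only, not the consumer Portage actually uses elsewhere in doebuild.py:

    def iter_post_phase_groups(phase_cmds):
        """Yield (spawn_options, command_list) pairs whether the phase uses
        the simple list form or the grouped tuple form."""
        if isinstance(phase_cmds, list):
            yield {}, list(phase_cmds)
        else:
            for options, commands in phase_cmds:
                yield dict(options), list(commands)

    # iter_post_phase_groups(_post_phase_cmds["preinst"]) yields two groups:
    # one carrying the ld_preload_sandbox/selinux_only options, one with defaults.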
def _post_phase_userpriv_perms(mysettings):
- if "userpriv" in mysettings.features and secpass >= 2:
- """ Privileged phases may have left files that need to be made
- writable to a less privileged user."""
- for path in (mysettings["HOME"], mysettings["T"]):
- apply_recursive_permissions(path,
- uid=portage_uid, gid=portage_gid, dirmode=0o700, dirmask=0,
- filemode=0o600, filemask=0)
+ if "userpriv" in mysettings.features and secpass >= 2:
+ """Privileged phases may have left files that need to be made
+ writable to a less privileged user."""
+ for path in (mysettings["HOME"], mysettings["T"]):
+ apply_recursive_permissions(
+ path,
+ uid=portage_uid,
+ gid=portage_gid,
+ dirmode=0o700,
+ dirmask=0,
+ filemode=0o600,
+ filemask=0,
+ )
def _post_phase_emptydir_cleanup(mysettings):
diff --cc lib/portage/package/ebuild/fetch.py
index 7c95245a7,2d3625800..2fcd33bd9
--- a/lib/portage/package/ebuild/fetch.py
+++ b/lib/portage/package/ebuild/fetch.py
@@@ -22,29 -22,44 +22,46 @@@ from urllib.parse import urlpars
from urllib.parse import quote as urlquote
import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.package.ebuild.config:check_config_instance,config',
- 'portage.package.ebuild.doebuild:doebuild_environment,' + \
- '_doebuild_spawn',
- 'portage.package.ebuild.prepare_build_dirs:prepare_build_dirs',
- 'portage.util:atomic_ofstream',
- 'portage.util.configparser:SafeConfigParser,read_configs,' +
- 'ConfigParserError',
- 'portage.util.install_mask:_raise_exc',
- 'portage.util._urlopen:urlopen',
+
+ portage.proxy.lazyimport.lazyimport(
+ globals(),
+ "portage.package.ebuild.config:check_config_instance,config",
+ "portage.package.ebuild.doebuild:doebuild_environment," + "_doebuild_spawn",
+ "portage.package.ebuild.prepare_build_dirs:prepare_build_dirs",
+ "portage.util:atomic_ofstream",
+ "portage.util.configparser:SafeConfigParser,read_configs," + "ConfigParserError",
+ "portage.util.install_mask:_raise_exc",
+ "portage.util._urlopen:urlopen",
)
- from portage import os, selinux, shutil, _encodings, \
- _movefile, _shell_quote, _unicode_encode
- from portage.checksum import (get_valid_checksum_keys, perform_md5, verify_all,
- _filter_unaccelarated_hashes, _hash_filter, _apply_hash_filter,
- checksum_str)
- from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
- GLOBAL_CONFIG_PATH
+ from portage import (
+ os,
+ selinux,
+ shutil,
+ _encodings,
+ _movefile,
+ _shell_quote,
+ _unicode_encode,
+ )
+ from portage.checksum import (
+ get_valid_checksum_keys,
+ perform_md5,
+ verify_all,
+ _filter_unaccelarated_hashes,
+ _hash_filter,
+ _apply_hash_filter,
+ checksum_str,
+ )
+ from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, GLOBAL_CONFIG_PATH
++# PREFIX LOCAL
+from portage.const import rootgid
from portage.data import portage_gid, portage_uid, userpriv_groups
- from portage.exception import FileNotFound, OperationNotPermitted, \
- PortageException, TryAgain
+ from portage.exception import (
+ FileNotFound,
+ OperationNotPermitted,
+ PortageException,
+ TryAgain,
+ )
from portage.localization import _
from portage.locks import lockfile, unlockfile
from portage.output import colorize, EOutput
@@@ -181,55 -217,63 +219,64 @@@ def _userpriv_test_write_file(settings
def _ensure_distdir(settings, distdir):
- """
- Ensure that DISTDIR exists with appropriate permissions.
-
- @param settings: portage config
- @type settings: portage.package.ebuild.config.config
- @param distdir: DISTDIR path
- @type distdir: str
- @raise PortageException: portage.exception wrapper exception
- """
- global _userpriv_test_write_file_cache
- dirmode = 0o070
- filemode = 0o60
- modemask = 0o2
- dir_gid = portage_gid
- if "FAKED_MODE" in settings:
- # When inside fakeroot, directories with portage's gid appear
- # to have root's gid. Therefore, use root's gid instead of
- # portage's gid to avoid spurrious permissions adjustments
- # when inside fakeroot.
- dir_gid = rootgid
-
- userfetch = portage.data.secpass >= 2 and "userfetch" in settings.features
- userpriv = portage.data.secpass >= 2 and "userpriv" in settings.features
- write_test_file = os.path.join(distdir, ".__portage_test_write__")
-
- try:
- st = os.stat(distdir)
- except OSError:
- st = None
-
- if st is not None and stat.S_ISDIR(st.st_mode):
- if not (userfetch or userpriv):
- return
- if _userpriv_test_write_file(settings, write_test_file):
- return
-
- _userpriv_test_write_file_cache.pop(write_test_file, None)
- if ensure_dirs(distdir, gid=dir_gid, mode=dirmode, mask=modemask):
- if st is None:
- # The directory has just been created
- # and therefore it must be empty.
- return
- writemsg(_("Adjusting permissions recursively: '%s'\n") % distdir,
- noiselevel=-1)
- if not apply_recursive_permissions(distdir,
- gid=dir_gid, dirmode=dirmode, dirmask=modemask,
- filemode=filemode, filemask=modemask, onerror=_raise_exc):
- raise OperationNotPermitted(
- _("Failed to apply recursive permissions for the portage group."))
+ """
+ Ensure that DISTDIR exists with appropriate permissions.
+
+ @param settings: portage config
+ @type settings: portage.package.ebuild.config.config
+ @param distdir: DISTDIR path
+ @type distdir: str
+ @raise PortageException: portage.exception wrapper exception
+ """
+ global _userpriv_test_write_file_cache
+ dirmode = 0o070
+ filemode = 0o60
+ modemask = 0o2
+ dir_gid = portage_gid
+ if "FAKED_MODE" in settings:
+ # When inside fakeroot, directories with portage's gid appear
+ # to have root's gid. Therefore, use root's gid instead of
+ # portage's gid to avoid spurious permissions adjustments
+ # when inside fakeroot.
- dir_gid = 0
++ # PREFIX LOCAL: do not assume root to be 0
++ dir_gid = rootgid
+
+ userfetch = portage.data.secpass >= 2 and "userfetch" in settings.features
+ userpriv = portage.data.secpass >= 2 and "userpriv" in settings.features
+ write_test_file = os.path.join(distdir, ".__portage_test_write__")
+
+ try:
+ st = os.stat(distdir)
+ except OSError:
+ st = None
+
+ if st is not None and stat.S_ISDIR(st.st_mode):
+ if not (userfetch or userpriv):
+ return
+ if _userpriv_test_write_file(settings, write_test_file):
+ return
+
+ _userpriv_test_write_file_cache.pop(write_test_file, None)
+ if ensure_dirs(distdir, gid=dir_gid, mode=dirmode, mask=modemask):
+ if st is None:
+ # The directory has just been created
+ # and therefore it must be empty.
+ return
+ writemsg(
+ _("Adjusting permissions recursively: '%s'\n") % distdir, noiselevel=-1
+ )
+ if not apply_recursive_permissions(
+ distdir,
+ gid=dir_gid,
+ dirmode=dirmode,
+ dirmask=modemask,
+ filemode=filemode,
+ filemask=modemask,
+ onerror=_raise_exc,
+ ):
+ raise OperationNotPermitted(
+ _("Failed to apply recursive permissions for the portage group.")
+ )
def _checksum_failure_temp_file(settings, distdir, basename):
diff --cc lib/portage/process.py
index d608e6237,84e09f8ec..5879694f9
--- a/lib/portage/process.py
+++ b/lib/portage/process.py
@@@ -19,11 -19,13 +19,13 @@@ from portage import o
from portage import _encodings
from portage import _unicode_encode
import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.util:dump_traceback,writemsg',
+
+ portage.proxy.lazyimport.lazyimport(
+ globals(),
+ "portage.util:dump_traceback,writemsg",
)
-from portage.const import BASH_BINARY, SANDBOX_BINARY, FAKEROOT_BINARY
+from portage.const import BASH_BINARY, SANDBOX_BINARY, MACOSSANDBOX_BINARY, FAKEROOT_BINARY
from portage.exception import CommandNotFound
from portage.util._ctypes import find_library, LoadLibrary, ctypes
@@@ -55,165 -58,166 +58,181 @@@ except AttributeError
# Prefer /proc/self/fd if available (/dev/fd
# doesn't work on solaris, see bug #474536).
for _fd_dir in ("/proc/self/fd", "/dev/fd"):
- if os.path.isdir(_fd_dir):
- break
- else:
- _fd_dir = None
+ if os.path.isdir(_fd_dir):
+ break
+ else:
+ _fd_dir = None
# /dev/fd does not work on FreeBSD, see bug #478446
- if platform.system() in ('FreeBSD',) and _fd_dir == '/dev/fd':
- _fd_dir = None
+ if platform.system() in ("FreeBSD",) and _fd_dir == "/dev/fd":
+ _fd_dir = None
if _fd_dir is not None:
- def get_open_fds():
- return (int(fd) for fd in os.listdir(_fd_dir) if fd.isdigit())
-
- if platform.python_implementation() == 'PyPy':
- # EAGAIN observed with PyPy 1.8.
- _get_open_fds = get_open_fds
- def get_open_fds():
- try:
- return _get_open_fds()
- except OSError as e:
- if e.errno != errno.EAGAIN:
- raise
- return range(max_fd_limit)
+
+ def get_open_fds():
+ return (int(fd) for fd in os.listdir(_fd_dir) if fd.isdigit())
+
+ if platform.python_implementation() == "PyPy":
+ # EAGAIN observed with PyPy 1.8.
+ _get_open_fds = get_open_fds
+
+ def get_open_fds():
+ try:
+ return _get_open_fds()
+ except OSError as e:
+ if e.errno != errno.EAGAIN:
+ raise
+ return range(max_fd_limit)
elif os.path.isdir("/proc/%s/fd" % portage.getpid()):
- # In order for this function to work in forked subprocesses,
- # os.getpid() must be called from inside the function.
- def get_open_fds():
- return (int(fd) for fd in os.listdir("/proc/%s/fd" % portage.getpid())
- if fd.isdigit())
+ # In order for this function to work in forked subprocesses,
+ # os.getpid() must be called from inside the function.
+ def get_open_fds():
+ return (
+ int(fd)
+ for fd in os.listdir("/proc/%s/fd" % portage.getpid())
+ if fd.isdigit()
+ )
else:
- def get_open_fds():
- return range(max_fd_limit)
- sandbox_capable = (os.path.isfile(SANDBOX_BINARY) and
- os.access(SANDBOX_BINARY, os.X_OK))
+ def get_open_fds():
+ return range(max_fd_limit)
- fakeroot_capable = (os.path.isfile(FAKEROOT_BINARY) and
- os.access(FAKEROOT_BINARY, os.X_OK))
+
+ sandbox_capable = os.path.isfile(SANDBOX_BINARY) and os.access(SANDBOX_BINARY, os.X_OK)
+
+ fakeroot_capable = os.path.isfile(FAKEROOT_BINARY) and os.access(
+ FAKEROOT_BINARY, os.X_OK
+ )
+macossandbox_capable = (os.path.isfile(MACOSSANDBOX_BINARY) and
+ os.access(MACOSSANDBOX_BINARY, os.X_OK))
def sanitize_fds():
- """
- Set the inheritable flag to False for all open file descriptors,
- except for those corresponding to stdin, stdout, and stderr. This
- ensures that any unintentionally inherited file descriptors will
- not be inherited by child processes.
- """
- if _set_inheritable is not None:
-
- whitelist = frozenset([
- portage._get_stdin().fileno(),
- sys.__stdout__.fileno(),
- sys.__stderr__.fileno(),
- ])
-
- for fd in get_open_fds():
- if fd not in whitelist:
- try:
- _set_inheritable(fd, False)
- except OSError:
- pass
+ """
+ Set the inheritable flag to False for all open file descriptors,
+ except for those corresponding to stdin, stdout, and stderr. This
+ ensures that any unintentionally inherited file descriptors will
+ not be inherited by child processes.
+ """
+ if _set_inheritable is not None:
+
+ whitelist = frozenset(
+ [
+ portage._get_stdin().fileno(),
+ sys.__stdout__.fileno(),
+ sys.__stderr__.fileno(),
+ ]
+ )
+
+ for fd in get_open_fds():
+ if fd not in whitelist:
+ try:
+ _set_inheritable(fd, False)
+ except OSError:
+ pass
def spawn_bash(mycommand, debug=False, opt_name=None, **keywords):
- """
- Spawns a bash shell running a specific commands
-
- @param mycommand: The command for bash to run
- @type mycommand: String
- @param debug: Turn bash debugging on (set -x)
- @type debug: Boolean
- @param opt_name: Name of the spawned process (detaults to binary name)
- @type opt_name: String
- @param keywords: Extra Dictionary arguments to pass to spawn
- @type keywords: Dictionary
- """
-
- args = [BASH_BINARY]
- if not opt_name:
- opt_name = os.path.basename(mycommand.split()[0])
- if debug:
- # Print commands and their arguments as they are executed.
- args.append("-x")
- args.append("-c")
- args.append(mycommand)
- return spawn(args, opt_name=opt_name, **keywords)
+ """
+ Spawns a bash shell running a specific command
+
+ @param mycommand: The command for bash to run
+ @type mycommand: String
+ @param debug: Turn bash debugging on (set -x)
+ @type debug: Boolean
+ @param opt_name: Name of the spawned process (defaults to binary name)
+ @type opt_name: String
+ @param keywords: Extra Dictionary arguments to pass to spawn
+ @type keywords: Dictionary
+ """
+
+ args = [BASH_BINARY]
+ if not opt_name:
+ opt_name = os.path.basename(mycommand.split()[0])
+ if debug:
+ # Print commands and their arguments as they are executed.
+ args.append("-x")
+ args.append("-c")
+ args.append(mycommand)
+ return spawn(args, opt_name=opt_name, **keywords)
+
def spawn_sandbox(mycommand, opt_name=None, **keywords):
- if not sandbox_capable:
- return spawn_bash(mycommand, opt_name=opt_name, **keywords)
- args = [SANDBOX_BINARY]
- if not opt_name:
- opt_name = os.path.basename(mycommand.split()[0])
- args.append(mycommand)
- return spawn(args, opt_name=opt_name, **keywords)
+ if not sandbox_capable:
+ return spawn_bash(mycommand, opt_name=opt_name, **keywords)
+ args = [SANDBOX_BINARY]
+ if not opt_name:
+ opt_name = os.path.basename(mycommand.split()[0])
+ args.append(mycommand)
+ return spawn(args, opt_name=opt_name, **keywords)
+
def spawn_fakeroot(mycommand, fakeroot_state=None, opt_name=None, **keywords):
- args = [FAKEROOT_BINARY]
- if not opt_name:
- opt_name = os.path.basename(mycommand.split()[0])
- if fakeroot_state:
- open(fakeroot_state, "a").close()
- args.append("-s")
- args.append(fakeroot_state)
- args.append("-i")
- args.append(fakeroot_state)
- args.append("--")
- args.append(BASH_BINARY)
- args.append("-c")
- args.append(mycommand)
- return spawn(args, opt_name=opt_name, **keywords)
+ args = [FAKEROOT_BINARY]
+ if not opt_name:
+ opt_name = os.path.basename(mycommand.split()[0])
+ if fakeroot_state:
+ open(fakeroot_state, "a").close()
+ args.append("-s")
+ args.append(fakeroot_state)
+ args.append("-i")
+ args.append(fakeroot_state)
+ args.append("--")
+ args.append(BASH_BINARY)
+ args.append("-c")
+ args.append(mycommand)
+ return spawn(args, opt_name=opt_name, **keywords)
+
+def spawn_macossandbox(mycommand, profile=None, opt_name=None, **keywords):
+ if not macossandbox_capable:
+ return spawn_bash(mycommand, opt_name=opt_name, **keywords)
+ args=[MACOSSANDBOX_BINARY]
+ if not opt_name:
+ opt_name = os.path.basename(mycommand.split()[0])
+ args.append("-p")
+ args.append(profile)
+ args.append(BASH_BINARY)
+ args.append("-c")
+ args.append(mycommand)
+ return spawn(args, opt_name=opt_name, **keywords)
+
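Hypothetical usage of the new helper: when MACOSSANDBOX_BINARY is missing, spawn_macossandbox() silently degrades to spawn_bash(), so callers need no platform check of their own. The profile string below is only placeholder text:

    profile = "(version 1)\n(allow default)\n"   # placeholder profile text
    rc = spawn_macossandbox('echo "hello from the sandbox"', profile=profile)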
_exithandlers = []
+
+
def atexit_register(func, *args, **kargs):
- """Wrapper around atexit.register that is needed in order to track
- what is registered. For example, when portage restarts itself via
- os.execv, the atexit module does not work so we have to do it
- manually by calling the run_exitfuncs() function in this module."""
- _exithandlers.append((func, args, kargs))
+ """Wrapper around atexit.register that is needed in order to track
+ what is registered. For example, when portage restarts itself via
+ os.execv, the atexit module does not work so we have to do it
+ manually by calling the run_exitfuncs() function in this module."""
+ _exithandlers.append((func, args, kargs))
+
def run_exitfuncs():
- """This should behave identically to the routine performed by
- the atexit module at exit time. It's only necessary to call this
- function when atexit will not work (because of os.execv, for
- example)."""
-
- # This function is a copy of the private atexit._run_exitfuncs()
- # from the python 2.4.2 sources. The only difference from the
- # original function is in the output to stderr.
- exc_info = None
- while _exithandlers:
- func, targs, kargs = _exithandlers.pop()
- try:
- func(*targs, **kargs)
- except SystemExit:
- exc_info = sys.exc_info()
- except: # No idea what they called, so we need this broad except here.
- dump_traceback("Error in portage.process.run_exitfuncs", noiselevel=0)
- exc_info = sys.exc_info()
-
- if exc_info is not None:
- raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
+ """This should behave identically to the routine performed by
+ the atexit module at exit time. It's only necessary to call this
+ function when atexit will not work (because of os.execv, for
+ example)."""
+
+ # This function is a copy of the private atexit._run_exitfuncs()
+ # from the python 2.4.2 sources. The only difference from the
+ # original function is in the output to stderr.
+ exc_info = None
+ while _exithandlers:
+ func, targs, kargs = _exithandlers.pop()
+ try:
+ func(*targs, **kargs)
+ except SystemExit:
+ exc_info = sys.exc_info()
+ except: # No idea what they called, so we need this broad except here.
+ dump_traceback("Error in portage.process.run_exitfuncs", noiselevel=0)
+ exc_info = sys.exc_info()
+
+ if exc_info is not None:
+ raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
+
atexit.register(run_exitfuncs)
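The pair above is what lets cleanup survive portage's self-restart: handlers go through atexit_register() so they are tracked in _exithandlers, and run_exitfuncs() must be called explicitly right before os.execv(), because the stdlib atexit machinery never fires once the process image is replaced. A minimal sketch of that calling pattern (the callback and its argument are made up for illustration):

import os
import sys

import portage.process


def _cleanup(tmpdir):
    # hypothetical handler; it may run manually before execv
    # as well as at normal interpreter exit
    print("removing", tmpdir)


portage.process.atexit_register(_cleanup, "/tmp/portage-example")

# ...just before portage re-executes itself:
portage.process.run_exitfuncs()  # atexit hooks would be skipped after execv
os.execv(sys.executable, [sys.executable, "-c", "print('restarted')"])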
diff --cc lib/portage/tests/lazyimport/test_lazy_import_portage_baseline.py
index 2bc54698a,cf239240c..1115b18d7
--- a/lib/portage/tests/lazyimport/test_lazy_import_portage_baseline.py
+++ b/lib/portage/tests/lazyimport/test_lazy_import_portage_baseline.py
@@@ -11,19 -11,26 +11,28 @@@ from portage.util._eventloop.global_eve
from _emerge.PipeReader import PipeReader
from _emerge.SpawnProcess import SpawnProcess
+
class LazyImportPortageBaselineTestCase(TestCase):
- _module_re = re.compile(r'^(portage|repoman|_emerge)\.')
+ _module_re = re.compile(r"^(portage|repoman|_emerge)\.")
- _baseline_imports = frozenset([
- 'portage.const', 'portage.localization',
- 'portage.proxy', 'portage.proxy.lazyimport',
- 'portage.proxy.objectproxy',
- 'portage._selinux',
- 'portage.const_autotool',
- ])
+ _baseline_imports = frozenset(
+ [
+ "portage.const",
+ "portage.localization",
+ "portage.proxy",
+ "portage.proxy.lazyimport",
+ "portage.proxy.objectproxy",
+ "portage._selinux",
++ # PREFIX LOCAL
++ 'portage.const_autotool',
+ ]
+ )
- _baseline_import_cmd = [portage._python_interpreter, '-c', '''
+ _baseline_import_cmd = [
+ portage._python_interpreter,
+ "-c",
+ """
import os
import sys
sys.path.insert(0, os.environ["PORTAGE_PYM_PATH"])
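The baseline set above is what the test expects to be imported when portage itself is loaded with lazy imports intact (with portage.const_autotool added for the prefix branch). For context, a rough sketch of the lazyimport pattern the test protects, assuming portage.util:writemsg as the lazily bound name:

import portage.proxy.lazyimport

# Bind writemsg lazily into this namespace; portage.util is only
# really imported the first time the proxied name is used.
portage.proxy.lazyimport.lazyimport(
    globals(),
    "portage.util:writemsg",
)

writemsg("portage.util gets imported here, not above\n")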
diff --cc lib/portage/tests/resolver/ResolverPlayground.py
index 67267a5cd,fdd0714e6..969d8f2fb
--- a/lib/portage/tests/resolver/ResolverPlayground.py
+++ b/lib/portage/tests/resolver/ResolverPlayground.py
@@@ -72,616 -88,669 +88,671 @@@ class ResolverPlayground
</pkgmetadata>
"""
- portage_bin = (
- 'ebuild',
- 'egencache',
- 'emerge',
- 'emerge-webrsync',
- 'emirrordist',
- 'glsa-check',
- 'portageq',
- 'quickpkg',
- )
-
- portage_sbin = (
- 'archive-conf',
- 'dispatch-conf',
- 'emaint',
- 'env-update',
- 'etc-update',
- 'fixpackages',
- 'regenworld',
- )
-
- def __init__(self, ebuilds={}, binpkgs={}, installed={}, profile={}, repo_configs={}, \
- user_config={}, sets={}, world=[], world_sets=[], distfiles={}, eclasses={},
- eprefix=None, targetroot=False, debug=False):
- """
- ebuilds: cpv -> metadata mapping simulating available ebuilds.
- installed: cpv -> metadata mapping simulating installed packages.
- If a metadata key is missing, it gets a default value.
- profile: settings defined by the profile.
- """
-
- self.debug = debug
- if eprefix is None:
- self.eprefix = normalize_path(tempfile.mkdtemp())
-
- # EPREFIX/bin is used by fake true_binaries. Real binaries go into EPREFIX/usr/bin
- eubin = os.path.join(self.eprefix, "usr", "bin")
- ensure_dirs(eubin)
- for x in self.portage_bin:
- os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eubin, x))
-
- eusbin = os.path.join(self.eprefix, "usr", "sbin")
- ensure_dirs(eusbin)
- for x in self.portage_sbin:
- os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eusbin, x))
-
- essential_binaries = (
- "awk",
- "basename",
- "bzip2",
- "cat",
- "chgrp",
- "chmod",
- "chown",
- "comm",
- "cp",
- "egrep",
- "env",
- "find",
- "grep",
- "head",
- "install",
- "ln",
- "mkdir",
- "mkfifo",
- "mktemp",
- "mv",
- "readlink",
- "rm",
- "sed",
- "sort",
- "tar",
- "tr",
- "uname",
- "uniq",
- "xargs",
- "zstd",
- )
- # Exclude internal wrappers from PATH lookup.
- orig_path = os.environ['PATH']
- included_paths = []
- for path in orig_path.split(':'):
- if path and not fnmatch.fnmatch(path, '*/portage/*/ebuild-helpers*'):
- included_paths.append(path)
- try:
- os.environ['PATH'] = ':'.join(included_paths)
- for x in essential_binaries:
- path = find_binary(x)
- if path is None:
- raise portage.exception.CommandNotFound(x)
- os.symlink(path, os.path.join(eubin, x))
- finally:
- os.environ['PATH'] = orig_path
- else:
- self.eprefix = normalize_path(eprefix)
-
- # Tests may override portage.const.EPREFIX in order to
- # simulate a prefix installation. It's reasonable to do
- # this because tests should be self-contained such that
- # the "real" value of portage.const.EPREFIX is entirely
- # irrelevant (see bug #492932).
- self._orig_eprefix = portage.const.EPREFIX
- portage.const.EPREFIX = self.eprefix.rstrip(os.sep)
-
- self.eroot = self.eprefix + os.sep
- if targetroot:
- self.target_root = os.path.join(self.eroot, 'target_root')
- else:
- self.target_root = os.sep
- self.distdir = os.path.join(self.eroot, "var", "portage", "distfiles")
- self.pkgdir = os.path.join(self.eprefix, "pkgdir")
- self.vdbdir = os.path.join(self.eroot, "var/db/pkg")
- os.makedirs(self.vdbdir)
-
- if not debug:
- portage.util.noiselimit = -2
-
- self._repositories = {}
- #Make sure the main repo is always created
- self._get_repo_dir("test_repo")
-
- self._create_distfiles(distfiles)
- self._create_ebuilds(ebuilds)
- self._create_binpkgs(binpkgs)
- self._create_installed(installed)
- self._create_profile(ebuilds, eclasses, installed, profile, repo_configs, user_config, sets)
- self._create_world(world, world_sets)
-
- self.settings, self.trees = self._load_config()
-
- self._create_ebuild_manifests(ebuilds)
-
- portage.util.noiselimit = 0
-
- def reload_config(self):
- """
- Reload configuration from disk, which is useful if it has
- been modified after the constructor has been called.
- """
- for eroot in self.trees:
- portdb = self.trees[eroot]["porttree"].dbapi
- portdb.close_caches()
- self.settings, self.trees = self._load_config()
-
- def _get_repo_dir(self, repo):
- """
- Create the repo directory if needed.
- """
- if repo not in self._repositories:
- if repo == "test_repo":
- self._repositories["DEFAULT"] = {"main-repo": repo}
-
- repo_path = os.path.join(self.eroot, "var", "repositories", repo)
- self._repositories[repo] = {"location": repo_path}
- profile_path = os.path.join(repo_path, "profiles")
-
- try:
- os.makedirs(profile_path)
- except os.error:
- pass
-
- repo_name_file = os.path.join(profile_path, "repo_name")
- with open(repo_name_file, "w") as f:
- f.write("%s\n" % repo)
-
- return self._repositories[repo]["location"]
-
- def _create_distfiles(self, distfiles):
- os.makedirs(self.distdir)
- for k, v in distfiles.items():
- with open(os.path.join(self.distdir, k), 'wb') as f:
- f.write(v)
-
- def _create_ebuilds(self, ebuilds):
- for cpv in ebuilds:
- a = Atom("=" + cpv, allow_repo=True)
- repo = a.repo
- if repo is None:
- repo = "test_repo"
-
- metadata = ebuilds[cpv].copy()
- copyright_header = metadata.pop("COPYRIGHT_HEADER", None)
- eapi = metadata.pop("EAPI", "0")
- misc_content = metadata.pop("MISC_CONTENT", None)
- metadata.setdefault("DEPEND", "")
- metadata.setdefault("SLOT", "0")
- metadata.setdefault("KEYWORDS", "x86")
- metadata.setdefault("IUSE", "")
-
- unknown_keys = set(metadata).difference(
- portage.dbapi.dbapi._known_keys)
- if unknown_keys:
- raise ValueError("metadata of ebuild '%s' contains unknown keys: %s" %
- (cpv, sorted(unknown_keys)))
-
- repo_dir = self._get_repo_dir(repo)
- ebuild_dir = os.path.join(repo_dir, a.cp)
- ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
- try:
- os.makedirs(ebuild_dir)
- except os.error:
- pass
-
- with open(ebuild_path, "w") as f:
- if copyright_header is not None:
- f.write(copyright_header)
- f.write('EAPI="%s"\n' % eapi)
- for k, v in metadata.items():
- f.write('%s="%s"\n' % (k, v))
- if misc_content is not None:
- f.write(misc_content)
-
- def _create_ebuild_manifests(self, ebuilds):
- tmpsettings = config(clone=self.settings)
- tmpsettings['PORTAGE_QUIET'] = '1'
- for cpv in ebuilds:
- a = Atom("=" + cpv, allow_repo=True)
- repo = a.repo
- if repo is None:
- repo = "test_repo"
-
- repo_dir = self._get_repo_dir(repo)
- ebuild_dir = os.path.join(repo_dir, a.cp)
- ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
-
- portdb = self.trees[self.eroot]["porttree"].dbapi
- tmpsettings['O'] = ebuild_dir
- if not digestgen(mysettings=tmpsettings, myportdb=portdb):
- raise AssertionError('digest creation failed for %s' % ebuild_path)
-
- def _create_binpkgs(self, binpkgs):
- # When using BUILD_ID, there can be multiple instances for the
- # same cpv. Therefore, binpkgs may be an iterable instead of
- # a dict.
- items = getattr(binpkgs, 'items', None)
- items = items() if items is not None else binpkgs
- for cpv, metadata in items:
- a = Atom("=" + cpv, allow_repo=True)
- repo = a.repo
- if repo is None:
- repo = "test_repo"
-
- pn = catsplit(a.cp)[1]
- cat, pf = catsplit(a.cpv)
- metadata = metadata.copy()
- metadata.setdefault("SLOT", "0")
- metadata.setdefault("KEYWORDS", "x86")
- metadata.setdefault("BUILD_TIME", "0")
- metadata["repository"] = repo
- metadata["CATEGORY"] = cat
- metadata["PF"] = pf
- metadata["EPREFIX"] = self.eprefix
-
- repo_dir = self.pkgdir
- category_dir = os.path.join(repo_dir, cat)
- if "BUILD_ID" in metadata:
- binpkg_path = os.path.join(category_dir, pn,
- "%s-%s.xpak"% (pf, metadata["BUILD_ID"]))
- else:
- binpkg_path = os.path.join(category_dir, pf + ".tbz2")
-
- ensure_dirs(os.path.dirname(binpkg_path))
- t = portage.xpak.tbz2(binpkg_path)
- t.recompose_mem(portage.xpak.xpak_mem(metadata))
-
- def _create_installed(self, installed):
- for cpv in installed:
- a = Atom("=" + cpv, allow_repo=True)
- repo = a.repo
- if repo is None:
- repo = "test_repo"
-
- vdb_pkg_dir = os.path.join(self.vdbdir, a.cpv)
- try:
- os.makedirs(vdb_pkg_dir)
- except os.error:
- pass
-
- metadata = installed[cpv].copy()
- metadata.setdefault("SLOT", "0")
- metadata.setdefault("BUILD_TIME", "0")
- metadata.setdefault("COUNTER", "0")
- metadata.setdefault("KEYWORDS", "~x86")
-
- unknown_keys = set(metadata).difference(
- portage.dbapi.dbapi._known_keys)
- unknown_keys.discard("BUILD_TIME")
- unknown_keys.discard("BUILD_ID")
- unknown_keys.discard("COUNTER")
- unknown_keys.discard("repository")
- unknown_keys.discard("USE")
- unknown_keys.discard("PROVIDES")
- unknown_keys.discard("REQUIRES")
- if unknown_keys:
- raise ValueError("metadata of installed '%s' contains unknown keys: %s" %
- (cpv, sorted(unknown_keys)))
-
- metadata["repository"] = repo
- for k, v in metadata.items():
- with open(os.path.join(vdb_pkg_dir, k), "w") as f:
- f.write("%s\n" % v)
-
- ebuild_path = os.path.join(vdb_pkg_dir, a.cpv.split("/")[1] + ".ebuild")
- with open(ebuild_path, "w") as f:
- f.write('EAPI="%s"\n' % metadata.pop('EAPI', '0'))
- for k, v in metadata.items():
- f.write('%s="%s"\n' % (k, v))
-
- env_path = os.path.join(vdb_pkg_dir, 'environment.bz2')
- with bz2.BZ2File(env_path, mode='w') as f:
- with open(ebuild_path, 'rb') as inputfile:
- f.write(inputfile.read())
-
- def _create_profile(self, ebuilds, eclasses, installed, profile, repo_configs, user_config, sets):
-
- user_config_dir = os.path.join(self.eroot, USER_CONFIG_PATH)
-
- try:
- os.makedirs(user_config_dir)
- except os.error:
- pass
-
- for repo in self._repositories:
- if repo == "DEFAULT":
- continue
-
- repo_dir = self._get_repo_dir(repo)
- profile_dir = os.path.join(repo_dir, "profiles")
- metadata_dir = os.path.join(repo_dir, "metadata")
- os.makedirs(metadata_dir)
-
- #Create $REPO/profiles/categories
- categories = set()
- for cpv in ebuilds:
- ebuilds_repo = Atom("="+cpv, allow_repo=True).repo
- if ebuilds_repo is None:
- ebuilds_repo = "test_repo"
- if ebuilds_repo == repo:
- categories.add(catsplit(cpv)[0])
-
- categories_file = os.path.join(profile_dir, "categories")
- with open(categories_file, "w") as f:
- for cat in categories:
- f.write(cat + "\n")
-
- #Create $REPO/profiles/license_groups
- license_file = os.path.join(profile_dir, "license_groups")
- with open(license_file, "w") as f:
- f.write("EULA TEST\n")
-
- repo_config = repo_configs.get(repo)
- if repo_config:
- for config_file, lines in repo_config.items():
- if config_file not in self.config_files and not any(fnmatch.fnmatch(config_file, os.path.join(x, "*")) for x in self.config_files):
- raise ValueError("Unknown config file: '%s'" % config_file)
-
- if config_file in ("layout.conf",):
- file_name = os.path.join(repo_dir, "metadata", config_file)
- else:
- file_name = os.path.join(profile_dir, config_file)
- if "/" in config_file and not os.path.isdir(os.path.dirname(file_name)):
- os.makedirs(os.path.dirname(file_name))
- with open(file_name, "w") as f:
- for line in lines:
- f.write("%s\n" % line)
- # Temporarily write empty value of masters until it becomes default.
- # TODO: Delete all references to "# use implicit masters" when empty value becomes default.
- if config_file == "layout.conf" and not any(line.startswith(("masters =", "# use implicit masters")) for line in lines):
- f.write("masters =\n")
-
- #Create $profile_dir/eclass (we fail to digest the ebuilds if it's not there)
- eclass_dir = os.path.join(repo_dir, "eclass")
- os.makedirs(eclass_dir)
-
- for eclass_name, eclass_content in eclasses.items():
- with open(os.path.join(eclass_dir, "{}.eclass".format(eclass_name)), 'wt') as f:
- if isinstance(eclass_content, str):
- eclass_content = [eclass_content]
- for line in eclass_content:
- f.write("{}\n".format(line))
-
- # Temporarily write empty value of masters until it becomes default.
- if not repo_config or "layout.conf" not in repo_config:
- layout_conf_path = os.path.join(repo_dir, "metadata", "layout.conf")
- with open(layout_conf_path, "w") as f:
- f.write("masters =\n")
-
- if repo == "test_repo":
- #Create a minimal profile in /var/db/repos/gentoo
- sub_profile_dir = os.path.join(profile_dir, "default", "linux", "x86", "test_profile")
- os.makedirs(sub_profile_dir)
-
- if not (profile and "eapi" in profile):
- eapi_file = os.path.join(sub_profile_dir, "eapi")
- with open(eapi_file, "w") as f:
- f.write("0\n")
-
- make_defaults_file = os.path.join(sub_profile_dir, "make.defaults")
- with open(make_defaults_file, "w") as f:
- f.write("ARCH=\"x86\"\n")
- f.write("ACCEPT_KEYWORDS=\"x86\"\n")
-
- use_force_file = os.path.join(sub_profile_dir, "use.force")
- with open(use_force_file, "w") as f:
- f.write("x86\n")
-
- parent_file = os.path.join(sub_profile_dir, "parent")
- with open(parent_file, "w") as f:
- f.write("..\n")
-
- if profile:
- for config_file, lines in profile.items():
- if config_file not in self.config_files:
- raise ValueError("Unknown config file: '%s'" % config_file)
-
- file_name = os.path.join(sub_profile_dir, config_file)
- with open(file_name, "w") as f:
- for line in lines:
- f.write("%s\n" % line)
-
- #Create profile symlink
- os.symlink(sub_profile_dir, os.path.join(user_config_dir, "make.profile"))
-
- make_conf = {
- "ACCEPT_KEYWORDS": "x86",
- "CLEAN_DELAY": "0",
- "DISTDIR" : self.distdir,
- "EMERGE_WARNING_DELAY": "0",
- "PKGDIR": self.pkgdir,
- "PORTAGE_INST_GID": str(portage.data.portage_gid),
- "PORTAGE_INST_UID": str(portage.data.portage_uid),
- "PORTAGE_TMPDIR": os.path.join(self.eroot, 'var/tmp'),
- }
-
- if os.environ.get("NOCOLOR"):
- make_conf["NOCOLOR"] = os.environ["NOCOLOR"]
-
- # Pass along PORTAGE_USERNAME and PORTAGE_GRPNAME since they
- # need to be inherited by ebuild subprocesses.
- if 'PORTAGE_USERNAME' in os.environ:
- make_conf['PORTAGE_USERNAME'] = os.environ['PORTAGE_USERNAME']
- if 'PORTAGE_GRPNAME' in os.environ:
- make_conf['PORTAGE_GRPNAME'] = os.environ['PORTAGE_GRPNAME']
-
- make_conf_lines = []
- for k_v in make_conf.items():
- make_conf_lines.append('%s="%s"' % k_v)
-
- if "make.conf" in user_config:
- make_conf_lines.extend(user_config["make.conf"])
-
- if not portage.process.sandbox_capable or \
- os.environ.get("SANDBOX_ON") == "1":
- # avoid problems from nested sandbox instances
- make_conf_lines.append('FEATURES="${FEATURES} -sandbox -usersandbox"')
-
- configs = user_config.copy()
- configs["make.conf"] = make_conf_lines
-
- for config_file, lines in configs.items():
- if config_file not in self.config_files:
- raise ValueError("Unknown config file: '%s'" % config_file)
-
- file_name = os.path.join(user_config_dir, config_file)
- with open(file_name, "w") as f:
- for line in lines:
- f.write("%s\n" % line)
-
- #Create /usr/share/portage/config/make.globals
- make_globals_path = os.path.join(self.eroot,
- GLOBAL_CONFIG_PATH.lstrip(os.sep), "make.globals")
- ensure_dirs(os.path.dirname(make_globals_path))
- os.symlink(os.path.join(cnf_path, "make.globals"),
- make_globals_path)
-
- #Create /usr/share/portage/config/sets/portage.conf
- default_sets_conf_dir = os.path.join(self.eroot, "usr/share/portage/config/sets")
-
- try:
- os.makedirs(default_sets_conf_dir)
- except os.error:
- pass
-
- provided_sets_portage_conf = (
- os.path.join(cnf_path, "sets", "portage.conf"))
- os.symlink(provided_sets_portage_conf, os.path.join(default_sets_conf_dir, "portage.conf"))
-
- set_config_dir = os.path.join(user_config_dir, "sets")
-
- try:
- os.makedirs(set_config_dir)
- except os.error:
- pass
-
- for sets_file, lines in sets.items():
- file_name = os.path.join(set_config_dir, sets_file)
- with open(file_name, "w") as f:
- for line in lines:
- f.write("%s\n" % line)
-
- if cnf_path_repoman is not None:
- #Create /usr/share/repoman
- repoman_share_dir = os.path.join(self.eroot, 'usr', 'share', 'repoman')
- os.symlink(cnf_path_repoman, repoman_share_dir)
-
- def _create_world(self, world, world_sets):
- #Create /var/lib/portage/world
- var_lib_portage = os.path.join(self.eroot, "var", "lib", "portage")
- os.makedirs(var_lib_portage)
-
- world_file = os.path.join(var_lib_portage, "world")
- world_set_file = os.path.join(var_lib_portage, "world_sets")
-
- with open(world_file, "w") as f:
- for atom in world:
- f.write("%s\n" % atom)
-
- with open(world_set_file, "w") as f:
- for atom in world_sets:
- f.write("%s\n" % atom)
-
- def _load_config(self):
-
- create_trees_kwargs = {}
- if self.target_root != os.sep:
- create_trees_kwargs["target_root"] = self.target_root
-
- env = {
- "PORTAGE_REPOSITORIES": "\n".join("[%s]\n%s" % (repo_name, "\n".join("%s = %s" % (k, v) for k, v in repo_config.items())) for repo_name, repo_config in self._repositories.items())
- }
-
- if self.debug:
- env["PORTAGE_DEBUG"] = "1"
-
- trees = portage.create_trees(env=env, eprefix=self.eprefix,
- **create_trees_kwargs)
-
- for root, root_trees in trees.items():
- settings = root_trees["vartree"].settings
- settings._init_dirs()
- setconfig = load_default_config(settings, root_trees)
- root_trees["root_config"] = RootConfig(settings, root_trees, setconfig)
-
- return trees[trees._target_eroot]["vartree"].settings, trees
-
- def run(self, atoms, options={}, action=None):
- options = options.copy()
- options["--pretend"] = True
- if self.debug:
- options["--debug"] = True
-
- if action is None:
- if options.get("--depclean"):
- action = "depclean"
- elif options.get("--prune"):
- action = "prune"
-
- if "--usepkgonly" in options:
- options["--usepkg"] = True
-
- global_noiselimit = portage.util.noiselimit
- global_emergelog_disable = _emerge.emergelog._disable
- try:
-
- if not self.debug:
- portage.util.noiselimit = -2
- _emerge.emergelog._disable = True
-
- if action in ("depclean", "prune"):
- depclean_result = _calc_depclean(self.settings, self.trees, None,
- options, action, InternalPackageSet(initial_atoms=atoms, allow_wildcard=True), None)
- result = ResolverPlaygroundDepcleanResult(
- atoms,
- depclean_result.returncode,
- depclean_result.cleanlist,
- depclean_result.ordered,
- depclean_result.req_pkg_count,
- depclean_result.depgraph,
- )
- else:
- params = create_depgraph_params(options, action)
- success, depgraph, favorites = backtrack_depgraph(
- self.settings, self.trees, options, params, action, atoms, None)
- depgraph._show_merge_list()
- depgraph.display_problems()
- result = ResolverPlaygroundResult(atoms, success, depgraph, favorites)
- finally:
- portage.util.noiselimit = global_noiselimit
- _emerge.emergelog._disable = global_emergelog_disable
-
- return result
-
- def run_TestCase(self, test_case):
- if not isinstance(test_case, ResolverPlaygroundTestCase):
- raise TypeError("ResolverPlayground needs a ResolverPlaygroundTestCase")
- for atoms in test_case.requests:
- result = self.run(atoms, test_case.options, test_case.action)
- if not test_case.compare_with_result(result):
- return
-
- def cleanup(self):
- for eroot in self.trees:
- portdb = self.trees[eroot]["porttree"].dbapi
- portdb.close_caches()
- if self.debug:
- print("\nEROOT=%s" % self.eroot)
- else:
- shutil.rmtree(self.eroot)
- if hasattr(self, '_orig_eprefix'):
- portage.const.EPREFIX = self._orig_eprefix
+ portage_bin = (
+ "ebuild",
+ "egencache",
+ "emerge",
+ "emerge-webrsync",
+ "emirrordist",
+ "glsa-check",
+ "portageq",
+ "quickpkg",
+ )
+
+ portage_sbin = (
+ "archive-conf",
+ "dispatch-conf",
+ "emaint",
+ "env-update",
+ "etc-update",
+ "fixpackages",
+ "regenworld",
+ )
+
+ def __init__(
+ self,
+ ebuilds={},
+ binpkgs={},
+ installed={},
+ profile={},
+ repo_configs={},
+ user_config={},
+ sets={},
+ world=[],
+ world_sets=[],
+ distfiles={},
+ eclasses={},
+ eprefix=None,
+ targetroot=False,
+ debug=False,
+ ):
+ """
+ ebuilds: cpv -> metadata mapping simulating available ebuilds.
+ installed: cpv -> metadata mapping simulating installed packages.
+ If a metadata key is missing, it gets a default value.
+ profile: settings defined by the profile.
+ """
+
+ self.debug = debug
+ if eprefix is None:
+ self.eprefix = normalize_path(tempfile.mkdtemp())
+
+ # EPREFIX/bin is used by fake true_binaries. Real binaries go into EPREFIX/usr/bin
+ eubin = os.path.join(self.eprefix, "usr", "bin")
+ ensure_dirs(eubin)
+ for x in self.portage_bin:
+ os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eubin, x))
+
+ eusbin = os.path.join(self.eprefix, "usr", "sbin")
+ ensure_dirs(eusbin)
+ for x in self.portage_sbin:
+ os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eusbin, x))
+
+ essential_binaries = (
+ "awk",
+ "basename",
+ "bzip2",
+ "cat",
+ "chgrp",
+ "chmod",
+ "chown",
+ "comm",
+ "cp",
+ "egrep",
+ "env",
+ "find",
+ "grep",
+ "head",
+ "install",
+ "ln",
+ "mkdir",
+ "mkfifo",
+ "mktemp",
+ "mv",
+ "readlink",
+ "rm",
+ "sed",
+ "sort",
+ "tar",
+ "tr",
+ "uname",
+ "uniq",
+ "xargs",
+ "zstd",
+ )
+ # Exclude internal wrappers from PATH lookup.
+ orig_path = os.environ["PATH"]
+ included_paths = []
+ for path in orig_path.split(":"):
+ if path and not fnmatch.fnmatch(path, "*/portage/*/ebuild-helpers*"):
+ included_paths.append(path)
+ try:
+ os.environ["PATH"] = ":".join(included_paths)
+ for x in essential_binaries:
+ path = find_binary(x)
+ if path is None:
+ raise portage.exception.CommandNotFound(x)
+ os.symlink(path, os.path.join(eubin, x))
+ finally:
+ os.environ["PATH"] = orig_path
+ else:
+ self.eprefix = normalize_path(eprefix)
+
+ # Tests may override portage.const.EPREFIX in order to
+ # simulate a prefix installation. It's reasonable to do
+ # this because tests should be self-contained such that
+ # the "real" value of portage.const.EPREFIX is entirely
+ # irrelevant (see bug #492932).
+ self._orig_eprefix = portage.const.EPREFIX
+ portage.const.EPREFIX = self.eprefix.rstrip(os.sep)
+
+ self.eroot = self.eprefix + os.sep
+ if targetroot:
+ self.target_root = os.path.join(self.eroot, "target_root")
+ else:
+ self.target_root = os.sep
+ self.distdir = os.path.join(self.eroot, "var", "portage", "distfiles")
+ self.pkgdir = os.path.join(self.eprefix, "pkgdir")
+ self.vdbdir = os.path.join(self.eroot, "var/db/pkg")
+ os.makedirs(self.vdbdir)
+
+ if not debug:
+ portage.util.noiselimit = -2
+
+ self._repositories = {}
+ # Make sure the main repo is always created
+ self._get_repo_dir("test_repo")
+
+ self._create_distfiles(distfiles)
+ self._create_ebuilds(ebuilds)
+ self._create_binpkgs(binpkgs)
+ self._create_installed(installed)
+ self._create_profile(
+ ebuilds, eclasses, installed, profile, repo_configs, user_config, sets
+ )
+ self._create_world(world, world_sets)
+
+ self.settings, self.trees = self._load_config()
+
+ self._create_ebuild_manifests(ebuilds)
+
+ portage.util.noiselimit = 0
+
+ def reload_config(self):
+ """
+ Reload configuration from disk, which is useful if it has
+ been modified after the constructor has been called.
+ """
+ for eroot in self.trees:
+ portdb = self.trees[eroot]["porttree"].dbapi
+ portdb.close_caches()
+ self.settings, self.trees = self._load_config()
+
+ def _get_repo_dir(self, repo):
+ """
+ Create the repo directory if needed.
+ """
+ if repo not in self._repositories:
+ if repo == "test_repo":
+ self._repositories["DEFAULT"] = {"main-repo": repo}
+
+ repo_path = os.path.join(self.eroot, "var", "repositories", repo)
+ self._repositories[repo] = {"location": repo_path}
+ profile_path = os.path.join(repo_path, "profiles")
+
+ try:
+ os.makedirs(profile_path)
+ except os.error:
+ pass
+
+ repo_name_file = os.path.join(profile_path, "repo_name")
+ with open(repo_name_file, "w") as f:
+ f.write("%s\n" % repo)
+
+ return self._repositories[repo]["location"]
+
+ def _create_distfiles(self, distfiles):
+ os.makedirs(self.distdir)
+ for k, v in distfiles.items():
+ with open(os.path.join(self.distdir, k), "wb") as f:
+ f.write(v)
+
+ def _create_ebuilds(self, ebuilds):
+ for cpv in ebuilds:
+ a = Atom("=" + cpv, allow_repo=True)
+ repo = a.repo
+ if repo is None:
+ repo = "test_repo"
+
+ metadata = ebuilds[cpv].copy()
+ copyright_header = metadata.pop("COPYRIGHT_HEADER", None)
+ eapi = metadata.pop("EAPI", "0")
+ misc_content = metadata.pop("MISC_CONTENT", None)
+ metadata.setdefault("DEPEND", "")
+ metadata.setdefault("SLOT", "0")
+ metadata.setdefault("KEYWORDS", "x86")
+ metadata.setdefault("IUSE", "")
+
+ unknown_keys = set(metadata).difference(portage.dbapi.dbapi._known_keys)
+ if unknown_keys:
+ raise ValueError(
+ "metadata of ebuild '%s' contains unknown keys: %s"
+ % (cpv, sorted(unknown_keys))
+ )
+
+ repo_dir = self._get_repo_dir(repo)
+ ebuild_dir = os.path.join(repo_dir, a.cp)
+ ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
+ try:
+ os.makedirs(ebuild_dir)
+ except os.error:
+ pass
+
+ with open(ebuild_path, "w") as f:
+ if copyright_header is not None:
+ f.write(copyright_header)
+ f.write('EAPI="%s"\n' % eapi)
+ for k, v in metadata.items():
+ f.write('%s="%s"\n' % (k, v))
+ if misc_content is not None:
+ f.write(misc_content)
+
+ def _create_ebuild_manifests(self, ebuilds):
+ tmpsettings = config(clone=self.settings)
+ tmpsettings["PORTAGE_QUIET"] = "1"
+ for cpv in ebuilds:
+ a = Atom("=" + cpv, allow_repo=True)
+ repo = a.repo
+ if repo is None:
+ repo = "test_repo"
+
+ repo_dir = self._get_repo_dir(repo)
+ ebuild_dir = os.path.join(repo_dir, a.cp)
+ ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
+
+ portdb = self.trees[self.eroot]["porttree"].dbapi
+ tmpsettings["O"] = ebuild_dir
+ if not digestgen(mysettings=tmpsettings, myportdb=portdb):
+ raise AssertionError("digest creation failed for %s" % ebuild_path)
+
+ def _create_binpkgs(self, binpkgs):
+ # When using BUILD_ID, there can be multiple instances for the
+ # same cpv. Therefore, binpkgs may be an iterable instead of
+ # a dict.
+ items = getattr(binpkgs, "items", None)
+ items = items() if items is not None else binpkgs
+ for cpv, metadata in items:
+ a = Atom("=" + cpv, allow_repo=True)
+ repo = a.repo
+ if repo is None:
+ repo = "test_repo"
+
+ pn = catsplit(a.cp)[1]
+ cat, pf = catsplit(a.cpv)
+ metadata = metadata.copy()
+ metadata.setdefault("SLOT", "0")
+ metadata.setdefault("KEYWORDS", "x86")
+ metadata.setdefault("BUILD_TIME", "0")
+ metadata["repository"] = repo
+ metadata["CATEGORY"] = cat
+ metadata["PF"] = pf
++ # PREFIX LOCAL
++ metadata["EPREFIX"] = self.eprefix
+
+ repo_dir = self.pkgdir
+ category_dir = os.path.join(repo_dir, cat)
+ if "BUILD_ID" in metadata:
+ binpkg_path = os.path.join(
+ category_dir, pn, "%s-%s.xpak" % (pf, metadata["BUILD_ID"])
+ )
+ else:
+ binpkg_path = os.path.join(category_dir, pf + ".tbz2")
+
+ ensure_dirs(os.path.dirname(binpkg_path))
+ t = portage.xpak.tbz2(binpkg_path)
+ t.recompose_mem(portage.xpak.xpak_mem(metadata))
+
+ def _create_installed(self, installed):
+ for cpv in installed:
+ a = Atom("=" + cpv, allow_repo=True)
+ repo = a.repo
+ if repo is None:
+ repo = "test_repo"
+
+ vdb_pkg_dir = os.path.join(self.vdbdir, a.cpv)
+ try:
+ os.makedirs(vdb_pkg_dir)
+ except os.error:
+ pass
+
+ metadata = installed[cpv].copy()
+ metadata.setdefault("SLOT", "0")
+ metadata.setdefault("BUILD_TIME", "0")
+ metadata.setdefault("COUNTER", "0")
+ metadata.setdefault("KEYWORDS", "~x86")
+
+ unknown_keys = set(metadata).difference(portage.dbapi.dbapi._known_keys)
+ unknown_keys.discard("BUILD_TIME")
+ unknown_keys.discard("BUILD_ID")
+ unknown_keys.discard("COUNTER")
+ unknown_keys.discard("repository")
+ unknown_keys.discard("USE")
+ unknown_keys.discard("PROVIDES")
+ unknown_keys.discard("REQUIRES")
+ if unknown_keys:
+ raise ValueError(
+ "metadata of installed '%s' contains unknown keys: %s"
+ % (cpv, sorted(unknown_keys))
+ )
+
+ metadata["repository"] = repo
+ for k, v in metadata.items():
+ with open(os.path.join(vdb_pkg_dir, k), "w") as f:
+ f.write("%s\n" % v)
+
+ ebuild_path = os.path.join(vdb_pkg_dir, a.cpv.split("/")[1] + ".ebuild")
+ with open(ebuild_path, "w") as f:
+ f.write('EAPI="%s"\n' % metadata.pop("EAPI", "0"))
+ for k, v in metadata.items():
+ f.write('%s="%s"\n' % (k, v))
+
+ env_path = os.path.join(vdb_pkg_dir, "environment.bz2")
+ with bz2.BZ2File(env_path, mode="w") as f:
+ with open(ebuild_path, "rb") as inputfile:
+ f.write(inputfile.read())
+
+ def _create_profile(
+ self, ebuilds, eclasses, installed, profile, repo_configs, user_config, sets
+ ):
+
+ user_config_dir = os.path.join(self.eroot, USER_CONFIG_PATH)
+
+ try:
+ os.makedirs(user_config_dir)
+ except os.error:
+ pass
+
+ for repo in self._repositories:
+ if repo == "DEFAULT":
+ continue
+
+ repo_dir = self._get_repo_dir(repo)
+ profile_dir = os.path.join(repo_dir, "profiles")
+ metadata_dir = os.path.join(repo_dir, "metadata")
+ os.makedirs(metadata_dir)
+
+ # Create $REPO/profiles/categories
+ categories = set()
+ for cpv in ebuilds:
+ ebuilds_repo = Atom("=" + cpv, allow_repo=True).repo
+ if ebuilds_repo is None:
+ ebuilds_repo = "test_repo"
+ if ebuilds_repo == repo:
+ categories.add(catsplit(cpv)[0])
+
+ categories_file = os.path.join(profile_dir, "categories")
+ with open(categories_file, "w") as f:
+ for cat in categories:
+ f.write(cat + "\n")
+
+ # Create $REPO/profiles/license_groups
+ license_file = os.path.join(profile_dir, "license_groups")
+ with open(license_file, "w") as f:
+ f.write("EULA TEST\n")
+
+ repo_config = repo_configs.get(repo)
+ if repo_config:
+ for config_file, lines in repo_config.items():
+ if config_file not in self.config_files and not any(
+ fnmatch.fnmatch(config_file, os.path.join(x, "*"))
+ for x in self.config_files
+ ):
+ raise ValueError("Unknown config file: '%s'" % config_file)
+
+ if config_file in ("layout.conf",):
+ file_name = os.path.join(repo_dir, "metadata", config_file)
+ else:
+ file_name = os.path.join(profile_dir, config_file)
+ if "/" in config_file and not os.path.isdir(
+ os.path.dirname(file_name)
+ ):
+ os.makedirs(os.path.dirname(file_name))
+ with open(file_name, "w") as f:
+ for line in lines:
+ f.write("%s\n" % line)
+ # Temporarily write empty value of masters until it becomes default.
+ # TODO: Delete all references to "# use implicit masters" when empty value becomes default.
+ if config_file == "layout.conf" and not any(
+ line.startswith(("masters =", "# use implicit masters"))
+ for line in lines
+ ):
+ f.write("masters =\n")
+
+ # Create $profile_dir/eclass (we fail to digest the ebuilds if it's not there)
+ eclass_dir = os.path.join(repo_dir, "eclass")
+ os.makedirs(eclass_dir)
+
+ for eclass_name, eclass_content in eclasses.items():
+ with open(
+ os.path.join(eclass_dir, "{}.eclass".format(eclass_name)), "wt"
+ ) as f:
+ if isinstance(eclass_content, str):
+ eclass_content = [eclass_content]
+ for line in eclass_content:
+ f.write("{}\n".format(line))
+
+ # Temporarily write empty value of masters until it becomes default.
+ if not repo_config or "layout.conf" not in repo_config:
+ layout_conf_path = os.path.join(repo_dir, "metadata", "layout.conf")
+ with open(layout_conf_path, "w") as f:
+ f.write("masters =\n")
+
+ if repo == "test_repo":
+ # Create a minimal profile in /var/db/repos/gentoo
+ sub_profile_dir = os.path.join(
+ profile_dir, "default", "linux", "x86", "test_profile"
+ )
+ os.makedirs(sub_profile_dir)
+
+ if not (profile and "eapi" in profile):
+ eapi_file = os.path.join(sub_profile_dir, "eapi")
+ with open(eapi_file, "w") as f:
+ f.write("0\n")
+
+ make_defaults_file = os.path.join(sub_profile_dir, "make.defaults")
+ with open(make_defaults_file, "w") as f:
+ f.write('ARCH="x86"\n')
+ f.write('ACCEPT_KEYWORDS="x86"\n')
+
+ use_force_file = os.path.join(sub_profile_dir, "use.force")
+ with open(use_force_file, "w") as f:
+ f.write("x86\n")
+
+ parent_file = os.path.join(sub_profile_dir, "parent")
+ with open(parent_file, "w") as f:
+ f.write("..\n")
+
+ if profile:
+ for config_file, lines in profile.items():
+ if config_file not in self.config_files:
+ raise ValueError("Unknown config file: '%s'" % config_file)
+
+ file_name = os.path.join(sub_profile_dir, config_file)
+ with open(file_name, "w") as f:
+ for line in lines:
+ f.write("%s\n" % line)
+
+ # Create profile symlink
+ os.symlink(
+ sub_profile_dir, os.path.join(user_config_dir, "make.profile")
+ )
+
+ make_conf = {
+ "ACCEPT_KEYWORDS": "x86",
+ "CLEAN_DELAY": "0",
+ "DISTDIR": self.distdir,
+ "EMERGE_WARNING_DELAY": "0",
+ "PKGDIR": self.pkgdir,
+ "PORTAGE_INST_GID": str(portage.data.portage_gid),
+ "PORTAGE_INST_UID": str(portage.data.portage_uid),
+ "PORTAGE_TMPDIR": os.path.join(self.eroot, "var/tmp"),
+ }
+
+ if os.environ.get("NOCOLOR"):
+ make_conf["NOCOLOR"] = os.environ["NOCOLOR"]
+
+ # Pass along PORTAGE_USERNAME and PORTAGE_GRPNAME since they
+ # need to be inherited by ebuild subprocesses.
+ if "PORTAGE_USERNAME" in os.environ:
+ make_conf["PORTAGE_USERNAME"] = os.environ["PORTAGE_USERNAME"]
+ if "PORTAGE_GRPNAME" in os.environ:
+ make_conf["PORTAGE_GRPNAME"] = os.environ["PORTAGE_GRPNAME"]
+
+ make_conf_lines = []
+ for k_v in make_conf.items():
+ make_conf_lines.append('%s="%s"' % k_v)
+
+ if "make.conf" in user_config:
+ make_conf_lines.extend(user_config["make.conf"])
+
+ if not portage.process.sandbox_capable or os.environ.get("SANDBOX_ON") == "1":
+ # avoid problems from nested sandbox instances
+ make_conf_lines.append('FEATURES="${FEATURES} -sandbox -usersandbox"')
+
+ configs = user_config.copy()
+ configs["make.conf"] = make_conf_lines
+
+ for config_file, lines in configs.items():
+ if config_file not in self.config_files:
+ raise ValueError("Unknown config file: '%s'" % config_file)
+
+ file_name = os.path.join(user_config_dir, config_file)
+ with open(file_name, "w") as f:
+ for line in lines:
+ f.write("%s\n" % line)
+
+ # Create /usr/share/portage/config/make.globals
+ make_globals_path = os.path.join(
+ self.eroot, GLOBAL_CONFIG_PATH.lstrip(os.sep), "make.globals"
+ )
+ ensure_dirs(os.path.dirname(make_globals_path))
+ os.symlink(os.path.join(cnf_path, "make.globals"), make_globals_path)
+
+ # Create /usr/share/portage/config/sets/portage.conf
+ default_sets_conf_dir = os.path.join(
+ self.eroot, "usr/share/portage/config/sets"
+ )
+
+ try:
+ os.makedirs(default_sets_conf_dir)
+ except os.error:
+ pass
+
+ provided_sets_portage_conf = os.path.join(cnf_path, "sets", "portage.conf")
+ os.symlink(
+ provided_sets_portage_conf,
+ os.path.join(default_sets_conf_dir, "portage.conf"),
+ )
+
+ set_config_dir = os.path.join(user_config_dir, "sets")
+
+ try:
+ os.makedirs(set_config_dir)
+ except os.error:
+ pass
+
+ for sets_file, lines in sets.items():
+ file_name = os.path.join(set_config_dir, sets_file)
+ with open(file_name, "w") as f:
+ for line in lines:
+ f.write("%s\n" % line)
+
+ if cnf_path_repoman is not None:
+ # Create /usr/share/repoman
+ repoman_share_dir = os.path.join(self.eroot, "usr", "share", "repoman")
+ os.symlink(cnf_path_repoman, repoman_share_dir)
+
+ def _create_world(self, world, world_sets):
+ # Create /var/lib/portage/world
+ var_lib_portage = os.path.join(self.eroot, "var", "lib", "portage")
+ os.makedirs(var_lib_portage)
+
+ world_file = os.path.join(var_lib_portage, "world")
+ world_set_file = os.path.join(var_lib_portage, "world_sets")
+
+ with open(world_file, "w") as f:
+ for atom in world:
+ f.write("%s\n" % atom)
+
+ with open(world_set_file, "w") as f:
+ for atom in world_sets:
+ f.write("%s\n" % atom)
+
+ def _load_config(self):
+
+ create_trees_kwargs = {}
+ if self.target_root != os.sep:
+ create_trees_kwargs["target_root"] = self.target_root
+
+ env = {
+ "PORTAGE_REPOSITORIES": "\n".join(
+ "[%s]\n%s"
+ % (
+ repo_name,
+ "\n".join("%s = %s" % (k, v) for k, v in repo_config.items()),
+ )
+ for repo_name, repo_config in self._repositories.items()
+ )
+ }
+
+ if self.debug:
+ env["PORTAGE_DEBUG"] = "1"
+
+ trees = portage.create_trees(
+ env=env, eprefix=self.eprefix, **create_trees_kwargs
+ )
+
+ for root, root_trees in trees.items():
+ settings = root_trees["vartree"].settings
+ settings._init_dirs()
+ setconfig = load_default_config(settings, root_trees)
+ root_trees["root_config"] = RootConfig(settings, root_trees, setconfig)
+
+ return trees[trees._target_eroot]["vartree"].settings, trees
+
+ def run(self, atoms, options={}, action=None):
+ options = options.copy()
+ options["--pretend"] = True
+ if self.debug:
+ options["--debug"] = True
+
+ if action is None:
+ if options.get("--depclean"):
+ action = "depclean"
+ elif options.get("--prune"):
+ action = "prune"
+
+ if "--usepkgonly" in options:
+ options["--usepkg"] = True
+
+ global_noiselimit = portage.util.noiselimit
+ global_emergelog_disable = _emerge.emergelog._disable
+ try:
+
+ if not self.debug:
+ portage.util.noiselimit = -2
+ _emerge.emergelog._disable = True
+
+ if action in ("depclean", "prune"):
+ depclean_result = _calc_depclean(
+ self.settings,
+ self.trees,
+ None,
+ options,
+ action,
+ InternalPackageSet(initial_atoms=atoms, allow_wildcard=True),
+ None,
+ )
+ result = ResolverPlaygroundDepcleanResult(
+ atoms,
+ depclean_result.returncode,
+ depclean_result.cleanlist,
+ depclean_result.ordered,
+ depclean_result.req_pkg_count,
+ depclean_result.depgraph,
+ )
+ else:
+ params = create_depgraph_params(options, action)
+ success, depgraph, favorites = backtrack_depgraph(
+ self.settings, self.trees, options, params, action, atoms, None
+ )
+ depgraph._show_merge_list()
+ depgraph.display_problems()
+ result = ResolverPlaygroundResult(atoms, success, depgraph, favorites)
+ finally:
+ portage.util.noiselimit = global_noiselimit
+ _emerge.emergelog._disable = global_emergelog_disable
+
+ return result
+
+ def run_TestCase(self, test_case):
+ if not isinstance(test_case, ResolverPlaygroundTestCase):
+ raise TypeError("ResolverPlayground needs a ResolverPlaygroundTestCase")
+ for atoms in test_case.requests:
+ result = self.run(atoms, test_case.options, test_case.action)
+ if not test_case.compare_with_result(result):
+ return
+
+ def cleanup(self):
+ for eroot in self.trees:
+ portdb = self.trees[eroot]["porttree"].dbapi
+ portdb.close_caches()
+ if self.debug:
+ print("\nEROOT=%s" % self.eroot)
+ else:
+ shutil.rmtree(self.eroot)
+ if hasattr(self, "_orig_eprefix"):
+ portage.const.EPREFIX = self._orig_eprefix
class ResolverPlaygroundTestCase:
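Everything ResolverPlayground sets up above (temporary EPREFIX, fake repository, profile, world file) is typically driven from a test roughly as in the sketch below; the package atom and metadata are invented for illustration, and cleanup() removes the temporary tree again:

from portage.tests.resolver.ResolverPlayground import ResolverPlayground

playground = ResolverPlayground(
    ebuilds={
        "dev-libs/example-1.0": {"EAPI": "7", "SLOT": "0", "KEYWORDS": "x86"},
    }
)
try:
    # run() forces --pretend and wraps the depgraph outcome in a
    # ResolverPlaygroundResult (atoms, success, depgraph, favorites)
    result = playground.run(["dev-libs/example"])
finally:
    playground.cleanup()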
diff --cc lib/portage/util/__init__.py
index 8c2f96f56,5ade7f660..11a7d0677
--- a/lib/portage/util/__init__.py
+++ b/lib/portage/util/__init__.py
@@@ -904,918 -1053,970 +1054,983 @@@ def varexpand(mystring, mydict=None, er
# broken and removed, but can still be imported
pickle_write = None
+
def pickle_read(filename, default=None, debug=0):
- if not os.access(filename, os.R_OK):
- writemsg(_("pickle_read(): File not readable. '") + filename + "'\n", 1)
- return default
- data = None
- try:
- myf = open(_unicode_encode(filename,
- encoding=_encodings['fs'], errors='strict'), 'rb')
- mypickle = pickle.Unpickler(myf)
- data = mypickle.load()
- myf.close()
- del mypickle, myf
- writemsg(_("pickle_read(): Loaded pickle. '") + filename + "'\n", 1)
- except SystemExit as e:
- raise
- except Exception as e:
- writemsg(_("!!! Failed to load pickle: ") + str(e) + "\n", 1)
- data = default
- return data
+ if not os.access(filename, os.R_OK):
+ writemsg(_("pickle_read(): File not readable. '") + filename + "'\n", 1)
+ return default
+ data = None
+ try:
+ myf = open(
+ _unicode_encode(filename, encoding=_encodings["fs"], errors="strict"), "rb"
+ )
+ mypickle = pickle.Unpickler(myf)
+ data = mypickle.load()
+ myf.close()
+ del mypickle, myf
+ writemsg(_("pickle_read(): Loaded pickle. '") + filename + "'\n", 1)
+ except SystemExit as e:
+ raise
+ except Exception as e:
+ writemsg(_("!!! Failed to load pickle: ") + str(e) + "\n", 1)
+ data = default
+ return data
+
def dump_traceback(msg, noiselevel=1):
- info = sys.exc_info()
- if not info[2]:
- stack = traceback.extract_stack()[:-1]
- error = None
- else:
- stack = traceback.extract_tb(info[2])
- error = str(info[1])
- writemsg("\n====================================\n", noiselevel=noiselevel)
- writemsg("%s\n\n" % msg, noiselevel=noiselevel)
- for line in traceback.format_list(stack):
- writemsg(line, noiselevel=noiselevel)
- if error:
- writemsg(error+"\n", noiselevel=noiselevel)
- writemsg("====================================\n\n", noiselevel=noiselevel)
+ info = sys.exc_info()
+ if not info[2]:
+ stack = traceback.extract_stack()[:-1]
+ error = None
+ else:
+ stack = traceback.extract_tb(info[2])
+ error = str(info[1])
+ writemsg("\n====================================\n", noiselevel=noiselevel)
+ writemsg("%s\n\n" % msg, noiselevel=noiselevel)
+ for line in traceback.format_list(stack):
+ writemsg(line, noiselevel=noiselevel)
+ if error:
+ writemsg(error + "\n", noiselevel=noiselevel)
+ writemsg("====================================\n\n", noiselevel=noiselevel)
+
class cmp_sort_key:
- """
- In python-3.0 the list.sort() method no longer has a "cmp" keyword
- argument. This class acts as an adapter which converts a cmp function
- into one that's suitable for use as the "key" keyword argument to
- list.sort(), making it easier to port code for python-3.0 compatibility.
- It works by generating key objects which use the given cmp function to
- implement their __lt__ method.
-
- Beginning with Python 2.7 and 3.2, equivalent functionality is provided
- by functools.cmp_to_key().
- """
- __slots__ = ("_cmp_func",)
+ """
+ In python-3.0 the list.sort() method no longer has a "cmp" keyword
+ argument. This class acts as an adapter which converts a cmp function
+ into one that's suitable for use as the "key" keyword argument to
+ list.sort(), making it easier to port code for python-3.0 compatibility.
+ It works by generating key objects which use the given cmp function to
+ implement their __lt__ method.
+
+ Beginning with Python 2.7 and 3.2, equivalent functionality is provided
+ by functools.cmp_to_key().
+ """
+
+ __slots__ = ("_cmp_func",)
- def __init__(self, cmp_func):
- """
- @type cmp_func: callable which takes 2 positional arguments
- @param cmp_func: A cmp function.
- """
- self._cmp_func = cmp_func
+ def __init__(self, cmp_func):
+ """
+ @type cmp_func: callable which takes 2 positional arguments
+ @param cmp_func: A cmp function.
+ """
+ self._cmp_func = cmp_func
- def __call__(self, lhs):
- return self._cmp_key(self._cmp_func, lhs)
+ def __call__(self, lhs):
+ return self._cmp_key(self._cmp_func, lhs)
- class _cmp_key:
- __slots__ = ("_cmp_func", "_obj")
+ class _cmp_key:
+ __slots__ = ("_cmp_func", "_obj")
- def __init__(self, cmp_func, obj):
- self._cmp_func = cmp_func
- self._obj = obj
+ def __init__(self, cmp_func, obj):
+ self._cmp_func = cmp_func
+ self._obj = obj
+
+ def __lt__(self, other):
+ if other.__class__ is not self.__class__:
+ raise TypeError(
+ "Expected type %s, got %s" % (self.__class__, other.__class__)
+ )
+ return self._cmp_func(self._obj, other._obj) < 0
- def __lt__(self, other):
- if other.__class__ is not self.__class__:
- raise TypeError("Expected type %s, got %s" % \
- (self.__class__, other.__class__))
- return self._cmp_func(self._obj, other._obj) < 0
def unique_array(s):
- """lifted from python cookbook, credit: Tim Peters
- Return a list of the elements in s in arbitrary order, sans duplicates"""
- n = len(s)
- # assume all elements are hashable, if so, it's linear
- try:
- return list(set(s))
- except TypeError:
- pass
-
- # so much for linear. abuse sort.
- try:
- t = list(s)
- t.sort()
- except TypeError:
- pass
- else:
- assert n > 0
- last = t[0]
- lasti = i = 1
- while i < n:
- if t[i] != last:
- t[lasti] = last = t[i]
- lasti += 1
- i += 1
- return t[:lasti]
-
- # blah. back to original portage.unique_array
- u = []
- for x in s:
- if x not in u:
- u.append(x)
- return u
+ """lifted from python cookbook, credit: Tim Peters
+ Return a list of the elements in s in arbitrary order, sans duplicates"""
+ n = len(s)
+ # assume all elements are hashable, if so, it's linear
+ try:
+ return list(set(s))
+ except TypeError:
+ pass
+
+ # so much for linear. abuse sort.
+ try:
+ t = list(s)
+ t.sort()
+ except TypeError:
+ pass
+ else:
+ assert n > 0
+ last = t[0]
+ lasti = i = 1
+ while i < n:
+ if t[i] != last:
+ t[lasti] = last = t[i]
+ lasti += 1
+ i += 1
+ return t[:lasti]
+
+ # blah. back to original portage.unique_array
+ u = []
+ for x in s:
+ if x not in u:
+ u.append(x)
+ return u
+
def unique_everseen(iterable, key=None):
- """
- List unique elements, preserving order. Remember all elements ever seen.
- Taken from itertools documentation.
- """
- # unique_everseen('AAAABBBCCDAABBB') --> A B C D
- # unique_everseen('ABBCcAD', str.lower) --> A B C D
- seen = set()
- seen_add = seen.add
- if key is None:
- for element in filterfalse(seen.__contains__, iterable):
- seen_add(element)
- yield element
- else:
- for element in iterable:
- k = key(element)
- if k not in seen:
- seen_add(k)
- yield element
+ """
+ List unique elements, preserving order. Remember all elements ever seen.
+ Taken from itertools documentation.
+ """
+ # unique_everseen('AAAABBBCCDAABBB') --> A B C D
+ # unique_everseen('ABBCcAD', str.lower) --> A B C D
+ seen = set()
+ seen_add = seen.add
+ if key is None:
+ for element in filterfalse(seen.__contains__, iterable):
+ seen_add(element)
+ yield element
+ else:
+ for element in iterable:
+ k = key(element)
+ if k not in seen:
+ seen_add(k)
+ yield element
+
def _do_stat(filename, follow_links=True):
- try:
- if follow_links:
- return os.stat(filename)
- return os.lstat(filename)
- except OSError as oe:
- func_call = "stat('%s')" % filename
- if oe.errno == errno.EPERM:
- raise OperationNotPermitted(func_call)
- if oe.errno == errno.EACCES:
- raise PermissionDenied(func_call)
- if oe.errno == errno.ENOENT:
- raise FileNotFound(filename)
- raise
-
- def apply_permissions(filename, uid=-1, gid=-1, mode=-1, mask=-1,
- stat_cached=None, follow_links=True):
- """Apply user, group, and mode bits to a file if the existing bits do not
- already match. The default behavior is to force an exact match of mode
- bits. When mask=0 is specified, mode bits on the target file are allowed
- to be a superset of the mode argument (via logical OR). When mask>0, the
- mode bits that the target file is allowed to have are restricted via
- logical XOR.
- Returns True if the permissions were modified and False otherwise."""
-
- modified = False
-
- # Since Python 3.4, chown requires int type (no proxies).
- uid = int(uid)
- gid = int(gid)
-
- if stat_cached is None:
- stat_cached = _do_stat(filename, follow_links=follow_links)
-
- if (uid != -1 and uid != stat_cached.st_uid) or \
- (gid != -1 and gid != stat_cached.st_gid):
- try:
- if follow_links:
- os.chown(filename, uid, gid)
- else:
- portage.data.lchown(filename, uid, gid)
- modified = True
- except OSError as oe:
- func_call = "chown('%s', %i, %i)" % (filename, uid, gid)
- if oe.errno == errno.EPERM:
- raise OperationNotPermitted(func_call)
- elif oe.errno == errno.EACCES:
- raise PermissionDenied(func_call)
- elif oe.errno == errno.EROFS:
- raise ReadOnlyFileSystem(func_call)
- elif oe.errno == errno.ENOENT:
- raise FileNotFound(filename)
- else:
- raise
-
- new_mode = -1
- st_mode = stat_cached.st_mode & 0o7777 # protect from unwanted bits
- if mask >= 0:
- if mode == -1:
- mode = 0 # Don't add any mode bits when mode is unspecified.
- else:
- mode = mode & 0o7777
- if (mode & st_mode != mode) or \
- ((mask ^ st_mode) & st_mode != st_mode):
- new_mode = mode | st_mode
- new_mode = (mask ^ new_mode) & new_mode
- elif mode != -1:
- mode = mode & 0o7777 # protect from unwanted bits
- if mode != st_mode:
- new_mode = mode
-
- # The chown system call may clear S_ISUID and S_ISGID
- # bits, so those bits are restored if necessary.
- if modified and new_mode == -1 and \
- (st_mode & stat.S_ISUID or st_mode & stat.S_ISGID):
- if mode == -1:
- new_mode = st_mode
- else:
- mode = mode & 0o7777
- if mask >= 0:
- new_mode = mode | st_mode
- new_mode = (mask ^ new_mode) & new_mode
- else:
- new_mode = mode
- if not (new_mode & stat.S_ISUID or new_mode & stat.S_ISGID):
- new_mode = -1
-
- if not follow_links and stat.S_ISLNK(stat_cached.st_mode):
- # Mode doesn't matter for symlinks.
- new_mode = -1
-
- if new_mode != -1:
- try:
- os.chmod(filename, new_mode)
- modified = True
- except OSError as oe:
- func_call = "chmod('%s', %s)" % (filename, oct(new_mode))
- if oe.errno == errno.EPERM:
- raise OperationNotPermitted(func_call)
- elif oe.errno == errno.EACCES:
- raise PermissionDenied(func_call)
- elif oe.errno == errno.EROFS:
- raise ReadOnlyFileSystem(func_call)
- elif oe.errno == errno.ENOENT:
- raise FileNotFound(filename)
- raise
- return modified
+ try:
+ if follow_links:
+ return os.stat(filename)
+ return os.lstat(filename)
+ except OSError as oe:
+ func_call = "stat('%s')" % filename
+ if oe.errno == errno.EPERM:
+ raise OperationNotPermitted(func_call)
+ if oe.errno == errno.EACCES:
+ raise PermissionDenied(func_call)
+ if oe.errno == errno.ENOENT:
+ raise FileNotFound(filename)
+ raise
+
+
+ def apply_permissions(
+ filename, uid=-1, gid=-1, mode=-1, mask=-1, stat_cached=None, follow_links=True
+ ):
+ """Apply user, group, and mode bits to a file if the existing bits do not
+ already match. The default behavior is to force an exact match of mode
+ bits. When mask=0 is specified, mode bits on the target file are allowed
+ to be a superset of the mode argument (via logical OR). When mask>0, the
+ mode bits that the target file is allowed to have are restricted via
+ logical XOR.
+ Returns True if the permissions were modified and False otherwise."""
+
+ modified = False
+
+ # Since Python 3.4, chown requires int type (no proxies).
+ uid = int(uid)
+ gid = int(gid)
+
+ if stat_cached is None:
+ stat_cached = _do_stat(filename, follow_links=follow_links)
+
+ if (uid != -1 and uid != stat_cached.st_uid) or (
+ gid != -1 and gid != stat_cached.st_gid
+ ):
+ try:
+ if follow_links:
+ os.chown(filename, uid, gid)
+ else:
+ portage.data.lchown(filename, uid, gid)
+ modified = True
+ except OSError as oe:
+ func_call = "chown('%s', %i, %i)" % (filename, uid, gid)
+ if oe.errno == errno.EPERM:
+ raise OperationNotPermitted(func_call)
+ elif oe.errno == errno.EACCES:
+ raise PermissionDenied(func_call)
+ elif oe.errno == errno.EROFS:
+ raise ReadOnlyFileSystem(func_call)
+ elif oe.errno == errno.ENOENT:
+ raise FileNotFound(filename)
+ else:
+ raise
+
+ new_mode = -1
+ st_mode = stat_cached.st_mode & 0o7777 # protect from unwanted bits
+ if mask >= 0:
+ if mode == -1:
+ mode = 0 # Don't add any mode bits when mode is unspecified.
+ else:
+ mode = mode & 0o7777
+ if (mode & st_mode != mode) or ((mask ^ st_mode) & st_mode != st_mode):
+ new_mode = mode | st_mode
+ new_mode = (mask ^ new_mode) & new_mode
+ elif mode != -1:
+ mode = mode & 0o7777 # protect from unwanted bits
+ if mode != st_mode:
+ new_mode = mode
+
+ # The chown system call may clear S_ISUID and S_ISGID
+ # bits, so those bits are restored if necessary.
+ if (
+ modified
+ and new_mode == -1
+ and (st_mode & stat.S_ISUID or st_mode & stat.S_ISGID)
+ ):
+ if mode == -1:
+ new_mode = st_mode
+ else:
+ mode = mode & 0o7777
+ if mask >= 0:
+ new_mode = mode | st_mode
+ new_mode = (mask ^ new_mode) & new_mode
+ else:
+ new_mode = mode
+ if not (new_mode & stat.S_ISUID or new_mode & stat.S_ISGID):
+ new_mode = -1
+
+ if not follow_links and stat.S_ISLNK(stat_cached.st_mode):
+ # Mode doesn't matter for symlinks.
+ new_mode = -1
+
+ if new_mode != -1:
+ try:
+ os.chmod(filename, new_mode)
+ modified = True
+ except OSError as oe:
+ func_call = "chmod('%s', %s)" % (filename, oct(new_mode))
+ if oe.errno == errno.EPERM:
+ raise OperationNotPermitted(func_call)
+ elif oe.errno == errno.EACCES:
+ raise PermissionDenied(func_call)
+ elif oe.errno == errno.EROFS:
+ raise ReadOnlyFileSystem(func_call)
+ elif oe.errno == errno.ENOENT:
+ raise FileNotFound(filename)
+ raise
+ return modified
+
def apply_stat_permissions(filename, newstat, **kwargs):
- """A wrapper around apply_secpass_permissions that gets
- uid, gid, and mode from a stat object"""
- return apply_secpass_permissions(filename, uid=newstat.st_uid, gid=newstat.st_gid,
- mode=newstat.st_mode, **kwargs)
-
- def apply_recursive_permissions(top, uid=-1, gid=-1,
- dirmode=-1, dirmask=-1, filemode=-1, filemask=-1, onerror=None):
- """A wrapper around apply_secpass_permissions that applies permissions
- recursively. If optional argument onerror is specified, it should be a
- function; it will be called with one argument, a PortageException instance.
- Returns True if all permissions are applied and False if some are left
- unapplied."""
-
- # Avoid issues with circular symbolic links, as in bug #339670.
- follow_links = False
-
- if onerror is None:
- # Default behavior is to dump errors to stderr so they won't
- # go unnoticed. Callers can pass in a quiet instance.
- def onerror(e):
- if isinstance(e, OperationNotPermitted):
- writemsg(_("Operation Not Permitted: %s\n") % str(e),
- noiselevel=-1)
- elif isinstance(e, FileNotFound):
- writemsg(_("File Not Found: '%s'\n") % str(e), noiselevel=-1)
- else:
- raise
-
- # For bug 554084, always apply permissions to a directory before
- # that directory is traversed.
- all_applied = True
-
- try:
- stat_cached = _do_stat(top, follow_links=follow_links)
- except FileNotFound:
- # backward compatibility
- return True
-
- if stat.S_ISDIR(stat_cached.st_mode):
- mode = dirmode
- mask = dirmask
- else:
- mode = filemode
- mask = filemask
-
- try:
- applied = apply_secpass_permissions(top,
- uid=uid, gid=gid, mode=mode, mask=mask,
- stat_cached=stat_cached, follow_links=follow_links)
- if not applied:
- all_applied = False
- except PortageException as e:
- all_applied = False
- onerror(e)
-
- for dirpath, dirnames, filenames in os.walk(top):
- for name, mode, mask in chain(
- ((x, filemode, filemask) for x in filenames),
- ((x, dirmode, dirmask) for x in dirnames)):
- try:
- applied = apply_secpass_permissions(os.path.join(dirpath, name),
- uid=uid, gid=gid, mode=mode, mask=mask,
- follow_links=follow_links)
- if not applied:
- all_applied = False
- except PortageException as e:
- # Ignore InvalidLocation exceptions such as FileNotFound
- # and DirectoryNotFound since sometimes things disappear,
- # like when adjusting permissions on DISTCC_DIR.
- if not isinstance(e, portage.exception.InvalidLocation):
- all_applied = False
- onerror(e)
- return all_applied
-
- def apply_secpass_permissions(filename, uid=-1, gid=-1, mode=-1, mask=-1,
- stat_cached=None, follow_links=True):
- """A wrapper around apply_permissions that uses secpass and simple
- logic to apply as much of the permissions as possible without
- generating an obviously avoidable permission exception. Despite
- attempts to avoid an exception, it's possible that one will be raised
- anyway, so be prepared.
- Returns True if all permissions are applied and False if some are left
- unapplied."""
-
- if stat_cached is None:
- stat_cached = _do_stat(filename, follow_links=follow_links)
-
- all_applied = True
-
- # Avoid accessing portage.data.secpass when possible, since
- # it triggers config loading (undesirable for chmod-lite).
- if (uid != -1 or gid != -1) and portage.data.secpass < 2:
-
- if uid != -1 and \
- uid != stat_cached.st_uid:
- all_applied = False
- uid = -1
-
- if gid != -1 and \
- gid != stat_cached.st_gid and \
- gid not in os.getgroups():
- all_applied = False
- gid = -1
-
- apply_permissions(filename, uid=uid, gid=gid, mode=mode, mask=mask,
- stat_cached=stat_cached, follow_links=follow_links)
- return all_applied
+ """A wrapper around apply_secpass_permissions that gets
+ uid, gid, and mode from a stat object"""
+ return apply_secpass_permissions(
+ filename, uid=newstat.st_uid, gid=newstat.st_gid, mode=newstat.st_mode, **kwargs
+ )
+
+
+ def apply_recursive_permissions(
+ top, uid=-1, gid=-1, dirmode=-1, dirmask=-1, filemode=-1, filemask=-1, onerror=None
+ ):
+ """A wrapper around apply_secpass_permissions that applies permissions
+ recursively. If optional argument onerror is specified, it should be a
+ function; it will be called with one argument, a PortageException instance.
+ Returns True if all permissions are applied and False if some are left
+ unapplied."""
+
+ # Avoid issues with circular symbolic links, as in bug #339670.
+ follow_links = False
+
+ if onerror is None:
+ # Default behavior is to dump errors to stderr so they won't
+ # go unnoticed. Callers can pass in a quiet instance.
+ def onerror(e):
+ if isinstance(e, OperationNotPermitted):
+ writemsg(_("Operation Not Permitted: %s\n") % str(e), noiselevel=-1)
+ elif isinstance(e, FileNotFound):
+ writemsg(_("File Not Found: '%s'\n") % str(e), noiselevel=-1)
+ else:
+ raise
+
+ # For bug 554084, always apply permissions to a directory before
+ # that directory is traversed.
+ all_applied = True
+
+ try:
+ stat_cached = _do_stat(top, follow_links=follow_links)
+ except FileNotFound:
+ # backward compatibility
+ return True
+
+ if stat.S_ISDIR(stat_cached.st_mode):
+ mode = dirmode
+ mask = dirmask
+ else:
+ mode = filemode
+ mask = filemask
+
+ try:
+ applied = apply_secpass_permissions(
+ top,
+ uid=uid,
+ gid=gid,
+ mode=mode,
+ mask=mask,
+ stat_cached=stat_cached,
+ follow_links=follow_links,
+ )
+ if not applied:
+ all_applied = False
+ except PortageException as e:
+ all_applied = False
+ onerror(e)
+
+ for dirpath, dirnames, filenames in os.walk(top):
+ for name, mode, mask in chain(
+ ((x, filemode, filemask) for x in filenames),
+ ((x, dirmode, dirmask) for x in dirnames),
+ ):
+ try:
+ applied = apply_secpass_permissions(
+ os.path.join(dirpath, name),
+ uid=uid,
+ gid=gid,
+ mode=mode,
+ mask=mask,
+ follow_links=follow_links,
+ )
+ if not applied:
+ all_applied = False
+ except PortageException as e:
+ # Ignore InvalidLocation exceptions such as FileNotFound
+ # and DirectoryNotFound since sometimes things disappear,
+ # like when adjusting permissions on DISTCC_DIR.
+ if not isinstance(e, portage.exception.InvalidLocation):
+ all_applied = False
+ onerror(e)
+ return all_applied
+
+
+ def apply_secpass_permissions(
+ filename, uid=-1, gid=-1, mode=-1, mask=-1, stat_cached=None, follow_links=True
+ ):
+ """A wrapper around apply_permissions that uses secpass and simple
+ logic to apply as much of the permissions as possible without
+ generating an obviously avoidable permission exception. Despite
+ attempts to avoid an exception, it's possible that one will be raised
+ anyway, so be prepared.
+ Returns True if all permissions are applied and False if some are left
+ unapplied."""
+
+ if stat_cached is None:
+ stat_cached = _do_stat(filename, follow_links=follow_links)
+
+ all_applied = True
+
+ # Avoid accessing portage.data.secpass when possible, since
+ # it triggers config loading (undesirable for chmod-lite).
+ if (uid != -1 or gid != -1) and portage.data.secpass < 2:
+
+ if uid != -1 and uid != stat_cached.st_uid:
+ all_applied = False
+ uid = -1
+
+ if gid != -1 and gid != stat_cached.st_gid and gid not in os.getgroups():
+ all_applied = False
+ gid = -1
+
+ apply_permissions(
+ filename,
+ uid=uid,
+ gid=gid,
+ mode=mode,
+ mask=mask,
+ stat_cached=stat_cached,
+ follow_links=follow_links,
+ )
+ return all_applied
+
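apply_secpass_permissions above downgrades uid/gid requests that an unprivileged caller could not satisfy and reports False rather than raising. A rough standalone sketch of that "apply what you can" idea using only the standard library (the function name and the plain root check are illustrative assumptions, not Portage's secpass logic):

    import os

    def best_effort_chown(path, uid=-1, gid=-1):
        """Return True only if the requested ownership was fully applied."""
        st = os.stat(path)
        all_applied = True
        if os.getuid() != 0:  # unprivileged caller, roughly secpass < 2
            if uid != -1 and uid != st.st_uid:
                all_applied = False
                uid = -1  # cannot change the owner without privileges
            if gid != -1 and gid != st.st_gid and gid not in os.getgroups():
                all_applied = False
                gid = -1  # cannot switch to a group we are not a member of
        if uid != -1 or gid != -1:
            os.chown(path, uid, gid)
        return all_applied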
class atomic_ofstream(AbstractContextManager, ObjectProxy):
- """Write a file atomically via os.rename(). Atomic replacement prevents
- interprocess interference and prevents corruption of the target
- file when the write is interrupted (for example, when an 'out of space'
- error occurs)."""
-
- def __init__(self, filename, mode='w', follow_links=True, **kargs):
- """Opens a temporary filename.pid in the same directory as filename."""
- ObjectProxy.__init__(self)
- object.__setattr__(self, '_aborted', False)
- if 'b' in mode:
- open_func = open
- else:
- open_func = io.open
- kargs.setdefault('encoding', _encodings['content'])
- kargs.setdefault('errors', 'backslashreplace')
-
- if follow_links:
- canonical_path = os.path.realpath(filename)
- object.__setattr__(self, '_real_name', canonical_path)
- tmp_name = "%s.%i" % (canonical_path, portage.getpid())
- try:
- object.__setattr__(self, '_file',
- open_func(_unicode_encode(tmp_name,
- encoding=_encodings['fs'], errors='strict'),
- mode=mode, **kargs))
- return
- except IOError as e:
- if canonical_path == filename:
- raise
- # Ignore this error, since it's irrelevant
- # and the below open call will produce a
- # new error if necessary.
-
- object.__setattr__(self, '_real_name', filename)
- tmp_name = "%s.%i" % (filename, portage.getpid())
- object.__setattr__(self, '_file',
- open_func(_unicode_encode(tmp_name,
- encoding=_encodings['fs'], errors='strict'),
- mode=mode, **kargs))
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- if exc_type is not None:
- self.abort()
- else:
- self.close()
-
- def _get_target(self):
- return object.__getattribute__(self, '_file')
-
- def __getattribute__(self, attr):
- if attr in ('close', 'abort', '__del__'):
- return object.__getattribute__(self, attr)
- return getattr(object.__getattribute__(self, '_file'), attr)
-
- def close(self):
- """Closes the temporary file, copies permissions (if possible),
- and performs the atomic replacement via os.rename(). If the abort()
- method has been called, then the temp file is closed and removed."""
- f = object.__getattribute__(self, '_file')
- real_name = object.__getattribute__(self, '_real_name')
- if not f.closed:
- try:
- f.close()
- if not object.__getattribute__(self, '_aborted'):
- try:
- apply_stat_permissions(f.name, os.stat(real_name))
- except OperationNotPermitted:
- pass
- except FileNotFound:
- pass
- except OSError as oe: # from the above os.stat call
- if oe.errno in (errno.ENOENT, errno.EPERM):
- pass
- else:
- raise
- os.rename(f.name, real_name)
- finally:
- # Make sure we cleanup the temp file
- # even if an exception is raised.
- try:
- os.unlink(f.name)
- except OSError as oe:
- pass
-
- def abort(self):
- """If an error occurs while writing the file, the user should
- call this method in order to leave the target file unchanged.
- This will call close() automatically."""
- if not object.__getattribute__(self, '_aborted'):
- object.__setattr__(self, '_aborted', True)
- self.close()
-
- def __del__(self):
- """If the user does not explicitly call close(), it is
- assumed that an error has occurred, so we abort()."""
- try:
- f = object.__getattribute__(self, '_file')
- except AttributeError:
- pass
- else:
- if not f.closed:
- self.abort()
- # ensure destructor from the base class is called
- base_destructor = getattr(ObjectProxy, '__del__', None)
- if base_destructor is not None:
- base_destructor(self)
+ """Write a file atomically via os.rename(). Atomic replacement prevents
+ interprocess interference and prevents corruption of the target
+ file when the write is interrupted (for example, when an 'out of space'
+ error occurs)."""
+
+ def __init__(self, filename, mode="w", follow_links=True, **kargs):
+ """Opens a temporary filename.pid in the same directory as filename."""
+ ObjectProxy.__init__(self)
+ object.__setattr__(self, "_aborted", False)
+ if "b" in mode:
+ open_func = open
+ else:
+ open_func = io.open
+ kargs.setdefault("encoding", _encodings["content"])
+ kargs.setdefault("errors", "backslashreplace")
+
+ if follow_links:
+ canonical_path = os.path.realpath(filename)
+ object.__setattr__(self, "_real_name", canonical_path)
+ tmp_name = "%s.%i" % (canonical_path, portage.getpid())
+ try:
+ object.__setattr__(
+ self,
+ "_file",
+ open_func(
+ _unicode_encode(
+ tmp_name, encoding=_encodings["fs"], errors="strict"
+ ),
+ mode=mode,
+ **kargs
+ ),
+ )
+ return
+ except IOError as e:
+ if canonical_path == filename:
+ raise
+ # Ignore this error, since it's irrelevant
+ # and the below open call will produce a
+ # new error if necessary.
+
+ object.__setattr__(self, "_real_name", filename)
+ tmp_name = "%s.%i" % (filename, portage.getpid())
+ object.__setattr__(
+ self,
+ "_file",
+ open_func(
+ _unicode_encode(tmp_name, encoding=_encodings["fs"], errors="strict"),
+ mode=mode,
+ **kargs
+ ),
+ )
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ if exc_type is not None:
+ self.abort()
+ else:
+ self.close()
+
+ def _get_target(self):
+ return object.__getattribute__(self, "_file")
+
+ def __getattribute__(self, attr):
+ if attr in ("close", "abort", "__del__"):
+ return object.__getattribute__(self, attr)
+ return getattr(object.__getattribute__(self, "_file"), attr)
+
+ def close(self):
+ """Closes the temporary file, copies permissions (if possible),
+ and performs the atomic replacement via os.rename(). If the abort()
+ method has been called, then the temp file is closed and removed."""
+ f = object.__getattribute__(self, "_file")
+ real_name = object.__getattribute__(self, "_real_name")
+ if not f.closed:
+ try:
+ f.close()
+ if not object.__getattribute__(self, "_aborted"):
+ try:
+ apply_stat_permissions(f.name, os.stat(real_name))
+ except OperationNotPermitted:
+ pass
+ except FileNotFound:
+ pass
+ except OSError as oe: # from the above os.stat call
+ if oe.errno in (errno.ENOENT, errno.EPERM):
+ pass
+ else:
+ raise
+ os.rename(f.name, real_name)
+ finally:
+ # Make sure we cleanup the temp file
+ # even if an exception is raised.
+ try:
+ os.unlink(f.name)
+ except OSError as oe:
+ pass
+
+ def abort(self):
+ """If an error occurs while writing the file, the user should
+ call this method in order to leave the target file unchanged.
+ This will call close() automatically."""
+ if not object.__getattribute__(self, "_aborted"):
+ object.__setattr__(self, "_aborted", True)
+ self.close()
+
+ def __del__(self):
+ """If the user does not explicitly call close(), it is
+ assumed that an error has occurred, so we abort()."""
+ try:
+ f = object.__getattribute__(self, "_file")
+ except AttributeError:
+ pass
+ else:
+ if not f.closed:
+ self.abort()
+ # ensure destructor from the base class is called
+ base_destructor = getattr(ObjectProxy, "__del__", None)
+ if base_destructor is not None:
+ base_destructor(self)
+
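atomic_ofstream writes to "<name>.<pid>" in the target directory and then os.rename()s it over the destination, so readers only ever observe the old or the new content. A compact standalone sketch of the same pattern without the ObjectProxy plumbing (illustrative only, not a drop-in replacement):

    import os

    def write_file_atomically(path, data):
        tmp = "%s.%d" % (path, os.getpid())  # temp file in the same directory
        try:
            with open(tmp, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # push the data to disk before the rename
            os.rename(tmp, path)      # atomic replacement on POSIX filesystems
        except BaseException:
            try:
                os.unlink(tmp)        # never leave the temp file behind
            except OSError:
                pass
            raise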
def write_atomic(file_path, content, **kwargs):
- f = None
- try:
- f = atomic_ofstream(file_path, **kwargs)
- f.write(content)
- f.close()
- except (IOError, OSError) as e:
- if f:
- f.abort()
- func_call = "write_atomic('%s')" % file_path
- if e.errno == errno.EPERM:
- raise OperationNotPermitted(func_call)
- elif e.errno == errno.EACCES:
- raise PermissionDenied(func_call)
- elif e.errno == errno.EROFS:
- raise ReadOnlyFileSystem(func_call)
- elif e.errno == errno.ENOENT:
- raise FileNotFound(file_path)
- else:
- raise
+ f = None
+ try:
+ f = atomic_ofstream(file_path, **kwargs)
+ f.write(content)
+ f.close()
+ except (IOError, OSError) as e:
+ if f:
+ f.abort()
+ func_call = "write_atomic('%s')" % file_path
+ if e.errno == errno.EPERM:
+ raise OperationNotPermitted(func_call)
+ elif e.errno == errno.EACCES:
+ raise PermissionDenied(func_call)
+ elif e.errno == errno.EROFS:
+ raise ReadOnlyFileSystem(func_call)
+ elif e.errno == errno.ENOENT:
+ raise FileNotFound(file_path)
+ else:
+ raise
- def ensure_dirs(dir_path, **kwargs):
- """Create a directory and call apply_permissions.
- Returns True if a directory is created or the permissions needed to be
- modified, and False otherwise.
- This function's handling of EEXIST errors makes it useful for atomic
- directory creation, in which multiple processes may be competing to
- create the same directory.
- """
+ def ensure_dirs(dir_path, **kwargs):
+ """Create a directory and call apply_permissions.
+ Returns True if a directory is created or the permissions needed to be
+ modified, and False otherwise.
+
+ This function's handling of EEXIST errors makes it useful for atomic
+ directory creation, in which multiple processes may be competing to
+ create the same directory.
+ """
+
+ created_dir = False
+
+ try:
+ os.makedirs(dir_path)
+ created_dir = True
+ except OSError as oe:
+ func_call = "makedirs('%s')" % dir_path
+ if oe.errno in (errno.EEXIST,):
+ pass
+ else:
+ if os.path.isdir(dir_path):
+ # NOTE: DragonFly raises EPERM for makedir('/')
+ # and that is supposed to be ignored here.
+ # Also, sometimes mkdir raises EISDIR on FreeBSD
+ # and we want to ignore that too (bug #187518).
+ pass
+ elif oe.errno == errno.EPERM:
+ raise OperationNotPermitted(func_call)
+ elif oe.errno == errno.EACCES:
+ raise PermissionDenied(func_call)
+ elif oe.errno == errno.EROFS:
+ raise ReadOnlyFileSystem(func_call)
+ else:
+ raise
+ if kwargs:
+ perms_modified = apply_permissions(dir_path, **kwargs)
+ else:
+ perms_modified = False
+ return created_dir or perms_modified
- created_dir = False
-
- try:
- os.makedirs(dir_path)
- created_dir = True
- except OSError as oe:
- func_call = "makedirs('%s')" % dir_path
- if oe.errno in (errno.EEXIST,):
- pass
- else:
- if os.path.isdir(dir_path):
- # NOTE: DragonFly raises EPERM for makedir('/')
- # and that is supposed to be ignored here.
- # Also, sometimes mkdir raises EISDIR on FreeBSD
- # and we want to ignore that too (bug #187518).
- pass
- elif oe.errno == errno.EPERM:
- raise OperationNotPermitted(func_call)
- elif oe.errno == errno.EACCES:
- raise PermissionDenied(func_call)
- elif oe.errno == errno.EROFS:
- raise ReadOnlyFileSystem(func_call)
- else:
- raise
- if kwargs:
- perms_modified = apply_permissions(dir_path, **kwargs)
- else:
- perms_modified = False
- return created_dir or perms_modified
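ensure_dirs treats EEXIST (or a path that turns out to be a directory despite another errno) as success, which is what makes it safe when several processes race to create the same directory. A minimal sketch of that idea with plain os.makedirs (not the Portage helper itself; the function name is made up):

    import errno
    import os

    def make_dir_racily(path):
        """Return True if this call created the directory."""
        try:
            os.makedirs(path)
            return True
        except OSError as e:
            # Another process may have created it first; that is fine.
            if e.errno == errno.EEXIST or os.path.isdir(path):
                return False
            raise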
class LazyItemsDict(UserDict):
- """A mapping object that behaves like a standard dict except that it allows
- for lazy initialization of values via callable objects. Lazy items can be
- overwritten and deleted just as normal items."""
-
- __slots__ = ('lazy_items',)
-
- def __init__(self, *args, **kwargs):
-
- self.lazy_items = {}
- UserDict.__init__(self, *args, **kwargs)
-
- def addLazyItem(self, item_key, value_callable, *pargs, **kwargs):
- """Add a lazy item for the given key. When the item is requested,
- value_callable will be called with *pargs and **kwargs arguments."""
- self.lazy_items[item_key] = \
- self._LazyItem(value_callable, pargs, kwargs, False)
- # make it show up in self.keys(), etc...
- UserDict.__setitem__(self, item_key, None)
-
- def addLazySingleton(self, item_key, value_callable, *pargs, **kwargs):
- """This is like addLazyItem except value_callable will only be called
- a maximum of 1 time and the result will be cached for future requests."""
- self.lazy_items[item_key] = \
- self._LazyItem(value_callable, pargs, kwargs, True)
- # make it show up in self.keys(), etc...
- UserDict.__setitem__(self, item_key, None)
-
- def update(self, *args, **kwargs):
- if len(args) > 1:
- raise TypeError(
- "expected at most 1 positional argument, got " + \
- repr(len(args)))
- if args:
- map_obj = args[0]
- else:
- map_obj = None
- if map_obj is None:
- pass
- elif isinstance(map_obj, LazyItemsDict):
- for k in map_obj:
- if k in map_obj.lazy_items:
- UserDict.__setitem__(self, k, None)
- else:
- UserDict.__setitem__(self, k, map_obj[k])
- self.lazy_items.update(map_obj.lazy_items)
- else:
- UserDict.update(self, map_obj)
- if kwargs:
- UserDict.update(self, kwargs)
-
- def __getitem__(self, item_key):
- if item_key in self.lazy_items:
- lazy_item = self.lazy_items[item_key]
- pargs = lazy_item.pargs
- if pargs is None:
- pargs = ()
- kwargs = lazy_item.kwargs
- if kwargs is None:
- kwargs = {}
- result = lazy_item.func(*pargs, **kwargs)
- if lazy_item.singleton:
- self[item_key] = result
- return result
-
- return UserDict.__getitem__(self, item_key)
-
- def __setitem__(self, item_key, value):
- if item_key in self.lazy_items:
- del self.lazy_items[item_key]
- UserDict.__setitem__(self, item_key, value)
-
- def __delitem__(self, item_key):
- if item_key in self.lazy_items:
- del self.lazy_items[item_key]
- UserDict.__delitem__(self, item_key)
-
- def clear(self):
- self.lazy_items.clear()
- UserDict.clear(self)
-
- def copy(self):
- return self.__copy__()
-
- def __copy__(self):
- return self.__class__(self)
-
- def __deepcopy__(self, memo=None):
- """
- This forces evaluation of each contained lazy item, and deepcopy of
- the result. A TypeError is raised if any contained lazy item is not
- a singleton, since it is not necessarily possible for the behavior
- of this type of item to be safely preserved.
- """
- if memo is None:
- memo = {}
- result = self.__class__()
- memo[id(self)] = result
- for k in self:
- k_copy = deepcopy(k, memo)
- lazy_item = self.lazy_items.get(k)
- if lazy_item is not None:
- if not lazy_item.singleton:
- raise TypeError("LazyItemsDict " + \
- "deepcopy is unsafe with lazy items that are " + \
- "not singletons: key=%s value=%s" % (k, lazy_item,))
- UserDict.__setitem__(result, k_copy, deepcopy(self[k], memo))
- return result
-
- class _LazyItem:
-
- __slots__ = ('func', 'pargs', 'kwargs', 'singleton')
-
- def __init__(self, func, pargs, kwargs, singleton):
-
- if not pargs:
- pargs = None
- if not kwargs:
- kwargs = None
-
- self.func = func
- self.pargs = pargs
- self.kwargs = kwargs
- self.singleton = singleton
-
- def __copy__(self):
- return self.__class__(self.func, self.pargs,
- self.kwargs, self.singleton)
-
- def __deepcopy__(self, memo=None):
- """
- Override this since the default implementation can fail silently,
- leaving some attributes unset.
- """
- if memo is None:
- memo = {}
- result = self.__copy__()
- memo[id(self)] = result
- result.func = deepcopy(self.func, memo)
- result.pargs = deepcopy(self.pargs, memo)
- result.kwargs = deepcopy(self.kwargs, memo)
- result.singleton = deepcopy(self.singleton, memo)
- return result
+ """A mapping object that behaves like a standard dict except that it allows
+ for lazy initialization of values via callable objects. Lazy items can be
+ overwritten and deleted just as normal items."""
+
+ __slots__ = ("lazy_items",)
+
+ def __init__(self, *args, **kwargs):
+
+ self.lazy_items = {}
+ UserDict.__init__(self, *args, **kwargs)
+
+ def addLazyItem(self, item_key, value_callable, *pargs, **kwargs):
+ """Add a lazy item for the given key. When the item is requested,
+ value_callable will be called with *pargs and **kwargs arguments."""
+ self.lazy_items[item_key] = self._LazyItem(value_callable, pargs, kwargs, False)
+ # make it show up in self.keys(), etc...
+ UserDict.__setitem__(self, item_key, None)
+
+ def addLazySingleton(self, item_key, value_callable, *pargs, **kwargs):
+ """This is like addLazyItem except value_callable will only be called
+ a maximum of 1 time and the result will be cached for future requests."""
+ self.lazy_items[item_key] = self._LazyItem(value_callable, pargs, kwargs, True)
+ # make it show up in self.keys(), etc...
+ UserDict.__setitem__(self, item_key, None)
+
+ def update(self, *args, **kwargs):
+ if len(args) > 1:
+ raise TypeError(
+ "expected at most 1 positional argument, got " + repr(len(args))
+ )
+ if args:
+ map_obj = args[0]
+ else:
+ map_obj = None
+ if map_obj is None:
+ pass
+ elif isinstance(map_obj, LazyItemsDict):
+ for k in map_obj:
+ if k in map_obj.lazy_items:
+ UserDict.__setitem__(self, k, None)
+ else:
+ UserDict.__setitem__(self, k, map_obj[k])
+ self.lazy_items.update(map_obj.lazy_items)
+ else:
+ UserDict.update(self, map_obj)
+ if kwargs:
+ UserDict.update(self, kwargs)
+
+ def __getitem__(self, item_key):
+ if item_key in self.lazy_items:
+ lazy_item = self.lazy_items[item_key]
+ pargs = lazy_item.pargs
+ if pargs is None:
+ pargs = ()
+ kwargs = lazy_item.kwargs
+ if kwargs is None:
+ kwargs = {}
+ result = lazy_item.func(*pargs, **kwargs)
+ if lazy_item.singleton:
+ self[item_key] = result
+ return result
+
+ return UserDict.__getitem__(self, item_key)
+
+ def __setitem__(self, item_key, value):
+ if item_key in self.lazy_items:
+ del self.lazy_items[item_key]
+ UserDict.__setitem__(self, item_key, value)
+
+ def __delitem__(self, item_key):
+ if item_key in self.lazy_items:
+ del self.lazy_items[item_key]
+ UserDict.__delitem__(self, item_key)
+
+ def clear(self):
+ self.lazy_items.clear()
+ UserDict.clear(self)
+
+ def copy(self):
+ return self.__copy__()
+
+ def __copy__(self):
+ return self.__class__(self)
+
+ def __deepcopy__(self, memo=None):
+ """
+ This forces evaluation of each contained lazy item, and deepcopy of
+ the result. A TypeError is raised if any contained lazy item is not
+ a singleton, since it is not necessarily possible for the behavior
+ of this type of item to be safely preserved.
+ """
+ if memo is None:
+ memo = {}
+ result = self.__class__()
+ memo[id(self)] = result
+ for k in self:
+ k_copy = deepcopy(k, memo)
+ lazy_item = self.lazy_items.get(k)
+ if lazy_item is not None:
+ if not lazy_item.singleton:
+ raise TypeError(
+ "LazyItemsDict "
+ + "deepcopy is unsafe with lazy items that are "
+ + "not singletons: key=%s value=%s"
+ % (
+ k,
+ lazy_item,
+ )
+ )
+ UserDict.__setitem__(result, k_copy, deepcopy(self[k], memo))
+ return result
+
+ class _LazyItem:
+
+ __slots__ = ("func", "pargs", "kwargs", "singleton")
+
+ def __init__(self, func, pargs, kwargs, singleton):
+
+ if not pargs:
+ pargs = None
+ if not kwargs:
+ kwargs = None
+
+ self.func = func
+ self.pargs = pargs
+ self.kwargs = kwargs
+ self.singleton = singleton
+
+ def __copy__(self):
+ return self.__class__(self.func, self.pargs, self.kwargs, self.singleton)
+
+ def __deepcopy__(self, memo=None):
+ """
+ Override this since the default implementation can fail silently,
+ leaving some attributes unset.
+ """
+ if memo is None:
+ memo = {}
+ result = self.__copy__()
+ memo[id(self)] = result
+ result.func = deepcopy(self.func, memo)
+ result.pargs = deepcopy(self.pargs, memo)
+ result.kwargs = deepcopy(self.kwargs, memo)
+ result.singleton = deepcopy(self.singleton, memo)
+ return result
+
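LazyItemsDict defers computing a value until the key is first read, and addLazySingleton caches the result of that first call. A stripped-down standalone sketch of the lazy-singleton idea on top of a plain dict (names are illustrative; the non-singleton and deepcopy behaviour above is omitted):

    class LazyDict(dict):
        """Values registered via add_lazy are computed once, on first access."""

        def __init__(self):
            super().__init__()
            self._lazy = {}

        def add_lazy(self, key, func):
            self._lazy[key] = func
            super().__setitem__(key, None)  # make the key visible to keys()/len()

        def __getitem__(self, key):
            if key in self._lazy:
                value = self._lazy.pop(key)()  # evaluate once and cache
                super().__setitem__(key, value)
                return value
            return super().__getitem__(key)

    d = LazyDict()
    d.add_lazy("answer", lambda: 6 * 7)
    assert "answer" in d
    assert d["answer"] == 42  # computed on first access only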
class ConfigProtect:
- def __init__(self, myroot, protect_list, mask_list,
- case_insensitive=False):
- self.myroot = myroot
- self.protect_list = protect_list
- self.mask_list = mask_list
- self.case_insensitive = case_insensitive
- self.updateprotect()
-
- def updateprotect(self):
- """Update internal state for isprotected() calls. Nonexistent paths
- are ignored."""
-
- os = _os_merge
-
- self.protect = []
- self._dirs = set()
- for x in self.protect_list:
- ppath = normalize_path(
- os.path.join(self.myroot, x.lstrip(os.path.sep)))
- # Protect files that don't exist (bug #523684). If the
- # parent directory doesn't exist, we can safely skip it.
- if os.path.isdir(os.path.dirname(ppath)):
- self.protect.append(ppath)
- try:
- if stat.S_ISDIR(os.stat(ppath).st_mode):
- self._dirs.add(ppath)
- except OSError:
- pass
-
- self.protectmask = []
- for x in self.mask_list:
- ppath = normalize_path(
- os.path.join(self.myroot, x.lstrip(os.path.sep)))
- if self.case_insensitive:
- ppath = ppath.lower()
- try:
- """Use lstat so that anything, even a broken symlink can be
- protected."""
- if stat.S_ISDIR(os.lstat(ppath).st_mode):
- self._dirs.add(ppath)
- self.protectmask.append(ppath)
- """Now use stat in case this is a symlink to a directory."""
- if stat.S_ISDIR(os.stat(ppath).st_mode):
- self._dirs.add(ppath)
- except OSError:
- # If it doesn't exist, there's no need to mask it.
- pass
-
- def isprotected(self, obj):
- """Returns True if obj is protected, False otherwise. The caller must
- ensure that obj is normalized with a single leading slash. A trailing
- slash is optional for directories."""
- masked = 0
- protected = 0
- sep = os.path.sep
- if self.case_insensitive:
- obj = obj.lower()
- for ppath in self.protect:
- if len(ppath) > masked and obj.startswith(ppath):
- if ppath in self._dirs:
- if obj != ppath and not obj.startswith(ppath + sep):
- # /etc/foo does not match /etc/foobaz
- continue
- elif obj != ppath:
- # force exact match when CONFIG_PROTECT lists a
- # non-directory
- continue
- protected = len(ppath)
- #config file management
- for pmpath in self.protectmask:
- if len(pmpath) >= protected and obj.startswith(pmpath):
- if pmpath in self._dirs:
- if obj != pmpath and \
- not obj.startswith(pmpath + sep):
- # /etc/foo does not match /etc/foobaz
- continue
- elif obj != pmpath:
- # force exact match when CONFIG_PROTECT_MASK lists
- # a non-directory
- continue
- #skip, it's in the mask
- masked = len(pmpath)
- return protected > masked
+ def __init__(self, myroot, protect_list, mask_list, case_insensitive=False):
+ self.myroot = myroot
+ self.protect_list = protect_list
+ self.mask_list = mask_list
+ self.case_insensitive = case_insensitive
+ self.updateprotect()
+
+ def updateprotect(self):
+ """Update internal state for isprotected() calls. Nonexistent paths
+ are ignored."""
+
+ os = _os_merge
+
+ self.protect = []
+ self._dirs = set()
+ for x in self.protect_list:
+ ppath = normalize_path(os.path.join(self.myroot, x.lstrip(os.path.sep)))
+ # Protect files that don't exist (bug #523684). If the
+ # parent directory doesn't exist, we can safely skip it.
+ if os.path.isdir(os.path.dirname(ppath)):
+ self.protect.append(ppath)
+ try:
+ if stat.S_ISDIR(os.stat(ppath).st_mode):
+ self._dirs.add(ppath)
+ except OSError:
+ pass
+
+ self.protectmask = []
+ for x in self.mask_list:
+ ppath = normalize_path(os.path.join(self.myroot, x.lstrip(os.path.sep)))
+ if self.case_insensitive:
+ ppath = ppath.lower()
+ try:
+ """Use lstat so that anything, even a broken symlink can be
+ protected."""
+ if stat.S_ISDIR(os.lstat(ppath).st_mode):
+ self._dirs.add(ppath)
+ self.protectmask.append(ppath)
+ """Now use stat in case this is a symlink to a directory."""
+ if stat.S_ISDIR(os.stat(ppath).st_mode):
+ self._dirs.add(ppath)
+ except OSError:
+ # If it doesn't exist, there's no need to mask it.
+ pass
+
+ def isprotected(self, obj):
+ """Returns True if obj is protected, False otherwise. The caller must
+ ensure that obj is normalized with a single leading slash. A trailing
+ slash is optional for directories."""
+ masked = 0
+ protected = 0
+ sep = os.path.sep
+ if self.case_insensitive:
+ obj = obj.lower()
+ for ppath in self.protect:
+ if len(ppath) > masked and obj.startswith(ppath):
+ if ppath in self._dirs:
+ if obj != ppath and not obj.startswith(ppath + sep):
+ # /etc/foo does not match /etc/foobaz
+ continue
+ elif obj != ppath:
+ # force exact match when CONFIG_PROTECT lists a
+ # non-directory
+ continue
+ protected = len(ppath)
+ # config file management
+ for pmpath in self.protectmask:
+ if len(pmpath) >= protected and obj.startswith(pmpath):
+ if pmpath in self._dirs:
+ if obj != pmpath and not obj.startswith(pmpath + sep):
+ # /etc/foo does not match /etc/foobaz
+ continue
+ elif obj != pmpath:
+ # force exact match when CONFIG_PROTECT_MASK lists
+ # a non-directory
+ continue
+ # skip, it's in the mask
+ masked = len(pmpath)
+ return protected > masked
+
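isprotected compares the longest matching CONFIG_PROTECT entry against the longest matching CONFIG_PROTECT_MASK entry, so a more specific mask entry can carve an exception out of a protected directory. A simplified standalone sketch of that longest-match comparison (it deliberately skips the directory/exact-match refinements handled above):

    import os

    def is_protected(path, protect, mask):
        sep = os.path.sep

        def longest_match(prefixes):
            best = 0
            for p in prefixes:
                if path == p or path.startswith(p.rstrip(sep) + sep):
                    best = max(best, len(p))
            return best

        return longest_match(protect) > longest_match(mask)

    # /etc is protected, but /etc/env.d is explicitly masked:
    print(is_protected("/etc/passwd", ["/etc"], ["/etc/env.d"]))       # True
    print(is_protected("/etc/env.d/05gcc", ["/etc"], ["/etc/env.d"]))  # False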
def new_protect_filename(mydest, newmd5=None, force=False):
- """Resolves a config-protect filename for merging, optionally
- using the last filename if the md5 matches. If force is True,
- then a new filename will be generated even if mydest does not
- exist yet.
- (dest,md5) ==> 'string' --- path_to_target_filename
- (dest) ==> ('next', 'highest') --- next_target and most-recent_target
- """
+ """Resolves a config-protect filename for merging, optionally
+ using the last filename if the md5 matches. If force is True,
+ then a new filename will be generated even if mydest does not
+ exist yet.
+ (dest,md5) ==> 'string' --- path_to_target_filename
+ (dest) ==> ('next', 'highest') --- next_target and most-recent_target
+ """
+
+ # config protection filename format:
+ # ._cfg0000_foo
+ # 0123456789012
+
+ os = _os_merge
+
+ prot_num = -1
+ last_pfile = ""
+
+ if not force and not os.path.exists(mydest):
+ return mydest
+
+ real_filename = os.path.basename(mydest)
+ real_dirname = os.path.dirname(mydest)
+ for pfile in os.listdir(real_dirname):
+ if pfile[0:5] != "._cfg":
+ continue
+ if pfile[10:] != real_filename:
+ continue
+ try:
+ new_prot_num = int(pfile[5:9])
+ if new_prot_num > prot_num:
+ prot_num = new_prot_num
+ last_pfile = pfile
+ except ValueError:
+ continue
+ prot_num = prot_num + 1
+
+ new_pfile = normalize_path(
+ os.path.join(
+ real_dirname, "._cfg" + str(prot_num).zfill(4) + "_" + real_filename
+ )
+ )
+ old_pfile = normalize_path(os.path.join(real_dirname, last_pfile))
+ if last_pfile and newmd5:
+ try:
+ old_pfile_st = os.lstat(old_pfile)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ else:
+ if stat.S_ISLNK(old_pfile_st.st_mode):
+ try:
+ # Read symlink target as bytes, in case the
+ # target path has a bad encoding.
+ pfile_link = os.readlink(
+ _unicode_encode(
+ old_pfile, encoding=_encodings["merge"], errors="strict"
+ )
+ )
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ else:
+ pfile_link = _unicode_decode(
+ pfile_link, encoding=_encodings["merge"], errors="replace"
+ )
+ if pfile_link == newmd5:
+ return old_pfile
+ else:
+ try:
+ last_pfile_md5 = portage.checksum._perform_md5_merge(old_pfile)
+ except FileNotFound:
+ # The file suddenly disappeared or it's a
+ # broken symlink.
+ pass
+ else:
+ if last_pfile_md5 == newmd5:
+ return old_pfile
+ return new_pfile
- # config protection filename format:
- # ._cfg0000_foo
- # 0123456789012
-
- os = _os_merge
-
- prot_num = -1
- last_pfile = ""
-
- if not force and \
- not os.path.exists(mydest):
- return mydest
-
- real_filename = os.path.basename(mydest)
- real_dirname = os.path.dirname(mydest)
- for pfile in os.listdir(real_dirname):
- if pfile[0:5] != "._cfg":
- continue
- if pfile[10:] != real_filename:
- continue
- try:
- new_prot_num = int(pfile[5:9])
- if new_prot_num > prot_num:
- prot_num = new_prot_num
- last_pfile = pfile
- except ValueError:
- continue
- prot_num = prot_num + 1
-
- new_pfile = normalize_path(os.path.join(real_dirname,
- "._cfg" + str(prot_num).zfill(4) + "_" + real_filename))
- old_pfile = normalize_path(os.path.join(real_dirname, last_pfile))
- if last_pfile and newmd5:
- try:
- old_pfile_st = os.lstat(old_pfile)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- else:
- if stat.S_ISLNK(old_pfile_st.st_mode):
- try:
- # Read symlink target as bytes, in case the
- # target path has a bad encoding.
- pfile_link = os.readlink(_unicode_encode(old_pfile,
- encoding=_encodings['merge'], errors='strict'))
- except OSError:
- if e.errno != errno.ENOENT:
- raise
- else:
- pfile_link = _unicode_decode(pfile_link,
- encoding=_encodings['merge'], errors='replace')
- if pfile_link == newmd5:
- return old_pfile
- else:
- try:
- last_pfile_md5 = \
- portage.checksum._perform_md5_merge(old_pfile)
- except FileNotFound:
- # The file suddenly disappeared or it's a
- # broken symlink.
- pass
- else:
- if last_pfile_md5 == newmd5:
- return old_pfile
- return new_pfile
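new_protect_filename scans the destination directory for existing ._cfgNNNN_<name> entries, takes the highest counter found, and returns the next one (unless the md5 short-circuit above reuses the latest file). A standalone sketch of just the counter logic, with an illustrative helper name:

    import os

    def next_cfg_name(dest):
        """Return the next ._cfgNNNN_ path for dest, e.g. ._cfg0000_make.conf."""
        dirname, basename = os.path.dirname(dest), os.path.basename(dest)
        highest = -1
        for entry in os.listdir(dirname or "."):
            # Filename layout: ._cfg<4-digit counter>_<original name>
            if entry.startswith("._cfg") and entry[10:] == basename:
                try:
                    highest = max(highest, int(entry[5:9]))
                except ValueError:
                    continue
        return os.path.join(dirname, "._cfg%04d_%s" % (highest + 1, basename))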
def find_updated_config_files(target_root, config_protect):
- """
- Return a tuple of configuration files that needs to be updated.
- The tuple contains lists organized like this:
- [protected_dir, file_list]
- If the protected config isn't a protected_dir but a procted_file, list is:
- [protected_file, None]
- If no configuration files needs to be updated, None is returned
- """
+ """
+ Return a tuple of configuration files that need to be updated.
+ The tuple contains lists organized like this:
+ [protected_dir, file_list]
+ If the protected config isn't a protected_dir but a protected_file, the list is:
+ [protected_file, None]
+ If no configuration files need to be updated, None is returned.
+ """
+
+ encoding = _encodings["fs"]
+
+ if config_protect:
+ # directories with some protect files in them
+ for x in config_protect:
+ files = []
+
+ x = os.path.join(target_root, x.lstrip(os.path.sep))
+ if not os.access(x, os.W_OK):
+ continue
+ try:
+ mymode = os.lstat(x).st_mode
+ except OSError:
+ continue
+
+ if stat.S_ISLNK(mymode):
+ # We want to treat it like a directory if it
+ # is a symlink to an existing directory.
+ try:
+ real_mode = os.stat(x).st_mode
+ if stat.S_ISDIR(real_mode):
+ mymode = real_mode
+ except OSError:
+ pass
+
+ if stat.S_ISDIR(mymode):
+ mycommand = (
+ "find '%s' -name '.*' -type d -prune -o -name '._cfg????_*'" % x
+ )
+ else:
+ mycommand = (
+ "find '%s' -maxdepth 1 -name '._cfg????_%s'"
+ % os.path.split(x.rstrip(os.path.sep))
+ )
+ mycommand += " ! -name '.*~' ! -iname '.*.bak' -print0"
+ cmd = shlex_split(mycommand)
+
+ cmd = [
+ _unicode_encode(arg, encoding=encoding, errors="strict") for arg in cmd
+ ]
+ proc = subprocess.Popen(
+ cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+ )
+ output = _unicode_decode(proc.communicate()[0], encoding=encoding)
+ status = proc.wait()
+ if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
+ files = output.split("\0")
+ # split always produces an empty string as the last element
+ if files and not files[-1]:
+ del files[-1]
+ if files:
+ if stat.S_ISDIR(mymode):
+ yield (x, files)
+ else:
+ yield (x, None)
+
+
+ _ld_so_include_re = re.compile(r"^include\s+(\S.*)")
- encoding = _encodings['fs']
-
- if config_protect:
- # directories with some protect files in them
- for x in config_protect:
- files = []
-
- x = os.path.join(target_root, x.lstrip(os.path.sep))
- if not os.access(x, os.W_OK):
- continue
- try:
- mymode = os.lstat(x).st_mode
- except OSError:
- continue
-
- if stat.S_ISLNK(mymode):
- # We want to treat it like a directory if it
- # is a symlink to an existing directory.
- try:
- real_mode = os.stat(x).st_mode
- if stat.S_ISDIR(real_mode):
- mymode = real_mode
- except OSError:
- pass
-
- if stat.S_ISDIR(mymode):
- mycommand = \
- "find '%s' -name '.*' -type d -prune -o -name '._cfg????_*'" % x
- else:
- mycommand = "find '%s' -maxdepth 1 -name '._cfg????_%s'" % \
- os.path.split(x.rstrip(os.path.sep))
- mycommand += " ! -name '.*~' ! -iname '.*.bak' -print0"
- cmd = shlex_split(mycommand)
-
- cmd = [_unicode_encode(arg, encoding=encoding, errors='strict')
- for arg in cmd]
- proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT)
- output = _unicode_decode(proc.communicate()[0], encoding=encoding)
- status = proc.wait()
- if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
- files = output.split('\0')
- # split always produces an empty string as the last element
- if files and not files[-1]:
- del files[-1]
- if files:
- if stat.S_ISDIR(mymode):
- yield (x, files)
- else:
- yield (x, None)
-
- _ld_so_include_re = re.compile(r'^include\s+(\S.*)')
def getlibpaths(root, env=None):
- def read_ld_so_conf(path):
- for l in grabfile(path):
- include_match = _ld_so_include_re.match(l)
- if include_match is not None:
- subpath = os.path.join(os.path.dirname(path),
- include_match.group(1))
- for p in glob.glob(subpath):
- for r in read_ld_so_conf(p):
- yield r
- else:
- yield l
-
- """ Return a list of paths that are used for library lookups """
- if env is None:
- env = os.environ
-
- # PREFIX HACK: LD_LIBRARY_PATH isn't portable, and considered
- # harmfull, so better not use it. We don't need any host OS lib
- # paths either, so do Prefix case.
- if EPREFIX != '':
- rval = []
- rval.append(EPREFIX + "/usr/lib")
- rval.append(EPREFIX + "/lib")
- # we don't know the CHOST here, so it's a bit hard to guess
- # where GCC's and ld's libs are. Though, GCC's libs should be
- # in lib and usr/lib, binutils' libs rarely used
- else:
- # the following is based on the information from ld.so(8)
- rval = env.get("LD_LIBRARY_PATH", "").split(":")
- rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
- rval.append("/usr/lib")
- rval.append("/lib")
-
- return [normalize_path(x) for x in rval if x]
+ def read_ld_so_conf(path):
+ for l in grabfile(path):
+ include_match = _ld_so_include_re.match(l)
+ if include_match is not None:
+ subpath = os.path.join(os.path.dirname(path), include_match.group(1))
+ for p in glob.glob(subpath):
+ for r in read_ld_so_conf(p):
+ yield r
+ else:
+ yield l
+
+ """ Return a list of paths that are used for library lookups """
+ if env is None:
+ env = os.environ
- # the following is based on the information from ld.so(8)
- rval = env.get("LD_LIBRARY_PATH", "").split(":")
- rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
- rval.append("/usr/lib")
- rval.append("/lib")
++ # BEGIN PREFIX LOCAL:
++ # LD_LIBRARY_PATH isn't portable, and is considered harmful, so
++ # better not to use it. We don't need any host OS lib paths either,
++ # so only handle the Prefix case here.
++ if EPREFIX != '':
++ rval = []
++ rval.append(EPREFIX + "/usr/lib")
++ rval.append(EPREFIX + "/lib")
++ # we don't know the CHOST here, so it's a bit hard to guess
++ # where GCC's and ld's libs are. Though, GCC's libs should be
++ # in lib and usr/lib, binutils' libs are rarely used
++ else:
++ # END PREFIX LOCAL
++ # the following is based on the information from ld.so(8)
++ rval = env.get("LD_LIBRARY_PATH", "").split(":")
++ rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
++ rval.append("/usr/lib")
++ rval.append("/lib")
+
+ return [normalize_path(x) for x in rval if x]
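read_ld_so_conf above expands "include" directives relative to the directory of the file currently being read, recursing through glob matches. A standalone sketch of the same parsing with the identical regular expression, reading plain files instead of Portage's grabfile (illustrative only):

    import glob
    import os
    import re

    _include_re = re.compile(r"^include\s+(\S.*)")

    def read_ld_so_conf(path):
        """Yield library directories from path, following include lines."""
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                m = _include_re.match(line)
                if m is None:
                    yield line
                    continue
                # Relative include patterns are resolved against the directory
                # of the file that contains them, then each match is recursed.
                pattern = os.path.join(os.path.dirname(path), m.group(1))
                for sub in glob.glob(pattern):
                    yield from read_ld_so_conf(sub)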
diff --cc lib/portage/util/_info_files.py
index de44b0fdc,528b273d9..2a8d277b3
--- a/lib/portage/util/_info_files.py
+++ b/lib/portage/util/_info_files.py
@@@ -9,131 -9,132 +9,136 @@@ import subproces
import portage
from portage import os
+from portage.const import EPREFIX
+
def chk_updated_info_files(root, infodirs, prev_mtimes):
- if os.path.exists(EPREFIX + "/usr/bin/install-info"):
- out = portage.output.EOutput()
- regen_infodirs = []
- for z in infodirs:
- if z == '':
- continue
- inforoot = portage.util.normalize_path(root + EPREFIX + z)
- if os.path.isdir(inforoot) and \
- not [x for x in os.listdir(inforoot) \
- if x.startswith('.keepinfodir')]:
- infomtime = os.stat(inforoot)[stat.ST_MTIME]
- if inforoot not in prev_mtimes or \
- prev_mtimes[inforoot] != infomtime:
- regen_infodirs.append(inforoot)
- if os.path.exists("/usr/bin/install-info"):
++ # PREFIX LOCAL
++ if os.path.exists(EPREFIX + "/usr/bin/install-info"):
+ out = portage.output.EOutput()
+ regen_infodirs = []
+ for z in infodirs:
+ if z == "":
+ continue
- inforoot = portage.util.normalize_path(root + z)
++ # PREFIX LOCAL
++ inforoot = portage.util.normalize_path(root + EPREFIX + z)
+ if os.path.isdir(inforoot) and not [
+ x for x in os.listdir(inforoot) if x.startswith(".keepinfodir")
+ ]:
+ infomtime = os.stat(inforoot)[stat.ST_MTIME]
+ if inforoot not in prev_mtimes or prev_mtimes[inforoot] != infomtime:
+ regen_infodirs.append(inforoot)
- if not regen_infodirs:
- portage.util.writemsg_stdout("\n")
- if portage.util.noiselimit >= 0:
- out.einfo("GNU info directory index is up-to-date.")
- else:
- portage.util.writemsg_stdout("\n")
- if portage.util.noiselimit >= 0:
- out.einfo("Regenerating GNU info directory index...")
+ if not regen_infodirs:
+ portage.util.writemsg_stdout("\n")
+ if portage.util.noiselimit >= 0:
+ out.einfo("GNU info directory index is up-to-date.")
+ else:
+ portage.util.writemsg_stdout("\n")
+ if portage.util.noiselimit >= 0:
+ out.einfo("Regenerating GNU info directory index...")
- dir_extensions = ("", ".gz", ".bz2")
- icount = 0
- badcount = 0
- errmsg = ""
- for inforoot in regen_infodirs:
- if inforoot == '':
- continue
+ dir_extensions = ("", ".gz", ".bz2")
+ icount = 0
+ badcount = 0
+ errmsg = ""
+ for inforoot in regen_infodirs:
+ if inforoot == "":
+ continue
- if not os.path.isdir(inforoot) or \
- not os.access(inforoot, os.W_OK):
- continue
+ if not os.path.isdir(inforoot) or not os.access(inforoot, os.W_OK):
+ continue
- file_list = os.listdir(inforoot)
- file_list.sort()
- dir_file = os.path.join(inforoot, "dir")
- moved_old_dir = False
- processed_count = 0
- for x in file_list:
- if x.startswith(".") or \
- os.path.isdir(os.path.join(inforoot, x)):
- continue
- if x.startswith("dir"):
- skip = False
- for ext in dir_extensions:
- if x == "dir" + ext or \
- x == "dir" + ext + ".old":
- skip = True
- break
- if skip:
- continue
- if processed_count == 0:
- for ext in dir_extensions:
- try:
- os.rename(dir_file + ext, dir_file + ext + ".old")
- moved_old_dir = True
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- processed_count += 1
- try:
- proc = subprocess.Popen(
- ['%s/usr/bin/install-info' % EPREFIX,
- '--dir-file=%s' % os.path.join(inforoot, "dir"),
- os.path.join(inforoot, x)],
- env=dict(os.environ, LANG="C", LANGUAGE="C"),
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myso = None
- else:
- myso = portage._unicode_decode(
- proc.communicate()[0]).rstrip("\n")
- proc.wait()
- existsstr = "already exists, for file `"
- if myso:
- if re.search(existsstr, myso):
- # Already exists... Don't increment the count for this.
- pass
- elif myso[:44] == "install-info: warning: no info dir entry in ":
- # This info file doesn't contain a DIR-header: install-info produces this
- # (harmless) warning (the --quiet switch doesn't seem to work).
- # Don't increment the count for this.
- pass
- else:
- badcount += 1
- errmsg += myso + "\n"
- icount += 1
+ file_list = os.listdir(inforoot)
+ file_list.sort()
+ dir_file = os.path.join(inforoot, "dir")
+ moved_old_dir = False
+ processed_count = 0
+ for x in file_list:
+ if x.startswith(".") or os.path.isdir(os.path.join(inforoot, x)):
+ continue
+ if x.startswith("dir"):
+ skip = False
+ for ext in dir_extensions:
+ if x == "dir" + ext or x == "dir" + ext + ".old":
+ skip = True
+ break
+ if skip:
+ continue
+ if processed_count == 0:
+ for ext in dir_extensions:
+ try:
+ os.rename(dir_file + ext, dir_file + ext + ".old")
+ moved_old_dir = True
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ processed_count += 1
+ try:
+ proc = subprocess.Popen(
+ [
- "/usr/bin/install-info",
++ # PREFIX LOCAL
++ "%s/usr/bin/install-info" % EPREFIX,
+ "--dir-file=%s" % os.path.join(inforoot, "dir"),
+ os.path.join(inforoot, x),
+ ],
+ env=dict(os.environ, LANG="C", LANGUAGE="C"),
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ )
+ except OSError:
+ myso = None
+ else:
+ myso = portage._unicode_decode(proc.communicate()[0]).rstrip(
+ "\n"
+ )
+ proc.wait()
+ existsstr = "already exists, for file `"
+ if myso:
+ if re.search(existsstr, myso):
+ # Already exists... Don't increment the count for this.
+ pass
+ elif (
+ myso[:44] == "install-info: warning: no info dir entry in "
+ ):
+ # This info file doesn't contain a DIR-header: install-info produces this
+ # (harmless) warning (the --quiet switch doesn't seem to work).
+ # Don't increment the count for this.
+ pass
+ else:
+ badcount += 1
+ errmsg += myso + "\n"
+ icount += 1
- if moved_old_dir and not os.path.exists(dir_file):
- # We didn't generate a new dir file, so put the old file
- # back where it was originally found.
- for ext in dir_extensions:
- try:
- os.rename(dir_file + ext + ".old", dir_file + ext)
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
+ if moved_old_dir and not os.path.exists(dir_file):
+ # We didn't generate a new dir file, so put the old file
+ # back where it was originally found.
+ for ext in dir_extensions:
+ try:
+ os.rename(dir_file + ext + ".old", dir_file + ext)
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
- # Clean dir.old cruft so that they don't prevent
- # unmerge of otherwise empty directories.
- for ext in dir_extensions:
- try:
- os.unlink(dir_file + ext + ".old")
- except EnvironmentError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
+ # Clean dir.old cruft so that they don't prevent
+ # unmerge of otherwise empty directories.
+ for ext in dir_extensions:
+ try:
+ os.unlink(dir_file + ext + ".old")
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
- #update mtime so we can potentially avoid regenerating.
- prev_mtimes[inforoot] = os.stat(inforoot)[stat.ST_MTIME]
+ # update mtime so we can potentially avoid regenerating.
+ prev_mtimes[inforoot] = os.stat(inforoot)[stat.ST_MTIME]
- if badcount:
- out.eerror("Processed %d info files; %d errors." % \
- (icount, badcount))
- portage.util.writemsg_level(errmsg,
- level=logging.ERROR, noiselevel=-1)
- else:
- if icount > 0 and portage.util.noiselimit >= 0:
- out.einfo("Processed %d info files." % (icount,))
+ if badcount:
+ out.eerror("Processed %d info files; %d errors." % (icount, badcount))
+ portage.util.writemsg_level(errmsg, level=logging.ERROR, noiselevel=-1)
+ else:
+ if icount > 0 and portage.util.noiselimit >= 0:
+ out.einfo("Processed %d info files." % (icount,))
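The loop above rebuilds each info "dir" index by invoking install-info once per info file with a pinned C locale, so that the warning strings it prints can be matched reliably. A minimal sketch of a single such invocation (the default install-info path is an assumption here; the EPREFIX handling above is Portage-specific):

    import os
    import subprocess

    def register_info_file(inforoot, info_file, install_info="/usr/bin/install-info"):
        """Run install-info for one file and return its combined output."""
        proc = subprocess.Popen(
            [
                install_info,
                "--dir-file=%s" % os.path.join(inforoot, "dir"),
                os.path.join(inforoot, info_file),
            ],
            # Force the C locale so warnings can be pattern-matched.
            env=dict(os.environ, LANG="C", LANGUAGE="C"),
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        output = proc.communicate()[0].decode(errors="replace").rstrip("\n")
        proc.wait()
        return output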
diff --cc lib/portage/util/_pty.py
index a92f57543,e58f95e0a..40ab1a8db
--- a/lib/portage/util/_pty.py
+++ b/lib/portage/util/_pty.py
@@@ -9,70 -9,68 +9,68 @@@ from portage import o
from portage.output import get_term_size, set_term_size
from portage.util import writemsg
-# Disable the use of openpty on Solaris as it seems Python's openpty
-# implementation doesn't play nice on Solaris with Portage's
-# behaviour causing hangs/deadlocks.
+# Disable the use of openpty on Solaris (and others) as it seems Python's
+# openpty implementation doesn't play nice with Portage's behaviour,
+# causing hangs/deadlocks.
# Additional note for the future: on Interix, pipes do NOT work, so
# _disable_openpty on Interix must *never* be True
-_disable_openpty = platform.system() in ("SunOS",)
+_disable_openpty = platform.system() in ("AIX", "FreeMiNT", "HP-UX", "SunOS")
- _fbsd_test_pty = platform.system() == 'FreeBSD'
+ _fbsd_test_pty = platform.system() == "FreeBSD"
+
def _create_pty_or_pipe(copy_term_size=None):
- """
- Try to create a pty and if then fails then create a normal
- pipe instead.
+ """
+ Try to create a pty and, if that fails, create a normal
+ pipe instead.
- @param copy_term_size: If a tty file descriptor is given
- then the term size will be copied to the pty.
- @type copy_term_size: int
- @rtype: tuple
- @return: A tuple of (is_pty, master_fd, slave_fd) where
- is_pty is True if a pty was successfully allocated, and
- False if a normal pipe was allocated.
- """
+ @param copy_term_size: If a tty file descriptor is given
+ then the term size will be copied to the pty.
+ @type copy_term_size: int
+ @rtype: tuple
+ @return: A tuple of (is_pty, master_fd, slave_fd) where
+ is_pty is True if a pty was successfully allocated, and
+ False if a normal pipe was allocated.
+ """
- got_pty = False
+ got_pty = False
- global _disable_openpty, _fbsd_test_pty
+ global _disable_openpty, _fbsd_test_pty
- if _fbsd_test_pty and not _disable_openpty:
- # Test for python openpty breakage after freebsd7 to freebsd8
- # upgrade, which results in a 'Function not implemented' error
- # and the process being killed.
- pid = os.fork()
- if pid == 0:
- pty.openpty()
- os._exit(os.EX_OK)
- pid, status = os.waitpid(pid, 0)
- if (status & 0xff) == 140:
- _disable_openpty = True
- _fbsd_test_pty = False
+ if _fbsd_test_pty and not _disable_openpty:
+ # Test for python openpty breakage after freebsd7 to freebsd8
+ # upgrade, which results in a 'Function not implemented' error
+ # and the process being killed.
+ pid = os.fork()
+ if pid == 0:
+ pty.openpty()
+ os._exit(os.EX_OK)
+ pid, status = os.waitpid(pid, 0)
+ if (status & 0xFF) == 140:
+ _disable_openpty = True
+ _fbsd_test_pty = False
- if _disable_openpty:
- master_fd, slave_fd = os.pipe()
- else:
- try:
- master_fd, slave_fd = pty.openpty()
- got_pty = True
- except EnvironmentError as e:
- _disable_openpty = True
- writemsg("openpty failed: '%s'\n" % str(e),
- noiselevel=-1)
- del e
- master_fd, slave_fd = os.pipe()
+ if _disable_openpty:
+ master_fd, slave_fd = os.pipe()
+ else:
+ try:
+ master_fd, slave_fd = pty.openpty()
+ got_pty = True
+ except EnvironmentError as e:
+ _disable_openpty = True
+ writemsg("openpty failed: '%s'\n" % str(e), noiselevel=-1)
+ del e
+ master_fd, slave_fd = os.pipe()
- if got_pty:
- # Disable post-processing of output since otherwise weird
- # things like \n -> \r\n transformations may occur.
- mode = termios.tcgetattr(slave_fd)
- mode[1] &= ~termios.OPOST
- termios.tcsetattr(slave_fd, termios.TCSANOW, mode)
+ if got_pty:
+ # Disable post-processing of output since otherwise weird
+ # things like \n -> \r\n transformations may occur.
+ mode = termios.tcgetattr(slave_fd)
+ mode[1] &= ~termios.OPOST
+ termios.tcsetattr(slave_fd, termios.TCSANOW, mode)
- if got_pty and \
- copy_term_size is not None and \
- os.isatty(copy_term_size):
- rows, columns = get_term_size()
- set_term_size(rows, columns, slave_fd)
+ if got_pty and copy_term_size is not None and os.isatty(copy_term_size):
+ rows, columns = get_term_size()
+ set_term_size(rows, columns, slave_fd)
- return (got_pty, master_fd, slave_fd)
+ return (got_pty, master_fd, slave_fd)
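_create_pty_or_pipe prefers a pty, so that build output behaves as if attached to a terminal, but transparently falls back to an ordinary pipe on platforms where openpty misbehaves. A stripped-down standalone sketch of that fallback, without the FreeBSD fork-probe or terminal-size copying (the function name is illustrative):

    import os
    import platform
    import pty

    # Platforms where Python's openpty is known to hang with this usage pattern.
    _NO_OPENPTY = ("AIX", "FreeMiNT", "HP-UX", "SunOS")

    def pty_or_pipe():
        """Return (is_pty, master_fd, slave_fd), preferring a pty."""
        if platform.system() not in _NO_OPENPTY:
            try:
                master_fd, slave_fd = pty.openpty()
                return True, master_fd, slave_fd
            except OSError:
                pass  # fall back to an ordinary pipe below
        master_fd, slave_fd = os.pipe()
        return False, master_fd, slave_fd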
diff --cc lib/portage/util/env_update.py
index 31aacc292,bb0ebf84c..b7f27dfb7
--- a/lib/portage/util/env_update.py
+++ b/lib/portage/util/env_update.py
@@@ -23,374 -28,419 +28,422 @@@ from portage.dbapi.vartree import vartr
from portage.package.ebuild.config import config
- def env_update(makelinks=1, target_root=None, prev_mtimes=None, contents=None,
- env=None, writemsg_level=None, vardbapi=None):
- """
- Parse /etc/env.d and use it to generate /etc/profile.env, csh.env,
- ld.so.conf, and prelink.conf. Finally, run ldconfig. When ldconfig is
- called, its -X option will be used in order to avoid potential
- interference with installed soname symlinks that are required for
- correct operation of FEATURES=preserve-libs for downgrade operations.
- It's not necessary for ldconfig to create soname symlinks, since
- portage will use NEEDED.ELF.2 data to automatically create them
- after src_install if they happen to be missing.
- @param makelinks: True if ldconfig should be called, False otherwise
- @param target_root: root that is passed to the ldconfig -r option,
- defaults to portage.settings["ROOT"].
- @type target_root: String (Path)
- """
- if vardbapi is None:
- if isinstance(env, config):
- vardbapi = vartree(settings=env).dbapi
- else:
- if target_root is None:
- eprefix = portage.settings["EPREFIX"]
- target_root = portage.settings["ROOT"]
- target_eroot = portage.settings['EROOT']
- else:
- eprefix = portage.const.EPREFIX
- target_eroot = os.path.join(target_root,
- eprefix.lstrip(os.sep))
- target_eroot = target_eroot.rstrip(os.sep) + os.sep
- if hasattr(portage, "db") and target_eroot in portage.db:
- vardbapi = portage.db[target_eroot]["vartree"].dbapi
- else:
- settings = config(config_root=target_root,
- target_root=target_root, eprefix=eprefix)
- target_root = settings["ROOT"]
- if env is None:
- env = settings
- vardbapi = vartree(settings=settings).dbapi
-
- # Lock the config memory file to prevent symlink creation
- # in merge_contents from overlapping with env-update.
- vardbapi._fs_lock()
- try:
- return _env_update(makelinks, target_root, prev_mtimes, contents,
- env, writemsg_level)
- finally:
- vardbapi._fs_unlock()
-
- def _env_update(makelinks, target_root, prev_mtimes, contents, env,
- writemsg_level):
- if writemsg_level is None:
- writemsg_level = portage.util.writemsg_level
- if target_root is None:
- target_root = portage.settings["ROOT"]
- if prev_mtimes is None:
- prev_mtimes = portage.mtimedb["ldpath"]
- if env is None:
- settings = portage.settings
- else:
- settings = env
-
- eprefix = settings.get("EPREFIX", portage.const.EPREFIX)
- eprefix_lstrip = eprefix.lstrip(os.sep)
- eroot = normalize_path(os.path.join(target_root, eprefix_lstrip)).rstrip(os.sep) + os.sep
- envd_dir = os.path.join(eroot, "etc", "env.d")
- ensure_dirs(envd_dir, mode=0o755)
- fns = listdir(envd_dir, EmptyOnError=1)
- fns.sort()
- templist = []
- for x in fns:
- if len(x) < 3:
- continue
- if not x[0].isdigit() or not x[1].isdigit():
- continue
- if x.startswith(".") or x.endswith("~") or x.endswith(".bak"):
- continue
- templist.append(x)
- fns = templist
- del templist
-
- space_separated = set(["CONFIG_PROTECT", "CONFIG_PROTECT_MASK"])
- colon_separated = set(["ADA_INCLUDE_PATH", "ADA_OBJECTS_PATH",
- "CLASSPATH", "INFODIR", "INFOPATH", "KDEDIRS", "LDPATH", "MANPATH",
- "PATH", "PKG_CONFIG_PATH", "PRELINK_PATH", "PRELINK_PATH_MASK",
- "PYTHONPATH", "ROOTPATH"])
-
- config_list = []
-
- for x in fns:
- file_path = os.path.join(envd_dir, x)
- try:
- myconfig = getconfig(file_path, expand=False)
- except ParseError as e:
- writemsg("!!! '%s'\n" % str(e), noiselevel=-1)
- del e
- continue
- if myconfig is None:
- # broken symlink or file removed by a concurrent process
- writemsg("!!! File Not Found: '%s'\n" % file_path, noiselevel=-1)
- continue
-
- config_list.append(myconfig)
- if "SPACE_SEPARATED" in myconfig:
- space_separated.update(myconfig["SPACE_SEPARATED"].split())
- del myconfig["SPACE_SEPARATED"]
- if "COLON_SEPARATED" in myconfig:
- colon_separated.update(myconfig["COLON_SEPARATED"].split())
- del myconfig["COLON_SEPARATED"]
-
- env = {}
- specials = {}
- for var in space_separated:
- mylist = []
- for myconfig in config_list:
- if var in myconfig:
- for item in myconfig[var].split():
- if item and not item in mylist:
- mylist.append(item)
- del myconfig[var] # prepare for env.update(myconfig)
- if mylist:
- env[var] = " ".join(mylist)
- specials[var] = mylist
-
- for var in colon_separated:
- mylist = []
- for myconfig in config_list:
- if var in myconfig:
- for item in myconfig[var].split(":"):
- if item and not item in mylist:
- mylist.append(item)
- del myconfig[var] # prepare for env.update(myconfig)
- if mylist:
- env[var] = ":".join(mylist)
- specials[var] = mylist
-
- for myconfig in config_list:
- """Cumulative variables have already been deleted from myconfig so that
- they won't be overwritten by this dict.update call."""
- env.update(myconfig)
-
- ldsoconf_path = os.path.join(eroot, "etc", "ld.so.conf")
- try:
- myld = io.open(_unicode_encode(ldsoconf_path,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['content'], errors='replace')
- myldlines = myld.readlines()
- myld.close()
- oldld = []
- for x in myldlines:
- #each line has at least one char (a newline)
- if x[:1] == "#":
- continue
- oldld.append(x[:-1])
- except (IOError, OSError) as e:
- if e.errno != errno.ENOENT:
- raise
- oldld = None
-
- newld = specials["LDPATH"]
- if oldld != newld:
- #ld.so.conf needs updating and ldconfig needs to be run
- myfd = atomic_ofstream(ldsoconf_path)
- myfd.write("# ld.so.conf autogenerated by env-update; make all changes to\n")
- myfd.write("# contents of /etc/env.d directory\n")
- for x in specials["LDPATH"]:
- myfd.write(x + "\n")
- myfd.close()
-
- potential_lib_dirs = set()
- for lib_dir_glob in ('usr/lib*', 'lib*'):
- x = os.path.join(eroot, lib_dir_glob)
- for y in glob.glob(_unicode_encode(x,
- encoding=_encodings['fs'], errors='strict')):
- try:
- y = _unicode_decode(y,
- encoding=_encodings['fs'], errors='strict')
- except UnicodeDecodeError:
- continue
- if os.path.basename(y) != 'libexec':
- potential_lib_dirs.add(y[len(eroot):])
-
- # Update prelink.conf if we are prelink-enabled
- if prelink_capable:
- prelink_d = os.path.join(eroot, 'etc', 'prelink.conf.d')
- ensure_dirs(prelink_d)
- newprelink = atomic_ofstream(os.path.join(prelink_d, 'portage.conf'))
- newprelink.write("# prelink.conf autogenerated by env-update; make all changes to\n")
- newprelink.write("# contents of /etc/env.d directory\n")
-
- for x in sorted(potential_lib_dirs) + ['bin', 'sbin']:
- newprelink.write('-l /%s\n' % (x,))
- prelink_paths = set()
- prelink_paths |= set(specials.get('LDPATH', []))
- prelink_paths |= set(specials.get('PATH', []))
- prelink_paths |= set(specials.get('PRELINK_PATH', []))
- prelink_path_mask = specials.get('PRELINK_PATH_MASK', [])
- for x in prelink_paths:
- if not x:
- continue
- if x[-1:] != '/':
- x += "/"
- plmasked = 0
- for y in prelink_path_mask:
- if not y:
- continue
- if y[-1] != '/':
- y += "/"
- if y == x[0:len(y)]:
- plmasked = 1
- break
- if not plmasked:
- newprelink.write("-h %s\n" % (x,))
- for x in prelink_path_mask:
- newprelink.write("-b %s\n" % (x,))
- newprelink.close()
-
- # Migration code path. If /etc/prelink.conf was generated by us, then
- # point it to the new stuff until the prelink package re-installs.
- prelink_conf = os.path.join(eroot, 'etc', 'prelink.conf')
- try:
- with open(_unicode_encode(prelink_conf,
- encoding=_encodings['fs'], errors='strict'), 'rb') as f:
- if f.readline() == b'# prelink.conf autogenerated by env-update; make all changes to\n':
- f = atomic_ofstream(prelink_conf)
- f.write('-c /etc/prelink.conf.d/*.conf\n')
- f.close()
- except IOError as e:
- if e.errno != errno.ENOENT:
- raise
-
- current_time = int(time.time())
- mtime_changed = False
-
- lib_dirs = set()
- for lib_dir in set(specials['LDPATH']) | potential_lib_dirs:
- x = os.path.join(eroot, lib_dir.lstrip(os.sep))
- try:
- newldpathtime = os.stat(x)[stat.ST_MTIME]
- lib_dirs.add(normalize_path(x))
- except OSError as oe:
- if oe.errno == errno.ENOENT:
- try:
- del prev_mtimes[x]
- except KeyError:
- pass
- # ignore this path because it doesn't exist
- continue
- raise
- if newldpathtime == current_time:
- # Reset mtime to avoid the potential ambiguity of times that
- # differ by less than 1 second.
- newldpathtime -= 1
- os.utime(x, (newldpathtime, newldpathtime))
- prev_mtimes[x] = newldpathtime
- mtime_changed = True
- elif x in prev_mtimes:
- if prev_mtimes[x] == newldpathtime:
- pass
- else:
- prev_mtimes[x] = newldpathtime
- mtime_changed = True
- else:
- prev_mtimes[x] = newldpathtime
- mtime_changed = True
-
- if makelinks and \
- not mtime_changed and \
- contents is not None:
- libdir_contents_changed = False
- for mypath, mydata in contents.items():
- if mydata[0] not in ("obj", "sym"):
- continue
- head, tail = os.path.split(mypath)
- if head in lib_dirs:
- libdir_contents_changed = True
- break
- if not libdir_contents_changed:
- makelinks = False
-
- if "CHOST" in settings and "CBUILD" in settings and \
- settings["CHOST"] != settings["CBUILD"]:
- ldconfig = find_binary("%s-ldconfig" % settings["CHOST"])
- else:
- ldconfig = os.path.join(eroot, "sbin", "ldconfig")
-
- if ldconfig is None:
- pass
- elif not (os.access(ldconfig, os.X_OK) and os.path.isfile(ldconfig)):
- ldconfig = None
-
- # Only run ldconfig as needed
- if makelinks and ldconfig:
- # ldconfig has very different behaviour between FreeBSD and Linux
- if ostype == "Linux" or ostype.lower().endswith("gnu"):
- # We can't update links if we haven't cleaned other versions first, as
- # an older package installed ON TOP of a newer version will cause ldconfig
- # to overwrite the symlinks we just made. -X means no links. After 'clean'
- # we can safely create links.
- writemsg_level(_(">>> Regenerating %setc/ld.so.cache...\n") % \
- (target_root,))
- os.system("cd / ; %s -X -r '%s'" % (ldconfig, target_root))
- elif ostype in ("FreeBSD", "DragonFly"):
- writemsg_level(_(">>> Regenerating %svar/run/ld-elf.so.hints...\n") % \
- target_root)
- os.system(("cd / ; %s -elf -i " + \
- "-f '%svar/run/ld-elf.so.hints' '%setc/ld.so.conf'") % \
- (ldconfig, target_root, target_root))
-
- del specials["LDPATH"]
-
- notice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
- notice += "# DO NOT EDIT THIS FILE."
- penvnotice = notice + " CHANGES TO STARTUP PROFILES\n"
- cenvnotice = penvnotice[:]
- penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT /etc/profile.env\n\n"
- cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT /etc/csh.env\n\n"
-
- #create /etc/profile.env for bash support
- profile_env_path = os.path.join(eroot, "etc", "profile.env")
- with atomic_ofstream(profile_env_path) as outfile:
- outfile.write(penvnotice)
-
- env_keys = [x for x in env if x != "LDPATH"]
- env_keys.sort()
- for k in env_keys:
- v = env[k]
- if v.startswith('$') and not v.startswith('${'):
- outfile.write("export %s=$'%s'\n" % (k, v[1:]))
- else:
- outfile.write("export %s='%s'\n" % (k, v))
-
- # Create the systemd user environment configuration file
- # /etc/environment.d/10-gentoo-env.conf with the
- # environment configuration from /etc/env.d.
- systemd_environment_dir = os.path.join(eroot, "etc", "environment.d")
- os.makedirs(systemd_environment_dir, exist_ok=True)
-
- systemd_gentoo_env_path = os.path.join(systemd_environment_dir,
- "10-gentoo-env.conf")
- with atomic_ofstream(systemd_gentoo_env_path) as systemd_gentoo_env:
- senvnotice = notice + "\n\n"
- systemd_gentoo_env.write(senvnotice)
-
- for env_key in env_keys:
- # Skip PATH since this makes it impossible to use
- # "systemctl --user import-environment PATH".
- if env_key == 'PATH':
- continue
-
- env_key_value = env[env_key]
-
- # Skip variables with the empty string
- # as value. Those sometimes appear in
- # profile.env (e.g. "export GCC_SPECS=''"),
- # but are invalid in systemd's syntax.
- if not env_key_value:
- continue
-
- # Transform into systemd environment.d
- # conf syntax, basically shell variable
- # assignment (without "export ").
- line = f"{env_key}={env_key_value}\n"
-
- systemd_gentoo_env.write(line)
-
- #create /etc/csh.env for (t)csh support
- outfile = atomic_ofstream(os.path.join(eroot, "etc", "csh.env"))
- outfile.write(cenvnotice)
- for x in env_keys:
- outfile.write("setenv %s '%s'\n" % (x, env[x]))
- outfile.close()
+ def env_update(
+ makelinks=1,
+ target_root=None,
+ prev_mtimes=None,
+ contents=None,
+ env=None,
+ writemsg_level=None,
+ vardbapi=None,
+ ):
+ """
+ Parse /etc/env.d and use it to generate /etc/profile.env, csh.env,
+ ld.so.conf, and prelink.conf. Finally, run ldconfig. When ldconfig is
+ called, its -X option will be used in order to avoid potential
+ interference with installed soname symlinks that are required for
+ correct operation of FEATURES=preserve-libs for downgrade operations.
+ It's not necessary for ldconfig to create soname symlinks, since
+ portage will use NEEDED.ELF.2 data to automatically create them
+ after src_install if they happen to be missing.
+ @param makelinks: True if ldconfig should be called, False otherwise
+ @param target_root: root that is passed to the ldconfig -r option,
+ defaults to portage.settings["ROOT"].
+ @type target_root: String (Path)
+ """
+ if vardbapi is None:
+ if isinstance(env, config):
+ vardbapi = vartree(settings=env).dbapi
+ else:
+ if target_root is None:
+ eprefix = portage.settings["EPREFIX"]
+ target_root = portage.settings["ROOT"]
+ target_eroot = portage.settings["EROOT"]
+ else:
+ eprefix = portage.const.EPREFIX
+ target_eroot = os.path.join(target_root, eprefix.lstrip(os.sep))
+ target_eroot = target_eroot.rstrip(os.sep) + os.sep
+ if hasattr(portage, "db") and target_eroot in portage.db:
+ vardbapi = portage.db[target_eroot]["vartree"].dbapi
+ else:
+ settings = config(
+ config_root=target_root, target_root=target_root, eprefix=eprefix
+ )
+ target_root = settings["ROOT"]
+ if env is None:
+ env = settings
+ vardbapi = vartree(settings=settings).dbapi
+
+ # Lock the config memory file to prevent symlink creation
+ # in merge_contents from overlapping with env-update.
+ vardbapi._fs_lock()
+ try:
+ return _env_update(
+ makelinks, target_root, prev_mtimes, contents, env, writemsg_level
+ )
+ finally:
+ vardbapi._fs_unlock()
+
+
+ def _env_update(makelinks, target_root, prev_mtimes, contents, env, writemsg_level):
+ if writemsg_level is None:
+ writemsg_level = portage.util.writemsg_level
+ if target_root is None:
+ target_root = portage.settings["ROOT"]
+ if prev_mtimes is None:
+ prev_mtimes = portage.mtimedb["ldpath"]
+ if env is None:
+ settings = portage.settings
+ else:
+ settings = env
+
- eprefix = settings.get("EPREFIX", "")
++ # PREFIX LOCAL
++ eprefix = settings.get("EPREFIX", portage.const.EPREFIX)
+ eprefix_lstrip = eprefix.lstrip(os.sep)
+ eroot = (
+ normalize_path(os.path.join(target_root, eprefix_lstrip)).rstrip(os.sep)
+ + os.sep
+ )
+ envd_dir = os.path.join(eroot, "etc", "env.d")
+ ensure_dirs(envd_dir, mode=0o755)
+ fns = listdir(envd_dir, EmptyOnError=1)
+ fns.sort()
+ templist = []
+ for x in fns:
+ if len(x) < 3:
+ continue
+ if not x[0].isdigit() or not x[1].isdigit():
+ continue
+ if x.startswith(".") or x.endswith("~") or x.endswith(".bak"):
+ continue
+ templist.append(x)
+ fns = templist
+ del templist
+
+ space_separated = set(["CONFIG_PROTECT", "CONFIG_PROTECT_MASK"])
+ colon_separated = set(
+ [
+ "ADA_INCLUDE_PATH",
+ "ADA_OBJECTS_PATH",
+ "CLASSPATH",
+ "INFODIR",
+ "INFOPATH",
+ "KDEDIRS",
+ "LDPATH",
+ "MANPATH",
+ "PATH",
+ "PKG_CONFIG_PATH",
+ "PRELINK_PATH",
+ "PRELINK_PATH_MASK",
+ "PYTHONPATH",
+ "ROOTPATH",
+ ]
+ )
+
+ config_list = []
+
+ for x in fns:
+ file_path = os.path.join(envd_dir, x)
+ try:
+ myconfig = getconfig(file_path, expand=False)
+ except ParseError as e:
+ writemsg("!!! '%s'\n" % str(e), noiselevel=-1)
+ del e
+ continue
+ if myconfig is None:
+ # broken symlink or file removed by a concurrent process
+ writemsg("!!! File Not Found: '%s'\n" % file_path, noiselevel=-1)
+ continue
+
+ config_list.append(myconfig)
+ if "SPACE_SEPARATED" in myconfig:
+ space_separated.update(myconfig["SPACE_SEPARATED"].split())
+ del myconfig["SPACE_SEPARATED"]
+ if "COLON_SEPARATED" in myconfig:
+ colon_separated.update(myconfig["COLON_SEPARATED"].split())
+ del myconfig["COLON_SEPARATED"]
+
+ env = {}
+ specials = {}
+ for var in space_separated:
+ mylist = []
+ for myconfig in config_list:
+ if var in myconfig:
+ for item in myconfig[var].split():
+ if item and not item in mylist:
+ mylist.append(item)
+ del myconfig[var] # prepare for env.update(myconfig)
+ if mylist:
+ env[var] = " ".join(mylist)
+ specials[var] = mylist
+
+ for var in colon_separated:
+ mylist = []
+ for myconfig in config_list:
+ if var in myconfig:
+ for item in myconfig[var].split(":"):
+ if item and not item in mylist:
+ mylist.append(item)
+ del myconfig[var] # prepare for env.update(myconfig)
+ if mylist:
+ env[var] = ":".join(mylist)
+ specials[var] = mylist
+
+ for myconfig in config_list:
+ """Cumulative variables have already been deleted from myconfig so that
+ they won't be overwritten by this dict.update call."""
+ env.update(myconfig)
+
+ ldsoconf_path = os.path.join(eroot, "etc", "ld.so.conf")
+ try:
+ myld = io.open(
+ _unicode_encode(ldsoconf_path, encoding=_encodings["fs"], errors="strict"),
+ mode="r",
+ encoding=_encodings["content"],
+ errors="replace",
+ )
+ myldlines = myld.readlines()
+ myld.close()
+ oldld = []
+ for x in myldlines:
+ # each line has at least one char (a newline)
+ if x[:1] == "#":
+ continue
+ oldld.append(x[:-1])
+ except (IOError, OSError) as e:
+ if e.errno != errno.ENOENT:
+ raise
+ oldld = None
+
+ newld = specials["LDPATH"]
+ if oldld != newld:
+ # ld.so.conf needs updating and ldconfig needs to be run
+ myfd = atomic_ofstream(ldsoconf_path)
+ myfd.write("# ld.so.conf autogenerated by env-update; make all changes to\n")
+ myfd.write("# contents of /etc/env.d directory\n")
+ for x in specials["LDPATH"]:
+ myfd.write(x + "\n")
+ myfd.close()
+
+ potential_lib_dirs = set()
+ for lib_dir_glob in ("usr/lib*", "lib*"):
+ x = os.path.join(eroot, lib_dir_glob)
+ for y in glob.glob(
+ _unicode_encode(x, encoding=_encodings["fs"], errors="strict")
+ ):
+ try:
+ y = _unicode_decode(y, encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ if os.path.basename(y) != "libexec":
+ potential_lib_dirs.add(y[len(eroot) :])
+
+ # Update prelink.conf if we are prelink-enabled
+ if prelink_capable:
+ prelink_d = os.path.join(eroot, "etc", "prelink.conf.d")
+ ensure_dirs(prelink_d)
+ newprelink = atomic_ofstream(os.path.join(prelink_d, "portage.conf"))
+ newprelink.write(
+ "# prelink.conf autogenerated by env-update; make all changes to\n"
+ )
+ newprelink.write("# contents of /etc/env.d directory\n")
+
+ for x in sorted(potential_lib_dirs) + ["bin", "sbin"]:
+ newprelink.write("-l /%s\n" % (x,))
+ prelink_paths = set()
+ prelink_paths |= set(specials.get("LDPATH", []))
+ prelink_paths |= set(specials.get("PATH", []))
+ prelink_paths |= set(specials.get("PRELINK_PATH", []))
+ prelink_path_mask = specials.get("PRELINK_PATH_MASK", [])
+ for x in prelink_paths:
+ if not x:
+ continue
+ if x[-1:] != "/":
+ x += "/"
+ plmasked = 0
+ for y in prelink_path_mask:
+ if not y:
+ continue
+ if y[-1] != "/":
+ y += "/"
+ if y == x[0 : len(y)]:
+ plmasked = 1
+ break
+ if not plmasked:
+ newprelink.write("-h %s\n" % (x,))
+ for x in prelink_path_mask:
+ newprelink.write("-b %s\n" % (x,))
+ newprelink.close()
+
+ # Migration code path. If /etc/prelink.conf was generated by us, then
+ # point it to the new stuff until the prelink package re-installs.
+ prelink_conf = os.path.join(eroot, "etc", "prelink.conf")
+ try:
+ with open(
+ _unicode_encode(
+ prelink_conf, encoding=_encodings["fs"], errors="strict"
+ ),
+ "rb",
+ ) as f:
+ if (
+ f.readline()
+ == b"# prelink.conf autogenerated by env-update; make all changes to\n"
+ ):
+ f = atomic_ofstream(prelink_conf)
+ f.write("-c /etc/prelink.conf.d/*.conf\n")
+ f.close()
+ except IOError as e:
+ if e.errno != errno.ENOENT:
+ raise
+
+ current_time = int(time.time())
+ mtime_changed = False
+
+ lib_dirs = set()
+ for lib_dir in set(specials["LDPATH"]) | potential_lib_dirs:
+ x = os.path.join(eroot, lib_dir.lstrip(os.sep))
+ try:
+ newldpathtime = os.stat(x)[stat.ST_MTIME]
+ lib_dirs.add(normalize_path(x))
+ except OSError as oe:
+ if oe.errno == errno.ENOENT:
+ try:
+ del prev_mtimes[x]
+ except KeyError:
+ pass
+ # ignore this path because it doesn't exist
+ continue
+ raise
+ if newldpathtime == current_time:
+ # Reset mtime to avoid the potential ambiguity of times that
+ # differ by less than 1 second.
+ newldpathtime -= 1
+ os.utime(x, (newldpathtime, newldpathtime))
+ prev_mtimes[x] = newldpathtime
+ mtime_changed = True
+ elif x in prev_mtimes:
+ if prev_mtimes[x] == newldpathtime:
+ pass
+ else:
+ prev_mtimes[x] = newldpathtime
+ mtime_changed = True
+ else:
+ prev_mtimes[x] = newldpathtime
+ mtime_changed = True
+
+ if makelinks and not mtime_changed and contents is not None:
+ libdir_contents_changed = False
+ for mypath, mydata in contents.items():
+ if mydata[0] not in ("obj", "sym"):
+ continue
+ head, tail = os.path.split(mypath)
+ if head in lib_dirs:
+ libdir_contents_changed = True
+ break
+ if not libdir_contents_changed:
+ makelinks = False
+
+ if (
+ "CHOST" in settings
+ and "CBUILD" in settings
+ and settings["CHOST"] != settings["CBUILD"]
+ ):
+ ldconfig = find_binary("%s-ldconfig" % settings["CHOST"])
+ else:
+ ldconfig = os.path.join(eroot, "sbin", "ldconfig")
+
+ if ldconfig is None:
+ pass
+ elif not (os.access(ldconfig, os.X_OK) and os.path.isfile(ldconfig)):
+ ldconfig = None
+
+ # Only run ldconfig as needed
+ if makelinks and ldconfig:
+ # ldconfig has very different behaviour between FreeBSD and Linux
+ if ostype == "Linux" or ostype.lower().endswith("gnu"):
+ # We can't update links if we haven't cleaned other versions first, as
+ # an older package installed ON TOP of a newer version will cause ldconfig
+ # to overwrite the symlinks we just made. -X means no links. After 'clean'
+ # we can safely create links.
+ writemsg_level(
+ _(">>> Regenerating %setc/ld.so.cache...\n") % (target_root,)
+ )
+ os.system("cd / ; %s -X -r '%s'" % (ldconfig, target_root))
+ elif ostype in ("FreeBSD", "DragonFly"):
+ writemsg_level(
+ _(">>> Regenerating %svar/run/ld-elf.so.hints...\n") % target_root
+ )
+ os.system(
+ (
+ "cd / ; %s -elf -i "
+ + "-f '%svar/run/ld-elf.so.hints' '%setc/ld.so.conf'"
+ )
+ % (ldconfig, target_root, target_root)
+ )
+
+ del specials["LDPATH"]
+
+ notice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
+ notice += "# DO NOT EDIT THIS FILE."
+ penvnotice = notice + " CHANGES TO STARTUP PROFILES\n"
+ cenvnotice = penvnotice[:]
- penvnotice += "# GO INTO /etc/profile NOT /etc/profile.env\n\n"
- cenvnotice += "# GO INTO /etc/csh.cshrc NOT /etc/csh.env\n\n"
++ # BEGIN PREFIX LOCAL
++ penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT " + eprefix + "/etc/profile.env\n\n"
++ cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT " + eprefix + "/etc/csh.env\n\n"
++ # END PREFIX LOCAL
+
+ # create /etc/profile.env for bash support
+ profile_env_path = os.path.join(eroot, "etc", "profile.env")
+ with atomic_ofstream(profile_env_path) as outfile:
+ outfile.write(penvnotice)
+
+ env_keys = [x for x in env if x != "LDPATH"]
+ env_keys.sort()
+ for k in env_keys:
+ v = env[k]
+ if v.startswith("$") and not v.startswith("${"):
+ outfile.write("export %s=$'%s'\n" % (k, v[1:]))
+ else:
+ outfile.write("export %s='%s'\n" % (k, v))
+
+ # Create the systemd user environment configuration file
+ # /etc/environment.d/10-gentoo-env.conf with the
+ # environment configuration from /etc/env.d.
+ systemd_environment_dir = os.path.join(eroot, "etc", "environment.d")
+ os.makedirs(systemd_environment_dir, exist_ok=True)
+
+ systemd_gentoo_env_path = os.path.join(
+ systemd_environment_dir, "10-gentoo-env.conf"
+ )
+ with atomic_ofstream(systemd_gentoo_env_path) as systemd_gentoo_env:
+ senvnotice = notice + "\n\n"
+ systemd_gentoo_env.write(senvnotice)
+
+ for env_key in env_keys:
+ # Skip PATH since this makes it impossible to use
+ # "systemctl --user import-environment PATH".
+ if env_key == "PATH":
+ continue
+
+ env_key_value = env[env_key]
+
+ # Skip variables with the empty string
+ # as value. Those sometimes appear in
+ # profile.env (e.g. "export GCC_SPECS=''"),
+ # but are invalid in systemd's syntax.
+ if not env_key_value:
+ continue
+
+ # Transform into systemd environment.d
+ # conf syntax, basically shell variable
+ # assignment (without "export ").
+ line = f"{env_key}={env_key_value}\n"
+
+ systemd_gentoo_env.write(line)
+
+ # create /etc/csh.env for (t)csh support
+ outfile = atomic_ofstream(os.path.join(eroot, "etc", "csh.env"))
+ outfile.write(cenvnotice)
+ for x in env_keys:
+ outfile.write("setenv %s '%s'\n" % (x, env[x]))
+ outfile.close()
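
The hunk above rewrites env_update()/_env_update(): cumulative variables named in the SPACE_SEPARATED and COLON_SEPARATED sets are concatenated across all /etc/env.d entries (first occurrence wins for ordering, duplicates are dropped), scalar variables are simply overwritten by later files, and the merged result feeds profile.env, csh.env, ld.so.conf and the prelink/ldconfig bookkeeping. A minimal standalone sketch of just that merge step, assuming plain KEY="value" files and a trimmed-down version of the separator sets from the hunk, is shown below; it is an illustration only, not portage's implementation.

    import os
    import shlex

    SPACE_SEPARATED = {"CONFIG_PROTECT", "CONFIG_PROTECT_MASK"}
    COLON_SEPARATED = {"PATH", "LDPATH", "MANPATH", "PKG_CONFIG_PATH"}

    def read_envd_file(path):
        # Parse KEY="value" lines, skipping comments and blank lines.
        entries = {}
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                entries[key.strip()] = " ".join(shlex.split(value))
        return entries

    def merge_envd(envd_dir):
        # Later files win for scalar variables; cumulative variables are
        # joined with duplicates dropped, preserving first-seen order.
        env, cumulative = {}, {}
        for name in sorted(os.listdir(envd_dir)):
            # Same "two leading digits" filename filter as the hunk above.
            if len(name) < 3 or not name[:2].isdigit():
                continue
            for key, value in read_envd_file(os.path.join(envd_dir, name)).items():
                if key in SPACE_SEPARATED:
                    sep = " "
                elif key in COLON_SEPARATED:
                    sep = ":"
                else:
                    env[key] = value
                    continue
                items = cumulative.setdefault(key, [])
                for item in value.split(sep):
                    if item and item not in items:
                        items.append(item)
        for key, items in cumulative.items():
            env[key] = (" " if key in SPACE_SEPARATED else ":").join(items)
        return env

With two hypothetical files such as 00basic (PATH="/usr/bin") and 50local (PATH="/usr/local/bin:/usr/bin"), merge_envd() returns a single colon-joined PATH with the duplicate dropped, which is essentially what _env_update() writes into profile.env.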
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2021-07-06 7:10 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2021-07-06 7:10 UTC
To: gentoo-commits
commit: d9a23f21950ac8dbd49844ea30aaceee5e9dc983
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 6 07:09:18 2021 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jul 6 07:10:04 2021 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=d9a23f21
Merge tag 'portage-3.0.21' into prefix
portage-3.0.21
Signed-off-by: Zac Medico <zmedico <AT> gentoo.org>
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.github/workflows/ci.yml | 2 +-
NEWS | 12 ++
README | 2 +-
RELEASE-NOTES | 18 +++
bin/eapi.sh | 76 ++++++++-
bin/ebuild-helpers/doconfd | 8 +-
bin/ebuild-helpers/doenvd | 8 +-
bin/ebuild-helpers/doheader | 8 +-
bin/ebuild-helpers/doinitd | 9 +-
bin/ebuild-helpers/dosym | 67 +++++++-
bin/ebuild-ipc.py | 26 ++-
bin/ebuild.sh | 85 +++++++---
bin/estrip | 14 +-
bin/install-qa-check.d/10executable-issues | 2 +-
bin/install-qa-check.d/10ignored-flags | 4 +-
bin/install-qa-check.d/80libraries | 2 +-
bin/isolated-functions.sh | 6 +-
bin/misc-functions.sh | 13 +-
bin/phase-functions.sh | 23 ++-
bin/phase-helpers.sh | 62 +++++--
bin/portageq | 6 +-
bin/regenworld | 7 +-
cnf/sets/portage.conf | 1 +
lib/_emerge/EbuildBuild.py | 25 ++-
lib/_emerge/EbuildPhase.py | 5 +-
lib/_emerge/Package.py | 8 +-
lib/_emerge/PackageUninstall.py | 2 +-
lib/_emerge/Scheduler.py | 36 +++--
lib/_emerge/actions.py | 30 +++-
lib/_emerge/depgraph.py | 7 +-
lib/_emerge/main.py | 4 +-
lib/_emerge/unmerge.py | 19 ++-
lib/portage/__init__.py | 4 +-
lib/portage/_emirrordist/FetchIterator.py | 68 ++++----
lib/portage/_sets/dbapi.py | 8 +-
lib/portage/cache/metadata.py | 4 +-
lib/portage/const.py | 4 +-
lib/portage/dbapi/bintree.py | 7 +-
lib/portage/dbapi/porttree.py | 12 +-
lib/portage/dbapi/vartree.py | 2 +-
lib/portage/dep/__init__.py | 8 +-
lib/portage/dispatch_conf.py | 4 +-
lib/portage/eapi.py | 15 +-
.../package/ebuild/_config/special_env_vars.py | 7 +-
lib/portage/package/ebuild/config.py | 24 ++-
lib/portage/package/ebuild/doebuild.py | 34 +++-
lib/portage/package/ebuild/fetch.py | 87 ++++++----
lib/portage/tests/__init__.py | 127 ++++++++-------
lib/portage/tests/ebuild/test_shell_quote.py | 126 +++++++++++++++
lib/portage/tests/emerge/test_simple.py | 10 +-
lib/portage/tests/resolver/ResolverPlayground.py | 39 ++++-
lib/portage/tests/resolver/test_unmerge_order.py | 179 +++++++++++++++++++++
lib/portage/update.py | 17 +-
lib/portage/util/_async/PipeLogger.py | 35 ++--
lib/portage/util/env_update.py | 5 +
man/ebuild.5 | 33 +++-
man/make.conf.5 | 34 +++-
man/portage.5 | 33 +++-
repoman/lib/repoman/tests/__init__.py | 18 +--
repoman/setup.py | 47 ++++--
setup.py | 125 +++++++++++---
61 files changed, 1360 insertions(+), 353 deletions(-)
diff --cc bin/eapi.sh
index 26df37328,362cc07c0..1aaaa19e8
--- a/bin/eapi.sh
+++ b/bin/eapi.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2012-2018 Gentoo Foundation
+ # Copyright 2012-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# PHASES
diff --cc bin/ebuild-helpers/doconfd
index e32c9d5c0,572629a54..a1b8146f7
--- a/bin/ebuild-helpers/doconfd
+++ b/bin/ebuild-helpers/doconfd
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doenvd
index 4e8068659,f1310c848..1d2c90b8c
--- a/bin/ebuild-helpers/doenvd
+++ b/bin/ebuild-helpers/doenvd
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doheader
index a87536c33,2f21a5a2a..bf11ecdb0
--- a/bin/ebuild-helpers/doheader
+++ b/bin/ebuild-helpers/doheader
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doinitd
index 883858320,1863aedac..ad8bb0347
--- a/bin/ebuild-helpers/doinitd
+++ b/bin/ebuild-helpers/doinitd
@@@ -1,7 -1,9 +1,9 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
+
if [[ $# -lt 1 ]] ; then
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-ipc.py
index c523572a7,fa6ac4395..a0735fee5
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2010-2018 Gentoo Foundation
+ # Copyright 2010-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
# This is a helper which ebuild processes can use
diff --cc bin/ebuild.sh
index dddaada87,5916bedfc..9f0eb24a2
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# Prevent aliases from causing portage to act inappropriately.
diff --cc bin/isolated-functions.sh
index c16f0c42f,b495ae6c7..5232fcf7f
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}/eapi.sh" || exit 1
diff --cc bin/misc-functions.sh
index ed35414b0,bd1fb7553..53a15aef7
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -240,15 -201,18 +240,20 @@@ install_qa_check_elf()
echo "${QA_SONAME_NO_SYMLINK}" > \
"${PORTAGE_BUILDDIR}"/build-info/QA_SONAME_NO_SYMLINK
- if has binchecks ${RESTRICT} && \
- [ -s "${PORTAGE_BUILDDIR}/build-info/NEEDED.ELF.2" ] ; then
- eqawarn "QA Notice: RESTRICT=binchecks prevented checks on these ELF files:"
- eqawarn "$(while read -r x; do x=${x#*;} ; x=${x%%;*} ; echo "${x#${EPREFIX}}" ; done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.ELF.2)"
+ if [[ -s ${PORTAGE_BUILDDIR}/build-info/NEEDED.ELF.2 ]]; then
+ if grep -qs '<stabilize-allarches/>' "${EBUILD%/*}/metadata.xml"; then
+ eqawarn "QA Notice: <stabilize-allarches/> found on package installing ELF files"
+ fi
+
+ if has binchecks ${PORTAGE_RESTRICT}; then
+ eqawarn "QA Notice: RESTRICT=binchecks prevented checks on these ELF files:"
+ eqawarn "$(while read -r x; do x=${x#*;} ; x=${x%%;*} ; echo "${x#${EPREFIX}}" ; done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.ELF.2)"
+ fi
fi
fi
+}
+install_qa_check_misc() {
# Portage regenerates this on the installed system.
rm -f "${ED%/}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
diff --cc bin/phase-functions.sh
index 78bb5caca,0bb5d86e1..9774b9dff
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2019 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# Hardcoded bash lists are needed for backward compatibility with
diff --cc bin/phase-helpers.sh
index a1b7bfe05,94f4f24f2..95f55780d
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
if ___eapi_has_DESTTREE_INSDESTTREE; then
diff --cc lib/_emerge/Package.py
index f970d87f5,e8809a89d..40e595f36
--- a/lib/_emerge/Package.py
+++ b/lib/_emerge/Package.py
@@@ -36,11 -36,11 +36,11 @@@ class Package(Task)
"LICENSE", "MD5", "PDEPEND", "PROVIDES",
"RDEPEND", "repository", "REQUIRED_USE",
"PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
- "SLOT", "USE", "_mtime_"]
+ "SLOT", "USE", "_mtime_", "EPREFIX"]
- _dep_keys = ('BDEPEND', 'DEPEND', 'PDEPEND', 'RDEPEND')
+ _dep_keys = ('BDEPEND', 'DEPEND', 'IDEPEND', 'PDEPEND', 'RDEPEND')
_buildtime_keys = ('BDEPEND', 'DEPEND')
- _runtime_keys = ('PDEPEND', 'RDEPEND')
+ _runtime_keys = ('IDEPEND', 'PDEPEND', 'RDEPEND')
_use_conditional_misc_keys = ('LICENSE', 'PROPERTIES', 'RESTRICT')
UNKNOWN_REPO = _unknown_repo
diff --cc lib/portage/const.py
index 285e262cf,896771e14..892766c68
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -1,12 -1,7 +1,12 @@@
# portage: Constants
- # Copyright 1998-2019 Gentoo Authors
+ # Copyright 1998-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+
import os
# ===========================================================================
diff --cc lib/portage/dbapi/bintree.py
index 6839a7dd9,cb21a8ee5..7e81b7879
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -83,10 -83,10 +83,10 @@@ class bindbapi(fakedbapi)
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
["BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
- "DEPEND", "EAPI", "IUSE", "KEYWORDS",
+ "DEPEND", "EAPI", "IDEPEND", "IUSE", "KEYWORDS",
"LICENSE", "MD5", "PDEPEND", "PROPERTIES",
"PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE", "_mtime_"
+ "SIZE", "SLOT", "USE", "_mtime_", "EPREFIX"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -384,13 -384,13 +384,13 @@@ class binarytree
self._pkgindex_aux_keys = \
["BASE_URI", "BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST",
"DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI", "FETCHCOMMAND",
- "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "IDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
"PKGINDEX_URI", "PROPERTIES", "PROVIDES",
"RDEPEND", "repository", "REQUIRES", "RESTRICT", "RESUMECOMMAND",
- "SIZE", "SLOT", "USE"]
+ "SIZE", "SLOT", "USE", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
- ("BDEPEND", "DEPEND", "LICENSE", "RDEPEND",
+ ("BDEPEND", "DEPEND", "IDEPEND", "LICENSE", "RDEPEND",
"PDEPEND", "PROPERTIES", "RESTRICT")
self._pkgindex_header = None
self._pkgindex_header_keys = set([
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2021-04-16 13:37 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2021-04-16 13:37 UTC
To: gentoo-commits
commit: cc3c972cfcafc20187ee631af4d766a7e4027593
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 16 13:36:20 2021 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Apr 16 13:36:20 2021 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=cc3c972c
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.github/workflows/ci.yml | 2 +-
MANIFEST.in | 3 +
NEWS | 12 +
RELEASE-NOTES | 33 +
bin/chmod-lite | 11 +-
bin/ebuild-ipc | 9 +-
bin/ebuild-pyhelper | 21 +
bin/egencache | 2 -
bin/glsa-check | 3 +-
bin/shelve-utils | 36 +
cnf/make.globals | 1 +
lib/_emerge/BlockerCache.py | 6 +-
lib/_emerge/EbuildPhase.py | 28 +-
lib/_emerge/Package.py | 9 -
lib/_emerge/Scheduler.py | 2 -
lib/_emerge/UseFlagDisplay.py | 2 -
lib/_emerge/create_depgraph_params.py | 30 +-
lib/_emerge/help.py | 2 +-
lib/_emerge/main.py | 2 +
lib/_emerge/resolver/output.py | 2 -
lib/portage/__init__.py | 18 +-
.../_compat_upgrade/binpkg_multi_instance.py | 33 +
lib/portage/_emirrordist/Config.py | 39 +-
lib/portage/_emirrordist/ContentDB.py | 196 ++++
lib/portage/_emirrordist/DeletionIterator.py | 25 +-
lib/portage/_emirrordist/DeletionTask.py | 8 +
lib/portage/_emirrordist/FetchIterator.py | 3 +-
lib/portage/_emirrordist/FetchTask.py | 5 +-
lib/portage/_emirrordist/main.py | 15 +-
lib/portage/_sets/ProfilePackageSet.py | 9 +-
lib/portage/_sets/profiles.py | 10 +-
lib/portage/cache/flat_hash.py | 3 -
lib/portage/dbapi/_VdbMetadataDelta.py | 11 +-
lib/portage/dbapi/bintree.py | 1 +
lib/portage/dbapi/vartree.py | 48 +-
lib/portage/dep/__init__.py | 7 +-
lib/portage/eapi.py | 107 +-
lib/portage/emaint/modules/merges/__init__.py | 14 +-
lib/portage/emaint/modules/merges/merges.py | 11 +-
lib/portage/emaint/modules/sync/sync.py | 40 +-
lib/portage/locks.py | 125 ++-
.../package/ebuild/_config/KeywordsManager.py | 7 +-
.../package/ebuild/_config/LocationsManager.py | 11 +-
lib/portage/package/ebuild/_config/MaskManager.py | 7 +-
lib/portage/package/ebuild/_config/UseManager.py | 12 +-
lib/portage/package/ebuild/config.py | 28 +-
lib/portage/package/ebuild/fetch.py | 195 +++-
lib/portage/repository/config.py | 36 +-
lib/portage/tests/dep/test_isvalidatom.py | 26 +-
lib/portage/tests/ebuild/test_fetch.py | 332 +++++-
lib/portage/tests/emerge/test_simple.py | 4 +-
lib/portage/tests/resolver/ResolverPlayground.py | 10 +-
.../test_build_id_profile_format.py | 14 +-
lib/portage/tests/resolver/test_autounmask.py | 25 +-
.../resolver/test_autounmask_use_slot_conflict.py | 51 +
.../tests/resolver/test_unpack_dependencies.py | 65 --
lib/portage/tests/resolver/test_use_aliases.py | 131 ---
lib/portage/tests/resolver/test_useflags.py | 28 +-
lib/portage/tests/sync/test_sync_local.py | 15 +-
lib/portage/tests/unicode/test_string_format.py | 9 -
lib/portage/tests/util/test_shelve.py | 60 +
lib/portage/util/_async/BuildLogger.py | 12 +-
lib/portage/util/_async/PipeLogger.py | 12 +-
lib/portage/util/_async/PopenProcess.py | 4 +-
lib/portage/util/_async/SchedulerInterface.py | 10 +-
lib/portage/util/_eventloop/EventLoop.py | 1153 --------------------
lib/portage/util/_eventloop/PollConstants.py | 17 -
lib/portage/util/_eventloop/PollSelectAdapter.py | 74 --
lib/portage/util/_eventloop/asyncio_event_loop.py | 7 +-
lib/portage/util/bin_entry_point.py | 35 +
lib/portage/util/digraph.py | 3 -
lib/portage/util/futures/_asyncio/__init__.py | 50 +-
lib/portage/util/futures/_asyncio/process.py | 116 --
lib/portage/util/futures/_asyncio/streams.py | 13 +-
lib/portage/util/futures/_asyncio/tasks.py | 96 --
lib/portage/util/futures/events.py | 186 ----
lib/portage/util/futures/futures.py | 156 +--
lib/portage/util/futures/transports.py | 87 --
lib/portage/util/futures/unix_events.py | 626 +----------
lib/portage/util/path.py | 4 +-
lib/portage/util/shelve.py | 58 +
lib/portage/versions.py | 10 +-
man/emaint.1 | 6 +-
man/emerge.1 | 20 +-
man/emirrordist.1 | 6 +-
man/make.conf.5 | 17 +-
man/portage.5 | 6 +-
pyproject.toml | 6 +
repoman/RELEASE-NOTES | 6 +
repoman/bin/repoman | 4 +-
repoman/cnf/linechecks/linechecks.yaml | 2 +-
repoman/cnf/qa_data/qa_data.yaml | 1 +
repoman/cnf/repository/qa_data.yaml | 1 +
repoman/cnf/repository/repository.yaml | 1 +
repoman/lib/repoman/actions.py | 1 -
repoman/lib/repoman/argparser.py | 5 +-
repoman/lib/repoman/main.py | 43 +-
.../modules/linechecks/deprecated/deprecated.py | 2 +-
.../repoman/modules/linechecks/phases/__init__.py | 6 +
.../lib/repoman/modules/linechecks/phases/phase.py | 132 ++-
repoman/lib/repoman/modules/scan/module.py | 4 +-
repoman/lib/repoman/repos.py | 8 +-
repoman/lib/repoman/tests/simple/test_simple.py | 207 +++-
repoman/man/repoman.1 | 5 +-
repoman/setup.py | 2 +-
setup.py | 107 +-
tox.ini | 2 +-
107 files changed, 2145 insertions(+), 3194 deletions(-)
diff --cc lib/portage/package/ebuild/config.py
index 4a43eaf7b,0d0b51053..f56e39c47
--- a/lib/portage/package/ebuild/config.py
+++ b/lib/portage/package/ebuild/config.py
@@@ -41,8 -41,11 +41,11 @@@ from portage.env.loaders import KeyValu
from portage.exception import InvalidDependString, PortageException
from portage.localization import _
from portage.output import colorize
-from portage.process import fakeroot_capable, sandbox_capable
+from portage.process import fakeroot_capable, sandbox_capable, macossandbox_capable
- from portage.repository.config import load_repository_config
+ from portage.repository.config import (
+ allow_profile_repo_deps,
+ load_repository_config,
+ )
from portage.util import ensure_dirs, getconfig, grabdict, \
grabdict_package, grabfile, grabfile_package, LazyItemsDict, \
normalize_path, shlex_split, stack_dictlist, stack_dicts, stack_lists, \
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2021-01-24 9:02 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2021-01-24 9:02 UTC
To: gentoo-commits
commit: 8f0e38ebd69ed128d6ad7ff59c4d255bfc070d94
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 24 09:01:35 2021 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jan 24 09:01:35 2021 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=8f0e38eb
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
NEWS | 6 ++
RELEASE-NOTES | 20 +++++
bin/clean_locks | 5 +-
bin/dispatch-conf | 17 ++--
bin/ebuild | 4 +-
bin/egencache | 6 +-
bin/emerge | 13 +--
bin/glsa-check | 5 +-
bin/portageq | 18 ++--
bin/quickpkg | 10 +--
bin/regenworld | 6 +-
cnf/sets/portage.conf | 5 ++
doc/config/sets.docbook | 8 ++
lib/_emerge/depgraph.py | 22 +++++
lib/portage/__init__.py | 3 +-
lib/portage/_emirrordist/FetchTask.py | 3 +-
lib/portage/_sets/dbapi.py | 39 ++++++++-
lib/portage/dbapi/bintree.py | 58 ++++++++-----
lib/portage/dbapi/vartree.py | 8 +-
lib/portage/emaint/modules/move/move.py | 13 ++-
lib/portage/package/ebuild/doebuild.py | 13 +--
lib/portage/package/ebuild/fetch.py | 11 ++-
.../repository/storage/hardlink_quarantine.py | 45 ++++------
lib/portage/repository/storage/hardlink_rcu.py | 57 +++++--------
lib/portage/repository/storage/inplace.py | 27 ++----
lib/portage/repository/storage/interface.py | 17 ++--
lib/portage/tests/dbapi/test_auxdb.py | 12 ++-
lib/portage/tests/emerge/test_simple.py | 16 ++--
lib/portage/tests/process/test_AsyncFunction.py | 10 +--
lib/portage/tests/process/test_PipeLogger.py | 22 +++--
.../resolver/test_slot_operator_reverse_deps.py | 98 +++++++++++++++++++++-
lib/portage/tests/update/test_move_ent.py | 7 +-
.../util/futures/asyncio/test_child_watcher.py | 10 +--
.../util/futures/asyncio/test_subprocess_exec.py | 32 ++++---
lib/portage/tests/util/futures/test_retry.py | 28 ++-----
lib/portage/tests/util/test_socks5.py | 4 +-
lib/portage/util/_eventloop/asyncio_event_loop.py | 30 ++++++-
lib/portage/util/_eventloop/global_event_loop.py | 28 +------
lib/portage/util/futures/_asyncio/__init__.py | 30 +++++--
lib/portage/util/futures/_sync_decorator.py | 6 +-
lib/portage/util/futures/compat_coroutine.py | 6 +-
lib/portage/util/socks5.py | 9 +-
man/make.conf.5 | 16 +++-
repoman/lib/repoman/_subprocess.py | 4 -
repoman/lib/repoman/actions.py | 3 +-
repoman/lib/repoman/config.py | 5 --
repoman/lib/repoman/gpg.py | 2 -
repoman/lib/repoman/main.py | 3 +-
repoman/lib/repoman/metadata.py | 1 -
repoman/lib/repoman/modules/commit/manifest.py | 1 -
repoman/lib/repoman/modules/commit/repochecks.py | 1 -
repoman/lib/repoman/modules/linechecks/__init__.py | 1 -
.../modules/linechecks/assignment/__init__.py | 3 +-
.../repoman/modules/linechecks/depend/__init__.py | 3 +-
.../modules/linechecks/deprecated/__init__.py | 3 +-
.../modules/linechecks/deprecated/deprecated.py | 2 +-
.../lib/repoman/modules/linechecks/do/__init__.py | 3 +-
.../repoman/modules/linechecks/eapi/__init__.py | 3 +-
.../repoman/modules/linechecks/emake/__init__.py | 3 +-
.../modules/linechecks/gentoo_header/__init__.py | 3 +-
.../repoman/modules/linechecks/helpers/__init__.py | 3 +-
.../repoman/modules/linechecks/helpers/offset.py | 2 +-
.../repoman/modules/linechecks/nested/__init__.py | 3 +-
.../repoman/modules/linechecks/patches/__init__.py | 3 +-
.../repoman/modules/linechecks/patches/patches.py | 3 +-
.../repoman/modules/linechecks/phases/__init__.py | 3 +-
.../repoman/modules/linechecks/portage/__init__.py | 3 +-
.../repoman/modules/linechecks/quotes/__init__.py | 3 +-
.../lib/repoman/modules/linechecks/uri/__init__.py | 3 +-
repoman/lib/repoman/modules/linechecks/uri/uri.py | 28 +++----
.../lib/repoman/modules/linechecks/use/__init__.py | 3 +-
.../repoman/modules/linechecks/use/builtwith.py | 2 +-
.../repoman/modules/linechecks/useless/__init__.py | 3 +-
.../modules/linechecks/whitespace/__init__.py | 3 +-
.../modules/linechecks/workaround/__init__.py | 3 +-
.../modules/linechecks/workaround/workarounds.py | 2 +-
.../lib/repoman/modules/scan/depend/__init__.py | 3 +-
.../repoman/modules/scan/depend/_depend_checks.py | 4 +-
repoman/lib/repoman/modules/scan/depend/profile.py | 22 ++---
.../repoman/modules/scan/directories/__init__.py | 3 +-
repoman/lib/repoman/modules/scan/eapi/__init__.py | 3 +-
.../lib/repoman/modules/scan/ebuild/__init__.py | 3 +-
.../lib/repoman/modules/scan/eclasses/__init__.py | 3 +-
repoman/lib/repoman/modules/scan/fetch/__init__.py | 3 +-
.../lib/repoman/modules/scan/keywords/__init__.py | 3 +-
.../lib/repoman/modules/scan/manifest/__init__.py | 3 +-
.../lib/repoman/modules/scan/metadata/__init__.py | 3 +-
.../modules/scan/metadata/ebuild_metadata.py | 1 -
.../repoman/modules/scan/metadata/pkgmetadata.py | 5 +-
.../lib/repoman/modules/scan/metadata/restrict.py | 1 -
.../lib/repoman/modules/scan/options/__init__.py | 3 +-
repoman/lib/repoman/modules/vcs/None/status.py | 1 -
repoman/lib/repoman/modules/vcs/__init__.py | 1 -
repoman/lib/repoman/modules/vcs/bzr/changes.py | 2 +-
repoman/lib/repoman/modules/vcs/bzr/status.py | 2 +
repoman/lib/repoman/modules/vcs/cvs/status.py | 2 +-
repoman/lib/repoman/modules/vcs/git/changes.py | 3 +-
repoman/lib/repoman/modules/vcs/git/status.py | 3 +-
repoman/lib/repoman/modules/vcs/hg/changes.py | 3 +-
repoman/lib/repoman/modules/vcs/hg/status.py | 2 +
repoman/lib/repoman/modules/vcs/svn/changes.py | 2 -
repoman/lib/repoman/modules/vcs/svn/status.py | 1 +
repoman/lib/repoman/modules/vcs/vcs.py | 2 -
repoman/lib/repoman/repos.py | 1 +
repoman/lib/repoman/tests/commit/__test__.py | 1 -
repoman/lib/repoman/tests/runTests.py | 8 +-
repoman/lib/repoman/tests/simple/__test__.py | 1 -
repoman/lib/repoman/tests/simple/test_simple.py | 4 +-
repoman/lib/repoman/utilities.py | 6 +-
setup.py | 2 +-
tox.ini | 2 +-
111 files changed, 585 insertions(+), 444 deletions(-)
diff --cc bin/clean_locks
index 25dc62915,e5765fd7e..7959486ac
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,8 -1,9 +1,9 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- import sys, errno
+ import errno
+ import sys
from os import path as osp
if osp.isfile(osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), ".portage_not_installed")):
sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "lib"))
diff --cc bin/dispatch-conf
index 6fe6d332c,0fdfbaa81..d2b034666
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild
index 54c024fd3,0a2b13a13..6f70ee4bf
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
import argparse
diff --cc bin/egencache
index ae54b611c,9b6df2e7d..e083b78d7
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2009-2020 Gentoo Authors
+ # Copyright 2009-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# unicode_literals for compat with TextIOWrapper in Python 2
diff --cc bin/emerge
index 8f1db61a6,813d7bae5..d952840ef
--- a/bin/emerge
+++ b/bin/emerge
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2006-2020 Gentoo Authors
+ # Copyright 2006-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
import platform
diff --cc bin/glsa-check
index a61dee4f8,a3e7aa043..2aada5bee
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
import argparse
diff --cc bin/portageq
index 91b9c1322,67fdc9d38..cb991fef7
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
import argparse
diff --cc bin/quickpkg
index 72fe19c18,1b7ad666c..1bcbda8ba
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
import argparse
diff --cc bin/regenworld
index c195c0b3a,a5b1f0431..e3f852f26
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2020 Gentoo Authors
+ # Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
import sys
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2021-01-04 10:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2021-01-04 10:48 UTC
To: gentoo-commits
commit: 83d49e592f23319504abd28c5a2e52e7bb14a3d2
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 4 10:47:56 2021 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jan 4 10:48:37 2021 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=83d49e59
Merge remote-tracking branch 'origin/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.github/workflows/ci.yml | 43 +++++++
.travis.yml | 40 ------
bin/misc-functions.sh | 2 +-
lib/portage/__init__.py | 3 +-
lib/portage/tests/util/futures/test_retry.py | 181 ++++++++++++++++-----------
lib/portage/util/__init__.py | 9 +-
lib/portage/util/env_update.py | 30 ++---
lib/portage/util/futures/retry.py | 4 +-
repoman/runtests | 5 +-
runtests | 5 +-
src/portage_util_file_copy_reflink_linux.c | 10 +-
src/portage_util_libc.c | 10 +-
tox.ini | 15 ++-
13 files changed, 194 insertions(+), 163 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-12-07 17:28 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-12-07 17:28 UTC
To: gentoo-commits
commit: 042669537ebf956f383ebc5d58809343e5b5d936
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 7 17:27:04 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Dec 7 17:27:04 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=04266953
tarball: drop mercurial remnants
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
tarball.sh | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index e67e7244c..9aaf7e50d 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -19,12 +19,11 @@ if [[ -e ${DEST} ]]; then
fi
./tabcheck.py $(
- find ./ -name .git -o -name .hg -prune -o -type f ! -name '*.py' -print \
+ find ./ -name .git -prune -o -type f ! -name '*.py' -print \
| xargs grep -l "#\!@PREFIX_PORTAGE_PYTHON@" \
| grep -v "^\./repoman/"
- find ./ -name .git -o -name .hg -prune -o -type f -name '*.py' -print \
+ find ./ -name .git -prune -o -type f -name '*.py' -print \
| grep -v "^\./repoman/"
-
)
install -d -m0755 ${DEST}
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-12-07 16:46 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-12-07 16:46 UTC
To: gentoo-commits
commit: 4e4f3561dc84b6188967bec034b5bf4d72a5f5aa
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 7 16:45:14 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Dec 7 16:45:14 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=4e4f3561
Merge branch 'master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
RELEASE-NOTES | 12 +++
lib/_emerge/DepPrioritySatisfiedRange.py | 1 +
lib/_emerge/depgraph.py | 76 +++++++++++++----
lib/portage/__init__.py | 4 +
lib/portage/dbapi/vartree.py | 1 -
lib/portage/dep/dep_check.py | 31 ++++---
lib/portage/package/ebuild/doebuild.py | 1 -
.../tests/resolver/test_circular_choices_rust.py | 94 ++++++++++++++++++++++
lib/portage/tests/resolver/test_merge_order.py | 27 ++++++-
lib/portage/util/_eventloop/asyncio_event_loop.py | 6 +-
lib/portage/util/_eventloop/global_event_loop.py | 7 --
lib/portage/util/futures/_asyncio/__init__.py | 59 ++++++++++++--
setup.py | 2 +-
13 files changed, 272 insertions(+), 49 deletions(-)
diff --cc lib/portage/dbapi/vartree.py
index 794df99c5,f3d74cf82..e65f1c9d3
--- a/lib/portage/dbapi/vartree.py
+++ b/lib/portage/dbapi/vartree.py
@@@ -39,12 -39,8 +39,11 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util._xattr:xattr',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.util._dyn_libs.NeededEntry:NeededEntry',
'portage.util._async.SchedulerInterface:SchedulerInterface',
- 'portage.util._eventloop.EventLoop:EventLoop',
'portage.util._eventloop.global_event_loop:global_event_loop',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
'_get_slot_re,_pkgsplit@pkgsplit,_pkg_str,_unknown_repo',
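
The lazyimport entries added above follow the same 'module:Name' string convention as the existing ones, with an optional '@Alias' suffix that rebinds the imported name locally (LinkageMapELF is exposed as LinkageMap, for example); the real import is deferred until the name is first needed. A rough sketch of how such spec strings can be parsed and resolved lazily, written as an illustration of the convention rather than portage.proxy.lazyimport itself, follows.

    import importlib

    class LazySpec:
        # Parses 'module:Name' or 'module:Name@Alias' and imports on demand.
        def __init__(self, spec):
            self.module, _, rest = spec.partition(":")
            self.name, _, alias = rest.partition("@")
            self.alias = alias or self.name
            self._target = None

        def resolve(self):
            # Perform the actual import only on first use.
            if self._target is None:
                module = importlib.import_module(self.module)
                self._target = getattr(module, self.name)
            return self._target

    def lazyimport(namespace, *specs):
        # Bind each alias in the given namespace to a deferred resolver.
        for spec in specs:
            entry = LazySpec(spec)
            namespace[entry.alias] = entry

Under this sketch, lazyimport(globals(), 'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap') leaves a LinkageMap resolver in the namespace whose resolve() call triggers the import, which mirrors why the Mach-O, PE/COFF and XCOFF LinkageMap variants can be listed here without being imported on every startup.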
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-11-23 7:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-11-23 7:48 UTC
To: gentoo-commits
commit: 1289ca4c5b58d96173c4b37600b3ff1ab1ff8173
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Nov 23 07:47:31 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Nov 23 07:47:31 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=1289ca4c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
lib/portage/util/_compare_files.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-11-22 11:15 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-11-22 11:15 UTC
To: gentoo-commits
commit: 7238a400b115fca23ba39b1b1ce123f2ac1bf993
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 22 11:14:34 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Nov 22 11:15:16 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=7238a400
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
RELEASE-NOTES | 16 ++
bin/isolated-functions.sh | 3 +-
bin/phase-helpers.sh | 3 +-
bin/pid-ns-init | 24 ++-
cnf/repo.postsync.d/example | 3 +-
cnf/sets/portage.conf | 6 +
doc/package/ebuild/eapi/4.docbook | 2 +-
lib/_emerge/DepPriorityNormalRange.py | 2 +
lib/_emerge/DepPrioritySatisfiedRange.py | 53 +++---
lib/_emerge/actions.py | 19 +-
lib/_emerge/depgraph.py | 56 ++++--
lib/_emerge/main.py | 5 +
lib/portage/cache/template.py | 2 +-
.../package/ebuild/_config/KeywordsManager.py | 2 +-
.../package/ebuild/_config/LocationsManager.py | 30 ++-
lib/portage/package/ebuild/_config/UseManager.py | 4 +-
lib/portage/package/ebuild/config.py | 9 +-
.../package/ebuild/deprecated_profile_check.py | 9 +-
lib/portage/tests/emerge/test_simple.py | 8 +-
lib/portage/tests/resolver/test_merge_order.py | 10 +
.../tests/resolver/test_slot_operator_bdeps.py | 209 +++++++++++++++++++++
lib/portage/util/_compare_files.py | 3 +-
lib/portage/util/movefile.py | 10 +-
lib/portage/util/netlink.py | 2 +-
man/emerge.1 | 10 +-
man/make.conf.5 | 2 +-
repoman/RELEASE-NOTES | 5 +
repoman/cnf/qa_data/qa_data.yaml | 1 -
repoman/cnf/repository/qa_data.yaml | 1 -
repoman/lib/repoman/modules/scan/fetch/fetches.py | 7 +-
repoman/man/repoman.1 | 2 +-
repoman/setup.py | 2 +-
setup.py | 2 +-
33 files changed, 430 insertions(+), 92 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-09-26 11:29 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-09-26 11:29 UTC
To: gentoo-commits
commit: c55afba42611e49eb49896c603b5329a8b3d18ca
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Sep 26 11:25:03 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Sep 26 11:28:52 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=c55afba4
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.editorconfig | 5 +-
DEVELOPING | 18 ++-
NEWS | 15 ++
RELEASE-NOTES | 64 ++++++++
bin/archive-conf | 4 +-
bin/check-implicit-pointer-usage.py | 2 -
bin/clean_locks | 4 +-
bin/dispatch-conf | 4 +-
bin/dohtml.py | 4 +-
bin/doins.py | 6 +-
bin/ebuild | 4 +-
bin/egencache | 126 +++++++++++----
bin/emaint | 4 +-
bin/emerge | 4 +-
bin/env-update | 4 +-
bin/fixpackages | 4 +-
bin/glsa-check | 4 +-
bin/portageq | 4 +-
bin/quickpkg | 6 +-
bin/regenworld | 4 +-
cnf/repo.postsync.d/example | 14 +-
doc/api/conf.py | 2 +-
lib/_emerge/AbstractEbuildProcess.py | 3 +-
lib/_emerge/AbstractPollTask.py | 3 +-
lib/_emerge/Binpkg.py | 8 +-
lib/_emerge/BinpkgEnvExtractor.py | 11 +-
lib/_emerge/BinpkgFetcher.py | 29 ++--
lib/_emerge/DependencyArg.py | 3 -
lib/_emerge/EbuildBinpkg.py | 4 +-
lib/_emerge/EbuildBuild.py | 4 +-
lib/_emerge/EbuildPhase.py | 17 +-
lib/_emerge/FakeVartree.py | 7 +-
lib/_emerge/FifoIpcDaemon.py | 2 +-
lib/_emerge/MetadataRegen.py | 7 +-
lib/_emerge/Package.py | 1 -
lib/_emerge/PollScheduler.py | 4 +-
lib/_emerge/Scheduler.py | 148 ++++++++++++------
lib/_emerge/SequentialTaskQueue.py | 4 +-
lib/_emerge/SpawnProcess.py | 27 +++-
lib/_emerge/UseFlagDisplay.py | 1 -
lib/_emerge/UserQuery.py | 2 -
lib/_emerge/actions.py | 51 +++---
lib/_emerge/chk_updated_cfg_files.py | 2 -
lib/_emerge/countdown.py | 4 +-
lib/_emerge/depgraph.py | 44 ++++--
lib/_emerge/help.py | 4 +-
lib/_emerge/main.py | 13 +-
lib/_emerge/post_emerge.py | 4 +-
lib/_emerge/resolver/backtracking.py | 7 +-
lib/_emerge/resolver/circular_dependency.py | 14 +-
lib/_emerge/resolver/output.py | 39 +----
lib/_emerge/resolver/output_helpers.py | 115 +-------------
lib/_emerge/resolver/package_tracker.py | 4 +-
lib/_emerge/resolver/slot_collision.py | 143 ++++++++---------
lib/_emerge/search.py | 20 ++-
lib/_emerge/stdout_spinner.py | 2 +-
lib/_emerge/unmerge.py | 34 ++--
lib/portage/__init__.py | 17 ++
lib/portage/_emirrordist/Config.py | 1 -
lib/portage/_emirrordist/FetchTask.py | 8 +-
lib/portage/_emirrordist/main.py | 2 +-
lib/portage/_global_updates.py | 4 +-
lib/portage/_selinux.py | 16 +-
lib/portage/_sets/__init__.py | 12 +-
lib/portage/_sets/base.py | 22 +--
lib/portage/_sets/dbapi.py | 87 ++++-------
lib/portage/_sets/files.py | 14 +-
lib/portage/_sets/libs.py | 4 +-
lib/portage/_sets/security.py | 16 +-
lib/portage/_sets/shell.py | 4 +-
.../{_compat_upgrade => binrepo}/__init__.py | 0
lib/portage/binrepo/config.py | 133 ++++++++++++++++
lib/portage/cache/anydbm.py | 5 +-
lib/portage/cache/ebuild_xattr.py | 1 -
lib/portage/cache/flat_hash.py | 2 +-
lib/portage/cache/fs_template.py | 2 +-
lib/portage/cache/mappings.py | 4 +-
lib/portage/cache/metadata.py | 4 +-
lib/portage/cache/sql_template.py | 24 +--
lib/portage/cache/sqlite.py | 37 +++--
lib/portage/cache/template.py | 6 +-
lib/portage/checksum.py | 19 +--
lib/portage/const.py | 2 +
lib/portage/cvstree.py | 8 +-
lib/portage/dbapi/IndexedPortdb.py | 3 +-
lib/portage/dbapi/__init__.py | 6 +-
lib/portage/dbapi/bintree.py | 75 ++++++---
lib/portage/dbapi/cpv_expand.py | 4 +-
lib/portage/dbapi/porttree.py | 12 +-
lib/portage/dbapi/vartree.py | 15 +-
lib/portage/dbapi/virtual.py | 4 +-
lib/portage/dep/__init__.py | 46 +++---
lib/portage/dep/dep_check.py | 4 +-
lib/portage/dep/soname/SonameAtom.py | 3 -
lib/portage/dep/soname/multilib_category.py | 11 +-
lib/portage/dispatch_conf.py | 5 +-
lib/portage/elog/__init__.py | 4 +-
lib/portage/elog/messages.py | 14 +-
lib/portage/elog/mod_custom.py | 9 +-
lib/portage/elog/mod_echo.py | 5 +-
lib/portage/elog/mod_mail.py | 10 +-
lib/portage/elog/mod_mail_summary.py | 9 +-
lib/portage/elog/mod_save_summary.py | 2 +-
lib/portage/elog/mod_syslog.py | 4 +-
lib/portage/emaint/main.py | 4 +-
lib/portage/emaint/modules/merges/merges.py | 4 +-
lib/portage/emaint/modules/move/move.py | 5 +-
lib/portage/emaint/modules/sync/sync.py | 4 -
lib/portage/env/config.py | 22 +--
lib/portage/env/loaders.py | 22 +--
lib/portage/env/validators.py | 4 +-
lib/portage/exception.py | 2 +-
lib/portage/getbinpkg.py | 40 ++---
lib/portage/glsa.py | 36 ++---
lib/portage/localization.py | 6 +-
lib/portage/locks.py | 10 +-
lib/portage/mail.py | 2 -
lib/portage/manifest.py | 32 ++--
lib/portage/metadata.py | 16 +-
lib/portage/module.py | 5 +-
lib/portage/news.py | 9 +-
lib/portage/output.py | 10 +-
.../package/ebuild/_config/KeywordsManager.py | 5 +-
.../package/ebuild/_config/LicenseManager.py | 4 +-
lib/portage/package/ebuild/_spawn_nofetch.py | 6 +-
lib/portage/package/ebuild/config.py | 12 +-
.../package/ebuild/deprecated_profile_check.py | 4 +-
lib/portage/package/ebuild/doebuild.py | 2 +-
lib/portage/package/ebuild/fetch.py | 10 +-
lib/portage/package/ebuild/getmaskingreason.py | 4 +-
lib/portage/package/ebuild/getmaskingstatus.py | 4 +-
lib/portage/package/ebuild/prepare_build_dirs.py | 1 -
lib/portage/process.py | 32 ++--
lib/portage/repository/config.py | 2 +-
.../repository/storage/hardlink_quarantine.py | 26 +--
lib/portage/repository/storage/hardlink_rcu.py | 34 ++--
lib/portage/repository/storage/inplace.py | 10 +-
lib/portage/repository/storage/interface.py | 10 +-
lib/portage/sync/controller.py | 8 +-
lib/portage/sync/modules/git/git.py | 14 +-
lib/portage/sync/modules/mercurial/__init__.py | 39 +++++
lib/portage/sync/modules/mercurial/mercurial.py | 174 +++++++++++++++++++++
lib/portage/sync/modules/rsync/rsync.py | 64 ++++----
lib/portage/sync/modules/webrsync/webrsync.py | 4 +-
lib/portage/sync/old_tree_timestamp.py | 4 +-
lib/portage/sync/syncbase.py | 23 ++-
lib/portage/tests/__init__.py | 4 +-
lib/portage/tests/dbapi/test_auxdb.py | 43 ++++-
lib/portage/tests/dep/testAtom.py | 6 +-
lib/portage/tests/dep/testExtractAffectingUSE.py | 6 +-
lib/portage/tests/dep/test_dep_getcpv.py | 4 +-
lib/portage/tests/ebuild/test_config.py | 4 +-
lib/portage/tests/emerge/test_simple.py | 23 ++-
.../tests/env/config/test_PackageMaskFile.py | 8 +-
lib/portage/tests/lafilefixer/test_lafilefixer.py | 4 +-
lib/portage/tests/locks/test_lock_nonblock.py | 1 +
lib/portage/tests/process/test_AsyncFunction.py | 28 +++-
lib/portage/tests/process/test_PipeLogger.py | 2 +-
lib/portage/tests/resolver/ResolverPlayground.py | 8 +-
.../tests/resolver/test_circular_dependencies.py | 10 +-
lib/portage/tests/resolver/test_eapi.py | 80 +++++-----
lib/portage/tests/resolver/test_merge_order.py | 3 +-
.../test_missing_iuse_and_evaluated_atoms.py | 8 +-
.../tests/resolver/test_old_dep_chain_display.py | 6 +-
lib/portage/tests/resolver/test_required_use.py | 4 +-
lib/portage/tests/resolver/test_simple.py | 4 +-
.../resolver/test_slot_change_without_revbump.py | 4 +-
lib/portage/tests/resolver/test_slot_collisions.py | 8 +-
.../tests/resolver/test_slot_conflict_rebuild.py | 61 +++++++-
.../test_slot_conflict_unsatisfied_deep_deps.py | 10 +-
.../resolver/test_slot_operator_missed_update.py | 112 +++++++++++++
lib/portage/tests/runTests.py | 7 +-
lib/portage/tests/sets/shell/testShell.py | 4 +-
lib/portage/tests/sync/test_sync_local.py | 67 +++++++-
.../tests/util/eventloop/test_call_soon_fifo.py | 4 +-
.../util/futures/asyncio/test_child_watcher.py | 4 +-
.../futures/asyncio/test_event_loop_in_fork.py | 23 +--
.../util/futures/asyncio/test_wakeup_fd_sigchld.py | 2 +-
.../tests/util/futures/test_compat_coroutine.py | 45 +++---
lib/portage/tests/util/test_getconfig.py | 3 +-
lib/portage/tests/util/test_grabdict.py | 4 +-
lib/portage/tests/util/test_normalizedPath.py | 6 +-
lib/portage/tests/util/test_socks5.py | 2 +-
lib/portage/tests/util/test_xattr.py | 4 +-
lib/portage/tests/xpak/test_decodeint.py | 4 +-
lib/portage/update.py | 2 +-
lib/portage/util/__init__.py | 8 +-
lib/portage/util/_async/BuildLogger.py | 31 ++--
lib/portage/util/_async/ForkProcess.py | 6 +-
lib/portage/util/_async/PipeLogger.py | 6 +-
lib/portage/util/_async/SchedulerInterface.py | 4 +-
lib/portage/util/_desktop_entry.py | 9 +-
lib/portage/util/_dyn_libs/LinkageMapELF.py | 4 +-
lib/portage/util/_dyn_libs/NeededEntry.py | 4 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 10 +-
.../util/_dyn_libs/display_preserved_libs.py | 6 +-
lib/portage/util/_eventloop/EventLoop.py | 11 +-
lib/portage/util/_eventloop/PollSelectAdapter.py | 5 +-
lib/portage/util/_eventloop/asyncio_event_loop.py | 4 +-
lib/portage/util/_eventloop/global_event_loop.py | 7 +-
lib/portage/util/_urlopen.py | 10 +-
lib/portage/util/_xattr.py | 4 +-
lib/portage/util/env_update.py | 46 +++++-
lib/portage/util/futures/_asyncio/process.py | 16 +-
lib/portage/util/futures/_asyncio/tasks.py | 6 +-
lib/portage/util/futures/_sync_decorator.py | 3 +-
lib/portage/util/futures/compat_coroutine.py | 7 +-
lib/portage/util/futures/executor/fork.py | 2 +-
lib/portage/util/futures/iter_completed.py | 4 +-
lib/portage/util/locale.py | 5 +-
lib/portage/util/movefile.py | 4 +-
lib/portage/util/netlink.py | 3 +-
lib/portage/util/socks5.py | 13 +-
lib/portage/util/whirlpool.py | 2 -
lib/portage/xml/metadata.py | 24 +--
lib/portage/xpak.py | 18 +--
man/egencache.1 | 12 +-
man/emerge.1 | 20 ++-
man/make.conf.5 | 11 +-
man/portage.5 | 118 +++++++++++---
pylintrc | 26 ++-
repoman/RELEASE-NOTES | 11 ++
repoman/bin/repoman | 5 +-
repoman/lib/repoman/actions.py | 4 +-
repoman/lib/repoman/argparser.py | 9 ++
repoman/lib/repoman/errors.py | 2 -
repoman/lib/repoman/gpg.py | 2 -
repoman/lib/repoman/main.py | 5 +-
repoman/lib/repoman/metadata.py | 2 -
repoman/lib/repoman/modules/commit/repochecks.py | 2 -
.../modules/linechecks/deprecated/inherit.py | 1 -
repoman/lib/repoman/modules/scan/depend/profile.py | 117 +++++++++++---
repoman/lib/repoman/modules/scan/ebuild/ebuild.py | 3 +-
repoman/lib/repoman/modules/vcs/settings.py | 3 +-
repoman/lib/repoman/modules/vcs/vcs.py | 3 +-
repoman/lib/repoman/profile.py | 2 -
repoman/lib/repoman/scanner.py | 18 +--
repoman/lib/repoman/tests/__init__.py | 4 +-
repoman/lib/repoman/utilities.py | 4 +-
repoman/man/repoman.1 | 9 +-
repoman/runtests | 4 +-
repoman/setup.py | 6 +-
runtests | 4 +-
setup.py | 4 +-
tabcheck.py | 3 +-
tox.ini | 2 +
246 files changed, 2385 insertions(+), 1464 deletions(-)
diff --cc bin/archive-conf
index 6271b833c,bfc54a629..11e1d25b7
--- a/bin/archive-conf
+++ b/bin/archive-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/clean_locks
index a35b9f73a,d1f296065..25dc62915
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import sys, errno
from os import path as osp
if osp.isfile(osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), ".portage_not_installed")):
diff --cc bin/dispatch-conf
index 2a9db88a9,fa047244a..6fe6d332c
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2019 Gentoo Authors
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/dohtml.py
index 7be1241eb,6a1ed10fe..2e25a4b02
--- a/bin/dohtml.py
+++ b/bin/dohtml.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild
index d3961ddc5,09f7f839b..54c024fd3
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2019 Gentoo Authors
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import argparse
import platform
import signal
diff --cc bin/egencache
index 1dc94b790,4ee63edad..ae54b611c
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2009-2015 Gentoo Foundation
+ # Copyright 2009-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# unicode_literals for compat with TextIOWrapper in Python 2
diff --cc bin/emaint
index ea97c9b04,5cb667f28..af5234183
--- a/bin/emaint
+++ b/bin/emaint
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2005-2014 Gentoo Foundation
+ # Copyright 2005-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
"""System health checks and maintenance utilities.
diff --cc bin/emerge
index 65547e390,f0a2b8429..8f1db61a6
--- a/bin/emerge
+++ b/bin/emerge
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2006-2014 Gentoo Foundation
+ # Copyright 2006-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import platform
import signal
import sys
diff --cc bin/env-update
index 6ffadc638,6571b0011..5c2df8544
--- a/bin/env-update
+++ b/bin/env-update
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import errno
import sys
diff --cc bin/fixpackages
index f43506600,e56d26ec1..5c4185071
--- a/bin/fixpackages
+++ b/bin/fixpackages
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import argparse
import os
import sys
diff --cc bin/glsa-check
index 64a4ea617,8200f75b6..a61dee4f8
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2019 Gentoo Authors
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import argparse
import re
import sys
diff --cc bin/portageq
index d54f8d02b,dbcd9f62d..91b9c1322
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import argparse
import signal
import sys
diff --cc bin/quickpkg
index 17de837f7,a171b3bd5..72fe19c18
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import division, print_function
-
import argparse
import errno
import math
diff --cc bin/regenworld
index 45394ab5b,9f33502c6..c195c0b3a
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,9 -1,7 +1,7 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
-
import sys
from os import path as osp
if osp.isfile(osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), ".portage_not_installed")):
diff --cc lib/_emerge/actions.py
index 961ecf3d1,5e8a46957..a2f21ed2f
--- a/lib/_emerge/actions.py
+++ b/lib/_emerge/actions.py
@@@ -34,11 -31,9 +31,10 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage import os
from portage import shutil
- from portage import eapi_is_supported, _encodings, _unicode_decode
- from portage.cache.cache_errors import CacheError
+ from portage import _encodings, _unicode_decode
+ from portage.binrepo.config import BinRepoConfigLoader
+ from portage.const import BINREPOS_CONF_FILE, _DEPCLEAN_LIB_CHECK_DEFAULT
+from portage.const import EPREFIX
- from portage.const import GLOBAL_CONFIG_PATH, VCS_DIRS, _DEPCLEAN_LIB_CHECK_DEFAULT
- from portage.const import SUPPORTED_BINPKG_FORMATS, TIMESTAMP_FORMAT
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
from portage.dbapi.IndexedPortdb import IndexedPortdb
diff --cc lib/portage/dbapi/bintree.py
index 9111cec06,7e24589e5..22bc8b7b1
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -380,11 -382,11 +382,11 @@@ class binarytree
self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
["BASE_URI", "BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST",
- "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI", "FETCHCOMMAND",
"IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
"PKGINDEX_URI", "PROPERTIES", "PROVIDES",
- "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT", "RESUMECOMMAND",
- "SIZE", "SLOT", "USE"]
+ "SIZE", "SLOT", "USE", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("BDEPEND", "DEPEND", "LICENSE", "RDEPEND",
diff --cc lib/portage/package/ebuild/config.py
index b1b2b47d7,a09fdbced..8e95fdf3b
--- a/lib/portage/package/ebuild/config.py
+++ b/lib/portage/package/ebuild/config.py
@@@ -39,11 -38,10 +38,10 @@@ from portage.dep import Atom, isvalidat
from portage.eapi import (eapi_exports_AA, eapi_exports_merge_type,
eapi_supports_prefix, eapi_exports_replace_vars, _get_eapi_attrs)
from portage.env.loaders import KeyValuePairFileLoader
- from portage.exception import InvalidDependString, IsADirectory, \
- PortageException
+ from portage.exception import InvalidDependString, PortageException
from portage.localization import _
from portage.output import colorize
-from portage.process import fakeroot_capable, sandbox_capable
+from portage.process import fakeroot_capable, sandbox_capable, macossandbox_capable
from portage.repository.config import load_repository_config
from portage.util import ensure_dirs, getconfig, grabdict, \
grabdict_package, grabfile, grabfile_package, LazyItemsDict, \
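
The config.py hunk above imports a macossandbox_capable flag alongside the existing fakeroot_capable and sandbox_capable flags. As a rough sketch only (this is not the portage.process code, just an illustration of what such a capability probe could look like; the man/make.conf.5 hunk later in this thread notes that /usr/bin/sandbox-exec is used on Mac OS X 10.5 and later):

    import os
    import platform

    def macossandbox_capable():
        # Hypothetical probe: true only on macOS hosts that ship sandbox-exec.
        return platform.system() == "Darwin" and os.access(
            "/usr/bin/sandbox-exec", os.X_OK
        )
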
diff --cc lib/portage/tests/runTests.py
index 2f9a7ad47,9514abebe..271a959e9
--- a/lib/portage/tests/runTests.py
+++ b/lib/portage/tests/runTests.py
@@@ -1,11 -1,11 +1,11 @@@
-#!/usr/bin/python -bWd
+#!@PREFIX_PORTAGE_PYTHON@ -bWd
# runTests.py -- Portage Unit Test Functionality
- # Copyright 2006-2014 Gentoo Foundation
+ # Copyright 2006-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- import os, sys
- import os.path as osp
import grp
+ import os
+ import os.path as osp
import platform
import pwd
import signal
diff --cc lib/portage/util/env_update.py
index 59f5bb9cc,dec086cf8..2e0e037dd
--- a/lib/portage/util/env_update.py
+++ b/lib/portage/util/env_update.py
@@@ -333,14 -333,16 +333,16 @@@ def _env_update(makelinks, target_root
del specials["LDPATH"]
- penvnotice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
- penvnotice += "# DO NOT EDIT THIS FILE. CHANGES TO STARTUP PROFILES\n"
+ notice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
+ notice += "# DO NOT EDIT THIS FILE."
+ penvnotice = notice + " CHANGES TO STARTUP PROFILES\n"
cenvnotice = penvnotice[:]
- penvnotice += "# GO INTO /etc/profile NOT /etc/profile.env\n\n"
- cenvnotice += "# GO INTO /etc/csh.cshrc NOT /etc/csh.env\n\n"
+ penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT /etc/profile.env\n\n"
+ cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT /etc/csh.env\n\n"
#create /etc/profile.env for bash support
- outfile = atomic_ofstream(os.path.join(eroot, "etc", "profile.env"))
+ profile_env_path = os.path.join(eroot, "etc", "profile.env")
+ outfile = atomic_ofstream(profile_env_path)
outfile.write(penvnotice)
env_keys = [x for x in env if x != "LDPATH"]
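
The env_update.py hunk above makes the auto-generated headers of profile.env and csh.env point at the prefix-qualified configuration files. A quick illustration of the resulting text (not from the commit; the prefix value is hypothetical):

    eprefix = "/home/user/gentoo"  # hypothetical EPREFIX offset
    notice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
    notice += "# DO NOT EDIT THIS FILE."
    penvnotice = notice + " CHANGES TO STARTUP PROFILES\n"
    penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT /etc/profile.env\n\n"
    print(penvnotice)
    # The header now directs users to /home/user/gentoo/etc/profile rather
    # than the host's /etc/profile.
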
diff --cc runtests
index 9bc5dfced,685a7d9c7..3196962a9
--- a/runtests
+++ b/runtests
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!/usr/bin/env python
- # Copyright 2010-2015 Gentoo Foundation
+ # Copyright 2010-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
# Note: We don't want to import portage modules directly because we do things
diff --cc tabcheck.py
index fe5227ca7,1df26785d..7bc9813f7
--- a/tabcheck.py
+++ b/tabcheck.py
@@@ -1,7 -1,7 +1,8 @@@
-#!/usr/bin/python -b
+#!/usr/bin/env python
- import tabnanny,sys
+ import sys
+ import tabnanny
for x in sys.argv:
+ print ("Tabchecking " + x)
tabnanny.check(x)
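
The rewritten tabcheck.py is a thin wrapper around the standard library's tabnanny module; for reference, the equivalent direct call is simply (the path is hypothetical):

    import tabnanny
    # Prints a diagnostic for any file with ambiguous tab/space indentation;
    # silent when the indentation is consistent.
    tabnanny.check("lib/portage")
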
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-08-02 12:33 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-08-02 12:33 UTC (permalink / raw
To: gentoo-commits
commit: 0d9cd144937a2a4388cb299fbcd753257b085970
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 2 11:17:47 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Aug 2 12:32:49 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=0d9cd144
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.travis.yml | 1 -
NEWS | 11 +
README | 2 +-
RELEASE-NOTES | 43 ++
bin/binhost-snapshot | 5 +-
bin/check-implicit-pointer-usage.py | 25 +-
bin/chmod-lite.py | 11 +-
bin/chpathtool.py | 10 +-
bin/dispatch-conf | 2 +-
bin/dohtml.py | 13 +-
bin/doins.py | 31 +-
bin/ebuild | 15 +-
bin/ebuild-ipc.py | 2 +-
bin/ecompress | 36 +-
bin/egencache | 17 +-
bin/filter-bash-environment.py | 11 +-
bin/glsa-check | 2 +-
bin/install.py | 19 +-
bin/pid-ns-init | 10 +-
bin/portageq | 4 +-
bin/quickpkg | 2 +-
bin/socks5-server.py | 2 +-
bin/xattr-helper.py | 32 +-
lib/_emerge/AbstractEbuildProcess.py | 3 +-
lib/_emerge/AbstractPollTask.py | 1 -
lib/_emerge/AsynchronousLock.py | 12 +-
lib/_emerge/BinpkgFetcher.py | 14 +-
lib/_emerge/BinpkgPrefetcher.py | 1 -
lib/_emerge/BlockerCache.py | 29 +-
lib/_emerge/BlockerDB.py | 7 +-
lib/_emerge/DepPriority.py | 1 -
lib/_emerge/DepPriorityNormalRange.py | 2 +-
lib/_emerge/DepPrioritySatisfiedRange.py | 2 +-
lib/_emerge/Dependency.py | 1 -
lib/_emerge/DependencyArg.py | 17 +-
lib/_emerge/EbuildBuild.py | 26 +-
lib/_emerge/EbuildBuildDir.py | 1 -
lib/_emerge/EbuildExecuter.py | 1 -
lib/_emerge/EbuildFetcher.py | 15 +-
lib/_emerge/EbuildMetadataPhase.py | 12 +-
lib/_emerge/EbuildPhase.py | 53 ++-
lib/_emerge/FakeVartree.py | 16 +-
lib/_emerge/FifoIpcDaemon.py | 31 +-
lib/_emerge/JobStatusDisplay.py | 15 +-
lib/_emerge/MergeListItem.py | 2 +-
lib/_emerge/Package.py | 46 +-
lib/_emerge/PackageVirtualDbapi.py | 7 +-
lib/_emerge/PipeReader.py | 14 +-
lib/_emerge/PollScheduler.py | 2 +-
lib/_emerge/ProgressHandler.py | 3 +-
lib/_emerge/RootConfig.py | 2 +-
lib/_emerge/Scheduler.py | 9 +-
lib/_emerge/SequentialTaskQueue.py | 4 -
lib/_emerge/SetArg.py | 1 -
lib/_emerge/SpawnProcess.py | 85 ++--
lib/_emerge/SubProcess.py | 9 +-
lib/_emerge/TaskSequence.py | 6 +-
lib/_emerge/UnmergeDepPriority.py | 1 -
lib/_emerge/UseFlagDisplay.py | 15 +-
lib/_emerge/UserQuery.py | 23 +-
lib/_emerge/_find_deep_system_runtime_deps.py | 1 -
lib/_emerge/actions.py | 44 +-
lib/_emerge/create_depgraph_params.py | 1 -
lib/_emerge/create_world_atom.py | 10 +-
lib/_emerge/depgraph.py | 167 ++++----
lib/_emerge/emergelog.py | 3 -
lib/_emerge/help.py | 2 +-
lib/_emerge/main.py | 14 +-
lib/_emerge/resolver/DbapiProvidesIndex.py | 8 +-
lib/_emerge/resolver/backtracking.py | 9 +-
lib/_emerge/resolver/circular_dependency.py | 4 +-
lib/_emerge/resolver/output.py | 65 ++-
lib/_emerge/resolver/output_helpers.py | 35 +-
lib/_emerge/resolver/package_tracker.py | 4 +-
lib/_emerge/resolver/slot_collision.py | 85 ++--
lib/_emerge/search.py | 9 +-
lib/_emerge/show_invalid_depstring_notice.py | 1 -
lib/_emerge/stdout_spinner.py | 2 +-
lib/_emerge/unmerge.py | 5 +-
lib/portage/__init__.py | 67 ++-
lib/portage/_emirrordist/Config.py | 11 +-
lib/portage/_emirrordist/DeletionIterator.py | 2 +-
lib/portage/_emirrordist/FetchIterator.py | 2 +-
lib/portage/_emirrordist/FetchTask.py | 51 +--
lib/portage/_emirrordist/MirrorDistTask.py | 8 +-
lib/portage/_emirrordist/main.py | 9 +-
lib/portage/_global_updates.py | 3 +-
lib/portage/_legacy_globals.py | 2 +-
lib/portage/_selinux.py | 15 +-
lib/portage/_sets/__init__.py | 9 +-
lib/portage/_sets/base.py | 16 +-
lib/portage/_sets/dbapi.py | 51 ++-
lib/portage/cache/__init__.py | 1 -
lib/portage/cache/anydbm.py | 36 +-
lib/portage/cache/ebuild_xattr.py | 35 +-
lib/portage/cache/flat_hash.py | 8 +-
lib/portage/cache/fs_template.py | 13 +-
lib/portage/cache/index/IndexStreamIterator.py | 2 +-
lib/portage/cache/index/pkg_desc_index.py | 13 +-
lib/portage/cache/mappings.py | 54 +--
lib/portage/cache/metadata.py | 9 +-
lib/portage/cache/sql_template.py | 6 +-
lib/portage/cache/sqlite.py | 15 +-
lib/portage/cache/template.py | 34 +-
lib/portage/checksum.py | 11 +-
lib/portage/const.py | 2 -
lib/portage/cvstree.py | 19 +-
lib/portage/data.py | 10 +-
lib/portage/dbapi/DummyTree.py | 2 +-
lib/portage/dbapi/IndexedPortdb.py | 7 +-
lib/portage/dbapi/IndexedVardb.py | 5 +-
.../dbapi/_ContentsCaseSensitivityManager.py | 2 +-
lib/portage/dbapi/_MergeProcess.py | 128 ++----
lib/portage/dbapi/_VdbMetadataDelta.py | 2 +-
lib/portage/dbapi/__init__.py | 4 +-
lib/portage/dbapi/_expand_new_virt.py | 2 -
lib/portage/dbapi/bintree.py | 81 ++--
lib/portage/dbapi/cpv_expand.py | 8 +-
lib/portage/dbapi/dep_expand.py | 2 -
lib/portage/dbapi/porttree.py | 42 +-
lib/portage/dbapi/vartree.py | 88 ++--
lib/portage/dbapi/virtual.py | 7 +-
lib/portage/debug.py | 16 +-
lib/portage/dep/__init__.py | 211 +++++-----
lib/portage/dep/_dnf.py | 4 +-
lib/portage/dep/_slot_operator.py | 2 -
lib/portage/dep/dep_check.py | 15 +-
lib/portage/dep/soname/SonameAtom.py | 13 +-
lib/portage/dep/soname/multilib_category.py | 2 -
lib/portage/dep/soname/parse.py | 2 -
lib/portage/dispatch_conf.py | 4 +-
lib/portage/eclass_cache.py | 17 +-
lib/portage/elog/__init__.py | 6 +-
lib/portage/elog/messages.py | 5 +-
lib/portage/elog/mod_echo.py | 5 +-
lib/portage/elog/mod_mail_summary.py | 4 +-
lib/portage/elog/mod_save_summary.py | 5 -
lib/portage/elog/mod_syslog.py | 10 +-
lib/portage/emaint/main.py | 4 +-
lib/portage/emaint/modules/binhost/binhost.py | 12 +-
lib/portage/emaint/modules/config/config.py | 2 +-
lib/portage/emaint/modules/logs/logs.py | 2 +-
lib/portage/emaint/modules/merges/merges.py | 8 +-
lib/portage/emaint/modules/move/move.py | 2 +-
lib/portage/emaint/modules/resume/resume.py | 2 +-
lib/portage/emaint/modules/sync/sync.py | 13 +-
lib/portage/emaint/modules/world/world.py | 3 +-
lib/portage/env/__init__.py | 1 -
lib/portage/env/loaders.py | 2 +-
lib/portage/exception.py | 53 +--
lib/portage/getbinpkg.py | 55 +--
lib/portage/glsa.py | 16 +-
lib/portage/locks.py | 46 +-
lib/portage/mail.py | 57 +--
lib/portage/manifest.py | 37 +-
lib/portage/metadata.py | 2 +-
lib/portage/module.py | 4 +-
lib/portage/news.py | 8 +-
lib/portage/output.py | 49 +--
.../package/ebuild/_config/KeywordsManager.py | 2 +-
.../package/ebuild/_config/LicenseManager.py | 2 +-
.../package/ebuild/_config/LocationsManager.py | 4 +-
lib/portage/package/ebuild/_config/MaskManager.py | 2 +-
lib/portage/package/ebuild/_config/UseManager.py | 2 +-
.../package/ebuild/_config/VirtualsManager.py | 2 +-
lib/portage/package/ebuild/_config/features_set.py | 2 +-
.../package/ebuild/_config/special_env_vars.py | 10 +-
lib/portage/package/ebuild/_ipc/IpcCommand.py | 2 +-
lib/portage/package/ebuild/_ipc/QueryCommand.py | 17 +-
.../ebuild/_parallel_manifest/ManifestProcess.py | 3 +-
.../ebuild/_parallel_manifest/ManifestScheduler.py | 2 -
lib/portage/package/ebuild/config.py | 51 +--
lib/portage/package/ebuild/doebuild.py | 20 +-
lib/portage/package/ebuild/fetch.py | 28 +-
lib/portage/package/ebuild/getmaskingreason.py | 9 +-
lib/portage/package/ebuild/getmaskingstatus.py | 16 +-
lib/portage/package/ebuild/prepare_build_dirs.py | 2 -
lib/portage/process.py | 29 +-
lib/portage/progress.py | 3 +-
lib/portage/proxy/lazyimport.py | 7 +-
lib/portage/proxy/objectproxy.py | 11 +-
lib/portage/repository/config.py | 43 +-
lib/portage/repository/storage/hardlink_rcu.py | 13 +-
lib/portage/repository/storage/interface.py | 2 +-
lib/portage/sync/config_checks.py | 2 +-
lib/portage/sync/controller.py | 12 +-
lib/portage/sync/getaddrinfo_validate.py | 7 +-
lib/portage/sync/modules/git/__init__.py | 13 +-
lib/portage/sync/modules/git/git.py | 37 +-
lib/portage/sync/modules/rsync/rsync.py | 14 +-
lib/portage/sync/modules/webrsync/webrsync.py | 1 -
lib/portage/sync/syncbase.py | 12 +-
lib/portage/tests/bin/setup_env.py | 20 +-
lib/portage/tests/dbapi/test_auxdb.py | 2 -
lib/portage/tests/dep/testAtom.py | 2 +-
lib/portage/tests/dep/test_isvalidatom.py | 2 +-
lib/portage/tests/dep/test_match_from_list.py | 14 +-
lib/portage/tests/dep/test_soname_atom_pickle.py | 3 -
lib/portage/tests/dep/test_use_reduce.py | 2 +-
lib/portage/tests/ebuild/test_config.py | 2 -
lib/portage/tests/ebuild/test_fetch.py | 6 +-
lib/portage/tests/ebuild/test_spawn.py | 1 -
.../tests/ebuild/test_use_expand_incremental.py | 2 -
lib/portage/tests/emerge/test_config_protect.py | 2 -
lib/portage/tests/env/__init__.py | 1 -
lib/portage/tests/env/config/__init__.py | 1 -
lib/portage/tests/glsa/test_security_set.py | 2 -
lib/portage/tests/lint/test_bash_syntax.py | 1 -
lib/portage/tests/process/test_AsyncFunction.py | 38 ++
lib/portage/tests/process/test_PipeLogger.py | 58 +++
lib/portage/tests/process/test_poll.py | 10 +-
lib/portage/tests/resolver/ResolverPlayground.py | 19 +-
.../resolver/test_binary_pkg_ebuild_visibility.py | 1 -
.../tests/resolver/test_profile_default_eapi.py | 2 -
.../tests/resolver/test_profile_package_set.py | 2 -
lib/portage/tests/sets/files/testConfigFileSet.py | 1 -
lib/portage/tests/sets/files/testStaticFileSet.py | 1 -
lib/portage/tests/sets/shell/testShell.py | 8 +-
lib/portage/tests/sync/test_sync_local.py | 11 +-
lib/portage/tests/unicode/test_string_format.py | 54 +--
lib/portage/tests/util/__init__.py | 1 -
.../tests/util/futures/asyncio/test_pipe_closed.py | 10 +-
.../asyncio/test_policy_wrapper_recursion.py | 8 +-
.../util/futures/asyncio/test_subprocess_exec.py | 5 -
.../tests/util/futures/test_compat_coroutine.py | 2 +-
lib/portage/tests/util/futures/test_retry.py | 34 +-
lib/portage/tests/util/test_socks5.py | 31 +-
lib/portage/tests/util/test_xattr.py | 14 +-
lib/portage/update.py | 18 +-
lib/portage/util/SlotObject.py | 2 +-
lib/portage/util/_ShelveUnicodeWrapper.py | 45 --
lib/portage/util/__init__.py | 109 ++---
lib/portage/util/_async/AsyncFunction.py | 4 +-
lib/portage/util/_async/BuildLogger.py | 109 +++++
lib/portage/util/_async/ForkProcess.py | 146 +++++--
lib/portage/util/_async/PipeLogger.py | 160 ++++---
lib/portage/util/_async/SchedulerInterface.py | 32 +-
lib/portage/util/_compare_files.py | 23 +-
lib/portage/util/_desktop_entry.py | 8 +-
lib/portage/util/_dyn_libs/LinkageMapELF.py | 48 +--
lib/portage/util/_dyn_libs/NeededEntry.py | 15 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 21 +-
lib/portage/util/_dyn_libs/soname_deps.py | 4 +-
lib/portage/util/_eventloop/EventLoop.py | 50 +--
lib/portage/util/_eventloop/PollConstants.py | 3 +-
lib/portage/util/_eventloop/PollSelectAdapter.py | 3 +-
lib/portage/util/_eventloop/asyncio_event_loop.py | 9 +-
lib/portage/util/_eventloop/global_event_loop.py | 16 +-
lib/portage/util/_urlopen.py | 67 ++-
lib/portage/util/_xattr.py | 2 +-
lib/portage/util/backoff.py | 2 +-
lib/portage/util/changelog.py | 29 +-
lib/portage/util/compression_probe.py | 5 +-
lib/portage/util/configparser.py | 23 +-
lib/portage/util/digraph.py | 8 +-
lib/portage/util/elf/header.py | 2 +-
lib/portage/util/env_update.py | 8 +-
lib/portage/util/formatter.py | 5 +-
lib/portage/util/futures/_asyncio/__init__.py | 48 +--
lib/portage/util/futures/_asyncio/process.py | 15 +-
lib/portage/util/futures/_asyncio/streams.py | 52 ++-
lib/portage/util/futures/_asyncio/tasks.py | 9 +-
lib/portage/util/futures/compat_coroutine.py | 3 +-
lib/portage/util/futures/events.py | 37 +-
lib/portage/util/futures/executor/fork.py | 2 +-
lib/portage/util/futures/extendedfutures.py | 2 -
lib/portage/util/futures/futures.py | 42 +-
lib/portage/util/futures/retry.py | 2 +-
lib/portage/util/futures/transports.py | 5 +-
lib/portage/util/futures/unix_events.py | 34 +-
lib/portage/util/install_mask.py | 13 +-
lib/portage/util/iterators/MultiIterGroupBy.py | 4 +-
lib/portage/util/lafilefixer.py | 11 +-
lib/portage/util/listdir.py | 3 -
lib/portage/util/locale.py | 2 +-
lib/portage/util/monotonic.py | 34 --
lib/portage/util/movefile.py | 65 +--
lib/portage/util/mtimedb.py | 6 +-
lib/portage/util/socks5.py | 2 +-
lib/portage/util/whirlpool.py | 26 +-
lib/portage/util/writeable_check.py | 2 -
lib/portage/versions.py | 40 +-
lib/portage/xml/metadata.py | 24 +-
lib/portage/xpak.py | 5 +-
man/ebuild.5 | 4 +-
man/egencache.1 | 4 +-
man/emerge.1 | 8 +-
man/make.conf.5 | 7 +-
man/portage.5 | 54 +--
pylintrc | 464 +++++++++++++++++++++
repoman/RELEASE-NOTES | 5 +
repoman/lib/repoman/__init__.py | 7 +-
repoman/lib/repoman/actions.py | 4 +-
repoman/lib/repoman/copyrights.py | 2 +-
repoman/lib/repoman/errors.py | 2 +-
repoman/lib/repoman/gpg.py | 2 +-
repoman/lib/repoman/main.py | 4 +-
repoman/lib/repoman/metadata.py | 7 +-
repoman/lib/repoman/modules/commit/manifest.py | 2 +-
repoman/lib/repoman/modules/commit/repochecks.py | 2 +-
repoman/lib/repoman/modules/linechecks/base.py | 2 +-
repoman/lib/repoman/modules/linechecks/config.py | 4 +-
.../lib/repoman/modules/linechecks/controller.py | 2 +-
.../modules/linechecks/deprecated/inherit.py | 2 +
repoman/lib/repoman/modules/scan/ebuild/ebuild.py | 2 +-
.../modules/scan/metadata/ebuild_metadata.py | 5 +-
.../lib/repoman/modules/scan/metadata/use_flags.py | 2 +-
repoman/lib/repoman/modules/scan/module.py | 2 +-
repoman/lib/repoman/modules/scan/scanbase.py | 2 +-
repoman/lib/repoman/modules/vcs/None/status.py | 2 +-
repoman/lib/repoman/modules/vcs/bzr/status.py | 2 +-
repoman/lib/repoman/modules/vcs/changes.py | 2 +-
repoman/lib/repoman/modules/vcs/cvs/status.py | 2 +-
repoman/lib/repoman/modules/vcs/git/status.py | 2 +-
repoman/lib/repoman/modules/vcs/hg/status.py | 2 +-
repoman/lib/repoman/modules/vcs/settings.py | 4 +-
repoman/lib/repoman/modules/vcs/svn/status.py | 2 +-
repoman/lib/repoman/modules/vcs/vcs.py | 2 +-
repoman/lib/repoman/profile.py | 4 +-
repoman/lib/repoman/qa_data.py | 2 +-
repoman/lib/repoman/qa_tracker.py | 2 +-
repoman/lib/repoman/repos.py | 2 +-
repoman/lib/repoman/scanner.py | 4 +-
repoman/lib/repoman/utilities.py | 5 +-
repoman/setup.py | 4 +-
runtests | 3 +-
setup.py | 4 +-
tox.ini | 6 +-
328 files changed, 2638 insertions(+), 3095 deletions(-)
diff --cc lib/portage/const.py
index 146808fea,9a7ea23bd..50412c058
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -2,13 -2,6 +2,11 @@@
# Copyright 1998-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import unicode_literals
-
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+
import os
# ===========================================================================
diff --cc lib/portage/data.py
index 20a8d1ba7,3887ad32e..d2d356f95
--- a/lib/portage/data.py
+++ b/lib/portage/data.py
@@@ -2,8 -2,10 +2,11 @@@
# Copyright 1998-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- import os, pwd, grp, platform, sys
+ import grp
+ import os
+ import platform
+ import pwd
+from portage.const import PORTAGE_GROUPNAME, PORTAGE_USERNAME, EPREFIX
import portage
portage.proxy.lazyimport.lazyimport(globals(),
diff --cc lib/portage/util/__init__.py
index 6bff97fb7,84f1391f6..322b217be
--- a/lib/portage/util/__init__.py
+++ b/lib/portage/util/__init__.py
@@@ -48,12 -43,7 +43,8 @@@ from portage.exception import InvalidAt
from portage.localization import _
from portage.proxy.objectproxy import ObjectProxy
from portage.cache.mappings import UserDict
+from portage.const import EPREFIX
- if sys.hexversion >= 0x3000000:
- _unicode = str
- else:
- _unicode = unicode
noiselimit = 0
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-06-02 18:55 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-06-02 18:55 UTC (permalink / raw
To: gentoo-commits
commit: e8b395c0fdfdf896fb6d3168dd1cf9a130b20796
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 2 18:54:23 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jun 2 18:54:23 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=e8b395c0
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.travis.yml | 7 +-
MANIFEST.in | 5 +
NEWS | 17 +
RELEASE-NOTES | 122 +++++
bin/ebuild-helpers/dosym | 13 +-
bin/ecompress | 16 +-
bin/isolated-functions.sh | 4 +-
bin/misc-functions.sh | 12 +-
bin/phase-functions.sh | 4 -
bin/phase-helpers.sh | 10 +-
bin/socks5-server.py | 36 +-
cnf/make.globals | 9 +-
doc/api/.gitignore | 1 +
doc/api/Makefile | 32 ++
doc/api/conf.py | 66 +++
doc/api/index.rst | 18 +
doc/qa.docbook | 98 ++++
lib/_emerge/AbstractEbuildProcess.py | 2 +-
lib/_emerge/AbstractPollTask.py | 3 -
lib/_emerge/AsynchronousTask.py | 75 +--
lib/_emerge/CompositeTask.py | 7 +-
lib/_emerge/EbuildFetcher.py | 12 +-
lib/_emerge/EbuildMetadataPhase.py | 3 +-
lib/_emerge/EbuildPhase.py | 66 ++-
lib/_emerge/FifoIpcDaemon.py | 3 -
lib/_emerge/Scheduler.py | 43 +-
lib/_emerge/SequentialTaskQueue.py | 19 +-
lib/_emerge/SubProcess.py | 15 +-
lib/_emerge/actions.py | 40 +-
lib/_emerge/create_world_atom.py | 11 +-
lib/_emerge/depgraph.py | 65 ++-
lib/portage/_compat_upgrade/binpkg_compression.py | 40 ++
lib/portage/_emirrordist/FetchTask.py | 9 -
lib/portage/_selinux.py | 9 +-
lib/portage/cache/ebuild_xattr.py | 5 +-
lib/portage/cache/template.py | 2 +-
lib/portage/const.py | 1 +
lib/portage/data.py | 10 -
lib/portage/dbapi/cpv_expand.py | 4 +-
lib/portage/dbapi/porttree.py | 9 +-
lib/portage/dbapi/vartree.py | 9 +-
lib/portage/dep/dep_check.py | 93 ++--
lib/portage/dep/soname/SonameAtom.py | 9 +-
lib/portage/dispatch_conf.py | 9 -
lib/portage/emaint/modules/sync/sync.py | 2 +-
lib/portage/locks.py | 67 ++-
.../package/ebuild/_config/KeywordsManager.py | 16 +-
.../package/ebuild/_config/special_env_vars.py | 6 +-
.../package/ebuild/deprecated_profile_check.py | 2 +-
lib/portage/package/ebuild/doebuild.py | 41 +-
lib/portage/package/ebuild/fetch.py | 148 ++++--
lib/portage/package/ebuild/prepare_build_dirs.py | 21 +-
lib/portage/process.py | 29 +-
lib/portage/tests/dbapi/test_auxdb.py | 77 +++
lib/portage/tests/dep/test_soname_atom_pickle.py | 26 +
lib/portage/tests/ebuild/test_doebuild_spawn.py | 4 +-
lib/portage/tests/emerge/test_simple.py | 69 ++-
lib/portage/tests/locks/test_lock_nonblock.py | 16 +-
lib/portage/tests/resolver/ResolverPlayground.py | 99 ++--
.../tests/resolver/test_circular_choices.py | 44 +-
lib/portage/tests/resolver/test_depth.py | 18 +-
lib/portage/tests/resolver/test_multirepo.py | 8 +-
lib/portage/tests/resolver/test_or_choices.py | 572 +++++++++++++++++++--
.../tests/resolver/test_or_upgrade_installed.py | 70 +++
.../resolver/test_slot_operator_reverse_deps.py | 93 +++-
.../tests/util/futures/test_compat_coroutine.py | 29 +-
.../util/futures/test_done_callback_after_exit.py | 44 ++
lib/portage/util/__init__.py | 8 -
lib/portage/util/_async/AsyncFunction.py | 5 +-
lib/portage/util/_async/FileDigester.py | 5 +-
lib/portage/util/_desktop_entry.py | 8 -
lib/portage/util/_dyn_libs/LinkageMapELF.py | 84 ++-
lib/portage/util/_dyn_libs/NeededEntry.py | 5 +
lib/portage/util/_dyn_libs/soname_deps_qa.py | 98 ++++
lib/portage/util/_eventloop/asyncio_event_loop.py | 31 +-
lib/portage/util/compression_probe.py | 10 +-
lib/portage/util/futures/_asyncio/__init__.py | 8 +-
lib/portage/util/futures/compat_coroutine.py | 19 +-
lib/portage/xml/metadata.py | 22 +-
lib/portage/xpak.py | 5 +-
man/emerge.1 | 6 +-
man/make.conf.5 | 8 +-
repoman/RELEASE-NOTES | 11 +
repoman/cnf/linechecks/linechecks.yaml | 1 -
repoman/cnf/repository/repository.yaml | 1 -
repoman/lib/repoman/_subprocess.py | 18 -
repoman/lib/repoman/gpg.py | 9 -
repoman/lib/repoman/metadata.py | 51 +-
.../modules/linechecks/deprecated/inherit.py | 5 +
.../modules/linechecks/workaround/__init__.py | 6 -
.../modules/linechecks/workaround/workarounds.py | 7 -
repoman/lib/repoman/modules/vcs/git/changes.py | 22 +-
repoman/runtests | 8 +-
repoman/setup.py | 2 +-
runtests | 8 +-
setup.py | 37 +-
tox.ini | 4 +-
97 files changed, 2391 insertions(+), 665 deletions(-)
diff --cc bin/ebuild-helpers/dosym
index da15fe397,abd4da4f0..681e198c5
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/isolated-functions.sh
index efc377575,fde684013..7840d6012
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2019 Gentoo Authors
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}/eapi.sh" || exit 1
diff --cc bin/phase-helpers.sh
index d0ab03712,9495465f9..c4ab51d78
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2019 Gentoo Authors
+ # Copyright 1999-2020 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
if ___eapi_has_DESTTREE_INSDESTTREE; then
diff --cc cnf/make.globals
index 25678ee82,dd3f28f70..d3ba98513
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -27,13 -27,17 +27,17 @@@ ACCEPT_PROPERTIES="*
ACCEPT_RESTRICT="*"
# Miscellaneous paths
-DISTDIR="/var/cache/distfiles"
-PKGDIR="/var/cache/binpkgs"
-RPMDIR="/var/cache/rpm"
+DISTDIR="@PORTAGE_EPREFIX@/var/cache/distfiles"
+PKGDIR="@PORTAGE_EPREFIX@/var/cache/binpkgs"
+RPMDIR="@PORTAGE_EPREFIX@/var/cache/rpm"
# Temporary build directory
-PORTAGE_TMPDIR="/var/tmp"
+PORTAGE_TMPDIR="@PORTAGE_EPREFIX@/var/tmp"
+ # The compression used for binary packages. Defaults to zstd except for
+ # existing installs where bzip2 is used for backward compatibility.
+ BINPKG_COMPRESS="zstd"
+
# Fetching command (3 tries, passive ftp for firewall compatibility)
FETCHCOMMAND="wget -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
RESUMECOMMAND="wget -c -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
diff --cc lib/portage/package/ebuild/_config/special_env_vars.py
index c8131f5b2,440dd00b2..12d701c9a
--- a/lib/portage/package/ebuild/_config/special_env_vars.py
+++ b/lib/portage/package/ebuild/_config/special_env_vars.py
@@@ -78,12 -78,9 +78,12 @@@ environ_whitelist +=
"PORTAGE_VERBOSE", "PORTAGE_WORKDIR_MODE", "PORTAGE_XATTR_EXCLUDE",
"PORTDIR", "PORTDIR_OVERLAY", "PREROOTPATH", "PYTHONDONTWRITEBYTECODE",
"REPLACING_VERSIONS", "REPLACED_BY_VERSION",
- "ROOT", "ROOTPATH", "SYSROOT", "T", "TMP", "TMPDIR",
+ "ROOT", "ROOTPATH", "SANDBOX_LOG", "SYSROOT", "T", "TMP", "TMPDIR",
"USE_EXPAND", "USE_ORDER", "WORKDIR",
"XARGS", "__PORTAGE_TEST_HARDLINK_LOCKS",
+ # PREFIX LOCAL
+ "EXTRA_PATH", "PORTAGE_GROUP", "PORTAGE_USER",
+ # END PREFIX LOCAL
]
# user config variables
diff --cc lib/portage/package/ebuild/fetch.py
index 5f6c40146,9682fea89..11b13fe56
--- a/lib/portage/package/ebuild/fetch.py
+++ b/lib/portage/package/ebuild/fetch.py
@@@ -46,8 -51,7 +51,8 @@@ from portage.checksum import (get_valid
checksum_str)
from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
GLOBAL_CONFIG_PATH
+from portage.const import rootgid
- from portage.data import portage_gid, portage_uid, secpass, userpriv_groups
+ from portage.data import portage_gid, portage_uid, userpriv_groups
from portage.exception import FileNotFound, OperationNotPermitted, \
PortageException, TryAgain
from portage.localization import _
@@@ -153,6 -188,59 +189,59 @@@ def _userpriv_test_write_file(settings
_userpriv_test_write_file_cache[file_path] = rval
return rval
+
+ def _ensure_distdir(settings, distdir):
+ """
+ Ensure that DISTDIR exists with appropriate permissions.
+
+ @param settings: portage config
+ @type settings: portage.package.ebuild.config.config
+ @param distdir: DISTDIR path
+ @type distdir: str
+ @raise PortageException: portage.exception wrapper exception
+ """
+ global _userpriv_test_write_file_cache
+ dirmode = 0o070
+ filemode = 0o60
+ modemask = 0o2
+ dir_gid = portage_gid
+ if "FAKED_MODE" in settings:
+ # When inside fakeroot, directories with portage's gid appear
+ # to have root's gid. Therefore, use root's gid instead of
+ # portage's gid to avoid spurious permissions adjustments
+ # when inside fakeroot.
- dir_gid = 0
++ dir_gid = rootgid
+
+ userfetch = portage.data.secpass >= 2 and "userfetch" in settings.features
+ userpriv = portage.data.secpass >= 2 and "userpriv" in settings.features
+ write_test_file = os.path.join(distdir, ".__portage_test_write__")
+
+ try:
+ st = os.stat(distdir)
+ except OSError:
+ st = None
+
+ if st is not None and stat.S_ISDIR(st.st_mode):
+ if not (userfetch or userpriv):
+ return
+ if _userpriv_test_write_file(settings, write_test_file):
+ return
+
+ _userpriv_test_write_file_cache.pop(write_test_file, None)
+ if ensure_dirs(distdir, gid=dir_gid, mode=dirmode, mask=modemask):
+ if st is None:
+ # The directory has just been created
+ # and therefore it must be empty.
+ return
+ writemsg(_("Adjusting permissions recursively: '%s'\n") % distdir,
+ noiselevel=-1)
+ if not apply_recursive_permissions(distdir,
+ gid=dir_gid, dirmode=dirmode, dirmask=modemask,
+ filemode=filemode, filemask=modemask, onerror=_raise_exc):
+ raise OperationNotPermitted(
+ _("Failed to apply recursive permissions for the portage group."))
+
+
def _checksum_failure_temp_file(settings, distdir, basename):
"""
First try to find a duplicate temp file with the same checksum and return
diff --cc man/make.conf.5
index 683ce26c2,a3bd662ae..ab00cb7d7
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@@ -610,11 -607,12 +610,15 @@@ If \fIcollision\-protect\fR is enabled
Output a verbose trace of python execution to stderr when a command's
\-\-debug option is enabled.
.TP
+ .B qa\-unresolved\-soname\-deps
+ Trigger a QA warning when a package installs files with unresolved soname
+ dependencies.
+ .TP
.B sandbox
Enable sandbox\-ing when running \fBemerge\fR(1) and \fBebuild\fR(1).
+On Mac OS X platforms that have /usr/bin/sandbox-exec available (10.5
+and later), this particular sandbox implementation is used instead of
+sys-apps/sandbox.
.TP
.B sesandbox
Enable SELinux sandbox\-ing. Do not toggle this \fBFEATURE\fR yourself.
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2020-01-08 19:14 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2020-01-08 19:14 UTC (permalink / raw
To: gentoo-commits
commit: 900023af32d8ef72e7c6069af67d513ccf50048a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Jan 8 19:11:46 2020 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Jan 8 19:11:46 2020 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=900023af
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.travis.yml | 18 +-
NEWS | 31 +
RELEASE-NOTES | 154 +++++
bin/dispatch-conf | 11 +-
bin/eapi.sh | 46 +-
bin/ebuild | 10 +-
bin/ebuild-helpers/doins | 3 +-
bin/ebuild.sh | 33 +-
bin/emerge-webrsync | 48 +-
bin/glsa-check | 112 ++--
bin/helper-functions.sh | 7 +-
bin/install.py | 24 +-
bin/isolated-functions.sh | 18 +-
bin/phase-functions.sh | 28 +-
bin/phase-helpers.sh | 103 ++--
bin/pid-ns-init | 2 +-
bin/quickpkg | 34 +-
bin/socks5-server.py | 2 +-
cnf/make.globals | 4 +-
cnf/repos.conf | 6 +-
cnf/sets/portage.conf | 12 +-
doc/package/ebuild.docbook | 1 -
doc/package/ebuild/eapi/5-hdepend.docbook | 32 --
doc/portage.docbook | 1 -
lib/_emerge/Binpkg.py | 96 ++--
lib/_emerge/EbuildExecuter.py | 6 +-
lib/_emerge/EbuildPhase.py | 2 +-
lib/_emerge/Package.py | 6 +-
lib/_emerge/Scheduler.py | 17 +-
lib/_emerge/actions.py | 45 +-
lib/_emerge/create_depgraph_params.py | 39 ++
lib/_emerge/depgraph.py | 628 +++++++++++++--------
lib/_emerge/main.py | 32 +-
lib/_emerge/resolver/backtracking.py | 20 +-
lib/_emerge/resolver/slot_collision.py | 17 +-
lib/portage/__init__.py | 5 +-
lib/portage/_emirrordist/Config.py | 10 +-
lib/portage/_emirrordist/DeletionIterator.py | 38 +-
lib/portage/_emirrordist/DeletionTask.py | 47 +-
lib/portage/_emirrordist/FetchTask.py | 120 ++--
lib/portage/_emirrordist/main.py | 12 +
lib/portage/_sets/__init__.py | 14 +-
lib/portage/_sets/dbapi.py | 18 +-
lib/portage/cache/metadata.py | 2 +-
lib/portage/const.py | 6 +-
lib/portage/dbapi/__init__.py | 5 +-
lib/portage/dbapi/bintree.py | 155 ++++-
lib/portage/dbapi/porttree.py | 2 +-
lib/portage/dbapi/vartree.py | 233 +++++++-
lib/portage/dep/__init__.py | 152 +++--
lib/portage/dep/_slot_operator.py | 3 -
lib/portage/dep/dep_check.py | 10 +
lib/portage/eapi.py | 18 +-
lib/portage/emaint/modules/binhost/binhost.py | 21 +-
lib/portage/exception.py | 6 +-
lib/portage/glsa.py | 15 +-
lib/portage/locks.py | 6 +-
.../package/ebuild/_config/special_env_vars.py | 4 +-
lib/portage/package/ebuild/config.py | 27 +-
lib/portage/package/ebuild/doebuild.py | 17 +-
lib/portage/package/ebuild/fetch.py | 286 +++++++++-
lib/portage/process.py | 100 +++-
lib/portage/repository/config.py | 14 +-
lib/portage/sync/modules/rsync/rsync.py | 7 +
lib/portage/sync/modules/webrsync/webrsync.py | 1 +
lib/portage/sync/syncbase.py | 10 +-
lib/portage/tests/dep/test_use_reduce.py | 72 ++-
lib/portage/tests/ebuild/test_fetch.py | 252 ++++++++-
lib/portage/tests/emerge/test_simple.py | 3 +-
lib/portage/tests/glsa/test_security_set.py | 2 +-
lib/portage/tests/process/test_unshare_net.py | 38 ++
lib/portage/tests/resolver/ResolverPlayground.py | 13 +
.../test_aggressive_backtrack_downgrade.py | 91 +++
lib/portage/tests/resolver/test_autounmask.py | 39 +-
lib/portage/tests/resolver/test_blocker.py | 87 ++-
.../tests/resolver/test_circular_choices.py | 109 +++-
lib/portage/tests/resolver/test_keywords.py | 15 +-
lib/portage/tests/resolver/test_merge_order.py | 28 +-
.../resolver/test_slot_conflict_mask_update.py | 5 +-
.../resolver/test_slot_conflict_update_virt.py | 79 +++
.../resolver/test_slot_operator_autounmask.py | 4 +-
.../resolver/test_slot_operator_complete_graph.py | 2 +-
.../test_slot_operator_runtime_pkg_mask.py | 4 +-
.../resolver/test_virtual_minimize_children.py | 39 ++
lib/portage/tests/resolver/test_with_test_deps.py | 39 +-
.../util/futures/asyncio/test_child_watcher.py | 19 +-
.../util/futures/asyncio/test_subprocess_exec.py | 36 +-
lib/portage/tests/util/test_file_copier.py | 48 ++
lib/portage/tests/util/test_getconfig.py | 4 +-
lib/portage/update.py | 6 +-
lib/portage/util/_async/FileCopier.py | 26 +-
lib/portage/util/_compare_files.py | 103 ++++
lib/portage/util/_dyn_libs/LinkageMapELF.py | 5 +-
lib/portage/util/_urlopen.py | 6 +-
lib/portage/util/_xattr.py | 6 +-
lib/portage/util/futures/_asyncio/__init__.py | 5 +-
lib/portage/util/netlink.py | 98 ++++
man/ebuild.5 | 194 +++++--
man/emerge.1 | 63 ++-
man/emirrordist.1 | 10 +
man/glsa-check.1 | 53 ++
man/make.conf.5 | 38 +-
misc/emerge-delta-webrsync | 33 +-
repoman/RELEASE-NOTES | 28 +
repoman/cnf/linechecks/linechecks.yaml | 46 +-
repoman/cnf/qa_data/qa_data.yaml | 1 +
repoman/cnf/repository/linechecks.yaml | 1 -
repoman/cnf/repository/qa_data.yaml | 1 +
repoman/lib/repoman/actions.py | 15 +-
repoman/lib/repoman/argparser.py | 7 +
repoman/lib/repoman/modules/linechecks/base.py | 5 +-
.../lib/repoman/modules/linechecks/controller.py | 12 +-
.../modules/linechecks/deprecated/inherit.py | 19 +-
repoman/lib/repoman/modules/linechecks/do/dosym.py | 6 +-
.../lib/repoman/modules/linechecks/eapi/checks.py | 10 +-
.../lib/repoman/modules/linechecks/emake/emake.py | 2 +-
.../repoman/modules/linechecks/patches/patches.py | 2 +-
.../lib/repoman/modules/linechecks/phases/phase.py | 4 +-
.../repoman/modules/linechecks/portage/internal.py | 7 +-
.../repoman/modules/linechecks/quotes/quoteda.py | 2 +-
.../repoman/modules/linechecks/useless/dodoc.py | 2 +-
.../repoman/modules/linechecks/whitespace/blank.py | 2 +-
.../lib/repoman/modules/scan/depend/__init__.py | 3 +-
.../repoman/modules/scan/depend/_depend_checks.py | 14 +
repoman/lib/repoman/modules/scan/depend/profile.py | 9 +-
repoman/lib/repoman/modules/scan/ebuild/ebuild.py | 3 +-
.../lib/repoman/modules/scan/metadata/restrict.py | 6 +-
repoman/lib/repoman/scanner.py | 9 +
repoman/lib/repoman/tests/commit/test_commitmsg.py | 2 +-
repoman/lib/repoman/tests/simple/test_simple.py | 1 +
repoman/man/repoman.1 | 9 +-
repoman/runtests | 6 +-
repoman/setup.py | 2 +-
runtests | 6 +-
setup.py | 4 +-
tox.ini | 8 +-
136 files changed, 3799 insertions(+), 1141 deletions(-)
diff --cc .travis.yml
index 4f94e36a3,5123141ac..dc8e2857c
--- a/.travis.yml
+++ b/.travis.yml
@@@ -19,24 -12,11 +12,25 @@@ install
script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
+ - find . -type f -exec
+ sed -e "s|@PORTAGE_EPREFIX@||"
+ -e "s|@PORTAGE_BASE@|${PWD}|"
+ -e "s|@PORTAGE_MV@|$(type -P mv)|"
+ -e "s|@PORTAGE_BASH@|$(type -P bash)|"
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|"
+ -e "s|@EXTRA_PATH@|/usr/sbin:/sbin|"
+ -e "s|@portagegroup@|$(id -gn)|"
+ -e "s|@portageuser@|$(id -un)|"
+ -e "s|@rootuser@|$(id -un)|"
+ -e "s|@rootuid@|$(id -u)|"
+ -e "s|@rootgid@|$(id -g)|"
+ -e "s|@sysconfdir@|/etc|"
+ -i '{}' +
- ./setup.py test
- ./setup.py install --root=/tmp/install-root
- - if [[ ${TRAVIS_PYTHON_VERSION} == ?.? ]]; then
- tox -e py${TRAVIS_PYTHON_VERSION/./};
+ - if [[ ${TRAVIS_PYTHON_VERSION/-dev/} == ?.? ]]; then
+ TOX_PYTHON_VERSION=${TRAVIS_PYTHON_VERSION/-dev/};
+ tox -e py${TOX_PYTHON_VERSION/./};
else
tox -e ${TRAVIS_PYTHON_VERSION};
fi
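
The find/sed block added above is what turns the @...@ placeholders (the same ones used in the script shebangs earlier in this thread) into concrete paths for the CI run. A rough Python equivalent for a single file, illustration only -- the CI script itself uses sed, and the values below mirror its expressions (empty EPREFIX, the current checkout as PORTAGE_BASE, and the build user and group):

    import getpass
    import grp
    import os
    import shutil
    import sys

    SUBSTITUTIONS = {
        "@PORTAGE_EPREFIX@": "",
        "@PORTAGE_BASE@": os.getcwd(),
        "@PORTAGE_BASH@": shutil.which("bash") or "/bin/bash",
        "@PREFIX_PORTAGE_PYTHON@": sys.executable,
        "@EXTRA_PATH@": "/usr/sbin:/sbin",
        "@portageuser@": getpass.getuser(),
        "@portagegroup@": grp.getgrgid(os.getgid()).gr_name,
        "@sysconfdir@": "/etc",
    }

    def expand_placeholders(path):
        # Read, substitute every placeholder, and write the file back in place.
        with open(path, encoding="utf-8", errors="surrogateescape") as f:
            text = f.read()
        for placeholder, value in SUBSTITUTIONS.items():
            text = text.replace(placeholder, value)
        with open(path, "w", encoding="utf-8", errors="surrogateescape") as f:
            f.write(text)
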
diff --cc bin/dispatch-conf
index fb16cb63f,62ab3f6cc..c05215fc3
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2017 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild
index 9e5c3b68e,460aa0fd1..a57dd4941
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/ebuild-helpers/doins
index cf843bce5,24fe48121..825a421af
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/emerge-webrsync
index f9da96ffa,db39b272e..6ac7e3328
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -50,16 -50,10 +50,16 @@@ eval "$("${portageq}" envvar -v DISTDI
FETCHCOMMAND GENTOO_MIRRORS \
PORTAGE_BIN_PATH PORTAGE_CONFIGROOT PORTAGE_GPG_DIR \
PORTAGE_NICENESS PORTAGE_REPOSITORIES PORTAGE_RSYNC_EXTRA_OPTS \
- PORTAGE_RSYNC_OPTS PORTAGE_TMPDIR \
+ PORTAGE_RSYNC_OPTS PORTAGE_TEMP_GPG_DIR PORTAGE_TMPDIR \
- USERLAND http_proxy ftp_proxy)"
+ USERLAND http_proxy ftp_proxy \
+ PORTAGE_USER PORTAGE_GROUP)"
export http_proxy ftp_proxy
+# PREFIX LOCAL: use Prefix servers, just because we want this and infra
+# can't support us yet
+GENTOO_MIRRORS="http://rsync.prefix.bitzolder.nl"
+# END PREFIX LOCAL
+
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
repo_name=gentoo
diff --cc bin/glsa-check
index 470003215,eff01cf31..f9dae110f
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,10 -1,11 +1,11 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2008-2014 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
- from __future__ import print_function
+ from __future__ import print_function, unicode_literals
import argparse
+ import re
import sys
import codecs
from functools import reduce
diff --cc bin/isolated-functions.sh
index dc0489093,e8d41fd64..efc377575
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}/eapi.sh" || exit 1
diff --cc bin/phase-helpers.sh
index 74de140e7,020862ba0..d0ab03712
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
if ___eapi_has_DESTTREE_INSDESTTREE; then
diff --cc lib/_emerge/Package.py
index 2a303835a,3d7df2437..b2dfc07c1
--- a/lib/_emerge/Package.py
+++ b/lib/_emerge/Package.py
@@@ -48,10 -48,10 +48,10 @@@ class Package(Task)
"LICENSE", "MD5", "PDEPEND", "PROVIDES",
"RDEPEND", "repository", "REQUIRED_USE",
"PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
- "SLOT", "USE", "_mtime_"]
+ "SLOT", "USE", "_mtime_", "EPREFIX"]
- _dep_keys = ('BDEPEND', 'DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
- _buildtime_keys = ('BDEPEND', 'DEPEND', 'HDEPEND')
+ _dep_keys = ('BDEPEND', 'DEPEND', 'PDEPEND', 'RDEPEND')
+ _buildtime_keys = ('BDEPEND', 'DEPEND')
_runtime_keys = ('PDEPEND', 'RDEPEND')
_use_conditional_misc_keys = ('LICENSE', 'PROPERTIES', 'RESTRICT')
UNKNOWN_REPO = _unknown_repo
diff --cc lib/portage/dbapi/bintree.py
index 3f2f0052f,311c9a78a..2f6c4f343
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -87,10 -94,10 +94,10 @@@ class bindbapi(fakedbapi)
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
["BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
- "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "DEPEND", "EAPI", "IUSE", "KEYWORDS",
"LICENSE", "MD5", "PDEPEND", "PROPERTIES",
"PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE", "_mtime_"
+ "SIZE", "SLOT", "USE", "_mtime_", "EPREFIX"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -315,13 -394,13 +394,13 @@@ class binarytree(object)
self._pkgindex_aux_keys = \
["BASE_URI", "BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST",
"DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
"PKGINDEX_URI", "PROPERTIES", "PROVIDES",
"RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE"]
+ "SIZE", "SLOT", "USE", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
- ("BDEPEND", "DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
+ ("BDEPEND", "DEPEND", "LICENSE", "RDEPEND",
"PDEPEND", "PROPERTIES", "RESTRICT")
self._pkgindex_header = None
self._pkgindex_header_keys = set([
diff --cc lib/portage/package/ebuild/fetch.py
index d062ec77c,7ab054874..5f6c40146
--- a/lib/portage/package/ebuild/fetch.py
+++ b/lib/portage/package/ebuild/fetch.py
@@@ -32,10 -42,10 +42,11 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage import os, selinux, shutil, _encodings, \
_movefile, _shell_quote, _unicode_encode
from portage.checksum import (get_valid_checksum_keys, perform_md5, verify_all,
- _filter_unaccelarated_hashes, _hash_filter, _apply_hash_filter)
+ _filter_unaccelarated_hashes, _hash_filter, _apply_hash_filter,
+ checksum_str)
from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
GLOBAL_CONFIG_PATH
+from portage.const import rootgid
from portage.data import portage_gid, portage_uid, secpass, userpriv_groups
from portage.exception import FileNotFound, OperationNotPermitted, \
PortageException, TryAgain
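
The Package.py and bintree.py hunks earlier in this message add EPREFIX to the metadata keys that Portage caches for dependency matching. As a hedged sketch of how such a key might be consumed, the snippet below uses the existing portage.db / match() / aux_get() interfaces; the EPREFIX key itself is only meaningful on the prefix branch, and the atom and output are illustrative, not guaranteed.

import portage

# Select the ebuild repository dbapi for the running system.
portdb = portage.db[portage.root]["porttree"].dbapi

# match() resolves an atom to the available versions, newest last.
cpv = portdb.match("sys-apps/portage")[-1]

# aux_get() returns the requested metadata values in order; on the
# prefix branch EPREFIX is cached alongside the dependency keys.
slot, eprefix = portdb.aux_get(cpv, ["SLOT", "EPREFIX"])
print(cpv, slot, eprefix)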
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2019-07-01 13:11 Fabian Groffen
From: Fabian Groffen @ 2019-07-01 13:11 UTC
To: gentoo-commits
commit: 9d65486e25c731330509fa29e8d92c776f70987e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 1 13:11:08 2019 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jul 1 13:11:08 2019 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=9d65486e
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
RELEASE-NOTES | 8 ++++
bin/ebuild.sh | 1 +
cnf/make.globals | 2 +-
lib/portage/_compat_upgrade/default_locations.py | 13 +++++-
lib/portage/dbapi/porttree.py | 12 +++++
lib/portage/package/ebuild/config.py | 7 ++-
lib/portage/package/ebuild/doebuild.py | 18 ++++----
lib/portage/sync/syncbase.py | 3 +-
lib/portage/util/install_mask.py | 8 +++-
man/emerge.1 | 3 +-
man/make.conf.5 | 4 +-
repoman/RELEASE-NOTES | 18 ++++++++
repoman/cnf/linechecks/linechecks.yaml | 1 -
.../modules/linechecks/gentoo_header/header.py | 51 +++++++++-------------
.../modules/scan/metadata/ebuild_metadata.py | 4 +-
repoman/setup.py | 2 +-
setup.py | 2 +-
17 files changed, 102 insertions(+), 55 deletions(-)
diff --cc cnf/make.globals
index 6a1d3b952,9eeb7a01e..d16f14636
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -27,12 -27,12 +27,12 @@@ ACCEPT_PROPERTIES="*
ACCEPT_RESTRICT="*"
# Miscellaneous paths
-DISTDIR="/var/cache/distfiles"
-PKGDIR="/var/cache/binpkgs"
-RPMDIR="/var/cache/rpm"
+DISTDIR="@PORTAGE_EPREFIX@/var/cache/distfiles"
+PKGDIR="@PORTAGE_EPREFIX@/var/cache/binpkgs"
- RPMDIR="@PORTAGE_EPREFIX@/usr/portage/rpm"
++RPMDIR="@PORTAGE_EPREFIX@/var/cache/rpm"
# Temporary build directory
-PORTAGE_TMPDIR="/var/tmp"
+PORTAGE_TMPDIR="@PORTAGE_EPREFIX@/var/tmp"
# Fetching command (3 tries, passive ftp for firewall compatibility)
FETCHCOMMAND="wget -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2019-05-30 9:20 Fabian Groffen
From: Fabian Groffen @ 2019-05-30 9:20 UTC
To: gentoo-commits
commit: 84cf376dc22ed7e23c3a684182d9604a8860819c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu May 30 09:19:43 2019 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu May 30 09:19:43 2019 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=84cf376d
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
NEWS | 9 +
RELEASE-NOTES | 46 +++++
bin/eapi.sh | 4 +
bin/ebuild.sh | 4 +-
bin/estrip | 41 ++--
bin/install-qa-check.d/10ignored-flags | 2 +-
bin/install-qa-check.d/80libraries | 21 +-
bin/install-qa-check.d/95empty-dirs | 4 +-
bin/isolated-functions.sh | 10 +
bin/phase-functions.sh | 16 +-
bin/phase-helpers.sh | 14 +-
cnf/make.conf.example | 21 +-
cnf/make.globals | 4 +-
cnf/repos.conf | 2 +-
lib/_emerge/BinpkgVerifier.py | 4 +-
lib/_emerge/PollScheduler.py | 6 +-
lib/_emerge/actions.py | 2 +-
lib/_emerge/depgraph.py | 10 +-
lib/_emerge/emergelog.py | 2 +-
lib/portage/__init__.py | 29 ++-
.../{sync/modules => _compat_upgrade}/__init__.py | 0
lib/portage/_compat_upgrade/default_locations.py | 82 ++++++++
lib/portage/cache/flat_hash.py | 4 +-
lib/portage/cache/mappings.py | 30 +--
lib/portage/dbapi/__init__.py | 4 +-
lib/portage/dbapi/porttree.py | 15 +-
lib/portage/dbapi/vartree.py | 19 +-
lib/portage/dep/__init__.py | 92 +++++----
lib/portage/dep/soname/multilib_category.py | 51 ++++-
lib/portage/emaint/modules/sync/sync.py | 6 +-
lib/portage/news.py | 5 +-
lib/portage/package/ebuild/_config/helper.py | 4 +-
.../package/ebuild/_config/special_env_vars.py | 7 +-
lib/portage/package/ebuild/_spawn_nofetch.py | 6 +-
lib/portage/package/ebuild/config.py | 20 +-
lib/portage/package/ebuild/doebuild.py | 19 +-
lib/portage/package/ebuild/fetch.py | 111 ++++++----
lib/portage/process.py | 2 +-
lib/portage/repository/config.py | 15 +-
lib/portage/sync/__init__.py | 5 +-
lib/portage/sync/controller.py | 5 +-
lib/portage/sync/syncbase.py | 6 +-
lib/portage/tests/dep/testAtom.py | 16 +-
lib/portage/tests/ebuild/test_fetch.py | 230 +++++++++++++++++++++
lib/portage/tests/emerge/test_emerge_slot_abi.py | 14 +-
lib/portage/tests/news/test_NewsItem.py | 4 +-
lib/portage/tests/process/test_poll.py | 20 +-
lib/portage/tests/resolver/ResolverPlayground.py | 4 +-
lib/portage/tests/resolver/test_slot_abi.py | 42 ++--
.../tests/resolver/test_slot_abi_downgrade.py | 32 +--
lib/portage/tests/resolver/test_slot_collisions.py | 6 +-
.../resolver/test_slot_operator_autounmask.py | 18 +-
lib/portage/tests/resolver/test_targetroot.py | 24 ++-
lib/portage/tests/update/test_move_slot_ent.py | 18 +-
.../util/futures/asyncio/test_wakeup_fd_sigchld.py | 10 +-
lib/portage/util/_eventloop/asyncio_event_loop.py | 5 +
lib/portage/util/_get_vm_info.py | 9 +-
lib/portage/util/elf/constants.py | 10 +-
lib/portage/util/futures/_asyncio/__init__.py | 18 ++
lib/portage/util/socks5.py | 10 +-
lib/portage/xml/metadata.py | 16 +-
man/ebuild.5 | 111 ++++------
man/emerge.1 | 8 +-
man/make.conf.5 | 27 +--
man/portage.5 | 37 ++--
man/quickpkg.1 | 4 +-
repoman/RELEASE-NOTES | 5 +
repoman/lib/repoman/__init__.py | 4 +-
repoman/lib/repoman/argparser.py | 33 +--
repoman/setup.py | 2 +-
setup.py | 4 +-
71 files changed, 1002 insertions(+), 498 deletions(-)
diff --cc bin/phase-functions.sh
index 1352a16cc,e6380f554..0a8b5eda9
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
# Hardcoded bash lists are needed for backward compatibility with
diff --cc cnf/make.conf.example
index 34957eddd,a309a5c43..c1d060c7c
--- a/cnf/make.conf.example
+++ b/cnf/make.conf.example
@@@ -107,28 -107,21 +107,21 @@@
# this, you must update your /etc/portage/make.profile symlink accordingly.
# ***Warning***
# Data stored inside PORTDIR is in peril of being overwritten or deleted by
- # the emerge --sync command. The default value of PORTAGE_RSYNC_OPTS
- # will protect the default locations of DISTDIR and PKGDIR, but users are
- # warned that any other locations inside PORTDIR are not necessarily safe
- # for data storage.
- #PORTDIR=@PORTAGE_EPREFIX@/usr/portage
+ # the emerge --sync command.
-#PORTDIR=/var/db/repos/gentoo
++#PORTDIR=@PORTAGE_EPREFIX@/var/db/repos/gentoo
#
# DISTDIR is where all of the source code tarballs will be placed for
# emerges. After packages are built, it is safe to remove any and
# all files from this directory since they will be automatically
# fetched on demand for a given build. If you would like to
# selectively prune obsolete files from this directory, see
- # eclean from the gentoolkit package. Note that locations under
- # /usr/portage are not necessarily safe for data storage. See the
- # PORTDIR documentation for more information.
- #DISTDIR=@PORTAGE_EPREFIX@/usr/portage/distfiles
+ # eclean from the gentoolkit package.
-#DISTDIR=/var/cache/distfiles
++#DISTDIR=@PORTAGE_EPREFIX@/var/cache/distfiles
#
# PKGDIR is the location of binary packages that you can have created
# with '--buildpkg' or '-b' while emerging a package. This can get
- # up to several hundred megs, or even a few gigs. Note that
- # locations under /usr/portage are not necessarily safe for data
- # storage. See the PORTDIR documentation for more information.
- #PKGDIR=@PORTAGE_EPREFIX@/usr/portage/packages
+ # up to several hundred megs, or even a few gigs.
-#PKGDIR=/var/cache/binpkgs
++#PKGDIR=@PORTAGE_EPREFIX@/var/cache/binpkgs
#
# PORTAGE_LOGDIR is the location where portage will store all the logs it
# creates from each individual merge. They are stored as
diff --cc cnf/make.globals
index e71325c91,b01cca599..6a1d3b952
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -27,12 -27,12 +27,12 @@@ ACCEPT_PROPERTIES="*
ACCEPT_RESTRICT="*"
# Miscellaneous paths
- DISTDIR="@PORTAGE_EPREFIX@/usr/portage/distfiles"
- PKGDIR="@PORTAGE_EPREFIX@/usr/portage/packages"
-DISTDIR="/var/cache/distfiles"
-PKGDIR="/var/cache/binpkgs"
-RPMDIR="/usr/portage/rpm"
++DISTDIR="@PORTAGE_EPREFIX@/var/cache/distfiles"
++PKGDIR="@PORTAGE_EPREFIX@/var/cache/binpkgs"
+RPMDIR="@PORTAGE_EPREFIX@/usr/portage/rpm"
# Temporary build directory
-PORTAGE_TMPDIR="/var/tmp"
+PORTAGE_TMPDIR="@PORTAGE_EPREFIX@/var/tmp"
# Fetching command (3 tries, passive ftp for firewall compatibility)
FETCHCOMMAND="wget -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
diff --cc cnf/repos.conf
index 3b4b94209,e84840bf2..95ce7645f
--- a/cnf/repos.conf
+++ b/cnf/repos.conf
@@@ -1,10 -1,10 +1,10 @@@
[DEFAULT]
-main-repo = gentoo
+main-repo = gentoo_prefix
-[gentoo]
-location = /var/db/repos/gentoo
+[gentoo_prefix]
- location = @PORTAGE_EPREFIX@/usr/portage
++location = @PORTAGE_EPREFIX@/var/db/repos/gentoo
sync-type = rsync
-sync-uri = rsync://rsync.gentoo.org/gentoo-portage
+sync-uri = rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
auto-sync = yes
sync-rsync-verify-jobs = 1
sync-rsync-verify-metamanifest = yes
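
The repos.conf hunk above leaves @PORTAGE_EPREFIX@ as a placeholder that is filled in when Portage is installed. A minimal sketch, assuming a hypothetical offset prefix of /home/user/gentoo, of what the rendered file resolves to and how it parses:

import configparser

EPREFIX = "/home/user/gentoo"  # hypothetical offset prefix, not a default

template = """\
[DEFAULT]
main-repo = gentoo_prefix

[gentoo_prefix]
location = @PORTAGE_EPREFIX@/var/db/repos/gentoo
sync-type = rsync
sync-uri = rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
auto-sync = yes
"""

rendered = template.replace("@PORTAGE_EPREFIX@", EPREFIX)
parser = configparser.ConfigParser()
parser.read_string(rendered)
print(parser["gentoo_prefix"]["location"])
# -> /home/user/gentoo/var/db/repos/gentoo

On a real Prefix install the substitution happens at build time via the subst-install machinery shown further down in this thread, not at runtime.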
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2019-02-28 12:31 Fabian Groffen
From: Fabian Groffen @ 2019-02-28 12:31 UTC
To: gentoo-commits
commit: 991d7349c8faab28bf956c00e477e115654ba3b0
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 28 12:31:19 2019 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 28 12:31:19 2019 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=991d7349
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
RELEASE-NOTES | 50 +++++
bin/ebuild.sh | 30 ++-
bin/misc-functions.sh | 2 -
bin/pid-ns-init | 105 +++++++++-
bin/postinst-qa-check.d/50gnome2-utils | 64 -------
bin/postinst-qa-check.d/50xdg-utils | 54 ++++++
bin/preinst-qa-check.d/50gnome2-utils | 1 -
bin/socks5-server.py | 8 +-
cnf/make.conf.example | 8 +-
cnf/make.globals | 3 +-
lib/_emerge/SpawnProcess.py | 2 +-
lib/_emerge/resolver/output.py | 13 +-
lib/portage/dbapi/porttree.py | 20 +-
lib/portage/locks.py | 150 +++++++++++++--
.../package/ebuild/_config/special_env_vars.py | 3 +-
lib/portage/package/ebuild/config.py | 4 +-
lib/portage/package/ebuild/doebuild.py | 3 +-
lib/portage/process.py | 38 +++-
lib/portage/tests/resolver/ResolverPlayground.py | 2 +
lib/portage/tests/util/test_install_mask.py | 36 ++++
lib/portage/tests/util/test_socks5.py | 211 +++++++++++++++++++++
lib/portage/util/cpuinfo.py | 33 +++-
lib/portage/util/futures/executor/fork.py | 4 +-
lib/portage/util/futures/iter_completed.py | 18 +-
lib/portage/util/install_mask.py | 89 ++++++++-
lib/portage/util/socks5.py | 48 ++++-
man/make.conf.5 | 6 +-
setup.py | 2 +-
28 files changed, 839 insertions(+), 168 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2019-01-11 10:19 Fabian Groffen
From: Fabian Groffen @ 2019-01-11 10:19 UTC
To: gentoo-commits
commit: 35cd9513c698f655b664fba012178ea3b3f6403e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 11 10:18:35 2019 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jan 11 10:19:29 2019 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=35cd9513
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
RELEASE-NOTES | 6 ++++++
lib/_emerge/AbstractEbuildProcess.py | 9 ++++++---
lib/portage/dbapi/vartree.py | 15 +++++++++++++--
lib/portage/package/ebuild/doebuild.py | 8 ++++++--
setup.py | 2 +-
5 files changed, 32 insertions(+), 8 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2019-01-07 10:22 Fabian Groffen
From: Fabian Groffen @ 2019-01-07 10:22 UTC
To: gentoo-commits
commit: d210680a136e873285c4082a85be26aa4a4d3391
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 7 10:22:07 2019 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jan 7 10:22:41 2019 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=d210680a
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
NEWS | 3 +-
RELEASE-NOTES | 20 +++
bin/ebuild-helpers/bsd/sed | 3 +-
bin/ebuild-helpers/portageq | 1 -
bin/ebuild-helpers/unprivileged/chown | 3 +-
bin/ebuild-helpers/xattr/install | 3 +-
bin/ebuild.sh | 4 +-
bin/etc-update | 2 +-
lib/portage/locks.py | 4 +-
lib/portage/package/ebuild/doebuild.py | 27 +---
lib/portage/process.py | 159 +++++++++++++++++++----
lib/portage/sync/modules/rsync/rsync.py | 11 +-
lib/portage/tests/resolver/ResolverPlayground.py | 17 ++-
setup.py | 2 +-
14 files changed, 190 insertions(+), 69 deletions(-)
diff --cc bin/ebuild-helpers/portageq
index b9ac04274,d31bd6810..47640c5a8
--- a/bin/ebuild-helpers/portageq
+++ b/bin/ebuild-helpers/portageq
@@@ -14,10 -14,9 +14,9 @@@ set -f # in case ${PATH} contains any s
for path in ${PATH}; do
[[ -x ${path}/${scriptname} ]] || continue
[[ ${path} == */portage/*/ebuild-helpers* ]] && continue
- [[ ${path} == */._portage_reinstall_.* ]] && continue
[[ ${path}/${scriptname} -ef ${scriptpath} ]] && continue
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" \
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" \
"${path}/${scriptname}" "$@"
done
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-12-23 11:14 Fabian Groffen
From: Fabian Groffen @ 2018-12-23 11:14 UTC
To: gentoo-commits
commit: 6d68e3cef901f3322a80525b472fb04d1304de5a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 23 11:12:58 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 23 11:12:58 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=6d68e3ce
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
This also removes DEFAULT_PATH as part of merging daeb75b.
Bug: https://bugs.gentoo.org/585986
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.travis.yml | 1 -
NEWS | 8 +++-
bin/ebuild-helpers/portageq | 2 +-
bin/save-ebuild-env.sh | 5 ++-
cnf/make.conf.example | 12 +++---
cnf/make.globals | 15 +++----
configure.ac | 5 ---
lib/_emerge/post_emerge.py | 2 +-
lib/portage/dbapi/vartree.py | 2 +-
lib/portage/elog/mod_echo.py | 2 +-
lib/portage/elog/mod_save.py | 4 +-
lib/portage/elog/mod_save_summary.py | 4 +-
lib/portage/emaint/modules/logs/__init__.py | 2 +-
lib/portage/emaint/modules/logs/logs.py | 22 +++++------
.../package/ebuild/_config/special_env_vars.py | 7 ++--
lib/portage/package/ebuild/config.py | 11 ++++++
lib/portage/package/ebuild/doebuild.py | 28 ++++++++-----
lib/portage/package/ebuild/prepare_build_dirs.py | 26 ++++++------
lib/portage/tests/emerge/test_simple.py | 2 +-
lib/portage/tests/resolver/ResolverPlayground.py | 34 ++++++++++++++++
lib/portage/util/ExtractKernelVersion.py | 17 ++++++--
man/ebuild.5 | 2 +-
man/emaint.1 | 6 +--
man/emerge.1 | 8 ++--
man/make.conf.5 | 46 ++++++++++++----------
man/portage.5 | 2 +-
subst-install.in | 1 -
travis.sh | 1 -
28 files changed, 173 insertions(+), 104 deletions(-)
diff --cc .travis.yml
index 16eaafc43,ab0b8d304..4f94e36a3
--- a/.travis.yml
+++ b/.travis.yml
@@@ -19,21 -19,6 +19,20 @@@ install
script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
+ - find . -type f -exec
+ sed -e "s|@PORTAGE_EPREFIX@||"
+ -e "s|@PORTAGE_BASE@|${PWD}|"
+ -e "s|@PORTAGE_MV@|$(type -P mv)|"
+ -e "s|@PORTAGE_BASH@|$(type -P bash)|"
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|"
- -e "s|@DEFAULT_PATH@|/usr/bin:/bin|"
+ -e "s|@EXTRA_PATH@|/usr/sbin:/sbin|"
+ -e "s|@portagegroup@|$(id -gn)|"
+ -e "s|@portageuser@|$(id -un)|"
+ -e "s|@rootuser@|$(id -un)|"
+ -e "s|@rootuid@|$(id -u)|"
+ -e "s|@rootgid@|$(id -g)|"
+ -e "s|@sysconfdir@|/etc|"
+ -i '{}' +
- ./setup.py test
- ./setup.py install --root=/tmp/install-root
- if [[ ${TRAVIS_PYTHON_VERSION} == ?.? ]]; then
diff --cc bin/save-ebuild-env.sh
index bb17382d4,947ac79d5..1cfd79f23
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -122,9 -118,6 +122,10 @@@ __save_ebuild_env()
# user config variables
unset DOC_SYMLINKS_DIR INSTALL_MASK PKG_INSTALL_MASK
- # Prefix additions
- unset DEFAULT_PATH EXTRA_PATH PORTAGE_GROUP PORTAGE_USER
++ # PREFIX LOCAL: Prefix additions
++ unset EXTRA_PATH PORTAGE_GROUP PORTAGE_USER
++ # END PREFIX LOCAL
+
declare -p
declare -fp
if [[ ${BASH_VERSINFO[0]} == 3 ]]; then
diff --cc cnf/make.conf.example
index ad290c7e6,ffebd24d4..81b9c0328
--- a/cnf/make.conf.example
+++ b/cnf/make.conf.example
@@@ -128,9 -128,9 +128,9 @@@
# up to several hundred megs, or even a few gigs. Note that
# locations under /usr/portage are not necessarily safe for data
# storage. See the PORTDIR documentation for more information.
-#PKGDIR=/usr/portage/packages
+#PKGDIR=@PORTAGE_EPREFIX@/usr/portage/packages
#
- # PORT_LOGDIR is the location where portage will store all the logs it
+ # PORTAGE_LOGDIR is the location where portage will store all the logs it
# creates from each individual merge. They are stored as
# ${CATEGORY}:${PF}:YYYYMMDD-HHMMSS.log in the directory specified.
# If the directory does not exist, it will be created automatically and
diff --cc cnf/make.globals
index 24a42cba6,5a3015ae2..5013957ea
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -103,35 -101,15 +104,35 @@@ PORTAGE_RSYNC_OPTS="--recursive --link
PORTAGE_SYNC_STALE="30"
# Executed before emerge exit if FEATURES=clean-logs is enabled.
- PORT_LOGDIR_CLEAN="find \"\${PORT_LOGDIR}\" -type f ! -name \"summary.log*\" -mtime +7 -delete"
+ PORTAGE_LOGDIR_CLEAN="find \"\${PORTAGE_LOGDIR}\" -type f ! -name \"summary.log*\" -mtime +7 -delete"
# Minimal CONFIG_PROTECT
+# NOTE: in Prefix, these are NOT prefixed on purpose, because the
+# profiles define them too
CONFIG_PROTECT="/etc"
CONFIG_PROTECT_MASK="/etc/env.d"
# Disable auto-use
USE_ORDER="env:pkg:conf:defaults:pkginternal:features:repo:env.d"
++# PREFIX LOCAL: additional vars set during install
+# Default portage user/group
+PORTAGE_USER='@portageuser@'
+PORTAGE_GROUP='@portagegroup@'
+PORTAGE_ROOT_USER='@rootuser@'
+
+# Default ownership of installed files.
+PORTAGE_INST_UID="@rootuid@"
+PORTAGE_INST_GID="@rootgid@"
+
- # Default PATH for ebuild env
- DEFAULT_PATH="@DEFAULT_PATH@"
+# Any extra PATHs to add to the ebuild environment's PATH (if any)
+EXTRA_PATH="@EXTRA_PATH@"
+
+# The offset prefix this Portage was configured with (not used by
+# Portage itself)
+CONFIGURE_EPREFIX="@PORTAGE_EPREFIX@"
++# END PREFIX LOCAL
+
# Mode bits for ${WORKDIR} (see ebuild.5).
PORTAGE_WORKDIR_MODE="0700"
diff --cc configure.ac
index 6b8021ec6,000000000..9083824eb
mode 100644,000000..100644
--- a/configure.ac
+++ b/configure.ac
@@@ -1,140 -1,0 +1,135 @@@
+dnl Process this file with autoconf to produce a configure script.
+AC_INIT(portage-prefix, @version@, prefix@gentoo.org)
+
+AC_PREREQ([2.61])
+
+case "${prefix}" in
+ '') AC_MSG_ERROR([bad value ${prefix} for --prefix, must not be empty]) ;;
+ */) AC_MSG_ERROR([bad value ${prefix} for --prefix, must not end with '/']) ;;
+ /*|NONE) ;;
+ *) AC_MSG_ERROR([bad value ${prefix} for --prefix, must start with /]) ;;
+esac
+
+AC_CANONICAL_BUILD
+AC_CANONICAL_HOST
+AC_CANONICAL_TARGET
+
+AM_INIT_AUTOMAKE
+
+dnl Checks for programs.
+dnl store cflags prior, otherwise it's not propagated.
+if test "x$CFLAGS" != "x"
+then
+ CFLAGS=$CFLAGS
+fi
+
+AC_PREFIX_DEFAULT([/usr])
+
+AC_PROG_CC
+AC_PROG_INSTALL
+AC_PROG_LN_S
+AC_PROG_EGREP
+
+GENTOO_PATH_XCU_ID()
+GENTOO_PATH_PYTHON([2.7])
+
+AC_PATH_PROG(PORTAGE_RM, [rm], no)
+AC_PATH_PROG(PORTAGE_MV, [mv], no)
+AC_PATH_PROG(PORTAGE_BASENAME, [basename], no)
+AC_PATH_PROG(PORTAGE_DIRNAME, [dirname], no)
+dnl avoid bash internal variable messing up things here
+GENTOO_PATH_GNUPROG(PORTAGE_BASH, [bash])
+GENTOO_PATH_GNUPROG(PORTAGE_SED, [sed])
+GENTOO_PATH_GNUPROG(PORTAGE_WGET, [wget])
+GENTOO_PATH_GNUPROG(PORTAGE_FIND, [find])
+GENTOO_PATH_GNUPROG(PORTAGE_XARGS, [xargs])
+GENTOO_PATH_GNUPROG(PORTAGE_GREP, [grep])
+
+AC_ARG_WITH(portage-user,
+AC_HELP_STRING([--with-portage-user=myuser],[use user 'myuser' as portage owner (default portage)]),
+[case "${withval}" in
+ ""|yes) AC_MSG_ERROR(bad value ${withval} for --with-portage-user);;
+ *) portageuser="${withval}";;
+esac],
+[portageuser="portage"])
+
+AC_ARG_WITH(portage-group,
+AC_HELP_STRING([--with-portage-group=mygroup],[use group 'mygroup' as portage users group (default portage)]),
+[case "${withval}" in
+ ""|yes) AC_MSG_ERROR(bad value ${withval} for --with-portage-group);;
+ *) portagegroup="${withval}";;
+esac],
+[portagegroup="portage"])
+
+AC_ARG_WITH(root-user,
+AC_HELP_STRING([--with-root-user=myuser],[uses 'myuser' as owner of installed files (default is portage-user)]),
+[case "${withval}" in
+ ""|yes) AC_MSG_ERROR(bad value ${withval} for --with-root-user);;
+ *) rootuser="${withval}";;
+esac],
+[rootuser="${portageuser}"])
+
+AC_MSG_CHECKING([for user id of ${rootuser}])
+dnl grab uid of rootuser
+rootuid=`${XCU_ID} -u "${rootuser}"`
+if test "x`echo ${rootuid} | ${EGREP} '^[[0-9]]+$'`" != "x"
+then
+ AC_MSG_RESULT([${rootuid}])
+else
+ AC_MSG_ERROR([error finding the user id of ${rootuser}])
+fi
+AC_MSG_CHECKING([for group id of ${rootuser}])
+rootgid=`${XCU_ID} -g "${rootuser}"`
+if test "x`echo ${rootgid} | ${EGREP} '^[[0-9]]+$'`" != "x"
+then
+ AC_MSG_RESULT([${rootgid}])
+else
+ AC_MSG_ERROR([error finding the group id of ${rootuser}])
+fi
+
+AC_ARG_WITH(offset-prefix,
+AC_HELP_STRING([--with-offset-prefix],
+ [specify the installation prefix for all packages, defaults to an empty string]),
+ [PORTAGE_EPREFIX=$withval],
+ [PORTAGE_EPREFIX=''])
+
+if test "x$PORTAGE_EPREFIX" != "x"
+then
+ PORTAGE_EPREFIX=`${PREFIX_PORTAGE_PYTHON} -c "import os; print(os.path.normpath('$PORTAGE_EPREFIX'))"`
- DEFAULT_PATH="${PORTAGE_EPREFIX}/usr/sbin:${PORTAGE_EPREFIX}/usr/bin:${PORTAGE_EPREFIX}/sbin:${PORTAGE_EPREFIX}/bin"
- else
- # this is what trunk uses in ebuild.sh
- DEFAULT_PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+fi
+
+AC_ARG_WITH(extra-path,
+AC_HELP_STRING([--with-extra-path], [specify additional PATHs available to the portage build environment (use with care)]),
+[EXTRA_PATH="$withval"],
+[EXTRA_PATH=""])
+
+AC_SUBST(portageuser)
+AC_SUBST(portagegroup)
+AC_SUBST(rootuser)
+AC_SUBST(rootuid)
+AC_SUBST(rootgid)
+AC_SUBST(PORTAGE_EPREFIX)
- AC_SUBST(DEFAULT_PATH)
+AC_SUBST(EXTRA_PATH)
+AC_SUBST(PORTAGE_BASE,['${exec_prefix}/lib/portage'])
+
+AC_SUBST(PORTAGE_RM)
+AC_SUBST(PORTAGE_MV)
+AC_SUBST(PORTAGE_BASENAME)
+AC_SUBST(PORTAGE_DIRNAME)
+AC_SUBST(PORTAGE_BASH)
+AC_SUBST(PORTAGE_SED)
+AC_SUBST(PORTAGE_WGET)
+AC_SUBST(PORTAGE_FIND)
+AC_SUBST(PORTAGE_XARGS)
+AC_SUBST(PORTAGE_GREP)
+
+AC_CONFIG_FILES([subst-install], [chmod +x subst-install])
+AC_CONFIG_FILES([
+ Makefile
+ man/Makefile
+ bin/Makefile
+ lib/Makefile
+ cnf/Makefile
+])
+
+AC_OUTPUT
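
The configure.ac hunk above validates the chosen prefixes and normalizes the offset prefix by shelling out to Python's os.path.normpath. A loose Python equivalent of those checks, for illustration only; the real logic stays in autoconf, the error messages are made up, and the empty-string default simply mirrors the PORTAGE_EPREFIX='' fallback:

import os

def normalize_eprefix(value: str) -> str:
    if not value:
        return ""  # the default: no offset prefix at all
    if not value.startswith("/"):
        raise ValueError("offset prefix must start with '/'")
    # configure shells out to python -c "os.path.normpath(...)" for this step
    return os.path.normpath(value)

print(normalize_eprefix("/home/user/gentoo//"))  # -> /home/user/gentoo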
diff --cc lib/portage/package/ebuild/_config/special_env_vars.py
index e2ea8c393,f4f2bec2c..70a9c83c1
--- a/lib/portage/package/ebuild/_config/special_env_vars.py
+++ b/lib/portage/package/ebuild/_config/special_env_vars.py
@@@ -80,8 -80,6 +80,9 @@@ environ_whitelist +=
"ROOT", "ROOTPATH", "SYSROOT", "T", "TMP", "TMPDIR",
"USE_EXPAND", "USE_ORDER", "WORKDIR",
"XARGS", "__PORTAGE_TEST_HARDLINK_LOCKS",
- "DEFAULT_PATH", "EXTRA_PATH",
- "PORTAGE_GROUP", "PORTAGE_USER",
++ # PREFIX LOCAL
++ "EXTRA_PATH", "PORTAGE_GROUP", "PORTAGE_USER",
++ # END PREFIX LOCAL
]
# user config variables
diff --cc lib/portage/package/ebuild/doebuild.py
index 83b1f66a9,47c69967c..77c9c713a
--- a/lib/portage/package/ebuild/doebuild.py
+++ b/lib/portage/package/ebuild/doebuild.py
@@@ -247,11 -243,16 +244,22 @@@ def _doebuild_path(settings, eapi=None)
for x in portage_bin_path:
path.append(os.path.join(x, "ebuild-helpers"))
path.extend(prerootpath)
- path.extend(defaultpath)
+
+ for prefix in prefixes:
+ prefix = prefix if prefix else "/"
+ for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin", "usr/bin", "sbin", "bin"):
+ # Respect order defined in ROOTPATH
+ x_abs = os.path.join(prefix, x)
+ if x_abs not in rootpath_set:
+ path.append(x_abs)
+
path.extend(rootpath)
++
++ # PREFIX LOCAL: append EXTRA_PATH from make.globals
++ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
+ path.extend(extrapath)
+ # END PREFIX LOCAL
+
settings["PATH"] = ":".join(path)
def doebuild_environment(myebuild, mydo, myroot=None, settings=None,
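
The _doebuild_path hunk above replaces the old DEFAULT_PATH substitution with PATH entries generated per prefix, skipping anything already listed in ROOTPATH and appending EXTRA_PATH last. A standalone sketch of that construction; the helper name and the inputs below are illustrative, not Portage's actual API:

import os

def build_ebuild_path(prefixes, rootpath, extra_path=""):
    """Assemble PATH roughly the way the hunk above does (simplified)."""
    path = []
    rootpath_set = frozenset(rootpath)
    for prefix in prefixes:
        prefix = prefix if prefix else "/"
        for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin",
                  "usr/bin", "sbin", "bin"):
            # Respect the order already defined in ROOTPATH.
            x_abs = os.path.join(prefix, x)
            if x_abs not in rootpath_set:
                path.append(x_abs)
    path.extend(rootpath)
    # PREFIX LOCAL: EXTRA_PATH from make.globals is appended last.
    path.extend(x for x in extra_path.split(":") if x)
    return ":".join(path)

print(build_ebuild_path(["/home/user/gentoo", ""],
                        ["/usr/bin", "/bin"],
                        extra_path="/opt/extra/bin"))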
diff --cc subst-install.in
index 033fa981b,000000000..e9f375d76
mode 100644,000000..100644
--- a/subst-install.in
+++ b/subst-install.in
@@@ -1,75 -1,0 +1,74 @@@
+#!@PORTAGE_BASH@
+
+# for expansion below we need some things to be defined
+prefix="@prefix@"
+exec_prefix="@exec_prefix@"
+
+# For bug #279550 we have to do some nasty trick to make sure that sed
+# doesn't strip the backslash in the replacement value (because it can
+# be a backreference) and hence escape those. Eventually in strings we
+# need to escape the backslash too, such that the single backslash
+# doesn't get lost when considered an invalid escape
+rootuser='@rootuser@'
+portagegroup='@portagegroup@'
+portageuser='@portageuser@'
+rootuser=${rootuser//\\/\\\\}
+portagegroup=${portagegroup//\\/\\\\\\\\}
+portageuser=${portageuser//\\/\\\\\\\\}
+
+# there are many ways to do this all dynamic, but we only care for raw
+# speed here, so let configure fill in this list and be done with it
+at='@'
+sedexp=(
- -e "s,${at}DEFAULT_PATH${at},@DEFAULT_PATH@,g"
+ -e "s,${at}EXTRA_PATH${at},@EXTRA_PATH@,g"
+ -e "s,${at}PORTAGE_BASE${at},@PORTAGE_BASE@,g"
+ -e "s,${at}PORTAGE_BASH${at},@PORTAGE_BASH@,g"
+ -e "s,${at}PORTAGE_EPREFIX${at},@PORTAGE_EPREFIX@,g"
+ -e "s,${at}PORTAGE_MV${at},@PORTAGE_MV@,g"
+ -e "s,${at}PREFIX_PORTAGE_PYTHON${at},@PREFIX_PORTAGE_PYTHON@,g"
+ -e "s,${at}datadir${at},@datadir@,g"
+ -e "s,${at}portagegroup${at},${portagegroup},g"
+ -e "s,${at}portageuser${at},${portageuser},g"
+ -e "s,${at}rootgid${at},@rootgid@,g"
+ -e "s,${at}rootuid${at},@rootuid@,g"
+ -e "s,${at}rootuser${at},${rootuser},g"
+ -e "s,${at}sysconfdir${at},@sysconfdir@,g"
+)
+
+sources=( )
+target=
+args=( "$@" )
+
+while [[ ${#@} != 0 ]] ; do
+ case "$1" in
+ -t)
+ [[ -n ${target} ]] && sources=( "${sources[@]}" "${target##*/}" )
+ shift
+ target=":${1}"
+ ;;
+ -*)
+ shift
+ ;;
+ *)
+ if [[ -z ${target} ]] ; then
+ target="${1}"
+ elif [[ ${target} != ":"* ]] ; then
+ sources=( "${sources[@]}" "${target##*/}" )
+ target="${1}"
+ else
+ sources=( "${sources[@]}" "${1##*/}" )
+ fi
+ ;;
+ esac
+ shift
+done
+
+target=${target#:}
+INSTALL="@INSTALL@"
+echo @INSTALL_DATA@ "${args[@]}"
+if [[ ! -d ${target} ]] ; then
+ # either install will die, or it was just a single file copy
+ @INSTALL_DATA@ "${args[@]}" && sed -i "${sedexp[@]}" "${target}"
+else
+ @INSTALL_DATA@ "${args[@]}" && sed -i "${sedexp[@]}" "${sources[@]/#/${target}/}"
+fi
diff --cc travis.sh
index 3c03149e6,000000000..bcb95a9cb
mode 100755,000000..100755
--- a/travis.sh
+++ b/travis.sh
@@@ -1,32 -1,0 +1,31 @@@
+#!/usr/bin/env bash
+
+# this script runs the tests as Travis would do (.travis.yml) and can be
+# used to test the Prefix branch of portage on a non-Prefix system
+
+: ${TMPDIR=/var/tmp}
+
+HERE=$(dirname $(realpath ${BASH_SOURCE[0]}))
+REPO=${HERE##*/}.$$
+
+cd ${TMPDIR}
+git clone ${HERE} ${REPO}
+
+cd ${REPO}
+printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
+find . -type f -exec \
+ sed -e "s|@PORTAGE_EPREFIX@||" \
+ -e "s|@PORTAGE_BASE@|${PWD}|" \
+ -e "s|@PORTAGE_MV@|$(type -P mv)|" \
+ -e "s|@PORTAGE_BASH@|$(type -P bash)|" \
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|" \
- -e "s|@DEFAULT_PATH@|${EPREFIX}/usr/bin:${EPREFIX}/bin|" \
+ -e "s|@EXTRA_PATH@|${EPREFIX}/usr/sbin:${EPREFIX}/sbin|" \
+ -e "s|@portagegroup@|$(id -gn)|" \
+ -e "s|@portageuser@|$(id -un)|" \
+ -e "s|@rootuser@|$(id -un)|" \
+ -e "s|@rootuid@|$(id -u)|" \
+ -e "s|@rootgid@|$(id -g)|" \
+ -e "s|@sysconfdir@|${EPREFIX}/etc|" \
+ -i '{}' +
+unset EPREFIX
+./setup.py test
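
The find/sed pipeline in travis.sh (and the matching .travis.yml hunk) rewrites the @...@ placeholders in a checkout so the prefix branch can be exercised on a non-Prefix system. A rough Python sketch of the same substitution; the placeholder values below are examples, while the real ones come from configure or, as in travis.sh, from `type -P` and `id` on the build host:

from pathlib import Path

SUBSTITUTIONS = {
    "@PORTAGE_EPREFIX@": "",              # empty prefix on a normal system
    "@PORTAGE_BASH@": "/bin/bash",
    "@PREFIX_PORTAGE_PYTHON@": "/usr/bin/python3",
    "@EXTRA_PATH@": "/usr/sbin:/sbin",
    "@sysconfdir@": "/etc",
}

def substitute_placeholders(root):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        new_text = text
        for placeholder, value in SUBSTITUTIONS.items():
            new_text = new_text.replace(placeholder, value)
        if new_text != text:
            path.write_text(new_text, encoding="utf-8")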
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-12-12 18:54 Fabian Groffen
From: Fabian Groffen @ 2018-12-12 18:54 UTC
To: gentoo-commits
commit: 1b9bf9900acb9fe2b2f337e4947c6f290a8ccfa4
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 6 13:21:58 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Dec 6 13:22:15 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=1b9bf990
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>
.travis.yml | 7 +
RELEASE-NOTES | 54 +++++
bin/ebuild | 6 +
bin/ebuild-helpers/dodoc | 3 +-
bin/ebuild-helpers/ecompress | 161 -------------
bin/ebuild-helpers/ecompressdir | 226 -------------------
bin/ebuild-helpers/fowners | 15 ++
bin/ebuild-helpers/fperms | 15 ++
bin/ebuild-helpers/prepall | 29 +--
bin/ebuild-helpers/prepalldocs | 19 +-
bin/ebuild-helpers/prepallinfo | 9 +-
bin/ebuild-helpers/prepallman | 24 +-
bin/ebuild-helpers/prepinfo | 3 +-
bin/ebuild-helpers/prepman | 35 +--
bin/ebuild.sh | 8 +-
bin/ecompress | 198 ++++++++++++++++
bin/ecompress-file | 61 +++++
bin/estrip | 10 +-
bin/etc-update | 12 +-
bin/helper-functions.sh | 9 +-
bin/install-qa-check.d/60pngfix | 13 +-
bin/isolated-functions.sh | 30 ++-
bin/misc-functions.sh | 211 +++++++----------
bin/phase-functions.sh | 9 +-
bin/phase-helpers.sh | 7 +-
bin/pid-ns-init | 30 +++
bin/portageq | 4 +
bin/postinst-qa-check.d/50gnome2-utils | 2 +-
bin/postinst-qa-check.d/50xdg-utils | 2 +-
cnf/make.conf.example.arm64.diff | 37 +++
cnf/make.globals | 4 +-
lib/_emerge/actions.py | 7 -
lib/_emerge/help.py | 2 +-
lib/portage/const.py | 7 +-
lib/portage/dbapi/porttree.py | 72 ++++--
lib/portage/dbapi/vartree.py | 12 +
.../package/ebuild/_config/LocationsManager.py | 5 +-
lib/portage/package/ebuild/config.py | 92 +++++---
lib/portage/package/ebuild/doebuild.py | 22 +-
lib/portage/process.py | 96 +++++++-
lib/portage/repository/config.py | 148 +++++++++---
.../repository/storage}/__init__.py | 2 +-
.../repository/storage/hardlink_quarantine.py | 97 ++++++++
lib/portage/repository/storage/hardlink_rcu.py | 251 +++++++++++++++++++++
lib/portage/repository/storage/inplace.py | 49 ++++
lib/portage/repository/storage/interface.py | 87 +++++++
lib/portage/sync/controller.py | 1 +
lib/portage/sync/modules/git/git.py | 10 +-
lib/portage/sync/modules/rsync/rsync.py | 85 ++-----
lib/portage/sync/syncbase.py | 55 ++++-
lib/portage/tests/__init__.py | 36 +--
lib/portage/tests/bin/setup_env.py | 2 +-
lib/portage/tests/sync/test_sync_local.py | 60 ++++-
.../tests/util/futures/test_compat_coroutine.py | 22 +-
lib/portage/util/futures/_asyncio/__init__.py | 14 ++
lib/portage/util/futures/_sync_decorator.py | 54 +++++
lib/portage/util/futures/compat_coroutine.py | 20 +-
man/ebuild.1 | 5 +
man/ebuild.5 | 6 +-
man/emerge.1 | 2 +-
man/make.conf.5 | 33 ++-
man/portage.5 | 35 +++
repoman/RELEASE-NOTES | 18 ++
repoman/bin/repoman | 2 +-
repoman/cnf/linechecks/linechecks.yaml | 3 +-
repoman/lib/repoman/actions.py | 55 ++++-
repoman/lib/repoman/config.py | 8 +-
repoman/lib/repoman/copyrights.py | 18 +-
repoman/lib/repoman/main.py | 9 +-
repoman/lib/repoman/modules/commit/manifest.py | 11 +-
.../modules/linechecks/gentoo_header/header.py | 47 ++--
repoman/lib/repoman/scanner.py | 3 +-
repoman/lib/repoman/tests/__init__.py | 34 +--
.../lib/repoman/tests/changelog/test_echangelog.py | 2 +-
repoman/lib/repoman/tests/simple/test_simple.py | 14 +-
repoman/runtests | 6 +-
repoman/setup.py | 2 +-
runtests | 6 +-
setup.py | 4 +-
tox.ini | 2 +-
80 files changed, 1892 insertions(+), 994 deletions(-)
diff --cc bin/ebuild-helpers/dodoc
index 2e2b0459c,e83091045..9fdb6495a
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/prepall
index 95bf24be4,e23a6d410..0251e7948
--- a/bin/ebuild-helpers/prepall
+++ b/bin/ebuild-helpers/prepall
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/prepallman
index a7e1eca83,e23a6d410..0251e7948
--- a/bin/ebuild-helpers/prepallman
+++ b/bin/ebuild-helpers/prepallman
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/etc-update
index d75388bba,0c7a51995..0364fb0de
--- a/bin/etc-update
+++ b/bin/etc-update
@@@ -32,13 -32,13 +32,13 @@@ get_config()
"${PORTAGE_CONFIGROOT}"etc/etc-update.conf)
}
- OS_RELEASE_ID=$(cat "@PORTAGE_EPREFIX@"/etc/os-release 2>/dev/null | grep '^ID=' | cut -d'=' -f2 | sed -e 's/"//g')
-OS_RELEASE_POSSIBLE_IDS=$(source /etc/os-release >/dev/null 2>&1; echo ":${ID}:${ID_LIKE//[[:space:]]/:}:")
++OS_RELEASE_POSSIBLE_IDS=$(source "@PORTAGE_EPREFIX@"/etc/os-release >/dev/null 2>&1; echo ":${ID}:${ID_LIKE//[[:space:]]/:}:")
- case $OS_RELEASE_ID in
- suse|opensuse|opensuse-leap|opensuse-tumbleweed) OS_FAMILY='rpm' ;;
- fedora|rhel) OS_FAMILY='rpm' ;;
- arch|archarm|arch32|manjaro|antergos) OS_FAMILY='arch' NEW_EXT='pacnew';;
- *) OS_FAMILY='gentoo' ;;
+ case ${OS_RELEASE_POSSIBLE_IDS} in
+ *:suse:*|*:opensuse:*|*:opensuse-tumbleweed:*) OS_FAMILY='rpm';;
+ *:fedora:*|*:rhel:*) OS_FAMILY='rpm';;
+ *:arch:*|*:antergos:*) OS_FAMILY='arch' NEW_EXT='pacnew';;
+ *) OS_FAMILY='gentoo';;
esac
if [[ $OS_FAMILY == 'gentoo' ]]; then
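
The etc-update hunk above switches from grepping ID= out of os-release to sourcing the file (relative to @PORTAGE_EPREFIX@) and matching both ID and ID_LIKE. A small illustrative Python version of that detection; the family names mirror the case statement, everything else here is assumed:

def detect_os_family(os_release_text):
    """Return 'rpm', 'arch' or 'gentoo' from os-release contents."""
    fields = {}
    for line in os_release_text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key] = value.strip().strip('"')
    ids = [fields.get("ID", "")] + fields.get("ID_LIKE", "").split()
    if any(i in ("suse", "opensuse", "opensuse-tumbleweed", "fedora", "rhel")
           for i in ids):
        return "rpm"
    if any(i in ("arch", "antergos") for i in ids):
        return "arch"
    return "gentoo"

print(detect_os_family('ID="opensuse-leap"\nID_LIKE="suse opensuse"'))  # rpm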
diff --cc bin/misc-functions.sh
index 6cb88898e,5de26b44d..0590a2862
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
#
# Miscellaneous shell functions that make use of the ebuild env but don't need
@@@ -224,51 -99,43 +105,76 @@@ install_qa_check()
)
done < <(printf "%s\0" "${qa_checks[@]}" | LC_ALL=C sort -u -z)
- export STRIP_MASK
- prepall
- ___eapi_has_docompress && prepcompress
- ecompressdir --dequeue
- ecompress --dequeue
+ if has chflags $FEATURES ; then
+ # Save all the file flags for restoration afterwards.
+ mtree -c -p "${ED}" -k flags > "${T}/bsdflags.mtree"
+ # Remove all the file flags so that we can do anything necessary.
+ chflags -R noschg,nouchg,nosappnd,nouappnd "${ED}"
+ chflags -R nosunlnk,nouunlnk "${ED}" 2>/dev/null
+ fi
+
+ [[ -d ${ED%/}/usr/share/info ]] && prepinfo
+
+ # If binpkg-docompress is enabled, apply compression before creating
+ # the binary package.
+ if has binpkg-docompress ${FEATURES}; then
+ "${PORTAGE_BIN_PATH}"/ecompress --queue "${PORTAGE_DOCOMPRESS[@]}"
+ "${PORTAGE_BIN_PATH}"/ecompress --ignore "${PORTAGE_DOCOMPRESS_SKIP[@]}"
+ "${PORTAGE_BIN_PATH}"/ecompress --dequeue
+ fi
+
+ # If binpkg-dostrip is enabled, apply stripping before creating
+ # the binary package.
+ # Note: disabling it won't help with packages calling prepstrip directly.
+ if has binpkg-dostrip ${FEATURES}; then
+ export STRIP_MASK
+ if ___eapi_has_dostrip; then
+ "${PORTAGE_BIN_PATH}"/estrip --queue "${PORTAGE_DOSTRIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --ignore "${PORTAGE_DOSTRIP_SKIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --dequeue
+ else
+ prepallstrip
+ fi
+ fi
- if ___eapi_has_dostrip; then
- "${PORTAGE_BIN_PATH}"/estrip --queue "${PORTAGE_DOSTRIP[@]}"
- "${PORTAGE_BIN_PATH}"/estrip --ignore "${PORTAGE_DOSTRIP_SKIP[@]}"
- "${PORTAGE_BIN_PATH}"/estrip --dequeue
+ if has chflags $FEATURES ; then
+ # Restore all the file flags that were saved earlier on.
+ mtree -U -e -p "${ED}" -k flags < "${T}/bsdflags.mtree" &> /dev/null
fi
+ # PREFIX LOCAL:
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED, we skip searching for QA
+ # checks there, the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here such that the diff with trunk remains just
+ # offsetted and not out of order
+ install_qa_check_misc
+ # END PREFIX LOCAL
+}
+
+install_qa_check_elf() {
# Create NEEDED.ELF.2 regardless of RESTRICT=binchecks, since this info is
# too useful not to have (it's required for things like preserve-libs), and
# it's tempting for ebuild authors to set RESTRICT=binchecks for packages
@@@ -303,389 -168,51 +209,434 @@@ install_qa_check_misc()
rm -f "${ED%/}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extention, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ install_name_is_relative() {
+ case $1 in
+ "@executable_path/"*) return 0 ;;
+ "@loader_path"/*) return 0 ;;
+ "@rpath/"*) return 0 ;;
+ *) return 1 ;;
+ esac
+ }
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ ignore=
+ qa_var="QA_IGNORE_INSTALL_NAME_FILES_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] &&
+ QA_IGNORE_INSTALL_NAME_FILES=(\"\${${qa_var}[@]}\")"
+
+ if [[ ${#QA_IGNORE_INSTALL_NAME_FILES[@]} -gt 1 ]] ; then
+ for x in "${QA_IGNORE_INSTALL_NAME_FILES[@]}" ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_IGNORE_INSTALL_NAME_FILES} ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if install_name_is_relative ${install_name} ; then
+ # try to locate the library in the installed image
+ local inpath=${install_name#@*/}
+ local libl
+ for libl in $(find "${ED}" -name "${inpath##*/}") ; do
+ if [[ ${libl} == */${inpath} ]] ; then
+ install_name=/${libl#${D}}
+ break
+ fi
+ done
+ fi
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif ! install_name_is_relative ${lib} && [[ ! -e ${lib} && ! -e ${D}${lib} ]] ; then
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+ done
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed, and there may be plenty
+ # of possibilities by introducing one or the other cache!
+ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ __vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
+ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on aix,
+ # as there is nothing like "soname" on pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX does indicate this by needing either '.' or '..'
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ __vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ __vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+ __dyn_instprep() {
+ if [[ -e ${PORTAGE_BUILDDIR}/.instprepped ]] ; then
+ __vecho ">>> It appears that '$PF' is already instprepped; skipping."
+ __vecho ">>> Remove '${PORTAGE_BUILDDIR}/.instprepped' to force instprep."
+ return 0
+ fi
+
+ if has chflags ${FEATURES}; then
+ # Save all the file flags for restoration afterwards.
+ mtree -c -p "${ED}" -k flags > "${T}/bsdflags.mtree"
+ # Remove all the file flags so that we can do anything necessary.
+ chflags -R noschg,nouchg,nosappnd,nouappnd "${ED}"
+ chflags -R nosunlnk,nouunlnk "${ED}" 2>/dev/null
+ fi
+
+ # If binpkg-docompress is disabled, we need to apply compression
+ # before installing.
+ if ! has binpkg-docompress ${FEATURES}; then
+ "${PORTAGE_BIN_PATH}"/ecompress --queue "${PORTAGE_DOCOMPRESS[@]}"
+ "${PORTAGE_BIN_PATH}"/ecompress --ignore "${PORTAGE_DOCOMPRESS_SKIP[@]}"
+ "${PORTAGE_BIN_PATH}"/ecompress --dequeue
+ fi
+
+ # If binpkg-dostrip is disabled, apply stripping before creating
+ # the binary package.
+ if ! has binpkg-dostrip ${FEATURES}; then
+ export STRIP_MASK
+ if ___eapi_has_dostrip; then
+ "${PORTAGE_BIN_PATH}"/estrip --queue "${PORTAGE_DOSTRIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --ignore "${PORTAGE_DOSTRIP_SKIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --dequeue
+ else
+ prepallstrip
+ fi
+ fi
+
+ if has chflags ${FEATURES}; then
+ # Restore all the file flags that were saved earlier on.
+ mtree -U -e -p "${ED}" -k flags < "${T}/bsdflags.mtree" &> /dev/null
+ fi
+
+ >> "${PORTAGE_BUILDDIR}/.instprepped" || \
+ die "Failed to create ${PORTAGE_BUILDDIR}/.instprepped"
+ }
+
preinst_qa_check() {
postinst_qa_check preinst
}
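
The install_qa_check hunk in this message dispatches to a binary-format specific QA pass based on CHOST: Mach-O on Darwin, PE-COFF on Interix/WinNT, XCOFF on AIX, ELF otherwise. A hypothetical standalone mapping that mirrors that case statement, not actual Portage code:

import fnmatch

# CHOST glob -> QA pass, mirroring the case statement in install_qa_check().
QA_CHECKS = (
    ("*-darwin*",  "install_qa_check_macho"),   # Mach-O (macOS)
    ("*-interix*", "install_qa_check_pecoff"),  # PE-COFF (Interix)
    ("*-winnt*",   "install_qa_check_pecoff"),  # PE-COFF (Windows)
    ("*-aix*",     "install_qa_check_xcoff"),   # XCOFF (AIX)
)

def qa_check_for_chost(chost):
    for pattern, check in QA_CHECKS:
        if fnmatch.fnmatch(chost, pattern):
            return check
    return "install_qa_check_elf"  # ELF platforms are the default

print(qa_check_for_chost("x86_64-apple-darwin17"))  # install_qa_check_macho
print(qa_check_for_chost("powerpc-ibm-aix7.1"))     # install_qa_check_xcoff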
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-08-04 6:56 Fabian Groffen
From: Fabian Groffen @ 2018-08-04 6:56 UTC
To: gentoo-commits
commit: 7930d48e80624f600fd96effbf1f509e12964678
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 4 06:55:46 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Aug 4 06:55:46 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=7930d48e
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.gitignore | 1 +
.travis.yml | 28 +---
NEWS | 10 ++
RELEASE-NOTES | 50 +++++++
TEST-NOTES | 6 +-
bin/archive-conf | 2 +-
bin/binhost-snapshot | 2 +-
bin/clean_locks | 2 +-
bin/dispatch-conf | 2 +-
bin/ebuild | 2 +-
bin/ebuild-ipc.py | 2 +-
bin/egencache | 2 +-
bin/emaint | 2 +-
bin/emerge | 2 +-
bin/emerge-webrsync | 23 ++-
bin/env-update | 2 +-
bin/etc-update | 2 +-
bin/fixpackages | 2 +-
bin/glsa-check | 2 +-
bin/install-qa-check.d/10executable-issues | 4 +-
bin/install-qa-check.d/10ignored-flags | 6 +-
bin/install-qa-check.d/60pngfix | 14 +-
bin/misc-functions.sh | 2 +-
bin/portageq | 8 +-
bin/quickpkg | 2 +-
bin/regenworld | 2 +-
cnf/make.globals | 2 +-
{pym => lib}/Makefile.in | 0
{pym => lib}/_emerge/AbstractDepPriority.py | 0
{pym => lib}/_emerge/AbstractEbuildProcess.py | 0
{pym => lib}/_emerge/AbstractPollTask.py | 0
{pym => lib}/_emerge/AsynchronousLock.py | 0
{pym => lib}/_emerge/AsynchronousTask.py | 0
{pym => lib}/_emerge/AtomArg.py | 0
{pym => lib}/_emerge/Binpkg.py | 0
{pym => lib}/_emerge/BinpkgEnvExtractor.py | 0
{pym => lib}/_emerge/BinpkgExtractorAsync.py | 0
{pym => lib}/_emerge/BinpkgFetcher.py | 0
{pym => lib}/_emerge/BinpkgPrefetcher.py | 0
{pym => lib}/_emerge/BinpkgVerifier.py | 0
{pym => lib}/_emerge/Blocker.py | 0
{pym => lib}/_emerge/BlockerCache.py | 0
{pym => lib}/_emerge/BlockerDB.py | 0
{pym => lib}/_emerge/BlockerDepPriority.py | 0
{pym => lib}/_emerge/CompositeTask.py | 0
{pym => lib}/_emerge/DepPriority.py | 0
{pym => lib}/_emerge/DepPriorityNormalRange.py | 0
{pym => lib}/_emerge/DepPrioritySatisfiedRange.py | 0
{pym => lib}/_emerge/Dependency.py | 0
{pym => lib}/_emerge/DependencyArg.py | 0
{pym => lib}/_emerge/EbuildBinpkg.py | 0
{pym => lib}/_emerge/EbuildBuild.py | 0
{pym => lib}/_emerge/EbuildBuildDir.py | 0
{pym => lib}/_emerge/EbuildExecuter.py | 0
{pym => lib}/_emerge/EbuildFetcher.py | 0
{pym => lib}/_emerge/EbuildFetchonly.py | 0
{pym => lib}/_emerge/EbuildIpcDaemon.py | 0
{pym => lib}/_emerge/EbuildMerge.py | 0
{pym => lib}/_emerge/EbuildMetadataPhase.py | 0
{pym => lib}/_emerge/EbuildPhase.py | 0
{pym => lib}/_emerge/EbuildProcess.py | 0
{pym => lib}/_emerge/EbuildSpawnProcess.py | 0
{pym => lib}/_emerge/FakeVartree.py | 0
{pym => lib}/_emerge/FifoIpcDaemon.py | 0
{pym => lib}/_emerge/JobStatusDisplay.py | 0
{pym => lib}/_emerge/MergeListItem.py | 0
{pym => lib}/_emerge/MetadataRegen.py | 0
{pym => lib}/_emerge/MiscFunctionsProcess.py | 0
{pym => lib}/_emerge/Package.py | 5 +-
{pym => lib}/_emerge/PackageArg.py | 0
{pym => lib}/_emerge/PackageMerge.py | 0
{pym => lib}/_emerge/PackagePhase.py | 0
{pym => lib}/_emerge/PackageUninstall.py | 0
{pym => lib}/_emerge/PackageVirtualDbapi.py | 0
{pym => lib}/_emerge/PipeReader.py | 0
{pym => lib}/_emerge/PollScheduler.py | 0
{pym => lib}/_emerge/ProgressHandler.py | 0
{pym => lib}/_emerge/RootConfig.py | 0
{pym => lib}/_emerge/Scheduler.py | 3 +
{pym => lib}/_emerge/SequentialTaskQueue.py | 0
{pym => lib}/_emerge/SetArg.py | 0
{pym => lib}/_emerge/SpawnProcess.py | 0
{pym => lib}/_emerge/SubProcess.py | 0
{pym => lib}/_emerge/Task.py | 0
{pym => lib}/_emerge/TaskSequence.py | 0
{pym => lib}/_emerge/UninstallFailure.py | 0
{pym => lib}/_emerge/UnmergeDepPriority.py | 0
{pym => lib}/_emerge/UseFlagDisplay.py | 0
{pym => lib}/_emerge/UserQuery.py | 0
{pym => lib}/_emerge/__init__.py | 0
.../_emerge/_find_deep_system_runtime_deps.py | 0
{pym => lib}/_emerge/_flush_elog_mod_echo.py | 0
{pym => lib}/_emerge/actions.py | 0
{pym => lib}/_emerge/chk_updated_cfg_files.py | 0
{pym => lib}/_emerge/clear_caches.py | 0
{pym => lib}/_emerge/countdown.py | 0
{pym => lib}/_emerge/create_depgraph_params.py | 0
{pym => lib}/_emerge/create_world_atom.py | 0
{pym => lib}/_emerge/depgraph.py | 14 +-
{pym => lib}/_emerge/emergelog.py | 0
{pym => lib}/_emerge/getloadavg.py | 0
{pym => lib}/_emerge/help.py | 0
{pym => lib}/_emerge/is_valid_package_atom.py | 0
{pym => lib}/_emerge/main.py | 0
{pym => lib}/_emerge/post_emerge.py | 0
.../_emerge/resolver/DbapiProvidesIndex.py | 0
{pym => lib}/_emerge/resolver/__init__.py | 0
{pym => lib}/_emerge/resolver/backtracking.py | 0
.../_emerge/resolver/circular_dependency.py | 0
{pym => lib}/_emerge/resolver/output.py | 0
{pym => lib}/_emerge/resolver/output_helpers.py | 0
{pym => lib}/_emerge/resolver/package_tracker.py | 0
{pym => lib}/_emerge/resolver/slot_collision.py | 0
{pym => lib}/_emerge/search.py | 0
.../_emerge/show_invalid_depstring_notice.py | 0
{pym => lib}/_emerge/stdout_spinner.py | 0
{pym => lib}/_emerge/unmerge.py | 0
{pym => lib}/portage/__init__.py | 0
{pym => lib}/portage/_emirrordist/Config.py | 0
.../portage/_emirrordist/DeletionIterator.py | 0
{pym => lib}/portage/_emirrordist/DeletionTask.py | 0
{pym => lib}/portage/_emirrordist/FetchIterator.py | 0
{pym => lib}/portage/_emirrordist/FetchTask.py | 0
.../portage/_emirrordist/MirrorDistTask.py | 0
{pym => lib}/portage/_emirrordist/__init__.py | 0
{pym => lib}/portage/_emirrordist/main.py | 0
{pym => lib}/portage/_global_updates.py | 0
{pym => lib}/portage/_legacy_globals.py | 0
{pym => lib}/portage/_selinux.py | 0
{pym => lib}/portage/_sets/ProfilePackageSet.py | 0
{pym => lib}/portage/_sets/__init__.py | 0
{pym => lib}/portage/_sets/base.py | 0
{pym => lib}/portage/_sets/dbapi.py | 0
{pym => lib}/portage/_sets/files.py | 0
{pym => lib}/portage/_sets/libs.py | 0
{pym => lib}/portage/_sets/profiles.py | 0
{pym => lib}/portage/_sets/security.py | 0
{pym => lib}/portage/_sets/shell.py | 0
{pym => lib}/portage/cache/__init__.py | 0
{pym => lib}/portage/cache/anydbm.py | 0
{pym => lib}/portage/cache/cache_errors.py | 0
{pym => lib}/portage/cache/ebuild_xattr.py | 0
{pym => lib}/portage/cache/flat_hash.py | 0
{pym => lib}/portage/cache/fs_template.py | 0
.../portage/cache/index/IndexStreamIterator.py | 0
{pym => lib}/portage/cache/index/__init__.py | 0
{pym => lib}/portage/cache/index/pkg_desc_index.py | 0
{pym => lib}/portage/cache/mappings.py | 0
{pym => lib}/portage/cache/metadata.py | 0
{pym => lib}/portage/cache/sql_template.py | 0
{pym => lib}/portage/cache/sqlite.py | 0
{pym => lib}/portage/cache/template.py | 0
{pym => lib}/portage/cache/volatile.py | 0
{pym => lib}/portage/checksum.py | 0
{pym => lib}/portage/const.py | 0
{pym => lib}/portage/const_autotool.py | 0
{pym => lib}/portage/cvstree.py | 0
{pym => lib}/portage/data.py | 0
{pym => lib}/portage/dbapi/DummyTree.py | 0
{pym => lib}/portage/dbapi/IndexedPortdb.py | 0
{pym => lib}/portage/dbapi/IndexedVardb.py | 0
.../dbapi/_ContentsCaseSensitivityManager.py | 0
{pym => lib}/portage/dbapi/_MergeProcess.py | 0
{pym => lib}/portage/dbapi/_SyncfsProcess.py | 0
{pym => lib}/portage/dbapi/_VdbMetadataDelta.py | 0
{pym => lib}/portage/dbapi/__init__.py | 12 ++
{pym => lib}/portage/dbapi/_expand_new_virt.py | 0
{pym => lib}/portage/dbapi/_similar_name_search.py | 0
{pym => lib}/portage/dbapi/bintree.py | 4 +-
{pym => lib}/portage/dbapi/cpv_expand.py | 0
{pym => lib}/portage/dbapi/dep_expand.py | 0
{pym => lib}/portage/dbapi/porttree.py | 81 +++++++----
{pym => lib}/portage/dbapi/vartree.py | 25 +++-
{pym => lib}/portage/dbapi/virtual.py | 0
{pym => lib}/portage/debug.py | 2 +-
{pym => lib}/portage/dep/__init__.py | 0
{pym => lib}/portage/dep/_dnf.py | 0
{pym => lib}/portage/dep/_slot_operator.py | 0
{pym => lib}/portage/dep/dep_check.py | 0
{pym => lib}/portage/dep/soname/SonameAtom.py | 0
{pym => lib}/portage/dep/soname/__init__.py | 0
.../portage/dep/soname/multilib_category.py | 0
{pym => lib}/portage/dep/soname/parse.py | 0
{pym => lib}/portage/dispatch_conf.py | 0
{pym => lib}/portage/eapi.py | 2 +-
{pym => lib}/portage/eclass_cache.py | 0
{pym => lib}/portage/elog/__init__.py | 0
{pym => lib}/portage/elog/filtering.py | 0
{pym => lib}/portage/elog/messages.py | 0
{pym => lib}/portage/elog/mod_custom.py | 0
{pym => lib}/portage/elog/mod_echo.py | 0
{pym => lib}/portage/elog/mod_mail.py | 0
{pym => lib}/portage/elog/mod_mail_summary.py | 0
{pym => lib}/portage/elog/mod_save.py | 0
{pym => lib}/portage/elog/mod_save_summary.py | 0
{pym => lib}/portage/elog/mod_syslog.py | 0
{pym => lib}/portage/emaint/__init__.py | 0
{pym => lib}/portage/emaint/defaults.py | 0
{pym => lib}/portage/emaint/main.py | 0
{pym => lib}/portage/emaint/modules/__init__.py | 0
.../portage/emaint/modules/binhost/__init__.py | 0
.../portage/emaint/modules/binhost/binhost.py | 0
.../portage/emaint/modules/config/__init__.py | 0
.../portage/emaint/modules/config/config.py | 0
.../portage/emaint/modules/logs/__init__.py | 0
{pym => lib}/portage/emaint/modules/logs/logs.py | 0
.../portage/emaint/modules/merges/__init__.py | 0
.../portage/emaint/modules/merges/merges.py | 0
.../portage/emaint/modules/move/__init__.py | 0
{pym => lib}/portage/emaint/modules/move/move.py | 0
.../portage/emaint/modules/resume/__init__.py | 0
.../portage/emaint/modules/resume/resume.py | 0
.../portage/emaint/modules/sync/__init__.py | 0
{pym => lib}/portage/emaint/modules/sync/sync.py | 0
.../portage/emaint/modules/world/__init__.py | 0
{pym => lib}/portage/emaint/modules/world/world.py | 0
{pym => lib}/portage/env/__init__.py | 0
{pym => lib}/portage/env/config.py | 0
{pym => lib}/portage/env/loaders.py | 0
{pym => lib}/portage/env/validators.py | 0
{pym => lib}/portage/exception.py | 0
{pym => lib}/portage/getbinpkg.py | 0
{pym => lib}/portage/glsa.py | 0
{pym => lib}/portage/localization.py | 0
{pym => lib}/portage/locks.py | 0
{pym => lib}/portage/mail.py | 0
{pym => lib}/portage/manifest.py | 0
{pym => lib}/portage/metadata.py | 0
{pym => lib}/portage/module.py | 0
{pym => lib}/portage/news.py | 0
{pym => lib}/portage/output.py | 0
{pym => lib}/portage/package/__init__.py | 0
{pym => lib}/portage/package/ebuild/__init__.py | 0
.../package/ebuild/_config/KeywordsManager.py | 0
.../package/ebuild/_config/LicenseManager.py | 0
.../package/ebuild/_config/LocationsManager.py | 0
.../portage/package/ebuild/_config/MaskManager.py | 0
.../portage/package/ebuild/_config/UseManager.py | 0
.../package/ebuild/_config/VirtualsManager.py | 0
.../portage/package/ebuild/_config/__init__.py | 0
.../package/ebuild/_config/env_var_validation.py | 0
.../portage/package/ebuild/_config/features_set.py | 0
.../portage/package/ebuild/_config/helper.py | 0
.../package/ebuild/_config/special_env_vars.py | 0
.../package/ebuild/_config/unpack_dependencies.py | 0
.../portage/package/ebuild/_ipc/ExitCommand.py | 0
.../portage/package/ebuild/_ipc/IpcCommand.py | 0
.../portage/package/ebuild/_ipc/QueryCommand.py | 0
.../portage/package/ebuild/_ipc/__init__.py | 0
.../portage/package/ebuild/_metadata_invalid.py | 0
.../ebuild/_parallel_manifest/ManifestProcess.py | 0
.../ebuild/_parallel_manifest/ManifestScheduler.py | 0
.../ebuild/_parallel_manifest/ManifestTask.py | 0
.../package/ebuild/_parallel_manifest/__init__.py | 0
.../portage/package/ebuild/_spawn_nofetch.py | 0
{pym => lib}/portage/package/ebuild/config.py | 65 +++++----
.../package/ebuild/deprecated_profile_check.py | 0
{pym => lib}/portage/package/ebuild/digestcheck.py | 0
{pym => lib}/portage/package/ebuild/digestgen.py | 0
{pym => lib}/portage/package/ebuild/doebuild.py | 2 +-
{pym => lib}/portage/package/ebuild/fetch.py | 0
.../portage/package/ebuild/getmaskingreason.py | 0
.../portage/package/ebuild/getmaskingstatus.py | 0
.../portage/package/ebuild/prepare_build_dirs.py | 0
.../portage/package/ebuild/profile_iuse.py | 0
{pym => lib}/portage/process.py | 0
{pym => lib}/portage/progress.py | 0
{pym => lib}/portage/proxy/__init__.py | 0
{pym => lib}/portage/proxy/lazyimport.py | 0
{pym => lib}/portage/proxy/objectproxy.py | 0
{pym => lib}/portage/repository/__init__.py | 0
{pym => lib}/portage/repository/config.py | 16 ++-
{pym => lib}/portage/sync/__init__.py | 0
{pym => lib}/portage/sync/config_checks.py | 0
{pym => lib}/portage/sync/controller.py | 0
{pym => lib}/portage/sync/getaddrinfo_validate.py | 0
{pym => lib}/portage/sync/modules/__init__.py | 0
{pym => lib}/portage/sync/modules/cvs/__init__.py | 0
{pym => lib}/portage/sync/modules/cvs/cvs.py | 0
{pym => lib}/portage/sync/modules/git/__init__.py | 0
{pym => lib}/portage/sync/modules/git/git.py | 72 ++++++++--
.../portage/sync/modules/rsync/__init__.py | 0
{pym => lib}/portage/sync/modules/rsync/rsync.py | 120 ++++++++++------
{pym => lib}/portage/sync/modules/svn/__init__.py | 0
{pym => lib}/portage/sync/modules/svn/svn.py | 0
.../portage/sync/modules/webrsync/__init__.py | 6 +-
lib/portage/sync/modules/webrsync/webrsync.py | 136 ++++++++++++++++++
{pym => lib}/portage/sync/old_tree_timestamp.py | 0
{pym => lib}/portage/sync/syncbase.py | 41 ++++++
{pym => lib}/portage/tests/__init__.py | 0
{pym => lib}/portage/tests/bin/__init__.py | 0
{pym => lib}/portage/tests/bin/__test__.py | 0
{pym => lib}/portage/tests/bin/setup_env.py | 0
{pym => lib}/portage/tests/bin/test_dobin.py | 0
{pym => lib}/portage/tests/bin/test_dodir.py | 0
{pym => lib}/portage/tests/bin/test_doins.py | 0
.../portage/tests/bin/test_eapi7_ver_funcs.py | 0
.../portage/tests/bin/test_filter_bash_env.py | 0
{pym => lib}/portage/tests/dbapi/__init__.py | 0
{pym => lib}/portage/tests/dbapi/__test__.py | 0
{pym => lib}/portage/tests/dbapi/test_fakedbapi.py | 0
.../portage/tests/dbapi/test_portdb_cache.py | 0
{pym => lib}/portage/tests/dep/__init__.py | 0
{pym => lib}/portage/tests/dep/__test__.py | 0
{pym => lib}/portage/tests/dep/testAtom.py | 0
.../portage/tests/dep/testCheckRequiredUse.py | 0
.../portage/tests/dep/testExtendedAtomDict.py | 0
.../portage/tests/dep/testExtractAffectingUSE.py | 0
{pym => lib}/portage/tests/dep/testStandalone.py | 0
.../portage/tests/dep/test_best_match_to_list.py | 0
{pym => lib}/portage/tests/dep/test_dep_getcpv.py | 0
{pym => lib}/portage/tests/dep/test_dep_getrepo.py | 0
{pym => lib}/portage/tests/dep/test_dep_getslot.py | 0
.../portage/tests/dep/test_dep_getusedeps.py | 0
{pym => lib}/portage/tests/dep/test_dnf_convert.py | 0
.../portage/tests/dep/test_get_operator.py | 0
.../tests/dep/test_get_required_use_flags.py | 0
{pym => lib}/portage/tests/dep/test_isjustname.py | 0
{pym => lib}/portage/tests/dep/test_isvalidatom.py | 0
.../portage/tests/dep/test_match_from_list.py | 0
{pym => lib}/portage/tests/dep/test_overlap_dnf.py | 0
.../portage/tests/dep/test_paren_reduce.py | 0
{pym => lib}/portage/tests/dep/test_use_reduce.py | 0
{pym => lib}/portage/tests/ebuild/__init__.py | 0
{pym => lib}/portage/tests/ebuild/__test__.py | 0
.../tests/ebuild/test_array_fromfile_eof.py | 0
{pym => lib}/portage/tests/ebuild/test_config.py | 0
.../portage/tests/ebuild/test_doebuild_fd_pipes.py | 0
.../portage/tests/ebuild/test_doebuild_spawn.py | 0
.../portage/tests/ebuild/test_ipc_daemon.py | 0
{pym => lib}/portage/tests/ebuild/test_spawn.py | 0
.../tests/ebuild/test_use_expand_incremental.py | 0
{pym => lib}/portage/tests/emerge/__init__.py | 0
{pym => lib}/portage/tests/emerge/__test__.py | 0
.../portage/tests/emerge/test_config_protect.py | 0
.../emerge/test_emerge_blocker_file_collision.py | 0
.../portage/tests/emerge/test_emerge_slot_abi.py | 0
.../portage/tests/emerge/test_global_updates.py | 0
{pym => lib}/portage/tests/emerge/test_simple.py | 0
{pym => lib}/portage/tests/env/__init__.py | 0
{pym => lib}/portage/tests/env/__test__.py | 0
{pym => lib}/portage/tests/env/config/__init__.py | 0
{pym => lib}/portage/tests/env/config/__test__.py | 0
.../tests/env/config/test_PackageKeywordsFile.py | 0
.../tests/env/config/test_PackageMaskFile.py | 0
.../tests/env/config/test_PackageUseFile.py | 0
.../tests/env/config/test_PortageModulesFile.py | 0
{pym => lib}/portage/tests/glsa/__init__.py | 0
{pym => lib}/portage/tests/glsa/__test__.py | 0
.../portage/tests/glsa/test_security_set.py | 0
{pym => lib}/portage/tests/lafilefixer/__init__.py | 0
{pym => lib}/portage/tests/lafilefixer/__test__.py | 0
.../portage/tests/lafilefixer/test_lafilefixer.py | 0
{pym => lib}/portage/tests/lazyimport/__init__.py | 0
{pym => lib}/portage/tests/lazyimport/__test__.py | 0
.../test_lazy_import_portage_baseline.py | 0
.../lazyimport/test_preload_portage_submodules.py | 0
{pym => lib}/portage/tests/lint/__init__.py | 0
{pym => lib}/portage/tests/lint/__test__.py | 0
{pym => lib}/portage/tests/lint/metadata.py | 0
.../portage/tests/lint/test_bash_syntax.py | 0
.../portage/tests/lint/test_compile_modules.py | 0
.../portage/tests/lint/test_import_modules.py | 0
{pym => lib}/portage/tests/locks/__init__.py | 0
{pym => lib}/portage/tests/locks/__test__.py | 0
.../portage/tests/locks/test_asynchronous_lock.py | 0
.../portage/tests/locks/test_lock_nonblock.py | 0
{pym => lib}/portage/tests/news/__init__.py | 0
{pym => lib}/portage/tests/news/__test__.py | 0
{pym => lib}/portage/tests/news/test_NewsItem.py | 0
{pym => lib}/portage/tests/process/__init__.py | 0
{pym => lib}/portage/tests/process/__test__.py | 0
.../portage/tests/process/test_PopenProcess.py | 0
.../tests/process/test_PopenProcessBlockingIO.py | 0
{pym => lib}/portage/tests/process/test_poll.py | 0
.../portage/tests/resolver/ResolverPlayground.py | 0
{pym => lib}/portage/tests/resolver/__init__.py | 0
{pym => lib}/portage/tests/resolver/__test__.py | 0
.../resolver/binpkg_multi_instance/__init__.py | 0
.../resolver/binpkg_multi_instance/__test__.py | 0
.../test_build_id_profile_format.py | 0
.../binpkg_multi_instance/test_rebuilt_binaries.py | 0
.../portage/tests/resolver/soname/__init__.py | 0
.../portage/tests/resolver/soname/__test__.py | 0
.../tests/resolver/soname/test_autounmask.py | 0
.../portage/tests/resolver/soname/test_depclean.py | 0
.../tests/resolver/soname/test_downgrade.py | 0
.../tests/resolver/soname/test_or_choices.py | 0
.../tests/resolver/soname/test_reinstall.py | 0
.../tests/resolver/soname/test_skip_update.py | 0
.../soname/test_slot_conflict_reinstall.py | 0
.../resolver/soname/test_slot_conflict_update.py | 0
.../tests/resolver/soname/test_soname_provided.py | 0
.../tests/resolver/soname/test_unsatisfiable.py | 0
.../tests/resolver/soname/test_unsatisfied.py | 0
.../portage/tests/resolver/test_autounmask.py | 0
.../tests/resolver/test_autounmask_binpkg_use.py | 0
.../resolver/test_autounmask_keep_keywords.py | 0
.../tests/resolver/test_autounmask_multilib_use.py | 0
.../tests/resolver/test_autounmask_parent.py | 0
.../resolver/test_autounmask_use_backtrack.py | 0
.../tests/resolver/test_autounmask_use_breakage.py | 0
.../portage/tests/resolver/test_backtracking.py | 0
{pym => lib}/portage/tests/resolver/test_bdeps.py | 0
.../resolver/test_binary_pkg_ebuild_visibility.py | 0
.../portage/tests/resolver/test_blocker.py | 0
.../portage/tests/resolver/test_changed_deps.py | 0
.../tests/resolver/test_circular_choices.py | 0
.../tests/resolver/test_circular_dependencies.py | 0
.../portage/tests/resolver/test_complete_graph.py | 0
...test_complete_if_new_subslot_without_revbump.py | 0
.../portage/tests/resolver/test_depclean.py | 0
.../portage/tests/resolver/test_depclean_order.py | 0
.../resolver/test_depclean_slot_unavailable.py | 0
{pym => lib}/portage/tests/resolver/test_depth.py | 0
.../resolver/test_disjunctive_depend_order.py | 0
{pym => lib}/portage/tests/resolver/test_eapi.py | 0
.../tests/resolver/test_features_test_use.py | 88 ++++++++++++
.../resolver/test_imagemagick_graphicsmagick.py | 0
.../portage/tests/resolver/test_keywords.py | 0
.../portage/tests/resolver/test_merge_order.py | 0
.../test_missing_iuse_and_evaluated_atoms.py | 0
.../portage/tests/resolver/test_multirepo.py | 0
.../portage/tests/resolver/test_multislot.py | 0
.../tests/resolver/test_old_dep_chain_display.py | 0
.../portage/tests/resolver/test_onlydeps.py | 0
.../tests/resolver/test_onlydeps_circular.py | 0
.../tests/resolver/test_onlydeps_minimal.py | 0
.../portage/tests/resolver/test_or_choices.py | 0
.../tests/resolver/test_or_downgrade_installed.py | 0
.../tests/resolver/test_or_upgrade_installed.py | 0
{pym => lib}/portage/tests/resolver/test_output.py | 0
.../portage/tests/resolver/test_package_tracker.py | 0
.../tests/resolver/test_profile_default_eapi.py | 0
.../tests/resolver/test_profile_package_set.py | 0
.../portage/tests/resolver/test_rebuild.py | 0
.../test_regular_slot_change_without_revbump.py | 0
.../portage/tests/resolver/test_required_use.py | 0
.../resolver/test_runtime_cycle_merge_order.py | 0
{pym => lib}/portage/tests/resolver/test_simple.py | 0
.../portage/tests/resolver/test_slot_abi.py | 0
.../tests/resolver/test_slot_abi_downgrade.py | 0
.../resolver/test_slot_change_without_revbump.py | 0
.../portage/tests/resolver/test_slot_collisions.py | 0
.../resolver/test_slot_conflict_force_rebuild.py | 0
.../resolver/test_slot_conflict_mask_update.py | 0
.../tests/resolver/test_slot_conflict_rebuild.py | 0
.../test_slot_conflict_unsatisfied_deep_deps.py | 0
.../tests/resolver/test_slot_conflict_update.py | 0
.../resolver/test_slot_operator_autounmask.py | 0
.../resolver/test_slot_operator_complete_graph.py | 0
.../resolver/test_slot_operator_exclusive_slots.py | 0
.../tests/resolver/test_slot_operator_rebuild.py | 0
.../resolver/test_slot_operator_required_use.py | 0
.../resolver/test_slot_operator_reverse_deps.py | 0
.../test_slot_operator_runtime_pkg_mask.py | 0
.../resolver/test_slot_operator_unsatisfied.py | 0
.../tests/resolver/test_slot_operator_unsolved.py | 0
..._slot_operator_update_probe_parent_downgrade.py | 0
.../test_solve_non_slot_operator_slot_conflicts.py | 0
.../portage/tests/resolver/test_targetroot.py | 0
.../tests/resolver/test_unpack_dependencies.py | 0
.../portage/tests/resolver/test_use_aliases.py | 0
.../tests/resolver/test_use_dep_defaults.py | 0
.../portage/tests/resolver/test_useflags.py | 0
.../resolver/test_virtual_minimize_children.py | 0
.../portage/tests/resolver/test_virtual_slot.py | 0
.../portage/tests/resolver/test_with_test_deps.py | 0
{pym => lib}/portage/tests/runTests.py | 0
{pym => lib}/portage/tests/sets/__init__.py | 0
{pym => lib}/portage/tests/sets/__test__.py | 0
{pym => lib}/portage/tests/sets/base/__init__.py | 0
{pym => lib}/portage/tests/sets/base/__test__.py | 0
.../tests/sets/base/testInternalPackageSet.py | 0
{pym => lib}/portage/tests/sets/files/__init__.py | 0
{pym => lib}/portage/tests/sets/files/__test__.py | 0
.../portage/tests/sets/files/testConfigFileSet.py | 0
.../portage/tests/sets/files/testStaticFileSet.py | 0
{pym => lib}/portage/tests/sets/shell/__init__.py | 0
{pym => lib}/portage/tests/sets/shell/__test__.py | 0
{pym => lib}/portage/tests/sets/shell/testShell.py | 0
{pym => lib}/portage/tests/sync/__init__.py | 0
{pym => lib}/portage/tests/sync/__test__.py | 0
{pym => lib}/portage/tests/sync/test_sync_local.py | 15 +-
{pym => lib}/portage/tests/unicode/__init__.py | 0
{pym => lib}/portage/tests/unicode/__test__.py | 0
.../portage/tests/unicode/test_string_format.py | 0
{pym => lib}/portage/tests/update/__init__.py | 0
{pym => lib}/portage/tests/update/__test__.py | 0
{pym => lib}/portage/tests/update/test_move_ent.py | 0
.../portage/tests/update/test_move_slot_ent.py | 0
.../portage/tests/update/test_update_dbentry.py | 0
{pym => lib}/portage/tests/util/__init__.py | 0
{pym => lib}/portage/tests/util/__test__.py | 0
.../portage/tests/util/dyn_libs/__init__.py | 0
.../portage/tests/util/dyn_libs/__test__.py | 0
.../tests/util/dyn_libs/test_soname_deps.py | 0
.../portage/tests/util/eventloop/__init__.py | 0
.../portage/tests/util/eventloop/__test__.py | 0
.../tests/util/eventloop/test_call_soon_fifo.py | 0
.../portage/tests/util/file_copy/__init__.py | 0
.../portage/tests/util/file_copy/__test__.py | 0
.../portage/tests/util/file_copy/test_copyfile.py | 0
.../portage/tests/util/futures/__init__.py | 0
.../portage/tests/util/futures/__test__.py | 0
.../portage/tests/util/futures/asyncio/__init__.py | 0
.../portage/tests/util/futures/asyncio/__test__.py | 0
.../util/futures/asyncio/test_child_watcher.py | 0
.../futures/asyncio/test_event_loop_in_fork.py | 0
.../tests/util/futures/asyncio/test_pipe_closed.py | 0
.../asyncio/test_policy_wrapper_recursion.py | 0
.../futures/asyncio/test_run_until_complete.py | 0
.../util/futures/asyncio/test_subprocess_exec.py | 0
.../util/futures/asyncio/test_wakeup_fd_sigchld.py | 0
.../tests/util/futures/test_compat_coroutine.py | 159 +++++++++++++++++++++
.../tests/util/futures/test_done_callback.py | 0
.../tests/util/futures/test_iter_completed.py | 0
.../portage/tests/util/futures/test_retry.py | 0
{pym => lib}/portage/tests/util/test_checksum.py | 0
{pym => lib}/portage/tests/util/test_digraph.py | 0
{pym => lib}/portage/tests/util/test_getconfig.py | 0
{pym => lib}/portage/tests/util/test_grabdict.py | 0
lib/portage/tests/util/test_install_mask.py | 129 +++++++++++++++++
.../portage/tests/util/test_normalizedPath.py | 0
.../portage/tests/util/test_stackDictList.py | 0
{pym => lib}/portage/tests/util/test_stackDicts.py | 0
{pym => lib}/portage/tests/util/test_stackLists.py | 0
.../portage/tests/util/test_uniqueArray.py | 0
{pym => lib}/portage/tests/util/test_varExpand.py | 0
{pym => lib}/portage/tests/util/test_whirlpool.py | 0
{pym => lib}/portage/tests/util/test_xattr.py | 0
{pym => lib}/portage/tests/versions/__init__.py | 0
{pym => lib}/portage/tests/versions/__test__.py | 0
.../portage/tests/versions/test_cpv_sort_key.py | 0
{pym => lib}/portage/tests/versions/test_vercmp.py | 0
{pym => lib}/portage/tests/xpak/__init__.py | 0
{pym => lib}/portage/tests/xpak/__test__.py | 0
{pym => lib}/portage/tests/xpak/test_decodeint.py | 0
{pym => lib}/portage/update.py | 0
{pym => lib}/portage/util/ExtractKernelVersion.py | 0
{pym => lib}/portage/util/SlotObject.py | 0
{pym => lib}/portage/util/_ShelveUnicodeWrapper.py | 0
{pym => lib}/portage/util/__init__.py | 0
{pym => lib}/portage/util/_async/AsyncFunction.py | 0
{pym => lib}/portage/util/_async/AsyncScheduler.py | 0
.../portage/util/_async/AsyncTaskFuture.py | 0
{pym => lib}/portage/util/_async/FileCopier.py | 0
{pym => lib}/portage/util/_async/FileDigester.py | 0
{pym => lib}/portage/util/_async/ForkProcess.py | 0
{pym => lib}/portage/util/_async/PipeLogger.py | 0
.../portage/util/_async/PipeReaderBlockingIO.py | 0
{pym => lib}/portage/util/_async/PopenProcess.py | 0
.../portage/util/_async/SchedulerInterface.py | 0
{pym => lib}/portage/util/_async/TaskScheduler.py | 0
{pym => lib}/portage/util/_async/__init__.py | 0
.../portage/util/_async/run_main_scheduler.py | 0
{pym => lib}/portage/util/_ctypes.py | 0
{pym => lib}/portage/util/_desktop_entry.py | 0
.../portage/util/_dyn_libs/LinkageMapELF.py | 0
.../portage/util/_dyn_libs/LinkageMapMachO.py | 0
.../portage/util/_dyn_libs/LinkageMapPeCoff.py | 0
.../portage/util/_dyn_libs/LinkageMapXCoff.py | 0
{pym => lib}/portage/util/_dyn_libs/NeededEntry.py | 0
.../util/_dyn_libs/PreservedLibsRegistry.py | 0
{pym => lib}/portage/util/_dyn_libs/__init__.py | 0
.../util/_dyn_libs/display_preserved_libs.py | 0
{pym => lib}/portage/util/_dyn_libs/soname_deps.py | 0
{pym => lib}/portage/util/_eventloop/EventLoop.py | 50 ++++++-
.../portage/util/_eventloop/PollConstants.py | 0
.../portage/util/_eventloop/PollSelectAdapter.py | 0
{pym => lib}/portage/util/_eventloop/__init__.py | 0
.../portage/util/_eventloop/asyncio_event_loop.py | 31 ++++
.../portage/util/_eventloop/global_event_loop.py | 0
{pym => lib}/portage/util/_get_vm_info.py | 0
{pym => lib}/portage/util/_info_files.py | 0
{pym => lib}/portage/util/_path.py | 0
{pym => lib}/portage/util/_pty.py | 0
{pym => lib}/portage/util/_urlopen.py | 0
{pym => lib}/portage/util/_xattr.py | 0
{pym => lib}/portage/util/backoff.py | 0
{pym => lib}/portage/util/changelog.py | 0
{pym => lib}/portage/util/compression_probe.py | 0
{pym => lib}/portage/util/configparser.py | 0
{pym => lib}/portage/util/cpuinfo.py | 0
{pym => lib}/portage/util/digraph.py | 0
{pym => lib}/portage/util/elf/__init__.py | 0
{pym => lib}/portage/util/elf/constants.py | 0
{pym => lib}/portage/util/elf/header.py | 0
{pym => lib}/portage/util/endian/__init__.py | 0
{pym => lib}/portage/util/endian/decode.py | 0
{pym => lib}/portage/util/env_update.py | 0
{pym => lib}/portage/util/file_copy/__init__.py | 0
{pym => lib}/portage/util/formatter.py | 0
{pym => lib}/portage/util/futures/__init__.py | 0
.../portage/util/futures/_asyncio/__init__.py | 0
.../portage/util/futures/_asyncio/tasks.py | 0
lib/portage/util/futures/compat_coroutine.py | 112 +++++++++++++++
{pym => lib}/portage/util/futures/events.py | 0
.../portage/util/futures/executor/__init__.py | 0
{pym => lib}/portage/util/futures/executor/fork.py | 0
.../portage/util/futures/extendedfutures.py | 0
{pym => lib}/portage/util/futures/futures.py | 0
.../portage/util/futures/iter_completed.py | 0
{pym => lib}/portage/util/futures/retry.py | 0
{pym => lib}/portage/util/futures/transports.py | 0
{pym => lib}/portage/util/futures/unix_events.py | 0
{pym => lib}/portage/util/install_mask.py | 7 +-
.../portage/util/iterators/MultiIterGroupBy.py | 0
{pym => lib}/portage/util/iterators/__init__.py | 0
{pym => lib}/portage/util/lafilefixer.py | 0
{pym => lib}/portage/util/listdir.py | 0
{pym => lib}/portage/util/locale.py | 0
{pym => lib}/portage/util/monotonic.py | 0
{pym => lib}/portage/util/movefile.py | 0
{pym => lib}/portage/util/mtimedb.py | 0
{pym => lib}/portage/util/path.py | 0
{pym => lib}/portage/util/socks5.py | 0
{pym => lib}/portage/util/whirlpool.py | 0
{pym => lib}/portage/util/writeable_check.py | 0
{pym => lib}/portage/versions.py | 0
{pym => lib}/portage/xml/__init__.py | 0
{pym => lib}/portage/xml/metadata.py | 0
{pym => lib}/portage/xpak.py | 0
man/color.map.5 | 2 +-
man/make.conf.5 | 6 +-
man/portage.5 | 23 ++-
man/ru/color.map.5 | 2 +-
misc/emerge-delta-webrsync | 26 +++-
pym/portage/sync/modules/webrsync/webrsync.py | 70 ---------
.../tests/resolver/test_features_test_use.py | 68 ---------
repoman/RELEASE-NOTES | 7 +
repoman/TEST-NOTES | 6 +-
repoman/bin/repoman | 4 +-
repoman/cnf/qa_data/qa_data.yaml | 2 +
repoman/cnf/repository/qa_data.yaml | 2 +
repoman/{pym => lib}/repoman/__init__.py | 0
repoman/{pym => lib}/repoman/_portage.py | 0
repoman/{pym => lib}/repoman/_subprocess.py | 0
repoman/{pym => lib}/repoman/actions.py | 0
repoman/{pym => lib}/repoman/argparser.py | 0
repoman/{pym => lib}/repoman/check_missingslot.py | 0
repoman/{pym => lib}/repoman/checks/__init__.py | 0
.../{pym => lib}/repoman/checks/herds/__init__.py | 0
.../{pym => lib}/repoman/checks/herds/herdbase.py | 0
.../{pym => lib}/repoman/checks/herds/metadata.py | 0
repoman/{pym => lib}/repoman/config.py | 0
repoman/{pym => lib}/repoman/copyrights.py | 0
repoman/{pym => lib}/repoman/errors.py | 0
repoman/{pym => lib}/repoman/gpg.py | 0
repoman/{pym => lib}/repoman/main.py | 0
repoman/{pym => lib}/repoman/metadata.py | 0
repoman/{pym => lib}/repoman/modules/__init__.py | 0
.../repoman/modules/commit/__init__.py | 0
.../repoman/modules/commit/manifest.py | 0
.../repoman/modules/commit/repochecks.py | 0
.../repoman/modules/linechecks/__init__.py | 0
.../modules/linechecks/assignment/__init__.py | 0
.../modules/linechecks/assignment/assignment.py | 0
.../repoman/modules/linechecks/base.py | 0
.../repoman/modules/linechecks/config.py | 0
.../repoman/modules/linechecks/controller.py | 0
.../repoman/modules/linechecks/depend/__init__.py | 0
.../repoman/modules/linechecks/depend/implicit.py | 0
.../modules/linechecks/deprecated/__init__.py | 0
.../modules/linechecks/deprecated/deprecated.py | 0
.../modules/linechecks/deprecated/inherit.py | 0
.../repoman/modules/linechecks/do/__init__.py | 0
.../repoman/modules/linechecks/do/dosym.py | 0
.../repoman/modules/linechecks/eapi/__init__.py | 0
.../repoman/modules/linechecks/eapi/checks.py | 0
.../repoman/modules/linechecks/eapi/definition.py | 0
.../repoman/modules/linechecks/emake/__init__.py | 0
.../repoman/modules/linechecks/emake/emake.py | 0
.../modules/linechecks/gentoo_header/__init__.py | 0
.../modules/linechecks/gentoo_header/header.py | 0
.../repoman/modules/linechecks/helpers/__init__.py | 0
.../repoman/modules/linechecks/helpers/offset.py | 0
.../repoman/modules/linechecks/nested/__init__.py | 0
.../repoman/modules/linechecks/nested/nested.py | 0
.../repoman/modules/linechecks/nested/nesteddie.py | 0
.../repoman/modules/linechecks/patches/__init__.py | 0
.../repoman/modules/linechecks/patches/patches.py | 0
.../repoman/modules/linechecks/phases/__init__.py | 0
.../repoman/modules/linechecks/phases/phase.py | 0
.../repoman/modules/linechecks/portage/__init__.py | 0
.../repoman/modules/linechecks/portage/internal.py | 0
.../repoman/modules/linechecks/quotes/__init__.py | 0
.../repoman/modules/linechecks/quotes/quoteda.py | 0
.../repoman/modules/linechecks/quotes/quotes.py | 0
.../repoman/modules/linechecks/uri/__init__.py | 0
.../repoman/modules/linechecks/uri/uri.py | 0
.../repoman/modules/linechecks/use/__init__.py | 0
.../repoman/modules/linechecks/use/builtwith.py | 0
.../repoman/modules/linechecks/useless/__init__.py | 0
.../repoman/modules/linechecks/useless/cd.py | 0
.../repoman/modules/linechecks/useless/dodoc.py | 0
.../modules/linechecks/whitespace/__init__.py | 0
.../repoman/modules/linechecks/whitespace/blank.py | 0
.../modules/linechecks/whitespace/whitespace.py | 0
.../modules/linechecks/workaround/__init__.py | 0
.../modules/linechecks/workaround/workarounds.py | 0
.../{pym => lib}/repoman/modules/scan/__init__.py | 0
.../repoman/modules/scan/depend/__init__.py | 0
.../repoman/modules/scan/depend/_depend_checks.py | 9 ++
.../repoman/modules/scan/depend/_gen_arches.py | 0
.../repoman/modules/scan/depend/profile.py | 36 +++++
.../repoman/modules/scan/directories/__init__.py | 0
.../repoman/modules/scan/directories/files.py | 0
.../repoman/modules/scan/directories/mtime.py | 0
.../repoman/modules/scan/eapi/__init__.py | 0
.../{pym => lib}/repoman/modules/scan/eapi/eapi.py | 0
.../repoman/modules/scan/ebuild/__init__.py | 0
.../repoman/modules/scan/ebuild/ebuild.py | 0
.../repoman/modules/scan/ebuild/multicheck.py | 0
.../repoman/modules/scan/eclasses/__init__.py | 0
.../repoman/modules/scan/eclasses/live.py | 0
.../repoman/modules/scan/eclasses/ruby.py | 0
.../repoman/modules/scan/fetch/__init__.py | 0
.../repoman/modules/scan/fetch/fetches.py | 0
.../repoman/modules/scan/keywords/__init__.py | 0
.../repoman/modules/scan/keywords/keywords.py | 21 +++
.../repoman/modules/scan/manifest/__init__.py | 0
.../repoman/modules/scan/manifest/manifests.py | 0
.../repoman/modules/scan/metadata/__init__.py | 0
.../repoman/modules/scan/metadata/description.py | 0
.../modules/scan/metadata/ebuild_metadata.py | 0
.../repoman/modules/scan/metadata/pkgmetadata.py | 0
.../repoman/modules/scan/metadata/restrict.py | 0
.../repoman/modules/scan/metadata/use_flags.py | 0
.../{pym => lib}/repoman/modules/scan/module.py | 0
.../repoman/modules/scan/options/__init__.py | 0
.../repoman/modules/scan/options/options.py | 0
repoman/{pym => lib}/repoman/modules/scan/scan.py | 0
.../{pym => lib}/repoman/modules/scan/scanbase.py | 0
.../repoman/modules/vcs/None/__init__.py | 0
.../repoman/modules/vcs/None/changes.py | 0
.../repoman/modules/vcs/None/status.py | 0
.../{pym => lib}/repoman/modules/vcs/__init__.py | 0
.../repoman/modules/vcs/bzr/__init__.py | 0
.../repoman/modules/vcs/bzr/changes.py | 0
.../{pym => lib}/repoman/modules/vcs/bzr/status.py | 0
.../{pym => lib}/repoman/modules/vcs/changes.py | 0
.../repoman/modules/vcs/cvs/__init__.py | 0
.../repoman/modules/vcs/cvs/changes.py | 0
.../{pym => lib}/repoman/modules/vcs/cvs/status.py | 0
.../repoman/modules/vcs/git/__init__.py | 0
.../repoman/modules/vcs/git/changes.py | 0
.../{pym => lib}/repoman/modules/vcs/git/status.py | 0
.../repoman/modules/vcs/hg/__init__.py | 0
.../{pym => lib}/repoman/modules/vcs/hg/changes.py | 0
.../{pym => lib}/repoman/modules/vcs/hg/status.py | 0
.../{pym => lib}/repoman/modules/vcs/settings.py | 0
.../repoman/modules/vcs/svn/__init__.py | 0
.../repoman/modules/vcs/svn/changes.py | 0
.../{pym => lib}/repoman/modules/vcs/svn/status.py | 0
repoman/{pym => lib}/repoman/modules/vcs/vcs.py | 0
repoman/{pym => lib}/repoman/profile.py | 0
repoman/{pym => lib}/repoman/qa_data.py | 0
repoman/{pym => lib}/repoman/qa_tracker.py | 0
repoman/{pym => lib}/repoman/repos.py | 0
repoman/{pym => lib}/repoman/scanner.py | 0
repoman/{pym => lib}/repoman/tests/__init__.py | 0
repoman/{pym => lib}/repoman/tests/__test__.py | 0
.../repoman/tests/changelog/__init__.py | 0
.../repoman/tests/changelog/__test__.py | 0
.../repoman/tests/changelog/test_echangelog.py | 0
.../{pym => lib}/repoman/tests/commit/__init__.py | 0
.../{pym => lib}/repoman/tests/commit/__test__.py | 0
.../repoman/tests/commit/test_commitmsg.py | 0
repoman/{pym => lib}/repoman/tests/runTests.py | 2 +-
.../{pym => lib}/repoman/tests/simple/__init__.py | 0
.../{pym => lib}/repoman/tests/simple/__test__.py | 0
.../repoman/tests/simple/test_simple.py | 0
repoman/{pym => lib}/repoman/utilities.py | 0
repoman/man/repoman.1 | 3 +
repoman/runtests | 4 +-
repoman/setup.py | 24 +---
runtests | 4 +-
setup.py | 8 +-
testpath | 4 +-
tox.ini | 16 +++
781 files changed, 1354 insertions(+), 376 deletions(-)
diff --cc .travis.yml
index b13d8295e,846308e08..27c1c78d4
--- a/.travis.yml
+++ b/.travis.yml
@@@ -23,30 -12,10 +12,25 @@@ install
script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
+ - find . -type f -exec
+ sed -e "s|@PORTAGE_EPREFIX@||"
+ -e "s|@PORTAGE_BASE@|${PWD}|"
+ -e "s|@PORTAGE_MV@|$(type -P mv)|"
+ -e "s|@PORTAGE_BASH@|$(type -P bash)|"
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|"
+ -e "s|@DEFAULT_PATH@|/usr/bin:/bin|"
+ -e "s|@EXTRA_PATH@|/usr/sbin:/sbin|"
+ -e "s|@portagegroup@|$(id -gn)|"
+ -e "s|@portageuser@|$(id -un)|"
+ -e "s|@rootuser@|$(id -un)|"
+ -e "s|@rootuid@|$(id -u)|"
+ -e "s|@rootgid@|$(id -g)|"
+ -e "s|@sysconfdir@|/etc|"
+ -i '{}' +
- ./setup.py test
- ./setup.py install --root=/tmp/install-root
- # prevent repoman tests from trying to fetch metadata.xsd
- - mkdir -p /tmp/install-root/usr/lib/portage/cnf
- - cp repoman/cnf/metadata.xsd /tmp/install-root/usr/lib/portage/cnf/
- - sudo rsync -a /tmp/install-root/. /
- - python -b -Wd -m portage.tests.runTests
- # repoman test block
- - repoman/setup.py test
- - repoman/setup.py install --root=/tmp/install-root
- - sudo rsync -a /tmp/install-root/. /
- - python -b -Wd -m repoman.tests.runTests
+ - if [[ ${TRAVIS_PYTHON_VERSION} == ?.? ]]; then
+ tox -e py${TRAVIS_PYTHON_VERSION/./};
+ else
+ tox -e ${TRAVIS_PYTHON_VERSION};
+ fi
diff --cc bin/portageq
index 7b9addb67,c63591a77..327ef3057
--- a/bin/portageq
+++ b/bin/portageq
@@@ -24,14 -24,12 +24,12 @@@ except KeyboardInterrupt
import os
import types
- # for an explanation on this logic, see pym/_emerge/__init__.py
- # this differs from master, we need to revisit this when we can install
- # using distutils, like master
- if os.environ.__contains__("PORTAGE_PYTHONPATH"):
- pym_paths = [ os.environ["PORTAGE_PYTHONPATH"] ]
+ if os.path.isfile(os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), ".portage_not_installed")):
+ pym_paths = [os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "lib")]
+ sys.path.insert(0, pym_paths[0])
else:
- import distutils.sysconfig
- pym_paths = [os.path.join(distutils.sysconfig.get_python_lib(), x) for x in ("_emerge", "portage")]
+ pym_paths = [ os.path.join(os.path.dirname(
+ os.path.dirname(os.path.realpath(__file__))), "pym") ]
# Avoid sandbox violations after Python upgrade.
if os.environ.get("SANDBOX_ON") == "1":
sandbox_write = os.environ.get("SANDBOX_WRITE", "").split(":")
diff --cc cnf/make.globals
index 131c5076d,04a708af8..49a09f664
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -112,26 -107,8 +112,26 @@@ CONFIG_PROTECT="/etc
CONFIG_PROTECT_MASK="/etc/env.d"
# Disable auto-use
- USE_ORDER="env:pkg:conf:defaults:pkginternal:repo:env.d"
+ USE_ORDER="env:pkg:conf:defaults:pkginternal:features:repo:env.d"
+# Default portage user/group
+PORTAGE_USER='@portageuser@'
+PORTAGE_GROUP='@portagegroup@'
+PORTAGE_ROOT_USER='@rootuser@'
+
+# Default ownership of installed files.
+PORTAGE_INST_UID="@rootuid@"
+PORTAGE_INST_GID="@rootgid@"
+
+# Default PATH for ebuild env
+DEFAULT_PATH="@DEFAULT_PATH@"
+# Any extra PATHs to add to the ebuild environment's PATH (if any)
+EXTRA_PATH="@EXTRA_PATH@"
+
+# The offset prefix this Portage was configured with (not used by
+# Portage itself)
+CONFIGURE_EPREFIX="@PORTAGE_EPREFIX@"
+
# Mode bits for ${WORKDIR} (see ebuild.5).
PORTAGE_WORKDIR_MODE="0700"
diff --cc lib/Makefile.in
index a32b219d6,000000000..a32b219d6
mode 100644,000000..100644
--- a/lib/Makefile.in
+++ b/lib/Makefile.in
diff --cc lib/portage/const_autotool.py
index d12003c46,000000000..d12003c46
mode 100644,000000..100644
--- a/lib/portage/const_autotool.py
+++ b/lib/portage/const_autotool.py
diff --cc lib/portage/util/_dyn_libs/LinkageMapPeCoff.py
index fd0ab6ee8,000000000..fd0ab6ee8
mode 100644,000000..100644
--- a/lib/portage/util/_dyn_libs/LinkageMapPeCoff.py
+++ b/lib/portage/util/_dyn_libs/LinkageMapPeCoff.py
diff --cc lib/portage/util/_dyn_libs/LinkageMapXCoff.py
index 6c4c994b5,000000000..6c4c994b5
mode 100644,000000..100644
--- a/lib/portage/util/_dyn_libs/LinkageMapXCoff.py
+++ b/lib/portage/util/_dyn_libs/LinkageMapXCoff.py
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-06-25 8:34 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-06-25 8:34 UTC (permalink / raw
To: gentoo-commits
commit: e2248ef051753c1f37863632f0c1df2de6fd2362
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 25 08:31:21 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jun 25 08:31:21 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=e2248ef0
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/_emerge/SpawnProcess.py | 3 +++
pym/portage/package/ebuild/doebuild.py | 9 +++------
2 files changed, 6 insertions(+), 6 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-06-17 14:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-06-17 14:38 UTC (permalink / raw
To: gentoo-commits
commit: aba84e3a714f19584f449ae953b91a132fd5d4cd
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 17 14:38:18 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jun 17 14:38:18 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=aba84e3a
autogen.sh: try to get the best sh there is
autogen.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/autogen.sh b/autogen.sh
index 972776867..ec510e01b 100755
--- a/autogen.sh
+++ b/autogen.sh
@@ -1,4 +1,4 @@
-#!/bin/sh
+#!/usr/bin/env sh
die() {
echo "!!! $*" > /dev/stderr
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-06-17 14:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-06-17 14:38 UTC (permalink / raw
To: gentoo-commits
commit: 150bd31027eea511980934e21f7604e1536cb5d6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 17 14:35:24 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jun 17 14:35:24 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=150bd310
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/etc-update | 2 +-
pym/_emerge/AbstractEbuildProcess.py | 15 ++++++++++++--
pym/_emerge/Binpkg.py | 10 ++++++++-
pym/_emerge/BinpkgFetcher.py | 8 ++++++++
pym/_emerge/CompositeTask.py | 1 +
pym/_emerge/EbuildBuild.py | 34 ++++++++++++++++++++++++++++---
pym/_emerge/EbuildFetcher.py | 12 +++++++++++
pym/_emerge/EbuildFetchonly.py | 5 ++++-
pym/_emerge/EbuildPhase.py | 4 ++++
pym/_emerge/PackageUninstall.py | 10 ++++++++-
pym/_emerge/actions.py | 17 +++++++++++++++-
pym/_emerge/create_depgraph_params.py | 2 +-
pym/portage/_emirrordist/FetchIterator.py | 10 +++++++++
pym/portage/package/ebuild/doebuild.py | 15 +++++++-------
14 files changed, 126 insertions(+), 19 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-05-28 15:24 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-05-28 15:24 UTC (permalink / raw
To: gentoo-commits
commit: fa9be21a77360886a44615815f4855b8794c3dd8
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon May 28 15:23:45 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon May 28 15:23:45 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=fa9be21a
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/etc-update | 16 ++---
pym/_emerge/AbstractPollTask.py | 12 +---
pym/_emerge/PipeReader.py | 19 +++---
pym/portage/package/ebuild/prepare_build_dirs.py | 5 +-
pym/portage/tests/process/test_poll.py | 75 ++++++++++++++++--------
pym/portage/util/_eventloop/EventLoop.py | 2 +-
6 files changed, 73 insertions(+), 56 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-05-25 19:44 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-05-25 19:44 UTC (permalink / raw
To: gentoo-commits
commit: 0bf775500d26e7ed9ef3a5b0d68d68deca70fcbb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri May 25 19:44:06 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri May 25 19:44:06 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=0bf77550
travis.sh: add helper script to run tests
travis.sh | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
diff --git a/travis.sh b/travis.sh
new file mode 100755
index 000000000..3c03149e6
--- /dev/null
+++ b/travis.sh
@@ -0,0 +1,32 @@
+#!/usr/bin/env bash
+
+# this script runs the tests as Travis would do (.travis.yml) and can be
+# used to test the Prefix branch of portage on a non-Prefix system
+
+: ${TMPDIR=/var/tmp}
+
+HERE=$(dirname $(realpath ${BASH_SOURCE[0]}))
+REPO=${HERE##*/}.$$
+
+cd ${TMPDIR}
+git clone ${HERE} ${REPO}
+
+cd ${REPO}
+printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
+find . -type f -exec \
+ sed -e "s|@PORTAGE_EPREFIX@||" \
+ -e "s|@PORTAGE_BASE@|${PWD}|" \
+ -e "s|@PORTAGE_MV@|$(type -P mv)|" \
+ -e "s|@PORTAGE_BASH@|$(type -P bash)|" \
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|" \
+ -e "s|@DEFAULT_PATH@|${EPREFIX}/usr/bin:${EPREFIX}/bin|" \
+ -e "s|@EXTRA_PATH@|${EPREFIX}/usr/sbin:${EPREFIX}/sbin|" \
+ -e "s|@portagegroup@|$(id -gn)|" \
+ -e "s|@portageuser@|$(id -un)|" \
+ -e "s|@rootuser@|$(id -un)|" \
+ -e "s|@rootuid@|$(id -u)|" \
+ -e "s|@rootgid@|$(id -g)|" \
+ -e "s|@sysconfdir@|${EPREFIX}/etc|" \
+ -i '{}' +
+unset EPREFIX
+./setup.py test
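For reference, a minimal sketch of running this helper from a checkout of the prefix branch on a non-Prefix system (assumptions: bash, git and python are available, /var/tmp is writable; TMPDIR may be overridden to clone into a different scratch directory):
# from the root of a portage prefix-branch checkout
./travis.sh
# or point the scratch clone somewhere else
TMPDIR=/tmp ./travis.sh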
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-05-25 19:44 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-05-25 19:44 UTC (permalink / raw
To: gentoo-commits
commit: 2397b7b1beef5819a172c9d9d688f82de887858f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri May 25 19:42:25 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri May 25 19:42:25 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=2397b7b1
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
RELEASE-NOTES | 15 +++++
pym/_emerge/EbuildPhase.py | 30 ++++++++-
pym/_emerge/MiscFunctionsProcess.py | 6 +-
pym/portage/package/ebuild/doebuild.py | 49 ++++++++++----
.../modules => tests/util/dyn_libs}/__init__.py | 0
.../tests/{bin => util/dyn_libs}/__test__.py | 0
.../tests/util/dyn_libs/test_soname_deps.py | 34 ++++++++++
.../util/futures/asyncio/test_wakeup_fd_sigchld.py | 76 ++++++++++++++++++++++
pym/portage/util/_dyn_libs/soname_deps.py | 42 ++++++++++--
pym/portage/util/_eventloop/asyncio_event_loop.py | 35 ++++++++--
setup.py | 2 +-
11 files changed, 259 insertions(+), 30 deletions(-)
diff --cc pym/portage/package/ebuild/doebuild.py
index bdf1a2e9f,dc443df00..f8b784d6b
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -1806,16 -1729,28 +1813,30 @@@ _post_phase_cmds =
"install_symlink_html_docs",
"install_hooks"],
- "preinst" : [
- "preinst_aix",
- "preinst_sfperms",
- "preinst_selinux_labels",
- "preinst_suid_scan",
- "preinst_qa_check",
- ],
-
+ "preinst" : (
+ (
+ # Since SELinux does not allow LD_PRELOAD across domain transitions,
+ # disable the LD_PRELOAD sandbox for preinst_selinux_labels.
+ {
+ "ld_preload_sandbox": False,
+ "selinux_only": True,
+ },
+ [
+ "preinst_selinux_labels",
+ ],
+ ),
+ (
+ {},
+ [
++ "preinst_aix",
+ "preinst_sfperms",
+ "preinst_suid_scan",
+ "preinst_qa_check",
+ ],
+ ),
+ ),
"postinst" : [
+ "postinst_aix",
"postinst_qa_check"],
}
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2018-05-18 19:46 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2018-05-18 19:46 UTC (permalink / raw
To: gentoo-commits
commit: c5f668f60d49c36d2c5ff75d57266bb43c5e84da
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri May 18 19:25:41 2018 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri May 18 19:25:41 2018 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=c5f668f6
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.travis.yml | 4 +
NEWS | 17 +
RELEASE-NOTES | 195 ++++
bin/doins.py | 38 +-
bin/eapi.sh | 60 +-
bin/eapi7-ver-funcs.sh | 191 ++++
bin/ebuild | 10 +-
bin/ebuild-helpers/dobin | 16 +-
bin/ebuild-helpers/doconfd | 11 +-
bin/ebuild-helpers/dodir | 4 +-
bin/ebuild-helpers/dodoc | 2 +-
bin/ebuild-helpers/doenvd | 11 +-
bin/ebuild-helpers/doexe | 8 +-
bin/ebuild-helpers/dohard | 6 +-
bin/ebuild-helpers/doheader | 8 +-
bin/ebuild-helpers/dohtml | 7 +-
bin/ebuild-helpers/doinfo | 8 +-
bin/ebuild-helpers/doins | 20 +-
bin/ebuild-helpers/dolib | 17 +-
bin/ebuild-helpers/dolib.a | 4 +-
bin/ebuild-helpers/dolib.so | 4 +-
bin/ebuild-helpers/doman | 8 +-
bin/ebuild-helpers/domo | 21 +-
bin/ebuild-helpers/dosbin | 16 +-
bin/ebuild-helpers/dosed | 4 +-
bin/ebuild-helpers/dosym | 8 +-
bin/ebuild-helpers/ecompressdir | 27 +-
bin/ebuild-helpers/fowners | 4 +-
bin/ebuild-helpers/fperms | 4 +-
bin/ebuild-helpers/keepdir | 6 +-
bin/ebuild-helpers/newins | 2 +-
bin/ebuild-helpers/nonfatal | 14 +
bin/ebuild-helpers/prepall | 2 +-
bin/ebuild-helpers/prepalldocs | 4 +-
bin/ebuild-helpers/prepallinfo | 4 +-
bin/ebuild-helpers/prepallstrip | 4 +
bin/ebuild-helpers/prepinfo | 12 +-
bin/ebuild-helpers/prepman | 10 +-
bin/ebuild-helpers/prepstrip | 401 +-------
bin/ebuild-ipc.py | 55 +-
bin/ebuild.sh | 50 +-
bin/egencache | 5 +-
bin/emaint | 3 +
bin/emerge | 6 +
bin/emerge-webrsync | 11 +-
bin/emirrordist | 6 +-
bin/{ebuild-helpers/prepstrip => estrip} | 98 +-
bin/etc-update | 55 +-
bin/filter-bash-environment.py | 47 +-
bin/install-qa-check.d/10ignored-flags | 2 +-
bin/install-qa-check.d/60udev | 6 +-
bin/install-qa-check.d/80libraries | 24 +-
bin/install-qa-check.d/90cmake-warnings | 28 +
bin/install-qa-check.d/95empty-dirs | 42 +
bin/isolated-functions.sh | 40 +-
bin/misc-functions.sh | 91 +-
bin/phase-functions.sh | 60 +-
bin/phase-helpers.sh | 273 ++---
bin/portageq | 24 +-
bin/postinst-qa-check.d/50gnome2-utils | 3 +
bin/quickpkg | 14 +-
bin/save-ebuild-env.sh | 8 +-
bin/xattr-helper.py | 19 +-
cnf/repos.conf | 9 +
man/ebuild.5 | 28 +-
man/egencache.1 | 6 +-
man/emaint.1 | 22 +-
man/emerge.1 | 166 ++--
man/make.conf.5 | 4 +
man/portage.5 | 45 +-
misc/emerge-delta-webrsync | 11 +-
pym/_emerge/AbstractEbuildProcess.py | 97 +-
pym/_emerge/AbstractPollTask.py | 87 +-
pym/_emerge/AsynchronousLock.py | 76 +-
pym/_emerge/AsynchronousTask.py | 60 +-
pym/_emerge/Binpkg.py | 78 +-
pym/_emerge/BinpkgExtractorAsync.py | 16 +-
pym/_emerge/BinpkgFetcher.py | 104 +-
pym/_emerge/BlockerDB.py | 4 +-
pym/_emerge/CompositeTask.py | 46 +-
pym/_emerge/EbuildBuild.py | 99 +-
pym/_emerge/EbuildBuildDir.py | 172 ++--
pym/_emerge/EbuildExecuter.py | 6 +-
pym/_emerge/EbuildFetcher.py | 135 ++-
pym/_emerge/EbuildIpcDaemon.py | 28 +-
pym/_emerge/EbuildMerge.py | 26 +-
pym/_emerge/EbuildMetadataPhase.py | 55 +-
pym/_emerge/EbuildPhase.py | 35 +-
pym/_emerge/FakeVartree.py | 2 +-
pym/_emerge/FifoIpcDaemon.py | 32 +-
pym/_emerge/MetadataRegen.py | 14 +-
pym/_emerge/Package.py | 85 +-
pym/_emerge/PackagePhase.py | 93 ++
pym/_emerge/PackageUninstall.py | 37 +-
pym/_emerge/PipeReader.py | 42 +-
pym/_emerge/Scheduler.py | 109 +-
pym/_emerge/SpawnProcess.py | 23 +-
pym/_emerge/SubProcess.py | 123 +--
pym/_emerge/actions.py | 76 +-
pym/_emerge/create_depgraph_params.py | 23 +-
pym/_emerge/create_world_atom.py | 6 +-
pym/_emerge/depgraph.py | 202 +++-
pym/_emerge/main.py | 50 +-
pym/_emerge/resolver/DbapiProvidesIndex.py | 3 +-
pym/_emerge/resolver/output.py | 4 +-
pym/_emerge/search.py | 4 +
pym/_emerge/show_invalid_depstring_notice.py | 4 +-
pym/portage/__init__.py | 65 +-
pym/portage/_emirrordist/FetchIterator.py | 323 ++++--
pym/portage/_emirrordist/FetchTask.py | 2 +-
pym/portage/_emirrordist/MirrorDistTask.py | 27 +-
pym/portage/_legacy_globals.py | 3 +-
pym/portage/_sets/base.py | 17 +-
pym/portage/cache/metadata.py | 4 +-
pym/portage/const.py | 10 +-
pym/portage/dbapi/IndexedVardb.py | 2 +-
pym/portage/dbapi/_MergeProcess.py | 47 +-
pym/portage/dbapi/__init__.py | 48 +-
pym/portage/dbapi/bintree.py | 105 +-
pym/portage/dbapi/dep_expand.py | 2 +-
pym/portage/dbapi/porttree.py | 256 ++++-
pym/portage/dbapi/vartree.py | 76 +-
pym/portage/dbapi/virtual.py | 4 +-
pym/portage/dep/__init__.py | 27 +-
pym/portage/dep/_slot_operator.py | 7 +-
pym/portage/dep/dep_check.py | 75 +-
pym/portage/dep/soname/multilib_category.py | 4 +-
pym/portage/eapi.py | 53 +-
pym/portage/emaint/modules/binhost/binhost.py | 1 +
pym/portage/emaint/modules/move/move.py | 4 +-
pym/portage/exception.py | 3 +
pym/portage/module.py | 49 +-
.../package/ebuild/_config/LocationsManager.py | 25 +-
pym/portage/package/ebuild/_config/MaskManager.py | 10 +-
.../package/ebuild/_config/special_env_vars.py | 13 +-
pym/portage/package/ebuild/_ipc/QueryCommand.py | 4 +-
.../ebuild/_parallel_manifest/ManifestScheduler.py | 25 +-
.../ebuild/_parallel_manifest/ManifestTask.py | 24 +-
pym/portage/package/ebuild/config.py | 190 ++--
pym/portage/package/ebuild/doebuild.py | 98 +-
pym/portage/package/ebuild/prepare_build_dirs.py | 31 +-
pym/portage/package/ebuild/profile_iuse.py | 32 +
pym/portage/process.py | 39 +-
pym/portage/repository/config.py | 36 +-
pym/portage/sync/modules/git/__init__.py | 3 +-
pym/portage/sync/modules/git/git.py | 95 +-
pym/portage/sync/modules/rsync/__init__.py | 5 +-
pym/portage/sync/modules/rsync/rsync.py | 508 ++++++----
pym/portage/sync/syncbase.py | 87 +-
pym/portage/tests/__init__.py | 20 +-
pym/portage/tests/bin/test_doins.py | 7 +-
pym/portage/tests/bin/test_eapi7_ver_funcs.py | 240 +++++
pym/portage/tests/bin/test_filter_bash_env.py | 115 +++
pym/portage/tests/dbapi/test_portdb_cache.py | 3 +-
pym/portage/tests/dep/testCheckRequiredUse.py | 5 +-
pym/portage/tests/dep/test_overlap_dnf.py | 2 +-
pym/portage/tests/ebuild/test_ipc_daemon.py | 18 +-
pym/portage/tests/emerge/test_simple.py | 51 +-
pym/portage/tests/locks/test_asynchronous_lock.py | 22 +-
pym/portage/tests/resolver/ResolverPlayground.py | 12 +-
pym/portage/tests/resolver/test_autounmask.py | 71 +-
pym/portage/tests/resolver/test_changed_deps.py | 1 +
pym/portage/tests/resolver/test_complete_graph.py | 20 +-
pym/portage/tests/resolver/test_eapi.py | 9 +-
pym/portage/tests/resolver/test_multirepo.py | 4 +-
.../tests/resolver/test_or_upgrade_installed.py | 160 +++
pym/portage/tests/resolver/test_required_use.py | 22 +-
.../resolver/test_slot_change_without_revbump.py | 19 +
pym/portage/tests/resolver/test_slot_collisions.py | 20 +-
.../tests/resolver/test_slot_operator_rebuild.py | 45 +-
.../resolver/test_virtual_minimize_children.py | 144 ++-
pym/portage/tests/resolver/test_virtual_slot.py | 2 +-
.../tests/resolver/test_virtual_transition.py | 51 -
pym/portage/tests/runTests.py | 6 +-
.../util/futures/asyncio}/__init__.py | 0
.../{bin => util/futures/asyncio}/__test__.py | 0
.../util/futures/asyncio/test_child_watcher.py | 50 +
.../futures/asyncio/test_event_loop_in_fork.py | 65 ++
.../tests/util/futures/asyncio/test_pipe_closed.py | 151 +++
.../asyncio/test_policy_wrapper_recursion.py | 29 +
.../futures/asyncio/test_run_until_complete.py | 34 +
.../util/futures/asyncio/test_subprocess_exec.py | 236 +++++
.../tests/util/futures/test_iter_completed.py | 86 ++
pym/portage/tests/util/futures/test_retry.py | 234 +++++
pym/portage/util/SlotObject.py | 9 +-
pym/portage/util/_async/AsyncFunction.py | 7 +-
pym/portage/util/_async/AsyncScheduler.py | 49 +-
pym/portage/util/_async/AsyncTaskFuture.py | 31 +
pym/portage/util/_async/FileDigester.py | 7 +-
pym/portage/util/_async/ForkProcess.py | 10 +
pym/portage/util/_async/PipeLogger.py | 32 +-
pym/portage/util/_async/PipeReaderBlockingIO.py | 16 +-
pym/portage/util/_async/PopenProcess.py | 17 +-
pym/portage/util/_async/SchedulerInterface.py | 34 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 3 +-
pym/portage/util/_eventloop/EventLoop.py | 462 ++++++++-
pym/portage/util/_eventloop/GlibEventLoop.py | 23 -
pym/portage/util/_eventloop/asyncio_event_loop.py | 85 ++
pym/portage/util/_eventloop/global_event_loop.py | 11 +-
pym/portage/util/backoff.py | 53 +
pym/portage/util/elf/constants.py | 5 +-
pym/portage/util/futures/__init__.py | 8 +
pym/portage/util/futures/_asyncio/__init__.py | 185 ++++
pym/portage/util/futures/_asyncio/tasks.py | 105 ++
pym/portage/util/futures/events.py | 191 ++++
.../modules => util/futures/executor}/__init__.py | 0
pym/portage/util/futures/executor/fork.py | 136 +++
pym/portage/util/futures/futures.py | 15 +-
pym/portage/util/futures/iter_completed.py | 183 ++++
pym/portage/util/futures/retry.py | 182 ++++
pym/portage/util/futures/transports.py | 90 ++
pym/portage/util/futures/unix_events.py | 705 +++++++++++++
pym/portage/util/install_mask.py | 125 +++
pym/portage/util/monotonic.py | 34 +
pym/portage/util/movefile.py | 4 +-
pym/portage/versions.py | 6 +-
repoman/NEWS | 5 +
repoman/RELEASE-NOTES | 13 +
repoman/bin/repoman | 3 +
repoman/cnf/linechecks/linechecks.yaml | 35 +
repoman/cnf/qa_data/qa_data.yaml | 136 +++
repoman/cnf/repository/linechecks.yaml | 252 +++++
repoman/cnf/repository/qa_data.yaml | 160 +++
repoman/cnf/repository/repository.yaml | 76 ++
repoman/man/repoman.1 | 29 +-
repoman/pym/repoman/actions.py | 100 +-
repoman/pym/repoman/argparser.py | 8 +-
repoman/pym/repoman/config.py | 159 +++
repoman/pym/repoman/main.py | 30 +-
repoman/pym/repoman/metadata.py | 2 +-
.../__test__.py => modules/linechecks/__init__.py} | 0
.../modules/linechecks/assignment/__init__.py | 28 +
.../modules/linechecks/assignment/assignment.py | 38 +
repoman/pym/repoman/modules/linechecks/base.py | 101 ++
repoman/pym/repoman/modules/linechecks/config.py | 118 +++
.../pym/repoman/modules/linechecks/controller.py | 145 +++
.../repoman/modules/linechecks/depend/__init__.py | 22 +
.../repoman/modules/linechecks/depend/implicit.py | 39 +
.../modules/linechecks/deprecated/__init__.py | 47 +
.../modules/linechecks/deprecated/deprecated.py | 32 +
.../modules/linechecks/deprecated/inherit.py | 69 ++
.../pym/repoman/modules/linechecks/do/__init__.py | 22 +
repoman/pym/repoman/modules/linechecks/do/dosym.py | 16 +
.../repoman/modules/linechecks/eapi/__init__.py | 52 +
.../pym/repoman/modules/linechecks/eapi/checks.py | 83 ++
.../repoman/modules/linechecks/eapi/definition.py | 36 +
.../repoman/modules/linechecks/emake/__init__.py | 28 +
.../pym/repoman/modules/linechecks/emake/emake.py | 23 +
.../modules/linechecks/gentoo_header/__init__.py | 22 +
.../modules/linechecks/gentoo_header/header.py | 49 +
.../repoman/modules/linechecks/helpers/__init__.py | 22 +
.../repoman/modules/linechecks/helpers/offset.py | 22 +
.../repoman/modules/linechecks/nested/__init__.py | 22 +
.../repoman/modules/linechecks/nested/nested.py | 15 +
.../repoman/modules/linechecks/nested/nesteddie.py | 11 +
.../repoman/modules/linechecks/patches/__init__.py | 22 +
.../repoman/modules/linechecks/patches/patches.py | 16 +
.../repoman/modules/linechecks/phases/__init__.py | 35 +
.../pym/repoman/modules/linechecks/phases/phase.py | 71 ++
.../repoman/modules/linechecks/portage/__init__.py | 28 +
.../repoman/modules/linechecks/portage/internal.py | 37 +
.../repoman/modules/linechecks/quotes/__init__.py | 28 +
.../repoman/modules/linechecks/quotes/quoteda.py | 16 +
.../repoman/modules/linechecks/quotes/quotes.py | 87 ++
.../pym/repoman/modules/linechecks/uri/__init__.py | 22 +
repoman/pym/repoman/modules/linechecks/uri/uri.py | 30 +
.../pym/repoman/modules/linechecks/use/__init__.py | 22 +
.../repoman/modules/linechecks/use/builtwith.py | 10 +
.../repoman/modules/linechecks/useless/__init__.py | 28 +
.../pym/repoman/modules/linechecks/useless/cd.py | 24 +
.../repoman/modules/linechecks/useless/dodoc.py | 16 +
.../modules/linechecks/whitespace/__init__.py | 28 +
.../repoman/modules/linechecks/whitespace/blank.py | 25 +
.../modules/linechecks/whitespace/whitespace.py | 21 +
.../modules/linechecks/workaround/__init__.py | 28 +
.../modules/linechecks/workaround/workarounds.py | 18 +
.../pym/repoman/modules/scan/depend/__init__.py | 4 +-
.../repoman/modules/scan/depend/_depend_checks.py | 11 +-
repoman/pym/repoman/modules/scan/depend/profile.py | 5 +-
.../repoman/modules/scan/directories/__init__.py | 5 +-
repoman/pym/repoman/modules/scan/eapi/__init__.py | 4 +-
repoman/pym/repoman/modules/scan/eapi/eapi.py | 4 +-
.../pym/repoman/modules/scan/ebuild/__init__.py | 7 +-
repoman/pym/repoman/modules/scan/ebuild/checks.py | 1044 --------------------
repoman/pym/repoman/modules/scan/ebuild/ebuild.py | 5 +-
repoman/pym/repoman/modules/scan/ebuild/errors.py | 53 -
.../pym/repoman/modules/scan/ebuild/multicheck.py | 10 +-
.../pym/repoman/modules/scan/eclasses/__init__.py | 7 +-
repoman/pym/repoman/modules/scan/eclasses/ruby.py | 5 +-
repoman/pym/repoman/modules/scan/fetch/__init__.py | 4 +-
repoman/pym/repoman/modules/scan/fetch/fetches.py | 2 +-
.../pym/repoman/modules/scan/keywords/__init__.py | 4 +-
.../pym/repoman/modules/scan/manifest/__init__.py | 4 +-
.../pym/repoman/modules/scan/metadata/__init__.py | 13 +-
.../repoman/modules/scan/metadata/description.py | 6 +-
.../modules/scan/metadata/ebuild_metadata.py | 14 +-
.../repoman/modules/scan/metadata/pkgmetadata.py | 2 +-
.../pym/repoman/modules/scan/metadata/restrict.py | 4 +-
repoman/pym/repoman/modules/scan/module.py | 102 ++
.../pym/repoman/modules/scan/options/__init__.py | 4 +-
repoman/pym/repoman/qa_data.py | 454 ++-------
repoman/pym/repoman/qa_tracker.py | 10 +-
repoman/pym/repoman/repos.py | 21 +-
repoman/pym/repoman/scanner.py | 84 +-
repoman/pym/repoman/tests/__init__.py | 3 +
.../pym/repoman/tests/commit}/__init__.py | 2 +-
.../repoman/tests/{simple => commit}/__test__.py | 0
repoman/pym/repoman/tests/commit/test_commitmsg.py | 109 ++
repoman/pym/repoman/tests/runTests.py | 14 +-
repoman/pym/repoman/tests/simple/test_simple.py | 8 +-
repoman/pym/repoman/utilities.py | 22 +-
repoman/setup.py | 11 +-
setup.py | 5 +-
src/portage_util_file_copy_reflink_linux.c | 4 +-
314 files changed, 12015 insertions(+), 4462 deletions(-)
diff --cc bin/eapi.sh
index ac22e6fc9,455bc9b0d..3d1445ede
--- a/bin/eapi.sh
+++ b/bin/eapi.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2012 Gentoo Foundation
+ # Copyright 2012-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# PHASES
diff --cc bin/ebuild
index d5939db89,710257549..75860e1a8
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2015 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/ebuild-helpers/dobin
index dc999ec04,975067fb1..0f0518aba
--- a/bin/ebuild-helpers/dobin
+++ b/bin/ebuild-helpers/dobin
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doconfd
index 441167d66,15ad980f3..e32c9d5c0
--- a/bin/ebuild-helpers/doconfd
+++ b/bin/ebuild-helpers/doconfd
@@@ -1,9 -1,10 +1,10 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
+
if [[ $# -lt 1 ]] ; then
- source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
__helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/dodir
index b1502c8ae,9b376c73f..4d309e4b1
--- a/bin/ebuild-helpers/dodir
+++ b/bin/ebuild-helpers/dodir
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doenvd
index c536ffd7a,f14b95104..4e8068659
--- a/bin/ebuild-helpers/doenvd
+++ b/bin/ebuild-helpers/doenvd
@@@ -1,9 -1,10 +1,10 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
+
if [[ $# -lt 1 ]] ; then
- source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
__helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/doexe
index 1e8665722,152c13bf6..5fa8f058d
--- a/bin/ebuild-helpers/doexe
+++ b/bin/ebuild-helpers/doexe
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dohard
index 8670200ad,66e2604b0..1dd2eb582
--- a/bin/ebuild-helpers/dohard
+++ b/bin/ebuild-helpers/dohard
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doheader
index 101b75742,aedc2322a..a87536c33
--- a/bin/ebuild-helpers/doheader
+++ b/bin/ebuild-helpers/doheader
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dohtml
index fe2e97d05,49d6a6dfb..a60dbdab8
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2009-2013 Gentoo Foundation
+ # Copyright 2009-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doinfo
index 1a2051b54,30a38e055..e261e9e74
--- a/bin/ebuild-helpers/doinfo
+++ b/bin/ebuild-helpers/doinfo
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/doins
index 04bfdd0d4,fb5fc7c7c..cf843bce5
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dolib
index e817896f2,bd8eebca7..049088de7
--- a/bin/ebuild-helpers/dolib
+++ b/bin/ebuild-helpers/dolib
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dolib.a
index f7670d0f5,5ea126b5d..f45aed3e6
--- a/bin/ebuild-helpers/dolib.a
+++ b/bin/ebuild-helpers/dolib.a
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- LIBOPTIONS='-m0644' exec dolib "$@"
+ LIBOPTIONS='-m0644' PORTAGE_INTERNAL_DOLIB=1 exec dolib "$@"
diff --cc bin/ebuild-helpers/dolib.so
index 0b1e5fa62,a3b579e5e..2272679c8
--- a/bin/ebuild-helpers/dolib.so
+++ b/bin/ebuild-helpers/dolib.so
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- LIBOPTIONS='-m0755' exec dolib "$@"
+ LIBOPTIONS='-m0755' PORTAGE_INTERNAL_DOLIB=1 exec dolib "$@"
diff --cc bin/ebuild-helpers/doman
index 500caff43,9cfc89df0..bfd7356a0
--- a/bin/ebuild-helpers/doman
+++ b/bin/ebuild-helpers/doman
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/domo
index 10ae2fc41,2e95eb751..fb1475470
--- a/bin/ebuild-helpers/domo
+++ b/bin/ebuild-helpers/domo
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dosbin
index 9cd4d975e,ac0ab37ca..e92286088
--- a/bin/ebuild-helpers/dosbin
+++ b/bin/ebuild-helpers/dosbin
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dosed
index dfc30f06b,37c8a29d3..44752f54e
--- a/bin/ebuild-helpers/dosed
+++ b/bin/ebuild-helpers/dosed
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/dosym
index a2157e96e,d5a651bf5..da15fe397
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2017 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/ecompressdir
index 18a2fae7f,dacb857be..9e8fd40f3
--- a/bin/ebuild-helpers/ecompressdir
+++ b/bin/ebuild-helpers/ecompressdir
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/helper-functions.sh || exit 1
diff --cc bin/ebuild-helpers/fowners
index f98b65aad,68004210b..981a6f895
--- a/bin/ebuild-helpers/fowners
+++ b/bin/ebuild-helpers/fowners
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/fperms
index 1f031d06f,c63a6abc3..91cd8371c
--- a/bin/ebuild-helpers/fperms
+++ b/bin/ebuild-helpers/fperms
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/keepdir
index b76c74bab,a3c0c151c..9b8986a5e
--- a/bin/ebuild-helpers/keepdir
+++ b/bin/ebuild-helpers/keepdir
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/prepalldocs
index 2252b5250,6cdceb318..d38f8b943
--- a/bin/ebuild-helpers/prepalldocs
+++ b/bin/ebuild-helpers/prepalldocs
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/prepallinfo
index 18cee4b9c,34d6a74b7..7d1ead071
--- a/bin/ebuild-helpers/prepallinfo
+++ b/bin/ebuild-helpers/prepallinfo
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/prepinfo
index 80bfb74be,eb1b6a7e3..bac66f2f8
--- a/bin/ebuild-helpers/prepinfo
+++ b/bin/ebuild-helpers/prepinfo
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/ebuild-helpers/prepman
index 7302956d9,5e9fe45b6..998d69065
--- a/bin/ebuild-helpers/prepman
+++ b/bin/ebuild-helpers/prepman
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Do not compress man pages which are smaller than this (in bytes). #169260
diff --cc bin/ebuild-helpers/prepstrip
index 62daf817e,ecbea47ec..4a899ee1a
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -1,402 -1,11 +1,11 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- source "${PORTAGE_BIN_PATH}"/helper-functions.sh || exit 1
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
- # avoid multiple calls to `has`. this creates things like:
- # FEATURES_foo=false
- # if "foo" is not in $FEATURES
- tf() { "$@" && echo true || echo false ; }
- exp_tf() {
- local flag var=$1
- shift
- for flag in "$@" ; do
- eval ${var}_${flag}=$(tf has ${flag} ${!var})
- done
- }
- exp_tf FEATURES compressdebug installsources nostrip splitdebug xattr
- exp_tf RESTRICT binchecks installsources splitdebug strip
-
- if ! ___eapi_has_prefix_variables; then
- EPREFIX= ED=${D}
- fi
-
- banner=false
- SKIP_STRIP=false
- if ${RESTRICT_strip} || ${FEATURES_nostrip} ; then
- SKIP_STRIP=true
- banner=true
- ${FEATURES_installsources} || exit 0
- fi
-
- PRESERVE_XATTR=false
- if [[ ${KERNEL} == linux ]] && ${FEATURES_xattr} ; then
- PRESERVE_XATTR=true
- if type -P getfattr >/dev/null && type -P setfattr >/dev/null ; then
- dump_xattrs() {
- getfattr -d --absolute-names "$1"
- }
- restore_xattrs() {
- setfattr --restore=-
- }
- else
- dump_xattrs() {
- PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" \
- "${PORTAGE_BIN_PATH}/xattr-helper.py" --dump < <(echo -n "$1")
- }
- restore_xattrs() {
- PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" \
- "${PORTAGE_BIN_PATH}/xattr-helper.py" --restore
- }
- fi
- fi
-
- # look up the tools we might be using
- for t in STRIP:strip OBJCOPY:objcopy READELF:readelf ; do
- v=${t%:*} # STRIP
- t=${t#*:} # strip
- eval ${v}=\"${!v:-${CHOST}-${t}}\"
- type -P -- ${!v} >/dev/null || eval ${v}=${t}
- done
-
- # Figure out what tool set we're using to strip stuff
- unset SAFE_STRIP_FLAGS DEF_STRIP_FLAGS SPLIT_STRIP_FLAGS
- case $(${STRIP} --version 2>/dev/null) in
- *elfutils*) # dev-libs/elfutils
- # elfutils default behavior is always safe, so don't need to specify
- # any flags at all
- SAFE_STRIP_FLAGS=""
- DEF_STRIP_FLAGS="--remove-comment"
- SPLIT_STRIP_FLAGS="-f"
- ;;
- *GNU*) # sys-devel/binutils
- # We'll leave out -R .note for now until we can check out the relevance
- # of the section when it has the ALLOC flag set on it ...
- SAFE_STRIP_FLAGS="--strip-unneeded"
- DEF_STRIP_FLAGS="-R .comment -R .GCC.command.line -R .note.gnu.gold-version"
- SPLIT_STRIP_FLAGS=
- ;;
- esac
- : ${PORTAGE_STRIP_FLAGS=${SAFE_STRIP_FLAGS} ${DEF_STRIP_FLAGS}}
-
- prepstrip_sources_dir=${EPREFIX}/usr/src/debug/${CATEGORY}/${PF}
-
- debugedit=$(type -P debugedit)
- if [[ -z ${debugedit} ]]; then
- debugedit_paths=(
- "${EPREFIX}/usr/libexec/rpm/debugedit"
- )
- for x in "${debugedit_paths[@]}"; do
- if [[ -x ${x} ]]; then
- debugedit=${x}
- break
- fi
- done
- fi
- [[ ${debugedit} ]] && debugedit_found=true || debugedit_found=false
- debugedit_warned=false
-
- __multijob_init
-
- # Setup $T filesystem layout that we care about.
- tmpdir="${T}/prepstrip"
- rm -rf "${tmpdir}"
- mkdir -p "${tmpdir}"/{inodes,splitdebug,sources}
-
- # Usage: save_elf_sources <elf>
- save_elf_sources() {
- ${FEATURES_installsources} || return 0
- ${RESTRICT_installsources} && return 0
- if ! ${debugedit_found} ; then
- if ! ${debugedit_warned} ; then
- debugedit_warned=true
- ewarn "FEATURES=installsources is enabled but the debugedit binary could not be"
- ewarn "found. This feature will not work unless debugedit or rpm is installed!"
- fi
- return 0
- fi
-
- local x=$1
-
- # since we're editing the ELF here, we should recompute the build-id
- # (the -i flag below). save that output so we don't need to recompute
- # it later on in the save_elf_debug step.
- buildid=$("${debugedit}" -i \
- -b "${WORKDIR}" \
- -d "${prepstrip_sources_dir}" \
- -l "${tmpdir}/sources/${x##*/}.${BASHPID:-$(__bashpid)}" \
- "${x}")
- }
-
- # Usage: save_elf_debug <elf> [splitdebug file]
- save_elf_debug() {
- ${FEATURES_splitdebug} || return 0
- ${RESTRICT_splitdebug} && return 0
-
- # NOTE: Debug files must be installed in
- # ${EPREFIX}/usr/lib/debug/${EPREFIX} (note that ${EPREFIX} occurs
- # twice in this path) in order for gdb's debug-file-directory
- # lookup to work correctly.
- local x=$1
- local inode_debug=$2
- local splitdebug=$3
- local y=${ED}usr/lib/debug/${x:${#D}}.debug
-
- # don't save debug info twice
- [[ ${x} == *".debug" ]] && return 0
-
- mkdir -p "${y%/*}"
-
- if [ -f "${inode_debug}" ] ; then
- ln "${inode_debug}" "${y}" || die "ln failed unexpectedly"
- else
- if [[ -n ${splitdebug} ]] ; then
- mv "${splitdebug}" "${y}"
- else
- local objcopy_flags="--only-keep-debug"
- ${FEATURES_compressdebug} && objcopy_flags+=" --compress-debug-sections"
- ${OBJCOPY} ${objcopy_flags} "${x}" "${y}"
- ${OBJCOPY} --add-gnu-debuglink="${y}" "${x}"
- fi
- # Only do the following if the debug file was
- # successfully created (see bug #446774).
- if [ $? -eq 0 ] ; then
- local args="a-x,o-w"
- [[ -g ${x} || -u ${x} ]] && args+=",go-r"
- chmod ${args} "${y}"
- ln "${y}" "${inode_debug}" || die "ln failed unexpectedly"
- fi
- fi
-
- # if we don't already have build-id from debugedit, look it up
- if [[ -z ${buildid} ]] ; then
- # convert the readelf output to something useful
- buildid=$(${READELF} -n "${x}" 2>/dev/null | awk '/Build ID:/{ print $NF; exit }')
- fi
- if [[ -n ${buildid} ]] ; then
- local buildid_dir="${ED}usr/lib/debug/.build-id/${buildid:0:2}"
- local buildid_file="${buildid_dir}/${buildid:2}"
- mkdir -p "${buildid_dir}"
- [ -L "${buildid_file}".debug ] || ln -s "../../${x:${#D}}.debug" "${buildid_file}.debug"
- [ -L "${buildid_file}" ] || ln -s "/${x:${#D}}" "${buildid_file}"
- fi
- }
-
- # Usage: process_elf <elf>
- process_elf() {
- local x=$1 inode_link=$2 strip_flags=${*:3}
- local already_stripped lockfile xt_data
-
- __vecho " ${x:${#ED}}"
-
- # If two processes try to debugedit or strip the same hardlink at the
- # same time, it may corrupt files or cause loss of splitdebug info.
- # So, use a lockfile to prevent interference (easily observed with
- # dev-vcs/git which creates ~111 hardlinks to one file in
- # /usr/libexec/git-core).
- lockfile=${inode_link}_lockfile
- if ! ln "${inode_link}" "${lockfile}" 2>/dev/null ; then
- while [[ -f ${lockfile} ]] ; do
- sleep 1
- done
- unset lockfile
- fi
-
- [ -f "${inode_link}_stripped" ] && already_stripped=true || already_stripped=false
-
- if ! ${already_stripped} ; then
- if ${PRESERVE_XATTR} ; then
- xt_data=$(dump_xattrs "${x}")
- fi
- save_elf_sources "${x}"
- fi
-
- if ${strip_this} ; then
-
- # see if we can split & strip at the same time
- if [[ -n ${SPLIT_STRIP_FLAGS} ]] ; then
- local shortname="${x##*/}.debug"
- local splitdebug="${tmpdir}/splitdebug/${shortname}.${BASHPID:-$(__bashpid)}"
- ${already_stripped} || \
- ${STRIP} ${strip_flags} \
- -f "${splitdebug}" \
- -F "${shortname}" \
- "${x}"
- save_elf_debug "${x}" "${inode_link}_debug" "${splitdebug}"
- else
- save_elf_debug "${x}" "${inode_link}_debug"
- ${already_stripped} || \
- ${STRIP} ${strip_flags} "${x}"
- fi
- fi
-
- if ${already_stripped} ; then
- rm -f "${x}" || die "rm failed unexpectedly"
- ln "${inode_link}_stripped" "${x}" || die "ln failed unexpectedly"
- else
- ln "${x}" "${inode_link}_stripped" || die "ln failed unexpectedly"
- if [[ ${xt_data} ]] ; then
- restore_xattrs <<< "${xt_data}"
- fi
- fi
-
- [[ -n ${lockfile} ]] && rm -f "${lockfile}"
- }
-
- # The absence of the .symtab section tells us that a binary has already been stripped.
- # We want to log already stripped binaries, as this may be a QA violation.
- # They prevent us from getting the splitdebug data.
- if ! ${RESTRICT_binchecks} && ! ${RESTRICT_strip} ; then
- # We need to do the non-stripped scan serially first before we turn around
- # and start stripping the files ourselves. The log parsing can be done in
- # parallel though.
- log=${tmpdir}/scanelf-already-stripped.log
- scanelf -yqRBF '#k%F' -k '!.symtab' "$@" | sed -e "s#^${ED}##" > "${log}"
- (
- __multijob_child_init
- qa_var="QA_PRESTRIPPED_${ARCH/-/_}"
- [[ -n ${!qa_var} ]] && QA_PRESTRIPPED="${!qa_var}"
- if [[ -n ${QA_PRESTRIPPED} && -s ${log} && \
- ${QA_STRICT_PRESTRIPPED-unset} = unset ]] ; then
- shopts=$-
- set -o noglob
- for x in ${QA_PRESTRIPPED} ; do
- sed -e "s#^${x#/}\$##" -i "${log}"
- done
- set +o noglob
- set -${shopts}
- fi
- sed -e "/^\$/d" -e "s#^#/#" -i "${log}"
- if [[ -s ${log} ]] ; then
- __vecho -e "\n"
- eqawarn "QA Notice: Pre-stripped files found:"
- eqawarn "$(<"${log}")"
- else
- rm -f "${log}"
- fi
- ) &
- __multijob_post_fork
- fi
-
- # Since strip creates a new inode, we need to know the initial set of
- # inodes in advance, so that we can avoid interference due to trying
- # to strip the same (hardlinked) file multiple times in parallel.
- # See bug #421099.
- if [[ ${USERLAND} == BSD ]] ; then
- get_inode_number() { stat -f '%i' "$1"; }
- else
- get_inode_number() { stat -c '%i' "$1"; }
- fi
- cd "${tmpdir}/inodes" || die "cd failed unexpectedly"
- while read -r x ; do
- inode_link=$(get_inode_number "${x}") || die "stat failed unexpectedly"
- echo "${x}" >> "${inode_link}" || die "echo failed unexpectedly"
- done < <(
- # Use sort -u to eliminate duplicates for bug #445336.
- (
- scanelf -yqRBF '#k%F' -k '.symtab' "$@"
- find "$@" -type f ! -type l -name '*.a'
- ) | LC_ALL=C sort -u
- )
-
- # Now we look for unstripped binaries.
- for inode_link in $(shopt -s nullglob; echo *) ; do
- while read -r x
- do
-
- if ! ${banner} ; then
- __vecho "strip: ${STRIP} ${PORTAGE_STRIP_FLAGS}"
- banner=true
- fi
-
- (
- __multijob_child_init
- f=$(file "${x}") || exit 0
- [[ -z ${f} ]] && exit 0
-
- if ! ${SKIP_STRIP} ; then
- # The noglob funk is to support STRIP_MASK="/*/booga" and to keep
- # the for loop from expanding the globs.
- # The eval echo is to support STRIP_MASK="/*/{booga,bar}" sex.
- set -o noglob
- strip_this=true
- for m in $(eval echo ${STRIP_MASK}) ; do
- [[ /${x#${ED}} == ${m} ]] && strip_this=false && break
- done
- set +o noglob
- else
- strip_this=false
- fi
-
- # In Prefix we are usually an unprivileged user, so we can't strip
- # unwritable objects. Make them temporarily writable for the
- # stripping.
- was_not_writable=false
- if [[ ! -w ${x} ]] ; then
- was_not_writable=true
- chmod u+w "${x}"
- fi
-
- # only split debug info for final linked objects
- # or kernel modules, as debuginfo for intermediary
- # files (think crt*.o from gcc/glibc) is useless and
- # actually causes problems. install sources for all
- # elf types though, because that stuff is good.
-
- buildid=
- if [[ ${f} == *"current ar archive"* ]] ; then
- __vecho " ${x:${#ED}}"
- if ${strip_this} ; then
- # If we have split debug enabled, then do not strip this.
- # There is no concept of splitdebug for objects not yet
- # linked in (only for finally linked ELFs), so we have to
- # retain the debug info in the archive itself.
- if ! ${FEATURES_splitdebug} || ${RESTRICT_splitdebug} ; then
- ${STRIP} -g "${x}"
- fi
- fi
- elif [[ ${f} == *"SB executable"* || ${f} == *"SB shared object"* ]] ; then
- process_elf "${x}" "${inode_link}" ${PORTAGE_STRIP_FLAGS}
- elif [[ ${f} == *"SB relocatable"* ]] ; then
- process_elf "${x}" "${inode_link}" ${SAFE_STRIP_FLAGS}
- fi
-
- if ${was_not_writable} ; then
- chmod u-w "${x}"
- fi
- ) &
- __multijob_post_fork
-
- done < "${inode_link}"
- done
-
- # With a bit more work, we could run the rsync processes below in
- # parallel, but not sure that'd be an overall improvement.
- __multijob_finish
-
- cd "${tmpdir}"/sources/ && cat * > "${tmpdir}/debug.sources" 2>/dev/null
- if [[ -s ${tmpdir}/debug.sources ]] && \
- ${FEATURES_installsources} && \
- ! ${RESTRICT_installsources} && \
- ${debugedit_found}
- then
- __vecho "installsources: rsyncing source files"
- [[ -d ${D}${prepstrip_sources_dir} ]] || mkdir -p "${D}${prepstrip_sources_dir}"
- grep -zv '/<[^/>]*>$' "${tmpdir}"/debug.sources | \
- (cd "${WORKDIR}"; LANG=C sort -z -u | \
- rsync -tL0 --chmod=ugo-st,a+r,go-w,Da+x,Fa-x --files-from=- "${WORKDIR}/" "${D}${prepstrip_sources_dir}/" )
-
- # Preserve directory structure.
- # Needed after running save_elf_sources.
- # https://bugzilla.redhat.com/show_bug.cgi?id=444310
- while read -r -d $'\0' emptydir
- do
- >> "${emptydir}"/.keepdir
- done < <(find "${D}${prepstrip_sources_dir}/" -type d -empty -print0)
+ if ___eapi_has_dostrip; then
+ die "${0##*/}: ${0##*/} has been banned for EAPI '$EAPI'; use 'dostrip' instead"
fi
- cd "${T}"
- rm -rf "${tmpdir}"
+ __PORTAGE_HELPER=prepstrip exec "${PORTAGE_BIN_PATH}"/estrip "${@}"
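A note on the removed code above: the old prepstrip serialised work on
hardlinked objects with a plain ln(1)-based lock, per the process_elf
comments.  A minimal stand-alone sketch of that pattern, with the same
variable names used only for illustration:

    # take a per-inode lock by hardlinking a marker file; ln(1) fails when
    # the link target already exists, so exactly one job wins
    lockfile=${inode_link}_lockfile
    if ! ln "${inode_link}" "${lockfile}" 2>/dev/null ; then
        # another job holds the lock; in prepstrip the waiter then reuses
        # the already-stripped result instead of stripping again
        while [[ -f ${lockfile} ]] ; do sleep 1 ; done
        unset lockfile
    fi
    # ... strip/debugedit the object here ...
    [[ -n ${lockfile} ]] && rm -f "${lockfile}"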
diff --cc bin/ebuild-ipc.py
index 66e70e67c,1f323bdc5..8de1c44f5
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2010-2014 Gentoo Foundation
+ # Copyright 2010-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# This is a helper which ebuild processes can use
diff --cc bin/ebuild.sh
index a60a24da8,98ed570c2..f76a48d8e
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2015 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Prevent aliases from causing portage to act inappropriately.
diff --cc bin/install-qa-check.d/80libraries
index 6c7e3757d,bbabc0eb9..9eb811f43
--- a/bin/install-qa-check.d/80libraries
+++ b/bin/install-qa-check.d/80libraries
@@@ -122,12 -122,8 +122,12 @@@ lib_check()
# https://bugs.gentoo.org/4411
local abort="no"
local a s
- for a in "${ED}"usr/lib*/*.a ; do
+ for a in "${ED%/}"/usr/lib*/*.a ; do
- s=${a%.a}.so
+ # PREFIX LOCAL: support MachO objects
+ [[ ${CHOST} == *-darwin* ]] \
+ && s=${a%.a}.dylib \
+ || s=${a%.a}.so
+ # END PREFIX LOCAL
if [[ ! -e ${s} ]] ; then
s=${s%usr/*}${s##*/usr/}
if [[ -e ${s} ]] ; then
@@@ -140,12 -136,7 +140,12 @@@
[[ ${abort} == "yes" ]] && die "add those ldscripts"
# Make sure people don't store libtool files or static libs in /lib
- f=$(ls "${ED%/}"/lib*/*.{a,la} 2>/dev/null)
+ # PREFIX LOCAL: on AIX, "dynamic libs" have extension .a, so don't
+ # get false positives
+ [[ ${CHOST} == *-aix* ]] \
- && f=$(ls "${ED}"lib*/*.la 2>/dev/null || true) \
- || f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
++ && f=$(ls "${ED%/}"lib*/*.la 2>/dev/null || true) \
++ || f=$(ls "${ED%/}"lib*/*.{a,la} 2>/dev/null)
+ # END PREFIX LOCAL
if [[ -n ${f} ]] ; then
__vecho -ne '\n'
eqawarn "QA Notice: Excessive files found in the / partition"
diff --cc bin/isolated-functions.sh
index c6945dd51,28ca94532..6aaae944f
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2016 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}/eapi.sh" || exit 1
diff --cc bin/misc-functions.sh
index 702f1ff4a,de8af955d..b36ae8217
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# Miscellaneous shell functions that make use of the ebuild env but don't need
@@@ -230,39 -224,12 +230,45 @@@ install_qa_check()
ecompressdir --dequeue
ecompress --dequeue
+ if ___eapi_has_dostrip; then
+ "${PORTAGE_BIN_PATH}"/estrip --queue "${PORTAGE_DOSTRIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --ignore "${PORTAGE_DOSTRIP_SKIP[@]}"
+ "${PORTAGE_BIN_PATH}"/estrip --dequeue
+ fi
+
+ # PREFIX LOCAL:
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED we skip the QA searches there;
+ # the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here so that the diff with trunk remains merely
+ # offset and not out of order
+ install_qa_check_misc
+ # END PREFIX LOCAL
+}
+
+install_qa_check_elf() {
# Create NEEDED.ELF.2 regardless of RESTRICT=binchecks, since this info is
# too useful not to have (it's required for things like preserve-libs), and
# it's tempting for ebuild authors to set RESTRICT=binchecks for packages
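The estrip queue added in the hunk above is fed by the EAPI 7 dostrip
helper.  As a hedged illustration of how an ebuild ends up populating those
lists (this snippet is not part of the commit):

    # in an EAPI 7 ebuild
    src_install() {
        default
        # keep debug symbols in this one object; everything else is
        # stripped as usual
        dostrip -x /usr/lib/libfoo.so.1
    }

Paths excluded this way are expected to land in PORTAGE_DOSTRIP_SKIP, which
the hunk passes to "estrip --ignore" before the final "estrip --dequeue" run.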
@@@ -290,396 -257,11 +296,396 @@@
eqawarn "$(while read -r x; do x=${x#*;} ; x=${x%%;*} ; echo "${x#${EPREFIX}}" ; done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.ELF.2)"
fi
fi
+}
+install_qa_check_misc() {
# Portage regenerates this on the installed system.
- rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
+ rm -f "${ED%/}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ install_name_is_relative() {
+ case $1 in
+ "@executable_path/"*) return 0 ;;
+ "@loader_path"/*) return 0 ;;
+ "@rpath/"*) return 0 ;;
+ *) return 1 ;;
+ esac
+ }
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ ignore=
+ qa_var="QA_IGNORE_INSTALL_NAME_FILES_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] &&
+ QA_IGNORE_INSTALL_NAME_FILES=(\"\${${qa_var}[@]}\")"
+
+ if [[ ${#QA_IGNORE_INSTALL_NAME_FILES[@]} -gt 1 ]] ; then
+ for x in "${QA_IGNORE_INSTALL_NAME_FILES[@]}" ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_IGNORE_INSTALL_NAME_FILES} ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if install_name_is_relative ${install_name} ; then
+ # try to locate the library in the installed image
+ local inpath=${install_name#@*/}
+ local libl
+ for libl in $(find "${ED}" -name "${inpath##*/}") ; do
+ if [[ ${libl} == */${inpath} ]] ; then
+ install_name=/${libl#${D}}
+ break
+ fi
+ done
+ fi
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif ! install_name_is_relative ${lib} && [[ ! -e ${lib} && ! -e ${D}${lib} ]] ; then
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+ done
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed, and there is plenty
+ # of room for improvement by introducing one cache or another!
+ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ __vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
+ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on AIX,
+ # as there is nothing like a "soname" at the pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX does indicate this by needing either '.' or '..'
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ __vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ __vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
preinst_qa_check() {
postinst_qa_check preinst
}
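The Darwin install_name check above treats @executable_path/, @loader_path/
and @rpath/ prefixes as relative references that must resolve inside the
installed image.  A small stand-alone restatement of that classification
(hypothetical helper name, same logic as install_name_is_relative above):

    is_relative_install_name() {
        case $1 in
            "@executable_path/"*|"@loader_path/"*|"@rpath/"*) return 0 ;;
            *) return 1 ;;
        esac
    }
    # is_relative_install_name "@rpath/libfoo.1.dylib"      -> 0 (looked up under ${ED})
    # is_relative_install_name "/usr/lib/libSystem.B.dylib" -> 1 (must exist on disk or in ${D})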
@@@ -741,278 -323,14 +747,267 @@@ postinst_qa_check()
done < <(printf "%s\0" "${qa_checks[@]}" | LC_ALL=C sort -u -z)
}
+install_mask() {
+ local root="$1"
+ shift
+ local install_mask="$*"
+
+ # We think of $install_mask as a space-separated list of
+ # globs. We don't want globbing in the "for" loop; that is, we
+ # want to keep the asterisks in the individual entries.
+ local shopts=$-
+ set -o noglob
+ local no_inst
+ for no_inst in ${install_mask}; do
+ # Here, $no_inst is a single "entry" potentially
+ # containing a glob. From now on, we *do* want to
+ # expand it.
+ set +o noglob
+
+ # The standard case where $no_inst is something that
+ # the shell could expand on its own.
+ if [[ -e "${root}"/${no_inst} || -L "${root}"/${no_inst} ||
+ "${root}"/${no_inst} != $(echo "${root}"/${no_inst}) ]] ; then
+ __quiet_mode || einfo "Removing ${no_inst}"
+ rm -Rf "${root}"/${no_inst} >&/dev/null
+ fi
+
+ # We also want to allow the user to specify a "bare
+ # glob." For example, $no_inst="*.a" should prevent
+ # ALL files ending in ".a" from being installed,
+ # regardless of their location/depth. We achieve this
+ # by passing the pattern to `find`.
+ find "${root}" \( -path "${no_inst}" -or -name "${no_inst}" \) \
+ -print0 2> /dev/null \
+ | LC_ALL=C sort -z \
+ | while read -r -d ''; do
+ __quiet_mode || einfo "Removing /${REPLY#${root}}"
+ rm -Rf "${REPLY}" >&/dev/null
+ done
+
+ done
+ # set everything back the way we found it
+ set +o noglob
+ set -${shopts}
+}
+
+preinst_aix() {
+ if [[ ${CHOST} != *-aix* ]] || has binchecks ${RESTRICT}; then
+ return 0
+ fi
+ local ar strip
+ if type ${CHOST}-ar >/dev/null 2>&1 && type ${CHOST}-strip >/dev/null 2>&1; then
+ ar=${CHOST}-ar
+ strip=${CHOST}-strip
+ elif [[ ${CBUILD} == "${CHOST}" ]] && type ar >/dev/null 2>&1 && type strip >/dev/null 2>&1; then
+ ar=ar
+ strip=strip
+ elif [[ -x /usr/ccs/bin/ar && -x /usr/ccs/bin/strip ]]; then
+ ar=/usr/ccs/bin/ar
+ strip=/usr/ccs/bin/strip
+ else
+ die "cannot find where to use 'ar' and 'strip' from"
+ fi
+ local archives_members= archives=() helperfiles=()
+ local archive_member soname runpath needed archive contentmember
+ while read archive_member; do
+ archive_member=${archive_member#*;${EPREFIX}/} # drop "^type;EPREFIX/"
+ soname=${archive_member#*;}
+ runpath=${soname#*;}
+ needed=${runpath#*;}
+ soname=${soname%%;*}
+ runpath=${runpath%%;*}
+ archive_member=${archive_member%%;*} # drop ";soname;runpath;needed$"
+ archive=${archive_member%[*}
+ if [[ ${archive_member} != *'['*']' ]]; then
+ if [[ "${soname};${runpath};${needed}" == "${archive##*/};;" && -e ${EROOT}${archive} ]]; then
+ # most likely an archive stub that already exists; we
+ # may have to preserve members that are shared objects.
+ archives[${#archives[@]}]=${archive}
+ fi
+ continue
+ fi
+ archives_members="${archives_members}:(${archive_member}):"
+ contentmember="${archive%/*}/.${archive##*/}${archive_member#${archive}}"
+ # portage does os.lstat() on merged files every now
+ # and then, so keep stamp-files for archive members
+ # around to get the preserve-libs feature working.
+ helperfiles[${#helperfiles[@]}]=${ED}${contentmember}
+ done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+ if [[ ${#helperfiles[@]} > 0 ]]; then
+ rm -f "${helperfiles[@]}" || die "cannot prune ${helperfiles[@]}"
+ local f prev=
+ for f in "${helperfiles[@]}"
+ do
+ if [[ -z ${prev} ]]; then
+ { echo "Please leave this file alone, it is an important helper"
+ echo "for portage to implement the 'preserve-libs' feature on AIX."
+ } > "${f}" || die "cannot create ${f}"
+ chmod 0400 "${f}" || die "cannot chmod ${f}"
+ prev=${f}
+ else
+ ln "${prev}" "${f}" || die "cannot create hardlink ${f}"
+ fi
+ done
+ fi
+
+ local preservemembers libmetadir prunedirs=()
+ local FILE MEMBER FLAGS
+ for archive in "${archives[@]}"; do
+ preservemembers=
+ while read line; do
+ [[ -n ${line} ]] || continue
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == ${EROOT}${archive} ]] ||
+ die "invalid result of aixdll-query for ${EROOT}${archive}"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${archives_members} == *":(${archive}[${MEMBER}]):"* ]] && continue
+ preservemembers="${preservemembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+ [[ -n ${preservemembers} ]] || continue
+ einfo "preserving (on spec) ${archive}[${preservemembers# }]"
+ libmetadir=${ED}${archive%/*}/.${archive##*/}
+ mkdir "${libmetadir}" || die "cannot create ${libmetadir}"
+ pushd "${libmetadir}" >/dev/null || die "cannot cd to ${libmetadir}"
+ ${ar} -X32_64 -x "${EROOT}${archive}" ${preservemembers} || die "cannot unpack ${EROOT}${archive}"
+ chmod u+w ${preservemembers} || die "cannot chmod${preservemembers}"
+ ${strip} -X32_64 -e ${preservemembers} || die "cannot strip${preservemembers}"
+ ${ar} -X32_64 -q "${ED}${archive}" ${preservemembers} || die "cannot update ${archive}"
+ eend $?
+ popd >/dev/null || die "cannot leave ${libmetadir}"
+ prunedirs[${#prunedirs[@]}]=${libmetadir}
+ done
+ [[ ${#prunedirs[@]} == 0 ]] ||
+ rm -rf "${prunedirs[@]}" || die "cannot prune ${prunedirs[@]}"
+ return 0
+}
+
+postinst_aix() {
+ if [[ ${CHOST} != *-aix* ]] || has binchecks ${RESTRICT}; then
+ return 0
+ fi
+ local MY_PR=${PR%r0}
+ local ar strip
+ if type ${CHOST}-ar >/dev/null 2>&1 && type ${CHOST}-strip >/dev/null 2>&1; then
+ ar=${CHOST}-ar
+ strip=${CHOST}-strip
+ elif [[ ${CBUILD} == "${CHOST}" ]] && type ar >/dev/null 2>&1 && type strip >/dev/null 2>&1; then
+ ar=ar
+ strip=strip
+ elif [[ -x /usr/ccs/bin/ar && -x /usr/ccs/bin/strip ]]; then
+ ar=/usr/ccs/bin/ar
+ strip=/usr/ccs/bin/strip
+ else
+ die "cannot find where to use 'ar' and 'strip' from"
+ fi
+ local archives_members= archives=() activearchives=
+ local archive_member soname runpath needed
+ while read archive_member; do
+ archive_member=${archive_member#*;${EPREFIX}/} # drop "^type;EPREFIX/"
+ soname=${archive_member#*;}
+ runpath=${soname#*;}
+ needed=${runpath#*;}
+ soname=${soname%%;*}
+ runpath=${runpath%%;*}
+ archive_member=${archive_member%%;*} # drop ";soname;runpath;needed$"
+ [[ ${archive_member} == *'['*']' ]] && continue
+ [[ "${soname};${runpath};${needed}" == "${archive_member##*/};;" ]] || continue
+ # most likely an archive stub; we might have to
+ # drop members that are preserved shared objects.
+ archives[${#archives[@]}]=${archive_member}
+ activearchives="${activearchives}:(${archive_member}):"
+ done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local type allcontentmembers= oldarchives=()
+ local contentmember
+ while read type contentmember; do
+ [[ ${type} == 'obj' ]] || continue
+ contentmember=${contentmember% *} # drop " timestamp$"
+ contentmember=${contentmember% *} # drop " hash$"
+ [[ ${contentmember##*/} == *'['*']' ]] || continue
+ contentmember=${contentmember#${EPREFIX}/}
+ allcontentmembers="${allcontentmembers}:(${contentmember}):"
+ contentmember=${contentmember%[*}
+ contentmember=${contentmember%/.*}/${contentmember##*/.}
+ [[ ${activearchives} == *":(${contentmember}):"* ]] && continue
+ oldarchives[${#oldarchives[@]}]=${contentmember}
+ done < "${EPREFIX}/var/db/pkg/${CATEGORY}/${P}${MY_PR:+-}${MY_PR}/CONTENTS"
+
+ local archive line delmembers
+ local FILE MEMBER FLAGS
+ for archive in "${archives[@]}"; do
+ [[ -r ${EROOT}${archive} && -w ${EROOT}${archive} ]] ||
+ chmod a+r,u+w "${EROOT}${archive}" || die "cannot chmod ${EROOT}${archive}"
+ delmembers=
+ while read line; do
+ [[ -n ${line} ]] || continue
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == "${EROOT}${archive}" ]] ||
+ die "invalid result '${FILE}' of aixdll-query, expected '${EROOT}${archive}'"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${allcontentmembers} == *":(${archive%/*}/.${archive##*/}[${MEMBER}]):"* ]] && continue
+ delmembers="${delmembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+ [[ -n ${delmembers} ]] || continue
+ einfo "dropping ${archive}[${delmembers# }]"
+ rm -f "${EROOT}${archive}".new || die "cannot prune ${EROOT}${archive}.new"
+ cp "${EROOT}${archive}" "${EROOT}${archive}".new || die "cannot backup ${archive}"
+ ${ar} -X32_64 -z -o -d "${EROOT}${archive}".new ${delmembers} || die "cannot remove${delmembers} from ${archive}.new"
+ mv -f "${EROOT}${archive}".new "${EROOT}${archive}" || die "cannot put ${EROOT}${archive} in place"
+ eend $?
+ done
+ local libmetadir keepmembers prunedirs=()
+ for archive in "${oldarchives[@]}"; do
+ [[ -r ${EROOT}${archive} && -w ${EROOT}${archive} ]] ||
+ chmod a+r,u+w "${EROOT}${archive}" || die "cannot chmod ${EROOT}${archive}"
+ keepmembers=
+ while read line; do
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == "${EROOT}${archive}" ]] ||
+ die "invalid result of aixdll-query for ${EROOT}${archive}"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${allcontentmembers} == *":(${archive%/*}/.${archive##*/}[${MEMBER}]):"* ]] || continue
+ keepmembers="${keepmembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+
+ if [[ -n ${keepmembers} ]]; then
+ einfo "preserving (extra)${keepmembers}"
+ libmetadir=${EROOT}${archive%/*}/.${archive##*/}
+ [[ ! -e ${libmetadir} ]] || rm -rf "${libmetadir}" || die "cannot prune ${libmetadir}"
+ mkdir "${libmetadir}" || die "cannot create ${libmetadir}"
+ pushd "${libmetadir}" >/dev/null || die "cannot cd to ${libmetadir}"
+ ${ar} -X32_64 -x "${EROOT}${archive}" ${keepmembers} || die "cannot unpack ${archive}"
+ ${strip} -X32_64 -e ${keepmembers} || die "cannot strip ${keepmembers}"
+ rm -f "${EROOT}${archive}.new" || die "cannot prune ${EROOT}${archive}.new"
+ ${ar} -X32_64 -q "${EROOT}${archive}.new" ${keepmembers} || die "cannot create ${EROOT}${archive}.new"
+ mv -f "${EROOT}${archive}.new" "${EROOT}${archive}" || die "cannot put ${EROOT}${archive} in place"
+ popd > /dev/null || die "cannot leave ${libmetadir}"
+ prunedirs[${#prunedirs[@]}]=${libmetadir}
+ eend $?
+ fi
+ done
+ [[ ${#prunedirs[@]} == 0 ]] ||
+ rm -rf "${prunedirs[@]}" || die "cannot prune ${prunedirs[@]}"
+ return 0
+}
+
preinst_mask() {
- if [ -z "${D}" ]; then
- eerror "${FUNCNAME}: D is unset"
- return 1
- fi
-
- if ! ___eapi_has_prefix_variables; then
- local ED=${D}
- fi
-
- # Make sure $PWD is not ${D} so that we don't leave gmon.out files
- # in there in case any tools were built with -pg in CFLAGS.
- cd "${T}"
-
- # remove man pages, info pages, docs if requested
- local f
+ # Remove man pages, info pages, docs if requested. This is
+ # implemented in bash in order to respect INSTALL_MASK settings
+ # from bashrc.
+ local f x
for f in man info doc; do
- if has no${f} ${FEATURES}; then
- INSTALL_MASK+=" /usr/share/${f}"
+ if has no${f} $FEATURES; then
+ INSTALL_MASK="${INSTALL_MASK} ${EPREFIX}/usr/share/${f}"
fi
done
@@@ -1174,11 -478,11 +1155,11 @@@ __dyn_package()
mkdir -p "${PORTAGE_BINPKG_TMPFILE%/*}" || die "mkdir failed"
[ -z "${PORTAGE_COMPRESSION_COMMAND}" ] && \
die "PORTAGE_COMPRESSION_COMMAND is unset"
- tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${PROOT}" . | \
- $PORTAGE_COMPRESSION_COMMAND -c > "$PORTAGE_BINPKG_TMPFILE"
+ tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${D}" . | \
+ $PORTAGE_COMPRESSION_COMMAND > "$PORTAGE_BINPKG_TMPFILE"
assert "failed to pack binary package: '$PORTAGE_BINPKG_TMPFILE'"
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
"$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info"
if [ $? -ne 0 ]; then
rm -f "${PORTAGE_BINPKG_TMPFILE}"
diff --cc bin/phase-functions.sh
index bbffccf1e,1f9faaa41..209b76c68
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2015 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Hardcoded bash lists are needed for backward compatibility with
@@@ -30,9 -30,8 +30,8 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_SAVED_READONLY_VARS PORTAGE_SIGPIPE_STATUS \
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTAGE_XATTR_EXCLUDE \
- PORTDIR \
REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS"
+ __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc bin/phase-helpers.sh
index 2cac6f426,5c9f957e9..75d92b407
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,9 -1,14 +1,14 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2017 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- export DESTTREE=/usr
- export INSDESTTREE=""
+ if ___eapi_has_DESTTREE_INSDESTTREE; then
+ export DESTTREE=/usr
+ export INSDESTTREE=""
+ else
+ export _E_DESTTREE_=/usr
+ export _E_INSDESTTREE_=""
+ fi
export _E_EXEDESTTREE_=""
export _E_DOCDESTTREE_=""
export INSOPTIONS="-m0644"
diff --cc bin/portageq
index 3518a0af0,35499afd2..7b9addb67
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 1999-2016 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function, unicode_literals
diff --cc bin/save-ebuild-env.sh
index 4c6ca3f17,947ac79d5..bb17382d4
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_PREFIX_BASH@
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# @FUNCTION: __save_ebuild_env
diff --cc bin/xattr-helper.py
index 43bf70dcb,49c981580..a8aef3880
--- a/bin/xattr-helper.py
+++ b/bin/xattr-helper.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2012-2014 Gentoo Foundation
+ # Copyright 2012-2018 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
doc = """Dump and restore extended attributes.
diff --cc cnf/repos.conf
index b27d5c6b2,352073cfd..3b4b94209
--- a/cnf/repos.conf
+++ b/cnf/repos.conf
@@@ -1,11 -1,20 +1,20 @@@
[DEFAULT]
-main-repo = gentoo
+main-repo = gentoo_prefix
-[gentoo]
-location = /usr/portage
+[gentoo_prefix]
+location = @PORTAGE_EPREFIX@/usr/portage
sync-type = rsync
-sync-uri = rsync://rsync.gentoo.org/gentoo-portage
+sync-uri = rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
auto-sync = yes
+ sync-rsync-verify-jobs = 1
+ sync-rsync-verify-metamanifest = yes
+ sync-rsync-verify-max-age = 24
+ sync-openpgp-key-path = /usr/share/openpgp-keys/gentoo-release.asc
+ sync-openpgp-key-refresh-retry-count = 40
+ sync-openpgp-key-refresh-retry-overall-timeout = 1200
+ sync-openpgp-key-refresh-retry-delay-exp-base = 2
+ sync-openpgp-key-refresh-retry-delay-max = 60
+ sync-openpgp-key-refresh-retry-delay-mult = 4
# for daily squashfs snapshots
#sync-type = squashdelta
diff --cc pym/_emerge/Package.py
index 44029bcb3,a7ce00bc9..1d3457ed8
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@@ -42,15 -42,16 +42,16 @@@ class Package(Task)
"_validated_atoms", "_visible")
metadata_keys = [
+ "BDEPEND",
"BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
"DEPEND", "EAPI", "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "MD5", "PDEPEND", "PROVIDE", "PROVIDES",
+ "LICENSE", "MD5", "PDEPEND", "PROVIDES",
"RDEPEND", "repository", "REQUIRED_USE",
"PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
- "SLOT", "USE", "_mtime_"]
+ "SLOT", "USE", "_mtime_", "EPREFIX"]
- _dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
- _buildtime_keys = ('DEPEND', 'HDEPEND')
+ _dep_keys = ('BDEPEND', 'DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
+ _buildtime_keys = ('BDEPEND', 'DEPEND', 'HDEPEND')
_runtime_keys = ('PDEPEND', 'RDEPEND')
_use_conditional_misc_keys = ('LICENSE', 'PROPERTIES', 'RESTRICT')
UNKNOWN_REPO = _unknown_repo
diff --cc pym/portage/__init__.py
index 9f3301ef0,166bfc700..95c6dff15
--- a/pym/portage/__init__.py
+++ b/pym/portage/__init__.py
@@@ -419,55 -418,11 +419,10 @@@ def _shell_quote(s)
bsd_chflags = None
- if platform.system() in ('FreeBSD',) and rootuid == 0:
-
+ if platform.system() in ('FreeBSD',):
- # TODO: remove this class?
class bsd_chflags(object):
-
- @classmethod
- def chflags(cls, path, flags, opts=""):
- cmd = ['chflags']
- if opts:
- cmd.append(opts)
- cmd.append('%o' % (flags,))
- cmd.append(path)
-
- if sys.hexversion < 0x3020000 and sys.hexversion >= 0x3000000:
- # Python 3.1 _execvp throws TypeError for non-absolute executable
- # path passed as bytes (see https://bugs.python.org/issue8513).
- fullname = process.find_binary(cmd[0])
- if fullname is None:
- raise exception.CommandNotFound(cmd[0])
- cmd[0] = fullname
-
- encoding = _encodings['fs']
- cmd = [_unicode_encode(x, encoding=encoding, errors='strict')
- for x in cmd]
- proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT)
- output = proc.communicate()[0]
- status = proc.wait()
- if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
- return
- # Try to generate an ENOENT error if appropriate.
- if 'h' in opts:
- _os_merge.lstat(path)
- else:
- _os_merge.stat(path)
- # Make sure the binary exists.
- if not portage.process.find_binary('chflags'):
- raise portage.exception.CommandNotFound('chflags')
- # Now we're not sure exactly why it failed or what
- # the real errno was, so just report EPERM.
- output = _unicode_decode(output, encoding=encoding)
- e = OSError(errno.EPERM, output)
- e.errno = errno.EPERM
- e.filename = path
- e.message = output
- raise e
-
- @classmethod
- def lchflags(cls, path, flags):
- return cls.chflags(path, flags, opts='-h')
+ chflags = os.chflags
+ lchflags = os.lchflags
def load_mod(name):
modname = ".".join(name.split(".")[:-1])
@@@ -581,8 -536,9 +536,8 @@@ def create_trees(config_root=None, targ
if env is None:
env = os.environ
-
settings = config(config_root=config_root, target_root=target_root,
- env=env, eprefix=eprefix)
+ env=env, sysroot=sysroot, eprefix=eprefix)
settings.lock()
depcachedir = settings.get('PORTAGE_DEPCACHEDIR')
diff --cc pym/portage/dbapi/bintree.py
index 8841bf9ec,269a7b226..5376b7e17
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -85,11 -86,11 +86,11 @@@ class bindbapi(fakedbapi)
self.move_ent = mybintree.move_ent
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ ["BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
"DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES",
"PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE", "_mtime_"
+ "SIZE", "SLOT", "USE", "_mtime_", "EPREFIX"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -312,25 -313,26 +313,27 @@@ class binarytree(object)
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ ["BASE_URI", "BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST",
"DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
"HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
- "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDES",
"RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE"]
+ "SIZE", "SLOT", "USE", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
- ("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
- "PDEPEND", "PROPERTIES", "PROVIDE", "RESTRICT")
+ ("BDEPEND", "DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
+ "PDEPEND", "PROPERTIES", "RESTRICT")
+ self._pkgindex_header = None
self._pkgindex_header_keys = set([
"ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
"ACCEPT_PROPERTIES", "ACCEPT_RESTRICT", "CBUILD",
"CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
"GENTOO_MIRRORS", "INSTALL_MASK", "IUSE_IMPLICIT", "USE",
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
- "USE_EXPAND_UNPREFIXED"])
+ "USE_EXPAND_UNPREFIXED",
+ "EPREFIX"])
self._pkgindex_default_pkg_data = {
+ "BDEPEND" : "",
"BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
diff --cc pym/portage/package/ebuild/_config/special_env_vars.py
index 7e5291e58,a308518af..e2ea8c393
--- a/pym/portage/package/ebuild/_config/special_env_vars.py
+++ b/pym/portage/package/ebuild/_config/special_env_vars.py
@@@ -76,11 -77,9 +77,11 @@@ environ_whitelist +=
"PORTAGE_VERBOSE", "PORTAGE_WORKDIR_MODE", "PORTAGE_XATTR_EXCLUDE",
"PORTDIR", "PORTDIR_OVERLAY", "PREROOTPATH", "PYTHONDONTWRITEBYTECODE",
"REPLACING_VERSIONS", "REPLACED_BY_VERSION",
- "ROOT", "ROOTPATH", "T", "TMP", "TMPDIR",
+ "ROOT", "ROOTPATH", "SYSROOT", "T", "TMP", "TMPDIR",
"USE_EXPAND", "USE_ORDER", "WORKDIR",
"XARGS", "__PORTAGE_TEST_HARDLINK_LOCKS",
+ "DEFAULT_PATH", "EXTRA_PATH",
+ "PORTAGE_GROUP", "PORTAGE_USER",
]
# user config variables
diff --cc pym/portage/package/ebuild/doebuild.py
index a24f8fec8,31b552ff3..fff03e1d4
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -1391,14 -1366,8 +1369,9 @@@ def _spawn_actionmap(settings)
("usersandbox" not in features) and \
"userpriv" not in restrict and \
"nouserpriv" not in restrict)
- if nosandbox and ("userpriv" not in features or \
- "userpriv" in restrict or \
- "nouserpriv" in restrict):
- nosandbox = ("sandbox" not in features and \
- "usersandbox" not in features)
- if not portage.process.sandbox_capable:
+ if not (portage.process.sandbox_capable or \
+ portage.process.macossandbox_capable):
nosandbox = True
sesandbox = settings.selinux_enabled() and \
diff --cc pym/portage/process.py
index b91f17305,fd326731a..5261741b8
--- a/pym/portage/process.py
+++ b/pym/portage/process.py
@@@ -91,9 -91,30 +91,32 @@@ sandbox_capable = (os.path.isfile(SANDB
fakeroot_capable = (os.path.isfile(FAKEROOT_BINARY) and
os.access(FAKEROOT_BINARY, os.X_OK))
+macossandbox_capable = (os.path.isfile(MACOSSANDBOX_BINARY) and
+ os.access(MACOSSANDBOX_BINARY, os.X_OK))
+ def sanitize_fds():
+ """
+ Set the inheritable flag to False for all open file descriptors,
+ except for those corresponding to stdin, stdout, and stderr. This
+ ensures that any unintentionally inherited file descriptors will
+ not be inherited by child processes.
+ """
+ if _set_inheritable is not None:
+
+ whitelist = frozenset([
+ sys.__stdin__.fileno(),
+ sys.__stdout__.fileno(),
+ sys.__stderr__.fileno(),
+ ])
+
+ for fd in get_open_fds():
+ if fd not in whitelist:
+ try:
+ _set_inheritable(fd, False)
+ except OSError:
+ pass
+
+
def spawn_bash(mycommand, debug=False, opt_name=None, **keywords):
"""
Spawns a bash shell running a specific command
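
As an aside on the sanitize_fds() helper shown earlier in this hunk, its per-descriptor effect can be illustrated with a tiny stand-alone snippet (a sketch for clarity, not Portage code):

    import os

    # Clearing the inheritable flag means an exec'd child will not receive
    # the descriptor; stdin/stdout/stderr (fds 0-2) are deliberately left alone.
    fd = os.open(os.devnull, os.O_RDONLY)
    os.set_inheritable(fd, True)            # simulate an accidentally leaked fd
    assert os.get_inheritable(fd)

    if fd not in (0, 1, 2):                 # the same whitelist idea as above
        os.set_inheritable(fd, False)

    assert not os.get_inheritable(fd)       # children spawned now won't see it
    os.close(fd)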
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-12-12 8:19 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-12-12 8:19 UTC (permalink / raw
To: gentoo-commits
commit: 70aab2af6ad556e45657745a2d4adf64ac23e5b9
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 12 08:19:10 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Dec 12 08:19:10 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=70aab2af
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
COPYING | 375 ++-----------
NEWS | 10 +
RELEASE-NOTES | 42 ++
bin/doins.py | 583 +++++++++++++++++++++
bin/ebuild-helpers/doins | 156 ++----
bin/phase-functions.sh | 2 +-
bin/phase-helpers.sh | 3 +-
bin/quickpkg | 15 +-
cnf/make.globals | 5 +-
man/portage.5 | 13 +-
pym/_emerge/BinpkgExtractorAsync.py | 4 +-
pym/_emerge/Package.py | 32 ++
pym/_emerge/Scheduler.py | 36 --
pym/_emerge/depgraph.py | 12 +-
pym/_emerge/main.py | 3 +-
pym/portage/_emirrordist/FetchTask.py | 2 +-
pym/portage/checksum.py | 9 +-
pym/portage/const.py | 29 +-
pym/portage/dbapi/__init__.py | 5 +
pym/portage/dbapi/bintree.py | 87 +--
pym/portage/dbapi/vartree.py | 42 +-
pym/portage/dep/_dnf.py | 90 ++++
pym/portage/dep/dep_check.py | 147 +++++-
pym/portage/emaint/modules/binhost/binhost.py | 10 +-
pym/portage/manifest.py | 37 +-
pym/portage/output.py | 2 +-
pym/portage/package/ebuild/_config/UseManager.py | 5 +-
pym/portage/package/ebuild/config.py | 10 +-
pym/portage/package/ebuild/digestgen.py | 7 +-
pym/portage/package/ebuild/doebuild.py | 17 +-
pym/portage/repository/config.py | 45 +-
pym/portage/sync/modules/git/git.py | 2 +-
pym/portage/sync/modules/rsync/rsync.py | 2 +-
pym/portage/tests/bin/test_doins.py | 352 +++++++++++++
pym/portage/tests/dbapi/test_fakedbapi.py | 20 +
pym/portage/tests/dep/test_dnf_convert.py | 48 ++
pym/portage/tests/dep/test_overlap_dnf.py | 28 +
pym/portage/tests/ebuild/test_config.py | 1 +
.../resolver/test_disjunctive_depend_order.py | 87 +++
.../tests/resolver/test_onlydeps_minimal.py | 5 +-
.../tests/resolver/test_or_downgrade_installed.py | 97 ++++
.../resolver/test_virtual_minimize_children.py | 145 +++++
pym/portage/tests/sync/test_sync_local.py | 2 +-
pym/portage/tests/versions/test_vercmp.py | 3 -
pym/portage/util/__init__.py | 6 +-
pym/portage/versions.py | 53 +-
repoman/RELEASE-NOTES | 12 +
repoman/pym/repoman/repos.py | 17 +-
repoman/pym/repoman/tests/simple/test_simple.py | 15 -
repoman/setup.py | 24 +-
setup.py | 10 +-
51 files changed, 2080 insertions(+), 684 deletions(-)
diff --cc bin/ebuild-helpers/doins
index 7690cab35,c3a57890a..04bfdd0d4
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -42,127 -41,49 +41,58 @@@ if [[ ${INSDESTTREE#${ED}} != "${INSDES
__helpers_die "${helper} used with \${D} or \${ED}"
exit 1
fi
+# PREFIX LOCAL: check for usage with EPREFIX
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ __vecho "-------------------------------------------------------" 1>&2
+ __vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ __vecho " --> ${INSDESTTREE}" 1>&2
+ __vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
+# END PREFIX LOCAL
if ___eapi_doins_and_newins_preserve_symlinks; then
- PRESERVE_SYMLINKS=y
- else
- PRESERVE_SYMLINKS=n
+ DOINS_ARGS+=( --preserve_symlinks )
fi
- export TMP=$(mktemp -d "${T}/.doins_tmp_XXXXXX")
- # Use separate directories to avoid potential name collisions.
- mkdir -p "$TMP"/{1,2}
-
- [[ ! -d ${ED}${INSDESTTREE} ]] && dodir "${INSDESTTREE}"
-
- _doins() {
- local mysrc="$1" mydir="$2" cleanup="" rval
-
- if [ -L "$mysrc" ] ; then
- # Our fake $DISTDIR contains symlinks that should
- # not be reproduced inside $D. In order to ensure
- # that things like dodoc "$DISTDIR"/foo.pdf work
- # as expected, we dereference symlinked files that
- # refer to absolute paths inside
- # $PORTAGE_ACTUAL_DISTDIR/.
- if [ $PRESERVE_SYMLINKS = y ] && \
- ! [[ $(readlink "$mysrc") == "$PORTAGE_ACTUAL_DISTDIR"/* ]] ; then
- rm -rf "${ED}$INSDESTTREE/$mydir/${mysrc##*/}" || return $?
- cp -P "$mysrc" "${ED}$INSDESTTREE/$mydir/${mysrc##*/}"
- return $?
- else
- cp "$mysrc" "$TMP/2/${mysrc##*/}" || return $?
- mysrc="$TMP/2/${mysrc##*/}"
- cleanup=$mysrc
- fi
- fi
+ if ___eapi_helpers_can_die; then
+ DOINS_ARGS+=( --helpers_can_die )
+ fi
- install ${INSOPTIONS} "${mysrc}" "${ED}${INSDESTTREE}/${mydir}"
- rval=$?
- [[ -n ${cleanup} ]] && rm -f "${cleanup}"
- [ $rval -ne 0 ] && echo "!!! ${helper}: $mysrc does not exist" 1>&2
- return $rval
- }
-
- _xdoins() {
- local -i failed=0
- while read -r -d $'\0' x ; do
- _doins "$x" "${x%/*}"
- ((failed|=$?))
- done
- return $failed
- }
-
- success=0
- failed=0
-
- for x in "$@" ; do
- if [[ $PRESERVE_SYMLINKS = n && -d $x ]] || \
- [[ $PRESERVE_SYMLINKS = y && -d $x && ! -L $x ]] ; then
- if [ "${DOINSRECUR}" == "n" ] ; then
- if [[ ${helper} == dodoc ]] ; then
- echo "!!! ${helper}: $x is a directory" 1>&2
- ((failed|=1))
- fi
- continue
- fi
-
- while [ "$x" != "${x%/}" ] ; do
- x=${x%/}
- done
- if [ "$x" = "${x%/*}" ] ; then
- pushd "$PWD" >/dev/null
- else
- pushd "${x%/*}" >/dev/null
- fi
- x=${x##*/}
- x_orig=$x
- # Follow any symlinks recursively until we've got
- # a normal directory for 'find' to traverse. The
- # name of the symlink will be used for the name
- # of the installed directory, as discussed in
- # bug #239529.
- while [ -L "$x" ] ; do
- pushd "$(readlink "$x")" >/dev/null
- x=${PWD##*/}
- pushd "${PWD%/*}" >/dev/null
- done
- if [[ $x != $x_orig ]] ; then
- mv "$x" "$TMP/1/$x_orig"
- pushd "$TMP/1" >/dev/null
- fi
- find "$x_orig" -type d -exec dodir "${INSDESTTREE}/{}" \;
- find "$x_orig" \( -type f -or -type l \) -print0 | _xdoins
- if [[ ${PIPESTATUS[1]} -eq 0 ]] ; then
- # NOTE: Even if only an empty directory is installed here, it
- # still counts as success, since an empty directory given as
- # an argument to doins -r should not trigger failure.
- ((success|=1))
- else
- ((failed|=1))
- fi
- if [[ $x != $x_orig ]] ; then
- popd >/dev/null
- mv "$TMP/1/$x_orig" "$x"
- fi
- while popd >/dev/null 2>&1 ; do true ; done
- else
- _doins "${x}"
- if [[ $? -eq 0 ]] ; then
- ((success|=1))
- else
- ((failed|=1))
- fi
- fi
- done
- rm -rf "$TMP"
- [[ $failed -ne 0 || $success -eq 0 ]] && { __helpers_die "${helper} failed"; exit 1; } || exit 0
+ if [[ -n "${INSOPTIONS}" ]]; then
+ DOINS_ARGS+=( "--insoptions=${INSOPTIONS}" )
+ fi
+
+ if [[ -n "${DIROPTIONS}" ]]; then
+ DOINS_ARGS+=( "--diroptions=${DIROPTIONS}" )
+ fi
+
+ if [[ -n "${PORTAGE_ACTUAL_DISTDIR}" ]]; then
+ DOINS_ARGS+=( "--distdir=${PORTAGE_ACTUAL_DISTDIR}" )
+ fi
+
+ if [[ "${DOINSSTRICTOPTION}" == 1 ]]; then
+ DOINS_ARGS+=( --strict_option )
+ fi
+
+ if has xattr ${FEATURES}; then
+ DOINS_ARGS+=(
+ --enable_copy_xattr
+ "--xattr_exclude=${PORTAGE_XATTR_EXCLUDE}"
+ )
+ fi
+
+ DOINS_ARGS+=(
+ "--helper=${helper}"
+ "--dest=${ED}${INSDESTTREE}"
+ )
+
+ # Explicitly set PYTHONPATH to a non-empty value.
+ # If PYTHONPATH is empty (rather than unset), Python versions prior to 3.4
+ # treat it as "add the current working directory to the import path", which
+ # would cause unexpected imports. See also #469338.
+ PYTHONPATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}${PYTHONPATH:+:}${PYTHONPATH}" \
+ "${PORTAGE_PYTHON:-/usr/bin/python}" \
+ "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/doins.py \
+ "${DOINS_ARGS[@]}" -- "$@" || \
+ { __helpers_die "${helper} failed"; exit 1; }
diff --cc cnf/make.globals
index d422ca639,08a37a534..131c5076d
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -150,23 -127,12 +150,26 @@@ PORTAGE_GPG_SIGNING_COMMAND="gpg --sig
# security.* attributes may be special (see bug 461868), but
# security.capability is specifically not excluded (bug 548516).
# system.nfs4_acl attributes are irrelevant, see bug #475496.
+ # user.* attributes are not supported on tmpfs (bug 640290), but
+ # user.pax.* is supported with the patch from bug 470644.
PORTAGE_XATTR_EXCLUDE="btrfs.* security.evm security.ima
- security.selinux system.nfs4_acl"
+ security.selinux system.nfs4_acl user.apache_handler
+ user.Beagle.* user.dublincore.* user.mime_encoding user.xdg.*"
+# Writeable paths for Mac OS X seatbelt sandbox
+#
+# If path ends in a slash (/), access will recursively be allowed to directory
+# contents (using a regex), not the directory itself. Without a slash, access
+# to the directory or file itself will be allowed (using a literal), so it can
+# be created, removed and changed. If both are needed, the directory needs to be
+# given twice, once with and once without the slash. Obviously this only makes
+# sense for directories, not files.
+#
+# An empty value for either variable will disable all restrictions on the
+# corresponding operation.
+MACOSSANDBOX_PATHS="/dev/fd/ /private/tmp/ /private/var/tmp/ @@PORTAGE_BUILDDIR@@/ @@PORTAGE_ACTUAL_DISTDIR@@/"
+MACOSSANDBOX_PATHS_CONTENT_ONLY="/dev/null /dev/dtracehelper /dev/tty /private/var/run/syslog"
+
# *****************************
# ** DO NOT EDIT THIS FILE **
# ***************************************************
diff --cc pym/portage/dbapi/vartree.py
index f84dbd5fc,b28b1c56c..d2c35f9e3
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -39,9 -39,7 +39,10 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util._xattr:xattr',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
+ 'portage.util._dyn_libs.NeededEntry:NeededEntry',
'portage.util._async.SchedulerInterface:SchedulerInterface',
'portage.util._eventloop.EventLoop:EventLoop',
'portage.util._eventloop.global_event_loop:global_event_loop',
diff --cc pym/portage/versions.py
index fb4996797,7b6a57673..7dc0ac0d6
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@@ -50,9 -50,8 +50,10 @@@ _pkg =
"dots_allowed_in_PN": r'[\w+][\w+.-]*?',
}
- _v = r'(cvs\.)?(\d+)((\.\d+)*)([a-z]?)((_(pre|p|beta|alpha|rc)\d*)*)'
- # PREFIX hack: -r(\d+) -> -r(\d+|0\d+\.\d+) (see below)
+ _v = r'(\d+)((\.\d+)*)([a-z]?)((_(pre|p|beta|alpha|rc)\d*)*)'
-_rev = r'\d+'
++# PREFIX_LOCAL hack: -r(\d+) -> -r(\d+|0\d+\.\d+) (see below)
+_rev = r'(\d+|0\d+\.\d+)'
++# END_PREFIX_LOCAL
_vr = _v + '(-r(' + _rev + '))?'
_cp = {
@@@ -257,39 -250,15 +252,41 @@@ def vercmp(ver1, ver2, silent=1)
if rval:
return rval
- # the suffix part is equal to, so finally check the revision
++ # PREFIX_LOCAL
+ # The suffix part is equal too, so finally check the revision
- # PREFIX hack: a revision starting with 0 is an 'inter-revision',
++ # Prefix hack: a revision starting with 0 is an 'inter-revision',
+ # which means that it is possible to create revisions on revisions.
+ # An example is -r01.1 which is the first revision of -r1. Note
+ # that a period (.) is used to separate the real revision and the
+ # secondary revision number. This trick is in use to allow revision
+ # bumps in ebuilds synced from the main tree for Prefix changes,
+ # while still staying in the main tree versioning scheme.
- if match1.group(10):
- if match1.group(10)[0] == '0' and '.' in match1.group(10):
- t = match1.group(10)[1:].split(".")
+ if match1.group(9):
- r1 = int(match1.group(9))
++ if match1.group(9)[0] == '0' and '.' in match1.group(9):
++ t = match1.group(9)[1:].split(".")
+ r1 = int(t[0])
+ r3 = int(t[1])
+ else:
- r1 = int(match1.group(10))
++ r1 = int(match1.group(9))
+ r3 = 0
else:
r1 = 0
+ r3 = 0
- if match2.group(10):
- if match2.group(10)[0] == '0' and '.' in match2.group(10):
- t = match2.group(10)[1:].split(".")
+ if match2.group(9):
- r2 = int(match2.group(9))
++ if match2.group(9)[0] == '0' and '.' in match2.group(9):
++ t = match2.group(9)[1:].split(".")
+ r2 = int(t[0])
+ r4 = int(t[1])
+ else:
- r2 = int(match2.group(10))
++ r2 = int(match2.group(9))
+ r4 = 0
++ # END_PREFIX_LOCAL
else:
r2 = 0
+ r4 = 0
+ if r1 == r2 and (r3 != 0 or r4 != 0):
+ r1 = r3
+ r2 = r4
rval = (r1 > r2) - (r1 < r2)
return rval
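
As an aside on the Prefix inter-revision scheme described in the comment above, the resulting ordering can be illustrated with a small stand-alone sketch (illustrative only, not the Portage implementation; the helper names are made up):

    import re

    # "-r01.1" means the first Prefix-local revision on top of "-r1", so it
    # must sort after -r1 but before -r2; 0<real>.<secondary> encodes that.
    _rev_re = re.compile(r"^-r(\d+|0\d+\.\d+)$")

    def _split_rev(rev):
        # Return (real_revision, secondary_revision) for a "-rN" suffix.
        m = _rev_re.match(rev)
        if m is None:
            return 0, 0
        r = m.group(1)
        if r.startswith("0") and "." in r:
            real, secondary = r[1:].split(".")
            return int(real), int(secondary)
        return int(r), 0

    def rev_cmp(a, b):
        # Negative, zero or positive, like the rval computed in the hunk above.
        r1, r3 = _split_rev(a)
        r2, r4 = _split_rev(b)
        if r1 != r2:
            return (r1 > r2) - (r1 < r2)
        return (r3 > r4) - (r3 < r4)

    assert rev_cmp("-r1", "-r01.1") < 0     # -r01.1 is newer than -r1
    assert rev_cmp("-r01.1", "-r2") < 0     # but still older than -r2
    assert rev_cmp("-r01.1", "-r01.2") < 0  # secondary revisions order among themselves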
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-10-29 14:51 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-10-29 14:51 UTC (permalink / raw
To: gentoo-commits
commit: d598b947524aaba3bda162fc0d2cc97a9e0dcef6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 29 14:51:23 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Oct 29 14:51:23 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=d598b947
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.travis.yml | 2 +
NEWS | 7 ++
RELEASE-NOTES | 21 +++++
bin/ebuild-helpers/prepstrip | 20 ++++-
bin/install-qa-check.d/10ignored-flags | 5 +-
bin/misc-functions.sh | 18 +++--
bin/postinst-qa-check.d/50gnome2-utils | 6 ++
bin/postinst-qa-check.d/50xdg-utils | 12 +++
bin/preinst-qa-check.d/50gnome2-utils | 1 +
bin/preinst-qa-check.d/50xdg-utils | 1 +
doc/config/bashrc.docbook | 15 ++++
man/ebuild.5 | 78 +-----------------
man/emerge.1 | 2 +-
man/make.conf.5 | 4 +-
man/portage.5 | 14 ++--
pym/_emerge/BinpkgExtractorAsync.py | 26 +++---
pym/_emerge/depgraph.py | 45 ++++++++++-
pym/_emerge/resolver/package_tracker.py | 87 +++++++++++++++-----
pym/portage/checksum.py | 17 +++-
pym/portage/dbapi/porttree.py | 123 ++++++++++++++++++++++++++---
pym/portage/elog/mod_echo.py | 4 +-
pym/portage/package/ebuild/doebuild.py | 1 +
pym/portage/sync/modules/rsync/rsync.py | 6 +-
repoman/RELEASE-NOTES | 5 ++
repoman/setup.py | 2 +-
setup.py | 2 +-
src/portage_util_file_copy_reflink_linux.c | 99 +++++++++++++----------
27 files changed, 435 insertions(+), 188 deletions(-)
diff --cc bin/misc-functions.sh
index 20fd4eef8,a02aa3bfd..702f1ff4a
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -297,391 -256,12 +297,395 @@@ install_qa_check_misc()
rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ install_name_is_relative() {
+ case $1 in
+ "@executable_path/"*) return 0 ;;
+ "@loader_path"/*) return 0 ;;
+ "@rpath/"*) return 0 ;;
+ *) return 1 ;;
+ esac
+ }
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ ignore=
+ qa_var="QA_IGNORE_INSTALL_NAME_FILES_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] &&
+ QA_IGNORE_INSTALL_NAME_FILES=(\"\${${qa_var}[@]}\")"
+
+ if [[ ${#QA_IGNORE_INSTALL_NAME_FILES[@]} -gt 1 ]] ; then
+ for x in "${QA_IGNORE_INSTALL_NAME_FILES[@]}" ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_IGNORE_INSTALL_NAME_FILES} ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if install_name_is_relative ${install_name} ; then
+ # try to locate the library in the installed image
+ local inpath=${install_name#@*/}
+ local libl
+ for libl in $(find "${ED}" -name "${inpath##*/}") ; do
+ if [[ ${libl} == */${inpath} ]] ; then
+ install_name=/${libl#${D}}
+ break
+ fi
+ done
+ fi
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif ! install_name_is_relative ${lib} && [[ ! -e ${lib} && ! -e ${D}${lib} ]] ; then
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+ done
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed, and there is plenty of
+ # room for optimization by introducing one cache or another!
+ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ __vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
+ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on AIX,
+ # as there is nothing like "soname" at the pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX indicates this by listing either '.' or '..' among the needed libraries.
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ __vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ __vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+ preinst_qa_check() {
+ postinst_qa_check preinst
+ }
+
postinst_qa_check() {
- local d f paths qa_checks=()
+ local d f paths qa_checks=() PORTAGE_QA_PHASE=${1:-postinst}
if ! ___eapi_has_prefix_variables; then
local EPREFIX= EROOT=${ROOT}
fi
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-10-03 7:32 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-10-03 7:32 UTC (permalink / raw
To: gentoo-commits
commit: 1b9d59c47031a25e8105196ae944615e0b5a0a6a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 3 07:31:54 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Oct 3 07:31:54 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=1b9d59c4
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
RELEASE-NOTES | 14 ++++
bin/postinst-qa-check.d/50gnome2-utils | 2 +
bin/postinst-qa-check.d/50xdg-utils | 8 +-
pym/_emerge/depgraph.py | 38 ++++++++--
pym/_emerge/search.py | 8 +-
.../resolver/test_autounmask_use_backtrack.py | 86 ++++++++++++++++++++++
.../tests/resolver/test_slot_conflict_update.py | 2 +-
setup.py | 2 +-
8 files changed, 150 insertions(+), 10 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-09-22 10:08 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-09-22 10:08 UTC (permalink / raw
To: gentoo-commits
commit: 3f9ce6aaf8b9121d942e676cc13bd06ba51907be
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 22 10:07:50 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Sep 22 10:07:50 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=3f9ce6aa
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
RELEASE-NOTES | 19 +++++
bin/clean_locks | 2 +-
bin/dispatch-conf | 2 +-
bin/ebuild | 2 +-
bin/ebuild.sh | 9 +++
bin/emaint | 2 +-
bin/env-update | 2 +-
bin/isolated-functions.sh | 4 +
bin/misc-functions.sh | 59 ++++++++++++++-
bin/phase-functions.sh | 53 +++++++++++++-
bin/portageq | 2 +-
bin/postinst-qa-check.d/50gnome2-utils | 53 ++++++++++++++
bin/postinst-qa-check.d/50xdg-utils | 85 ++++++++++++++++++++++
pym/_emerge/AsynchronousLock.py | 6 +-
pym/portage/dbapi/vartree.py | 5 +-
.../package/ebuild/_config/special_env_vars.py | 4 +-
pym/portage/package/ebuild/doebuild.py | 3 +-
pym/portage/repository/config.py | 4 +-
pym/portage/sync/syncbase.py | 8 +-
pym/portage/tests/locks/test_asynchronous_lock.py | 8 +-
pym/portage/util/digraph.py | 12 +--
repoman/man/repoman.1 | 18 ++++-
repoman/pym/repoman/actions.py | 17 +----
setup.py | 2 +-
24 files changed, 333 insertions(+), 48 deletions(-)
diff --cc bin/clean_locks
index d44162075,fb245972f..ecbedffcb
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bO
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/dispatch-conf
index 184044930,49e7774bf..1ea3dbcdd
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bO
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Copyright 1999-2017 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/ebuild
index d69039d35,bda746f78..d5939db89
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bO
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/emaint
index 784d8613e,08e75851a..3515b5fd7
--- a/bin/emaint
+++ b/bin/emaint
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bO
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Copyright 2005-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/env-update
index 5b99e66dd,03fc5849f..889fc4967
--- a/bin/env-update
+++ b/bin/env-update
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bO
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc bin/misc-functions.sh
index d143b6b7e,b0506bde7..20fd4eef8
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -297,389 -256,63 +297,446 @@@ install_qa_check_misc()
rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ install_name_is_relative() {
+ case $1 in
+ "@executable_path/"*) return 0 ;;
+ "@loader_path"/*) return 0 ;;
+ "@rpath/"*) return 0 ;;
+ *) return 1 ;;
+ esac
+ }
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ ignore=
+ qa_var="QA_IGNORE_INSTALL_NAME_FILES_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] &&
+ QA_IGNORE_INSTALL_NAME_FILES=(\"\${${qa_var}[@]}\")"
+
+ if [[ ${#QA_IGNORE_INSTALL_NAME_FILES[@]} -gt 1 ]] ; then
+ for x in "${QA_IGNORE_INSTALL_NAME_FILES[@]}" ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_IGNORE_INSTALL_NAME_FILES} ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if install_name_is_relative ${install_name} ; then
+ # try to locate the library in the installed image
+ local inpath=${install_name#@*/}
+ local libl
+ for libl in $(find "${ED}" -name "${inpath##*/}") ; do
+ if [[ ${libl} == */${inpath} ]] ; then
+ install_name=/${libl#${D}}
+ break
+ fi
+ done
+ fi
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif ! install_name_is_relative ${lib} && [[ ! -e ${lib} && ! -e ${D}${lib} ]] ; then
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+ done
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed, and there is plenty of
+ # room for optimization by introducing one cache or another!
+ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ __vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ __vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
+ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on AIX,
+ # as there is nothing like "soname" at the pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATHs.
+ # We don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths,
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX indicates this by listing either '.' or '..' as a needed dependency.
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ __vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ __vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATHs"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ __vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
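
Each record the check above writes to NEEDED.XCOFF.1 follows the layout of
the echo statements, i.e. "format;path[member];soname[member];runpath;needed,...".
A small sketch of reading one record back; parse_xcoff_needed and the sample
values are hypothetical.

# Sketch only: split one NEEDED.XCOFF.1 record as emitted above.
def parse_xcoff_needed(record):
    fmt, path, soname, runpath, needed = record.rstrip("\n").split(";", 4)
    return {
        "format": fmt,
        "path": path,          # EPREFIX-qualified path, optionally with [member]
        "soname": soname,
        "runpath": runpath.split(":") if runpath else [],
        "needed": needed.split(",") if needed else [],
    }

entry = parse_xcoff_needed(
    "xcoff64;/usr/lib/libfoo.a[libfooshr.o];libfoo.a[libfooshr.o];/usr/lib;libc.a")
assert entry["needed"] == ["libc.a"]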
+
+ postinst_qa_check() {
+ local d f paths qa_checks=()
+ if ! ___eapi_has_prefix_variables; then
+ local EPREFIX= EROOT=${ROOT}
+ fi
+
+ cd "${EROOT}" || die "cd failed"
+
+ # Collect the paths for QA checks, highest prio first.
+ paths=(
+ # sysadmin overrides
+ "${PORTAGE_OVERRIDE_EPREFIX}"/usr/local/lib/postinst-qa-check.d
+ # system-wide package installs
+ "${PORTAGE_OVERRIDE_EPREFIX}"/usr/lib/postinst-qa-check.d
+ )
+
+ # Now repo-specific checks.
+ # (yes, PORTAGE_ECLASS_LOCATIONS contains repo paths...)
+ for d in "${PORTAGE_ECLASS_LOCATIONS[@]}"; do
+ paths+=(
+ "${d}"/metadata/postinst-qa-check.d
+ )
+ done
+
+ paths+=(
+ # Portage built-in checks
+ "${PORTAGE_OVERRIDE_EPREFIX}"/usr/lib/portage/postinst-qa-check.d
+ "${PORTAGE_BIN_PATH}"/postinst-qa-check.d
+ )
+
+ # Collect file names of QA checks. We need them early to support
+ # overrides properly.
+ for d in "${paths[@]}"; do
+ for f in "${d}"/*; do
+ [[ -f ${f} ]] && qa_checks+=( "${f##*/}" )
+ done
+ done
+
+ # Now we need to sort the filenames lexically, and process
+ # them in order.
+ while read -r -d '' f; do
+ # Find highest priority file matching the basename.
+ for d in "${paths[@]}"; do
+ [[ -f ${d}/${f} ]] && break
+ done
+
+ # Run in a subshell to treat it like an external script,
+ # but use 'source' to pass all variables through.
+ (
+ # Allow inheriting eclasses.
+ # XXX: we want this only in repository-wide checks.
+ _IN_INSTALL_QA_CHECK=1
+ source "${d}/${f}" || eerror "Post-postinst QA check ${f} failed to run"
+ )
+ done < <(printf "%s\0" "${qa_checks[@]}" | LC_ALL=C sort -u -z)
+ }
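
postinst_qa_check above resolves overrides purely by file name, with earlier
directories in the paths array taking precedence. A rough Python equivalent
of that lookup (resolve_checks is a hypothetical helper, not portage API):

# Sketch of the "highest priority path wins" resolution used above.
import os

def resolve_checks(paths):
    names = set()
    for d in paths:
        if os.path.isdir(d):
            names.update(os.listdir(d))
    resolved = {}
    for name in sorted(names):        # lexical order, like LC_ALL=C sort -u
        for d in paths:               # first (highest priority) hit wins
            candidate = os.path.join(d, name)
            if os.path.isfile(candidate):
                resolved[name] = candidate
                break
    return resolved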
+
install_mask() {
local root="$1"
shift
diff --cc bin/portageq
index 616aee085,0ac124fde..3518a0af0
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -bO
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Copyright 1999-2016 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
diff --cc pym/portage/package/ebuild/doebuild.py
index c6d613311,ac697a763..a5adf2c92
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -1817,7 -1741,7 +1817,8 @@@ _post_phase_cmds =
],
"postinst" : [
- "postinst_aix"]
++ "postinst_aix",
+ "postinst_qa_check"],
}
def _post_phase_userpriv_perms(mysettings):
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-08-21 13:27 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-08-21 13:27 UTC (permalink / raw
To: gentoo-commits
commit: 9319a3991e322119867135a531d420a884c701f6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 21 13:26:52 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Aug 21 13:26:52 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=9319a399
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
NEWS | 2 ++
RELEASE-NOTES | 7 +++++++
bin/quickpkg | 25 +++++++++++++++++--------
pym/portage/elog/mod_echo.py | 13 ++++++++++---
setup.py | 2 +-
5 files changed, 37 insertions(+), 12 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-08-13 7:21 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-08-13 7:21 UTC (permalink / raw
To: gentoo-commits
commit: cb6b68d4b7659fe601d8149b257925673eb9e03c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 13 07:20:22 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Aug 13 07:20:22 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=cb6b68d4
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.editorconfig | 14 +++
.travis.yml | 4 +-
NEWS | 15 +++
RELEASE-NOTES | 14 +++
bin/ebuild-helpers/doman | 1 +
bin/install-qa-check.d/80multilib-strict | 7 +-
bin/misc-functions.sh | 6 +-
bin/phase-helpers.sh | 45 +++++---
bin/quickpkg | 62 ++++++++---
cnf/make.globals | 3 +-
man/emerge.1 | 7 ++
man/make.conf.5 | 25 +++++
man/portage.5 | 27 ++++-
pym/_emerge/BinpkgExtractorAsync.py | 43 +++++++-
pym/_emerge/actions.py | 16 +++
pym/_emerge/depgraph.py | 117 +++++++++++++++++----
pym/_emerge/main.py | 9 ++
pym/_emerge/search.py | 24 ++++-
pym/portage/const.py | 3 +-
pym/portage/dbapi/bintree.py | 18 ++--
pym/portage/dep/dep_check.py | 6 +-
pym/portage/localization.py | 2 +-
.../package/ebuild/_config/KeywordsManager.py | 4 +-
pym/portage/package/ebuild/_config/UseManager.py | 5 +-
.../package/ebuild/_config/special_env_vars.py | 3 +-
pym/portage/package/ebuild/config.py | 27 +++++
pym/portage/package/ebuild/doebuild.py | 34 +++++-
pym/portage/sync/modules/git/__init__.py | 8 +-
pym/portage/sync/modules/git/git.py | 37 ++++++-
pym/portage/sync/modules/rsync/__init__.py | 3 +-
pym/portage/sync/modules/rsync/rsync.py | 12 +++
pym/portage/sync/syncbase.py | 5 +-
pym/portage/tests/emerge/test_simple.py | 5 +-
.../tests/resolver/test_autounmask_binpkg_use.py | 64 +++++++++++
.../resolver/test_autounmask_keep_keywords.py | 72 +++++++++++++
pym/portage/util/_urlopen.py | 12 +++
pym/portage/util/compression_probe.py | 45 ++++++--
repoman/RELEASE-NOTES | 12 +++
repoman/bin/repoman | 4 +-
repoman/man/repoman.1 | 20 +++-
repoman/pym/repoman/actions.py | 55 ++++++++--
repoman/pym/repoman/argparser.py | 16 ++-
repoman/pym/repoman/main.py | 4 +-
repoman/pym/repoman/modules/scan/ebuild/checks.py | 5 +
.../pym/repoman/modules/scan/keywords/keywords.py | 26 ++++-
.../repoman/modules/scan/metadata/pkgmetadata.py | 10 ++
repoman/pym/repoman/qa_data.py | 1 +
repoman/pym/repoman/scanner.py | 2 +-
repoman/pym/repoman/tests/runTests.py | 4 +-
repoman/runtests | 4 +-
repoman/setup.py | 2 +-
setup.py | 8 +-
52 files changed, 844 insertions(+), 133 deletions(-)
diff --cc bin/misc-functions.sh
index a9306043d,079369313..d143b6b7e
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1111,11 -478,13 +1111,13 @@@ __dyn_package()
[ -z "${PORTAGE_BINPKG_TMPFILE}" ] && \
die "PORTAGE_BINPKG_TMPFILE is unset"
mkdir -p "${PORTAGE_BINPKG_TMPFILE%/*}" || die "mkdir failed"
+ [ -z "${PORTAGE_COMPRESSION_COMMAND}" ] && \
+ die "PORTAGE_COMPRESSION_COMMAND is unset"
tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${PROOT}" . | \
- $PORTAGE_BZIP2_COMMAND -c > "$PORTAGE_BINPKG_TMPFILE"
+ $PORTAGE_COMPRESSION_COMMAND -c > "$PORTAGE_BINPKG_TMPFILE"
assert "failed to pack binary package: '$PORTAGE_BINPKG_TMPFILE'"
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
"$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info"
if [ $? -ne 0 ]; then
rm -f "${PORTAGE_BINPKG_TMPFILE}"
^ permalink raw reply [flat|nested] 195+ messages in thread
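The __dyn_package hunk above streams a tar of the image directory through
whatever ${PORTAGE_COMPRESSION_COMMAND} expands to before the xpak metadata
is attached. A hypothetical Python sketch of that pipeline (pack_image is
not portage code):

# Sketch: roughly "tar ... -C $PROOT . | $PORTAGE_COMPRESSION_COMMAND -c > tmpfile".
import shlex
import subprocess

def pack_image(proot, compression_command, tmpfile):
    with open(tmpfile, "wb") as out:
        tar = subprocess.Popen(["tar", "-cf", "-", "-C", proot, "."],
                               stdout=subprocess.PIPE)
        comp = subprocess.Popen(shlex.split(compression_command) + ["-c"],
                                stdin=tar.stdout, stdout=out)
        tar.stdout.close()            # let the compressor see EOF on its own
        if comp.wait() != 0 or tar.wait() != 0:
            raise RuntimeError("failed to pack binary package: %r" % tmpfile)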
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-05-23 13:34 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-05-23 13:34 UTC (permalink / raw
To: gentoo-commits
commit: feb6264c45dda65c8298349f0f04d06ae0c5ff8e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue May 23 13:25:51 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue May 23 13:25:51 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=feb6264c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
RELEASE-NOTES | 29 +++
bin/ebuild-helpers/dosym | 17 +-
bin/phase-functions.sh | 1 +
bin/phase-helpers.sh | 8 +-
man/emerge.1 | 19 +-
pym/_emerge/AsynchronousLock.py | 89 +++++++-
pym/_emerge/EbuildBuild.py | 23 +-
pym/_emerge/Scheduler.py | 3 -
pym/_emerge/SpawnProcess.py | 3 +-
pym/_emerge/SubProcess.py | 23 +-
pym/_emerge/UseFlagDisplay.py | 19 +-
pym/_emerge/actions.py | 17 +-
pym/_emerge/depgraph.py | 136 +++++++++---
pym/_emerge/main.py | 12 ++
pym/portage/dbapi/bintree.py | 19 +-
pym/portage/dbapi/vartree.py | 22 +-
pym/portage/package/ebuild/_spawn_nofetch.py | 106 +++++----
pym/portage/tests/ebuild/test_ipc_daemon.py | 3 +-
pym/portage/tests/locks/test_asynchronous_lock.py | 15 +-
pym/portage/tests/resolver/test_autounmask.py | 57 ++++-
.../tests/resolver/test_autounmask_use_breakage.py | 40 ++++
.../tests/resolver/test_onlydeps_minimal.py | 47 ++++
.../test_slot_conflict_unsatisfied_deep_deps.py | 61 ++++++
...ask.py => test_slot_operator_complete_graph.py} | 23 +-
.../modules => tests/util/eventloop}/__init__.py | 0
.../tests/{bin => util/eventloop}/__test__.py | 0
.../tests/util/eventloop/test_call_soon_fifo.py | 6 +-
.../modules => tests/util/futures}/__init__.py | 0
.../tests/{bin => util/futures}/__test__.py | 0
.../tests/util/futures/test_done_callback.py | 35 +++
pym/portage/tests/util/test_digraph.py | 4 +-
pym/portage/util/_async/SchedulerInterface.py | 3 +-
pym/portage/util/_eventloop/EventLoop.py | 32 ++-
pym/portage/util/digraph.py | 26 ++-
pym/portage/util/futures/futures.py | 238 ++++++++++++++-------
repoman/man/repoman.1 | 4 +
repoman/pym/repoman/modules/scan/ebuild/checks.py | 13 ++
repoman/pym/repoman/qa_data.py | 4 +
setup.py | 7 +-
src/portage_util_file_copy_reflink_linux.c | 19 +-
40 files changed, 945 insertions(+), 238 deletions(-)
diff --cc bin/ebuild-helpers/dosym
index 19e163498,e96039146..a2157e96e
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2017 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
diff --cc bin/phase-helpers.sh
index 349cd206a,e1dcfd5e8..83eb69389
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2017 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
export DESTTREE=/usr
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-25 9:12 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-25 9:12 UTC (permalink / raw
To: gentoo-commits
commit: b35eed4fc4662694da44a4fcc2114d7c40028487
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 25 09:11:12 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Mar 25 09:11:12 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=b35eed4f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/emirrordist | 8 ++
man/ebuild.5 | 6 +-
pym/_emerge/EbuildPhase.py | 14 +-
pym/_emerge/PollScheduler.py | 57 +++++---
pym/_emerge/Scheduler.py | 7 +-
pym/_emerge/depgraph.py | 66 ++++++++--
pym/portage/_emirrordist/FetchIterator.py | 21 +++
pym/portage/_emirrordist/MirrorDistTask.py | 47 +++++--
pym/portage/package/ebuild/config.py | 3 -
pym/portage/package/ebuild/doebuild.py | 2 +-
pym/portage/package/ebuild/prepare_build_dirs.py | 13 ++
.../resolver/test_binary_pkg_ebuild_visibility.py | 144 +++++++++++++++++++++
.../resolver/test_slot_operator_exclusive_slots.py | 39 ++++++
.../tests/util/eventloop/test_call_soon_fifo.py | 30 +++++
pym/portage/tests/util/test_digraph.py | 2 +
pym/portage/util/_async/AsyncScheduler.py | 16 +--
pym/portage/util/_async/SchedulerInterface.py | 5 +-
pym/portage/util/_eventloop/EventLoop.py | 67 +++++++++-
pym/portage/util/digraph.py | 9 ++
19 files changed, 490 insertions(+), 66 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-24 19:09 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-24 19:09 UTC (permalink / raw
To: gentoo-commits
commit: 142accf1f18d393f0ea0fe8de514f738c3621f30
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 24 13:17:22 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Mar 24 13:17:22 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=142accf1
runtests: use python from PATH
runtests | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/runtests b/runtests
index 64e62bb69..a193c375f 100755
--- a/runtests
+++ b/runtests
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
# Copyright 2010-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-24 7:43 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-24 7:43 UTC (permalink / raw
To: gentoo-commits
commit: 7f8fcb62989c5b03e9163eb37a80cc3872261530
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 24 06:45:43 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Mar 24 06:45:43 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=7f8fcb62
travis: set PORTAGE_BASE
.travis.yml | 1 +
1 file changed, 1 insertion(+)
diff --git a/.travis.yml b/.travis.yml
index c0157332d..da42b5e2e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -21,6 +21,7 @@ script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
- find . -type f -exec
sed -e "s|@PORTAGE_EPREFIX@||"
+ -e "s|@PORTAGE_BASE@|${PWD}|"
-e "s|@PORTAGE_MV@|$(type -P mv)|"
-e "s|@PORTAGE_BASH@|$(type -P bash)|"
-e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|"
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-23 17:46 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-23 17:46 UTC (permalink / raw
To: gentoo-commits
commit: 2c8af75d7929f4db1e606056cc11dbc4a0668370
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 23 17:46:23 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 23 17:46:23 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=2c8af75d
travis: make configured values reflect build-env
.travis.yml | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/.travis.yml b/.travis.yml
index 467bc3cb7..c0157332d 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -21,16 +21,16 @@ script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
- find . -type f -exec
sed -e "s|@PORTAGE_EPREFIX@||"
- -e "s|@PORTAGE_MV@|/bin/mv|"
- -e "s|@PORTAGE_BASH@|/bin/bash|"
+ -e "s|@PORTAGE_MV@|$(type -P mv)|"
+ -e "s|@PORTAGE_BASH@|$(type -P bash)|"
-e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|"
-e "s|@DEFAULT_PATH@|/usr/bin:/bin|"
-e "s|@EXTRA_PATH@|/usr/sbin:/sbin|"
- -e "s|@portagegroup@|portage|"
- -e "s|@portageuser@|portage|"
- -e "s|@rootuser@|root|"
- -e "s|@rootuid@|0|"
- -e "s|@rootgid@|0|"
+ -e "s|@portagegroup@|$(id -gn)|"
+ -e "s|@portageuser@|$(id -un)|"
+ -e "s|@rootuser@|$(id -un)|"
+ -e "s|@rootuid@|$(id -u)|"
+ -e "s|@rootgid@|$(id -g)|"
-e "s|@sysconfdir@|/etc|"
-i '{}' +
- ./setup.py test
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-23 17:32 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-23 17:32 UTC (permalink / raw
To: gentoo-commits
commit: 3e1abd7b6335e0da51d907f06f86a6bbd0e4243b
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 23 17:32:32 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 23 17:32:32 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=3e1abd7b
travis: fix stypo
.travis.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.travis.yml b/.travis.yml
index a9cc22de6..467bc3cb7 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -30,7 +30,7 @@ script:
-e "s|@portageuser@|portage|"
-e "s|@rootuser@|root|"
-e "s|@rootuid@|0|"
- -e "s|@rootgid@|0)|"
+ -e "s|@rootgid@|0|"
-e "s|@sysconfdir@|/etc|"
-i '{}' +
- ./setup.py test
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-23 17:23 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-23 17:23 UTC (permalink / raw
To: gentoo-commits
commit: c051db133f5d7dd4f392878cd6ed3e8c21bba89c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 23 17:22:53 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 23 17:22:53 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=c051db13
travis: backslashes are copied verbatim, remove
.travis.yml | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/.travis.yml b/.travis.yml
index 27c134fa7..a9cc22de6 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -19,19 +19,19 @@ install:
script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
- - find . -type f -exec \
- sed -e "s|@PORTAGE_EPREFIX@||" \
- -e "s|@PORTAGE_MV@|/bin/mv|" \
- -e "s|@PORTAGE_BASH@|/bin/bash|" \
- -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|" \
- -e "s|@DEFAULT_PATH@|/usr/bin:/bin|" \
- -e "s|@EXTRA_PATH@|/usr/sbin:/sbin|" \
- -e "s|@portagegroup@|portage|" \
- -e "s|@portageuser@|portage|" \
- -e "s|@rootuser@|root|" \
- -e "s|@rootuid@|0|" \
- -e "s|@rootgid@|0)|" \
- -e "s|@sysconfdir@|/etc|" \
+ - find . -type f -exec
+ sed -e "s|@PORTAGE_EPREFIX@||"
+ -e "s|@PORTAGE_MV@|/bin/mv|"
+ -e "s|@PORTAGE_BASH@|/bin/bash|"
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|"
+ -e "s|@DEFAULT_PATH@|/usr/bin:/bin|"
+ -e "s|@EXTRA_PATH@|/usr/sbin:/sbin|"
+ -e "s|@portagegroup@|portage|"
+ -e "s|@portageuser@|portage|"
+ -e "s|@rootuser@|root|"
+ -e "s|@rootuid@|0|"
+ -e "s|@rootgid@|0)|"
+ -e "s|@sysconfdir@|/etc|"
-i '{}' +
- ./setup.py test
- ./setup.py install --root=/tmp/install-root
^ permalink raw reply related [flat|nested] 195+ messages in thread
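The two travis commits above boil down to a YAML detail: in a plain
multi-line scalar, line breaks fold into single spaces and a trailing
backslash is copied verbatim into the value, so shell-style continuations
leak into the command being run. A short demonstration, assuming PyYAML is
available (not taken from the repository):

# The trailing backslash survives folding and ends up in the shell command.
import yaml

doc = """
script:
  - find . -type f -exec \\
    sed -e "s|@PORTAGE_EPREFIX@||" \\
    -i '{}' +
"""
print(yaml.safe_load(doc)["script"][0])
# find . -type f -exec \ sed -e "s|@PORTAGE_EPREFIX@||" \ -i '{}' +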
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-23 15:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-23 15:38 UTC (permalink / raw
To: gentoo-commits
commit: 0d4e946cccee70a58059cb48e1237efa95b900f5
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 23 15:38:35 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 23 15:38:35 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=0d4e946c
travis: try to fix build for Prefix
.travis.yml | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/.travis.yml b/.travis.yml
index 196e3520a..27c134fa7 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -19,6 +19,20 @@ install:
script:
- printf "[build_ext]\nportage-ext-modules=true" >> setup.cfg
+ - find . -type f -exec \
+ sed -e "s|@PORTAGE_EPREFIX@||" \
+ -e "s|@PORTAGE_MV@|/bin/mv|" \
+ -e "s|@PORTAGE_BASH@|/bin/bash|" \
+ -e "s|@PREFIX_PORTAGE_PYTHON@|$(type -P python)|" \
+ -e "s|@DEFAULT_PATH@|/usr/bin:/bin|" \
+ -e "s|@EXTRA_PATH@|/usr/sbin:/sbin|" \
+ -e "s|@portagegroup@|portage|" \
+ -e "s|@portageuser@|portage|" \
+ -e "s|@rootuser@|root|" \
+ -e "s|@rootuid@|0|" \
+ -e "s|@rootgid@|0)|" \
+ -e "s|@sysconfdir@|/etc|" \
+ -i '{}' +
- ./setup.py test
- ./setup.py install --root=/tmp/install-root
# prevent repoman tests from trying to fetch metadata.xsd
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-17 8:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-17 8:25 UTC (permalink / raw
To: gentoo-commits
commit: 0ed453cdc3f1d395fa73a1abcf4d0870626d9fb4
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 17 08:24:56 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Mar 17 08:24:56 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=0ed453cd
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.travis.yml | 9 +-
NEWS | 7 +
RELEASE-NOTES | 18 +
man/emerge.1 | 37 +-
pym/_emerge/create_depgraph_params.py | 5 +
pym/_emerge/depgraph.py | 119 +++++--
pym/_emerge/main.py | 5 +
pym/portage/_emirrordist/FetchTask.py | 2 +-
pym/portage/checksum.py | 340 +++++++++++-------
pym/portage/const.py | 3 +-
pym/portage/dbapi/bintree.py | 4 +-
pym/portage/dep/__init__.py | 4 +-
pym/portage/eclass_cache.py | 2 +-
pym/portage/manifest.py | 4 +-
pym/portage/package/ebuild/config.py | 7 +-
pym/portage/package/ebuild/fetch.py | 11 +-
pym/portage/tests/resolver/ResolverPlayground.py | 5 +
.../soname/test_slot_conflict_reinstall.py | 16 +-
pym/portage/tests/resolver/test_bdeps.py | 215 ++++++++++++
pym/portage/tests/resolver/test_slot_abi.py | 12 +-
.../tests/resolver/test_slot_conflict_rebuild.py | 8 +-
.../resolver/test_slot_operator_exclusive_slots.py | 109 ++++++
.../test_slot_operator_runtime_pkg_mask.py | 136 ++++++++
.../modules => tests/util/file_copy}/__init__.py | 0
.../tests/{bin => util/file_copy}/__test__.py | 0
pym/portage/tests/util/file_copy/test_copyfile.py | 71 ++++
pym/portage/tests/util/test_checksum.py | 106 ++++++
pym/portage/util/file_copy/__init__.py | 36 ++
pym/portage/util/movefile.py | 5 +-
repoman/pym/repoman/actions.py | 21 +-
repoman/pym/repoman/modules/scan/ebuild/checks.py | 26 ++
repoman/pym/repoman/modules/scan/ebuild/errors.py | 2 +
repoman/pym/repoman/qa_data.py | 4 +-
repoman/pym/repoman/utilities.py | 39 ++-
setup.py | 11 +-
src/portage_util_file_copy_reflink_linux.c | 385 +++++++++++++++++++++
36 files changed, 1573 insertions(+), 211 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-02 8:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-02 8:48 UTC (permalink / raw
To: gentoo-commits
commit: 12fa6743561e164442b35bbadc2c264370138ccb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 2 08:48:04 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 2 08:48:04 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=12fa6743
tarball: exclude repoman sources
tarball.sh | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index 930a4efc3..9c3f64785 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -15,8 +15,11 @@ export DEST="${TMP}/${PKG}-${V}"
./tabcheck.py $(
find ./ -name .git -o -name .hg -prune -o -type f ! -name '*.py' -print \
- | xargs grep -l "#\!@PREFIX_PORTAGE_PYTHON@"
- find ./ -name .git -o -name .hg -prune -o -type f -name '*.py' -print
+ | xargs grep -l "#\!@PREFIX_PORTAGE_PYTHON@" \
+ | grep -v "^\./repoman/"
+ find ./ -name .git -o -name .hg -prune -o -type f -name '*.py' -print \
+ | grep -v "^\./repoman/"
+
)
if [[ -e ${DEST} ]]; then
@@ -25,7 +28,7 @@ if [[ -e ${DEST} ]]; then
fi
install -d -m0755 ${DEST}
-rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
+rsync -a --exclude='.git' --exclude='.hg' --exclude="repoman/" . ${DEST}
sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
sed -i -e "/version = /s/'[^']\+'/'${V}-prefix'/" ${DEST}/setup.py
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/{,ru/}*.[15]
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-03-02 8:18 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-03-02 8:18 UTC (permalink / raw
To: gentoo-commits
commit: 5e05dc6ff4903a2035dd1c95babf6f461249fca4
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 2 08:17:59 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 2 08:17:59 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=5e05dc6f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
RELEASE-NOTES | 35 ++++++
man/emerge.1 | 8 +-
pym/portage/checksum.py | 78 +++++++++----
pym/portage/const.py | 6 +-
pym/portage/dbapi/vartree.py | 15 +--
pym/portage/emaint/modules/binhost/binhost.py | 2 +-
pym/portage/emaint/modules/config/config.py | 8 +-
pym/portage/emaint/modules/logs/logs.py | 28 +++--
pym/portage/emaint/modules/move/move.py | 6 +-
pym/portage/emaint/modules/resume/resume.py | 2 +-
pym/portage/emaint/modules/sync/sync.py | 136 ++++++++++++----------
pym/portage/emaint/modules/world/world.py | 2 +-
pym/portage/sync/modules/git/__init__.py | 4 -
pym/portage/sync/modules/git/git.py | 9 +-
pym/portage/util/__init__.py | 9 +-
repoman/MANIFEST.in | 1 +
repoman/RELEASE-NOTES | 18 +++
repoman/pym/repoman/copyrights.py | 10 +-
repoman/pym/repoman/modules/scan/ebuild/checks.py | 11 +-
repoman/pym/repoman/modules/scan/ebuild/errors.py | 6 +-
repoman/setup.py | 2 +-
setup.py | 2 +-
22 files changed, 251 insertions(+), 147 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-02-23 14:05 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-02-23 14:05 UTC (permalink / raw
To: gentoo-commits
commit: 04b6d7c94e37cff191b343c83e352983ebe9faa6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 23 14:05:06 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 23 14:05:06 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=04b6d7c9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.gitignore | 2 ++
.travis.yml | 1 +
bin/check-implicit-pointer-usage.py | 22 ++++++------
bin/dispatch-conf | 4 +--
bin/etc-update | 19 +++++++---
man/portage.5 | 7 ++--
pym/_emerge/actions.py | 13 +++++--
pym/_emerge/main.py | 4 +--
pym/portage/cvstree.py | 6 ++--
pym/portage/emaint/modules/sync/sync.py | 11 +++---
pym/portage/glsa.py | 4 +--
pym/portage/news.py | 4 +--
pym/portage/package/ebuild/fetch.py | 4 +--
pym/portage/repository/config.py | 16 ++++++---
pym/portage/sync/controller.py | 13 +++++--
pym/portage/sync/modules/git/__init__.py | 14 +++++---
pym/portage/sync/modules/git/git.py | 4 ++-
pym/portage/tests/{bin => sets}/__test__.py | 0
pym/portage/tests/{bin => sync}/__test__.py | 0
pym/portage/tests/sync/test_sync_local.py | 23 ++++++++++---
pym/portage/tests/util/test_getconfig.py | 4 +--
pym/portage/tests/util/test_varExpand.py | 4 +--
pym/portage/util/__init__.py | 4 +--
pym/portage/util/_async/PopenProcess.py | 9 ++++-
pym/portage/util/compression_probe.py | 5 ++-
pym/portage/util/lafilefixer.py | 8 ++---
pym/portage/xml/metadata.py | 4 +--
repoman/man/repoman.1 | 5 ++-
repoman/pym/repoman/modules/scan/ebuild/checks.py | 40 ++++++++--------------
repoman/pym/repoman/modules/scan/fetch/fetches.py | 3 ++
repoman/pym/repoman/modules/vcs/cvs/changes.py | 2 +-
repoman/pym/repoman/modules/vcs/cvs/status.py | 6 ++--
repoman/pym/repoman/modules/vcs/git/status.py | 15 ++++----
repoman/pym/repoman/modules/vcs/svn/changes.py | 2 +-
repoman/pym/repoman/qa_data.py | 3 ++
.../pym/repoman/tests/changelog}/__test__.py | 0
repoman/runtests | 1 +
runtests | 1 +
38 files changed, 179 insertions(+), 108 deletions(-)
diff --cc bin/dispatch-conf
index befc8dcec,099c37f57..dc7e182e6
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2016 Gentoo Foundation
+ # Copyright 1999-2017 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-01-27 15:08 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-01-27 15:08 UTC (permalink / raw
To: gentoo-commits
commit: 3c689a5900519c178b860bfefbb93967231785df
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 27 15:07:09 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jan 27 15:07:09 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=3c689a59
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.gitignore | 2 +
.travis.yml | 14 +-
MANIFEST.in | 1 -
NEWS | 26 +
README | 19 +
RELEASE-NOTES | 97 ++-
bin/chpathtool.py | 2 +-
bin/dispatch-conf | 15 +-
bin/eapi.sh | 4 -
bin/ebuild | 25 +-
bin/ebuild.sh | 41 +-
bin/egencache | 87 +--
bin/glsa-check | 19 +-
bin/helper-functions.sh | 9 +-
bin/install-qa-check.d/10executable-issues | 24 +-
bin/install-qa-check.d/60openrc | 8 +-
bin/install-qa-check.d/80libraries | 2 +-
bin/install-qa-check.d/90gcc-warnings | 14 +-
bin/phase-functions.sh | 8 +-
bin/phase-helpers.sh | 18 +-
bin/portageq | 13 +-
bin/save-ebuild-env.sh | 2 +-
bin/socks5-server.py | 5 +
cnf/make.conf.example | 20 +-
cnf/make.conf.example.alpha.diff | 2 +-
cnf/make.conf.example.amd64-fbsd.diff | 2 +-
cnf/make.conf.example.amd64.diff | 2 +-
cnf/make.conf.example.arm.diff | 4 +-
cnf/make.conf.example.hppa.diff | 2 +-
cnf/make.conf.example.ia64.diff | 2 +-
cnf/make.conf.example.m68k.diff | 2 +-
cnf/make.conf.example.mips.diff | 2 +-
cnf/make.conf.example.ppc.diff | 2 +-
cnf/make.conf.example.ppc64.diff | 2 +-
cnf/make.conf.example.s390.diff | 2 +-
cnf/make.conf.example.sh.diff | 4 +-
cnf/make.conf.example.sparc-fbsd.diff | 2 +-
cnf/make.conf.example.sparc.diff | 8 +-
cnf/make.conf.example.x86-fbsd.diff | 2 +-
cnf/make.conf.example.x86.diff | 2 +-
cnf/make.globals | 4 +-
cnf/metadata.dtd | 102 ---
doc/config/sets.docbook | 4 +-
doc/package/ebuild/eapi/4-python.docbook | 2 +-
doc/package/ebuild/eapi/5-progress.docbook | 2 +-
doc/qa.docbook | 12 +-
man/color.map.5 | 2 +-
man/dispatch-conf.1 | 2 +-
man/ebuild.1 | 2 +-
man/ebuild.5 | 10 +-
man/egencache.1 | 2 +-
man/emaint.1 | 10 +-
man/emerge.1 | 32 +-
man/emirrordist.1 | 2 +-
man/env-update.1 | 2 +-
man/etc-update.1 | 2 +-
man/fixpackages.1 | 2 +-
man/make.conf.5 | 20 +-
man/portage.5 | 23 +-
man/quickpkg.1 | 2 +-
man/ru/color.map.5 | 2 +-
man/ru/dispatch-conf.1 | 2 +-
man/ru/ebuild.1 | 2 +-
man/ru/env-update.1 | 2 +-
man/ru/etc-update.1 | 2 +-
man/ru/fixpackages.1 | 2 +-
pym/_emerge/EbuildBuild.py | 22 +-
pym/_emerge/EbuildPhase.py | 2 +-
pym/_emerge/MiscFunctionsProcess.py | 3 +-
pym/_emerge/Scheduler.py | 24 +-
pym/_emerge/actions.py | 51 +-
pym/_emerge/depgraph.py | 178 +++--
pym/_emerge/main.py | 41 +-
pym/_emerge/post_emerge.py | 6 +-
pym/_emerge/resolver/output.py | 5 +-
pym/_emerge/resolver/output_helpers.py | 11 +
pym/_emerge/resolver/slot_collision.py | 37 +-
pym/_emerge/search.py | 26 +-
pym/portage/__init__.py | 23 +-
pym/portage/_emirrordist/FetchTask.py | 2 +-
pym/portage/_selinux.py | 24 +-
pym/portage/_sets/__init__.py | 37 +-
pym/portage/cache/anydbm.py | 4 +-
pym/portage/cache/ebuild_xattr.py | 2 +-
pym/portage/cache/flat_hash.py | 53 +-
pym/portage/cache/fs_template.py | 2 +-
pym/portage/cache/mappings.py | 4 +-
pym/portage/cache/sqlite.py | 4 +-
pym/portage/cache/template.py | 36 +-
pym/portage/const.py | 2 +-
pym/portage/data.py | 2 +-
pym/portage/dbapi/_SyncfsProcess.py | 2 +-
pym/portage/dbapi/bintree.py | 40 +-
pym/portage/dbapi/porttree.py | 4 +-
pym/portage/dbapi/vartree.py | 135 +++-
pym/portage/dep/__init__.py | 16 +-
pym/portage/dep/dep_check.py | 41 +-
pym/portage/dispatch_conf.py | 2 +-
pym/portage/emaint/main.py | 12 +-
pym/portage/emaint/modules/binhost/__init__.py | 1 +
pym/portage/emaint/modules/binhost/binhost.py | 6 +-
pym/portage/emaint/modules/config/__init__.py | 1 +
pym/portage/emaint/modules/config/config.py | 6 +-
pym/portage/emaint/modules/logs/__init__.py | 1 +
pym/portage/emaint/modules/logs/logs.py | 27 +-
pym/portage/emaint/modules/merges/__init__.py | 1 +
pym/portage/emaint/modules/merges/merges.py | 18 +-
pym/portage/emaint/modules/move/__init__.py | 2 +
pym/portage/emaint/modules/move/move.py | 9 +-
pym/portage/emaint/modules/resume/__init__.py | 1 +
pym/portage/emaint/modules/resume/resume.py | 3 +-
pym/portage/emaint/modules/sync/__init__.py | 1 +
pym/portage/emaint/modules/sync/sync.py | 19 +-
pym/portage/emaint/modules/world/__init__.py | 1 +
pym/portage/emaint/modules/world/world.py | 8 +-
pym/portage/glsa.py | 8 +-
pym/portage/localization.py | 6 +-
pym/portage/locks.py | 59 +-
pym/portage/manifest.py | 23 +-
pym/portage/metadata.py | 2 +-
pym/portage/module.py | 15 +-
pym/portage/news.py | 79 ++-
pym/portage/output.py | 2 +-
pym/portage/package/ebuild/config.py | 25 +-
pym/portage/package/ebuild/digestcheck.py | 2 +-
pym/portage/package/ebuild/doebuild.py | 28 +-
pym/portage/package/ebuild/prepare_build_dirs.py | 6 -
pym/portage/process.py | 4 +
pym/portage/repository/config.py | 65 +-
pym/portage/sync/modules/cvs/__init__.py | 1 +
pym/portage/sync/modules/cvs/cvs.py | 4 +-
pym/portage/sync/modules/git/__init__.py | 1 +
pym/portage/sync/modules/git/git.py | 4 +-
pym/portage/sync/modules/rsync/__init__.py | 1 +
pym/portage/sync/modules/rsync/rsync.py | 4 +-
pym/portage/sync/modules/svn/__init__.py | 1 +
pym/portage/sync/modules/svn/svn.py | 6 +-
pym/portage/sync/modules/webrsync/__init__.py | 1 +
pym/portage/sync/modules/webrsync/webrsync.py | 2 +-
.../tests/ebuild/test_array_fromfile_eof.py | 2 +-
pym/portage/tests/ebuild/test_ipc_daemon.py | 23 +-
...bi.py => test_emerge_blocker_file_collision.py} | 101 ++-
pym/portage/tests/emerge/test_simple.py | 18 +-
pym/portage/tests/news/test_NewsItem.py | 1 +
pym/portage/tests/process/test_poll.py | 2 +-
pym/portage/tests/resolver/ResolverPlayground.py | 24 +-
.../soname/test_slot_conflict_reinstall.py | 1 +
.../resolver/test_imagemagick_graphicsmagick.py | 104 +++
.../resolver/test_runtime_cycle_merge_order.py | 72 ++
.../tests/resolver/test_slot_conflict_rebuild.py | 2 +-
.../resolver/test_slot_operator_reverse_deps.py | 113 +++
pym/portage/tests/util/test_getconfig.py | 4 +-
pym/portage/util/__init__.py | 8 +-
pym/portage/util/_desktop_entry.py | 27 +-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 38 +-
pym/portage/util/_dyn_libs/LinkageMapMachO.py | 4 +-
pym/portage/util/_eventloop/EventLoop.py | 17 +-
pym/portage/util/changelog.py | 69 ++
pym/portage/util/configparser.py | 76 +++
pym/portage/util/env_update.py | 6 +-
pym/{repoman => portage/util/futures}/__init__.py | 0
pym/portage/util/futures/extendedfutures.py | 73 ++
pym/portage/util/futures/futures.py | 118 ++++
pym/portage/util/locale.py | 20 +-
pym/portage/util/writeable_check.py | 28 +-
pym/portage/versions.py | 31 +-
pym/portage/xml/metadata.py | 89 ++-
pym/portage/xpak.py | 2 +-
pym/repoman/_xml.py | 106 ---
pym/repoman/checks/ebuilds/eclasses/live.py | 39 --
pym/repoman/checks/ebuilds/isebuild.py | 71 --
pym/repoman/checks/ebuilds/manifests.py | 102 ---
pym/repoman/checks/ebuilds/misc.py | 57 --
pym/repoman/checks/ebuilds/pkgmetadata.py | 177 -----
pym/repoman/checks/ebuilds/thirdpartymirrors.py | 39 --
pym/repoman/checks/ebuilds/variables/eapi.py | 44 --
pym/repoman/checks/ebuilds/variables/license.py | 47 --
pym/repoman/checks/herds/__init__.py | 0
pym/repoman/ebuild.py | 29 -
pym/repoman/metadata.py | 153 -----
pym/repoman/modules/__init__.py | 0
pym/repoman/modules/commit/__init__.py | 0
pym/repoman/modules/fix/__init__.py | 0
pym/repoman/modules/full/__init__.py | 0
pym/repoman/modules/manifest/__init__.py | 0
pym/repoman/modules/scan/__init__.py | 0
pym/repoman/scan.py | 172 -----
pym/repoman/scanner.py | 755 ---------------------
pym/repoman/vcs/__init__.py | 0
pym/repoman/vcs/vcs.py | 287 --------
pym/repoman/vcs/vcsstatus.py | 114 ----
.../__test__.py => repoman/.repoman_not_installed | 0
repoman/MANIFEST.in | 3 +
repoman/NEWS | 9 +
README => repoman/README | 9 -
repoman/RELEASE-NOTES | 39 ++
TEST-NOTES => repoman/TEST-NOTES | 0
{bin => repoman/bin}/repoman | 11 +-
repoman/cnf/metadata.xsd | 548 +++++++++++++++
{man => repoman/man}/repoman.1 | 13 +-
repoman/pym/repoman/__init__.py | 79 +++
{pym => repoman/pym}/repoman/_portage.py | 0
{pym => repoman/pym}/repoman/_subprocess.py | 4 +-
{pym => repoman/pym}/repoman/actions.py | 471 +++----------
{pym => repoman/pym}/repoman/argparser.py | 0
{pym => repoman/pym}/repoman/check_missingslot.py | 0
{pym => repoman/pym}/repoman/checks/__init__.py | 0
.../pym/repoman/checks/herds}/__init__.py | 0
.../pym}/repoman/checks/herds/herdbase.py | 2 +-
.../pym}/repoman/checks/herds/metadata.py | 0
{pym => repoman/pym}/repoman/copyrights.py | 0
{pym => repoman/pym}/repoman/errors.py | 0
{pym => repoman/pym}/repoman/gpg.py | 2 +-
{pym => repoman/pym}/repoman/main.py | 49 +-
repoman/pym/repoman/metadata.py | 128 ++++
.../pym/repoman/modules}/__init__.py | 0
.../pym/repoman/modules/commit}/__init__.py | 0
repoman/pym/repoman/modules/commit/manifest.py | 115 ++++
.../pym}/repoman/modules/commit/repochecks.py | 4 +-
.../pym/repoman/modules/scan}/__init__.py | 0
.../pym/repoman/modules/scan/depend/__init__.py | 32 +
.../repoman/modules/scan/depend/_depend_checks.py | 194 ++++++
.../pym/repoman/modules/scan/depend/_gen_arches.py | 57 ++
repoman/pym/repoman/modules/scan/depend/profile.py | 256 +++++++
.../repoman/modules/scan/directories/__init__.py | 48 ++
.../pym/repoman/modules/scan}/directories/files.py | 43 +-
.../pym/repoman/modules/scan/directories/mtime.py | 30 +
repoman/pym/repoman/modules/scan/eapi/__init__.py | 29 +
repoman/pym/repoman/modules/scan/eapi/eapi.py | 49 ++
.../pym/repoman/modules/scan/ebuild/__init__.py | 58 ++
.../pym/repoman/modules/scan/ebuild}/checks.py | 8 +-
repoman/pym/repoman/modules/scan/ebuild/ebuild.py | 238 +++++++
.../pym/repoman/modules/scan/ebuild}/errors.py | 0
.../pym/repoman/modules/scan/ebuild/multicheck.py | 56 ++
.../pym/repoman/modules/scan/eclasses/__init__.py | 47 ++
repoman/pym/repoman/modules/scan/eclasses/live.py | 76 +++
.../pym/repoman/modules/scan}/eclasses/ruby.py | 26 +-
repoman/pym/repoman/modules/scan/fetch/__init__.py | 33 +
.../pym/repoman/modules/scan/fetch}/fetches.py | 103 ++-
.../pym/repoman/modules/scan/keywords/__init__.py | 33 +
.../pym/repoman/modules/scan/keywords}/keywords.py | 95 +--
.../pym/repoman/modules/scan/manifest/__init__.py | 30 +
.../pym/repoman/modules/scan/manifest/manifests.py | 56 ++
.../pym/repoman/modules/scan/metadata/__init__.py | 85 +++
.../repoman/modules/scan/metadata}/description.py | 23 +-
.../modules/scan/metadata/ebuild_metadata.py | 84 +++
.../repoman/modules/scan/metadata/pkgmetadata.py | 198 ++++++
.../pym/repoman/modules/scan/metadata}/restrict.py | 32 +-
.../repoman/modules/scan/metadata}/use_flags.py | 28 +-
.../pym/repoman/modules/scan/options/__init__.py | 28 +
.../pym/repoman/modules/scan/options/options.py | 29 +
repoman/pym/repoman/modules/scan/scan.py | 66 ++
repoman/pym/repoman/modules/scan/scanbase.py | 79 +++
repoman/pym/repoman/modules/vcs/None/__init__.py | 34 +
repoman/pym/repoman/modules/vcs/None/changes.py | 50 ++
repoman/pym/repoman/modules/vcs/None/status.py | 53 ++
repoman/pym/repoman/modules/vcs/__init__.py | 14 +
repoman/pym/repoman/modules/vcs/bzr/__init__.py | 34 +
repoman/pym/repoman/modules/vcs/bzr/changes.py | 68 ++
repoman/pym/repoman/modules/vcs/bzr/status.py | 70 ++
repoman/pym/repoman/modules/vcs/changes.py | 169 +++++
repoman/pym/repoman/modules/vcs/cvs/__init__.py | 34 +
repoman/pym/repoman/modules/vcs/cvs/changes.py | 118 ++++
repoman/pym/repoman/modules/vcs/cvs/status.py | 131 ++++
repoman/pym/repoman/modules/vcs/git/__init__.py | 34 +
repoman/pym/repoman/modules/vcs/git/changes.py | 120 ++++
repoman/pym/repoman/modules/vcs/git/status.py | 79 +++
repoman/pym/repoman/modules/vcs/hg/__init__.py | 34 +
repoman/pym/repoman/modules/vcs/hg/changes.py | 105 +++
repoman/pym/repoman/modules/vcs/hg/status.py | 65 ++
repoman/pym/repoman/modules/vcs/settings.py | 108 +++
repoman/pym/repoman/modules/vcs/svn/__init__.py | 34 +
repoman/pym/repoman/modules/vcs/svn/changes.py | 142 ++++
repoman/pym/repoman/modules/vcs/svn/status.py | 150 ++++
repoman/pym/repoman/modules/vcs/vcs.py | 161 +++++
{pym => repoman/pym}/repoman/profile.py | 0
{pym => repoman/pym}/repoman/qa_data.py | 9 +-
{pym => repoman/pym}/repoman/qa_tracker.py | 0
{pym => repoman/pym}/repoman/repos.py | 13 +-
repoman/pym/repoman/scanner.py | 436 ++++++++++++
.../pym/repoman}/tests/__init__.py | 17 +-
.../bin => repoman/pym/repoman/tests}/__test__.py | 0
.../pym/repoman/tests/changelog}/__init__.py | 0
.../repoman/tests/changelog}/test_echangelog.py | 0
.../pym/repoman}/tests/runTests.py | 24 +-
.../pym/repoman/tests/simple}/__init__.py | 0
repoman/pym/repoman/tests/simple/__test__.py | 1 +
.../pym/repoman/tests/simple}/test_simple.py | 25 +-
{pym => repoman/pym}/repoman/utilities.py | 7 -
runtests => repoman/runtests | 22 +-
setup.py => repoman/setup.py | 205 +-----
runtests | 16 +-
setup.py | 42 +-
src/portage_util_libc.c | 68 ++
testpath | 9 +-
295 files changed, 7876 insertions(+), 3900 deletions(-)
diff --cc bin/dispatch-conf
index 4215e5b,fdf564e..befc8dc
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2016 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/egencache
index cf503a3,e994b4a..bb7e7a9
--- a/bin/egencache
+++ b/bin/egencache
@@@ -59,10 -58,10 +58,11 @@@ from portage.util._async.AsyncFunction
from portage.util._async.run_main_scheduler import run_main_scheduler
from portage.util._async.TaskScheduler import TaskScheduler
from portage.util._eventloop.global_event_loop import global_event_loop
+ from portage.util.changelog import ChangeLogTypeSort
from portage import cpv_getkey
from portage.dep import Atom, isjustname
- from portage.versions import pkgsplit, vercmp, _pkg_str
+ from portage.versions import vercmp
+from portage.const import EPREFIX
try:
from xml.etree import ElementTree
diff --cc bin/portageq
index ef2c7fb,06c8e0e..616aee0
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2015 Gentoo Foundation
+ # Copyright 1999-2016 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function, unicode_literals
diff --cc cnf/make.conf.example
index 85a851d,04f3a02..6de7a90
--- a/cnf/make.conf.example
+++ b/cnf/make.conf.example
@@@ -141,9 -141,9 +141,9 @@@
# PORTDIR_OVERLAY is a directory where local ebuilds may be stored without
# concern that they will be deleted by rsync updates. Default is not
# defined.
-#PORTDIR_OVERLAY=/usr/local/portage
+#PORTDIR_OVERLAY=@PORTAGE_EPREFIX@/usr/local/portage
- # Fetching files
+ # Fetching files
# ==============
#
# If you need to set a proxy for wget or lukemftp, add the appropriate "export
@@@ -215,12 -215,8 +215,12 @@@
# one of them sync from the rotations above. The other boxes can then rsync
# from the local rsync server, reducing the load on the mirrors.
# Instructions for setting up a local rsync server are available here:
- # http://www.gentoo.org/doc/en/rsync.xml
+ # https://wiki.gentoo.org/wiki/Local_Mirror
#
+# For Gentoo Prefix, use the following URL:
+#
+# Default: "rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
+#
#SYNC="rsync://rsync.gentoo.org/gentoo-portage"
#
# PORTAGE_RSYNC_RETRIES sets the number of times portage will attempt to retrieve
diff --cc pym/portage/util/_dyn_libs/LinkageMapELF.py
index b32ee81,a063621..54a25e0
--- a/pym/portage/util/_dyn_libs/LinkageMapELF.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapELF.py
@@@ -19,8 -22,13 +22,14 @@@ from portage.util import normalize_pat
from portage.util import varexpand
from portage.util import writemsg_level
from portage.util._dyn_libs.NeededEntry import NeededEntry
+ from portage.util.elf.header import ELFHeader
+from portage.const import EPREFIX
+ if sys.hexversion >= 0x3000000:
+ _unicode = str
+ else:
+ _unicode = unicode
+
# Map ELF e_machine values from NEEDED.ELF.2 to approximate multilib
# categories. This approximation will produce incorrect results on x32
# and mips systems, but the result is not worse than using the raw
diff --cc pym/portage/util/_dyn_libs/LinkageMapMachO.py
index 7cfb18e,a063621..5cfbadb
--- a/pym/portage/util/_dyn_libs/LinkageMapMachO.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapMachO.py
@@@ -1,4 -1,4 +1,4 @@@
- # Copyright 1998-2011 Gentoo Foundation
-# Copyright 1998-2016 Gentoo Foundation
++# Copyright 1998-2017 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import errno
@@@ -231,7 -268,7 +231,7 @@@ class LinkageMapMachO(object)
continue
plibs.update((x, cpv) for x in items)
if plibs:
- args = [EPREFIX + "/usr/bin/scanmacho", "-qF", "%a;%F;%S;%n"]
- args = [os.path.join(EPREFIX or "/", "usr/bin/scanelf"), "-qF", "%a;%F;%S;%r;%n"]
++ args = [os.path.join(EPREFIX or "/", "usr/bin/scanmacho"), "-qF", "%a;%F;%S;%n"]
args.extend(os.path.join(root, x.lstrip("." + os.sep)) \
for x in plibs)
try:
diff --cc pym/portage/versions.py
index fc7af27,adfb1c3..fb49967
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@@ -257,42 -256,18 +257,42 @@@ def vercmp(ver1, ver2, silent=1)
if rval:
return rval
- # the suffix part is equal to, so finally check the revision
+ # The suffix part is equal too, so finally check the revision
+ # PREFIX hack: a revision starting with 0 is an 'inter-revision',
+ # which means that it is possible to create revisions on revisions.
+ # An example is -r01.1 which is the first revision of -r1. Note
+ # that a period (.) is used to separate the real revision and the
+ # secondary revision number. This trick is in use to allow revision
+ # bumps in ebuilds synced from the main tree for Prefix changes,
+ # while still staying in the main tree versioning scheme.
if match1.group(10):
- r1 = int(match1.group(10))
+ if match1.group(10)[0] == '0' and '.' in match1.group(10):
+ t = match1.group(10)[1:].split(".")
+ r1 = int(t[0])
+ r3 = int(t[1])
+ else:
+ r1 = int(match1.group(10))
+ r3 = 0
else:
r1 = 0
+ r3 = 0
if match2.group(10):
- r2 = int(match2.group(10))
+ if match2.group(10)[0] == '0' and '.' in match2.group(10):
+ t = match2.group(10)[1:].split(".")
+ r2 = int(t[0])
+ r4 = int(t[1])
+ else:
+ r2 = int(match2.group(10))
+ r4 = 0
else:
r2 = 0
+ r4 = 0
+ if r1 == r2 and (r3 != 0 or r4 != 0):
+ r1 = r3
+ r2 = r4
rval = (r1 > r2) - (r1 < r2)
return rval
-
+
def pkgcmp(pkg1, pkg2):
"""
Compare 2 package versions created in pkgsplit format.
^ permalink raw reply [flat|nested] 195+ messages in thread
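The vercmp hunk above documents the Prefix "inter-revision" scheme: -r01.1 is
the first Prefix sub-revision of -r1, so it sorts after -r1 and before -r2.
A worked sketch of the same r1/r2/r3/r4 logic (hypothetical helper names,
revisions given without the leading "-r"):

# Sketch of the inter-revision comparison from the hunk above.
def split_prefix_revision(rev):
    """'01.1' -> (1, 1), i.e. first sub-revision of -r1; '2' -> (2, 0)."""
    if rev.startswith("0") and "." in rev:
        main, sub = rev[1:].split(".", 1)
        return int(main), int(sub)
    return int(rev), 0

def cmp_revisions(rev1, rev2):
    (r1, s1), (r2, s2) = split_prefix_revision(rev1), split_prefix_revision(rev2)
    if r1 == r2 and (s1 or s2):
        r1, r2 = s1, s2
    return (r1 > r2) - (r1 < r2)

assert cmp_revisions("1", "01.1") < 0     # -r1 sorts before -r01.1
assert cmp_revisions("01.1", "2") < 0     # ...which still sorts before -r2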
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2017-01-27 15:08 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2017-01-27 15:08 UTC (permalink / raw
To: gentoo-commits
commit: 676927062dc65324ca68d72690114cff36265d1d
Author: Brian Dolbec <dolsen <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 6 17:14:54 2017 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jan 6 17:14:54 2017 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=67692706
Merge Coacher:extend-rdepend-suspect repoman updates
commit '021199792ce55bdc0f99bc7791a2b31ba1533d2e'
repoman/pym/repoman/qa_data.py | 3 +++
1 file changed, 3 insertions(+)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2016-02-21 16:17 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2016-02-21 16:17 UTC (permalink / raw
To: gentoo-commits
commit: 24d8f4014fa2ac555202dd1a4f83f4518950e1bd
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 21 16:17:43 2016 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Feb 21 16:17:43 2016 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=24d8f401
tarball: set version in setup.py too
tarball.sh | 1 +
1 file changed, 1 insertion(+)
diff --git a/tarball.sh b/tarball.sh
index b9f7841..930a4ef 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -27,6 +27,7 @@ fi
install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
+sed -i -e "/version = /s/'[^']\+'/'${V}-prefix'/" ${DEST}/setup.py
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/{,ru/}*.[15]
sed -i -e "s/@version@/${V}/" ${DEST}/configure.ac
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2016-02-21 16:17 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2016-02-21 16:17 UTC (permalink / raw
To: gentoo-commits
commit: 4de4f298e0f5355d32692e5f0c22da776d86dde3
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 21 16:17:05 2016 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Feb 21 16:17:05 2016 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=4de4f298
subst-install: remove unused expansions
subst-install.in | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/subst-install.in b/subst-install.in
index 83a6313..033fa98 100644
--- a/subst-install.in
+++ b/subst-install.in
@@ -22,19 +22,10 @@ at='@'
sedexp=(
-e "s,${at}DEFAULT_PATH${at},@DEFAULT_PATH@,g"
-e "s,${at}EXTRA_PATH${at},@EXTRA_PATH@,g"
- -e "s,${at}EGREP${at},@EGREP@,g"
-e "s,${at}PORTAGE_BASE${at},@PORTAGE_BASE@,g"
- -e "s,${at}PORTAGE_BASENAME${at},@PORTAGE_BASENAME@,g"
-e "s,${at}PORTAGE_BASH${at},@PORTAGE_BASH@,g"
- -e "s,${at}PORTAGE_DIRNAME${at},@PORTAGE_DIRNAME@,g"
-e "s,${at}PORTAGE_EPREFIX${at},@PORTAGE_EPREFIX@,g"
- -e "s,${at}PORTAGE_FIND${at},@PORTAGE_FIND@,g"
- -e "s,${at}PORTAGE_GREP${at},@PORTAGE_GREP@,g"
-e "s,${at}PORTAGE_MV${at},@PORTAGE_MV@,g"
- -e "s,${at}PORTAGE_RM${at},@PORTAGE_RM@,g"
- -e "s,${at}PORTAGE_SED${at},@PORTAGE_SED@,g"
- -e "s,${at}PORTAGE_WGET${at},@PORTAGE_WGET@,g"
- -e "s,${at}PORTAGE_XARGS${at},@PORTAGE_XARGS@,g"
-e "s,${at}PREFIX_PORTAGE_PYTHON${at},@PREFIX_PORTAGE_PYTHON@,g"
-e "s,${at}datadir${at},@datadir@,g"
-e "s,${at}portagegroup${at},${portagegroup},g"
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2016-02-18 19:35 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2016-02-18 19:35 UTC (permalink / raw
To: gentoo-commits
commit: 0cf294d33a466f4a3c537aa82d385ec7f87c7bf3
Author: Brian Dolbec <dolsen <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 12 19:26:42 2015 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Nov 12 19:26:42 2015 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=0cf294d3
Merge branch 'master' of git+ssh://git.gentoo.org/proj/portage
bin/egencache | 70 +++++++++++++++++++-------------
bin/install-qa-check.d/60openrc | 2 +-
bin/save-ebuild-env.sh | 2 +-
man/portage.5 | 7 +++-
misc/emerge-delta-webrsync | 2 +-
pym/portage/emaint/modules/sync/sync.py | 57 ++++++++++++++++++++++----
pym/portage/repository/config.py | 6 ++-
pym/portage/sync/controller.py | 18 +++++---
pym/portage/sync/modules/git/git.py | 11 ++++-
pym/portage/sync/modules/rsync/rsync.py | 17 ++++----
pym/portage/util/_async/AsyncFunction.py | 5 ++-
11 files changed, 139 insertions(+), 58 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2016-02-18 19:35 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2016-02-18 19:35 UTC (permalink / raw
To: gentoo-commits
commit: ac7a15e255d0c239fb52aeb3df160378518aa496
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 18 19:34:45 2016 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 18 19:34:45 2016 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=ac7a15e2
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.travis.yml | 1 +
NEWS | 17 +
README | 10 +-
RELEASE-NOTES | 133 +
bin/binhost-snapshot | 4 +-
bin/cgroup-release-agent | 2 +
bin/{ebuild-ipc => chmod-lite} | 8 +-
bin/chmod-lite.py | 30 +
bin/chpathtool.py | 45 +-
bin/dohtml.py | 47 +-
bin/eapi.sh | 8 +
bin/ebuild | 24 +-
bin/ebuild.sh | 68 +-
bin/egencache | 200 +-
bin/fixpackages | 4 +-
bin/glsa-check | 4 +-
bin/install-qa-check.d/60openrc | 2 +-
bin/install.py | 4 +-
bin/isolated-functions.sh | 12 +-
bin/misc-functions.sh | 19 +-
bin/phase-functions.sh | 42 +
bin/phase-helpers.sh | 97 +-
bin/portageq | 22 +-
bin/quickpkg | 16 +-
bin/repoman | 3161 +-------------------
bin/save-ebuild-env.sh | 11 +-
bin/socks5-server.py | 9 +-
bin/xattr-helper.py | 15 +-
bin/xpak-helper.py | 4 +-
cnf/make.globals | 2 +-
cnf/sets/portage.conf | 2 +-
doc/qa.docbook | 3 +-
man/ebuild.5 | 13 +-
man/egencache.1 | 20 +-
man/emerge.1 | 16 +-
man/emirrordist.1 | 10 +-
man/make.conf.5 | 49 +-
man/portage.5 | 18 +-
man/repoman.1 | 5 +-
misc/emerge-delta-webrsync | 2 +-
pym/_emerge/AbstractEbuildProcess.py | 41 +-
pym/_emerge/Binpkg.py | 24 +-
pym/_emerge/BinpkgFetcher.py | 9 +-
pym/_emerge/BlockerDB.py | 5 +-
pym/_emerge/EbuildBuild.py | 21 +-
pym/_emerge/PackageVirtualDbapi.py | 4 +-
pym/_emerge/Scheduler.py | 40 +-
pym/_emerge/SpawnProcess.py | 41 +-
pym/_emerge/UserQuery.py | 15 +-
pym/_emerge/actions.py | 145 +-
pym/_emerge/chk_updated_cfg_files.py | 8 +-
pym/_emerge/depgraph.py | 111 +-
pym/_emerge/main.py | 31 +-
pym/_emerge/resolver/circular_dependency.py | 21 +-
pym/_emerge/resolver/slot_collision.py | 11 +-
pym/_emerge/search.py | 3 +-
pym/portage/_emirrordist/Config.py | 10 +-
pym/portage/_emirrordist/main.py | 27 +-
pym/portage/_legacy_globals.py | 2 +-
pym/portage/_sets/__init__.py | 4 +
pym/portage/cache/anydbm.py | 3 +
pym/portage/cache/flat_hash.py | 5 +
pym/portage/cache/sqlite.py | 9 +-
pym/portage/cache/template.py | 61 +-
pym/portage/checksum.py | 4 +-
pym/portage/const.py | 4 +-
pym/portage/data.py | 2 +-
pym/portage/dbapi/IndexedPortdb.py | 4 +-
pym/portage/dbapi/IndexedVardb.py | 6 +-
pym/portage/dbapi/__init__.py | 27 +-
pym/portage/dbapi/bintree.py | 6 +-
pym/portage/dbapi/porttree.py | 44 +-
pym/portage/dbapi/vartree.py | 22 +-
pym/portage/dbapi/virtual.py | 12 +-
pym/portage/dep/__init__.py | 7 +-
pym/portage/dep/dep_check.py | 6 +-
pym/portage/eapi.py | 11 +-
pym/portage/elog/mod_save.py | 3 +-
pym/portage/emaint/main.py | 6 +-
pym/portage/emaint/modules/merges/__init__.py | 2 +-
pym/portage/emaint/modules/sync/sync.py | 191 +-
pym/portage/exception.py | 5 +-
pym/portage/manifest.py | 61 +
pym/portage/news.py | 2 +-
.../package/ebuild/_config/KeywordsManager.py | 35 +-
.../package/ebuild/_config/LicenseManager.py | 4 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/config.py | 64 +-
pym/portage/package/ebuild/doebuild.py | 35 +-
pym/portage/package/ebuild/fetch.py | 9 +-
pym/portage/repository/config.py | 83 +-
pym/portage/sync/__init__.py | 20 +-
pym/portage/sync/controller.py | 90 +-
pym/portage/sync/modules/cvs/__init__.py | 5 +-
pym/portage/sync/modules/cvs/cvs.py | 2 +-
pym/portage/sync/modules/git/__init__.py | 1 +
pym/portage/sync/modules/git/git.py | 31 +-
pym/portage/sync/modules/rsync/__init__.py | 4 +
pym/portage/sync/modules/rsync/rsync.py | 34 +-
pym/portage/sync/modules/svn/__init__.py | 1 +
pym/portage/sync/modules/webrsync/__init__.py | 1 +
pym/portage/sync/syncbase.py | 2 +-
pym/portage/tests/__init__.py | 4 +-
pym/portage/tests/dbapi/test_portdb_cache.py | 3 +-
pym/portage/tests/dep/test_match_from_list.py | 21 +-
pym/portage/tests/ebuild/test_config.py | 4 +-
pym/portage/tests/ebuild/test_doebuild_fd_pipes.py | 37 +-
pym/portage/tests/ebuild/test_doebuild_spawn.py | 3 +-
pym/portage/tests/ebuild/test_ipc_daemon.py | 3 +-
pym/portage/tests/emerge/test_config_protect.py | 3 +-
pym/portage/tests/emerge/test_emerge_slot_abi.py | 3 +-
pym/portage/tests/emerge/test_simple.py | 3 +-
pym/portage/tests/repoman/test_simple.py | 9 +-
.../tests/resolver/test_autounmask_parent.py | 43 +
pym/portage/tests/resolver/test_depclean.py | 10 +-
pym/portage/tests/sync/test_sync_local.py | 82 +-
pym/portage/tests/util/test_xattr.py | 178 ++
pym/portage/util/__init__.py | 97 +-
pym/portage/util/_argparse.py | 42 -
pym/portage/util/_async/AsyncFunction.py | 70 +
pym/portage/util/_xattr.py | 228 ++
pym/portage/util/locale.py | 142 +
pym/portage/util/movefile.py | 100 +-
pym/portage/util/xattr.py | 20 -
pym/portage/xml/metadata.py | 3 +
pym/repoman/_portage.py | 25 +
pym/repoman/_subprocess.py | 83 +
pym/repoman/_xml.py | 106 +
pym/repoman/actions.py | 836 ++++++
pym/repoman/argparser.py | 225 ++
pym/repoman/check_missingslot.py | 7 +-
pym/repoman/{ => checks}/__init__.py | 0
pym/repoman/{ => checks/directories}/__init__.py | 0
pym/repoman/checks/directories/files.py | 81 +
pym/repoman/{ => checks/ebuilds}/__init__.py | 0
pym/repoman/{ => checks/ebuilds}/checks.py | 219 +-
.../{ => checks/ebuilds/eclasses}/__init__.py | 0
pym/repoman/checks/ebuilds/eclasses/live.py | 39 +
pym/repoman/checks/ebuilds/eclasses/ruby.py | 32 +
pym/repoman/checks/ebuilds/errors.py | 49 +
pym/repoman/checks/ebuilds/fetches.py | 135 +
pym/repoman/checks/ebuilds/isebuild.py | 71 +
pym/repoman/checks/ebuilds/keywords.py | 122 +
pym/repoman/checks/ebuilds/manifests.py | 102 +
pym/repoman/checks/ebuilds/misc.py | 57 +
pym/repoman/checks/ebuilds/pkgmetadata.py | 177 ++
pym/repoman/checks/ebuilds/thirdpartymirrors.py | 39 +
pym/repoman/checks/ebuilds/use_flags.py | 90 +
.../{ => checks/ebuilds/variables}/__init__.py | 0
.../checks/ebuilds/variables/description.py | 32 +
pym/repoman/checks/ebuilds/variables/eapi.py | 44 +
pym/repoman/checks/ebuilds/variables/license.py | 47 +
pym/repoman/checks/ebuilds/variables/restrict.py | 41 +
pym/repoman/{ => checks/herds}/__init__.py | 0
pym/repoman/{ => checks/herds}/herdbase.py | 28 +-
pym/repoman/checks/herds/metadata.py | 26 +
pym/repoman/copyrights.py | 120 +
pym/repoman/ebuild.py | 29 +
pym/repoman/errors.py | 49 +-
pym/repoman/gpg.py | 82 +
pym/repoman/main.py | 169 ++
pym/repoman/metadata.py | 153 +
pym/repoman/{ => modules}/__init__.py | 0
pym/repoman/{ => modules/commit}/__init__.py | 0
pym/repoman/modules/commit/repochecks.py | 35 +
pym/repoman/{ => modules/fix}/__init__.py | 0
pym/repoman/{ => modules/full}/__init__.py | 0
pym/repoman/{ => modules/manifest}/__init__.py | 0
pym/repoman/{ => modules/scan}/__init__.py | 0
pym/repoman/profile.py | 87 +
pym/repoman/qa_data.py | 439 +++
pym/repoman/qa_tracker.py | 45 +
pym/repoman/repos.py | 307 ++
pym/repoman/scan.py | 172 ++
pym/repoman/scanner.py | 755 +++++
pym/repoman/utilities.py | 572 +---
pym/repoman/{ => vcs}/__init__.py | 0
pym/repoman/vcs/vcs.py | 287 ++
pym/repoman/vcs/vcsstatus.py | 114 +
runtests | 50 +-
setup.py | 21 +-
181 files changed, 8297 insertions(+), 4668 deletions(-)
diff --cc bin/ebuild
index 1e9dd4b,1f99177..1692a92
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/egencache
index b26fd8e,7e3387e..843f374
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -b
+#!@PREFIX_PORTAGE_PYTHON@ -b
- # Copyright 2009-2014 Gentoo Foundation
+ # Copyright 2009-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# unicode_literals for compat with TextIOWrapper in Python 2
diff --cc bin/isolated-functions.sh
index a37130c,e320f71..5126bf9
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2016 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH}/eapi.sh" || exit 1
diff --cc bin/misc-functions.sh
index a1d4088,15651b9..080e366
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc bin/portageq
index be35dc6,925640b..2e90cfa
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function, unicode_literals
diff --cc bin/repoman
index afc26c3,819e0f5..81b33f7
--- a/bin/repoman
+++ b/bin/repoman
@@@ -1,3152 -1,43 +1,43 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2015 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- # Next to do: dep syntax checking in mask files
- # Then, check to make sure deps are satisfiable (to avoid "can't find match for" problems)
- # that last one is tricky because multiple profiles need to be checked.
+ """Ebuild and tree health checks and maintenance utilities.
+ """
- from __future__ import print_function, unicode_literals
+ from __future__ import print_function
- import codecs
- import copy
- import errno
- import io
- import logging
- import re
- import signal
- import stat
- import subprocess
import sys
- import tempfile
- import textwrap
- import time
- import platform
- from itertools import chain
- from stat import S_ISDIR
- from pprint import pformat
-
- try:
- from urllib.parse import urlparse
- except ImportError:
- from urlparse import urlparse
-
- from os import path as osp
- if osp.isfile(osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), ".portage_not_installed")):
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
- portage._internal_caller = True
- portage._disable_legacy_globals()
-
+ import errno
+ # This block ensures that ^C interrupts are handled quietly.
try:
- import xml.etree.ElementTree
- from xml.parsers.expat import ExpatError
- except (SystemExit, KeyboardInterrupt):
- raise
- except (ImportError, SystemError, RuntimeError, Exception):
- # broken or missing xml support
- # http://bugs.python.org/issue14988
- msg = ["Please enable python's \"xml\" USE flag in order to use repoman."]
- from portage.output import EOutput
- out = EOutput()
- for line in msg:
- out.eerror(line)
- sys.exit(1)
-
- from portage import os
- from portage import _encodings
- from portage import _unicode_encode
- import portage.util.formatter as formatter
- import repoman.checks
- from repoman.checks import run_checks
- from repoman.check_missingslot import check_missingslot
- from repoman import utilities
- from repoman.herdbase import make_herd_base
- from _emerge.Package import Package
- from _emerge.RootConfig import RootConfig
- from _emerge.UserQuery import UserQuery
- import portage.checksum
- import portage.const
- import portage.repository.config
- from portage import cvstree, normalize_path
- from portage import util
- from portage.exception import (FileNotFound, InvalidAtom, MissingParameter,
- ParseError, PermissionDenied)
- from portage.dep import Atom
- from portage.process import find_binary, spawn
- from portage.output import bold, create_color_func, \
- green, nocolor, red
- from portage.output import ConsoleStyleFile, StyleWriter
- from portage.util import writemsg_level
- from portage.util._argparse import ArgumentParser
- from portage.package.ebuild.digestgen import digestgen
- from portage.eapi import eapi_has_iuse_defaults, eapi_has_required_use
-
- if sys.hexversion >= 0x3000000:
- basestring = str
+ import signal
- util.initialize_logger()
-
- # 14 is the length of DESCRIPTION=""
- max_desc_len = 100
- allowed_filename_chars="a-zA-Z0-9._+-"
- pv_toolong_re = re.compile(r'[0-9]{19,}')
- GPG_KEY_ID_REGEX = r'(0x)?([0-9a-fA-F]{8}|[0-9a-fA-F]{16}|[0-9a-fA-F]{24}|[0-9a-fA-F]{32}|[0-9a-fA-F]{40})!?'
- bad = create_color_func("BAD")
-
- # A sane umask is needed for files that portage creates.
- os.umask(0o22)
- # Repoman sets it's own ACCEPT_KEYWORDS and we don't want it to
- # behave incrementally.
- repoman_incrementals = tuple(x for x in \
- portage.const.INCREMENTALS if x != 'ACCEPT_KEYWORDS')
- config_root = os.environ.get("PORTAGE_CONFIGROOT")
- repoman_settings = portage.config(config_root=config_root, local_config=False)
-
- if repoman_settings.get("NOCOLOR", "").lower() in ("yes", "true") or \
- repoman_settings.get('TERM') == 'dumb' or \
- not sys.stdout.isatty():
- nocolor()
-
- def warn(txt):
- print("repoman: " + txt)
-
- def err(txt):
- warn(txt)
- sys.exit(1)
-
- def exithandler(signum=None, _frame=None):
- logging.fatal("Interrupted; exiting...")
- if signum is None:
- sys.exit(1)
- else:
+ def exithandler(signum, _frame):
+ signal.signal(signal.SIGINT, signal.SIG_IGN)
+ signal.signal(signal.SIGTERM, signal.SIG_IGN)
sys.exit(128 + signum)
- signal.signal(signal.SIGINT, exithandler)
-
- def ParseArgs(argv, qahelp):
- """This function uses a customized optionParser to parse command line arguments for repoman
- Args:
- argv - a sequence of command line arguments
- qahelp - a dict of qa warning to help message
- Returns:
- (opts, args), just like a call to parser.parse_args()
- """
-
- argv = portage._decode_argv(argv)
-
- modes = {
- 'commit' : 'Run a scan then commit changes',
- 'ci' : 'Run a scan then commit changes',
- 'fix' : 'Fix simple QA issues (stray digests, missing digests)',
- 'full' : 'Scan directory tree and print all issues (not a summary)',
- 'help' : 'Show this screen',
- 'manifest' : 'Generate a Manifest (fetches files if necessary)',
- 'manifest-check' : 'Check Manifests for missing or incorrect digests',
- 'scan' : 'Scan directory tree for QA issues'
- }
-
- output_choices = {
- 'default' : 'The normal output format',
- 'column' : 'Columnar output suitable for use with grep'
- }
-
- mode_keys = list(modes)
- mode_keys.sort()
-
- output_keys = sorted(output_choices)
-
- parser = ArgumentParser(usage="repoman [options] [mode]",
- description="Modes: %s" % " | ".join(mode_keys),
- epilog="For more help consult the man page.")
-
- parser.add_argument('-a', '--ask', dest='ask', action='store_true', default=False,
- help='Request a confirmation before commiting')
-
- parser.add_argument('-m', '--commitmsg', dest='commitmsg',
- help='specify a commit message on the command line')
-
- parser.add_argument('-M', '--commitmsgfile', dest='commitmsgfile',
- help='specify a path to a file that contains a commit message')
-
- parser.add_argument('--digest',
- choices=('y', 'n'), metavar='<y|n>',
- help='Automatically update Manifest digests for modified files')
-
- parser.add_argument('-p', '--pretend', dest='pretend', default=False,
- action='store_true', help='don\'t commit or fix anything; just show what would be done')
-
- parser.add_argument('-q', '--quiet', dest="quiet", action="count", default=0,
- help='do not print unnecessary messages')
-
- parser.add_argument(
- '--echangelog', choices=('y', 'n', 'force'), metavar="<y|n|force>",
- help='for commit mode, call echangelog if ChangeLog is unmodified (or '
- 'regardless of modification if \'force\' is specified)')
-
- parser.add_argument('--experimental-inherit', choices=('y', 'n'),
- metavar="<y|n>", default='n',
- help='Enable experimental inherit.missing checks which may misbehave'
- ' when the internal eclass database becomes outdated')
-
- parser.add_argument('-f', '--force', dest='force', default=False, action='store_true',
- help='Commit with QA violations')
-
- parser.add_argument('-S', '--straight-to-stable', dest='straight_to_stable', default=False,
- action='store_true', help='Allow committing straight to stable')
-
- parser.add_argument('--vcs', dest='vcs',
- help='Force using specific VCS instead of autodetection')
-
- parser.add_argument('-v', '--verbose', dest="verbosity", action='count',
- help='be very verbose in output', default=0)
-
- parser.add_argument('-V', '--version', dest='version', action='store_true',
- help='show version info')
-
- parser.add_argument('-x', '--xmlparse', dest='xml_parse', action='store_true',
- default=False, help='forces the metadata.xml parse check to be carried out')
-
- parser.add_argument(
- '--if-modified', choices=('y', 'n'), default='n',
- metavar="<y|n>",
- help='only check packages that have uncommitted modifications')
-
- parser.add_argument('-i', '--ignore-arches', dest='ignore_arches', action='store_true',
- default=False, help='ignore arch-specific failures (where arch != host)')
-
- parser.add_argument("--ignore-default-opts",
- action="store_true",
- help="do not use the REPOMAN_DEFAULT_OPTS environment variable")
-
- parser.add_argument('-I', '--ignore-masked', dest='ignore_masked', action='store_true',
- default=False, help='ignore masked packages (not allowed with commit mode)')
-
- parser.add_argument('--include-arches', dest='include_arches',
- metavar='ARCHES', action='append',
- help='A space separated list of arches used to '
- 'filter the selection of profiles for dependency checks')
-
- parser.add_argument('-d', '--include-dev', dest='include_dev', action='store_true',
- default=False, help='include dev profiles in dependency checks')
-
- parser.add_argument('-e', '--include-exp-profiles', choices=('y', 'n'),
- default=False, help='include exp profiles in dependency checks',
- metavar='<y|n>')
-
- parser.add_argument('--unmatched-removal', dest='unmatched_removal', action='store_true',
- default=False, help='enable strict checking of package.mask and package.unmask files for unmatched removal atoms')
-
- parser.add_argument('--without-mask', dest='without_mask', action='store_true',
- default=False, help='behave as if no package.mask entries exist (not allowed with commit mode)')
-
- parser.add_argument('--output-style', dest='output_style', choices=output_keys,
- help='select output type', default='default')
-
- parser.add_argument('--mode', dest='mode', choices=mode_keys,
- help='specify which mode repoman will run in (default=full)')
-
- opts, args = parser.parse_known_args(argv[1:])
-
- if not opts.ignore_default_opts:
- default_opts = portage.util.shlex_split(
- repoman_settings.get("REPOMAN_DEFAULT_OPTS", ""))
- if default_opts:
- opts, args = parser.parse_known_args(default_opts + sys.argv[1:])
-
- if opts.mode == 'help':
- parser.print_help(short=False)
-
- for arg in args:
- if arg in modes:
- if not opts.mode:
- opts.mode = arg
- break
- else:
- parser.error("invalid mode: %s" % arg)
-
- if not opts.mode:
- opts.mode = 'full'
-
- if opts.mode == 'ci':
- opts.mode = 'commit' # backwards compat shortcut
-
- # Use the verbosity and quiet options to fiddle with the loglevel appropriately
- for val in range(opts.verbosity):
- logger = logging.getLogger()
- logger.setLevel(logger.getEffectiveLevel() - 10)
-
- for val in range(opts.quiet):
- logger = logging.getLogger()
- logger.setLevel(logger.getEffectiveLevel() + 10)
-
- if opts.mode == 'commit' and not (opts.force or opts.pretend):
- if opts.ignore_masked:
- opts.ignore_masked = False
- logging.warning('Commit mode automatically disables --ignore-masked')
- if opts.without_mask:
- opts.without_mask = False
- logging.warning('Commit mode automatically disables --without-mask')
-
- return (opts, args)
-
- qahelp = {
- "CVS/Entries.IO_error": "Attempting to commit, and an IO error was encountered access the Entries file",
- "ebuild.invalidname": "Ebuild files with a non-parseable or syntactically incorrect name (or using 2.1 versioning extensions)",
- "ebuild.namenomatch": "Ebuild files that do not have the same name as their parent directory",
- "changelog.ebuildadded": "An ebuild was added but the ChangeLog was not modified",
- "changelog.missing": "Missing ChangeLog files",
- "ebuild.notadded": "Ebuilds that exist but have not been added to cvs",
- "ebuild.patches": "PATCHES variable should be a bash array to ensure white space safety",
- "changelog.notadded": "ChangeLogs that exist but have not been added to cvs",
- "dependency.bad": "User-visible ebuilds with unsatisfied dependencies (matched against *visible* ebuilds)",
- "dependency.badmasked": "Masked ebuilds with unsatisfied dependencies (matched against *all* ebuilds)",
- "dependency.badindev": "User-visible ebuilds with unsatisfied dependencies (matched against *visible* ebuilds) in developing arch",
- "dependency.badmaskedindev": "Masked ebuilds with unsatisfied dependencies (matched against *all* ebuilds) in developing arch",
- "dependency.badtilde": "Uses the ~ dep operator with a non-zero revision part, which is useless (the revision is ignored)",
- "dependency.missingslot": "RDEPEND matches more than one SLOT but does not specify a slot and/or use the := or :* slot operator",
- "dependency.perlcore": "This ebuild directly depends on a package in perl-core; it should use the corresponding virtual instead.",
- "dependency.syntax": "Syntax error in dependency string (usually an extra/missing space/parenthesis)",
- "dependency.unknown": "Ebuild has a dependency that refers to an unknown package (which may be valid if it is a blocker for a renamed/removed package, or is an alternative choice provided by an overlay)",
- "file.executable": "Ebuilds, digests, metadata.xml, Manifest, and ChangeLog do not need the executable bit",
- "file.size": "Files in the files directory must be under 20 KiB",
- "file.size.fatal": "Files in the files directory must be under 60 KiB",
- "file.name": "File/dir name must be composed of only the following chars: %s " % allowed_filename_chars,
- "file.UTF8": "File is not UTF8 compliant",
- "inherit.deprecated": "Ebuild inherits a deprecated eclass",
- "inherit.missing": "Ebuild uses functions from an eclass but does not inherit it",
- "inherit.unused": "Ebuild inherits an eclass but does not use it",
- "java.eclassesnotused": "With virtual/jdk in DEPEND you must inherit a java eclass",
- "wxwidgets.eclassnotused": "Ebuild DEPENDs on x11-libs/wxGTK without inheriting wxwidgets.eclass",
- "KEYWORDS.dropped": "Ebuilds that appear to have dropped KEYWORDS for some arch",
- "KEYWORDS.missing": "Ebuilds that have a missing or empty KEYWORDS variable",
- "KEYWORDS.stable": "Ebuilds that have been added directly with stable KEYWORDS",
- "KEYWORDS.stupid": "Ebuilds that use KEYWORDS=-* instead of package.mask",
- "LICENSE.missing": "Ebuilds that have a missing or empty LICENSE variable",
- "LICENSE.virtual": "Virtuals that have a non-empty LICENSE variable",
- "DESCRIPTION.missing": "Ebuilds that have a missing or empty DESCRIPTION variable",
- "DESCRIPTION.toolong": "DESCRIPTION is over %d characters" % max_desc_len,
- "EAPI.definition": "EAPI definition does not conform to PMS section 7.3.1 (first non-comment, non-blank line)",
- "EAPI.deprecated": "Ebuilds that use features that are deprecated in the current EAPI",
- "EAPI.incompatible": "Ebuilds that use features that are only available with a different EAPI",
- "EAPI.unsupported": "Ebuilds that have an unsupported EAPI version (you must upgrade portage)",
- "SLOT.invalid": "Ebuilds that have a missing or invalid SLOT variable value",
- "HOMEPAGE.missing": "Ebuilds that have a missing or empty HOMEPAGE variable",
- "HOMEPAGE.virtual": "Virtuals that have a non-empty HOMEPAGE variable",
- "PDEPEND.suspect": "PDEPEND contains a package that usually only belongs in DEPEND.",
- "LICENSE.syntax": "Syntax error in LICENSE (usually an extra/missing space/parenthesis)",
- "PROVIDE.syntax": "Syntax error in PROVIDE (usually an extra/missing space/parenthesis)",
- "PROPERTIES.syntax": "Syntax error in PROPERTIES (usually an extra/missing space/parenthesis)",
- "RESTRICT.syntax": "Syntax error in RESTRICT (usually an extra/missing space/parenthesis)",
- "REQUIRED_USE.syntax": "Syntax error in REQUIRED_USE (usually an extra/missing space/parenthesis)",
- "SRC_URI.syntax": "Syntax error in SRC_URI (usually an extra/missing space/parenthesis)",
- "SRC_URI.mirror": "A uri listed in profiles/thirdpartymirrors is found in SRC_URI",
- "ebuild.syntax": "Error generating cache entry for ebuild; typically caused by ebuild syntax error or digest verification failure",
- "ebuild.output": "A simple sourcing of the ebuild produces output; this breaks ebuild policy.",
- "ebuild.nesteddie": "Placing 'die' inside ( ) prints an error, but doesn't stop the ebuild.",
- "variable.invalidchar": "A variable contains an invalid character that is not part of the ASCII character set",
- "variable.readonly": "Assigning a readonly variable",
- "variable.usedwithhelpers": "Ebuild uses D, ROOT, ED, EROOT or EPREFIX with helpers",
- "LIVEVCS.stable": "This ebuild is a live checkout from a VCS but has stable keywords.",
- "LIVEVCS.unmasked": "This ebuild is a live checkout from a VCS but has keywords and is not masked in the global package.mask.",
- "IUSE.invalid": "This ebuild has a variable in IUSE that is not in the use.desc or its metadata.xml file",
- "IUSE.missing": "This ebuild has a USE conditional which references a flag that is not listed in IUSE",
- "IUSE.rubydeprecated": "The ebuild has set a ruby interpreter in USE_RUBY, that is not available as a ruby target anymore",
- "LICENSE.invalid": "This ebuild is listing a license that doesnt exist in portages license/ dir.",
- "LICENSE.deprecated": "This ebuild is listing a deprecated license.",
- "KEYWORDS.invalid": "This ebuild contains KEYWORDS that are not listed in profiles/arch.list or for which no valid profile was found",
- "RDEPEND.implicit": "RDEPEND is unset in the ebuild which triggers implicit RDEPEND=$DEPEND assignment (prior to EAPI 4)",
- "RDEPEND.suspect": "RDEPEND contains a package that usually only belongs in DEPEND.",
- "RESTRICT.invalid": "This ebuild contains invalid RESTRICT values.",
- "digest.assumed": "Existing digest must be assumed correct (Package level only)",
- "digest.missing": "Some files listed in SRC_URI aren't referenced in the Manifest",
- "digest.unused": "Some files listed in the Manifest aren't referenced in SRC_URI",
- "ebuild.majorsyn": "This ebuild has a major syntax error that may cause the ebuild to fail partially or fully",
- "ebuild.minorsyn": "This ebuild has a minor syntax error that contravenes gentoo coding style",
- "ebuild.badheader": "This ebuild has a malformed header",
- "manifest.bad": "Manifest has missing or incorrect digests",
- "metadata.missing": "Missing metadata.xml files",
- "metadata.bad": "Bad metadata.xml files",
- "metadata.warning": "Warnings in metadata.xml files",
- "portage.internal": "The ebuild uses an internal Portage function or variable",
- "repo.eapi.banned": "The ebuild uses an EAPI which is banned by the repository's metadata/layout.conf settings",
- "repo.eapi.deprecated": "The ebuild uses an EAPI which is deprecated by the repository's metadata/layout.conf settings",
- "virtual.oldstyle": "The ebuild PROVIDEs an old-style virtual (see GLEP 37)",
- "virtual.suspect": "Ebuild contains a package that usually should be pulled via virtual/, not directly.",
- "usage.obsolete": "The ebuild makes use of an obsolete construct",
- "upstream.workaround": "The ebuild works around an upstream bug, an upstream bug should be filed and tracked in bugs.gentoo.org"
- }
-
- qacats = list(qahelp)
- qacats.sort()
-
- qawarnings = set((
- "changelog.ebuildadded",
- "changelog.missing",
- "changelog.notadded",
- "dependency.unknown",
- "digest.assumed",
- "digest.unused",
- "ebuild.notadded",
- "ebuild.nesteddie",
- "dependency.badmasked",
- "dependency.badindev",
- "dependency.badmaskedindev",
- "dependency.badtilde",
- "dependency.missingslot",
- "dependency.perlcore",
- "DESCRIPTION.toolong",
- "EAPI.deprecated",
- "HOMEPAGE.virtual",
- "LICENSE.deprecated",
- "LICENSE.virtual",
- "KEYWORDS.dropped",
- "KEYWORDS.stupid",
- "KEYWORDS.missing",
- "IUSE.invalid",
- "PDEPEND.suspect",
- "RDEPEND.implicit",
- "RDEPEND.suspect",
- "virtual.suspect",
- "RESTRICT.invalid",
- "ebuild.minorsyn",
- "ebuild.badheader",
- "ebuild.patches",
- "file.size",
- "inherit.unused",
- "inherit.deprecated",
- "java.eclassesnotused",
- "wxwidgets.eclassnotused",
- "metadata.warning",
- "portage.internal",
- "repo.eapi.deprecated",
- "usage.obsolete",
- "upstream.workaround",
- "LIVEVCS.stable",
- "LIVEVCS.unmasked",
- "IUSE.rubydeprecated",
- "SRC_URI.mirror",
- ))
-
- non_ascii_re = re.compile(r'[^\x00-\x7f]')
-
- missingvars = ["KEYWORDS", "LICENSE", "DESCRIPTION", "HOMEPAGE"]
- allvars = set(x for x in portage.auxdbkeys if not x.startswith("UNUSED_"))
- allvars.update(Package.metadata_keys)
- allvars = sorted(allvars)
- commitmessage = None
- for x in missingvars:
- x += ".missing"
- if x not in qacats:
- logging.warning('* missingvars values need to be added to qahelp ("%s")' % x)
- qacats.append(x)
- qawarnings.add(x)
-
- valid_restrict = frozenset(["binchecks", "bindist",
- "fetch", "installsources", "mirror", "preserve-libs",
- "primaryuri", "splitdebug", "strip", "test", "userpriv"])
-
- live_eclasses = portage.const.LIVE_ECLASSES
-
- suspect_rdepend = frozenset([
- "app-arch/cabextract",
- "app-arch/rpm2targz",
- "app-doc/doxygen",
- "dev-lang/nasm",
- "dev-lang/swig",
- "dev-lang/yasm",
- "dev-perl/extutils-pkgconfig",
- "dev-util/byacc",
- "dev-util/cmake",
- "dev-util/ftjam",
- "dev-util/gperf",
- "dev-util/gtk-doc",
- "dev-util/gtk-doc-am",
- "dev-util/intltool",
- "dev-util/jam",
- "dev-util/pkg-config-lite",
- "dev-util/pkgconf",
- "dev-util/pkgconfig",
- "dev-util/pkgconfig-openbsd",
- "dev-util/scons",
- "dev-util/unifdef",
- "dev-util/yacc",
- "media-gfx/ebdftopcf",
- "sys-apps/help2man",
- "sys-devel/autoconf",
- "sys-devel/automake",
- "sys-devel/bin86",
- "sys-devel/bison",
- "sys-devel/dev86",
- "sys-devel/flex",
- "sys-devel/m4",
- "sys-devel/pmake",
- "virtual/linux-sources",
- "virtual/pkgconfig",
- "x11-misc/bdftopcf",
- "x11-misc/imake",
- ])
-
- suspect_virtual = {
- "dev-util/pkg-config-lite":"virtual/pkgconfig",
- "dev-util/pkgconf":"virtual/pkgconfig",
- "dev-util/pkgconfig":"virtual/pkgconfig",
- "dev-util/pkgconfig-openbsd":"virtual/pkgconfig",
- "dev-libs/libusb":"virtual/libusb",
- "dev-libs/libusbx":"virtual/libusb",
- "dev-libs/libusb-compat":"virtual/libusb",
- }
-
- ruby_deprecated = frozenset([
- "ruby_targets_ree18",
- "ruby_targets_ruby18",
- ])
-
- metadata_xml_encoding = 'UTF-8'
- metadata_xml_declaration = '<?xml version="1.0" encoding="%s"?>' % \
- (metadata_xml_encoding,)
- metadata_doctype_name = 'pkgmetadata'
- metadata_dtd_uri = 'http://www.gentoo.org/dtd/metadata.dtd'
- # force refetch if the local copy creation time is older than this
- metadata_dtd_ctime_interval = 60 * 60 * 24 * 7 # 7 days
-
- # file.executable
- no_exec = frozenset(["Manifest", "ChangeLog", "metadata.xml"])
-
- options, arguments = ParseArgs(sys.argv, qahelp)
-
- if options.version:
- print("Portage", portage.VERSION)
- sys.exit(0)
-
- if options.experimental_inherit == 'y':
- # This is experimental, so it's non-fatal.
- qawarnings.add("inherit.missing")
- repoman.checks._init(experimental_inherit=True)
-
- # Set this to False when an extraordinary issue (generally
- # something other than a QA issue) makes it impossible to
- # commit (like if Manifest generation fails).
- can_force = True
-
- portdir, portdir_overlay, mydir = utilities.FindPortdir(repoman_settings)
- if portdir is None:
- sys.exit(1)
-
- myreporoot = os.path.basename(portdir_overlay)
- myreporoot += mydir[len(portdir_overlay):]
-
- if options.vcs:
- if options.vcs in ('cvs', 'svn', 'git', 'bzr', 'hg'):
- vcs = options.vcs
- else:
- vcs = None
- else:
- vcses = utilities.FindVCS()
- if len(vcses) > 1:
- print(red('*** Ambiguous workdir -- more than one VCS found at the same depth: %s.' % ', '.join(vcses)))
- print(red('*** Please either clean up your workdir or specify --vcs option.'))
- sys.exit(1)
- elif vcses:
- vcs = vcses[0]
- else:
- vcs = None
-
- if options.if_modified == "y" and vcs is None:
- logging.info("Not in a version controlled repository; "
- "disabling --if-modified.")
- options.if_modified = "n"
-
- # Disable copyright/mtime check if vcs does not preserve mtime (bug #324075).
- vcs_preserves_mtime = vcs in ('cvs', None)
-
- vcs_local_opts = repoman_settings.get("REPOMAN_VCS_LOCAL_OPTS", "").split()
- vcs_global_opts = repoman_settings.get("REPOMAN_VCS_GLOBAL_OPTS")
- if vcs_global_opts is None:
- if vcs in ('cvs', 'svn'):
- vcs_global_opts = "-q"
- else:
- vcs_global_opts = ""
- vcs_global_opts = vcs_global_opts.split()
-
- if options.mode == 'commit' and not options.pretend and not vcs:
- logging.info("Not in a version controlled repository; enabling pretend mode.")
- options.pretend = True
-
- # Ensure that current repository is in the list of enabled repositories.
- repodir = os.path.realpath(portdir_overlay)
- try:
- repoman_settings.repositories.get_repo_for_location(repodir)
- except KeyError:
- repo_name = portage.repository.config.RepoConfig._read_valid_repo_name(portdir_overlay)[0]
- layout_conf_data = portage.repository.config.parse_layout_conf(portdir_overlay)[0]
- if layout_conf_data['repo-name']:
- repo_name = layout_conf_data['repo-name']
- tmp_conf_file = io.StringIO(textwrap.dedent("""
- [%s]
- location = %s
- """) % (repo_name, portdir_overlay))
- # Ensure that the repository corresponding to $PWD overrides a
- # repository of the same name referenced by the existing PORTDIR
- # or PORTDIR_OVERLAY settings.
- repoman_settings['PORTDIR_OVERLAY'] = "%s %s" % \
- (repoman_settings.get('PORTDIR_OVERLAY', ''),
- portage._shell_quote(portdir_overlay))
- repositories = portage.repository.config.load_repository_config(repoman_settings, extra_files=[tmp_conf_file])
- # We have to call the config constructor again so that attributes
- # dependent on config.repositories are initialized correctly.
- repoman_settings = portage.config(config_root=config_root, local_config=False, repositories=repositories)
-
- root = repoman_settings['EROOT']
- trees = {
- root : {'porttree' : portage.portagetree(settings=repoman_settings)}
- }
- portdb = trees[root]['porttree'].dbapi
-
- # Constrain dependency resolution to the master(s)
- # that are specified in layout.conf.
- repo_config = repoman_settings.repositories.get_repo_for_location(repodir)
- portdb.porttrees = list(repo_config.eclass_db.porttrees)
- portdir = portdb.porttrees[0]
- commit_env = os.environ.copy()
- # list() is for iteration on a copy.
- for repo in list(repoman_settings.repositories):
- # all paths are canonical
- if repo.location not in repo_config.eclass_db.porttrees:
- del repoman_settings.repositories[repo.name]
-
- if repo_config.allow_provide_virtual:
- qawarnings.add("virtual.oldstyle")
-
- if repo_config.sign_commit:
- if vcs == 'git':
- # NOTE: It's possible to use --gpg-sign=key_id to specify the key in
- # the commit arguments. If key_id is unspecified, then it must be
- # configured by `git config user.signingkey key_id`.
- vcs_local_opts.append("--gpg-sign")
- if repoman_settings.get("PORTAGE_GPG_DIR"):
- # Pass GNUPGHOME to git for bug #462362.
- commit_env["GNUPGHOME"] = repoman_settings["PORTAGE_GPG_DIR"]
-
- # Pass GPG_TTY to git for bug #477728.
- try:
- commit_env["GPG_TTY"] = os.ttyname(sys.stdin.fileno())
- except OSError:
- pass
-
- # In order to disable manifest signatures, repos may set
- # "sign-manifests = false" in metadata/layout.conf. This
- # can be used to prevent merge conflicts like those that
- # thin-manifests is designed to prevent.
- sign_manifests = "sign" in repoman_settings.features and \
- repo_config.sign_manifest
+ signal.signal(signal.SIGINT, exithandler)
+ signal.signal(signal.SIGTERM, exithandler)
+ signal.signal(signal.SIGPIPE, signal.SIG_DFL)
- if repo_config.sign_manifest and repo_config.name == "gentoo" and \
- options.mode in ("commit",) and not sign_manifests:
- msg = ("The '%s' repository has manifest signatures enabled, "
- "but FEATURES=sign is currently disabled. In order to avoid this "
- "warning, enable FEATURES=sign in make.conf. Alternatively, "
- "repositories can disable manifest signatures by setting "
- "'sign-manifests = false' in metadata/layout.conf.") % \
- (repo_config.name,)
- for line in textwrap.wrap(msg, 60):
- logging.warning(line)
-
- if sign_manifests and options.mode in ("commit",) and \
- repoman_settings.get("PORTAGE_GPG_KEY") and \
- re.match(r'^%s$' % GPG_KEY_ID_REGEX,
- repoman_settings["PORTAGE_GPG_KEY"]) is None:
- logging.error("PORTAGE_GPG_KEY value is invalid: %s" %
- repoman_settings["PORTAGE_GPG_KEY"])
- sys.exit(1)
-
- manifest_hashes = repo_config.manifest_hashes
- if manifest_hashes is None:
- manifest_hashes = portage.const.MANIFEST2_HASH_DEFAULTS
-
- if options.mode in ("commit", "fix", "manifest"):
- if portage.const.MANIFEST2_REQUIRED_HASH not in manifest_hashes:
- msg = ("The 'manifest-hashes' setting in the '%s' repository's "
- "metadata/layout.conf does not contain the '%s' hash which "
- "is required by this portage version. You will have to "
- "upgrade portage if you want to generate valid manifests for "
- "this repository.") % \
- (repo_config.name, portage.const.MANIFEST2_REQUIRED_HASH)
- for line in textwrap.wrap(msg, 70):
- logging.error(line)
- sys.exit(1)
-
- unsupported_hashes = manifest_hashes.difference(
- portage.const.MANIFEST2_HASH_FUNCTIONS)
- if unsupported_hashes:
- msg = ("The 'manifest-hashes' setting in the '%s' repository's "
- "metadata/layout.conf contains one or more hash types '%s' "
- "which are not supported by this portage version. You will "
- "have to upgrade portage if you want to generate valid "
- "manifests for this repository.") % \
- (repo_config.name, " ".join(sorted(unsupported_hashes)))
- for line in textwrap.wrap(msg, 70):
- logging.error(line)
- sys.exit(1)
-
- if options.echangelog is None and repo_config.update_changelog:
- options.echangelog = 'y'
-
- if vcs is None:
- options.echangelog = 'n'
-
- # The --echangelog option causes automatic ChangeLog generation,
- # which invalidates changelog.ebuildadded and changelog.missing
- # checks.
- # Note: Some don't use ChangeLogs in distributed SCMs.
- # It will be generated on server side from scm log,
- # before package moves to the rsync server.
- # This is needed because they try to avoid merge collisions.
- # Gentoo's Council decided to always use the ChangeLog file.
- # TODO: shouldn't this just be switched on the repo, iso the VCS?
- check_changelog = options.echangelog not in ('y', 'force') and vcs in ('cvs', 'svn')
-
- if 'digest' in repoman_settings.features and options.digest != 'n':
- options.digest = 'y'
-
- logging.debug("vcs: %s" % (vcs,))
- logging.debug("repo config: %s" % (repo_config,))
- logging.debug("options: %s" % (options,))
-
- # It's confusing if these warnings are displayed without the user
- # being told which profile they come from, so disable them.
- env = os.environ.copy()
- env['FEATURES'] = env.get('FEATURES', '') + ' -unknown-features-warn'
-
- categories = []
- for path in repo_config.eclass_db.porttrees:
- categories.extend(portage.util.grabfile(
- os.path.join(path, 'profiles', 'categories')))
- repoman_settings.categories = frozenset(
- portage.util.stack_lists([categories], incremental=1))
- categories = repoman_settings.categories
-
- portdb.settings = repoman_settings
- root_config = RootConfig(repoman_settings, trees[root], None)
- # We really only need to cache the metadata that's necessary for visibility
- # filtering. Anything else can be discarded to reduce memory consumption.
- portdb._aux_cache_keys.clear()
- portdb._aux_cache_keys.update(["EAPI", "IUSE", "KEYWORDS", "repository", "SLOT"])
-
- reposplit = myreporoot.split(os.path.sep)
- repolevel = len(reposplit)
-
- # check if it's in $PORTDIR/$CATEGORY/$PN , otherwise bail if commiting.
- # Reason for this is if they're trying to commit in just $FILESDIR/*, the Manifest needs updating.
- # this check ensures that repoman knows where it is, and the manifest recommit is at least possible.
- if options.mode == 'commit' and repolevel not in [1, 2, 3]:
- print(red("***")+" Commit attempts *must* be from within a vcs co, category, or package directory.")
- print(red("***")+" Attempting to commit from a packages files directory will be blocked for instance.")
- print(red("***")+" This is intended behaviour, to ensure the manifest is recommitted for a package.")
- print(red("***"))
- err("Unable to identify level we're commiting from for %s" % '/'.join(reposplit))
-
- # Make startdir relative to the canonical repodir, so that we can pass
- # it to digestgen and it won't have to be canonicalized again.
- if repolevel == 1:
- startdir = repodir
- else:
- startdir = normalize_path(mydir)
- startdir = os.path.join(repodir, *startdir.split(os.sep)[-2 - repolevel + 3:])
-
- def caterror(mycat):
- err(mycat + " is not an official category. Skipping QA checks in this directory.\nPlease ensure that you add " + catdir + " to " + repodir + "/profiles/categories\nif it is a new category.")
-
- def repoman_getstatusoutput(cmd):
- """
- Implements an interface similar to getstatusoutput(), but with
- customized unicode handling (see bug #310789) and without the shell.
- """
- args = portage.util.shlex_split(cmd)
-
- if sys.hexversion < 0x3020000 and sys.hexversion >= 0x3000000 and \
- not os.path.isabs(args[0]):
- # Python 3.1 _execvp throws TypeError for non-absolute executable
- # path passed as bytes (see http://bugs.python.org/issue8513).
- fullname = find_binary(args[0])
- if fullname is None:
- raise portage.exception.CommandNotFound(args[0])
- args[0] = fullname
-
- encoding = _encodings['fs']
- args = [_unicode_encode(x,
- encoding=encoding, errors='strict') for x in args]
- proc = subprocess.Popen(args, stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT)
- output = portage._unicode_decode(proc.communicate()[0],
- encoding=encoding, errors='strict')
- if output and output[-1] == "\n":
- # getstatusoutput strips one newline
- output = output[:-1]
- return (proc.wait(), output)
-
- class repoman_popen(portage.proxy.objectproxy.ObjectProxy):
- """
- Implements an interface similar to os.popen(), but with customized
- unicode handling (see bug #310789) and without the shell.
- """
-
- __slots__ = ('_proc', '_stdout')
-
- def __init__(self, cmd):
- args = portage.util.shlex_split(cmd)
-
- if sys.hexversion < 0x3020000 and sys.hexversion >= 0x3000000 and \
- not os.path.isabs(args[0]):
- # Python 3.1 _execvp throws TypeError for non-absolute executable
- # path passed as bytes (see http://bugs.python.org/issue8513).
- fullname = find_binary(args[0])
- if fullname is None:
- raise portage.exception.CommandNotFound(args[0])
- args[0] = fullname
-
- encoding = _encodings['fs']
- args = [_unicode_encode(x,
- encoding=encoding, errors='strict') for x in args]
- proc = subprocess.Popen(args, stdout=subprocess.PIPE)
- object.__setattr__(self, '_proc', proc)
- object.__setattr__(self, '_stdout',
- codecs.getreader(encoding)(proc.stdout, 'strict'))
-
- def _get_target(self):
- return object.__getattribute__(self, '_stdout')
-
- __enter__ = _get_target
-
- def __exit__(self, exc_type, exc_value, traceback):
- proc = object.__getattribute__(self, '_proc')
- proc.wait()
- proc.stdout.close()
-
- class ProfileDesc(object):
- __slots__ = ('abs_path', 'arch', 'status', 'sub_path', 'tree_path',)
- def __init__(self, arch, status, sub_path, tree_path):
- self.arch = arch
- self.status = status
- if sub_path:
- sub_path = normalize_path(sub_path.lstrip(os.sep))
- self.sub_path = sub_path
- self.tree_path = tree_path
- if tree_path:
- self.abs_path = os.path.join(tree_path, 'profiles', self.sub_path)
- else:
- self.abs_path = tree_path
-
- def __str__(self):
- if self.sub_path:
- return self.sub_path
- return 'empty profile'
-
- profile_list = []
- valid_profile_types = frozenset(['dev', 'exp', 'stable'])
-
- # get lists of valid keywords, licenses, and use
- kwlist = set()
- liclist = set()
- uselist = set()
- global_pmasklines = []
-
- for path in portdb.porttrees:
- try:
- liclist.update(os.listdir(os.path.join(path, "licenses")))
- except OSError:
- pass
- kwlist.update(portage.grabfile(os.path.join(path,
- "profiles", "arch.list")))
-
- use_desc = portage.grabfile(os.path.join(path, 'profiles', 'use.desc'))
- for x in use_desc:
- x = x.split()
- if x:
- uselist.add(x[0])
-
- expand_desc_dir = os.path.join(path, 'profiles', 'desc')
- try:
- expand_list = os.listdir(expand_desc_dir)
- except OSError:
- pass
- else:
- for fn in expand_list:
- if not fn[-5:] == '.desc':
- continue
- use_prefix = fn[:-5].lower() + '_'
- for x in portage.grabfile(os.path.join(expand_desc_dir, fn)):
- x = x.split()
- if x:
- uselist.add(use_prefix + x[0])
-
- global_pmasklines.append(portage.util.grabfile_package(
- os.path.join(path, 'profiles', 'package.mask'), recursive=1, verify_eapi=True))
-
- desc_path = os.path.join(path, 'profiles', 'profiles.desc')
- try:
- desc_file = io.open(_unicode_encode(desc_path,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'], errors='replace')
- except EnvironmentError:
- pass
- else:
- for i, x in enumerate(desc_file):
- if x[0] == "#":
- continue
- arch = x.split()
- if len(arch) == 0:
- continue
- if len(arch) != 3:
- err("wrong format: \"" + bad(x.strip()) + "\" in " + \
- desc_path + " line %d" % (i + 1, ))
- elif arch[0] not in kwlist:
- err("invalid arch: \"" + bad(arch[0]) + "\" in " + \
- desc_path + " line %d" % (i + 1, ))
- elif arch[2] not in valid_profile_types:
- err("invalid profile type: \"" + bad(arch[2]) + "\" in " + \
- desc_path + " line %d" % (i + 1, ))
- profile_desc = ProfileDesc(arch[0], arch[2], arch[1], path)
- if not os.path.isdir(profile_desc.abs_path):
- logging.error(
- "Invalid %s profile (%s) for arch %s in %s line %d",
- arch[2], arch[1], arch[0], desc_path, i + 1)
- continue
- if os.path.exists(
- os.path.join(profile_desc.abs_path, 'deprecated')):
- continue
- profile_list.append(profile_desc)
- desc_file.close()
-
- repoman_settings['PORTAGE_ARCHLIST'] = ' '.join(sorted(kwlist))
- repoman_settings.backup_changes('PORTAGE_ARCHLIST')
-
- global_pmasklines = portage.util.stack_lists(global_pmasklines, incremental=1)
- global_pmaskdict = {}
- for x in global_pmasklines:
- global_pmaskdict.setdefault(x.cp, []).append(x)
- del global_pmasklines
-
- def has_global_mask(pkg):
- mask_atoms = global_pmaskdict.get(pkg.cp)
- if mask_atoms:
- pkg_list = [pkg]
- for x in mask_atoms:
- if portage.dep.match_from_list(x, pkg_list):
- return x
- return None
-
- # Ensure that profile sub_path attributes are unique. Process in reverse order
- # so that profiles with duplicate sub_path from overlays will override
- # profiles with the same sub_path from parent repos.
- profiles = {}
- profile_list.reverse()
- profile_sub_paths = set()
- for prof in profile_list:
- if prof.sub_path in profile_sub_paths:
- continue
- profile_sub_paths.add(prof.sub_path)
- profiles.setdefault(prof.arch, []).append(prof)
-
- # Use an empty profile for checking dependencies of
- # packages that have empty KEYWORDS.
- prof = ProfileDesc('**', 'stable', '', '')
- profiles.setdefault(prof.arch, []).append(prof)
-
- for x in repoman_settings.archlist():
- if x[0] == "~":
- continue
- if x not in profiles:
- print(red("\"" + x + "\" doesn't have a valid profile listed in profiles.desc."))
- print(red("You need to either \"cvs update\" your profiles dir or follow this"))
- print(red("up with the " + x + " team."))
- print()
-
- liclist_deprecated = set()
- if "DEPRECATED" in repoman_settings._license_manager._license_groups:
- liclist_deprecated.update(
- repoman_settings._license_manager.expandLicenseTokens(["@DEPRECATED"]))
-
- if not liclist:
- logging.fatal("Couldn't find licenses?")
- sys.exit(1)
-
- if not kwlist:
- logging.fatal("Couldn't read KEYWORDS from arch.list")
- sys.exit(1)
-
- if not uselist:
- logging.fatal("Couldn't find use.desc?")
+ except KeyboardInterrupt:
sys.exit(1)
- scanlist = []
- if repolevel == 2:
- # we are inside a category directory
- catdir = reposplit[-1]
- if catdir not in categories:
- caterror(catdir)
- mydirlist = os.listdir(startdir)
- for x in mydirlist:
- if x == "CVS" or x.startswith("."):
- continue
- if os.path.isdir(startdir + "/" + x):
- scanlist.append(catdir + "/" + x)
- repo_subdir = catdir + os.sep
- elif repolevel == 1:
- for x in categories:
- if not os.path.isdir(startdir + "/" + x):
- continue
- for y in os.listdir(startdir + "/" + x):
- if y == "CVS" or y.startswith("."):
- continue
- if os.path.isdir(startdir + "/" + x + "/" + y):
- scanlist.append(x + "/" + y)
- repo_subdir = ""
- elif repolevel == 3:
- catdir = reposplit[-2]
- if catdir not in categories:
- caterror(catdir)
- scanlist.append(catdir + "/" + reposplit[-1])
- repo_subdir = scanlist[-1] + os.sep
- else:
- msg = 'Repoman is unable to determine PORTDIR or PORTDIR_OVERLAY' + \
- ' from the current working directory'
- logging.critical(msg)
- sys.exit(1)
-
- repo_subdir_len = len(repo_subdir)
- scanlist.sort()
-
- logging.debug("Found the following packages to scan:\n%s" % '\n'.join(scanlist))
-
- def vcs_files_to_cps(vcs_file_iter):
- """
- Iterate over the given modified file paths returned from the vcs,
- and return a frozenset containing category/pn strings for each
- modified package.
- """
-
- modified_cps = []
-
- if repolevel == 3:
- if reposplit[-2] in categories and \
- next(vcs_file_iter, None) is not None:
- modified_cps.append("/".join(reposplit[-2:]))
-
- elif repolevel == 2:
- category = reposplit[-1]
- if category in categories:
- for filename in vcs_file_iter:
- f_split = filename.split(os.sep)
- # ['.', pn, ...]
- if len(f_split) > 2:
- modified_cps.append(category + "/" + f_split[1])
-
- else:
- # repolevel == 1
- for filename in vcs_file_iter:
- f_split = filename.split(os.sep)
- # ['.', category, pn, ...]
- if len(f_split) > 3 and f_split[1] in categories:
- modified_cps.append("/".join(f_split[1:3]))
-
- # Exclude packages that have been removed, since calling
- # code assumes that the packages exist.
- return frozenset(x for x in frozenset(modified_cps)
- if os.path.exists(os.path.join(repodir, x)))
-
- def git_supports_gpg_sign():
- status, cmd_output = \
- repoman_getstatusoutput("git --version")
- cmd_output = cmd_output.split()
- if cmd_output:
- version = re.match(r'^(\d+)\.(\d+)\.(\d+)', cmd_output[-1])
- if version is not None:
- version = [int(x) for x in version.groups()]
- if version[0] > 1 or \
- (version[0] == 1 and version[1] > 7) or \
- (version[0] == 1 and version[1] == 7 and version[2] >= 9):
- return True
- return False
-
- def dev_keywords(profiles):
- """
- Create a set of KEYWORDS values that exist in 'dev'
- profiles. These are used
- to trigger a message notifying the user when they might
- want to add the --include-dev option.
- """
- type_arch_map = {}
- for arch, arch_profiles in profiles.items():
- for prof in arch_profiles:
- arch_set = type_arch_map.get(prof.status)
- if arch_set is None:
- arch_set = set()
- type_arch_map[prof.status] = arch_set
- arch_set.add(arch)
-
- dev_keywords = type_arch_map.get('dev', set())
- dev_keywords.update(['~' + arch for arch in dev_keywords])
- return frozenset(dev_keywords)
-
- dev_keywords = dev_keywords(profiles)
-
- stats = {}
- fails = {}
-
- for x in qacats:
- stats[x] = 0
- fails[x] = []
-
- xmllint_capable = False
- metadata_dtd = os.path.join(repoman_settings["DISTDIR"], 'metadata.dtd')
-
- def fetch_metadata_dtd():
- """
- Fetch metadata.dtd if it doesn't exist or the ctime is older than
- metadata_dtd_ctime_interval.
- @rtype: bool
- @return: True if successful, otherwise False
- """
-
- must_fetch = True
- metadata_dtd_st = None
- current_time = int(time.time())
- try:
- metadata_dtd_st = os.stat(metadata_dtd)
- except EnvironmentError as e:
- if e.errno not in (errno.ENOENT, errno.ESTALE):
- raise
- del e
- else:
- # Trigger fetch if metadata.dtd mtime is old or clock is wrong.
- if abs(current_time - metadata_dtd_st.st_ctime) \
- < metadata_dtd_ctime_interval:
- must_fetch = False
-
- if must_fetch:
- print()
- print(green("***") + " the local copy of metadata.dtd " + \
- "needs to be refetched, doing that now")
- print()
- parsed_url = urlparse(metadata_dtd_uri)
- setting = 'FETCHCOMMAND_' + parsed_url.scheme.upper()
- fcmd = repoman_settings.get(setting)
- if not fcmd:
- fcmd = repoman_settings.get('FETCHCOMMAND')
- if not fcmd:
- logging.error("FETCHCOMMAND is unset")
- return False
-
- destdir = repoman_settings["DISTDIR"]
- fd, metadata_dtd_tmp = tempfile.mkstemp(
- prefix='metadata.dtd.', dir=destdir)
- os.close(fd)
-
- try:
- if not portage.getbinpkg.file_get(metadata_dtd_uri,
- destdir, fcmd=fcmd,
- filename=os.path.basename(metadata_dtd_tmp)):
- logging.error("failed to fetch metadata.dtd from '%s'" %
- metadata_dtd_uri)
- return False
-
- try:
- portage.util.apply_secpass_permissions(metadata_dtd_tmp,
- gid=portage.data.portage_gid, mode=0o664, mask=0o2)
- except portage.exception.PortageException:
- pass
-
- os.rename(metadata_dtd_tmp, metadata_dtd)
- finally:
- try:
- os.unlink(metadata_dtd_tmp)
- except OSError:
- pass
-
- return True
-
- if options.mode == "manifest":
- pass
- elif not find_binary('xmllint'):
- print(red("!!! xmllint not found. Can't check metadata.xml.\n"))
- if options.xml_parse or repolevel == 3:
- print(red("!!!")+" sorry, xmllint is needed. failing\n")
- sys.exit(1)
- else:
- if not fetch_metadata_dtd():
- sys.exit(1)
- # this can be problematic if xmllint changes their output
- xmllint_capable = True
-
- if options.mode == 'commit' and vcs:
- utilities.detect_vcs_conflicts(options, vcs)
-
- if options.mode == "manifest":
- pass
- elif options.pretend:
- print(green("\nRepoMan does a once-over of the neighborhood..."))
- else:
- print(green("\nRepoMan scours the neighborhood..."))
-
- new_ebuilds = set()
- modified_ebuilds = set()
- modified_changelogs = set()
- mychanged = []
- mynew = []
- myremoved = []
-
- if (options.if_modified != "y" and
- options.mode in ("manifest", "manifest-check")):
- pass
- elif vcs == "cvs":
- mycvstree = cvstree.getentries("./", recursive=1)
- mychanged = cvstree.findchanged(mycvstree, recursive=1, basedir="./")
- mynew = cvstree.findnew(mycvstree, recursive=1, basedir="./")
- if options.if_modified == "y":
- myremoved = cvstree.findremoved(mycvstree, recursive=1, basedir="./")
-
- elif vcs == "svn":
- with repoman_popen("svn status") as f:
- svnstatus = f.readlines()
- mychanged = ["./" + elem.split()[-1:][0] for elem in svnstatus if elem and elem[:1] in "MR"]
- mynew = ["./" + elem.split()[-1:][0] for elem in svnstatus if elem.startswith("A")]
- if options.if_modified == "y":
- myremoved = ["./" + elem.split()[-1:][0] for elem in svnstatus if elem.startswith("D")]
-
- elif vcs == "git":
- with repoman_popen("git diff-index --name-only "
- "--relative --diff-filter=M HEAD") as f:
- mychanged = f.readlines()
- mychanged = ["./" + elem[:-1] for elem in mychanged]
-
- with repoman_popen("git diff-index --name-only "
- "--relative --diff-filter=A HEAD") as f:
- mynew = f.readlines()
- mynew = ["./" + elem[:-1] for elem in mynew]
- if options.if_modified == "y":
- with repoman_popen("git diff-index --name-only "
- "--relative --diff-filter=D HEAD") as f:
- myremoved = f.readlines()
- myremoved = ["./" + elem[:-1] for elem in myremoved]
-
- elif vcs == "bzr":
- with repoman_popen("bzr status -S .") as f:
- bzrstatus = f.readlines()
- mychanged = ["./" + elem.split()[-1:][0].split('/')[-1:][0] for elem in bzrstatus if elem and elem[1:2] == "M"]
- mynew = ["./" + elem.split()[-1:][0].split('/')[-1:][0] for elem in bzrstatus if elem and (elem[1:2] == "NK" or elem[0:1] == "R")]
- if options.if_modified == "y":
- myremoved = ["./" + elem.split()[-3:-2][0].split('/')[-1:][0] for elem in bzrstatus if elem and (elem[1:2] == "K" or elem[0:1] == "R")]
-
- elif vcs == "hg":
- with repoman_popen("hg status --no-status --modified .") as f:
- mychanged = f.readlines()
- mychanged = ["./" + elem.rstrip() for elem in mychanged]
- with repoman_popen("hg status --no-status --added .") as f:
- mynew = f.readlines()
- mynew = ["./" + elem.rstrip() for elem in mynew]
- if options.if_modified == "y":
- with repoman_popen("hg status --no-status --removed .") as f:
- myremoved = f.readlines()
- myremoved = ["./" + elem.rstrip() for elem in myremoved]
-
- if vcs:
- new_ebuilds.update(x for x in mynew if x.endswith(".ebuild"))
- modified_ebuilds.update(x for x in mychanged if x.endswith(".ebuild"))
- modified_changelogs.update(x for x in chain(mychanged, mynew) \
- if os.path.basename(x) == "ChangeLog")
-
- def vcs_new_changed(relative_path):
- for x in chain(mychanged, mynew):
- if x == relative_path:
- return True
- return False
-
- have_pmasked = False
- have_dev_keywords = False
- dofail = 0
-
- # NOTE: match-all caches are not shared due to potential
- # differences between profiles in _get_implicit_iuse.
- arch_caches = {}
- arch_xmatch_caches = {}
- shared_xmatch_caches = {"cp-list":{}}
-
- include_arches = None
- if options.include_arches:
- include_arches = set()
- include_arches.update(*[x.split() for x in options.include_arches])
-
- # Disable the "ebuild.notadded" check when not in commit mode and
- # running `svn status` in every package dir will be too expensive.
-
- check_ebuild_notadded = not \
- (vcs == "svn" and repolevel < 3 and options.mode != "commit")
-
- # Build a regex from thirdpartymirrors for the SRC_URI.mirror check.
- thirdpartymirrors = {}
- for k, v in repoman_settings.thirdpartymirrors().items():
- for v in v:
- if not v.endswith("/"):
- v += "/"
- thirdpartymirrors[v] = k
-
- class _XMLParser(xml.etree.ElementTree.XMLParser):
-
- def __init__(self, data, **kwargs):
- xml.etree.ElementTree.XMLParser.__init__(self, **kwargs)
- self._portage_data = data
- if hasattr(self, 'parser'):
- self._base_XmlDeclHandler = self.parser.XmlDeclHandler
- self.parser.XmlDeclHandler = self._portage_XmlDeclHandler
- self._base_StartDoctypeDeclHandler = \
- self.parser.StartDoctypeDeclHandler
- self.parser.StartDoctypeDeclHandler = \
- self._portage_StartDoctypeDeclHandler
-
- def _portage_XmlDeclHandler(self, version, encoding, standalone):
- if self._base_XmlDeclHandler is not None:
- self._base_XmlDeclHandler(version, encoding, standalone)
- self._portage_data["XML_DECLARATION"] = (version, encoding, standalone)
-
- def _portage_StartDoctypeDeclHandler(self, doctypeName, systemId, publicId,
- has_internal_subset):
- if self._base_StartDoctypeDeclHandler is not None:
- self._base_StartDoctypeDeclHandler(doctypeName, systemId, publicId,
- has_internal_subset)
- self._portage_data["DOCTYPE"] = (doctypeName, systemId, publicId)
-
- class _MetadataTreeBuilder(xml.etree.ElementTree.TreeBuilder):
- """
- Implements doctype() as required to avoid deprecation warnings with
- >=python-2.7.
- """
- def doctype(self, name, pubid, system):
- pass
+ from os import path as osp
+ if osp.isfile(osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), ".portage_not_installed")):
+ pym_path = osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym")
+ sys.path.insert(0, pym_path)
+ import portage
+ portage._internal_caller = True
+ from repoman.main import repoman_main
try:
- herd_base = make_herd_base(os.path.join(repoman_settings["PORTDIR"], "metadata/herds.xml"))
- except (EnvironmentError, ParseError, PermissionDenied) as e:
- err(str(e))
- except FileNotFound:
- # TODO: Download as we do for metadata.dtd, but add a way to
- # disable for non-gentoo repoman users who may not have herds.
- herd_base = None
-
- effective_scanlist = scanlist
- if options.if_modified == "y":
- effective_scanlist = sorted(vcs_files_to_cps(
- chain(mychanged, mynew, myremoved)))
-
- for x in effective_scanlist:
- # ebuilds and digests added to cvs respectively.
- logging.info("checking package %s" % x)
- # save memory by discarding xmatch caches from previous package(s)
- arch_xmatch_caches.clear()
- eadded = []
- catdir, pkgdir = x.split("/")
- checkdir = repodir + "/" + x
- checkdir_relative = ""
- if repolevel < 3:
- checkdir_relative = os.path.join(pkgdir, checkdir_relative)
- if repolevel < 2:
- checkdir_relative = os.path.join(catdir, checkdir_relative)
- checkdir_relative = os.path.join(".", checkdir_relative)
- generated_manifest = False
-
- if options.mode == "manifest" or \
- (options.mode != 'manifest-check' and options.digest == 'y') or \
- options.mode in ('commit', 'fix') and not options.pretend:
- auto_assumed = set()
- fetchlist_dict = portage.FetchlistDict(checkdir,
- repoman_settings, portdb)
- if options.mode == 'manifest' and options.force:
- portage._doebuild_manifest_exempt_depend += 1
- try:
- distdir = repoman_settings['DISTDIR']
- mf = repoman_settings.repositories.get_repo_for_location(
- os.path.dirname(os.path.dirname(checkdir)))
- mf = mf.load_manifest(checkdir, distdir,
- fetchlist_dict=fetchlist_dict)
- mf.create(requiredDistfiles=None,
- assumeDistHashesAlways=True)
- for distfiles in fetchlist_dict.values():
- for distfile in distfiles:
- if os.path.isfile(os.path.join(distdir, distfile)):
- mf.fhashdict['DIST'].pop(distfile, None)
- else:
- auto_assumed.add(distfile)
- mf.write()
- finally:
- portage._doebuild_manifest_exempt_depend -= 1
-
- repoman_settings["O"] = checkdir
- try:
- generated_manifest = digestgen(
- mysettings=repoman_settings, myportdb=portdb)
- except portage.exception.PermissionDenied as e:
- generated_manifest = False
- writemsg_level("!!! Permission denied: '%s'\n" % (e,),
- level=logging.ERROR, noiselevel=-1)
-
- if not generated_manifest:
- print("Unable to generate manifest.")
- dofail = 1
-
- if options.mode == "manifest":
- if not dofail and options.force and auto_assumed and \
- 'assume-digests' in repoman_settings.features:
- # Show which digests were assumed despite the --force option
- # being given. This output will already have been shown by
- # digestgen() if assume-digests is not enabled, so only show
- # it here if assume-digests is enabled.
- pkgs = list(fetchlist_dict)
- pkgs.sort()
- portage.writemsg_stdout(" digest.assumed" + \
- portage.output.colorize("WARN",
- str(len(auto_assumed)).rjust(18)) + "\n")
- for cpv in pkgs:
- fetchmap = fetchlist_dict[cpv]
- pf = portage.catsplit(cpv)[1]
- for distfile in sorted(fetchmap):
- if distfile in auto_assumed:
- portage.writemsg_stdout(
- " %s::%s\n" % (pf, distfile))
- continue
- elif dofail:
- sys.exit(1)
-
- if not generated_manifest:
- repoman_settings['O'] = checkdir
- repoman_settings['PORTAGE_QUIET'] = '1'
- if not portage.digestcheck([], repoman_settings, strict=1):
- stats["manifest.bad"] += 1
- fails["manifest.bad"].append(os.path.join(x, 'Manifest'))
- repoman_settings.pop('PORTAGE_QUIET', None)
-
- if options.mode == 'manifest-check':
- continue
-
- checkdirlist = os.listdir(checkdir)
- ebuildlist = []
- pkgs = {}
- allvalid = True
- for y in checkdirlist:
- if (y in no_exec or y.endswith(".ebuild")) and \
- stat.S_IMODE(os.stat(os.path.join(checkdir, y)).st_mode) & 0o111:
- stats["file.executable"] += 1
- fails["file.executable"].append(os.path.join(checkdir, y))
- if y.endswith(".ebuild"):
- pf = y[:-7]
- ebuildlist.append(pf)
- cpv = "%s/%s" % (catdir, pf)
- try:
- myaux = dict(zip(allvars, portdb.aux_get(cpv, allvars)))
- except KeyError:
- allvalid = False
- stats["ebuild.syntax"] += 1
- fails["ebuild.syntax"].append(os.path.join(x, y))
- continue
- except IOError:
- allvalid = False
- stats["ebuild.output"] += 1
- fails["ebuild.output"].append(os.path.join(x, y))
- continue
- if not portage.eapi_is_supported(myaux["EAPI"]):
- allvalid = False
- stats["EAPI.unsupported"] += 1
- fails["EAPI.unsupported"].append(os.path.join(x, y))
- continue
- pkgs[pf] = Package(cpv=cpv, metadata=myaux,
- root_config=root_config, type_name="ebuild")
-
- slot_keywords = {}
-
- if len(pkgs) != len(ebuildlist):
- # If we can't access all the metadata then it's totally unsafe to
- # commit since there's no way to generate a correct Manifest.
- # Do not try to do any more QA checks on this package since missing
- # metadata leads to false positives for several checks, and false
- # positives confuse users.
- can_force = False
- continue
-
- # Sort ebuilds in ascending order for the KEYWORDS.dropped check.
- ebuildlist = sorted(pkgs.values())
- ebuildlist = [pkg.pf for pkg in ebuildlist]
-
- for y in checkdirlist:
- index = repo_config.find_invalid_path_char(y)
- if index != -1:
- y_relative = os.path.join(checkdir_relative, y)
- if vcs is not None and not vcs_new_changed(y_relative):
- # If the file isn't in the VCS new or changed set, then
- # assume that it's an irrelevant temporary file (Manifest
- # entries are not generated for file names containing
- # prohibited characters). See bug #406877.
- index = -1
- if index != -1:
- stats["file.name"] += 1
- fails["file.name"].append("%s/%s: char '%s'" % \
- (checkdir, y, y[index]))
-
- if not (y in ("ChangeLog", "metadata.xml") or y.endswith(".ebuild")):
- continue
- f = None
- try:
- line = 1
- f = io.open(_unicode_encode(os.path.join(checkdir, y),
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'])
- for l in f:
- line += 1
- except UnicodeDecodeError as ue:
- stats["file.UTF8"] += 1
- s = ue.object[:ue.start]
- l2 = s.count("\n")
- line += l2
- if l2 != 0:
- s = s[s.rfind("\n") + 1:]
- fails["file.UTF8"].append("%s/%s: line %i, just after: '%s'" % (checkdir, y, line, s))
- finally:
- if f is not None:
- f.close()
-
- if vcs in ("git", "hg") and check_ebuild_notadded:
- if vcs == "git":
- myf = repoman_popen("git ls-files --others %s" % \
- (portage._shell_quote(checkdir_relative),))
- if vcs == "hg":
- myf = repoman_popen("hg status --no-status --unknown %s" % \
- (portage._shell_quote(checkdir_relative),))
- for l in myf:
- if l[:-1][-7:] == ".ebuild":
- stats["ebuild.notadded"] += 1
- fails["ebuild.notadded"].append(
- os.path.join(x, os.path.basename(l[:-1])))
- myf.close()
-
- if vcs in ("cvs", "svn", "bzr") and check_ebuild_notadded:
- try:
- if vcs == "cvs":
- myf = open(checkdir + "/CVS/Entries", "r")
- if vcs == "svn":
- myf = repoman_popen("svn status --depth=files --verbose " +
- portage._shell_quote(checkdir))
- if vcs == "bzr":
- myf = repoman_popen("bzr ls -v --kind=file " +
- portage._shell_quote(checkdir))
- myl = myf.readlines()
- myf.close()
- for l in myl:
- if vcs == "cvs":
- if l[0] != "/":
- continue
- splitl = l[1:].split("/")
- if not len(splitl):
- continue
- if splitl[0][-7:] == ".ebuild":
- eadded.append(splitl[0][:-7])
- if vcs == "svn":
- if l[:1] == "?":
- continue
- if l[:7] == ' >':
- # tree conflict, new in subversion 1.6
- continue
- l = l.split()[-1]
- if l[-7:] == ".ebuild":
- eadded.append(os.path.basename(l[:-7]))
- if vcs == "bzr":
- if l[1:2] == "?":
- continue
- l = l.split()[-1]
- if l[-7:] == ".ebuild":
- eadded.append(os.path.basename(l[:-7]))
- if vcs == "svn":
- myf = repoman_popen("svn status " +
- portage._shell_quote(checkdir))
- myl = myf.readlines()
- myf.close()
- for l in myl:
- if l[0] == "A":
- l = l.rstrip().split(' ')[-1]
- if l[-7:] == ".ebuild":
- eadded.append(os.path.basename(l[:-7]))
- except IOError:
- if vcs == "cvs":
- stats["CVS/Entries.IO_error"] += 1
- fails["CVS/Entries.IO_error"].append(checkdir + "/CVS/Entries")
- else:
- raise
- continue
-
- mf = repoman_settings.repositories.get_repo_for_location(
- os.path.dirname(os.path.dirname(checkdir)))
- mf = mf.load_manifest(checkdir, repoman_settings["DISTDIR"])
- mydigests = mf.getTypeDigests("DIST")
-
- fetchlist_dict = portage.FetchlistDict(checkdir, repoman_settings, portdb)
- myfiles_all = []
- src_uri_error = False
- for mykey in fetchlist_dict:
- try:
- myfiles_all.extend(fetchlist_dict[mykey])
- except portage.exception.InvalidDependString as e:
- src_uri_error = True
- try:
- portdb.aux_get(mykey, ["SRC_URI"])
- except KeyError:
- # This will be reported as an "ebuild.syntax" error.
- pass
- else:
- stats["SRC_URI.syntax"] += 1
- fails["SRC_URI.syntax"].append(
- "%s.ebuild SRC_URI: %s" % (mykey, e))
- del fetchlist_dict
- if not src_uri_error:
- # This test can produce false positives if SRC_URI could not
- # be parsed for one or more ebuilds. There's no point in
- # producing a false error here since the root cause will
- # produce a valid error elsewhere, such as "SRC_URI.syntax"
- 		# or "ebuild.syntax".
- myfiles_all = set(myfiles_all)
- for entry in mydigests:
- if entry not in myfiles_all:
- stats["digest.unused"] += 1
- fails["digest.unused"].append(checkdir + "::" + entry)
- for entry in myfiles_all:
- if entry not in mydigests:
- stats["digest.missing"] += 1
- fails["digest.missing"].append(checkdir + "::" + entry)
- del myfiles_all
-
- if os.path.exists(checkdir + "/files"):
- filesdirlist = os.listdir(checkdir + "/files")
-
- # recurse through files directory
- # use filesdirlist as a stack, appending directories as needed so people can't hide > 20k files in a subdirectory.
- while filesdirlist:
- y = filesdirlist.pop(0)
- relative_path = os.path.join(x, "files", y)
- full_path = os.path.join(repodir, relative_path)
- try:
- mystat = os.stat(full_path)
- except OSError as oe:
- if oe.errno == 2:
- 					# Don't worry about it; the file was likely removed by the fix above.
- continue
- else:
- raise oe
- if S_ISDIR(mystat.st_mode):
- 				# !!! VCS "portability" alert! Need some function isVcsDir() or similar !!!
- if y == "CVS" or y == ".svn":
- continue
- for z in os.listdir(checkdir + "/files/" + y):
- if z == "CVS" or z == ".svn":
- continue
- filesdirlist.append(y + "/" + z)
- 			# Current policy is no files over 20 KiB; these are the checks. A file size between
- 			# 20 KiB and 60 KiB causes a warning, while a file size over 60 KiB causes an error.
- elif mystat.st_size > 61440:
- stats["file.size.fatal"] += 1
- fails["file.size.fatal"].append("(" + str(mystat.st_size//1024) + " KiB) " + x + "/files/" + y)
- elif mystat.st_size > 20480:
- stats["file.size"] += 1
- fails["file.size"].append("(" + str(mystat.st_size//1024) + " KiB) " + x + "/files/" + y)
-
- index = repo_config.find_invalid_path_char(y)
- if index != -1:
- y_relative = os.path.join(checkdir_relative, "files", y)
- if vcs is not None and not vcs_new_changed(y_relative):
- # If the file isn't in the VCS new or changed set, then
- # assume that it's an irrelevant temporary file (Manifest
- # entries are not generated for file names containing
- # prohibited characters). See bug #406877.
- index = -1
- if index != -1:
- stats["file.name"] += 1
- fails["file.name"].append("%s/files/%s: char '%s'" % \
- (checkdir, y, y[index]))
- del mydigests
-
- if check_changelog and "ChangeLog" not in checkdirlist:
- stats["changelog.missing"] += 1
- fails["changelog.missing"].append(x + "/ChangeLog")
-
- musedict = {}
- # metadata.xml file check
- if "metadata.xml" not in checkdirlist:
- stats["metadata.missing"] += 1
- fails["metadata.missing"].append(x + "/metadata.xml")
- # metadata.xml parse check
- else:
- metadata_bad = False
- xml_info = {}
- xml_parser = _XMLParser(xml_info, target=_MetadataTreeBuilder())
-
- # read metadata.xml into memory
- try:
- _metadata_xml = xml.etree.ElementTree.parse(
- _unicode_encode(os.path.join(checkdir, "metadata.xml"),
- encoding=_encodings['fs'], errors='strict'),
- parser=xml_parser)
- except (ExpatError, SyntaxError, EnvironmentError) as e:
- metadata_bad = True
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append("%s/metadata.xml: %s" % (x, e))
- del e
- else:
- if not hasattr(xml_parser, 'parser') or \
- sys.hexversion < 0x2070000 or \
- (sys.hexversion > 0x3000000 and sys.hexversion < 0x3020000):
- # doctype is not parsed with python 2.6 or 3.1
- pass
- else:
- if "XML_DECLARATION" not in xml_info:
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append("%s/metadata.xml: "
- "xml declaration is missing on first line, "
- "should be '%s'" % (x, metadata_xml_declaration))
- else:
- xml_version, xml_encoding, xml_standalone = \
- xml_info["XML_DECLARATION"]
- if xml_encoding is None or \
- xml_encoding.upper() != metadata_xml_encoding:
- stats["metadata.bad"] += 1
- if xml_encoding is None:
- encoding_problem = "but it is undefined"
- else:
- encoding_problem = "not '%s'" % xml_encoding
- fails["metadata.bad"].append("%s/metadata.xml: "
- "xml declaration encoding should be '%s', %s" %
- (x, metadata_xml_encoding, encoding_problem))
-
- if "DOCTYPE" not in xml_info:
- metadata_bad = True
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append("%s/metadata.xml: %s" % (x,
- "DOCTYPE is missing"))
- else:
- doctype_name, doctype_system, doctype_pubid = \
- xml_info["DOCTYPE"]
- if doctype_system != metadata_dtd_uri:
- stats["metadata.bad"] += 1
- if doctype_system is None:
- system_problem = "but it is undefined"
- else:
- system_problem = "not '%s'" % doctype_system
- fails["metadata.bad"].append("%s/metadata.xml: "
- "DOCTYPE: SYSTEM should refer to '%s', %s" %
- (x, metadata_dtd_uri, system_problem))
-
- if doctype_name != metadata_doctype_name:
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append("%s/metadata.xml: "
- "DOCTYPE: name should be '%s', not '%s'" %
- (x, metadata_doctype_name, doctype_name))
-
- # load USE flags from metadata.xml
- try:
- musedict = utilities.parse_metadata_use(_metadata_xml)
- except portage.exception.ParseError as e:
- metadata_bad = True
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append("%s/metadata.xml: %s" % (x, e))
- else:
- for atom in chain(*musedict.values()):
- if atom is None:
- continue
- try:
- atom = Atom(atom)
- except InvalidAtom as e:
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append(
- "%s/metadata.xml: Invalid atom: %s" % (x, e))
- else:
- if atom.cp != x:
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append(
- ("%s/metadata.xml: Atom contains "
- "unexpected cat/pn: %s") % (x, atom))
-
- # Run other metadata.xml checkers
- try:
- utilities.check_metadata(_metadata_xml, herd_base)
- except (utilities.UnknownHerdsError, ) as e:
- metadata_bad = True
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append("%s/metadata.xml: %s" % (x, e))
- del e
-
- # Only carry out if in package directory or check forced
- if xmllint_capable and not metadata_bad:
- 			# xmllint can produce garbage output even on success, so only dump
- 			# the output when it fails.
- st, out = repoman_getstatusoutput(
- "xmllint --nonet --noout --dtdvalid %s %s" % \
- (portage._shell_quote(metadata_dtd),
- portage._shell_quote(os.path.join(checkdir, "metadata.xml"))))
- if st != os.EX_OK:
- print(red("!!!") + " metadata.xml is invalid:")
- for z in out.splitlines():
- print(red("!!! ") + z)
- stats["metadata.bad"] += 1
- fails["metadata.bad"].append(x + "/metadata.xml")
-
- del metadata_bad
- muselist = frozenset(musedict)
-
- changelog_path = os.path.join(checkdir_relative, "ChangeLog")
- changelog_modified = changelog_path in modified_changelogs
-
- # detect unused local USE-descriptions
- used_useflags = set()
-
- for y in ebuildlist:
- relative_path = os.path.join(x, y + ".ebuild")
- full_path = os.path.join(repodir, relative_path)
- ebuild_path = y + ".ebuild"
- if repolevel < 3:
- ebuild_path = os.path.join(pkgdir, ebuild_path)
- if repolevel < 2:
- ebuild_path = os.path.join(catdir, ebuild_path)
- ebuild_path = os.path.join(".", ebuild_path)
- if check_changelog and not changelog_modified \
- and ebuild_path in new_ebuilds:
- stats['changelog.ebuildadded'] += 1
- fails['changelog.ebuildadded'].append(relative_path)
-
- if vcs in ("cvs", "svn", "bzr") and check_ebuild_notadded and y not in eadded:
- # ebuild not added to vcs
- stats["ebuild.notadded"] += 1
- fails["ebuild.notadded"].append(x + "/" + y + ".ebuild")
- myesplit = portage.pkgsplit(y)
- if myesplit is None or myesplit[0] != x.split("/")[-1] \
- or pv_toolong_re.search(myesplit[1]) \
- or pv_toolong_re.search(myesplit[2]):
- stats["ebuild.invalidname"] += 1
- fails["ebuild.invalidname"].append(x + "/" + y + ".ebuild")
- continue
- elif myesplit[0] != pkgdir:
- print(pkgdir, myesplit[0])
- stats["ebuild.namenomatch"] += 1
- fails["ebuild.namenomatch"].append(x + "/" + y + ".ebuild")
- continue
-
- pkg = pkgs[y]
-
- if pkg.invalid:
- allvalid = False
- for k, msgs in pkg.invalid.items():
- for msg in msgs:
- stats[k] += 1
- fails[k].append("%s: %s" % (relative_path, msg))
- continue
-
- myaux = pkg._metadata
- eapi = myaux["EAPI"]
- inherited = pkg.inherited
- live_ebuild = live_eclasses.intersection(inherited)
-
- if repo_config.eapi_is_banned(eapi):
- stats["repo.eapi.banned"] += 1
- fails["repo.eapi.banned"].append(
- "%s: %s" % (relative_path, eapi))
-
- elif repo_config.eapi_is_deprecated(eapi):
- stats["repo.eapi.deprecated"] += 1
- fails["repo.eapi.deprecated"].append(
- "%s: %s" % (relative_path, eapi))
-
- for k, v in myaux.items():
- if not isinstance(v, basestring):
- continue
- m = non_ascii_re.search(v)
- if m is not None:
- stats["variable.invalidchar"] += 1
- fails["variable.invalidchar"].append(
- ("%s: %s variable contains non-ASCII " + \
- "character at position %s") % \
- (relative_path, k, m.start() + 1))
-
- if not src_uri_error:
- # Check that URIs don't reference a server from thirdpartymirrors.
- for uri in portage.dep.use_reduce( \
- myaux["SRC_URI"], matchall=True, is_src_uri=True, eapi=eapi, flat=True):
- contains_mirror = False
- for mirror, mirror_alias in thirdpartymirrors.items():
- if uri.startswith(mirror):
- contains_mirror = True
- break
- if not contains_mirror:
- continue
-
- new_uri = "mirror://%s/%s" % (mirror_alias, uri[len(mirror):])
- stats["SRC_URI.mirror"] += 1
- fails["SRC_URI.mirror"].append(
- "%s: '%s' found in thirdpartymirrors, use '%s'" % \
- (relative_path, mirror, new_uri))
-
- if myaux.get("PROVIDE"):
- stats["virtual.oldstyle"] += 1
- fails["virtual.oldstyle"].append(relative_path)
-
- for pos, missing_var in enumerate(missingvars):
- if not myaux.get(missing_var):
- if catdir == "virtual" and \
- missing_var in ("HOMEPAGE", "LICENSE"):
- continue
- if live_ebuild and missing_var == "KEYWORDS":
- continue
- myqakey = missingvars[pos] + ".missing"
- stats[myqakey] += 1
- fails[myqakey].append(x + "/" + y + ".ebuild")
-
- if catdir == "virtual":
- for var in ("HOMEPAGE", "LICENSE"):
- if myaux.get(var):
- myqakey = var + ".virtual"
- stats[myqakey] += 1
- fails[myqakey].append(relative_path)
-
- # 14 is the length of DESCRIPTION=""
- if len(myaux['DESCRIPTION']) > max_desc_len:
- stats['DESCRIPTION.toolong'] += 1
- fails['DESCRIPTION.toolong'].append(
- "%s: DESCRIPTION is %d characters (max %d)" % \
- (relative_path, len(myaux['DESCRIPTION']), max_desc_len))
-
- keywords = myaux["KEYWORDS"].split()
- if not options.straight_to_stable:
- stable_keywords = []
- for keyword in keywords:
- if not keyword.startswith("~") and \
- not keyword.startswith("-"):
- stable_keywords.append(keyword)
- if stable_keywords:
- if ebuild_path in new_ebuilds and catdir != "virtual":
- stable_keywords.sort()
- stats["KEYWORDS.stable"] += 1
- fails["KEYWORDS.stable"].append(
- relative_path + " added with stable keywords: %s" % \
- " ".join(stable_keywords))
-
- ebuild_archs = set(kw.lstrip("~") for kw in keywords \
- if not kw.startswith("-"))
-
- previous_keywords = slot_keywords.get(pkg.slot)
- if previous_keywords is None:
- slot_keywords[pkg.slot] = set()
- elif ebuild_archs and "*" not in ebuild_archs and not live_ebuild:
- dropped_keywords = previous_keywords.difference(ebuild_archs)
- if dropped_keywords:
- stats["KEYWORDS.dropped"] += 1
- fails["KEYWORDS.dropped"].append(
- relative_path + ": %s" % \
- " ".join(sorted(dropped_keywords)))
-
- slot_keywords[pkg.slot].update(ebuild_archs)
-
- # KEYWORDS="-*" is a stupid replacement for package.mask and screws general KEYWORDS semantics
- if "-*" in keywords:
- haskeyword = False
- for kw in keywords:
- if kw[0] == "~":
- kw = kw[1:]
- if kw in kwlist:
- haskeyword = True
- if not haskeyword:
- stats["KEYWORDS.stupid"] += 1
- fails["KEYWORDS.stupid"].append(x + "/" + y + ".ebuild")
-
- """
- 		Ebuilds that inherit a "Live" eclass (darcs, subversion, git, cvs, etc.) should
- 		not be allowed to be marked stable.
- """
- if live_ebuild and repo_config.name == "gentoo":
- bad_stable_keywords = []
- for keyword in keywords:
- if not keyword.startswith("~") and \
- not keyword.startswith("-"):
- bad_stable_keywords.append(keyword)
- del keyword
- if bad_stable_keywords:
- stats["LIVEVCS.stable"] += 1
- fails["LIVEVCS.stable"].append(
- x + "/" + y + ".ebuild with stable keywords:%s " % \
- bad_stable_keywords)
- del bad_stable_keywords
-
- if keywords and not has_global_mask(pkg):
- stats["LIVEVCS.unmasked"] += 1
- fails["LIVEVCS.unmasked"].append(relative_path)
-
- if options.ignore_arches:
- arches = [[repoman_settings["ARCH"], repoman_settings["ARCH"],
- repoman_settings["ACCEPT_KEYWORDS"].split()]]
- else:
- arches = set()
- for keyword in keywords:
- if keyword[0] == "-":
- continue
- elif keyword[0] == "~":
- arch = keyword[1:]
- if arch == "*":
- for expanded_arch in profiles:
- if expanded_arch == "**":
- continue
- arches.add((keyword, expanded_arch,
- (expanded_arch, "~" + expanded_arch)))
- else:
- arches.add((keyword, arch, (arch, keyword)))
- else:
- if keyword == "*":
- for expanded_arch in profiles:
- if expanded_arch == "**":
- continue
- arches.add((keyword, expanded_arch,
- (expanded_arch,)))
- else:
- arches.add((keyword, keyword, (keyword,)))
- if not arches:
- # Use an empty profile for checking dependencies of
- # packages that have empty KEYWORDS.
- arches.add(('**', '**', ('**',)))
-
- unknown_pkgs = set()
- baddepsyntax = False
- badlicsyntax = False
- badprovsyntax = False
- catpkg = catdir + "/" + y
-
- inherited_java_eclass = "java-pkg-2" in inherited or \
- "java-pkg-opt-2" in inherited
- inherited_wxwidgets_eclass = "wxwidgets" in inherited
- operator_tokens = set(["||", "(", ")"])
- type_list, badsyntax = [], []
- for mytype in Package._dep_keys + ("LICENSE", "PROPERTIES", "PROVIDE"):
- mydepstr = myaux[mytype]
-
- buildtime = mytype in Package._buildtime_keys
- runtime = mytype in Package._runtime_keys
- token_class = None
- if mytype.endswith("DEPEND"):
- token_class = portage.dep.Atom
-
- try:
- atoms = portage.dep.use_reduce(mydepstr, matchall=1, flat=True, \
- is_valid_flag=pkg.iuse.is_valid_flag, token_class=token_class)
- except portage.exception.InvalidDependString as e:
- atoms = None
- badsyntax.append(str(e))
-
- if atoms and mytype.endswith("DEPEND"):
- if runtime and \
- "test?" in mydepstr.split():
- stats[mytype + '.suspect'] += 1
- fails[mytype + '.suspect'].append(relative_path + \
- ": 'test?' USE conditional in %s" % mytype)
-
- for atom in atoms:
- if atom == "||":
- continue
-
- is_blocker = atom.blocker
-
- # Skip dependency.unknown for blockers, so that we
- # don't encourage people to remove necessary blockers,
- # as discussed in bug 382407. We use atom.without_use
- # due to bug 525376.
- if not is_blocker and \
- not portdb.xmatch("match-all", atom.without_use) and \
- not atom.cp.startswith("virtual/"):
- unknown_pkgs.add((mytype, atom.unevaluated_atom))
-
- if catdir != "virtual":
- if not is_blocker and \
- atom.cp in suspect_virtual:
- stats['virtual.suspect'] += 1
- fails['virtual.suspect'].append(
- relative_path +
- ": %s: consider using '%s' instead of '%s'" %
- (mytype, suspect_virtual[atom.cp], atom))
- if not is_blocker and \
- atom.cp.startswith("perl-core/"):
- stats['dependency.perlcore'] += 1
- fails['dependency.perlcore'].append(
- relative_path +
- ": %s: please use '%s' instead of '%s'" %
- (mytype, atom.replace("perl-core/","virtual/perl-"), atom))
-
- if buildtime and \
- not is_blocker and \
- not inherited_java_eclass and \
- atom.cp == "virtual/jdk":
- stats['java.eclassesnotused'] += 1
- fails['java.eclassesnotused'].append(relative_path)
- elif buildtime and \
- not is_blocker and \
- not inherited_wxwidgets_eclass and \
- atom.cp == "x11-libs/wxGTK":
- stats['wxwidgets.eclassnotused'] += 1
- fails['wxwidgets.eclassnotused'].append(
- (relative_path + ": %ss on x11-libs/wxGTK"
- " without inheriting wxwidgets.eclass") % mytype)
- elif runtime:
- if not is_blocker and \
- atom.cp in suspect_rdepend:
- stats[mytype + '.suspect'] += 1
- fails[mytype + '.suspect'].append(
- relative_path + ": '%s'" % atom)
-
- if atom.operator == "~" and \
- portage.versions.catpkgsplit(atom.cpv)[3] != "r0":
- qacat = 'dependency.badtilde'
- stats[qacat] += 1
- fails[qacat].append(
- (relative_path + ": %s uses the ~ operator"
- " with a non-zero revision:" + \
- " '%s'") % (mytype, atom))
-
- check_missingslot(atom, mytype, eapi, portdb, stats, fails,
- relative_path, myaux)
-
- type_list.extend([mytype] * (len(badsyntax) - len(type_list)))
-
- for m, b in zip(type_list, badsyntax):
- if m.endswith("DEPEND"):
- qacat = "dependency.syntax"
- else:
- qacat = m + ".syntax"
- stats[qacat] += 1
- fails[qacat].append("%s: %s: %s" % (relative_path, m, b))
-
- badlicsyntax = len([z for z in type_list if z == "LICENSE"])
- badprovsyntax = len([z for z in type_list if z == "PROVIDE"])
- baddepsyntax = len(type_list) != badlicsyntax + badprovsyntax
- badlicsyntax = badlicsyntax > 0
- badprovsyntax = badprovsyntax > 0
-
- # uselist checks - global
- myuse = []
- default_use = []
- for myflag in myaux["IUSE"].split():
- flag_name = myflag.lstrip("+-")
- used_useflags.add(flag_name)
- if myflag != flag_name:
- default_use.append(myflag)
- if flag_name not in uselist:
- myuse.append(flag_name)
-
- # uselist checks - metadata
- for mypos in range(len(myuse)-1, -1, -1):
- if myuse[mypos] and (myuse[mypos] in muselist):
- del myuse[mypos]
-
- if default_use and not eapi_has_iuse_defaults(eapi):
- for myflag in default_use:
- stats['EAPI.incompatible'] += 1
- fails['EAPI.incompatible'].append(
- (relative_path + ": IUSE defaults" + \
- " not supported with EAPI='%s':" + \
- " '%s'") % (eapi, myflag))
-
- for mypos in range(len(myuse)):
- stats["IUSE.invalid"] += 1
- fails["IUSE.invalid"].append(x + "/" + y + ".ebuild: %s" % myuse[mypos])
-
- # Check for outdated RUBY targets
- if "ruby-ng" in inherited or "ruby-fakegem" in inherited or "ruby" in inherited:
- ruby_intersection = pkg.iuse.all.intersection(ruby_deprecated)
- if ruby_intersection:
- for myruby in ruby_intersection:
- stats["IUSE.rubydeprecated"] += 1
- fails["IUSE.rubydeprecated"].append(
- (relative_path + ": Deprecated ruby target: %s") % myruby)
-
- # license checks
- if not badlicsyntax:
- # Parse the LICENSE variable, remove USE conditions and
- # flatten it.
- licenses = portage.dep.use_reduce(myaux["LICENSE"], matchall=1, flat=True)
- # Check each entry to ensure that it exists in PORTDIR's
- # license directory.
- for lic in licenses:
- # Need to check for "||" manually as no portage
- # function will remove it without removing values.
- if lic not in liclist and lic != "||":
- stats["LICENSE.invalid"] += 1
- fails["LICENSE.invalid"].append(x + "/" + y + ".ebuild: %s" % lic)
- elif lic in liclist_deprecated:
- stats["LICENSE.deprecated"] += 1
- fails["LICENSE.deprecated"].append("%s: %s" % (relative_path, lic))
-
- # keyword checks
- myuse = myaux["KEYWORDS"].split()
- for mykey in myuse:
- if mykey not in ("-*", "*", "~*"):
- myskey = mykey
- if myskey[:1] == "-":
- myskey = myskey[1:]
- if myskey[:1] == "~":
- myskey = myskey[1:]
- if myskey not in kwlist:
- stats["KEYWORDS.invalid"] += 1
- fails["KEYWORDS.invalid"].append(x + "/" + y + ".ebuild: %s" % mykey)
- elif myskey not in profiles:
- stats["KEYWORDS.invalid"] += 1
- fails["KEYWORDS.invalid"].append(x + "/" + y + ".ebuild: %s (profile invalid)" % mykey)
-
- # restrict checks
- myrestrict = None
- try:
- myrestrict = portage.dep.use_reduce(myaux["RESTRICT"], matchall=1, flat=True)
- except portage.exception.InvalidDependString as e:
- stats["RESTRICT.syntax"] += 1
- fails["RESTRICT.syntax"].append(
- "%s: RESTRICT: %s" % (relative_path, e))
- del e
- if myrestrict:
- myrestrict = set(myrestrict)
- mybadrestrict = myrestrict.difference(valid_restrict)
- if mybadrestrict:
- stats["RESTRICT.invalid"] += len(mybadrestrict)
- for mybad in mybadrestrict:
- fails["RESTRICT.invalid"].append(x + "/" + y + ".ebuild: %s" % mybad)
- # REQUIRED_USE check
- required_use = myaux["REQUIRED_USE"]
- if required_use:
- if not eapi_has_required_use(eapi):
- stats['EAPI.incompatible'] += 1
- fails['EAPI.incompatible'].append(
- relative_path + ": REQUIRED_USE" + \
- " not supported with EAPI='%s'" % (eapi,))
- try:
- portage.dep.check_required_use(required_use, (),
- pkg.iuse.is_valid_flag, eapi=eapi)
- except portage.exception.InvalidDependString as e:
- stats["REQUIRED_USE.syntax"] += 1
- fails["REQUIRED_USE.syntax"].append(
- "%s: REQUIRED_USE: %s" % (relative_path, e))
- del e
-
- # Syntax Checks
- relative_path = os.path.join(x, y + ".ebuild")
- full_path = os.path.join(repodir, relative_path)
- if not vcs_preserves_mtime:
- if ebuild_path not in new_ebuilds and \
- ebuild_path not in modified_ebuilds:
- pkg.mtime = None
- try:
- # All ebuilds should have utf_8 encoding.
- f = io.open(_unicode_encode(full_path,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['repo.content'])
- try:
- for check_name, e in run_checks(f, pkg):
- stats[check_name] += 1
- fails[check_name].append(relative_path + ': %s' % e)
- finally:
- f.close()
- except UnicodeDecodeError:
- # A file.UTF8 failure will have already been recorded above.
- pass
-
- if options.force:
- # The dep_check() calls are the most expensive QA test. If --force
- # is enabled, there's no point in wasting time on these since the
- # user is intent on forcing the commit anyway.
- continue
-
- relevant_profiles = []
- for keyword, arch, groups in arches:
- if arch not in profiles:
- # A missing profile will create an error further down
- # during the KEYWORDS verification.
- continue
-
- if include_arches is not None:
- if arch not in include_arches:
- continue
-
- relevant_profiles.extend((keyword, groups, prof)
- for prof in profiles[arch])
-
- def sort_key(item):
- return item[2].sub_path
-
- relevant_profiles.sort(key=sort_key)
-
- for keyword, groups, prof in relevant_profiles:
-
- if not (prof.status == "stable" or \
- (prof.status == "dev" and options.include_dev) or \
- (prof.status == "exp" and options.include_exp_profiles == 'y')):
- continue
-
- dep_settings = arch_caches.get(prof.sub_path)
- if dep_settings is None:
- dep_settings = portage.config(
- config_profile_path=prof.abs_path,
- config_incrementals=repoman_incrementals,
- config_root=config_root,
- local_config=False,
- _unmatched_removal=options.unmatched_removal,
- env=env, repositories=repoman_settings.repositories)
- dep_settings.categories = repoman_settings.categories
- if options.without_mask:
- dep_settings._mask_manager_obj = \
- copy.deepcopy(dep_settings._mask_manager)
- dep_settings._mask_manager._pmaskdict.clear()
- arch_caches[prof.sub_path] = dep_settings
-
- xmatch_cache_key = (prof.sub_path, tuple(groups))
- xcache = arch_xmatch_caches.get(xmatch_cache_key)
- if xcache is None:
- portdb.melt()
- portdb.freeze()
- xcache = portdb.xcache
- xcache.update(shared_xmatch_caches)
- arch_xmatch_caches[xmatch_cache_key] = xcache
-
- trees[root]["porttree"].settings = dep_settings
- portdb.settings = dep_settings
- portdb.xcache = xcache
-
- dep_settings["ACCEPT_KEYWORDS"] = " ".join(groups)
- # just in case, prevent config.reset() from nuking these.
- dep_settings.backup_changes("ACCEPT_KEYWORDS")
-
- # This attribute is used in dbapi._match_use() to apply
- # use.stable.{mask,force} settings based on the stable
- # status of the parent package. This is required in order
- # for USE deps of unstable packages to be resolved correctly,
- # since otherwise use.stable.{mask,force} settings of
- # dependencies may conflict (see bug #456342).
- dep_settings._parent_stable = dep_settings._isStable(pkg)
-
- 			# Handle package.use*.{force,mask} calculation, for use
- # in dep_check.
- dep_settings.useforce = dep_settings._use_manager.getUseForce(
- pkg, stable=dep_settings._parent_stable)
- dep_settings.usemask = dep_settings._use_manager.getUseMask(
- pkg, stable=dep_settings._parent_stable)
-
- if not baddepsyntax:
- ismasked = not ebuild_archs or \
- pkg.cpv not in portdb.xmatch("match-visible",
- Atom("%s::%s" % (pkg.cp, repo_config.name)))
- if ismasked:
- if not have_pmasked:
- have_pmasked = bool(dep_settings._getMaskAtom(
- pkg.cpv, pkg._metadata))
- if options.ignore_masked:
- continue
- 					# we are testing deps for a masked package; give it some leeway
- suffix = "masked"
- matchmode = "minimum-all"
- else:
- suffix = ""
- matchmode = "minimum-visible"
-
- if not have_dev_keywords:
- have_dev_keywords = \
- bool(dev_keywords.intersection(keywords))
-
- if prof.status == "dev":
- suffix = suffix + "indev"
-
- for mytype in Package._dep_keys:
-
- mykey = "dependency.bad" + suffix
- myvalue = myaux[mytype]
- if not myvalue:
- continue
-
- success, atoms = portage.dep_check(myvalue, portdb,
- dep_settings, use="all", mode=matchmode,
- trees=trees)
-
- if success:
- if atoms:
-
- # Don't bother with dependency.unknown for
- # cases in which *DEPEND.bad is triggered.
- for atom in atoms:
- # dep_check returns all blockers and they
- # aren't counted for *DEPEND.bad, so we
- # ignore them here.
- if not atom.blocker:
- unknown_pkgs.discard(
- (mytype, atom.unevaluated_atom))
-
- if not prof.sub_path:
- # old-style virtuals currently aren't
- # resolvable with empty profile, since
- # 'virtuals' mappings are unavailable
- # (it would be expensive to search
- # for PROVIDE in all ebuilds)
- atoms = [atom for atom in atoms if not \
- (atom.cp.startswith('virtual/') and \
- not portdb.cp_list(atom.cp))]
-
- # we have some unsolvable deps
- # remove ! deps, which always show up as unsatisfiable
- atoms = [str(atom.unevaluated_atom) \
- for atom in atoms if not atom.blocker]
-
- # if we emptied out our list, continue:
- if not atoms:
- continue
- stats[mykey] += 1
- fails[mykey].append("%s: %s: %s(%s)\n%s" % \
- (relative_path, mytype, keyword,
- prof, pformat(atoms, indent=6)))
- else:
- stats[mykey] += 1
- fails[mykey].append("%s: %s: %s(%s)\n%s" % \
- (relative_path, mytype, keyword,
- prof, pformat(atoms, indent=6)))
-
- if not baddepsyntax and unknown_pkgs:
- type_map = {}
- for mytype, atom in unknown_pkgs:
- type_map.setdefault(mytype, set()).add(atom)
- for mytype, atoms in type_map.items():
- stats["dependency.unknown"] += 1
- fails["dependency.unknown"].append("%s: %s: %s" %
- (relative_path, mytype, ", ".join(sorted(atoms))))
-
- # check if there are unused local USE-descriptions in metadata.xml
- # (unless there are any invalids, to avoid noise)
- if allvalid:
- for myflag in muselist.difference(used_useflags):
- stats["metadata.warning"] += 1
- fails["metadata.warning"].append(
- "%s/metadata.xml: unused local USE-description: '%s'" % \
- (x, myflag))
-
- if options.if_modified == "y" and len(effective_scanlist) < 1:
- logging.warning("--if-modified is enabled, but no modified packages were found!")
-
- if options.mode == "manifest":
- sys.exit(dofail)
-
- # dofail will be set to 1 if we have failed in at least one non-warning category
- dofail = 0
- # dowarn will be set to 1 if we tripped any warnings
- dowarn = 0
- # dofull will be set if we should print a "repoman full" informational message
- dofull = options.mode != 'full'
-
- for x in qacats:
- if not stats[x]:
- continue
- dowarn = 1
- if x not in qawarnings:
- dofail = 1
-
- if dofail or \
- (dowarn and not (options.quiet or options.mode == "scan")):
- dofull = 0
-
- # Save QA output so that it can be conveniently displayed
- # in $EDITOR while the user creates a commit message.
- # Otherwise, the user would not be able to see this output
- # once the editor has taken over the screen.
- qa_output = io.StringIO()
- style_file = ConsoleStyleFile(sys.stdout)
- if options.mode == 'commit' and \
- (not commitmessage or not commitmessage.strip()):
- style_file.write_listener = qa_output
- console_writer = StyleWriter(file=style_file, maxcol=9999)
- console_writer.style_listener = style_file.new_styles
-
- f = formatter.AbstractFormatter(console_writer)
-
- format_outputs = {
- 'column': utilities.format_qa_output_column,
- 'default': utilities.format_qa_output
- }
-
- format_output = format_outputs.get(options.output_style,
- format_outputs['default'])
- format_output(f, stats, fails, dofull, dofail, options, qawarnings)
-
- style_file.flush()
- del console_writer, f, style_file
- qa_output = qa_output.getvalue()
- qa_output = qa_output.splitlines(True)
-
- suggest_ignore_masked = False
- suggest_include_dev = False
-
- if have_pmasked and not (options.without_mask or options.ignore_masked):
- suggest_ignore_masked = True
- if have_dev_keywords and not options.include_dev:
- suggest_include_dev = True
-
- if suggest_ignore_masked or suggest_include_dev:
- print()
- if suggest_ignore_masked:
- print(bold("Note: use --without-mask to check " + \
- "KEYWORDS on dependencies of masked packages"))
-
- if suggest_include_dev:
- print(bold("Note: use --include-dev (-d) to check " + \
- "dependencies for 'dev' profiles"))
- print()
-
- if options.mode != 'commit':
- if dofull:
- print(bold("Note: type \"repoman full\" for a complete listing."))
- if dowarn and not dofail:
- print(green("RepoMan sez:"),"\"You're only giving me a partial QA payment?\n I'll take it this time, but I'm not happy.\"")
- elif not dofail:
- print(green("RepoMan sez:"),"\"If everyone were like you, I'd be out of business!\"")
- elif dofail:
- print(bad("Please fix these important QA issues first."))
- print(green("RepoMan sez:"),"\"Make your QA payment on time and you'll never see the likes of me.\"\n")
+ repoman_main(sys.argv[1:])
+ except IOError as e:
+ if e.errno == errno.EACCES:
+ print("\nRepoman: Need user access")
sys.exit(1)
- else:
- if dofail and can_force and options.force and not options.pretend:
- print(green("RepoMan sez:") + \
- " \"You want to commit even with these QA issues?\n" + \
- " I'll take it this time, but I'm not happy.\"\n")
- elif dofail:
- if options.force and not can_force:
- print(bad("The --force option has been disabled due to extraordinary issues."))
- print(bad("Please fix these important QA issues first."))
- print(green("RepoMan sez:"),"\"Make your QA payment on time and you'll never see the likes of me.\"\n")
- sys.exit(1)
-
- if options.pretend:
- print(green("RepoMan sez:"), "\"So, you want to play it safe. Good call.\"\n")
-
- myunadded = []
- if vcs == "cvs":
- try:
- myvcstree = portage.cvstree.getentries("./", recursive=1)
- myunadded = portage.cvstree.findunadded(myvcstree, recursive=1, basedir="./")
- except SystemExit as e:
- raise # TODO propagate this
- except:
- err("Error retrieving CVS tree; exiting.")
- if vcs == "svn":
- try:
- with repoman_popen("svn status --no-ignore") as f:
- svnstatus = f.readlines()
- myunadded = ["./" + elem.rstrip().split()[1] for elem in svnstatus if elem.startswith("?") or elem.startswith("I")]
- except SystemExit as e:
- raise # TODO propagate this
- except:
- err("Error retrieving SVN info; exiting.")
- if vcs == "git":
- # get list of files not under version control or missing
- myf = repoman_popen("git ls-files --others")
- myunadded = ["./" + elem[:-1] for elem in myf]
- myf.close()
- if vcs == "bzr":
- try:
- with repoman_popen("bzr status -S .") as f:
- bzrstatus = f.readlines()
- myunadded = ["./" + elem.rstrip().split()[1].split('/')[-1:][0] for elem in bzrstatus if elem.startswith("?") or elem[0:2] == " D"]
- except SystemExit as e:
- raise # TODO propagate this
- except:
- err("Error retrieving bzr info; exiting.")
- if vcs == "hg":
- with repoman_popen("hg status --no-status --unknown .") as f:
- myunadded = f.readlines()
- myunadded = ["./" + elem.rstrip() for elem in myunadded]
-
- 	# Mercurial doesn't treat manually deleted files as removed from
- 	# the repository, so the user needs to remove them before committing,
- 	# using "hg remove [FILES]".
- with repoman_popen("hg status --no-status --deleted .") as f:
- mydeleted = f.readlines()
- mydeleted = ["./" + elem.rstrip() for elem in mydeleted]
-
-
- myautoadd = []
- if myunadded:
- for x in range(len(myunadded)-1, -1, -1):
- xs = myunadded[x].split("/")
- if xs[-1] == "files":
- print("!!! files dir is not added! Please correct this.")
- sys.exit(-1)
- elif xs[-1] == "Manifest":
- # It's a manifest... auto add
- myautoadd += [myunadded[x]]
- del myunadded[x]
-
- if myunadded:
- print(red("!!! The following files are in your local tree but are not added to the master"))
- print(red("!!! tree. Please remove them from the local tree or add them to the master tree."))
- for x in myunadded:
- print(" ", x)
- print()
- print()
- sys.exit(1)
-
- if vcs == "hg" and mydeleted:
- print(red("!!! The following files are removed manually from your local tree but are not"))
- print(red("!!! removed from the repository. Please remove them, using \"hg remove [FILES]\"."))
- for x in mydeleted:
- print(" ", x)
- print()
- print()
- sys.exit(1)
-
- if vcs == "cvs":
- mycvstree = cvstree.getentries("./", recursive=1)
- mychanged = cvstree.findchanged(mycvstree, recursive=1, basedir="./")
- mynew = cvstree.findnew(mycvstree, recursive=1, basedir="./")
- myremoved = portage.cvstree.findremoved(mycvstree, recursive=1, basedir="./")
- bin_blob_pattern = re.compile("^-kb$")
- no_expansion = set(portage.cvstree.findoption(mycvstree, bin_blob_pattern,
- recursive=1, basedir="./"))
-
- if vcs == "svn":
- with repoman_popen("svn status") as f:
- svnstatus = f.readlines()
- mychanged = ["./" + elem.split()[-1:][0] for elem in svnstatus if (elem[:1] in "MR" or elem[1:2] in "M")]
- mynew = ["./" + elem.split()[-1:][0] for elem in svnstatus if elem.startswith("A")]
- myremoved = ["./" + elem.split()[-1:][0] for elem in svnstatus if elem.startswith("D")]
-
- # Subversion expands keywords specified in svn:keywords properties.
- with repoman_popen("svn propget -R svn:keywords") as f:
- props = f.readlines()
- expansion = dict(("./" + prop.split(" - ")[0], prop.split(" - ")[1].split()) \
- for prop in props if " - " in prop)
-
- elif vcs == "git":
- with repoman_popen("git diff-index --name-only "
- "--relative --diff-filter=M HEAD") as f:
- mychanged = f.readlines()
- mychanged = ["./" + elem[:-1] for elem in mychanged]
-
- with repoman_popen("git diff-index --name-only "
- "--relative --diff-filter=A HEAD") as f:
- mynew = f.readlines()
- mynew = ["./" + elem[:-1] for elem in mynew]
-
- with repoman_popen("git diff-index --name-only "
- "--relative --diff-filter=D HEAD") as f:
- myremoved = f.readlines()
- myremoved = ["./" + elem[:-1] for elem in myremoved]
-
- if vcs == "bzr":
- with repoman_popen("bzr status -S .") as f:
- bzrstatus = f.readlines()
- mychanged = ["./" + elem.split()[-1:][0].split('/')[-1:][0] for elem in bzrstatus if elem and elem[1:2] == "M"]
- mynew = ["./" + elem.split()[-1:][0].split('/')[-1:][0] for elem in bzrstatus if elem and (elem[1:2] in "NK" or elem[0:1] == "R")]
- myremoved = ["./" + elem.split()[-1:][0].split('/')[-1:][0] for elem in bzrstatus if elem.startswith("-")]
- myremoved = ["./" + elem.split()[-3:-2][0].split('/')[-1:][0] for elem in bzrstatus if elem and (elem[1:2] == "K" or elem[0:1] == "R")]
- # Bazaar expands nothing.
-
- if vcs == "hg":
- with repoman_popen("hg status --no-status --modified .") as f:
- mychanged = f.readlines()
- mychanged = ["./" + elem.rstrip() for elem in mychanged]
-
- with repoman_popen("hg status --no-status --added .") as f:
- mynew = f.readlines()
- mynew = ["./" + elem.rstrip() for elem in mynew]
-
- with repoman_popen("hg status --no-status --removed .") as f:
- myremoved = f.readlines()
- myremoved = ["./" + elem.rstrip() for elem in myremoved]
-
- if vcs:
- if not (mychanged or mynew or myremoved or (vcs == "hg" and mydeleted)):
- print(green("RepoMan sez:"), "\"Doing nothing is not always good for QA.\"")
- print()
- print("(Didn't find any changed files...)")
- print()
- sys.exit(1)
-
- # Manifests need to be regenerated after all other commits, so don't commit
- # them now even if they have changed.
- mymanifests = set()
- myupdates = set()
- for f in mychanged + mynew:
- if "Manifest" == os.path.basename(f):
- mymanifests.add(f)
- else:
- myupdates.add(f)
- myupdates.difference_update(myremoved)
- myupdates = list(myupdates)
- mymanifests = list(mymanifests)
- myheaders = []
- mydirty = []
-
- commitmessage = options.commitmsg
- if options.commitmsgfile:
- try:
- f = io.open(_unicode_encode(options.commitmsgfile,
- encoding=_encodings['fs'], errors='strict'),
- mode='r', encoding=_encodings['content'], errors='replace')
- commitmessage = f.read()
- f.close()
- del f
- except (IOError, OSError) as e:
- if e.errno == errno.ENOENT:
- portage.writemsg("!!! File Not Found: --commitmsgfile='%s'\n" % options.commitmsgfile)
- else:
- raise
- # We've read the content so the file is no longer needed.
- commitmessagefile = None
- if not commitmessage or not commitmessage.strip():
- try:
- editor = os.environ.get("EDITOR")
- if editor and utilities.editor_is_executable(editor):
- commitmessage = utilities.get_commit_message_with_editor(
- editor, message=qa_output)
- else:
- commitmessage = utilities.get_commit_message_with_stdin()
- except KeyboardInterrupt:
- exithandler()
- if not commitmessage or not commitmessage.strip():
- print("* no commit message? aborting commit.")
- sys.exit(1)
- commitmessage = commitmessage.rstrip()
- changelog_msg = commitmessage
- portage_version = getattr(portage, "VERSION", None)
- gpg_key = repoman_settings.get("PORTAGE_GPG_KEY", "")
- dco_sob = repoman_settings.get("DCO_SIGNED_OFF_BY", "")
- if portage_version is None:
- sys.stderr.write("Failed to insert portage version in message!\n")
- sys.stderr.flush()
- portage_version = "Unknown"
-
- report_options = []
- if options.force:
- report_options.append("--force")
- if options.ignore_arches:
- report_options.append("--ignore-arches")
- if include_arches is not None:
- report_options.append("--include-arches=\"%s\"" %
- " ".join(sorted(include_arches)))
-
- if vcs == "git":
- # Use new footer only for git (see bug #438364).
- commit_footer = "\n\nPackage-Manager: portage-%s" % portage_version
- if report_options:
- commit_footer += "\nRepoMan-Options: " + " ".join(report_options)
- if sign_manifests:
- commit_footer += "\nManifest-Sign-Key: %s" % (gpg_key, )
- if dco_sob:
- commit_footer += "\nSigned-off-by: %s" % (dco_sob, )
- else:
- unameout = platform.system() + " "
- if platform.system() in ["Darwin", "SunOS"]:
- unameout += platform.processor()
- else:
- unameout += platform.machine()
- commit_footer = "\n\n"
- if dco_sob:
- commit_footer += "Signed-off-by: %s\n" % (dco_sob, )
- commit_footer += "(Portage version: %s/%s/%s" % \
- (portage_version, vcs, unameout)
- if report_options:
- commit_footer += ", RepoMan options: " + " ".join(report_options)
- if sign_manifests:
- commit_footer += ", signed Manifest commit with key %s" % \
- (gpg_key, )
- else:
- commit_footer += ", unsigned Manifest commit"
- commit_footer += ")"
-
- commitmessage += commit_footer
-
- broken_changelog_manifests = []
- if options.echangelog in ('y', 'force'):
- logging.info("checking for unmodified ChangeLog files")
- committer_name = utilities.get_committer_name(env=repoman_settings)
- for x in sorted(vcs_files_to_cps(
- chain(myupdates, mymanifests, myremoved))):
- catdir, pkgdir = x.split("/")
- checkdir = repodir + "/" + x
- checkdir_relative = ""
- if repolevel < 3:
- checkdir_relative = os.path.join(pkgdir, checkdir_relative)
- if repolevel < 2:
- checkdir_relative = os.path.join(catdir, checkdir_relative)
- checkdir_relative = os.path.join(".", checkdir_relative)
-
- changelog_path = os.path.join(checkdir_relative, "ChangeLog")
- changelog_modified = changelog_path in modified_changelogs
- if changelog_modified and options.echangelog != 'force':
- continue
-
- # get changes for this package
- cdrlen = len(checkdir_relative)
- clnew = [elem[cdrlen:] for elem in mynew if elem.startswith(checkdir_relative)]
- clremoved = [elem[cdrlen:] for elem in myremoved if elem.startswith(checkdir_relative)]
- clchanged = [elem[cdrlen:] for elem in mychanged if elem.startswith(checkdir_relative)]
-
- # Skip ChangeLog generation if only the Manifest was modified,
- # as discussed in bug #398009.
- nontrivial_cl_files = set()
- nontrivial_cl_files.update(clnew, clremoved, clchanged)
- nontrivial_cl_files.difference_update(['Manifest'])
- if not nontrivial_cl_files and options.echangelog != 'force':
- continue
-
- new_changelog = utilities.UpdateChangeLog(checkdir_relative,
- committer_name, changelog_msg,
- os.path.join(repodir, 'skel.ChangeLog'),
- catdir, pkgdir,
- new=clnew, removed=clremoved, changed=clchanged,
- pretend=options.pretend)
- if new_changelog is None:
- writemsg_level("!!! Updating the ChangeLog failed\n", \
- level=logging.ERROR, noiselevel=-1)
- sys.exit(1)
-
- # if the ChangeLog was just created, add it to vcs
- if new_changelog:
- myautoadd.append(changelog_path)
- # myautoadd is appended to myupdates below
- else:
- myupdates.append(changelog_path)
-
- if options.ask and not options.pretend:
- # regenerate Manifest for modified ChangeLog (bug #420735)
- repoman_settings["O"] = checkdir
- digestgen(mysettings=repoman_settings, myportdb=portdb)
- else:
- broken_changelog_manifests.append(x)
-
- if myautoadd:
- print(">>> Auto-Adding missing Manifest/ChangeLog file(s)...")
- add_cmd = [vcs, "add"]
- add_cmd += myautoadd
- if options.pretend:
- portage.writemsg_stdout("(%s)\n" % " ".join(add_cmd),
- noiselevel=-1)
- else:
-
- if sys.hexversion < 0x3020000 and sys.hexversion >= 0x3000000 and \
- not os.path.isabs(add_cmd[0]):
- # Python 3.1 _execvp throws TypeError for non-absolute executable
- # path passed as bytes (see http://bugs.python.org/issue8513).
- fullname = find_binary(add_cmd[0])
- if fullname is None:
- raise portage.exception.CommandNotFound(add_cmd[0])
- add_cmd[0] = fullname
-
- add_cmd = [_unicode_encode(arg) for arg in add_cmd]
- retcode = subprocess.call(add_cmd)
- if retcode != os.EX_OK:
- logging.error(
- "Exiting on %s error code: %s\n" % (vcs, retcode))
- sys.exit(retcode)
-
- myupdates += myautoadd
-
- print("* %s files being committed..." % green(str(len(myupdates))), end=' ')
-
- if vcs not in ('cvs', 'svn'):
- # With git, bzr and hg, there's never any keyword expansion, so
- # there's no need to regenerate manifests and all files will be
- # committed in one big commit at the end.
- print()
- elif not repo_config.thin_manifest:
- if vcs == 'cvs':
- headerstring = "'\$(Header|Id).*\$'"
- elif vcs == "svn":
- svn_keywords = dict((k.lower(), k) for k in [
- "Rev",
- "Revision",
- "LastChangedRevision",
- "Date",
- "LastChangedDate",
- "Author",
- "LastChangedBy",
- "URL",
- "HeadURL",
- "Id",
- "Header",
- ])
-
- for myfile in myupdates:
-
- # for CVS, no_expansion contains files that are excluded from expansion
- if vcs == "cvs":
- if myfile in no_expansion:
- continue
-
- # for SVN, expansion contains files that are included in expansion
- elif vcs == "svn":
- if myfile not in expansion:
- continue
-
- # Subversion keywords are case-insensitive in svn:keywords properties, but case-sensitive in contents of files.
- enabled_keywords = []
- for k in expansion[myfile]:
- keyword = svn_keywords.get(k.lower())
- if keyword is not None:
- enabled_keywords.append(keyword)
-
- headerstring = "'\$(%s).*\$'" % "|".join(enabled_keywords)
-
- myout = repoman_getstatusoutput("egrep -q " + headerstring + " " +
- portage._shell_quote(myfile))
- if myout[0] == 0:
- myheaders.append(myfile)
-
- print("%s have headers that will change." % green(str(len(myheaders))))
- print("* Files with headers will cause the manifests to be changed and committed separately.")
-
- logging.info("myupdates: %s", myupdates)
- logging.info("myheaders: %s", myheaders)
-
- uq = UserQuery(options)
- if options.ask and uq.query('Commit changes?', True) != 'Yes':
- print("* aborting commit.")
- sys.exit(128 + signal.SIGINT)
-
- # Handle the case where committed files have keywords which
- # will change and need a priming commit before the Manifest
- # can be committed.
- if (myupdates or myremoved) and myheaders:
- myfiles = myupdates + myremoved
- fd, commitmessagefile = tempfile.mkstemp(".repoman.msg")
- mymsg = os.fdopen(fd, "wb")
- mymsg.write(_unicode_encode(commitmessage))
- mymsg.close()
-
- print()
- print(green("Using commit message:"))
- print(green("------------------------------------------------------------------------------"))
- print(commitmessage)
- print(green("------------------------------------------------------------------------------"))
- print()
-
- # Having a leading ./ prefix on file paths can trigger a bug in
- # the cvs server when committing files to multiple directories,
- # so strip the prefix.
- myfiles = [f.lstrip("./") for f in myfiles]
-
- commit_cmd = [vcs]
- commit_cmd.extend(vcs_global_opts)
- commit_cmd.append("commit")
- commit_cmd.extend(vcs_local_opts)
- commit_cmd.extend(["-F", commitmessagefile])
- commit_cmd.extend(myfiles)
-
- try:
- if options.pretend:
- print("(%s)" % (" ".join(commit_cmd),))
- else:
- retval = spawn(commit_cmd, env=commit_env)
- if retval != os.EX_OK:
- writemsg_level(("!!! Exiting on %s (shell) " + \
- "error code: %s\n") % (vcs, retval),
- level=logging.ERROR, noiselevel=-1)
- sys.exit(retval)
- finally:
- try:
- os.unlink(commitmessagefile)
- except OSError:
- pass
-
- # Setup the GPG commands
- def gpgsign(filename):
- gpgcmd = repoman_settings.get("PORTAGE_GPG_SIGNING_COMMAND")
- if gpgcmd in [None, '']:
- raise MissingParameter("PORTAGE_GPG_SIGNING_COMMAND is unset!" + \
- " Is make.globals missing?")
- if "${PORTAGE_GPG_KEY}" in gpgcmd and \
- "PORTAGE_GPG_KEY" not in repoman_settings:
- raise MissingParameter("PORTAGE_GPG_KEY is unset!")
- if "${PORTAGE_GPG_DIR}" in gpgcmd:
- if "PORTAGE_GPG_DIR" not in repoman_settings:
- repoman_settings["PORTAGE_GPG_DIR"] = \
- os.path.expanduser("~/.gnupg")
- logging.info("Automatically setting PORTAGE_GPG_DIR to '%s'" \
- % repoman_settings["PORTAGE_GPG_DIR"])
- else:
- repoman_settings["PORTAGE_GPG_DIR"] = \
- os.path.expanduser(repoman_settings["PORTAGE_GPG_DIR"])
- if not os.access(repoman_settings["PORTAGE_GPG_DIR"], os.X_OK):
- raise portage.exception.InvalidLocation(
- "Unable to access directory: PORTAGE_GPG_DIR='%s'" % \
- repoman_settings["PORTAGE_GPG_DIR"])
- gpgvars = {"FILE": filename}
- for k in ("PORTAGE_GPG_DIR", "PORTAGE_GPG_KEY"):
- v = repoman_settings.get(k)
- if v is not None:
- gpgvars[k] = v
- gpgcmd = portage.util.varexpand(gpgcmd, mydict=gpgvars)
- if options.pretend:
- print("(" + gpgcmd + ")")
- else:
- # Encode unicode manually for bug #310789.
- gpgcmd = portage.util.shlex_split(gpgcmd)
-
- if sys.hexversion < 0x3020000 and sys.hexversion >= 0x3000000 and \
- not os.path.isabs(gpgcmd[0]):
- # Python 3.1 _execvp throws TypeError for non-absolute executable
- # path passed as bytes (see http://bugs.python.org/issue8513).
- fullname = find_binary(gpgcmd[0])
- if fullname is None:
- raise portage.exception.CommandNotFound(gpgcmd[0])
- gpgcmd[0] = fullname
-
- gpgcmd = [_unicode_encode(arg,
- encoding=_encodings['fs'], errors='strict') for arg in gpgcmd]
- rValue = subprocess.call(gpgcmd)
- if rValue == os.EX_OK:
- os.rename(filename + ".asc", filename)
- else:
- raise portage.exception.PortageException("!!! gpg exited with '" + str(rValue) + "' status")
-
- def need_signature(filename):
- try:
- with open(_unicode_encode(filename,
- encoding=_encodings['fs'], errors='strict'), 'rb') as f:
- return b"BEGIN PGP SIGNED MESSAGE" not in f.readline()
- except IOError as e:
- if e.errno in (errno.ENOENT, errno.ESTALE):
- return False
- raise
-
- # When files are removed and re-added, the cvs server will put /Attic/
- # inside the $Header path. This code detects the problem and corrects it
- # so that the Manifest will generate correctly. See bug #169500.
- # Use binary mode in order to avoid potential character encoding issues.
- cvs_header_re = re.compile(br'^#\s*\$Header.*\$$')
- attic_str = b'/Attic/'
- attic_replace = b'/'
- for x in myheaders:
- f = open(_unicode_encode(x,
- encoding=_encodings['fs'], errors='strict'),
- mode='rb')
- mylines = f.readlines()
- f.close()
- modified = False
- for i, line in enumerate(mylines):
- if cvs_header_re.match(line) is not None and \
- attic_str in line:
- mylines[i] = line.replace(attic_str, attic_replace)
- modified = True
- if modified:
- portage.util.write_atomic(x, b''.join(mylines),
- mode='wb')
-
- if repolevel == 1:
- print(green("RepoMan sez:"), "\"You're rather crazy... "
- "doing the entire repository.\"\n")
-
- if vcs in ('cvs', 'svn') and (myupdates or myremoved):
-
- for x in sorted(vcs_files_to_cps(
- chain(myupdates, myremoved, mymanifests))):
- repoman_settings["O"] = os.path.join(repodir, x)
- digestgen(mysettings=repoman_settings, myportdb=portdb)
-
- elif broken_changelog_manifests:
- for x in broken_changelog_manifests:
- repoman_settings["O"] = os.path.join(repodir, x)
- digestgen(mysettings=repoman_settings, myportdb=portdb)
-
- signed = False
- if sign_manifests:
- signed = True
- try:
- for x in sorted(vcs_files_to_cps(
- chain(myupdates, myremoved, mymanifests))):
- repoman_settings["O"] = os.path.join(repodir, x)
- manifest_path = os.path.join(repoman_settings["O"], "Manifest")
- if not need_signature(manifest_path):
- continue
- gpgsign(manifest_path)
- except portage.exception.PortageException as e:
- portage.writemsg("!!! %s\n" % str(e))
- portage.writemsg("!!! Disabled FEATURES='sign'\n")
- signed = False
-
- if vcs == 'git':
- # It's not safe to use the git commit -a option since there might
- # be some modified files elsewhere in the working tree that the
- # user doesn't want to commit. Therefore, call git update-index
- # in order to ensure that the index is updated with the latest
- # versions of all new and modified files in the relevant portion
- # of the working tree.
- myfiles = mymanifests + myupdates
- myfiles.sort()
- update_index_cmd = ["git", "update-index"]
- update_index_cmd.extend(f.lstrip("./") for f in myfiles)
- if options.pretend:
- print("(%s)" % (" ".join(update_index_cmd),))
- else:
- retval = spawn(update_index_cmd, env=os.environ)
- if retval != os.EX_OK:
- writemsg_level(("!!! Exiting on %s (shell) " + \
- "error code: %s\n") % (vcs, retval),
- level=logging.ERROR, noiselevel=-1)
- sys.exit(retval)
-
- if True:
- myfiles = mymanifests[:]
- # If there are no header (SVN/CVS keywords) changes in
- # the files, this Manifest commit must include the
- # other (yet uncommitted) files.
- if not myheaders:
- myfiles += myupdates
- myfiles += myremoved
- myfiles.sort()
-
- fd, commitmessagefile = tempfile.mkstemp(".repoman.msg")
- mymsg = os.fdopen(fd, "wb")
- mymsg.write(_unicode_encode(commitmessage))
- mymsg.close()
-
- commit_cmd = []
- if options.pretend and vcs is None:
- # substitute a bogus value for pretend output
- commit_cmd.append("cvs")
- else:
- commit_cmd.append(vcs)
- commit_cmd.extend(vcs_global_opts)
- commit_cmd.append("commit")
- commit_cmd.extend(vcs_local_opts)
- if vcs == "hg":
- commit_cmd.extend(["--logfile", commitmessagefile])
- commit_cmd.extend(myfiles)
- else:
- commit_cmd.extend(["-F", commitmessagefile])
- commit_cmd.extend(f.lstrip("./") for f in myfiles)
-
- try:
- if options.pretend:
- print("(%s)" % (" ".join(commit_cmd),))
- else:
- retval = spawn(commit_cmd, env=commit_env)
- if retval != os.EX_OK:
- if repo_config.sign_commit and vcs == 'git' and \
- not git_supports_gpg_sign():
- # Inform user that newer git is needed (bug #403323).
- logging.error(
- "Git >=1.7.9 is required for signed commits!")
-
- writemsg_level(("!!! Exiting on %s (shell) " + \
- "error code: %s\n") % (vcs, retval),
- level=logging.ERROR, noiselevel=-1)
- sys.exit(retval)
- finally:
- try:
- os.unlink(commitmessagefile)
- except OSError:
- pass
-
- print()
- if vcs:
- print("Commit complete.")
else:
- print("repoman was too scared by not seeing any familiar version control file that he forgot to commit anything")
- print(green("RepoMan sez:"), "\"If everyone were like you, I'd be out of business!\"\n")
- sys.exit(0)
+ raise
diff --cc bin/save-ebuild-env.sh
index 599d6ea,ddef1fd..28162d1
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -88,8 -89,9 +89,12 @@@ __save_ebuild_env()
___eapi_has_package_manager_build_user && unset -f package_manager_build_user
___eapi_has_package_manager_build_group && unset -f package_manager_build_group
- # Clear out the triple underscore namespace as it is reserved by the PM.
- unset -f $(compgen -A function ___)
- unset ${!___*}
+ # PREFIX: compgen is not compiled in during bootstrap
- type compgen >& /dev/null && unset -f $(compgen -A function ___eapi_)
++ if type compgen >& /dev/null ; then
++ # Clear out the triple underscore namespace as it is reserved by the PM.
++ unset -f $(compgen -A function ___)
++ unset ${!___*}
++ fi
# portage config variables and variables set directly by portage
unset ACCEPT_LICENSE BAD BRACKET BUILD_PREFIX COLS \
diff --cc pym/_emerge/actions.py
index 3218cde,59626ad..1d324aa
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -2376,39 -2412,34 +2418,40 @@@ def getgccversion(chost=None)
"!!! other terminals also.\n"
)
+ def getclangversion(output):
+ version = re.search('clang version ([0-9.]+) ', output)
+ if version:
+ return version.group(1)
+ return "unknown"
+
- try:
- proc = subprocess.Popen([ubinpath + "/gcc-config", "-c"],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myoutput = None
- mystatus = 1
- else:
- myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- mystatus = proc.wait()
- if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
- return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
+ if chost:
+ try:
+ proc = subprocess.Popen(["gcc-config", "-c"],
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ except OSError:
+ myoutput = None
+ mystatus = 1
+ else:
+ myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+ mystatus = proc.wait()
+ if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
+ return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
- try:
- proc = subprocess.Popen(
- [ubinpath + "/" + chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myoutput = None
- mystatus = 1
- else:
- myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- mystatus = proc.wait()
- if mystatus == os.EX_OK:
- return gcc_ver_prefix + myoutput
+ try:
+ proc = subprocess.Popen(
+ [chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ except OSError:
+ myoutput = None
+ mystatus = 1
+ else:
+ myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+ mystatus = proc.wait()
+ if mystatus == os.EX_OK:
+ return gcc_ver_prefix + myoutput
try:
- proc = subprocess.Popen(gcc_ver_command,
+ proc = subprocess.Popen([ubinpath + "/" + gcc_ver_command[0]] + gcc_ver_command[1:],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except OSError:
myoutput = None
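For reference, a minimal standalone sketch of the getclangversion() helper introduced in the pym/_emerge/actions.py hunk above: it only pulls the dotted version number out of clang --version style output. The sample string below is hypothetical and not part of the commit.

import re

def getclangversion(output):
    # Mirrors the helper added above: grab the dotted version number
    # that follows "clang version" in the tool's output.
    version = re.search("clang version ([0-9.]+) ", output)
    if version:
        return version.group(1)
    return "unknown"

sample = "Apple clang version 11.0.3 (clang-1103.0.32.62)"
print(getclangversion(sample))  # prints: 11.0.3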
diff --cc pym/portage/dbapi/vartree.py
index f4c7cdc,e7effca..670221f
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -35,12 -35,9 +35,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.movefile:movefile',
'portage.util.path:first_existing,iter_parents',
'portage.util.writeable_check:get_ro_checker',
- 'portage.util:xattr@_xattr',
+ 'portage.util._xattr:xattr',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.util._async.SchedulerInterface:SchedulerInterface',
'portage.util._eventloop.EventLoop:EventLoop',
'portage.util._eventloop.global_event_loop:global_event_loop',
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2015-06-20 7:12 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2015-06-20 7:12 UTC (permalink / raw
To: gentoo-commits
commit: d84a12d49b7b1f28e48e8db22e519343c4518f50
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 20 07:12:21 2015 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Jun 20 07:12:21 2015 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=d84a12d4
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild-helpers/die | 2 +-
bin/ebuild-helpers/dobin | 2 +-
bin/ebuild-helpers/doconfd | 2 +-
bin/ebuild-helpers/dodir | 2 +-
bin/ebuild-helpers/dodoc | 2 +-
bin/ebuild-helpers/doenvd | 2 +-
bin/ebuild-helpers/doexe | 2 +-
bin/ebuild-helpers/dohard | 2 +-
bin/ebuild-helpers/doheader | 2 +-
bin/ebuild-helpers/dohtml | 6 ++----
bin/ebuild-helpers/doinfo | 2 +-
bin/ebuild-helpers/doinitd | 2 +-
bin/ebuild-helpers/doins | 2 +-
bin/ebuild-helpers/dolib | 2 +-
bin/ebuild-helpers/doman | 2 +-
bin/ebuild-helpers/domo | 2 +-
bin/ebuild-helpers/dosbin | 2 +-
bin/ebuild-helpers/dosed | 2 +-
bin/ebuild-helpers/dosym | 2 +-
bin/ebuild-helpers/ecompress | 2 +-
bin/ebuild-helpers/ecompressdir | 2 +-
bin/ebuild-helpers/elog | 2 +-
bin/ebuild-helpers/emake | 3 ++-
bin/ebuild-helpers/fowners | 3 +--
bin/ebuild-helpers/fperms | 2 +-
bin/ebuild-helpers/keepdir | 2 +-
bin/ebuild-helpers/newins | 2 +-
bin/ebuild-helpers/portageq | 4 +---
bin/ebuild-helpers/prepall | 2 +-
bin/ebuild-helpers/prepalldocs | 2 +-
bin/ebuild-helpers/prepallinfo | 2 +-
bin/ebuild-helpers/prepallman | 2 +-
bin/ebuild-helpers/prepallstrip | 2 +-
bin/ebuild-helpers/prepinfo | 2 +-
bin/ebuild-helpers/prepman | 2 +-
bin/ebuild-helpers/prepstrip | 2 +-
bin/ebuild-helpers/unprivileged/chown | 4 ++--
bin/ebuild-helpers/xattr/install | 2 --
bin/ebuild-ipc | 4 +---
bin/ebuild.sh | 3 ---
bin/helper-functions.sh | 2 +-
bin/isolated-functions.sh | 2 +-
bin/misc-functions.sh | 2 +-
man/repoman.1 | 4 ++--
pym/_emerge/main.py | 19 +++++++++++++------
45 files changed, 59 insertions(+), 63 deletions(-)
diff --cc bin/ebuild-helpers/dohtml
index bf1f0fe,860d4ab..fe2e97d
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -8,13 -8,11 +8,11 @@@ if ___eapi_has_dohtml_deprecated; the
eqawarn "'${0##*/}' is deprecated in EAPI '$EAPI'"
fi
- PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
- PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
# Use safe cwd, avoiding unsafe import for bug #469338.
export __PORTAGE_HELPER_CWD=${PWD}
- cd "${PORTAGE_PYM_PATH}"
+ cd "${PORTAGE_PYM_PATH}" || die
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
ret=$?
# Restore cwd for display by __helpers_die
diff --cc bin/ebuild-helpers/emake
index 4b98aec,60718a2..cca9cb6
--- a/bin/ebuild-helpers/emake
+++ b/bin/ebuild-helpers/emake
@@@ -1,4 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
+ # Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# emake: Run make and automatically pass along flags set in the env. We support
@@@ -8,10 -9,10 +9,10 @@@
#
# With newer EAPIs, we also automatically fail the build if make itself fails.
- source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
cmd=(
- ${MAKE:-make} ${MAKEOPTS} "$@" ${EXTRA_EMAKE}
+ ${MAKE:-make} SHELL="${BASH:-/bin/bash}" ${MAKEOPTS} "$@" ${EXTRA_EMAKE}
)
if [[ ${PORTAGE_QUIET} != 1 ]] ; then
diff --cc bin/ebuild-helpers/prepall
index 407392f,44643bb..94f49d2
--- a/bin/ebuild-helpers/prepall
+++ b/bin/ebuild-helpers/prepall
@@@ -2,10 -2,8 +2,10 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
+[[ -d ${ED} ]] || exit 0
+
if ! ___eapi_has_prefix_variables; then
ED=${D}
fi
diff --cc bin/ebuild-ipc
index 176e6ed,e77b94b..739564d
--- a/bin/ebuild-ipc
+++ b/bin/ebuild-ipc
@@@ -2,9 -2,7 +2,7 @@@
# Copyright 2010-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
- PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
# Use safe cwd, avoiding unsafe import for bug #469338.
- cd "${PORTAGE_PYM_PATH}"
+ cd "${PORTAGE_PYM_PATH}" || exit 1
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/ebuild-ipc.py" "$@"
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/ebuild-ipc.py" "$@"
diff --cc bin/misc-functions.sh
index a8a07f4,9b79351..a1d4088
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2015-06-09 18:30 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2015-06-09 18:30 UTC (permalink / raw
To: gentoo-commits
commit: 842d54de7a15a6a82f71c5787a5e5cff56caf614
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 9 18:12:35 2015 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jun 9 18:12:35 2015 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=842d54de
tarball: drop sed on no longer existing file
tarball.sh | 1 -
1 file changed, 1 deletion(-)
diff --git a/tarball.sh b/tarball.sh
index 1dcfb5c..b9f7841 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -27,7 +27,6 @@ fi
install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
-sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/doc/fragment/version
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/{,ru/}*.[15]
sed -i -e "s/@version@/${V}/" ${DEST}/configure.ac
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2015-06-09 18:01 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2015-06-09 18:01 UTC (permalink / raw
To: gentoo-commits
commit: ee0659f6660f115bad9f5eb3910f47581960193c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 9 18:01:00 2015 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jun 9 18:01:00 2015 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=ee0659f6
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.travis.yml | 2 -
DEVELOPING | 10 +--
README | 2 +-
pym/portage/proxy/lazyimport.py | 17 +++--
pym/portage/util/movefile.py | 2 +-
runtests | 156 ++++++++++++++++++++++++++++++++++++++++
runtests.sh | 109 ----------------------------
7 files changed, 176 insertions(+), 122 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2015-06-04 19:47 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2015-06-04 19:47 UTC (permalink / raw
To: gentoo-commits
commit: 3fc0447f6d36cad9d168c40dad680dbe0876cf38
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 4 19:46:28 2015 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jun 4 19:46:28 2015 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=3fc0447f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
NEWS | 15 ++
RELEASE-NOTES | 59 ++++++
bin/ebuild-helpers/bsd/sed | 4 +-
bin/ebuild-helpers/doconfd | 5 +-
bin/ebuild-helpers/dodoc | 5 +-
bin/ebuild-helpers/doenvd | 5 +-
bin/ebuild-helpers/doheader | 5 +-
bin/ebuild-helpers/doinitd | 5 +-
bin/ebuild-helpers/dolib.a | 3 +-
bin/ebuild-helpers/dolib.so | 3 +-
bin/ebuild-helpers/emake | 27 +--
bin/ebuild-helpers/portageq | 4 +-
bin/ebuild-helpers/unprivileged/chown | 4 +-
bin/ebuild-helpers/xattr/install | 14 +-
bin/ebuild.sh | 10 +-
bin/egencache | 17 +-
bin/emerge-webrsync | 4 +-
bin/install-qa-check.d/10executable-issues | 242 ++++++++++++-----------
bin/install-qa-check.d/80libraries | 130 ++++++------
bin/install-qa-check.d/90gcc-warnings | 2 +-
bin/misc-functions.sh | 15 +-
bin/phase-functions.sh | 9 +-
bin/phase-helpers.sh | 2 +-
bin/portageq | 18 +-
bin/quickpkg | 16 +-
bin/repoman | 39 ++--
cnf/make.conf.example | 6 +-
cnf/make.globals | 6 +-
cnf/repos.conf | 4 +
man/ebuild.5 | 22 ++-
man/egencache.1 | 2 +-
man/emaint.1 | 4 +-
man/make.conf.5 | 8 +-
man/repoman.1 | 7 +-
pym/_emerge/BinpkgExtractorAsync.py | 5 +-
pym/_emerge/EbuildMerge.py | 3 +-
pym/_emerge/JobStatusDisplay.py | 4 +-
pym/_emerge/PackageMerge.py | 9 +-
pym/_emerge/Scheduler.py | 20 +-
pym/_emerge/actions.py | 5 +-
pym/_emerge/search.py | 24 ++-
pym/portage/_emirrordist/Config.py | 6 +-
pym/portage/_emirrordist/MirrorDistTask.py | 4 +-
pym/portage/checksum.py | 2 +-
pym/portage/const.py | 3 +
pym/portage/dbapi/_MergeProcess.py | 9 +-
pym/portage/dbapi/_VdbMetadataDelta.py | 25 ++-
pym/portage/dbapi/bintree.py | 15 +-
pym/portage/dbapi/vartree.py | 73 +++++--
pym/portage/dep/soname/multilib_category.py | 2 +-
pym/portage/dispatch_conf.py | 97 +++++++--
pym/portage/output.py | 4 +-
pym/portage/package/ebuild/_config/UseManager.py | 8 +-
pym/portage/package/ebuild/doebuild.py | 22 ++-
pym/portage/package/ebuild/fetch.py | 7 +-
pym/portage/repository/config.py | 15 +-
pym/portage/sync/controller.py | 9 +-
pym/portage/sync/modules/rsync/rsync.py | 13 +-
pym/portage/sync/modules/webrsync/__init__.py | 4 +-
pym/portage/sync/modules/webrsync/webrsync.py | 4 +
pym/portage/util/__init__.py | 24 ++-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 5 +-
pym/portage/util/formatter.py | 69 +++++++
pym/portage/util/writeable_check.py | 24 ++-
pym/portage/util/xattr.py | 20 ++
pym/repoman/utilities.py | 8 +-
setup.py | 2 +-
67 files changed, 888 insertions(+), 377 deletions(-)
diff --cc bin/ebuild-helpers/bsd/sed
index 89f9ec6,9a7f2d4..0be2737
--- a/bin/ebuild-helpers/bsd/sed
+++ b/bin/ebuild-helpers/bsd/sed
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2007-2012 Gentoo Foundation
+ # Copyright 2007-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
scriptpath=${BASH_SOURCE[0]}
diff --cc bin/ebuild-helpers/dodoc
index 1b508df,6ccf0a4..20f15bb
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -2,13 -2,10 +2,10 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if ___eapi_dodoc_supports_-r; then
- exec \
- env \
- __PORTAGE_HELPER="dodoc" \
- doins "$@"
+ __PORTAGE_HELPER='dodoc' exec doins "$@"
fi
if [ $# -lt 1 ] ; then
diff --cc bin/ebuild-helpers/emake
index dcb64a3,2a3c2f0..4b98aec
--- a/bin/ebuild-helpers/emake
+++ b/bin/ebuild-helpers/emake
@@@ -1,19 -1,23 +1,22 @@@
-#!/bin/bash
-# Copyright 1999-2015 Gentoo Foundation
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
- # emake: Supplies some default parameters to GNU make. At the moment the
- # only parameter supplied is -jN, where N is a number of
- # parallel processes that should be ideal for the running host
- # (e.g. on a single-CPU machine, N=2). The MAKEOPTS variable
- # is set in make.globals. We don't source make.globals
- # here because emake is only called from an ebuild.
+ # emake: Run make and automatically pass along flags set in the env. We support
+ # MAKEOPTS & EXTRA_EMAKE which allows the user to customize behavior (such as
+ # parallel builds and load limiting). The latter overrides the ebuild and thus
+ # should be used with caution (more a debugging knob).
+ #
+ # With newer EAPIs, we also automatically fail the build if make itself fails.
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- if [[ $PORTAGE_QUIET != 1 ]] ; then
+ cmd=(
- ${MAKE:-make} ${MAKEOPTS} "$@" ${EXTRA_EMAKE}
++ ${MAKE:-make} SHELL="${BASH:-/bin/bash}" ${MAKEOPTS} "$@" ${EXTRA_EMAKE}
+ )
+
+ if [[ ${PORTAGE_QUIET} != 1 ]] ; then
(
- for arg in ${MAKE:-make} $MAKEOPTS "$@" $EXTRA_EMAKE ; do
+ for arg in "${cmd[@]}" ; do
[[ ${arg} == *" "* ]] \
&& printf "'%s' " "${arg}" \
|| printf "%s " "${arg}"
diff --cc bin/ebuild-helpers/portageq
index 935f548,ba889eb..4f3e4e5
--- a/bin/ebuild-helpers/portageq
+++ b/bin/ebuild-helpers/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2009-2013 Gentoo Foundation
+ # Copyright 2009-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
scriptpath=${BASH_SOURCE[0]}
@@@ -15,9 -15,11 +15,11 @@@ set -f # in case ${PATH} contains any s
for path in ${PATH}; do
[[ -x ${path}/${scriptname} ]] || continue
+ [[ ${path} == ${PORTAGE_OVERRIDE_EPREFIX}/usr/lib*/portage/* ]] && continue
+ [[ ${path} == */._portage_reinstall_.* ]] && continue
[[ ${path}/${scriptname} -ef ${scriptpath} ]] && continue
PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" \
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" \
"${path}/${scriptname}" "$@"
done
diff --cc bin/ebuild-helpers/unprivileged/chown
index 86b87c2,2f1f161..4705bca
--- a/bin/ebuild-helpers/unprivileged/chown
+++ b/bin/ebuild-helpers/unprivileged/chown
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2012-2013 Gentoo Foundation
+ # Copyright 2012-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
scriptpath=${BASH_SOURCE[0]}
diff --cc bin/ebuild-helpers/xattr/install
index 233459f,2d2a693..bc1bab9
--- a/bin/ebuild-helpers/xattr/install
+++ b/bin/ebuild-helpers/xattr/install
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2013 Gentoo Foundation
+ # Copyright 2013-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
diff --cc bin/ebuild.sh
index 8d1b947,4e26f87..8a815ab
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -1,9 -1,9 +1,9 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"
-PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}"
+PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"
+PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}"
# Prevent aliases from causing portage to act inappropriately.
# Make sure it's before everything so we don't mess aliases that follow.
diff --cc bin/misc-functions.sh
index 4928575,24941af..a8a07f4
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc bin/phase-functions.sh
index f447fca,7bf4d63..013cc43
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Hardcoded bash lists are needed for backward compatibility with
diff --cc bin/repoman
index 943a61f,7cb32ce..afc26c3
--- a/bin/repoman
+++ b/bin/repoman
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -bO
+#!@PREFIX_PORTAGE_PYTHON@ -bO
- # Copyright 1999-2014 Gentoo Foundation
+ # Copyright 1999-2015 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Next to do: dep syntax checking in mask files
diff --cc cnf/make.globals
index 880fed5,82d8cc1..3518180
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -152,24 -123,12 +152,26 @@@ PORTAGE_ELOG_MAILFROM="@portageuser@@lo
PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
# btrfs.* attributes are irrelevant, see bug #527636.
- # Security labels are special, see bug #461868.
+ # security.* attributes may be special (see bug 461868), but
+ # security.capability is specifically not excluded (bug 548516).
# system.nfs4_acl attributes are irrelevant, see bug #475496.
- PORTAGE_XATTR_EXCLUDE="btrfs.* security.* system.nfs4_acl"
+ PORTAGE_XATTR_EXCLUDE="btrfs.* security.evm security.ima
+ security.selinux system.nfs4_acl"
+# Writeable paths for Mac OS X seatbelt sandbox
+#
+# If path ends in a slash (/), access will recursively be allowed to directory
+# contents (using a regex), not the directory itself. Without a slash, access
+# to the directory or file itself will be allowed (using a literal), so it can
+# be created, removed and changed. If both are needed, the directory needs to be
+# given twice, once with and once without the slash. Obviously this only makes
+# sense for directories, not files.
+#
+# An empty value for either variable will disable all restrictions on the
+# corresponding operation.
+MACOSSANDBOX_PATHS="/dev/fd/ /private/tmp/ /private/var/tmp/ @@PORTAGE_BUILDDIR@@/ @@PORTAGE_ACTUAL_DISTDIR@@/"
+MACOSSANDBOX_PATHS_CONTENT_ONLY="/dev/null /dev/dtracehelper /dev/tty /private/var/run/syslog"
+
# *****************************
# ** DO NOT EDIT THIS FILE **
# ***************************************************
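As a rough illustration of the seatbelt path convention documented in the make.globals hunk above (assuming a hypothetical /opt/scratch directory that is not part of the commit), granting both kinds of access means listing the directory twice, once without and once with the trailing slash; the remaining entries are just the defaults shipped in make.globals.

# /opt/scratch  (literal): the directory node itself may be created, removed or changed
# /opt/scratch/ (regex):   anything below the directory may be written recursively
MACOSSANDBOX_PATHS="/opt/scratch /opt/scratch/ /dev/fd/ /private/tmp/ /private/var/tmp/ @@PORTAGE_BUILDDIR@@/ @@PORTAGE_ACTUAL_DISTDIR@@/"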
diff --cc cnf/repos.conf
index 4433546,062fc0d..b27d5c6
--- a/cnf/repos.conf
+++ b/cnf/repos.conf
@@@ -1,8 -1,12 +1,12 @@@
[DEFAULT]
-main-repo = gentoo
+main-repo = gentoo_prefix
-[gentoo]
-location = /usr/portage
+[gentoo_prefix]
+location = @PORTAGE_EPREFIX@/usr/portage
sync-type = rsync
-sync-uri = rsync://rsync.gentoo.org/gentoo-portage
+sync-uri = rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
auto-sync = yes
+
+ # for daily squashfs snapshots
+ #sync-type = squashdelta
+ #sync-uri = mirror://gentoo/../snapshots/squashfs
diff --cc pym/portage/dbapi/vartree.py
index a037200,62d880e..f5527b6
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -33,13 -33,11 +33,14 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.env_update:env_update',
'portage.util.listdir:dircache,listdir',
'portage.util.movefile:movefile',
- 'portage.util.path:first_existing',
+ 'portage.util.path:first_existing,iter_parents',
'portage.util.writeable_check:get_ro_checker',
+ 'portage.util:xattr@_xattr',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.util._async.SchedulerInterface:SchedulerInterface',
'portage.util._eventloop.EventLoop:EventLoop',
'portage.util._eventloop.global_event_loop:global_event_loop',
diff --cc pym/portage/dispatch_conf.py
index fe2a85b,ed9a64a..97a79d8
--- a/pym/portage/dispatch_conf.py
+++ b/pym/portage/dispatch_conf.py
@@@ -20,7 -21,7 +21,8 @@@ from portage import _encodings, os, shu
from portage.env.loaders import KeyValuePairFileLoader
from portage.localization import _
from portage.util import shlex_split, varexpand
+ from portage.util.path import iter_parents
+from portage.const import EPREFIX
RCS_BRANCH = '1.1.1'
RCS_LOCK = 'rcs -ko -M -l'
diff --cc pym/portage/package/ebuild/doebuild.py
index 9ff635e,5e4d7b1..77c92fd
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -209,29 -211,31 +212,33 @@@ def _doebuild_path(settings, eapi=None)
prefixes.append("/")
path = overrides
+ # PREFIX LOCAL: use DEFAULT_PATH and EXTRA_PATH from make.globals
+ defaultpath = [x for x in settings.get("DEFAULT_PATH", "").split(":") if x]
+ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
if "xattr" in settings.features:
- path.append(os.path.join(portage_bin_path, "ebuild-helpers", "xattr"))
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers", "xattr"))
if uid != 0 and \
"unprivileged" in settings.features and \
"fakeroot" not in settings.features:
- path.append(os.path.join(portage_bin_path,
- "ebuild-helpers", "unprivileged"))
+ for x in portage_bin_path:
+ path.append(os.path.join(x,
+ "ebuild-helpers", "unprivileged"))
if settings.get("USERLAND", "GNU") != "GNU":
- path.append(os.path.join(portage_bin_path, "ebuild-helpers", "bsd"))
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers", "bsd"))
- path.append(os.path.join(portage_bin_path, "ebuild-helpers"))
+ for x in portage_bin_path:
+ path.append(os.path.join(x, "ebuild-helpers"))
path.extend(prerootpath)
-
- for prefix in prefixes:
- for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin", "usr/bin", "sbin", "bin"):
- path.append(os.path.join(prefix, x))
-
+ path.extend(defaultpath)
path.extend(rootpath)
+ path.extend(extrapath)
+ # END PREFIX LOCAL
+
settings["PATH"] = ":".join(path)
def doebuild_environment(myebuild, mydo, myroot=None, settings=None,
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2015-04-05 9:15 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2015-04-05 9:15 UTC (permalink / raw
To: gentoo-commits
commit: eed711a2e330ae73978bd7612ae596a7b3f7adbb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 5 09:13:48 2015 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Apr 5 09:13:48 2015 +0000
URL: https://gitweb.gentoo.org/proj/portage.git/commit/?id=eed711a2
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
NEWS | 11 +
RELEASE-NOTES | 121 ++
bin/bashrc-functions.sh | 2 +-
bin/chpathtool.py | 22 +-
bin/dispatch-conf | 68 +-
bin/eapi.sh | 114 +-
bin/ebuild | 7 +-
bin/ebuild-helpers/dohtml | 4 +
bin/ebuild.sh | 98 +-
bin/egencache | 38 +-
bin/emerge-webrsync | 4 +-
bin/etc-update | 158 ++-
bin/install-qa-check.d/05double-D | 9 +-
bin/install-qa-check.d/90world-writable | 27 +-
bin/install.py | 2 +-
bin/isolated-functions.sh | 89 +-
bin/misc-functions.sh | 71 +-
bin/phase-functions.sh | 15 +-
bin/phase-helpers.sh | 315 +++++-
bin/portageq | 27 +-
bin/quickpkg | 3 +-
bin/regenworld | 4 +-
bin/repoman | 43 +-
bin/save-ebuild-env.sh | 9 +-
bin/socks5-server.py | 227 ++++
cnf/dispatch-conf.conf | 2 +-
cnf/make.conf.example | 2 +-
cnf/make.globals | 3 +-
cnf/repo.postsync.d/example | 51 +
cnf/repos.conf | 1 +
cnf/sets/portage.conf | 12 +-
doc/config/sets.docbook | 47 +-
man/ebuild.5 | 28 +-
man/egencache.1 | 15 +-
man/emaint.1 | 71 +-
man/emerge.1 | 117 +-
man/make.conf.5 | 45 +-
man/portage.5 | 262 ++++-
man/repoman.1 | 2 +-
pym/_emerge/AbstractPollTask.py | 52 +-
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgExtractorAsync.py | 25 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgPrefetcher.py | 2 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/FakeVartree.py | 16 +-
pym/_emerge/Package.py | 135 ++-
pym/_emerge/Scheduler.py | 7 +-
pym/_emerge/actions.py | 1194 +++-----------------
pym/_emerge/clear_caches.py | 1 -
pym/_emerge/create_depgraph_params.py | 27 +
pym/_emerge/create_world_atom.py | 6 +-
pym/_emerge/depgraph.py | 780 ++++++++++---
pym/_emerge/help.py | 2 +-
pym/_emerge/is_valid_package_atom.py | 5 +-
pym/_emerge/main.py | 94 +-
pym/_emerge/resolver/DbapiProvidesIndex.py | 101 ++
pym/_emerge/resolver/output.py | 40 +-
pym/_emerge/resolver/output_helpers.py | 19 +-
pym/_emerge/resolver/package_tracker.py | 42 +-
pym/_emerge/resolver/slot_collision.py | 35 +-
pym/_emerge/search.py | 188 ++-
pym/_emerge/unmerge.py | 42 +-
pym/portage/__init__.py | 29 +-
pym/portage/_global_updates.py | 4 +-
pym/portage/_selinux.py | 14 +-
pym/portage/_sets/ProfilePackageSet.py | 35 +
pym/portage/_sets/__init__.py | 19 +-
pym/portage/_sets/files.py | 160 ++-
pym/portage/_sets/profiles.py | 28 +-
pym/portage/cache/fs_template.py | 25 +-
pym/portage/cache/index/IndexStreamIterator.py | 27 +
.../sync => portage/cache/index}/__init__.py | 2 +-
pym/portage/cache/index/pkg_desc_index.py | 60 +
pym/portage/const.py | 3 +
pym/portage/data.py | 134 ++-
pym/portage/dbapi/DummyTree.py | 16 +
pym/portage/dbapi/IndexedPortdb.py | 171 +++
pym/portage/dbapi/IndexedVardb.py | 114 ++
.../dbapi/_ContentsCaseSensitivityManager.py | 93 ++
pym/portage/dbapi/_VdbMetadataDelta.py | 153 +++
pym/portage/dbapi/__init__.py | 10 +-
pym/portage/dbapi/bintree.py | 898 ++++++++-------
pym/portage/dbapi/vartree.py | 416 ++++---
pym/portage/dbapi/virtual.py | 113 +-
pym/portage/dep/__init__.py | 69 +-
pym/portage/dep/_slot_operator.py | 13 +
pym/portage/dep/dep_check.py | 69 +-
pym/portage/dep/soname/SonameAtom.py | 72 ++
.../sync => portage/dep/soname}/__init__.py | 2 +-
pym/portage/dep/soname/multilib_category.py | 114 ++
pym/portage/dep/soname/parse.py | 47 +
pym/portage/dispatch_conf.py | 189 +++-
pym/portage/eapi.py | 2 +-
pym/portage/emaint/main.py | 33 +-
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
pym/portage/emaint/modules/merges/merges.py | 15 +-
pym/portage/emaint/modules/sync/__init__.py | 55 +
pym/portage/emaint/modules/sync/sync.py | 289 +++++
pym/portage/exception.py | 1 +
pym/portage/locks.py | 11 +-
pym/portage/metadata.py | 208 ++++
pym/portage/{emaint => }/module.py | 40 +-
pym/portage/news.py | 15 +-
.../package/ebuild/_config/KeywordsManager.py | 7 +-
.../package/ebuild/_config/LocationsManager.py | 44 +-
pym/portage/package/ebuild/_config/MaskManager.py | 25 +-
pym/portage/package/ebuild/_config/UseManager.py | 116 +-
.../package/ebuild/_config/special_env_vars.py | 10 +-
pym/portage/package/ebuild/config.py | 195 +++-
pym/portage/package/ebuild/doebuild.py | 246 +++-
pym/portage/package/ebuild/fetch.py | 7 +-
pym/portage/package/ebuild/prepare_build_dirs.py | 9 +-
pym/portage/{emaint => }/progress.py | 0
pym/portage/repository/config.py | 121 +-
pym/portage/sync/__init__.py | 38 +
pym/portage/sync/config_checks.py | 72 ++
pym/portage/sync/controller.py | 321 ++++++
.../sync/getaddrinfo_validate.py | 0
pym/{repoman => portage/sync/modules}/__init__.py | 0
pym/portage/sync/modules/cvs/__init__.py | 45 +
pym/portage/sync/modules/cvs/cvs.py | 67 ++
pym/portage/sync/modules/git/__init__.py | 55 +
pym/portage/sync/modules/git/git.py | 86 ++
pym/portage/sync/modules/rsync/__init__.py | 28 +
pym/portage/sync/modules/rsync/rsync.py | 543 +++++++++
pym/portage/sync/modules/svn/__init__.py | 31 +
pym/portage/sync/modules/svn/svn.py | 89 ++
pym/portage/sync/modules/webrsync/__init__.py | 49 +
pym/portage/sync/modules/webrsync/webrsync.py | 66 ++
.../sync/old_tree_timestamp.py | 5 +-
pym/portage/sync/syncbase.py | 136 +++
pym/portage/tests/__init__.py | 4 +-
pym/portage/tests/dbapi/test_fakedbapi.py | 11 +-
pym/portage/tests/dbapi/test_portdb_cache.py | 18 +-
pym/portage/tests/dep/test_isvalidatom.py | 8 +-
.../tests/ebuild/test_use_expand_incremental.py | 132 +++
pym/portage/tests/emerge/test_config_protect.py | 292 +++++
pym/portage/tests/emerge/test_simple.py | 10 +-
pym/portage/tests/glsa/test_security_set.py | 5 +-
pym/portage/tests/lint/metadata.py | 11 +
pym/portage/tests/lint/test_compile_modules.py | 13 +
pym/portage/tests/resolver/ResolverPlayground.py | 46 +-
.../resolver/binpkg_multi_instance}/__init__.py | 2 +-
.../resolver/binpkg_multi_instance/__test__.py} | 2 +-
.../test_build_id_profile_format.py | 134 +++
.../binpkg_multi_instance/test_rebuilt_binaries.py | 101 ++
.../tests/resolver/soname}/__init__.py | 2 +-
.../tests/resolver/soname/__test__.py} | 2 +-
.../tests/resolver/soname/test_autounmask.py | 103 ++
pym/portage/tests/resolver/soname/test_depclean.py | 61 +
.../tests/resolver/soname/test_downgrade.py | 240 ++++
.../tests/resolver/soname/test_or_choices.py | 92 ++
.../tests/resolver/soname/test_reinstall.py | 87 ++
.../tests/resolver/soname/test_skip_update.py | 86 ++
.../soname/test_slot_conflict_reinstall.py | 342 ++++++
.../resolver/soname/test_slot_conflict_update.py | 117 ++
.../tests/resolver/soname/test_soname_provided.py | 78 ++
.../tests/resolver/soname/test_unsatisfiable.py | 71 ++
.../tests/resolver/soname/test_unsatisfied.py | 87 ++
pym/portage/tests/resolver/test_backtracking.py | 9 +-
pym/portage/tests/resolver/test_changed_deps.py | 120 ++
.../tests/resolver/test_onlydeps_circular.py | 51 +
pym/portage/tests/resolver/test_or_choices.py | 137 ++-
pym/portage/tests/resolver/test_package_tracker.py | 4 +-
.../tests/resolver/test_profile_default_eapi.py | 126 +++
.../tests/resolver/test_profile_package_set.py | 123 ++
..._slot_operator_update_probe_parent_downgrade.py | 68 ++
pym/portage/tests/resolver/test_virtual_slot.py | 75 ++
pym/portage/tests/resolver/test_with_test_deps.py | 44 +
pym/{_emerge => portage/tests}/sync/__init__.py | 2 +-
pym/portage/tests/sync/test_sync_local.py | 189 ++++
pym/portage/update.py | 4 +-
pym/portage/util/__init__.py | 111 +-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 62 +-
pym/portage/util/_dyn_libs/NeededEntry.py | 82 ++
pym/portage/util/_dyn_libs/soname_deps.py | 138 +++
pym/portage/util/compression_probe.py | 79 ++
pym/portage/util/cpuinfo.py | 18 +
pym/{_emerge/sync => portage/util/elf}/__init__.py | 2 +-
pym/portage/util/elf/constants.py | 45 +
pym/portage/util/elf/header.py | 65 ++
.../sync => portage/util/endian}/__init__.py | 2 +-
pym/portage/util/endian/decode.py | 48 +
pym/portage/util/iterators/MultiIterGroupBy.py | 94 ++
.../sync => portage/util/iterators}/__init__.py | 2 +-
pym/portage/util/movefile.py | 2 +-
pym/portage/util/path.py | 48 +
pym/portage/util/socks5.py | 81 ++
pym/portage/util/writeable_check.py | 2 +-
pym/portage/versions.py | 28 +-
pym/repoman/check_missingslot.py | 31 +
pym/repoman/utilities.py | 7 +-
setup.py | 27 +-
196 files changed, 12504 insertions(+), 2861 deletions(-)
diff --cc bin/bashrc-functions.sh
index 1a92738,cc02546..daa00d2
--- a/bin/bashrc-functions.sh
+++ b/bin/bashrc-functions.sh
@@@ -1,12 -1,7 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+portageq() {
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}}\
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" "$@"
+}
+
register_die_hook() {
local x
for x in $* ; do
diff --cc bin/dispatch-conf
index 286d821,678a66d..4215e5b
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -21,11 -26,10 +26,11 @@@ if osp.isfile(osp.join(osp.dirname(osp.
sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
import portage
portage._internal_caller = True
- from portage import os
- from portage import _unicode_decode
- from portage.dispatch_conf import diffstatusoutput
+ from portage import os, shutil
+ from portage import _encodings, _unicode_decode
+ from portage.dispatch_conf import diffstatusoutput, diff_mixed_wrapper
from portage.process import find_binary, spawn
+from portage.const import EPREFIX
FIND_EXTANT_CONFIGS = "find '%s' %s -name '._cfg????_%s' ! -name '.*~' ! -iname '.*.bak' -print"
DIFF_CONTENTS = "diff -Nu '%s' '%s'"
diff --cc bin/ebuild-helpers/dohtml
index 70cb1f4,0478e49..bf1f0fe
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -2,10 -2,14 +2,14 @@@
# Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ if ___eapi_has_dohtml_deprecated; then
+ eqawarn "'${0##*/}' is deprecated in EAPI '$EAPI'"
+ fi
+
-PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
+PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
# Use safe cwd, avoiding unsafe import for bug #469338.
export __PORTAGE_HELPER_CWD=${PWD}
cd "${PORTAGE_PYM_PATH}"
diff --cc bin/egencache
index e2f57a5,f97432f..def4837
--- a/bin/egencache
+++ b/bin/egencache
@@@ -57,8 -58,7 +58,8 @@@ from portage.util._async.run_main_sched
from portage.util._eventloop.global_event_loop import global_event_loop
from portage import cpv_getkey
from portage.dep import Atom, isjustname
- from portage.versions import pkgsplit, vercmp
+ from portage.versions import pkgsplit, vercmp, _pkg_str
+from portage.const import EPREFIX
try:
from xml.etree import ElementTree
diff --cc bin/install-qa-check.d/90world-writable
index 635612d,820683b..bb9b075
--- a/bin/install-qa-check.d/90world-writable
+++ b/bin/install-qa-check.d/90world-writable
@@@ -2,23 -2,34 +2,36 @@@
world_writable_check() {
# Now we look for all world writable files.
- local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${ED}:/:")
+ # PREFIX LOCAL: keep offset prefix in the reported files
- local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${D}:- :")
++ local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${D}:/:")
+ # END PREFIX LOCAL
+ local OLDIFS x prev_shopts=$-
+
+ OLDIFS=$IFS
+ IFS=$'\n'
+ set -f
+
if [[ -n ${unsafe_files} ]] ; then
- __vecho "QA Security Notice: world writable file(s):"
- __vecho "${unsafe_files}"
- __vecho "- This may or may not be a security problem, most of the time it is one."
- __vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
- sleep 1
+ eqawarn "QA Security Notice: world writable file(s):"
+
+ eqatag -v world-writable $unsafe_files
+
+ eqawarn "This may or may not be a security problem, most of the time it is one."
+ eqawarn "Please double check that $PF really needs a world writeable bit and file bugs accordingly."
+ eqawarn
fi
- local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${ED}:/:")
+ local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${D}:/:")
if [[ -n ${unsafe_files} ]] ; then
eqawarn "QA Notice: Unsafe files detected (set*id and world writable)"
- eqawarn "${unsafe_files}"
+
+ eqatag -v world-writable-setid $unsafe_files
+
die "Unsafe files found in \${D}. Portage will not install them."
fi
+
+ IFS=$OLDIFS
+ [[ ${prev_shopts} == *f* ]] || set +f
}
world_writable_check
diff --cc bin/misc-functions.sh
index 5b8e872,e08c228..4928575
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -172,19 -168,21 +172,23 @@@ install_qa_check()
local EPREFIX= ED=${D}
fi
- cd "${ED}" || die "cd failed"
+ # PREFIX LOCAL: ED need not exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
- # Run QA checks from install-qa-check.d.
- # Note: checks need to be run *before* stripping.
- local f
- # TODO: handle nullglob-like
- for f in "${PORTAGE_BIN_PATH}"/install-qa-check.d/*; do
- # Run in a subshell to treat it like external script,
- # but use 'source' to pass all variables through.
- (
- source "${f}" || eerror "Post-install QA check ${f##*/} failed to run"
+ # Collect the paths for QA checks, highest prio first.
+ paths=(
+ # sysadmin overrides
+ "${PORTAGE_OVERRIDE_EPREFIX}"/usr/local/lib/install-qa-check.d
+ # system-wide package installs
+ "${PORTAGE_OVERRIDE_EPREFIX}"/usr/lib/install-qa-check.d
+ )
+
+ # Now repo-specific checks.
+ # (yes, PORTAGE_ECLASS_LOCATIONS contains repo paths...)
+ for d in "${PORTAGE_ECLASS_LOCATIONS[@]}"; do
+ paths+=(
+ "${d}"/metadata/install-qa-check.d
)
done
diff --cc bin/phase-functions.sh
index 2dece5a,2743e27..f447fca
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -31,8 -31,8 +31,8 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTAGE_XATTR_EXCLUDE \
PORTDIR \
- PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
+ REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS"
+ __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc cnf/make.globals
index a57a603,dd99618..880fed5
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -151,24 -122,11 +151,25 @@@ PORTAGE_ELOG_MAILFROM="@portageuser@@lo
# Signing command used by repoman
PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
+ # btrfs.* attributes are irrelevant, see bug #527636.
# Security labels are special, see bug #461868.
# system.nfs4_acl attributes are irrelevant, see bug #475496.
- PORTAGE_XATTR_EXCLUDE="security.* system.nfs4_acl"
+ PORTAGE_XATTR_EXCLUDE="btrfs.* security.* system.nfs4_acl"
+# Writeable paths for Mac OS X seatbelt sandbox
+#
+# If path ends in a slash (/), access will recursively be allowed to directory
+# contents (using a regex), not the directory itself. Without a slash, access
+# to the directory or file itself will be allowed (using a literal), so it can
+# be created, removed and changed. If both are needed, the directory needs to be
+# given twice, once with and once without the slash. Obviously this only makes
+# sense for directories, not files.
+#
+# An empty value for either variable will disable all restrictions on the
+# corresponding operation.
+MACOSSANDBOX_PATHS="/dev/fd/ /private/tmp/ /private/var/tmp/ @@PORTAGE_BUILDDIR@@/ @@PORTAGE_ACTUAL_DISTDIR@@/"
+MACOSSANDBOX_PATHS_CONTENT_ONLY="/dev/null /dev/dtracehelper /dev/tty /private/var/run/syslog"
+
# *****************************
# ** DO NOT EDIT THIS FILE **
# ***************************************************
diff --cc cnf/repos.conf
index 18c611a,1ca98ca..4433546
--- a/cnf/repos.conf
+++ b/cnf/repos.conf
@@@ -1,7 -1,8 +1,8 @@@
[DEFAULT]
-main-repo = gentoo
+main-repo = gentoo_prefix
-[gentoo]
-location = /usr/portage
+[gentoo_prefix]
+location = @PORTAGE_EPREFIX@/usr/portage
sync-type = rsync
-sync-uri = rsync://rsync.gentoo.org/gentoo-portage
+sync-uri = rsync://rsync.prefix.bitzolder.nl/gentoo-portage-prefix
+ auto-sync = yes
diff --cc pym/_emerge/Package.py
index bdf3b23,2c1a116..1570f3d
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@@ -13,12 -13,12 +13,13 @@@ from portage.cache.mappings import slot
from portage.const import EBUILD_PHASES
from portage.dep import Atom, check_required_use, use_reduce, \
paren_enclose, _slot_separator, _repo_separator
+ from portage.dep.soname.parse import parse_soname_deps
from portage.versions import _pkg_str, _unknown_repo
from portage.eapi import _get_eapi_attrs, eapi_has_use_aliases
- from portage.exception import InvalidDependString
+ from portage.exception import InvalidData, InvalidDependString
from portage.localization import _
from _emerge.Task import Task
+from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
basestring = str
@@@ -40,11 -41,12 +42,12 @@@ class Package(Task)
"_validated_atoms", "_visible")
metadata_keys = [
- "BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "EAPI",
- "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROVIDE", "RDEPEND",
- "repository", "PROPERTIES", "RESTRICT", "SLOT", "USE",
- "_mtime_", "DEFINED_PHASES", "REQUIRED_USE", "EPREFIX"]
+ "BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRED_USE",
+ "PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
- "SLOT", "USE", "_mtime_"]
++ "SLOT", "USE", "_mtime_", "EPREFIX"]
_dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
_buildtime_keys = ('DEPEND', 'HDEPEND')
diff --cc pym/_emerge/main.py
index a5de7c3,a5dafa3..0bacd68
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -19,8 -19,8 +19,9 @@@ portage.proxy.lazyimport.lazyimport(glo
'_emerge.is_valid_package_atom:insert_category_into_atom'
)
from portage import os
+from portage.const import EPREFIX
from portage.util._argparse import ArgumentParser
+ from portage.sync import _SUBMODULE_PATH_MAP
if sys.hexversion >= 0x3000000:
long = int
diff --cc pym/portage/data.py
index f4bbb44,2fd287d..fa89242
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -91,52 -147,38 +151,50 @@@ def _get_global(k)
#Discover the uid and gid of the portage user/group
keyerror = False
try:
- portage_uid = pwd.getpwnam(_get_global('_portage_username')).pw_uid
+ username = str(_get_global('_portage_username'))
+ portage_uid = pwd.getpwnam(username).pw_uid
except KeyError:
- keyerror = True
- portage_uid = 0
+ # PREFIX LOCAL: some sysadmins are insane, bug #344307
+ if username.isdigit():
+ portage_uid = int(username)
+ else:
+ keyerror = True
+ portage_uid = 0
+ # END PREFIX LOCAL
try:
- portage_gid = grp.getgrnam(_get_global('_portage_grpname')).gr_gid
+ grpname = str(_get_global('_portage_grpname'))
+ portage_gid = grp.getgrnam(grpname).gr_gid
except KeyError:
- keyerror = True
- portage_gid = 0
+ # PREFIX LOCAL: some sysadmins are insane, bug #344307
+ if grpname.isdigit():
+ portage_gid = int(grpname)
+ else:
+ keyerror = True
+ portage_gid = 0
+ # END PREFIX LOCAL
- if secpass < 1 and portage_gid in os.getgroups():
- secpass = 1
-
# Suppress this error message if both PORTAGE_GRPNAME and
# PORTAGE_USERNAME are set to "root", for things like
# Android (see bug #454060).
if keyerror and not (_get_global('_portage_username') == "root" and
_get_global('_portage_grpname') == "root"):
+ # PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
+ writemsg(colorize("BAD",
+ _("portage: '%s' user or '%s' group missing." % (_get_global('_portage_username'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
writemsg(colorize("BAD",
- _("portage: 'portage' user or group missing.")) + "\n", noiselevel=-1)
- writemsg(_(
- " For the defaults, line 1 goes into passwd, "
- "and 2 into group.\n"), noiselevel=-1)
- writemsg(colorize("GOOD",
- " portage:x:250:250:portage:/var/tmp/portage:/bin/false") \
- + "\n", noiselevel=-1)
- writemsg(colorize("GOOD", " portage::250:portage") + "\n",
- noiselevel=-1)
+ _(" In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
+ writemsg(colorize("BAD",
+ _(" since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
+ writemsg(colorize("BAD",
+ _(" Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
+ # END PREFIX LOCAL
portage_group_warning()
+ globals()['portage_gid'] = portage_gid
_initialized_globals.add('portage_gid')
+ globals()['portage_uid'] = portage_uid
_initialized_globals.add('portage_uid')
- _initialized_globals.add('secpass')
if k == 'portage_gid':
return portage_gid
@@@ -206,28 -244,24 +260,29 @@@
except OSError:
pass
else:
- if k == '_portage_grpname':
- try:
- grp_struct = grp.getgrgid(eroot_st.st_gid)
- except KeyError:
- v = eroot_st.st_gid
- else:
- v = grp_struct.gr_name
- else:
- try:
- pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- except KeyError:
- v = eroot_st.st_uid
+ if _unprivileged_mode(eroot_or_parent, eroot_st):
+ if k == '_portage_grpname':
+ try:
+ grp_struct = grp.getgrgid(eroot_st.st_gid)
+ except KeyError:
+ pass
+ else:
+ v = grp_struct.gr_name
else:
- v = pwd_struct.pw_name
+ try:
+ pwd_struct = pwd.getpwuid(eroot_st.st_uid)
+ except KeyError:
+ pass
+ else:
+ v = pwd_struct.pw_name
if v is None:
- v = 'portage'
+ # PREFIX LOCAL: use var iso hardwired 'portage'
+ if k == '_portage_grpname':
+ v = PORTAGE_GROUPNAME
+ else:
+ v = PORTAGE_USERNAME
+ # END PREFIX LOCAL
else:
raise AssertionError('unknown name: %s' % k)
diff --cc pym/portage/dbapi/bintree.py
index 45e8614,b37f388..e3e50e0
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -70,18 -70,26 +71,26 @@@ class bindbapi(fakedbapi)
_known_keys = frozenset(list(fakedbapi._known_keys) + \
["CHOST", "repository", "USE"])
def __init__(self, mybintree=None, **kwargs):
- fakedbapi.__init__(self, **kwargs)
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False,
+ multi_instance=True, **kwargs)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
- self.cpvdict={}
- self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE", "DEFINED_PHASES",
- "EPREFIX"
+ ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE", "_mtime_"
++ "SIZE", "SLOT", "USE", "_mtime_", "EPREFIX"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -306,12 -311,14 +312,14 @@@ class binarytree(object)
self._pkgindex_hashes = ["MD5","SHA1"]
self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
+ self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI", "EPREFIX"]
+ ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- "SIZE", "SLOT", "USE"]
++ "SIZE", "SLOT", "USE", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
@@@ -322,9 -329,9 +330,10 @@@
"CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
"GENTOO_MIRRORS", "INSTALL_MASK", "IUSE_IMPLICIT", "USE",
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
- "USE_EXPAND_UNPREFIXED"])
+ "USE_EXPAND_UNPREFIXED",
+ "EPREFIX"])
self._pkgindex_default_pkg_data = {
+ "BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
"DEPEND" : "",
diff --cc pym/portage/dbapi/vartree.py
index a0881a2,277c2f1..a037200
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -1077,8 -1097,8 +1112,11 @@@ class vardbapi(dbapi)
# Empty path is a code used to represent empty contents.
self._add_path("", pkg_hash)
- for x in contents:
- self._add_path(x[eroot_len:], pkg_hash)
+ for x in db._contents.keys():
- self._add_path(x[eroot_len:], pkg_hash)
++ path = x[eroot_len:]
++ if "case-insensitive-fs" in self._vardb.settings.features:
++ path = path.lower()
++ self._add_path(patch, pkg_hash)
self._vardb._aux_cache["modified"].add(cpv)
@@@ -1259,14 -1279,16 +1297,18 @@@
continue
if is_basename:
- for p in dblink(cpv).getcontents():
+ for p in dblink(cpv)._contents.keys():
+ if case_insensitive:
+ p = p.lower()
if os.path.basename(p) == name:
- owners.append((cpv, p[len(root):]))
+ owners.append((cpv, dblink(cpv).
+ _contents.unmap_key(
+ p)[len(root):]))
else:
- if dblink(cpv).isowner(path):
- owners.append((cpv, path))
+ key = dblink(cpv)._match_contents(path)
+ if key is not False:
+ owners.append(
+ (cpv, key[len(root):]))
except StopIteration:
path_iter.append(path)
@@@ -1314,14 -1336,16 +1356,18 @@@
dblnk = self._vardb._dblink(cpv)
for path, name, is_basename in path_info_list:
if is_basename:
- for p in dblnk.getcontents():
+ for p in dblnk._contents.keys():
+ if case_insensitive:
+ p = p.lower()
if os.path.basename(p) == name:
- search_pkg.results.append((dblnk, p[len(root):]))
+ search_pkg.results.append((dblnk,
+ dblnk._contents.unmap_key(
+ p)[len(root):]))
else:
- if dblnk.isowner(path):
- search_pkg.results.append((dblnk, path))
+ key = dblnk._match_contents(path)
+ if key is not False:
+ search_pkg.results.append(
+ (dblnk, key[len(root):]))
search_pkg.complete = True
return False
@@@ -2795,24 -2828,18 +2850,26 @@@ class dblink(object)
os_filename_arg.path.join(destroot,
filename.lstrip(os_filename_arg.path.sep)))
+ pkgfiles = self.getcontents()
+
+ preserve_case = None
if "case-insensitive-fs" in self.settings.features:
destfile = destfile.lower()
-
- if self._contents.contains(destfile):
- return self._contents.unmap_key(destfile)
-
- if self.getcontents():
+ preserve_case = dict((k.lower(), k) for k in pkgfiles)
+ pkgfiles = dict((k.lower(), v) for k, v in pkgfiles.items())
+
+ if pkgfiles and destfile in pkgfiles:
+ if preserve_case is not None:
+ return preserve_case[destfile]
+ return destfile
++ #if self._contents.contains(destfile):
++ # return self._contents.unmap_key(destfile)
+ if pkgfiles:
basename = os_filename_arg.path.basename(destfile)
if self._contents_basenames is None:
try:
- for x in pkgfiles:
- for x in self._contents.keys():
++			for x in pkgfiles:
_unicode_encode(x,
encoding=_encodings['merge'],
errors='strict')
@@@ -2897,10 -2924,8 +2954,12 @@@
if p_path_list:
for p_path in p_path_list:
x = os_filename_arg.path.join(p_path, basename)
- if self._contents.contains(x):
- return self._contents.unmap_key(x)
+ if x in pkgfiles:
+ if preserve_case is not None:
+ return preserve_case[x]
+ return x
++ #if self._contents.contains(x):
++ # return self._contents.unmap_key(x)
return False
diff --cc pym/portage/package/ebuild/config.py
index 6e578a9,3a4007b..18e95ff
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@@ -834,40 -857,38 +857,44 @@@ class config(object)
"PORTAGE_INST_UID": "0",
}
- eroot_or_parent = first_existing(eroot)
- unprivileged = False
- try:
- eroot_st = os.stat(eroot_or_parent)
- except OSError:
- pass
- else:
-
- if portage.data._unprivileged_mode(
- eroot_or_parent, eroot_st):
- unprivileged = True
-
- default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
- default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
-
- if "PORTAGE_USERNAME" not in self:
- try:
- pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- except KeyError:
- pass
- else:
- self["PORTAGE_USERNAME"] = pwd_struct.pw_name
- self.backup_changes("PORTAGE_USERNAME")
-
- if "PORTAGE_GRPNAME" not in self:
- try:
- grp_struct = grp.getgrgid(eroot_st.st_gid)
- except KeyError:
- pass
- else:
- self["PORTAGE_GRPNAME"] = grp_struct.gr_name
- self.backup_changes("PORTAGE_GRPNAME")
+ # PREFIX LOCAL: inventing UID/GID based on a path is a very
+ # bad idea: it breaks almost everything, since group ids
+ # don't have to match when a user belongs to many groups.
+ # In particular this breaks the configure-set portage
+ # group and user (in portage/data.py)
- #if eprefix:
- # # For prefix environments, default to the UID and GID of
- # # the top-level EROOT directory.
- # try:
- # eroot_st = os.stat(eroot)
- # except OSError:
- # pass
- # else:
- # default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
- # default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
-
- # if "PORTAGE_USERNAME" not in self:
- # try:
- # pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- # except KeyError:
- # pass
- # else:
- # self["PORTAGE_USERNAME"] = pwd_struct.pw_name
- # self.backup_changes("PORTAGE_USERNAME")
-
- # if "PORTAGE_GRPNAME" not in self:
- # try:
- # grp_struct = grp.getgrgid(eroot_st.st_gid)
- # except KeyError:
- # pass
- # else:
- # self["PORTAGE_GRPNAME"] = grp_struct.gr_name
- # self.backup_changes("PORTAGE_GRPNAME")
++# eroot_or_parent = first_existing(eroot)
++# unprivileged = False
++# try:
++# eroot_st = os.stat(eroot_or_parent)
++# except OSError:
++# pass
++# else:
++#
++# if portage.data._unprivileged_mode(
++# eroot_or_parent, eroot_st):
++# unprivileged = True
++#
++# default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
++# default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
++#
++# if "PORTAGE_USERNAME" not in self:
++# try:
++# pwd_struct = pwd.getpwuid(eroot_st.st_uid)
++# except KeyError:
++# pass
++# else:
++# self["PORTAGE_USERNAME"] = pwd_struct.pw_name
++# self.backup_changes("PORTAGE_USERNAME")
++#
++# if "PORTAGE_GRPNAME" not in self:
++# try:
++# grp_struct = grp.getgrgid(eroot_st.st_gid)
++# except KeyError:
++# pass
++# else:
++# self["PORTAGE_GRPNAME"] = grp_struct.gr_name
++# self.backup_changes("PORTAGE_GRPNAME")
+ # END PREFIX LOCAL
for var, default_val in default_inst_ids.items():
try:
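The PREFIX LOCAL comment above argues that deriving PORTAGE_INST_UID/GID from the owner of EROOT is fragile. A short illustration of the mismatch it can cause; the path is just an example and the numbers depend on the host:

    import os
    import pwd

    eroot = os.path.expanduser("~")          # stand-in for an EROOT directory
    st = os.stat(eroot)

    # ids the commented-out code would have invented from the path ...
    derived_uid, derived_gid = st.st_uid, st.st_gid

    # ... versus the ids of the user actually running portage
    current_uid = os.getuid()
    primary_gid = pwd.getpwuid(current_uid).pw_gid

    # when EROOT belongs to another user, or carries a group that is not
    # the current user's primary group, these disagree and break the
    # configure-set portage user/group
    print((derived_uid, derived_gid), (current_uid, primary_gid))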
diff --cc pym/portage/util/_dyn_libs/LinkageMapELF.py
index e4f8ee8,c44666a..c78f397
--- a/pym/portage/util/_dyn_libs/LinkageMapELF.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapELF.py
@@@ -16,9 -16,33 +16,35 @@@ from portage.localization import
from portage.util import getlibpaths
from portage.util import grabfile
from portage.util import normalize_path
+ from portage.util import varexpand
from portage.util import writemsg_level
+ from portage.util._dyn_libs.NeededEntry import NeededEntry
+from portage.const import EPREFIX
+ # Map ELF e_machine values from NEEDED.ELF.2 to approximate multilib
+ # categories. This approximation will produce incorrect results on x32
+ # and mips systems, but the result is not worse than using the raw
+ # e_machine value which was used by earlier versions of portage.
+ _approx_multilib_categories = {
+ "386": "x86_32",
+ "68K": "m68k_32",
+ "AARCH64": "arm_64",
+ "ALPHA": "alpha_64",
+ "ARM": "arm_32",
+ "IA_64": "ia_64",
+ "MIPS": "mips_o32",
+ "PARISC": "hppa_64",
+ "PPC": "ppc_32",
+ "PPC64": "ppc_64",
+ "S390": "s390_64",
+ "SH": "sh_32",
+ "SPARC": "sparc_32",
+ "SPARC32PLUS": "sparc_32",
+ "SPARCV9": "sparc_64",
+ "X86_64": "x86_64",
+ }
+
class LinkageMapELF(object):
"""Models dynamic linker dependencies."""
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-11-12 17:31 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-11-12 17:31 UTC (permalink / raw
To: gentoo-commits
commit: d46094a93302301accbce53621efd89dad45a47f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Nov 12 17:31:07 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Nov 12 17:31:07 2014 +0000
URL: http://sources.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d46094a9
Merge tag 'v2.2.14' into prefix
Conflicts:
bin/ebuild-helpers/portageq
NEWS | 9 +++++
RELEASE-NOTES | 36 +++++++++++++++++++
bin/ebuild-helpers/portageq | 20 +++++++++--
bin/ebuild-ipc.py | 64 ++++++++++++++++++++++++++++-----
bin/install-qa-check.d/90gcc-warnings | 4 +--
man/emerge.1 | 4 +--
misc/emerge-delta-webrsync | 2 +-
pym/_emerge/main.py | 2 +-
pym/portage/dbapi/vartree.py | 32 ++++++++++++-----
pym/portage/tests/emerge/test_simple.py | 19 ++++++++--
setup.py | 4 +--
11 files changed, 168 insertions(+), 28 deletions(-)
diff --cc bin/ebuild-helpers/portageq
index 2c7c428,4151bac..935f548
--- a/bin/ebuild-helpers/portageq
+++ b/bin/ebuild-helpers/portageq
@@@ -2,9 -2,25 +2,25 @@@
# Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ scriptpath=${BASH_SOURCE[0]}
+ scriptname=${scriptpath##*/}
+
-PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
+PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
# Use safe cwd, avoiding unsafe import for bug #469338.
cd "${PORTAGE_PYM_PATH}"
- PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/portageq" "$@"
+
+ IFS=':'
+ set -f # in case ${PATH} contains any shell glob characters
+
+ for path in ${PATH}; do
+ [[ -x ${path}/${scriptname} ]] || continue
+ [[ ${path}/${scriptname} -ef ${scriptpath} ]] && continue
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" \
++ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" \
+ "${path}/${scriptname}" "$@"
+ done
+
+ unset IFS
+ echo "${scriptname}: command not found" 1>&2
+ exit 127
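A sketch of the wrapper's PATH walk above: find the next executable with the same name that is not the wrapper itself, which is what the [[ -ef ]] test guards against. Paths here are illustrative:

    import os

    def find_real_script(wrapper_path, name, path_env):
        for d in path_env.split(":"):
            candidate = os.path.join(d, name)
            if not os.access(candidate, os.X_OK):
                continue
            try:
                # same file as the wrapper itself? skip it (the -ef test)
                if os.path.samefile(candidate, wrapper_path):
                    continue
            except OSError:
                continue
            return candidate
        return None   # caller reports "command not found" and exits 127

    print(find_real_script("/tmp/portageq", "portageq", os.environ.get("PATH", "")))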
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-10-02 18:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-10-02 18:48 UTC (permalink / raw
To: gentoo-commits
commit: afca65a3d4a71a04ae71a60b0a3057ad0a69a25e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 2 18:48:26 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Oct 2 18:48:26 2014 +0000
URL: http://sources.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=afca65a3
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
man/emerge.1 | 3 +++
pym/_emerge/actions.py | 2 +-
pym/_emerge/main.py | 13 +++++++++++++
pym/portage/tests/__init__.py | 4 +++-
4 files changed, 20 insertions(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-09-28 17:52 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-09-28 17:52 UTC (permalink / raw
To: gentoo-commits
commit: 990c5f4896b309fdcaf1dbbb5779177ecfcf6e74
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 28 17:52:16 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Sep 28 17:52:16 2014 +0000
URL: http://sources.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=990c5f48
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/emake
bin/misc-functions.sh
bin/portageq
doc/Makefile
pym/_emerge/EbuildBuild.py
pym/portage/const.py
pym/portage/dbapi/vartree.py
pym/portage/package/ebuild/doebuild.py
.gitignore | 1 +
.travis.yml | 13 +
DEVELOPING | 22 +-
MANIFEST.in | 18 +
Makefile | 215 ------
NEWS | 27 +-
RELEASE-NOTES | 65 +-
bin/archive-conf | 4 +-
bin/binhost-snapshot | 4 +-
bin/chpathtool.py | 8 +-
bin/clean_locks | 4 +-
bin/deprecated-path | 28 +
bin/dispatch-conf | 4 +-
bin/ebuild | 4 +-
bin/ebuild-helpers/emake | 4 +-
bin/ebuild-helpers/xattr/install | 27 +-
bin/ebuild-ipc.py | 20 +-
bin/ebuild.sh | 131 ++--
bin/egencache | 4 +-
bin/emaint | 4 +-
bin/emerge | 14 +-
bin/emerge-webrsync | 4 +-
bin/env-update | 4 +-
bin/fixpackages | 4 +-
bin/glsa-check | 4 +-
bin/install-qa-check.d/05double-D | 17 +
bin/install-qa-check.d/05prefix | 118 +++
bin/install-qa-check.d/10executable-issues | 140 ++++
bin/install-qa-check.d/10ignored-flags | 99 +++
bin/install-qa-check.d/20deprecated-directories | 18 +
bin/install-qa-check.d/20runtime-directories | 26 +
bin/install-qa-check.d/60bash-completion | 130 ++++
bin/install-qa-check.d/60openrc | 41 ++
bin/install-qa-check.d/60pkgconfig | 15 +
bin/install-qa-check.d/60pngfix | 35 +
bin/install-qa-check.d/60systemd | 25 +
bin/install-qa-check.d/60udev | 21 +
bin/install-qa-check.d/80libraries | 167 +++++
bin/install-qa-check.d/80multilib-strict | 50 ++
bin/install-qa-check.d/90gcc-warnings | 168 +++++
bin/install-qa-check.d/90world-writable | 27 +
bin/misc-functions.sh | 800 +--------------------
bin/phase-functions.sh | 153 ++--
bin/phase-helpers.sh | 40 +-
bin/portageq | 339 +++++----
bin/quickpkg | 8 +-
bin/regenworld | 4 +-
bin/repoman | 32 +-
bin/save-ebuild-env.sh | 2 +-
bin/xattr-helper.py | 6 +-
cnf/sets/portage.conf | 5 +
doc/Makefile | 13 -
doc/fragment/version | 1 -
man/emerge.1 | 19 +-
man/repoman.1 | 4 +
misc/emerge-delta-webrsync | 4 +-
mkrelease.sh | 141 ----
pym/_emerge/Binpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 6 +-
pym/_emerge/FakeVartree.py | 4 +-
pym/_emerge/MiscFunctionsProcess.py | 6 +-
pym/_emerge/PackageMerge.py | 5 +-
pym/_emerge/Scheduler.py | 2 +-
pym/_emerge/UserQuery.py | 71 ++
pym/_emerge/actions.py | 60 +-
pym/_emerge/depgraph.py | 380 ++++++++--
pym/_emerge/main.py | 18 +-
pym/_emerge/post_emerge.py | 5 +-
pym/_emerge/resolver/output_helpers.py | 2 +-
pym/_emerge/resolver/package_tracker.py | 2 +-
pym/_emerge/sync/old_tree_timestamp.py | 12 +-
pym/_emerge/unmerge.py | 8 +-
pym/_emerge/userquery.py | 55 --
pym/portage/__init__.py | 16 +-
pym/portage/_emirrordist/FetchTask.py | 6 +-
pym/portage/_global_updates.py | 4 +-
pym/portage/_sets/dbapi.py | 85 ++-
pym/portage/cache/sqlite.py | 4 +-
pym/portage/const.py | 12 +-
pym/portage/dbapi/__init__.py | 6 +-
pym/portage/dbapi/vartree.py | 25 +-
pym/portage/dep/_slot_operator.py | 27 +-
pym/portage/dep/dep_check.py | 20 +-
pym/portage/dispatch_conf.py | 3 +-
pym/portage/emaint/main.py | 6 +-
pym/portage/emaint/module.py | 2 +-
pym/portage/emaint/modules/binhost/__init__.py | 8 +-
pym/portage/emaint/modules/config/__init__.py | 8 +-
pym/portage/emaint/modules/logs/__init__.py | 8 +-
pym/portage/emaint/modules/merges/__init__.py | 31 +
pym/portage/emaint/modules/merges/merges.py | 290 ++++++++
pym/portage/emaint/modules/move/__init__.py | 8 +-
pym/portage/emaint/modules/move/move.py | 5 +-
pym/portage/emaint/modules/resume/__init__.py | 6 +-
pym/portage/emaint/modules/world/__init__.py | 8 +-
pym/portage/exception.py | 4 +
pym/portage/localization.py | 7 +-
pym/portage/mail.py | 12 +-
pym/portage/news.py | 8 +-
pym/portage/output.py | 6 +-
pym/portage/package/ebuild/config.py | 17 +-
pym/portage/package/ebuild/doebuild.py | 9 +-
pym/portage/tests/__init__.py | 24 +-
.../date => pym/portage/tests/bin/__test__.py | 0
.../tests/{bin/__test__ => dbapi/__test__.py} | 0
pym/portage/tests/dbapi/test_portdb_cache.py | 23 +-
.../tests/{dbapi/__test__ => dep/__test__.py} | 0
.../tests/{dep/__test__ => ebuild/__test__.py} | 0
pym/portage/tests/ebuild/test_config.py | 71 +-
.../tests/{ebuild/__test__ => emerge/__test__.py} | 0
pym/portage/tests/emerge/test_emerge_slot_abi.py | 7 +-
pym/portage/tests/emerge/test_simple.py | 36 +-
.../tests/{emerge/__test__ => env/__test__.py} | 0
.../tests/env/{__test__ => config/__test__.py} | 0
.../{env/config/__test__ => glsa/__test__.py} | 0
pym/portage/tests/glsa/test_security_set.py | 3 +-
.../{glsa/__test__ => lafilefixer/__test__.py} | 0
.../__test__ => lazyimport/__test__.py} | 0
.../{lazyimport/__test__ => lint/__test__.py} | 0
pym/portage/tests/lint/test_compile_modules.py | 10 +-
pym/portage/tests/lint/test_import_modules.py | 8 +-
.../tests/{lint/__test__ => locks/__test__.py} | 0
.../tests/{locks/__test__ => news/__test__.py} | 0
.../tests/{news/__test__ => process/__test__.py} | 0
.../{process/__test__ => repoman/__test__.py} | 0
pym/portage/tests/repoman/test_simple.py | 8 +-
pym/portage/tests/resolver/ResolverPlayground.py | 59 +-
.../{repoman/__test__ => resolver/__test__.py} | 0
.../tests/resolver/test_autounmask_use_breakage.py | 63 ++
pym/portage/tests/resolver/test_or_choices.py | 73 ++
...fied.py => test_slot_conflict_force_rebuild.py} | 56 +-
.../test_slot_conflict_unsatisfied_deep_deps.py | 115 +++
...nsatisfied.py => test_slot_operator_rebuild.py} | 52 +-
.../resolver/test_slot_operator_required_use.py | 72 ++
...test_solve_non_slot_operator_slot_conflicts.py} | 49 +-
pym/portage/tests/{runTests => runTests.py} | 0
.../{resolver/__test__ => sets/base/__test__.py} | 0
.../sets/{base/__test__ => files/__test__.py} | 0
.../sets/{files/__test__ => shell/__test__.py} | 0
.../{sets/shell/__test__ => unicode/__test__.py} | 0
.../tests/{unicode/__test__ => update/__test__.py} | 0
.../tests/{update/__test__ => util/__test__.py} | 0
pym/portage/tests/util/test_getconfig.py | 4 +-
.../tests/{util/__test__ => versions/__test__.py} | 0
pym/portage/tests/xpak/__test__ | 0
.../tests/{versions/__test__ => xpak/__test__.py} | 0
pym/portage/util/__init__.py | 3 +-
pym/portage/util/_eventloop/EventLoop.py | 8 +-
pym/portage/util/_eventloop/PollSelectAdapter.py | 6 +-
pym/repoman/checks.py | 16 -
runtests.sh | 8 +-
setup.py | 652 +++++++++++++++++
testpath | 11 +
153 files changed, 4118 insertions(+), 1920 deletions(-)
diff --cc bin/ebuild-helpers/emake
index 60286ec,4618053..dcb64a3
--- a/bin/ebuild-helpers/emake
+++ b/bin/ebuild-helpers/emake
@@@ -22,7 -22,7 +22,7 @@@ if [[ $PORTAGE_QUIET != 1 ]] ; the
) >&2
fi
- ${MAKE:-make} SHELL="${BASH:-/bin/bash}" ${MAKEOPTS} ${EXTRA_EMAKE} "$@"
-${MAKE:-make} ${MAKEOPTS} "$@" ${EXTRA_EMAKE}
++${MAKE:-make} SHELL="${BASH:-/bin/bash}" ${MAKEOPTS} "$@" ${EXTRA_EMAKE}
ret=$?
[[ $ret -ne 0 ]] && __helpers_die "${0##*/} failed"
exit $ret
diff --cc bin/install-qa-check.d/05prefix
index 0000000,e1fc2bd..32561e2
mode 000000,100644..100644
--- a/bin/install-qa-check.d/05prefix
+++ b/bin/install-qa-check.d/05prefix
@@@ -1,0 -1,117 +1,118 @@@
+ # Prefix specific QA checks
+
+ install_qa_check_prefix() {
+ [[ ${ED} == ${D} ]] && return
+
+ if [[ -d ${ED}/${D} ]] ; then
+ find "${ED}/${D}" | \
+ while read i ; do
+ eqawarn "QA Notice: /${i##${ED}/${D}} installed in \${ED}/\${D}"
+ done
+ die "Aborting due to QA concerns: files installed in ${ED}/${D}"
+ fi
+
+ if [[ -d ${ED}/${EPREFIX} ]] ; then
+ find "${ED}/${EPREFIX}/" | \
+ while read i ; do
+ eqawarn "QA Notice: ${i#${D}} double prefix"
+ done
+ die "Aborting due to QA concerns: double prefix files installed"
+ fi
+
+ if [[ -d ${D} ]] ; then
+ INSTALLTOD=$(find ${D%/} | egrep -v "^${ED}" | sed -e "s|^${D%/}||" | awk '{if (length($0) <= length("'"${EPREFIX}"'")) { if (substr("'"${EPREFIX}"'", 1, length($0)) != $0) {print $0;} } else if (substr($0, 1, length("'"${EPREFIX}"'")) != "'"${EPREFIX}"'") {print $0;} }')
+ if [[ -n ${INSTALLTOD} ]] ; then
+ eqawarn "QA Notice: the following files are outside of the prefix:"
+ eqawarn "${INSTALLTOD}"
+ die "Aborting due to QA concerns: there are files installed outside the prefix"
+ fi
+ fi
+
+ # all further checks rely on ${ED} existing
+ [[ -d ${ED} ]] || return
+
+ # check shebangs, bug #282539
+ rm -f "${T}"/non-prefix-shebangs-errs
+ local WHITELIST=" /usr/bin/env "
+ # this is hell expensive, but how else?
+ find "${ED}" -executable \! -type d -print0 \
+ | xargs -0 grep -H -n -m1 "^#!" \
+ | while read f ;
+ do
+ local fn=${f%%:*}
+ local pos=${f#*:} ; pos=${pos%:*}
+ local line=${f##*:}
+ # shebang always appears on the first line ;)
+ [[ ${pos} != 1 ]] && continue
+ local oldIFS=${IFS}
+ IFS=$'\r'$'\n'$'\t'" "
+ line=( ${line#"#!"} )
+ IFS=${oldIFS}
+ [[ ${WHITELIST} == *" ${line[0]} "* ]] && continue
+ local fp=${fn#${D}} ; fp=/${fp%/*}
+ # line[0] can be an absolutised path, bug #342929
+ local eprefix=$(canonicalize ${EPREFIX})
+ local rf=${fn}
+ # in case we deal with a symlink, make sure we don't replace it
+ # with a real file (sed -i does that)
+ if [[ -L ${fn} ]] ; then
+ rf=$(readlink ${fn})
+ [[ ${rf} != /* ]] && rf=${fn%/*}/${rf}
+ # ignore symlinks pointing to outside prefix
+ # as seen in sys-devel/native-cctools
+ [[ $(canonicalize "/${rf#${D}}") != ${eprefix}/* ]] && continue
+ fi
+ # does the shebang start with ${EPREFIX}, and does it exist?
+ if [[ ${line[0]} == ${EPREFIX}/* || ${line[0]} == ${eprefix}/* ]] ; then
+ if [[ ! -e ${ROOT%/}${line[0]} && ! -e ${D%/}${line[0]} ]] ; then
+ # hmm, refers explicitly to $EPREFIX, but doesn't exist,
+ # if it's in PATH that's wrong in any case
+ if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
+ echo "${fn#${D}}:${line[0]} (explicit EPREFIX but target not found)" \
+ >> "${T}"/non-prefix-shebangs-errs
+ else
+ eqawarn "${fn#${D}} has explicit EPREFIX in shebang but target not found (${line[0]})"
+ fi
+ fi
+ continue
+ fi
- # unprefixed shebang, is the script directly in $PATH?
- if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
++ # unprefixed shebang, is the script directly in $PATH or an init
++ # script?
++ if [[ ":${PATH}:${EPREFIX}/etc/init.d:" == *":${fp}:"* ]] ; then
+ if [[ -e ${EROOT}${line[0]} || -e ${ED}${line[0]} ]] ; then
+ # is it unprefixed, but we can just fix it because a
+ # prefixed variant exists
+ eqawarn "prefixing shebang of ${fn#${D}}"
+ # statement is made idempotent on purpose, because
+ # symlinks may point to the same target, and hence the
+ # same real file may be sedded multiple times since we
+ # read the shebangs in one go upfront for performance
+ # reasons
+ sed -i -e '1s:^#! \?'"${line[0]}"':#!'"${EPREFIX}"${line[0]}':' "${rf}"
+ continue
+ else
+ # this is definitely wrong: script in $PATH and invalid shebang
+ echo "${fn#${D}}:${line[0]} (script ${fn##*/} installed in PATH but interpreter ${line[0]} not found)" \
+ >> "${T}"/non-prefix-shebangs-errs
+ fi
+ else
+ # unprefixed/invalid shebang, but outside $PATH, this may be
+ # intended (e.g. config.guess) so remain silent by default
+ has stricter ${FEATURES} && \
+ eqawarn "invalid shebang in ${fn#${D}}: ${line[0]}"
+ fi
+ done
+ if [[ -e "${T}"/non-prefix-shebangs-errs ]] ; then
+ eqawarn "QA Notice: the following files use invalid (possible non-prefixed) shebangs:"
+ while read line ; do
+ eqawarn " ${line}"
+ done < "${T}"/non-prefix-shebangs-errs
+ rm -f "${T}"/non-prefix-shebangs-errs
+ die "Aborting due to QA concerns: invalid shebangs found"
+ fi
+ }
+
+ install_qa_check_prefix
+ : # guarantee successful exit
+
+ # vim:ft=sh
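The core decision the shebang pass above makes for every executable, condensed into a sketch; EPREFIX and the whitelist are illustrative assumptions, and the authoritative logic remains the bash function:

    import os

    EPREFIX = "/home/user/gentoo"     # assumed offset prefix

    def classify_shebang(first_line, image_root=""):
        if not first_line.startswith("#!"):
            return "no shebang"
        interp = first_line[2:].split()[0]
        if interp == "/usr/bin/env":
            return "whitelisted"
        if interp.startswith(EPREFIX + "/"):
            return "already prefixed"
        if os.path.exists(EPREFIX + interp) or os.path.exists(image_root + EPREFIX + interp):
            # a prefixed interpreter exists, so the shebang can be rewritten
            return "rewrite to " + EPREFIX + interp
        return "invalid: interpreter not found inside the prefix"

    print(classify_shebang("#!/bin/bash"))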
diff --cc bin/install-qa-check.d/80libraries
index 0000000,3977bae..c83f278
mode 000000,100644..100644
--- a/bin/install-qa-check.d/80libraries
+++ b/bin/install-qa-check.d/80libraries
@@@ -1,0 -1,158 +1,167 @@@
+ # Check for issues with installed libraries
+
+ lib_check() {
+ local f x i j
+
+ if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
+ # Check for shared libraries lacking SONAMEs
+ local qa_var="QA_SONAME_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] && QA_SONAME=(\"\${${qa_var}[@]}\")"
+ f=$(scanelf -ByF '%S %p' "${ED}"{,usr/}lib*/lib*.so* | awk '$2 == "" { print }' | sed -e "s:^[[:space:]]${ED}:/:")
+ if [[ -n ${f} ]] ; then
+ echo "${f}" > "${T}"/scanelf-missing-SONAME.log
+ if [[ "${QA_STRICT_SONAME-unset}" == unset ]] ; then
+ if [[ ${#QA_SONAME[@]} -gt 1 ]] ; then
+ for x in "${QA_SONAME[@]}" ; do
+ sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-SONAME.log
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_SONAME} ; do
+ sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-SONAME.log
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+ fi
+ sed -e "/^\$/d" -i "${T}"/scanelf-missing-SONAME.log
+ f=$(<"${T}"/scanelf-missing-SONAME.log)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ __vecho -ne '\n'
+ sleep 1
+ else
+ rm -f "${T}"/scanelf-missing-SONAME.log
+ fi
+ fi
+
+ # Check for shared libraries lacking NEEDED entries
+ qa_var="QA_DT_NEEDED_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] && QA_DT_NEEDED=(\"\${${qa_var}[@]}\")"
+ f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | awk '$2 == "" { print }' | sed -e "s:^[[:space:]]${ED}:/:")
+ if [[ -n ${f} ]] ; then
+ echo "${f}" > "${T}"/scanelf-missing-NEEDED.log
+ if [[ "${QA_STRICT_DT_NEEDED-unset}" == unset ]] ; then
+ if [[ ${#QA_DT_NEEDED[@]} -gt 1 ]] ; then
+ for x in "${QA_DT_NEEDED[@]}" ; do
+ sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-NEEDED.log
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_DT_NEEDED} ; do
+ sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-NEEDED.log
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+ fi
+ sed -e "/^\$/d" -i "${T}"/scanelf-missing-NEEDED.log
+ f=$(<"${T}"/scanelf-missing-NEEDED.log)
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ __vecho -ne '\n'
+ sleep 1
+ else
+ rm -f "${T}"/scanelf-missing-NEEDED.log
+ fi
+ fi
+ fi
+
+ # this should help to ensure that all (most?) shared libraries are executable
+ # and that all libtool scripts / static libraries are not executable
+ for i in "${ED}"opt/*/lib* \
+ "${ED}"lib* \
+ "${ED}"usr/lib* ; do
+ [[ ! -d ${i} ]] && continue
+
+ for j in "${i}"/*.so.* "${i}"/*.so ; do
+ [[ ! -e ${j} ]] && continue
+ [[ -L ${j} ]] && continue
+ [[ -x ${j} ]] && continue
+ __vecho "making executable: ${j#${ED}}"
+ chmod +x "${j}"
+ done
+
+ for j in "${i}"/*.a "${i}"/*.la ; do
+ [[ ! -e ${j} ]] && continue
+ [[ -L ${j} ]] && continue
+ [[ ! -x ${j} ]] && continue
+ __vecho "removing executable bit: ${j#${ED}}"
+ chmod -x "${j}"
+ done
+
+ for j in "${i}"/*.{a,dll,dylib,sl,so}.* "${i}"/*.{a,dll,dylib,sl,so} ; do
+ [[ ! -e ${j} ]] && continue
+ [[ ! -L ${j} ]] && continue
+ linkdest=$(readlink "${j}")
+ if [[ ${linkdest} == /* ]] ; then
+ __vecho -ne '\n'
+ eqawarn "QA Notice: Found an absolute symlink in a library directory:"
+ eqawarn " ${j#${D}} -> ${linkdest}"
+ eqawarn " It should be a relative symlink if in the same directory"
+ eqawarn " or a linker script if it crosses the /usr boundary."
+ fi
+ done
+ done
+
+ # When installing static libraries into /usr/lib and shared libraries into
+ # /lib, we have to make sure we have a linker script in /usr/lib along side
+ # the static library, or gcc will utilize the static lib when linking :(.
+ # http://bugs.gentoo.org/4411
+ local abort="no"
+ local a s
+ for a in "${ED}"usr/lib*/*.a ; do
- s=${a%.a}.so
++ # PREFIX LOCAL: support MachO objects
++ [[ ${CHOST} == *-darwin* ]] \
++ && s=${a%.a}.dylib \
++ || s=${a%.a}.so
++ # END PREFIX LOCAL
+ if [[ ! -e ${s} ]] ; then
+ s=${s%usr/*}${s##*/usr/}
+ if [[ -e ${s} ]] ; then
+ __vecho -ne '\n'
+ eqawarn "QA Notice: Missing gen_usr_ldscript for ${s##*/}"
+ abort="yes"
+ fi
+ fi
+ done
+ [[ ${abort} == "yes" ]] && die "add those ldscripts"
+
+ # Make sure people don't store libtool files or static libs in /lib
- f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
++ # PREFIX LOCAL: on AIX, "dynamic libs" have extension .a, so don't
++ # get false positives
++ [[ ${CHOST} == *-aix* ]] \
++ && f=$(ls "${ED}"lib*/*.la 2>/dev/null || true) \
++ || f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
++ # END PREFIX LOCAL
+ if [[ -n ${f} ]] ; then
+ __vecho -ne '\n'
+ eqawarn "QA Notice: Excessive files found in the / partition"
+ eqawarn "${f}"
+ __vecho -ne '\n'
+ die "static archives (*.a) and libtool library files (*.la) belong in /usr/lib*, not /lib*"
+ fi
+
+ # Verify that the libtool files don't contain bogus $D entries.
+ local abort=no gentoo_bug=no always_overflow=no
+ for a in "${ED}"usr/lib*/*.la ; do
+ s=${a##*/}
+ if grep -qs "${ED}" "${a}" ; then
+ __vecho -ne '\n'
+ eqawarn "QA Notice: ${s} appears to contain PORTAGE_TMPDIR paths"
+ abort="yes"
+ fi
+ done
+ [[ ${abort} == "yes" ]] && die "soiled libtool library files found"
+ }
+
+ lib_check
+ : # guarantee successful exit
+
+ # vim:ft=sh
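The gen_usr_ldscript check above in a nutshell: a static .a in /usr/lib whose shared counterpart lives in /lib needs a linker script next to the .a. A sketch, with the Darwin (.dylib) special case from the PREFIX LOCAL hunk; the image path is illustrative:

    import os

    def missing_ldscripts(image, chost="x86_64-pc-linux-gnu"):
        shared_ext = ".dylib" if "darwin" in chost else ".so"   # PREFIX LOCAL Mach-O case
        missing = []
        usr_lib = os.path.join(image, "usr/lib")
        if not os.path.isdir(usr_lib):
            return missing
        for name in os.listdir(usr_lib):
            if not name.endswith(".a"):
                continue
            shared = name[:-2] + shared_ext
            if (not os.path.exists(os.path.join(usr_lib, shared))
                    and os.path.exists(os.path.join(image, "lib", shared))):
                # shared lib in /lib, static lib in /usr/lib: ldscript needed
                missing.append(shared)
        return missing

    print(missing_ldscripts("/var/empty"))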
diff --cc bin/install-qa-check.d/80multilib-strict
index 0000000,f944be9..436932e
mode 000000,100644..100644
--- a/bin/install-qa-check.d/80multilib-strict
+++ b/bin/install-qa-check.d/80multilib-strict
@@@ -1,0 -1,50 +1,50 @@@
+ # Strict multilib directory checks
+ multilib_strict_check() {
+ if has multilib-strict ${FEATURES} && \
- [[ -x /usr/bin/file && -x /usr/bin/find ]] && \
++ [[ -x ${EPREFIX}/usr/bin/file && -x ${EPREFIX}/usr/bin/find ]] && \
+ [[ -n ${MULTILIB_STRICT_DIRS} && -n ${MULTILIB_STRICT_DENY} ]]
+ then
+ rm -f "${T}/multilib-strict.log"
+ local abort=no dir file
+ MULTILIB_STRICT_EXEMPT=$(echo ${MULTILIB_STRICT_EXEMPT} | sed -e 's:\([(|)]\):\\\1:g')
+ for dir in ${MULTILIB_STRICT_DIRS} ; do
+ [[ -d ${ED}/${dir} ]] || continue
+ for file in $(find ${ED}/${dir} -type f | grep -v "^${ED}/${dir}/${MULTILIB_STRICT_EXEMPT}"); do
+ if file ${file} | egrep -q "${MULTILIB_STRICT_DENY}" ; then
+ echo "${file#${ED}//}" >> "${T}/multilib-strict.log"
+ fi
+ done
+ done
+
+ if [[ -s ${T}/multilib-strict.log ]] ; then
+ if [[ ${#QA_MULTILIB_PATHS[@]} -eq 1 ]] ; then
+ local shopts=$-
+ set -o noglob
+ QA_MULTILIB_PATHS=(${QA_MULTILIB_PATHS})
+ set +o noglob
+ set -${shopts}
+ fi
+ if [ "${QA_STRICT_MULTILIB_PATHS-unset}" = unset ] ; then
+ local x
+ for x in "${QA_MULTILIB_PATHS[@]}" ; do
+ sed -e "s#^${x#/}\$##" -i "${T}/multilib-strict.log"
+ done
+ sed -e "/^\$/d" -i "${T}/multilib-strict.log"
+ fi
+ if [[ -s ${T}/multilib-strict.log ]] ; then
+ abort=yes
+ echo "Files matching a file type that is not allowed:"
+ while read -r ; do
+ echo " ${REPLY}"
+ done < "${T}/multilib-strict.log"
+ fi
+ fi
+
+ [[ ${abort} == yes ]] && die "multilib-strict check failed!"
+ fi
+ }
+
+ multilib_strict_check
+ : # guarantee successful exit
+
+ # vim:ft=sh
diff --cc bin/install-qa-check.d/90world-writable
index 0000000,771027e..635612d
mode 000000,100644..100644
--- a/bin/install-qa-check.d/90world-writable
+++ b/bin/install-qa-check.d/90world-writable
@@@ -1,0 -1,25 +1,27 @@@
+ # Check for world-writable files
+
+ world_writable_check() {
+ # Now we look for all world writable files.
- local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${ED}:- :")
++ # PREFIX LOCAL: keep offset prefix in the reported files
++ local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${D}:- :")
++ # END PREFIX LOCAL
+ if [[ -n ${unsafe_files} ]] ; then
+ __vecho "QA Security Notice: world writable file(s):"
+ __vecho "${unsafe_files}"
+ __vecho "- This may or may not be a security problem, most of the time it is one."
+ __vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
+ sleep 1
+ fi
+
- local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${ED}:/:")
++ local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${D}:/:")
+ if [[ -n ${unsafe_files} ]] ; then
+ eqawarn "QA Notice: Unsafe files detected (set*id and world writable)"
+ eqawarn "${unsafe_files}"
+ die "Unsafe files found in \${D}. Portage will not install them."
+ fi
+ }
+
+ world_writable_check
+ : # guarantee successful exit
+
+ # vim:ft=sh
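The two find(1) calls in the world-writable check above translate to the following sketch; the image path is illustrative:

    import os
    import stat

    def scan_world_writable(image):
        world_writable, unsafe = [], []
        for root, dirs, files in os.walk(image):
            for name in files:
                path = os.path.join(root, name)
                mode = os.lstat(path).st_mode
                if not stat.S_ISREG(mode) or not mode & stat.S_IWOTH:
                    continue
                world_writable.append(path)            # first find: -perm -2
                if mode & (stat.S_ISUID | stat.S_ISGID):
                    unsafe.append(path)                # second find: set*id too
        return world_writable, unsafe

    print(scan_world_writable("/var/empty"))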
diff --cc bin/misc-functions.sh
index d92103f,cc652a9..1904c25
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# Miscellaneous shell functions that make use of the ebuild env but don't need
@@@ -172,67 -168,34 +172,36 @@@ install_qa_check()
local EPREFIX= ED=${D}
fi
- cd "${ED}" || die "cd failed"
+ # PREFIX LOCAL: ED need not exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
- qa_var="QA_FLAGS_IGNORED_${ARCH/-/_}"
- eval "[[ -n \${!qa_var} ]] && QA_FLAGS_IGNORED=(\"\${${qa_var}[@]}\")"
- if [[ ${#QA_FLAGS_IGNORED[@]} -eq 1 ]] ; then
- local shopts=$-
- set -o noglob
- QA_FLAGS_IGNORED=(${QA_FLAGS_IGNORED})
- set +o noglob
- set -${shopts}
- fi
+ # Run QA checks from install-qa-check.d.
+ # Note: checks need to be run *before* stripping.
+ local f
+ # TODO: handle nullglob-like
+ for f in "${PORTAGE_BIN_PATH}"/install-qa-check.d/*; do
+ # Run in a subshell to treat it like external script,
+ # but use 'source' to pass all variables through.
+ (
+ source "${f}" || eerror "Post-install QA check ${f##*/} failed to run"
+ )
+ done
- # Check for files built without respecting *FLAGS. Note that
- # -frecord-gcc-switches must be in all *FLAGS variables, in
- # order to avoid false positive results here.
- # NOTE: This check must execute before prepall/prepstrip, since
- # prepstrip strips the .GCC.command.line sections.
- if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT} && \
- [[ "${CFLAGS}" == *-frecord-gcc-switches* ]] && \
- [[ "${CXXFLAGS}" == *-frecord-gcc-switches* ]] && \
- [[ "${FFLAGS}" == *-frecord-gcc-switches* ]] && \
- [[ "${FCFLAGS}" == *-frecord-gcc-switches* ]] ; then
- rm -f "${T}"/scanelf-ignored-CFLAGS.log
- for x in $(scanelf -qyRF '#k%p' -k '!.GCC.command.line' "${ED}") ; do
- # Separate out file types that are known to support
- # .GCC.command.line sections, using the `file` command
- # similar to how prepstrip uses it.
- f=$(file "${x}") || continue
- [[ -z ${f} ]] && continue
- if [[ ${f} == *"SB executable"* ||
- ${f} == *"SB shared object"* ]] ; then
- echo "${x}" >> "${T}"/scanelf-ignored-CFLAGS.log
+ # Run QA checks from repositories
+ # (yes, PORTAGE_ECLASS_LOCATIONS contains repo paths...)
+ local repo_location
+ for repo_location in "${PORTAGE_ECLASS_LOCATIONS[@]}"; do
+ for f in "${repo_location}"/metadata/install-qa-check.d/*; do
+ if [[ -f ${f} ]]; then
+ (
+ # allow inheriting eclasses
+ _IN_INSTALL_QA_CHECK=1
+ source "${f}" || eerror "Post-install QA check ${f##*/} failed to run"
+ )
fi
done
-
- if [[ -f "${T}"/scanelf-ignored-CFLAGS.log ]] ; then
-
- if [ "${QA_STRICT_FLAGS_IGNORED-unset}" = unset ] ; then
- for x in "${QA_FLAGS_IGNORED[@]}" ; do
- sed -e "s#^${x#/}\$##" -i "${T}"/scanelf-ignored-CFLAGS.log
- done
- fi
- # Filter anything under /usr/lib/debug/ in order to avoid
- # duplicate warnings for splitdebug files.
- sed -e "s#^usr/lib/debug/.*##" -e "/^\$/d" -e "s#^#/#" \
- -i "${T}"/scanelf-ignored-CFLAGS.log
- f=$(<"${T}"/scanelf-ignored-CFLAGS.log)
- if [[ -n ${f} ]] ; then
- __vecho -ne '\n'
- eqawarn "${BAD}QA Notice: Files built without respecting CFLAGS have been detected${NORMAL}"
- eqawarn " Please include the following list of files in your report:"
- eqawarn "${f}"
- __vecho -ne '\n'
- sleep 1
- else
- rm -f "${T}"/scanelf-ignored-CFLAGS.log
- fi
- fi
- fi
+ done
export STRIP_MASK
prepall
@@@ -240,327 -203,6 +209,39 @@@
ecompressdir --dequeue
ecompress --dequeue
- # Prefix specific checks
- [[ ${ED} != ${D} ]] && install_qa_check_prefix
-
- f=
- for x in etc/app-defaults usr/man usr/info usr/X11R6 usr/doc usr/locale ; do
- [[ -d ${ED}/$x ]] && f+=" $x\n"
- done
- if [[ -n $f ]] ; then
- eqawarn "QA Notice: This ebuild installs into the following deprecated directories:"
- eqawarn
- eqawarn "$f"
- fi
-
- # It's ok create these directories, but not to install into them. #493154
- # TODO: We should add var/lib to this list.
- f=
- for x in var/cache var/lock var/run run ; do
- if [[ ! -L ${ED}/${x} && -d ${ED}/${x} ]] ; then
- if [[ -z $(find "${ED}/${x}" -prune -empty) ]] ; then
- f+=$(cd "${ED}"; find "${x}" -printf ' %p\n')
- fi
- fi
- done
- if [[ -n ${f} ]] ; then
- eqawarn "QA Notice: This ebuild installs into paths that should be created at runtime."
- eqawarn " To fix, simply do not install into these directories. Instead, your package"
- eqawarn " should create dirs on the fly at runtime as needed via init scripts/etc..."
- eqawarn
- eqawarn "${f}"
- fi
-
- set +f
- f=
- for x in "${ED}etc/udev/rules.d/"* "${ED}lib"*"/udev/rules.d/"* ; do
- [[ -e ${x} ]] || continue
- [[ ${x} == ${ED}lib/udev/rules.d/* ]] && continue
- f+=" ${x#${ED}}\n"
- done
- if [[ -n $f ]] ; then
- eqawarn "QA Notice: udev rules should be installed in /lib/udev/rules.d:"
- eqawarn
- eqawarn "$f"
- fi
-
- # Now we look for all world writable files.
- # PREFIX LOCAL: keep offset in the paths
- local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${D}:- :")
- # END PREFIX LOCAL
- if [[ -n ${unsafe_files} ]] ; then
- __vecho "QA Security Notice: world writable file(s):"
- __vecho "${unsafe_files}"
- __vecho "- This may or may not be a security problem, most of the time it is one."
- __vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
- sleep 1
- fi
-
+ # PREFIX LOCAL:
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED we skip the QA checks there;
+ # the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here so that the diff with trunk remains merely
+ # offset and not out of order
+ install_qa_check_misc
+ # END PREFIX LOCAL
+}
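The CHOST dispatch introduced above, as a compact table; the patterns and helper names come from the hunk, while the fnmatch-based lookup is just an illustration:

    import fnmatch

    _dispatch = [
        ("*-darwin*", "install_qa_check_macho"),    # Mach-O platforms
        ("*-interix*", "install_qa_check_pecoff"),  # PECOFF platforms
        ("*-winnt*", "install_qa_check_pecoff"),
        ("*-aix*", "install_qa_check_xcoff"),       # XCOFF platforms
    ]

    def qa_check_for(chost):
        for pattern, helper in _dispatch:
            if fnmatch.fnmatch(chost, pattern):
                return helper
        return "install_qa_check_elf"               # the majority: ELF platforms

    assert qa_check_for("x86_64-apple-darwin17") == "install_qa_check_macho"
    assert qa_check_for("x86_64-pc-linux-gnu") == "install_qa_check_elf"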
+
+install_qa_check_elf() {
- if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
- local insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
- local x
-
- # display warnings when using stricter because we die afterwards
- if has stricter ${FEATURES} ; then
- unset PORTAGE_QUIET
- fi
-
- # Make sure we disallow insecure RUNPATH/RPATHs.
- # 1) References to PORTAGE_BUILDDIR are banned because it's a
- # security risk. We don't want to load files from a
- # temporary directory.
- # 2) If ROOT != "/", references to ROOT are banned because
- # that directory won't exist on the target system.
- # 3) Null paths are banned because the loader will search $PWD when
- # it finds null paths.
- local forbidden_dirs="${PORTAGE_BUILDDIR}"
- if [[ -n "${ROOT}" && "${ROOT}" != "/" ]]; then
- forbidden_dirs+=" ${ROOT}"
- fi
- local dir l rpath_files=$(scanelf -F '%F:%r' -qBR "${ED}")
- f=""
- for dir in ${forbidden_dirs}; do
- for l in $(echo "${rpath_files}" | grep -E ":${dir}|::|: "); do
- f+=" ${l%%:*}\n"
- if ! has stricter ${FEATURES}; then
- __vecho "Auto fixing rpaths for ${l%%:*}"
- TMPDIR="${dir}" scanelf -BXr "${l%%:*}" -o /dev/null
- fi
- done
- done
-
- # Reject set*id binaries with $ORIGIN in RPATH #260331
- x=$(
- find "${ED}" -type f \( -perm -u+s -o -perm -g+s \) -print0 | \
- xargs -0 scanelf -qyRF '%r %p' | grep '$ORIGIN'
- )
-
- # Print QA notice.
- if [[ -n ${f}${x} ]] ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: The following files contain insecure RUNPATHs"
- eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
- eqawarn " with the maintaining herd of the package."
- eqawarn "${f}${f:+${x:+\n}}${x}"
- __vecho -ne '\n'
- if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
- insecure_rpath=1
- fi
- fi
-
- # TEXTRELs are baaaaaaaad
- # Allow devs to mark things as ignorable ... e.g. things that are
- # binary-only and upstream isn't cooperating (nvidia-glx) ... we
- # allow ebuild authors to set QA_TEXTRELS_arch and QA_TEXTRELS ...
- # the former overrides the latter ... regexes allowed ! :)
- qa_var="QA_TEXTRELS_${ARCH/-/_}"
- [[ -n ${!qa_var} ]] && QA_TEXTRELS=${!qa_var}
- [[ -n ${QA_STRICT_TEXTRELS} ]] && QA_TEXTRELS=""
- export QA_TEXTRELS="${QA_TEXTRELS} lib*/modules/*.ko"
- f=$(scanelf -qyRF '%t %p' "${ED}" | grep -v 'usr/lib/debug/')
- if [[ -n ${f} ]] ; then
- scanelf -qyRAF '%T %p' "${PORTAGE_BUILDDIR}"/ &> "${T}"/scanelf-textrel.log
- __vecho -ne '\n'
- eqawarn "QA Notice: The following files contain runtime text relocations"
- eqawarn " Text relocations force the dynamic linker to perform extra"
- eqawarn " work at startup, waste system resources, and may pose a security"
- eqawarn " risk. On some architectures, the code may not even function"
- eqawarn " properly, if at all."
- eqawarn " For more information, see http://hardened.gentoo.org/pic-fix-guide.xml"
- eqawarn " Please include the following list of files in your report:"
- eqawarn "${f}"
- __vecho -ne '\n'
- die_msg="${die_msg} textrels,"
- sleep 1
- fi
-
- # Also, executable stacks only matter on linux (and just glibc atm ...)
- f=""
- case ${CTARGET:-${CHOST}} in
- *-linux-gnu*)
- # Check for files with executable stacks, but only on arches which
- # are supported at the moment. Keep this list in sync with
- # http://www.gentoo.org/proj/en/hardened/gnu-stack.xml (Arch Status)
- case ${CTARGET:-${CHOST}} in
- arm*|i?86*|ia64*|m68k*|s390*|sh*|x86_64*)
- # Allow devs to mark things as ignorable ... e.g. things
- # that are binary-only and upstream isn't cooperating ...
- # we allow ebuild authors to set QA_EXECSTACK_arch and
- # QA_EXECSTACK ... the former overrides the latter ...
- # regexes allowed ! :)
-
- qa_var="QA_EXECSTACK_${ARCH/-/_}"
- [[ -n ${!qa_var} ]] && QA_EXECSTACK=${!qa_var}
- [[ -n ${QA_STRICT_EXECSTACK} ]] && QA_EXECSTACK=""
- qa_var="QA_WX_LOAD_${ARCH/-/_}"
- [[ -n ${!qa_var} ]] && QA_WX_LOAD=${!qa_var}
- [[ -n ${QA_STRICT_WX_LOAD} ]] && QA_WX_LOAD=""
- export QA_EXECSTACK="${QA_EXECSTACK} lib*/modules/*.ko"
- export QA_WX_LOAD="${QA_WX_LOAD} lib*/modules/*.ko"
- f=$(scanelf -qyRAF '%e %p' "${ED}" | grep -v 'usr/lib/debug/')
- ;;
- esac
- ;;
- esac
- if [[ -n ${f} ]] ; then
- # One more pass to help devs track down the source
- scanelf -qyRAF '%e %p' "${PORTAGE_BUILDDIR}"/ &> "${T}"/scanelf-execstack.log
- __vecho -ne '\n'
- eqawarn "QA Notice: The following files contain writable and executable sections"
- eqawarn " Files with such sections will not work properly (or at all!) on some"
- eqawarn " architectures/operating systems. A bug should be filed at"
- eqawarn " http://bugs.gentoo.org/ to make sure the issue is fixed."
- eqawarn " For more information, see http://hardened.gentoo.org/gnu-stack.xml"
- eqawarn " Please include the following list of files in your report:"
- eqawarn " Note: Bugs should be filed for the respective maintainers"
- eqawarn " of the package in question and not hardened@g.o."
- eqawarn "${f}"
- __vecho -ne '\n'
- die_msg="${die_msg} execstacks"
- sleep 1
- fi
-
- # Check for files built without respecting LDFLAGS
- if [[ "${LDFLAGS}" == *,--hash-style=gnu* ]] && \
- ! has binchecks ${RESTRICT} ; then
- f=$(scanelf -qyRF '#k%p' -k .hash "${ED}")
- if [[ -n ${f} ]] ; then
- echo "${f}" > "${T}"/scanelf-ignored-LDFLAGS.log
- if [ "${QA_STRICT_FLAGS_IGNORED-unset}" = unset ] ; then
- for x in "${QA_FLAGS_IGNORED[@]}" ; do
- sed -e "s#^${x#/}\$##" -i "${T}"/scanelf-ignored-LDFLAGS.log
- done
- fi
- # Filter anything under /usr/lib/debug/ in order to avoid
- # duplicate warnings for splitdebug files.
- sed -e "s#^usr/lib/debug/.*##" -e "/^\$/d" -e "s#^#/#" \
- -i "${T}"/scanelf-ignored-LDFLAGS.log
- f=$(<"${T}"/scanelf-ignored-LDFLAGS.log)
- if [[ -n ${f} ]] ; then
- __vecho -ne '\n'
- eqawarn "${BAD}QA Notice: Files built without respecting LDFLAGS have been detected${NORMAL}"
- eqawarn " Please include the following list of files in your report:"
- eqawarn "${f}"
- __vecho -ne '\n'
- sleep 1
- else
- rm -f "${T}"/scanelf-ignored-LDFLAGS.log
- fi
- fi
- fi
-
- if [[ ${insecure_rpath} -eq 1 ]] ; then
- die "Aborting due to serious QA concerns with RUNPATH/RPATH"
- elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
- die "Aborting due to QA concerns: ${die_msg}"
- fi
-
- # Check for shared libraries lacking SONAMEs
- qa_var="QA_SONAME_${ARCH/-/_}"
- eval "[[ -n \${!qa_var} ]] && QA_SONAME=(\"\${${qa_var}[@]}\")"
- f=$(scanelf -ByF '%S %p' "${ED}"{,usr/}lib*/lib*.so* | awk '$2 == "" { print }' | sed -e "s:^[[:space:]]${ED}:/:")
- if [[ -n ${f} ]] ; then
- echo "${f}" > "${T}"/scanelf-missing-SONAME.log
- if [[ "${QA_STRICT_SONAME-unset}" == unset ]] ; then
- if [[ ${#QA_SONAME[@]} -gt 1 ]] ; then
- for x in "${QA_SONAME[@]}" ; do
- sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-SONAME.log
- done
- else
- local shopts=$-
- set -o noglob
- for x in ${QA_SONAME} ; do
- sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-SONAME.log
- done
- set +o noglob
- set -${shopts}
- fi
- fi
- sed -e "/^\$/d" -i "${T}"/scanelf-missing-SONAME.log
- f=$(<"${T}"/scanelf-missing-SONAME.log)
- if [[ -n ${f} ]] ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: The following shared libraries lack a SONAME"
- eqawarn "${f}"
- __vecho -ne '\n'
- sleep 1
- else
- rm -f "${T}"/scanelf-missing-SONAME.log
- fi
- fi
-
- # Check for shared libraries lacking NEEDED entries
- qa_var="QA_DT_NEEDED_${ARCH/-/_}"
- eval "[[ -n \${!qa_var} ]] && QA_DT_NEEDED=(\"\${${qa_var}[@]}\")"
- # PREFIX LOCAL: keep offset prefix in the recorded files
- f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | awk '$2 == "" { print }' | sed -e "s:^[[:space:]]${D}:/:")
- # END PREFIX LOCAL
- if [[ -n ${f} ]] ; then
- echo "${f}" > "${T}"/scanelf-missing-NEEDED.log
- if [[ "${QA_STRICT_DT_NEEDED-unset}" == unset ]] ; then
- if [[ ${#QA_DT_NEEDED[@]} -gt 1 ]] ; then
- for x in "${QA_DT_NEEDED[@]}" ; do
- sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-NEEDED.log
- done
- else
- local shopts=$-
- set -o noglob
- for x in ${QA_DT_NEEDED} ; do
- sed -e "s#^/${x#/}\$##" -i "${T}"/scanelf-missing-NEEDED.log
- done
- set +o noglob
- set -${shopts}
- fi
- fi
- sed -e "/^\$/d" -i "${T}"/scanelf-missing-NEEDED.log
- f=$(<"${T}"/scanelf-missing-NEEDED.log)
- if [[ -n ${f} ]] ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
- eqawarn "${f}"
- __vecho -ne '\n'
- sleep 1
- else
- rm -f "${T}"/scanelf-missing-NEEDED.log
- fi
- fi
-
- PORTAGE_QUIET=${tmp_quiet}
- fi
-
# Create NEEDED.ELF.2 regardless of RESTRICT=binchecks, since this info is
# too useful not to have (it's required for things like preserve-libs), and
# it's tempting for ebuild authors to set RESTRICT=binchecks for packages
@@@ -588,829 -230,11 +269,396 @@@
eqawarn "$(while read -r x; do x=${x#*;} ; x=${x%%;*} ; echo "${x#${EPREFIX}}" ; done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.ELF.2)"
fi
fi
+}
+install_qa_check_misc() {
- # PREFIX LOCAL: keep offset prefix in the reported files
- local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${D}:/:")
- # END PREFIX LOCAL
- if [[ -n ${unsafe_files} ]] ; then
- eqawarn "QA Notice: Unsafe files detected (set*id and world writable)"
- eqawarn "${unsafe_files}"
- die "Unsafe files found in \${D}. Portage will not install them."
- fi
-
- if [[ -d ${D%/}${D} ]] ; then
- local -i INSTALLTOD=0
- while read -r -d $'\0' i ; do
- eqawarn "QA Notice: /${i##${D%/}${D}} installed in \${D}/\${D}"
- ((INSTALLTOD++))
- done < <(find "${D%/}${D}" -print0)
- die "Aborting due to QA concerns: ${INSTALLTOD} files installed in ${D%/}${D}"
- fi
-
- # Sanity check syntax errors in init.d scripts
- local d
- for d in /etc/conf.d /etc/init.d ; do
- [[ -d ${ED}/${d} ]] || continue
- for i in "${ED}"/${d}/* ; do
- [[ -L ${i} ]] && continue
- # if empty conf.d/init.d dir exists (baselayout), then i will be "/etc/conf.d/*" and not exist
- [[ ! -e ${i} ]] && continue
- if [[ ${d} == /etc/init.d && ${i} != *.sh ]] ; then
- # skip non-shell-script for bug #451386
- [[ $(head -n1 "${i}") =~ ^#!.*[[:space:]/](runscript|sh)$ ]] || continue
- fi
- bash -n "${i}" || die "The init.d file has syntax errors: ${i}"
- done
- done
-
- local checkbashisms=$(type -P checkbashisms)
- if [[ -n ${checkbashisms} ]] ; then
- for d in /etc/init.d ; do
- [[ -d ${ED}${d} ]] || continue
- for i in "${ED}${d}"/* ; do
- [[ -e ${i} ]] || continue
- [[ -L ${i} ]] && continue
- f=$("${checkbashisms}" -f "${i}" 2>&1)
- [[ $? != 0 && -n ${f} ]] || continue
- eqawarn "QA Notice: shell script appears to use non-POSIX feature(s):"
- while read -r ;
- do eqawarn " ${REPLY}"
- done <<< "${f//${ED}}"
- done
- done
- fi
-
- # Look for leaking LDFLAGS into pkg-config files
- f=$(egrep -sH '^Libs.*-Wl,(-O[012]|--hash-style)' "${ED}"/usr/*/pkgconfig/*.pc)
- if [[ -n ${f} ]] ; then
- eqawarn "QA Notice: pkg-config files with wrong LDFLAGS detected:"
- eqawarn "${f//${D}}"
- fi
-
- # this should help to ensure that all (most?) shared libraries are executable
- # and that all libtool scripts / static libraries are not executable
- local j
- for i in "${ED}"opt/*/lib* \
- "${ED}"lib* \
- "${ED}"usr/lib* ; do
- [[ ! -d ${i} ]] && continue
-
- for j in "${i}"/*.so.* "${i}"/*.so "${i}"/*.dylib "${i}"/*.dll ; do
- [[ ! -e ${j} ]] && continue
- [[ -L ${j} ]] && continue
- [[ -x ${j} ]] && continue
- __vecho "making executable: ${j#${ED}}"
- chmod +x "${j}"
- done
-
- for j in "${i}"/*.a "${i}"/*.la ; do
- [[ ! -e ${j} ]] && continue
- [[ -L ${j} ]] && continue
- [[ ! -x ${j} ]] && continue
- __vecho "removing executable bit: ${j#${ED}}"
- chmod -x "${j}"
- done
-
- for j in "${i}"/*.{a,dll,dylib,sl,so}.* "${i}"/*.{a,dll,dylib,sl,so} ; do
- [[ ! -e ${j} ]] && continue
- [[ ! -L ${j} ]] && continue
- linkdest=$(readlink "${j}")
- if [[ ${linkdest} == /* ]] ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: Found an absolute symlink in a library directory:"
- eqawarn " ${j#${D}} -> ${linkdest}"
- eqawarn " It should be a relative symlink if in the same directory"
- eqawarn " or a linker script if it crosses the /usr boundary."
- fi
- done
- done
-
- # When installing static libraries into /usr/lib and shared libraries into
- # /lib, we have to make sure we have a linker script in /usr/lib along side
- # the static library, or gcc will utilize the static lib when linking :(.
- # http://bugs.gentoo.org/4411
- abort="no"
- local a s
- for a in "${ED}"usr/lib*/*.a ; do
- # PREFIX LOCAL: support MachO objects
- [[ ${CHOST} == *-darwin* ]] \
- && s=${a%.a}.dylib \
- || s=${a%.a}.so
- # END PREFIX LOCAL
- if [[ ! -e ${s} ]] ; then
- s=${s%usr/*}${s##*/usr/}
- if [[ -e ${s} ]] ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: Missing gen_usr_ldscript for ${s##*/}"
- abort="yes"
- fi
- fi
- done
- [[ ${abort} == "yes" ]] && die "add those ldscripts"
-
- # Make sure people don't store libtool files or static libs in /lib
- # PREFIX LOCAL: on AIX, "dynamic libs" have extension .a, so don't
- # get false positives
- [[ ${CHOST} == *-aix* ]] \
- && f=$(ls "${ED}"lib*/*.la 2>/dev/null || true) \
- || f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
- # END PREFIX LOCAL
- if [[ -n ${f} ]] ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: Excessive files found in the / partition"
- eqawarn "${f}"
- __vecho -ne '\n'
- die "static archives (*.a) and libtool library files (*.la) belong in /usr/lib*, not /lib*"
- fi
-
- # Verify that the libtool files don't contain bogus $D entries.
- local abort=no gentoo_bug=no always_overflow=no
- for a in "${ED}"usr/lib*/*.la ; do
- s=${a##*/}
- if grep -qs "${ED}" "${a}" ; then
- __vecho -ne '\n'
- eqawarn "QA Notice: ${s} appears to contain PORTAGE_TMPDIR paths"
- abort="yes"
- fi
- done
- [[ ${abort} == "yes" ]] && die "soiled libtool library files found"
-
- # Evaluate misc gcc warnings
- if [[ -n ${PORTAGE_LOG_FILE} && -r ${PORTAGE_LOG_FILE} ]] ; then
- # In debug mode, this variable definition and corresponding grep calls
- # will produce false positives if they're shown in the trace.
- local reset_debug=0
- if [[ ${-/x/} != $- ]] ; then
- set +x
- reset_debug=1
- fi
- local m msgs=(
- ": warning: dereferencing type-punned pointer will break strict-aliasing rules"
- ": warning: dereferencing pointer .* does break strict-aliasing rules"
- ": warning: implicit declaration of function"
- ": warning: incompatible implicit declaration of built-in function"
- ": warning: is used uninitialized in this function" # we'll ignore "may" and "might"
- ": warning: comparisons like X<=Y<=Z do not have their mathematical meaning"
- ": warning: null argument where non-null required"
- ": warning: array subscript is below array bounds"
- ": warning: array subscript is above array bounds"
- ": warning: attempt to free a non-heap object"
- ": warning: .* called with .*bigger.* than .* destination buffer"
- ": warning: call to .* will always overflow destination buffer"
- ": warning: assuming pointer wraparound does not occur when comparing"
- ": warning: hex escape sequence out of range"
- ": warning: [^ ]*-hand operand of comma .*has no effect"
- ": warning: converting to non-pointer type .* from NULL"
- ": warning: NULL used in arithmetic"
- ": warning: passing NULL to non-pointer argument"
- ": warning: the address of [^ ]* will always evaluate as"
- ": warning: the address of [^ ]* will never be NULL"
- ": warning: too few arguments for format"
- ": warning: reference to local variable .* returned"
- ": warning: returning reference to temporary"
- ": warning: function returns address of local variable"
- ": warning: .*\\[-Wsizeof-pointer-memaccess\\]"
- ": warning: .*\\[-Waggressive-loop-optimizations\\]"
- # this may be valid code :/
- #": warning: multi-character character constant"
- # need to check these two ...
- #": warning: assuming signed overflow does not occur when"
- #": warning: comparison with string literal results in unspecified behav"
- # yacc/lex likes to trigger this one
- #": warning: extra tokens at end of .* directive"
- # only gcc itself triggers this ?
- #": warning: .*noreturn.* function does return"
- # these throw false positives when 0 is used instead of NULL
- #": warning: missing sentinel in function call"
- #": warning: not enough variable arguments to fit a sentinel"
- )
- abort="no"
- i=0
- local grep_cmd=grep
- [[ $PORTAGE_LOG_FILE = *.gz ]] && grep_cmd=zgrep
- while [[ -n ${msgs[${i}]} ]] ; do
- m=${msgs[$((i++))]}
- # force C locale to work around slow unicode locales #160234
- f=$(LC_ALL=C $grep_cmd "${m}" "${PORTAGE_LOG_FILE}")
- if [[ -n ${f} ]] ; then
- abort="yes"
- # for now, don't make this fatal (see bug #337031)
- #case "$m" in
- # ": warning: call to .* will always overflow destination buffer") always_overflow=yes ;;
- #esac
- if [[ $always_overflow = yes ]] ; then
- eerror
- eerror "QA Notice: Package triggers severe warnings which indicate that it"
- eerror " may exhibit random runtime failures."
- eerror
- eerror "${f}"
- eerror
- eerror " Please file a bug about this at http://bugs.gentoo.org/"
- eerror " with the maintaining herd of the package."
- eerror
- else
- __vecho -ne '\n'
- eqawarn "QA Notice: Package triggers severe warnings which indicate that it"
- eqawarn " may exhibit random runtime failures."
- eqawarn "${f}"
- __vecho -ne '\n'
- fi
- fi
- done
- local cat_cmd=cat
- [[ $PORTAGE_LOG_FILE = *.gz ]] && cat_cmd=zcat
- [[ $reset_debug = 1 ]] && set -x
- # Use safe cwd, avoiding unsafe import for bug #469338.
- f=$(cd "${PORTAGE_PYM_PATH}" ; $cat_cmd "${PORTAGE_LOG_FILE}" | \
- "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/check-implicit-pointer-usage.py || die "check-implicit-pointer-usage.py failed")
- if [[ -n ${f} ]] ; then
-
- # In the future this will be a forced "die". In preparation,
- # increase the log level from "qa" to "eerror" so that people
- # are aware this is a problem that must be fixed asap.
-
- # just warn on 32bit hosts but bail on 64bit hosts
- case ${CHOST} in
- alpha*|hppa64*|ia64*|powerpc64*|mips64*|sparc64*|sparcv9*|x86_64*) gentoo_bug=yes ;;
- esac
-
- abort=yes
-
- if [[ $gentoo_bug = yes ]] ; then
- eerror
- eerror "QA Notice: Package triggers severe warnings which indicate that it"
- eerror " will almost certainly crash on 64bit architectures."
- eerror
- eerror "${f}"
- eerror
- eerror " Please file a bug about this at http://bugs.gentoo.org/"
- eerror " with the maintaining herd of the package."
- eerror
- else
- __vecho -ne '\n'
- eqawarn "QA Notice: Package triggers severe warnings which indicate that it"
- eqawarn " will almost certainly crash on 64bit architectures."
- eqawarn "${f}"
- __vecho -ne '\n'
- fi
-
- fi
- if [[ ${abort} == "yes" ]] ; then
- if [[ $gentoo_bug = yes || $always_overflow = yes ]] ; then
- die "install aborted due to severe warnings shown above"
- else
- echo "Please do not file a Gentoo bug and instead" \
- "report the above QA issues directly to the upstream" \
- "developers of this software." | fmt -w 70 | \
- while read -r line ; do eqawarn "${line}" ; done
- eqawarn "Homepage: ${HOMEPAGE}"
- has stricter ${FEATURES} && \
- die "install aborted due to severe warnings shown above"
- fi
- fi
- fi
-
# Portage regenerates this on the installed system.
rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
-
- if has multilib-strict ${FEATURES} && \
- [[ -x ${EPREFIX}/usr/bin/file && -x ${EPREFIX}/usr/bin/find ]] && \
- [[ -n ${MULTILIB_STRICT_DIRS} && -n ${MULTILIB_STRICT_DENY} ]]
- then
- rm -f "${T}/multilib-strict.log"
- local abort=no dir file
- MULTILIB_STRICT_EXEMPT=$(echo ${MULTILIB_STRICT_EXEMPT} | sed -e 's:\([(|)]\):\\\1:g')
- for dir in ${MULTILIB_STRICT_DIRS} ; do
- [[ -d ${ED}/${dir} ]] || continue
- for file in $(find ${ED}/${dir} -type f | grep -v "^${ED}/${dir}/${MULTILIB_STRICT_EXEMPT}"); do
- if file ${file} | egrep -q "${MULTILIB_STRICT_DENY}" ; then
- echo "${file#${ED}//}" >> "${T}/multilib-strict.log"
- fi
- done
- done
-
- if [[ -s ${T}/multilib-strict.log ]] ; then
- if [[ ${#QA_MULTILIB_PATHS[@]} -eq 1 ]] ; then
- local shopts=$-
- set -o noglob
- QA_MULTILIB_PATHS=(${QA_MULTILIB_PATHS})
- set +o noglob
- set -${shopts}
- fi
- if [ "${QA_STRICT_MULTILIB_PATHS-unset}" = unset ] ; then
- for x in "${QA_MULTILIB_PATHS[@]}" ; do
- sed -e "s#^${x#/}\$##" -i "${T}/multilib-strict.log"
- done
- sed -e "/^\$/d" -i "${T}/multilib-strict.log"
- fi
- if [[ -s ${T}/multilib-strict.log ]] ; then
- abort=yes
- echo "Files matching a file type that is not allowed:"
- while read -r ; do
- echo " ${REPLY}"
- done < "${T}/multilib-strict.log"
- fi
- fi
-
- [[ ${abort} == yes ]] && die "multilib-strict check failed!"
- fi
- }
-
- install_qa_check_prefix() {
- if [[ -d ${ED%/}/${D} ]] ; then
- find "${ED%/}/${D}" | \
- while read i ; do
- eqawarn "QA Notice: /${i##${ED%/}/${D}} installed in \${ED}/\${D}"
- done
- die "Aborting due to QA concerns: files installed in ${ED}/${D}"
- fi
-
- if [[ -d ${ED%/}/${EPREFIX} ]] ; then
- find "${ED%/}/${EPREFIX}/" | \
- while read i ; do
- eqawarn "QA Notice: ${i#${D}} double prefix"
- done
- die "Aborting due to QA concerns: double prefix files installed"
- fi
-
- if [[ -d ${D} ]] ; then
- INSTALLTOD=$(find ${D%/} | egrep -v "^${ED}" | sed -e "s|^${D%/}||" | awk '{if (length($0) <= length("'"${EPREFIX}"'")) { if (substr("'"${EPREFIX}"'", 1, length($0)) != $0) {print $0;} } else if (substr($0, 1, length("'"${EPREFIX}"'")) != "'"${EPREFIX}"'") {print $0;} }')
- if [[ -n ${INSTALLTOD} ]] ; then
- eqawarn "QA Notice: the following files are outside of the prefix:"
- eqawarn "${INSTALLTOD}"
- die "Aborting due to QA concerns: there are files installed outside the prefix"
- fi
- fi
-
- # all further checks rely on ${ED} existing
- [[ -d ${ED} ]] || return
-
- # check shebangs, bug #282539
- rm -f "${T}"/non-prefix-shebangs-errs
- local WHITELIST=" /usr/bin/env "
- # this is hell expensive, but how else?
- find "${ED}" -executable \! -type d -print0 \
- | xargs -0 grep -H -n -m1 "^#!" \
- | while read f ;
- do
- local fn=${f%%:*}
- local pos=${f#*:} ; pos=${pos%:*}
- local line=${f##*:}
- # shebang always appears on the first line ;)
- [[ ${pos} != 1 ]] && continue
- local oldIFS=${IFS}
- IFS=$'\r'$'\n'$'\t'" "
- line=( ${line#"#!"} )
- IFS=${oldIFS}
- [[ ${WHITELIST} == *" ${line[0]} "* ]] && continue
- local fp=${fn#${D}} ; fp=/${fp%/*}
- # line[0] can be an absolutised path, bug #342929
- local eprefix=$(canonicalize ${EPREFIX})
- local rf=${fn}
- # in case we deal with a symlink, make sure we don't replace it
- # with a real file (sed -i does that)
- if [[ -L ${fn} ]] ; then
- rf=$(readlink ${fn})
- [[ ${rf} != /* ]] && rf=${fn%/*}/${rf}
- # ignore symlinks pointing to outside prefix
- # as seen in sys-devel/native-cctools
- [[ $(canonicalize "/${rf#${D}}") != ${eprefix}/* ]] && continue
- fi
- # does the shebang start with ${EPREFIX}, and does it exist?
- if [[ ${line[0]} == ${EPREFIX}/* || ${line[0]} == ${eprefix}/* ]] ; then
- if [[ ! -e ${ROOT%/}${line[0]} && ! -e ${D%/}${line[0]} ]] ; then
- # hmm, refers explicitly to $EPREFIX, but doesn't exist,
- # if it's in PATH that's wrong in any case
- if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
- echo "${fn#${D}}:${line[0]} (explicit EPREFIX but target not found)" \
- >> "${T}"/non-prefix-shebangs-errs
- else
- eqawarn "${fn#${D}} has explicit EPREFIX in shebang but target not found (${line[0]})"
- fi
- fi
- continue
- fi
- # unprefixed shebang, is the script directly in $PATH or an init
- # script?
- if [[ ":${PATH}:${EPREFIX}/etc/init.d:" == *":${fp}:"* ]] ; then
- if [[ -e ${EROOT}${line[0]} || -e ${ED}${line[0]} ]] ; then
- # is it unprefixed, but we can just fix it because a
- # prefixed variant exists
- eqawarn "prefixing shebang of ${fn#${D}}"
- # statement is made idempotent on purpose, because
- # symlinks may point to the same target, and hence the
- # same real file may be sedded multiple times since we
- # read the shebangs in one go upfront for performance
- # reasons
- sed -i -e '1s:^#! \?'"${line[0]}"':#!'"${EPREFIX}"${line[0]}':' "${rf}"
- continue
- else
- # this is definitely wrong: script in $PATH and invalid shebang
- echo "${fn#${D}}:${line[0]} (script ${fn##*/} installed in PATH but interpreter ${line[0]} not found)" \
- >> "${T}"/non-prefix-shebangs-errs
- fi
- else
- # unprefixed/invalid shebang, but outside $PATH, this may be
- # intended (e.g. config.guess) so remain silent by default
- has stricter ${FEATURES} && \
- eqawarn "invalid shebang in ${fn#${D}}: ${line[0]}"
- fi
- done
- if [[ -e "${T}"/non-prefix-shebangs-errs ]] ; then
- eqawarn "QA Notice: the following files use invalid (possible non-prefixed) shebangs:"
- while read line ; do
- eqawarn " ${line}"
- done < "${T}"/non-prefix-shebangs-errs
- rm -f "${T}"/non-prefix-shebangs-errs
- die "Aborting due to QA concerns: invalid shebangs found"
- fi
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ install_name_is_relative() {
+ case $1 in
+ "@executable_path/"*) return 0 ;;
+ "@loader_path"/*) return 0 ;;
+ "@rpath/"*) return 0 ;;
+ *) return 1 ;;
+ esac
+ }
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ ignore=
+ qa_var="QA_IGNORE_INSTALL_NAME_FILES_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] &&
+ QA_IGNORE_INSTALL_NAME_FILES=(\"\${${qa_var}[@]}\")"
+
+ if [[ ${#QA_IGNORE_INSTALL_NAME_FILES[@]} -gt 1 ]] ; then
+ for x in "${QA_IGNORE_INSTALL_NAME_FILES[@]}" ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ else
+ local shopts=$-
+ set -o noglob
+ for x in ${QA_IGNORE_INSTALL_NAME_FILES} ; do
+ [[ ${obj##*/} == ${x} ]] && \
+ ignore=true
+ done
+ set +o noglob
+ set -${shopts}
+ fi
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if install_name_is_relative ${install_name} ; then
+ # try to locate the library in the installed image
+ local inpath=${install_name#@*/}
+ local libl
+ for libl in $(find "${ED}" -name "${inpath##*/}") ; do
+ if [[ ${libl} == */${inpath} ]] ; then
+ install_name=/${libl#${D}}
+ break
+ fi
+ done
+ fi
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ elif ! install_name_is_relative ${lib} && [[ ! -e ${lib} && ! -e ${D}${lib} ]] ; then
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ [[ -z ${ignore} ]] && touch "${T}"/.install_name_check_failed
+ fi
+ done
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed, and there may be plenty
+ # of possibilities by introducing one or the other cache!
+ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
+ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on aix,
+ # as there is nothing like "soname" on pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX does indicate this by needing either '.' or '..'
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
install_mask() {
local root="$1"
shift
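A rough illustration of the idempotent shebang rewrite done by install_qa_check_prefix above; the script name, interpreter and EPREFIX below are assumptions for the sketch, not taken from the commit:
    # hedged sketch: the sed only rewrites a still-unprefixed interpreter,
    # so applying it twice to the same file is harmless
    EPREFIX=/home/user/gentoo
    printf '#!/usr/bin/perl\nprint "hi\\n";\n' > t.sh
    line=/usr/bin/perl        # interpreter parsed from the shebang
    sed -i -e '1s:^#! \?'"${line}"':#!'"${EPREFIX}"${line}':' t.sh
    head -n1 t.sh             # -> #!/home/user/gentoo/usr/bin/perl
    sed -i -e '1s:^#! \?'"${line}"':#!'"${EPREFIX}"${line}':' t.sh
    head -n1 t.sh             # unchanged: the pattern only matches the bare path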
diff --cc bin/portageq
index ea9dfde,009f116..7b9e177
--- a/bin/portageq
+++ b/bin/portageq
@@@ -23,22 -23,22 +23,22 @@@ except KeyboardInterrupt
import os
import types
-if os.path.isfile(os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), ".portage_not_installed")):
- pym_paths = [os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "pym")]
- sys.path.insert(0, pym_paths[0])
+# for an explanation on this logic, see pym/_emerge/__init__.py
+if os.environ.__contains__("PORTAGE_PYTHONPATH"):
+ pym_path = os.environ["PORTAGE_PYTHONPATH"]
else:
- import distutils.sysconfig
- pym_paths = [os.path.join(distutils.sysconfig.get_python_lib(), x) for x in ("_emerge", "portage")]
+ pym_path = os.path.join(os.path.dirname(
+ os.path.dirname(os.path.realpath(__file__))), "pym")
- # Avoid sandbox violations after python upgrade.
+ # Avoid sandbox violations after Python upgrade.
if os.environ.get("SANDBOX_ON") == "1":
sandbox_write = os.environ.get("SANDBOX_WRITE", "").split(":")
- if pym_path not in sandbox_write:
- sandbox_write.append(pym_path)
- os.environ["SANDBOX_WRITE"] = \
- ":".join(filter(None, sandbox_write))
- del sandbox_write
+ for pym_path in pym_paths:
+ if pym_path not in sandbox_write:
+ sandbox_write.append(pym_path)
+ os.environ["SANDBOX_WRITE"] = ":".join(filter(None, sandbox_write))
+ del pym_path, sandbox_write
+ del pym_paths
- sys.path.insert(0, pym_path)
import portage
portage._internal_caller = True
from portage import os
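The PORTAGE_PYTHONPATH branch above lets the tools load their modules from an uninstalled checkout rather than deriving the path from the script location. A hedged usage sketch, with a placeholder checkout path:
    # hedged sketch: point portageq at the pym/ tree of a source checkout
    checkout=~/src/portage-prefix
    PORTAGE_PYTHONPATH="${checkout}/pym" \
        python "${checkout}/bin/portageq" envvar DISTDIR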
diff --cc pym/portage/const.py
index 89d7ee2,acb90f9..5f00fab
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -65,35 -58,23 +65,36 @@@ DEPCACHE_PATH = "/var/cache/
GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
# these variables are not used with target_root or config_root
+PORTAGE_BASE_PATH = PORTAGE_BASE
# NOTE: Use realpath(__file__) so that python module symlinks in site-packages
# are followed back to the real location of the whole portage installation.
+#PREFIX: below should work, but I'm not sure how it affects other places
- #PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(
- # __file__.rstrip("co")).split(os.sep)[:-3]))
+ # NOTE: Please keep PORTAGE_BASE_PATH in one line to help substitutions.
-PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
++#PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
- PORTAGE_PYM_PATH = PORTAGE_BASE_PATH + "/pym"
+ PORTAGE_PYM_PATH = os.path.realpath(os.path.join(__file__, '../..'))
LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale" # FIXME: not used
EBUILD_SH_BINARY = PORTAGE_BIN_PATH + "/ebuild.sh"
MISC_SH_BINARY = PORTAGE_BIN_PATH + "/misc-functions.sh"
-SANDBOX_BINARY = "/usr/bin/sandbox"
-FAKEROOT_BINARY = "/usr/bin/fakeroot"
-BASH_BINARY = "/bin/bash"
-MOVE_BINARY = "/bin/mv"
+SANDBOX_BINARY = EPREFIX + "/usr/bin/sandbox"
+FAKEROOT_BINARY = EPREFIX + "/usr/bin/fakeroot"
+BASH_BINARY = PORTAGE_BASH
+MOVE_BINARY = PORTAGE_MV
PRELINK_BINARY = "/usr/sbin/prelink"
+MACOSSANDBOX_BINARY = "/usr/bin/sandbox-exec"
+MACOSSANDBOX_PROFILE = '''(version 1)
+(allow default)
+(deny file-write*)
+(allow file-write*
+@@MACOSSANDBOX_PATHS@@)
+(allow file-write-data
+@@MACOSSANDBOX_PATHS_CONTENT_ONLY@@)'''
+
+PORTAGE_GROUPNAME = portagegroup
+PORTAGE_USERNAME = portageuser
INVALID_ENV_FILE = "/etc/spork/is/not/valid/profile.env"
+ MERGING_IDENTIFIER = "-MERGING-"
REPO_NAME_FILE = "repo_name"
REPO_NAME_LOC = "profiles" + "/" + REPO_NAME_FILE
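The @@MACOSSANDBOX_PATHS@@ and @@MACOSSANDBOX_PATHS_CONTENT_ONLY@@ markers in MACOSSANDBOX_PROFILE are placeholders that get filled with sandbox path filters before the profile text is handed to sandbox-exec (see the doebuild.py import hunk below). A hedged sketch of that substitution, using an abbreviated template and assumed paths:
    # hedged sketch, not portage code: fill the profile placeholders
    template='(allow file-write* @@MACOSSANDBOX_PATHS@@)'
    writable='(subpath "/tmp/portage") (literal "/dev/null")'
    profile=${template//@@MACOSSANDBOX_PATHS@@/${writable}}
    echo "${profile}"
    # the full, filled-in profile would then be passed along the lines of:
    #   sandbox-exec -p "${profile}" <build command>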
diff --cc pym/portage/dbapi/vartree.py
index 040b546,b46ba0b..deeb779
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -48,7 -46,7 +49,7 @@@ portage.proxy.lazyimport.lazyimport(glo
)
from portage.const import CACHE_PATH, CONFIG_MEMORY_FILE, \
- PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH, EPREFIX, EPREFIX_LSTRIP, BASH_BINARY
- MERGING_IDENTIFIER, PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH
++ MERGING_IDENTIFIER, PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH, EPREFIX, EPREFIX_LSTRIP, BASH_BINARY
from portage.dbapi import dbapi
from portage.exception import CommandNotFound, \
InvalidData, InvalidLocation, InvalidPackageName, \
diff --cc pym/portage/package/ebuild/config.py
index fb4956d,264ed8e..6e578a9
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@@ -37,10 -37,11 +37,11 @@@ from portage.dep import Atom, isvalidat
from portage.eapi import eapi_exports_AA, eapi_exports_merge_type, \
eapi_supports_prefix, eapi_exports_replace_vars, _get_eapi_attrs
from portage.env.loaders import KeyValuePairFileLoader
- from portage.exception import InvalidDependString, PortageException
+ from portage.exception import InvalidDependString, IsADirectory, \
+ PortageException
from portage.localization import _
from portage.output import colorize
-from portage.process import fakeroot_capable, sandbox_capable
+from portage.process import fakeroot_capable, sandbox_capable, macossandbox_capable
from portage.repository.config import load_repository_config
from portage.util import ensure_dirs, getconfig, grabdict, \
grabdict_package, grabfile, grabfile_package, LazyItemsDict, \
diff --cc pym/portage/package/ebuild/doebuild.py
index 3c2167a,d3e3f5a..8e55fe2
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -46,8 -45,7 +46,7 @@@ from portage import auxdbkeys, bsd_chfl
unmerge, _encodings, _os_merge, \
_shell_quote, _unicode_decode, _unicode_encode
from portage.const import EBUILD_SH_ENV_FILE, EBUILD_SH_ENV_DIR, \
- EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, \
- EPREFIX, MACOSSANDBOX_PROFILE
- EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, PORTAGE_PYM_PACKAGES
++ EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, PORTAGE_PYM_PACKAGES, EPREFIX, MACOSSANDBOX_PROFILE
from portage.data import portage_gid, portage_uid, secpass, \
uid, userpriv_groups
from portage.dbapi.porttree import _parse_uri_map
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-05-06 19:32 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-05-06 19:32 UTC (permalink / raw
To: gentoo-commits
commit: 46900b2cb1099c482740a4dd12d858fb735e1fc1
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue May 6 19:32:08 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue May 6 19:32:08 2014 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=46900b2c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
DEVELOPING | 97 ++++++++++------
RELEASE-NOTES | 19 ++-
bin/repoman | 9 ++
man/make.conf.5 | 127 ++++++++++-----------
pym/_emerge/EbuildBuild.py | 9 +-
pym/_emerge/TaskSequence.py | 25 +++-
pym/_emerge/actions.py | 5 +-
pym/_emerge/depgraph.py | 5 +-
pym/_emerge/main.py | 4 +
pym/_emerge/resolver/output.py | 5 +-
pym/_emerge/resolver/output_helpers.py | 20 +---
pym/_emerge/search.py | 10 +-
pym/portage/dbapi/vartree.py | 42 +++----
pym/portage/dep/dep_check.py | 14 ---
pym/portage/glsa.py | 1 +
pym/portage/localization.py | 12 ++
.../tests/resolver/test_slot_conflict_rebuild.py | 41 +++++++
pym/portage/util/writeable_check.py | 29 +++--
pym/repoman/checks.py | 10 --
pym/repoman/utilities.py | 2 +-
20 files changed, 296 insertions(+), 190 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-05-06 19:18 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-05-06 19:18 UTC (permalink / raw
To: gentoo-commits
commit: 8433914fa1bf95fe312b38b28dde33cd25998d88
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue May 6 19:18:12 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue May 6 19:18:12 2014 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=8433914f
tarball.sh: fix version tagging
---
tarball.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tarball.sh b/tarball.sh
index 910c1db..1dcfb5c 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -26,7 +26,7 @@ fi
install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
-sed -i -e '/^VERSION=/s/^.*$/VERSION="'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
+sed -i -e '/^VERSION\s*=/s/^.*$/VERSION = "'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/doc/fragment/version
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/{,ru/}*.[15]
sed -i -e "s/@version@/${V}/" ${DEST}/configure.ac
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-04-22 19:52 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-04-22 19:52 UTC (permalink / raw
To: gentoo-commits
commit: 0a8f92ef5e248e0726454769806c1d7b1a60bb67
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 22 19:51:31 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Apr 22 19:51:31 2014 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=0a8f92ef
Merge tag 'v2.2.10' into prefix
Conflicts:
pym/portage/dbapi/bintree.py
DEVELOPING | 3 +-
README | 11 +-
RELEASE-NOTES | 11 +-
bin/repoman | 19 ++-
man/emerge.1 | 11 +-
man/make.conf.5 | 2 +-
man/portage.5 | 78 ++----------
mkrelease.sh | 15 ++-
pym/_emerge/actions.py | 15 ++-
pym/_emerge/create_depgraph_params.py | 3 +-
pym/_emerge/depgraph.py | 101 +++++++++-------
pym/_emerge/help.py | 4 +-
pym/_emerge/main.py | 12 +-
pym/_emerge/resolver/slot_collision.py | 1 +
pym/_emerge/unmerge.py | 5 +-
pym/portage/dbapi/bintree.py | 5 +-
pym/portage/dbapi/vartree.py | 18 +--
pym/portage/emaint/main.py | 10 +-
pym/portage/package/ebuild/_config/MaskManager.py | 8 +-
pym/portage/package/ebuild/_config/UseManager.py | 21 +---
pym/portage/package/ebuild/fetch.py | 7 +-
pym/portage/package/ebuild/getmaskingreason.py | 6 +-
pym/portage/repository/config.py | 132 +++++++++------------
pym/portage/tests/emerge/test_simple.py | 1 +
pym/portage/tests/resolver/test_multirepo.py | 84 ++++++++++++-
pym/portage/tests/resolver/test_onlydeps.py | 34 ++++++
pym/portage/tests/resolver/test_slot_collisions.py | 33 +++++-
pym/portage/tests/resolver/test_useflags.py | 78 ++++++++++++
pym/repoman/utilities.py | 44 +++++++
runtests.sh | 4 +-
30 files changed, 515 insertions(+), 261 deletions(-)
diff --cc pym/portage/dbapi/bintree.py
index 50c7d9d,229ce3b..45e8614
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -27,8 -27,7 +27,8 @@@ from portage.const import CACHE_PAT
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
- PermissionDenied, PortageException
+ ParseError, PermissionDenied, PortageException
+from portage.const import EAPI
from portage.localization import _
from portage import _movefile
from portage import os
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-02-06 21:09 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-02-06 21:09 UTC (permalink / raw
To: gentoo-commits
commit: 8b858267c018f8fc8597405bc2484cbcf82b69a4
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 6 21:09:15 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 6 21:09:15 2014 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=8b858267
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/archive-conf
bin/binhost-snapshot
bin/check-implicit-pointer-usage.py
bin/chpathtool.py
bin/clean_locks
bin/dispatch-conf
bin/dohtml.py
bin/ebuild
bin/ebuild-helpers/prepstrip
bin/ebuild-ipc.py
bin/egencache
bin/emaint
bin/emerge
bin/emerge-webrsync
bin/emirrordist
bin/env-update
bin/filter-bash-environment.py
bin/fixpackages
bin/glsa-check
bin/install.py
bin/isolated-functions.sh
bin/lock-helper.py
bin/portageq
bin/quickpkg
bin/regenworld
bin/repoman
bin/save-ebuild-env.sh
bin/xattr-helper.py
bin/xpak-helper.py
misc/emerge-delta-webrsync
pym/portage/dispatch_conf.py
pym/portage/getbinpkg.py
pym/portage/tests/runTests
runtests.sh
tabcheck.py
doc/fragment/date => .portage_not_installed | 0
bin/archive-conf | 4 +-
bin/binhost-snapshot | 6 +-
bin/check-implicit-pointer-usage.py | 2 +-
bin/chpathtool.py | 4 +-
bin/clean_locks | 10 +-
bin/dispatch-conf | 4 +-
bin/dohtml.py | 4 +-
bin/ebuild | 4 +-
bin/ebuild-helpers/prepstrip | 2 +-
bin/ebuild-ipc.py | 6 +-
bin/egencache | 4 +-
bin/emaint | 4 +-
bin/emerge | 4 +-
bin/emerge-webrsync | 6 +-
bin/emirrordist | 4 +-
bin/env-update | 4 +-
bin/filter-bash-environment.py | 4 +-
bin/fixpackages | 4 +-
bin/glsa-check | 6 +-
bin/install.py | 6 +-
bin/isolated-functions.sh | 4 +-
bin/lock-helper.py | 4 +-
bin/portageq | 4 +-
bin/quickpkg | 4 +-
bin/regenworld | 4 +-
bin/repoman | 4 +-
bin/save-ebuild-env.sh | 4 +-
bin/xattr-helper.py | 4 +-
bin/xpak-helper.py | 4 +-
doc/config/sets.docbook | 5 +-
man/ebuild.5 | 28 +-
man/emerge.1 | 18 +-
man/make.conf.5 | 25 +-
man/portage.5 | 114 +++-
man/ru/ebuild.1 | 16 +-
misc/emerge-delta-webrsync | 4 +-
mkrelease.sh | 2 +-
pym/_emerge/MergeListItem.py | 14 +-
pym/_emerge/Package.py | 5 +-
pym/_emerge/Scheduler.py | 7 +-
pym/_emerge/actions.py | 10 +-
pym/_emerge/depgraph.py | 675 +++++++++++++++------
pym/_emerge/main.py | 4 +-
pym/_emerge/resolver/output.py | 109 +++-
pym/_emerge/resolver/output_helpers.py | 7 +-
pym/_emerge/resolver/package_tracker.py | 301 +++++++++
pym/_emerge/resolver/slot_collision.py | 141 +++--
pym/portage/__init__.py | 20 +-
pym/portage/_emirrordist/MirrorDistTask.py | 3 +-
pym/portage/_emirrordist/main.py | 20 +-
pym/portage/_global_updates.py | 224 ++++---
pym/portage/_selinux.py | 4 +-
pym/portage/_sets/base.py | 3 +-
pym/portage/cache/flat_hash.py | 3 +-
pym/portage/cache/fs_template.py | 3 +-
pym/portage/cache/metadata.py | 3 +-
pym/portage/cache/sqlite.py | 3 +-
pym/portage/cache/template.py | 3 +-
pym/portage/checksum.py | 12 +-
pym/portage/cvstree.py | 274 +++++----
pym/portage/data.py | 15 +-
pym/portage/dbapi/bintree.py | 3 +-
pym/portage/dbapi/porttree.py | 9 +-
pym/portage/dbapi/vartree.py | 35 +-
pym/portage/debug.py | 10 +-
pym/portage/dep/__init__.py | 3 +-
pym/portage/dispatch_conf.py | 334 +++++-----
pym/portage/eclass_cache.py | 14 +-
pym/portage/elog/__init__.py | 3 +-
pym/portage/elog/mod_echo.py | 3 +-
pym/portage/elog/mod_syslog.py | 13 +-
pym/portage/emaint/main.py | 13 +-
pym/portage/emaint/module.py | 8 +-
pym/portage/emaint/modules/binhost/binhost.py | 4 +-
pym/portage/exception.py | 13 +-
pym/portage/getbinpkg.py | 172 +++---
pym/portage/glsa.py | 4 +-
pym/portage/localization.py | 7 +-
pym/portage/locks.py | 44 +-
pym/portage/mail.py | 7 +-
pym/portage/manifest.py | 3 +-
pym/portage/output.py | 18 +-
.../package/ebuild/_config/LocationsManager.py | 4 +-
pym/portage/package/ebuild/_config/MaskManager.py | 10 +-
pym/portage/package/ebuild/_config/UseManager.py | 23 +-
pym/portage/package/ebuild/config.py | 18 +-
pym/portage/package/ebuild/getmaskingreason.py | 8 +-
pym/portage/package/ebuild/getmaskingstatus.py | 3 +-
pym/portage/process.py | 7 +-
pym/portage/proxy/lazyimport.py | 3 +-
pym/portage/repository/config.py | 150 +++--
pym/portage/tests/dbapi/test_portdb_cache.py | 12 +-
pym/portage/tests/dep/test_match_from_list.py | 3 +-
pym/portage/tests/ebuild/test_config.py | 3 +-
pym/portage/tests/emerge/test_emerge_slot_abi.py | 6 +-
pym/portage/tests/emerge/test_simple.py | 22 +-
pym/portage/tests/lint/test_compile_modules.py | 22 +-
pym/portage/tests/repoman/test_simple.py | 7 +-
pym/portage/tests/resolver/ResolverPlayground.py | 113 ++--
pym/portage/tests/resolver/test_backtracking.py | 13 +-
pym/portage/tests/resolver/test_blocker.py | 48 ++
pym/portage/tests/resolver/test_package_tracker.py | 261 ++++++++
pym/portage/tests/resolver/test_slot_collisions.py | 75 ++-
.../tests/resolver/test_slot_conflict_rebuild.py | 44 +-
pym/portage/tests/runTests | 4 +-
pym/portage/tests/unicode/test_string_format.py | 3 +-
pym/portage/tests/util/test_getconfig.py | 27 +-
pym/portage/tests/util/test_whirlpool.py | 4 +-
pym/portage/update.py | 3 +-
pym/portage/util/__init__.py | 9 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 3 +-
pym/portage/util/_urlopen.py | 3 +-
pym/portage/util/digraph.py | 12 +-
pym/portage/util/env_update.py | 3 +-
pym/portage/util/writeable_check.py | 79 +++
pym/portage/versions.py | 14 +-
pym/portage/xpak.py | 8 +-
pym/repoman/checks.py | 33 +-
runtests.sh | 4 +-
120 files changed, 2754 insertions(+), 1233 deletions(-)
diff --cc bin/archive-conf
index dc838e4,f73ca42..599e569
--- a/bin/archive-conf
+++ b/bin/archive-conf
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/binhost-snapshot
index 776686e,c2204f0..ce156a4
--- a/bin/binhost-snapshot
+++ b/bin/binhost-snapshot
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2010-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import io
diff --cc bin/check-implicit-pointer-usage.py
index 710a0b8,242436c..c6389c3
--- a/bin/check-implicit-pointer-usage.py
+++ b/bin/check-implicit-pointer-usage.py
@@@ -1,4 -1,4 +1,4 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
# Ripped from HP and updated from Debian
# Update by Gentoo to support unicode output
diff --cc bin/chpathtool.py
index f10d49e,6460662..7e9f260
--- a/bin/chpathtool.py
+++ b/bin/chpathtool.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2011-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2011-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
"""Helper tool for converting installed files to custom prefixes.
diff --cc bin/clean_locks
index be53bf0,3e969f2..73213b5
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/dispatch-conf
index 9da949c,4b0c0ac..a1fe206
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/dohtml.py
index c0a964c,5359f5e..3abb829
--- a/bin/dohtml.py
+++ b/bin/dohtml.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild
index 14d8ee6,8f4b103..bb0d79a
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/ebuild-helpers/prepstrip
index 0f065ae,2ef8a1a..1f714dd
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -1,8 -1,9 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
# avoid multiple calls to `has`. this creates things like:
# FEATURES_foo=false
diff --cc bin/ebuild-ipc.py
index a3432c8,00337ee..1c7290f
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2010-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# This is a helper which ebuild processes can use
diff --cc bin/egencache
index 6c63734,c14be93..4ec4108
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2009-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2009-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# unicode_literals for compat with TextIOWrapper in Python 2
diff --cc bin/emaint
index 9cac2d6,aeeb183..00711aa
--- a/bin/emaint
+++ b/bin/emaint
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 2005-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 2005-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
"""System health checks and maintenance utilities.
diff --cc bin/emerge
index 3049528,bb93d83..273c0db
--- a/bin/emerge
+++ b/bin/emerge
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2006-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2006-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/emerge-webrsync
index 6824086,2f0689c..340c359
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Author: Karl Trygve Kalleberg <karltk@gentoo.org>
# Rewritten from the old, Perl-based emerge-webrsync script
diff --cc bin/emirrordist
index 34f352b,0368eee..dbb971a
--- a/bin/emirrordist
+++ b/bin/emirrordist
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2013-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import sys
diff --cc bin/env-update
index 09ccb4d,7651ef9..aefbf21
--- a/bin/env-update
+++ b/bin/env-update
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/filter-bash-environment.py
index 2537855,a4cdc54..029e491
--- a/bin/filter-bash-environment.py
+++ b/bin/filter-bash-environment.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import codecs
diff --cc bin/fixpackages
index 0433d72,cec0030..a4e411d
--- a/bin/fixpackages
+++ b/bin/fixpackages
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/glsa-check
index a7e24b1,972679a..85d4c41
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2008-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2008-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/install.py
index 9bd38c7,3c5e0de..44fd102
--- a/bin/install.py
+++ b/bin/install.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2013-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os
diff --cc bin/isolated-functions.sh
index a0b054a,a22af57..195c879
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}/eapi.sh"
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}/eapi.sh"
# We need this next line for "die" and "assert". It expands
# It _must_ preceed all the calls to die and assert.
diff --cc bin/lock-helper.py
index 3a27e32,aa2dd60..427c138
--- a/bin/lock-helper.py
+++ b/bin/lock-helper.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2010-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os
diff --cc bin/portageq
index 3a96dd4,79818f6..ea9dfde
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function, unicode_literals
diff --cc bin/quickpkg
index 30bab2f,90277ad..d787ca1
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/regenworld
index 3416aff,32e8e5c..d3fb9ed
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/repoman
index eb64a1b,3504b6b..2619e7e
--- a/bin/repoman
+++ b/bin/repoman
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2013 Gentoo Foundation
-#!/usr/bin/python -bO
++#!@PREFIX_PORTAGE_PYTHON@ -bO
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Next to do: dep syntax checking in mask files
diff --cc bin/save-ebuild-env.sh
index fd9a016,98cff83..194e46e
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_PREFIX_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# @FUNCTION: __save_ebuild_env
diff --cc bin/xattr-helper.py
index 5680be3,ea83a5e..fdbed54
--- a/bin/xattr-helper.py
+++ b/bin/xattr-helper.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2012-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2012-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
"""Dump and restore extended attributes.
diff --cc bin/xpak-helper.py
index abe0acd,c4391cd..6d7759b
--- a/bin/xpak-helper.py
+++ b/bin/xpak-helper.py
@@@ -1,5 -1,5 +1,5 @@@
- #!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2009-2013 Gentoo Foundation
-#!/usr/bin/python -b
++#!@PREFIX_PORTAGE_PYTHON@ -b
+ # Copyright 2009-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import sys
diff --cc misc/emerge-delta-webrsync
index 66e9275,96564af..b31015e
--- a/misc/emerge-delta-webrsync
+++ b/misc/emerge-delta-webrsync
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2013 Gentoo Foundation
+ # Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Author: Brian Harring <ferringb@gentoo.org>, karltk@gentoo.org originally.
# Rewritten from the old, Perl-based emerge-webrsync script
diff --cc pym/portage/data.py
index 4c92b8d,54e3a8d..59ce1b5
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -13,12 -12,9 +13,12 @@@ portage.proxy.lazyimport.lazyimport(glo
)
from portage.localization import _
- ostype=platform.system()
+ ostype = platform.system()
userland = None
-if ostype == "DragonFly" or ostype.endswith("BSD"):
+# Prefix always has USERLAND=GNU, even on
+# FreeBSD, OpenBSD and Darwin (thank the lord!).
+# Hopefully this entire USERLAND hack can go once
+if EPREFIX == "" and (ostype == "DragonFly" or ostype.endswith("BSD")):
userland = "BSD"
else:
userland = "GNU"
diff --cc pym/portage/dbapi/vartree.py
index 2159028,b593365..0ff5f66
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -32,11 -32,9 +32,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.env_update:env_update',
'portage.util.listdir:dircache,listdir',
'portage.util.movefile:movefile',
+ 'portage.util.writeable_check:get_ro_checker',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.util._async.SchedulerInterface:SchedulerInterface',
'portage.util._eventloop.EventLoop:EventLoop',
'portage.util._eventloop.global_event_loop:global_event_loop',
diff --cc pym/portage/getbinpkg.py
index 7294898,14dc149..997cd2e
--- a/pym/portage/getbinpkg.py
+++ b/pym/portage/getbinpkg.py
@@@ -595,10 -591,10 +592,10 @@@ def dir_get_metadata(baseurl, conn=None
metadatafilename = os.path.join(cache_path, 'remote_metadata.pickle')
if makepickle is None:
- makepickle = "/var/cache/edb/metadata.idx.most_recent"
+ makepickle = CACHE_PATH+"/metadata.idx.most_recent"
try:
- conn, protocol, address, params, headers = create_conn(baseurl, conn)
+ conn = create_conn(baseurl, conn)[0]
except _all_errors as e:
# ftplib.FTP(host) can raise errors like this:
# socket.error: (111, 'Connection refused')
diff --cc pym/portage/tests/runTests
index 118e289,9c45276..0ad9715
--- a/pym/portage/tests/runTests
+++ b/pym/portage/tests/runTests
@@@ -1,6 -1,6 +1,6 @@@
- #!/usr/bin/env python
-#!/usr/bin/python -bWd
++#!@PREFIX_PORTAGE_PYTHON@ -bWd
# runTests.py -- Portage Unit Test Functionality
- # Copyright 2006-2013 Gentoo Foundation
+ # Copyright 2006-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os, sys
diff --cc runtests.sh
index a276cff,86d34b6..7f1233a
--- a/runtests.sh
+++ b/runtests.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!/usr/bin/env bash
- # Copyright 2010-2012 Gentoo Foundation
+ # Copyright 2010-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# These are the versions we care about. The rest are just "nice to have".
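The @PREFIX_PORTAGE_PYTHON@, @PORTAGE_BASH@ and @PORTAGE_BASE@ tokens in the shebangs and source lines above are build-time placeholders; presumably they are expanded to prefixed paths when the scripts are installed. A hedged sketch of such an expansion step, with assumed paths:
    # hedged sketch: expand the shebang placeholders at install time
    EPREFIX=/home/user/gentoo
    sed -e "s|@PREFIX_PORTAGE_PYTHON@|${EPREFIX}/usr/bin/python|g" \
        -e "s|@PORTAGE_BASH@|${EPREFIX}/bin/bash|g" \
        -e "s|@PORTAGE_BASE@|${EPREFIX}/usr/lib/portage|g" \
        bin/portageq > "${EPREFIX}/usr/lib/portage/bin/portageq"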
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2014-01-06 9:47 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2014-01-06 9:47 UTC (permalink / raw
To: gentoo-commits
commit: c228c4eb91cf5ce304718f73041a35e6d31f0bb1
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 6 09:46:45 2014 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jan 6 09:46:45 2014 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=c228c4eb
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/helper-functions.sh
pym/portage/const.py
DEVELOPING | 29 +-
RELEASE-NOTES | 19 +-
bin/chpathtool.py | 61 +++-
bin/ebuild | 6 +-
bin/ebuild-helpers/prepstrip | 13 +-
bin/ebuild.sh | 2 +-
bin/egencache | 138 +++++----
bin/emaint | 4 +-
bin/emerge | 4 +-
bin/helper-functions.sh | 56 ++--
bin/isolated-functions.sh | 9 +-
bin/misc-functions.sh | 22 +-
bin/phase-functions.sh | 2 +-
bin/phase-helpers.sh | 19 +-
bin/portageq | 56 ++--
bin/repoman | 277 +++++++++---------
bin/save-ebuild-env.sh | 3 +-
bin/xattr-helper.py | 112 ++++----
cnf/sets/portage.conf | 2 +-
man/ebuild.5 | 2 -
man/make.conf.5 | 3 +-
man/portage.5 | 104 +++++--
mkrelease.sh | 76 +++--
pym/_emerge/Binpkg.py | 1 +
pym/_emerge/BinpkgExtractorAsync.py | 15 +-
pym/_emerge/BlockerCache.py | 6 +-
pym/_emerge/EbuildExecuter.py | 11 +-
pym/_emerge/actions.py | 6 +-
pym/_emerge/countdown.py | 18 +-
pym/_emerge/depgraph.py | 311 ++++++++++++++++-----
pym/portage/_sets/__init__.py | 2 +-
pym/portage/const.py | 166 ++++++++---
pym/portage/dbapi/bintree.py | 7 +-
pym/portage/env/loaders.py | 26 +-
pym/portage/exception.py | 45 +--
pym/portage/output.py | 14 +-
pym/portage/package/ebuild/_config/MaskManager.py | 4 +-
pym/portage/tests/__init__.py | 54 +++-
pym/portage/tests/ebuild/test_config.py | 2 +-
.../tests/resolver/test_slot_conflict_rebuild.py | 261 ++++++++++++++++-
pym/portage/util/ExtractKernelVersion.py | 6 +-
pym/portage/util/SlotObject.py | 1 -
pym/portage/util/__init__.py | 112 ++++----
pym/portage/util/_info_files.py | 20 +-
pym/portage/util/_urlopen.py | 2 +-
pym/portage/util/digraph.py | 24 +-
pym/portage/util/env_update.py | 8 +-
pym/portage/util/lafilefixer.py | 10 +-
pym/portage/util/movefile.py | 34 +--
runtests.sh | 11 +
50 files changed, 1506 insertions(+), 690 deletions(-)
diff --cc bin/misc-functions.sh
index 83004df,5ccf7c2..9ce9df6
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc pym/_emerge/actions.py
index 663f1fd,9bb4774..30bb07d
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -38,9 -38,8 +38,9 @@@ from portage import o
from portage import shutil
from portage import eapi_is_supported, _encodings, _unicode_decode
from portage.cache.cache_errors import CacheError
+from portage.const import EPREFIX
from portage.const import GLOBAL_CONFIG_PATH, VCS_DIRS, _DEPCLEAN_LIB_CHECK_DEFAULT
- from portage.const import SUPPORTED_BINPKG_FORMATS
+ from portage.const import SUPPORTED_BINPKG_FORMATS, TIMESTAMP_FORMAT
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
from portage.dep import Atom
diff --cc pym/portage/const.py
index 1b96650,1785bff..89d7ee2
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -65,11 -58,10 +65,12 @@@ DEPCACHE_PATH = "/var/cache/
GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
# these variables are not used with target_root or config_root
+PORTAGE_BASE_PATH = PORTAGE_BASE
# NOTE: Use realpath(__file__) so that python module symlinks in site-packages
# are followed back to the real location of the whole portage installation.
-PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(
- __file__.rstrip("co")).split(os.sep)[:-3]))
+#PREFIX: below should work, but I'm not sure how it affects other places
- #PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
++#PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(
++# __file__.rstrip("co")).split(os.sep)[:-3]))
PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
PORTAGE_PYM_PATH = PORTAGE_BASE_PATH + "/pym"
LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale" # FIXME: not used
@@@ -182,9 -239,7 +259,9 @@@ MANIFEST2_IDENTIFIERS = ("AUX", "MIS
# a config instance (since it's possible to contruct a config instance with
# a different EPREFIX). Therefore, the EPREFIX constant should *NOT* be used
# in the definition of any other constants within this file.
-EPREFIX = ""
+# PREFIX LOCAL: rely on EPREFIX from autotools
- #EPREFIX=""
++#EPREFIX = ""
+# END PREFIX LOCAL
# pick up EPREFIX from the environment if set
if "PORTAGE_OVERRIDE_EPREFIX" in os.environ:
diff --cc pym/portage/util/_info_files.py
index a920636,fabf74b..de44b0f
--- a/pym/portage/util/_info_files.py
+++ b/pym/portage/util/_info_files.py
@@@ -13,13 -12,13 +13,13 @@@ from portage.const import EPREFI
def chk_updated_info_files(root, infodirs, prev_mtimes):
- if os.path.exists("/usr/bin/install-info"):
+ if os.path.exists(EPREFIX + "/usr/bin/install-info"):
out = portage.output.EOutput()
- regen_infodirs=[]
+ regen_infodirs = []
for z in infodirs:
- if z=='':
+ if z == '':
continue
- inforoot = portage.util.normalize_path(root + z)
+ inforoot = portage.util.normalize_path(root + EPREFIX + z)
if os.path.isdir(inforoot) and \
not [x for x in os.listdir(inforoot) \
if x.startswith('.keepinfodir')]:
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-09-24 17:29 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-09-24 17:29 UTC (permalink / raw
To: gentoo-commits
commit: 2e0635e2668a82203530442feaeeaab5cf9abf46
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 24 17:29:10 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Sep 24 17:29:10 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=2e0635e2
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
README
README | 53 +++++++++++++++++++++++++++++++++-
man/emerge.1 | 4 +++
mkrelease.sh | 2 +-
pym/_emerge/main.py | 1 +
pym/_emerge/resolver/slot_collision.py | 7 ++++-
pym/portage/repository/config.py | 13 +++++++--
6 files changed, 74 insertions(+), 6 deletions(-)
diff --cc README
index 50d66d0,6b102e5..669fbf8
--- a/README
+++ b/README
@@@ -1,6 -1,48 +1,57 @@@
+This is the prefix branch of portage, a branch that deals with portage
+setup as packagemanager for a given offset in the filesystem running
+with user privileges.
+
- If you are not looking for something Gentoo/Alt:Prefix like, then this
++If you are not looking for something Gentoo Prefix like, then this
+is not the right place.
++
++
++=======
+ About Portage
+ =============
+
+ Portage is a package management system based on ports collections. The
+ Package Manager Specification Project (PMS) standardises and documents the
+ behaviour of Portage so that the Portage tree can be used by other package
+ managers.
+
+
+ Dependencies
+ ============
+
+ Python and Bash should be the only hard dependencies. Python 2.6 is the
+ minimum supported version.
+
+
+ Licensing and Legalese
+ =======================
+
+ Portage is free software; you can redistribute it and/or
+ modify it under the terms of the GNU General Public License
+ version 2 as published by the Free Software Foundation.
+
+ Portage is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with Portage; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+
+
+ More information
+ ================
+
+ -DEVELOPING contains some code guidelines.
+ -LICENSE contains the GNU General Public License version 2.
+ -NEWS contains new features/major bug fixes for each version.
+ -RELEASE NOTES contains mainly upgrade information for each version.
+ -TEST-NOTES contains Portage unit test information.
+
+
+ Links
+ =====
+ Gentoo project page: <http://www.gentoo.org/proj/en/portage/>
+ PMS: <https://dev.gentoo.org/~ulm/pms/head/pms.html>
+ PMS git repository: <http://git.overlays.gentoo.org/gitweb/?p=proj/pms.git>
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-09-20 17:59 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-09-20 17:59 UTC (permalink / raw
To: gentoo-commits
commit: 7d148ae9aa6b84035d5550a41f7bf117a3f38b82
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 20 17:58:47 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Sep 20 17:58:47 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=7d148ae9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/phase-functions.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-09-18 18:34 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-09-18 18:34 UTC (permalink / raw
To: gentoo-commits
commit: 5dbe5a1c2a34989a3d00dfb3e6b929dffcf2e541
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 18 18:34:22 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Sep 18 18:34:22 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5dbe5a1c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/_emerge/SpawnProcess.py | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-09-13 18:02 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-09-13 18:02 UTC (permalink / raw
To: gentoo-commits
commit: 2c40532f5b2aedbe9ceaa4c4595fc905773f72a0
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 13 18:01:05 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Sep 13 18:01:05 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=2c40532f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/unprivileged/chown
NEWS | 5 +
bin/archive-conf | 26 +----
bin/dohtml.py | 47 ++++++--
bin/eapi.sh | 4 +
bin/ebuild-helpers/doexe | 3 +-
bin/ebuild-helpers/doins | 3 +-
bin/ebuild-helpers/keepdir | 20 ++++
bin/ebuild-helpers/newins | 10 +-
bin/ebuild-helpers/unprivileged/chown | 10 +-
bin/ebuild.sh | 2 +-
bin/egencache | 15 ++-
bin/install.py | 5 +
bin/phase-helpers.sh | 116 ++++++++++++++-----
bin/repoman | 32 +++++-
bin/save-ebuild-env.sh | 2 +-
doc/package/ebuild/eapi/4-python.docbook | 1 -
doc/package/ebuild/eapi/5-progress.docbook | 7 +-
man/make.conf.5 | 27 ++++-
man/portage.5 | 13 ++-
man/repoman.1 | 9 +-
man/ru/ebuild.1 | 4 +-
pym/_emerge/AbstractEbuildProcess.py | 39 +++++++
pym/_emerge/AsynchronousLock.py | 21 ++--
pym/_emerge/EbuildMetadataPhase.py | 20 ++--
pym/_emerge/FakeVartree.py | 11 +-
pym/_emerge/FifoIpcDaemon.py | 16 ++-
pym/_emerge/PipeReader.py | 25 ++--
pym/_emerge/SpawnProcess.py | 62 ++++++++--
pym/_emerge/SubProcess.py | 19 +++-
pym/_emerge/actions.py | 75 ++++++++----
pym/portage/__init__.py | 18 ---
pym/portage/_emirrordist/main.py | 15 ++-
pym/portage/cache/fs_template.py | 5 +-
pym/portage/const.py | 10 +-
pym/portage/data.py | 2 +-
pym/portage/dbapi/_MergeProcess.py | 21 ++--
pym/portage/dbapi/vartree.py | 4 +-
pym/portage/dep/dep_check.py | 31 +++--
pym/portage/locks.py | 23 ++--
pym/portage/package/ebuild/doebuild.py | 57 +++++++++-
pym/portage/process.py | 126 +++++++++++++++++++--
pym/portage/repository/config.py | 12 +-
pym/portage/tests/process/test_poll.py | 36 +++---
.../tests/resolver/test_autounmask_multilib_use.py | 85 ++++++++++++++
pym/portage/tests/resolver/test_or_choices.py | 55 +++++++++
pym/portage/util/__init__.py | 47 +++++---
pym/portage/util/_async/PipeLogger.py | 21 ++--
pym/portage/util/_ctypes.py | 2 +-
pym/portage/util/_desktop_entry.py | 14 +++
pym/portage/util/_eventloop/EventLoop.py | 29 +++--
pym/portage/util/movefile.py | 14 ++-
pym/repoman/checks.py | 63 ++++++-----
52 files changed, 1015 insertions(+), 324 deletions(-)
diff --cc bin/ebuild-helpers/unprivileged/chown
index 2a60b58,08fa650..86b87c2
--- a/bin/ebuild-helpers/unprivileged/chown
+++ b/bin/ebuild-helpers/unprivileged/chown
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2012 Gentoo Foundation
+ # Copyright 2012-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
scriptpath=${BASH_SOURCE[0]}
@@@ -13,7 -13,15 +13,15 @@@ for path in ${PATH}; d
IFS=$' \t\n'
output=$("${path}/${scriptname}" "$@" 2>&1)
if [[ $? -ne 0 ]] ; then
+
+ # Avoid an extreme performance problem when the
+ # output is very long (bug #470992).
+ if [[ $(wc -l <<< "${output}") -gt 100 ]]; then
+ output=$(head -n100 <<< "${output}")
+ output="${output}\n ... (further messages truncated)"
+ fi
+
- source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+ source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if ! ___eapi_has_prefix_variables; then
EPREFIX=
diff --cc pym/portage/process.py
index 0876e4c,9ae7a55..439b6c9
--- a/pym/portage/process.py
+++ b/pym/portage/process.py
@@@ -19,8 -22,9 +22,9 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util:dump_traceback,writemsg',
)
-from portage.const import BASH_BINARY, SANDBOX_BINARY, FAKEROOT_BINARY
+from portage.const import BASH_BINARY, SANDBOX_BINARY, MACOSSANDBOX_BINARY, FAKEROOT_BINARY
from portage.exception import CommandNotFound
+ from portage.util._ctypes import find_library, LoadLibrary, ctypes
try:
import resource
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-08-10 20:54 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-08-10 20:54 UTC (permalink / raw
To: gentoo-commits
commit: 13777477df280aa108a070e1dd6fd25aa1703ad8
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 10 20:52:25 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Aug 10 20:52:25 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=13777477
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/bashrc-functions.sh
bin/ebuild-helpers/dohtml
bin/ebuild-helpers/portageq
bin/ebuild-helpers/prepstrip
bin/ebuild-ipc
bin/emerge-webrsync
bin/filter-bash-environment.py
bin/isolated-functions.sh
bin/misc-functions.sh
bin/save-ebuild-env.sh
cnf/make.globals
man/emerge.1
pym/_emerge/actions.py
pym/_emerge/main.py
pym/portage/dbapi/bintree.py
Makefile | 14 +-
RELEASE-NOTES | 8 +
bin/bashrc-functions.sh | 4 +-
bin/binhost-snapshot | 13 +-
bin/chpathtool.py | 7 +-
bin/ebuild | 53 ++-
bin/ebuild-helpers/dohtml | 2 +-
bin/ebuild-helpers/portageq | 2 +-
bin/ebuild-helpers/prepstrip | 5 +-
bin/ebuild-helpers/xattr/install | 2 +-
bin/ebuild-ipc | 2 +-
bin/ebuild.sh | 36 +-
bin/egencache | 106 +++---
bin/emerge-webrsync | 63 ++--
bin/filter-bash-environment.py | 18 +-
bin/fixpackages | 8 +
bin/glsa-check | 61 ++--
bin/install.py | 20 +-
bin/isolated-functions.sh | 55 ++--
bin/misc-functions.sh | 4 +-
bin/phase-functions.sh | 7 +-
bin/portageq | 236 +++++++++-----
bin/quickpkg | 18 +-
bin/repoman | 139 ++++----
bin/save-ebuild-env.sh | 4 +-
bin/xattr-helper.py | 16 +-
bin/xpak-helper.py | 6 +-
cnf/{make.conf => make.conf.example} | 2 +-
...onf.alpha.diff => make.conf.example.alpha.diff} | 18 +-
...fbsd.diff => make.conf.example.amd64-fbsd.diff} | 18 +-
...onf.amd64.diff => make.conf.example.amd64.diff} | 18 +-
...ke.conf.arm.diff => make.conf.example.arm.diff} | 12 +-
....conf.hppa.diff => make.conf.example.hppa.diff} | 28 +-
....conf.ia64.diff => make.conf.example.ia64.diff} | 10 +-
....conf.m68k.diff => make.conf.example.m68k.diff} | 14 +-
....conf.mips.diff => make.conf.example.mips.diff} | 18 +-
...ke.conf.ppc.diff => make.conf.example.ppc.diff} | 26 +-
...onf.ppc64.diff => make.conf.example.ppc64.diff} | 24 +-
....conf.s390.diff => make.conf.example.s390.diff} | 10 +-
...make.conf.sh.diff => make.conf.example.sh.diff} | 17 +-
...fbsd.diff => make.conf.example.sparc-fbsd.diff} | 12 +-
...onf.sparc.diff => make.conf.example.sparc.diff} | 18 +-
...6-fbsd.diff => make.conf.example.x86-fbsd.diff} | 18 +-
...ke.conf.x86.diff => make.conf.example.x86.diff} | 18 +-
cnf/make.globals | 22 +-
cnf/repos.conf | 7 +
make.conf.example-repatch.sh | 41 +++
man/color.map.5 | 16 +-
man/dispatch-conf.1 | 2 +-
man/ebuild.1 | 18 +-
man/ebuild.5 | 86 ++---
man/egencache.1 | 23 +-
man/emaint.1 | 15 +-
man/emerge.1 | 242 +++++++-------
man/emirrordist.1 | 17 +-
man/env-update.1 | 9 +-
man/etc-update.1 | 2 +-
man/make.conf.5 | 121 ++++---
man/portage.5 | 338 +++++++++++++------
man/quickpkg.1 | 14 +-
man/repoman.1 | 46 ++-
man/ru/color.map.5 | 6 +-
man/xpak.5 | 5 +-
misc/emerge-delta-webrsync | 50 ++-
pym/_emerge/Binpkg.py | 6 +-
pym/_emerge/EbuildBuild.py | 18 +-
pym/_emerge/actions.py | 361 ++++++++++-----------
pym/_emerge/depgraph.py | 78 +++--
pym/_emerge/main.py | 61 +---
pym/portage/__init__.py | 9 +-
pym/portage/_emirrordist/main.py | 69 ++--
pym/portage/cache/sqlite.py | 12 +-
pym/portage/const.py | 4 +
pym/portage/dbapi/bintree.py | 15 +-
pym/portage/dbapi/vartree.py | 2 +-
pym/portage/dep/dep_check.py | 35 +-
pym/portage/eclass_cache.py | 14 +-
pym/portage/emaint/defaults.py | 11 +-
pym/portage/emaint/main.py | 134 ++++----
pym/portage/emaint/modules/logs/__init__.py | 12 +-
.../package/ebuild/_config/special_env_vars.py | 16 +-
pym/portage/package/ebuild/config.py | 98 +++---
pym/portage/package/ebuild/doebuild.py | 3 +-
pym/portage/package/ebuild/fetch.py | 4 +-
pym/portage/package/ebuild/getmaskingreason.py | 12 +-
pym/portage/process.py | 4 +
pym/portage/repository/config.py | 234 +++++++++----
pym/portage/tests/__init__.py | 8 +-
pym/portage/tests/dbapi/test_fakedbapi.py | 10 +-
pym/portage/tests/dbapi/test_portdb_cache.py | 11 +-
pym/portage/tests/ebuild/test_config.py | 20 +-
pym/portage/tests/ebuild/test_doebuild_fd_pipes.py | 16 +
pym/portage/tests/emerge/test_emerge_slot_abi.py | 4 +-
pym/portage/tests/emerge/test_simple.py | 13 +-
pym/portage/tests/repoman/test_simple.py | 26 +-
pym/portage/tests/resolver/ResolverPlayground.py | 47 +--
pym/portage/tests/resolver/test_backtracking.py | 31 --
pym/portage/tests/resolver/test_or_choices.py | 79 +++++
.../resolver/test_slot_conflict_mask_update.py | 41 +++
.../tests/resolver/test_slot_conflict_update.py | 98 ++++++
pym/portage/tests/update/test_move_ent.py | 6 +-
pym/portage/tests/update/test_move_slot_ent.py | 6 +-
pym/portage/tests/update/test_update_dbentry.py | 6 +-
pym/portage/util/_argparse.py | 42 +++
pym/portage/util/movefile.py | 2 +-
pym/repoman/checks.py | 4 +-
pym/repoman/errors.py | 2 +-
pym/repoman/utilities.py | 3 +-
108 files changed, 2371 insertions(+), 1586 deletions(-)
diff --cc bin/bashrc-functions.sh
index 32dab95,503b172..69a5eb9
--- a/bin/bashrc-functions.sh
+++ b/bin/bashrc-functions.sh
@@@ -1,10 -1,10 +1,10 @@@
-#!/bin/bash
+#!@PREFIX_PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
portageq() {
- PYTHONPATH=${PORTAGE_PYM_PATH}${PYTHONPATH:+:}${PYTHONPATH} \
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}}\
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}/portageq" "$@"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" "$@"
}
register_die_hook() {
diff --cc bin/ebuild-helpers/dohtml
index b31d45f,75d3d00..70cb1f4
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -9,8 -9,8 +9,8 @@@ PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@P
# Use safe cwd, avoiding unsafe import for bug #469338.
export __PORTAGE_HELPER_CWD=${PWD}
cd "${PORTAGE_PYM_PATH}"
- PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
ret=$?
# Restore cwd for display by __helpers_die
diff --cc bin/ebuild-helpers/portageq
index 6ba2d88,b67b03f..2c7c428
--- a/bin/ebuild-helpers/portageq
+++ b/bin/ebuild-helpers/portageq
@@@ -2,9 -2,9 +2,9 @@@
# Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
+PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
# Use safe cwd, avoiding unsafe import for bug #469338.
cd "${PORTAGE_PYM_PATH}"
- PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/portageq" "$@"
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/portageq" "$@"
diff --cc bin/ebuild-helpers/prepstrip
index 834b0fe,9b2d47c..35c84d5
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -1,8 -1,9 +1,9 @@@
-#!/bin/bash
++<<<<<<< HEAD
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
# avoid multiple calls to `has`. this creates things like:
# FEATURES_foo=false
diff --cc bin/ebuild-ipc
index 7839195,820005f..176e6ed
--- a/bin/ebuild-ipc
+++ b/bin/ebuild-ipc
@@@ -6,5 -6,5 +6,5 @@@ PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/u
PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
# Use safe cwd, avoiding unsafe import for bug #469338.
cd "${PORTAGE_PYM_PATH}"
- PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/ebuild-ipc.py" "$@"
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/ebuild-ipc.py" "$@"
diff --cc bin/emerge-webrsync
index 55aba47,85730a2..df855de
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -1,6 -1,5 +1,6 @@@
+#!@PORTAGE_BASH@
#!/bin/bash
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Author: Karl Trygve Kalleberg <karltk@gentoo.org>
# Rewritten from the old, Perl-based emerge-webrsync script
@@@ -40,20 -39,24 +40,30 @@@ els
eecho "could not find 'portageq'; aborting"
exit 1
fi
- eval $("${portageq}" envvar -v DISTDIR EPREFIX FEATURES \
+ eval "$("${portageq}" envvar -v DISTDIR EPREFIX FEATURES \
FETCHCOMMAND GENTOO_MIRRORS \
PORTAGE_BIN_PATH PORTAGE_CONFIGROOT PORTAGE_GPG_DIR \
- PORTAGE_NICENESS PORTAGE_RSYNC_EXTRA_OPTS \
- PORTAGE_RSYNC_OPTS PORTAGE_TMPDIR PORTDIR \
- SYNC USERLAND http_proxy ftp_proxy \
+ PORTAGE_NICENESS PORTAGE_REPOSITORIES PORTAGE_RSYNC_EXTRA_OPTS \
+ PORTAGE_RSYNC_OPTS PORTAGE_TMPDIR \
- USERLAND http_proxy ftp_proxy)"
++ USERLAND http_proxy ftp_proxy \
+ PORTAGE_USER PORTAGE_GROUP)
export http_proxy ftp_proxy
+# PREFIX LOCAL: use Prefix servers, just because we want this and infra
+# can't support us yet
+GENTOO_MIRRORS="http://gentoo-mirror1.prefix.freens.org"
+# END PREFIX LOCAL
+
+ source "${PORTAGE_BIN_PATH}"/isolated-functions.sh || exit 1
+
+ repo_name=gentoo
+ repo_location=$(__repo_key "${repo_name}" location)
+ if [[ -z ${repo_location} ]]; then
+ eecho "Repository '${repo_name}' not found"
+ exit 1
+ fi
+ repo_sync_type=$(__repo_key "${repo_name}" sync-type)
+
# If PORTAGE_NICENESS is overridden via the env then it will
# still pass through the portageq call and override properly.
if [ -n "${PORTAGE_NICENESS}" ]; then
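The eval'd call above works because "portageq envvar -v" prints its
variables as shell-style NAME="value" assignments, which is why
emerge-webrsync can pull settings such as DISTDIR and EPREFIX straight
into its own environment. A minimal, hedged sketch of the same pattern
(the variable selection here is illustrative only):

    #!/usr/bin/env bash
    # Locate portageq on PATH and import a few Portage settings.
    portageq=$(type -P portageq) || { echo "portageq not found" >&2; exit 1; }
    # portageq prints NAME="value" lines, so a quoted eval imports them.
    eval "$("${portageq}" envvar -v DISTDIR EPREFIX PORTAGE_TMPDIR)"
    echo "DISTDIR=${DISTDIR}"
    echo "EPREFIX=${EPREFIX:-<empty>}"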
diff --cc bin/filter-bash-environment.py
index 9cb42c1,3d4b3ec..2537855
--- a/bin/filter-bash-environment.py
+++ b/bin/filter-bash-environment.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import codecs
diff --cc bin/isolated-functions.sh
index 1d399b6,42d9e70..c91bc1d
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}/eapi.sh"
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}/eapi.sh"
# We need this next line for "die" and "assert". It expands
# It _must_ precede all the calls to die and assert.
diff --cc bin/misc-functions.sh
index 105129a,ee21444..011e8ba
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1768,8 -1157,8 +1768,8 @@@ __dyn_package()
tar $tar_options -cf - $PORTAGE_BINPKG_TAR_OPTS -C "${PROOT}" . | \
$PORTAGE_BZIP2_COMMAND -c > "$PORTAGE_BINPKG_TMPFILE"
assert "failed to pack binary package: '$PORTAGE_BINPKG_TMPFILE'"
- PYTHONPATH=${PORTAGE_PYM_PATH}${PYTHONPATH:+:}${PYTHONPATH} \
+ PYTHONPATH=${PORTAGE_PYTHONPATH:-${PORTAGE_PYM_PATH}} \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/xpak-helper.py recompose \
"$PORTAGE_BINPKG_TMPFILE" "$PORTAGE_BUILDDIR/build-info"
if [ $? -ne 0 ]; then
rm -f "${PORTAGE_BINPKG_TMPFILE}"
diff --cc bin/phase-functions.sh
index 881d50b,37503bf..4321375
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -29,9 -30,9 +30,9 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_SAVED_READONLY_VARS PORTAGE_SIGPIPE_STATUS \
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTAGE_XATTR_EXCLUDE \
- PORTDIR PORTDIR_OVERLAY \
+ PORTDIR \
PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS"
+ __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc bin/save-ebuild-env.sh
index c6c1ff3,f695245..552f882
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_PREFIX_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# @FUNCTION: __save_ebuild_env
diff --cc cnf/make.globals
index ca82e22,013c556..8c58fc6
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -19,9 -19,6 +19,10 @@@ LDFLAGS="
FFLAGS=""
FCFLAGS=""
- # Default rsync mirror
++# PREFIX LOCAL: default rsync mirror
+SYNC="rsync://rsync.prefix.freens.org/gentoo-portage-prefix"
++# END PREFIX LOCAL
+
# Default distfiles mirrors. This rotation has multiple hosts and is reliable.
# Approved by the mirror-admin team.
GENTOO_MIRRORS="http://distfiles.gentoo.org"
@@@ -30,14 -27,13 +31,10 @@@ ACCEPT_LICENSE="* -@EULA
ACCEPT_PROPERTIES="*"
ACCEPT_RESTRICT="*"
- # Repository Paths
- PORTDIR="@PORTAGE_EPREFIX@/usr/portage"
- DISTDIR=${PORTDIR}/distfiles
- PKGDIR=${PORTDIR}/packages
- RPMDIR=${PORTDIR}/rpm
-
- # Temporary build directory
- PORTAGE_TMPDIR="@PORTAGE_EPREFIX@/var/tmp"
+ # Miscellaneous paths
-DISTDIR="/usr/portage/distfiles"
-PKGDIR="/usr/portage/packages"
-RPMDIR="/usr/portage/rpm"
-
-# Temporary build directory
-PORTAGE_TMPDIR="/var/tmp"
++DISTDIR="@PORTAGE_EPREFIX@/usr/portage/distfiles"
++PKGDIR="@PORTAGE_EPREFIX@/usr/portage/packages"
++RPMDIR="@PORTAGE_EPREFIX@/usr/portage/rpm"
# Fetching command (3 tries, passive ftp for firewall compatibility)
FETCHCOMMAND="wget -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
@@@ -153,22 -123,9 +150,23 @@@ PORTAGE_ELOG_MAILFROM="@portageuser@@lo
PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
# Security labels are special, see bug #461868.
- PORTAGE_XATTR_EXCLUDE="security.*"
+ # system.nfs4_acl attributes are irrelevant, see bug #475496.
+ PORTAGE_XATTR_EXCLUDE="security.* system.nfs4_acl"
+# Writeable paths for Mac OS X seatbelt sandbox
+#
+# If path ends in a slash (/), access will recursively be allowed to directory
+# contents (using a regex), not the directory itself. Without a slash, access
+# to the directory or file itself will be allowed (using a literal), so it can
+# be created, removed and changed. If both are needed, the directory needs to be
+# given twice, once with and once without the slash. Obviously this only makes
+# sense for directories, not files.
+#
+# An empty value for either variable will disable all restrictions on the
+# corresponding operation.
+MACOSSANDBOX_PATHS="/dev/fd/ /private/tmp/ /private/var/tmp/ @@PORTAGE_BUILDDIR@@/ @@PORTAGE_ACTUAL_DISTDIR@@/"
+MACOSSANDBOX_PATHS_CONTENT_ONLY="/dev/null /dev/dtracehelper /dev/tty /private/var/run/syslog"
+
# *****************************
# ** DO NOT EDIT THIS FILE **
# ***************************************************
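To make the trailing-slash rule documented in the hunk above concrete,
here is a hedged, illustrative make.conf-style snippet (the values are
examples, not the shipped defaults):

    # The trailing slash on /private/tmp/ allows access to everything
    # inside it via a regex match, while the bare /private/tmp entry
    # additionally allows the directory itself to be created and
    # removed -- both forms are listed, as the comment above requires.
    MACOSSANDBOX_PATHS="/private/tmp /private/tmp/"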
diff --cc man/emerge.1
index dfda78d,bbcd84b..4b402fb
--- a/man/emerge.1
+++ b/man/emerge.1
@@@ -1112,15 -1124,9 +1124,15 @@@ add this to \fBmake.conf\fR(5)
.LP
.I CONFIG_PROTECT_MASK="/etc/wget /etc/rc.d"
.LP
- Tools such as dispatch\-conf, cfg\-update, and etc\-update are also available to
- aid in the merging of these files. They provide interactive merging and can
+ Tools such as dispatch\-conf, cfg\-update, and etc\-update are also available
+ to aid in the merging of these files. They provide interactive merging and can
auto\-merge trivial changes.
+.LP
+When an offset prefix (\fBEPREFIX\fR) is active, all paths in
+\fBCONFIG_PROTECT\fR and \fBCONFIG_PROTECT_MASK\fR are prefixed with the
+offset by Portage before they are considered. Hence, these paths never
+contain the offset prefix, and the variables can be defined in
+offset-unaware locations, such as the profiles.
.SH "REPORTING BUGS"
Please report any bugs you encounter through our website:
.LP
@@@ -1140,10 -1146,10 +1152,11 @@@ Marius Mauch <genone@gentoo.org
Jason Stubbs <jstubbs@gentoo.org>
Brian Harring <ferringb@gmail.com>
Zac Medico <zmedico@gentoo.org>
+Fabian Groffen <grobian@gentoo.org>
+ Arfrever Frehtes Taifersar Arahesis <arfrever@apache.org>
.fi
.SH "FILES"
- Here is a common list of files you will probably be interested in. For a
+ Here is a common list of files you will probably be interested in. For a
complete listing, please refer to the \fBportage\fR(5) man page.
.TP
.B /usr/share/portage/config/sets/
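The offset-prefix note added to emerge.1 in the hunk above can be
summarised with a small, hypothetical example (the paths are made up):

    # With an offset prefix active, Portage prepends EPREFIX before the
    # CONFIG_PROTECT paths are compared against installed files.
    EPREFIX=/home/user/gentoo        # example offset prefix
    CONFIG_PROTECT="/etc /usr/share/config"
    # Effectively protected locations:
    #   /home/user/gentoo/etc
    #   /home/user/gentoo/usr/share/config
    # so the same prefix-less value works in profiles and make.conf.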
diff --cc man/make.conf.5
index eb4497f,236fdf0..00d822d
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@@ -154,16 -154,14 +154,17 @@@ automatically have /* appended to them
Defaults to "/lib/modules/* *.py[co]".
.TP
\fBCONFIG_PROTECT\fR = \fI[space delimited list of files and/or directories]\fR
- All files and/or directories that are defined here will have "config file protection"
- enabled for them. See the \fBCONFIGURATION FILES\fR section
+ All files and/or directories that are defined here will have "config file
+ protection" enabled for them. See the \fBCONFIGURATION FILES\fR section
of \fBemerge\fR(1) for more information.
+Note that if an offset prefix (\fBEPREFIX\fR) is activated, all paths defined
+in \fBCONFIG_PROTECT\fR are prefixed by Portage with the offset before
+they are used.
.TP
- \fBCONFIG_PROTECT_MASK\fR = \fI[space delimited list of files and/or directories]\fR
- All files and/or directories that are defined here will have "config file protection"
- disabled for them. See the \fBCONFIGURATION FILES\fR section
+ \fBCONFIG_PROTECT_MASK\fR = \fI[space delimited list of files and/or \
+ directories]\fR
+ All files and/or directories that are defined here will have "config file
+ protection" disabled for them. See the \fBCONFIGURATION FILES\fR section
of \fBemerge\fR(1) for more information.
.TP
.B CTARGET
diff --cc pym/_emerge/actions.py
index 26ced76,4c53c25..4add397
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -37,8 -38,8 +38,9 @@@ from portage import o
from portage import shutil
from portage import eapi_is_supported, _encodings, _unicode_decode
from portage.cache.cache_errors import CacheError
+from portage.const import EPREFIX
from portage.const import GLOBAL_CONFIG_PATH, VCS_DIRS, _DEPCLEAN_LIB_CHECK_DEFAULT
+ from portage.const import SUPPORTED_BINPKG_FORMATS
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
from portage.dep import Atom
@@@ -2522,22 -2587,20 +2588,20 @@@ def _sync_repo(emerge_config, repo)
msg.append("(and possibly your system's filesystem) configuration.")
for line in msg:
out.eerror(line)
- sys.exit(exitcode)
- elif syncuri[:6]=="cvs://":
+ return exitcode
+ elif repo.sync_type == "cvs":
- if not os.path.exists("/usr/bin/cvs"):
- print("!!! /usr/bin/cvs does not exist, so CVS support is disabled.")
+ if not os.path.exists(EPREFIX + "/usr/bin/cvs"):
- print("!!! " + EPREFIX + "/usr/bin/cvs does not exist, so CVS support is disabled.")
- print("!!! Type \"emerge dev-vcs/cvs\" to enable CVS support.")
- sys.exit(1)
- cvsroot=syncuri[6:]
- cvsdir=os.path.dirname(myportdir)
- if not os.path.exists(myportdir+"/CVS"):
++ print("!!! %s/usr/bin/cvs does not exist, so CVS support is disabled." % (EPREFIX))
+ print("!!! Type \"emerge %s\" to enable CVS support." % portage.const.CVS_PACKAGE_ATOM)
+ return os.EX_UNAVAILABLE
+ cvs_root = syncuri
+ if cvs_root.startswith("cvs://"):
+ cvs_root = cvs_root[6:]
+ if not os.path.exists(os.path.join(repo.location, "CVS")):
#initial checkout
print(">>> Starting initial cvs checkout with "+syncuri+"...")
- if os.path.exists(cvsdir+"/gentoo-x86"):
- print("!!! existing",cvsdir+"/gentoo-x86 directory; exiting.")
- sys.exit(1)
try:
- os.rmdir(myportdir)
+ os.rmdir(repo.location)
except OSError as e:
if e.errno != errno.ENOENT:
sys.stderr.write(
diff --cc pym/_emerge/main.py
index 8977f33,2e68a05..b8f2ffd
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -18,7 -18,7 +18,8 @@@ portage.proxy.lazyimport.lazyimport(glo
'_emerge.is_valid_package_atom:insert_category_into_atom'
)
from portage import os
+from portage.const import EPREFIX
+ from portage.util._argparse import ArgumentParser
if sys.hexversion >= 0x3000000:
long = int
diff --cc pym/portage/dbapi/bintree.py
index 2bad28e,61ac6b5..0a667da
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -319,7 -317,9 +319,10 @@@ class binarytree(object)
"ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
"ACCEPT_PROPERTIES", "ACCEPT_RESTRICT", "CBUILD",
"CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
- "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE", "EPREFIX"])
+ "GENTOO_MIRRORS", "INSTALL_MASK", "IUSE_IMPLICIT", "USE",
+ "USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
- "USE_EXPAND_UNPREFIXED"])
++ "USE_EXPAND_UNPREFIXED",
++ "EPREFIX"])
self._pkgindex_default_pkg_data = {
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-07-10 5:31 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-07-10 5:31 UTC (permalink / raw
To: gentoo-commits
commit: 36ab75c4508cac4a67ff01194eb4c8354b3db5ff
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 10 05:30:43 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Jul 10 05:30:43 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=36ab75c4
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/_emerge/EbuildPhase.py | 2 +-
pym/_emerge/Scheduler.py | 45 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 45 insertions(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-07-08 19:32 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-07-08 19:32 UTC (permalink / raw
To: gentoo-commits
commit: 796f9dcebc5fea9b2dd72492fd06a6c14540cccf
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jul 8 19:31:45 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jul 8 19:31:45 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=796f9dce
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/phase-functions.sh | 1 +
man/portage.5 | 6 +-
pym/_emerge/AbstractDepPriority.py | 5 +-
pym/_emerge/AsynchronousLock.py | 2 +-
pym/_emerge/DepPriority.py | 29 +++--
pym/_emerge/DepPrioritySatisfiedRange.py | 24 +++-
pym/_emerge/EbuildMetadataPhase.py | 1 -
pym/_emerge/EbuildPhase.py | 44 ++++----
pym/_emerge/SpawnProcess.py | 3 +-
pym/_emerge/UnmergeDepPriority.py | 27 +++--
pym/_emerge/actions.py | 19 ++--
pym/_emerge/depgraph.py | 7 ++
pym/portage/dbapi/_MergeProcess.py | 9 +-
pym/portage/dbapi/vartree.py | 5 +-
pym/portage/output.py | 13 ++-
pym/portage/package/ebuild/_spawn_nofetch.py | 10 +-
pym/portage/package/ebuild/doebuild.py | 94 +++++++++++-----
pym/portage/process.py | 37 +++----
pym/portage/tests/ebuild/test_doebuild_fd_pipes.py | 121 +++++++++++++++++++++
pym/portage/tests/ebuild/test_spawn.py | 10 +-
pym/portage/tests/resolver/test_depclean_order.py | 57 ++++++++++
pym/portage/tests/resolver/test_merge_order.py | 35 +++++-
pym/portage/util/_async/ForkProcess.py | 6 +-
pym/portage/util/movefile.py | 8 +-
24 files changed, 441 insertions(+), 132 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-06-29 5:41 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-06-29 5:41 UTC (permalink / raw
To: gentoo-commits
commit: 9ee0e86c9e29e54ac8c19537443ca067b34baac3
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Jun 29 05:41:15 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Jun 29 05:41:15 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=9ee0e86c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 6 ++
pym/_emerge/clear_caches.py | 4 +-
pym/portage/repository/config.py | 9 ++-
pym/portage/util/__init__.py | 111 +++++++++++++++--------------------
pym/portage/util/listdir.py | 122 ++++++++++++++++++---------------------
5 files changed, 118 insertions(+), 134 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-06-27 17:20 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-06-27 17:20 UTC (permalink / raw
To: gentoo-commits
commit: 996758fd4e3161cbdd1dcc00e3455804090d85fb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jun 27 17:19:13 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jun 27 17:19:13 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=996758fd
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/chpathtool.py
bin/dispatch-conf
pym/portage/dispatch_conf.py
bin/chpathtool.py | 4 +-
bin/dispatch-conf | 2 +-
bin/ebuild-helpers/xattr/install | 12 +
bin/emerge-webrsync | 2 +-
bin/helper-functions.sh | 6 +-
bin/install.py | 262 +++++++++++
bin/phase-functions.sh | 3 +-
bin/phase-helpers.sh | 18 +-
bin/portageq | 5 +-
bin/repoman | 117 +++--
man/emerge.1 | 10 +
man/make.conf.5 | 6 +-
man/portage.5 | 31 +-
man/ru/color.map.5 | 217 +++++++++
misc/emerge-delta-webrsync | 483 ++++++++++++++-------
pym/_emerge/RootConfig.py | 13 +-
pym/_emerge/actions.py | 356 ++++++++-------
pym/_emerge/depgraph.py | 44 +-
pym/_emerge/main.py | 16 +-
pym/portage/__init__.py | 38 +-
pym/portage/_emirrordist/FetchTask.py | 19 +-
pym/portage/_legacy_globals.py | 3 +-
pym/portage/_sets/__init__.py | 5 +-
pym/portage/checksum.py | 8 +-
pym/portage/const.py | 2 +
pym/portage/data.py | 18 +-
pym/portage/dispatch_conf.py | 18 +-
.../package/ebuild/_config/LocationsManager.py | 33 +-
.../package/ebuild/_config/special_env_vars.py | 4 +-
pym/portage/package/ebuild/config.py | 40 +-
pym/portage/package/ebuild/doebuild.py | 5 +-
pym/portage/package/ebuild/fetch.py | 4 +-
pym/portage/process.py | 26 +-
pym/portage/repository/config.py | 237 +++++++---
pym/portage/tests/emerge/test_simple.py | 42 +-
pym/portage/tests/lint/test_bash_syntax.py | 8 +-
pym/portage/tests/resolver/ResolverPlayground.py | 2 +
pym/portage/util/__init__.py | 95 ++--
pym/portage/util/_desktop_entry.py | 16 +-
pym/repoman/utilities.py | 4 +-
40 files changed, 1622 insertions(+), 612 deletions(-)
diff --cc bin/chpathtool.py
index be42bb0,a040bab..6225bbf
--- a/bin/chpathtool.py
+++ b/bin/chpathtool.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2011-2012 Gentoo Foundation
+ # Copyright 2011-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import io
diff --cc bin/phase-functions.sh
index 423ef8c,6433c54..ac789b4
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -28,9 -28,10 +28,10 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_REPO_NAME PORTAGE_RESTRICT \
PORTAGE_SAVED_READONLY_VARS PORTAGE_SIGPIPE_STATUS \
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
- PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTDIR PORTDIR_OVERLAY \
+ PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTAGE_XATTR_EXCLUDE \
+ PORTDIR PORTDIR_OVERLAY \
PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS"
+ __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc pym/portage/package/ebuild/doebuild.py
index 9b2aa19,6901719..4b9be44
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -173,10 -171,10 +173,13 @@@ def _doebuild_path(settings, eapi=None)
prefixes.append("/")
path = overrides
+ # PREFIX LOCAL: use DEFAULT_PATH and EXTRA_PATH from make.globals
+ defaultpath = [x for x in settings.get("DEFAULT_PATH", "").split(":") if x]
+ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
+ if "xattr" in settings.features:
+ path.append(os.path.join(portage_bin_path, "ebuild-helpers", "xattr"))
+
if eprefix and uid != 0 and "fakeroot" not in settings.features:
path.append(os.path.join(portage_bin_path,
"ebuild-helpers", "unprivileged"))
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-06-12 9:02 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-06-12 9:02 UTC (permalink / raw
To: gentoo-commits
commit: abd42444ce14c48c3eb6dcf8d608c33701399fee
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 12 09:02:07 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Jun 12 09:02:07 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=abd42444
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
man/portage.5 | 15 +--------------
pym/_emerge/actions.py | 13 ++++++-------
pym/portage/dep/dep_check.py | 4 ++--
3 files changed, 9 insertions(+), 23 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-06-09 15:53 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-06-09 15:53 UTC (permalink / raw
To: gentoo-commits
commit: ace2a1c9328b8782497c59a5c890e32eb7456de3
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jun 9 15:52:49 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jun 9 15:52:49 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=ace2a1c9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dispatch-conf
bin/dohtml.py
bin/ebuild-helpers/dohtml
bin/ebuild-helpers/portageq
bin/ebuild-ipc
bin/misc-functions.sh
bin/phase-helpers.sh
pym/portage/dbapi/bintree.py
NEWS | 2 +-
RELEASE-NOTES | 7 +-
bin/archive-conf | 16 +--
bin/clean_locks | 2 -
bin/dispatch-conf | 9 +-
bin/dohtml.py | 5 +-
bin/ebuild | 3 +-
bin/ebuild-helpers/dohtml | 12 ++-
bin/ebuild-helpers/ecompressdir | 15 +++
bin/ebuild-helpers/portageq | 4 +-
bin/ebuild-ipc | 4 +-
bin/ebuild.sh | 8 +-
bin/egencache | 13 +--
bin/emerge | 1 -
bin/misc-functions.sh | 6 +-
bin/phase-functions.sh | 34 ++++--
bin/phase-helpers.sh | 23 ++--
bin/quickpkg | 3 +-
bin/repoman | 120 +++++++++------------
cnf/make.globals | 8 +-
cnf/metadata.dtd | 2 +-
man/ebuild.5 | 6 +-
man/egencache.1 | 8 +-
man/emerge.1 | 7 +-
man/make.conf.5 | 36 +++++--
man/portage.5 | 8 +-
man/repoman.1 | 10 +-
pym/_emerge/BinpkgFetcher.py | 8 +-
pym/_emerge/EbuildBuild.py | 10 +-
pym/_emerge/EbuildFetcher.py | 3 +-
pym/_emerge/JobStatusDisplay.py | 3 +-
pym/_emerge/MetadataRegen.py | 2 +
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/actions.py | 103 ++++++++++--------
pym/_emerge/depgraph.py | 58 +++++++---
pym/_emerge/main.py | 38 ++++---
pym/_emerge/resolver/circular_dependency.py | 3 +-
pym/portage/__init__.py | 36 +------
pym/portage/cache/template.py | 13 ++-
pym/portage/dbapi/__init__.py | 57 +++++-----
pym/portage/dbapi/bintree.py | 89 ++++++++++-----
pym/portage/dbapi/porttree.py | 67 ++++++++++--
pym/portage/dbapi/vartree.py | 5 +-
pym/portage/dep/__init__.py | 38 +++++--
pym/portage/getbinpkg.py | 33 ++++--
pym/portage/package/ebuild/_config/UseManager.py | 2 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/doebuild.py | 21 ++--
pym/portage/package/ebuild/fetch.py | 22 +++-
pym/portage/repository/config.py | 15 ++-
pym/portage/tests/dep/test_isvalidatom.py | 5 +-
pym/portage/tests/dep/test_match_from_list.py | 7 +-
pym/portage/tests/resolver/ResolverPlayground.py | 3 +-
pym/portage/tests/util/test_getconfig.py | 4 +-
pym/portage/util/env_update.py | 63 ++++++-----
pym/repoman/checks.py | 1 +
56 files changed, 680 insertions(+), 409 deletions(-)
diff --cc bin/dispatch-conf
index 8e070ee,a41464f..d560e2b
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -22,11 -22,9 +22,10 @@@ sys.path.insert(0, pym_path
import portage
portage._internal_caller = True
from portage import os
- from portage import dispatch_conf
from portage import _unicode_decode
from portage.dispatch_conf import diffstatusoutput
- from portage.process import find_binary
+ from portage.process import find_binary, spawn
+from portage.const import EPREFIX
FIND_EXTANT_CONFIGS = "find '%s' %s -name '._cfg????_%s' ! -name '.*~' ! -iname '.*.bak' -print"
DIFF_CONTENTS = "diff -Nu '%s' '%s'"
diff --cc bin/dohtml.py
index 0ce8339,1b6ba89..e3fa100
--- a/bin/dohtml.py
+++ b/bin/dohtml.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild-helpers/dohtml
index 4f102b0,fd9efd2..e22062d
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -1,14 -1,19 +1,24 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2009-2010 Gentoo Foundation
+ # Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
++<<<<<<< HEAD
+PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
+PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
++=======
+ PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
+ PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+ # Use safe cwd, avoiding unsafe import for bug #469338.
+ export __PORTAGE_HELPER_CWD=${PWD}
+ cd "${PORTAGE_PYM_PATH}"
++>>>>>>> overlays-gentoo-org/master
PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
ret=$?
+ # Restore cwd for display by __helpers_die
+ cd "${__PORTAGE_HELPER_CWD}"
[[ $ret -ne 0 ]] && __helpers_die "${0##*/} failed"
exit $ret
diff --cc bin/ebuild-helpers/portageq
index 1afd79a,7bd330b..6ba2d88
--- a/bin/ebuild-helpers/portageq
+++ b/bin/ebuild-helpers/portageq
@@@ -1,8 -1,10 +1,10 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2009-2010 Gentoo Foundation
+ # Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
+PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
+ # Use safe cwd, avoiding unsafe import for bug #469338.
+ cd "${PORTAGE_PYM_PATH}"
PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/portageq" "$@"
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/portageq" "$@"
diff --cc bin/ebuild-ipc
index 5b9ad53,9ff6f1c..7839195
--- a/bin/ebuild-ipc
+++ b/bin/ebuild-ipc
@@@ -1,8 -1,10 +1,10 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2010 Gentoo Foundation
+ # Copyright 2010-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+ # Use safe cwd, avoiding unsafe import for bug #469338.
+ cd "${PORTAGE_PYM_PATH}"
PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
- exec "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/ebuild-ipc.py" "$@"
+ exec "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/ebuild-ipc.py" "$@"
diff --cc bin/misc-functions.sh
index e2960bd,ad99d8a..105129a
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# Miscellaneous shell functions that make use of the ebuild env but don't need
@@@ -803,8 -748,9 +804,9 @@@ install_qa_check_misc()
local cat_cmd=cat
[[ $PORTAGE_LOG_FILE = *.gz ]] && cat_cmd=zcat
[[ $reset_debug = 1 ]] && set -x
- f=$($cat_cmd "${PORTAGE_LOG_FILE}" | \
+ # Use safe cwd, avoiding unsafe import for bug #469338.
+ f=$(cd "${PORTAGE_PYM_PATH}" ; $cat_cmd "${PORTAGE_LOG_FILE}" | \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH"/check-implicit-pointer-usage.py || die "check-implicit-pointer-usage.py failed")
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH"/check-implicit-pointer-usage.py || die "check-implicit-pointer-usage.py failed")
if [[ -n ${f} ]] ; then
# In the future this will be a forced "die". In preparation,
diff --cc bin/phase-helpers.sh
index 70c95b0,a97323a..95052eb
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PREFIX_PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
export DESTTREE=/usr
diff --cc pym/_emerge/EbuildBuild.py
index 71567ea,75d906f..9a18e72
--- a/pym/_emerge/EbuildBuild.py
+++ b/pym/_emerge/EbuildBuild.py
@@@ -15,8 -15,8 +15,9 @@@ import portag
from portage import os
from portage.output import colorize
from portage.package.ebuild.digestcheck import digestcheck
+ from portage.package.ebuild.digestgen import digestgen
from portage.package.ebuild.doebuild import _check_temp_dir
+from portage.const import EPREFIX
from portage.package.ebuild._spawn_nofetch import spawn_nofetch
class EbuildBuild(CompositeTask):
diff --cc pym/_emerge/main.py
index 2f02fb7,689d136..76404fc
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -14,9 -15,9 +15,10 @@@ portage.proxy.lazyimport.lazyimport(glo
'_emerge.actions:load_emerge_config,run_action,' + \
'validate_ebuild_environment',
'_emerge.help:help@emerge_help',
+ '_emerge.is_valid_package_atom:insert_category_into_atom'
)
from portage import os
+from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
long = int
diff --cc pym/portage/dbapi/bintree.py
index f3d044e,77b2886..2bad28e
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -308,8 -307,8 +309,8 @@@ class binarytree(object)
self._pkgindex_aux_keys = \
["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
"HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
+ "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI"]
+ "BASE_URI", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-05-04 18:55 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-05-04 18:55 UTC (permalink / raw
To: gentoo-commits
commit: 18a7cd8392e4516a0a517390d33625ed2a536f80
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat May 4 18:54:34 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat May 4 18:54:34 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=18a7cd83
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/misc-functions.sh
bin/ebuild | 8 +-
bin/misc-functions.sh | 33 +-
bin/phase-functions.sh | 2 +-
bin/repoman | 58 ++-
cnf/make.globals | 1 +
man/dispatch-conf.1 | 75 ++--
man/ebuild.5 | 6 +
man/emerge.1 | 44 ++-
man/make.conf.5 | 27 +-
man/portage.5 | 18 +-
man/repoman.1 | 4 +
man/ru/dispatch-conf.1 | 100 ++++
pym/_emerge/Package.py | 12 +-
pym/_emerge/actions.py | 24 +-
pym/_emerge/depgraph.py | 18 +-
pym/_emerge/main.py | 7 +
pym/_emerge/search.py | 4 +-
pym/portage/_sets/libs.py | 9 +-
pym/portage/dbapi/bintree.py | 2 +-
pym/portage/dbapi/porttree.py | 2 +
pym/portage/dbapi/vartree.py | 87 +++-
.../package/ebuild/_config/special_env_vars.py | 4 +-
pym/portage/package/ebuild/config.py | 133 ++++--
pym/portage/package/ebuild/getmaskingstatus.py | 15 +-
pym/portage/process.py | 11 +-
pym/portage/tests/__init__.py | 20 +-
pym/portage/tests/bin/setup_env.py | 44 +-
pym/portage/tests/dep/testAtom.py | 254 +++++-----
pym/portage/tests/dep/testCheckRequiredUse.py | 188 ++++----
pym/portage/tests/dep/testStandalone.py | 26 +-
pym/portage/tests/dep/test_best_match_to_list.py | 46 +-
pym/portage/tests/dep/test_dep_getcpv.py | 16 +-
pym/portage/tests/dep/test_dep_getrepo.py | 6 +-
pym/portage/tests/dep/test_dep_getslot.py | 10 +-
pym/portage/tests/dep/test_dep_getusedeps.py | 12 +-
pym/portage/tests/dep/test_get_operator.py | 24 +-
pym/portage/tests/dep/test_isjustname.py | 14 +-
pym/portage/tests/dep/test_isvalidatom.py | 6 +-
pym/portage/tests/dep/test_match_from_list.py | 120 +++---
pym/portage/tests/dep/test_paren_reduce.py | 50 +-
pym/portage/tests/dep/test_use_reduce.py | 519 ++++++++++----------
pym/portage/tests/ebuild/test_config.py | 4 +-
.../tests/env/config/test_PackageKeywordsFile.py | 8 +-
.../tests/env/config/test_PackageUseFile.py | 6 +-
.../tests/env/config/test_PortageModulesFile.py | 11 +-
pym/portage/tests/resolver/ResolverPlayground.py | 18 +-
pym/portage/tests/resolver/test_autounmask.py | 304 ++++++------
pym/portage/tests/resolver/test_backtracking.py | 6 +-
pym/portage/tests/resolver/test_depclean.py | 100 ++--
.../resolver/test_depclean_slot_unavailable.py | 15 +-
pym/portage/tests/resolver/test_multirepo.py | 4 +-
pym/portage/tests/util/test_stackDictList.py | 12 +-
pym/portage/tests/util/test_stackDicts.py | 41 +-
pym/portage/tests/util/test_stackLists.py | 18 +-
pym/portage/tests/util/test_uniqueArray.py | 14 +-
pym/portage/tests/util/test_varExpand.py | 80 ++--
pym/portage/tests/versions/test_cpv_sort_key.py | 7 +-
pym/portage/tests/versions/test_vercmp.py | 38 +-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 18 +-
.../util/_dyn_libs/display_preserved_libs.py | 2 +-
pym/portage/util/movefile.py | 10 +
pym/repoman/checks.py | 2 +
62 files changed, 1584 insertions(+), 1193 deletions(-)
diff --cc bin/misc-functions.sh
index aa9ad04,725ba55..e2960bd
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -898,26 -843,16 +899,16 @@@ install_qa_check_misc()
}
install_qa_check_prefix() {
-- if [[ -d ${ED}/${D} ]] ; then
-- find "${ED}/${D}" | \
++ if [[ -d ${ED%/}/${D} ]] ; then
++ find "${ED%/}/${D}" | \
while read i ; do
-- eqawarn "QA Notice: /${i##${ED}/${D}} installed in \${ED}/\${D}"
++ eqawarn "QA Notice: /${i##${ED%/}/${D}} installed in \${ED}/\${D}"
done
die "Aborting due to QA concerns: files installed in ${ED}/${D}"
fi
-- if [[ -d ${ED}/${EPREFIX} ]] ; then
-- find "${ED}/${EPREFIX}/" | \
++ if [[ -d ${ED%/}/${EPREFIX} ]] ; then
++ find "${ED%/}/${EPREFIX}/" | \
while read i ; do
eqawarn "QA Notice: ${i#${D}} double prefix"
done
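For readers less familiar with the offset variables used by
install_qa_check_prefix above: ED is the image directory D with the
offset prefix appended, so anything found under ${ED}/${D} means ${D}
was applied twice, and anything under ${ED}/${EPREFIX} means the offset
was applied twice. A hedged shell illustration with example values:

    # Example values only, to show why the check treats these paths as
    # red flags.
    EPREFIX=/home/user/gentoo
    D=/var/tmp/portage/cat/pkg-1.0/image/
    ED=${D%/}${EPREFIX}/                 # image dir plus offset
    # Correct install location for /usr/bin/foo:
    #   ${ED}usr/bin/foo
    # A doubly-prefixed (broken) install ends up under:
    #   ${ED}${EPREFIX#/}/usr/bin/foo    # -> "double prefix" QA warning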
diff --cc cnf/make.globals
index 9693348,6dd3509..5967fb1
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -28,9 -28,10 +28,10 @@@ GENTOO_MIRRORS="http://distfiles.gentoo
ACCEPT_LICENSE="* -@EULA"
ACCEPT_PROPERTIES="*"
+ ACCEPT_RESTRICT="*"
# Repository Paths
-PORTDIR=/usr/portage
+PORTDIR="@PORTAGE_EPREFIX@/usr/portage"
DISTDIR=${PORTDIR}/distfiles
PKGDIR=${PORTDIR}/packages
RPMDIR=${PORTDIR}/rpm
diff --cc pym/portage/dbapi/bintree.py
index 1d2850a,cdd10b3..f3d044e
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -316,9 -314,9 +316,9 @@@ class binarytree(object)
"PDEPEND", "PROPERTIES", "PROVIDE")
self._pkgindex_header_keys = set([
"ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
- "ACCEPT_PROPERTIES", "CBUILD",
+ "ACCEPT_PROPERTIES", "ACCEPT_RESTRICT", "CBUILD",
"CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
- "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE"])
+ "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE", "EPREFIX"])
self._pkgindex_default_pkg_data = {
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-04-02 16:57 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-04-02 16:57 UTC (permalink / raw
To: gentoo-commits
commit: 136b887caa911c44899cfeff6a3383bd77ebb98f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 2 16:57:29 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Apr 2 16:57:29 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=136b887c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/ecompressdir
bin/ebuild-helpers/ecompressdir | 4 ++--
bin/egencache | 11 ++++++++---
pym/portage/_emirrordist/main.py | 12 ++++++++----
pym/portage/dbapi/bintree.py | 19 +++++++++++++++++++
4 files changed, 37 insertions(+), 9 deletions(-)
diff --cc bin/ebuild-helpers/ecompressdir
index da2d001,40079c0..310981a
--- a/bin/ebuild-helpers/ecompressdir
+++ b/bin/ebuild-helpers/ecompressdir
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
if [[ -z $1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-03-31 19:03 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-03-31 19:03 UTC (permalink / raw
To: gentoo-commits
commit: f39409a51da92bbd82defaaec5b215b195258c9d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 31 19:03:11 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Mar 31 19:03:11 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f39409a5
tarball: include man/ru dir
---
tarball.sh | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index dfe3cfc..910c1db 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -28,7 +28,7 @@ install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
sed -i -e '/^VERSION=/s/^.*$/VERSION="'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/doc/fragment/version
-sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/*
+sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/{,ru/}*.[15]
sed -i -e "s/@version@/${V}/" ${DEST}/configure.ac
cd ${DEST}
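The updated sed line relies on bash brace expansion: man/{,ru/}*.[15]
expands to both man/*.[15] and man/ru/*.[15], so the Russian man pages
now receive the version substitution as well. A small demonstration
with a made-up directory layout:

    mkdir -p man/ru
    touch man/emerge.1 man/portage.5 man/ru/ebuild.1
    echo man/{,ru/}*.[15]
    # -> man/emerge.1 man/portage.5 man/ru/ebuild.1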
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-03-31 19:00 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-03-31 19:00 UTC (permalink / raw
To: gentoo-commits
commit: f1b84bb153039a5f6e6808ea841326d9d2a2a6cc
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 31 18:59:57 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Mar 31 18:59:57 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f1b84bb1
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
cnf/make.globals
bin/ebuild | 23 +++----
bin/ebuild-helpers/prepstrip | 3 +-
bin/repoman | 2 +-
cnf/make.globals | 5 +-
man/ebuild.1 | 4 +-
man/ebuild.5 | 5 ++
man/make.conf.5 | 11 +++-
pym/_emerge/depgraph.py | 27 +++++--
.../package/ebuild/_config/special_env_vars.py | 3 +-
pym/portage/package/ebuild/config.py | 14 ++--
pym/portage/repository/config.py | 8 ++
pym/portage/util/__init__.py | 65 ++++++++++++++++-
pym/portage/util/movefile.py | 75 ++++++++++++++++++--
13 files changed, 200 insertions(+), 45 deletions(-)
diff --cc cnf/make.globals
index b66786b,f0597c1..9693348
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -149,20 -123,9 +149,23 @@@ PORTAGE_ELOG_MAILFROM="@portageuser@@lo
# Signing command used by repoman
PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
+ # Security labels are special, see bug #461868.
+ PORTAGE_XATTR_EXCLUDE="security.*"
+
+# Writeable paths for Mac OS X seatbelt sandbox
+#
+# If path ends in a slash (/), access will recursively be allowed to directory
+# contents (using a regex), not the directory itself. Without a slash, access
+# to the directory or file itself will be allowed (using a literal), so it can
+# be created, removed and changed. If both are needed, the directory needs to be
+# given twice, once with and once without the slash. Obviously this only makes
+# sense for directories, not files.
+#
+# An empty value for either variable will disable all restrictions on the
+# corresponding operation.
+MACOSSANDBOX_PATHS="/dev/fd/ /private/tmp/ /private/var/tmp/ @@PORTAGE_BUILDDIR@@/ @@PORTAGE_ACTUAL_DISTDIR@@/"
+MACOSSANDBOX_PATHS_CONTENT_ONLY="/dev/null /dev/dtracehelper /dev/tty /private/var/run/syslog"
+
# *****************************
# ** DO NOT EDIT THIS FILE **
# ***************************************************
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-03-24 8:36 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-03-24 8:36 UTC (permalink / raw
To: gentoo-commits
commit: 04898da141c706f24d1303f2ecf62163fc7808ed
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 24 08:36:43 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Mar 24 08:36:43 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=04898da1
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/_emerge/SpawnProcess.py
man/ebuild.5 | 4 ++--
pym/_emerge/SpawnProcess.py | 13 ++++++-------
2 files changed, 8 insertions(+), 9 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-03-23 19:54 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-03-23 19:54 UTC (permalink / raw
To: gentoo-commits
commit: 1eda827ec2504da6e19a8455046761f060aab953
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 23 19:52:38 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Mar 23 19:52:38 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=1eda827e
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/prepallman
bin/ebuild-helpers/prepman
pym/_emerge/actions.py
pym/portage/const.py
bin/ebuild-helpers/ecompressdir | 33 ++-
bin/ebuild-helpers/prepallman | 5 +-
bin/ebuild-helpers/prepman | 5 +-
bin/ebuild-helpers/prepstrip | 9 +-
bin/emaint | 3 +-
bin/etc-update | 4 +-
bin/misc-functions.sh | 2 +-
bin/portageq | 304 ++++++++++++++++++--
bin/repoman | 107 ++++++--
man/portage.5 | 11 +-
pym/_emerge/Task.py | 9 +-
pym/_emerge/actions.py | 35 ++-
pym/_emerge/depgraph.py | 193 ++++++++++---
pym/_emerge/is_valid_package_atom.py | 5 +-
pym/_emerge/resolver/output.py | 45 ++--
pym/_emerge/stdout_spinner.py | 13 +-
pym/portage/const.py | 6 +
pym/portage/dbapi/vartree.py | 40 ++-
pym/portage/dep/__init__.py | 21 +--
pym/portage/emaint/__init__.py | 4 +-
pym/portage/emaint/modules/__init__.py | 4 +-
pym/portage/emaint/modules/binhost/__init__.py | 8 +-
pym/portage/emaint/modules/config/__init__.py | 8 +-
pym/portage/emaint/modules/logs/__init__.py | 8 +-
pym/portage/emaint/modules/move/__init__.py | 9 +-
pym/portage/emaint/modules/resume/__init__.py | 6 +-
pym/portage/emaint/modules/world/__init__.py | 8 +-
pym/portage/locks.py | 42 +++-
pym/portage/manifest.py | 3 +-
.../package/ebuild/_config/LocationsManager.py | 17 +-
pym/portage/package/ebuild/config.py | 2 +-
pym/portage/repository/config.py | 38 ++-
pym/portage/tests/__init__.py | 11 +-
pym/portage/tests/bin/setup_env.py | 12 +-
pym/portage/tests/emerge/test_simple.py | 15 +-
pym/portage/tests/resolver/test_slot_abi.py | 85 ++++++-
.../resolver/test_slot_operator_autounmask.py | 120 ++++++++
pym/portage/tests/runTests | 2 +-
pym/portage/update.py | 26 ++-
pym/portage/util/__init__.py | 19 +-
.../util/_dyn_libs/display_preserved_libs.py | 35 ++-
pym/portage/util/listdir.py | 8 +-
pym/portage/versions.py | 23 ++-
pym/repoman/checks.py | 6 +-
44 files changed, 1093 insertions(+), 276 deletions(-)
diff --cc bin/ebuild-helpers/prepallman
index 4ad3a5e,5331eaf..5362304
--- a/bin/ebuild-helpers/prepallman
+++ b/bin/ebuild-helpers/prepallman
@@@ -13,12 -13,7 +13,11 @@@ f
ret=0
+# PREFIX LOCAL: ED need not exist, whereas D does
+[[ -d ${ED} ]] || exit ${ret}
+# END PREFIX LOCAL
+
- find "${ED}" -type d -name man > "${T}"/prepallman.filelist
- while read -r mandir ; do
+ while IFS= read -r -d '' mandir ; do
mandir=${mandir#${ED}}
prepman "${mandir%/man}"
((ret|=$?))
diff --cc bin/ebuild-helpers/prepman
index 6520235,fb5dcb4..9166015
--- a/bin/ebuild-helpers/prepman
+++ b/bin/ebuild-helpers/prepman
@@@ -2,7 -2,10 +2,10 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ # Do not compress man pages which are smaller than this (in bytes). #169260
+ SIZE_LIMIT='128'
+
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if ! ___eapi_has_prefix_variables; then
ED=${D}
diff --cc bin/misc-functions.sh
index 2d75c92,ce3d681..aa9ad04
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc pym/_emerge/actions.py
index 71913e8,2c5a1b3..d6f75e1
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -37,9 -37,7 +37,8 @@@ from portage import o
from portage import shutil
from portage import eapi_is_supported, _encodings, _unicode_decode
from portage.cache.cache_errors import CacheError
+from portage.const import EPREFIX
- from portage.const import GLOBAL_CONFIG_PATH
- from portage.const import _DEPCLEAN_LIB_CHECK_DEFAULT
+ from portage.const import GLOBAL_CONFIG_PATH, VCS_DIRS, _DEPCLEAN_LIB_CHECK_DEFAULT
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
from portage.dep import Atom
diff --cc pym/portage/const.py
index 360625b,5e960d9..a503c9a
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -65,7 -58,9 +65,11 @@@ DEPCACHE_PATH = "/var/cache/
GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
# these variables are not used with target_root or config_root
+PORTAGE_BASE_PATH = PORTAGE_BASE
+ # NOTE: Use realpath(__file__) so that python module symlinks in site-packages
+ # are followed back to the real location of the whole portage installation.
-PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
++#PREFIX: below should work, but I'm not sure how it affects other places
++#PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
PORTAGE_PYM_PATH = PORTAGE_BASE_PATH + "/pym"
LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale" # FIXME: not used
diff --cc pym/portage/util/__init__.py
index 57cf7f6,dafc3e3..a476e98
--- a/pym/portage/util/__init__.py
+++ b/pym/portage/util/__init__.py
@@@ -45,8 -45,12 +45,13 @@@ from portage.exception import InvalidAt
from portage.localization import _
from portage.proxy.objectproxy import ObjectProxy
from portage.cache.mappings import UserDict
+from portage.const import EPREFIX
+ if sys.hexversion >= 0x3000000:
+ _unicode = str
+ else:
+ _unicode = unicode
+
noiselimit = 0
def initialize_logger(level=logging.WARN):
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-02-28 19:29 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-02-28 19:29 UTC (permalink / raw
To: gentoo-commits
commit: 5121499e217fa1e0b67d177dbc10abd957ddec22
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 28 19:29:19 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 28 19:29:19 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5121499e
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/preplib
bin/misc-functions.sh
pym/_emerge/getloadavg.py
bin/archive-conf | 4 +-
bin/ebuild | 3 +-
bin/ebuild-helpers/preplib | 32 --
bin/misc-functions.sh | 27 +-
bin/phase-functions.sh | 2 +-
bin/phase-helpers.sh | 12 +-
bin/portageq | 38 +-
bin/repoman | 471 ++++++++++----------
doc/qa.docbook | 2 +-
man/emerge.1 | 4 +-
man/ru/ebuild.1 | 2 +-
man/ru/env-update.1 | 6 +-
pym/_emerge/Scheduler.py | 20 +-
pym/_emerge/actions.py | 73 ++--
pym/_emerge/depgraph.py | 394 ++++++++++++++---
pym/_emerge/getloadavg.py | 5 +-
pym/_emerge/help.py | 2 +-
pym/_emerge/resolver/output.py | 114 +++---
pym/_emerge/resolver/output_helpers.py | 3 +-
pym/portage/__init__.py | 41 +-
pym/portage/_sets/files.py | 6 +-
pym/portage/dbapi/__init__.py | 6 +-
pym/portage/package/ebuild/_config/UseManager.py | 12 +-
.../package/ebuild/_config/special_env_vars.py | 4 +-
pym/portage/package/ebuild/config.py | 47 ++-
.../package/ebuild/deprecated_profile_check.py | 46 ++-
pym/portage/package/ebuild/digestgen.py | 107 +++---
pym/portage/process.py | 4 +
pym/portage/tests/emerge/test_simple.py | 2 +-
pym/portage/tests/lint/test_compile_modules.py | 20 +-
...test_complete_if_new_subslot_without_revbump.py | 74 +++
.../test_regular_slot_change_without_revbump.py | 59 +++
.../resolver/test_slot_change_without_revbump.py | 69 +++
.../tests/resolver/test_slot_operator_unsolved.py | 88 ++++
pym/portage/util/_dyn_libs/LinkageMapELF.py | 6 +-
pym/portage/util/_eventloop/EventLoop.py | 8 +-
pym/repoman/checks.py | 23 +-
pym/repoman/utilities.py | 5 +-
38 files changed, 1251 insertions(+), 590 deletions(-)
diff --cc bin/misc-functions.sh
index 29b9615,ba4fb0f..2d75c92
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -507,9 -467,7 +508,9 @@@ install_qa_check_elf()
# Check for shared libraries lacking NEEDED entries
qa_var="QA_DT_NEEDED_${ARCH/-/_}"
eval "[[ -n \${!qa_var} ]] && QA_DT_NEEDED=(\"\${${qa_var}[@]}\")"
- f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | awk '$2 == "" { print }' | sed -e "s:^[[:space:]]${ED}:/:")
+ # PREFIX LOCAL: keep offset prefix in the recorded files
- f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | gawk '$2 == "" { print }' | sed -e "s:^[[:space:]]${D}:/:")
++ f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | awk '$2 == "" { print }' | sed -e "s:^[[:space:]]${D}:/:")
+ # END PREFIX LOCAL
if [[ -n ${f} ]] ; then
echo "${f}" > "${T}"/scanelf-missing-NEEDED.log
if [[ "${QA_STRICT_DT_NEEDED-unset}" == unset ]] ; then
diff --cc pym/_emerge/getloadavg.py
index 8e62ebf,6a2794f..637d011
--- a/pym/_emerge/getloadavg.py
+++ b/pym/_emerge/getloadavg.py
@@@ -12,16 -11,9 +12,17 @@@ if getloadavg is None
Raises OSError if the load average was unobtainable.
"""
try:
- with open('/proc/loadavg') as f:
- loadavg_str = f.readline()
- except IOError:
+ if platform.system() in ["AIX", "HP-UX"]:
+ loadavg_str = os.popen('LANG=C /usr/bin/uptime 2>/dev/null').readline().split()
+ while loadavg_str[0] != 'load' and loadavg_str[1] != 'average:':
+ loadavg_str = loadavg_str[1:]
+ loadavg_str = loadavg_str[2:5]
+ loadavg_str = [x.rstrip(',') for x in loadavg_str]
+ loadavg_str = ' '.join(loadavg_str)
+ else:
- loadavg_str = open('/proc/loadavg').readline()
++ with open('/proc/loadavg') as f:
++ loadavg_str = f.readline()
+ except (IOError, IndexError):
# getloadavg() is only supposed to raise OSError, so convert
raise OSError('unknown')
loadavg_split = loadavg_str.split()
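The AIX/HP-UX fallback above parses the output of /usr/bin/uptime; restated as a standalone sketch (the sample line and function name are illustrative only, not the code from the commit):

    def parse_uptime_loadavg(line):
        # e.g. " 07:51PM up 13 days, 22:37, 1 user, load average: 0.52, 0.54, 0.55"
        words = line.split()
        # scan forward to the "load average:" marker
        while len(words) >= 2 and not (words[0] == 'load' and words[1] == 'average:'):
            words = words[1:]
        if len(words) < 5:
            # marker not found: fail the way getloadavg() is expected to
            raise OSError('unknown')
        # the next three tokens are the 1, 5 and 15 minute averages
        return tuple(float(w.rstrip(',')) for w in words[2:5])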
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-02-07 20:01 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-02-07 20:01 UTC (permalink / raw
To: gentoo-commits
commit: e6f4f768391f73b105a3abb1ac03d9f222808235
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 7 19:58:45 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 7 19:58:45 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=e6f4f768
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 5 +--
bin/phase-helpers.sh | 4 +++
man/ebuild.1 | 2 +-
man/ebuild.5 | 2 +-
man/emerge.1 | 15 ++++++++++-
man/env-update.1 | 14 +++++-----
man/etc-update.1 | 44 ++++++++++++++-----------------
man/portage.5 | 18 +++++++++++-
misc/emerge-delta-webrsync | 2 +-
pym/_emerge/SpawnProcess.py | 2 +-
pym/_emerge/resolver/slot_collision.py | 2 +-
pym/portage/__init__.py | 27 +++++++++++++++++++-
pym/portage/data.py | 4 +-
pym/portage/dbapi/vartree.py | 6 +++-
pym/portage/package/ebuild/config.py | 31 +++++++++++++++++++++-
pym/portage/process.py | 2 +-
pym/portage/tests/runTests | 14 +++++++---
pym/portage/util/__init__.py | 4 +-
18 files changed, 143 insertions(+), 55 deletions(-)
diff --cc bin/misc-functions.sh
index 092d9fb,ddd9179..29b9615
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc pym/portage/tests/runTests
index c03c8de,b006969..84e0d98
--- a/pym/portage/tests/runTests
+++ b/pym/portage/tests/runTests
@@@ -1,6 -1,6 +1,6 @@@
-#!/usr/bin/python -Wd
+#!/usr/bin/env python
# runTests.py -- Portage Unit Test Functionality
- # Copyright 2006-2012 Gentoo Foundation
+ # Copyright 2006-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os, sys
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-01-27 21:41 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-01-27 21:41 UTC (permalink / raw
To: gentoo-commits
commit: b052d83f4455ae8585daf34ec090539ca612d31c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 27 21:39:43 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jan 27 21:39:43 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=b052d83f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild.sh
pym/portage/const.py
pym/portage/data.py
Makefile | 17 +-
bin/ebuild | 2 +-
bin/ebuild.sh | 11 +-
bin/egencache | 30 +-
bin/glsa-check | 107 ++++---
bin/isolated-functions.sh | 3 +-
bin/portageq | 10 +-
bin/repoman | 4 +-
cnf/make.conf | 15 +-
man/ebuild.1 | 12 +-
man/make.conf.5 | 15 +-
man/portage.5 | 6 +-
man/ru/ebuild.1 | 245 ++++++++++++++++
man/ru/env-update.1 | 35 +++
man/ru/etc-update.1 | 63 ++++
man/ru/fixpackages.1 | 22 ++
pym/_emerge/DependencyArg.py | 10 +-
pym/_emerge/EbuildMetadataPhase.py | 5 +-
pym/_emerge/EbuildPhase.py | 9 +-
pym/_emerge/EbuildProcess.py | 12 +-
pym/_emerge/EbuildSpawnProcess.py | 10 +-
pym/_emerge/FakeVartree.py | 4 +-
pym/_emerge/JobStatusDisplay.py | 41 ++--
pym/_emerge/MiscFunctionsProcess.py | 7 +-
pym/_emerge/Package.py | 92 ++++---
pym/_emerge/Scheduler.py | 8 +-
pym/_emerge/SpawnProcess.py | 17 +-
pym/_emerge/SubProcess.py | 9 +-
pym/_emerge/UseFlagDisplay.py | 10 +-
pym/_emerge/actions.py | 36 ++-
pym/_emerge/depgraph.py | 56 ++--
pym/_emerge/emergelog.py | 12 +-
pym/_emerge/resolver/circular_dependency.py | 4 +-
pym/_emerge/resolver/output.py | 13 +-
pym/_emerge/resolver/output_helpers.py | 5 +-
pym/_emerge/resolver/slot_collision.py | 75 ++++--
pym/portage/__init__.py | 15 +
pym/portage/_selinux.py | 53 ++--
pym/portage/_sets/__init__.py | 4 +
pym/portage/cache/ebuild_xattr.py | 2 +-
pym/portage/cache/flat_hash.py | 14 +-
pym/portage/cache/sqlite.py | 7 +-
pym/portage/const.py | 3 +-
pym/portage/data.py | 38 ++-
pym/portage/dbapi/__init__.py | 24 ++-
pym/portage/dbapi/_expand_new_virt.py | 4 +-
pym/portage/dbapi/bintree.py | 12 +-
pym/portage/dbapi/cpv_expand.py | 4 +-
pym/portage/dbapi/dep_expand.py | 4 +-
pym/portage/dbapi/porttree.py | 6 +-
pym/portage/dbapi/vartree.py | 128 ++++-----
pym/portage/dbapi/virtual.py | 3 +-
pym/portage/dep/__init__.py | 10 +-
pym/portage/dep/_slot_operator.py | 4 +-
pym/portage/dep/dep_check.py | 11 +-
pym/portage/elog/mod_save_summary.py | 21 +-
pym/portage/getbinpkg.py | 4 +-
pym/portage/glsa.py | 301 +++++++++++---------
pym/portage/localization.py | 12 +-
pym/portage/manifest.py | 2 +
pym/portage/news.py | 10 +-
.../package/ebuild/_config/KeywordsManager.py | 32 ++-
.../package/ebuild/_config/LocationsManager.py | 31 ++-
pym/portage/package/ebuild/_config/UseManager.py | 36 ++-
.../package/ebuild/_config/special_env_vars.py | 8 +-
pym/portage/package/ebuild/_ipc/QueryCommand.py | 7 +-
pym/portage/package/ebuild/config.py | 32 ++-
pym/portage/package/ebuild/doebuild.py | 95 +++----
pym/portage/package/ebuild/prepare_build_dirs.py | 8 +-
pym/portage/process.py | 70 ++++-
pym/portage/repository/config.py | 21 +-
.../{_emirrordist => tests/glsa}/__init__.py | 0
pym/portage/tests/{bin => glsa}/__test__ | 0
pym/portage/tests/glsa/test_security_set.py | 144 ++++++++++
pym/portage/tests/repoman/test_simple.py | 15 +
pym/portage/tests/unicode/test_string_format.py | 51 ++--
pym/portage/update.py | 4 +-
pym/portage/util/_get_vm_info.py | 80 ++++++
pym/portage/util/_path.py | 27 ++
pym/portage/util/digraph.py | 12 +-
pym/portage/util/movefile.py | 12 +-
pym/portage/versions.py | 11 +-
pym/portage/xml/metadata.py | 8 +-
pym/repoman/checks.py | 7 +-
pym/repoman/errors.py | 4 +-
pym/repoman/herdbase.py | 6 +-
pym/repoman/utilities.py | 62 +++--
87 files changed, 1762 insertions(+), 749 deletions(-)
diff --cc bin/ebuild.sh
index 95c95bb,2293938..5ee2dca
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -1,9 -1,9 +1,9 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"
-PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}"
+PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"
+PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}"
# Prevent aliases from causing portage to act inappropriately.
# Make sure it's before everything so we don't mess aliases that follow.
diff --cc pym/_emerge/emergelog.py
index 3b6b595,aea94f7..9397aca
--- a/pym/_emerge/emergelog.py
+++ b/pym/_emerge/emergelog.py
@@@ -19,12 -18,8 +19,8 @@@ from portage.const import EPREFI
# dblink.merge() and we don't want that to trigger log writes
# unless it's really called via emerge.
_disable = True
-_emerge_log_dir = '/var/log'
+_emerge_log_dir = EPREFIX + '/var/log'
- # Coerce to unicode, in order to prevent TypeError when writing
- # raw bytes to TextIOWrapper with python2.
- _log_fmt = _unicode_decode("%.0f: %s\n")
-
def emergelog(xterm_titles, mystr, short_msg=None):
if _disable:
diff --cc pym/portage/const.py
index 527478f,3859a16..057f9e2
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -1,11 -1,8 +1,12 @@@
# portage: Constants
- # Copyright 1998-2012 Gentoo Foundation
+ # Copyright 1998-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+ from __future__ import unicode_literals
import os
diff --cc pym/portage/data.py
index df87b5c,29292f5..324a4cb
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -90,40 -86,37 +90,42 @@@ def _get_global(k)
elif portage.const.EPREFIX:
secpass = 2
#Discover the uid and gid of the portage user/group
+ keyerror = False
try:
- _portage_grpname = _get_global('_portage_grpname')
- if platform.python_implementation() == 'PyPy':
- # Somehow this prevents "TypeError: expected string" errors
- # from grp.getgrnam() with PyPy 1.7
- _portage_grpname = str(_portage_grpname)
- portage_gid = grp.getgrnam(_portage_grpname).gr_gid
+ portage_uid = pwd.getpwnam(_get_global('_portage_username')).pw_uid
except KeyError:
- keyerror = True
+ # PREFIX LOCAL: some sysadmins are insane, bug #344307
+ if _portage_grpname.isdigit():
+ portage_gid = int(_portage_grpname)
+ else:
- portage_gid = None
++ keyerror = True
+ # END PREFIX LOCAL
+ portage_uid = 0
+
try:
- portage_uid = pwd.getpwnam(_get_global('_portage_username')).pw_uid
- if secpass < 1 and portage_gid in os.getgroups():
- secpass = 1
+ portage_gid = grp.getgrnam(_get_global('_portage_grpname')).gr_gid
except KeyError:
- portage_uid = None
-
- if portage_uid is None or portage_gid is None:
- portage_uid = 0
+ keyerror = True
portage_gid = 0
+
+ if secpass < 1 and portage_gid in os.getgroups():
+ secpass = 1
+
+ # Suppress this error message if both PORTAGE_GRPNAME and
+ # PORTAGE_USERNAME are set to "root", for things like
+ # Android (see bug #454060).
+ if keyerror and not (_get_global('_portage_username') == "root" and
+ _get_global('_portage_grpname') == "root"):
+ # PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
+ writemsg(colorize("BAD",
+ _("portage: '%s' user or '%s' group missing." % (_get_global('_portage_username'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
writemsg(colorize("BAD",
- _("portage: 'portage' user or group missing.")) + "\n", noiselevel=-1)
- writemsg(_(
- " For the defaults, line 1 goes into passwd, "
- "and 2 into group.\n"), noiselevel=-1)
- writemsg(colorize("GOOD",
- " portage:x:250:250:portage:/var/tmp/portage:/bin/false") \
- + "\n", noiselevel=-1)
- writemsg(colorize("GOOD", " portage::250:portage") + "\n",
- noiselevel=-1)
+ _(" In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
+ writemsg(colorize("BAD",
+ _(" since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
+ writemsg(colorize("BAD",
+ _(" Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
+ # END PREFIX LOCAL
portage_group_warning()
_initialized_globals.add('portage_gid')
@@@ -244,14 -232,18 +246,22 @@@ def _init(settings)
if '_portage_grpname' not in _initialized_globals and \
'_portage_username' not in _initialized_globals:
+ # Prevents "TypeError: expected string" errors
+ # from grp.getgrnam() with PyPy
+ native_string = platform.python_implementation() == 'PyPy'
+
- v = settings.get('PORTAGE_GRPNAME', 'portage')
+ # PREFIX LOCAL: use var instead of hardwired 'portage'
+ v = settings.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
+ # END PREFIX LOCAL
+ if native_string:
+ v = portage._native_string(v)
globals()['_portage_grpname'] = v
_initialized_globals.add('_portage_grpname')
- v = settings.get('PORTAGE_USERNAME', 'portage')
+ # PREFIX LOCAL: use var instead of hardwired 'portage'
+ v = settings.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
+ # END PREFIX LOCAL
+ if native_string:
+ v = portage._native_string(v)
globals()['_portage_username'] = v
_initialized_globals.add('_portage_username')
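Two details of the data.py changes above, restated outside the diff (the helper name and plain-argument form are invented for illustration): a configured group may be a bare numeric id (bug #344307), and the names come from PORTAGE_GRPNAME/PORTAGE_USERNAME rather than a hardwired 'portage':

    import grp
    import pwd

    def resolve_ids(username, grpname):
        # group: accept a group name or, per bug #344307, a raw numeric gid
        try:
            gid = grp.getgrnam(grpname).gr_gid
        except KeyError:
            if grpname.isdigit():
                gid = int(grpname)
            else:
                raise
        uid = pwd.getpwnam(username).pw_uid
        return uid, gid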
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-01-27 21:41 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-01-27 21:41 UTC (permalink / raw
To: gentoo-commits
commit: 4feb40f6330edaa2a3170a060fed69651c58b180
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 27 21:40:45 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jan 27 21:40:45 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=4feb40f6
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 13 -------------
1 files changed, 0 insertions(+), 13 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-01-13 10:26 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-01-13 10:26 UTC (permalink / raw
To: gentoo-commits
commit: ce1a76d36f36fc83923f903d3bbcd4e7d654bc63
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 13 10:25:43 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jan 13 10:25:43 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=ce1a76d3
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 4 ++
man/make.conf.5 | 2 +-
pym/portage/package/ebuild/_spawn_nofetch.py | 1 +
pym/portage/tests/ebuild/test_doebuild_spawn.py | 39 ++++++++++++++++-------
4 files changed, 33 insertions(+), 13 deletions(-)
diff --cc bin/misc-functions.sh
index c997635,5fd0eab..092d9fb
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-01-10 21:02 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-01-10 21:02 UTC (permalink / raw
To: gentoo-commits
commit: 754956bff8273df8ce2c3d9f6ff7fb22126ebd41
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 10 21:01:23 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jan 10 21:01:23 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=754956bf
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/portage/package/ebuild/doebuild.py
Makefile | 2 +-
bin/emirrordist | 13 +
man/emirrordist.1 | 143 +++++
man/make.conf.5 | 4 +-
pym/_emerge/MetadataRegen.py | 23 +-
pym/_emerge/Scheduler.py | 4 +-
pym/_emerge/SpawnProcess.py | 20 +
pym/_emerge/depgraph.py | 13 +-
pym/portage/_emirrordist/Config.py | 132 +++++
pym/portage/_emirrordist/DeletionIterator.py | 83 +++
pym/portage/_emirrordist/DeletionTask.py | 129 ++++
pym/portage/_emirrordist/FetchIterator.py | 147 +++++
pym/portage/_emirrordist/FetchTask.py | 620 ++++++++++++++++++++
pym/portage/_emirrordist/MirrorDistTask.py | 218 +++++++
.../sync => portage/_emirrordist}/__init__.py | 2 +-
pym/portage/_emirrordist/main.py | 437 ++++++++++++++
pym/portage/dbapi/porttree.py | 5 +-
pym/portage/dbapi/vartree.py | 4 +-
pym/portage/manifest.py | 36 +-
.../ebuild/_parallel_manifest/ManifestScheduler.py | 19 +-
.../ebuild/_parallel_manifest/ManifestTask.py | 10 +-
pym/portage/package/ebuild/config.py | 15 +-
pym/portage/package/ebuild/doebuild.py | 10 +-
pym/portage/tests/emerge/test_simple.py | 8 +-
pym/portage/tests/process/test_PopenProcess.py | 7 +-
pym/portage/util/_ShelveUnicodeWrapper.py | 45 ++
pym/portage/util/_async/AsyncScheduler.py | 4 +-
pym/portage/util/_async/FileCopier.py | 17 +
pym/portage/util/_async/PipeLogger.py | 16 +-
pym/portage/util/_ctypes.py | 10 +-
30 files changed, 2121 insertions(+), 75 deletions(-)
diff --cc pym/portage/package/ebuild/doebuild.py
index 43f60ef,e4d3ae4..2f026ab
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -162,10 -160,15 +162,18 @@@ def _doebuild_path(settings, eapi=None)
eprefix = settings["EPREFIX"]
prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
+ overrides = [x for x in settings.get(
+ "__PORTAGE_TEST_PATH_OVERRIDE", "").split(":") if x]
+
+ prefixes = []
+ if eprefix:
+ prefixes.append(eprefix)
+ prefixes.append("/")
+
+ path = overrides
+ # PREFIX LOCAL: use DEFAULT_PATH and EXTRA_PATH from make.globals
+ defaultpath = [x for x in settings.get("DEFAULT_PATH", "").split(":") if x]
+ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
- path = []
if eprefix and uid != 0 and "fakeroot" not in settings.features:
path.append(os.path.join(portage_bin_path,
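One plausible reading of the PATH construction started in the hunk above (the full function is not quoted here): test overrides first, then default entries qualified by each prefix (EPREFIX, then /), then the extra entries. The helper below is a simplified, invented sketch of that idea, not the actual doebuild code:

    import os

    def assemble_path(eprefix, default_path, extra_path, overrides=()):
        prefixes = ([eprefix] if eprefix else []) + ["/"]
        path = list(overrides)
        for prefix in prefixes:
            for entry in default_path:
                path.append(os.path.join(prefix, entry.lstrip("/")))
        path.extend(extra_path)
        return ":".join(path)

    # assemble_path("/home/me/gentoo", ["/usr/bin", "/bin"], [])
    #   -> "/home/me/gentoo/usr/bin:/home/me/gentoo/bin:/usr/bin:/bin"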
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2013-01-05 18:14 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2013-01-05 18:14 UTC (permalink / raw
To: gentoo-commits
commit: cefe9ddc5e1b8a1ba727afddd11638723875949c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 5 18:12:01 2013 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Jan 5 18:12:01 2013 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=cefe9ddc
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/archive-conf
bin/binhost-snapshot
bin/clean_locks
bin/dispatch-conf
bin/ebuild
bin/ebuild-ipc.py
bin/egencache
bin/emaint
bin/emerge
bin/env-update
bin/fixpackages
bin/glsa-check
bin/lock-helper.py
bin/phase-functions.sh
bin/portageq
bin/quickpkg
bin/regenworld
bin/repoman
bin/xpak-helper.py
.gitignore | 2 +
bin/archive-conf | 3 +-
bin/binhost-snapshot | 3 +-
bin/clean_locks | 3 +-
bin/dispatch-conf | 3 +-
bin/ebuild | 4 +-
bin/ebuild-ipc.py | 175 ++++---------
bin/egencache | 67 +----
bin/emaint | 3 +-
bin/emerge | 5 +-
bin/env-update | 3 +-
bin/fixpackages | 3 +-
bin/glsa-check | 3 +-
bin/lock-helper.py | 3 +-
bin/phase-functions.sh | 6 +-
bin/phase-helpers.sh | 8 +-
bin/portageq | 3 +-
bin/quickpkg | 3 +-
bin/regenworld | 3 +-
bin/repoman | 19 +-
bin/xpak-helper.py | 3 +-
man/color.map.5 | 4 +-
man/ebuild.1 | 4 +-
man/ebuild.5 | 11 +-
man/egencache.1 | 4 +-
man/emerge.1 | 8 +-
man/make.conf.5 | 6 +-
man/portage.5 | 2 +-
man/quickpkg.1 | 20 ++-
pym/_emerge/AbstractEbuildProcess.py | 2 +-
pym/_emerge/AsynchronousLock.py | 58 ++---
pym/_emerge/AsynchronousTask.py | 14 +
pym/_emerge/BinpkgFetcher.py | 2 +-
pym/_emerge/BinpkgVerifier.py | 143 +++++++----
pym/_emerge/EbuildBuild.py | 4 +-
pym/_emerge/EbuildFetcher.py | 6 +-
pym/_emerge/EbuildMetadataPhase.py | 19 +-
pym/_emerge/FakeVartree.py | 72 ++++--
pym/_emerge/FifoIpcDaemon.py | 29 ++-
pym/_emerge/MergeListItem.py | 2 +-
pym/_emerge/PackageUninstall.py | 6 +-
pym/_emerge/PipeReader.py | 22 ++-
pym/_emerge/PollScheduler.py | 6 +-
pym/_emerge/Scheduler.py | 9 +-
pym/_emerge/SpawnProcess.py | 6 +-
pym/_emerge/actions.py | 77 +++---
pym/_emerge/depgraph.py | 89 +++++--
pym/_emerge/help.py | 4 +-
pym/_emerge/main.py | 6 +-
pym/portage/__init__.py | 4 +-
pym/portage/cache/flat_hash.py | 9 +-
pym/portage/checksum.py | 17 +-
pym/portage/dbapi/_MergeProcess.py | 18 +-
pym/portage/dbapi/_SyncfsProcess.py | 53 ++++
pym/portage/dbapi/bintree.py | 38 ++-
pym/portage/dbapi/porttree.py | 15 +-
pym/portage/dbapi/vartree.py | 65 ++---
pym/portage/dep/__init__.py | 14 +-
pym/portage/locks.py | 11 +-
pym/portage/manifest.py | 11 +-
pym/portage/package/ebuild/_spawn_nofetch.py | 6 +-
pym/portage/package/ebuild/doebuild.py | 54 ++++-
pym/portage/process.py | 14 +-
pym/portage/tests/dep/test_paren_reduce.py | 11 +-
.../tests/process/test_PopenProcessBlockingIO.py | 63 +++++
pym/portage/tests/repoman/test_echangelog.py | 6 +-
pym/portage/tests/runTests | 1 +
pym/portage/util/_async/FileDigester.py | 73 ++++++
pym/portage/util/_async/PipeLogger.py | 22 +-
pym/portage/util/_async/PipeReaderBlockingIO.py | 91 +++++++
pym/portage/util/_async/SchedulerInterface.py | 25 +--
pym/portage/util/_async/run_main_scheduler.py | 41 +++
pym/portage/util/_ctypes.py | 10 +-
pym/portage/util/_eventloop/EventLoop.py | 271 +++++++++++++------
pym/repoman/checks.py | 17 +-
runtests.sh | 32 +++-
76 files changed, 1287 insertions(+), 665 deletions(-)
diff --cc bin/archive-conf
index b166977,2c10884..26c05cf
--- a/bin/archive-conf
+++ b/bin/archive-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/binhost-snapshot
index f5b9979,e9bd45a..4c64cb7
--- a/bin/binhost-snapshot
+++ b/bin/binhost-snapshot
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2012 Gentoo Foundation
+ # Copyright 2010-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import io
diff --cc bin/clean_locks
index 29c22ff,0a97918..71bd42b
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/dispatch-conf
index 5803026,e44c0ea..8e070ee
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild
index 7725f67,754b9a9..a3ac418
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/ebuild-ipc.py
index 7533c8c,d351e94..a3432c8
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2012 Gentoo Foundation
+ # Copyright 2010-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# This is a helper which ebuild processes can use
diff --cc bin/egencache
index afa8db1,87673a0..c071c10
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2009-2012 Gentoo Foundation
+ # Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/emaint
index 932d974,bc1b92f..cf754e6
--- a/bin/emaint
+++ b/bin/emaint
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 2005-2012 Gentoo Foundation
+ # Copyright 2005-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
"""'The emaint program provides an interface to system health
diff --cc bin/emerge
index 463b05c,fc85d58..d7470ed
--- a/bin/emerge
+++ b/bin/emerge
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2006-2012 Gentoo Foundation
+ # Copyright 2006-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/env-update
index ef3433f,b500c54..09ccb4d
--- a/bin/env-update
+++ b/bin/env-update
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/fixpackages
index 3f37e3f,e224b62..b31a4b8
--- a/bin/fixpackages
+++ b/bin/fixpackages
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/glsa-check
index 5fe9b75,ed0df35..14f6e47
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2008-2012 Gentoo Foundation
+ # Copyright 2008-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/lock-helper.py
index 2e9c887,128e4dd..3a27e32
--- a/bin/lock-helper.py
+++ b/bin/lock-helper.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2011 Gentoo Foundation
+ # Copyright 2010-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os
diff --cc bin/phase-functions.sh
index 52a9954,01c6f55..e8d9863
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PREFIX_PORTAGE_BASH@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Hardcoded bash lists are needed for backward compatibility with
diff --cc bin/portageq
index b7ab6a4,ee776ef..5c9c064
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/quickpkg
index 4d3a15e,19d90b0..e271881
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/regenworld
index 0b7da44,f74b3dd..3416aff
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
@@@ -9,7 -9,10 +9,8 @@@ from os import path as os
pym_path = osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym")
sys.path.insert(0, pym_path)
import portage
+ portage._internal_caller = True
from portage import os
-from portage._sets.files import StaticFileSet, WorldSelectedSet
-
import re
import tempfile
import textwrap
diff --cc bin/repoman
index eeb6882,4c00d5b..54a8015
--- a/bin/repoman
+++ b/bin/repoman
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2012 Gentoo Foundation
+ # Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Next to do: dep syntax checking in mask files
diff --cc bin/xpak-helper.py
index 1d57069,7a3865c..cc7f120
--- a/bin/xpak-helper.py
+++ b/bin/xpak-helper.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2009-2011 Gentoo Foundation
+ # Copyright 2009-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import optparse
diff --cc pym/portage/dbapi/vartree.py
index f1970d1,ba149b7..8af2c50
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -29,14 -30,11 +30,14 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.env_update:env_update',
'portage.util.listdir:dircache,listdir',
'portage.util.movefile:movefile',
- 'portage.util._ctypes:find_library,LoadLibrary',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.util._async.SchedulerInterface:SchedulerInterface',
'portage.util._eventloop.EventLoop:EventLoop',
+ 'portage.util._eventloop.global_event_loop:global_event_loop',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
'_pkgsplit@pkgsplit,_pkg_str,_unknown_repo',
'subprocess',
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-26 14:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-26 14:48 UTC (permalink / raw
To: gentoo-commits
commit: 487202f9db043fc140df65f644bbb3d2ce8039db
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 26 14:41:49 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Dec 26 14:41:49 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=487202f9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/_emerge/actions.py
bin/dispatch-conf | 2 +-
bin/ebuild-helpers/prepstrip | 48 +++++-
bin/phase-functions.sh | 3 +-
bin/repoman | 39 ++++-
bin/xattr-helper.py | 176 ++++++++++++++++++++
man/ebuild.5 | 7 +-
man/emerge.1 | 4 +-
pym/_emerge/BinpkgFetcher.py | 2 +-
pym/_emerge/EbuildMetadataPhase.py | 2 +-
pym/_emerge/Package.py | 12 +-
pym/_emerge/SpawnProcess.py | 2 +-
pym/_emerge/actions.py | 56 ++++--
pym/_emerge/depgraph.py | 9 +-
pym/_emerge/help.py | 2 +-
pym/_emerge/is_valid_package_atom.py | 2 +-
pym/_emerge/main.py | 17 ++-
pym/_emerge/resolver/slot_collision.py | 3 +
pym/portage/__init__.py | 11 ++
pym/portage/dbapi/_MergeProcess.py | 3 +
pym/portage/dbapi/bintree.py | 10 +-
pym/portage/dbapi/dep_expand.py | 4 +-
pym/portage/dbapi/porttree.py | 9 +-
pym/portage/dbapi/vartree.py | 14 +-
pym/portage/dep/__init__.py | 2 +-
pym/portage/getbinpkg.py | 2 +-
.../{_eapi_invalid.py => _metadata_invalid.py} | 0
pym/portage/package/ebuild/config.py | 3 +-
pym/portage/package/ebuild/doebuild.py | 39 ++++-
pym/portage/package/ebuild/fetch.py | 2 +-
pym/portage/package/ebuild/getmaskingstatus.py | 7 +
pym/portage/process.py | 2 +-
pym/repoman/checks.py | 10 +-
runtests.sh | 2 +-
33 files changed, 424 insertions(+), 82 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 15:47 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 15:47 UTC (permalink / raw
To: gentoo-commits
commit: aff61c48a5da9797de3b5a4090710deab99517b9
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 15:46:40 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 15:46:40 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=aff61c48
tarball.sh: fix after configure.in rename
---
tarball.sh | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index 70116ac..dfe3cfc 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -29,7 +29,7 @@ rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
sed -i -e '/^VERSION=/s/^.*$/VERSION="'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/doc/fragment/version
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/*
-sed -i -e "s/@version@/${V}/" ${DEST}/configure.in
+sed -i -e "s/@version@/${V}/" ${DEST}/configure.ac
cd ${DEST}
find -name '*~' | xargs --no-run-if-empty rm -f
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 15:36 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 15:36 UTC (permalink / raw
To: gentoo-commits
commit: 11b7b1409f97ffbbc76968ee0da9ec63825634fb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 15:36:43 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 15:36:43 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=11b7b140
runtests.sh: make shebang usable for build tree
---
runtests.sh | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/runtests.sh b/runtests.sh
index 3193409..b2dfc4d 100755
--- a/runtests.sh
+++ b/runtests.sh
@@ -1,4 +1,4 @@
-#!@PORTAGE_BASH@
+#!/usr/bin/env bash
# Copyright 2010-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 15:33 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 15:33 UTC (permalink / raw
To: gentoo-commits
commit: 559635f1413e0222434b7ef695ec88ec92e41426
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 15:26:00 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 15:26:00 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=559635f1
configure.in -> configure.ac for newer autoconf
---
configure.in => configure.ac | 0
1 files changed, 0 insertions(+), 0 deletions(-)
diff --git a/configure.in b/configure.ac
similarity index 100%
rename from configure.in
rename to configure.ac
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 15:33 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 15:33 UTC (permalink / raw
To: gentoo-commits
commit: 8cdf36062e5e36baa2098eed40387c656c0f4607
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 15:29:39 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 15:29:39 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=8cdf3606
GENTOO_PATH_PYTHON: fix syntax for Python 3
---
acinclude.m4 | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/acinclude.m4 b/acinclude.m4
index 5bf24da..1d9ccf5 100644
--- a/acinclude.m4
+++ b/acinclude.m4
@@ -24,9 +24,9 @@ AC_DEFUN([GENTOO_PATH_PYTHON],
fi
dnl is it the version we want?
- ver=`$PREFIX_PORTAGE_PYTHON -c 'import sys; print sys.version.split(" ")[[0]]'`
+ ver=`$PREFIX_PORTAGE_PYTHON -c 'import sys; print(sys.version.split(" ")[[0]])'`
AC_MSG_CHECKING([whether $PREFIX_PORTAGE_PYTHON $ver >= $1])
- cmp=`$PREFIX_PORTAGE_PYTHON -c 'import sys; print sys.version.split(" ")[[0]] >= "$1"'`
+ cmp=`$PREFIX_PORTAGE_PYTHON -c 'import sys; print(sys.version.split(" ")[[0]] >= "$1")'`
if test "$cmp" = "True" ; then
AC_MSG_RESULT([yes])
else
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 15:33 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 15:33 UTC (permalink / raw
To: gentoo-commits
commit: e10d741c28c6b5cd0705be1a3676aa2b6daea5a9
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 15:32:48 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 15:32:48 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=e10d741c
configure.ac: fix syntax for Python 3
---
configure.ac | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/configure.ac b/configure.ac
index d18e59e..5251099 100644
--- a/configure.ac
+++ b/configure.ac
@@ -95,7 +95,7 @@ AC_HELP_STRING([--with-offset-prefix],
if test "x$PORTAGE_EPREFIX" != "x"
then
- PORTAGE_EPREFIX=`${PREFIX_PORTAGE_PYTHON} -c "import os; print os.path.normpath('$PORTAGE_EPREFIX')"`
+ PORTAGE_EPREFIX=`${PREFIX_PORTAGE_PYTHON} -c "import os; print(os.path.normpath('$PORTAGE_EPREFIX'))"`
DEFAULT_PATH="${PORTAGE_EPREFIX}/usr/sbin:${PORTAGE_EPREFIX}/usr/bin:${PORTAGE_EPREFIX}/sbin:${PORTAGE_EPREFIX}/bin"
else
# this is what trunk uses in ebuild.sh
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 13:12 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 13:12 UTC (permalink / raw
To: gentoo-commits
commit: 5ce9bb3828e510b677742fe576291132bdcc837b
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 13:12:26 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 13:12:26 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5ce9bb38
tabcheck: fix for Python 3
---
tabcheck.py | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tabcheck.py b/tabcheck.py
index c45b748..fe5227c 100755
--- a/tabcheck.py
+++ b/tabcheck.py
@@ -3,5 +3,5 @@
import tabnanny,sys
for x in sys.argv:
- print "Tabchecking " + x
+ print ("Tabchecking " + x)
tabnanny.check(x)
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-12-02 12:59 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-12-02 12:59 UTC (permalink / raw
To: gentoo-commits
commit: 6f702fe9d6af3e8f58cb7850472df69fc870d6df
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 2 12:59:01 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Dec 2 12:59:01 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=6f702fe9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/archive-conf | 2 +-
bin/eapi.sh | 8 +
bin/ebuild-helpers/dodoc | 2 +-
bin/ebuild-helpers/doinfo | 2 +-
bin/ebuild-helpers/prepman | 2 +-
bin/ebuild-helpers/prepstrip | 78 +++++---
bin/ebuild.sh | 3 +-
bin/egencache | 2 +-
bin/emerge-webrsync | 9 +-
bin/glsa-check | 14 +-
bin/misc-functions.sh | 21 +-
bin/phase-functions.sh | 3 +-
bin/phase-helpers.sh | 28 ++-
bin/repoman | 61 ++++--
bin/save-ebuild-env.sh | 2 +
cnf/make.globals | 2 +-
doc/package/ebuild/eapi/4-python.docbook | 36 ++++
doc/package/ebuild/eapi/5-progress.docbook | 78 +++++++
man/egencache.1 | 28 ++-
man/emerge.1 | 11 +-
man/portage.5 | 8 +-
misc/emerge-delta-webrsync | 9 +-
pym/_emerge/AbstractEbuildProcess.py | 4 +-
pym/_emerge/AsynchronousLock.py | 1 +
pym/_emerge/BlockerDB.py | 7 +-
pym/_emerge/EbuildBuild.py | 4 +-
pym/_emerge/EbuildBuildDir.py | 4 +-
pym/_emerge/EbuildExecuter.py | 2 +-
pym/_emerge/EbuildMetadataPhase.py | 9 +
pym/_emerge/FakeVartree.py | 32 ++--
pym/_emerge/Package.py | 150 ++++++++++----
pym/_emerge/PackageVirtualDbapi.py | 4 +-
pym/_emerge/Scheduler.py | 32 ++--
pym/_emerge/actions.py | 122 ++++++++----
pym/_emerge/depgraph.py | 213 ++++++++------------
pym/_emerge/main.py | 10 +
pym/_emerge/resolver/circular_dependency.py | 14 +-
pym/_emerge/resolver/output.py | 14 +-
pym/_emerge/resolver/slot_collision.py | 7 +-
pym/portage/__init__.py | 1 +
pym/portage/_sets/base.py | 4 +-
pym/portage/cache/flat_hash.py | 10 +-
pym/portage/dbapi/__init__.py | 19 ++-
pym/portage/dbapi/_similar_name_search.py | 57 ++++++
pym/portage/dbapi/bintree.py | 2 +-
pym/portage/dbapi/porttree.py | 7 +
pym/portage/dbapi/vartree.py | 7 +-
pym/portage/dep/__init__.py | 67 ++++++-
pym/portage/dep/_slot_operator.py | 4 +-
pym/portage/dep/dep_check.py | 4 +-
pym/portage/eapi.py | 6 +
pym/portage/elog/mod_save.py | 24 ++-
pym/portage/elog/mod_save_summary.py | 21 ++-
pym/portage/emaint/modules/move/move.py | 22 ++-
pym/portage/package/ebuild/_config/UseManager.py | 119 +++++++++++-
.../package/ebuild/_config/special_env_vars.py | 7 +-
.../package/ebuild/_config/unpack_dependencies.py | 38 ++++
pym/portage/package/ebuild/_spawn_nofetch.py | 2 +-
pym/portage/package/ebuild/config.py | 5 +-
pym/portage/package/ebuild/doebuild.py | 38 +++-
pym/portage/package/ebuild/getmaskingstatus.py | 2 +-
pym/portage/repository/config.py | 13 +-
pym/portage/tests/dbapi/test_portdb_cache.py | 16 +-
pym/portage/tests/emerge/test_simple.py | 2 +-
pym/portage/tests/repoman/test_simple.py | 2 +-
pym/portage/tests/resolver/ResolverPlayground.py | 17 +-
.../resolver/test_depclean_slot_unavailable.py | 79 +++++++
.../tests/resolver/test_unpack_dependencies.py | 65 ++++++
pym/portage/tests/resolver/test_use_aliases.py | 131 ++++++++++++
pym/portage/tests/update/test_update_dbentry.py | 45 ++++
pym/portage/update.py | 32 +++-
pym/portage/util/_desktop_entry.py | 1 +
pym/portage/util/_eventloop/EventLoop.py | 27 ++-
pym/portage/util/movefile.py | 15 +-
pym/repoman/checks.py | 39 ++---
75 files changed, 1482 insertions(+), 506 deletions(-)
diff --cc bin/misc-functions.sh
index c30c7e0,6f84526..3684d7c
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc cnf/make.globals
index 9739fc1,80a68f7..6db019b
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -63,15 -63,9 +63,15 @@@ FEATURES="assume-digests binpkg-log
# Ignore file collisions for unowned *.pyo and *.pyc files, this helps during
# transition from compiling python modules in live file system to compiling
# them in src_install() function.
- COLLISION_IGNORE="/lib/modules/* *.py[co]"
+ COLLISION_IGNORE="/lib/modules/* *.py[co] *\$py.class"
UNINSTALL_IGNORE="/lib/modules/*"
+# Prefix: we want preserve-libs, not sure how mainline goes about this
+FEATURES="${FEATURES} preserve-libs"
+
+# Force EPREFIX, ED and EROOT to exist in all EAPIs, not just 3 and up
+FEATURES="${FEATURES} force-prefix"
+
# By default wait 5 secs before cleaning a package
CLEAN_DELAY="5"
diff --cc pym/_emerge/Package.py
index 315811c,86ed5f7..3ac1c33
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@@ -296,16 -326,9 +327,16 @@@ class Package(Task)
if missing_keywords:
masks['KEYWORDS'] = missing_keywords
+ if self.built and not self.installed:
+ # we can have an old binary which has no EPREFIX information
+ if "EPREFIX" not in self.metadata:
+ masks['EPREFIX.missing'] = ''
+ if len(self.metadata["EPREFIX"].strip()) < len(EPREFIX):
+ masks['EPREFIX.tooshort'] = self.metadata["EPREFIX"].strip()
+
try:
missing_properties = settings._getMissingProperties(
- self.cpv, self.metadata)
+ self.cpv, self._metadata)
if missing_properties:
masks['PROPERTIES'] = missing_properties
except InvalidDependString:
diff --cc pym/_emerge/depgraph.py
index 58a4893,65a94ab..88d95ae
--- a/pym/_emerge/depgraph.py
+++ b/pym/_emerge/depgraph.py
@@@ -7647,16 -7604,10 +7604,16 @@@ def _get_masking_status(pkg, pkgsetting
portdb=root_config.trees["porttree"].dbapi, myrepo=myrepo)
if not pkg.installed:
- if not pkgsettings._accept_chost(pkg.cpv, pkg.metadata):
+ if not pkgsettings._accept_chost(pkg.cpv, pkg._metadata):
mreasons.append(_MaskReason("CHOST", "CHOST: %s" % \
- pkg.metadata["CHOST"]))
+ pkg._metadata["CHOST"]))
+ if pkg.built and not pkg.installed:
+ if not "EPREFIX" in pkg.metadata:
+ mreasons.append(_MaskReason("EPREFIX", "missing EPREFIX"))
+ elif len(pkg.metadata["EPREFIX"].strip()) < len(pkgsettings["EPREFIX"]):
+ mreasons.append(_MaskReason("EPREFIX", "EPREFIX: '%s' too small" % pkg.metadata["EPREFIX"]))
+
if pkg.invalid:
for msgs in pkg.invalid.values():
for msg in msgs:
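The EPREFIX sanity checks added in the Package.py and depgraph.py hunks above amount to a small predicate; this restatement uses an invented helper name and a plain dict for the metadata:

    def eprefix_mask_reason(metadata, settings_eprefix):
        # old binaries built without prefix support carry no EPREFIX at all
        if "EPREFIX" not in metadata:
            return "missing EPREFIX"
        recorded = metadata["EPREFIX"].strip()
        # a recorded offset shorter than the configured one cannot match it
        if len(recorded) < len(settings_eprefix):
            return "EPREFIX: '%s' too small" % recorded
        return None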
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-11-04 10:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-11-04 10:48 UTC (permalink / raw
To: gentoo-commits
commit: 9d0d6b98cb2f0287d460f120d18cb9bfa8b6f005
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Nov 4 10:47:51 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Nov 4 10:47:51 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=9d0d6b98
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
.gitignore | 1 +
bin/isolated-functions.sh | 2 +
bin/misc-functions.sh | 19 +++-
bin/repoman | 12 ++
cnf/make.globals | 2 +-
man/make.conf.5 | 5 +
man/repoman.1 | 5 +-
pym/_emerge/actions.py | 18 +++
pym/_emerge/depgraph.py | 37 ++++-
pym/_emerge/resolver/backtracking.py | 26 +++-
pym/portage/const.py | 1 +
pym/portage/dbapi/_MergeProcess.py | 6 +
pym/portage/dbapi/vartree.py | 162 ++++++++++++++++++--
.../tests/resolver/test_slot_conflict_rebuild.py | 107 +++++++++++++
.../resolver/test_slot_operator_unsatisfied.py | 70 +++++++++
pym/portage/util/_ctypes.py | 47 ++++++
pym/portage/util/_desktop_entry.py | 4 +-
pym/repoman/checks.py | 1 +
18 files changed, 496 insertions(+), 29 deletions(-)
diff --cc bin/misc-functions.sh
index 6d9d1e4,db023e4..d64a61f
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc pym/portage/dbapi/vartree.py
index 07aac3d,8d908fc..8b71a6e
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -30,11 -30,9 +30,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.env_update:env_update',
'portage.util.listdir:dircache,listdir',
'portage.util.movefile:movefile',
+ 'portage.util._ctypes:find_library,LoadLibrary',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.util._async.SchedulerInterface:SchedulerInterface',
'portage.util._eventloop.EventLoop:EventLoop',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-10-22 17:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-10-22 17:25 UTC (permalink / raw
To: gentoo-commits
commit: 79403078f737a0d836394b3ebe37fe9435c56ae6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 22 17:23:27 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Oct 22 17:23:27 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=79403078
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
cnf/make.globals
pym/_emerge/main.py
pym/portage/dbapi/vartree.py
pym/portage/getbinpkg.py
bin/eapi.sh | 20 +
bin/ebuild | 2 +-
bin/ebuild.sh | 20 +-
bin/egencache | 169 ++++-
bin/emerge | 97 ++--
bin/glsa-check | 2 +-
bin/misc-functions.sh | 38 +-
bin/phase-helpers.sh | 181 +++++-
bin/portageq | 116 +++-
bin/repoman | 75 ++-
bin/save-ebuild-env.sh | 9 +-
cnf/make.globals | 3 -
doc/package/ebuild/eapi/5-progress.docbook | 30 +
man/ebuild.5 | 7 +-
man/egencache.1 | 21 +
man/emerge.1 | 7 +-
man/make.conf.5 | 16 +-
man/portage.5 | 4 +-
man/repoman.1 | 2 +-
pym/_emerge/AbstractEbuildProcess.py | 13 +-
pym/_emerge/AbstractPollTask.py | 2 +-
pym/_emerge/AsynchronousLock.py | 8 +-
pym/_emerge/CompositeTask.py | 4 +
pym/_emerge/EbuildFetcher.py | 53 +-
pym/_emerge/EbuildMetadataPhase.py | 2 +-
pym/_emerge/FifoIpcDaemon.py | 10 +-
pym/_emerge/MergeListItem.py | 6 +-
pym/_emerge/MetadataRegen.py | 64 +--
pym/_emerge/PackageMerge.py | 7 +-
pym/_emerge/PipeReader.py | 10 +-
pym/_emerge/PollScheduler.py | 127 +---
pym/_emerge/QueueScheduler.py | 105 ---
pym/_emerge/Scheduler.py | 84 ++-
pym/_emerge/SpawnProcess.py | 165 +----
pym/_emerge/SubProcess.py | 2 +-
pym/_emerge/TaskScheduler.py | 26 -
pym/_emerge/actions.py | 818 ++++++++++++++++++--
pym/_emerge/chk_updated_cfg_files.py | 42 +
pym/_emerge/create_world_atom.py | 25 +-
pym/_emerge/depgraph.py | 46 +-
pym/_emerge/main.py | 800 +------------------
pym/_emerge/post_emerge.py | 165 ++++
pym/_emerge/resolver/output.py | 121 ++-
pym/_emerge/resolver/output_helpers.py | 4 +-
pym/portage/__init__.py | 13 +-
pym/portage/_sets/dbapi.py | 25 +-
pym/portage/_sets/files.py | 4 +-
pym/portage/_sets/libs.py | 10 +-
pym/portage/_sets/security.py | 4 +-
pym/portage/cache/mappings.py | 6 +-
pym/portage/dbapi/_MergeProcess.py | 183 +++---
pym/portage/dbapi/bintree.py | 76 +--
pym/portage/dbapi/porttree.py | 23 +-
pym/portage/dbapi/vartree.py | 49 +-
pym/portage/dep/__init__.py | 25 +-
pym/portage/emaint/modules/logs/__init__.py | 2 +-
pym/portage/emaint/modules/move/move.py | 18 +-
pym/portage/getbinpkg.py | 40 +
pym/portage/glsa.py | 9 +-
pym/portage/manifest.py | 10 +-
.../package/ebuild/_config/LocationsManager.py | 6 +-
pym/portage/package/ebuild/_config/MaskManager.py | 31 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/_ipc/QueryCommand.py | 86 ++-
.../ebuild/_parallel_manifest/ManifestProcess.py | 43 +
.../ebuild/_parallel_manifest/ManifestScheduler.py | 92 +++
.../ebuild/_parallel_manifest/ManifestTask.py | 180 +++++
.../ebuild/_parallel_manifest}/__init__.py | 0
pym/portage/package/ebuild/_spawn_nofetch.py | 8 +-
pym/portage/package/ebuild/config.py | 4 +-
pym/portage/package/ebuild/doebuild.py | 21 +-
pym/portage/package/ebuild/fetch.py | 5 +-
pym/portage/package/ebuild/getmaskingreason.py | 20 +-
pym/portage/process.py | 42 +-
pym/portage/proxy/lazyimport.py | 4 +-
pym/portage/repository/config.py | 8 +-
pym/portage/tests/dep/testAtom.py | 6 +-
pym/portage/tests/dep/test_best_match_to_list.py | 6 +-
pym/portage/tests/ebuild/test_doebuild_spawn.py | 7 +-
pym/portage/tests/ebuild/test_ipc_daemon.py | 74 ++-
pym/portage/tests/ebuild/test_spawn.py | 5 +-
.../test_lazy_import_portage_baseline.py | 4 +-
pym/portage/tests/locks/test_asynchronous_lock.py | 10 +-
pym/portage/tests/process/test_PopenProcess.py | 88 +++
pym/portage/tests/process/test_poll.py | 17 +-
pym/portage/util/_async/AsyncScheduler.py | 102 +++
pym/portage/util/_async/ForkProcess.py | 63 ++
pym/portage/util/_async/PipeLogger.py | 146 ++++
pym/portage/util/_async/PopenProcess.py | 33 +
pym/portage/util/_async/SchedulerInterface.py | 86 ++
pym/portage/util/_async/TaskScheduler.py | 20 +
.../{tests/update => util/_async}/__init__.py | 0
.../util/_dyn_libs/display_preserved_libs.py | 79 ++
pym/portage/util/_info_files.py | 138 ++++
pym/repoman/checks.py | 25 +-
runtests.sh | 2 +-
96 files changed, 3575 insertions(+), 1913 deletions(-)
diff --cc bin/misc-functions.sh
index 3b2c309,f3b0cc0..6d9d1e4
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -834,10 -780,11 +834,11 @@@ install_qa_check_misc()
rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
if has multilib-strict ${FEATURES} && \
- [[ -x /usr/bin/file && -x /usr/bin/find ]] && \
+ [[ -x ${EPREFIX}/usr/bin/file && -x ${EPREFIX}/usr/bin/find ]] && \
[[ -n ${MULTILIB_STRICT_DIRS} && -n ${MULTILIB_STRICT_DENY} ]]
then
- local abort=no dir file firstrun=yes
+ rm -f "${T}/multilib-strict.log"
+ local abort=no dir file
MULTILIB_STRICT_EXEMPT=$(echo ${MULTILIB_STRICT_EXEMPT} | sed -e 's:\([(|)]\):\\\1:g')
for dir in ${MULTILIB_STRICT_DIRS} ; do
[[ -d ${ED}/${dir} ]] || continue
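The substantive change in this multilib-strict hunk is that file(1) and find(1) are now looked up under ${EPREFIX}/usr/bin instead of the host's /usr/bin. A minimal Python sketch of that lookup; the helper name is an assumption, the path layout comes from the test above.

import os

def prefixed_tool(eprefix, name):
    # Mirror the [[ -x ${EPREFIX}/usr/bin/file ]] test: return the
    # EPREFIX-rooted path of a tool only if it is executable there.
    candidate = os.path.join(eprefix or "/", "usr", "bin", name)
    return candidate if os.access(candidate, os.X_OK) else None

# prefixed_tool("/home/user/gentoo", "file") -> "/home/user/gentoo/usr/bin/file"
# (or None when the Prefix does not provide the tool)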
diff --cc cnf/make.globals
index 53c4bb6,e53f186..e61d3dd
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -66,15 -66,6 +66,12 @@@ FEATURES="assume-digests binpkg-log
COLLISION_IGNORE="/lib/modules/* *.py[co]"
UNINSTALL_IGNORE="/lib/modules/*"
+# Prefix: we want preserve-libs, not sure how mainline goes about this
+FEATURES="${FEATURES} preserve-libs"
+
+# Force EPREFIX, ED and EROOT to exist in all EAPIs, not just 3 and up
+FEATURES="${FEATURES} force-prefix"
+
- # Default chunksize for binhost comms
- PORTAGE_BINHOST_CHUNKSIZE="3000"
-
# By default wait 5 secs before cleaning a package
CLEAN_DELAY="5"
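For readers who do not live in make.globals: FEATURES is an incremental, whitespace-separated string, so the two appends above simply add preserve-libs and force-prefix to whatever the distributed defaults already contain. The consuming side, both bash's has() and portage's settings.features, boils down to a membership test, sketched here (not portage's actual config class):

def has_feature(features_string, flag):
    # mirrors `has preserve-libs ${FEATURES}` in the ebuild environment
    return flag in set(features_string.split())

assert has_feature("assume-digests preserve-libs force-prefix", "preserve-libs")
assert not has_feature("assume-digests", "force-prefix")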
diff --cc pym/_emerge/actions.py
index c8d04f2,fec2dfa..8902aab
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -28,9 -33,8 +33,9 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage.localization import _
from portage import os
from portage import shutil
- from portage import eapi_is_supported, _unicode_decode
+ from portage import eapi_is_supported, _encodings, _unicode_decode
from portage.cache.cache_errors import CacheError
+from portage.const import EPREFIX
from portage.const import GLOBAL_CONFIG_PATH
from portage.const import _DEPCLEAN_LIB_CHECK_DEFAULT
from portage.dbapi.dep_expand import dep_expand
diff --cc pym/_emerge/main.py
index 8553bbc,be5a5ca..b902234
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -3,51 -3,19 +3,20 @@@
from __future__ import print_function
- import logging
- import signal
- import stat
- import subprocess
- import sys
- import textwrap
import platform
+ import sys
+
import portage
portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.news:count_unread_news,display_news_notifications',
- 'portage.emaint.modules.logs.logs:CleanLogs',
+ 'logging',
+ 'portage.util:writemsg_level',
+ 'textwrap',
+ '_emerge.actions:load_emerge_config,run_action,' + \
+ 'validate_ebuild_environment',
+ '_emerge.help:help@emerge_help',
)
from portage import os
- from portage import _encodings
- from portage import _unicode_decode
- import _emerge.help
- import portage.xpak, errno, re, time
- from portage.output import colorize, xtermTitle, xtermTitleReset
- from portage.output import create_color_func
- good = create_color_func("GOOD")
- bad = create_color_func("BAD")
-
- import portage.elog
- import portage.util
- import portage.locks
- import portage.exception
- from portage.const import EPREFIX, EPREFIX_LSTRIP
- from portage.data import secpass
- from portage.dbapi.dep_expand import dep_expand
- from portage.util import normalize_path as normpath
- from portage.util import (shlex_split, varexpand,
- writemsg_level, writemsg_stdout)
- from portage._sets import SETPREFIX
- from portage._global_updates import _global_updates
-
- from _emerge.actions import action_config, action_sync, action_metadata, \
- action_regen, action_search, action_uninstall, action_info, action_build, \
- adjust_configs, chk_updated_cfg_files, display_missing_pkg_set, \
- display_news_notification, getportageversion, load_emerge_config
- import _emerge
- from _emerge.emergelog import emergelog
- from _emerge._flush_elog_mod_echo import _flush_elog_mod_echo
- from _emerge.is_valid_package_atom import is_valid_package_atom
- from _emerge.stdout_spinner import stdout_spinner
- from _emerge.userquery import userquery
++from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
long = int
@@@ -115,325 -83,6 +84,328 @@@ COWSAY_MOO = ""
"""
++<<<<<<< HEAD
+def chk_updated_info_files(root, infodirs, prev_mtimes, retval):
+
+ if os.path.exists(EPREFIX + "/usr/bin/install-info"):
+ out = portage.output.EOutput()
+ regen_infodirs=[]
+ for z in infodirs:
+ if z=='':
+ continue
+ inforoot=normpath(root+z)
+ if os.path.isdir(inforoot) and \
+ not [x for x in os.listdir(inforoot) \
+ if x.startswith('.keepinfodir')]:
+ infomtime = os.stat(inforoot)[stat.ST_MTIME]
+ if inforoot not in prev_mtimes or \
+ prev_mtimes[inforoot] != infomtime:
+ regen_infodirs.append(inforoot)
+
+ if not regen_infodirs:
+ portage.writemsg_stdout("\n")
+ if portage.util.noiselimit >= 0:
+ out.einfo("GNU info directory index is up-to-date.")
+ else:
+ portage.writemsg_stdout("\n")
+ if portage.util.noiselimit >= 0:
+ out.einfo("Regenerating GNU info directory index...")
+
+ dir_extensions = ("", ".gz", ".bz2")
+ icount=0
+ badcount=0
+ errmsg = ""
+ for inforoot in regen_infodirs:
+ if inforoot=='':
+ continue
+
+ if not os.path.isdir(inforoot) or \
+ not os.access(inforoot, os.W_OK):
+ continue
+
+ file_list = os.listdir(inforoot)
+ file_list.sort()
+ dir_file = os.path.join(inforoot, "dir")
+ moved_old_dir = False
+ processed_count = 0
+ for x in file_list:
+ if x.startswith(".") or \
+ os.path.isdir(os.path.join(inforoot, x)):
+ continue
+ if x.startswith("dir"):
+ skip = False
+ for ext in dir_extensions:
+ if x == "dir" + ext or \
+ x == "dir" + ext + ".old":
+ skip = True
+ break
+ if skip:
+ continue
+ if processed_count == 0:
+ for ext in dir_extensions:
+ try:
+ os.rename(dir_file + ext, dir_file + ext + ".old")
+ moved_old_dir = True
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+ processed_count += 1
+ try:
+ proc = subprocess.Popen(
+ ['%s/usr/bin/install-info'
+ '--dir-file=%s' % (EPREFIX, os.path.join(inforoot, "dir")),
+ os.path.join(inforoot, x)],
+ env=dict(os.environ, LANG="C", LANGUAGE="C"),
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ except OSError:
+ myso = None
+ else:
+ myso = _unicode_decode(
+ proc.communicate()[0]).rstrip("\n")
+ proc.wait()
+ existsstr="already exists, for file `"
+ if myso:
+ if re.search(existsstr,myso):
+ # Already exists... Don't increment the count for this.
+ pass
+ elif myso[:44]=="install-info: warning: no info dir entry in ":
+ # This info file doesn't contain a DIR-header: install-info produces this
+ # (harmless) warning (the --quiet switch doesn't seem to work).
+ # Don't increment the count for this.
+ pass
+ else:
+ badcount=badcount+1
+ errmsg += myso + "\n"
+ icount=icount+1
+
+ if moved_old_dir and not os.path.exists(dir_file):
+ # We didn't generate a new dir file, so put the old file
+ # back where it was originally found.
+ for ext in dir_extensions:
+ try:
+ os.rename(dir_file + ext + ".old", dir_file + ext)
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+
+ # Clean dir.old cruft so that they don't prevent
+ # unmerge of otherwise empty directories.
+ for ext in dir_extensions:
+ try:
+ os.unlink(dir_file + ext + ".old")
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ del e
+
+ #update mtime so we can potentially avoid regenerating.
+ prev_mtimes[inforoot] = os.stat(inforoot)[stat.ST_MTIME]
+
+ if badcount:
+ out.eerror("Processed %d info files; %d errors." % \
+ (icount, badcount))
+ writemsg_level(errmsg, level=logging.ERROR, noiselevel=-1)
+ else:
+ if icount > 0 and portage.util.noiselimit >= 0:
+ out.einfo("Processed %d info files." % (icount,))
+
+def display_preserved_libs(vardbapi, myopts):
+ MAX_DISPLAY = 3
+
+ if vardbapi._linkmap is None or \
+ vardbapi._plib_registry is None:
+ # preserve-libs is entirely disabled
+ return
+
+ # Explicitly load and prune the PreservedLibsRegistry in order
+ # to ensure that we do not display stale data.
+ vardbapi._plib_registry.load()
+
+ if vardbapi._plib_registry.hasEntries():
+ if "--quiet" in myopts:
+ print()
+ print(colorize("WARN", "!!!") + " existing preserved libs found")
+ return
+ else:
+ print()
+ print(colorize("WARN", "!!!") + " existing preserved libs:")
+
+ plibdata = vardbapi._plib_registry.getPreservedLibs()
+ linkmap = vardbapi._linkmap
+ consumer_map = {}
+ owners = {}
+
+ try:
+ linkmap.rebuild()
+ except portage.exception.CommandNotFound as e:
+ writemsg_level("!!! Command Not Found: %s\n" % (e,),
+ level=logging.ERROR, noiselevel=-1)
+ del e
+ else:
+ search_for_owners = set()
+ for cpv in plibdata:
+ internal_plib_keys = set(linkmap._obj_key(f) \
+ for f in plibdata[cpv])
+ for f in plibdata[cpv]:
+ if f in consumer_map:
+ continue
+ consumers = []
+ for c in linkmap.findConsumers(f):
+ # Filter out any consumers that are also preserved libs
+ # belonging to the same package as the provider.
+ if linkmap._obj_key(c) not in internal_plib_keys:
+ consumers.append(c)
+ consumers.sort()
+ consumer_map[f] = consumers
+ search_for_owners.update(consumers[:MAX_DISPLAY+1])
+
+ owners = {}
+ for f in search_for_owners:
+ owner_set = set()
+ for owner in linkmap.getOwners(f):
+ owner_dblink = vardbapi._dblink(owner)
+ if owner_dblink.exists():
+ owner_set.add(owner_dblink)
+ if owner_set:
+ owners[f] = owner_set
+
+ for cpv in plibdata:
+ print(colorize("WARN", ">>>") + " package: %s" % cpv)
+ samefile_map = {}
+ for f in plibdata[cpv]:
+ obj_key = linkmap._obj_key(f)
+ alt_paths = samefile_map.get(obj_key)
+ if alt_paths is None:
+ alt_paths = set()
+ samefile_map[obj_key] = alt_paths
+ alt_paths.add(f)
+
+ for alt_paths in samefile_map.values():
+ alt_paths = sorted(alt_paths)
+ for p in alt_paths:
+ print(colorize("WARN", " * ") + " - %s" % (p,))
+ f = alt_paths[0]
+ consumers = consumer_map.get(f, [])
+ for c in consumers[:MAX_DISPLAY]:
+ print(colorize("WARN", " * ") + " used by %s (%s)" % \
+ (c, ", ".join(x.mycpv for x in owners.get(c, []))))
+ if len(consumers) == MAX_DISPLAY + 1:
+ print(colorize("WARN", " * ") + " used by %s (%s)" % \
+ (consumers[MAX_DISPLAY], ", ".join(x.mycpv \
+ for x in owners.get(consumers[MAX_DISPLAY], []))))
+ elif len(consumers) > MAX_DISPLAY:
+ print(colorize("WARN", " * ") + " used by %d other files" % (len(consumers) - MAX_DISPLAY))
+ print("Use " + colorize("GOOD", "emerge @preserved-rebuild") + " to rebuild packages using these libraries")
+
+def post_emerge(myaction, myopts, myfiles,
+ target_root, trees, mtimedb, retval):
+ """
+ Misc. things to run at the end of a merge session.
+
+ Update Info Files
+ Update Config Files
+ Update News Items
+ Commit mtimeDB
+ Display preserved libs warnings
+
+ @param myaction: The action returned from parse_opts()
+ @type myaction: String
+ @param myopts: emerge options
+ @type myopts: dict
+ @param myfiles: emerge arguments
+ @type myfiles: list
+ @param target_root: The target EROOT for myaction
+ @type target_root: String
+ @param trees: A dictionary mapping each ROOT to it's package databases
+ @type trees: dict
+ @param mtimedb: The mtimeDB to store data needed across merge invocations
+ @type mtimedb: MtimeDB class instance
+ @param retval: Emerge's return value
+ @type retval: Int
+ """
+
+ root_config = trees[target_root]["root_config"]
+ vardbapi = trees[target_root]['vartree'].dbapi
+ settings = vardbapi.settings
+ info_mtimes = mtimedb["info"]
+
+ # Load the most current variables from ${ROOT}/etc/profile.env
+ settings.unlock()
+ settings.reload()
+ settings.regenerate()
+ settings.lock()
+
+ config_protect = shlex_split(settings.get("CONFIG_PROTECT", ""))
+ infodirs = settings.get("INFOPATH","").split(":") + \
+ settings.get("INFODIR","").split(":")
+
+ os.chdir("/")
+
+ if retval == os.EX_OK:
+ exit_msg = " *** exiting successfully."
+ else:
+ exit_msg = " *** exiting unsuccessfully with status '%s'." % retval
+ emergelog("notitles" not in settings.features, exit_msg)
+
+ _flush_elog_mod_echo()
+
+ if not vardbapi._pkgs_changed:
+ # GLEP 42 says to display news *after* an emerge --pretend
+ if "--pretend" in myopts:
+ display_news_notification(root_config, myopts)
+ # If vdb state has not changed then there's nothing else to do.
+ return
+
+ vdb_path = os.path.join(root_config.settings['EROOT'], portage.VDB_PATH)
+ portage.util.ensure_dirs(vdb_path)
+ vdb_lock = None
+ if os.access(vdb_path, os.W_OK) and not "--pretend" in myopts:
+ vardbapi.lock()
+ vdb_lock = True
+
+ if vdb_lock:
+ try:
+ if "noinfo" not in settings.features:
+ chk_updated_info_files(target_root + EPREFIX,
+ infodirs, info_mtimes, retval)
+ mtimedb.commit()
+ finally:
+ if vdb_lock:
+ vardbapi.unlock()
+
+ display_preserved_libs(vardbapi, myopts)
+ chk_updated_cfg_files(settings['EROOT'], config_protect)
+
+ display_news_notification(root_config, myopts)
+
+ postemerge = os.path.join(settings["PORTAGE_CONFIGROOT"],
+ portage.USER_CONFIG_PATH, "bin", "post_emerge")
+ if os.access(postemerge, os.X_OK):
+ hook_retval = portage.process.spawn(
+ [postemerge], env=settings.environ())
+ if hook_retval != os.EX_OK:
+ writemsg_level(
+ " %s spawn failed of %s\n" % (bad("*"), postemerge,),
+ level=logging.ERROR, noiselevel=-1)
+
+ clean_logs(settings)
+
+ if "--quiet" not in myopts and \
+ myaction is None and "@world" in myfiles:
+ show_depclean_suggestion()
+
+def show_depclean_suggestion():
+ out = portage.output.EOutput()
+ msg = "After world updates, it is important to remove " + \
+ "obsolete packages with emerge --depclean. Refer " + \
+ "to `man emerge` for more information."
+ for line in textwrap.wrap(msg, 72):
+ out.ewarn(line)
+
++=======
++>>>>>>> overlays-gentoo-org/master
def multiple_actions(action1, action2):
sys.stderr.write("\n!!! Multiple actions requested... Please choose one only.\n")
sys.stderr.write("!!! '%s' or '%s'\n\n" % (action1, action2))
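One detail worth noting in the chk_updated_info_files() block kept on the prefix side above: as rendered here, the adjacent string literals '%s/usr/bin/install-info' and '--dir-file=%s' concatenate into a single argv element before the % formatting is applied. Whether that is the committed code or a casualty of the archive's line wrapping, the intent is easier to see with one argv entry per argument. A self-contained sketch of the same call, using subprocess.run rather than the original Popen/communicate pair:

import os
import subprocess

def run_install_info(eprefix, inforoot, page):
    # Invoke install-info from the Prefix offset with an explicit argv,
    # capturing stdout and stderr together as the original code does.
    argv = [
        os.path.join(eprefix, "usr", "bin", "install-info"),
        "--dir-file=%s" % os.path.join(inforoot, "dir"),
        os.path.join(inforoot, page),
    ]
    proc = subprocess.run(
        argv,
        env=dict(os.environ, LANG="C", LANGUAGE="C"),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    return proc.stdout.decode(errors="replace").rstrip("\n")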
diff --cc pym/portage/dbapi/vartree.py
index dc97491,46afea5..07aac3d
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -32,11 -32,10 +32,13 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.movefile:movefile',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
+ 'portage.util._async.SchedulerInterface:SchedulerInterface',
+ 'portage.util._eventloop.EventLoop:EventLoop',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
- '_pkgsplit@pkgsplit,_pkg_str',
+ '_pkgsplit@pkgsplit,_pkg_str,_unknown_repo',
'subprocess',
'tarfile',
)
diff --cc pym/portage/getbinpkg.py
index 992e1f4,28b18a0..8c22145
--- a/pym/portage/getbinpkg.py
+++ b/pym/portage/getbinpkg.py
@@@ -18,7 -18,7 +18,8 @@@ import socke
import time
import tempfile
import base64
+from portage.const import CACHE_PATH
+ import warnings
_all_errors = [NotImplementedError, ValueError, socket.error]
diff --cc pym/portage/process.py
index b26d742,fbfbde0..969e7a3
--- a/pym/portage/process.py
+++ b/pym/portage/process.py
@@@ -15,10 -15,10 +15,10 @@@ from portage import _encoding
from portage import _unicode_encode
import portage
portage.proxy.lazyimport.lazyimport(globals(),
- 'portage.util:dump_traceback',
+ 'portage.util:dump_traceback,writemsg',
)
-from portage.const import BASH_BINARY, SANDBOX_BINARY, FAKEROOT_BINARY
+from portage.const import BASH_BINARY, SANDBOX_BINARY, MACOSSANDBOX_BINARY, FAKEROOT_BINARY
from portage.exception import CommandNotFound
try:
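The prefix-local edit to portage/process.py adds MACOSSANDBOX_BINARY alongside the regular SANDBOX_BINARY import. A very loose illustration of the kind of dispatch this enables; the function and the platform test are assumptions, not portage.process's actual logic:

import platform

def sandbox_wrapper(sandbox_binary, macossandbox_binary):
    # Hypothetical choice of sandbox wrapper: the Darwin-specific binary on
    # macOS, the regular LD_PRELOAD-based sandbox everywhere else.
    if platform.system() == "Darwin":
        return macossandbox_binary
    return sandbox_binary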
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-10-02 12:02 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-10-02 12:02 UTC (permalink / raw
To: gentoo-commits
commit: 8114c27f4546b23d527ba664e7e532877b710f88
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 2 12:02:30 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Oct 2 12:02:30 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=8114c27f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild.sh
bin/ebuild.sh | 4 ++--
bin/phase-functions.sh | 2 ++
2 files changed, 4 insertions(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-09-30 11:22 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-09-30 11:22 UTC (permalink / raw
To: gentoo-commits
commit: be56a2b322aa6149ec6c0873c44121b5ae0a2933
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 30 11:19:55 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Sep 30 11:19:55 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=be56a2b3
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/dobin
bin/ebuild-helpers/dodir
bin/ebuild-helpers/dodoc
bin/ebuild-helpers/doexe
bin/ebuild-helpers/doinfo
bin/ebuild-helpers/dolib
bin/ebuild-helpers/doman
bin/ebuild-helpers/domo
bin/ebuild-helpers/dosbin
bin/ebuild-helpers/ecompressdir
bin/ebuild-helpers/fperms
bin/ebuild-helpers/prepall
bin/ebuild-helpers/prepallinfo
bin/ebuild-helpers/prepallman
bin/ebuild-helpers/prepallstrip
bin/ebuild-helpers/prepinfo
bin/ebuild-helpers/preplib
bin/ebuild-helpers/prepman
bin/ebuild.sh
bin/misc-functions.sh
bin/eapi.sh | 113 ++++++++++++++
bin/ebuild-helpers/dobin | 7 +-
bin/ebuild-helpers/dodir | 7 +-
bin/ebuild-helpers/dodoc | 23 ++--
bin/ebuild-helpers/doexe | 7 +-
bin/ebuild-helpers/dohard | 19 +--
bin/ebuild-helpers/doheader | 8 +-
bin/ebuild-helpers/doinfo | 7 +-
bin/ebuild-helpers/doins | 18 +--
bin/ebuild-helpers/dolib | 7 +-
bin/ebuild-helpers/doman | 7 +-
bin/ebuild-helpers/domo | 7 +-
bin/ebuild-helpers/dosbin | 7 +-
bin/ebuild-helpers/dosed | 19 +--
bin/ebuild-helpers/dosym | 5 +-
bin/ebuild-helpers/ecompressdir | 7 +-
bin/ebuild-helpers/fowners | 10 +-
bin/ebuild-helpers/fperms | 7 +-
bin/ebuild-helpers/newins | 17 +--
bin/ebuild-helpers/prepall | 7 +-
bin/ebuild-helpers/prepalldocs | 17 +--
bin/ebuild-helpers/prepallinfo | 7 +-
bin/ebuild-helpers/prepallman | 9 +-
bin/ebuild-helpers/prepallstrip | 9 +-
bin/ebuild-helpers/prepinfo | 9 +-
bin/ebuild-helpers/preplib | 7 +-
bin/ebuild-helpers/prepman | 9 +-
bin/ebuild-helpers/prepstrip | 5 +-
bin/ebuild-helpers/unprivileged/chgrp | 1 +
bin/ebuild-helpers/unprivileged/chown | 33 ++++
bin/ebuild.sh | 42 ++---
bin/isolated-functions.sh | 17 +-
bin/misc-functions.sh | 50 ++++---
bin/phase-functions.sh | 55 +++-----
bin/phase-helpers.sh | 155 +++++++++-----------
bin/save-ebuild-env.sh | 9 +-
pym/_emerge/BlockerCache.py | 4 +-
pym/portage/dbapi/vartree.py | 4 +-
pym/portage/dep/_slot_operator.py | 14 +-
pym/portage/getbinpkg.py | 4 +-
.../package/ebuild/_config/LocationsManager.py | 7 +-
pym/portage/package/ebuild/_config/UseManager.py | 20 ++-
pym/portage/package/ebuild/doebuild.py | 4 +
pym/portage/tests/ebuild/test_ipc_daemon.py | 4 +-
pym/portage/util/__init__.py | 16 ++-
45 files changed, 486 insertions(+), 334 deletions(-)
diff --cc bin/ebuild-helpers/dobin
index 3d81c2d,0ba1eb0..2d38580
--- a/bin/ebuild-helpers/dobin
+++ b/bin/ebuild-helpers/dobin
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/dodir
index 3b9a171,e03ba9a..779bccc
--- a/bin/ebuild-helpers/dodir
+++ b/bin/ebuild-helpers/dodir
@@@ -1,11 -1,12 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
install -d ${DIROPTIONS} "${@/#/${ED}/}"
ret=$?
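The only non-mechanical line in the dodir helper is the expansion "${@/#/${ED}/}", which prepends ${ED}/ to every positional argument before install -d runs. The same transformation in plain Python, for readers who do not speak bash pattern replacement:

def prefix_with_ed(ed, args):
    # "${@/#/${ED}/}" replaces the empty anchor at the start of each
    # argument with "${ED}/", i.e. it prepends the offset directory.
    return [ed + "/" + a for a in args]

print(prefix_with_ed("/home/user/gentoo", ["usr/share/doc", "etc/portage"]))
# ['/home/user/gentoo/usr/share/doc', '/home/user/gentoo/etc/portage']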
diff --cc bin/ebuild-helpers/dodoc
index 7f5e364,c551735..4c1c7bc
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -2,19 -2,15 +2,15 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- case "${EAPI}" in
- 0|1|2|3)
- ;;
- *)
- exec \
- env \
- __PORTAGE_HELPER="dodoc" \
- doins "$@"
- ;;
- esac
-
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ if ___eapi_dodoc_supports_-r; then
+ exec \
+ env \
+ __PORTAGE_HELPER="dodoc" \
+ doins "$@"
+ fi
+
if [ $# -lt 1 ] ; then
__helpers_die "${0##*/}: at least one argument needed"
exit 1
diff --cc bin/ebuild-helpers/doexe
index 9c845ca,aa050e9..7ada92d
--- a/bin/ebuild-helpers/doexe
+++ b/bin/ebuild-helpers/doexe
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/doinfo
index db49bed,355047f..816337f
--- a/bin/ebuild-helpers/doinfo
+++ b/bin/ebuild-helpers/doinfo
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -z $1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/doins
index 343a150,4679e83..1fdc1d9
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -40,24 -41,12 +41,21 @@@ if [[ ${INSDESTTREE#${ED}} != "${INSDES
__helpers_die "${helper} used with \${D} or \${ED}"
exit 1
fi
+# PREFIX LOCAL: check for usage with EPREFIX
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ vecho "-------------------------------------------------------" 1>&2
+ vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ vecho " --> ${INSDESTTREE}" 1>&2
+ vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
+# END PREFIX LOCAL
- case "$EAPI" in
- 0|1|2|3)
- PRESERVE_SYMLINKS=n
- ;;
- *)
- PRESERVE_SYMLINKS=y
- ;;
- esac
+ if ___eapi_doins_and_newins_preserve_symlinks; then
+ PRESERVE_SYMLINKS=y
+ else
+ PRESERVE_SYMLINKS=n
+ fi
export TMP=$T/.doins_tmp
# Use separate directories to avoid potential name collisions.
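The PREFIX LOCAL block added to doins refuses install destinations that already carry ${EPREFIX}: the helpers prepend the offset themselves, so passing it in again would double it. The bash test ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" is just a prefix check; in Python terms:

def misuses_eprefix(insdesttree, eprefix):
    # True when the destination already starts with the offset, the case
    # the helper rejects with "You should not use ${EPREFIX} with helpers."
    return bool(eprefix) and insdesttree.startswith(eprefix)

assert misuses_eprefix("/home/user/gentoo/usr/share", "/home/user/gentoo")
assert not misuses_eprefix("/usr/share", "/home/user/gentoo")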
diff --cc bin/ebuild-helpers/dolib
index 67055bb,fd92d7f..f161f72
--- a/bin/ebuild-helpers/dolib
+++ b/bin/ebuild-helpers/dolib
@@@ -1,11 -1,12 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
# Setup ABI cruft
LIBDIR_VAR="LIBDIR_${ABI}"
diff --cc bin/ebuild-helpers/doman
index dd5440d,d680859..8273cae
--- a/bin/ebuild-helpers/doman
+++ b/bin/ebuild-helpers/doman
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/domo
index 8246973,9a8dda3..bb9fca2
--- a/bin/ebuild-helpers/domo
+++ b/bin/ebuild-helpers/domo
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
mynum=${#}
if [ ${mynum} -lt 1 ] ; then
diff --cc bin/ebuild-helpers/dosbin
index 9765b75,361ca83..0ddd655
--- a/bin/ebuild-helpers/dosbin
+++ b/bin/ebuild-helpers/dosbin
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/ecompressdir
index 464dd37,75f3e3a..83bf7fe
--- a/bin/ebuild-helpers/ecompressdir
+++ b/bin/ebuild-helpers/ecompressdir
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
if [[ -z $1 ]] ; then
__helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/fowners
index 8066ab7,cee4108..0aef378
--- a/bin/ebuild-helpers/fowners
+++ b/bin/ebuild-helpers/fowners
@@@ -2,11 -2,11 +2,12 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) EPREFIX= ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ EPREFIX= ED=${D}
+ fi
# we can't prefix all arguments because
# chown takes random options
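Several helpers in this merge trade the open-coded case "$EAPI" in 0|1|2) fallback for the new ___eapi_has_prefix_variables test, as fowners does above. The capability it encodes is simple: EPREFIX, ED and EROOT are only defined by the specification from EAPI 3 onwards, unless the force-prefix FEATURE from make.globals forces them everywhere. A one-line sketch of the EAPI side of that check; the Python function name is illustrative, not the bash helper itself:

def eapi_has_prefix_variables(eapi):
    # EPREFIX/ED/EROOT exist natively in EAPI 3 and later; older EAPIs fall
    # back to ED=${D} (and an empty EPREFIX), exactly as the hunks above do.
    return eapi not in ("0", "1", "2")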
diff --cc bin/ebuild-helpers/fperms
index 0824c15,d854ebb..94f6af2
--- a/bin/ebuild-helpers/fperms
+++ b/bin/ebuild-helpers/fperms
@@@ -1,11 -1,12 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
# we can't prefix all arguments because
# chmod takes random options
diff --cc bin/ebuild-helpers/prepall
index 3aacb7f,fb5c2db..407392f
--- a/bin/ebuild-helpers/prepall
+++ b/bin/ebuild-helpers/prepall
@@@ -1,13 -1,12 +1,14 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
+[[ -d ${ED} ]] || exit 0
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
if has chflags $FEATURES ; then
# Save all the file flags for restoration at the end of prepall.
diff --cc bin/ebuild-helpers/prepalldocs
index 2804987,3094661..c7c85d6
--- a/bin/ebuild-helpers/prepalldocs
+++ b/bin/ebuild-helpers/prepalldocs
@@@ -2,16 -2,12 +2,12 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- case "${EAPI}" in
- 0|1|2|3)
- ;;
- *)
- die "'${0##*/}' has been banned for EAPI '$EAPI'"
- exit 1
- ;;
- esac
+ if ___eapi_has_docompress; then
+ die "'${0##*/}' has been banned for EAPI '$EAPI'"
+ exit 1
+ fi
if [[ -n $1 ]] ; then
__vecho "${0##*/}: invalid usage; takes no arguments" 1>&2
diff --cc bin/ebuild-helpers/prepallinfo
index 00e1fc4,1a20275..43d0980
--- a/bin/ebuild-helpers/prepallinfo
+++ b/bin/ebuild-helpers/prepallinfo
@@@ -1,11 -1,12 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
[[ -d ${ED}usr/share/info ]] || exit 0
diff --cc bin/ebuild-helpers/prepallman
index fbc2f1d,7c78324..4ad3a5e
--- a/bin/ebuild-helpers/prepallman
+++ b/bin/ebuild-helpers/prepallman
@@@ -1,14 -1,15 +1,15 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
# replaced by controllable compression in EAPI 4
- has "${EAPI}" 0 1 2 3 || exit 0
+ ___eapi_has_docompress && exit 0
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
ret=0
diff --cc bin/ebuild-helpers/prepallstrip
index f99a15b,1aa6686..d80f27a
--- a/bin/ebuild-helpers/prepallstrip
+++ b/bin/ebuild-helpers/prepallstrip
@@@ -1,8 -1,11 +1,11 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
++source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
exec prepstrip "${ED}"
diff --cc bin/ebuild-helpers/prepinfo
index 4ae4976,5afc18a..170e18a
--- a/bin/ebuild-helpers/prepinfo
+++ b/bin/ebuild-helpers/prepinfo
@@@ -1,11 -1,12 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
if [[ -z $1 ]] ; then
infodir="/usr/share/info"
diff --cc bin/ebuild-helpers/preplib
index 8e3d4b3,764261d..145540c
--- a/bin/ebuild-helpers/preplib
+++ b/bin/ebuild-helpers/preplib
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
eqawarn "QA Notice: Deprecated call to 'preplib'"
diff --cc bin/ebuild-helpers/prepman
index 1411499,142d404..add01c8
--- a/bin/ebuild-helpers/prepman
+++ b/bin/ebuild-helpers/prepman
@@@ -1,11 -1,12 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ ED=${D}
+ fi
if [[ -z $1 ]] ; then
mandir="${ED}usr/share/man"
diff --cc bin/misc-functions.sh
index b66ded4,986264e..3b2c309
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -14,14 -14,12 +14,16 @@@
MISC_FUNCTIONS_ARGS="$@"
shift $#
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}/ebuild.sh"
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}/ebuild.sh"
install_symlink_html_docs() {
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
- # PREFIX LOCAL: ED needs not to exist, whereas D does
- [[ ! -d ${ED} && -d ${D} ]] && dodir /
- # END PREFIX LOCAL
+ if ! ___eapi_has_prefix_variables; then
+ local ED=${D}
++ else
++ # PREFIX LOCAL: ED needs not to exist, whereas D does
++ [[ ! -d ${ED} && -d ${D} ]] && dodir /
++ # END PREFIX LOCAL
+ fi
cd "${ED}" || die "cd failed"
#symlink the html documentation (if DOC_SYMLINKS_DIR is set in make.conf)
if [ -n "${DOC_SYMLINKS_DIR}" ] ; then
@@@ -165,12 -164,11 +168,13 @@@ prepcompress()
install_qa_check() {
local f i qa_var x
- [[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) local EPREFIX= ED=${D} ;; esac
+ if ! ___eapi_has_prefix_variables; then
+ local EPREFIX= ED=${D}
+ fi
- cd "${ED}" || die "cd failed"
+ # PREFIX LOCAL: ED needs not to exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
qa_var="QA_FLAGS_IGNORED_${ARCH/-/_}"
eval "[[ -n \${!qa_var} ]] && QA_FLAGS_IGNORED=(\"\${${qa_var}[@]}\")"
diff --cc pym/portage/package/ebuild/doebuild.py
index 84e4494,9deed98..2db7900
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -152,11 -150,18 +152,15 @@@ def _doebuild_path(settings, eapi=None)
eprefix = settings["EPREFIX"]
prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
-
- prefixes = []
- if eprefix:
- prefixes.append(eprefix)
- prefixes.append("/")
-
+ # PREFIX LOCAL: use DEFAULT_PATH and EXTRA_PATH from make.globals
+ defaultpath = [x for x in settings.get("DEFAULT_PATH", "").split(":") if x]
+ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
path = []
+ if eprefix and uid != 0 and "fakeroot" not in settings.features:
+ path.append(os.path.join(portage_bin_path,
+ "ebuild-helpers", "unprivileged"))
+
if settings.get("USERLAND", "GNU") != "GNU":
path.append(os.path.join(portage_bin_path, "ebuild-helpers", "bsd"))
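The _doebuild_path() hunk replaces the hard-coded prefix list with DEFAULT_PATH and EXTRA_PATH taken from make.globals, and prepends an 'unprivileged' helper directory when an offset install runs as a non-root user without fakeroot. A rough sketch of the resulting PATH assembly; the variable names come from the hunk, while the ordering and the function itself are assumptions rather than the real _doebuild_path():

import os

def assemble_ebuild_path(settings, portage_bin_path, uid):
    def split(key):
        return [x for x in settings.get(key, "").split(":") if x]

    eprefix = settings.get("EPREFIX", "")
    features = settings.get("FEATURES", "").split()

    path = []
    if eprefix and uid != 0 and "fakeroot" not in features:
        # chown/chgrp wrappers so unprivileged offset installs do not fail
        # on ownership changes they cannot perform
        path.append(os.path.join(portage_bin_path, "ebuild-helpers", "unprivileged"))
    for key in ("PREROOTPATH", "DEFAULT_PATH", "EXTRA_PATH", "ROOTPATH"):
        path.extend(split(key))
    return ":".join(path)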
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-09-26 18:26 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-09-26 18:26 UTC (permalink / raw
To: gentoo-commits
commit: 596938085a7f715f2dbe44720f42d213b281b50c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 26 18:24:34 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Sep 26 18:24:34 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=59693808
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/doconfd
bin/ebuild-helpers/doenvd
bin/ebuild-helpers/doinitd
bin/isolated-functions.sh
bin/misc-functions.sh
cnf/make.globals
pym/portage/dbapi/bintree.py
pym/portage/dbapi/vartree.py
NEWS | 13 +-
RELEASE-NOTES | 16 +-
bin/bashrc-functions.sh | 59 +-
bin/dohtml.py | 2 +-
bin/ebuild-helpers/dobin | 6 +-
bin/ebuild-helpers/doconfd | 2 +-
bin/ebuild-helpers/dodir | 2 +-
bin/ebuild-helpers/dodoc | 4 +-
bin/ebuild-helpers/doenvd | 2 +-
bin/ebuild-helpers/doexe | 6 +-
bin/ebuild-helpers/doheader | 4 +-
bin/ebuild-helpers/dohtml | 2 +-
bin/ebuild-helpers/doinfo | 6 +-
bin/ebuild-helpers/doinitd | 2 +-
bin/ebuild-helpers/doins | 14 +-
bin/ebuild-helpers/dolib | 6 +-
bin/ebuild-helpers/doman | 6 +-
bin/ebuild-helpers/domo | 4 +-
bin/ebuild-helpers/dosbin | 6 +-
bin/ebuild-helpers/dosym | 4 +-
bin/ebuild-helpers/ecompress | 14 +-
bin/ebuild-helpers/ecompressdir | 22 +-
bin/ebuild-helpers/emake | 2 +-
bin/ebuild-helpers/fowners | 2 +-
bin/ebuild-helpers/fperms | 2 +-
bin/ebuild-helpers/newins | 10 +-
bin/ebuild-helpers/prepalldocs | 2 +-
bin/ebuild-helpers/prepinfo | 2 +-
bin/ebuild-helpers/prepstrip | 22 +-
bin/ebuild.sh | 82 +-
bin/egencache | 4 +-
bin/emerge-webrsync | 20 +-
bin/helper-functions.sh | 26 +-
bin/isolated-functions.sh | 84 +-
bin/misc-functions.sh | 137 ++--
bin/phase-functions.sh | 314 +++----
bin/phase-helpers.sh | 93 +-
bin/portageq | 9 +-
bin/repoman | 103 +-
bin/save-ebuild-env.sh | 57 +-
cnf/make.globals | 3 +-
doc/package/ebuild.docbook | 2 +
doc/package/ebuild/eapi/4-python.docbook | 7 +-
doc/package/ebuild/eapi/5-hdepend.docbook | 32 +
.../eapi/{4-python.docbook => 5-progress.docbook} | 44 +-
doc/package/ebuild/eapi/5.docbook | 40 +-
doc/portage.docbook | 2 +
man/ebuild.5 | 1070 ++++++++++++--------
man/emerge.1 | 45 +-
man/make.conf.5 | 28 +-
man/portage.5 | 95 ++-
man/repoman.1 | 72 +-
misc/emerge-delta-webrsync | 4 +
pym/_emerge/BlockerDB.py | 5 +-
pym/_emerge/EbuildMetadataPhase.py | 9 +-
pym/_emerge/FakeVartree.py | 4 +-
pym/_emerge/MetadataRegen.py | 6 +-
pym/_emerge/Package.py | 32 +-
pym/_emerge/actions.py | 11 +-
pym/_emerge/depgraph.py | 75 +-
pym/_emerge/is_valid_package_atom.py | 4 +-
pym/_emerge/main.py | 24 +-
pym/_emerge/resolver/circular_dependency.py | 4 +-
pym/_emerge/resolver/output.py | 157 ++--
pym/_emerge/resolver/output_helpers.py | 78 ++-
pym/portage/__init__.py | 6 +-
pym/portage/_global_updates.py | 14 +-
pym/portage/_sets/__init__.py | 4 +
pym/portage/cache/flat_list.py | 134 ---
pym/portage/cache/metadata.py | 3 +-
pym/portage/cache/sqlite.py | 25 +-
pym/portage/const.py | 18 +-
pym/portage/dbapi/__init__.py | 15 +-
pym/portage/dbapi/bintree.py | 13 +-
pym/portage/dbapi/porttree.py | 3 +-
pym/portage/dbapi/vartree.py | 36 +-
pym/portage/dep/__init__.py | 12 +-
pym/portage/dep/_slot_operator.py | 11 +-
pym/portage/dep/dep_check.py | 13 +-
pym/portage/eapi.py | 26 +-
pym/portage/emaint/modules/move/move.py | 4 +-
pym/portage/manifest.py | 38 +-
.../package/ebuild/_config/LocationsManager.py | 49 +-
pym/portage/package/ebuild/_config/UseManager.py | 24 +-
.../package/ebuild/_config/special_env_vars.py | 3 +-
pym/portage/package/ebuild/config.py | 73 +-
.../package/ebuild/deprecated_profile_check.py | 21 +-
pym/portage/package/ebuild/doebuild.py | 46 +-
pym/portage/package/ebuild/fetch.py | 21 +-
pym/portage/repository/config.py | 32 +-
pym/portage/tests/dbapi/test_portdb_cache.py | 15 -
pym/portage/tests/dep/testAtom.py | 1 -
pym/portage/tests/dep/test_isvalidatom.py | 2 +-
pym/portage/tests/dep/test_match_from_list.py | 2 -
pym/portage/tests/emerge/test_emerge_slot_abi.py | 22 -
pym/portage/tests/emerge/test_simple.py | 17 -
pym/portage/tests/repoman/test_simple.py | 35 +-
pym/portage/tests/resolver/ResolverPlayground.py | 240 ++---
.../tests/resolver/test_features_test_use.py | 68 ++
pym/portage/tests/resolver/test_targetroot.py | 85 ++
pym/portage/tests/update/test_update_dbentry.py | 50 +-
pym/portage/update.py | 82 ++-
pym/portage/util/__init__.py | 4 +-
pym/portage/util/_eventloop/EventLoop.py | 30 +-
pym/portage/util/_eventloop/PollSelectAdapter.py | 2 +-
pym/portage/util/env_update.py | 38 +-
pym/portage/versions.py | 4 +-
107 files changed, 2385 insertions(+), 1982 deletions(-)
diff --cc bin/ebuild-helpers/dobin
index 922e600,06ae0c7..3d81c2d
--- a/bin/ebuild-helpers/dobin
+++ b/bin/ebuild-helpers/dobin
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/doconfd
index 42f3e42,a3c09a5..d0d3901
--- a/bin/ebuild-helpers/doconfd
+++ b/bin/ebuild-helpers/doconfd
@@@ -3,8 -3,8 +3,8 @@@
# Distributed under the terms of the GNU General Public License v2
if [[ $# -lt 1 ]] ; then
- source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+ source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/dodoc
index 4b9c8b9,2b0533f..7f5e364
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -13,10 -13,10 +13,10 @@@ case "${EAPI}" i
;;
esac
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [ $# -lt 1 ] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/doenvd
index 31c57d5,9287933..c4fae04
--- a/bin/ebuild-helpers/doenvd
+++ b/bin/ebuild-helpers/doenvd
@@@ -3,8 -3,8 +3,8 @@@
# Distributed under the terms of the GNU General Public License v2
if [[ $# -lt 1 ]] ; then
- source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+ source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/doexe
index 05a8759,efbf638..9c845ca
--- a/bin/ebuild-helpers/doexe
+++ b/bin/ebuild-helpers/doexe
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/dohtml
index b3a3c7a,aec5e79..4f102b0
--- a/bin/ebuild-helpers/dohtml
+++ b/bin/ebuild-helpers/dohtml
@@@ -2,13 -2,13 +2,13 @@@
# Copyright 2009-2010 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}
-PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}
+PORTAGE_BIN_PATH=${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}
+PORTAGE_PYM_PATH=${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}
PYTHONPATH=$PORTAGE_PYM_PATH${PYTHONPATH:+:}$PYTHONPATH \
- "${PORTAGE_PYTHON:-/usr/bin/python}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "$PORTAGE_BIN_PATH/dohtml.py" "$@"
ret=$?
- [[ $ret -ne 0 ]] && helpers_die "${0##*/} failed"
+ [[ $ret -ne 0 ]] && __helpers_die "${0##*/} failed"
exit $ret
diff --cc bin/ebuild-helpers/doinfo
index 4597b2e,e4ccc90..db49bed
--- a/bin/ebuild-helpers/doinfo
+++ b/bin/ebuild-helpers/doinfo
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -z $1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/doinitd
index 9eefa52,476b858..8e99fa8
--- a/bin/ebuild-helpers/doinitd
+++ b/bin/ebuild-helpers/doinitd
@@@ -3,8 -3,8 +3,8 @@@
# Distributed under the terms of the GNU General Public License v2
if [[ $# -lt 1 ]] ; then
- source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+ source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/doins
index 511eb17,c534f3f..343a150
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -33,22 -33,13 +33,22 @@@ f
case "$EAPI" in 0|1|2) export ED="${D}" ;; esac
if [[ ${INSDESTTREE#${ED}} != "${INSDESTTREE}" ]]; then
- vecho "-------------------------------------------------------" 1>&2
- vecho "You should not use \${D} or \${ED} with helpers." 1>&2
- vecho " --> ${INSDESTTREE}" 1>&2
- vecho "-------------------------------------------------------" 1>&2
- helpers_die "${helper} used with \${D} or \${ED}"
+ __vecho "-------------------------------------------------------" 1>&2
+ __vecho "You should not use \${D} or \${ED} with helpers." 1>&2
+ __vecho " --> ${INSDESTTREE}" 1>&2
+ __vecho "-------------------------------------------------------" 1>&2
+ __helpers_die "${helper} used with \${D} or \${ED}"
exit 1
fi
+# PREFIX LOCAL: check for usage with EPREFIX
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ vecho "-------------------------------------------------------" 1>&2
+ vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ vecho " --> ${INSDESTTREE}" 1>&2
+ vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
+# END PREFIX LOCAL
case "$EAPI" in
0|1|2|3)
diff --cc bin/ebuild-helpers/doman
index 96735f7,c10296b..dd5440d
--- a/bin/ebuild-helpers/doman
+++ b/bin/ebuild-helpers/doman
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/dosbin
index 694953d,edc4584..9765b75
--- a/bin/ebuild-helpers/dosbin
+++ b/bin/ebuild-helpers/dosbin
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/dosym
index b96f845,34637c2..a98d053
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -ne 2 ]] ; then
- helpers_die "${0##*/}: two arguments needed"
+ __helpers_die "${0##*/}: two arguments needed"
exit 1
fi
diff --cc bin/ebuild-helpers/ecompress
index fe974cc,71287b4..6614cdc
--- a/bin/ebuild-helpers/ecompress
+++ b/bin/ebuild-helpers/ecompress
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2010 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -z $1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/ecompressdir
index bf76b1f,097ade2..464dd37
--- a/bin/ebuild-helpers/ecompressdir
+++ b/bin/ebuild-helpers/ecompressdir
@@@ -2,10 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
if [[ -z $1 ]] ; then
- helpers_die "${0##*/}: at least one argument needed"
+ __helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
diff --cc bin/ebuild-helpers/emake
index 197e0da,69d836f..60286ec
--- a/bin/ebuild-helpers/emake
+++ b/bin/ebuild-helpers/emake
@@@ -22,7 -22,7 +22,7 @@@ if [[ $PORTAGE_QUIET != 1 ]] ; the
) >&2
fi
-${MAKE:-make} ${MAKEOPTS} ${EXTRA_EMAKE} "$@"
+${MAKE:-make} SHELL="${BASH:-/bin/bash}" ${MAKEOPTS} ${EXTRA_EMAKE} "$@"
ret=$?
- [[ $ret -ne 0 ]] && helpers_die "${0##*/} failed"
+ [[ $ret -ne 0 ]] && __helpers_die "${0##*/} failed"
exit $ret
diff --cc bin/emerge-webrsync
index cdcf716,09b7574..10ce71d
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -196,11 -189,9 +196,11 @@@ get_snapshot_timestamp()
sync_local() {
local file="$1"
- vecho "Syncing local tree ..."
+ __vecho "Syncing local tree ..."
- local ownership="portage:portage"
+ # PREFIX LOCAL: use PORTAGE_USER and PORTAGE_GROUP
+ local ownership="${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage}"
+ # END PREFIX LOCAL
if has usersync ${FEATURES} ; then
case "${USERLAND}" in
BSD)
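The sync_local() change is small but characteristic: instead of assuming a portage:portage owner for the synced tree, Prefix takes it from PORTAGE_USER and PORTAGE_GROUP, since an offset install is typically owned by one unprivileged user. The same fallback in Python; reading from a mapping here is an illustration, the script itself reads shell variables:

import os

def sync_ownership(environ=None):
    environ = os.environ if environ is None else environ
    # ${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage}
    return "%s:%s" % (environ.get("PORTAGE_USER", "portage"),
                      environ.get("PORTAGE_GROUP", "portage"))

print(sync_ownership({"PORTAGE_USER": "grobian"}))  # grobian:portage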
diff --cc bin/misc-functions.sh
index 8c6a126,55d37f2..b66ded4
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# Miscellaneous shell functions that make use of the ebuild env but don't need
@@@ -155,12 -165,8 +168,10 @@@ install_qa_check()
[[ " ${FEATURES} " == *" force-prefix "* ]] || \
case "$EAPI" in 0|1|2) local EPREFIX= ED=${D} ;; esac
- cd "${ED}" || die "cd failed"
+ # PREFIX LOCAL: ED needs not to exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
- # Merge QA_FLAGS_IGNORED and QA_DT_HASH into a single array, since
- # QA_DT_HASH is deprecated.
qa_var="QA_FLAGS_IGNORED_${ARCH/-/_}"
eval "[[ -n \${!qa_var} ]] && QA_FLAGS_IGNORED=(\"\${${qa_var}[@]}\")"
if [[ ${#QA_FLAGS_IGNORED[@]} -eq 1 ]] ; then
@@@ -276,14 -259,12 +264,14 @@@
fi
# Now we look for all world writable files.
- local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${ED}:- :")
+ # PREFIX LOCAL: keep offset in the paths
+ local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${D}:- :")
+ # END PREFIX LOCAL
if [[ -n ${unsafe_files} ]] ; then
- vecho "QA Security Notice: world writable file(s):"
- vecho "${unsafe_files}"
- vecho "- This may or may not be a security problem, most of the time it is one."
- vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
+ __vecho "QA Security Notice: world writable file(s):"
+ __vecho "${unsafe_files}"
+ __vecho "- This may or may not be a security problem, most of the time it is one."
+ __vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
sleep 1
fi
@@@ -621,13 -565,12 +609,12 @@@ install_qa_check_misc()
# this should help to ensure that all (most?) shared libraries are executable
# and that all libtool scripts / static libraries are not executable
local j
- for i in "${ED}"opt/*/lib{,32,64} \
- "${ED}"lib{,32,64} \
- "${ED}"usr/lib{,32,64} \
- "${ED}"usr/X11R6/lib{,32,64} ; do
+ for i in "${ED}"opt/*/lib* \
+ "${ED}"lib* \
+ "${ED}"usr/lib* ; do
[[ ! -d ${i} ]] && continue
- for j in "${i}"/*.so.* "${i}"/*.so ; do
+ for j in "${i}"/*.so.* "${i}"/*.so "${i}"/*.dylib "${i}"/*.dll ; do
[[ ! -e ${j} ]] && continue
[[ -L ${j} ]] && continue
[[ -x ${j} ]] && continue
@@@ -681,17 -620,12 +668,17 @@@
[[ ${abort} == "yes" ]] && die "add those ldscripts"
# Make sure people don't store libtool files or static libs in /lib
- f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
+ # PREFIX LOCAL: on AIX, "dynamic libs" have extension .a, so don't
+ # get false positives
+ [[ ${CHOST} == *-aix* ]] \
+ && f=$(ls "${ED}"lib*/*.la 2>/dev/null || true) \
+ || f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
+ # END PREFIX LOCAL
if [[ -n ${f} ]] ; then
- vecho -ne '\n'
+ __vecho -ne '\n'
eqawarn "QA Notice: Excessive files found in the / partition"
eqawarn "${f}"
- vecho -ne '\n'
+ __vecho -ne '\n'
die "static archives (*.a) and libtool library files (*.la) do not belong in /"
fi
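The AIX special case above exists because AIX shared libraries are ar archives with a .a suffix, so flagging every *.a under /lib* would produce nothing but false positives; only libtool .la files are reported there. The pattern choice, restated as a tiny hypothetical helper:

def offending_lib_patterns(chost):
    # On AIX, *.a may be a legitimate shared library, so only .la is flagged.
    if "-aix" in chost:
        return ["*.la"]
    return ["*.a", "*.la"]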
diff --cc bin/phase-functions.sh
index 857dc6e,97e762a..a9b014a
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -137,13 -137,13 +137,13 @@@ __filter_readonly_variables()
fi
fi
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-environment.py failed"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-environment.py failed"
}
- # @FUNCTION: preprocess_ebuild_env
+ # @FUNCTION: __preprocess_ebuild_env
# @DESCRIPTION:
# Filter any readonly variables from ${T}/environment, source it, and then
- # save it via save_ebuild_env(). This process should be sufficient to prevent
+ # save it via __save_ebuild_env(). This process should be sufficient to prevent
# any stale variables or functions from an arbitrary environment from
# interfering with the current environment. This is useful when an existing
# environment needs to be loaded from a binary or installed package.
diff --cc cnf/make.globals
index 17d1025,bc69abe..53c4bb6
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -66,13 -66,6 +66,12 @@@ FEATURES="assume-digests binpkg-log
COLLISION_IGNORE="/lib/modules/* *.py[co]"
UNINSTALL_IGNORE="/lib/modules/*"
- # Enable preserve-libs for testing with portage versions that support it.
- # This setting is commented out for portage versions that don't support it.
++# Prefix: we want preserve-libs, not sure how mainline goes about this
+FEATURES="${FEATURES} preserve-libs"
+
+# Force EPREFIX, ED and EROOT to exist in all EAPIs, not just 3 and up
+FEATURES="${FEATURES} force-prefix"
+
# Default chunksize for binhost comms
PORTAGE_BINHOST_CHUNKSIZE="3000"
diff --cc pym/_emerge/Package.py
index fd8539e,b60f744..315811c
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@@ -12,8 -12,8 +12,9 @@@ from portage.dep import Atom, check_req
from portage.versions import _pkg_str, _unknown_repo
from portage.eapi import _get_eapi_attrs
from portage.exception import InvalidDependString
+ from portage.localization import _
from _emerge.Task import Task
+from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
basestring = str
@@@ -36,12 -36,14 +37,14 @@@ class Package(Task)
metadata_keys = [
"BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "EAPI",
- "INHERITED", "IUSE", "KEYWORDS",
+ "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROVIDE", "RDEPEND",
"repository", "PROPERTIES", "RESTRICT", "SLOT", "USE",
- "_mtime_", "DEFINED_PHASES", "REQUIRED_USE"]
+ "_mtime_", "DEFINED_PHASES", "REQUIRED_USE", "EPREFIX"]
- _dep_keys = ('DEPEND', 'PDEPEND', 'RDEPEND',)
+ _dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
+ _buildtime_keys = ('DEPEND', 'HDEPEND')
+ _runtime_keys = ('PDEPEND', 'RDEPEND')
_use_conditional_misc_keys = ('LICENSE', 'PROPERTIES', 'RESTRICT')
UNKNOWN_REPO = _unknown_repo
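The Package.py hunk above adds HDEPEND to the metadata keys and splits the dependency keys into build-time and run-time groups. A small illustrative sketch of that grouping (not taken from the commit; HDEPEND here is assumed to carry build-host dependencies as in the experimental 5-hdepend EAPI):

BUILDTIME_KEYS = ("DEPEND", "HDEPEND")  # installed before the build starts
RUNTIME_KEYS = ("PDEPEND", "RDEPEND")   # needed on ROOT at run time

def dep_phase(key):
    """Classify a metadata variable; returns None for non-dependency keys."""
    if key in BUILDTIME_KEYS:
        return "build-time"
    if key in RUNTIME_KEYS:
        return "run-time"
    return None

assert dep_phase("HDEPEND") == "build-time"
assert dep_phase("EPREFIX") is None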
diff --cc pym/_emerge/actions.py
index ae8df10,f7ec07a..c8d04f2
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -30,9 -30,8 +30,9 @@@ from portage import o
from portage import shutil
from portage import eapi_is_supported, _unicode_decode
from portage.cache.cache_errors import CacheError
+from portage.const import EPREFIX
from portage.const import GLOBAL_CONFIG_PATH
- from portage.const import _ENABLE_DYN_LINK_MAP
+ from portage.const import _DEPCLEAN_LIB_CHECK_DEFAULT
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
from portage.dep import Atom
diff --cc pym/portage/dbapi/bintree.py
index a2c4ec4,cbcfa72..b182295
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -73,10 -72,11 +73,12 @@@ class bindbapi(fakedbapi)
self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI", "IUSE", "KEYWORDS",
+ ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE", "DEFINED_PHASES"
+ "RDEPEND", "repository", "RESTRICT", "SLOT", "USE", "DEFINED_PHASES",
- "EPREFIX"])
++ "EPREFIX"
+ ])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -303,22 -303,24 +305,24 @@@ class binarytree(object)
self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
self._pkgindex_aux_keys = \
["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
"PROVIDE", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI"]
+ "BASE_URI", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
- ("LICENSE", "RDEPEND", "DEPEND",
+ ("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
"PDEPEND", "PROPERTIES", "PROVIDE")
self._pkgindex_header_keys = set([
"ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
"ACCEPT_PROPERTIES", "CBUILD",
"CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
- "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE"])
+ "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE", "EPREFIX"])
self._pkgindex_default_pkg_data = {
"BUILD_TIME" : "",
+ "DEFINED_PHASES" : "",
"DEPEND" : "",
"EAPI" : "0",
+ "HDEPEND" : "",
"IUSE" : "",
"KEYWORDS": "",
"LICENSE" : "",
@@@ -330,9 -332,8 +334,8 @@@
"RESTRICT": "",
"SLOT" : "0",
"USE" : "",
- "DEFINED_PHASES" : "",
}
- self._pkgindex_inherited_keys = ["CHOST", "repository"]
+ self._pkgindex_inherited_keys = ["CHOST", "repository", "EPREFIX"]
# Populate the header with appropriate defaults.
self._pkgindex_default_header_data = {
diff --cc pym/portage/dbapi/vartree.py
index 5ecdf81,f8980f7..4a01d7a
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -42,8 -39,7 +42,7 @@@ portage.proxy.lazyimport.lazyimport(glo
)
from portage.const import CACHE_PATH, CONFIG_MEMORY_FILE, \
- PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH
+ PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH, EPREFIX, EPREFIX_LSTRIP, BASH_BINARY
- from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_PRESERVE_LIBS
from portage.dbapi import dbapi
from portage.exception import CommandNotFound, \
InvalidData, InvalidLocation, InvalidPackageName, \
@@@ -176,25 -171,9 +175,20 @@@ class vardbapi(dbapi)
self._counter_path = os.path.join(self._eroot,
CACHE_PATH, "counter")
- self._plib_registry = None
- if _ENABLE_PRESERVE_LIBS:
- self._plib_registry = PreservedLibsRegistry(settings["ROOT"],
- os.path.join(self._eroot, PRIVATE_PATH,
- "preserved_libs_registry"))
-
- self._linkmap = None
- if _ENABLE_DYN_LINK_MAP:
- chost = self.settings.get('CHOST')
- if not chost:
- chost = 'lunix?' # this happens when profiles are not available
- if chost.find('darwin') >= 0:
- self._linkmap = LinkageMapMachO(self)
- elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
- self._linkmap = LinkageMapPeCoff(self)
- elif chost.find('aix') >= 0:
- self._linkmap = LinkageMapXCoff(self)
- else:
- self._linkmap = LinkageMap(self)
+ self._plib_registry = PreservedLibsRegistry(settings["ROOT"],
+ os.path.join(self._eroot, PRIVATE_PATH, "preserved_libs_registry"))
+ self._linkmap = LinkageMap(self)
++ chost = self.settings.get('CHOST')
++ if not chost:
++ chost = 'lunix?' # this happens when profiles are not available
++ if chost.find('darwin') >= 0:
++ self._linkmap = LinkageMapMachO(self)
++ elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
++ self._linkmap = LinkageMapPeCoff(self)
++ elif chost.find('aix') >= 0:
++ self._linkmap = LinkageMapXCoff(self)
++ else:
++ self._linkmap = LinkageMap(self)
self._owners = self._owners_db(self)
self._cached_counter = None
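The vartree.py hunk above always enables the preserved-libs registry in Prefix and picks a linkage-map implementation from CHOST, since Prefix hosts may use Mach-O, PE/COFF or XCOFF objects instead of ELF. A rough standalone sketch of that dispatch, returning class names as strings rather than the real classes:

def linkage_map_for(chost):
    # Fall back to a placeholder when profiles are unavailable, as the
    # commit does with its 'lunix?' default.
    chost = chost or "lunix?"
    if "darwin" in chost:
        return "LinkageMapMachO"    # Mach-O (macOS) hosts
    if "interix" in chost or "winnt" in chost:
        return "LinkageMapPeCoff"   # PE/COFF hosts
    if "aix" in chost:
        return "LinkageMapXCoff"    # XCOFF (AIX) hosts
    return "LinkageMap"             # default ELF handling

print(linkage_map_for("x86_64-apple-darwin21"))  # LinkageMapMachO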
diff --cc pym/portage/util/env_update.py
index 1486508,4c1fbf8..6dc15a6
--- a/pym/portage/util/env_update.py
+++ b/pym/portage/util/env_update.py
@@@ -86,9 -87,10 +87,10 @@@ def _env_update(makelinks, target_root
else:
settings = env
- eprefix = settings.get("EPREFIX", "")
+ eprefix = settings.get("EPREFIX", portage.const.EPREFIX)
eprefix_lstrip = eprefix.lstrip(os.sep)
- envd_dir = os.path.join(target_root, eprefix_lstrip, "etc", "env.d")
+ eroot = normalize_path(os.path.join(target_root, eprefix_lstrip)).rstrip(os.sep) + os.sep
+ envd_dir = os.path.join(eroot, "etc", "env.d")
ensure_dirs(envd_dir, mode=0o755)
fns = listdir(envd_dir, EmptyOnError=1)
fns.sort()
@@@ -304,12 -318,11 +318,11 @@@
penvnotice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
penvnotice += "# DO NOT EDIT THIS FILE. CHANGES TO STARTUP PROFILES\n"
cenvnotice = penvnotice[:]
- penvnotice += "# GO INTO /etc/profile NOT /etc/profile.env\n\n"
- cenvnotice += "# GO INTO /etc/csh.cshrc NOT /etc/csh.env\n\n"
+ penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT /etc/profile.env\n\n"
+ cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT /etc/csh.env\n\n"
#create /etc/profile.env for bash support
- outfile = atomic_ofstream(os.path.join(
- target_root, eprefix_lstrip, "etc", "profile.env"))
+ outfile = atomic_ofstream(os.path.join(eroot, "etc", "profile.env"))
outfile.write(penvnotice)
env_keys = [ x for x in env if x != "LDPATH" ]
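The env_update.py hunk above derives a single EROOT value, ending in a separator, from ROOT and EPREFIX, so that env.d, profile.env and csh.env all land inside the offset prefix. A minimal standalone sketch of that derivation, using os.path.normpath in place of portage's normalize_path and purely illustrative paths:

import os

def derive_eroot(target_root, eprefix):
    # Strip the leading separator from EPREFIX so it joins under ROOT,
    # normalize, and re-append a trailing separator as _env_update() does.
    eprefix_lstrip = eprefix.lstrip(os.sep)
    return os.path.normpath(
        os.path.join(target_root, eprefix_lstrip)).rstrip(os.sep) + os.sep

eroot = derive_eroot("/", "/home/user/gentoo")        # hypothetical prefix
print(os.path.join(eroot, "etc", "env.d"))            # /home/user/gentoo/etc/env.d
print(os.path.join(eroot, "etc", "profile.env"))      # /home/user/gentoo/etc/profile.env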
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-09-12 18:18 Fabian Groffen
From: Fabian Groffen @ 2012-09-12 18:18 UTC (permalink / raw
To: gentoo-commits
commit: f0397094c2ed8fe67b96cbc797da99aaac335cca
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 12 18:17:49 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Sep 12 18:17:49 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f0397094
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/portage/data.py
bin/egencache | 3 +-
bin/phase-functions.sh | 14 +-
bin/phase-helpers.sh | 12 +-
bin/repoman | 156 ++++++++++++--------
bin/save-ebuild-env.sh | 6 +-
cnf/make.conf | 2 +-
doc/package/ebuild/eapi/5.docbook | 68 +++------
man/emerge.1 | 13 +-
man/make.conf.5 | 4 +-
man/portage.5 | 20 ++--
pym/_emerge/EbuildPhase.py | 5 +-
pym/_emerge/Scheduler.py | 5 +-
pym/_emerge/depgraph.py | 14 ++-
pym/portage/__init__.py | 44 ++++--
pym/portage/checksum.py | 17 ++-
pym/portage/const.py | 2 +-
pym/portage/data.py | 23 ++-
.../package/ebuild/_config/LocationsManager.py | 2 +-
pym/portage/package/ebuild/config.py | 2 +-
pym/portage/package/ebuild/doebuild.py | 7 +-
pym/portage/proxy/objectproxy.py | 9 +-
pym/portage/tests/lint/test_bash_syntax.py | 16 ++-
pym/portage/tests/lint/test_import_modules.py | 2 +-
pym/portage/util/__init__.py | 21 ++-
pym/portage/util/_desktop_entry.py | 57 +-------
pym/portage/xml/metadata.py | 9 +-
pym/repoman/herdbase.py | 5 +-
pym/repoman/utilities.py | 34 +++--
28 files changed, 312 insertions(+), 260 deletions(-)
diff --cc pym/portage/data.py
index 23734f4,b922ff8..df87b5c
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -1,9 -1,8 +1,9 @@@
# data.py -- Calculated/Discovered Data Values
- # Copyright 1998-2011 Gentoo Foundation
+ # Copyright 1998-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- import os, pwd, grp, platform
+ import os, pwd, grp, platform, sys
+from portage.const import PORTAGE_GROUPNAME, PORTAGE_USERNAME, EPREFIX
import portage
portage.proxy.lazyimport.lazyimport(globals(),
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-09-09 7:40 Fabian Groffen
From: Fabian Groffen @ 2012-09-09 7:40 UTC (permalink / raw
To: gentoo-commits
commit: f344fbaaad0b6f976ad9b404ea46cc518f2c9f5e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Sep 9 07:39:35 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Sep 9 07:39:35 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f344fbaa
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
DEVELOPING | 13 +--
bin/egencache | 37 +++--
bin/phase-helpers.sh | 34 ++---
cnf/dispatch-conf.conf | 1 +
pym/_emerge/EbuildPhase.py | 9 +-
pym/portage/package/ebuild/_config/UseManager.py | 86 ++++++++---
pym/portage/repository/config.py | 12 +-
pym/portage/tests/dbapi/test_portdb_cache.py | 193 ++++++++++++++++++++++
8 files changed, 314 insertions(+), 71 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-09-06 18:14 Fabian Groffen
From: Fabian Groffen @ 2012-09-06 18:14 UTC (permalink / raw
To: gentoo-commits
commit: 0dfc79ae85eeb0ac4101d80eaf659aeca1fcf8cd
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Sep 6 17:59:38 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Sep 6 17:59:38 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=0dfc79ae
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/archive-conf
bin/banned-helper
bin/binhost-snapshot
bin/clean_locks
bin/dispatch-conf
bin/ebuild
bin/ebuild-helpers/bsd/sed
bin/ebuild-helpers/dodoc
bin/ebuild-helpers/dohard
bin/ebuild-helpers/dosed
bin/ebuild-helpers/prepalldocs
bin/emaint
bin/emerge
bin/env-update
bin/fixpackages
bin/glsa-check
bin/phase-functions.sh
bin/phase-helpers.sh
bin/quickpkg
bin/regenworld
bin/repoman
bin/save-ebuild-env.sh
NEWS | 4 +-
bin/archive-conf | 7 +-
bin/banned-helper | 6 -
bin/binhost-snapshot | 13 +-
bin/clean_locks | 11 +-
bin/dispatch-conf | 14 +-
bin/ebuild | 9 +-
bin/ebuild-helpers/4/dodoc | 1 -
bin/ebuild-helpers/4/dohard | 1 -
bin/ebuild-helpers/4/dosed | 1 -
bin/ebuild-helpers/4/prepalldocs | 1 -
bin/ebuild-helpers/{ => bsd}/sed | 16 +-
bin/ebuild-helpers/dodoc | 13 +-
bin/ebuild-helpers/dohard | 11 +-
bin/ebuild-helpers/doheader | 21 ++
bin/ebuild-helpers/doins | 16 +-
bin/ebuild-helpers/dosed | 11 +-
bin/ebuild-helpers/{newbin => newheader} | 0
bin/ebuild-helpers/newins | 2 +-
bin/ebuild-helpers/prepalldocs | 11 +-
bin/ebuild.sh | 2 +-
bin/egencache | 11 +-
bin/emaint | 11 +-
bin/emerge | 8 +-
bin/env-update | 12 +-
bin/etc-update | 15 +-
bin/fixpackages | 13 +-
bin/glsa-check | 13 +-
bin/phase-functions.sh | 34 ++-
bin/phase-helpers.sh | 141 +++++++++--
bin/portageq | 58 +++--
bin/quickpkg | 8 +-
bin/regenworld | 12 +-
bin/repoman | 12 +-
bin/save-ebuild-env.sh | 11 +-
cnf/make.globals | 2 +-
doc/package/ebuild.docbook | 1 +
doc/package/ebuild/eapi/4-slot-abi.docbook | 12 +-
doc/package/ebuild/eapi/5.docbook | 260 ++++++++++++++++++++
doc/portage.docbook | 1 +
man/ebuild.5 | 7 +-
man/emerge.1 | 16 +-
man/make.conf.5 | 5 -
pym/_emerge/EbuildBuildDir.py | 7 +-
pym/_emerge/EbuildMetadataPhase.py | 6 +-
pym/_emerge/FakeVartree.py | 20 +-
pym/_emerge/Package.py | 21 +-
pym/_emerge/Scheduler.py | 19 +-
pym/_emerge/create_depgraph_params.py | 20 +-
pym/_emerge/depgraph.py | 131 ++++++-----
pym/_emerge/main.py | 22 +-
pym/_emerge/resolver/backtracking.py | 16 +-
pym/_emerge/resolver/circular_dependency.py | 9 +-
pym/_emerge/resolver/slot_collision.py | 4 +-
pym/portage/__init__.py | 2 +-
pym/portage/_sets/__init__.py | 17 ++
pym/portage/_sets/dbapi.py | 86 +++++++-
pym/portage/const.py | 5 +-
pym/portage/dbapi/__init__.py | 15 +-
pym/portage/dbapi/_expand_new_virt.py | 10 +-
pym/portage/dep/__init__.py | 141 ++++++-----
.../dep/{_slot_abi.py => _slot_operator.py} | 24 +-
pym/portage/eapi.py | 23 ++-
pym/portage/emaint/modules/config/config.py | 66 ++----
pym/portage/glsa.py | 3 +-
.../package/ebuild/_config/special_env_vars.py | 8 +-
pym/portage/package/ebuild/_eapi_invalid.py | 13 -
pym/portage/package/ebuild/config.py | 104 ++++++++-
pym/portage/package/ebuild/doebuild.py | 49 +++--
pym/portage/tests/dep/testAtom.py | 16 +-
pym/portage/tests/dep/testCheckRequiredUse.py | 16 ++-
.../tests/dep/test_get_required_use_flags.py | 4 +-
pym/portage/tests/dep/test_match_from_list.py | 6 +-
pym/portage/tests/emerge/test_simple.py | 2 +-
pym/portage/tests/resolver/test_complete_graph.py | 4 +-
pym/portage/tests/resolver/test_slot_abi.py | 26 +-
.../tests/resolver/test_slot_abi_downgrade.py | 8 +-
pym/portage/util/_desktop_entry.py | 23 ++-
pym/portage/versions.py | 10 +-
79 files changed, 1223 insertions(+), 567 deletions(-)
diff --cc bin/archive-conf
index 149fdc5,af34db6..a48fc19
--- a/bin/archive-conf
+++ b/bin/archive-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/binhost-snapshot
index 54d2102,fe2cf6b..f5b9979
--- a/bin/binhost-snapshot
+++ b/bin/binhost-snapshot
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2011 Gentoo Foundation
+ # Copyright 2010-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import io
diff --cc bin/clean_locks
index c97794f,09ee3e5..29c22ff
--- a/bin/clean_locks
+++ b/bin/clean_locks
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/ebuild-helpers/bsd/sed
index 8c979e9,01b8847..89f9ec6
--- a/bin/ebuild-helpers/bsd/sed
+++ b/bin/ebuild-helpers/bsd/sed
@@@ -1,16 -1,11 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2007 Gentoo Foundation
+ # Copyright 2007-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
scriptpath=${BASH_SOURCE[0]}
scriptname=${scriptpath##*/}
++# PREFIX LOCAL: warn about screwups early
+if [[ -n ${EPREFIX} ]] ; then
+ echo "When using Prefix, this BSD sed wrapper should not exist (in ${scriptpath})! This is a bug!" > /dev/stderr
+ exit 1
+fi
++# END PREFIX LOCAL
+
- if [[ sed == ${scriptname} ]] && [[ -n ${ESED} ]]; then
+ if [[ sed == ${scriptname} && -n ${ESED} ]]; then
exec ${ESED} "$@"
elif type -P g${scriptname} > /dev/null ; then
exec g${scriptname} "$@"
diff --cc bin/ebuild-helpers/dodoc
index 0ba6751,d6ce679..4b9c8b9
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -1,8 -1,19 +1,19 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ case "${EAPI}" in
+ 0|1|2|3)
+ ;;
+ *)
+ exec \
+ env \
+ __PORTAGE_HELPER="dodoc" \
+ doins "$@"
+ ;;
+ esac
+
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [ $# -lt 1 ] ; then
helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/dohard
index 2fb0ede,6ae93d2..bd83b5d
--- a/bin/ebuild-helpers/dohard
+++ b/bin/ebuild-helpers/dohard
@@@ -1,7 -1,16 +1,16 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ case "${EAPI}" in
+ 0|1|2|3)
+ ;;
+ *)
+ die "'${0##*/}' has been banned for EAPI '$EAPI'"
+ exit 1
+ ;;
+ esac
+
if [[ $# -ne 2 ]] ; then
echo "$0: two arguments needed" 1>&2
exit 1
diff --cc bin/ebuild-helpers/doins
index ca6b0ba,26b11a8..511eb17
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -2,9 -2,11 +2,11 @@@
# Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- if [[ ${0##*/} == dodoc ]] ; then
+ helper=${__PORTAGE_HELPER:-${0##*/}}
+
+ if [[ ${helper} == dodoc ]] ; then
if [ $# -eq 0 ] ; then
# default_src_install may call dodoc with no arguments
# when DOC is defined but empty, so simply return
@@@ -35,18 -37,9 +37,18 @@@ if [[ ${INSDESTTREE#${ED}} != "${INSDES
vecho "You should not use \${D} or \${ED} with helpers." 1>&2
vecho " --> ${INSDESTTREE}" 1>&2
vecho "-------------------------------------------------------" 1>&2
- helpers_die "${0##*/} used with \${D} or \${ED}"
+ helpers_die "${helper} used with \${D} or \${ED}"
exit 1
fi
+# PREFIX LOCAL: check for usage with EPREFIX
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ vecho "-------------------------------------------------------" 1>&2
+ vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ vecho " --> ${INSDESTTREE}" 1>&2
+ vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
+# END PREFIX LOCAL
case "$EAPI" in
0|1|2|3)
diff --cc bin/ebuild-helpers/dosed
index 0f2f637,24ec205..3e903fc
--- a/bin/ebuild-helpers/dosed
+++ b/bin/ebuild-helpers/dosed
@@@ -1,7 -1,16 +1,16 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ case "${EAPI}" in
+ 0|1|2|3)
+ ;;
+ *)
+ die "'${0##*/}' has been banned for EAPI '$EAPI'"
+ exit 1
+ ;;
+ esac
+
if [[ $# -lt 1 ]] ; then
echo "!!! ${0##*/}: at least one argument needed" >&2
exit 1
diff --cc bin/ebuild-helpers/prepalldocs
index 648efba,c9226d6..a890801
--- a/bin/ebuild-helpers/prepalldocs
+++ b/bin/ebuild-helpers/prepalldocs
@@@ -1,9 -1,18 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ case "${EAPI}" in
+ 0|1|2|3)
+ ;;
+ *)
+ die "'${0##*/}' has been banned for EAPI '$EAPI'"
+ exit 1
+ ;;
+ esac
+
if [[ -n $1 ]] ; then
vecho "${0##*/}: invalid usage; takes no arguments" 1>&2
fi
diff --cc bin/env-update
index 32c251e,cee3fd6..ef3433f
--- a/bin/env-update
+++ b/bin/env-update
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/fixpackages
index 57b6db3,da08520..3f37e3f
--- a/bin/fixpackages
+++ b/bin/fixpackages
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/glsa-check
index f513b58,eddc905..7b8e98a
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2008-2011 Gentoo Foundation
+ # Copyright 2008-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/phase-functions.sh
index fdf17d8,68a33a8..37ee7f2
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -29,7 -29,7 +29,7 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTDIR PORTDIR_OVERLAY \
PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
- __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS"
++ __PORTAGE_HELPER __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc bin/phase-helpers.sh
index 10a7b3e,0587991..df51a5b
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -611,10 -688,10 +688,10 @@@ has_version()
;;
esac
if [[ -n $PORTAGE_IPC_DAEMON ]] ; then
- "$PORTAGE_BIN_PATH"/ebuild-ipc has_version "${eroot}" "$1"
+ "$PORTAGE_BIN_PATH"/ebuild-ipc has_version "${eroot}" "${atom}"
else
PYTHONPATH=${PORTAGE_PYM_PATH}${PYTHONPATH:+:}${PYTHONPATH} \
- "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" has_version "${eroot}" "$1"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}/portageq" has_version "${eroot}" "${atom}"
++ "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" has_version "${eroot}" "${atom}"
fi
local retval=$?
case "${retval}" in
@@@ -646,10 -745,10 +745,10 @@@ best_version()
;;
esac
if [[ -n $PORTAGE_IPC_DAEMON ]] ; then
- "$PORTAGE_BIN_PATH"/ebuild-ipc best_version "${eroot}" "$1"
+ "$PORTAGE_BIN_PATH"/ebuild-ipc best_version "${eroot}" "${atom}"
else
PYTHONPATH=${PORTAGE_PYM_PATH}${PYTHONPATH:+:}${PYTHONPATH} \
- "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" best_version "${eroot}" "$1"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}/portageq" best_version "${eroot}" "${atom}"
++ "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" best_version "${eroot}" "${atom}"
fi
local retval=$?
case "${retval}" in
diff --cc bin/regenworld
index 44e821b,a283344..0b7da44
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,19 -1,17 +1,15 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
import sys
- # for an explanation on this logic, see pym/_emerge/__init__.py
- from os import environ as ose
from os import path as osp
- if ose.__contains__("PORTAGE_PYTHONPATH"):
- sys.path.insert(0, ose["PORTAGE_PYTHONPATH"])
- else:
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp. realpath(__file__))), "pym"))
- import portage
+ pym_path = osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym")
+ sys.path.insert(0, pym_path)
+ import portage
from portage import os
-from portage._sets.files import StaticFileSet, WorldSelectedSet
-
import re
import tempfile
import textwrap
diff --cc bin/save-ebuild-env.sh
index 1169c09,6d6ed41..74193e7
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_PREFIX_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# @FUNCTION: save_ebuild_env
diff --cc pym/_emerge/EbuildBuildDir.py
index 5e2c034,5d6a262..7cabee0
--- a/pym/_emerge/EbuildBuildDir.py
+++ b/pym/_emerge/EbuildBuildDir.py
@@@ -5,10 -5,8 +5,9 @@@ from _emerge.AsynchronousLock import As
import portage
from portage import os
+import sys
from portage.exception import PortageException
from portage.util.SlotObject import SlotObject
- import errno
class EbuildBuildDir(SlotObject):
diff --cc pym/portage/package/ebuild/doebuild.py
index ff8ad12,471a5da..1c5ac38
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -131,13 -147,16 +149,13 @@@ def _doebuild_path(settings, eapi=None)
eprefix = settings["EPREFIX"]
prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
-
- prefixes = []
- if eprefix:
- prefixes.append(eprefix)
- prefixes.append("/")
-
+ # PREFIX LOCAL: use DEFAULT_PATH and EXTRA_PATH from make.globals
+ defaultpath = [x for x in settings.get("DEFAULT_PATH", "").split(":") if x]
+ extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
path = []
- if eapi not in (None, "0", "1", "2", "3"):
- path.append(os.path.join(portage_bin_path, "ebuild-helpers", "4"))
+ if settings.get("USERLAND", "GNU") != "GNU":
+ path.append(os.path.join(portage_bin_path, "ebuild-helpers", "bsd"))
path.append(os.path.join(portage_bin_path, "ebuild-helpers"))
path.extend(prerootpath)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-08-27 6:44 Fabian Groffen
From: Fabian Groffen @ 2012-08-27 6:44 UTC (permalink / raw
To: gentoo-commits
commit: a7eefaa55f54e73995e5e0b893b285f905177a40
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 27 06:43:45 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Aug 27 06:43:45 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=a7eefaa5
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dispatch-conf
bin/ebuild-helpers/doins
bin/ebuild-helpers/newbin
bin/ebuild-helpers/newconfd
bin/ebuild-helpers/newdoc
bin/ebuild-helpers/newenvd
bin/ebuild-helpers/newexe
bin/ebuild-helpers/newinitd
bin/ebuild-helpers/newins
bin/ebuild-helpers/newlib.a
bin/ebuild-helpers/newlib.so
bin/ebuild-helpers/newman
bin/ebuild-helpers/newsbin
bin/emerge-webrsync
bin/isolated-functions.sh
bin/phase-helpers.sh
bin/dispatch-conf | 10 +-
bin/ebuild-helpers/doins | 4 +-
bin/ebuild-helpers/newbin | 23 +-
bin/ebuild-helpers/newconfd | 23 +-
bin/ebuild-helpers/newdoc | 23 +-
bin/ebuild-helpers/newenvd | 23 +-
bin/ebuild-helpers/newexe | 23 +-
bin/ebuild-helpers/newinitd | 23 +-
bin/ebuild-helpers/newins | 73 ++-
bin/ebuild-helpers/newlib.a | 23 +-
bin/ebuild-helpers/newlib.so | 23 +-
bin/ebuild-helpers/newman | 23 +-
bin/ebuild-helpers/newsbin | 23 +-
bin/ebuild-helpers/prepstrip | 30 +-
bin/ebuild.sh | 4 +-
bin/egencache | 2 +-
bin/emerge-webrsync | 127 +++--
bin/isolated-functions.sh | 4 +-
bin/phase-functions.sh | 12 +-
bin/phase-helpers.sh | 4 +-
cnf/metadata.dtd | 7 +-
man/make.conf.5 | 24 +-
misc/emerge-delta-webrsync | 613 ++++++++++++++++++++
mkrelease.sh | 2 +-
pym/_emerge/BinpkgFetcher.py | 10 +-
pym/_emerge/EbuildFetcher.py | 6 +-
pym/_emerge/EbuildMetadataPhase.py | 13 +-
pym/_emerge/Package.py | 19 +-
pym/_emerge/PollScheduler.py | 18 +-
pym/_emerge/Scheduler.py | 9 +-
pym/_emerge/SpawnProcess.py | 17 +-
pym/portage/checksum.py | 58 ++
pym/portage/dbapi/__init__.py | 16 +-
pym/portage/dbapi/bintree.py | 7 +-
pym/portage/dbapi/porttree.py | 4 +-
pym/portage/dbapi/virtual.py | 4 +-
pym/portage/dep/dep_check.py | 19 +-
pym/portage/eapi.py | 11 +-
pym/portage/getbinpkg.py | 8 +-
pym/portage/manifest.py | 15 +-
.../package/ebuild/_config/KeywordsManager.py | 42 ++-
pym/portage/package/ebuild/_config/UseManager.py | 79 +++-
.../package/ebuild/_config/special_env_vars.py | 1 +
pym/portage/package/ebuild/config.py | 21 +-
pym/portage/package/ebuild/digestcheck.py | 15 +-
pym/portage/package/ebuild/doebuild.py | 18 +-
pym/portage/package/ebuild/fetch.py | 23 +-
pym/portage/package/ebuild/getmaskingstatus.py | 7 +-
pym/portage/process.py | 6 +-
pym/portage/tests/ebuild/test_doebuild_spawn.py | 2 +
pym/portage/tests/lint/test_bash_syntax.py | 12 +-
pym/portage/util/__init__.py | 7 +
pym/portage/util/_desktop_entry.py | 60 ++-
pym/portage/util/_eventloop/EventLoop.py | 65 ++-
pym/portage/versions.py | 27 +-
55 files changed, 1295 insertions(+), 470 deletions(-)
diff --cc bin/dispatch-conf
index de8d85d,35979db..d671873
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild-helpers/doins
index b9c95ed,fd77934..ca6b0ba
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ ${0##*/} == dodoc ]] ; then
if [ $# -eq 0 ] ; then
@@@ -38,18 -38,9 +38,18 @@@ if [[ ${INSDESTTREE#${ED}} != "${INSDES
helpers_die "${0##*/} used with \${D} or \${ED}"
exit 1
fi
+# PREFIX LOCAL: check for usage with EPREFIX
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ vecho "-------------------------------------------------------" 1>&2
+ vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ vecho " --> ${INSDESTTREE}" 1>&2
+ vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
+# END PREFIX LOCAL
case "$EAPI" in
- 0|1|2|3|3_pre2)
+ 0|1|2|3)
PRESERVE_SYMLINKS=n
;;
*)
diff --cc bin/ebuild-helpers/newins
index 45c3e18,2dc041d..e9bf34b
--- a/bin/ebuild-helpers/newins
+++ b/bin/ebuild-helpers/newins
@@@ -1,16 -1,13 +1,14 @@@
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ #!/bin/bash
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- if [[ -z ${T} ]] || [[ -z ${2} ]] ; then
- helpers_die "${0##*/}: Need two arguments, old file and new file"
- exit 1
- fi
+ helper=${0##*/}
- if [ ! -e "$1" ] ; then
- helpers_die "!!! ${0##*/}: $1 does not exist"
+ if [[ -z ${T} ]] || [[ -z ${2} ]] ; then
+ helpers_die "${helper}: Need two arguments, old file and new file"
exit 1
fi
diff --cc bin/emerge-webrsync
index 581dc54,a962ab5..cdcf716
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -1,5 -1,5 +1,6 @@@
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ #!/bin/bash
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Author: Karl Trygve Kalleberg <karltk@gentoo.org>
# Rewritten from the old, Perl-based emerge-webrsync script
@@@ -39,19 -39,14 +40,20 @@@ els
eecho "could not find 'portageq'; aborting"
exit 1
fi
- eval $("${portageq}" envvar -v FEATURES FETCHCOMMAND GENTOO_MIRRORS \
- PORTAGE_BIN_PATH PORTAGE_GPG_DIR \
- PORTAGE_NICENESS PORTAGE_RSYNC_EXTRA_OPTS PORTAGE_TMPDIR PORTDIR \
- SYNC http_proxy ftp_proxy \
- EPREFIX PORTAGE_USER PORTAGE_GROUP)
- DISTDIR="${PORTAGE_TMPDIR}/emerge-webrsync"
+ eval $("${portageq}" envvar -v DISTDIR EPREFIX FEATURES \
+ FETCHCOMMAND GENTOO_MIRRORS \
+ PORTAGE_BIN_PATH PORTAGE_CONFIGROOT PORTAGE_GPG_DIR \
+ PORTAGE_NICENESS PORTAGE_RSYNC_EXTRA_OPTS \
+ PORTAGE_RSYNC_OPTS PORTAGE_TMPDIR PORTDIR \
- SYNC USERLAND http_proxy ftp_proxy)
++ SYNC USERLAND http_proxy ftp_proxy \
++ PORTAGE_USER PORTAGE_GROUP)
export http_proxy ftp_proxy
+# PREFIX LOCAL: use Prefix servers, just because we want this and infra
- # just can't support us yet
- GENTOO_MIRRORS="http://gentoo-mirror1.prefix.freens.org http://gentoo-mirror2.prefix.freens.org"
++# can't support us yet
++GENTOO_MIRRORS="http://gentoo-mirror1.prefix.freens.org"
+# END PREFIX LOCAL
+
# If PORTAGE_NICENESS is overriden via the env then it will
# still pass through the portageq call and override properly.
if [ -n "${PORTAGE_NICENESS}" ]; then
@@@ -191,11 -191,21 +198,23 @@@ sync_local()
vecho "Syncing local tree ..."
- local ownership="portage:portage"
++ # PREFIX LOCAL: use PORTAGE_USER and PORTAGE_GROUP
++ local ownership="${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage}"
++ # END PREFIX LOCAL
+ if has usersync ${FEATURES} ; then
+ case "${USERLAND}" in
+ BSD)
+ ownership=$(stat -f '%Su:%Sg' "${PORTDIR}")
+ ;;
+ *)
+ ownership=$(stat -c '%U:%G' "${PORTDIR}")
+ ;;
+ esac
+ fi
+
if type -P tarsync > /dev/null ; then
- # PREFIX LOCAL: use PORTAGE_USER and PORTAGE_GROUP
- local chown_opts="-o ${PORTAGE_USER:-portage} -g ${PORTAGE_GROUP:-portage}"
- chown ${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage} portage > /dev/null 2>&1 || chown_opts=""
- # END PREFIX LOCAL
+ local chown_opts="-o ${ownership%:*} -g ${ownership#*:}"
+ chown ${ownership} "${PORTDIR}" > /dev/null 2>&1 || chown_opts=""
if ! tarsync $(vvecho -v) -s 1 ${chown_opts} \
-e /distfiles -e /packages -e /local "${file}" "${PORTDIR}"; then
eecho "tarsync failed; tarball is corrupt? (${file})"
diff --cc bin/isolated-functions.sh
index 3beb14a,d33c0b6..79801e5
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# We need this next line for "die" and "assert". It expands
diff --cc bin/phase-helpers.sh
index beca5ad,a00475c..10a7b3e
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PREFIX_PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
export DESTTREE=/usr
diff --cc pym/portage/package/ebuild/fetch.py
index 63c792a,260bf10..7f76bb2
--- a/pym/portage/package/ebuild/fetch.py
+++ b/pym/portage/package/ebuild/fetch.py
@@@ -26,10 -26,9 +26,10 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage import OrderedDict, os, selinux, shutil, _encodings, \
_shell_quote, _unicode_encode
from portage.checksum import (hashfunc_map, perform_md5, verify_all,
- _filter_unaccelarated_hashes)
+ _filter_unaccelarated_hashes, _hash_filter, _apply_hash_filter)
from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
GLOBAL_CONFIG_PATH
+from portage.const import rootgid
from portage.data import portage_gid, portage_uid, secpass, userpriv_groups
from portage.exception import FileNotFound, OperationNotPermitted, \
PortageException, TryAgain
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-08-12 7:50 Fabian Groffen
From: Fabian Groffen @ 2012-08-12 7:50 UTC (permalink / raw
To: gentoo-commits
commit: 20e5636d6a423452434ede1819e120133a9229ed
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Aug 12 07:49:56 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Aug 12 07:49:56 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=20e5636d
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/chpathtool.py
bin/repoman
Makefile | 1 -
bin/chpathtool.py | 10 +-
bin/ebuild-helpers/prepstrip | 2 +-
bin/emaint | 659 +-----------------------
bin/repoman | 5 +-
cnf/make.conf | 5 +-
cnf/make.globals | 2 +-
man/ebuild.5 | 2 +-
man/emaint.1 | 28 +-
man/make.conf.5 | 7 +
man/portage.5 | 13 +-
man/repoman.1 | 3 -
pym/_emerge/main.py | 24 +-
pym/portage/_selinux.py | 40 +-
pym/portage/const.py | 2 +-
pym/portage/dbapi/bintree.py | 56 ++-
pym/portage/dbapi/vartree.py | 35 +-
pym/portage/emaint/__init__.py | 7 +
pym/portage/emaint/defaults.py | 18 +
pym/portage/emaint/main.py | 218 ++++++++
pym/portage/emaint/module.py | 194 +++++++
pym/portage/emaint/modules/__init__.py | 7 +
pym/portage/emaint/modules/binhost/__init__.py | 22 +
pym/portage/emaint/modules/binhost/binhost.py | 163 ++++++
pym/portage/emaint/modules/config/__init__.py | 22 +
pym/portage/emaint/modules/config/config.py | 101 ++++
pym/portage/emaint/modules/logs/__init__.py | 51 ++
pym/portage/emaint/modules/logs/logs.py | 103 ++++
pym/portage/emaint/modules/move/__init__.py | 33 ++
pym/portage/emaint/modules/move/move.py | 162 ++++++
pym/portage/emaint/modules/resume/__init__.py | 22 +
pym/portage/emaint/modules/resume/resume.py | 58 ++
pym/portage/emaint/modules/world/__init__.py | 22 +
pym/portage/emaint/modules/world/world.py | 89 ++++
pym/portage/emaint/progress.py | 61 +++
pym/portage/output.py | 34 +-
pym/portage/package/ebuild/doebuild.py | 29 +
pym/portage/util/_desktop_entry.py | 6 +-
pym/portage/util/_urlopen.py | 99 +++-
pym/portage/util/movefile.py | 55 ++-
pym/portage/util/whirlpool.py | 2 +
pym/repoman/checks.py | 22 +-
42 files changed, 1715 insertions(+), 779 deletions(-)
diff --cc bin/chpathtool.py
index 1ae4e7c,85e608e..be42bb0
--- a/bin/chpathtool.py
+++ b/bin/chpathtool.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2011 Gentoo Foundation
+ # Copyright 2011-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import io
diff --cc bin/emaint
index 37a3248,bee46c4..ab21742
--- a/bin/emaint
+++ b/bin/emaint
@@@ -2,649 -2,44 +2,46 @@@
# Copyright 2005-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+ """'The emaint program provides an interface to system health
+ checks and maintenance.
+ """
+
from __future__ import print_function
- import errno
- import re
- import signal
- import stat
import sys
- import textwrap
- import time
- from optparse import OptionParser, OptionValueError
+ import errno
+ # This block ensures that ^C interrupts are handled quietly.
+ try:
+ import signal
+
+ def exithandler(signum,frame):
+ signal.signal(signal.SIGINT, signal.SIG_IGN)
+ signal.signal(signal.SIGTERM, signal.SIG_IGN)
+ sys.exit(1)
+
+ signal.signal(signal.SIGINT, exithandler)
+ signal.signal(signal.SIGTERM, exithandler)
+ signal.signal(signal.SIGPIPE, signal.SIG_DFL)
+
+ except KeyboardInterrupt:
+ sys.exit(1)
-try:
- import portage
-except ImportError:
- from os import path as osp
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
+# for an explanation on this logic, see pym/_emerge/__init__.py
+import os
+import sys
+if os.environ.__contains__("PORTAGE_PYTHONPATH"):
+ sys.path.insert(0, os.environ["PORTAGE_PYTHONPATH"])
+else:
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "pym"))
+import portage
- from portage import os
- from portage.util import writemsg
-
- if sys.hexversion >= 0x3000000:
- long = int
-
- class WorldHandler(object):
-
- short_desc = "Fix problems in the world file"
-
- def name():
- return "world"
- name = staticmethod(name)
-
- def __init__(self):
- self.invalid = []
- self.not_installed = []
- self.okay = []
- from portage._sets import load_default_config
- setconfig = load_default_config(portage.settings,
- portage.db[portage.settings['EROOT']])
- self._sets = setconfig.getSets()
-
- def _check_world(self, onProgress):
- eroot = portage.settings['EROOT']
- self.world_file = os.path.join(eroot, portage.const.WORLD_FILE)
- self.found = os.access(self.world_file, os.R_OK)
- vardb = portage.db[eroot]["vartree"].dbapi
-
- from portage._sets import SETPREFIX
- sets = self._sets
- world_atoms = list(sets["selected"])
- maxval = len(world_atoms)
- if onProgress:
- onProgress(maxval, 0)
- for i, atom in enumerate(world_atoms):
- if not isinstance(atom, portage.dep.Atom):
- if atom.startswith(SETPREFIX):
- s = atom[len(SETPREFIX):]
- if s in sets:
- self.okay.append(atom)
- else:
- self.not_installed.append(atom)
- else:
- self.invalid.append(atom)
- if onProgress:
- onProgress(maxval, i+1)
- continue
- okay = True
- if not vardb.match(atom):
- self.not_installed.append(atom)
- okay = False
- if okay:
- self.okay.append(atom)
- if onProgress:
- onProgress(maxval, i+1)
-
- def check(self, onProgress=None):
- self._check_world(onProgress)
- errors = []
- if self.found:
- errors += ["'%s' is not a valid atom" % x for x in self.invalid]
- errors += ["'%s' is not installed" % x for x in self.not_installed]
- else:
- errors.append(self.world_file + " could not be opened for reading")
- return errors
-
- def fix(self, onProgress=None):
- world_set = self._sets["selected"]
- world_set.lock()
- try:
- world_set.load() # maybe it's changed on disk
- before = set(world_set)
- self._check_world(onProgress)
- after = set(self.okay)
- errors = []
- if before != after:
- try:
- world_set.replace(self.okay)
- except portage.exception.PortageException:
- errors.append("%s could not be opened for writing" % \
- self.world_file)
- return errors
- finally:
- world_set.unlock()
-
- class BinhostHandler(object):
-
- short_desc = "Generate a metadata index for binary packages"
-
- def name():
- return "binhost"
- name = staticmethod(name)
-
- def __init__(self):
- eroot = portage.settings['EROOT']
- self._bintree = portage.db[eroot]["bintree"]
- self._bintree.populate()
- self._pkgindex_file = self._bintree._pkgindex_file
- self._pkgindex = self._bintree._load_pkgindex()
-
- def _need_update(self, cpv, data):
-
- if "MD5" not in data:
- return True
-
- size = data.get("SIZE")
- if size is None:
- return True
-
- mtime = data.get("MTIME")
- if mtime is None:
- return True
-
- pkg_path = self._bintree.getname(cpv)
- try:
- s = os.lstat(pkg_path)
- except OSError as e:
- if e.errno not in (errno.ENOENT, errno.ESTALE):
- raise
- # We can't update the index for this one because
- # it disappeared.
- return False
-
- try:
- if long(mtime) != s[stat.ST_MTIME]:
- return True
- if long(size) != long(s.st_size):
- return True
- except ValueError:
- return True
-
- return False
-
- def check(self, onProgress=None):
- missing = []
- cpv_all = self._bintree.dbapi.cpv_all()
- cpv_all.sort()
- maxval = len(cpv_all)
- if onProgress:
- onProgress(maxval, 0)
- pkgindex = self._pkgindex
- missing = []
- metadata = {}
- for d in pkgindex.packages:
- metadata[d["CPV"]] = d
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
- if not d or self._need_update(cpv, d):
- missing.append(cpv)
- if onProgress:
- onProgress(maxval, i+1)
- errors = ["'%s' is not in Packages" % cpv for cpv in missing]
- stale = set(metadata).difference(cpv_all)
- for cpv in stale:
- errors.append("'%s' is not in the repository" % cpv)
- return errors
-
- def fix(self, onProgress=None):
- bintree = self._bintree
- cpv_all = self._bintree.dbapi.cpv_all()
- cpv_all.sort()
- missing = []
- maxval = 0
- if onProgress:
- onProgress(maxval, 0)
- pkgindex = self._pkgindex
- missing = []
- metadata = {}
- for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
- if not d or self._need_update(cpv, d):
- missing.append(cpv)
-
- stale = set(metadata).difference(cpv_all)
- if missing or stale:
- from portage import locks
- pkgindex_lock = locks.lockfile(
- self._pkgindex_file, wantnewlockfile=1)
- try:
- # Repopulate with lock held.
- bintree._populate()
- cpv_all = self._bintree.dbapi.cpv_all()
- cpv_all.sort()
-
- pkgindex = bintree._load_pkgindex()
- self._pkgindex = pkgindex
-
- metadata = {}
- for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- # Recount missing packages, with lock held.
- del missing[:]
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
- if not d or self._need_update(cpv, d):
- missing.append(cpv)
-
- maxval = len(missing)
- for i, cpv in enumerate(missing):
- try:
- metadata[cpv] = bintree._pkgindex_entry(cpv)
- except portage.exception.InvalidDependString:
- writemsg("!!! Invalid binary package: '%s'\n" % \
- bintree.getname(cpv), noiselevel=-1)
-
- if onProgress:
- onProgress(maxval, i+1)
-
- for cpv in set(metadata).difference(
- self._bintree.dbapi.cpv_all()):
- del metadata[cpv]
-
- # We've updated the pkgindex, so set it to
- # repopulate when necessary.
- bintree.populated = False
-
- del pkgindex.packages[:]
- pkgindex.packages.extend(metadata.values())
- from portage.util import atomic_ofstream
- f = atomic_ofstream(self._pkgindex_file)
- try:
- self._pkgindex.write(f)
- finally:
- f.close()
- finally:
- locks.unlockfile(pkgindex_lock)
-
- if onProgress:
- if maxval == 0:
- maxval = 1
- onProgress(maxval, maxval)
- return None
-
- class MoveHandler(object):
-
- def __init__(self, tree, porttree):
- self._tree = tree
- self._portdb = porttree.dbapi
- self._update_keys = ["DEPEND", "RDEPEND", "PDEPEND", "PROVIDE"]
- self._master_repo = \
- self._portdb.getRepositoryName(self._portdb.porttree_root)
-
- def _grab_global_updates(self):
- from portage.update import grab_updates, parse_updates
- retupdates = {}
- errors = []
-
- for repo_name in self._portdb.getRepositories():
- repo = self._portdb.getRepositoryPath(repo_name)
- updpath = os.path.join(repo, "profiles", "updates")
- if not os.path.isdir(updpath):
- continue
-
- try:
- rawupdates = grab_updates(updpath)
- except portage.exception.DirectoryNotFound:
- rawupdates = []
- upd_commands = []
- for mykey, mystat, mycontent in rawupdates:
- commands, errors = parse_updates(mycontent)
- upd_commands.extend(commands)
- errors.extend(errors)
- retupdates[repo_name] = upd_commands
-
- if self._master_repo in retupdates:
- retupdates['DEFAULT'] = retupdates[self._master_repo]
-
- return retupdates, errors
+ from portage.emaint.main import emaint_main
- def check(self, onProgress=None):
- allupdates, errors = self._grab_global_updates()
- # Matching packages and moving them is relatively fast, so the
- # progress bar is updated in indeterminate mode.
- match = self._tree.dbapi.match
- aux_get = self._tree.dbapi.aux_get
- if onProgress:
- onProgress(0, 0)
- for repo, updates in allupdates.items():
- if repo == 'DEFAULT':
- continue
- if not updates:
- continue
-
- def repo_match(repository):
- return repository == repo or \
- (repo == self._master_repo and \
- repository not in allupdates)
-
- for i, update_cmd in enumerate(updates):
- if update_cmd[0] == "move":
- origcp, newcp = update_cmd[1:]
- for cpv in match(origcp):
- if repo_match(aux_get(cpv, ["repository"])[0]):
- errors.append("'%s' moved to '%s'" % (cpv, newcp))
- elif update_cmd[0] == "slotmove":
- pkg, origslot, newslot = update_cmd[1:]
- for cpv in match(pkg):
- slot, prepo = aux_get(cpv, ["SLOT", "repository"])
- if slot == origslot and repo_match(prepo):
- errors.append("'%s' slot moved from '%s' to '%s'" % \
- (cpv, origslot, newslot))
- if onProgress:
- onProgress(0, 0)
-
- # Searching for updates in all the metadata is relatively slow, so this
- # is where the progress bar comes out of indeterminate mode.
- cpv_all = self._tree.dbapi.cpv_all()
- cpv_all.sort()
- maxval = len(cpv_all)
- aux_update = self._tree.dbapi.aux_update
- meta_keys = self._update_keys + ['repository', 'EAPI']
- if onProgress:
- onProgress(maxval, 0)
- for i, cpv in enumerate(cpv_all):
- metadata = dict(zip(meta_keys, aux_get(cpv, meta_keys)))
- eapi = metadata.pop('EAPI')
- repository = metadata.pop('repository')
- try:
- updates = allupdates[repository]
- except KeyError:
- try:
- updates = allupdates['DEFAULT']
- except KeyError:
- continue
- if not updates:
- continue
- metadata_updates = \
- portage.update_dbentries(updates, metadata, eapi=eapi)
- if metadata_updates:
- errors.append("'%s' has outdated metadata" % cpv)
- if onProgress:
- onProgress(maxval, i+1)
- return errors
-
- def fix(self, onProgress=None):
- allupdates, errors = self._grab_global_updates()
- # Matching packages and moving them is relatively fast, so the
- # progress bar is updated in indeterminate mode.
- move = self._tree.dbapi.move_ent
- slotmove = self._tree.dbapi.move_slot_ent
- if onProgress:
- onProgress(0, 0)
- for repo, updates in allupdates.items():
- if repo == 'DEFAULT':
- continue
- if not updates:
- continue
-
- def repo_match(repository):
- return repository == repo or \
- (repo == self._master_repo and \
- repository not in allupdates)
-
- for i, update_cmd in enumerate(updates):
- if update_cmd[0] == "move":
- move(update_cmd, repo_match=repo_match)
- elif update_cmd[0] == "slotmove":
- slotmove(update_cmd, repo_match=repo_match)
- if onProgress:
- onProgress(0, 0)
-
- # Searching for updates in all the metadata is relatively slow, so this
- # is where the progress bar comes out of indeterminate mode.
- self._tree.dbapi.update_ents(allupdates, onProgress=onProgress)
- return errors
-
- class MoveInstalled(MoveHandler):
-
- short_desc = "Perform package move updates for installed packages"
-
- def name():
- return "moveinst"
- name = staticmethod(name)
- def __init__(self):
- eroot = portage.settings['EROOT']
- MoveHandler.__init__(self, portage.db[eroot]["vartree"], portage.db[eroot]["porttree"])
-
- class MoveBinary(MoveHandler):
-
- short_desc = "Perform package move updates for binary packages"
-
- def name():
- return "movebin"
- name = staticmethod(name)
- def __init__(self):
- eroot = portage.settings['EROOT']
- MoveHandler.__init__(self, portage.db[eroot]["bintree"], portage.db[eroot]['porttree'])
-
- class VdbKeyHandler(object):
- def name():
- return "vdbkeys"
- name = staticmethod(name)
-
- def __init__(self):
- self.list = portage.db[portage.settings["EROOT"]]["vartree"].dbapi.cpv_all()
- self.missing = []
- self.keys = ["HOMEPAGE", "SRC_URI", "KEYWORDS", "DESCRIPTION"]
-
- for p in self.list:
- mydir = os.path.join(portage.settings["EROOT"], portage.const.VDB_PATH, p)+os.sep
- ismissing = True
- for k in self.keys:
- if os.path.exists(mydir+k):
- ismissing = False
- break
- if ismissing:
- self.missing.append(p)
-
- def check(self):
- return ["%s has missing keys" % x for x in self.missing]
-
- def fix(self):
-
- errors = []
-
- for p in self.missing:
- mydir = os.path.join(portage.settings["EROOT"], portage.const.VDB_PATH, p)+os.sep
- if not os.access(mydir+"environment.bz2", os.R_OK):
- errors.append("Can't access %s" % (mydir+"environment.bz2"))
- elif not os.access(mydir, os.W_OK):
- errors.append("Can't create files in %s" % mydir)
- else:
- env = os.popen("bzip2 -dcq "+mydir+"environment.bz2", "r")
- envlines = env.read().split("\n")
- env.close()
- for k in self.keys:
- s = [l for l in envlines if l.startswith(k+"=")]
- if len(s) > 1:
- errors.append("multiple matches for %s found in %senvironment.bz2" % (k, mydir))
- elif len(s) == 0:
- s = ""
- else:
- s = s[0].split("=",1)[1]
- s = s.lstrip("$").strip("\'\"")
- s = re.sub("(\\\\[nrt])+", " ", s)
- s = " ".join(s.split()).strip()
- if s != "":
- try:
- keyfile = open(mydir+os.sep+k, "w")
- keyfile.write(s+"\n")
- keyfile.close()
- except (IOError, OSError) as e:
- errors.append("Could not write %s, reason was: %s" % (mydir+k, e))
-
- return errors
-
- class ProgressHandler(object):
- def __init__(self):
- self.curval = 0
- self.maxval = 0
- self.last_update = 0
- self.min_display_latency = 0.2
-
- def onProgress(self, maxval, curval):
- self.maxval = maxval
- self.curval = curval
- cur_time = time.time()
- if cur_time - self.last_update >= self.min_display_latency:
- self.last_update = cur_time
- self.display()
-
- def display(self):
- raise NotImplementedError(self)
-
- class CleanResume(object):
-
- short_desc = "Discard emerge --resume merge lists"
-
- def name():
- return "cleanresume"
- name = staticmethod(name)
-
- def check(self, onProgress=None):
- messages = []
- mtimedb = portage.mtimedb
- resume_keys = ("resume", "resume_backup")
- maxval = len(resume_keys)
- if onProgress:
- onProgress(maxval, 0)
- for i, k in enumerate(resume_keys):
- try:
- d = mtimedb.get(k)
- if d is None:
- continue
- if not isinstance(d, dict):
- messages.append("unrecognized resume list: '%s'" % k)
- continue
- mergelist = d.get("mergelist")
- if mergelist is None or not hasattr(mergelist, "__len__"):
- messages.append("unrecognized resume list: '%s'" % k)
- continue
- messages.append("resume list '%s' contains %d packages" % \
- (k, len(mergelist)))
- finally:
- if onProgress:
- onProgress(maxval, i+1)
- return messages
-
- def fix(self, onProgress=None):
- delete_count = 0
- mtimedb = portage.mtimedb
- resume_keys = ("resume", "resume_backup")
- maxval = len(resume_keys)
- if onProgress:
- onProgress(maxval, 0)
- for i, k in enumerate(resume_keys):
- try:
- if mtimedb.pop(k, None) is not None:
- delete_count += 1
- finally:
- if onProgress:
- onProgress(maxval, i+1)
- if delete_count:
- mtimedb.commit()
-
- def emaint_main(myargv):
-
- # Similar to emerge, emaint needs a default umask so that created
- # files (such as the world file) have sane permissions.
- os.umask(0o22)
-
- # TODO: Create a system that allows external modules to be added without
- # the need for hard coding.
- modules = {
- "world" : WorldHandler,
- "binhost":BinhostHandler,
- "moveinst":MoveInstalled,
- "movebin":MoveBinary,
- "cleanresume":CleanResume
- }
-
- module_names = list(modules)
- module_names.sort()
- module_names.insert(0, "all")
-
- def exclusive(option, *args, **kw):
- var = kw.get("var", None)
- if var is None:
- raise ValueError("var not specified to exclusive()")
- if getattr(parser, var, ""):
- raise OptionValueError("%s and %s are exclusive options" % (getattr(parser, var), option))
- setattr(parser, var, str(option))
-
-
- usage = "usage: emaint [options] COMMAND"
-
- desc = "The emaint program provides an interface to system health " + \
- "checks and maintenance. See the emaint(1) man page " + \
- "for additional information about the following commands:"
-
- usage += "\n\n"
- for line in textwrap.wrap(desc, 65):
- usage += "%s\n" % line
- usage += "\n"
- usage += " %s" % "all".ljust(15) + \
- "Perform all supported commands\n"
- for m in module_names[1:]:
- usage += " %s%s\n" % (m.ljust(15), modules[m].short_desc)
-
- parser = OptionParser(usage=usage, version=portage.VERSION)
- parser.add_option("-c", "--check", help="check for problems",
- action="callback", callback=exclusive, callback_kwargs={"var":"action"})
- parser.add_option("-f", "--fix", help="attempt to fix problems",
- action="callback", callback=exclusive, callback_kwargs={"var":"action"})
- parser.action = None
-
-
- (options, args) = parser.parse_args(args=myargv)
- if len(args) != 1:
- parser.error("Incorrect number of arguments")
- if args[0] not in module_names:
- parser.error("%s target is not a known target" % args[0])
-
- if parser.action:
- action = parser.action
- else:
- print("Defaulting to --check")
- action = "-c/--check"
-
- if args[0] == "all":
- tasks = modules.values()
- else:
- tasks = [modules[args[0]]]
-
-
- if action == "-c/--check":
- func = "check"
- else:
- func = "fix"
-
- isatty = os.environ.get('TERM') != 'dumb' and sys.stdout.isatty()
- for task in tasks:
- inst = task()
- onProgress = None
- if isatty:
- progressBar = portage.output.TermProgressBar(title="Emaint", max_desc_length=26)
- progressHandler = ProgressHandler()
- onProgress = progressHandler.onProgress
- def display():
- progressBar.set(progressHandler.curval, progressHandler.maxval)
- progressHandler.display = display
- def sigwinch_handler(signum, frame):
- lines, progressBar.term_columns = \
- portage.output.get_term_size()
- signal.signal(signal.SIGWINCH, sigwinch_handler)
- progressBar.label(func + " " + inst.name())
- result = getattr(inst, func)(onProgress=onProgress)
- if isatty:
- # make sure the final progress is displayed
- progressHandler.display()
- print()
- signal.signal(signal.SIGWINCH, signal.SIG_DFL)
- if result:
- print()
- print("\n".join(result))
- print("\n")
-
- print("Finished")
-
- if __name__ == "__main__":
+ try:
emaint_main(sys.argv[1:])
+ except IOError as e:
+ if e.errno == errno.EACCES:
+ print("\nemaint: Need superuser access")
+ sys.exit(1)
+ else:
+ raise
diff --cc bin/repoman
index ed6187d,b50fac8..7223b8d
--- a/bin/repoman
+++ b/bin/repoman
@@@ -424,8 -420,6 +423,7 @@@ qawarnings = set(
"KEYWORDS.dropped",
"KEYWORDS.stupid",
"KEYWORDS.missing",
+"IUSE.invalid",
- "IUSE.undefined",
"PDEPEND.suspect",
"RDEPEND.implicit",
"RDEPEND.suspect",
diff --cc cnf/make.globals
index dc3eac9,ce35554..81c1282
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -144,11 -121,11 +144,11 @@@ PORTAGE_WORKDIR_MODE="0700
# Some defaults for elog
PORTAGE_ELOG_CLASSES="log warn error"
- PORTAGE_ELOG_SYSTEM="save_summary echo"
+ PORTAGE_ELOG_SYSTEM="save_summary:log,warn,error,qa echo"
-PORTAGE_ELOG_MAILURI="root"
+PORTAGE_ELOG_MAILURI="@rootuser@"
PORTAGE_ELOG_MAILSUBJECT="[portage] ebuild log for \${PACKAGE} on \${HOST}"
-PORTAGE_ELOG_MAILFROM="portage@localhost"
+PORTAGE_ELOG_MAILFROM="@portageuser@@localhost"
# Signing command used by repoman
PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
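For illustration, the per-module class override syntax used in the new PORTAGE_ELOG_SYSTEM default above can be combined with other elog modules in make.conf; the mail module shown here is only an example, not part of the commit:

PORTAGE_ELOG_SYSTEM="save_summary:log,warn,error,qa mail:error echo"
# save_summary receives log/warn/error/qa messages, mail only errors,
# and echo falls back to the classes listed in PORTAGE_ELOG_CLASSES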
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-07-19 16:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-07-19 16:25 UTC (permalink / raw
To: gentoo-commits
commit: d7b78626de394386472d9945afcbd236e2769929
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Jul 19 16:25:19 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Jul 19 16:25:19 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d7b78626
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild-helpers/prepstrip | 1 +
bin/emaint | 13 +-
bin/etc-update | 7 +
bin/portageq | 12 +-
bin/quickpkg | 13 +-
cnf/make.globals | 2 +-
man/ebuild.5 | 10 +-
man/emerge.1 | 4 +
man/portage.5 | 3 +
pym/_emerge/DependencyArg.py | 5 +-
pym/_emerge/FakeVartree.py | 5 +-
pym/_emerge/actions.py | 28 +++-
pym/_emerge/create_depgraph_params.py | 5 +
pym/_emerge/depgraph.py | 33 +++-
pym/_emerge/main.py | 6 +
pym/portage/_global_updates.py | 14 +-
pym/portage/checksum.py | 20 ++
pym/portage/dbapi/__init__.py | 35 +++--
pym/portage/dbapi/bintree.py | 22 ++-
pym/portage/dbapi/vartree.py | 30 +++-
pym/portage/dep/__init__.py | 89 ++++++++--
pym/portage/manifest.py | 7 +-
pym/portage/output.py | 42 ++++--
pym/portage/package/ebuild/fetch.py | 13 +-
pym/portage/tests/dep/testAtom.py | 4 +
pym/portage/tests/dep/test_best_match_to_list.py | 2 +
pym/portage/tests/dep/test_match_from_list.py | 2 +
pym/portage/tests/emerge/test_global_updates.py | 6 +-
pym/portage/tests/resolver/ResolverPlayground.py | 3 +
pym/portage/tests/resolver/test_complete_graph.py | 66 +++++++-
pym/portage/tests/runTests | 3 +
.../{util/_eventloop => tests/update}/__init__.py | 0
pym/portage/tests/{bin => update}/__test__ | 0
pym/portage/tests/update/test_move_ent.py | 109 ++++++++++++
pym/portage/tests/update/test_move_slot_ent.py | 154 ++++++++++++++++
pym/portage/tests/update/test_update_dbentry.py | 184 ++++++++++++++++++++
pym/portage/update.py | 44 ++++-
pym/portage/versions.py | 2 +
38 files changed, 875 insertions(+), 123 deletions(-)
diff --cc cnf/make.globals
index 7e61325,ada91f8..dc3eac9
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -146,12 -123,12 +146,12 @@@ PORTAGE_WORKDIR_MODE="0700
PORTAGE_ELOG_CLASSES="log warn error"
PORTAGE_ELOG_SYSTEM="save_summary echo"
-PORTAGE_ELOG_MAILURI="root"
+PORTAGE_ELOG_MAILURI="@rootuser@"
PORTAGE_ELOG_MAILSUBJECT="[portage] ebuild log for \${PACKAGE} on \${HOST}"
-PORTAGE_ELOG_MAILFROM="portage@localhost"
+PORTAGE_ELOG_MAILFROM="@portageuser@@localhost"
# Signing command used by repoman
- PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA512 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
+ PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA256 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
# *****************************
# ** DO NOT EDIT THIS FILE **
diff --cc pym/portage/package/ebuild/fetch.py
index 7ac25c4,60ed04d..63c792a
--- a/pym/portage/package/ebuild/fetch.py
+++ b/pym/portage/package/ebuild/fetch.py
@@@ -25,10 -25,10 +25,11 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage import OrderedDict, os, selinux, shutil, _encodings, \
_shell_quote, _unicode_encode
- from portage.checksum import hashfunc_map, perform_md5, verify_all
+ from portage.checksum import (hashfunc_map, perform_md5, verify_all,
+ _filter_unaccelarated_hashes)
from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
GLOBAL_CONFIG_PATH
+from portage.const import rootgid
from portage.data import portage_gid, portage_uid, secpass, userpriv_groups
from portage.exception import FileNotFound, OperationNotPermitted, \
PortageException, TryAgain
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-07-06 7:05 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-07-06 7:05 UTC (permalink / raw
To: gentoo-commits
commit: f028e1bfb2b97990b558b0c1a8782838aeae175c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 6 07:02:23 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jul 6 07:02:23 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f028e1bf
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dohtml.py
bin/ebuild
bin/ebuild-helpers/ecompressdir
bin/ebuild-helpers/prepstrip
bin/ebuild-ipc.py
bin/egencache
bin/emaint
bin/emerge
bin/misc-functions.sh
bin/quickpkg
pym/_emerge/actions.py
pym/portage/dbapi/vartree.py
pym/portage/dispatch_conf.py
NEWS | 9 +
RELEASE-NOTES | 12 +
bin/dohtml.py | 49 +-
bin/ebuild | 19 +-
bin/ebuild-helpers/ecompressdir | 50 +-
bin/ebuild-helpers/prepstrip | 115 ++-
bin/ebuild-ipc.py | 11 +-
bin/ebuild.sh | 35 +-
bin/egencache | 18 +-
bin/emaint | 9 +-
bin/emerge | 11 +-
bin/helper-functions.sh | 90 ++
bin/isolated-functions.sh | 8 +-
bin/misc-functions.sh | 78 +-
bin/quickpkg | 15 +-
bin/repoman | 81 +-
bin/save-ebuild-env.sh | 6 +-
cnf/make.globals | 13 +-
doc/package/ebuild.docbook | 1 +
doc/package/ebuild/eapi/4-python.docbook | 218 ++--
doc/package/ebuild/eapi/4-slot-abi.docbook | 70 ++
doc/portage.docbook | 1 +
man/ebuild.5 | 6 +-
man/emerge.1 | 39 +-
man/make.conf.5 | 62 +-
man/portage.5 | 26 +-
man/repoman.1 | 20 +-
pym/_emerge/AtomArg.py | 5 +-
pym/_emerge/BlockerCache.py | 9 +-
pym/_emerge/Dependency.py | 2 +-
pym/_emerge/DependencyArg.py | 14 +-
pym/_emerge/EbuildMetadataPhase.py | 105 ++-
pym/_emerge/EbuildPhase.py | 17 +-
pym/_emerge/FakeVartree.py | 34 +-
pym/_emerge/MetadataRegen.py | 8 +-
pym/_emerge/Package.py | 129 ++-
pym/_emerge/PollScheduler.py | 27 +-
pym/_emerge/QueueScheduler.py | 4 +-
pym/_emerge/Scheduler.py | 7 +-
pym/_emerge/SetArg.py | 5 +-
pym/_emerge/TaskScheduler.py | 6 +-
pym/_emerge/actions.py | 74 +-
pym/_emerge/create_depgraph_params.py | 14 +
pym/_emerge/depgraph.py | 1236 +++++++++++++-------
pym/_emerge/main.py | 38 +-
pym/_emerge/resolver/backtracking.py | 36 +-
pym/_emerge/resolver/output.py | 25 +-
pym/_emerge/resolver/slot_collision.py | 44 +-
pym/_emerge/unmerge.py | 22 +-
pym/portage/__init__.py | 39 +-
pym/portage/_sets/__init__.py | 12 +-
pym/portage/_sets/dbapi.py | 14 +-
pym/portage/_sets/files.py | 13 +-
pym/portage/_sets/security.py | 8 +-
pym/portage/cache/fs_template.py | 22 +-
pym/portage/cache/sqlite.py | 42 +-
pym/portage/cache/template.py | 14 +-
pym/portage/const.py | 30 +-
pym/portage/dbapi/__init__.py | 92 +-
pym/portage/dbapi/bintree.py | 41 +-
pym/portage/dbapi/porttree.py | 120 +--
pym/portage/dbapi/vartree.py | 219 +++-
pym/portage/dbapi/virtual.py | 55 +-
pym/portage/dep/__init__.py | 571 ++++++---
pym/portage/dep/_slot_abi.py | 92 ++
pym/portage/dep/dep_check.py | 51 +-
pym/portage/dispatch_conf.py | 10 +-
pym/portage/eapi.py | 44 +
pym/portage/getbinpkg.py | 19 +-
pym/portage/glsa.py | 15 +-
.../package/ebuild/_config/KeywordsManager.py | 21 +-
.../package/ebuild/_config/LicenseManager.py | 9 +-
.../package/ebuild/_config/LocationsManager.py | 71 +-
pym/portage/package/ebuild/_config/UseManager.py | 12 +-
pym/portage/package/ebuild/_config/helper.py | 4 +-
.../package/ebuild/_config/special_env_vars.py | 13 +-
pym/portage/package/ebuild/_eapi_invalid.py | 54 +
pym/portage/package/ebuild/_ipc/QueryCommand.py | 10 +-
pym/portage/package/ebuild/config.py | 143 ++-
pym/portage/package/ebuild/doebuild.py | 180 ++--
pym/portage/package/ebuild/fetch.py | 7 +-
pym/portage/package/ebuild/getmaskingstatus.py | 16 +-
pym/portage/repository/config.py | 32 +-
pym/portage/tests/dbapi/test_fakedbapi.py | 7 +-
pym/portage/tests/dep/testAtom.py | 19 +
pym/portage/tests/dep/testStandalone.py | 5 +-
pym/portage/tests/dep/test_isvalidatom.py | 9 +
pym/portage/tests/dep/test_match_from_list.py | 31 +-
pym/portage/tests/ebuild/test_config.py | 4 +-
pym/portage/tests/emerge/test_emerge_slot_abi.py | 201 ++++
pym/portage/tests/emerge/test_simple.py | 6 +-
pym/portage/tests/repoman/test_echangelog.py | 110 ++
pym/portage/tests/resolver/ResolverPlayground.py | 57 +-
pym/portage/tests/resolver/test_autounmask.py | 17 +-
pym/portage/tests/resolver/test_complete_graph.py | 4 +-
pym/portage/tests/resolver/test_merge_order.py | 6 +-
pym/portage/tests/resolver/test_simple.py | 23 +-
pym/portage/tests/resolver/test_slot_abi.py | 376 ++++++
.../tests/resolver/test_slot_abi_downgrade.py | 225 ++++
pym/portage/tests/resolver/test_slot_collisions.py | 12 +
pym/portage/tests/runTests | 11 +-
pym/portage/tests/util/test_digraph.py | 50 +-
pym/portage/tests/util/test_stackLists.py | 4 +-
pym/portage/util/__init__.py | 64 +-
pym/portage/util/_desktop_entry.py | 75 ++
pym/portage/util/_eventloop/EventLoop.py | 50 +-
pym/portage/util/_urlopen.py | 42 +
pym/portage/util/digraph.py | 15 +-
pym/portage/util/movefile.py | 6 +-
pym/portage/versions.py | 215 +++-
pym/portage/xml/metadata.py | 10 +-
pym/portage/xpak.py | 10 +-
pym/repoman/checks.py | 320 ++++--
pym/repoman/errors.py | 1 -
pym/repoman/herdbase.py | 6 +-
pym/repoman/utilities.py | 68 +-
runtests.sh | 2 +-
117 files changed, 5210 insertions(+), 1879 deletions(-)
diff --cc bin/dohtml.py
index 18095bd,3e80ef5..59d388b
--- a/bin/dohtml.py
+++ b/bin/dohtml.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/ebuild
index 599cc6c,65e5bef..04cb4c6
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/ebuild-helpers/ecompressdir
index 58c410e,6801a07..bf76b1f
--- a/bin/ebuild-helpers/ecompressdir
+++ b/bin/ebuild-helpers/ecompressdir
@@@ -2,7 -2,7 +2,7 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
++source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
if [[ -z $1 ]] ; then
helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/prepstrip
index 0042abe,85d5d6a..22cb72e
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/helper-functions.sh
++source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/helper-functions.sh
# avoid multiple calls to `has`. this creates things like:
# FEATURES_foo=false
diff --cc bin/ebuild-ipc.py
index f95bac8,3caf2d1..7533c8c
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010-2011 Gentoo Foundation
+ # Copyright 2010-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# This is a helper which ebuild processes can use
diff --cc bin/egencache
index 01f8e1d,a75a341..5edb180
--- a/bin/egencache
+++ b/bin/egencache
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2009-2011 Gentoo Foundation
+ # Copyright 2009-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
@@@ -42,8 -42,7 +42,8 @@@ from portage.manifest import guessManif
from portage.util import cmp_sort_key, writemsg_level
from portage import cpv_getkey
from portage.dep import Atom, isjustname
- from portage.versions import pkgcmp, pkgsplit, vercmp
+ from portage.versions import pkgsplit, vercmp
+from portage.const import EPREFIX
try:
from xml.etree import ElementTree
diff --cc bin/emaint
index 91d68d5,cf2ccb8..4d8b4f3
--- a/bin/emaint
+++ b/bin/emaint
@@@ -1,5 -1,6 +1,6 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # vim: noet :
+ # Copyright 2005-2012 Gentoo Foundation
+ # Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/emerge
index 324f673,a9a5643..85154fb
--- a/bin/emerge
+++ b/bin/emerge
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2006-2011 Gentoo Foundation
+ # Copyright 2006-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
@@@ -26,16 -27,21 +27,22 @@@ except KeyboardInterrupt
def debug_signal(signum, frame):
import pdb
pdb.set_trace()
- signal.signal(signal.SIGUSR1, debug_signal)
+
+ if platform.python_implementation() == 'Jython':
+ debug_signum = signal.SIGUSR2 # bug #424259
+ else:
+ debug_signum = signal.SIGUSR1
+
+ signal.signal(debug_signum, debug_signal)
-try:
- from _emerge.main import emerge_main
-except ImportError:
- from os import path as osp
- import sys
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- from _emerge.main import emerge_main
+# for an explanation on this logic, see pym/_emerge/__init__.py
+from os import environ as ose
+from os import path as osp
+if ose.__contains__("PORTAGE_PYTHONPATH"):
+ sys.path.insert(0, ose["PORTAGE_PYTHONPATH"])
+else:
+ sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
+from _emerge.main import emerge_main
if __name__ == "__main__":
import sys
diff --cc bin/misc-functions.sh
index a7a18e2,9eec8bb..496c2d7
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -153,11 -150,9 +153,11 @@@ prepcompress()
install_qa_check() {
local f i qa_var x
[[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
+ case "$EAPI" in 0|1|2) local EPREFIX= ED=${D} ;; esac
- cd "${ED}" || die "cd failed"
+ # PREFIX LOCAL: ED need not exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
# Merge QA_FLAGS_IGNORED and QA_DT_HASH into a single array, since
# QA_DT_HASH is deprecated.
@@@ -276,7 -271,7 +276,9 @@@
fi
# Now we look for all world writable files.
-- local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${ED}:- :")
++ # PREFIX LOCAL: keep the offset prefix in the reported paths
++ local unsafe_files=$(find "${ED}" -type f -perm -2 | sed -e "s:^${D}:- :")
++ # END PREFIX LOCAL
if [[ -n ${unsafe_files} ]] ; then
vecho "QA Security Notice: world writable file(s):"
vecho "${unsafe_files}"
@@@ -586,12 -509,36 +551,40 @@@ install_qa_check_elf()
PORTAGE_QUIET=${tmp_quiet}
fi
+
+ # Create NEEDED.ELF.2 regardless of RESTRICT=binchecks, since this info is
+ # too useful not to have (it's required for things like preserve-libs), and
+ # it's tempting for ebuild authors to set RESTRICT=binchecks for packages
+ # containing pre-built binaries.
+ if type -P scanelf > /dev/null ; then
+ # Save NEEDED information after removing self-contained providers
+ rm -f "$PORTAGE_BUILDDIR"/build-info/NEEDED{,.ELF.2}
+ scanelf -qyRF '%a;%p;%S;%r;%n' "${D}" | { while IFS= read -r l; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ soname=${l%%;*}; l=${l#*;}
+ rpath=${l%%;*}; l=${l#*;}; [ "${rpath}" = " - " ] && rpath=""
+ needed=${l%%;*}; l=${l#*;}
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch:3};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.ELF.2
+ done }
+
+ [ -n "${QA_SONAME_NO_SYMLINK}" ] && \
+ echo "${QA_SONAME_NO_SYMLINK}" > \
+ "${PORTAGE_BUILDDIR}"/build-info/QA_SONAME_NO_SYMLINK
+
+ if has binchecks ${RESTRICT} && \
+ [ -s "${PORTAGE_BUILDDIR}/build-info/NEEDED.ELF.2" ] ; then
+ eqawarn "QA Notice: RESTRICT=binchecks prevented checks on these ELF files:"
+ eqawarn "$(while read -r x; do x=${x#*;} ; x=${x%%;*} ; echo "${x#${EPREFIX}}" ; done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.ELF.2)"
+ fi
+ fi
+}
- local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${ED}:/:")
+install_qa_check_misc() {
+ # PREFIX LOCAL: keep offset prefix in the reported files
+ local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${D}:/:")
+ # END PREFIX LOCAL
if [[ -n ${unsafe_files} ]] ; then
eqawarn "QA Notice: Unsafe files detected (set*id and world writable)"
eqawarn "${unsafe_files}"
diff --cc bin/quickpkg
index fb7f874,a6bd4d4..83c0d7b
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc cnf/make.globals
index 2e2171d,db97d8b..7e61325
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -141,12 -123,12 +146,12 @@@ PORTAGE_WORKDIR_MODE="0700
PORTAGE_ELOG_CLASSES="log warn error"
PORTAGE_ELOG_SYSTEM="save_summary echo"
-PORTAGE_ELOG_MAILURI="root"
+PORTAGE_ELOG_MAILURI="@rootuser@"
PORTAGE_ELOG_MAILSUBJECT="[portage] ebuild log for \${PACKAGE} on \${HOST}"
-PORTAGE_ELOG_MAILFROM="portage@localhost"
+PORTAGE_ELOG_MAILFROM="@portageuser@@localhost"
# Signing command used by repoman
- PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
+ PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --digest-algo SHA512 --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
# *****************************
# ** DO NOT EDIT THIS FILE **
diff --cc pym/_emerge/Package.py
index f818951,14d0694..228cd96
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@@ -8,12 -8,11 +8,12 @@@ from portage import _encodings, _unicod
from portage.cache.mappings import slot_dict_class
from portage.const import EBUILD_PHASES
from portage.dep import Atom, check_required_use, use_reduce, \
- paren_enclose, _slot_re, _slot_separator, _repo_separator
- from portage.eapi import eapi_has_iuse_defaults, eapi_has_required_use
+ paren_enclose, _slot_separator, _repo_separator
+ from portage.versions import _pkg_str, _unknown_repo
+ from portage.eapi import _get_eapi_attrs
from portage.exception import InvalidDependString
- from portage.repository.config import _gen_valid_repo
from _emerge.Task import Task
+from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
basestring = str
diff --cc pym/_emerge/actions.py
index eb74085,af42828..2135f09
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -28,10 -28,10 +28,11 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage.localization import _
from portage import os
from portage import shutil
- from portage import _unicode_decode
+ from portage import eapi_is_supported, _unicode_decode
from portage.cache.cache_errors import CacheError
- from portage.const import GLOBAL_CONFIG_PATH, EPREFIX
- from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
++from portage.const import EPREFIX
+ from portage.const import GLOBAL_CONFIG_PATH
+ from portage.const import _ENABLE_DYN_LINK_MAP
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
from portage.dep import Atom, extended_cp_match
diff --cc pym/portage/_sets/files.py
index 7992b82,b891ea4..69bdc0d
--- a/pym/portage/_sets/files.py
+++ b/pym/portage/_sets/files.py
@@@ -1,6 -1,7 +1,6 @@@
- # Copyright 2007-2011 Gentoo Foundation
+ # Copyright 2007-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-import errno
import re
from itertools import chain
diff --cc pym/portage/const.py
index cd8ecfb,ceef5c5..267798c
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -1,12 -1,7 +1,12 @@@
# portage: Constants
- # Copyright 1998-2011 Gentoo Foundation
+ # Copyright 1998-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+
import os
# ===========================================================================
@@@ -125,10 -90,10 +125,11 @@@ SUPPORTED_FEATURES = frozenset(
"ccache", "chflags", "clean-logs",
"collision-protect", "compress-build-logs", "compressdebug",
"config-protect-if-modified",
- "digest", "distcc", "distcc-pump", "distlocks", "ebuild-locks", "fakeroot",
+ "digest", "distcc", "distcc-pump", "distlocks",
+ "downgrade-backup", "ebuild-locks", "fakeroot",
"fail-clean", "force-mirror", "force-prefix", "getbinpkg",
"installsources", "keeptemp", "keepwork", "fixlafiles", "lmirror",
+ "macossandbox", "macosprefixsandbox", "macosusersandbox",
"metadata-transfer", "mirror", "multilib-strict", "news",
"noauto", "noclean", "nodoc", "noinfo", "noman",
"nostrip", "notitles", "parallel-fetch", "parallel-install",
diff --cc pym/portage/dbapi/bintree.py
index 77ab4df,1048cc1..170ed18
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -23,9 -23,8 +23,9 @@@ from portage.cache.mappings import slot
from portage.const import CACHE_PATH
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
- from portage.exception import AlarmSignal, InvalidPackageName, \
+ from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
PermissionDenied, PortageException
+from portage.const import EAPI
from portage.localization import _
from portage import _movefile
from portage import os
diff --cc pym/portage/dbapi/vartree.py
index 340de4c,fbf2e74..109545a
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -30,11 -32,8 +32,11 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.movefile:movefile',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
- 'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,pkgcmp,' + \
- '_pkgsplit@pkgsplit',
+ 'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
+ '_pkgsplit@pkgsplit,_pkg_str',
'subprocess',
'tarfile',
)
@@@ -89,8 -90,10 +93,11 @@@ except ImportError
if sys.hexversion >= 0x3000000:
basestring = str
long = int
+ _unicode = str
+ else:
+ _unicode = unicode
+
class vardbapi(dbapi):
_excluded_dirs = ["CVS", "lost+found"]
diff --cc pym/portage/dispatch_conf.py
index 2ab7301,4c68dfc..7729195
--- a/pym/portage/dispatch_conf.py
+++ b/pym/portage/dispatch_conf.py
@@@ -13,8 -13,7 +13,8 @@@ import os, shutil, subprocess, sy
import portage
from portage.env.loaders import KeyValuePairFileLoader
from portage.localization import _
- from portage.util import varexpand
+ from portage.util import shlex_split, varexpand
+from portage.const import EPREFIX
RCS_BRANCH = '1.1.1'
RCS_LOCK = 'rcs -ko -M -l'
diff --cc pym/portage/package/ebuild/doebuild.py
index f35777a,09062f9..2322746
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -32,11 -33,10 +34,11 @@@ portage.proxy.lazyimport.lazyimport(glo
from portage import auxdbkeys, bsd_chflags, \
eapi_is_supported, merge, os, selinux, shutil, \
- unmerge, _encodings, _parse_eapi_ebuild_head, _os_merge, \
+ unmerge, _encodings, _os_merge, \
_shell_quote, _unicode_decode, _unicode_encode
from portage.const import EBUILD_SH_ENV_FILE, EBUILD_SH_ENV_DIR, \
- EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY
+ EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, \
+ EPREFIX, MACOSSANDBOX_PROFILE
from portage.data import portage_gid, portage_uid, secpass, \
uid, userpriv_groups
from portage.dbapi.porttree import _parse_uri_map
diff --cc pym/portage/tests/runTests
index 223a8cc,91984a3..afb967c
--- a/pym/portage/tests/runTests
+++ b/pym/portage/tests/runTests
@@@ -1,6 -1,6 +1,6 @@@
-#!/usr/bin/python -Wd
+#!/usr/bin/env python
# runTests.py -- Portage Unit Test Functionality
- # Copyright 2006 Gentoo Foundation
+ # Copyright 2006-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os, sys
diff --cc pym/portage/versions.py
index f0619be,5893096..81f7559
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@@ -26,16 -39,27 +39,28 @@@ _cat = r'[\w+][\w+.-]*
# 2.1.2 A package name may contain any of the characters [A-Za-z0-9+_-].
# It must not begin with a hyphen,
# and must not end in a hyphen followed by one or more digits.
- _pkg = r'[\w+][\w+-]*?'
+ _pkg = {
+ "dots_disallowed_in_PN": r'[\w+][\w+-]*?',
+ "dots_allowed_in_PN": r'[\w+][\w+.-]*?',
+ }
_v = r'(cvs\.)?(\d+)((\.\d+)*)([a-z]?)((_(pre|p|beta|alpha|rc)\d*)*)'
-_rev = r'\d+'
+# PREFIX hack: -r(\d+) -> -r(\d+|0\d+\.\d+) (see below)
+_rev = r'(\d+|0\d+\.\d+)'
_vr = _v + '(-r(' + _rev + '))?'
- _cp = '(' + _cat + '/' + _pkg + '(-' + _vr + ')?)'
- _cpv = '(' + _cp + '-' + _vr + ')'
- _pv = '(?P<pn>' + _pkg + '(?P<pn_inval>-' + _vr + ')?)' + '-(?P<ver>' + _v + ')(-r(?P<rev>' + _rev + '))?'
+ _cp = {
+ "dots_disallowed_in_PN": '(' + _cat + '/' + _pkg['dots_disallowed_in_PN'] + '(-' + _vr + ')?)',
+ "dots_allowed_in_PN": '(' + _cat + '/' + _pkg['dots_allowed_in_PN'] + '(-' + _vr + ')?)',
+ }
+ _cpv = {
+ "dots_disallowed_in_PN": '(' + _cp['dots_disallowed_in_PN'] + '-' + _vr + ')',
+ "dots_allowed_in_PN": '(' + _cp['dots_allowed_in_PN'] + '-' + _vr + ')',
+ }
+ _pv = {
+ "dots_disallowed_in_PN": '(?P<pn>' + _pkg['dots_disallowed_in_PN'] + '(?P<pn_inval>-' + _vr + ')?)' + '-(?P<ver>' + _v + ')(-r(?P<rev>' + _rev + '))?',
+ "dots_allowed_in_PN": '(?P<pn>' + _pkg['dots_allowed_in_PN'] + '(?P<pn_inval>-' + _vr + ')?)' + '-(?P<ver>' + _v + ')(-r(?P<rev>' + _rev + '))?',
+ }
ver_regexp = re.compile("^" + _vr + "$")
suffix_regexp = re.compile("^(alpha|beta|rc|pre|p)(\\d*)$")
@@@ -199,44 -230,18 +231,42 @@@ def vercmp(ver1, ver2, silent=1)
r2 = 0
rval = (r1 > r2) - (r1 < r2)
if rval:
- vercmp_cache[mykey] = rval
return rval
- # the suffix part is equal to, so finally check the revision
+ # The suffix part is equal too, so finally check the revision
+ # PREFIX hack: a revision starting with 0 is an 'inter-revision',
+ # which means that it is possible to create revisions on revisions.
+ # An example is -r01.1 which is the first revision of -r1. Note
+ # that a period (.) is used to separate the real revision and the
+ # secondary revision number. This trick is used to allow revision
+ # bumps in ebuilds synced from the main tree for Prefix changes,
+ # while still staying in the main tree versioning scheme.
if match1.group(10):
- r1 = int(match1.group(10))
+ if match1.group(10)[0] == '0' and '.' in match1.group(10):
+ t = match1.group(10)[1:].split(".")
+ r1 = int(t[0])
+ r3 = int(t[1])
+ else:
+ r1 = int(match1.group(10))
+ r3 = 0
else:
r1 = 0
+ r3 = 0
if match2.group(10):
- r2 = int(match2.group(10))
+ if match2.group(10)[0] == '0' and '.' in match2.group(10):
+ t = match2.group(10)[1:].split(".")
+ r2 = int(t[0])
+ r4 = int(t[1])
+ else:
+ r2 = int(match2.group(10))
+ r4 = 0
else:
r2 = 0
+ r4 = 0
+ if r1 == r2 and (r3 != 0 or r4 != 0):
+ r1 = r3
+ r2 = r4
rval = (r1 > r2) - (r1 < r2)
- vercmp_cache[mykey] = rval
return rval
def pkgcmp(pkg1, pkg2):
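A minimal standalone sketch of the inter-revision comparison above (helper names are illustrative, not part of the commit): -r01.1 sorts after -r1 but before -r2, keeping Prefix revision bumps within the main-tree versioning scheme.

def split_prefix_revision(rev):
    # rev is the raw string matched by _rev, e.g. "1" or "01.1"
    if rev.startswith("0") and "." in rev:
        main, sub = rev[1:].split(".", 1)
        return int(main), int(sub)          # real revision, secondary revision
    return int(rev), 0

def cmp_revisions(rev1, rev2):
    r1, r3 = split_prefix_revision(rev1)
    r2, r4 = split_prefix_revision(rev2)
    if r1 == r2 and (r3 or r4):
        r1, r2 = r3, r4                     # fall back to the secondary revision
    return (r1 > r2) - (r1 < r2)

assert cmp_revisions("1", "01.1") == -1     # -r01.1 is newer than -r1
assert cmp_revisions("01.1", "2") == -1     # ... but still older than -r2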
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-04-23 19:23 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-04-23 19:23 UTC (permalink / raw
To: gentoo-commits
commit: 20284dc0435e5d586bd52774aa0993c369282a19
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 23 19:21:32 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Apr 23 19:21:32 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=20284dc0
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dohtml.py
bin/misc-functions.sh
bin/dohtml.py | 10 ++--
bin/isolated-functions.sh | 12 ++++-
bin/misc-functions.sh | 10 ++--
bin/portageq | 58 +++++++++++++--------
bin/repoman | 19 ++++---
doc/package/ebuild/phases.docbook | 9 +++
man/ebuild.5 | 4 +-
man/repoman.1 | 3 +-
pym/_emerge/EbuildBinpkg.py | 6 ++-
pym/_emerge/PackageVirtualDbapi.py | 22 +++++---
pym/_emerge/Scheduler.py | 6 ++-
pym/_emerge/actions.py | 2 +-
pym/_emerge/depgraph.py | 17 +++++-
pym/_emerge/main.py | 8 ++-
pym/_emerge/resolver/slot_collision.py | 18 ++++++-
pym/portage/checksum.py | 2 +-
pym/portage/dbapi/porttree.py | 36 ++++++--------
pym/portage/dbapi/vartree.py | 14 +++---
pym/portage/dbapi/virtual.py | 22 +++++---
pym/portage/dep/__init__.py | 3 +-
pym/portage/package/ebuild/config.py | 8 ++--
pym/portage/package/ebuild/doebuild.py | 7 +++
pym/portage/package/ebuild/prepare_build_dirs.py | 22 ++++++++-
pym/portage/xml/metadata.py | 6 +-
pym/repoman/checks.py | 36 ++++++++++----
pym/repoman/herdbase.py | 4 +-
26 files changed, 238 insertions(+), 126 deletions(-)
diff --cc bin/misc-functions.sh
index 0d60a51,4e81ddf..a7a18e2
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-04-03 18:04 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-04-03 18:04 UTC (permalink / raw
To: gentoo-commits
commit: 88bcbb23788f009ddcdadc6ea7d0555b29fe2d4d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 3 18:04:09 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Apr 3 18:04:09 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=88bcbb23
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 2 +-
pym/_emerge/unmerge.py | 15 +++++-
pym/portage/dispatch_conf.py | 2 +-
pym/portage/package/ebuild/config.py | 2 +-
pym/portage/package/ebuild/doebuild.py | 3 +-
pym/portage/tests/util/test_getconfig.py | 26 ++++++++-
pym/portage/util/__init__.py | 91 +++++++++++++++++++----------
7 files changed, 102 insertions(+), 39 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-03-31 19:31 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-03-31 19:31 UTC (permalink / raw
To: gentoo-commits
commit: 1fac28c57317b8174580b8243c3549fd75aeaa6d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 31 19:30:59 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Mar 31 19:30:59 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=1fac28c5
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/dosym
bin/ebuild-helpers/fowners
bin/etc-update
bin/repoman
pym/_emerge/main.py
Makefile | 215 ++++++
bin/dispatch-conf | 50 +-
bin/ebuild-helpers/dosym | 7 +-
bin/ebuild-helpers/fowners | 16 +-
bin/ebuild.sh | 46 +-
bin/emerge-webrsync | 12 +-
bin/etc-update | 756 +++++++++++---------
bin/isolated-functions.sh | 2 +-
bin/misc-functions.sh | 19 +
bin/portageq | 8 +
bin/repoman | 68 ++-
man/etc-update.1 | 32 +-
man/portage.5 | 5 +-
man/repoman.1 | 17 +-
mkrelease.sh | 2 +-
pym/_emerge/AsynchronousTask.py | 15 +
pym/_emerge/Binpkg.py | 6 +-
pym/_emerge/BlockerCache.py | 2 +-
pym/_emerge/CompositeTask.py | 2 +-
pym/_emerge/EbuildBuild.py | 11 +-
pym/_emerge/EbuildFetcher.py | 14 +-
pym/_emerge/JobStatusDisplay.py | 2 +-
pym/_emerge/Package.py | 4 +-
pym/_emerge/PollScheduler.py | 2 +-
pym/_emerge/QueueScheduler.py | 2 +-
pym/_emerge/Scheduler.py | 12 +-
pym/_emerge/_flush_elog_mod_echo.py | 2 +-
pym/_emerge/actions.py | 103 ++-
pym/_emerge/depgraph.py | 12 +-
pym/_emerge/main.py | 9 +-
pym/_emerge/resolver/output.py | 30 +-
pym/_emerge/resolver/output_helpers.py | 78 ++-
pym/_emerge/unmerge.py | 14 +-
pym/_emerge/userquery.py | 6 +-
pym/portage/cache/mappings.py | 2 +-
pym/portage/dbapi/_MergeProcess.py | 13 +-
pym/portage/dbapi/bintree.py | 6 +-
pym/portage/dbapi/porttree.py | 6 +-
pym/portage/dbapi/vartree.py | 60 ++-
pym/portage/dep/__init__.py | 2 +-
pym/portage/dispatch_conf.py | 28 +-
pym/portage/env/loaders.py | 4 +-
pym/portage/glsa.py | 8 +-
pym/portage/locks.py | 22 +-
pym/portage/manifest.py | 10 +-
pym/portage/news.py | 2 +-
pym/portage/output.py | 6 +
pym/portage/package/ebuild/_ipc/QueryCommand.py | 2 +-
pym/portage/package/ebuild/_spawn_nofetch.py | 6 +-
pym/portage/package/ebuild/digestcheck.py | 2 +-
pym/portage/package/ebuild/digestgen.py | 5 +-
pym/portage/package/ebuild/doebuild.py | 63 +-
pym/portage/package/ebuild/getmaskingreason.py | 8 +-
pym/portage/process.py | 18 +-
pym/portage/tests/locks/test_lock_nonblock.py | 5 +-
pym/portage/util/ExtractKernelVersion.py | 2 +-
pym/portage/util/__init__.py | 20 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 50 +-
pym/portage/util/_eventloop/EventLoop.py | 2 +-
pym/portage/util/_pty.py | 2 +-
pym/portage/util/lafilefixer.py | 2 +-
pym/portage/util/listdir.py | 2 +-
pym/portage/util/movefile.py | 61 +-
pym/portage/util/mtimedb.py | 24 +-
pym/portage/util/whirlpool.py | 2 +-
pym/portage/xpak.py | 247 ++++----
pym/repoman/utilities.py | 39 +-
67 files changed, 1513 insertions(+), 799 deletions(-)
diff --cc bin/dispatch-conf
index 1b86461,139a001..de8d85d
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -33,9 -27,8 +33,9 @@@ import portag
from portage import os
from portage import dispatch_conf
from portage import _unicode_decode
- from portage.dispatch_conf import diffstatusoutput_len
+ from portage.dispatch_conf import diffstatusoutput
from portage.process import find_binary
+from portage.const import EPREFIX
FIND_EXTANT_CONFIGS = "find '%s' %s -name '._cfg????_%s' ! -name '.*~' ! -iname '.*.bak' -print"
DIFF_CONTENTS = "diff -Nu '%s' '%s'"
diff --cc bin/ebuild-helpers/dosym
index 8f436fa,2489e22..b96f845
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -ne 2 ]] ; then
helpers_die "${0##*/}: two arguments needed"
diff --cc bin/ebuild-helpers/fowners
index 9815d2e,a213c9e..b60aad7
--- a/bin/ebuild-helpers/fowners
+++ b/bin/ebuild-helpers/fowners
@@@ -1,18 -1,11 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
- # PREFIX LOCAL: ignore otherwise failing call
- if hasq prefix ${USE} && [[ $EUID != 0 ]] ; then
- ewarn "fowners ignored in Prefix with non-privileged user"
- exit 0
- fi
- # END PREFIX LOCAL
[[ " ${FEATURES} " == *" force-prefix "* ]] || \
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ case "$EAPI" in 0|1|2) EPREFIX= ED=${D} ;; esac
# we can't prefix all arguments because
# chown takes random options
diff --cc bin/emerge-webrsync
index c41eb79,bfd9aa2..581dc54
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -31,11 -39,10 +39,11 @@@ els
eecho "could not find 'portageq'; aborting"
exit 1
fi
- eval $(portageq envvar -v FEATURES FETCHCOMMAND GENTOO_MIRRORS \
+ eval $("${portageq}" envvar -v FEATURES FETCHCOMMAND GENTOO_MIRRORS \
PORTAGE_BIN_PATH PORTAGE_GPG_DIR \
PORTAGE_NICENESS PORTAGE_RSYNC_EXTRA_OPTS PORTAGE_TMPDIR PORTDIR \
- SYNC http_proxy ftp_proxy)
+ SYNC http_proxy ftp_proxy \
+ EPREFIX PORTAGE_USER PORTAGE_GROUP)
DISTDIR="${PORTAGE_TMPDIR}/emerge-webrsync"
export http_proxy ftp_proxy
diff --cc bin/etc-update
index b3877fb,1edc91f..2a24613
--- a/bin/etc-update
+++ b/bin/etc-update
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Author Brandon Low <lostlogic@gentoo.org>
diff --cc bin/misc-functions.sh
index 406cd8f,b083897..0d60a51
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc bin/repoman
index 1f51ddf,54d864f..b89e5ef
--- a/bin/repoman
+++ b/bin/repoman
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Next to do: dep syntax checking in mask files
diff --cc pym/_emerge/actions.py
index 15a0bc1,22c3e26..cbc5de2
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -25,12 -25,12 +25,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.news:count_unread_news,display_news_notifications',
)
+ from portage.localization import _
from portage import os
from portage import shutil
- from portage import subprocess_getstatusoutput
from portage import _unicode_decode
from portage.cache.cache_errors import CacheError
-from portage.const import GLOBAL_CONFIG_PATH
+from portage.const import GLOBAL_CONFIG_PATH, EPREFIX
from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
diff --cc pym/_emerge/main.py
index 3d50b7d,0fbc4b7..68e951d
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -180,11 -180,21 +181,11 @@@ def chk_updated_info_files(root, infodi
raise
del e
processed_count += 1
- try:
- proc = subprocess.Popen(
- ['/usr/bin/install-info',
- '--dir-file=%s' % os.path.join(inforoot, "dir"),
- os.path.join(inforoot, x)],
- env=dict(os.environ, LANG="C", LANGUAGE="C"),
- stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- except OSError:
- myso = None
- else:
- myso = _unicode_decode(
- proc.communicate()[0]).rstrip("\n")
- proc.wait()
+ myso = portage.subprocess_getstatusoutput(
+ "LANG=C LANGUAGE=C %s/usr/bin/install-info " \
+ "--dir-file=%s/dir %s/%s" % (EPREFIX, inforoot, inforoot, x))[1]
existsstr="already exists, for file `"
- if myso!="":
+ if myso:
if re.search(existsstr,myso):
# Already exists... Don't increment the count for this.
pass
diff --cc pym/portage/dispatch_conf.py
index 88ed222,cc98fad..2fa1357
--- a/pym/portage/dispatch_conf.py
+++ b/pym/portage/dispatch_conf.py
@@@ -24,23 -23,23 +24,23 @@@ RCS_MERGE = "rcsmerge -p -r" + RCS_BRAN
DIFF3_MERGE = "diff3 -mE '%s' '%s' '%s' > '%s'"
- def diffstatusoutput_len(cmd):
+ def diffstatusoutput(cmd, file1, file2):
"""
Execute the string cmd in a shell with getstatusoutput() and return a
- 2-tuple (status, output_length). If getstatusoutput() raises
- UnicodeDecodeError (known to happen with python3.1), return a
- 2-tuple (1, 1). This provides a simple way to check for non-zero
- output length of diff commands, while providing simple handling of
- UnicodeDecodeError when necessary.
+ 2-tuple (status, output).
"""
- try:
- status, output = portage.subprocess_getstatusoutput(cmd)
- return (status, len(output))
- except UnicodeDecodeError:
- return (1, 1)
+ # Use Popen to emulate getstatusoutput(), since getstatusoutput() may
+ # raise a UnicodeDecodeError which makes the output inaccessible.
+ proc = subprocess.Popen(portage._unicode_encode(cmd % (file1, file2)),
+ stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
+ output = portage._unicode_decode(proc.communicate()[0])
+ if output and output[-1] == "\n":
+ # getstatusoutput strips one newline
+ output = output[:-1]
+ return (proc.wait(), output)
def read_config(mandatory_opts):
- eprefix = portage.const.EPREFIX
+ eprefix = EPREFIX
config_path = os.path.join(eprefix or os.sep, "etc/dispatch-conf.conf")
loader = KeyValuePairFileLoader(config_path, None)
opts, errors = loader.load()
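A self-contained sketch of the getstatusoutput() emulation that diffstatusoutput() above relies on; decoding with errors="replace" is an assumption of this sketch, while the real helper uses portage's _unicode_decode():

import subprocess

def diffstatusoutput_sketch(cmd, file1, file2):
    proc = subprocess.Popen(cmd % (file1, file2), shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode("utf-8", errors="replace")
    if output.endswith("\n"):
        output = output[:-1]                # getstatusoutput() strips one newline
    return proc.wait(), output

# e.g. status, output = diffstatusoutput_sketch("diff -Nu '%s' '%s'",
#                                               "/etc/group", "/etc/group")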
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-03-01 20:32 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-03-01 20:32 UTC (permalink / raw
To: gentoo-commits
commit: b9a7e6d0978be334c1cc7a442075af5c0bfff739
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 1 20:27:41 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 1 20:27:41 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=b9a7e6d0
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild-helpers/ecompressdir | 13 +-
pym/_emerge/Scheduler.py | 3 +-
pym/_emerge/actions.py | 2 +-
pym/_emerge/depgraph.py | 132 +++++++-----
pym/portage/cvstree.py | 10 +-
pym/portage/dbapi/vartree.py | 280 +++++++++++++++++--------
pym/portage/tests/resolver/test_autounmask.py | 47 ++++-
7 files changed, 336 insertions(+), 151 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-02-19 9:58 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-02-19 9:58 UTC (permalink / raw
To: gentoo-commits
commit: 5f0978d957aa3ad2a3e9806bde34947b167ae554
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 19 09:57:58 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Feb 19 09:57:58 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5f0978d9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
runtests.sh
bin/repoman | 21 +-
man/portage.5 | 2 +
man/repoman.1 | 5 +-
pym/_emerge/AbstractDepPriority.py | 4 +-
pym/_emerge/AbstractEbuildProcess.py | 3 +-
pym/_emerge/AbstractPollTask.py | 27 ++-
pym/_emerge/AsynchronousLock.py | 21 +-
pym/_emerge/AsynchronousTask.py | 20 +-
pym/_emerge/Binpkg.py | 4 +-
pym/_emerge/BinpkgFetcher.py | 3 -
pym/_emerge/Dependency.py | 5 +-
pym/_emerge/EbuildBuild.py | 4 +-
pym/_emerge/EbuildBuildDir.py | 5 +-
pym/_emerge/EbuildExecuter.py | 2 +-
pym/_emerge/EbuildFetcher.py | 13 +-
pym/_emerge/EbuildFetchonly.py | 4 +-
pym/_emerge/EbuildIpcDaemon.py | 29 ++-
pym/_emerge/EbuildMerge.py | 4 +-
pym/_emerge/EbuildMetadataPhase.py | 5 +-
pym/_emerge/MetadataRegen.py | 71 ++---
pym/_emerge/PollScheduler.py | 84 +++++-
pym/_emerge/QueueScheduler.py | 31 ++-
pym/_emerge/Scheduler.py | 48 ++--
pym/_emerge/SequentialTaskQueue.py | 21 +-
pym/_emerge/SpawnProcess.py | 20 +-
pym/_emerge/SubProcess.py | 47 ++--
pym/_emerge/Task.py | 5 +-
pym/_emerge/TaskScheduler.py | 1 +
pym/_emerge/depgraph.py | 16 +-
pym/portage/checksum.py | 31 +--
pym/portage/dbapi/_MergeProcess.py | 31 ++-
pym/portage/dbapi/vartree.py | 22 ++-
pym/portage/package/ebuild/config.py | 17 +
pym/portage/package/ebuild/doebuild.py | 3 +-
pym/portage/package/ebuild/fetch.py | 22 +-
pym/portage/process.py | 36 ++-
pym/portage/repository/config.py | 9 +-
pym/portage/tests/ebuild/test_ipc_daemon.py | 10 +-
pym/portage/update.py | 5 +-
pym/{_emerge => portage/util}/SlotObject.py | 13 +-
.../util/_dyn_libs/PreservedLibsRegistry.py | 61 +++-
pym/portage/util/_eventloop/EventLoop.py | 320 +++++++++++++-------
pym/portage/util/_eventloop/GlibEventLoop.py | 23 ++
.../util/_eventloop}/PollConstants.py | 0
.../util/_eventloop}/PollSelectAdapter.py | 7 +-
pym/portage/util/_eventloop/global_event_loop.py | 35 +++
pym/portage/util/mtimedb.py | 75 ++++-
pym/portage/xpak.py | 44 ++--
pym/repoman/checks.py | 3 +-
runtests.sh | 14 +-
50 files changed, 869 insertions(+), 437 deletions(-)
diff --cc pym/_emerge/EbuildBuildDir.py
index d44eb2a,9773bd7..5e2c034
--- a/pym/_emerge/EbuildBuildDir.py
+++ b/pym/_emerge/EbuildBuildDir.py
@@@ -2,11 -2,11 +2,12 @@@
# Distributed under the terms of the GNU General Public License v2
from _emerge.AsynchronousLock import AsynchronousLock
- from _emerge.SlotObject import SlotObject
+
import portage
from portage import os
+import sys
from portage.exception import PortageException
+ from portage.util.SlotObject import SlotObject
import errno
class EbuildBuildDir(SlotObject):
diff --cc runtests.sh
index 1d584a1,56aa2cc..1c3d3c3
--- a/runtests.sh
+++ b/runtests.sh
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 2010-2011 Gentoo Foundation
+ # Copyright 2010-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- PYTHON_VERSIONS="2.6 2.7 3.1 3.2 3.3"
+ PYTHON_VERSIONS="2.6 2.7 2.7-pypy-1.8 3.1 3.2 3.3"
# has to be run from portage root dir
cd "${0%/*}" || exit 1
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-02-09 8:01 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-02-09 8:01 UTC (permalink / raw
To: gentoo-commits
commit: 3fcce02f31976dcc8f40b350288f81b648955597
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 9 07:40:12 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 9 07:40:12 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=3fcce02f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/portageq
pym/portage/package/ebuild/doebuild.py
DEVELOPING | 36 +--
bin/ebuild-helpers/ecompressdir | 4 +
bin/ebuild-helpers/prepstrip | 8 +-
bin/egencache | 20 +-
bin/misc-functions.sh | 27 ++-
bin/phase-helpers.sh | 14 +-
bin/portageq | 17 +-
bin/repoman | 13 +-
cnf/make.globals | 2 +-
doc/package/ebuild/eapi/4-python.docbook | 38 ++-
man/emerge.1 | 10 +-
man/make.conf.5 | 14 +-
man/portage.5 | 3 +
pym/_emerge/AbstractEbuildProcess.py | 37 ++-
pym/_emerge/AbstractPollTask.py | 20 +-
pym/_emerge/AsynchronousLock.py | 19 +-
pym/_emerge/AsynchronousTask.py | 10 +-
pym/_emerge/CompositeTask.py | 5 +-
pym/_emerge/EbuildIpcDaemon.py | 2 +
pym/_emerge/EbuildMetadataPhase.py | 2 +
pym/_emerge/EbuildPhase.py | 6 +-
pym/_emerge/FifoIpcDaemon.py | 9 +-
pym/_emerge/JobStatusDisplay.py | 10 +-
pym/_emerge/MetadataRegen.py | 9 +-
pym/_emerge/MiscFunctionsProcess.py | 10 +-
pym/_emerge/PipeReader.py | 12 +-
pym/_emerge/PollScheduler.py | 255 +-------------
pym/_emerge/QueueScheduler.py | 60 ++--
pym/_emerge/Scheduler.py | 107 ++----
pym/_emerge/SequentialTaskQueue.py | 51 +--
pym/_emerge/SpawnProcess.py | 4 +
pym/_emerge/SubProcess.py | 13 +-
pym/_emerge/actions.py | 1 +
pym/_emerge/create_world_atom.py | 35 ++-
pym/_emerge/depgraph.py | 33 ++-
pym/_emerge/main.py | 2 +
pym/_emerge/resolver/output.py | 156 +++++----
pym/_emerge/resolver/output_helpers.py | 11 +-
pym/portage/const.py | 8 +-
pym/portage/dbapi/_MergeProcess.py | 9 +
pym/portage/dbapi/vartree.py | 27 +-
pym/portage/debug.py | 4 +-
pym/portage/dep/__init__.py | 19 +-
pym/portage/eapi.py | 3 +
pym/portage/package/ebuild/_ipc/QueryCommand.py | 9 +-
pym/portage/package/ebuild/doebuild.py | 2 +-
pym/portage/process.py | 5 +-
pym/portage/repository/config.py | 14 +-
pym/portage/tests/ebuild/test_config.py | 3 +-
pym/portage/tests/ebuild/test_ipc_daemon.py | 4 +-
pym/portage/tests/process/test_poll.py | 14 +-
pym/portage/util/__init__.py | 4 +-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 7 +
pym/portage/util/_eventloop/EventLoop.py | 372 ++++++++++++++++++++
.../util/_eventloop}/__init__.py | 2 +-
pym/portage/util/movefile.py | 4 +-
pym/portage/xml/metadata.py | 9 +-
57 files changed, 972 insertions(+), 632 deletions(-)
diff --cc bin/misc-functions.sh
index c6e66c2,2614151..406cd8f
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -829,10 -778,10 +829,10 @@@ install_qa_check_misc()
fi
# Portage regenerates this on the installed system.
- rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2}
+ rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2} || die "rm failed!"
if has multilib-strict ${FEATURES} && \
- [[ -x /usr/bin/file && -x /usr/bin/find ]] && \
+ [[ -x ${EPREFIX}/usr/bin/file && -x ${EPREFIX}/usr/bin/find ]] && \
[[ -n ${MULTILIB_STRICT_DIRS} && -n ${MULTILIB_STRICT_DENY} ]]
then
local abort=no dir file firstrun=yes
diff --cc bin/portageq
index 35e0ac7,5ecbb21..d697a32
--- a/bin/portageq
+++ b/bin/portageq
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc pym/portage/package/ebuild/doebuild.py
index 8f8bf39,c52ab31..dd804f4
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -1507,10 -1487,7 +1507,10 @@@ _post_phase_cmds =
"preinst_sfperms",
"preinst_selinux_labels",
"preinst_suid_scan",
- "preinst_mask"],
+ ]
+
+ "postinst" : [
+ "postinst_aix"]
}
def _post_phase_userpriv_perms(mysettings):
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2012-01-10 17:45 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2012-01-10 17:45 UTC (permalink / raw
To: gentoo-commits
commit: 4b881a4d55a20639caf2f89b2f9b7548446fd81e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 10 17:33:16 2012 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jan 10 17:33:16 2012 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=4b881a4d
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild.sh
bin/phase-functions.sh
pym/portage/util/__init__.py
bin/ebuild.sh | 4 ++--
bin/phase-functions.sh | 4 ++--
bin/repoman | 17 ++---------------
doc/package/ebuild/eapi/4-python.docbook | 17 +++++++++++++++++
man/ebuild.5 | 2 +-
man/repoman.1 | 6 ------
pym/portage/eapi.py | 5 ++++-
.../package/ebuild/_config/special_env_vars.py | 6 +++---
pym/portage/package/ebuild/doebuild.py | 11 ++++++++---
pym/portage/repository/config.py | 19 +++++++++++++------
pym/portage/util/__init__.py | 17 ++++++++++++++++-
pym/portage/util/env_update.py | 4 ----
12 files changed, 68 insertions(+), 44 deletions(-)
diff --cc bin/ebuild.sh
index 30c34e8,f8e71f5..efd6285
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -1,9 -1,9 +1,9 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"
-PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}"
+PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"
+PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}"
# Prevent aliases from causing portage to act inappropriately.
# Make sure it's before everything so we don't mess aliases that follow.
diff --cc bin/phase-functions.sh
index c002a34,ce251ce..86df843
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PREFIX_PORTAGE_BASH@
- # Copyright 1999-2011 Gentoo Foundation
+ # Copyright 1999-2012 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Hardcoded bash lists are needed for backward compatibility with
diff --cc pym/portage/util/__init__.py
index 257da65,58501dc..3fdc5cf
--- a/pym/portage/util/__init__.py
+++ b/pym/portage/util/__init__.py
@@@ -1600,22 -1614,10 +1615,22 @@@ def getlibpaths(root, env=None)
""" Return a list of paths that are used for library lookups """
if env is None:
env = os.environ
+
+ # PREFIX HACK: LD_LIBRARY_PATH isn't portable, and is considered
+ # harmful, so better not to use it. We don't need any host OS lib
+ # paths either, so handle the Prefix case here.
+ if EPREFIX != '':
+ rval = []
+ rval.append(EPREFIX + "/usr/lib")
+ rval.append(EPREFIX + "/lib")
+ # we don't know the CHOST here, so it's a bit hard to guess
+ # where GCC's and ld's libs are. GCC's libs should be in lib
+ # and usr/lib, though; binutils' libs are rarely used.
+ else:
# the following is based on the information from ld.so(8)
- rval = env.get("LD_LIBRARY_PATH", "").split(":")
- rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
- rval.append("/usr/lib")
- rval.append("/lib")
+ rval = env.get("LD_LIBRARY_PATH", "").split(":")
- rval.extend(grabfile(os.path.join(root, "etc", "ld.so.conf")))
++ rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
+ rval.append("/usr/lib")
+ rval.append("/lib")
return [normalize_path(x) for x in rval if x]
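A rough standalone sketch of the Prefix branch added to getlibpaths() above; the EPREFIX value is a made-up example, and ld.so.conf parsing and path normalization are omitted:

import os

def getlibpaths_sketch(root, env=None, eprefix="/home/user/gentoo"):
    # root is only needed for the ld.so.conf lookup, which is skipped here
    env = os.environ if env is None else env
    if eprefix:
        # Prefix case: only the offset's own library directories,
        # ignoring LD_LIBRARY_PATH and the host's ld.so.conf
        rval = [eprefix + "/usr/lib", eprefix + "/lib"]
    else:
        rval = env.get("LD_LIBRARY_PATH", "").split(":")
        rval += ["/usr/lib", "/lib"]        # ld.so.conf entries omitted in this sketch
    return [p for p in rval if p]

print(getlibpaths_sketch("/"))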
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-31 16:45 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-31 16:45 UTC (permalink / raw
To: gentoo-commits
commit: 5ee3665cba0e31787221e6ed47e8bd7d234c42f3
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 31 16:43:57 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Dec 31 16:43:57 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5ee3665c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/portage/repository/config.py | 34 +++++++++++++---------
pym/portage/tests/ebuild/test_config.py | 2 +
pym/portage/tests/resolver/ResolverPlayground.py | 10 ++----
3 files changed, 25 insertions(+), 21 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-26 9:12 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-26 9:12 UTC (permalink / raw
To: gentoo-commits
commit: 92652c0d1fb8fa36f2a7bfa005a1a88684ee69e3
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 26 09:10:47 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Dec 26 09:10:47 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=92652c0d
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 157 +++++++++++++++++++++-----------------
bin/repoman | 8 +-
man/ebuild.5 | 2 +-
man/make.conf.5 | 6 +-
pym/_emerge/AbstractPollTask.py | 4 +-
pym/_emerge/FakeVartree.py | 1 +
pym/_emerge/depgraph.py | 4 +-
pym/portage/const.py | 1 +
pym/portage/dbapi/vartree.py | 15 ++++-
pym/portage/repository/config.py | 4 +-
pym/portage/xpak.py | 2 +-
11 files changed, 118 insertions(+), 86 deletions(-)
diff --cc bin/misc-functions.sh
index 436a50b,5a726b3..c6e66c2
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -155,10 -152,92 +155,94 @@@ install_qa_check()
[[ " ${FEATURES} " == *" force-prefix "* ]] || \
case "$EAPI" in 0|1|2) local ED=${D} ;; esac
- cd "${ED}" || die "cd failed"
+ # PREFIX LOCAL: ED need not exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
+ # Merge QA_FLAGS_IGNORED and QA_DT_HASH into a single array, since
+ # QA_DT_HASH is deprecated.
+ qa_var="QA_FLAGS_IGNORED_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] && QA_FLAGS_IGNORED=(\"\${${qa_var}[@]}\")"
+ if [[ ${#QA_FLAGS_IGNORED[@]} -eq 1 ]] ; then
+ local shopts=$-
+ set -o noglob
+ QA_FLAGS_IGNORED=(${QA_FLAGS_IGNORED})
+ set +o noglob
+ set -${shopts}
+ fi
+
+ qa_var="QA_DT_HASH_${ARCH/-/_}"
+ eval "[[ -n \${!qa_var} ]] && QA_DT_HASH=(\"\${${qa_var}[@]}\")"
+ if [[ ${#QA_DT_HASH[@]} -eq 1 ]] ; then
+ local shopts=$-
+ set -o noglob
+ QA_DT_HASH=(${QA_DT_HASH})
+ set +o noglob
+ set -${shopts}
+ fi
+
+ if [[ -n ${QA_DT_HASH} ]] ; then
+ QA_FLAGS_IGNORED=("${QA_FLAGS_IGNORED[@]}" "${QA_DT_HASH[@]}")
+ unset QA_DT_HASH
+ fi
+
+ # Merge QA_STRICT_FLAGS_IGNORED and QA_STRICT_DT_HASH, since
+ # QA_STRICT_DT_HASH is deprecated
+ if [ "${QA_STRICT_FLAGS_IGNORED-unset}" = unset ] && \
+ [ "${QA_STRICT_DT_HASH-unset}" != unset ] ; then
+ QA_STRICT_FLAGS_IGNORED=1
+ unset QA_STRICT_DT_HASH
+ fi
+
+ # Check for files built without respecting *FLAGS. Note that
+ # -frecord-gcc-switches must be in all *FLAGS variables, in
+ # order to avoid false positive results here.
+ # NOTE: This check must execute before prepall/prepstrip, since
+ # prepstrip strips the .GCC.command.line sections.
+ if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT} && \
+ [[ "${CFLAGS}" == *-frecord-gcc-switches* ]] && \
+ [[ "${CXXFLAGS}" == *-frecord-gcc-switches* ]] && \
+ [[ "${FFLAGS}" == *-frecord-gcc-switches* ]] && \
+ [[ "${FCFLAGS}" == *-frecord-gcc-switches* ]] ; then
+ rm -f "${T}"/scanelf-ignored-CFLAGS.log
+ for x in $(scanelf -qyRF '%k %p' -k \!.GCC.command.line "${ED}" | \
+ sed -e "s:\!.GCC.command.line ::") ; do
+ # Separate out file types that are known to support
+ # .GCC.command.line sections, using the `file` command
+ # similar to how prepstrip uses it.
+ f=$(file "${x}") || continue
+ [[ -z ${f} ]] && continue
+ if [[ ${f} == *"SB executable"* ||
+ ${f} == *"SB shared object"* ]] ; then
+ echo "${x}" >> "${T}"/scanelf-ignored-CFLAGS.log
+ fi
+ done
+
+ if [[ -f "${T}"/scanelf-ignored-CFLAGS.log ]] ; then
+
+ if [ "${QA_STRICT_FLAGS_IGNORED-unset}" = unset ] ; then
+ for x in "${QA_FLAGS_IGNORED[@]}" ; do
+ sed -e "s#^${x#/}\$##" -i "${T}"/scanelf-ignored-CFLAGS.log
+ done
+ fi
+ # Filter anything under /usr/lib/debug/ in order to avoid
+ # duplicate warnings for splitdebug files.
+ sed -e "s#^usr/lib/debug/.*##" -e "/^\$/d" -e "s#^#/#" \
+ -i "${T}"/scanelf-ignored-CFLAGS.log
+ f=$(<"${T}"/scanelf-ignored-CFLAGS.log)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\n'
+ eqawarn "${BAD}QA Notice: Files built without respecting CFLAGS have been detected${NORMAL}"
+ eqawarn " Please include the following list of files in your report:"
+ eqawarn "${f}"
+ vecho -ne '\n'
+ sleep 1
+ else
+ rm -f "${T}"/scanelf-ignored-CFLAGS.log
+ fi
+ fi
+ fi
+
export STRIP_MASK
prepall
has "${EAPI}" 0 1 2 3 || prepcompress
@@@ -189,41 -268,8 +273,41 @@@
sleep 1
fi
+ # PREFIX LOCAL:
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED we skip the QA searches there;
+ # the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here so that the diff with trunk remains just
+ # offset and not out of order
+ install_qa_check_misc
+ # END PREFIX LOCAL
+}
+
+install_qa_check_elf() {
if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
- local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
local x
# display warnings when using stricter because we die afterwards
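The CFLAGS QA check merged in this commit relies on -frecord-gcc-switches leaving a .GCC.command.line section in every object, and on scanelf to list files missing it. The Python sketch below shows the same idea outside of the ebuild environment; the function name is invented and it simply shells out to scanelf (from pax-utils), so it is a rough illustration of the bash logic rather than a drop-in replacement.

import subprocess

def files_missing_gcc_cmdline(image_dir):
    """List ELF files under image_dir that lack a .GCC.command.line section.
    Mirrors: scanelf -qyRF '%k %p' -k '!.GCC.command.line' "${ED}"
    (illustrative sketch; requires scanelf in PATH)."""
    out = subprocess.run(
        ["scanelf", "-qyRF", "%k %p", "-k", "!.GCC.command.line", image_dir],
        capture_output=True, text=True, check=False).stdout
    missing = []
    for line in out.splitlines():
        # Each line looks like: "!.GCC.command.line path/to/file"
        path = line.replace("!.GCC.command.line ", "", 1).strip()
        if path:
            missing.append(path)
    return missing

if __name__ == "__main__":
    for f in files_missing_gcc_cmdline("/var/tmp/portage/image"):
        print("built without respecting CFLAGS:", f)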
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-23 9:51 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-23 9:51 UTC (permalink / raw
To: gentoo-commits
commit: d32995adfc27f2a5bd4d85f951c3e828dc867600
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 23 09:44:26 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 23 09:44:26 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d32995ad
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/newbin
bin/ebuild-helpers/newconfd
bin/ebuild-helpers/newdoc
bin/ebuild-helpers/newenvd
bin/ebuild-helpers/newexe
bin/ebuild-helpers/newinitd
bin/ebuild-helpers/newlib.a
bin/ebuild-helpers/newlib.so
bin/ebuild-helpers/newman
bin/ebuild-helpers/newsbin
pym/portage/util/_pty.py
bin/ebuild-helpers/newbin | 5 +-
bin/ebuild-helpers/newconfd | 5 +-
bin/ebuild-helpers/newdoc | 5 +-
bin/ebuild-helpers/newenvd | 5 +-
bin/ebuild-helpers/newexe | 5 +-
bin/ebuild-helpers/newinitd | 5 +-
bin/ebuild-helpers/newins | 3 +
bin/ebuild-helpers/newlib.a | 5 +-
bin/ebuild-helpers/newlib.so | 5 +-
bin/ebuild-helpers/newman | 5 +-
bin/ebuild-helpers/newsbin | 5 +-
bin/ebuild.sh | 2 +-
bin/misc-functions.sh | 77 ++++++++------
bin/phase-functions.sh | 17 ++--
man/ebuild.5 | 11 +-
man/make.conf.5 | 7 +-
pym/portage/dbapi/vartree.py | 3 +
pym/portage/package/ebuild/config.py | 5 +-
pym/portage/package/ebuild/fetch.py | 25 +++--
pym/portage/tests/ebuild/test_pty_eof.py | 26 -----
pym/portage/util/_pty.py | 162 ++----------------------------
21 files changed, 138 insertions(+), 250 deletions(-)
diff --cc bin/ebuild-helpers/newbin
index bf5def8,bf98744..f3deef1
--- a/bin/ebuild-helpers/newbin
+++ b/bin/ebuild-helpers/newbin
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newconfd
index 284de97,fa3710d..d9462f6
--- a/bin/ebuild-helpers/newconfd
+++ b/bin/ebuild-helpers/newconfd
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newdoc
index 0ca1e52,df6fb1d..d26aec9
--- a/bin/ebuild-helpers/newdoc
+++ b/bin/ebuild-helpers/newdoc
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newenvd
index 02ad10e,c54af05..0688425
--- a/bin/ebuild-helpers/newenvd
+++ b/bin/ebuild-helpers/newenvd
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newexe
index 0877d20,9bcf64b..334096c
--- a/bin/ebuild-helpers/newexe
+++ b/bin/ebuild-helpers/newexe
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newinitd
index aa7955c,03bbe68..ad6d773
--- a/bin/ebuild-helpers/newinitd
+++ b/bin/ebuild-helpers/newinitd
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newlib.a
index d14ea0e,7ff8195..699c2c6
--- a/bin/ebuild-helpers/newlib.a
+++ b/bin/ebuild-helpers/newlib.a
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newlib.so
index 3349900,fd4c097..7e6e7cd
--- a/bin/ebuild-helpers/newlib.so
+++ b/bin/ebuild-helpers/newlib.so
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newman
index d15abc0,889e0f9..dc4962e
--- a/bin/ebuild-helpers/newman
+++ b/bin/ebuild-helpers/newman
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/ebuild-helpers/newsbin
index f8e79df,9df0af2..c02124b
--- a/bin/ebuild-helpers/newsbin
+++ b/bin/ebuild-helpers/newsbin
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
diff --cc bin/misc-functions.sh
index 8b58bda,dcfdceb..436a50b
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc bin/phase-functions.sh
index 493bb54,2167853..c002a34
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -126,17 -126,18 +126,18 @@@ filter_readonly_variables()
LC_CTYPE LC_MESSAGES LC_MONETARY
LC_NUMERIC LC_PAPER LC_TIME"
fi
- if [[ ${EMERGE_FROM} != binary ]] && ! has --allow-extra-vars $* ; then
- filtered_vars="
- ${filtered_vars}
- ${PORTAGE_SAVED_READONLY_VARS}
- ${PORTAGE_MUTABLE_FILTERED_VARS}
- "
- elif ! has --allow-extra-vars $* ; then
- filtered_vars+=" ${binpkg_untrusted_vars}"
+ if ! has --allow-extra-vars $* ; then
+ if [ "${EMERGE_FROM}" = binary ] ; then
+ # preserve additional variables from build time,
+ # while excluding untrusted variables
+ filtered_vars+=" ${binpkg_untrusted_vars}"
+ else
+ filtered_vars+=" ${PORTAGE_SAVED_READONLY_VARS}"
+ filtered_vars+=" ${PORTAGE_MUTABLE_FILTERED_VARS}"
+ fi
fi
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-environment.py failed"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-environment.py failed"
}
# @FUNCTION: preprocess_ebuild_env
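filter_readonly_variables() above assembles a list of variable names and hands it to filter-bash-environment.py. As a rough illustration of what that filtering amounts to, here is a small Python sketch that drops a set of names from an environment dictionary; the helper name and the use of a plain dict (rather than a saved bash environment file) are assumptions for the example only.

def filter_env(env, filtered_vars):
    """Return a copy of env without the named (read-only/untrusted) variables."""
    filtered = set(filtered_vars.split())
    return {k: v for k, v in env.items() if k not in filtered}

if __name__ == "__main__":
    import os
    # e.g. some of the saved read-only vars filtered outside of binpkg restores
    readonly = "D EBUILD T WORKDIR ED EROOT"
    clean = filter_env(dict(os.environ), readonly)
    print(len(os.environ) - len(clean), "variables filtered")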
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-22 9:51 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-22 9:51 UTC (permalink / raw
To: gentoo-commits
commit: 80319ff9c03b94303c4df153dc1f1de7462e5542
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 22 09:47:11 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Dec 22 09:47:11 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=80319ff9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/prepstrip
pym/portage/data.py
bin/ebuild-helpers/prepstrip | 24 +++----
bin/misc-functions.sh | 49 +++++++++++-
bin/phase-functions.sh | 8 ++-
cnf/make.conf | 6 ++
cnf/make.conf.ia64.diff | 23 +-----
cnf/make.conf.s390.diff | 23 +-----
cnf/make.globals | 4 +-
doc/package/ebuild.docbook | 1 +
doc/package/ebuild/eapi/4-python.docbook | 49 ++++++++++++
doc/portage.docbook | 1 +
man/ebuild.5 | 9 ++
man/emerge.1 | 4 +-
man/make.conf.5 | 4 +
pym/_emerge/MergeListItem.py | 2 +-
pym/_emerge/PackageMerge.py | 2 +-
pym/_emerge/Scheduler.py | 8 +-
pym/_emerge/depgraph.py | 10 +-
pym/_emerge/main.py | 5 +-
pym/_emerge/resolver/output.py | 2 +-
pym/_emerge/resolver/slot_collision.py | 2 +-
pym/_emerge/search.py | 2 +-
pym/portage/data.py | 55 ++++++++++----
pym/portage/dbapi/vartree.py | 125 ++++++++++++++++++++++++-----
pym/portage/eclass_cache.py | 2 +
pym/portage/package/ebuild/doebuild.py | 4 +
25 files changed, 313 insertions(+), 111 deletions(-)
diff --cc bin/misc-functions.sh
index 1e3785c,c74b4a4..8b58bda
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc bin/phase-functions.sh
index e62a9ad,7407aba..493bb54
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -128,9 -132,11 +132,11 @@@ filter_readonly_variables()
${PORTAGE_SAVED_READONLY_VARS}
${PORTAGE_MUTABLE_FILTERED_VARS}
"
+ elif ! has --allow-extra-vars $* ; then
+ filtered_vars+=" ${binpkg_untrusted_vars}"
fi
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-environment.py failed"
+ "${PORTAGE_PYTHON:-@PREFIX_PORTAGE_PYTHON@}" "${PORTAGE_BIN_PATH}"/filter-bash-environment.py "${filtered_vars}" || die "filter-bash-environment.py failed"
}
# @FUNCTION: preprocess_ebuild_env
diff --cc pym/portage/data.py
index 1351c7f,c4d967a..a136093
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -156,26 -141,46 +156,51 @@@ def _get_global(k)
# Avoid instantiating portage.settings when the desired
# variable is set in os.environ.
- elif k == '_portage_grpname':
+ elif k in ('_portage_grpname', '_portage_username'):
v = None
- if 'PORTAGE_GRPNAME' in os.environ:
- v = os.environ['PORTAGE_GRPNAME']
- elif hasattr(portage, 'settings'):
- v = portage.settings.get('PORTAGE_GRPNAME')
- if v is None:
- # PREFIX LOCAL: use var iso hardwired 'portage'
- v = PORTAGE_GROUPNAME
- # END PREFIX LOCAL
- elif k == '_portage_username':
- v = None
- if 'PORTAGE_USERNAME' in os.environ:
- v = os.environ['PORTAGE_USERNAME']
+ if k == '_portage_grpname':
+ env_key = 'PORTAGE_GRPNAME'
+ else:
+ env_key = 'PORTAGE_USERNAME'
+
+ if env_key in os.environ:
+ v = os.environ[env_key]
elif hasattr(portage, 'settings'):
- v = portage.settings.get('PORTAGE_USERNAME')
+ v = portage.settings.get(env_key)
+ elif portage.const.EPREFIX:
+ # For prefix environments, default to the UID and GID of
+ # the top-level EROOT directory. The config class has
+ # equivalent code, but we also need to do it here if
+ # _disable_legacy_globals() has been called.
+ eroot = os.path.join(os.environ.get('ROOT', os.sep),
+ portage.const.EPREFIX.lstrip(os.sep))
+ try:
+ eroot_st = os.stat(eroot)
+ except OSError:
+ pass
+ else:
+ if k == '_portage_grpname':
+ try:
+ grp_struct = grp.getgrgid(eroot_st.st_gid)
+ except KeyError:
+ pass
+ else:
+ v = grp_struct.gr_name
+ else:
+ try:
+ pwd_struct = pwd.getpwuid(eroot_st.st_uid)
+ except KeyError:
+ pass
+ else:
+ v = pwd_struct.pw_name
+
if v is None:
- v = 'portage'
+ # PREFIX LOCAL: use var iso hardwired 'portage'
- v = PORTAGE_USERNAME
++ if k == '_portage_grpname':
++ v = PORTAGE_GRPNAME
++ else:
++ v = PORTAGE_USERNAME
+ # END PREFIX LOCAL
else:
raise AssertionError('unknown name: %s' % k)
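The data.py hunk above makes the Prefix default for _portage_grpname/_portage_username the owner of the top-level EROOT directory. A compact sketch of that fallback, with an invented function name and without the os.environ/portage.settings lookups that precede it:

import grp
import os
import pwd

def default_portage_ids(eprefix, root="/"):
    """Best-effort (user, group) names for Prefix: owner of the EROOT dir."""
    eroot = os.path.join(root, eprefix.lstrip(os.sep))
    try:
        st = os.stat(eroot)
    except OSError:
        return None, None
    try:
        user = pwd.getpwuid(st.st_uid).pw_name
    except KeyError:
        user = None
    try:
        group = grp.getgrgid(st.st_gid).gr_name
    except KeyError:
        group = None
    return user, group

if __name__ == "__main__":
    print(default_portage_ids(os.environ.get("EPREFIX", "/")))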
diff --cc pym/portage/dbapi/vartree.py
index edc3983,d93d3c2..4745743
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -30,11 -30,9 +30,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.movefile:movefile',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,pkgcmp,' + \
'_pkgsplit@pkgsplit',
+ 'subprocess',
'tarfile',
)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-19 18:30 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-19 18:30 UTC (permalink / raw
To: gentoo-commits
commit: 45f8d30649d45064e1d7c67eb4b567f5ac690a2f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 19 18:29:38 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Dec 19 18:29:38 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=45f8d306
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/portage/data.py
RELEASE-NOTES | 5 -
bin/ebuild.sh | 18 ++++-
bin/isolated-functions.sh | 6 +-
bin/phase-functions.sh | 2 +-
man/emerge.1 | 16 ++--
man/make.conf.5 | 2 +-
pym/_emerge/AbstractPollTask.py | 65 +++++++++++++++-
pym/_emerge/AsynchronousLock.py | 4 +-
pym/_emerge/BinpkgExtractorAsync.py | 3 +-
pym/_emerge/BlockerCache.py | 2 +-
pym/_emerge/EbuildExecuter.py | 2 -
pym/_emerge/FakeVartree.py | 2 -
pym/_emerge/PipeReader.py | 60 +++++++++-----
pym/_emerge/Scheduler.py | 14 ++++
pym/_emerge/SpawnProcess.py | 50 ++++++------
pym/_emerge/actions.py | 19 ++---
pym/_emerge/depgraph.py | 59 +++++---------
pym/_emerge/main.py | 7 +--
pym/_emerge/resolver/backtracking.py | 2 +-
pym/portage/__init__.py | 3 +-
pym/portage/data.py | 11 ++-
pym/portage/locks.py | 2 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/repository/config.py | 82 ++++++++++----------
pym/portage/tests/ebuild/test_pty_eof.py | 23 +-----
pym/portage/tests/emerge/test_simple.py | 4 +-
pym/portage/tests/process/test_poll.py | 58 ++++++++++++--
pym/portage/util/_pty.py | 70 +++++++++--------
28 files changed, 346 insertions(+), 247 deletions(-)
diff --cc pym/_emerge/depgraph.py
index 12b8439,93819c6..2e3ca4b
--- a/pym/_emerge/depgraph.py
+++ b/pym/_emerge/depgraph.py
@@@ -7186,14 -7169,8 +7169,14 @@@ def _get_masking_status(pkg, pkgsetting
mreasons.append(_MaskReason("CHOST", "CHOST: %s" % \
pkg.metadata["CHOST"]))
+ if pkg.built and not pkg.installed:
+ if not "EPREFIX" in pkg.metadata:
+ mreasons.append(_MaskReason("EPREFIX", "missing EPREFIX"))
+ elif len(pkg.metadata["EPREFIX"].strip()) < len(pkgsettings["EPREFIX"]):
+ mreasons.append(_MaskReason("EPREFIX", "EPREFIX: '%s' too small" % pkg.metadata["EPREFIX"]))
+
if pkg.invalid:
- for msg_type, msgs in pkg.invalid.items():
+ for msgs in pkg.invalid.values():
for msg in msgs:
mreasons.append(
_MaskReason("invalid", "invalid: %s" % (msg,)))
diff --cc pym/portage/data.py
index b2aac06,ec750a6..1351c7f
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -90,16 -86,13 +90,21 @@@ def _get_global(k)
secpass = 2
#Discover the uid and gid of the portage user/group
try:
- portage_gid = grp.getgrnam(_get_global('_portage_grpname')).gr_gid
- portage_uid = pwd.getpwnam(_get_global('_portage_username')).pw_uid
+ _portage_grpname = _get_global('_portage_grpname')
+ if platform.python_implementation() == 'PyPy':
+ # Somehow this prevents "TypeError: expected string" errors
+ # from grp.getgrnam() with PyPy 1.7
+ _portage_grpname = str(_portage_grpname)
+ portage_gid = grp.getgrnam(_portage_grpname).gr_gid
+ except KeyError:
+ # PREFIX LOCAL: some sysadmins are insane, bug #344307
- if _get_global('_portage_grpname').isdigit():
- portage_gid = int(_get_global('_portage_grpname'))
++ if _portage_grpname.isdigit():
++ portage_gid = int(_portage_grpname)
+ else:
+ portage_gid = None
+ # END PREFIX LOCAL
+ try:
+ portage_uid = pwd.getpwnam(_get_global('_portage_username')).pw_uid
if secpass < 1 and portage_gid in os.getgroups():
secpass = 1
except KeyError:
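The KeyError fallback merged above (bug #344307) accepts a purely numeric PORTAGE_GRPNAME when the name cannot be resolved through grp. The same idea as a tiny standalone helper (the function name is made up):

import grp

def resolve_gid(grpname):
    """Resolve a group name to a GID, accepting a numeric string as a last resort."""
    try:
        return grp.getgrnam(grpname).gr_gid
    except (KeyError, TypeError):
        if isinstance(grpname, str) and grpname.isdigit():
            return int(grpname)
        return None

if __name__ == "__main__":
    print(resolve_gid("portage"), resolve_gid("250"), resolve_gid("no-such-group"))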
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-14 15:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-14 15:25 UTC (permalink / raw
To: gentoo-commits
commit: 87863efd98a474f01af099609903cbf7e7a3754d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 14 15:23:13 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Dec 14 15:23:13 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=87863efd
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/egencache
bin/phase-functions.sh
pym/portage/const.py
pym/portage/data.py
pym/portage/package/ebuild/_config/special_env_vars.py
bin/ebuild-ipc.py | 38 +++-
bin/ebuild.sh | 9 -
bin/egencache | 6 +-
bin/lock-helper.py | 1 +
bin/phase-functions.sh | 7 +-
bin/repoman | 14 +-
man/emerge.1 | 4 +-
pym/_emerge/AbstractEbuildProcess.py | 16 ++-
pym/_emerge/Binpkg.py | 6 +-
pym/_emerge/FifoIpcDaemon.py | 16 +-
pym/_emerge/Scheduler.py | 2 +-
pym/_emerge/SpawnProcess.py | 31 ++--
pym/_emerge/actions.py | 19 +-
pym/_emerge/emergelog.py | 9 +-
pym/_emerge/main.py | 2 +-
pym/portage/__init__.py | 2 +-
pym/portage/_legacy_globals.py | 2 -
pym/portage/const.py | 9 +-
pym/portage/data.py | 65 ++++--
pym/portage/dbapi/vartree.py | 4 +-
pym/portage/locks.py | 221 ++++++++++++++------
pym/portage/output.py | 15 +-
.../package/ebuild/_config/LocationsManager.py | 5 +-
.../package/ebuild/_config/special_env_vars.py | 8 +-
pym/portage/package/ebuild/_spawn_nofetch.py | 2 +-
pym/portage/package/ebuild/config.py | 50 +++--
pym/portage/package/ebuild/doebuild.py | 15 +-
pym/portage/package/ebuild/fetch.py | 3 +-
pym/portage/package/ebuild/prepare_build_dirs.py | 3 +-
pym/portage/repository/config.py | 7 +-
pym/portage/tests/dbapi/test_fakedbapi.py | 2 +-
pym/portage/tests/ebuild/test_doebuild_spawn.py | 5 +
pym/portage/tests/ebuild/test_ipc_daemon.py | 13 +-
pym/portage/tests/emerge/test_simple.py | 7 +-
pym/portage/tests/locks/test_asynchronous_lock.py | 62 +++++-
pym/portage/tests/locks/test_lock_nonblock.py | 17 ++-
pym/portage/tests/repoman/test_simple.py | 6 +-
pym/portage/tests/resolver/ResolverPlayground.py | 6 +-
pym/portage/util/movefile.py | 6 +-
pym/portage/xpak.py | 2 +-
pym/repoman/utilities.py | 2 +-
41 files changed, 480 insertions(+), 239 deletions(-)
diff --cc bin/phase-functions.sh
index 331afc8,664202a..9aa033c
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -28,7 -29,7 +29,7 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTDIR PORTDIR_OVERLAY \
PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- PORTAGE_OVERRIDE_EPREFIX ED EROOT"
- __PORTAGE_TEST_HARDLINK_LOCKS"
++ __PORTAGE_TEST_HARDLINK_LOCKS ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc pym/_emerge/Binpkg.py
index 213ced2,6c70b19..e6e2e21
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@@ -20,9 -20,8 +21,9 @@@ from portage import _unicode_decod
from portage import _unicode_encode
import io
import logging
- import shutil
+ import textwrap
from portage.output import colorize
+from portage.const import EPREFIX
class Binpkg(CompositeTask):
diff --cc pym/portage/const.py
index 2223b4c,77c68eb..aa45cb2
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -190,10 -145,8 +190,10 @@@ MANIFEST2_IDENTIFIERS = ("AUX", "MIS
# constant should be minimal, in favor of access via the EPREFIX setting of
# a config instance (since it's possible to construct a config instance with
# a different EPREFIX). Therefore, the EPREFIX constant should *NOT* be used
- # in the definition of any other contstants within this file.
+ # in the definition of any other constants within this file.
-EPREFIX=""
+# PREFIX LOCAL: rely on EPREFIX from autotools
+#EPREFIX=""
+# END PREFIX LOCAL
# pick up EPREFIX from the environment if set
if "PORTAGE_OVERRIDE_EPREFIX" in os.environ:
diff --cc pym/portage/data.py
index f2f541f,cf94ab0..5651e7d
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -90,16 -86,8 +90,20 @@@ def _get_global(k)
secpass = 2
#Discover the uid and gid of the portage user/group
try:
++<<<<<<< HEAD
++=======
+ portage_uid = pwd.getpwnam(_get_global('_portage_username')).pw_uid
++>>>>>>> overlays-gentoo-org/master
portage_gid = grp.getgrnam(_get_global('_portage_grpname')).gr_gid
+ except KeyError:
+ # PREFIX LOCAL: some sysadmins are insane, bug #344307
+ if _get_global('_portage_grpname').isdigit():
+ portage_gid = int(_get_global('_portage_grpname'))
+ else:
+ portage_gid = None
+ # END PREFIX LOCAL
+ try:
+ portage_uid = pwd.getpwnam(_get_global('_portage_uname')).pw_uid
if secpass < 1 and portage_gid in os.getgroups():
secpass = 1
except KeyError:
@@@ -149,16 -134,24 +153,28 @@@
pass
v = sorted(set(v))
+ # Avoid instantiating portage.settings when the desired
+ # variable is set in os.environ.
elif k == '_portage_grpname':
- env = getattr(portage, 'settings', os.environ)
- # PREFIX LOCAL: use var iso hardwired 'portage'
- v = env.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
- # END PREFIX LOCAL
- elif k == '_portage_uname':
- env = getattr(portage, 'settings', os.environ)
- # PREFIX LOCAL: use var iso hardwired 'portage'
- v = env.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
- # END PREFIX LOCAL
+ v = None
+ if 'PORTAGE_GRPNAME' in os.environ:
+ v = os.environ['PORTAGE_GRPNAME']
+ elif hasattr(portage, 'settings'):
+ v = portage.settings.get('PORTAGE_GRPNAME')
+ if v is None:
- v = 'portage'
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ v = PORTAGE_GROUPNAME
++ # END PREFIX LOCAL
+ elif k == '_portage_username':
+ v = None
+ if 'PORTAGE_USERNAME' in os.environ:
+ v = os.environ['PORTAGE_USERNAME']
+ elif hasattr(portage, 'settings'):
+ v = portage.settings.get('PORTAGE_USERNAME')
+ if v is None:
- v = 'portage'
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ v = PORTAGE_USERNAME
++ # END PREFIX LOCAL
else:
raise AssertionError('unknown name: %s' % k)
@@@ -188,14 -181,13 +204,17 @@@ def _init(settings)
initialize global variables. This allows settings to come from make.conf
instead of requiring them to be set in the calling environment.
"""
- if '_portage_grpname' not in _initialized_globals:
- v = settings.get('PORTAGE_GRPNAME')
- if v is not None:
- globals()['_portage_grpname'] = v
- _initialized_globals.add('_portage_grpname')
-
- if '_portage_uname' not in _initialized_globals:
- v = settings.get('PORTAGE_USERNAME')
- if v is not None:
- globals()['_portage_uname'] = v
- _initialized_globals.add('_portage_uname')
+ if '_portage_grpname' not in _initialized_globals and \
+ '_portage_username' not in _initialized_globals:
+
- v = settings.get('PORTAGE_GRPNAME', 'portage')
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ v = settings.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
++ # END PREFIX LOCAL
+ globals()['_portage_grpname'] = v
+ _initialized_globals.add('_portage_grpname')
+
- v = settings.get('PORTAGE_USERNAME', 'portage')
++ # PREFIX LOCAL: use var iso hardwired 'portage'
++ v = settings.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
++ # END PREFIX LOCAL
+ globals()['_portage_username'] = v
+ _initialized_globals.add('_portage_username')
diff --cc pym/portage/dbapi/vartree.py
index 09d7721,a9a147a..edc3983
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -27,11 -27,9 +27,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.digraph:digraph',
'portage.util.env_update:env_update',
'portage.util.listdir:dircache,listdir',
+ 'portage.util.movefile:movefile',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,pkgcmp,' + \
'_pkgsplit@pkgsplit',
'tarfile',
diff --cc pym/portage/package/ebuild/_config/special_env_vars.py
index a2f6505,d6ee647..acff992
--- a/pym/portage/package/ebuild/_config/special_env_vars.py
+++ b/pym/portage/package/ebuild/_config/special_env_vars.py
@@@ -66,9 -66,7 +66,9 @@@ environ_whitelist +=
"REPLACING_VERSIONS", "REPLACED_BY_VERSION",
"ROOT", "ROOTPATH", "T", "TMP", "TMPDIR",
"USE_EXPAND", "USE_ORDER", "WORKDIR",
- "XARGS", "PORTAGE_OVERRIDE_EPREFIX",
+ "XARGS", "__PORTAGE_TEST_HARDLINK_LOCKS",
+ "BPREFIX", "DEFAULT_PATH", "EXTRA_PATH",
+ "PORTAGE_GROUP", "PORTAGE_USER",
]
# user config variables
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-10 11:28 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-10 11:28 UTC (permalink / raw
To: gentoo-commits
commit: 28b58821c000fc413312cb44c85f6697a555625c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 10 10:56:53 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Dec 10 10:56:53 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=28b58821
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dispatch-conf
bin/egencache
bin/phase-functions.sh
bin/portageq
bin/repoman
cnf/make.globals
pym/_emerge/Binpkg.py
pym/portage/__init__.py
pym/portage/data.py
pym/portage/dispatch_conf.py
pym/portage/package/ebuild/_config/special_env_vars.py
bin/chpathtool.py | 182 ++++++++++++++++++++
bin/dispatch-conf | 2 +-
bin/egencache | 8 +-
bin/phase-functions.sh | 2 +-
bin/portageq | 6 +-
bin/repoman | 13 +-
man/make.conf.5 | 4 +
pym/_emerge/Binpkg.py | 45 +++--
pym/_emerge/BinpkgEnvExtractor.py | 4 +-
pym/_emerge/main.py | 4 +-
pym/portage/__init__.py | 10 +-
pym/portage/const.py | 10 +-
pym/portage/data.py | 14 +-
pym/portage/dbapi/porttree.py | 8 +-
pym/portage/dispatch_conf.py | 5 +-
pym/portage/exception.py | 4 +
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/config.py | 60 ++++++--
pym/portage/tests/dbapi/test_fakedbapi.py | 2 +-
pym/portage/tests/emerge/test_simple.py | 2 +-
pym/portage/tests/repoman/test_simple.py | 2 +-
pym/portage/tests/resolver/ResolverPlayground.py | 3 +-
pym/portage/util/env_update.py | 2 +-
pym/portage/util/movefile.py | 60 ++++++-
24 files changed, 364 insertions(+), 90 deletions(-)
diff --cc bin/dispatch-conf
index e639b66,75e9911..1b86461
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -92,7 -85,7 +92,7 @@@ class dispatch
confs = []
count = 0
- config_root = os.environ.get("__PORTAGE_TEST_EPREFIX", EPREFIX)
- config_root = portage.const.EPREFIX or os.sep
++ config_root = EPREFIX or os.sep
self.options = portage.dispatch_conf.read_config(MANDATORY_OPTS)
if "log-file" in self.options:
diff --cc bin/egencache
index d27754d,6d18f66..b7893e6
--- a/bin/egencache
+++ b/bin/egencache
@@@ -842,12 -841,10 +842,10 @@@ def egencache_main(args)
if options.portdir is not None:
env['PORTDIR'] = options.portdir
- eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX")
- if not eprefix:
- eprefix = EPREFIX
- eprefix = portage.const.EPREFIX
++ eprefix = EPREFIX
settings = portage.config(config_root=config_root,
- local_config=False, env=env, _eprefix=eprefix)
+ local_config=False, env=env, eprefix=eprefix)
default_opts = None
if not options.ignore_default_opts:
diff --cc bin/phase-functions.sh
index 40decb7,f862b30..331afc8
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -28,7 -28,7 +28,7 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTDIR PORTDIR_OVERLAY \
PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- __PORTAGE_TEST_EPREFIX ED EROOT"
- PORTAGE_OVERRIDE_EPREFIX"
++ PORTAGE_OVERRIDE_EPREFIX ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
diff --cc pym/_emerge/Binpkg.py
index 4b66033,b259069..213ced2
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@@ -19,8 -19,8 +20,9 @@@ from portage import _unicode_decod
from portage import _unicode_encode
import io
import logging
+ import shutil
from portage.output import colorize
+from portage.const import EPREFIX
class Binpkg(CompositeTask):
@@@ -267,23 -267,6 +270,25 @@@
finally:
f.close()
++ # PREFIX LOCAL: deal with EPREFIX from binpkg
+ # Retrieve the EPREFIX this package was built with
- self._buildprefix = pkg_xpak.getfile(_unicode_encode("EPREFIX",
++ self._build_prefix = pkg_xpak.getfile(_unicode_encode("EPREFIX",
+ encoding=_encodings['repo.content']))
- if not self._buildprefix:
- self._buildprefix = ''
++ if not self._build_prefix:
++ self._build_prefix = ''
+ else:
- self._buildprefix = self._buildprefix.strip()
++ self._build_prefix = self._build_prefix.strip()
+ # We want to install in "our" prefix, not the binary one
+ self.settings["EPREFIX"] = EPREFIX
+ f = io.open(_unicode_encode(os.path.join(infloc, 'EPREFIX'),
+ encoding=_encodings['fs'], errors='strict'),
+ mode='w', encoding=_encodings['content'], errors='strict')
+ try:
+ f.write(_unicode_decode(EPREFIX + "\n"))
+ finally:
+ f.close()
++ # END PREFIX LOCAL
+
env_extractor = BinpkgEnvExtractor(background=self.background,
scheduler=self.scheduler, settings=self.settings)
@@@ -309,16 -292,9 +314,18 @@@
self.wait()
return
++ # PREFIX LOCAL:
+ # if the prefix differs, we copy it to the image after
+ # extraction using chpathtool
- if (self._buildprefix != EPREFIX):
++ if (self._build_prefix != EPREFIX):
+ pkgloc = self._work_dir
+ else:
+ pkgloc = self._image_dir
++ # END PREFIX LOCAL
+
extractor = BinpkgExtractorAsync(background=self.background,
env=self.settings.environ(),
- image_dir=self._image_dir,
+ image_dir=pkgloc,
pkg=self.pkg, pkg_path=self._pkg_path,
logfile=self.settings.get("PORTAGE_LOG_FILE"),
scheduler=self.scheduler)
@@@ -333,21 -309,61 +340,27 @@@
self.wait()
return
- if self._buildprefix != EPREFIX:
- try:
- with io.open(_unicode_encode(os.path.join(self._infloc, "EPREFIX"),
- encoding=_encodings['fs'], errors='strict'), mode='r',
- encoding=_encodings['repo.content'], errors='replace') as f:
- self._build_prefix = f.read().rstrip('\n')
- except IOError:
- self._build_prefix = ""
-
- if self._build_prefix == self.settings["EPREFIX"]:
- ensure_dirs(self.settings["ED"])
- self._current_task = None
- self.returncode = os.EX_OK
++ # PREFIX LOCAL: use chpathtool binary
++ if self._build_prefix != EPREFIX:
+ chpathtool = BinpkgChpathtoolAsync(background=self.background,
+ image_dir=self._image_dir, work_dir=self._work_dir,
- buildprefix=self._buildprefix, eprefix=EPREFIX,
++ buildprefix=self._build_prefix, eprefix=EPREFIX,
+ pkg=self.pkg, scheduler=self.scheduler)
+ self._writemsg_level(">>> Adjusting Prefix to %s\n" % EPREFIX)
+ self._start_task(chpathtool, self._chpathtool_exit)
+ else:
self.wait()
- return
-
- chpathtool = SpawnProcess(
- args=[portage._python_interpreter,
- os.path.join(self.settings["PORTAGE_BIN_PATH"], "chpathtool.py"),
- self.settings["D"], self._build_prefix, self.settings["EPREFIX"]],
- background=self.background, env=self.settings.environ(),
- scheduler=self.scheduler,
- logfile=self.settings.get('PORTAGE_LOG_FILE'))
- self._writemsg_level(">>> Adjusting Prefix to %s\n" % self.settings["EPREFIX"])
- self._start_task(chpathtool, self._chpathtool_exit)
++ # END PREFIX LOCAL
def _chpathtool_exit(self, chpathtool):
if self._final_exit(chpathtool) != os.EX_OK:
self._unlock_builddir()
- writemsg("!!! Error Adjusting Prefix to %s\n" % EPREFIX,
- noiselevel=-1)
+ self._writemsg_level("!!! Error Adjusting Prefix to %s" %
+ (self.settings["EPREFIX"],),
+ noiselevel=-1, level=logging.ERROR)
+ self.wait()
+ return
+
- # We want to install in "our" prefix, not the binary one
- with io.open(_unicode_encode(os.path.join(self._infloc, "EPREFIX"),
- encoding=_encodings['fs'], errors='strict'), mode='w',
- encoding=_encodings['repo.content'], errors='strict') as f:
- f.write(self.settings["EPREFIX"] + "\n")
-
- # Move the files to the correct location for merge.
- image_tmp_dir = os.path.join(
- self.settings["PORTAGE_BUILDDIR"], "image_tmp")
- build_d = os.path.join(self.settings["D"],
- self._build_prefix.lstrip(os.sep))
- if not os.path.isdir(build_d):
- # Assume this is a virtual package or something.
- shutil.rmtree(self._image_dir)
- ensure_dirs(self.settings["ED"])
- else:
- os.rename(build_d, image_tmp_dir)
- shutil.rmtree(self._image_dir)
- ensure_dirs(os.path.dirname(self.settings["ED"].rstrip(os.sep)))
- os.rename(image_tmp_dir, self.settings["ED"])
-
self.wait()
def _unlock_builddir(self):
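The Binpkg changes above route packages built under a different EPREFIX through chpathtool.py, which rewrites the build-time prefix to the install-time prefix inside the extracted image. The real tool also handles binary files, symlinks and hardlinks; the sketch below only rewrites the prefix in text files, with invented names, to show the general shape of the operation.

import os

def adjust_prefix_text(image_dir, build_prefix, eprefix):
    """Naive chpathtool-style pass: replace build_prefix with eprefix in text files.
    Illustrative only -- the real chpathtool.py also patches binaries and symlinks."""
    old = build_prefix.encode()
    new = eprefix.encode()
    for dirpath, _dirnames, filenames in os.walk(image_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue
            with open(path, "rb") as f:
                data = f.read()
            if old in data and b"\0" not in data:  # skip binaries in this sketch
                with open(path, "wb") as f:
                    f.write(data.replace(old, new))

if __name__ == "__main__":
    adjust_prefix_text("/var/tmp/portage/pkg/work", "/old/prefix", "/new/prefix")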
diff --cc pym/portage/__init__.py
index c5b7f76,1df9566..21ca69e
--- a/pym/portage/__init__.py
+++ b/pym/portage/__init__.py
@@@ -505,11 -505,9 +506,8 @@@ def create_trees(config_root=None, targ
if env is None:
env = os.environ
- eprefix = env.get("__PORTAGE_TEST_EPREFIX")
- if not eprefix:
- eprefix = EPREFIX
-
settings = config(config_root=config_root, target_root=target_root,
- env=env, _eprefix=eprefix)
+ env=env, eprefix=eprefix)
settings.lock()
trees._target_eroot = settings['EROOT']
diff --cc pym/portage/const.py
index 83d6696,a32933c..2223b4c
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -191,13 -146,11 +191,13 @@@ MANIFEST2_IDENTIFIERS = ("AUX", "MIS
# a config instance (since it's possible to construct a config instance with
# a different EPREFIX). Therefore, the EPREFIX constant should *NOT* be used
# in the definition of any other constants within this file.
-EPREFIX=""
+# PREFIX LOCAL: rely on EPREFIX from autotools
+#EPREFIX=""
+# END PREFIX LOCAL
# pick up EPREFIX from the environment if set
- if "__PORTAGE_TEST_EPREFIX" in os.environ:
- EPREFIX = os.environ["__PORTAGE_TEST_EPREFIX"]
+ if "PORTAGE_OVERRIDE_EPREFIX" in os.environ:
+ EPREFIX = os.environ["PORTAGE_OVERRIDE_EPREFIX"]
if EPREFIX:
EPREFIX = os.path.normpath(EPREFIX)
diff --cc pym/portage/data.py
index e8a0a3a,53af6b9..196e5dc
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -95,15 -86,8 +90,16 @@@ def _get_global(k)
secpass = 2
#Discover the uid and gid of the portage user/group
try:
- portage_gid = grp.getgrnam(_get_global('_portage_grpname'))[2]
- portage_uid = pwd.getpwnam(_get_global('_portage_uname')).pw_uid
+ portage_gid = grp.getgrnam(_get_global('_portage_grpname')).gr_gid
+ except KeyError:
- # some sysadmins are insane, bug #344307
++ # PREFIX LOCAL: some sysadmins are insane, bug #344307
+ if _get_global('_portage_grpname').isdigit():
+ portage_gid = int(_get_global('_portage_grpname'))
+ else:
+ portage_gid = None
++ # END PREFIX LOCAL
+ try:
- portage_uid = pwd.getpwnam(_get_global('_portage_uname'))[2]
++ portage_uid = pwd.getpwnam(_get_global('_portage_uname')).pw_uid
if secpass < 1 and portage_gid in os.getgroups():
secpass = 1
except KeyError:
diff --cc pym/portage/dispatch_conf.py
index bfd6517,7f407ff..11a6a7d
--- a/pym/portage/dispatch_conf.py
+++ b/pym/portage/dispatch_conf.py
@@@ -40,8 -39,8 +40,9 @@@ def diffstatusoutput_len(cmd)
return (1, 1)
def read_config(mandatory_opts):
- eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX", EPREFIX)
- config_path = os.path.join(eprefix, "etc/dispatch-conf.conf")
- eprefix = portage.const.EPREFIX
++ eprefix = EPREFIX
+ config_path = os.path.join(eprefix or os.sep, "etc/dispatch-conf.conf")
loader = KeyValuePairFileLoader(config_path, None)
opts, errors = loader.load()
if not opts:
diff --cc pym/portage/package/ebuild/_config/special_env_vars.py
index d07f68b,3911e97..a2f6505
--- a/pym/portage/package/ebuild/_config/special_env_vars.py
+++ b/pym/portage/package/ebuild/_config/special_env_vars.py
@@@ -66,9 -66,7 +66,9 @@@ environ_whitelist +=
"REPLACING_VERSIONS", "REPLACED_BY_VERSION",
"ROOT", "ROOTPATH", "T", "TMP", "TMPDIR",
"USE_EXPAND", "USE_ORDER", "WORKDIR",
- "XARGS", "__PORTAGE_TEST_EPREFIX",
+ "XARGS", "PORTAGE_OVERRIDE_EPREFIX",
+ "BPREFIX", "DEFAULT_PATH", "EXTRA_PATH",
+ "PORTAGE_GROUP", "PORTAGE_USER",
]
# user config variables
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-09 20:33 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-09 20:33 UTC (permalink / raw
To: gentoo-commits
commit: 868bcc349966d957d51b302b530f3299dd8fb5c8
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 9 20:32:39 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 9 20:32:39 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=868bcc34
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dispatch-conf
bin/ebuild-helpers/dobin
bin/ebuild-helpers/dodir
bin/ebuild-helpers/dodoc
bin/ebuild-helpers/doexe
bin/ebuild-helpers/dohard
bin/ebuild-helpers/doinfo
bin/ebuild-helpers/doins
bin/ebuild-helpers/dolib
bin/ebuild-helpers/domo
bin/ebuild-helpers/dosbin
bin/ebuild-helpers/dosed
bin/ebuild-helpers/dosym
bin/ebuild-helpers/ecompressdir
bin/ebuild-helpers/fowners
bin/ebuild-helpers/fperms
bin/ebuild-helpers/prepalldocs
bin/ebuild-helpers/prepallinfo
bin/ebuild-helpers/prepallman
bin/ebuild-helpers/prepallstrip
bin/ebuild-helpers/prepinfo
bin/ebuild-helpers/preplib
bin/ebuild-helpers/prepstrip
bin/emerge-webrsync
bin/etc-update
bin/misc-functions.sh
bin/phase-functions.sh
bin/phase-helpers.sh
cnf/dispatch-conf.conf
cnf/make.conf
cnf/make.globals
pym/portage/data.py
pym/portage/dispatch_conf.py
pym/portage/package/ebuild/_config/special_env_vars.py
pym/portage/package/ebuild/config.py
bin/dispatch-conf | 5 +-
bin/ebuild-helpers/dobin | 7 +-
bin/ebuild-helpers/dodir | 5 +-
bin/ebuild-helpers/dodoc | 5 +-
bin/ebuild-helpers/doexe | 5 +-
bin/ebuild-helpers/dohard | 5 +-
bin/ebuild-helpers/doinfo | 5 +-
bin/ebuild-helpers/doins | 7 +-
bin/ebuild-helpers/dolib | 5 +-
bin/ebuild-helpers/doman | 3 +-
bin/ebuild-helpers/domo | 5 +-
bin/ebuild-helpers/dosbin | 5 +-
bin/ebuild-helpers/dosed | 5 +-
bin/ebuild-helpers/dosym | 5 +-
bin/ebuild-helpers/ecompressdir | 5 +-
bin/ebuild-helpers/fowners | 7 +-
bin/ebuild-helpers/fperms | 6 +-
bin/ebuild-helpers/prepall | 3 +-
bin/ebuild-helpers/prepalldocs | 5 +-
bin/ebuild-helpers/prepallinfo | 5 +-
bin/ebuild-helpers/prepallman | 5 +-
bin/ebuild-helpers/prepallstrip | 5 +-
bin/ebuild-helpers/prepinfo | 5 +-
bin/ebuild-helpers/preplib | 5 +-
bin/ebuild-helpers/prepman | 3 +-
bin/ebuild-helpers/prepstrip | 9 +-
bin/ebuild.sh | 23 ++-
bin/egencache | 2 +-
bin/emerge-webrsync | 9 +-
bin/etc-update | 12 +-
bin/misc-functions.sh | 47 ++---
bin/phase-functions.sh | 28 +++-
bin/phase-helpers.sh | 53 +++---
bin/portageq | 4 +
bin/repoman | 6 +-
cnf/dispatch-conf.conf | 2 +-
cnf/make.conf | 17 +-
cnf/make.globals | 14 +-
doc/config/sets.docbook | 2 +-
man/emerge.1 | 21 ++-
man/fixpackages.1 | 15 ++
man/make.conf.5 | 6 +
pym/_emerge/actions.py | 43 +++--
pym/_emerge/depgraph.py | 15 ++-
pym/portage/_legacy_globals.py | 1 +
pym/portage/const.py | 20 ++-
pym/portage/data.py | 196 +++++++++++++-------
pym/portage/dbapi/porttree.py | 38 +----
pym/portage/dispatch_conf.py | 12 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/config.py | 28 ++-
pym/portage/package/ebuild/doebuild.py | 10 +-
pym/portage/repository/config.py | 24 ++-
pym/portage/tests/bin/setup_env.py | 6 +-
pym/portage/tests/util/test_getconfig.py | 2 +-
55 files changed, 461 insertions(+), 332 deletions(-)
diff --cc bin/dispatch-conf
index 6a77f7b,1cad9e0..e639b66
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -92,7 -85,7 +92,7 @@@ class dispatch
confs = []
count = 0
- config_root = EPREFIX
- config_root = os.environ.get("__PORTAGE_TEST_EPREFIX", "/")
++ config_root = os.environ.get("__PORTAGE_TEST_EPREFIX", EPREFIX)
self.options = portage.dispatch_conf.read_config(MANDATORY_OPTS)
if "log-file" in self.options:
diff --cc bin/ebuild-helpers/dobin
index 8adc65d,f90d893..922e600
--- a/bin/ebuild-helpers/dobin
+++ b/bin/ebuild-helpers/dobin
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/dodir
index 06dd2fe,90a3efe..f7a9c39
--- a/bin/ebuild-helpers/dodir
+++ b/bin/ebuild-helpers/dodir
@@@ -2,11 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) ED=${D} ;; esac
- # END PREFIX LOCAL
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
install -d ${DIROPTIONS} "${@/#/${ED}/}"
ret=$?
diff --cc bin/ebuild-helpers/doins
index 0eafced,443bfdb..b9c95ed
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -39,13 -38,6 +38,15 @@@ if [[ ${INSDESTTREE#${ED}} != "${INSDES
helpers_die "${0##*/} used with \${D} or \${ED}"
exit 1
fi
++# PREFIX LOCAL: check for usage with EPREFIX
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ vecho "-------------------------------------------------------" 1>&2
+ vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ vecho " --> ${INSDESTTREE}" 1>&2
+ vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
++# END PREFIX LOCAL
case "$EAPI" in
0|1|2|3|3_pre2)
diff --cc bin/ebuild-helpers/fowners
index 5c1ecac,a5a28f2..9815d2e
--- a/bin/ebuild-helpers/fowners
+++ b/bin/ebuild-helpers/fowners
@@@ -2,17 -2,11 +2,18 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
++# PREFIX LOCAL: ignore otherwise failing call
+if hasq prefix ${USE} && [[ $EUID != 0 ]] ; then
+ ewarn "fowners ignored in Prefix with non-privileged user"
+ exit 0
+fi
-
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) ED=${D} ;; esac
+# END PREFIX LOCAL
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
+
# we can't prefix all arguments because
# chown takes random options
slash="/"
diff --cc bin/ebuild-helpers/fperms
index 25f77a9,a2f77ea..23b5361
--- a/bin/ebuild-helpers/fperms
+++ b/bin/ebuild-helpers/fperms
@@@ -2,11 -2,11 +2,11 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) ED=${D} ;; esac
- # END PREFIX LOCAL
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
+
# we can't prefix all arguments because
# chmod takes random options
slash="/"
diff --cc bin/ebuild-helpers/prepall
index c4e9ffc,49e646c..3aacb7f
--- a/bin/ebuild-helpers/prepall
+++ b/bin/ebuild-helpers/prepall
@@@ -2,11 -2,10 +2,12 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
+[[ -d ${ED} ]] || exit 0
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
if has chflags $FEATURES ; then
# Save all the file flags for restoration at the end of prepall.
diff --cc bin/ebuild-helpers/prepallinfo
index de52098,db9bbfa..00e1fc4
--- a/bin/ebuild-helpers/prepallinfo
+++ b/bin/ebuild-helpers/prepallinfo
@@@ -2,11 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) ED=${D} ;; esac
- # END PREFIX LOCAL
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
[[ -d ${ED}usr/share/info ]] || exit 0
diff --cc bin/ebuild-helpers/prepinfo
index c0ab9c9,ffe2ece..ffd5049
--- a/bin/ebuild-helpers/prepinfo
+++ b/bin/ebuild-helpers/prepinfo
@@@ -2,11 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) ED=${D} ;; esac
- # END PREFIX LOCAL
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
if [[ -z $1 ]] ; then
infodir="/usr/share/info"
diff --cc bin/ebuild-helpers/prepman
index 2c10b26,f96b641..1411499
--- a/bin/ebuild-helpers/prepman
+++ b/bin/ebuild-helpers/prepman
@@@ -2,9 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
- case "$EAPI" in 0|1|2) ED=${D} ;; esac
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
if [[ -z $1 ]] ; then
mandir="${ED}usr/share/man"
diff --cc bin/ebuild-helpers/prepstrip
index 927078a,15eed84..84e2edc
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -221,15 -216,6 +220,17 @@@ d
strip_this=false
fi
++ # PREFIX LOCAL:
+ # In Prefix we are usually an unprivileged user, so we can't strip
+ # unwritable objects. Make them temporarily writable for the
+ # stripping.
+ was_not_writable=false
+ if [[ ! -w ${x} ]] ; then
+ was_not_writable=true
+ chmod u+w "${x}"
+ fi
++ # END PREFIX LOCAL
+
# only split debug info for final linked objects
# or kernel modules as debuginfo for intermediary
# files (think crt*.o from gcc/glibc) is useless and
@@@ -248,10 -234,6 +249,12 @@@
elif [[ ${f} == *"SB relocatable"* ]] ; then
process_elf "${x}" ${SAFE_STRIP_FLAGS}
fi
+
++ # PREFIX LOCAL: see above
+ if [[ ${was_not_writable} == "true" ]] ; then
+ chmod u-w "${x}"
+ fi
++ # END PREFIX LOCAL
done
if [[ -s ${T}/debug.sources ]] && \
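The prepstrip hunk above temporarily adds u+w before stripping, because in Prefix the unprivileged user may hit read-only objects in the image. The same pattern in Python, as a hedged sketch with an invented helper name and a placeholder strip invocation:

import os
import stat
import subprocess

def strip_object(path, strip_cmd=("strip", "--strip-unneeded")):
    """Strip one object, temporarily making it writable if needed (sketch)."""
    st = os.stat(path)
    was_not_writable = not (st.st_mode & stat.S_IWUSR)
    if was_not_writable:
        os.chmod(path, st.st_mode | stat.S_IWUSR)
    try:
        subprocess.run(list(strip_cmd) + [path], check=True)
    finally:
        if was_not_writable:
            # Restore the original (read-only) mode afterwards.
            os.chmod(path, st.st_mode)

if __name__ == "__main__":
    target = "/tmp/example.o"  # placeholder path
    if os.path.exists(target):
        strip_object(target)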
diff --cc bin/ebuild.sh
index 2ec34cd,1f95adb..7511532
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -669,11 -670,11 +670,13 @@@ els
declare -r $PORTAGE_READONLY_METADATA $PORTAGE_READONLY_VARS
case "$EAPI" in
0|1|2)
+ [[ " ${FEATURES} " == *" force-prefix "* ]] && \
+ declare -r ED EPREFIX EROOT
;;
*)
- declare -r ED EPREFIX EROOT
+ # PREFIX LOCAL: allow prefix vars in any EAPI
+ #declare -r ED EPREFIX EROOT
+ # PREFIX LOCAL
;;
esac
diff --cc bin/emerge-webrsync
index c5809cc,e6749f2..c41eb79
--- a/bin/emerge-webrsync
+++ b/bin/emerge-webrsync
@@@ -39,10 -38,6 +39,11 @@@ eval $(portageq envvar -v FEATURES FETC
DISTDIR="${PORTAGE_TMPDIR}/emerge-webrsync"
export http_proxy ftp_proxy
- # PREFIX HACK: use Prefix servers, just because we want this and infra
++# PREFIX LOCAL: use Prefix servers, just because we want this and infra
+# just can't support us yet
+GENTOO_MIRRORS="http://gentoo-mirror1.prefix.freens.org http://gentoo-mirror2.prefix.freens.org"
++# END PREFIX LOCAL
+
# If PORTAGE_NICENESS is overridden via the env then it will
# still pass through the portageq call and override properly.
if [ -n "${PORTAGE_NICENESS}" ]; then
@@@ -183,8 -178,8 +184,10 @@@ sync_local()
vecho "Syncing local tree ..."
if type -P tarsync > /dev/null ; then
- local chown_opts="-o portage -g portage"
- chown portage:portage portage > /dev/null 2>&1 || chown_opts=""
++ # PREFIX LOCAL: use PORTAGE_USER and PORTAGE_GROUP
+ local chown_opts="-o ${PORTAGE_USER:-portage} -g ${PORTAGE_GROUP:-portage}"
+ chown ${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage} portage > /dev/null 2>&1 || chown_opts=""
++ # END PREFIX LOCAL
if ! tarsync $(vvecho -v) -s 1 ${chown_opts} \
-e /distfiles -e /packages -e /local "${file}" "${PORTDIR}"; then
eecho "tarsync failed; tarball is corrupt? (${file})"
@@@ -200,8 -195,8 +203,10 @@@
# Free disk space
rm -f "${file}"
- chown portage:portage portage > /dev/null 2>&1 && \
- chown -R portage:portage portage
++ # PREFIX LOCAL: use PORTAGE_USER and PORTAGE_GROUP
+ chown ${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage} portage > /dev/null 2>&1 && \
+ chown -R ${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage} portage
++ # END PREFIX LOCAL
cd portage
rsync -av --progress --stats --delete --delete-after \
--exclude='/distfiles' --exclude='/packages' \
@@@ -216,7 -211,9 +221,9 @@@
vecho "Updating cache ..."
emerge --metadata
fi
- [ -x /etc/portage/bin/post_sync ] && /etc/portage/bin/post_sync
+ [ -x "${EPREFIX}"/etc/portage/bin/post_sync ] && "${EPREFIX}"/etc/portage/bin/post_sync
+ # --quiet suppresses output if there are no relevant news items
+ has news ${FEATURES} && emerge --check-news --quiet
return 0
}
diff --cc bin/etc-update
index c735076,731b648..b3877fb
--- a/bin/etc-update
+++ b/bin/etc-update
@@@ -555,16 -555,12 +555,12 @@@ rm -rf "${TMP}" 2> /dev/nul
mkdir "${TMP}" || die "failed to create temp dir" 1
# make sure we have a secure directory to work in
chmod 0700 "${TMP}" || die "failed to set perms on temp dir" 1
- # GID need not to be available, and group 0 is not cool when not being
- # root, hence just rely on mkdir to have created a dir which is owned by
- # the user
- if [[ -z ${UID} || ${UID} == 0 ]] ; then
- chown ${UID:-0}:${GID:-0} "${TMP}" || die "failed to set ownership on temp dir" 1
- fi
+ chown ${PORTAGE_INST_UID:-0}:${PORTAGE_INST_GID:-0} "${TMP}" || \
+ die "failed to set ownership on temp dir" 1
# I need the CONFIG_PROTECT value
-#CONFIG_PROTECT=$(/usr/lib/portage/bin/portageq envvar CONFIG_PROTECT)
-#CONFIG_PROTECT_MASK=$(/usr/lib/portage/bin/portageq envvar CONFIG_PROTECT_MASK)
+#CONFIG_PROTECT=$(@PORTAGE_BASE@/bin/portageq envvar CONFIG_PROTECT)
+#CONFIG_PROTECT_MASK=$(@PORTAGE_BASE@/bin/portageq envvar CONFIG_PROTECT_MASK)
# load etc-config's configuration
CLEAR_TERM=$(get_config clear_term)
diff --cc bin/misc-functions.sh
index 20d52a7,3582889..1e3785c
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -14,15 -14,11 +14,14 @@@
MISC_FUNCTIONS_ARGS="$@"
shift $#
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}/ebuild.sh"
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}/ebuild.sh"
install_symlink_html_docs() {
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
- # END PREFIX LOCAL
- # PREFIX LOCAL: ED need not to exist, whereas D does
- [[ ! -d ${ED} ]] && dodir /
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: ED need not exist, whereas D does
++ [[ ! -d ${ED} && -d ${D} ]] && dodir /
+ # END PREFIX LOCAL
cd "${ED}" || die "cd failed"
#symlink the html documentation (if DOC_SYMLINKS_DIR is set in make.conf)
if [ -n "${DOC_SYMLINKS_DIR}" ] ; then
@@@ -154,13 -149,10 +152,12 @@@ prepcompress()
install_qa_check() {
local f i x
- # PREFIX LOCAL: always support ED
- #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
- # END PREFIX LOCAL
+ [[ " ${FEATURES} " == *" force-prefix "* ]] || \
+ case "$EAPI" in 0|1|2) local ED=${D} ;; esac
- cd "${ED}" || die "cd failed"
++ # PREFIX LOCAL: ED need not exist, whereas D does
+ cd "${D}" || die "cd failed"
+ # END PREFIX LOCAL
export STRIP_MASK
prepall
@@@ -192,37 -184,6 +189,39 @@@
sleep 1
fi
++ # PREFIX LOCAL:
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED, we skip searching for QA
+ # checks there; the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here such that the diff with trunk remains just
+ # offset and not out of order
+ install_qa_check_misc
++ # END PREFIX LOCAL
+}
+
+install_qa_check_elf() {
if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
local x
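
As a quick reference for the dispatch added above, a standalone sketch mapping a few illustrative CHOST values to the per-format QA function that would run (the CHOST strings are examples only, not an exhaustive list):

    for CHOST in x86_64-apple-darwin21 i586-pc-winnt powerpc-ibm-aix7.1 x86_64-pc-linux-gnu ; do
        case ${CHOST} in
            *-darwin*)           qa=install_qa_check_macho  ;;
            *-interix*|*-winnt*) qa=install_qa_check_pecoff ;;
            *-aix*)              qa=install_qa_check_xcoff  ;;
            *)                   qa=install_qa_check_elf    ;;
        esac
        echo "${CHOST} -> ${qa}"
    done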
diff --cc bin/phase-functions.sh
index 97111f5,482b5b1..75f1664
--- a/bin/phase-functions.sh
+++ b/bin/phase-functions.sh
@@@ -28,7 -28,7 +28,7 @@@ PORTAGE_READONLY_VARS="D EBUILD EBUILD_
PORTAGE_TMPDIR PORTAGE_UPDATE_ENV PORTAGE_USERNAME \
PORTAGE_VERBOSE PORTAGE_WORKDIR_MODE PORTDIR PORTDIR_OVERLAY \
PROFILE_PATHS REPLACING_VERSIONS REPLACED_BY_VERSION T WORKDIR \
- ED EROOT"
- __PORTAGE_TEST_EPREFIX"
++ __PORTAGE_TEST_EPREFIX ED EROOT"
PORTAGE_SAVED_READONLY_VARS="A CATEGORY P PF PN PR PV PVR"
@@@ -558,7 -559,20 +559,24 @@@ dyn_install()
fi
echo "${USE}" > USE
echo "${EAPI:-0}" > EAPI
++<<<<<<< HEAD
+ echo "${EPREFIX}" > EPREFIX
++=======
+
+ # Save EPREFIX, since it makes it easy to use chpathtool to
+ # adjust the content of a binary package so that it will
+ # work in a different EPREFIX from the one it was built for.
+ case "${EAPI:-0}" in
+ 0|1|2)
+ [[ " ${FEATURES} " == *" force-prefix "* ]] && \
+ [ -n "${EPREFIX}" ] && echo "${EPREFIX}" > EPREFIX
+ ;;
+ *)
+ [ -n "${EPREFIX}" ] && echo "${EPREFIX}" > EPREFIX
+ ;;
+ esac
+
++>>>>>>> overlays-gentoo-org/master
set +f
# local variables can leak into the saved environment.
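
The conflict markers above are reproduced verbatim from the archived commit; which side was the final resolution cannot be read off this archive. For readability, a condensed sketch of what the master-side hunk intends, namely recording EPREFIX in build-info so chpathtool can later relocate a binary package to a different offset:

    case "${EAPI:-0}" in
        0|1|2)
            # old EAPIs only carry an offset when force-prefix is enabled
            [[ " ${FEATURES} " == *" force-prefix "* ]] && \
                [ -n "${EPREFIX}" ] && echo "${EPREFIX}" > EPREFIX
            ;;
        *)
            [ -n "${EPREFIX}" ] && echo "${EPREFIX}" > EPREFIX
            ;;
    esac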
diff --cc cnf/make.conf
index 25488d3,ef570bc..5134188
--- a/cnf/make.conf
+++ b/cnf/make.conf
@@@ -138,20 -137,16 +137,16 @@@
# at \${DISTDIR}/\${FILE}.
#
# Default fetch command (3 tries, passive ftp for firewall compatibility)
- #FETCHCOMMAND="@PORTAGE_EPREFIX@/usr/bin/wget -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
- #RESUMECOMMAND="@PORTAGE_EPREFIX@/usr/bin/wget -c -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
+ #FETCHCOMMAND="wget -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
+ #RESUMECOMMAND="wget -c -t 3 -T 60 --passive-ftp -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
#
# Using wget, ratelimiting downloads
- #FETCHCOMMAND="@PORTAGE_EPREFIX@/usr/bin/wget -t 3 -T 60 --passive-ftp --limit-rate=200k -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
- #RESUMECOMMAND="@PORTAGE_EPREFIX@/usr/bin/wget -c -t 3 -T 60 --passive-ftp --limit-rate=200k -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
- #
- # curl groks urls
- #FETCHCOMMAND="@PORTAGE_EPREFIX@/usr/bin/curl -f --connect-timeout 15 -# -o \${DISTDIR}/\${FILE} \${URI}"
- #RESUMECOMMAND="@PORTAGE_EPREFIX@/usr/bin/curl -f --connect-timeout 15 -# -C - -o \${DISTDIR}/\${FILE} \${URI}"
+ #FETCHCOMMAND="wget -t 3 -T 60 --passive-ftp --limit-rate=200k -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
+ #RESUMECOMMAND="wget -c -t 3 -T 60 --passive-ftp --limit-rate=200k -O \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
#
# Lukemftp (BSD ftp):
- #FETCHCOMMAND="@PORTAGE_EPREFIX@/usr/bin/lukemftp -s -a -o \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
- #RESUMECOMMAND="@PORTAGE_EPREFIX@/usr/bin/lukemftp -s -a -R -o \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
-#FETCHCOMMAND="/usr/bin/lukemftp -s -a -o \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
-#RESUMECOMMAND="/usr/bin/lukemftp -s -a -R -o \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
++#FETCHCOMMAND="lukemftp -s -a -o \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
++#RESUMECOMMAND="lukemftp -s -a -R -o \"\${DISTDIR}/\${FILE}\" \"\${URI}\""
#
# Portage uses GENTOO_MIRRORS to specify mirrors to use for source retrieval.
# The list is a space separated list which is read left to right. If you use
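
For completeness, a minimal make.conf sketch of the mirror list the comment above describes; the second URL is a placeholder:

    GENTOO_MIRRORS="https://distfiles.gentoo.org https://mirror.example.org/gentoo"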
diff --cc pym/portage/const.py
index 63e7c67,5eeebe1..336c005
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -134,9 -90,8 +134,9 @@@ SUPPORTED_FEATURES = frozenset(
"ccache", "chflags", "clean-logs",
"collision-protect", "compress-build-logs",
"digest", "distcc", "distcc-pump", "distlocks", "ebuild-locks", "fakeroot",
- "fail-clean", "force-mirror", "getbinpkg",
+ "fail-clean", "force-mirror", "force-prefix", "getbinpkg",
"installsources", "keeptemp", "keepwork", "fixlafiles", "lmirror",
+ "macossandbox", "macosprefixsandbox", "macosusersandbox",
"metadata-transfer", "mirror", "multilib-strict", "news",
"noauto", "noclean", "nodoc", "noinfo", "noman",
"nostrip", "notitles", "parallel-fetch", "parallel-install",
diff --cc pym/portage/data.py
index 354fc9c,fa6970c..e8a0a3a
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -77,71 -66,117 +75,131 @@@ try
except KeyError:
pass
- # Allow the overriding of the user used for 'userpriv' and 'userfetch'
- _portage_uname = os.environ.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
- _portage_grpname = os.environ.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
-
- #Discover the uid and gid of the portage user/group
- try:
- portage_gid = grp.getgrnam(_portage_grpname)[2]
- except KeyError:
- # some sysadmins are insane, bug #344307
- if _portage_grpname.isdigit():
- portage_gid = int(_portage_grpname)
+ # The portage_uid and portage_gid global constants, and others that
+ # depend on them are initialized lazily, in order to allow configuration
+ # via make.conf. Eventually, these constants may be deprecated in favor
+ # of config attributes, since it's conceivable that multiple
+ # configurations with different constants could be used simultaneously.
+ _initialized_globals = set()
+
+ def _get_global(k):
+ if k in _initialized_globals:
+ return globals()[k]
+
+ if k in ('portage_gid', 'portage_uid', 'secpass'):
+ global portage_gid, portage_uid, secpass
+ secpass = 0
+ if uid == 0:
+ secpass = 2
+ elif "__PORTAGE_TEST_EPREFIX" in os.environ:
+ secpass = 2
+ #Discover the uid and gid of the portage user/group
+ try:
- portage_uid = pwd.getpwnam(_get_global('_portage_uname'))[2]
+ portage_gid = grp.getgrnam(_get_global('_portage_grpname'))[2]
++ except KeyError:
++ # some sysadmins are insane, bug #344307
++ if _get_global('_portage_grpname').isdigit():
++ portage_gid = int(_get_global('_portage_grpname'))
++ else:
++ portage_gid = None
++ try:
++ portage_uid = pwd.getpwnam(_get_global('_portage_uname'))[2]
+ if secpass < 1 and portage_gid in os.getgroups():
+ secpass = 1
+ except KeyError:
++ portage_uid = None
++
++ if portage_uid is None or portage_gid is None:
+ portage_uid = 0
+ portage_gid = 0
++ # PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
++ writemsg(colorize("BAD",
++ _("portage: '%s' user or '%s' group missing." % (_get_global('_portage_uname'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
++ writemsg(colorize("BAD",
++ _(" In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
+ writemsg(colorize("BAD",
- _("portage: 'portage' user or group missing.")) + "\n", noiselevel=-1)
- writemsg(_(
- " For the defaults, line 1 goes into passwd, "
- "and 2 into group.\n"), noiselevel=-1)
- writemsg(colorize("GOOD",
- " portage:x:250:250:portage:/var/tmp/portage:/bin/false") \
- + "\n", noiselevel=-1)
- writemsg(colorize("GOOD", " portage::250:portage") + "\n",
- noiselevel=-1)
++ _(" since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
++ writemsg(colorize("BAD",
++ _(" Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
++ # END PREFIX LOCAL
+ portage_group_warning()
+
+ _initialized_globals.add('portage_gid')
+ _initialized_globals.add('portage_uid')
+ _initialized_globals.add('secpass')
+
+ if k == 'portage_gid':
+ return portage_gid
+ elif k == 'portage_uid':
+ return portage_uid
+ elif k == 'secpass':
+ return secpass
+ else:
+ raise AssertionError('unknown name: %s' % k)
+
+ elif k == 'userpriv_groups':
+ v = [portage_gid]
+ if secpass >= 2:
+ # Get a list of group IDs for the portage user. Do not use
+ # grp.getgrall() since it is known to trigger spurious
+ # SIGPIPE problems with nss_ldap.
+ mystatus, myoutput = \
+ portage.subprocess_getstatusoutput("id -G %s" % _portage_uname)
+ if mystatus == os.EX_OK:
+ for x in myoutput.split():
+ try:
+ v.append(int(x))
+ except ValueError:
+ pass
+ v = sorted(set(v))
+
+ elif k == '_portage_grpname':
+ env = getattr(portage, 'settings', os.environ)
- v = env.get('PORTAGE_GRPNAME', 'portage')
++ # PREFIX LOCAL: use variable instead of hardwired 'portage'
++ v = env.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
++ # END PREFIX LOCAL
+ elif k == '_portage_uname':
- env = getattr(portage, 'settings', os.environ)
++ # PREFIX LOCAL: use variable instead of hardwired 'portage'
++ env = getattr(portage, 'settings', PORTAGE_USERNAME)
++ # END PREFIX LOCAL
+ v = env.get('PORTAGE_USERNAME', 'portage')
else:
- portage_gid = None
- try:
- portage_uid = pwd.getpwnam(_portage_uname)[2]
- except KeyError:
- portage_uid = None
-
- if portage_uid is None or portage_gid is None:
- portage_uid=0
- portage_gid=0
- userpriv_groups = [portage_gid]
- writemsg(colorize("BAD",
- "portage: "+_portage_uname+" user or "+_portage_grpname+" group missing.") + "\n", noiselevel=-1)
- writemsg(colorize("BAD",
- " In Prefix Portage this is quite dramatic") + "\n", noiselevel=-1)
- writemsg(colorize("BAD",
- " since it means you have thrown away yourself.") + "\n", noiselevel=-1)
- writemsg(colorize("BAD",
- " Re-add yourself or re-bootstrap Gentoo Prefix.") + "\n", noiselevel=-1)
- # we need to fix this one day to distinguish prefix vs non-prefix
- # _("portage: 'portage' user or group missing.")) + "\n", noiselevel=-1)
- # writemsg(_(
- # " For the defaults, line 1 goes into passwd, "
- # "and 2 into group.\n"), noiselevel=-1)
- # writemsg(colorize("GOOD",
- # " portage:x:250:250:portage:/var/tmp/portage:/bin/false") \
- # + "\n", noiselevel=-1)
- # writemsg(colorize("GOOD", " portage::250:portage") + "\n",
- # noiselevel=-1)
- portage_group_warning()
- else:
- if secpass < 1 and portage_gid in os.getgroups():
- secpass=1
- userpriv_groups = [portage_gid]
- if secpass >= 2:
- class _LazyUserprivGroups(portage.proxy.objectproxy.ObjectProxy):
- def _get_target(self):
- global userpriv_groups
- if userpriv_groups is not self:
- return userpriv_groups
- userpriv_groups = _userpriv_groups
- # Get a list of group IDs for the portage user. Do not use
- # grp.getgrall() since it is known to trigger spurious
- # SIGPIPE problems with nss_ldap.
- mystatus, myoutput = \
- portage.subprocess_getstatusoutput("id -G %s" % _portage_uname)
- if mystatus == os.EX_OK:
- for x in myoutput.split():
- try:
- userpriv_groups.append(int(x))
- except ValueError:
- pass
- userpriv_groups[:] = sorted(set(userpriv_groups))
- return userpriv_groups
-
- _userpriv_groups = userpriv_groups
- userpriv_groups = _LazyUserprivGroups()
+ raise AssertionError('unknown name: %s' % k)
+
+ globals()[k] = v
+ _initialized_globals.add(k)
+ return v
+
+ class _GlobalProxy(portage.proxy.objectproxy.ObjectProxy):
+
+ __slots__ = ('_name',)
+
+ def __init__(self, name):
+ portage.proxy.objectproxy.ObjectProxy.__init__(self)
+ object.__setattr__(self, '_name', name)
+
+ def _get_target(self):
+ return _get_global(object.__getattribute__(self, '_name'))
+
+ for k in ('portage_gid', 'portage_uid', 'secpass', 'userpriv_groups',
+ '_portage_grpname', '_portage_uname'):
+ globals()[k] = _GlobalProxy(k)
+ del k
+
+ def _init(settings):
+ """
+ Use config variables like PORTAGE_GRPNAME and PORTAGE_USERNAME to
+ initialize global variables. This allows settings to come from make.conf
+ instead of requiring them to be set in the calling environment.
+ """
+ if '_portage_grpname' not in _initialized_globals:
+ v = settings.get('PORTAGE_GRPNAME')
+ if v is not None:
+ globals()['_portage_grpname'] = v
+ _initialized_globals.add('_portage_grpname')
+
+ if '_portage_uname' not in _initialized_globals:
+ v = settings.get('PORTAGE_USERNAME')
+ if v is not None:
+ globals()['_portage_uname'] = v
+ _initialized_globals.add('_portage_uname')
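
With the _init() hook above, these lazily-initialized globals can be steered from make.conf instead of the calling environment; a minimal make.conf sketch, with placeholder names:

    PORTAGE_USERNAME="builder"
    PORTAGE_GRPNAME="builders"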
diff --cc pym/portage/dispatch_conf.py
index d35be0a,abd3ac1..bfd6517
--- a/pym/portage/dispatch_conf.py
+++ b/pym/portage/dispatch_conf.py
@@@ -13,7 -13,7 +13,8 @@@ import os, sys, shuti
import portage
from portage.env.loaders import KeyValuePairFileLoader
from portage.localization import _
+ from portage.util import varexpand
+from portage.const import EPREFIX
RCS_BRANCH = '1.1.1'
RCS_LOCK = 'rcs -ko -M -l'
@@@ -39,11 -39,12 +40,12 @@@ def diffstatusoutput_len(cmd)
return (1, 1)
def read_config(mandatory_opts):
- loader = KeyValuePairFileLoader(
- EPREFIX + '/etc/dispatch-conf.conf', None)
- eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX", "/")
++ eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX", EPREFIX)
+ config_path = os.path.join(eprefix, "etc/dispatch-conf.conf")
+ loader = KeyValuePairFileLoader(config_path, None)
opts, errors = loader.load()
if not opts:
- print(_('dispatch-conf: Error reading %s/etc/dispatch-conf.conf; fatal') % (EPREFIX,), file=sys.stderr)
- print(_('dispatch-conf: Error reading /etc/dispatch-conf.conf; fatal'), file=sys.stderr)
++ print(_('dispatch-conf: Error reading %s/etc/dispatch-conf.conf; fatal') % (eprefix,), file=sys.stderr)
sys.exit(1)
# Handle quote removal here, since KeyValuePairFileLoader doesn't do that.
diff --cc pym/portage/package/ebuild/_config/special_env_vars.py
index b729c6a,9ac37fb..d07f68b
--- a/pym/portage/package/ebuild/_config/special_env_vars.py
+++ b/pym/portage/package/ebuild/_config/special_env_vars.py
@@@ -66,9 -66,7 +66,9 @@@ environ_whitelist +=
"REPLACING_VERSIONS", "REPLACED_BY_VERSION",
"ROOT", "ROOTPATH", "T", "TMP", "TMPDIR",
"USE_EXPAND", "USE_ORDER", "WORKDIR",
- "XARGS",
+ "XARGS", "__PORTAGE_TEST_EPREFIX",
+ "BPREFIX", "DEFAULT_PATH", "EXTRA_PATH",
+ "PORTAGE_GROUP", "PORTAGE_USER",
]
# user config variables
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-02 20:31 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-02 20:31 UTC (permalink / raw
To: gentoo-commits
commit: 1b7a43bdfc130960685c676e523a845fcd5c6440
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 2 20:31:21 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 2 20:31:21 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=1b7a43bd
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/_emerge/actions.py | 13 +++++++++----
pym/_emerge/unmerge.py | 2 +-
2 files changed, 10 insertions(+), 5 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-02 19:20 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-02 19:20 UTC (permalink / raw
To: gentoo-commits
commit: 081454efed0d1df6b53830dd0b55b4a7e9ec309c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 2 19:20:05 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 2 19:20:05 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=081454ef
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/emerge-webrsync | 2 +-
bin/repoman | 1 +
man/emerge.1 | 5 +++
man/make.conf.5 | 6 ++-
pym/_emerge/AsynchronousLock.py | 43 +++++++++++++++--------
pym/_emerge/EbuildMetadataPhase.py | 23 +++++++++----
pym/_emerge/PipeReader.py | 35 ++++++++-----------
pym/_emerge/SubProcess.py | 5 ++-
pym/_emerge/depgraph.py | 35 ++++++++++++-------
pym/_emerge/help.py | 2 +-
pym/portage/checksum.py | 29 ++++++++++++++++
pym/portage/dbapi/_MergeProcess.py | 2 +-
pym/portage/elog/mod_syslog.py | 5 +++
pym/portage/locks.py | 5 +++
pym/portage/package/ebuild/config.py | 45 ++++++++++++------------
pym/portage/package/ebuild/doebuild.py | 4 ++
pym/portage/process.py | 7 +++-
pym/portage/tests/util/test_uniqueArray.py | 6 ++--
pym/portage/util/movefile.py | 51 ++++++++++++++++------------
pym/repoman/utilities.py | 4 ++
20 files changed, 206 insertions(+), 109 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-02 19:19 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-02 19:19 UTC (permalink / raw
To: gentoo-commits
commit: 50bc7775ba7b326c5f7dc384f9f01e2c3e0e4466
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 2 19:18:59 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 2 19:18:59 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=50bc7775
Merge commit 'v2.2.0_alpha76' into prefix
cnf/make.conf | 2 +-
pym/_emerge/JobStatusDisplay.py | 9 +-
pym/_emerge/Scheduler.py | 202 +++++++++---------------
pym/_emerge/actions.py | 2 +-
pym/_emerge/depgraph.py | 11 --
pym/_emerge/main.py | 17 ++-
pym/_emerge/resolver/output.py | 28 +---
pym/_emerge/resolver/output_helpers.py | 4 +-
pym/portage/checksum.py | 12 ++-
pym/portage/dbapi/_MergeProcess.py | 58 -------
pym/portage/elog/messages.py | 33 +++-
pym/portage/package/ebuild/config.py | 59 +++++---
pym/portage/package/ebuild/doebuild.py | 55 +++++++-
pym/portage/tests/emerge/test_simple.py | 4 +
pym/portage/tests/resolver/test_merge_order.py | 8 -
15 files changed, 231 insertions(+), 273 deletions(-)
diff --cc pym/portage/package/ebuild/config.py
index aef3c5e,835cd23..b7b23ae
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@@ -22,9 -22,9 +22,9 @@@ from portage import bsd_chflags,
load_mod, os, selinux, _unicode_decode
from portage.const import CACHE_PATH, \
DEPCACHE_PATH, INCREMENTALS, MAKE_CONF_FILE, \
- MODULES_FILE_PATH, PORTAGE_BIN_PATH, PORTAGE_PYM_PATH, \
+ MODULES_FILE_PATH, \
PRIVATE_PATH, PROFILE_PATH, USER_CONFIG_PATH, \
- USER_VIRTUALS_FILE
+ USER_VIRTUALS_FILE
from portage.const import _SANDBOX_COMPAT_LEVEL
from portage.dbapi import dbapi
from portage.dbapi.porttree import portdbapi
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-02 19:18 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-02 19:18 UTC (permalink / raw
To: gentoo-commits
commit: 8546a58d3ce8359538c6847d3b7330322fb2c133
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 2 19:14:03 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 2 19:14:03 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=8546a58d
Merge commit 'v2.2.0_alpha74' into prefix
Most important, fix the _doebuild_path function for Prefix
Conflicts:
bin/dohtml.py
bin/ebuild-helpers/dobin
bin/ebuild-helpers/dodir
bin/ebuild-helpers/dodoc
bin/ebuild-helpers/doexe
bin/ebuild-helpers/dohard
bin/ebuild-helpers/doinfo
bin/ebuild-helpers/doins
bin/ebuild-helpers/dolib
bin/ebuild-helpers/domo
bin/ebuild-helpers/dosbin
bin/ebuild-helpers/dosed
bin/ebuild-helpers/dosym
bin/ebuild-helpers/ecompressdir
bin/ebuild-helpers/fowners
bin/ebuild-helpers/fperms
bin/ebuild-helpers/prepalldocs
bin/ebuild-helpers/prepallinfo
bin/ebuild-helpers/prepallman
bin/ebuild-helpers/prepallstrip
bin/ebuild-helpers/prepinfo
bin/ebuild-helpers/preplib
bin/ebuild-helpers/prepstrip
bin/ebuild.sh
bin/misc-functions.sh
bin/phase-helpers.sh
pym/portage/dbapi/bintree.py
RELEASE-NOTES | 5 +
bin/dohtml.py | 13 ++-
bin/ebuild-helpers/dobin | 4 +
bin/ebuild-helpers/dodir | 6 +-
bin/ebuild-helpers/dodoc | 7 +-
bin/ebuild-helpers/doexe | 6 +-
bin/ebuild-helpers/dohard | 6 +-
bin/ebuild-helpers/doinfo | 6 +-
bin/ebuild-helpers/doins | 10 +-
bin/ebuild-helpers/dolib | 6 +-
bin/ebuild-helpers/doman | 2 +
bin/ebuild-helpers/domo | 7 +-
bin/ebuild-helpers/dosbin | 6 +-
bin/ebuild-helpers/dosed | 6 +-
bin/ebuild-helpers/dosym | 10 ++-
bin/ebuild-helpers/ecompressdir | 19 +++--
bin/ebuild-helpers/fowners | 6 +-
bin/ebuild-helpers/fperms | 5 +-
bin/ebuild-helpers/prepall | 2 +
bin/ebuild-helpers/prepalldocs | 9 +-
bin/ebuild-helpers/prepallinfo | 10 ++-
bin/ebuild-helpers/prepallman | 6 +
bin/ebuild-helpers/prepallstrip | 6 +-
bin/ebuild-helpers/prepinfo | 10 ++-
bin/ebuild-helpers/preplib | 6 +-
bin/ebuild-helpers/prepman | 2 +
bin/ebuild-helpers/prepstrip | 20 +++-
bin/ebuild.sh | 56 ++---------
bin/isolated-functions.sh | 27 +++---
bin/misc-functions.sh | 103 +++++++++++++-------
bin/phase-functions.sh | 13 ++-
bin/phase-helpers.sh | 66 +++++++++++--
bin/save-ebuild-env.sh | 11 ++-
man/emerge.1 | 11 ++-
pym/_emerge/actions.py | 2 +-
pym/_emerge/depgraph.py | 15 ++--
pym/_emerge/main.py | 46 ++++++----
pym/_emerge/resolver/circular_dependency.py | 5 +-
pym/_emerge/resolver/output_helpers.py | 49 ++++++----
pym/_emerge/resolver/slot_collision.py | 2 +-
pym/portage/dbapi/bintree.py | 5 +-
pym/portage/dbapi/vartree.py | 2 +-
.../package/ebuild/_config/special_env_vars.py | 3 +-
pym/portage/package/ebuild/doebuild.py | 79 +++++++++++++--
pym/portage/tests/bin/setup_env.py | 1 +
pym/portage/tests/emerge/test_simple.py | 39 ++++++--
pym/portage/tests/resolver/test_multislot.py | 4 +-
runtests.sh | 2 +-
48 files changed, 508 insertions(+), 234 deletions(-)
diff --cc bin/dohtml.py
index bf9bcc8,122daf3..546e698
--- a/bin/dohtml.py
+++ b/bin/dohtml.py
@@@ -91,8 -91,10 +91,13 @@@ class OptionsClass
if "PF" in os.environ:
self.PF = os.environ["PF"]
- if os.environ.has_key("ED"):
- self.ED = os.environ["ED"]
- if os.environ.get("EAPI", "0") in ("0", "1", "2"):
- self.ED = os.environ.get("D", "")
- else:
++ # PREFIX LOCAL: always retrieve ED
++ #if os.environ.get("EAPI", "0") in ("0", "1", "2"):
++ # self.ED = os.environ.get("D", "")
++ #else:
++ if True:
+ self.ED = os.environ.get("ED", "")
++ # END PREFIX LOCAL
if "_E_DOCDESTTREE_" in os.environ:
self.DOCDESTTREE = os.environ["_E_DOCDESTTREE_"]
diff --cc bin/ebuild-helpers/dobin
index e158a71,af3af0d..8adc65d
--- a/bin/ebuild-helpers/dobin
+++ b/bin/ebuild-helpers/dobin
@@@ -9,6 -9,8 +9,10 @@@ if [[ $# -lt 1 ]] ; the
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [[ ! -d ${ED}${DESTTREE}/bin ]] ; then
install -d "${ED}${DESTTREE}/bin" || { helpers_die "${0##*/}: failed to install ${ED}${DESTTREE}/bin"; exit 2; }
fi
diff --cc bin/ebuild-helpers/dodir
index 7da507a,7db7caf..06dd2fe
--- a/bin/ebuild-helpers/dodir
+++ b/bin/ebuild-helpers/dodir
@@@ -1,9 -1,11 +1,13 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
install -d ${DIROPTIONS} "${@/#/${ED}/}"
ret=$?
[[ $ret -ne 0 ]] && helpers_die "${0##*/} failed"
diff --cc bin/ebuild-helpers/dodoc
index 41fc8f5,37bbc79..d1bcb4f
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -1,5 -1,4 +1,4 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- #!/bin/bash
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
@@@ -10,7 -9,9 +9,11 @@@ if [ $# -lt 1 ] ; the
exit 1
fi
- dir="${ED}/usr/share/doc/${PF}/${_E_DOCDESTTREE_}"
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
+ dir="${ED}usr/share/doc/${PF}/${_E_DOCDESTTREE_}"
if [ ! -d "${dir}" ] ; then
install -d "${dir}"
fi
diff --cc bin/ebuild-helpers/doexe
index 4f40402,a5b9af0..ecf0b9a
--- a/bin/ebuild-helpers/doexe
+++ b/bin/ebuild-helpers/doexe
@@@ -1,14 -1,16 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [[ ! -d ${ED}${_E_EXEDESTTREE_} ]] ; then
install -d "${ED}${_E_EXEDESTTREE_}"
fi
diff --cc bin/ebuild-helpers/dohard
index c7c568c,cf6fb11..b5f8f70
--- a/bin/ebuild-helpers/dohard
+++ b/bin/ebuild-helpers/dohard
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2007 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
if [[ $# -ne 2 ]] ; then
@@@ -7,6 -7,8 +7,10 @@@
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
destdir=${2%/*}
[[ ! -d ${ED}${destdir} ]] && dodir "${destdir}"
diff --cc bin/ebuild-helpers/doinfo
index 3cfbe6f,a922ef1..56d9821
--- a/bin/ebuild-helpers/doinfo
+++ b/bin/ebuild-helpers/doinfo
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
@@@ -9,6 -9,8 +9,10 @@@ if [[ -z $1 ]] ; the
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [[ ! -d ${ED}usr/share/info ]] ; then
install -d "${ED}usr/share/info" || { helpers_die "${0##*/}: failed to install ${ED}usr/share/info"; exit 1; }
fi
diff --cc bin/ebuild-helpers/doins
index c91883b,b9189d5..46796cb
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -32,16 -34,9 +34,16 @@@ if [[ ${INSDESTTREE#${ED}} != "${INSDES
vecho "You should not use \${D} or \${ED} with helpers." 1>&2
vecho " --> ${INSDESTTREE}" 1>&2
vecho "-------------------------------------------------------" 1>&2
- helpers_die "${0##*/} used with \${D}"
+ helpers_die "${0##*/} used with \${D} or \${ED}"
exit 1
fi
+if [[ ${INSDESTTREE#${EPREFIX}} != "${INSDESTTREE}" ]]; then
+ vecho "-------------------------------------------------------" 1>&2
+ vecho "You should not use \${EPREFIX} with helpers." 1>&2
+ vecho " --> ${INSDESTTREE}" 1>&2
+ vecho "-------------------------------------------------------" 1>&2
+ exit 1
+fi
case "$EAPI" in
0|1|2|3|3_pre2)
diff --cc bin/ebuild-helpers/dolib
index 53805ea,9dd11d8..5b1b1be
--- a/bin/ebuild-helpers/dolib
+++ b/bin/ebuild-helpers/dolib
@@@ -1,9 -1,11 +1,13 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
# Setup ABI cruft
LIBDIR_VAR="LIBDIR_${ABI}"
if [[ -n ${ABI} && -n ${!LIBDIR_VAR} ]] ; then
diff --cc bin/ebuild-helpers/domo
index f8a7bb9,0e3656d..33af83e
--- a/bin/ebuild-helpers/domo
+++ b/bin/ebuild-helpers/domo
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
@@@ -9,6 -9,9 +9,11 @@@ if [ ${mynum} -lt 1 ] ; the
helpers_die "${0}: at least one argument needed"
exit 1
fi
+
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [ ! -d "${ED}${DESTTREE}/share/locale" ] ; then
install -d "${ED}${DESTTREE}/share/locale/"
fi
diff --cc bin/ebuild-helpers/dosbin
index d1400f4,d0783ed..e52df71
--- a/bin/ebuild-helpers/dosbin
+++ b/bin/ebuild-helpers/dosbin
@@@ -1,14 -1,16 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [[ ! -d ${ED}${DESTTREE}/sbin ]] ; then
install -d "${ED}${DESTTREE}/sbin" || { helpers_die "${0##*/}: failed to install ${ED}${DESTTREE}/sbin"; exit 2; }
fi
diff --cc bin/ebuild-helpers/dosed
index 40cf39a,00cf5da..5e234b3
--- a/bin/ebuild-helpers/dosed
+++ b/bin/ebuild-helpers/dosed
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
if [[ $# -lt 1 ]] ; then
@@@ -7,6 -7,8 +7,10 @@@
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
ret=0
file_found=0
mysed="s:${ED}::g"
diff --cc bin/ebuild-helpers/dosym
index 16e0df0,8b7b304..e5da333
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -1,14 -1,16 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -ne 2 ]] ; then
helpers_die "${0##*/}: two arguments needed"
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [[ ${2} == */ ]] || \
[[ -d ${ED}${2} && ! -L ${ED}${2} ]] ; then
# implicit basename not allowed by PMS (bug #379899)
@@@ -18,9 -20,7 +22,11 @@@ f
destdir=${2%/*}
[[ ! -d ${ED}${destdir} ]] && dodir "${destdir}"
-ln -snf "$1" "${ED}$2"
++# PREFIX LOCAL: when absolute, prefix with offset
+target="${1}"
+[[ ${target:0:1} == "/" ]] && target="${EPREFIX}${target}"
- ln -snf "${target}" "${ED}/${2}"
++ln -snf "${target}" "${ED}${2}"
++# END PREFIX LOCAL
ret=$?
[[ $ret -ne 0 ]] && helpers_die "${0##*/} failed"
exit $ret
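
The prefix-local change above only rewrites absolute link targets; a small sketch with illustrative values:

    EPREFIX=/home/user/gentoo
    target="/usr/lib/libfoo.so.1"                 # absolute: offset gets prepended
    [[ ${target:0:1} == "/" ]] && target="${EPREFIX}${target}"
    echo "${target}"                              # -> /home/user/gentoo/usr/lib/libfoo.so.1
    target="libfoo.so.1"                          # relative: left untouched
    [[ ${target:0:1} == "/" ]] && target="${EPREFIX}${target}"
    echo "${target}"                              # -> libfoo.so.1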
diff --cc bin/ebuild-helpers/ecompressdir
index 3d5af2d,f9a846a..3219669
--- a/bin/ebuild-helpers/ecompressdir
+++ b/bin/ebuild-helpers/ecompressdir
@@@ -1,14 -1,16 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -z $1 ]] ; then
helpers_die "${0##*/}: at least one argument needed"
exit 1
fi
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
case $1 in
--ignore)
shift
diff --cc bin/ebuild-helpers/fowners
index 0c5295b,3f51b4e..5c1ecac
--- a/bin/ebuild-helpers/fowners
+++ b/bin/ebuild-helpers/fowners
@@@ -1,14 -1,11 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
+if hasq prefix ${USE} && [[ $EUID != 0 ]] ; then
+ ewarn "fowners ignored in Prefix with non-privileged user"
+ exit 0
+fi
+
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
# we can't prefix all arguments because
# chown takes random options
slash="/"
diff --cc bin/ebuild-helpers/fperms
index dec0b42,9a2971a..25f77a9
--- a/bin/ebuild-helpers/fperms
+++ b/bin/ebuild-helpers/fperms
@@@ -1,9 -1,10 +1,12 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
# we can't prefix all arguments because
# chmod takes random options
slash="/"
diff --cc bin/ebuild-helpers/prepall
index a765756,611c4ce..c4e9ffc
--- a/bin/ebuild-helpers/prepall
+++ b/bin/ebuild-helpers/prepall
@@@ -2,10 -2,10 +2,12 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
+[[ -d ${ED} ]] || exit 0
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
+
if has chflags $FEATURES ; then
# Save all the file flags for restoration at the end of prepall.
mtree -c -p "${ED}" -k flags > "${T}/bsdflags.mtree"
diff --cc bin/ebuild-helpers/prepalldocs
index ab4166e,540d025..67a3d8f
--- a/bin/ebuild-helpers/prepalldocs
+++ b/bin/ebuild-helpers/prepalldocs
@@@ -1,15 -1,16 +1,18 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -n $1 ]] ; then
vecho "${0##*/}: invalid usage; takes no arguments" 1>&2
fi
- cd "${D}"
- [[ -d ${EPREFIX}usr/share/doc ]] || exit 0
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
+ [[ -d ${ED}usr/share/doc ]] || exit 0
ecompressdir --ignore /usr/share/doc/${PF}/html
ecompressdir --queue /usr/share/doc
diff --cc bin/ebuild-helpers/prepallinfo
index 249e897,e351f87..2b1b5b4
--- a/bin/ebuild-helpers/prepallinfo
+++ b/bin/ebuild-helpers/prepallinfo
@@@ -1,7 -1,11 +1,13 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- [[ ! -d ${ED}usr/share/info ]] && exit 0
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
++source "${PORTAGE_BIN_PATH:-@PORTAGE_BIN_PATH@}"/isolated-functions.sh
+
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
+ [[ -d ${ED}usr/share/info ]] || exit 0
exec prepinfo
diff --cc bin/ebuild-helpers/prepallman
index 19437dd,be7f194..a7bb30b
--- a/bin/ebuild-helpers/prepallman
+++ b/bin/ebuild-helpers/prepallman
@@@ -7,10 -7,10 +7,16 @@@ source "${PORTAGE_BIN_PATH:-@PORTAGE_BA
# replaced by controllable compression in EAPI 4
has "${EAPI}" 0 1 2 3 || exit 0
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
ret=0
++# PREFIX LOCAL: ED need not exist, whereas D does
+[[ -d ${ED} ]] || exit ${ret}
++# END PREFIX LOCAL
+
find "${ED}" -type d -name man > "${T}"/prepallman.filelist
while read -r mandir ; do
mandir=${mandir#${ED}}
diff --cc bin/ebuild-helpers/prepallstrip
index 18ce4cc,e9f5f8e..b03c053
--- a/bin/ebuild-helpers/prepallstrip
+++ b/bin/ebuild-helpers/prepallstrip
@@@ -1,5 -1,7 +1,9 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
exec prepstrip "${ED}"
diff --cc bin/ebuild-helpers/prepinfo
index c350fba,afe214c..c0ab9c9
--- a/bin/ebuild-helpers/prepinfo
+++ b/bin/ebuild-helpers/prepinfo
@@@ -2,15 -2,17 +2,19 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
if [[ -z $1 ]] ; then
- infodir="${EPREFIX}/usr/share/info"
+ infodir="/usr/share/info"
else
if [[ -d ${ED}$1/share/info ]] ; then
- infodir="${EPREFIX}$1/share/info"
+ infodir="$1/share/info"
else
- infodir="${EPREFIX}$1/info"
+ infodir="$1/info"
fi
fi
diff --cc bin/ebuild-helpers/preplib
index 8b9e04d,8c62921..7090d0c
--- a/bin/ebuild-helpers/preplib
+++ b/bin/ebuild-helpers/preplib
@@@ -1,11 -1,13 +1,15 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
eqawarn "QA Notice: Deprecated call to 'preplib'"
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
LIBDIR_VAR="LIBDIR_${ABI}"
if [ -n "${ABI}" -a -n "${!LIBDIR_VAR}" ]; then
CONF_LIBDIR="${!LIBDIR_VAR}"
diff --cc bin/ebuild-helpers/prepman
index f8c670f,8ea7607..2c10b26
--- a/bin/ebuild-helpers/prepman
+++ b/bin/ebuild-helpers/prepman
@@@ -2,8 -2,10 +2,10 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ case "$EAPI" in 0|1|2) ED=${D} ;; esac
+
if [[ -z $1 ]] ; then
mandir="${ED}usr/share/man"
else
diff --cc bin/ebuild-helpers/prepstrip
index 0305f0a,fac20b2..927078a
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -18,6 -18,8 +18,10 @@@ exp_tf()
exp_tf FEATURES installsources nostrip splitdebug
exp_tf RESTRICT binchecks installsources strip
-case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# PREFIX LOCAL: always support ED
++#case "$EAPI" in 0|1|2) ED=${D} ;; esac
++# END PREFIX LOCAL
+
banner=false
SKIP_STRIP=false
if ${RESTRICT_strip} || ${FEATURES_nostrip} ; then
@@@ -99,7 -101,7 +103,9 @@@ save_elf_debug()
${FEATURES_splitdebug} || return 0
local x=$1
- local y="${ED}usr/lib/debug/${x:${#ED}}.debug"
++ # PREFIX LOCAL: keep offset path in debug location file
+ local y="${ED}usr/lib/debug/${x:${#D}}.debug"
++ # END PREFIX LOCAL
# dont save debug info twice
[[ ${x} == *".debug" ]] && return 0
@@@ -108,7 -110,7 +114,9 @@@
local inode=$(inode_var_name "$x")
if [[ -n ${!inode} ]] ; then
- ln "${ED}usr/lib/debug/${!inode:${#ED}}.debug" "$y"
++ # PREFIX LOCAL: keep offset path in debug location file
+ ln "${ED}usr/lib/debug/${!inode:${#D}}.debug" "$y"
++ # END PREFIX LOCAL
else
eval $inode=\$x
if [[ -e ${T}/prepstrip.split.debug ]] ; then
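
The point of switching from ${x:${#ED}} to ${x:${#D}} above is which part of the image path survives into the .debug location; with illustrative paths:

    D=/var/tmp/portage/cat/pkg-1/image/
    ED=${D}home/user/gentoo/                      # D plus the offset prefix
    x=${ED}usr/lib/libfoo.so.1
    echo "${x:${#ED}}"                            # usr/lib/libfoo.so.1 (offset dropped)
    echo "${x:${#D}}"                             # home/user/gentoo/usr/lib/libfoo.so.1 (offset kept)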
diff --cc bin/ebuild.sh
index 7952515,f39e536..2ec34cd
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -2,23 -2,9 +2,9 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"
-PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}"
+PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"
+PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}"
- ROOTPATH=${ROOTPATH##:}
- ROOTPATH=${ROOTPATH%%:}
- PREROOTPATH=${PREROOTPATH##:}
- PREROOTPATH=${PREROOTPATH%%:}
- #PATH=$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin${ROOTPATH:+:}$ROOTPATH
- # PREFIX: our DEFAULT_PATH is equal to the above when not using an
- # offset prefix. With such prefix, the usr/local bits are excluded, and
- # the prefixed variants of {usr/,}{s,}bin are taken. The additional
- # paths given during configure, always come as last thing since they
- # should never override anything from the prefix itself.
- PATH="$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}${DEFAULT_PATH}${ROOTPATH:+:}$ROOTPATH${EXTRA_PATH:+:}${EXTRA_PATH}"
- export PATH
-
-
# Prevent aliases from causing portage to act inappropriately.
# Make sure it's before everything so we don't mess aliases that follow.
unalias -a
diff --cc bin/isolated-functions.sh
index 12edfbc,d2ea319..d9e4649
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -436,36 -438,25 +440,31 @@@ RC_INDENTATION='
RC_DEFAULT_INDENT=2
RC_DOT_PATTERN=''
- if [[ $EBUILD_PHASE == depend ]] ; then
- # avoid unneeded stty call in set_colors during "depend" phase
- unset_colors
- else
- case "${NOCOLOR:-false}" in
- yes|true)
- unset_colors
- ;;
- no|false)
- set_colors
- ;;
- esac
- fi
+ case "${NOCOLOR:-false}" in
+ yes|true)
+ unset_colors
+ ;;
+ no|false)
+ set_colors
+ ;;
+ esac
-if [[ -z ${USERLAND} ]] ; then
- case $(uname -s) in
- *BSD|DragonFly)
- export USERLAND="BSD"
- ;;
- *)
- export USERLAND="GNU"
- ;;
- esac
-fi
+# In Prefix every platform has USERLAND=GNU, even FreeBSD. Since I
+# don't know how to reliably "figure out" we are in a Prefix instance of
+# portage here, I for now disable this check, and hardcode it to GNU.
+# Somehow it appears strange to me that this code is in this file,
+# non-ebuilds/eclasses should never rely on USERLAND and XARGS, should they?
+#if [[ -z ${USERLAND} ]] ; then
+# case $(uname -s) in
+# *BSD|DragonFly)
+# export USERLAND="BSD"
+# ;;
+# *)
+# export USERLAND="GNU"
+# ;;
+# esac
+#fi
+[[ -z ${USERLAND} ]] && USERLAND="GNU"
if [[ -z ${XARGS} ]] ; then
case ${USERLAND} in
diff --cc bin/misc-functions.sh
index b093382,1c11dc5..58f5bc8
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -14,12 -14,11 +14,16 @@@
MISC_FUNCTIONS_ARGS="$@"
shift $#
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}/ebuild.sh"
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}/ebuild.sh"
install_symlink_html_docs() {
- cd "${D}" || die "cd failed"
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
++ # PREFIX LOCAL: ED need not exist, whereas D does
+ [[ ! -d ${ED} ]] && dodir /
- cd "${ED}" || die "cd shouldn't have failed"
++ # END PREFIX LOCAL
+ cd "${ED}" || die "cd failed"
#symlink the html documentation (if DOC_SYMLINKS_DIR is set in make.conf)
if [ -n "${DOC_SYMLINKS_DIR}" ] ; then
local mydocdir docdir
@@@ -147,8 -147,9 +152,13 @@@ prepcompress()
install_qa_check() {
local f i x
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
- cd "${ED}" || die "cd failed"
++ # PREFIX LOCAL: ED need not exist, whereas D does
+ cd "${D}" || die "cd failed"
++ # END PREFIX LOCAL
export STRIP_MASK
prepall
@@@ -177,40 -181,6 +190,37 @@@
sleep 1
fi
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED, we skip searching for QA
+ # checks there; the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here such that the diff with trunk remains just
+ # offset and not out of order
+ install_qa_check_misc
-
- # Prefix specific checks
- [[ -n ${EPREFIX} ]] && install_qa_check_prefix
+}
+
+install_qa_check_elf() {
if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
local x
@@@ -339,8 -309,6 +349,7 @@@
if [[ "${LDFLAGS}" == *,--hash-style=gnu* ]] && [[ "${PN}" != *-bin ]] ; then
qa_var="QA_DT_HASH_${ARCH/-/_}"
eval "[[ -n \${!qa_var} ]] && QA_DT_HASH=(\"\${${qa_var}[@]}\")"
- # use ED here, for the rest of the checks of scanelf's
- # output, scanelf is silent on non-existing ED
++
f=$(scanelf -qyRF '%k %p' -k .hash "${ED}" | sed -e "s:\.hash ::")
if [[ -n ${f} ]] ; then
echo "${f}" > "${T}"/scanelf-ignored-LDFLAGS.log
@@@ -457,7 -425,7 +466,9 @@@
# Check for shared libraries lacking NEEDED entries
qa_var="QA_DT_NEEDED_${ARCH/-/_}"
eval "[[ -n \${!qa_var} ]] && QA_DT_NEEDED=(\"\${${qa_var}[@]}\")"
- f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | gawk '$2 == "" { print }' | sed -e "s:^[[:space:]]${ED}:/:")
++ # PREFIX LOCAL: keep offset prefix in the recorded files
+ f=$(scanelf -ByF '%n %p' "${ED}"{,usr/}lib*/lib*.so* | gawk '$2 == "" { print }' | sed -e "s:^[[:space:]]${D}:/:")
++ # END PREFIX LOCAL
if [[ -n ${f} ]] ; then
echo "${f}" > "${T}"/scanelf-missing-NEEDED.log
if [[ "${QA_STRICT_DT_NEEDED-unset}" == unset ]] ; then
@@@ -490,10 -458,8 +501,12 @@@
PORTAGE_QUIET=${tmp_quiet}
fi
+}
- local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${ED}:/:")
+install_qa_check_misc() {
- local unsafe_files=$(find "${D}" -type f '(' -perm -2002 -o -perm -4002 ')')
++ # PREFIX LOCAL: keep offset prefix in the reported files
++ local unsafe_files=$(find "${ED}" -type f '(' -perm -2002 -o -perm -4002 ')' | sed -e "s:^${D}:/:")
++ # END PREFIX LOCAL
if [[ -n ${unsafe_files} ]] ; then
eqawarn "QA Notice: Unsafe files detected (set*id and world writable)"
eqawarn "${unsafe_files}"
@@@ -566,9 -534,7 +579,11 @@@
abort="no"
local a s
for a in "${ED}"usr/lib*/*.a ; do
- s=${a%.a}.so
++ # PREFIX LOCAL: support MachO objects
+ [[ ${CHOST} == *-darwin* ]] \
+ && s=${a%.a}.dylib \
+ || s=${a%.a}.so
++ # END PREFIX LOCAL
if [[ ! -e ${s} ]] ; then
s=${s%usr/*}${s##*/usr/}
if [[ -e ${s} ]] ; then
@@@ -581,11 -547,7 +596,12 @@@
[[ ${abort} == "yes" ]] && die "add those ldscripts"
# Make sure people don't store libtool files or static libs in /lib
- # on AIX, "dynamic libs" have extension .a, so don't get false
- # positives
- f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
++ # PREFIX LOCAL: on AIX, "dynamic libs" have extension .a, so don't
++ # get false positives
+ [[ ${CHOST} == *-aix* ]] \
+ && f=$(ls "${ED}"lib*/*.la 2>/dev/null || true) \
+ || f=$(ls "${ED}"lib*/*.{a,la} 2>/dev/null)
++ # END PREFIX LOCAL
if [[ -n ${f} ]] ; then
vecho -ne '\n'
eqawarn "QA Notice: Excessive files found in the / partition"
@@@ -894,372 -856,6 +910,371 @@@ install_qa_check_prefix()
fi
}
+install_qa_check_macho() {
+ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ reevaluate=0
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ touch "${T}"/.install_name_check_failed
+ elif [[ ! -e ${lib} && ! -e ${D}${lib} && ${lib} != "@executable_path/"* && ${lib} != "@loader_path/"* ]] ; then
+ # try to "repair" this if possible, happens because of
+ # gen_usr_ldscript tactics
+ s=${lib%usr/*}${lib##*/usr/}
+ if [[ -e ${D}${s} ]] ; then
+ ewarn "correcting install_name from ${lib} to ${s} in ${obj}"
+ install_name_tool -change \
+ "${lib}" "${s}" "${D}${obj}"
+ reevaluate=1
+ else
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ touch "${T}"/.install_name_check_failed
+ fi
+ fi
+ done
+ if [[ ${reevaluate} == 1 ]]; then
+ # install_name(s) have been changed, refresh data so we
+ # store the correct meta data
+ l=$(scanmacho -qyF '%a;%p;%S;%n' ${D}${obj})
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+ fi
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed, and there may be plenty
+ # of possibilities by introducing one or the other cache!
+ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
+ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
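# Standalone sketch, not part of the patch above: ': <&N' fails when N is
# not an open descriptor, so the first probe that fails marks a free fd
# that can then be opened by number. The file name below is invented.
for fd in {3..1024} none; do ( : <&${fd} ) 2>/dev/null || break; done
[[ ${fd} != none ]] || { echo "no free file descriptor" >&2; exit 1; }
eval "exec ${fd}>/tmp/example.out"	# open the free descriptor for writing
echo "hello" >&${fd}			# write through it
eval "exec ${fd}>&-"			# close it again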
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on aix,
+ # as there is nothing like "soname" on pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX does indicate this by needing either '.' or '..'
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
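# Standalone sketch, not part of the patch above: padding the flag list
# with spaces lets a plain pattern match test whole-word membership, which
# is how EXEC-only executables are told apart from EXEC+SHROBJ shared
# objects. The flag list below is invented.
FLAGS="OBJECT EXEC SHROBJ"
if [[ " ${FLAGS} " == *" SHROBJ "* ]]; then
	echo "shared object"
elif [[ " ${FLAGS} " == *" EXEC "* ]]; then
	echo "plain executable"
else
	echo "neither"
fi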
+ if [[ -n ${undefined_symbols_list} ]]; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ vecho -ne '\a\n'
+			eqawarn "QA Notice: The following files contain insecure RUNPATHs"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
-
install_mask() {
local root="$1"
shift
diff --cc bin/phase-helpers.sh
index 04cc291,04cf35a..ad469ed
--- a/bin/phase-helpers.sh
+++ b/bin/phase-helpers.sh
@@@ -19,6 -19,7 +19,9 @@@ into()
export DESTTREE=""
else
export DESTTREE=$1
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
if [ ! -d "${ED}${DESTTREE}" ]; then
install -d "${ED}${DESTTREE}"
local ret=$?
@@@ -35,6 -36,7 +38,9 @@@ insinto()
export INSDESTTREE=""
else
export INSDESTTREE=$1
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
if [ ! -d "${ED}${INSDESTTREE}" ]; then
install -d "${ED}${INSDESTTREE}"
local ret=$?
@@@ -51,6 -53,7 +57,9 @@@ exeinto()
export _E_EXEDESTTREE_=""
else
export _E_EXEDESTTREE_="$1"
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
if [ ! -d "${ED}${_E_EXEDESTTREE_}" ]; then
install -d "${ED}${_E_EXEDESTTREE_}"
local ret=$?
@@@ -67,6 -70,7 +76,9 @@@ docinto()
export _E_DOCDESTTREE_=""
else
export _E_DOCDESTTREE_="$1"
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
if [ ! -d "${ED}usr/share/doc/${PF}/${_E_DOCDESTTREE_}" ]; then
install -d "${ED}usr/share/doc/${PF}/${_E_DOCDESTTREE_}"
local ret=$?
@@@ -133,6 -137,7 +145,9 @@@ docompress()
keepdir() {
dodir "$@"
local x
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
if [ "$1" == "-R" ] || [ "$1" == "-r" ]; then
shift
find "$@" -type d -printf "${ED}%p/.keep_${CATEGORY}_${PN}-${SLOT}\n" \
@@@ -369,6 -374,8 +384,10 @@@ unpack()
econf() {
local x
- case "$EAPI" in 0|1|2) local EPREFIX= ;; esac
++ # PREFIX LOCAL: always support EPREFIX
++ #case "$EAPI" in 0|1|2) local EPREFIX= ;; esac
++ # END PREFIX LOCAL
+
_hasg() {
local x s=$1
shift
@@@ -463,6 -470,7 +482,9 @@@
einstall() {
# CONF_PREFIX is only set if they didn't pass in libdir above.
local LOCAL_EXTRA_EINSTALL="${EXTRA_EINSTALL}"
- case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # PREFIX LOCAL: always support ED
++ #case "$EAPI" in 0|1|2) local ED=${D} ;; esac
++ # END PREFIX LOCAL
LIBDIR_VAR="LIBDIR_${ABI}"
if [ -n "${ABI}" -a -n "${!LIBDIR_VAR}" ]; then
CONF_LIBDIR="${!LIBDIR_VAR}"
@@@ -581,15 -589,28 +603,30 @@@ _eapi4_src_install()
fi
}
+ # @FUNCTION: has_version
+ # @USAGE: <DEPEND ATOM>
+ # @DESCRIPTION:
# Return true if given package is installed. Otherwise return false.
- # Takes single depend-type atoms.
+ # Callers may override the ROOT variable in order to match packages from an
+ # alternative ROOT.
has_version() {
+ local eroot
+ case "$EAPI" in
- 0|1|2)
- eroot=${ROOT}
- ;;
++		# PREFIX LOCAL: always use the EPREFIX-aware EROOT, even in EAPI 0|1|2
++ #0|1|2)
++ # eroot=${ROOT}
++ # ;;
++ # END PREFIX LOCAL
+ *)
+ eroot=${ROOT%/}${EPREFIX}/
+ ;;
+ esac
if [[ -n $PORTAGE_IPC_DAEMON ]] ; then
- "$PORTAGE_BIN_PATH"/ebuild-ipc has_version "$ROOT" "$1"
+ "$PORTAGE_BIN_PATH"/ebuild-ipc has_version "${eroot}" "$1"
else
PYTHONPATH=${PORTAGE_PYM_PATH}${PYTHONPATH:+:}${PYTHONPATH} \
- "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" has_version "${ROOT}" "$1"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}/portageq" has_version "${eroot}" "$1"
++ "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" has_version "${eroot}" "$1"
fi
local retval=$?
case "${retval}" in
@@@ -602,15 -623,28 +639,30 @@@
esac
}
+ # @FUNCTION: best_version
+ # @USAGE: <DEPEND ATOM>
+ # @DESCRIPTION:
# Returns the best/most-current match.
- # Takes single depend-type atoms.
+ # Callers may override the ROOT variable in order to match packages from an
+ # alternative ROOT.
best_version() {
+ local eroot
+ case "$EAPI" in
- 0|1|2)
- eroot=${ROOT}
- ;;
++		# PREFIX LOCAL: always use the EPREFIX-aware EROOT, even in EAPI 0|1|2
++ #0|1|2)
++ # eroot=${ROOT}
++ # ;;
++ # END PREFIX LOCAL
+ *)
+ eroot=${ROOT%/}${EPREFIX}/
+ ;;
+ esac
if [[ -n $PORTAGE_IPC_DAEMON ]] ; then
- "$PORTAGE_BIN_PATH"/ebuild-ipc best_version "$ROOT" "$1"
+ "$PORTAGE_BIN_PATH"/ebuild-ipc best_version "${eroot}" "$1"
else
PYTHONPATH=${PORTAGE_PYM_PATH}${PYTHONPATH:+:}${PYTHONPATH} \
- "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" best_version "${ROOT}" "$1"
- "${PORTAGE_PYTHON:-/usr/bin/python}" "${PORTAGE_BIN_PATH}/portageq" best_version "${eroot}" "$1"
++ "${PORTAGE_PYTHON:-@PORTAGE_PREFIX_PYTHON@}" "${PORTAGE_BIN_PATH}/portageq" best_version "${eroot}" "$1"
fi
local retval=$?
case "${retval}" in
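# Standalone sketch, not part of the hunks above: both helpers now derive
# the query root from ROOT and EPREFIX instead of passing ROOT through
# unchanged; ${ROOT%/} trims any trailing slash so the result always ends
# in exactly one. The values below are invented.
ROOT=/mnt/target/
EPREFIX=/opt/prefix
eroot=${ROOT%/}${EPREFIX}/
echo "${eroot}"	# -> /mnt/target/opt/prefix/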
diff --cc pym/_emerge/main.py
index e9eb025,a2995e2..97a4b48
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -98,9 -97,24 +98,24 @@@ shortmapping=
"v":"--verbose", "V":"--version"
}
+ COWSAY_MOO = """
+
+ Larry loves Gentoo (%s)
+
+ _______________________
+ < Have you mooed today? >
+ -----------------------
+ \ ^__^
+ \ (oo)\_______
+ (__)\ )\/\
+ ||----w |
+ || ||
+
+ """
+
def chk_updated_info_files(root, infodirs, prev_mtimes, retval):
- if os.path.exists("/usr/bin/install-info"):
+ if os.path.exists(EPREFIX + "/usr/bin/install-info"):
out = portage.output.EOutput()
regen_infodirs=[]
for z in infodirs:
diff --cc pym/portage/dbapi/bintree.py
index 48c075f,31ba364..5f46dad
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -69,7 -68,7 +69,7 @@@ class bindbapi(fakedbapi)
["BUILD_TIME", "CHOST", "DEPEND", "EAPI", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
"RDEPEND", "repository", "RESTRICT", "SLOT", "USE", "DEFINED_PHASES",
- "REQUIRED_USE", "EPREFIX"])
- ])
++ "EPREFIX"])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@@ -291,7 -290,7 +291,7 @@@ class binarytree(object)
["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
"IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
"PROVIDE", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "REQUIRED_USE", "BASE_URI", "EPREFIX"]
- "BASE_URI"]
++ "BASE_URI", "EPREFIX"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("LICENSE", "RDEPEND", "DEPEND",
@@@ -317,9 -316,8 +317,8 @@@
"SLOT" : "0",
"USE" : "",
"DEFINED_PHASES" : "",
- "REQUIRED_USE" : ""
}
- self._pkgindex_inherited_keys = ["CHOST", "repository"]
+ self._pkgindex_inherited_keys = ["CHOST", "repository", "EPREFIX"]
# Populate the header with appropriate defaults.
self._pkgindex_default_header_data = {
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-12-02 18:03 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-12-02 18:03 UTC (permalink / raw
To: gentoo-commits
commit: f5b5a26b7d729a93b784de29e438ad76b3e7c433
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 2 18:02:26 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Dec 2 18:02:26 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f5b5a26b
Merge commit 'v2.2.0_alpha72' into prefix
Conflicts:
bin/lock-helper.py
bin/xpak-helper.py
pym/_emerge/actions.py
pym/portage/__init__.py
bin/ebuild | 4 +-
bin/egencache | 87 ++++--
bin/emaint | 22 +-
bin/glsa-check | 5 +-
bin/lock-helper.py | 4 +-
bin/portageq | 81 +++--
bin/quickpkg | 24 +-
bin/regenworld | 5 +-
bin/repoman | 11 +-
bin/xpak-helper.py | 4 +-
man/emerge.1 | 4 +
man/make.conf.5 | 9 +-
man/portage.5 | 12 +-
man/repoman.1 | 5 +-
pym/_emerge/BlockerDB.py | 5 +-
pym/_emerge/EbuildFetchonly.py | 4 +-
pym/_emerge/EbuildMetadataPhase.py | 6 +-
pym/_emerge/FakeVartree.py | 11 +-
pym/_emerge/Package.py | 8 +-
pym/_emerge/RootConfig.py | 2 +-
pym/_emerge/Scheduler.py | 17 +-
pym/_emerge/actions.py | 304 +++++++----------
pym/_emerge/depgraph.py | 33 +-
pym/_emerge/main.py | 49 ++-
pym/_emerge/unmerge.py | 4 +-
pym/portage/__init__.py | 42 ++-
pym/portage/_global_updates.py | 2 +-
pym/portage/_legacy_globals.py | 12 +-
pym/portage/_sets/__init__.py | 38 ++-
pym/portage/cache/metadata.py | 1 -
pym/portage/cache/template.py | 5 +
pym/portage/checksum.py | 41 ++-
pym/portage/dbapi/_expand_new_virt.py | 3 +-
pym/portage/dbapi/bintree.py | 35 ++-
pym/portage/dbapi/porttree.py | 43 ++-
pym/portage/dbapi/vartree.py | 110 +++++--
pym/portage/dep/dep_check.py | 8 +-
pym/portage/elog/mod_syslog.py | 2 +-
pym/portage/news.py | 74 ++++-
.../package/ebuild/_config/KeywordsManager.py | 6 +-
.../package/ebuild/_config/LocationsManager.py | 83 ++++-
pym/portage/package/ebuild/_config/MaskManager.py | 37 ++-
pym/portage/package/ebuild/_config/UseManager.py | 18 +-
pym/portage/package/ebuild/config.py | 96 +++---
pym/portage/package/ebuild/digestcheck.py | 2 +-
pym/portage/package/ebuild/doebuild.py | 25 +-
pym/portage/repository/config.py | 358 +++++++++++++-------
pym/portage/tests/ebuild/test_config.py | 8 +-
pym/portage/tests/ebuild/test_doebuild_spawn.py | 2 +-
pym/portage/tests/emerge/test_global_updates.py | 39 +++
pym/portage/tests/emerge/test_simple.py | 39 ++-
pym/portage/tests/repoman/test_simple.py | 2 +-
pym/portage/tests/resolver/ResolverPlayground.py | 27 +-
pym/portage/update.py | 5 +
pym/portage/util/__init__.py | 11 +-
pym/portage/util/env_update.py | 13 +-
pym/repoman/utilities.py | 4 +
runtests.sh | 20 ++
58 files changed, 1253 insertions(+), 678 deletions(-)
diff --cc bin/lock-helper.py
index 886b52d,065ddcb..23db096
--- a/bin/lock-helper.py
+++ b/bin/lock-helper.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010 Gentoo Foundation
+ # Copyright 2010-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import os
diff --cc bin/xpak-helper.py
index 4d096cc,ef74920..1d57069
--- a/bin/xpak-helper.py
+++ b/bin/xpak-helper.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2009 Gentoo Foundation
+ # Copyright 2009-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import optparse
diff --cc pym/_emerge/actions.py
index 46d68b8,af3780e..7e032d1
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -25,7 -30,7 +30,7 @@@ from portage import o
from portage import subprocess_getstatusoutput
from portage import _unicode_decode
from portage.cache.cache_errors import CacheError
- from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH, EPREFIX
-from portage.const import GLOBAL_CONFIG_PATH
++from portage.const import GLOBAL_CONFIG_PATH, EPREFIX
from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
diff --cc pym/portage/__init__.py
index 3de273f,27353a1..c5b7f76
--- a/pym/portage/__init__.py
+++ b/pym/portage/__init__.py
@@@ -491,15 -495,25 +496,27 @@@ def create_trees(config_root=None, targ
portdbapi.portdbapi_instances.remove(portdb)
del trees[myroot]["porttree"], myroot, portdb
- eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX")
+ if trees is None:
+ trees = _trees_dict()
+ elif not isinstance(trees, _trees_dict):
+ # caller passed a normal dict or something,
+ # but we need a _trees_dict instance
+ trees = _trees_dict(trees)
+
+ if env is None:
+ env = os.environ
+ eprefix = env.get("__PORTAGE_TEST_EPREFIX")
+ if not eprefix:
+ eprefix = EPREFIX
settings = config(config_root=config_root, target_root=target_root,
- config_incrementals=portage.const.INCREMENTALS, _eprefix=eprefix)
+ env=env, _eprefix=eprefix)
settings.lock()
- myroots = [(settings["ROOT"], settings)]
- if settings["ROOT"] != "/":
+ trees._target_eroot = settings['EROOT']
+ myroots = [(settings['EROOT'], settings)]
+ if settings["ROOT"] == "/":
+ trees._running_eroot = trees._target_eroot
+ else:
# When ROOT != "/" we only want overrides from the calling
# environment to apply to the config that's associated
diff --cc pym/portage/dbapi/vartree.py
index f333494,73772b0..d189775
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -29,11 -29,9 +29,12 @@@ portage.proxy.lazyimport.lazyimport(glo
'portage.util.listdir:dircache,listdir',
'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
+ 'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
+ 'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
+ 'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,pkgcmp,' + \
'_pkgsplit@pkgsplit',
+ 'tarfile',
)
from portage.const import CACHE_PATH, CONFIG_MEMORY_FILE, \
diff --cc pym/portage/package/ebuild/_config/MaskManager.py
index 89c06fd,bce1152..7d44e79
--- a/pym/portage/package/ebuild/_config/MaskManager.py
+++ b/pym/portage/package/ebuild/_config/MaskManager.py
@@@ -102,22 -116,16 +116,27 @@@ class MaskManager(object)
#to allow profiles to override masks from their parent profiles.
profile_pkgmasklines = []
profile_pkgunmasklines = []
- for x in profiles:
+ # PREFIX LOCAL: Prefix has unmasks for stuff in profiles/package.mask
+ # If we don't consider the repomasks here, those unmasks are
+ # lost, causing lots of issues (e.g. Portage being masked)
+ # for minimal/concentrated code change, empty repo_pkgmasklines here
+ # such that they don't count double
+ repo_pkgmasklines = []
+ repo_pkgunmasklines = []
+ all_profiles = []
+ for repo in repositories.repos_with_profiles():
+ all_profiles.append(os.path.join(repo.location, "profiles"))
+ all_profiles.extend(profiles)
+ for x in all_profiles:
profile_pkgmasklines.append(grabfile_package(
- os.path.join(x, "package.mask"), recursive=1, remember_source_file=True, verify_eapi=True))
- profile_pkgunmasklines.append(grabfile_package(
- os.path.join(x, "package.unmask"), recursive=1, remember_source_file=True, verify_eapi=True))
+ os.path.join(x.location, "package.mask"),
+ recursive=x.portage1_directories,
+ remember_source_file=True, verify_eapi=True))
+ if x.portage1_directories:
+ profile_pkgunmasklines.append(grabfile_package(
+ os.path.join(x.location, "package.unmask"),
+ recursive=x.portage1_directories,
+ remember_source_file=True, verify_eapi=True))
profile_pkgmasklines = stack_lists(profile_pkgmasklines, incremental=1, \
remember_source_file=True, warn_for_unmatched_removal=True,
strict_warn_for_unmatched_removal=strict_umatched_removal)
diff --cc runtests.sh
index b8be75c,b7313b7..d2299f6
--- a/runtests.sh
+++ b/runtests.sh
@@@ -27,11 -27,31 +27,31 @@@ interrupted()
trap interrupted SIGINT
+ unused_args=()
+
+ while [[ -n $1 ]] ; do
+ case "$1" in
+ --python-versions=*)
+ PYTHON_VERSIONS=${1#--python-versions=}
+ ;;
+ --python-versions)
+ shift
+ PYTHON_VERSIONS=$1
+ ;;
+ *)
+ unused_args[${#unused_args[@]}]=$1
+ ;;
+ esac
+ shift
+ done
+
+ set -- "${unused_args[@]}"
+
exit_status="0"
for version in ${PYTHON_VERSIONS}; do
- if [[ -x /usr/bin/python${version} ]]; then
+ if [[ -x @PREFIX_PORTAGE_PYTHON@${version} ]]; then
echo -e "${GOOD}Testing with Python ${version}...${NORMAL}"
- if ! /usr/bin/python${version} -Wd pym/portage/tests/runTests "$@" ; then
+ if ! @PREFIX_PORTAGE_PYTHON@${version} -Wd pym/portage/tests/runTests "$@" ; then
echo -e "${BAD}Testing with Python ${version} failed${NORMAL}"
exit_status="1"
fi
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-21 17:34 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-21 17:34 UTC (permalink / raw
To: gentoo-commits
commit: fce6d5010a4bd8c011aafae7ca6be27234b4dcf1
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Oct 21 17:34:17 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Oct 21 17:34:17 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=fce6d501
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 14 +-
bin/repoman | 11 +-
man/portage.5 | 4 +-
pym/portage/tests/repoman/test_simple.py | 25 +++
pym/repoman/utilities.py | 246 +++++++++++++++++++++++------
5 files changed, 237 insertions(+), 63 deletions(-)
diff --cc bin/misc-functions.sh
index f31c9e5,55d9663..b093382
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -168,49 -166,15 +168,49 @@@ install_qa_check()
fi
# Now we look for all world writable files.
- local i=
- for i in $(find "${D}/" -type f -perm -2); do
- vecho "QA Security Notice:"
- vecho "- ${i:${#D}:${#i}} will be a world writable file."
+ local unsafe_files=$(find "${D}" -type f -perm -2 | sed -e "s:^${D}:- :")
+ if [[ -n ${unsafe_files} ]] ; then
+ vecho "QA Security Notice: world writable file(s):"
+ vecho "${unsafe_files}"
vecho "- This may or may not be a security problem, most of the time it is one."
vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
- done
- [[ -n ${i} ]] && sleep 1
+ sleep 1
+ fi
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED, we skip searching for QA
+ # checks there, the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here such that the diff with trunk remains just
+ # offsetted and not out of order
+ install_qa_check_misc
+
+ # Prefix specific checks
+ [[ -n ${EPREFIX} ]] && install_qa_check_prefix
+}
+
+install_qa_check_elf() {
if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
local x
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-21 17:34 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-21 17:34 UTC (permalink / raw
To: gentoo-commits
commit: 7a2867753bca22f6655ab4760950c0a8be6c8ef7
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 20 20:40:06 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Oct 20 20:40:06 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=7a286775
Merge branch 'master' of git+ssh://git.overlays.gentoo.org/proj/portage
pym/portage/tests/repoman/test_simple.py | 18 ++++++++++++++----
1 files changed, 14 insertions(+), 4 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-20 20:28 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-20 20:28 UTC (permalink / raw
To: gentoo-commits
commit: 6ef3b2cc4560387f1d4d6c1a304753e232b5c761
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 20 20:28:26 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Oct 20 20:28:26 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=6ef3b2cc
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/portage/tests/repoman/test_simple.py | 18 ++++++++++++++----
pym/repoman/utilities.py | 8 +++++---
2 files changed, 19 insertions(+), 7 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-20 17:08 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-20 17:08 UTC (permalink / raw
To: gentoo-commits
commit: de7032613586601ccfd207cc915c8b5faf10b23a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 20 17:08:12 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Oct 20 17:08:12 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=de703261
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 5 +----
pym/portage/repository/config.py | 6 +++++-
2 files changed, 6 insertions(+), 5 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-20 16:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-20 16:38 UTC (permalink / raw
To: gentoo-commits
commit: 65878954fc777d64380db2798d1b98744fdc2036
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 20 16:37:27 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Oct 20 16:37:27 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=65878954
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 4 +-
bin/repoman | 68 ++++++++++++++------
pym/_emerge/EbuildMetadataPhase.py | 6 +-
pym/_emerge/depgraph.py | 15 ++--
pym/_emerge/resolver/output.py | 23 +++++--
pym/portage/cache/metadata_overlay.py | 105 ------------------------------
pym/portage/cache/template.py | 34 ++++++++--
pym/portage/cache/volatile.py | 3 +-
pym/portage/dbapi/bintree.py | 28 ++++++++
pym/portage/dbapi/porttree.py | 30 ++++----
pym/portage/eclass_cache.py | 6 +-
pym/portage/package/ebuild/config.py | 20 +++++-
pym/portage/package/ebuild/doebuild.py | 7 +-
pym/portage/tests/repoman/test_simple.py | 2 +-
14 files changed, 175 insertions(+), 176 deletions(-)
diff --cc bin/misc-functions.sh
index 34fe18f,0d2d206..f31c9e5
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -174,43 -172,9 +174,43 @@@ install_qa_check()
vecho "- ${i:${#D}:${#i}} will be a world writable file."
vecho "- This may or may not be a security problem, most of the time it is one."
vecho "- Please double check that $PF really needs a world writeable bit and file bugs accordingly."
- sleep 1
done
+ [[ -n ${i} ]] && sleep 1
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED, we skip searching for QA
+ # checks there, the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here such that the diff with trunk remains just
+ # offsetted and not out of order
+ install_qa_check_misc
+
+ # Prefix specific checks
+ [[ -n ${EPREFIX} ]] && install_qa_check_prefix
+}
+
+install_qa_check_elf() {
if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
local x
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-17 18:36 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-17 18:36 UTC (permalink / raw
To: gentoo-commits
commit: d9f449ca690417b6268ce1e3ef16e5c3f0e01a6f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 17 18:33:56 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Oct 17 18:33:56 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d9f449ca
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 116 +++++++++----------
man/emerge.1 | 7 +-
man/repoman.1 | 7 +-
pym/_emerge/EbuildBuild.py | 3 +-
pym/_emerge/EbuildMetadataPhase.py | 6 +-
pym/_emerge/Scheduler.py | 7 +-
pym/_emerge/main.py | 28 ++++-
pym/portage/checksum.py | 4 +-
pym/portage/dbapi/porttree.py | 23 ++--
pym/portage/eclass_cache.py | 13 ++-
pym/portage/tests/emerge/test_simple.py | 42 ++++++-
pym/portage/tests/repoman/test_simple.py | 96 ++++++++++++++--
pym/repoman/utilities.py | 183 +++++++++++++++++++++++++++++-
13 files changed, 428 insertions(+), 107 deletions(-)
diff --cc bin/repoman
index 1f4175b,ad1e688..56b2b7a
--- a/bin/repoman
+++ b/bin/repoman
@@@ -75,10 -73,9 +75,10 @@@ from portage.process import find_binary
from portage.output import bold, create_color_func, \
green, nocolor, red
from portage.output import ConsoleStyleFile, StyleWriter
- from portage.util import cmp_sort_key, writemsg_level, writemsg_stdout
+ from portage.util import cmp_sort_key, writemsg_level
from portage.package.ebuild.digestgen import digestgen
from portage.eapi import eapi_has_iuse_defaults, eapi_has_required_use
+from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
basestring = str
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-16 13:59 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-16 13:59 UTC (permalink / raw
To: gentoo-commits
commit: ebcf640da49dc319ec3b642a1363726336cc36b6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 16 13:59:20 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Oct 16 13:59:20 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=ebcf640d
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild
bin/fixpackages
bin/repoman
bin/ebuild | 3 +--
bin/egencache | 4 +---
bin/fixpackages | 17 +++++++++++++----
bin/repoman | 13 ++++++-------
pym/_emerge/EbuildMetadataPhase.py | 16 ++++++++++------
pym/_emerge/MetadataRegen.py | 1 -
pym/portage/_global_updates.py | 8 ++++----
pym/portage/dbapi/porttree.py | 7 ++-----
pym/portage/eclass_cache.py | 2 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/config.py | 6 ++++--
pym/portage/repository/config.py | 4 ++--
pym/portage/tests/emerge/test_simple.py | 8 +++++++-
13 files changed, 52 insertions(+), 39 deletions(-)
diff --cc bin/ebuild
index c64e885,771ccb5..a27b049
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -56,15 -56,12 +56,14 @@@ parser.add_option("--skip-manifest", he
opts, pargs = parser.parse_args(args=sys.argv[1:])
- os.environ["PORTAGE_CALLER"]="ebuild"
-try:
- import portage
-except ImportError:
- from os import path as osp
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
+# for an explanation on this logic, see pym/_emerge/__init__.py
+from os import environ as ose
+from os import path as osp
+if ose.__contains__("PORTAGE_PYTHONPATH"):
+ sys.path.insert(0, ose["PORTAGE_PYTHONPATH"])
+else:
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp. realpath(__file__))), "pym"))
++ sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
+import portage
portage.dep._internal_warnings = True
from portage import os
diff --cc bin/fixpackages
index a60852a,dc43ed2..57b6db3
--- a/bin/fixpackages
+++ b/bin/fixpackages
@@@ -1,11 -1,18 +1,20 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
- import os,sys
- os.environ["PORTAGE_CALLER"]="fixpackages"
+ import os
+ import sys
+
-try:
- import portage
-except ImportError:
- from os import path as osp
++# for an explanation on this logic, see pym/_emerge/__init__.py
++from os import environ as ose
++from os import path as osp
++if ose.__contains__("PORTAGE_PYTHONPATH"):
++ sys.path.insert(0, ose["PORTAGE_PYTHONPATH"])
++else:
+ sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
++import portage
from portage import os
from portage.output import EOutput
diff --cc bin/repoman
index 6d1164a,b1a2ac3..1f4175b
--- a/bin/repoman
+++ b/bin/repoman
@@@ -77,10 -75,7 +77,8 @@@ from portage.output import bold, create
from portage.output import ConsoleStyleFile, StyleWriter
from portage.util import cmp_sort_key, writemsg_level, writemsg_stdout
from portage.package.ebuild.digestgen import digestgen
- from portage.eapi import eapi_has_slot_deps, \
- eapi_has_use_deps, eapi_has_strong_blocks, eapi_has_iuse_defaults, \
- eapi_has_required_use, eapi_has_use_dep_defaults
+ from portage.eapi import eapi_has_iuse_defaults, eapi_has_required_use
+from portage.const import EPREFIX
if sys.hexversion >= 0x3000000:
basestring = str
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-15 18:27 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-15 18:27 UTC (permalink / raw
To: gentoo-commits
commit: 80ec9bfcb756e9bb2ac1aa28b0f75aef0b97d7fb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 15 18:26:21 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Oct 15 18:26:21 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=80ec9bfc
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
RELEASE-NOTES | 6 +
bin/ebuild | 6 +-
bin/egencache | 16 +-
bin/misc-functions.sh | 4 +
bin/portageq | 43 ++-
bin/quickpkg | 53 +++-
bin/repoman | 351 +++++++++++---------
man/ebuild.5 | 6 +
man/emerge.1 | 10 +-
man/repoman.1 | 6 +
pym/_emerge/EbuildMetadataPhase.py | 13 +-
pym/_emerge/MetadataRegen.py | 14 +-
pym/_emerge/actions.py | 48 +--
pym/portage/cache/ebuild_xattr.py | 3 +-
pym/portage/cache/flat_hash.py | 9 +-
pym/portage/cache/metadata.py | 7 +-
pym/portage/cache/template.py | 64 +++-
pym/portage/cache/util.py | 170 ----------
pym/portage/dbapi/porttree.py | 116 ++++---
pym/portage/eclass_cache.py | 71 +++-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/config.py | 1 -
pym/portage/package/ebuild/digestcheck.py | 2 +
pym/portage/package/ebuild/doebuild.py | 38 ++-
pym/portage/repository/config.py | 52 ++--
pym/repoman/utilities.py | 6 +-
26 files changed, 596 insertions(+), 521 deletions(-)
diff --cc bin/misc-functions.sh
index 97e190b,b3e62c5..34fe18f
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-10-13 6:52 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-10-13 6:52 UTC (permalink / raw
To: gentoo-commits
commit: aa61149212569507c6c98aaf8be71ac86747bb08
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 13 06:50:59 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Oct 13 06:50:59 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=aa611492
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/prepstrip
bin/archive-conf | 2 +-
bin/dispatch-conf | 35 +-
bin/ebuild | 12 +-
bin/ebuild-helpers/prepstrip | 157 +++--
bin/etc-update | 11 +-
bin/portageq | 6 +-
bin/repoman | 120 ++--
cnf/dispatch-conf.conf | 10 +-
cnf/etc-update.conf | 12 +-
cnf/make.globals | 2 +-
man/emerge.1 | 14 +
man/make.conf.5 | 9 -
man/portage.5 | 10 +
man/xpak.5 | 253 +++----
pym/_emerge/EbuildBuild.py | 13 +-
pym/_emerge/EbuildFetcher.py | 21 +-
pym/_emerge/FakeVartree.py | 12 +-
pym/_emerge/Scheduler.py | 3 +-
pym/_emerge/create_depgraph_params.py | 4 +
pym/_emerge/depgraph.py | 65 ++-
pym/_emerge/main.py | 6 +
pym/_emerge/resolver/output.py | 2 +
pym/portage/_global_updates.py | 3 +-
pym/portage/cache/template.py | 1 +
pym/portage/checksum.py | 75 ++-
pym/portage/const.py | 34 +-
pym/portage/dbapi/__init__.py | 106 ++--
pym/portage/dbapi/porttree.py | 338 ++++-----
pym/portage/dbapi/vartree.py | 14 +-
pym/portage/dep/__init__.py | 45 ++-
pym/portage/elog/__init__.py | 20 +-
pym/portage/elog/messages.py | 1 +
pym/portage/elog/mod_syslog.py | 9 +-
pym/portage/mail.py | 3 +-
pym/portage/manifest.py | 61 +-
pym/portage/output.py | 12 +-
.../package/ebuild/_config/KeywordsManager.py | 16 +-
pym/portage/package/ebuild/_config/MaskManager.py | 45 +-
pym/portage/package/ebuild/digestcheck.py | 3 +-
pym/portage/package/ebuild/digestgen.py | 6 +
pym/portage/package/ebuild/doebuild.py | 31 +-
pym/portage/package/ebuild/fetch.py | 26 +-
pym/portage/package/ebuild/getmaskingstatus.py | 4 +-
pym/portage/repository/config.py | 106 +++-
pym/portage/tests/__init__.py | 118 ++--
pym/portage/tests/bin/setup_env.py | 3 +
pym/portage/tests/bin/test_dodir.py | 2 +
pym/portage/tests/dep/test_best_match_to_list.py | 28 +-
pym/portage/tests/ebuild/test_config.py | 1 +
pym/portage/tests/emerge/test_simple.py | 9 +
pym/portage/tests/resolver/ResolverPlayground.py | 8 +-
pym/portage/tests/resolver/test_keywords.py | 356 +++++++++
.../tests/resolver/test_virtual_transition.py | 51 ++
pym/portage/tests/runTests | 4 +-
pym/portage/tests/util/test_whirlpool.py | 16 +
pym/portage/util/whirlpool.py | 794 ++++++++++++++++++++
pym/repoman/herdbase.py | 4 -
runtests.sh | 3 +
58 files changed, 2398 insertions(+), 737 deletions(-)
diff --cc bin/archive-conf
index ff46444,7978668..149fdc5
--- a/bin/archive-conf
+++ b/bin/archive-conf
@@@ -12,9 -12,15 +12,9 @@@
from __future__ import print_function
import sys
-try:
- import portage
-except ImportError:
- from os import path as osp
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
from portage import os
- import dispatch_conf
+ from portage import dispatch_conf
FIND_EXTANT_CONTENTS = "find %s -name CONTENTS"
diff --cc bin/ebuild
index 77d576a,d4b8b71..4eb8486
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -54,18 -56,13 +56,15 @@@ parser.add_option("--skip-manifest", he
opts, pargs = parser.parse_args(args=sys.argv[1:])
- if len(pargs) < 2:
- parser.error("missing required args")
-
os.environ["PORTAGE_CALLER"]="ebuild"
-try:
- import portage
-except ImportError:
- from os import path as osp
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
+# for an explanation on this logic, see pym/_emerge/__init__.py
+from os import environ as ose
+from os import path as osp
+if ose.__contains__("PORTAGE_PYTHONPATH"):
+ sys.path.insert(0, ose["PORTAGE_PYTHONPATH"])
+else:
+ sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp. realpath(__file__))), "pym"))
+import portage
portage.dep._internal_warnings = True
from portage import os
diff --cc bin/ebuild-helpers/prepstrip
index 9c5d9da,8c2ca48..077656f
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -2,33 -2,62 +2,62 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ # avoid multiple calls to `has`. this creates things like:
+ # FEATURES_foo=false
+ # if "foo" is not in $FEATURES
+ tf() { "$@" && echo true || echo false ; }
+ exp_tf() {
+ local flag var=$1
+ shift
+ for flag in "$@" ; do
+ eval ${var}_${flag}=$(tf has ${flag} ${!var})
+ done
+ }
+ exp_tf FEATURES installsources nostrip splitdebug
+ exp_tf RESTRICT binchecks installsources strip
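# Standalone sketch, not part of the patch above: tf/exp_tf cache `has`
# lookups as pre-computed booleans, so later code can test ${VAR_flag}
# instead of calling has again. has() below is a simplified stand-in and
# the variable and flag names are invented.
has() { local x=$1; shift; [[ " $* " == *" ${x} "* ]]; }
tf() { "$@" && echo true || echo false ; }
exp_tf() {
	local flag var=$1
	shift
	for flag in "$@" ; do
		eval ${var}_${flag}=$(tf has ${flag} ${!var})
	done
}
MYFLAGS="alpha gamma"
exp_tf MYFLAGS alpha beta
echo "${MYFLAGS_alpha} ${MYFLAGS_beta}"	# -> true false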
+
banner=false
SKIP_STRIP=false
- if has nostrip ${FEATURES} || \
- has strip ${RESTRICT}
- then
+ if ${RESTRICT_strip} || ${FEATURES_nostrip} ; then
SKIP_STRIP=true
banner=true
- has installsources ${FEATURES} || exit 0
+ ${FEATURES_installsources} || exit 0
fi
- STRIP=${STRIP:-${CHOST}-strip}
- type -P -- ${STRIP} > /dev/null || STRIP=strip
- OBJCOPY=${OBJCOPY:-${CHOST}-objcopy}
- type -P -- ${OBJCOPY} > /dev/null || OBJCOPY=objcopy
+ # look up the tools we might be using
+ for t in STRIP:strip OBJCOPY:objcopy READELF:readelf ; do
+ v=${t%:*} # STRIP
+ t=${t#*:} # strip
+ eval ${v}=\"${!v:-${CHOST}-${t}}\"
+ type -P -- ${!v} >/dev/null || eval ${v}=${t}
+ done
+
+ # Figure out what tool set we're using to strip stuff
+ unset SAFE_STRIP_FLAGS DEF_STRIP_FLAGS SPLIT_STRIP_FLAGS
+ case $(${STRIP} --version 2>/dev/null) in
+ *elfutils*) # dev-libs/elfutils
+ # elfutils default behavior is always safe, so don't need to specify
+ # any flags at all
+ SAFE_STRIP_FLAGS=""
+ DEF_STRIP_FLAGS="--remove-comment"
+ SPLIT_STRIP_FLAGS="-f"
+ ;;
+ *GNU*) # sys-devel/binutils
+ # We'll leave out -R .note for now until we can check out the relevance
+ # of the section when it has the ALLOC flag set on it ...
+ SAFE_STRIP_FLAGS="--strip-unneeded"
+ DEF_STRIP_FLAGS="-R .comment"
+ SPLIT_STRIP_FLAGS=
+ ;;
+ esac
+ : ${PORTAGE_STRIP_FLAGS=${SAFE_STRIP_FLAGS} ${DEF_STRIP_FLAGS}}
- # We'll leave out -R .note for now until we can check out the relevance
- # of the section when it has the ALLOC flag set on it ...
- export SAFE_STRIP_FLAGS="--strip-unneeded"
- export PORTAGE_STRIP_FLAGS=${PORTAGE_STRIP_FLAGS-${SAFE_STRIP_FLAGS} -R .comment}
-prepstrip_sources_dir=/usr/src/debug/${CATEGORY}/${PF}
+prepstrip_sources_dir="${EPREFIX}"/usr/src/debug/${CATEGORY}/${PF}
- if has installsources ${FEATURES} && ! type -P debugedit >/dev/null ; then
- ewarn "FEATURES=installsources is enabled but the debugedit binary could not"
- ewarn "be found. This feature will not work unless debugedit is installed!"
- fi
+ type -P debugedit >/dev/null && debugedit_found=true || debugedit_found=false
+ debugedit_warned=false
unset ${!INODE_*}
@@@ -53,10 -96,10 +96,10 @@@ save_elf_sources()
}
save_elf_debug() {
- has splitdebug ${FEATURES} || return 0
+ ${FEATURES_splitdebug} || return 0
local x=$1
- local y="${D}usr/lib/debug/${x:${#D}}.debug"
+ local y="${ED}usr/lib/debug/${x:${#D}}.debug"
# dont save debug info twice
[[ ${x} == *".debug" ]] && return 0
@@@ -158,31 -219,18 +228,22 @@@ d
# actually causes problems. install sources for all
# elf types though cause that stuff is good.
+ buildid=
if [[ ${f} == *"current ar archive"* ]] ; then
- vecho " ${x:${#D}}"
+ vecho " ${x:${#ED}}"
if ${strip_this} ; then
# hmm, can we split debug/sources for .a ?
${STRIP} -g "${x}"
fi
elif [[ ${f} == *"SB executable"* || ${f} == *"SB shared object"* ]] ; then
- vecho " ${x:${#ED}}"
- save_elf_sources "${x}"
- if ${strip_this} ; then
- save_elf_debug "${x}"
- ${STRIP} ${PORTAGE_STRIP_FLAGS} "${x}"
- fi
+ process_elf "${x}" ${PORTAGE_STRIP_FLAGS}
elif [[ ${f} == *"SB relocatable"* ]] ; then
- vecho " ${x:${#ED}}"
- save_elf_sources "${x}"
- if ${strip_this} ; then
- [[ ${x} == *.ko ]] && save_elf_debug "${x}"
- ${STRIP} ${SAFE_STRIP_FLAGS} "${x}"
- fi
+ process_elf "${x}" ${SAFE_STRIP_FLAGS}
fi
+
+ if [[ ${was_not_writable} == "true" ]] ; then
+ chmod u-w "${x}"
+ fi
done
if [[ -s ${T}/debug.sources ]] && \
diff --cc pym/portage/const.py
index 7fc034a,29c3878..63e7c67
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -135,9 -90,8 +134,9 @@@ SUPPORTED_FEATURES = frozenset(
"ccache", "chflags", "clean-logs",
"collision-protect", "compress-build-logs",
"digest", "distcc", "distcc-pump", "distlocks", "ebuild-locks", "fakeroot",
- "fail-clean", "fixpackages", "force-mirror", "getbinpkg",
+ "fail-clean", "force-mirror", "getbinpkg",
"installsources", "keeptemp", "keepwork", "fixlafiles", "lmirror",
+ "macossandbox", "macosprefixsandbox", "macosusersandbox",
"metadata-transfer", "mirror", "multilib-strict", "news",
"noauto", "noclean", "nodoc", "noinfo", "noman",
"nostrip", "notitles", "parallel-fetch", "parallel-install",
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-09-23 18:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-09-23 18:38 UTC (permalink / raw
To: gentoo-commits
commit: f7190575cf1dd3698d0cce1bbf8c071e4fedbbc3
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 23 18:37:11 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Sep 23 18:37:11 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f7190575
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 2 +-
pym/portage/dbapi/porttree.py | 26 +++++++++++++++++++-------
2 files changed, 20 insertions(+), 8 deletions(-)
diff --cc bin/misc-functions.sh
index 148d038,882d171..97e190b
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-09-23 18:23 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-09-23 18:23 UTC (permalink / raw
To: gentoo-commits
commit: 0f77e9560148a3671e03e180af32c22a2636e09c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 23 18:06:06 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Sep 23 18:06:06 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=0f77e956
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/misc-functions.sh | 15 +
bin/portageq | 46 +-
bin/repoman | 2 +-
man/emerge.1 | 13 +-
pym/_emerge/create_depgraph_params.py | 16 +
pym/_emerge/depgraph.py | 100 ++-
pym/_emerge/help.py | 814 +-------------------
pym/_emerge/main.py | 20 +-
pym/portage/dbapi/porttree.py | 133 ++--
pym/portage/dep/__init__.py | 9 +
pym/portage/package/ebuild/doebuild.py | 3 +-
pym/portage/package/ebuild/fetch.py | 1 -
.../tests/resolver/test_circular_choices.py | 61 ++
pym/portage/tests/resolver/test_complete_graph.py | 66 ++
14 files changed, 385 insertions(+), 914 deletions(-)
diff --cc bin/misc-functions.sh
index c516520,a54ce2b..cf72dee
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
diff --cc pym/portage/package/ebuild/doebuild.py
index 1133027,9939e9c..62750e6
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -1433,10 -1393,10 +1433,11 @@@ _post_phase_cmds =
"install" : [
"install_qa_check",
- "install_symlink_html_docs"],
+ "install_symlink_html_docs",
+ "install_hooks"],
"preinst" : [
+ "preinst_aix",
"preinst_sfperms",
"preinst_selinux_labels",
"preinst_suid_scan",
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-09-20 18:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-09-20 18:25 UTC (permalink / raw
To: gentoo-commits
commit: e1854a9b5115f443e45a8de1d5b48a84d249c6a0
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 20 17:19:55 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Sep 20 17:19:55 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=e1854a9b
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild | 6 +-
bin/emerge-webrsync | 1 +
bin/misc-functions.sh | 20 ++--
bin/repoman | 10 +-
man/emerge.1 | 14 +++
pym/_emerge/actions.py | 3 +-
pym/_emerge/depgraph.py | 123 +++++++++++++++------
pym/_emerge/help.py | 22 ++++-
pym/_emerge/main.py | 20 ++++
pym/_emerge/resolver/backtracking.py | 25 +++++
pym/_emerge/resolver/output_helpers.py | 4 +-
pym/_emerge/search.py | 1 -
pym/portage/dbapi/porttree.py | 1 -
pym/portage/dbapi/vartree.py | 30 ++----
pym/portage/manifest.py | 17 +++-
pym/portage/package/ebuild/config.py | 4 +-
pym/portage/package/ebuild/digestcheck.py | 44 ++------
pym/portage/package/ebuild/doebuild.py | 2 +-
pym/portage/package/ebuild/fetch.py | 1 -
pym/portage/repository/config.py | 9 ++-
pym/portage/tests/ebuild/test_config.py | 21 ++++-
pym/portage/tests/repoman/test_simple.py | 2 +-
pym/portage/tests/resolver/ResolverPlayground.py | 34 +++++-
pym/portage/tests/resolver/test_autounmask.py | 103 ++++++++++++++++++
pym/portage/tests/resolver/test_backtracking.py | 43 ++++++++
pym/portage/tests/resolver/test_multislot.py | 16 +++-
pym/portage/tests/resolver/test_virtual_slot.py | 49 +++++++++
pym/portage/util/__init__.py | 4 +-
pym/portage/util/env_update.py | 31 +++++-
29 files changed, 526 insertions(+), 134 deletions(-)
diff --cc bin/misc-functions.sh
index 750f500,30244a7..e9d6e5a
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-09-14 18:43 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-09-14 18:43 UTC (permalink / raw
To: gentoo-commits
commit: dc061ac98c94ca4b4976de299f9fb85f67caf0f0
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 14 18:43:43 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Sep 14 18:43:43 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=dc061ac9
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 13 ++++---------
pym/portage/tests/repoman/test_simple.py | 7 ++++++-
2 files changed, 10 insertions(+), 10 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-09-14 18:38 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-09-14 18:38 UTC (permalink / raw
To: gentoo-commits
commit: f707dbc1b84e6e13997d0b49c8f2df1de86d9c29
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 14 18:38:02 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Sep 14 18:38:02 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=f707dbc1
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild | 2 +-
bin/repoman | 7 +++++++
pym/portage/manifest.py | 20 ++++++++++----------
pym/portage/package/ebuild/digestgen.py | 16 ++++++++--------
4 files changed, 26 insertions(+), 19 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-09-13 17:41 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-09-13 17:41 UTC (permalink / raw
To: gentoo-commits
commit: 74fec93ae94c0149cac8a6e2af49ee2492c97cce
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 13 17:40:17 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Sep 13 17:40:17 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=74fec93a
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild.sh
bin/egencache
bin/isolated-functions.sh
pym/portage/dbapi/vartree.py
pym/portage/elog/mod_save.py
pym/portage/elog/mod_save_summary.py
pym/portage/package/ebuild/doebuild.py
pym/portage/util/env_update.py
bin/bashrc-functions.sh | 140 ++
bin/ebuild | 25 +-
bin/ebuild.sh | 1963 ++------------------
bin/egencache | 46 +-
bin/isolated-functions.sh | 166 +--
bin/phase-functions.sh | 970 ++++++++++
bin/phase-helpers.sh | 624 +++++++
bin/repoman | 47 +-
bin/save-ebuild-env.sh | 97 +
man/portage.5 | 4 +
pym/_emerge/BlockerCache.py | 10 +-
pym/_emerge/EbuildFetcher.py | 6 +-
pym/_emerge/actions.py | 21 +-
pym/_emerge/depgraph.py | 125 +-
pym/_emerge/main.py | 11 +-
pym/_emerge/search.py | 4 +-
pym/portage/__init__.py | 36 +-
pym/portage/cache/volatile.py | 8 +-
pym/portage/dbapi/porttree.py | 41 +-
pym/portage/dbapi/vartree.py | 53 +-
pym/portage/dep/__init__.py | 8 +
pym/portage/elog/mod_echo.py | 13 +
pym/portage/elog/mod_save.py | 3 +-
pym/portage/elog/mod_save_summary.py | 4 +-
pym/portage/getbinpkg.py | 17 +-
pym/portage/manifest.py | 149 +-
pym/portage/package/ebuild/config.py | 13 +
pym/portage/package/ebuild/digestcheck.py | 6 +-
pym/portage/package/ebuild/digestgen.py | 3 +-
pym/portage/package/ebuild/doebuild.py | 50 +-
pym/portage/package/ebuild/fetch.py | 3 +-
pym/portage/repository/config.py | 216 ++-
pym/portage/tests/ebuild/test_config.py | 48 +
pym/portage/tests/ebuild/test_pty_eof.py | 13 +
pym/portage/tests/emerge/test_simple.py | 205 ++-
pym/portage/tests/repoman/test_simple.py | 19 +-
pym/portage/tests/resolver/ResolverPlayground.py | 71 +-
pym/portage/tests/resolver/test_virtual_slot.py | 93 +
.../util/_dyn_libs/PreservedLibsRegistry.py | 14 +-
pym/portage/util/env_update.py | 6 +-
pym/portage/util/mtimedb.py | 8 +-
pym/portage/xml/metadata.py | 34 +-
pym/repoman/checks.py | 1 +
43 files changed, 3038 insertions(+), 2356 deletions(-)
diff --cc bin/ebuild.sh
index a5fc0ec,7b77c10..7952515
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -2,9 -2,51 +2,58 @@@
# Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"
-PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}"
+PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"
+PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}"
+ ROOTPATH=${ROOTPATH##:}
+ ROOTPATH=${ROOTPATH%%:}
+ PREROOTPATH=${PREROOTPATH##:}
+ PREROOTPATH=${PREROOTPATH%%:}
-PATH=$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin${ROOTPATH:+:}$ROOTPATH
++#PATH=$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin${ROOTPATH:+:}$ROOTPATH
++# PREFIX: our DEFAULT_PATH is equal to the above when not using an
++# offset prefix. With such prefix, the usr/local bits are excluded, and
++# the prefixed variants of {usr/,}{s,}bin are taken. The additional
++# paths given during configure, always come as last thing since they
++# should never override anything from the prefix itself.
++PATH="$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}${DEFAULT_PATH}${ROOTPATH:+:}$ROOTPATH${EXTRA_PATH:+:}${EXTRA_PATH}"
+ export PATH
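# Standalone sketch, not part of the patch above: under an offset prefix the
# path is assembled from the helper dir, any PREROOTPATH, the configure-time
# DEFAULT_PATH (the prefixed bin/sbin dirs), ROOTPATH and EXTRA_PATH, with
# ${VAR:+:} dropping the separator for empty components. All values below
# are invented.
PORTAGE_BIN_PATH=/opt/prefix/usr/lib/portage/bin
PREROOTPATH=
DEFAULT_PATH=/opt/prefix/usr/bin:/opt/prefix/bin
ROOTPATH=/opt/prefix/usr/libexec
EXTRA_PATH=/usr/bin:/bin
PATH="$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}${DEFAULT_PATH}${ROOTPATH:+:}$ROOTPATH${EXTRA_PATH:+:}${EXTRA_PATH}"
echo "${PATH}"
# -> /opt/prefix/usr/lib/portage/bin/ebuild-helpers:/opt/prefix/usr/bin:/opt/prefix/bin:/opt/prefix/usr/libexec:/usr/bin:/bin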
+
++
+ # Prevent aliases from causing portage to act inappropriately.
+ # Make sure it's before everything so we don't mess aliases that follow.
+ unalias -a
+
+ source "${PORTAGE_BIN_PATH}/isolated-functions.sh" || exit 1
+
+ if [[ $EBUILD_PHASE != depend ]] ; then
+ source "${PORTAGE_BIN_PATH}/phase-functions.sh" || die
+ source "${PORTAGE_BIN_PATH}/save-ebuild-env.sh" || die
+ source "${PORTAGE_BIN_PATH}/phase-helpers.sh" || die
+ source "${PORTAGE_BIN_PATH}/bashrc-functions.sh" || die
+ else
+ # These dummy functions are for things that are likely to be called
+ # in global scope, even though they are completely useless during
+ # the "depend" phase.
+ for x in diropts docompress exeopts get_KV insopts \
+ keepdir KV_major KV_micro KV_minor KV_to_int \
+ libopts register_die_hook register_success_hook \
+ remove_path_entry set_unless_changed strip_duplicate_slashes \
+ unset_unless_changed use_with use_enable ; do
+ eval "${x}() { : ; }"
+ done
+ # These dummy functions return false, in order to ensure that
+ # `use multislot` is false for the "depend" phase.
+ for x in use useq usev ; do
+ eval "${x}() { return 1; }"
+ done
+ # These functions die because calls to them during the "depend" phase
+ # are considered to be severe QA violations.
+ for x in best_version has_version portageq ; do
+ eval "${x}() { die \"\${FUNCNAME} calls are not allowed in global scope\"; }"
+ done
+ unset x
+ fi
+
if [[ $PORTAGE_SANDBOX_COMPAT_LEVEL -lt 22 ]] ; then
# Ensure that /dev/std* streams have appropriate sandbox permission for
# bug #288863. This can be removed after sandbox is fixed and portage
@@@ -2178,232 -698,20 +707,22 @@@ els
0|1|2)
;;
*)
- declare -r ED EPREFIX EROOT
+ # PREFIX LOCAL: allow prefix vars in any EAPI
+ #declare -r ED EPREFIX EROOT
+ # PREFIX LOCAL
;;
esac
- fi
-
- ebuild_main() {
- # Subshell/helper die support (must export for the die helper).
- # Since this function is typically executed in a subshell,
- # setup EBUILD_MASTER_PID to refer to the current $BASHPID,
- # which seems to give the best results when further
- # nested subshells call die.
- export EBUILD_MASTER_PID=$BASHPID
- trap 'exit 1' SIGTERM
-
- if [[ $EBUILD_PHASE != depend ]] ; then
- # Force configure scripts that automatically detect ccache to
- # respect FEATURES="-ccache".
- has ccache $FEATURES || export CCACHE_DISABLE=1
-
- local phase_func=$(_ebuild_arg_to_phase "$EAPI" "$EBUILD_PHASE")
- [[ -n $phase_func ]] && _ebuild_phase_funcs "$EAPI" "$phase_func"
- unset phase_func
- fi
-
- source_all_bashrcs
-
- case ${EBUILD_SH_ARGS} in
- nofetch)
- ebuild_phase_with_hooks pkg_nofetch
- ;;
- prerm|postrm|postinst|config|info)
- if has "$EBUILD_SH_ARGS" config info && \
- ! declare -F "pkg_$EBUILD_SH_ARGS" >/dev/null ; then
- ewarn "pkg_${EBUILD_SH_ARGS}() is not defined: '${EBUILD##*/}'"
- fi
- export SANDBOX_ON="0"
- if [ "${PORTAGE_DEBUG}" != "1" ] || [ "${-/x/}" != "$-" ]; then
- ebuild_phase_with_hooks pkg_${EBUILD_SH_ARGS}
- else
- set -x
- ebuild_phase_with_hooks pkg_${EBUILD_SH_ARGS}
- set +x
- fi
- if [[ $EBUILD_PHASE == postinst ]] && [[ -n $PORTAGE_UPDATE_ENV ]]; then
- # Update environment.bz2 in case installation phases
- # need to pass some variables to uninstallation phases.
- save_ebuild_env --exclude-init-phases | \
- filter_readonly_variables --filter-path \
- --filter-sandbox --allow-extra-vars \
- | ${PORTAGE_BZIP2_COMMAND} -c -f9 > "$PORTAGE_UPDATE_ENV"
- assert "save_ebuild_env failed"
- fi
- ;;
- unpack|prepare|configure|compile|test|clean|install)
- if [[ ${SANDBOX_DISABLED:-0} = 0 ]] ; then
- export SANDBOX_ON="1"
- else
- export SANDBOX_ON="0"
- fi
-
- case "$EBUILD_SH_ARGS" in
- configure|compile)
-
- local x
- for x in ASFLAGS CCACHE_DIR CCACHE_SIZE \
- CFLAGS CXXFLAGS LDFLAGS LIBCFLAGS LIBCXXFLAGS ; do
- [[ ${!x+set} = set ]] && export $x
- done
- unset x
-
- has distcc $FEATURES && [[ -n $DISTCC_DIR ]] && \
- [[ ${SANDBOX_WRITE/$DISTCC_DIR} = $SANDBOX_WRITE ]] && \
- addwrite "$DISTCC_DIR"
-
- x=LIBDIR_$ABI
- [ -z "$PKG_CONFIG_PATH" -a -n "$ABI" -a -n "${!x}" ] && \
- export PKG_CONFIG_PATH=/usr/${!x}/pkgconfig
-
- if has noauto $FEATURES && \
- [[ ! -f $PORTAGE_BUILDDIR/.unpacked ]] ; then
- echo
- echo "!!! We apparently haven't unpacked..." \
- "This is probably not what you"
- echo "!!! want to be doing... You are using" \
- "FEATURES=noauto so I'll assume"
- echo "!!! that you know what you are doing..." \
- "You have 5 seconds to abort..."
- echo
-
- local x
- for x in 1 2 3 4 5 6 7 8; do
- LC_ALL=C sleep 0.25
- done
-
- sleep 3
- fi
-
- cd "$PORTAGE_BUILDDIR"
- if [ ! -d build-info ] ; then
- mkdir build-info
- cp "$EBUILD" "build-info/$PF.ebuild"
- fi
-
- #our custom version of libtool uses $S and $D to fix
- #invalid paths in .la files
- export S D
-
- ;;
- esac
-
- if [ "${PORTAGE_DEBUG}" != "1" ] || [ "${-/x/}" != "$-" ]; then
- dyn_${EBUILD_SH_ARGS}
- else
- set -x
- dyn_${EBUILD_SH_ARGS}
- set +x
- fi
- export SANDBOX_ON="0"
- ;;
- help|pretend|setup|preinst)
- #pkg_setup needs to be out of the sandbox for tmp file creation;
- #for example, awking and piping a file in /tmp requires a temp file to be created
- #in /etc. If pkg_setup is in the sandbox, both our lilo and apache ebuilds break.
- export SANDBOX_ON="0"
- if [ "${PORTAGE_DEBUG}" != "1" ] || [ "${-/x/}" != "$-" ]; then
- dyn_${EBUILD_SH_ARGS}
- else
- set -x
- dyn_${EBUILD_SH_ARGS}
- set +x
- fi
- ;;
- depend)
- export SANDBOX_ON="0"
- set -f
-
- if [ -n "${dbkey}" ] ; then
- if [ ! -d "${dbkey%/*}" ]; then
- install -d -g ${PORTAGE_GID} -m2775 "${dbkey%/*}"
- fi
- # Make it group writable. 666&~002==664
- umask 002
- fi
-
- auxdbkeys="DEPEND RDEPEND SLOT SRC_URI RESTRICT HOMEPAGE LICENSE
- DESCRIPTION KEYWORDS INHERITED IUSE REQUIRED_USE PDEPEND PROVIDE EAPI
- PROPERTIES DEFINED_PHASES UNUSED_05 UNUSED_04
- UNUSED_03 UNUSED_02 UNUSED_01"
-
- #the extra $(echo) commands remove newlines
- [ -n "${EAPI}" ] || EAPI=0
-
- if [ -n "${dbkey}" ] ; then
- > "${dbkey}"
- for f in ${auxdbkeys} ; do
- echo $(echo ${!f}) >> "${dbkey}" || exit $?
- done
- else
- for f in ${auxdbkeys} ; do
- echo $(echo ${!f}) 1>&9 || exit $?
- done
+ if [[ -n $EBUILD_SH_ARGS ]] ; then
+ (
+ # Don't allow subprocesses to inherit the pipe which
+ # emerge uses to monitor ebuild.sh.
exec 9>&-
- fi
- set +f
- ;;
- _internal_test)
- ;;
- *)
- export SANDBOX_ON="1"
- echo "Unrecognized EBUILD_SH_ARGS: '${EBUILD_SH_ARGS}'"
- echo
- dyn_help
- exit 1
- ;;
- esac
- }
-
- if [[ -s $SANDBOX_LOG ]] ; then
- # We use SANDBOX_LOG to check for sandbox violations,
- # so we ensure that there can't be a stale log to
- # interfere with our logic.
- x=
- if [[ -n SANDBOX_ON ]] ; then
- x=$SANDBOX_ON
- export SANDBOX_ON=0
+ ebuild_main ${EBUILD_SH_ARGS}
+ exit 0
+ )
+ exit $?
fi
-
- rm -f "$SANDBOX_LOG" || \
- die "failed to remove stale sandbox log: '$SANDBOX_LOG'"
-
- if [[ -n $x ]] ; then
- export SANDBOX_ON=$x
- fi
- unset x
- fi
-
- if [[ $EBUILD_PHASE = depend ]] ; then
- ebuild_main
- elif [[ -n $EBUILD_SH_ARGS ]] ; then
- (
- # Don't allow subprocesses to inherit the pipe which
- # emerge uses to monitor ebuild.sh.
- exec 9>&-
-
- ebuild_main
-
- # Save the env only for relevant phases.
- if ! has "$EBUILD_SH_ARGS" clean help info nofetch ; then
- umask 002
- save_ebuild_env | filter_readonly_variables \
- --filter-features > "$T/environment"
- assert "save_ebuild_env failed"
- chown ${PORTAGE_USER:-portage}:${PORTAGE_GROUP:-portage} "$T/environment" &>/dev/null
- chmod g+w "$T/environment" &>/dev/null
- fi
- [[ -n $PORTAGE_EBUILD_EXIT_FILE ]] && > "$PORTAGE_EBUILD_EXIT_FILE"
- if [[ -n $PORTAGE_IPC_DAEMON ]] ; then
- [[ ! -s $SANDBOX_LOG ]]
- "$PORTAGE_BIN_PATH"/ebuild-ipc exit $?
- fi
- exit 0
- )
- exit $?
fi
# Do not exit when ebuild.sh is sourced by other scripts.
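As an aside, the prefixed PATH composition described in the comment near the top of this hunk is easiest to follow with concrete values. The prefix, DEFAULT_PATH and EXTRA_PATH below are hypothetical configure-time substitutions, not taken from the commit; the sketch only shows how the ${VAR:+:} expansions keep the result free of empty components:
#!/usr/bin/env bash
# Hypothetical configure-time substitutions for an offset-prefix install.
PORTAGE_BIN_PATH=/home/user/gentoo/usr/lib/portage/bin
DEFAULT_PATH=/home/user/gentoo/usr/sbin:/home/user/gentoo/usr/bin:/home/user/gentoo/sbin:/home/user/gentoo/bin
EXTRA_PATH=/usr/bin:/bin

# A single leading or trailing colon is trimmed from (PRE)ROOTPATH first.
ROOTPATH=":/opt/blas/bin"
ROOTPATH=${ROOTPATH##:}
ROOTPATH=${ROOTPATH%%:}
PREROOTPATH=""

# ${PREROOTPATH:+:} and ${ROOTPATH:+:} only emit a colon when the variable
# is non-empty, so unset parts leave no stray "::" entries behind.
PATH="$PORTAGE_BIN_PATH/ebuild-helpers:$PREROOTPATH${PREROOTPATH:+:}${DEFAULT_PATH}${ROOTPATH:+:}$ROOTPATH${EXTRA_PATH:+:}${EXTRA_PATH}"
echo "$PATH"
# -> .../ebuild-helpers:<DEFAULT_PATH>:/opt/blas/bin:/usr/bin:/bin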
diff --cc bin/egencache
index 50cee68,7766e78..ae76a3d
--- a/bin/egencache
+++ b/bin/egencache
@@@ -791,8 -792,10 +793,12 @@@ def egencache_main(args)
if options.portdir is not None:
env['PORTDIR'] = options.portdir
- settings = portage.config(config_root=config_root, _eprefix=EPREFIX,
- target_root='/', local_config=False, env=env)
+ eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX")
++ if not eprefix:
++ eprefix = EPREFIX
+
+ settings = portage.config(config_root=config_root,
+ local_config=False, env=env, _eprefix=eprefix)
default_opts = None
if not options.ignore_default_opts:
diff --cc bin/isolated-functions.sh
index 69eb4db,733795a..12edfbc
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -487,31 -434,30 +436,36 @@@ RC_INDENTATION='
RC_DEFAULT_INDENT=2
RC_DOT_PATTERN=''
- case "${NOCOLOR:-false}" in
- yes|true)
- unset_colors
- ;;
- no|false)
- set_colors
- ;;
- esac
+ if [[ $EBUILD_PHASE == depend ]] ; then
+ # avoid unneeded stty call in set_colors during "depend" phase
+ unset_colors
+ else
+ case "${NOCOLOR:-false}" in
+ yes|true)
+ unset_colors
+ ;;
+ no|false)
+ set_colors
+ ;;
+ esac
+ fi
-if [[ -z ${USERLAND} ]] ; then
- case $(uname -s) in
- *BSD|DragonFly)
- export USERLAND="BSD"
- ;;
- *)
- export USERLAND="GNU"
- ;;
- esac
-fi
+# In Prefix every platform has USERLAND=GNU, even FreeBSD. Since I
+# don't know how to reliably "figure out" that we are in a Prefix instance
+# of portage here, for now I disable this check and hardcode it to GNU.
+# Somehow it appears strange to me that this code is in this file;
+# non-ebuilds/eclasses should never rely on USERLAND and XARGS, should they?
+#if [[ -z ${USERLAND} ]] ; then
+# case $(uname -s) in
+# *BSD|DragonFly)
+# export USERLAND="BSD"
+# ;;
+# *)
+# export USERLAND="GNU"
+# ;;
+# esac
+#fi
+[[ -z ${USERLAND} ]] && USERLAND="GNU"
if [[ -z ${XARGS} ]] ; then
case ${USERLAND} in
diff --cc pym/portage/package/ebuild/doebuild.py
index a92159f,ba7ebc3..8fc5c41
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -35,8 -35,7 +35,8 @@@ from portage import auxdbkeys, bsd_chfl
unmerge, _encodings, _parse_eapi_ebuild_head, _os_merge, \
_shell_quote, _unicode_decode, _unicode_encode
from portage.const import EBUILD_SH_ENV_FILE, EBUILD_SH_ENV_DIR, \
- EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY
+ EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, \
- EPREFIX, EPREFIX_LSTRIP, MACOSSANDBOX_PROFILE
++ EPREFIX, MACOSSANDBOX_PROFILE
from portage.data import portage_gid, portage_uid, secpass, \
uid, userpriv_groups
from portage.dbapi.porttree import _parse_uri_map
diff --cc pym/portage/util/env_update.py
index d39bd39,19c7666..8a2ac57
--- a/pym/portage/util/env_update.py
+++ b/pym/portage/util/env_update.py
@@@ -52,7 -50,7 +50,7 @@@ def env_update(makelinks=1, target_root
else:
settings = env
-- eprefix = settings.get("EPREFIX", "")
++ eprefix = settings.get("EPREFIX", portage.const.EPREFIX)
eprefix_lstrip = eprefix.lstrip(os.sep)
envd_dir = os.path.join(target_root, eprefix_lstrip, "etc", "env.d")
ensure_dirs(envd_dir, mode=0o755)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-08-31 18:39 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-08-31 18:39 UTC (permalink / raw
To: gentoo-commits
commit: 3b74444f6684d7696e549cbdb76b8d1a0795fe2d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 18:39:04 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 18:39:04 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=3b74444f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/repoman | 5 +-
cnf/make.globals | 3 +
man/make.conf.5 | 15 ++++++-
man/repoman.1 | 3 +-
pym/_emerge/EbuildBuild.py | 2 +-
pym/_emerge/actions.py | 3 +
pym/_emerge/main.py | 43 +++++++++++++++++--
pym/portage/const.py | 3 +-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/doebuild.py | 4 +-
pym/portage/tests/emerge/test_simple.py | 7 +--
pym/portage/tests/repoman/test_simple.py | 7 ++-
pym/portage/util/env_update.py | 1 +
13 files changed, 77 insertions(+), 21 deletions(-)
diff --cc cnf/make.globals
index ae98d16,fcd0b41..7aaec5e
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -101,9 -101,10 +101,12 @@@ PORTAGE_RSYNC_OPTS="--recursive --link
# message should be produced.
PORTAGE_SYNC_STALE="30"
+ # Executed before emerge exits if FEATURES=clean-logs is enabled.
+ PORT_LOGDIR_CLEAN="find \"\${PORT_LOGDIR}\" -type f ! -name \"summary.log*\" -mtime +7 -delete"
+
# Minimal CONFIG_PROTECT
+# NOTE: in Prefix, these are NOT prefixed on purpose, because the
+# profiles define them too
CONFIG_PROTECT="/etc"
CONFIG_PROTECT_MASK="/etc/env.d"
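For clarity, the backslash escaping in PORT_LOGDIR_CLEAN above keeps the quotes and ${PORT_LOGDIR} unexpanded inside make.globals; they are only evaluated when the cleanup actually runs. Assuming the usual PORT_LOGDIR of /var/log/portage (an illustrative value, not part of the commit), the command executed on emerge exit under FEATURES=clean-logs boils down to:
# Delete build logs older than a week, but keep the summary logs.
find "/var/log/portage" -type f ! -name "summary.log*" -mtime +7 -delete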
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-08-30 18:45 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-08-30 18:45 UTC (permalink / raw
To: gentoo-commits
commit: 05a49f3c13c1da7e0f4b6b70e8bf40c5bc91ccac
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 30 18:44:43 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Aug 30 18:44:43 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=05a49f3c
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/portage/dbapi/vartree.py | 9 ---------
pym/portage/package/ebuild/doebuild.py | 5 ++++-
pym/portage/tests/emerge/test_simple.py | 4 +++-
pym/portage/tests/repoman/test_simple.py | 2 +-
pym/repoman/utilities.py | 2 +-
5 files changed, 9 insertions(+), 13 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-08-29 19:03 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-08-29 19:03 UTC (permalink / raw
To: gentoo-commits
commit: 5b6156b085ab5ae518fa6c4ed9b2e2c5b75fd184
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Aug 29 18:56:30 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Aug 29 18:56:30 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5b6156b0
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/dispatch-conf
bin/repoman
pym/portage/__init__.py
pym/portage/tests/runTests
pym/portage/util/env_update.py
runtests.sh
bin/dispatch-conf | 6 +-
bin/ebuild.sh | 37 +----
bin/egencache | 12 ++-
bin/repoman | 23 ++-
cnf/metadata.dtd | 99 +++++++++++
man/emerge.1 | 14 +-
pym/_emerge/AsynchronousLock.py | 13 ++
pym/_emerge/BinpkgEnvExtractor.py | 2 +-
pym/_emerge/EbuildPhase.py | 7 +-
pym/_emerge/PipeReader.py | 1 +
pym/_emerge/PollScheduler.py | 3 +
pym/_emerge/Scheduler.py | 2 +-
pym/_emerge/SpawnProcess.py | 10 +-
pym/_emerge/actions.py | 3 +-
pym/_emerge/depgraph.py | 153 +++++++----------
pym/_emerge/help.py | 18 +-
pym/portage/__init__.py | 10 +-
pym/portage/data.py | 3 +
pym/portage/dbapi/vartree.py | 21 ++-
pym/portage/elog/messages.py | 6 +-
pym/portage/package/ebuild/doebuild.py | 90 ++++++++++-
pym/portage/package/ebuild/prepare_build_dirs.py | 4 +
pym/portage/tests/__init__.py | 34 +++-
.../tests/ebuild/test_array_fromfile_eof.py | 1 +
pym/portage/tests/{dbapi => emerge}/__init__.py | 0
pym/portage/tests/{bin => emerge}/__test__ | 0
pym/portage/tests/emerge/test_simple.py | 143 ++++++++++++++++
.../test_lazy_import_portage_baseline.py | 2 +-
pym/portage/tests/process/test_poll.py | 10 +-
pym/portage/tests/{dbapi => repoman}/__init__.py | 0
pym/portage/tests/{bin => repoman}/__test__ | 0
pym/portage/tests/repoman/test_simple.py | 174 ++++++++++++++++++++
pym/portage/tests/resolver/ResolverPlayground.py | 25 +++-
pym/portage/tests/resolver/test_rebuild.py | 65 ++++----
pym/portage/update.py | 9 +-
pym/portage/util/ExtractKernelVersion.py | 2 +
pym/portage/util/_dyn_libs/LinkageMapELF.py | 1 +
pym/portage/util/_pty.py | 1 +
pym/portage/util/env_update.py | 59 +++----
pym/portage/xml/metadata.py | 2 +
pym/portage/xpak.py | 4 +-
runtests.sh | 2 +-
42 files changed, 817 insertions(+), 254 deletions(-)
diff --cc bin/dispatch-conf
index 6ce255e,497927d..fe85c54
--- a/bin/dispatch-conf
+++ b/bin/dispatch-conf
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python -O
+#!@PREFIX_PORTAGE_PYTHON@ -O
- # Copyright 1999-2006 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
diff --cc bin/repoman
index d58c52f,d9ecfc4..ed62fa1
--- a/bin/repoman
+++ b/bin/repoman
@@@ -99,7 -96,8 +99,10 @@@ os.umask(0o22
# behave incrementally.
repoman_incrementals = tuple(x for x in \
portage.const.INCREMENTALS if x != 'ACCEPT_KEYWORDS')
- repoman_settings = portage.config(local_config=False, _eprefix=EPREFIX)
+ eprefix = os.environ.get("__REPOMAN_TEST_EPREFIX")
++if not eprefix:
++ eprefix = EPREFIX
+ repoman_settings = portage.config(local_config=False, _eprefix=eprefix)
repoman_settings.lock()
if repoman_settings.get("NOCOLOR", "").lower() in ("yes", "true") or \
diff --cc pym/portage/__init__.py
index 4ce9efe,72cdf2d..8b5426b
--- a/pym/portage/__init__.py
+++ b/pym/portage/__init__.py
@@@ -473,8 -472,9 +473,11 @@@ def create_trees(config_root=None, targ
portdbapi.portdbapi_instances.remove(portdb)
del trees[myroot]["porttree"], myroot, portdb
+ eprefix = os.environ.get("__PORTAGE_TEST_EPREFIX")
++ if not eprefix:
++ eprefix = EPREFIX
settings = config(config_root=config_root, target_root=target_root,
- config_incrementals=portage.const.INCREMENTALS, _eprefix=EPREFIX)
+ config_incrementals=portage.const.INCREMENTALS, _eprefix=eprefix)
settings.lock()
myroots = [(settings["ROOT"], settings)]
diff --cc pym/portage/data.py
index 0c0e320,c496c0b..354fc9c
--- a/pym/portage/data.py
+++ b/pym/portage/data.py
@@@ -67,8 -63,11 +67,11 @@@ secpass=
uid=os.getuid()
wheelgid=0
-if uid==0:
+if uid==rootuid:
secpass=2
+ elif "__PORTAGE_TEST_EPREFIX" in os.environ:
+ secpass = 2
+
try:
wheelgid=grp.getgrnam("wheel")[2]
except KeyError:
diff --cc pym/portage/util/env_update.py
index 2650f15,1731663..d4f3787
--- a/pym/portage/util/env_update.py
+++ b/pym/portage/util/env_update.py
@@@ -47,8 -46,13 +46,15 @@@ def env_update(makelinks=1, target_root
if prev_mtimes is None:
prev_mtimes = portage.mtimedb["ldpath"]
if env is None:
- env = os.environ
- envd_dir = os.path.join(target_root, EPREFIX_LSTRIP, "etc", "env.d")
+ settings = os.environ
++ if 'EPREFIX' not in settings:
++ settings['EPREFIX'] = portage.const.EPREFIX
+ else:
+ settings = env
+
+ eprefix = settings.get("EPREFIX", "")
+ eprefix_lstrip = eprefix.lstrip(os.sep)
+ envd_dir = os.path.join(target_root, eprefix_lstrip, "etc", "env.d")
ensure_dirs(envd_dir, mode=0o755)
fns = listdir(envd_dir, EmptyOnError=1)
fns.sort()
@@@ -279,12 -271,12 +273,12 @@@
penvnotice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
penvnotice += "# DO NOT EDIT THIS FILE. CHANGES TO STARTUP PROFILES\n"
cenvnotice = penvnotice[:]
- penvnotice += "# GO INTO " + EPREFIX + "/etc/profile NOT /etc/profile.env\n\n"
- cenvnotice += "# GO INTO " + EPREFIX + "/etc/csh.cshrc NOT /etc/csh.env\n\n"
- penvnotice += "# GO INTO /etc/profile NOT /etc/profile.env\n\n"
- cenvnotice += "# GO INTO /etc/csh.cshrc NOT /etc/csh.env\n\n"
++ penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT /etc/profile.env\n\n"
++ cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT /etc/csh.env\n\n"
#create /etc/profile.env for bash support
- outfile = atomic_ofstream(os.path.join(target_root, EPREFIX_LSTRIP, "etc", "profile.env"))
+ outfile = atomic_ofstream(os.path.join(
+ target_root, eprefix_lstrip, "etc", "profile.env"))
outfile.write(penvnotice)
env_keys = [ x for x in env if x != "LDPATH" ]
diff --cc runtests.sh
index dadd32d,981fa1e..52f7bb0
--- a/runtests.sh
+++ b/runtests.sh
@@@ -26,9 -26,9 +26,9 @@@ trap interrupted SIGIN
exit_status="0"
for version in ${PYTHON_VERSIONS}; do
- if [[ -x /usr/bin/python${version} ]]; then
+ if [[ -x @PREFIX_PORTAGE_PYTHON@${version} ]]; then
echo -e "${GOOD}Testing with Python ${version}...${NORMAL}"
- if ! @PREFIX_PORTAGE_PYTHON@${version} pym/portage/tests/runTests "$@" ; then
- if ! /usr/bin/python${version} -Wd pym/portage/tests/runTests "$@" ; then
++ if ! @PREFIX_PORTAGE_PYTHON@${version} -Wd pym/portage/tests/runTests "$@" ; then
echo -e "${BAD}Testing with Python ${version} failed${NORMAL}"
exit_status="1"
fi
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-08-25 20:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-08-25 20:25 UTC (permalink / raw
To: gentoo-commits
commit: 253d0140bb34d02368162b5d5fd5b030ea96965d
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Aug 25 20:22:02 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Aug 25 20:22:02 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=253d0140
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/dosym
bin/ebuild | 8 +++---
bin/ebuild-helpers/dosym | 15 ++++++++----
bin/isolated-functions.sh | 7 +++++-
man/make.conf.5 | 6 +++++
pym/_emerge/PipeReader.py | 6 ++++-
pym/_emerge/Scheduler.py | 4 +++
pym/_emerge/SpawnProcess.py | 6 ++++-
pym/portage/_sets/__init__.py | 9 +++++-
pym/portage/news.py | 7 +++--
pym/portage/output.py | 12 +++++++--
.../package/ebuild/_config/LocationsManager.py | 10 +++++--
pym/portage/package/ebuild/doebuild.py | 24 +++++++++++++++-----
pym/portage/package/ebuild/prepare_build_dirs.py | 2 +-
pym/portage/repository/config.py | 16 ++++++++++--
.../tests/ebuild/test_array_fromfile_eof.py | 11 +++++---
pym/portage/tests/ebuild/test_doebuild_spawn.py | 4 +-
pym/portage/update.py | 7 +++--
pym/portage/util/__init__.py | 13 +++++++---
.../util/_dyn_libs/PreservedLibsRegistry.py | 10 +++++--
pym/portage/util/_pty.py | 8 ++++--
pym/portage/xml/metadata.py | 13 +++++++++-
pym/portage/xpak.py | 7 ++++-
22 files changed, 149 insertions(+), 56 deletions(-)
diff --cc bin/ebuild
index 58c2d83,db7e5e3..e91149e
--- a/bin/ebuild
+++ b/bin/ebuild
@@@ -57,19 -57,13 +57,15 @@@ opts, pargs = parser.parse_args(args=sy
if len(pargs) < 2:
parser.error("missing required args")
- if "merge" in pargs:
- print("Disabling noauto in features... merge disables it. (qmerge doesn't)")
- os.environ["FEATURES"] = os.environ.get("FEATURES", "") + " -noauto"
-
os.environ["PORTAGE_CALLER"]="ebuild"
-try:
- import portage
-except ImportError:
- from os import path as osp
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
+# for an explanation on this logic, see pym/_emerge/__init__.py
+from os import environ as ose
+from os import path as osp
+if ose.__contains__("PORTAGE_PYTHONPATH"):
+ sys.path.insert(0, ose["PORTAGE_PYTHONPATH"])
+else:
+ sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
+import portage
portage.dep._internal_warnings = True
from portage import os
diff --cc bin/ebuild-helpers/dosym
index bd4044f,7dd4c6d..16e0df0
--- a/bin/ebuild-helpers/dosym
+++ b/bin/ebuild-helpers/dosym
@@@ -9,13 -9,16 +9,18 @@@ if [[ $# -ne 2 ]] ; the
exit 1
fi
- target="${1}"
- linkname="${2}"
- [[ ${target:0:1} == "/" ]] && target="${EPREFIX}${target}"
- destdir=${linkname%/*}
+ if [[ ${2} == */ ]] || \
- [[ -d ${D}${2} && ! -L ${D}${2} ]] ; then
++ [[ -d ${ED}${2} && ! -L ${ED}${2} ]] ; then
+ # implicit basename not allowed by PMS (bug #379899)
+ eqawarn "QA Notice: dosym target omits basename: '${2}'"
+ fi
+
+ destdir=${2%/*}
-[[ ! -d ${D}${destdir} ]] && dodir "${destdir}"
+[[ ! -d ${ED}${destdir} ]] && dodir "${destdir}"
- ln -snf "${target}" "${ED}/${linkname}"
-ln -snf "$1" "${D}$2"
++target="${1}"
++[[ ${target:0:1} == "/" ]] && target="${EPREFIX}${target}"
++ln -snf "${target}" "${ED}/${2}"
ret=$?
[[ $ret -ne 0 ]] && helpers_die "${0##*/} failed"
exit $ret
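A small standalone sketch of the merged dosym behaviour above, with assumed paths that are not part of the commit: absolute link targets get the offset prefix prepended, and a link name that omits its basename is flagged with a QA notice:
#!/usr/bin/env bash
# Assumed locations, for illustration only.
EPREFIX=/home/user/gentoo
ED=/tmp/image${EPREFIX}/

dosym_sketch() {
    local target=$1 linkname=$2
    if [[ ${linkname} == */ || ( -d ${ED}${linkname} && ! -L ${ED}${linkname} ) ]]; then
        echo "QA Notice: dosym target omits basename: '${linkname}'"
    fi
    mkdir -p "${ED}${linkname%/*}"
    # Absolute targets are relocated into the prefix, relative ones kept as-is.
    [[ ${target:0:1} == "/" ]] && target="${EPREFIX}${target}"
    ln -snf "${target}" "${ED}/${linkname}"
}

dosym_sketch /usr/share/zoneinfo/UTC /etc/localtime
readlink "${ED}/etc/localtime"   # -> /home/user/gentoo/usr/share/zoneinfo/UTC
dosym_sketch vim /usr/bin/       # prints the QA notice (bug #379899)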
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-08-20 17:50 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-08-20 17:50 UTC (permalink / raw
To: gentoo-commits
commit: 7c8aa70f94de389fa452bbc8147de21a1e19a6cb
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 20 17:50:06 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Aug 20 17:50:06 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=7c8aa70f
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/portage/elog/mod_save.py
bin/ebuild.sh | 4 +-
bin/egencache | 9 +
bin/portageq | 62 +++++--
bin/repoman | 25 +++
man/egencache.1 | 4 +
man/portage.5 | 1 +
man/repoman.1 | 3 +
pym/_emerge/Package.py | 6 -
pym/_emerge/Scheduler.py | 7 +
pym/_emerge/actions.py | 91 +++++-----
pym/_emerge/depgraph.py | 14 +-
pym/_emerge/main.py | 16 +-
pym/_emerge/resolver/output.py | 9 +-
pym/_emerge/search.py | 26 ++-
pym/portage/dbapi/porttree.py | 5 -
pym/portage/dbapi/vartree.py | 198 ++++++++++++++++++--
pym/portage/elog/mod_save.py | 25 +++-
pym/portage/elog/mod_save_summary.py | 18 ++-
.../package/ebuild/_config/special_env_vars.py | 2 +-
pym/portage/package/ebuild/config.py | 16 ++-
pym/portage/package/ebuild/doebuild.py | 27 +++-
pym/portage/package/ebuild/fetch.py | 11 +-
pym/portage/package/ebuild/getmaskingstatus.py | 4 -
pym/portage/package/ebuild/prepare_build_dirs.py | 17 ++-
pym/portage/repository/config.py | 2 +-
pym/repoman/checks.py | 2 +-
26 files changed, 478 insertions(+), 126 deletions(-)
diff --cc pym/portage/elog/mod_save.py
index d1c9bb8,091bbf8..7f97182
--- a/pym/portage/elog/mod_save.py
+++ b/pym/portage/elog/mod_save.py
@@@ -10,8 -11,7 +11,8 @@@ from portage import _unicode_decod
from portage import _unicode_encode
from portage.data import portage_gid, portage_uid
from portage.package.ebuild.prepare_build_dirs import _ensure_log_subdirs
- from portage.util import ensure_dirs, normalize_path
+ from portage.util import apply_permissions, ensure_dirs, normalize_path
+from portage.const import EPREFIX_LSTRIP
def process(mysettings, key, logentries, fulltext):
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-07-26 17:35 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-07-26 17:35 UTC (permalink / raw
To: gentoo-commits
commit: cbbf2f3507bc425cc95c66b8d4451696370bcf30
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 26 16:22:12 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jul 26 16:22:12 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=cbbf2f35
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
NEWS | 6 +
man/emerge.1 | 8 +-
pym/_emerge/BinpkgFetcher.py | 2 +
pym/_emerge/EbuildBuild.py | 1 +
pym/_emerge/Scheduler.py | 44 +++---
pym/_emerge/SpawnProcess.py | 14 ++-
pym/_emerge/actions.py | 18 ++-
pym/_emerge/depgraph.py | 19 +--
pym/_emerge/help.py | 9 +-
pym/_emerge/main.py | 13 ++-
pym/portage/_sets/dbapi.py | 20 +--
pym/portage/_sets/libs.py | 21 ++-
pym/portage/dbapi/vartree.py | 23 ++-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 198 +++++++++++++++++++++------
14 files changed, 269 insertions(+), 127 deletions(-)
diff --cc pym/portage/util/_dyn_libs/LinkageMapELF.py
index abd0db9,52670d9..259f45b
--- a/pym/portage/util/_dyn_libs/LinkageMapELF.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapELF.py
@@@ -220,9 -232,9 +233,9 @@@ class LinkageMapELF(object)
# already represented via the preserve_paths
# parameter.
continue
- plibs.update(items)
+ plibs.update((x, cpv) for x in items)
if plibs:
- args = ["/usr/bin/scanelf", "-qF", "%a;%F;%S;%r;%n"]
+ args = [EPREFIX + "/usr/bin/scanelf", "-qF", "%a;%F;%S;%r;%n"]
args.extend(os.path.join(root, x.lstrip("." + os.sep)) \
for x in plibs)
try:
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-07-17 9:48 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-07-17 9:48 UTC (permalink / raw
To: gentoo-commits
commit: a88e72b98938a10edf4057b3ec73bc26295a585c
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 17 09:47:16 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jul 17 09:47:16 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=a88e72b9
configure: require Python 2.7
Save us the hassle of checking if Python 2.6 was built with threads (for
io), by just requiring Python 2.7. It is the only Python in Prefix
anyway.
---
configure.in | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/configure.in b/configure.in
index c839b96..5b909be 100644
--- a/configure.in
+++ b/configure.in
@@ -31,7 +31,7 @@ AC_PROG_LN_S
AC_PROG_EGREP
GENTOO_PATH_XCU_ID()
-GENTOO_PATH_PYTHON([2.6])
+GENTOO_PATH_PYTHON([2.7])
AC_PATH_PROG(PORTAGE_RM, [rm], no)
AC_PATH_PROG(PORTAGE_MV, [mv], no)
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-07-17 8:12 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-07-17 8:12 UTC (permalink / raw
To: gentoo-commits
commit: b128cd7bd251a176ff08f84503fc6764a33311f1
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Jul 17 08:11:06 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Jul 17 08:11:06 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=b128cd7b
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/binhost-snapshot
bin/ebuild-helpers/doman
bin/ebuild-helpers/prepall
bin/ebuild-helpers/prepallman
bin/ebuild-helpers/prepinfo
bin/ebuild-helpers/prepman
bin/ebuild-helpers/prepstrip
bin/ebuild.sh
bin/filter-bash-environment.py
bin/misc-functions.sh
pym/_emerge/Binpkg.py
pym/_emerge/main.py
pym/portage/elog/mod_save.py
pym/portage/elog/mod_save_summary.py
pym/portage/package/ebuild/prepare_build_dirs.py
bin/binhost-snapshot | 6 +-
bin/ebuild-helpers/doman | 8 +-
bin/ebuild-helpers/prepall | 6 +-
bin/ebuild-helpers/prepallman | 4 +-
bin/ebuild-helpers/prepinfo | 4 +-
bin/ebuild-helpers/prepman | 4 +-
bin/ebuild-helpers/prepstrip | 24 ++--
bin/ebuild.sh | 128 +++++++++---------
bin/egencache | 41 +++---
bin/emerge-webrsync | 4 +-
bin/filter-bash-environment.py | 12 +-
bin/isolated-functions.sh | 23 ++-
bin/misc-functions.sh | 36 +++---
bin/portageq | 3 +-
bin/repoman | 15 +-
cnf/logrotate.d/elog-save-summary | 3 +-
man/ebuild.5 | 8 +-
man/make.conf.5 | 6 +-
pym/_emerge/AbstractEbuildProcess.py | 10 +-
pym/_emerge/Binpkg.py | 28 ++--
pym/_emerge/BinpkgVerifier.py | 8 +-
pym/_emerge/EbuildBuild.py | 31 ++++-
pym/_emerge/EbuildFetcher.py | 144 ++++++++++++++++----
pym/_emerge/EbuildMetadataPhase.py | 4 +-
pym/_emerge/EbuildPhase.py | 38 +++---
pym/_emerge/JobStatusDisplay.py | 8 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/UseFlagDisplay.py | 7 +-
pym/_emerge/actions.py | 26 +++-
pym/_emerge/depgraph.py | 103 +++++++++-----
pym/_emerge/emergelog.py | 18 ++-
pym/_emerge/is_valid_package_atom.py | 18 ++-
pym/_emerge/main.py | 8 +-
pym/_emerge/resolver/circular_dependency.py | 48 +++----
pym/_emerge/resolver/output_helpers.py | 4 +-
pym/_emerge/resolver/slot_collision.py | 44 +++++-
pym/portage/__init__.py | 37 +-----
pym/portage/_ensure_encodings.py | 132 ------------------
pym/portage/cache/flat_hash.py | 19 ++-
pym/portage/cache/flat_list.py | 18 ++-
pym/portage/checksum.py | 9 +-
pym/portage/const.py | 1 +
pym/portage/cvstree.py | 6 +-
pym/portage/dbapi/_MergeProcess.py | 6 +-
pym/portage/dbapi/bintree.py | 5 +-
pym/portage/dbapi/cpv_expand.py | 6 +-
pym/portage/dbapi/porttree.py | 5 +-
pym/portage/dbapi/vartree.py | 39 +++---
pym/portage/dispatch_conf.py | 6 +-
pym/portage/elog/messages.py | 7 +-
pym/portage/elog/mod_save.py | 40 ++++--
pym/portage/elog/mod_save_summary.py | 48 +++++--
pym/portage/env/loaders.py | 6 +-
pym/portage/glsa.py | 8 +-
pym/portage/manifest.py | 11 +-
pym/portage/news.py | 6 +-
pym/portage/output.py | 10 +-
.../package/ebuild/_config/LocationsManager.py | 6 +-
.../package/ebuild/_config/special_env_vars.py | 3 +-
pym/portage/package/ebuild/_ipc/QueryCommand.py | 11 +-
.../package/ebuild/deprecated_profile_check.py | 6 +-
pym/portage/package/ebuild/digestcheck.py | 18 ++-
pym/portage/package/ebuild/doebuild.py | 47 ++++---
pym/portage/package/ebuild/fetch.py | 10 +-
pym/portage/package/ebuild/prepare_build_dirs.py | 55 +++++++-
pym/portage/repository/config.py | 10 +-
pym/portage/tests/ebuild/test_spawn.py | 6 +-
.../test_lazy_import_portage_baseline.py | 2 +-
pym/portage/tests/resolver/test_slot_collisions.py | 18 +++
pym/portage/update.py | 12 +-
pym/portage/util/ExtractKernelVersion.py | 6 +-
pym/portage/util/__init__.py | 39 ++++--
pym/portage/util/env_update.py | 4 +-
pym/repoman/checks.py | 34 ++++-
pym/repoman/errors.py | 2 +
pym/repoman/utilities.py | 6 +-
76 files changed, 908 insertions(+), 700 deletions(-)
diff --cc bin/binhost-snapshot
index 662b027,9d2697d..54d2102
--- a/bin/binhost-snapshot
+++ b/bin/binhost-snapshot
@@@ -1,8 -1,8 +1,8 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010 Gentoo Foundation
+ # Copyright 2010-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
- import codecs
+ import io
import optparse
import os
import sys
diff --cc bin/ebuild-helpers/doman
index cfe8a8f,4561bef..e947e0d
--- a/bin/ebuild-helpers/doman
+++ b/bin/ebuild-helpers/doman
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ $# -lt 1 ]] ; then
helpers_die "${0##*/}: at least one argument needed"
diff --cc bin/ebuild-helpers/prepall
index f9164c1,701ecba..a765756
--- a/bin/ebuild-helpers/prepall
+++ b/bin/ebuild-helpers/prepall
@@@ -1,17 -1,15 +1,17 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2007 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+
+[[ -d ${ED} ]] || exit 0
- if hasq chflags $FEATURES ; then
+ if has chflags $FEATURES ; then
# Save all the file flags for restoration at the end of prepall.
- mtree -c -p "${D}" -k flags > "${T}/bsdflags.mtree"
+ mtree -c -p "${ED}" -k flags > "${T}/bsdflags.mtree"
# Remove all the file flags so that prepall can do anything necessary.
- chflags -R noschg,nouchg,nosappnd,nouappnd "${D}"
- chflags -R nosunlnk,nouunlnk "${D}" 2>/dev/null
+ chflags -R noschg,nouchg,nosappnd,nouappnd "${ED}"
+ chflags -R nosunlnk,nouunlnk "${ED}" 2>/dev/null
fi
prepallman
@@@ -19,7 -17,7 +19,7 @@@ prepallinf
prepallstrip
- if hasq chflags $FEATURES ; then
+ if has chflags $FEATURES ; then
# Restore all the file flags that were saved at the beginning of prepall.
- mtree -U -e -p "${D}" -k flags < "${T}/bsdflags.mtree" &> /dev/null
+ mtree -U -e -p "${ED}" -k flags < "${T}/bsdflags.mtree" &> /dev/null
fi
diff --cc bin/ebuild-helpers/prepallman
index 20880fb,e50de6d..19437dd
--- a/bin/ebuild-helpers/prepallman
+++ b/bin/ebuild-helpers/prepallman
@@@ -1,11 -1,11 +1,11 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
# replaced by controllable compression in EAPI 4
- hasq "${EAPI}" 0 1 2 3 || exit 0
+ has "${EAPI}" 0 1 2 3 || exit 0
ret=0
diff --cc bin/ebuild-helpers/prepinfo
index 94e67f0,691fd13..c350fba
--- a/bin/ebuild-helpers/prepinfo
+++ b/bin/ebuild-helpers/prepinfo
@@@ -1,16 -1,16 +1,16 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -z $1 ]] ; then
- infodir="/usr/share/info"
+ infodir="${EPREFIX}/usr/share/info"
else
- if [[ -d ${D}$1/share/info ]] ; then
- infodir="$1/share/info"
+ if [[ -d ${ED}$1/share/info ]] ; then
+ infodir="${EPREFIX}$1/share/info"
else
- infodir="$1/info"
+ infodir="${EPREFIX}$1/info"
fi
fi
diff --cc bin/ebuild-helpers/prepman
index 231f9ae,c9add8a..f8c670f
--- a/bin/ebuild-helpers/prepman
+++ b/bin/ebuild-helpers/prepman
@@@ -1,13 -1,13 +1,13 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [[ -z $1 ]] ; then
- mandir="${D}usr/share/man"
+ mandir="${ED}usr/share/man"
else
- mandir="${D}$1/man"
+ mandir="${ED}$1/man"
fi
if [[ ! -d ${mandir} ]] ; then
diff --cc bin/ebuild-helpers/prepstrip
index 320e332,d25259d..9c5d9da
--- a/bin/ebuild-helpers/prepstrip
+++ b/bin/ebuild-helpers/prepstrip
@@@ -1,8 -1,8 +1,8 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2007 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
banner=false
SKIP_STRIP=false
@@@ -23,9 -23,9 +23,9 @@@ type -P -- ${OBJCOPY} > /dev/null || OB
# of the section when it has the ALLOC flag set on it ...
export SAFE_STRIP_FLAGS="--strip-unneeded"
export PORTAGE_STRIP_FLAGS=${PORTAGE_STRIP_FLAGS-${SAFE_STRIP_FLAGS} -R .comment}
-prepstrip_sources_dir=/usr/src/debug/${CATEGORY}/${PF}
+prepstrip_sources_dir="${EPREFIX}"/usr/src/debug/${CATEGORY}/${PF}
- if hasq installsources ${FEATURES} && ! type -P debugedit >/dev/null ; then
+ if has installsources ${FEATURES} && ! type -P debugedit >/dev/null ; then
ewarn "FEATURES=installsources is enabled but the debugedit binary could not"
ewarn "be found. This feature will not work unless debugedit is installed!"
fi
@@@ -53,10 -53,10 +53,10 @@@ save_elf_sources()
}
save_elf_debug() {
- hasq splitdebug ${FEATURES} || return 0
+ has splitdebug ${FEATURES} || return 0
local x=$1
- local y="${D}usr/lib/debug/${x:${#D}}.debug"
+ local y="${ED}usr/lib/debug/${x:${#D}}.debug"
+ # don't save debug info twice
[[ ${x} == *".debug" ]] && return 0
diff --cc bin/ebuild.sh
index a0b7b15,4aef413..eaf96ea
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -2137,13 -2124,13 +2139,13 @@@ if ! has "$EBUILD_PHASE" clean cleanrm
x=LIBDIR_${DEFAULT_ABI}
[[ -n $DEFAULT_ABI && -n ${!x} ]] && x=${!x} || x=lib
- if hasq distcc $FEATURES ; then
+ if has distcc $FEATURES ; then
- PATH="/usr/$x/distcc/bin:$PATH"
+ PATH="${EPREFIX}/usr/$x/distcc/bin:$PATH"
[[ -n $DISTCC_LOG ]] && addwrite "${DISTCC_LOG%/*}"
fi
- if hasq ccache $FEATURES ; then
+ if has ccache $FEATURES ; then
- PATH="/usr/$x/ccache/bin:$PATH"
+ PATH="${EPREFIX}/usr/$x/ccache/bin:$PATH"
if [[ -n $CCACHE_DIR ]] ; then
addread "$CCACHE_DIR"
diff --cc bin/filter-bash-environment.py
index 126e480,b9aec96..9cb42c1
--- a/bin/filter-bash-environment.py
+++ b/bin/filter-bash-environment.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2007 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
import codecs
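One detail of the shebang check added to misc-functions.sh below deserves a standalone illustration: the rewrite is done with sed -i, and GNU sed -i replaces a symlink with a regular file, so the check resolves the link and edits the real target instead. The file names and prefix here are assumed, not taken from the commit:
#!/usr/bin/env bash
# Why the QA check edits $(readlink ...) rather than the symlink itself.
cd "$(mktemp -d)" || exit 1
printf '#!/usr/bin/perl\nprint "hi\\n";\n' > real-script
ln -s real-script wrapper

# sed -i writes a new file and renames it over the destination, so the
# symlink "wrapper" silently becomes a regular file; real-script is untouched.
sed -i -e '1s:^#!/usr/bin/perl:#!/home/user/gentoo/usr/bin/perl:' wrapper
ls -l wrapper

# Resolving the link first keeps the symlink intact and fixes the real file.
rm wrapper && ln -s real-script wrapper
rf=$(readlink wrapper)
[[ ${rf} != /* ]] && rf=${PWD}/${rf}
sed -i -e '1s:^#!/usr/bin/perl:#!/home/user/gentoo/usr/bin/perl:' "${rf}"
head -n1 wrapper   # -> #!/home/user/gentoo/usr/bin/perl, and wrapper is still a symlink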
diff --cc bin/misc-functions.sh
index 5b4ff8a,8c191ff..eb2235d
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -177,41 -175,7 +177,41 @@@ install_qa_check()
sleep 1
done
+ # anything outside the prefix should be caught by the Prefix QA
+ # check, so if there's nothing in ED, we skip running the QA
+ # checks there; the specific QA funcs can hence rely on ED existing
+ if [[ -d ${ED} ]] ; then
+ case ${CHOST} in
+ *-darwin*)
+ # Mach-O platforms (NeXT, Darwin, OSX)
+ install_qa_check_macho
+ ;;
+ *-interix*|*-winnt*)
+ # PECOFF platforms (Windows/Interix)
+ install_qa_check_pecoff
+ ;;
+ *-aix*)
+ # XCOFF platforms (AIX)
+ install_qa_check_xcoff
+ ;;
+ *)
+ # because this is the majority: ELF platforms (Linux,
+ # Solaris, *BSD, IRIX, etc.)
+ install_qa_check_elf
+ ;;
+ esac
+ fi
+
+ # this is basically here such that the diff with trunk remains just
+ # offset and not out of order
+ install_qa_check_misc
+
+ # Prefix specific checks
+ [[ -n ${EPREFIX} ]] && install_qa_check_prefix
+}
+
+install_qa_check_elf() {
- if type -P scanelf > /dev/null && ! hasq binchecks ${RESTRICT}; then
+ if type -P scanelf > /dev/null && ! has binchecks ${RESTRICT}; then
local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
local x
@@@ -736,10 -692,10 +736,10 @@@ install_qa_check_misc()
fi
# Portage regenerates this on the installed system.
- rm -f "${D}"/usr/share/info/dir{,.gz,.bz2}
+ rm -f "${ED}"/usr/share/info/dir{,.gz,.bz2}
- if hasq multilib-strict ${FEATURES} && \
+ if has multilib-strict ${FEATURES} && \
- [[ -x /usr/bin/file && -x /usr/bin/find ]] && \
+ [[ -x ${EPREFIX}/usr/bin/file && -x ${EPREFIX}/usr/bin/find ]] && \
[[ -n ${MULTILIB_STRICT_DIRS} && -n ${MULTILIB_STRICT_DENY} ]]
then
local abort=no dir file firstrun=yes
@@@ -771,490 -727,6 +771,490 @@@
fi
}
+install_qa_check_prefix() {
+ if [[ -d ${ED}/${D} ]] ; then
+ find "${ED}/${D}" | \
+ while read i ; do
+ eqawarn "QA Notice: /${i##${ED}/${D}} installed in \${ED}/\${D}"
+ done
+ die "Aborting due to QA concerns: files installed in ${ED}/${D}"
+ fi
+
+ if [[ -d ${ED}/${EPREFIX} ]] ; then
+ find "${ED}/${EPREFIX}/" | \
+ while read i ; do
+ eqawarn "QA Notice: ${i#${D}} double prefix"
+ done
+ die "Aborting due to QA concerns: double prefix files installed"
+ fi
+
+ if [[ -d ${D} ]] ; then
+ INSTALLTOD=$(find ${D%/} | egrep -v "^${ED}" | sed -e "s|^${D%/}||" | awk '{if (length($0) <= length("'"${EPREFIX}"'")) { if (substr("'"${EPREFIX}"'", 1, length($0)) != $0) {print $0;} } else if (substr($0, 1, length("'"${EPREFIX}"'")) != "'"${EPREFIX}"'") {print $0;} }')
+ if [[ -n ${INSTALLTOD} ]] ; then
+ eqawarn "QA Notice: the following files are outside of the prefix:"
+ eqawarn "${INSTALLTOD}"
+ die "Aborting due to QA concerns: there are files installed outside the prefix"
+ fi
+ fi
+
+ # all further checks rely on ${ED} existing
+ [[ -d ${ED} ]] || return
+
+ # this does not really belong here, but it's closely tied to
+ # the code below; many runscripts generate positives here, and we
+ # know they don't work (bug #196294), so as long as that one
+ # remains an issue, simply remove them as they won't work
+ # anyway; just keep etc/init.d/functions.sh from being thrown away
+ if [[ ( -d "${ED}"/etc/conf.d || -d "${ED}"/etc/init.d ) && ! -f "${ED}"/etc/init.d/functions.sh ]] ; then
+ ewarn "removed /etc/init.d and /etc/conf.d directories until bug #196294 has been resolved"
+ rm -Rf "${ED}"/etc/{conf,init}.d
+ fi
+
+ # check shebangs, bug #282539
+ rm -f "${T}"/non-prefix-shebangs-errs
+ local WHITELIST=" /usr/bin/env "
+ # this is hell expensive, but how else?
+ find "${ED}" -executable \! -type d -print0 \
+ | xargs -0 grep -H -n -m1 "^#!" \
+ | while read f ;
+ do
+ local fn=${f%%:*}
+ local pos=${f#*:} ; pos=${pos%:*}
+ local line=${f##*:}
+ # shebang always appears on the first line ;)
+ [[ ${pos} != 1 ]] && continue
+ local oldIFS=${IFS}
+ IFS=$'\r'$'\n'$'\t'" "
+ line=( ${line#"#!"} )
+ IFS=${oldIFS}
+ [[ ${WHITELIST} == *" ${line[0]} "* ]] && continue
+ local fp=${fn#${D}} ; fp=/${fp%/*}
+ # line[0] can be an absolutised path, bug #342929
+ local eprefix=$(canonicalize ${EPREFIX})
+ local rf=${fn}
+ # in case we deal with a symlink, make sure we don't replace it
+ # with a real file (sed -i does that)
+ if [[ -L ${fn} ]] ; then
+ rf=$(readlink ${fn})
+ [[ ${rf} != /* ]] && rf=${fn%/*}/${rf}
+ # ignore symlinks pointing to outside prefix
+ # as seen in sys-devel/native-cctools
+ [[ ${rf} != ${EPREFIX}/* && $(canonicalize "${rf}") != ${eprefix}/* ]] && continue
+ fi
+ # does the shebang start with ${EPREFIX}, and does it exist?
+ if [[ ${line[0]} == ${EPREFIX}/* || ${line[0]} == ${eprefix}/* ]] ; then
+ if [[ ! -e ${ROOT%/}${line[0]} && ! -e ${D%/}${line[0]} ]] ; then
+ # hmm, refers explicitly to $EPREFIX, but doesn't exist,
+ # if it's in PATH that's wrong in any case
+ if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
+ echo "${fn#${D}}:${line[0]} (explicit EPREFIX but target not found)" \
+ >> "${T}"/non-prefix-shebangs-errs
+ else
+ eqawarn "${fn#${D}} has explicit EPREFIX in shebang but target not found (${line[0]})"
+ fi
+ fi
+ continue
+ fi
+ # unprefixed shebang, is the script directly in $PATH?
+ if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
+ if [[ -e ${EROOT}${line[0]} || -e ${ED}${line[0]} ]] ; then
+ # is it unprefixed, but we can just fix it because a
+ # prefixed variant exists
+ eqawarn "prefixing shebang of ${fn#${D}}"
+ # statement is made idempotent on purpose, because
+ # symlinks may point to the same target, and hence the
+ # same real file may be sedded multiple times since we
+ # read the shebangs in one go upfront for performance
+ # reasons
+ sed -i -e '1s:^#! \?'"${line[0]}"':#!'"${EPREFIX}"${line[0]}':' "${rf}"
+ continue
+ else
+ # this is definitely wrong: script in $PATH and invalid shebang
+ echo "${fn#${D}}:${line[0]} (script ${fn##*/} installed in PATH but interpreter ${line[0]} not found)" \
+ >> "${T}"/non-prefix-shebangs-errs
+ fi
+ else
+ # unprefixed/invalid shebang, but outside $PATH, this may be
+ # intended (e.g. config.guess) so remain silent by default
- hasq stricter ${FEATURES} && \
++ has stricter ${FEATURES} && \
+ eqawarn "invalid shebang in ${fn#${D}}: ${line[0]}"
+ fi
+ done
+ if [[ -e "${T}"/non-prefix-shebangs-errs ]] ; then
+ eqawarn "QA Notice: the following files use invalid (possible non-prefixed) shebangs:"
+ while read line ; do
+ eqawarn " ${line}"
+ done < "${T}"/non-prefix-shebangs-errs
+ rm -f "${T}"/non-prefix-shebangs-errs
+ die "Aborting due to QA concerns: invalid shebangs found"
+ fi
+}
+
+install_qa_check_macho() {
- if ! hasq binchecks ${RESTRICT} ; then
++ if ! has binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+ # version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ reevaluate=0
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ touch "${T}"/.install_name_check_failed
+ elif [[ ! -e ${lib} && ! -e ${D}${lib} && ${lib} != "@executable_path/"* && ${lib} != "@loader_path/"* ]] ; then
+ # try to "repair" this if possible, happens because of
+ # gen_usr_ldscript tactics
+ s=${lib%usr/*}${lib##*/usr/}
+ if [[ -e ${D}${s} ]] ; then
+ ewarn "correcting install_name from ${lib} to ${s} in ${obj}"
+ install_name_tool -change \
+ "${lib}" "${s}" "${D}${obj}"
+ reevaluate=1
+ else
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ touch "${T}"/.install_name_check_failed
+ fi
+ fi
+ done
+ if [[ ${reevaluate} == 1 ]]; then
+ # install_name(s) have been changed, refresh data so we
+ # store the correct meta data
+ l=$(scanmacho -qyF '%a;%p;%S;%n' ${D}${obj})
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+ fi
+
+ # backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
- hasq allow_broken_install_names ${FEATURES} || \
++ has allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+ # this is absolutely _not_ optimized for speed; there is plenty of room
+ # for improvement by introducing one cache or another!
- if ! hasq binchecks ${RESTRICT}; then
++ if ! has binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
+install_qa_check_xcoff() {
- if ! hasq binchecks ${RESTRICT}; then
++ if ! has binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on aix,
+ # as there is nothing like "soname" on pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ # shared objects have both EXEC and SHROBJ flags,
+ # while executables have EXEC flag only.
+ [[ " ${FLAGS} " == *" EXEC "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX does indicate this by needing either '.' or '..'
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
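
The function above writes one semicolon-separated record per object to build-info/NEEDED.XCOFF.1, in the shape format;object;soname;runpath;needed (with the needed field comma-separated). A minimal Python sketch of how such a record could be split back into fields; the helper name and dict keys are illustrative only, not portage API:

    def parse_needed_xcoff(record):
        """Split one 'format;object;soname;runpath;needed' record."""
        fmt, obj, soname, runpath, needed = record.rstrip("\n").split(";", 4)
        return {
            "format": fmt,                                   # e.g. "XCOFF32", "XCOFF64"
            "object": obj,                                   # installed path, possibly with [member]
            "soname": soname,
            "runpath": runpath.split(":") if runpath else [],
            "needed": needed.split(",") if needed else [],
        }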
+
install_mask() {
local root="$1"
@@@ -1280,204 -752,6 +1280,204 @@@
set -${shopts}
}
+preinst_aix() {
- if [[ ${CHOST} != *-aix* ]] || hasq binchecks ${RESTRICT}; then
++ if [[ ${CHOST} != *-aix* ]] || has binchecks ${RESTRICT}; then
+ return 0
+ fi
+ local ar strip
+ if type ${CHOST}-ar >/dev/null 2>&1 && type ${CHOST}-strip >/dev/null 2>&1; then
+ ar=${CHOST}-ar
+ strip=${CHOST}-strip
+ elif [[ ${CBUILD} == "${CHOST}" ]] && type ar >/dev/null 2>&1 && type strip >/dev/null 2>&1; then
+ ar=ar
+ strip=strip
+ elif [[ -x /usr/ccs/bin/ar && -x /usr/ccs/bin/strip ]]; then
+ ar=/usr/ccs/bin/ar
+ strip=/usr/ccs/bin/strip
+ else
+ die "cannot find where to use 'ar' and 'strip' from"
+ fi
+ local archives_members= archives=() chmod400files=()
+ local archive_member soname runpath needed archive contentmember
+ while read archive_member; do
+ archive_member=${archive_member#*;${EPREFIX}/} # drop "^type;EPREFIX/"
+ soname=${archive_member#*;}
+ runpath=${soname#*;}
+ needed=${runpath#*;}
+ soname=${soname%%;*}
+ runpath=${runpath%%;*}
+ archive_member=${archive_member%%;*} # drop ";soname;runpath;needed$"
+ archive=${archive_member%[*}
+ if [[ ${archive_member} != *'['*']' ]]; then
+ if [[ "${soname};${runpath};${needed}" == "${archive##*/};;" && -e ${EROOT}${archive} ]]; then
+ # most likely is an archive stub that already exists,
+ # may have to preserve members being a shared object.
+ archives[${#archives[@]}]=${archive}
+ fi
+ continue
+ fi
+ archives_members="${archives_members}:(${archive_member}):"
+ contentmember="${archive%/*}/.${archive##*/}${archive_member#${archive}}"
+ # portage does os.lstat() on merged files every now
+ # and then, so keep stamp-files for archive members
+ # around to get the preserve-libs feature working.
+ { echo "Please leave this file alone, it is an important helper"
+ echo "for portage to implement the 'preserve-libs' feature on AIX."
+ } > "${ED}${contentmember}" || die "cannot create ${contentmember}"
+ chmod400files[${#chmod400files[@]}]=${ED}${contentmember}
+ done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+ [[ ${#chmod400files[@]} == 0 ]] ||
+ chmod 0400 "${chmod400files[@]}" || die "cannot chmod ${chmod400files[@]}"
+
+ local preservemembers libmetadir prunedirs=()
+ local FILE MEMBER FLAGS
+ for archive in "${archives[@]}"; do
+ preservemembers=
+ while read line; do
+ [[ -n ${line} ]] || continue
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == ${EROOT}${archive} ]] ||
+ die "invalid result of aixdll-query for ${EROOT}${archive}"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${archives_members} == *":(${archive}[${MEMBER}]):"* ]] && continue
+ preservemembers="${preservemembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+ [[ -n ${preservemembers} ]] || continue
+ einfo "preserving (on spec) ${archive}[${preservemembers# }]"
+ libmetadir=${ED}${archive%/*}/.${archive##*/}
+ mkdir "${libmetadir}" || die "cannot create ${libmetadir}"
+ pushd "${libmetadir}" >/dev/null || die "cannot cd to ${libmetadir}"
+ ${ar} -X32_64 -x "${EROOT}${archive}" ${preservemembers} || die "cannot unpack ${EROOT}${archive}"
+ chmod u+w ${preservemembers} || die "cannot chmod${preservemembers}"
+ ${strip} -X32_64 -e ${preservemembers} || die "cannot strip${preservemembers}"
+ ${ar} -X32_64 -q "${ED}${archive}" ${preservemembers} || die "cannot update ${archive}"
+ eend $?
+ popd >/dev/null || die "cannot leave ${libmetadir}"
+ prunedirs[${#prunedirs[@]}]=${libmetadir}
+ done
+ [[ ${#prunedirs[@]} == 0 ]] ||
+ rm -rf "${prunedirs[@]}" || die "cannot prune ${prunedirs[@]}"
+ return 0
+}
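
preinst_aix above replaces each recorded archive member with a plain stamp file named dir/.libname.a[member], so that portage's lstat-based bookkeeping (and preserve-libs) keeps working. A rough sketch of that path derivation, assuming records like 'usr/lib/libfoo.a[shr.o]'; the helper is hypothetical, not part of portage:

    import os

    def member_stamp_path(archive_member):
        """'usr/lib/libfoo.a[shr.o]' -> 'usr/lib/.libfoo.a[shr.o]'"""
        archive, bracket, member = archive_member.partition("[")
        directory, basename = os.path.split(archive)
        # the stamp file lives next to the archive, named ".<archive>[member]"
        return os.path.join(directory, "." + basename + bracket + member)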
+
+postinst_aix() {
- if [[ ${CHOST} != *-aix* ]] || hasq binchecks ${RESTRICT}; then
++ if [[ ${CHOST} != *-aix* ]] || has binchecks ${RESTRICT}; then
+ return 0
+ fi
+ local MY_PR=${PR%r0}
+ local ar strip
+ if type ${CHOST}-ar >/dev/null 2>&1 && type ${CHOST}-strip >/dev/null 2>&1; then
+ ar=${CHOST}-ar
+ strip=${CHOST}-strip
+ elif [[ ${CBUILD} == "${CHOST}" ]] && type ar >/dev/null 2>&1 && type strip >/dev/null 2>&1; then
+ ar=ar
+ strip=strip
+ elif [[ -x /usr/ccs/bin/ar && -x /usr/ccs/bin/strip ]]; then
+ ar=/usr/ccs/bin/ar
+ strip=/usr/ccs/bin/strip
+ else
+ die "cannot find where to use 'ar' and 'strip' from"
+ fi
+ local archives_members= archives=() activearchives=
+ local archive_member soname runpath needed
+ while read archive_member; do
+ archive_member=${archive_member#*;${EPREFIX}/} # drop "^type;EPREFIX/"
+ soname=${archive_member#*;}
+ runpath=${soname#*;}
+ needed=${runpath#*;}
+ soname=${soname%%;*}
+ runpath=${runpath%%;*}
+ archive_member=${archive_member%%;*} # drop ";soname;runpath;needed$"
+ [[ ${archive_member} == *'['*']' ]] && continue
+ [[ "${soname};${runpath};${needed}" == "${archive_member##*/};;" ]] || continue
+ # most likely is an archive stub, we might have to
+ # drop members being preserved shared objects.
+ archives[${#archives[@]}]=${archive_member}
+ activearchives="${activearchives}:(${archive_member}):"
+ done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local type allcontentmembers= oldarchives=()
+ local contentmember
+ while read type contentmember; do
+ [[ ${type} == 'obj' ]] || continue
+ contentmember=${contentmember% *} # drop " timestamp$"
+ contentmember=${contentmember% *} # drop " hash$"
+ [[ ${contentmember##*/} == *'['*']' ]] || continue
+ contentmember=${contentmember#${EPREFIX}/}
+ allcontentmembers="${allcontentmembers}:(${contentmember}):"
+ contentmember=${contentmember%[*}
+ contentmember=${contentmember%/.*}/${contentmember##*/.}
+ [[ ${activearchives} == *":(${contentmember}):"* ]] && continue
+ oldarchives[${#oldarchives[@]}]=${contentmember}
+ done < "${EPREFIX}/var/db/pkg/${CATEGORY}/${P}${MY_PR:+-}${MY_PR}/CONTENTS"
+
+ local archive line delmembers
+ local FILE MEMBER FLAGS
+ for archive in "${archives[@]}"; do
+ [[ -r ${EROOT}${archive} && -w ${EROOT}${archive} ]] ||
+ chmod a+r,u+w "${EROOT}${archive}" || die "cannot chmod ${EROOT}${archive}"
+ delmembers=
+ while read line; do
+ [[ -n ${line} ]] || continue
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == "${EROOT}${archive}" ]] ||
+ die "invalid result '${FILE}' of aixdll-query, expected '${EROOT}${archive}'"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${allcontentmembers} == *":(${archive%/*}/.${archive##*/}[${MEMBER}]):"* ]] && continue
+ delmembers="${delmembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+ [[ -n ${delmembers} ]] || continue
+ einfo "dropping ${archive}[${delmembers# }]"
+ rm -f "${EROOT}${archive}".new || die "cannot prune ${EROOT}${archive}.new"
+ cp "${EROOT}${archive}" "${EROOT}${archive}".new || die "cannot backup ${archive}"
+ ${ar} -X32_64 -z -o -d "${EROOT}${archive}".new ${delmembers} || die "cannot remove${delmembers} from ${archive}.new"
+ mv -f "${EROOT}${archive}".new "${EROOT}${archive}" || die "cannot put ${EROOT}${archive} in place"
+ eend $?
+ done
+ local libmetadir keepmembers prunedirs=()
+ for archive in "${oldarchives[@]}"; do
+ [[ -r ${EROOT}${archive} && -w ${EROOT}${archive} ]] ||
+ chmod a+r,u+w "${EROOT}${archive}" || die "cannot chmod ${EROOT}${archive}"
+ keepmembers=
+ while read line; do
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == "${EROOT}${archive}" ]] ||
+ die "invalid result of aixdll-query for ${EROOT}${archive}"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${allcontentmembers} == *":(${archive%/*}/.${archive##*/}[${MEMBER}]):"* ]] || continue
+ keepmembers="${keepmembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+
+ if [[ -n ${keepmembers} ]]; then
+ einfo "preserving (extra)${keepmembers}"
+ libmetadir=${EROOT}${archive%/*}/.${archive##*/}
+ [[ ! -e ${libmetadir} ]] || rm -rf "${libmetadir}" || die "cannot prune ${libmetadir}"
+ mkdir "${libmetadir}" || die "cannot create ${libmetadir}"
+ pushd "${libmetadir}" >/dev/null || die "cannot cd to ${libmetadir}"
+ ${ar} -X32_64 -x "${EROOT}${archive}" ${keepmembers} || die "cannot unpack ${archive}"
+ ${strip} -X32_64 -e ${keepmembers} || die "cannot strip ${keepmembers}"
+ rm -f "${EROOT}${archive}.new" || die "cannot prune ${EROOT}${archive}.new"
+ ${ar} -X32_64 -q "${EROOT}${archive}.new" ${keepmembers} || die "cannot create ${EROOT}${archive}.new"
+ mv -f "${EROOT}${archive}.new" "${EROOT}${archive}" || die "cannot put ${EROOT}${archive} in place"
+ popd > /dev/null || die "cannot leave ${libmetadir}"
+ prunedirs[${#prunedirs[@]}]=${libmetadir}
+ eend $?
+ fi
+ done
+ [[ ${#prunedirs[@]} == 0 ]] ||
+ rm -rf "${prunedirs[@]}" || die "cannot prune ${prunedirs[@]}"
+ return 0
+}
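
postinst_aix above decides per shared member whether to drop it from a freshly merged archive stub, or to preserve it in a stub left over from the previous install, by checking whether the member's stamp file is still listed in CONTENTS. A condensed sketch of that keep/drop decision under the same stamp-path convention; all names are hypothetical:

    import os

    def member_stamp(archive, member):
        """'usr/lib/libfoo.a', 'shr.o' -> 'usr/lib/.libfoo.a[shr.o]'"""
        directory, basename = os.path.split(archive)
        return os.path.join(directory, ".%s[%s]" % (basename, member))

    def members_to_drop(archive, shared_members, content_stamps):
        # freshly merged stub: shared members without a stamp in CONTENTS get pruned
        return [m for m in shared_members
                if member_stamp(archive, m) not in content_stamps]

    def members_to_keep(archive, shared_members, content_stamps):
        # stub left over from a previous install: only members still in CONTENTS survive
        return [m for m in shared_members
                if member_stamp(archive, m) in content_stamps]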
+
preinst_mask() {
if [ -z "${D}" ]; then
eerror "${FUNCNAME}: D is unset"
@@@ -1491,8 -765,8 +1491,8 @@@
# remove man pages, info pages, docs if requested
local f
for f in man info doc; do
- if hasq no${f} $FEATURES; then
+ if has no${f} $FEATURES; then
- INSTALL_MASK="${INSTALL_MASK} /usr/share/${f}"
+ INSTALL_MASK="${INSTALL_MASK} ${EPREFIX}/usr/share/${f}"
fi
done
@@@ -1545,9 -819,9 +1545,9 @@@ preinst_suid_scan()
return 1
fi
# total suid control.
- if hasq suidctl $FEATURES; then
+ if has suidctl $FEATURES; then
local i sfconf x
- sfconf=${PORTAGE_CONFIGROOT}etc/portage/suidctl.conf
+ sfconf=${PORTAGE_CONFIGROOT}${EPREFIX#/}/etc/portage/suidctl.conf
# sandbox prevents us from writing directly
# to files outside of the sandbox, but this
	# can easily be bypassed using the addwrite() function
diff --cc pym/_emerge/Binpkg.py
index eded78b,bc6511e..6c944e2
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@@ -15,17 -14,11 +15,12 @@@ from portage.util import writems
import portage
from portage import os
from portage import _encodings
+ from portage import _unicode_decode
from portage import _unicode_encode
- import codecs
- import sys
- if os.environ.__contains__("PORTAGE_PYTHONPATH"):
- sys.path.insert(0, os.environ["PORTAGE_PYTHONPATH"])
- else:
- sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "pym"))
- import portage
+ import io
import logging
from portage.output import colorize
+from portage.const import EPREFIX
class Binpkg(CompositeTask):
diff --cc pym/_emerge/actions.py
index 24dd4d7,f6c2721..049b10f
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -25,10 -21,10 +21,10 @@@ from itertools import chai
import portage
from portage import os
- from portage import digraph
+ from portage import subprocess_getstatusoutput
from portage import _unicode_decode
from portage.cache.cache_errors import CacheError
-from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH
+from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH, EPREFIX
from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
from portage.dbapi.dep_expand import dep_expand
from portage.dbapi._expand_new_virt import expand_new_virt
diff --cc pym/_emerge/emergelog.py
index 20cec9f,d6ef1b4..629dae8
--- a/pym/_emerge/emergelog.py
+++ b/pym/_emerge/emergelog.py
@@@ -18,8 -18,12 +19,12 @@@ from portage.const import EPREFI
# dblink.merge() and we don't want that to trigger log writes
# unless it's really called via emerge.
_disable = True
-_emerge_log_dir = '/var/log'
+_emerge_log_dir = EPREFIX + '/var/log'
+ # Coerce to unicode, in order to prevent TypeError when writing
+ # raw bytes to TextIOWrapper with python2.
+ _log_fmt = _unicode_decode("%.0f: %s\n")
+
def emergelog(xterm_titles, mystr, short_msg=None):
if _disable:
diff --cc pym/_emerge/main.py
index 7a9fa9c,42ce810..b97e72b
--- a/pym/_emerge/main.py
+++ b/pym/_emerge/main.py
@@@ -163,7 -158,9 +159,9 @@@ def chk_updated_info_files(root, infodi
raise
del e
processed_count += 1
- myso=subprocess_getstatusoutput("LANG=C LANGUAGE=C "+EPREFIX+"/usr/bin/install-info --dir-file="+inforoot+"/dir "+inforoot+"/"+x)[1]
+ myso = portage.subprocess_getstatusoutput(
- "LANG=C LANGUAGE=C /usr/bin/install-info " +
- "--dir-file=%s/dir %s/%s" % (inforoot, inforoot, x))[1]
++ "LANG=C LANGUAGE=C %s/usr/bin/install-info " +
++ "--dir-file=%s/dir %s/%s" % (EPREFIX, inforoot, inforoot, x))[1]
existsstr="already exists, for file `"
if myso!="":
if re.search(existsstr,myso):
diff --cc pym/portage/dbapi/vartree.py
index 3ed96e9,9efc47f..e57df13
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -37,10 -34,9 +37,9 @@@ portage.proxy.lazyimport.lazyimport(glo
)
from portage.const import CACHE_PATH, CONFIG_MEMORY_FILE, \
- PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH
+ PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH, EPREFIX, EPREFIX_LSTRIP, BASH_BINARY
from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_PRESERVE_LIBS
from portage.dbapi import dbapi
- from portage.dep import _slot_separator
from portage.exception import CommandNotFound, \
InvalidData, InvalidLocation, InvalidPackageName, \
FileNotFound, PermissionDenied, UnsupportedAPIException
diff --cc pym/portage/elog/mod_save.py
index f6a577e,9350a6e..d1c9bb8
--- a/pym/portage/elog/mod_save.py
+++ b/pym/portage/elog/mod_save.py
@@@ -8,18 -8,24 +8,25 @@@ from portage import o
from portage import _encodings
from portage import _unicode_decode
from portage import _unicode_encode
- from portage.data import portage_uid, portage_gid
- from portage.util import ensure_dirs
- from portage.const import EPREFIX
+ from portage.data import portage_gid, portage_uid
+ from portage.package.ebuild.prepare_build_dirs import _ensure_log_subdirs
+ from portage.util import ensure_dirs, normalize_path
++from portage.const import EPREFIX_LSTRIP
def process(mysettings, key, logentries, fulltext):
- path = key.replace("/", ":")
- if mysettings["PORT_LOGDIR"] != "":
- elogdir = os.path.join(mysettings["PORT_LOGDIR"], "elog")
+ if mysettings.get("PORT_LOGDIR"):
+ logdir = normalize_path(mysettings["PORT_LOGDIR"])
else:
- elogdir = os.path.join(EPREFIX, "var", "log", "portage", "elog")
- ensure_dirs(elogdir, uid=portage_uid, gid=portage_gid, mode=0o2770)
- logdir = os.path.join(os.sep, "var", "log", "portage")
++ logdir = os.path.join(os.sep, EPREFIX_LSTRIP, "var", "log", "portage")
+
+ if not os.path.isdir(logdir):
+ # Only initialize group/mode if the directory doesn't
+ # exist, so that we don't override permissions if they
+ # were previously set by the administrator.
+ # NOTE: These permissions should be compatible with our
+ # default logrotate config as discussed in bug 374287.
+ ensure_dirs(logdir, uid=portage_uid, gid=portage_gid, mode=0o2770)
cat = mysettings['CATEGORY']
pf = mysettings['PF']
diff --cc pym/portage/elog/mod_save_summary.py
index 5d99a7e,4adc6f3..b0ea4a1
--- a/pym/portage/elog/mod_save_summary.py
+++ b/pym/portage/elog/mod_save_summary.py
@@@ -8,17 -8,27 +8,28 @@@ from portage import o
from portage import _encodings
from portage import _unicode_decode
from portage import _unicode_encode
- from portage.data import portage_uid, portage_gid
+ from portage.data import portage_gid, portage_uid
from portage.localization import _
- from portage.util import ensure_dirs, apply_permissions
+ from portage.package.ebuild.prepare_build_dirs import _ensure_log_subdirs
+ from portage.util import apply_permissions, ensure_dirs, normalize_path
+from portage.const import EPREFIX_LSTRIP
def process(mysettings, key, logentries, fulltext):
- if mysettings["PORT_LOGDIR"] != "":
- elogdir = os.path.join(mysettings["PORT_LOGDIR"], "elog")
+ if mysettings.get("PORT_LOGDIR"):
+ logdir = normalize_path(mysettings["PORT_LOGDIR"])
else:
- elogdir = os.path.join("/", EPREFIX_LSTRIP, "var", "log", "portage", "elog")
- ensure_dirs(elogdir, uid=portage_uid, gid=portage_gid, mode=0o2770)
- logdir = os.path.join(os.sep, "var", "log", "portage")
++ logdir = os.path.join(os.sep, EPREFIX_LSTRIP, "var", "log", "portage")
+
+ if not os.path.isdir(logdir):
+ # Only initialize group/mode if the directory doesn't
+ # exist, so that we don't override permissions if they
+ # were previously set by the administrator.
+ # NOTE: These permissions should be compatible with our
+ # default logrotate config as discussed in bug 374287.
+ ensure_dirs(logdir, uid=portage_uid, gid=portage_gid, mode=0o2770)
+
+ elogdir = os.path.join(logdir, "elog")
+ _ensure_log_subdirs(logdir, elogdir)
# TODO: Locking
elogfilename = elogdir+"/summary.log"
diff --cc pym/portage/package/ebuild/fetch.py
index b2cc2a3,2ae1fe8..388c209
--- a/pym/portage/package/ebuild/fetch.py
+++ b/pym/portage/package/ebuild/fetch.py
@@@ -29,10 -29,9 +29,10 @@@ from portage import OrderedDict, os, se
from portage.checksum import hashfunc_map, perform_md5, verify_all
from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
GLOBAL_CONFIG_PATH
+from portage.const import rootgid
from portage.data import portage_gid, portage_uid, secpass, userpriv_groups
from portage.exception import FileNotFound, OperationNotPermitted, \
- PermissionDenied, PortageException, TryAgain
+ PortageException, TryAgain
from portage.localization import _
from portage.locks import lockfile, unlockfile
from portage.manifest import Manifest
diff --cc pym/portage/package/ebuild/prepare_build_dirs.py
index 053ebcc,616dc2e..16ba052
--- a/pym/portage/package/ebuild/prepare_build_dirs.py
+++ b/pym/portage/package/ebuild/prepare_build_dirs.py
@@@ -16,8 -16,7 +16,8 @@@ from portage.exception import Directory
from portage.localization import _
from portage.output import colorize
from portage.util import apply_recursive_permissions, \
- apply_secpass_permissions, ensure_dirs, writemsg
+ apply_secpass_permissions, ensure_dirs, normalize_path, writemsg
+from portage.const import EPREFIX
def prepare_build_dirs(myroot=None, settings=None, cleanup=False):
"""
diff --cc pym/portage/util/env_update.py
index 4587a2c,eb8a0d9..2650f15
--- a/pym/portage/util/env_update.py
+++ b/pym/portage/util/env_update.py
@@@ -124,19 -123,9 +124,19 @@@ def env_update(makelinks=1, target_root
they won't be overwritten by this dict.update call."""
env.update(myconfig)
+ if EPREFIX == '':
+ dolinkingstuff(target_root, specials, prelink_capable,
+ makelinks, contents, prev_mtimes, env)
+ writeshellprofile(target_root, env)
+
+def dolinkingstuff(target_root, specials, prelink_capable, makelinks,
+ contents, prev_mtimes, env):
+ # updating this stuff will never work in an offset, other than ROOT
+ # (e.g. not in Prefix), hence the EPREFIX is not taken into account
+ # here since this code should never be triggered on an offset install
ldsoconf_path = os.path.join(target_root, "etc", "ld.so.conf")
try:
- myld = codecs.open(_unicode_encode(ldsoconf_path,
+ myld = io.open(_unicode_encode(ldsoconf_path,
encoding=_encodings['fs'], errors='strict'),
mode='r', encoding=_encodings['content'], errors='replace')
myldlines=myld.readlines()
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-07-01 17:44 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-07-01 17:44 UTC (permalink / raw
To: gentoo-commits
commit: 1683b8f2933a2e23785ae2e7e3b23eec831f9d93
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 1 17:42:16 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jul 1 17:42:16 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=1683b8f2
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/portage/package/ebuild/prepare_build_dirs.py
pym/portage/util/__init__.py
RELEASE-NOTES | 2 +-
bin/ebuild | 23 +++-
bin/ebuild-helpers/ecompress | 6 +-
bin/emaint | 45 +++++-
bin/misc-functions.sh | 11 ++
bin/portageq | 13 +-
bin/repoman | 37 ++++-
cnf/make.globals | 2 +-
man/ebuild.5 | 5 +-
man/env-update.1 | 3 +-
man/make.conf.5 | 9 +-
man/portage.5 | 7 +
man/repoman.1 | 4 +
pym/_emerge/EbuildPhase.py | 10 ++
pym/_emerge/Package.py | 33 ++++-
pym/_emerge/UseFlagDisplay.py | 8 +-
pym/_emerge/depgraph.py | 21 ++-
pym/_emerge/resolver/circular_dependency.py | 24 ++-
pym/_emerge/resolver/output.py | 13 +-
pym/portage/checksum.py | 15 ++-
pym/portage/const.py | 2 +-
pym/portage/dbapi/vartree.py | 30 ++--
pym/portage/dep/__init__.py | 4 +-
pym/portage/package/ebuild/_config/UseManager.py | 170 +++++++++++++-------
.../package/ebuild/_config/special_env_vars.py | 6 +
pym/portage/package/ebuild/config.py | 57 +++++++-
pym/portage/package/ebuild/doebuild.py | 141 ++++++++++++++++-
pym/portage/package/ebuild/prepare_build_dirs.py | 14 ++-
pym/portage/repository/config.py | 3 +
pym/portage/tests/util/test_varExpand.py | 35 ++++-
pym/portage/util/__init__.py | 52 +++----
pym/portage/util/_dyn_libs/LinkageMapELF.py | 30 +++-
pym/portage/util/env_update.py | 14 +-
33 files changed, 668 insertions(+), 181 deletions(-)
diff --cc bin/emaint
index 6d68cb0,fdd01ed..c8a76d5
--- a/bin/emaint
+++ b/bin/emaint
@@@ -10,16 -12,18 +12,20 @@@ import textwra
import time
from optparse import OptionParser, OptionValueError
-try:
- import portage
-except ImportError:
- from os import path as osp
- sys.path.insert(0, osp.join(osp.dirname(osp.dirname(osp.realpath(__file__))), "pym"))
- import portage
+# for an explanation on this logic, see pym/_emerge/__init__.py
+import os
+import sys
+if os.environ.__contains__("PORTAGE_PYTHONPATH"):
+ sys.path.insert(0, os.environ["PORTAGE_PYTHONPATH"])
+else:
+ sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "pym"))
+import portage
from portage import os
+ from portage.util import writemsg
+
+ if sys.hexversion >= 0x3000000:
+ long = int
class WorldHandler(object):
diff --cc bin/misc-functions.sh
index dce730a,d708c1d..bfb6b03
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -758,490 -715,18 +759,500 @@@ install_qa_check_misc()
done
[[ ${abort} == yes ]] && die "multilib-strict check failed!"
fi
+
+ # ensure packages don't install systemd units automagically
+ if ! hasq systemd ${INHERITED} && \
+ [[ -d "${D}"/lib/systemd/system ]]
+ then
+ eqawarn "QA Notice: package installs systemd unit files (/lib/systemd/system)"
+ eqawarn " but does not inherit systemd.eclass."
+ hasq stricter ${FEATURES} \
+ && die "install aborted due to missing inherit of systemd.eclass"
+ fi
}
+install_qa_check_prefix() {
+ if [[ -d ${ED}/${D} ]] ; then
+ find "${ED}/${D}" | \
+ while read i ; do
+ eqawarn "QA Notice: /${i##${ED}/${D}} installed in \${ED}/\${D}"
+ done
+ die "Aborting due to QA concerns: files installed in ${ED}/${D}"
+ fi
+
+ if [[ -d ${ED}/${EPREFIX} ]] ; then
+ find "${ED}/${EPREFIX}/" | \
+ while read i ; do
+ eqawarn "QA Notice: ${i#${D}} double prefix"
+ done
+ die "Aborting due to QA concerns: double prefix files installed"
+ fi
+
+ if [[ -d ${D} ]] ; then
+ INSTALLTOD=$(find ${D%/} | egrep -v "^${ED}" | sed -e "s|^${D%/}||" | awk '{if (length($0) <= length("'"${EPREFIX}"'")) { if (substr("'"${EPREFIX}"'", 1, length($0)) != $0) {print $0;} } else if (substr($0, 1, length("'"${EPREFIX}"'")) != "'"${EPREFIX}"'") {print $0;} }')
+ if [[ -n ${INSTALLTOD} ]] ; then
+ eqawarn "QA Notice: the following files are outside of the prefix:"
+ eqawarn "${INSTALLTOD}"
+ die "Aborting due to QA concerns: there are files installed outside the prefix"
+ fi
+ fi
+
+ # all further checks rely on ${ED} existing
+ [[ -d ${ED} ]] || return
+
+ # this does not really belong here, but it's closely tied to
+ # the code below; many runscripts generate positives here, and we
+	# know they don't work (bug #196294), so as long as that one
+	# remains an issue, simply remove them since they won't work
+	# anyway; do keep etc/init.d/functions.sh from being thrown away
+ if [[ ( -d "${ED}"/etc/conf.d || -d "${ED}"/etc/init.d ) && ! -f "${ED}"/etc/init.d/functions.sh ]] ; then
+ ewarn "removed /etc/init.d and /etc/conf.d directories until bug #196294 has been resolved"
+ rm -Rf "${ED}"/etc/{conf,init}.d
+ fi
+
+ # check shebangs, bug #282539
+ rm -f "${T}"/non-prefix-shebangs-errs
+ local WHITELIST=" /usr/bin/env "
+ # this is hell expensive, but how else?
+ find "${ED}" -executable \! -type d -print0 \
+ | xargs -0 grep -H -n -m1 "^#!" \
+ | while read f ;
+ do
+ local fn=${f%%:*}
+ local pos=${f#*:} ; pos=${pos%:*}
+ local line=${f##*:}
+ # shebang always appears on the first line ;)
+ [[ ${pos} != 1 ]] && continue
+ local oldIFS=${IFS}
+ IFS=$'\r'$'\n'$'\t'" "
+ line=( ${line#"#!"} )
+ IFS=${oldIFS}
+ [[ ${WHITELIST} == *" ${line[0]} "* ]] && continue
+ local fp=${fn#${D}} ; fp=/${fp%/*}
+ # line[0] can be an absolutised path, bug #342929
+ local eprefix=$(canonicalize ${EPREFIX})
+ local rf=${fn}
+ # in case we deal with a symlink, make sure we don't replace it
+ # with a real file (sed -i does that)
+ if [[ -L ${fn} ]] ; then
+ rf=$(readlink ${fn})
+ [[ ${rf} != /* ]] && rf=${fn%/*}/${rf}
+ # ignore symlinks pointing to outside prefix
+ # as seen in sys-devel/native-cctools
+ [[ ${rf} != ${EPREFIX}/* && $(canonicalize "${rf}") != ${eprefix}/* ]] && continue
+ fi
+ # does the shebang start with ${EPREFIX}, and does it exist?
+ if [[ ${line[0]} == ${EPREFIX}/* || ${line[0]} == ${eprefix}/* ]] ; then
+ if [[ ! -e ${ROOT%/}${line[0]} && ! -e ${D%/}${line[0]} ]] ; then
+ # hmm, refers explicitly to $EPREFIX, but doesn't exist,
+ # if it's in PATH that's wrong in any case
+ if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
+ echo "${fn#${D}}:${line[0]} (explicit EPREFIX but target not found)" \
+ >> "${T}"/non-prefix-shebangs-errs
+ else
+ eqawarn "${fn#${D}} has explicit EPREFIX in shebang but target not found (${line[0]})"
+ fi
+ fi
+ continue
+ fi
+ # unprefixed shebang, is the script directly in $PATH?
+ if [[ ":${PATH}:" == *":${fp}:"* ]] ; then
+ if [[ -e ${EROOT}${line[0]} || -e ${ED}${line[0]} ]] ; then
+ # is it unprefixed, but we can just fix it because a
+ # prefixed variant exists
+ eqawarn "prefixing shebang of ${fn#${D}}"
+ # statement is made idempotent on purpose, because
+ # symlinks may point to the same target, and hence the
+ # same real file may be sedded multiple times since we
+ # read the shebangs in one go upfront for performance
+ # reasons
+ sed -i -e '1s:^#! \?'"${line[0]}"':#!'"${EPREFIX}"${line[0]}':' "${rf}"
+ continue
+ else
+ # this is definitely wrong: script in $PATH and invalid shebang
+ echo "${fn#${D}}:${line[0]} (script ${fn##*/} installed in PATH but interpreter ${line[0]} not found)" \
+ >> "${T}"/non-prefix-shebangs-errs
+ fi
+ else
+ # unprefixed/invalid shebang, but outside $PATH, this may be
+ # intended (e.g. config.guess) so remain silent by default
+ hasq stricter ${FEATURES} && \
+ eqawarn "invalid shebang in ${fn#${D}}: ${line[0]}"
+ fi
+ done
+ if [[ -e "${T}"/non-prefix-shebangs-errs ]] ; then
+ eqawarn "QA Notice: the following files use invalid (possible non-prefixed) shebangs:"
+ while read line ; do
+ eqawarn " ${line}"
+ done < "${T}"/non-prefix-shebangs-errs
+ rm -f "${T}"/non-prefix-shebangs-errs
+ die "Aborting due to QA concerns: invalid shebangs found"
+ fi
+}
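
The shebang scan above boils down to a per-file decision based on whether the interpreter is already EPREFIX-qualified and whether the script is installed into PATH. A condensed sketch of that decision, ignoring the whitelist and symlink handling; names and return values are illustrative only:

    def classify_shebang(interp, eprefix, in_path, target_exists):
        """interp: interpreter path from '#!'; target_exists: callable(path) -> bool."""
        if interp.startswith(eprefix + "/"):
            # explicit EPREFIX: fine when the target exists, a QA problem otherwise
            return "ok" if target_exists(interp) else "error"
        if in_path:
            if target_exists(eprefix + interp):
                return "rewrite"   # a prefixed interpreter exists, so the shebang can be fixed
            return "error"         # script installed into PATH with an unusable interpreter
        return "warn"              # outside PATH this may be intended (e.g. config.guess)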
+
+install_qa_check_macho() {
+ if ! hasq binchecks ${RESTRICT} ; then
+ # on Darwin, dynamic libraries are called .dylibs instead of
+ # .sos. In addition the version component is before the
+ # extension, not after it. Check for this, and *only* warn
+ # about it. Some packages do ship .so files on Darwin and make
+ # it work (ugly!).
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.so" -or -name "*.so.*" | \
+ while read i ; do
+ [[ $(file $i) == *"Mach-O"* ]] && \
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f ${T}/mach-o.check ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found .so dynamic libraries on Darwin:"
+ eqawarn " ${f//$'\n'/\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+
+ # The naming for dynamic libraries is different on Darwin; the
+	# version component is before the extension, instead of after
+ # it, as with .sos. Again, make this a warning only.
+ rm -f "${T}/mach-o.check"
+ find ${ED%/} -name "*.dylib.*" | \
+ while read i ; do
+ echo "${i#${D}}" >> "${T}/mach-o.check"
+ done
+ if [[ -f "${T}/mach-o.check" ]] ; then
+ f=$(< "${T}/mach-o.check")
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: Found wrongly named dynamic libraries on Darwin:"
+ eqawarn " ${f// /\n }"
+ fi
+ rm -f "${T}/mach-o.check"
+ fi
+
+ # While we generate the NEEDED files, check that we don't get kernel
+ # traps at runtime because of broken install_names on Darwin.
+ rm -f "${T}"/.install_name_check_failed
+ scanmacho -qyRF '%a;%p;%S;%n' "${D}" | { while IFS= read l ; do
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+
+ # See if the self-reference install_name points to an existing
+ # and to be installed file. This usually is a symlink for the
+ # major version.
+ if [[ ! -e ${D}${install_name} ]] ; then
+ eqawarn "QA Notice: invalid self-reference install_name ${install_name} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ touch "${T}"/.install_name_check_failed
+ fi
+
+ # this is ugly, paths with spaces won't work
+ reevaluate=0
+ for lib in ${needed//,/ } ; do
+ if [[ ${lib} == ${D}* ]] ; then
+ eqawarn "QA Notice: install_name references \${D}: ${lib} in ${obj}"
+ touch "${T}"/.install_name_check_failed
+ elif [[ ${lib} == ${S}* ]] ; then
+ eqawarn "QA Notice: install_name references \${S}: ${lib} in ${obj}"
+ touch "${T}"/.install_name_check_failed
+ elif [[ ! -e ${lib} && ! -e ${D}${lib} && ${lib} != "@executable_path/"* && ${lib} != "@loader_path/"* ]] ; then
+ # try to "repair" this if possible, happens because of
+ # gen_usr_ldscript tactics
+ s=${lib%usr/*}${lib##*/usr/}
+ if [[ -e ${D}${s} ]] ; then
+ ewarn "correcting install_name from ${lib} to ${s} in ${obj}"
+ install_name_tool -change \
+ "${lib}" "${s}" "${D}${obj}"
+ reevaluate=1
+ else
+ eqawarn "QA Notice: invalid reference to ${lib} in ${obj}"
+ # remember we are in an implicit subshell, that's
+ # why we touch a file here ... ideally we should be
+ # able to die correctly/nicely here
+ touch "${T}"/.install_name_check_failed
+ fi
+ fi
+ done
+ if [[ ${reevaluate} == 1 ]]; then
+ # install_name(s) have been changed, refresh data so we
+ # store the correct meta data
+ l=$(scanmacho -qyF '%a;%p;%S;%n' ${D}${obj})
+ arch=${l%%;*}; l=${l#*;}
+ obj="/${l%%;*}"; l=${l#*;}
+ install_name=${l%%;*}; l=${l#*;}
+ needed=${l%%;*}; l=${l#*;}
+ fi
+
+	# backwards compatibility
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ # what we use
+ echo "${arch};${obj};${install_name};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.MACHO.3
+ done }
+ if [[ -f ${T}/.install_name_check_failed ]] ; then
+ # secret switch "allow_broken_install_names" to get
+ # around this and install broken crap (not a good idea)
+ hasq allow_broken_install_names ${FEATURES} || \
+ die "invalid install_name found, your application or library will crash at runtime"
+ fi
+}
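
install_qa_check_macho above consumes scanmacho's '%a;%p;%S;%n' records and flags install_names that point at ${D}, ${S}, or at nothing installed. A small sketch of that per-record check, with image_exists standing in for the "exists on the system or under ${D}" test; illustrative only:

    def check_macho_record(record, d, s, image_exists):
        """record: one 'arch;path;install_name;needed' line from scanmacho."""
        arch, obj, install_name, needed = record.rstrip("\n").split(";", 3)
        problems = []
        for lib in (needed.split(",") if needed else []):
            if lib.startswith(d):
                problems.append("install_name references ${D}: " + lib)
            elif lib.startswith(s):
                problems.append("install_name references ${S}: " + lib)
            elif not lib.startswith(("@executable_path/", "@loader_path/")) and not image_exists(lib):
                problems.append("invalid reference to " + lib)
        return arch, "/" + obj, install_name, problems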
+
+install_qa_check_pecoff() {
+ local _pfx_scan="readpecoff ${CHOST}"
+
+ # this one uses readpecoff, which supports multiple prefix platforms!
+	# this is absolutely _not_ optimized for speed, and there is plenty
+	# of room for improvement by introducing one cache or another!
+ if ! hasq binchecks ${RESTRICT}; then
+ # copied and adapted from the above scanelf code.
+ local qa_var insecure_rpath=0 tmp_quiet=${PORTAGE_QUIET}
+ local f x
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ local _exec_find_opt="-executable"
+ [[ ${CHOST} == *-winnt* ]] && _exec_find_opt='-name *.dll -o -name *.exe'
+
+ # Make sure we disallow insecure RUNPATH/RPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+
+ f=$(
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | \
+ while IFS=";" read arch obj soname rpath needed ; do \
+ echo "${rpath}" | grep -E "(${PORTAGE_BUILDDIR}|: |::|^:|^ )" > /dev/null 2>&1 \
+ && echo "${obj}"; done;
+ )
+ # Reject set*id binaries with $ORIGIN in RPATH #260331
+ x=$(
+ find "${ED}" -type f '(' -perm -u+s -o -perm -g+s ')' -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; do \
+ echo "${rpath}" | grep '$ORIGIN' > /dev/null 2>&1 && echo "${obj}"; done;
+ )
+ if [[ -n ${f}${x} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with the maintaining herd of the package."
+ eqawarn "${f}${f:+${x:+\n}}${x}"
+ vecho -ne '\a\n'
+ if [[ -n ${x} ]] || has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ else
+ eqawarn "cannot automatically fix runpaths on interix platforms!"
+ fi
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+
+ # Save NEEDED information after removing self-contained providers
+ find "${ED}" -type f '(' ${_exec_find_opt} ')' -print0 | xargs -0 ${_pfx_scan} | { while IFS=';' read arch obj soname rpath needed; do
+ # need to strip image dir from object name.
+ obj="/${obj#${D}}"
+ if [ -z "${rpath}" -o -n "${rpath//*ORIGIN*}" ]; then
+ # object doesn't contain $ORIGIN in its runpath attribute
+ echo "${obj} ${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${needed}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ else
+ dir=${obj%/*}
+ # replace $ORIGIN with the dirname of the current object for the lookup
+ opath=$(echo :${rpath}: | sed -e "s#.*:\(.*\)\$ORIGIN\(.*\):.*#\1${dir}\2#")
+ sneeded=$(echo ${needed} | tr , ' ')
+ rneeded=""
+ for lib in ${sneeded}; do
+ found=0
+ for path in ${opath//:/ }; do
+ [ -e "${ED}/${path}/${lib}" ] && found=1 && break
+ done
+ [ "${found}" -eq 0 ] && rneeded="${rneeded},${lib}"
+ done
+ rneeded=${rneeded:1}
+ if [ -n "${rneeded}" ]; then
+ echo "${obj} ${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED
+ echo "${arch};${obj};${soname};${rpath};${rneeded}" >> "${PORTAGE_BUILDDIR}"/build-info/NEEDED.PECOFF.1
+ fi
+ fi
+ done }
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ local _so_ext='.so*'
+
+ case "${CHOST}" in
+ *-winnt*) _so_ext=".dll" ;; # no "*" intentionally!
+ esac
+
+ # Run some sanity checks on shared libraries
+ for d in "${ED}"lib* "${ED}"usr/lib* ; do
+ [[ -d "${d}" ]] || continue
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${soname}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack a SONAME"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+
+ f=$(find "${d}" -name "lib*${_so_ext}" -print0 | \
+ xargs -0 ${_pfx_scan} | while IFS=";" read arch obj soname rpath needed; \
+ do [[ -z "${needed}" ]] && echo "${obj}"; done)
+ if [[ -n ${f} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following shared libraries lack NEEDED entries"
+ eqawarn "${f}"
+ vecho -ne '\a\n'
+ sleep 1
+ fi
+ done
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
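
The PECOFF NEEDED pass above expands $ORIGIN to the object's own directory and records only the dependencies that are not already satisfied inside the image. Roughly, and only for the $ORIGIN branch (hypothetical helper, ED being the image root):

    import os

    def filter_needed(obj, runpath, needed, ed):
        """Drop NEEDED entries that $ORIGIN-relative runpaths satisfy inside the image."""
        origin = os.path.dirname(obj)
        search = [p.replace("$ORIGIN", origin) for p in runpath.split(":") if p]
        remaining = []
        for lib in (needed.split(",") if needed else []):
            if not any(os.path.exists(os.path.join(ed, p.lstrip("/"), lib)) for p in search):
                remaining.append(lib)
        return remaining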
+
+install_qa_check_xcoff() {
+ if ! hasq binchecks ${RESTRICT}; then
+ local tmp_quiet=${PORTAGE_QUIET}
+ local queryline deplib
+ local insecure_rpath_list= undefined_symbols_list=
+
+ # display warnings when using stricter because we die afterwards
+ if has stricter ${FEATURES} ; then
+ unset PORTAGE_QUIET
+ fi
+
+ rm -f "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local neededfd
+ for neededfd in {3..1024} none; do ( : <&${neededfd} ) 2>/dev/null || break; done
+ [[ ${neededfd} != none ]] || die "cannot find free file descriptor handle"
+
+ eval "exec ${neededfd}>\"${PORTAGE_BUILDDIR}\"/build-info/NEEDED.XCOFF.1" || die "cannot open ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ ( # work around a problem in /usr/bin/dump (used by aixdll-query)
+ # dumping core when path names get too long.
+ cd "${ED}" >/dev/null &&
+ find . -not -type d -exec \
+ aixdll-query '{}' FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS ';'
+ ) > "${T}"/needed 2>/dev/null
+
+ # Symlinking shared archive libraries is not a good idea on aix,
+ # as there is nothing like "soname" on pure filesystem level.
+ # So we create a copy instead of the symlink.
+ local prev_FILE=
+ local FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ ${prev_FILE} != ${FILE} ]]; then
+ if [[ " ${FLAGS} " == *" SHROBJ "* && -h ${ED}${FILE} ]]; then
+ prev_FILE=${FILE}
+ local target=$(readlink "${ED}${FILE}")
+ if [[ ${target} == /* ]]; then
+ target=${D}${target}
+ else
+ target=${FILE%/*}/${target}
+ fi
+ rm -f "${ED}${FILE}" || die "cannot prune ${FILE}"
+ cp -f "${ED}${target}" "${ED}${FILE}" || die "cannot copy ${target} to ${FILE}"
+ fi
+ fi
+ done <"${T}"/needed
+
+ prev_FILE=
+ while read queryline
+ do
+ FILE= MEMBER= FLAGS= FORMAT= RUNPATH= DEPLIBS=
+ eval ${queryline}
+ FILE=${FILE#./}
+
+ if [[ -n ${MEMBER} && ${prev_FILE} != ${FILE} ]]; then
+ # Save NEEDED information for each archive library stub
+ # even if it is static only: the already installed archive
+ # may contain shared objects to be preserved.
+ echo "${FORMAT##* }${FORMAT%%-*};${EPREFIX}/${FILE};${FILE##*/};;" >&${neededfd}
+ fi
+ prev_FILE=${FILE}
+
+ [[ " ${FLAGS} " == *" SHROBJ "* ]] || continue
+
+ # Make sure we disallow insecure RUNPATH's
+ # Don't want paths that point to the tree where the package was built
+ # (older, broken libtools would do this). Also check for null paths
+ # because the loader will search $PWD when it finds null paths.
+ # And we really want absolute paths only.
+ if [[ -n $(echo ":${RUNPATH}:" | grep -E "(${PORTAGE_BUILDDIR}|::|:[^/])") ]]; then
+ insecure_rpath_list="${insecure_rpath_list}\n${FILE}${MEMBER:+[${MEMBER}]}"
+ fi
+
+ local needed=
+ [[ -n ${MEMBER} ]] && needed=${FILE##*/}
+ for deplib in ${DEPLIBS}; do
+ eval deplib=${deplib}
+ if [[ ${deplib} == '.' || ${deplib} == '..' ]]; then
+ # Although we do have runtime linking, we don't want undefined symbols.
+ # AIX does indicate this by needing either '.' or '..'
+ undefined_symbols_list="${undefined_symbols_list}\n${FILE}"
+ else
+ needed="${needed}${needed:+,}${deplib}"
+ fi
+ done
+
+ FILE=${EPREFIX}/${FILE}
+
+ [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"
+ # Save NEEDED information
+ echo "${FORMAT##* }${FORMAT%%-*};${FILE}${MEMBER};${FILE##*/}${MEMBER};${RUNPATH};${needed}" >&${neededfd}
+ done <"${T}"/needed
+
+ eval "exec ${neededfd}>&-" || die "cannot close handle to ${PORTAGE_BUILDDIR}/build-info/NEEDED.XCOFF.1"
+
+ if [[ -n ${undefined_symbols_list} ]]; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain undefined symbols."
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${undefined_symbols_list}"
+ vecho -ne '\a\n'
+ fi
+
+ if [[ -n ${insecure_rpath_list} ]] ; then
+ vecho -ne '\a\n'
+ eqawarn "QA Notice: The following files contain insecure RUNPATH's"
+ eqawarn " Please file a bug about this at http://bugs.gentoo.org/"
+ eqawarn " with 'prefix' as the maintaining herd of the package."
+ eqawarn "${insecure_rpath_list}"
+ vecho -ne '\a\n'
+ if has stricter ${FEATURES} ; then
+ insecure_rpath=1
+ fi
+ fi
+
+ if [[ ${insecure_rpath} -eq 1 ]] ; then
+ die "Aborting due to serious QA concerns with RUNPATH/RPATH"
+ elif [[ -n ${die_msg} ]] && has stricter ${FEATURES} ; then
+ die "Aborting due to QA concerns: ${die_msg}"
+ fi
+
+ PORTAGE_QUIET=${tmp_quiet}
+ fi
+}
+
install_mask() {
local root="$1"
diff --cc bin/repoman
index 25a03f0,0e3820f..74b4f45
--- a/bin/repoman
+++ b/bin/repoman
@@@ -378,9 -376,9 +379,10 @@@ qacats = list(qahelp
qacats.sort()
qawarnings = set((
+"changelog.ebuildadded",
"changelog.missing",
"changelog.notadded",
+ "dependency.unknown",
"digest.assumed",
"digest.unused",
"ebuild.notadded",
diff --cc cnf/make.globals
index de05b3b,2892d50..ae98d16
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -108,21 -106,11 +108,21 @@@ CONFIG_PROTECT="/etc
CONFIG_PROTECT_MASK="/etc/env.d"
# Disable auto-use
- USE_ORDER="env:pkg:conf:defaults:pkginternal:env.d"
+ USE_ORDER="env:pkg:conf:defaults:pkginternal:repo:env.d"
+# Default portage user/group
+PORTAGE_USER='@portageuser@'
+PORTAGE_GROUP='@portagegroup@'
+PORTAGE_ROOT_USER='@rootuser@'
+
# Default ownership of installed files.
-PORTAGE_INST_UID="0"
-PORTAGE_INST_GID="0"
+PORTAGE_INST_UID="@rootuid@"
+PORTAGE_INST_GID="@rootgid@"
+
+# Default PATH for ebuild env
+DEFAULT_PATH="@DEFAULT_PATH@"
+# Any extra PATHs to add to the ebuild environment's PATH (if any)
+EXTRA_PATH="@EXTRA_PATH@"
# Mode bits for ${WORKDIR} (see ebuild.5).
PORTAGE_WORKDIR_MODE="0700"
diff --cc pym/portage/package/ebuild/prepare_build_dirs.py
index a1086fd,259bedf..053ebcc
--- a/pym/portage/package/ebuild/prepare_build_dirs.py
+++ b/pym/portage/package/ebuild/prepare_build_dirs.py
@@@ -143,14 -142,22 +143,22 @@@ def _adjust_perms_msg(settings, msg)
def _prepare_features_dirs(mysettings):
+ # Use default ABI libdir in accordance with bug #355283.
+ libdir = None
+ default_abi = mysettings.get("DEFAULT_ABI")
+ if default_abi:
+ libdir = mysettings.get("LIBDIR_" + default_abi)
+ if not libdir:
+ libdir = "lib"
+
features_dirs = {
"ccache":{
- "path_dir": EPREFIX+"/usr/lib/ccache/bin",
- "path_dir": "/usr/%s/ccache/bin" % (libdir,),
++ "path_dir": EPREFIX+"/usr/%s/ccache/bin" % (libdir,),
"basedir_var":"CCACHE_DIR",
"default_dir":os.path.join(mysettings["PORTAGE_TMPDIR"], "ccache"),
"always_recurse":False},
"distcc":{
- "path_dir": EPREFIX+"/usr/lib/distcc/bin",
- "path_dir": "/usr/%s/distcc/bin" % (libdir,),
++ "path_dir": EPREFIX+"/usr/%s/distcc/bin" % (libdir,),
"basedir_var":"DISTCC_DIR",
"default_dir":os.path.join(mysettings["BUILD_PREFIX"], ".distcc"),
"subdirs":("lock", "state"),
diff --cc pym/portage/util/__init__.py
index 0182c76,5468e28..3f202eb
--- a/pym/portage/util/__init__.py
+++ b/pym/portage/util/__init__.py
@@@ -1581,24 -1570,14 +1571,26 @@@ def find_updated_config_files(target_ro
else:
yield (x, None)
- def getlibpaths(root):
+ def getlibpaths(root, env=None):
""" Return a list of paths that are used for library lookups """
+ if env is None:
+ env = os.environ
+
+	# PREFIX HACK: LD_LIBRARY_PATH isn't portable, and is considered
+	# harmful, so better not to use it. We don't need any host OS lib
+	# paths either, so handle the Prefix case separately.
+ if EPREFIX != '':
+ rval = []
+ rval.append(EPREFIX + "/usr/lib")
+ rval.append(EPREFIX + "/lib")
+ # we don't know the CHOST here, so it's a bit hard to guess
+ # where GCC's and ld's libs are. Though, GCC's libs should be
+	# in lib and usr/lib, and binutils' libs are rarely used
+ else:
- # the following is based on the information from ld.so(8)
- rval = os.environ.get("LD_LIBRARY_PATH", "").split(":")
+ # the following is based on the information from ld.so(8)
- rval = env.get("LD_LIBRARY_PATH", "").split(":")
- rval.extend(grabfile(os.path.join(root, "etc", "ld.so.conf")))
- rval.append("/usr/lib")
- rval.append("/lib")
++ rval = env.get("LD_LIBRARY_PATH", "").split(":")
+ rval.extend(grabfile(os.path.join(root, "etc", "ld.so.conf")))
+ rval.append("/usr/lib")
+ rval.append("/lib")
return [normalize_path(x) for x in rval if x]
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-06-14 15:39 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-06-14 15:39 UTC (permalink / raw
To: gentoo-commits
commit: ebe76c40c60e1938493e87a4e690cc2615ebba79
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 14 15:37:40 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Tue Jun 14 15:37:40 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=ebe76c40
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild.sh
pym/portage/dbapi/bintree.py
bin/ebuild.sh | 14 ++-
man/portage.5 | 51 +++++++-
pym/_emerge/AbstractEbuildProcess.py | 26 +++-
pym/_emerge/BinpkgFetcher.py | 2 +-
pym/_emerge/EbuildFetcher.py | 6 +-
pym/_emerge/EbuildMetadataPhase.py | 4 +-
pym/_emerge/Scheduler.py | 7 +-
pym/_emerge/SubProcess.py | 20 ++--
pym/_emerge/depgraph.py | 144 +++++++++++++++-------
pym/_emerge/resolver/circular_dependency.py | 3 +-
pym/_emerge/resolver/slot_collision.py | 30 +++--
pym/_emerge/unmerge.py | 10 ++-
pym/portage/__init__.py | 2 +-
pym/portage/dbapi/bintree.py | 28 +++--
pym/portage/dbapi/virtual.py | 6 +-
pym/portage/dep/__init__.py | 50 +++++---
pym/portage/locks.py | 9 +-
pym/portage/package/ebuild/config.py | 2 +-
pym/portage/package/ebuild/digestgen.py | 22 +++-
pym/portage/package/ebuild/getmaskingreason.py | 11 ++
pym/portage/package/ebuild/getmaskingstatus.py | 38 ++----
pym/portage/{package => tests/dbapi}/__init__.py | 2 +-
pym/portage/tests/{bin => dbapi}/__test__ | 0
pym/portage/tests/dbapi/test_fakedbapi.py | 58 +++++++++
pym/portage/tests/dep/testExtractAffectingUSE.py | 4 +-
pym/portage/tests/dep/test_use_reduce.py | 12 +-
pym/portage/tests/ebuild/test_spawn.py | 2 +-
pym/portage/tests/locks/test_lock_nonblock.py | 46 +++++++
pym/portage/tests/resolver/test_merge_order.py | 33 +++++
pym/portage/util/__init__.py | 4 +-
30 files changed, 469 insertions(+), 177 deletions(-)
diff --cc bin/ebuild.sh
index 3c9c63e,8c301d8..a0b7b15
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -2128,18 -2115,20 +2128,22 @@@ if ! hasq "$EBUILD_PHASE" clean cleanr
;;
esac
- PATH=$_ebuild_helpers_path:$PREROOTPATH${PREROOTPATH:+:}/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin${ROOTPATH:+:}$ROOTPATH
+ #PATH=$_ebuild_helpers_path:$PREROOTPATH${PREROOTPATH:+:}/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin${ROOTPATH:+:}$ROOTPATH
+ # PREFIX: same notes apply as at the top of this file
+ PATH="$_ebuild_helpers_path:$PREROOTPATH${PREROOTPATH:+:}${DEFAULT_PATH}${ROOTPATH:+:}$ROOTPATH${EXTRA_PATH:+:}${EXTRA_PATH}"
unset _ebuild_helpers_path
+ # Use default ABI libdir in accordance with bug #355283.
+ x=LIBDIR_${DEFAULT_ABI}
+ [[ -n $DEFAULT_ABI && -n ${!x} ]] && x=${!x} || x=lib
+
if hasq distcc $FEATURES ; then
- PATH="${EPREFIX}/usr/lib/distcc/bin:$PATH"
- PATH="/usr/$x/distcc/bin:$PATH"
++ PATH="${EPREFIX}/usr/$x/distcc/bin:$PATH"
[[ -n $DISTCC_LOG ]] && addwrite "${DISTCC_LOG%/*}"
fi
if hasq ccache $FEATURES ; then
- PATH="${EPREFIX}/usr/lib/ccache/bin:$PATH"
- PATH="/usr/$x/ccache/bin:$PATH"
++ PATH="${EPREFIX}/usr/$x/ccache/bin:$PATH"
if [[ -n $CCACHE_DIR ]] ; then
addread "$CCACHE_DIR"
diff --cc pym/portage/dbapi/bintree.py
index fb52544,ebec11f..c18dc17
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@@ -250,8 -249,8 +250,8 @@@ class binarytree(object)
self._pkgindex_header_keys = set([
"ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
"ACCEPT_PROPERTIES", "CBUILD",
- "CHOST", "CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
+ "CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
- "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE"])
+ "GENTOO_MIRRORS", "INSTALL_MASK", "SYNC", "USE", "EPREFIX"])
self._pkgindex_default_pkg_data = {
"BUILD_TIME" : "",
"DEPEND" : "",
@@@ -270,10 -269,24 +270,24 @@@
"DEFINED_PHASES" : "",
"REQUIRED_USE" : ""
}
- self._pkgindex_inherited_keys = ["CHOST", "repository"]
+ self._pkgindex_inherited_keys = ["CHOST", "repository", "EPREFIX"]
+
+ # Populate the header with appropriate defaults.
self._pkgindex_default_header_data = {
- "repository":""
+ "CHOST" : self.settings.get("CHOST", ""),
+ "repository" : "",
}
+
+ # It is especially important to populate keys like
+ # "repository" that save space when entries can
+ # inherit them from the header. If an existing
+ # pkgindex header already defines these keys, then
+ # they will appropriately override our defaults.
+ main_repo = self.settings.repositories.mainRepo()
+ if main_repo is not None and not main_repo.missing_repo_name:
+ self._pkgindex_default_header_data["repository"] = \
+ main_repo.name
+
self._pkgindex_translated_keys = (
("DESCRIPTION" , "DESC"),
("repository" , "REPO"),
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-06-06 17:12 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-06-06 17:12 UTC (permalink / raw
To: gentoo-commits
commit: 09b931e2e6f99898a0381a04543344b28213833f
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Jun 6 17:11:14 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Jun 6 17:11:14 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=09b931e2
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/_emerge/emergelog.py
NEWS | 11 +++
RELEASE-NOTES | 9 ++
bin/ebuild | 1 -
bin/etc-update | 7 +-
bin/portageq | 57 ++++++++++++--
cnf/etc-update.conf | 4 +
man/ebuild.5 | 6 ++
man/emerge.1 | 6 +-
pym/_emerge/Binpkg.py | 22 ++----
pym/_emerge/MergeListItem.py | 2 +-
pym/_emerge/Package.py | 3 +-
pym/_emerge/PackageArg.py | 5 +-
pym/_emerge/Scheduler.py | 75 +-----------------
pym/_emerge/actions.py | 2 +-
pym/_emerge/create_world_atom.py | 12 ++-
pym/_emerge/depgraph.py | 59 ++++++++++++---
pym/_emerge/emergelog.py | 5 +-
pym/_emerge/help.py | 6 +-
pym/_emerge/main.py | 28 ++++---
pym/_emerge/resolver/circular_dependency.py | 4 +-
pym/_emerge/unmerge.py | 2 +-
pym/portage/__init__.py | 35 +++++++-
pym/portage/_sets/base.py | 10 ++-
pym/portage/_sets/files.py | 6 +-
pym/portage/dbapi/_MergeProcess.py | 9 ++-
pym/portage/dbapi/__init__.py | 20 +++---
pym/portage/dbapi/bintree.py | 9 ++-
pym/portage/dbapi/dep_expand.py | 25 ++++--
pym/portage/dbapi/porttree.py | 63 +++++++++-------
pym/portage/dbapi/vartree.py | 81 +++++++++-----------
pym/portage/dep/dep_check.py | 11 ++-
pym/portage/getbinpkg.py | 2 +-
.../package/ebuild/_config/KeywordsManager.py | 3 +-
pym/portage/package/ebuild/_config/MaskManager.py | 3 +-
pym/portage/package/ebuild/_config/helper.py | 5 +-
pym/portage/package/ebuild/config.py | 10 ++-
pym/portage/package/ebuild/doebuild.py | 4 +-
pym/portage/package/ebuild/getmaskingreason.py | 42 ++++++++---
pym/portage/package/ebuild/getmaskingstatus.py | 7 +-
pym/portage/repository/config.py | 30 +++++---
pym/portage/tests/resolver/test_merge_order.py | 36 +++++++++-
pym/portage/tests/util/test_digraph.py | 12 ++--
pym/portage/util/__init__.py | 11 ++-
pym/portage/util/mtimedb.py | 8 ++-
pym/repoman/checks.py | 5 +-
45 files changed, 480 insertions(+), 293 deletions(-)
diff --cc bin/etc-update
index 2054389,42518ad..0936295
--- a/bin/etc-update
+++ b/bin/etc-update
@@@ -549,18 -548,14 +549,19 @@@ rm -rf "${TMP}" 2> /dev/nul
mkdir "${TMP}" || die "failed to create temp dir" 1
# make sure we have a secure directory to work in
chmod 0700 "${TMP}" || die "failed to set perms on temp dir" 1
-chown ${UID:-0}:${GID:-0} "${TMP}" || die "failed to set ownership on temp dir" 1
+# GID may not be available, and group 0 is not desirable when not running
+# as root, hence just rely on mkdir having created a dir owned by
+# the user
+if [[ -z ${UID} || ${UID} == 0 ]] ; then
+ chown ${UID:-0}:${GID:-0} "${TMP}" || die "failed to set ownership on temp dir" 1
+fi
# I need the CONFIG_PROTECT value
-#CONFIG_PROTECT=$(/usr/lib/portage/bin/portageq envvar CONFIG_PROTECT)
-#CONFIG_PROTECT_MASK=$(/usr/lib/portage/bin/portageq envvar CONFIG_PROTECT_MASK)
+#CONFIG_PROTECT=$(@PORTAGE_BASE@/bin/portageq envvar CONFIG_PROTECT)
+#CONFIG_PROTECT_MASK=$(@PORTAGE_BASE@/bin/portageq envvar CONFIG_PROTECT_MASK)
# load etc-config's configuration
+ CLEAR_TERM=$(get_config clear_term)
EU_AUTOMERGE=$(get_config eu_automerge)
rm_opts=$(get_config rm_opts)
mv_opts=$(get_config mv_opts)
diff --cc bin/portageq
index 5f0e698,fa71388..800b444
--- a/bin/portageq
+++ b/bin/portageq
@@@ -21,16 -21,11 +21,15 @@@ except KeyboardInterrupt
sys.exit(128 + signal.SIGINT)
import os
- import subprocess
import types
+# for an explanation on this logic, see pym/_emerge/__init__.py
+if os.environ.__contains__("PORTAGE_PYTHONPATH"):
+ pym_path = os.environ["PORTAGE_PYTHONPATH"]
+else:
+ pym_path = os.path.join(os.path.dirname(
+ os.path.dirname(os.path.realpath(__file__))), "pym")
# Avoid sandbox violations after python upgrade.
-pym_path = os.path.join(os.path.dirname(
- os.path.dirname(os.path.realpath(__file__))), "pym")
if os.environ.get("SANDBOX_ON") == "1":
sandbox_write = os.environ.get("SANDBOX_WRITE", "").split(":")
if pym_path not in sandbox_write:
diff --cc pym/_emerge/emergelog.py
index 046c99b,9cac3b2..20cec9f
--- a/pym/_emerge/emergelog.py
+++ b/pym/_emerge/emergelog.py
@@@ -12,10 -12,12 +12,13 @@@ from portage import _encoding
from portage import _unicode_encode
from portage.data import secpass
from portage.output import xtermTitle
+from portage.const import EPREFIX
+ # We disable emergelog by default, since it's called from
+ # dblink.merge() and we don't want that to trigger log writes
+ # unless it's really called via emerge.
+ _disable = True
-_emerge_log_dir = '/var/log'
+_emerge_log_dir = EPREFIX + '/var/log'
- _disable = False
def emergelog(xterm_titles, mystr, short_msg=None):
diff --cc pym/portage/_sets/files.py
index f422ed9,f19ecf6..7992b82
--- a/pym/portage/_sets/files.py
+++ b/pym/portage/_sets/files.py
@@@ -1,6 -1,7 +1,6 @@@
- # Copyright 2007 Gentoo Foundation
+ # Copyright 2007-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-import errno
import re
from itertools import chain
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-05-28 8:29 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-05-28 8:29 UTC (permalink / raw
To: gentoo-commits
commit: e35fe943ff13e4ba10a2eddcc17c22632b2d44a8
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat May 28 08:28:37 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat May 28 08:28:37 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=e35fe943
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/_emerge/actions.py | 5 ++++-
pym/_emerge/depgraph.py | 15 +++++++++++++--
2 files changed, 17 insertions(+), 3 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-05-27 17:41 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-05-27 17:41 UTC (permalink / raw
To: gentoo-commits
commit: abd5adceac4b5f95de66be7516d4f29f24f00d02
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri May 27 17:39:37 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri May 27 17:39:37 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=abd5adce
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Ported changes to LinkageMapELF to the other LinkageMaps
Conflicts:
bin/etc-update
bin/glsa-check
bin/regenworld
pym/portage/dbapi/vartree.py
bin/ebuild.sh | 7 +
bin/etc-update | 8 +-
bin/glsa-check | 4 +-
bin/regenworld | 6 +-
bin/repoman | 2 +-
cnf/make.conf.sparc.diff | 2 +-
doc/package/ebuild/eapi/4.docbook | 4 +-
make.conf.txt | 719 --------------------
man/ebuild.5 | 4 +-
man/emerge.1 | 15 +-
man/make.conf.5 | 5 +-
pym/_emerge/AsynchronousLock.py | 49 ++-
pym/_emerge/AsynchronousTask.py | 5 +-
pym/_emerge/Binpkg.py | 3 +-
pym/_emerge/BinpkgFetcher.py | 7 +-
pym/_emerge/Blocker.py | 12 +-
pym/_emerge/BlockerDB.py | 9 +-
pym/_emerge/DepPriority.py | 2 +-
pym/_emerge/DepPrioritySatisfiedRange.py | 31 +-
pym/_emerge/EbuildBuild.py | 3 +-
pym/_emerge/EbuildBuildDir.py | 29 +-
pym/_emerge/EbuildMerge.py | 23 +-
pym/_emerge/EbuildPhase.py | 34 +-
pym/_emerge/FakeVartree.py | 21 +-
pym/_emerge/Package.py | 59 ++-
pym/_emerge/PackageUninstall.py | 102 +++-
pym/_emerge/Scheduler.py | 24 +-
pym/_emerge/Task.py | 31 +-
pym/_emerge/actions.py | 30 +-
pym/_emerge/depgraph.py | 617 ++++++++++++++----
pym/_emerge/help.py | 16 +-
pym/_emerge/main.py | 17 +-
pym/_emerge/resolver/backtracking.py | 7 +-
pym/_emerge/resolver/output_helpers.py | 4 +-
pym/_emerge/unmerge.py | 74 ++-
pym/portage/const.py | 2 +-
pym/portage/cvstree.py | 6 +-
pym/portage/dbapi/_MergeProcess.py | 43 +-
pym/portage/dbapi/vartree.py | 315 ++++++---
pym/portage/mail.py | 11 +-
pym/portage/output.py | 4 +-
pym/portage/package/ebuild/doebuild.py | 121 ++--
pym/portage/package/ebuild/getmaskingstatus.py | 7 +-
pym/portage/tests/ebuild/test_config.py | 4 +-
pym/portage/tests/locks/test_asynchronous_lock.py | 95 +++-
pym/portage/tests/resolver/ResolverPlayground.py | 99 +++-
pym/portage/tests/resolver/test_autounmask.py | 51 ++-
.../tests/resolver/test_circular_dependencies.py | 3 +-
pym/portage/tests/resolver/test_depth.py | 8 +-
pym/portage/tests/resolver/test_merge_order.py | 386 +++++++++++
pym/portage/tests/resolver/test_multirepo.py | 3 +
.../tests/resolver/test_old_dep_chain_display.py | 2 +
pym/portage/tests/resolver/test_simple.py | 2 +-
pym/portage/tests/resolver/test_slot_collisions.py | 3 +-
pym/portage/update.py | 4 +-
pym/portage/util/__init__.py | 29 +-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 13 +-
pym/portage/util/_dyn_libs/LinkageMapMachO.py | 13 +-
pym/portage/util/_dyn_libs/LinkageMapPeCoff.py | 11 +-
pym/portage/util/_dyn_libs/LinkageMapXCoff.py | 11 +-
pym/portage/util/digraph.py | 10 +-
pym/portage/util/movefile.py | 5 +-
pym/portage/xml/metadata.py | 4 +-
63 files changed, 1948 insertions(+), 1302 deletions(-)
diff --cc bin/etc-update
index 5fbd345,2369f04..2054389
--- a/bin/etc-update
+++ b/bin/etc-update
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2007 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# Author Brandon Low <lostlogic@gentoo.org>
diff --cc bin/glsa-check
index 64209ab,2f2d555..4f50a1f
--- a/bin/glsa-check
+++ b/bin/glsa-check
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2008-2009 Gentoo Foundation
+ # Copyright 2008-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc bin/regenworld
index e0e9774,6b5af4c..9e0e291
--- a/bin/regenworld
+++ b/bin/regenworld
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
from __future__ import print_function
diff --cc pym/portage/const.py
index 6057520,e91c009..00a53e4
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -132,10 -88,9 +132,10 @@@ EBUILD_PHASES = ("pretend",
SUPPORTED_FEATURES = frozenset([
"assume-digests", "binpkg-logs", "buildpkg", "buildsyspkg", "candy",
"ccache", "chflags", "collision-protect", "compress-build-logs",
- "digest", "distcc", "distlocks", "ebuild-locks", "fakeroot",
+ "digest", "distcc", "distcc-pump", "distlocks", "ebuild-locks", "fakeroot",
"fail-clean", "fixpackages", "force-mirror", "getbinpkg",
"installsources", "keeptemp", "keepwork", "fixlafiles", "lmirror",
+ "macossandbox", "macosprefixsandbox", "macosusersandbox",
"metadata-transfer", "mirror", "multilib-strict", "news",
"noauto", "noclean", "nodoc", "noinfo", "noman",
"nostrip", "notitles", "parallel-fetch", "parallel-install",
diff --cc pym/portage/dbapi/vartree.py
index 581300f,e742358..e0f0856
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -2347,7 -2386,7 +2407,7 @@@ class dblink(object)
def path_to_node(path):
node = path_node_map.get(path)
if node is None:
- node = linkmap._LibGraphNode(path, root)
- node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
++ node = linkmap._LibGraphNode(linkmap._obj_key(path))
alt_path_node = lib_graph.get(node)
if alt_path_node is not None:
node = alt_path_node
@@@ -2512,15 -2552,7 +2573,15 @@@
def path_to_node(path):
node = path_node_map.get(path)
if node is None:
- node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
+ chost = self.settings.get('CHOST')
+ if chost.find('darwin') >= 0:
- node = LinkageMapMachO._LibGraphNode(path, root)
++ node = LinkageMapMachO._LibGraphNode(linkmap._obj_key(path))
+ elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
- node = LinkageMapPeCoff._LibGraphNode(path, root)
++ node = LinkageMapPeCoff._LibGraphNode(linkmap._obj_key(path))
+ elif chost.find('aix') >= 0:
- node = LinkageMapXCoff._LibGraphNode(path, root)
++ node = LinkageMapXCoff._LibGraphNode(linkmap._obj_key(path))
+ else:
- node = LinkageMap._LibGraphNode(path, root)
++ node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
alt_path_node = lib_graph.get(node)
if alt_path_node is not None:
node = alt_path_node
diff --cc pym/portage/util/_dyn_libs/LinkageMapMachO.py
index cbdf6c2,fef75b6..7ed004a
--- a/pym/portage/util/_dyn_libs/LinkageMapMachO.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapMachO.py
@@@ -59,7 -60,7 +59,7 @@@ class LinkageMapMachO(object)
"""Helper class used as _obj_properties keys for objects."""
- __slots__ = ("__weakref__", "_key")
- __slots__ = ("_key",)
++ __slots__ = ("_key")
def __init__(self, obj, root):
"""
diff --cc pym/portage/util/_dyn_libs/LinkageMapPeCoff.py
index c90947e,0000000..25e8a45
mode 100644,000000..100644
--- a/pym/portage/util/_dyn_libs/LinkageMapPeCoff.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapPeCoff.py
@@@ -1,267 -1,0 +1,274 @@@
+# Copyright 1998-2011 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+import errno
+import logging
+import subprocess
+
+import portage
+from portage import _encodings
+from portage import _os_merge
+from portage import _unicode_decode
+from portage import _unicode_encode
+from portage.cache.mappings import slot_dict_class
+from portage.exception import CommandNotFound
+from portage.localization import _
+from portage.util import getlibpaths
+from portage.util import grabfile
+from portage.util import normalize_path
+from portage.util import writemsg_level
+from portage.const import EPREFIX
+from portage.util._dyn_libs.LinkageMapELF import LinkageMapELF
+
+class LinkageMapPeCoff(LinkageMapELF):
+
+ """Models dynamic linker dependencies."""
+
+ # NEEDED.PECOFF.1 has effectively the _same_ format as NEEDED.ELF.2,
+ # but we keep up the relation "scanelf" -> "NEEDED.ELF", "readpecoff" ->
+ # "NEEDED.PECOFF", "scanmacho" -> "NEEDED.MACHO", etc. others will follow.
+ _needed_aux_key = "NEEDED.PECOFF.1"
+
+ class _ObjectKey(LinkageMapELF._ObjectKey):
+
+ """Helper class used as _obj_properties keys for objects."""
+
+ def _generate_object_key(self, obj, root):
+ """
+ Generate object key for a given object. This is different from the
+ Linux implementation, since some systems (e.g. interix) don't have
+ "inodes", thus the inode field is always zero, or a random value,
+ making it inappropriate for identifying a file... :)
+
+ @param object: path to a file
+ @type object: string (example: '/usr/bin/bar')
+ @rtype: 2-tuple of types (bool, string)
+ @return:
+ 2-tuple of a boolean indicating existence, and the absolute path
+ """
+
+ os = _os_merge
+
+ try:
+ _unicode_encode(obj,
+ encoding=_encodings['merge'], errors='strict')
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ _unicode_encode(obj,
+ encoding=_encodings['fs'], errors='strict')
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+
+ abs_path = os.path.join(root, obj.lstrip(os.sep))
+ try:
+ object_stat = os.stat(abs_path)
+ except OSError:
+ return (False, os.path.realpath(abs_path))
+ # On Interix, the inode field may always be zero, since the
+ # filesystem (NTFS) has no inodes ...
+ return (True, os.path.realpath(abs_path))
+
+ def file_exists(self):
+ """
+ Determine if the file for this key exists on the filesystem.
+
+ @rtype: Boolean
+ @return:
+ 1. True if the file exists.
+ 2. False if the file does not exist or is a broken symlink.
+
+ """
+ return self._key[0]
+
+ class _LibGraphNode(_ObjectKey):
+ __slots__ = ("alt_paths",)
+
- def __init__(self, obj, root):
- LinkageMapPeCoff._ObjectKey.__init__(self, obj, root)
++ def __init__(self, key):
++ """
++ Create a _LibGraphNode from an existing _ObjectKey.
++ This re-uses the _key attribute in order to avoid repeating
++ any previous stat calls, which helps to avoid potential race
++ conditions due to inconsistent stat results when the
++ file system is being modified concurrently.
++ """
++ self._key = key._key
+ self.alt_paths = set()
+
+ def __str__(self):
+ return str(sorted(self.alt_paths))
+
+ def rebuild(self, exclude_pkgs=None, include_file=None,
+ preserve_paths=None):
+ """
+ Raises CommandNotFound if there are preserved libs
+ and the readpecoff binary is not available.
+
+ @param exclude_pkgs: A set of packages that should be excluded from
+ the LinkageMap, since they are being unmerged and their NEEDED
+ entries are therefore irrelevant and would only serve to corrupt
+ the LinkageMap.
+ @type exclude_pkgs: set
+ @param include_file: The path of a file containing NEEDED entries for
+ a package which does not exist in the vardbapi yet because it is
+ currently being merged.
+ @type include_file: String
+ @param preserve_paths: Libraries preserved by a package instance that
+ is currently being merged. They need to be explicitly passed to the
+ LinkageMap, since they are not registered in the
+ PreservedLibsRegistry yet.
+ @type preserve_paths: set
+ """
+
+ os = _os_merge
+ root = self._root
+ root_len = len(root) - 1
+ self._clear_cache()
+ self._defpath.update(getlibpaths(self._root))
+ libs = self._libs
+ obj_properties = self._obj_properties
+
+ lines = []
+
+ # Data from include_file is processed first so that it
+ # overrides any data from previously installed files.
+ if include_file is not None:
+ for line in grabfile(include_file):
+ lines.append((include_file, line))
+
+ aux_keys = [self._needed_aux_key]
+ can_lock = os.access(os.path.dirname(self._dbapi._dbroot), os.W_OK)
+ if can_lock:
+ self._dbapi.lock()
+ try:
+ for cpv in self._dbapi.cpv_all():
+ if exclude_pkgs is not None and cpv in exclude_pkgs:
+ continue
+ needed_file = self._dbapi.getpath(cpv,
+ filename=self._needed_aux_key)
+ for line in self._dbapi.aux_get(cpv, aux_keys)[0].splitlines():
+ lines.append((needed_file, line))
+ finally:
+ if can_lock:
+ self._dbapi.unlock()
+
+ # have to call readpecoff for preserved libs here as they aren't
+ # registered in NEEDED.PECOFF.1 files
+ plibs = set()
+ if preserve_paths is not None:
+ plibs.update(preserve_paths)
+ if self._dbapi._plib_registry and \
+ self._dbapi._plib_registry.hasEntries():
+ for cpv, items in \
+ self._dbapi._plib_registry.getPreservedLibs().items():
+ if exclude_pkgs is not None and cpv in exclude_pkgs:
+ # These preserved libs will either be unmerged,
+ # rendering them irrelevant, or they will be
+ # preserved in the replacement package and are
+ # already represented via the preserve_paths
+ # parameter.
+ continue
+ plibs.update(items)
+ if plibs:
+ args = ["readpecoff", self._dbapi.settings.get('CHOST')]
+ args.extend(os.path.join(root, x.lstrip("." + os.sep)) \
+ for x in plibs)
+ try:
+ proc = subprocess.Popen(args, stdout=subprocess.PIPE)
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ raise CommandNotFound(args[0])
+ else:
+ for l in proc.stdout:
+ try:
+ l = _unicode_decode(l,
+ encoding=_encodings['content'], errors='strict')
+ except UnicodeDecodeError:
+ l = _unicode_decode(l,
+ encoding=_encodings['content'], errors='replace')
+ writemsg_level(_("\nError decoding characters " \
+ "returned from readpecoff: %s\n\n") % (l,),
+ level=logging.ERROR, noiselevel=-1)
+ l = l[3:].rstrip("\n")
+ if not l:
+ continue
+ fields = l.split(";")
+ if len(fields) < 5:
+ writemsg_level(_("\nWrong number of fields " \
+ "returned from readpecoff: %s\n\n") % (l,),
+ level=logging.ERROR, noiselevel=-1)
+ continue
+ fields[1] = fields[1][root_len:]
+ plibs.discard(fields[1])
+ lines.append(("readpecoff", ";".join(fields)))
+ proc.wait()
+
+ if plibs:
+ # Preserved libraries that did not appear in the scanelf output.
+ # This is known to happen with statically linked libraries.
+ # Generate dummy lines for these, so we can assume that every
+ # preserved library has an entry in self._obj_properties. This
+ # is important in order to prevent findConsumers from raising
+ # an unwanted KeyError.
+ for x in plibs:
+ lines.append(("plibs", ";".join(['', x, '', '', ''])))
+
+ for location, l in lines:
+ l = l.rstrip("\n")
+ if not l:
+ continue
+ fields = l.split(";")
+ if len(fields) < 5:
+ writemsg_level(_("\nWrong number of fields " \
+ "in %s: %s\n\n") % (location, l),
+ level=logging.ERROR, noiselevel=-1)
+ continue
+ arch = fields[0]
+ obj = fields[1]
+ soname = fields[2]
+ path = set([normalize_path(x) \
+ for x in filter(None, fields[3].replace(
+ "${ORIGIN}", os.path.dirname(obj)).replace(
+ "$ORIGIN", os.path.dirname(obj)).split(":"))])
+ needed = [x for x in fields[4].split(",") if x]
+
+ obj_key = self._obj_key(obj)
+ indexed = True
+ myprops = obj_properties.get(obj_key)
+ if myprops is None:
+ indexed = False
+ myprops = (arch, needed, path, soname, set())
+ obj_properties[obj_key] = myprops
+ # All object paths are added into the obj_properties tuple.
+ myprops[4].add(obj)
+
+ # Don't index the same file more than once since only one
+ # set of data can be correct and therefore mixing data
+ # may corrupt the index (include_file overrides previously
+ # installed).
+ if indexed:
+ continue
+
+ arch_map = libs.get(arch)
+ if arch_map is None:
+ arch_map = {}
+ libs[arch] = arch_map
+ if soname:
+ soname_map = arch_map.get(soname)
+ if soname_map is None:
+ soname_map = self._soname_map_class(
+ providers=set(), consumers=set())
+ arch_map[soname] = soname_map
+ soname_map.providers.add(obj_key)
+ for needed_soname in needed:
+ soname_map = arch_map.get(needed_soname)
+ if soname_map is None:
+ soname_map = self._soname_map_class(
+ providers=set(), consumers=set())
+ arch_map[needed_soname] = soname_map
+ soname_map.consumers.add(obj_key)
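The _generate_object_key docstring above explains why the PE/COFF map cannot key objects by inode: on Interix/NTFS the inode field is zero or unstable. Reduced to a standalone sketch (hypothetical helper name, same keying idea as the method above):

import os

def pecoff_object_key(obj, root):
    # Key objects by (exists, realpath) instead of (st_dev, st_ino), since
    # inodes are not reliable on the filesystems this class targets.
    abs_path = os.path.join(root, obj.lstrip(os.sep))
    try:
        os.stat(abs_path)
    except OSError:
        return (False, os.path.realpath(abs_path))
    return (True, os.path.realpath(abs_path))

# Example: a missing /usr/bin/bar under root "/" keys as (False, "/usr/bin/bar");
# two hard links to the same existing file get two distinct keys, which is
# exactly the trade-off the docstring above describes.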
diff --cc pym/portage/util/_dyn_libs/LinkageMapXCoff.py
index 0e930fe,0000000..782cc54
mode 100644,000000..100644
--- a/pym/portage/util/_dyn_libs/LinkageMapXCoff.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapXCoff.py
@@@ -1,319 -1,0 +1,326 @@@
+# Copyright 1998-2011 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+import errno
+import logging
+import subprocess
+
+import portage
+from portage import _encodings
+from portage import _os_merge
+from portage import _unicode_decode
+from portage import _unicode_encode
+from portage.cache.mappings import slot_dict_class
+from portage.exception import CommandNotFound
+from portage.localization import _
+from portage.util import getlibpaths
+from portage.util import grabfile
+from portage.util import normalize_path
+from portage.util import writemsg_level
+from portage.const import EPREFIX, BASH_BINARY
+from portage.util._dyn_libs.LinkageMapELF import LinkageMapELF
+
+class LinkageMapXCoff(LinkageMapELF):
+
+ """Models dynamic linker dependencies."""
+
+ _needed_aux_key = "NEEDED.XCOFF.1"
+
+ class _ObjectKey(LinkageMapELF._ObjectKey):
+
+ def __init__(self, obj, root):
+ LinkageMapELF._ObjectKey.__init__(self, obj, root)
+
+ def _generate_object_key(self, obj, root):
+ """
+ Generate object key for a given object.
+
+ @param object: path to a file
+ @type object: string (example: '/usr/bin/bar')
+ @rtype: 2-tuple of types (long, int) if object exists. string if
+ object does not exist.
+ @return:
+ 1. 2-tuple of object's inode and device from a stat call, if object
+ exists.
+ 2. realpath of object if object does not exist.
+
+ """
+
+ os = _os_merge
+
+ try:
+ _unicode_encode(obj,
+ encoding=_encodings['merge'], errors='strict')
+ except UnicodeEncodeError:
+ # The package appears to have been merged with a
+ # different value of sys.getfilesystemencoding(),
+ # so fall back to utf_8 if appropriate.
+ try:
+ _unicode_encode(obj,
+ encoding=_encodings['fs'], errors='strict')
+ except UnicodeEncodeError:
+ pass
+ else:
+ os = portage.os
+
+ abs_path = os.path.join(root, obj.lstrip(os.sep))
+ try:
+ object_stat = os.stat(abs_path)
+ except OSError:
+ # Use the realpath as the key if the file does not exist on the
+ # filesystem.
+ return os.path.realpath(abs_path)
+ # Return a tuple of the device and inode, as well as the basename,
+ # because of hardlinks the device and inode might be identical.
+ return (object_stat.st_dev, object_stat.st_ino, os.path.basename(abs_path.rstrip(os.sep)))
+
+ def file_exists(self):
+ """
+ Determine if the file for this key exists on the filesystem.
+
+ @rtype: Boolean
+ @return:
+ 1. True if the file exists.
+ 2. False if the file does not exist or is a broken symlink.
+
+ """
+ return isinstance(self._key, tuple)
+
+ class _LibGraphNode(_ObjectKey):
+ __slots__ = ("alt_paths",)
+
- def __init__(self, obj, root):
- LinkageMapXCoff._ObjectKey.__init__(self, obj, root)
++ def __init__(self, key):
++ """
++ Create a _LibGraphNode from an existing _ObjectKey.
++ This re-uses the _key attribute in order to avoid repeating
++ any previous stat calls, which helps to avoid potential race
++ conditions due to inconsistent stat results when the
++ file system is being modified concurrently.
++ """
++ self._key = key._key
+ self.alt_paths = set()
+
+ def __str__(self):
+ return str(sorted(self.alt_paths))
+
+ def rebuild(self, exclude_pkgs=None, include_file=None,
+ preserve_paths=None):
+ """
+ Raises CommandNotFound if there are preserved libs
+ and the scanelf binary is not available.
+
+ @param exclude_pkgs: A set of packages that should be excluded from
+ the LinkageMap, since they are being unmerged and their NEEDED
+ entries are therefore irrelevant and would only serve to corrupt
+ the LinkageMap.
+ @type exclude_pkgs: set
+ @param include_file: The path of a file containing NEEDED entries for
+ a package which does not exist in the vardbapi yet because it is
+ currently being merged.
+ @type include_file: String
+ @param preserve_paths: Libraries preserved by a package instance that
+ is currently being merged. They need to be explicitly passed to the
+ LinkageMap, since they are not registered in the
+ PreservedLibsRegistry yet.
+ @type preserve_paths: set
+ """
+
+ os = _os_merge
+ root = self._root
+ root_len = len(root) - 1
+ self._clear_cache()
+ self._defpath.update(getlibpaths(self._root))
+ libs = self._libs
+ obj_properties = self._obj_properties
+
+ lines = []
+
+ # Data from include_file is processed first so that it
+ # overrides any data from previously installed files.
+ if include_file is not None:
+ for line in grabfile(include_file):
+ lines.append((include_file, line))
+
+ aux_keys = [self._needed_aux_key]
+ can_lock = os.access(os.path.dirname(self._dbapi._dbroot), os.W_OK)
+ if can_lock:
+ self._dbapi.lock()
+ try:
+ for cpv in self._dbapi.cpv_all():
+ if exclude_pkgs is not None and cpv in exclude_pkgs:
+ continue
+ needed_file = self._dbapi.getpath(cpv,
+ filename=self._needed_aux_key)
+ for line in self._dbapi.aux_get(cpv, aux_keys)[0].splitlines():
+ lines.append((needed_file, line))
+ finally:
+ if can_lock:
+ self._dbapi.unlock()
+
+ # have to call scanelf for preserved libs here as they aren't
+ # registered in NEEDED.XCOFF.1 files
+ plibs = set()
+ if preserve_paths is not None:
+ plibs.update(preserve_paths)
+ if self._dbapi._plib_registry and \
+ self._dbapi._plib_registry.hasEntries():
+ for cpv, items in \
+ self._dbapi._plib_registry.getPreservedLibs().items():
+ if exclude_pkgs is not None and cpv in exclude_pkgs:
+ # These preserved libs will either be unmerged,
+ # rendering them irrelevant, or they will be
+ # preserved in the replacement package and are
+ # already represented via the preserve_paths
+ # parameter.
+ continue
+ plibs.update(items)
+ if plibs:
+ for x in plibs:
+ args = [BASH_BINARY, "-c", ':'
+ + '; member="' + x + '"'
+ + '; archive=${member}'
+ + '; if [[ ${member##*/} == .*"["*"]" ]]'
+ + '; then member=${member%/.*}/${member##*/.}'
+ + '; archive=${member%[*}'
+ + '; fi'
+ + '; member=${member#${archive}}'
+ + '; [[ -r ${archive} ]] || chmod a+r "${archive}"'
+ + '; eval $(aixdll-query "${archive}${member}" FILE MEMBER FLAGS FORMAT RUNPATH DEPLIBS)'
+ + '; [[ -n ${member} ]] && needed=${FILE##*/} || needed='
+ + '; for deplib in ${DEPLIBS}'
+ + '; do eval deplib=${deplib}'
+ + '; if [[ ${deplib} != "." && ${deplib} != ".." ]]'
+ + '; then needed="${needed}${needed:+,}${deplib}"'
+ + '; fi'
+ + '; done'
+ + '; [[ -n ${MEMBER} ]] && MEMBER="[${MEMBER}]"'
+ + '; [[ " ${FLAGS} " == *" SHROBJ "* ]] && soname=${FILE##*/}${MEMBER} || soname='
+ + '; echo "${FORMAT##* }${FORMAT%%-*};${FILE#${ROOT%/}}${MEMBER};${soname};${RUNPATH};${needed}"'
+ + '; [[ -z ${member} && -n ${MEMBER} ]] && echo "${FORMAT##* }${FORMAT%%-*};${FILE#${ROOT%/}};${FILE##*/};;"'
+ ]
+ try:
+ proc = subprocess.Popen(args, stdout=subprocess.PIPE)
+ except EnvironmentError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ raise CommandNotFound(args[0])
+ else:
+ for l in proc.stdout:
+ try:
+ l = _unicode_decode(l,
+ encoding=_encodings['content'], errors='strict')
+ except UnicodeDecodeError:
+ l = _unicode_decode(l,
+ encoding=_encodings['content'], errors='replace')
+ writemsg_level(_("\nError decoding characters " \
+ "returned from aixdll-query: %s\n\n") % (l,),
+ level=logging.ERROR, noiselevel=-1)
+ l = l.rstrip("\n")
+ if not l:
+ continue
+ fields = l.split(";")
+ if len(fields) < 5:
+ writemsg_level(_("\nWrong number of fields " \
+ "returned from aixdll-query: %s\n\n") % (l,),
+ level=logging.ERROR, noiselevel=-1)
+ continue
+ fields[1] = fields[1][root_len:]
+ plibs.discard(fields[1])
+ lines.append(("aixdll-query", ";".join(fields)))
+ proc.wait()
+
+ if plibs:
+ # Preserved libraries that did not appear in the bash
+ # aixdll-query code output. This is known to happen with
+ # statically linked libraries. Generate dummy lines for
+ # these, so we can assume that every preserved library has
+ # an entry in self._obj_properties. This is important in
+ # order to prevent findConsumers from raising an unwanted
+ # KeyError.
+ for x in plibs:
+ lines.append(("plibs", ";".join(['', x, '', '', ''])))
+
+ for location, l in lines:
+ l = l.rstrip("\n")
+ if not l:
+ continue
+ fields = l.split(";")
+ if len(fields) < 5:
+ writemsg_level(_("\nWrong number of fields " \
+ "in %s: %s\n\n") % (location, l),
+ level=logging.ERROR, noiselevel=-1)
+ continue
+ arch = fields[0]
+
+ def as_contentmember(obj):
+ if obj.endswith("]"):
+ if obj.find("/") >= 0:
+ return obj[:obj.rfind("/")] + "/." + obj[obj.rfind("/")+1:]
+ return "." + obj
+ return obj
+
+ obj = as_contentmember(fields[1])
+ soname = as_contentmember(fields[2])
+ path = set([normalize_path(x) \
+ for x in filter(None, fields[3].replace(
+ "${ORIGIN}", os.path.dirname(obj)).replace(
+ "$ORIGIN", os.path.dirname(obj)).split(":"))])
+ needed = [as_contentmember(x) for x in fields[4].split(",") if x]
+
+ obj_key = self._obj_key(obj)
+ indexed = True
+ myprops = obj_properties.get(obj_key)
+ if myprops is None:
+ indexed = False
+ myprops = (arch, needed, path, soname, set())
+ obj_properties[obj_key] = myprops
+ # All object paths are added into the obj_properties tuple.
+ myprops[4].add(obj)
+
+ # Don't index the same file more than once since only one
+ # set of data can be correct and therefore mixing data
+ # may corrupt the index (include_file overrides previously
+ # installed).
+ if indexed:
+ continue
+
+ arch_map = libs.get(arch)
+ if arch_map is None:
+ arch_map = {}
+ libs[arch] = arch_map
+ if soname:
+ soname_map = arch_map.get(soname)
+ if soname_map is None:
+ soname_map = self._soname_map_class(
+ providers=set(), consumers=set())
+ arch_map[soname] = soname_map
+ soname_map.providers.add(obj_key)
+ for needed_soname in needed:
+ soname_map = arch_map.get(needed_soname)
+ if soname_map is None:
+ soname_map = self._soname_map_class(
+ providers=set(), consumers=set())
+ arch_map[needed_soname] = soname_map
+ soname_map.consumers.add(obj_key)
+
+ def getSoname(self, obj):
+ """
+ Return the soname associated with an object.
+
+ @param obj: absolute path to an object
+ @type obj: string (example: '/usr/bin/bar')
+ @rtype: string
+ @return: soname as a string
+
+ """
+ if not self._libs:
+ self.rebuild()
+ if isinstance(obj, self._ObjectKey):
+ obj_key = obj
+ if obj_key not in self._obj_properties:
+ raise KeyError("%s not in object list" % obj_key)
+ return self._obj_properties[obj_key][3]
+ if obj not in self._obj_key_cache:
+ raise KeyError("%s not in object list" % obj)
+ return self._obj_properties[self._obj_key_cache[obj]][3]
+
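The as_contentmember helper in the rebuild() loop above encodes the AIX convention also used by the preinst_aix/postinst_aix hooks later in this thread: an archive member such as libfoo.a[shr.o] is tracked through a dot-prefixed stamp file placed next to the archive, while plain objects pass through unchanged. A worked example (library names are hypothetical):

def as_contentmember(obj):
    # Same transformation as in LinkageMapXCoff.rebuild() above.
    if obj.endswith("]"):
        if obj.find("/") >= 0:
            return obj[:obj.rfind("/")] + "/." + obj[obj.rfind("/") + 1:]
        return "." + obj
    return obj

assert as_contentmember("/usr/lib/libfoo.a[shr.o]") == "/usr/lib/.libfoo.a[shr.o]"
assert as_contentmember("/usr/lib/libbar.so") == "/usr/lib/libbar.so"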
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-05-14 13:59 Fabian Groffen
From: Fabian Groffen @ 2011-05-14 13:59 UTC
To: gentoo-commits
commit: d4af3319a7a1ec617498252f3d345622759df5be
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat May 14 13:22:16 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat May 14 13:22:16 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d4af3319
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
pym/portage/util/_dyn_libs/LinkageMapELF.py
bin/ebuild-helpers/doins | 5 +-
bin/ebuild.sh | 18 +-
bin/repoman | 1 +
cnf/make.globals | 2 +-
man/emerge.1 | 29 +-
man/make.conf.5 | 11 +-
pym/_emerge/EbuildPhase.py | 20 +-
pym/_emerge/FakeVartree.py | 17 +-
pym/_emerge/RootConfig.py | 10 +-
pym/_emerge/Scheduler.py | 45 ++-
pym/_emerge/actions.py | 78 +---
pym/_emerge/create_depgraph_params.py | 3 +-
pym/_emerge/depgraph.py | 258 +++++----
pym/_emerge/help.py | 45 ++-
pym/_emerge/main.py | 78 +++-
pym/_emerge/unmerge.py | 20 +-
pym/portage/cache/sqlite.py | 7 +-
pym/portage/cache/volatile.py | 3 +-
pym/portage/const.py | 7 +-
pym/portage/dbapi/_MergeProcess.py | 36 ++-
pym/portage/dbapi/_expand_new_virt.py | 72 +++
pym/portage/dbapi/cpv_expand.py | 37 +-
pym/portage/dbapi/porttree.py | 38 +-
pym/portage/dbapi/vartree.py | 567 ++++++++++++--------
pym/portage/dep/dep_check.py | 15 +-
pym/portage/manifest.py | 4 +-
pym/portage/package/ebuild/config.py | 6 +-
pym/portage/package/ebuild/doebuild.py | 15 +-
pym/portage/tests/resolver/test_rebuild.py | 79 +++-
pym/portage/util/_dyn_libs/LinkageMapELF.py | 65 ++-
.../util/_dyn_libs/PreservedLibsRegistry.py | 69 ++-
31 files changed, 1107 insertions(+), 553 deletions(-)
diff --cc pym/_emerge/actions.py
index 1672d47,215203a..80d872d
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -28,10 -28,11 +28,11 @@@ from portage import o
from portage import digraph
from portage import _unicode_decode
from portage.cache.cache_errors import CacheError
-from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH
+from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH, EPREFIX
from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
from portage.dbapi.dep_expand import dep_expand
- from portage.dep import Atom, extended_cp_match, _get_useflag_re
+ from portage.dbapi._expand_new_virt import expand_new_virt
+ from portage.dep import Atom, extended_cp_match
from portage.exception import InvalidAtom
from portage.output import blue, bold, colorize, create_color_func, darkgreen, \
red, yellow
diff --cc pym/portage/const.py
index 7df2ecb,98f3dac..6057520
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -132,13 -88,13 +132,14 @@@ EBUILD_PHASES = ("pretend",
SUPPORTED_FEATURES = frozenset([
"assume-digests", "binpkg-logs", "buildpkg", "buildsyspkg", "candy",
"ccache", "chflags", "collision-protect", "compress-build-logs",
- "digest", "distcc", "distlocks", "fakeroot",
+ "digest", "distcc", "distlocks", "ebuild-locks", "fakeroot",
"fail-clean", "fixpackages", "force-mirror", "getbinpkg",
"installsources", "keeptemp", "keepwork", "fixlafiles", "lmirror",
+ "macossandbox", "macosprefixsandbox", "macosusersandbox",
"metadata-transfer", "mirror", "multilib-strict", "news",
- "noauto", "noclean", "nodoc", "noinfo", "noman", "nostrip",
- "notitles", "parallel-fetch", "parse-eapi-ebuild-head",
+ "noauto", "noclean", "nodoc", "noinfo", "noman",
+ "nostrip", "notitles", "parallel-fetch", "parallel-install",
+ "parse-eapi-ebuild-head",
"prelink-checksums", "preserve-libs",
"protect-owned", "python-trace", "sandbox",
"selinux", "sesandbox", "severe", "sfperms",
diff --cc pym/portage/dbapi/vartree.py
index b88fbe5,cdae340..581300f
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@@ -162,19 -163,11 +167,21 @@@ class vardbapi(dbapi)
self._linkmap = None
if _ENABLE_DYN_LINK_MAP:
- self._linkmap = LinkageMap(self)
+ chost = self.settings.get('CHOST')
+ if not chost:
+ chost = 'lunix?' # this happens when profiles are not available
+ if chost.find('darwin') >= 0:
+ self._linkmap = LinkageMapMachO(self)
+ elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
+ self._linkmap = LinkageMapPeCoff(self)
+ elif chost.find('aix') >= 0:
+ self._linkmap = LinkageMapXCoff(self)
+ else:
+ self._linkmap = LinkageMap(self)
self._owners = self._owners_db(self)
+ self._cached_counter = None
+
def getpath(self, mykey, filename=None):
# This is an optimized hotspot, so don't use unicode-wrapped
# os module and don't use os.path.join().
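The vardbapi hunk above picks a LinkageMap implementation from CHOST. As a standalone sketch (hypothetical helper; class and module names as used elsewhere in this thread, plain substring checks mirroring the diff):

from portage.util._dyn_libs.LinkageMapELF import LinkageMapELF
from portage.util._dyn_libs.LinkageMapMachO import LinkageMapMachO
from portage.util._dyn_libs.LinkageMapPeCoff import LinkageMapPeCoff
from portage.util._dyn_libs.LinkageMapXCoff import LinkageMapXCoff

def select_linkmap_class(chost):
    # An empty CHOST (no profile available) matches none of the branches
    # and therefore falls through to the ELF implementation, as in the diff.
    chost = chost or ""
    if "darwin" in chost:
        return LinkageMapMachO
    if "interix" in chost or "winnt" in chost:
        return LinkageMapPeCoff
    if "aix" in chost:
        return LinkageMapXCoff
    return LinkageMapELF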
diff --cc pym/portage/util/_dyn_libs/LinkageMapELF.py
index b9c022f,fe86a7a..73b7752
--- a/pym/portage/util/_dyn_libs/LinkageMapELF.py
+++ b/pym/portage/util/_dyn_libs/LinkageMapELF.py
@@@ -179,12 -199,24 +200,24 @@@ class LinkageMapELF(object)
# have to call scanelf for preserved libs here as they aren't
# registered in NEEDED.ELF.2 files
plibs = set()
- if self._dbapi._plib_registry and self._dbapi._plib_registry.getPreservedLibs():
- args = [EPREFIX + "/usr/bin/scanelf", "-qF", "%a;%F;%S;%r;%n"]
- for items in self._dbapi._plib_registry.getPreservedLibs().values():
+ if preserve_paths is not None:
+ plibs.update(preserve_paths)
+ if self._dbapi._plib_registry and \
+ self._dbapi._plib_registry.hasEntries():
+ for cpv, items in \
+ self._dbapi._plib_registry.getPreservedLibs().items():
+ if exclude_pkgs is not None and cpv in exclude_pkgs:
+ # These preserved libs will either be unmerged,
+ # rendering them irrelevant, or they will be
+ # preserved in the replacement package and are
+ # already represented via the preserve_paths
+ # parameter.
+ continue
plibs.update(items)
- args.extend(os.path.join(root, x.lstrip("." + os.sep)) \
- for x in items)
+ if plibs:
- args = ["/usr/bin/scanelf", "-qF", "%a;%F;%S;%r;%n"]
++ args = [EPREFIX + "/usr/bin/scanelf", "-qF", "%a;%F;%S;%r;%n"]
+ args.extend(os.path.join(root, x.lstrip("." + os.sep)) \
+ for x in plibs)
try:
proc = subprocess.Popen(args, stdout=subprocess.PIPE)
except EnvironmentError as e:
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-05-02 17:41 Fabian Groffen
From: Fabian Groffen @ 2011-05-02 17:41 UTC
To: gentoo-commits
commit: 6a851bcab5cab34dba3a1d89ce603480989e02a1
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon May 2 17:38:24 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon May 2 17:38:24 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=6a851bca
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/misc-functions.sh
pym/portage/package/ebuild/doebuild.py
runtests.sh
bin/ebuild.sh | 35 ++
bin/isolated-functions.sh | 4 +-
bin/misc-functions.sh | 86 ++--
bin/repoman | 48 +--
man/emerge.1 | 29 ++-
man/portage.5 | 20 +-
man/repoman.1 | 12 +-
pym/_emerge/DepPriority.py | 10 +-
pym/_emerge/DepPriorityNormalRange.py | 4 +-
pym/_emerge/DepPrioritySatisfiedRange.py | 3 +-
pym/_emerge/Dependency.py | 9 +-
pym/_emerge/EbuildPhase.py | 7 +-
pym/_emerge/Package.py | 3 +-
pym/_emerge/Scheduler.py | 65 ---
pym/_emerge/UnmergeDepPriority.py | 6 +-
pym/_emerge/actions.py | 145 +++++--
pym/_emerge/create_depgraph_params.py | 6 +-
pym/_emerge/depgraph.py | 482 ++++++++++++++++++----
pym/_emerge/emergelog.py | 10 +-
pym/_emerge/help.py | 44 ++-
pym/_emerge/main.py | 117 +++++-
pym/_emerge/resolver/backtracking.py | 15 +-
pym/portage/dbapi/bintree.py | 8 +-
pym/portage/package/ebuild/doebuild.py | 35 +-
pym/portage/tests/resolver/ResolverPlayground.py | 4 +-
pym/portage/tests/resolver/test_autounmask.py | 33 ++-
pym/portage/tests/resolver/test_rebuild.py | 69 +++
runtests.sh | 4 +-
28 files changed, 978 insertions(+), 335 deletions(-)
diff --cc bin/misc-functions.sh
index 3bf5060,b28b73f..dce730a
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1249,220 -741,6 +1267,204 @@@ install_mask()
set -${shopts}
}
- preinst_bsdflags() {
- hasq chflags $FEATURES || return
- # Save all the file flags for restoration after installation.
- mtree -c -p "${D}" -k flags > "${T}/bsdflags.mtree"
- # Remove all the file flags so that the merge phase can do anything
- # necessary.
- chflags -R noschg,nouchg,nosappnd,nouappnd "${D}"
- chflags -R nosunlnk,nouunlnk "${D}" 2>/dev/null
- }
-
- postinst_bsdflags() {
- hasq chflags $FEATURES || return
- # Restore all the file flags that were saved before installation.
- mtree -e -p "${ROOT}" -U -k flags < "${T}/bsdflags.mtree" &> /dev/null
- }
-
+preinst_aix() {
+ if [[ ${CHOST} != *-aix* ]] || hasq binchecks ${RESTRICT}; then
+ return 0
+ fi
+ local ar strip
+ if type ${CHOST}-ar >/dev/null 2>&1 && type ${CHOST}-strip >/dev/null 2>&1; then
+ ar=${CHOST}-ar
+ strip=${CHOST}-strip
+ elif [[ ${CBUILD} == "${CHOST}" ]] && type ar >/dev/null 2>&1 && type strip >/dev/null 2>&1; then
+ ar=ar
+ strip=strip
+ elif [[ -x /usr/ccs/bin/ar && -x /usr/ccs/bin/strip ]]; then
+ ar=/usr/ccs/bin/ar
+ strip=/usr/ccs/bin/strip
+ else
+ die "cannot find where to use 'ar' and 'strip' from"
+ fi
+ local archives_members= archives=() chmod400files=()
+ local archive_member soname runpath needed archive contentmember
+ while read archive_member; do
+ archive_member=${archive_member#*;${EPREFIX}/} # drop "^type;EPREFIX/"
+ soname=${archive_member#*;}
+ runpath=${soname#*;}
+ needed=${runpath#*;}
+ soname=${soname%%;*}
+ runpath=${runpath%%;*}
+ archive_member=${archive_member%%;*} # drop ";soname;runpath;needed$"
+ archive=${archive_member%[*}
+ if [[ ${archive_member} != *'['*']' ]]; then
+ if [[ "${soname};${runpath};${needed}" == "${archive##*/};;" && -e ${EROOT}${archive} ]]; then
+ # most likely is an archive stub that already exists,
+ # may have to preserve members being a shared object.
+ archives[${#archives[@]}]=${archive}
+ fi
+ continue
+ fi
+ archives_members="${archives_members}:(${archive_member}):"
+ contentmember="${archive%/*}/.${archive##*/}${archive_member#${archive}}"
+ # portage does os.lstat() on merged files every now
+ # and then, so keep stamp-files for archive members
+ # around to get the preserve-libs feature working.
+ { echo "Please leave this file alone, it is an important helper"
+ echo "for portage to implement the 'preserve-libs' feature on AIX."
+ } > "${ED}${contentmember}" || die "cannot create ${contentmember}"
+ chmod400files[${#chmod400files[@]}]=${ED}${contentmember}
+ done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+ [[ ${#chmod400files[@]} == 0 ]] ||
+ chmod 0400 "${chmod400files[@]}" || die "cannot chmod ${chmod400files[@]}"
+
+ local preservemembers libmetadir prunedirs=()
+ local FILE MEMBER FLAGS
+ for archive in "${archives[@]}"; do
+ preservemembers=
+ while read line; do
+ [[ -n ${line} ]] || continue
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == ${EROOT}${archive} ]] ||
+ die "invalid result of aixdll-query for ${EROOT}${archive}"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${archives_members} == *":(${archive}[${MEMBER}]):"* ]] && continue
+ preservemembers="${preservemembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+ [[ -n ${preservemembers} ]] || continue
+ einfo "preserving (on spec) ${archive}[${preservemembers# }]"
+ libmetadir=${ED}${archive%/*}/.${archive##*/}
+ mkdir "${libmetadir}" || die "cannot create ${libmetadir}"
+ pushd "${libmetadir}" >/dev/null || die "cannot cd to ${libmetadir}"
+ ${ar} -X32_64 -x "${EROOT}${archive}" ${preservemembers} || die "cannot unpack ${EROOT}${archive}"
+ chmod u+w ${preservemembers} || die "cannot chmod${preservemembers}"
+ ${strip} -X32_64 -e ${preservemembers} || die "cannot strip${preservemembers}"
+ ${ar} -X32_64 -q "${ED}${archive}" ${preservemembers} || die "cannot update ${archive}"
+ eend $?
+ popd >/dev/null || die "cannot leave ${libmetadir}"
+ prunedirs[${#prunedirs[@]}]=${libmetadir}
+ done
+ [[ ${#prunedirs[@]} == 0 ]] ||
+ rm -rf "${prunedirs[@]}" || die "cannot prune ${prunedirs[@]}"
+ return 0
+}
+
+postinst_aix() {
+ if [[ ${CHOST} != *-aix* ]] || hasq binchecks ${RESTRICT}; then
+ return 0
+ fi
+ local MY_PR=${PR%r0}
+ local ar strip
+ if type ${CHOST}-ar >/dev/null 2>&1 && type ${CHOST}-strip >/dev/null 2>&1; then
+ ar=${CHOST}-ar
+ strip=${CHOST}-strip
+ elif [[ ${CBUILD} == "${CHOST}" ]] && type ar >/dev/null 2>&1 && type strip >/dev/null 2>&1; then
+ ar=ar
+ strip=strip
+ elif [[ -x /usr/ccs/bin/ar && -x /usr/ccs/bin/strip ]]; then
+ ar=/usr/ccs/bin/ar
+ strip=/usr/ccs/bin/strip
+ else
+ die "cannot find where to use 'ar' and 'strip' from"
+ fi
+ local archives_members= archives=() activearchives=
+ local archive_member soname runpath needed
+ while read archive_member; do
+ archive_member=${archive_member#*;${EPREFIX}/} # drop "^type;EPREFIX/"
+ soname=${archive_member#*;}
+ runpath=${soname#*;}
+ needed=${runpath#*;}
+ soname=${soname%%;*}
+ runpath=${runpath%%;*}
+ archive_member=${archive_member%%;*} # drop ";soname;runpath;needed$"
+ [[ ${archive_member} == *'['*']' ]] && continue
+ [[ "${soname};${runpath};${needed}" == "${archive_member##*/};;" ]] || continue
+ # most likely is an archive stub, we might have to
+ # drop members being preserved shared objects.
+ archives[${#archives[@]}]=${archive_member}
+ activearchives="${activearchives}:(${archive_member}):"
+ done < "${PORTAGE_BUILDDIR}"/build-info/NEEDED.XCOFF.1
+
+ local type allcontentmembers= oldarchives=()
+ local contentmember
+ while read type contentmember; do
+ [[ ${type} == 'obj' ]] || continue
+ contentmember=${contentmember% *} # drop " timestamp$"
+ contentmember=${contentmember% *} # drop " hash$"
+ [[ ${contentmember##*/} == *'['*']' ]] || continue
+ contentmember=${contentmember#${EPREFIX}/}
+ allcontentmembers="${allcontentmembers}:(${contentmember}):"
+ contentmember=${contentmember%[*}
+ contentmember=${contentmember%/.*}/${contentmember##*/.}
+ [[ ${activearchives} == *":(${contentmember}):"* ]] && continue
+ oldarchives[${#oldarchives[@]}]=${contentmember}
+ done < "${EPREFIX}/var/db/pkg/${CATEGORY}/${P}${MY_PR:+-}${MY_PR}/CONTENTS"
+
+ local archive line delmembers
+ local FILE MEMBER FLAGS
+ for archive in "${archives[@]}"; do
+ [[ -r ${EROOT}${archive} && -w ${EROOT}${archive} ]] ||
+ chmod a+r,u+w "${EROOT}${archive}" || die "cannot chmod ${EROOT}${archive}"
+ delmembers=
+ while read line; do
+ [[ -n ${line} ]] || continue
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == "${EROOT}${archive}" ]] ||
+ die "invalid result '${FILE}' of aixdll-query, expected '${EROOT}${archive}'"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${allcontentmembers} == *":(${archive%/*}/.${archive##*/}[${MEMBER}]):"* ]] && continue
+ delmembers="${delmembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+ [[ -n ${delmembers} ]] || continue
+ einfo "dropping ${archive}[${delmembers# }]"
+ rm -f "${EROOT}${archive}".new || die "cannot prune ${EROOT}${archive}.new"
+ cp "${EROOT}${archive}" "${EROOT}${archive}".new || die "cannot backup ${archive}"
+ ${ar} -X32_64 -z -o -d "${EROOT}${archive}".new ${delmembers} || die "cannot remove${delmembers} from ${archive}.new"
+ mv -f "${EROOT}${archive}".new "${EROOT}${archive}" || die "cannot put ${EROOT}${archive} in place"
+ eend $?
+ done
+ local libmetadir keepmembers prunedirs=()
+ for archive in "${oldarchives[@]}"; do
+ [[ -r ${EROOT}${archive} && -w ${EROOT}${archive} ]] ||
+ chmod a+r,u+w "${EROOT}${archive}" || die "cannot chmod ${EROOT}${archive}"
+ keepmembers=
+ while read line; do
+ FILE= MEMBER= FLAGS=
+ eval ${line}
+ [[ ${FILE} == "${EROOT}${archive}" ]] ||
+ die "invalid result of aixdll-query for ${EROOT}${archive}"
+ [[ -n ${MEMBER} && " ${FLAGS} " == *" SHROBJ "* ]] || continue
+ [[ ${allcontentmembers} == *":(${archive%/*}/.${archive##*/}[${MEMBER}]):"* ]] || continue
+ keepmembers="${keepmembers} ${MEMBER}"
+ done <<-EOF
+ $(aixdll-query "${EROOT}${archive}" FILE MEMBER FLAGS)
+ EOF
+
+ if [[ -n ${keepmembers} ]]; then
+ einfo "preserving (extra)${keepmembers}"
+ libmetadir=${EROOT}${archive%/*}/.${archive##*/}
+ [[ ! -e ${libmetadir} ]] || rm -rf "${libmetadir}" || die "cannot prune ${libmetadir}"
+ mkdir "${libmetadir}" || die "cannot create ${libmetadir}"
+ pushd "${libmetadir}" >/dev/null || die "cannot cd to ${libmetadir}"
+ ${ar} -X32_64 -x "${EROOT}${archive}" ${keepmembers} || die "cannot unpack ${archive}"
+ ${strip} -X32_64 -e ${keepmembers} || die "cannot strip ${keepmembers}"
+ rm -f "${EROOT}${archive}.new" || die "cannot prune ${EROOT}${archive}.new"
+ ${ar} -X32_64 -q "${EROOT}${archive}.new" ${keepmembers} || die "cannot create ${EROOT}${archive}.new"
+ mv -f "${EROOT}${archive}.new" "${EROOT}${archive}" || die "cannot put ${EROOT}${archive} in place"
+ popd > /dev/null || die "cannot leave ${libmetadir}"
+ prunedirs[${#prunedirs[@]}]=${libmetadir}
+ eend $?
+ fi
+ done
+ [[ ${#prunedirs[@]} == 0 ]] ||
+ rm -rf "${prunedirs[@]}" || die "cannot prune ${prunedirs[@]}"
+ return 0
+}
+
preinst_mask() {
if [ -z "${D}" ]; then
eerror "${FUNCNAME}: D is unset"
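Both LinkageMapXCoff.rebuild() earlier in this thread and the preinst_aix/postinst_aix hooks above consume records of five ';'-separated fields: format, object path (with an optional [member] suffix), soname, runpath, and a comma-separated needed list. A hypothetical record, parsed the same way in Python (all field values invented for illustration; the real records in build-info/NEEDED.XCOFF.1 carry EPREFIX-prefixed paths, which the hooks strip):

record = "XCOFF-32;/usr/lib/libfoo.a[shr.o];libfoo.a[shr.o];/usr/lib;libc.a[shr.o]"
fmt, path, soname, runpath, needed = record.split(";")

assert path.endswith("]")                      # an archive member, not a plain object
assert needed.split(",") == ["libc.a[shr.o]"]  # dependencies, one per comma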
diff --cc pym/_emerge/actions.py
index a15f5e4,6379b36..1672d47
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -28,10 -28,10 +28,10 @@@ from portage import o
from portage import digraph
from portage import _unicode_decode
from portage.cache.cache_errors import CacheError
-from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH
+from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH, EPREFIX
from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
from portage.dbapi.dep_expand import dep_expand
- from portage.dep import Atom, extended_cp_match
+ from portage.dep import Atom, extended_cp_match, _get_useflag_re
from portage.exception import InvalidAtom
from portage.output import blue, bold, colorize, create_color_func, darkgreen, \
red, yellow
diff --cc pym/portage/package/ebuild/doebuild.py
index 94c0961,1c04822..ad48398
--- a/pym/portage/package/ebuild/doebuild.py
+++ b/pym/portage/package/ebuild/doebuild.py
@@@ -1274,16 -1233,10 +1274,14 @@@ _post_phase_cmds =
"install_symlink_html_docs"],
"preinst" : [
+ "preinst_aix",
"preinst_sfperms",
"preinst_selinux_labels",
"preinst_suid_scan",
- "preinst_mask"]
+ "preinst_mask"],
+
+ "postinst" : [
- "postinst_aix",
- "postinst_bsdflags"]
++ "postinst_aix"]
}
def _post_phase_userpriv_perms(mysettings):
diff --cc runtests.sh
index 0ae4931,11aec60..dadd32d
--- a/runtests.sh
+++ b/runtests.sh
@@@ -1,4 -1,6 +1,6 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
+ # Copyright 2010-2011 Gentoo Foundation
+ # Distributed under the terms of the GNU General Public License v2
PYTHON_VERSIONS="2.6 2.7 3.1 3.2 3.3"
@@@ -24,9 -26,9 +26,9 @@@ trap interrupted SIGIN
exit_status="0"
for version in ${PYTHON_VERSIONS}; do
- if [[ -x /usr/bin/python${version} ]]; then
+ if [[ -x @PREFIX_PORTAGE_PYTHON@${version} ]]; then
echo -e "${GOOD}Testing with Python ${version}...${NORMAL}"
- if ! @PREFIX_PORTAGE_PYTHON@${version} pym/portage/tests/runTests; then
- if ! /usr/bin/python${version} pym/portage/tests/runTests "$@" ; then
++ if ! @PREFIX_PORTAGE_PYTHON@${version} pym/portage/tests/runTests "$@" ; then
echo -e "${BAD}Testing with Python ${version} failed${NORMAL}"
exit_status="1"
fi
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-04-24 12:08 Fabian Groffen
From: Fabian Groffen @ 2011-04-24 12:08 UTC
To: gentoo-commits
commit: 8098ee743006bc4400652bdde8f8db7fa0dffa57
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 24 12:05:49 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Apr 24 12:05:49 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=8098ee74
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild | 6 ++----
pym/repoman/checks.py | 5 +++++
2 files changed, 7 insertions(+), 4 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-04-15 18:27 Fabian Groffen
From: Fabian Groffen @ 2011-04-15 18:27 UTC
To: gentoo-commits
commit: d3ccb4434f987833b705b3a3bceef68e8032e673
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 15 18:25:31 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Apr 15 18:25:31 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d3ccb443
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild-helpers/doins | 9 +++-
bin/repoman | 53 +++++++++++---------
cnf/make.globals | 5 ++-
man/make.conf.5 | 10 +++-
pym/_emerge/Scheduler.py | 10 +++-
pym/_emerge/main.py | 8 ++-
pym/_emerge/resolver/output.py | 4 +-
pym/portage/const.py | 6 +-
pym/portage/dbapi/porttree.py | 3 +
pym/portage/dep/__init__.py | 8 ++-
.../package/ebuild/_config/special_env_vars.py | 3 +-
pym/portage/package/ebuild/doebuild.py | 2 +-
pym/portage/package/ebuild/fetch.py | 12 ++--
pym/portage/tests/dep/testExtractAffectingUSE.py | 5 ++-
14 files changed, 90 insertions(+), 48 deletions(-)
diff --cc cnf/make.globals
index c6ba9f5,bcad5de..930b3d5
--- a/cnf/make.globals
+++ b/cnf/make.globals
@@@ -131,10 -119,13 +131,13 @@@ PORTAGE_WORKDIR_MODE="0700
PORTAGE_ELOG_CLASSES="log warn error"
PORTAGE_ELOG_SYSTEM="save_summary echo"
-PORTAGE_ELOG_MAILURI="root"
+PORTAGE_ELOG_MAILURI="@rootuser@"
PORTAGE_ELOG_MAILSUBJECT="[portage] ebuild log for \${PACKAGE} on \${HOST}"
-PORTAGE_ELOG_MAILFROM="portage@localhost"
+PORTAGE_ELOG_MAILFROM="@portageuser@@localhost"
+ # Signing command used by repoman
+ PORTAGE_GPG_SIGNING_COMMAND="gpg --sign --clearsign --yes --default-key \"\${PORTAGE_GPG_KEY}\" --homedir \"\${PORTAGE_GPG_DIR}\" \"\${FILE}\""
+
# *****************************
# ** DO NOT EDIT THIS FILE **
# ***************************************************
diff --cc pym/portage/const.py
index 7929c28,db3f841..7df2ecb
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@@ -1,23 -1,9 +1,23 @@@
# portage: Constants
- # Copyright 1998-2010 Gentoo Foundation
+ # Copyright 1998-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+# ===========================================================================
+# autotool supplied constants.
+# ===========================================================================
+from portage.const_autotool import *
+
import os
+# save the original prefix
+BPREFIX = EPREFIX
+# pick up EPREFIX from the environment if set
+if "EPREFIX" in os.environ:
+ if os.environ["EPREFIX"] != "":
+ EPREFIX = os.path.normpath(os.environ["EPREFIX"])
+ else:
+ EPREFIX = os.environ["EPREFIX"]
+
# ===========================================================================
# START OF CONSTANTS -- START OF CONSTANTS -- START OF CONSTANTS -- START OF
# ===========================================================================
@@@ -132,10 -88,9 +132,10 @@@ EBUILD_PHASES = ("pretend",
SUPPORTED_FEATURES = frozenset([
"assume-digests", "binpkg-logs", "buildpkg", "buildsyspkg", "candy",
"ccache", "chflags", "collision-protect", "compress-build-logs",
- "digest", "distcc", "distlocks",
- "fakeroot", "fail-clean", "fixpackages", "getbinpkg",
+ "digest", "distcc", "distlocks", "fakeroot",
+ "fail-clean", "fixpackages", "force-mirror", "getbinpkg",
"installsources", "keeptemp", "keepwork", "fixlafiles", "lmirror",
+ "macossandbox", "macosprefixsandbox", "macosusersandbox",
"metadata-transfer", "mirror", "multilib-strict", "news",
"noauto", "noclean", "nodoc", "noinfo", "noman", "nostrip",
"notitles", "parallel-fetch", "parse-eapi-ebuild-head",
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-04-15 18:27 Fabian Groffen
From: Fabian Groffen @ 2011-04-15 18:27 UTC
To: gentoo-commits
commit: 41616039243b89884444fd837b43860bad4c529f
Author: Arfrever Frehtes Taifersar Arahesis <Arfrever <AT> Gentoo <DOT> Org>
AuthorDate: Sun Apr 3 17:30:33 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Apr 3 17:30:33 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=41616039
Merge branch 'master' of git+ssh://git.overlays.gentoo.org/proj/portage
bin/repoman | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-03-28 16:52 Fabian Groffen
From: Fabian Groffen @ 2011-03-28 16:52 UTC
To: gentoo-commits
commit: 0eccbf1e5c9084510dd98fe47b442273d931f76a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 28 16:47:57 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Mon Mar 28 16:47:57 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=0eccbf1e
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild.sh | 16 +-
bin/emerge-webrsync | 6 +-
man/emerge.1 | 2 +-
pym/_emerge/AbstractEbuildProcess.py | 6 +-
pym/_emerge/AsynchronousTask.py | 4 +-
pym/_emerge/Binpkg.py | 20 +-
pym/_emerge/BlockerDB.py | 17 +-
pym/_emerge/EbuildBuild.py | 13 +-
pym/_emerge/EbuildMerge.py | 49 ++--
pym/_emerge/EbuildPhase.py | 31 ++-
pym/_emerge/FakeVartree.py | 18 +-
pym/_emerge/MergeListItem.py | 36 +--
pym/_emerge/MiscFunctionsProcess.py | 5 +-
pym/_emerge/PackageMerge.py | 17 +-
pym/_emerge/PollScheduler.py | 49 ++-
pym/_emerge/Scheduler.py | 27 +--
pym/_emerge/actions.py | 7 +
pym/_emerge/depgraph.py | 8 +-
pym/_emerge/help.py | 4 +-
pym/_emerge/main.py | 17 +-
pym/portage/dbapi/_MergeProcess.py | 191 +++++++++++-
pym/portage/dbapi/vartree.py | 313 ++++++++-----------
pym/portage/exception.py | 17 +-
.../package/ebuild/_config/special_env_vars.py | 10 +-
pym/portage/util/env_update.py | 20 +-
25 files changed, 533 insertions(+), 370 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-03-23 19:26 Fabian Groffen
From: Fabian Groffen @ 2011-03-23 19:26 UTC
To: gentoo-commits
commit: 13cb50a71e7fcafe7858cc87420c6f93c652eba6
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 23 19:23:07 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Mar 23 19:23:07 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=13cb50a7
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild-ipc.py | 55 ++++++++++++++++++++++++-------
bin/portageq | 26 ++++++++++++---
man/make.conf.5 | 2 +-
pym/_emerge/AsynchronousLock.py | 8 +++++
pym/_emerge/CompositeTask.py | 22 +++++++++++--
pym/_emerge/EbuildIpcDaemon.py | 29 ++++++++++++++---
pym/_emerge/FakeVartree.py | 4 +-
pym/_emerge/PackageVirtualDbapi.py | 6 ++--
pym/_emerge/TaskSequence.py | 3 +-
pym/_emerge/actions.py | 3 +-
pym/_emerge/main.py | 19 +++++++----
pym/_emerge/resolver/slot_collision.py | 2 +-
12 files changed, 137 insertions(+), 42 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-03-17 19:08 Fabian Groffen
From: Fabian Groffen @ 2011-03-17 19:08 UTC
To: gentoo-commits
commit: cd90208398adabd458ef811d1f60d055dd4c146b
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 17 19:05:48 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Mar 17 19:05:48 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=cd902083
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
cnf/make.conf | 2 +-
man/emerge.1 | 6 +++---
man/make.conf.5 | 2 +-
pym/_emerge/AbstractPollTask.py | 3 +++
pym/_emerge/AsynchronousLock.py | 4 ++++
pym/_emerge/AsynchronousTask.py | 15 ++++++++++++---
pym/_emerge/Binpkg.py | 9 ++++-----
pym/_emerge/CompositeTask.py | 32 +++++++++++++++++++++++++++-----
pym/_emerge/EbuildBuild.py | 5 ++---
pym/_emerge/EbuildExecuter.py | 4 ++--
pym/_emerge/FifoIpcDaemon.py | 6 ++----
pym/_emerge/PipeReader.py | 6 ++----
pym/_emerge/QueueScheduler.py | 2 +-
pym/_emerge/Scheduler.py | 20 +++++++++++++++-----
pym/_emerge/SubProcess.py | 8 +-------
pym/_emerge/TaskSequence.py | 3 +--
pym/_emerge/help.py | 7 +++++--
pym/_emerge/main.py | 29 ++++++++++++++++++++++++++---
pym/_emerge/resolver/slot_collision.py | 2 +-
pym/portage/dbapi/vartree.py | 8 --------
pym/portage/dep/dep_check.py | 9 +++++----
pym/portage/util/__init__.py | 2 +-
22 files changed, 119 insertions(+), 65 deletions(-)
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-03-13 14:45 Fabian Groffen
From: Fabian Groffen @ 2011-03-13 14:45 UTC
To: gentoo-commits
commit: 0e8dc4226bf5bad8ade819aa5282ec6aca01f220
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 13 14:42:58 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sun Mar 13 14:42:58 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=0e8dc422
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-ipc.py
bin/ebuild-ipc.py | 15 +++++-------
pym/_emerge/AbstractEbuildProcess.py | 5 ++++
pym/_emerge/AbstractPollTask.py | 11 ++++++++-
pym/_emerge/MetadataRegen.py | 15 ++++++-----
pym/_emerge/PollScheduler.py | 39 ++++++++++++++++++++++++++-----
pym/_emerge/QueueScheduler.py | 3 ++
pym/_emerge/Scheduler.py | 24 ++++++++++++++++---
pym/portage/dbapi/vartree.py | 18 +++++++-------
pym/portage/package/ebuild/doebuild.py | 7 ++++-
9 files changed, 98 insertions(+), 39 deletions(-)
diff --cc bin/ebuild-ipc.py
index 786f4a7,d8e7e55..761b695
--- a/bin/ebuild-ipc.py
+++ b/bin/ebuild-ipc.py
@@@ -1,5 -1,5 +1,5 @@@
-#!/usr/bin/python
+#!@PREFIX_PORTAGE_PYTHON@
- # Copyright 2010 Gentoo Foundation
+ # Copyright 2010-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# This is a helper which ebuild processes can use
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-03-09 19:44 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-03-09 19:44 UTC (permalink / raw
To: gentoo-commits
commit: 5fe6f8c536460d74097aa1b10eeffe115382fa03
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 9 19:42:08 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Wed Mar 9 19:42:08 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=5fe6f8c5
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/dodoc
bin/ebuild | 51 ++++++++++++++++++++++-----------
bin/ebuild-helpers/dodoc | 3 +-
pym/_emerge/actions.py | 36 ++++++++++++++++--------
pym/_emerge/depgraph.py | 22 ++++++++++++--
pym/_emerge/unmerge.py | 31 ++++++++++++--------
pym/portage/dbapi/porttree.py | 2 -
pym/portage/dbapi/vartree.py | 46 +++++++++++++++++++++++-------
pym/portage/eclass_cache.py | 17 +----------
pym/portage/package/ebuild/config.py | 25 ++++++++--------
pym/portage/util/__init__.py | 21 ++++++++++++-
pym/repoman/checks.py | 6 ++-
11 files changed, 170 insertions(+), 90 deletions(-)
diff --cc bin/ebuild-helpers/dodoc
index 073fa20,65713db..41fc8f5
--- a/bin/ebuild-helpers/dodoc
+++ b/bin/ebuild-helpers/dodoc
@@@ -1,8 -1,8 +1,9 @@@
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ #!/bin/bash
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
if [ $# -lt 1 ] ; then
helpers_die "${0##*/}: at least one argument needed"
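The sourcing line uses bash's ${PORTAGE_BIN_PATH:-...} default expansion, so an already-exported PORTAGE_BIN_PATH always wins over the @PORTAGE_BASE@ path substituted at install time. A quick illustration (paths invented):

    # Paths below are invented for illustration.
    unset PORTAGE_BIN_PATH
    echo "${PORTAGE_BIN_PATH:-/home/user/gentoo/usr/lib/portage/bin}"   # prints the default
    PORTAGE_BIN_PATH=/tmp/override
    echo "${PORTAGE_BIN_PATH:-/home/user/gentoo/usr/lib/portage/bin}"   # prints /tmp/override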
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-02-26 21:15 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-02-26 21:15 UTC (permalink / raw
To: gentoo-commits
commit: 2c30998a168fbb2f0ebca7d5e409be000793e816
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 26 21:13:21 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Feb 26 21:13:21 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=2c30998a
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
Conflicts:
bin/ebuild-helpers/4/dodoc
bin/ebuild-helpers/doins
bin/ebuild.sh
bin/isolated-functions.sh
bin/misc-functions.sh
bin/ebuild-helpers/4/dodoc | 49 +------
bin/ebuild-helpers/dodoc | 6 +-
bin/ebuild-helpers/doins | 17 ++-
bin/ebuild.sh | 14 +--
bin/egencache | 45 +++++-
bin/isolated-functions.sh | 4 +-
bin/misc-functions.sh | 4 +-
bin/repoman | 3 +
cnf/sets/portage.conf | 2 +-
doc/package/ebuild/eapi/4.docbook | 2 +-
man/ebuild.1 | 6 +-
man/ebuild.5 | 2 +-
man/emerge.1 | 30 +++-
man/repoman.1 | 3 +
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildExecuter.py | 3 +-
pym/_emerge/Scheduler.py | 4 +-
pym/_emerge/SubProcess.py | 3 +-
pym/_emerge/actions.py | 36 ++++-
pym/_emerge/create_depgraph_params.py | 16 ++-
pym/_emerge/depgraph.py | 256 +++++++++++++++++++++++-------
pym/_emerge/help.py | 22 ++-
pym/_emerge/main.py | 43 +++++-
pym/_emerge/resolver/output.py | 10 +-
pym/_emerge/unmerge.py | 4 +-
pym/portage/dbapi/bintree.py | 16 ++-
pym/portage/dbapi/vartree.py | 14 ++-
pym/portage/dep/__init__.py | 6 +-
pym/portage/dep/dep_check.py | 35 +++--
pym/portage/getbinpkg.py | 8 +-
pym/portage/mail.py | 38 +++--
pym/portage/tests/resolver/test_depth.py | 248 +++++++++++++++++++++++++++++
pym/repoman/checks.py | 14 ++-
runtests.sh | 2 +-
34 files changed, 758 insertions(+), 213 deletions(-)
diff --cc bin/ebuild-helpers/doins
index 882e19c,0aedcb9..e354ee4
--- a/bin/ebuild-helpers/doins
+++ b/bin/ebuild-helpers/doins
@@@ -1,9 -1,20 +1,20 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-source "${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"/isolated-functions.sh
+source "${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"/isolated-functions.sh
+ if [[ ${0##*/} == dodoc ]] ; then
+ if [ $# -eq 0 ] ; then
+ # default_src_install may call dodoc with no arguments
+ # when DOC is defined but empty, so simply return
+ # successfully in this case.
+ exit 0
+ fi
+ export INSOPTIONS=-m0644
+ export INSDESTTREE=usr/share/doc/${PF}/${_E_DOCDESTTREE_}
+ fi
+
if [ $# -lt 1 ] ; then
helpers_die "${0##*/}: at least one argument needed"
exit 1
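The hunk above folds dodoc into doins: the helper inspects ${0##*/} and, when invoked under the dodoc name, preloads the install options and destination before falling through to the normal doins handling. A hedged sketch of that name-based dispatch; the symlink layout is an assumption, not the exact install rule:

    # Hedged sketch: dodoc is assumed to be a symlink to doins so that
    # ${0##*/} differs depending on how the helper was invoked.
    case "${0##*/}" in
        dodoc)
            [ $# -eq 0 ] && exit 0                 # empty DOCS: nothing to install
            export INSOPTIONS=-m0644               # docs are plain, non-executable files
            export INSDESTTREE="usr/share/doc/${PF}/${_E_DOCDESTTREE_}"
            ;;
    esac
    # ... regular doins argument handling continues here ...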
diff --cc bin/ebuild.sh
index 629942d,59bf46e..81af747
--- a/bin/ebuild.sh
+++ b/bin/ebuild.sh
@@@ -1,9 -1,9 +1,9 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
-PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-/usr/lib/portage/bin}"
-PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-/usr/lib/portage/pym}"
+PORTAGE_BIN_PATH="${PORTAGE_BIN_PATH:-@PORTAGE_BASE@/bin}"
+PORTAGE_PYM_PATH="${PORTAGE_PYM_PATH:-@PORTAGE_BASE@/pym}"
if [[ $PORTAGE_SANDBOX_COMPAT_LEVEL -lt 22 ]] ; then
# Ensure that /dev/std* streams have appropriate sandbox permission for
@@@ -1154,15 -1148,10 +1154,12 @@@ dyn_install()
fi
vecho
- vecho ">>> Install ${PF} into ${D} category ${CATEGORY}"
+ vecho ">>> Install ${PF} into ${ED} category ${CATEGORY}"
#our custom version of libtool uses $S and $D to fix
#invalid paths in .la files
+ # PREFIX: I think this is very old, and all patches (both to
+ # libtool and in ELT-patches) that did this are gone
export S D
- #some packages uses an alternative to $S to build in, cause
- #our libtool to create problematic .la files
- export PWORKDIR="$WORKDIR"
# Reset exeinto(), docinto(), insinto(), and into() state variables
# in case the user is running the install phase multiple times
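The dyn_install change switches the install message from ${D} to ${ED}; in prefix-aware EAPIs ED is D with the offset prefix appended, so the reported path matches where files actually land. A small illustration with made-up values:

    # Values are examples only.
    D="/var/tmp/portage/app-misc/foo-1.0/image/"
    EPREFIX="/home/user/gentoo"
    ED="${D%/}${EPREFIX}/"
    echo "${ED}"   # /var/tmp/portage/app-misc/foo-1.0/image/home/user/gentoo/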
diff --cc bin/isolated-functions.sh
index 5200827,2f144a0..def1a54
--- a/bin/isolated-functions.sh
+++ b/bin/isolated-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# We need this next line for "die" and "assert". It expands
diff --cc bin/misc-functions.sh
index 4174f15,ae4cc9e..59cc9d4
mode 100644,100755..100644
--- a/bin/misc-functions.sh
+++ b/bin/misc-functions.sh
@@@ -1,5 -1,5 +1,5 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- # Copyright 1999-2010 Gentoo Foundation
+ # Copyright 1999-2011 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
#
# Miscellaneous shell functions that make use of the ebuild env but don't need
diff --cc pym/_emerge/actions.py
index 21f0d10,20220fc..37a9b4a
--- a/pym/_emerge/actions.py
+++ b/pym/_emerge/actions.py
@@@ -2401,56 -2412,10 +2412,58 @@@ def action_sync(settings, trees, mtimed
"cd %s; exec cvs -z0 -q update -dP" % \
(portage._shell_quote(myportdir),), **spawn_kwargs)
if retval != os.EX_OK:
+ writemsg_level("!!! cvs update error; exiting.\n",
+ noiselevel=-1, level=logging.ERROR)
sys.exit(retval)
dosyncuri = syncuri
+ elif syncuri[:11]=="svn+http://" or syncuri[:6]=="svn://" or syncuri[:12]=="svn+https://":
+ # Gentoo Prefix hardcoded SVN support
+ if not os.path.exists(EPREFIX + "/usr/bin/svn"):
+ print("!!! " + EPREFIX + "/usr/bin/svn does not exist, so SVN support is disabled.")
+ print("!!! Type \"emerge dev-util/subversion\" to enable SVN support.")
+ sys.exit(1)
+ svndir=os.path.dirname(myportdir)
+ if not os.path.exists(myportdir+"/.svn"):
+ #initial checkout
+ if syncuri[:4] == "svn+":
+ syncuri = syncuri[4:]
+ print(">>> Starting initial svn checkout with "+syncuri+"...")
+ if os.path.exists(svndir+"/prefix-overlay"):
+ print("!!! existing",svndir+"/prefix-overlay directory; exiting.")
+ sys.exit(1)
+ try:
+ os.rmdir(myportdir)
+ except OSError, e:
+ if e.errno != errno.ENOENT:
+ sys.stderr.write(
+ "!!! existing '%s' directory; exiting.\n" % myportdir)
+ sys.exit(1)
+ del e
+ if portage.spawn("cd "+svndir+"; svn checkout "+syncuri,settings,free=1):
+ print("!!! svn checkout error; exiting.")
+ sys.exit(1)
+ os.rename(os.path.join(svndir, "prefix-overlay"), myportdir)
+ else:
+ #svn update
+ print(">>> Starting svn update...")
+ retval = portage.spawn("cd '%s'; svn update" % myportdir, \
+ settings, free=1)
+ if retval != os.EX_OK:
+ sys.exit(retval)
+
+ # write timestamp.chk
+ try:
+ if not os.path.exists(os.path.join(myportdir, "metadata")):
+ os.mkdir(os.path.join(myportdir, "metadata"))
+ f = open(os.path.join(myportdir, "metadata", "timestamp.chk"), 'w')
+ f.write(time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.gmtime()))
+ f.write('\n')
+ f.close()
+ except IOError, e:
+ # too bad, next time better luck!
+ pass
+
+ dosyncuri = syncuri
else:
writemsg_level("!!! Unrecognized protocol: SYNC='%s'\n" % (syncuri,),
noiselevel=-1, level=logging.ERROR)
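The action_sync hunk adds hardcoded SVN support for prefix users: an initial svn checkout of prefix-overlay that is renamed over the tree, or a plain svn update, followed by writing metadata/timestamp.chk. A hedged shell outline of that flow; the overlay name and timestamp format come from the hunk, everything else is illustrative and error handling is omitted:

    # Hedged outline only; EPREFIX and SYNC resolution are assumed.
    PORTDIR="${EPREFIX}/usr/portage"
    if [ ! -d "${PORTDIR}/.svn" ]; then
        ( cd "$(dirname "${PORTDIR}")" && svn checkout "${SYNC#svn+}" )   # initial checkout
        mv "$(dirname "${PORTDIR}")/prefix-overlay" "${PORTDIR}"
    else
        ( cd "${PORTDIR}" && svn update )                                 # incremental update
    fi
    mkdir -p "${PORTDIR}/metadata"
    date -u '+%a, %d %b %Y %H:%M:%S +0000' > "${PORTDIR}/metadata/timestamp.chk"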
diff --cc runtests.sh
index f3d91a9,6c00ce5..0ae4931
--- a/runtests.sh
+++ b/runtests.sh
@@@ -1,6 -1,6 +1,6 @@@
-#!/bin/bash
+#!@PORTAGE_BASH@
- PYTHON_VERSIONS="2.6 2.7 3.1 3.2"
+ PYTHON_VERSIONS="2.6 2.7 3.1 3.2 3.3"
case "${NOCOLOR:-false}" in
yes|true)
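runtests.sh gains Python 3.3 in the interpreter list it cycles through. A hedged sketch of how such a PYTHON_VERSIONS list is typically consumed; the test entry point path is an assumption:

    # Hedged sketch; the entry point path is an assumption.
    PYTHON_VERSIONS="2.6 2.7 3.1 3.2 3.3"
    for ver in ${PYTHON_VERSIONS}; do
        interp="python${ver}"
        command -v "${interp}" >/dev/null 2>&1 || continue   # skip missing interpreters
        "${interp}" pym/portage/tests/runTests || exit 1
    done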
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-02-10 18:46 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-02-10 18:46 UTC (permalink / raw
To: gentoo-commits
commit: 739af37a0a13a10e5a41a426aad00bbed3bf8ca7
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 10 18:45:34 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 10 18:45:34 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=739af37a
tarball.sh: typo, remove space
---
tarball.sh | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index 9bfc409..70116ac 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -28,7 +28,7 @@ install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
sed -i -e '/^VERSION=/s/^.*$/VERSION="'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/doc/fragment/version
-sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/ man/*
+sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/man/*
sed -i -e "s/@version@/${V}/" ${DEST}/configure.in
cd ${DEST}
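The fix removes a stray space that split ${DEST}/man/* into two separate arguments, so the man pages under ${DEST} were never edited; the leading 1s address means VERSION is only replaced on the first line of each page. Illustrative comparison (version string and DEST invented):

    # Version string and DEST are invented for illustration.
    DEST=/tmp/portage-2.2.01
    # broken: the space makes "${DEST}/" and "man/*" two separate arguments,
    # so the pages under ${DEST} are never touched:
    #   sed -i -e "1s/VERSION/2.2.01-prefix/" ${DEST}/ man/*
    # fixed: one path, edited only on each page's first line:
    sed -i -e "1s/VERSION/2.2.01-prefix/" ${DEST}/man/*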
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-02-10 18:44 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-02-10 18:44 UTC (permalink / raw
To: gentoo-commits
commit: 461d8ceb62daedcf17fa251e193173a39e12a49e
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 10 18:43:18 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 10 18:43:18 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=461d8ceb
tarball.sh: align version sedding with ebuild
---
tarball.sh | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/tarball.sh b/tarball.sh
index fd58e3b..9bfc409 100755
--- a/tarball.sh
+++ b/tarball.sh
@@ -27,7 +27,8 @@ fi
install -d -m0755 ${DEST}
rsync -a --exclude='.git' --exclude='.hg' . ${DEST}
sed -i -e '/^VERSION=/s/^.*$/VERSION="'${V}-prefix'"/' ${DEST}/pym/portage/__init__.py
-sed -i -e "s/VERSION/${V}-prefix/g" ${DEST}/man/emerge.1
+sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/doc/fragment/version
+sed -i -e "1s/VERSION/${V}-prefix/" ${DEST}/ man/*
sed -i -e "s/@version@/${V}/" ${DEST}/configure.in
cd ${DEST}
^ permalink raw reply related [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-02-10 18:20 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-02-10 18:20 UTC (permalink / raw
To: gentoo-commits
commit: d9e783605e933473a6b5e35c0492af9e4743ed1a
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 10 18:17:03 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Thu Feb 10 18:17:03 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=d9e78360
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
bin/ebuild.sh | 14 ++++++++++----
man/ebuild.5 | 4 ++--
man/emerge.1 | 10 ++++++----
pym/_emerge/depgraph.py | 3 ++-
pym/portage/cache/ebuild_xattr.py | 13 +++++++------
pym/portage/cache/sqlite.py | 23 +++++++++++++++--------
pym/portage/debug.py | 10 ++++++++--
pym/portage/dep/__init__.py | 2 +-
pym/portage/dep/dep_check.py | 23 +++++++++++++++++------
pym/portage/tests/dep/test_paren_reduce.py | 4 +++-
pym/repoman/herdbase.py | 11 +++++++++--
11 files changed, 80 insertions(+), 37 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
* [gentoo-commits] proj/portage:prefix commit in: /
@ 2011-02-05 12:25 Fabian Groffen
0 siblings, 0 replies; 195+ messages in thread
From: Fabian Groffen @ 2011-02-05 12:25 UTC (permalink / raw
To: gentoo-commits
commit: b82a06113c9c123915854cb365075762f08140d7
Author: Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 5 12:24:52 2011 +0000
Commit: Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Sat Feb 5 12:24:52 2011 +0000
URL: http://git.overlays.gentoo.org/gitweb/?p=proj/portage.git;a=commit;h=b82a0611
Merge remote-tracking branch 'overlays-gentoo-org/master' into prefix
pym/_emerge/depgraph.py | 77 +++++++------
pym/portage/dep/__init__.py | 151 ++++++++++++++++++++++--
pym/portage/package/ebuild/doebuild.py | 29 +++++-
pym/portage/tests/dep/testCheckRequiredUse.py | 112 ++++++++++++++++++-
4 files changed, 317 insertions(+), 52 deletions(-)
^ permalink raw reply [flat|nested] 195+ messages in thread
end of thread, other threads: [~2024-02-25 9:40 UTC | newest]
Thread overview: 195+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-20 19:31 [gentoo-commits] proj/portage:prefix commit in: / Fabian Groffen
-- strict thread matches above, loose matches on Subject: below --
2024-02-25 9:40 Fabian Groffen
2024-02-22 7:27 Fabian Groffen
2024-01-18 10:22 Fabian Groffen
2024-01-18 9:36 Fabian Groffen
2023-12-03 10:10 Fabian Groffen
2023-12-03 9:54 Fabian Groffen
2023-12-03 9:54 Fabian Groffen
2023-12-03 9:54 Fabian Groffen
2023-11-24 20:18 Fabian Groffen
2023-11-24 20:06 Fabian Groffen
2023-11-24 20:06 Fabian Groffen
2023-06-22 8:47 Fabian Groffen
2023-06-17 9:04 Fabian Groffen
2023-06-17 8:41 Fabian Groffen
2022-07-28 17:38 Fabian Groffen
2022-07-27 19:20 Fabian Groffen
2022-07-26 19:39 Fabian Groffen
2022-07-25 15:20 Fabian Groffen
2022-07-24 19:27 Fabian Groffen
2022-07-24 14:01 Fabian Groffen
2022-07-24 9:45 Fabian Groffen
2022-01-14 10:40 Fabian Groffen
2022-01-14 10:32 Fabian Groffen
2021-07-06 7:10 Fabian Groffen
2021-04-16 13:37 Fabian Groffen
2021-01-24 9:02 Fabian Groffen
2021-01-04 10:48 Fabian Groffen
2020-12-07 17:28 Fabian Groffen
2020-12-07 16:46 Fabian Groffen
2020-11-23 7:48 Fabian Groffen
2020-11-22 11:15 Fabian Groffen
2020-09-26 11:29 Fabian Groffen
2020-08-02 12:33 Fabian Groffen
2020-06-02 18:55 Fabian Groffen
2020-01-08 19:14 Fabian Groffen
2019-07-01 13:11 Fabian Groffen
2019-05-30 9:20 Fabian Groffen
2019-02-28 12:31 Fabian Groffen
2019-01-11 10:19 Fabian Groffen
2019-01-07 10:22 Fabian Groffen
2018-12-23 11:14 Fabian Groffen
2018-12-12 18:54 Fabian Groffen
2018-08-04 6:56 Fabian Groffen
2018-06-25 8:34 Fabian Groffen
2018-06-17 14:38 Fabian Groffen
2018-06-17 14:38 Fabian Groffen
2018-05-28 15:24 Fabian Groffen
2018-05-25 19:44 Fabian Groffen
2018-05-25 19:44 Fabian Groffen
2018-05-18 19:46 Fabian Groffen
2017-12-12 8:19 Fabian Groffen
2017-10-29 14:51 Fabian Groffen
2017-10-03 7:32 Fabian Groffen
2017-09-22 10:08 Fabian Groffen
2017-08-21 13:27 Fabian Groffen
2017-08-13 7:21 Fabian Groffen
2017-05-23 13:34 Fabian Groffen
2017-03-25 9:12 Fabian Groffen
2017-03-24 19:09 Fabian Groffen
2017-03-24 7:43 Fabian Groffen
2017-03-23 17:46 Fabian Groffen
2017-03-23 17:32 Fabian Groffen
2017-03-23 17:23 Fabian Groffen
2017-03-23 15:38 Fabian Groffen
2017-03-17 8:25 Fabian Groffen
2017-03-02 8:48 Fabian Groffen
2017-03-02 8:18 Fabian Groffen
2017-02-23 14:05 Fabian Groffen
2017-01-27 15:08 Fabian Groffen
2017-01-27 15:08 Fabian Groffen
2016-02-21 16:17 Fabian Groffen
2016-02-21 16:17 Fabian Groffen
2016-02-18 19:35 Fabian Groffen
2016-02-18 19:35 Fabian Groffen
2015-06-20 7:12 Fabian Groffen
2015-06-09 18:30 Fabian Groffen
2015-06-09 18:01 Fabian Groffen
2015-06-04 19:47 Fabian Groffen
2015-04-05 9:15 Fabian Groffen
2014-11-12 17:31 Fabian Groffen
2014-10-02 18:48 Fabian Groffen
2014-09-28 17:52 Fabian Groffen
2014-05-06 19:32 Fabian Groffen
2014-05-06 19:18 Fabian Groffen
2014-04-22 19:52 Fabian Groffen
2014-02-06 21:09 Fabian Groffen
2014-01-06 9:47 Fabian Groffen
2013-09-24 17:29 Fabian Groffen
2013-09-20 17:59 Fabian Groffen
2013-09-18 18:34 Fabian Groffen
2013-09-13 18:02 Fabian Groffen
2013-08-10 20:54 Fabian Groffen
2013-07-10 5:31 Fabian Groffen
2013-07-08 19:32 Fabian Groffen
2013-06-29 5:41 Fabian Groffen
2013-06-27 17:20 Fabian Groffen
2013-06-12 9:02 Fabian Groffen
2013-06-09 15:53 Fabian Groffen
2013-05-04 18:55 Fabian Groffen
2013-04-02 16:57 Fabian Groffen
2013-03-31 19:03 Fabian Groffen
2013-03-31 19:00 Fabian Groffen
2013-03-24 8:36 Fabian Groffen
2013-03-23 19:54 Fabian Groffen
2013-02-28 19:29 Fabian Groffen
2013-02-07 20:01 Fabian Groffen
2013-01-27 21:41 Fabian Groffen
2013-01-27 21:41 Fabian Groffen
2013-01-13 10:26 Fabian Groffen
2013-01-10 21:02 Fabian Groffen
2013-01-05 18:14 Fabian Groffen
2012-12-26 14:48 Fabian Groffen
2012-12-02 15:47 Fabian Groffen
2012-12-02 15:36 Fabian Groffen
2012-12-02 15:33 Fabian Groffen
2012-12-02 15:33 Fabian Groffen
2012-12-02 15:33 Fabian Groffen
2012-12-02 13:12 Fabian Groffen
2012-12-02 12:59 Fabian Groffen
2012-11-04 10:48 Fabian Groffen
2012-10-22 17:25 Fabian Groffen
2012-10-02 12:02 Fabian Groffen
2012-09-30 11:22 Fabian Groffen
2012-09-26 18:26 Fabian Groffen
2012-09-12 18:18 Fabian Groffen
2012-09-09 7:40 Fabian Groffen
2012-09-06 18:14 Fabian Groffen
2012-08-27 6:44 Fabian Groffen
2012-08-12 7:50 Fabian Groffen
2012-07-19 16:25 Fabian Groffen
2012-07-06 7:05 Fabian Groffen
2012-04-23 19:23 Fabian Groffen
2012-04-03 18:04 Fabian Groffen
2012-03-31 19:31 Fabian Groffen
2012-03-01 20:32 Fabian Groffen
2012-02-19 9:58 Fabian Groffen
2012-02-09 8:01 Fabian Groffen
2012-01-10 17:45 Fabian Groffen
2011-12-31 16:45 Fabian Groffen
2011-12-26 9:12 Fabian Groffen
2011-12-23 9:51 Fabian Groffen
2011-12-22 9:51 Fabian Groffen
2011-12-19 18:30 Fabian Groffen
2011-12-14 15:25 Fabian Groffen
2011-12-10 11:28 Fabian Groffen
2011-12-09 20:33 Fabian Groffen
2011-12-02 20:31 Fabian Groffen
2011-12-02 19:20 Fabian Groffen
2011-12-02 19:19 Fabian Groffen
2011-12-02 19:18 Fabian Groffen
2011-12-02 18:03 Fabian Groffen
2011-10-21 17:34 Fabian Groffen
2011-10-21 17:34 Fabian Groffen
2011-10-20 20:28 Fabian Groffen
2011-10-20 17:08 Fabian Groffen
2011-10-20 16:38 Fabian Groffen
2011-10-17 18:36 Fabian Groffen
2011-10-16 13:59 Fabian Groffen
2011-10-15 18:27 Fabian Groffen
2011-10-13 6:52 Fabian Groffen
2011-09-23 18:38 Fabian Groffen
2011-09-23 18:23 Fabian Groffen
2011-09-20 18:25 Fabian Groffen
2011-09-14 18:43 Fabian Groffen
2011-09-14 18:38 Fabian Groffen
2011-09-13 17:41 Fabian Groffen
2011-08-31 18:39 Fabian Groffen
2011-08-30 18:45 Fabian Groffen
2011-08-29 19:03 Fabian Groffen
2011-08-25 20:25 Fabian Groffen
2011-08-20 17:50 Fabian Groffen
2011-07-26 17:35 Fabian Groffen
2011-07-17 9:48 Fabian Groffen
2011-07-17 8:12 Fabian Groffen
2011-07-01 17:44 Fabian Groffen
2011-06-14 15:39 Fabian Groffen
2011-06-06 17:12 Fabian Groffen
2011-05-28 8:29 Fabian Groffen
2011-05-27 17:41 Fabian Groffen
2011-05-14 13:59 Fabian Groffen
2011-05-02 17:41 Fabian Groffen
2011-04-24 12:08 Fabian Groffen
2011-04-15 18:27 Fabian Groffen
2011-04-15 18:27 Fabian Groffen
2011-03-28 16:52 Fabian Groffen
2011-03-23 19:26 Fabian Groffen
2011-03-17 19:08 Fabian Groffen
2011-03-13 14:45 Fabian Groffen
2011-03-09 19:44 Fabian Groffen
2011-02-26 21:15 Fabian Groffen
2011-02-10 18:46 Fabian Groffen
2011-02-10 18:44 Fabian Groffen
2011-02-10 18:20 Fabian Groffen
2011-02-05 12:25 Fabian Groffen