* [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
@ 2015-02-17 8:37 Zac Medico
From: Zac Medico @ 2015-02-17 8:37 UTC (permalink / raw)
To: gentoo-portage-dev; +Cc: Zac Medico
FEATURES=binpkg-multi-instance causes an integer build-id to be
associated with each binary package instance. Inclusion of the build-id
in the file name of the binary package file makes it possible to store
an arbitrary number of binary packages built from the same ebuild.
Having multiple instances is useful for a number of purposes, such as
retaining builds that were built with different USE flags or linked
against different versions of libraries. The location of any particular
package within PKGDIR can be expressed as follows:
${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
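For example, with a hypothetical package app-misc/foo-1.0, two
successive builds would be stored side by side as:

    ${PKGDIR}/app-misc/foo/foo-1.0-1.xpak
    ${PKGDIR}/app-misc/foo/foo-1.0-2.xpak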
The build-id starts at 1 for the first build of a particular ebuild,
and is incremented by 1 for each new build. It is possible to share a
writable PKGDIR over NFS, and locking ensures that each package added
to PKGDIR will have a unique build-id. It is not necessary to migrate
an existing PKGDIR to the new layout, since portage is capable of
working with a mixed PKGDIR layout, where packages using the old layout
are allowed to remain in place.
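For reference, filename allocation behaves roughly like the minimal
sketch below (standalone Python; the names allocate_build_path,
pkg_dir and pf are illustrative only, while the real logic lives in
binarytree._allocate_filename_multi and additionally consults the
in-memory index while the Packages file is locked):

    import os
    import re

    def allocate_build_path(pkg_dir, pf):
        """Return the next unused ${pkg_dir}/${pf}-${BUILD_ID}.xpak path.

        Minimal sketch only: it scans the directory for existing
        builds and picks the first unused build-id after the highest
        one found.
        """
        pattern = re.compile(re.escape(pf) + r"-(\d+)\.xpak$")
        max_build_id = 0
        if os.path.isdir(pkg_dir):
            for name in os.listdir(pkg_dir):
                m = pattern.match(name)
                if m:
                    max_build_id = max(max_build_id, int(m.group(1)))
        # Increment past the highest existing build-id until an unused
        # filename is found (another writer may have added one since).
        build_id = max_build_id
        while True:
            build_id += 1
            candidate = os.path.join(pkg_dir, "%s-%s.xpak" % (pf, build_id))
            if not os.path.exists(candidate):
                return candidate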
The new PKGDIR layout is backward-compatible with binhost clients
running older portage, since the file format is identical, the
per-package PATH attribute in the 'Packages' index directs them to
download the file from the correct URI, and they automatically use
BUILD_TIME metadata to select the latest builds.
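For illustration, a hypothetical entry for such a package in the
${PKGDIR}/Packages index might look like this (all field values are
invented):

    BUILD_ID: 2
    BUILD_TIME: 1424131072
    CPV: app-misc/foo-1.0
    MTIME: 1424131072
    PATH: app-misc/foo/foo-1.0-2.xpak
    SIZE: 65536
    SLOT: 0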
There is currently no automated way to prune old builds from PKGDIR,
although it is possible to remove packages manually, and then run
'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
---
bin/quickpkg | 1 -
man/make.conf.5 | 27 +
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/Package.py | 67 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/clear_caches.py | 1 -
pym/_emerge/resolver/output.py | 21 +-
pym/portage/const.py | 2 +
pym/portage/dbapi/__init__.py | 10 +-
pym/portage/dbapi/bintree.py | 842 +++++++++++----------
pym/portage/dbapi/vartree.py | 8 +-
pym/portage/dbapi/virtual.py | 113 ++-
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
pym/portage/package/ebuild/config.py | 3 +-
pym/portage/tests/resolver/ResolverPlayground.py | 26 +-
.../resolver/binpkg_multi_instance/__init__.py | 2 +
.../resolver/binpkg_multi_instance/__test__.py | 2 +
.../binpkg_multi_instance/test_rebuilt_binaries.py | 101 +++
pym/portage/versions.py | 48 +-
23 files changed, 932 insertions(+), 492 deletions(-)
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
diff --git a/bin/quickpkg b/bin/quickpkg
index 2c69a69..8b71c3e 100755
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@ -63,7 +63,6 @@ def quickpkg_atom(options, infos, arg, eout):
pkgs_for_arg = 0
for cpv in matches:
excluded_config_files = []
- bintree.prevent_collision(cpv)
dblnk = vardb._dblink(cpv)
have_lock = False
diff --git a/man/make.conf.5 b/man/make.conf.5
index 84b7191..6ead61b 100644
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@ -256,6 +256,33 @@ has a \fB\-\-force\fR option that can be used to force regeneration of digests.
Keep logs from successful binary package merges. This is relevant only when
\fBPORT_LOGDIR\fR is set.
.TP
+.B binpkg\-multi\-instance
+Enable support for multiple binary package instances per ebuild.
+Having multiple instances is useful for a number of purposes, such as
+retaining builds that were built with different USE flags or linked
+against different versions of libraries. The location of any particular
+package within PKGDIR can be expressed as follows:
+
+ ${PKGDIR}/${CATEGORY}/${PN}/${PF}\-${BUILD_ID}.xpak
+
+The build\-id starts at 1 for the first build of a particular ebuild,
+and is incremented by 1 for each new build. It is possible to share a
+writable PKGDIR over NFS, and locking ensures that each package added
+to PKGDIR will have a unique build\-id. It is not necessary to migrate
+an existing PKGDIR to the new layout, since portage is capable of
+working with a mixed PKGDIR layout, where packages using the old layout
+are allowed to remain in place.
+
+The new PKGDIR layout is backward\-compatible with binhost clients
+running older portage, since the file format is identical, the
+per\-package PATH attribute in the 'Packages' index directs them to
+download the file from the correct URI, and they automatically use
+BUILD_TIME metadata to select the latest builds.
+
+There is currently no automated way to prune old builds from PKGDIR,
+although it is possible to remove packages manually, and then run
+\(aqemaint \-\-fix binhost\(aq to update the ${PKGDIR}/Packages index.
+.TP
.B buildpkg
Binary packages will be created for all packages that are merged. Also see
\fBquickpkg\fR(1) and \fBemerge\fR(1) \fB\-\-buildpkg\fR and
diff --git a/pym/_emerge/Binpkg.py b/pym/_emerge/Binpkg.py
index ded6dfd..7b7ae17 100644
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@ -121,16 +121,11 @@ class Binpkg(CompositeTask):
fetcher = BinpkgFetcher(background=self.background,
logfile=self.settings.get("PORTAGE_LOG_FILE"), pkg=self.pkg,
pretend=self.opts.pretend, scheduler=self.scheduler)
- pkg_path = fetcher.pkg_path
- self._pkg_path = pkg_path
- # This gives bashrc users an opportunity to do various things
- # such as remove binary packages after they're installed.
- self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
if self.opts.getbinpkg and self._bintree.isremote(pkg.cpv):
-
msg = " --- (%s of %s) Fetching Binary (%s::%s)" %\
- (pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg_path)
+ (pkg_count.curval, pkg_count.maxval, pkg.cpv,
+ fetcher.pkg_path)
short_msg = "emerge: (%s of %s) %s Fetch" % \
(pkg_count.curval, pkg_count.maxval, pkg.cpv)
self.logger.log(msg, short_msg=short_msg)
@@ -149,7 +144,7 @@ class Binpkg(CompositeTask):
# The fetcher only has a returncode when
# --getbinpkg is enabled.
if fetcher.returncode is not None:
- self._fetched_pkg = True
+ self._fetched_pkg = fetcher.pkg_path
if self._default_exit(fetcher) != os.EX_OK:
self._unlock_builddir()
self.wait()
@@ -163,9 +158,15 @@ class Binpkg(CompositeTask):
verifier = None
if self._verify:
+ if self._fetched_pkg:
+ path = self._fetched_pkg
+ else:
+ path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
logfile = self.settings.get("PORTAGE_LOG_FILE")
verifier = BinpkgVerifier(background=self.background,
- logfile=logfile, pkg=self.pkg, scheduler=self.scheduler)
+ logfile=logfile, pkg=self.pkg, scheduler=self.scheduler,
+ _pkg_path=path)
self._start_task(verifier, self._verifier_exit)
return
@@ -181,10 +182,20 @@ class Binpkg(CompositeTask):
logger = self.logger
pkg = self.pkg
pkg_count = self.pkg_count
- pkg_path = self._pkg_path
if self._fetched_pkg:
- self._bintree.inject(pkg.cpv, filename=pkg_path)
+ pkg_path = self._bintree.getname(
+ self._bintree.inject(pkg.cpv,
+ filename=self._fetched_pkg),
+ allocate_new=False)
+ else:
+ pkg_path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
+
+ # This gives bashrc users an opportunity to do various things
+ # such as remove binary packages after they're installed.
+ self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
+ self._pkg_path = pkg_path
logfile = self.settings.get("PORTAGE_LOG_FILE")
if logfile is not None and os.path.isfile(logfile):
diff --git a/pym/_emerge/BinpkgFetcher.py b/pym/_emerge/BinpkgFetcher.py
index 543881e..a7f2d44 100644
--- a/pym/_emerge/BinpkgFetcher.py
+++ b/pym/_emerge/BinpkgFetcher.py
@@ -24,7 +24,8 @@ class BinpkgFetcher(SpawnProcess):
def __init__(self, **kwargs):
SpawnProcess.__init__(self, **kwargs)
pkg = self.pkg
- self.pkg_path = pkg.root_config.trees["bintree"].getname(pkg.cpv)
+ self.pkg_path = pkg.root_config.trees["bintree"].getname(
+ pkg.cpv) + ".partial"
def _start(self):
@@ -51,10 +52,12 @@ class BinpkgFetcher(SpawnProcess):
# urljoin doesn't work correctly with
# unrecognized protocols like sftp
if bintree._remote_has_index:
- rel_uri = bintree._remotepkgs[pkg.cpv].get("PATH")
+ instance_key = bintree.dbapi._instance_key(pkg.cpv)
+ rel_uri = bintree._remotepkgs[instance_key].get("PATH")
if not rel_uri:
rel_uri = pkg.cpv + ".tbz2"
- remote_base_uri = bintree._remotepkgs[pkg.cpv]["BASE_URI"]
+ remote_base_uri = bintree._remotepkgs[
+ instance_key]["BASE_URI"]
uri = remote_base_uri.rstrip("/") + "/" + rel_uri.lstrip("/")
else:
uri = settings["PORTAGE_BINHOST"].rstrip("/") + \
@@ -128,7 +131,9 @@ class BinpkgFetcher(SpawnProcess):
# the fetcher didn't already do it automatically.
bintree = self.pkg.root_config.trees["bintree"]
if bintree._remote_has_index:
- remote_mtime = bintree._remotepkgs[self.pkg.cpv].get("MTIME")
+ remote_mtime = bintree._remotepkgs[
+ bintree.dbapi._instance_key(
+ self.pkg.cpv)].get("MTIME")
if remote_mtime is not None:
try:
remote_mtime = long(remote_mtime)
diff --git a/pym/_emerge/BinpkgVerifier.py b/pym/_emerge/BinpkgVerifier.py
index 2c69792..7a6d15e 100644
--- a/pym/_emerge/BinpkgVerifier.py
+++ b/pym/_emerge/BinpkgVerifier.py
@@ -33,7 +33,6 @@ class BinpkgVerifier(CompositeTask):
digests = _apply_hash_filter(digests, hash_filter)
self._digests = digests
- self._pkg_path = bintree.getname(self.pkg.cpv)
try:
size = os.stat(self._pkg_path).st_size
@@ -90,8 +89,11 @@ class BinpkgVerifier(CompositeTask):
if portage.output.havecolor:
portage.output.havecolor = not self.background
+ path = self._pkg_path
+ if path.endswith(".partial"):
+ path = path[:-len(".partial")]
eout = EOutput()
- eout.ebegin("%s %s ;-)" % (os.path.basename(self._pkg_path),
+ eout.ebegin("%s %s ;-)" % (os.path.basename(path),
" ".join(sorted(self._digests))))
eout.eend(0)
diff --git a/pym/_emerge/EbuildBinpkg.py b/pym/_emerge/EbuildBinpkg.py
index 34a6aef..6e098eb 100644
--- a/pym/_emerge/EbuildBinpkg.py
+++ b/pym/_emerge/EbuildBinpkg.py
@@ -10,13 +10,12 @@ class EbuildBinpkg(CompositeTask):
This assumes that src_install() has successfully completed.
"""
__slots__ = ('pkg', 'settings') + \
- ('_binpkg_tmpfile',)
+ ('_binpkg_tmpfile', '_binpkg_info')
def _start(self):
pkg = self.pkg
root_config = pkg.root_config
bintree = root_config.trees["bintree"]
- bintree.prevent_collision(pkg.cpv)
binpkg_tmpfile = os.path.join(bintree.pkgdir,
pkg.cpv + ".tbz2." + str(os.getpid()))
bintree._ensure_dir(os.path.dirname(binpkg_tmpfile))
@@ -43,8 +42,12 @@ class EbuildBinpkg(CompositeTask):
pkg = self.pkg
bintree = pkg.root_config.trees["bintree"]
- bintree.inject(pkg.cpv, filename=self._binpkg_tmpfile)
+ self._binpkg_info = bintree.inject(pkg.cpv,
+ filename=self._binpkg_tmpfile)
self._current_task = None
self.returncode = os.EX_OK
self.wait()
+
+ def get_binpkg_info(self):
+ return self._binpkg_info
diff --git a/pym/_emerge/EbuildBuild.py b/pym/_emerge/EbuildBuild.py
index b5b1e87..0e98602 100644
--- a/pym/_emerge/EbuildBuild.py
+++ b/pym/_emerge/EbuildBuild.py
@@ -1,6 +1,10 @@
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+from __future__ import unicode_literals
+
+import io
+
import _emerge.emergelog
from _emerge.EbuildExecuter import EbuildExecuter
from _emerge.EbuildPhase import EbuildPhase
@@ -15,7 +19,7 @@ from _emerge.TaskSequence import TaskSequence
from portage.util import writemsg
import portage
-from portage import os
+from portage import _encodings, _unicode_decode, _unicode_encode, os
from portage.output import colorize
from portage.package.ebuild.digestcheck import digestcheck
from portage.package.ebuild.digestgen import digestgen
@@ -317,9 +321,13 @@ class EbuildBuild(CompositeTask):
phase="rpm", scheduler=self.scheduler,
settings=self.settings))
else:
- binpkg_tasks.add(EbuildBinpkg(background=self.background,
+ task = EbuildBinpkg(
+ background=self.background,
pkg=self.pkg, scheduler=self.scheduler,
- settings=self.settings))
+ settings=self.settings)
+ binpkg_tasks.add(task)
+ task.addExitListener(
+ self._record_binpkg_info)
if binpkg_tasks:
self._start_task(binpkg_tasks, self._buildpkg_exit)
@@ -356,6 +364,28 @@ class EbuildBuild(CompositeTask):
self.returncode = packager.returncode
self.wait()
+ def _record_binpkg_info(self, task):
+ if task.returncode != os.EX_OK:
+ return
+
+ # Save info about the created binary package, so that
+ # identifying information can be passed to the install
+ # task, to be recorded in the installed package database.
+ pkg = task.get_binpkg_info()
+ infoloc = os.path.join(self.settings["PORTAGE_BUILDDIR"],
+ "build-info")
+ info = {
+ "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+ }
+ if pkg.build_id is not None:
+ info["BUILD_ID"] = "%s\n" % pkg.build_id
+ for k, v in info.items():
+ with io.open(_unicode_encode(os.path.join(infoloc, k),
+ encoding=_encodings['fs'], errors='strict'),
+ mode='w', encoding=_encodings['repo.content'],
+ errors='strict') as f:
+ f.write(v)
+
def _buildpkgonly_success_hook_exit(self, success_hooks):
self._default_exit(success_hooks)
self.returncode = None
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index e8a13cb..2c1a116 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -41,12 +41,12 @@ class Package(Task):
"_validated_atoms", "_visible")
metadata_keys = [
- "BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "EAPI",
- "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROVIDE", "RDEPEND",
- "repository", "PROPERTIES", "RESTRICT", "SLOT", "USE",
- "_mtime_", "DEFINED_PHASES", "REQUIRED_USE", "PROVIDES",
- "REQUIRES"]
+ "BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRED_USE",
+ "PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
+ "SLOT", "USE", "_mtime_"]
_dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
_buildtime_keys = ('DEPEND', 'HDEPEND')
@@ -114,13 +114,14 @@ class Package(Task):
return self._metadata["EAPI"]
@property
+ def build_id(self):
+ return self.cpv.build_id
+
+ @property
def build_time(self):
if not self.built:
raise AttributeError('build_time')
- try:
- return long(self._metadata['BUILD_TIME'])
- except (KeyError, ValueError):
- return 0
+ return self.cpv.build_time
@property
def defined_phases(self):
@@ -218,6 +219,8 @@ class Package(Task):
else:
raise TypeError("root_config argument is required")
+ elements = [type_name, root, _unicode(cpv), operation]
+
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
@@ -228,14 +231,22 @@ class Package(Task):
raise AssertionError(
"Package._gen_hash_key() " + \
"called without 'repo_name' argument")
- repo_key = repo_name
+ elements.append(repo_name)
+ elif type_name == "binary":
+ # Including a variety of fingerprints in the hash makes
+ # it possible to simultaneously consider multiple similar
+ # packages. Note that digests are not included here, since
+ # they are relatively expensive to compute, and they may
+ # not necessarily be available.
+ elements.extend([cpv.build_id, cpv.file_size,
+ cpv.build_time, cpv.mtime])
else:
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
- repo_key = type_name
+ elements.append(type_name)
- return (type_name, root, _unicode(cpv), operation, repo_key)
+ return tuple(elements)
def _validate_deps(self):
"""
@@ -509,9 +520,15 @@ class Package(Task):
else:
cpv_color = "PKG_NOMERGE"
+ build_id_str = ""
+ if isinstance(self.cpv.build_id, long) and self.cpv.build_id > 0:
+ build_id_str = "-%s" % self.cpv.build_id
+
s = "(%s, %s" \
- % (portage.output.colorize(cpv_color, self.cpv + _slot_separator + \
- self.slot + "/" + self.sub_slot + _repo_separator + self.repo) , self.type_name)
+ % (portage.output.colorize(cpv_color, self.cpv +
+ build_id_str + _slot_separator + self.slot + "/" +
+ self.sub_slot + _repo_separator + self.repo),
+ self.type_name)
if self.type_name == "installed":
if self.root_config.settings['ROOT'] != "/":
@@ -755,29 +772,41 @@ class Package(Task):
def __lt__(self, other):
if other.cp != self.cp:
return self.cp < other.cp
- if portage.vercmp(self.version, other.version) < 0:
+ result = portage.vercmp(self.version, other.version)
+ if result < 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time < other.build_time
return False
def __le__(self, other):
if other.cp != self.cp:
return self.cp <= other.cp
- if portage.vercmp(self.version, other.version) <= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result <= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time <= other.build_time
return False
def __gt__(self, other):
if other.cp != self.cp:
return self.cp > other.cp
- if portage.vercmp(self.version, other.version) > 0:
+ result = portage.vercmp(self.version, other.version)
+ if result > 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time > other.build_time
return False
def __ge__(self, other):
if other.cp != self.cp:
return self.cp >= other.cp
- if portage.vercmp(self.version, other.version) >= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result >= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time >= other.build_time
return False
def with_use(self, use):
diff --git a/pym/_emerge/Scheduler.py b/pym/_emerge/Scheduler.py
index d6db311..dae7944 100644
--- a/pym/_emerge/Scheduler.py
+++ b/pym/_emerge/Scheduler.py
@@ -862,8 +862,12 @@ class Scheduler(PollScheduler):
continue
fetched = fetcher.pkg_path
+ if fetched is False:
+ filename = bintree.getname(x.cpv)
+ else:
+ filename = fetched
verifier = BinpkgVerifier(pkg=x,
- scheduler=sched_iface)
+ scheduler=sched_iface, _pkg_path=filename)
current_task = verifier
verifier.start()
if verifier.wait() != os.EX_OK:
diff --git a/pym/_emerge/clear_caches.py b/pym/_emerge/clear_caches.py
index 513df62..cb0db10 100644
--- a/pym/_emerge/clear_caches.py
+++ b/pym/_emerge/clear_caches.py
@@ -7,7 +7,6 @@ def clear_caches(trees):
for d in trees.values():
d["porttree"].dbapi.melt()
d["porttree"].dbapi._aux_cache.clear()
- d["bintree"].dbapi._aux_cache.clear()
d["bintree"].dbapi._clear_cache()
if d["vartree"].dbapi._linkmap is None:
# preserve-libs is entirely disabled
diff --git a/pym/_emerge/resolver/output.py b/pym/_emerge/resolver/output.py
index 7df0302..400617d 100644
--- a/pym/_emerge/resolver/output.py
+++ b/pym/_emerge/resolver/output.py
@@ -424,6 +424,18 @@ class Display(object):
pkg_str += _repo_separator + pkg.repo
return pkg_str
+ def _append_build_id(self, pkg_str, pkg, pkg_info):
+ """Potentially appends build_id to package string.
+
+ @param pkg_str: string
+ @param pkg: _emerge.Package.Package instance
+ @param pkg_info: dictionary
+ @rtype string
+ """
+ if pkg.type_name == "binary" and pkg.cpv.build_id is not None:
+ pkg_str += "-%s" % pkg.cpv.build_id
+ return pkg_str
+
def _set_non_root_columns(self, pkg, pkg_info):
"""sets the indent level and formats the output
@@ -431,7 +443,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype string
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -470,7 +482,7 @@ class Display(object):
@rtype string
Modifies self.verboseadd
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -507,7 +519,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype the updated addl
"""
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
@@ -868,7 +880,8 @@ class Display(object):
if self.conf.columns:
myprint = self._set_non_root_columns(pkg, pkg_info)
else:
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(
+ pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
diff --git a/pym/portage/const.py b/pym/portage/const.py
index febdb4a..c7ecda2 100644
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@ -122,6 +122,7 @@ EBUILD_PHASES = (
SUPPORTED_FEATURES = frozenset([
"assume-digests",
"binpkg-logs",
+ "binpkg-multi-instance",
"buildpkg",
"buildsyspkg",
"candy",
@@ -268,6 +269,7 @@ LIVE_ECLASSES = frozenset([
])
SUPPORTED_BINPKG_FORMATS = ("tar", "rpm")
+SUPPORTED_XPAK_EXTENSIONS = (".tbz2", ".xpak")
# Time formats used in various places like metadata.chk.
TIMESTAMP_FORMAT = "%a, %d %b %Y %H:%M:%S +0000" # to be used with time.gmtime()
diff --git a/pym/portage/dbapi/__init__.py b/pym/portage/dbapi/__init__.py
index 34dfaa7..044faec 100644
--- a/pym/portage/dbapi/__init__.py
+++ b/pym/portage/dbapi/__init__.py
@@ -31,7 +31,8 @@ class dbapi(object):
_use_mutable = False
_known_keys = frozenset(x for x in auxdbkeys
if not x.startswith("UNUSED_0"))
- _pkg_str_aux_keys = ("EAPI", "KEYWORDS", "SLOT", "repository")
+ _pkg_str_aux_keys = ("BUILD_TIME", "EAPI", "BUILD_ID",
+ "KEYWORDS", "SLOT", "repository")
def __init__(self):
pass
@@ -57,7 +58,12 @@ class dbapi(object):
@staticmethod
def _cmp_cpv(cpv1, cpv2):
- return vercmp(cpv1.version, cpv2.version)
+ result = vercmp(cpv1.version, cpv2.version)
+ if (result == 0 and cpv1.build_time is not None and
+ cpv2.build_time is not None):
+ result = ((cpv1.build_time > cpv2.build_time) -
+ (cpv1.build_time < cpv2.build_time))
+ return result
@staticmethod
def _cpv_sort_ascending(cpv_list):
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index 583e208..b98b26e 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -17,14 +17,13 @@ portage.proxy.lazyimport.lazyimport(globals(),
'portage.update:update_dbentries',
'portage.util:atomic_ofstream,ensure_dirs,normalize_path,' + \
'writemsg,writemsg_stdout',
- 'portage.util.listdir:listdir',
'portage.util.path:first_existing',
'portage.util._urlopen:urlopen@_urlopen',
'portage.versions:best,catpkgsplit,catsplit,_pkg_str',
)
from portage.cache.mappings import slot_dict_class
-from portage.const import CACHE_PATH
+from portage.const import CACHE_PATH, SUPPORTED_XPAK_EXTENSIONS
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
@@ -71,18 +70,26 @@ class bindbapi(fakedbapi):
_known_keys = frozenset(list(fakedbapi._known_keys) + \
["CHOST", "repository", "USE"])
def __init__(self, mybintree=None, **kwargs):
- fakedbapi.__init__(self, **kwargs)
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False,
+ multi_instance=True, **kwargs)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
- self.cpvdict={}
- self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE",
- "DEFINED_PHASES", "PROVIDES", "REQUIRES"
+ ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE", "_mtime_"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@ -109,33 +116,49 @@ class bindbapi(fakedbapi):
return fakedbapi.cpv_exists(self, cpv)
def cpv_inject(self, cpv, **kwargs):
- self._aux_cache.pop(cpv, None)
- fakedbapi.cpv_inject(self, cpv, **kwargs)
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_inject(self, cpv,
+ metadata=cpv._metadata, **kwargs)
def cpv_remove(self, cpv):
- self._aux_cache.pop(cpv, None)
+ if not self.bintree.populated:
+ self.bintree.populate()
fakedbapi.cpv_remove(self, cpv)
def aux_get(self, mycpv, wants, myrepo=None):
if self.bintree and not self.bintree.populated:
self.bintree.populate()
- cache_me = False
+ # Support plain string for backward compatibility with API
+ # consumers (including portageq, which passes in a cpv from
+ # a command-line argument).
+ instance_key = self._instance_key(mycpv,
+ support_string=True)
if not self._known_keys.intersection(
wants).difference(self._aux_cache_keys):
- aux_cache = self._aux_cache.get(mycpv)
+ aux_cache = self.cpvdict[instance_key]
if aux_cache is not None:
return [aux_cache.get(x, "") for x in wants]
- cache_me = True
mysplit = mycpv.split("/")
mylist = []
tbz2name = mysplit[1]+".tbz2"
if not self.bintree._remotepkgs or \
not self.bintree.isremote(mycpv):
- tbz2_path = self.bintree.getname(mycpv)
- if not os.path.exists(tbz2_path):
+ try:
+ tbz2_path = self.bintree._pkg_paths[instance_key]
+ except KeyError:
+ raise KeyError(mycpv)
+ tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+ try:
+ st = os.lstat(tbz2_path)
+ except OSError:
raise KeyError(mycpv)
metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
def getitem(k):
+ if k == "_mtime_":
+ return _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ return _unicode(st.st_size)
v = metadata_bytes.get(_unicode_encode(k,
encoding=_encodings['repo.content'],
errors='backslashreplace'))
@@ -144,11 +167,9 @@ class bindbapi(fakedbapi):
encoding=_encodings['repo.content'], errors='replace')
return v
else:
- getitem = self.bintree._remotepkgs[mycpv].get
+ getitem = self.cpvdict[instance_key].get
mydata = {}
mykeys = wants
- if cache_me:
- mykeys = self._aux_cache_keys.union(wants)
for x in mykeys:
myval = getitem(x)
# myval is None if the key doesn't exist
@@ -159,16 +180,24 @@ class bindbapi(fakedbapi):
if not mydata.setdefault('EAPI', '0'):
mydata['EAPI'] = '0'
- if cache_me:
- aux_cache = self._aux_cache_slot_dict()
- for x in self._aux_cache_keys:
- aux_cache[x] = mydata.get(x, '')
- self._aux_cache[mycpv] = aux_cache
return [mydata.get(x, '') for x in wants]
def aux_update(self, cpv, values):
if not self.bintree.populated:
self.bintree.populate()
+ build_id = None
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ if self.bintree._multi_instance:
+ # The cpv.build_id attribute is required if we are in
+ # multi-instance mode, since otherwise we won't know
+ # which instance to update.
+ raise
+ else:
+ cpv = self._instance_key(cpv, support_string=True)[0]
+ build_id = cpv.build_id
+
tbz2path = self.bintree.getname(cpv)
if not os.path.exists(tbz2path):
raise KeyError(cpv)
@@ -187,7 +216,7 @@ class bindbapi(fakedbapi):
del mydata[k]
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
# inject will clear stale caches via cpv_inject.
- self.bintree.inject(cpv)
+ self.bintree.inject(cpv, filename=tbz2path)
def cp_list(self, *pargs, **kwargs):
if not self.bintree.populated:
@@ -219,7 +248,7 @@ class bindbapi(fakedbapi):
if not self.bintree.isremote(pkg):
pass
else:
- metadata = self.bintree._remotepkgs[pkg]
+ metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
try:
size = int(metadata["SIZE"])
except KeyError:
@@ -232,48 +261,6 @@ class bindbapi(fakedbapi):
return filesdict
-def _pkgindex_cpv_map_latest_build(pkgindex):
- """
- Given a PackageIndex instance, create a dict of cpv -> metadata map.
- If multiple packages have identical CPV values, prefer the package
- with latest BUILD_TIME value.
- @param pkgindex: A PackageIndex instance.
- @type pkgindex: PackageIndex
- @rtype: dict
- @return: a dict containing entry for the give cpv.
- """
- cpv_map = {}
-
- for d in pkgindex.packages:
- cpv = d["CPV"]
-
- try:
- cpv = _pkg_str(cpv)
- except InvalidData:
- writemsg(_("!!! Invalid remote binary package: %s\n") % cpv,
- noiselevel=-1)
- continue
-
- btime = d.get('BUILD_TIME', '')
- try:
- btime = int(btime)
- except ValueError:
- btime = None
-
- other_d = cpv_map.get(cpv)
- if other_d is not None:
- other_btime = other_d.get('BUILD_TIME', '')
- try:
- other_btime = int(other_btime)
- except ValueError:
- other_btime = None
- if other_btime and (not btime or other_btime > btime):
- continue
-
- cpv_map[_pkg_str(cpv)] = d
-
- return cpv_map
-
class binarytree(object):
"this tree scans for a list of all packages available in PKGDIR"
def __init__(self, _unused=DeprecationWarning, pkgdir=None,
@@ -300,6 +287,13 @@ class binarytree(object):
if True:
self.pkgdir = normalize_path(pkgdir)
+ # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = ("binpkg-multi-instance" in
+ settings.features)
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
self.dbapi = bindbapi(self, settings=settings)
self.update_ents = self.dbapi.update_ents
self.move_slot_ent = self.dbapi.move_slot_ent
@@ -310,7 +304,6 @@ class binarytree(object):
self.invalids = []
self.settings = settings
self._pkg_paths = {}
- self._pkgindex_uri = {}
self._populating = False
self._all_directory = os.path.isdir(
os.path.join(self.pkgdir, "All"))
@@ -318,12 +311,14 @@ class binarytree(object):
self._pkgindex_hashes = ["MD5","SHA1"]
self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
+ self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI", "PROVIDES", "REQUIRES"]
+ ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
@@ -336,6 +331,7 @@ class binarytree(object):
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
"USE_EXPAND_UNPREFIXED"])
self._pkgindex_default_pkg_data = {
+ "BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
"DEPEND" : "",
@@ -363,6 +359,7 @@ class binarytree(object):
self._pkgindex_translated_keys = (
("DESCRIPTION" , "DESC"),
+ ("_mtime_" , "MTIME"),
("repository" , "REPO"),
)
@@ -453,103 +450,30 @@ class binarytree(object):
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
self.dbapi.cpv_remove(mycpv)
- del self._pkg_paths[mycpv]
+ del self._pkg_paths[self.dbapi._instance_key(mycpv)]
+ metadata = self.dbapi._aux_cache_slot_dict()
+ for k in self.dbapi._aux_cache_keys:
+ v = mydata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ mynewcpv = _pkg_str(mynewcpv, metadata=metadata)
new_path = self.getname(mynewcpv)
- self._pkg_paths[mynewcpv] = os.path.join(
+ self._pkg_paths[
+ self.dbapi._instance_key(mynewcpv)] = os.path.join(
*new_path.split(os.path.sep)[-2:])
if new_path != mytbz2:
self._ensure_dir(os.path.dirname(new_path))
_movefile(tbz2path, new_path, mysettings=self.settings)
- self._remove_symlink(mycpv)
- if new_path.split(os.path.sep)[-2] == "All":
- self._create_symlink(mynewcpv)
self.inject(mynewcpv)
return moves
- def _remove_symlink(self, cpv):
- """Remove a ${PKGDIR}/${CATEGORY}/${PF}.tbz2 symlink and also remove
- the ${PKGDIR}/${CATEGORY} directory if empty. The file will not be
- removed if os.path.islink() returns False."""
- mycat, mypkg = catsplit(cpv)
- mylink = os.path.join(self.pkgdir, mycat, mypkg + ".tbz2")
- if os.path.islink(mylink):
- """Only remove it if it's really a link so that this method never
- removes a real package that was placed here to avoid a collision."""
- os.unlink(mylink)
- try:
- os.rmdir(os.path.join(self.pkgdir, mycat))
- except OSError as e:
- if e.errno not in (errno.ENOENT,
- errno.ENOTEMPTY, errno.EEXIST):
- raise
- del e
-
- def _create_symlink(self, cpv):
- """Create a ${PKGDIR}/${CATEGORY}/${PF}.tbz2 symlink (and
- ${PKGDIR}/${CATEGORY} directory, if necessary). Any file that may
- exist in the location of the symlink will first be removed."""
- mycat, mypkg = catsplit(cpv)
- full_path = os.path.join(self.pkgdir, mycat, mypkg + ".tbz2")
- self._ensure_dir(os.path.dirname(full_path))
- try:
- os.unlink(full_path)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- os.symlink(os.path.join("..", "All", mypkg + ".tbz2"), full_path)
-
def prevent_collision(self, cpv):
- """Make sure that the file location ${PKGDIR}/All/${PF}.tbz2 is safe to
- use for a given cpv. If a collision will occur with an existing
- package from another category, the existing package will be bumped to
- ${PKGDIR}/${CATEGORY}/${PF}.tbz2 so that both can coexist."""
- if not self._all_directory:
- return
-
- # Copy group permissions for new directories that
- # may have been created.
- for path in ("All", catsplit(cpv)[0]):
- path = os.path.join(self.pkgdir, path)
- self._ensure_dir(path)
- if not os.access(path, os.W_OK):
- raise PermissionDenied("access('%s', W_OK)" % path)
-
- full_path = self.getname(cpv)
- if "All" == full_path.split(os.path.sep)[-2]:
- return
- """Move a colliding package if it exists. Code below this point only
- executes in rare cases."""
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- mypath = os.path.join("All", myfile)
- dest_path = os.path.join(self.pkgdir, mypath)
-
- try:
- st = os.lstat(dest_path)
- except OSError:
- st = None
- else:
- if stat.S_ISLNK(st.st_mode):
- st = None
- try:
- os.unlink(dest_path)
- except OSError:
- if os.path.exists(dest_path):
- raise
-
- if st is not None:
- # For invalid packages, other_cat could be None.
- other_cat = portage.xpak.tbz2(dest_path).getfile(b"CATEGORY")
- if other_cat:
- other_cat = _unicode_decode(other_cat,
- encoding=_encodings['repo.content'], errors='replace')
- other_cat = other_cat.strip()
- other_cpv = other_cat + "/" + mypkg
- self._move_from_all(other_cpv)
- self.inject(other_cpv)
- self._move_to_all(cpv)
+ warnings.warn("The "
+ "portage.dbapi.bintree.binarytree.prevent_collision "
+ "method is deprecated.",
+ DeprecationWarning, stacklevel=2)
def _ensure_dir(self, path):
"""
@@ -585,37 +509,6 @@ class binarytree(object):
except PortageException:
pass
- def _move_to_all(self, cpv):
- """If the file exists, move it. Whether or not it exists, update state
- for future getname() calls."""
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- self._pkg_paths[cpv] = os.path.join("All", myfile)
- src_path = os.path.join(self.pkgdir, mycat, myfile)
- try:
- mystat = os.lstat(src_path)
- except OSError as e:
- mystat = None
- if mystat and stat.S_ISREG(mystat.st_mode):
- self._ensure_dir(os.path.join(self.pkgdir, "All"))
- dest_path = os.path.join(self.pkgdir, "All", myfile)
- _movefile(src_path, dest_path, mysettings=self.settings)
- self._create_symlink(cpv)
- self.inject(cpv)
-
- def _move_from_all(self, cpv):
- """Move a package from ${PKGDIR}/All/${PF}.tbz2 to
- ${PKGDIR}/${CATEGORY}/${PF}.tbz2 and update state from getname calls."""
- self._remove_symlink(cpv)
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- mypath = os.path.join(mycat, myfile)
- dest_path = os.path.join(self.pkgdir, mypath)
- self._ensure_dir(os.path.dirname(dest_path))
- src_path = os.path.join(self.pkgdir, "All", myfile)
- _movefile(src_path, dest_path, mysettings=self.settings)
- self._pkg_paths[cpv] = mypath
-
def populate(self, getbinpkgs=0):
"populates the binarytree"
@@ -643,55 +536,63 @@ class binarytree(object):
# prior to performing package moves since it only wants to
# operate on local packages (getbinpkgs=0).
self._remotepkgs = None
- self.dbapi._clear_cache()
- self.dbapi._aux_cache.clear()
+ self.dbapi.clear()
+ _instance_key = self.dbapi._instance_key
if True:
pkg_paths = {}
self._pkg_paths = pkg_paths
- dirs = listdir(self.pkgdir, dirsonly=True, EmptyOnError=True)
- if "All" in dirs:
- dirs.remove("All")
- dirs.sort()
- dirs.insert(0, "All")
+ dir_files = {}
+ for parent, dir_names, file_names in os.walk(self.pkgdir):
+ relative_parent = parent[len(self.pkgdir)+1:]
+ dir_files[relative_parent] = file_names
+
pkgindex = self._load_pkgindex()
- pf_index = None
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
header = pkgindex.header
metadata = {}
+ basename_index = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ path = d.get("PATH")
+ if not path:
+ path = cpv + ".tbz2"
+ basename = os.path.basename(path)
+ basename_index.setdefault(basename, []).append(d)
+
update_pkgindex = False
- for mydir in dirs:
- for myfile in listdir(os.path.join(self.pkgdir, mydir)):
- if not myfile.endswith(".tbz2"):
+ for mydir, file_names in dir_files.items():
+ try:
+ mydir = _unicode_decode(mydir,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ for myfile in file_names:
+ try:
+ myfile = _unicode_decode(myfile,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
continue
mypath = os.path.join(mydir, myfile)
full_path = os.path.join(self.pkgdir, mypath)
s = os.lstat(full_path)
- if stat.S_ISLNK(s.st_mode):
+
+ if not stat.S_ISREG(s.st_mode):
continue
# Validate data from the package index and try to avoid
# reading the xpak if possible.
- if mydir != "All":
- possibilities = None
- d = metadata.get(mydir+"/"+myfile[:-5])
- if d:
- possibilities = [d]
- else:
- if pf_index is None:
- pf_index = {}
- for mycpv in metadata:
- mycat, mypf = catsplit(mycpv)
- pf_index.setdefault(
- mypf, []).append(metadata[mycpv])
- possibilities = pf_index.get(myfile[:-5])
+ possibilities = basename_index.get(myfile)
if possibilities:
match = None
for d in possibilities:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
continue
except (KeyError, ValueError):
continue
@@ -705,15 +606,14 @@ class binarytree(object):
break
if match:
mycpv = match["CPV"]
- if mycpv in pkg_paths:
- # discard duplicates (All/ is preferred)
- continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ instance_key = _instance_key(mycpv)
+ pkg_paths[instance_key] = mypath
# update the path if the package has been moved
oldpath = d.get("PATH")
if oldpath and oldpath != mypath:
update_pkgindex = True
+ # Omit PATH if it is the default path for
+ # the current Packages format version.
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
if not oldpath:
@@ -723,11 +623,6 @@ class binarytree(object):
if oldpath:
update_pkgindex = True
self.dbapi.cpv_inject(mycpv)
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
continue
if not os.access(full_path, os.R_OK):
writemsg(_("!!! Permission denied to read " \
@@ -735,13 +630,12 @@ class binarytree(object):
noiselevel=-1)
self.invalids.append(myfile[:-5])
continue
- metadata_bytes = portage.xpak.tbz2(full_path).get_data()
- mycat = _unicode_decode(metadata_bytes.get(b"CATEGORY", ""),
- encoding=_encodings['repo.content'], errors='replace')
- mypf = _unicode_decode(metadata_bytes.get(b"PF", ""),
- encoding=_encodings['repo.content'], errors='replace')
- slot = _unicode_decode(metadata_bytes.get(b"SLOT", ""),
- encoding=_encodings['repo.content'], errors='replace')
+ pkg_metadata = self._read_metadata(full_path, s,
+ keys=chain(self.dbapi._aux_cache_keys,
+ ("PF", "CATEGORY")))
+ mycat = pkg_metadata.get("CATEGORY", "")
+ mypf = pkg_metadata.get("PF", "")
+ slot = pkg_metadata.get("SLOT", "")
mypkg = myfile[:-5]
if not mycat or not mypf or not slot:
#old-style or corrupt package
@@ -765,16 +659,51 @@ class binarytree(object):
writemsg("!!! %s\n" % line, noiselevel=-1)
self.invalids.append(mypkg)
continue
- mycat = mycat.strip()
- slot = slot.strip()
- if mycat != mydir and mydir != "All":
+
+ multi_instance = False
+ invalid_name = False
+ build_id = None
+ if myfile.endswith(".xpak"):
+ multi_instance = True
+ build_id = self._parse_build_id(myfile)
+ if build_id < 1:
+ invalid_name = True
+ elif myfile != "%s-%s.xpak" % (
+ mypf, build_id):
+ invalid_name = True
+ else:
+ mypkg = mypkg[:-len(str(build_id))-1]
+ elif myfile != mypf + ".tbz2":
+ invalid_name = True
+
+ if invalid_name:
+ writemsg(_("\n!!! Binary package name is "
+ "invalid: '%s'\n") % full_path,
+ noiselevel=-1)
+ continue
+
+ if pkg_metadata.get("BUILD_ID"):
+ try:
+ build_id = long(pkg_metadata["BUILD_ID"])
+ except ValueError:
+ writemsg(_("!!! Binary package has "
+ "invalid BUILD_ID: '%s'\n") %
+ full_path, noiselevel=-1)
+ continue
+ else:
+ build_id = None
+
+ if multi_instance:
+ name_split = catpkgsplit("%s/%s" %
+ (mycat, mypf))
+ if (name_split is None or
+ tuple(catsplit(mydir)) != name_split[:2]):
+ continue
+ elif mycat != mydir and mydir != "All":
continue
if mypkg != mypf.strip():
continue
mycpv = mycat + "/" + mypkg
- if mycpv in pkg_paths:
- # All is first, so it's preferred.
- continue
if not self.dbapi._category_re.match(mycat):
writemsg(_("!!! Binary package has an " \
"unrecognized category: '%s'\n") % full_path,
@@ -784,14 +713,23 @@ class binarytree(object):
(mycpv, self.settings["PORTAGE_CONFIGROOT"]),
noiselevel=-1)
continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ if build_id is not None:
+ pkg_metadata["BUILD_ID"] = _unicode(build_id)
+ pkg_metadata["SIZE"] = _unicode(s.st_size)
+ # Discard items used only for validation above.
+ pkg_metadata.pop("CATEGORY")
+ pkg_metadata.pop("PF")
+ mycpv = _pkg_str(mycpv,
+ metadata=self.dbapi._aux_cache_slot_dict(
+ pkg_metadata))
+ pkg_paths[_instance_key(mycpv)] = mypath
self.dbapi.cpv_inject(mycpv)
update_pkgindex = True
- d = metadata.get(mycpv, {})
+ d = metadata.get(_instance_key(mycpv),
+ pkgindex._pkg_slot_dict())
if d:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
d.clear()
except (KeyError, ValueError):
d.clear()
@@ -802,36 +740,30 @@ class binarytree(object):
except (KeyError, ValueError):
d.clear()
+ for k in self._pkgindex_allowed_pkg_keys:
+ v = pkg_metadata.get(k)
+ if v is not None:
+ d[k] = v
d["CPV"] = mycpv
- d["SLOT"] = slot
- d["MTIME"] = _unicode(s[stat.ST_MTIME])
- d["SIZE"] = _unicode(s.st_size)
- d.update(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(mycpv, self._pkgindex_aux_keys)))
try:
self._eval_use_flags(mycpv, d)
except portage.exception.InvalidDependString:
writemsg(_("!!! Invalid binary package: '%s'\n") % \
self.getname(mycpv), noiselevel=-1)
self.dbapi.cpv_remove(mycpv)
- del pkg_paths[mycpv]
+ del pkg_paths[_instance_key(mycpv)]
# record location if it's non-default
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
else:
d.pop("PATH", None)
- metadata[mycpv] = d
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
+ metadata[_instance_key(mycpv)] = d
- for cpv in list(metadata):
- if cpv not in pkg_paths:
- del metadata[cpv]
+ for instance_key in list(metadata):
+ if instance_key not in pkg_paths:
+ del metadata[instance_key]
# Do not bother to write the Packages index if $PKGDIR/All/ exists
# since it will provide no benefit due to the need to read CATEGORY
@@ -1056,45 +988,24 @@ class binarytree(object):
# The current user doesn't have permission to cache the
# file, but that's alright.
if pkgindex:
- # Organize remote package list as a cpv -> metadata map.
- remotepkgs = _pkgindex_cpv_map_latest_build(pkgindex)
remote_base_uri = pkgindex.header.get("URI", base_url)
- for cpv, remote_metadata in remotepkgs.items():
- remote_metadata["BASE_URI"] = remote_base_uri
- self._pkgindex_uri[cpv] = url
- self._remotepkgs.update(remotepkgs)
- self._remote_has_index = True
- for cpv in remotepkgs:
+ for d in pkgindex.packages:
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ instance_key = _instance_key(cpv)
+ # Local package instances override remote instances
+ # with the same instance_key.
+ if instance_key in metadata:
+ continue
+
+ d["CPV"] = cpv
+ d["BASE_URI"] = remote_base_uri
+ d["PKGINDEX_URI"] = url
+ self._remotepkgs[instance_key] = d
+ metadata[instance_key] = d
self.dbapi.cpv_inject(cpv)
- if True:
- # Remote package instances override local package
- # if they are not identical.
- hash_names = ["SIZE"] + self._pkgindex_hashes
- for cpv, local_metadata in metadata.items():
- remote_metadata = self._remotepkgs.get(cpv)
- if remote_metadata is None:
- continue
- # Use digests to compare identity.
- identical = True
- for hash_name in hash_names:
- local_value = local_metadata.get(hash_name)
- if local_value is None:
- continue
- remote_value = remote_metadata.get(hash_name)
- if remote_value is None:
- continue
- if local_value != remote_value:
- identical = False
- break
- if identical:
- del self._remotepkgs[cpv]
- else:
- # Override the local package in the aux_get cache.
- self.dbapi._aux_cache[cpv] = remote_metadata
- else:
- # Local package instances override remote instances.
- for cpv in metadata:
- self._remotepkgs.pop(cpv, None)
+
+ self._remote_has_index = True
self.populated=1
@@ -1106,7 +1017,8 @@ class binarytree(object):
@param filename: File path of the package to inject, or None if it's
already in the location returned by getname()
@type filename: string
- @rtype: None
+ @rtype: _pkg_str or None
+ @return: A _pkg_str instance on success, or None on failure.
"""
mycat, mypkg = catsplit(cpv)
if not self.populated:
@@ -1124,24 +1036,45 @@ class binarytree(object):
writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
noiselevel=-1)
return
- mytbz2 = portage.xpak.tbz2(full_path)
- slot = mytbz2.getfile("SLOT")
+ metadata = self._read_metadata(full_path, s)
+ slot = metadata.get("SLOT")
+ try:
+ self._eval_use_flags(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ slot = None
if slot is None:
writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
noiselevel=-1)
return
- slot = slot.strip()
- self.dbapi.cpv_inject(cpv)
+
+ fetched = False
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ build_id = None
+ else:
+ instance_key = self.dbapi._instance_key(cpv)
+ if instance_key in self.dbapi.cpvdict:
+ # This means we've been called by aux_update (or
+ # similar). The instance key typically changes (due to
+ # file modification), so we need to discard existing
+ # instance key references.
+ self.dbapi.cpv_remove(cpv)
+ self._pkg_paths.pop(instance_key, None)
+ if self._remotepkgs is not None:
+ fetched = self._remotepkgs.pop(instance_key, None)
+
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings)
# Reread the Packages index (in case it's been changed by another
# process) and then updated it, all while holding a lock.
pkgindex_lock = None
- created_symlink = False
try:
pkgindex_lock = lockfile(self._pkgindex_file,
wantnewlockfile=1)
if filename is not None:
- new_filename = self.getname(cpv)
+ new_filename = self.getname(cpv,
+ allocate_new=(build_id is None))
try:
samefile = os.path.samefile(filename, new_filename)
except OSError:
@@ -1151,54 +1084,31 @@ class binarytree(object):
_movefile(filename, new_filename, mysettings=self.settings)
full_path = new_filename
- self._file_permissions(full_path)
+ basename = os.path.basename(full_path)
+ pf = catsplit(cpv)[1]
+ if (build_id is None and not fetched and
+ basename.endswith(".xpak")):
+ # Apply the newly assigned BUILD_ID. This is intended
+ # to occur only for locally built packages. If the
+ # package was fetched, we want to preserve its
+ # attributes, so that we can later distinguish that it
+ # is identical to its remote counterpart.
+ build_id = self._parse_build_id(basename)
+ metadata["BUILD_ID"] = _unicode(build_id)
+ cpv = _pkg_str(cpv, metadata=metadata,
+ settings=self.settings)
+ binpkg = portage.xpak.tbz2(full_path)
+ binary_data = binpkg.get_data()
+ binary_data[b"BUILD_ID"] = _unicode_encode(
+ metadata["BUILD_ID"])
+ binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
- if self._all_directory and \
- self.getname(cpv).split(os.path.sep)[-2] == "All":
- self._create_symlink(cpv)
- created_symlink = True
+ self._file_permissions(full_path)
pkgindex = self._load_pkgindex()
-
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
- # Discard remote metadata to ensure that _pkgindex_entry
- # gets the local metadata. This also updates state for future
- # isremote calls.
- if self._remotepkgs is not None:
- self._remotepkgs.pop(cpv, None)
-
- # Discard cached metadata to ensure that _pkgindex_entry
- # doesn't return stale metadata.
- self.dbapi._aux_cache.pop(cpv, None)
-
- try:
- d = self._pkgindex_entry(cpv)
- except portage.exception.InvalidDependString:
- writemsg(_("!!! Invalid binary package: '%s'\n") % \
- self.getname(cpv), noiselevel=-1)
- self.dbapi.cpv_remove(cpv)
- del self._pkg_paths[cpv]
- return
-
- # If found, remove package(s) with duplicate path.
- path = d.get("PATH", "")
- for i in range(len(pkgindex.packages) - 1, -1, -1):
- d2 = pkgindex.packages[i]
- if path and path == d2.get("PATH"):
- # Handle path collisions in $PKGDIR/All
- # when CPV is not identical.
- del pkgindex.packages[i]
- elif cpv == d2.get("CPV"):
- if path == d2.get("PATH", ""):
- del pkgindex.packages[i]
- elif created_symlink and not d2.get("PATH", ""):
- # Delete entry for the package that was just
- # overwritten by a symlink to this package.
- del pkgindex.packages[i]
-
- pkgindex.packages.append(d)
-
+ d = self._inject_file(pkgindex, cpv, full_path)
self._update_pkgindex_header(pkgindex.header)
self._pkgindex_write(pkgindex)
@@ -1206,6 +1116,72 @@ class binarytree(object):
if pkgindex_lock:
unlockfile(pkgindex_lock)
+ # This is used to record BINPKGMD5 in the installed package
+ # database, for a package that has just been built.
+ cpv._metadata["MD5"] = d["MD5"]
+
+ return cpv
+
+ def _read_metadata(self, filename, st, keys=None):
+ if keys is None:
+ keys = self.dbapi._aux_cache_keys
+ metadata = self.dbapi._aux_cache_slot_dict()
+ else:
+ metadata = {}
+ binary_metadata = portage.xpak.tbz2(filename).get_data()
+ for k in keys:
+ if k == "_mtime_":
+ metadata[k] = _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ metadata[k] = _unicode(st.st_size)
+ else:
+ v = binary_metadata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ return metadata
+
+ def _inject_file(self, pkgindex, cpv, filename):
+ """
+ Add a package to internal data structures, and add an
+ entry to the given pkgindex.
+ @param pkgindex: The PackageIndex instance to which an entry
+ will be added.
+ @type pkgindex: PackageIndex
+ @param cpv: A _pkg_str instance corresponding to the package
+ being injected.
+ @type cpv: _pkg_str
+ @param filename: Absolute file path of the package to inject.
+ @type filename: string
+ @rtype: dict
+ @return: A dict corresponding to the new entry which has been
+ added to pkgindex. This may be used to access the checksums
+ which have just been generated.
+ """
+ # Update state for future isremote calls.
+ instance_key = self.dbapi._instance_key(cpv)
+ if self._remotepkgs is not None:
+ self._remotepkgs.pop(instance_key, None)
+
+ self.dbapi.cpv_inject(cpv)
+ self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
+ d = self._pkgindex_entry(cpv)
+
+ # If found, remove package(s) with duplicate path.
+ path = d.get("PATH", "")
+ for i in range(len(pkgindex.packages) - 1, -1, -1):
+ d2 = pkgindex.packages[i]
+ if path and path == d2.get("PATH"):
+ # Handle path collisions in $PKGDIR/All
+ # when CPV is not identical.
+ del pkgindex.packages[i]
+ elif cpv == d2.get("CPV"):
+ if path == d2.get("PATH", ""):
+ del pkgindex.packages[i]
+
+ pkgindex.packages.append(d)
+ return d
+
def _pkgindex_write(self, pkgindex):
contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
pkgindex.write(contents)
@@ -1231,7 +1207,7 @@ class binarytree(object):
def _pkgindex_entry(self, cpv):
"""
- Performs checksums and evaluates USE flag conditionals.
+ Performs checksums, and gets size and mtime via lstat.
Raises InvalidDependString if necessary.
@rtype: dict
@return: a dict containing entry for the give cpv.
@@ -1239,23 +1215,20 @@ class binarytree(object):
pkg_path = self.getname(cpv)
- d = dict(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(cpv, self._pkgindex_aux_keys)))
-
+ d = dict(cpv._metadata.items())
d.update(perform_multiple_checksums(
pkg_path, hashes=self._pkgindex_hashes))
d["CPV"] = cpv
- st = os.stat(pkg_path)
- d["MTIME"] = _unicode(st[stat.ST_MTIME])
+ st = os.lstat(pkg_path)
+ d["_mtime_"] = _unicode(st[stat.ST_MTIME])
d["SIZE"] = _unicode(st.st_size)
- rel_path = self._pkg_paths[cpv]
+ rel_path = pkg_path[len(self.pkgdir)+1:]
# record location if it's non-default
if rel_path != cpv + ".tbz2":
d["PATH"] = rel_path
- self._eval_use_flags(cpv, d)
return d
def _new_pkgindex(self):
@@ -1309,15 +1282,17 @@ class binarytree(object):
return False
def _eval_use_flags(self, cpv, metadata):
- use = frozenset(metadata["USE"].split())
+ use = frozenset(metadata.get("USE", "").split())
for k in self._pkgindex_use_evaluated_keys:
if k.endswith('DEPEND'):
token_class = Atom
else:
token_class = None
+ deps = metadata.get(k)
+ if deps is None:
+ continue
try:
- deps = metadata[k]
deps = use_reduce(deps, uselist=use, token_class=token_class)
deps = paren_enclose(deps)
except portage.exception.InvalidDependString as e:
@@ -1347,46 +1322,129 @@ class binarytree(object):
return ""
return mymatch
- def getname(self, pkgname):
- """Returns a file location for this package. The default location is
- ${PKGDIR}/All/${PF}.tbz2, but will be ${PKGDIR}/${CATEGORY}/${PF}.tbz2
- in the rare event of a collision. The prevent_collision() method can
- be called to ensure that ${PKGDIR}/All/${PF}.tbz2 is available for a
- specific cpv."""
+ def getname(self, cpv, allocate_new=None):
+ """Returns a file location for this package.
+ If cpv has both build_time and build_id attributes, then the
+ path to the specific corresponding instance is returned.
+ Otherwise, allocate a new path and return that. When allocating
+ a new path, behavior depends on the binpkg-multi-instance
+ FEATURES setting.
+ """
if not self.populated:
self.populate()
- mycpv = pkgname
- mypath = self._pkg_paths.get(mycpv, None)
- if mypath:
- return os.path.join(self.pkgdir, mypath)
- mycat, mypkg = catsplit(mycpv)
- if self._all_directory:
- mypath = os.path.join("All", mypkg + ".tbz2")
- if mypath in self._pkg_paths.values():
- mypath = os.path.join(mycat, mypkg + ".tbz2")
+
+ try:
+ cpv.cp
+ except AttributeError:
+ cpv = _pkg_str(cpv)
+
+ filename = None
+ if allocate_new:
+ filename = self._allocate_filename(cpv)
+ elif self._is_specific_instance(cpv):
+ instance_key = self.dbapi._instance_key(cpv)
+ path = self._pkg_paths.get(instance_key)
+ if path is not None:
+ filename = os.path.join(self.pkgdir, path)
+
+ if filename is None and not allocate_new:
+ try:
+ instance_key = self.dbapi._instance_key(cpv,
+ support_string=True)
+ except KeyError:
+ pass
+ else:
+ filename = self._pkg_paths.get(instance_key)
+ if filename is not None:
+ filename = os.path.join(self.pkgdir, filename)
+
+ if filename is None:
+ if self._multi_instance:
+ pf = catsplit(cpv)[1]
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), "1")
+ else:
+ filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ return filename
+
+ def _is_specific_instance(self, cpv):
+ specific = True
+ try:
+ build_time = cpv.build_time
+ build_id = cpv.build_id
+ except AttributeError:
+ specific = False
else:
- mypath = os.path.join(mycat, mypkg + ".tbz2")
- self._pkg_paths[mycpv] = mypath # cache for future lookups
- return os.path.join(self.pkgdir, mypath)
+ if build_time is None or build_id is None:
+ specific = False
+ return specific
+
+ def _max_build_id(self, cpv):
+ max_build_id = 0
+ for x in self.dbapi.cp_list(cpv.cp):
+ if (x == cpv and x.build_id is not None and
+ x.build_id > max_build_id):
+ max_build_id = x.build_id
+ return max_build_id
+
+ def _allocate_filename(self, cpv):
+ return os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ def _allocate_filename_multi(self, cpv):
+
+ # First, get the max build_id found when _populate was
+ # called.
+ max_build_id = self._max_build_id(cpv)
+
+ # A new package may have been added concurrently since the
+ # last _populate call, so increment the build_id until
+ # we locate an unused id.
+ pf = catsplit(cpv)[1]
+ build_id = max_build_id + 1
+
+ while True:
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+ if os.path.exists(filename):
+ build_id += 1
+ else:
+ return filename
+
+ @staticmethod
+ def _parse_build_id(filename):
+ build_id = -1
+ hyphen = filename.rfind("-", 0, -6)
+ if hyphen != -1:
+ build_id = filename[hyphen+1:-5]
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ pass
+ return build_id
def isremote(self, pkgname):
"""Returns true if the package is kept remotely and it has not been
downloaded (or it is only partially downloaded)."""
- if self._remotepkgs is None or pkgname not in self._remotepkgs:
+ if (self._remotepkgs is None or
+ self.dbapi._instance_key(pkgname) not in self._remotepkgs):
return False
# Presence in self._remotepkgs implies that it's remote. When a
# package is downloaded, state is updated by self.inject().
return True
- def get_pkgindex_uri(self, pkgname):
+ def get_pkgindex_uri(self, cpv):
"""Returns the URI to the Packages file for a given package."""
- return self._pkgindex_uri.get(pkgname)
-
-
+ uri = None
+ metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+ if metadata is not None:
+ uri = metadata["PKGINDEX_URI"]
+ return uri
def gettbz2(self, pkgname):
"""Fetches the package from a remote site, if necessary. Attempts to
resume if the file appears to be partially downloaded."""
+ instance_key = self.dbapi._instance_key(pkgname)
tbz2_path = self.getname(pkgname)
tbz2name = os.path.basename(tbz2_path)
resume = False
@@ -1402,10 +1460,10 @@ class binarytree(object):
self._ensure_dir(mydest)
# urljoin doesn't work correctly with unrecognized protocols like sftp
if self._remote_has_index:
- rel_url = self._remotepkgs[pkgname].get("PATH")
+ rel_url = self._remotepkgs[instance_key].get("PATH")
if not rel_url:
rel_url = pkgname+".tbz2"
- remote_base_uri = self._remotepkgs[pkgname]["BASE_URI"]
+ remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
else:
url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
@@ -1448,15 +1506,19 @@ class binarytree(object):
except AttributeError:
cpv = pkg
+ _instance_key = self.dbapi._instance_key
+ instance_key = _instance_key(cpv)
digests = {}
- metadata = None
- if self._remotepkgs is None or cpv not in self._remotepkgs:
+ metadata = (None if self._remotepkgs is None else
+ self._remotepkgs.get(instance_key))
+ if metadata is None:
for d in self._load_pkgindex().packages:
- if d["CPV"] == cpv:
+ if (d["CPV"] == cpv and
+ instance_key == _instance_key(_pkg_str(d["CPV"],
+ metadata=d, settings=self.settings))):
metadata = d
break
- else:
- metadata = self._remotepkgs[cpv]
+
if metadata is None:
return digests
diff --git a/pym/portage/dbapi/vartree.py b/pym/portage/dbapi/vartree.py
index cf31c8e..277c2f1 100644
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@ -173,7 +173,8 @@ class vardbapi(dbapi):
self.vartree = vartree
self._aux_cache_keys = set(
["BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "DESCRIPTION",
- "EAPI", "HDEPEND", "HOMEPAGE", "IUSE", "KEYWORDS",
+ "EAPI", "HDEPEND", "HOMEPAGE",
+ "BUILD_ID", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE", "RDEPEND",
"repository", "RESTRICT" , "SLOT", "USE", "DEFINED_PHASES",
"PROVIDES", "REQUIRES"
@@ -425,7 +426,10 @@ class vardbapi(dbapi):
continue
if len(mysplit) > 1:
if ps[0] == mysplit[1]:
- returnme.append(_pkg_str(mysplit[0]+"/"+x))
+ cpv = "%s/%s" % (mysplit[0], x)
+ metadata = dict(zip(self._aux_cache_keys,
+ self.aux_get(cpv, self._aux_cache_keys)))
+ returnme.append(_pkg_str(cpv, metadata=metadata))
self._cpv_sort_ascending(returnme)
if use_cache:
self.cpcache[mycp] = [mystat, returnme[:]]
diff --git a/pym/portage/dbapi/virtual.py b/pym/portage/dbapi/virtual.py
index ba9745c..3b7d10e 100644
--- a/pym/portage/dbapi/virtual.py
+++ b/pym/portage/dbapi/virtual.py
@@ -11,12 +11,17 @@ class fakedbapi(dbapi):
"""A fake dbapi that allows consumers to inject/remove packages to/from it
portage.settings is required to maintain the dbAPI.
"""
- def __init__(self, settings=None, exclusive_slots=True):
+ def __init__(self, settings=None, exclusive_slots=True,
+ multi_instance=False):
"""
@param exclusive_slots: When True, injecting a package with SLOT
metadata causes an existing package in the same slot to be
automatically removed (default is True).
@type exclusive_slots: Boolean
+ @param multi_instance: When True, multiple instances with the
+ same cpv may be stored simultaneously, as long as they are
+ distinguishable (default is False).
+ @type multi_instance: Boolean
"""
self._exclusive_slots = exclusive_slots
self.cpvdict = {}
@@ -25,6 +30,56 @@ class fakedbapi(dbapi):
from portage import settings
self.settings = settings
self._match_cache = {}
+ self._set_multi_instance(multi_instance)
+
+ def _set_multi_instance(self, multi_instance):
+ """
+ Enable or disable multi_instance mode. This should be called before
+ any packages are injected, so that all packages are indexed with
+ the same implementation of self._instance_key.
+ """
+ if self.cpvdict:
+ raise AssertionError("_set_multi_instance called after "
+ "packages have already been added")
+ self._multi_instance = multi_instance
+ if multi_instance:
+ self._instance_key = self._instance_key_multi_instance
+ else:
+ self._instance_key = self._instance_key_cpv
+
+ def _instance_key_cpv(self, cpv, support_string=False):
+ return cpv
+
+ def _instance_key_multi_instance(self, cpv, support_string=False):
+ try:
+ return (cpv, cpv.build_id, cpv.file_size, cpv.build_time,
+ cpv.mtime)
+ except AttributeError:
+ if not support_string:
+ raise
+
+ # Fallback for interfaces such as aux_get where API consumers
+ # may pass in a plain string.
+ latest = None
+ for pkg in self.cp_list(cpv_getkey(cpv)):
+ if pkg == cpv and (
+ latest is None or
+ latest.build_time < pkg.build_time):
+ latest = pkg
+
+ if latest is not None:
+ return (latest, latest.build_id, latest.file_size,
+ latest.build_time, latest.mtime)
+
+ raise KeyError(cpv)
+
+ def clear(self):
+ """
+ Remove all packages.
+ """
+ self._clear_cache()
+ self.cpvdict.clear()
+ self.cpdict.clear()
def _clear_cache(self):
if self._categories is not None:
@@ -43,7 +98,8 @@ class fakedbapi(dbapi):
return result[:]
def cpv_exists(self, mycpv, myrepo=None):
- return mycpv in self.cpvdict
+ return self._instance_key(mycpv,
+ support_string=True) in self.cpvdict
def cp_list(self, mycp, use_cache=1, myrepo=None):
# NOTE: Cache can be safely shared with the match cache, since the
@@ -63,7 +119,10 @@ class fakedbapi(dbapi):
return list(self.cpdict)
def cpv_all(self):
- return list(self.cpvdict)
+ if self._multi_instance:
+ return [x[0] for x in self.cpvdict]
+ else:
+ return list(self.cpvdict)
def cpv_inject(self, mycpv, metadata=None):
"""Adds a cpv to the list of available packages. See the
@@ -99,13 +158,14 @@ class fakedbapi(dbapi):
except AttributeError:
pass
- self.cpvdict[mycpv] = metadata
+ instance_key = self._instance_key(mycpv)
+ self.cpvdict[instance_key] = metadata
if not self._exclusive_slots:
myslot = None
if myslot and mycp in self.cpdict:
# If necessary, remove another package in the same SLOT.
for cpv in self.cpdict[mycp]:
- if mycpv != cpv:
+ if instance_key != self._instance_key(cpv):
try:
other_slot = cpv.slot
except AttributeError:
@@ -115,40 +175,41 @@ class fakedbapi(dbapi):
self.cpv_remove(cpv)
break
- cp_list = self.cpdict.get(mycp)
- if cp_list is None:
- cp_list = []
- self.cpdict[mycp] = cp_list
- try:
- cp_list.remove(mycpv)
- except ValueError:
- pass
+ cp_list = self.cpdict.get(mycp, [])
+ cp_list = [x for x in cp_list
+ if self._instance_key(x) != instance_key]
cp_list.append(mycpv)
+ self.cpdict[mycp] = cp_list
def cpv_remove(self,mycpv):
"""Removes a cpv from the list of available packages."""
self._clear_cache()
mycp = cpv_getkey(mycpv)
- if mycpv in self.cpvdict:
- del self.cpvdict[mycpv]
- if mycp not in self.cpdict:
- return
- while mycpv in self.cpdict[mycp]:
- del self.cpdict[mycp][self.cpdict[mycp].index(mycpv)]
- if not len(self.cpdict[mycp]):
- del self.cpdict[mycp]
+ instance_key = self._instance_key(mycpv)
+ self.cpvdict.pop(instance_key, None)
+ cp_list = self.cpdict.get(mycp)
+ if cp_list is not None:
+ cp_list = [x for x in cp_list
+ if self._instance_key(x) != instance_key]
+ if cp_list:
+ self.cpdict[mycp] = cp_list
+ else:
+ del self.cpdict[mycp]
def aux_get(self, mycpv, wants, myrepo=None):
- if not self.cpv_exists(mycpv):
+ metadata = self.cpvdict.get(
+ self._instance_key(mycpv, support_string=True))
+ if metadata is None:
raise KeyError(mycpv)
- metadata = self.cpvdict[mycpv]
- if not metadata:
- return ["" for x in wants]
return [metadata.get(x, "") for x in wants]
def aux_update(self, cpv, values):
self._clear_cache()
- self.cpvdict[cpv].update(values)
+ metadata = self.cpvdict.get(
+ self._instance_key(cpv, support_string=True))
+ if metadata is None:
+ raise KeyError(cpv)
+ metadata.update(values)
class testdbapi(object):
"""A dbapi instance with completely fake functions to get by hitting disk
diff --git a/pym/portage/emaint/modules/binhost/binhost.py b/pym/portage/emaint/modules/binhost/binhost.py
index 1138a8c..cf1213e 100644
--- a/pym/portage/emaint/modules/binhost/binhost.py
+++ b/pym/portage/emaint/modules/binhost/binhost.py
@@ -7,6 +7,7 @@ import stat
import portage
from portage import os
from portage.util import writemsg
+from portage.versions import _pkg_str
import sys
@@ -38,7 +39,7 @@ class BinhostHandler(object):
if size is None:
return True
- mtime = data.get("MTIME")
+ mtime = data.get("_mtime_")
if mtime is None:
return True
@@ -90,6 +91,7 @@ class BinhostHandler(object):
def fix(self, **kwargs):
onProgress = kwargs.get('onProgress', None)
bintree = self._bintree
+ _instance_key = bintree.dbapi._instance_key
cpv_all = self._bintree.dbapi.cpv_all()
cpv_all.sort()
missing = []
@@ -98,16 +100,21 @@ class BinhostHandler(object):
onProgress(maxval, 0)
pkgindex = self._pkgindex
missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
- stale = set(metadata).difference(cpv_all)
if missing or stale:
from portage import locks
pkgindex_lock = locks.lockfile(
@@ -121,31 +128,39 @@ class BinhostHandler(object):
pkgindex = bintree._load_pkgindex()
self._pkgindex = pkgindex
+ # Recount stale/missing packages, with lock held.
+ missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- # Recount missing packages, with lock held.
- del missing[:]
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
maxval = len(missing)
for i, cpv in enumerate(missing):
+ d = bintree._pkgindex_entry(cpv)
try:
- metadata[cpv] = bintree._pkgindex_entry(cpv)
+ bintree._eval_use_flags(cpv, d)
except portage.exception.InvalidDependString:
writemsg("!!! Invalid binary package: '%s'\n" % \
bintree.getname(cpv), noiselevel=-1)
+ else:
+ metadata[_instance_key(cpv)] = d
if onProgress:
onProgress(maxval, i+1)
- for cpv in set(metadata).difference(
- self._bintree.dbapi.cpv_all()):
- del metadata[cpv]
+ for cpv in stale:
+ del metadata[_instance_key(cpv)]
# We've updated the pkgindex, so set it to
# repopulate when necessary.
diff --git a/pym/portage/package/ebuild/config.py b/pym/portage/package/ebuild/config.py
index 71fe4df..961b1c8 100644
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@ -154,7 +154,8 @@ class config(object):
'PORTAGE_PYM_PATH', 'PORTAGE_PYTHONPATH'])
_setcpv_aux_keys = ('DEFINED_PHASES', 'DEPEND', 'EAPI', 'HDEPEND',
- 'INHERITED', 'IUSE', 'REQUIRED_USE', 'KEYWORDS', 'LICENSE', 'PDEPEND',
+ 'INHERITED', 'BUILD_ID', 'IUSE', 'REQUIRED_USE',
+ 'KEYWORDS', 'LICENSE', 'PDEPEND',
'PROPERTIES', 'PROVIDE', 'RDEPEND', 'SLOT',
'repository', 'RESTRICT', 'LICENSE',)
diff --git a/pym/portage/tests/resolver/ResolverPlayground.py b/pym/portage/tests/resolver/ResolverPlayground.py
index 84ad17c..48b86fc 100644
--- a/pym/portage/tests/resolver/ResolverPlayground.py
+++ b/pym/portage/tests/resolver/ResolverPlayground.py
@@ -39,6 +39,7 @@ class ResolverPlayground(object):
config_files = frozenset(("eapi", "layout.conf", "make.conf", "package.accept_keywords",
"package.keywords", "package.license", "package.mask", "package.properties",
+ "package.provided", "packages",
"package.unmask", "package.use", "package.use.aliases", "package.use.stable.mask",
"soname.provided",
"unpack_dependencies", "use.aliases", "use.force", "use.mask", "layout.conf"))
@@ -208,25 +209,37 @@ class ResolverPlayground(object):
raise AssertionError('digest creation failed for %s' % ebuild_path)
def _create_binpkgs(self, binpkgs):
- for cpv, metadata in binpkgs.items():
+ # When using BUILD_ID, there can be multiple instances for the
+ # same cpv. Therefore, binpkgs may be an iterable instead of
+ # a dict.
+ items = getattr(binpkgs, 'items', None)
+ items = items() if items is not None else binpkgs
+ for cpv, metadata in items:
a = Atom("=" + cpv, allow_repo=True)
repo = a.repo
if repo is None:
repo = "test_repo"
+ pn = catsplit(a.cp)[1]
cat, pf = catsplit(a.cpv)
metadata = metadata.copy()
metadata.setdefault("SLOT", "0")
metadata.setdefault("KEYWORDS", "x86")
metadata.setdefault("BUILD_TIME", "0")
+ metadata.setdefault("EAPI", "0")
metadata["repository"] = repo
metadata["CATEGORY"] = cat
metadata["PF"] = pf
repo_dir = self.pkgdir
category_dir = os.path.join(repo_dir, cat)
- binpkg_path = os.path.join(category_dir, pf + ".tbz2")
- ensure_dirs(category_dir)
+ if "BUILD_ID" in metadata:
+ binpkg_path = os.path.join(category_dir, pn,
+ "%s-%s.xpak"% (pf, metadata["BUILD_ID"]))
+ else:
+ binpkg_path = os.path.join(category_dir, pf + ".tbz2")
+
+ ensure_dirs(os.path.dirname(binpkg_path))
t = portage.xpak.tbz2(binpkg_path)
t.recompose_mem(portage.xpak.xpak_mem(metadata))
@@ -252,6 +265,7 @@ class ResolverPlayground(object):
unknown_keys = set(metadata).difference(
portage.dbapi.dbapi._known_keys)
unknown_keys.discard("BUILD_TIME")
+ unknown_keys.discard("BUILD_ID")
unknown_keys.discard("COUNTER")
unknown_keys.discard("repository")
unknown_keys.discard("USE")
@@ -749,7 +763,11 @@ class ResolverPlaygroundResult(object):
repo_str = ""
if x.repo != "test_repo":
repo_str = _repo_separator + x.repo
- mergelist_str = x.cpv + repo_str
+ build_id_str = ""
+ if (x.type_name == "binary" and
+ x.cpv.build_id is not None):
+ build_id_str = "-%s" % x.cpv.build_id
+ mergelist_str = x.cpv + build_id_str + repo_str
if x.built:
if x.operation == "merge":
desc = x.type_name
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py b/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
new file mode 100644
index 0000000..4725d33
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
@@ -0,0 +1,2 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py b/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
new file mode 100644
index 0000000..4725d33
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
@@ -0,0 +1,2 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py b/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
new file mode 100644
index 0000000..5729df4
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
@@ -0,0 +1,101 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from portage.tests import TestCase
+from portage.tests.resolver.ResolverPlayground import (ResolverPlayground,
+ ResolverPlaygroundTestCase)
+
+class RebuiltBinariesCase(TestCase):
+
+ def testRebuiltBinaries(self):
+
+ user_config = {
+ "make.conf":
+ (
+ "FEATURES=\"binpkg-multi-instance\"",
+ ),
+ }
+
+ binpkgs = (
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ )
+
+ installed = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ },
+ }
+
+ world = (
+ "app-misc/A",
+ "dev-libs/B",
+ )
+
+ test_cases = (
+
+ ResolverPlaygroundTestCase(
+ ["@world"],
+ options = {
+ "--deep": True,
+ "--rebuilt-binaries": True,
+ "--update": True,
+ "--usepkgonly": True,
+ },
+ success = True,
+ ignore_mergelist_order=True,
+ mergelist = [
+ "[binary]dev-libs/B-1-3",
+ "[binary]app-misc/A-1-3"
+ ]
+ ),
+
+ )
+
+ playground = ResolverPlayground(debug=False,
+ binpkgs=binpkgs, installed=installed,
+ user_config=user_config, world=world)
+ try:
+ for test_case in test_cases:
+ playground.run_TestCase(test_case)
+ self.assertEqual(test_case.test_success, True,
+ test_case.fail_msg)
+ finally:
+ # Disable debug so that cleanup works.
+ #playground.debug = False
+ playground.cleanup()
diff --git a/pym/portage/versions.py b/pym/portage/versions.py
index 2c9fe5b..c2c6675 100644
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@ -18,6 +18,7 @@ if sys.hexversion < 0x3000000:
_unicode = unicode
else:
_unicode = str
+ long = int
import portage
portage.proxy.lazyimport.lazyimport(globals(),
@@ -361,11 +362,13 @@ class _pkg_str(_unicode):
"""
def __new__(cls, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
return _unicode.__new__(cls, cpv)
def __init__(self, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
if not isinstance(cpv, _unicode):
# Avoid TypeError from _unicode.__init__ with PyPy.
cpv = _unicode_decode(cpv)
@@ -375,10 +378,51 @@ class _pkg_str(_unicode):
slot = metadata.get('SLOT', slot)
repo = metadata.get('repository', repo)
eapi = metadata.get('EAPI', eapi)
+ build_time = metadata.get('BUILD_TIME', build_time)
+ file_size = metadata.get('SIZE', file_size)
+ build_id = metadata.get('BUILD_ID', build_id)
+ mtime = metadata.get('_mtime_', mtime)
if settings is not None:
self.__dict__['_settings'] = settings
if eapi is not None:
self.__dict__['eapi'] = eapi
+ if build_time is not None:
+ try:
+ build_time = long(build_time)
+ except ValueError:
+ if build_time:
+ build_time = -1
+ else:
+ build_time = 0
+ if build_id is not None:
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ if build_id:
+ build_id = -1
+ else:
+ build_id = None
+ if file_size is not None:
+ try:
+ file_size = long(file_size)
+ except ValueError:
+ if file_size:
+ file_size = -1
+ else:
+ file_size = None
+ if mtime is not None:
+ try:
+ mtime = long(mtime)
+ except ValueError:
+ if mtime:
+ mtime = -1
+ else:
+ mtime = None
+
+ self.__dict__['build_time'] = build_time
+ self.__dict__['file_size'] = file_size
+ self.__dict__['build_id'] = build_id
+ self.__dict__['mtime'] = mtime
self.__dict__['cpv_split'] = catpkgsplit(cpv, eapi=eapi)
if self.cpv_split is None:
raise InvalidData(cpv)
--
2.0.5
* [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id (bug 150031)
2015-02-17 8:37 [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031) Zac Medico
@ 2015-02-17 8:37 ` Zac Medico
2015-02-17 18:58 ` Brian Dolbec
2015-02-17 18:42 ` [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance " Brian Dolbec
2015-02-18 3:40 ` [gentoo-portage-dev] Re: [PATCH 1/2] " Duncan
2 siblings, 1 reply; 23+ messages in thread
From: Zac Medico @ 2015-02-17 8:37 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
When "profile-formats = build-id" is enabled in layout.conf of the
containing repository, a dependency atom in the profile can refer
to a specific build, using the build-id that is assigned when
FEATURES=binpkg-multi-instance is enabled. A build-id atom is identical
to a version-specific atom, except that the version is followed by a
hyphen and an integer build-id.
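For example, using the app-misc/A-1 package from the tests added below,
the two forms compare as follows (the trailing integer in the second
atom is the build-id):

  =app-misc/A-1        (ordinary version-specific atom)
  =app-misc/A-1-2      (same version, restricted to build-id 2)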
With the build-id profile format, it is possible to assemble a system
using specific builds of binary packages, as users of "binary"
distros might be accustomed to. For example, an atom in the "packages"
file can pull a specific build of a package into the @system set, and
an atom in the "package.keywords" file can be used to modify the
effective KEYWORDS of a specific build of a package.
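As a minimal sketch (hypothetical profile location, reusing app-misc/A-1
with build-id 2 from the test case added below), the relevant entries
could look like:

  # profiles/targets/example/packages
  *=app-misc/A-1-2

  # profiles/targets/example/package.keywords
  =app-misc/A-1-2 ~amd64

together with "profile-formats = build-id" in the containing
repository's metadata/layout.conf.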
Referring to specific builds can be useful for a number of reasons. For
example, if a particular build needs to undergo a large amount of
testing in a complex environment in order to verify reliability, then it
can be useful to lock a profile to a specific build that has been
thoroughly tested.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
---
man/portage.5 | 8 +-
pym/_emerge/is_valid_package_atom.py | 5 +-
pym/portage/_sets/ProfilePackageSet.py | 3 +-
pym/portage/_sets/profiles.py | 3 +-
pym/portage/dep/__init__.py | 35 +++++-
.../package/ebuild/_config/KeywordsManager.py | 3 +-
.../package/ebuild/_config/LocationsManager.py | 8 +-
pym/portage/package/ebuild/_config/MaskManager.py | 21 +++-
pym/portage/package/ebuild/_config/UseManager.py | 14 ++-
pym/portage/package/ebuild/config.py | 15 ++-
pym/portage/repository/config.py | 2 +-
pym/portage/tests/dep/test_isvalidatom.py | 8 +-
.../test_build_id_profile_format.py | 134 +++++++++++++++++++++
pym/portage/util/__init__.py | 13 +-
14 files changed, 234 insertions(+), 38 deletions(-)
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
diff --git a/man/portage.5 b/man/portage.5
index ed5140d..e062f9f 100644
--- a/man/portage.5
+++ b/man/portage.5
@@ -1180,7 +1180,7 @@ and the newer/faster "md5-dict" format. Default is to detect dirs.
The EAPI to use for profiles when unspecified. This attribute is
supported only if profile-default-eapi is included in profile-formats.
.TP
-.BR profile\-formats " = [pms] [portage-1] [portage-2] [profile-bashrcs] [profile-set] [profile-default-eapi]"
+.BR profile\-formats " = [pms] [portage-1] [portage-2] [profile-bashrcs] [profile-set] [profile-default-eapi] [build-id]"
Control functionality available to profiles in this repo such as which files
may be dirs, or the syntax available in parent files. Use "portage-2" if you're
unsure. The default is "portage-1-compat" mode which is meant to be compatible
@@ -1190,7 +1190,11 @@ Setting profile-bashrcs will enable the per-profile bashrc mechanism
profile \fBpackages\fR file to add atoms to the @profile package set.
See the profile \fBpackages\fR section for more information.
Setting profile-default-eapi enables support for the
-profile_eapi_when_unspecified attribute.
+profile_eapi_when_unspecified attribute. Setting build\-id allows
+dependency atoms in the profile to refer to specific builds (see the
+binpkg\-multi\-instance FEATURES setting in \fBmake.conf\fR(5)). A
+build\-id atom is identical to a version-specific atom, except that the
+version is followed by a hyphen and an integer build\-id.
.RE
.RE
diff --git a/pym/_emerge/is_valid_package_atom.py b/pym/_emerge/is_valid_package_atom.py
index 112afc1..17f7642 100644
--- a/pym/_emerge/is_valid_package_atom.py
+++ b/pym/_emerge/is_valid_package_atom.py
@@ -14,9 +14,10 @@ def insert_category_into_atom(atom, category):
ret = None
return ret
-def is_valid_package_atom(x, allow_repo=False):
+def is_valid_package_atom(x, allow_repo=False, allow_build_id=True):
if "/" not in x.split(":")[0]:
x2 = insert_category_into_atom(x, 'cat')
if x2 != None:
x = x2
- return isvalidatom(x, allow_blockers=False, allow_repo=allow_repo)
+ return isvalidatom(x, allow_blockers=False, allow_repo=allow_repo,
+ allow_build_id=allow_build_id)
diff --git a/pym/portage/_sets/ProfilePackageSet.py b/pym/portage/_sets/ProfilePackageSet.py
index 2fcafb6..fec9373 100644
--- a/pym/portage/_sets/ProfilePackageSet.py
+++ b/pym/portage/_sets/ProfilePackageSet.py
@@ -23,7 +23,8 @@ class ProfilePackageSet(PackageSet):
def load(self):
self._setAtoms(x for x in stack_lists(
[grabfile_package(os.path.join(y.location, "packages"),
- verify_eapi=True, eapi=y.eapi, eapi_default=None)
+ verify_eapi=True, eapi=y.eapi, eapi_default=None,
+ allow_build_id=y.allow_build_id)
for y in self._profiles
if "profile-set" in y.profile_formats],
incremental=1) if x[:1] != "*")
diff --git a/pym/portage/_sets/profiles.py b/pym/portage/_sets/profiles.py
index ccb3432..bccc02e 100644
--- a/pym/portage/_sets/profiles.py
+++ b/pym/portage/_sets/profiles.py
@@ -34,7 +34,8 @@ class PackagesSystemSet(PackageSet):
(self._profiles,), level=logging.DEBUG, noiselevel=-1)
mylist = [grabfile_package(os.path.join(x.location, "packages"),
- verify_eapi=True, eapi=x.eapi, eapi_default=None)
+ verify_eapi=True, eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id)
for x in self._profiles]
if debug:
diff --git a/pym/portage/dep/__init__.py b/pym/portage/dep/__init__.py
index e2e416c..e419dc9 100644
--- a/pym/portage/dep/__init__.py
+++ b/pym/portage/dep/__init__.py
@@ -1191,11 +1191,11 @@ class Atom(_unicode):
self.overlap = self._overlap(forbid=forbid_overlap)
def __new__(cls, s, unevaluated_atom=None, allow_wildcard=False, allow_repo=None,
- _use=None, eapi=None, is_valid_flag=None):
+ _use=None, eapi=None, is_valid_flag=None, allow_build_id=None):
return _unicode.__new__(cls, s)
def __init__(self, s, unevaluated_atom=None, allow_wildcard=False, allow_repo=None,
- _use=None, eapi=None, is_valid_flag=None):
+ _use=None, eapi=None, is_valid_flag=None, allow_build_id=None):
if isinstance(s, Atom):
# This is an efficiency assertion, to ensure that the Atom
# constructor is not called redundantly.
@@ -1218,6 +1218,8 @@ class Atom(_unicode):
else:
if allow_repo is None:
allow_repo = True
+ if allow_build_id is None:
+ allow_build_id = True
blocker_prefix = ""
if "!" == s[:1]:
@@ -1232,6 +1234,7 @@ class Atom(_unicode):
blocker = False
self.__dict__['blocker'] = blocker
m = atom_re.match(s)
+ build_id = None
extended_syntax = False
extended_version = None
if m is None:
@@ -1268,8 +1271,22 @@ class Atom(_unicode):
slot = m.group(atom_re.groups - 2)
repo = m.group(atom_re.groups - 1)
use_str = m.group(atom_re.groups)
- if m.group(base + 4) is not None:
- raise InvalidAtom(self)
+ version = m.group(base + 4)
+ if version is not None:
+ if allow_build_id:
+ cpv_build_id = cpv
+ cpv = cp
+ cp = cp[:-len(version)]
+ build_id = cpv_build_id[len(cpv)+1:]
+ if len(build_id) > 1 and build_id[:1] == "0":
+ # Leading zeros are not allowed.
+ raise InvalidAtom(self)
+ try:
+ build_id = int(build_id)
+ except ValueError:
+ raise InvalidAtom(self)
+ else:
+ raise InvalidAtom(self)
elif m.group('star') is not None:
base = atom_re.groupindex['star']
op = '=*'
@@ -1332,6 +1349,7 @@ class Atom(_unicode):
self.__dict__['slot_operator'] = None
self.__dict__['operator'] = op
self.__dict__['extended_syntax'] = extended_syntax
+ self.__dict__['build_id'] = build_id
if not (repo is None or allow_repo):
raise InvalidAtom(self)
@@ -1877,7 +1895,7 @@ def dep_getusedeps( depend ):
return tuple(use_list)
def isvalidatom(atom, allow_blockers=False, allow_wildcard=False,
- allow_repo=False, eapi=None):
+ allow_repo=False, eapi=None, allow_build_id=False):
"""
Check to see if a depend atom is valid
@@ -1902,7 +1920,8 @@ def isvalidatom(atom, allow_blockers=False, allow_wildcard=False,
try:
if not isinstance(atom, Atom):
atom = Atom(atom, allow_wildcard=allow_wildcard,
- allow_repo=allow_repo, eapi=eapi)
+ allow_repo=allow_repo, eapi=eapi,
+ allow_build_id=allow_build_id)
if not allow_blockers and atom.blocker:
return False
return True
@@ -2107,6 +2126,7 @@ def match_from_list(mydep, candidate_list):
mycpv = mydep.cpv
mycpv_cps = catpkgsplit(mycpv) # Can be None if not specific
slot = mydep.slot
+ build_id = mydep.build_id
if not mycpv_cps:
cat, pkg = catsplit(mycpv)
@@ -2181,6 +2201,9 @@ def match_from_list(mydep, candidate_list):
xcpv = remove_slot(x)
if not cpvequal(xcpv, mycpv):
continue
+ if (build_id is not None and
+ getattr(xcpv, "build_id", None) != build_id):
+ continue
mylist.append(x)
elif operator == "=*": # glob match
diff --git a/pym/portage/package/ebuild/_config/KeywordsManager.py b/pym/portage/package/ebuild/_config/KeywordsManager.py
index e1a8e2b..72e24b9 100644
--- a/pym/portage/package/ebuild/_config/KeywordsManager.py
+++ b/pym/portage/package/ebuild/_config/KeywordsManager.py
@@ -22,7 +22,8 @@ class KeywordsManager(object):
rawpkeywords = [grabdict_package(
os.path.join(x.location, "package.keywords"),
recursive=x.portage1_directories,
- verify_eapi=True, eapi=x.eapi, eapi_default=None)
+ verify_eapi=True, eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id)
for x in profiles]
for pkeyworddict in rawpkeywords:
if not pkeyworddict:
diff --git a/pym/portage/package/ebuild/_config/LocationsManager.py b/pym/portage/package/ebuild/_config/LocationsManager.py
index 34b33e9..55b8c08 100644
--- a/pym/portage/package/ebuild/_config/LocationsManager.py
+++ b/pym/portage/package/ebuild/_config/LocationsManager.py
@@ -31,7 +31,8 @@ _PORTAGE1_DIRECTORIES = frozenset([
'use.mask', 'use.force'])
_profile_node = collections.namedtuple('_profile_node',
- 'location portage1_directories user_config profile_formats eapi')
+ ('location', 'portage1_directories', 'user_config',
+ 'profile_formats', 'eapi', 'allow_build_id'))
_allow_parent_colon = frozenset(
["portage-2"])
@@ -142,7 +143,8 @@ class LocationsManager(object):
_profile_node(custom_prof, True, True,
('profile-bashrcs', 'profile-set'),
read_corresponding_eapi_file(
- custom_prof + os.sep, default=None)))
+ custom_prof + os.sep, default=None),
+ True))
del custom_prof
self.profiles = tuple(self.profiles)
@@ -253,7 +255,7 @@ class LocationsManager(object):
self.profiles.append(currentPath)
self.profiles_complex.append(
_profile_node(currentPath, allow_directories, False,
- current_formats, eapi))
+ current_formats, eapi, 'build-id' in current_formats))
def _expand_parent_colon(self, parentsFile, parentPath,
repo_loc, repositories):
diff --git a/pym/portage/package/ebuild/_config/MaskManager.py b/pym/portage/package/ebuild/_config/MaskManager.py
index 55c8c7a..44aba23 100644
--- a/pym/portage/package/ebuild/_config/MaskManager.py
+++ b/pym/portage/package/ebuild/_config/MaskManager.py
@@ -40,7 +40,9 @@ class MaskManager(object):
pmask_cache[loc] = grabfile_package(path,
recursive=repo_config.portage1_profiles,
remember_source_file=True, verify_eapi=True,
- eapi_default=repo_config.eapi)
+ eapi_default=repo_config.eapi,
+ allow_build_id=("build-id"
+ in repo_config.profile_formats))
if repo_config.portage1_profiles_compat and os.path.isdir(path):
warnings.warn(_("Repository '%(repo_name)s' is implicitly using "
"'portage-1' profile format in its profiles/package.mask, but "
@@ -107,7 +109,8 @@ class MaskManager(object):
continue
repo_lines = grabfile_package(os.path.join(repo.location, "profiles", "package.unmask"), \
recursive=1, remember_source_file=True,
- verify_eapi=True, eapi_default=repo.eapi)
+ verify_eapi=True, eapi_default=repo.eapi,
+ allow_build_id=("build-id" in repo.profile_formats))
lines = stack_lists([repo_lines], incremental=1, \
remember_source_file=True, warn_for_unmatched_removal=True,
strict_warn_for_unmatched_removal=strict_umatched_removal)
@@ -122,13 +125,15 @@ class MaskManager(object):
os.path.join(x.location, "package.mask"),
recursive=x.portage1_directories,
remember_source_file=True, verify_eapi=True,
- eapi=x.eapi, eapi_default=None))
+ eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id))
if x.portage1_directories:
profile_pkgunmasklines.append(grabfile_package(
os.path.join(x.location, "package.unmask"),
recursive=x.portage1_directories,
remember_source_file=True, verify_eapi=True,
- eapi=x.eapi, eapi_default=None))
+ eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id))
profile_pkgmasklines = stack_lists(profile_pkgmasklines, incremental=1, \
remember_source_file=True, warn_for_unmatched_removal=True,
strict_warn_for_unmatched_removal=strict_umatched_removal)
@@ -143,10 +148,14 @@ class MaskManager(object):
if user_config:
user_pkgmasklines = grabfile_package(
os.path.join(abs_user_config, "package.mask"), recursive=1, \
- allow_wildcard=True, allow_repo=True, remember_source_file=True, verify_eapi=False)
+ allow_wildcard=True, allow_repo=True,
+ remember_source_file=True, verify_eapi=False,
+ allow_build_id=True)
user_pkgunmasklines = grabfile_package(
os.path.join(abs_user_config, "package.unmask"), recursive=1, \
- allow_wildcard=True, allow_repo=True, remember_source_file=True, verify_eapi=False)
+ allow_wildcard=True, allow_repo=True,
+ remember_source_file=True, verify_eapi=False,
+ allow_build_id=True)
#Stack everything together. At this point, only user_pkgmasklines may contain -atoms.
#Don't warn for unmatched -atoms here, since we don't do it for any other user config file.
diff --git a/pym/portage/package/ebuild/_config/UseManager.py b/pym/portage/package/ebuild/_config/UseManager.py
index 60d5f92..a93ea5c 100644
--- a/pym/portage/package/ebuild/_config/UseManager.py
+++ b/pym/portage/package/ebuild/_config/UseManager.py
@@ -153,7 +153,8 @@ class UseManager(object):
return tuple(ret)
def _parse_file_to_dict(self, file_name, juststrings=False, recursive=True,
- eapi_filter=None, user_config=False, eapi=None, eapi_default="0"):
+ eapi_filter=None, user_config=False, eapi=None, eapi_default="0",
+ allow_build_id=False):
"""
@param file_name: input file name
@type file_name: str
@@ -176,6 +177,9 @@ class UseManager(object):
@param eapi_default: the default EAPI which applies if the
current profile node does not define a local EAPI
@type eapi_default: str
+ @param allow_build_id: allow atoms to specify a particular
+ build-id
+ @type allow_build_id: bool
@rtype: tuple
@return: collection of USE flags
"""
@@ -192,7 +196,7 @@ class UseManager(object):
file_dict = grabdict_package(file_name, recursive=recursive,
allow_wildcard=extended_syntax, allow_repo=extended_syntax,
verify_eapi=(not extended_syntax), eapi=eapi,
- eapi_default=eapi_default)
+ eapi_default=eapi_default, allow_build_id=allow_build_id)
if eapi is not None and eapi_filter is not None and not eapi_filter(eapi):
if file_dict:
writemsg(_("--- EAPI '%s' does not support '%s': '%s'\n") %
@@ -262,7 +266,8 @@ class UseManager(object):
for repo in repositories.repos_with_profiles():
ret[repo.name] = self._parse_file_to_dict(
os.path.join(repo.location, "profiles", file_name),
- eapi_filter=eapi_filter, eapi_default=repo.eapi)
+ eapi_filter=eapi_filter, eapi_default=repo.eapi,
+ allow_build_id=("build-id" in repo.profile_formats))
return ret
def _parse_profile_files_to_tuple_of_tuples(self, file_name, locations,
@@ -279,7 +284,8 @@ class UseManager(object):
os.path.join(profile.location, file_name), juststrings,
recursive=profile.portage1_directories, eapi_filter=eapi_filter,
user_config=profile.user_config, eapi=profile.eapi,
- eapi_default=None) for profile in locations)
+ eapi_default=None, allow_build_id=profile.allow_build_id)
+ for profile in locations)
def _parse_repository_usealiases(self, repositories):
ret = {}
diff --git a/pym/portage/package/ebuild/config.py b/pym/portage/package/ebuild/config.py
index 961b1c8..35b5c10 100644
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@ -570,7 +570,8 @@ class config(object):
try:
packages_list = [grabfile_package(
os.path.join(x.location, "packages"),
- verify_eapi=True, eapi=x.eapi, eapi_default=None)
+ verify_eapi=True, eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id)
for x in profiles_complex]
except IOError as e:
if e.errno == IsADirectory.errno:
@@ -708,7 +709,8 @@ class config(object):
#package.properties
propdict = grabdict_package(os.path.join(
abs_user_config, "package.properties"), recursive=1, allow_wildcard=True, \
- allow_repo=True, verify_eapi=False)
+ allow_repo=True, verify_eapi=False,
+ allow_build_id=True)
v = propdict.pop("*/*", None)
if v is not None:
if "ACCEPT_PROPERTIES" in self.configdict["conf"]:
@@ -722,7 +724,8 @@ class config(object):
d = grabdict_package(os.path.join(
abs_user_config, "package.accept_restrict"),
recursive=True, allow_wildcard=True,
- allow_repo=True, verify_eapi=False)
+ allow_repo=True, verify_eapi=False,
+ allow_build_id=True)
v = d.pop("*/*", None)
if v is not None:
if "ACCEPT_RESTRICT" in self.configdict["conf"]:
@@ -735,7 +738,8 @@ class config(object):
#package.env
penvdict = grabdict_package(os.path.join(
abs_user_config, "package.env"), recursive=1, allow_wildcard=True, \
- allow_repo=True, verify_eapi=False)
+ allow_repo=True, verify_eapi=False,
+ allow_build_id=True)
v = penvdict.pop("*/*", None)
if v is not None:
global_wildcard_conf = {}
@@ -765,7 +769,8 @@ class config(object):
bashrc = grabdict_package(os.path.join(profile.location,
"package.bashrc"), recursive=1, allow_wildcard=True,
allow_repo=True, verify_eapi=True,
- eapi=profile.eapi, eapi_default=None)
+ eapi=profile.eapi, eapi_default=None,
+ allow_build_id=profile.allow_build_id)
if not bashrc:
continue
diff --git a/pym/portage/repository/config.py b/pym/portage/repository/config.py
index a884156..5da1810 100644
--- a/pym/portage/repository/config.py
+++ b/pym/portage/repository/config.py
@@ -42,7 +42,7 @@ _invalid_path_char_re = re.compile(r'[^a-zA-Z0-9._\-+:/]')
_valid_profile_formats = frozenset(
['pms', 'portage-1', 'portage-2', 'profile-bashrcs', 'profile-set',
- 'profile-default-eapi'])
+ 'profile-default-eapi', 'build-id'])
_portage1_profiles_allow_directories = frozenset(
["portage-1-compat", "portage-1", 'portage-2'])
diff --git a/pym/portage/tests/dep/test_isvalidatom.py b/pym/portage/tests/dep/test_isvalidatom.py
index 67ba603..9d3367a 100644
--- a/pym/portage/tests/dep/test_isvalidatom.py
+++ b/pym/portage/tests/dep/test_isvalidatom.py
@@ -5,11 +5,13 @@ from portage.tests import TestCase
from portage.dep import isvalidatom
class IsValidAtomTestCase(object):
- def __init__(self, atom, expected, allow_wildcard=False, allow_repo=False):
+ def __init__(self, atom, expected, allow_wildcard=False,
+ allow_repo=False, allow_build_id=False):
self.atom = atom
self.expected = expected
self.allow_wildcard = allow_wildcard
self.allow_repo = allow_repo
+ self.allow_build_id = allow_build_id
class IsValidAtom(TestCase):
@@ -154,5 +156,7 @@ class IsValidAtom(TestCase):
else:
atom_type = "invalid"
self.assertEqual(bool(isvalidatom(test_case.atom, allow_wildcard=test_case.allow_wildcard,
- allow_repo=test_case.allow_repo)), test_case.expected,
+ allow_repo=test_case.allow_repo,
+ allow_build_id=test_case.allow_build_id)),
+ test_case.expected,
msg="isvalidatom(%s) != %s" % (test_case.atom, test_case.expected))
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py b/pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
new file mode 100644
index 0000000..0397509
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
@@ -0,0 +1,134 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from portage.tests import TestCase
+from portage.tests.resolver.ResolverPlayground import (ResolverPlayground,
+ ResolverPlaygroundTestCase)
+
+class BuildIdProfileFormatTestCase(TestCase):
+
+ def testBuildIdProfileFormat(self):
+
+ profile = {
+ "packages": ("=app-misc/A-1-2",),
+ "package.provided": ("sys-libs/zlib-1.2.8-r1",),
+ }
+
+ repo_configs = {
+ "test_repo": {
+ "layout.conf": (
+ "profile-formats = build-id profile-set",
+ ),
+ }
+ }
+
+ user_config = {
+ "make.conf":
+ (
+ "FEATURES=\"binpkg-multi-instance\"",
+ ),
+ }
+
+ ebuilds = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "IUSE": "foo",
+ },
+ }
+
+ binpkgs = (
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "foo",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ )
+
+ installed = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ "RDEPEND": "sys-libs/zlib",
+ "DEPEND": "sys-libs/zlib",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "foo",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ },
+ }
+
+ world = ()
+
+ test_cases = (
+
+ ResolverPlaygroundTestCase(
+ ["@world"],
+ options = {"--emptytree": True, "--usepkgonly": True},
+ success = True,
+ mergelist = [
+ "[binary]dev-libs/B-1-2",
+ "[binary]app-misc/A-1-2"
+ ]
+ ),
+
+ )
+
+ playground = ResolverPlayground(debug=False,
+ binpkgs=binpkgs, ebuilds=ebuilds, installed=installed,
+ repo_configs=repo_configs, profile=profile,
+ user_config=user_config, world=world)
+ try:
+ for test_case in test_cases:
+ playground.run_TestCase(test_case)
+ self.assertEqual(test_case.test_success, True,
+ test_case.fail_msg)
+ finally:
+ # Disable debug so that cleanup works.
+ #playground.debug = False
+ playground.cleanup()
diff --git a/pym/portage/util/__init__.py b/pym/portage/util/__init__.py
index b6f5787..aeb951e 100644
--- a/pym/portage/util/__init__.py
+++ b/pym/portage/util/__init__.py
@@ -424,7 +424,8 @@ def read_corresponding_eapi_file(filename, default="0"):
return default
return eapi
-def grabdict_package(myfilename, juststrings=0, recursive=0, allow_wildcard=False, allow_repo=False,
+def grabdict_package(myfilename, juststrings=0, recursive=0,
+ allow_wildcard=False, allow_repo=False, allow_build_id=False,
verify_eapi=False, eapi=None, eapi_default="0"):
""" Does the same thing as grabdict except it validates keys
with isvalidatom()"""
@@ -447,7 +448,8 @@ def grabdict_package(myfilename, juststrings=0, recursive=0, allow_wildcard=Fals
for k, v in d.items():
try:
k = Atom(k, allow_wildcard=allow_wildcard,
- allow_repo=allow_repo, eapi=eapi)
+ allow_repo=allow_repo,
+ allow_build_id=allow_build_id, eapi=eapi)
except InvalidAtom as e:
writemsg(_("--- Invalid atom in %s: %s\n") % (filename, e),
noiselevel=-1)
@@ -460,7 +462,8 @@ def grabdict_package(myfilename, juststrings=0, recursive=0, allow_wildcard=Fals
return atoms
-def grabfile_package(myfilename, compatlevel=0, recursive=0, allow_wildcard=False, allow_repo=False,
+def grabfile_package(myfilename, compatlevel=0, recursive=0,
+ allow_wildcard=False, allow_repo=False, allow_build_id=False,
remember_source_file=False, verify_eapi=False, eapi=None,
eapi_default="0"):
@@ -480,7 +483,9 @@ def grabfile_package(myfilename, compatlevel=0, recursive=0, allow_wildcard=Fals
if pkg[:1] == '*' and mybasename == 'packages':
pkg = pkg[1:]
try:
- pkg = Atom(pkg, allow_wildcard=allow_wildcard, allow_repo=allow_repo, eapi=eapi)
+ pkg = Atom(pkg, allow_wildcard=allow_wildcard,
+ allow_repo=allow_repo, allow_build_id=allow_build_id,
+ eapi=eapi)
except InvalidAtom as e:
writemsg(_("--- Invalid atom in %s: %s\n") % (source_file, e),
noiselevel=-1)
--
2.0.5
* Re: [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id (bug 150031)
2015-02-17 8:37 ` [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id " Zac Medico
@ 2015-02-17 18:58 ` Brian Dolbec
2015-02-17 19:37 ` Zac Medico
0 siblings, 1 reply; 23+ messages in thread
From: Brian Dolbec @ 2015-02-17 18:58 UTC (permalink / raw
To: gentoo-portage-dev
On Tue, 17 Feb 2015 00:37:13 -0800
Zac Medico <zmedico@gentoo.org> wrote:
> When "profile-formats = build-id" is enabled in layout.conf of the
> containing repository, a dependency atom in the profile can refer
> to a specific build, using the build-id that is assigned when
> FEATURES=binpkg-multi-instance is enabled. A build-id atom is
> identical to a version-specific atom, except that the version is
> followed by a hyphen and an integer build-id.
>
> With the build-id profile format, it is possible to assemble a system
> using specific builds of binary packages, as users of "binary"
> distros might be accustomed to. For example, an atom in the "packages"
> file can pull a specific build of a package into the @system set, and
> an atom in the "package.keywords" file can be used to modify the
> effective KEYWORDS of a specific build of a package.
>
> Referring to specific builds can be useful for a number of reasons. For
> example, if a particular build needs to undergo a large amount of
> testing in a complex environment in order to verify reliability, then
> it can be useful to lock a profile to a specific build that has been
> thoroughly tested.
>
> X-Gentoo-Bug: 150031
> X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
> ---
> man/portage.5 | 8 +-
> pym/_emerge/is_valid_package_atom.py | 5 +-
> pym/portage/_sets/ProfilePackageSet.py | 3 +-
> pym/portage/_sets/profiles.py | 3 +-
> pym/portage/dep/__init__.py | 35 +++++-
> .../package/ebuild/_config/KeywordsManager.py | 3 +-
> .../package/ebuild/_config/LocationsManager.py | 8 +-
> pym/portage/package/ebuild/_config/MaskManager.py | 21 +++-
> pym/portage/package/ebuild/_config/UseManager.py | 14 ++-
> pym/portage/package/ebuild/config.py | 15 ++-
> pym/portage/repository/config.py | 2 +-
> pym/portage/tests/dep/test_isvalidatom.py | 8 +-
> .../test_build_id_profile_format.py | 134
> +++++++++++++++++++++
> pym/portage/util/__init__.py | 13 +- 14 files
> changed, 234 insertions(+), 38 deletions(-) create mode 100644
> pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
>
class Atom()
if allow_repo is None:
allow_repo = True
+ if allow_build_id is None:
+ allow_build_id = True
these can be written as
allow_repo = allow_repo or True
allow_build_id = allow_build_id or True
Otherwise looks decent.
--
Brian Dolbec <dolsen>
* Re: [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id (bug 150031)
2015-02-17 18:58 ` Brian Dolbec
@ 2015-02-17 19:37 ` Zac Medico
2015-02-17 19:58 ` Brian Dolbec
0 siblings, 1 reply; 23+ messages in thread
From: Zac Medico @ 2015-02-17 19:37 UTC (permalink / raw
To: gentoo-portage-dev
On 02/17/2015 10:58 AM, Brian Dolbec wrote:
>
> class Atom()
>
> if allow_repo is None:
> allow_repo = True
> + if allow_build_id is None:
> + allow_build_id = True
>
>
> these can be written as
> allow_repo = allow_repo or True
> allow_build_id = allow_build_id or True
Actually, your version behaves differently than mine for the case where
False has been passed in for these parameters. The parameters are
designed to provide a "smart" default, as long as the caller has not
passed in an explicit True or False value.
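A quick illustration of the difference, in plain Python (nothing
portage-specific here):

  >>> allow_repo = False
  >>> allow_repo or True     # the 'or' form overrides an explicit False
  True
  >>> True if allow_repo is None else allow_repo
  False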
--
Thanks,
Zac
* Re: [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id (bug 150031)
2015-02-17 19:37 ` Zac Medico
@ 2015-02-17 19:58 ` Brian Dolbec
0 siblings, 0 replies; 23+ messages in thread
From: Brian Dolbec @ 2015-02-17 19:58 UTC (permalink / raw
To: gentoo-portage-dev
On Tue, 17 Feb 2015 11:37:44 -0800
Zac Medico <zmedico@gentoo.org> wrote:
> On 02/17/2015 10:58 AM, Brian Dolbec wrote:
> >
> > class Atom()
> >
> > if allow_repo is None:
> > allow_repo = True
> > + if allow_build_id is None:
> > + allow_build_id = True
> >
> >
> > these can be written as
> > allow_repo = allow_repo or True
> > allow_build_id = allow_build_id or True
>
> Actually, your version behaves differently than mine for the case
> where False has been passed in for these parameters. The parameters
> > are designed to provide a "smart" default, as long as the caller has
> not passed in an explicit True or False value.
yeah, I never thought of the possibility of them being passed in False.
I just thought, hey, there is a shorter way of doing this. Usually
I've done that for variables expecting lists, class instances, etc...
--
Brian Dolbec <dolsen>
* Re: [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-17 8:37 [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031) Zac Medico
2015-02-17 8:37 ` [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id " Zac Medico
@ 2015-02-17 18:42 ` Brian Dolbec
2015-02-17 19:26 ` Zac Medico
2015-02-18 3:40 ` [gentoo-portage-dev] Re: [PATCH 1/2] " Duncan
2 siblings, 1 reply; 23+ messages in thread
From: Brian Dolbec @ 2015-02-17 18:42 UTC (permalink / raw
To: gentoo-portage-dev
On Tue, 17 Feb 2015 00:37:12 -0800
Zac Medico <zmedico@gentoo.org> wrote:
> FEATURES=binpkg-multi-instance causes an integer build-id to be
> associated with each binary package instance. Inclusion of the
> build-id in the file name of the binary package file makes it
> possible to store an arbitrary number of binary packages built from
> the same ebuild.
>
> Having multiple instances is useful for a number of purposes, such as
> retaining builds that were built with different USE flags or linked
> against different versions of libraries. The location of any
> particular package within PKGDIR can be expressed as follows:
>
> ${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
>
> The build-id starts at 1 for the first build of a particular ebuild,
> and is incremented by 1 for each new build. It is possible to share a
> writable PKGDIR over NFS, and locking ensures that each package added
> to PKGDIR will have a unique build-id. It is not necessary to migrate
> an existing PKGDIR to the new layout, since portage is capable of
> working with a mixed PKGDIR layout, where packages using the old
> layout are allowed to remain in place.
>
> The new PKGDIR layout is backward-compatible with binhost clients
> running older portage, since the file format is identical, the
> per-package PATH attribute in the 'Packages' index directs them to
> download the file from the correct URI, and they automatically use
> BUILD_TIME metadata to select the latest builds.
>
> There is currently no automated way to prune old builds from PKGDIR,
> although it is possible to remove packages manually, and then run
> 'emaint --fix binhost' to update the ${PKGDIR}/Packages index.
>
> It is not necessary to migrate an existing PKGDIR to the new layout,
> since portage is capable of working with a mixed PKGDIR layout, where
> packages using the old layout are allowed to remain in-place.
>
> There is currently no automated way to prune old builds from PKGDIR,
> although it is possible to remove packages manually, and then run
> 'emaint --fix binhost' update the ${PKGDIR}/Packages index. Support
> for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
>
> X-Gentoo-Bug: 150031
> X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
> ---
> bin/quickpkg | 1 -
> man/make.conf.5 | 27 +
> pym/_emerge/Binpkg.py | 33 +-
> pym/_emerge/BinpkgFetcher.py | 13 +-
> pym/_emerge/BinpkgVerifier.py | 6 +-
> pym/_emerge/EbuildBinpkg.py | 9 +-
> pym/_emerge/EbuildBuild.py | 36 +-
> pym/_emerge/Package.py | 67 +-
> pym/_emerge/Scheduler.py | 6 +-
> pym/_emerge/clear_caches.py | 1 -
> pym/_emerge/resolver/output.py | 21 +-
> pym/portage/const.py | 2 +
> pym/portage/dbapi/__init__.py | 10 +-
> pym/portage/dbapi/bintree.py | 842
> +++++++++++----------
> pym/portage/dbapi/vartree.py | 8 +-
> pym/portage/dbapi/virtual.py | 113 ++-
> pym/portage/emaint/modules/binhost/binhost.py | 47 +-
> pym/portage/package/ebuild/config.py | 3 +-
> pym/portage/tests/resolver/ResolverPlayground.py | 26
> +- .../resolver/binpkg_multi_instance/__init__.py | 2
> + .../resolver/binpkg_multi_instance/__test__.py | 2
> + .../binpkg_multi_instance/test_rebuilt_binaries.py | 101 +++
> pym/portage/versions.py | 48 +- 23 files
> changed, 932 insertions(+), 492 deletions(-) create mode 100644
>
>
overall, there is no way I know the code well enough to know if you
screwed up. But the code looks decent, so...
My only questions are:
pym/portage/dbapi/bintree.py:
You removed several functions from the binarytree class and essentially
reduced prevent_collision to a warning message. Can you briefly say
why they are not needed please.
_populate() ==> it is some 500+ LOC nasty. From what I can see I
think you added slightly more loc than you deleted/changed. While I
don't expect a breakup of this function to be directly a part of
this commit.
IT IS BADLY NEEDED.
I'd like to see this function properly split into pieces and act as
a driver for the smaller tasks. Currently it is a mess of nested
if/else, for,... that is extremely difficult to keep straight and
make logic changes to. It's almost as bad as repoman's 1000+ LOC
main loop. Why "if TRUE:" <== just fix the indent... At the same
time, I noticed _ensure_dirs, _file_permissions functions defined in
that class. Shouldn't these just be util functions available
everywhere?
pym/portage/versions.py
class _pkg_str(_unicode)
in __init__()
if build_time is not None:
try:
build_time = long(build_time)
except ValueError:
if build_time:
build_time = -1
else:
build_time = 0
if build_id is not None:
try:
build_id = long(build_id)
except ValueError:
if build_id:
build_id = -1
else:
build_id = None
if file_size is not None:
try:
file_size = long(file_size)
except ValueError:
if file_size:
file_size = -1
else:
file_size = None
if mtime is not None:
try:
mtime = long(mtime)
except ValueError:
if mtime:
mtime = -1
else:
mtime = None
I hate repeating code. This can be done using one universal small
function and will clean up the __init__()
@staticmethod
def _long(var, default):
if var is not None:
try:
var = long(var)
except ValueError:
if var:
var = -1
else:
var = default
return var
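For illustration, the four near-identical blocks above would then
collapse to single calls, roughly like this (a standalone sketch using
int, which is what long aliases to on Python 3; the sample values are
made up):

def _long(var, default):
    # Coerce to an integer; unparsable non-empty input becomes -1,
    # empty input falls back to the given default, None passes through.
    if var is not None:
        try:
            var = int(var)
        except ValueError:
            var = -1 if var else default
    return var

build_time = _long("1424131200", 0)  # valid string -> 1424131200
build_id = _long("garbage", None)    # unparsable non-empty -> -1
file_size = _long("", None)          # empty -> default (None)
mtime = _long(None, None)            # missing -> stays None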
--
Brian Dolbec <dolsen>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-17 18:42 ` [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance " Brian Dolbec
@ 2015-02-17 19:26 ` Zac Medico
2015-02-17 19:56 ` Brian Dolbec
0 siblings, 1 reply; 23+ messages in thread
From: Zac Medico @ 2015-02-17 19:26 UTC (permalink / raw
To: gentoo-portage-dev
On 02/17/2015 10:42 AM, Brian Dolbec wrote:
>
> overall, there is no way I know the code well enough to know if you
> screwed up. But the code looks decent, so...
>
> My only questions are:
>
> pym/portage/dbapi/bintree.py:
>
> You removed several functions from the binarytree class and essentially
> reduced prevent_collision to a warning message. Can you briefly say
> why they are not needed please.
Okay, I'll include this info in an updated patch:
_pkgindex_cpv_map_latest_build:
This is what binhost clients running older versions of portage will use
to select the latest builds when their binhost server switches
to FEATURES=binpkg-multi-instance. New portage won't need this anymore
because it is capable of examining multiple builds and it uses sorting
to ensure that the latest builds are preferred when appropriate.
_remove_symlink, _create_symlink, prevent_collision, _move_to_all, and
_move_from_all:
These are all related to the oldest PKGDIR layout, which put all of the
tbz2 files in $PKGDIR/All, and created symlinks to them in the category
directories. The $PKGDIR/All layout should be practically extinct by
now. New portage recognizes all existing layouts, or mixtures of them,
and uses the old packages in place. It never puts new packages in
$PKGDIR/All, so there's no need to move packages around to prevent file
name collisions between packages from different categories. It also only
uses regular files (any symlinks are ignored).
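To make the compatibility story concrete, the reduction that an
old-style index consumer performs looks roughly like this (a standalone
sketch, not the removed function's actual code; the index entries are
made up):

def map_latest_build(index_entries):
    # Collapse a multi-instance 'Packages' index to one entry per CPV,
    # preferring the entry with the largest BUILD_TIME.
    latest = {}
    for entry in index_entries:
        try:
            build_time = int(entry.get("BUILD_TIME", ""))
        except ValueError:
            build_time = 0
        best = latest.get(entry["CPV"])
        if best is None or build_time > best[0]:
            latest[entry["CPV"]] = (build_time, entry)
    return dict((cpv, pair[1]) for cpv, pair in latest.items())

index = [
    {"CPV": "app-misc/foo-1", "BUILD_TIME": "100",
     "PATH": "app-misc/foo/foo-1-1.xpak"},
    {"CPV": "app-misc/foo-1", "BUILD_TIME": "200",
     "PATH": "app-misc/foo/foo-1-2.xpak"},
]
print(map_latest_build(index)["app-misc/foo-1"]["PATH"])  # foo-1-2.xpak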
> _populate() ==> it is some 500+ LOC nasty. From what I can see I
> think you added slightly more loc than you deleted/changed. While I
> don't expect a breakup of this function to be directly a part of
> this commit.
>
> IT IS BADLY NEEDED.
>
> I'd like to see this function properly split into pieces and act as
> a driver for the smaller tasks. Currently it is a mess of nested
> if/else, for,... that is extremely difficult to keep straight and
> make logic changes to. It's almost as bad as repoman's 1000+ LOC
> main loop. Why "if TRUE:" <== just fix the indent... At the same
> time, I noticed _ensure_dirs, _file_permissions functions defined in
> that class. Shouldn't these just be util functions available
> everywhere.
Yeah, I will work on cleaning that up, and submit it as a separate patch.
>
>
> pym/portage/versions.py
>
> class _pkg_str(_unicode)
>
> in __init__()
>
> if build_time is not None:
> try:
> build_time = long(build_time)
> except ValueError:
> if build_time:
> build_time = -1
> else:
> build_time = 0
> if build_id is not None:
> try:
> build_id = long(build_id)
> except ValueError:
> if build_id:
> build_id = -1
> else:
> build_id = None
> if file_size is not None:
> try:
> file_size = long(file_size)
> except ValueError:
> if file_size:
> file_size = -1
> else:
> file_size = None
> if mtime is not None:
> try:
> mtime = long(mtime)
> except ValueError:
> if mtime:
> mtime = -1
> else:
> mtime = None
>
>
> I hate repeating code. This can be done using one universal small
> function and will clean up the __init__()
>
> @staticmethod
> def _long(var, default):
> if var is not None:
> try:
> var = long(var)
> except ValueError:
> if var:
> var = -1
> else:
> var = default
> return var
Okay, I'll do that and post the updated patch.
--
Thanks,
Zac
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-17 19:26 ` Zac Medico
@ 2015-02-17 19:56 ` Brian Dolbec
2015-02-17 19:59 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
0 siblings, 2 replies; 23+ messages in thread
From: Brian Dolbec @ 2015-02-17 19:56 UTC (permalink / raw
To: gentoo-portage-dev
On Tue, 17 Feb 2015 11:26:27 -0800
Zac Medico <zmedico@gentoo.org> wrote:
> On 02/17/2015 10:42 AM, Brian Dolbec wrote:
> >
> > overall, there is no way I know the code well enough to know if you
> > screwed up. But the code looks decent, so...
> >
> > My only questions are:
> >
> > pym/portage/dbapi/bintree.py:
> >
> > You removed several functions from the binarytree class and
> > essentially reduced prevent_collision to a warning message. Can
> > you briefly say why they are not needed please.
>
> Okay, I'll include this info in an updated patch:
>
Actually, I think this one patch could be split into a few logical
ones. Tag them
binpkg-multi-instance 1 of...
in the commit message so it is clear they
belong together. That way the commit messages can more clearly be
relevant to the file(s) changed.
The commit message is already a short story in length ;) before adding
these new explanations.
> _pkgindex_cpv_map_latest_build:
>
> This is what binhost clients running older versions of portage will
> use to select the latest builds when their binhost server
> switches to FEATURES=binpkg-multi-instance. New portage won't need
> this anymore because it is capable of examining multiple builds and
> it uses sorting to ensure that the latest builds are preferred when
> appropriate.
>
> _remove_symlink, _create_symlink, prevent_collision, _move_to_all, and
> _move_from_all:
>
> These are all related to the oldest PKGDIR layout, which put all of
> the tbz2 files in $PKGDIR/All, and created symlinks to them in the
> category directories. The $PKGDIR/All layout should be practically
> extinct by now. New portage recognizes all existing layouts, or
> mixtures of them, and uses the old packages in place. It never puts
> new packages in $PKGDIR/All, so there's no need to move packages
> around to prevent file name collisions between packages from
> different categories. It also only uses regular files (any symlinks
> are ignored).
>
> > _populate() ==> it is some 500+ LOC nasty. From what I can see
> > I think you added slightly more loc than you deleted/changed.
> > While I don't expect a breakup of this function to be directly a
> > part of this commit.
> >
> > IT IS BADLY NEEDED.
> >
> > I'd like to see this function properly split into pieces and act
> > as a driver for the smaller tasks. Currently it is a mess of nested
> > if/else, for,... that is extremely difficult to keep straight and
> > make logic changes to. It's almost as bad as repoman's 1000+ LOC
> > main loop. Why "if TRUE:" <== just fix the indent... At the
> > same time, I noticed _ensure_dirs, _file_permissions functions
> > defined in that class. Shouldn't these just be util functions
> > available everywhere.
>
> Yeah, I will work on cleaning that up, and submit it as a separate
> patch.
>
> >
> >
> > pym/portage/versions.py
> >
> > class _pkg_str(_unicode)
> >
> > in __init__()
> >
> > if build_time is not None:
> > try:
> > build_time = long(build_time)
> > except ValueError:
> > if build_time:
> > build_time = -1
> > else:
> > build_time = 0
> > if build_id is not None:
> > try:
> > build_id = long(build_id)
> > except ValueError:
> > if build_id:
> > build_id = -1
> > else:
> > build_id = None
> > if file_size is not None:
> > try:
> > file_size = long(file_size)
> > except ValueError:
> > if file_size:
> > file_size = -1
> > else:
> > file_size = None
> > if mtime is not None:
> > try:
> > mtime = long(mtime)
> > except ValueError:
> > if mtime:
> > mtime = -1
> > else:
> > mtime = None
> >
> >
> > I hate repeating code. This can be done using one universal small
> > function and will clean up the __init__()
> >
> > @staticmethod
> > def _long(var, default):
> > if var is not None:
> > try:
> > var = long(var)
> > except ValueError:
> > if var:
> > var = -1
> > else:
> > var = default
> > return var
>
> Okay, I'll do that and post the updated patch.
--
Brian Dolbec <dolsen>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-17 19:56 ` Brian Dolbec
@ 2015-02-17 19:59 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
1 sibling, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-17 19:59 UTC (permalink / raw
To: gentoo-portage-dev
On 02/17/2015 11:56 AM, Brian Dolbec wrote:
> On Tue, 17 Feb 2015 11:26:27 -0800
> Zac Medico <zmedico@gentoo.org> wrote:
>
>> On 02/17/2015 10:42 AM, Brian Dolbec wrote:
>>>
>>> overall, there is no way I know the code well enough to know if you
>>> screwed up. But the code looks decent, so...
>>>
>>> My only questions are:
>>>
>>> pym/portage/dbapi/bintree.py:
>>>
>>> You removed several functions from the binarytree class and
>>> essentially reduced prevent_collision to a warning message. Can
>>> you briefly say why they are not needed please.
>>
>> Okay, I'll include this info in an updated patch:
>>
>
> Actually, I think this one patch could be split into a few logical
> ones. Tag them
>
> binpkg-multi-instance 1 of...
>
> in the commit message so it is clear they
> belong together. That way the commit messages can more clearly be
> relevant to the file(s) changed.
>
> The commit message is already a short story in length ;) before adding
> these new explanations.
Okay, will do.
--
Thanks,
Zac
^ permalink raw reply [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 0/7] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-17 19:56 ` Brian Dolbec
2015-02-17 19:59 ` Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 1/7] binpkg-multi-instance 1 of 7 Zac Medico
` (7 more replies)
1 sibling, 8 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
FEATURES=binpkg-multi-instance causes an integer build-id to be
associated with each binary package instance. Inclusion of the build-id
in the file name of the binary package file makes it possible to store
an arbitrary number of binary packages built from the same ebuild.
Having multiple instances is useful for a number of purposes, such as
retaining builds that were built with different USE flags or linked
against different versions of libraries. The location of any particular
package within PKGDIR can be expressed as follows:
${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
The build-id starts at 1 for the first build of a particular ebuild,
and is incremented by 1 for each new build. It is possible to share a
writable PKGDIR over NFS, and locking ensures that each package added
to PKGDIR will have a unique build-id. It is not necessary to migrate
an existing PKGDIR to the new layout, since portage is capable of
working with a mixed PKGDIR layout, where packages using the old layout
are allowed to remain in place.
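As a rough illustration of that layout (a standalone sketch, not the
code in bintree.py; the PKGDIR path and package names here are made up,
and the real implementation allocates build-ids under a lock via the
Packages index):

import os
import re

def binpkg_path(pkgdir, category, pn, pf, build_id):
    # ${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
    return os.path.join(pkgdir, category, pn, "%s-%s.xpak" % (pf, build_id))

def next_build_id(pkgdir, category, pn, pf):
    # Scan existing ${PF}-${BUILD_ID}.xpak files; the first build gets 1.
    pkg_dir = os.path.join(pkgdir, category, pn)
    pattern = re.compile(re.escape(pf) + r"-(\d+)\.xpak$")
    max_id = 0
    if os.path.isdir(pkg_dir):
        for name in os.listdir(pkg_dir):
            m = pattern.match(name)
            if m:
                max_id = max(max_id, int(m.group(1)))
    return max_id + 1

print(binpkg_path("/path/to/pkgdir", "app-misc", "foo", "foo-1.0-r1", 3))
# /path/to/pkgdir/app-misc/foo/foo-1.0-r1-3.xpak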
The new PKGDIR layout is backward-compatible with binhost clients
running older portage, since the file format is identical, the
per-package PATH attribute in the 'Packages' index directs them to
download the file from the correct URI, and they automatically use
BUILD_TIME metadata to select the latest builds.
There is currently no automated way to prune old builds from PKGDIR,
although it is possible to remove packages manually, and then run
'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
Zac Medico (7):
binpkg-multi-instance 1 of 7
binpkg-multi-instance 2 of 7
binpkg-multi-instance 3 of 7
binpkg-multi-instance 4 of 7
binpkg-multi-instance 5 of 7
binpkg-multi-instance 6 of 7
binpkg-multi-instance 7 of 7
bin/quickpkg | 1 -
man/make.conf.5 | 27 +
man/portage.5 | 8 +-
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/Package.py | 67 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/clear_caches.py | 1 -
pym/_emerge/is_valid_package_atom.py | 5 +-
pym/_emerge/resolver/output.py | 21 +-
pym/portage/_sets/ProfilePackageSet.py | 3 +-
pym/portage/_sets/profiles.py | 3 +-
pym/portage/const.py | 2 +
pym/portage/dbapi/__init__.py | 10 +-
pym/portage/dbapi/bintree.py | 843 +++++++++++----------
pym/portage/dbapi/vartree.py | 8 +-
pym/portage/dbapi/virtual.py | 113 ++-
pym/portage/dep/__init__.py | 35 +-
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
.../package/ebuild/_config/KeywordsManager.py | 3 +-
.../package/ebuild/_config/LocationsManager.py | 8 +-
pym/portage/package/ebuild/_config/MaskManager.py | 21 +-
pym/portage/package/ebuild/_config/UseManager.py | 14 +-
pym/portage/package/ebuild/config.py | 15 +-
pym/portage/repository/config.py | 2 +-
pym/portage/tests/dep/test_isvalidatom.py | 8 +-
pym/portage/tests/resolver/ResolverPlayground.py | 25 +-
.../resolver/binpkg_multi_instance/__init__.py | 2 +
.../resolver/binpkg_multi_instance/__test__.py | 2 +
.../test_build_id_profile_format.py | 134 ++++
.../binpkg_multi_instance/test_rebuilt_binaries.py | 101 +++
pym/portage/util/__init__.py | 13 +-
pym/portage/versions.py | 28 +-
36 files changed, 1144 insertions(+), 529 deletions(-)
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
--
2.0.5
^ permalink raw reply [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 1/7] binpkg-multi-instance 1 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-19 22:26 ` [gentoo-portage-dev] [PATCH 1/7 v2] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 2/7] binpkg-multi-instance 2 " Zac Medico
` (6 subsequent siblings)
7 siblings, 1 reply; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Extend the _pkg_str class with build_id, build_time, file_size, and
mtime attributes. These will be used to distinguish binary package
instances that have the same cpv. Package sorting accounts for
build_time, which will be used to prefer newer builds over older builds
when their versions are identical.
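A quick standalone illustration of the intended ordering
(portage.vercmp is stubbed out and the packages are plain dictionaries
rather than _pkg_str instances):

from functools import cmp_to_key

def vercmp_stub(v1, v2):
    # Stand-in for portage.vercmp; here the two versions compare equal.
    return 0

def cmp_builds(p1, p2):
    # Version first; identical versions fall back to BUILD_TIME.
    result = vercmp_stub(p1["version"], p2["version"])
    if result == 0:
        result = ((p1["build_time"] > p2["build_time"]) -
            (p1["build_time"] < p2["build_time"]))
    return result

builds = [
    {"cpv": "app-misc/foo-1", "version": "1", "build_time": 200},
    {"cpv": "app-misc/foo-1", "version": "1", "build_time": 100},
]
builds.sort(key=cmp_to_key(cmp_builds))
print([b["build_time"] for b in builds])  # [100, 200] -- newest build sorts last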
---
pym/_emerge/Package.py | 51 +++++++++++++++++++++++++++++-------------
pym/_emerge/resolver/output.py | 21 +++++++++++++----
pym/portage/dbapi/__init__.py | 10 +++++++--
pym/portage/dbapi/vartree.py | 8 +++++--
pym/portage/versions.py | 28 +++++++++++++++++++++--
5 files changed, 92 insertions(+), 26 deletions(-)
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index e8a13cb..975335d 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -41,12 +41,12 @@ class Package(Task):
"_validated_atoms", "_visible")
metadata_keys = [
- "BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "EAPI",
- "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROVIDE", "RDEPEND",
- "repository", "PROPERTIES", "RESTRICT", "SLOT", "USE",
- "_mtime_", "DEFINED_PHASES", "REQUIRED_USE", "PROVIDES",
- "REQUIRES"]
+ "BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRED_USE",
+ "PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
+ "SLOT", "USE", "_mtime_"]
_dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
_buildtime_keys = ('DEPEND', 'HDEPEND')
@@ -114,13 +114,14 @@ class Package(Task):
return self._metadata["EAPI"]
@property
+ def build_id(self):
+ return self.cpv.build_id
+
+ @property
def build_time(self):
if not self.built:
raise AttributeError('build_time')
- try:
- return long(self._metadata['BUILD_TIME'])
- except (KeyError, ValueError):
- return 0
+ return self.cpv.build_time
@property
def defined_phases(self):
@@ -509,9 +510,15 @@ class Package(Task):
else:
cpv_color = "PKG_NOMERGE"
+ build_id_str = ""
+ if isinstance(self.cpv.build_id, long) and self.cpv.build_id > 0:
+ build_id_str = "-%s" % self.cpv.build_id
+
s = "(%s, %s" \
- % (portage.output.colorize(cpv_color, self.cpv + _slot_separator + \
- self.slot + "/" + self.sub_slot + _repo_separator + self.repo) , self.type_name)
+ % (portage.output.colorize(cpv_color, self.cpv +
+ build_id_str + _slot_separator + self.slot + "/" +
+ self.sub_slot + _repo_separator + self.repo),
+ self.type_name)
if self.type_name == "installed":
if self.root_config.settings['ROOT'] != "/":
@@ -755,29 +762,41 @@ class Package(Task):
def __lt__(self, other):
if other.cp != self.cp:
return self.cp < other.cp
- if portage.vercmp(self.version, other.version) < 0:
+ result = portage.vercmp(self.version, other.version)
+ if result < 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time < other.build_time
return False
def __le__(self, other):
if other.cp != self.cp:
return self.cp <= other.cp
- if portage.vercmp(self.version, other.version) <= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result <= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time <= other.build_time
return False
def __gt__(self, other):
if other.cp != self.cp:
return self.cp > other.cp
- if portage.vercmp(self.version, other.version) > 0:
+ result = portage.vercmp(self.version, other.version)
+ if result > 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time > other.build_time
return False
def __ge__(self, other):
if other.cp != self.cp:
return self.cp >= other.cp
- if portage.vercmp(self.version, other.version) >= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result >= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time >= other.build_time
return False
def with_use(self, use):
diff --git a/pym/_emerge/resolver/output.py b/pym/_emerge/resolver/output.py
index 7df0302..400617d 100644
--- a/pym/_emerge/resolver/output.py
+++ b/pym/_emerge/resolver/output.py
@@ -424,6 +424,18 @@ class Display(object):
pkg_str += _repo_separator + pkg.repo
return pkg_str
+ def _append_build_id(self, pkg_str, pkg, pkg_info):
+ """Potentially appends repository to package string.
+
+ @param pkg_str: string
+ @param pkg: _emerge.Package.Package instance
+ @param pkg_info: dictionary
+ @rtype string
+ """
+ if pkg.type_name == "binary" and pkg.cpv.build_id is not None:
+ pkg_str += "-%s" % pkg.cpv.build_id
+ return pkg_str
+
def _set_non_root_columns(self, pkg, pkg_info):
"""sets the indent level and formats the output
@@ -431,7 +443,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype string
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -470,7 +482,7 @@ class Display(object):
@rtype string
Modifies self.verboseadd
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -507,7 +519,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype the updated addl
"""
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
@@ -868,7 +880,8 @@ class Display(object):
if self.conf.columns:
myprint = self._set_non_root_columns(pkg, pkg_info)
else:
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(
+ pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
diff --git a/pym/portage/dbapi/__init__.py b/pym/portage/dbapi/__init__.py
index 34dfaa7..044faec 100644
--- a/pym/portage/dbapi/__init__.py
+++ b/pym/portage/dbapi/__init__.py
@@ -31,7 +31,8 @@ class dbapi(object):
_use_mutable = False
_known_keys = frozenset(x for x in auxdbkeys
if not x.startswith("UNUSED_0"))
- _pkg_str_aux_keys = ("EAPI", "KEYWORDS", "SLOT", "repository")
+ _pkg_str_aux_keys = ("BUILD_TIME", "EAPI", "BUILD_ID",
+ "KEYWORDS", "SLOT", "repository")
def __init__(self):
pass
@@ -57,7 +58,12 @@ class dbapi(object):
@staticmethod
def _cmp_cpv(cpv1, cpv2):
- return vercmp(cpv1.version, cpv2.version)
+ result = vercmp(cpv1.version, cpv2.version)
+ if (result == 0 and cpv1.build_time is not None and
+ cpv2.build_time is not None):
+ result = ((cpv1.build_time > cpv2.build_time) -
+ (cpv1.build_time < cpv2.build_time))
+ return result
@staticmethod
def _cpv_sort_ascending(cpv_list):
diff --git a/pym/portage/dbapi/vartree.py b/pym/portage/dbapi/vartree.py
index cf31c8e..277c2f1 100644
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@ -173,7 +173,8 @@ class vardbapi(dbapi):
self.vartree = vartree
self._aux_cache_keys = set(
["BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "DESCRIPTION",
- "EAPI", "HDEPEND", "HOMEPAGE", "IUSE", "KEYWORDS",
+ "EAPI", "HDEPEND", "HOMEPAGE",
+ "BUILD_ID", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE", "RDEPEND",
"repository", "RESTRICT" , "SLOT", "USE", "DEFINED_PHASES",
"PROVIDES", "REQUIRES"
@@ -425,7 +426,10 @@ class vardbapi(dbapi):
continue
if len(mysplit) > 1:
if ps[0] == mysplit[1]:
- returnme.append(_pkg_str(mysplit[0]+"/"+x))
+ cpv = "%s/%s" % (mysplit[0], x)
+ metadata = dict(zip(self._aux_cache_keys,
+ self.aux_get(cpv, self._aux_cache_keys)))
+ returnme.append(_pkg_str(cpv, metadata=metadata))
self._cpv_sort_ascending(returnme)
if use_cache:
self.cpcache[mycp] = [mystat, returnme[:]]
diff --git a/pym/portage/versions.py b/pym/portage/versions.py
index 2c9fe5b..1ca9a36 100644
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@ -18,6 +18,7 @@ if sys.hexversion < 0x3000000:
_unicode = unicode
else:
_unicode = str
+ long = int
import portage
portage.proxy.lazyimport.lazyimport(globals(),
@@ -361,11 +362,13 @@ class _pkg_str(_unicode):
"""
def __new__(cls, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
return _unicode.__new__(cls, cpv)
def __init__(self, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
if not isinstance(cpv, _unicode):
# Avoid TypeError from _unicode.__init__ with PyPy.
cpv = _unicode_decode(cpv)
@@ -375,10 +378,19 @@ class _pkg_str(_unicode):
slot = metadata.get('SLOT', slot)
repo = metadata.get('repository', repo)
eapi = metadata.get('EAPI', eapi)
+ build_time = metadata.get('BUILD_TIME', build_time)
+ file_size = metadata.get('SIZE', file_size)
+ build_id = metadata.get('BUILD_ID', build_id)
+ mtime = metadata.get('_mtime_', mtime)
if settings is not None:
self.__dict__['_settings'] = settings
if eapi is not None:
self.__dict__['eapi'] = eapi
+
+ self.__dict__['build_time'] = self._long(build_time, 0)
+ self.__dict__['file_size'] = self._long(file_size, None)
+ self.__dict__['build_id'] = self._long(build_id, None)
+ self.__dict__['mtime'] = self._long(mtime, None)
self.__dict__['cpv_split'] = catpkgsplit(cpv, eapi=eapi)
if self.cpv_split is None:
raise InvalidData(cpv)
@@ -419,6 +431,18 @@ class _pkg_str(_unicode):
raise AttributeError("_pkg_str instances are immutable",
self.__class__, name, value)
+ @staticmethod
+ def _long(var, default):
+ if var is not None:
+ try:
+ var = long(var)
+ except ValueError:
+ if var:
+ var = -1
+ else:
+ var = default
+ return var
+
@property
def stable(self):
try:
--
2.0.5
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 1/7 v2] binpkg-multi-instance 1 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 1/7] binpkg-multi-instance 1 of 7 Zac Medico
@ 2015-02-19 22:26 ` Zac Medico
0 siblings, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-19 22:26 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Extend the _pkg_str class with build_id, build_time, file_size, and
mtime attributes. These will be used to distinguish binary package
instances that have the same cpv. Package sorting accounts for
build_time, which will be used to prefer newer builds over older builds
when their versions are identical.
---
[PATCH 1/7 v2] updates pkg_desc_index._pkg_node to have a build_time
attribute, which fixes an AttributeError raised from dbapi._cmp_cpv
for some emerge search actions.
pym/_emerge/Package.py | 51 +++++++++++++++++++++----------
pym/_emerge/resolver/output.py | 21 ++++++++++---
pym/portage/cache/index/pkg_desc_index.py | 1 +
pym/portage/dbapi/__init__.py | 10 ++++--
pym/portage/dbapi/vartree.py | 8 +++--
pym/portage/versions.py | 28 +++++++++++++++--
6 files changed, 93 insertions(+), 26 deletions(-)
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index e8a13cb..975335d 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -41,12 +41,12 @@ class Package(Task):
"_validated_atoms", "_visible")
metadata_keys = [
- "BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "EAPI",
- "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROVIDE", "RDEPEND",
- "repository", "PROPERTIES", "RESTRICT", "SLOT", "USE",
- "_mtime_", "DEFINED_PHASES", "REQUIRED_USE", "PROVIDES",
- "REQUIRES"]
+ "BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRED_USE",
+ "PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
+ "SLOT", "USE", "_mtime_"]
_dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
_buildtime_keys = ('DEPEND', 'HDEPEND')
@@ -114,13 +114,14 @@ class Package(Task):
return self._metadata["EAPI"]
@property
+ def build_id(self):
+ return self.cpv.build_id
+
+ @property
def build_time(self):
if not self.built:
raise AttributeError('build_time')
- try:
- return long(self._metadata['BUILD_TIME'])
- except (KeyError, ValueError):
- return 0
+ return self.cpv.build_time
@property
def defined_phases(self):
@@ -509,9 +510,15 @@ class Package(Task):
else:
cpv_color = "PKG_NOMERGE"
+ build_id_str = ""
+ if isinstance(self.cpv.build_id, long) and self.cpv.build_id > 0:
+ build_id_str = "-%s" % self.cpv.build_id
+
s = "(%s, %s" \
- % (portage.output.colorize(cpv_color, self.cpv + _slot_separator + \
- self.slot + "/" + self.sub_slot + _repo_separator + self.repo) , self.type_name)
+ % (portage.output.colorize(cpv_color, self.cpv +
+ build_id_str + _slot_separator + self.slot + "/" +
+ self.sub_slot + _repo_separator + self.repo),
+ self.type_name)
if self.type_name == "installed":
if self.root_config.settings['ROOT'] != "/":
@@ -755,29 +762,41 @@ class Package(Task):
def __lt__(self, other):
if other.cp != self.cp:
return self.cp < other.cp
- if portage.vercmp(self.version, other.version) < 0:
+ result = portage.vercmp(self.version, other.version)
+ if result < 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time < other.build_time
return False
def __le__(self, other):
if other.cp != self.cp:
return self.cp <= other.cp
- if portage.vercmp(self.version, other.version) <= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result <= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time <= other.build_time
return False
def __gt__(self, other):
if other.cp != self.cp:
return self.cp > other.cp
- if portage.vercmp(self.version, other.version) > 0:
+ result = portage.vercmp(self.version, other.version)
+ if result > 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time > other.build_time
return False
def __ge__(self, other):
if other.cp != self.cp:
return self.cp >= other.cp
- if portage.vercmp(self.version, other.version) >= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result >= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time >= other.build_time
return False
def with_use(self, use):
diff --git a/pym/_emerge/resolver/output.py b/pym/_emerge/resolver/output.py
index 7df0302..400617d 100644
--- a/pym/_emerge/resolver/output.py
+++ b/pym/_emerge/resolver/output.py
@@ -424,6 +424,18 @@ class Display(object):
pkg_str += _repo_separator + pkg.repo
return pkg_str
+ def _append_build_id(self, pkg_str, pkg, pkg_info):
+ """Potentially appends repository to package string.
+
+ @param pkg_str: string
+ @param pkg: _emerge.Package.Package instance
+ @param pkg_info: dictionary
+ @rtype string
+ """
+ if pkg.type_name == "binary" and pkg.cpv.build_id is not None:
+ pkg_str += "-%s" % pkg.cpv.build_id
+ return pkg_str
+
def _set_non_root_columns(self, pkg, pkg_info):
"""sets the indent level and formats the output
@@ -431,7 +443,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype string
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -470,7 +482,7 @@ class Display(object):
@rtype string
Modifies self.verboseadd
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -507,7 +519,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype the updated addl
"""
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
@@ -868,7 +880,8 @@ class Display(object):
if self.conf.columns:
myprint = self._set_non_root_columns(pkg, pkg_info)
else:
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(
+ pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
diff --git a/pym/portage/cache/index/pkg_desc_index.py b/pym/portage/cache/index/pkg_desc_index.py
index a2e45da..dbcbb83 100644
--- a/pym/portage/cache/index/pkg_desc_index.py
+++ b/pym/portage/cache/index/pkg_desc_index.py
@@ -26,6 +26,7 @@ class pkg_node(_unicode):
self.__dict__['cp'] = cp
self.__dict__['repo'] = repo
self.__dict__['version'] = version
+ self.__dict__['build_time'] = None
def __new__(cls, cp, version, repo=None):
return _unicode.__new__(cls, cp + "-" + version)
diff --git a/pym/portage/dbapi/__init__.py b/pym/portage/dbapi/__init__.py
index 34dfaa7..044faec 100644
--- a/pym/portage/dbapi/__init__.py
+++ b/pym/portage/dbapi/__init__.py
@@ -31,7 +31,8 @@ class dbapi(object):
_use_mutable = False
_known_keys = frozenset(x for x in auxdbkeys
if not x.startswith("UNUSED_0"))
- _pkg_str_aux_keys = ("EAPI", "KEYWORDS", "SLOT", "repository")
+ _pkg_str_aux_keys = ("BUILD_TIME", "EAPI", "BUILD_ID",
+ "KEYWORDS", "SLOT", "repository")
def __init__(self):
pass
@@ -57,7 +58,12 @@ class dbapi(object):
@staticmethod
def _cmp_cpv(cpv1, cpv2):
- return vercmp(cpv1.version, cpv2.version)
+ result = vercmp(cpv1.version, cpv2.version)
+ if (result == 0 and cpv1.build_time is not None and
+ cpv2.build_time is not None):
+ result = ((cpv1.build_time > cpv2.build_time) -
+ (cpv1.build_time < cpv2.build_time))
+ return result
@staticmethod
def _cpv_sort_ascending(cpv_list):
diff --git a/pym/portage/dbapi/vartree.py b/pym/portage/dbapi/vartree.py
index cf31c8e..277c2f1 100644
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@ -173,7 +173,8 @@ class vardbapi(dbapi):
self.vartree = vartree
self._aux_cache_keys = set(
["BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "DESCRIPTION",
- "EAPI", "HDEPEND", "HOMEPAGE", "IUSE", "KEYWORDS",
+ "EAPI", "HDEPEND", "HOMEPAGE",
+ "BUILD_ID", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE", "RDEPEND",
"repository", "RESTRICT" , "SLOT", "USE", "DEFINED_PHASES",
"PROVIDES", "REQUIRES"
@@ -425,7 +426,10 @@ class vardbapi(dbapi):
continue
if len(mysplit) > 1:
if ps[0] == mysplit[1]:
- returnme.append(_pkg_str(mysplit[0]+"/"+x))
+ cpv = "%s/%s" % (mysplit[0], x)
+ metadata = dict(zip(self._aux_cache_keys,
+ self.aux_get(cpv, self._aux_cache_keys)))
+ returnme.append(_pkg_str(cpv, metadata=metadata))
self._cpv_sort_ascending(returnme)
if use_cache:
self.cpcache[mycp] = [mystat, returnme[:]]
diff --git a/pym/portage/versions.py b/pym/portage/versions.py
index 2c9fe5b..1ca9a36 100644
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@ -18,6 +18,7 @@ if sys.hexversion < 0x3000000:
_unicode = unicode
else:
_unicode = str
+ long = int
import portage
portage.proxy.lazyimport.lazyimport(globals(),
@@ -361,11 +362,13 @@ class _pkg_str(_unicode):
"""
def __new__(cls, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
return _unicode.__new__(cls, cpv)
def __init__(self, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
if not isinstance(cpv, _unicode):
# Avoid TypeError from _unicode.__init__ with PyPy.
cpv = _unicode_decode(cpv)
@@ -375,10 +378,19 @@ class _pkg_str(_unicode):
slot = metadata.get('SLOT', slot)
repo = metadata.get('repository', repo)
eapi = metadata.get('EAPI', eapi)
+ build_time = metadata.get('BUILD_TIME', build_time)
+ file_size = metadata.get('SIZE', file_size)
+ build_id = metadata.get('BUILD_ID', build_id)
+ mtime = metadata.get('_mtime_', mtime)
if settings is not None:
self.__dict__['_settings'] = settings
if eapi is not None:
self.__dict__['eapi'] = eapi
+
+ self.__dict__['build_time'] = self._long(build_time, 0)
+ self.__dict__['file_size'] = self._long(file_size, None)
+ self.__dict__['build_id'] = self._long(build_id, None)
+ self.__dict__['mtime'] = self._long(mtime, None)
self.__dict__['cpv_split'] = catpkgsplit(cpv, eapi=eapi)
if self.cpv_split is None:
raise InvalidData(cpv)
@@ -419,6 +431,18 @@ class _pkg_str(_unicode):
raise AttributeError("_pkg_str instances are immutable",
self.__class__, name, value)
+ @staticmethod
+ def _long(var, default):
+ if var is not None:
+ try:
+ var = long(var)
+ except ValueError:
+ if var:
+ var = -1
+ else:
+ var = default
+ return var
+
@property
def stable(self):
try:
--
2.0.5
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 2/7] binpkg-multi-instance 2 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 1/7] binpkg-multi-instance 1 of 7 Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 3/7] binpkg-multi-instance 3 " Zac Medico
` (5 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Add multi-instance support to fakedbapi, which allows multiple instances
with the same cpv to be stored simultaneously, as long as they are
distinguishable using the new _pkg_str build_id, build_time, file_size,
and mtime attributes. This will be used to add multi-instance support to
the bindbapi class (which inherits from fakedbapi).
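Conceptually, the change amounts to keying injected packages by a tuple
instead of the bare cpv string. A much simplified standalone sketch of
that idea (not the patched class):

class TinyFakeDb(object):
    # Minimal illustration of keying injected packages by an instance tuple.
    def __init__(self, multi_instance=False):
        self.multi_instance = multi_instance
        self.cpvdict = {}

    def _instance_key(self, cpv, build_id=None, file_size=None,
            build_time=None, mtime=None):
        if self.multi_instance:
            return (cpv, build_id, file_size, build_time, mtime)
        return cpv

    def cpv_inject(self, cpv, metadata, **attrs):
        self.cpvdict[self._instance_key(cpv, **attrs)] = metadata

db = TinyFakeDb(multi_instance=True)
db.cpv_inject("app-misc/foo-1", {"USE": "bar"}, build_id=1, build_time=100)
db.cpv_inject("app-misc/foo-1", {"USE": ""}, build_id=2, build_time=200)
print(len(db.cpvdict))  # 2 -- both instances of the same cpv coexist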
---
pym/portage/dbapi/virtual.py | 113 +++++++++++++++++++++++++++++++++----------
1 file changed, 87 insertions(+), 26 deletions(-)
diff --git a/pym/portage/dbapi/virtual.py b/pym/portage/dbapi/virtual.py
index ba9745c..3b7d10e 100644
--- a/pym/portage/dbapi/virtual.py
+++ b/pym/portage/dbapi/virtual.py
@@ -11,12 +11,17 @@ class fakedbapi(dbapi):
"""A fake dbapi that allows consumers to inject/remove packages to/from it
portage.settings is required to maintain the dbAPI.
"""
- def __init__(self, settings=None, exclusive_slots=True):
+ def __init__(self, settings=None, exclusive_slots=True,
+ multi_instance=False):
"""
@param exclusive_slots: When True, injecting a package with SLOT
metadata causes an existing package in the same slot to be
automatically removed (default is True).
@type exclusive_slots: Boolean
+ @param multi_instance: When True, multiple instances with the
+ same cpv may be stored simultaneously, as long as they are
+ distinguishable (default is False).
+ @type multi_instance: Boolean
"""
self._exclusive_slots = exclusive_slots
self.cpvdict = {}
@@ -25,6 +30,56 @@ class fakedbapi(dbapi):
from portage import settings
self.settings = settings
self._match_cache = {}
+ self._set_multi_instance(multi_instance)
+
+ def _set_multi_instance(self, multi_instance):
+ """
+ Enable or disable multi_instance mode. This should before any
+ packages are injected, so that all packages are indexed with
+ the same implementation of self._instance_key.
+ """
+ if self.cpvdict:
+ raise AssertionError("_set_multi_instance called after "
+ "packages have already been added")
+ self._multi_instance = multi_instance
+ if multi_instance:
+ self._instance_key = self._instance_key_multi_instance
+ else:
+ self._instance_key = self._instance_key_cpv
+
+ def _instance_key_cpv(self, cpv, support_string=False):
+ return cpv
+
+ def _instance_key_multi_instance(self, cpv, support_string=False):
+ try:
+ return (cpv, cpv.build_id, cpv.file_size, cpv.build_time,
+ cpv.mtime)
+ except AttributeError:
+ if not support_string:
+ raise
+
+ # Fallback for interfaces such as aux_get where API consumers
+ # may pass in a plain string.
+ latest = None
+ for pkg in self.cp_list(cpv_getkey(cpv)):
+ if pkg == cpv and (
+ latest is None or
+ latest.build_time < pkg.build_time):
+ latest = pkg
+
+ if latest is not None:
+ return (latest, latest.build_id, latest.file_size,
+ latest.build_time, latest.mtime)
+
+ raise KeyError(cpv)
+
+ def clear(self):
+ """
+ Remove all packages.
+ """
+ self._clear_cache()
+ self.cpvdict.clear()
+ self.cpdict.clear()
def _clear_cache(self):
if self._categories is not None:
@@ -43,7 +98,8 @@ class fakedbapi(dbapi):
return result[:]
def cpv_exists(self, mycpv, myrepo=None):
- return mycpv in self.cpvdict
+ return self._instance_key(mycpv,
+ support_string=True) in self.cpvdict
def cp_list(self, mycp, use_cache=1, myrepo=None):
# NOTE: Cache can be safely shared with the match cache, since the
@@ -63,7 +119,10 @@ class fakedbapi(dbapi):
return list(self.cpdict)
def cpv_all(self):
- return list(self.cpvdict)
+ if self._multi_instance:
+ return [x[0] for x in self.cpvdict]
+ else:
+ return list(self.cpvdict)
def cpv_inject(self, mycpv, metadata=None):
"""Adds a cpv to the list of available packages. See the
@@ -99,13 +158,14 @@ class fakedbapi(dbapi):
except AttributeError:
pass
- self.cpvdict[mycpv] = metadata
+ instance_key = self._instance_key(mycpv)
+ self.cpvdict[instance_key] = metadata
if not self._exclusive_slots:
myslot = None
if myslot and mycp in self.cpdict:
# If necessary, remove another package in the same SLOT.
for cpv in self.cpdict[mycp]:
- if mycpv != cpv:
+ if instance_key != self._instance_key(cpv):
try:
other_slot = cpv.slot
except AttributeError:
@@ -115,40 +175,41 @@ class fakedbapi(dbapi):
self.cpv_remove(cpv)
break
- cp_list = self.cpdict.get(mycp)
- if cp_list is None:
- cp_list = []
- self.cpdict[mycp] = cp_list
- try:
- cp_list.remove(mycpv)
- except ValueError:
- pass
+ cp_list = self.cpdict.get(mycp, [])
+ cp_list = [x for x in cp_list
+ if self._instance_key(x) != instance_key]
cp_list.append(mycpv)
+ self.cpdict[mycp] = cp_list
def cpv_remove(self,mycpv):
"""Removes a cpv from the list of available packages."""
self._clear_cache()
mycp = cpv_getkey(mycpv)
- if mycpv in self.cpvdict:
- del self.cpvdict[mycpv]
- if mycp not in self.cpdict:
- return
- while mycpv in self.cpdict[mycp]:
- del self.cpdict[mycp][self.cpdict[mycp].index(mycpv)]
- if not len(self.cpdict[mycp]):
- del self.cpdict[mycp]
+ instance_key = self._instance_key(mycpv)
+ self.cpvdict.pop(instance_key, None)
+ cp_list = self.cpdict.get(mycp)
+ if cp_list is not None:
+ cp_list = [x for x in cp_list
+ if self._instance_key(x) != instance_key]
+ if cp_list:
+ self.cpdict[mycp] = cp_list
+ else:
+ del self.cpdict[mycp]
def aux_get(self, mycpv, wants, myrepo=None):
- if not self.cpv_exists(mycpv):
+ metadata = self.cpvdict.get(
+ self._instance_key(mycpv, support_string=True))
+ if metadata is None:
raise KeyError(mycpv)
- metadata = self.cpvdict[mycpv]
- if not metadata:
- return ["" for x in wants]
return [metadata.get(x, "") for x in wants]
def aux_update(self, cpv, values):
self._clear_cache()
- self.cpvdict[cpv].update(values)
+ metadata = self.cpvdict.get(
+ self._instance_key(cpv, support_string=True))
+ if metadata is None:
+ raise KeyError(cpv)
+ metadata.update(values)
class testdbapi(object):
"""A dbapi instance with completely fake functions to get by hitting disk
--
2.0.5
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 3/7] binpkg-multi-instance 3 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 1/7] binpkg-multi-instance 1 of 7 Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 2/7] binpkg-multi-instance 2 " Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-27 21:58 ` [gentoo-portage-dev] [PATCH 3/7 v2] " Zac Medico
2015-02-27 23:36 ` [gentoo-portage-dev] [PATCH 3/7 v3] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 4/7] binpkg-multi-instance 4 " Zac Medico
` (4 subsequent siblings)
7 siblings, 2 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
FEATURES=binpkg-multi-instance causes an integer build-id to be
associated with each binary package instance. Inclusion of the build-id
in the file name of the binary package file makes it possible to store
an arbitrary number of binary packages built from the same ebuild.
Having multiple instances is useful for a number of purposes, such as
retaining builds that were built with different USE flags or linked
against different versions of libraries. The location of any particular
package within PKGDIR can be expressed as follows:
${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
The build-id starts at 1 for the first build of a particular ebuild,
and is incremented by 1 for each new build. It is possible to share a
writable PKGDIR over NFS, and locking ensures that each package added
to PKGDIR will have a unique build-id. It is not necessary to migrate
an existing PKGDIR to the new layout, since portage is capable of
working with a mixed PKGDIR layout, where packages using the old layout
are allowed to remain in place.
The new PKGDIR layout is backward-compatible with binhost clients
running older portage, since the file format is identical, the
per-package PATH attribute in the 'Packages' index directs them to
download the file from the correct URI, and they automatically use
BUILD_TIME metadata to select the latest builds.
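For instance, the URI composition that old and new clients alike end up
doing looks roughly like this (a sketch; the binhost URL and index
entry are made up):

def binpkg_uri(base_uri, entry):
    # Prefer the per-package PATH attribute; fall back to the
    # traditional ${CPV}.tbz2 name when it is absent.
    rel_uri = entry.get("PATH") or (entry["CPV"] + ".tbz2")
    return base_uri.rstrip("/") + "/" + rel_uri.lstrip("/")

entry = {"CPV": "app-misc/foo-1", "PATH": "app-misc/foo/foo-1-2.xpak"}
print(binpkg_uri("http://binhost.example/packages", entry))
# http://binhost.example/packages/app-misc/foo/foo-1-2.xpak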
There is currently no automated way to prune old builds from PKGDIR,
although it is possible to remove packages manually, and then run
'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
---
bin/quickpkg | 1 -
man/make.conf.5 | 27 +
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/Package.py | 16 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/clear_caches.py | 1 -
pym/portage/const.py | 2 +
pym/portage/dbapi/bintree.py | 684 +++++++++++++++++---------
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
13 files changed, 601 insertions(+), 280 deletions(-)
diff --git a/bin/quickpkg b/bin/quickpkg
index 2c69a69..8b71c3e 100755
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@ -63,7 +63,6 @@ def quickpkg_atom(options, infos, arg, eout):
pkgs_for_arg = 0
for cpv in matches:
excluded_config_files = []
- bintree.prevent_collision(cpv)
dblnk = vardb._dblink(cpv)
have_lock = False
diff --git a/man/make.conf.5 b/man/make.conf.5
index 84b7191..6ead61b 100644
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@ -256,6 +256,33 @@ has a \fB\-\-force\fR option that can be used to force regeneration of digests.
Keep logs from successful binary package merges. This is relevant only when
\fBPORT_LOGDIR\fR is set.
.TP
+.B binpkg\-multi\-instance
+Enable support for multiple binary package instances per ebuild.
+Having multiple instances is useful for a number of purposes, such as
+retaining builds that were built with different USE flags or linked
+against different versions of libraries. The location of any particular
+package within PKGDIR can be expressed as follows:
+
+ ${PKGDIR}/${CATEGORY}/${PN}/${PF}\-${BUILD_ID}.xpak
+
+The build\-id starts at 1 for the first build of a particular ebuild,
+and is incremented by 1 for each new build. It is possible to share a
+writable PKGDIR over NFS, and locking ensures that each package added
+to PKGDIR will have a unique build\-id. It is not necessary to migrate
+an existing PKGDIR to the new layout, since portage is capable of
+working with a mixed PKGDIR layout, where packages using the old layout
+are allowed to remain in place.
+
+The new PKGDIR layout is backward\-compatible with binhost clients
+running older portage, since the file format is identical, the
+per\-package PATH attribute in the 'Packages' index directs them to
+download the file from the correct URI, and they automatically use
+BUILD_TIME metadata to select the latest builds.
+
+There is currently no automated way to prune old builds from PKGDIR,
+although it is possible to remove packages manually, and then run
+\(aqemaint \-\-fix binhost' to update the ${PKGDIR}/Packages index.
+.TP
.B buildpkg
Binary packages will be created for all packages that are merged. Also see
\fBquickpkg\fR(1) and \fBemerge\fR(1) \fB\-\-buildpkg\fR and
diff --git a/pym/_emerge/Binpkg.py b/pym/_emerge/Binpkg.py
index ded6dfd..7b7ae17 100644
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@ -121,16 +121,11 @@ class Binpkg(CompositeTask):
fetcher = BinpkgFetcher(background=self.background,
logfile=self.settings.get("PORTAGE_LOG_FILE"), pkg=self.pkg,
pretend=self.opts.pretend, scheduler=self.scheduler)
- pkg_path = fetcher.pkg_path
- self._pkg_path = pkg_path
- # This gives bashrc users an opportunity to do various things
- # such as remove binary packages after they're installed.
- self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
if self.opts.getbinpkg and self._bintree.isremote(pkg.cpv):
-
msg = " --- (%s of %s) Fetching Binary (%s::%s)" %\
- (pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg_path)
+ (pkg_count.curval, pkg_count.maxval, pkg.cpv,
+ fetcher.pkg_path)
short_msg = "emerge: (%s of %s) %s Fetch" % \
(pkg_count.curval, pkg_count.maxval, pkg.cpv)
self.logger.log(msg, short_msg=short_msg)
@@ -149,7 +144,7 @@ class Binpkg(CompositeTask):
# The fetcher only has a returncode when
# --getbinpkg is enabled.
if fetcher.returncode is not None:
- self._fetched_pkg = True
+ self._fetched_pkg = fetcher.pkg_path
if self._default_exit(fetcher) != os.EX_OK:
self._unlock_builddir()
self.wait()
@@ -163,9 +158,15 @@ class Binpkg(CompositeTask):
verifier = None
if self._verify:
+ if self._fetched_pkg:
+ path = self._fetched_pkg
+ else:
+ path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
logfile = self.settings.get("PORTAGE_LOG_FILE")
verifier = BinpkgVerifier(background=self.background,
- logfile=logfile, pkg=self.pkg, scheduler=self.scheduler)
+ logfile=logfile, pkg=self.pkg, scheduler=self.scheduler,
+ _pkg_path=path)
self._start_task(verifier, self._verifier_exit)
return
@@ -181,10 +182,20 @@ class Binpkg(CompositeTask):
logger = self.logger
pkg = self.pkg
pkg_count = self.pkg_count
- pkg_path = self._pkg_path
if self._fetched_pkg:
- self._bintree.inject(pkg.cpv, filename=pkg_path)
+ pkg_path = self._bintree.getname(
+ self._bintree.inject(pkg.cpv,
+ filename=self._fetched_pkg),
+ allocate_new=False)
+ else:
+ pkg_path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
+
+ # This gives bashrc users an opportunity to do various things
+ # such as remove binary packages after they're installed.
+ self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
+ self._pkg_path = pkg_path
logfile = self.settings.get("PORTAGE_LOG_FILE")
if logfile is not None and os.path.isfile(logfile):
diff --git a/pym/_emerge/BinpkgFetcher.py b/pym/_emerge/BinpkgFetcher.py
index 543881e..a7f2d44 100644
--- a/pym/_emerge/BinpkgFetcher.py
+++ b/pym/_emerge/BinpkgFetcher.py
@@ -24,7 +24,8 @@ class BinpkgFetcher(SpawnProcess):
def __init__(self, **kwargs):
SpawnProcess.__init__(self, **kwargs)
pkg = self.pkg
- self.pkg_path = pkg.root_config.trees["bintree"].getname(pkg.cpv)
+ self.pkg_path = pkg.root_config.trees["bintree"].getname(
+ pkg.cpv) + ".partial"
def _start(self):
@@ -51,10 +52,12 @@ class BinpkgFetcher(SpawnProcess):
# urljoin doesn't work correctly with
# unrecognized protocols like sftp
if bintree._remote_has_index:
- rel_uri = bintree._remotepkgs[pkg.cpv].get("PATH")
+ instance_key = bintree.dbapi._instance_key(pkg.cpv)
+ rel_uri = bintree._remotepkgs[instance_key].get("PATH")
if not rel_uri:
rel_uri = pkg.cpv + ".tbz2"
- remote_base_uri = bintree._remotepkgs[pkg.cpv]["BASE_URI"]
+ remote_base_uri = bintree._remotepkgs[
+ instance_key]["BASE_URI"]
uri = remote_base_uri.rstrip("/") + "/" + rel_uri.lstrip("/")
else:
uri = settings["PORTAGE_BINHOST"].rstrip("/") + \
@@ -128,7 +131,9 @@ class BinpkgFetcher(SpawnProcess):
# the fetcher didn't already do it automatically.
bintree = self.pkg.root_config.trees["bintree"]
if bintree._remote_has_index:
- remote_mtime = bintree._remotepkgs[self.pkg.cpv].get("MTIME")
+ remote_mtime = bintree._remotepkgs[
+ bintree.dbapi._instance_key(
+ self.pkg.cpv)].get("MTIME")
if remote_mtime is not None:
try:
remote_mtime = long(remote_mtime)
diff --git a/pym/_emerge/BinpkgVerifier.py b/pym/_emerge/BinpkgVerifier.py
index 2c69792..7a6d15e 100644
--- a/pym/_emerge/BinpkgVerifier.py
+++ b/pym/_emerge/BinpkgVerifier.py
@@ -33,7 +33,6 @@ class BinpkgVerifier(CompositeTask):
digests = _apply_hash_filter(digests, hash_filter)
self._digests = digests
- self._pkg_path = bintree.getname(self.pkg.cpv)
try:
size = os.stat(self._pkg_path).st_size
@@ -90,8 +89,11 @@ class BinpkgVerifier(CompositeTask):
if portage.output.havecolor:
portage.output.havecolor = not self.background
+ path = self._pkg_path
+ if path.endswith(".partial"):
+ path = path[:-len(".partial")]
eout = EOutput()
- eout.ebegin("%s %s ;-)" % (os.path.basename(self._pkg_path),
+ eout.ebegin("%s %s ;-)" % (os.path.basename(path),
" ".join(sorted(self._digests))))
eout.eend(0)
diff --git a/pym/_emerge/EbuildBinpkg.py b/pym/_emerge/EbuildBinpkg.py
index 34a6aef..6e098eb 100644
--- a/pym/_emerge/EbuildBinpkg.py
+++ b/pym/_emerge/EbuildBinpkg.py
@@ -10,13 +10,12 @@ class EbuildBinpkg(CompositeTask):
This assumes that src_install() has successfully completed.
"""
__slots__ = ('pkg', 'settings') + \
- ('_binpkg_tmpfile',)
+ ('_binpkg_tmpfile', '_binpkg_info')
def _start(self):
pkg = self.pkg
root_config = pkg.root_config
bintree = root_config.trees["bintree"]
- bintree.prevent_collision(pkg.cpv)
binpkg_tmpfile = os.path.join(bintree.pkgdir,
pkg.cpv + ".tbz2." + str(os.getpid()))
bintree._ensure_dir(os.path.dirname(binpkg_tmpfile))
@@ -43,8 +42,12 @@ class EbuildBinpkg(CompositeTask):
pkg = self.pkg
bintree = pkg.root_config.trees["bintree"]
- bintree.inject(pkg.cpv, filename=self._binpkg_tmpfile)
+ self._binpkg_info = bintree.inject(pkg.cpv,
+ filename=self._binpkg_tmpfile)
self._current_task = None
self.returncode = os.EX_OK
self.wait()
+
+ def get_binpkg_info(self):
+ return self._binpkg_info
diff --git a/pym/_emerge/EbuildBuild.py b/pym/_emerge/EbuildBuild.py
index b5b1e87..0e98602 100644
--- a/pym/_emerge/EbuildBuild.py
+++ b/pym/_emerge/EbuildBuild.py
@@ -1,6 +1,10 @@
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+from __future__ import unicode_literals
+
+import io
+
import _emerge.emergelog
from _emerge.EbuildExecuter import EbuildExecuter
from _emerge.EbuildPhase import EbuildPhase
@@ -15,7 +19,7 @@ from _emerge.TaskSequence import TaskSequence
from portage.util import writemsg
import portage
-from portage import os
+from portage import _encodings, _unicode_decode, _unicode_encode, os
from portage.output import colorize
from portage.package.ebuild.digestcheck import digestcheck
from portage.package.ebuild.digestgen import digestgen
@@ -317,9 +321,13 @@ class EbuildBuild(CompositeTask):
phase="rpm", scheduler=self.scheduler,
settings=self.settings))
else:
- binpkg_tasks.add(EbuildBinpkg(background=self.background,
+ task = EbuildBinpkg(
+ background=self.background,
pkg=self.pkg, scheduler=self.scheduler,
- settings=self.settings))
+ settings=self.settings)
+ binpkg_tasks.add(task)
+ task.addExitListener(
+ self._record_binpkg_info)
if binpkg_tasks:
self._start_task(binpkg_tasks, self._buildpkg_exit)
@@ -356,6 +364,28 @@ class EbuildBuild(CompositeTask):
self.returncode = packager.returncode
self.wait()
+ def _record_binpkg_info(self, task):
+ if task.returncode != os.EX_OK:
+ return
+
+ # Save info about the created binary package, so that
+ # identifying information can be passed to the install
+ # task, to be recorded in the installed package database.
+ pkg = task.get_binpkg_info()
+ infoloc = os.path.join(self.settings["PORTAGE_BUILDDIR"],
+ "build-info")
+ info = {
+ "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+ }
+ if pkg.build_id is not None:
+ info["BUILD_ID"] = "%s\n" % pkg.build_id
+ for k, v in info.items():
+ with io.open(_unicode_encode(os.path.join(infoloc, k),
+ encoding=_encodings['fs'], errors='strict'),
+ mode='w', encoding=_encodings['repo.content'],
+ errors='strict') as f:
+ f.write(v)
+
def _buildpkgonly_success_hook_exit(self, success_hooks):
self._default_exit(success_hooks)
self.returncode = None
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index 975335d..2c1a116 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -219,6 +219,8 @@ class Package(Task):
else:
raise TypeError("root_config argument is required")
+ elements = [type_name, root, _unicode(cpv), operation]
+
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
@@ -229,14 +231,22 @@ class Package(Task):
raise AssertionError(
"Package._gen_hash_key() " + \
"called without 'repo_name' argument")
- repo_key = repo_name
+ elements.append(repo_name)
+ elif type_name == "binary":
+ # Including a variety of fingerprints in the hash makes
+ # it possible to simultaneously consider multiple similar
+ # packages. Note that digests are not included here, since
+ # they are relatively expensive to compute, and they may
+ # not necessarily be available.
+ elements.extend([cpv.build_id, cpv.file_size,
+ cpv.build_time, cpv.mtime])
else:
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
- repo_key = type_name
+ elements.append(type_name)
- return (type_name, root, _unicode(cpv), operation, repo_key)
+ return tuple(elements)
def _validate_deps(self):
"""
diff --git a/pym/_emerge/Scheduler.py b/pym/_emerge/Scheduler.py
index d6db311..dae7944 100644
--- a/pym/_emerge/Scheduler.py
+++ b/pym/_emerge/Scheduler.py
@@ -862,8 +862,12 @@ class Scheduler(PollScheduler):
continue
fetched = fetcher.pkg_path
+ if fetched is False:
+ filename = bintree.getname(x.cpv)
+ else:
+ filename = fetched
verifier = BinpkgVerifier(pkg=x,
- scheduler=sched_iface)
+ scheduler=sched_iface, _pkg_path=filename)
current_task = verifier
verifier.start()
if verifier.wait() != os.EX_OK:
diff --git a/pym/_emerge/clear_caches.py b/pym/_emerge/clear_caches.py
index 513df62..cb0db10 100644
--- a/pym/_emerge/clear_caches.py
+++ b/pym/_emerge/clear_caches.py
@@ -7,7 +7,6 @@ def clear_caches(trees):
for d in trees.values():
d["porttree"].dbapi.melt()
d["porttree"].dbapi._aux_cache.clear()
- d["bintree"].dbapi._aux_cache.clear()
d["bintree"].dbapi._clear_cache()
if d["vartree"].dbapi._linkmap is None:
# preserve-libs is entirely disabled
diff --git a/pym/portage/const.py b/pym/portage/const.py
index febdb4a..c7ecda2 100644
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@ -122,6 +122,7 @@ EBUILD_PHASES = (
SUPPORTED_FEATURES = frozenset([
"assume-digests",
"binpkg-logs",
+ "binpkg-multi-instance",
"buildpkg",
"buildsyspkg",
"candy",
@@ -268,6 +269,7 @@ LIVE_ECLASSES = frozenset([
])
SUPPORTED_BINPKG_FORMATS = ("tar", "rpm")
+SUPPORTED_XPAK_EXTENSIONS = (".tbz2", ".xpak")
# Time formats used in various places like metadata.chk.
TIMESTAMP_FORMAT = "%a, %d %b %Y %H:%M:%S +0000" # to be used with time.gmtime()
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index cd30b67..460b9f7 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -17,14 +17,13 @@ portage.proxy.lazyimport.lazyimport(globals(),
'portage.update:update_dbentries',
'portage.util:atomic_ofstream,ensure_dirs,normalize_path,' + \
'writemsg,writemsg_stdout',
- 'portage.util.listdir:listdir',
'portage.util.path:first_existing',
'portage.util._urlopen:urlopen@_urlopen',
'portage.versions:best,catpkgsplit,catsplit,_pkg_str',
)
from portage.cache.mappings import slot_dict_class
-from portage.const import CACHE_PATH
+from portage.const import CACHE_PATH, SUPPORTED_XPAK_EXTENSIONS
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
@@ -71,18 +70,26 @@ class bindbapi(fakedbapi):
_known_keys = frozenset(list(fakedbapi._known_keys) + \
["CHOST", "repository", "USE"])
def __init__(self, mybintree=None, **kwargs):
- fakedbapi.__init__(self, **kwargs)
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False,
+ multi_instance=True, **kwargs)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
- self.cpvdict={}
- self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE",
- "DEFINED_PHASES", "PROVIDES", "REQUIRES"
+ ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE", "_mtime_"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@ -109,33 +116,49 @@ class bindbapi(fakedbapi):
return fakedbapi.cpv_exists(self, cpv)
def cpv_inject(self, cpv, **kwargs):
- self._aux_cache.pop(cpv, None)
- fakedbapi.cpv_inject(self, cpv, **kwargs)
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_inject(self, cpv,
+ metadata=cpv._metadata, **kwargs)
def cpv_remove(self, cpv):
- self._aux_cache.pop(cpv, None)
+ if not self.bintree.populated:
+ self.bintree.populate()
fakedbapi.cpv_remove(self, cpv)
def aux_get(self, mycpv, wants, myrepo=None):
if self.bintree and not self.bintree.populated:
self.bintree.populate()
- cache_me = False
+ # Support plain string for backward compatibility with API
+ # consumers (including portageq, which passes in a cpv from
+ # a command-line argument).
+ instance_key = self._instance_key(mycpv,
+ support_string=True)
if not self._known_keys.intersection(
wants).difference(self._aux_cache_keys):
- aux_cache = self._aux_cache.get(mycpv)
+ aux_cache = self.cpvdict[instance_key]
if aux_cache is not None:
return [aux_cache.get(x, "") for x in wants]
- cache_me = True
mysplit = mycpv.split("/")
mylist = []
tbz2name = mysplit[1]+".tbz2"
if not self.bintree._remotepkgs or \
not self.bintree.isremote(mycpv):
- tbz2_path = self.bintree.getname(mycpv)
- if not os.path.exists(tbz2_path):
+ try:
+ tbz2_path = self.bintree._pkg_paths[instance_key]
+ except KeyError:
+ raise KeyError(mycpv)
+ tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+ try:
+ st = os.lstat(tbz2_path)
+ except OSError:
raise KeyError(mycpv)
metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
def getitem(k):
+ if k == "_mtime_":
+ return _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ return _unicode(st.st_size)
v = metadata_bytes.get(_unicode_encode(k,
encoding=_encodings['repo.content'],
errors='backslashreplace'))
@@ -144,11 +167,9 @@ class bindbapi(fakedbapi):
encoding=_encodings['repo.content'], errors='replace')
return v
else:
- getitem = self.bintree._remotepkgs[mycpv].get
+ getitem = self.cpvdict[instance_key].get
mydata = {}
mykeys = wants
- if cache_me:
- mykeys = self._aux_cache_keys.union(wants)
for x in mykeys:
myval = getitem(x)
# myval is None if the key doesn't exist
@@ -159,16 +180,24 @@ class bindbapi(fakedbapi):
if not mydata.setdefault('EAPI', '0'):
mydata['EAPI'] = '0'
- if cache_me:
- aux_cache = self._aux_cache_slot_dict()
- for x in self._aux_cache_keys:
- aux_cache[x] = mydata.get(x, '')
- self._aux_cache[mycpv] = aux_cache
return [mydata.get(x, '') for x in wants]
def aux_update(self, cpv, values):
if not self.bintree.populated:
self.bintree.populate()
+ build_id = None
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ if self.bintree._multi_instance:
+ # The cpv.build_id attribute is required if we are in
+ # multi-instance mode, since otherwise we won't know
+ # which instance to update.
+ raise
+ else:
+ cpv = self._instance_key(cpv, support_string=True)[0]
+ build_id = cpv.build_id
+
tbz2path = self.bintree.getname(cpv)
if not os.path.exists(tbz2path):
raise KeyError(cpv)
@@ -187,7 +216,7 @@ class bindbapi(fakedbapi):
del mydata[k]
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
# inject will clear stale caches via cpv_inject.
- self.bintree.inject(cpv)
+ self.bintree.inject(cpv, filename=tbz2path)
def cp_list(self, *pargs, **kwargs):
if not self.bintree.populated:
@@ -219,7 +248,7 @@ class bindbapi(fakedbapi):
if not self.bintree.isremote(pkg):
pass
else:
- metadata = self.bintree._remotepkgs[pkg]
+ metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
try:
size = int(metadata["SIZE"])
except KeyError:
@@ -300,6 +329,13 @@ class binarytree(object):
if True:
self.pkgdir = normalize_path(pkgdir)
+ # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = ("binpkg-multi-instance" in
+ settings.features)
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
self.dbapi = bindbapi(self, settings=settings)
self.update_ents = self.dbapi.update_ents
self.move_slot_ent = self.dbapi.move_slot_ent
@@ -310,7 +346,6 @@ class binarytree(object):
self.invalids = []
self.settings = settings
self._pkg_paths = {}
- self._pkgindex_uri = {}
self._populating = False
self._all_directory = os.path.isdir(
os.path.join(self.pkgdir, "All"))
@@ -318,12 +353,14 @@ class binarytree(object):
self._pkgindex_hashes = ["MD5","SHA1"]
self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
+ self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI", "PROVIDES", "REQUIRES"]
+ ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
@@ -336,6 +373,7 @@ class binarytree(object):
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
"USE_EXPAND_UNPREFIXED"])
self._pkgindex_default_pkg_data = {
+ "BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
"DEPEND" : "",
@@ -365,6 +403,7 @@ class binarytree(object):
self._pkgindex_translated_keys = (
("DESCRIPTION" , "DESC"),
+ ("_mtime_" , "MTIME"),
("repository" , "REPO"),
)
@@ -455,16 +494,21 @@ class binarytree(object):
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
self.dbapi.cpv_remove(mycpv)
- del self._pkg_paths[mycpv]
+ del self._pkg_paths[self.dbapi._instance_key(mycpv)]
+ metadata = self.dbapi._aux_cache_slot_dict()
+ for k in self.dbapi._aux_cache_keys:
+ v = mydata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ mynewcpv = _pkg_str(mynewcpv, metadata=metadata)
new_path = self.getname(mynewcpv)
- self._pkg_paths[mynewcpv] = os.path.join(
+ self._pkg_paths[
+ self.dbapi._instance_key(mynewcpv)] = os.path.join(
*new_path.split(os.path.sep)[-2:])
if new_path != mytbz2:
self._ensure_dir(os.path.dirname(new_path))
_movefile(tbz2path, new_path, mysettings=self.settings)
- self._remove_symlink(mycpv)
- if new_path.split(os.path.sep)[-2] == "All":
- self._create_symlink(mynewcpv)
self.inject(mynewcpv)
return moves
@@ -645,55 +689,63 @@ class binarytree(object):
# prior to performing package moves since it only wants to
# operate on local packages (getbinpkgs=0).
self._remotepkgs = None
- self.dbapi._clear_cache()
- self.dbapi._aux_cache.clear()
+ self.dbapi.clear()
+ _instance_key = self.dbapi._instance_key
if True:
pkg_paths = {}
self._pkg_paths = pkg_paths
- dirs = listdir(self.pkgdir, dirsonly=True, EmptyOnError=True)
- if "All" in dirs:
- dirs.remove("All")
- dirs.sort()
- dirs.insert(0, "All")
+ dir_files = {}
+ for parent, dir_names, file_names in os.walk(self.pkgdir):
+ relative_parent = parent[len(self.pkgdir)+1:]
+ dir_files[relative_parent] = file_names
+
pkgindex = self._load_pkgindex()
- pf_index = None
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
header = pkgindex.header
metadata = {}
+ basename_index = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ path = d.get("PATH")
+ if not path:
+ path = cpv + ".tbz2"
+ basename = os.path.basename(path)
+ basename_index.setdefault(basename, []).append(d)
+
update_pkgindex = False
- for mydir in dirs:
- for myfile in listdir(os.path.join(self.pkgdir, mydir)):
- if not myfile.endswith(".tbz2"):
+ for mydir, file_names in dir_files.items():
+ try:
+ mydir = _unicode_decode(mydir,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ for myfile in file_names:
+ try:
+ myfile = _unicode_decode(myfile,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
continue
mypath = os.path.join(mydir, myfile)
full_path = os.path.join(self.pkgdir, mypath)
s = os.lstat(full_path)
- if stat.S_ISLNK(s.st_mode):
+
+ if not stat.S_ISREG(s.st_mode):
continue
# Validate data from the package index and try to avoid
# reading the xpak if possible.
- if mydir != "All":
- possibilities = None
- d = metadata.get(mydir+"/"+myfile[:-5])
- if d:
- possibilities = [d]
- else:
- if pf_index is None:
- pf_index = {}
- for mycpv in metadata:
- mycat, mypf = catsplit(mycpv)
- pf_index.setdefault(
- mypf, []).append(metadata[mycpv])
- possibilities = pf_index.get(myfile[:-5])
+ possibilities = basename_index.get(myfile)
if possibilities:
match = None
for d in possibilities:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
continue
except (KeyError, ValueError):
continue
@@ -707,15 +759,14 @@ class binarytree(object):
break
if match:
mycpv = match["CPV"]
- if mycpv in pkg_paths:
- # discard duplicates (All/ is preferred)
- continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ instance_key = _instance_key(mycpv)
+ pkg_paths[instance_key] = mypath
# update the path if the package has been moved
oldpath = d.get("PATH")
if oldpath and oldpath != mypath:
update_pkgindex = True
+ # Omit PATH if it is the default path for
+ # the current Packages format version.
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
if not oldpath:
@@ -725,11 +776,6 @@ class binarytree(object):
if oldpath:
update_pkgindex = True
self.dbapi.cpv_inject(mycpv)
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
continue
if not os.access(full_path, os.R_OK):
writemsg(_("!!! Permission denied to read " \
@@ -737,13 +783,12 @@ class binarytree(object):
noiselevel=-1)
self.invalids.append(myfile[:-5])
continue
- metadata_bytes = portage.xpak.tbz2(full_path).get_data()
- mycat = _unicode_decode(metadata_bytes.get(b"CATEGORY", ""),
- encoding=_encodings['repo.content'], errors='replace')
- mypf = _unicode_decode(metadata_bytes.get(b"PF", ""),
- encoding=_encodings['repo.content'], errors='replace')
- slot = _unicode_decode(metadata_bytes.get(b"SLOT", ""),
- encoding=_encodings['repo.content'], errors='replace')
+ pkg_metadata = self._read_metadata(full_path, s,
+ keys=chain(self.dbapi._aux_cache_keys,
+ ("PF", "CATEGORY")))
+ mycat = pkg_metadata.get("CATEGORY", "")
+ mypf = pkg_metadata.get("PF", "")
+ slot = pkg_metadata.get("SLOT", "")
mypkg = myfile[:-5]
if not mycat or not mypf or not slot:
#old-style or corrupt package
@@ -767,16 +812,51 @@ class binarytree(object):
writemsg("!!! %s\n" % line, noiselevel=-1)
self.invalids.append(mypkg)
continue
- mycat = mycat.strip()
- slot = slot.strip()
- if mycat != mydir and mydir != "All":
+
+ multi_instance = False
+ invalid_name = False
+ build_id = None
+ if myfile.endswith(".xpak"):
+ multi_instance = True
+ build_id = self._parse_build_id(myfile)
+ if build_id < 1:
+ invalid_name = True
+ elif myfile != "%s-%s.xpak" % (
+ mypf, build_id):
+ invalid_name = True
+ else:
+ mypkg = mypkg[:-len(str(build_id))-1]
+ elif myfile != mypf + ".tbz2":
+ invalid_name = True
+
+ if invalid_name:
+ writemsg(_("\n!!! Binary package name is "
+ "invalid: '%s'\n") % full_path,
+ noiselevel=-1)
+ continue
+
+ if pkg_metadata.get("BUILD_ID"):
+ try:
+ build_id = long(pkg_metadata["BUILD_ID"])
+ except ValueError:
+ writemsg(_("!!! Binary package has "
+ "invalid BUILD_ID: '%s'\n") %
+ full_path, noiselevel=-1)
+ continue
+ else:
+ build_id = None
+
+ if multi_instance:
+ name_split = catpkgsplit("%s/%s" %
+ (mycat, mypf))
+ if (name_split is None or
+ tuple(catsplit(mydir)) != name_split[:2]):
+ continue
+ elif mycat != mydir and mydir != "All":
continue
if mypkg != mypf.strip():
continue
mycpv = mycat + "/" + mypkg
- if mycpv in pkg_paths:
- # All is first, so it's preferred.
- continue
if not self.dbapi._category_re.match(mycat):
writemsg(_("!!! Binary package has an " \
"unrecognized category: '%s'\n") % full_path,
@@ -786,14 +866,23 @@ class binarytree(object):
(mycpv, self.settings["PORTAGE_CONFIGROOT"]),
noiselevel=-1)
continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ if build_id is not None:
+ pkg_metadata["BUILD_ID"] = _unicode(build_id)
+ pkg_metadata["SIZE"] = _unicode(s.st_size)
+ # Discard items used only for validation above.
+ pkg_metadata.pop("CATEGORY")
+ pkg_metadata.pop("PF")
+ mycpv = _pkg_str(mycpv,
+ metadata=self.dbapi._aux_cache_slot_dict(
+ pkg_metadata))
+ pkg_paths[_instance_key(mycpv)] = mypath
self.dbapi.cpv_inject(mycpv)
update_pkgindex = True
- d = metadata.get(mycpv, {})
+ d = metadata.get(_instance_key(mycpv),
+ pkgindex._pkg_slot_dict())
if d:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
d.clear()
except (KeyError, ValueError):
d.clear()
@@ -804,36 +893,30 @@ class binarytree(object):
except (KeyError, ValueError):
d.clear()
+ for k in self._pkgindex_allowed_pkg_keys:
+ v = pkg_metadata.get(k)
+ if v is not None:
+ d[k] = v
d["CPV"] = mycpv
- d["SLOT"] = slot
- d["MTIME"] = _unicode(s[stat.ST_MTIME])
- d["SIZE"] = _unicode(s.st_size)
- d.update(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(mycpv, self._pkgindex_aux_keys)))
try:
self._eval_use_flags(mycpv, d)
except portage.exception.InvalidDependString:
writemsg(_("!!! Invalid binary package: '%s'\n") % \
self.getname(mycpv), noiselevel=-1)
self.dbapi.cpv_remove(mycpv)
- del pkg_paths[mycpv]
+ del pkg_paths[_instance_key(mycpv)]
# record location if it's non-default
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
else:
d.pop("PATH", None)
- metadata[mycpv] = d
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
+ metadata[_instance_key(mycpv)] = d
- for cpv in list(metadata):
- if cpv not in pkg_paths:
- del metadata[cpv]
+ for instance_key in list(metadata):
+ if instance_key not in pkg_paths:
+ del metadata[instance_key]
# Do not bother to write the Packages index if $PKGDIR/All/ exists
# since it will provide no benefit due to the need to read CATEGORY
@@ -1058,45 +1141,24 @@ class binarytree(object):
# The current user doesn't have permission to cache the
# file, but that's alright.
if pkgindex:
- # Organize remote package list as a cpv -> metadata map.
- remotepkgs = _pkgindex_cpv_map_latest_build(pkgindex)
remote_base_uri = pkgindex.header.get("URI", base_url)
- for cpv, remote_metadata in remotepkgs.items():
- remote_metadata["BASE_URI"] = remote_base_uri
- self._pkgindex_uri[cpv] = url
- self._remotepkgs.update(remotepkgs)
- self._remote_has_index = True
- for cpv in remotepkgs:
+ for d in pkgindex.packages:
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ instance_key = _instance_key(cpv)
+ # Local package instances override remote instances
+ # with the same instance_key.
+ if instance_key in metadata:
+ continue
+
+ d["CPV"] = cpv
+ d["BASE_URI"] = remote_base_uri
+ d["PKGINDEX_URI"] = url
+ self._remotepkgs[instance_key] = d
+ metadata[instance_key] = d
self.dbapi.cpv_inject(cpv)
- if True:
- # Remote package instances override local package
- # if they are not identical.
- hash_names = ["SIZE"] + self._pkgindex_hashes
- for cpv, local_metadata in metadata.items():
- remote_metadata = self._remotepkgs.get(cpv)
- if remote_metadata is None:
- continue
- # Use digests to compare identity.
- identical = True
- for hash_name in hash_names:
- local_value = local_metadata.get(hash_name)
- if local_value is None:
- continue
- remote_value = remote_metadata.get(hash_name)
- if remote_value is None:
- continue
- if local_value != remote_value:
- identical = False
- break
- if identical:
- del self._remotepkgs[cpv]
- else:
- # Override the local package in the aux_get cache.
- self.dbapi._aux_cache[cpv] = remote_metadata
- else:
- # Local package instances override remote instances.
- for cpv in metadata:
- self._remotepkgs.pop(cpv, None)
+
+ self._remote_has_index = True
self.populated=1
@@ -1108,7 +1170,8 @@ class binarytree(object):
@param filename: File path of the package to inject, or None if it's
already in the location returned by getname()
@type filename: string
- @rtype: None
+ @rtype: _pkg_str or None
+ @return: A _pkg_str instance on success, or None on failure.
"""
mycat, mypkg = catsplit(cpv)
if not self.populated:
@@ -1126,24 +1189,45 @@ class binarytree(object):
writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
noiselevel=-1)
return
- mytbz2 = portage.xpak.tbz2(full_path)
- slot = mytbz2.getfile("SLOT")
+ metadata = self._read_metadata(full_path, s)
+ slot = metadata.get("SLOT")
+ try:
+ self._eval_use_flags(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ slot = None
if slot is None:
writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
noiselevel=-1)
return
- slot = slot.strip()
- self.dbapi.cpv_inject(cpv)
+
+ fetched = False
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ build_id = None
+ else:
+ instance_key = self.dbapi._instance_key(cpv)
+ if instance_key in self.dbapi.cpvdict:
+ # This means we've been called by aux_update (or
+ # similar). The instance key typically changes (due to
+ # file modification), so we need to discard existing
+ # instance key references.
+ self.dbapi.cpv_remove(cpv)
+ self._pkg_paths.pop(instance_key, None)
+ if self._remotepkgs is not None:
+ fetched = self._remotepkgs.pop(instance_key, None)
+
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings)
# Reread the Packages index (in case it's been changed by another
# process) and then updated it, all while holding a lock.
pkgindex_lock = None
- created_symlink = False
try:
pkgindex_lock = lockfile(self._pkgindex_file,
wantnewlockfile=1)
if filename is not None:
- new_filename = self.getname(cpv)
+ new_filename = self.getname(cpv,
+ allocate_new=(build_id is None))
try:
samefile = os.path.samefile(filename, new_filename)
except OSError:
@@ -1153,54 +1237,31 @@ class binarytree(object):
_movefile(filename, new_filename, mysettings=self.settings)
full_path = new_filename
- self._file_permissions(full_path)
+ basename = os.path.basename(full_path)
+ pf = catsplit(cpv)[1]
+ if (build_id is None and not fetched and
+ basename.endswith(".xpak")):
+ # Apply the newly assigned BUILD_ID. This is intended
+ # to occur only for locally built packages. If the
+ # package was fetched, we want to preserve its
+ # attributes, so that we can later distinguish that it
+ # is identical to its remote counterpart.
+ build_id = self._parse_build_id(basename)
+ metadata["BUILD_ID"] = _unicode(build_id)
+ cpv = _pkg_str(cpv, metadata=metadata,
+ settings=self.settings)
+ binpkg = portage.xpak.tbz2(full_path)
+ binary_data = binpkg.get_data()
+ binary_data[b"BUILD_ID"] = _unicode_encode(
+ metadata["BUILD_ID"])
+ binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
- if self._all_directory and \
- self.getname(cpv).split(os.path.sep)[-2] == "All":
- self._create_symlink(cpv)
- created_symlink = True
+ self._file_permissions(full_path)
pkgindex = self._load_pkgindex()
-
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
- # Discard remote metadata to ensure that _pkgindex_entry
- # gets the local metadata. This also updates state for future
- # isremote calls.
- if self._remotepkgs is not None:
- self._remotepkgs.pop(cpv, None)
-
- # Discard cached metadata to ensure that _pkgindex_entry
- # doesn't return stale metadata.
- self.dbapi._aux_cache.pop(cpv, None)
-
- try:
- d = self._pkgindex_entry(cpv)
- except portage.exception.InvalidDependString:
- writemsg(_("!!! Invalid binary package: '%s'\n") % \
- self.getname(cpv), noiselevel=-1)
- self.dbapi.cpv_remove(cpv)
- del self._pkg_paths[cpv]
- return
-
- # If found, remove package(s) with duplicate path.
- path = d.get("PATH", "")
- for i in range(len(pkgindex.packages) - 1, -1, -1):
- d2 = pkgindex.packages[i]
- if path and path == d2.get("PATH"):
- # Handle path collisions in $PKGDIR/All
- # when CPV is not identical.
- del pkgindex.packages[i]
- elif cpv == d2.get("CPV"):
- if path == d2.get("PATH", ""):
- del pkgindex.packages[i]
- elif created_symlink and not d2.get("PATH", ""):
- # Delete entry for the package that was just
- # overwritten by a symlink to this package.
- del pkgindex.packages[i]
-
- pkgindex.packages.append(d)
-
+ d = self._inject_file(pkgindex, cpv, full_path)
self._update_pkgindex_header(pkgindex.header)
self._pkgindex_write(pkgindex)
@@ -1208,6 +1269,73 @@ class binarytree(object):
if pkgindex_lock:
unlockfile(pkgindex_lock)
+ # This is used to record BINPKGMD5 in the installed package
+ # database, for a package that has just been built.
+ cpv._metadata["MD5"] = d["MD5"]
+
+ return cpv
+
+ def _read_metadata(self, filename, st, keys=None):
+ if keys is None:
+ keys = self.dbapi._aux_cache_keys
+ metadata = self.dbapi._aux_cache_slot_dict()
+ else:
+ metadata = {}
+ binary_metadata = portage.xpak.tbz2(filename).get_data()
+ for k in keys:
+ if k == "_mtime_":
+ metadata[k] = _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ metadata[k] = _unicode(st.st_size)
+ else:
+ v = binary_metadata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ metadata.setdefault("EAPI", "0")
+ return metadata
+
+ def _inject_file(self, pkgindex, cpv, filename):
+ """
+ Add a package to internal data structures, and add an
+ entry to the given pkgindex.
+ @param pkgindex: The PackageIndex instance to which an entry
+ will be added.
+ @type pkgindex: PackageIndex
+ @param cpv: A _pkg_str instance corresponding to the package
+ being injected.
+ @type cpv: _pkg_str
+ @param filename: Absolute file path of the package to inject.
+ @type filename: string
+ @rtype: dict
+ @return: A dict corresponding to the new entry which has been
+ added to pkgindex. This may be used to access the checksums
+ which have just been generated.
+ """
+ # Update state for future isremote calls.
+ instance_key = self.dbapi._instance_key(cpv)
+ if self._remotepkgs is not None:
+ self._remotepkgs.pop(instance_key, None)
+
+ self.dbapi.cpv_inject(cpv)
+ self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
+ d = self._pkgindex_entry(cpv)
+
+ # If found, remove package(s) with duplicate path.
+ path = d.get("PATH", "")
+ for i in range(len(pkgindex.packages) - 1, -1, -1):
+ d2 = pkgindex.packages[i]
+ if path and path == d2.get("PATH"):
+ # Handle path collisions in $PKGDIR/All
+ # when CPV is not identical.
+ del pkgindex.packages[i]
+ elif cpv == d2.get("CPV"):
+ if path == d2.get("PATH", ""):
+ del pkgindex.packages[i]
+
+ pkgindex.packages.append(d)
+ return d
+
def _pkgindex_write(self, pkgindex):
contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
pkgindex.write(contents)
@@ -1233,7 +1361,7 @@ class binarytree(object):
def _pkgindex_entry(self, cpv):
"""
- Performs checksums and evaluates USE flag conditionals.
+ Performs checksums, and gets size and mtime via lstat.
Raises InvalidDependString if necessary.
@rtype: dict
@return: a dict containing entry for the give cpv.
@@ -1241,23 +1369,20 @@ class binarytree(object):
pkg_path = self.getname(cpv)
- d = dict(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(cpv, self._pkgindex_aux_keys)))
-
+ d = dict(cpv._metadata.items())
d.update(perform_multiple_checksums(
pkg_path, hashes=self._pkgindex_hashes))
d["CPV"] = cpv
- st = os.stat(pkg_path)
- d["MTIME"] = _unicode(st[stat.ST_MTIME])
+ st = os.lstat(pkg_path)
+ d["_mtime_"] = _unicode(st[stat.ST_MTIME])
d["SIZE"] = _unicode(st.st_size)
- rel_path = self._pkg_paths[cpv]
+ rel_path = pkg_path[len(self.pkgdir)+1:]
# record location if it's non-default
if rel_path != cpv + ".tbz2":
d["PATH"] = rel_path
- self._eval_use_flags(cpv, d)
return d
def _new_pkgindex(self):
@@ -1311,15 +1436,17 @@ class binarytree(object):
return False
def _eval_use_flags(self, cpv, metadata):
- use = frozenset(metadata["USE"].split())
+ use = frozenset(metadata.get("USE", "").split())
for k in self._pkgindex_use_evaluated_keys:
if k.endswith('DEPEND'):
token_class = Atom
else:
token_class = None
+ deps = metadata.get(k)
+ if deps is None:
+ continue
try:
- deps = metadata[k]
deps = use_reduce(deps, uselist=use, token_class=token_class)
deps = paren_enclose(deps)
except portage.exception.InvalidDependString as e:
@@ -1349,46 +1476,129 @@ class binarytree(object):
return ""
return mymatch
- def getname(self, pkgname):
- """Returns a file location for this package. The default location is
- ${PKGDIR}/All/${PF}.tbz2, but will be ${PKGDIR}/${CATEGORY}/${PF}.tbz2
- in the rare event of a collision. The prevent_collision() method can
- be called to ensure that ${PKGDIR}/All/${PF}.tbz2 is available for a
- specific cpv."""
+ def getname(self, cpv, allocate_new=None):
+ """Returns a file location for this package.
+ If cpv has both build_time and build_id attributes, then the
+ path to the specific corresponding instance is returned.
+ Otherwise, allocate a new path and return that. When allocating
+ a new path, behavior depends on the binpkg-multi-instance
+ FEATURES setting.
+ """
if not self.populated:
self.populate()
- mycpv = pkgname
- mypath = self._pkg_paths.get(mycpv, None)
- if mypath:
- return os.path.join(self.pkgdir, mypath)
- mycat, mypkg = catsplit(mycpv)
- if self._all_directory:
- mypath = os.path.join("All", mypkg + ".tbz2")
- if mypath in self._pkg_paths.values():
- mypath = os.path.join(mycat, mypkg + ".tbz2")
+
+ try:
+ cpv.cp
+ except AttributeError:
+ cpv = _pkg_str(cpv)
+
+ filename = None
+ if allocate_new:
+ filename = self._allocate_filename(cpv)
+ elif self._is_specific_instance(cpv):
+ instance_key = self.dbapi._instance_key(cpv)
+ path = self._pkg_paths.get(instance_key)
+ if path is not None:
+ filename = os.path.join(self.pkgdir, path)
+
+ if filename is None and not allocate_new:
+ try:
+ instance_key = self.dbapi._instance_key(cpv,
+ support_string=True)
+ except KeyError:
+ pass
+ else:
+ filename = self._pkg_paths.get(instance_key)
+ if filename is not None:
+ filename = os.path.join(self.pkgdir, filename)
+
+ if filename is None:
+ if self._multi_instance:
+ pf = catsplit(cpv)[1]
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), "1")
+ else:
+ filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ return filename
+
+ def _is_specific_instance(self, cpv):
+ specific = True
+ try:
+ build_time = cpv.build_time
+ build_id = cpv.build_id
+ except AttributeError:
+ specific = False
else:
- mypath = os.path.join(mycat, mypkg + ".tbz2")
- self._pkg_paths[mycpv] = mypath # cache for future lookups
- return os.path.join(self.pkgdir, mypath)
+ if build_time is None or build_id is None:
+ specific = False
+ return specific
+
+ def _max_build_id(self, cpv):
+ max_build_id = 0
+ for x in self.dbapi.cp_list(cpv.cp):
+ if (x == cpv and x.build_id is not None and
+ x.build_id > max_build_id):
+ max_build_id = x.build_id
+ return max_build_id
+
+ def _allocate_filename(self, cpv):
+ return os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ def _allocate_filename_multi(self, cpv):
+
+ # First, get the max build_id found when _populate was
+ # called.
+ max_build_id = self._max_build_id(cpv)
+
+ # A new package may have been added concurrently since the
+ # last _populate call, so keep incrementing build_id until
+ # we locate an unused id.
+ pf = catsplit(cpv)[1]
+ build_id = max_build_id + 1
+
+ while True:
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+ if os.path.exists(filename):
+ build_id += 1
+ else:
+ return filename
+
+ @staticmethod
+ def _parse_build_id(filename):
+ build_id = -1
+ hyphen = filename.rfind("-", 0, -6)
+ if hyphen != -1:
+ build_id = filename[hyphen+1:-5]
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ pass
+ return build_id
def isremote(self, pkgname):
"""Returns true if the package is kept remotely and it has not been
downloaded (or it is only partially downloaded)."""
- if self._remotepkgs is None or pkgname not in self._remotepkgs:
+ if (self._remotepkgs is None or
+ self.dbapi._instance_key(pkgname) not in self._remotepkgs):
return False
# Presence in self._remotepkgs implies that it's remote. When a
# package is downloaded, state is updated by self.inject().
return True
- def get_pkgindex_uri(self, pkgname):
+ def get_pkgindex_uri(self, cpv):
"""Returns the URI to the Packages file for a given package."""
- return self._pkgindex_uri.get(pkgname)
-
-
+ uri = None
+ metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+ if metadata is not None:
+ uri = metadata["PKGINDEX_URI"]
+ return uri
def gettbz2(self, pkgname):
"""Fetches the package from a remote site, if necessary. Attempts to
resume if the file appears to be partially downloaded."""
+ instance_key = self.dbapi._instance_key(pkgname)
tbz2_path = self.getname(pkgname)
tbz2name = os.path.basename(tbz2_path)
resume = False
@@ -1404,10 +1614,10 @@ class binarytree(object):
self._ensure_dir(mydest)
# urljoin doesn't work correctly with unrecognized protocols like sftp
if self._remote_has_index:
- rel_url = self._remotepkgs[pkgname].get("PATH")
+ rel_url = self._remotepkgs[instance_key].get("PATH")
if not rel_url:
rel_url = pkgname+".tbz2"
- remote_base_uri = self._remotepkgs[pkgname]["BASE_URI"]
+ remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
else:
url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
@@ -1450,15 +1660,19 @@ class binarytree(object):
except AttributeError:
cpv = pkg
+ _instance_key = self.dbapi._instance_key
+ instance_key = _instance_key(cpv)
digests = {}
- metadata = None
- if self._remotepkgs is None or cpv not in self._remotepkgs:
+ metadata = (None if self._remotepkgs is None else
+ self._remotepkgs.get(instance_key))
+ if metadata is None:
for d in self._load_pkgindex().packages:
- if d["CPV"] == cpv:
+ if (d["CPV"] == cpv and
+ instance_key == _instance_key(_pkg_str(d["CPV"],
+ metadata=d, settings=self.settings))):
metadata = d
break
- else:
- metadata = self._remotepkgs[cpv]
+
if metadata is None:
return digests
diff --git a/pym/portage/emaint/modules/binhost/binhost.py b/pym/portage/emaint/modules/binhost/binhost.py
index 1138a8c..cf1213e 100644
--- a/pym/portage/emaint/modules/binhost/binhost.py
+++ b/pym/portage/emaint/modules/binhost/binhost.py
@@ -7,6 +7,7 @@ import stat
import portage
from portage import os
from portage.util import writemsg
+from portage.versions import _pkg_str
import sys
@@ -38,7 +39,7 @@ class BinhostHandler(object):
if size is None:
return True
- mtime = data.get("MTIME")
+ mtime = data.get("_mtime_")
if mtime is None:
return True
@@ -90,6 +91,7 @@ class BinhostHandler(object):
def fix(self, **kwargs):
onProgress = kwargs.get('onProgress', None)
bintree = self._bintree
+ _instance_key = bintree.dbapi._instance_key
cpv_all = self._bintree.dbapi.cpv_all()
cpv_all.sort()
missing = []
@@ -98,16 +100,21 @@ class BinhostHandler(object):
onProgress(maxval, 0)
pkgindex = self._pkgindex
missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
- stale = set(metadata).difference(cpv_all)
if missing or stale:
from portage import locks
pkgindex_lock = locks.lockfile(
@@ -121,31 +128,39 @@ class BinhostHandler(object):
pkgindex = bintree._load_pkgindex()
self._pkgindex = pkgindex
+ # Recount stale/missing packages, with lock held.
+ missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- # Recount missing packages, with lock held.
- del missing[:]
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
maxval = len(missing)
for i, cpv in enumerate(missing):
+ d = bintree._pkgindex_entry(cpv)
try:
- metadata[cpv] = bintree._pkgindex_entry(cpv)
+ bintree._eval_use_flags(cpv, d)
except portage.exception.InvalidDependString:
writemsg("!!! Invalid binary package: '%s'\n" % \
bintree.getname(cpv), noiselevel=-1)
+ else:
+ metadata[_instance_key(cpv)] = d
if onProgress:
onProgress(maxval, i+1)
- for cpv in set(metadata).difference(
- self._bintree.dbapi.cpv_all()):
- del metadata[cpv]
+ for cpv in stale:
+ del metadata[_instance_key(cpv)]
# We've updated the pkgindex, so set it to
# repopulate when necessary.
--
2.0.5
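
For readers following the Package.py hunk above, here is a minimal standalone
sketch (simplified stand-in names, not the patch's actual code) of how the
hash key gains per-instance fingerprints for binary packages, so that two
builds of the same CPV no longer collapse into a single key:

def gen_hash_key(type_name, root, cpv, operation, repo_name=None,
		build_id=None, file_size=None, build_time=None, mtime=None):
	# Base elements shared by every package type.
	elements = [type_name, root, str(cpv), operation]
	if type_name == "ebuild":
		if repo_name is None:
			raise AssertionError("repo_name is required for ebuilds")
		elements.append(repo_name)
	elif type_name == "binary":
		# Cheap fingerprints only; digests are omitted because they
		# are expensive to compute and may be unavailable.
		elements.extend([build_id, file_size, build_time, mtime])
	else:
		# Installed packages: only one instance per CPV can exist,
		# so the type name is enough to disambiguate.
		elements.append(type_name)
	return tuple(elements)

# Two builds of the same CPV now hash to distinct keys:
k1 = gen_hash_key("binary", "/", "app-misc/foo-1", "merge",
	build_id=1, file_size=1000, build_time=100, mtime=100)
k2 = gen_hash_key("binary", "/", "app-misc/foo-1", "merge",
	build_id=2, file_size=1010, build_time=200, mtime=200)
assert k1 != k2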
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 3/7 v2] binpkg-multi-instance 3 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 3/7] binpkg-multi-instance 3 " Zac Medico
@ 2015-02-27 21:58 ` Zac Medico
2015-02-27 23:36 ` [gentoo-portage-dev] [PATCH 3/7 v3] " Zac Medico
1 sibling, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-27 21:58 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
FEATURES=binpkg-multi-instance causes an integer build-id to be
associated with each binary package instance. Inclusion of the build-id
in the file name of the binary package file makes it possible to store
an arbitrary number of binary packages built from the same ebuild.
Having multiple instances is useful for a number of purposes, such as
retaining builds that were built with different USE flags or linked
against different versions of libraries. The location of any particular
package within PKGDIR can be expressed as follows:
${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
The build-id starts at 1 for the first build of a particular ebuild,
and is incremented by 1 for each new build. It is possible to share a
writable PKGDIR over NFS, and locking ensures that each package added
to PKGDIR will have a unique build-id. It is not necessary to migrate
an existing PKGDIR to the new layout, since portage is capable of
working with a mixed PKGDIR layout, where packages using the old layout
are allowed to remain in place.
The new PKGDIR layout is backward-compatible with binhost clients
running older portage, since the file format is identical, the
per-package PATH attribute in the 'Packages' index directs them to
download the file from the correct URI, and they automatically use
BUILD_TIME metadata to select the latest builds.
There is currently no automated way to prune old builds from PKGDIR,
although it is possible to remove packages manually, and then run
'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
---
PATCH 3/7 v2 fixes BinpkgPrefetcher to pass the _pkg_path to
BinpkgVerifier. Thanks to David James <davidjames@google.com>.
bin/quickpkg | 1 -
man/make.conf.5 | 27 +
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgPrefetcher.py | 2 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/Package.py | 16 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/clear_caches.py | 1 -
pym/portage/const.py | 2 +
pym/portage/dbapi/bintree.py | 684 +++++++++++++++++---------
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
14 files changed, 602 insertions(+), 281 deletions(-)
diff --git a/bin/quickpkg b/bin/quickpkg
index 2c69a69..8b71c3e 100755
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@ -63,7 +63,6 @@ def quickpkg_atom(options, infos, arg, eout):
pkgs_for_arg = 0
for cpv in matches:
excluded_config_files = []
- bintree.prevent_collision(cpv)
dblnk = vardb._dblink(cpv)
have_lock = False
diff --git a/man/make.conf.5 b/man/make.conf.5
index cd1ae21..1b71b97 100644
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@ -256,6 +256,33 @@ has a \fB\-\-force\fR option that can be used to force regeneration of digests.
Keep logs from successful binary package merges. This is relevant only when
\fBPORT_LOGDIR\fR is set.
.TP
+.B binpkg\-multi\-instance
+Enable support for multiple binary package instances per ebuild.
+Having multiple instances is useful for a number of purposes, such as
+retaining builds that were built with different USE flags or linked
+against different versions of libraries. The location of any particular
+package within PKGDIR can be expressed as follows:
+
+ ${PKGDIR}/${CATEGORY}/${PN}/${PF}\-${BUILD_ID}.xpak
+
+The build\-id starts at 1 for the first build of a particular ebuild,
+and is incremented by 1 for each new build. It is possible to share a
+writable PKGDIR over NFS, and locking ensures that each package added
+to PKGDIR will have a unique build\-id. It is not necessary to migrate
+an existing PKGDIR to the new layout, since portage is capable of
+working with a mixed PKGDIR layout, where packages using the old layout
+are allowed to remain in place.
+
+The new PKGDIR layout is backward\-compatible with binhost clients
+running older portage, since the file format is identical, the
+per\-package PATH attribute in the 'Packages' index directs them to
+download the file from the correct URI, and they automatically use
+BUILD_TIME metadata to select the latest builds.
+
+There is currently no automated way to prune old builds from PKGDIR,
+although it is possible to remove packages manually, and then run
+\(aqemaint \-\-fix binhost' to update the ${PKGDIR}/Packages index.
+.TP
.B buildpkg
Binary packages will be created for all packages that are merged. Also see
\fBquickpkg\fR(1) and \fBemerge\fR(1) \fB\-\-buildpkg\fR and
diff --git a/pym/_emerge/Binpkg.py b/pym/_emerge/Binpkg.py
index ded6dfd..7b7ae17 100644
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@ -121,16 +121,11 @@ class Binpkg(CompositeTask):
fetcher = BinpkgFetcher(background=self.background,
logfile=self.settings.get("PORTAGE_LOG_FILE"), pkg=self.pkg,
pretend=self.opts.pretend, scheduler=self.scheduler)
- pkg_path = fetcher.pkg_path
- self._pkg_path = pkg_path
- # This gives bashrc users an opportunity to do various things
- # such as remove binary packages after they're installed.
- self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
if self.opts.getbinpkg and self._bintree.isremote(pkg.cpv):
-
msg = " --- (%s of %s) Fetching Binary (%s::%s)" %\
- (pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg_path)
+ (pkg_count.curval, pkg_count.maxval, pkg.cpv,
+ fetcher.pkg_path)
short_msg = "emerge: (%s of %s) %s Fetch" % \
(pkg_count.curval, pkg_count.maxval, pkg.cpv)
self.logger.log(msg, short_msg=short_msg)
@@ -149,7 +144,7 @@ class Binpkg(CompositeTask):
# The fetcher only has a returncode when
# --getbinpkg is enabled.
if fetcher.returncode is not None:
- self._fetched_pkg = True
+ self._fetched_pkg = fetcher.pkg_path
if self._default_exit(fetcher) != os.EX_OK:
self._unlock_builddir()
self.wait()
@@ -163,9 +158,15 @@ class Binpkg(CompositeTask):
verifier = None
if self._verify:
+ if self._fetched_pkg:
+ path = self._fetched_pkg
+ else:
+ path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
logfile = self.settings.get("PORTAGE_LOG_FILE")
verifier = BinpkgVerifier(background=self.background,
- logfile=logfile, pkg=self.pkg, scheduler=self.scheduler)
+ logfile=logfile, pkg=self.pkg, scheduler=self.scheduler,
+ _pkg_path=path)
self._start_task(verifier, self._verifier_exit)
return
@@ -181,10 +182,20 @@ class Binpkg(CompositeTask):
logger = self.logger
pkg = self.pkg
pkg_count = self.pkg_count
- pkg_path = self._pkg_path
if self._fetched_pkg:
- self._bintree.inject(pkg.cpv, filename=pkg_path)
+ pkg_path = self._bintree.getname(
+ self._bintree.inject(pkg.cpv,
+ filename=self._fetched_pkg),
+ allocate_new=False)
+ else:
+ pkg_path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
+
+ # This gives bashrc users an opportunity to do various things
+ # such as remove binary packages after they're installed.
+ self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
+ self._pkg_path = pkg_path
logfile = self.settings.get("PORTAGE_LOG_FILE")
if logfile is not None and os.path.isfile(logfile):
diff --git a/pym/_emerge/BinpkgFetcher.py b/pym/_emerge/BinpkgFetcher.py
index 543881e..a7f2d44 100644
--- a/pym/_emerge/BinpkgFetcher.py
+++ b/pym/_emerge/BinpkgFetcher.py
@@ -24,7 +24,8 @@ class BinpkgFetcher(SpawnProcess):
def __init__(self, **kwargs):
SpawnProcess.__init__(self, **kwargs)
pkg = self.pkg
- self.pkg_path = pkg.root_config.trees["bintree"].getname(pkg.cpv)
+ self.pkg_path = pkg.root_config.trees["bintree"].getname(
+ pkg.cpv) + ".partial"
def _start(self):
@@ -51,10 +52,12 @@ class BinpkgFetcher(SpawnProcess):
# urljoin doesn't work correctly with
# unrecognized protocols like sftp
if bintree._remote_has_index:
- rel_uri = bintree._remotepkgs[pkg.cpv].get("PATH")
+ instance_key = bintree.dbapi._instance_key(pkg.cpv)
+ rel_uri = bintree._remotepkgs[instance_key].get("PATH")
if not rel_uri:
rel_uri = pkg.cpv + ".tbz2"
- remote_base_uri = bintree._remotepkgs[pkg.cpv]["BASE_URI"]
+ remote_base_uri = bintree._remotepkgs[
+ instance_key]["BASE_URI"]
uri = remote_base_uri.rstrip("/") + "/" + rel_uri.lstrip("/")
else:
uri = settings["PORTAGE_BINHOST"].rstrip("/") + \
@@ -128,7 +131,9 @@ class BinpkgFetcher(SpawnProcess):
# the fetcher didn't already do it automatically.
bintree = self.pkg.root_config.trees["bintree"]
if bintree._remote_has_index:
- remote_mtime = bintree._remotepkgs[self.pkg.cpv].get("MTIME")
+ remote_mtime = bintree._remotepkgs[
+ bintree.dbapi._instance_key(
+ self.pkg.cpv)].get("MTIME")
if remote_mtime is not None:
try:
remote_mtime = long(remote_mtime)
diff --git a/pym/_emerge/BinpkgPrefetcher.py b/pym/_emerge/BinpkgPrefetcher.py
index ffa4900..7ca8970 100644
--- a/pym/_emerge/BinpkgPrefetcher.py
+++ b/pym/_emerge/BinpkgPrefetcher.py
@@ -27,7 +27,7 @@ class BinpkgPrefetcher(CompositeTask):
verifier = BinpkgVerifier(background=self.background,
logfile=self.scheduler.fetch.log_file, pkg=self.pkg,
- scheduler=self.scheduler)
+ scheduler=self.scheduler, _pkg_path=self.pkg_path)
self._start_task(verifier, self._verifier_exit)
def _verifier_exit(self, verifier):
diff --git a/pym/_emerge/BinpkgVerifier.py b/pym/_emerge/BinpkgVerifier.py
index 2c69792..7a6d15e 100644
--- a/pym/_emerge/BinpkgVerifier.py
+++ b/pym/_emerge/BinpkgVerifier.py
@@ -33,7 +33,6 @@ class BinpkgVerifier(CompositeTask):
digests = _apply_hash_filter(digests, hash_filter)
self._digests = digests
- self._pkg_path = bintree.getname(self.pkg.cpv)
try:
size = os.stat(self._pkg_path).st_size
@@ -90,8 +89,11 @@ class BinpkgVerifier(CompositeTask):
if portage.output.havecolor:
portage.output.havecolor = not self.background
+ path = self._pkg_path
+ if path.endswith(".partial"):
+ path = path[:-len(".partial")]
eout = EOutput()
- eout.ebegin("%s %s ;-)" % (os.path.basename(self._pkg_path),
+ eout.ebegin("%s %s ;-)" % (os.path.basename(path),
" ".join(sorted(self._digests))))
eout.eend(0)
diff --git a/pym/_emerge/EbuildBinpkg.py b/pym/_emerge/EbuildBinpkg.py
index 34a6aef..6e098eb 100644
--- a/pym/_emerge/EbuildBinpkg.py
+++ b/pym/_emerge/EbuildBinpkg.py
@@ -10,13 +10,12 @@ class EbuildBinpkg(CompositeTask):
This assumes that src_install() has successfully completed.
"""
__slots__ = ('pkg', 'settings') + \
- ('_binpkg_tmpfile',)
+ ('_binpkg_tmpfile', '_binpkg_info')
def _start(self):
pkg = self.pkg
root_config = pkg.root_config
bintree = root_config.trees["bintree"]
- bintree.prevent_collision(pkg.cpv)
binpkg_tmpfile = os.path.join(bintree.pkgdir,
pkg.cpv + ".tbz2." + str(os.getpid()))
bintree._ensure_dir(os.path.dirname(binpkg_tmpfile))
@@ -43,8 +42,12 @@ class EbuildBinpkg(CompositeTask):
pkg = self.pkg
bintree = pkg.root_config.trees["bintree"]
- bintree.inject(pkg.cpv, filename=self._binpkg_tmpfile)
+ self._binpkg_info = bintree.inject(pkg.cpv,
+ filename=self._binpkg_tmpfile)
self._current_task = None
self.returncode = os.EX_OK
self.wait()
+
+ def get_binpkg_info(self):
+ return self._binpkg_info
diff --git a/pym/_emerge/EbuildBuild.py b/pym/_emerge/EbuildBuild.py
index b5b1e87..0e98602 100644
--- a/pym/_emerge/EbuildBuild.py
+++ b/pym/_emerge/EbuildBuild.py
@@ -1,6 +1,10 @@
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+from __future__ import unicode_literals
+
+import io
+
import _emerge.emergelog
from _emerge.EbuildExecuter import EbuildExecuter
from _emerge.EbuildPhase import EbuildPhase
@@ -15,7 +19,7 @@ from _emerge.TaskSequence import TaskSequence
from portage.util import writemsg
import portage
-from portage import os
+from portage import _encodings, _unicode_decode, _unicode_encode, os
from portage.output import colorize
from portage.package.ebuild.digestcheck import digestcheck
from portage.package.ebuild.digestgen import digestgen
@@ -317,9 +321,13 @@ class EbuildBuild(CompositeTask):
phase="rpm", scheduler=self.scheduler,
settings=self.settings))
else:
- binpkg_tasks.add(EbuildBinpkg(background=self.background,
+ task = EbuildBinpkg(
+ background=self.background,
pkg=self.pkg, scheduler=self.scheduler,
- settings=self.settings))
+ settings=self.settings)
+ binpkg_tasks.add(task)
+ task.addExitListener(
+ self._record_binpkg_info)
if binpkg_tasks:
self._start_task(binpkg_tasks, self._buildpkg_exit)
@@ -356,6 +364,28 @@ class EbuildBuild(CompositeTask):
self.returncode = packager.returncode
self.wait()
+ def _record_binpkg_info(self, task):
+ if task.returncode != os.EX_OK:
+ return
+
+ # Save info about the created binary package, so that
+ # identifying information can be passed to the install
+ # task, to be recorded in the installed package database.
+ pkg = task.get_binpkg_info()
+ infoloc = os.path.join(self.settings["PORTAGE_BUILDDIR"],
+ "build-info")
+ info = {
+ "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+ }
+ if pkg.build_id is not None:
+ info["BUILD_ID"] = "%s\n" % pkg.build_id
+ for k, v in info.items():
+ with io.open(_unicode_encode(os.path.join(infoloc, k),
+ encoding=_encodings['fs'], errors='strict'),
+ mode='w', encoding=_encodings['repo.content'],
+ errors='strict') as f:
+ f.write(v)
+
def _buildpkgonly_success_hook_exit(self, success_hooks):
self._default_exit(success_hooks)
self.returncode = None
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index 975335d..2c1a116 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -219,6 +219,8 @@ class Package(Task):
else:
raise TypeError("root_config argument is required")
+ elements = [type_name, root, _unicode(cpv), operation]
+
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
@@ -229,14 +231,22 @@ class Package(Task):
raise AssertionError(
"Package._gen_hash_key() " + \
"called without 'repo_name' argument")
- repo_key = repo_name
+ elements.append(repo_name)
+ elif type_name == "binary":
+ # Including a variety of fingerprints in the hash makes
+ # it possible to simultaneously consider multiple similar
+ # packages. Note that digests are not included here, since
+ # they are relatively expensive to compute, and they may
+ # not necessarily be available.
+ elements.extend([cpv.build_id, cpv.file_size,
+ cpv.build_time, cpv.mtime])
else:
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
- repo_key = type_name
+ elements.append(type_name)
- return (type_name, root, _unicode(cpv), operation, repo_key)
+ return tuple(elements)
def _validate_deps(self):
"""
diff --git a/pym/_emerge/Scheduler.py b/pym/_emerge/Scheduler.py
index 6e3bf1a..6b39e3b 100644
--- a/pym/_emerge/Scheduler.py
+++ b/pym/_emerge/Scheduler.py
@@ -862,8 +862,12 @@ class Scheduler(PollScheduler):
continue
fetched = fetcher.pkg_path
+ if fetched is False:
+ filename = bintree.getname(x.cpv)
+ else:
+ filename = fetched
verifier = BinpkgVerifier(pkg=x,
- scheduler=sched_iface)
+ scheduler=sched_iface, _pkg_path=filename)
current_task = verifier
verifier.start()
if verifier.wait() != os.EX_OK:
diff --git a/pym/_emerge/clear_caches.py b/pym/_emerge/clear_caches.py
index 513df62..cb0db10 100644
--- a/pym/_emerge/clear_caches.py
+++ b/pym/_emerge/clear_caches.py
@@ -7,7 +7,6 @@ def clear_caches(trees):
for d in trees.values():
d["porttree"].dbapi.melt()
d["porttree"].dbapi._aux_cache.clear()
- d["bintree"].dbapi._aux_cache.clear()
d["bintree"].dbapi._clear_cache()
if d["vartree"].dbapi._linkmap is None:
# preserve-libs is entirely disabled
diff --git a/pym/portage/const.py b/pym/portage/const.py
index febdb4a..c7ecda2 100644
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@ -122,6 +122,7 @@ EBUILD_PHASES = (
SUPPORTED_FEATURES = frozenset([
"assume-digests",
"binpkg-logs",
+ "binpkg-multi-instance",
"buildpkg",
"buildsyspkg",
"candy",
@@ -268,6 +269,7 @@ LIVE_ECLASSES = frozenset([
])
SUPPORTED_BINPKG_FORMATS = ("tar", "rpm")
+SUPPORTED_XPAK_EXTENSIONS = (".tbz2", ".xpak")
# Time formats used in various places like metadata.chk.
TIMESTAMP_FORMAT = "%a, %d %b %Y %H:%M:%S +0000" # to be used with time.gmtime()
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index cd30b67..460b9f7 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -17,14 +17,13 @@ portage.proxy.lazyimport.lazyimport(globals(),
'portage.update:update_dbentries',
'portage.util:atomic_ofstream,ensure_dirs,normalize_path,' + \
'writemsg,writemsg_stdout',
- 'portage.util.listdir:listdir',
'portage.util.path:first_existing',
'portage.util._urlopen:urlopen@_urlopen',
'portage.versions:best,catpkgsplit,catsplit,_pkg_str',
)
from portage.cache.mappings import slot_dict_class
-from portage.const import CACHE_PATH
+from portage.const import CACHE_PATH, SUPPORTED_XPAK_EXTENSIONS
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
@@ -71,18 +70,26 @@ class bindbapi(fakedbapi):
_known_keys = frozenset(list(fakedbapi._known_keys) + \
["CHOST", "repository", "USE"])
def __init__(self, mybintree=None, **kwargs):
- fakedbapi.__init__(self, **kwargs)
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False,
+ multi_instance=True, **kwargs)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
- self.cpvdict={}
- self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE",
- "DEFINED_PHASES", "PROVIDES", "REQUIRES"
+ ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE", "_mtime_"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@ -109,33 +116,49 @@ class bindbapi(fakedbapi):
return fakedbapi.cpv_exists(self, cpv)
def cpv_inject(self, cpv, **kwargs):
- self._aux_cache.pop(cpv, None)
- fakedbapi.cpv_inject(self, cpv, **kwargs)
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_inject(self, cpv,
+ metadata=cpv._metadata, **kwargs)
def cpv_remove(self, cpv):
- self._aux_cache.pop(cpv, None)
+ if not self.bintree.populated:
+ self.bintree.populate()
fakedbapi.cpv_remove(self, cpv)
def aux_get(self, mycpv, wants, myrepo=None):
if self.bintree and not self.bintree.populated:
self.bintree.populate()
- cache_me = False
+ # Support plain string for backward compatibility with API
+ # consumers (including portageq, which passes in a cpv from
+ # a command-line argument).
+ instance_key = self._instance_key(mycpv,
+ support_string=True)
if not self._known_keys.intersection(
wants).difference(self._aux_cache_keys):
- aux_cache = self._aux_cache.get(mycpv)
+ aux_cache = self.cpvdict[instance_key]
if aux_cache is not None:
return [aux_cache.get(x, "") for x in wants]
- cache_me = True
mysplit = mycpv.split("/")
mylist = []
tbz2name = mysplit[1]+".tbz2"
if not self.bintree._remotepkgs or \
not self.bintree.isremote(mycpv):
- tbz2_path = self.bintree.getname(mycpv)
- if not os.path.exists(tbz2_path):
+ try:
+ tbz2_path = self.bintree._pkg_paths[instance_key]
+ except KeyError:
+ raise KeyError(mycpv)
+ tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+ try:
+ st = os.lstat(tbz2_path)
+ except OSError:
raise KeyError(mycpv)
metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
def getitem(k):
+ if k == "_mtime_":
+ return _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ return _unicode(st.st_size)
v = metadata_bytes.get(_unicode_encode(k,
encoding=_encodings['repo.content'],
errors='backslashreplace'))
@@ -144,11 +167,9 @@ class bindbapi(fakedbapi):
encoding=_encodings['repo.content'], errors='replace')
return v
else:
- getitem = self.bintree._remotepkgs[mycpv].get
+ getitem = self.cpvdict[instance_key].get
mydata = {}
mykeys = wants
- if cache_me:
- mykeys = self._aux_cache_keys.union(wants)
for x in mykeys:
myval = getitem(x)
# myval is None if the key doesn't exist
@@ -159,16 +180,24 @@ class bindbapi(fakedbapi):
if not mydata.setdefault('EAPI', '0'):
mydata['EAPI'] = '0'
- if cache_me:
- aux_cache = self._aux_cache_slot_dict()
- for x in self._aux_cache_keys:
- aux_cache[x] = mydata.get(x, '')
- self._aux_cache[mycpv] = aux_cache
return [mydata.get(x, '') for x in wants]
def aux_update(self, cpv, values):
if not self.bintree.populated:
self.bintree.populate()
+ build_id = None
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ if self.bintree._multi_instance:
+ # The cpv.build_id attribute is required if we are in
+ # multi-instance mode, since otherwise we won't know
+ # which instance to update.
+ raise
+ else:
+ cpv = self._instance_key(cpv, support_string=True)[0]
+ build_id = cpv.build_id
+
tbz2path = self.bintree.getname(cpv)
if not os.path.exists(tbz2path):
raise KeyError(cpv)
@@ -187,7 +216,7 @@ class bindbapi(fakedbapi):
del mydata[k]
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
# inject will clear stale caches via cpv_inject.
- self.bintree.inject(cpv)
+ self.bintree.inject(cpv, filename=tbz2path)
def cp_list(self, *pargs, **kwargs):
if not self.bintree.populated:
@@ -219,7 +248,7 @@ class bindbapi(fakedbapi):
if not self.bintree.isremote(pkg):
pass
else:
- metadata = self.bintree._remotepkgs[pkg]
+ metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
try:
size = int(metadata["SIZE"])
except KeyError:
@@ -300,6 +329,13 @@ class binarytree(object):
if True:
self.pkgdir = normalize_path(pkgdir)
+ # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = ("binpkg-multi-instance" in
+ settings.features)
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
self.dbapi = bindbapi(self, settings=settings)
self.update_ents = self.dbapi.update_ents
self.move_slot_ent = self.dbapi.move_slot_ent
@@ -310,7 +346,6 @@ class binarytree(object):
self.invalids = []
self.settings = settings
self._pkg_paths = {}
- self._pkgindex_uri = {}
self._populating = False
self._all_directory = os.path.isdir(
os.path.join(self.pkgdir, "All"))
@@ -318,12 +353,14 @@ class binarytree(object):
self._pkgindex_hashes = ["MD5","SHA1"]
self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
+ self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI", "PROVIDES", "REQUIRES"]
+ ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
@@ -336,6 +373,7 @@ class binarytree(object):
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
"USE_EXPAND_UNPREFIXED"])
self._pkgindex_default_pkg_data = {
+ "BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
"DEPEND" : "",
@@ -365,6 +403,7 @@ class binarytree(object):
self._pkgindex_translated_keys = (
("DESCRIPTION" , "DESC"),
+ ("_mtime_" , "MTIME"),
("repository" , "REPO"),
)
@@ -455,16 +494,21 @@ class binarytree(object):
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
self.dbapi.cpv_remove(mycpv)
- del self._pkg_paths[mycpv]
+ del self._pkg_paths[self.dbapi._instance_key(mycpv)]
+ metadata = self.dbapi._aux_cache_slot_dict()
+ for k in self.dbapi._aux_cache_keys:
+ v = mydata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ mynewcpv = _pkg_str(mynewcpv, metadata=metadata)
new_path = self.getname(mynewcpv)
- self._pkg_paths[mynewcpv] = os.path.join(
+ self._pkg_paths[
+ self.dbapi._instance_key(mynewcpv)] = os.path.join(
*new_path.split(os.path.sep)[-2:])
if new_path != mytbz2:
self._ensure_dir(os.path.dirname(new_path))
_movefile(tbz2path, new_path, mysettings=self.settings)
- self._remove_symlink(mycpv)
- if new_path.split(os.path.sep)[-2] == "All":
- self._create_symlink(mynewcpv)
self.inject(mynewcpv)
return moves
@@ -645,55 +689,63 @@ class binarytree(object):
# prior to performing package moves since it only wants to
# operate on local packages (getbinpkgs=0).
self._remotepkgs = None
- self.dbapi._clear_cache()
- self.dbapi._aux_cache.clear()
+ self.dbapi.clear()
+ _instance_key = self.dbapi._instance_key
if True:
pkg_paths = {}
self._pkg_paths = pkg_paths
- dirs = listdir(self.pkgdir, dirsonly=True, EmptyOnError=True)
- if "All" in dirs:
- dirs.remove("All")
- dirs.sort()
- dirs.insert(0, "All")
+ dir_files = {}
+ for parent, dir_names, file_names in os.walk(self.pkgdir):
+ relative_parent = parent[len(self.pkgdir)+1:]
+ dir_files[relative_parent] = file_names
+
pkgindex = self._load_pkgindex()
- pf_index = None
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
header = pkgindex.header
metadata = {}
+ basename_index = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ path = d.get("PATH")
+ if not path:
+ path = cpv + ".tbz2"
+ basename = os.path.basename(path)
+ basename_index.setdefault(basename, []).append(d)
+
update_pkgindex = False
- for mydir in dirs:
- for myfile in listdir(os.path.join(self.pkgdir, mydir)):
- if not myfile.endswith(".tbz2"):
+ for mydir, file_names in dir_files.items():
+ try:
+ mydir = _unicode_decode(mydir,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ for myfile in file_names:
+ try:
+ myfile = _unicode_decode(myfile,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
continue
mypath = os.path.join(mydir, myfile)
full_path = os.path.join(self.pkgdir, mypath)
s = os.lstat(full_path)
- if stat.S_ISLNK(s.st_mode):
+
+ if not stat.S_ISREG(s.st_mode):
continue
# Validate data from the package index and try to avoid
# reading the xpak if possible.
- if mydir != "All":
- possibilities = None
- d = metadata.get(mydir+"/"+myfile[:-5])
- if d:
- possibilities = [d]
- else:
- if pf_index is None:
- pf_index = {}
- for mycpv in metadata:
- mycat, mypf = catsplit(mycpv)
- pf_index.setdefault(
- mypf, []).append(metadata[mycpv])
- possibilities = pf_index.get(myfile[:-5])
+ possibilities = basename_index.get(myfile)
if possibilities:
match = None
for d in possibilities:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
continue
except (KeyError, ValueError):
continue
@@ -707,15 +759,14 @@ class binarytree(object):
break
if match:
mycpv = match["CPV"]
- if mycpv in pkg_paths:
- # discard duplicates (All/ is preferred)
- continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ instance_key = _instance_key(mycpv)
+ pkg_paths[instance_key] = mypath
# update the path if the package has been moved
oldpath = d.get("PATH")
if oldpath and oldpath != mypath:
update_pkgindex = True
+ # Omit PATH if it is the default path for
+ # the current Packages format version.
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
if not oldpath:
@@ -725,11 +776,6 @@ class binarytree(object):
if oldpath:
update_pkgindex = True
self.dbapi.cpv_inject(mycpv)
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
continue
if not os.access(full_path, os.R_OK):
writemsg(_("!!! Permission denied to read " \
@@ -737,13 +783,12 @@ class binarytree(object):
noiselevel=-1)
self.invalids.append(myfile[:-5])
continue
- metadata_bytes = portage.xpak.tbz2(full_path).get_data()
- mycat = _unicode_decode(metadata_bytes.get(b"CATEGORY", ""),
- encoding=_encodings['repo.content'], errors='replace')
- mypf = _unicode_decode(metadata_bytes.get(b"PF", ""),
- encoding=_encodings['repo.content'], errors='replace')
- slot = _unicode_decode(metadata_bytes.get(b"SLOT", ""),
- encoding=_encodings['repo.content'], errors='replace')
+ pkg_metadata = self._read_metadata(full_path, s,
+ keys=chain(self.dbapi._aux_cache_keys,
+ ("PF", "CATEGORY")))
+ mycat = pkg_metadata.get("CATEGORY", "")
+ mypf = pkg_metadata.get("PF", "")
+ slot = pkg_metadata.get("SLOT", "")
mypkg = myfile[:-5]
if not mycat or not mypf or not slot:
#old-style or corrupt package
@@ -767,16 +812,51 @@ class binarytree(object):
writemsg("!!! %s\n" % line, noiselevel=-1)
self.invalids.append(mypkg)
continue
- mycat = mycat.strip()
- slot = slot.strip()
- if mycat != mydir and mydir != "All":
+
+ multi_instance = False
+ invalid_name = False
+ build_id = None
+ if myfile.endswith(".xpak"):
+ multi_instance = True
+ build_id = self._parse_build_id(myfile)
+ if build_id < 1:
+ invalid_name = True
+ elif myfile != "%s-%s.xpak" % (
+ mypf, build_id):
+ invalid_name = True
+ else:
+ mypkg = mypkg[:-len(str(build_id))-1]
+ elif myfile != mypf + ".tbz2":
+ invalid_name = True
+
+ if invalid_name:
+ writemsg(_("\n!!! Binary package name is "
+ "invalid: '%s'\n") % full_path,
+ noiselevel=-1)
+ continue
+
+ if pkg_metadata.get("BUILD_ID"):
+ try:
+ build_id = long(pkg_metadata["BUILD_ID"])
+ except ValueError:
+ writemsg(_("!!! Binary package has "
+ "invalid BUILD_ID: '%s'\n") %
+ full_path, noiselevel=-1)
+ continue
+ else:
+ build_id = None
+
+ if multi_instance:
+ name_split = catpkgsplit("%s/%s" %
+ (mycat, mypf))
+ if (name_split is None or
+ tuple(catsplit(mydir)) != name_split[:2]):
+ continue
+ elif mycat != mydir and mydir != "All":
continue
if mypkg != mypf.strip():
continue
mycpv = mycat + "/" + mypkg
- if mycpv in pkg_paths:
- # All is first, so it's preferred.
- continue
if not self.dbapi._category_re.match(mycat):
writemsg(_("!!! Binary package has an " \
"unrecognized category: '%s'\n") % full_path,
@@ -786,14 +866,23 @@ class binarytree(object):
(mycpv, self.settings["PORTAGE_CONFIGROOT"]),
noiselevel=-1)
continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ if build_id is not None:
+ pkg_metadata["BUILD_ID"] = _unicode(build_id)
+ pkg_metadata["SIZE"] = _unicode(s.st_size)
+ # Discard items used only for validation above.
+ pkg_metadata.pop("CATEGORY")
+ pkg_metadata.pop("PF")
+ mycpv = _pkg_str(mycpv,
+ metadata=self.dbapi._aux_cache_slot_dict(
+ pkg_metadata))
+ pkg_paths[_instance_key(mycpv)] = mypath
self.dbapi.cpv_inject(mycpv)
update_pkgindex = True
- d = metadata.get(mycpv, {})
+ d = metadata.get(_instance_key(mycpv),
+ pkgindex._pkg_slot_dict())
if d:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
d.clear()
except (KeyError, ValueError):
d.clear()
@@ -804,36 +893,30 @@ class binarytree(object):
except (KeyError, ValueError):
d.clear()
+ for k in self._pkgindex_allowed_pkg_keys:
+ v = pkg_metadata.get(k)
+ if v is not None:
+ d[k] = v
d["CPV"] = mycpv
- d["SLOT"] = slot
- d["MTIME"] = _unicode(s[stat.ST_MTIME])
- d["SIZE"] = _unicode(s.st_size)
- d.update(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(mycpv, self._pkgindex_aux_keys)))
try:
self._eval_use_flags(mycpv, d)
except portage.exception.InvalidDependString:
writemsg(_("!!! Invalid binary package: '%s'\n") % \
self.getname(mycpv), noiselevel=-1)
self.dbapi.cpv_remove(mycpv)
- del pkg_paths[mycpv]
+ del pkg_paths[_instance_key(mycpv)]
# record location if it's non-default
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
else:
d.pop("PATH", None)
- metadata[mycpv] = d
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
+ metadata[_instance_key(mycpv)] = d
- for cpv in list(metadata):
- if cpv not in pkg_paths:
- del metadata[cpv]
+ for instance_key in list(metadata):
+ if instance_key not in pkg_paths:
+ del metadata[instance_key]
# Do not bother to write the Packages index if $PKGDIR/All/ exists
# since it will provide no benefit due to the need to read CATEGORY
@@ -1058,45 +1141,24 @@ class binarytree(object):
# The current user doesn't have permission to cache the
# file, but that's alright.
if pkgindex:
- # Organize remote package list as a cpv -> metadata map.
- remotepkgs = _pkgindex_cpv_map_latest_build(pkgindex)
remote_base_uri = pkgindex.header.get("URI", base_url)
- for cpv, remote_metadata in remotepkgs.items():
- remote_metadata["BASE_URI"] = remote_base_uri
- self._pkgindex_uri[cpv] = url
- self._remotepkgs.update(remotepkgs)
- self._remote_has_index = True
- for cpv in remotepkgs:
+ for d in pkgindex.packages:
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ instance_key = _instance_key(cpv)
+ # Local package instances override remote instances
+ # with the same instance_key.
+ if instance_key in metadata:
+ continue
+
+ d["CPV"] = cpv
+ d["BASE_URI"] = remote_base_uri
+ d["PKGINDEX_URI"] = url
+ self._remotepkgs[instance_key] = d
+ metadata[instance_key] = d
self.dbapi.cpv_inject(cpv)
- if True:
- # Remote package instances override local package
- # if they are not identical.
- hash_names = ["SIZE"] + self._pkgindex_hashes
- for cpv, local_metadata in metadata.items():
- remote_metadata = self._remotepkgs.get(cpv)
- if remote_metadata is None:
- continue
- # Use digests to compare identity.
- identical = True
- for hash_name in hash_names:
- local_value = local_metadata.get(hash_name)
- if local_value is None:
- continue
- remote_value = remote_metadata.get(hash_name)
- if remote_value is None:
- continue
- if local_value != remote_value:
- identical = False
- break
- if identical:
- del self._remotepkgs[cpv]
- else:
- # Override the local package in the aux_get cache.
- self.dbapi._aux_cache[cpv] = remote_metadata
- else:
- # Local package instances override remote instances.
- for cpv in metadata:
- self._remotepkgs.pop(cpv, None)
+
+ self._remote_has_index = True
self.populated=1
@@ -1108,7 +1170,8 @@ class binarytree(object):
@param filename: File path of the package to inject, or None if it's
already in the location returned by getname()
@type filename: string
- @rtype: None
+ @rtype: _pkg_str or None
+ @return: A _pkg_str instance on success, or None on failure.
"""
mycat, mypkg = catsplit(cpv)
if not self.populated:
@@ -1126,24 +1189,45 @@ class binarytree(object):
writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
noiselevel=-1)
return
- mytbz2 = portage.xpak.tbz2(full_path)
- slot = mytbz2.getfile("SLOT")
+ metadata = self._read_metadata(full_path, s)
+ slot = metadata.get("SLOT")
+ try:
+ self._eval_use_flags(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ slot = None
if slot is None:
writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
noiselevel=-1)
return
- slot = slot.strip()
- self.dbapi.cpv_inject(cpv)
+
+ fetched = False
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ build_id = None
+ else:
+ instance_key = self.dbapi._instance_key(cpv)
+ if instance_key in self.dbapi.cpvdict:
+ # This means we've been called by aux_update (or
+ # similar). The instance key typically changes (due to
+ # file modification), so we need to discard existing
+ # instance key references.
+ self.dbapi.cpv_remove(cpv)
+ self._pkg_paths.pop(instance_key, None)
+ if self._remotepkgs is not None:
+ fetched = self._remotepkgs.pop(instance_key, None)
+
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings)
# Reread the Packages index (in case it's been changed by another
# process) and then updated it, all while holding a lock.
pkgindex_lock = None
- created_symlink = False
try:
pkgindex_lock = lockfile(self._pkgindex_file,
wantnewlockfile=1)
if filename is not None:
- new_filename = self.getname(cpv)
+ new_filename = self.getname(cpv,
+ allocate_new=(build_id is None))
try:
samefile = os.path.samefile(filename, new_filename)
except OSError:
@@ -1153,54 +1237,31 @@ class binarytree(object):
_movefile(filename, new_filename, mysettings=self.settings)
full_path = new_filename
- self._file_permissions(full_path)
+ basename = os.path.basename(full_path)
+ pf = catsplit(cpv)[1]
+ if (build_id is None and not fetched and
+ basename.endswith(".xpak")):
+ # Apply the newly assigned BUILD_ID. This is intended
+ # to occur only for locally built packages. If the
+ # package was fetched, we want to preserve its
+ # attributes, so that we can later distinguish that it
+ # is identical to its remote counterpart.
+ build_id = self._parse_build_id(basename)
+ metadata["BUILD_ID"] = _unicode(build_id)
+ cpv = _pkg_str(cpv, metadata=metadata,
+ settings=self.settings)
+ binpkg = portage.xpak.tbz2(full_path)
+ binary_data = binpkg.get_data()
+ binary_data[b"BUILD_ID"] = _unicode_encode(
+ metadata["BUILD_ID"])
+ binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
- if self._all_directory and \
- self.getname(cpv).split(os.path.sep)[-2] == "All":
- self._create_symlink(cpv)
- created_symlink = True
+ self._file_permissions(full_path)
pkgindex = self._load_pkgindex()
-
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
- # Discard remote metadata to ensure that _pkgindex_entry
- # gets the local metadata. This also updates state for future
- # isremote calls.
- if self._remotepkgs is not None:
- self._remotepkgs.pop(cpv, None)
-
- # Discard cached metadata to ensure that _pkgindex_entry
- # doesn't return stale metadata.
- self.dbapi._aux_cache.pop(cpv, None)
-
- try:
- d = self._pkgindex_entry(cpv)
- except portage.exception.InvalidDependString:
- writemsg(_("!!! Invalid binary package: '%s'\n") % \
- self.getname(cpv), noiselevel=-1)
- self.dbapi.cpv_remove(cpv)
- del self._pkg_paths[cpv]
- return
-
- # If found, remove package(s) with duplicate path.
- path = d.get("PATH", "")
- for i in range(len(pkgindex.packages) - 1, -1, -1):
- d2 = pkgindex.packages[i]
- if path and path == d2.get("PATH"):
- # Handle path collisions in $PKGDIR/All
- # when CPV is not identical.
- del pkgindex.packages[i]
- elif cpv == d2.get("CPV"):
- if path == d2.get("PATH", ""):
- del pkgindex.packages[i]
- elif created_symlink and not d2.get("PATH", ""):
- # Delete entry for the package that was just
- # overwritten by a symlink to this package.
- del pkgindex.packages[i]
-
- pkgindex.packages.append(d)
-
+ d = self._inject_file(pkgindex, cpv, full_path)
self._update_pkgindex_header(pkgindex.header)
self._pkgindex_write(pkgindex)
@@ -1208,6 +1269,73 @@ class binarytree(object):
if pkgindex_lock:
unlockfile(pkgindex_lock)
+ # This is used to record BINPKGMD5 in the installed package
+ # database, for a package that has just been built.
+ cpv._metadata["MD5"] = d["MD5"]
+
+ return cpv
+
+ def _read_metadata(self, filename, st, keys=None):
+ if keys is None:
+ keys = self.dbapi._aux_cache_keys
+ metadata = self.dbapi._aux_cache_slot_dict()
+ else:
+ metadata = {}
+ binary_metadata = portage.xpak.tbz2(filename).get_data()
+ for k in keys:
+ if k == "_mtime_":
+ metadata[k] = _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ metadata[k] = _unicode(st.st_size)
+ else:
+ v = binary_metadata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ metadata.setdefault("EAPI", "0")
+ return metadata
+
+ def _inject_file(self, pkgindex, cpv, filename):
+ """
+ Add a package to internal data structures, and add an
+ entry to the given pkgindex.
+ @param pkgindex: The PackageIndex instance to which an entry
+ will be added.
+ @type pkgindex: PackageIndex
+ @param cpv: A _pkg_str instance corresponding to the package
+ being injected.
+ @type cpv: _pkg_str
+ @param filename: Absolute file path of the package to inject.
+ @type filename: string
+ @rtype: dict
+ @return: A dict corresponding to the new entry which has been
+ added to pkgindex. This may be used to access the checksums
+ which have just been generated.
+ """
+ # Update state for future isremote calls.
+ instance_key = self.dbapi._instance_key(cpv)
+ if self._remotepkgs is not None:
+ self._remotepkgs.pop(instance_key, None)
+
+ self.dbapi.cpv_inject(cpv)
+ self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
+ d = self._pkgindex_entry(cpv)
+
+ # If found, remove package(s) with duplicate path.
+ path = d.get("PATH", "")
+ for i in range(len(pkgindex.packages) - 1, -1, -1):
+ d2 = pkgindex.packages[i]
+ if path and path == d2.get("PATH"):
+ # Handle path collisions in $PKGDIR/All
+ # when CPV is not identical.
+ del pkgindex.packages[i]
+ elif cpv == d2.get("CPV"):
+ if path == d2.get("PATH", ""):
+ del pkgindex.packages[i]
+
+ pkgindex.packages.append(d)
+ return d
+
def _pkgindex_write(self, pkgindex):
contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
pkgindex.write(contents)
@@ -1233,7 +1361,7 @@ class binarytree(object):
def _pkgindex_entry(self, cpv):
"""
- Performs checksums and evaluates USE flag conditionals.
+ Performs checksums, and gets size and mtime via lstat.
Raises InvalidDependString if necessary.
@rtype: dict
@return: a dict containing entry for the give cpv.
@@ -1241,23 +1369,20 @@ class binarytree(object):
pkg_path = self.getname(cpv)
- d = dict(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(cpv, self._pkgindex_aux_keys)))
-
+ d = dict(cpv._metadata.items())
d.update(perform_multiple_checksums(
pkg_path, hashes=self._pkgindex_hashes))
d["CPV"] = cpv
- st = os.stat(pkg_path)
- d["MTIME"] = _unicode(st[stat.ST_MTIME])
+ st = os.lstat(pkg_path)
+ d["_mtime_"] = _unicode(st[stat.ST_MTIME])
d["SIZE"] = _unicode(st.st_size)
- rel_path = self._pkg_paths[cpv]
+ rel_path = pkg_path[len(self.pkgdir)+1:]
# record location if it's non-default
if rel_path != cpv + ".tbz2":
d["PATH"] = rel_path
- self._eval_use_flags(cpv, d)
return d
def _new_pkgindex(self):
@@ -1311,15 +1436,17 @@ class binarytree(object):
return False
def _eval_use_flags(self, cpv, metadata):
- use = frozenset(metadata["USE"].split())
+ use = frozenset(metadata.get("USE", "").split())
for k in self._pkgindex_use_evaluated_keys:
if k.endswith('DEPEND'):
token_class = Atom
else:
token_class = None
+ deps = metadata.get(k)
+ if deps is None:
+ continue
try:
- deps = metadata[k]
deps = use_reduce(deps, uselist=use, token_class=token_class)
deps = paren_enclose(deps)
except portage.exception.InvalidDependString as e:
@@ -1349,46 +1476,129 @@ class binarytree(object):
return ""
return mymatch
- def getname(self, pkgname):
- """Returns a file location for this package. The default location is
- ${PKGDIR}/All/${PF}.tbz2, but will be ${PKGDIR}/${CATEGORY}/${PF}.tbz2
- in the rare event of a collision. The prevent_collision() method can
- be called to ensure that ${PKGDIR}/All/${PF}.tbz2 is available for a
- specific cpv."""
+ def getname(self, cpv, allocate_new=None):
+ """Returns a file location for this package.
+ If cpv has both build_time and build_id attributes, then the
+ path to the specific corresponding instance is returned.
+ Otherwise, allocate a new path and return that. When allocating
+ a new path, behavior depends on the binpkg-multi-instance
+ FEATURES setting.
+ """
if not self.populated:
self.populate()
- mycpv = pkgname
- mypath = self._pkg_paths.get(mycpv, None)
- if mypath:
- return os.path.join(self.pkgdir, mypath)
- mycat, mypkg = catsplit(mycpv)
- if self._all_directory:
- mypath = os.path.join("All", mypkg + ".tbz2")
- if mypath in self._pkg_paths.values():
- mypath = os.path.join(mycat, mypkg + ".tbz2")
+
+ try:
+ cpv.cp
+ except AttributeError:
+ cpv = _pkg_str(cpv)
+
+ filename = None
+ if allocate_new:
+ filename = self._allocate_filename(cpv)
+ elif self._is_specific_instance(cpv):
+ instance_key = self.dbapi._instance_key(cpv)
+ path = self._pkg_paths.get(instance_key)
+ if path is not None:
+ filename = os.path.join(self.pkgdir, path)
+
+ if filename is None and not allocate_new:
+ try:
+ instance_key = self.dbapi._instance_key(cpv,
+ support_string=True)
+ except KeyError:
+ pass
+ else:
+ filename = self._pkg_paths.get(instance_key)
+ if filename is not None:
+ filename = os.path.join(self.pkgdir, filename)
+
+ if filename is None:
+ if self._multi_instance:
+ pf = catsplit(cpv)[1]
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), "1")
+ else:
+ filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ return filename
+
+ def _is_specific_instance(self, cpv):
+ specific = True
+ try:
+ build_time = cpv.build_time
+ build_id = cpv.build_id
+ except AttributeError:
+ specific = False
else:
- mypath = os.path.join(mycat, mypkg + ".tbz2")
- self._pkg_paths[mycpv] = mypath # cache for future lookups
- return os.path.join(self.pkgdir, mypath)
+ if build_time is None or build_id is None:
+ specific = False
+ return specific
+
+ def _max_build_id(self, cpv):
+ max_build_id = 0
+ for x in self.dbapi.cp_list(cpv.cp):
+ if (x == cpv and x.build_id is not None and
+ x.build_id > max_build_id):
+ max_build_id = x.build_id
+ return max_build_id
+
+ def _allocate_filename(self, cpv):
+ return os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ def _allocate_filename_multi(self, cpv):
+
+ # First, get the max build_id found when _populate was
+ # called.
+ max_build_id = self._max_build_id(cpv)
+
+ # A new package may have been added concurrently since the
+ # last _populate call, so increment build_id until
+ # we locate an unused id.
+ pf = catsplit(cpv)[1]
+ build_id = max_build_id + 1
+
+ while True:
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+ if os.path.exists(filename):
+ build_id += 1
+ else:
+ return filename
+
+ @staticmethod
+ def _parse_build_id(filename):
+ build_id = -1
+ hyphen = filename.rfind("-", 0, -6)
+ if hyphen != -1:
+ build_id = filename[hyphen+1:-5]
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ pass
+ return build_id
def isremote(self, pkgname):
"""Returns true if the package is kept remotely and it has not been
downloaded (or it is only partially downloaded)."""
- if self._remotepkgs is None or pkgname not in self._remotepkgs:
+ if (self._remotepkgs is None or
+ self.dbapi._instance_key(pkgname) not in self._remotepkgs):
return False
# Presence in self._remotepkgs implies that it's remote. When a
# package is downloaded, state is updated by self.inject().
return True
- def get_pkgindex_uri(self, pkgname):
+ def get_pkgindex_uri(self, cpv):
"""Returns the URI to the Packages file for a given package."""
- return self._pkgindex_uri.get(pkgname)
-
-
+ uri = None
+ metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+ if metadata is not None:
+ uri = metadata["PKGINDEX_URI"]
+ return uri
def gettbz2(self, pkgname):
"""Fetches the package from a remote site, if necessary. Attempts to
resume if the file appears to be partially downloaded."""
+ instance_key = self.dbapi._instance_key(pkgname)
tbz2_path = self.getname(pkgname)
tbz2name = os.path.basename(tbz2_path)
resume = False
@@ -1404,10 +1614,10 @@ class binarytree(object):
self._ensure_dir(mydest)
# urljoin doesn't work correctly with unrecognized protocols like sftp
if self._remote_has_index:
- rel_url = self._remotepkgs[pkgname].get("PATH")
+ rel_url = self._remotepkgs[instance_key].get("PATH")
if not rel_url:
rel_url = pkgname+".tbz2"
- remote_base_uri = self._remotepkgs[pkgname]["BASE_URI"]
+ remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
else:
url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
@@ -1450,15 +1660,19 @@ class binarytree(object):
except AttributeError:
cpv = pkg
+ _instance_key = self.dbapi._instance_key
+ instance_key = _instance_key(cpv)
digests = {}
- metadata = None
- if self._remotepkgs is None or cpv not in self._remotepkgs:
+ metadata = (None if self._remotepkgs is None else
+ self._remotepkgs.get(instance_key))
+ if metadata is None:
for d in self._load_pkgindex().packages:
- if d["CPV"] == cpv:
+ if (d["CPV"] == cpv and
+ instance_key == _instance_key(_pkg_str(d["CPV"],
+ metadata=d, settings=self.settings))):
metadata = d
break
- else:
- metadata = self._remotepkgs[cpv]
+
if metadata is None:
return digests
diff --git a/pym/portage/emaint/modules/binhost/binhost.py b/pym/portage/emaint/modules/binhost/binhost.py
index 1138a8c..cf1213e 100644
--- a/pym/portage/emaint/modules/binhost/binhost.py
+++ b/pym/portage/emaint/modules/binhost/binhost.py
@@ -7,6 +7,7 @@ import stat
import portage
from portage import os
from portage.util import writemsg
+from portage.versions import _pkg_str
import sys
@@ -38,7 +39,7 @@ class BinhostHandler(object):
if size is None:
return True
- mtime = data.get("MTIME")
+ mtime = data.get("_mtime_")
if mtime is None:
return True
@@ -90,6 +91,7 @@ class BinhostHandler(object):
def fix(self, **kwargs):
onProgress = kwargs.get('onProgress', None)
bintree = self._bintree
+ _instance_key = bintree.dbapi._instance_key
cpv_all = self._bintree.dbapi.cpv_all()
cpv_all.sort()
missing = []
@@ -98,16 +100,21 @@ class BinhostHandler(object):
onProgress(maxval, 0)
pkgindex = self._pkgindex
missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
- stale = set(metadata).difference(cpv_all)
if missing or stale:
from portage import locks
pkgindex_lock = locks.lockfile(
@@ -121,31 +128,39 @@ class BinhostHandler(object):
pkgindex = bintree._load_pkgindex()
self._pkgindex = pkgindex
+ # Recount stale/missing packages, with lock held.
+ missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- # Recount missing packages, with lock held.
- del missing[:]
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
maxval = len(missing)
for i, cpv in enumerate(missing):
+ d = bintree._pkgindex_entry(cpv)
try:
- metadata[cpv] = bintree._pkgindex_entry(cpv)
+ bintree._eval_use_flags(cpv, d)
except portage.exception.InvalidDependString:
writemsg("!!! Invalid binary package: '%s'\n" % \
bintree.getname(cpv), noiselevel=-1)
+ else:
+ metadata[_instance_key(cpv)] = d
if onProgress:
onProgress(maxval, i+1)
- for cpv in set(metadata).difference(
- self._bintree.dbapi.cpv_all()):
- del metadata[cpv]
+ for cpv in stale:
+ del metadata[_instance_key(cpv)]
# We've updated the pkgindex, so set it to
# repopulate when necessary.
--
2.0.5
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [gentoo-portage-dev] [PATCH 3/7 v3] binpkg-multi-instance 3 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 3/7] binpkg-multi-instance 3 " Zac Medico
2015-02-27 21:58 ` [gentoo-portage-dev] [PATCH 3/7 v2] " Zac Medico
@ 2015-02-27 23:36 ` Zac Medico
1 sibling, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-27 23:36 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
FEATURES=binpkg-multi-instance causes an integer build-id to be
associated with each binary package instance. Inclusion of the build-id
in the file name of the binary package file makes it possible to store
an arbitrary number of binary packages built from the same ebuild.
Having multiple instances is useful for a number of purposes, such as
retaining builds that were built with different USE flags or linked
against different versions of libraries. The location of any particular
package within PKGDIR can be expressed as follows:
${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
The build-id starts at 1 for the first build of a particular ebuild,
and is incremented by 1 for each new build. It is possible to share a
writable PKGDIR over NFS, and locking ensures that each package added
to PKGDIR will have a unique build-id. It is not necessary to migrate
an existing PKGDIR to the new layout, since portage is capable of
working with a mixed PKGDIR layout, where packages using the old layout
are allowed to remain in place.
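As an editorial illustration only (not part of the patch, all names hypothetical), the layout and build-id allocation described above amount to roughly the following Python; the real code additionally holds a lock on ${PKGDIR}/Packages so that concurrent writers, including NFS clients, still receive unique build-ids:

    import os

    def binpkg_path(pkgdir, category, pn, pf, build_id):
        # ${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
        return os.path.join(pkgdir, category, pn,
            "%s-%s.xpak" % (pf, build_id))

    def allocate_build_id(pkgdir, category, pn, pf):
        # The first build of an ebuild gets build-id 1; each new build
        # increments until an unused file name is found.
        build_id = 1
        while os.path.exists(binpkg_path(pkgdir, category, pn, pf, build_id)):
            build_id += 1
        return build_id

    # For example, the second build of app-misc/foo-1.0 would live at
    # ${PKGDIR}/app-misc/foo/foo-1.0-2.xpak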
The new PKGDIR layout is backward-compatible with binhost clients
running older portage, since the file format is identical, the
per-package PATH attribute in the 'Packages' index directs them to
download the file from the correct URI, and they automatically use
BUILD_TIME metadata to select the latest builds.
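For illustration only, a 'Packages' index entry for such a package might look roughly like this (values are made up and unrelated fields are omitted); older clients simply follow PATH and prefer the entry with the newest BUILD_TIME:

    BUILD_ID: 2
    BUILD_TIME: 1424563742
    CPV: app-misc/foo-1.0
    PATH: app-misc/foo/foo-1.0-2.xpak
    SIZE: 102400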
There is currently no automated way to prune old builds from PKGDIR,
although it is possible to remove packages manually, and then run
'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
---
PATCH 3/7 v3 fixes a couple of issues reported by David James:
* Fix depgraph to exhaustively search for a binary package with the
desired USE settings
* Fix binarytree.inject to preserve multiple package instances with
the same BUILD_ID (or even missing BUILD_ID). This is useful if the
client has FEATURES=binpkg-multi-instance enabled, in order to
preserve multiple instances from multiple binhosts that do not have
FEATURES=binpkg-multi-instance enabled. In this case, the number
in the local binpkg file name does not have to correspond to the
BUILD_ID metadata in the package.
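As a small sketch of that last point (not part of the patch), the numeric suffix of a multi-instance file name can be recovered along the lines of the _parse_build_id() helper this series adds to bintree.py; invalid names yield -1, and for fetched packages the recovered number need not match the BUILD_ID recorded in the package's metadata:

    def parse_build_id(filename):
        # Recover N from a "${PF}-${N}.xpak" file name; -1 if absent/invalid.
        if not filename.endswith(".xpak"):
            return -1
        base = filename[:-len(".xpak")]
        hyphen = base.rfind("-")
        if hyphen == -1:
            return -1
        try:
            return int(base[hyphen + 1:])
        except ValueError:
            return -1

    # parse_build_id("foo-1.0-3.xpak") == 3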
bin/quickpkg | 1 -
man/make.conf.5 | 27 +
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgPrefetcher.py | 2 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/Package.py | 16 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/clear_caches.py | 1 -
pym/_emerge/depgraph.py | 16 +-
pym/portage/const.py | 2 +
pym/portage/dbapi/bintree.py | 683 +++++++++++++++++---------
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
15 files changed, 613 insertions(+), 285 deletions(-)
diff --git a/bin/quickpkg b/bin/quickpkg
index 2c69a69..8b71c3e 100755
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@ -63,7 +63,6 @@ def quickpkg_atom(options, infos, arg, eout):
pkgs_for_arg = 0
for cpv in matches:
excluded_config_files = []
- bintree.prevent_collision(cpv)
dblnk = vardb._dblink(cpv)
have_lock = False
diff --git a/man/make.conf.5 b/man/make.conf.5
index cd1ae21..1b71b97 100644
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@ -256,6 +256,33 @@ has a \fB\-\-force\fR option that can be used to force regeneration of digests.
Keep logs from successful binary package merges. This is relevant only when
\fBPORT_LOGDIR\fR is set.
.TP
+.B binpkg\-multi\-instance
+Enable support for multiple binary package instances per ebuild.
+Having multiple instances is useful for a number of purposes, such as
+retaining builds that were built with different USE flags or linked
+against different versions of libraries. The location of any particular
+package within PKGDIR can be expressed as follows:
+
+ ${PKGDIR}/${CATEGORY}/${PN}/${PF}\-${BUILD_ID}.xpak
+
+The build\-id starts at 1 for the first build of a particular ebuild,
+and is incremented by 1 for each new build. It is possible to share a
+writable PKGDIR over NFS, and locking ensures that each package added
+to PKGDIR will have a unique build\-id. It is not necessary to migrate
+an existing PKGDIR to the new layout, since portage is capable of
+working with a mixed PKGDIR layout, where packages using the old layout
+are allowed to remain in place.
+
+The new PKGDIR layout is backward\-compatible with binhost clients
+running older portage, since the file format is identical, the
+per\-package PATH attribute in the 'Packages' index directs them to
+download the file from the correct URI, and they automatically use
+BUILD_TIME metadata to select the latest builds.
+
+There is currently no automated way to prune old builds from PKGDIR,
+although it is possible to remove packages manually, and then run
+\(aqemaint \-\-fix binhost' to update the ${PKGDIR}/Packages index.
+.TP
.B buildpkg
Binary packages will be created for all packages that are merged. Also see
\fBquickpkg\fR(1) and \fBemerge\fR(1) \fB\-\-buildpkg\fR and
diff --git a/pym/_emerge/Binpkg.py b/pym/_emerge/Binpkg.py
index ded6dfd..7b7ae17 100644
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@ -121,16 +121,11 @@ class Binpkg(CompositeTask):
fetcher = BinpkgFetcher(background=self.background,
logfile=self.settings.get("PORTAGE_LOG_FILE"), pkg=self.pkg,
pretend=self.opts.pretend, scheduler=self.scheduler)
- pkg_path = fetcher.pkg_path
- self._pkg_path = pkg_path
- # This gives bashrc users an opportunity to do various things
- # such as remove binary packages after they're installed.
- self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
if self.opts.getbinpkg and self._bintree.isremote(pkg.cpv):
-
msg = " --- (%s of %s) Fetching Binary (%s::%s)" %\
- (pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg_path)
+ (pkg_count.curval, pkg_count.maxval, pkg.cpv,
+ fetcher.pkg_path)
short_msg = "emerge: (%s of %s) %s Fetch" % \
(pkg_count.curval, pkg_count.maxval, pkg.cpv)
self.logger.log(msg, short_msg=short_msg)
@@ -149,7 +144,7 @@ class Binpkg(CompositeTask):
# The fetcher only has a returncode when
# --getbinpkg is enabled.
if fetcher.returncode is not None:
- self._fetched_pkg = True
+ self._fetched_pkg = fetcher.pkg_path
if self._default_exit(fetcher) != os.EX_OK:
self._unlock_builddir()
self.wait()
@@ -163,9 +158,15 @@ class Binpkg(CompositeTask):
verifier = None
if self._verify:
+ if self._fetched_pkg:
+ path = self._fetched_pkg
+ else:
+ path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
logfile = self.settings.get("PORTAGE_LOG_FILE")
verifier = BinpkgVerifier(background=self.background,
- logfile=logfile, pkg=self.pkg, scheduler=self.scheduler)
+ logfile=logfile, pkg=self.pkg, scheduler=self.scheduler,
+ _pkg_path=path)
self._start_task(verifier, self._verifier_exit)
return
@@ -181,10 +182,20 @@ class Binpkg(CompositeTask):
logger = self.logger
pkg = self.pkg
pkg_count = self.pkg_count
- pkg_path = self._pkg_path
if self._fetched_pkg:
- self._bintree.inject(pkg.cpv, filename=pkg_path)
+ pkg_path = self._bintree.getname(
+ self._bintree.inject(pkg.cpv,
+ filename=self._fetched_pkg),
+ allocate_new=False)
+ else:
+ pkg_path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
+
+ # This gives bashrc users an opportunity to do various things
+ # such as remove binary packages after they're installed.
+ self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
+ self._pkg_path = pkg_path
logfile = self.settings.get("PORTAGE_LOG_FILE")
if logfile is not None and os.path.isfile(logfile):
diff --git a/pym/_emerge/BinpkgFetcher.py b/pym/_emerge/BinpkgFetcher.py
index 543881e..a7f2d44 100644
--- a/pym/_emerge/BinpkgFetcher.py
+++ b/pym/_emerge/BinpkgFetcher.py
@@ -24,7 +24,8 @@ class BinpkgFetcher(SpawnProcess):
def __init__(self, **kwargs):
SpawnProcess.__init__(self, **kwargs)
pkg = self.pkg
- self.pkg_path = pkg.root_config.trees["bintree"].getname(pkg.cpv)
+ self.pkg_path = pkg.root_config.trees["bintree"].getname(
+ pkg.cpv) + ".partial"
def _start(self):
@@ -51,10 +52,12 @@ class BinpkgFetcher(SpawnProcess):
# urljoin doesn't work correctly with
# unrecognized protocols like sftp
if bintree._remote_has_index:
- rel_uri = bintree._remotepkgs[pkg.cpv].get("PATH")
+ instance_key = bintree.dbapi._instance_key(pkg.cpv)
+ rel_uri = bintree._remotepkgs[instance_key].get("PATH")
if not rel_uri:
rel_uri = pkg.cpv + ".tbz2"
- remote_base_uri = bintree._remotepkgs[pkg.cpv]["BASE_URI"]
+ remote_base_uri = bintree._remotepkgs[
+ instance_key]["BASE_URI"]
uri = remote_base_uri.rstrip("/") + "/" + rel_uri.lstrip("/")
else:
uri = settings["PORTAGE_BINHOST"].rstrip("/") + \
@@ -128,7 +131,9 @@ class BinpkgFetcher(SpawnProcess):
# the fetcher didn't already do it automatically.
bintree = self.pkg.root_config.trees["bintree"]
if bintree._remote_has_index:
- remote_mtime = bintree._remotepkgs[self.pkg.cpv].get("MTIME")
+ remote_mtime = bintree._remotepkgs[
+ bintree.dbapi._instance_key(
+ self.pkg.cpv)].get("MTIME")
if remote_mtime is not None:
try:
remote_mtime = long(remote_mtime)
diff --git a/pym/_emerge/BinpkgPrefetcher.py b/pym/_emerge/BinpkgPrefetcher.py
index ffa4900..7ca8970 100644
--- a/pym/_emerge/BinpkgPrefetcher.py
+++ b/pym/_emerge/BinpkgPrefetcher.py
@@ -27,7 +27,7 @@ class BinpkgPrefetcher(CompositeTask):
verifier = BinpkgVerifier(background=self.background,
logfile=self.scheduler.fetch.log_file, pkg=self.pkg,
- scheduler=self.scheduler)
+ scheduler=self.scheduler, _pkg_path=self.pkg_path)
self._start_task(verifier, self._verifier_exit)
def _verifier_exit(self, verifier):
diff --git a/pym/_emerge/BinpkgVerifier.py b/pym/_emerge/BinpkgVerifier.py
index 2c69792..7a6d15e 100644
--- a/pym/_emerge/BinpkgVerifier.py
+++ b/pym/_emerge/BinpkgVerifier.py
@@ -33,7 +33,6 @@ class BinpkgVerifier(CompositeTask):
digests = _apply_hash_filter(digests, hash_filter)
self._digests = digests
- self._pkg_path = bintree.getname(self.pkg.cpv)
try:
size = os.stat(self._pkg_path).st_size
@@ -90,8 +89,11 @@ class BinpkgVerifier(CompositeTask):
if portage.output.havecolor:
portage.output.havecolor = not self.background
+ path = self._pkg_path
+ if path.endswith(".partial"):
+ path = path[:-len(".partial")]
eout = EOutput()
- eout.ebegin("%s %s ;-)" % (os.path.basename(self._pkg_path),
+ eout.ebegin("%s %s ;-)" % (os.path.basename(path),
" ".join(sorted(self._digests))))
eout.eend(0)
diff --git a/pym/_emerge/EbuildBinpkg.py b/pym/_emerge/EbuildBinpkg.py
index 34a6aef..6e098eb 100644
--- a/pym/_emerge/EbuildBinpkg.py
+++ b/pym/_emerge/EbuildBinpkg.py
@@ -10,13 +10,12 @@ class EbuildBinpkg(CompositeTask):
This assumes that src_install() has successfully completed.
"""
__slots__ = ('pkg', 'settings') + \
- ('_binpkg_tmpfile',)
+ ('_binpkg_tmpfile', '_binpkg_info')
def _start(self):
pkg = self.pkg
root_config = pkg.root_config
bintree = root_config.trees["bintree"]
- bintree.prevent_collision(pkg.cpv)
binpkg_tmpfile = os.path.join(bintree.pkgdir,
pkg.cpv + ".tbz2." + str(os.getpid()))
bintree._ensure_dir(os.path.dirname(binpkg_tmpfile))
@@ -43,8 +42,12 @@ class EbuildBinpkg(CompositeTask):
pkg = self.pkg
bintree = pkg.root_config.trees["bintree"]
- bintree.inject(pkg.cpv, filename=self._binpkg_tmpfile)
+ self._binpkg_info = bintree.inject(pkg.cpv,
+ filename=self._binpkg_tmpfile)
self._current_task = None
self.returncode = os.EX_OK
self.wait()
+
+ def get_binpkg_info(self):
+ return self._binpkg_info
diff --git a/pym/_emerge/EbuildBuild.py b/pym/_emerge/EbuildBuild.py
index b5b1e87..0e98602 100644
--- a/pym/_emerge/EbuildBuild.py
+++ b/pym/_emerge/EbuildBuild.py
@@ -1,6 +1,10 @@
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+from __future__ import unicode_literals
+
+import io
+
import _emerge.emergelog
from _emerge.EbuildExecuter import EbuildExecuter
from _emerge.EbuildPhase import EbuildPhase
@@ -15,7 +19,7 @@ from _emerge.TaskSequence import TaskSequence
from portage.util import writemsg
import portage
-from portage import os
+from portage import _encodings, _unicode_decode, _unicode_encode, os
from portage.output import colorize
from portage.package.ebuild.digestcheck import digestcheck
from portage.package.ebuild.digestgen import digestgen
@@ -317,9 +321,13 @@ class EbuildBuild(CompositeTask):
phase="rpm", scheduler=self.scheduler,
settings=self.settings))
else:
- binpkg_tasks.add(EbuildBinpkg(background=self.background,
+ task = EbuildBinpkg(
+ background=self.background,
pkg=self.pkg, scheduler=self.scheduler,
- settings=self.settings))
+ settings=self.settings)
+ binpkg_tasks.add(task)
+ task.addExitListener(
+ self._record_binpkg_info)
if binpkg_tasks:
self._start_task(binpkg_tasks, self._buildpkg_exit)
@@ -356,6 +364,28 @@ class EbuildBuild(CompositeTask):
self.returncode = packager.returncode
self.wait()
+ def _record_binpkg_info(self, task):
+ if task.returncode != os.EX_OK:
+ return
+
+ # Save info about the created binary package, so that
+ # identifying information can be passed to the install
+ # task, to be recorded in the installed package database.
+ pkg = task.get_binpkg_info()
+ infoloc = os.path.join(self.settings["PORTAGE_BUILDDIR"],
+ "build-info")
+ info = {
+ "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+ }
+ if pkg.build_id is not None:
+ info["BUILD_ID"] = "%s\n" % pkg.build_id
+ for k, v in info.items():
+ with io.open(_unicode_encode(os.path.join(infoloc, k),
+ encoding=_encodings['fs'], errors='strict'),
+ mode='w', encoding=_encodings['repo.content'],
+ errors='strict') as f:
+ f.write(v)
+
def _buildpkgonly_success_hook_exit(self, success_hooks):
self._default_exit(success_hooks)
self.returncode = None
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index 975335d..2c1a116 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -219,6 +219,8 @@ class Package(Task):
else:
raise TypeError("root_config argument is required")
+ elements = [type_name, root, _unicode(cpv), operation]
+
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
@@ -229,14 +231,22 @@ class Package(Task):
raise AssertionError(
"Package._gen_hash_key() " + \
"called without 'repo_name' argument")
- repo_key = repo_name
+ elements.append(repo_name)
+ elif type_name == "binary":
+ # Including a variety of fingerprints in the hash makes
+ # it possible to simultaneously consider multiple similar
+ # packages. Note that digests are not included here, since
+ # they are relatively expensive to compute, and they may
+ # not necessarily be available.
+ elements.extend([cpv.build_id, cpv.file_size,
+ cpv.build_time, cpv.mtime])
else:
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
- repo_key = type_name
+ elements.append(type_name)
- return (type_name, root, _unicode(cpv), operation, repo_key)
+ return tuple(elements)
def _validate_deps(self):
"""
diff --git a/pym/_emerge/Scheduler.py b/pym/_emerge/Scheduler.py
index 6e3bf1a..6b39e3b 100644
--- a/pym/_emerge/Scheduler.py
+++ b/pym/_emerge/Scheduler.py
@@ -862,8 +862,12 @@ class Scheduler(PollScheduler):
continue
fetched = fetcher.pkg_path
+ if fetched is False:
+ filename = bintree.getname(x.cpv)
+ else:
+ filename = fetched
verifier = BinpkgVerifier(pkg=x,
- scheduler=sched_iface)
+ scheduler=sched_iface, _pkg_path=filename)
current_task = verifier
verifier.start()
if verifier.wait() != os.EX_OK:
diff --git a/pym/_emerge/clear_caches.py b/pym/_emerge/clear_caches.py
index 513df62..cb0db10 100644
--- a/pym/_emerge/clear_caches.py
+++ b/pym/_emerge/clear_caches.py
@@ -7,7 +7,6 @@ def clear_caches(trees):
for d in trees.values():
d["porttree"].dbapi.melt()
d["porttree"].dbapi._aux_cache.clear()
- d["bintree"].dbapi._aux_cache.clear()
d["bintree"].dbapi._clear_cache()
if d["vartree"].dbapi._linkmap is None:
# preserve-libs is entirely disabled
diff --git a/pym/_emerge/depgraph.py b/pym/_emerge/depgraph.py
index e8a3110..b6014a4 100644
--- a/pym/_emerge/depgraph.py
+++ b/pym/_emerge/depgraph.py
@@ -5747,11 +5747,11 @@ class depgraph(object):
if want_reinstall and matched_packages:
continue
- # Ignore USE deps for the initial match since we want to
- # ensure that updates aren't missed solely due to the user's
- # USE configuration.
+ # For unbuilt ebuilds, ignore USE deps for the initial
+ # match since we want to ensure that updates aren't
+ # missed solely due to the user's USE configuration.
for pkg in self._iter_match_pkgs(root_config, pkg_type,
- atom.without_use if atom.package else atom,
+ atom.without_use if (atom.package and not built) else atom,
onlydeps=onlydeps):
if have_new_virt is True and pkg.cp != atom_cp:
# pull in a new-style virtual instead
@@ -6014,6 +6014,10 @@ class depgraph(object):
pkg, {}).setdefault(
"respect_use", set()).update(
reinstall_for_flags)
+ # Continue searching for a binary
+ # package instance built with the
+ # desired USE settings.
+ continue
break
if (((installed and changed_deps) or
@@ -6023,6 +6027,10 @@ class depgraph(object):
self._dynamic_config.\
ignored_binaries.setdefault(
pkg, {})["changed_deps"] = True
+ # Continue searching for a binary
+ # package instance built with the
+ # desired USE settings.
+ continue
break
# Compare current config to installed package
diff --git a/pym/portage/const.py b/pym/portage/const.py
index febdb4a..c7ecda2 100644
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@ -122,6 +122,7 @@ EBUILD_PHASES = (
SUPPORTED_FEATURES = frozenset([
"assume-digests",
"binpkg-logs",
+ "binpkg-multi-instance",
"buildpkg",
"buildsyspkg",
"candy",
@@ -268,6 +269,7 @@ LIVE_ECLASSES = frozenset([
])
SUPPORTED_BINPKG_FORMATS = ("tar", "rpm")
+SUPPORTED_XPAK_EXTENSIONS = (".tbz2", ".xpak")
# Time formats used in various places like metadata.chk.
TIMESTAMP_FORMAT = "%a, %d %b %Y %H:%M:%S +0000" # to be used with time.gmtime()
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index cd30b67..9bc5d98 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -17,14 +17,13 @@ portage.proxy.lazyimport.lazyimport(globals(),
'portage.update:update_dbentries',
'portage.util:atomic_ofstream,ensure_dirs,normalize_path,' + \
'writemsg,writemsg_stdout',
- 'portage.util.listdir:listdir',
'portage.util.path:first_existing',
'portage.util._urlopen:urlopen@_urlopen',
'portage.versions:best,catpkgsplit,catsplit,_pkg_str',
)
from portage.cache.mappings import slot_dict_class
-from portage.const import CACHE_PATH
+from portage.const import CACHE_PATH, SUPPORTED_XPAK_EXTENSIONS
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
@@ -71,18 +70,26 @@ class bindbapi(fakedbapi):
_known_keys = frozenset(list(fakedbapi._known_keys) + \
["CHOST", "repository", "USE"])
def __init__(self, mybintree=None, **kwargs):
- fakedbapi.__init__(self, **kwargs)
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False,
+ multi_instance=True, **kwargs)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
- self.cpvdict={}
- self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE",
- "DEFINED_PHASES", "PROVIDES", "REQUIRES"
+ ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE", "_mtime_"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@ -109,33 +116,49 @@ class bindbapi(fakedbapi):
return fakedbapi.cpv_exists(self, cpv)
def cpv_inject(self, cpv, **kwargs):
- self._aux_cache.pop(cpv, None)
- fakedbapi.cpv_inject(self, cpv, **kwargs)
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_inject(self, cpv,
+ metadata=cpv._metadata, **kwargs)
def cpv_remove(self, cpv):
- self._aux_cache.pop(cpv, None)
+ if not self.bintree.populated:
+ self.bintree.populate()
fakedbapi.cpv_remove(self, cpv)
def aux_get(self, mycpv, wants, myrepo=None):
if self.bintree and not self.bintree.populated:
self.bintree.populate()
- cache_me = False
+ # Support plain string for backward compatibility with API
+ # consumers (including portageq, which passes in a cpv from
+ # a command-line argument).
+ instance_key = self._instance_key(mycpv,
+ support_string=True)
if not self._known_keys.intersection(
wants).difference(self._aux_cache_keys):
- aux_cache = self._aux_cache.get(mycpv)
+ aux_cache = self.cpvdict[instance_key]
if aux_cache is not None:
return [aux_cache.get(x, "") for x in wants]
- cache_me = True
mysplit = mycpv.split("/")
mylist = []
tbz2name = mysplit[1]+".tbz2"
if not self.bintree._remotepkgs or \
not self.bintree.isremote(mycpv):
- tbz2_path = self.bintree.getname(mycpv)
- if not os.path.exists(tbz2_path):
+ try:
+ tbz2_path = self.bintree._pkg_paths[instance_key]
+ except KeyError:
+ raise KeyError(mycpv)
+ tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+ try:
+ st = os.lstat(tbz2_path)
+ except OSError:
raise KeyError(mycpv)
metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
def getitem(k):
+ if k == "_mtime_":
+ return _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ return _unicode(st.st_size)
v = metadata_bytes.get(_unicode_encode(k,
encoding=_encodings['repo.content'],
errors='backslashreplace'))
@@ -144,11 +167,9 @@ class bindbapi(fakedbapi):
encoding=_encodings['repo.content'], errors='replace')
return v
else:
- getitem = self.bintree._remotepkgs[mycpv].get
+ getitem = self.cpvdict[instance_key].get
mydata = {}
mykeys = wants
- if cache_me:
- mykeys = self._aux_cache_keys.union(wants)
for x in mykeys:
myval = getitem(x)
# myval is None if the key doesn't exist
@@ -159,16 +180,24 @@ class bindbapi(fakedbapi):
if not mydata.setdefault('EAPI', '0'):
mydata['EAPI'] = '0'
- if cache_me:
- aux_cache = self._aux_cache_slot_dict()
- for x in self._aux_cache_keys:
- aux_cache[x] = mydata.get(x, '')
- self._aux_cache[mycpv] = aux_cache
return [mydata.get(x, '') for x in wants]
def aux_update(self, cpv, values):
if not self.bintree.populated:
self.bintree.populate()
+ build_id = None
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ if self.bintree._multi_instance:
+ # The cpv.build_id attribute is required if we are in
+ # multi-instance mode, since otherwise we won't know
+ # which instance to update.
+ raise
+ else:
+ cpv = self._instance_key(cpv, support_string=True)[0]
+ build_id = cpv.build_id
+
tbz2path = self.bintree.getname(cpv)
if not os.path.exists(tbz2path):
raise KeyError(cpv)
@@ -187,7 +216,7 @@ class bindbapi(fakedbapi):
del mydata[k]
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
# inject will clear stale caches via cpv_inject.
- self.bintree.inject(cpv)
+ self.bintree.inject(cpv, filename=tbz2path)
def cp_list(self, *pargs, **kwargs):
if not self.bintree.populated:
@@ -219,7 +248,7 @@ class bindbapi(fakedbapi):
if not self.bintree.isremote(pkg):
pass
else:
- metadata = self.bintree._remotepkgs[pkg]
+ metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
try:
size = int(metadata["SIZE"])
except KeyError:
@@ -300,6 +329,13 @@ class binarytree(object):
if True:
self.pkgdir = normalize_path(pkgdir)
+ # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = ("binpkg-multi-instance" in
+ settings.features)
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
self.dbapi = bindbapi(self, settings=settings)
self.update_ents = self.dbapi.update_ents
self.move_slot_ent = self.dbapi.move_slot_ent
@@ -310,7 +346,6 @@ class binarytree(object):
self.invalids = []
self.settings = settings
self._pkg_paths = {}
- self._pkgindex_uri = {}
self._populating = False
self._all_directory = os.path.isdir(
os.path.join(self.pkgdir, "All"))
@@ -318,12 +353,14 @@ class binarytree(object):
self._pkgindex_hashes = ["MD5","SHA1"]
self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
+ self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI", "PROVIDES", "REQUIRES"]
+ ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
@@ -336,6 +373,7 @@ class binarytree(object):
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
"USE_EXPAND_UNPREFIXED"])
self._pkgindex_default_pkg_data = {
+ "BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
"DEPEND" : "",
@@ -365,6 +403,7 @@ class binarytree(object):
self._pkgindex_translated_keys = (
("DESCRIPTION" , "DESC"),
+ ("_mtime_" , "MTIME"),
("repository" , "REPO"),
)
@@ -455,16 +494,21 @@ class binarytree(object):
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
self.dbapi.cpv_remove(mycpv)
- del self._pkg_paths[mycpv]
+ del self._pkg_paths[self.dbapi._instance_key(mycpv)]
+ metadata = self.dbapi._aux_cache_slot_dict()
+ for k in self.dbapi._aux_cache_keys:
+ v = mydata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ mynewcpv = _pkg_str(mynewcpv, metadata=metadata)
new_path = self.getname(mynewcpv)
- self._pkg_paths[mynewcpv] = os.path.join(
+ self._pkg_paths[
+ self.dbapi._instance_key(mynewcpv)] = os.path.join(
*new_path.split(os.path.sep)[-2:])
if new_path != mytbz2:
self._ensure_dir(os.path.dirname(new_path))
_movefile(tbz2path, new_path, mysettings=self.settings)
- self._remove_symlink(mycpv)
- if new_path.split(os.path.sep)[-2] == "All":
- self._create_symlink(mynewcpv)
self.inject(mynewcpv)
return moves
@@ -645,55 +689,63 @@ class binarytree(object):
# prior to performing package moves since it only wants to
# operate on local packages (getbinpkgs=0).
self._remotepkgs = None
- self.dbapi._clear_cache()
- self.dbapi._aux_cache.clear()
+ self.dbapi.clear()
+ _instance_key = self.dbapi._instance_key
if True:
pkg_paths = {}
self._pkg_paths = pkg_paths
- dirs = listdir(self.pkgdir, dirsonly=True, EmptyOnError=True)
- if "All" in dirs:
- dirs.remove("All")
- dirs.sort()
- dirs.insert(0, "All")
+ dir_files = {}
+ for parent, dir_names, file_names in os.walk(self.pkgdir):
+ relative_parent = parent[len(self.pkgdir)+1:]
+ dir_files[relative_parent] = file_names
+
pkgindex = self._load_pkgindex()
- pf_index = None
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
header = pkgindex.header
metadata = {}
+ basename_index = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ path = d.get("PATH")
+ if not path:
+ path = cpv + ".tbz2"
+ basename = os.path.basename(path)
+ basename_index.setdefault(basename, []).append(d)
+
update_pkgindex = False
- for mydir in dirs:
- for myfile in listdir(os.path.join(self.pkgdir, mydir)):
- if not myfile.endswith(".tbz2"):
+ for mydir, file_names in dir_files.items():
+ try:
+ mydir = _unicode_decode(mydir,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ for myfile in file_names:
+ try:
+ myfile = _unicode_decode(myfile,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
continue
mypath = os.path.join(mydir, myfile)
full_path = os.path.join(self.pkgdir, mypath)
s = os.lstat(full_path)
- if stat.S_ISLNK(s.st_mode):
+
+ if not stat.S_ISREG(s.st_mode):
continue
# Validate data from the package index and try to avoid
# reading the xpak if possible.
- if mydir != "All":
- possibilities = None
- d = metadata.get(mydir+"/"+myfile[:-5])
- if d:
- possibilities = [d]
- else:
- if pf_index is None:
- pf_index = {}
- for mycpv in metadata:
- mycat, mypf = catsplit(mycpv)
- pf_index.setdefault(
- mypf, []).append(metadata[mycpv])
- possibilities = pf_index.get(myfile[:-5])
+ possibilities = basename_index.get(myfile)
if possibilities:
match = None
for d in possibilities:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
continue
except (KeyError, ValueError):
continue
@@ -707,15 +759,14 @@ class binarytree(object):
break
if match:
mycpv = match["CPV"]
- if mycpv in pkg_paths:
- # discard duplicates (All/ is preferred)
- continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ instance_key = _instance_key(mycpv)
+ pkg_paths[instance_key] = mypath
# update the path if the package has been moved
oldpath = d.get("PATH")
if oldpath and oldpath != mypath:
update_pkgindex = True
+ # Omit PATH if it is the default path for
+ # the current Packages format version.
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
if not oldpath:
@@ -725,11 +776,6 @@ class binarytree(object):
if oldpath:
update_pkgindex = True
self.dbapi.cpv_inject(mycpv)
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
continue
if not os.access(full_path, os.R_OK):
writemsg(_("!!! Permission denied to read " \
@@ -737,13 +783,12 @@ class binarytree(object):
noiselevel=-1)
self.invalids.append(myfile[:-5])
continue
- metadata_bytes = portage.xpak.tbz2(full_path).get_data()
- mycat = _unicode_decode(metadata_bytes.get(b"CATEGORY", ""),
- encoding=_encodings['repo.content'], errors='replace')
- mypf = _unicode_decode(metadata_bytes.get(b"PF", ""),
- encoding=_encodings['repo.content'], errors='replace')
- slot = _unicode_decode(metadata_bytes.get(b"SLOT", ""),
- encoding=_encodings['repo.content'], errors='replace')
+ pkg_metadata = self._read_metadata(full_path, s,
+ keys=chain(self.dbapi._aux_cache_keys,
+ ("PF", "CATEGORY")))
+ mycat = pkg_metadata.get("CATEGORY", "")
+ mypf = pkg_metadata.get("PF", "")
+ slot = pkg_metadata.get("SLOT", "")
mypkg = myfile[:-5]
if not mycat or not mypf or not slot:
#old-style or corrupt package
@@ -767,16 +812,51 @@ class binarytree(object):
writemsg("!!! %s\n" % line, noiselevel=-1)
self.invalids.append(mypkg)
continue
- mycat = mycat.strip()
- slot = slot.strip()
- if mycat != mydir and mydir != "All":
+
+ multi_instance = False
+ invalid_name = False
+ build_id = None
+ if myfile.endswith(".xpak"):
+ multi_instance = True
+ build_id = self._parse_build_id(myfile)
+ if build_id < 1:
+ invalid_name = True
+ elif myfile != "%s-%s.xpak" % (
+ mypf, build_id):
+ invalid_name = True
+ else:
+ mypkg = mypkg[:-len(str(build_id))-1]
+ elif myfile != mypf + ".tbz2":
+ invalid_name = True
+
+ if invalid_name:
+ writemsg(_("\n!!! Binary package name is "
+ "invalid: '%s'\n") % full_path,
+ noiselevel=-1)
+ continue
+
+ if pkg_metadata.get("BUILD_ID"):
+ try:
+ build_id = long(pkg_metadata["BUILD_ID"])
+ except ValueError:
+ writemsg(_("!!! Binary package has "
+ "invalid BUILD_ID: '%s'\n") %
+ full_path, noiselevel=-1)
+ continue
+ else:
+ build_id = None
+
+ if multi_instance:
+ name_split = catpkgsplit("%s/%s" %
+ (mycat, mypf))
+ if (name_split is None or
+ tuple(catsplit(mydir)) != name_split[:2]):
+ continue
+ elif mycat != mydir and mydir != "All":
continue
if mypkg != mypf.strip():
continue
mycpv = mycat + "/" + mypkg
- if mycpv in pkg_paths:
- # All is first, so it's preferred.
- continue
if not self.dbapi._category_re.match(mycat):
writemsg(_("!!! Binary package has an " \
"unrecognized category: '%s'\n") % full_path,
@@ -786,14 +866,23 @@ class binarytree(object):
(mycpv, self.settings["PORTAGE_CONFIGROOT"]),
noiselevel=-1)
continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ if build_id is not None:
+ pkg_metadata["BUILD_ID"] = _unicode(build_id)
+ pkg_metadata["SIZE"] = _unicode(s.st_size)
+ # Discard items used only for validation above.
+ pkg_metadata.pop("CATEGORY")
+ pkg_metadata.pop("PF")
+ mycpv = _pkg_str(mycpv,
+ metadata=self.dbapi._aux_cache_slot_dict(
+ pkg_metadata))
+ pkg_paths[_instance_key(mycpv)] = mypath
self.dbapi.cpv_inject(mycpv)
update_pkgindex = True
- d = metadata.get(mycpv, {})
+ d = metadata.get(_instance_key(mycpv),
+ pkgindex._pkg_slot_dict())
if d:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
d.clear()
except (KeyError, ValueError):
d.clear()
@@ -804,36 +893,30 @@ class binarytree(object):
except (KeyError, ValueError):
d.clear()
+ for k in self._pkgindex_allowed_pkg_keys:
+ v = pkg_metadata.get(k)
+ if v is not None:
+ d[k] = v
d["CPV"] = mycpv
- d["SLOT"] = slot
- d["MTIME"] = _unicode(s[stat.ST_MTIME])
- d["SIZE"] = _unicode(s.st_size)
- d.update(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(mycpv, self._pkgindex_aux_keys)))
try:
self._eval_use_flags(mycpv, d)
except portage.exception.InvalidDependString:
writemsg(_("!!! Invalid binary package: '%s'\n") % \
self.getname(mycpv), noiselevel=-1)
self.dbapi.cpv_remove(mycpv)
- del pkg_paths[mycpv]
+ del pkg_paths[_instance_key(mycpv)]
# record location if it's non-default
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
else:
d.pop("PATH", None)
- metadata[mycpv] = d
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
+ metadata[_instance_key(mycpv)] = d
- for cpv in list(metadata):
- if cpv not in pkg_paths:
- del metadata[cpv]
+ for instance_key in list(metadata):
+ if instance_key not in pkg_paths:
+ del metadata[instance_key]
# Do not bother to write the Packages index if $PKGDIR/All/ exists
# since it will provide no benefit due to the need to read CATEGORY
@@ -1058,45 +1141,24 @@ class binarytree(object):
# The current user doesn't have permission to cache the
# file, but that's alright.
if pkgindex:
- # Organize remote package list as a cpv -> metadata map.
- remotepkgs = _pkgindex_cpv_map_latest_build(pkgindex)
remote_base_uri = pkgindex.header.get("URI", base_url)
- for cpv, remote_metadata in remotepkgs.items():
- remote_metadata["BASE_URI"] = remote_base_uri
- self._pkgindex_uri[cpv] = url
- self._remotepkgs.update(remotepkgs)
- self._remote_has_index = True
- for cpv in remotepkgs:
+ for d in pkgindex.packages:
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ instance_key = _instance_key(cpv)
+ # Local package instances override remote instances
+ # with the same instance_key.
+ if instance_key in metadata:
+ continue
+
+ d["CPV"] = cpv
+ d["BASE_URI"] = remote_base_uri
+ d["PKGINDEX_URI"] = url
+ self._remotepkgs[instance_key] = d
+ metadata[instance_key] = d
self.dbapi.cpv_inject(cpv)
- if True:
- # Remote package instances override local package
- # if they are not identical.
- hash_names = ["SIZE"] + self._pkgindex_hashes
- for cpv, local_metadata in metadata.items():
- remote_metadata = self._remotepkgs.get(cpv)
- if remote_metadata is None:
- continue
- # Use digests to compare identity.
- identical = True
- for hash_name in hash_names:
- local_value = local_metadata.get(hash_name)
- if local_value is None:
- continue
- remote_value = remote_metadata.get(hash_name)
- if remote_value is None:
- continue
- if local_value != remote_value:
- identical = False
- break
- if identical:
- del self._remotepkgs[cpv]
- else:
- # Override the local package in the aux_get cache.
- self.dbapi._aux_cache[cpv] = remote_metadata
- else:
- # Local package instances override remote instances.
- for cpv in metadata:
- self._remotepkgs.pop(cpv, None)
+
+ self._remote_has_index = True
self.populated=1
@@ -1108,7 +1170,8 @@ class binarytree(object):
@param filename: File path of the package to inject, or None if it's
already in the location returned by getname()
@type filename: string
- @rtype: None
+ @rtype: _pkg_str or None
+ @return: A _pkg_str instance on success, or None on failure.
"""
mycat, mypkg = catsplit(cpv)
if not self.populated:
@@ -1126,24 +1189,44 @@ class binarytree(object):
writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
noiselevel=-1)
return
- mytbz2 = portage.xpak.tbz2(full_path)
- slot = mytbz2.getfile("SLOT")
+ metadata = self._read_metadata(full_path, s)
+ slot = metadata.get("SLOT")
+ try:
+ self._eval_use_flags(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ slot = None
if slot is None:
writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
noiselevel=-1)
return
- slot = slot.strip()
- self.dbapi.cpv_inject(cpv)
+
+ fetched = False
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ build_id = None
+ else:
+ instance_key = self.dbapi._instance_key(cpv)
+ if instance_key in self.dbapi.cpvdict:
+ # This means we've been called by aux_update (or
+ # similar). The instance key typically changes (due to
+ # file modification), so we need to discard existing
+ # instance key references.
+ self.dbapi.cpv_remove(cpv)
+ self._pkg_paths.pop(instance_key, None)
+ if self._remotepkgs is not None:
+ fetched = self._remotepkgs.pop(instance_key, None)
+
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings)
# Reread the Packages index (in case it's been changed by another
# process) and then updated it, all while holding a lock.
pkgindex_lock = None
- created_symlink = False
try:
pkgindex_lock = lockfile(self._pkgindex_file,
wantnewlockfile=1)
if filename is not None:
- new_filename = self.getname(cpv)
+ new_filename = self.getname(cpv, allocate_new=True)
try:
samefile = os.path.samefile(filename, new_filename)
except OSError:
@@ -1153,54 +1236,31 @@ class binarytree(object):
_movefile(filename, new_filename, mysettings=self.settings)
full_path = new_filename
- self._file_permissions(full_path)
+ basename = os.path.basename(full_path)
+ pf = catsplit(cpv)[1]
+ if (build_id is None and not fetched and
+ basename.endswith(".xpak")):
+ # Apply the newly assigned BUILD_ID. This is intended
+ # to occur only for locally built packages. If the
+ # package was fetched, we want to preserve its
+ # attributes, so that we can later distinguish that it
+ # is identical to its remote counterpart.
+ build_id = self._parse_build_id(basename)
+ metadata["BUILD_ID"] = _unicode(build_id)
+ cpv = _pkg_str(cpv, metadata=metadata,
+ settings=self.settings)
+ binpkg = portage.xpak.tbz2(full_path)
+ binary_data = binpkg.get_data()
+ binary_data[b"BUILD_ID"] = _unicode_encode(
+ metadata["BUILD_ID"])
+ binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
- if self._all_directory and \
- self.getname(cpv).split(os.path.sep)[-2] == "All":
- self._create_symlink(cpv)
- created_symlink = True
+ self._file_permissions(full_path)
pkgindex = self._load_pkgindex()
-
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
- # Discard remote metadata to ensure that _pkgindex_entry
- # gets the local metadata. This also updates state for future
- # isremote calls.
- if self._remotepkgs is not None:
- self._remotepkgs.pop(cpv, None)
-
- # Discard cached metadata to ensure that _pkgindex_entry
- # doesn't return stale metadata.
- self.dbapi._aux_cache.pop(cpv, None)
-
- try:
- d = self._pkgindex_entry(cpv)
- except portage.exception.InvalidDependString:
- writemsg(_("!!! Invalid binary package: '%s'\n") % \
- self.getname(cpv), noiselevel=-1)
- self.dbapi.cpv_remove(cpv)
- del self._pkg_paths[cpv]
- return
-
- # If found, remove package(s) with duplicate path.
- path = d.get("PATH", "")
- for i in range(len(pkgindex.packages) - 1, -1, -1):
- d2 = pkgindex.packages[i]
- if path and path == d2.get("PATH"):
- # Handle path collisions in $PKGDIR/All
- # when CPV is not identical.
- del pkgindex.packages[i]
- elif cpv == d2.get("CPV"):
- if path == d2.get("PATH", ""):
- del pkgindex.packages[i]
- elif created_symlink and not d2.get("PATH", ""):
- # Delete entry for the package that was just
- # overwritten by a symlink to this package.
- del pkgindex.packages[i]
-
- pkgindex.packages.append(d)
-
+ d = self._inject_file(pkgindex, cpv, full_path)
self._update_pkgindex_header(pkgindex.header)
self._pkgindex_write(pkgindex)
@@ -1208,6 +1268,73 @@ class binarytree(object):
if pkgindex_lock:
unlockfile(pkgindex_lock)
+ # This is used to record BINPKGMD5 in the installed package
+ # database, for a package that has just been built.
+ cpv._metadata["MD5"] = d["MD5"]
+
+ return cpv
+
+ def _read_metadata(self, filename, st, keys=None):
+ if keys is None:
+ keys = self.dbapi._aux_cache_keys
+ metadata = self.dbapi._aux_cache_slot_dict()
+ else:
+ metadata = {}
+ binary_metadata = portage.xpak.tbz2(filename).get_data()
+ for k in keys:
+ if k == "_mtime_":
+ metadata[k] = _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ metadata[k] = _unicode(st.st_size)
+ else:
+ v = binary_metadata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ metadata.setdefault("EAPI", "0")
+ return metadata
+
+ def _inject_file(self, pkgindex, cpv, filename):
+ """
+ Add a package to internal data structures, and add an
+ entry to the given pkgindex.
+ @param pkgindex: The PackageIndex instance to which an entry
+ will be added.
+ @type pkgindex: PackageIndex
+ @param cpv: A _pkg_str instance corresponding to the package
+ being injected.
+ @type cpv: _pkg_str
+ @param filename: Absolute file path of the package to inject.
+ @type filename: string
+ @rtype: dict
+ @return: A dict corresponding to the new entry which has been
+ added to pkgindex. This may be used to access the checksums
+ which have just been generated.
+ """
+ # Update state for future isremote calls.
+ instance_key = self.dbapi._instance_key(cpv)
+ if self._remotepkgs is not None:
+ self._remotepkgs.pop(instance_key, None)
+
+ self.dbapi.cpv_inject(cpv)
+ self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
+ d = self._pkgindex_entry(cpv)
+
+ # If found, remove package(s) with duplicate path.
+ path = d.get("PATH", "")
+ for i in range(len(pkgindex.packages) - 1, -1, -1):
+ d2 = pkgindex.packages[i]
+ if path and path == d2.get("PATH"):
+ # Handle path collisions in $PKGDIR/All
+ # when CPV is not identical.
+ del pkgindex.packages[i]
+ elif cpv == d2.get("CPV"):
+ if path == d2.get("PATH", ""):
+ del pkgindex.packages[i]
+
+ pkgindex.packages.append(d)
+ return d
+
def _pkgindex_write(self, pkgindex):
contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
pkgindex.write(contents)
@@ -1233,7 +1360,7 @@ class binarytree(object):
def _pkgindex_entry(self, cpv):
"""
- Performs checksums and evaluates USE flag conditionals.
+ Performs checksums, and gets size and mtime via lstat.
Raises InvalidDependString if necessary.
@rtype: dict
@return: a dict containing entry for the give cpv.
@@ -1241,23 +1368,20 @@ class binarytree(object):
pkg_path = self.getname(cpv)
- d = dict(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(cpv, self._pkgindex_aux_keys)))
-
+ d = dict(cpv._metadata.items())
d.update(perform_multiple_checksums(
pkg_path, hashes=self._pkgindex_hashes))
d["CPV"] = cpv
- st = os.stat(pkg_path)
- d["MTIME"] = _unicode(st[stat.ST_MTIME])
+ st = os.lstat(pkg_path)
+ d["_mtime_"] = _unicode(st[stat.ST_MTIME])
d["SIZE"] = _unicode(st.st_size)
- rel_path = self._pkg_paths[cpv]
+ rel_path = pkg_path[len(self.pkgdir)+1:]
# record location if it's non-default
if rel_path != cpv + ".tbz2":
d["PATH"] = rel_path
- self._eval_use_flags(cpv, d)
return d
def _new_pkgindex(self):
@@ -1311,15 +1435,17 @@ class binarytree(object):
return False
def _eval_use_flags(self, cpv, metadata):
- use = frozenset(metadata["USE"].split())
+ use = frozenset(metadata.get("USE", "").split())
for k in self._pkgindex_use_evaluated_keys:
if k.endswith('DEPEND'):
token_class = Atom
else:
token_class = None
+ deps = metadata.get(k)
+ if deps is None:
+ continue
try:
- deps = metadata[k]
deps = use_reduce(deps, uselist=use, token_class=token_class)
deps = paren_enclose(deps)
except portage.exception.InvalidDependString as e:
@@ -1349,46 +1475,129 @@ class binarytree(object):
return ""
return mymatch
- def getname(self, pkgname):
- """Returns a file location for this package. The default location is
- ${PKGDIR}/All/${PF}.tbz2, but will be ${PKGDIR}/${CATEGORY}/${PF}.tbz2
- in the rare event of a collision. The prevent_collision() method can
- be called to ensure that ${PKGDIR}/All/${PF}.tbz2 is available for a
- specific cpv."""
+ def getname(self, cpv, allocate_new=None):
+ """Returns a file location for this package.
+ If cpv has both build_time and build_id attributes, then the
+ path to the specific corresponding instance is returned.
+ Otherwise, allocate a new path and return that. When allocating
+ a new path, behavior depends on the binpkg-multi-instance
+ FEATURES setting.
+ """
if not self.populated:
self.populate()
- mycpv = pkgname
- mypath = self._pkg_paths.get(mycpv, None)
- if mypath:
- return os.path.join(self.pkgdir, mypath)
- mycat, mypkg = catsplit(mycpv)
- if self._all_directory:
- mypath = os.path.join("All", mypkg + ".tbz2")
- if mypath in self._pkg_paths.values():
- mypath = os.path.join(mycat, mypkg + ".tbz2")
+
+ try:
+ cpv.cp
+ except AttributeError:
+ cpv = _pkg_str(cpv)
+
+ filename = None
+ if allocate_new:
+ filename = self._allocate_filename(cpv)
+ elif self._is_specific_instance(cpv):
+ instance_key = self.dbapi._instance_key(cpv)
+ path = self._pkg_paths.get(instance_key)
+ if path is not None:
+ filename = os.path.join(self.pkgdir, path)
+
+ if filename is None and not allocate_new:
+ try:
+ instance_key = self.dbapi._instance_key(cpv,
+ support_string=True)
+ except KeyError:
+ pass
+ else:
+ filename = self._pkg_paths.get(instance_key)
+ if filename is not None:
+ filename = os.path.join(self.pkgdir, filename)
+
+ if filename is None:
+ if self._multi_instance:
+ pf = catsplit(cpv)[1]
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), "1")
+ else:
+ filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ return filename
+
+ def _is_specific_instance(self, cpv):
+ specific = True
+ try:
+ build_time = cpv.build_time
+ build_id = cpv.build_id
+ except AttributeError:
+ specific = False
else:
- mypath = os.path.join(mycat, mypkg + ".tbz2")
- self._pkg_paths[mycpv] = mypath # cache for future lookups
- return os.path.join(self.pkgdir, mypath)
+ if build_time is None or build_id is None:
+ specific = False
+ return specific
+
+ def _max_build_id(self, cpv):
+ max_build_id = 0
+ for x in self.dbapi.cp_list(cpv.cp):
+ if (x == cpv and x.build_id is not None and
+ x.build_id > max_build_id):
+ max_build_id = x.build_id
+ return max_build_id
+
+ def _allocate_filename(self, cpv):
+ return os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ def _allocate_filename_multi(self, cpv):
+
+ # First, get the max build_id found when _populate was
+ # called.
+ max_build_id = self._max_build_id(cpv)
+
+ # A new package may have been added concurrently since the
+ # last _populate call, so increment build_id until
+ # we locate an unused id.
+ pf = catsplit(cpv)[1]
+ build_id = max_build_id + 1
+
+ while True:
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+ if os.path.exists(filename):
+ build_id += 1
+ else:
+ return filename
+
+ @staticmethod
+ def _parse_build_id(filename):
+ build_id = -1
+ hyphen = filename.rfind("-", 0, -6)
+ if hyphen != -1:
+ build_id = filename[hyphen+1:-5]
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ pass
+ return build_id
def isremote(self, pkgname):
"""Returns true if the package is kept remotely and it has not been
downloaded (or it is only partially downloaded)."""
- if self._remotepkgs is None or pkgname not in self._remotepkgs:
+ if (self._remotepkgs is None or
+ self.dbapi._instance_key(pkgname) not in self._remotepkgs):
return False
# Presence in self._remotepkgs implies that it's remote. When a
# package is downloaded, state is updated by self.inject().
return True
- def get_pkgindex_uri(self, pkgname):
+ def get_pkgindex_uri(self, cpv):
"""Returns the URI to the Packages file for a given package."""
- return self._pkgindex_uri.get(pkgname)
-
-
+ uri = None
+ metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+ if metadata is not None:
+ uri = metadata["PKGINDEX_URI"]
+ return uri
def gettbz2(self, pkgname):
"""Fetches the package from a remote site, if necessary. Attempts to
resume if the file appears to be partially downloaded."""
+ instance_key = self.dbapi._instance_key(pkgname)
tbz2_path = self.getname(pkgname)
tbz2name = os.path.basename(tbz2_path)
resume = False
@@ -1404,10 +1613,10 @@ class binarytree(object):
self._ensure_dir(mydest)
# urljoin doesn't work correctly with unrecognized protocols like sftp
if self._remote_has_index:
- rel_url = self._remotepkgs[pkgname].get("PATH")
+ rel_url = self._remotepkgs[instance_key].get("PATH")
if not rel_url:
rel_url = pkgname+".tbz2"
- remote_base_uri = self._remotepkgs[pkgname]["BASE_URI"]
+ remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
else:
url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
@@ -1450,15 +1659,19 @@ class binarytree(object):
except AttributeError:
cpv = pkg
+ _instance_key = self.dbapi._instance_key
+ instance_key = _instance_key(cpv)
digests = {}
- metadata = None
- if self._remotepkgs is None or cpv not in self._remotepkgs:
+ metadata = (None if self._remotepkgs is None else
+ self._remotepkgs.get(instance_key))
+ if metadata is None:
for d in self._load_pkgindex().packages:
- if d["CPV"] == cpv:
+ if (d["CPV"] == cpv and
+ instance_key == _instance_key(_pkg_str(d["CPV"],
+ metadata=d, settings=self.settings))):
metadata = d
break
- else:
- metadata = self._remotepkgs[cpv]
+
if metadata is None:
return digests
diff --git a/pym/portage/emaint/modules/binhost/binhost.py b/pym/portage/emaint/modules/binhost/binhost.py
index 1138a8c..cf1213e 100644
--- a/pym/portage/emaint/modules/binhost/binhost.py
+++ b/pym/portage/emaint/modules/binhost/binhost.py
@@ -7,6 +7,7 @@ import stat
import portage
from portage import os
from portage.util import writemsg
+from portage.versions import _pkg_str
import sys
@@ -38,7 +39,7 @@ class BinhostHandler(object):
if size is None:
return True
- mtime = data.get("MTIME")
+ mtime = data.get("_mtime_")
if mtime is None:
return True
@@ -90,6 +91,7 @@ class BinhostHandler(object):
def fix(self, **kwargs):
onProgress = kwargs.get('onProgress', None)
bintree = self._bintree
+ _instance_key = bintree.dbapi._instance_key
cpv_all = self._bintree.dbapi.cpv_all()
cpv_all.sort()
missing = []
@@ -98,16 +100,21 @@ class BinhostHandler(object):
onProgress(maxval, 0)
pkgindex = self._pkgindex
missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
- stale = set(metadata).difference(cpv_all)
if missing or stale:
from portage import locks
pkgindex_lock = locks.lockfile(
@@ -121,31 +128,39 @@ class BinhostHandler(object):
pkgindex = bintree._load_pkgindex()
self._pkgindex = pkgindex
+ # Recount stale/missing packages, with lock held.
+ missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- # Recount missing packages, with lock held.
- del missing[:]
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
maxval = len(missing)
for i, cpv in enumerate(missing):
+ d = bintree._pkgindex_entry(cpv)
try:
- metadata[cpv] = bintree._pkgindex_entry(cpv)
+ bintree._eval_use_flags(cpv, d)
except portage.exception.InvalidDependString:
writemsg("!!! Invalid binary package: '%s'\n" % \
bintree.getname(cpv), noiselevel=-1)
+ else:
+ metadata[_instance_key(cpv)] = d
if onProgress:
onProgress(maxval, i+1)
- for cpv in set(metadata).difference(
- self._bintree.dbapi.cpv_all()):
- del metadata[cpv]
+ for cpv in stale:
+ del metadata[_instance_key(cpv)]
# We've updated the pkgindex, so set it to
# repopulate when necessary.
--
2.0.5
* [gentoo-portage-dev] [PATCH 4/7] binpkg-multi-instance 4 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
` (2 preceding siblings ...)
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 3/7] binpkg-multi-instance 3 " Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 5/7] binpkg-multi-instance 5 " Zac Medico
` (3 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Add a test case to verify that emerge --rebuilt-binaries works with
binpkg-multi-instance. This relies on the fact that binary packages of
the same version are ordered by BUILD_TIME, so that the latest builds
are preferred when appropriate.
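As a rough illustration of that ordering (a sketch, not the resolver's
actual code), instances that share a CPV can be ranked by BUILD_TIME;
the metadata values below mirror the hypothetical test data added here:

# Hypothetical Packages-index entries for one CPV; the highest
# BUILD_TIME wins when --rebuilt-binaries compares instances.
builds = [
    {"CPV": "app-misc/A-1", "BUILD_ID": "1", "BUILD_TIME": "1"},
    {"CPV": "app-misc/A-1", "BUILD_ID": "3", "BUILD_TIME": "3"},
    {"CPV": "app-misc/A-1", "BUILD_ID": "2", "BUILD_TIME": "2"},
]
latest = max(builds, key=lambda d: int(d["BUILD_TIME"] or 0))
print(latest["BUILD_ID"])  # prints: 3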
---
pym/portage/tests/resolver/ResolverPlayground.py | 25 ++++-
.../resolver/binpkg_multi_instance/__init__.py | 2 +
.../resolver/binpkg_multi_instance/__test__.py | 2 +
.../binpkg_multi_instance/test_rebuilt_binaries.py | 101 +++++++++++++++++++++
4 files changed, 126 insertions(+), 4 deletions(-)
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
diff --git a/pym/portage/tests/resolver/ResolverPlayground.py b/pym/portage/tests/resolver/ResolverPlayground.py
index 84ad17c..6bdf2c7 100644
--- a/pym/portage/tests/resolver/ResolverPlayground.py
+++ b/pym/portage/tests/resolver/ResolverPlayground.py
@@ -39,6 +39,7 @@ class ResolverPlayground(object):
config_files = frozenset(("eapi", "layout.conf", "make.conf", "package.accept_keywords",
"package.keywords", "package.license", "package.mask", "package.properties",
+ "package.provided", "packages",
"package.unmask", "package.use", "package.use.aliases", "package.use.stable.mask",
"soname.provided",
"unpack_dependencies", "use.aliases", "use.force", "use.mask", "layout.conf"))
@@ -208,12 +209,18 @@ class ResolverPlayground(object):
raise AssertionError('digest creation failed for %s' % ebuild_path)
def _create_binpkgs(self, binpkgs):
- for cpv, metadata in binpkgs.items():
+ # When using BUILD_ID, there can be multiple instances for the
+ # same cpv. Therefore, binpkgs may be an iterable instead of
+ # a dict.
+ items = getattr(binpkgs, 'items', None)
+ items = items() if items is not None else binpkgs
+ for cpv, metadata in items:
a = Atom("=" + cpv, allow_repo=True)
repo = a.repo
if repo is None:
repo = "test_repo"
+ pn = catsplit(a.cp)[1]
cat, pf = catsplit(a.cpv)
metadata = metadata.copy()
metadata.setdefault("SLOT", "0")
@@ -225,8 +232,13 @@ class ResolverPlayground(object):
repo_dir = self.pkgdir
category_dir = os.path.join(repo_dir, cat)
- binpkg_path = os.path.join(category_dir, pf + ".tbz2")
- ensure_dirs(category_dir)
+ if "BUILD_ID" in metadata:
+ binpkg_path = os.path.join(category_dir, pn,
+ "%s-%s.xpak"% (pf, metadata["BUILD_ID"]))
+ else:
+ binpkg_path = os.path.join(category_dir, pf + ".tbz2")
+
+ ensure_dirs(os.path.dirname(binpkg_path))
t = portage.xpak.tbz2(binpkg_path)
t.recompose_mem(portage.xpak.xpak_mem(metadata))
@@ -252,6 +264,7 @@ class ResolverPlayground(object):
unknown_keys = set(metadata).difference(
portage.dbapi.dbapi._known_keys)
unknown_keys.discard("BUILD_TIME")
+ unknown_keys.discard("BUILD_ID")
unknown_keys.discard("COUNTER")
unknown_keys.discard("repository")
unknown_keys.discard("USE")
@@ -749,7 +762,11 @@ class ResolverPlaygroundResult(object):
repo_str = ""
if x.repo != "test_repo":
repo_str = _repo_separator + x.repo
- mergelist_str = x.cpv + repo_str
+ build_id_str = ""
+ if (x.type_name == "binary" and
+ x.cpv.build_id is not None):
+ build_id_str = "-%s" % x.cpv.build_id
+ mergelist_str = x.cpv + build_id_str + repo_str
if x.built:
if x.operation == "merge":
desc = x.type_name
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py b/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
new file mode 100644
index 0000000..4725d33
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
@@ -0,0 +1,2 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py b/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
new file mode 100644
index 0000000..4725d33
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
@@ -0,0 +1,2 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py b/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
new file mode 100644
index 0000000..5729df4
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
@@ -0,0 +1,101 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from portage.tests import TestCase
+from portage.tests.resolver.ResolverPlayground import (ResolverPlayground,
+ ResolverPlaygroundTestCase)
+
+class RebuiltBinariesCase(TestCase):
+
+ def testRebuiltBinaries(self):
+
+ user_config = {
+ "make.conf":
+ (
+ "FEATURES=\"binpkg-multi-instance\"",
+ ),
+ }
+
+ binpkgs = (
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ )
+
+ installed = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ },
+ }
+
+ world = (
+ "app-misc/A",
+ "dev-libs/B",
+ )
+
+ test_cases = (
+
+ ResolverPlaygroundTestCase(
+ ["@world"],
+ options = {
+ "--deep": True,
+ "--rebuilt-binaries": True,
+ "--update": True,
+ "--usepkgonly": True,
+ },
+ success = True,
+ ignore_mergelist_order=True,
+ mergelist = [
+ "[binary]dev-libs/B-1-3",
+ "[binary]app-misc/A-1-3"
+ ]
+ ),
+
+ )
+
+ playground = ResolverPlayground(debug=False,
+ binpkgs=binpkgs, installed=installed,
+ user_config=user_config, world=world)
+ try:
+ for test_case in test_cases:
+ playground.run_TestCase(test_case)
+ self.assertEqual(test_case.test_success, True,
+ test_case.fail_msg)
+ finally:
+ # Disable debug so that cleanup works.
+ #playground.debug = False
+ playground.cleanup()
--
2.0.5
* [gentoo-portage-dev] [PATCH 5/7] binpkg-multi-instance 5 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
` (3 preceding siblings ...)
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 4/7] binpkg-multi-instance 4 " Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 6/7] binpkg-multi-instance 6 " Zac Medico
` (2 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Remove the unused bintree _pkgindex_cpv_map_latest_build function.
Binhost clients running older versions of portage use this function to
select the latest builds when their binhost server switches to
FEATURES=binpkg-multi-instance, but it is unused here now that portage
can examine multiple builds directly, sorting them by BUILD_TIME so
that the latest builds are preferred when appropriate.
---
pym/portage/dbapi/bintree.py | 42 ------------------------------------------
1 file changed, 42 deletions(-)
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index 460b9f7..5636a5f 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -261,48 +261,6 @@ class bindbapi(fakedbapi):
return filesdict
-def _pkgindex_cpv_map_latest_build(pkgindex):
- """
- Given a PackageIndex instance, create a dict of cpv -> metadata map.
- If multiple packages have identical CPV values, prefer the package
- with latest BUILD_TIME value.
- @param pkgindex: A PackageIndex instance.
- @type pkgindex: PackageIndex
- @rtype: dict
- @return: a dict containing entry for the give cpv.
- """
- cpv_map = {}
-
- for d in pkgindex.packages:
- cpv = d["CPV"]
-
- try:
- cpv = _pkg_str(cpv)
- except InvalidData:
- writemsg(_("!!! Invalid remote binary package: %s\n") % cpv,
- noiselevel=-1)
- continue
-
- btime = d.get('BUILD_TIME', '')
- try:
- btime = int(btime)
- except ValueError:
- btime = None
-
- other_d = cpv_map.get(cpv)
- if other_d is not None:
- other_btime = other_d.get('BUILD_TIME', '')
- try:
- other_btime = int(other_btime)
- except ValueError:
- other_btime = None
- if other_btime and (not btime or other_btime > btime):
- continue
-
- cpv_map[_pkg_str(cpv)] = d
-
- return cpv_map
-
class binarytree(object):
"this tree scans for a list of all packages available in PKGDIR"
def __init__(self, _unused=DeprecationWarning, pkgdir=None,
--
2.0.5
* [gentoo-portage-dev] [PATCH 6/7] binpkg-multi-instance 6 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
` (4 preceding siblings ...)
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 5/7] binpkg-multi-instance 5 " Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 7/7] binpkg-multi-instance 7 " Zac Medico
2015-03-04 21:34 ` [gentoo-portage-dev] [PATCH 0/7] Add FEATURES=binpkg-multi-instance (bug 150031) Brian Dolbec
7 siblings, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Remove unused binarytree _remove_symlink, _create_symlink,
prevent_collision, _move_to_all, and _move_from_all methods. These are
all related to the oldest PKGDIR layout, which put all of the tbz2
files in $PKGDIR/All, and created symlinks to them in the category
directories. The $PKGDIR/All layout should be practically extinct by
now. Now portage recognizes all existing layouts, or mixtures of them,
and uses the old packages in place. It never puts new packages in
$PKGDIR/All, so there's no need to move packages around to prevent file
name collisions between packages from different categories. It also
only uses regular files (any symlinks are ignored).
---
pym/portage/dbapi/bintree.py | 117 ++-----------------------------------------
1 file changed, 4 insertions(+), 113 deletions(-)
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index 5636a5f..a475fb5 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -471,89 +471,11 @@ class binarytree(object):
return moves
- def _remove_symlink(self, cpv):
- """Remove a ${PKGDIR}/${CATEGORY}/${PF}.tbz2 symlink and also remove
- the ${PKGDIR}/${CATEGORY} directory if empty. The file will not be
- removed if os.path.islink() returns False."""
- mycat, mypkg = catsplit(cpv)
- mylink = os.path.join(self.pkgdir, mycat, mypkg + ".tbz2")
- if os.path.islink(mylink):
- """Only remove it if it's really a link so that this method never
- removes a real package that was placed here to avoid a collision."""
- os.unlink(mylink)
- try:
- os.rmdir(os.path.join(self.pkgdir, mycat))
- except OSError as e:
- if e.errno not in (errno.ENOENT,
- errno.ENOTEMPTY, errno.EEXIST):
- raise
- del e
-
- def _create_symlink(self, cpv):
- """Create a ${PKGDIR}/${CATEGORY}/${PF}.tbz2 symlink (and
- ${PKGDIR}/${CATEGORY} directory, if necessary). Any file that may
- exist in the location of the symlink will first be removed."""
- mycat, mypkg = catsplit(cpv)
- full_path = os.path.join(self.pkgdir, mycat, mypkg + ".tbz2")
- self._ensure_dir(os.path.dirname(full_path))
- try:
- os.unlink(full_path)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- os.symlink(os.path.join("..", "All", mypkg + ".tbz2"), full_path)
-
def prevent_collision(self, cpv):
- """Make sure that the file location ${PKGDIR}/All/${PF}.tbz2 is safe to
- use for a given cpv. If a collision will occur with an existing
- package from another category, the existing package will be bumped to
- ${PKGDIR}/${CATEGORY}/${PF}.tbz2 so that both can coexist."""
- if not self._all_directory:
- return
-
- # Copy group permissions for new directories that
- # may have been created.
- for path in ("All", catsplit(cpv)[0]):
- path = os.path.join(self.pkgdir, path)
- self._ensure_dir(path)
- if not os.access(path, os.W_OK):
- raise PermissionDenied("access('%s', W_OK)" % path)
-
- full_path = self.getname(cpv)
- if "All" == full_path.split(os.path.sep)[-2]:
- return
- """Move a colliding package if it exists. Code below this point only
- executes in rare cases."""
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- mypath = os.path.join("All", myfile)
- dest_path = os.path.join(self.pkgdir, mypath)
-
- try:
- st = os.lstat(dest_path)
- except OSError:
- st = None
- else:
- if stat.S_ISLNK(st.st_mode):
- st = None
- try:
- os.unlink(dest_path)
- except OSError:
- if os.path.exists(dest_path):
- raise
-
- if st is not None:
- # For invalid packages, other_cat could be None.
- other_cat = portage.xpak.tbz2(dest_path).getfile(b"CATEGORY")
- if other_cat:
- other_cat = _unicode_decode(other_cat,
- encoding=_encodings['repo.content'], errors='replace')
- other_cat = other_cat.strip()
- other_cpv = other_cat + "/" + mypkg
- self._move_from_all(other_cpv)
- self.inject(other_cpv)
- self._move_to_all(cpv)
+ warnings.warn("The "
+ "portage.dbapi.bintree.binarytree.prevent_collision "
+ "method is deprecated.",
+ DeprecationWarning, stacklevel=2)
def _ensure_dir(self, path):
"""
@@ -589,37 +511,6 @@ class binarytree(object):
except PortageException:
pass
- def _move_to_all(self, cpv):
- """If the file exists, move it. Whether or not it exists, update state
- for future getname() calls."""
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- self._pkg_paths[cpv] = os.path.join("All", myfile)
- src_path = os.path.join(self.pkgdir, mycat, myfile)
- try:
- mystat = os.lstat(src_path)
- except OSError as e:
- mystat = None
- if mystat and stat.S_ISREG(mystat.st_mode):
- self._ensure_dir(os.path.join(self.pkgdir, "All"))
- dest_path = os.path.join(self.pkgdir, "All", myfile)
- _movefile(src_path, dest_path, mysettings=self.settings)
- self._create_symlink(cpv)
- self.inject(cpv)
-
- def _move_from_all(self, cpv):
- """Move a package from ${PKGDIR}/All/${PF}.tbz2 to
- ${PKGDIR}/${CATEGORY}/${PF}.tbz2 and update state from getname calls."""
- self._remove_symlink(cpv)
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- mypath = os.path.join(mycat, myfile)
- dest_path = os.path.join(self.pkgdir, mypath)
- self._ensure_dir(os.path.dirname(dest_path))
- src_path = os.path.join(self.pkgdir, "All", myfile)
- _movefile(src_path, dest_path, mysettings=self.settings)
- self._pkg_paths[cpv] = mypath
-
def populate(self, getbinpkgs=0):
"populates the binarytree"
--
2.0.5
* [gentoo-portage-dev] [PATCH 7/7] binpkg-multi-instance 7 of 7
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
` (5 preceding siblings ...)
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 6/7] binpkg-multi-instance 6 " Zac Medico
@ 2015-02-18 3:05 ` Zac Medico
2015-03-04 21:34 ` [gentoo-portage-dev] [PATCH 0/7] Add FEATURES=binpkg-multi-instance (bug 150031) Brian Dolbec
7 siblings, 0 replies; 23+ messages in thread
From: Zac Medico @ 2015-02-18 3:05 UTC (permalink / raw
To: gentoo-portage-dev; +Cc: Zac Medico
Support "profile-formats = build-id" setting for layout.conf. When
this is enabled in layout.conf of the containing repository, a
dependency atom in the profile can refer to a specific build, using the
build-id that is assigned when FEATURES=binpkg-multi-instance is
enabled. A build-id atom is identical to a version-specific atom,
except that the version is followed by a hyphen and an integer build-id.
With the build-id profile format, it is possible to assemble a system
using specific builds of binary packages, as users of "binary"
distros might be accustomed to. For example, an atom in the "packages"
file can pull a specific build of a package into the @system set, and
an atom in the "package.keywords" file can be used to modify the
effective KEYWORDS of a specific build of a package.
Referring to specific builds can be useful for a number of reasons. For
example, if a particular build needs to undergo a large amount of
testing in a complex environment in order to verify reliability, then
it can be useful to lock a profile to a specific build that has been
thoroughly tested.
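As a minimal sketch (assuming this patched portage.dep module; the
package name is hypothetical), the new allow_build_id parameter decides
whether such an atom parses as valid:

from portage.dep import isvalidatom

# "=app-misc/foo-1.2.3-7" requests build-id 7 of version 1.2.3;
# without allow_build_id the trailing "-7" makes the atom invalid.
print(isvalidatom("=app-misc/foo-1.2.3-7", allow_build_id=True))   # expected: True
print(isvalidatom("=app-misc/foo-1.2.3-7", allow_build_id=False))  # expected: False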
---
This patch is identical to "[PATCH 2/2] Add profile-formats=build-id
(bug 150031)" which was sent to the list earlier, except that it adds
a comment about "smart" defaults in the Atom class.
man/portage.5 | 8 +-
pym/_emerge/is_valid_package_atom.py | 5 +-
pym/portage/_sets/ProfilePackageSet.py | 3 +-
pym/portage/_sets/profiles.py | 3 +-
pym/portage/dep/__init__.py | 38 +++++-
.../package/ebuild/_config/KeywordsManager.py | 3 +-
.../package/ebuild/_config/LocationsManager.py | 8 +-
pym/portage/package/ebuild/_config/MaskManager.py | 21 +++-
pym/portage/package/ebuild/_config/UseManager.py | 14 ++-
pym/portage/package/ebuild/config.py | 15 ++-
pym/portage/repository/config.py | 2 +-
pym/portage/tests/dep/test_isvalidatom.py | 8 +-
.../test_build_id_profile_format.py | 134 +++++++++++++++++++++
pym/portage/util/__init__.py | 13 +-
14 files changed, 237 insertions(+), 38 deletions(-)
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
diff --git a/man/portage.5 b/man/portage.5
index ed5140d..e062f9f 100644
--- a/man/portage.5
+++ b/man/portage.5
@@ -1180,7 +1180,7 @@ and the newer/faster "md5-dict" format. Default is to detect dirs.
The EAPI to use for profiles when unspecified. This attribute is
supported only if profile-default-eapi is included in profile-formats.
.TP
-.BR profile\-formats " = [pms] [portage-1] [portage-2] [profile-bashrcs] [profile-set] [profile-default-eapi]"
+.BR profile\-formats " = [pms] [portage-1] [portage-2] [profile-bashrcs] [profile-set] [profile-default-eapi] [build-id]"
Control functionality available to profiles in this repo such as which files
may be dirs, or the syntax available in parent files. Use "portage-2" if you're
unsure. The default is "portage-1-compat" mode which is meant to be compatible
@@ -1190,7 +1190,11 @@ Setting profile-bashrcs will enable the per-profile bashrc mechanism
profile \fBpackages\fR file to add atoms to the @profile package set.
See the profile \fBpackages\fR section for more information.
Setting profile-default-eapi enables support for the
-profile_eapi_when_unspecified attribute.
+profile_eapi_when_unspecified attribute. Setting build\-id allows
+dependency atoms in the profile to refer to specific builds (see the
+binpkg\-multi\-instance FEATURES setting in \fBmake.conf\fR(5)). A
+build\-id atom is identical to a version-specific atom, except that the
+version is followed by a hyphen and an integer build\-id.
.RE
.RE
diff --git a/pym/_emerge/is_valid_package_atom.py b/pym/_emerge/is_valid_package_atom.py
index 112afc1..17f7642 100644
--- a/pym/_emerge/is_valid_package_atom.py
+++ b/pym/_emerge/is_valid_package_atom.py
@@ -14,9 +14,10 @@ def insert_category_into_atom(atom, category):
ret = None
return ret
-def is_valid_package_atom(x, allow_repo=False):
+def is_valid_package_atom(x, allow_repo=False, allow_build_id=True):
if "/" not in x.split(":")[0]:
x2 = insert_category_into_atom(x, 'cat')
if x2 != None:
x = x2
- return isvalidatom(x, allow_blockers=False, allow_repo=allow_repo)
+ return isvalidatom(x, allow_blockers=False, allow_repo=allow_repo,
+ allow_build_id=allow_build_id)
diff --git a/pym/portage/_sets/ProfilePackageSet.py b/pym/portage/_sets/ProfilePackageSet.py
index 2fcafb6..fec9373 100644
--- a/pym/portage/_sets/ProfilePackageSet.py
+++ b/pym/portage/_sets/ProfilePackageSet.py
@@ -23,7 +23,8 @@ class ProfilePackageSet(PackageSet):
def load(self):
self._setAtoms(x for x in stack_lists(
[grabfile_package(os.path.join(y.location, "packages"),
- verify_eapi=True, eapi=y.eapi, eapi_default=None)
+ verify_eapi=True, eapi=y.eapi, eapi_default=None,
+ allow_build_id=y.allow_build_id)
for y in self._profiles
if "profile-set" in y.profile_formats],
incremental=1) if x[:1] != "*")
diff --git a/pym/portage/_sets/profiles.py b/pym/portage/_sets/profiles.py
index ccb3432..bccc02e 100644
--- a/pym/portage/_sets/profiles.py
+++ b/pym/portage/_sets/profiles.py
@@ -34,7 +34,8 @@ class PackagesSystemSet(PackageSet):
(self._profiles,), level=logging.DEBUG, noiselevel=-1)
mylist = [grabfile_package(os.path.join(x.location, "packages"),
- verify_eapi=True, eapi=x.eapi, eapi_default=None)
+ verify_eapi=True, eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id)
for x in self._profiles]
if debug:
diff --git a/pym/portage/dep/__init__.py b/pym/portage/dep/__init__.py
index e2e416c..1f2c8cd 100644
--- a/pym/portage/dep/__init__.py
+++ b/pym/portage/dep/__init__.py
@@ -1191,11 +1191,11 @@ class Atom(_unicode):
self.overlap = self._overlap(forbid=forbid_overlap)
def __new__(cls, s, unevaluated_atom=None, allow_wildcard=False, allow_repo=None,
- _use=None, eapi=None, is_valid_flag=None):
+ _use=None, eapi=None, is_valid_flag=None, allow_build_id=None):
return _unicode.__new__(cls, s)
def __init__(self, s, unevaluated_atom=None, allow_wildcard=False, allow_repo=None,
- _use=None, eapi=None, is_valid_flag=None):
+ _use=None, eapi=None, is_valid_flag=None, allow_build_id=None):
if isinstance(s, Atom):
# This is an efficiency assertion, to ensure that the Atom
# constructor is not called redundantly.
@@ -1216,8 +1216,13 @@ class Atom(_unicode):
# Ignore allow_repo when eapi is specified.
allow_repo = eapi_attrs.repo_deps
else:
+ # These parameters have "smart" defaults that are only
+ # applied when the caller does not explicitly pass in a
+ # True or False value.
if allow_repo is None:
allow_repo = True
+ if allow_build_id is None:
+ allow_build_id = True
blocker_prefix = ""
if "!" == s[:1]:
@@ -1232,6 +1237,7 @@ class Atom(_unicode):
blocker = False
self.__dict__['blocker'] = blocker
m = atom_re.match(s)
+ build_id = None
extended_syntax = False
extended_version = None
if m is None:
@@ -1268,8 +1274,22 @@ class Atom(_unicode):
slot = m.group(atom_re.groups - 2)
repo = m.group(atom_re.groups - 1)
use_str = m.group(atom_re.groups)
- if m.group(base + 4) is not None:
- raise InvalidAtom(self)
+ version = m.group(base + 4)
+ if version is not None:
+ if allow_build_id:
+ cpv_build_id = cpv
+ cpv = cp
+ cp = cp[:-len(version)]
+ build_id = cpv_build_id[len(cpv)+1:]
+ if len(build_id) > 1 and build_id[:1] == "0":
+ # Leading zeros are not allowed.
+ raise InvalidAtom(self)
+ try:
+ build_id = int(build_id)
+ except ValueError:
+ raise InvalidAtom(self)
+ else:
+ raise InvalidAtom(self)
elif m.group('star') is not None:
base = atom_re.groupindex['star']
op = '=*'
@@ -1332,6 +1352,7 @@ class Atom(_unicode):
self.__dict__['slot_operator'] = None
self.__dict__['operator'] = op
self.__dict__['extended_syntax'] = extended_syntax
+ self.__dict__['build_id'] = build_id
if not (repo is None or allow_repo):
raise InvalidAtom(self)
@@ -1877,7 +1898,7 @@ def dep_getusedeps( depend ):
return tuple(use_list)
def isvalidatom(atom, allow_blockers=False, allow_wildcard=False,
- allow_repo=False, eapi=None):
+ allow_repo=False, eapi=None, allow_build_id=False):
"""
Check to see if a depend atom is valid
@@ -1902,7 +1923,8 @@ def isvalidatom(atom, allow_blockers=False, allow_wildcard=False,
try:
if not isinstance(atom, Atom):
atom = Atom(atom, allow_wildcard=allow_wildcard,
- allow_repo=allow_repo, eapi=eapi)
+ allow_repo=allow_repo, eapi=eapi,
+ allow_build_id=allow_build_id)
if not allow_blockers and atom.blocker:
return False
return True
@@ -2107,6 +2129,7 @@ def match_from_list(mydep, candidate_list):
mycpv = mydep.cpv
mycpv_cps = catpkgsplit(mycpv) # Can be None if not specific
slot = mydep.slot
+ build_id = mydep.build_id
if not mycpv_cps:
cat, pkg = catsplit(mycpv)
@@ -2181,6 +2204,9 @@ def match_from_list(mydep, candidate_list):
xcpv = remove_slot(x)
if not cpvequal(xcpv, mycpv):
continue
+ if (build_id is not None and
+ getattr(xcpv, "build_id", None) != build_id):
+ continue
mylist.append(x)
elif operator == "=*": # glob match
diff --git a/pym/portage/package/ebuild/_config/KeywordsManager.py b/pym/portage/package/ebuild/_config/KeywordsManager.py
index e1a8e2b..72e24b9 100644
--- a/pym/portage/package/ebuild/_config/KeywordsManager.py
+++ b/pym/portage/package/ebuild/_config/KeywordsManager.py
@@ -22,7 +22,8 @@ class KeywordsManager(object):
rawpkeywords = [grabdict_package(
os.path.join(x.location, "package.keywords"),
recursive=x.portage1_directories,
- verify_eapi=True, eapi=x.eapi, eapi_default=None)
+ verify_eapi=True, eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id)
for x in profiles]
for pkeyworddict in rawpkeywords:
if not pkeyworddict:
diff --git a/pym/portage/package/ebuild/_config/LocationsManager.py b/pym/portage/package/ebuild/_config/LocationsManager.py
index 34b33e9..55b8c08 100644
--- a/pym/portage/package/ebuild/_config/LocationsManager.py
+++ b/pym/portage/package/ebuild/_config/LocationsManager.py
@@ -31,7 +31,8 @@ _PORTAGE1_DIRECTORIES = frozenset([
'use.mask', 'use.force'])
_profile_node = collections.namedtuple('_profile_node',
- 'location portage1_directories user_config profile_formats eapi')
+ ('location', 'portage1_directories', 'user_config',
+ 'profile_formats', 'eapi', 'allow_build_id'))
_allow_parent_colon = frozenset(
["portage-2"])
@@ -142,7 +143,8 @@ class LocationsManager(object):
_profile_node(custom_prof, True, True,
('profile-bashrcs', 'profile-set'),
read_corresponding_eapi_file(
- custom_prof + os.sep, default=None)))
+ custom_prof + os.sep, default=None),
+ True))
del custom_prof
self.profiles = tuple(self.profiles)
@@ -253,7 +255,7 @@ class LocationsManager(object):
self.profiles.append(currentPath)
self.profiles_complex.append(
_profile_node(currentPath, allow_directories, False,
- current_formats, eapi))
+ current_formats, eapi, 'build-id' in current_formats))
def _expand_parent_colon(self, parentsFile, parentPath,
repo_loc, repositories):
diff --git a/pym/portage/package/ebuild/_config/MaskManager.py b/pym/portage/package/ebuild/_config/MaskManager.py
index 55c8c7a..44aba23 100644
--- a/pym/portage/package/ebuild/_config/MaskManager.py
+++ b/pym/portage/package/ebuild/_config/MaskManager.py
@@ -40,7 +40,9 @@ class MaskManager(object):
pmask_cache[loc] = grabfile_package(path,
recursive=repo_config.portage1_profiles,
remember_source_file=True, verify_eapi=True,
- eapi_default=repo_config.eapi)
+ eapi_default=repo_config.eapi,
+ allow_build_id=("build-id"
+ in repo_config.profile_formats))
if repo_config.portage1_profiles_compat and os.path.isdir(path):
warnings.warn(_("Repository '%(repo_name)s' is implicitly using "
"'portage-1' profile format in its profiles/package.mask, but "
@@ -107,7 +109,8 @@ class MaskManager(object):
continue
repo_lines = grabfile_package(os.path.join(repo.location, "profiles", "package.unmask"), \
recursive=1, remember_source_file=True,
- verify_eapi=True, eapi_default=repo.eapi)
+ verify_eapi=True, eapi_default=repo.eapi,
+ allow_build_id=("build-id" in repo.profile_formats))
lines = stack_lists([repo_lines], incremental=1, \
remember_source_file=True, warn_for_unmatched_removal=True,
strict_warn_for_unmatched_removal=strict_umatched_removal)
@@ -122,13 +125,15 @@ class MaskManager(object):
os.path.join(x.location, "package.mask"),
recursive=x.portage1_directories,
remember_source_file=True, verify_eapi=True,
- eapi=x.eapi, eapi_default=None))
+ eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id))
if x.portage1_directories:
profile_pkgunmasklines.append(grabfile_package(
os.path.join(x.location, "package.unmask"),
recursive=x.portage1_directories,
remember_source_file=True, verify_eapi=True,
- eapi=x.eapi, eapi_default=None))
+ eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id))
profile_pkgmasklines = stack_lists(profile_pkgmasklines, incremental=1, \
remember_source_file=True, warn_for_unmatched_removal=True,
strict_warn_for_unmatched_removal=strict_umatched_removal)
@@ -143,10 +148,14 @@ class MaskManager(object):
if user_config:
user_pkgmasklines = grabfile_package(
os.path.join(abs_user_config, "package.mask"), recursive=1, \
- allow_wildcard=True, allow_repo=True, remember_source_file=True, verify_eapi=False)
+ allow_wildcard=True, allow_repo=True,
+ remember_source_file=True, verify_eapi=False,
+ allow_build_id=True)
user_pkgunmasklines = grabfile_package(
os.path.join(abs_user_config, "package.unmask"), recursive=1, \
- allow_wildcard=True, allow_repo=True, remember_source_file=True, verify_eapi=False)
+ allow_wildcard=True, allow_repo=True,
+ remember_source_file=True, verify_eapi=False,
+ allow_build_id=True)
#Stack everything together. At this point, only user_pkgmasklines may contain -atoms.
#Don't warn for unmatched -atoms here, since we don't do it for any other user config file.
diff --git a/pym/portage/package/ebuild/_config/UseManager.py b/pym/portage/package/ebuild/_config/UseManager.py
index 60d5f92..a93ea5c 100644
--- a/pym/portage/package/ebuild/_config/UseManager.py
+++ b/pym/portage/package/ebuild/_config/UseManager.py
@@ -153,7 +153,8 @@ class UseManager(object):
return tuple(ret)
def _parse_file_to_dict(self, file_name, juststrings=False, recursive=True,
- eapi_filter=None, user_config=False, eapi=None, eapi_default="0"):
+ eapi_filter=None, user_config=False, eapi=None, eapi_default="0",
+ allow_build_id=False):
"""
@param file_name: input file name
@type file_name: str
@@ -176,6 +177,9 @@ class UseManager(object):
@param eapi_default: the default EAPI which applies if the
current profile node does not define a local EAPI
@type eapi_default: str
+ @param allow_build_id: allow atoms to specify a particular
+ build-id
+ @type allow_build_id: bool
@rtype: tuple
@return: collection of USE flags
"""
@@ -192,7 +196,7 @@ class UseManager(object):
file_dict = grabdict_package(file_name, recursive=recursive,
allow_wildcard=extended_syntax, allow_repo=extended_syntax,
verify_eapi=(not extended_syntax), eapi=eapi,
- eapi_default=eapi_default)
+ eapi_default=eapi_default, allow_build_id=allow_build_id)
if eapi is not None and eapi_filter is not None and not eapi_filter(eapi):
if file_dict:
writemsg(_("--- EAPI '%s' does not support '%s': '%s'\n") %
@@ -262,7 +266,8 @@ class UseManager(object):
for repo in repositories.repos_with_profiles():
ret[repo.name] = self._parse_file_to_dict(
os.path.join(repo.location, "profiles", file_name),
- eapi_filter=eapi_filter, eapi_default=repo.eapi)
+ eapi_filter=eapi_filter, eapi_default=repo.eapi,
+ allow_build_id=("build-id" in repo.profile_formats))
return ret
def _parse_profile_files_to_tuple_of_tuples(self, file_name, locations,
@@ -279,7 +284,8 @@ class UseManager(object):
os.path.join(profile.location, file_name), juststrings,
recursive=profile.portage1_directories, eapi_filter=eapi_filter,
user_config=profile.user_config, eapi=profile.eapi,
- eapi_default=None) for profile in locations)
+ eapi_default=None, allow_build_id=profile.allow_build_id)
+ for profile in locations)
def _parse_repository_usealiases(self, repositories):
ret = {}
diff --git a/pym/portage/package/ebuild/config.py b/pym/portage/package/ebuild/config.py
index 71fe4df..f16d16e 100644
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@ -569,7 +569,8 @@ class config(object):
try:
packages_list = [grabfile_package(
os.path.join(x.location, "packages"),
- verify_eapi=True, eapi=x.eapi, eapi_default=None)
+ verify_eapi=True, eapi=x.eapi, eapi_default=None,
+ allow_build_id=x.allow_build_id)
for x in profiles_complex]
except IOError as e:
if e.errno == IsADirectory.errno:
@@ -707,7 +708,8 @@ class config(object):
#package.properties
propdict = grabdict_package(os.path.join(
abs_user_config, "package.properties"), recursive=1, allow_wildcard=True, \
- allow_repo=True, verify_eapi=False)
+ allow_repo=True, verify_eapi=False,
+ allow_build_id=True)
v = propdict.pop("*/*", None)
if v is not None:
if "ACCEPT_PROPERTIES" in self.configdict["conf"]:
@@ -721,7 +723,8 @@ class config(object):
d = grabdict_package(os.path.join(
abs_user_config, "package.accept_restrict"),
recursive=True, allow_wildcard=True,
- allow_repo=True, verify_eapi=False)
+ allow_repo=True, verify_eapi=False,
+ allow_build_id=True)
v = d.pop("*/*", None)
if v is not None:
if "ACCEPT_RESTRICT" in self.configdict["conf"]:
@@ -734,7 +737,8 @@ class config(object):
#package.env
penvdict = grabdict_package(os.path.join(
abs_user_config, "package.env"), recursive=1, allow_wildcard=True, \
- allow_repo=True, verify_eapi=False)
+ allow_repo=True, verify_eapi=False,
+ allow_build_id=True)
v = penvdict.pop("*/*", None)
if v is not None:
global_wildcard_conf = {}
@@ -764,7 +768,8 @@ class config(object):
bashrc = grabdict_package(os.path.join(profile.location,
"package.bashrc"), recursive=1, allow_wildcard=True,
allow_repo=True, verify_eapi=True,
- eapi=profile.eapi, eapi_default=None)
+ eapi=profile.eapi, eapi_default=None,
+ allow_build_id=profile.allow_build_id)
if not bashrc:
continue
diff --git a/pym/portage/repository/config.py b/pym/portage/repository/config.py
index a884156..5da1810 100644
--- a/pym/portage/repository/config.py
+++ b/pym/portage/repository/config.py
@@ -42,7 +42,7 @@ _invalid_path_char_re = re.compile(r'[^a-zA-Z0-9._\-+:/]')
_valid_profile_formats = frozenset(
['pms', 'portage-1', 'portage-2', 'profile-bashrcs', 'profile-set',
- 'profile-default-eapi'])
+ 'profile-default-eapi', 'build-id'])
_portage1_profiles_allow_directories = frozenset(
["portage-1-compat", "portage-1", 'portage-2'])
diff --git a/pym/portage/tests/dep/test_isvalidatom.py b/pym/portage/tests/dep/test_isvalidatom.py
index 67ba603..9d3367a 100644
--- a/pym/portage/tests/dep/test_isvalidatom.py
+++ b/pym/portage/tests/dep/test_isvalidatom.py
@@ -5,11 +5,13 @@ from portage.tests import TestCase
from portage.dep import isvalidatom
class IsValidAtomTestCase(object):
- def __init__(self, atom, expected, allow_wildcard=False, allow_repo=False):
+ def __init__(self, atom, expected, allow_wildcard=False,
+ allow_repo=False, allow_build_id=False):
self.atom = atom
self.expected = expected
self.allow_wildcard = allow_wildcard
self.allow_repo = allow_repo
+ self.allow_build_id = allow_build_id
class IsValidAtom(TestCase):
@@ -154,5 +156,7 @@ class IsValidAtom(TestCase):
else:
atom_type = "invalid"
self.assertEqual(bool(isvalidatom(test_case.atom, allow_wildcard=test_case.allow_wildcard,
- allow_repo=test_case.allow_repo)), test_case.expected,
+ allow_repo=test_case.allow_repo,
+ allow_build_id=test_case.allow_build_id)),
+ test_case.expected,
msg="isvalidatom(%s) != %s" % (test_case.atom, test_case.expected))
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py b/pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
new file mode 100644
index 0000000..0397509
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
@@ -0,0 +1,134 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from portage.tests import TestCase
+from portage.tests.resolver.ResolverPlayground import (ResolverPlayground,
+ ResolverPlaygroundTestCase)
+
+class BuildIdProfileFormatTestCase(TestCase):
+
+ def testBuildIdProfileFormat(self):
+
+ profile = {
+ "packages": ("=app-misc/A-1-2",),
+ "package.provided": ("sys-libs/zlib-1.2.8-r1",),
+ }
+
+ repo_configs = {
+ "test_repo": {
+ "layout.conf": (
+ "profile-formats = build-id profile-set",
+ ),
+ }
+ }
+
+ user_config = {
+ "make.conf":
+ (
+ "FEATURES=\"binpkg-multi-instance\"",
+ ),
+ }
+
+ ebuilds = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "IUSE": "foo",
+ },
+ }
+
+ binpkgs = (
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ "RDEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ "DEPEND": "sys-libs/zlib dev-libs/B[foo]",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "foo",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ )
+
+ installed = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ "RDEPEND": "sys-libs/zlib",
+ "DEPEND": "sys-libs/zlib",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "IUSE": "foo",
+ "USE": "foo",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ },
+ }
+
+ world = ()
+
+ test_cases = (
+
+ ResolverPlaygroundTestCase(
+ ["@world"],
+ options = {"--emptytree": True, "--usepkgonly": True},
+ success = True,
+ mergelist = [
+ "[binary]dev-libs/B-1-2",
+ "[binary]app-misc/A-1-2"
+ ]
+ ),
+
+ )
+
+ playground = ResolverPlayground(debug=False,
+ binpkgs=binpkgs, ebuilds=ebuilds, installed=installed,
+ repo_configs=repo_configs, profile=profile,
+ user_config=user_config, world=world)
+ try:
+ for test_case in test_cases:
+ playground.run_TestCase(test_case)
+ self.assertEqual(test_case.test_success, True,
+ test_case.fail_msg)
+ finally:
+ # Disable debug so that cleanup works.
+ #playground.debug = False
+ playground.cleanup()
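For orientation, with FEATURES="binpkg-multi-instance" the binary packages defined in the test above are stored using the ${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak layout, so PKGDIR ends up holding roughly the following (a sketch; the literal PKGDIR value comes from make.conf):

    ${PKGDIR}/app-misc/A/A-1-1.xpak
    ${PKGDIR}/app-misc/A/A-1-2.xpak
    ${PKGDIR}/app-misc/A/A-1-3.xpak
    ${PKGDIR}/dev-libs/B/B-1-1.xpak
    ${PKGDIR}/dev-libs/B/B-1-2.xpak
    ${PKGDIR}/dev-libs/B/B-1-3.xpak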
diff --git a/pym/portage/util/__init__.py b/pym/portage/util/__init__.py
index b6f5787..aeb951e 100644
--- a/pym/portage/util/__init__.py
+++ b/pym/portage/util/__init__.py
@@ -424,7 +424,8 @@ def read_corresponding_eapi_file(filename, default="0"):
return default
return eapi
-def grabdict_package(myfilename, juststrings=0, recursive=0, allow_wildcard=False, allow_repo=False,
+def grabdict_package(myfilename, juststrings=0, recursive=0,
+ allow_wildcard=False, allow_repo=False, allow_build_id=False,
verify_eapi=False, eapi=None, eapi_default="0"):
""" Does the same thing as grabdict except it validates keys
with isvalidatom()"""
@@ -447,7 +448,8 @@ def grabdict_package(myfilename, juststrings=0, recursive=0, allow_wildcard=Fals
for k, v in d.items():
try:
k = Atom(k, allow_wildcard=allow_wildcard,
- allow_repo=allow_repo, eapi=eapi)
+ allow_repo=allow_repo,
+ allow_build_id=allow_build_id, eapi=eapi)
except InvalidAtom as e:
writemsg(_("--- Invalid atom in %s: %s\n") % (filename, e),
noiselevel=-1)
@@ -460,7 +462,8 @@ def grabdict_package(myfilename, juststrings=0, recursive=0, allow_wildcard=Fals
return atoms
-def grabfile_package(myfilename, compatlevel=0, recursive=0, allow_wildcard=False, allow_repo=False,
+def grabfile_package(myfilename, compatlevel=0, recursive=0,
+ allow_wildcard=False, allow_repo=False, allow_build_id=False,
remember_source_file=False, verify_eapi=False, eapi=None,
eapi_default="0"):
@@ -480,7 +483,9 @@ def grabfile_package(myfilename, compatlevel=0, recursive=0, allow_wildcard=Fals
if pkg[:1] == '*' and mybasename == 'packages':
pkg = pkg[1:]
try:
- pkg = Atom(pkg, allow_wildcard=allow_wildcard, allow_repo=allow_repo, eapi=eapi)
+ pkg = Atom(pkg, allow_wildcard=allow_wildcard,
+ allow_repo=allow_repo, allow_build_id=allow_build_id,
+ eapi=eapi)
except InvalidAtom as e:
writemsg(_("--- Invalid atom in %s: %s\n") % (source_file, e),
noiselevel=-1)
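As a usage sketch for the grabfile_package() change above (the path and atom are hypothetical examples, assuming this patch is applied), the user config readers can now accept build-id atoms:

    from portage.util import grabfile_package

    # A line such as "=app-misc/A-1-2" in /etc/portage/package.mask is
    # accepted once allow_build_id=True is passed, which is how
    # MaskManager reads the user config in the hunks above.
    atoms = grabfile_package("/etc/portage/package.mask",
        recursive=1, allow_wildcard=True, allow_repo=True,
        allow_build_id=True)
    for atom in atoms:
        print(atom, getattr(atom, "build_id", None))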
--
2.0.5
* Re: [gentoo-portage-dev] [PATCH 0/7] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
` (6 preceding siblings ...)
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 7/7] binpkg-multi-instance 7 " Zac Medico
@ 2015-03-04 21:34 ` Brian Dolbec
7 siblings, 0 replies; 23+ messages in thread
From: Brian Dolbec @ 2015-03-04 21:34 UTC (permalink / raw
To: gentoo-portage-dev
On Tue, 17 Feb 2015 19:05:38 -0800
Zac Medico <zmedico@gentoo.org> wrote:
> FEATURES=binpkg-multi-instance causes an integer build-id to be
> associated with each binary package instance. Inclusion of the
> build-id in the file name of the binary package file makes it
> possible to store an arbitrary number of binary packages built from
> the same ebuild.
>
> Having multiple instances is useful for a number of purposes, such as
> retaining builds that were built with different USE flags or linked
> against different versions of libraries. The location of any
> particular package within PKGDIR can be expressed as follows:
>
> ${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
>
> The build-id starts at 1 for the first build of a particular ebuild,
> and is incremented by 1 for each new build. It is possible to share a
> writable PKGDIR over NFS, and locking ensures that each package added
> to PKGDIR will have a unique build-id. It is not necessary to migrate
> an existing PKGDIR to the new layout, since portage is capable of
> working with a mixed PKGDIR layout, where packages using the old
> layout are allowed to remain in place.
>
> The new PKGDIR layout is backward-compatible with binhost clients
> running older portage, since the file format is identical, the
> per-package PATH attribute in the 'Packages' index directs them to
> download the file from the correct URI, and they automatically use
> BUILD_TIME metadata to select the latest builds.
>
> There is currently no automated way to prune old builds from PKGDIR,
> although it is possible to remove packages manually, and then run
> 'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
> for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
>
> X-Gentoo-Bug: 150031
> X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
>
> Zac Medico (7):
> binpkg-multi-instance 1 of 7
> binpkg-multi-instance 2 of 7
> binpkg-multi-instance 3 of 7
> binpkg-multi-instance 4 of 7
> binpkg-multi-instance 5 of 7
> binpkg-multi-instance 6 of 7
> binpkg-multi-instance 7 of 7
>
> bin/quickpkg | 1 -
> man/make.conf.5 | 27 +
> man/portage.5 | 8 +-
> pym/_emerge/Binpkg.py | 33 +-
> pym/_emerge/BinpkgFetcher.py | 13 +-
> pym/_emerge/BinpkgVerifier.py | 6 +-
> pym/_emerge/EbuildBinpkg.py | 9 +-
> pym/_emerge/EbuildBuild.py | 36 +-
> pym/_emerge/Package.py | 67 +-
> pym/_emerge/Scheduler.py | 6 +-
> pym/_emerge/clear_caches.py | 1 -
> pym/_emerge/is_valid_package_atom.py | 5 +-
> pym/_emerge/resolver/output.py | 21 +-
> pym/portage/_sets/ProfilePackageSet.py | 3 +-
> pym/portage/_sets/profiles.py | 3 +-
> pym/portage/const.py | 2 +
> pym/portage/dbapi/__init__.py | 10 +-
> pym/portage/dbapi/bintree.py | 843 +++++++++++----------
> pym/portage/dbapi/vartree.py | 8 +-
> pym/portage/dbapi/virtual.py | 113 ++-
> pym/portage/dep/__init__.py | 35 +-
> pym/portage/emaint/modules/binhost/binhost.py | 47 +-
> .../package/ebuild/_config/KeywordsManager.py | 3 +-
> .../package/ebuild/_config/LocationsManager.py | 8 +-
> pym/portage/package/ebuild/_config/MaskManager.py | 21 +-
> pym/portage/package/ebuild/_config/UseManager.py | 14 +-
> pym/portage/package/ebuild/config.py | 15 +-
> pym/portage/repository/config.py | 2 +-
> pym/portage/tests/dep/test_isvalidatom.py | 8 +-
> pym/portage/tests/resolver/ResolverPlayground.py | 25 +-
> .../resolver/binpkg_multi_instance/__init__.py | 2 +
> .../resolver/binpkg_multi_instance/__test__.py | 2 +
> .../test_build_id_profile_format.py | 134 ++++
> .../binpkg_multi_instance/test_rebuilt_binaries.py | 101 +++
> pym/portage/util/__init__.py | 13 +-
> pym/portage/versions.py | 28 +-
> 36 files changed, 1144 insertions(+), 529 deletions(-)
> create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
> create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
> create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_build_id_profile_format.py
> create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
>
This 7-patch series looks fine. With 2.2.18 released, cleared to merge :)
Thanks
--
Brian Dolbec <dolsen>
* [gentoo-portage-dev] Re: [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
2015-02-17 8:37 [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031) Zac Medico
2015-02-17 8:37 ` [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id " Zac Medico
2015-02-17 18:42 ` [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance " Brian Dolbec
@ 2015-02-18 3:40 ` Duncan
2015-02-18 3:46 ` Duncan
2 siblings, 1 reply; 23+ messages in thread
From: Duncan @ 2015-02-18 3:40 UTC (permalink / raw
To: gentoo-portage-dev
Zac Medico posted on Tue, 17 Feb 2015 00:37:12 -0800 as excerpted:
> It is not necessary to migrate an
> existing PKGDIR to the new layout, since portage is capable of working
> with a mixed PKGDIR layout, where packages using the old layout are
> allowed to remain in place.
[...]
> There is currently no automated way to prune old builds from PKGDIR,
> although it is possible to remove packages manually, and then run
> 'emaint --fix binhost' to update the ${PKGDIR}/Packages index.
>
> It is not necessary to migrate an existing PKGDIR to the new layout,
> since portage is capable of working with a mixed PKGDIR layout, where
> packages using the old layout are allowed to remain in-place.
>
> There is currently no automated way to prune old builds from PKGDIR,
> although it is possible to remove packages manually, and then run
> 'emaint --fix binhost' update the ${PKGDIR}/Packages index.
[...]
Reading this gave me a distinct sense of deja vu reading this. ... =:^P
(After editing a bit, the second "reading this" deliberately left in
place. =:^)
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Thread overview: 23+ messages
2015-02-17 8:37 [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031) Zac Medico
2015-02-17 8:37 ` [gentoo-portage-dev] [PATCH 2/2] Add profile-formats=build-id " Zac Medico
2015-02-17 18:58 ` Brian Dolbec
2015-02-17 19:37 ` Zac Medico
2015-02-17 19:58 ` Brian Dolbec
2015-02-17 18:42 ` [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance " Brian Dolbec
2015-02-17 19:26 ` Zac Medico
2015-02-17 19:56 ` Brian Dolbec
2015-02-17 19:59 ` Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 0/7] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 1/7] binpkg-multi-instance 1 of 7 Zac Medico
2015-02-19 22:26 ` [gentoo-portage-dev] [PATCH 1/7 v2] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 2/7] binpkg-multi-instance 2 " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 3/7] binpkg-multi-instance 3 " Zac Medico
2015-02-27 21:58 ` [gentoo-portage-dev] [PATCH 3/7 v2] " Zac Medico
2015-02-27 23:36 ` [gentoo-portage-dev] [PATCH 3/7 v3] " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 4/7] binpkg-multi-instance 4 " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 5/7] binpkg-multi-instance 5 " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 6/7] binpkg-multi-instance 6 " Zac Medico
2015-02-18 3:05 ` [gentoo-portage-dev] [PATCH 7/7] binpkg-multi-instance 7 " Zac Medico
2015-03-04 21:34 ` [gentoo-portage-dev] [PATCH 0/7] Add FEATURES=binpkg-multi-instance (bug 150031) Brian Dolbec
2015-02-18 3:40 ` [gentoo-portage-dev] Re: [PATCH 1/2] " Duncan
2015-02-18 3:46 ` Duncan