* [gentoo-portage-dev] [PATCH 1/2] Add FEATURES=binpkg-multi-instance (bug 150031)
From: Zac Medico @ 2015-02-17 8:37 UTC (permalink / raw)
To: gentoo-portage-dev; +Cc: Zac Medico
FEATURES=binpkg-multi-instance causes an integer build-id to be
associated with each binary package instance. Inclusion of the build-id
in the file name of the binary package file makes it possible to store
an arbitrary number of binary packages built from the same ebuild.
Having multiple instances is useful for a number of purposes, such as
retaining builds that were built with different USE flags or linked
against different versions of libraries. The location of any particular
package within PKGDIR can be expressed as follows:
${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak
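For illustration, the path formula above can be sketched in Python. The helper name and the crude PN extraction are assumptions of this sketch, not portage code (portage derives PN with its own version-parsing code, e.g. catpkgsplit):

```python
import os

def binpkg_path(pkgdir, category, pf, build_id):
    # Compose ${PKGDIR}/${CATEGORY}/${PN}/${PF}-${BUILD_ID}.xpak.
    # Deriving PN from PF by stripping the last one or two hyphenated
    # components is a naive illustration only.
    pn = pf.rsplit("-", 2)[0]
    return os.path.join(pkgdir, category, pn,
        "%s-%s.xpak" % (pf, build_id))

print(binpkg_path("/usr/portage/packages", "dev-libs", "openssl-1.0.1l", 3))
# -> /usr/portage/packages/dev-libs/openssl/openssl-1.0.1l-3.xpak
```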
The build-id starts at 1 for the first build of a particular ebuild,
and is incremented by 1 for each new build. It is possible to share a
writable PKGDIR over NFS, and locking ensures that each package added
to PKGDIR will have a unique build-id. It is not necessary to migrate
an existing PKGDIR to the new layout, since portage is capable of
working with a mixed PKGDIR layout, where packages using the old layout
are allowed to remain in place.
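The allocation scheme can be sketched roughly as follows. This is a minimal illustration under the assumption of a simple per-directory lock file; portage's actual locking and allocator live in pym/portage/dbapi/bintree.py and differ in detail:

```python
import fcntl
import os

def allocate_build_id(pkg_dir, pf):
    # Scan ${PKGDIR}/${CATEGORY}/${PN} for existing ${PF}-${BUILD_ID}.xpak
    # files while holding an exclusive lock, then hand out the next id.
    # Assumes only this PF's .xpak files are present in pkg_dir.
    os.makedirs(pkg_dir, exist_ok=True)
    with open(os.path.join(pkg_dir, ".lock"), "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # serialize concurrent writers
        prefix = pf + "-"
        used = [int(name[len(prefix):-len(".xpak")])
            for name in os.listdir(pkg_dir)
            if name.startswith(prefix) and name.endswith(".xpak")]
        return max(used, default=0) + 1  # first build gets build-id 1
```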
The new PKGDIR layout is backward-compatible with binhost clients
running older portage, since the file format is identical, the
per-package PATH attribute in the 'Packages' index directs them to
download the file from the correct URI, and they automatically use
BUILD_TIME metadata to select the latest builds.
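The latest-build selection that such clients perform can be sketched as follows; this is a hypothetical reimplementation of the idea, not the client code itself:

```python
def latest_build(entries):
    # Among 'Packages' index entries sharing a CPV, prefer the largest
    # BUILD_TIME; entries with a missing or invalid BUILD_TIME sort first.
    def build_time(entry):
        try:
            return int(entry.get("BUILD_TIME", ""))
        except ValueError:
            return 0
    return max(entries, key=build_time)

builds = [
    {"CPV": "app-misc/foo-1.0", "BUILD_TIME": "1424000000",
        "PATH": "app-misc/foo/foo-1.0-1.xpak"},
    {"CPV": "app-misc/foo-1.0", "BUILD_TIME": "1424100000",
        "PATH": "app-misc/foo/foo-1.0-2.xpak"},
]
print(latest_build(builds)["PATH"])  # app-misc/foo/foo-1.0-2.xpak
```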
There is currently no automated way to prune old builds from PKGDIR,
although it is possible to remove packages manually, and then run
'emaint --fix binhost' to update the ${PKGDIR}/Packages index. Support
for FEATURES=binpkg-multi-instance is planned for eclean-pkg.
X-Gentoo-Bug: 150031
X-Gentoo-Bug-URL: https://bugs.gentoo.org/show_bug.cgi?id=150031
---
bin/quickpkg | 1 -
man/make.conf.5 | 27 +
pym/_emerge/Binpkg.py | 33 +-
pym/_emerge/BinpkgFetcher.py | 13 +-
pym/_emerge/BinpkgVerifier.py | 6 +-
pym/_emerge/EbuildBinpkg.py | 9 +-
pym/_emerge/EbuildBuild.py | 36 +-
pym/_emerge/Package.py | 67 +-
pym/_emerge/Scheduler.py | 6 +-
pym/_emerge/clear_caches.py | 1 -
pym/_emerge/resolver/output.py | 21 +-
pym/portage/const.py | 2 +
pym/portage/dbapi/__init__.py | 10 +-
pym/portage/dbapi/bintree.py | 842 +++++++++++----------
pym/portage/dbapi/vartree.py | 8 +-
pym/portage/dbapi/virtual.py | 113 ++-
pym/portage/emaint/modules/binhost/binhost.py | 47 +-
pym/portage/package/ebuild/config.py | 3 +-
pym/portage/tests/resolver/ResolverPlayground.py | 26 +-
.../resolver/binpkg_multi_instance/__init__.py | 2 +
.../resolver/binpkg_multi_instance/__test__.py | 2 +
.../binpkg_multi_instance/test_rebuilt_binaries.py | 101 +++
pym/portage/versions.py | 48 +-
23 files changed, 932 insertions(+), 492 deletions(-)
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
create mode 100644 pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
diff --git a/bin/quickpkg b/bin/quickpkg
index 2c69a69..8b71c3e 100755
--- a/bin/quickpkg
+++ b/bin/quickpkg
@@ -63,7 +63,6 @@ def quickpkg_atom(options, infos, arg, eout):
pkgs_for_arg = 0
for cpv in matches:
excluded_config_files = []
- bintree.prevent_collision(cpv)
dblnk = vardb._dblink(cpv)
have_lock = False
diff --git a/man/make.conf.5 b/man/make.conf.5
index 84b7191..6ead61b 100644
--- a/man/make.conf.5
+++ b/man/make.conf.5
@@ -256,6 +256,33 @@ has a \fB\-\-force\fR option that can be used to force regeneration of digests.
Keep logs from successful binary package merges. This is relevant only when
\fBPORT_LOGDIR\fR is set.
.TP
+.B binpkg\-multi\-instance
+Enable support for multiple binary package instances per ebuild.
+Having multiple instances is useful for a number of purposes, such as
+retaining builds that were built with different USE flags or linked
+against different versions of libraries. The location of any particular
+package within PKGDIR can be expressed as follows:
+
+ ${PKGDIR}/${CATEGORY}/${PN}/${PF}\-${BUILD_ID}.xpak
+
+The build\-id starts at 1 for the first build of a particular ebuild,
+and is incremented by 1 for each new build. It is possible to share a
+writable PKGDIR over NFS, and locking ensures that each package added
+to PKGDIR will have a unique build\-id. It is not necessary to migrate
+an existing PKGDIR to the new layout, since portage is capable of
+working with a mixed PKGDIR layout, where packages using the old layout
+are allowed to remain in place.
+
+The new PKGDIR layout is backward\-compatible with binhost clients
+running older portage, since the file format is identical, the
+per\-package PATH attribute in the 'Packages' index directs them to
+download the file from the correct URI, and they automatically use
+BUILD_TIME metadata to select the latest builds.
+
+There is currently no automated way to prune old builds from PKGDIR,
+although it is possible to remove packages manually, and then run
+\(aqemaint \-\-fix binhost\(aq to update the ${PKGDIR}/Packages index.
+.TP
.B buildpkg
Binary packages will be created for all packages that are merged. Also see
\fBquickpkg\fR(1) and \fBemerge\fR(1) \fB\-\-buildpkg\fR and
diff --git a/pym/_emerge/Binpkg.py b/pym/_emerge/Binpkg.py
index ded6dfd..7b7ae17 100644
--- a/pym/_emerge/Binpkg.py
+++ b/pym/_emerge/Binpkg.py
@@ -121,16 +121,11 @@ class Binpkg(CompositeTask):
fetcher = BinpkgFetcher(background=self.background,
logfile=self.settings.get("PORTAGE_LOG_FILE"), pkg=self.pkg,
pretend=self.opts.pretend, scheduler=self.scheduler)
- pkg_path = fetcher.pkg_path
- self._pkg_path = pkg_path
- # This gives bashrc users an opportunity to do various things
- # such as remove binary packages after they're installed.
- self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
if self.opts.getbinpkg and self._bintree.isremote(pkg.cpv):
-
msg = " --- (%s of %s) Fetching Binary (%s::%s)" %\
- (pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg_path)
+ (pkg_count.curval, pkg_count.maxval, pkg.cpv,
+ fetcher.pkg_path)
short_msg = "emerge: (%s of %s) %s Fetch" % \
(pkg_count.curval, pkg_count.maxval, pkg.cpv)
self.logger.log(msg, short_msg=short_msg)
@@ -149,7 +144,7 @@ class Binpkg(CompositeTask):
# The fetcher only has a returncode when
# --getbinpkg is enabled.
if fetcher.returncode is not None:
- self._fetched_pkg = True
+ self._fetched_pkg = fetcher.pkg_path
if self._default_exit(fetcher) != os.EX_OK:
self._unlock_builddir()
self.wait()
@@ -163,9 +158,15 @@ class Binpkg(CompositeTask):
verifier = None
if self._verify:
+ if self._fetched_pkg:
+ path = self._fetched_pkg
+ else:
+ path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
logfile = self.settings.get("PORTAGE_LOG_FILE")
verifier = BinpkgVerifier(background=self.background,
- logfile=logfile, pkg=self.pkg, scheduler=self.scheduler)
+ logfile=logfile, pkg=self.pkg, scheduler=self.scheduler,
+ _pkg_path=path)
self._start_task(verifier, self._verifier_exit)
return
@@ -181,10 +182,20 @@ class Binpkg(CompositeTask):
logger = self.logger
pkg = self.pkg
pkg_count = self.pkg_count
- pkg_path = self._pkg_path
if self._fetched_pkg:
- self._bintree.inject(pkg.cpv, filename=pkg_path)
+ pkg_path = self._bintree.getname(
+ self._bintree.inject(pkg.cpv,
+ filename=self._fetched_pkg),
+ allocate_new=False)
+ else:
+ pkg_path = self.pkg.root_config.trees["bintree"].getname(
+ self.pkg.cpv)
+
+ # This gives bashrc users an opportunity to do various things
+ # such as remove binary packages after they're installed.
+ self.settings["PORTAGE_BINPKG_FILE"] = pkg_path
+ self._pkg_path = pkg_path
logfile = self.settings.get("PORTAGE_LOG_FILE")
if logfile is not None and os.path.isfile(logfile):
diff --git a/pym/_emerge/BinpkgFetcher.py b/pym/_emerge/BinpkgFetcher.py
index 543881e..a7f2d44 100644
--- a/pym/_emerge/BinpkgFetcher.py
+++ b/pym/_emerge/BinpkgFetcher.py
@@ -24,7 +24,8 @@ class BinpkgFetcher(SpawnProcess):
def __init__(self, **kwargs):
SpawnProcess.__init__(self, **kwargs)
pkg = self.pkg
- self.pkg_path = pkg.root_config.trees["bintree"].getname(pkg.cpv)
+ self.pkg_path = pkg.root_config.trees["bintree"].getname(
+ pkg.cpv) + ".partial"
def _start(self):
@@ -51,10 +52,12 @@ class BinpkgFetcher(SpawnProcess):
# urljoin doesn't work correctly with
# unrecognized protocols like sftp
if bintree._remote_has_index:
- rel_uri = bintree._remotepkgs[pkg.cpv].get("PATH")
+ instance_key = bintree.dbapi._instance_key(pkg.cpv)
+ rel_uri = bintree._remotepkgs[instance_key].get("PATH")
if not rel_uri:
rel_uri = pkg.cpv + ".tbz2"
- remote_base_uri = bintree._remotepkgs[pkg.cpv]["BASE_URI"]
+ remote_base_uri = bintree._remotepkgs[
+ instance_key]["BASE_URI"]
uri = remote_base_uri.rstrip("/") + "/" + rel_uri.lstrip("/")
else:
uri = settings["PORTAGE_BINHOST"].rstrip("/") + \
@@ -128,7 +131,9 @@ class BinpkgFetcher(SpawnProcess):
# the fetcher didn't already do it automatically.
bintree = self.pkg.root_config.trees["bintree"]
if bintree._remote_has_index:
- remote_mtime = bintree._remotepkgs[self.pkg.cpv].get("MTIME")
+ remote_mtime = bintree._remotepkgs[
+ bintree.dbapi._instance_key(
+ self.pkg.cpv)].get("MTIME")
if remote_mtime is not None:
try:
remote_mtime = long(remote_mtime)
diff --git a/pym/_emerge/BinpkgVerifier.py b/pym/_emerge/BinpkgVerifier.py
index 2c69792..7a6d15e 100644
--- a/pym/_emerge/BinpkgVerifier.py
+++ b/pym/_emerge/BinpkgVerifier.py
@@ -33,7 +33,6 @@ class BinpkgVerifier(CompositeTask):
digests = _apply_hash_filter(digests, hash_filter)
self._digests = digests
- self._pkg_path = bintree.getname(self.pkg.cpv)
try:
size = os.stat(self._pkg_path).st_size
@@ -90,8 +89,11 @@ class BinpkgVerifier(CompositeTask):
if portage.output.havecolor:
portage.output.havecolor = not self.background
+ path = self._pkg_path
+ if path.endswith(".partial"):
+ path = path[:-len(".partial")]
eout = EOutput()
- eout.ebegin("%s %s ;-)" % (os.path.basename(self._pkg_path),
+ eout.ebegin("%s %s ;-)" % (os.path.basename(path),
" ".join(sorted(self._digests))))
eout.eend(0)
diff --git a/pym/_emerge/EbuildBinpkg.py b/pym/_emerge/EbuildBinpkg.py
index 34a6aef..6e098eb 100644
--- a/pym/_emerge/EbuildBinpkg.py
+++ b/pym/_emerge/EbuildBinpkg.py
@@ -10,13 +10,12 @@ class EbuildBinpkg(CompositeTask):
This assumes that src_install() has successfully completed.
"""
__slots__ = ('pkg', 'settings') + \
- ('_binpkg_tmpfile',)
+ ('_binpkg_tmpfile', '_binpkg_info')
def _start(self):
pkg = self.pkg
root_config = pkg.root_config
bintree = root_config.trees["bintree"]
- bintree.prevent_collision(pkg.cpv)
binpkg_tmpfile = os.path.join(bintree.pkgdir,
pkg.cpv + ".tbz2." + str(os.getpid()))
bintree._ensure_dir(os.path.dirname(binpkg_tmpfile))
@@ -43,8 +42,12 @@ class EbuildBinpkg(CompositeTask):
pkg = self.pkg
bintree = pkg.root_config.trees["bintree"]
- bintree.inject(pkg.cpv, filename=self._binpkg_tmpfile)
+ self._binpkg_info = bintree.inject(pkg.cpv,
+ filename=self._binpkg_tmpfile)
self._current_task = None
self.returncode = os.EX_OK
self.wait()
+
+ def get_binpkg_info(self):
+ return self._binpkg_info
diff --git a/pym/_emerge/EbuildBuild.py b/pym/_emerge/EbuildBuild.py
index b5b1e87..0e98602 100644
--- a/pym/_emerge/EbuildBuild.py
+++ b/pym/_emerge/EbuildBuild.py
@@ -1,6 +1,10 @@
# Copyright 1999-2014 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
+from __future__ import unicode_literals
+
+import io
+
import _emerge.emergelog
from _emerge.EbuildExecuter import EbuildExecuter
from _emerge.EbuildPhase import EbuildPhase
@@ -15,7 +19,7 @@ from _emerge.TaskSequence import TaskSequence
from portage.util import writemsg
import portage
-from portage import os
+from portage import _encodings, _unicode_decode, _unicode_encode, os
from portage.output import colorize
from portage.package.ebuild.digestcheck import digestcheck
from portage.package.ebuild.digestgen import digestgen
@@ -317,9 +321,13 @@ class EbuildBuild(CompositeTask):
phase="rpm", scheduler=self.scheduler,
settings=self.settings))
else:
- binpkg_tasks.add(EbuildBinpkg(background=self.background,
+ task = EbuildBinpkg(
+ background=self.background,
pkg=self.pkg, scheduler=self.scheduler,
- settings=self.settings))
+ settings=self.settings)
+ binpkg_tasks.add(task)
+ task.addExitListener(
+ self._record_binpkg_info)
if binpkg_tasks:
self._start_task(binpkg_tasks, self._buildpkg_exit)
@@ -356,6 +364,28 @@ class EbuildBuild(CompositeTask):
self.returncode = packager.returncode
self.wait()
+ def _record_binpkg_info(self, task):
+ if task.returncode != os.EX_OK:
+ return
+
+ # Save info about the created binary package, so that
+ # identifying information can be passed to the install
+ # task, to be recorded in the installed package database.
+ pkg = task.get_binpkg_info()
+ infoloc = os.path.join(self.settings["PORTAGE_BUILDDIR"],
+ "build-info")
+ info = {
+ "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+ }
+ if pkg.build_id is not None:
+ info["BUILD_ID"] = "%s\n" % pkg.build_id
+ for k, v in info.items():
+ with io.open(_unicode_encode(os.path.join(infoloc, k),
+ encoding=_encodings['fs'], errors='strict'),
+ mode='w', encoding=_encodings['repo.content'],
+ errors='strict') as f:
+ f.write(v)
+
def _buildpkgonly_success_hook_exit(self, success_hooks):
self._default_exit(success_hooks)
self.returncode = None
diff --git a/pym/_emerge/Package.py b/pym/_emerge/Package.py
index e8a13cb..2c1a116 100644
--- a/pym/_emerge/Package.py
+++ b/pym/_emerge/Package.py
@@ -41,12 +41,12 @@ class Package(Task):
"_validated_atoms", "_visible")
metadata_keys = [
- "BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "EAPI",
- "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROVIDE", "RDEPEND",
- "repository", "PROPERTIES", "RESTRICT", "SLOT", "USE",
- "_mtime_", "DEFINED_PHASES", "REQUIRED_USE", "PROVIDES",
- "REQUIRES"]
+ "BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "INHERITED", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRED_USE",
+ "PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
+ "SLOT", "USE", "_mtime_"]
_dep_keys = ('DEPEND', 'HDEPEND', 'PDEPEND', 'RDEPEND')
_buildtime_keys = ('DEPEND', 'HDEPEND')
@@ -114,13 +114,14 @@ class Package(Task):
return self._metadata["EAPI"]
@property
+ def build_id(self):
+ return self.cpv.build_id
+
+ @property
def build_time(self):
if not self.built:
raise AttributeError('build_time')
- try:
- return long(self._metadata['BUILD_TIME'])
- except (KeyError, ValueError):
- return 0
+ return self.cpv.build_time
@property
def defined_phases(self):
@@ -218,6 +219,8 @@ class Package(Task):
else:
raise TypeError("root_config argument is required")
+ elements = [type_name, root, _unicode(cpv), operation]
+
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
@@ -228,14 +231,22 @@ class Package(Task):
raise AssertionError(
"Package._gen_hash_key() " + \
"called without 'repo_name' argument")
- repo_key = repo_name
+ elements.append(repo_name)
+ elif type_name == "binary":
+ # Including a variety of fingerprints in the hash makes
+ # it possible to simultaneously consider multiple similar
+ # packages. Note that digests are not included here, since
+ # they are relatively expensive to compute, and they may
+ # not necessarily be available.
+ elements.extend([cpv.build_id, cpv.file_size,
+ cpv.build_time, cpv.mtime])
else:
# For installed (and binary) packages we don't care for the repo
# when it comes to hashing, because there can only be one cpv.
# So overwrite the repo_key with type_name.
- repo_key = type_name
+ elements.append(type_name)
- return (type_name, root, _unicode(cpv), operation, repo_key)
+ return tuple(elements)
def _validate_deps(self):
"""
@@ -509,9 +520,15 @@ class Package(Task):
else:
cpv_color = "PKG_NOMERGE"
+ build_id_str = ""
+ if isinstance(self.cpv.build_id, long) and self.cpv.build_id > 0:
+ build_id_str = "-%s" % self.cpv.build_id
+
s = "(%s, %s" \
- % (portage.output.colorize(cpv_color, self.cpv + _slot_separator + \
- self.slot + "/" + self.sub_slot + _repo_separator + self.repo) , self.type_name)
+ % (portage.output.colorize(cpv_color, self.cpv +
+ build_id_str + _slot_separator + self.slot + "/" +
+ self.sub_slot + _repo_separator + self.repo),
+ self.type_name)
if self.type_name == "installed":
if self.root_config.settings['ROOT'] != "/":
@@ -755,29 +772,41 @@ class Package(Task):
def __lt__(self, other):
if other.cp != self.cp:
return self.cp < other.cp
- if portage.vercmp(self.version, other.version) < 0:
+ result = portage.vercmp(self.version, other.version)
+ if result < 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time < other.build_time
return False
def __le__(self, other):
if other.cp != self.cp:
return self.cp <= other.cp
- if portage.vercmp(self.version, other.version) <= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result <= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time <= other.build_time
return False
def __gt__(self, other):
if other.cp != self.cp:
return self.cp > other.cp
- if portage.vercmp(self.version, other.version) > 0:
+ result = portage.vercmp(self.version, other.version)
+ if result > 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time > other.build_time
return False
def __ge__(self, other):
if other.cp != self.cp:
return self.cp >= other.cp
- if portage.vercmp(self.version, other.version) >= 0:
+ result = portage.vercmp(self.version, other.version)
+ if result >= 0:
return True
+ if result == 0 and self.built and other.built:
+ return self.build_time >= other.build_time
return False
def with_use(self, use):
diff --git a/pym/_emerge/Scheduler.py b/pym/_emerge/Scheduler.py
index d6db311..dae7944 100644
--- a/pym/_emerge/Scheduler.py
+++ b/pym/_emerge/Scheduler.py
@@ -862,8 +862,12 @@ class Scheduler(PollScheduler):
continue
fetched = fetcher.pkg_path
+ if fetched is False:
+ filename = bintree.getname(x.cpv)
+ else:
+ filename = fetched
verifier = BinpkgVerifier(pkg=x,
- scheduler=sched_iface)
+ scheduler=sched_iface, _pkg_path=filename)
current_task = verifier
verifier.start()
if verifier.wait() != os.EX_OK:
diff --git a/pym/_emerge/clear_caches.py b/pym/_emerge/clear_caches.py
index 513df62..cb0db10 100644
--- a/pym/_emerge/clear_caches.py
+++ b/pym/_emerge/clear_caches.py
@@ -7,7 +7,6 @@ def clear_caches(trees):
for d in trees.values():
d["porttree"].dbapi.melt()
d["porttree"].dbapi._aux_cache.clear()
- d["bintree"].dbapi._aux_cache.clear()
d["bintree"].dbapi._clear_cache()
if d["vartree"].dbapi._linkmap is None:
# preserve-libs is entirely disabled
diff --git a/pym/_emerge/resolver/output.py b/pym/_emerge/resolver/output.py
index 7df0302..400617d 100644
--- a/pym/_emerge/resolver/output.py
+++ b/pym/_emerge/resolver/output.py
@@ -424,6 +424,18 @@ class Display(object):
pkg_str += _repo_separator + pkg.repo
return pkg_str
+ def _append_build_id(self, pkg_str, pkg, pkg_info):
+ """Potentially appends build_id to package string.
+
+ @param pkg_str: string
+ @param pkg: _emerge.Package.Package instance
+ @param pkg_info: dictionary
+ @rtype string
+ """
+ if pkg.type_name == "binary" and pkg.cpv.build_id is not None:
+ pkg_str += "-%s" % pkg.cpv.build_id
+ return pkg_str
+
def _set_non_root_columns(self, pkg, pkg_info):
"""sets the indent level and formats the output
@@ -431,7 +443,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype string
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -470,7 +482,7 @@ class Display(object):
@rtype string
Modifies self.verboseadd
"""
- ver_str = pkg_info.ver
+ ver_str = self._append_build_id(pkg_info.ver, pkg, pkg_info)
if self.conf.verbosity == 3:
ver_str = self._append_slot(ver_str, pkg, pkg_info)
ver_str = self._append_repository(ver_str, pkg, pkg_info)
@@ -507,7 +519,7 @@ class Display(object):
@param pkg_info: dictionary
@rtype the updated addl
"""
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
@@ -868,7 +880,8 @@ class Display(object):
if self.conf.columns:
myprint = self._set_non_root_columns(pkg, pkg_info)
else:
- pkg_str = pkg.cpv
+ pkg_str = self._append_build_id(
+ pkg.cpv, pkg, pkg_info)
if self.conf.verbosity == 3:
pkg_str = self._append_slot(pkg_str, pkg, pkg_info)
pkg_str = self._append_repository(pkg_str, pkg, pkg_info)
diff --git a/pym/portage/const.py b/pym/portage/const.py
index febdb4a..c7ecda2 100644
--- a/pym/portage/const.py
+++ b/pym/portage/const.py
@@ -122,6 +122,7 @@ EBUILD_PHASES = (
SUPPORTED_FEATURES = frozenset([
"assume-digests",
"binpkg-logs",
+ "binpkg-multi-instance",
"buildpkg",
"buildsyspkg",
"candy",
@@ -268,6 +269,7 @@ LIVE_ECLASSES = frozenset([
])
SUPPORTED_BINPKG_FORMATS = ("tar", "rpm")
+SUPPORTED_XPAK_EXTENSIONS = (".tbz2", ".xpak")
# Time formats used in various places like metadata.chk.
TIMESTAMP_FORMAT = "%a, %d %b %Y %H:%M:%S +0000" # to be used with time.gmtime()
diff --git a/pym/portage/dbapi/__init__.py b/pym/portage/dbapi/__init__.py
index 34dfaa7..044faec 100644
--- a/pym/portage/dbapi/__init__.py
+++ b/pym/portage/dbapi/__init__.py
@@ -31,7 +31,8 @@ class dbapi(object):
_use_mutable = False
_known_keys = frozenset(x for x in auxdbkeys
if not x.startswith("UNUSED_0"))
- _pkg_str_aux_keys = ("EAPI", "KEYWORDS", "SLOT", "repository")
+ _pkg_str_aux_keys = ("BUILD_TIME", "EAPI", "BUILD_ID",
+ "KEYWORDS", "SLOT", "repository")
def __init__(self):
pass
@@ -57,7 +58,12 @@ class dbapi(object):
@staticmethod
def _cmp_cpv(cpv1, cpv2):
- return vercmp(cpv1.version, cpv2.version)
+ result = vercmp(cpv1.version, cpv2.version)
+ if (result == 0 and cpv1.build_time is not None and
+ cpv2.build_time is not None):
+ result = ((cpv1.build_time > cpv2.build_time) -
+ (cpv1.build_time < cpv2.build_time))
+ return result
@staticmethod
def _cpv_sort_ascending(cpv_list):
diff --git a/pym/portage/dbapi/bintree.py b/pym/portage/dbapi/bintree.py
index 583e208..b98b26e 100644
--- a/pym/portage/dbapi/bintree.py
+++ b/pym/portage/dbapi/bintree.py
@@ -17,14 +17,13 @@ portage.proxy.lazyimport.lazyimport(globals(),
'portage.update:update_dbentries',
'portage.util:atomic_ofstream,ensure_dirs,normalize_path,' + \
'writemsg,writemsg_stdout',
- 'portage.util.listdir:listdir',
'portage.util.path:first_existing',
'portage.util._urlopen:urlopen@_urlopen',
'portage.versions:best,catpkgsplit,catsplit,_pkg_str',
)
from portage.cache.mappings import slot_dict_class
-from portage.const import CACHE_PATH
+from portage.const import CACHE_PATH, SUPPORTED_XPAK_EXTENSIONS
from portage.dbapi.virtual import fakedbapi
from portage.dep import Atom, use_reduce, paren_enclose
from portage.exception import AlarmSignal, InvalidData, InvalidPackageName, \
@@ -71,18 +70,26 @@ class bindbapi(fakedbapi):
_known_keys = frozenset(list(fakedbapi._known_keys) + \
["CHOST", "repository", "USE"])
def __init__(self, mybintree=None, **kwargs):
- fakedbapi.__init__(self, **kwargs)
+ # Always enable multi_instance mode for bindbapi indexing. This
+ # does not affect the local PKGDIR file layout, since that is
+ # controlled independently by FEATURES=binpkg-multi-instance.
+ # The multi_instance mode is useful for the following reasons:
+ # * binary packages with the same cpv from multiple binhosts
+ # can be considered simultaneously
+ # * if binpkg-multi-instance is disabled, it's still possible
+ # to properly access a PKGDIR which has binpkg-multi-instance
+ # layout (or mixed layout)
+ fakedbapi.__init__(self, exclusive_slots=False,
+ multi_instance=True, **kwargs)
self.bintree = mybintree
self.move_ent = mybintree.move_ent
- self.cpvdict={}
- self.cpdict={}
# Selectively cache metadata in order to optimize dep matching.
self._aux_cache_keys = set(
- ["BUILD_TIME", "CHOST", "DEPEND", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS",
- "LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE",
- "RDEPEND", "repository", "RESTRICT", "SLOT", "USE",
- "DEFINED_PHASES", "PROVIDES", "REQUIRES"
+ ["BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
+ "DEPEND", "EAPI", "HDEPEND", "IUSE", "KEYWORDS",
+ "LICENSE", "MD5", "PDEPEND", "PROPERTIES", "PROVIDE",
+ "PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE", "_mtime_"
])
self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
self._aux_cache = {}
@@ -109,33 +116,49 @@ class bindbapi(fakedbapi):
return fakedbapi.cpv_exists(self, cpv)
def cpv_inject(self, cpv, **kwargs):
- self._aux_cache.pop(cpv, None)
- fakedbapi.cpv_inject(self, cpv, **kwargs)
+ if not self.bintree.populated:
+ self.bintree.populate()
+ fakedbapi.cpv_inject(self, cpv,
+ metadata=cpv._metadata, **kwargs)
def cpv_remove(self, cpv):
- self._aux_cache.pop(cpv, None)
+ if not self.bintree.populated:
+ self.bintree.populate()
fakedbapi.cpv_remove(self, cpv)
def aux_get(self, mycpv, wants, myrepo=None):
if self.bintree and not self.bintree.populated:
self.bintree.populate()
- cache_me = False
+ # Support plain string for backward compatibility with API
+ # consumers (including portageq, which passes in a cpv from
+ # a command-line argument).
+ instance_key = self._instance_key(mycpv,
+ support_string=True)
if not self._known_keys.intersection(
wants).difference(self._aux_cache_keys):
- aux_cache = self._aux_cache.get(mycpv)
+ aux_cache = self.cpvdict[instance_key]
if aux_cache is not None:
return [aux_cache.get(x, "") for x in wants]
- cache_me = True
mysplit = mycpv.split("/")
mylist = []
tbz2name = mysplit[1]+".tbz2"
if not self.bintree._remotepkgs or \
not self.bintree.isremote(mycpv):
- tbz2_path = self.bintree.getname(mycpv)
- if not os.path.exists(tbz2_path):
+ try:
+ tbz2_path = self.bintree._pkg_paths[instance_key]
+ except KeyError:
+ raise KeyError(mycpv)
+ tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+ try:
+ st = os.lstat(tbz2_path)
+ except OSError:
raise KeyError(mycpv)
metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
def getitem(k):
+ if k == "_mtime_":
+ return _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ return _unicode(st.st_size)
v = metadata_bytes.get(_unicode_encode(k,
encoding=_encodings['repo.content'],
errors='backslashreplace'))
@@ -144,11 +167,9 @@ class bindbapi(fakedbapi):
encoding=_encodings['repo.content'], errors='replace')
return v
else:
- getitem = self.bintree._remotepkgs[mycpv].get
+ getitem = self.cpvdict[instance_key].get
mydata = {}
mykeys = wants
- if cache_me:
- mykeys = self._aux_cache_keys.union(wants)
for x in mykeys:
myval = getitem(x)
# myval is None if the key doesn't exist
@@ -159,16 +180,24 @@ class bindbapi(fakedbapi):
if not mydata.setdefault('EAPI', '0'):
mydata['EAPI'] = '0'
- if cache_me:
- aux_cache = self._aux_cache_slot_dict()
- for x in self._aux_cache_keys:
- aux_cache[x] = mydata.get(x, '')
- self._aux_cache[mycpv] = aux_cache
return [mydata.get(x, '') for x in wants]
def aux_update(self, cpv, values):
if not self.bintree.populated:
self.bintree.populate()
+ build_id = None
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ if self.bintree._multi_instance:
+ # The cpv.build_id attribute is required if we are in
+ # multi-instance mode, since otherwise we won't know
+ # which instance to update.
+ raise
+ else:
+ cpv = self._instance_key(cpv, support_string=True)[0]
+ build_id = cpv.build_id
+
tbz2path = self.bintree.getname(cpv)
if not os.path.exists(tbz2path):
raise KeyError(cpv)
@@ -187,7 +216,7 @@ class bindbapi(fakedbapi):
del mydata[k]
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
# inject will clear stale caches via cpv_inject.
- self.bintree.inject(cpv)
+ self.bintree.inject(cpv, filename=tbz2path)
def cp_list(self, *pargs, **kwargs):
if not self.bintree.populated:
@@ -219,7 +248,7 @@ class bindbapi(fakedbapi):
if not self.bintree.isremote(pkg):
pass
else:
- metadata = self.bintree._remotepkgs[pkg]
+ metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
try:
size = int(metadata["SIZE"])
except KeyError:
@@ -232,48 +261,6 @@ class bindbapi(fakedbapi):
return filesdict
-def _pkgindex_cpv_map_latest_build(pkgindex):
- """
- Given a PackageIndex instance, create a dict of cpv -> metadata map.
- If multiple packages have identical CPV values, prefer the package
- with latest BUILD_TIME value.
- @param pkgindex: A PackageIndex instance.
- @type pkgindex: PackageIndex
- @rtype: dict
- @return: a dict containing entry for the give cpv.
- """
- cpv_map = {}
-
- for d in pkgindex.packages:
- cpv = d["CPV"]
-
- try:
- cpv = _pkg_str(cpv)
- except InvalidData:
- writemsg(_("!!! Invalid remote binary package: %s\n") % cpv,
- noiselevel=-1)
- continue
-
- btime = d.get('BUILD_TIME', '')
- try:
- btime = int(btime)
- except ValueError:
- btime = None
-
- other_d = cpv_map.get(cpv)
- if other_d is not None:
- other_btime = other_d.get('BUILD_TIME', '')
- try:
- other_btime = int(other_btime)
- except ValueError:
- other_btime = None
- if other_btime and (not btime or other_btime > btime):
- continue
-
- cpv_map[_pkg_str(cpv)] = d
-
- return cpv_map
-
class binarytree(object):
"this tree scans for a list of all packages available in PKGDIR"
def __init__(self, _unused=DeprecationWarning, pkgdir=None,
@@ -300,6 +287,13 @@ class binarytree(object):
if True:
self.pkgdir = normalize_path(pkgdir)
+ # NOTE: Even if binpkg-multi-instance is disabled, it's
+ # still possible to access a PKGDIR which uses the
+ # binpkg-multi-instance layout (or mixed layout).
+ self._multi_instance = ("binpkg-multi-instance" in
+ settings.features)
+ if self._multi_instance:
+ self._allocate_filename = self._allocate_filename_multi
self.dbapi = bindbapi(self, settings=settings)
self.update_ents = self.dbapi.update_ents
self.move_slot_ent = self.dbapi.move_slot_ent
@@ -310,7 +304,6 @@ class binarytree(object):
self.invalids = []
self.settings = settings
self._pkg_paths = {}
- self._pkgindex_uri = {}
self._populating = False
self._all_directory = os.path.isdir(
os.path.join(self.pkgdir, "All"))
@@ -318,12 +311,14 @@ class binarytree(object):
self._pkgindex_hashes = ["MD5","SHA1"]
self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- self._pkgindex_keys.update(["CPV", "MTIME", "SIZE"])
+ self._pkgindex_keys.update(["CPV", "SIZE"])
self._pkgindex_aux_keys = \
- ["BUILD_TIME", "CHOST", "DEPEND", "DESCRIPTION", "EAPI",
- "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND", "PROPERTIES",
- "PROVIDE", "RESTRICT", "RDEPEND", "repository", "SLOT", "USE", "DEFINED_PHASES",
- "BASE_URI", "PROVIDES", "REQUIRES"]
+ ["BASE_URI", "BUILD_ID", "BUILD_TIME", "CHOST",
+ "DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI",
+ "HDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
+ "PKGINDEX_URI", "PROPERTIES", "PROVIDE", "PROVIDES",
+ "RDEPEND", "repository", "REQUIRES", "RESTRICT",
+ "SIZE", "SLOT", "USE"]
self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
self._pkgindex_use_evaluated_keys = \
("DEPEND", "HDEPEND", "LICENSE", "RDEPEND",
@@ -336,6 +331,7 @@ class binarytree(object):
"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
"USE_EXPAND_UNPREFIXED"])
self._pkgindex_default_pkg_data = {
+ "BUILD_ID" : "",
"BUILD_TIME" : "",
"DEFINED_PHASES" : "",
"DEPEND" : "",
@@ -363,6 +359,7 @@ class binarytree(object):
self._pkgindex_translated_keys = (
("DESCRIPTION" , "DESC"),
+ ("_mtime_" , "MTIME"),
("repository" , "REPO"),
)
@@ -453,103 +450,30 @@ class binarytree(object):
mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
self.dbapi.cpv_remove(mycpv)
- del self._pkg_paths[mycpv]
+ del self._pkg_paths[self.dbapi._instance_key(mycpv)]
+ metadata = self.dbapi._aux_cache_slot_dict()
+ for k in self.dbapi._aux_cache_keys:
+ v = mydata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ mynewcpv = _pkg_str(mynewcpv, metadata=metadata)
new_path = self.getname(mynewcpv)
- self._pkg_paths[mynewcpv] = os.path.join(
+ self._pkg_paths[
+ self.dbapi._instance_key(mynewcpv)] = os.path.join(
*new_path.split(os.path.sep)[-2:])
if new_path != mytbz2:
self._ensure_dir(os.path.dirname(new_path))
_movefile(tbz2path, new_path, mysettings=self.settings)
- self._remove_symlink(mycpv)
- if new_path.split(os.path.sep)[-2] == "All":
- self._create_symlink(mynewcpv)
self.inject(mynewcpv)
return moves
- def _remove_symlink(self, cpv):
- """Remove a ${PKGDIR}/${CATEGORY}/${PF}.tbz2 symlink and also remove
- the ${PKGDIR}/${CATEGORY} directory if empty. The file will not be
- removed if os.path.islink() returns False."""
- mycat, mypkg = catsplit(cpv)
- mylink = os.path.join(self.pkgdir, mycat, mypkg + ".tbz2")
- if os.path.islink(mylink):
- """Only remove it if it's really a link so that this method never
- removes a real package that was placed here to avoid a collision."""
- os.unlink(mylink)
- try:
- os.rmdir(os.path.join(self.pkgdir, mycat))
- except OSError as e:
- if e.errno not in (errno.ENOENT,
- errno.ENOTEMPTY, errno.EEXIST):
- raise
- del e
-
- def _create_symlink(self, cpv):
- """Create a ${PKGDIR}/${CATEGORY}/${PF}.tbz2 symlink (and
- ${PKGDIR}/${CATEGORY} directory, if necessary). Any file that may
- exist in the location of the symlink will first be removed."""
- mycat, mypkg = catsplit(cpv)
- full_path = os.path.join(self.pkgdir, mycat, mypkg + ".tbz2")
- self._ensure_dir(os.path.dirname(full_path))
- try:
- os.unlink(full_path)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- del e
- os.symlink(os.path.join("..", "All", mypkg + ".tbz2"), full_path)
-
def prevent_collision(self, cpv):
- """Make sure that the file location ${PKGDIR}/All/${PF}.tbz2 is safe to
- use for a given cpv. If a collision will occur with an existing
- package from another category, the existing package will be bumped to
- ${PKGDIR}/${CATEGORY}/${PF}.tbz2 so that both can coexist."""
- if not self._all_directory:
- return
-
- # Copy group permissions for new directories that
- # may have been created.
- for path in ("All", catsplit(cpv)[0]):
- path = os.path.join(self.pkgdir, path)
- self._ensure_dir(path)
- if not os.access(path, os.W_OK):
- raise PermissionDenied("access('%s', W_OK)" % path)
-
- full_path = self.getname(cpv)
- if "All" == full_path.split(os.path.sep)[-2]:
- return
- """Move a colliding package if it exists. Code below this point only
- executes in rare cases."""
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- mypath = os.path.join("All", myfile)
- dest_path = os.path.join(self.pkgdir, mypath)
-
- try:
- st = os.lstat(dest_path)
- except OSError:
- st = None
- else:
- if stat.S_ISLNK(st.st_mode):
- st = None
- try:
- os.unlink(dest_path)
- except OSError:
- if os.path.exists(dest_path):
- raise
-
- if st is not None:
- # For invalid packages, other_cat could be None.
- other_cat = portage.xpak.tbz2(dest_path).getfile(b"CATEGORY")
- if other_cat:
- other_cat = _unicode_decode(other_cat,
- encoding=_encodings['repo.content'], errors='replace')
- other_cat = other_cat.strip()
- other_cpv = other_cat + "/" + mypkg
- self._move_from_all(other_cpv)
- self.inject(other_cpv)
- self._move_to_all(cpv)
+ warnings.warn("The "
+ "portage.dbapi.bintree.binarytree.prevent_collision "
+ "method is deprecated.",
+ DeprecationWarning, stacklevel=2)
def _ensure_dir(self, path):
"""
@@ -585,37 +509,6 @@ class binarytree(object):
except PortageException:
pass
- def _move_to_all(self, cpv):
- """If the file exists, move it. Whether or not it exists, update state
- for future getname() calls."""
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- self._pkg_paths[cpv] = os.path.join("All", myfile)
- src_path = os.path.join(self.pkgdir, mycat, myfile)
- try:
- mystat = os.lstat(src_path)
- except OSError as e:
- mystat = None
- if mystat and stat.S_ISREG(mystat.st_mode):
- self._ensure_dir(os.path.join(self.pkgdir, "All"))
- dest_path = os.path.join(self.pkgdir, "All", myfile)
- _movefile(src_path, dest_path, mysettings=self.settings)
- self._create_symlink(cpv)
- self.inject(cpv)
-
- def _move_from_all(self, cpv):
- """Move a package from ${PKGDIR}/All/${PF}.tbz2 to
- ${PKGDIR}/${CATEGORY}/${PF}.tbz2 and update state from getname calls."""
- self._remove_symlink(cpv)
- mycat, mypkg = catsplit(cpv)
- myfile = mypkg + ".tbz2"
- mypath = os.path.join(mycat, myfile)
- dest_path = os.path.join(self.pkgdir, mypath)
- self._ensure_dir(os.path.dirname(dest_path))
- src_path = os.path.join(self.pkgdir, "All", myfile)
- _movefile(src_path, dest_path, mysettings=self.settings)
- self._pkg_paths[cpv] = mypath
-
def populate(self, getbinpkgs=0):
"populates the binarytree"
@@ -643,55 +536,63 @@ class binarytree(object):
# prior to performing package moves since it only wants to
# operate on local packages (getbinpkgs=0).
self._remotepkgs = None
- self.dbapi._clear_cache()
- self.dbapi._aux_cache.clear()
+ self.dbapi.clear()
+ _instance_key = self.dbapi._instance_key
if True:
pkg_paths = {}
self._pkg_paths = pkg_paths
- dirs = listdir(self.pkgdir, dirsonly=True, EmptyOnError=True)
- if "All" in dirs:
- dirs.remove("All")
- dirs.sort()
- dirs.insert(0, "All")
+ dir_files = {}
+ for parent, dir_names, file_names in os.walk(self.pkgdir):
+ relative_parent = parent[len(self.pkgdir)+1:]
+ dir_files[relative_parent] = file_names
+
pkgindex = self._load_pkgindex()
- pf_index = None
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
header = pkgindex.header
metadata = {}
+ basename_index = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ path = d.get("PATH")
+ if not path:
+ path = cpv + ".tbz2"
+ basename = os.path.basename(path)
+ basename_index.setdefault(basename, []).append(d)
+
update_pkgindex = False
- for mydir in dirs:
- for myfile in listdir(os.path.join(self.pkgdir, mydir)):
- if not myfile.endswith(".tbz2"):
+ for mydir, file_names in dir_files.items():
+ try:
+ mydir = _unicode_decode(mydir,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ for myfile in file_names:
+ try:
+ myfile = _unicode_decode(myfile,
+ encoding=_encodings["fs"], errors="strict")
+ except UnicodeDecodeError:
+ continue
+ if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
continue
mypath = os.path.join(mydir, myfile)
full_path = os.path.join(self.pkgdir, mypath)
s = os.lstat(full_path)
- if stat.S_ISLNK(s.st_mode):
+
+ if not stat.S_ISREG(s.st_mode):
continue
# Validate data from the package index and try to avoid
# reading the xpak if possible.
- if mydir != "All":
- possibilities = None
- d = metadata.get(mydir+"/"+myfile[:-5])
- if d:
- possibilities = [d]
- else:
- if pf_index is None:
- pf_index = {}
- for mycpv in metadata:
- mycat, mypf = catsplit(mycpv)
- pf_index.setdefault(
- mypf, []).append(metadata[mycpv])
- possibilities = pf_index.get(myfile[:-5])
+ possibilities = basename_index.get(myfile)
if possibilities:
match = None
for d in possibilities:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
continue
except (KeyError, ValueError):
continue
@@ -705,15 +606,14 @@ class binarytree(object):
break
if match:
mycpv = match["CPV"]
- if mycpv in pkg_paths:
- # discard duplicates (All/ is preferred)
- continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ instance_key = _instance_key(mycpv)
+ pkg_paths[instance_key] = mypath
# update the path if the package has been moved
oldpath = d.get("PATH")
if oldpath and oldpath != mypath:
update_pkgindex = True
+ # Omit PATH if it is the default path for
+ # the current Packages format version.
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
if not oldpath:
@@ -723,11 +623,6 @@ class binarytree(object):
if oldpath:
update_pkgindex = True
self.dbapi.cpv_inject(mycpv)
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
continue
if not os.access(full_path, os.R_OK):
writemsg(_("!!! Permission denied to read " \
@@ -735,13 +630,12 @@ class binarytree(object):
noiselevel=-1)
self.invalids.append(myfile[:-5])
continue
- metadata_bytes = portage.xpak.tbz2(full_path).get_data()
- mycat = _unicode_decode(metadata_bytes.get(b"CATEGORY", ""),
- encoding=_encodings['repo.content'], errors='replace')
- mypf = _unicode_decode(metadata_bytes.get(b"PF", ""),
- encoding=_encodings['repo.content'], errors='replace')
- slot = _unicode_decode(metadata_bytes.get(b"SLOT", ""),
- encoding=_encodings['repo.content'], errors='replace')
+ pkg_metadata = self._read_metadata(full_path, s,
+ keys=chain(self.dbapi._aux_cache_keys,
+ ("PF", "CATEGORY")))
+ mycat = pkg_metadata.get("CATEGORY", "")
+ mypf = pkg_metadata.get("PF", "")
+ slot = pkg_metadata.get("SLOT", "")
mypkg = myfile[:-5]
if not mycat or not mypf or not slot:
#old-style or corrupt package
@@ -765,16 +659,51 @@ class binarytree(object):
writemsg("!!! %s\n" % line, noiselevel=-1)
self.invalids.append(mypkg)
continue
- mycat = mycat.strip()
- slot = slot.strip()
- if mycat != mydir and mydir != "All":
+
+ multi_instance = False
+ invalid_name = False
+ build_id = None
+ if myfile.endswith(".xpak"):
+ multi_instance = True
+ build_id = self._parse_build_id(myfile)
+ if build_id < 1:
+ invalid_name = True
+ elif myfile != "%s-%s.xpak" % (
+ mypf, build_id):
+ invalid_name = True
+ else:
+ mypkg = mypkg[:-len(str(build_id))-1]
+ elif myfile != mypf + ".tbz2":
+ invalid_name = True
+
+ if invalid_name:
+ writemsg(_("\n!!! Binary package name is "
+ "invalid: '%s'\n") % full_path,
+ noiselevel=-1)
+ continue
+
+ if pkg_metadata.get("BUILD_ID"):
+ try:
+ build_id = long(pkg_metadata["BUILD_ID"])
+ except ValueError:
+ writemsg(_("!!! Binary package has "
+ "invalid BUILD_ID: '%s'\n") %
+ full_path, noiselevel=-1)
+ continue
+ else:
+ build_id = None
+
+ if multi_instance:
+ name_split = catpkgsplit("%s/%s" %
+ (mycat, mypf))
+ if (name_split is None or
+ tuple(catsplit(mydir)) != name_split[:2]):
+ continue
+ elif mycat != mydir and mydir != "All":
continue
if mypkg != mypf.strip():
continue
mycpv = mycat + "/" + mypkg
- if mycpv in pkg_paths:
- # All is first, so it's preferred.
- continue
if not self.dbapi._category_re.match(mycat):
writemsg(_("!!! Binary package has an " \
"unrecognized category: '%s'\n") % full_path,
@@ -784,14 +713,23 @@ class binarytree(object):
(mycpv, self.settings["PORTAGE_CONFIGROOT"]),
noiselevel=-1)
continue
- mycpv = _pkg_str(mycpv)
- pkg_paths[mycpv] = mypath
+ if build_id is not None:
+ pkg_metadata["BUILD_ID"] = _unicode(build_id)
+ pkg_metadata["SIZE"] = _unicode(s.st_size)
+ # Discard items used only for validation above.
+ pkg_metadata.pop("CATEGORY")
+ pkg_metadata.pop("PF")
+ mycpv = _pkg_str(mycpv,
+ metadata=self.dbapi._aux_cache_slot_dict(
+ pkg_metadata))
+ pkg_paths[_instance_key(mycpv)] = mypath
self.dbapi.cpv_inject(mycpv)
update_pkgindex = True
- d = metadata.get(mycpv, {})
+ d = metadata.get(_instance_key(mycpv),
+ pkgindex._pkg_slot_dict())
if d:
try:
- if long(d["MTIME"]) != s[stat.ST_MTIME]:
+ if long(d["_mtime_"]) != s[stat.ST_MTIME]:
d.clear()
except (KeyError, ValueError):
d.clear()
@@ -802,36 +740,30 @@ class binarytree(object):
except (KeyError, ValueError):
d.clear()
+ for k in self._pkgindex_allowed_pkg_keys:
+ v = pkg_metadata.get(k)
+ if v is not None:
+ d[k] = v
d["CPV"] = mycpv
- d["SLOT"] = slot
- d["MTIME"] = _unicode(s[stat.ST_MTIME])
- d["SIZE"] = _unicode(s.st_size)
- d.update(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(mycpv, self._pkgindex_aux_keys)))
try:
self._eval_use_flags(mycpv, d)
except portage.exception.InvalidDependString:
writemsg(_("!!! Invalid binary package: '%s'\n") % \
self.getname(mycpv), noiselevel=-1)
self.dbapi.cpv_remove(mycpv)
- del pkg_paths[mycpv]
+ del pkg_paths[_instance_key(mycpv)]
# record location if it's non-default
if mypath != mycpv + ".tbz2":
d["PATH"] = mypath
else:
d.pop("PATH", None)
- metadata[mycpv] = d
- if not self.dbapi._aux_cache_keys.difference(d):
- aux_cache = self.dbapi._aux_cache_slot_dict()
- for k in self.dbapi._aux_cache_keys:
- aux_cache[k] = d[k]
- self.dbapi._aux_cache[mycpv] = aux_cache
+ metadata[_instance_key(mycpv)] = d
- for cpv in list(metadata):
- if cpv not in pkg_paths:
- del metadata[cpv]
+ for instance_key in list(metadata):
+ if instance_key not in pkg_paths:
+ del metadata[instance_key]
# Do not bother to write the Packages index if $PKGDIR/All/ exists
# since it will provide no benefit due to the need to read CATEGORY
@@ -1056,45 +988,24 @@ class binarytree(object):
# The current user doesn't have permission to cache the
# file, but that's alright.
if pkgindex:
- # Organize remote package list as a cpv -> metadata map.
- remotepkgs = _pkgindex_cpv_map_latest_build(pkgindex)
remote_base_uri = pkgindex.header.get("URI", base_url)
- for cpv, remote_metadata in remotepkgs.items():
- remote_metadata["BASE_URI"] = remote_base_uri
- self._pkgindex_uri[cpv] = url
- self._remotepkgs.update(remotepkgs)
- self._remote_has_index = True
- for cpv in remotepkgs:
+ for d in pkgindex.packages:
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=self.settings)
+ instance_key = _instance_key(cpv)
+ # Local package instances override remote instances
+ # with the same instance_key.
+ if instance_key in metadata:
+ continue
+
+ d["CPV"] = cpv
+ d["BASE_URI"] = remote_base_uri
+ d["PKGINDEX_URI"] = url
+ self._remotepkgs[instance_key] = d
+ metadata[instance_key] = d
self.dbapi.cpv_inject(cpv)
- if True:
- # Remote package instances override local package
- # if they are not identical.
- hash_names = ["SIZE"] + self._pkgindex_hashes
- for cpv, local_metadata in metadata.items():
- remote_metadata = self._remotepkgs.get(cpv)
- if remote_metadata is None:
- continue
- # Use digests to compare identity.
- identical = True
- for hash_name in hash_names:
- local_value = local_metadata.get(hash_name)
- if local_value is None:
- continue
- remote_value = remote_metadata.get(hash_name)
- if remote_value is None:
- continue
- if local_value != remote_value:
- identical = False
- break
- if identical:
- del self._remotepkgs[cpv]
- else:
- # Override the local package in the aux_get cache.
- self.dbapi._aux_cache[cpv] = remote_metadata
- else:
- # Local package instances override remote instances.
- for cpv in metadata:
- self._remotepkgs.pop(cpv, None)
+
+ self._remote_has_index = True
self.populated=1
@@ -1106,7 +1017,8 @@ class binarytree(object):
@param filename: File path of the package to inject, or None if it's
already in the location returned by getname()
@type filename: string
- @rtype: None
+ @rtype: _pkg_str or None
+ @return: A _pkg_str instance on success, or None on failure.
"""
mycat, mypkg = catsplit(cpv)
if not self.populated:
@@ -1124,24 +1036,45 @@ class binarytree(object):
writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
noiselevel=-1)
return
- mytbz2 = portage.xpak.tbz2(full_path)
- slot = mytbz2.getfile("SLOT")
+ metadata = self._read_metadata(full_path, s)
+ slot = metadata.get("SLOT")
+ try:
+ self._eval_use_flags(cpv, metadata)
+ except portage.exception.InvalidDependString:
+ slot = None
if slot is None:
writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
noiselevel=-1)
return
- slot = slot.strip()
- self.dbapi.cpv_inject(cpv)
+
+ fetched = False
+ try:
+ build_id = cpv.build_id
+ except AttributeError:
+ build_id = None
+ else:
+ instance_key = self.dbapi._instance_key(cpv)
+ if instance_key in self.dbapi.cpvdict:
+ # This means we've been called by aux_update (or
+ # similar). The instance key typically changes (due to
+ # file modification), so we need to discard existing
+ # instance key references.
+ self.dbapi.cpv_remove(cpv)
+ self._pkg_paths.pop(instance_key, None)
+ if self._remotepkgs is not None:
+ fetched = self._remotepkgs.pop(instance_key, None)
+
+ cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings)
# Reread the Packages index (in case it's been changed by another
# process) and then updated it, all while holding a lock.
pkgindex_lock = None
- created_symlink = False
try:
pkgindex_lock = lockfile(self._pkgindex_file,
wantnewlockfile=1)
if filename is not None:
- new_filename = self.getname(cpv)
+ new_filename = self.getname(cpv,
+ allocate_new=(build_id is None))
try:
samefile = os.path.samefile(filename, new_filename)
except OSError:
@@ -1151,54 +1084,31 @@ class binarytree(object):
_movefile(filename, new_filename, mysettings=self.settings)
full_path = new_filename
- self._file_permissions(full_path)
+ basename = os.path.basename(full_path)
+ pf = catsplit(cpv)[1]
+ if (build_id is None and not fetched and
+ basename.endswith(".xpak")):
+ # Apply the newly assigned BUILD_ID. This is intended
+ # to occur only for locally built packages. If the
+ # package was fetched, we want to preserve its
+ # attributes, so that we can later recognize that it
+ # is identical to its remote counterpart.
+ build_id = self._parse_build_id(basename)
+ metadata["BUILD_ID"] = _unicode(build_id)
+ cpv = _pkg_str(cpv, metadata=metadata,
+ settings=self.settings)
+ binpkg = portage.xpak.tbz2(full_path)
+ binary_data = binpkg.get_data()
+ binary_data[b"BUILD_ID"] = _unicode_encode(
+ metadata["BUILD_ID"])
+ binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
- if self._all_directory and \
- self.getname(cpv).split(os.path.sep)[-2] == "All":
- self._create_symlink(cpv)
- created_symlink = True
+ self._file_permissions(full_path)
pkgindex = self._load_pkgindex()
-
if not self._pkgindex_version_supported(pkgindex):
pkgindex = self._new_pkgindex()
- # Discard remote metadata to ensure that _pkgindex_entry
- # gets the local metadata. This also updates state for future
- # isremote calls.
- if self._remotepkgs is not None:
- self._remotepkgs.pop(cpv, None)
-
- # Discard cached metadata to ensure that _pkgindex_entry
- # doesn't return stale metadata.
- self.dbapi._aux_cache.pop(cpv, None)
-
- try:
- d = self._pkgindex_entry(cpv)
- except portage.exception.InvalidDependString:
- writemsg(_("!!! Invalid binary package: '%s'\n") % \
- self.getname(cpv), noiselevel=-1)
- self.dbapi.cpv_remove(cpv)
- del self._pkg_paths[cpv]
- return
-
- # If found, remove package(s) with duplicate path.
- path = d.get("PATH", "")
- for i in range(len(pkgindex.packages) - 1, -1, -1):
- d2 = pkgindex.packages[i]
- if path and path == d2.get("PATH"):
- # Handle path collisions in $PKGDIR/All
- # when CPV is not identical.
- del pkgindex.packages[i]
- elif cpv == d2.get("CPV"):
- if path == d2.get("PATH", ""):
- del pkgindex.packages[i]
- elif created_symlink and not d2.get("PATH", ""):
- # Delete entry for the package that was just
- # overwritten by a symlink to this package.
- del pkgindex.packages[i]
-
- pkgindex.packages.append(d)
-
+ d = self._inject_file(pkgindex, cpv, full_path)
self._update_pkgindex_header(pkgindex.header)
self._pkgindex_write(pkgindex)
@@ -1206,6 +1116,72 @@ class binarytree(object):
if pkgindex_lock:
unlockfile(pkgindex_lock)
+ # This is used to record BINPKGMD5 in the installed package
+ # database, for a package that has just been built.
+ cpv._metadata["MD5"] = d["MD5"]
+
+ return cpv
+
+ def _read_metadata(self, filename, st, keys=None):
+ if keys is None:
+ keys = self.dbapi._aux_cache_keys
+ metadata = self.dbapi._aux_cache_slot_dict()
+ else:
+ metadata = {}
+ binary_metadata = portage.xpak.tbz2(filename).get_data()
+ for k in keys:
+ if k == "_mtime_":
+ metadata[k] = _unicode(st[stat.ST_MTIME])
+ elif k == "SIZE":
+ metadata[k] = _unicode(st.st_size)
+ else:
+ v = binary_metadata.get(_unicode_encode(k))
+ if v is not None:
+ v = _unicode_decode(v)
+ metadata[k] = " ".join(v.split())
+ return metadata
+
+ def _inject_file(self, pkgindex, cpv, filename):
+ """
+ Add a package to internal data structures, and add an
+ entry to the given pkgindex.
+ @param pkgindex: The PackageIndex instance to which an entry
+ will be added.
+ @type pkgindex: PackageIndex
+ @param cpv: A _pkg_str instance corresponding to the package
+ being injected.
+ @type cpv: _pkg_str
+ @param filename: Absolute file path of the package to inject.
+ @type filename: string
+ @rtype: dict
+ @return: A dict corresponding to the new entry which has been
+ added to pkgindex. This may be used to access the checksums
+ which have just been generated.
+ """
+ # Update state for future isremote calls.
+ instance_key = self.dbapi._instance_key(cpv)
+ if self._remotepkgs is not None:
+ self._remotepkgs.pop(instance_key, None)
+
+ self.dbapi.cpv_inject(cpv)
+ self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
+ d = self._pkgindex_entry(cpv)
+
+ # If found, remove package(s) with duplicate path.
+ path = d.get("PATH", "")
+ for i in range(len(pkgindex.packages) - 1, -1, -1):
+ d2 = pkgindex.packages[i]
+ if path and path == d2.get("PATH"):
+ # Handle path collisions in $PKGDIR/All
+ # when CPV is not identical.
+ del pkgindex.packages[i]
+ elif cpv == d2.get("CPV"):
+ if path == d2.get("PATH", ""):
+ del pkgindex.packages[i]
+
+ pkgindex.packages.append(d)
+ return d
+
def _pkgindex_write(self, pkgindex):
contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
pkgindex.write(contents)
@@ -1231,7 +1207,7 @@ class binarytree(object):
def _pkgindex_entry(self, cpv):
"""
- Performs checksums and evaluates USE flag conditionals.
+ Performs checksums, and gets size and mtime via lstat.
Raises InvalidDependString if necessary.
@rtype: dict
@return: a dict containing entry for the give cpv.
@@ -1239,23 +1215,20 @@ class binarytree(object):
pkg_path = self.getname(cpv)
- d = dict(zip(self._pkgindex_aux_keys,
- self.dbapi.aux_get(cpv, self._pkgindex_aux_keys)))
-
+ d = dict(cpv._metadata.items())
d.update(perform_multiple_checksums(
pkg_path, hashes=self._pkgindex_hashes))
d["CPV"] = cpv
- st = os.stat(pkg_path)
- d["MTIME"] = _unicode(st[stat.ST_MTIME])
+ st = os.lstat(pkg_path)
+ d["_mtime_"] = _unicode(st[stat.ST_MTIME])
d["SIZE"] = _unicode(st.st_size)
- rel_path = self._pkg_paths[cpv]
+ rel_path = pkg_path[len(self.pkgdir)+1:]
# record location if it's non-default
if rel_path != cpv + ".tbz2":
d["PATH"] = rel_path
- self._eval_use_flags(cpv, d)
return d
def _new_pkgindex(self):
@@ -1309,15 +1282,17 @@ class binarytree(object):
return False
def _eval_use_flags(self, cpv, metadata):
- use = frozenset(metadata["USE"].split())
+ use = frozenset(metadata.get("USE", "").split())
for k in self._pkgindex_use_evaluated_keys:
if k.endswith('DEPEND'):
token_class = Atom
else:
token_class = None
+ deps = metadata.get(k)
+ if deps is None:
+ continue
try:
- deps = metadata[k]
deps = use_reduce(deps, uselist=use, token_class=token_class)
deps = paren_enclose(deps)
except portage.exception.InvalidDependString as e:
@@ -1347,46 +1322,129 @@ class binarytree(object):
return ""
return mymatch
- def getname(self, pkgname):
- """Returns a file location for this package. The default location is
- ${PKGDIR}/All/${PF}.tbz2, but will be ${PKGDIR}/${CATEGORY}/${PF}.tbz2
- in the rare event of a collision. The prevent_collision() method can
- be called to ensure that ${PKGDIR}/All/${PF}.tbz2 is available for a
- specific cpv."""
+ def getname(self, cpv, allocate_new=None):
+ """Returns a file location for this package.
+ If cpv has both build_time and build_id attributes, then the
+ path to the specific corresponding instance is returned.
+ Otherwise, allocate a new path and return that. When allocating
+ a new path, behavior depends on the binpkg-multi-instance
+ FEATURES setting.
+ """
if not self.populated:
self.populate()
- mycpv = pkgname
- mypath = self._pkg_paths.get(mycpv, None)
- if mypath:
- return os.path.join(self.pkgdir, mypath)
- mycat, mypkg = catsplit(mycpv)
- if self._all_directory:
- mypath = os.path.join("All", mypkg + ".tbz2")
- if mypath in self._pkg_paths.values():
- mypath = os.path.join(mycat, mypkg + ".tbz2")
+
+ try:
+ cpv.cp
+ except AttributeError:
+ cpv = _pkg_str(cpv)
+
+ filename = None
+ if allocate_new:
+ filename = self._allocate_filename(cpv)
+ elif self._is_specific_instance(cpv):
+ instance_key = self.dbapi._instance_key(cpv)
+ path = self._pkg_paths.get(instance_key)
+ if path is not None:
+ filename = os.path.join(self.pkgdir, path)
+
+ if filename is None and not allocate_new:
+ try:
+ instance_key = self.dbapi._instance_key(cpv,
+ support_string=True)
+ except KeyError:
+ pass
+ else:
+ filename = self._pkg_paths.get(instance_key)
+ if filename is not None:
+ filename = os.path.join(self.pkgdir, filename)
+
+ if filename is None:
+ if self._multi_instance:
+ pf = catsplit(cpv)[1]
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), "1")
+ else:
+ filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ return filename
+
+ def _is_specific_instance(self, cpv):
+ specific = True
+ try:
+ build_time = cpv.build_time
+ build_id = cpv.build_id
+ except AttributeError:
+ specific = False
else:
- mypath = os.path.join(mycat, mypkg + ".tbz2")
- self._pkg_paths[mycpv] = mypath # cache for future lookups
- return os.path.join(self.pkgdir, mypath)
+ if build_time is None or build_id is None:
+ specific = False
+ return specific
+
+ def _max_build_id(self, cpv):
+ max_build_id = 0
+ for x in self.dbapi.cp_list(cpv.cp):
+ if (x == cpv and x.build_id is not None and
+ x.build_id > max_build_id):
+ max_build_id = x.build_id
+ return max_build_id
+
+ def _allocate_filename(self, cpv):
+ return os.path.join(self.pkgdir, cpv + ".tbz2")
+
+ def _allocate_filename_multi(self, cpv):
+
+ # First, get the max build_id found when _populate was
+ # called.
+ max_build_id = self._max_build_id(cpv)
+
+ # A new package may have been added concurrently since the
+ # last _populate call, so increment the build_id until
+ # we locate an unused id.
+ pf = catsplit(cpv)[1]
+ build_id = max_build_id + 1
+
+ while True:
+ filename = "%s-%s.xpak" % (
+ os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+ if os.path.exists(filename):
+ build_id += 1
+ else:
+ return filename
+
+ @staticmethod
+ def _parse_build_id(filename):
+ build_id = -1
+ hyphen = filename.rfind("-", 0, -6)
+ if hyphen != -1:
+ build_id = filename[hyphen+1:-5]
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ pass
+ return build_id
def isremote(self, pkgname):
"""Returns true if the package is kept remotely and it has not been
downloaded (or it is only partially downloaded)."""
- if self._remotepkgs is None or pkgname not in self._remotepkgs:
+ if (self._remotepkgs is None or
+ self.dbapi._instance_key(pkgname) not in self._remotepkgs):
return False
# Presence in self._remotepkgs implies that it's remote. When a
# package is downloaded, state is updated by self.inject().
return True
- def get_pkgindex_uri(self, pkgname):
+ def get_pkgindex_uri(self, cpv):
"""Returns the URI to the Packages file for a given package."""
- return self._pkgindex_uri.get(pkgname)
-
-
+ uri = None
+ metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+ if metadata is not None:
+ uri = metadata["PKGINDEX_URI"]
+ return uri
def gettbz2(self, pkgname):
"""Fetches the package from a remote site, if necessary. Attempts to
resume if the file appears to be partially downloaded."""
+ instance_key = self.dbapi._instance_key(pkgname)
tbz2_path = self.getname(pkgname)
tbz2name = os.path.basename(tbz2_path)
resume = False
@@ -1402,10 +1460,10 @@ class binarytree(object):
self._ensure_dir(mydest)
# urljoin doesn't work correctly with unrecognized protocols like sftp
if self._remote_has_index:
- rel_url = self._remotepkgs[pkgname].get("PATH")
+ rel_url = self._remotepkgs[instance_key].get("PATH")
if not rel_url:
rel_url = pkgname+".tbz2"
- remote_base_uri = self._remotepkgs[pkgname]["BASE_URI"]
+ remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
else:
url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
@@ -1448,15 +1506,19 @@ class binarytree(object):
except AttributeError:
cpv = pkg
+ _instance_key = self.dbapi._instance_key
+ instance_key = _instance_key(cpv)
digests = {}
- metadata = None
- if self._remotepkgs is None or cpv not in self._remotepkgs:
+ metadata = (None if self._remotepkgs is None else
+ self._remotepkgs.get(instance_key))
+ if metadata is None:
for d in self._load_pkgindex().packages:
- if d["CPV"] == cpv:
+ if (d["CPV"] == cpv and
+ instance_key == _instance_key(_pkg_str(d["CPV"],
+ metadata=d, settings=self.settings))):
metadata = d
break
- else:
- metadata = self._remotepkgs[cpv]
+
if metadata is None:
return digests
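The _parse_build_id helper added above recovers the build-id from a multi-instance file name of the form ${PF}-${BUILD_ID}.xpak, returning -1 when no valid id can be extracted. A standalone sketch of that parsing logic (simplified, outside the binarytree class, and using Python 3's int in place of portage's long compatibility alias):

```python
def parse_build_id(filename):
    """Extract the integer build-id from a name like 'foo-1.0-7.xpak'.

    The build-id sits between the last hyphen and the '.xpak' suffix.
    Returns -1 when the name carries no parseable build-id (for
    example, an old-style '.tbz2' file).
    """
    build_id = -1
    # Search for the hyphen only up to 6 characters before the end,
    # so at least one digit plus the 5-character suffix can follow.
    hyphen = filename.rfind("-", 0, -6)
    if hyphen != -1:
        try:
            # Strip the 5-character ".xpak" suffix before converting.
            build_id = int(filename[hyphen + 1:-5])
        except ValueError:
            pass
    return build_id

print(parse_build_id("openssl-1.0.2-7.xpak"))  # 7
print(parse_build_id("openssl-1.0.2.tbz2"))    # -1
```

This mirrors why the populate loop also cross-checks the parsed id against "%s-%s.xpak" % (mypf, build_id): parsing alone cannot distinguish a trailing version component from a build-id, so the round-trip check rejects malformed names.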
diff --git a/pym/portage/dbapi/vartree.py b/pym/portage/dbapi/vartree.py
index cf31c8e..277c2f1 100644
--- a/pym/portage/dbapi/vartree.py
+++ b/pym/portage/dbapi/vartree.py
@@ -173,7 +173,8 @@ class vardbapi(dbapi):
self.vartree = vartree
self._aux_cache_keys = set(
["BUILD_TIME", "CHOST", "COUNTER", "DEPEND", "DESCRIPTION",
- "EAPI", "HDEPEND", "HOMEPAGE", "IUSE", "KEYWORDS",
+ "EAPI", "HDEPEND", "HOMEPAGE",
+ "BUILD_ID", "IUSE", "KEYWORDS",
"LICENSE", "PDEPEND", "PROPERTIES", "PROVIDE", "RDEPEND",
"repository", "RESTRICT" , "SLOT", "USE", "DEFINED_PHASES",
"PROVIDES", "REQUIRES"
@@ -425,7 +426,10 @@ class vardbapi(dbapi):
continue
if len(mysplit) > 1:
if ps[0] == mysplit[1]:
- returnme.append(_pkg_str(mysplit[0]+"/"+x))
+ cpv = "%s/%s" % (mysplit[0], x)
+ metadata = dict(zip(self._aux_cache_keys,
+ self.aux_get(cpv, self._aux_cache_keys)))
+ returnme.append(_pkg_str(cpv, metadata=metadata))
self._cpv_sort_ascending(returnme)
if use_cache:
self.cpcache[mycp] = [mystat, returnme[:]]
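The vartree hunk above switches cp_list() to building each _pkg_str with attached metadata, produced by zipping the aux-cache keys against the values aux_get() returns in the same order. A minimal sketch of that zip pattern, with canned stand-ins for the vardbapi internals (the key list, values, and helper names here are illustrative only):

```python
# Hypothetical stand-ins for self._aux_cache_keys and self.aux_get().
aux_cache_keys = ["BUILD_TIME", "BUILD_ID", "SLOT"]

def aux_get(cpv, keys):
    # A real vardbapi reads these from the installed-package database;
    # here we return canned values, in the same order as `keys`.
    canned = {"BUILD_TIME": "1424160000", "BUILD_ID": "3", "SLOT": "0"}
    return [canned[k] for k in keys]

cpv = "app-misc/foo-1.0"
# zip() pairs each key with its positional value from aux_get().
metadata = dict(zip(aux_cache_keys, aux_get(cpv, aux_cache_keys)))
print(metadata["BUILD_ID"])  # 3
```

Attaching the metadata at construction time is what later lets _instance_key read build_id, build_time, and friends directly off the cpv object instead of issuing extra aux_get calls.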
diff --git a/pym/portage/dbapi/virtual.py b/pym/portage/dbapi/virtual.py
index ba9745c..3b7d10e 100644
--- a/pym/portage/dbapi/virtual.py
+++ b/pym/portage/dbapi/virtual.py
@@ -11,12 +11,17 @@ class fakedbapi(dbapi):
"""A fake dbapi that allows consumers to inject/remove packages to/from it
portage.settings is required to maintain the dbAPI.
"""
- def __init__(self, settings=None, exclusive_slots=True):
+ def __init__(self, settings=None, exclusive_slots=True,
+ multi_instance=False):
"""
@param exclusive_slots: When True, injecting a package with SLOT
metadata causes an existing package in the same slot to be
automatically removed (default is True).
@type exclusive_slots: Boolean
+ @param multi_instance: When True, multiple instances with the
+ same cpv may be stored simultaneously, as long as they are
+ distinguishable (default is False).
+ @type multi_instance: Boolean
"""
self._exclusive_slots = exclusive_slots
self.cpvdict = {}
@@ -25,6 +30,56 @@ class fakedbapi(dbapi):
from portage import settings
self.settings = settings
self._match_cache = {}
+ self._set_multi_instance(multi_instance)
+
+ def _set_multi_instance(self, multi_instance):
+ """
+ Enable or disable multi_instance mode. This should be called before any
+ packages are injected, so that all packages are indexed with
+ the same implementation of self._instance_key.
+ """
+ if self.cpvdict:
+ raise AssertionError("_set_multi_instance called after "
+ "packages have already been added")
+ self._multi_instance = multi_instance
+ if multi_instance:
+ self._instance_key = self._instance_key_multi_instance
+ else:
+ self._instance_key = self._instance_key_cpv
+
+ def _instance_key_cpv(self, cpv, support_string=False):
+ return cpv
+
+ def _instance_key_multi_instance(self, cpv, support_string=False):
+ try:
+ return (cpv, cpv.build_id, cpv.file_size, cpv.build_time,
+ cpv.mtime)
+ except AttributeError:
+ if not support_string:
+ raise
+
+ # Fallback for interfaces such as aux_get where API consumers
+ # may pass in a plain string.
+ latest = None
+ for pkg in self.cp_list(cpv_getkey(cpv)):
+ if pkg == cpv and (
+ latest is None or
+ latest.build_time < pkg.build_time):
+ latest = pkg
+
+ if latest is not None:
+ return (latest, latest.build_id, latest.file_size,
+ latest.build_time, latest.mtime)
+
+ raise KeyError(cpv)
+
+ def clear(self):
+ """
+ Remove all packages.
+ """
+ self._clear_cache()
+ self.cpvdict.clear()
+ self.cpdict.clear()
def _clear_cache(self):
if self._categories is not None:
@@ -43,7 +98,8 @@ class fakedbapi(dbapi):
return result[:]
def cpv_exists(self, mycpv, myrepo=None):
- return mycpv in self.cpvdict
+ return self._instance_key(mycpv,
+ support_string=True) in self.cpvdict
def cp_list(self, mycp, use_cache=1, myrepo=None):
# NOTE: Cache can be safely shared with the match cache, since the
@@ -63,7 +119,10 @@ class fakedbapi(dbapi):
return list(self.cpdict)
def cpv_all(self):
- return list(self.cpvdict)
+ if self._multi_instance:
+ return [x[0] for x in self.cpvdict]
+ else:
+ return list(self.cpvdict)
def cpv_inject(self, mycpv, metadata=None):
"""Adds a cpv to the list of available packages. See the
@@ -99,13 +158,14 @@ class fakedbapi(dbapi):
except AttributeError:
pass
- self.cpvdict[mycpv] = metadata
+ instance_key = self._instance_key(mycpv)
+ self.cpvdict[instance_key] = metadata
if not self._exclusive_slots:
myslot = None
if myslot and mycp in self.cpdict:
# If necessary, remove another package in the same SLOT.
for cpv in self.cpdict[mycp]:
- if mycpv != cpv:
+ if instance_key != self._instance_key(cpv):
try:
other_slot = cpv.slot
except AttributeError:
@@ -115,40 +175,41 @@ class fakedbapi(dbapi):
self.cpv_remove(cpv)
break
- cp_list = self.cpdict.get(mycp)
- if cp_list is None:
- cp_list = []
- self.cpdict[mycp] = cp_list
- try:
- cp_list.remove(mycpv)
- except ValueError:
- pass
+ cp_list = self.cpdict.get(mycp, [])
+ cp_list = [x for x in cp_list
+ if self._instance_key(x) != instance_key]
cp_list.append(mycpv)
+ self.cpdict[mycp] = cp_list
def cpv_remove(self,mycpv):
"""Removes a cpv from the list of available packages."""
self._clear_cache()
mycp = cpv_getkey(mycpv)
- if mycpv in self.cpvdict:
- del self.cpvdict[mycpv]
- if mycp not in self.cpdict:
- return
- while mycpv in self.cpdict[mycp]:
- del self.cpdict[mycp][self.cpdict[mycp].index(mycpv)]
- if not len(self.cpdict[mycp]):
- del self.cpdict[mycp]
+ instance_key = self._instance_key(mycpv)
+ self.cpvdict.pop(instance_key, None)
+ cp_list = self.cpdict.get(mycp)
+ if cp_list is not None:
+ cp_list = [x for x in cp_list
+ if self._instance_key(x) != instance_key]
+ if cp_list:
+ self.cpdict[mycp] = cp_list
+ else:
+ del self.cpdict[mycp]
def aux_get(self, mycpv, wants, myrepo=None):
- if not self.cpv_exists(mycpv):
+ metadata = self.cpvdict.get(
+ self._instance_key(mycpv, support_string=True))
+ if metadata is None:
raise KeyError(mycpv)
- metadata = self.cpvdict[mycpv]
- if not metadata:
- return ["" for x in wants]
return [metadata.get(x, "") for x in wants]
def aux_update(self, cpv, values):
self._clear_cache()
- self.cpvdict[cpv].update(values)
+ metadata = self.cpvdict.get(
+ self._instance_key(cpv, support_string=True))
+ if metadata is None:
+ raise KeyError(cpv)
+ metadata.update(values)
class testdbapi(object):
"""A dbapi instance with completely fake functions to get by hitting disk
diff --git a/pym/portage/emaint/modules/binhost/binhost.py b/pym/portage/emaint/modules/binhost/binhost.py
index 1138a8c..cf1213e 100644
--- a/pym/portage/emaint/modules/binhost/binhost.py
+++ b/pym/portage/emaint/modules/binhost/binhost.py
@@ -7,6 +7,7 @@ import stat
import portage
from portage import os
from portage.util import writemsg
+from portage.versions import _pkg_str
import sys
@@ -38,7 +39,7 @@ class BinhostHandler(object):
if size is None:
return True
- mtime = data.get("MTIME")
+ mtime = data.get("_mtime_")
if mtime is None:
return True
@@ -90,6 +91,7 @@ class BinhostHandler(object):
def fix(self, **kwargs):
onProgress = kwargs.get('onProgress', None)
bintree = self._bintree
+ _instance_key = bintree.dbapi._instance_key
cpv_all = self._bintree.dbapi.cpv_all()
cpv_all.sort()
missing = []
@@ -98,16 +100,21 @@ class BinhostHandler(object):
onProgress(maxval, 0)
pkgindex = self._pkgindex
missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
- stale = set(metadata).difference(cpv_all)
if missing or stale:
from portage import locks
pkgindex_lock = locks.lockfile(
@@ -121,31 +128,39 @@ class BinhostHandler(object):
pkgindex = bintree._load_pkgindex()
self._pkgindex = pkgindex
+ # Recount stale/missing packages, with lock held.
+ missing = []
+ stale = []
metadata = {}
for d in pkgindex.packages:
- metadata[d["CPV"]] = d
-
- # Recount missing packages, with lock held.
- del missing[:]
- for i, cpv in enumerate(cpv_all):
- d = metadata.get(cpv)
+ cpv = _pkg_str(d["CPV"], metadata=d,
+ settings=bintree.settings)
+ d["CPV"] = cpv
+ metadata[_instance_key(cpv)] = d
+ if not bintree.dbapi.cpv_exists(cpv):
+ stale.append(cpv)
+
+ for cpv in cpv_all:
+ d = metadata.get(_instance_key(cpv))
if not d or self._need_update(cpv, d):
missing.append(cpv)
maxval = len(missing)
for i, cpv in enumerate(missing):
+ d = bintree._pkgindex_entry(cpv)
try:
- metadata[cpv] = bintree._pkgindex_entry(cpv)
+ bintree._eval_use_flags(cpv, d)
except portage.exception.InvalidDependString:
writemsg("!!! Invalid binary package: '%s'\n" % \
bintree.getname(cpv), noiselevel=-1)
+ else:
+ metadata[_instance_key(cpv)] = d
if onProgress:
onProgress(maxval, i+1)
- for cpv in set(metadata).difference(
- self._bintree.dbapi.cpv_all()):
- del metadata[cpv]
+ for cpv in stale:
+ del metadata[_instance_key(cpv)]
# We've updated the pkgindex, so set it to
# repopulate when necessary.
diff --git a/pym/portage/package/ebuild/config.py b/pym/portage/package/ebuild/config.py
index 71fe4df..961b1c8 100644
--- a/pym/portage/package/ebuild/config.py
+++ b/pym/portage/package/ebuild/config.py
@@ -154,7 +154,8 @@ class config(object):
'PORTAGE_PYM_PATH', 'PORTAGE_PYTHONPATH'])
_setcpv_aux_keys = ('DEFINED_PHASES', 'DEPEND', 'EAPI', 'HDEPEND',
- 'INHERITED', 'IUSE', 'REQUIRED_USE', 'KEYWORDS', 'LICENSE', 'PDEPEND',
+ 'INHERITED', 'BUILD_ID', 'IUSE', 'REQUIRED_USE',
+ 'KEYWORDS', 'LICENSE', 'PDEPEND',
'PROPERTIES', 'PROVIDE', 'RDEPEND', 'SLOT',
'repository', 'RESTRICT', 'LICENSE',)
diff --git a/pym/portage/tests/resolver/ResolverPlayground.py b/pym/portage/tests/resolver/ResolverPlayground.py
index 84ad17c..48b86fc 100644
--- a/pym/portage/tests/resolver/ResolverPlayground.py
+++ b/pym/portage/tests/resolver/ResolverPlayground.py
@@ -39,6 +39,7 @@ class ResolverPlayground(object):
config_files = frozenset(("eapi", "layout.conf", "make.conf", "package.accept_keywords",
"package.keywords", "package.license", "package.mask", "package.properties",
+ "package.provided", "packages",
"package.unmask", "package.use", "package.use.aliases", "package.use.stable.mask",
"soname.provided",
"unpack_dependencies", "use.aliases", "use.force", "use.mask", "layout.conf"))
@@ -208,25 +209,37 @@ class ResolverPlayground(object):
raise AssertionError('digest creation failed for %s' % ebuild_path)
def _create_binpkgs(self, binpkgs):
- for cpv, metadata in binpkgs.items():
+ # When using BUILD_ID, there can be multiple instances for the
+ # same cpv. Therefore, binpkgs may be an iterable instead of
+ # a dict.
+ items = getattr(binpkgs, 'items', None)
+ items = items() if items is not None else binpkgs
+ for cpv, metadata in items:
a = Atom("=" + cpv, allow_repo=True)
repo = a.repo
if repo is None:
repo = "test_repo"
+ pn = catsplit(a.cp)[1]
cat, pf = catsplit(a.cpv)
metadata = metadata.copy()
metadata.setdefault("SLOT", "0")
metadata.setdefault("KEYWORDS", "x86")
metadata.setdefault("BUILD_TIME", "0")
+ metadata.setdefault("EAPI", "0")
metadata["repository"] = repo
metadata["CATEGORY"] = cat
metadata["PF"] = pf
repo_dir = self.pkgdir
category_dir = os.path.join(repo_dir, cat)
- binpkg_path = os.path.join(category_dir, pf + ".tbz2")
- ensure_dirs(category_dir)
+ if "BUILD_ID" in metadata:
+ binpkg_path = os.path.join(category_dir, pn,
+ "%s-%s.xpak" % (pf, metadata["BUILD_ID"]))
+ else:
+ binpkg_path = os.path.join(category_dir, pf + ".tbz2")
+
+ ensure_dirs(os.path.dirname(binpkg_path))
t = portage.xpak.tbz2(binpkg_path)
t.recompose_mem(portage.xpak.xpak_mem(metadata))
@@ -252,6 +265,7 @@ class ResolverPlayground(object):
unknown_keys = set(metadata).difference(
portage.dbapi.dbapi._known_keys)
unknown_keys.discard("BUILD_TIME")
+ unknown_keys.discard("BUILD_ID")
unknown_keys.discard("COUNTER")
unknown_keys.discard("repository")
unknown_keys.discard("USE")
@@ -749,7 +763,11 @@ class ResolverPlaygroundResult(object):
repo_str = ""
if x.repo != "test_repo":
repo_str = _repo_separator + x.repo
- mergelist_str = x.cpv + repo_str
+ build_id_str = ""
+ if (x.type_name == "binary" and
+ x.cpv.build_id is not None):
+ build_id_str = "-%s" % x.cpv.build_id
+ mergelist_str = x.cpv + build_id_str + repo_str
if x.built:
if x.operation == "merge":
desc = x.type_name
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py b/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
new file mode 100644
index 0000000..4725d33
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/__init__.py
@@ -0,0 +1,2 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py b/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
new file mode 100644
index 0000000..4725d33
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/__test__.py
@@ -0,0 +1,2 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
diff --git a/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py b/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
new file mode 100644
index 0000000..5729df4
--- /dev/null
+++ b/pym/portage/tests/resolver/binpkg_multi_instance/test_rebuilt_binaries.py
@@ -0,0 +1,101 @@
+# Copyright 2015 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from portage.tests import TestCase
+from portage.tests.resolver.ResolverPlayground import (ResolverPlayground,
+ ResolverPlaygroundTestCase)
+
+class RebuiltBinariesCase(TestCase):
+
+ def testRebuiltBinaries(self):
+
+ user_config = {
+ "make.conf":
+ (
+ "FEATURES=\"binpkg-multi-instance\"",
+ ),
+ }
+
+ binpkgs = (
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("app-misc/A-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ }),
+ ("dev-libs/B-1", {
+ "EAPI": "5",
+ "BUILD_ID": "3",
+ "BUILD_TIME": "3",
+ }),
+ )
+
+ installed = {
+ "app-misc/A-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "1",
+ "BUILD_TIME": "1",
+ },
+ "dev-libs/B-1" : {
+ "EAPI": "5",
+ "BUILD_ID": "2",
+ "BUILD_TIME": "2",
+ },
+ }
+
+ world = (
+ "app-misc/A",
+ "dev-libs/B",
+ )
+
+ test_cases = (
+
+ ResolverPlaygroundTestCase(
+ ["@world"],
+ options = {
+ "--deep": True,
+ "--rebuilt-binaries": True,
+ "--update": True,
+ "--usepkgonly": True,
+ },
+ success = True,
+ ignore_mergelist_order=True,
+ mergelist = [
+ "[binary]dev-libs/B-1-3",
+ "[binary]app-misc/A-1-3"
+ ]
+ ),
+
+ )
+
+ playground = ResolverPlayground(debug=False,
+ binpkgs=binpkgs, installed=installed,
+ user_config=user_config, world=world)
+ try:
+ for test_case in test_cases:
+ playground.run_TestCase(test_case)
+ self.assertEqual(test_case.test_success, True,
+ test_case.fail_msg)
+ finally:
+ # Disable debug so that cleanup works.
+ #playground.debug = False
+ playground.cleanup()
diff --git a/pym/portage/versions.py b/pym/portage/versions.py
index 2c9fe5b..c2c6675 100644
--- a/pym/portage/versions.py
+++ b/pym/portage/versions.py
@@ -18,6 +18,7 @@ if sys.hexversion < 0x3000000:
_unicode = unicode
else:
_unicode = str
+ long = int
import portage
portage.proxy.lazyimport.lazyimport(globals(),
@@ -361,11 +362,13 @@ class _pkg_str(_unicode):
"""
def __new__(cls, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
return _unicode.__new__(cls, cpv)
def __init__(self, cpv, metadata=None, settings=None, eapi=None,
- repo=None, slot=None):
+ repo=None, slot=None, build_time=None, build_id=None,
+ file_size=None, mtime=None):
if not isinstance(cpv, _unicode):
# Avoid TypeError from _unicode.__init__ with PyPy.
cpv = _unicode_decode(cpv)
@@ -375,10 +378,51 @@ class _pkg_str(_unicode):
slot = metadata.get('SLOT', slot)
repo = metadata.get('repository', repo)
eapi = metadata.get('EAPI', eapi)
+ build_time = metadata.get('BUILD_TIME', build_time)
+ file_size = metadata.get('SIZE', file_size)
+ build_id = metadata.get('BUILD_ID', build_id)
+ mtime = metadata.get('_mtime_', mtime)
if settings is not None:
self.__dict__['_settings'] = settings
if eapi is not None:
self.__dict__['eapi'] = eapi
+ if build_time is not None:
+ try:
+ build_time = long(build_time)
+ except ValueError:
+ if build_time:
+ build_time = -1
+ else:
+ build_time = 0
+ if build_id is not None:
+ try:
+ build_id = long(build_id)
+ except ValueError:
+ if build_id:
+ build_id = -1
+ else:
+ build_id = None
+ if file_size is not None:
+ try:
+ file_size = long(file_size)
+ except ValueError:
+ if file_size:
+ file_size = -1
+ else:
+ file_size = None
+ if mtime is not None:
+ try:
+ mtime = long(mtime)
+ except ValueError:
+ if mtime:
+ mtime = -1
+ else:
+ mtime = None
+
+ self.__dict__['build_time'] = build_time
+ self.__dict__['file_size'] = file_size
+ self.__dict__['build_id'] = build_id
+ self.__dict__['mtime'] = mtime
self.__dict__['cpv_split'] = catpkgsplit(cpv, eapi=eapi)
if self.cpv_split is None:
raise InvalidData(cpv)
--
2.0.5
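For readers skimming the fakedbapi changes above, here is a minimal standalone sketch (not portage code) of the instance-key scheme the patch introduces: in single-instance mode the dictionary key is the cpv string alone, while in multi-instance mode it is a tuple that also carries build_id, file_size, build_time and mtime, so several builds of the same cpv can be indexed side by side. The `Pkg` namedtuple is a hypothetical stand-in for portage's `_pkg_str`.

```python
from collections import namedtuple

# Hypothetical stand-in for portage's _pkg_str with build metadata.
Pkg = namedtuple("Pkg", "cpv build_id build_time file_size mtime")

def instance_key(pkg, multi_instance):
    # Single-instance mode keys on the cpv alone; multi-instance mode
    # keys on (cpv, build_id, file_size, build_time, mtime), mirroring
    # _instance_key_cpv and _instance_key_multi_instance in the patch.
    if multi_instance:
        return (pkg.cpv, pkg.build_id, pkg.file_size,
                pkg.build_time, pkg.mtime)
    return pkg.cpv

builds = [
    Pkg("app-misc/A-1", build_id=1, build_time=1, file_size=100, mtime=10),
    Pkg("app-misc/A-1", build_id=2, build_time=2, file_size=100, mtime=20),
]

single = {instance_key(p, False): p for p in builds}
multi = {instance_key(p, True): p for p in builds}

# Single-instance mode collapses both builds onto one key;
# multi-instance mode keeps both.
print(len(single))  # 1
print(len(multi))   # 2
```

This is why `_set_multi_instance` must run before any packages are injected: keys produced under one mode are not comparable with keys produced under the other.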