public inbox for gentoo-commits@lists.gentoo.org
From: "Fabian Groffen" <grobian@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/portage:prefix commit in: /
Date: Fri, 14 Jan 2022 10:32:24 +0000 (UTC)
Message-ID: <1642156321.9d0d47eed1ed7b5e2bba49b1d79ca3e9fc7fb7ec.grobian@gentoo>

commit:     9d0d47eed1ed7b5e2bba49b1d79ca3e9fc7fb7ec
Author:     Fabian Groffen <grobian <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 14 10:32:01 2022 +0000
Commit:     Fabian Groffen <grobian <AT> gentoo <DOT> org>
CommitDate: Fri Jan 14 10:32:01 2022 +0000
URL:        https://gitweb.gentoo.org/proj/portage.git/commit/?id=9d0d47ee

Merge remote-tracking branch 'origin/master' into prefix

Signed-off-by: Fabian Groffen <grobian <AT> gentoo.org>

 .editorconfig                                      |     2 +-
 .github/workflows/black.yml                        |    10 +
 .github/workflows/ci.yml                           |     4 +-
 .gitignorerevs                                     |     4 +
 DEVELOPING                                         |    19 +-
 MANIFEST.in                                        |     2 +
 NEWS                                               |    56 +
 README => README.md                                |    59 +-
 RELEASE-NOTES                                      |    27 +
 bin/check-implicit-pointer-usage.py                |    52 +-
 bin/chmod-lite.py                                  |    25 +-
 bin/chpathtool.py                                  |   323 +-
 bin/dispatch-conf                                  |    13 +-
 bin/dohtml.py                                      |   424 +-
 bin/doins.py                                       |  1007 +-
 bin/ebuild-ipc.py                                  |   519 +-
 bin/ebuild.sh                                      |    83 +-
 bin/estrip                                         |   103 +-
 bin/filter-bash-environment.py                     |   266 +-
 bin/install-qa-check.d/10ignored-flags             |     4 +-
 bin/install.py                                     |   379 +-
 bin/isolated-functions.sh                          |    90 +-
 bin/lock-helper.py                                 |    34 +-
 bin/misc-functions.sh                              |    64 +-
 bin/phase-functions.sh                             |     6 +-
 bin/phase-helpers.sh                               |    14 +-
 bin/pid-ns-init                                    |   248 +-
 bin/portageq                                       |     3 +-
 bin/save-ebuild-env.sh                             |    18 +-
 bin/socks5-server.py                               |   449 +-
 bin/xattr-helper.py                                |   226 +-
 bin/xpak-helper.py                                 |    81 +-
 cnf/make.conf.example.riscv.diff                   |    61 +
 cnf/make.globals                                   |     2 +-
 doc/api/conf.py                                    |    24 +-
 lib/_emerge/AbstractDepPriority.py                 |    38 +-
 lib/_emerge/AbstractEbuildProcess.py               |   869 +-
 lib/_emerge/AbstractPollTask.py                    |   205 +-
 lib/_emerge/AsynchronousLock.py                    |   571 +-
 lib/_emerge/AsynchronousTask.py                    |   391 +-
 lib/_emerge/AtomArg.py                             |    11 +-
 lib/_emerge/Binpkg.py                              |  1003 +-
 lib/_emerge/BinpkgEnvExtractor.py                  |   123 +-
 lib/_emerge/BinpkgExtractorAsync.py                |   190 +-
 lib/_emerge/BinpkgFetcher.py                       |   447 +-
 lib/_emerge/BinpkgPrefetcher.py                    |    74 +-
 lib/_emerge/BinpkgVerifier.py                      |   232 +-
 lib/_emerge/Blocker.py                             |    15 +-
 lib/_emerge/BlockerCache.py                        |   343 +-
 lib/_emerge/BlockerDB.py                           |   226 +-
 lib/_emerge/BlockerDepPriority.py                  |    14 +-
 lib/_emerge/CompositeTask.py                       |   234 +-
 lib/_emerge/DepPriority.py                         |   100 +-
 lib/_emerge/DepPriorityNormalRange.py              |    86 +-
 lib/_emerge/DepPrioritySatisfiedRange.py           |   183 +-
 lib/_emerge/Dependency.py                          |    38 +-
 lib/_emerge/DependencyArg.py                       |    48 +-
 lib/_emerge/EbuildBinpkg.py                        |    94 +-
 lib/_emerge/EbuildBuild.py                         |  1142 +-
 lib/_emerge/EbuildBuildDir.py                      |   295 +-
 lib/_emerge/EbuildExecuter.py                      |   156 +-
 lib/_emerge/EbuildFetcher.py                       |   741 +-
 lib/_emerge/EbuildFetchonly.py                     |    58 +-
 lib/_emerge/EbuildIpcDaemon.py                     |   164 +-
 lib/_emerge/EbuildMerge.py                         |   139 +-
 lib/_emerge/EbuildMetadataPhase.py                 |   431 +-
 lib/_emerge/EbuildPhase.py                         |  1043 +-
 lib/_emerge/EbuildProcess.py                       |    31 +-
 lib/_emerge/EbuildSpawnProcess.py                  |    23 +-
 lib/_emerge/FakeVartree.py                         |   621 +-
 lib/_emerge/FifoIpcDaemon.py                       |   105 +-
 lib/_emerge/JobStatusDisplay.py                    |   556 +-
 lib/_emerge/MergeListItem.py                       |   262 +-
 lib/_emerge/MetadataRegen.py                       |   292 +-
 lib/_emerge/MiscFunctionsProcess.py                |    91 +-
 lib/_emerge/Package.py                             |  1869 +-
 lib/_emerge/PackageArg.py                          |    18 +-
 lib/_emerge/PackageMerge.py                        |    90 +-
 lib/_emerge/PackagePhase.py                        |   176 +-
 lib/_emerge/PackageUninstall.py                    |   274 +-
 lib/_emerge/PackageVirtualDbapi.py                 |   274 +-
 lib/_emerge/PipeReader.py                          |   169 +-
 lib/_emerge/PollScheduler.py                       |   346 +-
 lib/_emerge/ProgressHandler.py                     |    30 +-
 lib/_emerge/RootConfig.py                          |    64 +-
 lib/_emerge/Scheduler.py                           |  4260 ++--
 lib/_emerge/SequentialTaskQueue.py                 |   157 +-
 lib/_emerge/SetArg.py                              |    12 +-
 lib/_emerge/SpawnProcess.py                        |   545 +-
 lib/_emerge/SubProcess.py                          |   157 +-
 lib/_emerge/Task.py                                |    91 +-
 lib/_emerge/TaskSequence.py                        |    95 +-
 lib/_emerge/UninstallFailure.py                    |    21 +-
 lib/_emerge/UnmergeDepPriority.py                  |    68 +-
 lib/_emerge/UseFlagDisplay.py                      |   188 +-
 lib/_emerge/UserQuery.py                           |   108 +-
 lib/_emerge/_find_deep_system_runtime_deps.py      |    54 +-
 lib/_emerge/_flush_elog_mod_echo.py                |    19 +-
 lib/_emerge/actions.py                             |  7077 +++---
 lib/_emerge/chk_updated_cfg_files.py               |    74 +-
 lib/_emerge/clear_caches.py                        |    21 +-
 lib/_emerge/countdown.py                           |    24 +-
 lib/_emerge/create_depgraph_params.py              |   398 +-
 lib/_emerge/create_world_atom.py                   |   213 +-
 lib/_emerge/depgraph.py                            | 21924 ++++++++++---------
 lib/_emerge/emergelog.py                           |    68 +-
 lib/_emerge/getloadavg.py                          |    51 +-
 lib/_emerge/help.py                                |   168 +-
 lib/_emerge/is_valid_package_atom.py               |    30 +-
 lib/_emerge/main.py                                |  2568 +--
 lib/_emerge/post_emerge.py                         |   292 +-
 lib/_emerge/resolver/DbapiProvidesIndex.py         |   191 +-
 lib/_emerge/resolver/backtracking.py               |   548 +-
 lib/_emerge/resolver/circular_dependency.py        |   576 +-
 lib/_emerge/resolver/output.py                     |  1973 +-
 lib/_emerge/resolver/output_helpers.py             |  1110 +-
 lib/_emerge/resolver/package_tracker.py            |   740 +-
 lib/_emerge/resolver/slot_collision.py             |  2466 ++-
 lib/_emerge/search.py                              |  1071 +-
 lib/_emerge/show_invalid_depstring_notice.py       |    49 +-
 lib/_emerge/stdout_spinner.py                      |   148 +-
 lib/_emerge/unmerge.py                             |  1268 +-
 lib/portage/__init__.py                            |  1200 +-
 lib/portage/_compat_upgrade/binpkg_compression.py  |    67 +-
 .../_compat_upgrade/binpkg_multi_instance.py       |    44 +-
 lib/portage/_compat_upgrade/default_locations.py   |   174 +-
 lib/portage/_emirrordist/Config.py                 |   274 +-
 lib/portage/_emirrordist/ContentDB.py              |   371 +-
 lib/portage/_emirrordist/DeletionIterator.py       |   204 +-
 lib/portage/_emirrordist/DeletionTask.py           |   284 +-
 lib/portage/_emirrordist/FetchIterator.py          |   539 +-
 lib/portage/_emirrordist/FetchTask.py              |  1354 +-
 lib/portage/_emirrordist/MirrorDistTask.py         |   458 +-
 lib/portage/_emirrordist/main.py                   |   882 +-
 lib/portage/_global_updates.py                     |   504 +-
 lib/portage/_legacy_globals.py                     |   145 +-
 lib/portage/_selinux.py                            |   228 +-
 lib/portage/_sets/ProfilePackageSet.py             |    65 +-
 lib/portage/_sets/__init__.py                      |   619 +-
 lib/portage/_sets/base.py                          |   462 +-
 lib/portage/_sets/dbapi.py                         |  1073 +-
 lib/portage/_sets/files.py                         |   757 +-
 lib/portage/_sets/libs.py                          |   172 +-
 lib/portage/_sets/profiles.py                      |   110 +-
 lib/portage/_sets/security.py                      |   151 +-
 lib/portage/_sets/shell.py                         |    65 +-
 lib/portage/binrepo/config.py                      |   249 +-
 lib/portage/cache/anydbm.py                        |   159 +-
 lib/portage/cache/cache_errors.py                  |   103 +-
 lib/portage/cache/ebuild_xattr.py                  |   298 +-
 lib/portage/cache/flat_hash.py                     |   263 +-
 lib/portage/cache/fs_template.py                   |   130 +-
 lib/portage/cache/index/IndexStreamIterator.py     |    32 +-
 lib/portage/cache/index/pkg_desc_index.py          |    67 +-
 lib/portage/cache/mappings.py                      |   780 +-
 lib/portage/cache/metadata.py                      |   292 +-
 lib/portage/cache/sql_template.py                  |   618 +-
 lib/portage/cache/sqlite.py                        |   633 +-
 lib/portage/cache/template.py                      |   674 +-
 lib/portage/cache/volatile.py                      |    35 +-
 lib/portage/checksum.py                            |   948 +-
 lib/portage/const.py                               |   389 +-
 lib/portage/cvstree.py                             |   551 +-
 lib/portage/data.py                                |   564 +-
 lib/portage/dbapi/DummyTree.py                     |    24 +-
 lib/portage/dbapi/IndexedPortdb.py                 |   313 +-
 lib/portage/dbapi/IndexedVardb.py                  |   209 +-
 .../dbapi/_ContentsCaseSensitivityManager.py       |   179 +-
 lib/portage/dbapi/_MergeProcess.py                 |   465 +-
 lib/portage/dbapi/_SyncfsProcess.py                |    91 +-
 lib/portage/dbapi/_VdbMetadataDelta.py             |   333 +-
 lib/portage/dbapi/__init__.py                      |   878 +-
 lib/portage/dbapi/_expand_new_virt.py              |   129 +-
 lib/portage/dbapi/_similar_name_search.py          |    96 +-
 lib/portage/dbapi/bintree.py                       |  3835 ++--
 lib/portage/dbapi/cpv_expand.py                    |   183 +-
 lib/portage/dbapi/dep_expand.py                    |    84 +-
 lib/portage/dbapi/porttree.py                      |  3230 +--
 lib/portage/dbapi/vartree.py                       | 12207 ++++++-----
 lib/portage/dbapi/virtual.py                       |   442 +-
 lib/portage/debug.py                               |   215 +-
 lib/portage/dep/__init__.py                        |  5919 ++---
 lib/portage/dep/_dnf.py                            |   159 +-
 lib/portage/dep/_slot_operator.py                  |   206 +-
 lib/portage/dep/dep_check.py                       |  2019 +-
 lib/portage/dep/soname/SonameAtom.py               |    96 +-
 lib/portage/dep/soname/multilib_category.py        |   268 +-
 lib/portage/dep/soname/parse.py                    |    68 +-
 lib/portage/dispatch_conf.py                       |   706 +-
 lib/portage/eapi.py                                |   466 +-
 lib/portage/eclass_cache.py                        |   322 +-
 lib/portage/elog/__init__.py                       |   326 +-
 lib/portage/elog/filtering.py                      |    21 +-
 lib/portage/elog/messages.py                       |   311 +-
 lib/portage/elog/mod_custom.py                     |    25 +-
 lib/portage/elog/mod_echo.py                       |   103 +-
 lib/portage/elog/mod_mail.py                       |    63 +-
 lib/portage/elog/mod_mail_summary.py               |   150 +-
 lib/portage/elog/mod_save.py                       |   128 +-
 lib/portage/elog/mod_save_summary.py               |   134 +-
 lib/portage/elog/mod_syslog.py                     |    37 +-
 lib/portage/emaint/defaults.py                     |    39 +-
 lib/portage/emaint/main.py                         |   431 +-
 lib/portage/emaint/modules/binhost/__init__.py     |    26 +-
 lib/portage/emaint/modules/binhost/binhost.py      |   351 +-
 lib/portage/emaint/modules/config/__init__.py      |    26 +-
 lib/portage/emaint/modules/config/config.py        |   129 +-
 lib/portage/emaint/modules/logs/__init__.py        |    79 +-
 lib/portage/emaint/modules/logs/logs.py            |   186 +-
 lib/portage/emaint/modules/merges/__init__.py      |    69 +-
 lib/portage/emaint/modules/merges/merges.py        |   543 +-
 lib/portage/emaint/modules/move/__init__.py        |    46 +-
 lib/portage/emaint/modules/move/move.py            |   361 +-
 lib/portage/emaint/modules/resume/__init__.py      |    26 +-
 lib/portage/emaint/modules/resume/resume.py        |    97 +-
 lib/portage/emaint/modules/sync/__init__.py        |    99 +-
 lib/portage/emaint/modules/sync/sync.py            |   936 +-
 lib/portage/emaint/modules/world/__init__.py       |    26 +-
 lib/portage/emaint/modules/world/world.py          |   158 +-
 lib/portage/env/config.py                          |   162 +-
 lib/portage/env/loaders.py                         |   588 +-
 lib/portage/env/validators.py                      |    23 +-
 lib/portage/exception.py                           |   248 +-
 lib/portage/getbinpkg.py                           |  1737 +-
 lib/portage/glsa.py                                |  1432 +-
 lib/portage/localization.py                        |    62 +-
 lib/portage/locks.py                               |  1419 +-
 lib/portage/mail.py                                |   238 +-
 lib/portage/manifest.py                            |  1446 +-
 lib/portage/metadata.py                            |   417 +-
 lib/portage/module.py                              |   449 +-
 lib/portage/news.py                                |   862 +-
 lib/portage/output.py                              |  1589 +-
 .../package/ebuild/_config/KeywordsManager.py      |   658 +-
 .../package/ebuild/_config/LicenseManager.py       |   450 +-
 .../package/ebuild/_config/LocationsManager.py     |   752 +-
 lib/portage/package/ebuild/_config/MaskManager.py  |   585 +-
 lib/portage/package/ebuild/_config/UseManager.py   |  1297 +-
 .../package/ebuild/_config/VirtualsManager.py      |   444 +-
 .../package/ebuild/_config/env_var_validation.py   |    31 +-
 lib/portage/package/ebuild/_config/features_set.py |   237 +-
 lib/portage/package/ebuild/_config/helper.py       |   103 +-
 .../package/ebuild/_config/special_env_vars.py     |   452 +-
 .../package/ebuild/_config/unpack_dependencies.py  |    67 +-
 lib/portage/package/ebuild/_ipc/ExitCommand.py     |    36 +-
 lib/portage/package/ebuild/_ipc/IpcCommand.py      |     7 +-
 lib/portage/package/ebuild/_ipc/QueryCommand.py    |   264 +-
 lib/portage/package/ebuild/_metadata_invalid.py    |    68 +-
 .../ebuild/_parallel_manifest/ManifestProcess.py   |    74 +-
 .../ebuild/_parallel_manifest/ManifestScheduler.py |   146 +-
 .../ebuild/_parallel_manifest/ManifestTask.py      |   403 +-
 lib/portage/package/ebuild/_spawn_nofetch.py       |   206 +-
 lib/portage/package/ebuild/config.py               |  6259 +++---
 .../package/ebuild/deprecated_profile_check.py     |   172 +-
 lib/portage/package/ebuild/digestcheck.py          |   296 +-
 lib/portage/package/ebuild/digestgen.py            |   401 +-
 lib/portage/package/ebuild/doebuild.py             |  5730 ++---
 lib/portage/package/ebuild/fetch.py                |  3521 +--
 lib/portage/package/ebuild/getmaskingreason.py     |   205 +-
 lib/portage/package/ebuild/getmaskingstatus.py     |   333 +-
 lib/portage/package/ebuild/prepare_build_dirs.py   |   914 +-
 lib/portage/package/ebuild/profile_iuse.py         |    49 +-
 lib/portage/process.py                             |  1870 +-
 lib/portage/progress.py                            |    87 +-
 lib/portage/proxy/lazyimport.py                    |   385 +-
 lib/portage/proxy/objectproxy.py                   |   120 +-
 lib/portage/repository/config.py                   |  2743 +--
 .../repository/storage/hardlink_quarantine.py      |   181 +-
 lib/portage/repository/storage/hardlink_rcu.py     |   491 +-
 lib/portage/repository/storage/inplace.py          |    61 +-
 lib/portage/repository/storage/interface.py        |   127 +-
 lib/portage/sync/__init__.py                       |    57 +-
 lib/portage/sync/config_checks.py                  |   131 +-
 lib/portage/sync/controller.py                     |   731 +-
 lib/portage/sync/getaddrinfo_validate.py           |    37 +-
 lib/portage/sync/modules/cvs/__init__.py           |    64 +-
 lib/portage/sync/modules/cvs/cvs.py                |   116 +-
 lib/portage/sync/modules/git/__init__.py           |   115 +-
 lib/portage/sync/modules/git/git.py                |   584 +-
 lib/portage/sync/modules/mercurial/__init__.py     |    52 +-
 lib/portage/sync/modules/mercurial/mercurial.py    |   320 +-
 lib/portage/sync/modules/rsync/__init__.py         |    52 +-
 lib/portage/sync/modules/rsync/rsync.py            |  1519 +-
 lib/portage/sync/modules/svn/__init__.py           |    38 +-
 lib/portage/sync/modules/svn/svn.py                |   152 +-
 lib/portage/sync/modules/webrsync/__init__.py      |    60 +-
 lib/portage/sync/modules/webrsync/webrsync.py      |   236 +-
 lib/portage/sync/old_tree_timestamp.py             |   167 +-
 lib/portage/sync/syncbase.py                       |   657 +-
 lib/portage/tests/__init__.py                      |   550 +-
 lib/portage/tests/bin/setup_env.py                 |   131 +-
 lib/portage/tests/bin/test_dobin.py                |    19 +-
 lib/portage/tests/bin/test_dodir.py                |    23 +-
 lib/portage/tests/bin/test_doins.py                |   636 +-
 lib/portage/tests/bin/test_eapi7_ver_funcs.py      |   460 +-
 lib/portage/tests/bin/test_filter_bash_env.py      |    70 +-
 lib/portage/tests/dbapi/test_auxdb.py              |   197 +-
 lib/portage/tests/dbapi/test_fakedbapi.py          |   163 +-
 lib/portage/tests/dbapi/test_portdb_cache.py       |   349 +-
 lib/portage/tests/dep/testAtom.py                  |   997 +-
 lib/portage/tests/dep/testCheckRequiredUse.py      |   414 +-
 lib/portage/tests/dep/testExtendedAtomDict.py      |    20 +-
 lib/portage/tests/dep/testExtractAffectingUSE.py   |   158 +-
 lib/portage/tests/dep/testStandalone.py            |    66 +-
 lib/portage/tests/dep/test_best_match_to_list.py   |   168 +-
 lib/portage/tests/dep/test_dep_getcpv.py           |    56 +-
 lib/portage/tests/dep/test_dep_getrepo.py          |    40 +-
 lib/portage/tests/dep/test_dep_getslot.py          |    35 +-
 lib/portage/tests/dep/test_dep_getusedeps.py       |    47 +-
 lib/portage/tests/dep/test_dnf_convert.py          |    94 +-
 lib/portage/tests/dep/test_get_operator.py         |    53 +-
 .../tests/dep/test_get_required_use_flags.py       |    76 +-
 lib/portage/tests/dep/test_isjustname.py           |    32 +-
 lib/portage/tests/dep/test_isvalidatom.py          |   407 +-
 lib/portage/tests/dep/test_match_from_list.py      |   410 +-
 lib/portage/tests/dep/test_overlap_dnf.py          |    57 +-
 lib/portage/tests/dep/test_paren_reduce.py         |   121 +-
 lib/portage/tests/dep/test_soname_atom_pickle.py   |    20 +-
 lib/portage/tests/dep/test_use_reduce.py           |  1391 +-
 .../tests/ebuild/test_array_fromfile_eof.py        |    73 +-
 lib/portage/tests/ebuild/test_config.py            |   724 +-
 lib/portage/tests/ebuild/test_doebuild_fd_pipes.py |   273 +-
 lib/portage/tests/ebuild/test_doebuild_spawn.py    |   176 +-
 lib/portage/tests/ebuild/test_fetch.py             |  1518 +-
 lib/portage/tests/ebuild/test_ipc_daemon.py        |   289 +-
 lib/portage/tests/ebuild/test_shell_quote.py       |   218 +-
 lib/portage/tests/ebuild/test_spawn.py             |    79 +-
 .../tests/ebuild/test_use_expand_incremental.py    |   212 +-
 lib/portage/tests/emerge/test_config_protect.py    |   451 +-
 .../emerge/test_emerge_blocker_file_collision.py   |   304 +-
 lib/portage/tests/emerge/test_emerge_slot_abi.py   |   335 +-
 lib/portage/tests/emerge/test_global_updates.py    |    33 +-
 lib/portage/tests/emerge/test_simple.py            |  1054 +-
 .../tests/env/config/test_PackageKeywordsFile.py   |    59 +-
 .../tests/env/config/test_PackageMaskFile.py       |    34 +-
 .../tests/env/config/test_PackageUseFile.py        |    44 +-
 .../tests/env/config/test_PortageModulesFile.py    |    57 +-
 lib/portage/tests/glsa/test_security_set.py        |   174 +-
 lib/portage/tests/lafilefixer/test_lafilefixer.py  |   187 +-
 .../test_lazy_import_portage_baseline.py           |   135 +-
 .../lazyimport/test_preload_portage_submodules.py  |    18 +-
 lib/portage/tests/lint/metadata.py                 |     9 +-
 lib/portage/tests/lint/test_bash_syntax.py         |    81 +-
 lib/portage/tests/lint/test_compile_modules.py     |   103 +-
 lib/portage/tests/lint/test_import_modules.py      |    59 +-
 lib/portage/tests/locks/test_asynchronous_lock.py  |   340 +-
 lib/portage/tests/locks/test_lock_nonblock.py      |   120 +-
 lib/portage/tests/news/test_NewsItem.py            |   121 +-
 lib/portage/tests/process/test_AsyncFunction.py    |    91 +-
 lib/portage/tests/process/test_PipeLogger.py       |   106 +-
 lib/portage/tests/process/test_PopenProcess.py     |   156 +-
 .../tests/process/test_PopenProcessBlockingIO.py   |   103 +-
 lib/portage/tests/process/test_poll.py             |   182 +-
 lib/portage/tests/process/test_unshare_net.py      |    38 +-
 lib/portage/tests/resolver/ResolverPlayground.py   |  1950 +-
 .../test_build_id_profile_format.py                |   271 +-
 .../binpkg_multi_instance/test_rebuilt_binaries.py |   193 +-
 .../tests/resolver/soname/test_autounmask.py       |   176 +-
 lib/portage/tests/resolver/soname/test_depclean.py |   106 +-
 .../tests/resolver/soname/test_downgrade.py        |   463 +-
 .../tests/resolver/soname/test_or_choices.py       |   166 +-
 .../tests/resolver/soname/test_reinstall.py        |   150 +-
 .../tests/resolver/soname/test_skip_update.py      |   148 +-
 .../soname/test_slot_conflict_reinstall.py         |   671 +-
 .../resolver/soname/test_slot_conflict_update.py   |   203 +-
 .../tests/resolver/soname/test_soname_provided.py  |   131 +-
 .../tests/resolver/soname/test_unsatisfiable.py    |   117 +-
 .../tests/resolver/soname/test_unsatisfied.py      |   148 +-
 .../test_aggressive_backtrack_downgrade.py         |   153 +-
 lib/portage/tests/resolver/test_autounmask.py      |  1346 +-
 .../tests/resolver/test_autounmask_binpkg_use.py   |   115 +-
 .../resolver/test_autounmask_keep_keywords.py      |   122 +-
 .../tests/resolver/test_autounmask_multilib_use.py |   147 +-
 .../tests/resolver/test_autounmask_parent.py       |    65 +-
 .../resolver/test_autounmask_use_backtrack.py      |   146 +-
 .../tests/resolver/test_autounmask_use_breakage.py |   174 +-
 .../resolver/test_autounmask_use_slot_conflict.py  |    76 +-
 lib/portage/tests/resolver/test_backtracking.py    |   360 +-
 lib/portage/tests/resolver/test_bdeps.py           |   399 +-
 .../resolver/test_binary_pkg_ebuild_visibility.py  |   262 +-
 lib/portage/tests/resolver/test_blocker.py         |   240 +-
 lib/portage/tests/resolver/test_changed_deps.py    |   213 +-
 .../tests/resolver/test_circular_choices.py        |   401 +-
 .../tests/resolver/test_circular_choices_rust.py   |   160 +-
 .../tests/resolver/test_circular_dependencies.py   |   197 +-
 lib/portage/tests/resolver/test_complete_graph.py  |   291 +-
 ...test_complete_if_new_subslot_without_revbump.py |   120 +-
 lib/portage/tests/resolver/test_depclean.py        |   552 +-
 lib/portage/tests/resolver/test_depclean_order.py  |   105 +-
 .../resolver/test_depclean_slot_unavailable.py     |   127 +-
 lib/portage/tests/resolver/test_depth.py           |   586 +-
 .../resolver/test_disjunctive_depend_order.py      |   145 +-
 lib/portage/tests/resolver/test_eapi.py            |   298 +-
 .../tests/resolver/test_features_test_use.py       |   146 +-
 .../resolver/test_imagemagick_graphicsmagick.py    |   183 +-
 lib/portage/tests/resolver/test_keywords.py        |   664 +-
 lib/portage/tests/resolver/test_merge_order.py     |  1237 +-
 .../test_missing_iuse_and_evaluated_atoms.py       |    53 +-
 lib/portage/tests/resolver/test_multirepo.py       |   779 +-
 lib/portage/tests/resolver/test_multislot.py       |    99 +-
 .../tests/resolver/test_old_dep_chain_display.py   |    67 +-
 lib/portage/tests/resolver/test_onlydeps.py        |    57 +-
 .../tests/resolver/test_onlydeps_circular.py       |    87 +-
 .../tests/resolver/test_onlydeps_minimal.py        |    83 +-
 lib/portage/tests/resolver/test_or_choices.py      |  1474 +-
 .../tests/resolver/test_or_downgrade_installed.py  |   152 +-
 .../tests/resolver/test_or_upgrade_installed.py    |   418 +-
 lib/portage/tests/resolver/test_output.py          |   183 +-
 lib/portage/tests/resolver/test_package_tracker.py |   509 +-
 .../tests/resolver/test_profile_default_eapi.py    |   214 +-
 .../tests/resolver/test_profile_package_set.py     |   217 +-
 lib/portage/tests/resolver/test_rebuild.py         |   324 +-
 .../test_regular_slot_change_without_revbump.py    |   104 +-
 lib/portage/tests/resolver/test_required_use.py    |   435 +-
 .../resolver/test_runtime_cycle_merge_order.py     |   127 +-
 lib/portage/tests/resolver/test_simple.py          |   144 +-
 lib/portage/tests/resolver/test_slot_abi.py        |   907 +-
 .../tests/resolver/test_slot_abi_downgrade.py      |   425 +-
 .../resolver/test_slot_change_without_revbump.py   |   150 +-
 lib/portage/tests/resolver/test_slot_collisions.py |   606 +-
 .../resolver/test_slot_conflict_force_rebuild.py   |   125 +-
 .../resolver/test_slot_conflict_mask_update.py     |    63 +-
 .../tests/resolver/test_slot_conflict_rebuild.py   |   937 +-
 .../test_slot_conflict_unsatisfied_deep_deps.py    |   351 +-
 .../tests/resolver/test_slot_conflict_update.py    |   156 +-
 .../resolver/test_slot_conflict_update_virt.py     |   129 +-
 .../resolver/test_slot_operator_autounmask.py      |   232 +-
 .../tests/resolver/test_slot_operator_bdeps.py     |   395 +-
 .../resolver/test_slot_operator_complete_graph.py  |   250 +-
 .../resolver/test_slot_operator_exclusive_slots.py |   266 +-
 .../resolver/test_slot_operator_missed_update.py   |   196 +-
 .../tests/resolver/test_slot_operator_rebuild.py   |   196 +-
 .../resolver/test_slot_operator_required_use.py    |   114 +-
 .../resolver/test_slot_operator_reverse_deps.py    |   540 +-
 .../test_slot_operator_runtime_pkg_mask.py         |   240 +-
 .../resolver/test_slot_operator_unsatisfied.py     |   121 +-
 .../tests/resolver/test_slot_operator_unsolved.py  |   147 +-
 ..._slot_operator_update_probe_parent_downgrade.py |   112 +-
 .../test_solve_non_slot_operator_slot_conflicts.py |   113 +-
 lib/portage/tests/resolver/test_targetroot.py      |   178 +-
 .../tests/resolver/test_unecessary_slot_upgrade.py |    62 +
 lib/portage/tests/resolver/test_unmerge_order.py   |   394 +-
 .../tests/resolver/test_use_dep_defaults.py        |    80 +-
 lib/portage/tests/resolver/test_useflags.py        |   214 +-
 .../resolver/test_virtual_minimize_children.py     |   548 +-
 lib/portage/tests/resolver/test_virtual_slot.py    |   458 +-
 lib/portage/tests/resolver/test_with_test_deps.py  |   150 +-
 lib/portage/tests/runTests.py                      |    36 +-
 .../tests/sets/base/testInternalPackageSet.py      |    67 +-
 lib/portage/tests/sets/files/testConfigFileSet.py  |    37 +-
 lib/portage/tests/sets/files/testStaticFileSet.py  |    27 +-
 lib/portage/tests/sets/shell/testShell.py          |    31 +-
 lib/portage/tests/sync/test_sync_local.py          |   836 +-
 lib/portage/tests/unicode/test_string_format.py    |    78 +-
 lib/portage/tests/update/test_move_ent.py          |   188 +-
 lib/portage/tests/update/test_move_slot_ent.py     |   259 +-
 lib/portage/tests/update/test_update_dbentry.py    |   560 +-
 .../tests/util/dyn_libs/test_soname_deps.py        |    37 +-
 .../tests/util/eventloop/test_call_soon_fifo.py    |    28 +-
 lib/portage/tests/util/file_copy/test_copyfile.py  |    98 +-
 .../util/futures/asyncio/test_child_watcher.py     |    78 +-
 .../futures/asyncio/test_event_loop_in_fork.py     |    68 +-
 .../tests/util/futures/asyncio/test_pipe_closed.py |   251 +-
 .../asyncio/test_policy_wrapper_recursion.py       |    22 +-
 .../futures/asyncio/test_run_until_complete.py     |    44 +-
 .../util/futures/asyncio/test_subprocess_exec.py   |   356 +-
 .../util/futures/asyncio/test_wakeup_fd_sigchld.py |    62 +-
 .../tests/util/futures/test_compat_coroutine.py    |   392 +-
 .../tests/util/futures/test_done_callback.py       |    41 +-
 .../util/futures/test_done_callback_after_exit.py  |    69 +-
 .../tests/util/futures/test_iter_completed.py      |   141 +-
 lib/portage/tests/util/futures/test_retry.py       |   462 +-
 lib/portage/tests/util/test_checksum.py            |   236 +-
 lib/portage/tests/util/test_digraph.py             |   495 +-
 lib/portage/tests/util/test_file_copier.py         |    63 +-
 lib/portage/tests/util/test_getconfig.py           |   111 +-
 lib/portage/tests/util/test_grabdict.py            |     9 +-
 lib/portage/tests/util/test_install_mask.py        |   315 +-
 lib/portage/tests/util/test_normalizedPath.py      |    11 +-
 lib/portage/tests/util/test_shelve.py              |    90 +-
 lib/portage/tests/util/test_socks5.py              |   314 +-
 lib/portage/tests/util/test_stackDictList.py       |    28 +-
 lib/portage/tests/util/test_stackDicts.py          |    41 +-
 lib/portage/tests/util/test_stackLists.py          |    22 +-
 lib/portage/tests/util/test_uniqueArray.py         |    33 +-
 lib/portage/tests/util/test_varExpand.py           |   182 +-
 lib/portage/tests/util/test_whirlpool.py           |    17 +-
 lib/portage/tests/util/test_xattr.py               |   278 +-
 lib/portage/tests/versions/test_cpv_sort_key.py    |    18 +-
 lib/portage/tests/versions/test_vercmp.py          |   151 +-
 lib/portage/tests/xpak/test_decodeint.py           |    12 +-
 lib/portage/update.py                              |   805 +-
 lib/portage/util/ExtractKernelVersion.py           |   135 +-
 lib/portage/util/SlotObject.py                     |   101 +-
 lib/portage/util/__init__.py                       |  3605 +--
 lib/portage/util/_async/AsyncFunction.py           |   116 +-
 lib/portage/util/_async/AsyncScheduler.py          |   194 +-
 lib/portage/util/_async/AsyncTaskFuture.py         |    48 +-
 lib/portage/util/_async/BuildLogger.py             |   201 +-
 lib/portage/util/_async/FileCopier.py              |    32 +-
 lib/portage/util/_async/FileDigester.py            |   140 +-
 lib/portage/util/_async/ForkProcess.py             |   295 +-
 lib/portage/util/_async/PipeLogger.py              |   359 +-
 lib/portage/util/_async/PipeReaderBlockingIO.py    |   130 +-
 lib/portage/util/_async/PopenProcess.py            |    56 +-
 lib/portage/util/_async/SchedulerInterface.py      |   233 +-
 lib/portage/util/_async/TaskScheduler.py           |    23 +-
 lib/portage/util/_async/run_main_scheduler.py      |    66 +-
 lib/portage/util/_compare_files.py                 |   179 +-
 lib/portage/util/_ctypes.py                        |    65 +-
 lib/portage/util/_desktop_entry.py                 |   117 +-
 lib/portage/util/_dyn_libs/LinkageMapELF.py        |  1855 +-
 lib/portage/util/_dyn_libs/NeededEntry.py          |   129 +-
 .../util/_dyn_libs/PreservedLibsRegistry.py        |   461 +-
 .../util/_dyn_libs/display_preserved_libs.py       |   166 +-
 lib/portage/util/_dyn_libs/dyn_libs.py             |    28 +
 lib/portage/util/_dyn_libs/soname_deps.py          |   310 +-
 lib/portage/util/_dyn_libs/soname_deps_qa.py       |   161 +-
 lib/portage/util/_eventloop/asyncio_event_loop.py  |   264 +-
 lib/portage/util/_eventloop/global_event_loop.py   |     2 +-
 lib/portage/util/_get_vm_info.py                   |   140 +-
 lib/portage/util/_info_files.py                    |   237 +-
 lib/portage/util/_path.py                          |    34 +-
 lib/portage/util/_pty.py                           |    98 +-
 lib/portage/util/_urlopen.py                       |   157 +-
 lib/portage/util/_xattr.py                         |   349 +-
 lib/portage/util/backoff.py                        |    80 +-
 lib/portage/util/bin_entry_point.py                |    44 +-
 lib/portage/util/changelog.py                      |   103 +-
 lib/portage/util/compression_probe.py              |   191 +-
 lib/portage/util/configparser.py                   |   106 +-
 lib/portage/util/cpuinfo.py                        |    43 +-
 lib/portage/util/digraph.py                        |   747 +-
 lib/portage/util/elf/constants.py                  |    91 +-
 lib/portage/util/elf/header.py                     |   116 +-
 lib/portage/util/endian/decode.py                  |    70 +-
 lib/portage/util/env_update.py                     |   801 +-
 lib/portage/util/file_copy/__init__.py             |    37 +-
 lib/portage/util/formatter.py                      |    95 +-
 lib/portage/util/futures/__init__.py               |     4 +-
 lib/portage/util/futures/_asyncio/__init__.py      |   486 +-
 lib/portage/util/futures/_asyncio/streams.py       |   139 +-
 lib/portage/util/futures/_sync_decorator.py        |    75 +-
 lib/portage/util/futures/compat_coroutine.py       |   220 +-
 lib/portage/util/futures/executor/fork.py          |   228 +-
 lib/portage/util/futures/extendedfutures.py        |   115 +-
 lib/portage/util/futures/futures.py                |    16 +-
 lib/portage/util/futures/iter_completed.py         |   342 +-
 lib/portage/util/futures/retry.py                  |   377 +-
 lib/portage/util/futures/unix_events.py            |    95 +-
 lib/portage/util/hooks.py                          |    52 +
 lib/portage/util/install_mask.py                   |   340 +-
 lib/portage/util/iterators/MultiIterGroupBy.py     |   164 +-
 lib/portage/util/lafilefixer.py                    |   331 +-
 lib/portage/util/listdir.py                        |   251 +-
 lib/portage/util/locale.py                         |   240 +-
 lib/portage/util/movefile.py                       |   665 +-
 lib/portage/util/mtimedb.py                        |   221 +-
 lib/portage/util/netlink.py                        |   139 +-
 lib/portage/util/path.py                           |    82 +-
 lib/portage/util/shelve.py                         |    86 +-
 lib/portage/util/socks5.py                         |   199 +-
 lib/portage/util/whirlpool.py                      |  2736 ++-
 lib/portage/util/writeable_check.py                |   199 +-
 lib/portage/versions.py                            |  1100 +-
 lib/portage/xml/metadata.py                        |   842 +-
 lib/portage/xpak.py                                |   943 +-
 man/color.map.5                                    |     6 +-
 man/dispatch-conf.1                                |     5 +
 man/emerge.1                                       |     9 +-
 man/make.conf.5                                    |    15 +-
 repoman/bin/repoman                                |    46 +-
 repoman/lib/repoman/__init__.py                    |   153 +-
 repoman/lib/repoman/_portage.py                    |     7 +-
 repoman/lib/repoman/_subprocess.py                 |    89 +-
 repoman/lib/repoman/actions.py                     |  1467 +-
 repoman/lib/repoman/argparser.py                   |   621 +-
 repoman/lib/repoman/check_missingslot.py           |    43 +-
 repoman/lib/repoman/config.py                      |   284 +-
 repoman/lib/repoman/copyrights.py                  |   221 +-
 repoman/lib/repoman/errors.py                      |    17 +-
 repoman/lib/repoman/gpg.py                         |   106 +-
 repoman/lib/repoman/main.py                        |   369 +-
 repoman/lib/repoman/metadata.py                    |   130 +-
 repoman/lib/repoman/modules/commit/manifest.py     |   189 +-
 repoman/lib/repoman/modules/commit/repochecks.py   |    54 +-
 .../modules/linechecks/assignment/__init__.py      |    34 +-
 .../modules/linechecks/assignment/assignment.py    |    40 +-
 repoman/lib/repoman/modules/linechecks/base.py     |   177 +-
 repoman/lib/repoman/modules/linechecks/config.py   |   195 +-
 .../lib/repoman/modules/linechecks/controller.py   |   277 +-
 .../repoman/modules/linechecks/depend/__init__.py  |    22 +-
 .../repoman/modules/linechecks/depend/implicit.py  |    63 +-
 .../modules/linechecks/deprecated/__init__.py      |    70 +-
 .../modules/linechecks/deprecated/deprecated.py    |    39 +-
 .../modules/linechecks/deprecated/inherit.py       |   113 +-
 .../lib/repoman/modules/linechecks/do/__init__.py  |    22 +-
 repoman/lib/repoman/modules/linechecks/do/dosym.py |    24 +-
 .../repoman/modules/linechecks/eapi/__init__.py    |    82 +-
 .../lib/repoman/modules/linechecks/eapi/checks.py  |   100 +-
 .../repoman/modules/linechecks/eapi/definition.py  |    55 +-
 .../repoman/modules/linechecks/emake/__init__.py   |    34 +-
 .../lib/repoman/modules/linechecks/emake/emake.py  |    30 +-
 .../modules/linechecks/gentoo_header/__init__.py   |    22 +-
 .../modules/linechecks/gentoo_header/header.py     |    95 +-
 .../repoman/modules/linechecks/helpers/__init__.py |    22 +-
 .../repoman/modules/linechecks/helpers/offset.py   |    31 +-
 .../repoman/modules/linechecks/nested/__init__.py  |    22 +-
 .../repoman/modules/linechecks/nested/nested.py    |    13 +-
 .../repoman/modules/linechecks/nested/nesteddie.py |    14 +-
 .../repoman/modules/linechecks/patches/__init__.py |    22 +-
 .../repoman/modules/linechecks/patches/patches.py  |    29 +-
 .../repoman/modules/linechecks/phases/__init__.py  |    58 +-
 .../lib/repoman/modules/linechecks/phases/phase.py |   305 +-
 .../repoman/modules/linechecks/portage/__init__.py |    34 +-
 .../repoman/modules/linechecks/portage/internal.py |    44 +-
 .../repoman/modules/linechecks/quotes/__init__.py  |    34 +-
 .../repoman/modules/linechecks/quotes/quoteda.py   |    15 +-
 .../repoman/modules/linechecks/quotes/quotes.py    |   147 +-
 .../lib/repoman/modules/linechecks/uri/__init__.py |    22 +-
 repoman/lib/repoman/modules/linechecks/uri/uri.py  |    50 +-
 .../lib/repoman/modules/linechecks/use/__init__.py |    22 +-
 .../repoman/modules/linechecks/use/builtwith.py    |     9 +-
 .../repoman/modules/linechecks/useless/__init__.py |    34 +-
 .../lib/repoman/modules/linechecks/useless/cd.py   |    32 +-
 .../repoman/modules/linechecks/useless/dodoc.py    |    19 +-
 .../modules/linechecks/whitespace/__init__.py      |    34 +-
 .../repoman/modules/linechecks/whitespace/blank.py |    31 +-
 .../modules/linechecks/whitespace/whitespace.py    |    23 +-
 .../modules/linechecks/workaround/__init__.py      |    22 +-
 .../modules/linechecks/workaround/workarounds.py   |    12 +-
 .../lib/repoman/modules/scan/depend/__init__.py    |    57 +-
 .../repoman/modules/scan/depend/_depend_checks.py  |   448 +-
 .../lib/repoman/modules/scan/depend/_gen_arches.py |   112 +-
 repoman/lib/repoman/modules/scan/depend/profile.py |   748 +-
 .../repoman/modules/scan/directories/__init__.py   |    83 +-
 .../lib/repoman/modules/scan/directories/files.py  |   155 +-
 .../lib/repoman/modules/scan/directories/mtime.py  |    46 +-
 repoman/lib/repoman/modules/scan/eapi/__init__.py  |    38 +-
 repoman/lib/repoman/modules/scan/eapi/eapi.py      |    87 +-
 .../lib/repoman/modules/scan/ebuild/__init__.py    |   106 +-
 repoman/lib/repoman/modules/scan/ebuild/ebuild.py  |   464 +-
 .../lib/repoman/modules/scan/ebuild/multicheck.py  |   100 +-
 .../lib/repoman/modules/scan/eclasses/__init__.py  |    78 +-
 repoman/lib/repoman/modules/scan/eclasses/live.py  |   119 +-
 repoman/lib/repoman/modules/scan/eclasses/ruby.py  |    86 +-
 repoman/lib/repoman/modules/scan/fetch/__init__.py |    51 +-
 repoman/lib/repoman/modules/scan/fetch/fetches.py  |   369 +-
 .../lib/repoman/modules/scan/keywords/__init__.py  |    51 +-
 .../lib/repoman/modules/scan/keywords/keywords.py  |   331 +-
 .../lib/repoman/modules/scan/manifest/__init__.py  |    45 +-
 .../lib/repoman/modules/scan/manifest/manifests.py |    92 +-
 .../lib/repoman/modules/scan/metadata/__init__.py  |   158 +-
 .../repoman/modules/scan/metadata/description.py   |    73 +-
 .../modules/scan/metadata/ebuild_metadata.py       |   128 +-
 .../repoman/modules/scan/metadata/pkgmetadata.py   |   374 +-
 .../lib/repoman/modules/scan/metadata/restrict.py  |    96 +-
 .../lib/repoman/modules/scan/metadata/use_flags.py |   155 +-
 repoman/lib/repoman/modules/scan/module.py         |   173 +-
 .../lib/repoman/modules/scan/options/__init__.py   |    37 +-
 .../lib/repoman/modules/scan/options/options.py    |    48 +-
 repoman/lib/repoman/modules/scan/scan.py           |    99 +-
 repoman/lib/repoman/modules/scan/scanbase.py       |   106 +-
 repoman/lib/repoman/modules/vcs/None/__init__.py   |    46 +-
 repoman/lib/repoman/modules/vcs/None/changes.py    |    88 +-
 repoman/lib/repoman/modules/vcs/None/status.py     |    96 +-
 repoman/lib/repoman/modules/vcs/__init__.py        |     5 +-
 repoman/lib/repoman/modules/vcs/bzr/__init__.py    |    46 +-
 repoman/lib/repoman/modules/vcs/bzr/changes.py     |   115 +-
 repoman/lib/repoman/modules/vcs/bzr/status.py      |   104 +-
 repoman/lib/repoman/modules/vcs/changes.py         |   311 +-
 repoman/lib/repoman/modules/vcs/cvs/__init__.py    |    46 +-
 repoman/lib/repoman/modules/vcs/cvs/changes.py     |   218 +-
 repoman/lib/repoman/modules/vcs/cvs/status.py      |   239 +-
 repoman/lib/repoman/modules/vcs/git/__init__.py    |    46 +-
 repoman/lib/repoman/modules/vcs/git/changes.py     |   252 +-
 repoman/lib/repoman/modules/vcs/git/status.py      |   115 +-
 repoman/lib/repoman/modules/vcs/hg/__init__.py     |    46 +-
 repoman/lib/repoman/modules/vcs/hg/changes.py      |   193 +-
 repoman/lib/repoman/modules/vcs/hg/status.py       |    95 +-
 repoman/lib/repoman/modules/vcs/settings.py        |   176 +-
 repoman/lib/repoman/modules/vcs/svn/__init__.py    |    46 +-
 repoman/lib/repoman/modules/vcs/svn/changes.py     |   266 +-
 repoman/lib/repoman/modules/vcs/svn/status.py      |   268 +-
 repoman/lib/repoman/modules/vcs/vcs.py             |   259 +-
 repoman/lib/repoman/profile.py                     |   133 +-
 repoman/lib/repoman/qa_data.py                     |   374 +-
 repoman/lib/repoman/qa_tracker.py                  |    81 +-
 repoman/lib/repoman/repos.py                       |   624 +-
 repoman/lib/repoman/scanner.py                     |   852 +-
 repoman/lib/repoman/tests/__init__.py              |   539 +-
 .../lib/repoman/tests/changelog/test_echangelog.py |   251 +-
 repoman/lib/repoman/tests/commit/test_commitmsg.py |   156 +-
 repoman/lib/repoman/tests/runTests.py              |    36 +-
 repoman/lib/repoman/tests/simple/test_simple.py    |   903 +-
 repoman/lib/repoman/utilities.py                   |  1019 +-
 repoman/runtests                                   |   258 +-
 repoman/setup.py                                   |   768 +-
 runtests                                           |   252 +-
 setup.py                                           |  1427 +-
 src/portage_util_file_copy_reflink_linux.c         |     7 +-
 tabcheck.py                                        |     3 +-
 tox.ini                                            |    18 +-
 703 files changed, 142061 insertions(+), 124529 deletions(-)

diff --cc README.md
index d9e004269,e75b430c6..2c17d09b2
--- a/README.md
+++ b/README.md
@@@ -1,20 -1,41 +1,46 @@@
 -[![CI](https://github.com/gentoo/portage/actions/workflows/ci.yml/badge.svg)](https://github.com/gentoo/portage/actions/workflows/ci.yml)
 -
+ About Portage
+ =============
+ 
+ Portage is a package management system based on ports collections. The
+ Package Manager Specification Project (PMS) standardises and documents
+ the behaviour of Portage so that ebuild repositories can be used by
+ other package managers.
+ 
 +This is the prefix branch of portage, a branch that deals with portage
 +setup as packagemanager for a given offset in the filesystem running
 +with user privileges.
 +
 +If you are not looking for something Gentoo Prefix like, then this
 +is not the right place.
 +
+ Contributing
+ ============
  
- =======
- About Portage
- =============
+ Contributions are always welcome! We've started using
+ [black](https://pypi.org/project/black/) to format the code base. Please make
+ sure you run it against any PR's prior to submitting (otherwise we'll probably
+ reject it).
  
- Portage is a package management system based on ports collections. The
- Package Manager Specification Project (PMS) standardises and documents
- the behaviour of Portage so that ebuild repositories can be used by
- other package managers.
+ There are [ways to
+ integrate](https://black.readthedocs.io/en/stable/integrations/editors.html)
+ black into your text editor and/or IDE.
  
+ You can also set up a git hook to check your commits, in case you don't want
+ editor integration. Something like this:
+ 
+ ```sh
+ # .git/hooks/pre-commit (don't forget to chmod +x)
+ 
+ #!/bin/bash
+ black --check --diff .
+ ```
+ 
+ To ignore commit 1bb64ff452 - which is a massive commit that simply formatted
+ the code base using black - you can do the following:
+ 
+ ```sh
+ git config blame.ignoreRevsFile .gitignorerevs
+ ```
  
  Dependencies
  ============
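
A usage note on the pre-commit hook quoted in the README hunk above: for the hook to run under bash, the shebang has to be the very first line of the file. A minimal sketch, using the same `black --check --diff .` invocation and the standard `.git/hooks/pre-commit` path (nothing else assumed):

```sh
#!/bin/bash
# .git/hooks/pre-commit -- abort the commit if black would reformat anything.
# Make the hook executable first: chmod +x .git/hooks/pre-commit
exec black --check --diff .
```

The `git config blame.ignoreRevsFile .gitignorerevs` step shown above then keeps `git blame` from attributing lines to the bulk black-reformatting commit listed in `.gitignorerevs`.
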
diff --cc bin/save-ebuild-env.sh
index 7d7235f23,98808814b..b3d4c7363
mode 100755,100644..100755
--- a/bin/save-ebuild-env.sh
+++ b/bin/save-ebuild-env.sh
@@@ -1,5 -1,5 +1,5 @@@
 -#!/bin/bash
 +#!@PORTAGE_BASH@
- # Copyright 1999-2018 Gentoo Foundation
+ # Copyright 1999-2021 Gentoo Authors
  # Distributed under the terms of the GNU General Public License v2
  
  # @FUNCTION: __save_ebuild_env
@@@ -89,16 -89,12 +89,16 @@@ __save_ebuild_env() 
  	___eapi_has_package_manager_build_user && unset -f package_manager_build_user
  	___eapi_has_package_manager_build_group && unset -f package_manager_build_group
  
 -	# Clear out the triple underscore namespace as it is reserved by the PM.
 -	unset -f $(compgen -A function ___)
 -	unset ${!___*}
 +	# BEGIN PREFIX LOCAL: compgen is not compiled in during bootstrap
 +	if type compgen >& /dev/null ; then
 +		# Clear out the triple underscore namespace as it is reserved by the PM.
 +		unset -f $(compgen -A function ___)
 +		unset ${!___*}
 +	fi
 +	# END PREFIX LOCAL
  
  	# portage config variables and variables set directly by portage
- 	unset ACCEPT_LICENSE BAD BRACKET BUILD_PREFIX COLS \
+ 	unset ACCEPT_LICENSE BUILD_PREFIX COLS \
  		DISTDIR DOC_SYMLINKS_DIR \
  		EBUILD_FORCE_TEST EBUILD_MASTER_PID \
  		ECLASS_DEPTH ENDCOL FAKEROOTKEY \
diff --cc lib/_emerge/EbuildPhase.py
index 8eaa57497,12326fffd..d07cff7bd
--- a/lib/_emerge/EbuildPhase.py
+++ b/lib/_emerge/EbuildPhase.py
@@@ -26,29 -29,29 +29,31 @@@ from portage.util._async.AsyncTaskFutur
  from portage.util._async.BuildLogger import BuildLogger
  from portage.util.futures import asyncio
  from portage.util.futures.executor.fork import ForkExecutor
 +# PREFIX LOCAL
 +from portage.const import EPREFIX
  
  try:
- 	from portage.xml.metadata import MetaDataXML
+     from portage.xml.metadata import MetaDataXML
  except (SystemExit, KeyboardInterrupt):
- 	raise
+     raise
  except (ImportError, SystemError, RuntimeError, Exception):
- 	# broken or missing xml support
- 	# https://bugs.python.org/issue14988
- 	MetaDataXML = None
+     # broken or missing xml support
+     # https://bugs.python.org/issue14988
+     MetaDataXML = None
  
  import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 	'portage.elog:messages@elog_messages',
- 	'portage.package.ebuild.doebuild:_check_build_log,' + \
- 		'_post_phase_cmds,_post_phase_userpriv_perms,' + \
- 		'_post_phase_emptydir_cleanup,' +
- 		'_post_src_install_soname_symlinks,' + \
- 		'_post_src_install_uid_fix,_postinst_bsdflags,' + \
- 		'_post_src_install_write_metadata,' + \
- 		'_preinst_bsdflags',
- 	'portage.util.futures.unix_events:_set_nonblocking',
+ 
+ portage.proxy.lazyimport.lazyimport(
+     globals(),
+     "portage.elog:messages@elog_messages",
+     "portage.package.ebuild.doebuild:_check_build_log,"
+     + "_post_phase_cmds,_post_phase_userpriv_perms,"
+     + "_post_phase_emptydir_cleanup,"
+     + "_post_src_install_soname_symlinks,"
+     + "_post_src_install_uid_fix,_postinst_bsdflags,"
+     + "_post_src_install_write_metadata,"
+     + "_preinst_bsdflags",
+     "portage.util.futures.unix_events:_set_nonblocking",
  )
  from portage import os
  from portage import _encodings
@@@ -450,84 -521,93 +523,110 @@@ class EbuildPhase(CompositeTask)
  
  class _PostPhaseCommands(CompositeTask):
  
- 	__slots__ = ("commands", "elog", "fd_pipes", "logfile", "phase", "settings")
- 
- 	def _start(self):
- 		if isinstance(self.commands, list):
- 			cmds = [({}, self.commands)]
- 		else:
- 			cmds = list(self.commands)
- 
- 		if 'selinux' not in self.settings.features:
- 			cmds = [(kwargs, commands) for kwargs, commands in
- 				cmds if not kwargs.get('selinux_only')]
- 
- 		tasks = TaskSequence()
- 		for kwargs, commands in cmds:
- 			# Select args intended for MiscFunctionsProcess.
- 			kwargs = dict((k, v) for k, v in kwargs.items()
- 				if k in ('ld_preload_sandbox',))
- 			tasks.add(MiscFunctionsProcess(background=self.background,
- 				commands=commands, fd_pipes=self.fd_pipes,
- 				logfile=self.logfile, phase=self.phase,
- 				scheduler=self.scheduler, settings=self.settings, **kwargs))
- 
- 		self._start_task(tasks, self._commands_exit)
- 
- 	def _commands_exit(self, task):
- 
- 		if self._default_exit(task) != os.EX_OK:
- 			self._async_wait()
- 			return
- 
- 		if self.phase == 'install':
- 			out = io.StringIO()
- 			_post_src_install_soname_symlinks(self.settings, out)
- 			msg = out.getvalue()
- 			if msg:
- 				self.scheduler.output(msg, log_path=self.settings.get("PORTAGE_LOG_FILE"))
- 
- 			if 'qa-unresolved-soname-deps' in self.settings.features:
- 				# This operates on REQUIRES metadata generated by the above function call.
- 				future = asyncio.ensure_future(self._soname_deps_qa(), loop=self.scheduler)
- 				# If an unexpected exception occurs, then this will raise it.
- 				future.add_done_callback(lambda future: future.cancelled() or future.result())
- 				self._start_task(AsyncTaskFuture(future=future), self._default_final_exit)
- 			else:
- 				self._default_final_exit(task)
- 		else:
- 			self._default_final_exit(task)
- 
- 	async def _soname_deps_qa(self):
- 
- 		vardb = QueryCommand.get_db()[self.settings['EROOT']]['vartree'].dbapi
- 
- 		all_provides = (await self.scheduler.run_in_executor(ForkExecutor(loop=self.scheduler), _get_all_provides, vardb))
- 
- 		unresolved = _get_unresolved_soname_deps(os.path.join(self.settings['PORTAGE_BUILDDIR'], 'build-info'), all_provides)
+     __slots__ = ("commands", "elog", "fd_pipes", "logfile", "phase", "settings")
+ 
+     def _start(self):
+         if isinstance(self.commands, list):
+             cmds = [({}, self.commands)]
+         else:
+             cmds = list(self.commands)
+ 
+         if "selinux" not in self.settings.features:
+             cmds = [
+                 (kwargs, commands)
+                 for kwargs, commands in cmds
+                 if not kwargs.get("selinux_only")
+             ]
+ 
+         tasks = TaskSequence()
+         for kwargs, commands in cmds:
+             # Select args intended for MiscFunctionsProcess.
+             kwargs = dict(
+                 (k, v) for k, v in kwargs.items() if k in ("ld_preload_sandbox",)
+             )
+             tasks.add(
+                 MiscFunctionsProcess(
+                     background=self.background,
+                     commands=commands,
+                     fd_pipes=self.fd_pipes,
+                     logfile=self.logfile,
+                     phase=self.phase,
+                     scheduler=self.scheduler,
+                     settings=self.settings,
+                     **kwargs
+                 )
+             )
+ 
+         self._start_task(tasks, self._commands_exit)
+ 
+     def _commands_exit(self, task):
+ 
+         if self._default_exit(task) != os.EX_OK:
+             self._async_wait()
+             return
+ 
+         if self.phase == "install":
+             out = io.StringIO()
+             _post_src_install_soname_symlinks(self.settings, out)
+             msg = out.getvalue()
+             if msg:
+                 self.scheduler.output(
+                     msg, log_path=self.settings.get("PORTAGE_LOG_FILE")
+                 )
+ 
+             if "qa-unresolved-soname-deps" in self.settings.features:
+                 # This operates on REQUIRES metadata generated by the above function call.
+                 future = asyncio.ensure_future(
+                     self._soname_deps_qa(), loop=self.scheduler
+                 )
+                 # If an unexpected exception occurs, then this will raise it.
+                 future.add_done_callback(
+                     lambda future: future.cancelled() or future.result()
+                 )
+                 self._start_task(
+                     AsyncTaskFuture(future=future), self._default_final_exit
+                 )
+             else:
+                 self._default_final_exit(task)
+         else:
+             self._default_final_exit(task)
+ 
+     async def _soname_deps_qa(self):
+ 
+         vardb = QueryCommand.get_db()[self.settings["EROOT"]]["vartree"].dbapi
+ 
+         all_provides = await self.scheduler.run_in_executor(
+             ForkExecutor(loop=self.scheduler), _get_all_provides, vardb
+         )
+ 
+         unresolved = _get_unresolved_soname_deps(
+             os.path.join(self.settings["PORTAGE_BUILDDIR"], "build-info"), all_provides
+         )
  
 +		# BEGIN PREFIX LOCAL
 +		if EPREFIX != "" and unresolved:
 +			# in prefix, consider the host libs for any unresolved libs,
 +			# so we kill warnings about missing libc.so.1, etc.
 +			for obj, libs in list(unresolved):
 +				unresolved.remove((obj, libs))
 +				libs=list(libs)
 +				for lib in list(libs):
 +					for path in ['/lib64', '/lib/64', '/lib', \
 +							'/usr/lib64', '/usr/lib/64', '/usr/lib']:
 +						if os.path.exists(os.path.join(path, lib)):
 +							libs.remove(lib)
 +							break
 +				if len(libs) > 0:
 +					unresolved.append((obj, tuple(libs)))
 +		# END PREFIX LOCAL
 +
- 		if unresolved:
- 			unresolved.sort()
- 			qa_msg = ["QA Notice: Unresolved soname dependencies:"]
- 			qa_msg.append("")
- 			qa_msg.extend("\t%s: %s" % (filename, " ".join(sorted(soname_deps)))
- 				for filename, soname_deps in unresolved)
- 			qa_msg.append("")
- 			await self.elog("eqawarn", qa_msg)
+         if unresolved:
+             unresolved.sort()
+             qa_msg = ["QA Notice: Unresolved soname dependencies:"]
+             qa_msg.append("")
+             qa_msg.extend(
+                 "\t%s: %s" % (filename, " ".join(sorted(soname_deps)))
+                 for filename, soname_deps in unresolved
+             )
+             qa_msg.append("")
+             await self.elog("eqawarn", qa_msg)
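
For reference, _get_unresolved_soname_deps() returns (object, soname_deps) pairs, and the
PREFIX LOCAL block above simply drops any soname that the host system already provides
before the QA notice is emitted. A minimal standalone sketch of that filtering, using the
same host library directories as above (the helper name drop_host_resolvable is
illustrative only; in the commit the logic runs inline in _soname_deps_qa()):

    import os

    HOST_LIB_DIRS = ["/lib64", "/lib/64", "/lib", "/usr/lib64", "/usr/lib/64", "/usr/lib"]

    def drop_host_resolvable(unresolved, host_dirs=HOST_LIB_DIRS):
        # Keep only sonames that cannot be found in any host library
        # directory; entries whose sonames all resolve on the host are
        # dropped entirely, mirroring the PREFIX LOCAL block above.
        filtered = []
        for obj, sonames in unresolved:
            missing = [
                lib
                for lib in sonames
                if not any(os.path.exists(os.path.join(d, lib)) for d in host_dirs)
            ]
            if missing:
                filtered.append((obj, tuple(missing)))
        return filtered

    # Example: on a host that ships libc.so.6 in /usr/lib64,
    #   drop_host_resolvable([("usr/bin/foo", ("libc.so.6", "libbar.so.1"))])
    # would return [("usr/bin/foo", ("libbar.so.1",))].
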
diff --cc lib/_emerge/Package.py
index 40e595f36,90dfccdef..79f5459b3
--- a/lib/_emerge/Package.py
+++ b/lib/_emerge/Package.py
@@@ -16,783 -22,884 +22,886 @@@ from portage.exception import InvalidDa
  from portage.localization import _
  from _emerge.Task import Task
  
+ 
  class Package(Task):
  
- 	__hash__ = Task.__hash__
- 	__slots__ = ("built", "cpv", "depth",
- 		"installed", "onlydeps", "operation",
- 		"root_config", "type_name",
- 		"category", "counter", "cp", "cpv_split",
- 		"inherited", "iuse", "mtime",
- 		"pf", "root", "slot", "sub_slot", "slot_atom", "version") + \
- 		("_invalid", "_masks", "_metadata", "_provided_cps",
- 		"_raw_metadata", "_provides", "_requires", "_use",
- 		"_validated_atoms", "_visible")
- 
- 	metadata_keys = [
- 		"BDEPEND",
- 		"BUILD_ID", "BUILD_TIME", "CHOST", "COUNTER", "DEFINED_PHASES",
- 		"DEPEND", "EAPI", "IDEPEND", "INHERITED", "IUSE", "KEYWORDS",
- 		"LICENSE", "MD5", "PDEPEND", "PROVIDES",
- 		"RDEPEND", "repository", "REQUIRED_USE",
- 		"PROPERTIES", "REQUIRES", "RESTRICT", "SIZE",
- 		"SLOT", "USE", "_mtime_", "EPREFIX"]
- 
- 	_dep_keys = ('BDEPEND', 'DEPEND', 'IDEPEND', 'PDEPEND', 'RDEPEND')
- 	_buildtime_keys = ('BDEPEND', 'DEPEND')
- 	_runtime_keys = ('IDEPEND', 'PDEPEND', 'RDEPEND')
- 	_use_conditional_misc_keys = ('LICENSE', 'PROPERTIES', 'RESTRICT')
- 	UNKNOWN_REPO = _unknown_repo
- 
- 	def __init__(self, **kwargs):
- 		metadata = _PackageMetadataWrapperBase(kwargs.pop('metadata'))
- 		Task.__init__(self, **kwargs)
- 		# the SlotObject constructor assigns self.root_config from keyword args
- 		# and is an instance of a '_emerge.RootConfig.RootConfig class
- 		self.root = self.root_config.root
- 		self._raw_metadata = metadata
- 		self._metadata = _PackageMetadataWrapper(self, metadata)
- 		if not self.built:
- 			self._metadata['CHOST'] = self.root_config.settings.get('CHOST', '')
- 		eapi_attrs = _get_eapi_attrs(self.eapi)
- 
- 		try:
- 			db = self.cpv._db
- 		except AttributeError:
- 			if self.built:
- 				# For independence from the source ebuild repository and
- 				# profile implicit IUSE state, require the _db attribute
- 				# for built packages.
- 				raise
- 			db = self.root_config.trees['porttree'].dbapi
- 
- 		self.cpv = _pkg_str(self.cpv, metadata=self._metadata,
- 			settings=self.root_config.settings, db=db)
- 		if hasattr(self.cpv, 'slot_invalid'):
- 			self._invalid_metadata('SLOT.invalid',
- 				"SLOT: invalid value: '%s'" % self._metadata["SLOT"])
- 		self.cpv_split = self.cpv.cpv_split
- 		self.category, self.pf = portage.catsplit(self.cpv)
- 		self.cp = self.cpv.cp
- 		self.version = self.cpv.version
- 		self.slot = self.cpv.slot
- 		self.sub_slot = self.cpv.sub_slot
- 		self.slot_atom = Atom("%s%s%s" % (self.cp, _slot_separator, self.slot))
- 		# sync metadata with validated repo (may be UNKNOWN_REPO)
- 		self._metadata['repository'] = self.cpv.repo
- 
- 		if self.root_config.settings.local_config:
- 			implicit_match = db._iuse_implicit_cnstr(self.cpv, self._metadata)
- 		else:
- 			implicit_match = db._repoman_iuse_implicit_cnstr(self.cpv, self._metadata)
- 		usealiases = self.root_config.settings._use_manager.getUseAliases(self)
- 		self.iuse = self._iuse(self, self._metadata["IUSE"].split(),
- 			implicit_match, usealiases, self.eapi)
- 
- 		if (self.iuse.enabled or self.iuse.disabled) and \
- 			not eapi_attrs.iuse_defaults:
- 			if not self.installed:
- 				self._invalid_metadata('EAPI.incompatible',
- 					"IUSE contains defaults, but EAPI doesn't allow them")
- 		if self.inherited is None:
- 			self.inherited = frozenset()
- 
- 		if self.operation is None:
- 			if self.onlydeps or self.installed:
- 				self.operation = "nomerge"
- 			else:
- 				self.operation = "merge"
- 
- 		self._hash_key = Package._gen_hash_key(cpv=self.cpv,
- 			installed=self.installed, onlydeps=self.onlydeps,
- 			operation=self.operation, repo_name=self.cpv.repo,
- 			root_config=self.root_config,
- 			type_name=self.type_name)
- 		self._hash_value = hash(self._hash_key)
- 
- 	@property
- 	def eapi(self):
- 		return self._metadata["EAPI"]
- 
- 	@property
- 	def build_id(self):
- 		return self.cpv.build_id
- 
- 	@property
- 	def build_time(self):
- 		if not self.built:
- 			raise AttributeError('build_time')
- 		return self.cpv.build_time
- 
- 	@property
- 	def defined_phases(self):
- 		return self._metadata.defined_phases
- 
- 	@property
- 	def properties(self):
- 		return self._metadata.properties
- 
- 	@property
- 	def provided_cps(self):
- 		return (self.cp,)
- 
- 	@property
- 	def restrict(self):
- 		return self._metadata.restrict
- 
- 	@property
- 	def metadata(self):
- 		warnings.warn("_emerge.Package.Package.metadata is deprecated",
- 			DeprecationWarning, stacklevel=3)
- 		return self._metadata
- 
- 	# These are calculated on-demand, so that they are calculated
- 	# after FakeVartree applies its metadata tweaks.
- 	@property
- 	def invalid(self):
- 		if self._invalid is None:
- 			self._validate_deps()
- 			if self._invalid is None:
- 				self._invalid = False
- 		return self._invalid
- 
- 	@property
- 	def masks(self):
- 		if self._masks is None:
- 			self._masks = self._eval_masks()
- 		return self._masks
- 
- 	@property
- 	def visible(self):
- 		if self._visible is None:
- 			self._visible = self._eval_visiblity(self.masks)
- 		return self._visible
- 
- 	@property
- 	def validated_atoms(self):
- 		"""
- 		Returns *all* validated atoms from the deps, regardless
- 		of USE conditionals, with USE conditionals inside
- 		atoms left unevaluated.
- 		"""
- 		if self._validated_atoms is None:
- 			self._validate_deps()
- 		return self._validated_atoms
- 
- 	@property
- 	def stable(self):
- 		return self.cpv.stable
- 
- 	@property
- 	def provides(self):
- 		self.invalid
- 		return self._provides
- 
- 	@property
- 	def requires(self):
- 		self.invalid
- 		return self._requires
- 
- 	@classmethod
- 	def _gen_hash_key(cls, cpv=None, installed=None, onlydeps=None,
- 		operation=None, repo_name=None, root_config=None,
- 		type_name=None, **kwargs):
- 
- 		if operation is None:
- 			if installed or onlydeps:
- 				operation = "nomerge"
- 			else:
- 				operation = "merge"
- 
- 		root = None
- 		if root_config is not None:
- 			root = root_config.root
- 		else:
- 			raise TypeError("root_config argument is required")
- 
- 		elements = [type_name, root, str(cpv), operation]
- 
- 		# For installed (and binary) packages we don't care for the repo
- 		# when it comes to hashing, because there can only be one cpv.
- 		# So overwrite the repo_key with type_name.
- 		if type_name is None:
- 			raise TypeError("type_name argument is required")
- 		elif type_name == "ebuild":
- 			if repo_name is None:
- 				raise AssertionError(
- 					"Package._gen_hash_key() " + \
- 					"called without 'repo_name' argument")
- 			elements.append(repo_name)
- 		elif type_name == "binary":
- 			# Including a variety of fingerprints in the hash makes
- 			# it possible to simultaneously consider multiple similar
- 			# packages. Note that digests are not included here, since
- 			# they are relatively expensive to compute, and they may
- 			# not necessarily be available.
- 			elements.extend([cpv.build_id, cpv.file_size,
- 				cpv.build_time, cpv.mtime])
- 		else:
- 			# For installed (and binary) packages we don't care for the repo
- 			# when it comes to hashing, because there can only be one cpv.
- 			# So overwrite the repo_key with type_name.
- 			elements.append(type_name)
- 
- 		return tuple(elements)
- 
- 	def _validate_deps(self):
- 		"""
- 		Validate deps. This does not trigger USE calculation since that
- 		is expensive for ebuilds and therefore we want to avoid doing
- 		it unnecessarily (like for masked packages).
- 		"""
- 		eapi = self.eapi
- 		dep_eapi = eapi
- 		dep_valid_flag = self.iuse.is_valid_flag
- 		if self.installed:
- 			# Ignore EAPI.incompatible and conditionals missing
- 			# from IUSE for installed packages since these issues
- 			# aren't relevant now (re-evaluate when new EAPIs are
- 			# deployed).
- 			dep_eapi = None
- 			dep_valid_flag = None
- 
- 		validated_atoms = []
- 		for k in self._dep_keys:
- 			v = self._metadata.get(k)
- 			if not v:
- 				continue
- 			try:
- 				atoms = use_reduce(v, eapi=dep_eapi,
- 					matchall=True, is_valid_flag=dep_valid_flag,
- 					token_class=Atom, flat=True)
- 			except InvalidDependString as e:
- 				self._metadata_exception(k, e)
- 			else:
- 				validated_atoms.extend(atoms)
- 				if not self.built:
- 					for atom in atoms:
- 						if not isinstance(atom, Atom):
- 							continue
- 						if atom.slot_operator_built:
- 							e = InvalidDependString(
- 								_("Improper context for slot-operator "
- 								"\"built\" atom syntax: %s") %
- 								(atom.unevaluated_atom,))
- 							self._metadata_exception(k, e)
- 
- 		self._validated_atoms = tuple(set(atom for atom in
- 			validated_atoms if isinstance(atom, Atom)))
- 
- 		for k in self._use_conditional_misc_keys:
- 			v = self._metadata.get(k)
- 			if not v:
- 				continue
- 			try:
- 				use_reduce(v, eapi=dep_eapi, matchall=True,
- 					is_valid_flag=dep_valid_flag)
- 			except InvalidDependString as e:
- 				self._metadata_exception(k, e)
- 
- 		k = 'REQUIRED_USE'
- 		v = self._metadata.get(k)
- 		if v and not self.built:
- 			if not _get_eapi_attrs(eapi).required_use:
- 				self._invalid_metadata('EAPI.incompatible',
- 					"REQUIRED_USE set, but EAPI='%s' doesn't allow it" % eapi)
- 			else:
- 				try:
- 					check_required_use(v, (),
- 						self.iuse.is_valid_flag, eapi=eapi)
- 				except InvalidDependString as e:
- 					self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
- 
- 		k = 'SRC_URI'
- 		v = self._metadata.get(k)
- 		if v:
- 			try:
- 				use_reduce(v, is_src_uri=True, eapi=eapi, matchall=True,
- 					is_valid_flag=self.iuse.is_valid_flag)
- 			except InvalidDependString as e:
- 				if not self.installed:
- 					self._metadata_exception(k, e)
- 
- 		if self.built:
- 			k = 'PROVIDES'
- 			try:
- 				self._provides = frozenset(
- 					parse_soname_deps(self._metadata[k]))
- 			except InvalidData as e:
- 				self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
- 
- 			k = 'REQUIRES'
- 			try:
- 				self._requires = frozenset(
- 					parse_soname_deps(self._metadata[k]))
- 			except InvalidData as e:
- 				self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
- 
- 	def copy(self):
- 		return Package(built=self.built, cpv=self.cpv, depth=self.depth,
- 			installed=self.installed, metadata=self._raw_metadata,
- 			onlydeps=self.onlydeps, operation=self.operation,
- 			root_config=self.root_config, type_name=self.type_name)
- 
- 	def _eval_masks(self):
- 		masks = {}
- 		settings = self.root_config.settings
- 
- 		if self.invalid is not False:
- 			masks['invalid'] = self.invalid
- 
- 		if not settings._accept_chost(self.cpv, self._metadata):
- 			masks['CHOST'] = self._metadata['CHOST']
- 
- 		eapi = self.eapi
- 		if not portage.eapi_is_supported(eapi):
- 			masks['EAPI.unsupported'] = eapi
- 		if portage._eapi_is_deprecated(eapi):
- 			masks['EAPI.deprecated'] = eapi
- 
- 		missing_keywords = settings._getMissingKeywords(
- 			self.cpv, self._metadata)
- 		if missing_keywords:
- 			masks['KEYWORDS'] = missing_keywords
- 
- 		try:
- 			missing_properties = settings._getMissingProperties(
- 				self.cpv, self._metadata)
- 			if missing_properties:
- 				masks['PROPERTIES'] = missing_properties
- 		except InvalidDependString:
- 			# already recorded as 'invalid'
- 			pass
- 
- 		try:
- 			missing_restricts = settings._getMissingRestrict(
- 				self.cpv, self._metadata)
- 			if missing_restricts:
- 				masks['RESTRICT'] = missing_restricts
- 		except InvalidDependString:
- 			# already recorded as 'invalid'
- 			pass
- 
- 		mask_atom = settings._getMaskAtom(self.cpv, self._metadata)
- 		if mask_atom is not None:
- 			masks['package.mask'] = mask_atom
- 
- 		try:
- 			missing_licenses = settings._getMissingLicenses(
- 				self.cpv, self._metadata)
- 			if missing_licenses:
- 				masks['LICENSE'] = missing_licenses
- 		except InvalidDependString:
- 			# already recorded as 'invalid'
- 			pass
- 
- 		if not masks:
- 			masks = False
- 
- 		return masks
- 
- 	def _eval_visiblity(self, masks):
- 
- 		if masks is not False:
- 
- 			if 'EAPI.unsupported' in masks:
- 				return False
- 
- 			if 'invalid' in masks:
- 				return False
- 
- 			if not self.installed and ( \
- 				'CHOST' in masks or \
- 				'EAPI.deprecated' in masks or \
- 				'KEYWORDS' in masks or \
- 				'PROPERTIES' in masks or \
- 				'RESTRICT' in masks):
- 				return False
- 
- 			if 'package.mask' in masks or \
- 				'LICENSE' in masks:
- 				return False
- 
- 		return True
- 
- 	def get_keyword_mask(self):
- 		"""returns None, 'missing', or 'unstable'."""
- 
- 		missing = self.root_config.settings._getRawMissingKeywords(
- 				self.cpv, self._metadata)
- 
- 		if not missing:
- 			return None
- 
- 		if '**' in missing:
- 			return 'missing'
- 
- 		global_accept_keywords = frozenset(
- 			self.root_config.settings.get("ACCEPT_KEYWORDS", "").split())
- 
- 		for keyword in missing:
- 			if keyword.lstrip("~") in global_accept_keywords:
- 				return 'unstable'
- 
- 		return 'missing'
- 
- 	def isHardMasked(self):
- 		"""returns a bool if the cpv is in the list of
- 		expanded pmaskdict[cp] available ebuilds"""
- 		pmask = self.root_config.settings._getRawMaskAtom(
- 			self.cpv, self._metadata)
- 		return pmask is not None
- 
- 	def _metadata_exception(self, k, e):
- 
- 		if k.endswith('DEPEND'):
- 			qacat = 'dependency.syntax'
- 		else:
- 			qacat = k + ".syntax"
- 
- 		if not self.installed:
- 			categorized_error = False
- 			if e.errors:
- 				for error in e.errors:
- 					if getattr(error, 'category', None) is None:
- 						continue
- 					categorized_error = True
- 					self._invalid_metadata(error.category,
- 						"%s: %s" % (k, error))
- 
- 			if not categorized_error:
- 				self._invalid_metadata(qacat,"%s: %s" % (k, e))
- 		else:
- 			# For installed packages, show the path of the file
- 			# containing the invalid metadata, since the user may
- 			# want to fix the deps by hand.
- 			vardb = self.root_config.trees['vartree'].dbapi
- 			path = vardb.getpath(self.cpv, filename=k)
- 			self._invalid_metadata(qacat, "%s: %s in '%s'" % (k, e, path))
- 
- 	def _invalid_metadata(self, msg_type, msg):
- 		if self._invalid is None:
- 			self._invalid = {}
- 		msgs = self._invalid.get(msg_type)
- 		if msgs is None:
- 			msgs = []
- 			self._invalid[msg_type] = msgs
- 		msgs.append(msg)
- 
- 	def __str__(self):
- 		if self.operation == "merge":
- 			if self.type_name == "binary":
- 				cpv_color = "PKG_BINARY_MERGE"
- 			else:
- 				cpv_color = "PKG_MERGE"
- 		elif self.operation == "uninstall":
- 			cpv_color = "PKG_UNINSTALL"
- 		else:
- 			cpv_color = "PKG_NOMERGE"
- 
- 		build_id_str = ""
- 		if isinstance(self.cpv.build_id, int) and self.cpv.build_id > 0:
- 			build_id_str = "-%s" % self.cpv.build_id
- 
- 		s = "(%s, %s" \
- 			% (portage.output.colorize(cpv_color, self.cpv +
- 			build_id_str + _slot_separator + self.slot + "/" +
- 			self.sub_slot + _repo_separator + self.repo),
- 			self.type_name)
- 
- 		if self.type_name == "installed":
- 			if self.root_config.settings['ROOT'] != "/":
- 				s += " in '%s'" % self.root_config.settings['ROOT']
- 			if self.operation == "uninstall":
- 				s += " scheduled for uninstall"
- 		else:
- 			if self.operation == "merge":
- 				s += " scheduled for merge"
- 				if self.root_config.settings['ROOT'] != "/":
- 					s += " to '%s'" % self.root_config.settings['ROOT']
- 		s += ")"
- 		return s
- 
- 	class _use_class:
- 
- 		__slots__ = ("enabled", "_expand", "_expand_hidden",
- 			"_force", "_pkg", "_mask")
- 
- 		# Share identical frozenset instances when available.
- 		_frozensets = {}
- 
- 		def __init__(self, pkg, enabled_flags):
- 			self._pkg = pkg
- 			self._expand = None
- 			self._expand_hidden = None
- 			self._force = None
- 			self._mask = None
- 			if eapi_has_use_aliases(pkg.eapi):
- 				for enabled_flag in enabled_flags:
- 					enabled_flags.extend(pkg.iuse.alias_mapping.get(enabled_flag, []))
- 			self.enabled = frozenset(enabled_flags)
- 			if pkg.built:
- 				# Use IUSE to validate USE settings for built packages,
- 				# in case the package manager that built this package
- 				# failed to do that for some reason (or in case of
- 				# data corruption).
- 				missing_iuse = pkg.iuse.get_missing_iuse(self.enabled)
- 				if missing_iuse:
- 					self.enabled = self.enabled.difference(missing_iuse)
- 
- 		def _init_force_mask(self):
- 			pkgsettings = self._pkg._get_pkgsettings()
- 			frozensets = self._frozensets
- 			s = frozenset(
- 				pkgsettings.get("USE_EXPAND", "").lower().split())
- 			self._expand = frozensets.setdefault(s, s)
- 			s = frozenset(
- 				pkgsettings.get("USE_EXPAND_HIDDEN", "").lower().split())
- 			self._expand_hidden = frozensets.setdefault(s, s)
- 			s = pkgsettings.useforce
- 			self._force = frozensets.setdefault(s, s)
- 			s = pkgsettings.usemask
- 			self._mask = frozensets.setdefault(s, s)
- 
- 		@property
- 		def expand(self):
- 			if self._expand is None:
- 				self._init_force_mask()
- 			return self._expand
- 
- 		@property
- 		def expand_hidden(self):
- 			if self._expand_hidden is None:
- 				self._init_force_mask()
- 			return self._expand_hidden
- 
- 		@property
- 		def force(self):
- 			if self._force is None:
- 				self._init_force_mask()
- 			return self._force
- 
- 		@property
- 		def mask(self):
- 			if self._mask is None:
- 				self._init_force_mask()
- 			return self._mask
- 
- 	@property
- 	def repo(self):
- 		return self._metadata['repository']
- 
- 	@property
- 	def repo_priority(self):
- 		repo_info = self.root_config.settings.repositories.prepos.get(self.repo)
- 		if repo_info is None:
- 			return None
- 		return repo_info.priority
- 
- 	@property
- 	def use(self):
- 		if self._use is None:
- 			self._init_use()
- 		return self._use
- 
- 	def _get_pkgsettings(self):
- 		pkgsettings = self.root_config.trees[
- 			'porttree'].dbapi.doebuild_settings
- 		pkgsettings.setcpv(self)
- 		return pkgsettings
- 
- 	def _init_use(self):
- 		if self.built:
- 			# Use IUSE to validate USE settings for built packages,
- 			# in case the package manager that built this package
- 			# failed to do that for some reason (or in case of
- 			# data corruption). The enabled flags must be consistent
- 			# with implicit IUSE, in order to avoid potential
- 			# inconsistencies in USE dep matching (see bug #453400).
- 			use_str = self._metadata['USE']
- 			is_valid_flag = self.iuse.is_valid_flag
- 			enabled_flags = [x for x in use_str.split() if is_valid_flag(x)]
- 			use_str = " ".join(enabled_flags)
- 			self._use = self._use_class(
- 				self, enabled_flags)
- 		else:
- 			try:
- 				use_str = _PackageMetadataWrapperBase.__getitem__(
- 					self._metadata, 'USE')
- 			except KeyError:
- 				use_str = None
- 			calculated_use = False
- 			if not use_str:
- 				use_str = self._get_pkgsettings()["PORTAGE_USE"]
- 				calculated_use = True
- 			self._use = self._use_class(
- 				self, use_str.split())
- 			# Initialize these now, since USE access has just triggered
- 			# setcpv, and we want to cache the result of the force/mask
- 			# calculations that were done.
- 			if calculated_use:
- 				self._use._init_force_mask()
- 
- 		_PackageMetadataWrapperBase.__setitem__(
- 			self._metadata, 'USE', use_str)
- 
- 		return use_str
- 
- 	class _iuse:
- 
- 		__slots__ = ("__weakref__", "_iuse_implicit_match", "_pkg", "alias_mapping",
- 			"all", "all_aliases", "enabled", "disabled", "tokens")
- 
- 		def __init__(self, pkg, tokens, iuse_implicit_match, aliases, eapi):
- 			self._pkg = pkg
- 			self.tokens = tuple(tokens)
- 			self._iuse_implicit_match = iuse_implicit_match
- 			enabled = []
- 			disabled = []
- 			other = []
- 			enabled_aliases = []
- 			disabled_aliases = []
- 			other_aliases = []
- 			aliases_supported = eapi_has_use_aliases(eapi)
- 			self.alias_mapping = {}
- 			for x in tokens:
- 				prefix = x[:1]
- 				if prefix == "+":
- 					enabled.append(x[1:])
- 					if aliases_supported:
- 						self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
- 						enabled_aliases.extend(self.alias_mapping[x[1:]])
- 				elif prefix == "-":
- 					disabled.append(x[1:])
- 					if aliases_supported:
- 						self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
- 						disabled_aliases.extend(self.alias_mapping[x[1:]])
- 				else:
- 					other.append(x)
- 					if aliases_supported:
- 						self.alias_mapping[x] = aliases.get(x, [])
- 						other_aliases.extend(self.alias_mapping[x])
- 			self.enabled = frozenset(chain(enabled, enabled_aliases))
- 			self.disabled = frozenset(chain(disabled, disabled_aliases))
- 			self.all = frozenset(chain(enabled, disabled, other))
- 			self.all_aliases = frozenset(chain(enabled_aliases, disabled_aliases, other_aliases))
- 
- 		def is_valid_flag(self, flags):
- 			"""
- 			@return: True if all flags are valid USE values which may
- 				be specified in USE dependencies, False otherwise.
- 			"""
- 			if isinstance(flags, str):
- 				flags = [flags]
- 
- 			for flag in flags:
- 				if not flag in self.all and not flag in self.all_aliases and \
- 					not self._iuse_implicit_match(flag):
- 					return False
- 			return True
- 
- 		def get_missing_iuse(self, flags):
- 			"""
- 			@return: A list of flags missing from IUSE.
- 			"""
- 			if isinstance(flags, str):
- 				flags = [flags]
- 			missing_iuse = []
- 			for flag in flags:
- 				if not flag in self.all and not flag in self.all_aliases and \
- 					not self._iuse_implicit_match(flag):
- 					missing_iuse.append(flag)
- 			return missing_iuse
- 
- 		def get_real_flag(self, flag):
- 			"""
- 			Returns the flag's name within the scope of this package
- 			(accounting for aliases), or None if the flag is unknown.
- 			"""
- 			if flag in self.all:
- 				return flag
- 
- 			if flag in self.all_aliases:
- 				for k, v in self.alias_mapping.items():
- 					if flag in v:
- 						return k
- 
- 			if self._iuse_implicit_match(flag):
- 				return flag
- 
- 			return None
- 
- 	def __len__(self):
- 		return 4
- 
- 	def __iter__(self):
- 		"""
- 		This is used to generate mtimedb resume mergelist entries, so we
- 		limit it to 4 items for backward compatibility.
- 		"""
- 		return iter(self._hash_key[:4])
- 
- 	def __lt__(self, other):
- 		if other.cp != self.cp:
- 			return self.cp < other.cp
- 		result = portage.vercmp(self.version, other.version)
- 		if result < 0:
- 			return True
- 		if result == 0 and self.built and other.built:
- 			return self.build_time < other.build_time
- 		return False
- 
- 	def __le__(self, other):
- 		if other.cp != self.cp:
- 			return self.cp <= other.cp
- 		result = portage.vercmp(self.version, other.version)
- 		if result <= 0:
- 			return True
- 		if result == 0 and self.built and other.built:
- 			return self.build_time <= other.build_time
- 		return False
- 
- 	def __gt__(self, other):
- 		if other.cp != self.cp:
- 			return self.cp > other.cp
- 		result = portage.vercmp(self.version, other.version)
- 		if result > 0:
- 			return True
- 		if result == 0 and self.built and other.built:
- 			return self.build_time > other.build_time
- 		return False
- 
- 	def __ge__(self, other):
- 		if other.cp != self.cp:
- 			return self.cp >= other.cp
- 		result = portage.vercmp(self.version, other.version)
- 		if result >= 0:
- 			return True
- 		if result == 0 and self.built and other.built:
- 			return self.build_time >= other.build_time
- 		return False
- 
- 	def with_use(self, use):
- 		"""
- 		Return an Package instance with the specified USE flags. The
- 		current instance may be returned if it has identical USE flags.
- 		@param use: a set of USE flags
- 		@type use: frozenset
- 		@return: A package with the specified USE flags
- 		@rtype: Package
- 		"""
- 		if use is not self.use.enabled:
- 			pkg = self.copy()
- 			pkg._metadata["USE"] = " ".join(use)
- 		else:
- 			pkg = self
- 		return pkg
- 
- _all_metadata_keys = set(x for x in portage.auxdbkeys \
- 	if not x.startswith("UNUSED_"))
+     __hash__ = Task.__hash__
+     __slots__ = (
+         "built",
+         "cpv",
+         "depth",
+         "installed",
+         "onlydeps",
+         "operation",
+         "root_config",
+         "type_name",
+         "category",
+         "counter",
+         "cp",
+         "cpv_split",
+         "inherited",
+         "iuse",
+         "mtime",
+         "pf",
+         "root",
+         "slot",
+         "sub_slot",
+         "slot_atom",
+         "version",
+     ) + (
+         "_invalid",
+         "_masks",
+         "_metadata",
+         "_provided_cps",
+         "_raw_metadata",
+         "_provides",
+         "_requires",
+         "_use",
+         "_validated_atoms",
+         "_visible",
+     )
+ 
+     metadata_keys = [
+         "BDEPEND",
+         "BUILD_ID",
+         "BUILD_TIME",
+         "CHOST",
+         "COUNTER",
+         "DEFINED_PHASES",
+         "DEPEND",
+         "EAPI",
+         "IDEPEND",
+         "INHERITED",
+         "IUSE",
+         "KEYWORDS",
+         "LICENSE",
+         "MD5",
+         "PDEPEND",
+         "PROVIDES",
+         "RDEPEND",
+         "repository",
+         "REQUIRED_USE",
+         "PROPERTIES",
+         "REQUIRES",
+         "RESTRICT",
+         "SIZE",
+         "SLOT",
+         "USE",
+         "_mtime_",
++        # PREFIX LOCAL
++        "EPREFIX",
+     ]
+ 
+     _dep_keys = ("BDEPEND", "DEPEND", "IDEPEND", "PDEPEND", "RDEPEND")
+     _buildtime_keys = ("BDEPEND", "DEPEND")
+     _runtime_keys = ("IDEPEND", "PDEPEND", "RDEPEND")
+     _use_conditional_misc_keys = ("LICENSE", "PROPERTIES", "RESTRICT")
+     UNKNOWN_REPO = _unknown_repo
+ 
+     def __init__(self, **kwargs):
+         metadata = _PackageMetadataWrapperBase(kwargs.pop("metadata"))
+         Task.__init__(self, **kwargs)
+         # the SlotObject constructor assigns self.root_config from keyword args
+         # and it is an instance of the '_emerge.RootConfig.RootConfig' class
+         self.root = self.root_config.root
+         self._raw_metadata = metadata
+         self._metadata = _PackageMetadataWrapper(self, metadata)
+         if not self.built:
+             self._metadata["CHOST"] = self.root_config.settings.get("CHOST", "")
+         eapi_attrs = _get_eapi_attrs(self.eapi)
+ 
+         try:
+             db = self.cpv._db
+         except AttributeError:
+             if self.built:
+                 # For independence from the source ebuild repository and
+                 # profile implicit IUSE state, require the _db attribute
+                 # for built packages.
+                 raise
+             db = self.root_config.trees["porttree"].dbapi
+ 
+         self.cpv = _pkg_str(
+             self.cpv, metadata=self._metadata, settings=self.root_config.settings, db=db
+         )
+         if hasattr(self.cpv, "slot_invalid"):
+             self._invalid_metadata(
+                 "SLOT.invalid", "SLOT: invalid value: '%s'" % self._metadata["SLOT"]
+             )
+         self.cpv_split = self.cpv.cpv_split
+         self.category, self.pf = portage.catsplit(self.cpv)
+         self.cp = self.cpv.cp
+         self.version = self.cpv.version
+         self.slot = self.cpv.slot
+         self.sub_slot = self.cpv.sub_slot
+         self.slot_atom = Atom("%s%s%s" % (self.cp, _slot_separator, self.slot))
+         # sync metadata with validated repo (may be UNKNOWN_REPO)
+         self._metadata["repository"] = self.cpv.repo
+ 
+         if self.root_config.settings.local_config:
+             implicit_match = db._iuse_implicit_cnstr(self.cpv, self._metadata)
+         else:
+             implicit_match = db._repoman_iuse_implicit_cnstr(self.cpv, self._metadata)
+         usealiases = self.root_config.settings._use_manager.getUseAliases(self)
+         self.iuse = self._iuse(
+             self, self._metadata["IUSE"].split(), implicit_match, usealiases, self.eapi
+         )
+ 
+         if (self.iuse.enabled or self.iuse.disabled) and not eapi_attrs.iuse_defaults:
+             if not self.installed:
+                 self._invalid_metadata(
+                     "EAPI.incompatible",
+                     "IUSE contains defaults, but EAPI doesn't allow them",
+                 )
+         if self.inherited is None:
+             self.inherited = frozenset()
+ 
+         if self.operation is None:
+             if self.onlydeps or self.installed:
+                 self.operation = "nomerge"
+             else:
+                 self.operation = "merge"
+ 
+         self._hash_key = Package._gen_hash_key(
+             cpv=self.cpv,
+             installed=self.installed,
+             onlydeps=self.onlydeps,
+             operation=self.operation,
+             repo_name=self.cpv.repo,
+             root_config=self.root_config,
+             type_name=self.type_name,
+         )
+         self._hash_value = hash(self._hash_key)
+ 
+     @property
+     def eapi(self):
+         return self._metadata["EAPI"]
+ 
+     @property
+     def build_id(self):
+         return self.cpv.build_id
+ 
+     @property
+     def build_time(self):
+         if not self.built:
+             raise AttributeError("build_time")
+         return self.cpv.build_time
+ 
+     @property
+     def defined_phases(self):
+         return self._metadata.defined_phases
+ 
+     @property
+     def properties(self):
+         return self._metadata.properties
+ 
+     @property
+     def provided_cps(self):
+         return (self.cp,)
+ 
+     @property
+     def restrict(self):
+         return self._metadata.restrict
+ 
+     @property
+     def metadata(self):
+         warnings.warn(
+             "_emerge.Package.Package.metadata is deprecated",
+             DeprecationWarning,
+             stacklevel=3,
+         )
+         return self._metadata
+ 
+     # These are calculated on-demand, so that they are calculated
+     # after FakeVartree applies its metadata tweaks.
+     @property
+     def invalid(self):
+         if self._invalid is None:
+             self._validate_deps()
+             if self._invalid is None:
+                 self._invalid = False
+         return self._invalid
+ 
+     @property
+     def masks(self):
+         if self._masks is None:
+             self._masks = self._eval_masks()
+         return self._masks
+ 
+     @property
+     def visible(self):
+         if self._visible is None:
+             self._visible = self._eval_visiblity(self.masks)
+         return self._visible
+ 
+     @property
+     def validated_atoms(self):
+         """
+         Returns *all* validated atoms from the deps, regardless
+         of USE conditionals, with USE conditionals inside
+         atoms left unevaluated.
+         """
+         if self._validated_atoms is None:
+             self._validate_deps()
+         return self._validated_atoms
+ 
+     @property
+     def stable(self):
+         return self.cpv.stable
+ 
+     @property
+     def provides(self):
+         self.invalid
+         return self._provides
+ 
+     @property
+     def requires(self):
+         self.invalid
+         return self._requires
+ 
+     @classmethod
+     def _gen_hash_key(
+         cls,
+         cpv=None,
+         installed=None,
+         onlydeps=None,
+         operation=None,
+         repo_name=None,
+         root_config=None,
+         type_name=None,
+         **kwargs
+     ):
+ 
+         if operation is None:
+             if installed or onlydeps:
+                 operation = "nomerge"
+             else:
+                 operation = "merge"
+ 
+         root = None
+         if root_config is not None:
+             root = root_config.root
+         else:
+             raise TypeError("root_config argument is required")
+ 
+         elements = [type_name, root, str(cpv), operation]
+ 
+         # For installed (and binary) packages we don't care for the repo
+         # when it comes to hashing, because there can only be one cpv.
+         # So overwrite the repo_key with type_name.
+         if type_name is None:
+             raise TypeError("type_name argument is required")
+         elif type_name == "ebuild":
+             if repo_name is None:
+                 raise AssertionError(
+                     "Package._gen_hash_key() " + "called without 'repo_name' argument"
+                 )
+             elements.append(repo_name)
+         elif type_name == "binary":
+             # Including a variety of fingerprints in the hash makes
+             # it possible to simultaneously consider multiple similar
+             # packages. Note that digests are not included here, since
+             # they are relatively expensive to compute, and they may
+             # not necessarily be available.
+             elements.extend([cpv.build_id, cpv.file_size, cpv.build_time, cpv.mtime])
+         else:
+             # For installed (and binary) packages we don't care for the repo
+             # when it comes to hashing, because there can only be one cpv.
+             # So overwrite the repo_key with type_name.
+             elements.append(type_name)
+ 
+         return tuple(elements)
+ 
+     def _validate_deps(self):
+         """
+         Validate deps. This does not trigger USE calculation since that
+         is expensive for ebuilds and therefore we want to avoid doing
+         it unnecessarily (like for masked packages).
+         """
+         eapi = self.eapi
+         dep_eapi = eapi
+         dep_valid_flag = self.iuse.is_valid_flag
+         if self.installed:
+             # Ignore EAPI.incompatible and conditionals missing
+             # from IUSE for installed packages since these issues
+             # aren't relevant now (re-evaluate when new EAPIs are
+             # deployed).
+             dep_eapi = None
+             dep_valid_flag = None
+ 
+         validated_atoms = []
+         for k in self._dep_keys:
+             v = self._metadata.get(k)
+             if not v:
+                 continue
+             try:
+                 atoms = use_reduce(
+                     v,
+                     eapi=dep_eapi,
+                     matchall=True,
+                     is_valid_flag=dep_valid_flag,
+                     token_class=Atom,
+                     flat=True,
+                 )
+             except InvalidDependString as e:
+                 self._metadata_exception(k, e)
+             else:
+                 validated_atoms.extend(atoms)
+                 if not self.built:
+                     for atom in atoms:
+                         if not isinstance(atom, Atom):
+                             continue
+                         if atom.slot_operator_built:
+                             e = InvalidDependString(
+                                 _(
+                                     "Improper context for slot-operator "
+                                     '"built" atom syntax: %s'
+                                 )
+                                 % (atom.unevaluated_atom,)
+                             )
+                             self._metadata_exception(k, e)
+ 
+         self._validated_atoms = tuple(
+             set(atom for atom in validated_atoms if isinstance(atom, Atom))
+         )
+ 
+         for k in self._use_conditional_misc_keys:
+             v = self._metadata.get(k)
+             if not v:
+                 continue
+             try:
+                 use_reduce(
+                     v, eapi=dep_eapi, matchall=True, is_valid_flag=dep_valid_flag
+                 )
+             except InvalidDependString as e:
+                 self._metadata_exception(k, e)
+ 
+         k = "REQUIRED_USE"
+         v = self._metadata.get(k)
+         if v and not self.built:
+             if not _get_eapi_attrs(eapi).required_use:
+                 self._invalid_metadata(
+                     "EAPI.incompatible",
+                     "REQUIRED_USE set, but EAPI='%s' doesn't allow it" % eapi,
+                 )
+             else:
+                 try:
+                     check_required_use(v, (), self.iuse.is_valid_flag, eapi=eapi)
+                 except InvalidDependString as e:
+                     self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
+ 
+         k = "SRC_URI"
+         v = self._metadata.get(k)
+         if v:
+             try:
+                 use_reduce(
+                     v,
+                     is_src_uri=True,
+                     eapi=eapi,
+                     matchall=True,
+                     is_valid_flag=self.iuse.is_valid_flag,
+                 )
+             except InvalidDependString as e:
+                 if not self.installed:
+                     self._metadata_exception(k, e)
+ 
+         if self.built:
+             k = "PROVIDES"
+             try:
+                 self._provides = frozenset(parse_soname_deps(self._metadata[k]))
+             except InvalidData as e:
+                 self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
+ 
+             k = "REQUIRES"
+             try:
+                 self._requires = frozenset(parse_soname_deps(self._metadata[k]))
+             except InvalidData as e:
+                 self._invalid_metadata(k + ".syntax", "%s: %s" % (k, e))
+ 
+     def copy(self):
+         return Package(
+             built=self.built,
+             cpv=self.cpv,
+             depth=self.depth,
+             installed=self.installed,
+             metadata=self._raw_metadata,
+             onlydeps=self.onlydeps,
+             operation=self.operation,
+             root_config=self.root_config,
+             type_name=self.type_name,
+         )
+ 
+     def _eval_masks(self):
+         masks = {}
+         settings = self.root_config.settings
+ 
+         if self.invalid is not False:
+             masks["invalid"] = self.invalid
+ 
+         if not settings._accept_chost(self.cpv, self._metadata):
+             masks["CHOST"] = self._metadata["CHOST"]
+ 
+         eapi = self.eapi
+         if not portage.eapi_is_supported(eapi):
+             masks["EAPI.unsupported"] = eapi
+         if portage._eapi_is_deprecated(eapi):
+             masks["EAPI.deprecated"] = eapi
+ 
+         missing_keywords = settings._getMissingKeywords(self.cpv, self._metadata)
+         if missing_keywords:
+             masks["KEYWORDS"] = missing_keywords
+ 
+         try:
+             missing_properties = settings._getMissingProperties(
+                 self.cpv, self._metadata
+             )
+             if missing_properties:
+                 masks["PROPERTIES"] = missing_properties
+         except InvalidDependString:
+             # already recorded as 'invalid'
+             pass
+ 
+         try:
+             missing_restricts = settings._getMissingRestrict(self.cpv, self._metadata)
+             if missing_restricts:
+                 masks["RESTRICT"] = missing_restricts
+         except InvalidDependString:
+             # already recorded as 'invalid'
+             pass
+ 
+         mask_atom = settings._getMaskAtom(self.cpv, self._metadata)
+         if mask_atom is not None:
+             masks["package.mask"] = mask_atom
+ 
+         try:
+             missing_licenses = settings._getMissingLicenses(self.cpv, self._metadata)
+             if missing_licenses:
+                 masks["LICENSE"] = missing_licenses
+         except InvalidDependString:
+             # already recorded as 'invalid'
+             pass
+ 
+         if not masks:
+             masks = False
+ 
+         return masks
+ 
+     def _eval_visiblity(self, masks):
+ 
+         if masks is not False:
+ 
+             if "EAPI.unsupported" in masks:
+                 return False
+ 
+             if "invalid" in masks:
+                 return False
+ 
+             if not self.installed and (
+                 "CHOST" in masks
+                 or "EAPI.deprecated" in masks
+                 or "KEYWORDS" in masks
+                 or "PROPERTIES" in masks
+                 or "RESTRICT" in masks
+             ):
+                 return False
+ 
+             if "package.mask" in masks or "LICENSE" in masks:
+                 return False
+ 
+         return True
+ 
+     def get_keyword_mask(self):
+         """returns None, 'missing', or 'unstable'."""
+ 
+         missing = self.root_config.settings._getRawMissingKeywords(
+             self.cpv, self._metadata
+         )
+ 
+         if not missing:
+             return None
+ 
+         if "**" in missing:
+             return "missing"
+ 
+         global_accept_keywords = frozenset(
+             self.root_config.settings.get("ACCEPT_KEYWORDS", "").split()
+         )
+ 
+         for keyword in missing:
+             if keyword.lstrip("~") in global_accept_keywords:
+                 return "unstable"
+ 
+         return "missing"
+ 
+     def isHardMasked(self):
+         """Returns True if the cpv is matched by the expanded
+         pmaskdict[cp] (package.mask) entries."""
+         pmask = self.root_config.settings._getRawMaskAtom(self.cpv, self._metadata)
+         return pmask is not None
+ 
+     def _metadata_exception(self, k, e):
+ 
+         if k.endswith("DEPEND"):
+             qacat = "dependency.syntax"
+         else:
+             qacat = k + ".syntax"
+ 
+         if not self.installed:
+             categorized_error = False
+             if e.errors:
+                 for error in e.errors:
+                     if getattr(error, "category", None) is None:
+                         continue
+                     categorized_error = True
+                     self._invalid_metadata(error.category, "%s: %s" % (k, error))
+ 
+             if not categorized_error:
+                 self._invalid_metadata(qacat, "%s: %s" % (k, e))
+         else:
+             # For installed packages, show the path of the file
+             # containing the invalid metadata, since the user may
+             # want to fix the deps by hand.
+             vardb = self.root_config.trees["vartree"].dbapi
+             path = vardb.getpath(self.cpv, filename=k)
+             self._invalid_metadata(qacat, "%s: %s in '%s'" % (k, e, path))
+ 
+     def _invalid_metadata(self, msg_type, msg):
+         if self._invalid is None:
+             self._invalid = {}
+         msgs = self._invalid.get(msg_type)
+         if msgs is None:
+             msgs = []
+             self._invalid[msg_type] = msgs
+         msgs.append(msg)
+ 
+     def __str__(self):
+         if self.operation == "merge":
+             if self.type_name == "binary":
+                 cpv_color = "PKG_BINARY_MERGE"
+             else:
+                 cpv_color = "PKG_MERGE"
+         elif self.operation == "uninstall":
+             cpv_color = "PKG_UNINSTALL"
+         else:
+             cpv_color = "PKG_NOMERGE"
+ 
+         build_id_str = ""
+         if isinstance(self.cpv.build_id, int) and self.cpv.build_id > 0:
+             build_id_str = "-%s" % self.cpv.build_id
+ 
+         s = "(%s, %s" % (
+             portage.output.colorize(
+                 cpv_color,
+                 self.cpv
+                 + build_id_str
+                 + _slot_separator
+                 + self.slot
+                 + "/"
+                 + self.sub_slot
+                 + _repo_separator
+                 + self.repo,
+             ),
+             self.type_name,
+         )
+ 
+         if self.type_name == "installed":
+             if self.root_config.settings["ROOT"] != "/":
+                 s += " in '%s'" % self.root_config.settings["ROOT"]
+             if self.operation == "uninstall":
+                 s += " scheduled for uninstall"
+         else:
+             if self.operation == "merge":
+                 s += " scheduled for merge"
+                 if self.root_config.settings["ROOT"] != "/":
+                     s += " to '%s'" % self.root_config.settings["ROOT"]
+         s += ")"
+         return s
+ 
+     class _use_class:
+ 
+         __slots__ = ("enabled", "_expand", "_expand_hidden", "_force", "_pkg", "_mask")
+ 
+         # Share identical frozenset instances when available.
+         _frozensets = {}
+ 
+         def __init__(self, pkg, enabled_flags):
+             self._pkg = pkg
+             self._expand = None
+             self._expand_hidden = None
+             self._force = None
+             self._mask = None
+             if eapi_has_use_aliases(pkg.eapi):
+                 for enabled_flag in enabled_flags:
+                     enabled_flags.extend(pkg.iuse.alias_mapping.get(enabled_flag, []))
+             self.enabled = frozenset(enabled_flags)
+             if pkg.built:
+                 # Use IUSE to validate USE settings for built packages,
+                 # in case the package manager that built this package
+                 # failed to do that for some reason (or in case of
+                 # data corruption).
+                 missing_iuse = pkg.iuse.get_missing_iuse(self.enabled)
+                 if missing_iuse:
+                     self.enabled = self.enabled.difference(missing_iuse)
+ 
+         def _init_force_mask(self):
+             pkgsettings = self._pkg._get_pkgsettings()
+             frozensets = self._frozensets
+             s = frozenset(pkgsettings.get("USE_EXPAND", "").lower().split())
+             self._expand = frozensets.setdefault(s, s)
+             s = frozenset(pkgsettings.get("USE_EXPAND_HIDDEN", "").lower().split())
+             self._expand_hidden = frozensets.setdefault(s, s)
+             s = pkgsettings.useforce
+             self._force = frozensets.setdefault(s, s)
+             s = pkgsettings.usemask
+             self._mask = frozensets.setdefault(s, s)
+ 
+         @property
+         def expand(self):
+             if self._expand is None:
+                 self._init_force_mask()
+             return self._expand
+ 
+         @property
+         def expand_hidden(self):
+             if self._expand_hidden is None:
+                 self._init_force_mask()
+             return self._expand_hidden
+ 
+         @property
+         def force(self):
+             if self._force is None:
+                 self._init_force_mask()
+             return self._force
+ 
+         @property
+         def mask(self):
+             if self._mask is None:
+                 self._init_force_mask()
+             return self._mask
+ 
+     @property
+     def repo(self):
+         return self._metadata["repository"]
+ 
+     @property
+     def repo_priority(self):
+         repo_info = self.root_config.settings.repositories.prepos.get(self.repo)
+         if repo_info is None:
+             return None
+         return repo_info.priority
+ 
+     @property
+     def use(self):
+         if self._use is None:
+             self._init_use()
+         return self._use
+ 
+     def _get_pkgsettings(self):
+         pkgsettings = self.root_config.trees["porttree"].dbapi.doebuild_settings
+         pkgsettings.setcpv(self)
+         return pkgsettings
+ 
+     def _init_use(self):
+         if self.built:
+             # Use IUSE to validate USE settings for built packages,
+             # in case the package manager that built this package
+             # failed to do that for some reason (or in case of
+             # data corruption). The enabled flags must be consistent
+             # with implicit IUSE, in order to avoid potential
+             # inconsistencies in USE dep matching (see bug #453400).
+             use_str = self._metadata["USE"]
+             is_valid_flag = self.iuse.is_valid_flag
+             enabled_flags = [x for x in use_str.split() if is_valid_flag(x)]
+             use_str = " ".join(enabled_flags)
+             self._use = self._use_class(self, enabled_flags)
+         else:
+             try:
+                 use_str = _PackageMetadataWrapperBase.__getitem__(self._metadata, "USE")
+             except KeyError:
+                 use_str = None
+             calculated_use = False
+             if not use_str:
+                 use_str = self._get_pkgsettings()["PORTAGE_USE"]
+                 calculated_use = True
+             self._use = self._use_class(self, use_str.split())
+             # Initialize these now, since USE access has just triggered
+             # setcpv, and we want to cache the result of the force/mask
+             # calculations that were done.
+             if calculated_use:
+                 self._use._init_force_mask()
+ 
+         _PackageMetadataWrapperBase.__setitem__(self._metadata, "USE", use_str)
+ 
+         return use_str
+ 
+     class _iuse:
+ 
+         __slots__ = (
+             "__weakref__",
+             "_iuse_implicit_match",
+             "_pkg",
+             "alias_mapping",
+             "all",
+             "all_aliases",
+             "enabled",
+             "disabled",
+             "tokens",
+         )
+ 
+         def __init__(self, pkg, tokens, iuse_implicit_match, aliases, eapi):
+             self._pkg = pkg
+             self.tokens = tuple(tokens)
+             self._iuse_implicit_match = iuse_implicit_match
+             enabled = []
+             disabled = []
+             other = []
+             enabled_aliases = []
+             disabled_aliases = []
+             other_aliases = []
+             aliases_supported = eapi_has_use_aliases(eapi)
+             self.alias_mapping = {}
+             for x in tokens:
+                 prefix = x[:1]
+                 if prefix == "+":
+                     enabled.append(x[1:])
+                     if aliases_supported:
+                         self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
+                         enabled_aliases.extend(self.alias_mapping[x[1:]])
+                 elif prefix == "-":
+                     disabled.append(x[1:])
+                     if aliases_supported:
+                         self.alias_mapping[x[1:]] = aliases.get(x[1:], [])
+                         disabled_aliases.extend(self.alias_mapping[x[1:]])
+                 else:
+                     other.append(x)
+                     if aliases_supported:
+                         self.alias_mapping[x] = aliases.get(x, [])
+                         other_aliases.extend(self.alias_mapping[x])
+             self.enabled = frozenset(chain(enabled, enabled_aliases))
+             self.disabled = frozenset(chain(disabled, disabled_aliases))
+             self.all = frozenset(chain(enabled, disabled, other))
+             self.all_aliases = frozenset(
+                 chain(enabled_aliases, disabled_aliases, other_aliases)
+             )
+ 
+         def is_valid_flag(self, flags):
+             """
+             @return: True if all flags are valid USE values which may
+                     be specified in USE dependencies, False otherwise.
+             """
+             if isinstance(flags, str):
+                 flags = [flags]
+ 
+             for flag in flags:
+                 if (
+                     not flag in self.all
+                     and not flag in self.all_aliases
+                     and not self._iuse_implicit_match(flag)
+                 ):
+                     return False
+             return True
+ 
+         def get_missing_iuse(self, flags):
+             """
+             @return: A list of flags missing from IUSE.
+             """
+             if isinstance(flags, str):
+                 flags = [flags]
+             missing_iuse = []
+             for flag in flags:
+                 if (
+                     not flag in self.all
+                     and not flag in self.all_aliases
+                     and not self._iuse_implicit_match(flag)
+                 ):
+                     missing_iuse.append(flag)
+             return missing_iuse
+ 
+         def get_real_flag(self, flag):
+             """
+             Returns the flag's name within the scope of this package
+             (accounting for aliases), or None if the flag is unknown.
+             """
+             if flag in self.all:
+                 return flag
+ 
+             if flag in self.all_aliases:
+                 for k, v in self.alias_mapping.items():
+                     if flag in v:
+                         return k
+ 
+             if self._iuse_implicit_match(flag):
+                 return flag
+ 
+             return None
+ 
+     def __len__(self):
+         return 4
+ 
+     def __iter__(self):
+         """
+         This is used to generate mtimedb resume mergelist entries, so we
+         limit it to 4 items for backward compatibility.
+         """
+         return iter(self._hash_key[:4])
+ 
+     def __lt__(self, other):
+         if other.cp != self.cp:
+             return self.cp < other.cp
+         result = portage.vercmp(self.version, other.version)
+         if result < 0:
+             return True
+         if result == 0 and self.built and other.built:
+             return self.build_time < other.build_time
+         return False
+ 
+     def __le__(self, other):
+         if other.cp != self.cp:
+             return self.cp <= other.cp
+         result = portage.vercmp(self.version, other.version)
+         if result <= 0:
+             return True
+         if result == 0 and self.built and other.built:
+             return self.build_time <= other.build_time
+         return False
+ 
+     def __gt__(self, other):
+         if other.cp != self.cp:
+             return self.cp > other.cp
+         result = portage.vercmp(self.version, other.version)
+         if result > 0:
+             return True
+         if result == 0 and self.built and other.built:
+             return self.build_time > other.build_time
+         return False
+ 
+     def __ge__(self, other):
+         if other.cp != self.cp:
+             return self.cp >= other.cp
+         result = portage.vercmp(self.version, other.version)
+         if result >= 0:
+             return True
+         if result == 0 and self.built and other.built:
+             return self.build_time >= other.build_time
+         return False
+ 
+     def with_use(self, use):
+         """
+         Return a Package instance with the specified USE flags. The
+         current instance may be returned if it has identical USE flags.
+         @param use: a set of USE flags
+         @type use: frozenset
+         @return: A package with the specified USE flags
+         @rtype: Package
+         """
+         if use is not self.use.enabled:
+             pkg = self.copy()
+             pkg._metadata["USE"] = " ".join(use)
+         else:
+             pkg = self
+         return pkg
+ 
+ 
+ _all_metadata_keys = set(x for x in portage.auxdbkeys)
  _all_metadata_keys.update(Package.metadata_keys)
  _all_metadata_keys = frozenset(_all_metadata_keys)
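
The rich comparison methods above order Package instances by cp first, then by
portage.vercmp() on the version, and finally by build_time for built packages. As a
rough illustration (the _pkg_cmp helper below is hypothetical, not part of this commit),
the same ordering can be expressed as a classic sort comparator:

    import functools

    import portage

    def _pkg_cmp(a, b):
        # Mirrors Package.__lt__/__gt__: cp, then version, then build_time
        # for built packages; returns <0, 0 or >0 like a classic comparator.
        if a.cp != b.cp:
            return -1 if a.cp < b.cp else 1
        result = portage.vercmp(a.version, b.version)
        if result:
            return result
        if a.built and b.built and a.build_time != b.build_time:
            return -1 if a.build_time < b.build_time else 1
        return 0

    # packages.sort(key=functools.cmp_to_key(_pkg_cmp))
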
  
diff --cc lib/_emerge/actions.py
index 7eea1c93a,515b22b66..0ed90cd71
--- a/lib/_emerge/actions.py
+++ b/lib/_emerge/actions.py
@@@ -2479,151 -2842,124 +2843,153 @@@ def getportageversion(portdir, _unused
  
  class _emerge_config(SlotObject):
  
- 	__slots__ = ('action', 'args', 'opts',
- 		'running_config', 'target_config', 'trees')
+     __slots__ = ("action", "args", "opts", "running_config", "target_config", "trees")
  
- 	# Support unpack as tuple, for load_emerge_config backward compatibility.
- 	def __iter__(self):
- 		yield self.target_config.settings
- 		yield self.trees
- 		yield self.target_config.mtimedb
+     # Support unpack as tuple, for load_emerge_config backward compatibility.
+     def __iter__(self):
+         yield self.target_config.settings
+         yield self.trees
+         yield self.target_config.mtimedb
  
- 	def __getitem__(self, index):
- 		return list(self)[index]
+     def __getitem__(self, index):
+         return list(self)[index]
  
- 	def __len__(self):
- 		return 3
+     def __len__(self):
+         return 3
  
- def load_emerge_config(emerge_config=None, env=None, **kargs):
  
- 	if emerge_config is None:
- 		emerge_config = _emerge_config(**kargs)
- 
- 	env = os.environ if env is None else env
- 	kwargs = {'env': env}
- 	for k, envvar in (("config_root", "PORTAGE_CONFIGROOT"), ("target_root", "ROOT"),
- 			("sysroot", "SYSROOT"), ("eprefix", "EPREFIX")):
- 		v = env.get(envvar)
- 		if v is not None:
- 			kwargs[k] = v
- 	emerge_config.trees = portage.create_trees(trees=emerge_config.trees,
- 				**kwargs)
- 
- 	for root_trees in emerge_config.trees.values():
- 		settings = root_trees["vartree"].settings
- 		settings._init_dirs()
- 		setconfig = load_default_config(settings, root_trees)
- 		root_config = RootConfig(settings, root_trees, setconfig)
- 		if "root_config" in root_trees:
- 			# Propagate changes to the existing instance,
- 			# which may be referenced by a depgraph.
- 			root_trees["root_config"].update(root_config)
- 		else:
- 			root_trees["root_config"] = root_config
+ def load_emerge_config(emerge_config=None, env=None, **kargs):
  
- 	target_eroot = emerge_config.trees._target_eroot
- 	emerge_config.target_config = \
- 		emerge_config.trees[target_eroot]['root_config']
- 	emerge_config.target_config.mtimedb = portage.MtimeDB(
- 		os.path.join(target_eroot, portage.CACHE_PATH, "mtimedb"))
- 	emerge_config.running_config = emerge_config.trees[
- 		emerge_config.trees._running_eroot]['root_config']
- 	QueryCommand._db = emerge_config.trees
+     if emerge_config is None:
+         emerge_config = _emerge_config(**kargs)
+ 
+     env = os.environ if env is None else env
+     kwargs = {"env": env}
+     for k, envvar in (
+         ("config_root", "PORTAGE_CONFIGROOT"),
+         ("target_root", "ROOT"),
+         ("sysroot", "SYSROOT"),
+         ("eprefix", "EPREFIX"),
+     ):
+         v = env.get(envvar)
+         if v is not None:
+             kwargs[k] = v
+     emerge_config.trees = portage.create_trees(trees=emerge_config.trees, **kwargs)
+ 
+     for root_trees in emerge_config.trees.values():
+         settings = root_trees["vartree"].settings
+         settings._init_dirs()
+         setconfig = load_default_config(settings, root_trees)
+         root_config = RootConfig(settings, root_trees, setconfig)
+         if "root_config" in root_trees:
+             # Propagate changes to the existing instance,
+             # which may be referenced by a depgraph.
+             root_trees["root_config"].update(root_config)
+         else:
+             root_trees["root_config"] = root_config
+ 
+     target_eroot = emerge_config.trees._target_eroot
+     emerge_config.target_config = emerge_config.trees[target_eroot]["root_config"]
+     emerge_config.target_config.mtimedb = portage.MtimeDB(
+         os.path.join(target_eroot, portage.CACHE_PATH, "mtimedb")
+     )
+     emerge_config.running_config = emerge_config.trees[
+         emerge_config.trees._running_eroot
+     ]["root_config"]
+     QueryCommand._db = emerge_config.trees
+ 
+     return emerge_config
  
- 	return emerge_config
  
  def getgccversion(chost=None):
- 	"""
- 	rtype: C{str}
- 	return:  the current in-use gcc version
- 	"""
+     """
+     rtype: C{str}
+     return:  the current in-use gcc version
+     """
  
- 	gcc_ver_command = ['gcc', '-dumpversion']
- 	gcc_ver_prefix = 'gcc-'
+     gcc_ver_command = ["gcc", "-dumpversion"]
+     gcc_ver_prefix = "gcc-"
  
 +    clang_ver_command = ["clang", "--version"]
 +    clang_ver_prefix = "clang-"
 +
 +    ubinpath = os.path.join("/", portage.const.EPREFIX, "usr", "bin")
 +
- 	gcc_not_found_error = red(
- 	"!!! No gcc found. You probably need to 'source /etc/profile'\n" +
- 	"!!! to update the environment of this terminal and possibly\n" +
- 	"!!! other terminals also.\n"
- 	)
+     gcc_not_found_error = red(
+         "!!! No gcc found. You probably need to 'source /etc/profile'\n"
+         + "!!! to update the environment of this terminal and possibly\n"
+         + "!!! other terminals also.\n"
+     )
  
 +    def getclangversion(output):
 +        version = re.search("clang version ([0-9.]+) ", output)
 +        if version:
 +            return version.group(1)
 +        return "unknown"
 +
- 	if chost:
- 		try:
- 			proc = subprocess.Popen(["gcc-config", "-c"],
- 				stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- 		except OSError:
- 			myoutput = None
- 			mystatus = 1
- 		else:
- 			myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- 			mystatus = proc.wait()
- 		if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
- 			return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
+     if chost:
+         try:
+             proc = subprocess.Popen(
 -                ["gcc-config", "-c"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
++                [ubinpath + "/" + "gcc-config", "-c"],
++                stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+             )
+         except OSError:
+             myoutput = None
+             mystatus = 1
+         else:
+             myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+             mystatus = proc.wait()
+         if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
+             return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
+ 
+         try:
+             proc = subprocess.Popen(
 -                [chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
++                [ubinpath + "/" + chost + "-" + gcc_ver_command[0]]
++                + gcc_ver_command[1:],
+                 stdout=subprocess.PIPE,
+                 stderr=subprocess.STDOUT,
+             )
+         except OSError:
+             myoutput = None
+             mystatus = 1
+         else:
+             myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+             mystatus = proc.wait()
+         if mystatus == os.EX_OK:
+             return gcc_ver_prefix + myoutput
  
 +        try:
 +            proc = subprocess.Popen(
- 				[chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
- 				stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- 		except OSError:
- 			myoutput = None
- 			mystatus = 1
- 		else:
- 			myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- 			mystatus = proc.wait()
- 		if mystatus == os.EX_OK:
- 			return gcc_ver_prefix + myoutput
- 
- 	try:
- 		proc = subprocess.Popen([ubinpath + "/" + gcc_ver_command[0]] + gcc_ver_command[1:],
- 			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- 	except OSError:
- 		myoutput = None
- 		mystatus = 1
- 	else:
- 		myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- 		mystatus = proc.wait()
- 	if mystatus == os.EX_OK:
- 		return gcc_ver_prefix + myoutput
- 
- 	if chost:
- 		try:
- 			proc = subprocess.Popen(
- 				[ubinpath + "/" + chost + "-" + clang_ver_command[0]] + clang_ver_command[1:],
- 				stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
++                [ubinpath + "/" + chost + "-" + clang_ver_command[0]]
++                + clang_ver_command[1:],
++                stdout=subprocess.PIPE,
++                stderr=subprocess.STDOUT,
++            )
 +        except OSError:
 +            myoutput = None
 +            mystatus = 1
 +        else:
 +            myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
 +            mystatus = proc.wait()
 +        if mystatus == os.EX_OK:
 +            return clang_ver_prefix + getclangversion(myoutput)
 +
- 	try:
- 		proc = subprocess.Popen([ubinpath + "/" + clang_ver_command[0]] + clang_ver_command[1:],
- 			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- 	except OSError:
- 		myoutput = None
- 		mystatus = 1
- 	else:
- 		myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
- 		mystatus = proc.wait()
- 	if mystatus == os.EX_OK:
- 		return clang_ver_prefix + getclangversion(myoutput)
- 
- 	portage.writemsg(gcc_not_found_error, noiselevel=-1)
- 	return "[unavailable]"
+     try:
+         proc = subprocess.Popen(
+             gcc_ver_command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+         )
+     except OSError:
+         myoutput = None
+         mystatus = 1
+     else:
+         myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+         mystatus = proc.wait()
+     if mystatus == os.EX_OK:
+         return gcc_ver_prefix + myoutput
+ 
+     portage.writemsg(gcc_not_found_error, noiselevel=-1)
+     return "[unavailable]"
+ 
  
  # Warn about features that may confuse users and
  # lead them to report invalid bugs.
diff --cc lib/_emerge/depgraph.py
index 4d578d557,f6549eba6..61be9d02b
--- a/lib/_emerge/depgraph.py
+++ b/lib/_emerge/depgraph.py
@@@ -10083,234 -11596,273 +11596,283 @@@ def _backtrack_depgraph(settings, trees
  
  
  def resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
- 	"""
- 	Raises PackageSetNotFound if myfiles contains a missing package set.
- 	"""
- 	_spinner_start(spinner, myopts)
- 	try:
- 		return _resume_depgraph(settings, trees, mtimedb, myopts,
- 			myparams, spinner)
- 	finally:
- 		_spinner_stop(spinner)
+     """
+     Raises PackageSetNotFound if myfiles contains a missing package set.
+     """
+     _spinner_start(spinner, myopts)
+     try:
+         return _resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner)
+     finally:
+         _spinner_stop(spinner)
+ 
  
  def _resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
- 	"""
- 	Construct a depgraph for the given resume list. This will raise
- 	PackageNotFound or depgraph.UnsatisfiedResumeDep when necessary.
- 	TODO: Return reasons for dropped_tasks, for display/logging.
- 	@rtype: tuple
- 	@return: (success, depgraph, dropped_tasks)
- 	"""
- 	skip_masked = True
- 	skip_unsatisfied = True
- 	mergelist = mtimedb["resume"]["mergelist"]
- 	dropped_tasks = {}
- 	frozen_config = _frozen_depgraph_config(settings, trees,
- 		myopts, myparams, spinner)
- 	while True:
- 		mydepgraph = depgraph(settings, trees,
- 			myopts, myparams, spinner, frozen_config=frozen_config)
- 		try:
- 			success = mydepgraph._loadResumeCommand(mtimedb["resume"],
- 				skip_masked=skip_masked)
- 		except depgraph.UnsatisfiedResumeDep as e:
- 			if not skip_unsatisfied:
- 				raise
- 
- 			graph = mydepgraph._dynamic_config.digraph
- 			unsatisfied_parents = {}
- 			traversed_nodes = set()
- 			unsatisfied_stack = [(dep.parent, dep.atom) for dep in e.value]
- 			while unsatisfied_stack:
- 				pkg, atom = unsatisfied_stack.pop()
- 				if atom is not None and \
- 					mydepgraph._select_pkg_from_installed(
- 					pkg.root, atom)[0] is not None:
- 					continue
- 				atoms = unsatisfied_parents.get(pkg)
- 				if atoms is None:
- 					atoms = []
- 					unsatisfied_parents[pkg] = atoms
- 				if atom is not None:
- 					atoms.append(atom)
- 				if pkg in traversed_nodes:
- 					continue
- 				traversed_nodes.add(pkg)
- 
- 				# If this package was pulled in by a parent
- 				# package scheduled for merge, removing this
- 				# package may cause the parent package's
- 				# dependency to become unsatisfied.
- 				for parent_node, atom in \
- 					mydepgraph._dynamic_config._parent_atoms.get(pkg, []):
- 					if not isinstance(parent_node, Package) \
- 						or parent_node.operation not in ("merge", "nomerge"):
- 						continue
- 					# We need to traverse all priorities here, in order to
- 					# ensure that a package with an unsatisfied depenedency
- 					# won't get pulled in, even indirectly via a soft
- 					# dependency.
- 					unsatisfied_stack.append((parent_node, atom))
- 
- 			unsatisfied_tuples = frozenset(tuple(parent_node)
- 				for parent_node in unsatisfied_parents
- 				if isinstance(parent_node, Package))
- 			pruned_mergelist = []
- 			for x in mergelist:
- 				if isinstance(x, list) and \
- 					tuple(x) not in unsatisfied_tuples:
- 					pruned_mergelist.append(x)
- 
- 			# If the mergelist doesn't shrink then this loop is infinite.
- 			if len(pruned_mergelist) == len(mergelist):
- 				# This happens if a package can't be dropped because
- 				# it's already installed, but it has unsatisfied PDEPEND.
- 				raise
- 			mergelist[:] = pruned_mergelist
- 
- 			# Exclude installed packages that have been removed from the graph due
- 			# to failure to build/install runtime dependencies after the dependent
- 			# package has already been installed.
- 			dropped_tasks.update((pkg, atoms) for pkg, atoms in \
- 				unsatisfied_parents.items() if pkg.operation != "nomerge")
- 
- 			del e, graph, traversed_nodes, \
- 				unsatisfied_parents, unsatisfied_stack
- 			continue
- 		else:
- 			break
- 	return (success, mydepgraph, dropped_tasks)
- 
- def get_mask_info(root_config, cpv, pkgsettings,
- 	db, pkg_type, built, installed, db_keys, myrepo = None, _pkg_use_enabled=None):
- 	try:
- 		metadata = dict(zip(db_keys,
- 			db.aux_get(cpv, db_keys, myrepo=myrepo)))
- 	except KeyError:
- 		metadata = None
- 
- 	if metadata is None:
- 		mreasons = ["corruption"]
- 	else:
- 		eapi = metadata['EAPI']
- 		if not portage.eapi_is_supported(eapi):
- 			mreasons = ['EAPI %s' % eapi]
- 		else:
- 			pkg = Package(type_name=pkg_type, root_config=root_config,
- 				cpv=cpv, built=built, installed=installed, metadata=metadata)
- 
- 			modified_use = None
- 			if _pkg_use_enabled is not None:
- 				modified_use = _pkg_use_enabled(pkg)
- 
- 			mreasons = get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=modified_use)
- 
- 	return metadata, mreasons
+     """
+     Construct a depgraph for the given resume list. This will raise
+     PackageNotFound or depgraph.UnsatisfiedResumeDep when necessary.
+     TODO: Return reasons for dropped_tasks, for display/logging.
+     @rtype: tuple
+     @return: (success, depgraph, dropped_tasks)
+     """
+     skip_masked = True
+     skip_unsatisfied = True
+     mergelist = mtimedb["resume"]["mergelist"]
+     dropped_tasks = {}
+     frozen_config = _frozen_depgraph_config(settings, trees, myopts, myparams, spinner)
+     while True:
+         mydepgraph = depgraph(
+             settings, trees, myopts, myparams, spinner, frozen_config=frozen_config
+         )
+         try:
+             success = mydepgraph._loadResumeCommand(
+                 mtimedb["resume"], skip_masked=skip_masked
+             )
+         except depgraph.UnsatisfiedResumeDep as e:
+             if not skip_unsatisfied:
+                 raise
+ 
+             graph = mydepgraph._dynamic_config.digraph
+             unsatisfied_parents = {}
+             traversed_nodes = set()
+             unsatisfied_stack = [(dep.parent, dep.atom) for dep in e.value]
+             while unsatisfied_stack:
+                 pkg, atom = unsatisfied_stack.pop()
+                 if (
+                     atom is not None
+                     and mydepgraph._select_pkg_from_installed(pkg.root, atom)[0]
+                     is not None
+                 ):
+                     continue
+                 atoms = unsatisfied_parents.get(pkg)
+                 if atoms is None:
+                     atoms = []
+                     unsatisfied_parents[pkg] = atoms
+                 if atom is not None:
+                     atoms.append(atom)
+                 if pkg in traversed_nodes:
+                     continue
+                 traversed_nodes.add(pkg)
+ 
+                 # If this package was pulled in by a parent
+                 # package scheduled for merge, removing this
+                 # package may cause the parent package's
+                 # dependency to become unsatisfied.
+                 for parent_node, atom in mydepgraph._dynamic_config._parent_atoms.get(
+                     pkg, []
+                 ):
+                     if not isinstance(
+                         parent_node, Package
+                     ) or parent_node.operation not in ("merge", "nomerge"):
+                         continue
+                     # We need to traverse all priorities here, in order to
+                     # ensure that a package with an unsatisfied dependency
+                     # won't get pulled in, even indirectly via a soft
+                     # dependency.
+                     unsatisfied_stack.append((parent_node, atom))
+ 
+             unsatisfied_tuples = frozenset(
+                 tuple(parent_node)
+                 for parent_node in unsatisfied_parents
+                 if isinstance(parent_node, Package)
+             )
+             pruned_mergelist = []
+             for x in mergelist:
+                 if isinstance(x, list) and tuple(x) not in unsatisfied_tuples:
+                     pruned_mergelist.append(x)
+ 
+             # If the mergelist doesn't shrink then this loop is infinite.
+             if len(pruned_mergelist) == len(mergelist):
+                 # This happens if a package can't be dropped because
+                 # it's already installed, but it has unsatisfied PDEPEND.
+                 raise
+             mergelist[:] = pruned_mergelist
+ 
+             # Exclude installed packages that have been removed from the graph due
+             # to failure to build/install runtime dependencies after the dependent
+             # package has already been installed.
+             dropped_tasks.update(
+                 (pkg, atoms)
+                 for pkg, atoms in unsatisfied_parents.items()
+                 if pkg.operation != "nomerge"
+             )
+ 
+             del e, graph, traversed_nodes, unsatisfied_parents, unsatisfied_stack
+             continue
+         else:
+             break
+     return (success, mydepgraph, dropped_tasks)
+ 
+ 
+ def get_mask_info(
+     root_config,
+     cpv,
+     pkgsettings,
+     db,
+     pkg_type,
+     built,
+     installed,
+     db_keys,
+     myrepo=None,
+     _pkg_use_enabled=None,
+ ):
+     try:
+         metadata = dict(zip(db_keys, db.aux_get(cpv, db_keys, myrepo=myrepo)))
+     except KeyError:
+         metadata = None
+ 
+     if metadata is None:
+         mreasons = ["corruption"]
+     else:
+         eapi = metadata["EAPI"]
+         if not portage.eapi_is_supported(eapi):
+             mreasons = ["EAPI %s" % eapi]
+         else:
+             pkg = Package(
+                 type_name=pkg_type,
+                 root_config=root_config,
+                 cpv=cpv,
+                 built=built,
+                 installed=installed,
+                 metadata=metadata,
+             )
+ 
+             modified_use = None
+             if _pkg_use_enabled is not None:
+                 modified_use = _pkg_use_enabled(pkg)
+ 
+             mreasons = get_masking_status(
+                 pkg, pkgsettings, root_config, myrepo=myrepo, use=modified_use
+             )
+ 
+     return metadata, mreasons
+ 
  
  def show_masked_packages(masked_packages):
- 	shown_licenses = set()
- 	shown_comments = set()
- 	# Maybe there is both an ebuild and a binary. Only
- 	# show one of them to avoid redundant appearance.
- 	shown_cpvs = set()
- 	have_eapi_mask = False
- 	for (root_config, pkgsettings, cpv, repo,
- 		metadata, mreasons) in masked_packages:
- 		output_cpv = cpv
- 		if repo:
- 			output_cpv += _repo_separator + repo
- 		if output_cpv in shown_cpvs:
- 			continue
- 		shown_cpvs.add(output_cpv)
- 		eapi_masked = metadata is not None and \
- 			not portage.eapi_is_supported(metadata["EAPI"])
- 		if eapi_masked:
- 			have_eapi_mask = True
- 			# When masked by EAPI, metadata is mostly useless since
- 			# it doesn't contain essential things like SLOT.
- 			metadata = None
- 		comment, filename = None, None
- 		if not eapi_masked and \
- 			"package.mask" in mreasons:
- 			comment, filename = \
- 				portage.getmaskingreason(
- 				cpv, metadata=metadata,
- 				settings=pkgsettings,
- 				portdb=root_config.trees["porttree"].dbapi,
- 				return_location=True)
- 		missing_licenses = []
- 		if not eapi_masked and metadata is not None:
- 			try:
- 				missing_licenses = \
- 					pkgsettings._getMissingLicenses(
- 						cpv, metadata)
- 			except portage.exception.InvalidDependString:
- 				# This will have already been reported
- 				# above via mreasons.
- 				pass
- 
- 		writemsg("- "+output_cpv+" (masked by: "+", ".join(mreasons)+")\n",
- 			noiselevel=-1)
- 
- 		if comment and comment not in shown_comments:
- 			writemsg(filename + ":\n" + comment + "\n",
- 				noiselevel=-1)
- 			shown_comments.add(comment)
- 		portdb = root_config.trees["porttree"].dbapi
- 		for l in missing_licenses:
- 			if l in shown_licenses:
- 				continue
- 			l_path = portdb.findLicensePath(l)
- 			if l_path is None:
- 				continue
- 			msg = ("A copy of the '%s' license" + \
- 			" is located at '%s'.\n\n") % (l, l_path)
- 			writemsg(msg, noiselevel=-1)
- 			shown_licenses.add(l)
- 	return have_eapi_mask
+     shown_licenses = set()
+     shown_comments = set()
+     # Maybe there is both an ebuild and a binary. Only
+     # show one of them to avoid redundant appearance.
+     shown_cpvs = set()
+     have_eapi_mask = False
+     for (root_config, pkgsettings, cpv, repo, metadata, mreasons) in masked_packages:
+         output_cpv = cpv
+         if repo:
+             output_cpv += _repo_separator + repo
+         if output_cpv in shown_cpvs:
+             continue
+         shown_cpvs.add(output_cpv)
+         eapi_masked = metadata is not None and not portage.eapi_is_supported(
+             metadata["EAPI"]
+         )
+         if eapi_masked:
+             have_eapi_mask = True
+             # When masked by EAPI, metadata is mostly useless since
+             # it doesn't contain essential things like SLOT.
+             metadata = None
+         comment, filename = None, None
+         if not eapi_masked and "package.mask" in mreasons:
+             comment, filename = portage.getmaskingreason(
+                 cpv,
+                 metadata=metadata,
+                 settings=pkgsettings,
+                 portdb=root_config.trees["porttree"].dbapi,
+                 return_location=True,
+             )
+         missing_licenses = []
+         if not eapi_masked and metadata is not None:
+             try:
+                 missing_licenses = pkgsettings._getMissingLicenses(cpv, metadata)
+             except portage.exception.InvalidDependString:
+                 # This will have already been reported
+                 # above via mreasons.
+                 pass
+ 
+         writemsg(
+             "- " + output_cpv + " (masked by: " + ", ".join(mreasons) + ")\n",
+             noiselevel=-1,
+         )
+ 
+         if comment and comment not in shown_comments:
+             writemsg(filename + ":\n" + comment + "\n", noiselevel=-1)
+             shown_comments.add(comment)
+         portdb = root_config.trees["porttree"].dbapi
+         for l in missing_licenses:
+             if l in shown_licenses:
+                 continue
+             l_path = portdb.findLicensePath(l)
+             if l_path is None:
+                 continue
+             msg = ("A copy of the '%s' license" + " is located at '%s'.\n\n") % (
+                 l,
+                 l_path,
+             )
+             writemsg(msg, noiselevel=-1)
+             shown_licenses.add(l)
+     return have_eapi_mask
+ 
  
  def show_mask_docs():
- 	writemsg("For more information, see the MASKED PACKAGES "
- 		"section in the emerge\n", noiselevel=-1)
- 	writemsg("man page or refer to the Gentoo Handbook.\n", noiselevel=-1)
+     writemsg(
+         "For more information, see the MASKED PACKAGES " "section in the emerge\n",
+         noiselevel=-1,
+     )
+     writemsg("man page or refer to the Gentoo Handbook.\n", noiselevel=-1)
+ 
  
  def show_blocker_docs_link():
- 	writemsg("\nFor more information about " + bad("Blocked Packages") + ", please refer to the following\n", noiselevel=-1)
- 	writemsg("section of the Gentoo Linux x86 Handbook (architecture is irrelevant):\n\n", noiselevel=-1)
- 	writemsg("https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Blocked_packages\n\n", noiselevel=-1)
+     writemsg(
+         "\nFor more information about "
+         + bad("Blocked Packages")
+         + ", please refer to the following\n",
+         noiselevel=-1,
+     )
+     writemsg(
+         "section of the Gentoo Linux x86 Handbook (architecture is irrelevant):\n\n",
+         noiselevel=-1,
+     )
+     writemsg(
+         "https://wiki.gentoo.org/wiki/Handbook:X86/Working/Portage#Blocked_packages\n\n",
+         noiselevel=-1,
+     )
+ 
  
  def get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
- 	return [mreason.message for \
- 		mreason in _get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=use)]
+     return [
+         mreason.message
+         for mreason in _get_masking_status(
+             pkg, pkgsettings, root_config, myrepo=myrepo, use=use
+         )
+     ]
+ 
  
  def _get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
- 	mreasons = _getmaskingstatus(
- 		pkg, settings=pkgsettings,
- 		portdb=root_config.trees["porttree"].dbapi, myrepo=myrepo)
+     mreasons = _getmaskingstatus(
+         pkg,
+         settings=pkgsettings,
+         portdb=root_config.trees["porttree"].dbapi,
+         myrepo=myrepo,
+     )
  
- 	if not pkg.installed:
- 		if not pkgsettings._accept_chost(pkg.cpv, pkg._metadata):
- 			mreasons.append(_MaskReason("CHOST", "CHOST: %s" % \
- 				pkg._metadata["CHOST"]))
+     if not pkg.installed:
+         if not pkgsettings._accept_chost(pkg.cpv, pkg._metadata):
+             mreasons.append(_MaskReason("CHOST", "CHOST: %s" % pkg._metadata["CHOST"]))
  
 +    eprefix = pkgsettings["EPREFIX"]
 +    if len(eprefix.rstrip("/")) > 0 and pkg.built and not pkg.installed:
 +        if "EPREFIX" not in pkg._metadata:
 +            mreasons.append(_MaskReason("EPREFIX", "missing EPREFIX"))
 +        elif len(pkg._metadata["EPREFIX"].strip()) < len(eprefix):
 +            mreasons.append(
 +                _MaskReason(
 +                    "EPREFIX",
 +                    "EPREFIX: '%s' too small" % pkg._metadata["EPREFIX"],
 +                )
 +            )
 +
- 	if pkg.invalid:
- 		for msgs in pkg.invalid.values():
- 			for msg in msgs:
- 				mreasons.append(
- 					_MaskReason("invalid", "invalid: %s" % (msg,)))
+     if pkg.invalid:
+         for msgs in pkg.invalid.values():
+             for msg in msgs:
+                 mreasons.append(_MaskReason("invalid", "invalid: %s" % (msg,)))
  
- 	if not pkg._metadata["SLOT"]:
- 		mreasons.append(
- 			_MaskReason("invalid", "SLOT: undefined"))
+     if not pkg._metadata["SLOT"]:
+         mreasons.append(_MaskReason("invalid", "SLOT: undefined"))
  
- 	return mreasons
+     return mreasons
diff --cc lib/_emerge/emergelog.py
index 3562f8eb3,14439da6e..a891f5b54
--- a/lib/_emerge/emergelog.py
+++ b/lib/_emerge/emergelog.py
@@@ -16,7 -15,8 +16,9 @@@ from portage.const import EPREFI
  # dblink.merge() and we don't want that to trigger log writes
  # unless it's really called via emerge.
  _disable = True
 +_emerge_log_dir = EPREFIX + '/var/log'
+ 
  
  def emergelog(xterm_titles, mystr, short_msg=None):
  
diff --cc lib/portage/__init__.py
index de2dbfc05,13af8da09..b6b00a8c2
--- a/lib/portage/__init__.py
+++ b/lib/portage/__init__.py
@@@ -9,146 -9,182 +9,198 @@@ VERSION = "HEAD
  # ===========================================================================
  
  try:
- 	import asyncio
- 	import sys
- 	import errno
- 	if not hasattr(errno, 'ESTALE'):
- 		# ESTALE may not be defined on some systems, such as interix.
- 		errno.ESTALE = -1
- 	import multiprocessing.util
- 	import re
- 	import types
- 	import platform
+     import asyncio
+     import sys
+     import errno
 +    # PREFIX LOCAL
 +    import multiprocessing
  
- 	# Temporarily delete these imports, to ensure that only the
- 	# wrapped versions are imported by portage internals.
- 	import os
- 	del os
- 	import shutil
- 	del shutil
+     if not hasattr(errno, "ESTALE"):
+         # ESTALE may not be defined on some systems, such as interix.
+         errno.ESTALE = -1
+     import multiprocessing.util
+     import re
+     import types
+     import platform
  
- except ImportError as e:
- 	sys.stderr.write("\n\n")
- 	sys.stderr.write("!!! Failed to complete python imports. These are internal modules for\n")
- 	sys.stderr.write("!!! python and failure here indicates that you have a problem with python\n")
- 	sys.stderr.write("!!! itself and thus portage is not able to continue processing.\n\n")
+     # Temporarily delete these imports, to ensure that only the
+     # wrapped versions are imported by portage internals.
+     import os
+ 
+     del os
+     import shutil
+ 
+     del shutil
  
- 	sys.stderr.write("!!! You might consider starting python with verbose flags to see what has\n")
- 	sys.stderr.write("!!! gone wrong. Here is the information we got for this exception:\n")
- 	sys.stderr.write("    "+str(e)+"\n\n")
- 	raise
+ except ImportError as e:
+     sys.stderr.write("\n\n")
+     sys.stderr.write(
+         "!!! Failed to complete python imports. These are internal modules for\n"
+     )
+     sys.stderr.write(
+         "!!! python and failure here indicates that you have a problem with python\n"
+     )
+     sys.stderr.write(
+         "!!! itself and thus portage is not able to continue processing.\n\n"
+     )
+ 
+     sys.stderr.write(
+         "!!! You might consider starting python with verbose flags to see what has\n"
+     )
+     sys.stderr.write(
+         "!!! gone wrong. Here is the information we got for this exception:\n"
+     )
+     sys.stderr.write("    " + str(e) + "\n\n")
+     raise
  
 +# BEGIN PREFIX LOCAL
 +# for bug #758230, on macOS the default was switched from fork to spawn,
 +# the latter causing issues because all kinds of things can't be
 +# pickled, so force fork mode for now
 +try:
 +    multiprocessing.set_start_method("fork")
 +except RuntimeError:
 +    pass
 +# END PREFIX LOCAL
 +
  try:
  
- 	import portage.proxy.lazyimport
- 	import portage.proxy as proxy
- 	proxy.lazyimport.lazyimport(globals(),
- 		'portage.cache.cache_errors:CacheError',
- 		'portage.checksum',
- 		'portage.checksum:perform_checksum,perform_md5,prelink_capable',
- 		'portage.cvstree',
- 		'portage.data',
- 		'portage.data:lchown,ostype,portage_gid,portage_uid,secpass,' + \
- 			'uid,userland,userpriv_groups,wheelgid',
- 		'portage.dbapi',
- 		'portage.dbapi.bintree:bindbapi,binarytree',
- 		'portage.dbapi.cpv_expand:cpv_expand',
- 		'portage.dbapi.dep_expand:dep_expand',
- 		'portage.dbapi.porttree:close_portdbapi_caches,FetchlistDict,' + \
- 			'portagetree,portdbapi',
- 		'portage.dbapi.vartree:dblink,merge,unmerge,vardbapi,vartree',
- 		'portage.dbapi.virtual:fakedbapi',
- 		'portage.debug',
- 		'portage.dep',
- 		'portage.dep:best_match_to_list,dep_getcpv,dep_getkey,' + \
- 			'flatten,get_operator,isjustname,isspecific,isvalidatom,' + \
- 			'match_from_list,match_to_list',
- 		'portage.dep.dep_check:dep_check,dep_eval,dep_wordreduce,dep_zapdeps',
- 		'portage.eclass_cache',
- 		'portage.elog',
- 		'portage.exception',
- 		'portage.getbinpkg',
- 		'portage.locks',
- 		'portage.locks:lockdir,lockfile,unlockdir,unlockfile',
- 		'portage.mail',
- 		'portage.manifest:Manifest',
- 		'portage.output',
- 		'portage.output:bold,colorize',
- 		'portage.package.ebuild.doebuild:doebuild,' + \
- 			'doebuild_environment,spawn,spawnebuild',
- 		'portage.package.ebuild.config:autouse,best_from_dict,' + \
- 			'check_config_instance,config',
- 		'portage.package.ebuild.deprecated_profile_check:' + \
- 			'deprecated_profile_check',
- 		'portage.package.ebuild.digestcheck:digestcheck',
- 		'portage.package.ebuild.digestgen:digestgen',
- 		'portage.package.ebuild.fetch:fetch',
- 		'portage.package.ebuild.getmaskingreason:getmaskingreason',
- 		'portage.package.ebuild.getmaskingstatus:getmaskingstatus',
- 		'portage.package.ebuild.prepare_build_dirs:prepare_build_dirs',
- 		'portage.process',
- 		'portage.process:atexit_register,run_exitfuncs',
- 		'portage.update:dep_transform,fixdbentries,grab_updates,' + \
- 			'parse_updates,update_config_files,update_dbentries,' + \
- 			'update_dbentry',
- 		'portage.util',
- 		'portage.util:atomic_ofstream,apply_secpass_permissions,' + \
- 			'apply_recursive_permissions,dump_traceback,getconfig,' + \
- 			'grabdict,grabdict_package,grabfile,grabfile_package,' + \
- 			'map_dictlist_vals,new_protect_filename,normalize_path,' + \
- 			'pickle_read,pickle_write,stack_dictlist,stack_dicts,' + \
- 			'stack_lists,unique_array,varexpand,writedict,writemsg,' + \
- 			'writemsg_stdout,write_atomic',
- 		'portage.util.digraph:digraph',
- 		'portage.util.env_update:env_update',
- 		'portage.util.ExtractKernelVersion:ExtractKernelVersion',
- 		'portage.util.listdir:cacheddir,listdir',
- 		'portage.util.movefile:movefile',
- 		'portage.util.mtimedb:MtimeDB',
- 		'portage.versions',
- 		'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,' + \
- 			'cpv_getkey@getCPFromCPV,endversion_keys,' + \
- 			'suffix_value@endversion,pkgcmp,pkgsplit,vercmp,ververify',
- 		'portage.xpak',
- 		'subprocess',
- 		'time',
- 	)
- 
- 	from collections import OrderedDict
- 
- 	import portage.const
- 	from portage.const import VDB_PATH, PRIVATE_PATH, CACHE_PATH, DEPCACHE_PATH, \
- 		USER_CONFIG_PATH, MODULES_FILE_PATH, CUSTOM_PROFILE_PATH, PORTAGE_BASE_PATH, \
- 		PORTAGE_BIN_PATH, PORTAGE_PYM_PATH, PROFILE_PATH, LOCALE_DATA_PATH, \
- 		EBUILD_SH_BINARY, SANDBOX_BINARY, BASH_BINARY, \
- 		MOVE_BINARY, PRELINK_BINARY, WORLD_FILE, MAKE_CONF_FILE, MAKE_DEFAULTS_FILE, \
- 		DEPRECATED_PROFILE_FILE, USER_VIRTUALS_FILE, EBUILD_SH_ENV_FILE, \
- 		INVALID_ENV_FILE, CUSTOM_MIRRORS_FILE, CONFIG_MEMORY_FILE,\
- 		INCREMENTALS, EAPI, MISC_SH_BINARY, REPO_NAME_LOC, REPO_NAME_FILE, \
- 		EPREFIX, rootuid
+     import portage.proxy.lazyimport
+     import portage.proxy as proxy
+ 
+     proxy.lazyimport.lazyimport(
+         globals(),
+         "portage.cache.cache_errors:CacheError",
+         "portage.checksum",
+         "portage.checksum:perform_checksum,perform_md5,prelink_capable",
+         "portage.cvstree",
+         "portage.data",
+         "portage.data:lchown,ostype,portage_gid,portage_uid,secpass,"
+         + "uid,userland,userpriv_groups,wheelgid",
+         "portage.dbapi",
+         "portage.dbapi.bintree:bindbapi,binarytree",
+         "portage.dbapi.cpv_expand:cpv_expand",
+         "portage.dbapi.dep_expand:dep_expand",
+         "portage.dbapi.porttree:close_portdbapi_caches,FetchlistDict,"
+         + "portagetree,portdbapi",
+         "portage.dbapi.vartree:dblink,merge,unmerge,vardbapi,vartree",
+         "portage.dbapi.virtual:fakedbapi",
+         "portage.debug",
+         "portage.dep",
+         "portage.dep:best_match_to_list,dep_getcpv,dep_getkey,"
+         + "flatten,get_operator,isjustname,isspecific,isvalidatom,"
+         + "match_from_list,match_to_list",
+         "portage.dep.dep_check:dep_check,dep_eval,dep_wordreduce,dep_zapdeps",
+         "portage.eclass_cache",
+         "portage.elog",
+         "portage.exception",
+         "portage.getbinpkg",
+         "portage.locks",
+         "portage.locks:lockdir,lockfile,unlockdir,unlockfile",
+         "portage.mail",
+         "portage.manifest:Manifest",
+         "portage.output",
+         "portage.output:bold,colorize",
+         "portage.package.ebuild.doebuild:doebuild,"
+         + "doebuild_environment,spawn,spawnebuild",
+         "portage.package.ebuild.config:autouse,best_from_dict,"
+         + "check_config_instance,config",
+         "portage.package.ebuild.deprecated_profile_check:" + "deprecated_profile_check",
+         "portage.package.ebuild.digestcheck:digestcheck",
+         "portage.package.ebuild.digestgen:digestgen",
+         "portage.package.ebuild.fetch:fetch",
+         "portage.package.ebuild.getmaskingreason:getmaskingreason",
+         "portage.package.ebuild.getmaskingstatus:getmaskingstatus",
+         "portage.package.ebuild.prepare_build_dirs:prepare_build_dirs",
+         "portage.process",
+         "portage.process:atexit_register,run_exitfuncs",
+         "portage.update:dep_transform,fixdbentries,grab_updates,"
+         + "parse_updates,update_config_files,update_dbentries,"
+         + "update_dbentry",
+         "portage.util",
+         "portage.util:atomic_ofstream,apply_secpass_permissions,"
+         + "apply_recursive_permissions,dump_traceback,getconfig,"
+         + "grabdict,grabdict_package,grabfile,grabfile_package,"
+         + "map_dictlist_vals,new_protect_filename,normalize_path,"
+         + "pickle_read,pickle_write,stack_dictlist,stack_dicts,"
+         + "stack_lists,unique_array,varexpand,writedict,writemsg,"
+         + "writemsg_stdout,write_atomic",
+         "portage.util.digraph:digraph",
+         "portage.util.env_update:env_update",
+         "portage.util.ExtractKernelVersion:ExtractKernelVersion",
+         "portage.util.listdir:cacheddir,listdir",
+         "portage.util.movefile:movefile",
+         "portage.util.mtimedb:MtimeDB",
+         "portage.versions",
+         "portage.versions:best,catpkgsplit,catsplit,cpv_getkey,"
+         + "cpv_getkey@getCPFromCPV,endversion_keys,"
+         + "suffix_value@endversion,pkgcmp,pkgsplit,vercmp,ververify",
+         "portage.xpak",
+         "subprocess",
+         "time",
+     )
+ 
+     from collections import OrderedDict
+ 
+     import portage.const
+     from portage.const import (
+         VDB_PATH,
+         PRIVATE_PATH,
+         CACHE_PATH,
+         DEPCACHE_PATH,
+         USER_CONFIG_PATH,
+         MODULES_FILE_PATH,
+         CUSTOM_PROFILE_PATH,
+         PORTAGE_BASE_PATH,
+         PORTAGE_BIN_PATH,
+         PORTAGE_PYM_PATH,
+         PROFILE_PATH,
+         LOCALE_DATA_PATH,
+         EBUILD_SH_BINARY,
+         SANDBOX_BINARY,
+         BASH_BINARY,
+         MOVE_BINARY,
+         PRELINK_BINARY,
+         WORLD_FILE,
+         MAKE_CONF_FILE,
+         MAKE_DEFAULTS_FILE,
+         DEPRECATED_PROFILE_FILE,
+         USER_VIRTUALS_FILE,
+         EBUILD_SH_ENV_FILE,
+         INVALID_ENV_FILE,
+         CUSTOM_MIRRORS_FILE,
+         CONFIG_MEMORY_FILE,
+         INCREMENTALS,
+         EAPI,
+         MISC_SH_BINARY,
+         REPO_NAME_LOC,
+         REPO_NAME_FILE,
++        # BEGIN PREFIX LOCAL
++        EPREFIX,
++        rootuid,
++        # END PREFIX LOCAL
+     )
  
  except ImportError as e:
- 	sys.stderr.write("\n\n")
- 	sys.stderr.write("!!! Failed to complete portage imports. There are internal modules for\n")
- 	sys.stderr.write("!!! portage and failure here indicates that you have a problem with your\n")
- 	sys.stderr.write("!!! installation of portage. Please try a rescue portage located in the ebuild\n")
- 	sys.stderr.write("!!! repository under '/var/db/repos/gentoo/sys-apps/portage/files/' (default).\n")
- 	sys.stderr.write("!!! There is a README.RESCUE file that details the steps required to perform\n")
- 	sys.stderr.write("!!! a recovery of portage.\n")
- 	sys.stderr.write("    "+str(e)+"\n\n")
- 	raise
+     sys.stderr.write("\n\n")
+     sys.stderr.write(
+         "!!! Failed to complete portage imports. There are internal modules for\n"
+     )
+     sys.stderr.write(
+         "!!! portage and failure here indicates that you have a problem with your\n"
+     )
+     sys.stderr.write(
+         "!!! installation of portage. Please try a rescue portage located in the ebuild\n"
+     )
+     sys.stderr.write(
+         "!!! repository under '/var/db/repos/gentoo/sys-apps/portage/files/' (default).\n"
+     )
+     sys.stderr.write(
+         "!!! There is a README.RESCUE file that details the steps required to perform\n"
+     )
+     sys.stderr.write("!!! a recovery of portage.\n")
+     sys.stderr.write("    " + str(e) + "\n\n")
+     raise
  
  
  # We use utf_8 encoding everywhere. Previously, we used
diff --cc lib/portage/const.py
index 892766c68,1edc5fcf1..f2c69a4bb
--- a/lib/portage/const.py
+++ b/lib/portage/const.py
@@@ -58,185 -53,176 +58,196 @@@ NEWS_LIB_PATH = "var/lib/gentoo
  
  # these variables get EPREFIX prepended automagically when they are
  # translated into their lowercase variants
- DEPCACHE_PATH            = "/var/cache/edb/dep"
- GLOBAL_CONFIG_PATH       = "/usr/share/portage/config"
+ DEPCACHE_PATH = "/var/cache/edb/dep"
+ GLOBAL_CONFIG_PATH = "/usr/share/portage/config"
  
  # these variables are not used with target_root or config_root
 +PORTAGE_BASE_PATH        = PORTAGE_BASE
  # NOTE: Use realpath(__file__) so that python module symlinks in site-packages
  # are followed back to the real location of the whole portage installation.
 +#PREFIX: below should work, but I'm not sure how it affects other places
  # NOTE: Please keep PORTAGE_BASE_PATH in one line to help substitutions.
- #PORTAGE_BASE_PATH        = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
- PORTAGE_BIN_PATH         = PORTAGE_BASE_PATH + "/bin"
- PORTAGE_PYM_PATH         = os.path.realpath(os.path.join(__file__, '../..'))
- LOCALE_DATA_PATH         = PORTAGE_BASE_PATH + "/locale"  # FIXME: not used
- EBUILD_SH_BINARY         = PORTAGE_BIN_PATH + "/ebuild.sh"
- MISC_SH_BINARY           = PORTAGE_BIN_PATH + "/misc-functions.sh"
- SANDBOX_BINARY           = EPREFIX + "/usr/bin/sandbox"
- FAKEROOT_BINARY          = EPREFIX + "/usr/bin/fakeroot"
- BASH_BINARY              = PORTAGE_BASH
- MOVE_BINARY              = PORTAGE_MV
- PRELINK_BINARY           = "/usr/sbin/prelink"
+ # fmt:off
 -PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
++# PREFIX LOCAL (from const_autotools)
++#PORTAGE_BASE_PATH = os.path.join(os.sep, os.sep.join(os.path.realpath(__file__.rstrip("co")).split(os.sep)[:-3]))
+ # fmt:on
+ PORTAGE_BIN_PATH = PORTAGE_BASE_PATH + "/bin"
+ PORTAGE_PYM_PATH = os.path.realpath(os.path.join(__file__, "../.."))
+ LOCALE_DATA_PATH = PORTAGE_BASE_PATH + "/locale"  # FIXME: not used
+ EBUILD_SH_BINARY = PORTAGE_BIN_PATH + "/ebuild.sh"
+ MISC_SH_BINARY = PORTAGE_BIN_PATH + "/misc-functions.sh"
 -SANDBOX_BINARY = "/usr/bin/sandbox"
 -FAKEROOT_BINARY = "/usr/bin/fakeroot"
++# BEGIN PREFIX LOCAL
++SANDBOX_BINARY = EPREFIX + "/usr/bin/sandbox"
++FAKEROOT_BINARY = EPREFIX + "/usr/bin/fakeroot"
++# END PREFIX LOCAL
+ BASH_BINARY = "/bin/bash"
+ MOVE_BINARY = "/bin/mv"
+ PRELINK_BINARY = "/usr/sbin/prelink"
++# BEGIN PREFIX LOCAL
 +MACOSSANDBOX_BINARY      = "/usr/bin/sandbox-exec"
 +MACOSSANDBOX_PROFILE     = '''(version 1)
 +(allow default)
 +(deny file-write*)
 +(allow file-write* file-write-setugid
 +@@MACOSSANDBOX_PATHS@@)
 +(allow file-write-data
 +@@MACOSSANDBOX_PATHS_CONTENT_ONLY@@)'''
 +
 +PORTAGE_GROUPNAME        = portagegroup
 +PORTAGE_USERNAME         = portageuser
++# END PREFIX LOCAL
  
- INVALID_ENV_FILE         = "/etc/spork/is/not/valid/profile.env"
- MERGING_IDENTIFIER       = "-MERGING-"
- REPO_NAME_FILE           = "repo_name"
- REPO_NAME_LOC            = "profiles" + "/" + REPO_NAME_FILE
+ INVALID_ENV_FILE = "/etc/spork/is/not/valid/profile.env"
+ MERGING_IDENTIFIER = "-MERGING-"
+ REPO_NAME_FILE = "repo_name"
+ REPO_NAME_LOC = "profiles" + "/" + REPO_NAME_FILE
  
- PORTAGE_PACKAGE_ATOM     = "sys-apps/portage"
- LIBC_PACKAGE_ATOM        = "virtual/libc"
- OS_HEADERS_PACKAGE_ATOM  = "virtual/os-headers"
- CVS_PACKAGE_ATOM         = "dev-vcs/cvs"
- GIT_PACKAGE_ATOM         = "dev-vcs/git"
- HG_PACKAGE_ATOM          = "dev-vcs/mercurial"
- RSYNC_PACKAGE_ATOM       = "net-misc/rsync"
+ PORTAGE_PACKAGE_ATOM = "sys-apps/portage"
+ LIBC_PACKAGE_ATOM = "virtual/libc"
+ OS_HEADERS_PACKAGE_ATOM = "virtual/os-headers"
+ CVS_PACKAGE_ATOM = "dev-vcs/cvs"
+ GIT_PACKAGE_ATOM = "dev-vcs/git"
+ HG_PACKAGE_ATOM = "dev-vcs/mercurial"
+ RSYNC_PACKAGE_ATOM = "net-misc/rsync"
  
- INCREMENTALS             = (
- 	"ACCEPT_KEYWORDS",
- 	"CONFIG_PROTECT",
- 	"CONFIG_PROTECT_MASK",
- 	"ENV_UNSET",
- 	"FEATURES",
- 	"IUSE_IMPLICIT",
- 	"PRELINK_PATH",
- 	"PRELINK_PATH_MASK",
- 	"PROFILE_ONLY_VARIABLES",
- 	"USE",
- 	"USE_EXPAND",
- 	"USE_EXPAND_HIDDEN",
- 	"USE_EXPAND_IMPLICIT",
- 	"USE_EXPAND_UNPREFIXED",
+ INCREMENTALS = (
+     "ACCEPT_KEYWORDS",
+     "CONFIG_PROTECT",
+     "CONFIG_PROTECT_MASK",
+     "ENV_UNSET",
+     "FEATURES",
+     "IUSE_IMPLICIT",
+     "PRELINK_PATH",
+     "PRELINK_PATH_MASK",
+     "PROFILE_ONLY_VARIABLES",
+     "USE",
+     "USE_EXPAND",
+     "USE_EXPAND_HIDDEN",
+     "USE_EXPAND_IMPLICIT",
+     "USE_EXPAND_UNPREFIXED",
  )
- EBUILD_PHASES            = (
- 	"pretend",
- 	"setup",
- 	"unpack",
- 	"prepare",
- 	"configure",
- 	"compile",
- 	"test",
- 	"install",
- 	"package",
- 	"instprep",
- 	"preinst",
- 	"postinst",
- 	"prerm",
- 	"postrm",
- 	"nofetch",
- 	"config",
- 	"info",
- 	"other",
+ EBUILD_PHASES = (
+     "pretend",
+     "setup",
+     "unpack",
+     "prepare",
+     "configure",
+     "compile",
+     "test",
+     "install",
+     "package",
+     "instprep",
+     "preinst",
+     "postinst",
+     "prerm",
+     "postrm",
+     "nofetch",
+     "config",
+     "info",
+     "other",
+ )
+ SUPPORTED_FEATURES = frozenset(
+     [
+         "assume-digests",
+         "binpkg-docompress",
+         "binpkg-dostrip",
+         "binpkg-logs",
+         "binpkg-multi-instance",
+         "buildpkg",
+         "buildpkg-live",
+         "buildsyspkg",
+         "candy",
+         "case-insensitive-fs",
+         "ccache",
+         "cgroup",
+         "chflags",
+         "clean-logs",
+         "collision-protect",
+         "compress-build-logs",
+         "compressdebug",
+         "compress-index",
+         "config-protect-if-modified",
+         "digest",
+         "distcc",
+         "distlocks",
+         "downgrade-backup",
+         "ebuild-locks",
+         "fail-clean",
+         "fakeroot",
+         "fixlafiles",
+         "force-mirror",
+         "force-prefix",
+         "getbinpkg",
+         "icecream",
+         "installsources",
+         "ipc-sandbox",
+         "keeptemp",
+         "keepwork",
+         "lmirror",
+         "merge-sync",
+         "metadata-transfer",
+         "mirror",
+         "mount-sandbox",
+         "multilib-strict",
+         "network-sandbox",
+         "network-sandbox-proxy",
+         "news",
+         "noauto",
+         "noclean",
+         "nodoc",
+         "noinfo",
+         "noman",
+         "nostrip",
+         "notitles",
+         "parallel-fetch",
+         "parallel-install",
+         "pid-sandbox",
+         "pkgdir-index-trusted",
+         "prelink-checksums",
+         "preserve-libs",
+         "protect-owned",
+         "python-trace",
+         "qa-unresolved-soname-deps",
+         "sandbox",
+         "selinux",
+         "sesandbox",
+         "sfperms",
+         "sign",
+         "skiprocheck",
+         "splitdebug",
+         "split-elog",
+         "split-log",
+         "strict",
+         "strict-keepdir",
+         "stricter",
+         "suidctl",
+         "test",
+         "test-fail-continue",
+         "unknown-features-filter",
+         "unknown-features-warn",
+         "unmerge-backup",
+         "unmerge-logs",
+         "unmerge-orphans",
+         "unprivileged",
+         "userfetch",
+         "userpriv",
+         "usersandbox",
+         "usersync",
+         "webrsync-gpg",
+         "xattr",
++        # PREFIX LOCAL
++        "stacked-prefix",
+     ]
  )
- SUPPORTED_FEATURES       = frozenset([
- 	"assume-digests",
- 	"binpkg-docompress",
- 	"binpkg-dostrip",
- 	"binpkg-logs",
- 	"binpkg-multi-instance",
- 	"buildpkg",
- 	"buildsyspkg",
- 	"candy",
- 	"case-insensitive-fs",
- 	"ccache",
- 	"cgroup",
- 	"chflags",
- 	"clean-logs",
- 	"collision-protect",
- 	"compress-build-logs",
- 	"compressdebug",
- 	"compress-index",
- 	"config-protect-if-modified",
- 	"digest",
- 	"distcc",
- 	"distlocks",
- 	"downgrade-backup",
- 	"ebuild-locks",
- 	"fail-clean",
- 	"fakeroot",
- 	"fixlafiles",
- 	"force-mirror",
- 	"force-prefix",
- 	"getbinpkg",
- 	"icecream",
- 	"installsources",
- 	"ipc-sandbox",
- 	"keeptemp",
- 	"keepwork",
- 	"lmirror",
- 	"merge-sync",
- 	"metadata-transfer",
- 	"mirror",
- 	"mount-sandbox",
- 	"multilib-strict",
- 	"network-sandbox",
- 	"network-sandbox-proxy",
- 	"news",
- 	"noauto",
- 	"noclean",
- 	"nodoc",
- 	"noinfo",
- 	"noman",
- 	"nostrip",
- 	"notitles",
- 	"parallel-fetch",
- 	"parallel-install",
- 	"pid-sandbox",
- 	"pkgdir-index-trusted",
- 	"prelink-checksums",
- 	"preserve-libs",
- 	"protect-owned",
- 	"python-trace",
- 	"qa-unresolved-soname-deps",
- 	"sandbox",
- 	"selinux",
- 	"sesandbox",
- 	"sfperms",
- 	"sign",
- 	"skiprocheck",
- 	"splitdebug",
- 	"split-elog",
- 	"split-log",
- 	"stacked-prefix",  # PREFIX LOCAL
- 	"strict",
- 	"strict-keepdir",
- 	"stricter",
- 	"suidctl",
- 	"test",
- 	"test-fail-continue",
- 	"unknown-features-filter",
- 	"unknown-features-warn",
- 	"unmerge-backup",
- 	"unmerge-logs",
- 	"unmerge-orphans",
- 	"unprivileged",
- 	"userfetch",
- 	"userpriv",
- 	"usersandbox",
- 	"usersync",
- 	"webrsync-gpg",
- 	"xattr",
- ])
  
- EAPI                     = 8
+ EAPI = 8
  
- HASHING_BLOCKSIZE        = 32768
+ HASHING_BLOCKSIZE = 32768
  
  MANIFEST2_HASH_DEFAULTS = frozenset(["BLAKE2B", "SHA512"])
- MANIFEST2_HASH_DEFAULT  = "BLAKE2B"
+ MANIFEST2_HASH_DEFAULT = "BLAKE2B"
  
- MANIFEST2_IDENTIFIERS    = ("AUX", "MISC", "DIST", "EBUILD")
+ MANIFEST2_IDENTIFIERS = ("AUX", "MISC", "DIST", "EBUILD")
  
  # The EPREFIX for the current install is hardcoded here, but access to this
  # constant should be minimal, in favor of access via the EPREFIX setting of
diff --cc lib/portage/data.py
index d2d356f95,09a4dd079..0dac72845
--- a/lib/portage/data.py
+++ b/lib/portage/data.py
@@@ -6,26 -6,24 +6,28 @@@ import gr
  import os
  import platform
  import pwd
 +from portage.const import PORTAGE_GROUPNAME, PORTAGE_USERNAME, EPREFIX
  
  import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 	'portage.output:colorize',
- 	'portage.util:writemsg',
- 	'portage.util.path:first_existing',
- 	'subprocess'
+ 
+ portage.proxy.lazyimport.lazyimport(
+     globals(),
+     "portage.output:colorize",
+     "portage.util:writemsg",
+     "portage.util.path:first_existing",
+     "subprocess",
  )
  from portage.localization import _
  
  ostype = platform.system()
  userland = None
 -if ostype == "DragonFly" or ostype.endswith("BSD"):
 +# Prefix always has USERLAND=GNU, even on
 +# FreeBSD, OpenBSD and Darwin (thank the lord!).
 +# Hopefully this entire USERLAND hack can go once
 +if EPREFIX == "" and (ostype == "DragonFly" or ostype.endswith("BSD")):
- 	userland = "BSD"
+     userland = "BSD"
  else:
- 	userland = "GNU"
+     userland = "GNU"
  
  lchown = getattr(os, "lchown", None)
  
@@@ -119,221 -134,231 +138,236 @@@ except KeyError
  # configurations with different constants could be used simultaneously.
  _initialized_globals = set()
  
+ 
  def _get_global(k):
- 	if k in _initialized_globals:
- 		return globals()[k]
- 
- 	if k == 'secpass':
- 
- 		unprivileged = False
- 		if hasattr(portage, 'settings'):
- 			unprivileged = "unprivileged" in portage.settings.features
- 		else:
- 			# The config class has equivalent code, but we also need to
- 			# do it here if _disable_legacy_globals() has been called.
- 			eroot_or_parent = first_existing(os.path.join(
- 				_target_root(), _target_eprefix().lstrip(os.sep)))
- 			try:
- 				eroot_st = os.stat(eroot_or_parent)
- 			except OSError:
- 				pass
- 			else:
- 				unprivileged = _unprivileged_mode(
- 					eroot_or_parent, eroot_st)
- 
- 		v = 0
- 		if uid == 0:
- 			v = 2
- 		elif unprivileged:
- 			v = 2
- 		elif _get_global('portage_gid') in os.getgroups():
- 			v = 1
- 
- 	elif k in ('portage_gid', 'portage_uid'):
- 
- 		#Discover the uid and gid of the portage user/group
- 		keyerror = False
- 		try:
- 			username = str(_get_global('_portage_username'))
- 			portage_uid = pwd.getpwnam(username).pw_uid
- 		except KeyError:
- 			# PREFIX LOCAL: some sysadmins are insane, bug #344307
- 			if username.isdigit():
- 				portage_uid = int(username)
- 			else:
- 				keyerror = True
- 				portage_uid = 0
- 			# END PREFIX LOCAL
- 
- 		try:
- 			grpname = str(_get_global('_portage_grpname'))
- 			portage_gid = grp.getgrnam(grpname).gr_gid
- 		except KeyError:
- 			# PREFIX LOCAL: some sysadmins are insane, bug #344307
- 			if grpname.isdigit():
- 				portage_gid = int(grpname)
- 			else:
- 				keyerror = True
- 				portage_gid = 0
- 			# END PREFIX LOCAL
- 
- 		# Suppress this error message if both PORTAGE_GRPNAME and
- 		# PORTAGE_USERNAME are set to "root", for things like
- 		# Android (see bug #454060).
- 		if keyerror and not (_get_global('_portage_username') == "root" and
- 			_get_global('_portage_grpname') == "root"):
- 			# PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
- 			writemsg(colorize("BAD",
- 				_("portage: '%s' user or '%s' group missing." % (_get_global('_portage_username'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
- 			writemsg(colorize("BAD",
- 				_("         In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
- 			writemsg(colorize("BAD",
- 				_("         since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
- 			writemsg(colorize("BAD",
- 				_("         Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
- 			# END PREFIX LOCAL
- 			portage_group_warning()
- 
- 		globals()['portage_gid'] = portage_gid
- 		_initialized_globals.add('portage_gid')
- 		globals()['portage_uid'] = portage_uid
- 		_initialized_globals.add('portage_uid')
- 
- 		if k == 'portage_gid':
- 			return portage_gid
- 		if k == 'portage_uid':
- 			return portage_uid
- 		raise AssertionError('unknown name: %s' % k)
- 
- 	elif k == 'userpriv_groups':
- 		v = [_get_global('portage_gid')]
- 		if secpass >= 2:
- 			# Get a list of group IDs for the portage user. Do not use
- 			# grp.getgrall() since it is known to trigger spurious
- 			# SIGPIPE problems with nss_ldap.
- 			cmd = ["id", "-G", _portage_username]
- 
- 			encoding = portage._encodings['content']
- 			cmd = [portage._unicode_encode(x,
- 				encoding=encoding, errors='strict') for x in cmd]
- 			proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- 				stderr=subprocess.STDOUT)
- 			myoutput = proc.communicate()[0]
- 			status = proc.wait()
- 			if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
- 				for x in portage._unicode_decode(myoutput,
- 					encoding=encoding, errors='strict').split():
- 					try:
- 						v.append(int(x))
- 					except ValueError:
- 						pass
- 				v = sorted(set(v))
- 
- 	# Avoid instantiating portage.settings when the desired
- 	# variable is set in os.environ.
- 	elif k in ('_portage_grpname', '_portage_username'):
- 		v = None
- 		if k == '_portage_grpname':
- 			env_key = 'PORTAGE_GRPNAME'
- 		else:
- 			env_key = 'PORTAGE_USERNAME'
- 
- 		if env_key in os.environ:
- 			v = os.environ[env_key]
- 		elif hasattr(portage, 'settings'):
- 			v = portage.settings.get(env_key)
- 		else:
- 			# The config class has equivalent code, but we also need to
- 			# do it here if _disable_legacy_globals() has been called.
- 			eroot_or_parent = first_existing(os.path.join(
- 				_target_root(), _target_eprefix().lstrip(os.sep)))
- 			try:
- 				eroot_st = os.stat(eroot_or_parent)
- 			except OSError:
- 				pass
- 			else:
- 				if _unprivileged_mode(eroot_or_parent, eroot_st):
- 					if k == '_portage_grpname':
- 						try:
- 							grp_struct = grp.getgrgid(eroot_st.st_gid)
- 						except KeyError:
- 							pass
- 						else:
- 							v = grp_struct.gr_name
- 					else:
- 						try:
- 							pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- 						except KeyError:
- 							pass
- 						else:
- 							v = pwd_struct.pw_name
- 
- 		if v is None:
- 			# PREFIX LOCAL: use var iso hardwired 'portage'
- 			if k == '_portage_grpname':
- 				v = PORTAGE_GROUPNAME
- 			else:
- 				v = PORTAGE_USERNAME
- 			# END PREFIX LOCAL
- 	else:
- 		raise AssertionError('unknown name: %s' % k)
- 
- 	globals()[k] = v
- 	_initialized_globals.add(k)
- 	return v
+     if k in _initialized_globals:
+         return globals()[k]
+ 
+     if k == "secpass":
+ 
+         unprivileged = False
+         if hasattr(portage, "settings"):
+             unprivileged = "unprivileged" in portage.settings.features
+         else:
+             # The config class has equivalent code, but we also need to
+             # do it here if _disable_legacy_globals() has been called.
+             eroot_or_parent = first_existing(
+                 os.path.join(_target_root(), _target_eprefix().lstrip(os.sep))
+             )
+             try:
+                 eroot_st = os.stat(eroot_or_parent)
+             except OSError:
+                 pass
+             else:
+                 unprivileged = _unprivileged_mode(eroot_or_parent, eroot_st)
+ 
+         v = 0
+         if uid == 0:
+             v = 2
+         elif unprivileged:
+             v = 2
+         elif _get_global("portage_gid") in os.getgroups():
+             v = 1
+ 
+     elif k in ("portage_gid", "portage_uid"):
+ 
+         # Discover the uid and gid of the portage user/group
+         keyerror = False
+         try:
+             portage_uid = pwd.getpwnam(_get_global("_portage_username")).pw_uid
+         except KeyError:
 -            keyerror = True
 -            portage_uid = 0
++            # PREFIX LOCAL: some sysadmins are insane, bug #344307
++            username = str(_get_global("_portage_username"))
++            if username.isdigit():
++                portage_uid = int(username)
++            else:
++                keyerror = True
++                portage_uid = 0
++            # END PREFIX LOCAL
+ 
+         try:
+             portage_gid = grp.getgrnam(_get_global("_portage_grpname")).gr_gid
+         except KeyError:
 -            keyerror = True
 -            portage_gid = 0
++            # PREFIX LOCAL: some sysadmins are insane, bug #344307
++            grpname = str(_get_global("_portage_grpname"))
++            if grpname.isdigit():
++                portage_gid = int(grpname)
++            else:
++                keyerror = True
++                portage_gid = 0
++            # END PREFIX LOCAL
+ 
+         # Suppress this error message if both PORTAGE_GRPNAME and
+         # PORTAGE_USERNAME are set to "root", for things like
+         # Android (see bug #454060).
 -        if keyerror and not (
 -            _get_global("_portage_username") == "root"
 -            and _get_global("_portage_grpname") == "root"
 -        ):
 -            writemsg(
 -                colorize("BAD", _("portage: 'portage' user or group missing.")) + "\n",
 -                noiselevel=-1,
 -            )
 -            writemsg(
 -                _(
 -                    "         For the defaults, line 1 goes into passwd, "
 -                    "and 2 into group.\n"
 -                ),
 -                noiselevel=-1,
 -            )
 -            writemsg(
 -                colorize(
 -                    "GOOD",
 -                    "         portage:x:250:250:portage:/var/tmp/portage:/bin/false",
 -                )
 -                + "\n",
 -                noiselevel=-1,
 -            )
 -            writemsg(
 -                colorize("GOOD", "         portage::250:portage") + "\n", noiselevel=-1
 -            )
++        if keyerror and not (_get_global('_portage_username') == "root" and
++            _get_global('_portage_grpname') == "root"):
++            # PREFIX LOCAL: we need to fix this one day to distinguish prefix vs non-prefix
++            writemsg(colorize("BAD",
++                _("portage: '%s' user or '%s' group missing." % (_get_global('_portage_username'), _get_global('_portage_grpname')))) + "\n", noiselevel=-1)
++            writemsg(colorize("BAD",
++                _("         In Prefix Portage this is quite dramatic")) + "\n", noiselevel=-1)
++            writemsg(colorize("BAD",
++                _("         since it means you have thrown away yourself.")) + "\n", noiselevel=-1)
++            writemsg(colorize("BAD",
++                _("         Re-add yourself or re-bootstrap Gentoo Prefix.")) + "\n", noiselevel=-1)
++            # END PREFIX LOCAL
+             portage_group_warning()
+ 
+         globals()["portage_gid"] = portage_gid
+         _initialized_globals.add("portage_gid")
+         globals()["portage_uid"] = portage_uid
+         _initialized_globals.add("portage_uid")
+ 
+         if k == "portage_gid":
+             return portage_gid
+         if k == "portage_uid":
+             return portage_uid
+         raise AssertionError("unknown name: %s" % k)
+ 
+     elif k == "userpriv_groups":
+         v = [_get_global("portage_gid")]
+         if secpass >= 2:
+             # Get a list of group IDs for the portage user. Do not use
+             # grp.getgrall() since it is known to trigger spurious
+             # SIGPIPE problems with nss_ldap.
+             cmd = ["id", "-G", _portage_username]
+ 
+             encoding = portage._encodings["content"]
+             cmd = [
+                 portage._unicode_encode(x, encoding=encoding, errors="strict")
+                 for x in cmd
+             ]
+             proc = subprocess.Popen(
+                 cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+             )
+             myoutput = proc.communicate()[0]
+             status = proc.wait()
+             if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
+                 for x in portage._unicode_decode(
+                     myoutput, encoding=encoding, errors="strict"
+                 ).split():
+                     try:
+                         v.append(int(x))
+                     except ValueError:
+                         pass
+                 v = sorted(set(v))
+ 
+     # Avoid instantiating portage.settings when the desired
+     # variable is set in os.environ.
+     elif k in ("_portage_grpname", "_portage_username"):
+         v = None
+         if k == "_portage_grpname":
+             env_key = "PORTAGE_GRPNAME"
+         else:
+             env_key = "PORTAGE_USERNAME"
+ 
+         if env_key in os.environ:
+             v = os.environ[env_key]
+         elif hasattr(portage, "settings"):
+             v = portage.settings.get(env_key)
+         else:
+             # The config class has equivalent code, but we also need to
+             # do it here if _disable_legacy_globals() has been called.
+             eroot_or_parent = first_existing(
+                 os.path.join(_target_root(), _target_eprefix().lstrip(os.sep))
+             )
+             try:
+                 eroot_st = os.stat(eroot_or_parent)
+             except OSError:
+                 pass
+             else:
+                 if _unprivileged_mode(eroot_or_parent, eroot_st):
+                     if k == "_portage_grpname":
+                         try:
+                             grp_struct = grp.getgrgid(eroot_st.st_gid)
+                         except KeyError:
+                             pass
+                         else:
+                             v = grp_struct.gr_name
+                     else:
+                         try:
+                             pwd_struct = pwd.getpwuid(eroot_st.st_uid)
+                         except KeyError:
+                             pass
+                         else:
+                             v = pwd_struct.pw_name
+ 
+         if v is None:
 -            v = "portage"
++            # PREFIX LOCAL: use var instead of hardwired 'portage'
++            if k == '_portage_grpname':
++                v = PORTAGE_GROUPNAME
++            else:
++                v = PORTAGE_USERNAME
++            # END PREFIX LOCAL
+     else:
+         raise AssertionError("unknown name: %s" % k)
+ 
+     globals()[k] = v
+     _initialized_globals.add(k)
+     return v
+ 
  
  class _GlobalProxy(portage.proxy.objectproxy.ObjectProxy):
  
- 	__slots__ = ('_name',)
+     __slots__ = ("_name",)
  
- 	def __init__(self, name):
- 		portage.proxy.objectproxy.ObjectProxy.__init__(self)
- 		object.__setattr__(self, '_name', name)
+     def __init__(self, name):
+         portage.proxy.objectproxy.ObjectProxy.__init__(self)
+         object.__setattr__(self, "_name", name)
  
- 	def _get_target(self):
- 		return _get_global(object.__getattribute__(self, '_name'))
+     def _get_target(self):
+         return _get_global(object.__getattribute__(self, "_name"))
  
- for k in ('portage_gid', 'portage_uid', 'secpass', 'userpriv_groups',
- 	'_portage_grpname', '_portage_username'):
- 	globals()[k] = _GlobalProxy(k)
+ 
+ for k in (
+     "portage_gid",
+     "portage_uid",
+     "secpass",
+     "userpriv_groups",
+     "_portage_grpname",
+     "_portage_username",
+ ):
+     globals()[k] = _GlobalProxy(k)
  del k
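
The proxies registered above defer the call to _get_global() until a value is first used, so importing the module stays cheap. A minimal sketch of the same lazy-proxy idea, assuming a plain __getattr__-based stand-in rather than portage's ObjectProxy (the class and parameter names here are illustrative):

    class LazyGlobal:
        def __init__(self, name, resolver):
            object.__setattr__(self, "_name", name)
            object.__setattr__(self, "_resolver", resolver)

        def __getattr__(self, attr):
            # Resolve the real value on first use, then delegate the lookup.
            return getattr(self._resolver(self._name), attr)

    # e.g. portage_gid = LazyGlobal("portage_gid", _get_global)
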
  
+ 
  def _init(settings):
- 	"""
- 	Use config variables like PORTAGE_GRPNAME and PORTAGE_USERNAME to
- 	initialize global variables. This allows settings to come from make.conf
- 	instead of requiring them to be set in the calling environment.
- 	"""
- 	if '_portage_grpname' not in _initialized_globals and \
- 		'_portage_username' not in _initialized_globals:
- 
- 		# Prevents "TypeError: expected string" errors
- 		# from grp.getgrnam() with PyPy
- 		native_string = platform.python_implementation() == 'PyPy'
- 
- 		# PREFIX LOCAL: use var instead of hardwired 'portage'
- 		v = settings.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
- 		# END PREFIX LOCAL
- 		if native_string:
- 			v = portage._native_string(v)
- 		globals()['_portage_grpname'] = v
- 		_initialized_globals.add('_portage_grpname')
- 
- 		# PREFIX LOCAL: use var instead of hardwired 'portage'
- 		v = settings.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
- 		# END PREFIX LOCAL
- 		if native_string:
- 			v = portage._native_string(v)
- 		globals()['_portage_username'] = v
- 		_initialized_globals.add('_portage_username')
- 
- 	if 'secpass' not in _initialized_globals:
- 		v = 0
- 		if uid == 0:
- 			v = 2
- 		elif "unprivileged" in settings.features:
- 			v = 2
- 		elif portage_gid in os.getgroups():
- 			v = 1
- 		globals()['secpass'] = v
- 		_initialized_globals.add('secpass')
+     """
+     Use config variables like PORTAGE_GRPNAME and PORTAGE_USERNAME to
+     initialize global variables. This allows settings to come from make.conf
+     instead of requiring them to be set in the calling environment.
+     """
+     if (
+         "_portage_grpname" not in _initialized_globals
+         and "_portage_username" not in _initialized_globals
+     ):
+ 
+         # Prevents "TypeError: expected string" errors
+         # from grp.getgrnam() with PyPy
+         native_string = platform.python_implementation() == "PyPy"
+ 
 -        v = settings.get("PORTAGE_GRPNAME", "portage")
++        # PREFIX LOCAL: use var instead of hardwired 'portage'
++        v = settings.get('PORTAGE_GRPNAME', PORTAGE_GROUPNAME)
++        # END PREFIX LOCAL
+         if native_string:
+             v = portage._native_string(v)
 -        globals()["_portage_grpname"] = v
 -        _initialized_globals.add("_portage_grpname")
++        globals()['_portage_grpname'] = v
++        _initialized_globals.add('_portage_grpname')
+ 
 -        v = settings.get("PORTAGE_USERNAME", "portage")
++        # PREFIX LOCAL: use var instead of hardwired 'portage'
++        v = settings.get('PORTAGE_USERNAME', PORTAGE_USERNAME)
++        # END PREFIX LOCAL
+         if native_string:
+             v = portage._native_string(v)
 -        globals()["_portage_username"] = v
 -        _initialized_globals.add("_portage_username")
++        globals()['_portage_username'] = v
++        _initialized_globals.add('_portage_username')
+ 
+     if "secpass" not in _initialized_globals:
+         v = 0
+         if uid == 0:
+             v = 2
+         elif "unprivileged" in settings.features:
+             v = 2
+         elif portage_gid in os.getgroups():
+             v = 1
+         globals()["secpass"] = v
+         _initialized_globals.add("secpass")
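
Both code paths above reduce secpass to a three-level privilege indicator: 2 for root or an unprivileged-mode tree, 1 for membership in the portage group, and 0 otherwise. A minimal standalone sketch of that decision, assuming the uid and portage group id are already known (the helper name is illustrative):

    import os

    def compute_secpass(uid, portage_gid, unprivileged=False):
        # 2: full privileges (root, or an unprivileged-mode tree)
        # 1: member of the portage group
        # 0: no special privileges
        if uid == 0 or unprivileged:
            return 2
        if portage_gid in os.getgroups():
            return 1
        return 0
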
diff --cc lib/portage/dbapi/bintree.py
index 7e81b7879,9dbf9ee8b..8b008a93d
--- a/lib/portage/dbapi/bintree.py
+++ b/lib/portage/dbapi/bintree.py
@@@ -57,1806 -62,1994 +62,2002 @@@ from urllib.parse import urlpars
  
  
  class UseCachedCopyOfRemoteIndex(Exception):
- 	# If the local copy is recent enough
- 	# then fetching the remote index can be skipped.
- 	pass
+     # If the local copy is recent enough
+     # then fetching the remote index can be skipped.
+     pass
+ 
  
  class bindbapi(fakedbapi):
- 	_known_keys = frozenset(list(fakedbapi._known_keys) + \
- 		["CHOST", "repository", "USE"])
- 	_pkg_str_aux_keys = fakedbapi._pkg_str_aux_keys + ("BUILD_ID", "BUILD_TIME", "_mtime_")
- 
- 	def __init__(self, mybintree=None, **kwargs):
- 		# Always enable multi_instance mode for bindbapi indexing. This
- 		# does not affect the local PKGDIR file layout, since that is
- 		# controlled independently by FEATURES=binpkg-multi-instance.
- 		# The multi_instance mode is useful for the following reasons:
- 		# * binary packages with the same cpv from multiple binhosts
- 		#   can be considered simultaneously
- 		# * if binpkg-multi-instance is disabled, it's still possible
- 		#   to properly access a PKGDIR which has binpkg-multi-instance
- 		#   layout (or mixed layout)
- 		fakedbapi.__init__(self, exclusive_slots=False,
- 			multi_instance=True, **kwargs)
- 		self.bintree = mybintree
- 		self.move_ent = mybintree.move_ent
- 		# Selectively cache metadata in order to optimize dep matching.
- 		self._aux_cache_keys = set(
- 			["BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST", "DEFINED_PHASES",
- 			"DEPEND", "EAPI", "IDEPEND", "IUSE", "KEYWORDS",
- 			"LICENSE", "MD5", "PDEPEND", "PROPERTIES",
- 			"PROVIDES", "RDEPEND", "repository", "REQUIRES", "RESTRICT",
- 			"SIZE", "SLOT", "USE", "_mtime_", "EPREFIX"
- 			])
- 		self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
- 		self._aux_cache = {}
- 
- 	@property
- 	def writable(self):
- 		"""
- 		Check if PKGDIR is writable, or permissions are sufficient
- 		to create it if it does not exist yet.
- 		@rtype: bool
- 		@return: True if PKGDIR is writable or can be created,
- 			False otherwise
- 		"""
- 		return os.access(first_existing(self.bintree.pkgdir), os.W_OK)
- 
- 	def match(self, *pargs, **kwargs):
- 		if self.bintree and not self.bintree.populated:
- 			self.bintree.populate()
- 		return fakedbapi.match(self, *pargs, **kwargs)
- 
- 	def cpv_exists(self, cpv, myrepo=None):
- 		if self.bintree and not self.bintree.populated:
- 			self.bintree.populate()
- 		return fakedbapi.cpv_exists(self, cpv)
- 
- 	def cpv_inject(self, cpv, **kwargs):
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 		fakedbapi.cpv_inject(self, cpv,
- 			metadata=cpv._metadata, **kwargs)
- 
- 	def cpv_remove(self, cpv):
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 		fakedbapi.cpv_remove(self, cpv)
- 
- 	def aux_get(self, mycpv, wants, myrepo=None):
- 		if self.bintree and not self.bintree.populated:
- 			self.bintree.populate()
- 		# Support plain string for backward compatibility with API
- 		# consumers (including portageq, which passes in a cpv from
- 		# a command-line argument).
- 		instance_key = self._instance_key(mycpv,
- 			support_string=True)
- 		if not self._known_keys.intersection(
- 			wants).difference(self._aux_cache_keys):
- 			aux_cache = self.cpvdict[instance_key]
- 			if aux_cache is not None:
- 				return [aux_cache.get(x, "") for x in wants]
- 		mysplit = mycpv.split("/")
- 		mylist = []
- 		add_pkg = self.bintree._additional_pkgs.get(instance_key)
- 		if add_pkg is not None:
- 			return add_pkg._db.aux_get(add_pkg, wants)
- 		if not self.bintree._remotepkgs or \
- 			not self.bintree.isremote(mycpv):
- 			try:
- 				tbz2_path = self.bintree._pkg_paths[instance_key]
- 			except KeyError:
- 				raise KeyError(mycpv)
- 			tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
- 			try:
- 				st = os.lstat(tbz2_path)
- 			except OSError:
- 				raise KeyError(mycpv)
- 			metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
- 			def getitem(k):
- 				if k == "_mtime_":
- 					return str(st[stat.ST_MTIME])
- 				if k == "SIZE":
- 					return str(st.st_size)
- 				v = metadata_bytes.get(_unicode_encode(k,
- 					encoding=_encodings['repo.content'],
- 					errors='backslashreplace'))
- 				if v is not None:
- 					v = _unicode_decode(v,
- 						encoding=_encodings['repo.content'], errors='replace')
- 				return v
- 		else:
- 			getitem = self.cpvdict[instance_key].get
- 		mydata = {}
- 		mykeys = wants
- 		for x in mykeys:
- 			myval = getitem(x)
- 			# myval is None if the key doesn't exist
- 			# or the tbz2 is corrupt.
- 			if myval:
- 				mydata[x] = " ".join(myval.split())
- 
- 		if not mydata.setdefault('EAPI', '0'):
- 			mydata['EAPI'] = '0'
- 
- 		return [mydata.get(x, '') for x in wants]
- 
- 	def aux_update(self, cpv, values):
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 		build_id = None
- 		try:
- 			build_id = cpv.build_id
- 		except AttributeError:
- 			if self.bintree._multi_instance:
- 				# The cpv.build_id attribute is required if we are in
- 				# multi-instance mode, since otherwise we won't know
- 				# which instance to update.
- 				raise
- 			else:
- 				cpv = self._instance_key(cpv, support_string=True)[0]
- 				build_id = cpv.build_id
- 
- 		tbz2path = self.bintree.getname(cpv)
- 		if not os.path.exists(tbz2path):
- 			raise KeyError(cpv)
- 		mytbz2 = portage.xpak.tbz2(tbz2path)
- 		mydata = mytbz2.get_data()
- 
- 		for k, v in values.items():
- 			k = _unicode_encode(k,
- 				encoding=_encodings['repo.content'], errors='backslashreplace')
- 			v = _unicode_encode(v,
- 				encoding=_encodings['repo.content'], errors='backslashreplace')
- 			mydata[k] = v
- 
- 		for k, v in list(mydata.items()):
- 			if not v:
- 				del mydata[k]
- 		mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
- 		# inject will clear stale caches via cpv_inject.
- 		self.bintree.inject(cpv)
- 
- 
- 	@coroutine
- 	def unpack_metadata(self, pkg, dest_dir, loop=None):
- 		"""
- 		Unpack package metadata to a directory. This method is a coroutine.
- 
- 		@param pkg: package to unpack
- 		@type pkg: _pkg_str or portage.config
- 		@param dest_dir: destination directory
- 		@type dest_dir: str
- 		"""
- 		loop = asyncio._wrap_loop(loop)
- 		if isinstance(pkg, _pkg_str):
- 			cpv = pkg
- 		else:
- 			cpv = pkg.mycpv
- 		key = self._instance_key(cpv)
- 		add_pkg = self.bintree._additional_pkgs.get(key)
- 		if add_pkg is not None:
- 			yield add_pkg._db.unpack_metadata(pkg, dest_dir, loop=loop)
- 		else:
- 			tbz2_file = self.bintree.getname(cpv)
- 			yield loop.run_in_executor(ForkExecutor(loop=loop),
- 				portage.xpak.tbz2(tbz2_file).unpackinfo, dest_dir)
- 
- 	@coroutine
- 	def unpack_contents(self, pkg, dest_dir, loop=None):
- 		"""
- 		Unpack package contents to a directory. This method is a coroutine.
- 
- 		@param pkg: package to unpack
- 		@type pkg: _pkg_str or portage.config
- 		@param dest_dir: destination directory
- 		@type dest_dir: str
- 		"""
- 		loop = asyncio._wrap_loop(loop)
- 		if isinstance(pkg, _pkg_str):
- 			settings = self.settings
- 			cpv = pkg
- 		else:
- 			settings = pkg
- 			cpv = settings.mycpv
- 
- 		pkg_path = self.bintree.getname(cpv)
- 		if pkg_path is not None:
- 
- 			extractor = BinpkgExtractorAsync(
- 				background=settings.get('PORTAGE_BACKGROUND') == '1',
- 				env=settings.environ(),
- 				features=settings.features,
- 				image_dir=dest_dir,
- 				pkg=cpv, pkg_path=pkg_path,
- 				logfile=settings.get('PORTAGE_LOG_FILE'),
- 				scheduler=SchedulerInterface(loop))
- 
- 			extractor.start()
- 			yield extractor.async_wait()
- 			if extractor.returncode != os.EX_OK:
- 				raise PortageException("Error Extracting '{}'".format(pkg_path))
- 
- 		else:
- 			instance_key = self._instance_key(cpv)
- 			add_pkg = self.bintree._additional_pkgs.get(instance_key)
- 			if add_pkg is None:
- 				raise portage.exception.PackageNotFound(cpv)
- 			yield add_pkg._db.unpack_contents(pkg, dest_dir, loop=loop)
- 
- 	def cp_list(self, *pargs, **kwargs):
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 		return fakedbapi.cp_list(self, *pargs, **kwargs)
- 
- 	def cp_all(self, sort=False):
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 		return fakedbapi.cp_all(self, sort=sort)
- 
- 	def cpv_all(self):
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 		return fakedbapi.cpv_all(self)
- 
- 	def getfetchsizes(self, pkg):
- 		"""
- 		This will raise MissingSignature if SIZE signature is not available,
- 		or InvalidSignature if SIZE signature is invalid.
- 		"""
- 
- 		if not self.bintree.populated:
- 			self.bintree.populate()
- 
- 		pkg = getattr(pkg, 'cpv', pkg)
- 
- 		filesdict = {}
- 		if not self.bintree.isremote(pkg):
- 			pass
- 		else:
- 			metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
- 			try:
- 				size = int(metadata["SIZE"])
- 			except KeyError:
- 				raise portage.exception.MissingSignature("SIZE")
- 			except ValueError:
- 				raise portage.exception.InvalidSignature(
- 					"SIZE: %s" % metadata["SIZE"])
- 			else:
- 				filesdict[os.path.basename(self.bintree.getname(pkg))] = size
- 
- 		return filesdict
+     _known_keys = frozenset(
+         list(fakedbapi._known_keys) + ["CHOST", "repository", "USE"]
+     )
+     _pkg_str_aux_keys = fakedbapi._pkg_str_aux_keys + (
+         "BUILD_ID",
+         "BUILD_TIME",
+         "_mtime_",
+     )
+ 
+     def __init__(self, mybintree=None, **kwargs):
+         # Always enable multi_instance mode for bindbapi indexing. This
+         # does not affect the local PKGDIR file layout, since that is
+         # controlled independently by FEATURES=binpkg-multi-instance.
+         # The multi_instance mode is useful for the following reasons:
+         # * binary packages with the same cpv from multiple binhosts
+         #   can be considered simultaneously
+         # * if binpkg-multi-instance is disabled, it's still possible
+         #   to properly access a PKGDIR which has binpkg-multi-instance
+         #   layout (or mixed layout)
+         fakedbapi.__init__(self, exclusive_slots=False, multi_instance=True, **kwargs)
+         self.bintree = mybintree
+         self.move_ent = mybintree.move_ent
+         # Selectively cache metadata in order to optimize dep matching.
+         self._aux_cache_keys = set(
+             [
+                 "BDEPEND",
+                 "BUILD_ID",
+                 "BUILD_TIME",
+                 "CHOST",
+                 "DEFINED_PHASES",
+                 "DEPEND",
+                 "EAPI",
+                 "IDEPEND",
+                 "IUSE",
+                 "KEYWORDS",
+                 "LICENSE",
+                 "MD5",
+                 "PDEPEND",
+                 "PROPERTIES",
+                 "PROVIDES",
+                 "RDEPEND",
+                 "repository",
+                 "REQUIRES",
+                 "RESTRICT",
+                 "SIZE",
+                 "SLOT",
+                 "USE",
+                 "_mtime_",
++                # PREFIX LOCAL
++                "EPREFIX",
+             ]
+         )
+         self._aux_cache_slot_dict = slot_dict_class(self._aux_cache_keys)
+         self._aux_cache = {}
+ 
+     @property
+     def writable(self):
+         """
+         Check if PKGDIR is writable, or permissions are sufficient
+         to create it if it does not exist yet.
+         @rtype: bool
+         @return: True if PKGDIR is writable or can be created,
+                 False otherwise
+         """
+         return os.access(first_existing(self.bintree.pkgdir), os.W_OK)
+ 
+     def match(self, *pargs, **kwargs):
+         if self.bintree and not self.bintree.populated:
+             self.bintree.populate()
+         return fakedbapi.match(self, *pargs, **kwargs)
+ 
+     def cpv_exists(self, cpv, myrepo=None):
+         if self.bintree and not self.bintree.populated:
+             self.bintree.populate()
+         return fakedbapi.cpv_exists(self, cpv)
+ 
+     def cpv_inject(self, cpv, **kwargs):
+         if not self.bintree.populated:
+             self.bintree.populate()
+         fakedbapi.cpv_inject(self, cpv, metadata=cpv._metadata, **kwargs)
+ 
+     def cpv_remove(self, cpv):
+         if not self.bintree.populated:
+             self.bintree.populate()
+         fakedbapi.cpv_remove(self, cpv)
+ 
+     def aux_get(self, mycpv, wants, myrepo=None):
+         if self.bintree and not self.bintree.populated:
+             self.bintree.populate()
+         # Support plain string for backward compatibility with API
+         # consumers (including portageq, which passes in a cpv from
+         # a command-line argument).
+         instance_key = self._instance_key(mycpv, support_string=True)
+         if not self._known_keys.intersection(wants).difference(self._aux_cache_keys):
+             aux_cache = self.cpvdict[instance_key]
+             if aux_cache is not None:
+                 return [aux_cache.get(x, "") for x in wants]
+         mysplit = mycpv.split("/")
+         mylist = []
+         add_pkg = self.bintree._additional_pkgs.get(instance_key)
+         if add_pkg is not None:
+             return add_pkg._db.aux_get(add_pkg, wants)
+         if not self.bintree._remotepkgs or not self.bintree.isremote(mycpv):
+             try:
+                 tbz2_path = self.bintree._pkg_paths[instance_key]
+             except KeyError:
+                 raise KeyError(mycpv)
+             tbz2_path = os.path.join(self.bintree.pkgdir, tbz2_path)
+             try:
+                 st = os.lstat(tbz2_path)
+             except OSError:
+                 raise KeyError(mycpv)
+             metadata_bytes = portage.xpak.tbz2(tbz2_path).get_data()
+ 
+             def getitem(k):
+                 if k == "_mtime_":
+                     return str(st[stat.ST_MTIME])
+                 if k == "SIZE":
+                     return str(st.st_size)
+                 v = metadata_bytes.get(
+                     _unicode_encode(
+                         k,
+                         encoding=_encodings["repo.content"],
+                         errors="backslashreplace",
+                     )
+                 )
+                 if v is not None:
+                     v = _unicode_decode(
+                         v, encoding=_encodings["repo.content"], errors="replace"
+                     )
+                 return v
+ 
+         else:
+             getitem = self.cpvdict[instance_key].get
+         mydata = {}
+         mykeys = wants
+         for x in mykeys:
+             myval = getitem(x)
+             # myval is None if the key doesn't exist
+             # or the tbz2 is corrupt.
+             if myval:
+                 mydata[x] = " ".join(myval.split())
+ 
+         if not mydata.setdefault("EAPI", "0"):
+             mydata["EAPI"] = "0"
+ 
+         return [mydata.get(x, "") for x in wants]
+ 
+     def aux_update(self, cpv, values):
+         if not self.bintree.populated:
+             self.bintree.populate()
+         build_id = None
+         try:
+             build_id = cpv.build_id
+         except AttributeError:
+             if self.bintree._multi_instance:
+                 # The cpv.build_id attribute is required if we are in
+                 # multi-instance mode, since otherwise we won't know
+                 # which instance to update.
+                 raise
+             else:
+                 cpv = self._instance_key(cpv, support_string=True)[0]
+                 build_id = cpv.build_id
+ 
+         tbz2path = self.bintree.getname(cpv)
+         if not os.path.exists(tbz2path):
+             raise KeyError(cpv)
+         mytbz2 = portage.xpak.tbz2(tbz2path)
+         mydata = mytbz2.get_data()
+ 
+         for k, v in values.items():
+             k = _unicode_encode(
+                 k, encoding=_encodings["repo.content"], errors="backslashreplace"
+             )
+             v = _unicode_encode(
+                 v, encoding=_encodings["repo.content"], errors="backslashreplace"
+             )
+             mydata[k] = v
+ 
+         for k, v in list(mydata.items()):
+             if not v:
+                 del mydata[k]
+         mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
+         # inject will clear stale caches via cpv_inject.
+         self.bintree.inject(cpv)
+ 
+     async def unpack_metadata(self, pkg, dest_dir, loop=None):
+         """
+         Unpack package metadata to a directory. This method is a coroutine.
+ 
+         @param pkg: package to unpack
+         @type pkg: _pkg_str or portage.config
+         @param dest_dir: destination directory
+         @type dest_dir: str
+         """
+         loop = asyncio._wrap_loop(loop)
+         if isinstance(pkg, _pkg_str):
+             cpv = pkg
+         else:
+             cpv = pkg.mycpv
+         key = self._instance_key(cpv)
+         add_pkg = self.bintree._additional_pkgs.get(key)
+         if add_pkg is not None:
+             await add_pkg._db.unpack_metadata(pkg, dest_dir, loop=loop)
+         else:
+             tbz2_file = self.bintree.getname(cpv)
+             await loop.run_in_executor(
+                 ForkExecutor(loop=loop),
+                 portage.xpak.tbz2(tbz2_file).unpackinfo,
+                 dest_dir,
+             )
+ 
+     async def unpack_contents(self, pkg, dest_dir, loop=None):
+         """
+         Unpack package contents to a directory. This method is a coroutine.
+ 
+         @param pkg: package to unpack
+         @type pkg: _pkg_str or portage.config
+         @param dest_dir: destination directory
+         @type dest_dir: str
+         """
+         loop = asyncio._wrap_loop(loop)
+         if isinstance(pkg, _pkg_str):
+             settings = self.settings
+             cpv = pkg
+         else:
+             settings = pkg
+             cpv = settings.mycpv
+ 
+         pkg_path = self.bintree.getname(cpv)
+         if pkg_path is not None:
+ 
+             extractor = BinpkgExtractorAsync(
+                 background=settings.get("PORTAGE_BACKGROUND") == "1",
+                 env=settings.environ(),
+                 features=settings.features,
+                 image_dir=dest_dir,
+                 pkg=cpv,
+                 pkg_path=pkg_path,
+                 logfile=settings.get("PORTAGE_LOG_FILE"),
+                 scheduler=SchedulerInterface(loop),
+             )
+ 
+             extractor.start()
+             await extractor.async_wait()
+             if extractor.returncode != os.EX_OK:
+                 raise PortageException("Error Extracting '{}'".format(pkg_path))
+ 
+         else:
+             instance_key = self._instance_key(cpv)
+             add_pkg = self.bintree._additional_pkgs.get(instance_key)
+             if add_pkg is None:
+                 raise portage.exception.PackageNotFound(cpv)
+             await add_pkg._db.unpack_contents(pkg, dest_dir, loop=loop)
+ 
+     def cp_list(self, *pargs, **kwargs):
+         if not self.bintree.populated:
+             self.bintree.populate()
+         return fakedbapi.cp_list(self, *pargs, **kwargs)
+ 
+     def cp_all(self, sort=False):
+         if not self.bintree.populated:
+             self.bintree.populate()
+         return fakedbapi.cp_all(self, sort=sort)
+ 
+     def cpv_all(self):
+         if not self.bintree.populated:
+             self.bintree.populate()
+         return fakedbapi.cpv_all(self)
+ 
+     def getfetchsizes(self, pkg):
+         """
+         This will raise MissingSignature if SIZE signature is not available,
+         or InvalidSignature if SIZE signature is invalid.
+         """
+ 
+         if not self.bintree.populated:
+             self.bintree.populate()
+ 
+         pkg = getattr(pkg, "cpv", pkg)
+ 
+         filesdict = {}
+         if not self.bintree.isremote(pkg):
+             pass
+         else:
+             metadata = self.bintree._remotepkgs[self._instance_key(pkg)]
+             try:
+                 size = int(metadata["SIZE"])
+             except KeyError:
+                 raise portage.exception.MissingSignature("SIZE")
+             except ValueError:
+                 raise portage.exception.InvalidSignature("SIZE: %s" % metadata["SIZE"])
+             else:
+                 filesdict[os.path.basename(self.bintree.getname(pkg))] = size
+ 
+         return filesdict
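
bindbapi lazily populates its backing tree before answering queries, which is what the populate() checks above are for. A short usage sketch against the binarytree class defined below, assuming an already-initialised portage configuration (PKGDIR and the variable names are illustrative):

    import portage
    from portage.dbapi.bintree import binarytree

    settings = portage.settings
    tree = binarytree(pkgdir=settings["PKGDIR"], settings=settings)
    tree.populate()  # scan PKGDIR; dbapi calls would otherwise trigger this lazily
    for cpv in tree.dbapi.cpv_all():
        # aux_get returns values in the same order as the requested keys
        slot, use = tree.dbapi.aux_get(cpv, ["SLOT", "USE"])
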
  
  
  class binarytree:
- 	"this tree scans for a list of all packages available in PKGDIR"
- 	def __init__(self, _unused=DeprecationWarning, pkgdir=None,
- 		virtual=DeprecationWarning, settings=None):
- 
- 		if pkgdir is None:
- 			raise TypeError("pkgdir parameter is required")
- 
- 		if settings is None:
- 			raise TypeError("settings parameter is required")
- 
- 		if _unused is not DeprecationWarning:
- 			warnings.warn("The first parameter of the "
- 				"portage.dbapi.bintree.binarytree"
- 				" constructor is now unused. Instead "
- 				"settings['ROOT'] is used.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		if virtual is not DeprecationWarning:
- 			warnings.warn("The 'virtual' parameter of the "
- 				"portage.dbapi.bintree.binarytree"
- 				" constructor is unused",
- 				DeprecationWarning, stacklevel=2)
- 
- 		if True:
- 			self.pkgdir = normalize_path(pkgdir)
- 			# NOTE: Even if binpkg-multi-instance is disabled, it's
- 			# still possible to access a PKGDIR which uses the
- 			# binpkg-multi-instance layout (or mixed layout).
- 			self._multi_instance = ("binpkg-multi-instance" in
- 				settings.features)
- 			if self._multi_instance:
- 				self._allocate_filename = self._allocate_filename_multi
- 			self.dbapi = bindbapi(self, settings=settings)
- 			self.update_ents = self.dbapi.update_ents
- 			self.move_slot_ent = self.dbapi.move_slot_ent
- 			self.populated = 0
- 			self.tree = {}
- 			self._binrepos_conf = None
- 			self._remote_has_index = False
- 			self._remotepkgs = None # remote metadata indexed by cpv
- 			self._additional_pkgs = {}
- 			self.invalids = []
- 			self.settings = settings
- 			self._pkg_paths = {}
- 			self._populating = False
- 			self._all_directory = os.path.isdir(
- 				os.path.join(self.pkgdir, "All"))
- 			self._pkgindex_version = 0
- 			self._pkgindex_hashes = ["MD5","SHA1"]
- 			self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
- 			self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
- 			self._pkgindex_keys.update(["CPV", "SIZE"])
- 			self._pkgindex_aux_keys = \
- 				["BASE_URI", "BDEPEND", "BUILD_ID", "BUILD_TIME", "CHOST",
- 				"DEFINED_PHASES", "DEPEND", "DESCRIPTION", "EAPI", "FETCHCOMMAND",
- 				"IDEPEND", "IUSE", "KEYWORDS", "LICENSE", "PDEPEND",
- 				"PKGINDEX_URI", "PROPERTIES", "PROVIDES",
- 				"RDEPEND", "repository", "REQUIRES", "RESTRICT", "RESUMECOMMAND",
- 				"SIZE", "SLOT", "USE", "EPREFIX"]
- 			self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
- 			self._pkgindex_use_evaluated_keys = \
- 				("BDEPEND", "DEPEND", "IDEPEND", "LICENSE", "RDEPEND",
- 				"PDEPEND", "PROPERTIES", "RESTRICT")
- 			self._pkgindex_header = None
- 			self._pkgindex_header_keys = set([
- 				"ACCEPT_KEYWORDS", "ACCEPT_LICENSE",
- 				"ACCEPT_PROPERTIES", "ACCEPT_RESTRICT", "CBUILD",
- 				"CONFIG_PROTECT", "CONFIG_PROTECT_MASK", "FEATURES",
- 				"GENTOO_MIRRORS", "INSTALL_MASK", "IUSE_IMPLICIT", "USE",
- 				"USE_EXPAND", "USE_EXPAND_HIDDEN", "USE_EXPAND_IMPLICIT",
- 				"USE_EXPAND_UNPREFIXED",
- 				"EPREFIX"])
- 			self._pkgindex_default_pkg_data = {
- 				"BDEPEND" : "",
- 				"BUILD_ID"           : "",
- 				"BUILD_TIME"         : "",
- 				"DEFINED_PHASES"     : "",
- 				"DEPEND"  : "",
- 				"EAPI"    : "0",
- 				"IDEPEND" : "",
- 				"IUSE"    : "",
- 				"KEYWORDS": "",
- 				"LICENSE" : "",
- 				"PATH"    : "",
- 				"PDEPEND" : "",
- 				"PROPERTIES" : "",
- 				"PROVIDES": "",
- 				"RDEPEND" : "",
- 				"REQUIRES": "",
- 				"RESTRICT": "",
- 				"SLOT"    : "0",
- 				"USE"     : "",
- 			}
- 			self._pkgindex_inherited_keys = ["CHOST", "repository", "EPREFIX"]
- 
- 			# Populate the header with appropriate defaults.
- 			self._pkgindex_default_header_data = {
- 				"CHOST"        : self.settings.get("CHOST", ""),
- 				"repository"   : "",
- 			}
- 
- 			self._pkgindex_translated_keys = (
- 				("DESCRIPTION"   ,   "DESC"),
- 				("_mtime_"       ,   "MTIME"),
- 				("repository"    ,   "REPO"),
- 			)
- 
- 			self._pkgindex_allowed_pkg_keys = set(chain(
- 				self._pkgindex_keys,
- 				self._pkgindex_aux_keys,
- 				self._pkgindex_hashes,
- 				self._pkgindex_default_pkg_data,
- 				self._pkgindex_inherited_keys,
- 				chain(*self._pkgindex_translated_keys)
- 			))
- 
- 	@property
- 	def root(self):
- 		warnings.warn("The root attribute of "
- 			"portage.dbapi.bintree.binarytree"
- 			" is deprecated. Use "
- 			"settings['ROOT'] instead.",
- 			DeprecationWarning, stacklevel=3)
- 		return self.settings['ROOT']
- 
- 	def move_ent(self, mylist, repo_match=None):
- 		if not self.populated:
- 			self.populate()
- 		origcp = mylist[1]
- 		newcp = mylist[2]
- 		# sanity check
- 		for atom in (origcp, newcp):
- 			if not isjustname(atom):
- 				raise InvalidPackageName(str(atom))
- 		mynewcat = catsplit(newcp)[0]
- 		origmatches=self.dbapi.cp_list(origcp)
- 		moves = 0
- 		if not origmatches:
- 			return moves
- 		for mycpv in origmatches:
- 			mycpv_cp = mycpv.cp
- 			if mycpv_cp != origcp:
- 				# Ignore PROVIDE virtual match.
- 				continue
- 			if repo_match is not None \
- 				and not repo_match(mycpv.repo):
- 				continue
- 
- 			# Use isvalidatom() to check if this move is valid for the
- 			# EAPI (characters allowed in package names may vary).
- 			if not isvalidatom(newcp, eapi=mycpv.eapi):
- 				continue
- 
- 			mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
- 			myoldpkg = catsplit(mycpv)[1]
- 			mynewpkg = catsplit(mynewcpv)[1]
- 
- 			# If this update has already been applied to the same
- 			# package build then silently continue.
- 			applied = False
- 			for maybe_applied in self.dbapi.match('={}'.format(mynewcpv)):
- 				if maybe_applied.build_time == mycpv.build_time:
- 					applied = True
- 					break
- 
- 			if applied:
- 				continue
- 
- 			if (mynewpkg != myoldpkg) and self.dbapi.cpv_exists(mynewcpv):
- 				writemsg(_("!!! Cannot update binary: Destination exists.\n"),
- 					noiselevel=-1)
- 				writemsg("!!! "+mycpv+" -> "+mynewcpv+"\n", noiselevel=-1)
- 				continue
- 
- 			tbz2path = self.getname(mycpv)
- 			if os.path.exists(tbz2path) and not os.access(tbz2path,os.W_OK):
- 				writemsg(_("!!! Cannot update readonly binary: %s\n") % mycpv,
- 					noiselevel=-1)
- 				continue
- 
- 			moves += 1
- 			mytbz2 = portage.xpak.tbz2(tbz2path)
- 			mydata = mytbz2.get_data()
- 			updated_items = update_dbentries([mylist], mydata, parent=mycpv)
- 			mydata.update(updated_items)
- 			mydata[b'PF'] = \
- 				_unicode_encode(mynewpkg + "\n",
- 				encoding=_encodings['repo.content'])
- 			mydata[b'CATEGORY'] = \
- 				_unicode_encode(mynewcat + "\n",
- 				encoding=_encodings['repo.content'])
- 			if mynewpkg != myoldpkg:
- 				ebuild_data = mydata.pop(_unicode_encode(myoldpkg + '.ebuild',
- 					encoding=_encodings['repo.content']), None)
- 				if ebuild_data is not None:
- 					mydata[_unicode_encode(mynewpkg + '.ebuild',
- 						encoding=_encodings['repo.content'])] = ebuild_data
- 
- 			metadata = self.dbapi._aux_cache_slot_dict()
- 			for k in self.dbapi._aux_cache_keys:
- 				v = mydata.get(_unicode_encode(k))
- 				if v is not None:
- 					v = _unicode_decode(v)
- 					metadata[k] = " ".join(v.split())
- 
- 			# Create a copy of the old version of the package and
- 			# apply the update to it. Leave behind the old version,
- 			# assuming that it will be deleted by eclean-pkg when its
- 			# time comes.
- 			mynewcpv = _pkg_str(mynewcpv, metadata=metadata, db=self.dbapi)
- 			update_path = self.getname(mynewcpv, allocate_new=True) + ".partial"
- 			self._ensure_dir(os.path.dirname(update_path))
- 			update_path_lock = None
- 			try:
- 				update_path_lock = lockfile(update_path, wantnewlockfile=True)
- 				copyfile(tbz2path, update_path)
- 				mytbz2 = portage.xpak.tbz2(update_path)
- 				mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
- 				self.inject(mynewcpv, filename=update_path)
- 			finally:
- 				if update_path_lock is not None:
- 					try:
- 						os.unlink(update_path)
- 					except OSError:
- 						pass
- 					unlockfile(update_path_lock)
- 
- 		return moves
- 
- 	def prevent_collision(self, cpv):
- 		warnings.warn("The "
- 			"portage.dbapi.bintree.binarytree.prevent_collision "
- 			"method is deprecated.",
- 			DeprecationWarning, stacklevel=2)
- 
- 	def _ensure_dir(self, path):
- 		"""
- 		Create the specified directory. Also, copy gid and group mode
- 		bits from self.pkgdir if possible.
- 		@param path: Absolute path of the directory to be created.
- 		@type path: String
- 		"""
- 		try:
- 			pkgdir_st = os.stat(self.pkgdir)
- 		except OSError:
- 			ensure_dirs(path)
- 			return
- 		pkgdir_gid = pkgdir_st.st_gid
- 		pkgdir_grp_mode = 0o2070 & pkgdir_st.st_mode
- 		try:
- 			ensure_dirs(path, gid=pkgdir_gid, mode=pkgdir_grp_mode, mask=0)
- 		except PortageException:
- 			if not os.path.isdir(path):
- 				raise
- 
- 	def _file_permissions(self, path):
- 		try:
- 			pkgdir_st = os.stat(self.pkgdir)
- 		except OSError:
- 			pass
- 		else:
- 			pkgdir_gid = pkgdir_st.st_gid
- 			pkgdir_grp_mode = 0o0060 & pkgdir_st.st_mode
- 			try:
- 				portage.util.apply_permissions(path, gid=pkgdir_gid,
- 					mode=pkgdir_grp_mode, mask=0)
- 			except PortageException:
- 				pass
- 
- 	def populate(self, getbinpkgs=False, getbinpkg_refresh=True, add_repos=()):
- 		"""
- 		Populates the binarytree with package metadata.
- 
- 		@param getbinpkgs: include remote packages
- 		@type getbinpkgs: bool
- 		@param getbinpkg_refresh: attempt to refresh the cache
- 			of remote package metadata if getbinpkgs is also True
- 		@type getbinpkg_refresh: bool
- 		@param add_repos: additional binary package repositories
- 		@type add_repos: sequence
- 		"""
- 
- 		if self._populating:
- 			return
- 
- 		if not os.path.isdir(self.pkgdir) and not (getbinpkgs or add_repos):
- 			self.populated = True
- 			return
- 
- 		# Clear all caches in case populate is called multiple times
- 		# as may be the case when _global_updates calls populate()
- 		# prior to performing package moves since it only wants to
- 		# operate on local packages (getbinpkgs=0).
- 		self._remotepkgs = None
- 
- 		self._populating = True
- 		try:
- 			update_pkgindex = self._populate_local(
- 				reindex='pkgdir-index-trusted' not in self.settings.features)
- 
- 			if update_pkgindex and self.dbapi.writable:
- 				# If the Packages file needs to be updated, then _populate_local
- 				# needs to be called once again while the file is locked, so
- 				# that changes made by a concurrent process cannot be lost. This
- 				# case is avoided when possible, in order to minimize lock
- 				# contention.
- 				pkgindex_lock = None
- 				try:
- 					pkgindex_lock = lockfile(self._pkgindex_file,
- 						wantnewlockfile=True)
- 					update_pkgindex = self._populate_local()
- 					if update_pkgindex:
- 						self._pkgindex_write(update_pkgindex)
- 				finally:
- 					if pkgindex_lock:
- 						unlockfile(pkgindex_lock)
- 
- 			if add_repos:
- 				self._populate_additional(add_repos)
- 
- 			if getbinpkgs:
- 				config_path = os.path.join(self.settings['PORTAGE_CONFIGROOT'], BINREPOS_CONF_FILE)
- 				self._binrepos_conf = BinRepoConfigLoader((config_path,), self.settings)
- 				if not self._binrepos_conf:
- 					writemsg(_("!!! %s is missing (or PORTAGE_BINHOST is unset), but use is requested.\n") % (config_path,),
- 						noiselevel=-1)
- 				else:
- 					self._populate_remote(getbinpkg_refresh=getbinpkg_refresh)
- 
- 		finally:
- 			self._populating = False
- 
- 		self.populated = True
- 
- 	def _populate_local(self, reindex=True):
- 		"""
- 		Populates the binarytree with local package metadata.
- 
- 		@param reindex: detect added / modified / removed packages and
- 			regenerate the index file if necessary
- 		@type reindex: bool
- 		"""
- 		self.dbapi.clear()
- 		_instance_key = self.dbapi._instance_key
- 		# In order to minimize disk I/O, we never compute digests here.
- 		# Therefore we exclude hashes from the minimum_keys, so that
- 		# the Packages file will not be needlessly re-written due to
- 		# missing digests.
- 		minimum_keys = self._pkgindex_keys.difference(self._pkgindex_hashes)
- 		if True:
- 			pkg_paths = {}
- 			self._pkg_paths = pkg_paths
- 			dir_files = {}
- 			if reindex:
- 				for parent, dir_names, file_names in os.walk(self.pkgdir):
- 					relative_parent = parent[len(self.pkgdir)+1:]
- 					dir_files[relative_parent] = file_names
- 
- 			pkgindex = self._load_pkgindex()
- 			if not self._pkgindex_version_supported(pkgindex):
- 				pkgindex = self._new_pkgindex()
- 			metadata = {}
- 			basename_index = {}
- 			for d in pkgindex.packages:
- 				cpv = _pkg_str(d["CPV"], metadata=d,
- 					settings=self.settings, db=self.dbapi)
- 				d["CPV"] = cpv
- 				metadata[_instance_key(cpv)] = d
- 				path = d.get("PATH")
- 				if not path:
- 					path = cpv + ".tbz2"
- 
- 				if reindex:
- 					basename = os.path.basename(path)
- 					basename_index.setdefault(basename, []).append(d)
- 				else:
- 					instance_key = _instance_key(cpv)
- 					pkg_paths[instance_key] = path
- 					self.dbapi.cpv_inject(cpv)
- 
- 			update_pkgindex = False
- 			for mydir, file_names in dir_files.items():
- 				try:
- 					mydir = _unicode_decode(mydir,
- 						encoding=_encodings["fs"], errors="strict")
- 				except UnicodeDecodeError:
- 					continue
- 				for myfile in file_names:
- 					try:
- 						myfile = _unicode_decode(myfile,
- 							encoding=_encodings["fs"], errors="strict")
- 					except UnicodeDecodeError:
- 						continue
- 					if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
- 						continue
- 					mypath = os.path.join(mydir, myfile)
- 					full_path = os.path.join(self.pkgdir, mypath)
- 					s = os.lstat(full_path)
- 
- 					if not stat.S_ISREG(s.st_mode):
- 						continue
- 
- 					# Validate data from the package index and try to avoid
- 					# reading the xpak if possible.
- 					possibilities = basename_index.get(myfile)
- 					if possibilities:
- 						match = None
- 						for d in possibilities:
- 							try:
- 								if int(d["_mtime_"]) != s[stat.ST_MTIME]:
- 									continue
- 							except (KeyError, ValueError):
- 								continue
- 							try:
- 								if int(d["SIZE"]) != int(s.st_size):
- 									continue
- 							except (KeyError, ValueError):
- 								continue
- 							if not minimum_keys.difference(d):
- 								match = d
- 								break
- 						if match:
- 							mycpv = match["CPV"]
- 							instance_key = _instance_key(mycpv)
- 							pkg_paths[instance_key] = mypath
- 							# update the path if the package has been moved
- 							oldpath = d.get("PATH")
- 							if oldpath and oldpath != mypath:
- 								update_pkgindex = True
- 							# Omit PATH if it is the default path for
- 							# the current Packages format version.
- 							if mypath != mycpv + ".tbz2":
- 								d["PATH"] = mypath
- 								if not oldpath:
- 									update_pkgindex = True
- 							else:
- 								d.pop("PATH", None)
- 								if oldpath:
- 									update_pkgindex = True
- 							self.dbapi.cpv_inject(mycpv)
- 							continue
- 					if not os.access(full_path, os.R_OK):
- 						writemsg(_("!!! Permission denied to read " \
- 							"binary package: '%s'\n") % full_path,
- 							noiselevel=-1)
- 						self.invalids.append(myfile[:-5])
- 						continue
- 					pkg_metadata = self._read_metadata(full_path, s,
- 						keys=chain(self.dbapi._aux_cache_keys,
- 						("PF", "CATEGORY")))
- 					mycat = pkg_metadata.get("CATEGORY", "")
- 					mypf = pkg_metadata.get("PF", "")
- 					slot = pkg_metadata.get("SLOT", "")
- 					mypkg = myfile[:-5]
- 					if not mycat or not mypf or not slot:
- 						#old-style or corrupt package
- 						writemsg(_("\n!!! Invalid binary package: '%s'\n") % full_path,
- 							noiselevel=-1)
- 						missing_keys = []
- 						if not mycat:
- 							missing_keys.append("CATEGORY")
- 						if not mypf:
- 							missing_keys.append("PF")
- 						if not slot:
- 							missing_keys.append("SLOT")
- 						msg = []
- 						if missing_keys:
- 							missing_keys.sort()
- 							msg.append(_("Missing metadata key(s): %s.") % \
- 								", ".join(missing_keys))
- 						msg.append(_(" This binary package is not " \
- 							"recoverable and should be deleted."))
- 						for line in textwrap.wrap("".join(msg), 72):
- 							writemsg("!!! %s\n" % line, noiselevel=-1)
- 						self.invalids.append(mypkg)
- 						continue
- 
- 					multi_instance = False
- 					invalid_name = False
- 					build_id = None
- 					if myfile.endswith(".xpak"):
- 						multi_instance = True
- 						build_id = self._parse_build_id(myfile)
- 						if build_id < 1:
- 							invalid_name = True
- 						elif myfile != "%s-%s.xpak" % (
- 							mypf, build_id):
- 							invalid_name = True
- 						else:
- 							mypkg = mypkg[:-len(str(build_id))-1]
- 					elif myfile != mypf + ".tbz2":
- 						invalid_name = True
- 
- 					if invalid_name:
- 						writemsg(_("\n!!! Binary package name is "
- 							"invalid: '%s'\n") % full_path,
- 							noiselevel=-1)
- 						continue
- 
- 					if pkg_metadata.get("BUILD_ID"):
- 						try:
- 							build_id = int(pkg_metadata["BUILD_ID"])
- 						except ValueError:
- 							writemsg(_("!!! Binary package has "
- 								"invalid BUILD_ID: '%s'\n") %
- 								full_path, noiselevel=-1)
- 							continue
- 					else:
- 						build_id = None
- 
- 					if multi_instance:
- 						name_split = catpkgsplit("%s/%s" %
- 							(mycat, mypf))
- 						if (name_split is None or
- 							tuple(catsplit(mydir)) != name_split[:2]):
- 							continue
- 					elif mycat != mydir and mydir != "All":
- 						continue
- 					if mypkg != mypf.strip():
- 						continue
- 					mycpv = mycat + "/" + mypkg
- 					if not self.dbapi._category_re.match(mycat):
- 						writemsg(_("!!! Binary package has an " \
- 							"unrecognized category: '%s'\n") % full_path,
- 							noiselevel=-1)
- 						writemsg(_("!!! '%s' has a category that is not" \
- 							" listed in %setc/portage/categories\n") % \
- 							(mycpv, self.settings["PORTAGE_CONFIGROOT"]),
- 							noiselevel=-1)
- 						continue
- 					if build_id is not None:
- 						pkg_metadata["BUILD_ID"] = str(build_id)
- 					pkg_metadata["SIZE"] = str(s.st_size)
- 					# Discard items used only for validation above.
- 					pkg_metadata.pop("CATEGORY")
- 					pkg_metadata.pop("PF")
- 					mycpv = _pkg_str(mycpv,
- 						metadata=self.dbapi._aux_cache_slot_dict(pkg_metadata),
- 						db=self.dbapi)
- 					pkg_paths[_instance_key(mycpv)] = mypath
- 					self.dbapi.cpv_inject(mycpv)
- 					update_pkgindex = True
- 					d = metadata.get(_instance_key(mycpv),
- 						pkgindex._pkg_slot_dict())
- 					if d:
- 						try:
- 							if int(d["_mtime_"]) != s[stat.ST_MTIME]:
- 								d.clear()
- 						except (KeyError, ValueError):
- 							d.clear()
- 					if d:
- 						try:
- 							if int(d["SIZE"]) != int(s.st_size):
- 								d.clear()
- 						except (KeyError, ValueError):
- 							d.clear()
- 
- 					for k in self._pkgindex_allowed_pkg_keys:
- 						v = pkg_metadata.get(k)
- 						if v:
- 							d[k] = v
- 					d["CPV"] = mycpv
- 
- 					try:
- 						self._eval_use_flags(mycpv, d)
- 					except portage.exception.InvalidDependString:
- 						writemsg(_("!!! Invalid binary package: '%s'\n") % \
- 							self.getname(mycpv), noiselevel=-1)
- 						self.dbapi.cpv_remove(mycpv)
- 						del pkg_paths[_instance_key(mycpv)]
- 
- 					# record location if it's non-default
- 					if mypath != mycpv + ".tbz2":
- 						d["PATH"] = mypath
- 					else:
- 						d.pop("PATH", None)
- 					metadata[_instance_key(mycpv)] = d
- 
- 			if reindex:
- 				for instance_key in list(metadata):
- 					if instance_key not in pkg_paths:
- 						del metadata[instance_key]
- 
- 			if update_pkgindex:
- 				del pkgindex.packages[:]
- 				pkgindex.packages.extend(iter(metadata.values()))
- 				self._update_pkgindex_header(pkgindex.header)
- 
- 			self._pkgindex_header = {}
- 			self._merge_pkgindex_header(pkgindex.header,
- 				self._pkgindex_header)
- 
- 		return pkgindex if update_pkgindex else None
- 
- 	def _populate_remote(self, getbinpkg_refresh=True):
- 
- 		self._remote_has_index = False
- 		self._remotepkgs = {}
- 		# Order by descending priority.
- 		for repo in reversed(list(self._binrepos_conf.values())):
- 			base_url = repo.sync_uri
- 			parsed_url = urlparse(base_url)
- 			host = parsed_url.netloc
- 			port = parsed_url.port
- 			user = None
- 			passwd = None
- 			user_passwd = ""
- 			if "@" in host:
- 				user, host = host.split("@", 1)
- 				user_passwd = user + "@"
- 				if ":" in user:
- 					user, passwd = user.split(":", 1)
- 
- 			if port is not None:
- 				port_str = ":%s" % (port,)
- 				if host.endswith(port_str):
- 					host = host[:-len(port_str)]
- 			pkgindex_file = os.path.join(self.settings["EROOT"], CACHE_PATH, "binhost",
- 				host, parsed_url.path.lstrip("/"), "Packages")
- 			pkgindex = self._new_pkgindex()
- 			try:
- 				f = io.open(_unicode_encode(pkgindex_file,
- 					encoding=_encodings['fs'], errors='strict'),
- 					mode='r', encoding=_encodings['repo.content'],
- 					errors='replace')
- 				try:
- 					pkgindex.read(f)
- 				finally:
- 					f.close()
- 			except EnvironmentError as e:
- 				if e.errno != errno.ENOENT:
- 					raise
- 			local_timestamp = pkgindex.header.get("TIMESTAMP", None)
- 			try:
- 				download_timestamp = \
- 					float(pkgindex.header.get("DOWNLOAD_TIMESTAMP", 0))
- 			except ValueError:
- 				download_timestamp = 0
- 			remote_timestamp = None
- 			rmt_idx = self._new_pkgindex()
- 			proc = None
- 			tmp_filename = None
- 			try:
- 				# urlparse.urljoin() only works correctly with recognized
- 				# protocols and requires the base url to have a trailing
- 				# slash, so join manually...
- 				url = base_url.rstrip("/") + "/Packages"
- 				f = None
- 
- 				if not getbinpkg_refresh and local_timestamp:
- 					raise UseCachedCopyOfRemoteIndex()
- 
- 				try:
- 					ttl = float(pkgindex.header.get("TTL", 0))
- 				except ValueError:
- 					pass
- 				else:
- 					if download_timestamp and ttl and \
- 						download_timestamp + ttl > time.time():
- 						raise UseCachedCopyOfRemoteIndex()
- 
- 				# Set proxy settings for _urlopen -> urllib_request
- 				proxies = {}
- 				for proto in ('http', 'https'):
- 					value = self.settings.get(proto + '_proxy')
- 					if value is not None:
- 						proxies[proto] = value
- 
- 				# Don't use urlopen for https, unless
- 				# PEP 476 is supported (bug #469888).
- 				if repo.fetchcommand is None and (parsed_url.scheme not in ('https',) or _have_pep_476()):
- 					try:
- 						f = _urlopen(url, if_modified_since=local_timestamp, proxies=proxies)
- 						if hasattr(f, 'headers') and f.headers.get('timestamp', ''):
- 							remote_timestamp = f.headers.get('timestamp')
- 					except IOError as err:
- 						if hasattr(err, 'code') and err.code == 304: # not modified (since local_timestamp)
- 							raise UseCachedCopyOfRemoteIndex()
- 
- 						if parsed_url.scheme in ('ftp', 'http', 'https'):
- 							# This protocol is supposedly supported by urlopen,
- 							# so apparently there's a problem with the url
- 							# or a bug in urlopen.
- 							if self.settings.get("PORTAGE_DEBUG", "0") != "0":
- 								traceback.print_exc()
- 
- 							raise
- 					except ValueError:
- 						raise ParseError("Invalid Portage BINHOST value '%s'"
- 										 % url.lstrip())
- 
- 				if f is None:
- 
- 					path = parsed_url.path.rstrip("/") + "/Packages"
- 
- 					if repo.fetchcommand is None and parsed_url.scheme == 'ssh':
- 						# Use a pipe so that we can terminate the download
- 						# early if we detect that the TIMESTAMP header
- 						# matches that of the cached Packages file.
- 						ssh_args = ['ssh']
- 						if port is not None:
- 							ssh_args.append("-p%s" % (port,))
- 						# NOTE: shlex evaluates embedded quotes
- 						ssh_args.extend(portage.util.shlex_split(
- 							self.settings.get("PORTAGE_SSH_OPTS", "")))
- 						ssh_args.append(user_passwd + host)
- 						ssh_args.append('--')
- 						ssh_args.append('cat')
- 						ssh_args.append(path)
- 
- 						proc = subprocess.Popen(ssh_args,
- 							stdout=subprocess.PIPE)
- 						f = proc.stdout
- 					else:
- 						if repo.fetchcommand is None:
- 							setting = 'FETCHCOMMAND_' + parsed_url.scheme.upper()
- 							fcmd = self.settings.get(setting)
- 							if not fcmd:
- 								fcmd = self.settings.get('FETCHCOMMAND')
- 								if not fcmd:
- 									raise EnvironmentError("FETCHCOMMAND is unset")
- 						else:
- 							fcmd = repo.fetchcommand
- 
- 						fd, tmp_filename = tempfile.mkstemp()
- 						tmp_dirname, tmp_basename = os.path.split(tmp_filename)
- 						os.close(fd)
- 
- 						fcmd_vars = {
- 							"DISTDIR": tmp_dirname,
- 							"FILE": tmp_basename,
- 							"URI": url
- 						}
- 
- 						for k in ("PORTAGE_SSH_OPTS",):
- 							v = self.settings.get(k)
- 							if v is not None:
- 								fcmd_vars[k] = v
- 
- 						success = portage.getbinpkg.file_get(
- 							fcmd=fcmd, fcmd_vars=fcmd_vars)
- 						if not success:
- 							raise EnvironmentError("%s failed" % (setting,))
- 						f = open(tmp_filename, 'rb')
- 
- 				f_dec = codecs.iterdecode(f,
- 					_encodings['repo.content'], errors='replace')
- 				try:
- 					rmt_idx.readHeader(f_dec)
- 					if not remote_timestamp: # in case it had not been read from HTTP header
- 						remote_timestamp = rmt_idx.header.get("TIMESTAMP", None)
- 					if not remote_timestamp:
- 						# no timestamp in the header, something's wrong
- 						pkgindex = None
- 						writemsg(_("\n\n!!! Binhost package index " \
- 						"has no TIMESTAMP field.\n"), noiselevel=-1)
- 					else:
- 						if not self._pkgindex_version_supported(rmt_idx):
- 							writemsg(_("\n\n!!! Binhost package index version" \
- 							" is not supported: '%s'\n") % \
- 							rmt_idx.header.get("VERSION"), noiselevel=-1)
- 							pkgindex = None
- 						elif local_timestamp != remote_timestamp:
- 							rmt_idx.readBody(f_dec)
- 							pkgindex = rmt_idx
- 				finally:
- 					# Timeout after 5 seconds, in case close() blocks
- 					# indefinitely (see bug #350139).
- 					try:
- 						try:
- 							AlarmSignal.register(5)
- 							f.close()
- 						finally:
- 							AlarmSignal.unregister()
- 					except AlarmSignal:
- 						writemsg("\n\n!!! %s\n" % \
- 							_("Timed out while closing connection to binhost"),
- 							noiselevel=-1)
- 			except UseCachedCopyOfRemoteIndex:
- 				writemsg_stdout("\n")
- 				writemsg_stdout(
- 					colorize("GOOD", _("Local copy of remote index is up-to-date and will be used.")) + \
- 					"\n")
- 				rmt_idx = pkgindex
- 			except EnvironmentError as e:
- 				# This includes URLError which is raised for SSL
- 				# certificate errors when PEP 476 is supported.
- 				writemsg(_("\n\n!!! Error fetching binhost package" \
- 					" info from '%s'\n") % _hide_url_passwd(base_url))
- 				# With Python 2, the EnvironmentError message may
- 				# contain bytes or unicode, so use str to ensure
- 				# safety with all locales (bug #532784).
- 				try:
- 					error_msg = str(e)
- 				except UnicodeDecodeError as uerror:
- 					error_msg = str(uerror.object,
- 						encoding='utf_8', errors='replace')
- 				writemsg("!!! %s\n\n" % error_msg)
- 				del e
- 				pkgindex = None
- 			if proc is not None:
- 				if proc.poll() is None:
- 					proc.kill()
- 					proc.wait()
- 				proc = None
- 			if tmp_filename is not None:
- 				try:
- 					os.unlink(tmp_filename)
- 				except OSError:
- 					pass
- 			if pkgindex is rmt_idx:
- 				pkgindex.modified = False # don't update the header
- 				pkgindex.header["DOWNLOAD_TIMESTAMP"] = "%d" % time.time()
- 				try:
- 					ensure_dirs(os.path.dirname(pkgindex_file))
- 					f = atomic_ofstream(pkgindex_file)
- 					pkgindex.write(f)
- 					f.close()
- 				except (IOError, PortageException):
- 					if os.access(os.path.dirname(pkgindex_file), os.W_OK):
- 						raise
- 					# The current user doesn't have permission to cache the
- 					# file, but that's alright.
- 			if pkgindex:
- 				remote_base_uri = pkgindex.header.get("URI", base_url)
- 				for d in pkgindex.packages:
- 					cpv = _pkg_str(d["CPV"], metadata=d,
- 						settings=self.settings, db=self.dbapi)
- 					# Local package instances override remote instances
- 					# with the same instance_key.
- 					if self.dbapi.cpv_exists(cpv):
- 						continue
- 
- 					d["CPV"] = cpv
- 					d["BASE_URI"] = remote_base_uri
- 					d["PKGINDEX_URI"] = url
- 					# FETCHCOMMAND and RESUMECOMMAND may be specified
- 					# by binrepos.conf, and otherwise ensure that they
- 					# do not propagate from the Packages index since
- 					# it may be unsafe to execute remotely specified
- 					# commands.
- 					if repo.fetchcommand is None:
- 						d.pop('FETCHCOMMAND', None)
- 					else:
- 						d['FETCHCOMMAND'] = repo.fetchcommand
- 					if repo.resumecommand is None:
- 						d.pop('RESUMECOMMAND', None)
- 					else:
- 						d['RESUMECOMMAND'] = repo.resumecommand
- 					self._remotepkgs[self.dbapi._instance_key(cpv)] = d
- 					self.dbapi.cpv_inject(cpv)
- 
- 				self._remote_has_index = True
- 				self._merge_pkgindex_header(pkgindex.header,
- 					self._pkgindex_header)
- 
- 	def _populate_additional(self, repos):
- 		for repo in repos:
- 			aux_keys = list(set(chain(repo._aux_cache_keys, repo._pkg_str_aux_keys)))
- 			for cpv in repo.cpv_all():
- 				metadata = dict(zip(aux_keys, repo.aux_get(cpv, aux_keys)))
- 				pkg = _pkg_str(cpv, metadata=metadata, settings=repo.settings, db=repo)
- 				instance_key = self.dbapi._instance_key(pkg)
- 				self._additional_pkgs[instance_key] = pkg
- 				self.dbapi.cpv_inject(pkg)
- 
- 	def inject(self, cpv, filename=None):
- 		"""Add a freshly built package to the database.  This updates
- 		$PKGDIR/Packages with the new package metadata (including MD5).
- 		@param cpv: The cpv of the new package to inject
- 		@type cpv: string
- 		@param filename: File path of the package to inject, or None if it's
- 			already in the location returned by getname()
- 		@type filename: string
- 		@rtype: _pkg_str or None
- 		@return: A _pkg_str instance on success, or None on failure.
- 		"""
- 		mycat, mypkg = catsplit(cpv)
- 		if not self.populated:
- 			self.populate()
- 		if filename is None:
- 			full_path = self.getname(cpv)
- 		else:
- 			full_path = filename
- 		try:
- 			s = os.stat(full_path)
- 		except OSError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 			del e
- 			writemsg(_("!!! Binary package does not exist: '%s'\n") % full_path,
- 				noiselevel=-1)
- 			return
- 		metadata = self._read_metadata(full_path, s)
- 		invalid_depend = False
- 		try:
- 			self._eval_use_flags(cpv, metadata)
- 		except portage.exception.InvalidDependString:
- 			invalid_depend = True
- 		if invalid_depend or not metadata.get("SLOT"):
- 			writemsg(_("!!! Invalid binary package: '%s'\n") % full_path,
- 				noiselevel=-1)
- 			return
- 
- 		fetched = False
- 		try:
- 			build_id = cpv.build_id
- 		except AttributeError:
- 			build_id = None
- 		else:
- 			instance_key = self.dbapi._instance_key(cpv)
- 			if instance_key in self.dbapi.cpvdict:
- 				# This means we've been called by aux_update (or
- 				# similar). The instance key typically changes (due to
- 				# file modification), so we need to discard existing
- 				# instance key references.
- 				self.dbapi.cpv_remove(cpv)
- 				self._pkg_paths.pop(instance_key, None)
- 				if self._remotepkgs is not None:
- 					fetched = self._remotepkgs.pop(instance_key, None)
- 
- 		cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings,
- 			db=self.dbapi)
- 
- 		# Reread the Packages index (in case it's been changed by another
- 		# process) and then update it, all while holding a lock.
- 		pkgindex_lock = None
- 		try:
- 			os.makedirs(self.pkgdir, exist_ok=True)
- 			pkgindex_lock = lockfile(self._pkgindex_file,
- 				wantnewlockfile=1)
- 			if filename is not None:
- 				new_filename = self.getname(cpv, allocate_new=True)
- 				try:
- 					samefile = os.path.samefile(filename, new_filename)
- 				except OSError:
- 					samefile = False
- 				if not samefile:
- 					self._ensure_dir(os.path.dirname(new_filename))
- 					_movefile(filename, new_filename, mysettings=self.settings)
- 				full_path = new_filename
- 
- 			basename = os.path.basename(full_path)
- 			pf = catsplit(cpv)[1]
- 			if (build_id is None and not fetched and
- 				basename.endswith(".xpak")):
- 				# Apply the newly assigned BUILD_ID. This is intended
- 				# to occur only for locally built packages. If the
- 				# package was fetched, we want to preserve its
- 				# attributes, so that we can later distinguish that it
- 				# is identical to its remote counterpart.
- 				build_id = self._parse_build_id(basename)
- 				metadata["BUILD_ID"] = str(build_id)
- 				cpv = _pkg_str(cpv, metadata=metadata,
- 					settings=self.settings, db=self.dbapi)
- 				binpkg = portage.xpak.tbz2(full_path)
- 				binary_data = binpkg.get_data()
- 				binary_data[b"BUILD_ID"] = _unicode_encode(
- 					metadata["BUILD_ID"])
- 				binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
- 
- 			self._file_permissions(full_path)
- 			pkgindex = self._load_pkgindex()
- 			if not self._pkgindex_version_supported(pkgindex):
- 				pkgindex = self._new_pkgindex()
- 
- 			d = self._inject_file(pkgindex, cpv, full_path)
- 			self._update_pkgindex_header(pkgindex.header)
- 			self._pkgindex_write(pkgindex)
- 
- 		finally:
- 			if pkgindex_lock:
- 				unlockfile(pkgindex_lock)
- 
- 		# This is used to record BINPKGMD5 in the installed package
- 		# database, for a package that has just been built.
- 		cpv._metadata["MD5"] = d["MD5"]
- 
- 		return cpv
- 
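For reference, a typical call into inject() right after a build finishes could look like the commented sketch below; the "bintree" name and the file path are illustrative only, the signature itself comes from the docstring above.

    # Hypothetical usage (names are illustrative, not from the source):
    #     cpv = bintree.inject("app-misc/foo-1.0",
    #                          filename="/tmp/foo-1.0.tbz2")
    #     if cpv is None:
    #         ...  # the package was missing or had invalid metadata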
- 	def _read_metadata(self, filename, st, keys=None):
- 		"""
- 		Read metadata from a binary package. The returned metadata
- 		dictionary will contain empty strings for any values that
- 		are undefined (this is important because the _pkg_str class
- 		distinguishes between missing and undefined values).
- 
- 		@param filename: File path of the binary package
- 		@type filename: string
- 		@param st: stat result for the binary package
- 		@type st: os.stat_result
- 		@param keys: optional list of specific metadata keys to retrieve
- 		@type keys: iterable
- 		@rtype: dict
- 		@return: package metadata
- 		"""
- 		if keys is None:
- 			keys = self.dbapi._aux_cache_keys
- 			metadata = self.dbapi._aux_cache_slot_dict()
- 		else:
- 			metadata = {}
- 		binary_metadata = portage.xpak.tbz2(filename).get_data()
- 		for k in keys:
- 			if k == "_mtime_":
- 				metadata[k] = str(st[stat.ST_MTIME])
- 			elif k == "SIZE":
- 				metadata[k] = str(st.st_size)
- 			else:
- 				v = binary_metadata.get(_unicode_encode(k))
- 				if v is None:
- 					if k == "EAPI":
- 						metadata[k] = "0"
- 					else:
- 						metadata[k] = ""
- 				else:
- 					v = _unicode_decode(v)
- 					metadata[k] = " ".join(v.split())
- 		return metadata
- 
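As a standalone illustration of the defaulting and whitespace rules applied above (empty string for undefined keys, "0" for a missing EAPI, whitespace runs collapsed), here is a minimal sketch assuming plain byte keys/values as an xpak reader would return them; it is not the portage implementation.

    def normalize_binpkg_metadata(raw_xpak, keys):
        # raw_xpak maps byte keys to byte values, as read from the xpak segment.
        metadata = {}
        for k in keys:
            v = raw_xpak.get(k.encode("utf-8"))
            if v is None:
                metadata[k] = "0" if k == "EAPI" else ""
            else:
                metadata[k] = " ".join(v.decode("utf-8", "replace").split())
        return metadata

    # normalize_binpkg_metadata({b"SLOT": b" 0\n"}, ["SLOT", "EAPI"])
    # -> {"SLOT": "0", "EAPI": "0"}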
- 	def _inject_file(self, pkgindex, cpv, filename):
- 		"""
- 		Add a package to internal data structures, and add an
- 		entry to the given pkgindex.
- 		@param pkgindex: The PackageIndex instance to which an entry
- 			will be added.
- 		@type pkgindex: PackageIndex
- 		@param cpv: A _pkg_str instance corresponding to the package
- 			being injected.
- 		@type cpv: _pkg_str
- 		@param filename: Absolute file path of the package to inject.
- 		@type filename: string
- 		@rtype: dict
- 		@return: A dict corresponding to the new entry which has been
- 			added to pkgindex. This may be used to access the checksums
- 			which have just been generated.
- 		"""
- 		# Update state for future isremote calls.
- 		instance_key = self.dbapi._instance_key(cpv)
- 		if self._remotepkgs is not None:
- 			self._remotepkgs.pop(instance_key, None)
- 
- 		self.dbapi.cpv_inject(cpv)
- 		self._pkg_paths[instance_key] = filename[len(self.pkgdir)+1:]
- 		d = self._pkgindex_entry(cpv)
- 
- 		# If found, remove package(s) with duplicate path.
- 		path = d.get("PATH", "")
- 		for i in range(len(pkgindex.packages) - 1, -1, -1):
- 			d2 = pkgindex.packages[i]
- 			if path and path == d2.get("PATH"):
- 				# Handle path collisions in $PKGDIR/All
- 				# when CPV is not identical.
- 				del pkgindex.packages[i]
- 			elif cpv == d2.get("CPV"):
- 				if path == d2.get("PATH", ""):
- 					del pkgindex.packages[i]
- 
- 		pkgindex.packages.append(d)
- 		return d
- 
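The reverse iteration above is what makes in-place deletion safe while scanning the package list; the same de-duplication rule in a self-contained sketch, with plain dicts standing in for index entries:

    def prune_duplicates(packages, new_entry):
        # Walk backwards so deleting an element never shifts the indices
        # that are still to be visited.
        path = new_entry.get("PATH", "")
        for i in range(len(packages) - 1, -1, -1):
            old = packages[i]
            if path and path == old.get("PATH"):
                del packages[i]
            elif new_entry["CPV"] == old.get("CPV") and path == old.get("PATH", ""):
                del packages[i]
        packages.append(new_entry)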
- 	def _pkgindex_write(self, pkgindex):
- 		contents = codecs.getwriter(_encodings['repo.content'])(io.BytesIO())
- 		pkgindex.write(contents)
- 		contents = contents.getvalue()
- 		atime = mtime = int(pkgindex.header["TIMESTAMP"])
- 		output_files = [(atomic_ofstream(self._pkgindex_file, mode="wb"),
- 			self._pkgindex_file, None)]
- 
- 		if "compress-index" in self.settings.features:
- 			gz_fname = self._pkgindex_file + ".gz"
- 			fileobj = atomic_ofstream(gz_fname, mode="wb")
- 			output_files.append((GzipFile(filename='', mode="wb",
- 				fileobj=fileobj, mtime=mtime), gz_fname, fileobj))
- 
- 		for f, fname, f_close in output_files:
- 			f.write(contents)
- 			f.close()
- 			if f_close is not None:
- 				f_close.close()
- 			self._file_permissions(fname)
- 			# some seconds might have elapsed since TIMESTAMP
- 			os.utime(fname, (atime, mtime))
- 
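A simplified, standalone sketch of the double write performed above: one plain copy and one gzip copy with a pinned mtime, with file timestamps aligned to the index TIMESTAMP header. atomic_ofstream and the permission handling are deliberately omitted here.

    import gzip
    import os

    def write_packages_index(path, contents, timestamp):
        with open(path, "wb") as f:
            f.write(contents)
        # Pinning mtime keeps repeated gzip output byte-identical.
        with open(path + ".gz", "wb") as raw:
            with gzip.GzipFile(filename="", mode="wb", fileobj=raw,
                               mtime=timestamp) as gz:
                gz.write(contents)
        for p in (path, path + ".gz"):
            os.utime(p, (timestamp, timestamp))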
- 	def _pkgindex_entry(self, cpv):
- 		"""
- 		Performs checksums, and gets size and mtime via lstat.
- 		Raises InvalidDependString if necessary.
- 		@rtype: dict
- 		@return: a dict containing the entry for the given cpv.
- 		"""
- 
- 		pkg_path = self.getname(cpv)
- 
- 		d = dict(cpv._metadata.items())
- 		d.update(perform_multiple_checksums(
- 			pkg_path, hashes=self._pkgindex_hashes))
- 
- 		d["CPV"] = cpv
- 		st = os.lstat(pkg_path)
- 		d["_mtime_"] = str(st[stat.ST_MTIME])
- 		d["SIZE"] = str(st.st_size)
- 
- 		rel_path = pkg_path[len(self.pkgdir)+1:]
- 		# record location if it's non-default
- 		if rel_path != cpv + ".tbz2":
- 			d["PATH"] = rel_path
- 
- 		return d
- 
- 	def _new_pkgindex(self):
- 		return portage.getbinpkg.PackageIndex(
- 			allowed_pkg_keys=self._pkgindex_allowed_pkg_keys,
- 			default_header_data=self._pkgindex_default_header_data,
- 			default_pkg_data=self._pkgindex_default_pkg_data,
- 			inherited_keys=self._pkgindex_inherited_keys,
- 			translated_keys=self._pkgindex_translated_keys)
- 
- 	@staticmethod
- 	def _merge_pkgindex_header(src, dest):
- 		"""
- 		Merge Packages header settings from src to dest, in order to
- 		propagate implicit IUSE and USE_EXPAND settings for use with
- 		binary and installed packages. Values are appended, so the
- 		result is a union of elements from src and dest.
- 
- 		Pull in ARCH if it's not defined, since it's used for validation
- 		by emerge's profile_check function, and also for KEYWORDS logic
- 		in the _getmaskingstatus function.
- 
- 		@param src: source mapping (read only)
- 		@type src: Mapping
- 		@param dest: destination mapping
- 		@type dest: MutableMapping
- 		"""
- 		for k, v in iter_iuse_vars(src):
- 			v_before = dest.get(k)
- 			if v_before is not None:
- 				merged_values = set(v_before.split())
- 				merged_values.update(v.split())
- 				v = ' '.join(sorted(merged_values))
- 			dest[k] = v
- 
- 		if 'ARCH' not in dest and 'ARCH' in src:
- 			dest['ARCH'] = src['ARCH']
- 
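The append/union behaviour described in the docstring boils down to merging whitespace-separated token sets; a minimal sketch of that step:

    def merge_tokens(src_value, dest_value):
        # Union of whitespace-separated tokens, sorted for a stable result.
        tokens = set(dest_value.split())
        tokens.update(src_value.split())
        return " ".join(sorted(tokens))

    # merge_tokens("abi_x86_64 alpha", "alpha amd64") -> "abi_x86_64 alpha amd64"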
- 	def _propagate_config(self, config):
- 		"""
- 		Propagate implicit IUSE and USE_EXPAND settings from the binary
- 		package database to a config instance. If settings are not
- 		available to propagate, then this will do nothing and return
- 		False.
- 
- 		@param config: config instance
- 		@type config: portage.config
- 		@rtype: bool
- 		@return: True if settings successfully propagated, False if settings
- 			were not available to propagate.
- 		"""
- 		if self._pkgindex_header is None:
- 			return False
- 
- 		self._merge_pkgindex_header(self._pkgindex_header,
- 			config.configdict['defaults'])
- 		config.regenerate()
- 		config._init_iuse()
- 		return True
- 
- 	def _update_pkgindex_header(self, header):
- 		"""
- 		Add useful settings to the Packages file header, for use by
- 		binhost clients.
- 
- 		This will return silently if the current profile is invalid or
- 		does not have an IUSE_IMPLICIT variable, since it's useful to
- 		maintain a cache of implicit IUSE settings for use with binary
- 		packages.
- 		"""
- 		if not (self.settings.profile_path and
- 			"IUSE_IMPLICIT" in self.settings):
- 			header.setdefault("VERSION", str(self._pkgindex_version))
- 			return
- 
- 		portdir = normalize_path(os.path.realpath(self.settings["PORTDIR"]))
- 		profiles_base = os.path.join(portdir, "profiles") + os.path.sep
- 		if self.settings.profile_path:
- 			profile_path = normalize_path(
- 				os.path.realpath(self.settings.profile_path))
- 			if profile_path.startswith(profiles_base):
- 				profile_path = profile_path[len(profiles_base):]
- 			header["PROFILE"] = profile_path
- 		header["VERSION"] = str(self._pkgindex_version)
- 		base_uri = self.settings.get("PORTAGE_BINHOST_HEADER_URI")
- 		if base_uri:
- 			header["URI"] = base_uri
- 		else:
- 			header.pop("URI", None)
- 		for k in list(self._pkgindex_header_keys) + \
- 			self.settings.get("USE_EXPAND_IMPLICIT", "").split() + \
- 			self.settings.get("USE_EXPAND_UNPREFIXED", "").split():
- 			v = self.settings.get(k, None)
- 			if v:
- 				header[k] = v
- 			else:
- 				header.pop(k, None)
- 
- 		# These values may be useful for using a binhost without
- 		# having a local copy of the profile (bug #470006).
- 		for k in self.settings.get("USE_EXPAND_IMPLICIT", "").split():
- 			k = "USE_EXPAND_VALUES_" + k
- 			v = self.settings.get(k)
- 			if v:
- 				header[k] = v
- 			else:
- 				header.pop(k, None)
- 
- 	def _pkgindex_version_supported(self, pkgindex):
- 		version = pkgindex.header.get("VERSION")
- 		if version:
- 			try:
- 				if int(version) <= self._pkgindex_version:
- 					return True
- 			except ValueError:
- 				pass
- 		return False
- 
- 	def _eval_use_flags(self, cpv, metadata):
- 		use = frozenset(metadata.get("USE", "").split())
- 		for k in self._pkgindex_use_evaluated_keys:
- 			if k.endswith('DEPEND'):
- 				token_class = Atom
- 			else:
- 				token_class = None
- 
- 			deps = metadata.get(k)
- 			if deps is None:
- 				continue
- 			try:
- 				deps = use_reduce(deps, uselist=use, token_class=token_class)
- 				deps = paren_enclose(deps)
- 			except portage.exception.InvalidDependString as e:
- 				writemsg("%s: %s\n" % (k, e), noiselevel=-1)
- 				raise
- 			metadata[k] = deps
- 
- 	def exists_specific(self, cpv):
- 		if not self.populated:
- 			self.populate()
- 		return self.dbapi.match(
- 			dep_expand("="+cpv, mydb=self.dbapi, settings=self.settings))
- 
- 	def dep_bestmatch(self, mydep):
- 		"compatibility method -- all matches, not just visible ones"
- 		if not self.populated:
- 			self.populate()
- 		writemsg("\n\n", 1)
- 		writemsg("mydep: %s\n" % mydep, 1)
- 		mydep = dep_expand(mydep, mydb=self.dbapi, settings=self.settings)
- 		writemsg("mydep: %s\n" % mydep, 1)
- 		mykey = dep_getkey(mydep)
- 		writemsg("mykey: %s\n" % mykey, 1)
- 		mymatch = best(match_from_list(mydep,self.dbapi.cp_list(mykey)))
- 		writemsg("mymatch: %s\n" % mymatch, 1)
- 		if mymatch is None:
- 			return ""
- 		return mymatch
- 
- 	def getname(self, cpv, allocate_new=None):
- 		"""Returns a file location for this package.
- 		If cpv has both build_time and build_id attributes, then the
- 		path to the specific corresponding instance is returned.
- 		Otherwise, allocate a new path and return that. When allocating
- 		a new path, behavior depends on the binpkg-multi-instance
- 		FEATURES setting.
- 		"""
- 		if not self.populated:
- 			self.populate()
- 
- 		try:
- 			cpv.cp
- 		except AttributeError:
- 			cpv = _pkg_str(cpv)
- 
- 		filename = None
- 		if allocate_new:
- 			filename = self._allocate_filename(cpv)
- 		elif self._is_specific_instance(cpv):
- 			instance_key = self.dbapi._instance_key(cpv)
- 			path = self._pkg_paths.get(instance_key)
- 			if path is not None:
- 				filename = os.path.join(self.pkgdir, path)
- 
- 		if filename is None and not allocate_new:
- 			try:
- 				instance_key = self.dbapi._instance_key(cpv,
- 					support_string=True)
- 			except KeyError:
- 				pass
- 			else:
- 				filename = self._pkg_paths.get(instance_key)
- 				if filename is not None:
- 					filename = os.path.join(self.pkgdir, filename)
- 				elif instance_key in self._additional_pkgs:
- 					return None
- 
- 		if filename is None:
- 			if self._multi_instance:
- 				pf = catsplit(cpv)[1]
- 				filename = "%s-%s.xpak" % (
- 					os.path.join(self.pkgdir, cpv.cp, pf), "1")
- 			else:
- 				filename = os.path.join(self.pkgdir, cpv + ".tbz2")
- 
- 		return filename
- 
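The two on-disk layouts that getname() can produce are easiest to see side by side; a sketch with made-up pkgdir and package names:

    import os

    def binpkg_path(pkgdir, cpv, cp, pf, multi_instance, build_id=1):
        # Classic layout:               ${PKGDIR}/${CPV}.tbz2
        # binpkg-multi-instance layout: ${PKGDIR}/${CP}/${PF}-${BUILD_ID}.xpak
        if multi_instance:
            return os.path.join(pkgdir, cp, "%s-%s.xpak" % (pf, build_id))
        return os.path.join(pkgdir, cpv + ".tbz2")

    # binpkg_path("/var/cache/binpkgs", "app-misc/foo-1.0",
    #             "app-misc/foo", "foo-1.0", True)
    # -> "/var/cache/binpkgs/app-misc/foo/foo-1.0-1.xpak"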
- 	def _is_specific_instance(self, cpv):
- 		specific = True
- 		try:
- 			build_time = cpv.build_time
- 			build_id = cpv.build_id
- 		except AttributeError:
- 			specific = False
- 		else:
- 			if build_time is None or build_id is None:
- 				specific = False
- 		return specific
- 
- 	def _max_build_id(self, cpv):
- 		max_build_id = 0
- 		for x in self.dbapi.cp_list(cpv.cp):
- 			if (x == cpv and x.build_id is not None and
- 				x.build_id > max_build_id):
- 				max_build_id = x.build_id
- 		return max_build_id
- 
- 	def _allocate_filename(self, cpv):
- 		return os.path.join(self.pkgdir, cpv + ".tbz2")
- 
- 	def _allocate_filename_multi(self, cpv):
- 
- 		# First, get the max build_id found when _populate was
- 		# called.
- 		max_build_id = self._max_build_id(cpv)
- 
- 		# A new package may have been added concurrently since the
- 		# last _populate call, so increment build_id until
- 		# we locate an unused id.
- 		pf = catsplit(cpv)[1]
- 		build_id = max_build_id + 1
- 
- 		while True:
- 			filename = "%s-%s.xpak" % (
- 				os.path.join(self.pkgdir, cpv.cp, pf), build_id)
- 			if os.path.exists(filename):
- 				build_id += 1
- 			else:
- 				return filename
- 
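A standalone sketch of the probing loop above: start one past the highest build ID known to the index and keep incrementing until an unused path is found, since a concurrent writer may have claimed IDs in the meantime.

    import os

    def next_free_build_path(pkgdir, cp, pf, max_known_build_id):
        build_id = max_known_build_id + 1
        while True:
            candidate = os.path.join(pkgdir, cp, "%s-%s.xpak" % (pf, build_id))
            if not os.path.exists(candidate):
                return candidate
            build_id += 1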
- 	@staticmethod
- 	def _parse_build_id(filename):
- 		build_id = -1
- 		suffixlen = len(".xpak")
- 		hyphen = filename.rfind("-", 0, -(suffixlen + 1))
- 		if hyphen != -1:
- 			build_id = filename[hyphen+1:-suffixlen]
- 		try:
- 			build_id = int(build_id)
- 		except ValueError:
- 			pass
- 		return build_id
- 
- 	def isremote(self, pkgname):
- 		"""Returns true if the package is kept remotely and it has not been
- 		downloaded (or it is only partially downloaded)."""
- 		if self._remotepkgs is None:
- 			return False
- 		instance_key = self.dbapi._instance_key(pkgname)
- 		if instance_key not in self._remotepkgs:
- 			return False
- 		if instance_key in self._additional_pkgs:
- 			return False
- 		# Presence in self._remotepkgs implies that it's remote. When a
- 		# package is downloaded, state is updated by self.inject().
- 		return True
- 
- 	def get_pkgindex_uri(self, cpv):
- 		"""Returns the URI to the Packages file for a given package."""
- 		uri = None
- 		if self._remotepkgs is not None:
- 			metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
- 			if metadata is not None:
- 				uri = metadata["PKGINDEX_URI"]
- 		return uri
- 
- 	def gettbz2(self, pkgname):
- 		"""Fetches the package from a remote site, if necessary.  Attempts to
- 		resume if the file appears to be partially downloaded."""
- 		instance_key = self.dbapi._instance_key(pkgname)
- 		tbz2_path = self.getname(pkgname)
- 		tbz2name = os.path.basename(tbz2_path)
- 		resume = False
- 		if os.path.exists(tbz2_path):
- 			if tbz2name[:-5] not in self.invalids:
- 				return
- 
- 			resume = True
- 			writemsg(_("Resuming download of this tbz2, but it is possible that it is corrupt.\n"),
- 				noiselevel=-1)
- 
- 		mydest = os.path.dirname(self.getname(pkgname))
- 		self._ensure_dir(mydest)
- 		# urljoin doesn't work correctly with unrecognized protocols like sftp
- 		if self._remote_has_index:
- 			rel_url = self._remotepkgs[instance_key].get("PATH")
- 			if not rel_url:
- 				rel_url = pkgname + ".tbz2"
- 			remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
- 			url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
- 		else:
- 			url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
- 		protocol = urlparse(url)[0]
- 		fcmd_prefix = "FETCHCOMMAND"
- 		if resume:
- 			fcmd_prefix = "RESUMECOMMAND"
- 		fcmd = self.settings.get(fcmd_prefix + "_" + protocol.upper())
- 		if not fcmd:
- 			fcmd = self.settings.get(fcmd_prefix)
- 		success = portage.getbinpkg.file_get(url, mydest, fcmd=fcmd)
- 		if not success:
- 			try:
- 				os.unlink(self.getname(pkgname))
- 			except OSError:
- 				pass
- 			raise portage.exception.FileNotFound(mydest)
- 		self.inject(pkgname)
- 
- 	def _load_pkgindex(self):
- 		pkgindex = self._new_pkgindex()
- 		try:
- 			f = io.open(_unicode_encode(self._pkgindex_file,
- 				encoding=_encodings['fs'], errors='strict'),
- 				mode='r', encoding=_encodings['repo.content'],
- 				errors='replace')
- 		except EnvironmentError:
- 			pass
- 		else:
- 			try:
- 				pkgindex.read(f)
- 			finally:
- 				f.close()
- 		return pkgindex
- 
- 	def _get_digests(self, pkg):
- 
- 		try:
- 			cpv = pkg.cpv
- 		except AttributeError:
- 			cpv = pkg
- 
- 		_instance_key = self.dbapi._instance_key
- 		instance_key = _instance_key(cpv)
- 		digests = {}
- 		metadata = (None if self._remotepkgs is None else
- 			self._remotepkgs.get(instance_key))
- 		if metadata is None:
- 			for d in self._load_pkgindex().packages:
- 				if (d["CPV"] == cpv and
- 					instance_key == _instance_key(_pkg_str(d["CPV"],
- 					metadata=d, settings=self.settings))):
- 					metadata = d
- 					break
- 
- 		if metadata is None:
- 			return digests
- 
- 		for k in get_valid_checksum_keys():
- 			v = metadata.get(k)
- 			if not v:
- 				continue
- 			digests[k] = v
- 
- 		if "SIZE" in metadata:
- 			try:
- 				digests["size"] = int(metadata["SIZE"])
- 			except ValueError:
- 				writemsg(_("!!! Malformed SIZE attribute in remote " \
- 				"metadata for '%s'\n") % cpv)
- 
- 		return digests
- 
- 	def digestCheck(self, pkg):
- 		"""
- 		Verify digests for the given package and raise DigestException
- 		if verification fails.
- 		@rtype: bool
- 		@return: True if digests could be located, False otherwise.
- 		"""
- 
- 		digests = self._get_digests(pkg)
- 
- 		if not digests:
- 			return False
- 
- 		try:
- 			cpv = pkg.cpv
- 		except AttributeError:
- 			cpv = pkg
- 
- 		pkg_path = self.getname(cpv)
- 		hash_filter = _hash_filter(
- 			self.settings.get("PORTAGE_CHECKSUM_FILTER", ""))
- 		if not hash_filter.transparent:
- 			digests = _apply_hash_filter(digests, hash_filter)
- 		eout = EOutput()
- 		eout.quiet = self.settings.get("PORTAGE_QUIET") == "1"
- 		ok, st = _check_distfile(pkg_path, digests, eout, show_errors=0)
- 		if not ok:
- 			ok, reason = verify_all(pkg_path, digests)
- 			if not ok:
- 				raise portage.exception.DigestException(
- 					(pkg_path,) + tuple(reason))
- 
- 		return True
- 
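Stripped of portage's checksum framework, the verification amounts to comparing the cheap attributes first and the hashes second; a minimal sketch using only hashlib (verify_all and _check_distfile do considerably more, including hash filtering and error reporting):

    import hashlib

    def simple_digest_check(path, expected):
        with open(path, "rb") as f:
            data = f.read()
        if "size" in expected and len(data) != expected["size"]:
            return False, "size mismatch"
        if "MD5" in expected and hashlib.md5(data).hexdigest() != expected["MD5"]:
            return False, "MD5 mismatch"
        return True, "ok"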
- 	def getslot(self, mycatpkg):
- 		"Get a slot for a catpkg; assume it exists."
- 		myslot = ""
- 		try:
- 			myslot = self.dbapi._pkg_str(mycatpkg, None).slot
- 		except KeyError:
- 			pass
- 		return myslot
+     "this tree scans for a list of all packages available in PKGDIR"
+ 
+     def __init__(
+         self,
+         _unused=DeprecationWarning,
+         pkgdir=None,
+         virtual=DeprecationWarning,
+         settings=None,
+     ):
+ 
+         if pkgdir is None:
+             raise TypeError("pkgdir parameter is required")
+ 
+         if settings is None:
+             raise TypeError("settings parameter is required")
+ 
+         if _unused is not DeprecationWarning:
+             warnings.warn(
+                 "The first parameter of the "
+                 "portage.dbapi.bintree.binarytree"
+                 " constructor is now unused. Instead "
+                 "settings['ROOT'] is used.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         if virtual is not DeprecationWarning:
+             warnings.warn(
+                 "The 'virtual' parameter of the "
+                 "portage.dbapi.bintree.binarytree"
+                 " constructor is unused",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         if True:
+             self.pkgdir = normalize_path(pkgdir)
+             # NOTE: Even if binpkg-multi-instance is disabled, it's
+             # still possible to access a PKGDIR which uses the
+             # binpkg-multi-instance layout (or mixed layout).
+             self._multi_instance = "binpkg-multi-instance" in settings.features
+             if self._multi_instance:
+                 self._allocate_filename = self._allocate_filename_multi
+             self.dbapi = bindbapi(self, settings=settings)
+             self.update_ents = self.dbapi.update_ents
+             self.move_slot_ent = self.dbapi.move_slot_ent
+             self.populated = 0
+             self.tree = {}
+             self._binrepos_conf = None
+             self._remote_has_index = False
+             self._remotepkgs = None  # remote metadata indexed by cpv
+             self._additional_pkgs = {}
+             self.invalids = []
+             self.settings = settings
+             self._pkg_paths = {}
+             self._populating = False
+             self._all_directory = os.path.isdir(os.path.join(self.pkgdir, "All"))
+             self._pkgindex_version = 0
+             self._pkgindex_hashes = ["MD5", "SHA1"]
+             self._pkgindex_file = os.path.join(self.pkgdir, "Packages")
+             self._pkgindex_keys = self.dbapi._aux_cache_keys.copy()
+             self._pkgindex_keys.update(["CPV", "SIZE"])
+             self._pkgindex_aux_keys = [
+                 "BASE_URI",
+                 "BDEPEND",
+                 "BUILD_ID",
+                 "BUILD_TIME",
+                 "CHOST",
+                 "DEFINED_PHASES",
+                 "DEPEND",
+                 "DESCRIPTION",
+                 "EAPI",
+                 "FETCHCOMMAND",
+                 "IDEPEND",
+                 "IUSE",
+                 "KEYWORDS",
+                 "LICENSE",
+                 "PDEPEND",
+                 "PKGINDEX_URI",
+                 "PROPERTIES",
+                 "PROVIDES",
+                 "RDEPEND",
+                 "repository",
+                 "REQUIRES",
+                 "RESTRICT",
+                 "RESUMECOMMAND",
+                 "SIZE",
+                 "SLOT",
+                 "USE",
++                # PREFIX LOCAL
++                "EPREFIX",
+             ]
+             self._pkgindex_aux_keys = list(self._pkgindex_aux_keys)
+             self._pkgindex_use_evaluated_keys = (
+                 "BDEPEND",
+                 "DEPEND",
+                 "IDEPEND",
+                 "LICENSE",
+                 "RDEPEND",
+                 "PDEPEND",
+                 "PROPERTIES",
+                 "RESTRICT",
+             )
+             self._pkgindex_header = None
+             self._pkgindex_header_keys = set(
+                 [
+                     "ACCEPT_KEYWORDS",
+                     "ACCEPT_LICENSE",
+                     "ACCEPT_PROPERTIES",
+                     "ACCEPT_RESTRICT",
+                     "CBUILD",
+                     "CONFIG_PROTECT",
+                     "CONFIG_PROTECT_MASK",
+                     "FEATURES",
+                     "GENTOO_MIRRORS",
+                     "INSTALL_MASK",
+                     "IUSE_IMPLICIT",
+                     "USE",
+                     "USE_EXPAND",
+                     "USE_EXPAND_HIDDEN",
+                     "USE_EXPAND_IMPLICIT",
+                     "USE_EXPAND_UNPREFIXED",
++                    # PREFIX LOCAL
++                    "EPREFIX",
+                 ]
+             )
+             self._pkgindex_default_pkg_data = {
+                 "BDEPEND": "",
+                 "BUILD_ID": "",
+                 "BUILD_TIME": "",
+                 "DEFINED_PHASES": "",
+                 "DEPEND": "",
+                 "EAPI": "0",
+                 "IDEPEND": "",
+                 "IUSE": "",
+                 "KEYWORDS": "",
+                 "LICENSE": "",
+                 "PATH": "",
+                 "PDEPEND": "",
+                 "PROPERTIES": "",
+                 "PROVIDES": "",
+                 "RDEPEND": "",
+                 "REQUIRES": "",
+                 "RESTRICT": "",
+                 "SLOT": "0",
+                 "USE": "",
+             }
 -            self._pkgindex_inherited_keys = ["CHOST", "repository"]
++            self._pkgindex_inherited_keys = ["CHOST", "repository",
++                    # PREFIX LOCAL
++                    "EPREFIX"]
+ 
+             # Populate the header with appropriate defaults.
+             self._pkgindex_default_header_data = {
+                 "CHOST": self.settings.get("CHOST", ""),
+                 "repository": "",
+             }
+ 
+             self._pkgindex_translated_keys = (
+                 ("DESCRIPTION", "DESC"),
+                 ("_mtime_", "MTIME"),
+                 ("repository", "REPO"),
+             )
+ 
+             self._pkgindex_allowed_pkg_keys = set(
+                 chain(
+                     self._pkgindex_keys,
+                     self._pkgindex_aux_keys,
+                     self._pkgindex_hashes,
+                     self._pkgindex_default_pkg_data,
+                     self._pkgindex_inherited_keys,
+                     chain(*self._pkgindex_translated_keys),
+                 )
+             )
+ 
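The nested chain() call above flattens the translated key pairs so that both the internal and the on-disk spellings are accepted; a small self-contained illustration:

    from itertools import chain

    translated = (("DESCRIPTION", "DESC"), ("_mtime_", "MTIME"),
                  ("repository", "REPO"))
    allowed = set(chain(["CPV", "SIZE"], ["MD5", "SHA1"], chain(*translated)))
    # Both spellings end up in the set:
    assert "DESCRIPTION" in allowed and "DESC" in allowed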
+     @property
+     def root(self):
+         warnings.warn(
+             "The root attribute of "
+             "portage.dbapi.bintree.binarytree"
+             " is deprecated. Use "
+             "settings['ROOT'] instead.",
+             DeprecationWarning,
+             stacklevel=3,
+         )
+         return self.settings["ROOT"]
+ 
+     def move_ent(self, mylist, repo_match=None):
+         if not self.populated:
+             self.populate()
+         origcp = mylist[1]
+         newcp = mylist[2]
+         # sanity check
+         for atom in (origcp, newcp):
+             if not isjustname(atom):
+                 raise InvalidPackageName(str(atom))
+         mynewcat = catsplit(newcp)[0]
+         origmatches = self.dbapi.cp_list(origcp)
+         moves = 0
+         if not origmatches:
+             return moves
+         for mycpv in origmatches:
+             mycpv_cp = mycpv.cp
+             if mycpv_cp != origcp:
+                 # Ignore PROVIDE virtual match.
+                 continue
+             if repo_match is not None and not repo_match(mycpv.repo):
+                 continue
+ 
+             # Use isvalidatom() to check if this move is valid for the
+             # EAPI (characters allowed in package names may vary).
+             if not isvalidatom(newcp, eapi=mycpv.eapi):
+                 continue
+ 
+             mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
+             myoldpkg = catsplit(mycpv)[1]
+             mynewpkg = catsplit(mynewcpv)[1]
+ 
+             # If this update has already been applied to the same
+             # package build then silently continue.
+             applied = False
+             for maybe_applied in self.dbapi.match("={}".format(mynewcpv)):
+                 if maybe_applied.build_time == mycpv.build_time:
+                     applied = True
+                     break
+ 
+             if applied:
+                 continue
+ 
+             if (mynewpkg != myoldpkg) and self.dbapi.cpv_exists(mynewcpv):
+                 writemsg(
+                     _("!!! Cannot update binary: Destination exists.\n"), noiselevel=-1
+                 )
+                 writemsg("!!! " + mycpv + " -> " + mynewcpv + "\n", noiselevel=-1)
+                 continue
+ 
+             tbz2path = self.getname(mycpv)
+             if os.path.exists(tbz2path) and not os.access(tbz2path, os.W_OK):
+                 writemsg(
+                     _("!!! Cannot update readonly binary: %s\n") % mycpv, noiselevel=-1
+                 )
+                 continue
+ 
+             moves += 1
+             mytbz2 = portage.xpak.tbz2(tbz2path)
+             mydata = mytbz2.get_data()
+             updated_items = update_dbentries([mylist], mydata, parent=mycpv)
+             mydata.update(updated_items)
+             mydata[b"PF"] = _unicode_encode(
+                 mynewpkg + "\n", encoding=_encodings["repo.content"]
+             )
+             mydata[b"CATEGORY"] = _unicode_encode(
+                 mynewcat + "\n", encoding=_encodings["repo.content"]
+             )
+             if mynewpkg != myoldpkg:
+                 ebuild_data = mydata.pop(
+                     _unicode_encode(
+                         myoldpkg + ".ebuild", encoding=_encodings["repo.content"]
+                     ),
+                     None,
+                 )
+                 if ebuild_data is not None:
+                     mydata[
+                         _unicode_encode(
+                             mynewpkg + ".ebuild", encoding=_encodings["repo.content"]
+                         )
+                     ] = ebuild_data
+ 
+             metadata = self.dbapi._aux_cache_slot_dict()
+             for k in self.dbapi._aux_cache_keys:
+                 v = mydata.get(_unicode_encode(k))
+                 if v is not None:
+                     v = _unicode_decode(v)
+                     metadata[k] = " ".join(v.split())
+ 
+             # Create a copy of the old version of the package and
+             # apply the update to it. Leave behind the old version,
+             # assuming that it will be deleted by eclean-pkg when its
+             # time comes.
+             mynewcpv = _pkg_str(mynewcpv, metadata=metadata, db=self.dbapi)
+             update_path = self.getname(mynewcpv, allocate_new=True) + ".partial"
+             self._ensure_dir(os.path.dirname(update_path))
+             update_path_lock = None
+             try:
+                 update_path_lock = lockfile(update_path, wantnewlockfile=True)
+                 copyfile(tbz2path, update_path)
+                 mytbz2 = portage.xpak.tbz2(update_path)
+                 mytbz2.recompose_mem(portage.xpak.xpak_mem(mydata))
+                 self.inject(mynewcpv, filename=update_path)
+             finally:
+                 if update_path_lock is not None:
+                     try:
+                         os.unlink(update_path)
+                     except OSError:
+                         pass
+                     unlockfile(update_path_lock)
+ 
+         return moves
+ 
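The core string rewrite performed for each matching package is a category/package substitution on the cpv; a rough sketch of that step alone (not portage's _pkg_str machinery):

    def rename_cpv(cpv, origcp, newcp):
        # e.g. rename_cpv("dev-util/foo-1.0-r1", "dev-util/foo", "dev-libs/foo")
        #      -> "dev-libs/foo-1.0-r1"; version and revision are preserved.
        if not cpv.startswith(origcp + "-"):
            raise ValueError("cpv %r does not belong to %r" % (cpv, origcp))
        return newcp + cpv[len(origcp):]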
+     def prevent_collision(self, cpv):
+         warnings.warn(
+             "The "
+             "portage.dbapi.bintree.binarytree.prevent_collision "
+             "method is deprecated.",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+ 
+     def _ensure_dir(self, path):
+         """
+         Create the specified directory. Also, copy gid and group mode
+         bits from self.pkgdir if possible.
+         @param path: Absolute path of the directory to be created.
+         @type path: String
+         """
+         try:
+             pkgdir_st = os.stat(self.pkgdir)
+         except OSError:
+             ensure_dirs(path)
+             return
+         pkgdir_gid = pkgdir_st.st_gid
+         pkgdir_grp_mode = 0o2070 & pkgdir_st.st_mode
+         try:
+             ensure_dirs(path, gid=pkgdir_gid, mode=pkgdir_grp_mode, mask=0)
+         except PortageException:
+             if not os.path.isdir(path):
+                 raise
+ 
+     def _file_permissions(self, path):
+         try:
+             pkgdir_st = os.stat(self.pkgdir)
+         except OSError:
+             pass
+         else:
+             pkgdir_gid = pkgdir_st.st_gid
+             pkgdir_grp_mode = 0o0060 & pkgdir_st.st_mode
+             try:
+                 portage.util.apply_permissions(
+                     path, gid=pkgdir_gid, mode=pkgdir_grp_mode, mask=0
+                 )
+             except PortageException:
+                 pass
+ 
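Both helpers above copy ownership and permission information from PKGDIR rather than hard-coding it; a simplified stand-in that copies the group and the group permission bits onto a file (portage.util.apply_permissions handles masks and error cases more carefully):

    import os

    def copy_group_bits(pkgdir, path):
        st = os.stat(pkgdir)
        os.chown(path, -1, st.st_gid)          # keep the owner, copy the group
        mode = os.stat(path).st_mode
        os.chmod(path, (mode & ~0o070) | (st.st_mode & 0o070))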
+     def populate(self, getbinpkgs=False, getbinpkg_refresh=True, add_repos=()):
+         """
+         Populates the binarytree with package metadata.
+ 
+         @param getbinpkgs: include remote packages
+         @type getbinpkgs: bool
+         @param getbinpkg_refresh: attempt to refresh the cache
+                 of remote package metadata if getbinpkgs is also True
+         @type getbinpkg_refresh: bool
+         @param add_repos: additional binary package repositories
+         @type add_repos: sequence
+         """
+ 
+         if self._populating:
+             return
+ 
+         if not os.path.isdir(self.pkgdir) and not (getbinpkgs or add_repos):
+             self.populated = True
+             return
+ 
+         # Clear all caches in case populate is called multiple times
+         # as may be the case when _global_updates calls populate()
+         # prior to performing package moves since it only wants to
+         # operate on local packages (getbinpkgs=0).
+         self._remotepkgs = None
+ 
+         self._populating = True
+         try:
+             update_pkgindex = self._populate_local(
+                 reindex="pkgdir-index-trusted" not in self.settings.features
+             )
+ 
+             if update_pkgindex and self.dbapi.writable:
+                 # If the Packages file needs to be updated, then _populate_local
+                 # needs to be called once again while the file is locked, so
+                 # that changes made by a concurrent process cannot be lost. This
+                 # case is avoided when possible, in order to minimize lock
+                 # contention.
+                 pkgindex_lock = None
+                 try:
+                     pkgindex_lock = lockfile(self._pkgindex_file, wantnewlockfile=True)
+                     update_pkgindex = self._populate_local()
+                     if update_pkgindex:
+                         self._pkgindex_write(update_pkgindex)
+                 finally:
+                     if pkgindex_lock:
+                         unlockfile(pkgindex_lock)
+ 
+             if add_repos:
+                 self._populate_additional(add_repos)
+ 
+             if getbinpkgs:
+                 config_path = os.path.join(
+                     self.settings["PORTAGE_CONFIGROOT"], BINREPOS_CONF_FILE
+                 )
+                 self._binrepos_conf = BinRepoConfigLoader((config_path,), self.settings)
+                 if not self._binrepos_conf:
+                     writemsg(
+                         _(
+                             "!!! %s is missing (or PORTAGE_BINHOST is unset), but use is requested.\n"
+                         )
+                         % (config_path,),
+                         noiselevel=-1,
+                     )
+                 else:
+                     self._populate_remote(getbinpkg_refresh=getbinpkg_refresh)
+ 
+         finally:
+             self._populating = False
+ 
+         self.populated = True
+ 
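The lock-then-re-read step in populate() exists to avoid losing concurrent updates to the Packages file; the same pattern in isolation, as a rough sketch using plain fcntl locking rather than portage's lockfile():

    import fcntl
    import os

    def update_file_under_lock(path, apply_update):
        lock_fd = os.open(path + ".lock", os.O_RDWR | os.O_CREAT, 0o644)
        try:
            fcntl.flock(lock_fd, fcntl.LOCK_EX)
            # Re-read while holding the lock so another writer's changes
            # made since the unlocked read are not silently discarded.
            current = b""
            if os.path.exists(path):
                with open(path, "rb") as f:
                    current = f.read()
            with open(path, "wb") as f:
                f.write(apply_update(current))
        finally:
            fcntl.flock(lock_fd, fcntl.LOCK_UN)
            os.close(lock_fd)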
+     def _populate_local(self, reindex=True):
+         """
+         Populates the binarytree with local package metadata.
+ 
+         @param reindex: detect added / modified / removed packages and
+                 regenerate the index file if necessary
+         @type reindex: bool
+         """
+         self.dbapi.clear()
+         _instance_key = self.dbapi._instance_key
+         # In order to minimize disk I/O, we never compute digests here.
+         # Therefore we exclude hashes from the minimum_keys, so that
+         # the Packages file will not be needlessly re-written due to
+         # missing digests.
+         minimum_keys = self._pkgindex_keys.difference(self._pkgindex_hashes)
+         if True:
+             pkg_paths = {}
+             self._pkg_paths = pkg_paths
+             dir_files = {}
+             if reindex:
+                 for parent, dir_names, file_names in os.walk(self.pkgdir):
+                     relative_parent = parent[len(self.pkgdir) + 1 :]
+                     dir_files[relative_parent] = file_names
+ 
+             pkgindex = self._load_pkgindex()
+             if not self._pkgindex_version_supported(pkgindex):
+                 pkgindex = self._new_pkgindex()
+             metadata = {}
+             basename_index = {}
+             for d in pkgindex.packages:
+                 cpv = _pkg_str(
+                     d["CPV"], metadata=d, settings=self.settings, db=self.dbapi
+                 )
+                 d["CPV"] = cpv
+                 metadata[_instance_key(cpv)] = d
+                 path = d.get("PATH")
+                 if not path:
+                     path = cpv + ".tbz2"
+ 
+                 if reindex:
+                     basename = os.path.basename(path)
+                     basename_index.setdefault(basename, []).append(d)
+                 else:
+                     instance_key = _instance_key(cpv)
+                     pkg_paths[instance_key] = path
+                     self.dbapi.cpv_inject(cpv)
+ 
+             update_pkgindex = False
+             for mydir, file_names in dir_files.items():
+                 try:
+                     mydir = _unicode_decode(
+                         mydir, encoding=_encodings["fs"], errors="strict"
+                     )
+                 except UnicodeDecodeError:
+                     continue
+                 for myfile in file_names:
+                     try:
+                         myfile = _unicode_decode(
+                             myfile, encoding=_encodings["fs"], errors="strict"
+                         )
+                     except UnicodeDecodeError:
+                         continue
+                     if not myfile.endswith(SUPPORTED_XPAK_EXTENSIONS):
+                         continue
+                     mypath = os.path.join(mydir, myfile)
+                     full_path = os.path.join(self.pkgdir, mypath)
+                     s = os.lstat(full_path)
+ 
+                     if not stat.S_ISREG(s.st_mode):
+                         continue
+ 
+                     # Validate data from the package index and try to avoid
+                     # reading the xpak if possible.
+                     possibilities = basename_index.get(myfile)
+                     if possibilities:
+                         match = None
+                         for d in possibilities:
+                             try:
+                                 if int(d["_mtime_"]) != s[stat.ST_MTIME]:
+                                     continue
+                             except (KeyError, ValueError):
+                                 continue
+                             try:
+                                 if int(d["SIZE"]) != int(s.st_size):
+                                     continue
+                             except (KeyError, ValueError):
+                                 continue
+                             if not minimum_keys.difference(d):
+                                 match = d
+                                 break
+                         if match:
+                             mycpv = match["CPV"]
+                             instance_key = _instance_key(mycpv)
+                             pkg_paths[instance_key] = mypath
+                             # update the path if the package has been moved
+                             oldpath = d.get("PATH")
+                             if oldpath and oldpath != mypath:
+                                 update_pkgindex = True
+                             # Omit PATH if it is the default path for
+                             # the current Packages format version.
+                             if mypath != mycpv + ".tbz2":
+                                 d["PATH"] = mypath
+                                 if not oldpath:
+                                     update_pkgindex = True
+                             else:
+                                 d.pop("PATH", None)
+                                 if oldpath:
+                                     update_pkgindex = True
+                             self.dbapi.cpv_inject(mycpv)
+                             continue
+                     if not os.access(full_path, os.R_OK):
+                         writemsg(
+                             _("!!! Permission denied to read " "binary package: '%s'\n")
+                             % full_path,
+                             noiselevel=-1,
+                         )
+                         self.invalids.append(myfile[:-5])
+                         continue
+                     pkg_metadata = self._read_metadata(
+                         full_path,
+                         s,
+                         keys=chain(self.dbapi._aux_cache_keys, ("PF", "CATEGORY")),
+                     )
+                     mycat = pkg_metadata.get("CATEGORY", "")
+                     mypf = pkg_metadata.get("PF", "")
+                     slot = pkg_metadata.get("SLOT", "")
+                     mypkg = myfile[:-5]
+                     if not mycat or not mypf or not slot:
+                         # old-style or corrupt package
+                         writemsg(
+                             _("\n!!! Invalid binary package: '%s'\n") % full_path,
+                             noiselevel=-1,
+                         )
+                         missing_keys = []
+                         if not mycat:
+                             missing_keys.append("CATEGORY")
+                         if not mypf:
+                             missing_keys.append("PF")
+                         if not slot:
+                             missing_keys.append("SLOT")
+                         msg = []
+                         if missing_keys:
+                             missing_keys.sort()
+                             msg.append(
+                                 _("Missing metadata key(s): %s.")
+                                 % ", ".join(missing_keys)
+                             )
+                         msg.append(
+                             _(
+                                 " This binary package is not "
+                                 "recoverable and should be deleted."
+                             )
+                         )
+                         for line in textwrap.wrap("".join(msg), 72):
+                             writemsg("!!! %s\n" % line, noiselevel=-1)
+                         self.invalids.append(mypkg)
+                         continue
+ 
+                     multi_instance = False
+                     invalid_name = False
+                     build_id = None
+                     if myfile.endswith(".xpak"):
+                         multi_instance = True
+                         build_id = self._parse_build_id(myfile)
+                         if build_id < 1:
+                             invalid_name = True
+                         elif myfile != "%s-%s.xpak" % (mypf, build_id):
+                             invalid_name = True
+                         else:
+                             mypkg = mypkg[: -len(str(build_id)) - 1]
+                     elif myfile != mypf + ".tbz2":
+                         invalid_name = True
+ 
+                     if invalid_name:
+                         writemsg(
+                             _("\n!!! Binary package name is " "invalid: '%s'\n")
+                             % full_path,
+                             noiselevel=-1,
+                         )
+                         continue
+ 
+                     if pkg_metadata.get("BUILD_ID"):
+                         try:
+                             build_id = int(pkg_metadata["BUILD_ID"])
+                         except ValueError:
+                             writemsg(
+                                 _("!!! Binary package has " "invalid BUILD_ID: '%s'\n")
+                                 % full_path,
+                                 noiselevel=-1,
+                             )
+                             continue
+                     else:
+                         build_id = None
+ 
+                     if multi_instance:
+                         name_split = catpkgsplit("%s/%s" % (mycat, mypf))
+                         if (
+                             name_split is None
+                             or tuple(catsplit(mydir)) != name_split[:2]
+                         ):
+                             continue
+                     elif mycat != mydir and mydir != "All":
+                         continue
+                     if mypkg != mypf.strip():
+                         continue
+                     mycpv = mycat + "/" + mypkg
+                     if not self.dbapi._category_re.match(mycat):
+                         writemsg(
+                             _(
+                                 "!!! Binary package has an "
+                                 "unrecognized category: '%s'\n"
+                             )
+                             % full_path,
+                             noiselevel=-1,
+                         )
+                         writemsg(
+                             _(
+                                 "!!! '%s' has a category that is not"
+                                 " listed in %setc/portage/categories\n"
+                             )
+                             % (mycpv, self.settings["PORTAGE_CONFIGROOT"]),
+                             noiselevel=-1,
+                         )
+                         continue
+                     if build_id is not None:
+                         pkg_metadata["BUILD_ID"] = str(build_id)
+                     pkg_metadata["SIZE"] = str(s.st_size)
+                     # Discard items used only for validation above.
+                     pkg_metadata.pop("CATEGORY")
+                     pkg_metadata.pop("PF")
+                     mycpv = _pkg_str(
+                         mycpv,
+                         metadata=self.dbapi._aux_cache_slot_dict(pkg_metadata),
+                         db=self.dbapi,
+                     )
+                     pkg_paths[_instance_key(mycpv)] = mypath
+                     self.dbapi.cpv_inject(mycpv)
+                     update_pkgindex = True
+                     d = metadata.get(_instance_key(mycpv), pkgindex._pkg_slot_dict())
+                     if d:
+                         try:
+                             if int(d["_mtime_"]) != s[stat.ST_MTIME]:
+                                 d.clear()
+                         except (KeyError, ValueError):
+                             d.clear()
+                     if d:
+                         try:
+                             if int(d["SIZE"]) != int(s.st_size):
+                                 d.clear()
+                         except (KeyError, ValueError):
+                             d.clear()
+ 
+                     for k in self._pkgindex_allowed_pkg_keys:
+                         v = pkg_metadata.get(k)
+                         if v:
+                             d[k] = v
+                     d["CPV"] = mycpv
+ 
+                     try:
+                         self._eval_use_flags(mycpv, d)
+                     except portage.exception.InvalidDependString:
+                         writemsg(
+                             _("!!! Invalid binary package: '%s'\n")
+                             % self.getname(mycpv),
+                             noiselevel=-1,
+                         )
+                         self.dbapi.cpv_remove(mycpv)
+                         del pkg_paths[_instance_key(mycpv)]
+ 
+                     # record location if it's non-default
+                     if mypath != mycpv + ".tbz2":
+                         d["PATH"] = mypath
+                     else:
+                         d.pop("PATH", None)
+                     metadata[_instance_key(mycpv)] = d
+ 
+             if reindex:
+                 for instance_key in list(metadata):
+                     if instance_key not in pkg_paths:
+                         del metadata[instance_key]
+ 
+             if update_pkgindex:
+                 del pkgindex.packages[:]
+                 pkgindex.packages.extend(iter(metadata.values()))
+                 self._update_pkgindex_header(pkgindex.header)
+ 
+             self._pkgindex_header = {}
+             self._merge_pkgindex_header(pkgindex.header, self._pkgindex_header)
+ 
+         return pkgindex if update_pkgindex else None
+ 
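The decision of whether a cached index entry can be trusted is made above purely from mtime and size, precisely so the xpak data does not have to be re-read; a condensed sketch of that test:

    import stat

    def index_entry_is_current(entry, st):
        # st is an os.stat_result for the package file on disk.
        try:
            return (int(entry["_mtime_"]) == st[stat.ST_MTIME]
                    and int(entry["SIZE"]) == st.st_size)
        except (KeyError, ValueError):
            return False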
+     def _populate_remote(self, getbinpkg_refresh=True):
+ 
+         self._remote_has_index = False
+         self._remotepkgs = {}
+         # Order by descending priority.
+         for repo in reversed(list(self._binrepos_conf.values())):
+             base_url = repo.sync_uri
+             parsed_url = urlparse(base_url)
+             host = parsed_url.netloc
+             port = parsed_url.port
+             user = None
+             passwd = None
+             user_passwd = ""
+             if "@" in host:
+                 user, host = host.split("@", 1)
+                 user_passwd = user + "@"
+                 if ":" in user:
+                     user, passwd = user.split(":", 1)
+ 
+             if port is not None:
+                 port_str = ":%s" % (port,)
+                 if host.endswith(port_str):
+                     host = host[: -len(port_str)]
+             pkgindex_file = os.path.join(
+                 self.settings["EROOT"],
+                 CACHE_PATH,
+                 "binhost",
+                 host,
+                 parsed_url.path.lstrip("/"),
+                 "Packages",
+             )
+             pkgindex = self._new_pkgindex()
+             try:
+                 f = io.open(
+                     _unicode_encode(
+                         pkgindex_file, encoding=_encodings["fs"], errors="strict"
+                     ),
+                     mode="r",
+                     encoding=_encodings["repo.content"],
+                     errors="replace",
+                 )
+                 try:
+                     pkgindex.read(f)
+                 finally:
+                     f.close()
+             except EnvironmentError as e:
+                 if e.errno != errno.ENOENT:
+                     raise
+             local_timestamp = pkgindex.header.get("TIMESTAMP", None)
+             try:
+                 download_timestamp = float(pkgindex.header.get("DOWNLOAD_TIMESTAMP", 0))
+             except ValueError:
+                 download_timestamp = 0
+             remote_timestamp = None
+             rmt_idx = self._new_pkgindex()
+             proc = None
+             tmp_filename = None
+             try:
+                 # urlparse.urljoin() only works correctly with recognized
+                 # protocols and requires the base url to have a trailing
+                 # slash, so join manually...
+                 url = base_url.rstrip("/") + "/Packages"
+                 f = None
+ 
+                 if not getbinpkg_refresh and local_timestamp:
+                     raise UseCachedCopyOfRemoteIndex()
+ 
+                 try:
+                     ttl = float(pkgindex.header.get("TTL", 0))
+                 except ValueError:
+                     pass
+                 else:
+                     if (
+                         download_timestamp
+                         and ttl
+                         and download_timestamp + ttl > time.time()
+                     ):
+                         raise UseCachedCopyOfRemoteIndex()
+ 
+                 # Set proxy settings for _urlopen -> urllib_request
+                 proxies = {}
+                 for proto in ("http", "https"):
+                     value = self.settings.get(proto + "_proxy")
+                     if value is not None:
+                         proxies[proto] = value
+ 
+                 # Don't use urlopen for https, unless
+                 # PEP 476 is supported (bug #469888).
+                 if repo.fetchcommand is None and (
+                     parsed_url.scheme not in ("https",) or _have_pep_476()
+                 ):
+                     try:
+                         f = _urlopen(
+                             url, if_modified_since=local_timestamp, proxies=proxies
+                         )
+                         if hasattr(f, "headers") and f.headers.get("timestamp", ""):
+                             remote_timestamp = f.headers.get("timestamp")
+                     except IOError as err:
+                         if (
+                             hasattr(err, "code") and err.code == 304
+                         ):  # not modified (since local_timestamp)
+                             raise UseCachedCopyOfRemoteIndex()
+ 
+                         if parsed_url.scheme in ("ftp", "http", "https"):
+                             # This protocol is supposedly supported by urlopen,
+                             # so apparently there's a problem with the url
+                             # or a bug in urlopen.
+                             if self.settings.get("PORTAGE_DEBUG", "0") != "0":
+                                 traceback.print_exc()
+ 
+                             raise
+                     except ValueError:
+                         raise ParseError(
+                             "Invalid Portage BINHOST value '%s'" % url.lstrip()
+                         )
+ 
+                 if f is None:
+ 
+                     path = parsed_url.path.rstrip("/") + "/Packages"
+ 
+                     if repo.fetchcommand is None and parsed_url.scheme == "ssh":
+                         # Use a pipe so that we can terminate the download
+                         # early if we detect that the TIMESTAMP header
+                         # matches that of the cached Packages file.
+                         ssh_args = ["ssh"]
+                         if port is not None:
+                             ssh_args.append("-p%s" % (port,))
+                         # NOTE: shlex evaluates embedded quotes
+                         ssh_args.extend(
+                             portage.util.shlex_split(
+                                 self.settings.get("PORTAGE_SSH_OPTS", "")
+                             )
+                         )
+                         ssh_args.append(user_passwd + host)
+                         ssh_args.append("--")
+                         ssh_args.append("cat")
+                         ssh_args.append(path)
+ 
+                         proc = subprocess.Popen(ssh_args, stdout=subprocess.PIPE)
+                         f = proc.stdout
+                     else:
+                         if repo.fetchcommand is None:
+                             setting = "FETCHCOMMAND_" + parsed_url.scheme.upper()
+                             fcmd = self.settings.get(setting)
+                             if not fcmd:
+                                 fcmd = self.settings.get("FETCHCOMMAND")
+                                 if not fcmd:
+                                     raise EnvironmentError("FETCHCOMMAND is unset")
+                         else:
+                             fcmd = repo.fetchcommand
+ 
+                         fd, tmp_filename = tempfile.mkstemp()
+                         tmp_dirname, tmp_basename = os.path.split(tmp_filename)
+                         os.close(fd)
+ 
+                         fcmd_vars = {
+                             "DISTDIR": tmp_dirname,
+                             "FILE": tmp_basename,
+                             "URI": url,
+                         }
+ 
+                         for k in ("PORTAGE_SSH_OPTS",):
+                             v = self.settings.get(k)
+                             if v is not None:
+                                 fcmd_vars[k] = v
+ 
+                         success = portage.getbinpkg.file_get(
+                             fcmd=fcmd, fcmd_vars=fcmd_vars
+                         )
+                         if not success:
+                             raise EnvironmentError("%s failed" % (fcmd,))
+                         f = open(tmp_filename, "rb")
+ 
+                 f_dec = codecs.iterdecode(
+                     f, _encodings["repo.content"], errors="replace"
+                 )
+                 try:
+                     rmt_idx.readHeader(f_dec)
+                     if (
+                         not remote_timestamp
+                     ):  # in case it had not been read from HTTP header
+                         remote_timestamp = rmt_idx.header.get("TIMESTAMP", None)
+                     if not remote_timestamp:
+                         # no timestamp in the header, something's wrong
+                         pkgindex = None
+                         writemsg(
+                             _(
+                                 "\n\n!!! Binhost package index"
+                                 " has no TIMESTAMP field.\n"
+                             ),
+                             noiselevel=-1,
+                         )
+                     else:
+                         if not self._pkgindex_version_supported(rmt_idx):
+                             writemsg(
+                                 _(
+                                     "\n\n!!! Binhost package index version"
+                                     " is not supported: '%s'\n"
+                                 )
+                                 % rmt_idx.header.get("VERSION"),
+                                 noiselevel=-1,
+                             )
+                             pkgindex = None
+                         elif local_timestamp != remote_timestamp:
+                             rmt_idx.readBody(f_dec)
+                             pkgindex = rmt_idx
+                 finally:
+                     # Timeout after 5 seconds, in case close() blocks
+                     # indefinitely (see bug #350139).
+                     try:
+                         try:
+                             AlarmSignal.register(5)
+                             f.close()
+                         finally:
+                             AlarmSignal.unregister()
+                     except AlarmSignal:
+                         writemsg(
+                             "\n\n!!! %s\n"
+                             % _("Timed out while closing connection to binhost"),
+                             noiselevel=-1,
+                         )
+             except UseCachedCopyOfRemoteIndex:
+                 writemsg_stdout("\n")
+                 writemsg_stdout(
+                     colorize(
+                         "GOOD",
+                         _("Local copy of remote index is up-to-date and will be used."),
+                     )
+                     + "\n"
+                 )
+                 rmt_idx = pkgindex
+             except EnvironmentError as e:
+                 # This includes URLError which is raised for SSL
+                 # certificate errors when PEP 476 is supported.
+                 writemsg(
+                     _("\n\n!!! Error fetching binhost package" " info from '%s'\n")
+                     % _hide_url_passwd(base_url)
+                 )
+                 # With Python 2, the EnvironmentError message may
+                 # contain bytes or unicode, so use str to ensure
+                 # safety with all locales (bug #532784).
+                 try:
+                     error_msg = str(e)
+                 except UnicodeDecodeError as uerror:
+                     error_msg = str(uerror.object, encoding="utf_8", errors="replace")
+                 writemsg("!!! %s\n\n" % error_msg)
+                 del e
+                 pkgindex = None
+             if proc is not None:
+                 if proc.poll() is None:
+                     proc.kill()
+                     proc.wait()
+                 proc = None
+             if tmp_filename is not None:
+                 try:
+                     os.unlink(tmp_filename)
+                 except OSError:
+                     pass
+             if pkgindex is rmt_idx:
+                 pkgindex.modified = False  # don't update the header
+                 pkgindex.header["DOWNLOAD_TIMESTAMP"] = "%d" % time.time()
+                 try:
+                     ensure_dirs(os.path.dirname(pkgindex_file))
+                     f = atomic_ofstream(pkgindex_file)
+                     pkgindex.write(f)
+                     f.close()
+                 except (IOError, PortageException):
+                     if os.access(os.path.dirname(pkgindex_file), os.W_OK):
+                         raise
+                     # The current user doesn't have permission to cache the
+                     # file, but that's alright.
+             if pkgindex:
+                 remote_base_uri = pkgindex.header.get("URI", base_url)
+                 for d in pkgindex.packages:
+                     cpv = _pkg_str(
+                         d["CPV"], metadata=d, settings=self.settings, db=self.dbapi
+                     )
+                     # Local package instances override remote instances
+                     # with the same instance_key.
+                     if self.dbapi.cpv_exists(cpv):
+                         continue
+ 
+                     d["CPV"] = cpv
+                     d["BASE_URI"] = remote_base_uri
+                     d["PKGINDEX_URI"] = url
+                     # FETCHCOMMAND and RESUMECOMMAND may be specified by
+                     # binrepos.conf; otherwise, ensure that they do not
+                     # propagate from the Packages index, since it may be
+                     # unsafe to execute remotely specified commands.
+                     if repo.fetchcommand is None:
+                         d.pop("FETCHCOMMAND", None)
+                     else:
+                         d["FETCHCOMMAND"] = repo.fetchcommand
+                     if repo.resumecommand is None:
+                         d.pop("RESUMECOMMAND", None)
+                     else:
+                         d["RESUMECOMMAND"] = repo.resumecommand
+                     self._remotepkgs[self.dbapi._instance_key(cpv)] = d
+                     self.dbapi.cpv_inject(cpv)
+ 
+                 self._remote_has_index = True
+                 self._merge_pkgindex_header(pkgindex.header, self._pkgindex_header)
+ 
+     def _populate_additional(self, repos):
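+         # Index packages from the given additional repositories: build a _pkg_str
+         # from each package's metadata and register it with self.dbapi.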
+         for repo in repos:
+             aux_keys = list(set(chain(repo._aux_cache_keys, repo._pkg_str_aux_keys)))
+             for cpv in repo.cpv_all():
+                 metadata = dict(zip(aux_keys, repo.aux_get(cpv, aux_keys)))
+                 pkg = _pkg_str(cpv, metadata=metadata, settings=repo.settings, db=repo)
+                 instance_key = self.dbapi._instance_key(pkg)
+                 self._additional_pkgs[instance_key] = pkg
+                 self.dbapi.cpv_inject(pkg)
+ 
+     def inject(self, cpv, filename=None):
+         """Add a freshly built package to the database.  This updates
+         $PKGDIR/Packages with the new package metadata (including MD5).
+         @param cpv: The cpv of the new package to inject
+         @type cpv: string
+         @param filename: File path of the package to inject, or None if it's
+                 already in the location returned by getname()
+         @type filename: string
+         @rtype: _pkg_str or None
+         @return: A _pkg_str instance on success, or None on failure.
+         """
+         mycat, mypkg = catsplit(cpv)
+         if not self.populated:
+             self.populate()
+         if filename is None:
+             full_path = self.getname(cpv)
+         else:
+             full_path = filename
+         try:
+             s = os.stat(full_path)
+         except OSError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+             del e
+             writemsg(
+                 _("!!! Binary package does not exist: '%s'\n") % full_path,
+                 noiselevel=-1,
+             )
+             return
+         metadata = self._read_metadata(full_path, s)
+         invalid_depend = False
+         try:
+             self._eval_use_flags(cpv, metadata)
+         except portage.exception.InvalidDependString:
+             invalid_depend = True
+         if invalid_depend or not metadata.get("SLOT"):
+             writemsg(_("!!! Invalid binary package: '%s'\n") % full_path, noiselevel=-1)
+             return
+ 
+         fetched = False
+         try:
+             build_id = cpv.build_id
+         except AttributeError:
+             build_id = None
+         else:
+             instance_key = self.dbapi._instance_key(cpv)
+             if instance_key in self.dbapi.cpvdict:
+                 # This means we've been called by aux_update (or
+                 # similar). The instance key typically changes (due to
+                 # file modification), so we need to discard existing
+                 # instance key references.
+                 self.dbapi.cpv_remove(cpv)
+                 self._pkg_paths.pop(instance_key, None)
+                 if self._remotepkgs is not None:
+                     fetched = self._remotepkgs.pop(instance_key, None)
+ 
+         cpv = _pkg_str(cpv, metadata=metadata, settings=self.settings, db=self.dbapi)
+ 
+         # Reread the Packages index (in case it's been changed by another
+         # process) and then update it, all while holding a lock.
+         pkgindex_lock = None
+         try:
+             os.makedirs(self.pkgdir, exist_ok=True)
+             pkgindex_lock = lockfile(self._pkgindex_file, wantnewlockfile=1)
+             if filename is not None:
+                 new_filename = self.getname(cpv, allocate_new=True)
+                 try:
+                     samefile = os.path.samefile(filename, new_filename)
+                 except OSError:
+                     samefile = False
+                 if not samefile:
+                     self._ensure_dir(os.path.dirname(new_filename))
+                     _movefile(filename, new_filename, mysettings=self.settings)
+                 full_path = new_filename
+ 
+             basename = os.path.basename(full_path)
+             pf = catsplit(cpv)[1]
+             if build_id is None and not fetched and basename.endswith(".xpak"):
+                 # Apply the newly assigned BUILD_ID. This is intended
+                 # to occur only for locally built packages. If the
+                 # package was fetched, we want to preserve its
+                 # attributes, so that we can later distinguish that it
+                 # is identical to its remote counterpart.
+                 build_id = self._parse_build_id(basename)
+                 metadata["BUILD_ID"] = str(build_id)
+                 cpv = _pkg_str(
+                     cpv, metadata=metadata, settings=self.settings, db=self.dbapi
+                 )
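+                 # Write the newly assigned BUILD_ID back into the package's xpak
+                 # metadata so the on-disk file matches the index entry.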
+                 binpkg = portage.xpak.tbz2(full_path)
+                 binary_data = binpkg.get_data()
+                 binary_data[b"BUILD_ID"] = _unicode_encode(metadata["BUILD_ID"])
+                 binpkg.recompose_mem(portage.xpak.xpak_mem(binary_data))
+ 
+             self._file_permissions(full_path)
+             pkgindex = self._load_pkgindex()
+             if not self._pkgindex_version_supported(pkgindex):
+                 pkgindex = self._new_pkgindex()
+ 
+             d = self._inject_file(pkgindex, cpv, full_path)
+             self._update_pkgindex_header(pkgindex.header)
+             self._pkgindex_write(pkgindex)
+ 
+         finally:
+             if pkgindex_lock:
+                 unlockfile(pkgindex_lock)
+ 
+         # This is used to record BINPKGMD5 in the installed package
+         # database, for a package that has just been built.
+         cpv._metadata["MD5"] = d["MD5"]
+ 
+         return cpv
+ 
+     def _read_metadata(self, filename, st, keys=None):
+         """
+         Read metadata from a binary package. The returned metadata
+         dictionary will contain empty strings for any values that
+         are undefined (this is important because the _pkg_str class
+         distinguishes between missing and undefined values).
+ 
+         @param filename: File path of the binary package
+         @type filename: string
+         @param st: stat result for the binary package
+         @type st: os.stat_result
+         @param keys: optional list of specific metadata keys to retrieve
+         @type keys: iterable
+         @rtype: dict
+         @return: package metadata
+         """
+         if keys is None:
+             keys = self.dbapi._aux_cache_keys
+             metadata = self.dbapi._aux_cache_slot_dict()
+         else:
+             metadata = {}
+         binary_metadata = portage.xpak.tbz2(filename).get_data()
+         for k in keys:
+             if k == "_mtime_":
+                 metadata[k] = str(st[stat.ST_MTIME])
+             elif k == "SIZE":
+                 metadata[k] = str(st.st_size)
+             else:
+                 v = binary_metadata.get(_unicode_encode(k))
+                 if v is None:
+                     if k == "EAPI":
+                         metadata[k] = "0"
+                     else:
+                         metadata[k] = ""
+                 else:
+                     v = _unicode_decode(v)
+                     metadata[k] = " ".join(v.split())
+         return metadata
+ 
+     def _inject_file(self, pkgindex, cpv, filename):
+         """
+         Add a package to internal data structures, and add an
+         entry to the given pkgindex.
+         @param pkgindex: The PackageIndex instance to which an entry
+                 will be added.
+         @type pkgindex: PackageIndex
+         @param cpv: A _pkg_str instance corresponding to the package
+                 being injected.
+         @type cpv: _pkg_str
+         @param filename: Absolute file path of the package to inject.
+         @type filename: string
+         @rtype: dict
+         @return: A dict corresponding to the new entry which has been
+                 added to pkgindex. This may be used to access the checksums
+                 which have just been generated.
+         """
+         # Update state for future isremote calls.
+         instance_key = self.dbapi._instance_key(cpv)
+         if self._remotepkgs is not None:
+             self._remotepkgs.pop(instance_key, None)
+ 
+         self.dbapi.cpv_inject(cpv)
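+         # Record the package path relative to self.pkgdir.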
+         self._pkg_paths[instance_key] = filename[len(self.pkgdir) + 1 :]
+         d = self._pkgindex_entry(cpv)
+ 
+         # If found, remove package(s) with duplicate path.
+         path = d.get("PATH", "")
+         for i in range(len(pkgindex.packages) - 1, -1, -1):
+             d2 = pkgindex.packages[i]
+             if path and path == d2.get("PATH"):
+                 # Handle path collisions in $PKGDIR/All
+                 # when CPV is not identical.
+                 del pkgindex.packages[i]
+             elif cpv == d2.get("CPV"):
+                 if path == d2.get("PATH", ""):
+                     del pkgindex.packages[i]
+ 
+         pkgindex.packages.append(d)
+         return d
+ 
+     def _pkgindex_write(self, pkgindex):
+         contents = codecs.getwriter(_encodings["repo.content"])(io.BytesIO())
+         pkgindex.write(contents)
+         contents = contents.getvalue()
+         atime = mtime = int(pkgindex.header["TIMESTAMP"])
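+         # Always write the plain Packages file; also write a gzipped copy when
+         # the compress-index FEATURE is enabled.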
+         output_files = [
+             (atomic_ofstream(self._pkgindex_file, mode="wb"), self._pkgindex_file, None)
+         ]
+ 
+         if "compress-index" in self.settings.features:
+             gz_fname = self._pkgindex_file + ".gz"
+             fileobj = atomic_ofstream(gz_fname, mode="wb")
+             output_files.append(
+                 (
+                     GzipFile(filename="", mode="wb", fileobj=fileobj, mtime=mtime),
+                     gz_fname,
+                     fileobj,
+                 )
+             )
+ 
+         for f, fname, f_close in output_files:
+             f.write(contents)
+             f.close()
+             if f_close is not None:
+                 f_close.close()
+             self._file_permissions(fname)
+             # some seconds might have elapsed since TIMESTAMP
+             os.utime(fname, (atime, mtime))
+ 
+     def _pkgindex_entry(self, cpv):
+         """
+         Performs checksums, and gets size and mtime via lstat.
+         Raises InvalidDependString if necessary.
+         @rtype: dict
+         @return: a dict containing an entry for the given cpv.
+         """
+ 
+         pkg_path = self.getname(cpv)
+ 
+         d = dict(cpv._metadata.items())
+         d.update(perform_multiple_checksums(pkg_path, hashes=self._pkgindex_hashes))
+ 
+         d["CPV"] = cpv
+         st = os.lstat(pkg_path)
+         d["_mtime_"] = str(st[stat.ST_MTIME])
+         d["SIZE"] = str(st.st_size)
+ 
+         rel_path = pkg_path[len(self.pkgdir) + 1 :]
+         # record location if it's non-default
+         if rel_path != cpv + ".tbz2":
+             d["PATH"] = rel_path
+ 
+         return d
+ 
+     def _new_pkgindex(self):
+         return portage.getbinpkg.PackageIndex(
+             allowed_pkg_keys=self._pkgindex_allowed_pkg_keys,
+             default_header_data=self._pkgindex_default_header_data,
+             default_pkg_data=self._pkgindex_default_pkg_data,
+             inherited_keys=self._pkgindex_inherited_keys,
+             translated_keys=self._pkgindex_translated_keys,
+         )
+ 
+     @staticmethod
+     def _merge_pkgindex_header(src, dest):
+         """
+         Merge Packages header settings from src to dest, in order to
+         propagate implicit IUSE and USE_EXPAND settings for use with
+         binary and installed packages. Values are appended, so the
+         result is a union of elements from src and dest.
+ 
+         Pull in ARCH if it's not defined, since it's used for validation
+         by emerge's profile_check function, and also for KEYWORDS logic
+         in the _getmaskingstatus function.
+ 
+         @param src: source mapping (read only)
+         @type src: Mapping
+         @param dest: destination mapping
+         @type dest: MutableMapping
+         """
+         for k, v in iter_iuse_vars(src):
+             v_before = dest.get(k)
+             if v_before is not None:
+                 merged_values = set(v_before.split())
+                 merged_values.update(v.split())
+                 v = " ".join(sorted(merged_values))
+             dest[k] = v
+ 
+         if "ARCH" not in dest and "ARCH" in src:
+             dest["ARCH"] = src["ARCH"]
+ 
+     def _propagate_config(self, config):
+         """
+         Propagate implicit IUSE and USE_EXPAND settings from the binary
+         package database to a config instance. If settings are not
+         available to propagate, then this will do nothing and return
+         False.
+ 
+         @param config: config instance
+         @type config: portage.config
+         @rtype: bool
+         @return: True if settings successfully propagated, False if settings
+                 were not available to propagate.
+         """
+         if self._pkgindex_header is None:
+             return False
+ 
+         self._merge_pkgindex_header(
+             self._pkgindex_header, config.configdict["defaults"]
+         )
+         config.regenerate()
+         config._init_iuse()
+         return True
+ 
+     def _update_pkgindex_header(self, header):
+         """
+         Add useful settings to the Packages file header, for use by
+         binhost clients.
+ 
+         This will return silently if the current profile is invalid or
+         does not have an IUSE_IMPLICIT variable, since it's useful to
+         maintain a cache of implicit IUSE settings for use with binary
+         packages.
+         """
+         if not (self.settings.profile_path and "IUSE_IMPLICIT" in self.settings):
+             header.setdefault("VERSION", str(self._pkgindex_version))
+             return
+ 
+         portdir = normalize_path(os.path.realpath(self.settings["PORTDIR"]))
+         profiles_base = os.path.join(portdir, "profiles") + os.path.sep
+         if self.settings.profile_path:
+             profile_path = normalize_path(os.path.realpath(self.settings.profile_path))
+             if profile_path.startswith(profiles_base):
+                 profile_path = profile_path[len(profiles_base) :]
+             header["PROFILE"] = profile_path
+         header["VERSION"] = str(self._pkgindex_version)
+         base_uri = self.settings.get("PORTAGE_BINHOST_HEADER_URI")
+         if base_uri:
+             header["URI"] = base_uri
+         else:
+             header.pop("URI", None)
+         for k in (
+             list(self._pkgindex_header_keys)
+             + self.settings.get("USE_EXPAND_IMPLICIT", "").split()
+             + self.settings.get("USE_EXPAND_UNPREFIXED", "").split()
+         ):
+             v = self.settings.get(k, None)
+             if v:
+                 header[k] = v
+             else:
+                 header.pop(k, None)
+ 
+         # These values may be useful for using a binhost without
+         # having a local copy of the profile (bug #470006).
+         for k in self.settings.get("USE_EXPAND_IMPLICIT", "").split():
+             k = "USE_EXPAND_VALUES_" + k
+             v = self.settings.get(k)
+             if v:
+                 header[k] = v
+             else:
+                 header.pop(k, None)
+ 
+     def _pkgindex_version_supported(self, pkgindex):
+         version = pkgindex.header.get("VERSION")
+         if version:
+             try:
+                 if int(version) <= self._pkgindex_version:
+                     return True
+             except ValueError:
+                 pass
+         return False
+ 
+     def _eval_use_flags(self, cpv, metadata):
+         use = frozenset(metadata.get("USE", "").split())
+         for k in self._pkgindex_use_evaluated_keys:
+             if k.endswith("DEPEND"):
+                 token_class = Atom
+             else:
+                 token_class = None
+ 
+             deps = metadata.get(k)
+             if deps is None:
+                 continue
+             try:
+                 deps = use_reduce(deps, uselist=use, token_class=token_class)
+                 deps = paren_enclose(deps)
+             except portage.exception.InvalidDependString as e:
+                 writemsg("%s: %s\n" % (k, e), noiselevel=-1)
+                 raise
+             metadata[k] = deps
+ 
+     def exists_specific(self, cpv):
+         if not self.populated:
+             self.populate()
+         return self.dbapi.match(
+             dep_expand("=" + cpv, mydb=self.dbapi, settings=self.settings)
+         )
+ 
+     def dep_bestmatch(self, mydep):
+         "compatibility method -- all matches, not just visible ones"
+         if not self.populated:
+             self.populate()
+         writemsg("\n\n", 1)
+         writemsg("mydep: %s\n" % mydep, 1)
+         mydep = dep_expand(mydep, mydb=self.dbapi, settings=self.settings)
+         writemsg("mydep: %s\n" % mydep, 1)
+         mykey = dep_getkey(mydep)
+         writemsg("mykey: %s\n" % mykey, 1)
+         mymatch = best(match_from_list(mydep, self.dbapi.cp_list(mykey)))
+         writemsg("mymatch: %s\n" % mymatch, 1)
+         if mymatch is None:
+             return ""
+         return mymatch
+ 
+     def getname(self, cpv, allocate_new=None):
+         """Returns a file location for this package.
+         If cpv has both build_time and build_id attributes, then the
+         path to the specific corresponding instance is returned.
+         Otherwise, allocate a new path and return that. When allocating
+         a new path, behavior depends on the binpkg-multi-instance
+         FEATURES setting.
+         """
+         if not self.populated:
+             self.populate()
+ 
+         try:
+             cpv.cp
+         except AttributeError:
+             cpv = _pkg_str(cpv)
+ 
+         filename = None
+         if allocate_new:
+             filename = self._allocate_filename(cpv)
+         elif self._is_specific_instance(cpv):
+             instance_key = self.dbapi._instance_key(cpv)
+             path = self._pkg_paths.get(instance_key)
+             if path is not None:
+                 filename = os.path.join(self.pkgdir, path)
+ 
+         if filename is None and not allocate_new:
+             try:
+                 instance_key = self.dbapi._instance_key(cpv, support_string=True)
+             except KeyError:
+                 pass
+             else:
+                 filename = self._pkg_paths.get(instance_key)
+                 if filename is not None:
+                     filename = os.path.join(self.pkgdir, filename)
+                 elif instance_key in self._additional_pkgs:
+                     return None
+ 
+         if filename is None:
+             if self._multi_instance:
+                 pf = catsplit(cpv)[1]
+                 filename = "%s-%s.xpak" % (os.path.join(self.pkgdir, cpv.cp, pf), "1")
+             else:
+                 filename = os.path.join(self.pkgdir, cpv + ".tbz2")
+ 
+         return filename
+ 
+     def _is_specific_instance(self, cpv):
+         specific = True
+         try:
+             build_time = cpv.build_time
+             build_id = cpv.build_id
+         except AttributeError:
+             specific = False
+         else:
+             if build_time is None or build_id is None:
+                 specific = False
+         return specific
+ 
+     def _max_build_id(self, cpv):
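+         # Return the highest build_id among existing instances of this cpv (0 if none).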
+         max_build_id = 0
+         for x in self.dbapi.cp_list(cpv.cp):
+             if x == cpv and x.build_id is not None and x.build_id > max_build_id:
+                 max_build_id = x.build_id
+         return max_build_id
+ 
+     def _allocate_filename(self, cpv):
+         return os.path.join(self.pkgdir, cpv + ".tbz2")
+ 
+     def _allocate_filename_multi(self, cpv):
+ 
+         # First, get the max build_id found when _populate was
+         # called.
+         max_build_id = self._max_build_id(cpv)
+ 
+         # A new package may have been added concurrently since the
+         # last _populate call, so increment build_id until
+         # we locate an unused id.
+         pf = catsplit(cpv)[1]
+         build_id = max_build_id + 1
+ 
+         while True:
+             filename = "%s-%s.xpak" % (os.path.join(self.pkgdir, cpv.cp, pf), build_id)
+             if os.path.exists(filename):
+                 build_id += 1
+             else:
+                 return filename
+ 
+     @staticmethod
+     def _parse_build_id(filename):
+         build_id = -1
+         suffixlen = len(".xpak")
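+         # The build id is the numeric suffix between the last hyphen and the
+         # .xpak extension; -1 is returned if it cannot be parsed.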
+         hyphen = filename.rfind("-", 0, -(suffixlen + 1))
+         if hyphen != -1:
+             try:
+                 build_id = int(filename[hyphen + 1 : -suffixlen])
+             except ValueError:
+                 pass
+         return build_id
+ 
+     def isremote(self, pkgname):
+         """Returns true if the package is kept remotely and it has not been
+         downloaded (or it is only partially downloaded)."""
+         if self._remotepkgs is None:
+             return False
+         instance_key = self.dbapi._instance_key(pkgname)
+         if instance_key not in self._remotepkgs:
+             return False
+         if instance_key in self._additional_pkgs:
+             return False
+         # Presence in self._remotepkgs implies that it's remote. When a
+         # package is downloaded, state is updated by self.inject().
+         return True
+ 
+     def get_pkgindex_uri(self, cpv):
+         """Returns the URI to the Packages file for a given package."""
+         uri = None
+         if self._remotepkgs is not None:
+             metadata = self._remotepkgs.get(self.dbapi._instance_key(cpv))
+             if metadata is not None:
+                 uri = metadata["PKGINDEX_URI"]
+         return uri
+ 
+     def gettbz2(self, pkgname):
+         """Fetches the package from a remote site, if necessary.  Attempts to
+         resume if the file appears to be partially downloaded."""
+         instance_key = self.dbapi._instance_key(pkgname)
+         tbz2_path = self.getname(pkgname)
+         tbz2name = os.path.basename(tbz2_path)
+         resume = False
+         if os.path.exists(tbz2_path):
+             if tbz2name[:-5] not in self.invalids:
+                 return
+ 
+             resume = True
+             writemsg(
+                 _(
+                     "Resuming download of this tbz2, but it is possible that it is corrupt.\n"
+                 ),
+                 noiselevel=-1,
+             )
+ 
+         mydest = os.path.dirname(self.getname(pkgname))
+         self._ensure_dir(mydest)
+         # urljoin doesn't work correctly with unrecognized protocols like sftp
+         if self._remote_has_index:
+             rel_url = self._remotepkgs[instance_key].get("PATH")
+             if not rel_url:
+                 rel_url = pkgname + ".tbz2"
+             remote_base_uri = self._remotepkgs[instance_key]["BASE_URI"]
+             url = remote_base_uri.rstrip("/") + "/" + rel_url.lstrip("/")
+         else:
+             url = self.settings["PORTAGE_BINHOST"].rstrip("/") + "/" + tbz2name
+         protocol = urlparse(url)[0]
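+         # Prefer the protocol-specific FETCHCOMMAND/RESUMECOMMAND variable,
+         # falling back to the generic setting.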
+         fcmd_prefix = "FETCHCOMMAND"
+         if resume:
+             fcmd_prefix = "RESUMECOMMAND"
+         fcmd = self.settings.get(fcmd_prefix + "_" + protocol.upper())
+         if not fcmd:
+             fcmd = self.settings.get(fcmd_prefix)
+         success = portage.getbinpkg.file_get(url, mydest, fcmd=fcmd)
+         if not success:
+             try:
+                 os.unlink(self.getname(pkgname))
+             except OSError:
+                 pass
+             raise portage.exception.FileNotFound(mydest)
+         self.inject(pkgname)
+ 
+     def _load_pkgindex(self):
+         pkgindex = self._new_pkgindex()
+         try:
+             f = io.open(
+                 _unicode_encode(
+                     self._pkgindex_file, encoding=_encodings["fs"], errors="strict"
+                 ),
+                 mode="r",
+                 encoding=_encodings["repo.content"],
+                 errors="replace",
+             )
+         except EnvironmentError:
+             pass
+         else:
+             try:
+                 pkgindex.read(f)
+             finally:
+                 f.close()
+         return pkgindex
+ 
+     def _get_digests(self, pkg):
+ 
+         try:
+             cpv = pkg.cpv
+         except AttributeError:
+             cpv = pkg
+ 
+         _instance_key = self.dbapi._instance_key
+         instance_key = _instance_key(cpv)
+         digests = {}
+         metadata = (
+             None if self._remotepkgs is None else self._remotepkgs.get(instance_key)
+         )
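+         # Fall back to scanning the local Packages index when the package is not
+         # known from a remote index.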
+         if metadata is None:
+             for d in self._load_pkgindex().packages:
+                 if d["CPV"] == cpv and instance_key == _instance_key(
+                     _pkg_str(d["CPV"], metadata=d, settings=self.settings)
+                 ):
+                     metadata = d
+                     break
+ 
+         if metadata is None:
+             return digests
+ 
+         for k in get_valid_checksum_keys():
+             v = metadata.get(k)
+             if not v:
+                 continue
+             digests[k] = v
+ 
+         if "SIZE" in metadata:
+             try:
+                 digests["size"] = int(metadata["SIZE"])
+             except ValueError:
+                 writemsg(
+                     _("!!! Malformed SIZE attribute in remote " "metadata for '%s'\n")
+                     % cpv
+                 )
+ 
+         return digests
+ 
+     def digestCheck(self, pkg):
+         """
+         Verify digests for the given package and raise DigestException
+         if verification fails.
+         @rtype: bool
+         @return: True if digests could be located, False otherwise.
+         """
+ 
+         digests = self._get_digests(pkg)
+ 
+         if not digests:
+             return False
+ 
+         try:
+             cpv = pkg.cpv
+         except AttributeError:
+             cpv = pkg
+ 
+         pkg_path = self.getname(cpv)
+         hash_filter = _hash_filter(self.settings.get("PORTAGE_CHECKSUM_FILTER", ""))
+         if not hash_filter.transparent:
+             digests = _apply_hash_filter(digests, hash_filter)
+         eout = EOutput()
+         eout.quiet = self.settings.get("PORTAGE_QUIET") == "1"
+         ok, st = _check_distfile(pkg_path, digests, eout, show_errors=0)
+         if not ok:
+             ok, reason = verify_all(pkg_path, digests)
+             if not ok:
+                 raise portage.exception.DigestException((pkg_path,) + tuple(reason))
+ 
+         return True
+ 
+     def getslot(self, mycatpkg):
+         "Get a slot for a catpkg; assume it exists."
+         myslot = ""
+         try:
+             myslot = self.dbapi._pkg_str(mycatpkg, None).slot
+         except KeyError:
+             pass
+         return myslot
diff --cc lib/portage/dbapi/vartree.py
index 749963fa9,8ffb23b1c..73202f625
--- a/lib/portage/dbapi/vartree.py
+++ b/lib/portage/dbapi/vartree.py
@@@ -1,62 -1,70 +1,72 @@@
  # Copyright 1998-2021 Gentoo Authors
  # Distributed under the terms of the GNU General Public License v2
  
- __all__ = [
- 	"vardbapi", "vartree", "dblink"] + \
- 	["write_contents", "tar_contents"]
+ __all__ = ["vardbapi", "vartree", "dblink"] + ["write_contents", "tar_contents"]
  
  import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 	'hashlib:md5',
- 	'portage.checksum:_perform_md5_merge@perform_md5',
- 	'portage.data:portage_gid,portage_uid,secpass',
- 	'portage.dbapi.dep_expand:dep_expand',
- 	'portage.dbapi._MergeProcess:MergeProcess',
- 	'portage.dbapi._SyncfsProcess:SyncfsProcess',
- 	'portage.dep:dep_getkey,isjustname,isvalidatom,match_from_list,' + \
- 	 	'use_reduce,_slot_separator,_repo_separator',
- 	'portage.eapi:_get_eapi_attrs',
- 	'portage.elog:collect_ebuild_messages,collect_messages,' + \
- 		'elog_process,_merge_logentries',
- 	'portage.locks:lockdir,unlockdir,lockfile,unlockfile',
- 	'portage.output:bold,colorize',
- 	'portage.package.ebuild.doebuild:doebuild_environment,' + \
- 		'_merge_unicode_error',
- 	'portage.package.ebuild.prepare_build_dirs:prepare_build_dirs',
- 	'portage.package.ebuild._ipc.QueryCommand:QueryCommand',
- 	'portage.process:find_binary',
- 	'portage.util:apply_secpass_permissions,ConfigProtect,ensure_dirs,' + \
- 		'writemsg,writemsg_level,write_atomic,atomic_ofstream,writedict,' + \
- 		'grabdict,normalize_path,new_protect_filename',
- 	'portage.util._compare_files:compare_files',
- 	'portage.util.digraph:digraph',
- 	'portage.util.env_update:env_update',
- 	'portage.util.install_mask:install_mask_dir,InstallMask,_raise_exc',
- 	'portage.util.listdir:dircache,listdir',
- 	'portage.util.movefile:movefile',
- 	'portage.util.path:first_existing,iter_parents',
- 	'portage.util.writeable_check:get_ro_checker',
- 	'portage.util._xattr:xattr',
- 	'portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry',
- 	'portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap',
- 	'portage.util._dyn_libs.LinkageMapMachO:LinkageMapMachO',
- 	'portage.util._dyn_libs.LinkageMapPeCoff:LinkageMapPeCoff',
- 	'portage.util._dyn_libs.LinkageMapXCoff:LinkageMapXCoff',
- 	'portage.util._dyn_libs.NeededEntry:NeededEntry',
- 	'portage.util._async.SchedulerInterface:SchedulerInterface',
- 	'portage.util._eventloop.global_event_loop:global_event_loop',
- 	'portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,' + \
- 		'_get_slot_re,_pkgsplit@pkgsplit,_pkg_str,_unknown_repo',
- 	'subprocess',
- 	'tarfile',
+ 
+ portage.proxy.lazyimport.lazyimport(
+     globals(),
+     "hashlib:md5",
+     "portage.checksum:_perform_md5_merge@perform_md5",
+     "portage.data:portage_gid,portage_uid,secpass",
+     "portage.dbapi.dep_expand:dep_expand",
+     "portage.dbapi._MergeProcess:MergeProcess",
+     "portage.dbapi._SyncfsProcess:SyncfsProcess",
+     "portage.dep:dep_getkey,isjustname,isvalidatom,match_from_list,"
+     + "use_reduce,_slot_separator,_repo_separator",
+     "portage.eapi:_get_eapi_attrs",
+     "portage.elog:collect_ebuild_messages,collect_messages,"
+     + "elog_process,_merge_logentries",
+     "portage.locks:lockdir,unlockdir,lockfile,unlockfile",
+     "portage.output:bold,colorize",
+     "portage.package.ebuild.doebuild:doebuild_environment," + "_merge_unicode_error",
+     "portage.package.ebuild.prepare_build_dirs:prepare_build_dirs",
+     "portage.package.ebuild._ipc.QueryCommand:QueryCommand",
+     "portage.process:find_binary",
+     "portage.util:apply_secpass_permissions,ConfigProtect,ensure_dirs,"
+     + "writemsg,writemsg_level,write_atomic,atomic_ofstream,writedict,"
+     + "grabdict,normalize_path,new_protect_filename",
+     "portage.util._compare_files:compare_files",
+     "portage.util.digraph:digraph",
+     "portage.util.env_update:env_update",
+     "portage.util.install_mask:install_mask_dir,InstallMask,_raise_exc",
+     "portage.util.listdir:dircache,listdir",
+     "portage.util.movefile:movefile",
+     "portage.util.path:first_existing,iter_parents",
+     "portage.util.writeable_check:get_ro_checker",
+     "portage.util._xattr:xattr",
+     "portage.util._dyn_libs.PreservedLibsRegistry:PreservedLibsRegistry",
+     "portage.util._dyn_libs.LinkageMapELF:LinkageMapELF@LinkageMap",
+     "portage.util._dyn_libs.NeededEntry:NeededEntry",
+     "portage.util._async.SchedulerInterface:SchedulerInterface",
+     "portage.util._eventloop.global_event_loop:global_event_loop",
+     "portage.versions:best,catpkgsplit,catsplit,cpv_getkey,vercmp,"
+     + "_get_slot_re,_pkgsplit@pkgsplit,_pkg_str,_unknown_repo",
+     "subprocess",
+     "tarfile",
  )
  
- from portage.const import CACHE_PATH, CONFIG_MEMORY_FILE, \
- 	MERGING_IDENTIFIER, PORTAGE_PACKAGE_ATOM, PRIVATE_PATH, VDB_PATH, EPREFIX
+ from portage.const import (
+     CACHE_PATH,
+     CONFIG_MEMORY_FILE,
+     MERGING_IDENTIFIER,
+     PORTAGE_PACKAGE_ATOM,
+     PRIVATE_PATH,
+     VDB_PATH,
++    # PREFIX LOCAL
++    EPREFIX,
+ )
  from portage.dbapi import dbapi
- from portage.exception import CommandNotFound, \
- 	InvalidData, InvalidLocation, InvalidPackageName, \
- 	FileNotFound, PermissionDenied, UnsupportedAPIException
+ from portage.exception import (
+     CommandNotFound,
+     InvalidData,
+     InvalidLocation,
+     InvalidPackageName,
+     FileNotFound,
+     PermissionDenied,
+     UnsupportedAPIException,
+ )
  from portage.localization import _
  from portage.util.futures import asyncio
  
@@@ -105,5697 -112,6397 +114,6415 @@@ import warning
  
  class vardbapi(dbapi):
  
- 	_excluded_dirs = ["CVS", "lost+found"]
- 	_excluded_dirs = [re.escape(x) for x in _excluded_dirs]
- 	_excluded_dirs = re.compile(r'^(\..*|' + MERGING_IDENTIFIER + '.*|' + \
- 		"|".join(_excluded_dirs) + r')$')
- 
- 	_aux_cache_version        = "1"
- 	_owners_cache_version     = "1"
- 
- 	# Number of uncached packages to trigger cache update, since
- 	# it's wasteful to update it for every vdb change.
- 	_aux_cache_threshold = 5
- 
- 	_aux_cache_keys_re = re.compile(r'^NEEDED\..*$')
- 	_aux_multi_line_re = re.compile(r'^(CONTENTS|NEEDED\..*)$')
- 	_pkg_str_aux_keys = dbapi._pkg_str_aux_keys + ("BUILD_ID", "BUILD_TIME", "_mtime_")
- 
- 	def __init__(self, _unused_param=DeprecationWarning,
- 		categories=None, settings=None, vartree=None):
- 		"""
- 		The categories parameter is unused since the dbapi class
- 		now has a categories property that is generated from the
- 		available packages.
- 		"""
- 
- 		# Used by emerge to check whether any packages
- 		# have been added or removed.
- 		self._pkgs_changed = False
- 
- 		# The _aux_cache_threshold doesn't work as designed
- 		# if the cache is flushed from a subprocess, so we
- 		# use this to avoid waste vdb cache updates.
- 		self._flush_cache_enabled = True
- 
- 		#cache for category directory mtimes
- 		self.mtdircache = {}
- 
- 		#cache for dependency checks
- 		self.matchcache = {}
- 
- 		#cache for cp_list results
- 		self.cpcache = {}
- 
- 		self.blockers = None
- 		if settings is None:
- 			settings = portage.settings
- 		self.settings = settings
- 
- 		if _unused_param is not DeprecationWarning:
- 			warnings.warn("The first parameter of the "
- 				"portage.dbapi.vartree.vardbapi"
- 				" constructor is now unused. Instead "
- 				"settings['ROOT'] is used.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		self._eroot = settings['EROOT']
- 		self._dbroot = self._eroot + VDB_PATH
- 		self._lock = None
- 		self._lock_count = 0
- 
- 		self._conf_mem_file = self._eroot + CONFIG_MEMORY_FILE
- 		self._fs_lock_obj = None
- 		self._fs_lock_count = 0
- 		self._slot_locks = {}
- 
- 		if vartree is None:
- 			vartree = portage.db[settings['EROOT']]['vartree']
- 		self.vartree = vartree
- 		self._aux_cache_keys = set(
- 			["BDEPEND", "BUILD_TIME", "CHOST", "COUNTER", "DEPEND",
- 			"DESCRIPTION", "EAPI", "HOMEPAGE",
- 			"BUILD_ID", "IDEPEND", "IUSE", "KEYWORDS",
- 			"LICENSE", "PDEPEND", "PROPERTIES", "RDEPEND",
- 			"repository", "RESTRICT" , "SLOT", "USE", "DEFINED_PHASES",
- 			"PROVIDES", "REQUIRES"
- 			])
- 		self._aux_cache_obj = None
- 		self._aux_cache_filename = os.path.join(self._eroot,
- 			CACHE_PATH, "vdb_metadata.pickle")
- 		self._cache_delta_filename = os.path.join(self._eroot,
- 			CACHE_PATH, "vdb_metadata_delta.json")
- 		self._cache_delta = VdbMetadataDelta(self)
- 		self._counter_path = os.path.join(self._eroot,
- 			CACHE_PATH, "counter")
- 
- 		self._plib_registry = PreservedLibsRegistry(settings["ROOT"],
- 			os.path.join(self._eroot, PRIVATE_PATH, "preserved_libs_registry"))
- 		self._linkmap = LinkageMap(self)
- 		chost = self.settings.get('CHOST')
- 		if not chost:
- 			chost = 'lunix?' # this happens when profiles are not available
- 		if chost.find('darwin') >= 0:
- 			self._linkmap = LinkageMapMachO(self)
- 		elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
- 			self._linkmap = LinkageMapPeCoff(self)
- 		elif chost.find('aix') >= 0:
- 			self._linkmap = LinkageMapXCoff(self)
- 		else:
- 			self._linkmap = LinkageMap(self)
- 		self._owners = self._owners_db(self)
- 
- 		self._cached_counter = None
- 
- 	@property
- 	def writable(self):
- 		"""
- 		Check if var/db/pkg is writable, or permissions are sufficient
- 		to create it if it does not exist yet.
- 		@rtype: bool
- 		@return: True if var/db/pkg is writable or can be created,
- 			False otherwise
- 		"""
- 		return os.access(first_existing(self._dbroot), os.W_OK)
- 
- 	@property
- 	def root(self):
- 		warnings.warn("The root attribute of "
- 			"portage.dbapi.vartree.vardbapi"
- 			" is deprecated. Use "
- 			"settings['ROOT'] instead.",
- 			DeprecationWarning, stacklevel=3)
- 		return self.settings['ROOT']
- 
- 	def getpath(self, mykey, filename=None):
- 		# This is an optimized hotspot, so don't use unicode-wrapped
- 		# os module and don't use os.path.join().
- 		rValue = self._eroot + VDB_PATH + _os.sep + mykey
- 		if filename is not None:
- 			# If filename is always relative, we can do just
- 			# rValue += _os.sep + filename
- 			rValue = _os.path.join(rValue, filename)
- 		return rValue
- 
- 	def lock(self):
- 		"""
- 		Acquire a reentrant lock, blocking, for cooperation with concurrent
- 		processes. State is inherited by subprocesses, allowing subprocesses
- 		to reenter a lock that was acquired by a parent process. However,
- 		a lock can be released only by the same process that acquired it.
- 		"""
- 		if self._lock_count:
- 			self._lock_count += 1
- 		else:
- 			if self._lock is not None:
- 				raise AssertionError("already locked")
- 			# At least the parent needs to exist for the lock file.
- 			ensure_dirs(self._dbroot)
- 			self._lock = lockdir(self._dbroot)
- 			self._lock_count += 1
- 
- 	def unlock(self):
- 		"""
- 		Release a lock, decrementing the recursion level. Each unlock() call
- 		must be matched with a prior lock() call, or else an AssertionError
- 		will be raised if unlock() is called while not locked.
- 		"""
- 		if self._lock_count > 1:
- 			self._lock_count -= 1
- 		else:
- 			if self._lock is None:
- 				raise AssertionError("not locked")
- 			self._lock_count = 0
- 			unlockdir(self._lock)
- 			self._lock = None
- 
- 	def _fs_lock(self):
- 		"""
- 		Acquire a reentrant lock, blocking, for cooperation with concurrent
- 		processes.
- 		"""
- 		if self._fs_lock_count < 1:
- 			if self._fs_lock_obj is not None:
- 				raise AssertionError("already locked")
- 			try:
- 				self._fs_lock_obj = lockfile(self._conf_mem_file)
- 			except InvalidLocation:
- 				self.settings._init_dirs()
- 				self._fs_lock_obj = lockfile(self._conf_mem_file)
- 		self._fs_lock_count += 1
- 
- 	def _fs_unlock(self):
- 		"""
- 		Release a lock, decrementing the recursion level.
- 		"""
- 		if self._fs_lock_count <= 1:
- 			if self._fs_lock_obj is None:
- 				raise AssertionError("not locked")
- 			unlockfile(self._fs_lock_obj)
- 			self._fs_lock_obj = None
- 		self._fs_lock_count -= 1
- 
- 	def _slot_lock(self, slot_atom):
- 		"""
- 		Acquire a slot lock (reentrant).
- 
- 		WARNING: The varbapi._slot_lock method is not safe to call
- 		in the main process when that process is scheduling
- 		install/uninstall tasks in parallel, since the locks would
- 		be inherited by child processes. In order to avoid this sort
- 		of problem, this method should be called in a subprocess
- 		(typically spawned by the MergeProcess class).
- 		"""
- 		lock, counter = self._slot_locks.get(slot_atom, (None, 0))
- 		if lock is None:
- 			lock_path = self.getpath("%s:%s" % (slot_atom.cp, slot_atom.slot))
- 			ensure_dirs(os.path.dirname(lock_path))
- 			lock = lockfile(lock_path, wantnewlockfile=True)
- 		self._slot_locks[slot_atom] = (lock, counter + 1)
- 
- 	def _slot_unlock(self, slot_atom):
- 		"""
- 		Release a slot lock (or decrementing recursion level).
- 		"""
- 		lock, counter = self._slot_locks.get(slot_atom, (None, 0))
- 		if lock is None:
- 			raise AssertionError("not locked")
- 		counter -= 1
- 		if counter == 0:
- 			unlockfile(lock)
- 			del self._slot_locks[slot_atom]
- 		else:
- 			self._slot_locks[slot_atom] = (lock, counter)
- 
- 	def _bump_mtime(self, cpv):
- 		"""
- 		This is called before an after any modifications, so that consumers
- 		can use directory mtimes to validate caches. See bug #290428.
- 		"""
- 		base = self._eroot + VDB_PATH
- 		cat = catsplit(cpv)[0]
- 		catdir = base + _os.sep + cat
- 		t = time.time()
- 		t = (t, t)
- 		try:
- 			for x in (catdir, base):
- 				os.utime(x, t)
- 		except OSError:
- 			ensure_dirs(catdir)
- 
- 	def cpv_exists(self, mykey, myrepo=None):
- 		"Tells us whether an actual ebuild exists on disk (no masking)"
- 		return os.path.exists(self.getpath(mykey))
- 
- 	def cpv_counter(self, mycpv):
- 		"This method will grab the COUNTER. Returns a counter value."
- 		try:
- 			return int(self.aux_get(mycpv, ["COUNTER"])[0])
- 		except (KeyError, ValueError):
- 			pass
- 		writemsg_level(_("portage: COUNTER for %s was corrupted; " \
- 			"resetting to value of 0\n") % (mycpv,),
- 			level=logging.ERROR, noiselevel=-1)
- 		return 0
- 
- 	def cpv_inject(self, mycpv):
- 		"injects a real package into our on-disk database; assumes mycpv is valid and doesn't already exist"
- 		ensure_dirs(self.getpath(mycpv))
- 		counter = self.counter_tick(mycpv=mycpv)
- 		# write local package counter so that emerge clean does the right thing
- 		write_atomic(self.getpath(mycpv, filename="COUNTER"), str(counter))
- 
- 	def isInjected(self, mycpv):
- 		if self.cpv_exists(mycpv):
- 			if os.path.exists(self.getpath(mycpv, filename="INJECTED")):
- 				return True
- 			if not os.path.exists(self.getpath(mycpv, filename="CONTENTS")):
- 				return True
- 		return False
- 
- 	def move_ent(self, mylist, repo_match=None):
- 		origcp = mylist[1]
- 		newcp = mylist[2]
- 
- 		# sanity check
- 		for atom in (origcp, newcp):
- 			if not isjustname(atom):
- 				raise InvalidPackageName(str(atom))
- 		origmatches = self.match(origcp, use_cache=0)
- 		moves = 0
- 		if not origmatches:
- 			return moves
- 		for mycpv in origmatches:
- 			mycpv_cp = mycpv.cp
- 			if mycpv_cp != origcp:
- 				# Ignore PROVIDE virtual match.
- 				continue
- 			if repo_match is not None \
- 				and not repo_match(mycpv.repo):
- 				continue
- 
- 			# Use isvalidatom() to check if this move is valid for the
- 			# EAPI (characters allowed in package names may vary).
- 			if not isvalidatom(newcp, eapi=mycpv.eapi):
- 				continue
- 
- 			mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
- 			mynewcat = catsplit(newcp)[0]
- 			origpath = self.getpath(mycpv)
- 			if not os.path.exists(origpath):
- 				continue
- 			moves += 1
- 			if not os.path.exists(self.getpath(mynewcat)):
- 				#create the directory
- 				ensure_dirs(self.getpath(mynewcat))
- 			newpath = self.getpath(mynewcpv)
- 			if os.path.exists(newpath):
- 				#dest already exists; keep this puppy where it is.
- 				continue
- 			_movefile(origpath, newpath, mysettings=self.settings)
- 			self._clear_pkg_cache(self._dblink(mycpv))
- 			self._clear_pkg_cache(self._dblink(mynewcpv))
- 
- 			# We need to rename the ebuild now.
- 			old_pf = catsplit(mycpv)[1]
- 			new_pf = catsplit(mynewcpv)[1]
- 			if new_pf != old_pf:
- 				try:
- 					os.rename(os.path.join(newpath, old_pf + ".ebuild"),
- 						os.path.join(newpath, new_pf + ".ebuild"))
- 				except EnvironmentError as e:
- 					if e.errno != errno.ENOENT:
- 						raise
- 					del e
- 			write_atomic(os.path.join(newpath, "PF"), new_pf+"\n")
- 			write_atomic(os.path.join(newpath, "CATEGORY"), mynewcat+"\n")
- 
- 		return moves
- 
- 	def cp_list(self, mycp, use_cache=1):
- 		mysplit=catsplit(mycp)
- 		if mysplit[0] == '*':
- 			mysplit[0] = mysplit[0][1:]
- 		try:
- 			mystat = os.stat(self.getpath(mysplit[0])).st_mtime_ns
- 		except OSError:
- 			mystat = 0
- 		if use_cache and mycp in self.cpcache:
- 			cpc = self.cpcache[mycp]
- 			if cpc[0] == mystat:
- 				return cpc[1][:]
- 		cat_dir = self.getpath(mysplit[0])
- 		try:
- 			dir_list = os.listdir(cat_dir)
- 		except EnvironmentError as e:
- 			if e.errno == PermissionDenied.errno:
- 				raise PermissionDenied(cat_dir)
- 			del e
- 			dir_list = []
- 
- 		returnme = []
- 		for x in dir_list:
- 			if self._excluded_dirs.match(x) is not None:
- 				continue
- 			ps = pkgsplit(x)
- 			if not ps:
- 				self.invalidentry(os.path.join(self.getpath(mysplit[0]), x))
- 				continue
- 			if len(mysplit) > 1:
- 				if ps[0] == mysplit[1]:
- 					cpv = "%s/%s" % (mysplit[0], x)
- 					metadata = dict(zip(self._aux_cache_keys,
- 						self.aux_get(cpv, self._aux_cache_keys)))
- 					returnme.append(_pkg_str(cpv, metadata=metadata,
- 						settings=self.settings, db=self))
- 		self._cpv_sort_ascending(returnme)
- 		if use_cache:
- 			self.cpcache[mycp] = [mystat, returnme[:]]
- 		elif mycp in self.cpcache:
- 			del self.cpcache[mycp]
- 		return returnme
- 
- 	def cpv_all(self, use_cache=1):
- 		"""
- 		Set use_cache=0 to bypass the portage.cachedir() cache in cases
- 		when the accuracy of mtime staleness checks should not be trusted
- 		(generally this is only necessary in critical sections that
- 		involve merge or unmerge of packages).
- 		"""
- 		return list(self._iter_cpv_all(use_cache=use_cache))
- 
- 	def _iter_cpv_all(self, use_cache=True, sort=False):
- 		returnme = []
- 		basepath = os.path.join(self._eroot, VDB_PATH) + os.path.sep
- 
- 		if use_cache:
- 			from portage import listdir
- 		else:
- 			def listdir(p, **kwargs):
- 				try:
- 					return [x for x in os.listdir(p) \
- 						if os.path.isdir(os.path.join(p, x))]
- 				except EnvironmentError as e:
- 					if e.errno == PermissionDenied.errno:
- 						raise PermissionDenied(p)
- 					del e
- 					return []
- 
- 		catdirs = listdir(basepath, EmptyOnError=1, ignorecvs=1, dirsonly=1)
- 		if sort:
- 			catdirs.sort()
- 
- 		for x in catdirs:
- 			if self._excluded_dirs.match(x) is not None:
- 				continue
- 			if not self._category_re.match(x):
- 				continue
- 
- 			pkgdirs = listdir(basepath + x, EmptyOnError=1, dirsonly=1)
- 			if sort:
- 				pkgdirs.sort()
- 
- 			for y in pkgdirs:
- 				if self._excluded_dirs.match(y) is not None:
- 					continue
- 				subpath = x + "/" + y
- 				# -MERGING- should never be a cpv, nor should files.
- 				try:
- 					subpath = _pkg_str(subpath, db=self)
- 				except InvalidData:
- 					self.invalidentry(self.getpath(subpath))
- 					continue
- 
- 				yield subpath
- 
- 	def cp_all(self, use_cache=1, sort=False):
- 		mylist = self.cpv_all(use_cache=use_cache)
- 		d={}
- 		for y in mylist:
- 			if y[0] == '*':
- 				y = y[1:]
- 			try:
- 				mysplit = catpkgsplit(y)
- 			except InvalidData:
- 				self.invalidentry(self.getpath(y))
- 				continue
- 			if not mysplit:
- 				self.invalidentry(self.getpath(y))
- 				continue
- 			d[mysplit[0]+"/"+mysplit[1]] = None
- 		return sorted(d) if sort else list(d)
- 
- 	def checkblockers(self, origdep):
- 		pass
- 
- 	def _clear_cache(self):
- 		self.mtdircache.clear()
- 		self.matchcache.clear()
- 		self.cpcache.clear()
- 		self._aux_cache_obj = None
- 
- 	def _add(self, pkg_dblink):
- 		self._pkgs_changed = True
- 		self._clear_pkg_cache(pkg_dblink)
- 
- 	def _remove(self, pkg_dblink):
- 		self._pkgs_changed = True
- 		self._clear_pkg_cache(pkg_dblink)
- 
- 	def _clear_pkg_cache(self, pkg_dblink):
- 		# Due to 1 second mtime granularity in <python-2.5, mtime checks
- 		# are not always sufficient to invalidate vardbapi caches. Therefore,
- 		# the caches need to be actively invalidated here.
- 		self.mtdircache.pop(pkg_dblink.cat, None)
- 		self.matchcache.pop(pkg_dblink.cat, None)
- 		self.cpcache.pop(pkg_dblink.mysplit[0], None)
- 		dircache.pop(pkg_dblink.dbcatdir, None)
- 
- 	def match(self, origdep, use_cache=1):
- 		"caching match function"
- 		mydep = dep_expand(
- 			origdep, mydb=self, use_cache=use_cache, settings=self.settings)
- 		cache_key = (mydep, mydep.unevaluated_atom)
- 		mykey = dep_getkey(mydep)
- 		mycat = catsplit(mykey)[0]
- 		if not use_cache:
- 			if mycat in self.matchcache:
- 				del self.mtdircache[mycat]
- 				del self.matchcache[mycat]
- 			return list(self._iter_match(mydep,
- 				self.cp_list(mydep.cp, use_cache=use_cache)))
- 		try:
- 			curmtime = os.stat(os.path.join(self._eroot, VDB_PATH, mycat)).st_mtime_ns
- 		except (IOError, OSError):
- 			curmtime=0
- 
- 		if mycat not in self.matchcache or \
- 			self.mtdircache[mycat] != curmtime:
- 			# clear cache entry
- 			self.mtdircache[mycat] = curmtime
- 			self.matchcache[mycat] = {}
- 		if mydep not in self.matchcache[mycat]:
- 			mymatch = list(self._iter_match(mydep,
- 				self.cp_list(mydep.cp, use_cache=use_cache)))
- 			self.matchcache[mycat][cache_key] = mymatch
- 		return self.matchcache[mycat][cache_key][:]
- 
- 	def findname(self, mycpv, myrepo=None):
- 		return self.getpath(str(mycpv), filename=catsplit(mycpv)[1]+".ebuild")
- 
- 	def flush_cache(self):
- 		"""If the current user has permission and the internal aux_get cache has
- 		been updated, save it to disk and mark it unmodified.  This is called
- 		by emerge after it has loaded the full vdb for use in dependency
- 		calculations.  Currently, the cache is only written if the user has
- 		superuser privileges (since that's required to obtain a lock), but all
- 		users have read access and benefit from faster metadata lookups (as
- 		long as at least part of the cache is still valid)."""
- 		if self._flush_cache_enabled and \
- 			self._aux_cache is not None and \
- 			secpass >= 2 and \
- 			(len(self._aux_cache["modified"]) >= self._aux_cache_threshold or
- 			not os.path.exists(self._cache_delta_filename)):
- 
- 			ensure_dirs(os.path.dirname(self._aux_cache_filename))
- 
- 			self._owners.populate() # index any unindexed contents
- 			valid_nodes = set(self.cpv_all())
- 			for cpv in list(self._aux_cache["packages"]):
- 				if cpv not in valid_nodes:
- 					del self._aux_cache["packages"][cpv]
- 			del self._aux_cache["modified"]
- 			timestamp = time.time()
- 			self._aux_cache["timestamp"] = timestamp
- 
- 			with atomic_ofstream(self._aux_cache_filename, 'wb') as f:
- 				pickle.dump(self._aux_cache, f, protocol=2)
- 
- 			apply_secpass_permissions(
- 				self._aux_cache_filename, mode=0o644)
- 
- 			self._cache_delta.initialize(timestamp)
- 			apply_secpass_permissions(
- 				self._cache_delta_filename, mode=0o644)
- 
- 			self._aux_cache["modified"] = set()
- 
- 	@property
- 	def _aux_cache(self):
- 		if self._aux_cache_obj is None:
- 			self._aux_cache_init()
- 		return self._aux_cache_obj
- 
- 	def _aux_cache_init(self):
- 		aux_cache = None
- 		open_kwargs = {}
- 		try:
- 			with open(_unicode_encode(self._aux_cache_filename,
- 				encoding=_encodings['fs'], errors='strict'),
- 				mode='rb', **open_kwargs) as f:
- 				mypickle = pickle.Unpickler(f)
- 				try:
- 					mypickle.find_global = None
- 				except AttributeError:
- 					# TODO: If py3k, override Unpickler.find_class().
- 					pass
- 				aux_cache = mypickle.load()
- 		except (SystemExit, KeyboardInterrupt):
- 			raise
- 		except Exception as e:
- 			if isinstance(e, EnvironmentError) and \
- 				getattr(e, 'errno', None) in (errno.ENOENT, errno.EACCES):
- 				pass
- 			else:
- 				writemsg(_("!!! Error loading '%s': %s\n") % \
- 					(self._aux_cache_filename, e), noiselevel=-1)
- 			del e
- 
- 		if not aux_cache or \
- 			not isinstance(aux_cache, dict) or \
- 			aux_cache.get("version") != self._aux_cache_version or \
- 			not aux_cache.get("packages"):
- 			aux_cache = {"version": self._aux_cache_version}
- 			aux_cache["packages"] = {}
- 
- 		owners = aux_cache.get("owners")
- 		if owners is not None:
- 			if not isinstance(owners, dict):
- 				owners = None
- 			elif "version" not in owners:
- 				owners = None
- 			elif owners["version"] != self._owners_cache_version:
- 				owners = None
- 			elif "base_names" not in owners:
- 				owners = None
- 			elif not isinstance(owners["base_names"], dict):
- 				owners = None
- 
- 		if owners is None:
- 			owners = {
- 				"base_names" : {},
- 				"version"    : self._owners_cache_version
- 			}
- 			aux_cache["owners"] = owners
- 
- 		aux_cache["modified"] = set()
- 		self._aux_cache_obj = aux_cache
- 
- 	def aux_get(self, mycpv, wants, myrepo = None):
- 		"""This automatically caches selected keys that are frequently needed
- 		by emerge for dependency calculations.  The cached metadata is
- 		considered valid if the mtime of the package directory has not changed
- 		since the data was cached.  The cache is stored in a pickled dict
- 		object with the following format:
- 
- 		{version:"1", "packages":{cpv1:(mtime,{k1,v1, k2,v2, ...}), cpv2...}}
- 
- 		If an error occurs while loading the cache pickle or the version is
- 		unrecognized, the cache will simply be recreated from scratch (it is
- 		completely disposable).
- 		"""
- 		cache_these_wants = self._aux_cache_keys.intersection(wants)
- 		for x in wants:
- 			if self._aux_cache_keys_re.match(x) is not None:
- 				cache_these_wants.add(x)
- 
- 		if not cache_these_wants:
- 			mydata = self._aux_get(mycpv, wants)
- 			return [mydata[x] for x in wants]
- 
- 		cache_these = set(self._aux_cache_keys)
- 		cache_these.update(cache_these_wants)
- 
- 		mydir = self.getpath(mycpv)
- 		mydir_stat = None
- 		try:
- 			mydir_stat = os.stat(mydir)
- 		except OSError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 			raise KeyError(mycpv)
- 		# Use float mtime when available.
- 		mydir_mtime = mydir_stat.st_mtime
- 		pkg_data = self._aux_cache["packages"].get(mycpv)
- 		pull_me = cache_these.union(wants)
- 		mydata = {"_mtime_" : mydir_mtime}
- 		cache_valid = False
- 		cache_incomplete = False
- 		cache_mtime = None
- 		metadata = None
- 		if pkg_data is not None:
- 			if not isinstance(pkg_data, tuple) or len(pkg_data) != 2:
- 				pkg_data = None
- 			else:
- 				cache_mtime, metadata = pkg_data
- 				if not isinstance(cache_mtime, (float, int)) or \
- 					not isinstance(metadata, dict):
- 					pkg_data = None
- 
- 		if pkg_data:
- 			cache_mtime, metadata = pkg_data
- 			if isinstance(cache_mtime, float):
- 				if cache_mtime == mydir_stat.st_mtime:
- 					cache_valid = True
- 
- 				# Handle truncated mtime in order to avoid cache
- 				# invalidation for livecd squashfs (bug 564222).
- 				elif int(cache_mtime) == mydir_stat.st_mtime:
- 					cache_valid = True
- 			else:
- 				# Cache may contain integer mtime.
- 				cache_valid = cache_mtime == mydir_stat[stat.ST_MTIME]
- 
- 		if cache_valid:
- 			# Migrate old metadata to unicode.
- 			for k, v in metadata.items():
- 				metadata[k] = _unicode_decode(v,
- 					encoding=_encodings['repo.content'], errors='replace')
- 
- 			mydata.update(metadata)
- 			pull_me.difference_update(mydata)
- 
- 		if pull_me:
- 			# pull any needed data and cache it
- 			aux_keys = list(pull_me)
- 			mydata.update(self._aux_get(mycpv, aux_keys, st=mydir_stat))
- 			if not cache_valid or cache_these.difference(metadata):
- 				cache_data = {}
- 				if cache_valid and metadata:
- 					cache_data.update(metadata)
- 				for aux_key in cache_these:
- 					cache_data[aux_key] = mydata[aux_key]
- 				self._aux_cache["packages"][str(mycpv)] = \
- 					(mydir_mtime, cache_data)
- 				self._aux_cache["modified"].add(mycpv)
- 
- 		eapi_attrs = _get_eapi_attrs(mydata['EAPI'])
- 		if _get_slot_re(eapi_attrs).match(mydata['SLOT']) is None:
- 			# Empty or invalid slot triggers InvalidAtom exceptions when
- 			# generating slot atoms for packages, so translate it to '0' here.
- 			mydata['SLOT'] = '0'
- 
- 		return [mydata[x] for x in wants]
- 
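The docstring above describes the on-disk layout of the vdb_metadata pickle. Below is a minimal sketch of that layout and of the mtime check that decides whether a cached entry is still usable; the cpv and metadata values are made-up examples, not part of this commit.

# Illustrative sketch of the cache layout documented in aux_get() above.
# The cpv and metadata values are hypothetical examples.
import os

aux_cache = {
    "version": "1",
    "packages": {
        "sys-apps/example-1.0": (
            1642156321.0,  # mtime of the package's vdb directory when cached
            {"SLOT": "0", "EAPI": "7", "repository": "gentoo"},
        ),
    },
}

def cache_entry_valid(aux_cache, cpv, vdb_pkg_dir):
    """Return True if the cached mtime still matches the directory mtime."""
    entry = aux_cache["packages"].get(cpv)
    if entry is None:
        return False
    cache_mtime, _metadata = entry
    st = os.stat(vdb_pkg_dir)
    # Both float and truncated integer mtimes are accepted (cf. bug 564222).
    return cache_mtime == st.st_mtime or int(cache_mtime) == st.st_mtime
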
- 	def _aux_get(self, mycpv, wants, st=None):
- 		mydir = self.getpath(mycpv)
- 		if st is None:
- 			try:
- 				st = os.stat(mydir)
- 			except OSError as e:
- 				if e.errno == errno.ENOENT:
- 					raise KeyError(mycpv)
- 				elif e.errno == PermissionDenied.errno:
- 					raise PermissionDenied(mydir)
- 				else:
- 					raise
- 		if not stat.S_ISDIR(st.st_mode):
- 			raise KeyError(mycpv)
- 		results = {}
- 		env_keys = []
- 		for x in wants:
- 			if x == "_mtime_":
- 				results[x] = st[stat.ST_MTIME]
- 				continue
- 			try:
- 				with io.open(
- 					_unicode_encode(os.path.join(mydir, x),
- 					encoding=_encodings['fs'], errors='strict'),
- 					mode='r', encoding=_encodings['repo.content'],
- 					errors='replace') as f:
- 					myd = f.read()
- 			except IOError:
- 				if x not in self._aux_cache_keys and \
- 					self._aux_cache_keys_re.match(x) is None:
- 					env_keys.append(x)
- 					continue
- 				myd = ''
- 
- 			# Preserve \n for metadata that is known to
- 			# contain multiple lines.
- 			if self._aux_multi_line_re.match(x) is None:
- 				myd = " ".join(myd.split())
- 
- 			results[x] = myd
- 
- 		if env_keys:
- 			env_results = self._aux_env_search(mycpv, env_keys)
- 			for k in env_keys:
- 				v = env_results.get(k)
- 				if v is None:
- 					v = ''
- 				if self._aux_multi_line_re.match(k) is None:
- 					v = " ".join(v.split())
- 				results[k] = v
- 
- 		if results.get("EAPI") == "":
- 			results["EAPI"] = '0'
- 
- 		return results
- 
- 	def _aux_env_search(self, cpv, variables):
- 		"""
- 		Search environment.bz2 for the specified variables. Returns
- 		a dict mapping variables to values, and any variables not
- 		found in the environment will not be included in the dict.
- 		This is useful for querying variables like ${SRC_URI} and
- 		${A}, which are not saved in separate files but are available
- 		in environment.bz2 (see bug #395463).
- 		"""
- 		env_file = self.getpath(cpv, filename="environment.bz2")
- 		if not os.path.isfile(env_file):
- 			return {}
- 		bunzip2_cmd = portage.util.shlex_split(
- 			self.settings.get("PORTAGE_BUNZIP2_COMMAND", ""))
- 		if not bunzip2_cmd:
- 			bunzip2_cmd = portage.util.shlex_split(
- 				self.settings["PORTAGE_BZIP2_COMMAND"])
- 			bunzip2_cmd.append("-d")
- 		args = bunzip2_cmd + ["-c", env_file]
- 		try:
- 			proc = subprocess.Popen(args, stdout=subprocess.PIPE)
- 		except EnvironmentError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 			raise portage.exception.CommandNotFound(args[0])
- 
- 		# Parts of the following code are borrowed from
- 		# filter-bash-environment.py (keep them in sync).
- 		var_assign_re = re.compile(r'(^|^declare\s+-\S+\s+|^declare\s+|^export\s+)([^=\s]+)=("|\')?(.*)$')
- 		close_quote_re = re.compile(r'(\\"|"|\')\s*$')
- 		def have_end_quote(quote, line):
- 			close_quote_match = close_quote_re.search(line)
- 			return close_quote_match is not None and \
- 				close_quote_match.group(1) == quote
- 
- 		variables = frozenset(variables)
- 		results = {}
- 		for line in proc.stdout:
- 			line = _unicode_decode(line,
- 				encoding=_encodings['content'], errors='replace')
- 			var_assign_match = var_assign_re.match(line)
- 			if var_assign_match is not None:
- 				key = var_assign_match.group(2)
- 				quote = var_assign_match.group(3)
- 				if quote is not None:
- 					if have_end_quote(quote,
- 						line[var_assign_match.end(2)+2:]):
- 						value = var_assign_match.group(4)
- 					else:
- 						value = [var_assign_match.group(4)]
- 						for line in proc.stdout:
- 							line = _unicode_decode(line,
- 								encoding=_encodings['content'],
- 								errors='replace')
- 							value.append(line)
- 							if have_end_quote(quote, line):
- 								break
- 						value = ''.join(value)
- 					# remove trailing quote and whitespace
- 					value = value.rstrip()[:-1]
- 				else:
- 					value = var_assign_match.group(4).rstrip()
- 
- 				if key in variables:
- 					results[key] = value
- 
- 		proc.wait()
- 		proc.stdout.close()
- 		return results
- 
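For illustration, the sketch below reads a single, simple assignment out of environment.bz2 using Python's bz2 module instead of spawning PORTAGE_BUNZIP2_COMMAND; it only handles unescaped single-line values, unlike the quoting-aware parser in _aux_env_search() above, and the function name is made up.

import bz2
import re

def read_env_var(env_bz2_path, name):
    # Rough sketch only: multi-line values, escaping and single quotes
    # are not handled here (the parser in _aux_env_search() is).
    assign_re = re.compile(
        r'^(?:declare\s+-\S+\s+|declare\s+|export\s+)?'
        + re.escape(name) + r'=("?)(.*)\1\s*$')
    with bz2.open(env_bz2_path, "rt", errors="replace") as f:
        for line in f:
            m = assign_re.match(line)
            if m is not None:
                return m.group(2)
    return None
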
- 	def aux_update(self, cpv, values):
- 		mylink = self._dblink(cpv)
- 		if not mylink.exists():
- 			raise KeyError(cpv)
- 		self._bump_mtime(cpv)
- 		self._clear_pkg_cache(mylink)
- 		for k, v in values.items():
- 			if v:
- 				mylink.setfile(k, v)
- 			else:
- 				try:
- 					os.unlink(os.path.join(self.getpath(cpv), k))
- 				except EnvironmentError:
- 					pass
- 		self._bump_mtime(cpv)
- 
- 	@coroutine
- 	def unpack_metadata(self, pkg, dest_dir, loop=None):
- 		"""
- 		Unpack package metadata to a directory. This method is a coroutine.
- 
- 		@param pkg: package to unpack
- 		@type pkg: _pkg_str or portage.config
- 		@param dest_dir: destination directory
- 		@type dest_dir: str
- 		"""
- 		loop = asyncio._wrap_loop(loop)
- 		if not isinstance(pkg, portage.config):
- 			cpv = pkg
- 		else:
- 			cpv = pkg.mycpv
- 		dbdir = self.getpath(cpv)
- 		def async_copy():
- 			for parent, dirs, files in os.walk(dbdir, onerror=_raise_exc):
- 				for key in files:
- 					shutil.copy(os.path.join(parent, key),
- 						os.path.join(dest_dir, key))
- 				break
- 		yield loop.run_in_executor(ForkExecutor(loop=loop), async_copy)
- 
- 	@coroutine
- 	def unpack_contents(self, pkg, dest_dir,
- 		include_config=None, include_unmodified_config=None, loop=None):
- 		"""
- 		Unpack package contents to a directory. This method is a coroutine.
- 
- 		This copies files from the installed system, in the same way
- 		as the quickpkg(1) command. Default behavior for handling
- 		of protected configuration files is controlled by the
- 		QUICKPKG_DEFAULT_OPTS variable. The relevant quickpkg options
- 		are --include-config and --include-unmodified-config. When
- 		a configuration file is not included because it is protected,
- 		an ewarn message is logged.
- 
- 		@param pkg: package to unpack
- 		@type pkg: _pkg_str or portage.config
- 		@param dest_dir: destination directory
- 		@type dest_dir: str
- 		@param include_config: Include all files protected by
- 			CONFIG_PROTECT (as a security precaution, default is False
- 			unless modified by QUICKPKG_DEFAULT_OPTS).
- 		@type include_config: bool
- 		@param include_unmodified_config: Include files protected by
- 			CONFIG_PROTECT that have not been modified since installation
- 			(as a security precaution, default is False unless modified
- 			by QUICKPKG_DEFAULT_OPTS).
- 		@type include_unmodified_config: bool
- 		"""
- 		loop = asyncio._wrap_loop(loop)
- 		if not isinstance(pkg, portage.config):
- 			settings = self.settings
- 			cpv = pkg
- 		else:
- 			settings = pkg
- 			cpv = settings.mycpv
- 
- 		scheduler = SchedulerInterface(loop)
- 		parser = argparse.ArgumentParser()
- 		parser.add_argument('--include-config',
- 			choices=('y', 'n'),
- 			default='n')
- 		parser.add_argument('--include-unmodified-config',
- 			choices=('y', 'n'),
- 			default='n')
- 
- 		# Method parameters may override QUICKPKG_DEFAULT_OPTS.
- 		opts_list = portage.util.shlex_split(settings.get('QUICKPKG_DEFAULT_OPTS', ''))
- 		if include_config is not None:
- 			opts_list.append('--include-config={}'.format(
- 				'y' if include_config else 'n'))
- 		if include_unmodified_config is not None:
- 			opts_list.append('--include-unmodified-config={}'.format(
- 				'y' if include_unmodified_config else 'n'))
- 
- 		opts, args = parser.parse_known_args(opts_list)
- 
- 		tar_cmd = ('tar', '-x', '--xattrs', '--xattrs-include=*', '-C', dest_dir)
- 		pr, pw = os.pipe()
- 		proc = (yield asyncio.create_subprocess_exec(*tar_cmd, stdin=pr))
- 		os.close(pr)
- 		with os.fdopen(pw, 'wb', 0) as pw_file:
- 			excluded_config_files = (yield loop.run_in_executor(ForkExecutor(loop=loop),
- 				functools.partial(self._dblink(cpv).quickpkg,
- 				pw_file,
- 				include_config=opts.include_config == 'y',
- 				include_unmodified_config=opts.include_unmodified_config == 'y')))
- 		yield proc.wait()
- 		if proc.returncode != os.EX_OK:
- 			raise PortageException('command failed: {}'.format(tar_cmd))
- 
- 		if excluded_config_files:
- 			log_lines = ([_("Config files excluded by QUICKPKG_DEFAULT_OPTS (see quickpkg(1) man page):")] +
- 				['\t{}'.format(name) for name in excluded_config_files])
- 			out = io.StringIO()
- 			for line in log_lines:
- 				portage.elog.messages.ewarn(line, phase='install', key=cpv, out=out)
- 			scheduler.output(out.getvalue(),
- 				background=self.settings.get("PORTAGE_BACKGROUND") == "1",
- 				log_path=settings.get("PORTAGE_LOG_FILE"))
- 
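A hypothetical caller of the coroutine above might look like the sketch below; 'vardb', the cpv and the destination directory are placeholders, the destination directory is assumed to already exist, and a tar that understands --xattrs is assumed to be installed.

import portage
from portage.util.futures import asyncio

# Hypothetical usage of vardbapi.unpack_contents(); the cpv and paths
# below are placeholders, not values taken from this commit.
vardb = portage.db[portage.settings["EROOT"]]["vartree"].dbapi
loop = asyncio._safe_loop()
loop.run_until_complete(
    vardb.unpack_contents("sys-apps/sed-4.8", "/tmp/sed-image",
        include_config=False))
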
- 	def counter_tick(self, myroot=None, mycpv=None):
- 		"""
- 		@param myroot: ignored, self._eroot is used instead
- 		"""
- 		return self.counter_tick_core(incrementing=1, mycpv=mycpv)
- 
- 	def get_counter_tick_core(self, myroot=None, mycpv=None):
- 		"""
- 		Use this method to retrieve the counter instead
- 		of having to trust the value of a global counter
- 		file that can lead to invalid COUNTER
- 		generation. When cache is valid, the package COUNTER
- 		files are not read and we rely on the timestamp of
- 		the package directory to validate cache. The stat
- 		calls should only take a short time, so performance
- 		is sufficient without having to rely on a potentially
- 		corrupt global counter file.
- 
- 		The global counter file located at
- 		$CACHE_PATH/counter serves to record the
- 		counter of the last installed package and
- 		it also corresponds to the total number of
- 		installation actions that have occurred in
- 		the history of this package database.
- 
- 		@param myroot: ignored, self._eroot is used instead
- 		"""
- 		del myroot
- 		counter = -1
- 		try:
- 			with io.open(
- 				_unicode_encode(self._counter_path,
- 				encoding=_encodings['fs'], errors='strict'),
- 				mode='r', encoding=_encodings['repo.content'],
- 				errors='replace') as f:
- 				try:
- 					counter = int(f.readline().strip())
- 				except (OverflowError, ValueError) as e:
- 					writemsg(_("!!! COUNTER file is corrupt: '%s'\n") %
- 						self._counter_path, noiselevel=-1)
- 					writemsg("!!! %s\n" % (e,), noiselevel=-1)
- 		except EnvironmentError as e:
- 			# Silently allow ENOENT since files under
- 			# /var/cache/ are allowed to disappear.
- 			if e.errno != errno.ENOENT:
- 				writemsg(_("!!! Unable to read COUNTER file: '%s'\n") % \
- 					self._counter_path, noiselevel=-1)
- 				writemsg("!!! %s\n" % str(e), noiselevel=-1)
- 			del e
- 
- 		if self._cached_counter == counter:
- 			max_counter = counter
- 		else:
- 			# We must ensure that we return a counter
- 			# value that is at least as large as the
- 			# highest one from the installed packages,
- 			# since having a corrupt value that is too low
- 			# can trigger incorrect AUTOCLEAN behavior due
- 			# to newly installed packages having lower
- 			# COUNTERs than the previous version in the
- 			# same slot.
- 			max_counter = counter
- 			for cpv in self.cpv_all():
- 				try:
- 					pkg_counter = int(self.aux_get(cpv, ["COUNTER"])[0])
- 				except (KeyError, OverflowError, ValueError):
- 					continue
- 				if pkg_counter > max_counter:
- 					max_counter = pkg_counter
- 
- 		return max_counter + 1
- 
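A simplified sketch of the logic described in the docstring above, with no locking and no cached counter; 'vardb' and 'counter_path' are placeholders for a vardbapi instance and its global counter file.

def next_counter(vardb, counter_path):
    # Simplified sketch: read the global counter, but never return a value
    # lower than the highest COUNTER of any installed package.
    try:
        with open(counter_path) as f:
            counter = int(f.readline().strip())
    except (OSError, ValueError):
        counter = -1  # missing or corrupt global counter file
    for cpv in vardb.cpv_all():
        try:
            counter = max(counter, int(vardb.aux_get(cpv, ["COUNTER"])[0]))
        except (KeyError, ValueError):
            continue
    return counter + 1
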
- 	def counter_tick_core(self, myroot=None, incrementing=1, mycpv=None):
- 		"""
- 		This method will grab the next COUNTER value and record it back
- 		to the global file. Note that every package install must have
- 		a unique counter, since a slotmove update can move two packages
- 		into the same SLOT and in that case it's important that both
- 		packages have different COUNTER metadata.
- 
- 		@param myroot: ignored, self._eroot is used instead
- 		@param mycpv: ignored
- 		@rtype: int
- 		@return: new counter value
- 		"""
- 		myroot = None
- 		mycpv = None
- 		self.lock()
- 		try:
- 			counter = self.get_counter_tick_core() - 1
- 			if incrementing:
- 				#increment counter
- 				counter += 1
- 				# update new global counter file
- 				try:
- 					write_atomic(self._counter_path, str(counter))
- 				except InvalidLocation:
- 					self.settings._init_dirs()
- 					write_atomic(self._counter_path, str(counter))
- 			self._cached_counter = counter
- 
- 			# Since we hold a lock, this is a good opportunity
- 			# to flush the cache. Note that this will only
- 			# flush the cache periodically in the main process
- 			# when _aux_cache_threshold is exceeded.
- 			self.flush_cache()
- 		finally:
- 			self.unlock()
- 
- 		return counter
- 
- 	def _dblink(self, cpv):
- 		category, pf = catsplit(cpv)
- 		return dblink(category, pf, settings=self.settings,
- 			vartree=self.vartree, treetype="vartree")
- 
- 	def removeFromContents(self, pkg, paths, relative_paths=True):
- 		"""
- 		@param pkg: cpv for an installed package
- 		@type pkg: string
- 		@param paths: paths of files to remove from contents
- 		@type paths: iterable
- 		"""
- 		if not hasattr(pkg, "getcontents"):
- 			pkg = self._dblink(pkg)
- 		root = self.settings['ROOT']
- 		root_len = len(root) - 1
- 		new_contents = pkg.getcontents().copy()
- 		removed = 0
- 
- 		for filename in paths:
- 			filename = _unicode_decode(filename,
- 				encoding=_encodings['content'], errors='strict')
- 			filename = normalize_path(filename)
- 			if relative_paths:
- 				relative_filename = filename
- 			else:
- 				relative_filename = filename[root_len:]
- 			contents_key = pkg._match_contents(relative_filename)
- 			if contents_key:
- 				# It's possible for two different paths to refer to the same
- 				# contents_key, due to directory symlinks. Therefore, pass a
- 				# default value to pop, in order to avoid a KeyError which
- 				# could otherwise be triggered (see bug #454400).
- 				new_contents.pop(contents_key, None)
- 				removed += 1
- 
- 		if removed:
- 			# Also remove corresponding NEEDED lines, so that they do
- 			# not corrupt LinkageMap data for preserve-libs.
- 			needed_filename = os.path.join(pkg.dbdir, LinkageMap._needed_aux_key)
- 			new_needed = None
- 			try:
- 				with io.open(_unicode_encode(needed_filename,
- 					encoding=_encodings['fs'], errors='strict'),
- 					mode='r', encoding=_encodings['repo.content'],
- 					errors='replace') as f:
- 					needed_lines = f.readlines()
- 			except IOError as e:
- 				if e.errno not in (errno.ENOENT, errno.ESTALE):
- 					raise
- 			else:
- 				new_needed = []
- 				for l in needed_lines:
- 					l = l.rstrip("\n")
- 					if not l:
- 						continue
- 					try:
- 						entry = NeededEntry.parse(needed_filename, l)
- 					except InvalidData as e:
- 						writemsg_level("\n%s\n\n" % (e,),
- 							level=logging.ERROR, noiselevel=-1)
- 						continue
- 
- 					filename = os.path.join(root, entry.filename.lstrip(os.sep))
- 					if filename in new_contents:
- 						new_needed.append(entry)
- 
- 			self.writeContentsToContentsFile(pkg, new_contents, new_needed=new_needed)
- 
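A hypothetical call to the method above, e.g. after a file has been intentionally removed from the live filesystem; the cpv and path are examples and 'vardb' is a placeholder for a vardbapi instance.

# Hypothetical usage; 'vardb', the cpv and the path are placeholders.
vardb.removeFromContents(
    "app-misc/example-1.0", ["/usr/share/example/obsolete.txt"])
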
- 	def writeContentsToContentsFile(self, pkg, new_contents, new_needed=None):
- 		"""
- 		@param pkg: package to write contents file for
- 		@type pkg: dblink
- 		@param new_contents: contents to write to CONTENTS file
- 		@type new_contents: contents dictionary of the form
- 					{u'/path/to/file' : (contents_attribute 1, ...), ...}
- 		@param new_needed: new NEEDED entries
- 		@type new_needed: list of NeededEntry
- 		"""
- 		root = self.settings['ROOT']
- 		self._bump_mtime(pkg.mycpv)
- 		if new_needed is not None:
- 			f = atomic_ofstream(os.path.join(pkg.dbdir, LinkageMap._needed_aux_key))
- 			for entry in new_needed:
- 				f.write(str(entry))
- 			f.close()
- 		f = atomic_ofstream(os.path.join(pkg.dbdir, "CONTENTS"))
- 		write_contents(new_contents, root, f)
- 		f.close()
- 		self._bump_mtime(pkg.mycpv)
- 		pkg._clear_contents_cache()
- 
- 	class _owners_cache:
- 		"""
- 		This class maintains a hash table that serves to index package
- 		contents by mapping the basename of a file to a list of possible
- 		packages that own it. This is used to optimize owner lookups
- 		by narrowing the search down to a smaller number of packages.
- 		"""
- 		_new_hash = md5
- 		_hash_bits = 16
- 		_hex_chars = _hash_bits // 4
- 
- 		def __init__(self, vardb):
- 			self._vardb = vardb
- 
- 		def add(self, cpv):
- 			eroot_len = len(self._vardb._eroot)
- 			pkg_hash = self._hash_pkg(cpv)
- 			db = self._vardb._dblink(cpv)
- 			if not db.getcontents():
- 				# Empty path is a code used to represent empty contents.
- 				self._add_path("", pkg_hash)
- 
- 			for x in db._contents.keys():
- 				self._add_path(x[eroot_len:], pkg_hash)
- 
- 			self._vardb._aux_cache["modified"].add(cpv)
- 
- 		def _add_path(self, path, pkg_hash):
- 			"""
- 			Empty path is a code that represents empty contents.
- 			"""
- 			if path:
- 				name = os.path.basename(path.rstrip(os.path.sep))
- 				if not name:
- 					return
- 			else:
- 				name = path
- 			name_hash = self._hash_str(name)
- 			base_names = self._vardb._aux_cache["owners"]["base_names"]
- 			pkgs = base_names.get(name_hash)
- 			if pkgs is None:
- 				pkgs = {}
- 				base_names[name_hash] = pkgs
- 			pkgs[pkg_hash] = None
- 
- 		def _hash_str(self, s):
- 			h = self._new_hash()
- 			# Always use a constant utf_8 encoding here, since
- 			# the "default" encoding can change.
- 			h.update(_unicode_encode(s,
- 				encoding=_encodings['repo.content'],
- 				errors='backslashreplace'))
- 			h = h.hexdigest()
- 			h = h[-self._hex_chars:]
- 			h = int(h, 16)
- 			return h
- 
- 		def _hash_pkg(self, cpv):
- 			counter, mtime = self._vardb.aux_get(
- 				cpv, ["COUNTER", "_mtime_"])
- 			try:
- 				counter = int(counter)
- 			except ValueError:
- 				counter = 0
- 			return (str(cpv), counter, mtime)
- 
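The bucket keys used by this class are truncated md5 digests of file basenames; below is a standalone sketch of the same hashing (16 bits, i.e. the last 4 hex characters of the digest), with the function name made up for illustration.

from hashlib import md5

def hash_basename(name, hash_bits=16):
    # Standalone sketch of _owners_cache._hash_str() above: keep the last
    # hash_bits bits of the md5 digest of the basename, as an int.
    hex_chars = hash_bits // 4
    digest = md5(name.encode("utf_8", "backslashreplace")).hexdigest()
    return int(digest[-hex_chars:], 16)

# All packages installing a file named "libfoo.so" land in the bucket
# keyed by hash_basename("libfoo.so").
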
- 	class _owners_db:
- 
- 		def __init__(self, vardb):
- 			self._vardb = vardb
- 
- 		def populate(self):
- 			self._populate()
- 
- 		def _populate(self):
- 			owners_cache = vardbapi._owners_cache(self._vardb)
- 			cached_hashes = set()
- 			base_names = self._vardb._aux_cache["owners"]["base_names"]
- 
- 			# Take inventory of all cached package hashes.
- 			for name, hash_values in list(base_names.items()):
- 				if not isinstance(hash_values, dict):
- 					del base_names[name]
- 					continue
- 				cached_hashes.update(hash_values)
- 
- 			# Create sets of valid package hashes and uncached packages.
- 			uncached_pkgs = set()
- 			hash_pkg = owners_cache._hash_pkg
- 			valid_pkg_hashes = set()
- 			for cpv in self._vardb.cpv_all():
- 				hash_value = hash_pkg(cpv)
- 				valid_pkg_hashes.add(hash_value)
- 				if hash_value not in cached_hashes:
- 					uncached_pkgs.add(cpv)
- 
- 			# Cache any missing packages.
- 			for cpv in uncached_pkgs:
- 				owners_cache.add(cpv)
- 
- 			# Delete any stale cache.
- 			stale_hashes = cached_hashes.difference(valid_pkg_hashes)
- 			if stale_hashes:
- 				for base_name_hash, bucket in list(base_names.items()):
- 					for hash_value in stale_hashes.intersection(bucket):
- 						del bucket[hash_value]
- 					if not bucket:
- 						del base_names[base_name_hash]
- 
- 			return owners_cache
- 
- 		def get_owners(self, path_iter):
- 			"""
- 			@return the owners as a dblink -> set(files) mapping.
- 			"""
- 			owners = {}
- 			for owner, f in self.iter_owners(path_iter):
- 				owned_files = owners.get(owner)
- 				if owned_files is None:
- 					owned_files = set()
- 					owners[owner] = owned_files
- 				owned_files.add(f)
- 			return owners
- 
- 		def getFileOwnerMap(self, path_iter):
- 			owners = self.get_owners(path_iter)
- 			file_owners = {}
- 			for pkg_dblink, files in owners.items():
- 				for f in files:
- 					owner_set = file_owners.get(f)
- 					if owner_set is None:
- 						owner_set = set()
- 						file_owners[f] = owner_set
- 					owner_set.add(pkg_dblink)
- 			return file_owners
- 
- 		def iter_owners(self, path_iter):
- 			"""
- 			Iterate over tuples of (dblink, path). In order to avoid
- 			consuming too many resources for too long, resources
- 			are only allocated for the duration of a given iter_owners()
- 			call. Therefore, to maximize reuse of resources when searching
- 			for multiple files, it's best to search for them all in a single
- 			call.
- 			"""
- 
- 			if not isinstance(path_iter, list):
- 				path_iter = list(path_iter)
- 			owners_cache = self._populate()
- 			vardb = self._vardb
- 			root = vardb._eroot
- 			hash_pkg = owners_cache._hash_pkg
- 			hash_str = owners_cache._hash_str
- 			base_names = self._vardb._aux_cache["owners"]["base_names"]
- 			case_insensitive = "case-insensitive-fs" \
- 				in vardb.settings.features
- 
- 			dblink_cache = {}
- 
- 			def dblink(cpv):
- 				x = dblink_cache.get(cpv)
- 				if x is None:
- 					if len(dblink_cache) > 20:
- 						# Ensure that we don't run out of memory.
- 						raise StopIteration()
- 					x = self._vardb._dblink(cpv)
- 					dblink_cache[cpv] = x
- 				return x
- 
- 			while path_iter:
- 
- 				path = path_iter.pop()
- 				if case_insensitive:
- 					path = path.lower()
- 				is_basename = os.sep != path[:1]
- 				if is_basename:
- 					name = path
- 				else:
- 					name = os.path.basename(path.rstrip(os.path.sep))
- 
- 				if not name:
- 					continue
- 
- 				name_hash = hash_str(name)
- 				pkgs = base_names.get(name_hash)
- 				owners = []
- 				if pkgs is not None:
- 					try:
- 						for hash_value in pkgs:
- 							if not isinstance(hash_value, tuple) or \
- 								len(hash_value) != 3:
- 								continue
- 							cpv, counter, mtime = hash_value
- 							if not isinstance(cpv, str):
- 								continue
- 							try:
- 								current_hash = hash_pkg(cpv)
- 							except KeyError:
- 								continue
- 
- 							if current_hash != hash_value:
- 								continue
- 
- 							if is_basename:
- 								for p in dblink(cpv)._contents.keys():
- 									if os.path.basename(p) == name:
- 										owners.append((cpv, dblink(cpv).
- 										_contents.unmap_key(
- 										p)[len(root):]))
- 							else:
- 								key = dblink(cpv)._match_contents(path)
- 								if key is not False:
- 									owners.append(
- 										(cpv, key[len(root):]))
- 
- 					except StopIteration:
- 						path_iter.append(path)
- 						del owners[:]
- 						dblink_cache.clear()
- 						gc.collect()
- 						for x in self._iter_owners_low_mem(path_iter):
- 							yield x
- 						return
- 					else:
- 						for cpv, p in owners:
- 							yield (dblink(cpv), p)
- 
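A hypothetical owner query built on the method above; 'vardb' is a placeholder for a vardbapi instance and the two paths are only examples (an absolute path, and a bare basename which is matched against the basenames of all contents entries).

# Hypothetical usage of the owners database; 'vardb' and the paths are
# placeholders.
paths = ["/usr/bin/python3", "bash"]
for pkg_dblink, relpath in vardb._owners.iter_owners(paths):
    print("%s owns %s" % (pkg_dblink.mycpv, relpath))
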
- 		def _iter_owners_low_mem(self, path_list):
- 			"""
- 			This implementation will make a short-lived dblink instance (and
- 			parse CONTENTS) for every single installed package. This is
- 			slower but uses less memory than the method which uses the
- 			basename cache.
- 			"""
- 
- 			if not path_list:
- 				return
- 
- 			case_insensitive = "case-insensitive-fs" \
- 				in self._vardb.settings.features
- 			path_info_list = []
- 			for path in path_list:
- 				if case_insensitive:
- 					path = path.lower()
- 				is_basename = os.sep != path[:1]
- 				if is_basename:
- 					name = path
- 				else:
- 					name = os.path.basename(path.rstrip(os.path.sep))
- 				path_info_list.append((path, name, is_basename))
- 
- 			# Do work via the global event loop, so that it can be used
- 			# for indication of progress during the search (bug #461412).
- 			event_loop = asyncio._safe_loop()
- 			root = self._vardb._eroot
- 
- 			def search_pkg(cpv, search_future):
- 				dblnk = self._vardb._dblink(cpv)
- 				results = []
- 				for path, name, is_basename in path_info_list:
- 					if is_basename:
- 						for p in dblnk._contents.keys():
- 							if os.path.basename(p) == name:
- 								results.append((dblnk,
- 									dblnk._contents.unmap_key(
- 										p)[len(root):]))
- 					else:
- 						key = dblnk._match_contents(path)
- 						if key is not False:
- 							results.append(
- 								(dblnk, key[len(root):]))
- 				search_future.set_result(results)
- 
- 			for cpv in self._vardb.cpv_all():
- 				search_future = event_loop.create_future()
- 				event_loop.call_soon(search_pkg, cpv, search_future)
- 				event_loop.run_until_complete(search_future)
- 				for result in search_future.result():
- 					yield result
+     _excluded_dirs = ["CVS", "lost+found"]
+     _excluded_dirs = [re.escape(x) for x in _excluded_dirs]
+     _excluded_dirs = re.compile(
+         r"^(\..*|" + MERGING_IDENTIFIER + ".*|" + "|".join(_excluded_dirs) + r")$"
+     )
+ 
+     _aux_cache_version = "1"
+     _owners_cache_version = "1"
+ 
+     # Number of uncached packages to trigger cache update, since
+     # it's wasteful to update it for every vdb change.
+     _aux_cache_threshold = 5
+ 
+     _aux_cache_keys_re = re.compile(r"^NEEDED\..*$")
+     _aux_multi_line_re = re.compile(r"^(CONTENTS|NEEDED\..*)$")
+     _pkg_str_aux_keys = dbapi._pkg_str_aux_keys + ("BUILD_ID", "BUILD_TIME", "_mtime_")
+ 
+     def __init__(
+         self,
+         _unused_param=DeprecationWarning,
+         categories=None,
+         settings=None,
+         vartree=None,
+     ):
+         """
+         The categories parameter is unused since the dbapi class
+         now has a categories property that is generated from the
+         available packages.
+         """
+ 
+         # Used by emerge to check whether any packages
+         # have been added or removed.
+         self._pkgs_changed = False
+ 
+         # The _aux_cache_threshold doesn't work as designed
+         # if the cache is flushed from a subprocess, so we
+         # use this to avoid wasteful vdb cache updates.
+         self._flush_cache_enabled = True
+ 
+         # cache for category directory mtimes
+         self.mtdircache = {}
+ 
+         # cache for dependency checks
+         self.matchcache = {}
+ 
+         # cache for cp_list results
+         self.cpcache = {}
+ 
+         self.blockers = None
+         if settings is None:
+             settings = portage.settings
+         self.settings = settings
+ 
+         if _unused_param is not DeprecationWarning:
+             warnings.warn(
+                 "The first parameter of the "
+                 "portage.dbapi.vartree.vardbapi"
+                 " constructor is now unused. Instead "
+                 "settings['ROOT'] is used.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         self._eroot = settings["EROOT"]
+         self._dbroot = self._eroot + VDB_PATH
+         self._lock = None
+         self._lock_count = 0
+ 
+         self._conf_mem_file = self._eroot + CONFIG_MEMORY_FILE
+         self._fs_lock_obj = None
+         self._fs_lock_count = 0
+         self._slot_locks = {}
+ 
+         if vartree is None:
+             vartree = portage.db[settings["EROOT"]]["vartree"]
+         self.vartree = vartree
+         self._aux_cache_keys = set(
+             [
+                 "BDEPEND",
+                 "BUILD_TIME",
+                 "CHOST",
+                 "COUNTER",
+                 "DEPEND",
+                 "DESCRIPTION",
+                 "EAPI",
+                 "HOMEPAGE",
+                 "BUILD_ID",
+                 "IDEPEND",
+                 "IUSE",
+                 "KEYWORDS",
+                 "LICENSE",
+                 "PDEPEND",
+                 "PROPERTIES",
+                 "RDEPEND",
+                 "repository",
+                 "RESTRICT",
+                 "SLOT",
+                 "USE",
+                 "DEFINED_PHASES",
+                 "PROVIDES",
+                 "REQUIRES",
+             ]
+         )
+         self._aux_cache_obj = None
+         self._aux_cache_filename = os.path.join(
+             self._eroot, CACHE_PATH, "vdb_metadata.pickle"
+         )
+         self._cache_delta_filename = os.path.join(
+             self._eroot, CACHE_PATH, "vdb_metadata_delta.json"
+         )
+         self._cache_delta = VdbMetadataDelta(self)
+         self._counter_path = os.path.join(self._eroot, CACHE_PATH, "counter")
+ 
+         self._plib_registry = PreservedLibsRegistry(
+             settings["ROOT"],
+             os.path.join(self._eroot, PRIVATE_PATH, "preserved_libs_registry"),
+         )
 -        self._linkmap = LinkageMap(self)
++        chost = self.settings.get('CHOST')
++        if not chost:
++            chost = 'lunix?' # this happens when profiles are not available
++        if chost.find('darwin') >= 0:
++            self._linkmap = LinkageMapMachO(self)
++        elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
++            self._linkmap = LinkageMapPeCoff(self)
++        elif chost.find('aix') >= 0:
++            self._linkmap = LinkageMapXCoff(self)
++        else:
++            self._linkmap = LinkageMap(self)
+         self._owners = self._owners_db(self)
+ 
+         self._cached_counter = None
+ 
+     @property
+     def writable(self):
+         """
+         Check if var/db/pkg is writable, or permissions are sufficient
+         to create it if it does not exist yet.
+         @rtype: bool
+         @return: True if var/db/pkg is writable or can be created,
+                 False otherwise
+         """
+         return os.access(first_existing(self._dbroot), os.W_OK)
+ 
+     @property
+     def root(self):
+         warnings.warn(
+             "The root attribute of "
+             "portage.dbapi.vartree.vardbapi"
+             " is deprecated. Use "
+             "settings['ROOT'] instead.",
+             DeprecationWarning,
+             stacklevel=3,
+         )
+         return self.settings["ROOT"]
+ 
+     def getpath(self, mykey, filename=None):
+         # This is an optimized hotspot, so don't use unicode-wrapped
+         # os module and don't use os.path.join().
+         rValue = self._eroot + VDB_PATH + _os.sep + mykey
+         if filename is not None:
+             # If filename is always relative, we can do just
+             # rValue += _os.sep + filename
+             rValue = _os.path.join(rValue, filename)
+         return rValue
+ 
+     def lock(self):
+         """
+         Acquire a reentrant lock, blocking, for cooperation with concurrent
+         processes. State is inherited by subprocesses, allowing subprocesses
+         to reenter a lock that was acquired by a parent process. However,
+         a lock can be released only by the same process that acquired it.
+         """
+         if self._lock_count:
+             self._lock_count += 1
+         else:
+             if self._lock is not None:
+                 raise AssertionError("already locked")
+             # At least the parent needs to exist for the lock file.
+             ensure_dirs(self._dbroot)
+             self._lock = lockdir(self._dbroot)
+             self._lock_count += 1
+ 
+     def unlock(self):
+         """
+         Release a lock, decrementing the recursion level. Each unlock() call
+         must be matched with a prior lock() call; an AssertionError is
+         raised if unlock() is called while not locked.
+         """
+         if self._lock_count > 1:
+             self._lock_count -= 1
+         else:
+             if self._lock is None:
+                 raise AssertionError("not locked")
+             self._lock_count = 0
+             unlockdir(self._lock)
+             self._lock = None
+ 
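A minimal sketch of the reentrant locking contract described in the two docstrings above; 'vardb' is a placeholder for a vardbapi instance, and every lock() call must be paired with an unlock().

vardb.lock()
try:
    vardb.lock()        # reentrant: only bumps the recursion count
    try:
        pass            # ... operate on /var/db/pkg ...
    finally:
        vardb.unlock()  # decrements the count
finally:
    vardb.unlock()      # releases the actual directory lock
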
+     def _fs_lock(self):
+         """
+         Acquire a reentrant lock, blocking, for cooperation with concurrent
+         processes.
+         """
+         if self._fs_lock_count < 1:
+             if self._fs_lock_obj is not None:
+                 raise AssertionError("already locked")
+             try:
+                 self._fs_lock_obj = lockfile(self._conf_mem_file)
+             except InvalidLocation:
+                 self.settings._init_dirs()
+                 self._fs_lock_obj = lockfile(self._conf_mem_file)
+         self._fs_lock_count += 1
+ 
+     def _fs_unlock(self):
+         """
+         Release a lock, decrementing the recursion level.
+         """
+         if self._fs_lock_count <= 1:
+             if self._fs_lock_obj is None:
+                 raise AssertionError("not locked")
+             unlockfile(self._fs_lock_obj)
+             self._fs_lock_obj = None
+         self._fs_lock_count -= 1
+ 
+     def _slot_lock(self, slot_atom):
+         """
+         Acquire a slot lock (reentrant).
+ 
+         WARNING: The vardbapi._slot_lock method is not safe to call
+         in the main process when that process is scheduling
+         install/uninstall tasks in parallel, since the locks would
+         be inherited by child processes. In order to avoid this sort
+         of problem, this method should be called in a subprocess
+         (typically spawned by the MergeProcess class).
+         """
+         lock, counter = self._slot_locks.get(slot_atom, (None, 0))
+         if lock is None:
+             lock_path = self.getpath("%s:%s" % (slot_atom.cp, slot_atom.slot))
+             ensure_dirs(os.path.dirname(lock_path))
+             lock = lockfile(lock_path, wantnewlockfile=True)
+         self._slot_locks[slot_atom] = (lock, counter + 1)
+ 
+     def _slot_unlock(self, slot_atom):
+         """
+         Release a slot lock (or decrement the recursion level).
+         """
+         lock, counter = self._slot_locks.get(slot_atom, (None, 0))
+         if lock is None:
+             raise AssertionError("not locked")
+         counter -= 1
+         if counter == 0:
+             unlockfile(lock)
+             del self._slot_locks[slot_atom]
+         else:
+             self._slot_locks[slot_atom] = (lock, counter)
+ 
+     def _bump_mtime(self, cpv):
+         """
+         This is called before and after any modifications, so that consumers
+         can use directory mtimes to validate caches. See bug #290428.
+         """
+         base = self._eroot + VDB_PATH
+         cat = catsplit(cpv)[0]
+         catdir = base + _os.sep + cat
+         t = time.time()
+         t = (t, t)
+         try:
+             for x in (catdir, base):
+                 os.utime(x, t)
+         except OSError:
+             ensure_dirs(catdir)
+ 
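On the consumer side, the mtime bump above is typically paired with a check like the sketch below (cf. bug #290428 and the match() cache): a per-category cache entry is considered stale once the category directory mtime no longer matches. The names are placeholders.

import os

def category_cache_is_stale(vdb_root, cat, mtdircache):
    # Sketch of consumer-side validation; mtdircache maps category names
    # to the st_mtime_ns value seen when the cache entry was built.
    try:
        curmtime = os.stat(os.path.join(vdb_root, cat)).st_mtime_ns
    except OSError:
        curmtime = 0
    return mtdircache.get(cat) != curmtime
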
+     def cpv_exists(self, mykey, myrepo=None):
+         "Tells us whether an actual ebuild exists on disk (no masking)"
+         return os.path.exists(self.getpath(mykey))
+ 
+     def cpv_counter(self, mycpv):
+         "This method will grab the COUNTER. Returns a counter value."
+         try:
+             return int(self.aux_get(mycpv, ["COUNTER"])[0])
+         except (KeyError, ValueError):
+             pass
+         writemsg_level(
+             _("portage: COUNTER for %s was corrupted; " "resetting to value of 0\n")
+             % (mycpv,),
+             level=logging.ERROR,
+             noiselevel=-1,
+         )
+         return 0
+ 
+     def cpv_inject(self, mycpv):
+         "injects a real package into our on-disk database; assumes mycpv is valid and doesn't already exist"
+         ensure_dirs(self.getpath(mycpv))
+         counter = self.counter_tick(mycpv=mycpv)
+         # write local package counter so that emerge clean does the right thing
+         write_atomic(self.getpath(mycpv, filename="COUNTER"), str(counter))
+ 
+     def isInjected(self, mycpv):
+         if self.cpv_exists(mycpv):
+             if os.path.exists(self.getpath(mycpv, filename="INJECTED")):
+                 return True
+             if not os.path.exists(self.getpath(mycpv, filename="CONTENTS")):
+                 return True
+         return False
+ 
+     def move_ent(self, mylist, repo_match=None):
+         origcp = mylist[1]
+         newcp = mylist[2]
+ 
+         # sanity check
+         for atom in (origcp, newcp):
+             if not isjustname(atom):
+                 raise InvalidPackageName(str(atom))
+         origmatches = self.match(origcp, use_cache=0)
+         moves = 0
+         if not origmatches:
+             return moves
+         for mycpv in origmatches:
+             mycpv_cp = mycpv.cp
+             if mycpv_cp != origcp:
+                 # Ignore PROVIDE virtual match.
+                 continue
+             if repo_match is not None and not repo_match(mycpv.repo):
+                 continue
+ 
+             # Use isvalidatom() to check if this move is valid for the
+             # EAPI (characters allowed in package names may vary).
+             if not isvalidatom(newcp, eapi=mycpv.eapi):
+                 continue
+ 
+             mynewcpv = mycpv.replace(mycpv_cp, str(newcp), 1)
+             mynewcat = catsplit(newcp)[0]
+             origpath = self.getpath(mycpv)
+             if not os.path.exists(origpath):
+                 continue
+             moves += 1
+             if not os.path.exists(self.getpath(mynewcat)):
+                 # create the directory
+                 ensure_dirs(self.getpath(mynewcat))
+             newpath = self.getpath(mynewcpv)
+             if os.path.exists(newpath):
+                 # dest already exists; keep this puppy where it is.
+                 continue
+             _movefile(origpath, newpath, mysettings=self.settings)
+             self._clear_pkg_cache(self._dblink(mycpv))
+             self._clear_pkg_cache(self._dblink(mynewcpv))
+ 
+             # We need to rename the ebuild now.
+             old_pf = catsplit(mycpv)[1]
+             new_pf = catsplit(mynewcpv)[1]
+             if new_pf != old_pf:
+                 try:
+                     os.rename(
+                         os.path.join(newpath, old_pf + ".ebuild"),
+                         os.path.join(newpath, new_pf + ".ebuild"),
+                     )
+                 except EnvironmentError as e:
+                     if e.errno != errno.ENOENT:
+                         raise
+                     del e
+             write_atomic(os.path.join(newpath, "PF"), new_pf + "\n")
+             write_atomic(os.path.join(newpath, "CATEGORY"), mynewcat + "\n")
+ 
+         return moves
+ 
+     def cp_list(self, mycp, use_cache=1):
+         mysplit = catsplit(mycp)
+         if mysplit[0] == "*":
+             mysplit[0] = mysplit[0][1:]
+         try:
+             mystat = os.stat(self.getpath(mysplit[0])).st_mtime_ns
+         except OSError:
+             mystat = 0
+         if use_cache and mycp in self.cpcache:
+             cpc = self.cpcache[mycp]
+             if cpc[0] == mystat:
+                 return cpc[1][:]
+         cat_dir = self.getpath(mysplit[0])
+         try:
+             dir_list = os.listdir(cat_dir)
+         except EnvironmentError as e:
+             if e.errno == PermissionDenied.errno:
+                 raise PermissionDenied(cat_dir)
+             del e
+             dir_list = []
+ 
+         returnme = []
+         for x in dir_list:
+             if self._excluded_dirs.match(x) is not None:
+                 continue
+             ps = pkgsplit(x)
+             if not ps:
+                 self.invalidentry(os.path.join(self.getpath(mysplit[0]), x))
+                 continue
+             if len(mysplit) > 1:
+                 if ps[0] == mysplit[1]:
+                     cpv = "%s/%s" % (mysplit[0], x)
+                     metadata = dict(
+                         zip(
+                             self._aux_cache_keys,
+                             self.aux_get(cpv, self._aux_cache_keys),
+                         )
+                     )
+                     returnme.append(
+                         _pkg_str(
+                             cpv, metadata=metadata, settings=self.settings, db=self
+                         )
+                     )
+         self._cpv_sort_ascending(returnme)
+         if use_cache:
+             self.cpcache[mycp] = [mystat, returnme[:]]
+         elif mycp in self.cpcache:
+             del self.cpcache[mycp]
+         return returnme
+ 
+     def cpv_all(self, use_cache=1):
+         """
+         Set use_cache=0 to bypass the portage.cachedir() cache in cases
+         when the accuracy of mtime staleness checks should not be trusted
+         (generally this is only necessary in critical sections that
+         involve merge or unmerge of packages).
+         """
+         return list(self._iter_cpv_all(use_cache=use_cache))
+ 
+     def _iter_cpv_all(self, use_cache=True, sort=False):
+         returnme = []
+         basepath = os.path.join(self._eroot, VDB_PATH) + os.path.sep
+ 
+         if use_cache:
+             from portage import listdir
+         else:
+ 
+             def listdir(p, **kwargs):
+                 try:
+                     return [
+                         x for x in os.listdir(p) if os.path.isdir(os.path.join(p, x))
+                     ]
+                 except EnvironmentError as e:
+                     if e.errno == PermissionDenied.errno:
+                         raise PermissionDenied(p)
+                     del e
+                     return []
+ 
+         catdirs = listdir(basepath, EmptyOnError=1, ignorecvs=1, dirsonly=1)
+         if sort:
+             catdirs.sort()
+ 
+         for x in catdirs:
+             if self._excluded_dirs.match(x) is not None:
+                 continue
+             if not self._category_re.match(x):
+                 continue
+ 
+             pkgdirs = listdir(basepath + x, EmptyOnError=1, dirsonly=1)
+             if sort:
+                 pkgdirs.sort()
+ 
+             for y in pkgdirs:
+                 if self._excluded_dirs.match(y) is not None:
+                     continue
+                 subpath = x + "/" + y
+                 # -MERGING- should never be a cpv, nor should files.
+                 try:
+                     subpath = _pkg_str(subpath, db=self)
+                 except InvalidData:
+                     self.invalidentry(self.getpath(subpath))
+                     continue
+ 
+                 yield subpath
+ 
+     def cp_all(self, use_cache=1, sort=False):
+         mylist = self.cpv_all(use_cache=use_cache)
+         d = {}
+         for y in mylist:
+             if y[0] == "*":
+                 y = y[1:]
+             try:
+                 mysplit = catpkgsplit(y)
+             except InvalidData:
+                 self.invalidentry(self.getpath(y))
+                 continue
+             if not mysplit:
+                 self.invalidentry(self.getpath(y))
+                 continue
+             d[mysplit[0] + "/" + mysplit[1]] = None
+         return sorted(d) if sort else list(d)
+ 
+     def checkblockers(self, origdep):
+         pass
+ 
+     def _clear_cache(self):
+         self.mtdircache.clear()
+         self.matchcache.clear()
+         self.cpcache.clear()
+         self._aux_cache_obj = None
+ 
+     def _add(self, pkg_dblink):
+         self._pkgs_changed = True
+         self._clear_pkg_cache(pkg_dblink)
+ 
+     def _remove(self, pkg_dblink):
+         self._pkgs_changed = True
+         self._clear_pkg_cache(pkg_dblink)
+ 
+     def _clear_pkg_cache(self, pkg_dblink):
+         # Due to 1 second mtime granularity in <python-2.5, mtime checks
+         # are not always sufficient to invalidate vardbapi caches. Therefore,
+         # the caches need to be actively invalidated here.
+         self.mtdircache.pop(pkg_dblink.cat, None)
+         self.matchcache.pop(pkg_dblink.cat, None)
+         self.cpcache.pop(pkg_dblink.mysplit[0], None)
+         dircache.pop(pkg_dblink.dbcatdir, None)
+ 
+     def match(self, origdep, use_cache=1):
+         "caching match function"
+         mydep = dep_expand(
+             origdep, mydb=self, use_cache=use_cache, settings=self.settings
+         )
+         cache_key = (mydep, mydep.unevaluated_atom)
+         mykey = dep_getkey(mydep)
+         mycat = catsplit(mykey)[0]
+         if not use_cache:
+             if mycat in self.matchcache:
+                 del self.mtdircache[mycat]
+                 del self.matchcache[mycat]
+             return list(
+                 self._iter_match(mydep, self.cp_list(mydep.cp, use_cache=use_cache))
+             )
+         try:
+             curmtime = os.stat(os.path.join(self._eroot, VDB_PATH, mycat)).st_mtime_ns
+         except (IOError, OSError):
+             curmtime = 0
+ 
+         if mycat not in self.matchcache or self.mtdircache[mycat] != curmtime:
+             # clear cache entry
+             self.mtdircache[mycat] = curmtime
+             self.matchcache[mycat] = {}
+         if mydep not in self.matchcache[mycat]:
+             mymatch = list(
+                 self._iter_match(mydep, self.cp_list(mydep.cp, use_cache=use_cache))
+             )
+             self.matchcache[mycat][cache_key] = mymatch
+         return self.matchcache[mycat][cache_key][:]
+ 
+     def findname(self, mycpv, myrepo=None):
+         return self.getpath(str(mycpv), filename=catsplit(mycpv)[1] + ".ebuild")
+ 
+     def flush_cache(self):
+         """If the current user has permission and the internal aux_get cache has
+         been updated, save it to disk and mark it unmodified.  This is called
+         by emerge after it has loaded the full vdb for use in dependency
+         calculations.  Currently, the cache is only written if the user has
+         superuser privileges (since that's required to obtain a lock), but all
+         users have read access and benefit from faster metadata lookups (as
+         long as at least part of the cache is still valid)."""
+         if (
+             self._flush_cache_enabled
+             and self._aux_cache is not None
+             and secpass >= 2
+             and (
+                 len(self._aux_cache["modified"]) >= self._aux_cache_threshold
+                 or not os.path.exists(self._cache_delta_filename)
+             )
+         ):
+ 
+             ensure_dirs(os.path.dirname(self._aux_cache_filename))
+ 
+             self._owners.populate()  # index any unindexed contents
+             valid_nodes = set(self.cpv_all())
+             for cpv in list(self._aux_cache["packages"]):
+                 if cpv not in valid_nodes:
+                     del self._aux_cache["packages"][cpv]
+             del self._aux_cache["modified"]
+             timestamp = time.time()
+             self._aux_cache["timestamp"] = timestamp
+ 
+             with atomic_ofstream(self._aux_cache_filename, "wb") as f:
+                 pickle.dump(self._aux_cache, f, protocol=2)
+ 
+             apply_secpass_permissions(self._aux_cache_filename, mode=0o644)
+ 
+             self._cache_delta.initialize(timestamp)
+             apply_secpass_permissions(self._cache_delta_filename, mode=0o644)
+ 
+             self._aux_cache["modified"] = set()
+ 
+     @property
+     def _aux_cache(self):
+         if self._aux_cache_obj is None:
+             self._aux_cache_init()
+         return self._aux_cache_obj
+ 
+     def _aux_cache_init(self):
+         aux_cache = None
+         open_kwargs = {}
+         try:
+             with open(
+                 _unicode_encode(
+                     self._aux_cache_filename, encoding=_encodings["fs"], errors="strict"
+                 ),
+                 mode="rb",
+                 **open_kwargs
+             ) as f:
+                 mypickle = pickle.Unpickler(f)
+                 try:
+                     mypickle.find_global = None
+                 except AttributeError:
+                     # TODO: If py3k, override Unpickler.find_class().
+                     pass
+                 aux_cache = mypickle.load()
+         except (SystemExit, KeyboardInterrupt):
+             raise
+         except Exception as e:
+             if isinstance(e, EnvironmentError) and getattr(e, "errno", None) in (
+                 errno.ENOENT,
+                 errno.EACCES,
+             ):
+                 pass
+             else:
+                 writemsg(
+                     _("!!! Error loading '%s': %s\n") % (self._aux_cache_filename, e),
+                     noiselevel=-1,
+                 )
+             del e
+ 
+         if (
+             not aux_cache
+             or not isinstance(aux_cache, dict)
+             or aux_cache.get("version") != self._aux_cache_version
+             or not aux_cache.get("packages")
+         ):
+             aux_cache = {"version": self._aux_cache_version}
+             aux_cache["packages"] = {}
+ 
+         owners = aux_cache.get("owners")
+         if owners is not None:
+             if not isinstance(owners, dict):
+                 owners = None
+             elif "version" not in owners:
+                 owners = None
+             elif owners["version"] != self._owners_cache_version:
+                 owners = None
+             elif "base_names" not in owners:
+                 owners = None
+             elif not isinstance(owners["base_names"], dict):
+                 owners = None
+ 
+         if owners is None:
+             owners = {"base_names": {}, "version": self._owners_cache_version}
+             aux_cache["owners"] = owners
+ 
+         aux_cache["modified"] = set()
+         self._aux_cache_obj = aux_cache
+ 
+     def aux_get(self, mycpv, wants, myrepo=None):
+         """This automatically caches selected keys that are frequently needed
+         by emerge for dependency calculations.  The cached metadata is
+         considered valid if the mtime of the package directory has not changed
+         since the data was cached.  The cache is stored in a pickled dict
+         object with the following format:
+ 
+         {version:"1", "packages":{cpv1:(mtime,{k1:v1, k2:v2, ...}), cpv2...}}
+ 
+         If an error occurs while loading the cache pickle or the version is
+         unrecognized, the cache will simply be recreated from scratch (it is
+         completely disposable).
+         """
+         cache_these_wants = self._aux_cache_keys.intersection(wants)
+         for x in wants:
+             if self._aux_cache_keys_re.match(x) is not None:
+                 cache_these_wants.add(x)
+ 
+         if not cache_these_wants:
+             mydata = self._aux_get(mycpv, wants)
+             return [mydata[x] for x in wants]
+ 
+         cache_these = set(self._aux_cache_keys)
+         cache_these.update(cache_these_wants)
+ 
+         mydir = self.getpath(mycpv)
+         mydir_stat = None
+         try:
+             mydir_stat = os.stat(mydir)
+         except OSError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+             raise KeyError(mycpv)
+         # Use float mtime when available.
+         mydir_mtime = mydir_stat.st_mtime
+         pkg_data = self._aux_cache["packages"].get(mycpv)
+         pull_me = cache_these.union(wants)
+         mydata = {"_mtime_": mydir_mtime}
+         cache_valid = False
+         cache_incomplete = False
+         cache_mtime = None
+         metadata = None
+         if pkg_data is not None:
+             if not isinstance(pkg_data, tuple) or len(pkg_data) != 2:
+                 pkg_data = None
+             else:
+                 cache_mtime, metadata = pkg_data
+                 if not isinstance(cache_mtime, (float, int)) or not isinstance(
+                     metadata, dict
+                 ):
+                     pkg_data = None
+ 
+         if pkg_data:
+             cache_mtime, metadata = pkg_data
+             if isinstance(cache_mtime, float):
+                 if cache_mtime == mydir_stat.st_mtime:
+                     cache_valid = True
+ 
+                 # Handle truncated mtime in order to avoid cache
+                 # invalidation for livecd squashfs (bug 564222).
+                 elif int(cache_mtime) == mydir_stat.st_mtime:
+                     cache_valid = True
+             else:
+                 # Cache may contain integer mtime.
+                 cache_valid = cache_mtime == mydir_stat[stat.ST_MTIME]
+ 
+         if cache_valid:
+             # Migrate old metadata to unicode.
+             for k, v in metadata.items():
+                 metadata[k] = _unicode_decode(
+                     v, encoding=_encodings["repo.content"], errors="replace"
+                 )
+ 
+             mydata.update(metadata)
+             pull_me.difference_update(mydata)
+ 
+         if pull_me:
+             # pull any needed data and cache it
+             aux_keys = list(pull_me)
+             mydata.update(self._aux_get(mycpv, aux_keys, st=mydir_stat))
+             if not cache_valid or cache_these.difference(metadata):
+                 cache_data = {}
+                 if cache_valid and metadata:
+                     cache_data.update(metadata)
+                 for aux_key in cache_these:
+                     cache_data[aux_key] = mydata[aux_key]
+                 self._aux_cache["packages"][str(mycpv)] = (mydir_mtime, cache_data)
+                 self._aux_cache["modified"].add(mycpv)
+ 
+         eapi_attrs = _get_eapi_attrs(mydata["EAPI"])
+         if _get_slot_re(eapi_attrs).match(mydata["SLOT"]) is None:
+             # Empty or invalid slot triggers InvalidAtom exceptions when
+             # generating slot atoms for packages, so translate it to '0' here.
+             mydata["SLOT"] = "0"
+ 
+         return [mydata[x] for x in wants]
+ 
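Purely illustrative: the pickled structure described in the docstring above might look roughly like the following; the package name, mtime and cached values are invented.

    aux_cache = {
        "version": "1",
        "packages": {
            "app-misc/example-1.0": (
                1642156321.0,                 # mtime of the package's vdb directory
                {"SLOT": "0", "EAPI": "8"},   # cached metadata keys
            ),
        },
        # Maintained alongside the package data by _aux_cache_init()/flush_cache():
        "owners": {"base_names": {}, "version": "1"},
        "modified": set(),
    }
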
+     def _aux_get(self, mycpv, wants, st=None):
+         mydir = self.getpath(mycpv)
+         if st is None:
+             try:
+                 st = os.stat(mydir)
+             except OSError as e:
+                 if e.errno == errno.ENOENT:
+                     raise KeyError(mycpv)
+                 elif e.errno == PermissionDenied.errno:
+                     raise PermissionDenied(mydir)
+                 else:
+                     raise
+         if not stat.S_ISDIR(st.st_mode):
+             raise KeyError(mycpv)
+         results = {}
+         env_keys = []
+         for x in wants:
+             if x == "_mtime_":
+                 results[x] = st[stat.ST_MTIME]
+                 continue
+             try:
+                 with io.open(
+                     _unicode_encode(
+                         os.path.join(mydir, x),
+                         encoding=_encodings["fs"],
+                         errors="strict",
+                     ),
+                     mode="r",
+                     encoding=_encodings["repo.content"],
+                     errors="replace",
+                 ) as f:
+                     myd = f.read()
+             except IOError:
+                 if (
+                     x not in self._aux_cache_keys
+                     and self._aux_cache_keys_re.match(x) is None
+                 ):
+                     env_keys.append(x)
+                     continue
+                 myd = ""
+ 
+             # Preserve \n for metadata that is known to
+             # contain multiple lines.
+             if self._aux_multi_line_re.match(x) is None:
+                 myd = " ".join(myd.split())
+ 
+             results[x] = myd
+ 
+         if env_keys:
+             env_results = self._aux_env_search(mycpv, env_keys)
+             for k in env_keys:
+                 v = env_results.get(k)
+                 if v is None:
+                     v = ""
+                 if self._aux_multi_line_re.match(k) is None:
+                     v = " ".join(v.split())
+                 results[k] = v
+ 
+         if results.get("EAPI") == "":
+             results["EAPI"] = "0"
+ 
+         return results
+ 
+     def _aux_env_search(self, cpv, variables):
+         """
+         Search environment.bz2 for the specified variables. Returns
+         a dict mapping variables to values, and any variables not
+         found in the environment will not be included in the dict.
+         This is useful for querying variables like ${SRC_URI} and
+         ${A}, which are not saved in separate files but are available
+         in environment.bz2 (see bug #395463).
+         """
+         env_file = self.getpath(cpv, filename="environment.bz2")
+         if not os.path.isfile(env_file):
+             return {}
+         bunzip2_cmd = portage.util.shlex_split(
+             self.settings.get("PORTAGE_BUNZIP2_COMMAND", "")
+         )
+         if not bunzip2_cmd:
+             bunzip2_cmd = portage.util.shlex_split(
+                 self.settings["PORTAGE_BZIP2_COMMAND"]
+             )
+             bunzip2_cmd.append("-d")
+         args = bunzip2_cmd + ["-c", env_file]
+         try:
+             proc = subprocess.Popen(args, stdout=subprocess.PIPE)
+         except EnvironmentError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+             raise portage.exception.CommandNotFound(args[0])
+ 
+         # Parts of the following code are borrowed from
+         # filter-bash-environment.py (keep them in sync).
+         var_assign_re = re.compile(
+             r'(^|^declare\s+-\S+\s+|^declare\s+|^export\s+)([^=\s]+)=("|\')?(.*)$'
+         )
+         close_quote_re = re.compile(r'(\\"|"|\')\s*$')
+ 
+         def have_end_quote(quote, line):
+             close_quote_match = close_quote_re.search(line)
+             return close_quote_match is not None and close_quote_match.group(1) == quote
+ 
+         variables = frozenset(variables)
+         results = {}
+         for line in proc.stdout:
+             line = _unicode_decode(
+                 line, encoding=_encodings["content"], errors="replace"
+             )
+             var_assign_match = var_assign_re.match(line)
+             if var_assign_match is not None:
+                 key = var_assign_match.group(2)
+                 quote = var_assign_match.group(3)
+                 if quote is not None:
+                     if have_end_quote(quote, line[var_assign_match.end(2) + 2 :]):
+                         value = var_assign_match.group(4)
+                     else:
+                         value = [var_assign_match.group(4)]
+                         for line in proc.stdout:
+                             line = _unicode_decode(
+                                 line, encoding=_encodings["content"], errors="replace"
+                             )
+                             value.append(line)
+                             if have_end_quote(quote, line):
+                                 break
+                         value = "".join(value)
+                     # remove trailing quote and whitespace
+                     value = value.rstrip()[:-1]
+                 else:
+                     value = var_assign_match.group(4).rstrip()
+ 
+                 if key in variables:
+                     results[key] = value
+ 
+         proc.wait()
+         proc.stdout.close()
+         return results
+ 
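A small self-contained illustration of the assignment pattern used by _aux_env_search() above; the sample environment line is invented.

    import re

    var_assign_re = re.compile(
        r'(^|^declare\s+-\S+\s+|^declare\s+|^export\s+)([^=\s]+)=("|\')?(.*)$'
    )
    line = 'declare -x SRC_URI="https://example.org/foo-1.0.tar.gz"'
    m = var_assign_re.match(line)
    print(m.group(2))  # SRC_URI
    print(m.group(4))  # https://example.org/foo-1.0.tar.gz"  (closing quote stripped later)
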
+     def aux_update(self, cpv, values):
+         mylink = self._dblink(cpv)
+         if not mylink.exists():
+             raise KeyError(cpv)
+         self._bump_mtime(cpv)
+         self._clear_pkg_cache(mylink)
+         for k, v in values.items():
+             if v:
+                 mylink.setfile(k, v)
+             else:
+                 try:
+                     os.unlink(os.path.join(self.getpath(cpv), k))
+                 except EnvironmentError:
+                     pass
+         self._bump_mtime(cpv)
+ 
+     async def unpack_metadata(self, pkg, dest_dir, loop=None):
+         """
+         Unpack package metadata to a directory. This method is a coroutine.
+ 
+         @param pkg: package to unpack
+         @type pkg: _pkg_str or portage.config
+         @param dest_dir: destination directory
+         @type dest_dir: str
+         """
+         loop = asyncio._wrap_loop(loop)
+         if not isinstance(pkg, portage.config):
+             cpv = pkg
+         else:
+             cpv = pkg.mycpv
+         dbdir = self.getpath(cpv)
+ 
+         def async_copy():
+             for parent, dirs, files in os.walk(dbdir, onerror=_raise_exc):
+                 for key in files:
+                     shutil.copy(os.path.join(parent, key), os.path.join(dest_dir, key))
+                 break
+ 
+         await loop.run_in_executor(ForkExecutor(loop=loop), async_copy)
+ 
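A hedged usage sketch for the coroutine above; the cpv and destination directory are invented, and the directory must already exist.

    import portage
    from portage.util.futures import asyncio

    vardb = portage.db[portage.root]["vartree"].dbapi
    loop = asyncio._safe_loop()
    # Copies the flat vdb metadata files (SLOT, CONTENTS, ...) into dest_dir.
    loop.run_until_complete(
        vardb.unpack_metadata("app-misc/example-1.0", "/tmp/example-vdb")
    )
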
+     async def unpack_contents(
+         self,
+         pkg,
+         dest_dir,
+         include_config=None,
+         include_unmodified_config=None,
+         loop=None,
+     ):
+         """
+         Unpack package contents to a directory. This method is a coroutine.
+ 
+         This copies files from the installed system, in the same way
+         as the quickpkg(1) command. Default behavior for handling
+         of protected configuration files is controlled by the
+         QUICKPKG_DEFAULT_OPTS variable. The relevant quickpkg options
+         are --include-config and --include-unmodified-config. When
+         a configuration file is not included because it is protected,
+         an ewarn message is logged.
+ 
+         @param pkg: package to unpack
+         @type pkg: _pkg_str or portage.config
+         @param dest_dir: destination directory
+         @type dest_dir: str
+         @param include_config: Include all files protected by
+                 CONFIG_PROTECT (as a security precaution, default is False
+                 unless modified by QUICKPKG_DEFAULT_OPTS).
+         @type include_config: bool
+         @param include_unmodified_config: Include files protected by
+                 CONFIG_PROTECT that have not been modified since installation
+                 (as a security precaution, default is False unless modified
+                 by QUICKPKG_DEFAULT_OPTS).
+         @type include_unmodified_config: bool
+         """
+         loop = asyncio._wrap_loop(loop)
+         if not isinstance(pkg, portage.config):
+             settings = self.settings
+             cpv = pkg
+         else:
+             settings = pkg
+             cpv = settings.mycpv
+ 
+         scheduler = SchedulerInterface(loop)
+         parser = argparse.ArgumentParser()
+         parser.add_argument("--include-config", choices=("y", "n"), default="n")
+         parser.add_argument(
+             "--include-unmodified-config", choices=("y", "n"), default="n"
+         )
+ 
+         # Method parameters may override QUICKPKG_DEFAULT_OPTS.
+         opts_list = portage.util.shlex_split(settings.get("QUICKPKG_DEFAULT_OPTS", ""))
+         if include_config is not None:
+             opts_list.append(
+                 "--include-config={}".format("y" if include_config else "n")
+             )
+         if include_unmodified_config is not None:
+             opts_list.append(
+                 "--include-unmodified-config={}".format(
+                     "y" if include_unmodified_config else "n"
+                 )
+             )
+ 
+         opts, args = parser.parse_known_args(opts_list)
+ 
+         tar_cmd = ("tar", "-x", "--xattrs", "--xattrs-include=*", "-C", dest_dir)
+         pr, pw = os.pipe()
+         proc = await asyncio.create_subprocess_exec(*tar_cmd, stdin=pr)
+         os.close(pr)
+         with os.fdopen(pw, "wb", 0) as pw_file:
+             excluded_config_files = await loop.run_in_executor(
+                 ForkExecutor(loop=loop),
+                 functools.partial(
+                     self._dblink(cpv).quickpkg,
+                     pw_file,
+                     include_config=opts.include_config == "y",
+                     include_unmodified_config=opts.include_unmodified_config == "y",
+                 ),
+             )
+         await proc.wait()
+         if proc.returncode != os.EX_OK:
+             raise PortageException("command failed: {}".format(tar_cmd))
+ 
+         if excluded_config_files:
+             log_lines = [
+                 _(
+                     "Config files excluded by QUICKPKG_DEFAULT_OPTS (see quickpkg(1) man page):"
+                 )
+             ] + ["\t{}".format(name) for name in excluded_config_files]
+             out = io.StringIO()
+             for line in log_lines:
+                 portage.elog.messages.ewarn(line, phase="install", key=cpv, out=out)
+             scheduler.output(
+                 out.getvalue(),
+                 background=self.settings.get("PORTAGE_BACKGROUND") == "1",
+                 log_path=settings.get("PORTAGE_LOG_FILE"),
+             )
+ 
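A similar sketch for unpack_contents(); the cpv and destination are again invented, the destination must exist, and GNU tar with xattr support is assumed since that is what the command above invokes. Explicit keyword arguments take precedence over QUICKPKG_DEFAULT_OPTS, as described in the docstring.

    import portage
    from portage.util.futures import asyncio

    vardb = portage.db[portage.root]["vartree"].dbapi
    loop = asyncio._safe_loop()
    loop.run_until_complete(
        vardb.unpack_contents(
            "app-misc/example-1.0",
            "/tmp/example-image",
            include_config=False,   # overrides any --include-config in QUICKPKG_DEFAULT_OPTS
        )
    )
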
+     def counter_tick(self, myroot=None, mycpv=None):
+         """
+         @param myroot: ignored, self._eroot is used instead
+         """
+         return self.counter_tick_core(incrementing=1, mycpv=mycpv)
+ 
+     def get_counter_tick_core(self, myroot=None, mycpv=None):
+         """
+         Use this method to retrieve the counter instead
+         of having to trust the value of a global counter
+         file that can lead to invalid COUNTER
+         generation. When the cache is valid, the package COUNTER
+         files are not read and we rely on the timestamp of the
+         package directory to validate the cache. The stat
+         calls should only take a short time, so performance
+         is sufficient without having to rely on a potentially
+         corrupt global counter file.
+ 
+         The global counter file located at
+         $CACHE_PATH/counter serves to record the
+         counter of the last installed package and
+         it also corresponds to the total number of
+         installation actions that have occurred in
+         the history of this package database.
+ 
+         @param myroot: ignored, self._eroot is used instead
+         """
+         del myroot
+         counter = -1
+         try:
+             with io.open(
+                 _unicode_encode(
+                     self._counter_path, encoding=_encodings["fs"], errors="strict"
+                 ),
+                 mode="r",
+                 encoding=_encodings["repo.content"],
+                 errors="replace",
+             ) as f:
+                 try:
+                     counter = int(f.readline().strip())
+                 except (OverflowError, ValueError) as e:
+                     writemsg(
+                         _("!!! COUNTER file is corrupt: '%s'\n") % self._counter_path,
+                         noiselevel=-1,
+                     )
+                     writemsg("!!! %s\n" % (e,), noiselevel=-1)
+         except EnvironmentError as e:
+             # Silently allow ENOENT since files under
+             # /var/cache/ are allowed to disappear.
+             if e.errno != errno.ENOENT:
+                 writemsg(
+                     _("!!! Unable to read COUNTER file: '%s'\n") % self._counter_path,
+                     noiselevel=-1,
+                 )
+                 writemsg("!!! %s\n" % str(e), noiselevel=-1)
+             del e
+ 
+         if self._cached_counter == counter:
+             max_counter = counter
+         else:
+             # We must ensure that we return a counter
+             # value that is at least as large as the
+             # highest one from the installed packages,
+             # since having a corrupt value that is too low
+             # can trigger incorrect AUTOCLEAN behavior due
+             # to newly installed packages having lower
+             # COUNTERs than the previous version in the
+             # same slot.
+             max_counter = counter
+             for cpv in self.cpv_all():
+                 try:
+                     pkg_counter = int(self.aux_get(cpv, ["COUNTER"])[0])
+                 except (KeyError, OverflowError, ValueError):
+                     continue
+                 if pkg_counter > max_counter:
+                     max_counter = pkg_counter
+ 
+         return max_counter + 1
+ 
+     def counter_tick_core(self, myroot=None, incrementing=1, mycpv=None):
+         """
+         This method will grab the next COUNTER value and record it back
+         to the global file. Note that every package install must have
+         a unique counter, since a slotmove update can move two packages
+         into the same SLOT and in that case it's important that both
+         packages have different COUNTER metadata.
+ 
+         @param myroot: ignored, self._eroot is used instead
+         @param mycpv: ignored
+         @rtype: int
+         @return: new counter value
+         """
+         myroot = None
+         mycpv = None
+         self.lock()
+         try:
+             counter = self.get_counter_tick_core() - 1
+             if incrementing:
+                 # increment counter
+                 counter += 1
+                 # update new global counter file
+                 try:
+                     write_atomic(self._counter_path, str(counter))
+                 except InvalidLocation:
+                     self.settings._init_dirs()
+                     write_atomic(self._counter_path, str(counter))
+             self._cached_counter = counter
+ 
+             # Since we hold a lock, this is a good opportunity
+             # to flush the cache. Note that this will only
+             # flush the cache periodically in the main process
+             # when _aux_cache_threshold is exceeded.
+             self.flush_cache()
+         finally:
+             self.unlock()
+ 
+         return counter
+ 
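A short sketch of how the COUNTER helpers above fit together; note that counter_tick_core() writes the global counter file, so it needs write access to the vdb.

    import portage

    vardb = portage.db[portage.root]["vartree"].dbapi
    # Peek at the next value without recording it ...
    next_counter = vardb.get_counter_tick_core()
    # ... or allocate it: this takes the vdb lock, writes the counter file
    # atomically and flushes the aux cache while the lock is held.
    allocated = vardb.counter_tick_core(incrementing=1)
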
+     def _dblink(self, cpv):
+         category, pf = catsplit(cpv)
+         return dblink(
+             category,
+             pf,
+             settings=self.settings,
+             vartree=self.vartree,
+             treetype="vartree",
+         )
+ 
+     def removeFromContents(self, pkg, paths, relative_paths=True):
+         """
+         @param pkg: cpv for an installed package
+         @type pkg: string
+         @param paths: paths of files to remove from contents
+         @type paths: iterable
+         """
+         if not hasattr(pkg, "getcontents"):
+             pkg = self._dblink(pkg)
+         root = self.settings["ROOT"]
+         root_len = len(root) - 1
+         new_contents = pkg.getcontents().copy()
+         removed = 0
+ 
+         for filename in paths:
+             filename = _unicode_decode(
+                 filename, encoding=_encodings["content"], errors="strict"
+             )
+             filename = normalize_path(filename)
+             if relative_paths:
+                 relative_filename = filename
+             else:
+                 relative_filename = filename[root_len:]
+             contents_key = pkg._match_contents(relative_filename)
+             if contents_key:
+                 # It's possible for two different paths to refer to the same
+                 # contents_key, due to directory symlinks. Therefore, pass a
+                 # default value to pop, in order to avoid a KeyError which
+                 # could otherwise be triggered (see bug #454400).
+                 new_contents.pop(contents_key, None)
+                 removed += 1
+ 
+         if removed:
+             # Also remove corresponding NEEDED lines, so that they do
+             # not corrupt LinkageMap data for preserve-libs.
+             needed_filename = os.path.join(pkg.dbdir, LinkageMap._needed_aux_key)
+             new_needed = None
+             try:
+                 with io.open(
+                     _unicode_encode(
+                         needed_filename, encoding=_encodings["fs"], errors="strict"
+                     ),
+                     mode="r",
+                     encoding=_encodings["repo.content"],
+                     errors="replace",
+                 ) as f:
+                     needed_lines = f.readlines()
+             except IOError as e:
+                 if e.errno not in (errno.ENOENT, errno.ESTALE):
+                     raise
+             else:
+                 new_needed = []
+                 for l in needed_lines:
+                     l = l.rstrip("\n")
+                     if not l:
+                         continue
+                     try:
+                         entry = NeededEntry.parse(needed_filename, l)
+                     except InvalidData as e:
+                         writemsg_level(
+                             "\n%s\n\n" % (e,), level=logging.ERROR, noiselevel=-1
+                         )
+                         continue
+ 
+                     filename = os.path.join(root, entry.filename.lstrip(os.sep))
+                     if filename in new_contents:
+                         new_needed.append(entry)
+ 
+             self.writeContentsToContentsFile(pkg, new_contents, new_needed=new_needed)
+ 
+     def writeContentsToContentsFile(self, pkg, new_contents, new_needed=None):
+         """
+         @param pkg: package to write contents file for
+         @type pkg: dblink
+         @param new_contents: contents to write to CONTENTS file
+         @type new_contents: contents dictionary of the form
+                                 {u'/path/to/file' : (contents_attribute 1, ...), ...}
+         @param new_needed: new NEEDED entries
+         @type new_needed: list of NeededEntry
+         """
+         root = self.settings["ROOT"]
+         self._bump_mtime(pkg.mycpv)
+         if new_needed is not None:
+             f = atomic_ofstream(os.path.join(pkg.dbdir, LinkageMap._needed_aux_key))
+             for entry in new_needed:
+                 f.write(str(entry))
+             f.close()
+         f = atomic_ofstream(os.path.join(pkg.dbdir, "CONTENTS"))
+         write_contents(new_contents, root, f)
+         f.close()
+         self._bump_mtime(pkg.mycpv)
+         pkg._clear_contents_cache()
+ 
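A hedged example of the contents-rewriting path above; the cpv and path are invented, and the call rewrites CONTENTS (pruning matching NEEDED entries) on disk.

    import portage

    vardb = portage.db[portage.root]["vartree"].dbapi
    vardb.removeFromContents(
        "app-misc/example-1.0",
        ["/usr/share/doc/example-1.0/README"],
    )
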
+     class _owners_cache:
+         """
+         This class maintains a hash table that serves to index package
+         contents by mapping the basename of a file to a list of possible
+         packages that own it. This is used to optimize owner lookups
+         by narrowing the search down to a smaller number of packages.
+         """
+ 
+         _new_hash = md5
+         _hash_bits = 16
+         _hex_chars = _hash_bits // 4
+ 
+         def __init__(self, vardb):
+             self._vardb = vardb
+ 
+         def add(self, cpv):
+             eroot_len = len(self._vardb._eroot)
+             pkg_hash = self._hash_pkg(cpv)
+             db = self._vardb._dblink(cpv)
+             if not db.getcontents():
+                 # Empty path is a code used to represent empty contents.
+                 self._add_path("", pkg_hash)
+ 
+             for x in db._contents.keys():
+                 self._add_path(x[eroot_len:], pkg_hash)
+ 
+             self._vardb._aux_cache["modified"].add(cpv)
+ 
+         def _add_path(self, path, pkg_hash):
+             """
+             Empty path is a code that represents empty contents.
+             """
+             if path:
+                 name = os.path.basename(path.rstrip(os.path.sep))
+                 if not name:
+                     return
+             else:
+                 name = path
+             name_hash = self._hash_str(name)
+             base_names = self._vardb._aux_cache["owners"]["base_names"]
+             pkgs = base_names.get(name_hash)
+             if pkgs is None:
+                 pkgs = {}
+                 base_names[name_hash] = pkgs
+             pkgs[pkg_hash] = None
+ 
+         def _hash_str(self, s):
+             h = self._new_hash()
+             # Always use a constant utf_8 encoding here, since
+             # the "default" encoding can change.
+             h.update(
+                 _unicode_encode(
+                     s, encoding=_encodings["repo.content"], errors="backslashreplace"
+                 )
+             )
+             h = h.hexdigest()
+             h = h[-self._hex_chars :]
+             h = int(h, 16)
+             return h
+ 
+         def _hash_pkg(self, cpv):
+             counter, mtime = self._vardb.aux_get(cpv, ["COUNTER", "_mtime_"])
+             try:
+                 counter = int(counter)
+             except ValueError:
+                 counter = 0
+             return (str(cpv), counter, mtime)
+ 
+     class _owners_db:
+         def __init__(self, vardb):
+             self._vardb = vardb
+ 
+         def populate(self):
+             self._populate()
+ 
+         def _populate(self):
+             owners_cache = vardbapi._owners_cache(self._vardb)
+             cached_hashes = set()
+             base_names = self._vardb._aux_cache["owners"]["base_names"]
+ 
+             # Take inventory of all cached package hashes.
+             for name, hash_values in list(base_names.items()):
+                 if not isinstance(hash_values, dict):
+                     del base_names[name]
+                     continue
+                 cached_hashes.update(hash_values)
+ 
+             # Create sets of valid package hashes and uncached packages.
+             uncached_pkgs = set()
+             hash_pkg = owners_cache._hash_pkg
+             valid_pkg_hashes = set()
+             for cpv in self._vardb.cpv_all():
+                 hash_value = hash_pkg(cpv)
+                 valid_pkg_hashes.add(hash_value)
+                 if hash_value not in cached_hashes:
+                     uncached_pkgs.add(cpv)
+ 
+             # Cache any missing packages.
+             for cpv in uncached_pkgs:
+                 owners_cache.add(cpv)
+ 
+             # Delete any stale cache.
+             stale_hashes = cached_hashes.difference(valid_pkg_hashes)
+             if stale_hashes:
+                 for base_name_hash, bucket in list(base_names.items()):
+                     for hash_value in stale_hashes.intersection(bucket):
+                         del bucket[hash_value]
+                     if not bucket:
+                         del base_names[base_name_hash]
+ 
+             return owners_cache
+ 
+         def get_owners(self, path_iter):
+             """
+             @return: the owners as a dblink -> set(files) mapping.
+             """
+             owners = {}
+             for owner, f in self.iter_owners(path_iter):
+                 owned_files = owners.get(owner)
+                 if owned_files is None:
+                     owned_files = set()
+                     owners[owner] = owned_files
+                 owned_files.add(f)
+             return owners
+ 
+         def getFileOwnerMap(self, path_iter):
+             owners = self.get_owners(path_iter)
+             file_owners = {}
+             for pkg_dblink, files in owners.items():
+                 for f in files:
+                     owner_set = file_owners.get(f)
+                     if owner_set is None:
+                         owner_set = set()
+                         file_owners[f] = owner_set
+                     owner_set.add(pkg_dblink)
+             return file_owners
+ 
+         def iter_owners(self, path_iter):
+             """
+             Iterate over tuples of (dblink, path). To avoid holding too
+             many resources for too long, resources are only allocated for
+             the duration of a given iter_owners() call. Therefore, to
+             maximize reuse of resources when searching for multiple files,
+             it's best to search for them all in a single call.
+             """
+ 
+             if not isinstance(path_iter, list):
+                 path_iter = list(path_iter)
+             owners_cache = self._populate()
+             vardb = self._vardb
+             root = vardb._eroot
+             hash_pkg = owners_cache._hash_pkg
+             hash_str = owners_cache._hash_str
+             base_names = self._vardb._aux_cache["owners"]["base_names"]
+             case_insensitive = "case-insensitive-fs" in vardb.settings.features
+ 
+             dblink_cache = {}
+ 
+             def dblink(cpv):
+                 x = dblink_cache.get(cpv)
+                 if x is None:
+                     if len(dblink_cache) > 20:
+                         # Ensure that we don't run out of memory.
+                         raise StopIteration()
+                     x = self._vardb._dblink(cpv)
+                     dblink_cache[cpv] = x
+                 return x
+ 
+             while path_iter:
+ 
+                 path = path_iter.pop()
+                 if case_insensitive:
+                     path = path.lower()
+                 is_basename = os.sep != path[:1]
+                 if is_basename:
+                     name = path
+                 else:
+                     name = os.path.basename(path.rstrip(os.path.sep))
+ 
+                 if not name:
+                     continue
+ 
+                 name_hash = hash_str(name)
+                 pkgs = base_names.get(name_hash)
+                 owners = []
+                 if pkgs is not None:
+                     try:
+                         for hash_value in pkgs:
+                             if (
+                                 not isinstance(hash_value, tuple)
+                                 or len(hash_value) != 3
+                             ):
+                                 continue
+                             cpv, counter, mtime = hash_value
+                             if not isinstance(cpv, str):
+                                 continue
+                             try:
+                                 current_hash = hash_pkg(cpv)
+                             except KeyError:
+                                 continue
+ 
+                             if current_hash != hash_value:
+                                 continue
+ 
+                             if is_basename:
+                                 for p in dblink(cpv)._contents.keys():
+                                     if os.path.basename(p) == name:
+                                         owners.append(
+                                             (
+                                                 cpv,
+                                                 dblink(cpv)._contents.unmap_key(p)[
+                                                     len(root) :
+                                                 ],
+                                             )
+                                         )
+                             else:
+                                 key = dblink(cpv)._match_contents(path)
+                                 if key is not False:
+                                     owners.append((cpv, key[len(root) :]))
+ 
+                     except StopIteration:
+                         path_iter.append(path)
+                         del owners[:]
+                         dblink_cache.clear()
+                         gc.collect()
+                         for x in self._iter_owners_low_mem(path_iter):
+                             yield x
+                         return
+                     else:
+                         for cpv, p in owners:
+                             yield (dblink(cpv), p)
+ 
+         def _iter_owners_low_mem(self, path_list):
+             """
+             This implementation will make a short-lived dblink instance (and
+             parse CONTENTS) for every single installed package. This is
+             slower but uses less memory than the method which uses the
+             basename cache.
+             """
+ 
+             if not path_list:
+                 return
+ 
+             case_insensitive = "case-insensitive-fs" in self._vardb.settings.features
+             path_info_list = []
+             for path in path_list:
+                 if case_insensitive:
+                     path = path.lower()
+                 is_basename = os.sep != path[:1]
+                 if is_basename:
+                     name = path
+                 else:
+                     name = os.path.basename(path.rstrip(os.path.sep))
+                 path_info_list.append((path, name, is_basename))
+ 
+             # Do work via the global event loop, so that it can be used
+             # for indication of progress during the search (bug #461412).
+             event_loop = asyncio._safe_loop()
+             root = self._vardb._eroot
+ 
+             def search_pkg(cpv, search_future):
+                 dblnk = self._vardb._dblink(cpv)
+                 results = []
+                 for path, name, is_basename in path_info_list:
+                     if is_basename:
+                         for p in dblnk._contents.keys():
+                             if os.path.basename(p) == name:
+                                 results.append(
+                                     (dblnk, dblnk._contents.unmap_key(p)[len(root) :])
+                                 )
+                     else:
+                         key = dblnk._match_contents(path)
+                         if key is not False:
+                             results.append((dblnk, key[len(root) :]))
+                 search_future.set_result(results)
+ 
+             for cpv in self._vardb.cpv_all():
+                 search_future = event_loop.create_future()
+                 event_loop.call_soon(search_pkg, cpv, search_future)
+                 event_loop.run_until_complete(search_future)
+                 for result in search_future.result():
+                     yield result
+ 
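To show the owners index above in use, a small hedged sketch; the path is just an example, and get_owners() populates the basename index on demand.

    import portage

    vardb = portage.db[portage.root]["vartree"].dbapi
    owners = vardb._owners.get_owners(["/usr/bin/python3"])
    for pkg_dblink, files in owners.items():
        print(pkg_dblink.mycpv, sorted(files))
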
  
  class vartree:
- 	"this tree will scan a var/db/pkg database located at root (passed to init)"
- 	def __init__(self, root=None, virtual=DeprecationWarning, categories=None,
- 		settings=None):
- 
- 		if settings is None:
- 			settings = portage.settings
- 
- 		if root is not None and root != settings['ROOT']:
- 			warnings.warn("The 'root' parameter of the "
- 				"portage.dbapi.vartree.vartree"
- 				" constructor is now unused. Use "
- 				"settings['ROOT'] instead.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		if virtual is not DeprecationWarning:
- 			warnings.warn("The 'virtual' parameter of the "
- 				"portage.dbapi.vartree.vartree"
- 				" constructor is unused",
- 				DeprecationWarning, stacklevel=2)
- 
- 		self.settings = settings
- 		self.dbapi = vardbapi(settings=settings, vartree=self)
- 		self.populated = 1
- 
- 	@property
- 	def root(self):
- 		warnings.warn("The root attribute of "
- 			"portage.dbapi.vartree.vartree"
- 			" is deprecated. Use "
- 			"settings['ROOT'] instead.",
- 			DeprecationWarning, stacklevel=3)
- 		return self.settings['ROOT']
- 
- 	def getpath(self, mykey, filename=None):
- 		return self.dbapi.getpath(mykey, filename=filename)
- 
- 	def zap(self, mycpv):
- 		return
- 
- 	def inject(self, mycpv):
- 		return
- 
- 	def get_provide(self, mycpv):
- 		return []
- 
- 	def get_all_provides(self):
- 		return {}
- 
- 	def dep_bestmatch(self, mydep, use_cache=1):
- 		"compatibility method -- all matches, not just visible ones"
- 		#mymatch=best(match(dep_expand(mydep,self.dbapi),self.dbapi))
- 		mymatch = best(self.dbapi.match(
- 			dep_expand(mydep, mydb=self.dbapi, settings=self.settings),
- 			use_cache=use_cache))
- 		if mymatch is None:
- 			return ""
- 		return mymatch
- 
- 	def dep_match(self, mydep, use_cache=1):
- 		"compatibility method -- we want to see all matches, not just visible ones"
- 		#mymatch = match(mydep,self.dbapi)
- 		mymatch = self.dbapi.match(mydep, use_cache=use_cache)
- 		if mymatch is None:
- 			return []
- 		return mymatch
- 
- 	def exists_specific(self, cpv):
- 		return self.dbapi.cpv_exists(cpv)
- 
- 	def getallcpv(self):
- 		"""temporary function, probably to be renamed --- Gets a list of all
- 		category/package-versions installed on the system."""
- 		return self.dbapi.cpv_all()
- 
- 	def getallnodes(self):
- 		"""new behavior: these are all *unmasked* nodes.  There may or may not be available
- 		masked package for nodes in this nodes list."""
- 		return self.dbapi.cp_all()
- 
- 	def getebuildpath(self, fullpackage):
- 		cat, package = catsplit(fullpackage)
- 		return self.getpath(fullpackage, filename=package+".ebuild")
- 
- 	def getslot(self, mycatpkg):
- 		"Get a slot for a catpkg; assume it exists."
- 		try:
- 			return self.dbapi._pkg_str(mycatpkg, None).slot
- 		except KeyError:
- 			return ""
- 
- 	def populate(self):
- 		self.populated=1
+     "this tree will scan a var/db/pkg database located at root (passed to init)"
+ 
+     def __init__(
+         self, root=None, virtual=DeprecationWarning, categories=None, settings=None
+     ):
+ 
+         if settings is None:
+             settings = portage.settings
+ 
+         if root is not None and root != settings["ROOT"]:
+             warnings.warn(
+                 "The 'root' parameter of the "
+                 "portage.dbapi.vartree.vartree"
+                 " constructor is now unused. Use "
+                 "settings['ROOT'] instead.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         if virtual is not DeprecationWarning:
+             warnings.warn(
+                 "The 'virtual' parameter of the "
+                 "portage.dbapi.vartree.vartree"
+                 " constructor is unused",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         self.settings = settings
+         self.dbapi = vardbapi(settings=settings, vartree=self)
+         self.populated = 1
+ 
+     @property
+     def root(self):
+         warnings.warn(
+             "The root attribute of "
+             "portage.dbapi.vartree.vartree"
+             " is deprecated. Use "
+             "settings['ROOT'] instead.",
+             DeprecationWarning,
+             stacklevel=3,
+         )
+         return self.settings["ROOT"]
+ 
+     def getpath(self, mykey, filename=None):
+         return self.dbapi.getpath(mykey, filename=filename)
+ 
+     def zap(self, mycpv):
+         return
+ 
+     def inject(self, mycpv):
+         return
+ 
+     def get_provide(self, mycpv):
+         return []
+ 
+     def get_all_provides(self):
+         return {}
+ 
+     def dep_bestmatch(self, mydep, use_cache=1):
+         "compatibility method -- all matches, not just visible ones"
+         # mymatch=best(match(dep_expand(mydep,self.dbapi),self.dbapi))
+         mymatch = best(
+             self.dbapi.match(
+                 dep_expand(mydep, mydb=self.dbapi, settings=self.settings),
+                 use_cache=use_cache,
+             )
+         )
+         if mymatch is None:
+             return ""
+         return mymatch
+ 
+     def dep_match(self, mydep, use_cache=1):
+         "compatibility method -- we want to see all matches, not just visible ones"
+         # mymatch = match(mydep,self.dbapi)
+         mymatch = self.dbapi.match(mydep, use_cache=use_cache)
+         if mymatch is None:
+             return []
+         return mymatch
+ 
+     def exists_specific(self, cpv):
+         return self.dbapi.cpv_exists(cpv)
+ 
+     def getallcpv(self):
+         """temporary function, probably to be renamed --- Gets a list of all
+         category/package-versions installed on the system."""
+         return self.dbapi.cpv_all()
+ 
+     def getallnodes(self):
+         """new behavior: these are all *unmasked* nodes.  There may or may not be
+         masked packages available for the nodes in this list."""
+         return self.dbapi.cp_all()
+ 
+     def getebuildpath(self, fullpackage):
+         cat, package = catsplit(fullpackage)
+         return self.getpath(fullpackage, filename=package + ".ebuild")
+ 
+     def getslot(self, mycatpkg):
+         "Get a slot for a catpkg; assume it exists."
+         try:
+             return self.dbapi._pkg_str(mycatpkg, None).slot
+         except KeyError:
+             return ""
+ 
+     def populate(self):
+         self.populated = 1
+ 
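The vartree class above is mostly a thin compatibility wrapper around its dbapi attribute; a hedged sketch with an invented atom.

    import portage

    vtree = portage.db[portage.root]["vartree"]
    best = vtree.dep_bestmatch("app-misc/example")   # "" when nothing is installed
    matches = vtree.dep_match("app-misc/example")    # all installed matches, visible or not
    slot = vtree.getslot(best) if best else ""
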
  
  class dblink:
- 	"""
- 	This class provides an interface to the installed package database
- 	At present this is implemented as a text backend in /var/db/pkg.
- 	"""
- 
- 	_normalize_needed = re.compile(r'//|^[^/]|./$|(^|/)\.\.?(/|$)')
- 
- 	_contents_re = re.compile(r'^(' + \
- 		r'(?P<dir>(dev|dir|fif) (.+))|' + \
- 		r'(?P<obj>(obj) (.+) (\S+) (\d+))|' + \
- 		r'(?P<sym>(sym) (.+) -> (.+) ((\d+)|(?P<oldsym>(' + \
- 		r'\(\d+, \d+L, \d+L, \d+, \d+, \d+, \d+L, \d+, (\d+), \d+\)))))' + \
- 		r')$'
- 	)
- 
- 	# These files are generated by emerge, so we need to remove
- 	# them when they are the only thing left in a directory.
- 	_infodir_cleanup = frozenset(["dir", "dir.old"])
- 
- 	_ignored_unlink_errnos = (
- 		errno.EBUSY, errno.ENOENT,
- 		errno.ENOTDIR, errno.EISDIR)
- 
- 	_ignored_rmdir_errnos = (
- 		errno.EEXIST, errno.ENOTEMPTY,
- 		errno.EBUSY, errno.ENOENT,
- 		errno.ENOTDIR, errno.EISDIR,
- 		errno.EPERM)
- 
- 	def __init__(self, cat, pkg, myroot=None, settings=None, treetype=None,
- 		vartree=None, blockers=None, scheduler=None, pipe=None):
- 		"""
- 		Creates a DBlink object for a given CPV.
- 		The given CPV may not be present in the database already.
- 
- 		@param cat: Category
- 		@type cat: String
- 		@param pkg: Package (PV)
- 		@type pkg: String
- 		@param myroot: ignored, settings['ROOT'] is used instead
- 		@type myroot: String (Path)
- 		@param settings: Typically portage.settings
- 		@type settings: portage.config
- 		@param treetype: one of ['porttree','bintree','vartree']
- 		@type treetype: String
- 		@param vartree: an instance of vartree corresponding to myroot.
- 		@type vartree: vartree
- 		"""
- 
- 		if settings is None:
- 			raise TypeError("settings argument is required")
- 
- 		mysettings = settings
- 		self._eroot = mysettings['EROOT']
- 		self.cat = cat
- 		self.pkg = pkg
- 		self.mycpv = self.cat + "/" + self.pkg
- 		if self.mycpv == settings.mycpv and \
- 			isinstance(settings.mycpv, _pkg_str):
- 			self.mycpv = settings.mycpv
- 		else:
- 			self.mycpv = _pkg_str(self.mycpv)
- 		self.mysplit = list(self.mycpv.cpv_split[1:])
- 		self.mysplit[0] = self.mycpv.cp
- 		self.treetype = treetype
- 		if vartree is None:
- 			vartree = portage.db[self._eroot]["vartree"]
- 		self.vartree = vartree
- 		self._blockers = blockers
- 		self._scheduler = scheduler
- 		self.dbroot = normalize_path(os.path.join(self._eroot, VDB_PATH))
- 		self.dbcatdir = self.dbroot+"/"+cat
- 		self.dbpkgdir = self.dbcatdir+"/"+pkg
- 		self.dbtmpdir = self.dbcatdir+"/"+MERGING_IDENTIFIER+pkg
- 		self.dbdir = self.dbpkgdir
- 		self.settings = mysettings
- 		self._verbose = self.settings.get("PORTAGE_VERBOSE") == "1"
- 
- 		self.myroot = self.settings['ROOT']
- 		self._installed_instance = None
- 		self.contentscache = None
- 		self._contents_inodes = None
- 		self._contents_basenames = None
- 		self._linkmap_broken = False
- 		self._device_path_map = {}
- 		self._hardlink_merge_map = {}
- 		self._hash_key = (self._eroot, self.mycpv)
- 		self._protect_obj = None
- 		self._pipe = pipe
- 		self._postinst_failure = False
- 
- 		# When necessary, this attribute is modified for
- 		# compliance with RESTRICT=preserve-libs.
- 		self._preserve_libs = "preserve-libs" in mysettings.features
- 		self._contents = ContentsCaseSensitivityManager(self)
- 		self._slot_locks = []
- 
- 	def __hash__(self):
- 		return hash(self._hash_key)
- 
- 	def __eq__(self, other):
- 		return isinstance(other, dblink) and \
- 			self._hash_key == other._hash_key
- 
- 	def _get_protect_obj(self):
- 
- 		if self._protect_obj is None:
- 			self._protect_obj = ConfigProtect(self._eroot,
- 			portage.util.shlex_split(
- 				self.settings.get("CONFIG_PROTECT", "")),
- 			portage.util.shlex_split(
- 				self.settings.get("CONFIG_PROTECT_MASK", "")),
- 			case_insensitive=("case-insensitive-fs"
- 					in self.settings.features))
- 
- 		return self._protect_obj
- 
- 	def isprotected(self, obj):
- 		return self._get_protect_obj().isprotected(obj)
- 
- 	def updateprotect(self):
- 		self._get_protect_obj().updateprotect()
- 
- 	def lockdb(self):
- 		self.vartree.dbapi.lock()
- 
- 	def unlockdb(self):
- 		self.vartree.dbapi.unlock()
- 
- 	def _slot_locked(f):
- 		"""
- 		A decorator function which, when parallel-install is enabled,
- 		acquires and releases slot locks for the current package and
- 		blocked packages. This is required in order to account for
- 		interactions with blocked packages (involving resolution of
- 		file collisions).
- 		"""
- 		def wrapper(self, *args, **kwargs):
- 			if "parallel-install" in self.settings.features:
- 				self._acquire_slot_locks(
- 					kwargs.get("mydbapi", self.vartree.dbapi))
- 			try:
- 				return f(self, *args, **kwargs)
- 			finally:
- 				self._release_slot_locks()
- 		return wrapper
- 
- 	def _acquire_slot_locks(self, db):
- 		"""
- 		Acquire slot locks for the current package and blocked packages.
- 		"""
- 
- 		slot_atoms = []
- 
- 		try:
- 			slot = self.mycpv.slot
- 		except AttributeError:
- 			slot, = db.aux_get(self.mycpv, ["SLOT"])
- 			slot = slot.partition("/")[0]
- 
- 		slot_atoms.append(portage.dep.Atom(
- 			"%s:%s" % (self.mycpv.cp, slot)))
- 
- 		for blocker in self._blockers or []:
- 			slot_atoms.append(blocker.slot_atom)
- 
- 		# Sort atoms so that locks are acquired in a predictable
- 		# order, preventing deadlocks with competitors that may
- 		# be trying to acquire overlapping locks.
- 		slot_atoms.sort()
- 		for slot_atom in slot_atoms:
- 			self.vartree.dbapi._slot_lock(slot_atom)
- 			self._slot_locks.append(slot_atom)
- 
- 	def _release_slot_locks(self):
- 		"""
- 		Release all slot locks.
- 		"""
- 		while self._slot_locks:
- 			self.vartree.dbapi._slot_unlock(self._slot_locks.pop())
- 
- 	def getpath(self):
- 		"return path to location of db information (for >>> informational display)"
- 		return self.dbdir
- 
- 	def exists(self):
- 		"does the db entry exist?  boolean."
- 		return os.path.exists(self.dbdir)
- 
- 	def delete(self):
- 		"""
- 		Remove this entry from the database
- 		"""
- 		try:
- 			os.lstat(self.dbdir)
- 		except OSError as e:
- 			if e.errno not in (errno.ENOENT, errno.ENOTDIR, errno.ESTALE):
- 				raise
- 			return
- 
- 		# Check validity of self.dbdir before attempting to remove it.
- 		if not self.dbdir.startswith(self.dbroot):
- 			writemsg(_("portage.dblink.delete(): invalid dbdir: %s\n") % \
- 				self.dbdir, noiselevel=-1)
- 			return
- 
- 		if self.dbdir is self.dbpkgdir:
- 			counter, = self.vartree.dbapi.aux_get(
- 				self.mycpv, ["COUNTER"])
- 			self.vartree.dbapi._cache_delta.recordEvent(
- 				"remove", self.mycpv,
- 				self.settings["SLOT"].split("/")[0], counter)
- 
- 		shutil.rmtree(self.dbdir)
- 		# If empty, remove parent category directory.
- 		try:
- 			os.rmdir(os.path.dirname(self.dbdir))
- 		except OSError:
- 			pass
- 		self.vartree.dbapi._remove(self)
- 
- 		# Use self.dbroot since we need an existing path for syncfs.
- 		try:
- 			self._merged_path(self.dbroot, os.lstat(self.dbroot))
- 		except OSError:
- 			pass
- 
- 		self._post_merge_sync()
- 
- 	def clearcontents(self):
- 		"""
- 		For a given db entry (self), erase the CONTENTS values.
- 		"""
- 		self.lockdb()
- 		try:
- 			if os.path.exists(self.dbdir+"/CONTENTS"):
- 				os.unlink(self.dbdir+"/CONTENTS")
- 		finally:
- 			self.unlockdb()
- 
- 	def _clear_contents_cache(self):
- 		self.contentscache = None
- 		self._contents_inodes = None
- 		self._contents_basenames = None
- 		self._contents.clear_cache()
- 
- 	def getcontents(self):
- 		"""
- 		Get the installed files of a given package (aka what that package installed)
- 		"""
- 		if self.contentscache is not None:
- 			return self.contentscache
- 		contents_file = os.path.join(self.dbdir, "CONTENTS")
- 		pkgfiles = {}
- 		try:
- 			with io.open(_unicode_encode(contents_file,
- 				encoding=_encodings['fs'], errors='strict'),
- 				mode='r', encoding=_encodings['repo.content'],
- 				errors='replace') as f:
- 				mylines = f.readlines()
- 		except EnvironmentError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 			del e
- 			self.contentscache = pkgfiles
- 			return pkgfiles
- 
- 		null_byte = "\0"
- 		normalize_needed = self._normalize_needed
- 		contents_re = self._contents_re
- 		obj_index = contents_re.groupindex['obj']
- 		dir_index = contents_re.groupindex['dir']
- 		sym_index = contents_re.groupindex['sym']
- 		# The old symlink format may exist on systems that have packages
- 		# which were installed many years ago (see bug #351814).
- 		oldsym_index = contents_re.groupindex['oldsym']
- 		# CONTENTS files already contain EPREFIX
- 		myroot = self.settings['ROOT']
- 		if myroot == os.path.sep:
- 			myroot = None
- 		# used to generate parent dir entries
- 		dir_entry = ("dir",)
- 		eroot_split_len = len(self.settings["EROOT"].split(os.sep)) - 1
- 		pos = 0
- 		errors = []
- 		for pos, line in enumerate(mylines):
- 			if null_byte in line:
- 				# Null bytes are a common indication of corruption.
- 				errors.append((pos + 1, _("Null byte found in CONTENTS entry")))
- 				continue
- 			line = line.rstrip("\n")
- 			m = contents_re.match(line)
- 			if m is None:
- 				errors.append((pos + 1, _("Unrecognized CONTENTS entry")))
- 				continue
- 
- 			if m.group(obj_index) is not None:
- 				base = obj_index
- 				#format: type, mtime, md5sum
- 				data = (m.group(base+1), m.group(base+4), m.group(base+3))
- 			elif m.group(dir_index) is not None:
- 				base = dir_index
- 				#format: type
- 				data = (m.group(base+1),)
- 			elif m.group(sym_index) is not None:
- 				base = sym_index
- 				if m.group(oldsym_index) is None:
- 					mtime = m.group(base+5)
- 				else:
- 					mtime = m.group(base+8)
- 				#format: type, mtime, dest
- 				data = (m.group(base+1), mtime, m.group(base+3))
- 			else:
- 				# This won't happen as long the regular expression
- 				# is written to only match valid entries.
- 				raise AssertionError(_("required group not found " + \
- 					"in CONTENTS entry: '%s'") % line)
- 
- 			path = m.group(base+2)
- 			if normalize_needed.search(path) is not None:
- 				path = normalize_path(path)
- 				if not path.startswith(os.path.sep):
- 					path = os.path.sep + path
- 
- 			if myroot is not None:
- 				path = os.path.join(myroot, path.lstrip(os.path.sep))
- 
- 			# Implicitly add parent directories, since we can't necessarily
- 			# assume that they are explicitly listed in CONTENTS, and it's
- 			# useful for callers if they can rely on parent directory entries
- 			# being generated here (crucial for things like dblink.isowner()).
- 			path_split = path.split(os.sep)
- 			path_split.pop()
- 			while len(path_split) > eroot_split_len:
- 				parent = os.sep.join(path_split)
- 				if parent in pkgfiles:
- 					break
- 				pkgfiles[parent] = dir_entry
- 				path_split.pop()
- 
- 			pkgfiles[path] = data
- 
- 		if errors:
- 			writemsg(_("!!! Parse error in '%s'\n") % contents_file, noiselevel=-1)
- 			for pos, e in errors:
- 				writemsg(_("!!!   line %d: %s\n") % (pos, e), noiselevel=-1)
- 		self.contentscache = pkgfiles
- 		return pkgfiles
- 
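For reference, the CONTENTS line formats matched by _contents_re above, and the shape of the dict that getcontents() builds from them; paths and values are invented, and parent directories are added implicitly as ("dir",) entries.

    # Typical CONTENTS lines:
    #   dir /usr/share/example
    #   obj /usr/bin/example d41d8cd98f00b204e9800998ecf8427e 1642156321
    #   sym /usr/bin/ex -> example 1642156321
    contents = {
        "/usr/share/example": ("dir",),
        "/usr/bin/example": ("obj", "1642156321", "d41d8cd98f00b204e9800998ecf8427e"),
        "/usr/bin/ex": ("sym", "1642156321", "example"),
    }
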
- 	def quickpkg(self, output_file, include_config=False, include_unmodified_config=False):
- 		"""
- 		Create a tar file appropriate for use by quickpkg.
- 
- 		@param output_file: Write binary tar stream to file.
- 		@type output_file: file
- 		@param include_config: Include all files protected by CONFIG_PROTECT
- 			(as a security precaution, default is False).
- 		@type include_config: bool
- 		@param include_unmodified_config: Include files protected by CONFIG_PROTECT
- 			that have not been modified since installation (as a security precaution,
- 			default is False).
- 		@type include_unmodified_config: bool
- 		@rtype: list
- 		@return: Paths of protected configuration files which have been omitted.
- 		"""
- 		settings = self.settings
- 		cpv = self.mycpv
- 		xattrs = 'xattr' in settings.features
- 		contents = self.getcontents()
- 		excluded_config_files = []
- 		protect = None
- 
- 		if not include_config:
- 			confprot = ConfigProtect(settings['EROOT'],
- 				portage.util.shlex_split(settings.get('CONFIG_PROTECT', '')),
- 				portage.util.shlex_split(settings.get('CONFIG_PROTECT_MASK', '')),
- 				case_insensitive=('case-insensitive-fs' in settings.features))
- 
- 			def protect(filename):
- 				if not confprot.isprotected(filename):
- 					return False
- 				if include_unmodified_config:
- 					file_data = contents[filename]
- 					if file_data[0] == 'obj':
- 						orig_md5 = file_data[2].lower()
- 						cur_md5 = perform_md5(filename, calc_prelink=1)
- 						if orig_md5 == cur_md5:
- 							return False
- 				excluded_config_files.append(filename)
- 				return True
- 
- 		# The tarfile module will write pax headers holding the
- 		# xattrs only if PAX_FORMAT is specified here.
- 		with tarfile.open(fileobj=output_file, mode='w|',
- 			format=tarfile.PAX_FORMAT if xattrs else tarfile.DEFAULT_FORMAT) as tar:
- 			tar_contents(contents, settings['ROOT'], tar, protect=protect, xattrs=xattrs)
- 
- 		return excluded_config_files
- 
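A hedged sketch of driving quickpkg() directly; the output path and cpv are invented. unpack_contents() earlier in this file feeds the same call through a pipe to tar.

    import portage

    vardb = portage.db[portage.root]["vartree"].dbapi
    with open("/tmp/example-quickpkg.tar", "wb") as out:
        excluded = vardb._dblink("app-misc/example-1.0").quickpkg(out)
    # Protected config files that were skipped (empty with include_config=True).
    print(excluded)
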
- 	def _prune_plib_registry(self, unmerge=False,
- 		needed=None, preserve_paths=None):
- 		# remove preserved libraries that don't have any consumers left
- 		if not (self._linkmap_broken or
- 			self.vartree.dbapi._linkmap is None or
- 			self.vartree.dbapi._plib_registry is None):
- 			self.vartree.dbapi._fs_lock()
- 			plib_registry = self.vartree.dbapi._plib_registry
- 			plib_registry.lock()
- 			try:
- 				plib_registry.load()
- 
- 				unmerge_with_replacement = \
- 					unmerge and preserve_paths is not None
- 				if unmerge_with_replacement:
- 					# If self.mycpv is about to be unmerged and we
- 					# have a replacement package, we want to exclude
- 					# the irrelevant NEEDED data that belongs to
- 					# files which are being unmerged now.
- 					exclude_pkgs = (self.mycpv,)
- 				else:
- 					exclude_pkgs = None
- 
- 				self._linkmap_rebuild(exclude_pkgs=exclude_pkgs,
- 					include_file=needed, preserve_paths=preserve_paths)
- 
- 				if unmerge:
- 					unmerge_preserve = None
- 					if not unmerge_with_replacement:
- 						unmerge_preserve = \
- 							self._find_libs_to_preserve(unmerge=True)
- 					counter = self.vartree.dbapi.cpv_counter(self.mycpv)
- 					try:
- 						slot = self.mycpv.slot
- 					except AttributeError:
- 						slot = _pkg_str(self.mycpv, slot=self.settings["SLOT"]).slot
- 					plib_registry.unregister(self.mycpv, slot, counter)
- 					if unmerge_preserve:
- 						for path in sorted(unmerge_preserve):
- 							contents_key = self._match_contents(path)
- 							if not contents_key:
- 								continue
- 							obj_type = self.getcontents()[contents_key][0]
- 							self._display_merge(_(">>> needed   %s %s\n") % \
- 								(obj_type, contents_key), noiselevel=-1)
- 						plib_registry.register(self.mycpv,
- 							slot, counter, unmerge_preserve)
- 						# Remove the preserved files from our contents
- 						# so that they won't be unmerged.
- 						self.vartree.dbapi.removeFromContents(self,
- 							unmerge_preserve)
- 
- 				unmerge_no_replacement = \
- 					unmerge and not unmerge_with_replacement
- 				cpv_lib_map = self._find_unused_preserved_libs(
- 					unmerge_no_replacement)
- 				if cpv_lib_map:
- 					self._remove_preserved_libs(cpv_lib_map)
- 					self.vartree.dbapi.lock()
- 					try:
- 						for cpv, removed in cpv_lib_map.items():
- 							if not self.vartree.dbapi.cpv_exists(cpv):
- 								continue
- 							self.vartree.dbapi.removeFromContents(cpv, removed)
- 					finally:
- 						self.vartree.dbapi.unlock()
- 
- 				plib_registry.store()
- 			finally:
- 				plib_registry.unlock()
- 				self.vartree.dbapi._fs_unlock()
- 
- 	@_slot_locked
- 	def unmerge(self, pkgfiles=None, trimworld=None, cleanup=True,
- 		ldpath_mtimes=None, others_in_slot=None, needed=None,
- 		preserve_paths=None):
- 		"""
- 		Calls prerm,
- 		unmerges a given package (CPV),
- 		calls postrm,
- 		calls cleanrm,
- 		and calls env_update.
- 
- 		@param pkgfiles: files to unmerge (generally self.getcontents() )
- 		@type pkgfiles: Dictionary
- 		@param trimworld: Unused
- 		@type trimworld: Boolean
- 		@param cleanup: cleanup to pass to doebuild (see doebuild)
- 		@type cleanup: Boolean
- 		@param ldpath_mtimes: mtimes to pass to env_update (see env_update)
- 		@type ldpath_mtimes: Dictionary
- 		@param others_in_slot: all dblink instances in this slot, excluding self
- 		@type others_in_slot: list
- 		@param needed: Filename containing libraries needed after unmerge.
- 		@type needed: String
- 		@param preserve_paths: Libraries preserved by a package instance that
- 			is currently being merged. They need to be explicitly passed to the
- 			LinkageMap, since they are not registered in the
- 			PreservedLibsRegistry yet.
- 		@type preserve_paths: set
- 		@rtype: Integer
- 		@return:
- 		1. os.EX_OK if everything went well.
- 		2. return code of the failed phase (for prerm, postrm, cleanrm)
- 		"""
- 
- 		if trimworld is not None:
- 			warnings.warn("The trimworld parameter of the " + \
- 				"portage.dbapi.vartree.dblink.unmerge()" + \
- 				" method is now unused.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		background = False
- 		log_path = self.settings.get("PORTAGE_LOG_FILE")
- 		if self._scheduler is None:
- 			# We create a scheduler instance and use it to
- 			# log unmerge output separately from merge output.
- 			self._scheduler = SchedulerInterface(asyncio._safe_loop())
- 		if self.settings.get("PORTAGE_BACKGROUND") == "subprocess":
- 			if self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "1":
- 				self.settings["PORTAGE_BACKGROUND"] = "1"
- 				self.settings.backup_changes("PORTAGE_BACKGROUND")
- 				background = True
- 			elif self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "0":
- 				self.settings["PORTAGE_BACKGROUND"] = "0"
- 				self.settings.backup_changes("PORTAGE_BACKGROUND")
- 		elif self.settings.get("PORTAGE_BACKGROUND") == "1":
- 			background = True
- 
- 		self.vartree.dbapi._bump_mtime(self.mycpv)
- 		showMessage = self._display_merge
- 		if self.vartree.dbapi._categories is not None:
- 			self.vartree.dbapi._categories = None
- 
- 		# When others_in_slot is not None, the backup has already been
- 		# handled by the caller.
- 		caller_handles_backup = others_in_slot is not None
- 
- 		# When others_in_slot is supplied, the security check has already been
- 		# done for this slot, so it shouldn't be repeated until the next
- 		# replacement or unmerge operation.
- 		if others_in_slot is None:
- 			slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
- 			slot_matches = self.vartree.dbapi.match(
- 				"%s:%s" % (portage.cpv_getkey(self.mycpv), slot))
- 			others_in_slot = []
- 			for cur_cpv in slot_matches:
- 				if cur_cpv == self.mycpv:
- 					continue
- 				others_in_slot.append(dblink(self.cat, catsplit(cur_cpv)[1],
- 					settings=self.settings, vartree=self.vartree,
- 					treetype="vartree", pipe=self._pipe))
- 
- 			retval = self._security_check([self] + others_in_slot)
- 			if retval:
- 				return retval
- 
- 		contents = self.getcontents()
- 		# Now, don't assume that the name of the ebuild is the same as the
- 		# name of the dir; the package may have been moved.
- 		myebuildpath = os.path.join(self.dbdir, self.pkg + ".ebuild")
- 		failures = 0
- 		ebuild_phase = "prerm"
- 		mystuff = os.listdir(self.dbdir)
- 		for x in mystuff:
- 			if x.endswith(".ebuild"):
- 				if x[:-7] != self.pkg:
- 					# Clean up after vardbapi.move_ent() breakage in
- 					# portage versions before 2.1.2
- 					os.rename(os.path.join(self.dbdir, x), myebuildpath)
- 					write_atomic(os.path.join(self.dbdir, "PF"), self.pkg+"\n")
- 				break
- 
- 		if self.mycpv != self.settings.mycpv or \
- 			"EAPI" not in self.settings.configdict["pkg"]:
- 			# We avoid a redundant setcpv call here when
- 			# the caller has already taken care of it.
- 			self.settings.setcpv(self.mycpv, mydb=self.vartree.dbapi)
- 
- 		eapi_unsupported = False
- 		try:
- 			doebuild_environment(myebuildpath, "prerm",
- 				settings=self.settings, db=self.vartree.dbapi)
- 		except UnsupportedAPIException as e:
- 			eapi_unsupported = e
- 
- 		if self._preserve_libs and "preserve-libs" in \
- 			self.settings["PORTAGE_RESTRICT"].split():
- 			self._preserve_libs = False
- 
- 		builddir_lock = None
- 		scheduler = self._scheduler
- 		retval = os.EX_OK
- 		try:
- 			# Only create builddir_lock if the caller
- 			# has not already acquired the lock.
- 			if "PORTAGE_BUILDDIR_LOCKED" not in self.settings:
- 				builddir_lock = EbuildBuildDir(
- 					scheduler=scheduler,
- 					settings=self.settings)
- 				scheduler.run_until_complete(builddir_lock.async_lock())
- 				prepare_build_dirs(settings=self.settings, cleanup=True)
- 				log_path = self.settings.get("PORTAGE_LOG_FILE")
- 
- 			# Do this before the following _prune_plib_registry call, since
- 			# that removes preserved libraries from our CONTENTS, and we
- 			# may want to backup those libraries first.
- 			if not caller_handles_backup:
- 				retval = self._pre_unmerge_backup(background)
- 				if retval != os.EX_OK:
- 					showMessage(_("!!! FAILED prerm: quickpkg: %s\n") % retval,
- 						level=logging.ERROR, noiselevel=-1)
- 					return retval
- 
- 			self._prune_plib_registry(unmerge=True, needed=needed,
- 				preserve_paths=preserve_paths)
- 
- 			# Log the error after PORTAGE_LOG_FILE is initialized
- 			# by prepare_build_dirs above.
- 			if eapi_unsupported:
- 				# Sometimes this happens due to corruption of the EAPI file.
- 				failures += 1
- 				showMessage(_("!!! FAILED prerm: %s\n") % \
- 					os.path.join(self.dbdir, "EAPI"),
- 					level=logging.ERROR, noiselevel=-1)
- 				showMessage("%s\n" % (eapi_unsupported,),
- 					level=logging.ERROR, noiselevel=-1)
- 			elif os.path.isfile(myebuildpath):
- 				phase = EbuildPhase(background=background,
- 					phase=ebuild_phase, scheduler=scheduler,
- 					settings=self.settings)
- 				phase.start()
- 				retval = phase.wait()
- 
- 				# XXX: Decide how to handle failures here.
- 				if retval != os.EX_OK:
- 					failures += 1
- 					showMessage(_("!!! FAILED prerm: %s\n") % retval,
- 						level=logging.ERROR, noiselevel=-1)
- 
- 			self.vartree.dbapi._fs_lock()
- 			try:
- 				self._unmerge_pkgfiles(pkgfiles, others_in_slot)
- 			finally:
- 				self.vartree.dbapi._fs_unlock()
- 			self._clear_contents_cache()
- 
- 			if not eapi_unsupported and os.path.isfile(myebuildpath):
- 				ebuild_phase = "postrm"
- 				phase = EbuildPhase(background=background,
- 					phase=ebuild_phase, scheduler=scheduler,
- 					settings=self.settings)
- 				phase.start()
- 				retval = phase.wait()
- 
- 				# XXX: Decide how to handle failures here.
- 				if retval != os.EX_OK:
- 					failures += 1
- 					showMessage(_("!!! FAILED postrm: %s\n") % retval,
- 						level=logging.ERROR, noiselevel=-1)
- 
- 		finally:
- 			self.vartree.dbapi._bump_mtime(self.mycpv)
- 			try:
- 					if not eapi_unsupported and os.path.isfile(myebuildpath):
- 						if retval != os.EX_OK:
- 							msg_lines = []
- 							msg = _("The '%(ebuild_phase)s' "
- 							"phase of the '%(cpv)s' package "
- 							"has failed with exit value %(retval)s.") % \
- 							{"ebuild_phase":ebuild_phase, "cpv":self.mycpv,
- 							"retval":retval}
- 							from textwrap import wrap
- 							msg_lines.extend(wrap(msg, 72))
- 							msg_lines.append("")
- 
- 							ebuild_name = os.path.basename(myebuildpath)
- 							ebuild_dir = os.path.dirname(myebuildpath)
- 							msg = _("The problem occurred while executing "
- 							"the ebuild file named '%(ebuild_name)s' "
- 							"located in the '%(ebuild_dir)s' directory. "
- 							"If necessary, manually remove "
- 							"the environment.bz2 file and/or the "
- 							"ebuild file located in that directory.") % \
- 							{"ebuild_name":ebuild_name, "ebuild_dir":ebuild_dir}
- 							msg_lines.extend(wrap(msg, 72))
- 							msg_lines.append("")
- 
- 							msg = _("Removal "
- 							"of the environment.bz2 file is "
- 							"preferred since it may allow the "
- 							"removal phases to execute successfully. "
- 							"The ebuild will be "
- 							"sourced and the eclasses "
- 							"from the current ebuild repository will be used "
- 							"when necessary. Removal of "
- 							"the ebuild file will cause the "
- 							"pkg_prerm() and pkg_postrm() removal "
- 							"phases to be skipped entirely.")
- 							msg_lines.extend(wrap(msg, 72))
- 
- 							self._eerror(ebuild_phase, msg_lines)
- 
- 					self._elog_process(phasefilter=("prerm", "postrm"))
- 
- 					if retval == os.EX_OK:
- 						try:
- 							doebuild_environment(myebuildpath, "cleanrm",
- 								settings=self.settings, db=self.vartree.dbapi)
- 						except UnsupportedAPIException:
- 							pass
- 						phase = EbuildPhase(background=background,
- 							phase="cleanrm", scheduler=scheduler,
- 							settings=self.settings)
- 						phase.start()
- 						retval = phase.wait()
- 			finally:
- 					if builddir_lock is not None:
- 						scheduler.run_until_complete(
- 							builddir_lock.async_unlock())
- 
- 		if log_path is not None:
- 
- 			if not failures and 'unmerge-logs' not in self.settings.features:
- 				try:
- 					os.unlink(log_path)
- 				except OSError:
- 					pass
- 
- 			try:
- 				st = os.stat(log_path)
- 			except OSError:
- 				pass
- 			else:
- 				if st.st_size == 0:
- 					try:
- 						os.unlink(log_path)
- 					except OSError:
- 						pass
- 
- 		if log_path is not None and os.path.exists(log_path):
- 			# Restore this since it gets lost somewhere above and it
- 			# needs to be set for _display_merge() to be able to log.
- 			# Note that the log isn't necessarily supposed to exist:
- 			# if PORTAGE_LOGDIR is unset, it's a temp file, so it gets
- 			# cleaned up above.
- 			self.settings["PORTAGE_LOG_FILE"] = log_path
- 		else:
- 			self.settings.pop("PORTAGE_LOG_FILE", None)
- 
- 		env_update(target_root=self.settings['ROOT'],
- 			prev_mtimes=ldpath_mtimes,
- 			contents=contents, env=self.settings,
- 			writemsg_level=self._display_merge, vardbapi=self.vartree.dbapi)
- 
- 		unmerge_with_replacement = preserve_paths is not None
- 		if not unmerge_with_replacement:
- 			# When there's a replacement package which calls us via treewalk,
- 			# treewalk will automatically call _prune_plib_registry for us.
- 			# Otherwise, we need to call _prune_plib_registry ourselves.
- 			# Don't pass in the "unmerge=True" flag here, since that flag
- 			# is intended to be used _prior_ to unmerge, not after.
- 			self._prune_plib_registry()
- 
- 		return os.EX_OK
- 
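- 	# A rough calling sketch for unmerge() (hypothetical; assumes `dbl` is a
- 	# dblink for an installed package and `prev_mtimes` comes from the caller):
- 	#
- 	#     retval = dbl.unmerge(ldpath_mtimes=prev_mtimes)
- 	#     if retval != os.EX_OK:
- 	#         writemsg("!!! unmerge phase failed: %s\n" % retval, noiselevel=-1)
- 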
- 	def _display_merge(self, msg, level=0, noiselevel=0):
- 		if not self._verbose and noiselevel >= 0 and level < logging.WARN:
- 			return
- 		if self._scheduler is None:
- 			writemsg_level(msg, level=level, noiselevel=noiselevel)
- 		else:
- 			log_path = None
- 			if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
- 				log_path = self.settings.get("PORTAGE_LOG_FILE")
- 			background = self.settings.get("PORTAGE_BACKGROUND") == "1"
- 
- 			if background and log_path is None:
- 				if level >= logging.WARN:
- 					writemsg_level(msg, level=level, noiselevel=noiselevel)
- 			else:
- 				self._scheduler.output(msg,
- 					log_path=log_path, background=background,
- 					level=level, noiselevel=noiselevel)
- 
- 	def _show_unmerge(self, zing, desc, file_type, file_name):
- 		self._display_merge("%s %s %s %s\n" % \
- 			(zing, desc.ljust(8), file_type, file_name))
- 
- 	def _unmerge_pkgfiles(self, pkgfiles, others_in_slot):
- 		"""
- 
- 		Unmerges the contents of a package from the liveFS
- 		Removes the VDB entry for self
- 
- 		@param pkgfiles: typically self.getcontents()
- 		@type pkgfiles: Dictionary { filename: [ 'type', '?', 'md5sum' ] }
- 		@param others_in_slot: all dblink instances in this slot, excluding self
- 		@type others_in_slot: list
- 		@rtype: None
- 		"""
- 
- 		os = _os_merge
- 		perf_md5 = perform_md5
- 		showMessage = self._display_merge
- 		show_unmerge = self._show_unmerge
- 		ignored_unlink_errnos = self._ignored_unlink_errnos
- 		ignored_rmdir_errnos = self._ignored_rmdir_errnos
- 
- 		if not pkgfiles:
- 			showMessage(_("No package files given... Grabbing a set.\n"))
- 			pkgfiles = self.getcontents()
- 
- 		if others_in_slot is None:
- 			others_in_slot = []
- 			slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
- 			slot_matches = self.vartree.dbapi.match(
- 				"%s:%s" % (portage.cpv_getkey(self.mycpv), slot))
- 			for cur_cpv in slot_matches:
- 				if cur_cpv == self.mycpv:
- 					continue
- 				others_in_slot.append(dblink(self.cat, catsplit(cur_cpv)[1],
- 					settings=self.settings,
- 					vartree=self.vartree, treetype="vartree", pipe=self._pipe))
- 
- 		cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
- 		stale_confmem = []
- 		protected_symlinks = {}
- 
- 		unmerge_orphans = "unmerge-orphans" in self.settings.features
- 		calc_prelink = "prelink-checksums" in self.settings.features
- 
- 		if pkgfiles:
- 			self.updateprotect()
- 			mykeys = list(pkgfiles)
- 			mykeys.sort()
- 			mykeys.reverse()
- 
- 			# Process symlinks second-to-last, directories last.
- 			mydirs = set()
- 
- 			uninstall_ignore = portage.util.shlex_split(
- 				self.settings.get("UNINSTALL_IGNORE", ""))
- 
- 			def unlink(file_name, lstatobj):
- 				if bsd_chflags:
- 					if lstatobj.st_flags != 0:
- 						bsd_chflags.lchflags(file_name, 0)
- 					parent_name = os.path.dirname(file_name)
- 					# Use normal stat/chflags for the parent since we want to
- 					# follow any symlinks to the real parent directory.
- 					pflags = os.stat(parent_name).st_flags
- 					if pflags != 0:
- 						bsd_chflags.chflags(parent_name, 0)
- 				try:
- 					if not stat.S_ISLNK(lstatobj.st_mode):
- 						# Remove permissions to ensure that any hardlinks to
- 						# suid/sgid files are rendered harmless.
- 						os.chmod(file_name, 0)
- 					os.unlink(file_name)
- 				except OSError as ose:
- 					# If the chmod or unlink fails, you are in trouble.
- 					# With Prefix this can be because the file is owned
- 					# by someone else (a screwup by root?), on a normal
- 					# system maybe filesystem corruption.  In any case,
- 					# if we backtrace and die here, we leave the system
- 					# in a totally undefined state, hence we just bleed
- 					# like hell and continue to hopefully finish all our
- 					# administrative and pkg_postinst stuff.
- 					self._eerror("postrm",
- 						["Could not chmod or unlink '%s': %s" % \
- 						(file_name, ose)])
- 				else:
- 
- 					# Even though the file no longer exists, we log it
- 					# here so that _unmerge_dirs can see that we've
- 					# removed a file from this device, and will record
- 					# the parent directory for a syncfs call.
- 					self._merged_path(file_name, lstatobj, exists=False)
- 
- 				finally:
- 					if bsd_chflags and pflags != 0:
- 						# Restore the parent flags we saved before unlinking
- 						bsd_chflags.chflags(parent_name, pflags)
- 
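- 			# The hardening in unlink() above, reduced to a sketch with a
- 			# hypothetical path: dropping all permission bits before the
- 			# unlink renders any remaining hardlinks to suid/sgid files
- 			# harmless.
- 			#
- 			#     lst = os.lstat(file_name)
- 			#     if not stat.S_ISLNK(lst.st_mode):
- 			#         os.chmod(file_name, 0)
- 			#     os.unlink(file_name)
- 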
- 			unmerge_desc = {}
- 			unmerge_desc["cfgpro"] = _("cfgpro")
- 			unmerge_desc["replaced"] = _("replaced")
- 			unmerge_desc["!dir"] = _("!dir")
- 			unmerge_desc["!empty"] = _("!empty")
- 			unmerge_desc["!fif"] = _("!fif")
- 			unmerge_desc["!found"] = _("!found")
- 			unmerge_desc["!md5"] = _("!md5")
- 			unmerge_desc["!mtime"] = _("!mtime")
- 			unmerge_desc["!obj"] = _("!obj")
- 			unmerge_desc["!sym"] = _("!sym")
- 			unmerge_desc["!prefix"] = _("!prefix")
- 
- 			real_root = self.settings['ROOT']
- 			real_root_len = len(real_root) - 1
- 			eroot = self.settings["EROOT"]
- 
- 			infodirs = frozenset(infodir for infodir in chain(
- 				self.settings.get("INFOPATH", "").split(":"),
- 				self.settings.get("INFODIR", "").split(":")) if infodir)
- 			infodirs_inodes = set()
- 			for infodir in infodirs:
- 				infodir = os.path.join(real_root, infodir.lstrip(os.sep))
- 				try:
- 					statobj = os.stat(infodir)
- 				except OSError:
- 					pass
- 				else:
- 					infodirs_inodes.add((statobj.st_dev, statobj.st_ino))
- 
- 			for i, objkey in enumerate(mykeys):
- 
- 				obj = normalize_path(objkey)
- 				if os is _os_merge:
- 					try:
- 						_unicode_encode(obj,
- 							encoding=_encodings['merge'], errors='strict')
- 					except UnicodeEncodeError:
- 						# The package appears to have been merged with a
- 						# different value of sys.getfilesystemencoding(),
- 						# so fall back to utf_8 if appropriate.
- 						try:
- 							_unicode_encode(obj,
- 								encoding=_encodings['fs'], errors='strict')
- 						except UnicodeEncodeError:
- 							pass
- 						else:
- 							os = portage.os
- 							perf_md5 = portage.checksum.perform_md5
- 
- 				file_data = pkgfiles[objkey]
- 				file_type = file_data[0]
- 
- 				# don't try to unmerge the prefix offset itself
- 				if len(obj) <= len(eroot) or not obj.startswith(eroot):
- 					show_unmerge("---", unmerge_desc["!prefix"], file_type, obj)
- 					continue
- 
- 				statobj = None
- 				try:
- 					statobj = os.stat(obj)
- 				except OSError:
- 					pass
- 				lstatobj = None
- 				try:
- 					lstatobj = os.lstat(obj)
- 				except (OSError, AttributeError):
- 					pass
- 				islink = lstatobj is not None and stat.S_ISLNK(lstatobj.st_mode)
- 				if lstatobj is None:
- 						show_unmerge("---", unmerge_desc["!found"], file_type, obj)
- 						continue
- 
- 				f_match = obj[len(eroot)-1:]
- 				ignore = False
- 				for pattern in uninstall_ignore:
- 					if fnmatch.fnmatch(f_match, pattern):
- 						ignore = True
- 						break
- 
- 				if not ignore:
- 					if islink and f_match in \
- 						("/lib", "/usr/lib", "/usr/local/lib"):
- 						# Ignore libdir symlinks for bug #423127.
- 						ignore = True
- 
- 				if ignore:
- 					show_unmerge("---", unmerge_desc["cfgpro"], file_type, obj)
- 					continue
- 
- 				# don't use EROOT, CONTENTS entries already contain EPREFIX
- 				if obj.startswith(real_root):
- 					relative_path = obj[real_root_len:]
- 					is_owned = False
- 					for dblnk in others_in_slot:
- 						if dblnk.isowner(relative_path):
- 							is_owned = True
- 							break
- 
- 					if is_owned and islink and \
- 						file_type in ("sym", "dir") and \
- 						statobj and stat.S_ISDIR(statobj.st_mode):
- 						# A new instance of this package claims the file, so
- 						# don't unmerge it. If the file is symlink to a
- 						# directory and the unmerging package installed it as
- 						# a symlink, but the new owner has it listed as a
- 						# directory, then we'll produce a warning since the
- 						# symlink is a sort of orphan in this case (see
- 						# bug #326685).
- 						symlink_orphan = False
- 						for dblnk in others_in_slot:
- 							parent_contents_key = \
- 								dblnk._match_contents(relative_path)
- 							if not parent_contents_key:
- 								continue
- 							if not parent_contents_key.startswith(
- 								real_root):
- 								continue
- 							if dblnk.getcontents()[
- 								parent_contents_key][0] == "dir":
- 								symlink_orphan = True
- 								break
- 
- 						if symlink_orphan:
- 							protected_symlinks.setdefault(
- 								(statobj.st_dev, statobj.st_ino),
- 								[]).append(relative_path)
- 
- 					if is_owned:
- 						show_unmerge("---", unmerge_desc["replaced"], file_type, obj)
- 						continue
- 					elif relative_path in cfgfiledict:
- 						stale_confmem.append(relative_path)
- 
- 				# Don't unlink symlinks to directories here since that can
- 				# remove /lib and /usr/lib symlinks.
- 				if unmerge_orphans and \
- 					lstatobj and not stat.S_ISDIR(lstatobj.st_mode) and \
- 					not (islink and statobj and stat.S_ISDIR(statobj.st_mode)) and \
- 					not self.isprotected(obj):
- 					try:
- 						unlink(obj, lstatobj)
- 					except EnvironmentError as e:
- 						if e.errno not in ignored_unlink_errnos:
- 							raise
- 						del e
- 					show_unmerge("<<<", "", file_type, obj)
- 					continue
- 
- 				lmtime = str(lstatobj[stat.ST_MTIME])
- 				if (pkgfiles[objkey][0] not in ("dir", "fif", "dev")) and (lmtime != pkgfiles[objkey][1]):
- 					show_unmerge("---", unmerge_desc["!mtime"], file_type, obj)
- 					continue
- 
- 				if file_type == "dir" and not islink:
- 					if lstatobj is None or not stat.S_ISDIR(lstatobj.st_mode):
- 						show_unmerge("---", unmerge_desc["!dir"], file_type, obj)
- 						continue
- 					mydirs.add((obj, (lstatobj.st_dev, lstatobj.st_ino)))
- 				elif file_type == "sym" or (file_type == "dir" and islink):
- 					if not islink:
- 						show_unmerge("---", unmerge_desc["!sym"], file_type, obj)
- 						continue
- 
- 					# If this symlink points to a directory then we don't want
- 					# to unmerge it if there are any other packages that
- 					# installed files into the directory via this symlink
- 					# (see bug #326685).
- 					# TODO: Resolving a symlink to a directory will require
- 					# simulation if $ROOT != / and the link is not relative.
- 					if islink and statobj and stat.S_ISDIR(statobj.st_mode) \
- 						and obj.startswith(real_root):
- 
- 						relative_path = obj[real_root_len:]
- 						try:
- 							target_dir_contents = os.listdir(obj)
- 						except OSError:
- 							pass
- 						else:
- 							if target_dir_contents:
- 								# If all the children are regular files owned
- 								# by this package, then the symlink should be
- 								# safe to unmerge.
- 								all_owned = True
- 								for child in target_dir_contents:
- 									child = os.path.join(relative_path, child)
- 									if not self.isowner(child):
- 										all_owned = False
- 										break
- 									try:
- 										child_lstat = os.lstat(os.path.join(
- 											real_root, child.lstrip(os.sep)))
- 									except OSError:
- 										continue
- 
- 									if not stat.S_ISREG(child_lstat.st_mode):
- 										# Nested symlinks or directories make
- 										# the issue very complex, so just
- 										# preserve the symlink in order to be
- 										# on the safe side.
- 										all_owned = False
- 										break
- 
- 								if not all_owned:
- 									protected_symlinks.setdefault(
- 										(statobj.st_dev, statobj.st_ino),
- 										[]).append(relative_path)
- 									show_unmerge("---", unmerge_desc["!empty"],
- 										file_type, obj)
- 									continue
- 
- 					# Go ahead and unlink symlinks to directories here when
- 					# they're actually recorded as symlinks in the contents.
- 					# Normally, symlinks such as /lib -> lib64 are not recorded
- 					# as symlinks in the contents of a package.  If a package
- 					# installs something into ${D}/lib/, it is recorded in the
- 					# contents as a directory even if it happens to correspond
- 					# to a symlink when it's merged to the live filesystem.
- 					try:
- 						unlink(obj, lstatobj)
- 						show_unmerge("<<<", "", file_type, obj)
- 					except (OSError, IOError) as e:
- 						if e.errno not in ignored_unlink_errnos:
- 							raise
- 						del e
- 						show_unmerge("!!!", "", file_type, obj)
- 				elif pkgfiles[objkey][0] == "obj":
- 					if statobj is None or not stat.S_ISREG(statobj.st_mode):
- 						show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
- 						continue
- 					mymd5 = None
- 					try:
- 						mymd5 = perf_md5(obj, calc_prelink=calc_prelink)
- 					except FileNotFound as e:
- 						# the file has disappeared between now and our stat call
- 						show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
- 						continue
- 
- 					# Lower-case the stored checksum for backwards compatibility,
- 					# since db entries used to be stored in upper case.
- 					if mymd5 != pkgfiles[objkey][2].lower():
- 						show_unmerge("---", unmerge_desc["!md5"], file_type, obj)
- 						continue
- 					try:
- 						unlink(obj, lstatobj)
- 					except (OSError, IOError) as e:
- 						if e.errno not in ignored_unlink_errnos:
- 							raise
- 						del e
- 					show_unmerge("<<<", "", file_type, obj)
- 				elif pkgfiles[objkey][0] == "fif":
- 					if not stat.S_ISFIFO(lstatobj[stat.ST_MODE]):
- 						show_unmerge("---", unmerge_desc["!fif"], file_type, obj)
- 						continue
- 					show_unmerge("---", "", file_type, obj)
- 				elif pkgfiles[objkey][0] == "dev":
- 					show_unmerge("---", "", file_type, obj)
- 
- 			self._unmerge_dirs(mydirs, infodirs_inodes,
- 				protected_symlinks, unmerge_desc, unlink, os)
- 			mydirs.clear()
- 
- 		if protected_symlinks:
- 			self._unmerge_protected_symlinks(others_in_slot, infodirs_inodes,
- 				protected_symlinks, unmerge_desc, unlink, os)
- 
- 		if protected_symlinks:
- 			msg = "One or more symlinks to directories have been " + \
- 				"preserved in order to ensure that files installed " + \
- 				"via these symlinks remain accessible. " + \
- 				"This indicates that the mentioned symlink(s) may " + \
- 				"be obsolete remnants of an old install, and it " + \
- 				"may be appropriate to replace a given symlink " + \
- 				"with the directory that it points to."
- 			lines = textwrap.wrap(msg, 72)
- 			lines.append("")
- 			flat_list = set()
- 			flat_list.update(*protected_symlinks.values())
- 			flat_list = sorted(flat_list)
- 			for f in flat_list:
- 				lines.append("\t%s" % (os.path.join(real_root,
- 					f.lstrip(os.sep))))
- 			lines.append("")
- 			self._elog("elog", "postrm", lines)
- 
- 		# Remove stale entries from config memory.
- 		if stale_confmem:
- 			for filename in stale_confmem:
- 				del cfgfiledict[filename]
- 			writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
- 
- 		# Remove self from the vartree database so that our own virtual gets zapped if we're the last node.
- 		self.vartree.zap(self.mycpv)
- 
- 	def _unmerge_protected_symlinks(self, others_in_slot, infodirs_inodes,
- 		protected_symlinks, unmerge_desc, unlink, os):
- 
- 		real_root = self.settings['ROOT']
- 		show_unmerge = self._show_unmerge
- 		ignored_unlink_errnos = self._ignored_unlink_errnos
- 
- 		flat_list = set()
- 		flat_list.update(*protected_symlinks.values())
- 		flat_list = sorted(flat_list)
- 
- 		for f in flat_list:
- 			for dblnk in others_in_slot:
- 				if dblnk.isowner(f):
- 					# If another package in the same slot installed
- 					# a file via a protected symlink, return early
- 					# and don't bother searching for any other owners.
- 					return
- 
- 		msg = []
- 		msg.append("")
- 		msg.append(_("Directory symlink(s) may need protection:"))
- 		msg.append("")
- 
- 		for f in flat_list:
- 			msg.append("\t%s" % \
- 				os.path.join(real_root, f.lstrip(os.path.sep)))
- 
- 		msg.append("")
- 		msg.append("Use the UNINSTALL_IGNORE variable to exempt specific symlinks")
- 		msg.append("from the following search (see the make.conf man page).")
- 		msg.append("")
- 		msg.append(_("Searching all installed"
- 			" packages for files installed via above symlink(s)..."))
- 		msg.append("")
- 		self._elog("elog", "postrm", msg)
- 
- 		self.lockdb()
- 		try:
- 			owners = self.vartree.dbapi._owners.get_owners(flat_list)
- 			self.vartree.dbapi.flush_cache()
- 		finally:
- 			self.unlockdb()
- 
- 		for owner in list(owners):
- 			if owner.mycpv == self.mycpv:
- 				owners.pop(owner, None)
- 
- 		if not owners:
- 			msg = []
- 			msg.append(_("The above directory symlink(s) are all "
- 				"safe to remove. Removing them now..."))
- 			msg.append("")
- 			self._elog("elog", "postrm", msg)
- 			dirs = set()
- 			for unmerge_syms in protected_symlinks.values():
- 				for relative_path in unmerge_syms:
- 					obj = os.path.join(real_root,
- 						relative_path.lstrip(os.sep))
- 					parent = os.path.dirname(obj)
- 					while len(parent) > len(self._eroot):
- 						try:
- 							lstatobj = os.lstat(parent)
- 						except OSError:
- 							break
- 						else:
- 							dirs.add((parent,
- 								(lstatobj.st_dev, lstatobj.st_ino)))
- 							parent = os.path.dirname(parent)
- 					try:
- 						unlink(obj, os.lstat(obj))
- 						show_unmerge("<<<", "", "sym", obj)
- 					except (OSError, IOError) as e:
- 						if e.errno not in ignored_unlink_errnos:
- 							raise
- 						del e
- 						show_unmerge("!!!", "", "sym", obj)
- 
- 			protected_symlinks.clear()
- 			self._unmerge_dirs(dirs, infodirs_inodes,
- 				protected_symlinks, unmerge_desc, unlink, os)
- 			dirs.clear()
- 
- 	def _unmerge_dirs(self, dirs, infodirs_inodes,
- 		protected_symlinks, unmerge_desc, unlink, os):
- 
- 		show_unmerge = self._show_unmerge
- 		infodir_cleanup = self._infodir_cleanup
- 		ignored_unlink_errnos = self._ignored_unlink_errnos
- 		ignored_rmdir_errnos = self._ignored_rmdir_errnos
- 		real_root = self.settings['ROOT']
- 
- 		dirs = sorted(dirs)
- 		revisit = {}
- 
- 		while True:
- 			try:
- 				obj, inode_key = dirs.pop()
- 			except IndexError:
- 				break
- 			# Treat any directory named "info" as a candidate here,
- 			# since it might have been in INFOPATH previously even
- 			# though it may not be there now.
- 			if inode_key in infodirs_inodes or \
- 				os.path.basename(obj) == "info":
- 				try:
- 					remaining = os.listdir(obj)
- 				except OSError:
- 					pass
- 				else:
- 					cleanup_info_dir = ()
- 					if remaining and \
- 						len(remaining) <= len(infodir_cleanup):
- 						if not set(remaining).difference(infodir_cleanup):
- 							cleanup_info_dir = remaining
- 
- 					for child in cleanup_info_dir:
- 						child = os.path.join(obj, child)
- 						try:
- 							lstatobj = os.lstat(child)
- 							if stat.S_ISREG(lstatobj.st_mode):
- 								unlink(child, lstatobj)
- 								show_unmerge("<<<", "", "obj", child)
- 						except EnvironmentError as e:
- 							if e.errno not in ignored_unlink_errnos:
- 								raise
- 							del e
- 							show_unmerge("!!!", "", "obj", child)
- 
- 			try:
- 				parent_name = os.path.dirname(obj)
- 				parent_stat = os.stat(parent_name)
- 
- 				if bsd_chflags:
- 					lstatobj = os.lstat(obj)
- 					if lstatobj.st_flags != 0:
- 						bsd_chflags.lchflags(obj, 0)
- 
- 					# Use normal stat/chflags for the parent since we want to
- 					# follow any symlinks to the real parent directory.
- 					pflags = parent_stat.st_flags
- 					if pflags != 0:
- 						bsd_chflags.chflags(parent_name, 0)
- 				try:
- 					os.rmdir(obj)
- 				finally:
- 					if bsd_chflags and pflags != 0:
- 						# Restore the parent flags we saved before unlinking
- 						bsd_chflags.chflags(parent_name, pflags)
- 
- 				# Record the parent directory for use in syncfs calls.
- 				# Note that we use a realpath and a regular stat here, since
- 				# we want to follow any symlinks back to the real device where
- 				# the real parent directory resides.
- 				self._merged_path(os.path.realpath(parent_name), parent_stat)
- 
- 				show_unmerge("<<<", "", "dir", obj)
- 			except EnvironmentError as e:
- 				if e.errno not in ignored_rmdir_errnos:
- 					raise
- 				if e.errno != errno.ENOENT:
- 					show_unmerge("---", unmerge_desc["!empty"], "dir", obj)
- 					revisit[obj] = inode_key
- 
- 				# Since we didn't remove this directory, record the directory
- 				# itself for use in syncfs calls, if we have removed another
- 				# file from the same device.
- 				# Note that we use a realpath and a regular stat here, since
- 				# we want to follow any symlinks back to the real device where
- 				# the real directory resides.
- 				try:
- 					dir_stat = os.stat(obj)
- 				except OSError:
- 					pass
- 				else:
- 					if dir_stat.st_dev in self._device_path_map:
- 						self._merged_path(os.path.realpath(obj), dir_stat)
- 
- 			else:
- 				# When a directory is successfully removed, there's
- 				# no need to protect symlinks that point to it.
- 				unmerge_syms = protected_symlinks.pop(inode_key, None)
- 				if unmerge_syms is not None:
- 					parents = []
- 					for relative_path in unmerge_syms:
- 						obj = os.path.join(real_root,
- 							relative_path.lstrip(os.sep))
- 						try:
- 							unlink(obj, os.lstat(obj))
- 							show_unmerge("<<<", "", "sym", obj)
- 						except (OSError, IOError) as e:
- 							if e.errno not in ignored_unlink_errnos:
- 								raise
- 							del e
- 							show_unmerge("!!!", "", "sym", obj)
- 						else:
- 							parents.append(os.path.dirname(obj))
- 
- 					if parents:
- 						# Revisit parents recursively (bug 640058).
- 						recursive_parents = []
- 						for parent in set(parents):
- 							while parent in revisit:
- 								recursive_parents.append(parent)
- 								parent = os.path.dirname(parent)
- 								if parent == '/':
- 									break
- 
- 						for parent in sorted(set(recursive_parents)):
- 							dirs.append((parent, revisit.pop(parent)))
- 
- 	def isowner(self, filename, destroot=None):
- 		"""
- 		Check if a file belongs to this package. This may
- 		result in a stat call for the parent directory of
- 		every installed file, since the inode numbers are
- 		used to work around the problem of ambiguous paths
- 		caused by symlinked directories. The results of
- 		stat calls are cached to optimize multiple calls
- 		to this method.
- 
- 		@param filename: path of the file to check, relative to ROOT
- 		@type filename: String
- 		@param destroot: deprecated and unused; self.settings['EROOT'] is used instead
- 		@type destroot: String
- 		@rtype: Boolean
- 		@return:
- 		1. True if this package owns the file.
- 		2. False if this package does not own the file.
- 		"""
- 
- 		if destroot is not None and destroot != self._eroot:
- 			warnings.warn("The second parameter of the " + \
- 				"portage.dbapi.vartree.dblink.isowner()" + \
- 				" is now unused. Instead " + \
- 				"self.settings['EROOT'] will be used.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		return bool(self._match_contents(filename))
- 
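- 	# A small usage sketch (hypothetical path):
- 	#
- 	#     if dbl.isowner("/usr/bin/foo"):
- 	#         ...  # this package instance recorded /usr/bin/foo in CONTENTS
- 	#
- 	# _match_contents() below returns the matching CONTENTS key itself (or
- 	# False), which matters when symlinked directories make the recorded
- 	# path differ from the queried one.
- 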
- 	def _match_contents(self, filename, destroot=None):
- 		"""
- 		The matching contents entry is returned, which is useful
- 		since the path may differ from the one given by the caller,
- 		due to symlinks.
- 
- 		@rtype: String
- 		@return: the contents entry corresponding to the given path, or False
- 			if the file is not owned by this package.
- 		"""
- 
- 		filename = _unicode_decode(filename,
- 			encoding=_encodings['content'], errors='strict')
- 
- 		if destroot is not None and destroot != self._eroot:
- 			warnings.warn("The second parameter of the " + \
- 				"portage.dbapi.vartree.dblink._match_contents()" + \
- 				" is now unused. Instead " + \
- 				"self.settings['ROOT'] will be used.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		# don't use EROOT here, image already contains EPREFIX
- 		destroot = self.settings['ROOT']
- 
- 		# The given filename argument might have a different encoding than
- 		# the filenames contained in the contents, so use separate wrapped os
- 		# modules for each. The basename is more likely to contain non-ascii
- 		# characters than the directory path, so use os_filename_arg for all
- 		# operations involving the basename of the filename arg.
- 		os_filename_arg = _os_merge
- 		os = _os_merge
- 
- 		try:
- 			_unicode_encode(filename,
- 				encoding=_encodings['merge'], errors='strict')
- 		except UnicodeEncodeError:
- 			# The package appears to have been merged with a
- 			# different value of sys.getfilesystemencoding(),
- 			# so fall back to utf_8 if appropriate.
- 			try:
- 				_unicode_encode(filename,
- 					encoding=_encodings['fs'], errors='strict')
- 			except UnicodeEncodeError:
- 				pass
- 			else:
- 				os_filename_arg = portage.os
- 
- 		destfile = normalize_path(
- 			os_filename_arg.path.join(destroot,
- 			filename.lstrip(os_filename_arg.path.sep)))
- 
- 		if "case-insensitive-fs" in self.settings.features:
- 			destfile = destfile.lower()
- 
- 		if self._contents.contains(destfile):
- 			return self._contents.unmap_key(destfile)
- 
- 		if self.getcontents():
- 			basename = os_filename_arg.path.basename(destfile)
- 			if self._contents_basenames is None:
- 
- 				try:
- 					for x in self._contents.keys():
- 						_unicode_encode(x,
- 							encoding=_encodings['merge'],
- 							errors='strict')
- 				except UnicodeEncodeError:
- 					# The package appears to have been merged with a
- 					# different value of sys.getfilesystemencoding(),
- 					# so fall back to utf_8 if appropriate.
- 					try:
- 						for x in self._contents.keys():
- 							_unicode_encode(x,
- 								encoding=_encodings['fs'],
- 								errors='strict')
- 					except UnicodeEncodeError:
- 						pass
- 					else:
- 						os = portage.os
- 
- 				self._contents_basenames = set(
- 					os.path.basename(x) for x in self._contents.keys())
- 			if basename not in self._contents_basenames:
- 				# This is a shortcut that, in most cases, allows us to
- 				# eliminate this package as an owner without the need
- 				# to examine inode numbers of parent directories.
- 				return False
- 
- 			# Use stat rather than lstat since we want to follow
- 			# any symlinks to the real parent directory.
- 			parent_path = os_filename_arg.path.dirname(destfile)
- 			try:
- 				parent_stat = os_filename_arg.stat(parent_path)
- 			except EnvironmentError as e:
- 				if e.errno != errno.ENOENT:
- 					raise
- 				del e
- 				return False
- 			if self._contents_inodes is None:
- 
- 				if os is _os_merge:
- 					try:
- 						for x in self._contents.keys():
- 							_unicode_encode(x,
- 								encoding=_encodings['merge'],
- 								errors='strict')
- 					except UnicodeEncodeError:
- 						# The package appears to have been merged with a
- 						# different value of sys.getfilesystemencoding(),
- 						# so fall back to utf_8 if appropriate.
- 						try:
- 							for x in self._contents.keys():
- 								_unicode_encode(x,
- 									encoding=_encodings['fs'],
- 									errors='strict')
- 						except UnicodeEncodeError:
- 							pass
- 						else:
- 							os = portage.os
- 
- 				self._contents_inodes = {}
- 				parent_paths = set()
- 				for x in self._contents.keys():
- 					p_path = os.path.dirname(x)
- 					if p_path in parent_paths:
- 						continue
- 					parent_paths.add(p_path)
- 					try:
- 						s = os.stat(p_path)
- 					except OSError:
- 						pass
- 					else:
- 						inode_key = (s.st_dev, s.st_ino)
- 						# Use lists of paths in case multiple
- 						# paths reference the same inode.
- 						p_path_list = self._contents_inodes.get(inode_key)
- 						if p_path_list is None:
- 							p_path_list = []
- 							self._contents_inodes[inode_key] = p_path_list
- 						if p_path not in p_path_list:
- 							p_path_list.append(p_path)
- 
- 			p_path_list = self._contents_inodes.get(
- 				(parent_stat.st_dev, parent_stat.st_ino))
- 			if p_path_list:
- 				for p_path in p_path_list:
- 					x = os_filename_arg.path.join(p_path, basename)
- 					if self._contents.contains(x):
- 						return self._contents.unmap_key(x)
- 
- 		return False
- 
- 	def _linkmap_rebuild(self, **kwargs):
- 		"""
- 		Rebuild the self._linkmap if it's not broken due to missing
- 		scanelf binary. Also, return early if preserve-libs is disabled
- 		and the preserve-libs registry is empty.
- 		"""
- 		if self._linkmap_broken or \
- 			self.vartree.dbapi._linkmap is None or \
- 			self.vartree.dbapi._plib_registry is None or \
- 			("preserve-libs" not in self.settings.features and \
- 			not self.vartree.dbapi._plib_registry.hasEntries()):
- 			return
- 		try:
- 			self.vartree.dbapi._linkmap.rebuild(**kwargs)
- 		except CommandNotFound as e:
- 			self._linkmap_broken = True
- 			self._display_merge(_("!!! Disabling preserve-libs " \
- 				"due to error: Command Not Found: %s\n") % (e,),
- 				level=logging.ERROR, noiselevel=-1)
- 
- 	def _find_libs_to_preserve(self, unmerge=False):
- 		"""
- 		Get set of relative paths for libraries to be preserved. When
- 		unmerge is False, file paths to preserve are selected from
- 		self._installed_instance. Otherwise, paths are selected from
- 		self.
- 		"""
- 		if self._linkmap_broken or \
- 			self.vartree.dbapi._linkmap is None or \
- 			self.vartree.dbapi._plib_registry is None or \
- 			(not unmerge and self._installed_instance is None) or \
- 			not self._preserve_libs:
- 			return set()
- 
- 		os = _os_merge
- 		linkmap = self.vartree.dbapi._linkmap
- 		if unmerge:
- 			installed_instance = self
- 		else:
- 			installed_instance = self._installed_instance
- 		old_contents = installed_instance.getcontents()
- 		root = self.settings['ROOT']
- 		root_len = len(root) - 1
- 		lib_graph = digraph()
- 		path_node_map = {}
- 
- 		def path_to_node(path):
- 			node = path_node_map.get(path)
- 			if node is None:
- 				node = linkmap._LibGraphNode(linkmap._obj_key(path))
- 				alt_path_node = lib_graph.get(node)
- 				if alt_path_node is not None:
- 					node = alt_path_node
- 				node.alt_paths.add(path)
- 				path_node_map[path] = node
- 			return node
- 
- 		consumer_map = {}
- 		provider_nodes = set()
- 		# Create provider nodes and add them to the graph.
- 		for f_abs in old_contents:
- 
- 			if os is _os_merge:
- 				try:
- 					_unicode_encode(f_abs,
- 						encoding=_encodings['merge'], errors='strict')
- 				except UnicodeEncodeError:
- 					# The package appears to have been merged with a
- 					# different value of sys.getfilesystemencoding(),
- 					# so fall back to utf_8 if appropriate.
- 					try:
- 						_unicode_encode(f_abs,
- 							encoding=_encodings['fs'], errors='strict')
- 					except UnicodeEncodeError:
- 						pass
- 					else:
- 						os = portage.os
- 
- 			f = f_abs[root_len:]
- 			try:
- 				consumers = linkmap.findConsumers(f,
- 					exclude_providers=(installed_instance.isowner,))
- 			except KeyError:
- 				continue
- 			if not consumers:
- 				continue
- 			provider_node = path_to_node(f)
- 			lib_graph.add(provider_node, None)
- 			provider_nodes.add(provider_node)
- 			consumer_map[provider_node] = consumers
- 
- 		# Create consumer nodes and add them to the graph.
- 		# Note that consumers can also be providers.
- 		for provider_node, consumers in consumer_map.items():
- 			for c in consumers:
- 				consumer_node = path_to_node(c)
- 				if installed_instance.isowner(c) and \
- 					consumer_node not in provider_nodes:
- 					# This is not a provider, so it will be uninstalled.
- 					continue
- 				lib_graph.add(provider_node, consumer_node)
- 
- 		# Locate nodes which should be preserved. They consist of all
- 		# providers that are reachable from consumers that are not
- 		# providers themselves.
- 		preserve_nodes = set()
- 		for consumer_node in lib_graph.root_nodes():
- 			if consumer_node in provider_nodes:
- 				continue
- 			# Preserve all providers that are reachable from this consumer.
- 			node_stack = lib_graph.child_nodes(consumer_node)
- 			while node_stack:
- 				provider_node = node_stack.pop()
- 				if provider_node in preserve_nodes:
- 					continue
- 				preserve_nodes.add(provider_node)
- 				node_stack.extend(lib_graph.child_nodes(provider_node))
- 
- 		preserve_paths = set()
- 		for preserve_node in preserve_nodes:
- 			# Preserve the library itself, and also preserve the
- 			# soname symlink which is the only symlink that is
- 			# strictly required.
- 			hardlinks = set()
- 			soname_symlinks = set()
- 			soname = linkmap.getSoname(next(iter(preserve_node.alt_paths)))
- 			have_replacement_soname_link = False
- 			have_replacement_hardlink = False
- 			for f in preserve_node.alt_paths:
- 				f_abs = os.path.join(root, f.lstrip(os.sep))
- 				try:
- 					if stat.S_ISREG(os.lstat(f_abs).st_mode):
- 						hardlinks.add(f)
- 						if not unmerge and self.isowner(f):
- 							have_replacement_hardlink = True
- 							if os.path.basename(f) == soname:
- 								have_replacement_soname_link = True
- 					elif os.path.basename(f) == soname:
- 						soname_symlinks.add(f)
- 						if not unmerge and self.isowner(f):
- 							have_replacement_soname_link = True
- 				except OSError:
- 					pass
- 
- 			if have_replacement_hardlink and have_replacement_soname_link:
- 				continue
- 
- 			if hardlinks:
- 				preserve_paths.update(hardlinks)
- 				preserve_paths.update(soname_symlinks)
- 
- 		return preserve_paths
- 
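- 	# The reachability walk above, reduced to a standalone sketch
- 	# (hypothetical digraph `g` in which child_nodes(consumer) yields the
- 	# providers that consumer links against; `provider_nodes` collected as
- 	# above):
- 	#
- 	#     preserve_nodes = set()
- 	#     for consumer in g.root_nodes():
- 	#         if consumer in provider_nodes:
- 	#             continue
- 	#         stack = g.child_nodes(consumer)
- 	#         while stack:
- 	#             provider = stack.pop()
- 	#             if provider not in preserve_nodes:
- 	#                 preserve_nodes.add(provider)
- 	#                 stack.extend(g.child_nodes(provider))
- 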
- 	def _add_preserve_libs_to_contents(self, preserve_paths):
- 		"""
- 		Preserve libs returned from _find_libs_to_preserve().
- 		"""
- 
- 		if not preserve_paths:
- 			return
- 
- 		os = _os_merge
- 		showMessage = self._display_merge
- 		root = self.settings['ROOT']
- 
- 		# Copy contents entries from the old package to the new one.
- 		new_contents = self.getcontents().copy()
- 		old_contents = self._installed_instance.getcontents()
- 		for f in sorted(preserve_paths):
- 			f = _unicode_decode(f,
- 				encoding=_encodings['content'], errors='strict')
- 			f_abs = os.path.join(root, f.lstrip(os.sep))
- 			contents_entry = old_contents.get(f_abs)
- 			if contents_entry is None:
- 				# This will probably never happen, but it might if one of the
- 				# paths returned from findConsumers() refers to one of the libs
- 				# that should be preserved yet the path is not listed in the
- 				# contents. Such a path might belong to some other package, so
- 				# it shouldn't be preserved here.
- 				showMessage(_("!!! File '%s' will not be preserved "
- 					"due to missing contents entry\n") % (f_abs,),
- 					level=logging.ERROR, noiselevel=-1)
- 				preserve_paths.remove(f)
- 				continue
- 			new_contents[f_abs] = contents_entry
- 			obj_type = contents_entry[0]
- 			showMessage(_(">>> needed    %s %s\n") % (obj_type, f_abs),
- 				noiselevel=-1)
- 			# Add parent directories to contents if necessary.
- 			parent_dir = os.path.dirname(f_abs)
- 			while len(parent_dir) > len(root):
- 				new_contents[parent_dir] = ["dir"]
- 				prev = parent_dir
- 				parent_dir = os.path.dirname(parent_dir)
- 				if prev == parent_dir:
- 					break
- 		outfile = atomic_ofstream(os.path.join(self.dbtmpdir, "CONTENTS"))
- 		write_contents(new_contents, root, outfile)
- 		outfile.close()
- 		self._clear_contents_cache()
- 
- 	def _find_unused_preserved_libs(self, unmerge_no_replacement):
- 		"""
- 		Find preserved libraries that don't have any consumers left.
- 		"""
- 
- 		if self._linkmap_broken or \
- 			self.vartree.dbapi._linkmap is None or \
- 			self.vartree.dbapi._plib_registry is None or \
- 			not self.vartree.dbapi._plib_registry.hasEntries():
- 			return {}
- 
- 		# Since preserved libraries can be consumers of other preserved
- 		# libraries, use a graph to track consumer relationships.
- 		plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
- 		linkmap = self.vartree.dbapi._linkmap
- 		lib_graph = digraph()
- 		preserved_nodes = set()
- 		preserved_paths = set()
- 		path_cpv_map = {}
- 		path_node_map = {}
- 		root = self.settings['ROOT']
- 
- 		def path_to_node(path):
- 			node = path_node_map.get(path)
- 			if node is None:
- 				chost = self.settings.get('CHOST')
- 				if chost.find('darwin') >= 0:
- 					node = LinkageMapMachO._LibGraphNode(linkmap._obj_key(path))
- 				elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
- 					node = LinkageMapPeCoff._LibGraphNode(linkmap._obj_key(path))
- 				elif chost.find('aix') >= 0:
- 					node = LinkageMapXCoff._LibGraphNode(linkmap._obj_key(path))
- 				else:
- 					node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
- 				alt_path_node = lib_graph.get(node)
- 				if alt_path_node is not None:
- 					node = alt_path_node
- 				node.alt_paths.add(path)
- 				path_node_map[path] = node
- 			return node
- 
- 		for cpv, plibs in plib_dict.items():
- 			for f in plibs:
- 				path_cpv_map[f] = cpv
- 				preserved_node = path_to_node(f)
- 				if not preserved_node.file_exists():
- 					continue
- 				lib_graph.add(preserved_node, None)
- 				preserved_paths.add(f)
- 				preserved_nodes.add(preserved_node)
- 				for c in self.vartree.dbapi._linkmap.findConsumers(f):
- 					consumer_node = path_to_node(c)
- 					if not consumer_node.file_exists():
- 						continue
- 					# Note that consumers may also be providers.
- 					lib_graph.add(preserved_node, consumer_node)
- 
- 		# Eliminate consumers having providers with the same soname as an
- 		# installed library that is not preserved. This eliminates
- 		# libraries that are erroneously preserved due to a move from one
- 		# directory to another.
- 		# Also eliminate consumers that are going to be unmerged if
- 		# unmerge_no_replacement is True.
- 		provider_cache = {}
- 		for preserved_node in preserved_nodes:
- 			soname = linkmap.getSoname(preserved_node)
- 			for consumer_node in lib_graph.parent_nodes(preserved_node):
- 				if consumer_node in preserved_nodes:
- 					continue
- 				if unmerge_no_replacement:
- 					will_be_unmerged = True
- 					for path in consumer_node.alt_paths:
- 						if not self.isowner(path):
- 							will_be_unmerged = False
- 							break
- 					if will_be_unmerged:
- 						# This consumer is not preserved and it is
- 						# being unmerged, so drop this edge.
- 						lib_graph.remove_edge(preserved_node, consumer_node)
- 						continue
- 
- 				providers = provider_cache.get(consumer_node)
- 				if providers is None:
- 					providers = linkmap.findProviders(consumer_node)
- 					provider_cache[consumer_node] = providers
- 				providers = providers.get(soname)
- 				if providers is None:
- 					continue
- 				for provider in providers:
- 					if provider in preserved_paths:
- 						continue
- 					provider_node = path_to_node(provider)
- 					if not provider_node.file_exists():
- 						continue
- 					if provider_node in preserved_nodes:
- 						continue
- 					# An alternative provider seems to be
- 					# installed, so drop this edge.
- 					lib_graph.remove_edge(preserved_node, consumer_node)
- 					break
- 
- 		cpv_lib_map = {}
- 		while lib_graph:
- 			root_nodes = preserved_nodes.intersection(lib_graph.root_nodes())
- 			if not root_nodes:
- 				break
- 			lib_graph.difference_update(root_nodes)
- 			unlink_list = set()
- 			for node in root_nodes:
- 				unlink_list.update(node.alt_paths)
- 			unlink_list = sorted(unlink_list)
- 			for obj in unlink_list:
- 				cpv = path_cpv_map.get(obj)
- 				if cpv is None:
- 					# This means that a symlink is in the preserved libs
- 					# registry, but the actual lib it points to is not.
- 					self._display_merge(_("!!! symlink to lib is preserved, "
- 						"but not the lib itself:\n!!! '%s'\n") % (obj,),
- 						level=logging.ERROR, noiselevel=-1)
- 					continue
- 				removed = cpv_lib_map.get(cpv)
- 				if removed is None:
- 					removed = set()
- 					cpv_lib_map[cpv] = removed
- 				removed.add(obj)
- 
- 		return cpv_lib_map
- 
- 	def _remove_preserved_libs(self, cpv_lib_map):
- 		"""
- 		Remove files returned from _find_unused_preserved_libs().
- 		"""
- 
- 		os = _os_merge
- 
- 		files_to_remove = set()
- 		for files in cpv_lib_map.values():
- 			files_to_remove.update(files)
- 		files_to_remove = sorted(files_to_remove)
- 		showMessage = self._display_merge
- 		root = self.settings['ROOT']
- 
- 		parent_dirs = set()
- 		for obj in files_to_remove:
- 			obj = os.path.join(root, obj.lstrip(os.sep))
- 			parent_dirs.add(os.path.dirname(obj))
- 			if os.path.islink(obj):
- 				obj_type = _("sym")
- 			else:
- 				obj_type = _("obj")
- 			try:
- 				os.unlink(obj)
- 			except OSError as e:
- 				if e.errno != errno.ENOENT:
- 					raise
- 				del e
- 			else:
- 				showMessage(_("<<< !needed  %s %s\n") % (obj_type, obj),
- 					noiselevel=-1)
- 
- 		# Remove empty parent directories if possible.
- 		while parent_dirs:
- 			x = parent_dirs.pop()
- 			while True:
- 				try:
- 					os.rmdir(x)
- 				except OSError:
- 					break
- 				prev = x
- 				x = os.path.dirname(x)
- 				if x == prev:
- 					break
- 
- 		self.vartree.dbapi._plib_registry.pruneNonExisting()
- 
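- 	# The empty-parent cleanup above, as a sketch with a hypothetical
- 	# directory: walk upwards, removing directories until one is non-empty
- 	# or the top of the walk is reached.
- 	#
- 	#     x = "/usr/lib/debug/old"
- 	#     while True:
- 	#         try:
- 	#             os.rmdir(x)    # only succeeds while the directory is empty
- 	#         except OSError:
- 	#             break
- 	#         prev, x = x, os.path.dirname(x)
- 	#         if x == prev:
- 	#             break
- 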
- 	def _collision_protect(self, srcroot, destroot, mypkglist,
- 		file_list, symlink_list):
- 
- 			os = _os_merge
- 
- 			real_relative_paths = {}
- 
- 			collision_ignore = []
- 			for x in portage.util.shlex_split(
- 				self.settings.get("COLLISION_IGNORE", "")):
- 				if os.path.isdir(os.path.join(self._eroot, x.lstrip(os.sep))):
- 					x = normalize_path(x)
- 					x += "/*"
- 				collision_ignore.append(x)
- 
- 			# For collisions with preserved libraries, the current package
- 			# will assume ownership and the libraries will be unregistered.
- 			if self.vartree.dbapi._plib_registry is None:
- 				# preserve-libs is entirely disabled
- 				plib_cpv_map = None
- 				plib_paths = None
- 				plib_inodes = {}
- 			else:
- 				plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
- 				plib_cpv_map = {}
- 				plib_paths = set()
- 				for cpv, paths in plib_dict.items():
- 					plib_paths.update(paths)
- 					for f in paths:
- 						plib_cpv_map[f] = cpv
- 				plib_inodes = self._lstat_inode_map(plib_paths)
- 
- 			plib_collisions = {}
- 
- 			showMessage = self._display_merge
- 			stopmerge = False
- 			collisions = []
- 			dirs = set()
- 			dirs_ro = set()
- 			symlink_collisions = []
- 			destroot = self.settings['ROOT']
- 			totfiles = len(file_list) + len(symlink_list)
- 			previous = time.monotonic()
- 			progress_shown = False
- 			report_interval = 1.7  # seconds
- 			falign = len("%d" % totfiles)
- 			showMessage(_(" %s checking %d files for package collisions\n") % \
- 				(colorize("GOOD", "*"), totfiles))
- 			for i, (f, f_type) in enumerate(chain(
- 				((f, "reg") for f in file_list),
- 				((f, "sym") for f in symlink_list))):
- 				current = time.monotonic()
- 				if current - previous > report_interval:
- 					showMessage(_("%3d%% done,  %*d files remaining ...\n") %
- 							(i * 100 / totfiles, falign, totfiles - i))
- 					previous = current
- 					progress_shown = True
- 
- 				dest_path = normalize_path(os.path.join(destroot, f.lstrip(os.path.sep)))
- 
- 				# Relative path with symbolic links resolved only in parent directories
- 				real_relative_path = os.path.join(os.path.realpath(os.path.dirname(dest_path)),
- 					os.path.basename(dest_path))[len(destroot):]
- 
- 				real_relative_paths.setdefault(real_relative_path, []).append(f.lstrip(os.path.sep))
- 
- 				parent = os.path.dirname(dest_path)
- 				if parent not in dirs:
- 					for x in iter_parents(parent):
- 						if x in dirs:
- 							break
- 						dirs.add(x)
- 						if os.path.isdir(x):
- 							if not os.access(x, os.W_OK):
- 								dirs_ro.add(x)
- 							break
- 
- 				try:
- 					dest_lstat = os.lstat(dest_path)
- 				except EnvironmentError as e:
- 					if e.errno == errno.ENOENT:
- 						del e
- 						continue
- 					elif e.errno == errno.ENOTDIR:
- 						del e
- 						# A non-directory is in a location where this package
- 						# expects to have a directory.
- 						dest_lstat = None
- 						parent_path = dest_path
- 						while len(parent_path) > len(destroot):
- 							parent_path = os.path.dirname(parent_path)
- 							try:
- 								dest_lstat = os.lstat(parent_path)
- 								break
- 							except EnvironmentError as e:
- 								if e.errno != errno.ENOTDIR:
- 									raise
- 								del e
- 						if not dest_lstat:
- 							raise AssertionError(
- 								"unable to find non-directory " + \
- 								"parent for '%s'" % dest_path)
- 						dest_path = parent_path
- 						f = os.path.sep + dest_path[len(destroot):]
- 						if f in collisions:
- 							continue
- 					else:
- 						raise
- 				if f[0] != "/":
- 					f = "/" + f
- 
- 				if stat.S_ISDIR(dest_lstat.st_mode):
- 					if f_type == "sym":
- 						# This case is explicitly banned
- 						# by PMS (see bug #326685).
- 						symlink_collisions.append(f)
- 						collisions.append(f)
- 						continue
- 
- 				plibs = plib_inodes.get((dest_lstat.st_dev, dest_lstat.st_ino))
- 				if plibs:
- 					for path in plibs:
- 						cpv = plib_cpv_map[path]
- 						paths = plib_collisions.get(cpv)
- 						if paths is None:
- 							paths = set()
- 							plib_collisions[cpv] = paths
- 						paths.add(path)
- 					# The current package will assume ownership and the
- 					# libraries will be unregistered, so exclude this
- 					# path from the normal collisions.
- 					continue
- 
- 				isowned = False
- 				full_path = os.path.join(destroot, f.lstrip(os.path.sep))
- 				for ver in mypkglist:
- 					if ver.isowner(f):
- 						isowned = True
- 						break
- 				if not isowned and self.isprotected(full_path):
- 					isowned = True
- 				if not isowned:
- 					f_match = full_path[len(self._eroot)-1:]
- 					stopmerge = True
- 					for pattern in collision_ignore:
- 						if fnmatch.fnmatch(f_match, pattern):
- 							stopmerge = False
- 							break
- 					if stopmerge:
- 						collisions.append(f)
- 
- 			internal_collisions = {}
- 			for real_relative_path, files in real_relative_paths.items():
- 				# Detect internal collisions between non-identical files.
- 				if len(files) >= 2:
- 					files.sort()
- 					for i in range(len(files) - 1):
- 						file1 = normalize_path(os.path.join(srcroot, files[i]))
- 						file2 = normalize_path(os.path.join(srcroot, files[i+1]))
- 						# Compare files, ignoring differences in times.
- 						differences = compare_files(file1, file2, skipped_types=("atime", "mtime", "ctime"))
- 						if differences:
- 							internal_collisions.setdefault(real_relative_path, {})[(files[i], files[i+1])] = differences
- 
- 			if progress_shown:
- 				showMessage(_("100% done\n"))
- 
- 			return collisions, internal_collisions, dirs_ro, symlink_collisions, plib_collisions
- 
- 	def _lstat_inode_map(self, path_iter):
- 		"""
- 		Use lstat to create a map of the form:
- 		  {(st_dev, st_ino) : set([path1, path2, ...])}
- 		Multiple paths may reference the same inode due to hardlinks.
- 		All lstat() calls are relative to self.myroot.
- 		"""
- 
- 		os = _os_merge
- 
- 		root = self.settings['ROOT']
- 		inode_map = {}
- 		for f in path_iter:
- 			path = os.path.join(root, f.lstrip(os.sep))
- 			try:
- 				st = os.lstat(path)
- 			except OSError as e:
- 				if e.errno not in (errno.ENOENT, errno.ENOTDIR):
- 					raise
- 				del e
- 				continue
- 			key = (st.st_dev, st.st_ino)
- 			paths = inode_map.get(key)
- 			if paths is None:
- 				paths = set()
- 				inode_map[key] = paths
- 			paths.add(f)
- 		return inode_map
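For illustration only (not part of this diff): a standalone sketch of the grouping that _lstat_inode_map performs; the paths and stat values in the trailing comment are invented.

    import os

    def sketch_inode_map(paths, root="/"):
        inode_map = {}
        for f in paths:
            try:
                st = os.lstat(os.path.join(root, f.lstrip(os.sep)))
            except OSError:
                continue  # missing or unreadable paths are simply skipped
            inode_map.setdefault((st.st_dev, st.st_ino), set()).add(f)
        return inode_map

    # Two hardlinked paths collapse into a single entry, e.g. (values invented):
    # {(2049, 131074): {"/usr/lib/libfoo.so.1", "/usr/lib/libfoo.so.1.2.3"}}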
- 
- 	def _security_check(self, installed_instances):
- 		if not installed_instances:
- 			return 0
- 
- 		os = _os_merge
- 
- 		showMessage = self._display_merge
- 
- 		file_paths = set()
- 		for dblnk in installed_instances:
- 			file_paths.update(dblnk.getcontents())
- 		inode_map = {}
- 		real_paths = set()
- 		for i, path in enumerate(file_paths):
- 
- 			if os is _os_merge:
- 				try:
- 					_unicode_encode(path,
- 						encoding=_encodings['merge'], errors='strict')
- 				except UnicodeEncodeError:
- 					# The package appears to have been merged with a
- 					# different value of sys.getfilesystemencoding(),
- 					# so fall back to utf_8 if appropriate.
- 					try:
- 						_unicode_encode(path,
- 							encoding=_encodings['fs'], errors='strict')
- 					except UnicodeEncodeError:
- 						pass
- 					else:
- 						os = portage.os
- 
- 			try:
- 				s = os.lstat(path)
- 			except OSError as e:
- 				if e.errno not in (errno.ENOENT, errno.ENOTDIR):
- 					raise
- 				del e
- 				continue
- 			if not stat.S_ISREG(s.st_mode):
- 				continue
- 			path = os.path.realpath(path)
- 			if path in real_paths:
- 				continue
- 			real_paths.add(path)
- 			if s.st_nlink > 1 and \
- 				s.st_mode & (stat.S_ISUID | stat.S_ISGID):
- 				k = (s.st_dev, s.st_ino)
- 				inode_map.setdefault(k, []).append((path, s))
- 		suspicious_hardlinks = []
- 		for path_list in inode_map.values():
- 			path, s = path_list[0]
- 			if len(path_list) == s.st_nlink:
- 				# All hardlinks seem to be owned by this package.
- 				continue
- 			suspicious_hardlinks.append(path_list)
- 		if not suspicious_hardlinks:
- 			return 0
- 
- 		msg = []
- 		msg.append(_("suid/sgid file(s) "
- 			"with suspicious hardlink(s):"))
- 		msg.append("")
- 		for path_list in suspicious_hardlinks:
- 			for path, s in path_list:
- 				msg.append("\t%s" % path)
- 		msg.append("")
- 		msg.append(_("See the Gentoo Security Handbook "
- 			"guide for advice on how to proceed."))
- 
- 		self._eerror("preinst", msg)
- 
- 		return 1
- 
- 	def _eqawarn(self, phase, lines):
- 		self._elog("eqawarn", phase, lines)
- 
- 	def _eerror(self, phase, lines):
- 		self._elog("eerror", phase, lines)
- 
- 	def _elog(self, funcname, phase, lines):
- 		func = getattr(portage.elog.messages, funcname)
- 		if self._scheduler is None:
- 			for l in lines:
- 				func(l, phase=phase, key=self.mycpv)
- 		else:
- 			background = self.settings.get("PORTAGE_BACKGROUND") == "1"
- 			log_path = None
- 			if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
- 				log_path = self.settings.get("PORTAGE_LOG_FILE")
- 			out = io.StringIO()
- 			for line in lines:
- 				func(line, phase=phase, key=self.mycpv, out=out)
- 			msg = out.getvalue()
- 			self._scheduler.output(msg,
- 				background=background, log_path=log_path)
- 
- 	def _elog_process(self, phasefilter=None):
- 		cpv = self.mycpv
- 		if self._pipe is None:
- 			elog_process(cpv, self.settings, phasefilter=phasefilter)
- 		else:
- 			logdir = os.path.join(self.settings["T"], "logging")
- 			ebuild_logentries = collect_ebuild_messages(logdir)
- 			# phasefilter is irrelevant for the above collect_ebuild_messages
- 			# call, since this package instance has a private logdir. However,
- 			# it may be relevant for the following collect_messages call.
- 			py_logentries = collect_messages(key=cpv, phasefilter=phasefilter).get(cpv, {})
- 			logentries = _merge_logentries(py_logentries, ebuild_logentries)
- 			funcnames = {
- 				"INFO": "einfo",
- 				"LOG": "elog",
- 				"WARN": "ewarn",
- 				"QA": "eqawarn",
- 				"ERROR": "eerror"
- 			}
- 			str_buffer = []
- 			for phase, messages in logentries.items():
- 				for key, lines in messages:
- 					funcname = funcnames[key]
- 					if isinstance(lines, str):
- 						lines = [lines]
- 					for line in lines:
- 						for line in line.split('\n'):
- 							fields = (funcname, phase, cpv, line)
- 							str_buffer.append(' '.join(fields))
- 							str_buffer.append('\n')
- 			if str_buffer:
- 				str_buffer = _unicode_encode(''.join(str_buffer))
- 				while str_buffer:
- 					str_buffer = str_buffer[os.write(self._pipe, str_buffer):]
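For reference, each record written to self._pipe above is a single space-joined line of the form "funcname phase cpv message". A hypothetical example (package name and message invented):

    fields = ("eerror", "postinst", "app-misc/example-1.0", "Failed postinst: 1")
    print(" ".join(fields))
    # -> eerror postinst app-misc/example-1.0 Failed postinst: 1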
- 
- 	def _emerge_log(self, msg):
- 		emergelog(False, msg)
- 
- 	def treewalk(self, srcroot, destroot, inforoot, myebuild, cleanup=0,
- 		mydbapi=None, prev_mtimes=None, counter=None):
- 		"""
- 
- 		This function does the following:
- 
- 		calls doebuild(mydo=instprep)
- 		calls get_ro_checker to retrieve a function for checking whether Portage
- 		will write to a read-only filesystem, then runs it against the directory list
- 		calls self._preserve_libs if FEATURES=preserve-libs
- 		calls self._collision_protect if FEATURES=collision-protect
- 		calls doebuild(mydo=pkg_preinst)
- 		Merges the package to the livefs
- 		unmerges old version (if required)
- 		calls doebuild(mydo=pkg_postinst)
- 		calls env_update
- 
- 		@param srcroot: Typically this is ${D}
- 		@type srcroot: String (Path)
- 		@param destroot: ignored, self.settings['ROOT'] is used instead
- 		@type destroot: String (Path)
- 		@param inforoot: root of the vardb entry ?
- 		@type inforoot: String (Path)
- 		@param myebuild: path to the ebuild that we are processing
- 		@type myebuild: String (Path)
- 		@param mydbapi: dbapi which is handed to doebuild.
- 		@type mydbapi: portdbapi instance
- 		@param prev_mtimes: { Filename:mtime } mapping for env_update
- 		@type prev_mtimes: Dictionary
- 		@rtype: Integer
- 		@return:
- 		1. 0 on success
- 		2. 1 on failure
- 
- 		secondhand is a list of symlinks that have been skipped due to their target
- 		not existing; we will merge these symlinks at a later time.
- 		"""
- 
- 		os = _os_merge
- 
- 		srcroot = _unicode_decode(srcroot,
- 			encoding=_encodings['content'], errors='strict')
- 		destroot = self.settings['ROOT']
- 		inforoot = _unicode_decode(inforoot,
- 			encoding=_encodings['content'], errors='strict')
- 		myebuild = _unicode_decode(myebuild,
- 			encoding=_encodings['content'], errors='strict')
- 
- 		showMessage = self._display_merge
- 		srcroot = normalize_path(srcroot).rstrip(os.path.sep) + os.path.sep
- 
- 		if not os.path.isdir(srcroot):
- 			showMessage(_("!!! Directory Not Found: D='%s'\n") % srcroot,
- 				level=logging.ERROR, noiselevel=-1)
- 			return 1
- 
- 		# run instprep internal phase
- 		doebuild_environment(myebuild, "instprep",
- 			settings=self.settings, db=mydbapi)
- 		phase = EbuildPhase(background=False, phase="instprep",
- 			scheduler=self._scheduler, settings=self.settings)
- 		phase.start()
- 		if phase.wait() != os.EX_OK:
- 			showMessage(_("!!! instprep failed\n"),
- 				level=logging.ERROR, noiselevel=-1)
- 			return 1
- 
- 		is_binpkg = self.settings.get("EMERGE_FROM") == "binary"
- 		slot = ''
- 		for var_name in ('CHOST', 'SLOT'):
- 			try:
- 				with io.open(_unicode_encode(
- 					os.path.join(inforoot, var_name),
- 					encoding=_encodings['fs'], errors='strict'),
- 					mode='r', encoding=_encodings['repo.content'],
- 					errors='replace') as f:
- 					val = f.readline().strip()
- 			except EnvironmentError as e:
- 				if e.errno != errno.ENOENT:
- 					raise
- 				del e
- 				val = ''
- 
- 			if var_name == 'SLOT':
- 				slot = val
- 
- 				if not slot.strip():
- 					slot = self.settings.get(var_name, '')
- 					if not slot.strip():
- 						showMessage(_("!!! SLOT is undefined\n"),
- 							level=logging.ERROR, noiselevel=-1)
- 						return 1
- 					write_atomic(os.path.join(inforoot, var_name), slot + '\n')
- 
- 			# This check only applies when built from source, since
- 			# inforoot values are written just after src_install.
- 			if not is_binpkg and val != self.settings.get(var_name, ''):
- 				self._eqawarn('preinst',
- 					[_("QA Notice: Expected %(var_name)s='%(expected_value)s', got '%(actual_value)s'\n") % \
- 					{"var_name":var_name, "expected_value":self.settings.get(var_name, ''), "actual_value":val}])
- 
- 		def eerror(lines):
- 			self._eerror("preinst", lines)
- 
- 		if not os.path.exists(self.dbcatdir):
- 			ensure_dirs(self.dbcatdir)
- 
- 		# NOTE: We use SLOT obtained from the inforoot
- 		#	directory, in order to support USE=multislot.
- 		# Use _pkg_str to discard the sub-slot part if necessary.
- 		slot = _pkg_str(self.mycpv, slot=slot).slot
- 		cp = self.mysplit[0]
- 		slot_atom = "%s:%s" % (cp, slot)
- 
- 		self.lockdb()
- 		try:
- 			# filter any old-style virtual matches
- 			slot_matches = [cpv for cpv in self.vartree.dbapi.match(slot_atom)
- 				if cpv_getkey(cpv) == cp]
- 
- 			if self.mycpv not in slot_matches and \
- 				self.vartree.dbapi.cpv_exists(self.mycpv):
- 				# handle multislot or unapplied slotmove
- 				slot_matches.append(self.mycpv)
- 
- 			others_in_slot = []
- 			for cur_cpv in slot_matches:
- 				# Clone the config in case one of these has to be unmerged,
- 				# since we need it to have private ${T} etc... for things
- 				# like elog.
- 				settings_clone = portage.config(clone=self.settings)
- 				# This reset ensures that there is no unintended leakage
- 				# of variables which should not be shared.
- 				settings_clone.reset()
- 				settings_clone.setcpv(cur_cpv, mydb=self.vartree.dbapi)
- 				if self._preserve_libs and "preserve-libs" in \
- 					settings_clone["PORTAGE_RESTRICT"].split():
- 					self._preserve_libs = False
- 				others_in_slot.append(dblink(self.cat, catsplit(cur_cpv)[1],
- 					settings=settings_clone,
- 					vartree=self.vartree, treetype="vartree",
- 					scheduler=self._scheduler, pipe=self._pipe))
- 		finally:
- 			self.unlockdb()
- 
- 		# If any instance has RESTRICT=preserve-libs, then
- 		# restrict it for all instances.
- 		if not self._preserve_libs:
- 			for dblnk in others_in_slot:
- 				dblnk._preserve_libs = False
- 
- 		retval = self._security_check(others_in_slot)
- 		if retval:
- 			return retval
- 
- 		if slot_matches:
- 			# Used by self.isprotected().
- 			max_dblnk = None
- 			max_counter = -1
- 			for dblnk in others_in_slot:
- 				cur_counter = self.vartree.dbapi.cpv_counter(dblnk.mycpv)
- 				if cur_counter > max_counter:
- 					max_counter = cur_counter
- 					max_dblnk = dblnk
- 			self._installed_instance = max_dblnk
- 
- 		# Apply INSTALL_MASK before collision-protect, since it may
- 		# be useful to avoid collisions in some scenarios.
- 		# We cannot detect if this is needed or not here as INSTALL_MASK can be
- 		# modified by bashrc files.
- 		phase = MiscFunctionsProcess(background=False,
- 			commands=["preinst_mask"], phase="preinst",
- 			scheduler=self._scheduler, settings=self.settings)
- 		phase.start()
- 		phase.wait()
- 		try:
- 			with io.open(_unicode_encode(os.path.join(inforoot, "INSTALL_MASK"),
- 				encoding=_encodings['fs'], errors='strict'),
- 				mode='r', encoding=_encodings['repo.content'],
- 				errors='replace') as f:
- 				install_mask = InstallMask(f.read())
- 		except EnvironmentError:
- 			install_mask = None
- 
- 		if install_mask:
- 			install_mask_dir(self.settings["ED"], install_mask)
- 			if any(x in self.settings.features for x in ('nodoc', 'noman', 'noinfo')):
- 				try:
- 					os.rmdir(os.path.join(self.settings["ED"], 'usr', 'share'))
- 				except OSError:
- 					pass
- 
- 		# We check for unicode encoding issues after src_install. However,
- 		# the check must be repeated here for binary packages (it's
- 		# inexpensive since we call os.walk() here anyway).
- 		unicode_errors = []
- 		line_ending_re = re.compile('[\n\r]')
- 		srcroot_len = len(srcroot)
- 		ed_len = len(self.settings["ED"])
- 		eprefix_len = len(self.settings["EPREFIX"])
- 
- 		while True:
- 
- 			unicode_error = False
- 			eagain_error = False
- 
- 			filelist = []
- 			linklist = []
- 			paths_with_newlines = []
- 			def onerror(e):
- 				raise
- 			walk_iter = os.walk(srcroot, onerror=onerror)
- 			while True:
- 				try:
- 					parent, dirs, files = next(walk_iter)
- 				except StopIteration:
- 					break
- 				except OSError as e:
- 					if e.errno != errno.EAGAIN:
- 						raise
- 					# Observed with PyPy 1.8.
- 					eagain_error = True
- 					break
- 
- 				try:
- 					parent = _unicode_decode(parent,
- 						encoding=_encodings['merge'], errors='strict')
- 				except UnicodeDecodeError:
- 					new_parent = _unicode_decode(parent,
- 						encoding=_encodings['merge'], errors='replace')
- 					new_parent = _unicode_encode(new_parent,
- 						encoding='ascii', errors='backslashreplace')
- 					new_parent = _unicode_decode(new_parent,
- 						encoding=_encodings['merge'], errors='replace')
- 					os.rename(parent, new_parent)
- 					unicode_error = True
- 					unicode_errors.append(new_parent[ed_len:])
- 					break
- 
- 				for fname in files:
- 					try:
- 						fname = _unicode_decode(fname,
- 							encoding=_encodings['merge'], errors='strict')
- 					except UnicodeDecodeError:
- 						fpath = portage._os.path.join(
- 							parent.encode(_encodings['merge']), fname)
- 						new_fname = _unicode_decode(fname,
- 							encoding=_encodings['merge'], errors='replace')
- 						new_fname = _unicode_encode(new_fname,
- 							encoding='ascii', errors='backslashreplace')
- 						new_fname = _unicode_decode(new_fname,
- 							encoding=_encodings['merge'], errors='replace')
- 						new_fpath = os.path.join(parent, new_fname)
- 						os.rename(fpath, new_fpath)
- 						unicode_error = True
- 						unicode_errors.append(new_fpath[ed_len:])
- 						fname = new_fname
- 						fpath = new_fpath
- 					else:
- 						fpath = os.path.join(parent, fname)
- 
- 					relative_path = fpath[srcroot_len:]
- 
- 					if line_ending_re.search(relative_path) is not None:
- 						paths_with_newlines.append(relative_path)
- 
- 					file_mode = os.lstat(fpath).st_mode
- 					if stat.S_ISREG(file_mode):
- 						filelist.append(relative_path)
- 					elif stat.S_ISLNK(file_mode):
- 						# Note: os.walk puts symlinks to directories in the "dirs"
- 						# list and it does not traverse them since that could lead
- 						# to an infinite recursion loop.
- 						linklist.append(relative_path)
- 
- 						myto = _unicode_decode(
- 							_os.readlink(_unicode_encode(fpath,
- 							encoding=_encodings['merge'], errors='strict')),
- 							encoding=_encodings['merge'], errors='replace')
- 						if line_ending_re.search(myto) is not None:
- 							paths_with_newlines.append(relative_path)
- 
- 				if unicode_error:
- 					break
- 
- 			if not (unicode_error or eagain_error):
- 				break
- 
- 		if unicode_errors:
- 			self._elog("eqawarn", "preinst",
- 				_merge_unicode_error(unicode_errors))
- 
- 		if paths_with_newlines:
- 			msg = []
- 			msg.append(_("This package installs one or more files containing line ending characters:"))
- 			msg.append("")
- 			paths_with_newlines.sort()
- 			for f in paths_with_newlines:
- 				msg.append("\t/%s" % (f.replace("\n", "\\n").replace("\r", "\\r")))
- 			msg.append("")
- 			msg.append(_("package %s NOT merged") % self.mycpv)
- 			msg.append("")
- 			eerror(msg)
- 			return 1
- 
- 		# If there are no files to merge, and an installed package in the same
- 		# slot has files, it probably means that something went wrong.
- 		if self.settings.get("PORTAGE_PACKAGE_EMPTY_ABORT") == "1" and \
- 			not filelist and not linklist and others_in_slot:
- 			installed_files = None
- 			for other_dblink in others_in_slot:
- 				installed_files = other_dblink.getcontents()
- 				if not installed_files:
- 					continue
- 				from textwrap import wrap
- 				wrap_width = 72
- 				msg = []
- 				d = {
- 					"new_cpv":self.mycpv,
- 					"old_cpv":other_dblink.mycpv
- 				}
- 				msg.extend(wrap(_("The '%(new_cpv)s' package will not install "
- 					"any files, but the currently installed '%(old_cpv)s'"
- 					" package has the following files: ") % d, wrap_width))
- 				msg.append("")
- 				msg.extend(sorted(installed_files))
- 				msg.append("")
- 				msg.append(_("package %s NOT merged") % self.mycpv)
- 				msg.append("")
- 				msg.extend(wrap(
- 					_("Manually run `emerge --unmerge =%s` if you "
- 					"really want to remove the above files. Set "
- 					"PORTAGE_PACKAGE_EMPTY_ABORT=\"0\" in "
- 					"/etc/portage/make.conf if you do not want to "
- 					"abort in cases like this.") % other_dblink.mycpv,
- 					wrap_width))
- 				eerror(msg)
- 			if installed_files:
- 				return 1
- 
- 		# Make sure the ebuild environment is initialized and that ${T}/elog
- 		# exists for logging of collision-protect eerror messages.
- 		if myebuild is None:
- 			myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
- 		doebuild_environment(myebuild, "preinst",
- 			settings=self.settings, db=mydbapi)
- 		self.settings["REPLACING_VERSIONS"] = " ".join(
- 			[portage.versions.cpv_getversion(other.mycpv)
- 			for other in others_in_slot])
- 		prepare_build_dirs(settings=self.settings, cleanup=cleanup)
- 
- 		# check for package collisions
- 		blockers = []
- 		for blocker in self._blockers or []:
- 			blocker = self.vartree.dbapi._dblink(blocker.cpv)
- 			# It may have been unmerged before lock(s)
- 			# were acquired.
- 			if blocker.exists():
- 				blockers.append(blocker)
- 
- 		collisions, internal_collisions, dirs_ro, symlink_collisions, plib_collisions = \
- 			self._collision_protect(srcroot, destroot,
- 			others_in_slot + blockers, filelist, linklist)
- 
- 		# Check for read-only filesystems.
- 		ro_checker = get_ro_checker()
- 		rofilesystems = ro_checker(dirs_ro)
- 
- 		if rofilesystems:
- 			msg = _("One or more files to be installed by this package are "
- 				"located on read-only filesystems. "
- 				"Please mount the following filesystems as read-write "
- 				"and retry.")
- 			msg = textwrap.wrap(msg, 70)
- 			msg.append("")
- 			for f in rofilesystems:
- 				msg.append("\t%s" % f)
- 			msg.append("")
- 			self._elog("eerror", "preinst", msg)
- 
- 			msg = _("Package '%s' NOT merged due to read-only file systems.") % \
- 				self.settings.mycpv
- 			msg += _(" If necessary, refer to your elog "
- 				"messages for the whole content of the above message.")
- 			msg = textwrap.wrap(msg, 70)
- 			eerror(msg)
- 			return 1
- 
- 		if internal_collisions:
- 			msg = _("Package '%s' has internal collisions between non-identical files "
- 				"(located in separate directories in the installation image (${D}) "
- 				"corresponding to merged directories in the target "
- 				"filesystem (${ROOT})):") % self.settings.mycpv
- 			msg = textwrap.wrap(msg, 70)
- 			msg.append("")
- 			for k, v in sorted(internal_collisions.items(), key=operator.itemgetter(0)):
- 				msg.append("\t%s" % os.path.join(destroot, k.lstrip(os.path.sep)))
- 				for (file1, file2), differences in sorted(v.items()):
- 					msg.append("\t\t%s" % os.path.join(destroot, file1.lstrip(os.path.sep)))
- 					msg.append("\t\t%s" % os.path.join(destroot, file2.lstrip(os.path.sep)))
- 					msg.append("\t\t\tDifferences: %s" % ", ".join(differences))
- 					msg.append("")
- 			self._elog("eerror", "preinst", msg)
- 
- 			msg = _("Package '%s' NOT merged due to internal collisions "
- 				"between non-identical files.") % self.settings.mycpv
- 			msg += _(" If necessary, refer to your elog messages for the whole "
- 				"content of the above message.")
- 			eerror(textwrap.wrap(msg, 70))
- 			return 1
- 
- 		if symlink_collisions:
- 			# Symlink collisions need to be distinguished from other types
- 			# of collisions, in order to avoid confusion (see bug #409359).
- 			msg = _("Package '%s' has one or more collisions "
- 				"between symlinks and directories, which is explicitly "
- 				"forbidden by PMS section 13.4 (see bug #326685):") % \
- 				(self.settings.mycpv,)
- 			msg = textwrap.wrap(msg, 70)
- 			msg.append("")
- 			for f in symlink_collisions:
- 				msg.append("\t%s" % os.path.join(destroot,
- 					f.lstrip(os.path.sep)))
- 			msg.append("")
- 			self._elog("eerror", "preinst", msg)
- 
- 		if collisions:
- 			collision_protect = "collision-protect" in self.settings.features
- 			protect_owned = "protect-owned" in self.settings.features
- 			msg = _("This package will overwrite one or more files that"
- 			" may belong to other packages (see list below).")
- 			if not (collision_protect or protect_owned):
- 				msg += _(" Add either \"collision-protect\" or"
- 				" \"protect-owned\" to FEATURES in"
- 				" make.conf if you would like the merge to abort"
- 				" in cases like this. See the make.conf man page for"
- 				" more information about these features.")
- 			if self.settings.get("PORTAGE_QUIET") != "1":
- 				msg += _(" You can use a command such as"
- 				" `portageq owners / <filename>` to identify the"
- 				" installed package that owns a file. If portageq"
- 				" reports that only one package owns a file then do NOT"
- 				" file a bug report. A bug report is only useful if it"
- 				" identifies at least two or more packages that are known"
- 				" to install the same file(s)."
- 				" If a collision occurs and you"
- 				" can not explain where the file came from then you"
- 				" should simply ignore the collision since there is not"
- 				" enough information to determine if a real problem"
- 				" exists. Please do NOT file a bug report at"
- 				" https://bugs.gentoo.org/ unless you report exactly which"
- 				" two packages install the same file(s). See"
- 				" https://wiki.gentoo.org/wiki/Knowledge_Base:Blockers"
- 				" for tips on how to solve the problem. And once again,"
- 				" please do NOT file a bug report unless you have"
- 				" completely understood the above message.")
- 
- 			self.settings["EBUILD_PHASE"] = "preinst"
- 			from textwrap import wrap
- 			msg = wrap(msg, 70)
- 			if collision_protect:
- 				msg.append("")
- 				msg.append(_("package %s NOT merged") % self.settings.mycpv)
- 			msg.append("")
- 			msg.append(_("Detected file collision(s):"))
- 			msg.append("")
- 
- 			for f in collisions:
- 				msg.append("\t%s" % \
- 					os.path.join(destroot, f.lstrip(os.path.sep)))
- 
- 			eerror(msg)
- 
- 			owners = None
- 			if collision_protect or protect_owned or symlink_collisions:
- 				msg = []
- 				msg.append("")
- 				msg.append(_("Searching all installed"
- 					" packages for file collisions..."))
- 				msg.append("")
- 				msg.append(_("Press Ctrl-C to Stop"))
- 				msg.append("")
- 				eerror(msg)
- 
- 				if len(collisions) > 20:
- 					# get_owners is slow for large numbers of files, so
- 					# don't look them all up.
- 					collisions = collisions[:20]
- 
- 				pkg_info_strs = {}
- 				self.lockdb()
- 				try:
- 					owners = self.vartree.dbapi._owners.get_owners(collisions)
- 					self.vartree.dbapi.flush_cache()
- 
- 					for pkg in owners:
- 						pkg = self.vartree.dbapi._pkg_str(pkg.mycpv, None)
- 						pkg_info_str = "%s%s%s" % (pkg,
- 							_slot_separator, pkg.slot)
- 						if pkg.repo != _unknown_repo:
- 							pkg_info_str += "%s%s" % (_repo_separator,
- 								pkg.repo)
- 						pkg_info_strs[pkg] = pkg_info_str
- 
- 				finally:
- 					self.unlockdb()
- 
- 				for pkg, owned_files in owners.items():
- 					msg = []
- 					msg.append(pkg_info_strs[pkg.mycpv])
- 					for f in sorted(owned_files):
- 						msg.append("\t%s" % os.path.join(destroot,
- 							f.lstrip(os.path.sep)))
- 					msg.append("")
- 					eerror(msg)
- 
- 				if not owners:
- 					eerror([_("None of the installed"
- 						" packages claim the file(s)."), ""])
- 
- 			symlink_abort_msg = _("Package '%s' NOT merged since it has "
- 				"one or more collisions between symlinks and directories, "
- 				"which is explicitly forbidden by PMS section 13.4 "
- 				"(see bug #326685).")
- 
- 			# The explanation about the collision and how to solve
- 			# it may not be visible via a scrollback buffer, especially
- 			# if the number of file collisions is large. Therefore,
- 			# show a summary at the end.
- 			abort = False
- 			if symlink_collisions:
- 				abort = True
- 				msg = symlink_abort_msg % (self.settings.mycpv,)
- 			elif collision_protect:
- 				abort = True
- 				msg = _("Package '%s' NOT merged due to file collisions.") % \
- 					self.settings.mycpv
- 			elif protect_owned and owners:
- 				abort = True
- 				msg = _("Package '%s' NOT merged due to file collisions.") % \
- 					self.settings.mycpv
- 			else:
- 				msg = _("Package '%s' merged despite file collisions.") % \
- 					self.settings.mycpv
- 			msg += _(" If necessary, refer to your elog "
- 				"messages for the whole content of the above message.")
- 			eerror(wrap(msg, 70))
- 
- 			if abort:
- 				return 1
- 
- 		# The merge process may move files out of the image directory,
- 		# which causes invalidation of the .installed flag.
- 		try:
- 			os.unlink(os.path.join(
- 				os.path.dirname(normalize_path(srcroot)), ".installed"))
- 		except OSError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 			del e
- 
- 		self.dbdir = self.dbtmpdir
- 		self.delete()
- 		ensure_dirs(self.dbtmpdir)
- 
- 		downgrade = False
- 		if self._installed_instance is not None and \
- 			vercmp(self.mycpv.version,
- 			self._installed_instance.mycpv.version) < 0:
- 			downgrade = True
- 
- 		if self._installed_instance is not None:
- 			rval = self._pre_merge_backup(self._installed_instance, downgrade)
- 			if rval != os.EX_OK:
- 				showMessage(_("!!! FAILED preinst: ") +
- 					"quickpkg: %s\n" % rval,
- 					level=logging.ERROR, noiselevel=-1)
- 				return rval
- 
- 		# run preinst script
- 		showMessage(_(">>> Merging %(cpv)s to %(destroot)s\n") % \
- 			{"cpv":self.mycpv, "destroot":destroot})
- 		phase = EbuildPhase(background=False, phase="preinst",
- 			scheduler=self._scheduler, settings=self.settings)
- 		phase.start()
- 		a = phase.wait()
- 
- 		# XXX: Decide how to handle failures here.
- 		if a != os.EX_OK:
- 			showMessage(_("!!! FAILED preinst: ")+str(a)+"\n",
- 				level=logging.ERROR, noiselevel=-1)
- 			return a
- 
- 		# copy "info" files (like SLOT, CFLAGS, etc.) into the database
- 		for x in os.listdir(inforoot):
- 			self.copyfile(inforoot+"/"+x)
- 
- 		# write local package counter for recording
- 		if counter is None:
- 			counter = self.vartree.dbapi.counter_tick(mycpv=self.mycpv)
- 		with io.open(_unicode_encode(os.path.join(self.dbtmpdir, 'COUNTER'),
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='w', encoding=_encodings['repo.content'],
- 			errors='backslashreplace') as f:
- 			f.write("%s" % counter)
- 
- 		self.updateprotect()
- 
- 		#if we have a file containing previously-merged config file md5sums, grab it.
- 		self.vartree.dbapi._fs_lock()
- 		try:
- 			# This prunes any libraries from the registry that no longer
- 			# exist on disk, in case they have been manually removed.
- 			# This has to be done prior to merge, since after merge it
- 			# is non-trivial to distinguish these files from files
- 			# that have just been merged.
- 			plib_registry = self.vartree.dbapi._plib_registry
- 			if plib_registry:
- 				plib_registry.lock()
- 				try:
- 					plib_registry.load()
- 					plib_registry.store()
- 				finally:
- 					plib_registry.unlock()
- 
- 			# Always behave like --noconfmem is enabled for downgrades
- 			# so that people who don't know about this option are less
- 			# likely to get confused when doing upgrade/downgrade cycles.
- 			cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
- 			if "NOCONFMEM" in self.settings or downgrade:
- 				cfgfiledict["IGNORE"] = 1
- 			else:
- 				cfgfiledict["IGNORE"] = 0
- 
- 			rval = self._merge_contents(srcroot, destroot, cfgfiledict)
- 			if rval != os.EX_OK:
- 				return rval
- 		finally:
- 			self.vartree.dbapi._fs_unlock()
- 
- 		# These caches are populated during collision-protect and the data
- 		# they contain is now invalid. It's very important to invalidate
- 		# the contents_inodes cache so that FEATURES=unmerge-orphans
- 		# doesn't unmerge anything that belongs to this package that has
- 		# just been merged.
- 		for dblnk in others_in_slot:
- 			dblnk._clear_contents_cache()
- 		self._clear_contents_cache()
- 
- 		linkmap = self.vartree.dbapi._linkmap
- 		plib_registry = self.vartree.dbapi._plib_registry
- 		# We initialize preserve_paths to an empty set rather
- 		# than None here because it plays an important role
- 		# in prune_plib_registry logic by serving to indicate
- 		# that we have a replacement for a package that's
- 		# being unmerged.
- 
- 		preserve_paths = set()
- 		needed = None
- 		if not (self._linkmap_broken or linkmap is None or
- 			plib_registry is None):
- 			self.vartree.dbapi._fs_lock()
- 			plib_registry.lock()
- 			try:
- 				plib_registry.load()
- 				needed = os.path.join(inforoot, linkmap._needed_aux_key)
- 				self._linkmap_rebuild(include_file=needed)
- 
- 				# Preserve old libs if they are still in use
- 				# TODO: Handle cases where the previous instance
- 				# has already been uninstalled but it still has some
- 				# preserved libraries in the registry that we may
- 				# want to preserve here.
- 				preserve_paths = self._find_libs_to_preserve()
- 			finally:
- 				plib_registry.unlock()
- 				self.vartree.dbapi._fs_unlock()
- 
- 			if preserve_paths:
- 				self._add_preserve_libs_to_contents(preserve_paths)
- 
- 		# If portage is reinstalling itself, remove the old
- 		# version now since we want to use the temporary
- 		# PORTAGE_BIN_PATH that will be removed when we return.
- 		reinstall_self = False
- 		if self.myroot == "/" and \
- 			match_from_list(PORTAGE_PACKAGE_ATOM, [self.mycpv]):
- 			reinstall_self = True
- 
- 		emerge_log = self._emerge_log
- 
- 		# If we have any preserved libraries then autoclean
- 		# is forced so that preserve-libs logic doesn't have
- 		# to account for the additional complexity of the
- 		# AUTOCLEAN=no mode.
- 		autoclean = self.settings.get("AUTOCLEAN", "yes") == "yes" \
- 			or preserve_paths
- 
- 		if autoclean:
- 			emerge_log(_(" >>> AUTOCLEAN: %s") % (slot_atom,))
- 
- 		others_in_slot.append(self)  # self has just been merged
- 		for dblnk in list(others_in_slot):
- 			if dblnk is self:
- 				continue
- 			if not (autoclean or dblnk.mycpv == self.mycpv or reinstall_self):
- 				continue
- 			showMessage(_(">>> Safely unmerging already-installed instance...\n"))
- 			emerge_log(_(" === Unmerging... (%s)") % (dblnk.mycpv,))
- 			others_in_slot.remove(dblnk) # dblnk will unmerge itself now
- 			dblnk._linkmap_broken = self._linkmap_broken
- 			dblnk.settings["REPLACED_BY_VERSION"] = portage.versions.cpv_getversion(self.mycpv)
- 			dblnk.settings.backup_changes("REPLACED_BY_VERSION")
- 			unmerge_rval = dblnk.unmerge(ldpath_mtimes=prev_mtimes,
- 				others_in_slot=others_in_slot, needed=needed,
- 				preserve_paths=preserve_paths)
- 			dblnk.settings.pop("REPLACED_BY_VERSION", None)
- 
- 			if unmerge_rval == os.EX_OK:
- 				emerge_log(_(" >>> unmerge success: %s") % (dblnk.mycpv,))
- 			else:
- 				emerge_log(_(" !!! unmerge FAILURE: %s") % (dblnk.mycpv,))
- 
- 			self.lockdb()
- 			try:
- 				# TODO: Check status and abort if necessary.
- 				dblnk.delete()
- 			finally:
- 				self.unlockdb()
- 			showMessage(_(">>> Original instance of package unmerged safely.\n"))
- 
- 		if len(others_in_slot) > 1:
- 			showMessage(colorize("WARN", _("WARNING:"))
- 				+ _(" AUTOCLEAN is disabled.  This can cause serious"
- 				" problems due to overlapping packages.\n"),
- 				level=logging.WARN, noiselevel=-1)
- 
- 		# We hold both directory locks.
- 		self.dbdir = self.dbpkgdir
- 		self.lockdb()
- 		try:
- 			self.delete()
- 			_movefile(self.dbtmpdir, self.dbpkgdir, mysettings=self.settings)
- 			self._merged_path(self.dbpkgdir, os.lstat(self.dbpkgdir))
- 			self.vartree.dbapi._cache_delta.recordEvent(
- 				"add", self.mycpv, slot, counter)
- 		finally:
- 			self.unlockdb()
- 
- 		# Check for file collisions with blocking packages
- 		# and remove any colliding files from their CONTENTS
- 		# since they now belong to this package.
- 		self._clear_contents_cache()
- 		contents = self.getcontents()
- 		destroot_len = len(destroot) - 1
- 		self.lockdb()
- 		try:
- 			for blocker in blockers:
- 				self.vartree.dbapi.removeFromContents(blocker, iter(contents),
- 					relative_paths=False)
- 		finally:
- 			self.unlockdb()
- 
- 		plib_registry = self.vartree.dbapi._plib_registry
- 		if plib_registry:
- 			self.vartree.dbapi._fs_lock()
- 			plib_registry.lock()
- 			try:
- 				plib_registry.load()
- 
- 				if preserve_paths:
- 					# keep track of the libs we preserved
- 					plib_registry.register(self.mycpv, slot, counter,
- 						sorted(preserve_paths))
- 
- 				# Unregister any preserved libs that this package has overwritten
- 				# and update the contents of the packages that owned them.
- 				plib_dict = plib_registry.getPreservedLibs()
- 				for cpv, paths in plib_collisions.items():
- 					if cpv not in plib_dict:
- 						continue
- 					has_vdb_entry = False
- 					if cpv != self.mycpv:
- 						# If we've replaced another instance with the
- 						# same cpv then the vdb entry no longer belongs
- 						# to it, so we'll have to get the slot and counter
- 						# from plib_registry._data instead.
- 						self.vartree.dbapi.lock()
- 						try:
- 							try:
- 								slot = self.vartree.dbapi._pkg_str(cpv, None).slot
- 								counter = self.vartree.dbapi.cpv_counter(cpv)
- 							except (KeyError, InvalidData):
- 								pass
- 							else:
- 								has_vdb_entry = True
- 								self.vartree.dbapi.removeFromContents(
- 									cpv, paths)
- 						finally:
- 							self.vartree.dbapi.unlock()
- 
- 					if not has_vdb_entry:
- 						# It's possible for previously unmerged packages
- 						# to have preserved libs in the registry, so try
- 						# to retrieve the slot and counter from there.
- 						has_registry_entry = False
- 						for plib_cps, (plib_cpv, plib_counter, plib_paths) in \
- 							plib_registry._data.items():
- 							if plib_cpv != cpv:
- 								continue
- 							try:
- 								cp, slot = plib_cps.split(":", 1)
- 							except ValueError:
- 								continue
- 							counter = plib_counter
- 							has_registry_entry = True
- 							break
- 
- 						if not has_registry_entry:
- 							continue
- 
- 					remaining = [f for f in plib_dict[cpv] if f not in paths]
- 					plib_registry.register(cpv, slot, counter, remaining)
- 
- 				plib_registry.store()
- 			finally:
- 				plib_registry.unlock()
- 				self.vartree.dbapi._fs_unlock()
- 
- 		self.vartree.dbapi._add(self)
- 		contents = self.getcontents()
- 
- 		#do postinst script
- 		self.settings["PORTAGE_UPDATE_ENV"] = \
- 			os.path.join(self.dbpkgdir, "environment.bz2")
- 		self.settings.backup_changes("PORTAGE_UPDATE_ENV")
- 		try:
- 			phase = EbuildPhase(background=False, phase="postinst",
- 				scheduler=self._scheduler, settings=self.settings)
- 			phase.start()
- 			a = phase.wait()
- 			if a == os.EX_OK:
- 				showMessage(_(">>> %s merged.\n") % self.mycpv)
- 		finally:
- 			self.settings.pop("PORTAGE_UPDATE_ENV", None)
- 
- 		if a != os.EX_OK:
- 			# It's stupid to bail out here, so keep going regardless of
- 			# phase return code.
- 			self._postinst_failure = True
- 			self._elog("eerror", "postinst", [
- 				_("FAILED postinst: %s") % (a,),
- 			])
- 
- 		#update environment settings, library paths. DO NOT change symlinks.
- 		env_update(
- 			target_root=self.settings['ROOT'], prev_mtimes=prev_mtimes,
- 			contents=contents, env=self.settings,
- 			writemsg_level=self._display_merge, vardbapi=self.vartree.dbapi)
- 
- 		# For gcc upgrades, preserved libs have to be removed after the
- 		# library path has been updated.
- 		self._prune_plib_registry()
- 		self._post_merge_sync()
- 
- 		return os.EX_OK
- 
- 	def _new_backup_path(self, p):
- 		"""
- 		This works for any type of path, such as a regular file, symlink,
- 		or directory. The parent directory is assumed to exist.
- 		The returned filename is of the form p + '.backup.' + x, where
- 		x guarantees that the returned path does not exist yet.
- 		"""
- 		os = _os_merge
- 
- 		x = -1
- 		while True:
- 			x += 1
- 			backup_p = '%s.backup.%04d' % (p, x)
- 			try:
- 				os.lstat(backup_p)
- 			except OSError:
- 				break
- 
- 		return backup_p
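A hypothetical illustration of the naming scheme (path invented); the counter starts at 0000 and increments until an unused name is found:

    p = "/etc/foo.conf"
    print("%s.backup.%04d" % (p, 0))  # -> /etc/foo.conf.backup.0000
    # If that name already exists, the next candidate is
    # /etc/foo.conf.backup.0001, and so on.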
- 
- 	def _merge_contents(self, srcroot, destroot, cfgfiledict):
- 
- 		cfgfiledict_orig = cfgfiledict.copy()
- 
- 		# open CONTENTS file (possibly overwriting old one) for recording
- 		# Use atomic_ofstream for automatic coercion of raw bytes to
- 		# unicode, in order to prevent TypeError when writing raw bytes
- 		# to TextIOWrapper with python2.
- 		outfile = atomic_ofstream(_unicode_encode(
- 			os.path.join(self.dbtmpdir, 'CONTENTS'),
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='w', encoding=_encodings['repo.content'],
- 			errors='backslashreplace')
- 
- 		# Don't bump mtimes on merge since some applications require
- 		# preservation of timestamps.  This means that the unmerge phase must
- 		# check to see if a file belongs to an installed instance in the same
- 		# slot.
- 		mymtime = None
- 
- 		# set umask to 0 for merging; save the old umask in prevmask (since this is a global change)
- 		prevmask = os.umask(0)
- 		secondhand = []
- 
- 		# we do a first merge; this will recurse through all files in our srcroot but also build up a
- 		# "second hand" of symlinks to merge later
- 		if self.mergeme(srcroot, destroot, outfile, secondhand,
- 			self.settings["EPREFIX"].lstrip(os.sep), cfgfiledict, mymtime):
- 			return 1
- 
- 		# now, it's time for dealing our second hand; we'll loop until we can't merge anymore.	The rest are
- 		# broken symlinks.  We'll merge them too.
- 		lastlen = 0
- 		while len(secondhand) and len(secondhand) != lastlen:
- 			# clear the thirdhand.	Anything from our second hand that
- 			# couldn't get merged will be added to thirdhand.
- 
- 			thirdhand = []
- 			if self.mergeme(srcroot, destroot, outfile, thirdhand,
- 				secondhand, cfgfiledict, mymtime):
- 				return 1
- 
- 			#swap hands
- 			lastlen = len(secondhand)
- 
- 			# our thirdhand now becomes our secondhand.  It's ok to throw
- 			# away secondhand since thirdhand contains all the stuff that
- 			# couldn't be merged.
- 			secondhand = thirdhand
- 
- 		if len(secondhand):
- 			# force merge of remaining symlinks (broken or circular; oh well)
- 			if self.mergeme(srcroot, destroot, outfile, None,
- 				secondhand, cfgfiledict, mymtime):
- 				return 1
- 
- 		#restore umask
- 		os.umask(prevmask)
- 
- 		#if we opened it, close it
- 		outfile.flush()
- 		outfile.close()
- 
- 		# write out our collection of md5sums
- 		if cfgfiledict != cfgfiledict_orig:
- 			cfgfiledict.pop("IGNORE", None)
- 			try:
- 				writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
- 			except InvalidLocation:
- 				self.settings._init_dirs()
- 				writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
- 
- 		return os.EX_OK
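The secondhand/thirdhand handling above is a retry-until-no-progress loop over deferred symlinks. A standalone sketch of that pattern (names are illustrative, not portage API):

    def merge_with_retry(items, try_merge):
        deferred = list(items)
        while deferred:
            still_deferred = [x for x in deferred if not try_merge(x)]
            if len(still_deferred) == len(deferred):
                break  # no progress this pass; force-merge or report the rest
            deferred = still_deferred
        return deferred  # items that could not be merged normally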
- 
- 	def mergeme(self, srcroot, destroot, outfile, secondhand, stufftomerge, cfgfiledict, thismtime):
- 		"""
- 
- 		This function handles actual merging of the package contents to the livefs.
- 		It also handles config protection.
- 
- 		@param srcroot: Where are we copying files from (usually ${D})
- 		@type srcroot: String (Path)
- 		@param destroot: Typically ${ROOT}
- 		@type destroot: String (Path)
- 		@param outfile: File to log operations to
- 		@type outfile: File Object
- 		@param secondhand: A list of items to merge in pass two (usually
- 		symlinks that point to non-existing files that may get merged later)
- 		@type secondhand: List
- 		@param stufftomerge: Either a directory to merge, or a list of items.
- 		@type stufftomerge: String or List
- 		@param cfgfiledict: { File:mtime } mapping for config_protected files
- 		@type cfgfiledict: Dictionary
- 		@param thismtime: None or new mtime for merged files (expressed in
- 		nanoseconds)
- 		@type thismtime: None or Int
- 		@rtype: None or Boolean
- 		@return:
- 		1. True on failure
- 		2. None otherwise
- 
- 		"""
- 
- 		showMessage = self._display_merge
- 		writemsg = self._display_merge
- 
- 		os = _os_merge
- 		sep = os.sep
- 		join = os.path.join
- 		srcroot = normalize_path(srcroot).rstrip(sep) + sep
- 		destroot = normalize_path(destroot).rstrip(sep) + sep
- 		calc_prelink = "prelink-checksums" in self.settings.features
- 
- 		protect_if_modified = \
- 			"config-protect-if-modified" in self.settings.features and \
- 			self._installed_instance is not None
- 
- 		# this is supposed to merge a list of files.  There will be 2 forms of argument passing.
- 		if isinstance(stufftomerge, str):
- 			#A directory is specified.  Figure out protection paths, listdir() it and process it.
- 			mergelist = [join(stufftomerge, child) for child in \
- 				os.listdir(join(srcroot, stufftomerge))]
- 		else:
- 			mergelist = stufftomerge[:]
- 
- 		while mergelist:
- 
- 			relative_path = mergelist.pop()
- 			mysrc = join(srcroot, relative_path)
- 			mydest = join(destroot, relative_path)
- 			# myrealdest is mydest without the $ROOT prefix (makes a difference if ROOT!="/")
- 			myrealdest = join(sep, relative_path)
- 			# stat file once, test using S_* macros many times (faster that way)
- 			mystat = os.lstat(mysrc)
- 			mymode = mystat[stat.ST_MODE]
- 			mymd5 = None
- 			myto = None
- 
- 			mymtime = mystat.st_mtime_ns
- 
- 			if stat.S_ISREG(mymode):
- 				mymd5 = perform_md5(mysrc, calc_prelink=calc_prelink)
- 			elif stat.S_ISLNK(mymode):
- 				# The file name of mysrc and the actual file that it points to
- 				# will have earlier been forcefully converted to the 'merge'
- 				# encoding if necessary, but the content of the symbolic link
- 				# may need to be forcefully converted here.
- 				myto = _os.readlink(_unicode_encode(mysrc,
- 					encoding=_encodings['merge'], errors='strict'))
- 				try:
- 					myto = _unicode_decode(myto,
- 						encoding=_encodings['merge'], errors='strict')
- 				except UnicodeDecodeError:
- 					myto = _unicode_decode(myto, encoding=_encodings['merge'],
- 						errors='replace')
- 					myto = _unicode_encode(myto, encoding='ascii',
- 						errors='backslashreplace')
- 					myto = _unicode_decode(myto, encoding=_encodings['merge'],
- 						errors='replace')
- 					os.unlink(mysrc)
- 					os.symlink(myto, mysrc)
- 
- 				mymd5 = md5(_unicode_encode(myto)).hexdigest()
- 
- 			protected = False
- 			if stat.S_ISLNK(mymode) or stat.S_ISREG(mymode):
- 				protected = self.isprotected(mydest)
- 
- 				if stat.S_ISREG(mymode) and \
- 					mystat.st_size == 0 and \
- 					os.path.basename(mydest).startswith(".keep"):
- 					protected = False
- 
- 			destmd5 = None
- 			mydest_link = None
- 			# handy variables; mydest is the target object on the live filesystems;
- 			# mysrc is the source object in the temporary install dir
- 			try:
- 				mydstat = os.lstat(mydest)
- 				mydmode = mydstat.st_mode
- 				if protected:
- 					if stat.S_ISLNK(mydmode):
- 						# Read symlink target as bytes, in case the
- 						# target path has a bad encoding.
- 						mydest_link = _os.readlink(
- 							_unicode_encode(mydest,
- 							encoding=_encodings['merge'],
- 							errors='strict'))
- 						mydest_link = _unicode_decode(mydest_link,
- 							encoding=_encodings['merge'],
- 							errors='replace')
- 
- 						# For protection of symlinks, the md5
- 						# of the link target path string is used
- 						# for cfgfiledict (symlinks are
- 						# protected since bug #485598).
- 						destmd5 = md5(_unicode_encode(mydest_link)).hexdigest()
- 
- 					elif stat.S_ISREG(mydmode):
- 						destmd5 = perform_md5(mydest,
- 							calc_prelink=calc_prelink)
- 			except (FileNotFound, OSError) as e:
- 				if isinstance(e, OSError) and e.errno != errno.ENOENT:
- 					raise
- 				#dest file doesn't exist
- 				mydstat = None
- 				mydmode = None
- 				mydest_link = None
- 				destmd5 = None
- 
- 			moveme = True
- 			if protected:
- 				mydest, protected, moveme = self._protect(cfgfiledict,
- 					protect_if_modified, mymd5, myto, mydest,
- 					myrealdest, mydmode, destmd5, mydest_link)
- 
- 			zing = "!!!"
- 			if not moveme:
- 				# confmem rejected this update
- 				zing = "---"
- 
- 			if stat.S_ISLNK(mymode):
- 				# we are merging a symbolic link
- 				# Pass in the symlink target in order to bypass the
- 				# os.readlink() call inside abssymlink(), since that
- 				# call is unsafe if the merge encoding is not ascii
- 				# or utf_8 (see bug #382021).
- 				myabsto = abssymlink(mysrc, target=myto)
- 
- 				if myabsto.startswith(srcroot):
- 					myabsto = myabsto[len(srcroot):]
- 				myabsto = myabsto.lstrip(sep)
- 				if self.settings and self.settings["D"]:
- 					if myto.startswith(self.settings["D"]):
- 						myto = myto[len(self.settings["D"])-1:]
- 				# myrealto contains the path of the real file to which this symlink points.
- 				# we can simply test for existence of this file to see if the target has been merged yet
- 				myrealto = normalize_path(os.path.join(destroot, myabsto))
- 				if mydmode is not None and stat.S_ISDIR(mydmode):
- 					if not protected:
- 						# we can't merge a symlink over a directory
- 						newdest = self._new_backup_path(mydest)
- 						msg = []
- 						msg.append("")
- 						msg.append(_("Installation of a symlink is blocked by a directory:"))
- 						msg.append("  '%s'" % mydest)
- 						msg.append(_("This symlink will be merged with a different name:"))
- 						msg.append("  '%s'" % newdest)
- 						msg.append("")
- 						self._eerror("preinst", msg)
- 						mydest = newdest
- 
- 				# if secondhand is None it means we're operating in "force" mode and should not create a second hand.
- 				if (secondhand != None) and (not os.path.exists(myrealto)):
- 					# either the target directory doesn't exist yet or the target file doesn't exist -- or
- 					# the target is a broken symlink.  We will add this file to our "second hand" and merge
- 					# it later.
- 					secondhand.append(mysrc[len(srcroot):])
- 					continue
- 				# unlinking no longer necessary; "movefile" will overwrite symlinks atomically and correctly
- 				if moveme:
- 					zing = ">>>"
- 					mymtime = movefile(mysrc, mydest, newmtime=thismtime,
- 						sstat=mystat, mysettings=self.settings,
- 						encoding=_encodings['merge'])
- 
- 				try:
- 					self._merged_path(mydest, os.lstat(mydest))
- 				except OSError:
- 					pass
- 
- 				if mymtime != None:
- 					# Use lexists, since if the target happens to be a broken
- 					# symlink then that should trigger an independent warning.
- 					if not (os.path.lexists(myrealto) or
- 						os.path.lexists(join(srcroot, myabsto))):
- 						self._eqawarn('preinst',
- 							[_("QA Notice: Symbolic link /%s points to /%s which does not exist.")
- 							% (relative_path, myabsto)])
- 
- 					showMessage("%s %s -> %s\n" % (zing, mydest, myto))
- 					outfile.write(
- 						self._format_contents_line(
- 							node_type="sym",
- 							abs_path=myrealdest,
- 							symlink_target=myto,
- 							mtime_ns=mymtime,
- 						)
- 					)
- 				else:
- 					showMessage(_("!!! Failed to move file.\n"),
- 						level=logging.ERROR, noiselevel=-1)
- 					showMessage("!!! %s -> %s\n" % (mydest, myto),
- 						level=logging.ERROR, noiselevel=-1)
- 					return 1
- 			elif stat.S_ISDIR(mymode):
- 				# we are merging a directory
- 				if mydmode != None:
- 					# destination exists
- 
- 					if bsd_chflags:
- 						# Save then clear flags on dest.
- 						dflags = mydstat.st_flags
- 						if dflags != 0:
- 							bsd_chflags.lchflags(mydest, 0)
- 
- 					if not stat.S_ISLNK(mydmode) and \
- 						not os.access(mydest, os.W_OK):
- 						pkgstuff = pkgsplit(self.pkg)
- 						writemsg(_("\n!!! Cannot write to '%s'.\n") % mydest, noiselevel=-1)
- 						writemsg(_("!!! Please check permissions and directories for broken symlinks.\n"))
- 						writemsg(_("!!! You may start the merge process again by using ebuild:\n"))
- 						writemsg("!!! ebuild "+self.settings["PORTDIR"]+"/"+self.cat+"/"+pkgstuff[0]+"/"+self.pkg+".ebuild merge\n")
- 						writemsg(_("!!! And finish by running this: env-update\n\n"))
- 						return 1
- 
- 					if stat.S_ISDIR(mydmode) or \
- 						(stat.S_ISLNK(mydmode) and os.path.isdir(mydest)):
- 						# a symlink to an existing directory will work for us; keep it:
- 						showMessage("--- %s/\n" % mydest)
- 						if bsd_chflags:
- 							bsd_chflags.lchflags(mydest, dflags)
- 					else:
- 						# a non-directory and non-symlink-to-directory.  Won't work for us.  Move out of the way.
- 						backup_dest = self._new_backup_path(mydest)
- 						msg = []
- 						msg.append("")
- 						msg.append(_("Installation of a directory is blocked by a file:"))
- 						msg.append("  '%s'" % mydest)
- 						msg.append(_("This file will be renamed to a different name:"))
- 						msg.append("  '%s'" % backup_dest)
- 						msg.append("")
- 						self._eerror("preinst", msg)
- 						if movefile(mydest, backup_dest,
- 							mysettings=self.settings,
- 							encoding=_encodings['merge']) is None:
- 							return 1
- 						showMessage(_("bak %s %s\n") % (mydest, backup_dest),
- 							level=logging.ERROR, noiselevel=-1)
- 						#now create our directory
- 						try:
- 							if self.settings.selinux_enabled():
- 								_selinux_merge.mkdir(mydest, mysrc)
- 							else:
- 								os.mkdir(mydest)
- 						except OSError as e:
- 							# Error handling should be equivalent to
- 							# portage.util.ensure_dirs() for cases
- 							# like bug #187518.
- 							if e.errno in (errno.EEXIST,):
- 								pass
- 							elif os.path.isdir(mydest):
- 								pass
- 							else:
- 								raise
- 							del e
- 
- 						if bsd_chflags:
- 							bsd_chflags.lchflags(mydest, dflags)
- 						os.chmod(mydest, mystat[0])
- 						os.chown(mydest, mystat[4], mystat[5])
- 						showMessage(">>> %s/\n" % mydest)
- 				else:
- 					try:
- 						#destination doesn't exist
- 						if self.settings.selinux_enabled():
- 							_selinux_merge.mkdir(mydest, mysrc)
- 						else:
- 							os.mkdir(mydest)
- 					except OSError as e:
- 						# Error handling should be equivalent to
- 						# portage.util.ensure_dirs() for cases
- 						# like bug #187518.
- 						if e.errno in (errno.EEXIST,):
- 							pass
- 						elif os.path.isdir(mydest):
- 							pass
- 						else:
- 							raise
- 						del e
- 					os.chmod(mydest, mystat[0])
- 					os.chown(mydest, mystat[4], mystat[5])
- 					showMessage(">>> %s/\n" % mydest)
- 
- 				try:
- 					self._merged_path(mydest, os.lstat(mydest))
- 				except OSError:
- 					pass
- 
- 				outfile.write(
- 					self._format_contents_line(node_type="dir", abs_path=myrealdest)
- 				)
- 				# recurse and merge this directory
- 				mergelist.extend(join(relative_path, child) for child in
- 					os.listdir(join(srcroot, relative_path)))
- 
- 			elif stat.S_ISREG(mymode):
- 				# we are merging a regular file
- 				if not protected and \
- 					mydmode is not None and stat.S_ISDIR(mydmode):
- 						# install of destination is blocked by an existing directory with the same name
- 						newdest = self._new_backup_path(mydest)
- 						msg = []
- 						msg.append("")
- 						msg.append(_("Installation of a regular file is blocked by a directory:"))
- 						msg.append("  '%s'" % mydest)
- 						msg.append(_("This file will be merged with a different name:"))
- 						msg.append("  '%s'" % newdest)
- 						msg.append("")
- 						self._eerror("preinst", msg)
- 						mydest = newdest
- 
- 				# Whether config protection applies or not, we merge the new file
- 				# the same way, unless moveme is False (blocked by a directory).
- 				if moveme:
- 					# Create hardlinks only for source files that already exist
- 					# as hardlinks (having identical st_dev and st_ino).
- 					hardlink_key = (mystat.st_dev, mystat.st_ino)
- 
- 					hardlink_candidates = self._hardlink_merge_map.get(hardlink_key)
- 					if hardlink_candidates is None:
- 						hardlink_candidates = []
- 						self._hardlink_merge_map[hardlink_key] = hardlink_candidates
- 
- 					mymtime = movefile(mysrc, mydest, newmtime=thismtime,
- 						sstat=mystat, mysettings=self.settings,
- 						hardlink_candidates=hardlink_candidates,
- 						encoding=_encodings['merge'])
- 					if mymtime is None:
- 						return 1
- 					hardlink_candidates.append(mydest)
- 					zing = ">>>"
- 
- 					try:
- 						self._merged_path(mydest, os.lstat(mydest))
- 					except OSError:
- 						pass
- 
- 				if mymtime != None:
- 					outfile.write(
- 						self._format_contents_line(
- 							node_type="obj",
- 							abs_path=myrealdest,
- 							md5_digest=mymd5,
- 							mtime_ns=mymtime,
- 						)
- 					)
- 				showMessage("%s %s\n" % (zing, mydest))
- 			else:
- 				# we are merging a fifo or device node
- 				zing = "!!!"
- 				if mydmode is None:
- 					# destination doesn't exist
- 					if movefile(mysrc, mydest, newmtime=thismtime,
- 						sstat=mystat, mysettings=self.settings,
- 						encoding=_encodings['merge']) is not None:
- 						zing = ">>>"
- 
- 						try:
- 							self._merged_path(mydest, os.lstat(mydest))
- 						except OSError:
- 							pass
- 
- 					else:
- 						return 1
- 				if stat.S_ISFIFO(mymode):
- 					outfile.write(
- 						self._format_contents_line(node_type="fif", abs_path=myrealdest)
- 					)
- 				else:
- 					outfile.write(
- 						self._format_contents_line(node_type="dev", abs_path=myrealdest)
- 					)
- 				showMessage(zing + " " + mydest + "\n")
- 
- 	def _protect(self, cfgfiledict, protect_if_modified, src_md5,
- 		src_link, dest, dest_real, dest_mode, dest_md5, dest_link):
- 
- 		move_me = True
- 		protected = True
- 		force = False
- 		k = False
- 		if self._installed_instance is not None:
- 			k = self._installed_instance._match_contents(dest_real)
- 		if k is not False:
- 			if dest_mode is None:
- 				# If the file doesn't exist, then it may
- 				# have been deleted or renamed by the
- 				# admin. Therefore, force the file to be
- 				# merged with a ._cfg name, so that the
- 				# admin will be prompted for this update
- 				# (see bug #523684).
- 				force = True
- 
- 			elif protect_if_modified:
- 				data = self._installed_instance.getcontents()[k]
- 				if data[0] == "obj" and data[2] == dest_md5:
- 					protected = False
- 				elif data[0] == "sym" and data[2] == dest_link:
- 					protected = False
- 
- 		if protected and dest_mode is not None:
- 			# we have a protection path; enable config file management.
- 			if src_md5 == dest_md5:
- 				protected = False
- 
- 			elif src_md5 == cfgfiledict.get(dest_real, [None])[0]:
- 				# An identical update has previously been
- 				# merged.  Skip it unless the user has chosen
- 				# --noconfmem.
- 				move_me = protected = bool(cfgfiledict["IGNORE"])
- 
- 			if protected and \
- 				(dest_link is not None or src_link is not None) and \
- 				dest_link != src_link:
- 				# If either one is a symlink, and they are not
- 				# identical symlinks, then force config protection.
- 				force = True
- 
- 			if move_me:
- 				# Merging a new file, so update confmem.
- 				cfgfiledict[dest_real] = [src_md5]
- 			elif dest_md5 == cfgfiledict.get(dest_real, [None])[0]:
- 				# A previously remembered update has been
- 				# accepted, so it is removed from confmem.
- 				del cfgfiledict[dest_real]
- 
- 		if protected and move_me:
- 			dest = new_protect_filename(dest,
- 				newmd5=(dest_link or src_md5),
- 				force=force)
- 
- 		return dest, protected, move_me
- 
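# Illustrative sketch, not part of this diff: the core md5/confmem decision
# made by _protect() above, reduced to a standalone function.  The helper name
# and inputs are made up; the real method also handles symlinks, missing
# destinations and forced ._cfg renames.
def _protect_decision(src_md5, dest_md5, remembered_md5, ignore_confmem=False):
    protected = True
    move_me = True
    if src_md5 == dest_md5:
        # new file is identical to the live one: overwrite in place
        protected = False
    elif src_md5 == remembered_md5:
        # an identical update was merged before; skip unless --noconfmem
        move_me = protected = ignore_confmem
    return move_me, protected

# identical file -> merged in place, no ._cfg copy
assert _protect_decision("abc", "abc", None) == (True, False)
# previously remembered update, confmem honoured -> not merged again
assert _protect_decision("abc", "def", "abc") == (False, False)
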
- 	def _format_contents_line(
- 		self, node_type, abs_path, md5_digest=None, symlink_target=None, mtime_ns=None
- 	):
- 		fields = [node_type, abs_path]
- 		if md5_digest is not None:
- 			fields.append(md5_digest)
- 		elif symlink_target is not None:
- 			fields.append("-> {}".format(symlink_target))
- 		if mtime_ns is not None:
- 			fields.append(str(mtime_ns // 1000000000))
- 		return "{}\n".format(" ".join(fields))
- 
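# Illustrative sketch, not part of this diff: sample CONTENTS lines in the
# format produced by _format_contents_line() above (md5 and mtime values are
# made up; the nanosecond mtime is truncated to whole seconds).
fields = ["obj", "/usr/bin/foo",
          "d41d8cd98f00b204e9800998ecf8427e",
          str(1642156321444555666 // 1000000000)]
assert " ".join(fields) + "\n" == \
    "obj /usr/bin/foo d41d8cd98f00b204e9800998ecf8427e 1642156321\n"
# directories carry only the type and path:   "dir /usr/bin\n"
# symlinks embed their target after an arrow: "sym /usr/lib/libfoo.so -> libfoo.so.1 1642156321\n"
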
- 	def _merged_path(self, path, lstatobj, exists=True):
- 		previous_path = self._device_path_map.get(lstatobj.st_dev)
- 		if previous_path is None or previous_path is False or \
- 			(exists and len(path) < len(previous_path)):
- 			if exists:
- 				self._device_path_map[lstatobj.st_dev] = path
- 			else:
- 				# This entry is used to indicate that we've unmerged
- 				# a file from this device, and later, this entry is
- 				# replaced by a parent directory.
- 				self._device_path_map[lstatobj.st_dev] = False
- 
- 	def _post_merge_sync(self):
- 		"""
- 		Call this after merge or unmerge, in order to sync relevant files to
- 		disk and avoid data-loss in the event of a power failure. This method
- 		does nothing if FEATURES=merge-sync is disabled.
- 		"""
- 		if not self._device_path_map or \
- 			"merge-sync" not in self.settings.features:
- 			return
- 
- 		returncode = None
- 		if platform.system() == "Linux":
- 
- 			paths = []
- 			for path in self._device_path_map.values():
- 				if path is not False:
- 					paths.append(path)
- 			paths = tuple(paths)
- 
- 			proc = SyncfsProcess(paths=paths,
- 				scheduler=(self._scheduler or asyncio._safe_loop()))
- 			proc.start()
- 			returncode = proc.wait()
- 
- 		if returncode is None or returncode != os.EX_OK:
- 			try:
- 				proc = subprocess.Popen(["sync"])
- 			except EnvironmentError:
- 				pass
- 			else:
- 				proc.wait()
- 
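# Illustrative sketch, not part of this diff: the fallback pattern used by
# _post_merge_sync() above -- try a per-filesystem syncfs(2) on Linux, and
# fall back to spawning "sync" when that is unavailable.  This uses ctypes
# directly instead of portage's SyncfsProcess helper.
import ctypes
import ctypes.util
import os
import platform
import subprocess

def sync_filesystems(paths):
    synced = False
    if platform.system() == "Linux":
        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
        if hasattr(libc, "syncfs"):
            synced = True
            for path in paths:
                fd = os.open(path, os.O_RDONLY)
                try:
                    if libc.syncfs(fd) != 0:
                        synced = False
                finally:
                    os.close(fd)
    if not synced:
        try:
            subprocess.Popen(["sync"]).wait()
        except EnvironmentError:
            pass

sync_filesystems(["/"])  # sync the filesystem that contains /
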
- 	@_slot_locked
- 	def merge(self, mergeroot, inforoot, myroot=None, myebuild=None, cleanup=0,
- 		mydbapi=None, prev_mtimes=None, counter=None):
- 		"""
- 		@param myroot: ignored, self._eroot is used instead
- 		"""
- 		myroot = None
- 		retval = -1
- 		parallel_install = "parallel-install" in self.settings.features
- 		if not parallel_install:
- 			self.lockdb()
- 		self.vartree.dbapi._bump_mtime(self.mycpv)
- 		if self._scheduler is None:
- 			self._scheduler = SchedulerInterface(asyncio._safe_loop())
- 		try:
- 			retval = self.treewalk(mergeroot, myroot, inforoot, myebuild,
- 				cleanup=cleanup, mydbapi=mydbapi, prev_mtimes=prev_mtimes,
- 				counter=counter)
- 
- 			# If PORTAGE_BUILDDIR doesn't exist, then it probably means
- 			# fail-clean is enabled, and the success/die hooks have
- 			# already been called by EbuildPhase.
- 			if os.path.isdir(self.settings['PORTAGE_BUILDDIR']):
- 
- 				if retval == os.EX_OK:
- 					phase = 'success_hooks'
- 				else:
- 					phase = 'die_hooks'
- 
- 				ebuild_phase = MiscFunctionsProcess(
- 					background=False, commands=[phase],
- 					scheduler=self._scheduler, settings=self.settings)
- 				ebuild_phase.start()
- 				ebuild_phase.wait()
- 				self._elog_process()
- 
- 				if 'noclean' not in self.settings.features and \
- 					(retval == os.EX_OK or \
- 					'fail-clean' in self.settings.features):
- 					if myebuild is None:
- 						myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
- 
- 					doebuild_environment(myebuild, "clean",
- 						settings=self.settings, db=mydbapi)
- 					phase = EbuildPhase(background=False, phase="clean",
- 						scheduler=self._scheduler, settings=self.settings)
- 					phase.start()
- 					phase.wait()
- 		finally:
- 			self.settings.pop('REPLACING_VERSIONS', None)
- 			if self.vartree.dbapi._linkmap is None:
- 				# preserve-libs is entirely disabled
- 				pass
- 			else:
- 				self.vartree.dbapi._linkmap._clear_cache()
- 			self.vartree.dbapi._bump_mtime(self.mycpv)
- 			if not parallel_install:
- 				self.unlockdb()
- 
- 		if retval == os.EX_OK and self._postinst_failure:
- 			retval = portage.const.RETURNCODE_POSTINST_FAILURE
- 
- 		return retval
- 
- 	def getstring(self,name):
- 		"returns contents of a file with whitespace converted to spaces"
- 		if not os.path.exists(self.dbdir+"/"+name):
- 			return ""
- 		with io.open(
- 			_unicode_encode(os.path.join(self.dbdir, name),
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='r', encoding=_encodings['repo.content'], errors='replace'
- 			) as f:
- 			mydata = f.read().split()
- 		return " ".join(mydata)
- 
- 	def copyfile(self,fname):
- 		shutil.copyfile(fname,self.dbdir+"/"+os.path.basename(fname))
- 
- 	def getfile(self,fname):
- 		if not os.path.exists(self.dbdir+"/"+fname):
- 			return ""
- 		with io.open(_unicode_encode(os.path.join(self.dbdir, fname),
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='r', encoding=_encodings['repo.content'], errors='replace'
- 			) as f:
- 			return f.read()
- 
- 	def setfile(self,fname,data):
- 		kwargs = {}
- 		if fname == 'environment.bz2' or not isinstance(data, str):
- 			kwargs['mode'] = 'wb'
- 		else:
- 			kwargs['mode'] = 'w'
- 			kwargs['encoding'] = _encodings['repo.content']
- 		write_atomic(os.path.join(self.dbdir, fname), data, **kwargs)
- 
- 	def getelements(self,ename):
- 		if not os.path.exists(self.dbdir+"/"+ename):
- 			return []
- 		with io.open(_unicode_encode(
- 			os.path.join(self.dbdir, ename),
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='r', encoding=_encodings['repo.content'], errors='replace'
- 			) as f:
- 			mylines = f.readlines()
- 		myreturn = []
- 		for x in mylines:
- 			for y in x[:-1].split():
- 				myreturn.append(y)
- 		return myreturn
- 
- 	def setelements(self,mylist,ename):
- 		with io.open(_unicode_encode(
- 			os.path.join(self.dbdir, ename),
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='w', encoding=_encodings['repo.content'],
- 			errors='backslashreplace') as f:
- 			for x in mylist:
- 				f.write("%s\n" % x)
- 
- 	def isregular(self):
- 		"Is this a regular package (does it have a CATEGORY file?  A dblink can be virtual *and* regular)"
- 		return os.path.exists(os.path.join(self.dbdir, "CATEGORY"))
- 
- 	def _pre_merge_backup(self, backup_dblink, downgrade):
- 
- 		if ("unmerge-backup" in self.settings.features or
- 			(downgrade and "downgrade-backup" in self.settings.features)):
- 			return self._quickpkg_dblink(backup_dblink, False, None)
- 
- 		return os.EX_OK
- 
- 	def _pre_unmerge_backup(self, background):
- 
- 		if "unmerge-backup" in self.settings.features :
- 			logfile = None
- 			if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
- 				logfile = self.settings.get("PORTAGE_LOG_FILE")
- 			return self._quickpkg_dblink(self, background, logfile)
- 
- 		return os.EX_OK
- 
- 	def _quickpkg_dblink(self, backup_dblink, background, logfile):
- 
- 		build_time = backup_dblink.getfile('BUILD_TIME')
- 		try:
- 			build_time = int(build_time.strip())
- 		except ValueError:
- 			build_time = 0
- 
- 		trees = QueryCommand.get_db()[self.settings["EROOT"]]
- 		bintree = trees["bintree"]
- 
- 		for binpkg in reversed(
- 			bintree.dbapi.match('={}'.format(backup_dblink.mycpv))):
- 			if binpkg.build_time == build_time:
- 				return os.EX_OK
- 
- 		self.lockdb()
- 		try:
- 
- 			if not backup_dblink.exists():
- 				# It got unmerged by a concurrent process.
- 				return os.EX_OK
- 
- 			# Call quickpkg for support of QUICKPKG_DEFAULT_OPTS and stuff.
- 			quickpkg_binary = os.path.join(self.settings["PORTAGE_BIN_PATH"],
- 				"quickpkg")
- 
- 			if not os.access(quickpkg_binary, os.X_OK):
- 				# If not running from the source tree, use PATH.
- 				quickpkg_binary = find_binary("quickpkg")
- 				if quickpkg_binary is None:
- 					self._display_merge(
- 						_("%s: command not found") % "quickpkg",
- 						level=logging.ERROR, noiselevel=-1)
- 					return 127
- 
- 			# Let quickpkg inherit the global vartree config's env.
- 			env = dict(self.vartree.settings.items())
- 			env["__PORTAGE_INHERIT_VARDB_LOCK"] = "1"
- 
- 			pythonpath = [x for x in env.get('PYTHONPATH', '').split(":") if x]
- 			if not pythonpath or \
- 				not os.path.samefile(pythonpath[0], portage._pym_path):
- 				pythonpath.insert(0, portage._pym_path)
- 			env['PYTHONPATH'] = ":".join(pythonpath)
- 
- 			quickpkg_proc = SpawnProcess(
- 				args=[portage._python_interpreter, quickpkg_binary,
- 					"=%s" % (backup_dblink.mycpv,)],
- 				background=background, env=env,
- 				scheduler=self._scheduler, logfile=logfile)
- 			quickpkg_proc.start()
- 
- 			return quickpkg_proc.wait()
- 
- 		finally:
- 			self.unlockdb()
- 
- def merge(mycat, mypkg, pkgloc, infloc,
- 	myroot=None, settings=None, myebuild=None,
- 	mytree=None, mydbapi=None, vartree=None, prev_mtimes=None, blockers=None,
- 	scheduler=None, fd_pipes=None):
- 	"""
- 	@param myroot: ignored, settings['EROOT'] is used instead
- 	"""
- 	myroot = None
- 	if settings is None:
- 		raise TypeError("settings argument is required")
- 	if not os.access(settings['EROOT'], os.W_OK):
- 		writemsg(_("Permission denied: access('%s', W_OK)\n") % settings['EROOT'],
- 			noiselevel=-1)
- 		return errno.EACCES
- 	background = (settings.get('PORTAGE_BACKGROUND') == '1')
- 	merge_task = MergeProcess(
- 		mycat=mycat, mypkg=mypkg, settings=settings,
- 		treetype=mytree, vartree=vartree,
- 		scheduler=(scheduler or asyncio._safe_loop()),
- 		background=background, blockers=blockers, pkgloc=pkgloc,
- 		infloc=infloc, myebuild=myebuild, mydbapi=mydbapi,
- 		prev_mtimes=prev_mtimes, logfile=settings.get('PORTAGE_LOG_FILE'),
- 		fd_pipes=fd_pipes)
- 	merge_task.start()
- 	retcode = merge_task.wait()
- 	return retcode
- 
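# Illustrative sketch, not part of this diff: the rough shape of a call into
# the module-level merge() helper above.  Assumes an already configured
# portage instance; the category, package and image/build-info paths are
# made up.
import portage

eroot = portage.settings["EROOT"]
vartree = portage.db[eroot]["vartree"]
rc = merge(
    "app-misc", "example-1.0",
    pkgloc="/var/tmp/portage/app-misc/example-1.0/image",
    infloc="/var/tmp/portage/app-misc/example-1.0/build-info",
    settings=vartree.settings,
    mydbapi=vartree.dbapi,
    vartree=vartree,
)
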
- def unmerge(cat, pkg, myroot=None, settings=None,
- 	mytrimworld=None, vartree=None,
- 	ldpath_mtimes=None, scheduler=None):
- 	"""
- 	@param myroot: ignored, settings['EROOT'] is used instead
- 	@param mytrimworld: ignored
- 	"""
- 	myroot = None
- 	if settings is None:
- 		raise TypeError("settings argument is required")
- 	mylink = dblink(cat, pkg, settings=settings, treetype="vartree",
- 		vartree=vartree, scheduler=scheduler)
- 	vartree = mylink.vartree
- 	parallel_install = "parallel-install" in settings.features
- 	if not parallel_install:
- 		mylink.lockdb()
- 	try:
- 		if mylink.exists():
- 			retval = mylink.unmerge(ldpath_mtimes=ldpath_mtimes)
- 			if retval == os.EX_OK:
- 				mylink.lockdb()
- 				try:
- 					mylink.delete()
- 				finally:
- 					mylink.unlockdb()
- 			return retval
- 		return os.EX_OK
- 	finally:
- 		if vartree.dbapi._linkmap is None:
- 			# preserve-libs is entirely disabled
- 			pass
- 		else:
- 			vartree.dbapi._linkmap._clear_cache()
- 		if not parallel_install:
- 			mylink.unlockdb()
+     """
+     This class provides an interface to the installed package database
+     At present this is implemented as a text backend in /var/db/pkg.
+     """
+ 
+     _normalize_needed = re.compile(r"//|^[^/]|./$|(^|/)\.\.?(/|$)")
+ 
+     _contents_re = re.compile(
+         r"^("
+         + r"(?P<dir>(dev|dir|fif) (.+))|"
+         + r"(?P<obj>(obj) (.+) (\S+) (\d+))|"
+         + r"(?P<sym>(sym) (.+) -> (.+) ((\d+)|(?P<oldsym>("
+         + r"\(\d+, \d+L, \d+L, \d+, \d+, \d+, \d+L, \d+, (\d+), \d+\)))))"
+         + r")$"
+     )
+ 
+     # These files are generated by emerge, so we need to remove
+     # them when they are the only thing left in a directory.
+     _infodir_cleanup = frozenset(["dir", "dir.old"])
+ 
+     _ignored_unlink_errnos = (errno.EBUSY, errno.ENOENT, errno.ENOTDIR, errno.EISDIR)
+ 
+     _ignored_rmdir_errnos = (
+         errno.EEXIST,
+         errno.ENOTEMPTY,
+         errno.EBUSY,
+         errno.ENOENT,
+         errno.ENOTDIR,
+         errno.EISDIR,
+         errno.EPERM,
+     )
+ 
+     def __init__(
+         self,
+         cat,
+         pkg,
+         myroot=None,
+         settings=None,
+         treetype=None,
+         vartree=None,
+         blockers=None,
+         scheduler=None,
+         pipe=None,
+     ):
+         """
+         Creates a DBlink object for a given CPV.
+         The given CPV may not be present in the database already.
+ 
+         @param cat: Category
+         @type cat: String
+         @param pkg: Package (PV)
+         @type pkg: String
+         @param myroot: ignored, settings['ROOT'] is used instead
+         @type myroot: String (Path)
+         @param settings: Typically portage.settings
+         @type settings: portage.config
+         @param treetype: one of ['porttree','bintree','vartree']
+         @type treetype: String
+         @param vartree: an instance of vartree corresponding to myroot.
+         @type vartree: vartree
+         """
+ 
+         if settings is None:
+             raise TypeError("settings argument is required")
+ 
+         mysettings = settings
+         self._eroot = mysettings["EROOT"]
+         self.cat = cat
+         self.pkg = pkg
+         self.mycpv = self.cat + "/" + self.pkg
+         if self.mycpv == settings.mycpv and isinstance(settings.mycpv, _pkg_str):
+             self.mycpv = settings.mycpv
+         else:
+             self.mycpv = _pkg_str(self.mycpv)
+         self.mysplit = list(self.mycpv.cpv_split[1:])
+         self.mysplit[0] = self.mycpv.cp
+         self.treetype = treetype
+         if vartree is None:
+             vartree = portage.db[self._eroot]["vartree"]
+         self.vartree = vartree
+         self._blockers = blockers
+         self._scheduler = scheduler
+         self.dbroot = normalize_path(os.path.join(self._eroot, VDB_PATH))
+         self.dbcatdir = self.dbroot + "/" + cat
+         self.dbpkgdir = self.dbcatdir + "/" + pkg
+         self.dbtmpdir = self.dbcatdir + "/" + MERGING_IDENTIFIER + pkg
+         self.dbdir = self.dbpkgdir
+         self.settings = mysettings
+         self._verbose = self.settings.get("PORTAGE_VERBOSE") == "1"
+ 
+         self.myroot = self.settings["ROOT"]
+         self._installed_instance = None
+         self.contentscache = None
+         self._contents_inodes = None
+         self._contents_basenames = None
+         self._linkmap_broken = False
+         self._device_path_map = {}
+         self._hardlink_merge_map = {}
+         self._hash_key = (self._eroot, self.mycpv)
+         self._protect_obj = None
+         self._pipe = pipe
+         self._postinst_failure = False
+ 
+         # When necessary, this attribute is modified for
+         # compliance with RESTRICT=preserve-libs.
+         self._preserve_libs = "preserve-libs" in mysettings.features
+         self._contents = ContentsCaseSensitivityManager(self)
+         self._slot_locks = []
+ 
+     def __hash__(self):
+         return hash(self._hash_key)
+ 
+     def __eq__(self, other):
+         return isinstance(other, dblink) and self._hash_key == other._hash_key
+ 
+     def _get_protect_obj(self):
+ 
+         if self._protect_obj is None:
+             self._protect_obj = ConfigProtect(
+                 self._eroot,
+                 portage.util.shlex_split(self.settings.get("CONFIG_PROTECT", "")),
+                 portage.util.shlex_split(self.settings.get("CONFIG_PROTECT_MASK", "")),
+                 case_insensitive=("case-insensitive-fs" in self.settings.features),
+             )
+ 
+         return self._protect_obj
+ 
+     def isprotected(self, obj):
+         return self._get_protect_obj().isprotected(obj)
+ 
+     def updateprotect(self):
+         self._get_protect_obj().updateprotect()
+ 
+     def lockdb(self):
+         self.vartree.dbapi.lock()
+ 
+     def unlockdb(self):
+         self.vartree.dbapi.unlock()
+ 
+     def _slot_locked(f):
+         """
+         A decorator function which, when parallel-install is enabled,
+         acquires and releases slot locks for the current package and
+         blocked packages. This is required in order to account for
+         interactions with blocked packages (involving resolution of
+         file collisions).
+         """
+ 
+         def wrapper(self, *args, **kwargs):
+             if "parallel-install" in self.settings.features:
+                 self._acquire_slot_locks(kwargs.get("mydbapi", self.vartree.dbapi))
+             try:
+                 return f(self, *args, **kwargs)
+             finally:
+                 self._release_slot_locks()
+ 
+         return wrapper
+ 
+     def _acquire_slot_locks(self, db):
+         """
+         Acquire slot locks for the current package and blocked packages.
+         """
+ 
+         slot_atoms = []
+ 
+         try:
+             slot = self.mycpv.slot
+         except AttributeError:
+             (slot,) = db.aux_get(self.mycpv, ["SLOT"])
+             slot = slot.partition("/")[0]
+ 
+         slot_atoms.append(portage.dep.Atom("%s:%s" % (self.mycpv.cp, slot)))
+ 
+         for blocker in self._blockers or []:
+             slot_atoms.append(blocker.slot_atom)
+ 
+         # Sort atoms so that locks are acquired in a predictable
+         # order, preventing deadlocks with competitors that may
+         # be trying to acquire overlapping locks.
+         slot_atoms.sort()
+         for slot_atom in slot_atoms:
+             self.vartree.dbapi._slot_lock(slot_atom)
+             self._slot_locks.append(slot_atom)
+ 
+     def _release_slot_locks(self):
+         """
+         Release all slot locks.
+         """
+         while self._slot_locks:
+             self.vartree.dbapi._slot_unlock(self._slot_locks.pop())
+ 
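# Illustrative sketch, not part of this diff: the deadlock-avoidance idea
# behind _acquire_slot_locks() above -- every merger takes its slot locks in
# one globally sorted order, so two concurrent emerges can never each hold a
# lock the other needs next.  threading.Lock stands in for the vardb slot
# locks here.
import threading

slot_locks = {atom: threading.Lock()
              for atom in ("app-misc/foo:0", "sys-libs/bar:0")}

def lock_slots(atoms):
    held = []
    for atom in sorted(atoms):  # deterministic global ordering
        slot_locks[atom].acquire()
        held.append(atom)
    return held

def unlock_slots(held):
    while held:
        slot_locks[held.pop()].release()

held = lock_slots(["sys-libs/bar:0", "app-misc/foo:0"])
unlock_slots(held)
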
+     def getpath(self):
+         "return path to location of db information (for >>> informational display)"
+         return self.dbdir
+ 
+     def exists(self):
+         "does the db entry exist?  boolean."
+         return os.path.exists(self.dbdir)
+ 
+     def delete(self):
+         """
+         Remove this entry from the database
+         """
+         try:
+             os.lstat(self.dbdir)
+         except OSError as e:
+             if e.errno not in (errno.ENOENT, errno.ENOTDIR, errno.ESTALE):
+                 raise
+             return
+ 
+         # Check validity of self.dbdir before attempting to remove it.
+         if not self.dbdir.startswith(self.dbroot):
+             writemsg(
+                 _("portage.dblink.delete(): invalid dbdir: %s\n") % self.dbdir,
+                 noiselevel=-1,
+             )
+             return
+ 
+         if self.dbdir is self.dbpkgdir:
+             (counter,) = self.vartree.dbapi.aux_get(self.mycpv, ["COUNTER"])
+             self.vartree.dbapi._cache_delta.recordEvent(
+                 "remove", self.mycpv, self.settings["SLOT"].split("/")[0], counter
+             )
+ 
+         shutil.rmtree(self.dbdir)
+         # If empty, remove parent category directory.
+         try:
+             os.rmdir(os.path.dirname(self.dbdir))
+         except OSError:
+             pass
+         self.vartree.dbapi._remove(self)
+ 
+         # Use self.dbroot since we need an existing path for syncfs.
+         try:
+             self._merged_path(self.dbroot, os.lstat(self.dbroot))
+         except OSError:
+             pass
+ 
+         self._post_merge_sync()
+ 
+     def clearcontents(self):
+         """
+         For a given db entry (self), erase the CONTENTS values.
+         """
+         self.lockdb()
+         try:
+             if os.path.exists(self.dbdir + "/CONTENTS"):
+                 os.unlink(self.dbdir + "/CONTENTS")
+         finally:
+             self.unlockdb()
+ 
+     def _clear_contents_cache(self):
+         self.contentscache = None
+         self._contents_inodes = None
+         self._contents_basenames = None
+         self._contents.clear_cache()
+ 
+     def getcontents(self):
+         """
+         Get the installed files of a given package (aka what that package installed)
+         """
+         if self.contentscache is not None:
+             return self.contentscache
+         contents_file = os.path.join(self.dbdir, "CONTENTS")
+         pkgfiles = {}
+         try:
+             with io.open(
+                 _unicode_encode(
+                     contents_file, encoding=_encodings["fs"], errors="strict"
+                 ),
+                 mode="r",
+                 encoding=_encodings["repo.content"],
+                 errors="replace",
+             ) as f:
+                 mylines = f.readlines()
+         except EnvironmentError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+             del e
+             self.contentscache = pkgfiles
+             return pkgfiles
+ 
+         null_byte = "\0"
+         normalize_needed = self._normalize_needed
+         contents_re = self._contents_re
+         obj_index = contents_re.groupindex["obj"]
+         dir_index = contents_re.groupindex["dir"]
+         sym_index = contents_re.groupindex["sym"]
+         # The old symlink format may exist on systems that have packages
+         # which were installed many years ago (see bug #351814).
+         oldsym_index = contents_re.groupindex["oldsym"]
+         # CONTENTS files already contain EPREFIX
+         myroot = self.settings["ROOT"]
+         if myroot == os.path.sep:
+             myroot = None
+         # used to generate parent dir entries
+         dir_entry = ("dir",)
+         eroot_split_len = len(self.settings["EROOT"].split(os.sep)) - 1
+         pos = 0
+         errors = []
+         for pos, line in enumerate(mylines):
+             if null_byte in line:
+                 # Null bytes are a common indication of corruption.
+                 errors.append((pos + 1, _("Null byte found in CONTENTS entry")))
+                 continue
+             line = line.rstrip("\n")
+             m = contents_re.match(line)
+             if m is None:
+                 errors.append((pos + 1, _("Unrecognized CONTENTS entry")))
+                 continue
+ 
+             if m.group(obj_index) is not None:
+                 base = obj_index
+                 # format: type, mtime, md5sum
+                 data = (m.group(base + 1), m.group(base + 4), m.group(base + 3))
+             elif m.group(dir_index) is not None:
+                 base = dir_index
+                 # format: type
+                 data = (m.group(base + 1),)
+             elif m.group(sym_index) is not None:
+                 base = sym_index
+                 if m.group(oldsym_index) is None:
+                     mtime = m.group(base + 5)
+                 else:
+                     mtime = m.group(base + 8)
+                 # format: type, mtime, dest
+                 data = (m.group(base + 1), mtime, m.group(base + 3))
+             else:
+                 # This won't happen as long as the regular expression
+                 # is written to only match valid entries.
+                 raise AssertionError(
+                     _("required group not found " + "in CONTENTS entry: '%s'") % line
+                 )
+ 
+             path = m.group(base + 2)
+             if normalize_needed.search(path) is not None:
+                 path = normalize_path(path)
+                 if not path.startswith(os.path.sep):
+                     path = os.path.sep + path
+ 
+             if myroot is not None:
+                 path = os.path.join(myroot, path.lstrip(os.path.sep))
+ 
+             # Implicitly add parent directories, since we can't necessarily
+             # assume that they are explicitly listed in CONTENTS, and it's
+             # useful for callers if they can rely on parent directory entries
+             # being generated here (crucial for things like dblink.isowner()).
+             path_split = path.split(os.sep)
+             path_split.pop()
+             while len(path_split) > eroot_split_len:
+                 parent = os.sep.join(path_split)
+                 if parent in pkgfiles:
+                     break
+                 pkgfiles[parent] = dir_entry
+                 path_split.pop()
+ 
+             pkgfiles[path] = data
+ 
+         if errors:
+             writemsg(_("!!! Parse error in '%s'\n") % contents_file, noiselevel=-1)
+             for pos, e in errors:
+                 writemsg(_("!!!   line %d: %s\n") % (pos, e), noiselevel=-1)
+         self.contentscache = pkgfiles
+         return pkgfiles
+ 
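# Illustrative sketch, not part of this diff: for ROOT="/", a CONTENTS file
# containing
#
#   dir /usr/bin
#   obj /usr/bin/foo d41d8cd98f00b204e9800998ecf8427e 1642156321
#   sym /usr/lib/libfoo.so -> libfoo.so.1 1642156321
#
# is parsed by getcontents() above into a mapping of this shape; parent
# directories such as /usr and /usr/lib are added implicitly:
expected = {
    "/usr": ("dir",),
    "/usr/bin": ("dir",),
    "/usr/bin/foo": ("obj", "1642156321", "d41d8cd98f00b204e9800998ecf8427e"),
    "/usr/lib": ("dir",),
    "/usr/lib/libfoo.so": ("sym", "1642156321", "libfoo.so.1"),
}
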
+     def quickpkg(
+         self, output_file, include_config=False, include_unmodified_config=False
+     ):
+         """
+         Create a tar file appropriate for use by quickpkg.
+ 
+         @param output_file: Write binary tar stream to file.
+         @type output_file: file
+         @param include_config: Include all files protected by CONFIG_PROTECT
+                 (as a security precaution, default is False).
+         @type include_config: bool
+         @param include_unmodified_config: Include files protected by CONFIG_PROTECT
+                 that have not been modified since installation (as a security precaution,
+                 default is False).
+         @type include_unmodified_config: bool
+         @rtype: list
+         @return: Paths of protected configuration files which have been omitted.
+         """
+         settings = self.settings
+         cpv = self.mycpv
+         xattrs = "xattr" in settings.features
+         contents = self.getcontents()
+         excluded_config_files = []
+         protect = None
+ 
+         if not include_config:
+             confprot = ConfigProtect(
+                 settings["EROOT"],
+                 portage.util.shlex_split(settings.get("CONFIG_PROTECT", "")),
+                 portage.util.shlex_split(settings.get("CONFIG_PROTECT_MASK", "")),
+                 case_insensitive=("case-insensitive-fs" in settings.features),
+             )
+ 
+             def protect(filename):
+                 if not confprot.isprotected(filename):
+                     return False
+                 if include_unmodified_config:
+                     file_data = contents[filename]
+                     if file_data[0] == "obj":
+                         orig_md5 = file_data[2].lower()
+                         cur_md5 = perform_md5(filename, calc_prelink=1)
+                         if orig_md5 == cur_md5:
+                             return False
+                 excluded_config_files.append(filename)
+                 return True
+ 
+         # The tarfile module will write pax headers holding the
+         # xattrs only if PAX_FORMAT is specified here.
+         with tarfile.open(
+             fileobj=output_file,
+             mode="w|",
+             format=tarfile.PAX_FORMAT if xattrs else tarfile.DEFAULT_FORMAT,
+         ) as tar:
+             tar_contents(
+                 contents, settings["ROOT"], tar, protect=protect, xattrs=xattrs
+             )
+ 
+         return excluded_config_files
+ 
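# Illustrative sketch, not part of this diff: rough shape of a quickpkg() call
# on an installed package's dblink; the package atom and output path are
# made up and no error handling is shown.
import portage
from portage.dbapi.vartree import dblink

eroot = portage.settings["EROOT"]
vartree = portage.db[eroot]["vartree"]
link = dblink("app-misc", "example-1.0", settings=vartree.settings,
              vartree=vartree, treetype="vartree")
with open("/tmp/example-1.0.tar", "wb") as out:
    # returns the protected config files that were left out of the archive
    excluded = link.quickpkg(out)
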
+     def _prune_plib_registry(self, unmerge=False, needed=None, preserve_paths=None):
+         # remove preserved libraries that don't have any consumers left
+         if not (
+             self._linkmap_broken
+             or self.vartree.dbapi._linkmap is None
+             or self.vartree.dbapi._plib_registry is None
+         ):
+             self.vartree.dbapi._fs_lock()
+             plib_registry = self.vartree.dbapi._plib_registry
+             plib_registry.lock()
+             try:
+                 plib_registry.load()
+ 
+                 unmerge_with_replacement = unmerge and preserve_paths is not None
+                 if unmerge_with_replacement:
+                     # If self.mycpv is about to be unmerged and we
+                     # have a replacement package, we want to exclude
+                     # the irrelevant NEEDED data that belongs to
+                     # files which are being unmerged now.
+                     exclude_pkgs = (self.mycpv,)
+                 else:
+                     exclude_pkgs = None
+ 
+                 self._linkmap_rebuild(
+                     exclude_pkgs=exclude_pkgs,
+                     include_file=needed,
+                     preserve_paths=preserve_paths,
+                 )
+ 
+                 if unmerge:
+                     unmerge_preserve = None
+                     if not unmerge_with_replacement:
+                         unmerge_preserve = self._find_libs_to_preserve(unmerge=True)
+                     counter = self.vartree.dbapi.cpv_counter(self.mycpv)
+                     try:
+                         slot = self.mycpv.slot
+                     except AttributeError:
+                         slot = _pkg_str(self.mycpv, slot=self.settings["SLOT"]).slot
+                     plib_registry.unregister(self.mycpv, slot, counter)
+                     if unmerge_preserve:
+                         for path in sorted(unmerge_preserve):
+                             contents_key = self._match_contents(path)
+                             if not contents_key:
+                                 continue
+                             obj_type = self.getcontents()[contents_key][0]
+                             self._display_merge(
+                                 _(">>> needed   %s %s\n") % (obj_type, contents_key),
+                                 noiselevel=-1,
+                             )
+                         plib_registry.register(
+                             self.mycpv, slot, counter, unmerge_preserve
+                         )
+                         # Remove the preserved files from our contents
+                         # so that they won't be unmerged.
+                         self.vartree.dbapi.removeFromContents(self, unmerge_preserve)
+ 
+                 unmerge_no_replacement = unmerge and not unmerge_with_replacement
+                 cpv_lib_map = self._find_unused_preserved_libs(unmerge_no_replacement)
+                 if cpv_lib_map:
+                     self._remove_preserved_libs(cpv_lib_map)
+                     self.vartree.dbapi.lock()
+                     try:
+                         for cpv, removed in cpv_lib_map.items():
+                             if not self.vartree.dbapi.cpv_exists(cpv):
+                                 continue
+                             self.vartree.dbapi.removeFromContents(cpv, removed)
+                     finally:
+                         self.vartree.dbapi.unlock()
+ 
+                 plib_registry.store()
+             finally:
+                 plib_registry.unlock()
+                 self.vartree.dbapi._fs_unlock()
+ 
+     @_slot_locked
+     def unmerge(
+         self,
+         pkgfiles=None,
+         trimworld=None,
+         cleanup=True,
+         ldpath_mtimes=None,
+         others_in_slot=None,
+         needed=None,
+         preserve_paths=None,
+     ):
+         """
+         Calls prerm
+         Unmerges a given package (CPV)
+         calls postrm
+         calls cleanrm
+         calls env_update
+ 
+         @param pkgfiles: files to unmerge (generally self.getcontents() )
+         @type pkgfiles: Dictionary
+         @param trimworld: Unused
+         @type trimworld: Boolean
+         @param cleanup: cleanup to pass to doebuild (see doebuild)
+         @type cleanup: Boolean
+         @param ldpath_mtimes: mtimes to pass to env_update (see env_update)
+         @type ldpath_mtimes: Dictionary
+         @param others_in_slot: all dblink instances in this slot, excluding self
+         @type others_in_slot: list
+         @param needed: Filename containing libraries needed after unmerge.
+         @type needed: String
+         @param preserve_paths: Libraries preserved by a package instance that
+                 is currently being merged. They need to be explicitly passed to the
+                 LinkageMap, since they are not registered in the
+                 PreservedLibsRegistry yet.
+         @type preserve_paths: set
+         @rtype: Integer
+         @return:
+         1. os.EX_OK if everything went well.
+         2. return code of the failed phase (for prerm, postrm, cleanrm)
+         """
+ 
+         if trimworld is not None:
+             warnings.warn(
+                 "The trimworld parameter of the "
+                 + "portage.dbapi.vartree.dblink.unmerge()"
+                 + " method is now unused.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         background = False
+         log_path = self.settings.get("PORTAGE_LOG_FILE")
+         if self._scheduler is None:
+             # We create a scheduler instance and use it to
+             # log unmerge output separately from merge output.
+             self._scheduler = SchedulerInterface(asyncio._safe_loop())
+         if self.settings.get("PORTAGE_BACKGROUND") == "subprocess":
+             if self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "1":
+                 self.settings["PORTAGE_BACKGROUND"] = "1"
+                 self.settings.backup_changes("PORTAGE_BACKGROUND")
+                 background = True
+             elif self.settings.get("PORTAGE_BACKGROUND_UNMERGE") == "0":
+                 self.settings["PORTAGE_BACKGROUND"] = "0"
+                 self.settings.backup_changes("PORTAGE_BACKGROUND")
+         elif self.settings.get("PORTAGE_BACKGROUND") == "1":
+             background = True
+ 
+         self.vartree.dbapi._bump_mtime(self.mycpv)
+         showMessage = self._display_merge
+         if self.vartree.dbapi._categories is not None:
+             self.vartree.dbapi._categories = None
+ 
+         # When others_in_slot is not None, the backup has already been
+         # handled by the caller.
+         caller_handles_backup = others_in_slot is not None
+ 
+         # When others_in_slot is supplied, the security check has already been
+         # done for this slot, so it shouldn't be repeated until the next
+         # replacement or unmerge operation.
+         if others_in_slot is None:
+             slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
+             slot_matches = self.vartree.dbapi.match(
+                 "%s:%s" % (portage.cpv_getkey(self.mycpv), slot)
+             )
+             others_in_slot = []
+             for cur_cpv in slot_matches:
+                 if cur_cpv == self.mycpv:
+                     continue
+                 others_in_slot.append(
+                     dblink(
+                         self.cat,
+                         catsplit(cur_cpv)[1],
+                         settings=self.settings,
+                         vartree=self.vartree,
+                         treetype="vartree",
+                         pipe=self._pipe,
+                     )
+                 )
+ 
+             retval = self._security_check([self] + others_in_slot)
+             if retval:
+                 return retval
+ 
+         contents = self.getcontents()
+         # Now, don't assume that the name of the ebuild is the same as the
+         # name of the dir; the package may have been moved.
+         myebuildpath = os.path.join(self.dbdir, self.pkg + ".ebuild")
+         failures = 0
+         ebuild_phase = "prerm"
+         mystuff = os.listdir(self.dbdir)
+         for x in mystuff:
+             if x.endswith(".ebuild"):
+                 if x[:-7] != self.pkg:
+                     # Clean up after vardbapi.move_ent() breakage in
+                     # portage versions before 2.1.2
+                     os.rename(os.path.join(self.dbdir, x), myebuildpath)
+                     write_atomic(os.path.join(self.dbdir, "PF"), self.pkg + "\n")
+                 break
+ 
+         if (
+             self.mycpv != self.settings.mycpv
+             or "EAPI" not in self.settings.configdict["pkg"]
+         ):
+             # We avoid a redundant setcpv call here when
+             # the caller has already taken care of it.
+             self.settings.setcpv(self.mycpv, mydb=self.vartree.dbapi)
+ 
+         eapi_unsupported = False
+         try:
+             doebuild_environment(
+                 myebuildpath, "prerm", settings=self.settings, db=self.vartree.dbapi
+             )
+         except UnsupportedAPIException as e:
+             eapi_unsupported = e
+ 
+         if (
+             self._preserve_libs
+             and "preserve-libs" in self.settings["PORTAGE_RESTRICT"].split()
+         ):
+             self._preserve_libs = False
+ 
+         builddir_lock = None
+         scheduler = self._scheduler
+         retval = os.EX_OK
+         try:
+             # Only create builddir_lock if the caller
+             # has not already acquired the lock.
+             if "PORTAGE_BUILDDIR_LOCKED" not in self.settings:
+                 builddir_lock = EbuildBuildDir(
+                     scheduler=scheduler, settings=self.settings
+                 )
+                 scheduler.run_until_complete(builddir_lock.async_lock())
+                 prepare_build_dirs(settings=self.settings, cleanup=True)
+                 log_path = self.settings.get("PORTAGE_LOG_FILE")
+ 
+             # Do this before the following _prune_plib_registry call, since
+             # that removes preserved libraries from our CONTENTS, and we
+             # may want to backup those libraries first.
+             if not caller_handles_backup:
+                 retval = self._pre_unmerge_backup(background)
+                 if retval != os.EX_OK:
+                     showMessage(
+                         _("!!! FAILED prerm: quickpkg: %s\n") % retval,
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+                     return retval
+ 
+             self._prune_plib_registry(
+                 unmerge=True, needed=needed, preserve_paths=preserve_paths
+             )
+ 
+             # Log the error after PORTAGE_LOG_FILE is initialized
+             # by prepare_build_dirs above.
+             if eapi_unsupported:
+                 # Sometimes this happens due to corruption of the EAPI file.
+                 failures += 1
+                 showMessage(
+                     _("!!! FAILED prerm: %s\n") % os.path.join(self.dbdir, "EAPI"),
+                     level=logging.ERROR,
+                     noiselevel=-1,
+                 )
+                 showMessage(
+                     "%s\n" % (eapi_unsupported,), level=logging.ERROR, noiselevel=-1
+                 )
+             elif os.path.isfile(myebuildpath):
+                 phase = EbuildPhase(
+                     background=background,
+                     phase=ebuild_phase,
+                     scheduler=scheduler,
+                     settings=self.settings,
+                 )
+                 phase.start()
+                 retval = phase.wait()
+ 
+                 # XXX: Decide how to handle failures here.
+                 if retval != os.EX_OK:
+                     failures += 1
+                     showMessage(
+                         _("!!! FAILED prerm: %s\n") % retval,
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+ 
+             self.vartree.dbapi._fs_lock()
+             try:
+                 self._unmerge_pkgfiles(pkgfiles, others_in_slot)
+             finally:
+                 self.vartree.dbapi._fs_unlock()
+             self._clear_contents_cache()
+ 
+             if not eapi_unsupported and os.path.isfile(myebuildpath):
+                 ebuild_phase = "postrm"
+                 phase = EbuildPhase(
+                     background=background,
+                     phase=ebuild_phase,
+                     scheduler=scheduler,
+                     settings=self.settings,
+                 )
+                 phase.start()
+                 retval = phase.wait()
+ 
+                 # XXX: Decide how to handle failures here.
+                 if retval != os.EX_OK:
+                     failures += 1
+                     showMessage(
+                         _("!!! FAILED postrm: %s\n") % retval,
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+ 
+         finally:
+             self.vartree.dbapi._bump_mtime(self.mycpv)
+             try:
+                 if not eapi_unsupported and os.path.isfile(myebuildpath):
+                     if retval != os.EX_OK:
+                         msg_lines = []
+                         msg = _(
+                             "The '%(ebuild_phase)s' "
+                             "phase of the '%(cpv)s' package "
+                             "has failed with exit value %(retval)s."
+                         ) % {
+                             "ebuild_phase": ebuild_phase,
+                             "cpv": self.mycpv,
+                             "retval": retval,
+                         }
+                         from textwrap import wrap
+ 
+                         msg_lines.extend(wrap(msg, 72))
+                         msg_lines.append("")
+ 
+                         ebuild_name = os.path.basename(myebuildpath)
+                         ebuild_dir = os.path.dirname(myebuildpath)
+                         msg = _(
+                             "The problem occurred while executing "
+                             "the ebuild file named '%(ebuild_name)s' "
+                             "located in the '%(ebuild_dir)s' directory. "
+                             "If necessary, manually remove "
+                             "the environment.bz2 file and/or the "
+                             "ebuild file located in that directory."
+                         ) % {"ebuild_name": ebuild_name, "ebuild_dir": ebuild_dir}
+                         msg_lines.extend(wrap(msg, 72))
+                         msg_lines.append("")
+ 
+                         msg = _(
+                             "Removal "
+                             "of the environment.bz2 file is "
+                             "preferred since it may allow the "
+                             "removal phases to execute successfully. "
+                             "The ebuild will be "
+                             "sourced and the eclasses "
+                             "from the current ebuild repository will be used "
+                             "when necessary. Removal of "
+                             "the ebuild file will cause the "
+                             "pkg_prerm() and pkg_postrm() removal "
+                             "phases to be skipped entirely."
+                         )
+                         msg_lines.extend(wrap(msg, 72))
+ 
+                         self._eerror(ebuild_phase, msg_lines)
+ 
+                 self._elog_process(phasefilter=("prerm", "postrm"))
+ 
+                 if retval == os.EX_OK:
+                     try:
+                         doebuild_environment(
+                             myebuildpath,
+                             "cleanrm",
+                             settings=self.settings,
+                             db=self.vartree.dbapi,
+                         )
+                     except UnsupportedAPIException:
+                         pass
+                     phase = EbuildPhase(
+                         background=background,
+                         phase="cleanrm",
+                         scheduler=scheduler,
+                         settings=self.settings,
+                     )
+                     phase.start()
+                     retval = phase.wait()
+             finally:
+                 if builddir_lock is not None:
+                     scheduler.run_until_complete(builddir_lock.async_unlock())
+ 
+         if log_path is not None:
+ 
+             if not failures and "unmerge-logs" not in self.settings.features:
+                 try:
+                     os.unlink(log_path)
+                 except OSError:
+                     pass
+ 
+             try:
+                 st = os.stat(log_path)
+             except OSError:
+                 pass
+             else:
+                 if st.st_size == 0:
+                     try:
+                         os.unlink(log_path)
+                     except OSError:
+                         pass
+ 
+         if log_path is not None and os.path.exists(log_path):
+             # Restore this since it gets lost somewhere above and it
+             # needs to be set for _display_merge() to be able to log.
+             # Note that the log isn't necessarily supposed to exist
+             # since if PORTAGE_LOGDIR is unset then it's a temp file
+             # so it gets cleaned above.
+             self.settings["PORTAGE_LOG_FILE"] = log_path
+         else:
+             self.settings.pop("PORTAGE_LOG_FILE", None)
+ 
+         env_update(
+             target_root=self.settings["ROOT"],
+             prev_mtimes=ldpath_mtimes,
+             contents=contents,
+             env=self.settings,
+             writemsg_level=self._display_merge,
+             vardbapi=self.vartree.dbapi,
+         )
+ 
+         unmerge_with_replacement = preserve_paths is not None
+         if not unmerge_with_replacement:
+             # When there's a replacement package which calls us via treewalk,
+             # treewalk will automatically call _prune_plib_registry for us.
+             # Otherwise, we need to call _prune_plib_registry ourselves.
+             # Don't pass in the "unmerge=True" flag here, since that flag
+             # is intended to be used _prior_ to unmerge, not after.
+             self._prune_plib_registry()
+ 
+         return os.EX_OK
+ 
+     def _display_merge(self, msg, level=0, noiselevel=0):
+         if not self._verbose and noiselevel >= 0 and level < logging.WARN:
+             return
+         if self._scheduler is None:
+             writemsg_level(msg, level=level, noiselevel=noiselevel)
+         else:
+             log_path = None
+             if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
+                 log_path = self.settings.get("PORTAGE_LOG_FILE")
+             background = self.settings.get("PORTAGE_BACKGROUND") == "1"
+ 
+             if background and log_path is None:
+                 if level >= logging.WARN:
+                     writemsg_level(msg, level=level, noiselevel=noiselevel)
+             else:
+                 self._scheduler.output(
+                     msg,
+                     log_path=log_path,
+                     background=background,
+                     level=level,
+                     noiselevel=noiselevel,
+                 )
+ 
+     def _show_unmerge(self, zing, desc, file_type, file_name):
+         self._display_merge(
+             "%s %s %s %s\n" % (zing, desc.ljust(8), file_type, file_name)
+         )
+ 
+     def _unmerge_pkgfiles(self, pkgfiles, others_in_slot):
+         """
+ 
+         Unmerges the contents of a package from the liveFS
+         Removes the VDB entry for self
+ 
+         @param pkgfiles: typically self.getcontents()
+         @type pkgfiles: Dictionary { filename: [ 'type', '?', 'md5sum' ] }
+         @param others_in_slot: all dblink instances in this slot, excluding self
+         @type others_in_slot: list
+         @rtype: None
+         """
+ 
+         os = _os_merge
+         perf_md5 = perform_md5
+         showMessage = self._display_merge
+         show_unmerge = self._show_unmerge
+         ignored_unlink_errnos = self._ignored_unlink_errnos
+         ignored_rmdir_errnos = self._ignored_rmdir_errnos
+ 
+         if not pkgfiles:
+             showMessage(_("No package files given... Grabbing a set.\n"))
+             pkgfiles = self.getcontents()
+ 
+         if others_in_slot is None:
+             others_in_slot = []
+             slot = self.vartree.dbapi._pkg_str(self.mycpv, None).slot
+             slot_matches = self.vartree.dbapi.match(
+                 "%s:%s" % (portage.cpv_getkey(self.mycpv), slot)
+             )
+             for cur_cpv in slot_matches:
+                 if cur_cpv == self.mycpv:
+                     continue
+                 others_in_slot.append(
+                     dblink(
+                         self.cat,
+                         catsplit(cur_cpv)[1],
+                         settings=self.settings,
+                         vartree=self.vartree,
+                         treetype="vartree",
+                         pipe=self._pipe,
+                     )
+                 )
+ 
+         cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
+         stale_confmem = []
+         protected_symlinks = {}
+ 
+         unmerge_orphans = "unmerge-orphans" in self.settings.features
+         calc_prelink = "prelink-checksums" in self.settings.features
+ 
+         if pkgfiles:
+             self.updateprotect()
+             mykeys = list(pkgfiles)
+             mykeys.sort()
+             mykeys.reverse()
+ 
+             # process symlinks second-to-last, directories last.
+             mydirs = set()
+ 
+             uninstall_ignore = portage.util.shlex_split(
+                 self.settings.get("UNINSTALL_IGNORE", "")
+             )
+ 
+             def unlink(file_name, lstatobj):
+                 if bsd_chflags:
+                     if lstatobj.st_flags != 0:
+                         bsd_chflags.lchflags(file_name, 0)
+                     parent_name = os.path.dirname(file_name)
+                     # Use normal stat/chflags for the parent since we want to
+                     # follow any symlinks to the real parent directory.
+                     pflags = os.stat(parent_name).st_flags
+                     if pflags != 0:
+                         bsd_chflags.chflags(parent_name, 0)
+                 try:
+                     if not stat.S_ISLNK(lstatobj.st_mode):
+                         # Remove permissions to ensure that any hardlinks to
+                         # suid/sgid files are rendered harmless.
+                         os.chmod(file_name, 0)
+                     os.unlink(file_name)
+                 except OSError as ose:
+                     # If the chmod or unlink fails, you are in trouble.
+                     # With Prefix this can be because the file is owned
+                     # by someone else (a screwup by root?), on a normal
+                     # system maybe filesystem corruption.  In any case,
+                     # if we backtrace and die here, we leave the system
+                     # in a totally undefined state, hence we just bleed
+                     # like hell and continue to hopefully finish all our
+                     # administrative and pkg_postinst stuff.
+                     self._eerror(
+                         "postrm",
+                         ["Could not chmod or unlink '%s': %s" % (file_name, ose)],
+                     )
+                 else:
+ 
+                     # Even though the file no longer exists, we log it
+                     # here so that _unmerge_dirs can see that we've
+                     # removed a file from this device, and will record
+                     # the parent directory for a syncfs call.
+                     self._merged_path(file_name, lstatobj, exists=False)
+ 
+                 finally:
+                     if bsd_chflags and pflags != 0:
+                         # Restore the parent flags we saved before unlinking
+                         bsd_chflags.chflags(parent_name, pflags)
+ 
+             unmerge_desc = {}
+             unmerge_desc["cfgpro"] = _("cfgpro")
+             unmerge_desc["replaced"] = _("replaced")
+             unmerge_desc["!dir"] = _("!dir")
+             unmerge_desc["!empty"] = _("!empty")
+             unmerge_desc["!fif"] = _("!fif")
+             unmerge_desc["!found"] = _("!found")
+             unmerge_desc["!md5"] = _("!md5")
+             unmerge_desc["!mtime"] = _("!mtime")
+             unmerge_desc["!obj"] = _("!obj")
+             unmerge_desc["!sym"] = _("!sym")
+             unmerge_desc["!prefix"] = _("!prefix")
+ 
+             real_root = self.settings["ROOT"]
+             real_root_len = len(real_root) - 1
+             eroot = self.settings["EROOT"]
+ 
+             infodirs = frozenset(
+                 infodir
+                 for infodir in chain(
+                     self.settings.get("INFOPATH", "").split(":"),
+                     self.settings.get("INFODIR", "").split(":"),
+                 )
+                 if infodir
+             )
+             infodirs_inodes = set()
+             for infodir in infodirs:
+                 infodir = os.path.join(real_root, infodir.lstrip(os.sep))
+                 try:
+                     statobj = os.stat(infodir)
+                 except OSError:
+                     pass
+                 else:
+                     infodirs_inodes.add((statobj.st_dev, statobj.st_ino))
+ 
+             for i, objkey in enumerate(mykeys):
+ 
+                 obj = normalize_path(objkey)
+                 if os is _os_merge:
+                     try:
+                         _unicode_encode(
+                             obj, encoding=_encodings["merge"], errors="strict"
+                         )
+                     except UnicodeEncodeError:
+                         # The package appears to have been merged with a
+                         # different value of sys.getfilesystemencoding(),
+                         # so fall back to utf_8 if appropriate.
+                         try:
+                             _unicode_encode(
+                                 obj, encoding=_encodings["fs"], errors="strict"
+                             )
+                         except UnicodeEncodeError:
+                             pass
+                         else:
+                             os = portage.os
+                             perf_md5 = portage.checksum.perform_md5
+ 
+                 file_data = pkgfiles[objkey]
+                 file_type = file_data[0]
+ 
+                 # don't try to unmerge the prefix offset itself
+                 if len(obj) <= len(eroot) or not obj.startswith(eroot):
+                     show_unmerge("---", unmerge_desc["!prefix"], file_type, obj)
+                     continue
+ 
+                 statobj = None
+                 try:
+                     statobj = os.stat(obj)
+                 except OSError:
+                     pass
+                 lstatobj = None
+                 try:
+                     lstatobj = os.lstat(obj)
+                 except (OSError, AttributeError):
+                     pass
+                 islink = lstatobj is not None and stat.S_ISLNK(lstatobj.st_mode)
+                 if lstatobj is None:
+                     show_unmerge("---", unmerge_desc["!found"], file_type, obj)
+                     continue
+ 
+                 f_match = obj[len(eroot) - 1 :]
+                 ignore = False
+                 for pattern in uninstall_ignore:
+                     if fnmatch.fnmatch(f_match, pattern):
+                         ignore = True
+                         break
+ 
+                 if not ignore:
+                     if islink and f_match in ("/lib", "/usr/lib", "/usr/local/lib"):
+                         # Ignore libdir symlinks for bug #423127.
+                         ignore = True
+ 
+                 if ignore:
+                     show_unmerge("---", unmerge_desc["cfgpro"], file_type, obj)
+                     continue
+ 
+                 # don't use EROOT, CONTENTS entries already contain EPREFIX
+                 if obj.startswith(real_root):
+                     relative_path = obj[real_root_len:]
+                     is_owned = False
+                     for dblnk in others_in_slot:
+                         if dblnk.isowner(relative_path):
+                             is_owned = True
+                             break
+ 
+                     if (
+                         is_owned
+                         and islink
+                         and file_type in ("sym", "dir")
+                         and statobj
+                         and stat.S_ISDIR(statobj.st_mode)
+                     ):
+                         # A new instance of this package claims the file, so
+                         # don't unmerge it. If the file is symlink to a
+                         # directory and the unmerging package installed it as
+                         # a symlink, but the new owner has it listed as a
+                         # directory, then we'll produce a warning since the
+                         # symlink is a sort of orphan in this case (see
+                         # bug #326685).
+                         symlink_orphan = False
+                         for dblnk in others_in_slot:
+                             parent_contents_key = dblnk._match_contents(relative_path)
+                             if not parent_contents_key:
+                                 continue
+                             if not parent_contents_key.startswith(real_root):
+                                 continue
+                             if dblnk.getcontents()[parent_contents_key][0] == "dir":
+                                 symlink_orphan = True
+                                 break
+ 
+                         if symlink_orphan:
+                             protected_symlinks.setdefault(
+                                 (statobj.st_dev, statobj.st_ino), []
+                             ).append(relative_path)
+ 
+                     if is_owned:
+                         show_unmerge("---", unmerge_desc["replaced"], file_type, obj)
+                         continue
+                     elif relative_path in cfgfiledict:
+                         stale_confmem.append(relative_path)
+ 
+                 # Don't unlink symlinks to directories here since that can
+                 # remove /lib and /usr/lib symlinks.
+                 if (
+                     unmerge_orphans
+                     and lstatobj
+                     and not stat.S_ISDIR(lstatobj.st_mode)
+                     and not (islink and statobj and stat.S_ISDIR(statobj.st_mode))
+                     and not self.isprotected(obj)
+                 ):
+                     try:
+                         unlink(obj, lstatobj)
+                     except EnvironmentError as e:
+                         if e.errno not in ignored_unlink_errnos:
+                             raise
+                         del e
+                     show_unmerge("<<<", "", file_type, obj)
+                     continue
+ 
+                 lmtime = str(lstatobj[stat.ST_MTIME])
+                 if (pkgfiles[objkey][0] not in ("dir", "fif", "dev")) and (
+                     lmtime != pkgfiles[objkey][1]
+                 ):
+                     show_unmerge("---", unmerge_desc["!mtime"], file_type, obj)
+                     continue
+ 
+                 if file_type == "dir" and not islink:
+                     if lstatobj is None or not stat.S_ISDIR(lstatobj.st_mode):
+                         show_unmerge("---", unmerge_desc["!dir"], file_type, obj)
+                         continue
+                     mydirs.add((obj, (lstatobj.st_dev, lstatobj.st_ino)))
+                 elif file_type == "sym" or (file_type == "dir" and islink):
+                     if not islink:
+                         show_unmerge("---", unmerge_desc["!sym"], file_type, obj)
+                         continue
+ 
+                     # If this symlink points to a directory then we don't want
+                     # to unmerge it if there are any other packages that
+                     # installed files into the directory via this symlink
+                     # (see bug #326685).
+                     # TODO: Resolving a symlink to a directory will require
+                     # simulation if $ROOT != / and the link is not relative.
+                     if (
+                         islink
+                         and statobj
+                         and stat.S_ISDIR(statobj.st_mode)
+                         and obj.startswith(real_root)
+                     ):
+ 
+                         relative_path = obj[real_root_len:]
+                         try:
+                             target_dir_contents = os.listdir(obj)
+                         except OSError:
+                             pass
+                         else:
+                             if target_dir_contents:
+                                 # If all the children are regular files owned
+                                 # by this package, then the symlink should be
+                                 # safe to unmerge.
+                                 all_owned = True
+                                 for child in target_dir_contents:
+                                     child = os.path.join(relative_path, child)
+                                     if not self.isowner(child):
+                                         all_owned = False
+                                         break
+                                     try:
+                                         child_lstat = os.lstat(
+                                             os.path.join(
+                                                 real_root, child.lstrip(os.sep)
+                                             )
+                                         )
+                                     except OSError:
+                                         continue
+ 
+                                     if not stat.S_ISREG(child_lstat.st_mode):
+                                         # Nested symlinks or directories make
+                                         # the issue very complex, so just
+                                         # preserve the symlink in order to be
+                                         # on the safe side.
+                                         all_owned = False
+                                         break
+ 
+                                 if not all_owned:
+                                     protected_symlinks.setdefault(
+                                         (statobj.st_dev, statobj.st_ino), []
+                                     ).append(relative_path)
+                                     show_unmerge(
+                                         "---", unmerge_desc["!empty"], file_type, obj
+                                     )
+                                     continue
+ 
+                     # Go ahead and unlink symlinks to directories here when
+                     # they're actually recorded as symlinks in the contents.
+                     # Normally, symlinks such as /lib -> lib64 are not recorded
+                     # as symlinks in the contents of a package.  If a package
+                     # installs something into ${D}/lib/, it is recorded in the
+                     # contents as a directory even if it happens to correspond
+                     # to a symlink when it's merged to the live filesystem.
+                     try:
+                         unlink(obj, lstatobj)
+                         show_unmerge("<<<", "", file_type, obj)
+                     except (OSError, IOError) as e:
+                         if e.errno not in ignored_unlink_errnos:
+                             raise
+                         del e
+                         show_unmerge("!!!", "", file_type, obj)
+                 elif pkgfiles[objkey][0] == "obj":
+                     if statobj is None or not stat.S_ISREG(statobj.st_mode):
+                         show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
+                         continue
+                     mymd5 = None
+                     try:
+                         mymd5 = perf_md5(obj, calc_prelink=calc_prelink)
+                     except FileNotFound as e:
+                         # the file disappeared between our stat call and now
+                         show_unmerge("---", unmerge_desc["!obj"], file_type, obj)
+                         continue
+ 
+                     # Compare in lower case because db entries used to be in upper-case;
+                     # lower() preserves backwards compatibility.
+                     if mymd5 != pkgfiles[objkey][2].lower():
+                         show_unmerge("---", unmerge_desc["!md5"], file_type, obj)
+                         continue
+                     try:
+                         unlink(obj, lstatobj)
+                     except (OSError, IOError) as e:
+                         if e.errno not in ignored_unlink_errnos:
+                             raise
+                         del e
+                     show_unmerge("<<<", "", file_type, obj)
+                 elif pkgfiles[objkey][0] == "fif":
+                     if not stat.S_ISFIFO(lstatobj[stat.ST_MODE]):
+                         show_unmerge("---", unmerge_desc["!fif"], file_type, obj)
+                         continue
+                     show_unmerge("---", "", file_type, obj)
+                 elif pkgfiles[objkey][0] == "dev":
+                     show_unmerge("---", "", file_type, obj)
+ 
+             self._unmerge_dirs(
+                 mydirs, infodirs_inodes, protected_symlinks, unmerge_desc, unlink, os
+             )
+             mydirs.clear()
+ 
+         if protected_symlinks:
+             self._unmerge_protected_symlinks(
+                 others_in_slot,
+                 infodirs_inodes,
+                 protected_symlinks,
+                 unmerge_desc,
+                 unlink,
+                 os,
+             )
+ 
+         if protected_symlinks:
+             msg = (
+                 "One or more symlinks to directories have been "
+                 + "preserved in order to ensure that files installed "
+                 + "via these symlinks remain accessible. "
+                 + "This indicates that the mentioned symlink(s) may "
+                 + "be obsolete remnants of an old install, and it "
+                 + "may be appropriate to replace a given symlink "
+                 + "with the directory that it points to."
+             )
+             lines = textwrap.wrap(msg, 72)
+             lines.append("")
+             flat_list = set()
+             flat_list.update(*protected_symlinks.values())
+             flat_list = sorted(flat_list)
+             for f in flat_list:
+                 lines.append("\t%s" % (os.path.join(real_root, f.lstrip(os.sep))))
+             lines.append("")
+             self._elog("elog", "postrm", lines)
+ 
+         # Remove stale entries from config memory.
+         if stale_confmem:
+             for filename in stale_confmem:
+                 del cfgfiledict[filename]
+             writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
+ 
+         # remove self from vartree database so that our own virtual gets zapped if we're the last node
+         self.vartree.zap(self.mycpv)
+ 
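The unmerge loop above compares each candidate path, taken relative to EROOT with its leading slash kept, against the fnmatch patterns from UNINSTALL_IGNORE before deciding whether to remove it. A minimal sketch of that check in isolation; the pattern and path values here are illustrative only, the real ones come from make.conf/make.globals and the package CONTENTS:

    import fnmatch

    # Illustrative values; portage takes the patterns from UNINSTALL_IGNORE.
    uninstall_ignore = ["/lib/modules/*"]
    f_match = "/lib/modules/5.15.0/kernel/example.ko"  # EROOT-relative, leading slash kept
    ignored = any(fnmatch.fnmatch(f_match, pat) for pat in uninstall_ignore)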
+     def _unmerge_protected_symlinks(
+         self,
+         others_in_slot,
+         infodirs_inodes,
+         protected_symlinks,
+         unmerge_desc,
+         unlink,
+         os,
+     ):
+ 
+         real_root = self.settings["ROOT"]
+         show_unmerge = self._show_unmerge
+         ignored_unlink_errnos = self._ignored_unlink_errnos
+ 
+         flat_list = set()
+         flat_list.update(*protected_symlinks.values())
+         flat_list = sorted(flat_list)
+ 
+         for f in flat_list:
+             for dblnk in others_in_slot:
+                 if dblnk.isowner(f):
+                     # If another package in the same slot installed
+                     # a file via a protected symlink, return early
+                     # and don't bother searching for any other owners.
+                     return
+ 
+         msg = []
+         msg.append("")
+         msg.append(_("Directory symlink(s) may need protection:"))
+         msg.append("")
+ 
+         for f in flat_list:
+             msg.append("\t%s" % os.path.join(real_root, f.lstrip(os.path.sep)))
+ 
+         msg.append("")
+         msg.append("Use the UNINSTALL_IGNORE variable to exempt specific symlinks")
+         msg.append("from the following search (see the make.conf man page).")
+         msg.append("")
+         msg.append(
+             _(
+                 "Searching all installed"
+                 " packages for files installed via above symlink(s)..."
+             )
+         )
+         msg.append("")
+         self._elog("elog", "postrm", msg)
+ 
+         self.lockdb()
+         try:
+             owners = self.vartree.dbapi._owners.get_owners(flat_list)
+             self.vartree.dbapi.flush_cache()
+         finally:
+             self.unlockdb()
+ 
+         for owner in list(owners):
+             if owner.mycpv == self.mycpv:
+                 owners.pop(owner, None)
+ 
+         if not owners:
+             msg = []
+             msg.append(
+                 _(
+                     "The above directory symlink(s) are all "
+                     "safe to remove. Removing them now..."
+                 )
+             )
+             msg.append("")
+             self._elog("elog", "postrm", msg)
+             dirs = set()
+             for unmerge_syms in protected_symlinks.values():
+                 for relative_path in unmerge_syms:
+                     obj = os.path.join(real_root, relative_path.lstrip(os.sep))
+                     parent = os.path.dirname(obj)
+                     while len(parent) > len(self._eroot):
+                         try:
+                             lstatobj = os.lstat(parent)
+                         except OSError:
+                             break
+                         else:
+                             dirs.add((parent, (lstatobj.st_dev, lstatobj.st_ino)))
+                             parent = os.path.dirname(parent)
+                     try:
+                         unlink(obj, os.lstat(obj))
+                         show_unmerge("<<<", "", "sym", obj)
+                     except (OSError, IOError) as e:
+                         if e.errno not in ignored_unlink_errnos:
+                             raise
+                         del e
+                         show_unmerge("!!!", "", "sym", obj)
+ 
+             protected_symlinks.clear()
+             self._unmerge_dirs(
+                 dirs, infodirs_inodes, protected_symlinks, unmerge_desc, unlink, os
+             )
+             dirs.clear()
+ 
+     def _unmerge_dirs(
+         self, dirs, infodirs_inodes, protected_symlinks, unmerge_desc, unlink, os
+     ):
+ 
+         show_unmerge = self._show_unmerge
+         infodir_cleanup = self._infodir_cleanup
+         ignored_unlink_errnos = self._ignored_unlink_errnos
+         ignored_rmdir_errnos = self._ignored_rmdir_errnos
+         real_root = self.settings["ROOT"]
+ 
+         dirs = sorted(dirs)
+         revisit = {}
+ 
+         while True:
+             try:
+                 obj, inode_key = dirs.pop()
+             except IndexError:
+                 break
+             # Treat any directory named "info" as a candidate here,
+             # since it might have been in INFOPATH previously even
+             # though it may not be there now.
+             if inode_key in infodirs_inodes or os.path.basename(obj) == "info":
+                 try:
+                     remaining = os.listdir(obj)
+                 except OSError:
+                     pass
+                 else:
+                     cleanup_info_dir = ()
+                     if remaining and len(remaining) <= len(infodir_cleanup):
+                         if not set(remaining).difference(infodir_cleanup):
+                             cleanup_info_dir = remaining
+ 
+                     for child in cleanup_info_dir:
+                         child = os.path.join(obj, child)
+                         try:
+                             lstatobj = os.lstat(child)
+                             if stat.S_ISREG(lstatobj.st_mode):
+                                 unlink(child, lstatobj)
+                                 show_unmerge("<<<", "", "obj", child)
+                         except EnvironmentError as e:
+                             if e.errno not in ignored_unlink_errnos:
+                                 raise
+                             del e
+                             show_unmerge("!!!", "", "obj", child)
+ 
+             try:
+                 parent_name = os.path.dirname(obj)
+                 parent_stat = os.stat(parent_name)
+ 
+                 if bsd_chflags:
+                     lstatobj = os.lstat(obj)
+                     if lstatobj.st_flags != 0:
+                         bsd_chflags.lchflags(obj, 0)
+ 
+                     # Use normal stat/chflags for the parent since we want to
+                     # follow any symlinks to the real parent directory.
+                     pflags = parent_stat.st_flags
+                     if pflags != 0:
+                         bsd_chflags.chflags(parent_name, 0)
+                 try:
+                     os.rmdir(obj)
+                 finally:
+                     if bsd_chflags and pflags != 0:
+                         # Restore the parent flags we saved before unlinking
+                         bsd_chflags.chflags(parent_name, pflags)
+ 
+                 # Record the parent directory for use in syncfs calls.
+                 # Note that we use a realpath and a regular stat here, since
+                 # we want to follow any symlinks back to the real device where
+                 # the real parent directory resides.
+                 self._merged_path(os.path.realpath(parent_name), parent_stat)
+ 
+                 show_unmerge("<<<", "", "dir", obj)
+             except EnvironmentError as e:
+                 if e.errno not in ignored_rmdir_errnos:
+                     raise
+                 if e.errno != errno.ENOENT:
+                     show_unmerge("---", unmerge_desc["!empty"], "dir", obj)
+                     revisit[obj] = inode_key
+ 
+                 # Since we didn't remove this directory, record the directory
+                 # itself for use in syncfs calls, if we have removed another
+                 # file from the same device.
+                 # Note that we use a realpath and a regular stat here, since
+                 # we want to follow any symlinks back to the real device where
+                 # the real directory resides.
+                 try:
+                     dir_stat = os.stat(obj)
+                 except OSError:
+                     pass
+                 else:
+                     if dir_stat.st_dev in self._device_path_map:
+                         self._merged_path(os.path.realpath(obj), dir_stat)
+ 
+             else:
+                 # When a directory is successfully removed, there's
+                 # no need to protect symlinks that point to it.
+                 unmerge_syms = protected_symlinks.pop(inode_key, None)
+                 if unmerge_syms is not None:
+                     parents = []
+                     for relative_path in unmerge_syms:
+                         obj = os.path.join(real_root, relative_path.lstrip(os.sep))
+                         try:
+                             unlink(obj, os.lstat(obj))
+                             show_unmerge("<<<", "", "sym", obj)
+                         except (OSError, IOError) as e:
+                             if e.errno not in ignored_unlink_errnos:
+                                 raise
+                             del e
+                             show_unmerge("!!!", "", "sym", obj)
+                         else:
+                             parents.append(os.path.dirname(obj))
+ 
+                     if parents:
+                         # Revisit parents recursively (bug 640058).
+                         recursive_parents = []
+                         for parent in set(parents):
+                             while parent in revisit:
+                                 recursive_parents.append(parent)
+                                 parent = os.path.dirname(parent)
+                                 if parent == "/":
+                                     break
+ 
+                         for parent in sorted(set(recursive_parents)):
+                             dirs.append((parent, revisit.pop(parent)))
+ 
+     def isowner(self, filename, destroot=None):
+         """
+         Check if a file belongs to this package. This may
+         result in a stat call for the parent directory of
+         every installed file, since the inode numbers are
+         used to work around the problem of ambiguous paths
+         caused by symlinked directories. The results of
+         stat calls are cached to optimize multiple calls
+         to this method.
+ 
+         @param filename:
+         @type filename:
+         @param destroot:
+         @type destroot:
+         @rtype: Boolean
+         @return:
+         1. True if this package owns the file.
+         2. False if this package does not own the file.
+         """
+ 
+         if destroot is not None and destroot != self._eroot:
+             warnings.warn(
+                 "The second parameter of the "
+                 + "portage.dbapi.vartree.dblink.isowner()"
+                 + " is now unused. Instead "
+                 + "self.settings['EROOT'] will be used.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         return bool(self._match_contents(filename))
+ 
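The docstring above describes ownership as a CONTENTS lookup with inode-based handling of symlinked parent directories. A minimal usage sketch, where installed_dblink stands for an already-constructed dblink of an installed package (the variable name is hypothetical):

    # Hypothetical dblink instance for an installed package; isowner() takes a
    # path relative to EROOT and returns True only if this package's CONTENTS
    # covers it (resolving symlinked parent directories via inode comparison).
    if installed_dblink.isowner("/usr/share/doc/example/README"):
        print("owned by this package")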
+     def _match_contents(self, filename, destroot=None):
+         """
+         The matching contents entry is returned, which is useful
+         since the path may differ from the one given by the caller,
+         due to symlinks.
+ 
+         @rtype: String
+         @return: the contents entry corresponding to the given path, or False
+                 if the file is not owned by this package.
+         """
+ 
+         filename = _unicode_decode(
+             filename, encoding=_encodings["content"], errors="strict"
+         )
+ 
+         if destroot is not None and destroot != self._eroot:
+             warnings.warn(
+                 "The second parameter of the "
+                 + "portage.dbapi.vartree.dblink._match_contents()"
+                 + " is now unused. Instead "
+                 + "self.settings['ROOT'] will be used.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         # don't use EROOT here, image already contains EPREFIX
+         destroot = self.settings["ROOT"]
+ 
+         # The given filename argument might have a different encoding than
+         # the filenames contained in the contents, so use separate wrapped os
+         # modules for each. The basename is more likely to contain non-ascii
+         # characters than the directory path, so use os_filename_arg for all
+         # operations involving the basename of the filename arg.
+         os_filename_arg = _os_merge
+         os = _os_merge
+ 
+         try:
+             _unicode_encode(filename, encoding=_encodings["merge"], errors="strict")
+         except UnicodeEncodeError:
+             # The package appears to have been merged with a
+             # different value of sys.getfilesystemencoding(),
+             # so fall back to utf_8 if appropriate.
+             try:
+                 _unicode_encode(filename, encoding=_encodings["fs"], errors="strict")
+             except UnicodeEncodeError:
+                 pass
+             else:
+                 os_filename_arg = portage.os
+ 
+         destfile = normalize_path(
+             os_filename_arg.path.join(
+                 destroot, filename.lstrip(os_filename_arg.path.sep)
+             )
+         )
+ 
+         if "case-insensitive-fs" in self.settings.features:
+             destfile = destfile.lower()
+ 
+         if self._contents.contains(destfile):
+             return self._contents.unmap_key(destfile)
+ 
+         if self.getcontents():
+             basename = os_filename_arg.path.basename(destfile)
+             if self._contents_basenames is None:
+ 
+                 try:
+                     for x in self._contents.keys():
+                         _unicode_encode(
+                             x, encoding=_encodings["merge"], errors="strict"
+                         )
+                 except UnicodeEncodeError:
+                     # The package appears to have been merged with a
+                     # different value of sys.getfilesystemencoding(),
+                     # so fall back to utf_8 if appropriate.
+                     try:
+                         for x in self._contents.keys():
+                             _unicode_encode(
+                                 x, encoding=_encodings["fs"], errors="strict"
+                             )
+                     except UnicodeEncodeError:
+                         pass
+                     else:
+                         os = portage.os
+ 
+                 self._contents_basenames = set(
+                     os.path.basename(x) for x in self._contents.keys()
+                 )
+             if basename not in self._contents_basenames:
+                 # This is a shortcut that, in most cases, allows us to
+                 # eliminate this package as an owner without the need
+                 # to examine inode numbers of parent directories.
+                 return False
+ 
+             # Use stat rather than lstat since we want to follow
+             # any symlinks to the real parent directory.
+             parent_path = os_filename_arg.path.dirname(destfile)
+             try:
+                 parent_stat = os_filename_arg.stat(parent_path)
+             except EnvironmentError as e:
+                 if e.errno != errno.ENOENT:
+                     raise
+                 del e
+                 return False
+             if self._contents_inodes is None:
+ 
+                 if os is _os_merge:
+                     try:
+                         for x in self._contents.keys():
+                             _unicode_encode(
+                                 x, encoding=_encodings["merge"], errors="strict"
+                             )
+                     except UnicodeEncodeError:
+                         # The package appears to have been merged with a
+                         # different value of sys.getfilesystemencoding(),
+                         # so fall back to utf_8 if appropriate.
+                         try:
+                             for x in self._contents.keys():
+                                 _unicode_encode(
+                                     x, encoding=_encodings["fs"], errors="strict"
+                                 )
+                         except UnicodeEncodeError:
+                             pass
+                         else:
+                             os = portage.os
+ 
+                 self._contents_inodes = {}
+                 parent_paths = set()
+                 for x in self._contents.keys():
+                     p_path = os.path.dirname(x)
+                     if p_path in parent_paths:
+                         continue
+                     parent_paths.add(p_path)
+                     try:
+                         s = os.stat(p_path)
+                     except OSError:
+                         pass
+                     else:
+                         inode_key = (s.st_dev, s.st_ino)
+                         # Use lists of paths in case multiple
+                         # paths reference the same inode.
+                         p_path_list = self._contents_inodes.get(inode_key)
+                         if p_path_list is None:
+                             p_path_list = []
+                             self._contents_inodes[inode_key] = p_path_list
+                         if p_path not in p_path_list:
+                             p_path_list.append(p_path)
+ 
+             p_path_list = self._contents_inodes.get(
+                 (parent_stat.st_dev, parent_stat.st_ino)
+             )
+             if p_path_list:
+                 for p_path in p_path_list:
+                     x = os_filename_arg.path.join(p_path, basename)
+                     if self._contents.contains(x):
+                         return self._contents.unmap_key(x)
+ 
+         return False
+ 
+     def _linkmap_rebuild(self, **kwargs):
+         """
+         Rebuild the self._linkmap if it's not broken due to missing
+         scanelf binary. Also, return early if preserve-libs is disabled
+         and the preserve-libs registry is empty.
+         """
+         if (
+             self._linkmap_broken
+             or self.vartree.dbapi._linkmap is None
+             or self.vartree.dbapi._plib_registry is None
+             or (
+                 "preserve-libs" not in self.settings.features
+                 and not self.vartree.dbapi._plib_registry.hasEntries()
+             )
+         ):
+             return
+         try:
+             self.vartree.dbapi._linkmap.rebuild(**kwargs)
+         except CommandNotFound as e:
+             self._linkmap_broken = True
+             self._display_merge(
+                 _(
+                     "!!! Disabling preserve-libs "
+                     "due to error: Command Not Found: %s\n"
+                 )
+                 % (e,),
+                 level=logging.ERROR,
+                 noiselevel=-1,
+             )
+ 
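The guard at the top of _linkmap_rebuild skips the rebuild when preserve-libs is disabled and the preserved-libs registry is empty. A condensed sketch of that gate with hypothetical stand-in names for the objects involved:

    # Hypothetical stand-ins: 'features' is the set of enabled FEATURES,
    # 'plib_registry' the preserved-libs registry, 'linkmap' the LinkageMap.
    def should_rebuild_linkmap(linkmap_broken, linkmap, plib_registry, features):
        if linkmap_broken or linkmap is None or plib_registry is None:
            return False
        return "preserve-libs" in features or plib_registry.hasEntries()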
+     def _find_libs_to_preserve(self, unmerge=False):
+         """
+         Get set of relative paths for libraries to be preserved. When
+         unmerge is False, file paths to preserve are selected from
+         self._installed_instance. Otherwise, paths are selected from
+         self.
+         """
+         if (
+             self._linkmap_broken
+             or self.vartree.dbapi._linkmap is None
+             or self.vartree.dbapi._plib_registry is None
+             or (not unmerge and self._installed_instance is None)
+             or not self._preserve_libs
+         ):
+             return set()
+ 
+         os = _os_merge
+         linkmap = self.vartree.dbapi._linkmap
+         if unmerge:
+             installed_instance = self
+         else:
+             installed_instance = self._installed_instance
+         old_contents = installed_instance.getcontents()
+         root = self.settings["ROOT"]
+         root_len = len(root) - 1
+         lib_graph = digraph()
+         path_node_map = {}
+ 
+         def path_to_node(path):
+             node = path_node_map.get(path)
+             if node is None:
+                 node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
+                 alt_path_node = lib_graph.get(node)
+                 if alt_path_node is not None:
+                     node = alt_path_node
+                 node.alt_paths.add(path)
+                 path_node_map[path] = node
+             return node
+ 
+         consumer_map = {}
+         provider_nodes = set()
+         # Create provider nodes and add them to the graph.
+         for f_abs in old_contents:
+ 
+             if os is _os_merge:
+                 try:
+                     _unicode_encode(
+                         f_abs, encoding=_encodings["merge"], errors="strict"
+                     )
+                 except UnicodeEncodeError:
+                     # The package appears to have been merged with a
+                     # different value of sys.getfilesystemencoding(),
+                     # so fall back to utf_8 if appropriate.
+                     try:
+                         _unicode_encode(
+                             f_abs, encoding=_encodings["fs"], errors="strict"
+                         )
+                     except UnicodeEncodeError:
+                         pass
+                     else:
+                         os = portage.os
+ 
+             f = f_abs[root_len:]
+             try:
+                 consumers = linkmap.findConsumers(
+                     f, exclude_providers=(installed_instance.isowner,)
+                 )
+             except KeyError:
+                 continue
+             if not consumers:
+                 continue
+             provider_node = path_to_node(f)
+             lib_graph.add(provider_node, None)
+             provider_nodes.add(provider_node)
+             consumer_map[provider_node] = consumers
+ 
+         # Create consumer nodes and add them to the graph.
+         # Note that consumers can also be providers.
+         for provider_node, consumers in consumer_map.items():
+             for c in consumers:
+                 consumer_node = path_to_node(c)
+                 if (
+                     installed_instance.isowner(c)
+                     and consumer_node not in provider_nodes
+                 ):
+                     # This is not a provider, so it will be uninstalled.
+                     continue
+                 lib_graph.add(provider_node, consumer_node)
+ 
+         # Locate nodes which should be preserved. They consist of all
+         # providers that are reachable from consumers that are not
+         # providers themselves.
+         preserve_nodes = set()
+         for consumer_node in lib_graph.root_nodes():
+             if consumer_node in provider_nodes:
+                 continue
+             # Preserve all providers that are reachable from this consumer.
+             node_stack = lib_graph.child_nodes(consumer_node)
+             while node_stack:
+                 provider_node = node_stack.pop()
+                 if provider_node in preserve_nodes:
+                     continue
+                 preserve_nodes.add(provider_node)
+                 node_stack.extend(lib_graph.child_nodes(provider_node))
+ 
+         preserve_paths = set()
+         for preserve_node in preserve_nodes:
+             # Preserve the library itself, and also preserve the
+             # soname symlink which is the only symlink that is
+             # strictly required.
+             hardlinks = set()
+             soname_symlinks = set()
+             soname = linkmap.getSoname(next(iter(preserve_node.alt_paths)))
+             have_replacement_soname_link = False
+             have_replacement_hardlink = False
+             for f in preserve_node.alt_paths:
+                 f_abs = os.path.join(root, f.lstrip(os.sep))
+                 try:
+                     if stat.S_ISREG(os.lstat(f_abs).st_mode):
+                         hardlinks.add(f)
+                         if not unmerge and self.isowner(f):
+                             have_replacement_hardlink = True
+                             if os.path.basename(f) == soname:
+                                 have_replacement_soname_link = True
+                     elif os.path.basename(f) == soname:
+                         soname_symlinks.add(f)
+                         if not unmerge and self.isowner(f):
+                             have_replacement_soname_link = True
+                 except OSError:
+                     pass
+ 
+             if have_replacement_hardlink and have_replacement_soname_link:
+                 continue
+ 
+             if hardlinks:
+                 preserve_paths.update(hardlinks)
+                 preserve_paths.update(soname_symlinks)
+ 
+         return preserve_paths
+ 
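The preservation logic above is a reachability computation over the provider/consumer graph: keep every provider transitively reachable from a consumer that is not itself a provider. A self-contained sketch of the same traversal using plain dicts instead of portage.util.digraph; all names here are illustrative:

    def libs_to_preserve(providers_of, provider_nodes):
        # providers_of: {consumer: set of providers it links against}; providers
        # may themselves appear as consumers. Preserve everything transitively
        # reachable from consumers that are not providers.
        preserve = set()
        for consumer, direct in providers_of.items():
            if consumer in provider_nodes:
                continue
            stack = list(direct)
            while stack:
                node = stack.pop()
                if node in preserve:
                    continue
                preserve.add(node)
                stack.extend(providers_of.get(node, ()))
        return preserve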
+     def _add_preserve_libs_to_contents(self, preserve_paths):
+         """
+         Preserve libs returned from _find_libs_to_preserve().
+         """
+ 
+         if not preserve_paths:
+             return
+ 
+         os = _os_merge
+         showMessage = self._display_merge
+         root = self.settings["ROOT"]
+ 
+         # Copy contents entries from the old package to the new one.
+         new_contents = self.getcontents().copy()
+         old_contents = self._installed_instance.getcontents()
+         for f in sorted(preserve_paths):
+             f = _unicode_decode(f, encoding=_encodings["content"], errors="strict")
+             f_abs = os.path.join(root, f.lstrip(os.sep))
+             contents_entry = old_contents.get(f_abs)
+             if contents_entry is None:
+                 # This will probably never happen, but it might if one of the
+                 # paths returned from findConsumers() refers to one of the libs
+                 # that should be preserved yet the path is not listed in the
+                 # contents. Such a path might belong to some other package, so
+                 # it shouldn't be preserved here.
+                 showMessage(
+                     _(
+                         "!!! File '%s' will not be preserved "
+                         "due to missing contents entry\n"
+                     )
+                     % (f_abs,),
+                     level=logging.ERROR,
+                     noiselevel=-1,
+                 )
+                 preserve_paths.remove(f)
+                 continue
+             new_contents[f_abs] = contents_entry
+             obj_type = contents_entry[0]
+             showMessage(_(">>> needed    %s %s\n") % (obj_type, f_abs), noiselevel=-1)
+             # Add parent directories to contents if necessary.
+             parent_dir = os.path.dirname(f_abs)
+             while len(parent_dir) > len(root):
+                 new_contents[parent_dir] = ["dir"]
+                 prev = parent_dir
+                 parent_dir = os.path.dirname(parent_dir)
+                 if prev == parent_dir:
+                     break
+         outfile = atomic_ofstream(os.path.join(self.dbtmpdir, "CONTENTS"))
+         write_contents(new_contents, root, outfile)
+         outfile.close()
+         self._clear_contents_cache()
+ 
+     def _find_unused_preserved_libs(self, unmerge_no_replacement):
+         """
+         Find preserved libraries that don't have any consumers left.
+         """
+ 
+         if (
+             self._linkmap_broken
+             or self.vartree.dbapi._linkmap is None
+             or self.vartree.dbapi._plib_registry is None
+             or not self.vartree.dbapi._plib_registry.hasEntries()
+         ):
+             return {}
+ 
+         # Since preserved libraries can be consumers of other preserved
+         # libraries, use a graph to track consumer relationships.
+         plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
+         linkmap = self.vartree.dbapi._linkmap
+         lib_graph = digraph()
+         preserved_nodes = set()
+         preserved_paths = set()
+         path_cpv_map = {}
+         path_node_map = {}
+         root = self.settings["ROOT"]
+ 
+         def path_to_node(path):
+             node = path_node_map.get(path)
+             if node is None:
 -                node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
++                chost = self.settings.get('CHOST', '')
++                if chost.find('darwin') >= 0:
++                    node = LinkageMapMachO._LibGraphNode(linkmap._obj_key(path))
++                elif chost.find('interix') >= 0 or chost.find('winnt') >= 0:
++                    node = LinkageMapPeCoff._LibGraphNode(linkmap._obj_key(path))
++                elif chost.find('aix') >= 0:
++                    node = LinkageMapXCoff._LibGraphNode(linkmap._obj_key(path))
++                else:
++                    node = LinkageMap._LibGraphNode(linkmap._obj_key(path))
+                 alt_path_node = lib_graph.get(node)
+                 if alt_path_node is not None:
+                     node = alt_path_node
+                 node.alt_paths.add(path)
+                 path_node_map[path] = node
+             return node
+ 
+         for cpv, plibs in plib_dict.items():
+             for f in plibs:
+                 path_cpv_map[f] = cpv
+                 preserved_node = path_to_node(f)
+                 if not preserved_node.file_exists():
+                     continue
+                 lib_graph.add(preserved_node, None)
+                 preserved_paths.add(f)
+                 preserved_nodes.add(preserved_node)
+                 for c in self.vartree.dbapi._linkmap.findConsumers(f):
+                     consumer_node = path_to_node(c)
+                     if not consumer_node.file_exists():
+                         continue
+                     # Note that consumers may also be providers.
+                     lib_graph.add(preserved_node, consumer_node)
+ 
+         # Eliminate consumers having providers with the same soname as an
+         # installed library that is not preserved. This eliminates
+         # libraries that are erroneously preserved due to a move from one
+         # directory to another.
+         # Also eliminate consumers that are going to be unmerged if
+         # unmerge_no_replacement is True.
+         provider_cache = {}
+         for preserved_node in preserved_nodes:
+             soname = linkmap.getSoname(preserved_node)
+             for consumer_node in lib_graph.parent_nodes(preserved_node):
+                 if consumer_node in preserved_nodes:
+                     continue
+                 if unmerge_no_replacement:
+                     will_be_unmerged = True
+                     for path in consumer_node.alt_paths:
+                         if not self.isowner(path):
+                             will_be_unmerged = False
+                             break
+                     if will_be_unmerged:
+                         # This consumer is not preserved and it is
+                         # being unmerged, so drop this edge.
+                         lib_graph.remove_edge(preserved_node, consumer_node)
+                         continue
+ 
+                 providers = provider_cache.get(consumer_node)
+                 if providers is None:
+                     providers = linkmap.findProviders(consumer_node)
+                     provider_cache[consumer_node] = providers
+                 providers = providers.get(soname)
+                 if providers is None:
+                     continue
+                 for provider in providers:
+                     if provider in preserved_paths:
+                         continue
+                     provider_node = path_to_node(provider)
+                     if not provider_node.file_exists():
+                         continue
+                     if provider_node in preserved_nodes:
+                         continue
+                     # An alternative provider seems to be
+                     # installed, so drop this edge.
+                     lib_graph.remove_edge(preserved_node, consumer_node)
+                     break
+ 
+         cpv_lib_map = {}
+         while lib_graph:
+             root_nodes = preserved_nodes.intersection(lib_graph.root_nodes())
+             if not root_nodes:
+                 break
+             lib_graph.difference_update(root_nodes)
+             unlink_list = set()
+             for node in root_nodes:
+                 unlink_list.update(node.alt_paths)
+             unlink_list = sorted(unlink_list)
+             for obj in unlink_list:
+                 cpv = path_cpv_map.get(obj)
+                 if cpv is None:
+                     # This means that a symlink is in the preserved libs
+                     # registry, but the actual lib it points to is not.
+                     self._display_merge(
+                         _(
+                             "!!! symlink to lib is preserved, "
+                             "but not the lib itself:\n!!! '%s'\n"
+                         )
+                         % (obj,),
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+                     continue
+                 removed = cpv_lib_map.get(cpv)
+                 if removed is None:
+                     removed = set()
+                     cpv_lib_map[cpv] = removed
+                 removed.add(obj)
+ 
+         return cpv_lib_map
+ 
+     def _remove_preserved_libs(self, cpv_lib_map):
+         """
+         Remove files returned from _find_unused_preserved_libs().
+         """
+ 
+         os = _os_merge
+ 
+         files_to_remove = set()
+         for files in cpv_lib_map.values():
+             files_to_remove.update(files)
+         files_to_remove = sorted(files_to_remove)
+         showMessage = self._display_merge
+         root = self.settings["ROOT"]
+ 
+         parent_dirs = set()
+         for obj in files_to_remove:
+             obj = os.path.join(root, obj.lstrip(os.sep))
+             parent_dirs.add(os.path.dirname(obj))
+             if os.path.islink(obj):
+                 obj_type = _("sym")
+             else:
+                 obj_type = _("obj")
+             try:
+                 os.unlink(obj)
+             except OSError as e:
+                 if e.errno != errno.ENOENT:
+                     raise
+                 del e
+             else:
+                 showMessage(_("<<< !needed  %s %s\n") % (obj_type, obj), noiselevel=-1)
+ 
+         # Remove empty parent directories if possible.
+         while parent_dirs:
+             x = parent_dirs.pop()
+             while True:
+                 try:
+                     os.rmdir(x)
+                 except OSError:
+                     break
+                 prev = x
+                 x = os.path.dirname(x)
+                 if x == prev:
+                     break
+ 
+         self.vartree.dbapi._plib_registry.pruneNonExisting()
+ 
+     def _collision_protect(self, srcroot, destroot, mypkglist, file_list, symlink_list):
+ 
+         os = _os_merge
+ 
+         real_relative_paths = {}
+ 
+         collision_ignore = []
+         for x in portage.util.shlex_split(self.settings.get("COLLISION_IGNORE", "")):
+             if os.path.isdir(os.path.join(self._eroot, x.lstrip(os.sep))):
+                 x = normalize_path(x)
+                 x += "/*"
+             collision_ignore.append(x)
+ 
+         # For collisions with preserved libraries, the current package
+         # will assume ownership and the libraries will be unregistered.
+         if self.vartree.dbapi._plib_registry is None:
+             # preserve-libs is entirely disabled
+             plib_cpv_map = None
+             plib_paths = None
+             plib_inodes = {}
+         else:
+             plib_dict = self.vartree.dbapi._plib_registry.getPreservedLibs()
+             plib_cpv_map = {}
+             plib_paths = set()
+             for cpv, paths in plib_dict.items():
+                 plib_paths.update(paths)
+                 for f in paths:
+                     plib_cpv_map[f] = cpv
+             plib_inodes = self._lstat_inode_map(plib_paths)
+ 
+         plib_collisions = {}
+ 
+         showMessage = self._display_merge
+         stopmerge = False
+         collisions = []
+         dirs = set()
+         dirs_ro = set()
+         symlink_collisions = []
+         destroot = self.settings["ROOT"]
+         totfiles = len(file_list) + len(symlink_list)
+         previous = time.monotonic()
+         progress_shown = False
+         report_interval = 1.7  # seconds
+         falign = len("%d" % totfiles)
+         showMessage(
+             _(" %s checking %d files for package collisions\n")
+             % (colorize("GOOD", "*"), totfiles)
+         )
+         for i, (f, f_type) in enumerate(
+             chain(((f, "reg") for f in file_list), ((f, "sym") for f in symlink_list))
+         ):
+             current = time.monotonic()
+             if current - previous > report_interval:
+                 showMessage(
+                     _("%3d%% done,  %*d files remaining ...\n")
+                     % (i * 100 / totfiles, falign, totfiles - i)
+                 )
+                 previous = current
+                 progress_shown = True
+ 
+             dest_path = normalize_path(os.path.join(destroot, f.lstrip(os.path.sep)))
+ 
+             # Relative path with symbolic links resolved only in parent directories
+             real_relative_path = os.path.join(
+                 os.path.realpath(os.path.dirname(dest_path)),
+                 os.path.basename(dest_path),
+             )[len(destroot) :]
+ 
+             real_relative_paths.setdefault(real_relative_path, []).append(
+                 f.lstrip(os.path.sep)
+             )
+ 
+             parent = os.path.dirname(dest_path)
+             if parent not in dirs:
+                 for x in iter_parents(parent):
+                     if x in dirs:
+                         break
+                     dirs.add(x)
+                     if os.path.isdir(x):
+                         if not os.access(x, os.W_OK):
+                             dirs_ro.add(x)
+                         break
+ 
+             try:
+                 dest_lstat = os.lstat(dest_path)
+             except EnvironmentError as e:
+                 if e.errno == errno.ENOENT:
+                     del e
+                     continue
+                 elif e.errno == errno.ENOTDIR:
+                     del e
+                     # A non-directory is in a location where this package
+                     # expects to have a directory.
+                     dest_lstat = None
+                     parent_path = dest_path
+                     while len(parent_path) > len(destroot):
+                         parent_path = os.path.dirname(parent_path)
+                         try:
+                             dest_lstat = os.lstat(parent_path)
+                             break
+                         except EnvironmentError as e:
+                             if e.errno != errno.ENOTDIR:
+                                 raise
+                             del e
+                     if not dest_lstat:
+                         raise AssertionError(
+                             "unable to find non-directory "
+                             + "parent for '%s'" % dest_path
+                         )
+                     dest_path = parent_path
+                     f = os.path.sep + dest_path[len(destroot) :]
+                     if f in collisions:
+                         continue
+                 else:
+                     raise
+             if f[0] != "/":
+                 f = "/" + f
+ 
+             if stat.S_ISDIR(dest_lstat.st_mode):
+                 if f_type == "sym":
+                     # This case is explicitly banned
+                     # by PMS (see bug #326685).
+                     symlink_collisions.append(f)
+                     collisions.append(f)
+                     continue
+ 
+             plibs = plib_inodes.get((dest_lstat.st_dev, dest_lstat.st_ino))
+             if plibs:
+                 for path in plibs:
+                     cpv = plib_cpv_map[path]
+                     paths = plib_collisions.get(cpv)
+                     if paths is None:
+                         paths = set()
+                         plib_collisions[cpv] = paths
+                     paths.add(path)
+                 # The current package will assume ownership and the
+                 # libraries will be unregistered, so exclude this
+                 # path from the normal collisions.
+                 continue
+ 
+             isowned = False
+             full_path = os.path.join(destroot, f.lstrip(os.path.sep))
+             for ver in mypkglist:
+                 if ver.isowner(f):
+                     isowned = True
+                     break
+             if not isowned and self.isprotected(full_path):
+                 isowned = True
+             if not isowned:
+                 f_match = full_path[len(self._eroot) - 1 :]
+                 stopmerge = True
+                 for pattern in collision_ignore:
+                     if fnmatch.fnmatch(f_match, pattern):
+                         stopmerge = False
+                         break
+                 if stopmerge:
+                     collisions.append(f)
+ 
+         internal_collisions = {}
+         for real_relative_path, files in real_relative_paths.items():
+             # Detect internal collisions between non-identical files.
+             if len(files) >= 2:
+                 files.sort()
+                 for i in range(len(files) - 1):
+                     file1 = normalize_path(os.path.join(srcroot, files[i]))
+                     file2 = normalize_path(os.path.join(srcroot, files[i + 1]))
+                     # Compare files, ignoring differences in times.
+                     differences = compare_files(
+                         file1, file2, skipped_types=("atime", "mtime", "ctime")
+                     )
+                     if differences:
+                         internal_collisions.setdefault(real_relative_path, {})[
+                             (files[i], files[i + 1])
+                         ] = differences
+ 
+         if progress_shown:
+             showMessage(_("100% done\n"))
+ 
+         return (
+             collisions,
+             internal_collisions,
+             dirs_ro,
+             symlink_collisions,
+             plib_collisions,
+         )
+ 
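COLLISION_IGNORE is prepared the same way UNINSTALL_IGNORE is used during unmerge: the value is split into fnmatch patterns, a trailing "/*" is appended when a pattern names an existing directory, and each pattern is matched against the EROOT-relative path. A small sketch with illustrative values (shlex.split stands in for portage.util.shlex_split, and rstrip("/") for normalize_path):

    import fnmatch
    import os
    import shlex

    eroot = "/"  # illustrative
    raw = "/lib/modules/* /usr/share/example-dir"  # illustrative COLLISION_IGNORE value
    patterns = []
    for x in shlex.split(raw):
        if os.path.isdir(os.path.join(eroot, x.lstrip(os.sep))):
            x = x.rstrip("/") + "/*"
        patterns.append(x)
    ignored = any(fnmatch.fnmatch("/lib/modules/6.1.0/example.ko", p) for p in patterns)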
+     def _lstat_inode_map(self, path_iter):
+         """
+         Use lstat to create a map of the form:
+           {(st_dev, st_ino) : set([path1, path2, ...])}
+         Multiple paths may reference the same inode due to hardlinks.
+         All lstat() calls are relative to self.myroot.
+         """
+ 
+         os = _os_merge
+ 
+         root = self.settings["ROOT"]
+         inode_map = {}
+         for f in path_iter:
+             path = os.path.join(root, f.lstrip(os.sep))
+             try:
+                 st = os.lstat(path)
+             except OSError as e:
+                 if e.errno not in (errno.ENOENT, errno.ENOTDIR):
+                     raise
+                 del e
+                 continue
+             key = (st.st_dev, st.st_ino)
+             paths = inode_map.get(key)
+             if paths is None:
+                 paths = set()
+                 inode_map[key] = paths
+             paths.add(f)
+         return inode_map
+ 
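The same inode-keyed grouping can be written as a small standalone helper; this sketch mirrors the method above outside the dblink class, with error handling reduced to a bare skip:

    import os

    def lstat_inode_map(root, paths):
        """Map (st_dev, st_ino) to the set of relative paths sharing that inode."""
        inode_map = {}
        for f in paths:
            try:
                st = os.lstat(os.path.join(root, f.lstrip(os.sep)))
            except OSError:
                continue
            inode_map.setdefault((st.st_dev, st.st_ino), set()).add(f)
        return inode_map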
+     def _security_check(self, installed_instances):
+         if not installed_instances:
+             return 0
+ 
+         os = _os_merge
+ 
+         showMessage = self._display_merge
+ 
+         file_paths = set()
+         for dblnk in installed_instances:
+             file_paths.update(dblnk.getcontents())
+         inode_map = {}
+         real_paths = set()
+         for i, path in enumerate(file_paths):
+ 
+             if os is _os_merge:
+                 try:
+                     _unicode_encode(path, encoding=_encodings["merge"], errors="strict")
+                 except UnicodeEncodeError:
+                     # The package appears to have been merged with a
+                     # different value of sys.getfilesystemencoding(),
+                     # so fall back to utf_8 if appropriate.
+                     try:
+                         _unicode_encode(
+                             path, encoding=_encodings["fs"], errors="strict"
+                         )
+                     except UnicodeEncodeError:
+                         pass
+                     else:
+                         os = portage.os
+ 
+             try:
+                 s = os.lstat(path)
+             except OSError as e:
+                 if e.errno not in (errno.ENOENT, errno.ENOTDIR):
+                     raise
+                 del e
+                 continue
+             if not stat.S_ISREG(s.st_mode):
+                 continue
+             path = os.path.realpath(path)
+             if path in real_paths:
+                 continue
+             real_paths.add(path)
+             if s.st_nlink > 1 and s.st_mode & (stat.S_ISUID | stat.S_ISGID):
+                 k = (s.st_dev, s.st_ino)
+                 inode_map.setdefault(k, []).append((path, s))
+         suspicious_hardlinks = []
+         for path_list in inode_map.values():
+             path, s = path_list[0]
+             if len(path_list) == s.st_nlink:
+                 # All hardlinks seem to be owned by this package.
+                 continue
+             suspicious_hardlinks.append(path_list)
+         if not suspicious_hardlinks:
+             return 0
+ 
+         msg = []
+         msg.append(_("suid/sgid file(s) " "with suspicious hardlink(s):"))
+         msg.append("")
+         for path_list in suspicious_hardlinks:
+             for path, s in path_list:
+                 msg.append("\t%s" % path)
+         msg.append("")
+         msg.append(
+             _("See the Gentoo Security Handbook guide for advice on how to proceed.")
+         )
+ 
+         self._eerror("preinst", msg)
+ 
+         return 1
+ 
+     def _eqawarn(self, phase, lines):
+         self._elog("eqawarn", phase, lines)
+ 
+     def _eerror(self, phase, lines):
+         self._elog("eerror", phase, lines)
+ 
+     def _elog(self, funcname, phase, lines):
+         func = getattr(portage.elog.messages, funcname)
+         if self._scheduler is None:
+             for l in lines:
+                 func(l, phase=phase, key=self.mycpv)
+         else:
+             background = self.settings.get("PORTAGE_BACKGROUND") == "1"
+             log_path = None
+             if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
+                 log_path = self.settings.get("PORTAGE_LOG_FILE")
+             out = io.StringIO()
+             for line in lines:
+                 func(line, phase=phase, key=self.mycpv, out=out)
+             msg = out.getvalue()
+             self._scheduler.output(msg, background=background, log_path=log_path)
+ 
+     def _elog_process(self, phasefilter=None):
+         cpv = self.mycpv
+         if self._pipe is None:
+             elog_process(cpv, self.settings, phasefilter=phasefilter)
+         else:
+             logdir = os.path.join(self.settings["T"], "logging")
+             ebuild_logentries = collect_ebuild_messages(logdir)
+             # phasefilter is irrelevant for the above collect_ebuild_messages
+             # call, since this package instance has a private logdir. However,
+             # it may be relevant for the following collect_messages call.
+             py_logentries = collect_messages(key=cpv, phasefilter=phasefilter).get(
+                 cpv, {}
+             )
+             logentries = _merge_logentries(py_logentries, ebuild_logentries)
+             funcnames = {
+                 "INFO": "einfo",
+                 "LOG": "elog",
+                 "WARN": "ewarn",
+                 "QA": "eqawarn",
+                 "ERROR": "eerror",
+             }
+             str_buffer = []
+             for phase, messages in logentries.items():
+                 for key, lines in messages:
+                     funcname = funcnames[key]
+                     if isinstance(lines, str):
+                         lines = [lines]
+                     for line in lines:
+                         for line in line.split("\n"):
+                             fields = (funcname, phase, cpv, line)
+                             str_buffer.append(" ".join(fields))
+                             str_buffer.append("\n")
+             if str_buffer:
+                 str_buffer = _unicode_encode("".join(str_buffer))
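+                 # os.write() may do a short write on the pipe, so keep
+                 # re-sending the unwritten tail until the buffer is drained.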
+                 while str_buffer:
+                     str_buffer = str_buffer[os.write(self._pipe, str_buffer) :]
+ 
+     def _emerge_log(self, msg):
+         emergelog(False, msg)
+ 
+     def treewalk(
+         self,
+         srcroot,
+         destroot,
+         inforoot,
+         myebuild,
+         cleanup=0,
+         mydbapi=None,
+         prev_mtimes=None,
+         counter=None,
+     ):
+         """
+ 
+         This function does the following:
+ 
+         calls doebuild(mydo=instprep)
+         calls get_ro_checker to retrieve a function for checking whether Portage
+         will write to a read-only filesystem, then runs it against the directory list
+         calls self._preserve_libs if FEATURES=preserve-libs
+         calls self._collision_protect if FEATURES=collision-protect
+         calls doebuild(mydo=pkg_preinst)
+         Merges the package to the livefs
+         unmerges old version (if required)
+         calls doebuild(mydo=pkg_postinst)
+         calls env_update
+ 
+         @param srcroot: Typically this is ${D}
+         @type srcroot: String (Path)
+         @param destroot: ignored, self.settings['ROOT'] is used instead
+         @type destroot: String (Path)
+         @param inforoot: root of the vardb entry (the package's build-info directory)
+         @type inforoot: String (Path)
+         @param myebuild: path to the ebuild that we are processing
+         @type myebuild: String (Path)
+         @param mydbapi: dbapi which is handed to doebuild.
+         @type mydbapi: portdbapi instance
+         @param prev_mtimes: { Filename:mtime } mapping for env_update
+         @type prev_mtimes: Dictionary
+         @rtype: Boolean
+         @return:
+         1. 0 on success
+         2. 1 on failure
+ 
+         secondhand is a list of symlinks that have been skipped due to their target
+         not existing; we will merge these symlinks at a later time.
+         """
+ 
+         os = _os_merge
+ 
+         srcroot = _unicode_decode(
+             srcroot, encoding=_encodings["content"], errors="strict"
+         )
+         destroot = self.settings["ROOT"]
+         inforoot = _unicode_decode(
+             inforoot, encoding=_encodings["content"], errors="strict"
+         )
+         myebuild = _unicode_decode(
+             myebuild, encoding=_encodings["content"], errors="strict"
+         )
+ 
+         showMessage = self._display_merge
+         srcroot = normalize_path(srcroot).rstrip(os.path.sep) + os.path.sep
+ 
+         if not os.path.isdir(srcroot):
+             showMessage(
+                 _("!!! Directory Not Found: D='%s'\n") % srcroot,
+                 level=logging.ERROR,
+                 noiselevel=-1,
+             )
+             return 1
+ 
+         # run instprep internal phase
+         doebuild_environment(myebuild, "instprep", settings=self.settings, db=mydbapi)
+         phase = EbuildPhase(
+             background=False,
+             phase="instprep",
+             scheduler=self._scheduler,
+             settings=self.settings,
+         )
+         phase.start()
+         if phase.wait() != os.EX_OK:
+             showMessage(_("!!! instprep failed\n"), level=logging.ERROR, noiselevel=-1)
+             return 1
+ 
+         is_binpkg = self.settings.get("EMERGE_FROM") == "binary"
+         slot = ""
+         for var_name in ("CHOST", "SLOT"):
+             try:
+                 with io.open(
+                     _unicode_encode(
+                         os.path.join(inforoot, var_name),
+                         encoding=_encodings["fs"],
+                         errors="strict",
+                     ),
+                     mode="r",
+                     encoding=_encodings["repo.content"],
+                     errors="replace",
+                 ) as f:
+                     val = f.readline().strip()
+             except EnvironmentError as e:
+                 if e.errno != errno.ENOENT:
+                     raise
+                 del e
+                 val = ""
+ 
+             if var_name == "SLOT":
+                 slot = val
+ 
+                 if not slot.strip():
+                     slot = self.settings.get(var_name, "")
+                     if not slot.strip():
+                         showMessage(
+                             _("!!! SLOT is undefined\n"),
+                             level=logging.ERROR,
+                             noiselevel=-1,
+                         )
+                         return 1
+                     write_atomic(os.path.join(inforoot, var_name), slot + "\n")
+ 
+             # This check only applies when built from source, since
+             # inforoot values are written just after src_install.
+             if not is_binpkg and val != self.settings.get(var_name, ""):
+                 self._eqawarn(
+                     "preinst",
+                     [
+                         _(
+                             "QA Notice: Expected %(var_name)s='%(expected_value)s', got '%(actual_value)s'\n"
+                         )
+                         % {
+                             "var_name": var_name,
+                             "expected_value": self.settings.get(var_name, ""),
+                             "actual_value": val,
+                         }
+                     ],
+                 )
+ 
+         def eerror(lines):
+             self._eerror("preinst", lines)
+ 
+         if not os.path.exists(self.dbcatdir):
+             ensure_dirs(self.dbcatdir)
+ 
+         # NOTE: We use SLOT obtained from the inforoot
+         # directory, in order to support USE=multislot.
+         # Use _pkg_str to discard the sub-slot part if necessary.
+         slot = _pkg_str(self.mycpv, slot=slot).slot
+         cp = self.mysplit[0]
+         slot_atom = "%s:%s" % (cp, slot)
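+         # e.g. (hypothetical values) mycpv "dev-lang/python-3.10.9" with
+         # SLOT="3.10/3.10" gives slot "3.10" and slot_atom
+         # "dev-lang/python:3.10"; the sub-slot is not part of the atom.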
+ 
+         self.lockdb()
+         try:
+             # filter any old-style virtual matches
+             slot_matches = [
+                 cpv
+                 for cpv in self.vartree.dbapi.match(slot_atom)
+                 if cpv_getkey(cpv) == cp
+             ]
+ 
+             if self.mycpv not in slot_matches and self.vartree.dbapi.cpv_exists(
+                 self.mycpv
+             ):
+                 # handle multislot or unapplied slotmove
+                 slot_matches.append(self.mycpv)
+ 
+             others_in_slot = []
+             for cur_cpv in slot_matches:
+                 # Clone the config in case one of these has to be unmerged,
+                 # since we need it to have private ${T} etc... for things
+                 # like elog.
+                 settings_clone = portage.config(clone=self.settings)
+                 # This reset ensures that there is no unintended leakage
+                 # of variables which should not be shared.
+                 settings_clone.reset()
+                 settings_clone.setcpv(cur_cpv, mydb=self.vartree.dbapi)
+                 if (
+                     self._preserve_libs
+                     and "preserve-libs" in settings_clone["PORTAGE_RESTRICT"].split()
+                 ):
+                     self._preserve_libs = False
+                 others_in_slot.append(
+                     dblink(
+                         self.cat,
+                         catsplit(cur_cpv)[1],
+                         settings=settings_clone,
+                         vartree=self.vartree,
+                         treetype="vartree",
+                         scheduler=self._scheduler,
+                         pipe=self._pipe,
+                     )
+                 )
+         finally:
+             self.unlockdb()
+ 
+         # If any instance has RESTRICT=preserve-libs, then
+         # restrict it for all instances.
+         if not self._preserve_libs:
+             for dblnk in others_in_slot:
+                 dblnk._preserve_libs = False
+ 
+         retval = self._security_check(others_in_slot)
+         if retval:
+             return retval
+ 
+         if slot_matches:
+             # Used by self.isprotected().
+             max_dblnk = None
+             max_counter = -1
+             for dblnk in others_in_slot:
+                 cur_counter = self.vartree.dbapi.cpv_counter(dblnk.mycpv)
+                 if cur_counter > max_counter:
+                     max_counter = cur_counter
+                     max_dblnk = dblnk
+             self._installed_instance = max_dblnk
+ 
+         # Apply INSTALL_MASK before collision-protect, since it may
+         # be useful to avoid collisions in some scenarios.
+         # We cannot detect if this is needed or not here as INSTALL_MASK can be
+         # modified by bashrc files.
+         phase = MiscFunctionsProcess(
+             background=False,
+             commands=["preinst_mask"],
+             phase="preinst",
+             scheduler=self._scheduler,
+             settings=self.settings,
+         )
+         phase.start()
+         phase.wait()
+         try:
+             with io.open(
+                 _unicode_encode(
+                     os.path.join(inforoot, "INSTALL_MASK"),
+                     encoding=_encodings["fs"],
+                     errors="strict",
+                 ),
+                 mode="r",
+                 encoding=_encodings["repo.content"],
+                 errors="replace",
+             ) as f:
+                 install_mask = InstallMask(f.read())
+         except EnvironmentError:
+             install_mask = None
+ 
+         if install_mask:
+             install_mask_dir(self.settings["ED"], install_mask)
+             if any(x in self.settings.features for x in ("nodoc", "noman", "noinfo")):
+                 try:
+                     os.rmdir(os.path.join(self.settings["ED"], "usr", "share"))
+                 except OSError:
+                     pass
+ 
+         # We check for unicode encoding issues after src_install. However,
+         # the check must be repeated here for binary packages (it's
+         # inexpensive since we call os.walk() here anyway).
+         unicode_errors = []
+         line_ending_re = re.compile("[\n\r]")
+         srcroot_len = len(srcroot)
+         ed_len = len(self.settings["ED"])
+         eprefix_len = len(self.settings["EPREFIX"])
+ 
+         while True:
+ 
+             unicode_error = False
+             eagain_error = False
+ 
+             filelist = []
+             linklist = []
+             paths_with_newlines = []
+ 
+             def onerror(e):
+                 raise
+ 
+             walk_iter = os.walk(srcroot, onerror=onerror)
+             while True:
+                 try:
+                     parent, dirs, files = next(walk_iter)
+                 except StopIteration:
+                     break
+                 except OSError as e:
+                     if e.errno != errno.EAGAIN:
+                         raise
+                     # Observed with PyPy 1.8.
+                     eagain_error = True
+                     break
+ 
+                 try:
+                     parent = _unicode_decode(
+                         parent, encoding=_encodings["merge"], errors="strict"
+                     )
+                 except UnicodeDecodeError:
+                     new_parent = _unicode_decode(
+                         parent, encoding=_encodings["merge"], errors="replace"
+                     )
+                     new_parent = _unicode_encode(
+                         new_parent, encoding="ascii", errors="backslashreplace"
+                     )
+                     new_parent = _unicode_decode(
+                         new_parent, encoding=_encodings["merge"], errors="replace"
+                     )
+                     os.rename(parent, new_parent)
+                     unicode_error = True
+                     unicode_errors.append(new_parent[ed_len:])
+                     break
+ 
+                 for fname in files:
+                     try:
+                         fname = _unicode_decode(
+                             fname, encoding=_encodings["merge"], errors="strict"
+                         )
+                     except UnicodeDecodeError:
+                         fpath = portage._os.path.join(
+                             parent.encode(_encodings["merge"]), fname
+                         )
+                         new_fname = _unicode_decode(
+                             fname, encoding=_encodings["merge"], errors="replace"
+                         )
+                         new_fname = _unicode_encode(
+                             new_fname, encoding="ascii", errors="backslashreplace"
+                         )
+                         new_fname = _unicode_decode(
+                             new_fname, encoding=_encodings["merge"], errors="replace"
+                         )
+                         new_fpath = os.path.join(parent, new_fname)
+                         os.rename(fpath, new_fpath)
+                         unicode_error = True
+                         unicode_errors.append(new_fpath[ed_len:])
+                         fname = new_fname
+                         fpath = new_fpath
+                     else:
+                         fpath = os.path.join(parent, fname)
+ 
+                     relative_path = fpath[srcroot_len:]
+ 
+                     if line_ending_re.search(relative_path) is not None:
+                         paths_with_newlines.append(relative_path)
+ 
+                     file_mode = os.lstat(fpath).st_mode
+                     if stat.S_ISREG(file_mode):
+                         filelist.append(relative_path)
+                     elif stat.S_ISLNK(file_mode):
+                         # Note: os.walk puts symlinks to directories in the "dirs"
+                         # list and it does not traverse them since that could lead
+                         # to an infinite recursion loop.
+                         linklist.append(relative_path)
+ 
+                         myto = _unicode_decode(
+                             _os.readlink(
+                                 _unicode_encode(
+                                     fpath, encoding=_encodings["merge"], errors="strict"
+                                 )
+                             ),
+                             encoding=_encodings["merge"],
+                             errors="replace",
+                         )
+                         if line_ending_re.search(myto) is not None:
+                             paths_with_newlines.append(relative_path)
+ 
+                 if unicode_error:
+                     break
+ 
+             if not (unicode_error or eagain_error):
+                 break
+ 
+         if unicode_errors:
+             self._elog("eqawarn", "preinst", _merge_unicode_error(unicode_errors))
+ 
+         if paths_with_newlines:
+             msg = []
+             msg.append(
+                 _(
+                     "This package installs one or more files containing line ending characters:"
+                 )
+             )
+             msg.append("")
+             paths_with_newlines.sort()
+             for f in paths_with_newlines:
+                 msg.append("\t/%s" % (f.replace("\n", "\\n").replace("\r", "\\r")))
+             msg.append("")
+             msg.append(_("package %s NOT merged") % self.mycpv)
+             msg.append("")
+             eerror(msg)
+             return 1
+ 
+         # If there are no files to merge, and an installed package in the same
+         # slot has files, it probably means that something went wrong.
+         if (
+             self.settings.get("PORTAGE_PACKAGE_EMPTY_ABORT") == "1"
+             and not filelist
+             and not linklist
+             and others_in_slot
+         ):
+             installed_files = None
+             for other_dblink in others_in_slot:
+                 installed_files = other_dblink.getcontents()
+                 if not installed_files:
+                     continue
+                 from textwrap import wrap
+ 
+                 wrap_width = 72
+                 msg = []
+                 d = {"new_cpv": self.mycpv, "old_cpv": other_dblink.mycpv}
+                 msg.extend(
+                     wrap(
+                         _(
+                             "The '%(new_cpv)s' package will not install "
+                             "any files, but the currently installed '%(old_cpv)s'"
+                             " package has the following files: "
+                         )
+                         % d,
+                         wrap_width,
+                     )
+                 )
+                 msg.append("")
+                 msg.extend(sorted(installed_files))
+                 msg.append("")
+                 msg.append(_("package %s NOT merged") % self.mycpv)
+                 msg.append("")
+                 msg.extend(
+                     wrap(
+                         _(
+                             "Manually run `emerge --unmerge =%s` if you "
+                             "really want to remove the above files. Set "
+                             'PORTAGE_PACKAGE_EMPTY_ABORT="0" in '
+                             "/etc/portage/make.conf if you do not want to "
+                             "abort in cases like this."
+                         )
+                         % other_dblink.mycpv,
+                         wrap_width,
+                     )
+                 )
+                 eerror(msg)
+             if installed_files:
+                 return 1
+ 
+         # Make sure the ebuild environment is initialized and that ${T}/elog
+         # exists for logging of collision-protect eerror messages.
+         if myebuild is None:
+             myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
+         doebuild_environment(myebuild, "preinst", settings=self.settings, db=mydbapi)
+         self.settings["REPLACING_VERSIONS"] = " ".join(
+             [portage.versions.cpv_getversion(other.mycpv) for other in others_in_slot]
+         )
+         prepare_build_dirs(settings=self.settings, cleanup=cleanup)
+ 
+         # check for package collisions
+         blockers = []
+         for blocker in self._blockers or []:
+             blocker = self.vartree.dbapi._dblink(blocker.cpv)
+             # It may have been unmerged before lock(s)
+             # were acquired.
+             if blocker.exists():
+                 blockers.append(blocker)
+ 
+         (
+             collisions,
+             internal_collisions,
+             dirs_ro,
+             symlink_collisions,
+             plib_collisions,
+         ) = self._collision_protect(
+             srcroot, destroot, others_in_slot + blockers, filelist, linklist
+         )
+ 
+         # Check for read-only filesystems.
+         ro_checker = get_ro_checker()
+         rofilesystems = ro_checker(dirs_ro)
+ 
+         if rofilesystems:
+             msg = _(
+                 "One or more files installed to this package are "
+                 "set to be installed to read-only filesystems. "
+                 "Please mount the following filesystems as read-write "
+                 "and retry."
+             )
+             msg = textwrap.wrap(msg, 70)
+             msg.append("")
+             for f in rofilesystems:
+                 msg.append("\t%s" % f)
+             msg.append("")
+             self._elog("eerror", "preinst", msg)
+ 
+             msg = (
+                 _("Package '%s' NOT merged due to read-only file systems.")
+                 % self.settings.mycpv
+             )
+             msg += _(
+                 " If necessary, refer to your elog "
+                 "messages for the whole content of the above message."
+             )
+             msg = textwrap.wrap(msg, 70)
+             eerror(msg)
+             return 1
+ 
+         if internal_collisions:
+             msg = (
+                 _(
+                     "Package '%s' has internal collisions between non-identical files "
+                     "(located in separate directories in the installation image (${D}) "
+                     "corresponding to merged directories in the target "
+                     "filesystem (${ROOT})):"
+                 )
+                 % self.settings.mycpv
+             )
+             msg = textwrap.wrap(msg, 70)
+             msg.append("")
+             for k, v in sorted(internal_collisions.items(), key=operator.itemgetter(0)):
+                 msg.append("\t%s" % os.path.join(destroot, k.lstrip(os.path.sep)))
+                 for (file1, file2), differences in sorted(v.items()):
+                     msg.append(
+                         "\t\t%s" % os.path.join(destroot, file1.lstrip(os.path.sep))
+                     )
+                     msg.append(
+                         "\t\t%s" % os.path.join(destroot, file2.lstrip(os.path.sep))
+                     )
+                     msg.append("\t\t\tDifferences: %s" % ", ".join(differences))
+                     msg.append("")
+             self._elog("eerror", "preinst", msg)
+ 
+             msg = (
+                 _(
+                     "Package '%s' NOT merged due to internal collisions "
+                     "between non-identical files."
+                 )
+                 % self.settings.mycpv
+             )
+             msg += _(
+                 " If necessary, refer to your elog messages for the whole "
+                 "content of the above message."
+             )
+             eerror(textwrap.wrap(msg, 70))
+             return 1
+ 
+         if symlink_collisions:
+             # Symlink collisions need to be distinguished from other types
+             # of collisions, in order to avoid confusion (see bug #409359).
+             msg = _(
+                 "Package '%s' has one or more collisions "
+                 "between symlinks and directories, which is explicitly "
+                 "forbidden by PMS section 13.4 (see bug #326685):"
+             ) % (self.settings.mycpv,)
+             msg = textwrap.wrap(msg, 70)
+             msg.append("")
+             for f in symlink_collisions:
+                 msg.append("\t%s" % os.path.join(destroot, f.lstrip(os.path.sep)))
+             msg.append("")
+             self._elog("eerror", "preinst", msg)
+ 
+         if collisions:
+             collision_protect = "collision-protect" in self.settings.features
+             protect_owned = "protect-owned" in self.settings.features
+             msg = _(
+                 "This package will overwrite one or more files that"
+                 " may belong to other packages (see list below)."
+             )
+             if not (collision_protect or protect_owned):
+                 msg += _(
+                     ' Add either "collision-protect" or'
+                     ' "protect-owned" to FEATURES in'
+                     " make.conf if you would like the merge to abort"
+                     " in cases like this. See the make.conf man page for"
+                     " more information about these features."
+                 )
+             if self.settings.get("PORTAGE_QUIET") != "1":
+                 msg += _(
+                     " You can use a command such as"
+                     " `portageq owners / <filename>` to identify the"
+                     " installed package that owns a file. If portageq"
+                     " reports that only one package owns a file then do NOT"
+                     " file a bug report. A bug report is only useful if it"
+                     " identifies at least two or more packages that are known"
+                     " to install the same file(s)."
+                     " If a collision occurs and you"
+                     " can not explain where the file came from then you"
+                     " should simply ignore the collision since there is not"
+                     " enough information to determine if a real problem"
+                     " exists. Please do NOT file a bug report at"
+                     " https://bugs.gentoo.org/ unless you report exactly which"
+                     " two packages install the same file(s). See"
+                     " https://wiki.gentoo.org/wiki/Knowledge_Base:Blockers"
+                     " for tips on how to solve the problem. And once again,"
+                     " please do NOT file a bug report unless you have"
+                     " completely understood the above message."
+                 )
+ 
+             self.settings["EBUILD_PHASE"] = "preinst"
+             from textwrap import wrap
+ 
+             msg = wrap(msg, 70)
+             if collision_protect:
+                 msg.append("")
+                 msg.append(_("package %s NOT merged") % self.settings.mycpv)
+             msg.append("")
+             msg.append(_("Detected file collision(s):"))
+             msg.append("")
+ 
+             for f in collisions:
+                 msg.append("\t%s" % os.path.join(destroot, f.lstrip(os.path.sep)))
+ 
+             eerror(msg)
+ 
+             owners = None
+             if collision_protect or protect_owned or symlink_collisions:
+                 msg = []
+                 msg.append("")
+                 msg.append(
+                     _("Searching all installed packages for file collisions...")
+                 )
+                 msg.append("")
+                 msg.append(_("Press Ctrl-C to Stop"))
+                 msg.append("")
+                 eerror(msg)
+ 
+                 if len(collisions) > 20:
+                     # get_owners is slow for large numbers of files, so
+                     # don't look them all up.
+                     collisions = collisions[:20]
+ 
+                 pkg_info_strs = {}
+                 self.lockdb()
+                 try:
+                     owners = self.vartree.dbapi._owners.get_owners(collisions)
+                     self.vartree.dbapi.flush_cache()
+ 
+                     for pkg in owners:
+                         pkg = self.vartree.dbapi._pkg_str(pkg.mycpv, None)
+                         pkg_info_str = "%s%s%s" % (pkg, _slot_separator, pkg.slot)
+                         if pkg.repo != _unknown_repo:
+                             pkg_info_str += "%s%s" % (_repo_separator, pkg.repo)
+                         pkg_info_strs[pkg] = pkg_info_str
+ 
+                 finally:
+                     self.unlockdb()
+ 
+                 for pkg, owned_files in owners.items():
+                     msg = []
+                     msg.append(pkg_info_strs[pkg.mycpv])
+                     for f in sorted(owned_files):
+                         msg.append(
+                             "\t%s" % os.path.join(destroot, f.lstrip(os.path.sep))
+                         )
+                     msg.append("")
+                     eerror(msg)
+ 
+                 if not owners:
+                     eerror(
+                         [_("None of the installed packages claim the file(s)."), ""]
+                     )
+ 
+             symlink_abort_msg = _(
+                 "Package '%s' NOT merged since it has "
+                 "one or more collisions between symlinks and directories, "
+                 "which is explicitly forbidden by PMS section 13.4 "
+                 "(see bug #326685)."
+             )
+ 
+             # The explanation about the collision and how to solve
+             # it may not be visible via a scrollback buffer, especially
+             # if the number of file collisions is large. Therefore,
+             # show a summary at the end.
+             abort = False
+             if symlink_collisions:
+                 abort = True
+                 msg = symlink_abort_msg % (self.settings.mycpv,)
+             elif collision_protect:
+                 abort = True
+                 msg = (
+                     _("Package '%s' NOT merged due to file collisions.")
+                     % self.settings.mycpv
+                 )
+             elif protect_owned and owners:
+                 abort = True
+                 msg = (
+                     _("Package '%s' NOT merged due to file collisions.")
+                     % self.settings.mycpv
+                 )
+             else:
+                 msg = (
+                     _("Package '%s' merged despite file collisions.")
+                     % self.settings.mycpv
+                 )
+             msg += _(
+                 " If necessary, refer to your elog "
+                 "messages for the whole content of the above message."
+             )
+             eerror(wrap(msg, 70))
+ 
+             if abort:
+                 return 1
+ 
+         # The merge process may move files out of the image directory,
+         # which causes invalidation of the .installed flag.
+         try:
+             os.unlink(
+                 os.path.join(os.path.dirname(normalize_path(srcroot)), ".installed")
+             )
+         except OSError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+             del e
+ 
+         self.dbdir = self.dbtmpdir
+         self.delete()
+         ensure_dirs(self.dbtmpdir)
+ 
+         downgrade = False
+         if (
+             self._installed_instance is not None
+             and vercmp(self.mycpv.version, self._installed_instance.mycpv.version) < 0
+         ):
+             downgrade = True
+ 
+         if self._installed_instance is not None:
+             rval = self._pre_merge_backup(self._installed_instance, downgrade)
+             if rval != os.EX_OK:
+                 showMessage(
+                     _("!!! FAILED preinst: ") + "quickpkg: %s\n" % rval,
+                     level=logging.ERROR,
+                     noiselevel=-1,
+                 )
+                 return rval
+ 
+         # run preinst script
+         showMessage(
+             _(">>> Merging %(cpv)s to %(destroot)s\n")
+             % {"cpv": self.mycpv, "destroot": destroot}
+         )
+         phase = EbuildPhase(
+             background=False,
+             phase="preinst",
+             scheduler=self._scheduler,
+             settings=self.settings,
+         )
+         phase.start()
+         a = phase.wait()
+ 
+         # XXX: Decide how to handle failures here.
+         if a != os.EX_OK:
+             showMessage(
+                 _("!!! FAILED preinst: ") + str(a) + "\n",
+                 level=logging.ERROR,
+                 noiselevel=-1,
+             )
+             return a
+ 
+         # copy "info" files (like SLOT, CFLAGS, etc.) into the database
+         for x in os.listdir(inforoot):
+             self.copyfile(inforoot + "/" + x)
+ 
+         # write local package counter for recording
+         if counter is None:
+             counter = self.vartree.dbapi.counter_tick(mycpv=self.mycpv)
+         with io.open(
+             _unicode_encode(
+                 os.path.join(self.dbtmpdir, "COUNTER"),
+                 encoding=_encodings["fs"],
+                 errors="strict",
+             ),
+             mode="w",
+             encoding=_encodings["repo.content"],
+             errors="backslashreplace",
+         ) as f:
+             f.write("%s" % counter)
+ 
+         self.updateprotect()
+ 
+         # if we have a file containing previously-merged config file md5sums, grab it.
+         self.vartree.dbapi._fs_lock()
+         try:
+             # This prunes any libraries from the registry that no longer
+             # exist on disk, in case they have been manually removed.
+             # This has to be done prior to merge, since after merge it
+             # is non-trivial to distinguish these files from files
+             # that have just been merged.
+             plib_registry = self.vartree.dbapi._plib_registry
+             if plib_registry:
+                 plib_registry.lock()
+                 try:
+                     plib_registry.load()
+                     plib_registry.store()
+                 finally:
+                     plib_registry.unlock()
+ 
+             # Always behave like --noconfmem is enabled for downgrades
+             # so that people who don't know about this option are less
+             # likely to get confused when doing upgrade/downgrade cycles.
+             cfgfiledict = grabdict(self.vartree.dbapi._conf_mem_file)
+             if "NOCONFMEM" in self.settings or downgrade:
+                 cfgfiledict["IGNORE"] = 1
+             else:
+                 cfgfiledict["IGNORE"] = 0
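+             # cfgfiledict maps each CONFIG_PROTECTed path to the md5sums of
+             # previously-merged versions, e.g. (hypothetical entry)
+             # {"/etc/foo.conf": ["d41d8cd98f00b204e9800998ecf8427e"]}; the
+             # special "IGNORE" key is how the --noconfmem behaviour described
+             # above is signalled to self._protect().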
+ 
+             rval = self._merge_contents(srcroot, destroot, cfgfiledict)
+             if rval != os.EX_OK:
+                 return rval
+         finally:
+             self.vartree.dbapi._fs_unlock()
+ 
+         # These caches are populated during collision-protect and the data
+         # they contain is now invalid. It's very important to invalidate
+         # the contents_inodes cache so that FEATURES=unmerge-orphans
+         # doesn't unmerge anything that belongs to this package that has
+         # just been merged.
+         for dblnk in others_in_slot:
+             dblnk._clear_contents_cache()
+         self._clear_contents_cache()
+ 
+         linkmap = self.vartree.dbapi._linkmap
+         plib_registry = self.vartree.dbapi._plib_registry
+         # We initialize preserve_paths to an empty set rather
+         # than None here because it plays an important role
+         # in prune_plib_registry logic by serving to indicate
+         # that we have a replacement for a package that's
+         # being unmerged.
+ 
+         preserve_paths = set()
+         needed = None
+         if not (self._linkmap_broken or linkmap is None or plib_registry is None):
+             self.vartree.dbapi._fs_lock()
+             plib_registry.lock()
+             try:
+                 plib_registry.load()
+                 needed = os.path.join(inforoot, linkmap._needed_aux_key)
+                 self._linkmap_rebuild(include_file=needed)
+ 
+                 # Preserve old libs if they are still in use
+                 # TODO: Handle cases where the previous instance
+                 # has already been uninstalled but it still has some
+                 # preserved libraries in the registry that we may
+                 # want to preserve here.
+                 preserve_paths = self._find_libs_to_preserve()
+             finally:
+                 plib_registry.unlock()
+                 self.vartree.dbapi._fs_unlock()
+ 
+             if preserve_paths:
+                 self._add_preserve_libs_to_contents(preserve_paths)
+ 
+         # If portage is reinstalling itself, remove the old
+         # version now since we want to use the temporary
+         # PORTAGE_BIN_PATH that will be removed when we return.
+         reinstall_self = False
+         if self.myroot == "/" and match_from_list(PORTAGE_PACKAGE_ATOM, [self.mycpv]):
+             reinstall_self = True
+ 
+         emerge_log = self._emerge_log
+ 
+         # If we have any preserved libraries then autoclean
+         # is forced so that preserve-libs logic doesn't have
+         # to account for the additional complexity of the
+         # AUTOCLEAN=no mode.
+         autoclean = self.settings.get("AUTOCLEAN", "yes") == "yes" or preserve_paths
+ 
+         if autoclean:
+             emerge_log(_(" >>> AUTOCLEAN: %s") % (slot_atom,))
+ 
+         others_in_slot.append(self)  # self has just been merged
+         for dblnk in list(others_in_slot):
+             if dblnk is self:
+                 continue
+             if not (autoclean or dblnk.mycpv == self.mycpv or reinstall_self):
+                 continue
+             showMessage(_(">>> Safely unmerging already-installed instance...\n"))
+             emerge_log(_(" === Unmerging... (%s)") % (dblnk.mycpv,))
+             others_in_slot.remove(dblnk)  # dblnk will unmerge itself now
+             dblnk._linkmap_broken = self._linkmap_broken
+             dblnk.settings["REPLACED_BY_VERSION"] = portage.versions.cpv_getversion(
+                 self.mycpv
+             )
+             dblnk.settings.backup_changes("REPLACED_BY_VERSION")
+             unmerge_rval = dblnk.unmerge(
+                 ldpath_mtimes=prev_mtimes,
+                 others_in_slot=others_in_slot,
+                 needed=needed,
+                 preserve_paths=preserve_paths,
+             )
+             dblnk.settings.pop("REPLACED_BY_VERSION", None)
+ 
+             if unmerge_rval == os.EX_OK:
+                 emerge_log(_(" >>> unmerge success: %s") % (dblnk.mycpv,))
+             else:
+                 emerge_log(_(" !!! unmerge FAILURE: %s") % (dblnk.mycpv,))
+ 
+             self.lockdb()
+             try:
+                 # TODO: Check status and abort if necessary.
+                 dblnk.delete()
+             finally:
+                 self.unlockdb()
+             showMessage(_(">>> Original instance of package unmerged safely.\n"))
+ 
+         if len(others_in_slot) > 1:
+             showMessage(
+                 colorize("WARN", _("WARNING:"))
+                 + _(
+                     " AUTOCLEAN is disabled.  This can cause serious"
+                     " problems due to overlapping packages.\n"
+                 ),
+                 level=logging.WARN,
+                 noiselevel=-1,
+             )
+ 
+         # We hold both directory locks.
+         self.dbdir = self.dbpkgdir
+         self.lockdb()
+         try:
+             self.delete()
+             _movefile(self.dbtmpdir, self.dbpkgdir, mysettings=self.settings)
+             self._merged_path(self.dbpkgdir, os.lstat(self.dbpkgdir))
+             self.vartree.dbapi._cache_delta.recordEvent(
+                 "add", self.mycpv, slot, counter
+             )
+         finally:
+             self.unlockdb()
+ 
+         # Check for file collisions with blocking packages
+         # and remove any colliding files from their CONTENTS
+         # since they now belong to this package.
+         self._clear_contents_cache()
+         contents = self.getcontents()
+         destroot_len = len(destroot) - 1
+         self.lockdb()
+         try:
+             for blocker in blockers:
+                 self.vartree.dbapi.removeFromContents(
+                     blocker, iter(contents), relative_paths=False
+                 )
+         finally:
+             self.unlockdb()
+ 
+         plib_registry = self.vartree.dbapi._plib_registry
+         if plib_registry:
+             self.vartree.dbapi._fs_lock()
+             plib_registry.lock()
+             try:
+                 plib_registry.load()
+ 
+                 if preserve_paths:
+                     # keep track of the libs we preserved
+                     plib_registry.register(
+                         self.mycpv, slot, counter, sorted(preserve_paths)
+                     )
+ 
+                 # Unregister any preserved libs that this package has overwritten
+                 # and update the contents of the packages that owned them.
+                 plib_dict = plib_registry.getPreservedLibs()
+                 for cpv, paths in plib_collisions.items():
+                     if cpv not in plib_dict:
+                         continue
+                     has_vdb_entry = False
+                     if cpv != self.mycpv:
+                         # If we've replaced another instance with the
+                         # same cpv then the vdb entry no longer belongs
+                         # to it, so we'll have to get the slot and counter
+                         # from plib_registry._data instead.
+                         self.vartree.dbapi.lock()
+                         try:
+                             try:
+                                 slot = self.vartree.dbapi._pkg_str(cpv, None).slot
+                                 counter = self.vartree.dbapi.cpv_counter(cpv)
+                             except (KeyError, InvalidData):
+                                 pass
+                             else:
+                                 has_vdb_entry = True
+                                 self.vartree.dbapi.removeFromContents(cpv, paths)
+                         finally:
+                             self.vartree.dbapi.unlock()
+ 
+                     if not has_vdb_entry:
+                         # It's possible for previously unmerged packages
+                         # to have preserved libs in the registry, so try
+                         # to retrieve the slot and counter from there.
+                         has_registry_entry = False
+                         for plib_cps, (
+                             plib_cpv,
+                             plib_counter,
+                             plib_paths,
+                         ) in plib_registry._data.items():
+                             if plib_cpv != cpv:
+                                 continue
+                             try:
+                                 cp, slot = plib_cps.split(":", 1)
+                             except ValueError:
+                                 continue
+                             counter = plib_counter
+                             has_registry_entry = True
+                             break
+ 
+                         if not has_registry_entry:
+                             continue
+ 
+                     remaining = [f for f in plib_dict[cpv] if f not in paths]
+                     plib_registry.register(cpv, slot, counter, remaining)
+ 
+                 plib_registry.store()
+             finally:
+                 plib_registry.unlock()
+                 self.vartree.dbapi._fs_unlock()
+ 
+         self.vartree.dbapi._add(self)
+         contents = self.getcontents()
+ 
+         # do postinst script
+         self.settings["PORTAGE_UPDATE_ENV"] = os.path.join(
+             self.dbpkgdir, "environment.bz2"
+         )
+         self.settings.backup_changes("PORTAGE_UPDATE_ENV")
+         try:
+             phase = EbuildPhase(
+                 background=False,
+                 phase="postinst",
+                 scheduler=self._scheduler,
+                 settings=self.settings,
+             )
+             phase.start()
+             a = phase.wait()
+             if a == os.EX_OK:
+                 showMessage(_(">>> %s merged.\n") % self.mycpv)
+         finally:
+             self.settings.pop("PORTAGE_UPDATE_ENV", None)
+ 
+         if a != os.EX_OK:
+             # It's stupid to bail out here, so keep going regardless of
+             # phase return code.
+             self._postinst_failure = True
+             self._elog(
+                 "eerror",
+                 "postinst",
+                 [
+                     _("FAILED postinst: %s") % (a,),
+                 ],
+             )
+ 
+         # update environment settings, library paths. DO NOT change symlinks.
+         env_update(
+             target_root=self.settings["ROOT"],
+             prev_mtimes=prev_mtimes,
+             contents=contents,
+             env=self.settings,
+             writemsg_level=self._display_merge,
+             vardbapi=self.vartree.dbapi,
+         )
+ 
+         # For gcc upgrades, preserved libs have to be removed after
+         # the library path has been updated.
+         self._prune_plib_registry()
+         self._post_merge_sync()
+ 
+         return os.EX_OK
+ 
+     def _new_backup_path(self, p):
+         """
+         This works for any type of path, such as a regular file, symlink,
+         or directory. The parent directory is assumed to exist.
+         The returned filename is of the form p + '.backup.' + x, where
+         x guarantees that the returned path does not exist yet.
+         """
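+         # Illustrative (hypothetical path): for p="/etc/foo" this probes
+         # /etc/foo.backup.0000, /etc/foo.backup.0001, ... and returns the
+         # first candidate that lstat() reports as missing.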
+         os = _os_merge
+ 
+         x = -1
+         while True:
+             x += 1
+             backup_p = "%s.backup.%04d" % (p, x)
+             try:
+                 os.lstat(backup_p)
+             except OSError:
+                 break
+ 
+         return backup_p
+ 
+     def _merge_contents(self, srcroot, destroot, cfgfiledict):
+ 
+         cfgfiledict_orig = cfgfiledict.copy()
+ 
+         # open CONTENTS file (possibly overwriting old one) for recording
+         # Use atomic_ofstream for automatic coercion of raw bytes to
+         # unicode, in order to prevent TypeError when writing raw bytes
+         # to TextIOWrapper with python2.
+         outfile = atomic_ofstream(
+             _unicode_encode(
+                 os.path.join(self.dbtmpdir, "CONTENTS"),
+                 encoding=_encodings["fs"],
+                 errors="strict",
+             ),
+             mode="w",
+             encoding=_encodings["repo.content"],
+             errors="backslashreplace",
+         )
+ 
+         # Don't bump mtimes on merge since some applications require
+         # preservation of timestamps.  This means that the unmerge phase must
+         # check to see if a file belongs to an installed instance in the same
+         # slot.
+         mymtime = None
+ 
+         # set umask to 0 for merging; back up umask, save old one in prevmask (since this is a global change)
+         prevmask = os.umask(0)
+         secondhand = []
+ 
+         # we do a first merge; this will recurse through all files in our srcroot but also build up a
+         # "second hand" of symlinks to merge later
+         if self.mergeme(
+             srcroot,
+             destroot,
+             outfile,
+             secondhand,
+             self.settings["EPREFIX"].lstrip(os.sep),
+             cfgfiledict,
+             mymtime,
+         ):
+             return 1
+ 
+         # now, it's time to deal with our second hand; we'll loop until we can't
+         # merge anymore.  The rest are broken symlinks; we'll merge them too.
+         lastlen = 0
+         while len(secondhand) and len(secondhand) != lastlen:
+             # clear the thirdhand.	Anything from our second hand that
+             # couldn't get merged will be added to thirdhand.
+ 
+             thirdhand = []
+             if self.mergeme(
+                 srcroot, destroot, outfile, thirdhand, secondhand, cfgfiledict, mymtime
+             ):
+                 return 1
+ 
+             # swap hands
+             lastlen = len(secondhand)
+ 
+             # our thirdhand now becomes our secondhand.  It's ok to throw
+             # away secondhand since thirdhand contains all the stuff that
+             # couldn't be merged.
+             secondhand = thirdhand
+ 
+         if len(secondhand):
+             # force merge of remaining symlinks (broken or circular; oh well)
+             if self.mergeme(
+                 srcroot, destroot, outfile, None, secondhand, cfgfiledict, mymtime
+             ):
+                 return 1
+ 
+         # restore umask
+         os.umask(prevmask)
+ 
+         # if we opened it, close it
+         outfile.flush()
+         outfile.close()
+ 
+         # write out our collection of md5sums
+         if cfgfiledict != cfgfiledict_orig:
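+             # "IGNORE" is an in-memory flag only; drop it so it is not
+             # persisted to the on-disk config-protect memory file.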
+             cfgfiledict.pop("IGNORE", None)
+             try:
+                 writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
+             except InvalidLocation:
+                 self.settings._init_dirs()
+                 writedict(cfgfiledict, self.vartree.dbapi._conf_mem_file)
+ 
+         return os.EX_OK
+ 
+     def mergeme(
+         self,
+         srcroot,
+         destroot,
+         outfile,
+         secondhand,
+         stufftomerge,
+         cfgfiledict,
+         thismtime,
+     ):
+         """
+ 
+         This function handles actual merging of the package contents to the livefs.
+         It also handles config protection.
+ 
+         @param srcroot: Where are we copying files from (usually ${D})
+         @type srcroot: String (Path)
+         @param destroot: Typically ${ROOT}
+         @type destroot: String (Path)
+         @param outfile: File to log operations to
+         @type outfile: File Object
+         @param secondhand: A list of items to merge in pass two (usually
+         symlinks that point to non-existing files that may get merged later)
+         @type secondhand: List
+         @param stufftomerge: Either a directory to merge, or a list of items.
+         @type stufftomerge: String or List
+         @param cfgfiledict: { File: [md5, ...] } mapping for config_protected files
+         @type cfgfiledict: Dictionary
+         @param thismtime: None or new mtime for merged files (expressed in seconds
+         in Python <3.3 and nanoseconds in Python >=3.3)
+         @type thismtime: None or Int
+         @rtype: None or Boolean
+         @return:
+         1. True on failure
+         2. None otherwise
+ 
+         """
+ 
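+         # Two call shapes (hypothetical example values): a string such as
+         # "usr" names a directory relative to srcroot whose children are
+         # listed and merged, while a list of relative paths (the later
+         # "secondhand" symlink passes) merges exactly those entries.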
+         showMessage = self._display_merge
+         writemsg = self._display_merge
+ 
+         os = _os_merge
+         sep = os.sep
+         join = os.path.join
+         srcroot = normalize_path(srcroot).rstrip(sep) + sep
+         destroot = normalize_path(destroot).rstrip(sep) + sep
+         calc_prelink = "prelink-checksums" in self.settings.features
+ 
+         protect_if_modified = (
+             "config-protect-if-modified" in self.settings.features
+             and self._installed_instance is not None
+         )
+ 
+         # this is supposed to merge a list of files.  There will be 2 forms of argument passing.
+         if isinstance(stufftomerge, str):
+             # A directory is specified.  Figure out protection paths, listdir() it and process it.
+             mergelist = [
+                 join(stufftomerge, child)
+                 for child in os.listdir(join(srcroot, stufftomerge))
+             ]
+         else:
+             mergelist = stufftomerge[:]
+ 
+         while mergelist:
+ 
+             relative_path = mergelist.pop()
+             mysrc = join(srcroot, relative_path)
+             mydest = join(destroot, relative_path)
+             # myrealdest is mydest without the $ROOT prefix (makes a difference if ROOT!="/")
+             myrealdest = join(sep, relative_path)
+             # stat file once, test using S_* macros many times (faster that way)
+             mystat = os.lstat(mysrc)
+             mymode = mystat[stat.ST_MODE]
+             mymd5 = None
+             myto = None
+ 
+             mymtime = mystat.st_mtime_ns
+ 
+             if stat.S_ISREG(mymode):
+                 mymd5 = perform_md5(mysrc, calc_prelink=calc_prelink)
+             elif stat.S_ISLNK(mymode):
+                 # The file name of mysrc and the actual file that it points to
+                 # will have earlier been forcefully converted to the 'merge'
+                 # encoding if necessary, but the content of the symbolic link
+                 # may need to be forcefully converted here.
+                 myto = _os.readlink(
+                     _unicode_encode(
+                         mysrc, encoding=_encodings["merge"], errors="strict"
+                     )
+                 )
+                 try:
+                     myto = _unicode_decode(
+                         myto, encoding=_encodings["merge"], errors="strict"
+                     )
+                 except UnicodeDecodeError:
+                     myto = _unicode_decode(
+                         myto, encoding=_encodings["merge"], errors="replace"
+                     )
+                     myto = _unicode_encode(
+                         myto, encoding="ascii", errors="backslashreplace"
+                     )
+                     myto = _unicode_decode(
+                         myto, encoding=_encodings["merge"], errors="replace"
+                     )
+                     os.unlink(mysrc)
+                     os.symlink(myto, mysrc)
+ 
+                 mymd5 = md5(_unicode_encode(myto)).hexdigest()
+ 
+             protected = False
+             if stat.S_ISLNK(mymode) or stat.S_ISREG(mymode):
+                 protected = self.isprotected(mydest)
+ 
+                 if (
+                     stat.S_ISREG(mymode)
+                     and mystat.st_size == 0
+                     and os.path.basename(mydest).startswith(".keep")
+                 ):
+                     protected = False
+ 
+             destmd5 = None
+             mydest_link = None
+             # handy variables; mydest is the target object on the live filesystems;
+             # mysrc is the source object in the temporary install dir
+             try:
+                 mydstat = os.lstat(mydest)
+                 mydmode = mydstat.st_mode
+                 if protected:
+                     if stat.S_ISLNK(mydmode):
+                         # Read symlink target as bytes, in case the
+                         # target path has a bad encoding.
+                         mydest_link = _os.readlink(
+                             _unicode_encode(
+                                 mydest, encoding=_encodings["merge"], errors="strict"
+                             )
+                         )
+                         mydest_link = _unicode_decode(
+                             mydest_link, encoding=_encodings["merge"], errors="replace"
+                         )
+ 
+                         # For protection of symlinks, the md5
+                         # of the link target path string is used
+                         # for cfgfiledict (symlinks are
+                         # protected since bug #485598).
+                         destmd5 = md5(_unicode_encode(mydest_link)).hexdigest()
+ 
+                     elif stat.S_ISREG(mydmode):
+                         destmd5 = perform_md5(mydest, calc_prelink=calc_prelink)
+             except (FileNotFound, OSError) as e:
+                 if isinstance(e, OSError) and e.errno != errno.ENOENT:
+                     raise
+                 # dest file doesn't exist
+                 mydstat = None
+                 mydmode = None
+                 mydest_link = None
+                 destmd5 = None
+ 
+             moveme = True
+             if protected:
+                 mydest, protected, moveme = self._protect(
+                     cfgfiledict,
+                     protect_if_modified,
+                     mymd5,
+                     myto,
+                     mydest,
+                     myrealdest,
+                     mydmode,
+                     destmd5,
+                     mydest_link,
+                 )
+ 
+             zing = "!!!"
+             if not moveme:
+                 # confmem rejected this update
+                 zing = "---"
+ 
+             if stat.S_ISLNK(mymode):
+                 # we are merging a symbolic link
+                 # Pass in the symlink target in order to bypass the
+                 # os.readlink() call inside abssymlink(), since that
+                 # call is unsafe if the merge encoding is not ascii
+                 # or utf_8 (see bug #382021).
+                 myabsto = abssymlink(mysrc, target=myto)
+ 
+                 if myabsto.startswith(srcroot):
+                     myabsto = myabsto[len(srcroot) :]
+                 myabsto = myabsto.lstrip(sep)
+                 if self.settings and self.settings["D"]:
+                     if myto.startswith(self.settings["D"]):
+                         myto = myto[len(self.settings["D"]) - 1 :]
+                 # myrealto contains the path of the real file to which this symlink points.
+                 # we can simply test for existence of this file to see if the target has been merged yet
+                 myrealto = normalize_path(os.path.join(destroot, myabsto))
+                 if mydmode is not None and stat.S_ISDIR(mydmode):
+                     if not protected:
+                         # we can't merge a symlink over a directory
+                         newdest = self._new_backup_path(mydest)
+                         msg = []
+                         msg.append("")
+                         msg.append(
+                             _("Installation of a symlink is blocked by a directory:")
+                         )
+                         msg.append("  '%s'" % mydest)
+                         msg.append(
+                             _("This symlink will be merged with a different name:")
+                         )
+                         msg.append("  '%s'" % newdest)
+                         msg.append("")
+                         self._eerror("preinst", msg)
+                         mydest = newdest
+ 
+                 # if secondhand is None it means we're operating in "force" mode and should not create a second hand.
+                 if (secondhand != None) and (not os.path.exists(myrealto)):
+                     # either the target directory doesn't exist yet or the target file doesn't exist -- or
+                     # the target is a broken symlink.  We will add this file to our "second hand" and merge
+                     # it later.
+                     secondhand.append(mysrc[len(srcroot) :])
+                     continue
+                 # unlinking no longer necessary; "movefile" will overwrite symlinks atomically and correctly
+                 if moveme:
+                     zing = ">>>"
+                     mymtime = movefile(
+                         mysrc,
+                         mydest,
+                         newmtime=thismtime,
+                         sstat=mystat,
+                         mysettings=self.settings,
+                         encoding=_encodings["merge"],
+                     )
+ 
+                 try:
+                     self._merged_path(mydest, os.lstat(mydest))
+                 except OSError:
+                     pass
+ 
+                 if mymtime != None:
+                     # Use lexists, since if the target happens to be a broken
+                     # symlink then that should trigger an independent warning.
+                     if not (
+                         os.path.lexists(myrealto)
+                         or os.path.lexists(join(srcroot, myabsto))
+                     ):
+                         self._eqawarn(
+                             "preinst",
+                             [
+                                 _(
+                                     "QA Notice: Symbolic link /%s points to /%s which does not exist."
+                                 )
+                                 % (relative_path, myabsto)
+                             ],
+                         )
+ 
+                     showMessage("%s %s -> %s\n" % (zing, mydest, myto))
+                     outfile.write(
+                         self._format_contents_line(
+                             node_type="sym",
+                             abs_path=myrealdest,
+                             symlink_target=myto,
+                             mtime_ns=mymtime,
+                         )
+                     )
+                 else:
+                     showMessage(
+                         _("!!! Failed to move file.\n"),
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+                     showMessage(
+                         "!!! %s -> %s\n" % (mydest, myto),
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+                     return 1
+             elif stat.S_ISDIR(mymode):
+                 # we are merging a directory
+                 if mydmode != None:
+                     # destination exists
+ 
+                     if bsd_chflags:
+                         # Save then clear flags on dest.
+                         dflags = mydstat.st_flags
+                         if dflags != 0:
+                             bsd_chflags.lchflags(mydest, 0)
+ 
+                     if not stat.S_ISLNK(mydmode) and not os.access(mydest, os.W_OK):
+                         pkgstuff = pkgsplit(self.pkg)
+                         writemsg(
+                             _("\n!!! Cannot write to '%s'.\n") % mydest, noiselevel=-1
+                         )
+                         writemsg(
+                             _(
+                                 "!!! Please check permissions and directories for broken symlinks.\n"
+                             )
+                         )
+                         writemsg(
+                             _(
+                                 "!!! You may start the merge process again by using ebuild:\n"
+                             )
+                         )
+                         writemsg(
+                             "!!! ebuild "
+                             + self.settings["PORTDIR"]
+                             + "/"
+                             + self.cat
+                             + "/"
+                             + pkgstuff[0]
+                             + "/"
+                             + self.pkg
+                             + ".ebuild merge\n"
+                         )
+                         writemsg(_("!!! And finish by running this: env-update\n\n"))
+                         return 1
+ 
+                     if stat.S_ISDIR(mydmode) or (
+                         stat.S_ISLNK(mydmode) and os.path.isdir(mydest)
+                     ):
+                         # a symlink to an existing directory will work for us; keep it:
+                         showMessage("--- %s/\n" % mydest)
+                         if bsd_chflags:
+                             bsd_chflags.lchflags(mydest, dflags)
+                     else:
+                         # a non-directory and non-symlink-to-directory.  Won't work for us.  Move out of the way.
+                         backup_dest = self._new_backup_path(mydest)
+                         msg = []
+                         msg.append("")
+                         msg.append(
+                             _("Installation of a directory is blocked by a file:")
+                         )
+                         msg.append("  '%s'" % mydest)
+                         msg.append(_("This file will be renamed to a different name:"))
+                         msg.append("  '%s'" % backup_dest)
+                         msg.append("")
+                         self._eerror("preinst", msg)
+                         if (
+                             movefile(
+                                 mydest,
+                                 backup_dest,
+                                 mysettings=self.settings,
+                                 encoding=_encodings["merge"],
+                             )
+                             is None
+                         ):
+                             return 1
+                         showMessage(
+                             _("bak %s %s.backup\n") % (mydest, mydest),
+                             level=logging.ERROR,
+                             noiselevel=-1,
+                         )
+                         # now create our directory
+                         try:
+                             if self.settings.selinux_enabled():
+                                 _selinux_merge.mkdir(mydest, mysrc)
+                             else:
+                                 os.mkdir(mydest)
+                         except OSError as e:
+                             # Error handling should be equivalent to
+                             # portage.util.ensure_dirs() for cases
+                             # like bug #187518.
+                             if e.errno in (errno.EEXIST,):
+                                 pass
+                             elif os.path.isdir(mydest):
+                                 pass
+                             else:
+                                 raise
+                             del e
+ 
+                         if bsd_chflags:
+                             bsd_chflags.lchflags(mydest, dflags)
+                         os.chmod(mydest, mystat[0])
+                         os.chown(mydest, mystat[4], mystat[5])
+                         showMessage(">>> %s/\n" % mydest)
+                 else:
+                     try:
+                         # destination doesn't exist
+                         if self.settings.selinux_enabled():
+                             _selinux_merge.mkdir(mydest, mysrc)
+                         else:
+                             os.mkdir(mydest)
+                     except OSError as e:
+                         # Error handling should be equivalent to
+                         # portage.util.ensure_dirs() for cases
+                         # like bug #187518.
+                         if e.errno in (errno.EEXIST,):
+                             pass
+                         elif os.path.isdir(mydest):
+                             pass
+                         else:
+                             raise
+                         del e
+                     os.chmod(mydest, mystat[0])
+                     os.chown(mydest, mystat[4], mystat[5])
+                     showMessage(">>> %s/\n" % mydest)
+ 
+                 try:
+                     self._merged_path(mydest, os.lstat(mydest))
+                 except OSError:
+                     pass
+ 
+                 outfile.write(
+                     self._format_contents_line(node_type="dir", abs_path=myrealdest)
+                 )
+                 # recurse and merge this directory
+                 mergelist.extend(
+                     join(relative_path, child)
+                     for child in os.listdir(join(srcroot, relative_path))
+                 )
+ 
+             elif stat.S_ISREG(mymode):
+                 # we are merging a regular file
+                 if not protected and mydmode is not None and stat.S_ISDIR(mydmode):
+                     # install of destination is blocked by an existing directory with the same name
+                     newdest = self._new_backup_path(mydest)
+                     msg = []
+                     msg.append("")
+                     msg.append(
+                         _("Installation of a regular file is blocked by a directory:")
+                     )
+                     msg.append("  '%s'" % mydest)
+                     msg.append(_("This file will be merged with a different name:"))
+                     msg.append("  '%s'" % newdest)
+                     msg.append("")
+                     self._eerror("preinst", msg)
+                     mydest = newdest
+ 
+                 # whether config protection or not, we merge the new file the
+                 # same way.  Unless moveme=0 (blocking directory)
+                 if moveme:
+                     # Create hardlinks only for source files that already exist
+                     # as hardlinks (having identical st_dev and st_ino).
+                     hardlink_key = (mystat.st_dev, mystat.st_ino)
+ 
+                     hardlink_candidates = self._hardlink_merge_map.get(hardlink_key)
+                     if hardlink_candidates is None:
+                         hardlink_candidates = []
+                         self._hardlink_merge_map[hardlink_key] = hardlink_candidates
+ 
+                     mymtime = movefile(
+                         mysrc,
+                         mydest,
+                         newmtime=thismtime,
+                         sstat=mystat,
+                         mysettings=self.settings,
+                         hardlink_candidates=hardlink_candidates,
+                         encoding=_encodings["merge"],
+                     )
+                     if mymtime is None:
+                         return 1
+                     hardlink_candidates.append(mydest)
+                     zing = ">>>"
+ 
+                     try:
+                         self._merged_path(mydest, os.lstat(mydest))
+                     except OSError:
+                         pass
+ 
+                 if mymtime != None:
+                     outfile.write(
+                         self._format_contents_line(
+                             node_type="obj",
+                             abs_path=myrealdest,
+                             md5_digest=mymd5,
+                             mtime_ns=mymtime,
+                         )
+                     )
+                 showMessage("%s %s\n" % (zing, mydest))
+             else:
+                 # we are merging a fifo or device node
+                 zing = "!!!"
+                 if mydmode is None:
+                     # destination doesn't exist
+                     if (
+                         movefile(
+                             mysrc,
+                             mydest,
+                             newmtime=thismtime,
+                             sstat=mystat,
+                             mysettings=self.settings,
+                             encoding=_encodings["merge"],
+                         )
+                         is not None
+                     ):
+                         zing = ">>>"
+ 
+                         try:
+                             self._merged_path(mydest, os.lstat(mydest))
+                         except OSError:
+                             pass
+ 
+                     else:
+                         return 1
+                 if stat.S_ISFIFO(mymode):
+                     outfile.write(
+                         self._format_contents_line(node_type="fif", abs_path=myrealdest)
+                     )
+                 else:
+                     outfile.write(
+                         self._format_contents_line(node_type="dev", abs_path=myrealdest)
+                     )
+                 showMessage(zing + " " + mydest + "\n")
+ 
+     def _protect(
+         self,
+         cfgfiledict,
+         protect_if_modified,
+         src_md5,
+         src_link,
+         dest,
+         dest_real,
+         dest_mode,
+         dest_md5,
+         dest_link,
+     ):
+ 
+         move_me = True
+         protected = True
+         force = False
+         k = False
+         if self._installed_instance is not None:
+             k = self._installed_instance._match_contents(dest_real)
+         if k is not False:
+             if dest_mode is None:
+                 # If the file doesn't exist, then it may
+                 # have been deleted or renamed by the
+                 # admin. Therefore, force the file to be
+                 # merged with a ._cfg name, so that the
+                 # admin will be prompted for this update
+                 # (see bug #523684).
+                 force = True
+ 
+             elif protect_if_modified:
+                 data = self._installed_instance.getcontents()[k]
+                 if data[0] == "obj" and data[2] == dest_md5:
+                     protected = False
+                 elif data[0] == "sym" and data[2] == dest_link:
+                     protected = False
+ 
+         if protected and dest_mode is not None:
+             # we have a protection path; enable config file management.
+             if src_md5 == dest_md5:
+                 protected = False
+ 
+             elif src_md5 == cfgfiledict.get(dest_real, [None])[0]:
+                 # An identical update has previously been
+                 # merged.  Skip it unless the user has chosen
+                 # --noconfmem.
+                 move_me = protected = bool(cfgfiledict["IGNORE"])
+ 
+             if (
+                 protected
+                 and (dest_link is not None or src_link is not None)
+                 and dest_link != src_link
+             ):
+                 # If either one is a symlink, and they are not
+                 # identical symlinks, then force config protection.
+                 force = True
+ 
+             if move_me:
+                 # Merging a new file, so update confmem.
+                 cfgfiledict[dest_real] = [src_md5]
+             elif dest_md5 == cfgfiledict.get(dest_real, [None])[0]:
+                 # A previously remembered update has been
+                 # accepted, so it is removed from confmem.
+                 del cfgfiledict[dest_real]
+ 
+         if protected and move_me:
+             dest = new_protect_filename(
+                 dest, newmd5=(dest_link or src_md5), force=force
+             )
+ 
+         return dest, protected, move_me
+ 
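[Editor's note, not part of the diff: when _protect() above keeps a file protected, new_protect_filename() redirects the merge to a ._cfgNNNN_ sibling that etc-update or dispatch-conf later review. A minimal sketch of that naming convention only, not the real implementation, which also scans existing ._cfg* entries and compares digests:]

    import os

    def example_cfg_name(dest, serial=0):
        # Mirrors the well-known ._cfg0000_<name> pattern used for
        # CONFIG_PROTECT updates (illustrative simplification).
        d, f = os.path.split(dest)
        return os.path.join(d, "._cfg%04d_%s" % (serial, f))

    print(example_cfg_name("/etc/hosts"))  # /etc/._cfg0000_hosts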
+     def _format_contents_line(
+         self, node_type, abs_path, md5_digest=None, symlink_target=None, mtime_ns=None
+     ):
+         fields = [node_type, abs_path]
+         if md5_digest is not None:
+             fields.append(md5_digest)
+         elif symlink_target is not None:
+             fields.append("-> {}".format(symlink_target))
+         if mtime_ns is not None:
+             fields.append(str(mtime_ns // 1000000000))
+         return "{}\n".format(" ".join(fields))
+ 
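[Editor's note, not part of the diff: _format_contents_line() above emits the familiar VDB CONTENTS records; a few illustrative lines, with package name, digest and mtime made up:]

    dir /usr/share/doc/example-1.0
    obj /usr/bin/example d41d8cd98f00b204e9800998ecf8427e 1642156321
    sym /usr/lib/libexample.so -> libexample.so.1 1642156321
    fif /run/example.fifo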
+     def _merged_path(self, path, lstatobj, exists=True):
+         previous_path = self._device_path_map.get(lstatobj.st_dev)
+         if (
+             previous_path is None
+             or previous_path is False
+             or (exists and len(path) < len(previous_path))
+         ):
+             if exists:
+                 self._device_path_map[lstatobj.st_dev] = path
+             else:
+                 # This entry is used to indicate that we've unmerged
+                 # a file from this device, and later, this entry is
+                 # replaced by a parent directory.
+                 self._device_path_map[lstatobj.st_dev] = False
+ 
+     def _post_merge_sync(self):
+         """
+         Call this after merge or unmerge, in order to sync relevant files to
+         disk and avoid data-loss in the event of a power failure. This method
+         does nothing if FEATURES=merge-sync is disabled.
+         """
+         if not self._device_path_map or "merge-sync" not in self.settings.features:
+             return
+ 
+         returncode = None
+         if platform.system() == "Linux":
+ 
+             paths = []
+             for path in self._device_path_map.values():
+                 if path is not False:
+                     paths.append(path)
+             paths = tuple(paths)
+ 
+             proc = SyncfsProcess(
+                 paths=paths, scheduler=(self._scheduler or asyncio._safe_loop())
+             )
+             proc.start()
+             returncode = proc.wait()
+ 
+         if returncode is None or returncode != os.EX_OK:
+             try:
+                 proc = subprocess.Popen(["sync"])
+             except EnvironmentError:
+                 pass
+             else:
+                 proc.wait()
+ 
+     @_slot_locked
+     def merge(
+         self,
+         mergeroot,
+         inforoot,
+         myroot=None,
+         myebuild=None,
+         cleanup=0,
+         mydbapi=None,
+         prev_mtimes=None,
+         counter=None,
+     ):
+         """
+         @param myroot: ignored, self._eroot is used instead
+         """
+         myroot = None
+         retval = -1
+         parallel_install = "parallel-install" in self.settings.features
+         if not parallel_install:
+             self.lockdb()
+         self.vartree.dbapi._bump_mtime(self.mycpv)
+         if self._scheduler is None:
+             self._scheduler = SchedulerInterface(asyncio._safe_loop())
+         try:
+             retval = self.treewalk(
+                 mergeroot,
+                 myroot,
+                 inforoot,
+                 myebuild,
+                 cleanup=cleanup,
+                 mydbapi=mydbapi,
+                 prev_mtimes=prev_mtimes,
+                 counter=counter,
+             )
+ 
+             # If PORTAGE_BUILDDIR doesn't exist, then it probably means
+             # fail-clean is enabled, and the success/die hooks have
+             # already been called by EbuildPhase.
+             if os.path.isdir(self.settings["PORTAGE_BUILDDIR"]):
+ 
+                 if retval == os.EX_OK:
+                     phase = "success_hooks"
+                 else:
+                     phase = "die_hooks"
+ 
+                 ebuild_phase = MiscFunctionsProcess(
+                     background=False,
+                     commands=[phase],
+                     scheduler=self._scheduler,
+                     settings=self.settings,
+                 )
+                 ebuild_phase.start()
+                 ebuild_phase.wait()
+                 self._elog_process()
+ 
+                 if "noclean" not in self.settings.features and (
+                     retval == os.EX_OK or "fail-clean" in self.settings.features
+                 ):
+                     if myebuild is None:
+                         myebuild = os.path.join(inforoot, self.pkg + ".ebuild")
+ 
+                     doebuild_environment(
+                         myebuild, "clean", settings=self.settings, db=mydbapi
+                     )
+                     phase = EbuildPhase(
+                         background=False,
+                         phase="clean",
+                         scheduler=self._scheduler,
+                         settings=self.settings,
+                     )
+                     phase.start()
+                     phase.wait()
+         finally:
+             self.settings.pop("REPLACING_VERSIONS", None)
+             if self.vartree.dbapi._linkmap is None:
+                 # preserve-libs is entirely disabled
+                 pass
+             else:
+                 self.vartree.dbapi._linkmap._clear_cache()
+             self.vartree.dbapi._bump_mtime(self.mycpv)
+             if not parallel_install:
+                 self.unlockdb()
+ 
+         if retval == os.EX_OK and self._postinst_failure:
+             retval = portage.const.RETURNCODE_POSTINST_FAILURE
+ 
+         return retval
+ 
+     def getstring(self, name):
+         "returns contents of a file with whitespace converted to spaces"
+         if not os.path.exists(self.dbdir + "/" + name):
+             return ""
+         with io.open(
+             _unicode_encode(
+                 os.path.join(self.dbdir, name),
+                 encoding=_encodings["fs"],
+                 errors="strict",
+             ),
+             mode="r",
+             encoding=_encodings["repo.content"],
+             errors="replace",
+         ) as f:
+             mydata = f.read().split()
+         return " ".join(mydata)
+ 
+     def copyfile(self, fname):
+         shutil.copyfile(fname, self.dbdir + "/" + os.path.basename(fname))
+ 
+     def getfile(self, fname):
+         if not os.path.exists(self.dbdir + "/" + fname):
+             return ""
+         with io.open(
+             _unicode_encode(
+                 os.path.join(self.dbdir, fname),
+                 encoding=_encodings["fs"],
+                 errors="strict",
+             ),
+             mode="r",
+             encoding=_encodings["repo.content"],
+             errors="replace",
+         ) as f:
+             return f.read()
+ 
+     def setfile(self, fname, data):
+         kwargs = {}
+         if fname == "environment.bz2" or not isinstance(data, str):
+             kwargs["mode"] = "wb"
+         else:
+             kwargs["mode"] = "w"
+             kwargs["encoding"] = _encodings["repo.content"]
+         write_atomic(os.path.join(self.dbdir, fname), data, **kwargs)
+ 
+     def getelements(self, ename):
+         if not os.path.exists(self.dbdir + "/" + ename):
+             return []
+         with io.open(
+             _unicode_encode(
+                 os.path.join(self.dbdir, ename),
+                 encoding=_encodings["fs"],
+                 errors="strict",
+             ),
+             mode="r",
+             encoding=_encodings["repo.content"],
+             errors="replace",
+         ) as f:
+             mylines = f.readlines()
+         myreturn = []
+         for x in mylines:
+             for y in x[:-1].split():
+                 myreturn.append(y)
+         return myreturn
+ 
+     def setelements(self, mylist, ename):
+         with io.open(
+             _unicode_encode(
+                 os.path.join(self.dbdir, ename),
+                 encoding=_encodings["fs"],
+                 errors="strict",
+             ),
+             mode="w",
+             encoding=_encodings["repo.content"],
+             errors="backslashreplace",
+         ) as f:
+             for x in mylist:
+                 f.write("%s\n" % x)
+ 
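[Editor's note, not part of the diff: getstring()/getfile()/getelements() above read files straight out of a package's VDB directory (var/db/pkg/<cat>/<pkg-ver>/ under EROOT). A hedged usage sketch; app-misc/example-1.0 is a made-up installed package:]

    import portage
    from portage.dbapi.vartree import dblink

    settings = portage.settings
    vartree = portage.db[settings["EROOT"]]["vartree"]
    mylink = dblink("app-misc", "example-1.0", settings=settings,
                    treetype="vartree", vartree=vartree)
    print(mylink.getstring("DESCRIPTION"))  # whitespace collapsed to spaces
    print(mylink.getelements("USE"))        # list of whitespace-separated tokens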
+     def isregular(self):
+         "Is this a regular package (does it have a CATEGORY file?  A dblink can be virtual *and* regular)"
+         return os.path.exists(os.path.join(self.dbdir, "CATEGORY"))
+ 
+     def _pre_merge_backup(self, backup_dblink, downgrade):
+ 
+         if "unmerge-backup" in self.settings.features or (
+             downgrade and "downgrade-backup" in self.settings.features
+         ):
+             return self._quickpkg_dblink(backup_dblink, False, None)
+ 
+         return os.EX_OK
+ 
+     def _pre_unmerge_backup(self, background):
+ 
+         if "unmerge-backup" in self.settings.features:
+             logfile = None
+             if self.settings.get("PORTAGE_BACKGROUND") != "subprocess":
+                 logfile = self.settings.get("PORTAGE_LOG_FILE")
+             return self._quickpkg_dblink(self, background, logfile)
+ 
+         return os.EX_OK
+ 
+     def _quickpkg_dblink(self, backup_dblink, background, logfile):
+ 
+         build_time = backup_dblink.getfile("BUILD_TIME")
+         try:
+             build_time = int(build_time.strip())
+         except ValueError:
+             build_time = 0
+ 
+         trees = QueryCommand.get_db()[self.settings["EROOT"]]
+         bintree = trees["bintree"]
+ 
+         for binpkg in reversed(bintree.dbapi.match("={}".format(backup_dblink.mycpv))):
+             if binpkg.build_time == build_time:
+                 return os.EX_OK
+ 
+         self.lockdb()
+         try:
+ 
+             if not backup_dblink.exists():
+                 # It got unmerged by a concurrent process.
+                 return os.EX_OK
+ 
+             # Call quickpkg for support of QUICKPKG_DEFAULT_OPTS and stuff.
+             quickpkg_binary = os.path.join(
+                 self.settings["PORTAGE_BIN_PATH"], "quickpkg"
+             )
+ 
+             if not os.access(quickpkg_binary, os.X_OK):
+                 # If not running from the source tree, use PATH.
+                 quickpkg_binary = find_binary("quickpkg")
+                 if quickpkg_binary is None:
+                     self._display_merge(
+                         _("%s: command not found") % "quickpkg",
+                         level=logging.ERROR,
+                         noiselevel=-1,
+                     )
+                     return 127
+ 
+             # Let quickpkg inherit the global vartree config's env.
+             env = dict(self.vartree.settings.items())
+             env["__PORTAGE_INHERIT_VARDB_LOCK"] = "1"
+ 
+             pythonpath = [x for x in env.get("PYTHONPATH", "").split(":") if x]
+             if not pythonpath or not os.path.samefile(pythonpath[0], portage._pym_path):
+                 pythonpath.insert(0, portage._pym_path)
+             env["PYTHONPATH"] = ":".join(pythonpath)
+ 
+             quickpkg_proc = SpawnProcess(
+                 args=[
+                     portage._python_interpreter,
+                     quickpkg_binary,
+                     "=%s" % (backup_dblink.mycpv,),
+                 ],
+                 background=background,
+                 env=env,
+                 scheduler=self._scheduler,
+                 logfile=logfile,
+             )
+             quickpkg_proc.start()
+ 
+             return quickpkg_proc.wait()
+ 
+         finally:
+             self.unlockdb()
+ 
+ 
+ def merge(
+     mycat,
+     mypkg,
+     pkgloc,
+     infloc,
+     myroot=None,
+     settings=None,
+     myebuild=None,
+     mytree=None,
+     mydbapi=None,
+     vartree=None,
+     prev_mtimes=None,
+     blockers=None,
+     scheduler=None,
+     fd_pipes=None,
+ ):
+     """
+     @param myroot: ignored, settings['EROOT'] is used instead
+     """
+     myroot = None
+     if settings is None:
+         raise TypeError("settings argument is required")
+     if not os.access(settings["EROOT"], os.W_OK):
+         writemsg(
+             _("Permission denied: access('%s', W_OK)\n") % settings["EROOT"],
+             noiselevel=-1,
+         )
+         return errno.EACCES
+     background = settings.get("PORTAGE_BACKGROUND") == "1"
+     merge_task = MergeProcess(
+         mycat=mycat,
+         mypkg=mypkg,
+         settings=settings,
+         treetype=mytree,
+         vartree=vartree,
+         scheduler=(scheduler or asyncio._safe_loop()),
+         background=background,
+         blockers=blockers,
+         pkgloc=pkgloc,
+         infloc=infloc,
+         myebuild=myebuild,
+         mydbapi=mydbapi,
+         prev_mtimes=prev_mtimes,
+         logfile=settings.get("PORTAGE_LOG_FILE"),
+         fd_pipes=fd_pipes,
+     )
+     merge_task.start()
+     retcode = merge_task.wait()
+     return retcode
+ 
+ 
+ def unmerge(
+     cat,
+     pkg,
+     myroot=None,
+     settings=None,
+     mytrimworld=None,
+     vartree=None,
+     ldpath_mtimes=None,
+     scheduler=None,
+ ):
+     """
+     @param myroot: ignored, settings['EROOT'] is used instead
+     @param mytrimworld: ignored
+     """
+     myroot = None
+     if settings is None:
+         raise TypeError("settings argument is required")
+     mylink = dblink(
+         cat,
+         pkg,
+         settings=settings,
+         treetype="vartree",
+         vartree=vartree,
+         scheduler=scheduler,
+     )
+     vartree = mylink.vartree
+     parallel_install = "parallel-install" in settings.features
+     if not parallel_install:
+         mylink.lockdb()
+     try:
+         if mylink.exists():
+             retval = mylink.unmerge(ldpath_mtimes=ldpath_mtimes)
+             if retval == os.EX_OK:
+                 mylink.lockdb()
+                 try:
+                     mylink.delete()
+                 finally:
+                     mylink.unlockdb()
+             return retval
+         return os.EX_OK
+     finally:
+         if vartree.dbapi._linkmap is None:
+             # preserve-libs is entirely disabled
+             pass
+         else:
+             vartree.dbapi._linkmap._clear_cache()
+         if not parallel_install:
+             mylink.unlockdb()
+ 
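[Editor's note, not part of the diff: both module-level wrappers require a configured settings object and resolve everything against settings['EROOT']. A minimal, hypothetical call to unmerge(); the package name is invented:]

    import portage
    from portage.dbapi.vartree import unmerge

    settings = portage.settings
    vartree = portage.db[settings["EROOT"]]["vartree"]
    rc = unmerge("app-misc", "example-1.0", settings=settings, vartree=vartree)
    # rc is os.EX_OK on success, otherwise the dblink.unmerge() return code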
  
  def write_contents(contents, root, f):
- 	"""
- 	Write contents to any file like object. The file will be left open.
- 	"""
- 	root_len = len(root) - 1
- 	for filename in sorted(contents):
- 		entry_data = contents[filename]
- 		entry_type = entry_data[0]
- 		relative_filename = filename[root_len:]
- 		if entry_type == "obj":
- 			entry_type, mtime, md5sum = entry_data
- 			line = "%s %s %s %s\n" % \
- 				(entry_type, relative_filename, md5sum, mtime)
- 		elif entry_type == "sym":
- 			entry_type, mtime, link = entry_data
- 			line = "%s %s -> %s %s\n" % \
- 				(entry_type, relative_filename, link, mtime)
- 		else: # dir, dev, fif
- 			line = "%s %s\n" % (entry_type, relative_filename)
- 		f.write(line)
- 
- def tar_contents(contents, root, tar, protect=None, onProgress=None,
- 	xattrs=False):
- 	os = _os_merge
- 	encoding = _encodings['merge']
- 
- 	try:
- 		for x in contents:
- 			_unicode_encode(x,
- 				encoding=_encodings['merge'],
- 				errors='strict')
- 	except UnicodeEncodeError:
- 		# The package appears to have been merged with a
- 		# different value of sys.getfilesystemencoding(),
- 		# so fall back to utf_8 if appropriate.
- 		try:
- 			for x in contents:
- 				_unicode_encode(x,
- 					encoding=_encodings['fs'],
- 					errors='strict')
- 		except UnicodeEncodeError:
- 			pass
- 		else:
- 			os = portage.os
- 			encoding = _encodings['fs']
- 
- 	tar.encoding = encoding
- 	root = normalize_path(root).rstrip(os.path.sep) + os.path.sep
- 	id_strings = {}
- 	maxval = len(contents)
- 	curval = 0
- 	if onProgress:
- 		onProgress(maxval, 0)
- 	paths = list(contents)
- 	paths.sort()
- 	for path in paths:
- 		curval += 1
- 		try:
- 			lst = os.lstat(path)
- 		except OSError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 			del e
- 			if onProgress:
- 				onProgress(maxval, curval)
- 			continue
- 		contents_type = contents[path][0]
- 		if path.startswith(root):
- 			arcname = "./" + path[len(root):]
- 		else:
- 			raise ValueError("invalid root argument: '%s'" % root)
- 		live_path = path
- 		if 'dir' == contents_type and \
- 			not stat.S_ISDIR(lst.st_mode) and \
- 			os.path.isdir(live_path):
- 			# Even though this was a directory in the original ${D}, it exists
- 			# as a symlink to a directory in the live filesystem.  It must be
- 			# recorded as a real directory in the tar file to ensure that tar
- 			# can properly extract it's children.
- 			live_path = os.path.realpath(live_path)
- 			lst = os.lstat(live_path)
- 
- 		# Since os.lstat() inside TarFile.gettarinfo() can trigger a
- 		# UnicodeEncodeError when python has something other than utf_8
- 		# return from sys.getfilesystemencoding() (as in bug #388773),
- 		# we implement the needed functionality here, using the result
- 		# of our successful lstat call. An alternative to this would be
- 		# to pass in the fileobj argument to TarFile.gettarinfo(), so
- 		# that it could use fstat instead of lstat. However, that would
- 		# have the unwanted effect of dereferencing symlinks.
- 
- 		tarinfo = tar.tarinfo()
- 		tarinfo.name = arcname
- 		tarinfo.mode = lst.st_mode
- 		tarinfo.uid = lst.st_uid
- 		tarinfo.gid = lst.st_gid
- 		tarinfo.size = 0
- 		tarinfo.mtime = lst.st_mtime
- 		tarinfo.linkname = ""
- 		if stat.S_ISREG(lst.st_mode):
- 			inode = (lst.st_ino, lst.st_dev)
- 			if (lst.st_nlink > 1 and
- 				inode in tar.inodes and
- 				arcname != tar.inodes[inode]):
- 				tarinfo.type = tarfile.LNKTYPE
- 				tarinfo.linkname = tar.inodes[inode]
- 			else:
- 				tar.inodes[inode] = arcname
- 				tarinfo.type = tarfile.REGTYPE
- 				tarinfo.size = lst.st_size
- 		elif stat.S_ISDIR(lst.st_mode):
- 			tarinfo.type = tarfile.DIRTYPE
- 		elif stat.S_ISLNK(lst.st_mode):
- 			tarinfo.type = tarfile.SYMTYPE
- 			tarinfo.linkname = os.readlink(live_path)
- 		else:
- 			continue
- 		try:
- 			tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0]
- 		except KeyError:
- 			pass
- 		try:
- 			tarinfo.gname = grp.getgrgid(tarinfo.gid)[0]
- 		except KeyError:
- 			pass
- 
- 		if stat.S_ISREG(lst.st_mode):
- 			if protect and protect(path):
- 				# Create an empty file as a place holder in order to avoid
- 				# potential collision-protect issues.
- 				f = tempfile.TemporaryFile()
- 				f.write(_unicode_encode(
- 					"# empty file because --include-config=n " + \
- 					"when `quickpkg` was used\n"))
- 				f.flush()
- 				f.seek(0)
- 				tarinfo.size = os.fstat(f.fileno()).st_size
- 				tar.addfile(tarinfo, f)
- 				f.close()
- 			else:
- 				path_bytes = _unicode_encode(path,
- 					encoding=encoding,
- 					errors='strict')
- 
- 				if xattrs:
- 					# Compatible with GNU tar, which saves the xattrs
- 					# under the SCHILY.xattr namespace.
- 					for k in xattr.list(path_bytes):
- 						tarinfo.pax_headers['SCHILY.xattr.' +
- 							_unicode_decode(k)] = _unicode_decode(
- 							xattr.get(path_bytes, _unicode_encode(k)))
- 
- 				with open(path_bytes, 'rb') as f:
- 					tar.addfile(tarinfo, f)
- 
- 		else:
- 			tar.addfile(tarinfo)
- 		if onProgress:
- 			onProgress(maxval, curval)
+     """
+     Write contents to any file-like object. The file will be left open.
+     """
+     root_len = len(root) - 1
+     for filename in sorted(contents):
+         entry_data = contents[filename]
+         entry_type = entry_data[0]
+         relative_filename = filename[root_len:]
+         if entry_type == "obj":
+             entry_type, mtime, md5sum = entry_data
+             line = "%s %s %s %s\n" % (entry_type, relative_filename, md5sum, mtime)
+         elif entry_type == "sym":
+             entry_type, mtime, link = entry_data
+             line = "%s %s -> %s %s\n" % (entry_type, relative_filename, link, mtime)
+         else:  # dir, dev, fif
+             line = "%s %s\n" % (entry_type, relative_filename)
+         f.write(line)
+ 
+ 
+ def tar_contents(contents, root, tar, protect=None, onProgress=None, xattrs=False):
+     os = _os_merge
+     encoding = _encodings["merge"]
+ 
+     try:
+         for x in contents:
+             _unicode_encode(x, encoding=_encodings["merge"], errors="strict")
+     except UnicodeEncodeError:
+         # The package appears to have been merged with a
+         # different value of sys.getfilesystemencoding(),
+         # so fall back to utf_8 if appropriate.
+         try:
+             for x in contents:
+                 _unicode_encode(x, encoding=_encodings["fs"], errors="strict")
+         except UnicodeEncodeError:
+             pass
+         else:
+             os = portage.os
+             encoding = _encodings["fs"]
+ 
+     tar.encoding = encoding
+     root = normalize_path(root).rstrip(os.path.sep) + os.path.sep
+     id_strings = {}
+     maxval = len(contents)
+     curval = 0
+     if onProgress:
+         onProgress(maxval, 0)
+     paths = list(contents)
+     paths.sort()
+     for path in paths:
+         curval += 1
+         try:
+             lst = os.lstat(path)
+         except OSError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+             del e
+             if onProgress:
+                 onProgress(maxval, curval)
+             continue
+         contents_type = contents[path][0]
+         if path.startswith(root):
+             arcname = "./" + path[len(root) :]
+         else:
+             raise ValueError("invalid root argument: '%s'" % root)
+         live_path = path
+         if (
+             "dir" == contents_type
+             and not stat.S_ISDIR(lst.st_mode)
+             and os.path.isdir(live_path)
+         ):
+             # Even though this was a directory in the original ${D}, it exists
+             # as a symlink to a directory in the live filesystem.  It must be
+             # recorded as a real directory in the tar file to ensure that tar
+             # can properly extract its children.
+             live_path = os.path.realpath(live_path)
+             lst = os.lstat(live_path)
+ 
+         # Since os.lstat() inside TarFile.gettarinfo() can trigger a
+         # UnicodeEncodeError when python has something other than utf_8
+         # return from sys.getfilesystemencoding() (as in bug #388773),
+         # we implement the needed functionality here, using the result
+         # of our successful lstat call. An alternative to this would be
+         # to pass in the fileobj argument to TarFile.gettarinfo(), so
+         # that it could use fstat instead of lstat. However, that would
+         # have the unwanted effect of dereferencing symlinks.
+ 
+         tarinfo = tar.tarinfo()
+         tarinfo.name = arcname
+         tarinfo.mode = lst.st_mode
+         tarinfo.uid = lst.st_uid
+         tarinfo.gid = lst.st_gid
+         tarinfo.size = 0
+         tarinfo.mtime = lst.st_mtime
+         tarinfo.linkname = ""
+         if stat.S_ISREG(lst.st_mode):
+             inode = (lst.st_ino, lst.st_dev)
+             if (
+                 lst.st_nlink > 1
+                 and inode in tar.inodes
+                 and arcname != tar.inodes[inode]
+             ):
+                 tarinfo.type = tarfile.LNKTYPE
+                 tarinfo.linkname = tar.inodes[inode]
+             else:
+                 tar.inodes[inode] = arcname
+                 tarinfo.type = tarfile.REGTYPE
+                 tarinfo.size = lst.st_size
+         elif stat.S_ISDIR(lst.st_mode):
+             tarinfo.type = tarfile.DIRTYPE
+         elif stat.S_ISLNK(lst.st_mode):
+             tarinfo.type = tarfile.SYMTYPE
+             tarinfo.linkname = os.readlink(live_path)
+         else:
+             continue
+         try:
+             tarinfo.uname = pwd.getpwuid(tarinfo.uid)[0]
+         except KeyError:
+             pass
+         try:
+             tarinfo.gname = grp.getgrgid(tarinfo.gid)[0]
+         except KeyError:
+             pass
+ 
+         if stat.S_ISREG(lst.st_mode):
+             if protect and protect(path):
+                 # Create an empty file as a placeholder in order to avoid
+                 # potential collision-protect issues.
+                 f = tempfile.TemporaryFile()
+                 f.write(
+                     _unicode_encode(
+                         "# empty file because --include-config=n "
+                         + "when `quickpkg` was used\n"
+                     )
+                 )
+                 f.flush()
+                 f.seek(0)
+                 tarinfo.size = os.fstat(f.fileno()).st_size
+                 tar.addfile(tarinfo, f)
+                 f.close()
+             else:
+                 path_bytes = _unicode_encode(path, encoding=encoding, errors="strict")
+ 
+                 if xattrs:
+                     # Compatible with GNU tar, which saves the xattrs
+                     # under the SCHILY.xattr namespace.
+                     for k in xattr.list(path_bytes):
+                         tarinfo.pax_headers[
+                             "SCHILY.xattr." + _unicode_decode(k)
+                         ] = _unicode_decode(xattr.get(path_bytes, _unicode_encode(k)))
+ 
+                 with open(path_bytes, "rb") as f:
+                     tar.addfile(tarinfo, f)
+ 
+         else:
+             tar.addfile(tarinfo)
+         if onProgress:
+             onProgress(maxval, curval)
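[Editor's note, not part of the diff: tar_contents() writes an installed package's files into an already-open tar archive; this is how quickpkg builds binary packages. A rough usage sketch, assuming a standard tarfile.TarFile (whose inodes mapping the hard-link handling relies on), a made-up package, and a root matching the EROOT the package was merged under:]

    import tarfile
    import portage
    from portage.dbapi.vartree import dblink, tar_contents

    settings = portage.settings
    vartree = portage.db[settings["EROOT"]]["vartree"]
    mylink = dblink("app-misc", "example-1.0", settings=settings,
                    treetype="vartree", vartree=vartree)
    contents = mylink.getcontents()  # {abs path: ("obj"|"sym"|"dir"|..., ...)}
    with tarfile.open("/tmp/example-1.0.tar.bz2", "w:bz2") as tar:
        tar_contents(contents, settings["EROOT"], tar)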
diff --cc lib/portage/getbinpkg.py
index 3eb9479f2,6aa8f1de1..aaf0bcf81
--- a/lib/portage/getbinpkg.py
+++ b/lib/portage/getbinpkg.py
@@@ -19,7 -19,6 +19,8 @@@ import socke
  import time
  import tempfile
  import base64
++# PREFIX LOCAL
 +from portage.const import CACHE_PATH
  import warnings
  
  _all_errors = [NotImplementedError, ValueError, socket.error]
@@@ -348,561 -386,616 +388,618 @@@ def match_in_array(array, prefix="", su
  
  
  def dir_get_list(baseurl, conn=None):
- 	"""Takes a base url to connect to and read from.
- 	URI should be in the form <proto>://<site>[:port]<path>
- 	Connection is used for persistent connection instances."""
- 
- 	warnings.warn("portage.getbinpkg.dir_get_list() is deprecated",
- 		DeprecationWarning, stacklevel=2)
- 
- 	if not conn:
- 		keepconnection = 0
- 	else:
- 		keepconnection = 1
- 
- 	conn, protocol, address, params, headers = create_conn(baseurl, conn)
- 
- 	listing = None
- 	if protocol in ["http","https"]:
- 		if not address.endswith("/"):
- 			# http servers can return a 400 error here
- 			# if the address doesn't end with a slash.
- 			address += "/"
- 		page, rc, msg = make_http_request(conn, address, params, headers)
- 
- 		if page:
- 			parser = ParseLinks()
- 			parser.feed(_unicode_decode(page))
- 			del page
- 			listing = parser.get_anchors()
- 		else:
- 			import portage.exception
- 			raise portage.exception.PortageException(
- 				_("Unable to get listing: %s %s") % (rc,msg))
- 	elif protocol in ["ftp"]:
- 		if address[-1] == '/':
- 			olddir = conn.pwd()
- 			conn.cwd(address)
- 			listing = conn.nlst()
- 			conn.cwd(olddir)
- 			del olddir
- 		else:
- 			listing = conn.nlst(address)
- 	elif protocol == "sftp":
- 		listing = conn.listdir(address)
- 	else:
- 		raise TypeError(_("Unknown protocol. '%s'") % protocol)
- 
- 	if not keepconnection:
- 		conn.close()
- 
- 	return listing
+     """Takes a base url to connect to and read from.
+     URI should be in the form <proto>://<site>[:port]<path>
+     Connection is used for persistent connection instances."""
+ 
+     warnings.warn(
+         "portage.getbinpkg.dir_get_list() is deprecated",
+         DeprecationWarning,
+         stacklevel=2,
+     )
+ 
+     if not conn:
+         keepconnection = 0
+     else:
+         keepconnection = 1
+ 
+     conn, protocol, address, params, headers = create_conn(baseurl, conn)
+ 
+     listing = None
+     if protocol in ["http", "https"]:
+         if not address.endswith("/"):
+             # http servers can return a 400 error here
+             # if the address doesn't end with a slash.
+             address += "/"
+         page, rc, msg = make_http_request(conn, address, params, headers)
+ 
+         if page:
+             parser = ParseLinks()
+             parser.feed(_unicode_decode(page))
+             del page
+             listing = parser.get_anchors()
+         else:
+             import portage.exception
+ 
+             raise portage.exception.PortageException(
+                 _("Unable to get listing: %s %s") % (rc, msg)
+             )
+     elif protocol in ["ftp"]:
+         if address[-1] == "/":
+             olddir = conn.pwd()
+             conn.cwd(address)
+             listing = conn.nlst()
+             conn.cwd(olddir)
+             del olddir
+         else:
+             listing = conn.nlst(address)
+     elif protocol == "sftp":
+         listing = conn.listdir(address)
+     else:
+         raise TypeError(_("Unknown protocol. '%s'") % protocol)
+ 
+     if not keepconnection:
+         conn.close()
+ 
+     return listing
+ 
  
  def file_get_metadata(baseurl, conn=None, chunk_size=3000):
- 	"""Takes a base url to connect to and read from.
- 	URI should be in the form <proto>://<site>[:port]<path>
- 	Connection is used for persistent connection instances."""
- 
- 	warnings.warn("portage.getbinpkg.file_get_metadata() is deprecated",
- 		DeprecationWarning, stacklevel=2)
- 
- 	if not conn:
- 		keepconnection = 0
- 	else:
- 		keepconnection = 1
- 
- 	conn, protocol, address, params, headers = create_conn(baseurl, conn)
- 
- 	if protocol in ["http","https"]:
- 		headers["Range"] = "bytes=-%s" % str(chunk_size)
- 		data, _x, _x = make_http_request(conn, address, params, headers)
- 	elif protocol in ["ftp"]:
- 		data, _x, _x = make_ftp_request(conn, address, -chunk_size)
- 	elif protocol == "sftp":
- 		f = conn.open(address)
- 		try:
- 			f.seek(-chunk_size, 2)
- 			data = f.read()
- 		finally:
- 			f.close()
- 	else:
- 		raise TypeError(_("Unknown protocol. '%s'") % protocol)
- 
- 	if data:
- 		xpaksize = portage.xpak.decodeint(data[-8:-4])
- 		if (xpaksize + 8) > chunk_size:
- 			myid = file_get_metadata(baseurl, conn, xpaksize + 8)
- 			if not keepconnection:
- 				conn.close()
- 			return myid
- 		xpak_data = data[len(data) - (xpaksize + 8):-8]
- 		del data
- 
- 		myid = portage.xpak.xsplit_mem(xpak_data)
- 		if not myid:
- 			myid = None, None
- 		del xpak_data
- 	else:
- 		myid = None, None
- 
- 	if not keepconnection:
- 		conn.close()
- 
- 	return myid
- 
- 
- def file_get(baseurl=None, dest=None, conn=None, fcmd=None, filename=None,
- 	fcmd_vars=None):
- 	"""Takes a base url to connect to and read from.
- 	URI should be in the form <proto>://[user[:pass]@]<site>[:port]<path>"""
- 
- 	if not fcmd:
- 
- 		warnings.warn("Use of portage.getbinpkg.file_get() without the fcmd "
- 			"parameter is deprecated", DeprecationWarning, stacklevel=2)
- 
- 		return file_get_lib(baseurl, dest, conn)
- 
- 	variables = {}
- 
- 	if fcmd_vars is not None:
- 		variables.update(fcmd_vars)
- 
- 	if "DISTDIR" not in variables:
- 		if dest is None:
- 			raise portage.exception.MissingParameter(
- 				_("%s is missing required '%s' key") %
- 				("fcmd_vars", "DISTDIR"))
- 		variables["DISTDIR"] = dest
- 
- 	if "URI" not in variables:
- 		if baseurl is None:
- 			raise portage.exception.MissingParameter(
- 				_("%s is missing required '%s' key") %
- 				("fcmd_vars", "URI"))
- 		variables["URI"] = baseurl
- 
- 	if "FILE" not in variables:
- 		if filename is None:
- 			filename = os.path.basename(variables["URI"])
- 		variables["FILE"] = filename
- 
- 	from portage.util import varexpand
- 	from portage.process import spawn
- 	myfetch = portage.util.shlex_split(fcmd)
- 	myfetch = [varexpand(x, mydict=variables) for x in myfetch]
- 	fd_pipes = {
- 		0: portage._get_stdin().fileno(),
- 		1: sys.__stdout__.fileno(),
- 		2: sys.__stdout__.fileno()
- 	}
- 	sys.__stdout__.flush()
- 	sys.__stderr__.flush()
- 	retval = spawn(myfetch, env=os.environ.copy(), fd_pipes=fd_pipes)
- 	if retval != os.EX_OK:
- 		sys.stderr.write(_("Fetcher exited with a failure condition.\n"))
- 		return 0
- 	return 1
+     """Takes a base url to connect to and read from.
+     URI should be in the form <proto>://<site>[:port]<path>
+     Connection is used for persistent connection instances."""
+ 
+     warnings.warn(
+         "portage.getbinpkg.file_get_metadata() is deprecated",
+         DeprecationWarning,
+         stacklevel=2,
+     )
+ 
+     if not conn:
+         keepconnection = 0
+     else:
+         keepconnection = 1
+ 
+     conn, protocol, address, params, headers = create_conn(baseurl, conn)
+ 
+     if protocol in ["http", "https"]:
+         headers["Range"] = "bytes=-%s" % str(chunk_size)
+         data, _x, _x = make_http_request(conn, address, params, headers)
+     elif protocol in ["ftp"]:
+         data, _x, _x = make_ftp_request(conn, address, -chunk_size)
+     elif protocol == "sftp":
+         f = conn.open(address)
+         try:
+             f.seek(-chunk_size, 2)
+             data = f.read()
+         finally:
+             f.close()
+     else:
+         raise TypeError(_("Unknown protocol. '%s'") % protocol)
+ 
+     if data:
+         xpaksize = portage.xpak.decodeint(data[-8:-4])
+         if (xpaksize + 8) > chunk_size:
+             myid = file_get_metadata(baseurl, conn, xpaksize + 8)
+             if not keepconnection:
+                 conn.close()
+             return myid
+         xpak_data = data[len(data) - (xpaksize + 8) : -8]
+         del data
+ 
+         myid = portage.xpak.xsplit_mem(xpak_data)
+         if not myid:
+             myid = None, None
+         del xpak_data
+     else:
+         myid = None, None
+ 
+     if not keepconnection:
+         conn.close()
+ 
+     return myid
+ 
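[Editor's note, not part of the diff: the Range request above only fetches the tail of the binary package because a Portage tbz2 ends with the xpak metadata segment, its 4-byte length, and the literal b"STOP"; decodeint(data[-8:-4]) recovers that length. A small local-file sketch of the same trailer parsing; the file name is invented:]

    import portage.xpak

    with open("example-1.0.tbz2", "rb") as f:  # hypothetical binpkg
        f.seek(-8, 2)                          # last 8 bytes: <size> + b"STOP"
        trailer = f.read()
    assert trailer[-4:] == b"STOP"
    xpaksize = portage.xpak.decodeint(trailer[:4])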
+ 
+ def file_get(
+     baseurl=None, dest=None, conn=None, fcmd=None, filename=None, fcmd_vars=None
+ ):
+     """Takes a base url to connect to and read from.
+     URI should be in the form <proto>://[user[:pass]@]<site>[:port]<path>"""
+ 
+     if not fcmd:
+ 
+         warnings.warn(
+             "Use of portage.getbinpkg.file_get() without the fcmd "
+             "parameter is deprecated",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+ 
+         return file_get_lib(baseurl, dest, conn)
+ 
+     variables = {}
+ 
+     if fcmd_vars is not None:
+         variables.update(fcmd_vars)
+ 
+     if "DISTDIR" not in variables:
+         if dest is None:
+             raise portage.exception.MissingParameter(
+                 _("%s is missing required '%s' key") % ("fcmd_vars", "DISTDIR")
+             )
+         variables["DISTDIR"] = dest
+ 
+     if "URI" not in variables:
+         if baseurl is None:
+             raise portage.exception.MissingParameter(
+                 _("%s is missing required '%s' key") % ("fcmd_vars", "URI")
+             )
+         variables["URI"] = baseurl
+ 
+     if "FILE" not in variables:
+         if filename is None:
+             filename = os.path.basename(variables["URI"])
+         variables["FILE"] = filename
+ 
+     from portage.util import varexpand
+     from portage.process import spawn
+ 
+     myfetch = portage.util.shlex_split(fcmd)
+     myfetch = [varexpand(x, mydict=variables) for x in myfetch]
+     fd_pipes = {
+         0: portage._get_stdin().fileno(),
+         1: sys.__stdout__.fileno(),
+         2: sys.__stdout__.fileno(),
+     }
+     sys.__stdout__.flush()
+     sys.__stderr__.flush()
+     retval = spawn(myfetch, env=os.environ.copy(), fd_pipes=fd_pipes)
+     if retval != os.EX_OK:
+         sys.stderr.write(_("Fetcher exited with a failure condition.\n"))
+         return 0
+     return 1
+ 
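[Editor's note, not part of the diff: file_get() hands the download off to an external fetch command after expanding ${DISTDIR}, ${URI} and ${FILE} with varexpand. A hedged example using a wget command line in the style of the stock FETCHCOMMAND; paths and URL are invented:]

    from portage.getbinpkg import file_get

    rc = file_get(
        baseurl="https://example.org/packages/example-1.0.tbz2",
        dest="/var/cache/binpkgs",
        fcmd='wget -O "${DISTDIR}/${FILE}" "${URI}"',
    )
    # rc is 1 on success, 0 if the fetcher exited with a failure condition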
  
  def file_get_lib(baseurl, dest, conn=None):
- 	"""Takes a base url to connect to and read from.
- 	URI should be in the form <proto>://<site>[:port]<path>
- 	Connection is used for persistent connection instances."""
- 
- 	warnings.warn("portage.getbinpkg.file_get_lib() is deprecated",
- 		DeprecationWarning, stacklevel=2)
- 
- 	if not conn:
- 		keepconnection = 0
- 	else:
- 		keepconnection = 1
- 
- 	conn, protocol, address, params, headers = create_conn(baseurl, conn)
- 
- 	sys.stderr.write("Fetching '" + str(os.path.basename(address)) + "'\n")
- 	if protocol in ["http", "https"]:
- 		data, rc, _msg = make_http_request(conn, address, params, headers, dest=dest)
- 	elif protocol in ["ftp"]:
- 		data, rc, _msg = make_ftp_request(conn, address, dest=dest)
- 	elif protocol == "sftp":
- 		rc = 0
- 		try:
- 			f = conn.open(address)
- 		except SystemExit:
- 			raise
- 		except Exception:
- 			rc = 1
- 		else:
- 			try:
- 				if dest:
- 					bufsize = 8192
- 					while True:
- 						data = f.read(bufsize)
- 						if not data:
- 							break
- 						dest.write(data)
- 			finally:
- 				f.close()
- 	else:
- 		raise TypeError(_("Unknown protocol. '%s'") % protocol)
- 
- 	if not keepconnection:
- 		conn.close()
- 
- 	return rc
- 
- 
- def dir_get_metadata(baseurl, conn=None, chunk_size=3000, verbose=1, usingcache=1, makepickle=None):
- 
- 	warnings.warn("portage.getbinpkg.dir_get_metadata() is deprecated",
- 		DeprecationWarning, stacklevel=2)
- 
- 	if not conn:
- 		keepconnection = 0
- 	else:
- 		keepconnection = 1
- 
- 	cache_path = CACHE_PATH
- 	metadatafilename = os.path.join(cache_path, 'remote_metadata.pickle')
- 
- 	if makepickle is None:
- 		makepickle = CACHE_PATH+"/metadata.idx.most_recent"
- 
- 	try:
- 		conn = create_conn(baseurl, conn)[0]
- 	except _all_errors as e:
- 		# ftplib.FTP(host) can raise errors like this:
- 		#   socket.error: (111, 'Connection refused')
- 		sys.stderr.write("!!! %s\n" % (e,))
- 		return {}
- 
- 	out = sys.stdout
- 	try:
- 		metadatafile = open(_unicode_encode(metadatafilename,
- 			encoding=_encodings['fs'], errors='strict'), 'rb')
- 		mypickle = pickle.Unpickler(metadatafile)
- 		try:
- 			mypickle.find_global = None
- 		except AttributeError:
- 			# TODO: If py3k, override Unpickler.find_class().
- 			pass
- 		metadata = mypickle.load()
- 		out.write(_("Loaded metadata pickle.\n"))
- 		out.flush()
- 		metadatafile.close()
- 	except (SystemExit, KeyboardInterrupt):
- 		raise
- 	except Exception:
- 		metadata = {}
- 	if baseurl not in metadata:
- 		metadata[baseurl] = {}
- 	if "indexname" not in metadata[baseurl]:
- 		metadata[baseurl]["indexname"] = ""
- 	if "timestamp" not in metadata[baseurl]:
- 		metadata[baseurl]["timestamp"] = 0
- 	if "unmodified" not in metadata[baseurl]:
- 		metadata[baseurl]["unmodified"] = 0
- 	if "data" not in metadata[baseurl]:
- 		metadata[baseurl]["data"] = {}
- 
- 	if not os.access(cache_path, os.W_OK):
- 		sys.stderr.write(_("!!! Unable to write binary metadata to disk!\n"))
- 		sys.stderr.write(_("!!! Permission denied: '%s'\n") % cache_path)
- 		return metadata[baseurl]["data"]
- 
- 	import portage.exception
- 	try:
- 		filelist = dir_get_list(baseurl, conn)
- 	except portage.exception.PortageException as e:
- 		sys.stderr.write(_("!!! Error connecting to '%s'.\n") %
- 			_hide_url_passwd(baseurl))
- 		sys.stderr.write("!!! %s\n" % str(e))
- 		del e
- 		return metadata[baseurl]["data"]
- 	tbz2list = match_in_array(filelist, suffix=".tbz2")
- 	metalist = match_in_array(filelist, prefix="metadata.idx")
- 	del filelist
- 
- 	# Determine if our metadata file is current.
- 	metalist.sort()
- 	metalist.reverse() # makes the order new-to-old.
- 	for mfile in metalist:
- 		if usingcache and \
- 		   ((metadata[baseurl]["indexname"] != mfile) or \
- 			  (metadata[baseurl]["timestamp"] < int(time.time() - (60 * 60 * 24)))):
- 			# Try to download new cache until we succeed on one.
- 			data = ""
- 			for trynum in [1, 2, 3]:
- 				mytempfile = tempfile.TemporaryFile()
- 				try:
- 					file_get(baseurl + "/" + mfile, mytempfile, conn)
- 					if mytempfile.tell() > len(data):
- 						mytempfile.seek(0)
- 						data = mytempfile.read()
- 				except ValueError as e:
- 					sys.stderr.write("--- %s\n" % str(e))
- 					if trynum < 3:
- 						sys.stderr.write(_("Retrying...\n"))
- 					sys.stderr.flush()
- 					mytempfile.close()
- 					continue
- 				if match_in_array([mfile], suffix=".gz"):
- 					out.write("gzip'd\n")
- 					out.flush()
- 					try:
- 						import gzip
- 						mytempfile.seek(0)
- 						gzindex = gzip.GzipFile(mfile[:-3], 'rb', 9, mytempfile)
- 						data = gzindex.read()
- 					except SystemExit as e:
- 						raise
- 					except Exception as e:
- 						mytempfile.close()
- 						sys.stderr.write(_("!!! Failed to use gzip: ") + str(e) + "\n")
- 						sys.stderr.flush()
- 					mytempfile.close()
- 				try:
- 					metadata[baseurl]["data"] = pickle.loads(data)
- 					del data
- 					metadata[baseurl]["indexname"] = mfile
- 					metadata[baseurl]["timestamp"] = int(time.time())
- 					metadata[baseurl]["modified"]  = 0 # It's not, right after download.
- 					out.write(_("Pickle loaded.\n"))
- 					out.flush()
- 					break
- 				except SystemExit as e:
- 					raise
- 				except Exception as e:
- 					sys.stderr.write(_("!!! Failed to read data from index: ") + str(mfile) + "\n")
- 					sys.stderr.write("!!! %s" % str(e))
- 					sys.stderr.flush()
- 			try:
- 				metadatafile = open(_unicode_encode(metadatafilename,
- 					encoding=_encodings['fs'], errors='strict'), 'wb')
- 				pickle.dump(metadata, metadatafile, protocol=2)
- 				metadatafile.close()
- 			except SystemExit as e:
- 				raise
- 			except Exception as e:
- 				sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
- 				sys.stderr.write("!!! %s\n" % str(e))
- 				sys.stderr.flush()
- 			break
- 	# We may have metadata... now we run through the tbz2 list and check.
- 
- 	class CacheStats:
- 		from time import time
- 		def __init__(self, out):
- 			self.misses = 0
- 			self.hits = 0
- 			self.last_update = 0
- 			self.out = out
- 			self.min_display_latency = 0.2
- 		def update(self):
- 			cur_time = self.time()
- 			if cur_time - self.last_update >= self.min_display_latency:
- 				self.last_update = cur_time
- 				self.display()
- 		def display(self):
- 			self.out.write("\r"+colorize("WARN",
- 				_("cache miss: '") + str(self.misses) + "'") + \
- 				" --- " + colorize("GOOD", _("cache hit: '") + str(self.hits) + "'"))
- 			self.out.flush()
- 
- 	cache_stats = CacheStats(out)
- 	have_tty = os.environ.get('TERM') != 'dumb' and out.isatty()
- 	if have_tty:
- 		cache_stats.display()
- 	binpkg_filenames = set()
- 	for x in tbz2list:
- 		x = os.path.basename(x)
- 		binpkg_filenames.add(x)
- 		if x not in metadata[baseurl]["data"]:
- 			cache_stats.misses += 1
- 			if have_tty:
- 				cache_stats.update()
- 			metadata[baseurl]["modified"] = 1
- 			myid = None
- 			for _x in range(3):
- 				try:
- 					myid = file_get_metadata(
- 						"/".join((baseurl.rstrip("/"), x.lstrip("/"))),
- 						conn, chunk_size)
- 					break
- 				except http_client_BadStatusLine:
- 					# Sometimes this error is thrown from conn.getresponse() in
- 					# make_http_request().  The docstring for this error in
- 					# httplib.py says "Presumably, the server closed the
- 					# connection before sending a valid response".
- 					conn = create_conn(baseurl)[0]
- 				except http_client_ResponseNotReady:
- 					# With some http servers this error is known to be thrown
- 					# from conn.getresponse() in make_http_request() when the
- 					# remote file does not have appropriate read permissions.
- 					# Maybe it's possible to recover from this exception in
- 					# cases though, so retry.
- 					conn = create_conn(baseurl)[0]
- 
- 			if myid and myid[0]:
- 				metadata[baseurl]["data"][x] = make_metadata_dict(myid)
- 			elif verbose:
- 				sys.stderr.write(colorize("BAD",
- 					_("!!! Failed to retrieve metadata on: ")) + str(x) + "\n")
- 				sys.stderr.flush()
- 		else:
- 			cache_stats.hits += 1
- 			if have_tty:
- 				cache_stats.update()
- 	cache_stats.display()
- 	# Cleanse stale cache for files that don't exist on the server anymore.
- 	stale_cache = set(metadata[baseurl]["data"]).difference(binpkg_filenames)
- 	if stale_cache:
- 		for x in stale_cache:
- 			del metadata[baseurl]["data"][x]
- 		metadata[baseurl]["modified"] = 1
- 	del stale_cache
- 	del binpkg_filenames
- 	out.write("\n")
- 	out.flush()
- 
- 	try:
- 		if "modified" in metadata[baseurl] and metadata[baseurl]["modified"]:
- 			metadata[baseurl]["timestamp"] = int(time.time())
- 			metadatafile = open(_unicode_encode(metadatafilename,
- 				encoding=_encodings['fs'], errors='strict'), 'wb')
- 			pickle.dump(metadata, metadatafile, protocol=2)
- 			metadatafile.close()
- 		if makepickle:
- 			metadatafile = open(_unicode_encode(makepickle,
- 				encoding=_encodings['fs'], errors='strict'), 'wb')
- 			pickle.dump(metadata[baseurl]["data"], metadatafile, protocol=2)
- 			metadatafile.close()
- 	except SystemExit as e:
- 		raise
- 	except Exception as e:
- 		sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
- 		sys.stderr.write("!!! "+str(e)+"\n")
- 		sys.stderr.flush()
- 
- 	if not keepconnection:
- 		conn.close()
- 
- 	return metadata[baseurl]["data"]
+     """Takes a base url to connect to and read from.
+     URI should be in the form <proto>://<site>[:port]<path>
+     Connection is used for persistent connection instances."""
+ 
+     warnings.warn(
+         "portage.getbinpkg.file_get_lib() is deprecated",
+         DeprecationWarning,
+         stacklevel=2,
+     )
+ 
+     if not conn:
+         keepconnection = 0
+     else:
+         keepconnection = 1
+ 
+     conn, protocol, address, params, headers = create_conn(baseurl, conn)
+ 
+     sys.stderr.write("Fetching '" + str(os.path.basename(address)) + "'\n")
+     if protocol in ["http", "https"]:
+         data, rc, _msg = make_http_request(conn, address, params, headers, dest=dest)
+     elif protocol in ["ftp"]:
+         data, rc, _msg = make_ftp_request(conn, address, dest=dest)
+     elif protocol == "sftp":
+         rc = 0
+         try:
+             f = conn.open(address)
+         except SystemExit:
+             raise
+         except Exception:
+             rc = 1
+         else:
+             try:
+                 if dest:
+                     bufsize = 8192
+                     while True:
+                         data = f.read(bufsize)
+                         if not data:
+                             break
+                         dest.write(data)
+             finally:
+                 f.close()
+     else:
+         raise TypeError(_("Unknown protocol. '%s'") % protocol)
+ 
+     if not keepconnection:
+         conn.close()
+ 
+     return rc
+ 
+ 
+ def dir_get_metadata(
+     baseurl, conn=None, chunk_size=3000, verbose=1, usingcache=1, makepickle=None
+ ):
+ 
+     warnings.warn(
+         "portage.getbinpkg.dir_get_metadata() is deprecated",
+         DeprecationWarning,
+         stacklevel=2,
+     )
+ 
+     if not conn:
+         keepconnection = 0
+     else:
+         keepconnection = 1
+ 
 -    cache_path = "/var/cache/edb"
++    # PREFIX LOCAL
++    cache_path = CACHE_PATH
+     metadatafilename = os.path.join(cache_path, "remote_metadata.pickle")
+ 
+     if makepickle is None:
 -        makepickle = "/var/cache/edb/metadata.idx.most_recent"
++        # PREFIX LOCAL
++        makepickle = CACHE_PATH + "/metadata.idx.most_recent"
+ 
+     try:
+         conn = create_conn(baseurl, conn)[0]
+     except _all_errors as e:
+         # ftplib.FTP(host) can raise errors like this:
+         #   socket.error: (111, 'Connection refused')
+         sys.stderr.write("!!! %s\n" % (e,))
+         return {}
+ 
+     out = sys.stdout
+     try:
+         metadatafile = open(
+             _unicode_encode(
+                 metadatafilename, encoding=_encodings["fs"], errors="strict"
+             ),
+             "rb",
+         )
+         mypickle = pickle.Unpickler(metadatafile)
+         try:
+             mypickle.find_global = None
+         except AttributeError:
+             # TODO: If py3k, override Unpickler.find_class().
+             pass
+         metadata = mypickle.load()
+         out.write(_("Loaded metadata pickle.\n"))
+         out.flush()
+         metadatafile.close()
+     except (SystemExit, KeyboardInterrupt):
+         raise
+     except Exception:
+         metadata = {}
+     if baseurl not in metadata:
+         metadata[baseurl] = {}
+     if "indexname" not in metadata[baseurl]:
+         metadata[baseurl]["indexname"] = ""
+     if "timestamp" not in metadata[baseurl]:
+         metadata[baseurl]["timestamp"] = 0
+     if "unmodified" not in metadata[baseurl]:
+         metadata[baseurl]["unmodified"] = 0
+     if "data" not in metadata[baseurl]:
+         metadata[baseurl]["data"] = {}
+ 
+     if not os.access(cache_path, os.W_OK):
+         sys.stderr.write(_("!!! Unable to write binary metadata to disk!\n"))
+         sys.stderr.write(_("!!! Permission denied: '%s'\n") % cache_path)
+         return metadata[baseurl]["data"]
+ 
+     import portage.exception
+ 
+     try:
+         filelist = dir_get_list(baseurl, conn)
+     except portage.exception.PortageException as e:
+         sys.stderr.write(
+             _("!!! Error connecting to '%s'.\n") % _hide_url_passwd(baseurl)
+         )
+         sys.stderr.write("!!! %s\n" % str(e))
+         del e
+         return metadata[baseurl]["data"]
+     tbz2list = match_in_array(filelist, suffix=".tbz2")
+     metalist = match_in_array(filelist, prefix="metadata.idx")
+     del filelist
+ 
+     # Determine if our metadata file is current.
+     metalist.sort()
+     metalist.reverse()  # makes the order new-to-old.
+     for mfile in metalist:
+         if usingcache and (
+             (metadata[baseurl]["indexname"] != mfile)
+             or (metadata[baseurl]["timestamp"] < int(time.time() - (60 * 60 * 24)))
+         ):
+             # Try to download new cache until we succeed on one.
+             data = ""
+             for trynum in [1, 2, 3]:
+                 mytempfile = tempfile.TemporaryFile()
+                 try:
+                     file_get(baseurl + "/" + mfile, mytempfile, conn)
+                     if mytempfile.tell() > len(data):
+                         mytempfile.seek(0)
+                         data = mytempfile.read()
+                 except ValueError as e:
+                     sys.stderr.write("--- %s\n" % str(e))
+                     if trynum < 3:
+                         sys.stderr.write(_("Retrying...\n"))
+                     sys.stderr.flush()
+                     mytempfile.close()
+                     continue
+                 if match_in_array([mfile], suffix=".gz"):
+                     out.write("gzip'd\n")
+                     out.flush()
+                     try:
+                         import gzip
+ 
+                         mytempfile.seek(0)
+                         gzindex = gzip.GzipFile(mfile[:-3], "rb", 9, mytempfile)
+                         data = gzindex.read()
+                     except SystemExit as e:
+                         raise
+                     except Exception as e:
+                         mytempfile.close()
+                         sys.stderr.write(_("!!! Failed to use gzip: ") + str(e) + "\n")
+                         sys.stderr.flush()
+                     mytempfile.close()
+                 try:
+                     metadata[baseurl]["data"] = pickle.loads(data)
+                     del data
+                     metadata[baseurl]["indexname"] = mfile
+                     metadata[baseurl]["timestamp"] = int(time.time())
+                     metadata[baseurl]["modified"] = 0  # It's not, right after download.
+                     out.write(_("Pickle loaded.\n"))
+                     out.flush()
+                     break
+                 except SystemExit as e:
+                     raise
+                 except Exception as e:
+                     sys.stderr.write(
+                         _("!!! Failed to read data from index: ") + str(mfile) + "\n"
+                     )
+                     sys.stderr.write("!!! %s" % str(e))
+                     sys.stderr.flush()
+             try:
+                 metadatafile = open(
+                     _unicode_encode(
+                         metadatafilename, encoding=_encodings["fs"], errors="strict"
+                     ),
+                     "wb",
+                 )
+                 pickle.dump(metadata, metadatafile, protocol=2)
+                 metadatafile.close()
+             except SystemExit as e:
+                 raise
+             except Exception as e:
+                 sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
+                 sys.stderr.write("!!! %s\n" % str(e))
+                 sys.stderr.flush()
+             break
+     # We may have metadata... now we run through the tbz2 list and check.
+ 
+     class CacheStats:
+         from time import time
+ 
+         def __init__(self, out):
+             self.misses = 0
+             self.hits = 0
+             self.last_update = 0
+             self.out = out
+             self.min_display_latency = 0.2
+ 
+         def update(self):
+             cur_time = self.time()
+             if cur_time - self.last_update >= self.min_display_latency:
+                 self.last_update = cur_time
+                 self.display()
+ 
+         def display(self):
+             self.out.write(
+                 "\r"
+                 + colorize("WARN", _("cache miss: '") + str(self.misses) + "'")
+                 + " --- "
+                 + colorize("GOOD", _("cache hit: '") + str(self.hits) + "'")
+             )
+             self.out.flush()
+ 
+     cache_stats = CacheStats(out)
+     have_tty = os.environ.get("TERM") != "dumb" and out.isatty()
+     if have_tty:
+         cache_stats.display()
+     binpkg_filenames = set()
+     for x in tbz2list:
+         x = os.path.basename(x)
+         binpkg_filenames.add(x)
+         if x not in metadata[baseurl]["data"]:
+             cache_stats.misses += 1
+             if have_tty:
+                 cache_stats.update()
+             metadata[baseurl]["modified"] = 1
+             myid = None
+             for _x in range(3):
+                 try:
+                     myid = file_get_metadata(
+                         "/".join((baseurl.rstrip("/"), x.lstrip("/"))), conn, chunk_size
+                     )
+                     break
+                 except http_client_BadStatusLine:
+                     # Sometimes this error is thrown from conn.getresponse() in
+                     # make_http_request().  The docstring for this error in
+                     # httplib.py says "Presumably, the server closed the
+                     # connection before sending a valid response".
+                     conn = create_conn(baseurl)[0]
+                 except http_client_ResponseNotReady:
+                     # With some http servers this error is known to be thrown
+                     # from conn.getresponse() in make_http_request() when the
+                     # remote file does not have appropriate read permissions.
+                     # Maybe it's possible to recover from this exception in
+                     # cases though, so retry.
+                     conn = create_conn(baseurl)[0]
+ 
+             if myid and myid[0]:
+                 metadata[baseurl]["data"][x] = make_metadata_dict(myid)
+             elif verbose:
+                 sys.stderr.write(
+                     colorize("BAD", _("!!! Failed to retrieve metadata on: "))
+                     + str(x)
+                     + "\n"
+                 )
+                 sys.stderr.flush()
+         else:
+             cache_stats.hits += 1
+             if have_tty:
+                 cache_stats.update()
+     cache_stats.display()
+     # Cleanse stale cache for files that don't exist on the server anymore.
+     stale_cache = set(metadata[baseurl]["data"]).difference(binpkg_filenames)
+     if stale_cache:
+         for x in stale_cache:
+             del metadata[baseurl]["data"][x]
+         metadata[baseurl]["modified"] = 1
+     del stale_cache
+     del binpkg_filenames
+     out.write("\n")
+     out.flush()
+ 
+     try:
+         if "modified" in metadata[baseurl] and metadata[baseurl]["modified"]:
+             metadata[baseurl]["timestamp"] = int(time.time())
+             metadatafile = open(
+                 _unicode_encode(
+                     metadatafilename, encoding=_encodings["fs"], errors="strict"
+                 ),
+                 "wb",
+             )
+             pickle.dump(metadata, metadatafile, protocol=2)
+             metadatafile.close()
+         if makepickle:
+             metadatafile = open(
+                 _unicode_encode(makepickle, encoding=_encodings["fs"], errors="strict"),
+                 "wb",
+             )
+             pickle.dump(metadata[baseurl]["data"], metadatafile, protocol=2)
+             metadatafile.close()
+     except SystemExit as e:
+         raise
+     except Exception as e:
+         sys.stderr.write(_("!!! Failed to write binary metadata to disk!\n"))
+         sys.stderr.write("!!! " + str(e) + "\n")
+         sys.stderr.flush()
+ 
+     if not keepconnection:
+         conn.close()
+ 
+     return metadata[baseurl]["data"]
+ 
  
  def _cmp_cpv(d1, d2):
- 	cpv1 = d1["CPV"]
- 	cpv2 = d2["CPV"]
- 	if cpv1 > cpv2:
- 		return 1
- 	if cpv1 == cpv2:
- 		return 0
- 	return -1
+     cpv1 = d1["CPV"]
+     cpv2 = d2["CPV"]
+     if cpv1 > cpv2:
+         return 1
+     if cpv1 == cpv2:
+         return 0
+     return -1
  
- class PackageIndex:
  
- 	def __init__(self,
- 		allowed_pkg_keys=None,
- 		default_header_data=None,
- 		default_pkg_data=None,
- 		inherited_keys=None,
- 		translated_keys=None):
- 
- 		self._pkg_slot_dict = None
- 		if allowed_pkg_keys is not None:
- 			self._pkg_slot_dict = slot_dict_class(allowed_pkg_keys)
- 
- 		self._default_header_data = default_header_data
- 		self._default_pkg_data = default_pkg_data
- 		self._inherited_keys = inherited_keys
- 		self._write_translation_map = {}
- 		self._read_translation_map = {}
- 		if translated_keys:
- 			self._write_translation_map.update(translated_keys)
- 			self._read_translation_map.update(((y, x) for (x, y) in translated_keys))
- 		self.header = {}
- 		if self._default_header_data:
- 			self.header.update(self._default_header_data)
- 		self.packages = []
- 		self.modified = True
- 
- 	def _readpkgindex(self, pkgfile, pkg_entry=True):
- 
- 		allowed_keys = None
- 		if self._pkg_slot_dict is None or not pkg_entry:
- 			d = {}
- 		else:
- 			d = self._pkg_slot_dict()
- 			allowed_keys = d.allowed_keys
- 
- 		for line in pkgfile:
- 			line = line.rstrip("\n")
- 			if not line:
- 				break
- 			line = line.split(":", 1)
- 			if not len(line) == 2:
- 				continue
- 			k, v = line
- 			if v:
- 				v = v[1:]
- 			k = self._read_translation_map.get(k, k)
- 			if allowed_keys is not None and \
- 				k not in allowed_keys:
- 				continue
- 			d[k] = v
- 		return d
- 
- 	def _writepkgindex(self, pkgfile, items):
- 		for k, v in items:
- 			pkgfile.write("%s: %s\n" % \
- 				(self._write_translation_map.get(k, k), v))
- 		pkgfile.write("\n")
- 
- 	def read(self, pkgfile):
- 		self.readHeader(pkgfile)
- 		self.readBody(pkgfile)
- 
- 	def readHeader(self, pkgfile):
- 		self.header.update(self._readpkgindex(pkgfile, pkg_entry=False))
- 
- 	def readBody(self, pkgfile):
- 		while True:
- 			d = self._readpkgindex(pkgfile)
- 			if not d:
- 				break
- 			mycpv = d.get("CPV")
- 			if not mycpv:
- 				continue
- 			if self._default_pkg_data:
- 				for k, v in self._default_pkg_data.items():
- 					d.setdefault(k, v)
- 			if self._inherited_keys:
- 				for k in self._inherited_keys:
- 					v = self.header.get(k)
- 					if v is not None:
- 						d.setdefault(k, v)
- 			self.packages.append(d)
- 
- 	def write(self, pkgfile):
- 		if self.modified:
- 			self.header["TIMESTAMP"] = str(int(time.time()))
- 			self.header["PACKAGES"] = str(len(self.packages))
- 		keys = list(self.header)
- 		keys.sort()
- 		self._writepkgindex(pkgfile, [(k, self.header[k]) \
- 			for k in keys if self.header[k]])
- 		for metadata in sorted(self.packages,
- 			key=portage.util.cmp_sort_key(_cmp_cpv)):
- 			metadata = metadata.copy()
- 			if self._inherited_keys:
- 				for k in self._inherited_keys:
- 					v = self.header.get(k)
- 					if v is not None and v == metadata.get(k):
- 						del metadata[k]
- 			if self._default_pkg_data:
- 				for k, v in self._default_pkg_data.items():
- 					if metadata.get(k) == v:
- 						metadata.pop(k, None)
- 			keys = list(metadata)
- 			keys.sort()
- 			self._writepkgindex(pkgfile,
- 				[(k, metadata[k]) for k in keys if metadata[k]])
+ class PackageIndex:
+     def __init__(
+         self,
+         allowed_pkg_keys=None,
+         default_header_data=None,
+         default_pkg_data=None,
+         inherited_keys=None,
+         translated_keys=None,
+     ):
+ 
+         self._pkg_slot_dict = None
+         if allowed_pkg_keys is not None:
+             self._pkg_slot_dict = slot_dict_class(allowed_pkg_keys)
+ 
+         self._default_header_data = default_header_data
+         self._default_pkg_data = default_pkg_data
+         self._inherited_keys = inherited_keys
+         self._write_translation_map = {}
+         self._read_translation_map = {}
+         if translated_keys:
+             self._write_translation_map.update(translated_keys)
+             self._read_translation_map.update(((y, x) for (x, y) in translated_keys))
+         self.header = {}
+         if self._default_header_data:
+             self.header.update(self._default_header_data)
+         self.packages = []
+         self.modified = True
+ 
+     def _readpkgindex(self, pkgfile, pkg_entry=True):
+ 
+         allowed_keys = None
+         if self._pkg_slot_dict is None or not pkg_entry:
+             d = {}
+         else:
+             d = self._pkg_slot_dict()
+             allowed_keys = d.allowed_keys
+ 
+         for line in pkgfile:
+             line = line.rstrip("\n")
+             if not line:
+                 break
+             line = line.split(":", 1)
+             if not len(line) == 2:
+                 continue
+             k, v = line
+             if v:
+                 v = v[1:]
+             k = self._read_translation_map.get(k, k)
+             if allowed_keys is not None and k not in allowed_keys:
+                 continue
+             d[k] = v
+         return d
+ 
+     def _writepkgindex(self, pkgfile, items):
+         for k, v in items:
+             pkgfile.write("%s: %s\n" % (self._write_translation_map.get(k, k), v))
+         pkgfile.write("\n")
+ 
+     def read(self, pkgfile):
+         self.readHeader(pkgfile)
+         self.readBody(pkgfile)
+ 
+     def readHeader(self, pkgfile):
+         self.header.update(self._readpkgindex(pkgfile, pkg_entry=False))
+ 
+     def readBody(self, pkgfile):
+         while True:
+             d = self._readpkgindex(pkgfile)
+             if not d:
+                 break
+             mycpv = d.get("CPV")
+             if not mycpv:
+                 continue
+             if self._default_pkg_data:
+                 for k, v in self._default_pkg_data.items():
+                     d.setdefault(k, v)
+             if self._inherited_keys:
+                 for k in self._inherited_keys:
+                     v = self.header.get(k)
+                     if v is not None:
+                         d.setdefault(k, v)
+             self.packages.append(d)
+ 
+     def write(self, pkgfile):
+         if self.modified:
+             self.header["TIMESTAMP"] = str(int(time.time()))
+             self.header["PACKAGES"] = str(len(self.packages))
+         keys = list(self.header)
+         keys.sort()
+         self._writepkgindex(
+             pkgfile, [(k, self.header[k]) for k in keys if self.header[k]]
+         )
+         for metadata in sorted(self.packages, key=portage.util.cmp_sort_key(_cmp_cpv)):
+             metadata = metadata.copy()
+             if self._inherited_keys:
+                 for k in self._inherited_keys:
+                     v = self.header.get(k)
+                     if v is not None and v == metadata.get(k):
+                         del metadata[k]
+             if self._default_pkg_data:
+                 for k, v in self._default_pkg_data.items():
+                     if metadata.get(k) == v:
+                         metadata.pop(k, None)
+             keys = list(metadata)
+             keys.sort()
+             self._writepkgindex(
+                 pkgfile, [(k, metadata[k]) for k in keys if metadata[k]]
+             )
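The PackageIndex class above models the plain-text binary package index: readHeader() and readBody() parse blank-line-separated "KEY: value" stanzas, and write() emits them back out with packages sorted by CPV. A minimal round-trip sketch, assuming portage's Python modules are importable; the header and package values below are made up purely for illustration:

    import io

    from portage.getbinpkg import PackageIndex

    index = PackageIndex()
    index.header["ARCH"] = "amd64"  # hypothetical header entry
    index.packages.append({"CPV": "app-misc/hello-1.0", "SLOT": "0"})

    buf = io.StringIO()
    index.write(buf)  # header stanza, blank line, then one stanza per package

    buf.seek(0)
    parsed = PackageIndex()
    parsed.read(buf)  # readHeader() followed by readBody()
    assert parsed.packages[0]["CPV"] == "app-misc/hello-1.0"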
diff --cc lib/portage/package/ebuild/_config/special_env_vars.py
index 682bea62b,06ae3aa39..9331bf451
--- a/lib/portage/package/ebuild/_config/special_env_vars.py
+++ b/lib/portage/package/ebuild/_config/special_env_vars.py
@@@ -40,49 -83,109 +83,114 @@@ environ_whitelist = [
  # environment in order to prevent sandbox from sourcing /etc/profile
  # in its bashrc (causing major leakage).
  environ_whitelist += [
- 	"ACCEPT_LICENSE", "BASH_ENV", "BASH_FUNC____in_portage_iuse%%",
- 	"BROOT", "BUILD_PREFIX", "COLUMNS", "D",
- 	"DISTDIR", "DOC_SYMLINKS_DIR", "EAPI", "EBUILD",
- 	"EBUILD_FORCE_TEST",
- 	"EBUILD_PHASE", "EBUILD_PHASE_FUNC", "ECLASSDIR", "ECLASS_DEPTH", "ED",
- 	"EMERGE_FROM", "ENV_UNSET", "EPREFIX", "EROOT", "ESYSROOT",
- 	"FEATURES", "FILESDIR", "HOME", "MERGE_TYPE", "NOCOLOR", "PATH",
- 	"PKGDIR",
- 	"PKGUSE", "PKG_LOGDIR", "PKG_TMPDIR",
- 	"PORTAGE_ACTUAL_DISTDIR", "PORTAGE_ARCHLIST", "PORTAGE_BASHRC_FILES",
- 	"PORTAGE_BASHRC", "PM_EBUILD_HOOK_DIR",
- 	"PORTAGE_BINPKG_FILE", "PORTAGE_BINPKG_TAR_OPTS",
- 	"PORTAGE_BINPKG_TMPFILE",
- 	"PORTAGE_BIN_PATH",
- 	"PORTAGE_BUILDDIR", "PORTAGE_BUILD_GROUP", "PORTAGE_BUILD_USER",
- 	"PORTAGE_BUNZIP2_COMMAND", "PORTAGE_BZIP2_COMMAND",
- 	"PORTAGE_COLORMAP", "PORTAGE_COMPRESS", "PORTAGE_COMPRESSION_COMMAND",
- 	"PORTAGE_COMPRESS_EXCLUDE_SUFFIXES",
- 	"PORTAGE_CONFIGROOT", "PORTAGE_DEBUG", "PORTAGE_DEPCACHEDIR",
- 	"PORTAGE_DOHTML_UNWARNED_SKIPPED_EXTENSIONS",
- 	"PORTAGE_DOHTML_UNWARNED_SKIPPED_FILES",
- 	"PORTAGE_DOHTML_WARN_ON_SKIPPED_FILES",
- 	"PORTAGE_EBUILD_EXIT_FILE", "PORTAGE_FEATURES",
- 	"PORTAGE_GID", "PORTAGE_GRPNAME",
- 	"PORTAGE_INTERNAL_CALLER",
- 	"PORTAGE_INST_GID", "PORTAGE_INST_UID",
- 	"PORTAGE_IPC_DAEMON", "PORTAGE_IUSE", "PORTAGE_ECLASS_LOCATIONS",
- 	"PORTAGE_LOG_FILE", "PORTAGE_OVERRIDE_EPREFIX", "PORTAGE_PIPE_FD",
- 	"PORTAGE_PROPERTIES",
- 	"PORTAGE_PYM_PATH", "PORTAGE_PYTHON",
- 	"PORTAGE_PYTHONPATH", "PORTAGE_QUIET",
- 	"PORTAGE_REPO_NAME", "PORTAGE_REPOSITORIES", "PORTAGE_RESTRICT",
- 	"PORTAGE_SIGPIPE_STATUS", "PORTAGE_SOCKS5_PROXY",
- 	"PORTAGE_TMPDIR", "PORTAGE_UPDATE_ENV", "PORTAGE_USERNAME",
- 	"PORTAGE_VERBOSE", "PORTAGE_WORKDIR_MODE", "PORTAGE_XATTR_EXCLUDE",
- 	"PORTDIR", "PORTDIR_OVERLAY", "PREROOTPATH", "PYTHONDONTWRITEBYTECODE",
- 	"REPLACING_VERSIONS", "REPLACED_BY_VERSION",
- 	"ROOT", "ROOTPATH", "SANDBOX_LOG", "SYSROOT", "T", "TMP", "TMPDIR",
- 	"USE_EXPAND", "USE_ORDER", "WORKDIR",
- 	"XARGS", "__PORTAGE_TEST_HARDLINK_LOCKS",
- 	# PREFIX LOCAL
- 	"EXTRA_PATH", "PORTAGE_GROUP", "PORTAGE_USER",
- 	# END PREFIX LOCAL
+     "ACCEPT_LICENSE",
+     "BASH_ENV",
+     "BASH_FUNC____in_portage_iuse%%",
+     "BROOT",
+     "BUILD_PREFIX",
+     "COLUMNS",
+     "D",
+     "DISTDIR",
+     "DOC_SYMLINKS_DIR",
+     "EAPI",
+     "EBUILD",
+     "EBUILD_FORCE_TEST",
+     "EBUILD_PHASE",
+     "EBUILD_PHASE_FUNC",
+     "ECLASSDIR",
+     "ECLASS_DEPTH",
+     "ED",
+     "EMERGE_FROM",
+     "ENV_UNSET",
+     "EPREFIX",
+     "EROOT",
+     "ESYSROOT",
+     "FEATURES",
+     "FILESDIR",
+     "HOME",
+     "MERGE_TYPE",
+     "NOCOLOR",
+     "PATH",
+     "PKGDIR",
+     "PKGUSE",
+     "PKG_LOGDIR",
+     "PKG_TMPDIR",
+     "PORTAGE_ACTUAL_DISTDIR",
+     "PORTAGE_ARCHLIST",
+     "PORTAGE_BASHRC_FILES",
+     "PORTAGE_BASHRC",
+     "PM_EBUILD_HOOK_DIR",
+     "PORTAGE_BINPKG_FILE",
+     "PORTAGE_BINPKG_TAR_OPTS",
+     "PORTAGE_BINPKG_TMPFILE",
+     "PORTAGE_BIN_PATH",
+     "PORTAGE_BUILDDIR",
+     "PORTAGE_BUILD_GROUP",
+     "PORTAGE_BUILD_USER",
+     "PORTAGE_BUNZIP2_COMMAND",
+     "PORTAGE_BZIP2_COMMAND",
+     "PORTAGE_COLORMAP",
+     "PORTAGE_COMPRESS",
+     "PORTAGE_COMPRESSION_COMMAND",
+     "PORTAGE_COMPRESS_EXCLUDE_SUFFIXES",
+     "PORTAGE_CONFIGROOT",
+     "PORTAGE_DEBUG",
+     "PORTAGE_DEPCACHEDIR",
+     "PORTAGE_DOHTML_UNWARNED_SKIPPED_EXTENSIONS",
+     "PORTAGE_DOHTML_UNWARNED_SKIPPED_FILES",
+     "PORTAGE_DOHTML_WARN_ON_SKIPPED_FILES",
+     "PORTAGE_EBUILD_EXIT_FILE",
+     "PORTAGE_FEATURES",
+     "PORTAGE_GID",
+     "PORTAGE_GRPNAME",
+     "PORTAGE_INTERNAL_CALLER",
+     "PORTAGE_INST_GID",
+     "PORTAGE_INST_UID",
+     "PORTAGE_IPC_DAEMON",
+     "PORTAGE_IUSE",
+     "PORTAGE_ECLASS_LOCATIONS",
+     "PORTAGE_LOG_FILE",
+     "PORTAGE_OVERRIDE_EPREFIX",
+     "PORTAGE_PIPE_FD",
+     "PORTAGE_PROPERTIES",
+     "PORTAGE_PYM_PATH",
+     "PORTAGE_PYTHON",
+     "PORTAGE_PYTHONPATH",
+     "PORTAGE_QUIET",
+     "PORTAGE_REPO_NAME",
+     "PORTAGE_REPOSITORIES",
+     "PORTAGE_RESTRICT",
+     "PORTAGE_SIGPIPE_STATUS",
+     "PORTAGE_SOCKS5_PROXY",
+     "PORTAGE_TMPDIR",
+     "PORTAGE_UPDATE_ENV",
+     "PORTAGE_USERNAME",
+     "PORTAGE_VERBOSE",
+     "PORTAGE_WORKDIR_MODE",
+     "PORTAGE_XATTR_EXCLUDE",
+     "PORTDIR",
+     "PORTDIR_OVERLAY",
+     "PREROOTPATH",
+     "PYTHONDONTWRITEBYTECODE",
+     "REPLACING_VERSIONS",
+     "REPLACED_BY_VERSION",
+     "ROOT",
+     "ROOTPATH",
+     "SANDBOX_LOG",
+     "SYSROOT",
+     "T",
+     "TMP",
+     "TMPDIR",
+     "USE_EXPAND",
+     "USE_ORDER",
+     "WORKDIR",
+     "XARGS",
+     "__PORTAGE_TEST_HARDLINK_LOCKS",
++    # BEGIN PREFIX LOCAL
++    "EXTRA_PATH",
++    "PORTAGE_GROUP",
++    "PORTAGE_USER",
++    # END PREFIX LOCAL
  ]
  
  # user config variables
@@@ -115,13 -232,15 +237,18 @@@ environ_whitelist += 
  ]
  
  # other variables inherited from the calling environment
 +# UNIXMODE is necessary for MiNT
  environ_whitelist += [
- 	"CVS_RSH", "ECHANGELOG_USER",
- 	"GPG_AGENT_INFO",
- 	"SSH_AGENT_PID", "SSH_AUTH_SOCK",
- 	"STY", "WINDOW", "XAUTHORITY",
- 	"UNIXMODE",
+     "CVS_RSH",
+     "ECHANGELOG_USER",
+     "GPG_AGENT_INFO",
+     "SSH_AGENT_PID",
+     "SSH_AUTH_SOCK",
+     "STY",
+     "WINDOW",
+     "XAUTHORITY",
++    # PREFIX LOCAL
++    "UNIXMODE",
  ]
  
  environ_whitelist = frozenset(environ_whitelist)
@@@ -141,11 -265,9 +273,22 @@@ environ_filter += 
  
  # misc variables inherited from the calling environment
  environ_filter += [
- 	"INFOPATH", "MANPATH", "USER",
- 	"HOST", "GROUP", "LOGNAME", "MAIL", "REMOTEHOST",
- 	"SECURITYSESSIONID",
- 	"TERMINFO", "TERM_PROGRAM", "TERM_PROGRAM_VERSION",
- 	"VENDOR", "__CF_USER_TEXT_ENCODING",
+     "INFOPATH",
+     "MANPATH",
+     "USER",
++    # BEGIN PREFIX LOCAL
++    "HOST",
++    "GROUP",
++    "LOGNAME",
++    "MAIL",
++    "REMOTEHOST",
++    "SECURITYSESSIONID",
++    "TERMINFO",
++    "TERM_PROGRAM",
++    "TERM_PROGRAM_VERSION",
++    "VENDOR",
++    "__CF_USER_TEXT_ENCODING",
++    # END PREFIX LOCAL
  ]
  
  # variables that break bash
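The whitelist and filter lists in this file end up as frozensets (as with environ_whitelist = frozenset(environ_whitelist) above) that config.py consults when deciding what an ebuild inherits from the calling environment: roughly, whitelisted variables are allowed through, while filtered ones are dropped. A rough, self-contained sketch of that idea; the tiny lists and helper functions below are illustrative stand-ins, not portage's actual code:

    import os

    # Toy stand-ins for the much larger lists assembled above.
    environ_whitelist = frozenset({"PATH", "HOME", "DISTDIR", "FEATURES"})
    environ_filter = frozenset({"INFOPATH", "MANPATH", "USER"})

    def ebuild_environ(env):
        # Roughly: only whitelisted variables survive into the ebuild environment.
        return {k: v for k, v in env.items() if k in environ_whitelist}

    def filtered_environ(env):
        # Roughly: filtered variables are stripped from the calling environment.
        return {k: v for k, v in env.items() if k not in environ_filter}

    print(sorted(ebuild_environ(dict(os.environ))))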
diff --cc lib/portage/package/ebuild/config.py
index 59972af76,b4d6862a3..625a1be49
--- a/lib/portage/package/ebuild/config.py
+++ b/lib/portage/package/ebuild/config.py
@@@ -41,15 -64,28 +64,28 @@@ from portage.env.loaders import KeyValu
  from portage.exception import InvalidDependString, PortageException
  from portage.localization import _
  from portage.output import colorize
 -from portage.process import fakeroot_capable, sandbox_capable
 +from portage.process import fakeroot_capable, sandbox_capable, macossandbox_capable
  from portage.repository.config import (
- 	allow_profile_repo_deps,
- 	load_repository_config,
+     allow_profile_repo_deps,
+     load_repository_config,
+ )
+ from portage.util import (
+     ensure_dirs,
+     getconfig,
+     grabdict,
+     grabdict_package,
+     grabfile,
+     grabfile_package,
+     LazyItemsDict,
+     normalize_path,
+     shlex_split,
+     stack_dictlist,
+     stack_dicts,
+     stack_lists,
+     writemsg,
+     writemsg_level,
+     _eapi_cache,
  )
- from portage.util import ensure_dirs, getconfig, grabdict, \
- 	grabdict_package, grabfile, grabfile_package, LazyItemsDict, \
- 	normalize_path, shlex_split, stack_dictlist, stack_dicts, stack_lists, \
- 	writemsg, writemsg_level, _eapi_cache
  from portage.util.install_mask import _raise_exc
  from portage.util.path import first_existing
  from portage.util._path import exists_raise_eaccess, isdir_raise_eaccess
@@@ -70,2889 -111,3307 +111,3315 @@@ from portage.package.ebuild._config.unp
  
  _feature_flags_cache = {}
  
+ 
  def _get_feature_flags(eapi_attrs):
- 	cache_key = (eapi_attrs.feature_flag_test,)
- 	flags = _feature_flags_cache.get(cache_key)
- 	if flags is not None:
- 		return flags
+     cache_key = (eapi_attrs.feature_flag_test,)
+     flags = _feature_flags_cache.get(cache_key)
+     if flags is not None:
+         return flags
+ 
+     flags = []
+     if eapi_attrs.feature_flag_test:
+         flags.append("test")
  
- 	flags = []
- 	if eapi_attrs.feature_flag_test:
- 		flags.append("test")
+     flags = frozenset(flags)
+     _feature_flags_cache[cache_key] = flags
+     return flags
  
- 	flags = frozenset(flags)
- 	_feature_flags_cache[cache_key] = flags
- 	return flags
  
  def autouse(myvartree, use_cache=1, mysettings=None):
- 	warnings.warn("portage.autouse() is deprecated",
- 		DeprecationWarning, stacklevel=2)
- 	return ""
+     warnings.warn("portage.autouse() is deprecated", DeprecationWarning, stacklevel=2)
+     return ""
+ 
  
  def check_config_instance(test):
- 	if not isinstance(test, config):
- 		raise TypeError("Invalid type for config object: %s (should be %s)" % (test.__class__, config))
+     if not isinstance(test, config):
+         raise TypeError(
+             "Invalid type for config object: %s (should be %s)"
+             % (test.__class__, config)
+         )
+ 
  
  def best_from_dict(key, top_dict, key_order, EmptyOnError=1, FullCopy=1, AllowEmpty=1):
- 	for x in key_order:
- 		if x in top_dict and key in top_dict[x]:
- 			if FullCopy:
- 				return copy.deepcopy(top_dict[x][key])
- 			return top_dict[x][key]
- 	if EmptyOnError:
- 		return ""
- 	raise KeyError("Key not found in list; '%s'" % key)
+     for x in key_order:
+         if x in top_dict and key in top_dict[x]:
+             if FullCopy:
+                 return copy.deepcopy(top_dict[x][key])
+             return top_dict[x][key]
+     if EmptyOnError:
+         return ""
+     raise KeyError("Key not found in list; '%s'" % key)
+ 
  
  def _lazy_iuse_regex(iuse_implicit):
- 	"""
- 	The PORTAGE_IUSE value is lazily evaluated since re.escape() is slow
- 	and the value is only used when an ebuild phase needs to be executed
- 	(it's used only to generate QA notices).
- 	"""
- 	# Escape anything except ".*" which is supposed to pass through from
- 	# _get_implicit_iuse().
- 	regex = sorted(re.escape(x) for x in iuse_implicit)
- 	regex = "^(%s)$" % "|".join(regex)
- 	regex = regex.replace("\\.\\*", ".*")
- 	return regex
+     """
+     The PORTAGE_IUSE value is lazily evaluated since re.escape() is slow
+     and the value is only used when an ebuild phase needs to be executed
+     (it's used only to generate QA notices).
+     """
+     # Escape anything except ".*" which is supposed to pass through from
+     # _get_implicit_iuse().
+     regex = sorted(re.escape(x) for x in iuse_implicit)
+     regex = "^(%s)$" % "|".join(regex)
+     regex = regex.replace("\\.\\*", ".*")
+     return regex
+ 
  
  class _iuse_implicit_match_cache:
+     def __init__(self, settings):
+         self._iuse_implicit_re = re.compile(
+             "^(%s)$" % "|".join(settings._get_implicit_iuse())
+         )
+         self._cache = {}
+ 
+     def __call__(self, flag):
+         """
+         Returns True if the flag is matched, False otherwise.
+         """
+         try:
+             return self._cache[flag]
+         except KeyError:
+             m = self._iuse_implicit_re.match(flag) is not None
+             self._cache[flag] = m
+             return m
  
- 	def __init__(self, settings):
- 		self._iuse_implicit_re = re.compile("^(%s)$" % \
- 			"|".join(settings._get_implicit_iuse()))
- 		self._cache = {}
- 
- 	def __call__(self, flag):
- 		"""
- 		Returns True if the flag is matched, False otherwise.
- 		"""
- 		try:
- 			return self._cache[flag]
- 		except KeyError:
- 			m = self._iuse_implicit_re.match(flag) is not None
- 			self._cache[flag] = m
- 			return m
  
  class config:
- 	"""
- 	This class encompasses the main portage configuration.  Data is pulled from
- 	ROOT/PORTDIR/profiles/, from ROOT/etc/make.profile incrementally through all
- 	parent profiles as well as from ROOT/PORTAGE_CONFIGROOT/* for user specified
- 	overrides.
- 
- 	Generally if you need data like USE flags, FEATURES, environment variables,
- 	virtuals ...etc you look in here.
- 	"""
- 
- 	_constant_keys = frozenset(['PORTAGE_BIN_PATH', 'PORTAGE_GID',
- 		'PORTAGE_PYM_PATH', 'PORTAGE_PYTHONPATH'])
- 
- 	_deprecated_keys = {'PORTAGE_LOGDIR': 'PORT_LOGDIR',
- 		'PORTAGE_LOGDIR_CLEAN': 'PORT_LOGDIR_CLEAN',
- 		'SIGNED_OFF_BY': 'DCO_SIGNED_OFF_BY'}
- 
- 	_setcpv_aux_keys = ('BDEPEND', 'DEFINED_PHASES', 'DEPEND', 'EAPI', 'IDEPEND',
- 		'INHERITED', 'IUSE', 'REQUIRED_USE', 'KEYWORDS', 'LICENSE', 'PDEPEND',
- 		'PROPERTIES', 'RDEPEND', 'SLOT',
- 		'repository', 'RESTRICT', 'LICENSE',)
- 
- 	_module_aliases = {
- 		"cache.metadata_overlay.database" : "portage.cache.flat_hash.mtime_md5_database",
- 		"portage.cache.metadata_overlay.database" : "portage.cache.flat_hash.mtime_md5_database",
- 	}
- 
- 	_case_insensitive_vars = special_env_vars.case_insensitive_vars
- 	_default_globals = special_env_vars.default_globals
- 	_env_blacklist = special_env_vars.env_blacklist
- 	_environ_filter = special_env_vars.environ_filter
- 	_environ_whitelist = special_env_vars.environ_whitelist
- 	_environ_whitelist_re = special_env_vars.environ_whitelist_re
- 	_global_only_vars = special_env_vars.global_only_vars
- 
- 	def __init__(self, clone=None, mycpv=None, config_profile_path=None,
- 		config_incrementals=None, config_root=None, target_root=None,
- 		sysroot=None, eprefix=None, local_config=True, env=None,
- 		_unmatched_removal=False, repositories=None):
- 		"""
- 		@param clone: If provided, init will use deepcopy to copy by value the instance.
- 		@type clone: Instance of config class.
- 		@param mycpv: CPV to load up (see setcpv), this is the same as calling init with mycpv=None
- 		and then calling instance.setcpv(mycpv).
- 		@type mycpv: String
- 		@param config_profile_path: Configurable path to the profile (usually PROFILE_PATH from portage.const)
- 		@type config_profile_path: String
- 		@param config_incrementals: List of incremental variables
- 			(defaults to portage.const.INCREMENTALS)
- 		@type config_incrementals: List
- 		@param config_root: path to read local config from (defaults to "/", see PORTAGE_CONFIGROOT)
- 		@type config_root: String
- 		@param target_root: the target root, which typically corresponds to the
- 			value of the $ROOT env variable (default is /)
- 		@type target_root: String
- 		@param sysroot: the sysroot to build against, which typically corresponds
- 			 to the value of the $SYSROOT env variable (default is /)
- 		@type sysroot: String
- 		@param eprefix: set the EPREFIX variable (default is portage.const.EPREFIX)
- 		@type eprefix: String
- 		@param local_config: Enables loading of local config (/etc/portage); used most by repoman to
- 		ignore local config (keywording and unmasking)
- 		@type local_config: Boolean
- 		@param env: The calling environment which is used to override settings.
- 			Defaults to os.environ if unspecified.
- 		@type env: dict
- 		@param _unmatched_removal: Enabled by repoman when the
- 			--unmatched-removal option is given.
- 		@type _unmatched_removal: Boolean
- 		@param repositories: Configuration of repositories.
- 			Defaults to portage.repository.config.load_repository_config().
- 		@type repositories: Instance of portage.repository.config.RepoConfigLoader class.
- 		"""
- 
- 		# This is important when config is reloaded after emerge --sync.
- 		_eapi_cache.clear()
- 
- 		# When initializing the global portage.settings instance, avoid
- 		# raising exceptions whenever possible since exceptions thrown
- 		# from 'import portage' or 'import portage.exceptions' statements
- 		# can practically render the api unusable for api consumers.
- 		tolerant = hasattr(portage, '_initializing_globals')
- 		self._tolerant = tolerant
- 		self._unmatched_removal = _unmatched_removal
- 
- 		self.locked   = 0
- 		self.mycpv    = None
- 		self._setcpv_args_hash = None
- 		self.puse     = ""
- 		self._penv    = []
- 		self.modifiedkeys = []
- 		self.uvlist = []
- 		self._accept_chost_re = None
- 		self._accept_properties = None
- 		self._accept_restrict = None
- 		self._features_overrides = []
- 		self._make_defaults = None
- 		self._parent_stable = None
- 		self._soname_provided = None
- 
- 		# _unknown_features records unknown features that
- 		# have triggered warning messages, and ensures that
- 		# the same warning isn't shown twice.
- 		self._unknown_features = set()
- 
- 		self.local_config = local_config
- 
- 		if clone:
- 			# For immutable attributes, use shallow copy for
- 			# speed and memory conservation.
- 			self._tolerant = clone._tolerant
- 			self._unmatched_removal = clone._unmatched_removal
- 			self.categories = clone.categories
- 			self.depcachedir = clone.depcachedir
- 			self.incrementals = clone.incrementals
- 			self.module_priority = clone.module_priority
- 			self.profile_path = clone.profile_path
- 			self.profiles = clone.profiles
- 			self.packages = clone.packages
- 			self.repositories = clone.repositories
- 			self.unpack_dependencies = clone.unpack_dependencies
- 			self._default_features_use = clone._default_features_use
- 			self._iuse_effective = clone._iuse_effective
- 			self._iuse_implicit_match = clone._iuse_implicit_match
- 			self._non_user_variables = clone._non_user_variables
- 			self._env_d_blacklist = clone._env_d_blacklist
- 			self._pbashrc = clone._pbashrc
- 			self._repo_make_defaults = clone._repo_make_defaults
- 			self.usemask = clone.usemask
- 			self.useforce = clone.useforce
- 			self.puse = clone.puse
- 			self.user_profile_dir = clone.user_profile_dir
- 			self.local_config = clone.local_config
- 			self.make_defaults_use = clone.make_defaults_use
- 			self.mycpv = clone.mycpv
- 			self._setcpv_args_hash = clone._setcpv_args_hash
- 			self._soname_provided = clone._soname_provided
- 			self._profile_bashrc = clone._profile_bashrc
- 
- 			# immutable attributes (internal policy ensures lack of mutation)
- 			self._locations_manager = clone._locations_manager
- 			self._use_manager = clone._use_manager
- 			# force instantiation of lazy immutable objects when cloning, so
- 			# that they're not instantiated more than once
- 			self._keywords_manager_obj = clone._keywords_manager
- 			self._mask_manager_obj = clone._mask_manager
- 
- 			# shared mutable attributes
- 			self._unknown_features = clone._unknown_features
- 
- 			self.modules         = copy.deepcopy(clone.modules)
- 			self._penv = copy.deepcopy(clone._penv)
- 
- 			self.configdict = copy.deepcopy(clone.configdict)
- 			self.configlist = [
- 				self.configdict['env.d'],
- 				self.configdict['repo'],
- 				self.configdict['features'],
- 				self.configdict['pkginternal'],
- 				self.configdict['globals'],
- 				self.configdict['defaults'],
- 				self.configdict['conf'],
- 				self.configdict['pkg'],
- 				self.configdict['env'],
- 			]
- 			self.lookuplist = self.configlist[:]
- 			self.lookuplist.reverse()
- 			self._use_expand_dict = copy.deepcopy(clone._use_expand_dict)
- 			self.backupenv  = self.configdict["backupenv"]
- 			self.prevmaskdict = copy.deepcopy(clone.prevmaskdict)
- 			self.pprovideddict = copy.deepcopy(clone.pprovideddict)
- 			self.features = features_set(self)
- 			self.features._features = copy.deepcopy(clone.features._features)
- 			self._features_overrides = copy.deepcopy(clone._features_overrides)
- 
- 			#Strictly speaking _license_manager is not immutable. Users need to ensure that
- 			#extract_global_changes() is called right after __init__ (if at all).
- 			#It also has the mutable member _undef_lic_groups. It is used to track
- 			#undefined license groups, to not display an error message for the same
- 			#group again and again. Because of this, it's useful to share it between
- 			#all LicenseManager instances.
- 			self._license_manager = clone._license_manager
- 
- 			# force instantiation of lazy objects when cloning, so
- 			# that they're not instantiated more than once
- 			self._virtuals_manager_obj = copy.deepcopy(clone._virtuals_manager)
- 
- 			self._accept_properties = copy.deepcopy(clone._accept_properties)
- 			self._ppropertiesdict = copy.deepcopy(clone._ppropertiesdict)
- 			self._accept_restrict = copy.deepcopy(clone._accept_restrict)
- 			self._paccept_restrict = copy.deepcopy(clone._paccept_restrict)
- 			self._penvdict = copy.deepcopy(clone._penvdict)
- 			self._pbashrcdict = copy.deepcopy(clone._pbashrcdict)
- 			self._expand_map = copy.deepcopy(clone._expand_map)
- 
- 		else:
- 			# lazily instantiated objects
- 			self._keywords_manager_obj = None
- 			self._mask_manager_obj = None
- 			self._virtuals_manager_obj = None
- 
- 			locations_manager = LocationsManager(config_root=config_root,
- 				config_profile_path=config_profile_path, eprefix=eprefix,
- 				local_config=local_config, target_root=target_root,
- 				sysroot=sysroot)
- 			self._locations_manager = locations_manager
- 
- 			eprefix = locations_manager.eprefix
- 			config_root = locations_manager.config_root
- 			sysroot = locations_manager.sysroot
- 			esysroot = locations_manager.esysroot
- 			broot = locations_manager.broot
- 			abs_user_config = locations_manager.abs_user_config
- 			make_conf_paths = [
- 				os.path.join(config_root, 'etc', 'make.conf'),
- 				os.path.join(config_root, MAKE_CONF_FILE)
- 			]
- 			try:
- 				if os.path.samefile(*make_conf_paths):
- 					make_conf_paths.pop()
- 			except OSError:
- 				pass
- 
- 			make_conf_count = 0
- 			make_conf = {}
- 			for x in make_conf_paths:
- 				mygcfg = getconfig(x,
- 					tolerant=tolerant, allow_sourcing=True,
- 					expand=make_conf, recursive=True)
- 				if mygcfg is not None:
- 					make_conf.update(mygcfg)
- 					make_conf_count += 1
- 
- 			if make_conf_count == 2:
- 				writemsg("!!! %s\n" %
- 					_("Found 2 make.conf files, using both '%s' and '%s'") %
- 					tuple(make_conf_paths), noiselevel=-1)
- 
- 			# __* variables set in make.conf are local and are not be propagated.
- 			make_conf = {k: v for k, v in make_conf.items() if not k.startswith("__")}
- 
- 			# Allow ROOT setting to come from make.conf if it's not overridden
- 			# by the constructor argument (from the calling environment).
- 			locations_manager.set_root_override(make_conf.get("ROOT"))
- 			target_root = locations_manager.target_root
- 			eroot = locations_manager.eroot
- 			self.global_config_path = locations_manager.global_config_path
- 
- 			# The expand_map is used for variable substitution
- 			# in getconfig() calls, and the getconfig() calls
- 			# update expand_map with the value of each variable
- 			# assignment that occurs. Variable substitution occurs
- 			# in the following order, which corresponds to the
- 			# order of appearance in self.lookuplist:
- 			#
- 			#   * env.d
- 			#   * make.globals
- 			#   * make.defaults
- 			#   * make.conf
- 			#
- 			# Notably absent is "env", since we want to avoid any
- 			# interaction with the calling environment that might
- 			# lead to unexpected results.
- 
- 			env_d = getconfig(os.path.join(eroot, "etc", "profile.env"),
- 				tolerant=tolerant, expand=False) or {}
- 			expand_map = env_d.copy()
- 			self._expand_map = expand_map
- 
- 			# Allow make.globals and make.conf to set paths relative to vars like ${EPREFIX}.
- 			expand_map["BROOT"] = broot
- 			expand_map["EPREFIX"] = eprefix
- 			expand_map["EROOT"] = eroot
- 			expand_map["ESYSROOT"] = esysroot
- 			expand_map["PORTAGE_CONFIGROOT"] = config_root
- 			expand_map["ROOT"] = target_root
- 			expand_map["SYSROOT"] = sysroot
- 
- 			if portage._not_installed:
- 				make_globals_path = os.path.join(PORTAGE_BASE_PATH, "cnf", "make.globals")
- 			else:
- 				make_globals_path = os.path.join(self.global_config_path, "make.globals")
- 			old_make_globals = os.path.join(config_root, "etc", "make.globals")
- 			if os.path.isfile(old_make_globals) and \
- 				not os.path.samefile(make_globals_path, old_make_globals):
- 				# Don't warn if they refer to the same path, since
- 				# that can be used for backward compatibility with
- 				# old software.
- 				writemsg("!!! %s\n" %
- 					_("Found obsolete make.globals file: "
- 					"'%s', (using '%s' instead)") %
- 					(old_make_globals, make_globals_path),
- 					noiselevel=-1)
- 
- 			make_globals = getconfig(make_globals_path,
- 				tolerant=tolerant, expand=expand_map)
- 			if make_globals is None:
- 				make_globals = {}
- 
- 			for k, v in self._default_globals.items():
- 				make_globals.setdefault(k, v)
- 
- 			if config_incrementals is None:
- 				self.incrementals = INCREMENTALS
- 			else:
- 				self.incrementals = config_incrementals
- 			if not isinstance(self.incrementals, frozenset):
- 				self.incrementals = frozenset(self.incrementals)
- 
- 			self.module_priority    = ("user", "default")
- 			self.modules            = {}
- 			modules_file = os.path.join(config_root, MODULES_FILE_PATH)
- 			modules_loader = KeyValuePairFileLoader(modules_file, None, None)
- 			modules_dict, modules_errors = modules_loader.load()
- 			self.modules["user"] = modules_dict
- 			if self.modules["user"] is None:
- 				self.modules["user"] = {}
- 			user_auxdbmodule = \
- 				self.modules["user"].get("portdbapi.auxdbmodule")
- 			if user_auxdbmodule is not None and \
- 				user_auxdbmodule in self._module_aliases:
- 				warnings.warn("'%s' is deprecated: %s" %
- 				(user_auxdbmodule, modules_file))
- 
- 			self.modules["default"] = {
- 				"portdbapi.auxdbmodule":  "portage.cache.flat_hash.mtime_md5_database",
- 			}
- 
- 			self.configlist=[]
- 
- 			# back up our incremental variables:
- 			self.configdict={}
- 			self._use_expand_dict = {}
- 			# configlist will contain: [ env.d, globals, features, defaults, conf, pkg, backupenv, env ]
- 			self.configlist.append({})
- 			self.configdict["env.d"] = self.configlist[-1]
- 
- 			self.configlist.append({})
- 			self.configdict["repo"] = self.configlist[-1]
- 
- 			self.configlist.append({})
- 			self.configdict["features"] = self.configlist[-1]
- 
- 			self.configlist.append({})
- 			self.configdict["pkginternal"] = self.configlist[-1]
- 
- 			# env_d will be None if profile.env doesn't exist.
- 			if env_d:
- 				self.configdict["env.d"].update(env_d)
- 
- 			# backupenv is used for calculating incremental variables.
- 			if env is None:
- 				env = os.environ
- 
- 			# Avoid potential UnicodeDecodeError exceptions later.
- 			env_unicode = dict((_unicode_decode(k), _unicode_decode(v))
- 				for k, v in env.items())
- 
- 			self.backupenv = env_unicode
- 
- 			if env_d:
- 				# Remove duplicate values so they don't override updated
- 				# profile.env values later (profile.env is reloaded in each
- 				# call to self.regenerate).
- 				for k, v in env_d.items():
- 					try:
- 						if self.backupenv[k] == v:
- 							del self.backupenv[k]
- 					except KeyError:
- 						pass
- 				del k, v
- 
- 			self.configdict["env"] = LazyItemsDict(self.backupenv)
- 
- 			self.configlist.append(make_globals)
- 			self.configdict["globals"]=self.configlist[-1]
- 
- 			self.make_defaults_use = []
- 
- 			#Loading Repositories
- 			self["PORTAGE_CONFIGROOT"] = config_root
- 			self["ROOT"] = target_root
- 			self["SYSROOT"] = sysroot
- 			self["EPREFIX"] = eprefix
- 			self["EROOT"] = eroot
- 			self["ESYSROOT"] = esysroot
- 			self["BROOT"] = broot
- 			known_repos = []
- 			portdir = ""
- 			portdir_overlay = ""
- 			portdir_sync = None
- 			for confs in [make_globals, make_conf, self.configdict["env"]]:
- 				v = confs.get("PORTDIR")
- 				if v is not None:
- 					portdir = v
- 					known_repos.append(v)
- 				v = confs.get("PORTDIR_OVERLAY")
- 				if v is not None:
- 					portdir_overlay = v
- 					known_repos.extend(shlex_split(v))
- 				v = confs.get("SYNC")
- 				if v is not None:
- 					portdir_sync = v
- 				if 'PORTAGE_RSYNC_EXTRA_OPTS' in confs:
- 					self['PORTAGE_RSYNC_EXTRA_OPTS'] = confs['PORTAGE_RSYNC_EXTRA_OPTS']
- 
- 			self["PORTDIR"] = portdir
- 			self["PORTDIR_OVERLAY"] = portdir_overlay
- 			if portdir_sync:
- 				self["SYNC"] = portdir_sync
- 			self.lookuplist = [self.configdict["env"]]
- 			if repositories is None:
- 				self.repositories = load_repository_config(self)
- 			else:
- 				self.repositories = repositories
- 
- 			known_repos.extend(repo.location for repo in self.repositories)
- 			known_repos = frozenset(known_repos)
- 
- 			self['PORTAGE_REPOSITORIES'] = self.repositories.config_string()
- 			self.backup_changes('PORTAGE_REPOSITORIES')
- 
- 			#filling PORTDIR and PORTDIR_OVERLAY variable for compatibility
- 			main_repo = self.repositories.mainRepo()
- 			if main_repo is not None:
- 				self["PORTDIR"] = main_repo.location
- 				self.backup_changes("PORTDIR")
- 				expand_map["PORTDIR"] = self["PORTDIR"]
- 
- 			# repoman controls PORTDIR_OVERLAY via the environment, so no
- 			# special cases are needed here.
- 			portdir_overlay = list(self.repositories.repoLocationList())
- 			if portdir_overlay and portdir_overlay[0] == self["PORTDIR"]:
- 				portdir_overlay = portdir_overlay[1:]
- 
- 			new_ov = []
- 			if portdir_overlay:
- 				for ov in portdir_overlay:
- 					ov = normalize_path(ov)
- 					if isdir_raise_eaccess(ov) or portage._sync_mode:
- 						new_ov.append(portage._shell_quote(ov))
- 					else:
- 						writemsg(_("!!! Invalid PORTDIR_OVERLAY"
- 							" (not a dir): '%s'\n") % ov, noiselevel=-1)
- 
- 			self["PORTDIR_OVERLAY"] = " ".join(new_ov)
- 			self.backup_changes("PORTDIR_OVERLAY")
- 			expand_map["PORTDIR_OVERLAY"] = self["PORTDIR_OVERLAY"]
- 
- 			locations_manager.set_port_dirs(self["PORTDIR"], self["PORTDIR_OVERLAY"])
- 			locations_manager.load_profiles(self.repositories, known_repos)
- 
- 			profiles_complex = locations_manager.profiles_complex
- 			self.profiles = locations_manager.profiles
- 			self.profile_path = locations_manager.profile_path
- 			self.user_profile_dir = locations_manager.user_profile_dir
- 
- 			try:
- 				packages_list = [grabfile_package(
- 					os.path.join(x.location, "packages"),
- 					verify_eapi=True, eapi=x.eapi, eapi_default=None,
- 					allow_repo=allow_profile_repo_deps(x),
- 					allow_build_id=x.allow_build_id)
- 					for x in profiles_complex]
- 			except EnvironmentError as e:
- 				_raise_exc(e)
- 
- 			self.packages = tuple(stack_lists(packages_list, incremental=1))
- 
- 			# revmaskdict
- 			self.prevmaskdict={}
- 			for x in self.packages:
- 				# Negative atoms are filtered by the above stack_lists() call.
- 				if not isinstance(x, Atom):
- 					x = Atom(x.lstrip('*'))
- 				self.prevmaskdict.setdefault(x.cp, []).append(x)
- 
- 			self.unpack_dependencies = load_unpack_dependencies_configuration(self.repositories)
- 
- 			mygcfg = {}
- 			if profiles_complex:
- 				mygcfg_dlists = []
- 				for x in profiles_complex:
- 					# Prevent accidents triggered by USE="${USE} ..." settings
- 					# at the top of make.defaults which caused parent profile
- 					# USE to override parent profile package.use settings.
- 					# It would be nice to guard USE_EXPAND variables like
- 					# this too, but unfortunately USE_EXPAND is not known
- 					# until after make.defaults has been evaluated, so that
- 					# will require some form of make.defaults preprocessing.
- 					expand_map.pop("USE", None)
- 					mygcfg_dlists.append(
- 						getconfig(os.path.join(x.location, "make.defaults"),
- 						tolerant=tolerant, expand=expand_map,
- 						recursive=x.portage1_directories))
- 				self._make_defaults = mygcfg_dlists
- 				mygcfg = stack_dicts(mygcfg_dlists,
- 					incrementals=self.incrementals)
- 				if mygcfg is None:
- 					mygcfg = {}
- 			self.configlist.append(mygcfg)
- 			self.configdict["defaults"]=self.configlist[-1]
- 
- 			mygcfg = {}
- 			for x in make_conf_paths:
- 				mygcfg.update(getconfig(x,
- 					tolerant=tolerant, allow_sourcing=True,
- 					expand=expand_map, recursive=True) or {})
- 
- 			# __* variables set in make.conf are local and are not propagated.
- 			mygcfg = {k: v for k, v in mygcfg.items() if not k.startswith("__")}
- 
- 			# Don't allow the user to override certain variables in make.conf
- 			profile_only_variables = self.configdict["defaults"].get(
- 				"PROFILE_ONLY_VARIABLES", "").split()
- 			profile_only_variables = stack_lists([profile_only_variables])
- 			non_user_variables = set()
- 			non_user_variables.update(profile_only_variables)
- 			non_user_variables.update(self._env_blacklist)
- 			non_user_variables.update(self._global_only_vars)
- 			non_user_variables = frozenset(non_user_variables)
- 			self._non_user_variables = non_user_variables
- 
- 			self._env_d_blacklist = frozenset(chain(
- 				profile_only_variables,
- 				self._env_blacklist,
- 			))
- 			env_d = self.configdict["env.d"]
- 			for k in self._env_d_blacklist:
- 				env_d.pop(k, None)
- 
- 			for k in profile_only_variables:
- 				mygcfg.pop(k, None)
- 
- 			self.configlist.append(mygcfg)
- 			self.configdict["conf"]=self.configlist[-1]
- 
- 			self.configlist.append(LazyItemsDict())
- 			self.configdict["pkg"]=self.configlist[-1]
- 
- 			self.configdict["backupenv"] = self.backupenv
- 
- 			# Don't allow the user to override certain variables in the env
- 			for k in profile_only_variables:
- 				self.backupenv.pop(k, None)
- 
- 			self.configlist.append(self.configdict["env"])
- 
- 			# make lookuplist for loading package.*
- 			self.lookuplist=self.configlist[:]
- 			self.lookuplist.reverse()
- 
- 			# Blacklist vars that could interfere with portage internals.
- 			for blacklisted in self._env_blacklist:
- 				for cfg in self.lookuplist:
- 					cfg.pop(blacklisted, None)
- 				self.backupenv.pop(blacklisted, None)
- 			del blacklisted, cfg
- 
- 			self["PORTAGE_CONFIGROOT"] = config_root
- 			self.backup_changes("PORTAGE_CONFIGROOT")
- 			self["ROOT"] = target_root
- 			self.backup_changes("ROOT")
- 			self["SYSROOT"] = sysroot
- 			self.backup_changes("SYSROOT")
- 			self["EPREFIX"] = eprefix
- 			self.backup_changes("EPREFIX")
- 			self["EROOT"] = eroot
- 			self.backup_changes("EROOT")
- 			self["ESYSROOT"] = esysroot
- 			self.backup_changes("ESYSROOT")
- 			self["BROOT"] = broot
- 			self.backup_changes("BROOT")
- 
- 			# The prefix of the running portage instance is used in the
- 			# ebuild environment to implement the --host-root option for
- 			# best_version and has_version.
- 			self["PORTAGE_OVERRIDE_EPREFIX"] = portage.const.EPREFIX
- 			self.backup_changes("PORTAGE_OVERRIDE_EPREFIX")
- 
- 			self._ppropertiesdict = portage.dep.ExtendedAtomDict(dict)
- 			self._paccept_restrict = portage.dep.ExtendedAtomDict(dict)
- 			self._penvdict = portage.dep.ExtendedAtomDict(dict)
- 			self._pbashrcdict = {}
- 			self._pbashrc = ()
- 
- 			self._repo_make_defaults = {}
- 			for repo in self.repositories.repos_with_profiles():
- 				d = getconfig(os.path.join(repo.location, "profiles", "make.defaults"),
- 					tolerant=tolerant, expand=self.configdict["globals"].copy(), recursive=repo.portage1_profiles) or {}
- 				if d:
- 					for k in chain(self._env_blacklist,
- 						profile_only_variables, self._global_only_vars):
- 						d.pop(k, None)
- 				self._repo_make_defaults[repo.name] = d
- 
- 			#Read all USE related files from profiles and optionally from user config.
- 			self._use_manager = UseManager(self.repositories, profiles_complex,
- 				abs_user_config, self._isStable, user_config=local_config)
- 			#Initialize all USE related variables we track ourselves.
- 			self.usemask = self._use_manager.getUseMask()
- 			self.useforce = self._use_manager.getUseForce()
- 			self.configdict["conf"]["USE"] = \
- 				self._use_manager.extract_global_USE_changes( \
- 					self.configdict["conf"].get("USE", ""))
- 
- 			#Read license_groups from profiles, and optionally license_groups and package.license from user config
- 			self._license_manager = LicenseManager(locations_manager.profile_locations, \
- 				abs_user_config, user_config=local_config)
- 			#Extract '*/*' entries from package.license
- 			self.configdict["conf"]["ACCEPT_LICENSE"] = \
- 				self._license_manager.extract_global_changes( \
- 					self.configdict["conf"].get("ACCEPT_LICENSE", ""))
- 
- 			# profile.bashrc
- 			self._profile_bashrc = tuple(os.path.isfile(os.path.join(profile.location, 'profile.bashrc'))
- 				for profile in profiles_complex)
- 
- 			if local_config:
- 				#package.properties
- 				propdict = grabdict_package(os.path.join(
- 					abs_user_config, "package.properties"), recursive=1, allow_wildcard=True, \
- 					allow_repo=True, verify_eapi=False,
- 					allow_build_id=True)
- 				v = propdict.pop("*/*", None)
- 				if v is not None:
- 					if "ACCEPT_PROPERTIES" in self.configdict["conf"]:
- 						self.configdict["conf"]["ACCEPT_PROPERTIES"] += " " + " ".join(v)
- 					else:
- 						self.configdict["conf"]["ACCEPT_PROPERTIES"] = " ".join(v)
- 				for k, v in propdict.items():
- 					self._ppropertiesdict.setdefault(k.cp, {})[k] = v
- 
- 				# package.accept_restrict
- 				d = grabdict_package(os.path.join(
- 					abs_user_config, "package.accept_restrict"),
- 					recursive=True, allow_wildcard=True,
- 					allow_repo=True, verify_eapi=False,
- 					allow_build_id=True)
- 				v = d.pop("*/*", None)
- 				if v is not None:
- 					if "ACCEPT_RESTRICT" in self.configdict["conf"]:
- 						self.configdict["conf"]["ACCEPT_RESTRICT"] += " " + " ".join(v)
- 					else:
- 						self.configdict["conf"]["ACCEPT_RESTRICT"] = " ".join(v)
- 				for k, v in d.items():
- 					self._paccept_restrict.setdefault(k.cp, {})[k] = v
- 
- 				#package.env
- 				penvdict = grabdict_package(os.path.join(
- 					abs_user_config, "package.env"), recursive=1, allow_wildcard=True, \
- 					allow_repo=True, verify_eapi=False,
- 					allow_build_id=True)
- 				v = penvdict.pop("*/*", None)
- 				if v is not None:
- 					global_wildcard_conf = {}
- 					self._grab_pkg_env(v, global_wildcard_conf)
- 					incrementals = self.incrementals
- 					conf_configdict = self.configdict["conf"]
- 					for k, v in global_wildcard_conf.items():
- 						if k in incrementals:
- 							if k in conf_configdict:
- 								conf_configdict[k] = \
- 									conf_configdict[k] + " " + v
- 							else:
- 								conf_configdict[k] = v
- 						else:
- 							conf_configdict[k] = v
- 						expand_map[k] = v
- 
- 				for k, v in penvdict.items():
- 					self._penvdict.setdefault(k.cp, {})[k] = v
- 
- 				# package.bashrc
- 				for profile in profiles_complex:
- 					if not 'profile-bashrcs' in profile.profile_formats:
- 						continue
- 					self._pbashrcdict[profile] = \
- 						portage.dep.ExtendedAtomDict(dict)
- 					bashrc = grabdict_package(os.path.join(profile.location,
- 						"package.bashrc"), recursive=1, allow_wildcard=True,
- 								allow_repo=allow_profile_repo_deps(profile),
- 								verify_eapi=True,
- 								eapi=profile.eapi, eapi_default=None,
- 								allow_build_id=profile.allow_build_id)
- 					if not bashrc:
- 						continue
- 
- 					for k, v in bashrc.items():
- 						envfiles = [os.path.join(profile.location,
- 							"bashrc",
- 							envname) for envname in v]
- 						self._pbashrcdict[profile].setdefault(k.cp, {})\
- 							.setdefault(k, []).extend(envfiles)
- 
- 			#getting categories from an external file now
- 			self.categories = [grabfile(os.path.join(x, "categories")) \
- 				for x in locations_manager.profile_and_user_locations]
- 			category_re = dbapi._category_re
- 			# categories used to be a tuple, but now we use a frozenset
- 			# for hashed category validation in portdbapi.cp_list()
- 			self.categories = frozenset(
- 				x for x in stack_lists(self.categories, incremental=1)
- 				if category_re.match(x) is not None)
- 
- 			archlist = [grabfile(os.path.join(x, "arch.list")) \
- 				for x in locations_manager.profile_and_user_locations]
- 			archlist = sorted(stack_lists(archlist, incremental=1))
- 			self.configdict["conf"]["PORTAGE_ARCHLIST"] = " ".join(archlist)
- 
- 			pkgprovidedlines = []
- 			for x in profiles_complex:
- 				provpath = os.path.join(x.location, "package.provided")
- 				if os.path.exists(provpath):
- 					if _get_eapi_attrs(x.eapi).allows_package_provided:
- 						pkgprovidedlines.append(grabfile(provpath,
- 							recursive=x.portage1_directories))
- 					else:
- 						# TODO: bail out?
- 						writemsg((_("!!! package.provided not allowed in EAPI %s: ")
- 								%x.eapi)+x.location+"\n",
- 							noiselevel=-1)
- 
- 			pkgprovidedlines = stack_lists(pkgprovidedlines, incremental=1)
- 			has_invalid_data = False
- 			for x in range(len(pkgprovidedlines)-1, -1, -1):
- 				myline = pkgprovidedlines[x]
- 				if not isvalidatom("=" + myline):
- 					writemsg(_("Invalid package name in package.provided: %s\n") % \
- 						myline, noiselevel=-1)
- 					has_invalid_data = True
- 					del pkgprovidedlines[x]
- 					continue
- 				cpvr = catpkgsplit(pkgprovidedlines[x])
- 				if not cpvr or cpvr[0] == "null":
- 					writemsg(_("Invalid package name in package.provided: ")+pkgprovidedlines[x]+"\n",
- 						noiselevel=-1)
- 					has_invalid_data = True
- 					del pkgprovidedlines[x]
- 					continue
- 			if has_invalid_data:
- 				writemsg(_("See portage(5) for correct package.provided usage.\n"),
- 					noiselevel=-1)
- 			self.pprovideddict = {}
- 			for x in pkgprovidedlines:
- 				x_split = catpkgsplit(x)
- 				if x_split is None:
- 					continue
- 				mycatpkg = cpv_getkey(x)
- 				if mycatpkg in self.pprovideddict:
- 					self.pprovideddict[mycatpkg].append(x)
- 				else:
- 					self.pprovideddict[mycatpkg]=[x]
- 
- 			# reasonable defaults; this is important as without USE_ORDER,
- 			# USE will always be "" (nothing set)!
- 			if "USE_ORDER" not in self:
- 				self["USE_ORDER"] = "env:pkg:conf:defaults:pkginternal:features:repo:env.d"
- 				self.backup_changes("USE_ORDER")
- 
- 			if "CBUILD" not in self and "CHOST" in self:
- 				self["CBUILD"] = self["CHOST"]
- 				self.backup_changes("CBUILD")
- 
- 			if "USERLAND" not in self:
- 				# Set default USERLAND so that our test cases can assume that
- 				# it's always set. This allows isolated-functions.sh to avoid
- 				# calling uname -s when sourced.
- 				system = platform.system()
- 				if system is not None and \
- 					(system.endswith("BSD") or system == "DragonFly"):
- 					self["USERLAND"] = "BSD"
- 				else:
- 					self["USERLAND"] = "GNU"
- 				self.backup_changes("USERLAND")
- 
- 			default_inst_ids = {
- 				"PORTAGE_INST_GID": "0",
- 				"PORTAGE_INST_UID": "0",
- 			}
- 
- 			# PREFIX LOCAL: inventing a UID/GID based on a path is a very
- 			# bad idea: it breaks almost everything, since group ids
- 			# don't have to match when a user has many.
- 			# In particular, this breaks the configure-set portage
- 			# group and user (in portage/data.py).
- 			eroot_or_parent = first_existing(eroot)
- 			unprivileged = True
- #			try:
- #				eroot_st = os.stat(eroot_or_parent)
- #			except OSError:
- #				pass
- #			else:
- #
- #				if portage.data._unprivileged_mode(
- #					eroot_or_parent, eroot_st):
- #					unprivileged = True
- #
- #					default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
- #					default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
- #
- #					if "PORTAGE_USERNAME" not in self:
- #						try:
- #							pwd_struct = pwd.getpwuid(eroot_st.st_uid)
- #						except KeyError:
- #							pass
- #						else:
- #							self["PORTAGE_USERNAME"] = pwd_struct.pw_name
- #							self.backup_changes("PORTAGE_USERNAME")
- #
- #					if "PORTAGE_GRPNAME" not in self:
- #						try:
- #							grp_struct = grp.getgrgid(eroot_st.st_gid)
- #						except KeyError:
- #							pass
- #						else:
- #							self["PORTAGE_GRPNAME"] = grp_struct.gr_name
- #							self.backup_changes("PORTAGE_GRPNAME")
- 			# END PREFIX LOCAL
- 
- 			for var, default_val in default_inst_ids.items():
- 				try:
- 					self[var] = str(int(self.get(var, default_val)))
- 				except ValueError:
- 					writemsg(_("!!! %s='%s' is not a valid integer.  "
- 						"Falling back to %s.\n") % (var, self[var], default_val),
- 						noiselevel=-1)
- 					self[var] = default_val
- 				self.backup_changes(var)
- 
- 			self.depcachedir = self.get("PORTAGE_DEPCACHEDIR")
- 			if self.depcachedir is None:
- 				self.depcachedir = os.path.join(os.sep,
- 					portage.const.EPREFIX, DEPCACHE_PATH.lstrip(os.sep))
- 				if unprivileged and target_root != os.sep:
- 					# In unprivileged mode, automatically make
- 					# depcachedir relative to target_root if the
- 					# default depcachedir is not writable.
- 					if not os.access(first_existing(self.depcachedir),
- 						os.W_OK):
- 						self.depcachedir = os.path.join(eroot,
- 							DEPCACHE_PATH.lstrip(os.sep))
- 
- 			self["PORTAGE_DEPCACHEDIR"] = self.depcachedir
- 			self.backup_changes("PORTAGE_DEPCACHEDIR")
- 
- 			if portage._internal_caller:
- 				self["PORTAGE_INTERNAL_CALLER"] = "1"
- 				self.backup_changes("PORTAGE_INTERNAL_CALLER")
- 
- 			# initialize self.features
- 			self.regenerate()
- 			feature_use = []
- 			if "test" in self.features:
- 				feature_use.append("test")
- 			self.configdict["features"]["USE"] = self._default_features_use = " ".join(feature_use)
- 			if feature_use:
- 				# Regenerate USE so that the initial "test" flag state is
- 				# correct for evaluation of !test? conditionals in RESTRICT.
- 				self.regenerate()
- 
- 			if unprivileged:
- 				self.features.add('unprivileged')
- 
- 			if bsd_chflags:
- 				self.features.add('chflags')
- 
- 			self._init_iuse()
- 
- 			self._validate_commands()
- 
- 			for k in self._case_insensitive_vars:
- 				if k in self:
- 					self[k] = self[k].lower()
- 					self.backup_changes(k)
- 
- 			# The first constructed config object initializes these modules,
- 			# and subsequent calls to the _init() functions have no effect.
- 			portage.output._init(config_root=self['PORTAGE_CONFIGROOT'])
- 			portage.data._init(self)
- 
- 		if mycpv:
- 			self.setcpv(mycpv)
- 
- 	def _init_iuse(self):
- 		self._iuse_effective = self._calc_iuse_effective()
- 		self._iuse_implicit_match = _iuse_implicit_match_cache(self)
- 
- 	@property
- 	def mygcfg(self):
- 		warnings.warn("portage.config.mygcfg is deprecated", stacklevel=3)
- 		return {}
- 
- 	def _validate_commands(self):
- 		for k in special_env_vars.validate_commands:
- 			v = self.get(k)
- 			if v is not None:
- 				valid, v_split = validate_cmd_var(v)
- 
- 				if not valid:
- 					if v_split:
- 						writemsg_level(_("%s setting is invalid: '%s'\n") % \
- 							(k, v), level=logging.ERROR, noiselevel=-1)
- 
- 					# before deleting the invalid setting, backup
- 					# the default value if available
- 					v = self.configdict['globals'].get(k)
- 					if v is not None:
- 						default_valid, v_split = validate_cmd_var(v)
- 						if not default_valid:
- 							if v_split:
- 								writemsg_level(
- 									_("%s setting from make.globals" + \
- 									" is invalid: '%s'\n") % \
- 									(k, v), level=logging.ERROR, noiselevel=-1)
- 							# make.globals seems corrupt, so try for
- 							# a hardcoded default instead
- 							v = self._default_globals.get(k)
- 
- 					# delete all settings for this key,
- 					# including the invalid one
- 					del self[k]
- 					self.backupenv.pop(k, None)
- 					if v:
- 						# restore validated default
- 						self.configdict['globals'][k] = v
- 
- 	def _init_dirs(self):
- 		"""
- 		Create a few directories that are critical to portage operation
- 		"""
- 		if not os.access(self["EROOT"], os.W_OK):
- 			return
- 
- 		#                                gid, mode, mask, preserve_perms
- 		dir_mode_map = {
- 			"tmp"             : (         -1, 0o1777,  0,  True),
- 			"var/tmp"         : (         -1, 0o1777,  0,  True),
- 			PRIVATE_PATH      : (portage_gid, 0o2750, 0o2, False),
- 			CACHE_PATH        : (portage_gid,  0o755, 0o2, False)
- 		}
- 
- 		for mypath, (gid, mode, modemask, preserve_perms) \
- 			in dir_mode_map.items():
- 			mydir = os.path.join(self["EROOT"], mypath)
- 			if preserve_perms and os.path.isdir(mydir):
- 				# Only adjust permissions on some directories if
- 				# they don't exist yet. This gives freedom to the
- 				# user to adjust permissions to suit their taste.
- 				continue
- 			try:
- 				ensure_dirs(mydir, gid=gid, mode=mode, mask=modemask)
- 			except PortageException as e:
- 				writemsg(_("!!! Directory initialization failed: '%s'\n") % mydir,
- 					noiselevel=-1)
- 				writemsg("!!! %s\n" % str(e),
- 					noiselevel=-1)
- 
- 	@property
- 	def _keywords_manager(self):
- 		if self._keywords_manager_obj is None:
- 			self._keywords_manager_obj = KeywordsManager(
- 				self._locations_manager.profiles_complex,
- 				self._locations_manager.abs_user_config,
- 				self.local_config,
- 				global_accept_keywords=self.configdict["defaults"].get("ACCEPT_KEYWORDS", ""))
- 		return self._keywords_manager_obj
- 
- 	@property
- 	def _mask_manager(self):
- 		if self._mask_manager_obj is None:
- 			self._mask_manager_obj = MaskManager(self.repositories,
- 				self._locations_manager.profiles_complex,
- 				self._locations_manager.abs_user_config,
- 				user_config=self.local_config,
- 				strict_umatched_removal=self._unmatched_removal)
- 		return self._mask_manager_obj
- 
- 	@property
- 	def _virtuals_manager(self):
- 		if self._virtuals_manager_obj is None:
- 			self._virtuals_manager_obj = VirtualsManager(self.profiles)
- 		return self._virtuals_manager_obj
- 
- 	@property
- 	def pkeywordsdict(self):
- 		result = self._keywords_manager.pkeywordsdict.copy()
- 		for k, v in result.items():
- 			result[k] = v.copy()
- 		return result
- 
- 	@property
- 	def pmaskdict(self):
- 		return self._mask_manager._pmaskdict.copy()
- 
- 	@property
- 	def punmaskdict(self):
- 		return self._mask_manager._punmaskdict.copy()
- 
- 	@property
- 	def soname_provided(self):
- 		if self._soname_provided is None:
- 			d = stack_dictlist((grabdict(
- 				os.path.join(x, "soname.provided"), recursive=True)
- 				for x in self.profiles), incremental=True)
- 			self._soname_provided = frozenset(SonameAtom(cat, soname)
- 				for cat, sonames in d.items() for soname in sonames)
- 		return self._soname_provided
- 
- 	def expandLicenseTokens(self, tokens):
- 		""" Take a token from ACCEPT_LICENSE or package.license and expand it
- 		if it's a group token (indicated by @) or just return it if it's not a
- 		group.  If a group is negated then negate all group elements."""
- 		return self._license_manager.expandLicenseTokens(tokens)
- 
- 	def validate(self):
- 		"""Validate miscellaneous settings and display warnings if necessary.
- 		(This code was previously in the global scope of portage.py)"""
- 
- 		groups = self.get("ACCEPT_KEYWORDS", "").split()
- 		archlist = self.archlist()
- 		if not archlist:
- 			writemsg(_("--- 'profiles/arch.list' is empty or "
- 				"not available. Empty ebuild repository?\n"), noiselevel=1)
- 		else:
- 			for group in groups:
- 				if group not in archlist and \
- 					not (group.startswith("-") and group[1:] in archlist) and \
- 					group not in ("*", "~*", "**"):
- 					writemsg(_("!!! INVALID ACCEPT_KEYWORDS: %s\n") % str(group),
- 						noiselevel=-1)
- 
- 		profile_broken = False
- 
- 		# getmaskingstatus requires ARCH for ACCEPT_KEYWORDS support
- 		arch = self.get('ARCH')
- 		if not self.profile_path or not arch:
- 			profile_broken = True
- 		else:
- 			# If any one of these files exists, then
- 			# the profile is considered valid.
- 			for x in ("make.defaults", "parent",
- 				"packages", "use.force", "use.mask"):
- 				if exists_raise_eaccess(os.path.join(self.profile_path, x)):
- 					break
- 			else:
- 				profile_broken = True
- 
- 		if profile_broken and not portage._sync_mode:
- 			abs_profile_path = None
- 			for x in (PROFILE_PATH, 'etc/make.profile'):
- 				x = os.path.join(self["PORTAGE_CONFIGROOT"], x)
- 				try:
- 					os.lstat(x)
- 				except OSError:
- 					pass
- 				else:
- 					abs_profile_path = x
- 					break
- 
- 			if abs_profile_path is None:
- 				abs_profile_path = os.path.join(self["PORTAGE_CONFIGROOT"],
- 					PROFILE_PATH)
- 
- 			writemsg(_("\n\n!!! %s is not a symlink and will probably prevent most merges.\n") % abs_profile_path,
- 				noiselevel=-1)
- 			writemsg(_("!!! It should point into a profile within %s/profiles/\n") % self["PORTDIR"])
- 			writemsg(_("!!! (You can safely ignore this message when syncing. It's harmless.)\n\n\n"))
- 
- 		abs_user_virtuals = os.path.join(self["PORTAGE_CONFIGROOT"],
- 			USER_VIRTUALS_FILE)
- 		if os.path.exists(abs_user_virtuals):
- 			writemsg("\n!!! /etc/portage/virtuals is deprecated in favor of\n")
- 			writemsg("!!! /etc/portage/profile/virtuals. Please move it to\n")
- 			writemsg("!!! this new location.\n\n")
- 
- 		if not sandbox_capable and not macossandbox_capable and \
- 			("sandbox" in self.features or "usersandbox" in self.features):
- 			if self.profile_path is not None and \
- 				os.path.realpath(self.profile_path) == \
- 				os.path.realpath(os.path.join(
- 				self["PORTAGE_CONFIGROOT"], PROFILE_PATH)):
- 				# Don't show this warning when running repoman and the
- 				# sandbox feature came from a profile that doesn't belong
- 				# to the user.
- 				writemsg(colorize("BAD", _("!!! Problem with sandbox"
- 					" binary. Disabling...\n\n")), noiselevel=-1)
- 
- 		if "fakeroot" in self.features and \
- 			not fakeroot_capable:
- 			writemsg(_("!!! FEATURES=fakeroot is enabled, but the "
- 				"fakeroot binary is not installed.\n"), noiselevel=-1)
- 
- 		if "webrsync-gpg" in self.features:
- 			writemsg(_("!!! FEATURES=webrsync-gpg is deprecated, see the make.conf(5) man page.\n"),
- 				noiselevel=-1)
- 
- 		if os.getuid() == 0 and not hasattr(os, "setgroups"):
- 			warning_shown = False
- 
- 			if "userpriv" in self.features:
- 				writemsg(_("!!! FEATURES=userpriv is enabled, but "
- 					"os.setgroups is not available.\n"), noiselevel=-1)
- 				warning_shown = True
- 
- 			if "userfetch" in self.features:
- 				writemsg(_("!!! FEATURES=userfetch is enabled, but "
- 					"os.setgroups is not available.\n"), noiselevel=-1)
- 				warning_shown = True
- 
- 			if warning_shown and platform.python_implementation() == 'PyPy':
- 				writemsg(_("!!! See https://bugs.pypy.org/issue833 for details.\n"),
- 					noiselevel=-1)
- 
- 		binpkg_compression = self.get("BINPKG_COMPRESS")
- 		if binpkg_compression:
- 			try:
- 				compression = _compressors[binpkg_compression]
- 			except KeyError as e:
- 				writemsg("!!! BINPKG_COMPRESS contains invalid or "
- 					"unsupported compression method: %s" % e.args[0],
- 					noiselevel=-1)
- 			else:
- 				try:
- 					compression_binary = shlex_split(
- 						portage.util.varexpand(compression["compress"],
- 						mydict=self))[0]
- 				except IndexError as e:
- 					writemsg("!!! BINPKG_COMPRESS contains invalid or "
- 						"unsupported compression method: %s" % e.args[0],
- 						noiselevel=-1)
- 				else:
- 					if portage.process.find_binary(
- 						compression_binary) is None:
- 						missing_package = compression["package"]
- 						writemsg("!!! BINPKG_COMPRESS unsupported %s. "
- 							"Missing package: %s" %
- 							(binpkg_compression, missing_package),
- 							noiselevel=-1)
- 
- 	def load_best_module(self,property_string):
- 		best_mod = best_from_dict(property_string,self.modules,self.module_priority)
- 		mod = None
- 		try:
- 			mod = load_mod(best_mod)
- 		except ImportError:
- 			if best_mod in self._module_aliases:
- 				mod = load_mod(self._module_aliases[best_mod])
- 			elif not best_mod.startswith("cache."):
- 				raise
- 			else:
- 				best_mod = "portage." + best_mod
- 				try:
- 					mod = load_mod(best_mod)
- 				except ImportError:
- 					raise
- 		return mod
- 
- 	def lock(self):
- 		self.locked = 1
- 
- 	def unlock(self):
- 		self.locked = 0
- 
- 	def modifying(self):
- 		if self.locked:
- 			raise Exception(_("Configuration is locked."))
- 
- 	def backup_changes(self,key=None):
- 		self.modifying()
- 		if key and key in self.configdict["env"]:
- 			self.backupenv[key] = copy.deepcopy(self.configdict["env"][key])
- 		else:
- 			raise KeyError(_("No such key defined in environment: %s") % key)
- 
- 	def reset(self, keeping_pkg=0, use_cache=None):
- 		"""
- 		Restore environment from self.backupenv, call self.regenerate()
- 		@param keeping_pkg: Should we keep the setcpv() data or delete it.
- 		@type keeping_pkg: Boolean
- 		@rtype: None
- 		"""
- 
- 		if use_cache is not None:
- 			warnings.warn("The use_cache parameter for config.reset() is deprecated and without effect.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		self.modifying()
- 		self.configdict["env"].clear()
- 		self.configdict["env"].update(self.backupenv)
- 
- 		self.modifiedkeys = []
- 		if not keeping_pkg:
- 			self.mycpv = None
- 			self._setcpv_args_hash = None
- 			self.puse = ""
- 			del self._penv[:]
- 			self.configdict["pkg"].clear()
- 			self.configdict["pkginternal"].clear()
- 			self.configdict["features"]["USE"] = self._default_features_use
- 			self.configdict["repo"].clear()
- 			self.configdict["defaults"]["USE"] = \
- 				" ".join(self.make_defaults_use)
- 			self.usemask = self._use_manager.getUseMask()
- 			self.useforce = self._use_manager.getUseForce()
- 		self.regenerate()
- 
- 	class _lazy_vars:
- 
- 		__slots__ = ('built_use', 'settings', 'values')
- 
- 		def __init__(self, built_use, settings):
- 			self.built_use = built_use
- 			self.settings = settings
- 			self.values = None
- 
- 		def __getitem__(self, k):
- 			if self.values is None:
- 				self.values = self._init_values()
- 			return self.values[k]
- 
- 		def _init_values(self):
- 			values = {}
- 			settings = self.settings
- 			use = self.built_use
- 			if use is None:
- 				use = frozenset(settings['PORTAGE_USE'].split())
- 
- 			values['ACCEPT_LICENSE'] = settings._license_manager.get_prunned_accept_license( \
- 				settings.mycpv, use, settings.get('LICENSE', ''), settings.get('SLOT'), settings.get('PORTAGE_REPO_NAME'))
- 			values['PORTAGE_PROPERTIES'] = self._flatten('PROPERTIES', use, settings)
- 			values['PORTAGE_RESTRICT'] = self._flatten('RESTRICT', use, settings)
- 			return values
- 
- 		def _flatten(self, var, use, settings):
- 			try:
- 				restrict = set(use_reduce(settings.get(var, ''), uselist=use, flat=True))
- 			except InvalidDependString:
- 				restrict = set()
- 			return ' '.join(sorted(restrict))
- 
- 	class _lazy_use_expand:
- 		"""
- 		Lazily evaluate USE_EXPAND variables since they are only needed when
- 		an ebuild shell is spawned. Variable values are made consistent with
- 		the previously calculated USE settings.
- 		"""
- 
- 		def __init__(self, settings, unfiltered_use,
- 			use, usemask, iuse_effective,
- 			use_expand_split, use_expand_dict):
- 			self._settings = settings
- 			self._unfiltered_use = unfiltered_use
- 			self._use = use
- 			self._usemask = usemask
- 			self._iuse_effective = iuse_effective
- 			self._use_expand_split = use_expand_split
- 			self._use_expand_dict = use_expand_dict
- 
- 		def __getitem__(self, key):
- 			prefix = key.lower() + '_'
- 			prefix_len = len(prefix)
- 			expand_flags = set( x[prefix_len:] for x in self._use \
- 				if x[:prefix_len] == prefix )
- 			var_split = self._use_expand_dict.get(key, '').split()
- 			# Preserve the order of var_split because it can matter for things
- 			# like LINGUAS.
- 			var_split = [ x for x in var_split if x in expand_flags ]
- 			var_split.extend(expand_flags.difference(var_split))
- 			has_wildcard = '*' in expand_flags
- 			if has_wildcard:
- 				var_split = [ x for x in var_split if x != "*" ]
- 			has_iuse = set()
- 			for x in self._iuse_effective:
- 				if x[:prefix_len] == prefix:
- 					has_iuse.add(x[prefix_len:])
- 			if has_wildcard:
- 				# * means to enable everything in IUSE that's not masked
- 				if has_iuse:
- 					usemask = self._usemask
- 					for suffix in has_iuse:
- 						x = prefix + suffix
- 						if x not in usemask:
- 							if suffix not in expand_flags:
- 								var_split.append(suffix)
- 				else:
- 					# If there is a wildcard and no matching flags in IUSE then
- 					# LINGUAS should be unset so that all .mo files are
- 					# installed.
- 					var_split = []
- 			# Make the flags unique and filter them according to IUSE.
- 			# Also, continue to preserve order for things like LINGUAS
- 			# and filter any duplicates that the variable may contain.
- 			filtered_var_split = []
- 			remaining = has_iuse.intersection(var_split)
- 			for x in var_split:
- 				if x in remaining:
- 					remaining.remove(x)
- 					filtered_var_split.append(x)
- 			var_split = filtered_var_split
- 
- 			return ' '.join(var_split)
- 
- 	def _setcpv_recursion_gate(f):
- 		"""
- 		Raise AssertionError for recursive setcpv calls.
- 		"""
- 		def wrapper(self, *args, **kwargs):
- 			if hasattr(self, '_setcpv_active'):
- 				raise AssertionError('setcpv recursion detected')
- 			self._setcpv_active = True
- 			try:
- 				return f(self, *args, **kwargs)
- 			finally:
- 				del self._setcpv_active
- 		return wrapper
- 
- 	@_setcpv_recursion_gate
- 	def setcpv(self, mycpv, use_cache=None, mydb=None):
- 		"""
- 		Load a particular CPV into the config; this lets us see the
- 		default USE flags for a particular ebuild as well as the USE
- 		flags from package.use.
- 
- 		@param mycpv: A cpv to load
- 		@type mycpv: string
- 		@param mydb: a dbapi instance that supports aux_get with the IUSE key.
- 		@type mydb: dbapi or derivative.
- 		@rtype: None
- 		"""
- 
- 		if use_cache is not None:
- 			warnings.warn("The use_cache parameter for config.setcpv() is deprecated and without effect.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		self.modifying()
- 
- 		pkg = None
- 		built_use = None
- 		explicit_iuse = None
- 		if not isinstance(mycpv, str):
- 			pkg = mycpv
- 			mycpv = pkg.cpv
- 			mydb = pkg._metadata
- 			explicit_iuse = pkg.iuse.all
- 			args_hash = (mycpv, id(pkg))
- 			if pkg.built:
- 				built_use = pkg.use.enabled
- 		else:
- 			args_hash = (mycpv, id(mydb))
- 
- 		if args_hash == self._setcpv_args_hash:
- 			return
- 		self._setcpv_args_hash = args_hash
- 
- 		has_changed = False
- 		self.mycpv = mycpv
- 		cat, pf = catsplit(mycpv)
- 		cp = cpv_getkey(mycpv)
- 		cpv_slot = self.mycpv
- 		pkginternaluse = ""
- 		pkginternaluse_list = []
- 		feature_use = []
- 		iuse = ""
- 		pkg_configdict = self.configdict["pkg"]
- 		previous_iuse = pkg_configdict.get("IUSE")
- 		previous_iuse_effective = pkg_configdict.get("IUSE_EFFECTIVE")
- 		previous_features = pkg_configdict.get("FEATURES")
- 		previous_penv = self._penv
- 
- 		aux_keys = self._setcpv_aux_keys
- 
- 		# Discard any existing metadata and package.env settings from
- 		# the previous package instance.
- 		pkg_configdict.clear()
- 
- 		pkg_configdict["CATEGORY"] = cat
- 		pkg_configdict["PF"] = pf
- 		repository = None
- 		eapi = None
- 		if mydb:
- 			if not hasattr(mydb, "aux_get"):
- 				for k in aux_keys:
- 					if k in mydb:
- 						# Make these lazy, since __getitem__ triggers
- 						# evaluation of USE conditionals which can't
- 						# occur until PORTAGE_USE is calculated below.
- 						pkg_configdict.addLazySingleton(k,
- 							mydb.__getitem__, k)
- 			else:
- 				# When calling dbapi.aux_get(), grab USE for built/installed
- 				# packages since we want to save it as PORTAGE_BUILT_USE for
- 				# evaluating conditional USE deps in atoms passed via IPC to
- 				# helpers like has_version and best_version.
- 				aux_keys = set(aux_keys)
- 				if hasattr(mydb, '_aux_cache_keys'):
- 					aux_keys = aux_keys.intersection(mydb._aux_cache_keys)
- 				aux_keys.add('USE')
- 				aux_keys = list(aux_keys)
- 				for k, v in zip(aux_keys, mydb.aux_get(self.mycpv, aux_keys)):
- 					pkg_configdict[k] = v
- 				built_use = frozenset(pkg_configdict.pop('USE').split())
- 				if not built_use:
- 					# Empty USE means this dbapi instance does not contain
- 					# built packages.
- 					built_use = None
- 			eapi = pkg_configdict['EAPI']
- 
- 			repository = pkg_configdict.pop("repository", None)
- 			if repository is not None:
- 				pkg_configdict["PORTAGE_REPO_NAME"] = repository
- 			iuse = pkg_configdict["IUSE"]
- 			if pkg is None:
- 				self.mycpv = _pkg_str(self.mycpv, metadata=pkg_configdict,
- 					settings=self)
- 				cpv_slot = self.mycpv
- 			else:
- 				cpv_slot = pkg
- 			for x in iuse.split():
- 				if x.startswith("+"):
- 					pkginternaluse_list.append(x[1:])
- 				elif x.startswith("-"):
- 					pkginternaluse_list.append(x)
- 			pkginternaluse = " ".join(pkginternaluse_list)
- 
- 		eapi_attrs = _get_eapi_attrs(eapi)
- 
- 		if pkginternaluse != self.configdict["pkginternal"].get("USE", ""):
- 			self.configdict["pkginternal"]["USE"] = pkginternaluse
- 			has_changed = True
- 
- 		repo_env = []
- 		if repository and repository != Package.UNKNOWN_REPO:
- 			repos = []
- 			try:
- 				repos.extend(repo.name for repo in
- 					self.repositories[repository].masters)
- 			except KeyError:
- 				pass
- 			repos.append(repository)
- 			for repo in repos:
- 				d = self._repo_make_defaults.get(repo)
- 				if d is None:
- 					d = {}
- 				else:
- 					# make a copy, since we might modify it with
- 					# package.use settings
- 					d = d.copy()
- 				cpdict = self._use_manager._repo_puse_dict.get(repo, {}).get(cp)
- 				if cpdict:
- 					repo_puse = ordered_by_atom_specificity(cpdict, cpv_slot)
- 					if repo_puse:
- 						for x in repo_puse:
- 							d["USE"] = d.get("USE", "") + " " + " ".join(x)
- 				if d:
- 					repo_env.append(d)
- 
- 		if repo_env or self.configdict["repo"]:
- 			self.configdict["repo"].clear()
- 			self.configdict["repo"].update(stack_dicts(repo_env,
- 				incrementals=self.incrementals))
- 			has_changed = True
- 
- 		defaults = []
- 		for i, pkgprofileuse_dict in enumerate(self._use_manager._pkgprofileuse):
- 			if self.make_defaults_use[i]:
- 				defaults.append(self.make_defaults_use[i])
- 			cpdict = pkgprofileuse_dict.get(cp)
- 			if cpdict:
- 				pkg_defaults = ordered_by_atom_specificity(cpdict, cpv_slot)
- 				if pkg_defaults:
- 					defaults.extend(pkg_defaults)
- 		defaults = " ".join(defaults)
- 		if defaults != self.configdict["defaults"].get("USE",""):
- 			self.configdict["defaults"]["USE"] = defaults
- 			has_changed = True
- 
- 		useforce = self._use_manager.getUseForce(cpv_slot)
- 		if useforce != self.useforce:
- 			self.useforce = useforce
- 			has_changed = True
- 
- 		usemask = self._use_manager.getUseMask(cpv_slot)
- 		if usemask != self.usemask:
- 			self.usemask = usemask
- 			has_changed = True
- 
- 		oldpuse = self.puse
- 		self.puse = self._use_manager.getPUSE(cpv_slot)
- 		if oldpuse != self.puse:
- 			has_changed = True
- 		self.configdict["pkg"]["PKGUSE"] = self.puse[:] # For saving to PUSE file
- 		self.configdict["pkg"]["USE"]    = self.puse[:] # this gets appended to USE
- 
- 		if previous_features:
- 			# The package from the previous setcpv call had package.env
- 			# settings which modified FEATURES. Therefore, trigger a
- 			# regenerate() call in order to ensure that self.features
- 			# is accurate.
- 			has_changed = True
- 			# Prevent stale features USE from corrupting the evaluation
- 			# of USE conditional RESTRICT.
- 			self.configdict["features"]["USE"] = self._default_features_use
- 
- 		self._penv = []
- 		cpdict = self._penvdict.get(cp)
- 		if cpdict:
- 			penv_matches = ordered_by_atom_specificity(cpdict, cpv_slot)
- 			if penv_matches:
- 				for x in penv_matches:
- 					self._penv.extend(x)
- 
- 		bashrc_files = []
- 
- 		for profile, profile_bashrc in zip(self._locations_manager.profiles_complex, self._profile_bashrc):
- 			if profile_bashrc:
- 				bashrc_files.append(os.path.join(profile.location, 'profile.bashrc'))
- 			if profile in self._pbashrcdict:
- 				cpdict = self._pbashrcdict[profile].get(cp)
- 				if cpdict:
- 					bashrc_matches = \
- 						ordered_by_atom_specificity(cpdict, cpv_slot)
- 					for x in bashrc_matches:
- 						bashrc_files.extend(x)
- 
- 		self._pbashrc = tuple(bashrc_files)
- 
- 		protected_pkg_keys = set(pkg_configdict)
- 		protected_pkg_keys.discard('USE')
- 
- 		# If there are _any_ package.env settings for this package
- 		# then it automatically triggers config.reset(), in order
- 		# to account for possible incremental interaction between
- 		# package.use, package.env, and overrides from the calling
- 		# environment (configdict['env']).
- 		if self._penv:
- 			has_changed = True
- 			# USE is special because package.use settings override
- 			# it. Discard any package.use settings here and they'll
- 			# be added back later.
- 			pkg_configdict.pop('USE', None)
- 			self._grab_pkg_env(self._penv, pkg_configdict,
- 				protected_keys=protected_pkg_keys)
- 
- 			# Now add package.use settings, which override USE from
- 			# package.env
- 			if self.puse:
- 				if 'USE' in pkg_configdict:
- 					pkg_configdict['USE'] = \
- 						pkg_configdict['USE'] + " " + self.puse
- 				else:
- 					pkg_configdict['USE'] = self.puse
- 
- 		elif previous_penv:
- 			has_changed = True
- 
- 		if not (previous_iuse == iuse and
- 			previous_iuse_effective is not None == eapi_attrs.iuse_effective):
- 			has_changed = True
- 
- 		if has_changed:
- 			# This can modify self.features due to package.env settings.
- 			self.reset(keeping_pkg=1)
- 
- 		if "test" in self.features:
- 			# This is independent of IUSE and RESTRICT, so that the same
- 			# value can be shared between packages with different settings,
- 			# which is important when evaluating USE conditional RESTRICT.
- 			feature_use.append("test")
- 
- 		feature_use = " ".join(feature_use)
- 		if feature_use != self.configdict["features"]["USE"]:
- 			# Regenerate USE for evaluation of conditional RESTRICT.
- 			self.configdict["features"]["USE"] = feature_use
- 			self.reset(keeping_pkg=1)
- 			has_changed = True
- 
- 		if explicit_iuse is None:
- 			explicit_iuse = frozenset(x.lstrip("+-") for x in iuse.split())
- 		if eapi_attrs.iuse_effective:
- 			iuse_implicit_match = self._iuse_effective_match
- 		else:
- 			iuse_implicit_match = self._iuse_implicit_match
- 
- 		if pkg is None:
- 			raw_properties = pkg_configdict.get("PROPERTIES")
- 			raw_restrict = pkg_configdict.get("RESTRICT")
- 		else:
- 			raw_properties = pkg._raw_metadata["PROPERTIES"]
- 			raw_restrict = pkg._raw_metadata["RESTRICT"]
- 
- 		restrict_test = False
- 		if raw_restrict:
- 			try:
- 				if built_use is not None:
- 					properties = use_reduce(raw_properties,
- 						uselist=built_use, flat=True)
- 					restrict = use_reduce(raw_restrict,
- 						uselist=built_use, flat=True)
- 				else:
- 					properties = use_reduce(raw_properties,
- 						uselist=frozenset(x for x in self['USE'].split()
- 						if x in explicit_iuse or iuse_implicit_match(x)),
- 						flat=True)
- 					restrict = use_reduce(raw_restrict,
- 						uselist=frozenset(x for x in self['USE'].split()
- 						if x in explicit_iuse or iuse_implicit_match(x)),
- 						flat=True)
- 			except PortageException:
- 				pass
- 			else:
- 				allow_test = self.get('ALLOW_TEST', '').split()
- 				restrict_test = (
- 					"test" in restrict and not "all" in allow_test and
- 					not ("test_network" in properties and "network" in allow_test))
- 
- 		if restrict_test and "test" in self.features:
- 			# Handle it like IUSE="-test", since features USE is
- 			# independent of RESTRICT.
- 			pkginternaluse_list.append("-test")
- 			pkginternaluse = " ".join(pkginternaluse_list)
- 			self.configdict["pkginternal"]["USE"] = pkginternaluse
- 			# TODO: can we avoid that?
- 			self.reset(keeping_pkg=1)
- 			has_changed = True
- 
- 		env_configdict = self.configdict['env']
- 
- 		# Ensure that "pkg" values are always preferred over "env" values.
- 		# This must occur _after_ the above reset() call, since reset()
- 		# copies values from self.backupenv.
- 		for k in protected_pkg_keys:
- 			env_configdict.pop(k, None)
- 
- 		lazy_vars = self._lazy_vars(built_use, self)
- 		env_configdict.addLazySingleton('ACCEPT_LICENSE',
- 			lazy_vars.__getitem__, 'ACCEPT_LICENSE')
- 		env_configdict.addLazySingleton('PORTAGE_PROPERTIES',
- 			lazy_vars.__getitem__, 'PORTAGE_PROPERTIES')
- 		env_configdict.addLazySingleton('PORTAGE_RESTRICT',
- 			lazy_vars.__getitem__, 'PORTAGE_RESTRICT')
- 
- 		if built_use is not None:
- 			pkg_configdict['PORTAGE_BUILT_USE'] = ' '.join(built_use)
- 
- 		# If reset() has not been called, it's safe to return
- 		# early if IUSE has not changed.
- 		if not has_changed:
- 			return
- 
- 		# Filter out USE flags that aren't part of IUSE. This has to
- 		# be done for every setcpv() call since practically every
- 		# package has different IUSE.
- 		use = set(self["USE"].split())
- 		unfiltered_use = frozenset(use)
- 
- 		if eapi_attrs.iuse_effective:
- 			portage_iuse = set(self._iuse_effective)
- 			portage_iuse.update(explicit_iuse)
- 			if built_use is not None:
- 				# When the binary package was built, the profile may have
- 				# had different IUSE_IMPLICIT settings, so any member of
- 				# the built USE setting is considered to be a member of
- 				# IUSE_EFFECTIVE (see bug 640318).
- 				portage_iuse.update(built_use)
- 			self.configdict["pkg"]["IUSE_EFFECTIVE"] = \
- 				" ".join(sorted(portage_iuse))
- 
- 			self.configdict["env"]["BASH_FUNC____in_portage_iuse%%"] = (
- 				"() { "
- 				"if [[ ${#___PORTAGE_IUSE_HASH[@]} -lt 1 ]]; then "
- 				"  declare -gA ___PORTAGE_IUSE_HASH=(%s); "
- 				"fi; "
- 				"[[ -n ${___PORTAGE_IUSE_HASH[$1]} ]]; "
- 				"}" ) % " ".join('["%s"]=1' % x for x in portage_iuse)
- 		else:
- 			portage_iuse = self._get_implicit_iuse()
- 			portage_iuse.update(explicit_iuse)
- 
- 			# _get_implicit_iuse() returns a regular expression,
- 			# so we can't use the (faster) map.  Fall back to
- 			# implementing ___in_portage_iuse() the older/slower way.
- 
- 			# PORTAGE_IUSE is not always needed so it's lazily evaluated.
- 			self.configdict["env"].addLazySingleton(
- 				"PORTAGE_IUSE", _lazy_iuse_regex, portage_iuse)
- 			self.configdict["env"]["BASH_FUNC____in_portage_iuse%%"] = \
- 				"() { [[ $1 =~ ${PORTAGE_IUSE} ]]; }"
- 
- 		ebuild_force_test = not restrict_test and \
- 			self.get("EBUILD_FORCE_TEST") == "1"
- 
- 		if "test" in explicit_iuse or iuse_implicit_match("test"):
- 			if "test" in self.features:
- 				if ebuild_force_test and "test" in self.usemask:
- 					self.usemask = \
- 						frozenset(x for x in self.usemask if x != "test")
- 			if restrict_test or \
- 				("test" in self.usemask and not ebuild_force_test):
- 				# "test" is in IUSE and USE=test is masked, so execution
- 				# of src_test() probably is not reliable. Therefore,
- 				# temporarily disable FEATURES=test just for this package.
- 				self["FEATURES"] = " ".join(x for x in self.features \
- 					if x != "test")
- 
- 		# Allow _* flags from USE_EXPAND wildcards to pass through here.
- 		use.difference_update([x for x in use \
- 			if (x not in explicit_iuse and \
- 			not iuse_implicit_match(x)) and x[-2:] != '_*'])
- 
- 		# Use the calculated USE flags to regenerate the USE_EXPAND flags so
- 		# that they are consistent. For optimal performance, use slice
- 		# comparison instead of startswith().
- 		use_expand_split = set(x.lower() for \
- 			x in self.get('USE_EXPAND', '').split())
- 		lazy_use_expand = self._lazy_use_expand(
- 			self, unfiltered_use, use, self.usemask,
- 			portage_iuse, use_expand_split, self._use_expand_dict)
- 
- 		use_expand_iuses = dict((k, set()) for k in use_expand_split)
- 		for x in portage_iuse:
- 			x_split = x.split('_')
- 			if len(x_split) == 1:
- 				continue
- 			for i in range(len(x_split) - 1):
- 				k = '_'.join(x_split[:i+1])
- 				if k in use_expand_split:
- 					use_expand_iuses[k].add(x)
- 					break
- 
- 		for k, use_expand_iuse in use_expand_iuses.items():
- 			if k + '_*' in use:
- 				use.update( x for x in use_expand_iuse if x not in usemask )
- 			k = k.upper()
- 			self.configdict['env'].addLazySingleton(k,
- 				lazy_use_expand.__getitem__, k)
- 
- 		for k in self.get("USE_EXPAND_UNPREFIXED", "").split():
- 			var_split = self.get(k, '').split()
- 			var_split = [ x for x in var_split if x in use ]
- 			if var_split:
- 				self.configlist[-1][k] = ' '.join(var_split)
- 			elif k in self:
- 				self.configlist[-1][k] = ''
- 
- 		# Filtered for the ebuild environment. Store this in a separate
- 		# attribute since we still want to be able to see global USE
- 		# settings for things like emerge --info.
- 
- 		self.configdict["env"]["PORTAGE_USE"] = \
- 			" ".join(sorted(x for x in use if x[-2:] != '_*'))
- 
- 		# Clear the eapi cache here rather than in the constructor, since
- 		# setcpv triggers lazy instantiation of things like _use_manager.
- 		_eapi_cache.clear()
- 
- 	def _grab_pkg_env(self, penv, container, protected_keys=None):
- 		if protected_keys is None:
- 			protected_keys = ()
- 		abs_user_config = os.path.join(
- 			self['PORTAGE_CONFIGROOT'], USER_CONFIG_PATH)
- 		non_user_variables = self._non_user_variables
- 		# Make a copy since we don't want per-package settings
- 		# to pollute the global expand_map.
- 		expand_map = self._expand_map.copy()
- 		incrementals = self.incrementals
- 		for envname in penv:
- 			penvfile = os.path.join(abs_user_config, "env", envname)
- 			penvconfig = getconfig(penvfile, tolerant=self._tolerant,
- 				allow_sourcing=True, expand=expand_map)
- 			if penvconfig is None:
- 				writemsg("!!! %s references non-existent file: %s\n" % \
- 					(os.path.join(abs_user_config, 'package.env'), penvfile),
- 					noiselevel=-1)
- 			else:
- 				for k, v in penvconfig.items():
- 					if k in protected_keys or \
- 						k in non_user_variables:
- 						writemsg("!!! Illegal variable " + \
- 							"'%s' assigned in '%s'\n" % \
- 							(k, penvfile), noiselevel=-1)
- 					elif k in incrementals:
- 						if k in container:
- 							container[k] = container[k] + " " + v
- 						else:
- 							container[k] = v
- 					else:
- 						container[k] = v
- 
- 	def _iuse_effective_match(self, flag):
- 		return flag in self._iuse_effective
- 
- 	def _calc_iuse_effective(self):
- 		"""
- 		Beginning with EAPI 5, IUSE_EFFECTIVE is defined by PMS.
- 		"""
- 		iuse_effective = []
- 		iuse_effective.extend(self.get("IUSE_IMPLICIT", "").split())
- 
- 		# USE_EXPAND_IMPLICIT should contain things like ARCH, ELIBC,
- 		# KERNEL, and USERLAND.
- 		use_expand_implicit = frozenset(
- 			self.get("USE_EXPAND_IMPLICIT", "").split())
- 
- 		# USE_EXPAND_UNPREFIXED should contain at least ARCH, and
- 		# USE_EXPAND_VALUES_ARCH should contain all valid ARCH flags.
- 		for v in self.get("USE_EXPAND_UNPREFIXED", "").split():
- 			if v not in use_expand_implicit:
- 				continue
- 			iuse_effective.extend(
- 				self.get("USE_EXPAND_VALUES_" + v, "").split())
- 
- 		use_expand = frozenset(self.get("USE_EXPAND", "").split())
- 		for v in use_expand_implicit:
- 			if v not in use_expand:
- 				continue
- 			lower_v = v.lower()
- 			for x in self.get("USE_EXPAND_VALUES_" + v, "").split():
- 				iuse_effective.append(lower_v + "_" + x)
- 
- 		return frozenset(iuse_effective)
- 
- 	def _get_implicit_iuse(self):
- 		"""
- 		Prior to EAPI 5, these flags are considered to
- 		be implicit members of IUSE:
- 		  * Flags derived from ARCH
- 		  * Flags derived from USE_EXPAND_HIDDEN variables
- 		  * Masked flags, such as those from {,package}use.mask
- 		  * Forced flags, such as those from {,package}use.force
- 		  * build and bootstrap flags used by bootstrap.sh
- 		"""
- 		iuse_implicit = set()
- 		# Flags derived from ARCH.
- 		arch = self.configdict["defaults"].get("ARCH")
- 		if arch:
- 			iuse_implicit.add(arch)
- 		iuse_implicit.update(self.get("PORTAGE_ARCHLIST", "").split())
- 
- 		# Flags derived from USE_EXPAND_HIDDEN variables
- 		# such as ELIBC, KERNEL, and USERLAND.
- 		use_expand_hidden = self.get("USE_EXPAND_HIDDEN", "").split()
- 		for x in use_expand_hidden:
- 			iuse_implicit.add(x.lower() + "_.*")
- 
- 		# Flags that have been masked or forced.
- 		iuse_implicit.update(self.usemask)
- 		iuse_implicit.update(self.useforce)
- 
- 		# build and bootstrap flags used by bootstrap.sh
- 		iuse_implicit.add("build")
- 		iuse_implicit.add("bootstrap")
- 
- 		return iuse_implicit
- 
- 	def _getUseMask(self, pkg, stable=None):
- 		return self._use_manager.getUseMask(pkg, stable=stable)
- 
- 	def _getUseForce(self, pkg, stable=None):
- 		return self._use_manager.getUseForce(pkg, stable=stable)
- 
- 	def _getMaskAtom(self, cpv, metadata):
- 		"""
- 		Take a package and return a matching package.mask atom, or None if no
- 		such atom exists or it has been cancelled by package.unmask.
- 
- 		@param cpv: The package name
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: String
- 		@return: A matching atom string or None if one is not found.
- 		"""
- 		return self._mask_manager.getMaskAtom(cpv, metadata["SLOT"], metadata.get('repository'))
- 
- 	def _getRawMaskAtom(self, cpv, metadata):
- 		"""
- 		Take a package and return a matching package.mask atom, or None if no
- 		such atom exists, ignoring any package.unmask entries.
- 
- 		@param cpv: The package name
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: String
- 		@return: A matching atom string or None if one is not found.
- 		"""
- 		return self._mask_manager.getRawMaskAtom(cpv, metadata["SLOT"], metadata.get('repository'))
- 
- 
- 	def _getProfileMaskAtom(self, cpv, metadata):
- 		"""
- 		Take a package and return a matching profile atom, or None if no
- 		such atom exists. Note that a profile atom may or may not have a "*"
- 		prefix.
- 
- 		@param cpv: The package name
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: String
- 		@return: A matching profile atom string or None if one is not found.
- 		"""
- 
- 		warnings.warn("The config._getProfileMaskAtom() method is deprecated.",
- 			DeprecationWarning, stacklevel=2)
- 
- 		cp = cpv_getkey(cpv)
- 		profile_atoms = self.prevmaskdict.get(cp)
- 		if profile_atoms:
- 			pkg = "".join((cpv, _slot_separator, metadata["SLOT"]))
- 			repo = metadata.get("repository")
- 			if repo and repo != Package.UNKNOWN_REPO:
- 				pkg = "".join((pkg, _repo_separator, repo))
- 			pkg_list = [pkg]
- 			for x in profile_atoms:
- 				if match_from_list(x, pkg_list):
- 					continue
- 				return x
- 		return None
- 
- 	def _isStable(self, pkg):
- 		return self._keywords_manager.isStable(pkg,
- 			self.get("ACCEPT_KEYWORDS", ""),
- 			self.configdict["backupenv"].get("ACCEPT_KEYWORDS", ""))
- 
- 	def _getKeywords(self, cpv, metadata):
- 		return self._keywords_manager.getKeywords(cpv, metadata["SLOT"], \
- 			metadata.get("KEYWORDS", ""), metadata.get("repository"))
- 
- 	def _getMissingKeywords(self, cpv, metadata):
- 		"""
- 		Take a package and return a list of any KEYWORDS that the user may
- 		need to accept for the given package. If the KEYWORDS are empty
- 		and the ** keyword has not been accepted, the returned list will
- 		contain ** alone (in order to distinguish from the case of "none
- 		missing").
- 
- 		@param cpv: The package name (for package.keywords support)
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: List
- 		@return: A list of KEYWORDS that have not been accepted.
- 		"""
- 
- 		# Hack: Need to check the env directly here, as otherwise stacking
- 		# doesn't work properly since negative values are lost in the config
- 		# object (bug #139600)
- 		backuped_accept_keywords = self.configdict["backupenv"].get("ACCEPT_KEYWORDS", "")
- 		global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
- 
- 		return self._keywords_manager.getMissingKeywords(cpv, metadata["SLOT"], \
- 			metadata.get("KEYWORDS", ""), metadata.get('repository'), \
- 			global_accept_keywords, backuped_accept_keywords)
- 
- 	def _getRawMissingKeywords(self, cpv, metadata):
- 		"""
- 		Take a package and return a list of any KEYWORDS that the user may
- 		need to accept for the given package. If the KEYWORDS are empty,
- 		the returned list will contain ** alone (in order to distinguish
- 		from the case of "none missing").  This DOES NOT apply any user config
- 		package.accept_keywords acceptance.
- 
- 		@param cpv: The package name (for package.keywords support)
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: List
- 		@return: lists of KEYWORDS that have not been accepted
- 		and the keywords it looked for.
- 		"""
- 		return self._keywords_manager.getRawMissingKeywords(cpv, metadata["SLOT"], \
- 			metadata.get("KEYWORDS", ""), metadata.get('repository'), \
- 			self.get("ACCEPT_KEYWORDS", ""))
- 
- 	def _getPKeywords(self, cpv, metadata):
- 		global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
- 
- 		return self._keywords_manager.getPKeywords(cpv, metadata["SLOT"], \
- 			metadata.get('repository'), global_accept_keywords)
- 
- 	def _getMissingLicenses(self, cpv, metadata):
- 		"""
- 		Take a LICENSE string and return a list of any licenses that the user
- 		may need to accept for the given package.  The returned list will not
- 		contain any licenses that have already been accepted.  This method
- 		can throw an InvalidDependString exception.
- 
- 		@param cpv: The package name (for package.license support)
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: List
- 		@return: A list of licenses that have not been accepted.
- 		"""
- 		return self._license_manager.getMissingLicenses( \
- 			cpv, metadata["USE"], metadata["LICENSE"], metadata["SLOT"], metadata.get('repository'))
- 
- 	def _getMissingProperties(self, cpv, metadata):
- 		"""
- 		Take a PROPERTIES string and return a list of any properties the user
- 		may need to accept for the given package.  The returned list will not
- 		contain any properties that have already been accepted.  This method
- 		can throw an InvalidDependString exception.
- 
- 		@param cpv: The package name (for package.properties support)
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: List
- 		@return: A list of properties that have not been accepted.
- 		"""
- 		accept_properties = self._accept_properties
- 		try:
- 			cpv.slot
- 		except AttributeError:
- 			cpv = _pkg_str(cpv, metadata=metadata, settings=self)
- 		cp = cpv_getkey(cpv)
- 		cpdict = self._ppropertiesdict.get(cp)
- 		if cpdict:
- 			pproperties_list = ordered_by_atom_specificity(cpdict, cpv)
- 			if pproperties_list:
- 				accept_properties = list(self._accept_properties)
- 				for x in pproperties_list:
- 					accept_properties.extend(x)
- 
- 		properties_str = metadata.get("PROPERTIES", "")
- 		properties = set(use_reduce(properties_str, matchall=1, flat=True))
- 
- 		acceptable_properties = set()
- 		for x in accept_properties:
- 			if x == '*':
- 				acceptable_properties.update(properties)
- 			elif x == '-*':
- 				acceptable_properties.clear()
- 			elif x[:1] == '-':
- 				acceptable_properties.discard(x[1:])
- 			else:
- 				acceptable_properties.add(x)
- 
- 		if "?" in properties_str:
- 			use = metadata["USE"].split()
- 		else:
- 			use = []
- 
- 		return [x for x in use_reduce(properties_str, uselist=use, flat=True)
- 			if x not in acceptable_properties]
- 
- 	def _getMissingRestrict(self, cpv, metadata):
- 		"""
- 		Take a RESTRICT string and return a list of any tokens the user
- 		may need to accept for the given package.  The returned list will not
- 		contain any tokens that have already been accepted.  This method
- 		can throw an InvalidDependString exception.
- 
- 		@param cpv: The package name (for package.accept_restrict support)
- 		@type cpv: String
- 		@param metadata: A dictionary of raw package metadata
- 		@type metadata: dict
- 		@rtype: List
- 		@return: A list of tokens that have not been accepted.
- 		"""
- 		accept_restrict = self._accept_restrict
- 		try:
- 			cpv.slot
- 		except AttributeError:
- 			cpv = _pkg_str(cpv, metadata=metadata, settings=self)
- 		cp = cpv_getkey(cpv)
- 		cpdict = self._paccept_restrict.get(cp)
- 		if cpdict:
- 			paccept_restrict_list = ordered_by_atom_specificity(cpdict, cpv)
- 			if paccept_restrict_list:
- 				accept_restrict = list(self._accept_restrict)
- 				for x in paccept_restrict_list:
- 					accept_restrict.extend(x)
- 
- 		restrict_str = metadata.get("RESTRICT", "")
- 		all_restricts = set(use_reduce(restrict_str, matchall=1, flat=True))
- 
- 		acceptable_restricts = set()
- 		for x in accept_restrict:
- 			if x == '*':
- 				acceptable_restricts.update(all_restricts)
- 			elif x == '-*':
- 				acceptable_restricts.clear()
- 			elif x[:1] == '-':
- 				acceptable_restricts.discard(x[1:])
- 			else:
- 				acceptable_restricts.add(x)
- 
- 		if "?" in restrict_str:
- 			use = metadata["USE"].split()
- 		else:
- 			use = []
- 
- 		return [x for x in use_reduce(restrict_str, uselist=use, flat=True)
- 			if x not in acceptable_restricts]
- 
- 	def _accept_chost(self, cpv, metadata):
- 		"""
- 		@return True if pkg CHOST is accepted, False otherwise.
- 		"""
- 		if self._accept_chost_re is None:
- 			accept_chost = self.get("ACCEPT_CHOSTS", "").split()
- 			if not accept_chost:
- 				chost = self.get("CHOST")
- 				if chost:
- 					accept_chost.append(chost)
- 			if not accept_chost:
- 				self._accept_chost_re = re.compile(".*")
- 			elif len(accept_chost) == 1:
- 				try:
- 					self._accept_chost_re = re.compile(r'^%s$' % accept_chost[0])
- 				except re.error as e:
- 					writemsg(_("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n") % \
- 						(accept_chost[0], e), noiselevel=-1)
- 					self._accept_chost_re = re.compile("^$")
- 			else:
- 				try:
- 					self._accept_chost_re = re.compile(
- 						r'^(%s)$' % "|".join(accept_chost))
- 				except re.error as e:
- 					writemsg(_("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n") % \
- 						(" ".join(accept_chost), e), noiselevel=-1)
- 					self._accept_chost_re = re.compile("^$")
- 
- 		pkg_chost = metadata.get('CHOST', '')
- 		return not pkg_chost or \
- 			self._accept_chost_re.match(pkg_chost) is not None
- 
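The check above collapses ACCEPT_CHOSTS into a single anchored regular expression (one entry yields ^entry$, several yield ^(a|b|...)$). A small sketch of that collapse with hypothetical CHOST values, not taken from this commit:

    import re

    accept_chost = ["x86_64-pc-linux-gnu", "i686-pc-linux-gnu"]
    accept_chost_re = re.compile(r"^(%s)$" % "|".join(accept_chost))

    print(bool(accept_chost_re.match("i686-pc-linux-gnu")))          # True
    print(bool(accept_chost_re.match("aarch64-unknown-linux-gnu")))  # False
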
- 	def setinst(self, mycpv, mydbapi):
- 		"""This used to update the preferences for old-style virtuals.
- 		It is a no-op now."""
- 		pass
- 
- 	def reload(self):
- 		"""Reload things like /etc/profile.env that can change during runtime."""
- 		env_d_filename = os.path.join(self["EROOT"], "etc", "profile.env")
- 		self.configdict["env.d"].clear()
- 		env_d = getconfig(env_d_filename,
- 			tolerant=self._tolerant, expand=False)
- 		if env_d:
- 			# env_d will be None if profile.env doesn't exist.
- 			for k in self._env_d_blacklist:
- 				env_d.pop(k, None)
- 			self.configdict["env.d"].update(env_d)
- 
- 	def regenerate(self, useonly=0, use_cache=None):
- 		"""
- 		Regenerate settings
- 		This involves regenerating valid USE flags, re-expanding USE_EXPAND flags,
- 		re-stacking USE flags (-flag and -*), as well as any other INCREMENTAL
- 		variables.  This also updates the env.d configdict; useful in case an ebuild
- 		changes the environment.
- 
- 		If FEATURES has already been stacked, it is not stacked twice.
- 
- 		@param useonly: Only regenerate USE flags (not any other incrementals)
- 		@type useonly: Boolean
- 		@rtype: None
- 		"""
- 
- 		if use_cache is not None:
- 			warnings.warn("The use_cache parameter for config.regenerate() is deprecated and without effect.",
- 				DeprecationWarning, stacklevel=2)
- 
- 		self.modifying()
- 
- 		if useonly:
- 			myincrementals=["USE"]
- 		else:
- 			myincrementals = self.incrementals
- 		myincrementals = set(myincrementals)
- 
- 		# Process USE last because it depends on USE_EXPAND which is also
- 		# an incremental!
- 		myincrementals.discard("USE")
- 
- 		mydbs = self.configlist[:-1]
- 		mydbs.append(self.backupenv)
- 
- 		# ACCEPT_LICENSE is a lazily evaluated incremental, so that * can be
- 		# used to match all licenses without ever having to explicitly expand
- 		# it to all licenses.
- 		if self.local_config:
- 			mysplit = []
- 			for curdb in mydbs:
- 				mysplit.extend(curdb.get('ACCEPT_LICENSE', '').split())
- 			mysplit = prune_incremental(mysplit)
- 			accept_license_str = ' '.join(mysplit) or '* -@EULA'
- 			self.configlist[-1]['ACCEPT_LICENSE'] = accept_license_str
- 			self._license_manager.set_accept_license_str(accept_license_str)
- 		else:
- 			# repoman will accept any license
- 			self._license_manager.set_accept_license_str("*")
- 
- 		# ACCEPT_PROPERTIES works like ACCEPT_LICENSE, without groups
- 		if self.local_config:
- 			mysplit = []
- 			for curdb in mydbs:
- 				mysplit.extend(curdb.get('ACCEPT_PROPERTIES', '').split())
- 			mysplit = prune_incremental(mysplit)
- 			self.configlist[-1]['ACCEPT_PROPERTIES'] = ' '.join(mysplit)
- 			if tuple(mysplit) != self._accept_properties:
- 				self._accept_properties = tuple(mysplit)
- 		else:
- 			# repoman will accept any property
- 			self._accept_properties = ('*',)
- 
- 		if self.local_config:
- 			mysplit = []
- 			for curdb in mydbs:
- 				mysplit.extend(curdb.get('ACCEPT_RESTRICT', '').split())
- 			mysplit = prune_incremental(mysplit)
- 			self.configlist[-1]['ACCEPT_RESTRICT'] = ' '.join(mysplit)
- 			if tuple(mysplit) != self._accept_restrict:
- 				self._accept_restrict = tuple(mysplit)
- 		else:
- 			# repoman will accept any RESTRICT token
- 			self._accept_restrict = ('*',)
- 
- 		increment_lists = {}
- 		for k in myincrementals:
- 			incremental_list = []
- 			increment_lists[k] = incremental_list
- 			for curdb in mydbs:
- 				v = curdb.get(k)
- 				if v is not None:
- 					incremental_list.append(v.split())
- 
- 		if 'FEATURES' in increment_lists:
- 			increment_lists['FEATURES'].append(self._features_overrides)
- 
- 		myflags = set()
- 		for mykey, incremental_list in increment_lists.items():
- 
- 			myflags.clear()
- 			for mysplit in incremental_list:
- 
- 				for x in mysplit:
- 					if x=="-*":
- 						# "-*" is a special "minus" var that means "unset all settings".
- 						# so USE="-* gnome" will have *just* gnome enabled.
- 						myflags.clear()
- 						continue
- 
- 					if x[0]=="+":
- 						# Not legal. People assume too much. Complain.
- 						writemsg(colorize("BAD",
- 							_("%s values should not start with a '+': %s") % (mykey,x)) \
- 							+ "\n", noiselevel=-1)
- 						x=x[1:]
- 						if not x:
- 							continue
- 
- 					if x[0] == "-":
- 						myflags.discard(x[1:])
- 						continue
- 
- 					# We got here, so add it now.
- 					myflags.add(x)
- 
- 			#store setting in last element of configlist, the original environment:
- 			if myflags or mykey in self:
- 				self.configlist[-1][mykey] = " ".join(sorted(myflags))
- 
- 		# Do the USE calculation last because it depends on USE_EXPAND.
- 		use_expand = self.get("USE_EXPAND", "").split()
- 		use_expand_dict = self._use_expand_dict
- 		use_expand_dict.clear()
- 		for k in use_expand:
- 			v = self.get(k)
- 			if v is not None:
- 				use_expand_dict[k] = v
- 
- 		use_expand_unprefixed = self.get("USE_EXPAND_UNPREFIXED", "").split()
- 
- 		# In order to best accommodate the long-standing practice of
- 		# setting default USE_EXPAND variables in the profile's
- 		# make.defaults, we translate these variables into their
- 		# equivalent USE flags so that useful incremental behavior
- 		# is enabled (for sub-profiles).
- 		configdict_defaults = self.configdict['defaults']
- 		if self._make_defaults is not None:
- 			for i, cfg in enumerate(self._make_defaults):
- 				if not cfg:
- 					self.make_defaults_use.append("")
- 					continue
- 				use = cfg.get("USE", "")
- 				expand_use = []
- 
- 				for k in use_expand_unprefixed:
- 					v = cfg.get(k)
- 					if v is not None:
- 						expand_use.extend(v.split())
- 
- 				for k in use_expand_dict:
- 					v = cfg.get(k)
- 					if v is None:
- 						continue
- 					prefix = k.lower() + '_'
- 					for x in v.split():
- 						if x[:1] == '-':
- 							expand_use.append('-' + prefix + x[1:])
- 						else:
- 							expand_use.append(prefix + x)
- 
- 				if expand_use:
- 					expand_use.append(use)
- 					use  = ' '.join(expand_use)
- 				self.make_defaults_use.append(use)
- 			self.make_defaults_use = tuple(self.make_defaults_use)
- 			# Preserve both positive and negative flags here, since
- 			# negative flags may later interact with other flags pulled
- 			# in via USE_ORDER.
- 			configdict_defaults['USE'] = ' '.join(
- 				filter(None, self.make_defaults_use))
- 			# Set to None so this code only runs once.
- 			self._make_defaults = None
- 
- 		if not self.uvlist:
- 			for x in self["USE_ORDER"].split(":"):
- 				if x in self.configdict:
- 					self.uvlist.append(self.configdict[x])
- 			self.uvlist.reverse()
- 
- 		# For optimal performance, use slice
- 		# comparison instead of startswith().
- 		iuse = self.configdict["pkg"].get("IUSE")
- 		if iuse is not None:
- 			iuse = [x.lstrip("+-") for x in iuse.split()]
- 		myflags = set()
- 		for curdb in self.uvlist:
- 
- 			for k in use_expand_unprefixed:
- 				v = curdb.get(k)
- 				if v is None:
- 					continue
- 				for x in v.split():
- 					if x[:1] == "-":
- 						myflags.discard(x[1:])
- 					else:
- 						myflags.add(x)
- 
- 			cur_use_expand = [x for x in use_expand if x in curdb]
- 			mysplit = curdb.get("USE", "").split()
- 			if not mysplit and not cur_use_expand:
- 				continue
- 			for x in mysplit:
- 				if x == "-*":
- 					myflags.clear()
- 					continue
- 
- 				if x[0] == "+":
- 					writemsg(colorize("BAD", _("USE flags should not start "
- 						"with a '+': %s\n") % x), noiselevel=-1)
- 					x = x[1:]
- 					if not x:
- 						continue
- 
- 				if x[0] == "-":
- 					if x[-2:] == '_*':
- 						prefix = x[1:-1]
- 						prefix_len = len(prefix)
- 						myflags.difference_update(
- 							[y for y in myflags if \
- 							y[:prefix_len] == prefix])
- 					myflags.discard(x[1:])
- 					continue
- 
- 				if iuse is not None and x[-2:] == '_*':
- 					# Expand wildcards here, so that cases like
- 					# USE="linguas_* -linguas_en_US" work correctly.
- 					prefix = x[:-1]
- 					prefix_len = len(prefix)
- 					has_iuse = False
- 					for y in iuse:
- 						if y[:prefix_len] == prefix:
- 							has_iuse = True
- 							myflags.add(y)
- 					if not has_iuse:
- 						# There are no matching IUSE, so allow the
- 						# wildcard to pass through. This allows
- 						# linguas_* to trigger unset LINGUAS in
- 						# cases when no linguas_ flags are in IUSE.
- 						myflags.add(x)
- 				else:
- 					myflags.add(x)
- 
- 			if curdb is configdict_defaults:
- 				# USE_EXPAND flags from make.defaults are handled
- 				# earlier, in order to provide useful incremental
- 				# behavior (for sub-profiles).
- 				continue
- 
- 			for var in cur_use_expand:
- 				var_lower = var.lower()
- 				is_not_incremental = var not in myincrementals
- 				if is_not_incremental:
- 					prefix = var_lower + "_"
- 					prefix_len = len(prefix)
- 					for x in list(myflags):
- 						if x[:prefix_len] == prefix:
- 							myflags.remove(x)
- 				for x in curdb[var].split():
- 					if x[0] == "+":
- 						if is_not_incremental:
- 							writemsg(colorize("BAD", _("Invalid '+' "
- 								"operator in non-incremental variable "
- 								 "'%s': '%s'\n") % (var, x)), noiselevel=-1)
- 							continue
- 						else:
- 							writemsg(colorize("BAD", _("Invalid '+' "
- 								"operator in incremental variable "
- 								 "'%s': '%s'\n") % (var, x)), noiselevel=-1)
- 						x = x[1:]
- 					if x[0] == "-":
- 						if is_not_incremental:
- 							writemsg(colorize("BAD", _("Invalid '-' "
- 								"operator in non-incremental variable "
- 								 "'%s': '%s'\n") % (var, x)), noiselevel=-1)
- 							continue
- 						myflags.discard(var_lower + "_" + x[1:])
- 						continue
- 					myflags.add(var_lower + "_" + x)
- 
- 		if hasattr(self, "features"):
- 			self.features._features.clear()
- 		else:
- 			self.features = features_set(self)
- 		self.features._features.update(self.get('FEATURES', '').split())
- 		self.features._sync_env_var()
- 		self.features._validate()
- 
- 		myflags.update(self.useforce)
- 		arch = self.configdict["defaults"].get("ARCH")
- 		if arch:
- 			myflags.add(arch)
- 
- 		myflags.difference_update(self.usemask)
- 		self.configlist[-1]["USE"]= " ".join(sorted(myflags))
- 
- 		if self.mycpv is None:
- 			# Generate global USE_EXPAND variables settings that are
- 			# consistent with USE, for display by emerge --info. For
- 			# package instances, these are instead generated via
- 			# setcpv().
- 			for k in use_expand:
- 				prefix = k.lower() + '_'
- 				prefix_len = len(prefix)
- 				expand_flags = set( x[prefix_len:] for x in myflags \
- 					if x[:prefix_len] == prefix )
- 				var_split = use_expand_dict.get(k, '').split()
- 				var_split = [ x for x in var_split if x in expand_flags ]
- 				var_split.extend(sorted(expand_flags.difference(var_split)))
- 				if var_split:
- 					self.configlist[-1][k] = ' '.join(var_split)
- 				elif k in self:
- 					self.configlist[-1][k] = ''
- 
- 			for k in use_expand_unprefixed:
- 				var_split = self.get(k, '').split()
- 				var_split = [ x for x in var_split if x in myflags ]
- 				if var_split:
- 					self.configlist[-1][k] = ' '.join(var_split)
- 				elif k in self:
- 					self.configlist[-1][k] = ''
- 
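regenerate() stacks each incremental variable layer by layer, so "-*" drops everything accumulated from lower-priority layers and "-flag" drops a single flag. A standalone sketch of that stacking, assuming hypothetical layer contents rather than anything from this commit:

    def stack_incremental(layers):
        # Apply layers in priority order (lowest first), as regenerate()
        # does for USE, FEATURES and the other incrementals.
        flags = set()
        for layer in layers:
            for tok in layer.split():
                if tok == "-*":
                    flags.clear()
                elif tok.startswith("-"):
                    flags.discard(tok[1:])
                else:
                    flags.add(tok)
        return flags

    # profile make.defaults, then make.conf, then the calling environment:
    print(sorted(stack_incremental(["ssl ipv6", "gtk -ipv6", "-* gnome"])))
    # -> ['gnome']
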
- 	@property
- 	def virts_p(self):
- 		warnings.warn("portage config.virts_p attribute " + \
- 			"is deprecated, use config.get_virts_p()",
- 			DeprecationWarning, stacklevel=2)
- 		return self.get_virts_p()
- 
- 	@property
- 	def virtuals(self):
- 		warnings.warn("portage config.virtuals attribute " + \
- 			"is deprecated, use config.getvirtuals()",
- 			DeprecationWarning, stacklevel=2)
- 		return self.getvirtuals()
- 
- 	def get_virts_p(self):
- 		# Ensure that we don't trigger the _treeVirtuals
- 		# assertion in VirtualsManager._compile_virtuals().
- 		self.getvirtuals()
- 		return self._virtuals_manager.get_virts_p()
- 
- 	def getvirtuals(self):
- 		if self._virtuals_manager._treeVirtuals is None:
- 			#Hack around the fact that VirtualsManager needs a vartree
- 			#and vartree needs a config instance.
- 			#This code should be part of VirtualsManager.getvirtuals().
- 			if self.local_config:
- 				temp_vartree = vartree(settings=self)
- 				self._virtuals_manager._populate_treeVirtuals(temp_vartree)
- 			else:
- 				self._virtuals_manager._treeVirtuals = {}
- 
- 		return self._virtuals_manager.getvirtuals()
- 
- 	def _populate_treeVirtuals_if_needed(self, vartree):
- 		"""Reduce the provides into a list by CP."""
- 		if self._virtuals_manager._treeVirtuals is None:
- 			if self.local_config:
- 				self._virtuals_manager._populate_treeVirtuals(vartree)
- 			else:
- 				self._virtuals_manager._treeVirtuals = {}
- 
- 	def __delitem__(self,mykey):
- 		self.pop(mykey)
- 
- 	def __getitem__(self, key):
- 		try:
- 			return self._getitem(key)
- 		except KeyError:
- 			if portage._internal_caller:
- 				stack = traceback.format_stack()[:-1] + traceback.format_exception(*sys.exc_info())[1:]
- 				try:
- 					# Ensure that output is written to terminal.
- 					with open("/dev/tty", "w") as f:
- 						f.write("=" * 96 + "\n")
- 						f.write("=" * 8 + " Traceback for invalid call to portage.package.ebuild.config.config.__getitem__ " + "=" * 8 + "\n")
- 						f.writelines(stack)
- 						f.write("=" * 96 + "\n")
- 				except Exception:
- 					pass
- 				raise
- 			else:
- 				warnings.warn(_("Passing nonexistent key %r to %s is deprecated. Use %s instead.") %
- 					(key, "portage.package.ebuild.config.config.__getitem__",
- 					"portage.package.ebuild.config.config.get"), DeprecationWarning, stacklevel=2)
- 				return ""
- 
- 	def _getitem(self, mykey):
- 
- 		if mykey in self._constant_keys:
- 			# These two point to temporary values when
- 			# portage plans to update itself.
- 			if mykey == "PORTAGE_BIN_PATH":
- 				return portage._bin_path
- 			if mykey == "PORTAGE_PYM_PATH":
- 				return portage._pym_path
- 
- 			if mykey == "PORTAGE_PYTHONPATH":
- 				value = [x for x in \
- 					self.backupenv.get("PYTHONPATH", "").split(":") if x]
- 				need_pym_path = True
- 				if value:
- 					try:
- 						need_pym_path = not os.path.samefile(value[0],
- 							portage._pym_path)
- 					except OSError:
- 						pass
- 				if need_pym_path:
- 					value.insert(0, portage._pym_path)
- 				return ":".join(value)
- 
- 			if mykey == "PORTAGE_GID":
- 				return "%s" % portage_gid
- 
- 		for d in self.lookuplist:
- 			try:
- 				return d[mykey]
- 			except KeyError:
- 				pass
- 
- 		deprecated_key = self._deprecated_keys.get(mykey)
- 		if deprecated_key is not None:
- 			value = self._getitem(deprecated_key)
- 			#warnings.warn(_("Key %s has been renamed to %s. Please ",
- 			#	"update your configuration") % (deprecated_key, mykey),
- 			#	UserWarning)
- 			return value
- 
- 		raise KeyError(mykey)
- 
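Lookups walk self.lookuplist from the highest-priority layer (the calling environment) down to the most general ones, with a final fall-back through the deprecated-key aliases. A simplified sketch with plain dictionaries and hypothetical values, not part of this commit:

    lookuplist = [
        {"USE": "gnome"},                                # env
        {"USE": "ssl", "CHOST": "x86_64-pc-linux-gnu"},  # profile defaults
    ]
    deprecated_keys = {"PORTAGE_LOGDIR": "PORT_LOGDIR"}

    def getitem(key):
        for layer in lookuplist:
            if key in layer:
                return layer[key]
        alias = deprecated_keys.get(key)
        if alias is not None:
            # e.g. PORTAGE_LOGDIR falls back to a legacy PORT_LOGDIR value
            return getitem(alias)
        raise KeyError(key)

    print(getitem("USE"))    # 'gnome' -- the environment wins
    print(getitem("CHOST"))  # 'x86_64-pc-linux-gnu'
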
- 	def get(self, k, x=None):
- 		try:
- 			return self._getitem(k)
- 		except KeyError:
- 			return x
- 
- 	def pop(self, key, *args):
- 		self.modifying()
- 		if len(args) > 1:
- 			raise TypeError(
- 				"pop expected at most 2 arguments, got " + \
- 				repr(1 + len(args)))
- 		v = self
- 		for d in reversed(self.lookuplist):
- 			v = d.pop(key, v)
- 		if v is self:
- 			if args:
- 				return args[0]
- 			raise KeyError(key)
- 		return v
- 
- 	def __contains__(self, mykey):
- 		"""Called to implement membership test operators (in and not in)."""
- 		try:
- 			self._getitem(mykey)
- 		except KeyError:
- 			return False
- 		else:
- 			return True
- 
- 	def setdefault(self, k, x=None):
- 		v = self.get(k)
- 		if v is not None:
- 			return v
- 		self[k] = x
- 		return x
- 
- 	def __iter__(self):
- 		keys = set()
- 		keys.update(self._constant_keys)
- 		for d in self.lookuplist:
- 			keys.update(d)
- 		return iter(keys)
- 
- 	def iterkeys(self):
- 		return iter(self)
- 
- 	def iteritems(self):
- 		for k in self:
- 			yield (k, self._getitem(k))
- 
- 	def __setitem__(self,mykey,myvalue):
- 		"set a value; will be thrown away at reset() time"
- 		if not isinstance(myvalue, str):
- 			raise ValueError("Invalid type being used as a value: '%s': '%s'" % (str(mykey),str(myvalue)))
- 
- 		# Avoid potential UnicodeDecodeError exceptions later.
- 		mykey = _unicode_decode(mykey)
- 		myvalue = _unicode_decode(myvalue)
- 
- 		self.modifying()
- 		self.modifiedkeys.append(mykey)
- 		self.configdict["env"][mykey]=myvalue
- 
- 	def environ(self):
- 		"return our locally-maintained environment"
- 		mydict={}
- 		environ_filter = self._environ_filter
- 
- 		eapi = self.get('EAPI')
- 		eapi_attrs = _get_eapi_attrs(eapi)
- 		phase = self.get('EBUILD_PHASE')
- 		emerge_from = self.get('EMERGE_FROM')
- 		filter_calling_env = False
- 		if self.mycpv is not None and \
- 			not (emerge_from == 'ebuild' and phase == 'setup') and \
- 			phase not in ('clean', 'cleanrm', 'depend', 'fetch'):
- 			temp_dir = self.get('T')
- 			if temp_dir is not None and \
- 				os.path.exists(os.path.join(temp_dir, 'environment')):
- 				filter_calling_env = True
- 
- 		environ_whitelist = self._environ_whitelist
- 		for x, myvalue in self.iteritems():
- 			if x in environ_filter:
- 				continue
- 			if not isinstance(myvalue, str):
- 				writemsg(_("!!! Non-string value in config: %s=%s\n") % \
- 					(x, myvalue), noiselevel=-1)
- 				continue
- 			if filter_calling_env and \
- 				x not in environ_whitelist and \
- 				not self._environ_whitelist_re.match(x):
- 				# Do not allow anything to leak into the ebuild
- 				# environment unless it is explicitly whitelisted.
- 				# This ensures that variables unset by the ebuild
- 				# remain unset (bug #189417).
- 				continue
- 			mydict[x] = myvalue
- 		if "HOME" not in mydict and "BUILD_PREFIX" in mydict:
- 			writemsg("*** HOME not set. Setting to "+mydict["BUILD_PREFIX"]+"\n")
- 			mydict["HOME"]=mydict["BUILD_PREFIX"][:]
- 
- 		if filter_calling_env:
- 			if phase:
- 				whitelist = []
- 				if "rpm" == phase:
- 					whitelist.append("RPMDIR")
- 				for k in whitelist:
- 					v = self.get(k)
- 					if v is not None:
- 						mydict[k] = v
- 
- 		# At some point we may want to stop exporting FEATURES to the ebuild
- 		# environment, in order to prevent ebuilds from abusing it. In
- 		# preparation for that, export it as PORTAGE_FEATURES so that bashrc
- 		# users will be able to migrate any FEATURES conditional code to
- 		# use this alternative variable.
- 		mydict["PORTAGE_FEATURES"] = self["FEATURES"]
- 
- 		# Filtered by IUSE and implicit IUSE.
- 		mydict["USE"] = self.get("PORTAGE_USE", "")
- 
- 		# Don't export AA to the ebuild environment in EAPIs that forbid it
- 		if not eapi_exports_AA(eapi):
- 			mydict.pop("AA", None)
- 
- 		if not eapi_exports_merge_type(eapi):
- 			mydict.pop("MERGE_TYPE", None)
- 
- 		src_like_phase = (phase == 'setup' or
- 				_phase_func_map.get(phase, '').startswith('src_'))
- 
- 		if not (src_like_phase and eapi_attrs.sysroot):
- 			mydict.pop("ESYSROOT", None)
- 
- 		if not (src_like_phase and eapi_attrs.broot):
- 			mydict.pop("BROOT", None)
- 
- 		# Prefix variables are supported beginning with EAPI 3, or when
- 		# force-prefix is in FEATURES, since older EAPIs would otherwise be
- 		# useless with prefix configurations. This brings compatibility with
- 		# the prefix branch of portage, which also supports EPREFIX for all
- 		# EAPIs (for obvious reasons).
- 		if phase == 'depend' or \
- 			('force-prefix' not in self.features and
- 			eapi is not None and not eapi_supports_prefix(eapi)):
- 			mydict.pop("ED", None)
- 			mydict.pop("EPREFIX", None)
- 			mydict.pop("EROOT", None)
- 			mydict.pop("ESYSROOT", None)
- 
- 		if phase not in ("pretend", "setup", "preinst", "postinst") or \
- 			not eapi_exports_replace_vars(eapi):
- 			mydict.pop("REPLACING_VERSIONS", None)
- 
- 		if phase not in ("prerm", "postrm") or \
- 			not eapi_exports_replace_vars(eapi):
- 			mydict.pop("REPLACED_BY_VERSION", None)
- 
- 		if phase is not None and eapi_attrs.exports_EBUILD_PHASE_FUNC:
- 			phase_func = _phase_func_map.get(phase)
- 			if phase_func is not None:
- 				mydict["EBUILD_PHASE_FUNC"] = phase_func
- 
- 		if eapi_attrs.posixish_locale:
- 			split_LC_ALL(mydict)
- 			mydict["LC_COLLATE"] = "C"
- 			# check_locale() returns None when check can not be executed.
- 			if check_locale(silent=True, env=mydict) is False:
- 				# try another locale
- 				for l in ("C.UTF-8", "en_US.UTF-8", "en_GB.UTF-8", "C"):
- 					mydict["LC_CTYPE"] = l
- 					if check_locale(silent=True, env=mydict):
- 						# TODO: output the following only once
- #						writemsg(_("!!! LC_CTYPE unsupported, using %s instead\n")
- #								% mydict["LC_CTYPE"])
- 						break
- 				else:
- 					raise AssertionError("C locale did not pass the test!")
- 
- 		if not eapi_attrs.exports_PORTDIR:
- 			mydict.pop("PORTDIR", None)
- 		if not eapi_attrs.exports_ECLASSDIR:
- 			mydict.pop("ECLASSDIR", None)
- 
- 		if not eapi_attrs.path_variables_end_with_trailing_slash:
- 			for v in ("D", "ED", "ROOT", "EROOT", "ESYSROOT", "BROOT"):
- 				if v in mydict:
- 					mydict[v] = mydict[v].rstrip(os.path.sep)
- 
- 		# Since SYSROOT=/ interacts badly with autotools.eclass (bug 654600),
- 		# and no EAPI expects SYSROOT to have a trailing slash, always strip
- 		# the trailing slash from SYSROOT.
- 		if 'SYSROOT' in mydict:
- 			mydict['SYSROOT'] = mydict['SYSROOT'].rstrip(os.sep)
- 
- 		try:
- 			builddir = mydict["PORTAGE_BUILDDIR"]
- 			distdir = mydict["DISTDIR"]
- 		except KeyError:
- 			pass
- 		else:
- 			mydict["PORTAGE_ACTUAL_DISTDIR"] = distdir
- 			mydict["DISTDIR"] = os.path.join(builddir, "distdir")
- 
- 		return mydict
- 
- 	def thirdpartymirrors(self):
- 		if getattr(self, "_thirdpartymirrors", None) is None:
- 			thirdparty_lists = []
- 			for repo_name in reversed(self.repositories.prepos_order):
- 				thirdparty_lists.append(grabdict(os.path.join(
- 					self.repositories[repo_name].location,
- 					"profiles", "thirdpartymirrors")))
- 			self._thirdpartymirrors = stack_dictlist(thirdparty_lists, incremental=True)
- 		return self._thirdpartymirrors
- 
- 	def archlist(self):
- 		_archlist = []
- 		for myarch in self["PORTAGE_ARCHLIST"].split():
- 			_archlist.append(myarch)
- 			_archlist.append("~" + myarch)
- 		return _archlist
- 
- 	def selinux_enabled(self):
- 		if getattr(self, "_selinux_enabled", None) is None:
- 			self._selinux_enabled = 0
- 			if "selinux" in self["USE"].split():
- 				if selinux:
- 					if selinux.is_selinux_enabled() == 1:
- 						self._selinux_enabled = 1
- 					else:
- 						self._selinux_enabled = 0
- 				else:
- 					writemsg(_("!!! SELinux module not found. Please verify that it was installed.\n"),
- 						noiselevel=-1)
- 					self._selinux_enabled = 0
- 
- 		return self._selinux_enabled
- 
- 	keys = __iter__
- 	items = iteritems
++
+     """
+     This class encompasses the main portage configuration.  Data is pulled from
+     ROOT/PORTDIR/profiles/, from ROOT/etc/make.profile incrementally through all
+     parent profiles as well as from ROOT/PORTAGE_CONFIGROOT/* for user-specified
+     overrides.
+ 
+     Generally, if you need data like USE flags, FEATURES, environment variables,
+     virtuals, etc., you look in here.
+     """
+ 
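A hedged usage sketch of reading the merged configuration through the global settings instance; the attribute and key names follow the code in this file, but the printed values obviously depend on the host system:

    import portage

    settings = portage.settings                # a config instance
    print(settings.get("ACCEPT_KEYWORDS", ""))
    print("distcc" in settings.features)       # FEATURES exposed as a set-like object
    print(settings["EROOT"])
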
+     _constant_keys = frozenset(
+         ["PORTAGE_BIN_PATH", "PORTAGE_GID", "PORTAGE_PYM_PATH", "PORTAGE_PYTHONPATH"]
+     )
+ 
+     _deprecated_keys = {
+         "PORTAGE_LOGDIR": "PORT_LOGDIR",
+         "PORTAGE_LOGDIR_CLEAN": "PORT_LOGDIR_CLEAN",
+         "SIGNED_OFF_BY": "DCO_SIGNED_OFF_BY",
+     }
+ 
+     _setcpv_aux_keys = (
+         "BDEPEND",
+         "DEFINED_PHASES",
+         "DEPEND",
+         "EAPI",
+         "IDEPEND",
+         "INHERITED",
+         "IUSE",
+         "REQUIRED_USE",
+         "KEYWORDS",
+         "LICENSE",
+         "PDEPEND",
+         "PROPERTIES",
+         "RDEPEND",
+         "SLOT",
+         "repository",
+         "RESTRICT",
+         "LICENSE",
+     )
+ 
+     _module_aliases = {
+         "cache.metadata_overlay.database": "portage.cache.flat_hash.mtime_md5_database",
+         "portage.cache.metadata_overlay.database": "portage.cache.flat_hash.mtime_md5_database",
+     }
+ 
+     _case_insensitive_vars = special_env_vars.case_insensitive_vars
+     _default_globals = special_env_vars.default_globals
+     _env_blacklist = special_env_vars.env_blacklist
+     _environ_filter = special_env_vars.environ_filter
+     _environ_whitelist = special_env_vars.environ_whitelist
+     _environ_whitelist_re = special_env_vars.environ_whitelist_re
+     _global_only_vars = special_env_vars.global_only_vars
+ 
+     def __init__(
+         self,
+         clone=None,
+         mycpv=None,
+         config_profile_path=None,
+         config_incrementals=None,
+         config_root=None,
+         target_root=None,
+         sysroot=None,
+         eprefix=None,
+         local_config=True,
+         env=None,
+         _unmatched_removal=False,
+         repositories=None,
+     ):
+         """
+         @param clone: If provided, init will use deepcopy to copy by value the instance.
+         @type clone: Instance of config class.
+         @param mycpv: CPV to load up (see setcpv), this is the same as calling init with mycpv=None
+         and then calling instance.setcpv(mycpv).
+         @type mycpv: String
+         @param config_profile_path: Configurable path to the profile (usually PROFILE_PATH from portage.const)
+         @type config_profile_path: String
+         @param config_incrementals: List of incremental variables
+                 (defaults to portage.const.INCREMENTALS)
+         @type config_incrementals: List
+         @param config_root: path to read local config from (defaults to "/", see PORTAGE_CONFIGROOT)
+         @type config_root: String
+         @param target_root: the target root, which typically corresponds to the
+                 value of the $ROOT env variable (default is /)
+         @type target_root: String
+         @param sysroot: the sysroot to build against, which typically corresponds
+                  to the value of the $SYSROOT env variable (default is /)
+         @type sysroot: String
+         @param eprefix: set the EPREFIX variable (default is portage.const.EPREFIX)
+         @type eprefix: String
+         @param local_config: Enables loading of local config (/etc/portage); used most by repoman to
+         ignore local config (keywording and unmasking)
+         @type local_config: Boolean
+         @param env: The calling environment which is used to override settings.
+                 Defaults to os.environ if unspecified.
+         @type env: dict
+         @param _unmatched_removal: Enabled by repoman when the
+                 --unmatched-removal option is given.
+         @type _unmatched_removal: Boolean
+         @param repositories: Configuration of repositories.
+                 Defaults to portage.repository.config.load_repository_config().
+         @type repositories: Instance of portage.repository.config.RepoConfigLoader class.
+         """
+ 
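A construction sketch using only the keyword arguments documented above; the paths are hypothetical examples, not values from this commit:

    import portage

    # Read configuration from "/" but target an alternate root:
    cross_settings = portage.config(config_root="/", target_root="/mnt/gentoo")
    print(cross_settings["ROOT"])

    # clone= copies an existing instance by value:
    copied = portage.config(clone=portage.settings)
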
+         # This is important when config is reloaded after emerge --sync.
+         _eapi_cache.clear()
+ 
+         # When initializing the global portage.settings instance, avoid
+         # raising exceptions whenever possible since exceptions thrown
+         # from 'import portage' or 'import portage.exceptions' statements
+         # can practically render the API unusable for API consumers.
+         tolerant = hasattr(portage, "_initializing_globals")
+         self._tolerant = tolerant
+         self._unmatched_removal = _unmatched_removal
+ 
+         self.locked = 0
+         self.mycpv = None
+         self._setcpv_args_hash = None
+         self.puse = ""
+         self._penv = []
+         self.modifiedkeys = []
+         self.uvlist = []
+         self._accept_chost_re = None
+         self._accept_properties = None
+         self._accept_restrict = None
+         self._features_overrides = []
+         self._make_defaults = None
+         self._parent_stable = None
+         self._soname_provided = None
+ 
+         # _unknown_features records unknown features that
+         # have triggered warning messages, and ensures that
+         # the same warning isn't shown twice.
+         self._unknown_features = set()
+ 
+         self.local_config = local_config
+ 
+         if clone:
+             # For immutable attributes, use shallow copy for
+             # speed and memory conservation.
+             self._tolerant = clone._tolerant
+             self._unmatched_removal = clone._unmatched_removal
+             self.categories = clone.categories
+             self.depcachedir = clone.depcachedir
+             self.incrementals = clone.incrementals
+             self.module_priority = clone.module_priority
+             self.profile_path = clone.profile_path
+             self.profiles = clone.profiles
+             self.packages = clone.packages
+             self.repositories = clone.repositories
+             self.unpack_dependencies = clone.unpack_dependencies
+             self._default_features_use = clone._default_features_use
+             self._iuse_effective = clone._iuse_effective
+             self._iuse_implicit_match = clone._iuse_implicit_match
+             self._non_user_variables = clone._non_user_variables
+             self._env_d_blacklist = clone._env_d_blacklist
+             self._pbashrc = clone._pbashrc
+             self._repo_make_defaults = clone._repo_make_defaults
+             self.usemask = clone.usemask
+             self.useforce = clone.useforce
+             self.puse = clone.puse
+             self.user_profile_dir = clone.user_profile_dir
+             self.local_config = clone.local_config
+             self.make_defaults_use = clone.make_defaults_use
+             self.mycpv = clone.mycpv
+             self._setcpv_args_hash = clone._setcpv_args_hash
+             self._soname_provided = clone._soname_provided
+             self._profile_bashrc = clone._profile_bashrc
+ 
+             # immutable attributes (internal policy ensures lack of mutation)
+             self._locations_manager = clone._locations_manager
+             self._use_manager = clone._use_manager
+             # force instantiation of lazy immutable objects when cloning, so
+             # that they're not instantiated more than once
+             self._keywords_manager_obj = clone._keywords_manager
+             self._mask_manager_obj = clone._mask_manager
+ 
+             # shared mutable attributes
+             self._unknown_features = clone._unknown_features
+ 
+             self.modules = copy.deepcopy(clone.modules)
+             self._penv = copy.deepcopy(clone._penv)
+ 
+             self.configdict = copy.deepcopy(clone.configdict)
+             self.configlist = [
+                 self.configdict["env.d"],
+                 self.configdict["repo"],
+                 self.configdict["features"],
+                 self.configdict["pkginternal"],
+                 self.configdict["globals"],
+                 self.configdict["defaults"],
+                 self.configdict["conf"],
+                 self.configdict["pkg"],
+                 self.configdict["env"],
+             ]
+             self.lookuplist = self.configlist[:]
+             self.lookuplist.reverse()
+             self._use_expand_dict = copy.deepcopy(clone._use_expand_dict)
+             self.backupenv = self.configdict["backupenv"]
+             self.prevmaskdict = copy.deepcopy(clone.prevmaskdict)
+             self.pprovideddict = copy.deepcopy(clone.pprovideddict)
+             self.features = features_set(self)
+             self.features._features = copy.deepcopy(clone.features._features)
+             self._features_overrides = copy.deepcopy(clone._features_overrides)
+ 
+             # Strictly speaking _license_manager is not immutable. Users need to ensure that
+             # extract_global_changes() is called right after __init__ (if at all).
+             # It also has the mutable member _undef_lic_groups. It is used to track
+             # undefined license groups, to not display an error message for the same
+             # group again and again. Because of this, it's useful to share it between
+             # all LicenseManager instances.
+             self._license_manager = clone._license_manager
+ 
+             # force instantiation of lazy objects when cloning, so
+             # that they're not instantiated more than once
+             self._virtuals_manager_obj = copy.deepcopy(clone._virtuals_manager)
+ 
+             self._accept_properties = copy.deepcopy(clone._accept_properties)
+             self._ppropertiesdict = copy.deepcopy(clone._ppropertiesdict)
+             self._accept_restrict = copy.deepcopy(clone._accept_restrict)
+             self._paccept_restrict = copy.deepcopy(clone._paccept_restrict)
+             self._penvdict = copy.deepcopy(clone._penvdict)
+             self._pbashrcdict = copy.deepcopy(clone._pbashrcdict)
+             self._expand_map = copy.deepcopy(clone._expand_map)
+ 
+         else:
+             # lazily instantiated objects
+             self._keywords_manager_obj = None
+             self._mask_manager_obj = None
+             self._virtuals_manager_obj = None
+ 
+             locations_manager = LocationsManager(
+                 config_root=config_root,
+                 config_profile_path=config_profile_path,
+                 eprefix=eprefix,
+                 local_config=local_config,
+                 target_root=target_root,
+                 sysroot=sysroot,
+             )
+             self._locations_manager = locations_manager
+ 
+             eprefix = locations_manager.eprefix
+             config_root = locations_manager.config_root
+             sysroot = locations_manager.sysroot
+             esysroot = locations_manager.esysroot
+             broot = locations_manager.broot
+             abs_user_config = locations_manager.abs_user_config
+             make_conf_paths = [
+                 os.path.join(config_root, "etc", "make.conf"),
+                 os.path.join(config_root, MAKE_CONF_FILE),
+             ]
+             try:
+                 if os.path.samefile(*make_conf_paths):
+                     make_conf_paths.pop()
+             except OSError:
+                 pass
+ 
+             make_conf_count = 0
+             make_conf = {}
+             for x in make_conf_paths:
+                 mygcfg = getconfig(
+                     x,
+                     tolerant=tolerant,
+                     allow_sourcing=True,
+                     expand=make_conf,
+                     recursive=True,
+                 )
+                 if mygcfg is not None:
+                     make_conf.update(mygcfg)
+                     make_conf_count += 1
+ 
+             if make_conf_count == 2:
+                 writemsg(
+                     "!!! %s\n"
+                     % _("Found 2 make.conf files, using both '%s' and '%s'")
+                     % tuple(make_conf_paths),
+                     noiselevel=-1,
+                 )
+ 
+             # __* variables set in make.conf are local and are not propagated.
+             make_conf = {k: v for k, v in make_conf.items() if not k.startswith("__")}
+ 
+             # Allow ROOT setting to come from make.conf if it's not overridden
+             # by the constructor argument (from the calling environment).
+             locations_manager.set_root_override(make_conf.get("ROOT"))
+             target_root = locations_manager.target_root
+             eroot = locations_manager.eroot
+             self.global_config_path = locations_manager.global_config_path
+ 
+             # The expand_map is used for variable substitution
+             # in getconfig() calls, and the getconfig() calls
+             # update expand_map with the value of each variable
+             # assignment that occurs. Variable substitution occurs
+             # in the following order, which corresponds to the
+             # order of appearance in self.lookuplist:
+             #
+             #   * env.d
+             #   * make.globals
+             #   * make.defaults
+             #   * make.conf
+             #
+             # Notably absent is "env", since we want to avoid any
+             # interaction with the calling environment that might
+             # lead to unexpected results.
+ 
+             env_d = (
+                 getconfig(
+                     os.path.join(eroot, "etc", "profile.env"),
+                     tolerant=tolerant,
+                     expand=False,
+                 )
+                 or {}
+             )
+             expand_map = env_d.copy()
+             self._expand_map = expand_map
+ 
+             # Allow make.globals and make.conf to set paths relative to vars like ${EPREFIX}.
+             expand_map["BROOT"] = broot
+             expand_map["EPREFIX"] = eprefix
+             expand_map["EROOT"] = eroot
+             expand_map["ESYSROOT"] = esysroot
+             expand_map["PORTAGE_CONFIGROOT"] = config_root
+             expand_map["ROOT"] = target_root
+             expand_map["SYSROOT"] = sysroot
+ 
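The expand_map above lets ${EPREFIX}- and ${EROOT}-relative assignments in make.globals and make.conf resolve to concrete paths. A crude stand-in for that substitution (getconfig() itself performs proper shell-style expansion), shown with a hypothetical make.conf line:

    expand_map = {"EPREFIX": "", "EROOT": "/"}
    line = 'DISTDIR="${EROOT}var/cache/distfiles"'
    for key, value in expand_map.items():
        line = line.replace("${%s}" % key, value)
    print(line)  # DISTDIR="/var/cache/distfiles"
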
+             if portage._not_installed:
+                 make_globals_path = os.path.join(
+                     PORTAGE_BASE_PATH, "cnf", "make.globals"
+                 )
+             else:
+                 make_globals_path = os.path.join(
+                     self.global_config_path, "make.globals"
+                 )
+             old_make_globals = os.path.join(config_root, "etc", "make.globals")
+             if os.path.isfile(old_make_globals) and not os.path.samefile(
+                 make_globals_path, old_make_globals
+             ):
+                 # Don't warn if they refer to the same path, since
+                 # that can be used for backward compatibility with
+                 # old software.
+                 writemsg(
+                     "!!! %s\n"
+                     % _(
+                         "Found obsolete make.globals file: "
+                         "'%s', (using '%s' instead)"
+                     )
+                     % (old_make_globals, make_globals_path),
+                     noiselevel=-1,
+                 )
+ 
+             make_globals = getconfig(
+                 make_globals_path, tolerant=tolerant, expand=expand_map
+             )
+             if make_globals is None:
+                 make_globals = {}
+ 
+             for k, v in self._default_globals.items():
+                 make_globals.setdefault(k, v)
+ 
+             if config_incrementals is None:
+                 self.incrementals = INCREMENTALS
+             else:
+                 self.incrementals = config_incrementals
+             if not isinstance(self.incrementals, frozenset):
+                 self.incrementals = frozenset(self.incrementals)
+ 
+             self.module_priority = ("user", "default")
+             self.modules = {}
+             modules_file = os.path.join(config_root, MODULES_FILE_PATH)
+             modules_loader = KeyValuePairFileLoader(modules_file, None, None)
+             modules_dict, modules_errors = modules_loader.load()
+             self.modules["user"] = modules_dict
+             if self.modules["user"] is None:
+                 self.modules["user"] = {}
+             user_auxdbmodule = self.modules["user"].get("portdbapi.auxdbmodule")
+             if (
+                 user_auxdbmodule is not None
+                 and user_auxdbmodule in self._module_aliases
+             ):
+                 warnings.warn(
+                     "'%s' is deprecated: %s" % (user_auxdbmodule, modules_file)
+                 )
+ 
+             self.modules["default"] = {
+                 "portdbapi.auxdbmodule": "portage.cache.flat_hash.mtime_md5_database",
+             }
+ 
+             self.configlist = []
+ 
+             # back up our incremental variables:
+             self.configdict = {}
+             self._use_expand_dict = {}
+             # configlist will contain: [ env.d, globals, features, defaults, conf, pkg, backupenv, env ]
+             self.configlist.append({})
+             self.configdict["env.d"] = self.configlist[-1]
+ 
+             self.configlist.append({})
+             self.configdict["repo"] = self.configlist[-1]
+ 
+             self.configlist.append({})
+             self.configdict["features"] = self.configlist[-1]
+ 
+             self.configlist.append({})
+             self.configdict["pkginternal"] = self.configlist[-1]
+ 
+             # env_d will be empty if profile.env doesn't exist.
+             if env_d:
+                 self.configdict["env.d"].update(env_d)
+ 
+             # backupenv is used for calculating incremental variables.
+             if env is None:
+                 env = os.environ
+ 
+             # Avoid potential UnicodeDecodeError exceptions later.
+             env_unicode = dict(
+                 (_unicode_decode(k), _unicode_decode(v)) for k, v in env.items()
+             )
+ 
+             self.backupenv = env_unicode
+ 
+             if env_d:
+                 # Remove duplicate values so they don't override updated
+                 # profile.env values later (profile.env is reloaded in each
+                 # call to self.regenerate).
+                 for k, v in env_d.items():
+                     try:
+                         if self.backupenv[k] == v:
+                             del self.backupenv[k]
+                     except KeyError:
+                         pass
+                 del k, v
+ 
+             self.configdict["env"] = LazyItemsDict(self.backupenv)
+ 
+             self.configlist.append(make_globals)
+             self.configdict["globals"] = self.configlist[-1]
+ 
+             self.make_defaults_use = []
+ 
+             # Loading Repositories
+             self["PORTAGE_CONFIGROOT"] = config_root
+             self["ROOT"] = target_root
+             self["SYSROOT"] = sysroot
+             self["EPREFIX"] = eprefix
+             self["EROOT"] = eroot
+             self["ESYSROOT"] = esysroot
+             self["BROOT"] = broot
+             known_repos = []
+             portdir = ""
+             portdir_overlay = ""
+             portdir_sync = None
+             for confs in [make_globals, make_conf, self.configdict["env"]]:
+                 v = confs.get("PORTDIR")
+                 if v is not None:
+                     portdir = v
+                     known_repos.append(v)
+                 v = confs.get("PORTDIR_OVERLAY")
+                 if v is not None:
+                     portdir_overlay = v
+                     known_repos.extend(shlex_split(v))
+                 v = confs.get("SYNC")
+                 if v is not None:
+                     portdir_sync = v
+                 if "PORTAGE_RSYNC_EXTRA_OPTS" in confs:
+                     self["PORTAGE_RSYNC_EXTRA_OPTS"] = confs["PORTAGE_RSYNC_EXTRA_OPTS"]
+ 
+             self["PORTDIR"] = portdir
+             self["PORTDIR_OVERLAY"] = portdir_overlay
+             if portdir_sync:
+                 self["SYNC"] = portdir_sync
+             self.lookuplist = [self.configdict["env"]]
+             if repositories is None:
+                 self.repositories = load_repository_config(self)
+             else:
+                 self.repositories = repositories
+ 
+             known_repos.extend(repo.location for repo in self.repositories)
+             known_repos = frozenset(known_repos)
+ 
+             self["PORTAGE_REPOSITORIES"] = self.repositories.config_string()
+             self.backup_changes("PORTAGE_REPOSITORIES")
+ 
+             # filling PORTDIR and PORTDIR_OVERLAY variables for compatibility
+             main_repo = self.repositories.mainRepo()
+             if main_repo is not None:
+                 self["PORTDIR"] = main_repo.location
+                 self.backup_changes("PORTDIR")
+                 expand_map["PORTDIR"] = self["PORTDIR"]
+ 
+             # repoman controls PORTDIR_OVERLAY via the environment, so no
+             # special cases are needed here.
+             portdir_overlay = list(self.repositories.repoLocationList())
+             if portdir_overlay and portdir_overlay[0] == self["PORTDIR"]:
+                 portdir_overlay = portdir_overlay[1:]
+ 
+             new_ov = []
+             if portdir_overlay:
+                 for ov in portdir_overlay:
+                     ov = normalize_path(ov)
+                     if isdir_raise_eaccess(ov) or portage._sync_mode:
+                         new_ov.append(portage._shell_quote(ov))
+                     else:
+                         writemsg(
+                             _("!!! Invalid PORTDIR_OVERLAY" " (not a dir): '%s'\n")
+                             % ov,
+                             noiselevel=-1,
+                         )
+ 
+             self["PORTDIR_OVERLAY"] = " ".join(new_ov)
+             self.backup_changes("PORTDIR_OVERLAY")
+             expand_map["PORTDIR_OVERLAY"] = self["PORTDIR_OVERLAY"]
+ 
+             locations_manager.set_port_dirs(self["PORTDIR"], self["PORTDIR_OVERLAY"])
+             locations_manager.load_profiles(self.repositories, known_repos)
+ 
+             profiles_complex = locations_manager.profiles_complex
+             self.profiles = locations_manager.profiles
+             self.profile_path = locations_manager.profile_path
+             self.user_profile_dir = locations_manager.user_profile_dir
+ 
+             try:
+                 packages_list = [
+                     grabfile_package(
+                         os.path.join(x.location, "packages"),
+                         verify_eapi=True,
+                         eapi=x.eapi,
+                         eapi_default=None,
+                         allow_repo=allow_profile_repo_deps(x),
+                         allow_build_id=x.allow_build_id,
+                     )
+                     for x in profiles_complex
+                 ]
+             except EnvironmentError as e:
+                 _raise_exc(e)
+ 
+             self.packages = tuple(stack_lists(packages_list, incremental=1))
+ 
+             # revmaskdict
+             self.prevmaskdict = {}
+             for x in self.packages:
+                 # Negative atoms are filtered by the above stack_lists() call.
+                 if not isinstance(x, Atom):
+                     x = Atom(x.lstrip("*"))
+                 self.prevmaskdict.setdefault(x.cp, []).append(x)
+ 
+             self.unpack_dependencies = load_unpack_dependencies_configuration(
+                 self.repositories
+             )
+ 
+             mygcfg = {}
+             if profiles_complex:
+                 mygcfg_dlists = []
+                 for x in profiles_complex:
+                     # Prevent accidents triggered by USE="${USE} ..." settings
+                     # at the top of make.defaults which caused parent profile
+                     # USE to override parent profile package.use settings.
+                     # It would be nice to guard USE_EXPAND variables like
+                     # this too, but unfortunately USE_EXPAND is not known
+                     # until after make.defaults has been evaluated, so that
+                     # will require some form of make.defaults preprocessing.
+                     expand_map.pop("USE", None)
+                     mygcfg_dlists.append(
+                         getconfig(
+                             os.path.join(x.location, "make.defaults"),
+                             tolerant=tolerant,
+                             expand=expand_map,
+                             recursive=x.portage1_directories,
+                         )
+                     )
+                 self._make_defaults = mygcfg_dlists
+                 mygcfg = stack_dicts(mygcfg_dlists, incrementals=self.incrementals)
+                 if mygcfg is None:
+                     mygcfg = {}
+             self.configlist.append(mygcfg)
+             self.configdict["defaults"] = self.configlist[-1]
+ 
+             mygcfg = {}
+             for x in make_conf_paths:
+                 mygcfg.update(
+                     getconfig(
+                         x,
+                         tolerant=tolerant,
+                         allow_sourcing=True,
+                         expand=expand_map,
+                         recursive=True,
+                     )
+                     or {}
+                 )
+ 
+             # __* variables set in make.conf are local and are not propagated.
+             mygcfg = {k: v for k, v in mygcfg.items() if not k.startswith("__")}
+ 
+             # Don't allow the user to override certain variables in make.conf
+             profile_only_variables = (
+                 self.configdict["defaults"].get("PROFILE_ONLY_VARIABLES", "").split()
+             )
+             profile_only_variables = stack_lists([profile_only_variables])
+             non_user_variables = set()
+             non_user_variables.update(profile_only_variables)
+             non_user_variables.update(self._env_blacklist)
+             non_user_variables.update(self._global_only_vars)
+             non_user_variables = frozenset(non_user_variables)
+             self._non_user_variables = non_user_variables
+ 
+             self._env_d_blacklist = frozenset(
+                 chain(
+                     profile_only_variables,
+                     self._env_blacklist,
+                 )
+             )
+             env_d = self.configdict["env.d"]
+             for k in self._env_d_blacklist:
+                 env_d.pop(k, None)
+ 
+             for k in profile_only_variables:
+                 mygcfg.pop(k, None)
+ 
+             self.configlist.append(mygcfg)
+             self.configdict["conf"] = self.configlist[-1]
+ 
+             self.configlist.append(LazyItemsDict())
+             self.configdict["pkg"] = self.configlist[-1]
+ 
+             self.configdict["backupenv"] = self.backupenv
+ 
+             # Don't allow the user to override certain variables in the env
+             for k in profile_only_variables:
+                 self.backupenv.pop(k, None)
+ 
+             self.configlist.append(self.configdict["env"])
+ 
+             # make lookuplist for loading package.*
+             self.lookuplist = self.configlist[:]
+             self.lookuplist.reverse()
+ 
+             # Blacklist vars that could interfere with portage internals.
+             for blacklisted in self._env_blacklist:
+                 for cfg in self.lookuplist:
+                     cfg.pop(blacklisted, None)
+                 self.backupenv.pop(blacklisted, None)
+             del blacklisted, cfg
+ 
+             self["PORTAGE_CONFIGROOT"] = config_root
+             self.backup_changes("PORTAGE_CONFIGROOT")
+             self["ROOT"] = target_root
+             self.backup_changes("ROOT")
+             self["SYSROOT"] = sysroot
+             self.backup_changes("SYSROOT")
+             self["EPREFIX"] = eprefix
+             self.backup_changes("EPREFIX")
+             self["EROOT"] = eroot
+             self.backup_changes("EROOT")
+             self["ESYSROOT"] = esysroot
+             self.backup_changes("ESYSROOT")
+             self["BROOT"] = broot
+             self.backup_changes("BROOT")
+ 
+             # The prefix of the running portage instance is used in the
+             # ebuild environment to implement the --host-root option for
+             # best_version and has_version.
+             self["PORTAGE_OVERRIDE_EPREFIX"] = portage.const.EPREFIX
+             self.backup_changes("PORTAGE_OVERRIDE_EPREFIX")
+ 
+             self._ppropertiesdict = portage.dep.ExtendedAtomDict(dict)
+             self._paccept_restrict = portage.dep.ExtendedAtomDict(dict)
+             self._penvdict = portage.dep.ExtendedAtomDict(dict)
+             self._pbashrcdict = {}
+             self._pbashrc = ()
+ 
+             self._repo_make_defaults = {}
+             for repo in self.repositories.repos_with_profiles():
+                 d = (
+                     getconfig(
+                         os.path.join(repo.location, "profiles", "make.defaults"),
+                         tolerant=tolerant,
+                         expand=self.configdict["globals"].copy(),
+                         recursive=repo.portage1_profiles,
+                     )
+                     or {}
+                 )
+                 if d:
+                     for k in chain(
+                         self._env_blacklist,
+                         profile_only_variables,
+                         self._global_only_vars,
+                     ):
+                         d.pop(k, None)
+                 self._repo_make_defaults[repo.name] = d
+ 
+             # Read all USE related files from profiles and optionally from user config.
+             self._use_manager = UseManager(
+                 self.repositories,
+                 profiles_complex,
+                 abs_user_config,
+                 self._isStable,
+                 user_config=local_config,
+             )
+             # Initialize all USE related variables we track ourselves.
+             self.usemask = self._use_manager.getUseMask()
+             self.useforce = self._use_manager.getUseForce()
+             self.configdict["conf"][
+                 "USE"
+             ] = self._use_manager.extract_global_USE_changes(
+                 self.configdict["conf"].get("USE", "")
+             )
+ 
+             # Read license_groups from profiles and, optionally, license_groups and package.license from user config
+             self._license_manager = LicenseManager(
+                 locations_manager.profile_locations,
+                 abs_user_config,
+                 user_config=local_config,
+             )
+             # Extract '*/*' entries from package.license
+             self.configdict["conf"][
+                 "ACCEPT_LICENSE"
+             ] = self._license_manager.extract_global_changes(
+                 self.configdict["conf"].get("ACCEPT_LICENSE", "")
+             )
+ 
+             # profile.bashrc
+             self._profile_bashrc = tuple(
+                 os.path.isfile(os.path.join(profile.location, "profile.bashrc"))
+                 for profile in profiles_complex
+             )
+ 
+             if local_config:
+                 # package.properties
+                 propdict = grabdict_package(
+                     os.path.join(abs_user_config, "package.properties"),
+                     recursive=1,
+                     allow_wildcard=True,
+                     allow_repo=True,
+                     verify_eapi=False,
+                     allow_build_id=True,
+                 )
+                 v = propdict.pop("*/*", None)
+                 if v is not None:
+                     if "ACCEPT_PROPERTIES" in self.configdict["conf"]:
+                         self.configdict["conf"]["ACCEPT_PROPERTIES"] += " " + " ".join(
+                             v
+                         )
+                     else:
+                         self.configdict["conf"]["ACCEPT_PROPERTIES"] = " ".join(v)
+                 for k, v in propdict.items():
+                     self._ppropertiesdict.setdefault(k.cp, {})[k] = v
+ 
+                 # package.accept_restrict
+                 d = grabdict_package(
+                     os.path.join(abs_user_config, "package.accept_restrict"),
+                     recursive=True,
+                     allow_wildcard=True,
+                     allow_repo=True,
+                     verify_eapi=False,
+                     allow_build_id=True,
+                 )
+                 v = d.pop("*/*", None)
+                 if v is not None:
+                     if "ACCEPT_RESTRICT" in self.configdict["conf"]:
+                         self.configdict["conf"]["ACCEPT_RESTRICT"] += " " + " ".join(v)
+                     else:
+                         self.configdict["conf"]["ACCEPT_RESTRICT"] = " ".join(v)
+                 for k, v in d.items():
+                     self._paccept_restrict.setdefault(k.cp, {})[k] = v
+ 
+                 # package.env
+                 penvdict = grabdict_package(
+                     os.path.join(abs_user_config, "package.env"),
+                     recursive=1,
+                     allow_wildcard=True,
+                     allow_repo=True,
+                     verify_eapi=False,
+                     allow_build_id=True,
+                 )
+                 v = penvdict.pop("*/*", None)
+                 if v is not None:
+                     global_wildcard_conf = {}
+                     self._grab_pkg_env(v, global_wildcard_conf)
+                     incrementals = self.incrementals
+                     conf_configdict = self.configdict["conf"]
+                     for k, v in global_wildcard_conf.items():
+                         if k in incrementals:
+                             if k in conf_configdict:
+                                 conf_configdict[k] = conf_configdict[k] + " " + v
+                             else:
+                                 conf_configdict[k] = v
+                         else:
+                             conf_configdict[k] = v
+                         expand_map[k] = v
+ 
+                 for k, v in penvdict.items():
+                     self._penvdict.setdefault(k.cp, {})[k] = v
+ 
+                 # package.bashrc
+                 for profile in profiles_complex:
+                     if "profile-bashrcs" not in profile.profile_formats:
+                         continue
+                     self._pbashrcdict[profile] = portage.dep.ExtendedAtomDict(dict)
+                     bashrc = grabdict_package(
+                         os.path.join(profile.location, "package.bashrc"),
+                         recursive=1,
+                         allow_wildcard=True,
+                         allow_repo=allow_profile_repo_deps(profile),
+                         verify_eapi=True,
+                         eapi=profile.eapi,
+                         eapi_default=None,
+                         allow_build_id=profile.allow_build_id,
+                     )
+                     if not bashrc:
+                         continue
+ 
+                     for k, v in bashrc.items():
+                         envfiles = [
+                             os.path.join(profile.location, "bashrc", envname)
+                             for envname in v
+                         ]
+                         self._pbashrcdict[profile].setdefault(k.cp, {}).setdefault(
+                             k, []
+                         ).extend(envfiles)
+ 
+             # getting categories from an external file now
+             self.categories = [
+                 grabfile(os.path.join(x, "categories"))
+                 for x in locations_manager.profile_and_user_locations
+             ]
+             category_re = dbapi._category_re
+             # categories used to be a tuple, but now we use a frozenset
+             # for hashed category validation in portdbapi.cp_list()
+             self.categories = frozenset(
+                 x
+                 for x in stack_lists(self.categories, incremental=1)
+                 if category_re.match(x) is not None
+             )
+ 
+             archlist = [
+                 grabfile(os.path.join(x, "arch.list"))
+                 for x in locations_manager.profile_and_user_locations
+             ]
+             archlist = sorted(stack_lists(archlist, incremental=1))
+             self.configdict["conf"]["PORTAGE_ARCHLIST"] = " ".join(archlist)
+ 
+             pkgprovidedlines = []
+             for x in profiles_complex:
+                 provpath = os.path.join(x.location, "package.provided")
+                 if os.path.exists(provpath):
+                     if _get_eapi_attrs(x.eapi).allows_package_provided:
+                         pkgprovidedlines.append(
+                             grabfile(provpath, recursive=x.portage1_directories)
+                         )
+                     else:
+                         # TODO: bail out?
+                         writemsg(
+                             (
+                                 _("!!! package.provided not allowed in EAPI %s: ")
+                                 % x.eapi
+                             )
+                             + x.location
+                             + "\n",
+                             noiselevel=-1,
+                         )
+ 
+             pkgprovidedlines = stack_lists(pkgprovidedlines, incremental=1)
+             has_invalid_data = False
+             for x in range(len(pkgprovidedlines) - 1, -1, -1):
+                 myline = pkgprovidedlines[x]
+                 if not isvalidatom("=" + myline):
+                     writemsg(
+                         _("Invalid package name in package.provided: %s\n") % myline,
+                         noiselevel=-1,
+                     )
+                     has_invalid_data = True
+                     del pkgprovidedlines[x]
+                     continue
+                 cpvr = catpkgsplit(pkgprovidedlines[x])
+                 if not cpvr or cpvr[0] == "null":
+                     writemsg(
+                         _("Invalid package name in package.provided: ")
+                         + pkgprovidedlines[x]
+                         + "\n",
+                         noiselevel=-1,
+                     )
+                     has_invalid_data = True
+                     del pkgprovidedlines[x]
+                     continue
+             if has_invalid_data:
+                 writemsg(
+                     _("See portage(5) for correct package.provided usage.\n"),
+                     noiselevel=-1,
+                 )
+             self.pprovideddict = {}
+             for x in pkgprovidedlines:
+                 x_split = catpkgsplit(x)
+                 if x_split is None:
+                     continue
+                 mycatpkg = cpv_getkey(x)
+                 if mycatpkg in self.pprovideddict:
+                     self.pprovideddict[mycatpkg].append(x)
+                 else:
+                     self.pprovideddict[mycatpkg] = [x]
+ 
+             # reasonable defaults; this is important as without USE_ORDER,
+             # USE will always be "" (nothing set)!
+             if "USE_ORDER" not in self:
+                 self[
+                     "USE_ORDER"
+                 ] = "env:pkg:conf:defaults:pkginternal:features:repo:env.d"
+                 self.backup_changes("USE_ORDER")
+ 
+             if "CBUILD" not in self and "CHOST" in self:
+                 self["CBUILD"] = self["CHOST"]
+                 self.backup_changes("CBUILD")
+ 
+             if "USERLAND" not in self:
+                 # Set default USERLAND so that our test cases can assume that
+                 # it's always set. This allows isolated-functions.sh to avoid
+                 # calling uname -s when sourced.
+                 system = platform.system()
+                 if system is not None and (
+                     system.endswith("BSD") or system == "DragonFly"
+                 ):
+                     self["USERLAND"] = "BSD"
+                 else:
+                     self["USERLAND"] = "GNU"
+                 self.backup_changes("USERLAND")
+ 
+             default_inst_ids = {
+                 "PORTAGE_INST_GID": "0",
+                 "PORTAGE_INST_UID": "0",
+             }
+ 
+             eroot_or_parent = first_existing(eroot)
+             unprivileged = False
+             try:
++                # PREFIX LOCAL: inventing UID/GID based on a path is a very
++                # bad idea; it breaks almost everything, since group ids
++                # don't have to match when a user belongs to many groups.
++                # In particular, this breaks the configure-set portage
++                # group and user (in portage/data.py).
++                raise OSError(2, "No such file or directory")
+                 eroot_st = os.stat(eroot_or_parent)
+             except OSError:
+                 pass
+             else:
+ 
+                 if portage.data._unprivileged_mode(eroot_or_parent, eroot_st):
+                     unprivileged = True
+ 
+                     default_inst_ids["PORTAGE_INST_GID"] = str(eroot_st.st_gid)
+                     default_inst_ids["PORTAGE_INST_UID"] = str(eroot_st.st_uid)
+ 
+                     if "PORTAGE_USERNAME" not in self:
+                         try:
+                             pwd_struct = pwd.getpwuid(eroot_st.st_uid)
+                         except KeyError:
+                             pass
+                         else:
+                             self["PORTAGE_USERNAME"] = pwd_struct.pw_name
+                             self.backup_changes("PORTAGE_USERNAME")
+ 
+                     if "PORTAGE_GRPNAME" not in self:
+                         try:
+                             grp_struct = grp.getgrgid(eroot_st.st_gid)
+                         except KeyError:
+                             pass
+                         else:
+                             self["PORTAGE_GRPNAME"] = grp_struct.gr_name
+                             self.backup_changes("PORTAGE_GRPNAME")
+ 
+             for var, default_val in default_inst_ids.items():
+                 try:
+                     self[var] = str(int(self.get(var, default_val)))
+                 except ValueError:
+                     writemsg(
+                         _(
+                             "!!! %s='%s' is not a valid integer.  "
+                             "Falling back to %s.\n"
+                         )
+                         % (var, self[var], default_val),
+                         noiselevel=-1,
+                     )
+                     self[var] = default_val
+                 self.backup_changes(var)
+ 
+             self.depcachedir = self.get("PORTAGE_DEPCACHEDIR")
+             if self.depcachedir is None:
+                 self.depcachedir = os.path.join(
+                     os.sep, portage.const.EPREFIX, DEPCACHE_PATH.lstrip(os.sep)
+                 )
+                 if unprivileged and target_root != os.sep:
+                     # In unprivileged mode, automatically make
+                     # depcachedir relative to target_root if the
+                     # default depcachedir is not writable.
+                     if not os.access(first_existing(self.depcachedir), os.W_OK):
+                         self.depcachedir = os.path.join(
+                             eroot, DEPCACHE_PATH.lstrip(os.sep)
+                         )
+ 
+             self["PORTAGE_DEPCACHEDIR"] = self.depcachedir
+             self.backup_changes("PORTAGE_DEPCACHEDIR")
+ 
+             if portage._internal_caller:
+                 self["PORTAGE_INTERNAL_CALLER"] = "1"
+                 self.backup_changes("PORTAGE_INTERNAL_CALLER")
+ 
+             # initialize self.features
+             self.regenerate()
+             feature_use = []
+             if "test" in self.features:
+                 feature_use.append("test")
+             self.configdict["features"]["USE"] = self._default_features_use = " ".join(
+                 feature_use
+             )
+             if feature_use:
+                 # Regenerate USE so that the initial "test" flag state is
+                 # correct for evaluation of !test? conditionals in RESTRICT.
+                 self.regenerate()
+ 
+             if unprivileged:
+                 self.features.add("unprivileged")
+ 
+             if bsd_chflags:
+                 self.features.add("chflags")
+ 
+             self._init_iuse()
+ 
+             self._validate_commands()
+ 
+             for k in self._case_insensitive_vars:
+                 if k in self:
+                     self[k] = self[k].lower()
+                     self.backup_changes(k)
+ 
+             # The first constructed config object initializes these modules,
+             # and subsequent calls to the _init() functions have no effect.
+             portage.output._init(config_root=self["PORTAGE_CONFIGROOT"])
+             portage.data._init(self)
+ 
+         if mycpv:
+             self.setcpv(mycpv)
+ 
+     def _init_iuse(self):
+         self._iuse_effective = self._calc_iuse_effective()
+         self._iuse_implicit_match = _iuse_implicit_match_cache(self)
+ 
+     @property
+     def mygcfg(self):
+         warnings.warn("portage.config.mygcfg is deprecated", stacklevel=3)
+         return {}
+ 
+     def _validate_commands(self):
+         for k in special_env_vars.validate_commands:
+             v = self.get(k)
+             if v is not None:
+                 valid, v_split = validate_cmd_var(v)
+ 
+                 if not valid:
+                     if v_split:
+                         writemsg_level(
+                             _("%s setting is invalid: '%s'\n") % (k, v),
+                             level=logging.ERROR,
+                             noiselevel=-1,
+                         )
+ 
+                     # before deleting the invalid setting, backup
+                     # the default value if available
+                     v = self.configdict["globals"].get(k)
+                     if v is not None:
+                         default_valid, v_split = validate_cmd_var(v)
+                         if not default_valid:
+                             if v_split:
+                                 writemsg_level(
+                                     _(
+                                         "%s setting from make.globals"
+                                         + " is invalid: '%s'\n"
+                                     )
+                                     % (k, v),
+                                     level=logging.ERROR,
+                                     noiselevel=-1,
+                                 )
+                             # make.globals seems corrupt, so try for
+                             # a hardcoded default instead
+                             v = self._default_globals.get(k)
+ 
+                     # delete all settings for this key,
+                     # including the invalid one
+                     del self[k]
+                     self.backupenv.pop(k, None)
+                     if v:
+                         # restore validated default
+                         self.configdict["globals"][k] = v
+ 
+     def _init_dirs(self):
+         """
+         Create a few directories that are critical to portage operation
+         """
+         if not os.access(self["EROOT"], os.W_OK):
+             return
+ 
+         #                                gid, mode, mask, preserve_perms
+         dir_mode_map = {
+             "tmp": (-1, 0o1777, 0, True),
+             "var/tmp": (-1, 0o1777, 0, True),
+             PRIVATE_PATH: (portage_gid, 0o2750, 0o2, False),
+             CACHE_PATH: (portage_gid, 0o755, 0o2, False),
+         }
+ 
+         for mypath, (gid, mode, modemask, preserve_perms) in dir_mode_map.items():
+             mydir = os.path.join(self["EROOT"], mypath)
+             if preserve_perms and os.path.isdir(mydir):
+                 # Only adjust permissions on some directories if
+                 # they don't exist yet. This gives freedom to the
+                 # user to adjust permissions to suit their taste.
+                 continue
+             try:
+                 ensure_dirs(mydir, gid=gid, mode=mode, mask=modemask)
+             except PortageException as e:
+                 writemsg(
+                     _("!!! Directory initialization failed: '%s'\n") % mydir,
+                     noiselevel=-1,
+                 )
+                 writemsg("!!! %s\n" % str(e), noiselevel=-1)
+ 
+     @property
+     def _keywords_manager(self):
+         if self._keywords_manager_obj is None:
+             self._keywords_manager_obj = KeywordsManager(
+                 self._locations_manager.profiles_complex,
+                 self._locations_manager.abs_user_config,
+                 self.local_config,
+                 global_accept_keywords=self.configdict["defaults"].get(
+                     "ACCEPT_KEYWORDS", ""
+                 ),
+             )
+         return self._keywords_manager_obj
+ 
+     @property
+     def _mask_manager(self):
+         if self._mask_manager_obj is None:
+             self._mask_manager_obj = MaskManager(
+                 self.repositories,
+                 self._locations_manager.profiles_complex,
+                 self._locations_manager.abs_user_config,
+                 user_config=self.local_config,
+                 strict_umatched_removal=self._unmatched_removal,
+             )
+         return self._mask_manager_obj
+ 
+     @property
+     def _virtuals_manager(self):
+         if self._virtuals_manager_obj is None:
+             self._virtuals_manager_obj = VirtualsManager(self.profiles)
+         return self._virtuals_manager_obj
+ 
+     @property
+     def pkeywordsdict(self):
+         result = self._keywords_manager.pkeywordsdict.copy()
+         for k, v in result.items():
+             result[k] = v.copy()
+         return result
+ 
+     @property
+     def pmaskdict(self):
+         return self._mask_manager._pmaskdict.copy()
+ 
+     @property
+     def punmaskdict(self):
+         return self._mask_manager._punmaskdict.copy()
+ 
+     @property
+     def soname_provided(self):
+         if self._soname_provided is None:
+             d = stack_dictlist(
+                 (
+                     grabdict(os.path.join(x, "soname.provided"), recursive=True)
+                     for x in self.profiles
+                 ),
+                 incremental=True,
+             )
+             self._soname_provided = frozenset(
+                 SonameAtom(cat, soname)
+                 for cat, sonames in d.items()
+                 for soname in sonames
+             )
+         return self._soname_provided
+ 
+     def expandLicenseTokens(self, tokens):
+         """Take a token from ACCEPT_LICENSE or package.license and expand it
+         if it's a group token (indicated by @) or just return it if it's not a
+         group.  If a group is negated then negate all group elements."""
+         return self._license_manager.expandLicenseTokens(tokens)
+ 
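As an aside, the group expansion that expandLicenseTokens() delegates to the
license manager can be pictured with a small standalone sketch; the group
table and helper below are invented for illustration and are not portage's
implementation:

# A standalone sketch of @GROUP token expansion for ACCEPT_LICENSE-style
# lists.  The group contents are made up for illustration only.
LICENSE_GROUPS = {
    "FREE": ["GPL-2", "GPL-3", "MIT", "@OSI-APPROVED"],
    "OSI-APPROVED": ["Apache-2.0", "BSD"],
}

def expand_license_tokens(tokens, groups=LICENSE_GROUPS):
    expanded = []
    for token in tokens:
        negate = token.startswith("-")
        name = token.lstrip("-")
        if not name.startswith("@"):
            expanded.append(token)
            continue
        # Recursively expand the group; a negated group negates every member.
        members = expand_license_tokens(groups.get(name[1:], []), groups)
        expanded.extend("-" + m.lstrip("-") if negate else m for m in members)
    return expanded

print(expand_license_tokens(["-@FREE", "MIT"]))
# -> ['-GPL-2', '-GPL-3', '-MIT', '-Apache-2.0', '-BSD', 'MIT']
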
+     def validate(self):
+         """Validate miscellaneous settings and display warnings if necessary.
+         (This code was previously in the global scope of portage.py)"""
+ 
+         groups = self.get("ACCEPT_KEYWORDS", "").split()
+         archlist = self.archlist()
+         if not archlist:
+             writemsg(
+                 _(
+                     "--- 'profiles/arch.list' is empty or "
+                     "not available. Empty ebuild repository?\n"
+                 ),
+                 noiselevel=1,
+             )
+         else:
+             for group in groups:
+                 if (
+                     group not in archlist
+                     and not (group.startswith("-") and group[1:] in archlist)
+                     and group not in ("*", "~*", "**")
+                 ):
+                     writemsg(
+                         _("!!! INVALID ACCEPT_KEYWORDS: %s\n") % str(group),
+                         noiselevel=-1,
+                     )
+ 
+         profile_broken = False
+ 
+         # getmaskingstatus requires ARCH for ACCEPT_KEYWORDS support
+         arch = self.get("ARCH")
+         if not self.profile_path or not arch:
+             profile_broken = True
+         else:
+             # If any one of these files exists, then
+             # the profile is considered valid.
+             for x in ("make.defaults", "parent", "packages", "use.force", "use.mask"):
+                 if exists_raise_eaccess(os.path.join(self.profile_path, x)):
+                     break
+             else:
+                 profile_broken = True
+ 
+         if profile_broken and not portage._sync_mode:
+             abs_profile_path = None
+             for x in (PROFILE_PATH, "etc/make.profile"):
+                 x = os.path.join(self["PORTAGE_CONFIGROOT"], x)
+                 try:
+                     os.lstat(x)
+                 except OSError:
+                     pass
+                 else:
+                     abs_profile_path = x
+                     break
+ 
+             if abs_profile_path is None:
+                 abs_profile_path = os.path.join(
+                     self["PORTAGE_CONFIGROOT"], PROFILE_PATH
+                 )
+ 
+             writemsg(
+                 _(
+                     "\n\n!!! %s is not a symlink and will probably prevent most merges.\n"
+                 )
+                 % abs_profile_path,
+                 noiselevel=-1,
+             )
+             writemsg(
+                 _("!!! It should point into a profile within %s/profiles/\n")
+                 % self["PORTDIR"]
+             )
+             writemsg(
+                 _(
+                     "!!! (You can safely ignore this message when syncing. It's harmless.)\n\n\n"
+                 )
+             )
+ 
+         abs_user_virtuals = os.path.join(self["PORTAGE_CONFIGROOT"], USER_VIRTUALS_FILE)
+         if os.path.exists(abs_user_virtuals):
+             writemsg("\n!!! /etc/portage/virtuals is deprecated in favor of\n")
+             writemsg("!!! /etc/portage/profile/virtuals. Please move it to\n")
+             writemsg("!!! this new location.\n\n")
+ 
+         if not sandbox_capable and (
+             "sandbox" in self.features or "usersandbox" in self.features
+         ):
+             if self.profile_path is not None and os.path.realpath(
+                 self.profile_path
+             ) == os.path.realpath(
+                 os.path.join(self["PORTAGE_CONFIGROOT"], PROFILE_PATH)
+             ):
+                 # Don't show this warning when running repoman and the
+                 # sandbox feature came from a profile that doesn't belong
+                 # to the user.
+                 writemsg(
+                     colorize(
+                         "BAD", _("!!! Problem with sandbox" " binary. Disabling...\n\n")
+                     ),
+                     noiselevel=-1,
+                 )
+ 
+         if "fakeroot" in self.features and not fakeroot_capable:
+             writemsg(
+                 _(
+                     "!!! FEATURES=fakeroot is enabled, but the "
+                     "fakeroot binary is not installed.\n"
+                 ),
+                 noiselevel=-1,
+             )
+ 
+         if "webrsync-gpg" in self.features:
+             writemsg(
+                 _(
+                     "!!! FEATURES=webrsync-gpg is deprecated, see the make.conf(5) man page.\n"
+                 ),
+                 noiselevel=-1,
+             )
+ 
+         if os.getuid() == 0 and not hasattr(os, "setgroups"):
+             warning_shown = False
+ 
+             if "userpriv" in self.features:
+                 writemsg(
+                     _(
+                         "!!! FEATURES=userpriv is enabled, but "
+                         "os.setgroups is not available.\n"
+                     ),
+                     noiselevel=-1,
+                 )
+                 warning_shown = True
+ 
+             if "userfetch" in self.features:
+                 writemsg(
+                     _(
+                         "!!! FEATURES=userfetch is enabled, but "
+                         "os.setgroups is not available.\n"
+                     ),
+                     noiselevel=-1,
+                 )
+                 warning_shown = True
+ 
+             if warning_shown and platform.python_implementation() == "PyPy":
+                 writemsg(
+                     _("!!! See https://bugs.pypy.org/issue833 for details.\n"),
+                     noiselevel=-1,
+                 )
+ 
+         binpkg_compression = self.get("BINPKG_COMPRESS")
+         if binpkg_compression:
+             try:
+                 compression = _compressors[binpkg_compression]
+             except KeyError as e:
+                 writemsg(
+                     "!!! BINPKG_COMPRESS contains invalid or "
+                     "unsupported compression method: %s" % e.args[0],
+                     noiselevel=-1,
+                 )
+             else:
+                 try:
+                     compression_binary = shlex_split(
+                         portage.util.varexpand(compression["compress"], mydict=self)
+                     )[0]
+                 except IndexError as e:
+                     writemsg(
+                         "!!! BINPKG_COMPRESS contains invalid or "
+                         "unsupported compression method: %s" % e.args[0],
+                         noiselevel=-1,
+                     )
+                 else:
+                     if portage.process.find_binary(compression_binary) is None:
+                         missing_package = compression["package"]
+                         writemsg(
+                             "!!! BINPKG_COMPRESS unsupported %s. "
+                             "Missing package: %s"
+                             % (binpkg_compression, missing_package),
+                             noiselevel=-1,
+                         )
+ 
+     def load_best_module(self, property_string):
+         best_mod = best_from_dict(property_string, self.modules, self.module_priority)
+         mod = None
+         try:
+             mod = load_mod(best_mod)
+         except ImportError:
+             if best_mod in self._module_aliases:
+                 mod = load_mod(self._module_aliases[best_mod])
+             elif not best_mod.startswith("cache."):
+                 raise
+             else:
+                 best_mod = "portage." + best_mod
+                 try:
+                     mod = load_mod(best_mod)
+                 except ImportError:
+                     raise
+         return mod
+ 
+     def lock(self):
+         self.locked = 1
+ 
+     def unlock(self):
+         self.locked = 0
+ 
+     def modifying(self):
+         if self.locked:
+             raise Exception(_("Configuration is locked."))
+ 
+     def backup_changes(self, key=None):
+         self.modifying()
+         if key and key in self.configdict["env"]:
+             self.backupenv[key] = copy.deepcopy(self.configdict["env"][key])
+         else:
+             raise KeyError(_("No such key defined in environment: %s") % key)
+ 
+     def reset(self, keeping_pkg=0, use_cache=None):
+         """
+         Restore environment from self.backupenv, call self.regenerate()
+         @param keeping_pkg: Should we keep the setcpv() data or delete it.
+         @type keeping_pkg: Boolean
+         @rtype: None
+         """
+ 
+         if use_cache is not None:
+             warnings.warn(
+                 "The use_cache parameter for config.reset() is deprecated and without effect.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         self.modifying()
+         self.configdict["env"].clear()
+         self.configdict["env"].update(self.backupenv)
+ 
+         self.modifiedkeys = []
+         if not keeping_pkg:
+             self.mycpv = None
+             self._setcpv_args_hash = None
+             self.puse = ""
+             del self._penv[:]
+             self.configdict["pkg"].clear()
+             self.configdict["pkginternal"].clear()
+             self.configdict["features"]["USE"] = self._default_features_use
+             self.configdict["repo"].clear()
+             self.configdict["defaults"]["USE"] = " ".join(self.make_defaults_use)
+             self.usemask = self._use_manager.getUseMask()
+             self.useforce = self._use_manager.getUseForce()
+         self.regenerate()
+ 
+     class _lazy_vars:
+ 
+         __slots__ = ("built_use", "settings", "values")
+ 
+         def __init__(self, built_use, settings):
+             self.built_use = built_use
+             self.settings = settings
+             self.values = None
+ 
+         def __getitem__(self, k):
+             if self.values is None:
+                 self.values = self._init_values()
+             return self.values[k]
+ 
+         def _init_values(self):
+             values = {}
+             settings = self.settings
+             use = self.built_use
+             if use is None:
+                 use = frozenset(settings["PORTAGE_USE"].split())
+ 
+             values[
+                 "ACCEPT_LICENSE"
+             ] = settings._license_manager.get_prunned_accept_license(
+                 settings.mycpv,
+                 use,
+                 settings.get("LICENSE", ""),
+                 settings.get("SLOT"),
+                 settings.get("PORTAGE_REPO_NAME"),
+             )
+             values["PORTAGE_PROPERTIES"] = self._flatten("PROPERTIES", use, settings)
+             values["PORTAGE_RESTRICT"] = self._flatten("RESTRICT", use, settings)
+             return values
+ 
+         def _flatten(self, var, use, settings):
+             try:
+                 restrict = set(
+                     use_reduce(settings.get(var, ""), uselist=use, flat=True)
+                 )
+             except InvalidDependString:
+                 restrict = set()
+             return " ".join(sorted(restrict))
+ 
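The use_reduce(..., flat=True) call in _flatten() above strips USE
conditionals down to a flat token list.  A minimal standalone approximation
of that flattening, handling only the simple "flag? ( ... )" form (unlike
the real parser in portage.dep, which also supports nesting, !flag? and ||
groups):

# Rough approximation of flattening "flag? ( a b )" conditionals; invented
# helper, not portage.dep.use_reduce().
def flatten_conditionals(depstr, use):
    out, tokens, i = [], depstr.split(), 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.endswith("?"):
            enabled = tok[:-1] in use
            i += 2  # skip the conditional token and the opening "("
            while tokens[i] != ")":
                if enabled:
                    out.append(tokens[i])
                i += 1
        else:
            out.append(tok)
        i += 1
    return out

restrict = flatten_conditionals("strip test? ( test )", use={"test"})
print(" ".join(sorted(set(restrict))))  # -> "strip test"
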
+     class _lazy_use_expand:
+         """
+         Lazily evaluate USE_EXPAND variables since they are only needed when
+         an ebuild shell is spawned. Variable values are made consistent with
+         the previously calculated USE settings.
+         """
+ 
+         def __init__(
+             self,
+             settings,
+             unfiltered_use,
+             use,
+             usemask,
+             iuse_effective,
+             use_expand_split,
+             use_expand_dict,
+         ):
+             self._settings = settings
+             self._unfiltered_use = unfiltered_use
+             self._use = use
+             self._usemask = usemask
+             self._iuse_effective = iuse_effective
+             self._use_expand_split = use_expand_split
+             self._use_expand_dict = use_expand_dict
+ 
+         def __getitem__(self, key):
+             prefix = key.lower() + "_"
+             prefix_len = len(prefix)
+             expand_flags = set(
+                 x[prefix_len:] for x in self._use if x[:prefix_len] == prefix
+             )
+             var_split = self._use_expand_dict.get(key, "").split()
+             # Preserve the order of var_split because it can matter for things
+             # like LINGUAS.
+             var_split = [x for x in var_split if x in expand_flags]
+             var_split.extend(expand_flags.difference(var_split))
+             has_wildcard = "*" in expand_flags
+             if has_wildcard:
+                 var_split = [x for x in var_split if x != "*"]
+             has_iuse = set()
+             for x in self._iuse_effective:
+                 if x[:prefix_len] == prefix:
+                     has_iuse.add(x[prefix_len:])
+             if has_wildcard:
+                 # * means to enable everything in IUSE that's not masked
+                 if has_iuse:
+                     usemask = self._usemask
+                     for suffix in has_iuse:
+                         x = prefix + suffix
+                         if x not in usemask:
+                             if suffix not in expand_flags:
+                                 var_split.append(suffix)
+                 else:
+                     # If there is a wildcard and no matching flags in IUSE then
+                     # LINGUAS should be unset so that all .mo files are
+                     # installed.
+                     var_split = []
+             # Make the flags unique and filter them according to IUSE.
+             # Also, continue to preserve order for things like LINGUAS
+             # and filter any duplicates the variable may contain.
+             filtered_var_split = []
+             remaining = has_iuse.intersection(var_split)
+             for x in var_split:
+                 if x in remaining:
+                     remaining.remove(x)
+                     filtered_var_split.append(x)
+             var_split = filtered_var_split
+ 
+             return " ".join(var_split)
+ 
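Reduced to its core, __getitem__ above recovers a USE_EXPAND variable such
as LINGUAS from the prefixed USE flags while keeping the configured order
(wildcard and IUSE filtering omitted here); a toy sketch with invented flag
values:

# Toy reconstruction of a USE_EXPAND variable from prefixed USE flags,
# mirroring the order-preserving filtering done in __getitem__ above.
def expand_var(key, use, configured_value):
    prefix = key.lower() + "_"
    enabled = {u[len(prefix):] for u in use if u.startswith(prefix)}
    # Keep the order of the configured value (it matters for LINGUAS),
    # then append any enabled flags that were not configured.
    ordered = [v for v in configured_value.split() if v in enabled]
    ordered.extend(sorted(enabled.difference(ordered)))
    return " ".join(ordered)

use = {"linguas_de", "linguas_fr", "python_targets_python3_10"}
print(expand_var("LINGUAS", use, "fr de nl"))  # -> "fr de"
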
+     def _setcpv_recursion_gate(f):
+         """
+         Raise AssertionError for recursive setcpv calls.
+         """
+ 
+         def wrapper(self, *args, **kwargs):
+             if hasattr(self, "_setcpv_active"):
+                 raise AssertionError("setcpv recursion detected")
+             self._setcpv_active = True
+             try:
+                 return f(self, *args, **kwargs)
+             finally:
+                 del self._setcpv_active
+ 
+         return wrapper
+ 
+     @_setcpv_recursion_gate
+     def setcpv(self, mycpv, use_cache=None, mydb=None):
+         """
+         Load a particular CPV into the config, this lets us see the
+         Default USE flags for a particular ebuild as well as the USE
+         flags from package.use.
+ 
+         @param mycpv: A cpv to load
+         @type mycpv: string
+         @param mydb: a dbapi instance that supports aux_get with the IUSE key.
+         @type mydb: dbapi or derivative.
+         @rtype: None
+         """
+ 
+         if use_cache is not None:
+             warnings.warn(
+                 "The use_cache parameter for config.setcpv() is deprecated and without effect.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         self.modifying()
+ 
+         pkg = None
+         built_use = None
+         explicit_iuse = None
+         if not isinstance(mycpv, str):
+             pkg = mycpv
+             mycpv = pkg.cpv
+             mydb = pkg._metadata
+             explicit_iuse = pkg.iuse.all
+             args_hash = (mycpv, id(pkg))
+             if pkg.built:
+                 built_use = pkg.use.enabled
+         else:
+             args_hash = (mycpv, id(mydb))
+ 
+         if args_hash == self._setcpv_args_hash:
+             return
+         self._setcpv_args_hash = args_hash
+ 
+         has_changed = False
+         self.mycpv = mycpv
+         cat, pf = catsplit(mycpv)
+         cp = cpv_getkey(mycpv)
+         cpv_slot = self.mycpv
+         pkginternaluse = ""
+         pkginternaluse_list = []
+         feature_use = []
+         iuse = ""
+         pkg_configdict = self.configdict["pkg"]
+         previous_iuse = pkg_configdict.get("IUSE")
+         previous_iuse_effective = pkg_configdict.get("IUSE_EFFECTIVE")
+         previous_features = pkg_configdict.get("FEATURES")
+         previous_penv = self._penv
+ 
+         aux_keys = self._setcpv_aux_keys
+ 
+         # Discard any existing metadata and package.env settings from
+         # the previous package instance.
+         pkg_configdict.clear()
+ 
+         pkg_configdict["CATEGORY"] = cat
+         pkg_configdict["PF"] = pf
+         repository = None
+         eapi = None
+         if mydb:
+             if not hasattr(mydb, "aux_get"):
+                 for k in aux_keys:
+                     if k in mydb:
+                         # Make these lazy, since __getitem__ triggers
+                         # evaluation of USE conditionals which can't
+                         # occur until PORTAGE_USE is calculated below.
+                         pkg_configdict.addLazySingleton(k, mydb.__getitem__, k)
+             else:
+                 # When calling dbapi.aux_get(), grab USE for built/installed
+                 # packages since we want to save it as PORTAGE_BUILT_USE for
+                 # evaluating conditional USE deps in atoms passed via IPC to
+                 # helpers like has_version and best_version.
+                 aux_keys = set(aux_keys)
+                 if hasattr(mydb, "_aux_cache_keys"):
+                     aux_keys = aux_keys.intersection(mydb._aux_cache_keys)
+                 aux_keys.add("USE")
+                 aux_keys = list(aux_keys)
+                 for k, v in zip(aux_keys, mydb.aux_get(self.mycpv, aux_keys)):
+                     pkg_configdict[k] = v
+                 built_use = frozenset(pkg_configdict.pop("USE").split())
+                 if not built_use:
+                     # Empty USE means this dbapi instance does not contain
+                     # built packages.
+                     built_use = None
+             eapi = pkg_configdict["EAPI"]
+ 
+             repository = pkg_configdict.pop("repository", None)
+             if repository is not None:
+                 pkg_configdict["PORTAGE_REPO_NAME"] = repository
+             iuse = pkg_configdict["IUSE"]
+             if pkg is None:
+                 self.mycpv = _pkg_str(
+                     self.mycpv, metadata=pkg_configdict, settings=self
+                 )
+                 cpv_slot = self.mycpv
+             else:
+                 cpv_slot = pkg
+             for x in iuse.split():
+                 if x.startswith("+"):
+                     pkginternaluse_list.append(x[1:])
+                 elif x.startswith("-"):
+                     pkginternaluse_list.append(x)
+             pkginternaluse = " ".join(pkginternaluse_list)
+ 
+         eapi_attrs = _get_eapi_attrs(eapi)
+ 
+         if pkginternaluse != self.configdict["pkginternal"].get("USE", ""):
+             self.configdict["pkginternal"]["USE"] = pkginternaluse
+             has_changed = True
+ 
+         repo_env = []
+         if repository and repository != Package.UNKNOWN_REPO:
+             repos = []
+             try:
+                 repos.extend(
+                     repo.name for repo in self.repositories[repository].masters
+                 )
+             except KeyError:
+                 pass
+             repos.append(repository)
+             for repo in repos:
+                 d = self._repo_make_defaults.get(repo)
+                 if d is None:
+                     d = {}
+                 else:
+                     # make a copy, since we might modify it with
+                     # package.use settings
+                     d = d.copy()
+                 cpdict = self._use_manager._repo_puse_dict.get(repo, {}).get(cp)
+                 if cpdict:
+                     repo_puse = ordered_by_atom_specificity(cpdict, cpv_slot)
+                     if repo_puse:
+                         for x in repo_puse:
+                             d["USE"] = d.get("USE", "") + " " + " ".join(x)
+                 if d:
+                     repo_env.append(d)
+ 
+         if repo_env or self.configdict["repo"]:
+             self.configdict["repo"].clear()
+             self.configdict["repo"].update(
+                 stack_dicts(repo_env, incrementals=self.incrementals)
+             )
+             has_changed = True
+ 
+         defaults = []
+         for i, pkgprofileuse_dict in enumerate(self._use_manager._pkgprofileuse):
+             if self.make_defaults_use[i]:
+                 defaults.append(self.make_defaults_use[i])
+             cpdict = pkgprofileuse_dict.get(cp)
+             if cpdict:
+                 pkg_defaults = ordered_by_atom_specificity(cpdict, cpv_slot)
+                 if pkg_defaults:
+                     defaults.extend(pkg_defaults)
+         defaults = " ".join(defaults)
+         if defaults != self.configdict["defaults"].get("USE", ""):
+             self.configdict["defaults"]["USE"] = defaults
+             has_changed = True
+ 
+         useforce = self._use_manager.getUseForce(cpv_slot)
+         if useforce != self.useforce:
+             self.useforce = useforce
+             has_changed = True
+ 
+         usemask = self._use_manager.getUseMask(cpv_slot)
+         if usemask != self.usemask:
+             self.usemask = usemask
+             has_changed = True
+ 
+         oldpuse = self.puse
+         self.puse = self._use_manager.getPUSE(cpv_slot)
+         if oldpuse != self.puse:
+             has_changed = True
+         self.configdict["pkg"]["PKGUSE"] = self.puse[:]  # For saving to PUSE file
+         self.configdict["pkg"]["USE"] = self.puse[:]  # this gets appended to USE
+ 
+         if previous_features:
+             # The package from the previous setcpv call had package.env
+             # settings which modified FEATURES. Therefore, trigger a
+             # regenerate() call in order to ensure that self.features
+             # is accurate.
+             has_changed = True
+             # Prevent stale features USE from corrupting the evaluation
+             # of USE conditional RESTRICT.
+             self.configdict["features"]["USE"] = self._default_features_use
+ 
+         self._penv = []
+         cpdict = self._penvdict.get(cp)
+         if cpdict:
+             penv_matches = ordered_by_atom_specificity(cpdict, cpv_slot)
+             if penv_matches:
+                 for x in penv_matches:
+                     self._penv.extend(x)
+ 
+         bashrc_files = []
+ 
+         for profile, profile_bashrc in zip(
+             self._locations_manager.profiles_complex, self._profile_bashrc
+         ):
+             if profile_bashrc:
+                 bashrc_files.append(os.path.join(profile.location, "profile.bashrc"))
+             if profile in self._pbashrcdict:
+                 cpdict = self._pbashrcdict[profile].get(cp)
+                 if cpdict:
+                     bashrc_matches = ordered_by_atom_specificity(cpdict, cpv_slot)
+                     for x in bashrc_matches:
+                         bashrc_files.extend(x)
+ 
+         self._pbashrc = tuple(bashrc_files)
+ 
+         protected_pkg_keys = set(pkg_configdict)
+         protected_pkg_keys.discard("USE")
+ 
+         # If there are _any_ package.env settings for this package
+         # then it automatically triggers config.reset(), in order
+         # to account for possible incremental interaction between
+         # package.use, package.env, and overrides from the calling
+         # environment (configdict['env']).
+         if self._penv:
+             has_changed = True
+             # USE is special because package.use settings override
+             # it. Discard any package.use settings here and they'll
+             # be added back later.
+             pkg_configdict.pop("USE", None)
+             self._grab_pkg_env(
+                 self._penv, pkg_configdict, protected_keys=protected_pkg_keys
+             )
+ 
+             # Now add package.use settings, which override USE from
+             # package.env
+             if self.puse:
+                 if "USE" in pkg_configdict:
+                     pkg_configdict["USE"] = pkg_configdict["USE"] + " " + self.puse
+                 else:
+                     pkg_configdict["USE"] = self.puse
+ 
+         elif previous_penv:
+             has_changed = True
+ 
+         if not (
+             previous_iuse == iuse
+             and (previous_iuse_effective is not None) == eapi_attrs.iuse_effective
+         ):
+             has_changed = True
+ 
+         if has_changed:
+             # This can modify self.features due to package.env settings.
+             self.reset(keeping_pkg=1)
+ 
+         if "test" in self.features:
+             # This is independent of IUSE and RESTRICT, so that the same
+             # value can be shared between packages with different settings,
+             # which is important when evaluating USE conditional RESTRICT.
+             feature_use.append("test")
+ 
+         feature_use = " ".join(feature_use)
+         if feature_use != self.configdict["features"]["USE"]:
+             # Regenerate USE for evaluation of conditional RESTRICT.
+             self.configdict["features"]["USE"] = feature_use
+             self.reset(keeping_pkg=1)
+             has_changed = True
+ 
+         if explicit_iuse is None:
+             explicit_iuse = frozenset(x.lstrip("+-") for x in iuse.split())
+         if eapi_attrs.iuse_effective:
+             iuse_implicit_match = self._iuse_effective_match
+         else:
+             iuse_implicit_match = self._iuse_implicit_match
+ 
+         if pkg is None:
+             raw_properties = pkg_configdict.get("PROPERTIES")
+             raw_restrict = pkg_configdict.get("RESTRICT")
+         else:
+             raw_properties = pkg._raw_metadata["PROPERTIES"]
+             raw_restrict = pkg._raw_metadata["RESTRICT"]
+ 
+         restrict_test = False
+         if raw_restrict:
+             try:
+                 if built_use is not None:
+                     properties = use_reduce(
+                         raw_properties, uselist=built_use, flat=True
+                     )
+                     restrict = use_reduce(raw_restrict, uselist=built_use, flat=True)
+                 else:
+                     properties = use_reduce(
+                         raw_properties,
+                         uselist=frozenset(
+                             x
+                             for x in self["USE"].split()
+                             if x in explicit_iuse or iuse_implicit_match(x)
+                         ),
+                         flat=True,
+                     )
+                     restrict = use_reduce(
+                         raw_restrict,
+                         uselist=frozenset(
+                             x
+                             for x in self["USE"].split()
+                             if x in explicit_iuse or iuse_implicit_match(x)
+                         ),
+                         flat=True,
+                     )
+             except PortageException:
+                 pass
+             else:
+                 allow_test = self.get("ALLOW_TEST", "").split()
+                 restrict_test = (
+                     "test" in restrict
+                     and "all" not in allow_test
+                     and not ("test_network" in properties and "network" in allow_test)
+                 )
+ 
+         if restrict_test and "test" in self.features:
+             # Handle it like IUSE="-test", since features USE is
+             # independent of RESTRICT.
+             pkginternaluse_list.append("-test")
+             pkginternaluse = " ".join(pkginternaluse_list)
+             self.configdict["pkginternal"]["USE"] = pkginternaluse
+             # TODO: can we avoid that?
+             self.reset(keeping_pkg=1)
+             has_changed = True
+ 
+         env_configdict = self.configdict["env"]
+ 
+         # Ensure that "pkg" values are always preferred over "env" values.
+         # This must occur _after_ the above reset() call, since reset()
+         # copies values from self.backupenv.
+         for k in protected_pkg_keys:
+             env_configdict.pop(k, None)
+ 
+         lazy_vars = self._lazy_vars(built_use, self)
+         env_configdict.addLazySingleton(
+             "ACCEPT_LICENSE", lazy_vars.__getitem__, "ACCEPT_LICENSE"
+         )
+         env_configdict.addLazySingleton(
+             "PORTAGE_PROPERTIES", lazy_vars.__getitem__, "PORTAGE_PROPERTIES"
+         )
+         env_configdict.addLazySingleton(
+             "PORTAGE_RESTRICT", lazy_vars.__getitem__, "PORTAGE_RESTRICT"
+         )
+ 
+         if built_use is not None:
+             pkg_configdict["PORTAGE_BUILT_USE"] = " ".join(built_use)
+ 
+         # If reset() has not been called, it's safe to return
+         # early if IUSE has not changed.
+         if not has_changed:
+             return
+ 
+         # Filter out USE flags that aren't part of IUSE. This has to
+         # be done for every setcpv() call since practically every
+         # package has different IUSE.
+         use = set(self["USE"].split())
+         unfiltered_use = frozenset(use)
+ 
+         if eapi_attrs.iuse_effective:
+             portage_iuse = set(self._iuse_effective)
+             portage_iuse.update(explicit_iuse)
+             if built_use is not None:
+                 # When the binary package was built, the profile may have
+                 # had different IUSE_IMPLICIT settings, so any member of
+                 # the built USE setting is considered to be a member of
+                 # IUSE_EFFECTIVE (see bug 640318).
+                 portage_iuse.update(built_use)
+             self.configdict["pkg"]["IUSE_EFFECTIVE"] = " ".join(sorted(portage_iuse))
+ 
+             self.configdict["env"]["BASH_FUNC____in_portage_iuse%%"] = (
+                 "() { "
+                 "if [[ ${#___PORTAGE_IUSE_HASH[@]} -lt 1 ]]; then "
+                 "  declare -gA ___PORTAGE_IUSE_HASH=(%s); "
+                 "fi; "
+                 "[[ -n ${___PORTAGE_IUSE_HASH[$1]} ]]; "
+                 "}"
+             ) % " ".join('["%s"]=1' % x for x in portage_iuse)
+         else:
+             portage_iuse = self._get_implicit_iuse()
+             portage_iuse.update(explicit_iuse)
+ 
+             # _get_implicit_iuse() returns a set that includes regular
+             # expression patterns, so the (faster) associative-array hash
+             # used above cannot be used here.  Fall back to implementing
+             # ___in_portage_iuse() the older/slower way.
+ 
+             # PORTAGE_IUSE is not always needed so it's lazily evaluated.
+             self.configdict["env"].addLazySingleton(
+                 "PORTAGE_IUSE", _lazy_iuse_regex, portage_iuse
+             )
+             self.configdict["env"][
+                 "BASH_FUNC____in_portage_iuse%%"
+             ] = "() { [[ $1 =~ ${PORTAGE_IUSE} ]]; }"
+ 
+         ebuild_force_test = not restrict_test and self.get("EBUILD_FORCE_TEST") == "1"
+ 
+         if "test" in explicit_iuse or iuse_implicit_match("test"):
+             if "test" in self.features:
+                 if ebuild_force_test and "test" in self.usemask:
+                     self.usemask = frozenset(x for x in self.usemask if x != "test")
+             if restrict_test or ("test" in self.usemask and not ebuild_force_test):
+                 # "test" is in IUSE and USE=test is masked, so execution
+                 # of src_test() probably is not reliable. Therefore,
+                 # temporarily disable FEATURES=test just for this package.
+                 self["FEATURES"] = " ".join(x for x in self.features if x != "test")
+ 
+         # Allow _* flags from USE_EXPAND wildcards to pass through here.
+         use.difference_update(
+             [
+                 x
+                 for x in use
+                 if (x not in explicit_iuse and not iuse_implicit_match(x))
+                 and x[-2:] != "_*"
+             ]
+         )
+ 
+         # Use the calculated USE flags to regenerate the USE_EXPAND flags so
+         # that they are consistent. For optimal performance, use slice
+         # comparison instead of startswith().
+         use_expand_split = set(x.lower() for x in self.get("USE_EXPAND", "").split())
+         lazy_use_expand = self._lazy_use_expand(
+             self,
+             unfiltered_use,
+             use,
+             self.usemask,
+             portage_iuse,
+             use_expand_split,
+             self._use_expand_dict,
+         )
+ 
+         use_expand_iuses = dict((k, set()) for k in use_expand_split)
+         for x in portage_iuse:
+             x_split = x.split("_")
+             if len(x_split) == 1:
+                 continue
+             for i in range(len(x_split) - 1):
+                 k = "_".join(x_split[: i + 1])
+                 if k in use_expand_split:
+                     use_expand_iuses[k].add(x)
+                     break
+ 
+         for k, use_expand_iuse in use_expand_iuses.items():
+             if k + "_*" in use:
+                 use.update(x for x in use_expand_iuse if x not in usemask)
+             k = k.upper()
+             self.configdict["env"].addLazySingleton(k, lazy_use_expand.__getitem__, k)
+ 
+         for k in self.get("USE_EXPAND_UNPREFIXED", "").split():
+             var_split = self.get(k, "").split()
+             var_split = [x for x in var_split if x in use]
+             if var_split:
+                 self.configlist[-1][k] = " ".join(var_split)
+             elif k in self:
+                 self.configlist[-1][k] = ""
+ 
+         # Filtered for the ebuild environment. Store this in a separate
+         # attribute since we still want to be able to see global USE
+         # settings for things like emerge --info.
+ 
+         self.configdict["env"]["PORTAGE_USE"] = " ".join(
+             sorted(x for x in use if x[-2:] != "_*")
+         )
+ 
+         # Clear the eapi cache here rather than in the constructor, since
+         # setcpv triggers lazy instantiation of things like _use_manager.
+         _eapi_cache.clear()
+ 
+     def _grab_pkg_env(self, penv, container, protected_keys=None):
+         if protected_keys is None:
+             protected_keys = ()
+         abs_user_config = os.path.join(self["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
+         non_user_variables = self._non_user_variables
+         # Make a copy since we don't want per-package settings
+         # to pollute the global expand_map.
+         expand_map = self._expand_map.copy()
+         incrementals = self.incrementals
+         for envname in penv:
+             penvfile = os.path.join(abs_user_config, "env", envname)
+             penvconfig = getconfig(
+                 penvfile,
+                 tolerant=self._tolerant,
+                 allow_sourcing=True,
+                 expand=expand_map,
+             )
+             if penvconfig is None:
+                 writemsg(
+                     "!!! %s references non-existent file: %s\n"
+                     % (os.path.join(abs_user_config, "package.env"), penvfile),
+                     noiselevel=-1,
+                 )
+             else:
+                 for k, v in penvconfig.items():
+                     if k in protected_keys or k in non_user_variables:
+                         writemsg(
+                             "!!! Illegal variable "
+                             + "'%s' assigned in '%s'\n" % (k, penvfile),
+                             noiselevel=-1,
+                         )
+                     elif k in incrementals:
+                         if k in container:
+                             container[k] = container[k] + " " + v
+                         else:
+                             container[k] = v
+                     else:
+                         container[k] = v
+ 
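The incremental-versus-replace merge that _grab_pkg_env() performs for each
/etc/portage/env/<name> file can be shown on plain dicts; the variable names
and values below are invented:

# Plain-dict sketch of how per-package env settings are merged: incremental
# variables are appended, everything else simply overrides the old value.
INCREMENTALS = {"USE", "FEATURES", "ACCEPT_KEYWORDS"}

def merge_pkg_env(container, overrides, incrementals=INCREMENTALS):
    for key, value in overrides.items():
        if key in incrementals and key in container:
            container[key] = container[key] + " " + value
        else:
            container[key] = value
    return container

conf = {"USE": "ssl", "MAKEOPTS": "-j4"}
merge_pkg_env(conf, {"USE": "-ssl debug", "MAKEOPTS": "-j1"})
print(conf)  # -> {'USE': 'ssl -ssl debug', 'MAKEOPTS': '-j1'}
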
+     def _iuse_effective_match(self, flag):
+         return flag in self._iuse_effective
+ 
+     def _calc_iuse_effective(self):
+         """
+         Beginning with EAPI 5, IUSE_EFFECTIVE is defined by PMS.
+         """
+         iuse_effective = []
+         iuse_effective.extend(self.get("IUSE_IMPLICIT", "").split())
+ 
+         # USE_EXPAND_IMPLICIT should contain things like ARCH, ELIBC,
+         # KERNEL, and USERLAND.
+         use_expand_implicit = frozenset(self.get("USE_EXPAND_IMPLICIT", "").split())
+ 
+         # USE_EXPAND_UNPREFIXED should contain at least ARCH, and
+         # USE_EXPAND_VALUES_ARCH should contain all valid ARCH flags.
+         for v in self.get("USE_EXPAND_UNPREFIXED", "").split():
+             if v not in use_expand_implicit:
+                 continue
+             iuse_effective.extend(self.get("USE_EXPAND_VALUES_" + v, "").split())
+ 
+         use_expand = frozenset(self.get("USE_EXPAND", "").split())
+         for v in use_expand_implicit:
+             if v not in use_expand:
+                 continue
+             lower_v = v.lower()
+             for x in self.get("USE_EXPAND_VALUES_" + v, "").split():
+                 iuse_effective.append(lower_v + "_" + x)
+ 
+         return frozenset(iuse_effective)
+ 
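A worked sketch of the IUSE_EFFECTIVE computation above, using invented profile values; the expected result is {"prefix", "amd64", "arm64", "elibc_glibc", "elibc_musl"}.

    # Sketch only: same steps as _calc_iuse_effective(), on a plain dict.
    settings = {
        "IUSE_IMPLICIT": "prefix",
        "USE_EXPAND_IMPLICIT": "ARCH ELIBC",
        "USE_EXPAND_UNPREFIXED": "ARCH",
        "USE_EXPAND": "ELIBC",
        "USE_EXPAND_VALUES_ARCH": "amd64 arm64",
        "USE_EXPAND_VALUES_ELIBC": "glibc musl",
    }
    effective = set(settings["IUSE_IMPLICIT"].split())
    implicit = set(settings["USE_EXPAND_IMPLICIT"].split())
    for v in settings["USE_EXPAND_UNPREFIXED"].split():
        if v in implicit:
            effective.update(settings.get("USE_EXPAND_VALUES_" + v, "").split())
    for v in implicit & set(settings["USE_EXPAND"].split()):
        for x in settings.get("USE_EXPAND_VALUES_" + v, "").split():
            effective.add(v.lower() + "_" + x)
    print(sorted(effective))
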
+     def _get_implicit_iuse(self):
+         """
+         Prior to EAPI 5, these flags are considered to
+         be implicit members of IUSE:
+           * Flags derived from ARCH
+           * Flags derived from USE_EXPAND_HIDDEN variables
+           * Masked flags, such as those from {,package}use.mask
+           * Forced flags, such as those from {,package}use.force
+           * build and bootstrap flags used by bootstrap.sh
+         """
+         iuse_implicit = set()
+         # Flags derived from ARCH.
+         arch = self.configdict["defaults"].get("ARCH")
+         if arch:
+             iuse_implicit.add(arch)
+         iuse_implicit.update(self.get("PORTAGE_ARCHLIST", "").split())
+ 
+         # Flags derived from USE_EXPAND_HIDDEN variables
+         # such as ELIBC, KERNEL, and USERLAND.
+         use_expand_hidden = self.get("USE_EXPAND_HIDDEN", "").split()
+         for x in use_expand_hidden:
+             iuse_implicit.add(x.lower() + "_.*")
+ 
+         # Flags that have been masked or forced.
+         iuse_implicit.update(self.usemask)
+         iuse_implicit.update(self.useforce)
+ 
+         # build and bootstrap flags used by bootstrap.sh
+         iuse_implicit.add("build")
+         iuse_implicit.add("bootstrap")
+ 
+         return iuse_implicit
+ 
+     def _getUseMask(self, pkg, stable=None):
+         return self._use_manager.getUseMask(pkg, stable=stable)
+ 
+     def _getUseForce(self, pkg, stable=None):
+         return self._use_manager.getUseForce(pkg, stable=stable)
+ 
+     def _getMaskAtom(self, cpv, metadata):
+         """
+         Take a package and return a matching package.mask atom, or None if no
+         such atom exists or it has been cancelled by package.unmask.
+ 
+         @param cpv: The package name
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: String
+         @return: A matching atom string or None if one is not found.
+         """
+         return self._mask_manager.getMaskAtom(
+             cpv, metadata["SLOT"], metadata.get("repository")
+         )
+ 
+     def _getRawMaskAtom(self, cpv, metadata):
+         """
+         Take a package and return a matching package.mask atom, or None if no
+         such atom exists. Unlike _getMaskAtom, this does not take package.unmask
+         into account.
+ 
+         @param cpv: The package name
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: String
+         @return: A matching atom string or None if one is not found.
+         """
+         return self._mask_manager.getRawMaskAtom(
+             cpv, metadata["SLOT"], metadata.get("repository")
+         )
+ 
+     def _getProfileMaskAtom(self, cpv, metadata):
+         """
+         Take a package and return a matching profile atom, or None if no
+         such atom exists. Note that a profile atom may or may not have a "*"
+         prefix.
+ 
+         @param cpv: The package name
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: String
+         @return: A matching profile atom string or None if one is not found.
+         """
+ 
+         warnings.warn(
+             "The config._getProfileMaskAtom() method is deprecated.",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+ 
+         cp = cpv_getkey(cpv)
+         profile_atoms = self.prevmaskdict.get(cp)
+         if profile_atoms:
+             pkg = "".join((cpv, _slot_separator, metadata["SLOT"]))
+             repo = metadata.get("repository")
+             if repo and repo != Package.UNKNOWN_REPO:
+                 pkg = "".join((pkg, _repo_separator, repo))
+             pkg_list = [pkg]
+             for x in profile_atoms:
+                 if match_from_list(x, pkg_list):
+                     continue
+                 return x
+         return None
+ 
+     def _isStable(self, pkg):
+         return self._keywords_manager.isStable(
+             pkg,
+             self.get("ACCEPT_KEYWORDS", ""),
+             self.configdict["backupenv"].get("ACCEPT_KEYWORDS", ""),
+         )
+ 
+     def _getKeywords(self, cpv, metadata):
+         return self._keywords_manager.getKeywords(
+             cpv,
+             metadata["SLOT"],
+             metadata.get("KEYWORDS", ""),
+             metadata.get("repository"),
+         )
+ 
+     def _getMissingKeywords(self, cpv, metadata):
+         """
+         Take a package and return a list of any KEYWORDS that the user may
+         need to accept for the given package. If the KEYWORDS are empty
+         and the ** keyword has not been accepted, the returned list will
+         contain ** alone (in order to distinguish from the case of "none
+         missing").
+ 
+         @param cpv: The package name (for package.keywords support)
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: List
+         @return: A list of KEYWORDS that have not been accepted.
+         """
+ 
+         # Hack: Need to check the env directly here, since otherwise stacking
+         # doesn't work properly because negative values are lost in the config
+         # object (bug #139600)
+         backuped_accept_keywords = self.configdict["backupenv"].get(
+             "ACCEPT_KEYWORDS", ""
+         )
+         global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
+ 
+         return self._keywords_manager.getMissingKeywords(
+             cpv,
+             metadata["SLOT"],
+             metadata.get("KEYWORDS", ""),
+             metadata.get("repository"),
+             global_accept_keywords,
+             backuped_accept_keywords,
+         )
+ 
+     def _getRawMissingKeywords(self, cpv, metadata):
+         """
+         Take a package and return a list of any KEYWORDS that the user may
+         need to accept for the given package. If the KEYWORDS are empty,
+         the returned list will contain ** alone (in order to distinguish
+         from the case of "none missing").  This DOES NOT apply any user config
+         package.accept_keywords acceptance.
+ 
+         @param cpv: The package name (for package.keywords support)
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: List
+         @return: lists of KEYWORDS that have not been accepted
+         and the keywords it looked for.
+         """
+         return self._keywords_manager.getRawMissingKeywords(
+             cpv,
+             metadata["SLOT"],
+             metadata.get("KEYWORDS", ""),
+             metadata.get("repository"),
+             self.get("ACCEPT_KEYWORDS", ""),
+         )
+ 
+     def _getPKeywords(self, cpv, metadata):
+         global_accept_keywords = self.get("ACCEPT_KEYWORDS", "")
+ 
+         return self._keywords_manager.getPKeywords(
+             cpv, metadata["SLOT"], metadata.get("repository"), global_accept_keywords
+         )
+ 
+     def _getMissingLicenses(self, cpv, metadata):
+         """
+         Take a LICENSE string and return a list of any licenses that the user
+         may need to accept for the given package.  The returned list will not
+         contain any licenses that have already been accepted.  This method
+         can throw an InvalidDependString exception.
+ 
+         @param cpv: The package name (for package.license support)
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: List
+         @return: A list of licenses that have not been accepted.
+         """
+         return self._license_manager.getMissingLicenses(
+             cpv,
+             metadata["USE"],
+             metadata["LICENSE"],
+             metadata["SLOT"],
+             metadata.get("repository"),
+         )
+ 
+     def _getMissingProperties(self, cpv, metadata):
+         """
+         Take a PROPERTIES string and return a list of any properties the user
+         may need to accept for the given package.  The returned list will not
+         contain any properties that have already been accepted.  This method
+         can throw an InvalidDependString exception.
+ 
+         @param cpv: The package name (for package.properties support)
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: List
+         @return: A list of properties that have not been accepted.
+         """
+         accept_properties = self._accept_properties
+         try:
+             cpv.slot
+         except AttributeError:
+             cpv = _pkg_str(cpv, metadata=metadata, settings=self)
+         cp = cpv_getkey(cpv)
+         cpdict = self._ppropertiesdict.get(cp)
+         if cpdict:
+             pproperties_list = ordered_by_atom_specificity(cpdict, cpv)
+             if pproperties_list:
+                 accept_properties = list(self._accept_properties)
+                 for x in pproperties_list:
+                     accept_properties.extend(x)
+ 
+         properties_str = metadata.get("PROPERTIES", "")
+         properties = set(use_reduce(properties_str, matchall=1, flat=True))
+ 
+         acceptable_properties = set()
+         for x in accept_properties:
+             if x == "*":
+                 acceptable_properties.update(properties)
+             elif x == "-*":
+                 acceptable_properties.clear()
+             elif x[:1] == "-":
+                 acceptable_properties.discard(x[1:])
+             else:
+                 acceptable_properties.add(x)
+ 
+         if "?" in properties_str:
+             use = metadata["USE"].split()
+         else:
+             use = []
+ 
+         return [
+             x
+             for x in use_reduce(properties_str, uselist=use, flat=True)
+             if x not in acceptable_properties
+         ]
+ 
+     def _getMissingRestrict(self, cpv, metadata):
+         """
+         Take a RESTRICT string and return a list of any tokens the user
+         may need to accept for the given package.  The returned list will not
+         contain any tokens that have already been accepted.  This method
+         can throw an InvalidDependString exception.
+ 
+         @param cpv: The package name (for package.accept_restrict support)
+         @type cpv: String
+         @param metadata: A dictionary of raw package metadata
+         @type metadata: dict
+         @rtype: List
+         @return: A list of tokens that have not been accepted.
+         """
+         accept_restrict = self._accept_restrict
+         try:
+             cpv.slot
+         except AttributeError:
+             cpv = _pkg_str(cpv, metadata=metadata, settings=self)
+         cp = cpv_getkey(cpv)
+         cpdict = self._paccept_restrict.get(cp)
+         if cpdict:
+             paccept_restrict_list = ordered_by_atom_specificity(cpdict, cpv)
+             if paccept_restrict_list:
+                 accept_restrict = list(self._accept_restrict)
+                 for x in paccept_restrict_list:
+                     accept_restrict.extend(x)
+ 
+         restrict_str = metadata.get("RESTRICT", "")
+         all_restricts = set(use_reduce(restrict_str, matchall=1, flat=True))
+ 
+         acceptable_restricts = set()
+         for x in accept_restrict:
+             if x == "*":
+                 acceptable_restricts.update(all_restricts)
+             elif x == "-*":
+                 acceptable_restricts.clear()
+             elif x[:1] == "-":
+                 acceptable_restricts.discard(x[1:])
+             else:
+                 acceptable_restricts.add(x)
+ 
+         if "?" in restrict_str:
+             use = metadata["USE"].split()
+         else:
+             use = []
+ 
+         return [
+             x
+             for x in use_reduce(restrict_str, uselist=use, flat=True)
+             if x not in acceptable_restricts
+         ]
+ 
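Both _getMissingProperties and _getMissingRestrict rely on the same token algebra for ACCEPT_PROPERTIES and ACCEPT_RESTRICT: "*" accepts everything declared, "-*" resets the set, and "-token" retracts a single entry. A condensed sketch with invented tokens:

    # Sketch only: the acceptance algebra shared by the two methods above.
    def acceptable(accept_tokens, all_tokens):
        acc = set()
        for x in accept_tokens:
            if x == "*":
                acc.update(all_tokens)   # accept everything declared
            elif x == "-*":
                acc.clear()              # reset
            elif x.startswith("-"):
                acc.discard(x[1:])       # retract one token
            else:
                acc.add(x)
        return acc

    tokens = {"live", "interactive"}
    print(acceptable(["*", "-interactive"], tokens))  # {'live'}
    print(acceptable(["-*", "live"], tokens))         # {'live'}
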
+     def _accept_chost(self, cpv, metadata):
+         """
+         @return True if pkg CHOST is accepted, False otherwise.
+         """
+         if self._accept_chost_re is None:
+             accept_chost = self.get("ACCEPT_CHOSTS", "").split()
+             if not accept_chost:
+                 chost = self.get("CHOST")
+                 if chost:
+                     accept_chost.append(chost)
+             if not accept_chost:
+                 self._accept_chost_re = re.compile(".*")
+             elif len(accept_chost) == 1:
+                 try:
+                     self._accept_chost_re = re.compile(r"^%s$" % accept_chost[0])
+                 except re.error as e:
+                     writemsg(
+                         _("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n")
+                         % (accept_chost[0], e),
+                         noiselevel=-1,
+                     )
+                     self._accept_chost_re = re.compile("^$")
+             else:
+                 try:
+                     self._accept_chost_re = re.compile(
+                         r"^(%s)$" % "|".join(accept_chost)
+                     )
+                 except re.error as e:
+                     writemsg(
+                         _("!!! Invalid ACCEPT_CHOSTS value: '%s': %s\n")
+                         % (" ".join(accept_chost), e),
+                         noiselevel=-1,
+                     )
+                     self._accept_chost_re = re.compile("^$")
+ 
+         pkg_chost = metadata.get("CHOST", "")
+         return not pkg_chost or self._accept_chost_re.match(pkg_chost) is not None
+ 
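The ACCEPT_CHOSTS handling in _accept_chost compiles the accepted values into a single anchored regular expression, so each entry may itself be a pattern. A small sketch with invented values:

    import re

    # Sketch only: two accepted CHOST patterns joined into one alternation.
    accept_chost = ["x86_64-pc-linux-gnu", "i.86-pc-linux-gnu"]
    accept_chost_re = re.compile(r"^(%s)$" % "|".join(accept_chost))
    print(bool(accept_chost_re.match("i686-pc-linux-gnu")))          # True
    print(bool(accept_chost_re.match("aarch64-unknown-linux-gnu")))  # False
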
+     def setinst(self, mycpv, mydbapi):
+         """This used to update the preferences for old-style virtuals.
+         It is a no-op now."""
+         pass
+ 
+     def reload(self):
+         """Reload things like /etc/profile.env that can change during runtime."""
+         env_d_filename = os.path.join(self["EROOT"], "etc", "profile.env")
+         self.configdict["env.d"].clear()
+         env_d = getconfig(env_d_filename, tolerant=self._tolerant, expand=False)
+         if env_d:
+             # env_d will be None if profile.env doesn't exist.
+             for k in self._env_d_blacklist:
+                 env_d.pop(k, None)
+             self.configdict["env.d"].update(env_d)
+ 
+     def regenerate(self, useonly=0, use_cache=None):
+         """
+         Regenerate settings
+         This involves regenerating valid USE flags, re-expanding USE_EXPAND flags,
+         re-stacking USE flags (-flag and -*), as well as any other INCREMENTAL
+         variables.  This also updates the env.d configdict; useful in case an ebuild
+         changes the environment.
+ 
+         If FEATURES has already been stacked, it is not stacked twice.
+ 
+         @param useonly: Only regenerate USE flags (not any other incrementals)
+         @type useonly: Boolean
+         @rtype: None
+         """
+ 
+         if use_cache is not None:
+             warnings.warn(
+                 "The use_cache parameter for config.regenerate() is deprecated and without effect.",
+                 DeprecationWarning,
+                 stacklevel=2,
+             )
+ 
+         self.modifying()
+ 
+         if useonly:
+             myincrementals = ["USE"]
+         else:
+             myincrementals = self.incrementals
+         myincrementals = set(myincrementals)
+ 
+         # Process USE last because it depends on USE_EXPAND which is also
+         # an incremental!
+         myincrementals.discard("USE")
+ 
+         mydbs = self.configlist[:-1]
+         mydbs.append(self.backupenv)
+ 
+         # ACCEPT_LICENSE is a lazily evaluated incremental, so that * can be
+         # used to match all licenses without ever having to explicitly expand
+         # it to all licenses.
+         if self.local_config:
+             mysplit = []
+             for curdb in mydbs:
+                 mysplit.extend(curdb.get("ACCEPT_LICENSE", "").split())
+             mysplit = prune_incremental(mysplit)
+             accept_license_str = " ".join(mysplit) or "* -@EULA"
+             self.configlist[-1]["ACCEPT_LICENSE"] = accept_license_str
+             self._license_manager.set_accept_license_str(accept_license_str)
+         else:
+             # repoman will accept any license
+             self._license_manager.set_accept_license_str("*")
+ 
+         # ACCEPT_PROPERTIES works like ACCEPT_LICENSE, without groups
+         if self.local_config:
+             mysplit = []
+             for curdb in mydbs:
+                 mysplit.extend(curdb.get("ACCEPT_PROPERTIES", "").split())
+             mysplit = prune_incremental(mysplit)
+             self.configlist[-1]["ACCEPT_PROPERTIES"] = " ".join(mysplit)
+             if tuple(mysplit) != self._accept_properties:
+                 self._accept_properties = tuple(mysplit)
+         else:
+             # repoman will accept any property
+             self._accept_properties = ("*",)
+ 
+         if self.local_config:
+             mysplit = []
+             for curdb in mydbs:
+                 mysplit.extend(curdb.get("ACCEPT_RESTRICT", "").split())
+             mysplit = prune_incremental(mysplit)
+             self.configlist[-1]["ACCEPT_RESTRICT"] = " ".join(mysplit)
+             if tuple(mysplit) != self._accept_restrict:
+                 self._accept_restrict = tuple(mysplit)
+         else:
+             # repoman will accept any RESTRICT token
+             self._accept_restrict = ("*",)
+ 
+         increment_lists = {}
+         for k in myincrementals:
+             incremental_list = []
+             increment_lists[k] = incremental_list
+             for curdb in mydbs:
+                 v = curdb.get(k)
+                 if v is not None:
+                     incremental_list.append(v.split())
+ 
+         if "FEATURES" in increment_lists:
+             increment_lists["FEATURES"].append(self._features_overrides)
+ 
+         myflags = set()
+         for mykey, incremental_list in increment_lists.items():
+ 
+             myflags.clear()
+             for mysplit in incremental_list:
+ 
+                 for x in mysplit:
+                     if x == "-*":
+                         # "-*" is a special "minus" token that means "unset all settings",
+                         # so USE="-* gnome" will have *just* gnome enabled.
+                         myflags.clear()
+                         continue
+ 
+                     if x[0] == "+":
+                         # Not legal. People assume too much. Complain.
+                         writemsg(
+                             colorize(
+                                 "BAD",
+                                 _("%s values should not start with a '+': %s")
+                                 % (mykey, x),
+                             )
+                             + "\n",
+                             noiselevel=-1,
+                         )
+                         x = x[1:]
+                         if not x:
+                             continue
+ 
+                     if x[0] == "-":
+                         myflags.discard(x[1:])
+                         continue
+ 
+                     # We got here, so add it now.
+                     myflags.add(x)
+ 
+             # store setting in last element of configlist, the original environment:
+             if myflags or mykey in self:
+                 self.configlist[-1][mykey] = " ".join(sorted(myflags))
+ 
+         # Do the USE calculation last because it depends on USE_EXPAND.
+         use_expand = self.get("USE_EXPAND", "").split()
+         use_expand_dict = self._use_expand_dict
+         use_expand_dict.clear()
+         for k in use_expand:
+             v = self.get(k)
+             if v is not None:
+                 use_expand_dict[k] = v
+ 
+         use_expand_unprefixed = self.get("USE_EXPAND_UNPREFIXED", "").split()
+ 
+         # In order to best accommodate the long-standing practice of
+         # setting default USE_EXPAND variables in the profile's
+         # make.defaults, we translate these variables into their
+         # equivalent USE flags so that useful incremental behavior
+         # is enabled (for sub-profiles).
+         configdict_defaults = self.configdict["defaults"]
+         if self._make_defaults is not None:
+             for i, cfg in enumerate(self._make_defaults):
+                 if not cfg:
+                     self.make_defaults_use.append("")
+                     continue
+                 use = cfg.get("USE", "")
+                 expand_use = []
+ 
+                 for k in use_expand_unprefixed:
+                     v = cfg.get(k)
+                     if v is not None:
+                         expand_use.extend(v.split())
+ 
+                 for k in use_expand_dict:
+                     v = cfg.get(k)
+                     if v is None:
+                         continue
+                     prefix = k.lower() + "_"
+                     for x in v.split():
+                         if x[:1] == "-":
+                             expand_use.append("-" + prefix + x[1:])
+                         else:
+                             expand_use.append(prefix + x)
+ 
+                 if expand_use:
+                     expand_use.append(use)
+                     use = " ".join(expand_use)
+                 self.make_defaults_use.append(use)
+             self.make_defaults_use = tuple(self.make_defaults_use)
+             # Preserve both positive and negative flags here, since
+             # negative flags may later interact with other flags pulled
+             # in via USE_ORDER.
+             configdict_defaults["USE"] = " ".join(filter(None, self.make_defaults_use))
+             # Set to None so this code only runs once.
+             self._make_defaults = None
+ 
+         if not self.uvlist:
+             for x in self["USE_ORDER"].split(":"):
+                 if x in self.configdict:
+                     self.uvlist.append(self.configdict[x])
+             self.uvlist.reverse()
+ 
+         # For optimal performance, use slice
+         # comparison instead of startswith().
+         iuse = self.configdict["pkg"].get("IUSE")
+         if iuse is not None:
+             iuse = [x.lstrip("+-") for x in iuse.split()]
+         myflags = set()
+         for curdb in self.uvlist:
+ 
+             for k in use_expand_unprefixed:
+                 v = curdb.get(k)
+                 if v is None:
+                     continue
+                 for x in v.split():
+                     if x[:1] == "-":
+                         myflags.discard(x[1:])
+                     else:
+                         myflags.add(x)
+ 
+             cur_use_expand = [x for x in use_expand if x in curdb]
+             mysplit = curdb.get("USE", "").split()
+             if not mysplit and not cur_use_expand:
+                 continue
+             for x in mysplit:
+                 if x == "-*":
+                     myflags.clear()
+                     continue
+ 
+                 if x[0] == "+":
+                     writemsg(
+                         colorize(
+                             "BAD",
+                             _("USE flags should not start " "with a '+': %s\n") % x,
+                         ),
+                         noiselevel=-1,
+                     )
+                     x = x[1:]
+                     if not x:
+                         continue
+ 
+                 if x[0] == "-":
+                     if x[-2:] == "_*":
+                         prefix = x[1:-1]
+                         prefix_len = len(prefix)
+                         myflags.difference_update(
+                             [y for y in myflags if y[:prefix_len] == prefix]
+                         )
+                     myflags.discard(x[1:])
+                     continue
+ 
+                 if iuse is not None and x[-2:] == "_*":
+                     # Expand wildcards here, so that cases like
+                     # USE="linguas_* -linguas_en_US" work correctly.
+                     prefix = x[:-1]
+                     prefix_len = len(prefix)
+                     has_iuse = False
+                     for y in iuse:
+                         if y[:prefix_len] == prefix:
+                             has_iuse = True
+                             myflags.add(y)
+                     if not has_iuse:
+                         # There are no matching IUSE, so allow the
+                         # wildcard to pass through. This allows
+                         # linguas_* to trigger unset LINGUAS in
+                         # cases when no linguas_ flags are in IUSE.
+                         myflags.add(x)
+                 else:
+                     myflags.add(x)
+ 
+             if curdb is configdict_defaults:
+                 # USE_EXPAND flags from make.defaults are handled
+                 # earlier, in order to provide useful incremental
+                 # behavior (for sub-profiles).
+                 continue
+ 
+             for var in cur_use_expand:
+                 var_lower = var.lower()
+                 is_not_incremental = var not in myincrementals
+                 if is_not_incremental:
+                     prefix = var_lower + "_"
+                     prefix_len = len(prefix)
+                     for x in list(myflags):
+                         if x[:prefix_len] == prefix:
+                             myflags.remove(x)
+                 for x in curdb[var].split():
+                     if x[0] == "+":
+                         if is_not_incremental:
+                             writemsg(
+                                 colorize(
+                                     "BAD",
+                                     _(
+                                         "Invalid '+' "
+                                         "operator in non-incremental variable "
+                                         "'%s': '%s'\n"
+                                     )
+                                     % (var, x),
+                                 ),
+                                 noiselevel=-1,
+                             )
+                             continue
+                         else:
+                             writemsg(
+                                 colorize(
+                                     "BAD",
+                                     _(
+                                         "Invalid '+' "
+                                         "operator in incremental variable "
+                                         "'%s': '%s'\n"
+                                     )
+                                     % (var, x),
+                                 ),
+                                 noiselevel=-1,
+                             )
+                         x = x[1:]
+                     if x[0] == "-":
+                         if is_not_incremental:
+                             writemsg(
+                                 colorize(
+                                     "BAD",
+                                     _(
+                                         "Invalid '-' "
+                                         "operator in non-incremental variable "
+                                         "'%s': '%s'\n"
+                                     )
+                                     % (var, x),
+                                 ),
+                                 noiselevel=-1,
+                             )
+                             continue
+                         myflags.discard(var_lower + "_" + x[1:])
+                         continue
+                     myflags.add(var_lower + "_" + x)
+ 
+         if hasattr(self, "features"):
+             self.features._features.clear()
+         else:
+             self.features = features_set(self)
+         self.features._features.update(self.get("FEATURES", "").split())
+         self.features._sync_env_var()
+         self.features._validate()
+ 
+         myflags.update(self.useforce)
+         arch = self.configdict["defaults"].get("ARCH")
+         if arch:
+             myflags.add(arch)
+ 
+         myflags.difference_update(self.usemask)
+         self.configlist[-1]["USE"] = " ".join(sorted(myflags))
+ 
+         if self.mycpv is None:
+             # Generate global USE_EXPAND variables settings that are
+             # consistent with USE, for display by emerge --info. For
+             # package instances, these are instead generated via
+             # setcpv().
+             for k in use_expand:
+                 prefix = k.lower() + "_"
+                 prefix_len = len(prefix)
+                 expand_flags = set(
+                     x[prefix_len:] for x in myflags if x[:prefix_len] == prefix
+                 )
+                 var_split = use_expand_dict.get(k, "").split()
+                 var_split = [x for x in var_split if x in expand_flags]
+                 var_split.extend(sorted(expand_flags.difference(var_split)))
+                 if var_split:
+                     self.configlist[-1][k] = " ".join(var_split)
+                 elif k in self:
+                     self.configlist[-1][k] = ""
+ 
+             for k in use_expand_unprefixed:
+                 var_split = self.get(k, "").split()
+                 var_split = [x for x in var_split if x in myflags]
+                 if var_split:
+                     self.configlist[-1][k] = " ".join(var_split)
+                 elif k in self:
+                     self.configlist[-1][k] = ""
+ 
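The core of regenerate() is the incremental stacking loop above. A condensed sketch with invented configuration layers (the '+' warning path, USE_EXPAND translation and wildcard handling are omitted):

    # Sketch only: "-*" discards everything stacked so far, "-flag" removes
    # a single flag, anything else is added.
    layers = [
        "X alsa nls",   # profile make.defaults
        "-* gtk nls",   # make.conf
        "-nls qt5",     # environment
    ]
    flags = set()
    for layer in layers:
        for x in layer.split():
            if x == "-*":
                flags.clear()
            elif x.startswith("-"):
                flags.discard(x[1:])
            else:
                flags.add(x)
    print(" ".join(sorted(flags)))  # gtk qt5
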
+     @property
+     def virts_p(self):
+         warnings.warn(
+             "portage config.virts_p attribute "
+             + "is deprecated, use config.get_virts_p()",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+         return self.get_virts_p()
+ 
+     @property
+     def virtuals(self):
+         warnings.warn(
+             "portage config.virtuals attribute "
+             + "is deprecated, use config.getvirtuals()",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+         return self.getvirtuals()
+ 
+     def get_virts_p(self):
+         # Ensure that we don't trigger the _treeVirtuals
+         # assertion in VirtualsManager._compile_virtuals().
+         self.getvirtuals()
+         return self._virtuals_manager.get_virts_p()
+ 
+     def getvirtuals(self):
+         if self._virtuals_manager._treeVirtuals is None:
+             # Hack around the fact that VirtualsManager needs a vartree
+             # and vartree needs a config instance.
+             # This code should be part of VirtualsManager.getvirtuals().
+             if self.local_config:
+                 temp_vartree = vartree(settings=self)
+                 self._virtuals_manager._populate_treeVirtuals(temp_vartree)
+             else:
+                 self._virtuals_manager._treeVirtuals = {}
+ 
+         return self._virtuals_manager.getvirtuals()
+ 
+     def _populate_treeVirtuals_if_needed(self, vartree):
+         """Reduce the provides into a list by CP."""
+         if self._virtuals_manager._treeVirtuals is None:
+             if self.local_config:
+                 self._virtuals_manager._populate_treeVirtuals(vartree)
+             else:
+                 self._virtuals_manager._treeVirtuals = {}
+ 
+     def __delitem__(self, mykey):
+         self.pop(mykey)
+ 
+     def __getitem__(self, key):
+         try:
+             return self._getitem(key)
+         except KeyError:
+             if portage._internal_caller:
+                 stack = (
+                     traceback.format_stack()[:-1]
+                     + traceback.format_exception(*sys.exc_info())[1:]
+                 )
+                 try:
+                     # Ensure that output is written to terminal.
+                     with open("/dev/tty", "w") as f:
+                         f.write("=" * 96 + "\n")
+                         f.write(
+                             "=" * 8
+                             + " Traceback for invalid call to portage.package.ebuild.config.config.__getitem__ "
+                             + "=" * 8
+                             + "\n"
+                         )
+                         f.writelines(stack)
+                         f.write("=" * 96 + "\n")
+                 except Exception:
+                     pass
+                 raise
+             else:
+                 warnings.warn(
+                     _("Passing nonexistent key %r to %s is deprecated. Use %s instead.")
+                     % (
+                         key,
+                         "portage.package.ebuild.config.config.__getitem__",
+                         "portage.package.ebuild.config.config.get",
+                     ),
+                     DeprecationWarning,
+                     stacklevel=2,
+                 )
+                 return ""
+ 
+     def _getitem(self, mykey):
+ 
+         if mykey in self._constant_keys:
+             # These two point to temporary values when
+             # portage plans to update itself.
+             if mykey == "PORTAGE_BIN_PATH":
+                 return portage._bin_path
+             if mykey == "PORTAGE_PYM_PATH":
+                 return portage._pym_path
+ 
+             if mykey == "PORTAGE_PYTHONPATH":
+                 value = [
+                     x for x in self.backupenv.get("PYTHONPATH", "").split(":") if x
+                 ]
+                 need_pym_path = True
+                 if value:
+                     try:
+                         need_pym_path = not os.path.samefile(
+                             value[0], portage._pym_path
+                         )
+                     except OSError:
+                         pass
+                 if need_pym_path:
+                     value.insert(0, portage._pym_path)
+                 return ":".join(value)
+ 
+             if mykey == "PORTAGE_GID":
+                 return "%s" % portage_gid
+ 
+         for d in self.lookuplist:
+             try:
+                 return d[mykey]
+             except KeyError:
+                 pass
+ 
+         deprecated_key = self._deprecated_keys.get(mykey)
+         if deprecated_key is not None:
+             value = self._getitem(deprecated_key)
+             # warnings.warn(_("Key %s has been renamed to %s. Please ",
+             # 	"update your configuration") % (deprecated_key, mykey),
+             # 	UserWarning)
+             return value
+ 
+         raise KeyError(mykey)
+ 
+     def get(self, k, x=None):
+         try:
+             return self._getitem(k)
+         except KeyError:
+             return x
+ 
+     def pop(self, key, *args):
+         self.modifying()
+         if len(args) > 1:
+             raise TypeError(
+                 "pop expected at most 2 arguments, got " + repr(1 + len(args))
+             )
+         v = self
+         for d in reversed(self.lookuplist):
+             v = d.pop(key, v)
+         if v is self:
+             if args:
+                 return args[0]
+             raise KeyError(key)
+         return v
+ 
+     def __contains__(self, mykey):
+         """Called to implement membership test operators (in and not in)."""
+         try:
+             self._getitem(mykey)
+         except KeyError:
+             return False
+         else:
+             return True
+ 
+     def setdefault(self, k, x=None):
+         v = self.get(k)
+         if v is not None:
+             return v
+         self[k] = x
+         return x
+ 
+     def __iter__(self):
+         keys = set()
+         keys.update(self._constant_keys)
+         for d in self.lookuplist:
+             keys.update(d)
+         return iter(keys)
+ 
+     def iterkeys(self):
+         return iter(self)
+ 
+     def iteritems(self):
+         for k in self:
+             yield (k, self._getitem(k))
+ 
+     def __setitem__(self, mykey, myvalue):
+         "set a value; will be thrown away at reset() time"
+         if not isinstance(myvalue, str):
+             raise ValueError(
+                 "Invalid type being used as a value: '%s': '%s'"
+                 % (str(mykey), str(myvalue))
+             )
+ 
+         # Avoid potential UnicodeDecodeError exceptions later.
+         mykey = _unicode_decode(mykey)
+         myvalue = _unicode_decode(myvalue)
+ 
+         self.modifying()
+         self.modifiedkeys.append(mykey)
+         self.configdict["env"][mykey] = myvalue
+ 
+     def environ(self):
+         "return our locally-maintained environment"
+         mydict = {}
+         environ_filter = self._environ_filter
+ 
+         eapi = self.get("EAPI")
+         eapi_attrs = _get_eapi_attrs(eapi)
+         phase = self.get("EBUILD_PHASE")
+         emerge_from = self.get("EMERGE_FROM")
+         filter_calling_env = False
+         if (
+             self.mycpv is not None
+             and not (emerge_from == "ebuild" and phase == "setup")
+             and phase not in ("clean", "cleanrm", "depend", "fetch")
+         ):
+             temp_dir = self.get("T")
+             if temp_dir is not None and os.path.exists(
+                 os.path.join(temp_dir, "environment")
+             ):
+                 filter_calling_env = True
+ 
+         environ_whitelist = self._environ_whitelist
+         for x, myvalue in self.iteritems():
+             if x in environ_filter:
+                 continue
+             if not isinstance(myvalue, str):
+                 writemsg(
+                     _("!!! Non-string value in config: %s=%s\n") % (x, myvalue),
+                     noiselevel=-1,
+                 )
+                 continue
+             if (
+                 filter_calling_env
+                 and x not in environ_whitelist
+                 and not self._environ_whitelist_re.match(x)
+             ):
+                 # Do not allow anything to leak into the ebuild
+                 # environment unless it is explicitly whitelisted.
+                 # This ensures that variables unset by the ebuild
+                 # remain unset (bug #189417).
+                 continue
+             mydict[x] = myvalue
+         if "HOME" not in mydict and "BUILD_PREFIX" in mydict:
+             writemsg("*** HOME not set. Setting to " + mydict["BUILD_PREFIX"] + "\n")
+             mydict["HOME"] = mydict["BUILD_PREFIX"][:]
+ 
+         if filter_calling_env:
+             if phase:
+                 whitelist = []
+                 if "rpm" == phase:
+                     whitelist.append("RPMDIR")
+                 for k in whitelist:
+                     v = self.get(k)
+                     if v is not None:
+                         mydict[k] = v
+ 
+         # At some point we may want to stop exporting FEATURES to the ebuild
+         # environment, in order to prevent ebuilds from abusing it. In
+         # preparation for that, export it as PORTAGE_FEATURES so that bashrc
+         # users will be able to migrate any FEATURES conditional code to
+         # use this alternative variable.
+         mydict["PORTAGE_FEATURES"] = self["FEATURES"]
+ 
+         # Filtered by IUSE and implicit IUSE.
+         mydict["USE"] = self.get("PORTAGE_USE", "")
+ 
+         # Don't export AA to the ebuild environment in EAPIs that forbid it
+         if not eapi_exports_AA(eapi):
+             mydict.pop("AA", None)
+ 
+         if not eapi_exports_merge_type(eapi):
+             mydict.pop("MERGE_TYPE", None)
+ 
+         src_like_phase = phase == "setup" or _phase_func_map.get(phase, "").startswith(
+             "src_"
+         )
+ 
+         if not (src_like_phase and eapi_attrs.sysroot):
+             mydict.pop("ESYSROOT", None)
+ 
+         if not (src_like_phase and eapi_attrs.broot):
+             mydict.pop("BROOT", None)
+ 
+         # Prefix variables are supported beginning with EAPI 3, or when
+         # force-prefix is in FEATURES, since older EAPIs would otherwise be
+         # useless with prefix configurations. This brings compatibility with
+         # the prefix branch of portage, which also supports EPREFIX for all
+         # EAPIs (for obvious reasons).
+         if phase == "depend" or (
+             "force-prefix" not in self.features
+             and eapi is not None
+             and not eapi_supports_prefix(eapi)
+         ):
+             mydict.pop("ED", None)
+             mydict.pop("EPREFIX", None)
+             mydict.pop("EROOT", None)
+             mydict.pop("ESYSROOT", None)
+ 
+         if (
+             phase
+             not in (
+                 "pretend",
+                 "setup",
+                 "preinst",
+                 "postinst",
+             )
+             or not eapi_exports_replace_vars(eapi)
+         ):
+             mydict.pop("REPLACING_VERSIONS", None)
+ 
+         if phase not in ("prerm", "postrm") or not eapi_exports_replace_vars(eapi):
+             mydict.pop("REPLACED_BY_VERSION", None)
+ 
+         if phase is not None and eapi_attrs.exports_EBUILD_PHASE_FUNC:
+             phase_func = _phase_func_map.get(phase)
+             if phase_func is not None:
+                 mydict["EBUILD_PHASE_FUNC"] = phase_func
+ 
+         if eapi_attrs.posixish_locale:
+             split_LC_ALL(mydict)
+             mydict["LC_COLLATE"] = "C"
+             # check_locale() returns None when the check cannot be executed.
+             if check_locale(silent=True, env=mydict) is False:
+                 # try another locale
+                 for l in ("C.UTF-8", "en_US.UTF-8", "en_GB.UTF-8", "C"):
+                     mydict["LC_CTYPE"] = l
+                     if check_locale(silent=True, env=mydict):
+                         # TODO: output the following only once
+                         # 						writemsg(_("!!! LC_CTYPE unsupported, using %s instead\n")
+                         # 								% mydict["LC_CTYPE"])
+                         break
+                 else:
+                     raise AssertionError("C locale did not pass the test!")
+ 
+         if not eapi_attrs.exports_PORTDIR:
+             mydict.pop("PORTDIR", None)
+         if not eapi_attrs.exports_ECLASSDIR:
+             mydict.pop("ECLASSDIR", None)
+ 
+         if not eapi_attrs.path_variables_end_with_trailing_slash:
+             for v in ("D", "ED", "ROOT", "EROOT", "ESYSROOT", "BROOT"):
+                 if v in mydict:
+                     mydict[v] = mydict[v].rstrip(os.path.sep)
+ 
+         # Since SYSROOT=/ interacts badly with autotools.eclass (bug 654600),
+         # and no EAPI expects SYSROOT to have a trailing slash, always strip
+         # the trailing slash from SYSROOT.
+         if "SYSROOT" in mydict:
+             mydict["SYSROOT"] = mydict["SYSROOT"].rstrip(os.sep)
+ 
+         try:
+             builddir = mydict["PORTAGE_BUILDDIR"]
+             distdir = mydict["DISTDIR"]
+         except KeyError:
+             pass
+         else:
+             mydict["PORTAGE_ACTUAL_DISTDIR"] = distdir
+             mydict["DISTDIR"] = os.path.join(builddir, "distdir")
+ 
+         return mydict
+ 
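Two easy-to-miss details at the end of environ(): for EAPIs whose path variables do not end with a trailing slash the values are stripped, and DISTDIR is redirected to the per-build shadow directory while the real location is preserved in PORTAGE_ACTUAL_DISTDIR. A toy sketch with invented paths:

    import os

    # Sketch only: the two post-processing steps described above.
    mydict = {
        "D": "/var/tmp/portage/cat/pkg-1/image/",
        "PORTAGE_BUILDDIR": "/var/tmp/portage/cat/pkg-1",
        "DISTDIR": "/var/cache/distfiles",
    }
    mydict["D"] = mydict["D"].rstrip(os.path.sep)
    mydict["PORTAGE_ACTUAL_DISTDIR"] = mydict["DISTDIR"]
    mydict["DISTDIR"] = os.path.join(mydict["PORTAGE_BUILDDIR"], "distdir")
    print(mydict["D"], mydict["DISTDIR"])
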
+     def thirdpartymirrors(self):
+         if getattr(self, "_thirdpartymirrors", None) is None:
+             thirdparty_lists = []
+             for repo_name in reversed(self.repositories.prepos_order):
+                 thirdparty_lists.append(
+                     grabdict(
+                         os.path.join(
+                             self.repositories[repo_name].location,
+                             "profiles",
+                             "thirdpartymirrors",
+                         )
+                     )
+                 )
+             self._thirdpartymirrors = stack_dictlist(thirdparty_lists, incremental=True)
+         return self._thirdpartymirrors
+ 
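thirdpartymirrors() stacks the profiles/thirdpartymirrors file of every configured repository; each line of such a file maps a mirror name to a whitespace-separated list of URLs, which grabdict() turns into a dict of lists. A sketch of the per-line parsing with an invented entry:

    # Sketch only: roughly what grabdict() produces for a single line.
    line = "sourceforge https://download.example.org/sf https://mirror.example.net/sf"
    name, *urls = line.split()
    mirrors = {name: urls}
    print(mirrors["sourceforge"][0])  # https://download.example.org/sf
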
+     def archlist(self):
+         _archlist = []
+         for myarch in self["PORTAGE_ARCHLIST"].split():
+             _archlist.append(myarch)
+             _archlist.append("~" + myarch)
+         return _archlist
+ 
+     def selinux_enabled(self):
+         if getattr(self, "_selinux_enabled", None) is None:
+             self._selinux_enabled = 0
+             if "selinux" in self["USE"].split():
+                 if selinux:
+                     if selinux.is_selinux_enabled() == 1:
+                         self._selinux_enabled = 1
+                     else:
+                         self._selinux_enabled = 0
+                 else:
+                     writemsg(
+                         _(
+                             "!!! SELinux module not found. Please verify that it was installed.\n"
+                         ),
+                         noiselevel=-1,
+                     )
+                     self._selinux_enabled = 0
+ 
+         return self._selinux_enabled
+ 
+     keys = __iter__
+     items = iteritems
++>>>>>>> origin/master
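Taken together, __getitem__, get(), __contains__, __iter__ and __setitem__ give the config class a dict-like interface whose lookups fall through self.lookuplist and whose assignments land in the "env" layer until the next reset(). A usage sketch, assuming an installed Portage where portage.settings is the global config instance:

    import portage

    # Sketch only: mapping-style access to a config instance.
    settings = portage.config(clone=portage.settings)
    chost = settings.get("CHOST", "")        # lookup with default, never raises
    if "FEATURES" in settings:               # membership test via _getitem()
        features = settings["FEATURES"].split()
    settings["MY_SCRATCH_VAR"] = "1"         # stored in configdict["env"] until reset()
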
diff --cc lib/portage/package/ebuild/doebuild.py
index 69132e651,ac627f555..af8845f34
--- a/lib/portage/package/ebuild/doebuild.py
+++ b/lib/portage/package/ebuild/doebuild.py
@@@ -22,48 -22,81 +22,86 @@@ from textwrap import wra
  import time
  import warnings
  import zlib
 +import platform
  
  import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 	'portage.package.ebuild.config:check_config_instance',
- 	'portage.package.ebuild.digestcheck:digestcheck',
- 	'portage.package.ebuild.digestgen:digestgen',
- 	'portage.package.ebuild.fetch:_drop_privs_userfetch,_want_userfetch,fetch',
- 	'portage.package.ebuild.prepare_build_dirs:_prepare_fake_distdir',
- 	'portage.package.ebuild._ipc.QueryCommand:QueryCommand',
- 	'portage.dep._slot_operator:evaluate_slot_operator_equal_deps',
- 	'portage.package.ebuild._spawn_nofetch:spawn_nofetch',
- 	'portage.util.elf.header:ELFHeader',
- 	'portage.dep.soname.multilib_category:compute_multilib_category',
- 	'portage.util._desktop_entry:validate_desktop_entry',
- 	'portage.util._dyn_libs.NeededEntry:NeededEntry',
- 	'portage.util._dyn_libs.soname_deps:SonameDepsProcessor',
- 	'portage.util._async.SchedulerInterface:SchedulerInterface',
- 	'portage.util._eventloop.global_event_loop:global_event_loop',
- 	'portage.util.ExtractKernelVersion:ExtractKernelVersion'
+ 
+ portage.proxy.lazyimport.lazyimport(
+     globals(),
+     "portage.package.ebuild.config:check_config_instance",
+     "portage.package.ebuild.digestcheck:digestcheck",
+     "portage.package.ebuild.digestgen:digestgen",
+     "portage.package.ebuild.fetch:_drop_privs_userfetch,_want_userfetch,fetch",
+     "portage.package.ebuild.prepare_build_dirs:_prepare_fake_distdir",
+     "portage.package.ebuild._ipc.QueryCommand:QueryCommand",
+     "portage.dep._slot_operator:evaluate_slot_operator_equal_deps",
+     "portage.package.ebuild._spawn_nofetch:spawn_nofetch",
+     "portage.util.elf.header:ELFHeader",
+     "portage.dep.soname.multilib_category:compute_multilib_category",
+     "portage.util._desktop_entry:validate_desktop_entry",
+     "portage.util._dyn_libs.NeededEntry:NeededEntry",
+     "portage.util._dyn_libs.soname_deps:SonameDepsProcessor",
+     "portage.util._async.SchedulerInterface:SchedulerInterface",
+     "portage.util._eventloop.global_event_loop:global_event_loop",
+     "portage.util.ExtractKernelVersion:ExtractKernelVersion",
  )
  
- from portage import bsd_chflags, \
- 	eapi_is_supported, merge, os, selinux, shutil, \
- 	unmerge, _encodings, _os_merge, \
- 	_shell_quote, _unicode_decode, _unicode_encode
- from portage.const import EBUILD_SH_ENV_FILE, EBUILD_SH_ENV_DIR, \
- 	EBUILD_SH_BINARY, INVALID_ENV_FILE, MISC_SH_BINARY, PORTAGE_PYM_PACKAGES, EPREFIX, MACOSSANDBOX_PROFILE
- from portage.data import portage_gid, portage_uid, secpass, \
- 	uid, userpriv_groups
+ from portage import (
+     bsd_chflags,
+     eapi_is_supported,
+     merge,
+     os,
+     selinux,
+     shutil,
+     unmerge,
+     _encodings,
+     _os_merge,
+     _shell_quote,
+     _unicode_decode,
+     _unicode_encode,
+ )
+ from portage.const import (
+     EBUILD_SH_ENV_FILE,
+     EBUILD_SH_ENV_DIR,
+     EBUILD_SH_BINARY,
+     INVALID_ENV_FILE,
+     MISC_SH_BINARY,
+     PORTAGE_PYM_PACKAGES,
++    # BEGIN PREFIX LOCAL
++    EPREFIX,
++    MACOSSANDBOX_PROFILE,
++    # END PREFIX LOCAL
+ )
+ from portage.data import portage_gid, portage_uid, secpass, uid, userpriv_groups
  from portage.dbapi.porttree import _parse_uri_map
- from portage.dep import Atom, check_required_use, \
- 	human_readable_required_use, paren_enclose, use_reduce
- from portage.eapi import (eapi_exports_KV, eapi_exports_merge_type,
- 	eapi_exports_replace_vars, eapi_exports_REPOSITORY,
- 	eapi_has_required_use, eapi_has_src_prepare_and_src_configure,
- 	eapi_has_pkg_pretend, _get_eapi_attrs)
+ from portage.dep import (
+     Atom,
+     check_required_use,
+     human_readable_required_use,
+     paren_enclose,
+     use_reduce,
+ )
+ from portage.eapi import (
+     eapi_exports_KV,
+     eapi_exports_merge_type,
+     eapi_exports_replace_vars,
+     eapi_exports_REPOSITORY,
+     eapi_has_required_use,
+     eapi_has_src_prepare_and_src_configure,
+     eapi_has_pkg_pretend,
+     _get_eapi_attrs,
+ )
  from portage.elog import elog_process, _preload_elog_modules
  from portage.elog.messages import eerror, eqawarn
- from portage.exception import (DigestException, FileNotFound,
- 	IncorrectParameter, InvalidData, InvalidDependString,
- 	PermissionDenied, UnsupportedAPIException)
+ from portage.exception import (
+     DigestException,
+     FileNotFound,
+     IncorrectParameter,
+     InvalidData,
+     InvalidDependString,
+     PermissionDenied,
+     UnsupportedAPIException,
+ )
  from portage.localization import _
  from portage.output import colormap
  from portage.package.ebuild.prepare_build_dirs import prepare_build_dirs
@@@ -92,1326 -127,1660 +132,1667 @@@ from _emerge.EbuildSpawnProcess import 
  from _emerge.Package import Package
  from _emerge.RootConfig import RootConfig
  
- 
- _unsandboxed_phases = frozenset([
- 	"clean", "cleanrm", "config",
- 	"help", "info", "postinst",
- 	"preinst", "pretend", "postrm",
- 	"prerm", "setup"
- ])
+ _unsandboxed_phases = frozenset(
+     [
+         "clean",
+         "cleanrm",
+         "config",
+         "help",
+         "info",
+         "postinst",
+         "preinst",
+         "pretend",
+         "postrm",
+         "prerm",
+         "setup",
+     ]
+ )
  
  # phases in which IPC with host is allowed
- _ipc_phases = frozenset([
- 	"setup", "pretend", "config", "info",
- 	"preinst", "postinst", "prerm", "postrm",
- ])
+ _ipc_phases = frozenset(
+     [
+         "setup",
+         "pretend",
+         "config",
+         "info",
+         "preinst",
+         "postinst",
+         "prerm",
+         "postrm",
+     ]
+ )
  
  # phases which execute in the global PID namespace
- _global_pid_phases = frozenset([
- 	'config', 'depend', 'preinst', 'prerm', 'postinst', 'postrm'])
+ _global_pid_phases = frozenset(
+     ["config", "depend", "preinst", "prerm", "postinst", "postrm"]
+ )
  
  _phase_func_map = {
- 	"config": "pkg_config",
- 	"setup": "pkg_setup",
- 	"nofetch": "pkg_nofetch",
- 	"unpack": "src_unpack",
- 	"prepare": "src_prepare",
- 	"configure": "src_configure",
- 	"compile": "src_compile",
- 	"test": "src_test",
- 	"install": "src_install",
- 	"preinst": "pkg_preinst",
- 	"postinst": "pkg_postinst",
- 	"prerm": "pkg_prerm",
- 	"postrm": "pkg_postrm",
- 	"info": "pkg_info",
- 	"pretend": "pkg_pretend",
+     "config": "pkg_config",
+     "setup": "pkg_setup",
+     "nofetch": "pkg_nofetch",
+     "unpack": "src_unpack",
+     "prepare": "src_prepare",
+     "configure": "src_configure",
+     "compile": "src_compile",
+     "test": "src_test",
+     "install": "src_install",
+     "preinst": "pkg_preinst",
+     "postinst": "pkg_postinst",
+     "prerm": "pkg_prerm",
+     "postrm": "pkg_postrm",
+     "info": "pkg_info",
+     "pretend": "pkg_pretend",
  }
  
- _vdb_use_conditional_keys = Package._dep_keys + \
- 	('LICENSE', 'PROPERTIES', 'RESTRICT',)
+ _vdb_use_conditional_keys = Package._dep_keys + (
+     "LICENSE",
+     "PROPERTIES",
+     "RESTRICT",
+ )
+ 
  
  def _doebuild_spawn(phase, settings, actionmap=None, **kwargs):
- 	"""
- 	All proper ebuild phases which execute ebuild.sh are spawned
- 	via this function. No exceptions.
- 	"""
- 
- 	if phase in _unsandboxed_phases:
- 		kwargs['free'] = True
- 
- 	kwargs['ipc'] = 'ipc-sandbox' not in settings.features or \
- 		phase in _ipc_phases
- 	kwargs['mountns'] = 'mount-sandbox' in settings.features
- 	kwargs['networked'] = (
- 		'network-sandbox' not in settings.features or
- 		(phase == 'unpack' and
- 			'live' in settings['PORTAGE_PROPERTIES'].split()) or
- 		(phase == 'test' and
- 			'test_network' in settings['PORTAGE_PROPERTIES'].split()) or
- 		phase in _ipc_phases or
- 		'network-sandbox' in settings['PORTAGE_RESTRICT'].split())
- 	kwargs['pidns'] = ('pid-sandbox' in settings.features and
- 		phase not in _global_pid_phases)
- 
- 	if phase == 'depend':
- 		kwargs['droppriv'] = 'userpriv' in settings.features
- 		# It's not necessary to close_fds for this phase, since
- 		# it should not spawn any daemons, and close_fds is
- 		# best avoided since it can interact badly with some
- 		# garbage collectors (see _setup_pipes docstring).
- 		kwargs['close_fds'] = False
- 
- 	if actionmap is not None and phase in actionmap:
- 		kwargs.update(actionmap[phase]["args"])
- 		cmd = actionmap[phase]["cmd"] % phase
- 	else:
- 		if phase == 'cleanrm':
- 			ebuild_sh_arg = 'clean'
- 		else:
- 			ebuild_sh_arg = phase
- 
- 		cmd = "%s %s" % (_shell_quote(
- 			os.path.join(settings["PORTAGE_BIN_PATH"],
- 			os.path.basename(EBUILD_SH_BINARY))),
- 			ebuild_sh_arg)
- 
- 	settings['EBUILD_PHASE'] = phase
- 	try:
- 		return spawn(cmd, settings, **kwargs)
- 	finally:
- 		settings.pop('EBUILD_PHASE', None)
- 
- def _spawn_phase(phase, settings, actionmap=None, returnpid=False,
- 		logfile=None, **kwargs):
- 
- 	if returnpid:
- 		return _doebuild_spawn(phase, settings, actionmap=actionmap,
- 			returnpid=returnpid, logfile=logfile, **kwargs)
- 
- 	# The logfile argument is unused here, since EbuildPhase uses
- 	# the PORTAGE_LOG_FILE variable if set.
- 	ebuild_phase = EbuildPhase(actionmap=actionmap, background=False,
- 		phase=phase, scheduler=SchedulerInterface(asyncio._safe_loop()),
- 		settings=settings, **kwargs)
- 
- 	ebuild_phase.start()
- 	ebuild_phase.wait()
- 	return ebuild_phase.returncode
+     """
+     All proper ebuild phases which execute ebuild.sh are spawned
+     via this function. No exceptions.
+     """
+ 
+     if phase in _unsandboxed_phases:
+         kwargs["free"] = True
+ 
+     kwargs["ipc"] = "ipc-sandbox" not in settings.features or phase in _ipc_phases
+     kwargs["mountns"] = "mount-sandbox" in settings.features
+     kwargs["networked"] = (
+         "network-sandbox" not in settings.features
+         or (phase == "unpack" and "live" in settings["PORTAGE_PROPERTIES"].split())
+         or (
+             phase == "test" and "test_network" in settings["PORTAGE_PROPERTIES"].split()
+         )
+         or phase in _ipc_phases
+         or "network-sandbox" in settings["PORTAGE_RESTRICT"].split()
+     )
+     kwargs["pidns"] = (
+         "pid-sandbox" in settings.features and phase not in _global_pid_phases
+     )
+ 
+     if phase == "depend":
+         kwargs["droppriv"] = "userpriv" in settings.features
+         # It's not necessary to close_fds for this phase, since
+         # it should not spawn any daemons, and close_fds is
+         # best avoided since it can interact badly with some
+         # garbage collectors (see _setup_pipes docstring).
+         kwargs["close_fds"] = False
+ 
+     if actionmap is not None and phase in actionmap:
+         kwargs.update(actionmap[phase]["args"])
+         cmd = actionmap[phase]["cmd"] % phase
+     else:
+         if phase == "cleanrm":
+             ebuild_sh_arg = "clean"
+         else:
+             ebuild_sh_arg = phase
+ 
+         cmd = "%s %s" % (
+             _shell_quote(
+                 os.path.join(
+                     settings["PORTAGE_BIN_PATH"], os.path.basename(EBUILD_SH_BINARY)
+                 )
+             ),
+             ebuild_sh_arg,
+         )
+ 
+     settings["EBUILD_PHASE"] = phase
+     try:
+         return spawn(cmd, settings, **kwargs)
+     finally:
+         settings.pop("EBUILD_PHASE", None)
+ 
+ 
+ def _spawn_phase(
+     phase, settings, actionmap=None, returnpid=False, logfile=None, **kwargs
+ ):
+ 
+     if returnpid:
+         return _doebuild_spawn(
+             phase,
+             settings,
+             actionmap=actionmap,
+             returnpid=returnpid,
+             logfile=logfile,
+             **kwargs
+         )
+ 
+     # The logfile argument is unused here, since EbuildPhase uses
+     # the PORTAGE_LOG_FILE variable if set.
+     ebuild_phase = EbuildPhase(
+         actionmap=actionmap,
+         background=False,
+         phase=phase,
+         scheduler=SchedulerInterface(asyncio._safe_loop()),
+         settings=settings,
+         **kwargs
+     )
+ 
+     ebuild_phase.start()
+     ebuild_phase.wait()
+     return ebuild_phase.returncode
+ 
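The sandbox-related keyword arguments computed above come entirely from FEATURES, RESTRICT and PORTAGE_PROPERTIES before spawn() is ever called. For illustration only, here is a minimal standalone sketch of the "networked" selection, using hypothetical feature/property sets and an assumed stand-in for _ipc_phases instead of a real portage.config instance (the test_network branch is omitted for brevity):

    # Illustrative sketch; mirrors the kwargs["networked"] derivation above.
    features = {"network-sandbox", "ipc-sandbox", "pid-sandbox"}  # hypothetical FEATURES
    restrict = set()                                              # hypothetical RESTRICT
    properties = {"live"}                                         # hypothetical PROPERTIES
    ipc_phases = {"config", "info", "pretend", "setup"}           # assumed stand-in for _ipc_phases
    phase = "unpack"

    networked = (
        "network-sandbox" not in features
        or (phase == "unpack" and "live" in properties)
        or phase in ipc_phases
        or "network-sandbox" in restrict
    )
    print(networked)  # True: a live ebuild is allowed network access during unpack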
  
  def _doebuild_path(settings, eapi=None):
- 	"""
- 	Generate the PATH variable.
- 	"""
- 
- 	# Note: PORTAGE_BIN_PATH may differ from the global constant
- 	# when portage is reinstalling itself.
- 	portage_bin_path = [settings["PORTAGE_BIN_PATH"]]
- 	if portage_bin_path[0] != portage.const.PORTAGE_BIN_PATH:
- 		# Add a fallback path for restarting failed builds (bug 547086)
- 		portage_bin_path.append(portage.const.PORTAGE_BIN_PATH)
- 	prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
- 	rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
- 	rootpath_set = frozenset(rootpath)
- 	overrides = [x for x in settings.get(
- 		"__PORTAGE_TEST_PATH_OVERRIDE", "").split(":") if x]
- 
- 	prefixes = []
- 	# settings["EPREFIX"] should take priority over portage.const.EPREFIX
- 	if portage.const.EPREFIX != settings["EPREFIX"] and settings["ROOT"] == os.sep:
- 		prefixes.append(settings["EPREFIX"])
- 	prefixes.append(portage.const.EPREFIX)
- 
- 	path = overrides
- 
- 	if "xattr" in settings.features:
- 		for x in portage_bin_path:
- 			path.append(os.path.join(x, "ebuild-helpers", "xattr"))
- 
- 	if uid != 0 and \
- 		"unprivileged" in settings.features and \
- 		"fakeroot" not in settings.features:
- 		for x in portage_bin_path:
- 			path.append(os.path.join(x,
- 				"ebuild-helpers", "unprivileged"))
- 
- 	if settings.get("USERLAND", "GNU") != "GNU":
- 		for x in portage_bin_path:
- 			path.append(os.path.join(x, "ebuild-helpers", "bsd"))
- 
- 	for x in portage_bin_path:
- 		path.append(os.path.join(x, "ebuild-helpers"))
- 	path.extend(prerootpath)
- 
- 	for prefix in prefixes:
- 		prefix = prefix if prefix else "/"
- 		for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin", "usr/bin", "sbin", "bin"):
- 			# Respect order defined in ROOTPATH
- 			x_abs = os.path.join(prefix, x)
- 			if x_abs not in rootpath_set:
- 				path.append(x_abs)
- 
- 	path.extend(rootpath)
- 
- 	# PREFIX LOCAL: append EXTRA_PATH from make.globals
- 	extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
- 	path.extend(extrapath)
- 	# END PREFIX LOCAL
- 
- 	settings["PATH"] = ":".join(path)
- 
- def doebuild_environment(myebuild, mydo, myroot=None, settings=None,
- 	debug=False, use_cache=None, db=None):
- 	"""
- 	Create and store environment variable in the config instance
- 	that's passed in as the "settings" parameter. This will raise
- 	UnsupportedAPIException if the given ebuild has an unsupported
- 	EAPI. All EAPI dependent code comes last, so that essential
- 	variables like PORTAGE_BUILDDIR are still initialized even in
- 	cases when UnsupportedAPIException needs to be raised, which
- 	can be useful when uninstalling a package that has corrupt
- 	EAPI metadata.
- 	The myroot and use_cache parameters are unused.
- 	"""
- 
- 	if settings is None:
- 		raise TypeError("settings argument is required")
- 
- 	if db is None:
- 		raise TypeError("db argument is required")
- 
- 	mysettings = settings
- 	mydbapi = db
- 	ebuild_path = os.path.abspath(myebuild)
- 	pkg_dir     = os.path.dirname(ebuild_path)
- 	mytree = os.path.dirname(os.path.dirname(pkg_dir))
- 	mypv = os.path.basename(ebuild_path)[:-7]
- 	mysplit = _pkgsplit(mypv, eapi=mysettings.configdict["pkg"].get("EAPI"))
- 	if mysplit is None:
- 		raise IncorrectParameter(
- 			_("Invalid ebuild path: '%s'") % myebuild)
- 
- 	if mysettings.mycpv is not None and \
- 		mysettings.configdict["pkg"].get("PF") == mypv and \
- 		"CATEGORY" in mysettings.configdict["pkg"]:
- 		# Assume that PF is enough to assume that we've got
- 		# the correct CATEGORY, though this is not really
- 		# a solid assumption since it's possible (though
- 		# unlikely) that two packages in different
- 		# categories have the same PF. Callers should call
- 		# setcpv or create a clean clone of a locked config
- 		# instance in order to ensure that this assumption
- 		# does not fail like in bug #408817.
- 		cat = mysettings.configdict["pkg"]["CATEGORY"]
- 		mycpv = mysettings.mycpv
- 	elif os.path.basename(pkg_dir) in (mysplit[0], mypv):
- 		# portdbapi or vardbapi
- 		cat = os.path.basename(os.path.dirname(pkg_dir))
- 		mycpv = cat + "/" + mypv
- 	else:
- 		raise AssertionError("unable to determine CATEGORY")
- 
- 	# Make a backup of PORTAGE_TMPDIR prior to calling config.reset()
- 	# so that the caller can override it.
- 	tmpdir = mysettings["PORTAGE_TMPDIR"]
- 
- 	if mydo == 'depend':
- 		if mycpv != mysettings.mycpv:
- 			# Don't pass in mydbapi here since the resulting aux_get
- 			# call would lead to infinite 'depend' phase recursion.
- 			mysettings.setcpv(mycpv)
- 	else:
- 		# If EAPI isn't in configdict["pkg"], it means that setcpv()
- 		# hasn't been called with the mydb argument, so we have to
- 		# call it here (portage code always calls setcpv properly,
- 		# but api consumers might not).
- 		if mycpv != mysettings.mycpv or \
- 			"EAPI" not in mysettings.configdict["pkg"]:
- 			# Reload env.d variables and reset any previous settings.
- 			mysettings.reload()
- 			mysettings.reset()
- 			mysettings.setcpv(mycpv, mydb=mydbapi)
- 
- 	# config.reset() might have reverted a change made by the caller,
- 	# so restore it to its original value. Sandbox needs canonical
- 	# paths, so realpath it.
- 	mysettings["PORTAGE_TMPDIR"] = os.path.realpath(tmpdir)
- 
- 	mysettings.pop("EBUILD_PHASE", None) # remove from backupenv
- 	mysettings["EBUILD_PHASE"] = mydo
- 
- 	# Set requested Python interpreter for Portage helpers.
- 	mysettings['PORTAGE_PYTHON'] = portage._python_interpreter
- 
- 	# This is used by assert_sigpipe_ok() that's used by the ebuild
- 	# unpack() helper. SIGPIPE is typically 13, but its better not
- 	# to assume that.
- 	mysettings['PORTAGE_SIGPIPE_STATUS'] = str(128 + signal.SIGPIPE)
- 
- 	# We are disabling user-specific bashrc files.
- 	mysettings["BASH_ENV"] = INVALID_ENV_FILE
- 
- 	if debug: # Otherwise it overrides emerge's settings.
- 		# We have no other way to set debug... debug can't be passed in
- 		# due to how it's coded... Don't overwrite this so we can use it.
- 		mysettings["PORTAGE_DEBUG"] = "1"
- 
- 	mysettings["EBUILD"]   = ebuild_path
- 	mysettings["O"]        = pkg_dir
- 	mysettings.configdict["pkg"]["CATEGORY"] = cat
- 	mysettings["PF"]       = mypv
- 
- 	if hasattr(mydbapi, 'repositories'):
- 		repo = mydbapi.repositories.get_repo_for_location(mytree)
- 		mysettings['PORTDIR'] = repo.eclass_db.porttrees[0]
- 		mysettings['PORTAGE_ECLASS_LOCATIONS'] = repo.eclass_db.eclass_locations_string
- 		mysettings.configdict["pkg"]["PORTAGE_REPO_NAME"] = repo.name
- 
- 	mysettings["PORTDIR"] = os.path.realpath(mysettings["PORTDIR"])
- 	mysettings.pop("PORTDIR_OVERLAY", None)
- 	mysettings["DISTDIR"] = os.path.realpath(mysettings["DISTDIR"])
- 	mysettings["RPMDIR"]  = os.path.realpath(mysettings["RPMDIR"])
- 
- 	mysettings["ECLASSDIR"]   = mysettings["PORTDIR"]+"/eclass"
- 
- 	mysettings["PORTAGE_BASHRC_FILES"] = "\n".join(mysettings._pbashrc)
- 
- 	mysettings["P"]  = mysplit[0]+"-"+mysplit[1]
- 	mysettings["PN"] = mysplit[0]
- 	mysettings["PV"] = mysplit[1]
- 	mysettings["PR"] = mysplit[2]
- 
- 	if noiselimit < 0:
- 		mysettings["PORTAGE_QUIET"] = "1"
- 
- 	if mysplit[2] == "r0":
- 		mysettings["PVR"]=mysplit[1]
- 	else:
- 		mysettings["PVR"]=mysplit[1]+"-"+mysplit[2]
- 
- 	# All temporary directories should be subdirectories of
- 	# $PORTAGE_TMPDIR/portage, since it's common for /tmp and /var/tmp
- 	# to be mounted with the "noexec" option (see bug #346899).
- 	mysettings["BUILD_PREFIX"] = mysettings["PORTAGE_TMPDIR"]+"/portage"
- 	mysettings["PKG_TMPDIR"]   = mysettings["BUILD_PREFIX"]+"/._unmerge_"
- 
- 	# Package {pre,post}inst and {pre,post}rm may overlap, so they must have separate
- 	# locations in order to prevent interference.
- 	if mydo in ("unmerge", "prerm", "postrm", "cleanrm"):
- 		mysettings["PORTAGE_BUILDDIR"] = os.path.join(
- 			mysettings["PKG_TMPDIR"],
- 			mysettings["CATEGORY"], mysettings["PF"])
- 	else:
- 		mysettings["PORTAGE_BUILDDIR"] = os.path.join(
- 			mysettings["BUILD_PREFIX"],
- 			mysettings["CATEGORY"], mysettings["PF"])
- 
- 	mysettings["HOME"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "homedir")
- 	mysettings["WORKDIR"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "work")
- 	mysettings["D"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "image") + os.sep
- 	mysettings["T"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "temp")
- 	mysettings["SANDBOX_LOG"] = os.path.join(mysettings["T"], "sandbox.log")
- 	mysettings["FILESDIR"] = os.path.join(settings["PORTAGE_BUILDDIR"], "files")
- 
- 	# Prefix forward compatability
- 	eprefix_lstrip = mysettings["EPREFIX"].lstrip(os.sep)
- 	mysettings["ED"] = os.path.join(
- 		mysettings["D"], eprefix_lstrip).rstrip(os.sep) + os.sep
- 
- 	mysettings["PORTAGE_BASHRC"] = os.path.join(
- 		mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_FILE)
- 	mysettings["PM_EBUILD_HOOK_DIR"] = os.path.join(
- 		mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_DIR)
- 
- 	# Allow color.map to control colors associated with einfo, ewarn, etc...
- 	mysettings["PORTAGE_COLORMAP"] = colormap()
- 
- 	if "COLUMNS" not in mysettings:
- 		# Set COLUMNS, in order to prevent unnecessary stty calls
- 		# inside the set_colors function of isolated-functions.sh.
- 		# We cache the result in os.environ, in order to avoid
- 		# multiple stty calls in cases when get_term_size() falls
- 		# back to stty due to a missing or broken curses module.
- 		columns = os.environ.get("COLUMNS")
- 		if columns is None:
- 			rows, columns = portage.output.get_term_size()
- 			if columns < 1:
- 				# Force a sane value for COLUMNS, so that tools
- 				# like ls don't complain (see bug #394091).
- 				columns = 80
- 			columns = str(columns)
- 			os.environ["COLUMNS"] = columns
- 		mysettings["COLUMNS"] = columns
- 
- 	# EAPI is always known here, even for the "depend" phase, because
- 	# EbuildMetadataPhase gets it from _parse_eapi_ebuild_head().
- 	eapi = mysettings.configdict['pkg']['EAPI']
- 	_doebuild_path(mysettings, eapi=eapi)
- 
- 	# All EAPI dependent code comes last, so that essential variables like
- 	# PATH and PORTAGE_BUILDDIR are still initialized even in cases when
- 	# UnsupportedAPIException needs to be raised, which can be useful
- 	# when uninstalling a package that has corrupt EAPI metadata.
- 	if not eapi_is_supported(eapi):
- 		raise UnsupportedAPIException(mycpv, eapi)
- 
- 	if eapi_exports_REPOSITORY(eapi) and "PORTAGE_REPO_NAME" in mysettings.configdict["pkg"]:
- 		mysettings.configdict["pkg"]["REPOSITORY"] = mysettings.configdict["pkg"]["PORTAGE_REPO_NAME"]
- 
- 	if mydo != "depend":
- 		if hasattr(mydbapi, "getFetchMap") and \
- 			("A" not in mysettings.configdict["pkg"] or \
- 			"AA" not in mysettings.configdict["pkg"]):
- 			src_uri = mysettings.configdict["pkg"].get("SRC_URI")
- 			if src_uri is None:
- 				src_uri, = mydbapi.aux_get(mysettings.mycpv,
- 					["SRC_URI"], mytree=mytree)
- 			metadata = {
- 				"EAPI"    : eapi,
- 				"SRC_URI" : src_uri,
- 			}
- 			use = frozenset(mysettings["PORTAGE_USE"].split())
- 			try:
- 				uri_map = _parse_uri_map(mysettings.mycpv, metadata, use=use)
- 			except InvalidDependString:
- 				mysettings.configdict["pkg"]["A"] = ""
- 			else:
- 				mysettings.configdict["pkg"]["A"] = " ".join(uri_map)
- 
- 			try:
- 				uri_map = _parse_uri_map(mysettings.mycpv, metadata)
- 			except InvalidDependString:
- 				mysettings.configdict["pkg"]["AA"] = ""
- 			else:
- 				mysettings.configdict["pkg"]["AA"] = " ".join(uri_map)
- 
- 		ccache = "ccache" in mysettings.features
- 		distcc = "distcc" in mysettings.features
- 		icecream = "icecream" in mysettings.features
- 
- 		if ccache or distcc or icecream:
- 			libdir = None
- 			default_abi = mysettings.get("DEFAULT_ABI")
- 			if default_abi:
- 				libdir = mysettings.get("LIBDIR_" + default_abi)
- 			if not libdir:
- 				libdir = "lib"
- 
- 			# The installation locations use to vary between versions...
- 			# Safer to look them up rather than assuming
- 			possible_libexecdirs = (libdir, "lib", "libexec")
- 			masquerades = []
- 			if distcc:
- 				masquerades.append(("distcc", "distcc"))
- 			if icecream:
- 				masquerades.append(("icecream", "icecc"))
- 			if ccache:
- 				masquerades.append(("ccache", "ccache"))
- 
- 			for feature, m in masquerades:
- 				for l in possible_libexecdirs:
- 					p = os.path.join(os.sep, eprefix_lstrip,
- 							"usr", l, m, "bin")
- 					if os.path.isdir(p):
- 						mysettings["PATH"] = p + ":" + mysettings["PATH"]
- 						break
- 				else:
- 					writemsg(("Warning: %s requested but no masquerade dir "
- 						"can be found in /usr/lib*/%s/bin\n") % (m, m))
- 					mysettings.features.remove(feature)
- 
- 		if 'MAKEOPTS' not in mysettings:
- 			nproc = get_cpu_count()
- 			if nproc:
- 				mysettings['MAKEOPTS'] = '-j%d' % (nproc)
- 
- 		if not eapi_exports_KV(eapi):
- 			# Discard KV for EAPIs that don't support it. Cached KV is restored
- 			# from the backupenv whenever config.reset() is called.
- 			mysettings.pop('KV', None)
- 		elif 'KV' not in mysettings and \
- 			mydo in ('compile', 'config', 'configure', 'info',
- 			'install', 'nofetch', 'postinst', 'postrm', 'preinst',
- 			'prepare', 'prerm', 'setup', 'test', 'unpack'):
- 			mykv, err1 = ExtractKernelVersion(
- 				os.path.join(mysettings['EROOT'], "usr/src/linux"))
- 			if mykv:
- 				# Regular source tree
- 				mysettings["KV"] = mykv
- 			else:
- 				mysettings["KV"] = ""
- 			mysettings.backup_changes("KV")
- 
- 		binpkg_compression = mysettings.get("BINPKG_COMPRESS", "bzip2")
- 		try:
- 			compression = _compressors[binpkg_compression]
- 		except KeyError as e:
- 			if binpkg_compression:
- 				writemsg("Warning: Invalid or unsupported compression method: %s\n" % e.args[0])
- 			else:
- 				# Empty BINPKG_COMPRESS disables compression.
- 				mysettings['PORTAGE_COMPRESSION_COMMAND'] = 'cat'
- 		else:
- 			try:
- 				compression_binary = shlex_split(varexpand(compression["compress"], mydict=settings))[0]
- 			except IndexError as e:
- 				writemsg("Warning: Invalid or unsupported compression method: %s\n" % e.args[0])
- 			else:
- 				if find_binary(compression_binary) is None:
- 					missing_package = compression["package"]
- 					writemsg("Warning: File compression unsupported %s. Missing package: %s\n" % (binpkg_compression, missing_package))
- 				else:
- 					cmd = [varexpand(x, mydict=settings) for x in shlex_split(compression["compress"])]
- 					# Filter empty elements
- 					cmd = [x for x in cmd if x != ""]
- 					mysettings['PORTAGE_COMPRESSION_COMMAND'] = ' '.join(cmd)
+     """
+     Generate the PATH variable.
+     """
+ 
+     # Note: PORTAGE_BIN_PATH may differ from the global constant
+     # when portage is reinstalling itself.
+     portage_bin_path = [settings["PORTAGE_BIN_PATH"]]
+     if portage_bin_path[0] != portage.const.PORTAGE_BIN_PATH:
+         # Add a fallback path for restarting failed builds (bug 547086)
+         portage_bin_path.append(portage.const.PORTAGE_BIN_PATH)
+     prerootpath = [x for x in settings.get("PREROOTPATH", "").split(":") if x]
+     rootpath = [x for x in settings.get("ROOTPATH", "").split(":") if x]
+     rootpath_set = frozenset(rootpath)
+     overrides = [
+         x for x in settings.get("__PORTAGE_TEST_PATH_OVERRIDE", "").split(":") if x
+     ]
+ 
+     prefixes = []
+     # settings["EPREFIX"] should take priority over portage.const.EPREFIX
+     if portage.const.EPREFIX != settings["EPREFIX"] and settings["ROOT"] == os.sep:
+         prefixes.append(settings["EPREFIX"])
+     prefixes.append(portage.const.EPREFIX)
+ 
+     path = overrides
+ 
+     if "xattr" in settings.features:
+         for x in portage_bin_path:
+             path.append(os.path.join(x, "ebuild-helpers", "xattr"))
+ 
+     if (
+         uid != 0
+         and "unprivileged" in settings.features
+         and "fakeroot" not in settings.features
+     ):
+         for x in portage_bin_path:
+             path.append(os.path.join(x, "ebuild-helpers", "unprivileged"))
+ 
+     if settings.get("USERLAND", "GNU") != "GNU":
+         for x in portage_bin_path:
+             path.append(os.path.join(x, "ebuild-helpers", "bsd"))
+ 
+     for x in portage_bin_path:
+         path.append(os.path.join(x, "ebuild-helpers"))
+     path.extend(prerootpath)
+ 
+     for prefix in prefixes:
+         prefix = prefix if prefix else "/"
+         for x in (
+             "usr/local/sbin",
+             "usr/local/bin",
+             "usr/sbin",
+             "usr/bin",
+             "sbin",
+             "bin",
+         ):
+             # Respect order defined in ROOTPATH
+             x_abs = os.path.join(prefix, x)
+             if x_abs not in rootpath_set:
+                 path.append(x_abs)
+ 
+     path.extend(rootpath)
++
++    # BEGIN PREFIX LOCAL: append EXTRA_PATH from make.globals
++    extrapath = [x for x in settings.get("EXTRA_PATH", "").split(":") if x]
++    path.extend(extrapath)
++    # END PREFIX LOCAL
++
+     settings["PATH"] = ":".join(path)
+ 
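A detail worth noting in the loop above: the standard prefix bin/sbin directories are appended only when they are not already listed in ROOTPATH, so any ordering the user defined in ROOTPATH is preserved. An illustrative sketch with hypothetical EPREFIX and ROOTPATH values (not taken from a real config):

    import os

    eprefix = "/tmp/gentoo"                               # hypothetical EPREFIX
    rootpath = ["/tmp/gentoo/usr/bin", "/opt/cross/bin"]  # hypothetical ROOTPATH entries
    rootpath_set = frozenset(rootpath)

    path = []
    for x in ("usr/local/sbin", "usr/local/bin", "usr/sbin", "usr/bin", "sbin", "bin"):
        x_abs = os.path.join(eprefix, x)
        if x_abs not in rootpath_set:  # already in ROOTPATH: keep its user-defined position
            path.append(x_abs)
    path.extend(rootpath)
    print(":".join(path))  # /tmp/gentoo/usr/bin appears where ROOTPATH put it, not earlier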
+ 
+ def doebuild_environment(
+     myebuild, mydo, myroot=None, settings=None, debug=False, use_cache=None, db=None
+ ):
+     """
+     Create and store environment variables in the config instance
+     that's passed in as the "settings" parameter. This will raise
+     UnsupportedAPIException if the given ebuild has an unsupported
+     EAPI. All EAPI dependent code comes last, so that essential
+     variables like PORTAGE_BUILDDIR are still initialized even in
+     cases when UnsupportedAPIException needs to be raised, which
+     can be useful when uninstalling a package that has corrupt
+     EAPI metadata.
+     The myroot and use_cache parameters are unused.
+     """
+ 
+     if settings is None:
+         raise TypeError("settings argument is required")
+ 
+     if db is None:
+         raise TypeError("db argument is required")
+ 
+     mysettings = settings
+     mydbapi = db
+     ebuild_path = os.path.abspath(myebuild)
+     pkg_dir = os.path.dirname(ebuild_path)
+     mytree = os.path.dirname(os.path.dirname(pkg_dir))
+     mypv = os.path.basename(ebuild_path)[:-7]
+     mysplit = _pkgsplit(mypv, eapi=mysettings.configdict["pkg"].get("EAPI"))
+     if mysplit is None:
+         raise IncorrectParameter(_("Invalid ebuild path: '%s'") % myebuild)
+ 
+     if (
+         mysettings.mycpv is not None
+         and mysettings.configdict["pkg"].get("PF") == mypv
+         and "CATEGORY" in mysettings.configdict["pkg"]
+     ):
+         # Assume that PF is enough to determine that we've got
+         # the correct CATEGORY, though this is not really
+         # a solid assumption since it's possible (though
+         # unlikely) that two packages in different
+         # categories have the same PF. Callers should call
+         # setcpv or create a clean clone of a locked config
+         # instance in order to ensure that this assumption
+         # does not fail like in bug #408817.
+         cat = mysettings.configdict["pkg"]["CATEGORY"]
+         mycpv = mysettings.mycpv
+     elif os.path.basename(pkg_dir) in (mysplit[0], mypv):
+         # portdbapi or vardbapi
+         cat = os.path.basename(os.path.dirname(pkg_dir))
+         mycpv = cat + "/" + mypv
+     else:
+         raise AssertionError("unable to determine CATEGORY")
+ 
+     # Make a backup of PORTAGE_TMPDIR prior to calling config.reset()
+     # so that the caller can override it.
+     tmpdir = mysettings["PORTAGE_TMPDIR"]
+ 
+     if mydo == "depend":
+         if mycpv != mysettings.mycpv:
+             # Don't pass in mydbapi here since the resulting aux_get
+             # call would lead to infinite 'depend' phase recursion.
+             mysettings.setcpv(mycpv)
+     else:
+         # If EAPI isn't in configdict["pkg"], it means that setcpv()
+         # hasn't been called with the mydb argument, so we have to
+         # call it here (portage code always calls setcpv properly,
+         # but api consumers might not).
+         if mycpv != mysettings.mycpv or "EAPI" not in mysettings.configdict["pkg"]:
+             # Reload env.d variables and reset any previous settings.
+             mysettings.reload()
+             mysettings.reset()
+             mysettings.setcpv(mycpv, mydb=mydbapi)
+ 
+     # config.reset() might have reverted a change made by the caller,
+     # so restore it to its original value. Sandbox needs canonical
+     # paths, so realpath it.
+     mysettings["PORTAGE_TMPDIR"] = os.path.realpath(tmpdir)
+ 
+     mysettings.pop("EBUILD_PHASE", None)  # remove from backupenv
+     mysettings["EBUILD_PHASE"] = mydo
+ 
+     # Set requested Python interpreter for Portage helpers.
+     mysettings["PORTAGE_PYTHON"] = portage._python_interpreter
+ 
+     # This is used by assert_sigpipe_ok(), which is used by the ebuild
+     # unpack() helper. SIGPIPE is typically 13, but it's better not
+     # to assume that.
+     mysettings["PORTAGE_SIGPIPE_STATUS"] = str(128 + signal.SIGPIPE)
+ 
+     # We are disabling user-specific bashrc files.
+     mysettings["BASH_ENV"] = INVALID_ENV_FILE
+ 
+     if debug:  # Otherwise it overrides emerge's settings.
+         # We have no other way to set debug; it can't be passed in
+         # due to how it's coded. Don't overwrite this so we can use it.
+         mysettings["PORTAGE_DEBUG"] = "1"
+ 
+     mysettings["EBUILD"] = ebuild_path
+     mysettings["O"] = pkg_dir
+     mysettings.configdict["pkg"]["CATEGORY"] = cat
+     mysettings["PF"] = mypv
+ 
+     if hasattr(mydbapi, "repositories"):
+         repo = mydbapi.repositories.get_repo_for_location(mytree)
+         mysettings["PORTDIR"] = repo.eclass_db.porttrees[0]
+         mysettings["PORTAGE_ECLASS_LOCATIONS"] = repo.eclass_db.eclass_locations_string
+         mysettings.configdict["pkg"]["PORTAGE_REPO_NAME"] = repo.name
+ 
+     mysettings["PORTDIR"] = os.path.realpath(mysettings["PORTDIR"])
+     mysettings.pop("PORTDIR_OVERLAY", None)
+     mysettings["DISTDIR"] = os.path.realpath(mysettings["DISTDIR"])
+     mysettings["RPMDIR"] = os.path.realpath(mysettings["RPMDIR"])
+ 
+     mysettings["ECLASSDIR"] = mysettings["PORTDIR"] + "/eclass"
+ 
+     mysettings["PORTAGE_BASHRC_FILES"] = "\n".join(mysettings._pbashrc)
+ 
+     mysettings["P"] = mysplit[0] + "-" + mysplit[1]
+     mysettings["PN"] = mysplit[0]
+     mysettings["PV"] = mysplit[1]
+     mysettings["PR"] = mysplit[2]
+ 
+     if noiselimit < 0:
+         mysettings["PORTAGE_QUIET"] = "1"
+ 
+     if mysplit[2] == "r0":
+         mysettings["PVR"] = mysplit[1]
+     else:
+         mysettings["PVR"] = mysplit[1] + "-" + mysplit[2]
+ 
+     # All temporary directories should be subdirectories of
+     # $PORTAGE_TMPDIR/portage, since it's common for /tmp and /var/tmp
+     # to be mounted with the "noexec" option (see bug #346899).
+     mysettings["BUILD_PREFIX"] = mysettings["PORTAGE_TMPDIR"] + "/portage"
+     mysettings["PKG_TMPDIR"] = mysettings["BUILD_PREFIX"] + "/._unmerge_"
+ 
+     # Package {pre,post}inst and {pre,post}rm may overlap, so they must have separate
+     # locations in order to prevent interference.
+     if mydo in ("unmerge", "prerm", "postrm", "cleanrm"):
+         mysettings["PORTAGE_BUILDDIR"] = os.path.join(
+             mysettings["PKG_TMPDIR"], mysettings["CATEGORY"], mysettings["PF"]
+         )
+     else:
+         mysettings["PORTAGE_BUILDDIR"] = os.path.join(
+             mysettings["BUILD_PREFIX"], mysettings["CATEGORY"], mysettings["PF"]
+         )
+ 
+     mysettings["HOME"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "homedir")
+     mysettings["WORKDIR"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "work")
+     mysettings["D"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "image") + os.sep
+     mysettings["T"] = os.path.join(mysettings["PORTAGE_BUILDDIR"], "temp")
+     mysettings["SANDBOX_LOG"] = os.path.join(mysettings["T"], "sandbox.log")
+     mysettings["FILESDIR"] = os.path.join(settings["PORTAGE_BUILDDIR"], "files")
+ 
+     # Prefix forward compatibility
+     eprefix_lstrip = mysettings["EPREFIX"].lstrip(os.sep)
+     mysettings["ED"] = (
+         os.path.join(mysettings["D"], eprefix_lstrip).rstrip(os.sep) + os.sep
+     )
+ 
+     mysettings["PORTAGE_BASHRC"] = os.path.join(
+         mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_FILE
+     )
+     mysettings["PM_EBUILD_HOOK_DIR"] = os.path.join(
+         mysettings["PORTAGE_CONFIGROOT"], EBUILD_SH_ENV_DIR
+     )
+ 
+     # Allow color.map to control colors associated with einfo, ewarn, etc...
+     mysettings["PORTAGE_COLORMAP"] = colormap()
+ 
+     if "COLUMNS" not in mysettings:
+         # Set COLUMNS, in order to prevent unnecessary stty calls
+         # inside the set_colors function of isolated-functions.sh.
+         # We cache the result in os.environ, in order to avoid
+         # multiple stty calls in cases when get_term_size() falls
+         # back to stty due to a missing or broken curses module.
+         columns = os.environ.get("COLUMNS")
+         if columns is None:
+             rows, columns = portage.output.get_term_size()
+             if columns < 1:
+                 # Force a sane value for COLUMNS, so that tools
+                 # like ls don't complain (see bug #394091).
+                 columns = 80
+             columns = str(columns)
+             os.environ["COLUMNS"] = columns
+         mysettings["COLUMNS"] = columns
+ 
+     # EAPI is always known here, even for the "depend" phase, because
+     # EbuildMetadataPhase gets it from _parse_eapi_ebuild_head().
+     eapi = mysettings.configdict["pkg"]["EAPI"]
+     _doebuild_path(mysettings, eapi=eapi)
+ 
+     # All EAPI dependent code comes last, so that essential variables like
+     # PATH and PORTAGE_BUILDDIR are still initialized even in cases when
+     # UnsupportedAPIException needs to be raised, which can be useful
+     # when uninstalling a package that has corrupt EAPI metadata.
+     if not eapi_is_supported(eapi):
+         raise UnsupportedAPIException(mycpv, eapi)
+ 
+     if (
+         eapi_exports_REPOSITORY(eapi)
+         and "PORTAGE_REPO_NAME" in mysettings.configdict["pkg"]
+     ):
+         mysettings.configdict["pkg"]["REPOSITORY"] = mysettings.configdict["pkg"][
+             "PORTAGE_REPO_NAME"
+         ]
+ 
+     if mydo != "depend":
+         if hasattr(mydbapi, "getFetchMap") and (
+             "A" not in mysettings.configdict["pkg"]
+             or "AA" not in mysettings.configdict["pkg"]
+         ):
+             src_uri = mysettings.configdict["pkg"].get("SRC_URI")
+             if src_uri is None:
+                 (src_uri,) = mydbapi.aux_get(
+                     mysettings.mycpv, ["SRC_URI"], mytree=mytree
+                 )
+             metadata = {
+                 "EAPI": eapi,
+                 "SRC_URI": src_uri,
+             }
+             use = frozenset(mysettings["PORTAGE_USE"].split())
+             try:
+                 uri_map = _parse_uri_map(mysettings.mycpv, metadata, use=use)
+             except InvalidDependString:
+                 mysettings.configdict["pkg"]["A"] = ""
+             else:
+                 mysettings.configdict["pkg"]["A"] = " ".join(uri_map)
+ 
+             try:
+                 uri_map = _parse_uri_map(mysettings.mycpv, metadata)
+             except InvalidDependString:
+                 mysettings.configdict["pkg"]["AA"] = ""
+             else:
+                 mysettings.configdict["pkg"]["AA"] = " ".join(uri_map)
+ 
+         ccache = "ccache" in mysettings.features
+         distcc = "distcc" in mysettings.features
+         icecream = "icecream" in mysettings.features
+ 
+         if ccache or distcc or icecream:
+             libdir = None
+             default_abi = mysettings.get("DEFAULT_ABI")
+             if default_abi:
+                 libdir = mysettings.get("LIBDIR_" + default_abi)
+             if not libdir:
+                 libdir = "lib"
+ 
+             # The installation locations used to vary between versions...
+             # Safer to look them up rather than assuming
+             possible_libexecdirs = (libdir, "lib", "libexec")
+             masquerades = []
+             if distcc:
+                 masquerades.append(("distcc", "distcc"))
+             if icecream:
+                 masquerades.append(("icecream", "icecc"))
+             if ccache:
+                 masquerades.append(("ccache", "ccache"))
+ 
+             for feature, m in masquerades:
+                 for l in possible_libexecdirs:
+                     p = os.path.join(os.sep, eprefix_lstrip, "usr", l, m, "bin")
+                     if os.path.isdir(p):
+                         mysettings["PATH"] = p + ":" + mysettings["PATH"]
+                         break
+                 else:
+                     writemsg(
+                         (
+                             "Warning: %s requested but no masquerade dir "
+                             "can be found in /usr/lib*/%s/bin\n"
+                         )
+                         % (m, m)
+                     )
+                     mysettings.features.remove(feature)
+ 
+         if "MAKEOPTS" not in mysettings:
+             nproc = get_cpu_count()
+             if nproc:
+                 mysettings["MAKEOPTS"] = "-j%d" % (nproc)
+ 
+         if not eapi_exports_KV(eapi):
+             # Discard KV for EAPIs that don't support it. Cached KV is restored
+             # from the backupenv whenever config.reset() is called.
+             mysettings.pop("KV", None)
+         elif "KV" not in mysettings and mydo in (
+             "compile",
+             "config",
+             "configure",
+             "info",
+             "install",
+             "nofetch",
+             "postinst",
+             "postrm",
+             "preinst",
+             "prepare",
+             "prerm",
+             "setup",
+             "test",
+             "unpack",
+         ):
+             mykv, err1 = ExtractKernelVersion(
+                 os.path.join(mysettings["EROOT"], "usr/src/linux")
+             )
+             if mykv:
+                 # Regular source tree
+                 mysettings["KV"] = mykv
+             else:
+                 mysettings["KV"] = ""
+             mysettings.backup_changes("KV")
+ 
+         binpkg_compression = mysettings.get("BINPKG_COMPRESS", "bzip2")
+         try:
+             compression = _compressors[binpkg_compression]
+         except KeyError as e:
+             if binpkg_compression:
+                 writemsg(
+                     "Warning: Invalid or unsupported compression method: %s\n"
+                     % e.args[0]
+                 )
+             else:
+                 # Empty BINPKG_COMPRESS disables compression.
+                 mysettings["PORTAGE_COMPRESSION_COMMAND"] = "cat"
+         else:
+             try:
+                 compression_binary = shlex_split(
+                     varexpand(compression["compress"], mydict=settings)
+                 )[0]
+             except IndexError as e:
+                 writemsg(
+                     "Warning: Invalid or unsupported compression method: %s\n"
+                     % e.args[0]
+                 )
+             else:
+                 if find_binary(compression_binary) is None:
+                     missing_package = compression["package"]
+                     writemsg(
+                         "Warning: File compression unsupported %s. Missing package: %s\n"
+                         % (binpkg_compression, missing_package)
+                     )
+                 else:
+                     cmd = [
+                         varexpand(x, mydict=settings)
+                         for x in shlex_split(compression["compress"])
+                     ]
+                     # Filter empty elements
+                     cmd = [x for x in cmd if x != ""]
+                     mysettings["PORTAGE_COMPRESSION_COMMAND"] = " ".join(cmd)
+ 
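The version variables assigned above follow directly from splitting the ebuild file name, and PVR carries the revision only when it differs from r0. A hypothetical worked example for "foo-1.2.3-r1.ebuild" (values assumed for illustration):

    # Illustrative sketch; mirrors the P/PN/PV/PR/PVR assignments above.
    mysplit = ("foo", "1.2.3", "r1")           # what _pkgsplit would return (assumed)
    P = mysplit[0] + "-" + mysplit[1]          # "foo-1.2.3"
    PN, PV, PR = mysplit                       # "foo", "1.2.3", "r1"
    PVR = PV if PR == "r0" else PV + "-" + PR  # "1.2.3-r1"
    print(P, PN, PV, PR, PVR)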
  
  _doebuild_manifest_cache = None
  _doebuild_broken_ebuilds = set()
  _doebuild_broken_manifests = set()
  _doebuild_commands_without_builddir = (
- 	'clean', 'cleanrm', 'depend', 'digest',
- 	'fetch', 'fetchall', 'help', 'manifest'
+     "clean",
+     "cleanrm",
+     "depend",
+     "digest",
+     "fetch",
+     "fetchall",
+     "help",
+     "manifest",
  )
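Phases listed in this tuple never require PORTAGE_BUILDDIR, which is why doebuild() skips its temporary-directory check for them. A trivial illustration of that membership test, with the tuple inlined as a set and a hypothetical phase name:

    # Illustrative sketch of the check doebuild() performs.
    commands_without_builddir = {
        "clean", "cleanrm", "depend", "digest",
        "fetch", "fetchall", "help", "manifest",
    }
    mydo = "fetch"
    if mydo not in commands_without_builddir:
        pass  # a real run would validate the global temp dir here
    else:
        print("%s runs without a build directory" % mydo)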
  
- def doebuild(myebuild, mydo, _unused=DeprecationWarning, settings=None, debug=0, listonly=0,
- 	fetchonly=0, cleanup=0, dbkey=DeprecationWarning, use_cache=1, fetchall=0, tree=None,
- 	mydbapi=None, vartree=None, prev_mtimes=None,
- 	fd_pipes=None, returnpid=False):
- 	"""
- 	Wrapper function that invokes specific ebuild phases through the spawning
- 	of ebuild.sh
- 
- 	@param myebuild: name of the ebuild to invoke the phase on (CPV)
- 	@type myebuild: String
- 	@param mydo: Phase to run
- 	@type mydo: String
- 	@param _unused: Deprecated (use settings["ROOT"] instead)
- 	@type _unused: String
- 	@param settings: Portage Configuration
- 	@type settings: instance of portage.config
- 	@param debug: Turns on various debug information (eg, debug for spawn)
- 	@type debug: Boolean
- 	@param listonly: Used to wrap fetch(); passed such that fetch only lists files required.
- 	@type listonly: Boolean
- 	@param fetchonly: Used to wrap fetch(); passed such that files are only fetched (no other actions)
- 	@type fetchonly: Boolean
- 	@param cleanup: Passed to prepare_build_dirs (TODO: what does it do?)
- 	@type cleanup: Boolean
- 	@param dbkey: A file path where metadata generated by the 'depend' phase
- 		will be written.
- 	@type dbkey: String
- 	@param use_cache: Enables the cache
- 	@type use_cache: Boolean
- 	@param fetchall: Used to wrap fetch(), fetches all URIs (even ones invalid due to USE conditionals)
- 	@type fetchall: Boolean
- 	@param tree: Which tree to use ('vartree','porttree','bintree', etc..), defaults to 'porttree'
- 	@type tree: String
- 	@param mydbapi: a dbapi instance to pass to various functions; this should be a portdbapi instance.
- 	@type mydbapi: portdbapi instance
- 	@param vartree: A instance of vartree; used for aux_get calls, defaults to db[myroot]['vartree']
- 	@type vartree: vartree instance
- 	@param prev_mtimes: A dict of { filename:mtime } keys used by merge() to do config_protection
- 	@type prev_mtimes: dictionary
- 	@param fd_pipes: A dict of mapping for pipes, { '0': stdin, '1': stdout }
- 		for example.
- 	@type fd_pipes: Dictionary
- 	@param returnpid: Return a list of process IDs for a successful spawn, or
- 		an integer value if spawn is unsuccessful. NOTE: This requires the
- 		caller clean up all returned PIDs.
- 	@type returnpid: Boolean
- 	@rtype: Boolean
- 	@return:
- 	1. 0 for success
- 	2. 1 for error
- 
- 	Most errors have an accompanying error message.
- 
- 	listonly and fetchonly are only really necessary for operations involving 'fetch'
- 	prev_mtimes are only necessary for merge operations.
- 	Other variables may not be strictly required, many have defaults that are set inside of doebuild.
- 
- 	"""
- 
- 	if settings is None:
- 		raise TypeError("settings parameter is required")
- 	mysettings = settings
- 	myroot = settings['EROOT']
- 
- 	if _unused is not DeprecationWarning:
- 		warnings.warn("The third parameter of the "
- 			"portage.doebuild() is deprecated. Instead "
- 			"settings['EROOT'] is used.",
- 			DeprecationWarning, stacklevel=2)
- 
- 	if dbkey is not DeprecationWarning:
- 		warnings.warn("portage.doebuild() called "
- 			"with deprecated dbkey argument.",
- 			DeprecationWarning, stacklevel=2)
- 
- 	if not tree:
- 		writemsg("Warning: tree not specified to doebuild\n")
- 		tree = "porttree"
- 
- 	# chunked out deps for each phase, so that ebuild binary can use it
- 	# to collapse targets down.
- 	actionmap_deps={
- 	"pretend"  : [],
- 	"setup":  ["pretend"],
- 	"unpack": ["setup"],
- 	"prepare": ["unpack"],
- 	"configure": ["prepare"],
- 	"compile":["configure"],
- 	"test":   ["compile"],
- 	"install":["test"],
- 	"instprep":["install"],
- 	"rpm":    ["install"],
- 	"package":["install"],
- 	"merge"  :["install"],
- 	}
- 
- 	if mydbapi is None:
- 		mydbapi = portage.db[myroot][tree].dbapi
- 
- 	if vartree is None and mydo in ("merge", "qmerge", "unmerge"):
- 		vartree = portage.db[myroot]["vartree"]
- 
- 	features = mysettings.features
- 
- 	clean_phases = ("clean", "cleanrm")
- 	validcommands = ["help","clean","prerm","postrm","cleanrm","preinst","postinst",
- 	                "config", "info", "setup", "depend", "pretend",
- 	                "fetch", "fetchall", "digest",
- 	                "unpack", "prepare", "configure", "compile", "test",
- 	                "install", "instprep", "rpm", "qmerge", "merge",
- 	                "package", "unmerge", "manifest", "nofetch"]
- 
- 	if mydo not in validcommands:
- 		validcommands.sort()
- 		writemsg("!!! doebuild: '%s' is not one of the following valid commands:" % mydo,
- 			noiselevel=-1)
- 		for vcount in range(len(validcommands)):
- 			if vcount%6 == 0:
- 				writemsg("\n!!! ", noiselevel=-1)
- 			writemsg(validcommands[vcount].ljust(11), noiselevel=-1)
- 		writemsg("\n", noiselevel=-1)
- 		return 1
- 
- 	if returnpid and mydo != 'depend':
- 		# This case is not supported, since it bypasses the EbuildPhase class
- 		# which implements important functionality (including post phase hooks
- 		# and IPC for things like best/has_version and die).
- 		warnings.warn("portage.doebuild() called "
- 			"with returnpid parameter enabled. This usage will "
- 			"not be supported in the future.",
- 			DeprecationWarning, stacklevel=2)
- 
- 	if mydo == "fetchall":
- 		fetchall = 1
- 		mydo = "fetch"
- 
- 	if mydo not in clean_phases and not os.path.exists(myebuild):
- 		writemsg("!!! doebuild: %s not found for %s\n" % (myebuild, mydo),
- 			noiselevel=-1)
- 		return 1
- 
- 	global _doebuild_manifest_cache
- 	pkgdir = os.path.dirname(myebuild)
- 	manifest_path = os.path.join(pkgdir, "Manifest")
- 	if tree == "porttree":
- 		repo_config = mysettings.repositories.get_repo_for_location(
- 			os.path.dirname(os.path.dirname(pkgdir)))
- 	else:
- 		repo_config = None
- 
- 	mf = None
- 	if "strict" in features and \
- 		"digest" not in features and \
- 		tree == "porttree" and \
- 		not repo_config.thin_manifest and \
- 		mydo not in ("digest", "manifest", "help") and \
- 		not portage._doebuild_manifest_exempt_depend and \
- 		not (repo_config.allow_missing_manifest and not os.path.exists(manifest_path)):
- 		# Always verify the ebuild checksums before executing it.
- 		global _doebuild_broken_ebuilds
- 
- 		if myebuild in _doebuild_broken_ebuilds:
- 			return 1
- 
- 		# Avoid checking the same Manifest several times in a row during a
- 		# regen with an empty cache.
- 		if _doebuild_manifest_cache is None or \
- 			_doebuild_manifest_cache.getFullname() != manifest_path:
- 			_doebuild_manifest_cache = None
- 			if not os.path.exists(manifest_path):
- 				out = portage.output.EOutput()
- 				out.eerror(_("Manifest not found for '%s'") % (myebuild,))
- 				_doebuild_broken_ebuilds.add(myebuild)
- 				return 1
- 			mf = repo_config.load_manifest(pkgdir, mysettings["DISTDIR"])
- 
- 		else:
- 			mf = _doebuild_manifest_cache
- 
- 		try:
- 			mf.checkFileHashes("EBUILD", os.path.basename(myebuild))
- 		except KeyError:
- 			if not (mf.allow_missing and
- 				os.path.basename(myebuild) not in mf.fhashdict["EBUILD"]):
- 				out = portage.output.EOutput()
- 				out.eerror(_("Missing digest for '%s'") % (myebuild,))
- 				_doebuild_broken_ebuilds.add(myebuild)
- 				return 1
- 		except FileNotFound:
- 			out = portage.output.EOutput()
- 			out.eerror(_("A file listed in the Manifest "
- 				"could not be found: '%s'") % (myebuild,))
- 			_doebuild_broken_ebuilds.add(myebuild)
- 			return 1
- 		except DigestException as e:
- 			out = portage.output.EOutput()
- 			out.eerror(_("Digest verification failed:"))
- 			out.eerror("%s" % e.value[0])
- 			out.eerror(_("Reason: %s") % e.value[1])
- 			out.eerror(_("Got: %s") % e.value[2])
- 			out.eerror(_("Expected: %s") % e.value[3])
- 			_doebuild_broken_ebuilds.add(myebuild)
- 			return 1
- 
- 		if mf.getFullname() in _doebuild_broken_manifests:
- 			return 1
- 
- 		if mf is not _doebuild_manifest_cache and not mf.allow_missing:
- 
- 			# Make sure that all of the ebuilds are
- 			# actually listed in the Manifest.
- 			for f in os.listdir(pkgdir):
- 				pf = None
- 				if f[-7:] == '.ebuild':
- 					pf = f[:-7]
- 				if pf is not None and not mf.hasFile("EBUILD", f):
- 					f = os.path.join(pkgdir, f)
- 					if f not in _doebuild_broken_ebuilds:
- 						out = portage.output.EOutput()
- 						out.eerror(_("A file is not listed in the "
- 							"Manifest: '%s'") % (f,))
- 					_doebuild_broken_manifests.add(manifest_path)
- 					return 1
- 
- 		# We cache it only after all above checks succeed.
- 		_doebuild_manifest_cache = mf
- 
- 	logfile=None
- 	builddir_lock = None
- 	tmpdir = None
- 	tmpdir_orig = None
- 
- 	try:
- 		if mydo in ("digest", "manifest", "help"):
- 			# Temporarily exempt the depend phase from manifest checks, in case
- 			# aux_get calls trigger cache generation.
- 			portage._doebuild_manifest_exempt_depend += 1
- 
- 		# If we don't need much space and we don't need a constant location,
- 		# we can temporarily override PORTAGE_TMPDIR with a random temp dir
- 		# so that there's no need for locking and it can be used even if the
- 		# user isn't in the portage group.
- 		if not returnpid and mydo in ("info",):
- 			tmpdir = tempfile.mkdtemp()
- 			tmpdir_orig = mysettings["PORTAGE_TMPDIR"]
- 			mysettings["PORTAGE_TMPDIR"] = tmpdir
- 
- 		doebuild_environment(myebuild, mydo, myroot, mysettings, debug,
- 			use_cache, mydbapi)
- 
- 		if mydo in clean_phases:
- 			builddir_lock = None
- 			if not returnpid and \
- 				'PORTAGE_BUILDDIR_LOCKED' not in mysettings:
- 				builddir_lock = EbuildBuildDir(
- 					scheduler=asyncio._safe_loop(),
- 					settings=mysettings)
- 				builddir_lock.scheduler.run_until_complete(
- 					builddir_lock.async_lock())
- 			try:
- 				return _spawn_phase(mydo, mysettings,
- 					fd_pipes=fd_pipes, returnpid=returnpid)
- 			finally:
- 				if builddir_lock is not None:
- 					builddir_lock.scheduler.run_until_complete(
- 						builddir_lock.async_unlock())
- 
- 		# get possible slot information from the deps file
- 		if mydo == "depend":
- 			writemsg("!!! DEBUG: dbkey: %s\n" % str(dbkey), 2)
- 			if returnpid:
- 				return _spawn_phase(mydo, mysettings,
- 					fd_pipes=fd_pipes, returnpid=returnpid)
- 			if dbkey and dbkey is not DeprecationWarning:
- 				mysettings["dbkey"] = dbkey
- 			else:
- 				mysettings["dbkey"] = \
- 					os.path.join(mysettings.depcachedir, "aux_db_key_temp")
- 
- 			return _spawn_phase(mydo, mysettings,
- 				fd_pipes=fd_pipes, returnpid=returnpid)
- 
- 		if mydo == "nofetch":
- 
- 			if returnpid:
- 				writemsg("!!! doebuild: %s\n" %
- 					_("returnpid is not supported for phase '%s'\n" % mydo),
- 					noiselevel=-1)
- 
- 			return spawn_nofetch(mydbapi, myebuild, settings=mysettings,
- 				fd_pipes=fd_pipes)
- 
- 		if tree == "porttree":
- 
- 			if not returnpid:
- 				# Validate dependency metadata here to ensure that ebuilds with
- 				# invalid data are never installed via the ebuild command. Skip
- 				# this when returnpid is True (assume the caller handled it).
- 				rval = _validate_deps(mysettings, myroot, mydo, mydbapi)
- 				if rval != os.EX_OK:
- 					return rval
- 
- 		else:
- 			# FEATURES=noauto only makes sense for porttree, and we don't want
- 			# it to trigger redundant sourcing of the ebuild for API consumers
- 			# that are using binary packages
- 			if "noauto" in mysettings.features:
- 				mysettings.features.discard("noauto")
- 
- 		# If we are not using a private temp dir, then check access
- 		# to the global temp dir.
- 		if tmpdir is None and \
- 			mydo not in _doebuild_commands_without_builddir:
- 			rval = _check_temp_dir(mysettings)
- 			if rval != os.EX_OK:
- 				return rval
- 
- 		if mydo == "unmerge":
- 			if returnpid:
- 				writemsg("!!! doebuild: %s\n" %
- 					_("returnpid is not supported for phase '%s'\n" % mydo),
- 					noiselevel=-1)
- 			return unmerge(mysettings["CATEGORY"],
- 				mysettings["PF"], myroot, mysettings, vartree=vartree)
- 
- 		phases_to_run = set()
- 		if returnpid or \
- 			"noauto" in mysettings.features or \
- 			mydo not in actionmap_deps:
- 			phases_to_run.add(mydo)
- 		else:
- 			phase_stack = [mydo]
- 			while phase_stack:
- 				x = phase_stack.pop()
- 				if x in phases_to_run:
- 					continue
- 				phases_to_run.add(x)
- 				phase_stack.extend(actionmap_deps.get(x, []))
- 			del phase_stack
- 
- 		alist = set(mysettings.configdict["pkg"].get("A", "").split())
- 
- 		unpacked = False
- 		if tree != "porttree" or \
- 			mydo in _doebuild_commands_without_builddir:
- 			pass
- 		elif "unpack" not in phases_to_run:
- 			unpacked = os.path.exists(os.path.join(
- 				mysettings["PORTAGE_BUILDDIR"], ".unpacked"))
- 		else:
- 			try:
- 				workdir_st = os.stat(mysettings["WORKDIR"])
- 			except OSError:
- 				pass
- 			else:
- 				newstuff = False
- 				if not os.path.exists(os.path.join(
- 					mysettings["PORTAGE_BUILDDIR"], ".unpacked")):
- 					writemsg_stdout(_(
- 						">>> Not marked as unpacked; recreating WORKDIR...\n"))
- 					newstuff = True
- 				else:
- 					for x in alist:
- 						writemsg_stdout(">>> Checking %s's mtime...\n" % x)
- 						try:
- 							x_st = os.stat(os.path.join(
- 								mysettings["DISTDIR"], x))
- 						except OSError:
- 							# file deleted
- 							x_st = None
- 
- 						if x_st is not None and x_st.st_mtime > workdir_st.st_mtime:
- 							writemsg_stdout(_(">>> Timestamp of "
- 								"%s has changed; recreating WORKDIR...\n") % x)
- 							newstuff = True
- 							break
- 
- 				if newstuff:
- 					if builddir_lock is None and \
- 						'PORTAGE_BUILDDIR_LOCKED' not in mysettings:
- 						builddir_lock = EbuildBuildDir(
- 							scheduler=asyncio._safe_loop(),
- 							settings=mysettings)
- 						builddir_lock.scheduler.run_until_complete(
- 							builddir_lock.async_lock())
- 					try:
- 						_spawn_phase("clean", mysettings)
- 					finally:
- 						if builddir_lock is not None:
- 							builddir_lock.scheduler.run_until_complete(
- 								builddir_lock.async_unlock())
- 							builddir_lock = None
- 				else:
- 					writemsg_stdout(_(">>> WORKDIR is up-to-date, keeping...\n"))
- 					unpacked = True
- 
- 		# Build directory creation isn't required for any of these.
- 		# In the fetch phase, the directory is needed only for RESTRICT=fetch
- 		# in order to satisfy the sane $PWD requirement (from bug #239560)
- 		# when pkg_nofetch is spawned.
- 		have_build_dirs = False
- 		if mydo not in ('digest', 'fetch', 'help', 'manifest'):
- 			if not returnpid and \
- 				'PORTAGE_BUILDDIR_LOCKED' not in mysettings:
- 				builddir_lock = EbuildBuildDir(
- 					scheduler=asyncio._safe_loop(),
- 					settings=mysettings)
- 				builddir_lock.scheduler.run_until_complete(
- 					builddir_lock.async_lock())
- 			mystatus = prepare_build_dirs(myroot, mysettings, cleanup)
- 			if mystatus:
- 				return mystatus
- 			have_build_dirs = True
- 
- 			# emerge handles logging externally
- 			if not returnpid:
- 				# PORTAGE_LOG_FILE is set by the
- 				# above prepare_build_dirs() call.
- 				logfile = mysettings.get("PORTAGE_LOG_FILE")
- 
- 		if have_build_dirs:
- 			rval = _prepare_env_file(mysettings)
- 			if rval != os.EX_OK:
- 				return rval
- 
- 		if eapi_exports_merge_type(mysettings["EAPI"]) and \
- 			"MERGE_TYPE" not in mysettings.configdict["pkg"]:
- 			if tree == "porttree":
- 				mysettings.configdict["pkg"]["MERGE_TYPE"] = "source"
- 			elif tree == "bintree":
- 				mysettings.configdict["pkg"]["MERGE_TYPE"] = "binary"
- 
- 		if tree == "porttree":
- 			mysettings.configdict["pkg"]["EMERGE_FROM"] = "ebuild"
- 		elif tree == "bintree":
- 			mysettings.configdict["pkg"]["EMERGE_FROM"] = "binary"
- 
- 		# NOTE: It's not possible to set REPLACED_BY_VERSION for prerm
- 		#       and postrm here, since we don't necessarily know what
- 		#       versions are being installed. This could be a problem
- 		#       for API consumers if they don't use dblink.treewalk()
- 		#       to execute prerm and postrm.
- 		if eapi_exports_replace_vars(mysettings["EAPI"]) and \
- 			(mydo in ("postinst", "preinst", "pretend", "setup") or \
- 			("noauto" not in features and not returnpid and \
- 			(mydo in actionmap_deps or mydo in ("merge", "package", "qmerge")))):
- 			if not vartree:
- 				writemsg("Warning: vartree not given to doebuild. " + \
- 					"Cannot set REPLACING_VERSIONS in pkg_{pretend,setup}\n")
- 			else:
- 				vardb = vartree.dbapi
- 				cpv = mysettings.mycpv
- 				cpv_slot = "%s%s%s" % \
- 					(cpv.cp, portage.dep._slot_separator, cpv.slot)
- 				mysettings["REPLACING_VERSIONS"] = " ".join(
- 					set(portage.versions.cpv_getversion(match) \
- 						for match in vardb.match(cpv_slot) + \
- 						vardb.match('='+cpv)))
- 
- 		# if any of these are being called, handle them -- running them out of
- 		# the sandbox -- and stop now.
- 		if mydo in ("config", "help", "info", "postinst",
- 			"preinst", "pretend", "postrm", "prerm"):
- 			if mydo in ("preinst", "postinst"):
- 				env_file = os.path.join(os.path.dirname(mysettings["EBUILD"]),
- 					"environment.bz2")
- 				if os.path.isfile(env_file):
- 					mysettings["PORTAGE_UPDATE_ENV"] = env_file
- 			try:
- 				return _spawn_phase(mydo, mysettings,
- 					fd_pipes=fd_pipes, logfile=logfile, returnpid=returnpid)
- 			finally:
- 				mysettings.pop("PORTAGE_UPDATE_ENV", None)
- 
- 		mycpv = "/".join((mysettings["CATEGORY"], mysettings["PF"]))
- 
- 		# Only try and fetch the files if we are going to need them ...
- 		# otherwise, if user has FEATURES=noauto and they run `ebuild clean
- 		# unpack compile install`, we will try and fetch 4 times :/
- 		need_distfiles = tree == "porttree" and not unpacked and \
- 			(mydo in ("fetch", "unpack") or \
- 			mydo not in ("digest", "manifest") and "noauto" not in features)
- 		if need_distfiles:
- 
- 			src_uri = mysettings.configdict["pkg"].get("SRC_URI")
- 			if src_uri is None:
- 				src_uri, = mydbapi.aux_get(mysettings.mycpv,
- 					["SRC_URI"], mytree=os.path.dirname(os.path.dirname(
- 					os.path.dirname(myebuild))))
- 			metadata = {
- 				"EAPI"    : mysettings["EAPI"],
- 				"SRC_URI" : src_uri,
- 			}
- 			use = frozenset(mysettings["PORTAGE_USE"].split())
- 			try:
- 				alist = _parse_uri_map(mysettings.mycpv, metadata, use=use)
- 				aalist = _parse_uri_map(mysettings.mycpv, metadata)
- 			except InvalidDependString as e:
- 				writemsg("!!! %s\n" % str(e), noiselevel=-1)
- 				writemsg(_("!!! Invalid SRC_URI for '%s'.\n") % mycpv,
- 					noiselevel=-1)
- 				del e
- 				return 1
- 
- 			if "mirror" in features or fetchall:
- 				fetchme = aalist
- 			else:
- 				fetchme = alist
- 
- 			dist_digests = None
- 			if mf is not None:
- 				dist_digests = mf.getTypeDigests("DIST")
- 
- 			def _fetch_subprocess(fetchme, mysettings, listonly, dist_digests):
- 				# For userfetch, drop privileges for the entire fetch call, in
- 				# order to handle DISTDIR on NFS with root_squash for bug 601252.
- 				if _want_userfetch(mysettings):
- 					_drop_privs_userfetch(mysettings)
- 
- 				return fetch(fetchme, mysettings, listonly=listonly,
- 					fetchonly=fetchonly, allow_missing_digests=False,
- 					digests=dist_digests)
- 
- 			loop = asyncio._safe_loop()
- 			if loop.is_running():
- 				# Called by EbuildFetchonly for emerge --pretend --fetchonly.
- 				success = fetch(fetchme, mysettings, listonly=listonly,
- 					fetchonly=fetchonly, allow_missing_digests=False,
- 					digests=dist_digests)
- 			else:
- 				success = loop.run_until_complete(
- 					loop.run_in_executor(ForkExecutor(loop=loop),
- 					_fetch_subprocess, fetchme, mysettings, listonly, dist_digests))
- 			if not success:
- 				# Since listonly mode is called by emerge --pretend in an
- 				# asynchronous context, spawn_nofetch would trigger event loop
- 				# recursion here, therefore delegate execution of pkg_nofetch
- 				# to the caller (bug 657360).
- 				if not listonly:
- 					spawn_nofetch(mydbapi, myebuild, settings=mysettings,
- 						fd_pipes=fd_pipes)
- 				return 1
- 
- 		if need_distfiles:
- 			# Files are already checked inside fetch(),
- 			# so do not check them again.
- 			checkme = []
- 		elif unpacked:
- 			# The unpack phase is marked as complete, so it
- 			# would be wasteful to check distfiles again.
- 			checkme = []
- 		else:
- 			checkme = alist
- 
- 		if mydo == "fetch" and listonly:
- 			return 0
- 
- 		try:
- 			if mydo == "manifest":
- 				mf = None
- 				_doebuild_manifest_cache = None
- 				return not digestgen(mysettings=mysettings, myportdb=mydbapi)
- 			if mydo == "digest":
- 				mf = None
- 				_doebuild_manifest_cache = None
- 				return not digestgen(mysettings=mysettings, myportdb=mydbapi)
- 			if "digest" in mysettings.features:
- 				mf = None
- 				_doebuild_manifest_cache = None
- 				digestgen(mysettings=mysettings, myportdb=mydbapi)
- 		except PermissionDenied as e:
- 			writemsg(_("!!! Permission Denied: %s\n") % (e,), noiselevel=-1)
- 			if mydo in ("digest", "manifest"):
- 				return 1
- 
- 		if mydo == "fetch":
- 			# Return after digestgen for FEATURES=digest support.
- 			# Return before digestcheck, since fetch() already
- 			# checked any relevant digests.
- 			return 0
- 
- 		# See above comment about fetching only when needed
- 		if tree == 'porttree' and \
- 			not digestcheck(checkme, mysettings, "strict" in features, mf=mf):
- 			return 1
- 
- 		# remove PORTAGE_ACTUAL_DISTDIR once cvs/svn is supported via SRC_URI
- 		if tree == 'porttree' and \
- 			((mydo != "setup" and "noauto" not in features) \
- 			or mydo in ("install", "unpack")):
- 			_prepare_fake_distdir(mysettings, alist)
- 
- 		#initial dep checks complete; time to process main commands
- 		actionmap = _spawn_actionmap(mysettings)
- 
- 		# merge the deps in so we have again a 'full' actionmap
- 		# be glad when this can die.
- 		for x in actionmap:
- 			if len(actionmap_deps.get(x, [])):
- 				actionmap[x]["dep"] = ' '.join(actionmap_deps[x])
- 
- 		regular_actionmap_phase = mydo in actionmap
- 
- 		if regular_actionmap_phase:
- 			bintree = None
- 			if mydo == "package":
- 				# Make sure the package directory exists before executing
- 				# this phase. This can raise PermissionDenied if
- 				# the current user doesn't have write access to $PKGDIR.
- 				if hasattr(portage, 'db'):
- 					bintree = portage.db[mysettings['EROOT']]['bintree']
- 					binpkg_tmpfile_dir = os.path.join(bintree.pkgdir, mysettings["CATEGORY"])
- 					bintree._ensure_dir(binpkg_tmpfile_dir)
- 					with tempfile.NamedTemporaryFile(
- 						prefix=mysettings["PF"],
- 						suffix=".tbz2." + str(portage.getpid()),
- 						dir=binpkg_tmpfile_dir,
- 						delete=False) as binpkg_tmpfile:
- 						mysettings["PORTAGE_BINPKG_TMPFILE"] = binpkg_tmpfile.name
- 				else:
- 					parent_dir = os.path.join(mysettings["PKGDIR"],
- 						mysettings["CATEGORY"])
- 					portage.util.ensure_dirs(parent_dir)
- 					if not os.access(parent_dir, os.W_OK):
- 						raise PermissionDenied(
- 							"access('%s', os.W_OK)" % parent_dir)
- 			retval = spawnebuild(mydo,
- 				actionmap, mysettings, debug, logfile=logfile,
- 				fd_pipes=fd_pipes, returnpid=returnpid)
- 
- 			if returnpid and isinstance(retval, list):
- 				return retval
- 
- 			if retval == os.EX_OK:
- 				if mydo == "package" and bintree is not None:
- 					pkg = bintree.inject(mysettings.mycpv,
- 						filename=mysettings["PORTAGE_BINPKG_TMPFILE"])
- 					if pkg is not None:
- 						infoloc = os.path.join(
- 							mysettings["PORTAGE_BUILDDIR"], "build-info")
- 						build_info = {
- 							"BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
- 						}
- 						if pkg.build_id is not None:
- 							build_info["BUILD_ID"] = "%s\n" % pkg.build_id
- 						for k, v in build_info.items():
- 							with io.open(_unicode_encode(
- 								os.path.join(infoloc, k),
- 								encoding=_encodings['fs'], errors='strict'),
- 								mode='w', encoding=_encodings['repo.content'],
- 								errors='strict') as f:
- 								f.write(v)
- 			else:
- 				if "PORTAGE_BINPKG_TMPFILE" in mysettings:
- 					try:
- 						os.unlink(mysettings["PORTAGE_BINPKG_TMPFILE"])
- 					except OSError:
- 						pass
- 
- 		elif returnpid:
- 			writemsg("!!! doebuild: %s\n" %
- 				_("returnpid is not supported for phase '%s'\n" % mydo),
- 				noiselevel=-1)
- 
- 		if regular_actionmap_phase:
- 			# handled above
- 			pass
- 		elif mydo == "qmerge":
- 			# check to ensure install was run.  this *only* pops up when users
- 			# forget it and are using ebuild
- 			if not os.path.exists(
- 				os.path.join(mysettings["PORTAGE_BUILDDIR"], ".installed")):
- 				writemsg(_("!!! mydo=qmerge, but the install phase has not been run\n"),
- 					noiselevel=-1)
- 				return 1
- 			# qmerge is a special phase that implies noclean.
- 			if "noclean" not in mysettings.features:
- 				mysettings.features.add("noclean")
- 			_handle_self_update(mysettings, vartree.dbapi)
- 			#qmerge is specifically not supposed to do a runtime dep check
- 			retval = merge(
- 				mysettings["CATEGORY"], mysettings["PF"], mysettings["D"],
- 				os.path.join(mysettings["PORTAGE_BUILDDIR"], "build-info"),
- 				myroot, mysettings, myebuild=mysettings["EBUILD"], mytree=tree,
- 				mydbapi=mydbapi, vartree=vartree, prev_mtimes=prev_mtimes,
- 				fd_pipes=fd_pipes)
- 		elif mydo=="merge":
- 			retval = spawnebuild("install", actionmap, mysettings, debug,
- 				alwaysdep=1, logfile=logfile, fd_pipes=fd_pipes,
- 				returnpid=returnpid)
- 			if retval != os.EX_OK:
- 				# The merge phase handles this already.  Callers don't know how
- 				# far this function got, so we have to call elog_process() here
- 				# so that it's only called once.
- 				elog_process(mysettings.mycpv, mysettings)
- 			if retval == os.EX_OK:
- 				_handle_self_update(mysettings, vartree.dbapi)
- 				retval = merge(mysettings["CATEGORY"], mysettings["PF"],
- 					mysettings["D"], os.path.join(mysettings["PORTAGE_BUILDDIR"],
- 					"build-info"), myroot, mysettings,
- 					myebuild=mysettings["EBUILD"], mytree=tree, mydbapi=mydbapi,
- 					vartree=vartree, prev_mtimes=prev_mtimes,
- 					fd_pipes=fd_pipes)
- 
- 		else:
- 			writemsg_stdout(_("!!! Unknown mydo: %s\n") % mydo, noiselevel=-1)
- 			return 1
- 
- 		return retval
- 
- 	finally:
- 
- 		if builddir_lock is not None:
- 			builddir_lock.scheduler.run_until_complete(
- 				builddir_lock.async_unlock())
- 		if tmpdir:
- 			mysettings["PORTAGE_TMPDIR"] = tmpdir_orig
- 			shutil.rmtree(tmpdir)
- 
- 		mysettings.pop("REPLACING_VERSIONS", None)
- 
- 		if logfile and not returnpid:
- 			try:
- 				if os.stat(logfile).st_size == 0:
- 					os.unlink(logfile)
- 			except OSError:
- 				pass
- 
- 		if mydo in ("digest", "manifest", "help"):
- 			# If necessary, depend phase has been triggered by aux_get calls
- 			# and the exemption is no longer needed.
- 			portage._doebuild_manifest_exempt_depend -= 1
+ 
+ def doebuild(
+     myebuild,
+     mydo,
+     _unused=DeprecationWarning,
+     settings=None,
+     debug=0,
+     listonly=0,
+     fetchonly=0,
+     cleanup=0,
+     use_cache=1,
+     fetchall=0,
+     tree=None,
+     mydbapi=None,
+     vartree=None,
+     prev_mtimes=None,
+     fd_pipes=None,
+     returnpid=False,
+ ):
+     """
+     Wrapper function that invokes specific ebuild phases through the spawning
+     of ebuild.sh
+ 
+     @param myebuild: name of the ebuild to invoke the phase on (CPV)
+     @type myebuild: String
+     @param mydo: Phase to run
+     @type mydo: String
+     @param _unused: Deprecated (use settings["ROOT"] instead)
+     @type _unused: String
+     @param settings: Portage Configuration
+     @type settings: instance of portage.config
+     @param debug: Turns on various debug information (eg, debug for spawn)
+     @type debug: Boolean
+     @param listonly: Used to wrap fetch(); passed such that fetch only lists files required.
+     @type listonly: Boolean
+     @param fetchonly: Used to wrap fetch(); passed such that files are only fetched (no other actions)
+     @type fetchonly: Boolean
+     @param cleanup: Passed to prepare_build_dirs (TODO: what does it do?)
+     @type cleanup: Boolean
+     @param use_cache: Enables the cache
+     @type use_cache: Boolean
+     @param fetchall: Used to wrap fetch(), fetches all URIs (even ones invalid due to USE conditionals)
+     @type fetchall: Boolean
+     @param tree: Which tree to use ('vartree', 'porttree', 'bintree', etc.), defaults to 'porttree'
+     @type tree: String
+     @param mydbapi: a dbapi instance to pass to various functions; this should be a portdbapi instance.
+     @type mydbapi: portdbapi instance
+     @param vartree: An instance of vartree; used for aux_get calls, defaults to db[myroot]['vartree']
+     @type vartree: vartree instance
+     @param prev_mtimes: A dict of { filename:mtime } keys used by merge() to do config_protection
+     @type prev_mtimes: dictionary
+     @param fd_pipes: A dict mapping file descriptors for pipes, { '0': stdin, '1': stdout }
+             for example.
+     @type fd_pipes: Dictionary
+     @param returnpid: Return a list of process IDs for a successful spawn, or
+             an integer value if spawn is unsuccessful. NOTE: This requires that
+             the caller clean up all returned PIDs.
+     @type returnpid: Boolean
+     @rtype: Boolean
+     @return:
+     1. 0 for success
+     2. 1 for error
+ 
+     Most errors have an accompanying error message.
+ 
+     listonly and fetchonly are only really necessary for operations involving 'fetch'.
+     prev_mtimes are only necessary for merge operations.
+     Other variables may not be strictly required; many have defaults that are set inside of doebuild.
+ 
+     """
+ 
+     if settings is None:
+         raise TypeError("settings parameter is required")
+     mysettings = settings
+     myroot = settings["EROOT"]
+ 
+     if _unused is not DeprecationWarning:
+         warnings.warn(
+             "The third parameter of the "
+             "portage.doebuild() is deprecated. Instead "
+             "settings['EROOT'] is used.",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+ 
+     if not tree:
+         writemsg("Warning: tree not specified to doebuild\n")
+         tree = "porttree"
+ 
+     # chunked out deps for each phase, so that ebuild binary can use it
+     # to collapse targets down.
+     actionmap_deps = {
+         "pretend": [],
+         "setup": ["pretend"],
+         "unpack": ["setup"],
+         "prepare": ["unpack"],
+         "configure": ["prepare"],
+         "compile": ["configure"],
+         "test": ["compile"],
+         "install": ["test"],
+         "instprep": ["install"],
+         "rpm": ["install"],
+         "package": ["install"],
+         "merge": ["install"],
+     }
+ 
+     if mydbapi is None:
+         mydbapi = portage.db[myroot][tree].dbapi
+ 
+     if vartree is None and mydo in ("merge", "qmerge", "unmerge"):
+         vartree = portage.db[myroot]["vartree"]
+ 
+     features = mysettings.features
+ 
+     clean_phases = ("clean", "cleanrm")
+     validcommands = [
+         "help",
+         "clean",
+         "prerm",
+         "postrm",
+         "cleanrm",
+         "preinst",
+         "postinst",
+         "config",
+         "info",
+         "setup",
+         "depend",
+         "pretend",
+         "fetch",
+         "fetchall",
+         "digest",
+         "unpack",
+         "prepare",
+         "configure",
+         "compile",
+         "test",
+         "install",
+         "instprep",
+         "rpm",
+         "qmerge",
+         "merge",
+         "package",
+         "unmerge",
+         "manifest",
+         "nofetch",
+     ]
+ 
+     if mydo not in validcommands:
+         validcommands.sort()
+         writemsg(
+             "!!! doebuild: '%s' is not one of the following valid commands:" % mydo,
+             noiselevel=-1,
+         )
+         for vcount in range(len(validcommands)):
+             if vcount % 6 == 0:
+                 writemsg("\n!!! ", noiselevel=-1)
+             writemsg(validcommands[vcount].ljust(11), noiselevel=-1)
+         writemsg("\n", noiselevel=-1)
+         return 1
+ 
+     if returnpid and mydo != "depend":
+         # This case is not supported, since it bypasses the EbuildPhase class
+         # which implements important functionality (including post phase hooks
+         # and IPC for things like best/has_version and die).
+         warnings.warn(
+             "portage.doebuild() called "
+             "with returnpid parameter enabled. This usage will "
+             "not be supported in the future.",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+ 
+     if mydo == "fetchall":
+         fetchall = 1
+         mydo = "fetch"
+ 
+     if mydo not in clean_phases and not os.path.exists(myebuild):
+         writemsg(
+             "!!! doebuild: %s not found for %s\n" % (myebuild, mydo), noiselevel=-1
+         )
+         return 1
+ 
+     global _doebuild_manifest_cache
+     pkgdir = os.path.dirname(myebuild)
+     manifest_path = os.path.join(pkgdir, "Manifest")
+     if tree == "porttree":
+         repo_config = mysettings.repositories.get_repo_for_location(
+             os.path.dirname(os.path.dirname(pkgdir))
+         )
+     else:
+         repo_config = None
+ 
+     mf = None
+     if (
+         "strict" in features
+         and "digest" not in features
+         and tree == "porttree"
+         and not repo_config.thin_manifest
+         and mydo not in ("digest", "manifest", "help")
+         and not portage._doebuild_manifest_exempt_depend
+         and not (
+             repo_config.allow_missing_manifest and not os.path.exists(manifest_path)
+         )
+     ):
+         # Always verify the ebuild checksums before executing it.
+         global _doebuild_broken_ebuilds
+ 
+         if myebuild in _doebuild_broken_ebuilds:
+             return 1
+ 
+         # Avoid checking the same Manifest several times in a row during a
+         # regen with an empty cache.
+         if (
+             _doebuild_manifest_cache is None
+             or _doebuild_manifest_cache.getFullname() != manifest_path
+         ):
+             _doebuild_manifest_cache = None
+             if not os.path.exists(manifest_path):
+                 out = portage.output.EOutput()
+                 out.eerror(_("Manifest not found for '%s'") % (myebuild,))
+                 _doebuild_broken_ebuilds.add(myebuild)
+                 return 1
+             mf = repo_config.load_manifest(pkgdir, mysettings["DISTDIR"])
+ 
+         else:
+             mf = _doebuild_manifest_cache
+ 
+         try:
+             mf.checkFileHashes("EBUILD", os.path.basename(myebuild))
+         except KeyError:
+             if not (
+                 mf.allow_missing
+                 and os.path.basename(myebuild) not in mf.fhashdict["EBUILD"]
+             ):
+                 out = portage.output.EOutput()
+                 out.eerror(_("Missing digest for '%s'") % (myebuild,))
+                 _doebuild_broken_ebuilds.add(myebuild)
+                 return 1
+         except FileNotFound:
+             out = portage.output.EOutput()
+             out.eerror(
+                 _("A file listed in the Manifest " "could not be found: '%s'")
+                 % (myebuild,)
+             )
+             _doebuild_broken_ebuilds.add(myebuild)
+             return 1
+         except DigestException as e:
+             out = portage.output.EOutput()
+             out.eerror(_("Digest verification failed:"))
+             out.eerror("%s" % e.value[0])
+             out.eerror(_("Reason: %s") % e.value[1])
+             out.eerror(_("Got: %s") % e.value[2])
+             out.eerror(_("Expected: %s") % e.value[3])
+             _doebuild_broken_ebuilds.add(myebuild)
+             return 1
+ 
+         if mf.getFullname() in _doebuild_broken_manifests:
+             return 1
+ 
+         if mf is not _doebuild_manifest_cache and not mf.allow_missing:
+ 
+             # Make sure that all of the ebuilds are
+             # actually listed in the Manifest.
+             for f in os.listdir(pkgdir):
+                 pf = None
+                 if f[-7:] == ".ebuild":
+                     pf = f[:-7]
+                 if pf is not None and not mf.hasFile("EBUILD", f):
+                     f = os.path.join(pkgdir, f)
+                     if f not in _doebuild_broken_ebuilds:
+                         out = portage.output.EOutput()
+                         out.eerror(
+                             _("A file is not listed in the " "Manifest: '%s'") % (f,)
+                         )
+                     _doebuild_broken_manifests.add(manifest_path)
+                     return 1
+ 
+         # We cache it only after all above checks succeed.
+         _doebuild_manifest_cache = mf
+ 
+     logfile = None
+     builddir_lock = None
+     tmpdir = None
+     tmpdir_orig = None
+ 
+     try:
+         if mydo in ("digest", "manifest", "help"):
+             # Temporarily exempt the depend phase from manifest checks, in case
+             # aux_get calls trigger cache generation.
+             portage._doebuild_manifest_exempt_depend += 1
+ 
+         # If we don't need much space and we don't need a constant location,
+         # we can temporarily override PORTAGE_TMPDIR with a random temp dir
+         # so that there's no need for locking and it can be used even if the
+         # user isn't in the portage group.
+         if not returnpid and mydo in ("info",):
+             tmpdir = tempfile.mkdtemp()
+             tmpdir_orig = mysettings["PORTAGE_TMPDIR"]
+             mysettings["PORTAGE_TMPDIR"] = tmpdir
+ 
+         doebuild_environment(
+             myebuild, mydo, myroot, mysettings, debug, use_cache, mydbapi
+         )
+ 
+         if mydo in clean_phases:
+             builddir_lock = None
+             if not returnpid and "PORTAGE_BUILDDIR_LOCKED" not in mysettings:
+                 builddir_lock = EbuildBuildDir(
+                     scheduler=asyncio._safe_loop(), settings=mysettings
+                 )
+                 builddir_lock.scheduler.run_until_complete(builddir_lock.async_lock())
+             try:
+                 return _spawn_phase(
+                     mydo, mysettings, fd_pipes=fd_pipes, returnpid=returnpid
+                 )
+             finally:
+                 if builddir_lock is not None:
+                     builddir_lock.scheduler.run_until_complete(
+                         builddir_lock.async_unlock()
+                     )
+ 
+         # get possible slot information from the deps file
+         if mydo == "depend":
+             if not returnpid:
+                 raise TypeError("returnpid must be True for depend phase")
+             return _spawn_phase(
+                 mydo, mysettings, fd_pipes=fd_pipes, returnpid=returnpid
+             )
+ 
+         if mydo == "nofetch":
+ 
+             if returnpid:
+                 writemsg(
+                     "!!! doebuild: %s\n"
+                     % _("returnpid is not supported for phase '%s'\n" % mydo),
+                     noiselevel=-1,
+                 )
+ 
+             return spawn_nofetch(
+                 mydbapi, myebuild, settings=mysettings, fd_pipes=fd_pipes
+             )
+ 
+         if tree == "porttree":
+ 
+             if not returnpid:
+                 # Validate dependency metadata here to ensure that ebuilds with
+                 # invalid data are never installed via the ebuild command. Skip
+                 # this when returnpid is True (assume the caller handled it).
+                 rval = _validate_deps(mysettings, myroot, mydo, mydbapi)
+                 if rval != os.EX_OK:
+                     return rval
+ 
+         else:
+             # FEATURES=noauto only makes sense for porttree, and we don't want
+             # it to trigger redundant sourcing of the ebuild for API consumers
+             # that are using binary packages
+             if "noauto" in mysettings.features:
+                 mysettings.features.discard("noauto")
+ 
+         # If we are not using a private temp dir, then check access
+         # to the global temp dir.
+         if tmpdir is None and mydo not in _doebuild_commands_without_builddir:
+             rval = _check_temp_dir(mysettings)
+             if rval != os.EX_OK:
+                 return rval
+ 
+         if mydo == "unmerge":
+             if returnpid:
+                 writemsg(
+                     "!!! doebuild: %s\n"
+                     % _("returnpid is not supported for phase '%s'\n" % mydo),
+                     noiselevel=-1,
+                 )
+             return unmerge(
+                 mysettings["CATEGORY"],
+                 mysettings["PF"],
+                 myroot,
+                 mysettings,
+                 vartree=vartree,
+             )
+ 
+         phases_to_run = set()
+         if returnpid or "noauto" in mysettings.features or mydo not in actionmap_deps:
+             phases_to_run.add(mydo)
+         else:
+             phase_stack = [mydo]
+             while phase_stack:
+                 x = phase_stack.pop()
+                 if x in phases_to_run:
+                     continue
+                 phases_to_run.add(x)
+                 phase_stack.extend(actionmap_deps.get(x, []))
+             del phase_stack
+ 
+         alist = set(mysettings.configdict["pkg"].get("A", "").split())
+ 
+         unpacked = False
+         if tree != "porttree" or mydo in _doebuild_commands_without_builddir:
+             pass
+         elif "unpack" not in phases_to_run:
+             unpacked = os.path.exists(
+                 os.path.join(mysettings["PORTAGE_BUILDDIR"], ".unpacked")
+             )
+         else:
+             try:
+                 workdir_st = os.stat(mysettings["WORKDIR"])
+             except OSError:
+                 pass
+             else:
+                 newstuff = False
+                 if not os.path.exists(
+                     os.path.join(mysettings["PORTAGE_BUILDDIR"], ".unpacked")
+                 ):
+                     writemsg_stdout(
+                         _(">>> Not marked as unpacked; recreating WORKDIR...\n")
+                     )
+                     newstuff = True
+                 else:
+                     for x in alist:
+                         writemsg_stdout(">>> Checking %s's mtime...\n" % x)
+                         try:
+                             x_st = os.stat(os.path.join(mysettings["DISTDIR"], x))
+                         except OSError:
+                             # file deleted
+                             x_st = None
+ 
+                         if x_st is not None and x_st.st_mtime > workdir_st.st_mtime:
+                             writemsg_stdout(
+                                 _(
+                                     ">>> Timestamp of "
+                                     "%s has changed; recreating WORKDIR...\n"
+                                 )
+                                 % x
+                             )
+                             newstuff = True
+                             break
+ 
+                 if newstuff:
+                     if (
+                         builddir_lock is None
+                         and "PORTAGE_BUILDDIR_LOCKED" not in mysettings
+                     ):
+                         builddir_lock = EbuildBuildDir(
+                             scheduler=asyncio._safe_loop(), settings=mysettings
+                         )
+                         builddir_lock.scheduler.run_until_complete(
+                             builddir_lock.async_lock()
+                         )
+                     try:
+                         _spawn_phase("clean", mysettings)
+                     finally:
+                         if builddir_lock is not None:
+                             builddir_lock.scheduler.run_until_complete(
+                                 builddir_lock.async_unlock()
+                             )
+                             builddir_lock = None
+                 else:
+                     writemsg_stdout(_(">>> WORKDIR is up-to-date, keeping...\n"))
+                     unpacked = True
+ 
+         # Build directory creation isn't required for any of these.
+         # In the fetch phase, the directory is needed only for RESTRICT=fetch
+         # in order to satisfy the sane $PWD requirement (from bug #239560)
+         # when pkg_nofetch is spawned.
+         have_build_dirs = False
+         if mydo not in ("digest", "fetch", "help", "manifest"):
+             if not returnpid and "PORTAGE_BUILDDIR_LOCKED" not in mysettings:
+                 builddir_lock = EbuildBuildDir(
+                     scheduler=asyncio._safe_loop(), settings=mysettings
+                 )
+                 builddir_lock.scheduler.run_until_complete(builddir_lock.async_lock())
+             mystatus = prepare_build_dirs(myroot, mysettings, cleanup)
+             if mystatus:
+                 return mystatus
+             have_build_dirs = True
+ 
+             # emerge handles logging externally
+             if not returnpid:
+                 # PORTAGE_LOG_FILE is set by the
+                 # above prepare_build_dirs() call.
+                 logfile = mysettings.get("PORTAGE_LOG_FILE")
+ 
+         if have_build_dirs:
+             rval = _prepare_env_file(mysettings)
+             if rval != os.EX_OK:
+                 return rval
+ 
+         if (
+             eapi_exports_merge_type(mysettings["EAPI"])
+             and "MERGE_TYPE" not in mysettings.configdict["pkg"]
+         ):
+             if tree == "porttree":
+                 mysettings.configdict["pkg"]["MERGE_TYPE"] = "source"
+             elif tree == "bintree":
+                 mysettings.configdict["pkg"]["MERGE_TYPE"] = "binary"
+ 
+         if tree == "porttree":
+             mysettings.configdict["pkg"]["EMERGE_FROM"] = "ebuild"
+         elif tree == "bintree":
+             mysettings.configdict["pkg"]["EMERGE_FROM"] = "binary"
+ 
+         # NOTE: It's not possible to set REPLACED_BY_VERSION for prerm
+         #       and postrm here, since we don't necessarily know what
+         #       versions are being installed. This could be a problem
+         #       for API consumers if they don't use dblink.treewalk()
+         #       to execute prerm and postrm.
+         if eapi_exports_replace_vars(mysettings["EAPI"]) and (
+             mydo in ("postinst", "preinst", "pretend", "setup")
+             or (
+                 "noauto" not in features
+                 and not returnpid
+                 and (mydo in actionmap_deps or mydo in ("merge", "package", "qmerge"))
+             )
+         ):
+             if not vartree:
+                 writemsg(
+                     "Warning: vartree not given to doebuild. "
+                     + "Cannot set REPLACING_VERSIONS in pkg_{pretend,setup}\n"
+                 )
+             else:
+                 vardb = vartree.dbapi
+                 cpv = mysettings.mycpv
+                 cpv_slot = "%s%s%s" % (cpv.cp, portage.dep._slot_separator, cpv.slot)
+                 mysettings["REPLACING_VERSIONS"] = " ".join(
+                     set(
+                         portage.versions.cpv_getversion(match)
+                         for match in vardb.match(cpv_slot) + vardb.match("=" + cpv)
+                     )
+                 )
+ 
+         # if any of these are being called, handle them -- running them out of
+         # the sandbox -- and stop now.
+         if mydo in (
+             "config",
+             "help",
+             "info",
+             "postinst",
+             "preinst",
+             "pretend",
+             "postrm",
+             "prerm",
+         ):
+             if mydo in ("preinst", "postinst"):
+                 env_file = os.path.join(
+                     os.path.dirname(mysettings["EBUILD"]), "environment.bz2"
+                 )
+                 if os.path.isfile(env_file):
+                     mysettings["PORTAGE_UPDATE_ENV"] = env_file
+             try:
+                 return _spawn_phase(
+                     mydo,
+                     mysettings,
+                     fd_pipes=fd_pipes,
+                     logfile=logfile,
+                     returnpid=returnpid,
+                 )
+             finally:
+                 mysettings.pop("PORTAGE_UPDATE_ENV", None)
+ 
+         mycpv = "/".join((mysettings["CATEGORY"], mysettings["PF"]))
+ 
+         # Only try and fetch the files if we are going to need them ...
+         # otherwise, if user has FEATURES=noauto and they run `ebuild clean
+         # unpack compile install`, we will try and fetch 4 times :/
+         need_distfiles = (
+             tree == "porttree"
+             and not unpacked
+             and (
+                 mydo in ("fetch", "unpack")
+                 or mydo not in ("digest", "manifest")
+                 and "noauto" not in features
+             )
+         )
+         if need_distfiles:
+ 
+             src_uri = mysettings.configdict["pkg"].get("SRC_URI")
+             if src_uri is None:
+                 (src_uri,) = mydbapi.aux_get(
+                     mysettings.mycpv,
+                     ["SRC_URI"],
+                     mytree=os.path.dirname(os.path.dirname(os.path.dirname(myebuild))),
+                 )
+             metadata = {
+                 "EAPI": mysettings["EAPI"],
+                 "SRC_URI": src_uri,
+             }
+             use = frozenset(mysettings["PORTAGE_USE"].split())
+             try:
+                 alist = _parse_uri_map(mysettings.mycpv, metadata, use=use)
+                 aalist = _parse_uri_map(mysettings.mycpv, metadata)
+             except InvalidDependString as e:
+                 writemsg("!!! %s\n" % str(e), noiselevel=-1)
+                 writemsg(_("!!! Invalid SRC_URI for '%s'.\n") % mycpv, noiselevel=-1)
+                 del e
+                 return 1
+ 
+             if "mirror" in features or fetchall:
+                 fetchme = aalist
+             else:
+                 fetchme = alist
+ 
+             dist_digests = None
+             if mf is not None:
+                 dist_digests = mf.getTypeDigests("DIST")
+ 
+             def _fetch_subprocess(fetchme, mysettings, listonly, dist_digests):
+                 # For userfetch, drop privileges for the entire fetch call, in
+                 # order to handle DISTDIR on NFS with root_squash for bug 601252.
+                 if _want_userfetch(mysettings):
+                     _drop_privs_userfetch(mysettings)
+ 
+                 return fetch(
+                     fetchme,
+                     mysettings,
+                     listonly=listonly,
+                     fetchonly=fetchonly,
+                     allow_missing_digests=False,
+                     digests=dist_digests,
+                 )
+ 
+             loop = asyncio._safe_loop()
+             if loop.is_running():
+                 # Called by EbuildFetchonly for emerge --pretend --fetchonly.
+                 success = fetch(
+                     fetchme,
+                     mysettings,
+                     listonly=listonly,
+                     fetchonly=fetchonly,
+                     allow_missing_digests=False,
+                     digests=dist_digests,
+                 )
+             else:
+                 success = loop.run_until_complete(
+                     loop.run_in_executor(
+                         ForkExecutor(loop=loop),
+                         _fetch_subprocess,
+                         fetchme,
+                         mysettings,
+                         listonly,
+                         dist_digests,
+                     )
+                 )
+             if not success:
+                 # Since listonly mode is called by emerge --pretend in an
+                 # asynchronous context, spawn_nofetch would trigger event loop
+                 # recursion here, therefore delegate execution of pkg_nofetch
+                 # to the caller (bug 657360).
+                 if not listonly:
+                     spawn_nofetch(
+                         mydbapi, myebuild, settings=mysettings, fd_pipes=fd_pipes
+                     )
+                 return 1
+ 
+         if need_distfiles:
+             # Files are already checked inside fetch(),
+             # so do not check them again.
+             checkme = []
+         elif unpacked:
+             # The unpack phase is marked as complete, so it
+             # would be wasteful to check distfiles again.
+             checkme = []
+         else:
+             checkme = alist
+ 
+         if mydo == "fetch" and listonly:
+             return 0
+ 
+         try:
+             if mydo == "manifest":
+                 mf = None
+                 _doebuild_manifest_cache = None
+                 return not digestgen(mysettings=mysettings, myportdb=mydbapi)
+             if mydo == "digest":
+                 mf = None
+                 _doebuild_manifest_cache = None
+                 return not digestgen(mysettings=mysettings, myportdb=mydbapi)
+             if "digest" in mysettings.features:
+                 mf = None
+                 _doebuild_manifest_cache = None
+                 digestgen(mysettings=mysettings, myportdb=mydbapi)
+         except PermissionDenied as e:
+             writemsg(_("!!! Permission Denied: %s\n") % (e,), noiselevel=-1)
+             if mydo in ("digest", "manifest"):
+                 return 1
+ 
+         if mydo == "fetch":
+             # Return after digestgen for FEATURES=digest support.
+             # Return before digestcheck, since fetch() already
+             # checked any relevant digests.
+             return 0
+ 
+         # See above comment about fetching only when needed
+         if tree == "porttree" and not digestcheck(
+             checkme, mysettings, "strict" in features, mf=mf
+         ):
+             return 1
+ 
+         # remove PORTAGE_ACTUAL_DISTDIR once cvs/svn is supported via SRC_URI
+         if tree == "porttree" and (
+             (mydo != "setup" and "noauto" not in features)
+             or mydo in ("install", "unpack")
+         ):
+             _prepare_fake_distdir(mysettings, alist)
+ 
+         # initial dep checks complete; time to process main commands
+         actionmap = _spawn_actionmap(mysettings)
+ 
+         # merge the deps in so we have again a 'full' actionmap
+         # be glad when this can die.
+         for x in actionmap:
+             if len(actionmap_deps.get(x, [])):
+                 actionmap[x]["dep"] = " ".join(actionmap_deps[x])
+ 
+         regular_actionmap_phase = mydo in actionmap
+ 
+         if regular_actionmap_phase:
+             bintree = None
+             if mydo == "package":
+                 # Make sure the package directory exists before executing
+                 # this phase. This can raise PermissionDenied if
+                 # the current user doesn't have write access to $PKGDIR.
+                 if hasattr(portage, "db"):
+                     bintree = portage.db[mysettings["EROOT"]]["bintree"]
+                     binpkg_tmpfile_dir = os.path.join(
+                         bintree.pkgdir, mysettings["CATEGORY"]
+                     )
+                     bintree._ensure_dir(binpkg_tmpfile_dir)
+                     with tempfile.NamedTemporaryFile(
+                         prefix=mysettings["PF"],
+                         suffix=".tbz2." + str(portage.getpid()),
+                         dir=binpkg_tmpfile_dir,
+                         delete=False,
+                     ) as binpkg_tmpfile:
+                         mysettings["PORTAGE_BINPKG_TMPFILE"] = binpkg_tmpfile.name
+                 else:
+                     parent_dir = os.path.join(
+                         mysettings["PKGDIR"], mysettings["CATEGORY"]
+                     )
+                     portage.util.ensure_dirs(parent_dir)
+                     if not os.access(parent_dir, os.W_OK):
+                         raise PermissionDenied("access('%s', os.W_OK)" % parent_dir)
+             retval = spawnebuild(
+                 mydo,
+                 actionmap,
+                 mysettings,
+                 debug,
+                 logfile=logfile,
+                 fd_pipes=fd_pipes,
+                 returnpid=returnpid,
+             )
+ 
+             if returnpid and isinstance(retval, list):
+                 return retval
+ 
+             if retval == os.EX_OK:
+                 if mydo == "package" and bintree is not None:
+                     pkg = bintree.inject(
+                         mysettings.mycpv, filename=mysettings["PORTAGE_BINPKG_TMPFILE"]
+                     )
+                     if pkg is not None:
+                         infoloc = os.path.join(
+                             mysettings["PORTAGE_BUILDDIR"], "build-info"
+                         )
+                         build_info = {
+                             "BINPKGMD5": "%s\n" % pkg._metadata["MD5"],
+                         }
+                         if pkg.build_id is not None:
+                             build_info["BUILD_ID"] = "%s\n" % pkg.build_id
+                         for k, v in build_info.items():
+                             with io.open(
+                                 _unicode_encode(
+                                     os.path.join(infoloc, k),
+                                     encoding=_encodings["fs"],
+                                     errors="strict",
+                                 ),
+                                 mode="w",
+                                 encoding=_encodings["repo.content"],
+                                 errors="strict",
+                             ) as f:
+                                 f.write(v)
+             else:
+                 if "PORTAGE_BINPKG_TMPFILE" in mysettings:
+                     try:
+                         os.unlink(mysettings["PORTAGE_BINPKG_TMPFILE"])
+                     except OSError:
+                         pass
+ 
+         elif returnpid:
+             writemsg(
+                 "!!! doebuild: %s\n"
+                 % _("returnpid is not supported for phase '%s'\n" % mydo),
+                 noiselevel=-1,
+             )
+ 
+         if regular_actionmap_phase:
+             # handled above
+             pass
+         elif mydo == "qmerge":
+             # check to ensure install was run.  this *only* pops up when users
+             # forget it and are using ebuild
+             if not os.path.exists(
+                 os.path.join(mysettings["PORTAGE_BUILDDIR"], ".installed")
+             ):
+                 writemsg(
+                     _("!!! mydo=qmerge, but the install phase has not been run\n"),
+                     noiselevel=-1,
+                 )
+                 return 1
+             # qmerge is a special phase that implies noclean.
+             if "noclean" not in mysettings.features:
+                 mysettings.features.add("noclean")
+             _handle_self_update(mysettings, vartree.dbapi)
+             # qmerge is specifically not supposed to do a runtime dep check
+             retval = merge(
+                 mysettings["CATEGORY"],
+                 mysettings["PF"],
+                 mysettings["D"],
+                 os.path.join(mysettings["PORTAGE_BUILDDIR"], "build-info"),
+                 myroot,
+                 mysettings,
+                 myebuild=mysettings["EBUILD"],
+                 mytree=tree,
+                 mydbapi=mydbapi,
+                 vartree=vartree,
+                 prev_mtimes=prev_mtimes,
+                 fd_pipes=fd_pipes,
+             )
+         elif mydo == "merge":
+             retval = spawnebuild(
+                 "install",
+                 actionmap,
+                 mysettings,
+                 debug,
+                 alwaysdep=1,
+                 logfile=logfile,
+                 fd_pipes=fd_pipes,
+                 returnpid=returnpid,
+             )
+             if retval != os.EX_OK:
+                 # The merge phase handles this already.  Callers don't know how
+                 # far this function got, so we have to call elog_process() here
+                 # so that it's only called once.
+                 elog_process(mysettings.mycpv, mysettings)
+             if retval == os.EX_OK:
+                 _handle_self_update(mysettings, vartree.dbapi)
+                 retval = merge(
+                     mysettings["CATEGORY"],
+                     mysettings["PF"],
+                     mysettings["D"],
+                     os.path.join(mysettings["PORTAGE_BUILDDIR"], "build-info"),
+                     myroot,
+                     mysettings,
+                     myebuild=mysettings["EBUILD"],
+                     mytree=tree,
+                     mydbapi=mydbapi,
+                     vartree=vartree,
+                     prev_mtimes=prev_mtimes,
+                     fd_pipes=fd_pipes,
+                 )
+ 
+         else:
+             writemsg_stdout(_("!!! Unknown mydo: %s\n") % mydo, noiselevel=-1)
+             return 1
+ 
+         return retval
+ 
+     finally:
+ 
+         if builddir_lock is not None:
+             builddir_lock.scheduler.run_until_complete(builddir_lock.async_unlock())
+         if tmpdir:
+             mysettings["PORTAGE_TMPDIR"] = tmpdir_orig
+             shutil.rmtree(tmpdir)
+ 
+         mysettings.pop("REPLACING_VERSIONS", None)
+ 
+         if logfile and not returnpid:
+             try:
+                 if os.stat(logfile).st_size == 0:
+                     os.unlink(logfile)
+             except OSError:
+                 pass
+ 
+         if mydo in ("digest", "manifest", "help"):
+             # If necessary, depend phase has been triggered by aux_get calls
+             # and the exemption is no longer needed.
+             portage._doebuild_manifest_exempt_depend -= 1
+ 
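For orientation, a minimal sketch of how an API consumer might drive the wrapper above for a single phase. Only the doebuild() keyword arguments come from the signature in this hunk; the settings bootstrap, the portdbapi lookup via findname(), and the CPV are illustrative assumptions, not part of this commit.

    import portage

    # Illustrative only: run the 'fetch' phase for one ebuild through doebuild().
    # The CPV is a placeholder and error handling is reduced to an exit status.
    settings = portage.config(clone=portage.settings)
    porttree_db = portage.db[settings["EROOT"]]["porttree"].dbapi
    ebuild_path = porttree_db.findname("app-misc/hello-1.0")  # hypothetical CPV
    rval = portage.doebuild(
        ebuild_path,
        "fetch",
        settings=settings,
        tree="porttree",
        mydbapi=porttree_db,
    )
    if rval != 0:
        print("fetch phase failed with status %d" % rval)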
  
  def _check_temp_dir(settings):
- 	if "PORTAGE_TMPDIR" not in settings or \
- 		not os.path.isdir(settings["PORTAGE_TMPDIR"]):
- 		writemsg(_("The directory specified in your "
- 			"PORTAGE_TMPDIR variable, '%s',\n"
- 			"does not exist.  Please create this directory or "
- 			"correct your PORTAGE_TMPDIR setting.\n") % \
- 			settings.get("PORTAGE_TMPDIR", ""), noiselevel=-1)
- 		return 1
- 
- 	# as some people use a separate PORTAGE_TMPDIR mount
- 	# we prefer that as the checks below would otherwise be pointless
- 	# for those people.
- 	checkdir = first_existing(os.path.join(settings["PORTAGE_TMPDIR"], "portage"))
- 
- 	if not os.access(checkdir, os.W_OK):
- 		writemsg(_("%s is not writable.\n"
- 			"Likely cause is that you've mounted it as readonly.\n") % checkdir,
- 			noiselevel=-1)
- 		return 1
- 
- 	with tempfile.NamedTemporaryFile(prefix="exectest-", dir=checkdir) as fd:
- 		os.chmod(fd.name, 0o755)
- 		if not os.access(fd.name, os.X_OK):
- 			writemsg(_("Can not execute files in %s\n"
- 				"Likely cause is that you've mounted it with one of the\n"
- 				"following mount options: 'noexec', 'user', 'users'\n\n"
- 				"Please make sure that portage can execute files in this directory.\n") % checkdir,
- 				noiselevel=-1)
- 			return 1
- 
- 	return os.EX_OK
+     if "PORTAGE_TMPDIR" not in settings or not os.path.isdir(
+         settings["PORTAGE_TMPDIR"]
+     ):
+         writemsg(
+             _(
+                 "The directory specified in your "
+                 "PORTAGE_TMPDIR variable, '%s',\n"
+                 "does not exist.  Please create this directory or "
+                 "correct your PORTAGE_TMPDIR setting.\n"
+             )
+             % settings.get("PORTAGE_TMPDIR", ""),
+             noiselevel=-1,
+         )
+         return 1
+ 
+     # as some people use a separate PORTAGE_TMPDIR mount
+     # we prefer that as the checks below would otherwise be pointless
+     # for those people.
+     checkdir = first_existing(os.path.join(settings["PORTAGE_TMPDIR"], "portage"))
+ 
+     if not os.access(checkdir, os.W_OK):
+         writemsg(
+             _(
+                 "%s is not writable.\n"
+                 "Likely cause is that you've mounted it as readonly.\n"
+             )
+             % checkdir,
+             noiselevel=-1,
+         )
+         return 1
+ 
+     with tempfile.NamedTemporaryFile(prefix="exectest-", dir=checkdir) as fd:
+         os.chmod(fd.name, 0o755)
+         if not os.access(fd.name, os.X_OK):
+             writemsg(
+                 _(
+                     "Can not execute files in %s\n"
+                     "Likely cause is that you've mounted it with one of the\n"
+                     "following mount options: 'noexec', 'user', 'users'\n\n"
+                     "Please make sure that portage can execute files in this directory.\n"
+                 )
+                 % checkdir,
+                 noiselevel=-1,
+             )
+             return 1
+ 
+     return os.EX_OK
+ 
  
  def _prepare_env_file(settings):
- 	"""
- 	Extract environment.bz2 if it exists, but only if the destination
- 	environment file doesn't already exist. There are lots of possible
- 	states when doebuild() calls this function, and we want to avoid
- 	clobbering an existing environment file.
- 	"""
- 
- 	env_extractor = BinpkgEnvExtractor(background=False,
- 		scheduler=asyncio._safe_loop(),
- 		settings=settings)
- 
- 	if env_extractor.dest_env_exists():
- 		# There are lots of possible states when doebuild()
- 		# calls this function, and we want to avoid
- 		# clobbering an existing environment file.
- 		return os.EX_OK
- 
- 	if not env_extractor.saved_env_exists():
- 		# If the environment.bz2 doesn't exist, then ebuild.sh will
- 		# source the ebuild as a fallback.
- 		return os.EX_OK
- 
- 	env_extractor.start()
- 	env_extractor.wait()
- 	return env_extractor.returncode
+     """
+     Extract environment.bz2 if it exists, but only if the destination
+     environment file doesn't already exist. There are lots of possible
+     states when doebuild() calls this function, and we want to avoid
+     clobbering an existing environment file.
+     """
+ 
+     env_extractor = BinpkgEnvExtractor(
+         background=False, scheduler=asyncio._safe_loop(), settings=settings
+     )
+ 
+     if env_extractor.dest_env_exists():
+         # There are lots of possible states when doebuild()
+         # calls this function, and we want to avoid
+         # clobbering an existing environment file.
+         return os.EX_OK
+ 
+     if not env_extractor.saved_env_exists():
+         # If the environment.bz2 doesn't exist, then ebuild.sh will
+         # source the ebuild as a fallback.
+         return os.EX_OK
+ 
+     env_extractor.start()
+     env_extractor.wait()
+     return env_extractor.returncode
+ 
  
  def _spawn_actionmap(settings):
- 	features = settings.features
- 	restrict = settings["PORTAGE_RESTRICT"].split()
- 	nosandbox = (("userpriv" in features) and \
- 		("usersandbox" not in features) and \
- 		"userpriv" not in restrict and \
- 		"nouserpriv" not in restrict)
- 
- 	if not (portage.process.sandbox_capable or \
- 		portage.process.macossandbox_capable):
- 		nosandbox = True
- 
- 	sesandbox = settings.selinux_enabled() and \
- 		"sesandbox" in features
- 
- 	droppriv = "userpriv" in features and \
- 		"userpriv" not in restrict and \
- 		secpass >= 2
- 
- 	fakeroot = "fakeroot" in features
- 
- 	portage_bin_path = settings["PORTAGE_BIN_PATH"]
- 	ebuild_sh_binary = os.path.join(portage_bin_path,
- 		os.path.basename(EBUILD_SH_BINARY))
- 	misc_sh_binary = os.path.join(portage_bin_path,
- 		os.path.basename(MISC_SH_BINARY))
- 	ebuild_sh = _shell_quote(ebuild_sh_binary) + " %s"
- 	misc_sh = _shell_quote(misc_sh_binary) + " __dyn_%s"
- 
- 	# args are for the spawn function
- 	actionmap = {
- "pretend":  {"cmd":ebuild_sh, "args":{"droppriv":0,        "free":1,         "sesandbox":0,         "fakeroot":0}},
- "setup":    {"cmd":ebuild_sh, "args":{"droppriv":0,        "free":1,         "sesandbox":0,         "fakeroot":0}},
- "unpack":   {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":0,         "sesandbox":sesandbox, "fakeroot":0}},
- "prepare":  {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":0,         "sesandbox":sesandbox, "fakeroot":0}},
- "configure":{"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":nosandbox, "sesandbox":sesandbox, "fakeroot":0}},
- "compile":  {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":nosandbox, "sesandbox":sesandbox, "fakeroot":0}},
- "test":     {"cmd":ebuild_sh, "args":{"droppriv":droppriv, "free":nosandbox, "sesandbox":sesandbox, "fakeroot":0}},
- "install":  {"cmd":ebuild_sh, "args":{"droppriv":0,        "free":0,         "sesandbox":sesandbox, "fakeroot":fakeroot}},
- "instprep": {"cmd":misc_sh,   "args":{"droppriv":0,        "free":0,         "sesandbox":sesandbox, "fakeroot":fakeroot}},
- "rpm":      {"cmd":misc_sh,   "args":{"droppriv":0,        "free":0,         "sesandbox":0,         "fakeroot":fakeroot}},
- "package":  {"cmd":misc_sh,   "args":{"droppriv":0,        "free":0,         "sesandbox":0,         "fakeroot":fakeroot}},
- 		}
- 
- 	return actionmap
+     features = settings.features
+     restrict = settings["PORTAGE_RESTRICT"].split()
+     nosandbox = (
+         ("userpriv" in features)
+         and ("usersandbox" not in features)
+         and "userpriv" not in restrict
+         and "nouserpriv" not in restrict
+     )
+ 
 -    if not portage.process.sandbox_capable:
++    if not (portage.process.sandbox_capable
++            or portage.process.macossandbox_capable):
+         nosandbox = True
+ 
+     sesandbox = settings.selinux_enabled() and "sesandbox" in features
+ 
+     droppriv = "userpriv" in features and "userpriv" not in restrict and secpass >= 2
+ 
+     fakeroot = "fakeroot" in features
+ 
+     portage_bin_path = settings["PORTAGE_BIN_PATH"]
+     ebuild_sh_binary = os.path.join(
+         portage_bin_path, os.path.basename(EBUILD_SH_BINARY)
+     )
+     misc_sh_binary = os.path.join(portage_bin_path, os.path.basename(MISC_SH_BINARY))
+     ebuild_sh = _shell_quote(ebuild_sh_binary) + " %s"
+     misc_sh = _shell_quote(misc_sh_binary) + " __dyn_%s"
+ 
+     # args are for the spawn function
+     actionmap = {
+         "pretend": {
+             "cmd": ebuild_sh,
+             "args": {"droppriv": 0, "free": 1, "sesandbox": 0, "fakeroot": 0},
+         },
+         "setup": {
+             "cmd": ebuild_sh,
+             "args": {"droppriv": 0, "free": 1, "sesandbox": 0, "fakeroot": 0},
+         },
+         "unpack": {
+             "cmd": ebuild_sh,
+             "args": {
+                 "droppriv": droppriv,
+                 "free": 0,
+                 "sesandbox": sesandbox,
+                 "fakeroot": 0,
+             },
+         },
+         "prepare": {
+             "cmd": ebuild_sh,
+             "args": {
+                 "droppriv": droppriv,
+                 "free": 0,
+                 "sesandbox": sesandbox,
+                 "fakeroot": 0,
+             },
+         },
+         "configure": {
+             "cmd": ebuild_sh,
+             "args": {
+                 "droppriv": droppriv,
+                 "free": nosandbox,
+                 "sesandbox": sesandbox,
+                 "fakeroot": 0,
+             },
+         },
+         "compile": {
+             "cmd": ebuild_sh,
+             "args": {
+                 "droppriv": droppriv,
+                 "free": nosandbox,
+                 "sesandbox": sesandbox,
+                 "fakeroot": 0,
+             },
+         },
+         "test": {
+             "cmd": ebuild_sh,
+             "args": {
+                 "droppriv": droppriv,
+                 "free": nosandbox,
+                 "sesandbox": sesandbox,
+                 "fakeroot": 0,
+             },
+         },
+         "install": {
+             "cmd": ebuild_sh,
+             "args": {
+                 "droppriv": 0,
+                 "free": 0,
+                 "sesandbox": sesandbox,
+                 "fakeroot": fakeroot,
+             },
+         },
+         "instprep": {
+             "cmd": misc_sh,
+             "args": {
+                 "droppriv": 0,
+                 "free": 0,
+                 "sesandbox": sesandbox,
+                 "fakeroot": fakeroot,
+             },
+         },
+         "rpm": {
+             "cmd": misc_sh,
+             "args": {"droppriv": 0, "free": 0, "sesandbox": 0, "fakeroot": fakeroot},
+         },
+         "package": {
+             "cmd": misc_sh,
+             "args": {"droppriv": 0, "free": 0, "sesandbox": 0, "fakeroot": fakeroot},
+         },
+     }
+ 
+     return actionmap
+ 
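To make the shape of these entries concrete: doebuild() later folds actionmap_deps into each entry under a "dep" key (the small loop just before its spawnebuild() call), so a fully merged entry looks roughly like the sketch below. The ebuild.sh path and the flag values are placeholders; in practice they are computed from PORTAGE_BIN_PATH, FEATURES and RESTRICT as shown above.

    # Illustrative shape of one merged actionmap entry for the 'compile' phase:
    merged_compile_entry = {
        "cmd": "/usr/lib/portage/bin/ebuild.sh %s",  # _shell_quote(ebuild_sh_binary) + " %s"
        "args": {"droppriv": True, "free": 0, "sesandbox": False, "fakeroot": 0},
        "dep": "configure",  # " ".join(actionmap_deps["compile"])
    }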
  
  def _validate_deps(mysettings, myroot, mydo, mydbapi):
  
@@@ -1482,382 -1881,363 +1893,451 @@@
  
  # XXX This would be to replace getstatusoutput completely.
  # XXX Issue: cannot block execution. Deadlock condition.
- def spawn(mystring, mysettings, debug=False, free=False, droppriv=False,
- 	sesandbox=False, fakeroot=False, networked=True, ipc=True,
- 	mountns=False, pidns=False, **keywords):
- 	"""
- 	Spawn a subprocess with extra portage-specific options.
- 	Options include:
- 
- 	Sandbox: Sandbox means the spawned process will be limited in its ability to
- 	read and write files (normally this means it is restricted to ${D}/)
- 	SELinux Sandbox: Enables sandboxing on SELinux
- 	Reduced Privileges: Drops privileges such that the process runs as portage:portage
- 	instead of as root.
- 
- 	Notes: os.system cannot be used because it messes with signal handling.  Instead we
- 	use the portage.process spawn* family of functions.
- 
- 	This function waits for the process to terminate.
- 
- 	@param mystring: Command to run
- 	@type mystring: String
- 	@param mysettings: Either a Dict of Key,Value pairs or an instance of portage.config
- 	@type mysettings: Dictionary or config instance
- 	@param debug: Ignored
- 	@type debug: Boolean
- 	@param free: Run this command free of the sandbox (no sandbox restrictions)
- 	@type free: Boolean
- 	@param droppriv: Drop to portage:portage when running this command
- 	@type droppriv: Boolean
- 	@param sesandbox: Enable SELinux Sandboxing (toggles a context switch)
- 	@type sesandbox: Boolean
- 	@param fakeroot: Run this command with faked root privileges
- 	@type fakeroot: Boolean
- 	@param networked: Run this command with networking access enabled
- 	@type networked: Boolean
- 	@param ipc: Run this command with host IPC access enabled
- 	@type ipc: Boolean
- 	@param mountns: Run this command inside mount namespace
- 	@type mountns: Boolean
- 	@param pidns: Run this command in isolated PID namespace
- 	@type pidns: Boolean
- 	@param keywords: Extra options encoded as a dict, to be passed to spawn
- 	@type keywords: Dictionary
- 	@rtype: Integer
- 	@return:
- 	1. The return code of the spawned process.
- 	"""
- 
- 	check_config_instance(mysettings)
- 
- 	fd_pipes = keywords.get("fd_pipes")
- 	if fd_pipes is None:
- 		fd_pipes = {
- 			0:portage._get_stdin().fileno(),
- 			1:sys.__stdout__.fileno(),
- 			2:sys.__stderr__.fileno(),
- 		}
- 	# In some cases the above print statements don't flush stdout, so
- 	# it needs to be flushed before allowing a child process to use it
- 	# so that output always shows in the correct order.
- 	stdout_filenos = (sys.__stdout__.fileno(), sys.__stderr__.fileno())
- 	for fd in fd_pipes.values():
- 		if fd in stdout_filenos:
- 			sys.__stdout__.flush()
- 			sys.__stderr__.flush()
- 			break
- 
- 	features = mysettings.features
- 
- 	# Use Linux namespaces if available
- 	if uid == 0 and platform.system() == 'Linux':
- 		keywords['unshare_net'] = not networked
- 		keywords['unshare_ipc'] = not ipc
- 		keywords['unshare_mount'] = mountns
- 		keywords['unshare_pid'] = pidns
- 
- 		if not networked and mysettings.get("EBUILD_PHASE") != "nofetch" and \
- 			("network-sandbox-proxy" in features or "distcc" in features):
- 			# Provide a SOCKS5-over-UNIX-socket proxy to escape sandbox
- 			# Don't do this for pkg_nofetch, since the spawn_nofetch
- 			# function creates a private PORTAGE_TMPDIR.
- 			try:
- 				proxy = get_socks5_proxy(mysettings)
- 			except NotImplementedError:
- 				pass
- 			else:
- 				mysettings['PORTAGE_SOCKS5_PROXY'] = proxy
- 				mysettings['DISTCC_SOCKS_PROXY'] = proxy
- 
- 	# TODO: Enable fakeroot to be used together with droppriv.  The
- 	# fake ownership/permissions will have to be converted to real
- 	# permissions in the merge phase.
- 	fakeroot = fakeroot and uid != 0 and portage.process.fakeroot_capable
- 	portage_build_uid = os.getuid()
- 	portage_build_gid = os.getgid()
- 	logname = None
- 	if uid == 0 and portage_uid and portage_gid and hasattr(os, "setgroups"):
- 		if droppriv:
- 			logname = portage.data._portage_username
- 			keywords.update({
- 				"uid": portage_uid,
- 				"gid": portage_gid,
- 				"groups": userpriv_groups,
- 				"umask": 0o22
- 			})
- 
- 			# Adjust pty ownership so that subprocesses
- 			# can directly access /dev/fd/{1,2}.
- 			stdout_fd = fd_pipes.get(1)
- 			if stdout_fd is not None:
- 				try:
- 					subprocess_tty = _os.ttyname(stdout_fd)
- 				except OSError:
- 					pass
- 				else:
- 					try:
- 						parent_tty = _os.ttyname(sys.__stdout__.fileno())
- 					except OSError:
- 						parent_tty = None
- 
- 					if subprocess_tty != parent_tty:
- 						_os.chown(subprocess_tty,
- 							int(portage_uid), int(portage_gid))
- 
- 		if "userpriv" in features and "userpriv" not in mysettings["PORTAGE_RESTRICT"].split() and secpass >= 2:
- 			# Since Python 3.4, getpwuid and getgrgid
- 			# require int type (no proxies).
- 			portage_build_uid = int(portage_uid)
- 			portage_build_gid = int(portage_gid)
- 
- 	if "PORTAGE_BUILD_USER" not in mysettings:
- 		user = None
- 		try:
- 			user = pwd.getpwuid(portage_build_uid).pw_name
- 		except KeyError:
- 			if portage_build_uid == 0:
- 				user = "root"
- 			elif portage_build_uid == portage_uid:
- 				user = portage.data._portage_username
- 			# PREFIX LOCAL: accept numeric uid
- 			else:
- 				user = portage_uid
- 			# END PREFIX LOCAL
- 		if user is not None:
- 			mysettings["PORTAGE_BUILD_USER"] = user
- 
- 	if "PORTAGE_BUILD_GROUP" not in mysettings:
- 		group = None
- 		try:
- 			group = grp.getgrgid(portage_build_gid).gr_name
- 		except KeyError:
- 			if portage_build_gid == 0:
- 				group = "root"
- 			elif portage_build_gid == portage_gid:
- 				group = portage.data._portage_grpname
- 			# PREFIX LOCAL: accept numeric gid
- 			else:
- 				group = portage_gid
- 			# END PREFIX LOCAL
- 		if group is not None:
- 			mysettings["PORTAGE_BUILD_GROUP"] = group
- 
- 	if not free:
- 		free=((droppriv and "usersandbox" not in features) or \
- 			(not droppriv and "sandbox" not in features and \
- 			"usersandbox" not in features and not fakeroot))
- 
- 	if not free and not (fakeroot or portage.process.sandbox_capable or \
- 		portage.process.macossandbox_capable):
- 		free = True
- 
- 	if mysettings.mycpv is not None:
- 		keywords["opt_name"] = "[%s]" % mysettings.mycpv
- 	else:
- 		keywords["opt_name"] = "[%s/%s]" % \
- 			(mysettings.get("CATEGORY",""), mysettings.get("PF",""))
- 
- 	if free or "SANDBOX_ACTIVE" in os.environ:
- 		keywords["opt_name"] += " bash"
- 		spawn_func = portage.process.spawn_bash
- 	elif fakeroot:
- 		keywords["opt_name"] += " fakeroot"
- 		keywords["fakeroot_state"] = os.path.join(mysettings["T"], "fakeroot.state")
- 		spawn_func = portage.process.spawn_fakeroot
- 	elif "sandbox" in features and platform.system() == 'Darwin':
- 		keywords["opt_name"] += " macossandbox"
- 		sbprofile = MACOSSANDBOX_PROFILE
- 
- 		# determine variable names from profile: split
- 		# "text@@VARNAME@@moretext@@OTHERVAR@@restoftext" into
- 		# ("text", "VARNAME", "moretext", "OTHERVAR", "restoftext")
- 		# and extract variable names by reading every second item.
- 		variables = []
- 		for line in sbprofile.split("\n"):
- 			variables.extend(line.split("@@")[1:-1:2])
- 
- 		for var in variables:
- 			paths = ""
- 			if var in mysettings:
- 				paths = mysettings[var]
- 			else:
- 				writemsg("Warning: sandbox profile references variable %s "
- 						 "which is not set.\nThe rule using it will have no "
- 						 "effect, which is most likely not the intended "
- 						 "result.\nPlease check make.conf/make.globals.\n" %
- 						 var)
- 
- 			# not set or empty value
- 			if not paths:
- 				sbprofile = sbprofile.replace("@@%s@@" % var, "")
- 				continue
- 
- 			rules_literal = ""
- 			rules_regex = ""
- 
- 			# FIXME: Allow for quoting inside the variable
- 			# to allow paths with spaces in them?
- 			for path in paths.split(" "):
- 				# do a second round of token
- 				# replacements to be able to reference
- 				# settings like EPREFIX or
- 				# PORTAGE_BUILDDIR.
- 				for token in path.split("@@")[1:-1:2]:
- 					if token not in mysettings:
- 						continue
- 
- 					path = path.replace("@@%s@@" % token, mysettings[token])
- 
- 				if "@@" in path:
- 					# unreplaced tokens left -
- 					# silently ignore path - needed
- 					# for PORTAGE_ACTUAL_DISTDIR
- 					# which isn't always set
- 					pass
- 				elif path[-1] == os.sep:
- 					# path ends in slash - make it a
- 					# regex and allow access
- 					# recursively.
- 					path = path.replace(r'+', r'\+')
- 					path = path.replace(r'*', r'\*')
- 					path = path.replace(r'[', r'\[')
- 					path = path.replace(r']', r'\]')
- 					rules_regex += "    #\"^%s\"\n" % path
- 				else:
- 					rules_literal += "    #\"%s\"\n" % path
- 
- 			rules = ""
- 			if rules_literal:
- 				rules += "  (literal\n" + rules_literal + "  )\n"
- 			if rules_regex:
- 				rules += "  (regex\n" + rules_regex + "  )\n"
- 			sbprofile = sbprofile.replace("@@%s@@" % var, rules)
- 
- 		keywords["profile"] = sbprofile
- 		spawn_func = portage.process.spawn_macossandbox
- 	else:
- 		keywords["opt_name"] += " sandbox"
- 		spawn_func = portage.process.spawn_sandbox
- 
- 	if sesandbox:
- 		spawn_func = selinux.spawn_wrapper(spawn_func,
- 			mysettings["PORTAGE_SANDBOX_T"])
- 
- 	logname_backup = None
- 	if logname is not None:
- 		logname_backup = mysettings.configdict["env"].get("LOGNAME")
- 		mysettings.configdict["env"]["LOGNAME"] = logname
- 
- 	try:
- 		if keywords.get("returnpid"):
- 			return spawn_func(mystring, env=mysettings.environ(),
- 				**keywords)
- 
- 		proc = EbuildSpawnProcess(
- 			background=False, args=mystring,
- 			scheduler=SchedulerInterface(asyncio._safe_loop()),
- 			spawn_func=spawn_func,
- 			settings=mysettings, **keywords)
- 
- 		proc.start()
- 		proc.wait()
- 
- 		return proc.returncode
- 
- 	finally:
- 		if logname is None:
- 			pass
- 		elif logname_backup is None:
- 			mysettings.configdict["env"].pop("LOGNAME", None)
- 		else:
- 			mysettings.configdict["env"]["LOGNAME"] = logname_backup
+ 
+ 
+ def spawn(
+     mystring,
+     mysettings,
+     debug=False,
+     free=False,
+     droppriv=False,
+     sesandbox=False,
+     fakeroot=False,
+     networked=True,
+     ipc=True,
+     mountns=False,
+     pidns=False,
+     **keywords
+ ):
+     """
+     Spawn a subprocess with extra portage-specific options.
+     Options include:
+ 
+     Sandbox: Sandbox means the spawned process will be limited in its ability to
+     read and write files (normally this means it is restricted to ${D}/).
+     SELinux Sandbox: Enables sandboxing on SELinux.
+     Reduced Privileges: Drops privileges such that the process runs as portage:portage
+     instead of as root.
+ 
+     Notes: os.system cannot be used because it messes with signal handling.  Instead we
+     use the portage.process spawn* family of functions.
+ 
+     This function waits for the process to terminate.
+ 
+     @param mystring: Command to run
+     @type mystring: String
+     @param mysettings: A portage.config instance (check_config_instance() is
+     called on it, so a plain dict is not accepted)
+     @type mysettings: config instance
+     @param debug: Ignored
+     @type debug: Boolean
+     @param free: Skip sandboxing for this process (run it without a sandbox)
+     @type free: Boolean
+     @param droppriv: Drop to portage:portage when running this command
+     @type droppriv: Boolean
+     @param sesandbox: Enable SELinux Sandboxing (toggles a context switch)
+     @type sesandbox: Boolean
+     @param fakeroot: Run this command with faked root privileges
+     @type fakeroot: Boolean
+     @param networked: Run this command with networking access enabled
+     @type networked: Boolean
+     @param ipc: Run this command with host IPC access enabled
+     @type ipc: Boolean
+     @param mountns: Run this command inside a private mount namespace
+     @type mountns: Boolean
+     @param pidns: Run this command in an isolated PID namespace
+     @type pidns: Boolean
+     @param keywords: Extra options encoded as a dict, to be passed to spawn
+     @type keywords: Dictionary
+     @rtype: Integer
+     @return: The return code of the spawned process.
+     """
+ 
+     check_config_instance(mysettings)
+ 
+     fd_pipes = keywords.get("fd_pipes")
+     if fd_pipes is None:
+         fd_pipes = {
+             0: portage._get_stdin().fileno(),
+             1: sys.__stdout__.fileno(),
+             2: sys.__stderr__.fileno(),
+         }
+     # In some cases the above print statements don't flush stdout, so
+     # it needs to be flushed before allowing a child process to use it
+     # so that output always shows in the correct order.
+     stdout_filenos = (sys.__stdout__.fileno(), sys.__stderr__.fileno())
+     for fd in fd_pipes.values():
+         if fd in stdout_filenos:
+             sys.__stdout__.flush()
+             sys.__stderr__.flush()
+             break
+ 
+     features = mysettings.features
+ 
+     # Use Linux namespaces if available
+     if uid == 0 and platform.system() == "Linux":
+         keywords["unshare_net"] = not networked
+         keywords["unshare_ipc"] = not ipc
+         keywords["unshare_mount"] = mountns
+         keywords["unshare_pid"] = pidns
+ 
+         if (
+             not networked
+             and mysettings.get("EBUILD_PHASE") != "nofetch"
+             and ("network-sandbox-proxy" in features or "distcc" in features)
+         ):
+             # Provide a SOCKS5-over-UNIX-socket proxy to escape sandbox
+             # Don't do this for pkg_nofetch, since the spawn_nofetch
+             # function creates a private PORTAGE_TMPDIR.
+             try:
+                 proxy = get_socks5_proxy(mysettings)
+             except NotImplementedError:
+                 pass
+             else:
+                 mysettings["PORTAGE_SOCKS5_PROXY"] = proxy
+                 mysettings["DISTCC_SOCKS_PROXY"] = proxy
+ 
+     # TODO: Enable fakeroot to be used together with droppriv.  The
+     # fake ownership/permissions will have to be converted to real
+     # permissions in the merge phase.
+     fakeroot = fakeroot and uid != 0 and portage.process.fakeroot_capable
+     portage_build_uid = os.getuid()
+     portage_build_gid = os.getgid()
+     logname = None
+     if uid == 0 and portage_uid and portage_gid and hasattr(os, "setgroups"):
+         if droppriv:
+             logname = portage.data._portage_username
+             keywords.update(
+                 {
+                     "uid": portage_uid,
+                     "gid": portage_gid,
+                     "groups": userpriv_groups,
+                     "umask": 0o22,
+                 }
+             )
+ 
+             # Adjust pty ownership so that subprocesses
+             # can directly access /dev/fd/{1,2}.
+             stdout_fd = fd_pipes.get(1)
+             if stdout_fd is not None:
+                 try:
+                     subprocess_tty = _os.ttyname(stdout_fd)
+                 except OSError:
+                     pass
+                 else:
+                     try:
+                         parent_tty = _os.ttyname(sys.__stdout__.fileno())
+                     except OSError:
+                         parent_tty = None
+ 
+                     if subprocess_tty != parent_tty:
+                         _os.chown(subprocess_tty, int(portage_uid), int(portage_gid))
+ 
+         if (
+             "userpriv" in features
+             and "userpriv" not in mysettings["PORTAGE_RESTRICT"].split()
+             and secpass >= 2
+         ):
+             # Since Python 3.4, getpwuid and getgrgid
+             # require int type (no proxies).
+             portage_build_uid = int(portage_uid)
+             portage_build_gid = int(portage_gid)
+ 
+     if "PORTAGE_BUILD_USER" not in mysettings:
+         user = None
+         try:
+             user = pwd.getpwuid(portage_build_uid).pw_name
+         except KeyError:
+             if portage_build_uid == 0:
+                 user = "root"
+             elif portage_build_uid == portage_uid:
+                 user = portage.data._portage_username
++            # BEGIN PREFIX LOCAL: accept numeric uid
++            else:
++                user = portage_uid
++            # END PREFIX LOCAL
+         if user is not None:
+             mysettings["PORTAGE_BUILD_USER"] = user
+ 
+     if "PORTAGE_BUILD_GROUP" not in mysettings:
+         group = None
+         try:
+             group = grp.getgrgid(portage_build_gid).gr_name
+         except KeyError:
+             if portage_build_gid == 0:
+                 group = "root"
+             elif portage_build_gid == portage_gid:
+                 group = portage.data._portage_grpname
++            # BEGIN PREFIX LOCAL: accept numeric gid
++            else:
++                group = portage_gid
++            # END PREFIX LOCAL
+         if group is not None:
+             mysettings["PORTAGE_BUILD_GROUP"] = group
+ 
+     if not free:
+         free = (droppriv and "usersandbox" not in features) or (
+             not droppriv
+             and "sandbox" not in features
+             and "usersandbox" not in features
+             and not fakeroot
+         )
+ 
 -    if not free and not (fakeroot or portage.process.sandbox_capable):
++    if not free and not (fakeroot or portage.process.sandbox_capable
++            or portage.process.macossandbox_capable):  # PREFIX LOCAL
+         free = True
+ 
+     if mysettings.mycpv is not None:
+         keywords["opt_name"] = "[%s]" % mysettings.mycpv
+     else:
+         keywords["opt_name"] = "[%s/%s]" % (
+             mysettings.get("CATEGORY", ""),
+             mysettings.get("PF", ""),
+         )
+ 
+     if free or "SANDBOX_ACTIVE" in os.environ:
+         keywords["opt_name"] += " bash"
+         spawn_func = portage.process.spawn_bash
+     elif fakeroot:
+         keywords["opt_name"] += " fakeroot"
+         keywords["fakeroot_state"] = os.path.join(mysettings["T"], "fakeroot.state")
+         spawn_func = portage.process.spawn_fakeroot
++    # BEGIN PREFIX LOCAL
++    elif "sandbox" in features and platform.system() == 'Darwin':
++        keywords["opt_name"] += " macossandbox"
++        sbprofile = MACOSSANDBOX_PROFILE
++
++        # determine variable names from profile: split
++        # "text@@VARNAME@@moretext@@OTHERVAR@@restoftext" into
++        # ("text", "VARNAME", "moretext", "OTHERVAR", "restoftext")
++        # and extract the variable names by reading every second item.
++        variables = []
++        for line in sbprofile.split("\n"):
++            variables.extend(line.split("@@")[1:-1:2])
++
++        for var in variables:
++            paths = ""
++            if var in mysettings:
++                paths = mysettings[var]
++            else:
++                writemsg("Warning: sandbox profile references variable %s "
++                         "which is not set.\nThe rule using it will have no "
++                         "effect, which is most likely not the intended "
++                         "result.\nPlease check make.conf/make.globals.\n" %
++                         var)
++
++            # not set or empty value
++            if not paths:
++                sbprofile = sbprofile.replace("@@%s@@" % var, "")
++                continue
++
++            rules_literal = ""
++            rules_regex = ""
++
++            # FIXME: Allow for quoting inside the variable
++            # to allow paths with spaces in them?
++            for path in paths.split(" "):
++                # do a second round of token
++                # replacements to be able to reference
++                # settings like EPREFIX or
++                # PORTAGE_BUILDDIR.
++                for token in path.split("@@")[1:-1:2]:
++                    if token not in mysettings:
++                        continue
++
++                    path = path.replace("@@%s@@" % token, mysettings[token])
++
++                if "@@" in path:
++                    # unreplaced tokens left -
++                    # silently ignore path - needed
++                    # for PORTAGE_ACTUAL_DISTDIR
++                    # which isn't always set
++                    pass
++                elif path[-1] == os.sep:
++                    # path ends in slash - make it a
++                    # regex and allow access
++                    # recursively.
++                    path = path.replace(r'+', r'\+')
++                    path = path.replace(r'*', r'\*')
++                    path = path.replace(r'[', r'\[')
++                    path = path.replace(r']', r'\]')
++                    rules_regex += "    #\"^%s\"\n" % path
++                else:
++                    rules_literal += "    #\"%s\"\n" % path
++
++            rules = ""
++            if rules_literal:
++                rules += "  (literal\n" + rules_literal + "  )\n"
++            if rules_regex:
++                rules += "  (regex\n" + rules_regex + "  )\n"
++            sbprofile = sbprofile.replace("@@%s@@" % var, rules)
++
++        keywords["profile"] = sbprofile
++        spawn_func = portage.process.spawn_macossandbox
++    # END PREFIX LOCAL
+     else:
+         keywords["opt_name"] += " sandbox"
+         spawn_func = portage.process.spawn_sandbox
+ 
+     if sesandbox:
+         spawn_func = selinux.spawn_wrapper(spawn_func, mysettings["PORTAGE_SANDBOX_T"])
+ 
+     logname_backup = None
+     if logname is not None:
+         logname_backup = mysettings.configdict["env"].get("LOGNAME")
+         mysettings.configdict["env"]["LOGNAME"] = logname
+ 
+     try:
+         if keywords.get("returnpid"):
+             return spawn_func(mystring, env=mysettings.environ(), **keywords)
+ 
+         proc = EbuildSpawnProcess(
+             background=False,
+             args=mystring,
+             scheduler=SchedulerInterface(asyncio._safe_loop()),
+             spawn_func=spawn_func,
+             settings=mysettings,
+             **keywords
+         )
+ 
+         proc.start()
+         proc.wait()
+ 
+         return proc.returncode
+ 
+     finally:
+         if logname is None:
+             pass
+         elif logname_backup is None:
+             mysettings.configdict["env"].pop("LOGNAME", None)
+         else:
+             mysettings.configdict["env"]["LOGNAME"] = logname_backup
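As a usage sketch of the rewritten spawn() wrapper above (mysettings here is assumed to be an already-populated portage.config instance and the build command is illustrative):

    from portage.package.ebuild.doebuild import spawn

    # Ask for the command to be run as portage:portage without network access
    # (when running as root on Linux this becomes an unshared network namespace).
    rc = spawn("emake -j2", mysettings, droppriv=True, networked=False)
    if rc != 0:
        raise SystemExit(rc)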
+ 
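A standalone sketch of the "@@" placeholder extraction used for the macOS sandbox profile handling above (the profile line is made up for illustration):

    # Splitting on "@@" and reading every second item of the inner slice
    # yields exactly the placeholder names embedded in a line.
    line = "(allow file-write* @@PORTAGE_BUILDDIR@@ @@DISTDIR@@)"
    assert line.split("@@")[1:-1:2] == ["PORTAGE_BUILDDIR", "DISTDIR"]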
  
  # parse actionmap to spawn ebuild with the appropriate args
- def spawnebuild(mydo, actionmap, mysettings, debug, alwaysdep=0,
- 	logfile=None, fd_pipes=None, returnpid=False):
- 
- 	if returnpid:
- 		warnings.warn("portage.spawnebuild() called "
- 			"with returnpid parameter enabled. This usage will "
- 			"not be supported in the future.",
- 			DeprecationWarning, stacklevel=2)
- 
- 	if not returnpid and \
- 		(alwaysdep or "noauto" not in mysettings.features):
- 		# process dependency first
- 		if "dep" in actionmap[mydo]:
- 			retval = spawnebuild(actionmap[mydo]["dep"], actionmap,
- 				mysettings, debug, alwaysdep=alwaysdep, logfile=logfile,
- 				fd_pipes=fd_pipes, returnpid=returnpid)
- 			if retval:
- 				return retval
- 
- 	eapi = mysettings["EAPI"]
- 
- 	if mydo in ("configure", "prepare") and not eapi_has_src_prepare_and_src_configure(eapi):
- 		return os.EX_OK
- 
- 	if mydo == "pretend" and not eapi_has_pkg_pretend(eapi):
- 		return os.EX_OK
- 
- 	if not (mydo == "install" and "noauto" in mysettings.features):
- 		check_file = os.path.join(
- 			mysettings["PORTAGE_BUILDDIR"], ".%sed" % mydo.rstrip('e'))
- 		if os.path.exists(check_file):
- 			writemsg_stdout(_(">>> It appears that "
- 				"'%(action)s' has already executed for '%(pkg)s'; skipping.\n") %
- 				{"action":mydo, "pkg":mysettings["PF"]})
- 			writemsg_stdout(_(">>> Remove '%(file)s' to force %(action)s.\n") %
- 				{"file":check_file, "action":mydo})
- 			return os.EX_OK
- 
- 	return _spawn_phase(mydo, mysettings,
- 		actionmap=actionmap, logfile=logfile,
- 		fd_pipes=fd_pipes, returnpid=returnpid)
  
- _post_phase_cmds = {
  
- 	"install" : [
- 		"install_qa_check",
- 		"install_symlink_html_docs",
- 		"install_hooks"],
- 
- 	"preinst" : (
- 		(
- 			# Since SELinux does not allow LD_PRELOAD across domain transitions,
- 			# disable the LD_PRELOAD sandbox for preinst_selinux_labels.
- 			{
- 				"ld_preload_sandbox": False,
- 				"selinux_only": True,
- 			},
- 			[
- 				"preinst_selinux_labels",
- 			],
- 		),
- 		(
- 			{},
- 			[
- 				"preinst_aix",
- 				"preinst_sfperms",
- 				"preinst_suid_scan",
- 				"preinst_qa_check",
- 			],
- 		),
- 	),
- 	"postinst" : [
- 		"postinst_aix",
- 		"postinst_qa_check"],
+ def spawnebuild(
+     mydo,
+     actionmap,
+     mysettings,
+     debug,
+     alwaysdep=0,
+     logfile=None,
+     fd_pipes=None,
+     returnpid=False,
+ ):
+ 
+     if returnpid:
+         warnings.warn(
+             "portage.spawnebuild() called "
+             "with returnpid parameter enabled. This usage will "
+             "not be supported in the future.",
+             DeprecationWarning,
+             stacklevel=2,
+         )
+ 
+     if not returnpid and (alwaysdep or "noauto" not in mysettings.features):
+         # process dependency first
+         if "dep" in actionmap[mydo]:
+             retval = spawnebuild(
+                 actionmap[mydo]["dep"],
+                 actionmap,
+                 mysettings,
+                 debug,
+                 alwaysdep=alwaysdep,
+                 logfile=logfile,
+                 fd_pipes=fd_pipes,
+                 returnpid=returnpid,
+             )
+             if retval:
+                 return retval
+ 
+     eapi = mysettings["EAPI"]
+ 
+     if mydo in ("configure", "prepare") and not eapi_has_src_prepare_and_src_configure(
+         eapi
+     ):
+         return os.EX_OK
+ 
+     if mydo == "pretend" and not eapi_has_pkg_pretend(eapi):
+         return os.EX_OK
+ 
+     if not (mydo == "install" and "noauto" in mysettings.features):
+         check_file = os.path.join(
+             mysettings["PORTAGE_BUILDDIR"], ".%sed" % mydo.rstrip("e")
+         )
+         if os.path.exists(check_file):
+             writemsg_stdout(
+                 _(
+                     ">>> It appears that "
+                     "'%(action)s' has already executed for '%(pkg)s'; skipping.\n"
+                 )
+                 % {"action": mydo, "pkg": mysettings["PF"]}
+             )
+             writemsg_stdout(
+                 _(">>> Remove '%(file)s' to force %(action)s.\n")
+                 % {"file": check_file, "action": mydo}
+             )
+             return os.EX_OK
+ 
+     return _spawn_phase(
+         mydo,
+         mysettings,
+         actionmap=actionmap,
+         logfile=logfile,
+         fd_pipes=fd_pipes,
+         returnpid=returnpid,
+     )
+ 
+ 
+ _post_phase_cmds = {
+     "install": ["install_qa_check", "install_symlink_html_docs", "install_hooks"],
+     "preinst": (
+         (
+             # Since SELinux does not allow LD_PRELOAD across domain transitions,
+             # disable the LD_PRELOAD sandbox for preinst_selinux_labels.
+             {
+                 "ld_preload_sandbox": False,
+                 "selinux_only": True,
+             },
+             [
+                 "preinst_selinux_labels",
+             ],
+         ),
+         (
+             {},
+             [
++                # PREFIX LOCAL
++                "preinst_aix",
+                 "preinst_sfperms",
+                 "preinst_suid_scan",
+                 "preinst_qa_check",
+             ],
+         ),
+     ),
 -    "postinst": ["postinst_qa_check"],
++    "postinst": [
++        # PREFIX LOCAL
++        "postinst_aix",
++        "postinst_qa_check",
++    ],
  }
  
+ 
  def _post_phase_userpriv_perms(mysettings):
- 	if "userpriv" in mysettings.features and secpass >= 2:
- 		""" Privileged phases may have left files that need to be made
- 		writable to a less privileged user."""
- 		for path in (mysettings["HOME"], mysettings["T"]):
- 			apply_recursive_permissions(path,
- 				uid=portage_uid, gid=portage_gid, dirmode=0o700, dirmask=0,
- 				filemode=0o600, filemask=0)
+     if "userpriv" in mysettings.features and secpass >= 2:
+         """Privileged phases may have left files that need to be made
+         writable to a less privileged user."""
+         for path in (mysettings["HOME"], mysettings["T"]):
+             apply_recursive_permissions(
+                 path,
+                 uid=portage_uid,
+                 gid=portage_gid,
+                 dirmode=0o700,
+                 dirmask=0,
+                 filemode=0o600,
+                 filemask=0,
+             )
  
  
  def _post_phase_emptydir_cleanup(mysettings):
diff --cc lib/portage/package/ebuild/fetch.py
index 7c95245a7,2d3625800..2fcd33bd9
--- a/lib/portage/package/ebuild/fetch.py
+++ b/lib/portage/package/ebuild/fetch.py
@@@ -22,29 -22,44 +22,46 @@@ from urllib.parse import urlpars
  from urllib.parse import quote as urlquote
  
  import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 	'portage.package.ebuild.config:check_config_instance,config',
- 	'portage.package.ebuild.doebuild:doebuild_environment,' + \
- 		'_doebuild_spawn',
- 	'portage.package.ebuild.prepare_build_dirs:prepare_build_dirs',
- 	'portage.util:atomic_ofstream',
- 	'portage.util.configparser:SafeConfigParser,read_configs,' +
- 		'ConfigParserError',
- 	'portage.util.install_mask:_raise_exc',
- 	'portage.util._urlopen:urlopen',
+ 
+ portage.proxy.lazyimport.lazyimport(
+     globals(),
+     "portage.package.ebuild.config:check_config_instance,config",
+     "portage.package.ebuild.doebuild:doebuild_environment," + "_doebuild_spawn",
+     "portage.package.ebuild.prepare_build_dirs:prepare_build_dirs",
+     "portage.util:atomic_ofstream",
+     "portage.util.configparser:SafeConfigParser,read_configs," + "ConfigParserError",
+     "portage.util.install_mask:_raise_exc",
+     "portage.util._urlopen:urlopen",
  )
  
- from portage import os, selinux, shutil, _encodings, \
- 	_movefile, _shell_quote, _unicode_encode
- from portage.checksum import (get_valid_checksum_keys, perform_md5, verify_all,
- 	_filter_unaccelarated_hashes, _hash_filter, _apply_hash_filter,
- 	checksum_str)
- from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, \
- 	GLOBAL_CONFIG_PATH
+ from portage import (
+     os,
+     selinux,
+     shutil,
+     _encodings,
+     _movefile,
+     _shell_quote,
+     _unicode_encode,
+ )
+ from portage.checksum import (
+     get_valid_checksum_keys,
+     perform_md5,
+     verify_all,
+     _filter_unaccelarated_hashes,
+     _hash_filter,
+     _apply_hash_filter,
+     checksum_str,
+ )
+ from portage.const import BASH_BINARY, CUSTOM_MIRRORS_FILE, GLOBAL_CONFIG_PATH
++# PREFIX LOCAL
 +from portage.const import rootgid
  from portage.data import portage_gid, portage_uid, userpriv_groups
- from portage.exception import FileNotFound, OperationNotPermitted, \
- 	PortageException, TryAgain
+ from portage.exception import (
+     FileNotFound,
+     OperationNotPermitted,
+     PortageException,
+     TryAgain,
+ )
  from portage.localization import _
  from portage.locks import lockfile, unlockfile
  from portage.output import colorize, EOutput
@@@ -181,55 -217,63 +219,64 @@@ def _userpriv_test_write_file(settings
  
  
  def _ensure_distdir(settings, distdir):
- 	"""
- 	Ensure that DISTDIR exists with appropriate permissions.
- 
- 	@param settings: portage config
- 	@type settings: portage.package.ebuild.config.config
- 	@param distdir: DISTDIR path
- 	@type distdir: str
- 	@raise PortageException: portage.exception wrapper exception
- 	"""
- 	global _userpriv_test_write_file_cache
- 	dirmode  = 0o070
- 	filemode =   0o60
- 	modemask =    0o2
- 	dir_gid = portage_gid
- 	if "FAKED_MODE" in settings:
- 		# When inside fakeroot, directories with portage's gid appear
- 		# to have root's gid. Therefore, use root's gid instead of
- 		# portage's gid to avoid spurrious permissions adjustments
- 		# when inside fakeroot.
- 		dir_gid = rootgid
- 
- 	userfetch = portage.data.secpass >= 2 and "userfetch" in settings.features
- 	userpriv = portage.data.secpass >= 2 and "userpriv" in settings.features
- 	write_test_file = os.path.join(distdir, ".__portage_test_write__")
- 
- 	try:
- 		st = os.stat(distdir)
- 	except OSError:
- 		st = None
- 
- 	if st is not None and stat.S_ISDIR(st.st_mode):
- 		if not (userfetch or userpriv):
- 			return
- 		if _userpriv_test_write_file(settings, write_test_file):
- 			return
- 
- 	_userpriv_test_write_file_cache.pop(write_test_file, None)
- 	if ensure_dirs(distdir, gid=dir_gid, mode=dirmode, mask=modemask):
- 		if st is None:
- 			# The directory has just been created
- 			# and therefore it must be empty.
- 			return
- 		writemsg(_("Adjusting permissions recursively: '%s'\n") % distdir,
- 			noiselevel=-1)
- 		if not apply_recursive_permissions(distdir,
- 			gid=dir_gid, dirmode=dirmode, dirmask=modemask,
- 			filemode=filemode, filemask=modemask, onerror=_raise_exc):
- 			raise OperationNotPermitted(
- 				_("Failed to apply recursive permissions for the portage group."))
+     """
+     Ensure that DISTDIR exists with appropriate permissions.
+ 
+     @param settings: portage config
+     @type settings: portage.package.ebuild.config.config
+     @param distdir: DISTDIR path
+     @type distdir: str
+     @raise PortageException: portage.exception wrapper exception
+     """
+     global _userpriv_test_write_file_cache
+     dirmode = 0o070
+     filemode = 0o60
+     modemask = 0o2
+     dir_gid = portage_gid
+     if "FAKED_MODE" in settings:
+         # When inside fakeroot, directories with portage's gid appear
+         # to have root's gid. Therefore, use root's gid instead of
+         # portage's gid to avoid spurious permission adjustments
+         # when inside fakeroot.
 -        dir_gid = 0
++        # PREFIX LOCAL: do not assume root to be 0
++        dir_gid = rootgid
+ 
+     userfetch = portage.data.secpass >= 2 and "userfetch" in settings.features
+     userpriv = portage.data.secpass >= 2 and "userpriv" in settings.features
+     write_test_file = os.path.join(distdir, ".__portage_test_write__")
+ 
+     try:
+         st = os.stat(distdir)
+     except OSError:
+         st = None
+ 
+     if st is not None and stat.S_ISDIR(st.st_mode):
+         if not (userfetch or userpriv):
+             return
+         if _userpriv_test_write_file(settings, write_test_file):
+             return
+ 
+     _userpriv_test_write_file_cache.pop(write_test_file, None)
+     if ensure_dirs(distdir, gid=dir_gid, mode=dirmode, mask=modemask):
+         if st is None:
+             # The directory has just been created
+             # and therefore it must be empty.
+             return
+         writemsg(
+             _("Adjusting permissions recursively: '%s'\n") % distdir, noiselevel=-1
+         )
+         if not apply_recursive_permissions(
+             distdir,
+             gid=dir_gid,
+             dirmode=dirmode,
+             dirmask=modemask,
+             filemode=filemode,
+             filemask=modemask,
+             onerror=_raise_exc,
+         ):
+             raise OperationNotPermitted(
+                 _("Failed to apply recursive permissions for the portage group.")
+             )
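For reference, the octal constants above decode to the following permission bits (a standalone check, independent of ensure_dirs() internals):

    import stat

    # dirmode 0o070: group rwx on DISTDIR; filemode 0o60: group rw on files.
    assert stat.filemode(stat.S_IFDIR | 0o070) == "d---rwx---"
    assert stat.filemode(stat.S_IFREG | 0o060) == "----rw----"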
  
  
  def _checksum_failure_temp_file(settings, distdir, basename):
diff --cc lib/portage/process.py
index d608e6237,84e09f8ec..5879694f9
--- a/lib/portage/process.py
+++ b/lib/portage/process.py
@@@ -19,11 -19,13 +19,13 @@@ from portage import o
  from portage import _encodings
  from portage import _unicode_encode
  import portage
- portage.proxy.lazyimport.lazyimport(globals(),
- 	'portage.util:dump_traceback,writemsg',
+ 
+ portage.proxy.lazyimport.lazyimport(
+     globals(),
+     "portage.util:dump_traceback,writemsg",
  )
  
 -from portage.const import BASH_BINARY, SANDBOX_BINARY, FAKEROOT_BINARY
 +from portage.const import BASH_BINARY, SANDBOX_BINARY, MACOSSANDBOX_BINARY, FAKEROOT_BINARY
  from portage.exception import CommandNotFound
  from portage.util._ctypes import find_library, LoadLibrary, ctypes
  
@@@ -55,165 -58,166 +58,181 @@@ except AttributeError
  # Prefer /proc/self/fd if available (/dev/fd
  # doesn't work on solaris, see bug #474536).
  for _fd_dir in ("/proc/self/fd", "/dev/fd"):
- 	if os.path.isdir(_fd_dir):
- 		break
- 	else:
- 		_fd_dir = None
+     if os.path.isdir(_fd_dir):
+         break
+     else:
+         _fd_dir = None
  
  # /dev/fd does not work on FreeBSD, see bug #478446
- if platform.system() in ('FreeBSD',) and _fd_dir == '/dev/fd':
- 	_fd_dir = None
+ if platform.system() in ("FreeBSD",) and _fd_dir == "/dev/fd":
+     _fd_dir = None
  
  if _fd_dir is not None:
- 	def get_open_fds():
- 		return (int(fd) for fd in os.listdir(_fd_dir) if fd.isdigit())
- 
- 	if platform.python_implementation() == 'PyPy':
- 		# EAGAIN observed with PyPy 1.8.
- 		_get_open_fds = get_open_fds
- 		def get_open_fds():
- 			try:
- 				return _get_open_fds()
- 			except OSError as e:
- 				if e.errno != errno.EAGAIN:
- 					raise
- 				return range(max_fd_limit)
+ 
+     def get_open_fds():
+         return (int(fd) for fd in os.listdir(_fd_dir) if fd.isdigit())
+ 
+     if platform.python_implementation() == "PyPy":
+         # EAGAIN observed with PyPy 1.8.
+         _get_open_fds = get_open_fds
+ 
+         def get_open_fds():
+             try:
+                 return _get_open_fds()
+             except OSError as e:
+                 if e.errno != errno.EAGAIN:
+                     raise
+                 return range(max_fd_limit)
  
  elif os.path.isdir("/proc/%s/fd" % portage.getpid()):
- 	# In order for this function to work in forked subprocesses,
- 	# os.getpid() must be called from inside the function.
- 	def get_open_fds():
- 		return (int(fd) for fd in os.listdir("/proc/%s/fd" % portage.getpid())
- 			if fd.isdigit())
+     # In order for this function to work in forked subprocesses,
+     # os.getpid() must be called from inside the function.
+     def get_open_fds():
+         return (
+             int(fd)
+             for fd in os.listdir("/proc/%s/fd" % portage.getpid())
+             if fd.isdigit()
+         )
  
  else:
- 	def get_open_fds():
- 		return range(max_fd_limit)
  
- sandbox_capable = (os.path.isfile(SANDBOX_BINARY) and
-                    os.access(SANDBOX_BINARY, os.X_OK))
+     def get_open_fds():
+         return range(max_fd_limit)
  
- fakeroot_capable = (os.path.isfile(FAKEROOT_BINARY) and
-                     os.access(FAKEROOT_BINARY, os.X_OK))
+ 
+ sandbox_capable = os.path.isfile(SANDBOX_BINARY) and os.access(SANDBOX_BINARY, os.X_OK)
+ 
+ fakeroot_capable = os.path.isfile(FAKEROOT_BINARY) and os.access(
+     FAKEROOT_BINARY, os.X_OK
+ )
  
 +macossandbox_capable = (os.path.isfile(MACOSSANDBOX_BINARY) and
 +                   os.access(MACOSSANDBOX_BINARY, os.X_OK))
  
  def sanitize_fds():
- 	"""
- 	Set the inheritable flag to False for all open file descriptors,
- 	except for those corresponding to stdin, stdout, and stderr. This
- 	ensures that any unintentionally inherited file descriptors will
- 	not be inherited by child processes.
- 	"""
- 	if _set_inheritable is not None:
- 
- 		whitelist = frozenset([
- 			portage._get_stdin().fileno(),
- 			sys.__stdout__.fileno(),
- 			sys.__stderr__.fileno(),
- 		])
- 
- 		for fd in get_open_fds():
- 			if fd not in whitelist:
- 				try:
- 					_set_inheritable(fd, False)
- 				except OSError:
- 					pass
+     """
+     Set the inheritable flag to False for all open file descriptors,
+     except for those corresponding to stdin, stdout, and stderr. This
+     ensures that any unintentionally inherited file descriptors will
+     not be inherited by child processes.
+     """
+     if _set_inheritable is not None:
+ 
+         whitelist = frozenset(
+             [
+                 portage._get_stdin().fileno(),
+                 sys.__stdout__.fileno(),
+                 sys.__stderr__.fileno(),
+             ]
+         )
+ 
+         for fd in get_open_fds():
+             if fd not in whitelist:
+                 try:
+                     _set_inheritable(fd, False)
+                 except OSError:
+                     pass
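A hedged sketch of checking the effect of sanitize_fds() (Python 3; assumes the helpers above are importable from portage.process):

    import os
    from portage.process import get_open_fds, sanitize_fds

    def _inheritable(fd):
        try:
            return os.get_inheritable(fd)
        except OSError:
            return False

    sanitize_fds()
    # Only the whitelisted stdio descriptors should remain inheritable:
    print(sorted(fd for fd in get_open_fds() if _inheritable(fd)))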
  
  
  def spawn_bash(mycommand, debug=False, opt_name=None, **keywords):
- 	"""
- 	Spawns a bash shell running a specific commands
- 
- 	@param mycommand: The command for bash to run
- 	@type mycommand: String
- 	@param debug: Turn bash debugging on (set -x)
- 	@type debug: Boolean
- 	@param opt_name: Name of the spawned process (detaults to binary name)
- 	@type opt_name: String
- 	@param keywords: Extra Dictionary arguments to pass to spawn
- 	@type keywords: Dictionary
- 	"""
- 
- 	args = [BASH_BINARY]
- 	if not opt_name:
- 		opt_name = os.path.basename(mycommand.split()[0])
- 	if debug:
- 		# Print commands and their arguments as they are executed.
- 		args.append("-x")
- 	args.append("-c")
- 	args.append(mycommand)
- 	return spawn(args, opt_name=opt_name, **keywords)
+     """
+     Spawns a bash shell running a specific command
+ 
+     @param mycommand: The command for bash to run
+     @type mycommand: String
+     @param debug: Turn bash debugging on (set -x)
+     @type debug: Boolean
+     @param opt_name: Name of the spawned process (defaults to the binary name)
+     @type opt_name: String
+     @param keywords: Extra Dictionary arguments to pass to spawn
+     @type keywords: Dictionary
+     """
+ 
+     args = [BASH_BINARY]
+     if not opt_name:
+         opt_name = os.path.basename(mycommand.split()[0])
+     if debug:
+         # Print commands and their arguments as they are executed.
+         args.append("-x")
+     args.append("-c")
+     args.append(mycommand)
+     return spawn(args, opt_name=opt_name, **keywords)
+ 
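As an illustration of what spawn_bash() assembles (the command is a harmless placeholder; BASH_BINARY resolves inside the prefix):

    from portage.process import spawn_bash

    # Effectively execs [BASH_BINARY, "-x", "-c", "echo hello"] and returns
    # the exit status of the shell.
    rc = spawn_bash("echo hello", debug=True, opt_name="demo")
    assert rc == 0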
  
  def spawn_sandbox(mycommand, opt_name=None, **keywords):
- 	if not sandbox_capable:
- 		return spawn_bash(mycommand, opt_name=opt_name, **keywords)
- 	args = [SANDBOX_BINARY]
- 	if not opt_name:
- 		opt_name = os.path.basename(mycommand.split()[0])
- 	args.append(mycommand)
- 	return spawn(args, opt_name=opt_name, **keywords)
+     if not sandbox_capable:
+         return spawn_bash(mycommand, opt_name=opt_name, **keywords)
+     args = [SANDBOX_BINARY]
+     if not opt_name:
+         opt_name = os.path.basename(mycommand.split()[0])
+     args.append(mycommand)
+     return spawn(args, opt_name=opt_name, **keywords)
+ 
  
  def spawn_fakeroot(mycommand, fakeroot_state=None, opt_name=None, **keywords):
- 	args = [FAKEROOT_BINARY]
- 	if not opt_name:
- 		opt_name = os.path.basename(mycommand.split()[0])
- 	if fakeroot_state:
- 		open(fakeroot_state, "a").close()
- 		args.append("-s")
- 		args.append(fakeroot_state)
- 		args.append("-i")
- 		args.append(fakeroot_state)
- 	args.append("--")
- 	args.append(BASH_BINARY)
- 	args.append("-c")
- 	args.append(mycommand)
- 	return spawn(args, opt_name=opt_name, **keywords)
+     args = [FAKEROOT_BINARY]
+     if not opt_name:
+         opt_name = os.path.basename(mycommand.split()[0])
+     if fakeroot_state:
+         open(fakeroot_state, "a").close()
+         args.append("-s")
+         args.append(fakeroot_state)
+         args.append("-i")
+         args.append(fakeroot_state)
+     args.append("--")
+     args.append(BASH_BINARY)
+     args.append("-c")
+     args.append(mycommand)
+     return spawn(args, opt_name=opt_name, **keywords)
+ 
  
 +def spawn_macossandbox(mycommand, profile=None, opt_name=None, **keywords):
 +	if not macossandbox_capable:
 +		return spawn_bash(mycommand, opt_name=opt_name, **keywords)
 +	args=[MACOSSANDBOX_BINARY]
 +	if not opt_name:
 +		opt_name = os.path.basename(mycommand.split()[0])
 +	args.append("-p")
 +	args.append(profile)
 +	args.append(BASH_BINARY)
 +	args.append("-c")
 +	args.append(mycommand)
 +	return spawn(args, opt_name=opt_name, **keywords)
 +
  _exithandlers = []
+ 
+ 
  def atexit_register(func, *args, **kargs):
- 	"""Wrapper around atexit.register that is needed in order to track
- 	what is registered.  For example, when portage restarts itself via
- 	os.execv, the atexit module does not work so we have to do it
- 	manually by calling the run_exitfuncs() function in this module."""
- 	_exithandlers.append((func, args, kargs))
+     """Wrapper around atexit.register that is needed in order to track
+     what is registered.  For example, when portage restarts itself via
+     os.execv, the atexit module does not work so we have to do it
+     manually by calling the run_exitfuncs() function in this module."""
+     _exithandlers.append((func, args, kargs))
+ 
  
  def run_exitfuncs():
- 	"""This should behave identically to the routine performed by
- 	the atexit module at exit time.  It's only necessary to call this
- 	function when atexit will not work (because of os.execv, for
- 	example)."""
- 
- 	# This function is a copy of the private atexit._run_exitfuncs()
- 	# from the python 2.4.2 sources.  The only difference from the
- 	# original function is in the output to stderr.
- 	exc_info = None
- 	while _exithandlers:
- 		func, targs, kargs = _exithandlers.pop()
- 		try:
- 			func(*targs, **kargs)
- 		except SystemExit:
- 			exc_info = sys.exc_info()
- 		except: # No idea what they called, so we need this broad except here.
- 			dump_traceback("Error in portage.process.run_exitfuncs", noiselevel=0)
- 			exc_info = sys.exc_info()
- 
- 	if exc_info is not None:
- 		raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
+     """This should behave identically to the routine performed by
+     the atexit module at exit time.  It's only necessary to call this
+     function when atexit will not work (because of os.execv, for
+     example)."""
+ 
+     # This function is a copy of the private atexit._run_exitfuncs()
+     # from the python 2.4.2 sources.  The only difference from the
+     # original function is in the output to stderr.
+     exc_info = None
+     while _exithandlers:
+         func, targs, kargs = _exithandlers.pop()
+         try:
+             func(*targs, **kargs)
+         except SystemExit:
+             exc_info = sys.exc_info()
+         except:  # No idea what they called, so we need this broad except here.
+             dump_traceback("Error in portage.process.run_exitfuncs", noiselevel=0)
+             exc_info = sys.exc_info()
+ 
+     if exc_info is not None:
+         raise exc_info[0](exc_info[1]).with_traceback(exc_info[2])
+ 
  
  atexit.register(run_exitfuncs)
  
diff --cc lib/portage/tests/lazyimport/test_lazy_import_portage_baseline.py
index 2bc54698a,cf239240c..1115b18d7
--- a/lib/portage/tests/lazyimport/test_lazy_import_portage_baseline.py
+++ b/lib/portage/tests/lazyimport/test_lazy_import_portage_baseline.py
@@@ -11,19 -11,26 +11,28 @@@ from portage.util._eventloop.global_eve
  from _emerge.PipeReader import PipeReader
  from _emerge.SpawnProcess import SpawnProcess
  
+ 
  class LazyImportPortageBaselineTestCase(TestCase):
  
- 	_module_re = re.compile(r'^(portage|repoman|_emerge)\.')
+     _module_re = re.compile(r"^(portage|repoman|_emerge)\.")
  
- 	_baseline_imports = frozenset([
- 		'portage.const', 'portage.localization',
- 		'portage.proxy', 'portage.proxy.lazyimport',
- 		'portage.proxy.objectproxy',
- 		'portage._selinux',
- 		'portage.const_autotool',
- 	])
+     _baseline_imports = frozenset(
+         [
+             "portage.const",
+             "portage.localization",
+             "portage.proxy",
+             "portage.proxy.lazyimport",
+             "portage.proxy.objectproxy",
+             "portage._selinux",
++            # PREFIX LOCAL
++            'portage.const_autotool',
+         ]
+     )
  
- 	_baseline_import_cmd = [portage._python_interpreter, '-c', '''
+     _baseline_import_cmd = [
+         portage._python_interpreter,
+         "-c",
+         """
  import os
  import sys
  sys.path.insert(0, os.environ["PORTAGE_PYM_PATH"])
diff --cc lib/portage/tests/resolver/ResolverPlayground.py
index 67267a5cd,fdd0714e6..969d8f2fb
--- a/lib/portage/tests/resolver/ResolverPlayground.py
+++ b/lib/portage/tests/resolver/ResolverPlayground.py
@@@ -72,616 -88,669 +88,671 @@@ class ResolverPlayground
  </pkgmetadata>
  """
  
- 	portage_bin = (
- 		'ebuild',
- 		'egencache',
- 		'emerge',
- 		'emerge-webrsync',
- 		'emirrordist',
- 		'glsa-check',
- 		'portageq',
- 		'quickpkg',
- 	)
- 
- 	portage_sbin = (
- 		'archive-conf',
- 		'dispatch-conf',
- 		'emaint',
- 		'env-update',
- 		'etc-update',
- 		'fixpackages',
- 		'regenworld',
- 	)
- 
- 	def __init__(self, ebuilds={}, binpkgs={}, installed={}, profile={}, repo_configs={}, \
- 		user_config={}, sets={}, world=[], world_sets=[], distfiles={}, eclasses={},
- 		eprefix=None, targetroot=False, debug=False):
- 		"""
- 		ebuilds: cpv -> metadata mapping simulating available ebuilds.
- 		installed: cpv -> metadata mapping simulating installed packages.
- 			If a metadata key is missing, it gets a default value.
- 		profile: settings defined by the profile.
- 		"""
- 
- 		self.debug = debug
- 		if eprefix is None:
- 			self.eprefix = normalize_path(tempfile.mkdtemp())
- 
- 			# EPREFIX/bin is used by fake true_binaries. Real binaries goes into EPREFIX/usr/bin
- 			eubin = os.path.join(self.eprefix, "usr", "bin")
- 			ensure_dirs(eubin)
- 			for x in self.portage_bin:
- 				os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eubin, x))
- 
- 			eusbin = os.path.join(self.eprefix, "usr", "sbin")
- 			ensure_dirs(eusbin)
- 			for x in self.portage_sbin:
- 				os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eusbin, x))
- 
- 			essential_binaries = (
- 				"awk",
- 				"basename",
- 				"bzip2",
- 				"cat",
- 				"chgrp",
- 				"chmod",
- 				"chown",
- 				"comm",
- 				"cp",
- 				"egrep",
- 				"env",
- 				"find",
- 				"grep",
- 				"head",
- 				"install",
- 				"ln",
- 				"mkdir",
- 				"mkfifo",
- 				"mktemp",
- 				"mv",
- 				"readlink",
- 				"rm",
- 				"sed",
- 				"sort",
- 				"tar",
- 				"tr",
- 				"uname",
- 				"uniq",
- 				"xargs",
- 				"zstd",
- 			)
- 			# Exclude internal wrappers from PATH lookup.
- 			orig_path = os.environ['PATH']
- 			included_paths = []
- 			for path in orig_path.split(':'):
- 				if path and not fnmatch.fnmatch(path, '*/portage/*/ebuild-helpers*'):
- 					included_paths.append(path)
- 			try:
- 				os.environ['PATH'] = ':'.join(included_paths)
- 				for x in essential_binaries:
- 					path = find_binary(x)
- 					if path is None:
- 						raise portage.exception.CommandNotFound(x)
- 					os.symlink(path, os.path.join(eubin, x))
- 			finally:
- 				os.environ['PATH'] = orig_path
- 		else:
- 			self.eprefix = normalize_path(eprefix)
- 
- 		# Tests may override portage.const.EPREFIX in order to
- 		# simulate a prefix installation. It's reasonable to do
- 		# this because tests should be self-contained such that
- 		# the "real" value of portage.const.EPREFIX is entirely
- 		# irrelevant (see bug #492932).
- 		self._orig_eprefix = portage.const.EPREFIX
- 		portage.const.EPREFIX = self.eprefix.rstrip(os.sep)
- 
- 		self.eroot = self.eprefix + os.sep
- 		if targetroot:
- 			self.target_root = os.path.join(self.eroot, 'target_root')
- 		else:
- 			self.target_root = os.sep
- 		self.distdir = os.path.join(self.eroot, "var", "portage", "distfiles")
- 		self.pkgdir = os.path.join(self.eprefix, "pkgdir")
- 		self.vdbdir = os.path.join(self.eroot, "var/db/pkg")
- 		os.makedirs(self.vdbdir)
- 
- 		if not debug:
- 			portage.util.noiselimit = -2
- 
- 		self._repositories = {}
- 		#Make sure the main repo is always created
- 		self._get_repo_dir("test_repo")
- 
- 		self._create_distfiles(distfiles)
- 		self._create_ebuilds(ebuilds)
- 		self._create_binpkgs(binpkgs)
- 		self._create_installed(installed)
- 		self._create_profile(ebuilds, eclasses, installed, profile, repo_configs, user_config, sets)
- 		self._create_world(world, world_sets)
- 
- 		self.settings, self.trees = self._load_config()
- 
- 		self._create_ebuild_manifests(ebuilds)
- 
- 		portage.util.noiselimit = 0
- 
- 	def reload_config(self):
- 		"""
- 		Reload configuration from disk, which is useful if it has
- 		been modified after the constructor has been called.
- 		"""
- 		for eroot in self.trees:
- 			portdb = self.trees[eroot]["porttree"].dbapi
- 			portdb.close_caches()
- 		self.settings, self.trees = self._load_config()
- 
- 	def _get_repo_dir(self, repo):
- 		"""
- 		Create the repo directory if needed.
- 		"""
- 		if repo not in self._repositories:
- 			if repo == "test_repo":
- 				self._repositories["DEFAULT"] = {"main-repo": repo}
- 
- 			repo_path = os.path.join(self.eroot, "var", "repositories", repo)
- 			self._repositories[repo] = {"location": repo_path}
- 			profile_path = os.path.join(repo_path, "profiles")
- 
- 			try:
- 				os.makedirs(profile_path)
- 			except os.error:
- 				pass
- 
- 			repo_name_file = os.path.join(profile_path, "repo_name")
- 			with open(repo_name_file, "w") as f:
- 				f.write("%s\n" % repo)
- 
- 		return self._repositories[repo]["location"]
- 
- 	def _create_distfiles(self, distfiles):
- 		os.makedirs(self.distdir)
- 		for k, v in distfiles.items():
- 			with open(os.path.join(self.distdir, k), 'wb') as f:
- 				f.write(v)
- 
- 	def _create_ebuilds(self, ebuilds):
- 		for cpv in ebuilds:
- 			a = Atom("=" + cpv, allow_repo=True)
- 			repo = a.repo
- 			if repo is None:
- 				repo = "test_repo"
- 
- 			metadata = ebuilds[cpv].copy()
- 			copyright_header = metadata.pop("COPYRIGHT_HEADER", None)
- 			eapi = metadata.pop("EAPI", "0")
- 			misc_content = metadata.pop("MISC_CONTENT", None)
- 			metadata.setdefault("DEPEND", "")
- 			metadata.setdefault("SLOT", "0")
- 			metadata.setdefault("KEYWORDS", "x86")
- 			metadata.setdefault("IUSE", "")
- 
- 			unknown_keys = set(metadata).difference(
- 				portage.dbapi.dbapi._known_keys)
- 			if unknown_keys:
- 				raise ValueError("metadata of ebuild '%s' contains unknown keys: %s" %
- 					(cpv, sorted(unknown_keys)))
- 
- 			repo_dir = self._get_repo_dir(repo)
- 			ebuild_dir = os.path.join(repo_dir, a.cp)
- 			ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
- 			try:
- 				os.makedirs(ebuild_dir)
- 			except os.error:
- 				pass
- 
- 			with open(ebuild_path, "w") as f:
- 				if copyright_header is not None:
- 					f.write(copyright_header)
- 				f.write('EAPI="%s"\n' % eapi)
- 				for k, v in metadata.items():
- 					f.write('%s="%s"\n' % (k, v))
- 				if misc_content is not None:
- 					f.write(misc_content)
- 
- 	def _create_ebuild_manifests(self, ebuilds):
- 		tmpsettings = config(clone=self.settings)
- 		tmpsettings['PORTAGE_QUIET'] = '1'
- 		for cpv in ebuilds:
- 			a = Atom("=" + cpv, allow_repo=True)
- 			repo = a.repo
- 			if repo is None:
- 				repo = "test_repo"
- 
- 			repo_dir = self._get_repo_dir(repo)
- 			ebuild_dir = os.path.join(repo_dir, a.cp)
- 			ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
- 
- 			portdb = self.trees[self.eroot]["porttree"].dbapi
- 			tmpsettings['O'] = ebuild_dir
- 			if not digestgen(mysettings=tmpsettings, myportdb=portdb):
- 				raise AssertionError('digest creation failed for %s' % ebuild_path)
- 
- 	def _create_binpkgs(self, binpkgs):
- 		# When using BUILD_ID, there can be mutiple instances for the
- 		# same cpv. Therefore, binpkgs may be an iterable instead of
- 		# a dict.
- 		items = getattr(binpkgs, 'items', None)
- 		items = items() if items is not None else binpkgs
- 		for cpv, metadata in items:
- 			a = Atom("=" + cpv, allow_repo=True)
- 			repo = a.repo
- 			if repo is None:
- 				repo = "test_repo"
- 
- 			pn = catsplit(a.cp)[1]
- 			cat, pf = catsplit(a.cpv)
- 			metadata = metadata.copy()
- 			metadata.setdefault("SLOT", "0")
- 			metadata.setdefault("KEYWORDS", "x86")
- 			metadata.setdefault("BUILD_TIME", "0")
- 			metadata["repository"] = repo
- 			metadata["CATEGORY"] = cat
- 			metadata["PF"] = pf
- 			metadata["EPREFIX"] = self.eprefix
- 
- 			repo_dir = self.pkgdir
- 			category_dir = os.path.join(repo_dir, cat)
- 			if "BUILD_ID" in metadata:
- 				binpkg_path = os.path.join(category_dir, pn,
- 					"%s-%s.xpak"% (pf, metadata["BUILD_ID"]))
- 			else:
- 				binpkg_path = os.path.join(category_dir, pf + ".tbz2")
- 
- 			ensure_dirs(os.path.dirname(binpkg_path))
- 			t = portage.xpak.tbz2(binpkg_path)
- 			t.recompose_mem(portage.xpak.xpak_mem(metadata))
- 
- 	def _create_installed(self, installed):
- 		for cpv in installed:
- 			a = Atom("=" + cpv, allow_repo=True)
- 			repo = a.repo
- 			if repo is None:
- 				repo = "test_repo"
- 
- 			vdb_pkg_dir = os.path.join(self.vdbdir, a.cpv)
- 			try:
- 				os.makedirs(vdb_pkg_dir)
- 			except os.error:
- 				pass
- 
- 			metadata = installed[cpv].copy()
- 			metadata.setdefault("SLOT", "0")
- 			metadata.setdefault("BUILD_TIME", "0")
- 			metadata.setdefault("COUNTER", "0")
- 			metadata.setdefault("KEYWORDS", "~x86")
- 
- 			unknown_keys = set(metadata).difference(
- 				portage.dbapi.dbapi._known_keys)
- 			unknown_keys.discard("BUILD_TIME")
- 			unknown_keys.discard("BUILD_ID")
- 			unknown_keys.discard("COUNTER")
- 			unknown_keys.discard("repository")
- 			unknown_keys.discard("USE")
- 			unknown_keys.discard("PROVIDES")
- 			unknown_keys.discard("REQUIRES")
- 			if unknown_keys:
- 				raise ValueError("metadata of installed '%s' contains unknown keys: %s" %
- 					(cpv, sorted(unknown_keys)))
- 
- 			metadata["repository"] = repo
- 			for k, v in metadata.items():
- 				with open(os.path.join(vdb_pkg_dir, k), "w") as f:
- 					f.write("%s\n" % v)
- 
- 			ebuild_path = os.path.join(vdb_pkg_dir, a.cpv.split("/")[1] + ".ebuild")
- 			with open(ebuild_path, "w") as f:
- 				f.write('EAPI="%s"\n' % metadata.pop('EAPI', '0'))
- 				for k, v in metadata.items():
- 					f.write('%s="%s"\n' % (k, v))
- 
- 			env_path = os.path.join(vdb_pkg_dir, 'environment.bz2')
- 			with bz2.BZ2File(env_path, mode='w') as f:
- 				with open(ebuild_path, 'rb') as inputfile:
- 					f.write(inputfile.read())
- 
- 	def _create_profile(self, ebuilds, eclasses, installed, profile, repo_configs, user_config, sets):
- 
- 		user_config_dir = os.path.join(self.eroot, USER_CONFIG_PATH)
- 
- 		try:
- 			os.makedirs(user_config_dir)
- 		except os.error:
- 			pass
- 
- 		for repo in self._repositories:
- 			if repo == "DEFAULT":
- 				continue
- 
- 			repo_dir = self._get_repo_dir(repo)
- 			profile_dir = os.path.join(repo_dir, "profiles")
- 			metadata_dir = os.path.join(repo_dir, "metadata")
- 			os.makedirs(metadata_dir)
- 
- 			#Create $REPO/profiles/categories
- 			categories = set()
- 			for cpv in ebuilds:
- 				ebuilds_repo = Atom("="+cpv, allow_repo=True).repo
- 				if ebuilds_repo is None:
- 					ebuilds_repo = "test_repo"
- 				if ebuilds_repo == repo:
- 					categories.add(catsplit(cpv)[0])
- 
- 			categories_file = os.path.join(profile_dir, "categories")
- 			with open(categories_file, "w") as f:
- 				for cat in categories:
- 					f.write(cat + "\n")
- 
- 			#Create $REPO/profiles/license_groups
- 			license_file = os.path.join(profile_dir, "license_groups")
- 			with open(license_file, "w") as f:
- 				f.write("EULA TEST\n")
- 
- 			repo_config = repo_configs.get(repo)
- 			if repo_config:
- 				for config_file, lines in repo_config.items():
- 					if config_file not in self.config_files and not any(fnmatch.fnmatch(config_file, os.path.join(x, "*")) for x in self.config_files):
- 						raise ValueError("Unknown config file: '%s'" % config_file)
- 
- 					if config_file in ("layout.conf",):
- 						file_name = os.path.join(repo_dir, "metadata", config_file)
- 					else:
- 						file_name = os.path.join(profile_dir, config_file)
- 						if "/" in config_file and not os.path.isdir(os.path.dirname(file_name)):
- 							os.makedirs(os.path.dirname(file_name))
- 					with open(file_name, "w") as f:
- 						for line in lines:
- 							f.write("%s\n" % line)
- 						# Temporarily write empty value of masters until it becomes default.
- 						# TODO: Delete all references to "# use implicit masters" when empty value becomes default.
- 						if config_file == "layout.conf" and not any(line.startswith(("masters =", "# use implicit masters")) for line in lines):
- 							f.write("masters =\n")
- 
- 			#Create $profile_dir/eclass (we fail to digest the ebuilds if it's not there)
- 			eclass_dir = os.path.join(repo_dir, "eclass")
- 			os.makedirs(eclass_dir)
- 
- 			for eclass_name, eclass_content in eclasses.items():
- 				with open(os.path.join(eclass_dir, "{}.eclass".format(eclass_name)), 'wt') as f:
- 					if isinstance(eclass_content, str):
- 						eclass_content = [eclass_content]
- 					for line in eclass_content:
- 						f.write("{}\n".format(line))
- 
- 			# Temporarily write empty value of masters until it becomes default.
- 			if not repo_config or "layout.conf" not in repo_config:
- 				layout_conf_path = os.path.join(repo_dir, "metadata", "layout.conf")
- 				with open(layout_conf_path, "w") as f:
- 					f.write("masters =\n")
- 
- 			if repo == "test_repo":
- 				#Create a minimal profile in /var/db/repos/gentoo
- 				sub_profile_dir = os.path.join(profile_dir, "default", "linux", "x86", "test_profile")
- 				os.makedirs(sub_profile_dir)
- 
- 				if not (profile and "eapi" in profile):
- 					eapi_file = os.path.join(sub_profile_dir, "eapi")
- 					with open(eapi_file, "w") as f:
- 						f.write("0\n")
- 
- 				make_defaults_file = os.path.join(sub_profile_dir, "make.defaults")
- 				with open(make_defaults_file, "w") as f:
- 					f.write("ARCH=\"x86\"\n")
- 					f.write("ACCEPT_KEYWORDS=\"x86\"\n")
- 
- 				use_force_file = os.path.join(sub_profile_dir, "use.force")
- 				with open(use_force_file, "w") as f:
- 					f.write("x86\n")
- 
- 				parent_file = os.path.join(sub_profile_dir, "parent")
- 				with open(parent_file, "w") as f:
- 					f.write("..\n")
- 
- 				if profile:
- 					for config_file, lines in profile.items():
- 						if config_file not in self.config_files:
- 							raise ValueError("Unknown config file: '%s'" % config_file)
- 
- 						file_name = os.path.join(sub_profile_dir, config_file)
- 						with open(file_name, "w") as f:
- 							for line in lines:
- 								f.write("%s\n" % line)
- 
- 				#Create profile symlink
- 				os.symlink(sub_profile_dir, os.path.join(user_config_dir, "make.profile"))
- 
- 		make_conf = {
- 			"ACCEPT_KEYWORDS": "x86",
- 			"CLEAN_DELAY": "0",
- 			"DISTDIR" : self.distdir,
- 			"EMERGE_WARNING_DELAY": "0",
- 			"PKGDIR": self.pkgdir,
- 			"PORTAGE_INST_GID": str(portage.data.portage_gid),
- 			"PORTAGE_INST_UID": str(portage.data.portage_uid),
- 			"PORTAGE_TMPDIR": os.path.join(self.eroot, 'var/tmp'),
- 		}
- 
- 		if os.environ.get("NOCOLOR"):
- 			make_conf["NOCOLOR"] = os.environ["NOCOLOR"]
- 
- 		# Pass along PORTAGE_USERNAME and PORTAGE_GRPNAME since they
- 		# need to be inherited by ebuild subprocesses.
- 		if 'PORTAGE_USERNAME' in os.environ:
- 			make_conf['PORTAGE_USERNAME'] = os.environ['PORTAGE_USERNAME']
- 		if 'PORTAGE_GRPNAME' in os.environ:
- 			make_conf['PORTAGE_GRPNAME'] = os.environ['PORTAGE_GRPNAME']
- 
- 		make_conf_lines = []
- 		for k_v in make_conf.items():
- 			make_conf_lines.append('%s="%s"' % k_v)
- 
- 		if "make.conf" in user_config:
- 			make_conf_lines.extend(user_config["make.conf"])
- 
- 		if not portage.process.sandbox_capable or \
- 			os.environ.get("SANDBOX_ON") == "1":
- 			# avoid problems from nested sandbox instances
- 			make_conf_lines.append('FEATURES="${FEATURES} -sandbox -usersandbox"')
- 
- 		configs = user_config.copy()
- 		configs["make.conf"] = make_conf_lines
- 
- 		for config_file, lines in configs.items():
- 			if config_file not in self.config_files:
- 				raise ValueError("Unknown config file: '%s'" % config_file)
- 
- 			file_name = os.path.join(user_config_dir, config_file)
- 			with open(file_name, "w") as f:
- 				for line in lines:
- 					f.write("%s\n" % line)
- 
- 		#Create /usr/share/portage/config/make.globals
- 		make_globals_path = os.path.join(self.eroot,
- 			GLOBAL_CONFIG_PATH.lstrip(os.sep), "make.globals")
- 		ensure_dirs(os.path.dirname(make_globals_path))
- 		os.symlink(os.path.join(cnf_path, "make.globals"),
- 			make_globals_path)
- 
- 		#Create /usr/share/portage/config/sets/portage.conf
- 		default_sets_conf_dir = os.path.join(self.eroot, "usr/share/portage/config/sets")
- 
- 		try:
- 			os.makedirs(default_sets_conf_dir)
- 		except os.error:
- 			pass
- 
- 		provided_sets_portage_conf = (
- 			os.path.join(cnf_path, "sets", "portage.conf"))
- 		os.symlink(provided_sets_portage_conf, os.path.join(default_sets_conf_dir, "portage.conf"))
- 
- 		set_config_dir = os.path.join(user_config_dir, "sets")
- 
- 		try:
- 			os.makedirs(set_config_dir)
- 		except os.error:
- 			pass
- 
- 		for sets_file, lines in sets.items():
- 			file_name = os.path.join(set_config_dir, sets_file)
- 			with open(file_name, "w") as f:
- 				for line in lines:
- 					f.write("%s\n" % line)
- 
- 		if cnf_path_repoman is not None:
- 			#Create /usr/share/repoman
- 			repoman_share_dir = os.path.join(self.eroot, 'usr', 'share', 'repoman')
- 			os.symlink(cnf_path_repoman, repoman_share_dir)
- 
- 	def _create_world(self, world, world_sets):
- 		#Create /var/lib/portage/world
- 		var_lib_portage = os.path.join(self.eroot, "var", "lib", "portage")
- 		os.makedirs(var_lib_portage)
- 
- 		world_file = os.path.join(var_lib_portage, "world")
- 		world_set_file = os.path.join(var_lib_portage, "world_sets")
- 
- 		with open(world_file, "w") as f:
- 			for atom in world:
- 				f.write("%s\n" % atom)
- 
- 		with open(world_set_file, "w") as f:
- 			for atom in world_sets:
- 				f.write("%s\n" % atom)
- 
- 	def _load_config(self):
- 
- 		create_trees_kwargs = {}
- 		if self.target_root != os.sep:
- 			create_trees_kwargs["target_root"] = self.target_root
- 
- 		env = {
- 			"PORTAGE_REPOSITORIES": "\n".join("[%s]\n%s" % (repo_name, "\n".join("%s = %s" % (k, v) for k, v in repo_config.items())) for repo_name, repo_config in self._repositories.items())
- 		}
- 
- 		if self.debug:
- 			env["PORTAGE_DEBUG"] = "1"
- 
- 		trees = portage.create_trees(env=env, eprefix=self.eprefix,
- 			**create_trees_kwargs)
- 
- 		for root, root_trees in trees.items():
- 			settings = root_trees["vartree"].settings
- 			settings._init_dirs()
- 			setconfig = load_default_config(settings, root_trees)
- 			root_trees["root_config"] = RootConfig(settings, root_trees, setconfig)
- 
- 		return trees[trees._target_eroot]["vartree"].settings, trees
- 
- 	def run(self, atoms, options={}, action=None):
- 		options = options.copy()
- 		options["--pretend"] = True
- 		if self.debug:
- 			options["--debug"] = True
- 
- 		if action is None:
- 			if options.get("--depclean"):
- 				action = "depclean"
- 			elif options.get("--prune"):
- 				action = "prune"
- 
- 		if "--usepkgonly" in options:
- 			options["--usepkg"] = True
- 
- 		global_noiselimit = portage.util.noiselimit
- 		global_emergelog_disable = _emerge.emergelog._disable
- 		try:
- 
- 			if not self.debug:
- 				portage.util.noiselimit = -2
- 			_emerge.emergelog._disable = True
- 
- 			if action in ("depclean", "prune"):
- 				depclean_result = _calc_depclean(self.settings, self.trees, None,
- 					options, action, InternalPackageSet(initial_atoms=atoms, allow_wildcard=True), None)
- 				result = ResolverPlaygroundDepcleanResult(
- 					atoms,
- 					depclean_result.returncode,
- 					depclean_result.cleanlist,
- 					depclean_result.ordered,
- 					depclean_result.req_pkg_count,
- 					depclean_result.depgraph,
- 				)
- 			else:
- 				params = create_depgraph_params(options, action)
- 				success, depgraph, favorites = backtrack_depgraph(
- 					self.settings, self.trees, options, params, action, atoms, None)
- 				depgraph._show_merge_list()
- 				depgraph.display_problems()
- 				result = ResolverPlaygroundResult(atoms, success, depgraph, favorites)
- 		finally:
- 			portage.util.noiselimit = global_noiselimit
- 			_emerge.emergelog._disable = global_emergelog_disable
- 
- 		return result
- 
- 	def run_TestCase(self, test_case):
- 		if not isinstance(test_case, ResolverPlaygroundTestCase):
- 			raise TypeError("ResolverPlayground needs a ResolverPlaygroundTestCase")
- 		for atoms in test_case.requests:
- 			result = self.run(atoms, test_case.options, test_case.action)
- 			if not test_case.compare_with_result(result):
- 				return
- 
- 	def cleanup(self):
- 		for eroot in self.trees:
- 			portdb = self.trees[eroot]["porttree"].dbapi
- 			portdb.close_caches()
- 		if self.debug:
- 			print("\nEROOT=%s" % self.eroot)
- 		else:
- 			shutil.rmtree(self.eroot)
- 		if hasattr(self, '_orig_eprefix'):
- 			portage.const.EPREFIX = self._orig_eprefix
+     portage_bin = (
+         "ebuild",
+         "egencache",
+         "emerge",
+         "emerge-webrsync",
+         "emirrordist",
+         "glsa-check",
+         "portageq",
+         "quickpkg",
+     )
+ 
+     portage_sbin = (
+         "archive-conf",
+         "dispatch-conf",
+         "emaint",
+         "env-update",
+         "etc-update",
+         "fixpackages",
+         "regenworld",
+     )
+ 
+     def __init__(
+         self,
+         ebuilds={},
+         binpkgs={},
+         installed={},
+         profile={},
+         repo_configs={},
+         user_config={},
+         sets={},
+         world=[],
+         world_sets=[],
+         distfiles={},
+         eclasses={},
+         eprefix=None,
+         targetroot=False,
+         debug=False,
+     ):
+         """
+         ebuilds: cpv -> metadata mapping simulating available ebuilds.
+         installed: cpv -> metadata mapping simulating installed packages.
+                 If a metadata key is missing, it gets a default value.
+         profile: settings defined by the profile.
+         """
+ 
+         self.debug = debug
+         if eprefix is None:
+             self.eprefix = normalize_path(tempfile.mkdtemp())
+ 
+             # EPREFIX/bin is used by fake true_binaries. Real binaries go into EPREFIX/usr/bin
+             eubin = os.path.join(self.eprefix, "usr", "bin")
+             ensure_dirs(eubin)
+             for x in self.portage_bin:
+                 os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eubin, x))
+ 
+             eusbin = os.path.join(self.eprefix, "usr", "sbin")
+             ensure_dirs(eusbin)
+             for x in self.portage_sbin:
+                 os.symlink(os.path.join(PORTAGE_BIN_PATH, x), os.path.join(eusbin, x))
+ 
+             essential_binaries = (
+                 "awk",
+                 "basename",
+                 "bzip2",
+                 "cat",
+                 "chgrp",
+                 "chmod",
+                 "chown",
+                 "comm",
+                 "cp",
+                 "egrep",
+                 "env",
+                 "find",
+                 "grep",
+                 "head",
+                 "install",
+                 "ln",
+                 "mkdir",
+                 "mkfifo",
+                 "mktemp",
+                 "mv",
+                 "readlink",
+                 "rm",
+                 "sed",
+                 "sort",
+                 "tar",
+                 "tr",
+                 "uname",
+                 "uniq",
+                 "xargs",
+                 "zstd",
+             )
+             # Exclude internal wrappers from PATH lookup.
+             orig_path = os.environ["PATH"]
+             included_paths = []
+             for path in orig_path.split(":"):
+                 if path and not fnmatch.fnmatch(path, "*/portage/*/ebuild-helpers*"):
+                     included_paths.append(path)
+             try:
+                 os.environ["PATH"] = ":".join(included_paths)
+                 for x in essential_binaries:
+                     path = find_binary(x)
+                     if path is None:
+                         raise portage.exception.CommandNotFound(x)
+                     os.symlink(path, os.path.join(eubin, x))
+             finally:
+                 os.environ["PATH"] = orig_path
+         else:
+             self.eprefix = normalize_path(eprefix)
+ 
+         # Tests may override portage.const.EPREFIX in order to
+         # simulate a prefix installation. It's reasonable to do
+         # this because tests should be self-contained such that
+         # the "real" value of portage.const.EPREFIX is entirely
+         # irrelevant (see bug #492932).
+         self._orig_eprefix = portage.const.EPREFIX
+         portage.const.EPREFIX = self.eprefix.rstrip(os.sep)
+ 
+         self.eroot = self.eprefix + os.sep
+         if targetroot:
+             self.target_root = os.path.join(self.eroot, "target_root")
+         else:
+             self.target_root = os.sep
+         self.distdir = os.path.join(self.eroot, "var", "portage", "distfiles")
+         self.pkgdir = os.path.join(self.eprefix, "pkgdir")
+         self.vdbdir = os.path.join(self.eroot, "var/db/pkg")
+         os.makedirs(self.vdbdir)
+ 
+         if not debug:
+             portage.util.noiselimit = -2
+ 
+         self._repositories = {}
+         # Make sure the main repo is always created
+         self._get_repo_dir("test_repo")
+ 
+         self._create_distfiles(distfiles)
+         self._create_ebuilds(ebuilds)
+         self._create_binpkgs(binpkgs)
+         self._create_installed(installed)
+         self._create_profile(
+             ebuilds, eclasses, installed, profile, repo_configs, user_config, sets
+         )
+         self._create_world(world, world_sets)
+ 
+         self.settings, self.trees = self._load_config()
+ 
+         self._create_ebuild_manifests(ebuilds)
+ 
+         portage.util.noiselimit = 0
+ 
+     def reload_config(self):
+         """
+         Reload configuration from disk, which is useful if it has
+         been modified after the constructor has been called.
+         """
+         for eroot in self.trees:
+             portdb = self.trees[eroot]["porttree"].dbapi
+             portdb.close_caches()
+         self.settings, self.trees = self._load_config()
+ 
+     def _get_repo_dir(self, repo):
+         """
+         Create the repo directory if needed.
+         """
+         if repo not in self._repositories:
+             if repo == "test_repo":
+                 self._repositories["DEFAULT"] = {"main-repo": repo}
+ 
+             repo_path = os.path.join(self.eroot, "var", "repositories", repo)
+             self._repositories[repo] = {"location": repo_path}
+             profile_path = os.path.join(repo_path, "profiles")
+ 
+             try:
+                 os.makedirs(profile_path)
+             except os.error:
+                 pass
+ 
+             repo_name_file = os.path.join(profile_path, "repo_name")
+             with open(repo_name_file, "w") as f:
+                 f.write("%s\n" % repo)
+ 
+         return self._repositories[repo]["location"]
+ 
+     def _create_distfiles(self, distfiles):
+         os.makedirs(self.distdir)
+         for k, v in distfiles.items():
+             with open(os.path.join(self.distdir, k), "wb") as f:
+                 f.write(v)
+ 
+     def _create_ebuilds(self, ebuilds):
+         for cpv in ebuilds:
+             a = Atom("=" + cpv, allow_repo=True)
+             repo = a.repo
+             if repo is None:
+                 repo = "test_repo"
+ 
+             metadata = ebuilds[cpv].copy()
+             copyright_header = metadata.pop("COPYRIGHT_HEADER", None)
+             eapi = metadata.pop("EAPI", "0")
+             misc_content = metadata.pop("MISC_CONTENT", None)
+             metadata.setdefault("DEPEND", "")
+             metadata.setdefault("SLOT", "0")
+             metadata.setdefault("KEYWORDS", "x86")
+             metadata.setdefault("IUSE", "")
+ 
+             unknown_keys = set(metadata).difference(portage.dbapi.dbapi._known_keys)
+             if unknown_keys:
+                 raise ValueError(
+                     "metadata of ebuild '%s' contains unknown keys: %s"
+                     % (cpv, sorted(unknown_keys))
+                 )
+ 
+             repo_dir = self._get_repo_dir(repo)
+             ebuild_dir = os.path.join(repo_dir, a.cp)
+             ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
+             try:
+                 os.makedirs(ebuild_dir)
+             except os.error:
+                 pass
+ 
+             with open(ebuild_path, "w") as f:
+                 if copyright_header is not None:
+                     f.write(copyright_header)
+                 f.write('EAPI="%s"\n' % eapi)
+                 for k, v in metadata.items():
+                     f.write('%s="%s"\n' % (k, v))
+                 if misc_content is not None:
+                     f.write(misc_content)
+ 
+     def _create_ebuild_manifests(self, ebuilds):
+         tmpsettings = config(clone=self.settings)
+         tmpsettings["PORTAGE_QUIET"] = "1"
+         for cpv in ebuilds:
+             a = Atom("=" + cpv, allow_repo=True)
+             repo = a.repo
+             if repo is None:
+                 repo = "test_repo"
+ 
+             repo_dir = self._get_repo_dir(repo)
+             ebuild_dir = os.path.join(repo_dir, a.cp)
+             ebuild_path = os.path.join(ebuild_dir, a.cpv.split("/")[1] + ".ebuild")
+ 
+             portdb = self.trees[self.eroot]["porttree"].dbapi
+             tmpsettings["O"] = ebuild_dir
+             if not digestgen(mysettings=tmpsettings, myportdb=portdb):
+                 raise AssertionError("digest creation failed for %s" % ebuild_path)
+ 
+     def _create_binpkgs(self, binpkgs):
+         # When using BUILD_ID, there can be multiple instances for the
+         # same cpv. Therefore, binpkgs may be an iterable instead of
+         # a dict.
+         items = getattr(binpkgs, "items", None)
+         items = items() if items is not None else binpkgs
+         for cpv, metadata in items:
+             a = Atom("=" + cpv, allow_repo=True)
+             repo = a.repo
+             if repo is None:
+                 repo = "test_repo"
+ 
+             pn = catsplit(a.cp)[1]
+             cat, pf = catsplit(a.cpv)
+             metadata = metadata.copy()
+             metadata.setdefault("SLOT", "0")
+             metadata.setdefault("KEYWORDS", "x86")
+             metadata.setdefault("BUILD_TIME", "0")
+             metadata["repository"] = repo
+             metadata["CATEGORY"] = cat
+             metadata["PF"] = pf
++            # PREFIX LOCAL
++            metadata["EPREFIX"] = self.eprefix
+ 
+             repo_dir = self.pkgdir
+             category_dir = os.path.join(repo_dir, cat)
+             if "BUILD_ID" in metadata:
+                 binpkg_path = os.path.join(
+                     category_dir, pn, "%s-%s.xpak" % (pf, metadata["BUILD_ID"])
+                 )
+             else:
+                 binpkg_path = os.path.join(category_dir, pf + ".tbz2")
+ 
+             ensure_dirs(os.path.dirname(binpkg_path))
+             t = portage.xpak.tbz2(binpkg_path)
+             t.recompose_mem(portage.xpak.xpak_mem(metadata))
+ 
+     def _create_installed(self, installed):
+         for cpv in installed:
+             a = Atom("=" + cpv, allow_repo=True)
+             repo = a.repo
+             if repo is None:
+                 repo = "test_repo"
+ 
+             vdb_pkg_dir = os.path.join(self.vdbdir, a.cpv)
+             try:
+                 os.makedirs(vdb_pkg_dir)
+             except os.error:
+                 pass
+ 
+             metadata = installed[cpv].copy()
+             metadata.setdefault("SLOT", "0")
+             metadata.setdefault("BUILD_TIME", "0")
+             metadata.setdefault("COUNTER", "0")
+             metadata.setdefault("KEYWORDS", "~x86")
+ 
+             unknown_keys = set(metadata).difference(portage.dbapi.dbapi._known_keys)
+             unknown_keys.discard("BUILD_TIME")
+             unknown_keys.discard("BUILD_ID")
+             unknown_keys.discard("COUNTER")
+             unknown_keys.discard("repository")
+             unknown_keys.discard("USE")
+             unknown_keys.discard("PROVIDES")
+             unknown_keys.discard("REQUIRES")
+             if unknown_keys:
+                 raise ValueError(
+                     "metadata of installed '%s' contains unknown keys: %s"
+                     % (cpv, sorted(unknown_keys))
+                 )
+ 
+             metadata["repository"] = repo
+             for k, v in metadata.items():
+                 with open(os.path.join(vdb_pkg_dir, k), "w") as f:
+                     f.write("%s\n" % v)
+ 
+             ebuild_path = os.path.join(vdb_pkg_dir, a.cpv.split("/")[1] + ".ebuild")
+             with open(ebuild_path, "w") as f:
+                 f.write('EAPI="%s"\n' % metadata.pop("EAPI", "0"))
+                 for k, v in metadata.items():
+                     f.write('%s="%s"\n' % (k, v))
+ 
+             env_path = os.path.join(vdb_pkg_dir, "environment.bz2")
+             with bz2.BZ2File(env_path, mode="w") as f:
+                 with open(ebuild_path, "rb") as inputfile:
+                     f.write(inputfile.read())
+ 
+     def _create_profile(
+         self, ebuilds, eclasses, installed, profile, repo_configs, user_config, sets
+     ):
+ 
+         user_config_dir = os.path.join(self.eroot, USER_CONFIG_PATH)
+ 
+         try:
+             os.makedirs(user_config_dir)
+         except os.error:
+             pass
+ 
+         for repo in self._repositories:
+             if repo == "DEFAULT":
+                 continue
+ 
+             repo_dir = self._get_repo_dir(repo)
+             profile_dir = os.path.join(repo_dir, "profiles")
+             metadata_dir = os.path.join(repo_dir, "metadata")
+             os.makedirs(metadata_dir)
+ 
+             # Create $REPO/profiles/categories
+             categories = set()
+             for cpv in ebuilds:
+                 ebuilds_repo = Atom("=" + cpv, allow_repo=True).repo
+                 if ebuilds_repo is None:
+                     ebuilds_repo = "test_repo"
+                 if ebuilds_repo == repo:
+                     categories.add(catsplit(cpv)[0])
+ 
+             categories_file = os.path.join(profile_dir, "categories")
+             with open(categories_file, "w") as f:
+                 for cat in categories:
+                     f.write(cat + "\n")
+ 
+             # Create $REPO/profiles/license_groups
+             license_file = os.path.join(profile_dir, "license_groups")
+             with open(license_file, "w") as f:
+                 f.write("EULA TEST\n")
+ 
+             repo_config = repo_configs.get(repo)
+             if repo_config:
+                 for config_file, lines in repo_config.items():
+                     if config_file not in self.config_files and not any(
+                         fnmatch.fnmatch(config_file, os.path.join(x, "*"))
+                         for x in self.config_files
+                     ):
+                         raise ValueError("Unknown config file: '%s'" % config_file)
+ 
+                     if config_file in ("layout.conf",):
+                         file_name = os.path.join(repo_dir, "metadata", config_file)
+                     else:
+                         file_name = os.path.join(profile_dir, config_file)
+                         if "/" in config_file and not os.path.isdir(
+                             os.path.dirname(file_name)
+                         ):
+                             os.makedirs(os.path.dirname(file_name))
+                     with open(file_name, "w") as f:
+                         for line in lines:
+                             f.write("%s\n" % line)
+                         # Temporarily write empty value of masters until it becomes default.
+                         # TODO: Delete all references to "# use implicit masters" when empty value becomes default.
+                         if config_file == "layout.conf" and not any(
+                             line.startswith(("masters =", "# use implicit masters"))
+                             for line in lines
+                         ):
+                             f.write("masters =\n")
+ 
+             # Create $profile_dir/eclass (we fail to digest the ebuilds if it's not there)
+             eclass_dir = os.path.join(repo_dir, "eclass")
+             os.makedirs(eclass_dir)
+ 
+             for eclass_name, eclass_content in eclasses.items():
+                 with open(
+                     os.path.join(eclass_dir, "{}.eclass".format(eclass_name)), "wt"
+                 ) as f:
+                     if isinstance(eclass_content, str):
+                         eclass_content = [eclass_content]
+                     for line in eclass_content:
+                         f.write("{}\n".format(line))
+ 
+             # Temporarily write empty value of masters until it becomes default.
+             if not repo_config or "layout.conf" not in repo_config:
+                 layout_conf_path = os.path.join(repo_dir, "metadata", "layout.conf")
+                 with open(layout_conf_path, "w") as f:
+                     f.write("masters =\n")
+ 
+             if repo == "test_repo":
+                 # Create a minimal profile in /var/db/repos/gentoo
+                 sub_profile_dir = os.path.join(
+                     profile_dir, "default", "linux", "x86", "test_profile"
+                 )
+                 os.makedirs(sub_profile_dir)
+ 
+                 if not (profile and "eapi" in profile):
+                     eapi_file = os.path.join(sub_profile_dir, "eapi")
+                     with open(eapi_file, "w") as f:
+                         f.write("0\n")
+ 
+                 make_defaults_file = os.path.join(sub_profile_dir, "make.defaults")
+                 with open(make_defaults_file, "w") as f:
+                     f.write('ARCH="x86"\n')
+                     f.write('ACCEPT_KEYWORDS="x86"\n')
+ 
+                 use_force_file = os.path.join(sub_profile_dir, "use.force")
+                 with open(use_force_file, "w") as f:
+                     f.write("x86\n")
+ 
+                 parent_file = os.path.join(sub_profile_dir, "parent")
+                 with open(parent_file, "w") as f:
+                     f.write("..\n")
+ 
+                 if profile:
+                     for config_file, lines in profile.items():
+                         if config_file not in self.config_files:
+                             raise ValueError("Unknown config file: '%s'" % config_file)
+ 
+                         file_name = os.path.join(sub_profile_dir, config_file)
+                         with open(file_name, "w") as f:
+                             for line in lines:
+                                 f.write("%s\n" % line)
+ 
+                 # Create profile symlink
+                 os.symlink(
+                     sub_profile_dir, os.path.join(user_config_dir, "make.profile")
+                 )
+ 
+         make_conf = {
+             "ACCEPT_KEYWORDS": "x86",
+             "CLEAN_DELAY": "0",
+             "DISTDIR": self.distdir,
+             "EMERGE_WARNING_DELAY": "0",
+             "PKGDIR": self.pkgdir,
+             "PORTAGE_INST_GID": str(portage.data.portage_gid),
+             "PORTAGE_INST_UID": str(portage.data.portage_uid),
+             "PORTAGE_TMPDIR": os.path.join(self.eroot, "var/tmp"),
+         }
+ 
+         if os.environ.get("NOCOLOR"):
+             make_conf["NOCOLOR"] = os.environ["NOCOLOR"]
+ 
+         # Pass along PORTAGE_USERNAME and PORTAGE_GRPNAME since they
+         # need to be inherited by ebuild subprocesses.
+         if "PORTAGE_USERNAME" in os.environ:
+             make_conf["PORTAGE_USERNAME"] = os.environ["PORTAGE_USERNAME"]
+         if "PORTAGE_GRPNAME" in os.environ:
+             make_conf["PORTAGE_GRPNAME"] = os.environ["PORTAGE_GRPNAME"]
+ 
+         make_conf_lines = []
+         for k_v in make_conf.items():
+             make_conf_lines.append('%s="%s"' % k_v)
+ 
+         if "make.conf" in user_config:
+             make_conf_lines.extend(user_config["make.conf"])
+ 
+         if not portage.process.sandbox_capable or os.environ.get("SANDBOX_ON") == "1":
+             # avoid problems from nested sandbox instances
+             make_conf_lines.append('FEATURES="${FEATURES} -sandbox -usersandbox"')
+ 
+         configs = user_config.copy()
+         configs["make.conf"] = make_conf_lines
+ 
+         for config_file, lines in configs.items():
+             if config_file not in self.config_files:
+                 raise ValueError("Unknown config file: '%s'" % config_file)
+ 
+             file_name = os.path.join(user_config_dir, config_file)
+             with open(file_name, "w") as f:
+                 for line in lines:
+                     f.write("%s\n" % line)
+ 
+         # Create /usr/share/portage/config/make.globals
+         make_globals_path = os.path.join(
+             self.eroot, GLOBAL_CONFIG_PATH.lstrip(os.sep), "make.globals"
+         )
+         ensure_dirs(os.path.dirname(make_globals_path))
+         os.symlink(os.path.join(cnf_path, "make.globals"), make_globals_path)
+ 
+         # Create /usr/share/portage/config/sets/portage.conf
+         default_sets_conf_dir = os.path.join(
+             self.eroot, "usr/share/portage/config/sets"
+         )
+ 
+         try:
+             os.makedirs(default_sets_conf_dir)
+         except os.error:
+             pass
+ 
+         provided_sets_portage_conf = os.path.join(cnf_path, "sets", "portage.conf")
+         os.symlink(
+             provided_sets_portage_conf,
+             os.path.join(default_sets_conf_dir, "portage.conf"),
+         )
+ 
+         set_config_dir = os.path.join(user_config_dir, "sets")
+ 
+         try:
+             os.makedirs(set_config_dir)
+         except os.error:
+             pass
+ 
+         for sets_file, lines in sets.items():
+             file_name = os.path.join(set_config_dir, sets_file)
+             with open(file_name, "w") as f:
+                 for line in lines:
+                     f.write("%s\n" % line)
+ 
+         if cnf_path_repoman is not None:
+             # Create /usr/share/repoman
+             repoman_share_dir = os.path.join(self.eroot, "usr", "share", "repoman")
+             os.symlink(cnf_path_repoman, repoman_share_dir)
+ 
+     def _create_world(self, world, world_sets):
+         # Create /var/lib/portage/world
+         var_lib_portage = os.path.join(self.eroot, "var", "lib", "portage")
+         os.makedirs(var_lib_portage)
+ 
+         world_file = os.path.join(var_lib_portage, "world")
+         world_set_file = os.path.join(var_lib_portage, "world_sets")
+ 
+         with open(world_file, "w") as f:
+             for atom in world:
+                 f.write("%s\n" % atom)
+ 
+         with open(world_set_file, "w") as f:
+             for atom in world_sets:
+                 f.write("%s\n" % atom)
+ 
+     def _load_config(self):
+ 
+         create_trees_kwargs = {}
+         if self.target_root != os.sep:
+             create_trees_kwargs["target_root"] = self.target_root
+ 
+         env = {
+             "PORTAGE_REPOSITORIES": "\n".join(
+                 "[%s]\n%s"
+                 % (
+                     repo_name,
+                     "\n".join("%s = %s" % (k, v) for k, v in repo_config.items()),
+                 )
+                 for repo_name, repo_config in self._repositories.items()
+             )
+         }
+ 
+         if self.debug:
+             env["PORTAGE_DEBUG"] = "1"
+ 
+         trees = portage.create_trees(
+             env=env, eprefix=self.eprefix, **create_trees_kwargs
+         )
+ 
+         for root, root_trees in trees.items():
+             settings = root_trees["vartree"].settings
+             settings._init_dirs()
+             setconfig = load_default_config(settings, root_trees)
+             root_trees["root_config"] = RootConfig(settings, root_trees, setconfig)
+ 
+         return trees[trees._target_eroot]["vartree"].settings, trees
+ 
+     def run(self, atoms, options={}, action=None):
+         options = options.copy()
+         options["--pretend"] = True
+         if self.debug:
+             options["--debug"] = True
+ 
+         if action is None:
+             if options.get("--depclean"):
+                 action = "depclean"
+             elif options.get("--prune"):
+                 action = "prune"
+ 
+         if "--usepkgonly" in options:
+             options["--usepkg"] = True
+ 
+         global_noiselimit = portage.util.noiselimit
+         global_emergelog_disable = _emerge.emergelog._disable
+         try:
+ 
+             if not self.debug:
+                 portage.util.noiselimit = -2
+             _emerge.emergelog._disable = True
+ 
+             if action in ("depclean", "prune"):
+                 depclean_result = _calc_depclean(
+                     self.settings,
+                     self.trees,
+                     None,
+                     options,
+                     action,
+                     InternalPackageSet(initial_atoms=atoms, allow_wildcard=True),
+                     None,
+                 )
+                 result = ResolverPlaygroundDepcleanResult(
+                     atoms,
+                     depclean_result.returncode,
+                     depclean_result.cleanlist,
+                     depclean_result.ordered,
+                     depclean_result.req_pkg_count,
+                     depclean_result.depgraph,
+                 )
+             else:
+                 params = create_depgraph_params(options, action)
+                 success, depgraph, favorites = backtrack_depgraph(
+                     self.settings, self.trees, options, params, action, atoms, None
+                 )
+                 depgraph._show_merge_list()
+                 depgraph.display_problems()
+                 result = ResolverPlaygroundResult(atoms, success, depgraph, favorites)
+         finally:
+             portage.util.noiselimit = global_noiselimit
+             _emerge.emergelog._disable = global_emergelog_disable
+ 
+         return result
+ 
+     def run_TestCase(self, test_case):
+         if not isinstance(test_case, ResolverPlaygroundTestCase):
+             raise TypeError("ResolverPlayground needs a ResolverPlaygroundTestCase")
+         for atoms in test_case.requests:
+             result = self.run(atoms, test_case.options, test_case.action)
+             if not test_case.compare_with_result(result):
+                 return
+ 
+     def cleanup(self):
+         for eroot in self.trees:
+             portdb = self.trees[eroot]["porttree"].dbapi
+             portdb.close_caches()
+         if self.debug:
+             print("\nEROOT=%s" % self.eroot)
+         else:
+             shutil.rmtree(self.eroot)
+         if hasattr(self, "_orig_eprefix"):
+             portage.const.EPREFIX = self._orig_eprefix
  
  
  class ResolverPlaygroundTestCase:
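
For orientation while reading this reformatting-only hunk, here is a minimal
usage sketch of the ResolverPlayground class above. The package atoms and
metadata are invented for illustration; only the constructor, run() and
cleanup() shown in the hunk are relied upon, plus the module's known import
path.

    # Hypothetical example -- dev-libs/A and dev-libs/B are made-up packages.
    from portage.tests.resolver.ResolverPlayground import ResolverPlayground

    playground = ResolverPlayground(
        ebuilds={
            "dev-libs/A-1": {"EAPI": "7", "RDEPEND": "dev-libs/B"},
            "dev-libs/B-1": {"EAPI": "7"},
        },
    )
    try:
        # run() forces --pretend, so nothing is actually merged.
        result = playground.run(["dev-libs/A"])
        # success mirrors the backtrack_depgraph() flag passed to
        # ResolverPlaygroundResult above.
        assert result.success
    finally:
        # Remove the temporary EPREFIX created by the constructor.
        playground.cleanup()
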
diff --cc lib/portage/util/__init__.py
index 8c2f96f56,5ade7f660..11a7d0677
--- a/lib/portage/util/__init__.py
+++ b/lib/portage/util/__init__.py
@@@ -904,918 -1053,970 +1054,983 @@@ def varexpand(mystring, mydict=None, er
  # broken and removed, but can still be imported
  pickle_write = None
  
+ 
  def pickle_read(filename, default=None, debug=0):
- 	if not os.access(filename, os.R_OK):
- 		writemsg(_("pickle_read(): File not readable. '") + filename + "'\n", 1)
- 		return default
- 	data = None
- 	try:
- 		myf = open(_unicode_encode(filename,
- 			encoding=_encodings['fs'], errors='strict'), 'rb')
- 		mypickle = pickle.Unpickler(myf)
- 		data = mypickle.load()
- 		myf.close()
- 		del mypickle, myf
- 		writemsg(_("pickle_read(): Loaded pickle. '") + filename + "'\n", 1)
- 	except SystemExit as e:
- 		raise
- 	except Exception as e:
- 		writemsg(_("!!! Failed to load pickle: ") + str(e) + "\n", 1)
- 		data = default
- 	return data
+     if not os.access(filename, os.R_OK):
+         writemsg(_("pickle_read(): File not readable. '") + filename + "'\n", 1)
+         return default
+     data = None
+     try:
+         myf = open(
+             _unicode_encode(filename, encoding=_encodings["fs"], errors="strict"), "rb"
+         )
+         mypickle = pickle.Unpickler(myf)
+         data = mypickle.load()
+         myf.close()
+         del mypickle, myf
+         writemsg(_("pickle_read(): Loaded pickle. '") + filename + "'\n", 1)
+     except SystemExit as e:
+         raise
+     except Exception as e:
+         writemsg(_("!!! Failed to load pickle: ") + str(e) + "\n", 1)
+         data = default
+     return data
+ 
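
A minimal sketch of the fallback behaviour of pickle_read() above; the
temporary file and its contents are illustrative only.

    import os
    import pickle
    import tempfile

    from portage.util import pickle_read

    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        pickle.dump({"cached": True}, f)

    print(pickle_read(path, default={}))                # {'cached': True}
    print(pickle_read(path + ".missing", default={}))   # {} -- file not readable
    os.unlink(path)
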
  
  def dump_traceback(msg, noiselevel=1):
- 	info = sys.exc_info()
- 	if not info[2]:
- 		stack = traceback.extract_stack()[:-1]
- 		error = None
- 	else:
- 		stack = traceback.extract_tb(info[2])
- 		error = str(info[1])
- 	writemsg("\n====================================\n", noiselevel=noiselevel)
- 	writemsg("%s\n\n" % msg, noiselevel=noiselevel)
- 	for line in traceback.format_list(stack):
- 		writemsg(line, noiselevel=noiselevel)
- 	if error:
- 		writemsg(error+"\n", noiselevel=noiselevel)
- 	writemsg("====================================\n\n", noiselevel=noiselevel)
+     info = sys.exc_info()
+     if not info[2]:
+         stack = traceback.extract_stack()[:-1]
+         error = None
+     else:
+         stack = traceback.extract_tb(info[2])
+         error = str(info[1])
+     writemsg("\n====================================\n", noiselevel=noiselevel)
+     writemsg("%s\n\n" % msg, noiselevel=noiselevel)
+     for line in traceback.format_list(stack):
+         writemsg(line, noiselevel=noiselevel)
+     if error:
+         writemsg(error + "\n", noiselevel=noiselevel)
+     writemsg("====================================\n\n", noiselevel=noiselevel)
+ 
  
  class cmp_sort_key:
- 	"""
- 	In python-3.0 the list.sort() method no longer has a "cmp" keyword
- 	argument. This class acts as an adapter which converts a cmp function
- 	into one that's suitable for use as the "key" keyword argument to
- 	list.sort(), making it easier to port code for python-3.0 compatibility.
- 	It works by generating key objects which use the given cmp function to
- 	implement their __lt__ method.
- 
- 	Beginning with Python 2.7 and 3.2, equivalent functionality is provided
- 	by functools.cmp_to_key().
- 	"""
- 	__slots__ = ("_cmp_func",)
+     """
+     In python-3.0 the list.sort() method no longer has a "cmp" keyword
+     argument. This class acts as an adapter which converts a cmp function
+     into one that's suitable for use as the "key" keyword argument to
+     list.sort(), making it easier to port code for python-3.0 compatibility.
+     It works by generating key objects which use the given cmp function to
+     implement their __lt__ method.
+ 
+     Beginning with Python 2.7 and 3.2, equivalent functionality is provided
+     by functools.cmp_to_key().
+     """
+ 
+     __slots__ = ("_cmp_func",)
  
- 	def __init__(self, cmp_func):
- 		"""
- 		@type cmp_func: callable which takes 2 positional arguments
- 		@param cmp_func: A cmp function.
- 		"""
- 		self._cmp_func = cmp_func
+     def __init__(self, cmp_func):
+         """
+         @type cmp_func: callable which takes 2 positional arguments
+         @param cmp_func: A cmp function.
+         """
+         self._cmp_func = cmp_func
  
- 	def __call__(self, lhs):
- 		return self._cmp_key(self._cmp_func, lhs)
+     def __call__(self, lhs):
+         return self._cmp_key(self._cmp_func, lhs)
  
- 	class _cmp_key:
- 		__slots__ = ("_cmp_func", "_obj")
+     class _cmp_key:
+         __slots__ = ("_cmp_func", "_obj")
  
- 		def __init__(self, cmp_func, obj):
- 			self._cmp_func = cmp_func
- 			self._obj = obj
+         def __init__(self, cmp_func, obj):
+             self._cmp_func = cmp_func
+             self._obj = obj
+ 
+         def __lt__(self, other):
+             if other.__class__ is not self.__class__:
+                 raise TypeError(
+                     "Expected type %s, got %s" % (self.__class__, other.__class__)
+                 )
+             return self._cmp_func(self._obj, other._obj) < 0
  
- 		def __lt__(self, other):
- 			if other.__class__ is not self.__class__:
- 				raise TypeError("Expected type %s, got %s" % \
- 					(self.__class__, other.__class__))
- 			return self._cmp_func(self._obj, other._obj) < 0
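
A minimal usage sketch for the adapter above; the comparison function is an
invented example of an old-style cmp callable.

    from portage.util import cmp_sort_key

    def by_length_then_name(a, b):
        # Old-style cmp: negative, zero or positive.
        return (len(a) - len(b)) or ((a > b) - (a < b))

    names = ["zlib", "gcc", "binutils"]
    print(sorted(names, key=cmp_sort_key(by_length_then_name)))
    # ['gcc', 'zlib', 'binutils']
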
  
  def unique_array(s):
- 	"""lifted from python cookbook, credit: Tim Peters
- 	Return a list of the elements in s in arbitrary order, sans duplicates"""
- 	n = len(s)
- 	# assume all elements are hashable, if so, it's linear
- 	try:
- 		return list(set(s))
- 	except TypeError:
- 		pass
- 
- 	# so much for linear.  abuse sort.
- 	try:
- 		t = list(s)
- 		t.sort()
- 	except TypeError:
- 		pass
- 	else:
- 		assert n > 0
- 		last = t[0]
- 		lasti = i = 1
- 		while i < n:
- 			if t[i] != last:
- 				t[lasti] = last = t[i]
- 				lasti += 1
- 			i += 1
- 		return t[:lasti]
- 
- 	# blah.	 back to original portage.unique_array
- 	u = []
- 	for x in s:
- 		if x not in u:
- 			u.append(x)
- 	return u
+     """lifted from python cookbook, credit: Tim Peters
+     Return a list of the elements in s in arbitrary order, sans duplicates"""
+     n = len(s)
+     # assume all elements are hashable, if so, it's linear
+     try:
+         return list(set(s))
+     except TypeError:
+         pass
+ 
+     # so much for linear.  abuse sort.
+     try:
+         t = list(s)
+         t.sort()
+     except TypeError:
+         pass
+     else:
+         assert n > 0
+         last = t[0]
+         lasti = i = 1
+         while i < n:
+             if t[i] != last:
+                 t[lasti] = last = t[i]
+                 lasti += 1
+             i += 1
+         return t[:lasti]
+ 
+     # blah.	 back to original portage.unique_array
+     u = []
+     for x in s:
+         if x not in u:
+             u.append(x)
+     return u
+ 
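
A minimal sketch of the three code paths above (set-based, sort-based, and
the quadratic fallback); the inputs are illustrative.

    from portage.util import unique_array

    print(sorted(unique_array([3, 1, 3, 2, 1])))   # hashable -> set() path: [1, 2, 3]
    print(unique_array([[1], [2], [1]]))           # unhashable but sortable: [[1], [2]]
    print(unique_array([[1], {"a": 1}, [1]]))      # neither: order-preserving fallback
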
  
  def unique_everseen(iterable, key=None):
- 	"""
- 	List unique elements, preserving order. Remember all elements ever seen.
- 	Taken from itertools documentation.
- 	"""
- 	# unique_everseen('AAAABBBCCDAABBB') --> A B C D
- 	# unique_everseen('ABBCcAD', str.lower) --> A B C D
- 	seen = set()
- 	seen_add = seen.add
- 	if key is None:
- 		for element in filterfalse(seen.__contains__, iterable):
- 			seen_add(element)
- 			yield element
- 	else:
- 		for element in iterable:
- 			k = key(element)
- 			if k not in seen:
- 				seen_add(k)
- 				yield element
+     """
+     List unique elements, preserving order. Remember all elements ever seen.
+     Taken from itertools documentation.
+     """
+     # unique_everseen('AAAABBBCCDAABBB') --> A B C D
+     # unique_everseen('ABBCcAD', str.lower) --> A B C D
+     seen = set()
+     seen_add = seen.add
+     if key is None:
+         for element in filterfalse(seen.__contains__, iterable):
+             seen_add(element)
+             yield element
+     else:
+         for element in iterable:
+             k = key(element)
+             if k not in seen:
+                 seen_add(k)
+                 yield element
+ 
  
  def _do_stat(filename, follow_links=True):
- 	try:
- 		if follow_links:
- 			return os.stat(filename)
- 		return os.lstat(filename)
- 	except OSError as oe:
- 		func_call = "stat('%s')" % filename
- 		if oe.errno == errno.EPERM:
- 			raise OperationNotPermitted(func_call)
- 		if oe.errno == errno.EACCES:
- 			raise PermissionDenied(func_call)
- 		if oe.errno == errno.ENOENT:
- 			raise FileNotFound(filename)
- 		raise
- 
- def apply_permissions(filename, uid=-1, gid=-1, mode=-1, mask=-1,
- 	stat_cached=None, follow_links=True):
- 	"""Apply user, group, and mode bits to a file if the existing bits do not
- 	already match.  The default behavior is to force an exact match of mode
- 	bits.  When mask=0 is specified, mode bits on the target file are allowed
- 	to be a superset of the mode argument (via logical OR).  When mask>0, the
- 	mode bits that the target file is allowed to have are restricted via
- 	logical XOR.
- 	Returns True if the permissions were modified and False otherwise."""
- 
- 	modified = False
- 
- 	# Since Python 3.4, chown requires int type (no proxies).
- 	uid = int(uid)
- 	gid = int(gid)
- 
- 	if stat_cached is None:
- 		stat_cached = _do_stat(filename, follow_links=follow_links)
- 
- 	if	(uid != -1 and uid != stat_cached.st_uid) or \
- 		(gid != -1 and gid != stat_cached.st_gid):
- 		try:
- 			if follow_links:
- 				os.chown(filename, uid, gid)
- 			else:
- 				portage.data.lchown(filename, uid, gid)
- 			modified = True
- 		except OSError as oe:
- 			func_call = "chown('%s', %i, %i)" % (filename, uid, gid)
- 			if oe.errno == errno.EPERM:
- 				raise OperationNotPermitted(func_call)
- 			elif oe.errno == errno.EACCES:
- 				raise PermissionDenied(func_call)
- 			elif oe.errno == errno.EROFS:
- 				raise ReadOnlyFileSystem(func_call)
- 			elif oe.errno == errno.ENOENT:
- 				raise FileNotFound(filename)
- 			else:
- 				raise
- 
- 	new_mode = -1
- 	st_mode = stat_cached.st_mode & 0o7777 # protect from unwanted bits
- 	if mask >= 0:
- 		if mode == -1:
- 			mode = 0 # Don't add any mode bits when mode is unspecified.
- 		else:
- 			mode = mode & 0o7777
- 		if	(mode & st_mode != mode) or \
- 			((mask ^ st_mode) & st_mode != st_mode):
- 			new_mode = mode | st_mode
- 			new_mode = (mask ^ new_mode) & new_mode
- 	elif mode != -1:
- 		mode = mode & 0o7777 # protect from unwanted bits
- 		if mode != st_mode:
- 			new_mode = mode
- 
- 	# The chown system call may clear S_ISUID and S_ISGID
- 	# bits, so those bits are restored if necessary.
- 	if modified and new_mode == -1 and \
- 		(st_mode & stat.S_ISUID or st_mode & stat.S_ISGID):
- 		if mode == -1:
- 			new_mode = st_mode
- 		else:
- 			mode = mode & 0o7777
- 			if mask >= 0:
- 				new_mode = mode | st_mode
- 				new_mode = (mask ^ new_mode) & new_mode
- 			else:
- 				new_mode = mode
- 			if not (new_mode & stat.S_ISUID or new_mode & stat.S_ISGID):
- 				new_mode = -1
- 
- 	if not follow_links and stat.S_ISLNK(stat_cached.st_mode):
- 		# Mode doesn't matter for symlinks.
- 		new_mode = -1
- 
- 	if new_mode != -1:
- 		try:
- 			os.chmod(filename, new_mode)
- 			modified = True
- 		except OSError as oe:
- 			func_call = "chmod('%s', %s)" % (filename, oct(new_mode))
- 			if oe.errno == errno.EPERM:
- 				raise OperationNotPermitted(func_call)
- 			elif oe.errno == errno.EACCES:
- 				raise PermissionDenied(func_call)
- 			elif oe.errno == errno.EROFS:
- 				raise ReadOnlyFileSystem(func_call)
- 			elif oe.errno == errno.ENOENT:
- 				raise FileNotFound(filename)
- 			raise
- 	return modified
+     try:
+         if follow_links:
+             return os.stat(filename)
+         return os.lstat(filename)
+     except OSError as oe:
+         func_call = "stat('%s')" % filename
+         if oe.errno == errno.EPERM:
+             raise OperationNotPermitted(func_call)
+         if oe.errno == errno.EACCES:
+             raise PermissionDenied(func_call)
+         if oe.errno == errno.ENOENT:
+             raise FileNotFound(filename)
+         raise
+ 
+ 
+ def apply_permissions(
+     filename, uid=-1, gid=-1, mode=-1, mask=-1, stat_cached=None, follow_links=True
+ ):
+     """Apply user, group, and mode bits to a file if the existing bits do not
+     already match.  The default behavior is to force an exact match of mode
+     bits.  When mask=0 is specified, mode bits on the target file are allowed
+     to be a superset of the mode argument (via logical OR).  When mask>0, the
+     mode bits that the target file is allowed to have are restricted via
+     logical XOR.
+     Returns True if the permissions were modified and False otherwise."""
+ 
+     modified = False
+ 
+     # Since Python 3.4, chown requires int type (no proxies).
+     uid = int(uid)
+     gid = int(gid)
+ 
+     if stat_cached is None:
+         stat_cached = _do_stat(filename, follow_links=follow_links)
+ 
+     if (uid != -1 and uid != stat_cached.st_uid) or (
+         gid != -1 and gid != stat_cached.st_gid
+     ):
+         try:
+             if follow_links:
+                 os.chown(filename, uid, gid)
+             else:
+                 portage.data.lchown(filename, uid, gid)
+             modified = True
+         except OSError as oe:
+             func_call = "chown('%s', %i, %i)" % (filename, uid, gid)
+             if oe.errno == errno.EPERM:
+                 raise OperationNotPermitted(func_call)
+             elif oe.errno == errno.EACCES:
+                 raise PermissionDenied(func_call)
+             elif oe.errno == errno.EROFS:
+                 raise ReadOnlyFileSystem(func_call)
+             elif oe.errno == errno.ENOENT:
+                 raise FileNotFound(filename)
+             else:
+                 raise
+ 
+     new_mode = -1
+     st_mode = stat_cached.st_mode & 0o7777  # protect from unwanted bits
+     if mask >= 0:
+         if mode == -1:
+             mode = 0  # Don't add any mode bits when mode is unspecified.
+         else:
+             mode = mode & 0o7777
+         if (mode & st_mode != mode) or ((mask ^ st_mode) & st_mode != st_mode):
+             new_mode = mode | st_mode
+             new_mode = (mask ^ new_mode) & new_mode
+     elif mode != -1:
+         mode = mode & 0o7777  # protect from unwanted bits
+         if mode != st_mode:
+             new_mode = mode
+ 
+     # The chown system call may clear S_ISUID and S_ISGID
+     # bits, so those bits are restored if necessary.
+     if (
+         modified
+         and new_mode == -1
+         and (st_mode & stat.S_ISUID or st_mode & stat.S_ISGID)
+     ):
+         if mode == -1:
+             new_mode = st_mode
+         else:
+             mode = mode & 0o7777
+             if mask >= 0:
+                 new_mode = mode | st_mode
+                 new_mode = (mask ^ new_mode) & new_mode
+             else:
+                 new_mode = mode
+             if not (new_mode & stat.S_ISUID or new_mode & stat.S_ISGID):
+                 new_mode = -1
+ 
+     if not follow_links and stat.S_ISLNK(stat_cached.st_mode):
+         # Mode doesn't matter for symlinks.
+         new_mode = -1
+ 
+     if new_mode != -1:
+         try:
+             os.chmod(filename, new_mode)
+             modified = True
+         except OSError as oe:
+             func_call = "chmod('%s', %s)" % (filename, oct(new_mode))
+             if oe.errno == errno.EPERM:
+                 raise OperationNotPermitted(func_call)
+             elif oe.errno == errno.EACCES:
+                 raise PermissionDenied(func_call)
+             elif oe.errno == errno.EROFS:
+                 raise ReadOnlyFileSystem(func_call)
+             elif oe.errno == errno.ENOENT:
+                 raise FileNotFound(filename)
+             raise
+     return modified
+ 
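
A minimal sketch of the mode/mask arithmetic implemented above, as pure bit
math with illustrative octal values (no chown/chmod is performed).

    st_mode = 0o664      # bits currently on the file (group-writable)
    mode = 0o644         # bits the caller asks for
    mask = 0o022         # mask > 0: these bits may not remain set

    new_mode = mode | st_mode                  # 0o664 -- requested bits OR'd in
    new_mode = (mask ^ new_mode) & new_mode    # 0o644 -- masked bits cleared
    assert new_mode == 0o644

    mask = 0                                   # mask == 0: a superset is allowed
    new_mode = (mask ^ (mode | st_mode)) & (mode | st_mode)
    assert new_mode == 0o664                   # the group write bit survives
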
  
  def apply_stat_permissions(filename, newstat, **kwargs):
- 	"""A wrapper around apply_secpass_permissions that gets
- 	uid, gid, and mode from a stat object"""
- 	return apply_secpass_permissions(filename, uid=newstat.st_uid, gid=newstat.st_gid,
- 	mode=newstat.st_mode, **kwargs)
- 
- def apply_recursive_permissions(top, uid=-1, gid=-1,
- 	dirmode=-1, dirmask=-1, filemode=-1, filemask=-1, onerror=None):
- 	"""A wrapper around apply_secpass_permissions that applies permissions
- 	recursively.  If optional argument onerror is specified, it should be a
- 	function; it will be called with one argument, a PortageException instance.
- 	Returns True if all permissions are applied and False if some are left
- 	unapplied."""
- 
- 	# Avoid issues with circular symbolic links, as in bug #339670.
- 	follow_links = False
- 
- 	if onerror is None:
- 		# Default behavior is to dump errors to stderr so they won't
- 		# go unnoticed.  Callers can pass in a quiet instance.
- 		def onerror(e):
- 			if isinstance(e, OperationNotPermitted):
- 				writemsg(_("Operation Not Permitted: %s\n") % str(e),
- 					noiselevel=-1)
- 			elif isinstance(e, FileNotFound):
- 				writemsg(_("File Not Found: '%s'\n") % str(e), noiselevel=-1)
- 			else:
- 				raise
- 
- 	# For bug 554084, always apply permissions to a directory before
- 	# that directory is traversed.
- 	all_applied = True
- 
- 	try:
- 		stat_cached = _do_stat(top, follow_links=follow_links)
- 	except FileNotFound:
- 		# backward compatibility
- 		return True
- 
- 	if stat.S_ISDIR(stat_cached.st_mode):
- 		mode = dirmode
- 		mask = dirmask
- 	else:
- 		mode = filemode
- 		mask = filemask
- 
- 	try:
- 		applied = apply_secpass_permissions(top,
- 			uid=uid, gid=gid, mode=mode, mask=mask,
- 			stat_cached=stat_cached, follow_links=follow_links)
- 		if not applied:
- 			all_applied = False
- 	except PortageException as e:
- 		all_applied = False
- 		onerror(e)
- 
- 	for dirpath, dirnames, filenames in os.walk(top):
- 		for name, mode, mask in chain(
- 			((x, filemode, filemask) for x in filenames),
- 			((x, dirmode, dirmask) for x in dirnames)):
- 			try:
- 				applied = apply_secpass_permissions(os.path.join(dirpath, name),
- 					uid=uid, gid=gid, mode=mode, mask=mask,
- 					follow_links=follow_links)
- 				if not applied:
- 					all_applied = False
- 			except PortageException as e:
- 				# Ignore InvalidLocation exceptions such as FileNotFound
- 				# and DirectoryNotFound since sometimes things disappear,
- 				# like when adjusting permissions on DISTCC_DIR.
- 				if not isinstance(e, portage.exception.InvalidLocation):
- 					all_applied = False
- 					onerror(e)
- 	return all_applied
- 
- def apply_secpass_permissions(filename, uid=-1, gid=-1, mode=-1, mask=-1,
- 	stat_cached=None, follow_links=True):
- 	"""A wrapper around apply_permissions that uses secpass and simple
- 	logic to apply as much of the permissions as possible without
- 	generating an obviously avoidable permission exception. Despite
- 	attempts to avoid an exception, it's possible that one will be raised
- 	anyway, so be prepared.
- 	Returns True if all permissions are applied and False if some are left
- 	unapplied."""
- 
- 	if stat_cached is None:
- 		stat_cached = _do_stat(filename, follow_links=follow_links)
- 
- 	all_applied = True
- 
- 	# Avoid accessing portage.data.secpass when possible, since
- 	# it triggers config loading (undesirable for chmod-lite).
- 	if (uid != -1 or gid != -1) and portage.data.secpass < 2:
- 
- 		if uid != -1 and \
- 		uid != stat_cached.st_uid:
- 			all_applied = False
- 			uid = -1
- 
- 		if gid != -1 and \
- 		gid != stat_cached.st_gid and \
- 		gid not in os.getgroups():
- 			all_applied = False
- 			gid = -1
- 
- 	apply_permissions(filename, uid=uid, gid=gid, mode=mode, mask=mask,
- 		stat_cached=stat_cached, follow_links=follow_links)
- 	return all_applied
+     """A wrapper around apply_secpass_permissions that gets
+     uid, gid, and mode from a stat object"""
+     return apply_secpass_permissions(
+         filename, uid=newstat.st_uid, gid=newstat.st_gid, mode=newstat.st_mode, **kwargs
+     )
+ 
+ 
+ def apply_recursive_permissions(
+     top, uid=-1, gid=-1, dirmode=-1, dirmask=-1, filemode=-1, filemask=-1, onerror=None
+ ):
+     """A wrapper around apply_secpass_permissions that applies permissions
+     recursively.  If optional argument onerror is specified, it should be a
+     function; it will be called with one argument, a PortageException instance.
+     Returns True if all permissions are applied and False if some are left
+     unapplied."""
+ 
+     # Avoid issues with circular symbolic links, as in bug #339670.
+     follow_links = False
+ 
+     if onerror is None:
+         # Default behavior is to dump errors to stderr so they won't
+         # go unnoticed.  Callers can pass in a quiet instance.
+         def onerror(e):
+             if isinstance(e, OperationNotPermitted):
+                 writemsg(_("Operation Not Permitted: %s\n") % str(e), noiselevel=-1)
+             elif isinstance(e, FileNotFound):
+                 writemsg(_("File Not Found: '%s'\n") % str(e), noiselevel=-1)
+             else:
+                 raise
+ 
+     # For bug 554084, always apply permissions to a directory before
+     # that directory is traversed.
+     all_applied = True
+ 
+     try:
+         stat_cached = _do_stat(top, follow_links=follow_links)
+     except FileNotFound:
+         # backward compatibility
+         return True
+ 
+     if stat.S_ISDIR(stat_cached.st_mode):
+         mode = dirmode
+         mask = dirmask
+     else:
+         mode = filemode
+         mask = filemask
+ 
+     try:
+         applied = apply_secpass_permissions(
+             top,
+             uid=uid,
+             gid=gid,
+             mode=mode,
+             mask=mask,
+             stat_cached=stat_cached,
+             follow_links=follow_links,
+         )
+         if not applied:
+             all_applied = False
+     except PortageException as e:
+         all_applied = False
+         onerror(e)
+ 
+     for dirpath, dirnames, filenames in os.walk(top):
+         for name, mode, mask in chain(
+             ((x, filemode, filemask) for x in filenames),
+             ((x, dirmode, dirmask) for x in dirnames),
+         ):
+             try:
+                 applied = apply_secpass_permissions(
+                     os.path.join(dirpath, name),
+                     uid=uid,
+                     gid=gid,
+                     mode=mode,
+                     mask=mask,
+                     follow_links=follow_links,
+                 )
+                 if not applied:
+                     all_applied = False
+             except PortageException as e:
+                 # Ignore InvalidLocation exceptions such as FileNotFound
+                 # and DirectoryNotFound since sometimes things disappear,
+                 # like when adjusting permissions on DISTCC_DIR.
+                 if not isinstance(e, portage.exception.InvalidLocation):
+                     all_applied = False
+                     onerror(e)
+     return all_applied
+ 
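
A minimal sketch of the recursive wrapper above, applied to a scratch tree;
the paths are illustrative, and ownership is left untouched by keeping the
default uid/gid of -1.

    import os
    import shutil
    import tempfile

    from portage.util import apply_recursive_permissions

    top = tempfile.mkdtemp()
    os.makedirs(os.path.join(top, "sub"))
    with open(os.path.join(top, "sub", "file"), "w"):
        pass

    # Directories get 0o755, regular files 0o644; mask=0 allows supersets.
    ok = apply_recursive_permissions(
        top, dirmode=0o755, dirmask=0, filemode=0o644, filemask=0
    )
    print("all permissions applied:", ok)
    shutil.rmtree(top)
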
+ 
+ def apply_secpass_permissions(
+     filename, uid=-1, gid=-1, mode=-1, mask=-1, stat_cached=None, follow_links=True
+ ):
+     """A wrapper around apply_permissions that uses secpass and simple
+     logic to apply as much of the permissions as possible without
+     generating an obviously avoidable permission exception. Despite
+     attempts to avoid an exception, it's possible that one will be raised
+     anyway, so be prepared.
+     Returns True if all permissions are applied and False if some are left
+     unapplied."""
+ 
+     if stat_cached is None:
+         stat_cached = _do_stat(filename, follow_links=follow_links)
+ 
+     all_applied = True
+ 
+     # Avoid accessing portage.data.secpass when possible, since
+     # it triggers config loading (undesirable for chmod-lite).
+     if (uid != -1 or gid != -1) and portage.data.secpass < 2:
+ 
+         if uid != -1 and uid != stat_cached.st_uid:
+             all_applied = False
+             uid = -1
+ 
+         if gid != -1 and gid != stat_cached.st_gid and gid not in os.getgroups():
+             all_applied = False
+             gid = -1
+ 
+     apply_permissions(
+         filename,
+         uid=uid,
+         gid=gid,
+         mode=mode,
+         mask=mask,
+         stat_cached=stat_cached,
+         follow_links=follow_links,
+     )
+     return all_applied
+ 
  
  class atomic_ofstream(AbstractContextManager, ObjectProxy):
- 	"""Write a file atomically via os.rename().  Atomic replacement prevents
- 	interprocess interference and prevents corruption of the target
- 	file when the write is interrupted (for example, when an 'out of space'
- 	error occurs)."""
- 
- 	def __init__(self, filename, mode='w', follow_links=True, **kargs):
- 		"""Opens a temporary filename.pid in the same directory as filename."""
- 		ObjectProxy.__init__(self)
- 		object.__setattr__(self, '_aborted', False)
- 		if 'b' in mode:
- 			open_func = open
- 		else:
- 			open_func = io.open
- 			kargs.setdefault('encoding', _encodings['content'])
- 			kargs.setdefault('errors', 'backslashreplace')
- 
- 		if follow_links:
- 			canonical_path = os.path.realpath(filename)
- 			object.__setattr__(self, '_real_name', canonical_path)
- 			tmp_name = "%s.%i" % (canonical_path, portage.getpid())
- 			try:
- 				object.__setattr__(self, '_file',
- 					open_func(_unicode_encode(tmp_name,
- 						encoding=_encodings['fs'], errors='strict'),
- 						mode=mode, **kargs))
- 				return
- 			except IOError as e:
- 				if canonical_path == filename:
- 					raise
- 				# Ignore this error, since it's irrelevant
- 				# and the below open call will produce a
- 				# new error if necessary.
- 
- 		object.__setattr__(self, '_real_name', filename)
- 		tmp_name = "%s.%i" % (filename, portage.getpid())
- 		object.__setattr__(self, '_file',
- 			open_func(_unicode_encode(tmp_name,
- 				encoding=_encodings['fs'], errors='strict'),
- 				mode=mode, **kargs))
- 
- 	def __exit__(self, exc_type, exc_val, exc_tb):
- 		if exc_type is not None:
- 			self.abort()
- 		else:
- 			self.close()
- 
- 	def _get_target(self):
- 		return object.__getattribute__(self, '_file')
- 
- 	def __getattribute__(self, attr):
- 		if attr in ('close', 'abort', '__del__'):
- 			return object.__getattribute__(self, attr)
- 		return getattr(object.__getattribute__(self, '_file'), attr)
- 
- 	def close(self):
- 		"""Closes the temporary file, copies permissions (if possible),
- 		and performs the atomic replacement via os.rename().  If the abort()
- 		method has been called, then the temp file is closed and removed."""
- 		f = object.__getattribute__(self, '_file')
- 		real_name = object.__getattribute__(self, '_real_name')
- 		if not f.closed:
- 			try:
- 				f.close()
- 				if not object.__getattribute__(self, '_aborted'):
- 					try:
- 						apply_stat_permissions(f.name, os.stat(real_name))
- 					except OperationNotPermitted:
- 						pass
- 					except FileNotFound:
- 						pass
- 					except OSError as oe: # from the above os.stat call
- 						if oe.errno in (errno.ENOENT, errno.EPERM):
- 							pass
- 						else:
- 							raise
- 					os.rename(f.name, real_name)
- 			finally:
- 				# Make sure we cleanup the temp file
- 				# even if an exception is raised.
- 				try:
- 					os.unlink(f.name)
- 				except OSError as oe:
- 					pass
- 
- 	def abort(self):
- 		"""If an error occurs while writing the file, the user should
- 		call this method in order to leave the target file unchanged.
- 		This will call close() automatically."""
- 		if not object.__getattribute__(self, '_aborted'):
- 			object.__setattr__(self, '_aborted', True)
- 			self.close()
- 
- 	def __del__(self):
- 		"""If the user does not explicitly call close(), it is
- 		assumed that an error has occurred, so we abort()."""
- 		try:
- 			f = object.__getattribute__(self, '_file')
- 		except AttributeError:
- 			pass
- 		else:
- 			if not f.closed:
- 				self.abort()
- 		# ensure destructor from the base class is called
- 		base_destructor = getattr(ObjectProxy, '__del__', None)
- 		if base_destructor is not None:
- 			base_destructor(self)
+     """Write a file atomically via os.rename().  Atomic replacement prevents
+     interprocess interference and prevents corruption of the target
+     file when the write is interrupted (for example, when an 'out of space'
+     error occurs)."""
+ 
+     def __init__(self, filename, mode="w", follow_links=True, **kargs):
+         """Opens a temporary filename.pid in the same directory as filename."""
+         ObjectProxy.__init__(self)
+         object.__setattr__(self, "_aborted", False)
+         if "b" in mode:
+             open_func = open
+         else:
+             open_func = io.open
+             kargs.setdefault("encoding", _encodings["content"])
+             kargs.setdefault("errors", "backslashreplace")
+ 
+         if follow_links:
+             canonical_path = os.path.realpath(filename)
+             object.__setattr__(self, "_real_name", canonical_path)
+             tmp_name = "%s.%i" % (canonical_path, portage.getpid())
+             try:
+                 object.__setattr__(
+                     self,
+                     "_file",
+                     open_func(
+                         _unicode_encode(
+                             tmp_name, encoding=_encodings["fs"], errors="strict"
+                         ),
+                         mode=mode,
+                         **kargs
+                     ),
+                 )
+                 return
+             except IOError as e:
+                 if canonical_path == filename:
+                     raise
+                 # Ignore this error, since it's irrelevant
+                 # and the below open call will produce a
+                 # new error if necessary.
+ 
+         object.__setattr__(self, "_real_name", filename)
+         tmp_name = "%s.%i" % (filename, portage.getpid())
+         object.__setattr__(
+             self,
+             "_file",
+             open_func(
+                 _unicode_encode(tmp_name, encoding=_encodings["fs"], errors="strict"),
+                 mode=mode,
+                 **kargs
+             ),
+         )
+ 
+     def __exit__(self, exc_type, exc_val, exc_tb):
+         if exc_type is not None:
+             self.abort()
+         else:
+             self.close()
+ 
+     def _get_target(self):
+         return object.__getattribute__(self, "_file")
+ 
+     def __getattribute__(self, attr):
+         if attr in ("close", "abort", "__del__"):
+             return object.__getattribute__(self, attr)
+         return getattr(object.__getattribute__(self, "_file"), attr)
+ 
+     def close(self):
+         """Closes the temporary file, copies permissions (if possible),
+         and performs the atomic replacement via os.rename().  If the abort()
+         method has been called, then the temp file is closed and removed."""
+         f = object.__getattribute__(self, "_file")
+         real_name = object.__getattribute__(self, "_real_name")
+         if not f.closed:
+             try:
+                 f.close()
+                 if not object.__getattribute__(self, "_aborted"):
+                     try:
+                         apply_stat_permissions(f.name, os.stat(real_name))
+                     except OperationNotPermitted:
+                         pass
+                     except FileNotFound:
+                         pass
+                     except OSError as oe:  # from the above os.stat call
+                         if oe.errno in (errno.ENOENT, errno.EPERM):
+                             pass
+                         else:
+                             raise
+                     os.rename(f.name, real_name)
+             finally:
+                 # Make sure we clean up the temp file
+                 # even if an exception is raised.
+                 try:
+                     os.unlink(f.name)
+                 except OSError as oe:
+                     pass
+ 
+     def abort(self):
+         """If an error occurs while writing the file, the user should
+         call this method in order to leave the target file unchanged.
+         This will call close() automatically."""
+         if not object.__getattribute__(self, "_aborted"):
+             object.__setattr__(self, "_aborted", True)
+             self.close()
+ 
+     def __del__(self):
+         """If the user does not explicitly call close(), it is
+         assumed that an error has occurred, so we abort()."""
+         try:
+             f = object.__getattribute__(self, "_file")
+         except AttributeError:
+             pass
+         else:
+             if not f.closed:
+                 self.abort()
+         # ensure destructor from the base class is called
+         base_destructor = getattr(ObjectProxy, "__del__", None)
+         if base_destructor is not None:
+             base_destructor(self)
+ 
  
  def write_atomic(file_path, content, **kwargs):
- 	f = None
- 	try:
- 		f = atomic_ofstream(file_path, **kwargs)
- 		f.write(content)
- 		f.close()
- 	except (IOError, OSError) as e:
- 		if f:
- 			f.abort()
- 		func_call = "write_atomic('%s')" % file_path
- 		if e.errno == errno.EPERM:
- 			raise OperationNotPermitted(func_call)
- 		elif e.errno == errno.EACCES:
- 			raise PermissionDenied(func_call)
- 		elif e.errno == errno.EROFS:
- 			raise ReadOnlyFileSystem(func_call)
- 		elif e.errno == errno.ENOENT:
- 			raise FileNotFound(file_path)
- 		else:
- 			raise
+     f = None
+     try:
+         f = atomic_ofstream(file_path, **kwargs)
+         f.write(content)
+         f.close()
+     except (IOError, OSError) as e:
+         if f:
+             f.abort()
+         func_call = "write_atomic('%s')" % file_path
+         if e.errno == errno.EPERM:
+             raise OperationNotPermitted(func_call)
+         elif e.errno == errno.EACCES:
+             raise PermissionDenied(func_call)
+         elif e.errno == errno.EROFS:
+             raise ReadOnlyFileSystem(func_call)
+         elif e.errno == errno.ENOENT:
+             raise FileNotFound(file_path)
+         else:
+             raise
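
A brief usage sketch of the two atomic-write helpers above (not part of this
diff; the target path is illustrative only):

    from portage.util import atomic_ofstream, write_atomic

    # Context-manager form: the temporary file is renamed over the target
    # on success, or removed via abort() if an exception escapes the block.
    with atomic_ofstream("/etc/portage/example.conf") as f:
        f.write("# generated example\n")

    # One-shot convenience wrapper built on the same class.
    write_atomic("/etc/portage/example.conf", "# generated example\n")
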
  
- def ensure_dirs(dir_path, **kwargs):
- 	"""Create a directory and call apply_permissions.
- 	Returns True if a directory is created or the permissions needed to be
- 	modified, and False otherwise.
  
- 	This function's handling of EEXIST errors makes it useful for atomic
- 	directory creation, in which multiple processes may be competing to
- 	create the same directory.
- 	"""
+ def ensure_dirs(dir_path, **kwargs):
+     """Create a directory and call apply_permissions.
+     Returns True if a directory is created or the permissions needed to be
+     modified, and False otherwise.
+ 
+     This function's handling of EEXIST errors makes it useful for atomic
+     directory creation, in which multiple processes may be competing to
+     create the same directory.
+     """
+ 
+     created_dir = False
+ 
+     try:
+         os.makedirs(dir_path)
+         created_dir = True
+     except OSError as oe:
+         func_call = "makedirs('%s')" % dir_path
+         if oe.errno in (errno.EEXIST,):
+             pass
+         else:
+             if os.path.isdir(dir_path):
+                 # NOTE: DragonFly raises EPERM for makedir('/')
+                 # and that is supposed to be ignored here.
+                 # Also, sometimes mkdir raises EISDIR on FreeBSD
+                 # and we want to ignore that too (bug #187518).
+                 pass
+             elif oe.errno == errno.EPERM:
+                 raise OperationNotPermitted(func_call)
+             elif oe.errno == errno.EACCES:
+                 raise PermissionDenied(func_call)
+             elif oe.errno == errno.EROFS:
+                 raise ReadOnlyFileSystem(func_call)
+             else:
+                 raise
+     if kwargs:
+         perms_modified = apply_permissions(dir_path, **kwargs)
+     else:
+         perms_modified = False
+     return created_dir or perms_modified
  
- 	created_dir = False
- 
- 	try:
- 		os.makedirs(dir_path)
- 		created_dir = True
- 	except OSError as oe:
- 		func_call = "makedirs('%s')" % dir_path
- 		if oe.errno in (errno.EEXIST,):
- 			pass
- 		else:
- 			if os.path.isdir(dir_path):
- 				# NOTE: DragonFly raises EPERM for makedir('/')
- 				# and that is supposed to be ignored here.
- 				# Also, sometimes mkdir raises EISDIR on FreeBSD
- 				# and we want to ignore that too (bug #187518).
- 				pass
- 			elif oe.errno == errno.EPERM:
- 				raise OperationNotPermitted(func_call)
- 			elif oe.errno == errno.EACCES:
- 				raise PermissionDenied(func_call)
- 			elif oe.errno == errno.EROFS:
- 				raise ReadOnlyFileSystem(func_call)
- 			else:
- 				raise
- 	if kwargs:
- 		perms_modified = apply_permissions(dir_path, **kwargs)
- 	else:
- 		perms_modified = False
- 	return created_dir or perms_modified
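
As a usage sketch of the race-tolerant directory creation described in the
docstring above (illustrative path and mode, not part of this diff):

    from portage.util import ensure_dirs

    # True when the directory was created or its permissions had to be
    # adjusted; False when everything was already in place.  Competing
    # processes creating the same directory do not trigger EEXIST failures.
    changed = ensure_dirs("/var/tmp/portage/example", mode=0o755)
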
  
  class LazyItemsDict(UserDict):
- 	"""A mapping object that behaves like a standard dict except that it allows
- 	for lazy initialization of values via callable objects.  Lazy items can be
- 	overwritten and deleted just as normal items."""
- 
- 	__slots__ = ('lazy_items',)
- 
- 	def __init__(self, *args, **kwargs):
- 
- 		self.lazy_items = {}
- 		UserDict.__init__(self, *args, **kwargs)
- 
- 	def addLazyItem(self, item_key, value_callable, *pargs, **kwargs):
- 		"""Add a lazy item for the given key.  When the item is requested,
- 		value_callable will be called with *pargs and **kwargs arguments."""
- 		self.lazy_items[item_key] = \
- 			self._LazyItem(value_callable, pargs, kwargs, False)
- 		# make it show up in self.keys(), etc...
- 		UserDict.__setitem__(self, item_key, None)
- 
- 	def addLazySingleton(self, item_key, value_callable, *pargs, **kwargs):
- 		"""This is like addLazyItem except value_callable will only be called
- 		a maximum of 1 time and the result will be cached for future requests."""
- 		self.lazy_items[item_key] = \
- 			self._LazyItem(value_callable, pargs, kwargs, True)
- 		# make it show up in self.keys(), etc...
- 		UserDict.__setitem__(self, item_key, None)
- 
- 	def update(self, *args, **kwargs):
- 		if len(args) > 1:
- 			raise TypeError(
- 				"expected at most 1 positional argument, got " + \
- 				repr(len(args)))
- 		if args:
- 			map_obj = args[0]
- 		else:
- 			map_obj = None
- 		if map_obj is None:
- 			pass
- 		elif isinstance(map_obj, LazyItemsDict):
- 			for k in map_obj:
- 				if k in map_obj.lazy_items:
- 					UserDict.__setitem__(self, k, None)
- 				else:
- 					UserDict.__setitem__(self, k, map_obj[k])
- 			self.lazy_items.update(map_obj.lazy_items)
- 		else:
- 			UserDict.update(self, map_obj)
- 		if kwargs:
- 			UserDict.update(self, kwargs)
- 
- 	def __getitem__(self, item_key):
- 		if item_key in self.lazy_items:
- 			lazy_item = self.lazy_items[item_key]
- 			pargs = lazy_item.pargs
- 			if pargs is None:
- 				pargs = ()
- 			kwargs = lazy_item.kwargs
- 			if kwargs is None:
- 				kwargs = {}
- 			result = lazy_item.func(*pargs, **kwargs)
- 			if lazy_item.singleton:
- 				self[item_key] = result
- 			return result
- 
- 		return UserDict.__getitem__(self, item_key)
- 
- 	def __setitem__(self, item_key, value):
- 		if item_key in self.lazy_items:
- 			del self.lazy_items[item_key]
- 		UserDict.__setitem__(self, item_key, value)
- 
- 	def __delitem__(self, item_key):
- 		if item_key in self.lazy_items:
- 			del self.lazy_items[item_key]
- 		UserDict.__delitem__(self, item_key)
- 
- 	def clear(self):
- 		self.lazy_items.clear()
- 		UserDict.clear(self)
- 
- 	def copy(self):
- 		return self.__copy__()
- 
- 	def __copy__(self):
- 		return self.__class__(self)
- 
- 	def __deepcopy__(self, memo=None):
- 		"""
- 		This forces evaluation of each contained lazy item, and deepcopy of
- 		the result. A TypeError is raised if any contained lazy item is not
- 		a singleton, since it is not necessarily possible for the behavior
- 		of this type of item to be safely preserved.
- 		"""
- 		if memo is None:
- 			memo = {}
- 		result = self.__class__()
- 		memo[id(self)] = result
- 		for k in self:
- 			k_copy = deepcopy(k, memo)
- 			lazy_item = self.lazy_items.get(k)
- 			if lazy_item is not None:
- 				if not lazy_item.singleton:
- 					raise TypeError("LazyItemsDict " + \
- 						"deepcopy is unsafe with lazy items that are " + \
- 						"not singletons: key=%s value=%s" % (k, lazy_item,))
- 			UserDict.__setitem__(result, k_copy, deepcopy(self[k], memo))
- 		return result
- 
- 	class _LazyItem:
- 
- 		__slots__ = ('func', 'pargs', 'kwargs', 'singleton')
- 
- 		def __init__(self, func, pargs, kwargs, singleton):
- 
- 			if not pargs:
- 				pargs = None
- 			if not kwargs:
- 				kwargs = None
- 
- 			self.func = func
- 			self.pargs = pargs
- 			self.kwargs = kwargs
- 			self.singleton = singleton
- 
- 		def __copy__(self):
- 			return self.__class__(self.func, self.pargs,
- 				self.kwargs, self.singleton)
- 
- 		def __deepcopy__(self, memo=None):
- 			"""
- 			Override this since the default implementation can fail silently,
- 			leaving some attributes unset.
- 			"""
- 			if memo is None:
- 				memo = {}
- 			result = self.__copy__()
- 			memo[id(self)] = result
- 			result.func = deepcopy(self.func, memo)
- 			result.pargs = deepcopy(self.pargs, memo)
- 			result.kwargs = deepcopy(self.kwargs, memo)
- 			result.singleton = deepcopy(self.singleton, memo)
- 			return result
+     """A mapping object that behaves like a standard dict except that it allows
+     for lazy initialization of values via callable objects.  Lazy items can be
+     overwritten and deleted just as normal items."""
+ 
+     __slots__ = ("lazy_items",)
+ 
+     def __init__(self, *args, **kwargs):
+ 
+         self.lazy_items = {}
+         UserDict.__init__(self, *args, **kwargs)
+ 
+     def addLazyItem(self, item_key, value_callable, *pargs, **kwargs):
+         """Add a lazy item for the given key.  When the item is requested,
+         value_callable will be called with *pargs and **kwargs arguments."""
+         self.lazy_items[item_key] = self._LazyItem(value_callable, pargs, kwargs, False)
+         # make it show up in self.keys(), etc...
+         UserDict.__setitem__(self, item_key, None)
+ 
+     def addLazySingleton(self, item_key, value_callable, *pargs, **kwargs):
+         """This is like addLazyItem except value_callable will only be called
+         a maximum of 1 time and the result will be cached for future requests."""
+         self.lazy_items[item_key] = self._LazyItem(value_callable, pargs, kwargs, True)
+         # make it show up in self.keys(), etc...
+         UserDict.__setitem__(self, item_key, None)
+ 
+     def update(self, *args, **kwargs):
+         if len(args) > 1:
+             raise TypeError(
+                 "expected at most 1 positional argument, got " + repr(len(args))
+             )
+         if args:
+             map_obj = args[0]
+         else:
+             map_obj = None
+         if map_obj is None:
+             pass
+         elif isinstance(map_obj, LazyItemsDict):
+             for k in map_obj:
+                 if k in map_obj.lazy_items:
+                     UserDict.__setitem__(self, k, None)
+                 else:
+                     UserDict.__setitem__(self, k, map_obj[k])
+             self.lazy_items.update(map_obj.lazy_items)
+         else:
+             UserDict.update(self, map_obj)
+         if kwargs:
+             UserDict.update(self, kwargs)
+ 
+     def __getitem__(self, item_key):
+         if item_key in self.lazy_items:
+             lazy_item = self.lazy_items[item_key]
+             pargs = lazy_item.pargs
+             if pargs is None:
+                 pargs = ()
+             kwargs = lazy_item.kwargs
+             if kwargs is None:
+                 kwargs = {}
+             result = lazy_item.func(*pargs, **kwargs)
+             if lazy_item.singleton:
+                 self[item_key] = result
+             return result
+ 
+         return UserDict.__getitem__(self, item_key)
+ 
+     def __setitem__(self, item_key, value):
+         if item_key in self.lazy_items:
+             del self.lazy_items[item_key]
+         UserDict.__setitem__(self, item_key, value)
+ 
+     def __delitem__(self, item_key):
+         if item_key in self.lazy_items:
+             del self.lazy_items[item_key]
+         UserDict.__delitem__(self, item_key)
+ 
+     def clear(self):
+         self.lazy_items.clear()
+         UserDict.clear(self)
+ 
+     def copy(self):
+         return self.__copy__()
+ 
+     def __copy__(self):
+         return self.__class__(self)
+ 
+     def __deepcopy__(self, memo=None):
+         """
+         This forces evaluation of each contained lazy item, and deepcopy of
+         the result. A TypeError is raised if any contained lazy item is not
+         a singleton, since it is not necessarily possible for the behavior
+         of this type of item to be safely preserved.
+         """
+         if memo is None:
+             memo = {}
+         result = self.__class__()
+         memo[id(self)] = result
+         for k in self:
+             k_copy = deepcopy(k, memo)
+             lazy_item = self.lazy_items.get(k)
+             if lazy_item is not None:
+                 if not lazy_item.singleton:
+                     raise TypeError(
+                         "LazyItemsDict "
+                         + "deepcopy is unsafe with lazy items that are "
+                         + "not singletons: key=%s value=%s"
+                         % (
+                             k,
+                             lazy_item,
+                         )
+                     )
+             UserDict.__setitem__(result, k_copy, deepcopy(self[k], memo))
+         return result
+ 
+     class _LazyItem:
+ 
+         __slots__ = ("func", "pargs", "kwargs", "singleton")
+ 
+         def __init__(self, func, pargs, kwargs, singleton):
+ 
+             if not pargs:
+                 pargs = None
+             if not kwargs:
+                 kwargs = None
+ 
+             self.func = func
+             self.pargs = pargs
+             self.kwargs = kwargs
+             self.singleton = singleton
+ 
+         def __copy__(self):
+             return self.__class__(self.func, self.pargs, self.kwargs, self.singleton)
+ 
+         def __deepcopy__(self, memo=None):
+             """
+             Override this since the default implementation can fail silently,
+             leaving some attributes unset.
+             """
+             if memo is None:
+                 memo = {}
+             result = self.__copy__()
+             memo[id(self)] = result
+             result.func = deepcopy(self.func, memo)
+             result.pargs = deepcopy(self.pargs, memo)
+             result.kwargs = deepcopy(self.kwargs, memo)
+             result.singleton = deepcopy(self.singleton, memo)
+             return result
+ 
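A small sketch of the lazy-initialization behaviour documented above
(hypothetical keys, not part of this diff):

    from portage.util import LazyItemsDict

    d = LazyItemsDict()
    # The callable runs on every lookup of "expensive".
    d.addLazyItem("expensive", lambda: sum(range(1000)))
    # The callable runs at most once; the result is cached afterwards.
    d.addLazySingleton("settings", dict, FEATURES="candy")
    assert d["settings"] is d["settings"]
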
  
  class ConfigProtect:
- 	def __init__(self, myroot, protect_list, mask_list,
- 		case_insensitive=False):
- 		self.myroot = myroot
- 		self.protect_list = protect_list
- 		self.mask_list = mask_list
- 		self.case_insensitive = case_insensitive
- 		self.updateprotect()
- 
- 	def updateprotect(self):
- 		"""Update internal state for isprotected() calls.  Nonexistent paths
- 		are ignored."""
- 
- 		os = _os_merge
- 
- 		self.protect = []
- 		self._dirs = set()
- 		for x in self.protect_list:
- 			ppath = normalize_path(
- 				os.path.join(self.myroot, x.lstrip(os.path.sep)))
- 			# Protect files that don't exist (bug #523684). If the
- 			# parent directory doesn't exist, we can safely skip it.
- 			if os.path.isdir(os.path.dirname(ppath)):
- 				self.protect.append(ppath)
- 			try:
- 				if stat.S_ISDIR(os.stat(ppath).st_mode):
- 					self._dirs.add(ppath)
- 			except OSError:
- 				pass
- 
- 		self.protectmask = []
- 		for x in self.mask_list:
- 			ppath = normalize_path(
- 				os.path.join(self.myroot, x.lstrip(os.path.sep)))
- 			if self.case_insensitive:
- 				ppath = ppath.lower()
- 			try:
- 				"""Use lstat so that anything, even a broken symlink can be
- 				protected."""
- 				if stat.S_ISDIR(os.lstat(ppath).st_mode):
- 					self._dirs.add(ppath)
- 				self.protectmask.append(ppath)
- 				"""Now use stat in case this is a symlink to a directory."""
- 				if stat.S_ISDIR(os.stat(ppath).st_mode):
- 					self._dirs.add(ppath)
- 			except OSError:
- 				# If it doesn't exist, there's no need to mask it.
- 				pass
- 
- 	def isprotected(self, obj):
- 		"""Returns True if obj is protected, False otherwise.  The caller must
- 		ensure that obj is normalized with a single leading slash.  A trailing
- 		slash is optional for directories."""
- 		masked = 0
- 		protected = 0
- 		sep = os.path.sep
- 		if self.case_insensitive:
- 			obj = obj.lower()
- 		for ppath in self.protect:
- 			if len(ppath) > masked and obj.startswith(ppath):
- 				if ppath in self._dirs:
- 					if obj != ppath and not obj.startswith(ppath + sep):
- 						# /etc/foo does not match /etc/foobaz
- 						continue
- 				elif obj != ppath:
- 					# force exact match when CONFIG_PROTECT lists a
- 					# non-directory
- 					continue
- 				protected = len(ppath)
- 				#config file management
- 				for pmpath in self.protectmask:
- 					if len(pmpath) >= protected and obj.startswith(pmpath):
- 						if pmpath in self._dirs:
- 							if obj != pmpath and \
- 								not obj.startswith(pmpath + sep):
- 								# /etc/foo does not match /etc/foobaz
- 								continue
- 						elif obj != pmpath:
- 							# force exact match when CONFIG_PROTECT_MASK lists
- 							# a non-directory
- 							continue
- 						#skip, it's in the mask
- 						masked = len(pmpath)
- 		return protected > masked
+     def __init__(self, myroot, protect_list, mask_list, case_insensitive=False):
+         self.myroot = myroot
+         self.protect_list = protect_list
+         self.mask_list = mask_list
+         self.case_insensitive = case_insensitive
+         self.updateprotect()
+ 
+     def updateprotect(self):
+         """Update internal state for isprotected() calls.  Nonexistent paths
+         are ignored."""
+ 
+         os = _os_merge
+ 
+         self.protect = []
+         self._dirs = set()
+         for x in self.protect_list:
+             ppath = normalize_path(os.path.join(self.myroot, x.lstrip(os.path.sep)))
+             # Protect files that don't exist (bug #523684). If the
+             # parent directory doesn't exist, we can safely skip it.
+             if os.path.isdir(os.path.dirname(ppath)):
+                 self.protect.append(ppath)
+             try:
+                 if stat.S_ISDIR(os.stat(ppath).st_mode):
+                     self._dirs.add(ppath)
+             except OSError:
+                 pass
+ 
+         self.protectmask = []
+         for x in self.mask_list:
+             ppath = normalize_path(os.path.join(self.myroot, x.lstrip(os.path.sep)))
+             if self.case_insensitive:
+                 ppath = ppath.lower()
+             try:
+                 """Use lstat so that anything, even a broken symlink can be
+                 protected."""
+                 if stat.S_ISDIR(os.lstat(ppath).st_mode):
+                     self._dirs.add(ppath)
+                 self.protectmask.append(ppath)
+                 """Now use stat in case this is a symlink to a directory."""
+                 if stat.S_ISDIR(os.stat(ppath).st_mode):
+                     self._dirs.add(ppath)
+             except OSError:
+                 # If it doesn't exist, there's no need to mask it.
+                 pass
+ 
+     def isprotected(self, obj):
+         """Returns True if obj is protected, False otherwise.  The caller must
+         ensure that obj is normalized with a single leading slash.  A trailing
+         slash is optional for directories."""
+         masked = 0
+         protected = 0
+         sep = os.path.sep
+         if self.case_insensitive:
+             obj = obj.lower()
+         for ppath in self.protect:
+             if len(ppath) > masked and obj.startswith(ppath):
+                 if ppath in self._dirs:
+                     if obj != ppath and not obj.startswith(ppath + sep):
+                         # /etc/foo does not match /etc/foobaz
+                         continue
+                 elif obj != ppath:
+                     # force exact match when CONFIG_PROTECT lists a
+                     # non-directory
+                     continue
+                 protected = len(ppath)
+                 # config file management
+                 for pmpath in self.protectmask:
+                     if len(pmpath) >= protected and obj.startswith(pmpath):
+                         if pmpath in self._dirs:
+                             if obj != pmpath and not obj.startswith(pmpath + sep):
+                                 # /etc/foo does not match /etc/foobaz
+                                 continue
+                         elif obj != pmpath:
+                             # force exact match when CONFIG_PROTECT_MASK lists
+                             # a non-directory
+                             continue
+                         # skip, it's in the mask
+                         masked = len(pmpath)
+         return protected > masked
+ 
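A hedged sketch of how isprotected() combines CONFIG_PROTECT and
CONFIG_PROTECT_MASK entries (example paths only, assuming both directories
exist; not part of this diff):

    from portage.util import ConfigProtect

    cp = ConfigProtect("/", ["/etc"], ["/etc/env.d"])
    cp.isprotected("/etc/hosts")            # True: inside a protected directory
    cp.isprotected("/etc/env.d/50example")  # False: masked out again
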
  
  def new_protect_filename(mydest, newmd5=None, force=False):
- 	"""Resolves a config-protect filename for merging, optionally
- 	using the last filename if the md5 matches. If force is True,
- 	then a new filename will be generated even if mydest does not
- 	exist yet.
- 	(dest,md5) ==> 'string'            --- path_to_target_filename
- 	(dest)     ==> ('next', 'highest') --- next_target and most-recent_target
- 	"""
+     """Resolves a config-protect filename for merging, optionally
+     using the last filename if the md5 matches. If force is True,
+     then a new filename will be generated even if mydest does not
+     exist yet.
+     (dest,md5) ==> 'string'            --- path_to_target_filename
+     (dest)     ==> ('next', 'highest') --- next_target and most-recent_target
+     """
+ 
+     # config protection filename format:
+     # ._cfg0000_foo
+     # 0123456789012
+ 
+     os = _os_merge
+ 
+     prot_num = -1
+     last_pfile = ""
+ 
+     if not force and not os.path.exists(mydest):
+         return mydest
+ 
+     real_filename = os.path.basename(mydest)
+     real_dirname = os.path.dirname(mydest)
+     for pfile in os.listdir(real_dirname):
+         if pfile[0:5] != "._cfg":
+             continue
+         if pfile[10:] != real_filename:
+             continue
+         try:
+             new_prot_num = int(pfile[5:9])
+             if new_prot_num > prot_num:
+                 prot_num = new_prot_num
+                 last_pfile = pfile
+         except ValueError:
+             continue
+     prot_num = prot_num + 1
+ 
+     new_pfile = normalize_path(
+         os.path.join(
+             real_dirname, "._cfg" + str(prot_num).zfill(4) + "_" + real_filename
+         )
+     )
+     old_pfile = normalize_path(os.path.join(real_dirname, last_pfile))
+     if last_pfile and newmd5:
+         try:
+             old_pfile_st = os.lstat(old_pfile)
+         except OSError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+         else:
+             if stat.S_ISLNK(old_pfile_st.st_mode):
+                 try:
+                     # Read symlink target as bytes, in case the
+                     # target path has a bad encoding.
+                     pfile_link = os.readlink(
+                         _unicode_encode(
+                             old_pfile, encoding=_encodings["merge"], errors="strict"
+                         )
+                     )
+                 except OSError as e:
+                     if e.errno != errno.ENOENT:
+                         raise
+                 else:
+                     pfile_link = _unicode_decode(
+                         pfile_link, encoding=_encodings["merge"], errors="replace"
+                     )
+                     if pfile_link == newmd5:
+                         return old_pfile
+             else:
+                 try:
+                     last_pfile_md5 = portage.checksum._perform_md5_merge(old_pfile)
+                 except FileNotFound:
+                     # The file suddenly disappeared or it's a
+                     # broken symlink.
+                     pass
+                 else:
+                     if last_pfile_md5 == newmd5:
+                         return old_pfile
+     return new_pfile
  
- 	# config protection filename format:
- 	# ._cfg0000_foo
- 	# 0123456789012
- 
- 	os = _os_merge
- 
- 	prot_num = -1
- 	last_pfile = ""
- 
- 	if not force and \
- 		not os.path.exists(mydest):
- 		return mydest
- 
- 	real_filename = os.path.basename(mydest)
- 	real_dirname  = os.path.dirname(mydest)
- 	for pfile in os.listdir(real_dirname):
- 		if pfile[0:5] != "._cfg":
- 			continue
- 		if pfile[10:] != real_filename:
- 			continue
- 		try:
- 			new_prot_num = int(pfile[5:9])
- 			if new_prot_num > prot_num:
- 				prot_num = new_prot_num
- 				last_pfile = pfile
- 		except ValueError:
- 			continue
- 	prot_num = prot_num + 1
- 
- 	new_pfile = normalize_path(os.path.join(real_dirname,
- 		"._cfg" + str(prot_num).zfill(4) + "_" + real_filename))
- 	old_pfile = normalize_path(os.path.join(real_dirname, last_pfile))
- 	if last_pfile and newmd5:
- 		try:
- 			old_pfile_st = os.lstat(old_pfile)
- 		except OSError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 		else:
- 			if stat.S_ISLNK(old_pfile_st.st_mode):
- 				try:
- 					# Read symlink target as bytes, in case the
- 					# target path has a bad encoding.
- 					pfile_link = os.readlink(_unicode_encode(old_pfile,
- 						encoding=_encodings['merge'], errors='strict'))
- 				except OSError:
- 					if e.errno != errno.ENOENT:
- 						raise
- 				else:
- 					pfile_link = _unicode_decode(pfile_link,
- 						encoding=_encodings['merge'], errors='replace')
- 					if pfile_link == newmd5:
- 						return old_pfile
- 			else:
- 				try:
- 					last_pfile_md5 = \
- 						portage.checksum._perform_md5_merge(old_pfile)
- 				except FileNotFound:
- 					# The file suddenly disappeared or it's a
- 					# broken symlink.
- 					pass
- 				else:
- 					if last_pfile_md5 == newmd5:
- 						return old_pfile
- 	return new_pfile
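
A sketch of the ._cfgXXXX_ naming scheme described above (hypothetical
destination path, not part of this diff):

    from portage.util import new_protect_filename

    # With /etc/hosts present and no ._cfgXXXX_hosts entries yet, this
    # yields "/etc/._cfg0000_hosts".  If newmd5 is given and matches the
    # most recent ._cfgXXXX_hosts file, that existing path is reused.
    pfile = new_protect_filename("/etc/hosts")
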
  
  def find_updated_config_files(target_root, config_protect):
- 	"""
- 	Return a tuple of configuration files that needs to be updated.
- 	The tuple contains lists organized like this:
- 		[protected_dir, file_list]
- 	If the protected config isn't a protected_dir but a procted_file, list is:
- 		[protected_file, None]
- 	If no configuration files needs to be updated, None is returned
- 	"""
+     """
+     Return a tuple of configuration files that need to be updated.
+     The tuple contains lists organized like this:
+             [protected_dir, file_list]
+     If the protected config isn't a protected_dir but a protected_file, the list is:
+             [protected_file, None]
+     If no configuration files need to be updated, None is returned
+     """
+ 
+     encoding = _encodings["fs"]
+ 
+     if config_protect:
+         # directories with some protect files in them
+         for x in config_protect:
+             files = []
+ 
+             x = os.path.join(target_root, x.lstrip(os.path.sep))
+             if not os.access(x, os.W_OK):
+                 continue
+             try:
+                 mymode = os.lstat(x).st_mode
+             except OSError:
+                 continue
+ 
+             if stat.S_ISLNK(mymode):
+                 # We want to treat it like a directory if it
+                 # is a symlink to an existing directory.
+                 try:
+                     real_mode = os.stat(x).st_mode
+                     if stat.S_ISDIR(real_mode):
+                         mymode = real_mode
+                 except OSError:
+                     pass
+ 
+             if stat.S_ISDIR(mymode):
+                 mycommand = (
+                     "find '%s' -name '.*' -type d -prune -o -name '._cfg????_*'" % x
+                 )
+             else:
+                 mycommand = (
+                     "find '%s' -maxdepth 1 -name '._cfg????_%s'"
+                     % os.path.split(x.rstrip(os.path.sep))
+                 )
+             mycommand += " ! -name '.*~' ! -iname '.*.bak' -print0"
+             cmd = shlex_split(mycommand)
+ 
+             cmd = [
+                 _unicode_encode(arg, encoding=encoding, errors="strict") for arg in cmd
+             ]
+             proc = subprocess.Popen(
+                 cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+             )
+             output = _unicode_decode(proc.communicate()[0], encoding=encoding)
+             status = proc.wait()
+             if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
+                 files = output.split("\0")
+                 # split always produces an empty string as the last element
+                 if files and not files[-1]:
+                     del files[-1]
+                 if files:
+                     if stat.S_ISDIR(mymode):
+                         yield (x, files)
+                     else:
+                         yield (x, None)
+ 
+ 
+ _ld_so_include_re = re.compile(r"^include\s+(\S.*)")
  
- 	encoding = _encodings['fs']
- 
- 	if config_protect:
- 		# directories with some protect files in them
- 		for x in config_protect:
- 			files = []
- 
- 			x = os.path.join(target_root, x.lstrip(os.path.sep))
- 			if not os.access(x, os.W_OK):
- 				continue
- 			try:
- 				mymode = os.lstat(x).st_mode
- 			except OSError:
- 				continue
- 
- 			if stat.S_ISLNK(mymode):
- 				# We want to treat it like a directory if it
- 				# is a symlink to an existing directory.
- 				try:
- 					real_mode = os.stat(x).st_mode
- 					if stat.S_ISDIR(real_mode):
- 						mymode = real_mode
- 				except OSError:
- 					pass
- 
- 			if stat.S_ISDIR(mymode):
- 				mycommand = \
- 					"find '%s' -name '.*' -type d -prune -o -name '._cfg????_*'" % x
- 			else:
- 				mycommand = "find '%s' -maxdepth 1 -name '._cfg????_%s'" % \
- 						os.path.split(x.rstrip(os.path.sep))
- 			mycommand += " ! -name '.*~' ! -iname '.*.bak' -print0"
- 			cmd = shlex_split(mycommand)
- 
- 			cmd = [_unicode_encode(arg, encoding=encoding, errors='strict')
- 				for arg in cmd]
- 			proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
- 				stderr=subprocess.STDOUT)
- 			output = _unicode_decode(proc.communicate()[0], encoding=encoding)
- 			status = proc.wait()
- 			if os.WIFEXITED(status) and os.WEXITSTATUS(status) == os.EX_OK:
- 				files = output.split('\0')
- 				# split always produces an empty string as the last element
- 				if files and not files[-1]:
- 					del files[-1]
- 				if files:
- 					if stat.S_ISDIR(mymode):
- 						yield (x, files)
- 					else:
- 						yield (x, None)
- 
- _ld_so_include_re = re.compile(r'^include\s+(\S.*)')
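
A sketch of consuming the generator above (illustrative CONFIG_PROTECT list,
not part of this diff):

    from portage.util import find_updated_config_files

    for path, files in find_updated_config_files("/", ["/etc"]):
        if files is None:
            print("pending update for protected file:", path)
        else:
            print("%d pending update(s) under %s" % (len(files), path))
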
  
  def getlibpaths(root, env=None):
- 	def read_ld_so_conf(path):
- 		for l in grabfile(path):
- 			include_match = _ld_so_include_re.match(l)
- 			if include_match is not None:
- 				subpath = os.path.join(os.path.dirname(path),
- 					include_match.group(1))
- 				for p in glob.glob(subpath):
- 					for r in read_ld_so_conf(p):
- 						yield r
- 			else:
- 				yield l
- 
- 	""" Return a list of paths that are used for library lookups """
- 	if env is None:
- 		env = os.environ
- 
- 	# PREFIX HACK: LD_LIBRARY_PATH isn't portable, and considered
- 	# harmfull, so better not use it.  We don't need any host OS lib
- 	# paths either, so do Prefix case.
- 	if EPREFIX != '':
- 		rval = []
- 		rval.append(EPREFIX + "/usr/lib")
- 		rval.append(EPREFIX + "/lib")
- 		# we don't know the CHOST here, so it's a bit hard to guess
- 		# where GCC's and ld's libs are.  Though, GCC's libs should be
- 		# in lib and usr/lib, binutils' libs rarely used
- 	else:
- 	# the following is based on the information from ld.so(8)
- 		rval = env.get("LD_LIBRARY_PATH", "").split(":")
- 		rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
- 		rval.append("/usr/lib")
- 		rval.append("/lib")
- 
- 	return [normalize_path(x) for x in rval if x]
+     def read_ld_so_conf(path):
+         for l in grabfile(path):
+             include_match = _ld_so_include_re.match(l)
+             if include_match is not None:
+                 subpath = os.path.join(os.path.dirname(path), include_match.group(1))
+                 for p in glob.glob(subpath):
+                     for r in read_ld_so_conf(p):
+                         yield r
+             else:
+                 yield l
+ 
+     """ Return a list of paths that are used for library lookups """
+     if env is None:
+         env = os.environ
 -    # the following is based on the information from ld.so(8)
 -    rval = env.get("LD_LIBRARY_PATH", "").split(":")
 -    rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
 -    rval.append("/usr/lib")
 -    rval.append("/lib")
++    # BEGIN PREFIX LOCAL:
++    # LD_LIBRARY_PATH isn't portable, and is considered harmful, so it
++    # is better not to use it.  We don't need any host OS lib paths
++    # either, so handle the Prefix case here.
++    if EPREFIX != '':
++        rval = []
++        rval.append(EPREFIX + "/usr/lib")
++        rval.append(EPREFIX + "/lib")
++        # we don't know the CHOST here, so it's a bit hard to guess
++        # where GCC's and ld's libs are.  Though, GCC's libs should be
++        # in lib and usr/lib, binutils' libs are rarely used
++    else:
++    # END PREFIX LOCAL
++        # the following is based on the information from ld.so(8)
++        rval = env.get("LD_LIBRARY_PATH", "").split(":")
++        rval.extend(read_ld_so_conf(os.path.join(root, "etc", "ld.so.conf")))
++        rval.append("/usr/lib")
++        rval.append("/lib")
+ 
+     return [normalize_path(x) for x in rval if x]
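
A usage sketch of getlibpaths() (not part of this diff): on a Prefix install
it returns only the EPREFIX library directories, otherwise it follows
LD_LIBRARY_PATH and ld.so.conf as described above.

    from portage.util import getlibpaths

    # e.g. ['/usr/lib', '/lib', ...] on a non-Prefix system
    paths = getlibpaths("/")
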
diff --cc lib/portage/util/_info_files.py
index de44b0fdc,528b273d9..2a8d277b3
--- a/lib/portage/util/_info_files.py
+++ b/lib/portage/util/_info_files.py
@@@ -9,131 -9,132 +9,136 @@@ import subproces
  
  import portage
  from portage import os
 +from portage.const import EPREFIX
  
+ 
  def chk_updated_info_files(root, infodirs, prev_mtimes):
  
- 	if os.path.exists(EPREFIX + "/usr/bin/install-info"):
- 		out = portage.output.EOutput()
- 		regen_infodirs = []
- 		for z in infodirs:
- 			if z == '':
- 				continue
- 			inforoot = portage.util.normalize_path(root + EPREFIX + z)
- 			if os.path.isdir(inforoot) and \
- 				not [x for x in os.listdir(inforoot) \
- 				if x.startswith('.keepinfodir')]:
- 					infomtime = os.stat(inforoot)[stat.ST_MTIME]
- 					if inforoot not in prev_mtimes or \
- 						prev_mtimes[inforoot] != infomtime:
- 							regen_infodirs.append(inforoot)
 -    if os.path.exists("/usr/bin/install-info"):
++    # PREFIX LOCAL
++    if os.path.exists(EPREFIX + "/usr/bin/install-info"):
+         out = portage.output.EOutput()
+         regen_infodirs = []
+         for z in infodirs:
+             if z == "":
+                 continue
 -            inforoot = portage.util.normalize_path(root + z)
++            # PREFIX LOCAL
++            inforoot = portage.util.normalize_path(root + EPREFIX + z)
+             if os.path.isdir(inforoot) and not [
+                 x for x in os.listdir(inforoot) if x.startswith(".keepinfodir")
+             ]:
+                 infomtime = os.stat(inforoot)[stat.ST_MTIME]
+                 if inforoot not in prev_mtimes or prev_mtimes[inforoot] != infomtime:
+                     regen_infodirs.append(inforoot)
  
- 		if not regen_infodirs:
- 			portage.util.writemsg_stdout("\n")
- 			if portage.util.noiselimit >= 0:
- 				out.einfo("GNU info directory index is up-to-date.")
- 		else:
- 			portage.util.writemsg_stdout("\n")
- 			if portage.util.noiselimit >= 0:
- 				out.einfo("Regenerating GNU info directory index...")
+         if not regen_infodirs:
+             portage.util.writemsg_stdout("\n")
+             if portage.util.noiselimit >= 0:
+                 out.einfo("GNU info directory index is up-to-date.")
+         else:
+             portage.util.writemsg_stdout("\n")
+             if portage.util.noiselimit >= 0:
+                 out.einfo("Regenerating GNU info directory index...")
  
- 			dir_extensions = ("", ".gz", ".bz2")
- 			icount = 0
- 			badcount = 0
- 			errmsg = ""
- 			for inforoot in regen_infodirs:
- 				if inforoot == '':
- 					continue
+             dir_extensions = ("", ".gz", ".bz2")
+             icount = 0
+             badcount = 0
+             errmsg = ""
+             for inforoot in regen_infodirs:
+                 if inforoot == "":
+                     continue
  
- 				if not os.path.isdir(inforoot) or \
- 					not os.access(inforoot, os.W_OK):
- 					continue
+                 if not os.path.isdir(inforoot) or not os.access(inforoot, os.W_OK):
+                     continue
  
- 				file_list = os.listdir(inforoot)
- 				file_list.sort()
- 				dir_file = os.path.join(inforoot, "dir")
- 				moved_old_dir = False
- 				processed_count = 0
- 				for x in file_list:
- 					if x.startswith(".") or \
- 						os.path.isdir(os.path.join(inforoot, x)):
- 						continue
- 					if x.startswith("dir"):
- 						skip = False
- 						for ext in dir_extensions:
- 							if x == "dir" + ext or \
- 								x == "dir" + ext + ".old":
- 								skip = True
- 								break
- 						if skip:
- 							continue
- 					if processed_count == 0:
- 						for ext in dir_extensions:
- 							try:
- 								os.rename(dir_file + ext, dir_file + ext + ".old")
- 								moved_old_dir = True
- 							except EnvironmentError as e:
- 								if e.errno != errno.ENOENT:
- 									raise
- 								del e
- 					processed_count += 1
- 					try:
- 						proc = subprocess.Popen(
- 							['%s/usr/bin/install-info' % EPREFIX,
- 							'--dir-file=%s' % os.path.join(inforoot, "dir"),
- 							os.path.join(inforoot, x)],
- 							env=dict(os.environ, LANG="C", LANGUAGE="C"),
- 							stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
- 					except OSError:
- 						myso = None
- 					else:
- 						myso = portage._unicode_decode(
- 							proc.communicate()[0]).rstrip("\n")
- 						proc.wait()
- 					existsstr = "already exists, for file `"
- 					if myso:
- 						if re.search(existsstr, myso):
- 							# Already exists... Don't increment the count for this.
- 							pass
- 						elif myso[:44] == "install-info: warning: no info dir entry in ":
- 							# This info file doesn't contain a DIR-header: install-info produces this
- 							# (harmless) warning (the --quiet switch doesn't seem to work).
- 							# Don't increment the count for this.
- 							pass
- 						else:
- 							badcount += 1
- 							errmsg += myso + "\n"
- 					icount += 1
+                 file_list = os.listdir(inforoot)
+                 file_list.sort()
+                 dir_file = os.path.join(inforoot, "dir")
+                 moved_old_dir = False
+                 processed_count = 0
+                 for x in file_list:
+                     if x.startswith(".") or os.path.isdir(os.path.join(inforoot, x)):
+                         continue
+                     if x.startswith("dir"):
+                         skip = False
+                         for ext in dir_extensions:
+                             if x == "dir" + ext or x == "dir" + ext + ".old":
+                                 skip = True
+                                 break
+                         if skip:
+                             continue
+                     if processed_count == 0:
+                         for ext in dir_extensions:
+                             try:
+                                 os.rename(dir_file + ext, dir_file + ext + ".old")
+                                 moved_old_dir = True
+                             except EnvironmentError as e:
+                                 if e.errno != errno.ENOENT:
+                                     raise
+                                 del e
+                     processed_count += 1
+                     try:
+                         proc = subprocess.Popen(
+                             [
 -                                "/usr/bin/install-info",
++                                # PREFIX LOCAL
++                                "%s/usr/bin/install-info" % EPREFIX,
+                                 "--dir-file=%s" % os.path.join(inforoot, "dir"),
+                                 os.path.join(inforoot, x),
+                             ],
+                             env=dict(os.environ, LANG="C", LANGUAGE="C"),
+                             stdout=subprocess.PIPE,
+                             stderr=subprocess.STDOUT,
+                         )
+                     except OSError:
+                         myso = None
+                     else:
+                         myso = portage._unicode_decode(proc.communicate()[0]).rstrip(
+                             "\n"
+                         )
+                         proc.wait()
+                     existsstr = "already exists, for file `"
+                     if myso:
+                         if re.search(existsstr, myso):
+                             # Already exists... Don't increment the count for this.
+                             pass
+                         elif (
+                             myso[:44] == "install-info: warning: no info dir entry in "
+                         ):
+                             # This info file doesn't contain a DIR-header: install-info produces this
+                             # (harmless) warning (the --quiet switch doesn't seem to work).
+                             # Don't increment the count for this.
+                             pass
+                         else:
+                             badcount += 1
+                             errmsg += myso + "\n"
+                     icount += 1
  
- 				if moved_old_dir and not os.path.exists(dir_file):
- 					# We didn't generate a new dir file, so put the old file
- 					# back where it was originally found.
- 					for ext in dir_extensions:
- 						try:
- 							os.rename(dir_file + ext + ".old", dir_file + ext)
- 						except EnvironmentError as e:
- 							if e.errno != errno.ENOENT:
- 								raise
- 							del e
+                 if moved_old_dir and not os.path.exists(dir_file):
+                     # We didn't generate a new dir file, so put the old file
+                     # back where it was originally found.
+                     for ext in dir_extensions:
+                         try:
+                             os.rename(dir_file + ext + ".old", dir_file + ext)
+                         except EnvironmentError as e:
+                             if e.errno != errno.ENOENT:
+                                 raise
+                             del e
  
- 				# Clean dir.old cruft so that they don't prevent
- 				# unmerge of otherwise empty directories.
- 				for ext in dir_extensions:
- 					try:
- 						os.unlink(dir_file + ext + ".old")
- 					except EnvironmentError as e:
- 						if e.errno != errno.ENOENT:
- 							raise
- 						del e
+                 # Clean dir.old cruft so that they don't prevent
+                 # unmerge of otherwise empty directories.
+                 for ext in dir_extensions:
+                     try:
+                         os.unlink(dir_file + ext + ".old")
+                     except EnvironmentError as e:
+                         if e.errno != errno.ENOENT:
+                             raise
+                         del e
  
- 				#update mtime so we can potentially avoid regenerating.
- 				prev_mtimes[inforoot] = os.stat(inforoot)[stat.ST_MTIME]
+                 # update mtime so we can potentially avoid regenerating.
+                 prev_mtimes[inforoot] = os.stat(inforoot)[stat.ST_MTIME]
  
- 			if badcount:
- 				out.eerror("Processed %d info files; %d errors." % \
- 					(icount, badcount))
- 				portage.util.writemsg_level(errmsg,
- 					level=logging.ERROR, noiselevel=-1)
- 			else:
- 				if icount > 0 and portage.util.noiselimit >= 0:
- 					out.einfo("Processed %d info files." % (icount,))
+             if badcount:
+                 out.eerror("Processed %d info files; %d errors." % (icount, badcount))
+                 portage.util.writemsg_level(errmsg, level=logging.ERROR, noiselevel=-1)
+             else:
+                 if icount > 0 and portage.util.noiselimit >= 0:
+                     out.einfo("Processed %d info files." % (icount,))
diff --cc lib/portage/util/_pty.py
index a92f57543,e58f95e0a..40ab1a8db
--- a/lib/portage/util/_pty.py
+++ b/lib/portage/util/_pty.py
@@@ -9,70 -9,68 +9,68 @@@ from portage import o
  from portage.output import get_term_size, set_term_size
  from portage.util import writemsg
  
 -# Disable the use of openpty on Solaris as it seems Python's openpty
 -# implementation doesn't play nice on Solaris with Portage's
 -# behaviour causing hangs/deadlocks.
 +# Disable the use of openpty on Solaris (and others) as it seems Python's
 +# openpty implementation doesn't play nice with Portage's behaviour,
 +# causing hangs/deadlocks.
  # Additional note for the future: on Interix, pipes do NOT work, so
  # _disable_openpty on Interix must *never* be True
 -_disable_openpty = platform.system() in ("SunOS",)
 +_disable_openpty = platform.system() in ("AIX","FreeMiNT","HP-UX","SunOS",)
  
- _fbsd_test_pty = platform.system() == 'FreeBSD'
+ _fbsd_test_pty = platform.system() == "FreeBSD"
+ 
  
  def _create_pty_or_pipe(copy_term_size=None):
- 	"""
- 	Try to create a pty and if then fails then create a normal
- 	pipe instead.
+     """
+     Try to create a pty and, if that fails, create a normal
+     pipe instead.
  
- 	@param copy_term_size: If a tty file descriptor is given
- 		then the term size will be copied to the pty.
- 	@type copy_term_size: int
- 	@rtype: tuple
- 	@return: A tuple of (is_pty, master_fd, slave_fd) where
- 		is_pty is True if a pty was successfully allocated, and
- 		False if a normal pipe was allocated.
- 	"""
+     @param copy_term_size: If a tty file descriptor is given
+             then the term size will be copied to the pty.
+     @type copy_term_size: int
+     @rtype: tuple
+     @return: A tuple of (is_pty, master_fd, slave_fd) where
+             is_pty is True if a pty was successfully allocated, and
+             False if a normal pipe was allocated.
+     """
  
- 	got_pty = False
+     got_pty = False
  
- 	global _disable_openpty, _fbsd_test_pty
+     global _disable_openpty, _fbsd_test_pty
  
- 	if _fbsd_test_pty and not _disable_openpty:
- 		# Test for python openpty breakage after freebsd7 to freebsd8
- 		# upgrade, which results in a 'Function not implemented' error
- 		# and the process being killed.
- 		pid = os.fork()
- 		if pid == 0:
- 			pty.openpty()
- 			os._exit(os.EX_OK)
- 		pid, status = os.waitpid(pid, 0)
- 		if (status & 0xff) == 140:
- 			_disable_openpty = True
- 		_fbsd_test_pty = False
+     if _fbsd_test_pty and not _disable_openpty:
+         # Test for python openpty breakage after freebsd7 to freebsd8
+         # upgrade, which results in a 'Function not implemented' error
+         # and the process being killed.
+         pid = os.fork()
+         if pid == 0:
+             pty.openpty()
+             os._exit(os.EX_OK)
+         pid, status = os.waitpid(pid, 0)
+         if (status & 0xFF) == 140:
+             _disable_openpty = True
+         _fbsd_test_pty = False
  
- 	if _disable_openpty:
- 		master_fd, slave_fd = os.pipe()
- 	else:
- 		try:
- 			master_fd, slave_fd = pty.openpty()
- 			got_pty = True
- 		except EnvironmentError as e:
- 			_disable_openpty = True
- 			writemsg("openpty failed: '%s'\n" % str(e),
- 				noiselevel=-1)
- 			del e
- 			master_fd, slave_fd = os.pipe()
+     if _disable_openpty:
+         master_fd, slave_fd = os.pipe()
+     else:
+         try:
+             master_fd, slave_fd = pty.openpty()
+             got_pty = True
+         except EnvironmentError as e:
+             _disable_openpty = True
+             writemsg("openpty failed: '%s'\n" % str(e), noiselevel=-1)
+             del e
+             master_fd, slave_fd = os.pipe()
  
- 	if got_pty:
- 		# Disable post-processing of output since otherwise weird
- 		# things like \n -> \r\n transformations may occur.
- 		mode = termios.tcgetattr(slave_fd)
- 		mode[1] &= ~termios.OPOST
- 		termios.tcsetattr(slave_fd, termios.TCSANOW, mode)
+     if got_pty:
+         # Disable post-processing of output since otherwise weird
+         # things like \n -> \r\n transformations may occur.
+         mode = termios.tcgetattr(slave_fd)
+         mode[1] &= ~termios.OPOST
+         termios.tcsetattr(slave_fd, termios.TCSANOW, mode)
  
- 	if got_pty and \
- 		copy_term_size is not None and \
- 		os.isatty(copy_term_size):
- 		rows, columns = get_term_size()
- 		set_term_size(rows, columns, slave_fd)
+     if got_pty and copy_term_size is not None and os.isatty(copy_term_size):
+         rows, columns = get_term_size()
+         set_term_size(rows, columns, slave_fd)
  
- 	return (got_pty, master_fd, slave_fd)
+     return (got_pty, master_fd, slave_fd)
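
A sketch of the return convention documented above (the helper is private to
portage.util._pty; not part of this diff):

    from portage.util._pty import _create_pty_or_pipe

    got_pty, master_fd, slave_fd = _create_pty_or_pipe()
    # got_pty is False when a plain pipe had to be used instead of a pty.
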
diff --cc lib/portage/util/env_update.py
index 31aacc292,bb0ebf84c..b7f27dfb7
--- a/lib/portage/util/env_update.py
+++ b/lib/portage/util/env_update.py
@@@ -23,374 -28,419 +28,422 @@@ from portage.dbapi.vartree import vartr
  from portage.package.ebuild.config import config
  
  
- def env_update(makelinks=1, target_root=None, prev_mtimes=None, contents=None,
- 	env=None, writemsg_level=None, vardbapi=None):
- 	"""
- 	Parse /etc/env.d and use it to generate /etc/profile.env, csh.env,
- 	ld.so.conf, and prelink.conf. Finally, run ldconfig. When ldconfig is
- 	called, its -X option will be used in order to avoid potential
- 	interference with installed soname symlinks that are required for
- 	correct operation of FEATURES=preserve-libs for downgrade operations.
- 	It's not necessary for ldconfig to create soname symlinks, since
- 	portage will use NEEDED.ELF.2 data to automatically create them
- 	after src_install if they happen to be missing.
- 	@param makelinks: True if ldconfig should be called, False otherwise
- 	@param target_root: root that is passed to the ldconfig -r option,
- 		defaults to portage.settings["ROOT"].
- 	@type target_root: String (Path)
- 	"""
- 	if vardbapi is None:
- 		if isinstance(env, config):
- 			vardbapi = vartree(settings=env).dbapi
- 		else:
- 			if target_root is None:
- 				eprefix = portage.settings["EPREFIX"]
- 				target_root = portage.settings["ROOT"]
- 				target_eroot = portage.settings['EROOT']
- 			else:
- 				eprefix = portage.const.EPREFIX
- 				target_eroot = os.path.join(target_root,
- 					eprefix.lstrip(os.sep))
- 				target_eroot = target_eroot.rstrip(os.sep) + os.sep
- 			if hasattr(portage, "db") and target_eroot in portage.db:
- 				vardbapi = portage.db[target_eroot]["vartree"].dbapi
- 			else:
- 				settings = config(config_root=target_root,
- 					target_root=target_root, eprefix=eprefix)
- 				target_root = settings["ROOT"]
- 				if env is None:
- 					env = settings
- 				vardbapi = vartree(settings=settings).dbapi
- 
- 	# Lock the config memory file to prevent symlink creation
- 	# in merge_contents from overlapping with env-update.
- 	vardbapi._fs_lock()
- 	try:
- 		return _env_update(makelinks, target_root, prev_mtimes, contents,
- 			env, writemsg_level)
- 	finally:
- 		vardbapi._fs_unlock()
- 
- def _env_update(makelinks, target_root, prev_mtimes, contents, env,
- 	writemsg_level):
- 	if writemsg_level is None:
- 		writemsg_level = portage.util.writemsg_level
- 	if target_root is None:
- 		target_root = portage.settings["ROOT"]
- 	if prev_mtimes is None:
- 		prev_mtimes = portage.mtimedb["ldpath"]
- 	if env is None:
- 		settings = portage.settings
- 	else:
- 		settings = env
- 
- 	eprefix = settings.get("EPREFIX", portage.const.EPREFIX)
- 	eprefix_lstrip = eprefix.lstrip(os.sep)
- 	eroot = normalize_path(os.path.join(target_root, eprefix_lstrip)).rstrip(os.sep) + os.sep
- 	envd_dir = os.path.join(eroot, "etc", "env.d")
- 	ensure_dirs(envd_dir, mode=0o755)
- 	fns = listdir(envd_dir, EmptyOnError=1)
- 	fns.sort()
- 	templist = []
- 	for x in fns:
- 		if len(x) < 3:
- 			continue
- 		if not x[0].isdigit() or not x[1].isdigit():
- 			continue
- 		if x.startswith(".") or x.endswith("~") or x.endswith(".bak"):
- 			continue
- 		templist.append(x)
- 	fns = templist
- 	del templist
- 
- 	space_separated = set(["CONFIG_PROTECT", "CONFIG_PROTECT_MASK"])
- 	colon_separated = set(["ADA_INCLUDE_PATH", "ADA_OBJECTS_PATH",
- 		"CLASSPATH", "INFODIR", "INFOPATH", "KDEDIRS", "LDPATH", "MANPATH",
- 		  "PATH", "PKG_CONFIG_PATH", "PRELINK_PATH", "PRELINK_PATH_MASK",
- 		  "PYTHONPATH", "ROOTPATH"])
- 
- 	config_list = []
- 
- 	for x in fns:
- 		file_path = os.path.join(envd_dir, x)
- 		try:
- 			myconfig = getconfig(file_path, expand=False)
- 		except ParseError as e:
- 			writemsg("!!! '%s'\n" % str(e), noiselevel=-1)
- 			del e
- 			continue
- 		if myconfig is None:
- 			# broken symlink or file removed by a concurrent process
- 			writemsg("!!! File Not Found: '%s'\n" % file_path, noiselevel=-1)
- 			continue
- 
- 		config_list.append(myconfig)
- 		if "SPACE_SEPARATED" in myconfig:
- 			space_separated.update(myconfig["SPACE_SEPARATED"].split())
- 			del myconfig["SPACE_SEPARATED"]
- 		if "COLON_SEPARATED" in myconfig:
- 			colon_separated.update(myconfig["COLON_SEPARATED"].split())
- 			del myconfig["COLON_SEPARATED"]
- 
- 	env = {}
- 	specials = {}
- 	for var in space_separated:
- 		mylist = []
- 		for myconfig in config_list:
- 			if var in myconfig:
- 				for item in myconfig[var].split():
- 					if item and not item in mylist:
- 						mylist.append(item)
- 				del myconfig[var] # prepare for env.update(myconfig)
- 		if mylist:
- 			env[var] = " ".join(mylist)
- 		specials[var] = mylist
- 
- 	for var in colon_separated:
- 		mylist = []
- 		for myconfig in config_list:
- 			if var in myconfig:
- 				for item in myconfig[var].split(":"):
- 					if item and not item in mylist:
- 						mylist.append(item)
- 				del myconfig[var] # prepare for env.update(myconfig)
- 		if mylist:
- 			env[var] = ":".join(mylist)
- 		specials[var] = mylist
- 
- 	for myconfig in config_list:
- 		"""Cumulative variables have already been deleted from myconfig so that
- 		they won't be overwritten by this dict.update call."""
- 		env.update(myconfig)
- 
- 	ldsoconf_path = os.path.join(eroot, "etc", "ld.so.conf")
- 	try:
- 		myld = io.open(_unicode_encode(ldsoconf_path,
- 			encoding=_encodings['fs'], errors='strict'),
- 			mode='r', encoding=_encodings['content'], errors='replace')
- 		myldlines = myld.readlines()
- 		myld.close()
- 		oldld = []
- 		for x in myldlines:
- 			#each line has at least one char (a newline)
- 			if x[:1] == "#":
- 				continue
- 			oldld.append(x[:-1])
- 	except (IOError, OSError) as e:
- 		if e.errno != errno.ENOENT:
- 			raise
- 		oldld = None
- 
- 	newld = specials["LDPATH"]
- 	if oldld != newld:
- 		#ld.so.conf needs updating and ldconfig needs to be run
- 		myfd = atomic_ofstream(ldsoconf_path)
- 		myfd.write("# ld.so.conf autogenerated by env-update; make all changes to\n")
- 		myfd.write("# contents of /etc/env.d directory\n")
- 		for x in specials["LDPATH"]:
- 			myfd.write(x + "\n")
- 		myfd.close()
- 
- 	potential_lib_dirs = set()
- 	for lib_dir_glob in ('usr/lib*', 'lib*'):
- 		x = os.path.join(eroot, lib_dir_glob)
- 		for y in glob.glob(_unicode_encode(x,
- 			encoding=_encodings['fs'], errors='strict')):
- 			try:
- 				y = _unicode_decode(y,
- 					encoding=_encodings['fs'], errors='strict')
- 			except UnicodeDecodeError:
- 				continue
- 			if os.path.basename(y) != 'libexec':
- 				potential_lib_dirs.add(y[len(eroot):])
- 
- 	# Update prelink.conf if we are prelink-enabled
- 	if prelink_capable:
- 		prelink_d = os.path.join(eroot, 'etc', 'prelink.conf.d')
- 		ensure_dirs(prelink_d)
- 		newprelink = atomic_ofstream(os.path.join(prelink_d, 'portage.conf'))
- 		newprelink.write("# prelink.conf autogenerated by env-update; make all changes to\n")
- 		newprelink.write("# contents of /etc/env.d directory\n")
- 
- 		for x in sorted(potential_lib_dirs) + ['bin', 'sbin']:
- 			newprelink.write('-l /%s\n' % (x,))
- 		prelink_paths = set()
- 		prelink_paths |= set(specials.get('LDPATH', []))
- 		prelink_paths |= set(specials.get('PATH', []))
- 		prelink_paths |= set(specials.get('PRELINK_PATH', []))
- 		prelink_path_mask = specials.get('PRELINK_PATH_MASK', [])
- 		for x in prelink_paths:
- 			if not x:
- 				continue
- 			if x[-1:] != '/':
- 				x += "/"
- 			plmasked = 0
- 			for y in prelink_path_mask:
- 				if not y:
- 					continue
- 				if y[-1] != '/':
- 					y += "/"
- 				if y == x[0:len(y)]:
- 					plmasked = 1
- 					break
- 			if not plmasked:
- 				newprelink.write("-h %s\n" % (x,))
- 		for x in prelink_path_mask:
- 			newprelink.write("-b %s\n" % (x,))
- 		newprelink.close()
- 
- 		# Migration code path.  If /etc/prelink.conf was generated by us, then
- 		# point it to the new stuff until the prelink package re-installs.
- 		prelink_conf = os.path.join(eroot, 'etc', 'prelink.conf')
- 		try:
- 			with open(_unicode_encode(prelink_conf,
- 				encoding=_encodings['fs'], errors='strict'), 'rb') as f:
- 				if f.readline() == b'# prelink.conf autogenerated by env-update; make all changes to\n':
- 					f = atomic_ofstream(prelink_conf)
- 					f.write('-c /etc/prelink.conf.d/*.conf\n')
- 					f.close()
- 		except IOError as e:
- 			if e.errno != errno.ENOENT:
- 				raise
- 
- 	current_time = int(time.time())
- 	mtime_changed = False
- 
- 	lib_dirs = set()
- 	for lib_dir in set(specials['LDPATH']) | potential_lib_dirs:
- 		x = os.path.join(eroot, lib_dir.lstrip(os.sep))
- 		try:
- 			newldpathtime = os.stat(x)[stat.ST_MTIME]
- 			lib_dirs.add(normalize_path(x))
- 		except OSError as oe:
- 			if oe.errno == errno.ENOENT:
- 				try:
- 					del prev_mtimes[x]
- 				except KeyError:
- 					pass
- 				# ignore this path because it doesn't exist
- 				continue
- 			raise
- 		if newldpathtime == current_time:
- 			# Reset mtime to avoid the potential ambiguity of times that
- 			# differ by less than 1 second.
- 			newldpathtime -= 1
- 			os.utime(x, (newldpathtime, newldpathtime))
- 			prev_mtimes[x] = newldpathtime
- 			mtime_changed = True
- 		elif x in prev_mtimes:
- 			if prev_mtimes[x] == newldpathtime:
- 				pass
- 			else:
- 				prev_mtimes[x] = newldpathtime
- 				mtime_changed = True
- 		else:
- 			prev_mtimes[x] = newldpathtime
- 			mtime_changed = True
- 
- 	if makelinks and \
- 		not mtime_changed and \
- 		contents is not None:
- 		libdir_contents_changed = False
- 		for mypath, mydata in contents.items():
- 			if mydata[0] not in ("obj", "sym"):
- 				continue
- 			head, tail = os.path.split(mypath)
- 			if head in lib_dirs:
- 				libdir_contents_changed = True
- 				break
- 		if not libdir_contents_changed:
- 			makelinks = False
- 
- 	if "CHOST" in settings and "CBUILD" in settings and \
- 		settings["CHOST"] != settings["CBUILD"]:
- 		ldconfig = find_binary("%s-ldconfig" % settings["CHOST"])
- 	else:
- 		ldconfig = os.path.join(eroot, "sbin", "ldconfig")
- 
- 	if ldconfig is None:
- 		pass
- 	elif not (os.access(ldconfig, os.X_OK) and os.path.isfile(ldconfig)):
- 		ldconfig = None
- 
- 	# Only run ldconfig as needed
- 	if makelinks and ldconfig:
- 		# ldconfig has very different behaviour between FreeBSD and Linux
- 		if ostype == "Linux" or ostype.lower().endswith("gnu"):
- 			# We can't update links if we haven't cleaned other versions first, as
- 			# an older package installed ON TOP of a newer version will cause ldconfig
- 			# to overwrite the symlinks we just made. -X means no links. After 'clean'
- 			# we can safely create links.
- 			writemsg_level(_(">>> Regenerating %setc/ld.so.cache...\n") % \
- 				(target_root,))
- 			os.system("cd / ; %s -X -r '%s'" % (ldconfig, target_root))
- 		elif ostype in ("FreeBSD", "DragonFly"):
- 			writemsg_level(_(">>> Regenerating %svar/run/ld-elf.so.hints...\n") % \
- 				target_root)
- 			os.system(("cd / ; %s -elf -i " + \
- 				"-f '%svar/run/ld-elf.so.hints' '%setc/ld.so.conf'") % \
- 				(ldconfig, target_root, target_root))
- 
- 	del specials["LDPATH"]
- 
- 	notice      = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
- 	notice     += "# DO NOT EDIT THIS FILE."
- 	penvnotice  = notice + " CHANGES TO STARTUP PROFILES\n"
- 	cenvnotice  = penvnotice[:]
- 	penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT /etc/profile.env\n\n"
- 	cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT /etc/csh.env\n\n"
- 
- 	#create /etc/profile.env for bash support
- 	profile_env_path = os.path.join(eroot, "etc", "profile.env")
- 	with atomic_ofstream(profile_env_path) as outfile:
- 		outfile.write(penvnotice)
- 
- 		env_keys = [x for x in env if x != "LDPATH"]
- 		env_keys.sort()
- 		for k in env_keys:
- 			v = env[k]
- 			if v.startswith('$') and not v.startswith('${'):
- 				outfile.write("export %s=$'%s'\n" % (k, v[1:]))
- 			else:
- 				outfile.write("export %s='%s'\n" % (k, v))
- 
- 	# Create the systemd user environment configuration file
- 	# /etc/environment.d/10-gentoo-env.conf with the
- 	# environment configuration from /etc/env.d.
- 	systemd_environment_dir = os.path.join(eroot, "etc", "environment.d")
- 	os.makedirs(systemd_environment_dir, exist_ok=True)
- 
- 	systemd_gentoo_env_path = os.path.join(systemd_environment_dir,
- 						"10-gentoo-env.conf")
- 	with atomic_ofstream(systemd_gentoo_env_path) as systemd_gentoo_env:
- 		senvnotice = notice + "\n\n"
- 		systemd_gentoo_env.write(senvnotice)
- 
- 		for env_key in env_keys:
- 			# Skip PATH since this makes it impossible to use
- 			# "systemctl --user import-environment PATH".
- 			if env_key == 'PATH':
- 				continue
- 
- 			env_key_value = env[env_key]
- 
- 			# Skip variables with the empty string
- 			# as value. Those sometimes appear in
- 			# profile.env (e.g. "export GCC_SPECS=''"),
- 			# but are invalid in systemd's syntax.
- 			if not env_key_value:
- 				continue
- 
- 			# Transform into systemd environment.d
- 			# conf syntax, basically shell variable
- 			# assignment (without "export ").
- 			line = f"{env_key}={env_key_value}\n"
- 
- 			systemd_gentoo_env.write(line)
- 
- 	#create /etc/csh.env for (t)csh support
- 	outfile = atomic_ofstream(os.path.join(eroot, "etc", "csh.env"))
- 	outfile.write(cenvnotice)
- 	for x in env_keys:
- 		outfile.write("setenv %s '%s'\n" % (x, env[x]))
- 	outfile.close()
+ def env_update(
+     makelinks=1,
+     target_root=None,
+     prev_mtimes=None,
+     contents=None,
+     env=None,
+     writemsg_level=None,
+     vardbapi=None,
+ ):
+     """
+     Parse /etc/env.d and use it to generate /etc/profile.env, csh.env,
+     ld.so.conf, and prelink.conf. Finally, run ldconfig. When ldconfig is
+     called, its -X option will be used in order to avoid potential
+     interference with installed soname symlinks that are required for
+     correct operation of FEATURES=preserve-libs for downgrade operations.
+     It's not necessary for ldconfig to create soname symlinks, since
+     portage will use NEEDED.ELF.2 data to automatically create them
+     after src_install if they happen to be missing.
+     @param makelinks: True if ldconfig should be called, False otherwise
+     @param target_root: root that is passed to the ldconfig -r option,
+             defaults to portage.settings["ROOT"].
+     @type target_root: String (Path)
+     """
+     if vardbapi is None:
+         if isinstance(env, config):
+             vardbapi = vartree(settings=env).dbapi
+         else:
+             if target_root is None:
+                 eprefix = portage.settings["EPREFIX"]
+                 target_root = portage.settings["ROOT"]
+                 target_eroot = portage.settings["EROOT"]
+             else:
+                 eprefix = portage.const.EPREFIX
+                 target_eroot = os.path.join(target_root, eprefix.lstrip(os.sep))
+                 target_eroot = target_eroot.rstrip(os.sep) + os.sep
+             if hasattr(portage, "db") and target_eroot in portage.db:
+                 vardbapi = portage.db[target_eroot]["vartree"].dbapi
+             else:
+                 settings = config(
+                     config_root=target_root, target_root=target_root, eprefix=eprefix
+                 )
+                 target_root = settings["ROOT"]
+                 if env is None:
+                     env = settings
+                 vardbapi = vartree(settings=settings).dbapi
+ 
+     # Lock the config memory file to prevent symlink creation
+     # in merge_contents from overlapping with env-update.
+     vardbapi._fs_lock()
+     try:
+         return _env_update(
+             makelinks, target_root, prev_mtimes, contents, env, writemsg_level
+         )
+     finally:
+         vardbapi._fs_unlock()
+ 
+ 
+ def _env_update(makelinks, target_root, prev_mtimes, contents, env, writemsg_level):
+     if writemsg_level is None:
+         writemsg_level = portage.util.writemsg_level
+     if target_root is None:
+         target_root = portage.settings["ROOT"]
+     if prev_mtimes is None:
+         prev_mtimes = portage.mtimedb["ldpath"]
+     if env is None:
+         settings = portage.settings
+     else:
+         settings = env
+ 
 -    eprefix = settings.get("EPREFIX", "")
++    # PREFIX LOCAL
++    eprefix = settings.get("EPREFIX", portage.const.EPREFIX)
+     eprefix_lstrip = eprefix.lstrip(os.sep)
+     eroot = (
+         normalize_path(os.path.join(target_root, eprefix_lstrip)).rstrip(os.sep)
+         + os.sep
+     )
+     envd_dir = os.path.join(eroot, "etc", "env.d")
+     ensure_dirs(envd_dir, mode=0o755)
+     fns = listdir(envd_dir, EmptyOnError=1)
+     fns.sort()
+     templist = []
+     for x in fns:
+         if len(x) < 3:
+             continue
+         if not x[0].isdigit() or not x[1].isdigit():
+             continue
+         if x.startswith(".") or x.endswith("~") or x.endswith(".bak"):
+             continue
+         templist.append(x)
+     fns = templist
+     del templist
+ 
+     space_separated = set(["CONFIG_PROTECT", "CONFIG_PROTECT_MASK"])
+     colon_separated = set(
+         [
+             "ADA_INCLUDE_PATH",
+             "ADA_OBJECTS_PATH",
+             "CLASSPATH",
+             "INFODIR",
+             "INFOPATH",
+             "KDEDIRS",
+             "LDPATH",
+             "MANPATH",
+             "PATH",
+             "PKG_CONFIG_PATH",
+             "PRELINK_PATH",
+             "PRELINK_PATH_MASK",
+             "PYTHONPATH",
+             "ROOTPATH",
+         ]
+     )
+ 
+     config_list = []
+ 
+     for x in fns:
+         file_path = os.path.join(envd_dir, x)
+         try:
+             myconfig = getconfig(file_path, expand=False)
+         except ParseError as e:
+             writemsg("!!! '%s'\n" % str(e), noiselevel=-1)
+             del e
+             continue
+         if myconfig is None:
+             # broken symlink or file removed by a concurrent process
+             writemsg("!!! File Not Found: '%s'\n" % file_path, noiselevel=-1)
+             continue
+ 
+         config_list.append(myconfig)
+         if "SPACE_SEPARATED" in myconfig:
+             space_separated.update(myconfig["SPACE_SEPARATED"].split())
+             del myconfig["SPACE_SEPARATED"]
+         if "COLON_SEPARATED" in myconfig:
+             colon_separated.update(myconfig["COLON_SEPARATED"].split())
+             del myconfig["COLON_SEPARATED"]
+ 
+     env = {}
+     specials = {}
+     for var in space_separated:
+         mylist = []
+         for myconfig in config_list:
+             if var in myconfig:
+                 for item in myconfig[var].split():
+                     if item and not item in mylist:
+                         mylist.append(item)
+                 del myconfig[var]  # prepare for env.update(myconfig)
+         if mylist:
+             env[var] = " ".join(mylist)
+         specials[var] = mylist
+ 
+     for var in colon_separated:
+         mylist = []
+         for myconfig in config_list:
+             if var in myconfig:
+                 for item in myconfig[var].split(":"):
+                     if item and not item in mylist:
+                         mylist.append(item)
+                 del myconfig[var]  # prepare for env.update(myconfig)
+         if mylist:
+             env[var] = ":".join(mylist)
+         specials[var] = mylist
+ 
+     for myconfig in config_list:
+         """Cumulative variables have already been deleted from myconfig so that
+         they won't be overwritten by this dict.update call."""
+         env.update(myconfig)
+ 
+     ldsoconf_path = os.path.join(eroot, "etc", "ld.so.conf")
+     try:
+         myld = io.open(
+             _unicode_encode(ldsoconf_path, encoding=_encodings["fs"], errors="strict"),
+             mode="r",
+             encoding=_encodings["content"],
+             errors="replace",
+         )
+         myldlines = myld.readlines()
+         myld.close()
+         oldld = []
+         for x in myldlines:
+             # each line has at least one char (a newline)
+             if x[:1] == "#":
+                 continue
+             oldld.append(x[:-1])
+     except (IOError, OSError) as e:
+         if e.errno != errno.ENOENT:
+             raise
+         oldld = None
+ 
+     newld = specials["LDPATH"]
+     if oldld != newld:
+         # ld.so.conf needs updating and ldconfig needs to be run
+         myfd = atomic_ofstream(ldsoconf_path)
+         myfd.write("# ld.so.conf autogenerated by env-update; make all changes to\n")
+         myfd.write("# contents of /etc/env.d directory\n")
+         for x in specials["LDPATH"]:
+             myfd.write(x + "\n")
+         myfd.close()
+ 
+     potential_lib_dirs = set()
+     for lib_dir_glob in ("usr/lib*", "lib*"):
+         x = os.path.join(eroot, lib_dir_glob)
+         for y in glob.glob(
+             _unicode_encode(x, encoding=_encodings["fs"], errors="strict")
+         ):
+             try:
+                 y = _unicode_decode(y, encoding=_encodings["fs"], errors="strict")
+             except UnicodeDecodeError:
+                 continue
+             if os.path.basename(y) != "libexec":
+                 potential_lib_dirs.add(y[len(eroot) :])
+ 
+     # Update prelink.conf if we are prelink-enabled
+     if prelink_capable:
+         prelink_d = os.path.join(eroot, "etc", "prelink.conf.d")
+         ensure_dirs(prelink_d)
+         newprelink = atomic_ofstream(os.path.join(prelink_d, "portage.conf"))
+         newprelink.write(
+             "# prelink.conf autogenerated by env-update; make all changes to\n"
+         )
+         newprelink.write("# contents of /etc/env.d directory\n")
+ 
+         for x in sorted(potential_lib_dirs) + ["bin", "sbin"]:
+             newprelink.write("-l /%s\n" % (x,))
+         prelink_paths = set()
+         prelink_paths |= set(specials.get("LDPATH", []))
+         prelink_paths |= set(specials.get("PATH", []))
+         prelink_paths |= set(specials.get("PRELINK_PATH", []))
+         prelink_path_mask = specials.get("PRELINK_PATH_MASK", [])
+         for x in prelink_paths:
+             if not x:
+                 continue
+             if x[-1:] != "/":
+                 x += "/"
+             plmasked = 0
+             for y in prelink_path_mask:
+                 if not y:
+                     continue
+                 if y[-1] != "/":
+                     y += "/"
+                 if y == x[0 : len(y)]:
+                     plmasked = 1
+                     break
+             if not plmasked:
+                 newprelink.write("-h %s\n" % (x,))
+         for x in prelink_path_mask:
+             newprelink.write("-b %s\n" % (x,))
+         newprelink.close()
+ 
+         # Migration code path.  If /etc/prelink.conf was generated by us, then
+         # point it to the new stuff until the prelink package re-installs.
+         prelink_conf = os.path.join(eroot, "etc", "prelink.conf")
+         try:
+             with open(
+                 _unicode_encode(
+                     prelink_conf, encoding=_encodings["fs"], errors="strict"
+                 ),
+                 "rb",
+             ) as f:
+                 if (
+                     f.readline()
+                     == b"# prelink.conf autogenerated by env-update; make all changes to\n"
+                 ):
+                     f = atomic_ofstream(prelink_conf)
+                     f.write("-c /etc/prelink.conf.d/*.conf\n")
+                     f.close()
+         except IOError as e:
+             if e.errno != errno.ENOENT:
+                 raise
+ 
+     current_time = int(time.time())
+     mtime_changed = False
+ 
+     lib_dirs = set()
+     for lib_dir in set(specials["LDPATH"]) | potential_lib_dirs:
+         x = os.path.join(eroot, lib_dir.lstrip(os.sep))
+         try:
+             newldpathtime = os.stat(x)[stat.ST_MTIME]
+             lib_dirs.add(normalize_path(x))
+         except OSError as oe:
+             if oe.errno == errno.ENOENT:
+                 try:
+                     del prev_mtimes[x]
+                 except KeyError:
+                     pass
+                 # ignore this path because it doesn't exist
+                 continue
+             raise
+         if newldpathtime == current_time:
+             # Reset mtime to avoid the potential ambiguity of times that
+             # differ by less than 1 second.
+             newldpathtime -= 1
+             os.utime(x, (newldpathtime, newldpathtime))
+             prev_mtimes[x] = newldpathtime
+             mtime_changed = True
+         elif x in prev_mtimes:
+             if prev_mtimes[x] == newldpathtime:
+                 pass
+             else:
+                 prev_mtimes[x] = newldpathtime
+                 mtime_changed = True
+         else:
+             prev_mtimes[x] = newldpathtime
+             mtime_changed = True
+ 
+     if makelinks and not mtime_changed and contents is not None:
+         libdir_contents_changed = False
+         for mypath, mydata in contents.items():
+             if mydata[0] not in ("obj", "sym"):
+                 continue
+             head, tail = os.path.split(mypath)
+             if head in lib_dirs:
+                 libdir_contents_changed = True
+                 break
+         if not libdir_contents_changed:
+             makelinks = False
+ 
+     if (
+         "CHOST" in settings
+         and "CBUILD" in settings
+         and settings["CHOST"] != settings["CBUILD"]
+     ):
+         ldconfig = find_binary("%s-ldconfig" % settings["CHOST"])
+     else:
+         ldconfig = os.path.join(eroot, "sbin", "ldconfig")
+ 
+     if ldconfig is None:
+         pass
+     elif not (os.access(ldconfig, os.X_OK) and os.path.isfile(ldconfig)):
+         ldconfig = None
+ 
+     # Only run ldconfig as needed
+     if makelinks and ldconfig:
+         # ldconfig has very different behaviour between FreeBSD and Linux
+         if ostype == "Linux" or ostype.lower().endswith("gnu"):
+             # We can't update links if we haven't cleaned other versions first, as
+             # an older package installed ON TOP of a newer version will cause ldconfig
+             # to overwrite the symlinks we just made. -X means no links. After 'clean'
+             # we can safely create links.
+             writemsg_level(
+                 _(">>> Regenerating %setc/ld.so.cache...\n") % (target_root,)
+             )
+             os.system("cd / ; %s -X -r '%s'" % (ldconfig, target_root))
+         elif ostype in ("FreeBSD", "DragonFly"):
+             writemsg_level(
+                 _(">>> Regenerating %svar/run/ld-elf.so.hints...\n") % target_root
+             )
+             os.system(
+                 (
+                     "cd / ; %s -elf -i "
+                     + "-f '%svar/run/ld-elf.so.hints' '%setc/ld.so.conf'"
+                 )
+                 % (ldconfig, target_root, target_root)
+             )
+ 
+     del specials["LDPATH"]
+ 
+     notice = "# THIS FILE IS AUTOMATICALLY GENERATED BY env-update.\n"
+     notice += "# DO NOT EDIT THIS FILE."
+     penvnotice = notice + " CHANGES TO STARTUP PROFILES\n"
+     cenvnotice = penvnotice[:]
 -    penvnotice += "# GO INTO /etc/profile NOT /etc/profile.env\n\n"
 -    cenvnotice += "# GO INTO /etc/csh.cshrc NOT /etc/csh.env\n\n"
++    # BEGIN PREFIX LOCAL
++    penvnotice += "# GO INTO " + eprefix + "/etc/profile NOT " + eprefix + "/etc/profile.env\n\n"
++    cenvnotice += "# GO INTO " + eprefix + "/etc/csh.cshrc NOT " + eprefix + "/etc/csh.env\n\n"
++    # END PREFIX LOCAL
+ 
+     # create /etc/profile.env for bash support
+     profile_env_path = os.path.join(eroot, "etc", "profile.env")
+     with atomic_ofstream(profile_env_path) as outfile:
+         outfile.write(penvnotice)
+ 
+         env_keys = [x for x in env if x != "LDPATH"]
+         env_keys.sort()
+         for k in env_keys:
+             v = env[k]
+             if v.startswith("$") and not v.startswith("${"):
+                 outfile.write("export %s=$'%s'\n" % (k, v[1:]))
+             else:
+                 outfile.write("export %s='%s'\n" % (k, v))
+ 
+     # Create the systemd user environment configuration file
+     # /etc/environment.d/10-gentoo-env.conf with the
+     # environment configuration from /etc/env.d.
+     systemd_environment_dir = os.path.join(eroot, "etc", "environment.d")
+     os.makedirs(systemd_environment_dir, exist_ok=True)
+ 
+     systemd_gentoo_env_path = os.path.join(
+         systemd_environment_dir, "10-gentoo-env.conf"
+     )
+     with atomic_ofstream(systemd_gentoo_env_path) as systemd_gentoo_env:
+         senvnotice = notice + "\n\n"
+         systemd_gentoo_env.write(senvnotice)
+ 
+         for env_key in env_keys:
+             # Skip PATH since this makes it impossible to use
+             # "systemctl --user import-environment PATH".
+             if env_key == "PATH":
+                 continue
+ 
+             env_key_value = env[env_key]
+ 
+             # Skip variables with the empty string
+             # as value. Those sometimes appear in
+             # profile.env (e.g. "export GCC_SPECS=''"),
+             # but are invalid in systemd's syntax.
+             if not env_key_value:
+                 continue
+ 
+             # Transform into systemd environment.d
+             # conf syntax, basically shell variable
+             # assignment (without "export ").
+             line = f"{env_key}={env_key_value}\n"
+ 
+             systemd_gentoo_env.write(line)
+ 
+     # create /etc/csh.env for (t)csh support
+     outfile = atomic_ofstream(os.path.join(eroot, "etc", "csh.env"))
+     outfile.write(cenvnotice)
+     for x in env_keys:
+         outfile.write("setenv %s '%s'\n" % (x, env[x]))
+     outfile.close()
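
As a reading aid for the rewritten _env_update above, here is a rough
stdlib-only sketch of how colon-separated entries from several /etc/env.d
files get merged, de-duplicated and exported. The directory path, the
variable set and the quote handling are simplified assumptions and do not
reproduce portage's getconfig or atomic_ofstream behaviour.

    import glob
    import os
    import shlex

    ENVD = "/etc/env.d"                     # portage prefixes this with EROOT
    COLON = {"PATH", "MANPATH", "LDPATH"}   # subset of colon_separated above

    merged = {}
    for fn in sorted(glob.glob(os.path.join(ENVD, "[0-9][0-9]*"))):
        with open(fn) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                value = " ".join(shlex.split(value))  # strip surrounding quotes
                if key in COLON:
                    # Colon-separated variables accumulate, without duplicates.
                    items = merged.setdefault(key, [])
                    items += [p for p in value.split(":") if p and p not in items]
                else:
                    merged[key] = value               # later files simply win

    for key in sorted(merged):
        value = ":".join(merged[key]) if key in COLON else merged[key]
        print("export %s='%s'" % (key, value))
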


