public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-04-23 14:28 Magnus Granberg
From: Magnus Granberg @ 2011-04-23 14:28 UTC
  To: gentoo-commits

commit:     88293d7cad719d3fce2ef9f2054e5c6bd8946d0b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 23 14:27:44 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 23 14:27:44 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=88293d7c

fix an import syntax error

---
 gobs/pym/repoman.py |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/repoman.py b/gobs/pym/repoman.py
index 81e0aae..c495d80 100644
--- a/gobs/pym/repoman.py
+++ b/gobs/pym/repoman.py
@@ -1,10 +1,12 @@
+import sys
+import os
 import portage
 from portage import os, _encodings, _unicode_decode
 from portage import _unicode_encode
 from portage.exception import DigestException, FileNotFound, ParseError, PermissionDenied
 from _emerge.Package import Package
 from _emerge.RootConfig import RootConfig
-import run_checks from repoman.checks
+from repoman.checks import run_checks
 import codecs
 
 class gobs_repoman(object):
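
For reference, Python has no "import name from module" ordering; the only valid forms put "from" first, as the corrected line does. A minimal sketch of the syntax (module path taken from the commit):

    # import run_checks from repoman.checks    <- SyntaxError
    from repoman.checks import run_checks      # bind one name from the module
    import repoman.checks                      # or bind the module itself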




* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-04-23 15:26 Magnus Granberg
From: Magnus Granberg @ 2011-04-23 15:26 UTC
  To: gentoo-commits

commit:     9717dd657ef219907c71267bb1311eefdf80d99f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 23 15:26:25 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 23 15:26:25 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9717dd65

fix the repoman import problem

---
 gobs/pym/package.py                      |    2 +-
 gobs/pym/{repoman.py => repoman_gobs.py} |    0
 2 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index b75c515..1180bd2 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -1,6 +1,6 @@
 import portage
 from gobs.flags import gobs_use_flags
-from gobs.repoman import gobs_repoman
+from gobs.repoman_gobs import gobs_repoman
 from gobs.manifest import gobs_manifest
 from gobs.text import gobs_text
 from gobs.old_cpv import gobs_old_cpv

diff --git a/gobs/pym/repoman.py b/gobs/pym/repoman_gobs.py
similarity index 100%
rename from gobs/pym/repoman.py
rename to gobs/pym/repoman_gobs.py
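
The rename is needed because a local module file named repoman.py can shadow the real repoman package on sys.path, in which case "from repoman.checks import run_checks" resolves to the local single-file module, which has no checks submodule. A sketch of the failure mode (the shadowing mechanics are a likely reading of the fix, not stated in the commit):

    # with gobs/pym/repoman.py importable as "repoman":
    #   from repoman.checks import run_checks   -> ImportError: No module named checks
    # after the rename nothing local claims the name "repoman":
    from gobs.repoman_gobs import gobs_repoman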




* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-04-24 22:21 Magnus Granberg
From: Magnus Granberg @ 2011-04-24 22:21 UTC
  To: gentoo-commits

commit:     850d14cc46bc805df127eace1858ab0f9f3398df
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 24 22:21:35 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Apr 24 22:21:35 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=850d14cc

update manifest.py: rename digestcheck() argument pkdir to pkgdir

---
 gobs/pym/manifest.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/manifest.py b/gobs/pym/manifest.py
index 69c6f8b..140a5d1 100644
--- a/gobs/pym/manifest.py
+++ b/gobs/pym/manifest.py
@@ -11,7 +11,7 @@ class gobs_manifest(object):
 		self.mysettings = mysettings
 
 	# Copy of portage.digestcheck() but without the writemsg() stuff
-	def digestcheck(self, pkdir):
+	def digestcheck(self, pkgdir):
 		"""
 		Verifies checksums. Assumes all files have been downloaded.
 		@rtype: int
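
The rename also matters to any caller that passes the argument by keyword, since a parameter's spelling is part of the API. A hypothetical call showing both sides:

    init_manifest.digestcheck(pkgdir="/usr/portage/app-misc/foo")   # works after the fix
    # init_manifest.digestcheck(pkdir="...")  -> TypeError: unexpected keyword argument 'pkdir'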




* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-07-29 15:31 Magnus Granberg
From: Magnus Granberg @ 2011-07-29 15:31 UTC
  To: gentoo-commits

commit:     8554eeee41909a2df43989d6872346b5b64e4570
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 29 15:29:35 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Jul 29 15:29:35 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=8554eeee

Updated a lot in the pym dir

---
 gobs/pym/ConnectionManager.py |   26 +-
 gobs/pym/build_log.py         |   99 ++++---
 gobs/pym/build_queru.py       |  158 ++++++++++
 gobs/pym/depclean.py          |  632 +++++++++++++++++++++++++++++++++++++++++
 gobs/pym/flags.py             |   27 +-
 gobs/pym/old_cpv.py           |   11 +-
 gobs/pym/package.py           |   44 +---
 gobs/pym/text.py              |   19 +-
 8 files changed, 901 insertions(+), 115 deletions(-)

diff --git a/gobs/pym/ConnectionManager.py b/gobs/pym/ConnectionManager.py
index 7d87702..1bbeb35 100644
--- a/gobs/pym/ConnectionManager.py
+++ b/gobs/pym/ConnectionManager.py
@@ -1,31 +1,31 @@
 #a simple CM build around sie singleton so there can only be 1 CM but you can call the class in different place with out caring about it.
 #when the first object is created of this class, the SQL settings are read from the file and stored in the class for later reuse by the next object and so on.
 #(maybe later add support for connection pools)
+from __future__ import print_function
+
 class connectionManager(object):
     _instance = None   
 
-		      #size of the connection Pool
+    #size of the connection Pool
     def __new__(cls, settings_dict, numberOfconnections=20, *args, **kwargs):
         if not cls._instance:
             cls._instance = super(connectionManager, cls).__new__(cls, *args, **kwargs)
             #read the sql user/host etc and store it in the local object
-            print settings_dict['sql_host']
+            print(settings_dict['sql_host'])
             cls._host=settings_dict['sql_host']
             cls._user=settings_dict['sql_user']
             cls._password=settings_dict['sql_passwd']
             cls._database=settings_dict['sql_db']
             #shouldnt we include port also?
             try:
-	      from psycopg2 import pool
-	      cls._connectionNumber=numberOfconnections
-	      #always create 1 connection
-	      cls._pool=pool.ThreadedConnectionPool(1,cls._connectionNumber,host=cls._host,database=cls._database,user=cls._user,password=cls._password)
-	      cls._name='pgsql'
-	      
-	      
-	    except ImportError:
-	      print "Please install a recent version of dev-python/psycopg for Python"
-	      sys.exit(1)
+              from psycopg2 import pool
+              cls._connectionNumber=numberOfconnections
+              #always create 1 connection
+              cls._pool=pool.ThreadedConnectionPool(1,cls._connectionNumber,host=cls._host,database=cls._database,user=cls._user,password=cls._password)
+              cls._name='pgsql'
+            except ImportError:
+              print("Please install a recent version of dev-python/psycopg for Python")
+              sys.exit(1)
             #setup connection pool
         return cls._instance
     
@@ -38,7 +38,7 @@ class connectionManager(object):
       
     def putConnection(self, connection):
       self._pool.putconn(connection)
-	
+
     def closeAllConnections(self):
       self._pool.closeall()
 

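The header comments describe the intent: a singleton built in __new__ around one shared psycopg2 pool. A self-contained sketch of that pattern (class name and defaults illustrative; the settings keys mirror the commit):

    from psycopg2 import pool

    class ConnectionManagerSketch(object):
        _instance = None

        def __new__(cls, settings_dict, number_of_connections=20):
            if cls._instance is None:
                cls._instance = super(ConnectionManagerSketch, cls).__new__(cls)
                # one pool, created on first instantiation and reused afterwards
                cls._pool = pool.ThreadedConnectionPool(
                    1, number_of_connections,
                    host=settings_dict['sql_host'],
                    database=settings_dict['sql_db'],
                    user=settings_dict['sql_user'],
                    password=settings_dict['sql_passwd'])
            return cls._instance

        def getConnection(self):
            return self._pool.getconn()

        def putConnection(self, connection):
            self._pool.putconn(connection)
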
diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 4f5a801..eb5fcea 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -1,8 +1,10 @@
+from __future__ import print_function
 import re
 from gobs.text import get_log_text_list
 from gobs.repoman_gobs import gobs_repoman
 import portage
 from gobs.readconf import get_conf_settings
+from gobs.flags import gobs_use_flags
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 # make a CM
@@ -16,35 +18,58 @@ elif CM.getName()=='mysql':
 
 class gobs_buildlog(object):
 	
-	def __init__(self, CM, mysettings, build_dict, config_profile):
+	def __init__(self,  mysettings, build_dict):
 		self._mysettings = mysettings
 		self._myportdb = portage.portdbapi(mysettings=self._mysettings)
 		self._build_dict = build_dict
-		self._config_profile = config_profile
-		self._CM = CM
 		self._logfile_text = get_log_text_list(self._mysettings.get("PORTAGE_LOG_FILE"))
+	
+	def add_new_ebuild_buildlog(self, build_error, summary_error, build_log_dict):
+		conn=CM.getConnection()
+		cpv = self._build_dict['cpv']
+		init_useflags = gobs_use_flags(self._mysettings, self._myportdb, cpv)
+		iuse_flags_list, final_use_list = init_useflags.get_flags_looked()
+		iuse = []
+		use_flags_list = []
+		use_enable_list = []
+		for iuse_line in iuse_flags_list:
+			iuse.append(init_useflags.reduce_flag(iuse_line))
+		iuse_flags_list2 = list(set(iuse))
+		use_enable = final_use_list
+		use_disable = list(set(iuse_flags_list2).difference(set(use_enable)))
+		use_flagsDict = {}
+		for x in use_enable:
+			use_flagsDict[x] = True
+		for x in use_disable:
+			use_flagsDict[x] = False
+		for u, s in  use_flagsDict.iteritems():
+			use_flags_list.append(u)
+			use_enable_list.append(s)
+		build_id = add_new_buildlog(conn, self._build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict)
+		CM.putConnection(conn)
+		return build_id
 
-	def search_info(self, textline, error_log_list, i):
+	def search_info(self, textline, error_log_list):
 		if re.search(" * Package:", textline):
-			print 'Package'
+			print('Package')
 			error_log_list.append(textline)
 		if re.search(" * Repository:", textline):
-			print 'Repository'
+			print('Repository')
 			error_log_list.append(textline)
 		if re.search(" * Maintainer:", textline):
 			error_log_list.append(textline)
-			print 'Maintainer'
+			print('Maintainer')
 		if re.search(" * USE:", textline):
 			error_log_list.append(textline)
-			print 'USE'
+			print('USE')
 		if re.search(" * FEATURES:", textline):
 			error_log_list.append(textline)
-			print 'FEATURES'
+			print('FEATURES')
 		return error_log_list
 
 	def search_error(self, textline, error_log_list, sum_build_log_list, i):
 		if re.search("Error 1", textline):
-			print 'Error'
+			print('Error')
 			x = i - 20
 			endline = True
 			error_log_list.append(".....\n")
@@ -56,7 +81,7 @@ class gobs_buildlog(object):
 				else:
 					x = x +1
 		if re.search(" * ERROR:", textline):
-			print 'ERROR'
+			print('ERROR')
 			x = i
 			endline= True
 			field = textline.split(" ")
@@ -69,12 +94,25 @@ class gobs_buildlog(object):
 					endline = False
 				else:
 					x = x +1
+		if re.search("configure: error:", textline):
+			print('configure: error:')
+			x = i - 4
+			endline = True
+			error_log_list.append(".....\n")
+			while x != i + 3 and endline:
+				try:
+					error_log_list.append(self._logfile_text[x])
+				except:
+					endline = False
+				else:
+					x = x +1
 		return error_log_list, sum_build_log_list
 
 	def search_qa(self, textline, qa_error_list, error_log_list,i):
-		if re.search(" * QA Notice: Package has poor programming", textline):
-			print 'QA Notice'
+		if re.search(" * QA Notice:", textline):
+			print('QA Notice')
 			x = i
+			qa_error_list.append(self._logfile_text[x])
 			endline= True
 			error_log_list.append(".....\n")
 			while x != i + 3 and endline:
@@ -84,20 +122,6 @@ class gobs_buildlog(object):
 					endline = False
 				else:
 					x = x +1
-			qa_error_list.append('QA Notice: Package has poor programming practices')
-			if re.search(" * QA Notice: The following shared libraries lack NEEDED", textline):
-				print 'QA Notice'
-				x = i
-				endline= True
-				error_log_list.append(".....\n")
-				while x != i + 2 and endline:
-					try:
-						error_log_list.append(self._logfile_text[x])
-					except:
-						endline = False
-					else:
-						x = x +1
-				qa_error_list.append('QA Notice: The following shared libraries lack NEEDED entries')
 		return qa_error_list, error_log_list
 
 	def get_buildlog_info(self):
@@ -110,15 +134,12 @@ class gobs_buildlog(object):
 		repoman_error_list = []
 		sum_build_log_list = []
 		for textline in self._logfile_text:
-			error_log_list = self.search_info(textline, error_log_list, i)
+			error_log_list = self.search_info(textline, error_log_list)
 			error_log_list, sum_build_log_list = self.search_error(textline, error_log_list, sum_build_log_list, i)
 			qa_error_list, error_log_list = self.search_qa(textline, qa_error_list, error_log_list, i)
 			i = i +1
 		# Run repoman check_repoman()
-		categories = self._build_dict['categories']
-		package = self._build_dict['package']
-		ebuild_version = self._build_dict['ebuild_version']
-		repoman_error_list = init_repoman.check_repoman(categories, package, ebuild_version, self._config_profile)
+		repoman_error_list = init_repoman.check_repoman(self._build_dict['categories'], self._build_dict['package'], self._build_dict['ebuild_version'], self._build_dict['config_profile'])
 		if repoman_error_list != []:
 			sum_build_log_list.append("repoman")
 		if qa_error_list != []:
@@ -130,7 +151,7 @@ class gobs_buildlog(object):
 		return build_log_dict
 
 	def add_buildlog_main(self):
-		conn=self._CM.getConnection()
+		conn=CM.getConnection()
 		build_log_dict = {}
 		build_log_dict = self.get_buildlog_info()
 		sum_build_log_list = build_log_dict['summary_error_list']
@@ -143,8 +164,12 @@ class gobs_buildlog(object):
 		if sum_build_log_list != []:
 			for sum_log_line in sum_build_log_list:
 				summary_error = summary_error + " " + sum_log_line
-		print 'summary_error', summary_error
-		logfilename = re.sub("\/var\/log\/portage\/", "",  self._mysettings.get("PORTAGE_LOG_FILE"))
-		build_id = move_queru_buildlog(conn, self._build_dict['queue_id'], build_error, summary_error, logfilename, build_log_dict)
+		print('summary_error', summary_error)
+		build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  self._mysettings.get("PORTAGE_LOG_FILE"))
+		print(self._build_dict['queue_id'], build_error, summary_error, build_log_dict['logfilename'], build_log_dict)
+		if self._build_dict['queue_id'] is None:
+			build_id = self.add_new_ebuild_buildlog(build_error, summary_error, build_log_dict)
+		else:
+			build_id = move_queru_buildlog(conn, self._build_dict['queue_id'], build_error, summary_error, build_log_dict)
 		# update_qa_repoman(conn, build_id, build_log_dict)
-		print "build_id", build_id, "logged to db."
+		print("build_id", build_id[0], "logged to db.")
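
search_info()/search_error()/search_qa() all collect a window of log lines around a match, using try/except to guard against indexing past the end of the log. The same idea in isolation (a sketch, not the committed code):

    def context_window(log_lines, i, before=20, after=3):
        # lines i-before .. i+after-1, clamped at the start of the list
        return log_lines[max(0, i - before):i + after]

    log = ["line%d" % n for n in range(100)]
    print(len(context_window(log, 50)))   # 23: twenty before, the match, two after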

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
new file mode 100644
index 0000000..3d53a05
--- /dev/null
+++ b/gobs/pym/build_queru.py
@@ -0,0 +1,158 @@
+# Get the options from the config file set in gobs.readconf
+from gobs.readconf import get_conf_settings
+reader=get_conf_settings()
+gobs_settings_dict=reader.read_gobs_settings_all()
+# make a CM
+from gobs.ConnectionManager import connectionManager
+CM=connectionManager(gobs_settings_dict)
+#selectively import the pgsql/mysql querys
+if CM.getName()=='pgsql':
+	from gobs.querys.pgsql import *
+elif CM.getName()=='mysql':
+	from gobs.querys.mysql import *
+
+import portage
+import os
+from gobs.manifest import gobs_manifest
+from gobs.depclean import main_depclean
+from gobs.flags import gobs_use_flags
+from _emerge.main import emerge_main
+
+class queruaction(object):
+
+	def __init__(self, config_profile):
+		self._mysettings = portage.settings
+		self._config_profile = config_profile
+		self._myportdb =  portage.portdb
+
+	def log_fail_queru(self, build_dict, fail_querue_dict):
+		fail_times = 0
+		if fail_querue_dict == {}:
+			attDict = {}
+			attDict[build_dict['type_fail']] = 1
+			attDict['build_info'] = build_dict
+			fail_querue_dict[build_dict['querue_id']] = attDict
+			return fail_querue_dict
+		else:
+			# FIXME:If  is 5 remove fail_querue_dict[build_dict['querue_id'] from 
+			# fail_querue_dict and add log to db.
+			if not fail_querue_dict[build_dict['querue_id']] is None:
+				if fail_querue_dict[build_dict['querue_id']][build_dict['type_fail']] is None:
+					fail_querue_dict[build_dict['querue_id']][build_dict['type_fail']] = 1
+					return fail_querue_dict
+				else:
+					fail_times = fail_querue_dict[build_dict['querue_id']][build_dict['type_fail']]
+					fail_times  = fail_times + 1
+					if not fail_times is 5:
+						fail_querue_dict[build_dict['querue_id']][build_dict['type_fail']] = fail_times
+						return fail_querue_dict
+					else:
+						# FIXME:If  is 5 remove fail_querue_dict[build_dict['querue_id']] from
+						# fail_querue_dict and add log to db.
+						return fail_querue_dict
+			else:
+				attDict = {}
+				attDict[build_dict['type_fail']] = 1
+				attDict['build_info'] = build_dict
+				fail_querue_dict[build_dict['querue_id']] = attDict
+			return fail_querue_dict
+
+	def make_build_list(self, build_dict):
+		conn=CM.getConnection()
+		cpv = build_dict['category']+'/'+build_dict['package']+'-'+build_dict['ebuild_version']
+		pkgdir = os.path.join(self._mysettings['PORTDIR'], build_dict['category'] + "/" + build_dict['package'])
+    		init_manifest =  gobs_manifest(self._mysettings, pkgdir)
+    		try:
+			ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + build_dict['package'] + "-" + build_dict['ebuild_version'] + ".ebuild")[0]
+		except:
+			ebuild_version_checksum_tree = None
+		if ebuild_version_checksum_tree == build_dict['checksum']:
+			if portage.getmaskingstatus(cpv, settings=self._mysettings, portdb=self._myportdb) == []:
+				init_flags = gobs_use_flags(self._mysettings, self._myportdb, cpv)
+				build_use_flags_list = init_flags.comper_useflags(build_dict)
+				print "build_use_flags_list", build_use_flags_list
+				manifest_error = init_manifest.check_file_in_manifest(self._myportdb, cpv, build_dict, build_use_flags_list)
+				if manifest_error is None:
+					build_dict['check_fail'] = False
+					build_cpv_dict = init_flags.get_needed_dep_useflags(build_use_flags_list)
+					print build_cpv_dict, build_use_flags_list, cpv
+					build_use_flags_dict = {}
+					if build_use_flags_list is None:
+						build_use_flags_dict['None'] = None
+					if build_cpv_dict is None:
+						build_cpv_dict = {}
+						build_cpv_dict[cpv] = build_use_flags_dict
+					else:
+						build_cpv_dict[cpv] = build_use_flags_dict
+					print build_cpv_dict
+					return build_cpv_dict, build_dict
+				else:
+					build_dict['1'] = 1
+			else:
+				build_dict['2'] = 2
+		else:
+			build_dict['3'] = 3
+		build_dict['check_fail'] = True
+		return build_cpv_dict, build_dict
+
+	def build_procces(self, buildqueru_cpv_dict, build_dict):
+		build_cpv_list = []
+		for k, v in buildqueru_cpv_dict.iteritems():
+				build_use_flags_list = []
+				for x, y in v.iteritems():
+					if y is True:
+						build_use_flags_list.append(x)
+					if y is False:
+						build_use_flags_list.append("-" + x)
+				print k, build_use_flags_list
+				if build_use_flags_list == []:
+					build_cpv_list.append("=" + k)
+				else:
+					build_use_flags = ""
+					for flags in build_use_flags_list:
+						build_use_flags = build_use_flags + flags + ","
+					build_cpv_list.append("=" + k + "[" + build_use_flags + "]")
+		print 'build_cpv_list', build_cpv_list
+		argscmd = []
+		if not "nooneshort" in build_dict['post_message']:
+			argscmd.append("--oneshot")
+		argscmd.append("--buildpkg")
+		argscmd.append("--usepkg")
+		for build_cpv in build_cpv_list:
+			argscmd.append(build_cpv)
+		print argscmd
+		# Call main_emerge to build the package in build_cpv_list 
+		try: 
+			build_fail = emerge_main(args=argscmd)
+		except:
+			build_fail = False
+		# Run depclean
+		if not "nodepclean" in build_dict['post_message']:
+			depclean_fail = main_depclean()
+		if build_fail is False or depclean_fail is False:
+			return False
+		return True
+
+	def procces_qureru(self, fail_querue_dict):
+		conn=CM.getConnection()
+		build_dict = {}
+		build_dict = get_packages_to_build(conn, self._config_profile)
+		print "build_dict",  build_dict
+		if build_dict is None and fail_querue_dict == {}:
+			return fail_querue_dict
+		if build_dict is None and fail_querue_dict != {}:
+			return fail_querue_dict
+		if not build_dict['ebuild_id'] is None and build_dict['checksum'] is not None:
+			buildqueru_cpv_dict, build_dict = self.make_build_list(build_dict)
+			print 'buildqueru_cpv_dict', buildqueru_cpv_dict
+			if buildqueru_cpv_dict is None:
+				return fail_querue_dict
+			fail_build_procces = self.build_procces(buildqueru_cpv_dict, build_dict)
+			if build_dict['check_fail'] is True:
+				fail_querue_dict = self.log_fail_queru(build_dict, fail_querue_dict)
+			return fail_querue_dict
+		if not build_dict['post_message'] is [] and build_dict['ebuild_id'] is None:
+			return fail_querue_dict
+		if not build_dict['ebuild_id'] is None and build_dict['checksum'] is None:
+			del_old_queue(conn, build_dict['queue_id'])
+		return fail_querue_dict
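
log_fail_queru() is per-queue failure bookkeeping capped at five attempts per failure type (note that its "fail_times is 5" test relies on CPython small-integer identity; == would be the safe comparison). A compact sketch of the same logic (function name hypothetical; the db logging is the commit's own FIXME):

    def count_failure(fail_dict, queue_id, fail_type, build_info, limit=5):
        entry = fail_dict.setdefault(queue_id, {'build_info': build_info})
        entry[fail_type] = entry.get(fail_type, 0) + 1
        if entry[fail_type] >= limit:
            del fail_dict[queue_id]   # and log the permanent failure to the db
        return fail_dict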

diff --git a/gobs/pym/depclean.py b/gobs/pym/depclean.py
new file mode 100644
index 0000000..b6096b6
--- /dev/null
+++ b/gobs/pym/depclean.py
@@ -0,0 +1,632 @@
+from __future__ import print_function
+import errno
+import portage
+from portage._sets.base import InternalPackageSet
+from _emerge.main import parse_opts
+from _emerge.create_depgraph_params import create_depgraph_params
+from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
+from _emerge.UnmergeDepPriority import UnmergeDepPriority
+from _emerge.SetArg import SetArg
+from _emerge.actions import load_emerge_config
+from _emerge.Package import Package
+from _emerge.unmerge import unmerge
+from portage.util import cmp_sort_key, writemsg, \
+	writemsg_level, writemsg_stdout
+from portage.util.digraph import digraph
+
+def main_depclean():
+	mysettings, mytrees, mtimedb = load_emerge_config()
+	myroot = mysettings["ROOT"]
+	root_config = mytrees[myroot]["root_config"]
+	psets = root_config.setconfig.psets
+	args_set = InternalPackageSet(allow_repo=True)
+	spinner=None
+	scheduler=None
+	tmpcmdline = []
+	tmpcmdline.append("--depclean")
+	tmpcmdline.append("--pretend")
+	print("depclean",tmpcmdline)
+	myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
+	if myfiles:
+		args_set.update(myfiles)
+		matched_packages = False
+		for x in args_set:
+			if vardb.match(x):
+				matched_packages = True
+		if not matched_packages:
+			return 0
+
+	rval, cleanlist, ordered, req_pkg_count, unresolvable = calc_depclean(mysettings, mytrees, mtimedb["ldpath"], myopts, myaction, args_set, spinner)
+	print('rval, cleanlist, ordered, req_pkg_count, unresolvable', rval, cleanlist, ordered, req_pkg_count, unresolvable)
+	if unresolvable != []:
+		return True
+	if cleanlist != []:
+		conflict_package_list = []
+		for depclean_cpv in cleanlist:
+			if portage.versions.cpv_getkey(depclean_cpv) in list(psets["system"]):
+				conflict_package_list.append(depclean_cpv)
+			if portage.versions.cpv_getkey(depclean_cpv) in list(psets['selected']):
+				conflict_package_list.append(depclean_cpv)
+		print('conflict_package_list', conflict_package_list)
+		if conflict_package_list == []:
+			tmpcmdline = []
+			tmpcmdline.append("--depclean")
+			myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
+			unmerge(root_config, myopts, "unmerge", cleanlist, mtimedb["ldpath"], ordered=ordered, scheduler=scheduler)
+			print("Number removed:       "+str(len(cleanlist)))
+			return True
+	return True
+
+def calc_depclean(settings, trees, ldpath_mtimes,
+	myopts, action, args_set, spinner):
+	allow_missing_deps = bool(args_set)
+
+	debug = '--debug' in myopts
+	xterm_titles = "notitles" not in settings.features
+	myroot = settings["ROOT"]
+	root_config = trees[myroot]["root_config"]
+	psets = root_config.setconfig.psets
+	deselect = myopts.get('--deselect') != 'n'
+	required_sets = {}
+	required_sets['world'] = psets['world']
+
+	# When removing packages, a temporary version of the world 'selected'
+	# set may be used which excludes packages that are intended to be
+	# eligible for removal.
+	selected_set = psets['selected']
+	required_sets['selected'] = selected_set
+	protected_set = InternalPackageSet()
+	protected_set_name = '____depclean_protected_set____'
+	required_sets[protected_set_name] = protected_set
+	system_set = psets["system"]
+
+	if not system_set or not selected_set:
+
+		if not system_set:
+			writemsg_level("!!! You have no system list.\n",
+				level=logging.ERROR, noiselevel=-1)
+
+		if not selected_set:
+			writemsg_level("!!! You have no world file.\n",
+					level=logging.WARNING, noiselevel=-1)
+
+		writemsg_level("!!! Proceeding is likely to " + \
+			"break your installation.\n",
+			level=logging.WARNING, noiselevel=-1)
+		if "--pretend" not in myopts:
+			countdown(int(settings["EMERGE_WARNING_DELAY"]), ">>> Depclean")
+
+	if action == "depclean":
+		print(" >>> depclean")
+
+	writemsg_level("\nCalculating dependencies  ")
+	resolver_params = create_depgraph_params(myopts, "remove")
+	resolver = depgraph(settings, trees, myopts, resolver_params, spinner)
+	resolver._load_vdb()
+	vardb = resolver._frozen_config.trees[myroot]["vartree"].dbapi
+	real_vardb = trees[myroot]["vartree"].dbapi
+
+	if action == "depclean":
+
+		if args_set:
+
+			if deselect:
+				# Start with an empty set.
+				selected_set = InternalPackageSet()
+				required_sets['selected'] = selected_set
+				# Pull in any sets nested within the selected set.
+				selected_set.update(psets['selected'].getNonAtoms())
+
+			# Pull in everything that's installed but not matched
+			# by an argument atom since we don't want to clean any
+			# package if something depends on it.
+			for pkg in vardb:
+				if spinner:
+					spinner.update()
+
+				try:
+					if args_set.findAtomForPackage(pkg) is None:
+						protected_set.add("=" + pkg.cpv)
+						continue
+				except portage.exception.InvalidDependString as e:
+					show_invalid_depstring_notice(pkg,
+						pkg.metadata["PROVIDE"], str(e))
+					del e
+					protected_set.add("=" + pkg.cpv)
+					continue
+
+	elif action == "prune":
+
+		if deselect:
+			# Start with an empty set.
+			selected_set = InternalPackageSet()
+			required_sets['selected'] = selected_set
+			# Pull in any sets nested within the selected set.
+			selected_set.update(psets['selected'].getNonAtoms())
+
+		# Pull in everything that's installed since we don't
+		# to prune a package if something depends on it.
+		protected_set.update(vardb.cp_all())
+
+		if not args_set:
+
+			# Try to prune everything that's slotted.
+			for cp in vardb.cp_all():
+				if len(vardb.cp_list(cp)) > 1:
+					args_set.add(cp)
+
+		# Remove atoms from world that match installed packages
+		# that are also matched by argument atoms, but do not remove
+		# them if they match the highest installed version.
+		for pkg in vardb:
+			spinner.update()
+			pkgs_for_cp = vardb.match_pkgs(pkg.cp)
+			if not pkgs_for_cp or pkg not in pkgs_for_cp:
+				raise AssertionError("package expected in matches: " + \
+					"cp = %s, cpv = %s matches = %s" % \
+					(pkg.cp, pkg.cpv, [str(x) for x in pkgs_for_cp]))
+
+			highest_version = pkgs_for_cp[-1]
+			if pkg == highest_version:
+				# pkg is the highest version
+				protected_set.add("=" + pkg.cpv)
+				continue
+
+			if len(pkgs_for_cp) <= 1:
+				raise AssertionError("more packages expected: " + \
+					"cp = %s, cpv = %s matches = %s" % \
+					(pkg.cp, pkg.cpv, [str(x) for x in pkgs_for_cp]))
+
+			try:
+				if args_set.findAtomForPackage(pkg) is None:
+					protected_set.add("=" + pkg.cpv)
+					continue
+			except portage.exception.InvalidDependString as e:
+				show_invalid_depstring_notice(pkg,
+					pkg.metadata["PROVIDE"], str(e))
+				del e
+				protected_set.add("=" + pkg.cpv)
+				continue
+
+	if resolver._frozen_config.excluded_pkgs:
+		excluded_set = resolver._frozen_config.excluded_pkgs
+		required_sets['__excluded__'] = InternalPackageSet()
+
+		for pkg in vardb:
+			if spinner:
+				spinner.update()
+
+			try:
+				if excluded_set.findAtomForPackage(pkg):
+					required_sets['__excluded__'].add("=" + pkg.cpv)
+			except portage.exception.InvalidDependString as e:
+				show_invalid_depstring_notice(pkg,
+					pkg.metadata["PROVIDE"], str(e))
+				del e
+				required_sets['__excluded__'].add("=" + pkg.cpv)
+
+	success = resolver._complete_graph(required_sets={myroot:required_sets})
+	writemsg_level("\b\b... done!\n")
+
+	resolver.display_problems()
+
+	if not success:
+		return True, [], False, 0, []
+
+	def unresolved_deps():
+
+		unresolvable = set()
+		for dep in resolver._dynamic_config._initially_unsatisfied_deps:
+			if isinstance(dep.parent, Package) and \
+				(dep.priority > UnmergeDepPriority.SOFT):
+				unresolvable.add((dep.atom, dep.parent.cpv))
+
+		if not unresolvable:
+			return None
+
+		if unresolvable and not allow_missing_deps:
+
+			prefix = bad(" * ")
+			msg = []
+			msg.append("Dependencies could not be completely resolved due to")
+			msg.append("the following required packages not being installed:")
+			msg.append("")
+			for atom, parent in unresolvable:
+				msg.append("  %s pulled in by:" % (atom,))
+				msg.append("    %s" % (parent,))
+				msg.append("")
+			msg.extend(textwrap.wrap(
+				"Have you forgotten to do a complete update prior " + \
+				"to depclean? The most comprehensive command for this " + \
+				"purpose is as follows:", 65
+			))
+			msg.append("")
+			msg.append("  " + \
+				good("emerge --update --newuse --deep --with-bdeps=y @world"))
+			msg.append("")
+			msg.extend(textwrap.wrap(
+				"Note that the --with-bdeps=y option is not required in " + \
+				"many situations. Refer to the emerge manual page " + \
+				"(run `man emerge`) for more information about " + \
+				"--with-bdeps.", 65
+			))
+			msg.append("")
+			msg.extend(textwrap.wrap(
+				"Also, note that it may be necessary to manually uninstall " + \
+				"packages that no longer exist in the portage tree, since " + \
+				"it may not be possible to satisfy their dependencies.", 65
+			))
+			if action == "prune":
+				msg.append("")
+				msg.append("If you would like to ignore " + \
+					"dependencies then use %s." % good("--nodeps"))
+			writemsg_level("".join("%s%s\n" % (prefix, line) for line in msg),
+				level=logging.ERROR, noiselevel=-1)
+			return unresolvable
+		return None
+
+	unresolvable = unresolved_deps()
+	if not unresolvable is None:
+		return False, [], False, 0, unresolvable
+
+	graph = resolver._dynamic_config.digraph.copy()
+	required_pkgs_total = 0
+	for node in graph:
+		if isinstance(node, Package):
+			required_pkgs_total += 1
+
+	def show_parents(child_node):
+		parent_nodes = graph.parent_nodes(child_node)
+		if not parent_nodes:
+			# With --prune, the highest version can be pulled in without any
+			# real parent since all installed packages are pulled in.  In that
+			# case there's nothing to show here.
+			return
+		parent_strs = []
+		for node in parent_nodes:
+			parent_strs.append(str(getattr(node, "cpv", node)))
+		parent_strs.sort()
+		msg = []
+		msg.append("  %s pulled in by:\n" % (child_node.cpv,))
+		for parent_str in parent_strs:
+			msg.append("    %s\n" % (parent_str,))
+		msg.append("\n")
+		portage.writemsg_stdout("".join(msg), noiselevel=-1)
+
+	def cmp_pkg_cpv(pkg1, pkg2):
+		"""Sort Package instances by cpv."""
+		if pkg1.cpv > pkg2.cpv:
+			return 1
+		elif pkg1.cpv == pkg2.cpv:
+			return 0
+		else:
+			return -1
+
+	def create_cleanlist():
+
+		# Never display the special internal protected_set.
+		for node in graph:
+			if isinstance(node, SetArg) and node.name == protected_set_name:
+				graph.remove(node)
+				break
+
+		pkgs_to_remove = []
+
+		if action == "depclean":
+			if args_set:
+
+				for pkg in sorted(vardb, key=cmp_sort_key(cmp_pkg_cpv)):
+					arg_atom = None
+					try:
+						arg_atom = args_set.findAtomForPackage(pkg)
+					except portage.exception.InvalidDependString:
+						# this error has already been displayed by now
+						continue
+
+					if arg_atom:
+						if pkg not in graph:
+							pkgs_to_remove.append(pkg)
+						elif "--verbose" in myopts:
+							show_parents(pkg)
+
+			else:
+				for pkg in sorted(vardb, key=cmp_sort_key(cmp_pkg_cpv)):
+					if pkg not in graph:
+						pkgs_to_remove.append(pkg)
+					elif "--verbose" in myopts:
+						show_parents(pkg)
+
+		elif action == "prune":
+
+			for atom in args_set:
+				for pkg in vardb.match_pkgs(atom):
+					if pkg not in graph:
+						pkgs_to_remove.append(pkg)
+					elif "--verbose" in myopts:
+						show_parents(pkg)
+
+		return pkgs_to_remove
+
+	cleanlist = create_cleanlist()
+	clean_set = set(cleanlist)
+
+	if cleanlist and \
+		real_vardb._linkmap is not None and \
+		myopts.get("--depclean-lib-check") != "n" and \
+		"preserve-libs" not in settings.features:
+
+		# Check if any of these packages are the sole providers of libraries
+		# with consumers that have not been selected for removal. If so, these
+		# packages and any dependencies need to be added to the graph.
+		linkmap = real_vardb._linkmap
+		consumer_cache = {}
+		provider_cache = {}
+		consumer_map = {}
+
+		writemsg_level(">>> Checking for lib consumers...\n")
+
+		for pkg in cleanlist:
+			pkg_dblink = real_vardb._dblink(pkg.cpv)
+			consumers = {}
+
+			for lib in pkg_dblink.getcontents():
+				lib = lib[len(myroot):]
+				lib_key = linkmap._obj_key(lib)
+				lib_consumers = consumer_cache.get(lib_key)
+				if lib_consumers is None:
+					try:
+						lib_consumers = linkmap.findConsumers(lib_key)
+					except KeyError:
+						continue
+					consumer_cache[lib_key] = lib_consumers
+				if lib_consumers:
+					consumers[lib_key] = lib_consumers
+
+			if not consumers:
+				continue
+
+			for lib, lib_consumers in list(consumers.items()):
+				for consumer_file in list(lib_consumers):
+					if pkg_dblink.isowner(consumer_file):
+						lib_consumers.remove(consumer_file)
+				if not lib_consumers:
+					del consumers[lib]
+
+			if not consumers:
+				continue
+
+			for lib, lib_consumers in consumers.items():
+
+				soname = linkmap.getSoname(lib)
+
+				consumer_providers = []
+				for lib_consumer in lib_consumers:
+					providers = provider_cache.get(lib)
+					if providers is None:
+						providers = linkmap.findProviders(lib_consumer)
+						provider_cache[lib_consumer] = providers
+					if soname not in providers:
+						# Why does this happen?
+						continue
+					consumer_providers.append(
+						(lib_consumer, providers[soname]))
+
+				consumers[lib] = consumer_providers
+
+			consumer_map[pkg] = consumers
+
+		if consumer_map:
+
+			search_files = set()
+			for consumers in consumer_map.values():
+				for lib, consumer_providers in consumers.items():
+					for lib_consumer, providers in consumer_providers:
+						search_files.add(lib_consumer)
+						search_files.update(providers)
+
+			writemsg_level(">>> Assigning files to packages...\n")
+			file_owners = real_vardb._owners.getFileOwnerMap(search_files)
+
+			for pkg, consumers in list(consumer_map.items()):
+				for lib, consumer_providers in list(consumers.items()):
+					lib_consumers = set()
+
+					for lib_consumer, providers in consumer_providers:
+						owner_set = file_owners.get(lib_consumer)
+						provider_dblinks = set()
+						provider_pkgs = set()
+
+						if len(providers) > 1:
+							for provider in providers:
+								provider_set = file_owners.get(provider)
+								if provider_set is not None:
+									provider_dblinks.update(provider_set)
+
+						if len(provider_dblinks) > 1:
+							for provider_dblink in provider_dblinks:
+								provider_pkg = resolver._pkg(
+									provider_dblink.mycpv, "installed",
+									root_config, installed=True)
+								if provider_pkg not in clean_set:
+									provider_pkgs.add(provider_pkg)
+
+						if provider_pkgs:
+							continue
+
+						if owner_set is not None:
+							lib_consumers.update(owner_set)
+
+					for consumer_dblink in list(lib_consumers):
+						if resolver._pkg(consumer_dblink.mycpv, "installed",
+							root_config, installed=True) in clean_set:
+							lib_consumers.remove(consumer_dblink)
+							continue
+
+					if lib_consumers:
+						consumers[lib] = lib_consumers
+					else:
+						del consumers[lib]
+				if not consumers:
+					del consumer_map[pkg]
+
+		if consumer_map:
+			# TODO: Implement a package set for rebuilding consumer packages.
+
+			msg = "In order to avoid breakage of link level " + \
+				"dependencies, one or more packages will not be removed. " + \
+				"This can be solved by rebuilding " + \
+				"the packages that pulled them in."
+
+			prefix = bad(" * ")
+			from textwrap import wrap
+			writemsg_level("".join(prefix + "%s\n" % line for \
+				line in wrap(msg, 70)), level=logging.WARNING, noiselevel=-1)
+
+			msg = []
+			for pkg in sorted(consumer_map, key=cmp_sort_key(cmp_pkg_cpv)):
+				consumers = consumer_map[pkg]
+				consumer_libs = {}
+				for lib, lib_consumers in consumers.items():
+					for consumer in lib_consumers:
+						consumer_libs.setdefault(
+							consumer.mycpv, set()).add(linkmap.getSoname(lib))
+				unique_consumers = set(chain(*consumers.values()))
+				unique_consumers = sorted(consumer.mycpv \
+					for consumer in unique_consumers)
+				msg.append("")
+				msg.append("  %s pulled in by:" % (pkg.cpv,))
+				for consumer in unique_consumers:
+					libs = consumer_libs[consumer]
+					msg.append("    %s needs %s" % \
+						(consumer, ', '.join(sorted(libs))))
+			msg.append("")
+			writemsg_level("".join(prefix + "%s\n" % line for line in msg),
+				level=logging.WARNING, noiselevel=-1)
+
+			# Add lib providers to the graph as children of lib consumers,
+			# and also add any dependencies pulled in by the provider.
+			writemsg_level(">>> Adding lib providers to graph...\n")
+
+			for pkg, consumers in consumer_map.items():
+				for consumer_dblink in set(chain(*consumers.values())):
+					consumer_pkg = resolver._pkg(consumer_dblink.mycpv,
+						"installed", root_config, installed=True)
+					if not resolver._add_pkg(pkg,
+						Dependency(parent=consumer_pkg,
+						priority=UnmergeDepPriority(runtime=True),
+						root=pkg.root)):
+						resolver.display_problems()
+						return True, [], False, 0, []
+
+			writemsg_level("\nCalculating dependencies  ")
+			success = resolver._complete_graph(
+				required_sets={myroot:required_sets})
+			writemsg_level("\b\b... done!\n")
+			resolver.display_problems()
+			if not success:
+				return True, [], False, 0, []
+			unresolvable = unresolved_deps()
+			if not unresolvable is None:
+				return False, [], False, 0, unresolvable
+
+			graph = resolver._dynamic_config.digraph.copy()
+			required_pkgs_total = 0
+			for node in graph:
+				if isinstance(node, Package):
+					required_pkgs_total += 1
+			cleanlist = create_cleanlist()
+			if not cleanlist:
+				return 0, [], False, required_pkgs_total, unresolvable
+			clean_set = set(cleanlist)
+
+	if clean_set:
+		writemsg_level(">>> Calculating removal order...\n")
+		# Use a topological sort to create an unmerge order such that
+		# each package is unmerged before it's dependencies. This is
+		# necessary to avoid breaking things that may need to run
+		# during pkg_prerm or pkg_postrm phases.
+
+		# Create a new graph to account for dependencies between the
+		# packages being unmerged.
+		graph = digraph()
+		del cleanlist[:]
+
+		dep_keys = ["DEPEND", "RDEPEND", "PDEPEND"]
+		runtime = UnmergeDepPriority(runtime=True)
+		runtime_post = UnmergeDepPriority(runtime_post=True)
+		buildtime = UnmergeDepPriority(buildtime=True)
+		priority_map = {
+			"RDEPEND": runtime,
+			"PDEPEND": runtime_post,
+			"DEPEND": buildtime,
+		}
+
+		for node in clean_set:
+			graph.add(node, None)
+			mydeps = []
+			for dep_type in dep_keys:
+				depstr = node.metadata[dep_type]
+				if not depstr:
+					continue
+				priority = priority_map[dep_type]
+
+				try:
+					atoms = resolver._select_atoms(myroot, depstr,
+						myuse=node.use.enabled, parent=node,
+						priority=priority)[node]
+				except portage.exception.InvalidDependString:
+					# Ignore invalid deps of packages that will
+					# be uninstalled anyway.
+					continue
+
+				for atom in atoms:
+					if not isinstance(atom, portage.dep.Atom):
+						# Ignore invalid atoms returned from dep_check().
+						continue
+					if atom.blocker:
+						continue
+					matches = vardb.match_pkgs(atom)
+					if not matches:
+						continue
+					for child_node in matches:
+						if child_node in clean_set:
+							graph.add(child_node, node, priority=priority)
+
+		ordered = True
+		if len(graph.order) == len(graph.root_nodes()):
+			# If there are no dependencies between packages
+			# let unmerge() group them by cat/pn.
+			ordered = False
+			cleanlist = [pkg.cpv for pkg in graph.order]
+		else:
+			# Order nodes from lowest to highest overall reference count for
+			# optimal root node selection (this can help minimize issues
+			# with unaccounted implicit dependencies).
+			node_refcounts = {}
+			for node in graph.order:
+				node_refcounts[node] = len(graph.parent_nodes(node))
+			def cmp_reference_count(node1, node2):
+				return node_refcounts[node1] - node_refcounts[node2]
+			graph.order.sort(key=cmp_sort_key(cmp_reference_count))
+
+			ignore_priority_range = [None]
+			ignore_priority_range.extend(
+				range(UnmergeDepPriority.MIN, UnmergeDepPriority.MAX + 1))
+			while graph:
+				for ignore_priority in ignore_priority_range:
+					nodes = graph.root_nodes(ignore_priority=ignore_priority)
+					if nodes:
+						break
+				if not nodes:
+					raise AssertionError("no root nodes")
+				if ignore_priority is not None:
+					# Some deps have been dropped due to circular dependencies,
+					# so only pop one node in order to minimize the number that
+					# are dropped.
+					del nodes[1:]
+				for node in nodes:
+					graph.remove(node)
+					cleanlist.append(node.cpv)
+
+		return True, cleanlist, ordered, required_pkgs_total, []
+	return True, [], False, required_pkgs_total, []
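
The removal-order pass sorts unmerge candidates by parent count through portage's cmp_sort_key, which adapts an old-style cmp function; the stdlib ships the same adapter. The trick in isolation (refcounts hypothetical):

    from functools import cmp_to_key   # stdlib analogue of portage.util.cmp_sort_key

    node_refcounts = {'a': 2, 'b': 0, 'c': 1}
    def cmp_reference_count(node1, node2):
        return node_refcounts[node1] - node_refcounts[node2]

    order = sorted(node_refcounts, key=cmp_to_key(cmp_reference_count))
    print(order)   # ['b', 'c', 'a']: least-referenced nodes are removed first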

diff --git a/gobs/pym/flags.py b/gobs/pym/flags.py
index ba9faf6..c2e3bcc 100644
--- a/gobs/pym/flags.py
+++ b/gobs/pym/flags.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 from _emerge.main import parse_opts
 from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
 from _emerge.create_depgraph_params import create_depgraph_params
@@ -154,20 +155,21 @@ class gobs_use_flags(object):
 		return iuse_flags, final_flags
 
 	def get_needed_dep_useflags(self, build_use_flags_list):
+		cpv = self._cpv
 		tmpcmdline = []
 		tmpcmdline.append("-p")
 		tmpcmdline.append("--autounmask")
 		tmpcmdline.append("=" + self._cpv)
-		print tmpcmdline
+		print(tmpcmdline)
 		myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
-		print myaction, myopts, myfiles
+		print(myaction, myopts, myfiles)
 		myparams = create_depgraph_params(myopts, myaction)
-		print myparams
+		print(myparams)
 		settings, trees, mtimedb = load_emerge_config()
 		try:
 			success, mydepgraph, favorites = backtrack_depgraph(
 				settings, trees, myopts, myparams, myaction, myfiles, spinner=None)
-			print  success, mydepgraph, favorites
+			print(success, mydepgraph, favorites)
 		except portage.exception.PackageSetNotFound as e:
 			root_config = trees[settings["ROOT"]]["root_config"]
 			display_missing_pkg_set(root_config, e.value)
@@ -179,23 +181,22 @@ class gobs_use_flags(object):
 			use_changes = {}
 			for pkg, needed_use_config_changes in mydepgraph._dynamic_config._needed_use_config_changes.items():
 				new_use, changes = needed_use_config_changes
-				use_changes[pkg.self._cpv] = changes
-		print use_changes
+				use_changes[pkg.cpv] = changes
 		if use_changes is None:
 			return None
 		iteritems_packages = {}
 		for k, v in use_changes.iteritems():
 			k_package = portage.versions.cpv_getkey(k)
 			iteritems_packages[ k_package ] = v
-		print iteritems_packages
+		print('iteritems_packages', iteritems_packages)
 		return iteritems_packages
 							
 	def comper_useflags(self, build_dict):
 		iuse_flags, use_enable = self.get_flags()
 		iuse = []
-		print "use_enable", use_enable
+		print("use_enable", use_enable)
 		build_use_flags_dict = build_dict['build_useflags']
-		print "build_use_flags_dict", build_use_flags_dict
+		print("build_use_flags_dict", build_use_flags_dict)
 		build_use_flags_list = []
 		if use_enable == []:
 			if build_use_flags_dict is None:
@@ -209,10 +210,10 @@ class gobs_use_flags(object):
 			use_flagsDict[x] = True
 		for x in use_disable:
 			use_flagsDict[x] = False
-		print "use_flagsDict", use_flagsDict
+		print("use_flagsDict", use_flagsDict)
 		for k, v in use_flagsDict.iteritems():
-			print "tree use flags", k, v
-			print "db use flags", k, build_use_flags_dict[k]
+			print("tree use flags", k, v)
+			print("db use flags", k, build_use_flags_dict[k])
 		if build_use_flags_dict[k] != v:
 			if build_use_flags_dict[k] is True:
 				build_use_flags_list.append(k)
@@ -220,5 +221,5 @@ class gobs_use_flags(object):
 				build_use_flags_list.append("-" + k)
 		if build_use_flags_list == []:
 			build_use_flags_list = None
-		print build_use_flags_list
+		print(build_use_flags_list)
 		return build_use_flags_list
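
comper_useflags() reduces to diffing the tree's effective flag states against the db's and emitting "flag"/"-flag" for each difference (as committed, the closing "if" sits outside the for loop, so only the last flag is compared). A sketch of the comparison (helper name hypothetical):

    def diff_use_flags(tree_flags, db_flags):
        # both map flag -> bool; return flags to force, or None if nothing differs
        changed = []
        for flag, enabled in tree_flags.items():
            if db_flags.get(flag) != enabled:
                changed.append(flag if db_flags.get(flag) else "-" + flag)
        return changed or None

    print(diff_use_flags({'ssl': True, 'doc': False}, {'ssl': False, 'doc': True}))
    # prints ['-ssl', 'doc']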

diff --git a/gobs/pym/old_cpv.py b/gobs/pym/old_cpv.py
index 9dacd82..8a1a9c5 100644
--- a/gobs/pym/old_cpv.py
+++ b/gobs/pym/old_cpv.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 class gobs_old_cpv(object):
 	
 	def __init__(self, CM, myportdb, mysettings):
@@ -25,14 +26,14 @@ class gobs_old_cpv(object):
 			# Set no active on ebuilds in the db that no longer in tree
 			if  old_ebuild_list != []:
 				for old_ebuild in old_ebuild_list:
-					print "O", categories + "/" + package + "-" + old_ebuild[0]
+					print("O", categories + "/" + package + "-" + old_ebuild[0])
 					self.dbquerys.add_old_ebuild(conn,package_id, old_ebuild_list)
 		# Check if we have older no activ ebuilds then 60 days
 		ebuild_old_list_db = self.dbquerys.cp_list_old_db(conn,package_id)
 		# Delete older ebuilds in the db
 		if ebuild_old_list_db != []:
 			for del_ebuild_old in ebuild_old_list_db:
-				print "D", categories + "/" + package + "-" + del_ebuild_old[1]
+				print("D", categories + "/" + package + "-" + del_ebuild_old[1])
 			self.dbquerys.del_old_ebuild(conn,ebuild_old_list_db)
 		self._CM.putConnection(conn)
 
@@ -52,14 +53,14 @@ class gobs_old_cpv(object):
 			if mark_old_list != []:
 				for x in mark_old_list:
 					element = self.dbquerys.get_cp_from_package_id(conn,x)
-					print "O", element[0]
+					print("O", element[0])
 			# Check if we have older no activ categories/package then 60 days
 			del_package_id_old_list = self.dbquerys.cp_all_old_db(conn,old_package_id_list)
 		# Delete older  categories/package and ebuilds in the db
 		if del_package_id_old_list != []:
 			for i in del_package_id_old_list:
 				element = self.dbquerys.get_cp_from_package_id(conn,i)
-				print "D", element
+				print("D", element)
 			self.dbquerys.del_old_package(conn,del_package_id_old_list)
 		self._CM.putConnection(conn)
 		
@@ -80,5 +81,5 @@ class gobs_old_cpv(object):
 		if categories_old_list != []:
 			for real_old_categories in categories_old_list:
 				self.dbquerys.del_old_categories(conn,real_old_categoriess)
-				print "D", real_old_categories
+				print("D", real_old_categories)
 		self._CM.putConnection(conn)
\ No newline at end of file

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 46e13cb..4f0864d 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import portage
 from gobs.flags import gobs_use_flags
 from gobs.repoman_gobs import gobs_repoman
@@ -157,37 +158,10 @@ class gobs_package(object):
 					# Comper ebuild_version and add the ebuild_version to buildqueue
 					if portage.vercmp(v['ebuild_version_tree'], latest_ebuild_version) == 0:
 						self._dbquerys.add_new_package_buildqueue(conn,ebuild_id, config_id, use_flags_list, use_enable_list, message)
-						print "B",  config_id, v['categories'] + "/" + v['package'] + "-" + latest_ebuild_version, "USE:", use_enable	# B = Build config cpv use-flags
+						print("B",  config_id, v['categories'] + "/" + v['package'] + "-" + latest_ebuild_version, "USE:", use_enable)	# B = Build config cpv use-flags
 					i = i +1
 		self._CM.putConnection(conn)
 
-	def add_new_ebuild_buildquery_db_looked(self, build_dict, config_profile):
-		conn=self._CM.getConnection()
-		myportdb = portage.portdbapi(mysettings=self._mysettings)
-		cpv = build_dict['cpv']
-		message = None
-		init_useflags = gobs_use_flags(self._mysettings, myportdb, cpv)
-		iuse_flags_list, final_use_list = init_useflags.get_flags_looked()
-		iuse = []
-		use_flags_list = []
-		use_enable_list = []
-		for iuse_line in iuse_flags_list:
-			iuse.append(init_useflags.reduce_flag(iuse_line))
-		iuse_flags_list2 = list(set(iuse))
-		use_enable = final_use_list
-		use_disable = list(set(iuse_flags_list2).difference(set(use_enable)))
-		use_flagsDict = {}
-		for x in use_enable:
-			use_flagsDict[x] = True
-		for x in use_disable:
-			use_flagsDict[x] = False
-		for u, s in  use_flagsDict.iteritems():
-			use_flags_list.append(u)
-			use_enable_list.append(s)
-		ebuild_id = self._dbquerys.get_ebuild_id_db_checksum(conn, build_dict)
-		self._dbquerys.add_new_package_buildqueue(conn, ebuild_id, config_profile, use_flags_list, use_enable_list, message)
-		self._CM.putConnection(conn)
-
 	def get_package_metadataDict(self, pkgdir, package):
 		# Make package_metadataDict
 		attDict = {}
@@ -206,7 +180,7 @@ class gobs_package(object):
 	def add_new_package_db(self, categories, package):
 		conn=self._CM.getConnection()
 		# add new categories package ebuild to tables package and ebuilds
-		print "N", categories + "/" + package				# N = New Package
+		print("N", categories + "/" + package)				# N = New Package
 		pkgdir = self._mysettings['PORTDIR'] + "/" + categories + "/" + package		# Get PORTDIR + cp
 		categories_dir = self._mysettings['PORTDIR'] + "/" + categories + "/"
 		# Get the ebuild list for cp
@@ -231,7 +205,7 @@ class gobs_package(object):
 			manifest_error = init_manifest.digestcheck()
 			if manifest_error is not None:
 				qa_error.append(manifest_error)
-				print "QA:", categories + "/" + package, qa_error
+				print("QA:", categories + "/" + package, qa_error)
 			self._dbquerys.add_qa_repoman(conn,ebuild_id_list, qa_error, packageDict, config_id)
 			# Add the ebuild to the buildqueru table if needed
 			self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
@@ -256,7 +230,7 @@ class gobs_package(object):
 		# if we have the same checksum return else update the package
 		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
 		if manifest_checksum_tree != manifest_checksum_db:
-			print "U", categories + "/" + package		# U = Update
+			print("U", categories + "/" + package)		# U = Update
 			# Get package_metadataDict and update the db with it
 			package_metadataDict = self.get_package_metadataDict(pkgdir, package)
 			self._dbquerys.update_new_package_metadata(conn,package_id, package_metadataDict)
@@ -276,9 +250,9 @@ class gobs_package(object):
 					# Get packageDict for ebuild
 					packageDict[ebuild_line] = self.get_packageDict(pkgdir, ebuild_line, categories, package, config_id)
 					if ebuild_version_manifest_checksum_db is None:
-						print "N", categories + "/" + package + "-" + ebuild_version_tree	# N = New ebuild
+						print("N", categories + "/" + package + "-" + ebuild_version_tree)	# N = New ebuild
 					else:
-						print "U", categories + "/" + package + "-" + ebuild_version_tree	# U = Updated ebuild
+						print("U", categories + "/" + package + "-" + ebuild_version_tree)	# U = Updated ebuild
 						# Fix so we can use add_new_package_sql(packageDict) to update the ebuilds
 						old_ebuild_list.append(ebuild_version_tree)
 						self._dbquerys.add_old_ebuild(conn,package_id, old_ebuild_list)
@@ -297,7 +271,7 @@ class gobs_package(object):
 			manifest_error = init_manifest.digestcheck()
 			if manifest_error is not None:
 				qa_error.append(manifest_error)
-				print "QA:", categories + "/" + package, qa_error
+				print("QA:", categories + "/" + package, qa_error)
 			self._dbquerys.add_qa_repoman(conn,ebuild_id_list, qa_error, packageDict, config_id)
 			# Add the ebuild to the buildqueru table if needed
 			self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
@@ -324,4 +298,4 @@ class gobs_package(object):
 			self._dbquerys.add_old_ebuild(conn,package_id, old_ebuild_list)
 			self._dbquerys.update_active_ebuild(conn,package_id, ebuild_version_tree)
 		return_id = self._dbquerys.add_new_package_sql(conn,packageDict)
-		print 'return_id', return_id
\ No newline at end of file
+		print('return_id', return_id)
\ No newline at end of file

diff --git a/gobs/pym/text.py b/gobs/pym/text.py
index 9f5bb4e..2f1f689 100644
--- a/gobs/pym/text.py
+++ b/gobs/pym/text.py
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import sys
 import re
 import os
@@ -7,10 +8,8 @@ def  get_file_text(filename):
 	# Return the filename contents
 	try:
 		textfile = open(filename)
-	except IOError, oe:
-		if oe.errno not in (errno.ENOENT, ):
-			raise
-			return "No file", filename
+	except:
+		return "No file", filename
 	text = ""
 	for line in textfile:
 		text += unicode(line, 'utf-8')
@@ -21,10 +20,8 @@ def  get_ebuild_text(filename):
 	"""Return the ebuild contents"""
 	try:
 		ebuildfile = open(filename)
-	except IOError, oe:
-		if oe.errno not in (errno.ENOENT, ):
-			raise
-			return "No Ebuild file there"
+	except:
+		return "No Ebuild file there"
 	text = ""
 	dataLines = ebuildfile.readlines()
 	for i in dataLines:
@@ -40,12 +37,10 @@ def  get_ebuild_text(filename):
 
 def  get_log_text_list(filename):
 	"""Return the log contents as a list"""
-	print "filename", filename
+	print("filename", filename)
 	try:
 		logfile = open(filename)
-	except IOError, oe:
-		if oe.errno not in (errno.ENOENT, ):
-			raise
+	except:
 		return None
 	text = []
 	dataLines = logfile.readlines()
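
The replaced handlers used the Python 2-only "except IOError, oe:" spelling; instead of a bare except, the version-portable form keeps the errno check intact (a sketch of get_log_text_list under that assumption):

    import errno

    def get_log_text_list(filename):
        try:
            logfile = open(filename)
        except IOError as oe:              # valid on Python 2.6+ and 3.x
            if oe.errno != errno.ENOENT:
                raise                      # unexpected I/O error: propagate
            return None                    # missing file: same result as the commit
        return logfile.readlines()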




* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-08-30 23:41 Magnus Granberg
From: Magnus Granberg @ 2011-08-30 23:41 UTC
  To: gentoo-commits

commit:     972fe00dfbc63e23c5508c4503be33220a3a9769
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Aug 30 23:41:14 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Aug 30 23:41:14 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=972fe00d

Fix 'duplicate key value violates unique constraint' in arch.py

---
 gobs/pym/arch.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/arch.py b/gobs/pym/arch.py
index 1a26083..ebd0017 100644
--- a/gobs/pym/arch.py
+++ b/gobs/pym/arch.py
@@ -20,6 +20,6 @@ class gobs_arch(object):
 			for arch in arch_list:
 				if arch[0] not in ["~","-"]:
 					arch_list.append("-" + arch)
-					arch_list.append("-*")
-					add_new_arch_db(conn,arch_list)
+			arch_list.append("-*")
+			add_new_arch_db(conn,arch_list)
 		CM.putConnection(conn)
\ No newline at end of file
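
Moving the two lines out of the inner loop is the actual fix: previously "-*" was appended, and add_new_arch_db() called, once per stable keyword, hence the duplicate key insert. The corrected flow in isolation (keywords hypothetical; a copy is iterated here, while the committed code appends to the list it iterates):

    arch_list = ["amd64", "x86"]
    for arch in list(arch_list):
        if arch[0] not in ["~", "-"]:
            arch_list.append("-" + arch)
    arch_list.append("-*")              # exactly one "-*", one db insert
    print(arch_list)                    # ['amd64', 'x86', '-amd64', '-x86', '-*']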




* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-08-31  2:05 Magnus Granberg
From: Magnus Granberg @ 2011-08-31  2:05 UTC
  To: gentoo-commits

commit:     811e03314117d364ce6cf8dbcd6c38b091447105
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 02:05:00 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 02:05:00 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=811e0331

fix error when foo-ggg/baa is empty, part 2

---
 gobs/pym/package.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 78af8be..8705345 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -184,7 +184,7 @@ class gobs_package(object):
 		categories_dir = self._mysettings['PORTDIR'] + "/" + categories + "/"
 		# Get the ebuild list for cp
 		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
-		if ebuild_list_tree is []:
+		if ebuild_list_tree == []:
 			return None
 		config_cpv_listDict = self.config_match_ebuild(categories, package)
 		config_id  = get_default_config(conn)
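
"is" tests object identity and every [] literal builds a fresh list, so the old test could never be true; equality is the correct check. A quick interpreter confirmation:

    >>> [] is []
    False
    >>> [] == []
    True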



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-08-31 23:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-08-31 23:31 UTC (permalink / raw
  To: gentoo-commits

commit:     7fdbd21c446825fc650369fdb2946c4c0e8393fe
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Aug 31 23:31:29 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Aug 31 23:31:29 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7fdbd21c

fix error in mark_old_package_db()

---
 gobs/pym/old_cpv.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/old_cpv.py b/gobs/pym/old_cpv.py
index 4923bf7..2fa1fab 100644
--- a/gobs/pym/old_cpv.py
+++ b/gobs/pym/old_cpv.py
@@ -58,8 +58,8 @@ class gobs_old_cpv(object):
 				for x in mark_old_list:
 					element = get_cp_from_package_id(conn,x)
 					print("O", element[0])
-			# Check if we have older no activ categories/package then 60 days
-			del_package_id_old_list = cp_all_old_db(conn,old_package_id_list)
+		# Check if we have older no activ categories/package then 60 days
+		del_package_id_old_list = cp_all_old_db(conn,old_package_id_list)
 		# Delete older  categories/package and ebuilds in the db
 		if del_package_id_old_list != []:
 			for i in del_package_id_old_list:
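
Again a one-level dedent: the 60-day sweep belongs to the function body,
not to the loop over marked packages, so it now runs exactly once per call
and also runs when mark_old_list is empty. Stripped to its control flow
(helper names as in the diff above):

    def prune_old(conn, old_package_id_list, mark_old_list):
        # Per-package reporting stays inside the loop ...
        for package_id in mark_old_list:
            element = get_cp_from_package_id(conn, package_id)
            print("O", element[0])
        # ... the sweep runs once, whether or not anything was marked:
        return cp_all_old_db(conn, old_package_id_list)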



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-01 23:34 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-01 23:34 UTC (permalink / raw
  To: gentoo-commits

commit:     8d3415a5dee667d3c78b26a040c422dd713f6649
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Sep  1 23:34:19 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Sep  1 23:34:19 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=8d3415a5

Fix errors for Python 3.* support, part 2

---
 gobs/pym/check_setup.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 09823cd..7dcd053 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -77,7 +77,7 @@ def check_make_conf_guest(connection, config_profile):
 
 def check_configure_guest(connection, config_profile):
 	pass_make_conf = check_make_conf_guest(connection, config_profile)
-	print pass_make_conf
+	print(pass_make_conf)
 	if pass_make_conf == "1":
 		# profile not active or updatedb is runing
 		return False
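
These print fixes are the mechanical part of the Python 3 migration: print
is a function in 3.x, and the call form works on 2.x as well once the
__future__ import is in place (as build_log.py does elsewhere in this series):

    from __future__ import print_function  # first statement; a no-op on 3.x

    pass_make_conf = "1"
    print(pass_make_conf)   # a function call on both 2.x and 3.x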



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-13  1:02 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-13  1:02 UTC (permalink / raw
  To: gentoo-commits

commit:     e31fc2ef6f6ed357e2dbc9ded690a870a9c28839
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 13 01:02:19 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Sep 13 01:02:19 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e31fc2ef

fix an error in update_ebuild_db()

---
 gobs/pym/package.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 8705345..d2cc8ac 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -291,7 +291,7 @@ class gobs_package(object):
 		ebuild_version_tree = build_dict['ebuild_version']
 		pkgdir = self._mysettings['PORTDIR'] + "/" + categories + "/" + package		# Get PORTDIR with cp
 		packageDict ={}
-		ebuild_version_manifest_checksum_db = self._dbquerys.get_ebuild_checksum(conn,package_id, ebuild_version_tree)
+		ebuild_version_manifest_checksum_db = get_ebuild_checksum(conn,package_id, ebuild_version_tree)
 		packageDict[cpv] = self.get_packageDict(pkgdir, cpv, categories, package, config_id)
 		old_ebuild_list = []
 		if ebuild_version_manifest_checksum_db is not None:
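
The crash came from reaching get_ebuild_checksum() through a _dbquerys
attribute that gobs_package evidently no longer provides; the function is
a module-level database helper and is called directly. Assuming it lives
with the other query helpers in gobs.pgsql (the import path is a guess,
not shown by this diff):

    from gobs.pgsql import get_ebuild_checksum  # assumed location

    ebuild_version_manifest_checksum_db = get_ebuild_checksum(
        conn, package_id, ebuild_version_tree)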



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-13 23:06 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-13 23:06 UTC (permalink / raw
  To: gentoo-commits

commit:     42cb541dc1a86db0bf0e26dcc7ce5ffbe959b474
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 13 23:05:53 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Sep 13 23:05:53 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=42cb541d

more verbose output on backtrack_depgraph in get_needed_dep_useflags()

---
 gobs/pym/flags.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/flags.py b/gobs/pym/flags.py
index c2e3bcc..e01fed3 100644
--- a/gobs/pym/flags.py
+++ b/gobs/pym/flags.py
@@ -169,7 +169,7 @@ class gobs_use_flags(object):
 		try:
 			success, mydepgraph, favorites = backtrack_depgraph(
 				settings, trees, myopts, myparams, myaction, myfiles, spinner=None)
-			print(success, mydepgraph, favorites)
+			print("success mydepgraph favorites", success, mydepgraph, favorites)
 		except portage.exception.PackageSetNotFound as e:
 			root_config = trees[settings["ROOT"]]["root_config"]
 			display_missing_pkg_set(root_config, e.value)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-27 11:05 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-27 11:05 UTC (permalink / raw
  To: gentoo-commits

commit:     1191464787d9d5c2bba27290b28abe2f7bbd257f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 27 11:05:39 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Sep 27 11:05:39 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=11914647

uncomment update of timestamp for fail_querue_dict

---
 gobs/pym/pgsql.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 9979b1b..d181129 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -573,7 +573,7 @@ def update_fail_times(connection, fail_querue_dict):
 	sqlQ1 = 'UPDATE querue_retest SET fail_times = %s WHERE querue_id = %s AND fail_type = %s'
 	sqlQ2 = 'UPDATE buildqueue SET timestamp = NOW() WHERE queue_id = %s'
 	cursor.execute(sqlQ1, (fail_querue_dict['fail_times'], fail_querue_dict['querue_id'], fail_querue_dict['fail_type'],))
-	#cursor.execute(sqlQ2, (fail_querue_dict['querue_id'],))
+	cursor.execute(sqlQ2, (fail_querue_dict['querue_id'],))
 	connection.commit()
 
 def get_fail_querue_dict(connection, build_dict):
@@ -592,7 +592,7 @@ def add_fail_querue_dict(connection, fail_querue_dict):
 	sqlQ1 = 'INSERT INTO querue_retest (querue_id, fail_type, fail_times) VALUES ( %s, %s, %s)'
 	sqlQ2 = 'UPDATE buildqueue SET timestamp = NOW() WHERE queue_id = %s'
 	cursor.execute(sqlQ1, (fail_querue_dict['querue_id'],fail_querue_dict['fail_type'], fail_querue_dict['fail_times']))
-	#cursor.execute(sqlQ2, (fail_querue_dict['querue_id'],))
+	cursor.execute(sqlQ2, (fail_querue_dict['querue_id'],))
 	connection.commit()
 
 def make_conf_error(connection,config_profile):
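
Both re-enabled statements follow the house pattern in pgsql.py: %s
placeholders with a separate parameter tuple, so the driver handles all
quoting. The trailing comma is what makes a one-element tuple; a minimal
sketch (touch_queue_timestamp is a hypothetical helper):

    def touch_queue_timestamp(connection, queue_id):
        cursor = connection.cursor()
        sqlQ = 'UPDATE buildqueue SET timestamp = NOW() WHERE queue_id = %s'
        cursor.execute(sqlQ, (queue_id,))  # (x,) is a 1-tuple; (x) is just x
        connection.commit()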



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-27 23:43 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-27 23:43 UTC (permalink / raw
  To: gentoo-commits

commit:     132b5560d4981aea3cb4fe25aa29207278e44f5d
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Sep 27 23:43:25 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Sep 27 23:43:25 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=132b5560

fix bug in check_make_conf_guest

---
 gobs/pym/build_queru.py |   10 +++++-----
 gobs/pym/check_setup.py |    4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 7f5e3c1..65743ad 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -613,17 +613,17 @@ class queruaction(object):
 
 	def make_build_list(self, build_dict):
 		cpv = build_dict['category']+'/'+build_dict['package']+'-'+build_dict['ebuild_version']
-		pkgdir = os.path.join(self._mysettings['PORTDIR'], build_dict['category'] + "/" + build_dict['package'])
-		init_manifest =  gobs_manifest(self._mysettings, pkgdir)
+		pkgdir = os.path.join(portage.settings['PORTDIR'], build_dict['category'] + "/" + build_dict['package'])
+		init_manifest =  gobs_manifest(portage.settings, pkgdir)
 		try:
 			ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + build_dict['package'] + "-" + build_dict['ebuild_version'] + ".ebuild")[0]
 		except:
 			ebuild_version_checksum_tree = None
 		if ebuild_version_checksum_tree == build_dict['checksum']:
-			init_flags = gobs_use_flags(self._mysettings, self._myportdb, cpv)
+			init_flags = gobs_use_flags(portage.settings, portage.portdb, cpv)
 			build_use_flags_list = init_flags.comper_useflags(build_dict)
 			print("build_use_flags_list", build_use_flags_list)
-			manifest_error = init_manifest.check_file_in_manifest(self._myportdb, cpv, build_dict, build_use_flags_list)
+			manifest_error = init_manifest.check_file_in_manifest(portage.portdb, cpv, build_dict, build_use_flags_list)
 			if manifest_error is None:
 				build_dict['check_fail'] = False
 				build_use_flags_dict = {}
@@ -646,7 +646,7 @@ class queruaction(object):
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict):
 		build_cpv_list = []
-		abs_user_config = os.path.join(self._mysettings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
+		abs_user_config = os.path.join(portage.settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
 		print('abs_user_config', abs_user_config)
 		for k, v in buildqueru_cpv_dict.iteritems():
 				build_use_flags_list = []

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 6b52f29..58948ec 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -72,8 +72,8 @@ def check_make_conf_guest(config_profile):
 		open_make_conf = open(make_conf_file)
 		open_make_conf.close()
 		portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=False, expand=True)
-		portage.config()
-		portage.settings.validate()
+		mysettings = portage.config(config_root = "/")
+		mysettings.validate()
 		# With errors we return false
 	except Exception as e:
 		return "3"
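
The check_setup.py bug was that a portage.config() instance was built and
immediately discarded while validate() ran against the global
portage.settings; the guest check needs to validate its own object. A
trimmed sketch of the corrected check (make_conf_is_sane is hypothetical;
the real check_make_conf_guest above returns string codes):

    import portage
    from portage.util import getconfig

    def make_conf_is_sane(make_conf_file):
        try:
            getconfig(make_conf_file, tolerant=0,
                allow_sourcing=False, expand=True)
            mysettings = portage.config(config_root="/")
            mysettings.validate()
        except Exception:
            return False
        return True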



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-28  1:04 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-28  1:04 UTC (permalink / raw
  To: gentoo-commits

commit:     0eaa44031df5eb6e41690457bea7fe72e9bb47c0
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 28 01:04:35 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Sep 28 01:04:35 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=0eaa4403

remove prints

---
 gobs/pym/build_log.py |   11 -----------
 1 files changed, 0 insertions(+), 11 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 6539b37..2656732 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -49,25 +49,19 @@ class gobs_buildlog(object):
 
 	def search_info(self, textline, error_log_list):
 		if re.search(" * Package:", textline):
-			print('Package')
 			error_log_list.append(textline)
 		if re.search(" * Repository:", textline):
-			print('Repository')
 			error_log_list.append(textline)
 		if re.search(" * Maintainer:", textline):
 			error_log_list.append(textline)
-			print('Maintainer')
 		if re.search(" * USE:", textline):
 			error_log_list.append(textline)
-			print('USE')
 		if re.search(" * FEATURES:", textline):
 			error_log_list.append(textline)
-			print('FEATURES')
 		return error_log_list
 
 	def search_error(self, textline, error_log_list, sum_build_log_list, i):
 		if re.search("Error 1", textline):
-			print('Error')
 			x = i - 20
 			endline = True
 			error_log_list.append(".....\n")
@@ -79,7 +73,6 @@ class gobs_buildlog(object):
 				else:
 					x = x +1
 		if re.search(" * ERROR:", textline):
-			print('ERROR')
 			x = i
 			endline= True
 			field = textline.split(" ")
@@ -93,7 +86,6 @@ class gobs_buildlog(object):
 				else:
 					x = x +1
 		if re.search("configure: error:", textline):
-			print('configure: error:')
 			x = i - 4
 			endline = True
 			error_log_list.append(".....\n")
@@ -108,7 +100,6 @@ class gobs_buildlog(object):
 
 	def search_qa(self, textline, qa_error_list, error_log_list,i):
 		if re.search(" * QA Notice:", textline):
-			print('QA Notice')
 			x = i
 			qa_error_list.append(self._logfile_text[x])
 			endline= True
@@ -452,9 +443,7 @@ class gobs_buildlog(object):
 		if sum_build_log_list != []:
 			for sum_log_line in sum_build_log_list:
 				summary_error = summary_error + " " + sum_log_line
-		print('summary_error', summary_error)
 		build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  self._mysettings.get("PORTAGE_LOG_FILE"))
-		print(self._build_dict['queue_id'], build_error, summary_error, build_log_dict['logfilename'], build_log_dict)
 		if self._build_dict['queue_id'] is None:
 			build_id = self.add_new_ebuild_buildlog(build_error, summary_error, build_log_dict)
 		else:
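
Dropping the debug prints is right for a daemonized builder, but the same
breadcrumbs can be kept at debug level with the standard logging module,
silent unless explicitly enabled. One detail worth noting in the patterns
above: the unescaped * in " * Package:" quantifies the preceding space
rather than matching the literal asterisk of emerge's " * " prefix, so
escaping it makes the intent explicit. A sketch:

    import logging
    import re

    log = logging.getLogger("gobs.build_log")  # name is illustrative

    def search_info(textline, error_log_list):
        if re.search(r" \* Package:", textline):
            log.debug("matched: %s", textline.strip())  # off by default
            error_log_list.append(textline)
        return error_log_list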



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-28  1:39 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-28  1:39 UTC (permalink / raw
  To: gentoo-commits

commit:     40d73f20bf0851d5c9b058c3c6f6f06a4a60030d
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 28 01:39:15 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Sep 28 01:39:15 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=40d73f20

remove prints

---
 gobs/pym/pgsql.py |    8 --------
 1 files changed, 0 insertions(+), 8 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index d181129..013aab4 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -65,11 +65,9 @@ def check_revision(connection, build_dict):
   sqlQ2 = "SELECT useflag FROM ebuildqueuedwithuses WHERE queue_id = %s AND enabled = 'True'"
   cursor.execute(sqlQ1, (build_dict['ebuild_id'], build_dict['config_profile']))
   queue_id_list = cursor.fetchall()
-  print('queue_id_list',  queue_id_list)
   if queue_id_list == []:
     return None
   for queue_id in queue_id_list:
-    print('queue_id after list', queue_id[0])
     cursor.execute(sqlQ2, (queue_id[0],))
     entries = cursor.fetchall()
     build_useflags = []
@@ -78,9 +76,7 @@ def check_revision(connection, build_dict):
     else:
       for use_line in sorted(entries):
 	      build_useflags.append(use_line[0])
-    print("build_useflags build_dict['build_useflags']", build_useflags, build_dict['build_useflags'])
     if build_useflags == build_dict['build_useflags']:
-      print('queue_id', queue_id[0])
       return queue_id[0]
   return None
 
@@ -285,7 +281,6 @@ def get_ebuild_id_db_checksum(connection, build_dict):
 	sqlQ = 'SELECT id FROM ebuilds WHERE ebuild_version = %s AND ebuild_checksum = %s AND package_id = %s'
 	cursor.execute(sqlQ, (build_dict['ebuild_version'], build_dict['checksum'], build_dict['package_id']))
 	ebuild_id = cursor.fetchone()
-	print('ebuild_id', ebuild_id)
 	if ebuild_id is None:
 		return None
 	return ebuild_id[0]
@@ -487,8 +482,6 @@ def cp_list_old_db(connection, package_id):
 
 def move_queru_buildlog(connection, queue_id, build_error, summary_error, build_log_dict):
 	cursor = connection.cursor()
-	print('queue_id', queue_id)
-	print('build_log_dict', build_log_dict)
 	repoman_error_list = build_log_dict['repoman_error_list']
 	qa_error_list = build_log_dict['qa_error_list']
 	sqlQ = 'SELECT make_buildlog( %s, %s, %s, %s, %s, %s)'
@@ -504,7 +497,6 @@ def add_new_buildlog(connection, build_dict, use_flags_list, use_enable_list, bu
 	if not use_flags_list:
 		use_flags_list=None
 		use_enable=None
-	print('make_deplog', build_dict['ebuild_id'], build_dict['config_profile'], use_flags_list, use_enable_list, summary_error, build_error, build_log_dict['logfilename'], qa_error_list, repoman_error_list)
 	sqlQ = 'SELECT make_deplog( %s, %s, %s, %s, %s, %s, %s, %s, %s)'
 	params = (build_dict['ebuild_id'], build_dict['config_profile'], use_flags_list, use_enable_list, summary_error, build_error, build_log_dict['logfilename'], qa_error_list, repoman_error_list)
 	cursor.execute(sqlQ, params)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-28  1:41 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-28  1:41 UTC (permalink / raw
  To: gentoo-commits

commit:     a95afc56af666f38e402d1941414e0d0fad3ff8f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Sep 28 01:41:00 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Sep 28 01:41:00 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a95afc56

remove prints

---
 gobs/pym/ConnectionManager.py |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/ConnectionManager.py b/gobs/pym/ConnectionManager.py
index 1bbeb35..404e62f 100644
--- a/gobs/pym/ConnectionManager.py
+++ b/gobs/pym/ConnectionManager.py
@@ -11,7 +11,6 @@ class connectionManager(object):
         if not cls._instance:
             cls._instance = super(connectionManager, cls).__new__(cls, *args, **kwargs)
             #read the sql user/host etc and store it in the local object
-            print(settings_dict['sql_host'])
             cls._host=settings_dict['sql_host']
             cls._user=settings_dict['sql_user']
             cls._password=settings_dict['sql_passwd']
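
connectionManager is a __new__-based singleton: the first call stores the
credentials on the class and every later call returns the cached instance,
which is why the deleted print only ever fired once per process. The
pattern in miniature:

    class ConnectionManager(object):
        _instance = None

        def __new__(cls, settings_dict=None):
            if cls._instance is None:
                # First call must supply settings; they are read once.
                cls._instance = super(ConnectionManager, cls).__new__(cls)
                cls._instance._host = settings_dict['sql_host']
            return cls._instance

    # a = ConnectionManager({'sql_host': 'db.example'})
    # b = ConnectionManager()   # same object, settings already cached
    # assert a is b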



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-30 13:17 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-30 13:17 UTC (permalink / raw
  To: gentoo-commits

commit:     28772c5dcefe37aab4a7707b2366e538713e7d78
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 30 13:15:08 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Sep 30 13:15:08 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=28772c5d

add code for testing action_info()

---
 gobs/pym/build_log.py   |   16 ++++++++++++++++
 gobs/pym/build_queru.py |    8 ++++++--
 gobs/pym/package.py     |    3 ++-
 3 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 2656732..079a3c7 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -1,6 +1,14 @@
 from __future__ import print_function
 import re
+import os
+try:
+	from subprocess import getstatusoutput as subprocess_getstatusoutput
+except ImportError:
+	from commands import getstatusoutput as subprocess_getstatusoutput
 from gobs.text import get_log_text_list
+from _emerge.main import parse_opts
+from portage.util import writemsg, \
+	writemsg_level, writemsg_stdout
 from gobs.repoman_gobs import gobs_repoman
 import portage
 from gobs.readconf import get_conf_settings
@@ -428,6 +436,7 @@ class gobs_buildlog(object):
 						mydbapi=trees[self._mysettings["ROOT"]]["bintree"].dbapi,
 						tree="bintree")
 					shutil.rmtree(tmpdir)
+		print('emerge info list', msg)
 
 	def add_buildlog_main(self):
 		conn=CM.getConnection()
@@ -444,9 +453,16 @@ class gobs_buildlog(object):
 			for sum_log_line in sum_build_log_list:
 				summary_error = summary_error + " " + sum_log_line
 		build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  self._mysettings.get("PORTAGE_LOG_FILE"))
+		os.fchmod(self._mysettings.get("PORTAGE_LOG_FILE"), 224)
 		if self._build_dict['queue_id'] is None:
 			build_id = self.add_new_ebuild_buildlog(build_error, summary_error, build_log_dict)
 		else:
 			build_id = move_queru_buildlog(conn, self._build_dict['queue_id'], build_error, summary_error, build_log_dict)
 		# update_qa_repoman(conn, build_id, build_log_dict)
+		argscmd = []
+		myaction, myopts, myfiles = parse_opts(argscmd, silent=True)
+		trees = {
+		root : {'porttree' : portage.portagetree(root, settings=self._mysettings)}
+		}
+		action_info(self, trees, myopts, myfiles):
 		print("build_id", build_id[0], "logged to db.")

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 88eed9f..92d3286 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -102,6 +102,7 @@ class queruaction(object):
 						summary_error = summary_error + " " + sum_log_line
 				if settings.get("PORTAGE_LOG_FILE") is not None:
 					build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  settings.get("PORTAGE_LOG_FILE"))
+					# os.chmode(settings.get("PORTAGE_LOG_FILE"), 224)
 				else:
 					build_log_dict['logfilename'] = ""
 				move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
@@ -647,8 +648,10 @@ class queruaction(object):
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):
 		build_cpv_list = []
-		abs_user_config = os.path.join(settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
-		print('abs_user_config', abs_user_config)
+		try:
+			open("/etc/portage/package.use/gobs.use", "a"
+		except:
+			pass
 		for k, v in buildqueru_cpv_dict.iteritems():
 				build_use_flags_list = []
 				for x, y in v.iteritems():
@@ -682,6 +685,7 @@ class queruaction(object):
 		print('build_fail', build_fail)
 		if not "nodepclean" in build_dict['post_message']:
 			depclean_fail = main_depclean()
+		os.remove("/etc/portage/package.use/gobs.use")
 		if build_fail is False or depclean_fail is False:
 			return False
 		return True

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index d2cc8ac..cac1046 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -299,4 +299,5 @@ class gobs_package(object):
 			add_old_ebuild(conn,package_id, old_ebuild_list)
 			update_active_ebuild(conn,package_id, ebuild_version_tree)
 		return_id = add_new_package_sql(conn,packageDict)
-		print('return_id', return_id)
\ No newline at end of file
+		print('return_id', return_id)
+		CM.putConnection(conn)
\ No newline at end of file
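
Two details in this test code are worth flagging (the trailing colon on
the action_info() call and the unclosed open() call are fixed in the
follow-up commits below). First, os.fchmod() takes an open file
descriptor, not a path; os.chmod() is the path-based variant. Second, 224
is a decimal literal (equal to 0o340), which is rarely what a mode
argument intends. A hedged correction, assuming the goal is to restrict
the log file's permissions:

    import os

    log_path = "/var/log/portage/build.log"  # stand-in for PORTAGE_LOG_FILE
    if os.path.exists(log_path):
        os.chmod(log_path, 0o640)  # mode in octal; 0o640 is illustrative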



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-30 13:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-30 13:33 UTC (permalink / raw
  To: gentoo-commits

commit:     05de9d5f9221a80f2a7a5b41144e692a3c61ed6a
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 30 13:32:56 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Sep 30 13:32:56 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=05de9d5f

add code for testing action_info(), part 2

---
 gobs/pym/build_log.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 079a3c7..e0c33e1 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -464,5 +464,5 @@ class gobs_buildlog(object):
 		trees = {
 		root : {'porttree' : portage.portagetree(root, settings=self._mysettings)}
 		}
-		action_info(self, trees, myopts, myfiles):
+		action_info(self, trees, myopts, myfiles)
 		print("build_id", build_id[0], "logged to db.")



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-09-30 13:38 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-09-30 13:38 UTC (permalink / raw
  To: gentoo-commits

commit:     6d26dd46607ef7283ca0f6a94b1b252cebe2f811
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Sep 30 13:38:09 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Sep 30 13:38:09 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6d26dd46

comment out some open() error handling

---
 gobs/pym/build_queru.py |    7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 92d3286..17b6868 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -648,10 +648,9 @@ class queruaction(object):
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):
 		build_cpv_list = []
-		try:
-			open("/etc/portage/package.use/gobs.use", "a"
-		except:
-			pass
+		#try:
+		#	open("/etc/portage/package.use/gobs.use", "a"
+		#except:	
 		for k, v in buildqueru_cpv_dict.iteritems():
 				build_use_flags_list = []
 				for x, y in v.iteritems():
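
Commenting the block out silences the syntax error (the open() call from
the previous commit was missing its closing parenthesis), but when the
code returns, touching /etc/portage/package.use/gobs.use is safer with a
context manager, so the handle is closed even on failure. A sketch,
assuming the file only needs to exist before the loop runs:

    gobs_use = "/etc/portage/package.use/gobs.use"
    try:
        with open(gobs_use, "a"):  # create if absent, keep existing content
            pass
    except IOError:
        pass  # e.g. read-only /etc; build_procces() can still proceed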



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-09 21:49 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-09 21:49 UTC (permalink / raw
  To: gentoo-commits

commit:     8f6725c6ed8d63dfaa74c7d99916b53004d6c99e
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Oct  9 21:48:35 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Oct  9 21:48:35 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=8f6725c6

Use Scheduler.py instead of the hooks

---
 gobs/pym/Scheduler.py   | 1986 +++++++++++++++++++++++++++++++++++++++++++++++
 gobs/pym/build_log.py   |  214 ++++--
 gobs/pym/build_queru.py |    2 +-
 gobs/pym/pgsql.py       |    2 +-
 4 files changed, 2122 insertions(+), 82 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
new file mode 100644
index 0000000..cab701c
--- /dev/null
+++ b/gobs/pym/Scheduler.py
@@ -0,0 +1,1986 @@
+# Copyright 1999-2011 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from __future__ import print_function
+
+from collections import deque
+import gc
+import gzip
+import logging
+import shutil
+import signal
+import sys
+import tempfile
+import textwrap
+import time
+import warnings
+import weakref
+import zlib
+
+import portage
+from portage import os
+from portage import _encodings
+from portage import _unicode_decode, _unicode_encode
+from portage.cache.mappings import slot_dict_class
+from portage.elog.messages import eerror
+from portage.localization import _
+from portage.output import colorize, create_color_func, red
+bad = create_color_func("BAD")
+from portage._sets import SETPREFIX
+from portage._sets.base import InternalPackageSet
+from portage.util import writemsg, writemsg_level
+from portage.package.ebuild.digestcheck import digestcheck
+from portage.package.ebuild.digestgen import digestgen
+from portage.package.ebuild.prepare_build_dirs import prepare_build_dirs
+
+import _emerge
+from _emerge.BinpkgFetcher import BinpkgFetcher
+from _emerge.BinpkgPrefetcher import BinpkgPrefetcher
+from _emerge.BinpkgVerifier import BinpkgVerifier
+from _emerge.Blocker import Blocker
+from _emerge.BlockerDB import BlockerDB
+from _emerge.clear_caches import clear_caches
+from _emerge.create_depgraph_params import create_depgraph_params
+from _emerge.create_world_atom import create_world_atom
+from _emerge.DepPriority import DepPriority
+from _emerge.depgraph import depgraph, resume_depgraph
+from _emerge.EbuildFetcher import EbuildFetcher
+from _emerge.EbuildPhase import EbuildPhase
+from _emerge.emergelog import emergelog
+from _emerge.FakeVartree import FakeVartree
+from _emerge._find_deep_system_runtime_deps import _find_deep_system_runtime_deps
+from _emerge._flush_elog_mod_echo import _flush_elog_mod_echo
+from _emerge.JobStatusDisplay import JobStatusDisplay
+from _emerge.MergeListItem import MergeListItem
+from _emerge.MiscFunctionsProcess import MiscFunctionsProcess
+from _emerge.Package import Package
+from _emerge.PackageMerge import PackageMerge
+from _emerge.PollScheduler import PollScheduler
+from _emerge.RootConfig import RootConfig
+from _emerge.SlotObject import SlotObject
+from _emerge.SequentialTaskQueue import SequentialTaskQueue
+
+from gobs.build_log import gobs_buildlog
+
+if sys.hexversion >= 0x3000000:
+	basestring = str
+
+class Scheduler(PollScheduler):
+
+	# max time between display status updates (milliseconds)
+	_max_display_latency = 3000
+
+	_opts_ignore_blockers = \
+		frozenset(["--buildpkgonly",
+		"--fetchonly", "--fetch-all-uri",
+		"--nodeps", "--pretend"])
+
+	_opts_no_background = \
+		frozenset(["--pretend",
+		"--fetchonly", "--fetch-all-uri"])
+
+	_opts_no_restart = frozenset(["--buildpkgonly",
+		"--fetchonly", "--fetch-all-uri", "--pretend"])
+
+	_bad_resume_opts = set(["--ask", "--changelog",
+		"--resume", "--skipfirst"])
+
+	class _iface_class(SlotObject):
+		__slots__ = ("fetch",
+			"output", "register", "schedule",
+			"scheduleSetup", "scheduleUnpack", "scheduleYield",
+			"unregister")
+
+	class _fetch_iface_class(SlotObject):
+		__slots__ = ("log_file", "schedule")
+
+	_task_queues_class = slot_dict_class(
+		("merge", "jobs", "ebuild_locks", "fetch", "unpack"), prefix="")
+
+	class _build_opts_class(SlotObject):
+		__slots__ = ("buildpkg", "buildpkgonly",
+			"fetch_all_uri", "fetchonly", "pretend")
+
+	class _binpkg_opts_class(SlotObject):
+		__slots__ = ("fetchonly", "getbinpkg", "pretend")
+
+	class _pkg_count_class(SlotObject):
+		__slots__ = ("curval", "maxval")
+
+	class _emerge_log_class(SlotObject):
+		__slots__ = ("xterm_titles",)
+
+		def log(self, *pargs, **kwargs):
+			if not self.xterm_titles:
+				# Avoid interference with the scheduler's status display.
+				kwargs.pop("short_msg", None)
+			emergelog(self.xterm_titles, *pargs, **kwargs)
+
+	class _failed_pkg(SlotObject):
+		__slots__ = ("build_dir", "build_log", "pkg", "returncode")
+
+	class _ConfigPool(object):
+		"""Interface for a task to temporarily allocate a config
+		instance from a pool. This allows a task to be constructed
+		long before the config instance actually becomes needed, like
+		when prefetchers are constructed for the whole merge list."""
+		__slots__ = ("_root", "_allocate", "_deallocate")
+		def __init__(self, root, allocate, deallocate):
+			self._root = root
+			self._allocate = allocate
+			self._deallocate = deallocate
+		def allocate(self):
+			return self._allocate(self._root)
+		def deallocate(self, settings):
+			self._deallocate(settings)
+
+	class _unknown_internal_error(portage.exception.PortageException):
+		"""
+		Used internally to terminate scheduling. The specific reason for
+		the failure should have been dumped to stderr.
+		"""
+		def __init__(self, value=""):
+			portage.exception.PortageException.__init__(self, value)
+
+	def __init__(self, settings, trees, mtimedb, myopts,
+		spinner, mergelist=None, favorites=None, graph_config=None):
+		PollScheduler.__init__(self)
+
+		if mergelist is not None:
+			warnings.warn("The mergelist parameter of the " + \
+				"_emerge.Scheduler constructor is now unused. Use " + \
+				"the graph_config parameter instead.",
+				DeprecationWarning, stacklevel=2)
+
+		self.settings = settings
+		self.target_root = settings["ROOT"]
+		self.trees = trees
+		self.myopts = myopts
+		self._spinner = spinner
+		self._mtimedb = mtimedb
+		self._favorites = favorites
+		self._args_set = InternalPackageSet(favorites, allow_repo=True)
+		self._build_opts = self._build_opts_class()
+		for k in self._build_opts.__slots__:
+			setattr(self._build_opts, k, "--" + k.replace("_", "-") in myopts)
+		self._binpkg_opts = self._binpkg_opts_class()
+		for k in self._binpkg_opts.__slots__:
+			setattr(self._binpkg_opts, k, "--" + k.replace("_", "-") in myopts)
+
+		self.curval = 0
+		self._logger = self._emerge_log_class()
+		self._task_queues = self._task_queues_class()
+		for k in self._task_queues.allowed_keys:
+			setattr(self._task_queues, k,
+				SequentialTaskQueue())
+
+		# Holds merges that will wait to be executed when no builds are
+		# executing. This is useful for system packages since dependencies
+		# on system packages are frequently unspecified. For example, see
+		# bug #256616.
+		self._merge_wait_queue = deque()
+		# Holds merges that have been transfered from the merge_wait_queue to
+		# the actual merge queue. They are removed from this list upon
+		# completion. Other packages can start building only when this list is
+		# empty.
+		self._merge_wait_scheduled = []
+
+		# Holds system packages and their deep runtime dependencies. Before
+		# being merged, these packages go to merge_wait_queue, to be merged
+		# when no other packages are building.
+		self._deep_system_deps = set()
+
+		# Holds packages to merge which will satisfy currently unsatisfied
+		# deep runtime dependencies of system packages. If this is not empty
+		# then no parallel builds will be spawned until it is empty. This
+		# minimizes the possibility that a build will fail due to the system
+		# being in a fragile state. For example, see bug #259954.
+		self._unsatisfied_system_deps = set()
+
+		self._status_display = JobStatusDisplay(
+			xterm_titles=('notitles' not in settings.features))
+		self._max_load = myopts.get("--load-average")
+		max_jobs = myopts.get("--jobs")
+		if max_jobs is None:
+			max_jobs = 1
+		self._set_max_jobs(max_jobs)
+
+		# The root where the currently running
+		# portage instance is installed.
+		self._running_root = trees["/"]["root_config"]
+		self.edebug = 0
+		if settings.get("PORTAGE_DEBUG", "") == "1":
+			self.edebug = 1
+		self.pkgsettings = {}
+		self._config_pool = {}
+		for root in self.trees:
+			self._config_pool[root] = []
+
+		self._fetch_log = os.path.join(_emerge.emergelog._emerge_log_dir,
+			'emerge-fetch.log')
+		fetch_iface = self._fetch_iface_class(log_file=self._fetch_log,
+			schedule=self._schedule_fetch)
+		self._sched_iface = self._iface_class(
+			fetch=fetch_iface, output=self._task_output,
+			register=self._register,
+			schedule=self._schedule_wait,
+			scheduleSetup=self._schedule_setup,
+			scheduleUnpack=self._schedule_unpack,
+			scheduleYield=self._schedule_yield,
+			unregister=self._unregister)
+
+		self._prefetchers = weakref.WeakValueDictionary()
+		self._pkg_queue = []
+		self._running_tasks = {}
+		self._completed_tasks = set()
+
+		self._failed_pkgs = []
+		self._failed_pkgs_all = []
+		self._failed_pkgs_die_msgs = []
+		self._post_mod_echo_msgs = []
+		self._parallel_fetch = False
+		self._init_graph(graph_config)
+		merge_count = len([x for x in self._mergelist \
+			if isinstance(x, Package) and x.operation == "merge"])
+		self._pkg_count = self._pkg_count_class(
+			curval=0, maxval=merge_count)
+		self._status_display.maxval = self._pkg_count.maxval
+
+		# The load average takes some time to respond when new
+		# jobs are added, so we need to limit the rate of adding
+		# new jobs.
+		self._job_delay_max = 10
+		self._job_delay_factor = 1.0
+		self._job_delay_exp = 1.5
+		self._previous_job_start_time = None
+
+		# This is used to memoize the _choose_pkg() result when
+		# no packages can be chosen until one of the existing
+		# jobs completes.
+		self._choose_pkg_return_early = False
+
+		features = self.settings.features
+		if "parallel-fetch" in features and \
+			not ("--pretend" in self.myopts or \
+			"--fetch-all-uri" in self.myopts or \
+			"--fetchonly" in self.myopts):
+			if "distlocks" not in features:
+				portage.writemsg(red("!!!")+"\n", noiselevel=-1)
+				portage.writemsg(red("!!!")+" parallel-fetching " + \
+					"requires the distlocks feature enabled"+"\n",
+					noiselevel=-1)
+				portage.writemsg(red("!!!")+" you have it disabled, " + \
+					"thus parallel-fetching is being disabled"+"\n",
+					noiselevel=-1)
+				portage.writemsg(red("!!!")+"\n", noiselevel=-1)
+			elif merge_count > 1:
+				self._parallel_fetch = True
+
+		if self._parallel_fetch:
+				# clear out existing fetch log if it exists
+				try:
+					open(self._fetch_log, 'w').close()
+				except EnvironmentError:
+					pass
+
+		self._running_portage = None
+		portage_match = self._running_root.trees["vartree"].dbapi.match(
+			portage.const.PORTAGE_PACKAGE_ATOM)
+		if portage_match:
+			cpv = portage_match.pop()
+			self._running_portage = self._pkg(cpv, "installed",
+				self._running_root, installed=True)
+
+	def _terminate_tasks(self):
+		self._status_display.quiet = True
+		while self._running_tasks:
+			task_id, task = self._running_tasks.popitem()
+			task.cancel()
+		for q in self._task_queues.values():
+			q.clear()
+
+	def _init_graph(self, graph_config):
+		"""
+		Initialization structures used for dependency calculations
+		involving currently installed packages.
+		"""
+		self._set_graph_config(graph_config)
+		self._blocker_db = {}
+		for root in self.trees:
+			if graph_config is None:
+				fake_vartree = FakeVartree(self.trees[root]["root_config"],
+					pkg_cache=self._pkg_cache)
+				fake_vartree.sync()
+			else:
+				fake_vartree = graph_config.trees[root]['vartree']
+			self._blocker_db[root] = BlockerDB(fake_vartree)
+
+	def _destroy_graph(self):
+		"""
+		Use this to free memory at the beginning of _calc_resume_list().
+		After _calc_resume_list(), the _init_graph() method
+		must to be called in order to re-generate the structures that
+		this method destroys. 
+		"""
+		self._blocker_db = None
+		self._set_graph_config(None)
+		gc.collect()
+
+	def _poll(self, timeout=None):
+
+		self._schedule()
+
+		if timeout is None:
+			while True:
+				if not self._poll_event_handlers:
+					self._schedule()
+					if not self._poll_event_handlers:
+						raise StopIteration(
+							"timeout is None and there are no poll() event handlers")
+				previous_count = len(self._poll_event_queue)
+				PollScheduler._poll(self, timeout=self._max_display_latency)
+				self._status_display.display()
+				if previous_count != len(self._poll_event_queue):
+					break
+
+		elif timeout <= self._max_display_latency:
+			PollScheduler._poll(self, timeout=timeout)
+			if timeout == 0:
+				# The display is updated by _schedule() above, so it would be
+				# redundant to update it here when timeout is 0.
+				pass
+			else:
+				self._status_display.display()
+
+		else:
+			remaining_timeout = timeout
+			start_time = time.time()
+			while True:
+				previous_count = len(self._poll_event_queue)
+				PollScheduler._poll(self,
+					timeout=min(self._max_display_latency, remaining_timeout))
+				self._status_display.display()
+				if previous_count != len(self._poll_event_queue):
+					break
+				elapsed_time = time.time() - start_time
+				if elapsed_time < 0:
+					# The system clock has changed such that start_time
+					# is now in the future, so just assume that the
+					# timeout has already elapsed.
+					break
+				remaining_timeout = timeout - 1000 * elapsed_time
+				if remaining_timeout <= 0:
+					break
+
+	def _set_max_jobs(self, max_jobs):
+		self._max_jobs = max_jobs
+		self._task_queues.jobs.max_jobs = max_jobs
+		if "parallel-install" in self.settings.features:
+			self._task_queues.merge.max_jobs = max_jobs
+
+	def _background_mode(self):
+		"""
+		Check if background mode is enabled and adjust states as necessary.
+
+		@rtype: bool
+		@returns: True if background mode is enabled, False otherwise.
+		"""
+		background = (self._max_jobs is True or \
+			self._max_jobs > 1 or "--quiet" in self.myopts \
+			or "--quiet-build" in self.myopts) and \
+			not bool(self._opts_no_background.intersection(self.myopts))
+
+		if background:
+			interactive_tasks = self._get_interactive_tasks()
+			if interactive_tasks:
+				background = False
+				writemsg_level(">>> Sending package output to stdio due " + \
+					"to interactive package(s):\n",
+					level=logging.INFO, noiselevel=-1)
+				msg = [""]
+				for pkg in interactive_tasks:
+					pkg_str = "  " + colorize("INFORM", str(pkg.cpv))
+					if pkg.root != "/":
+						pkg_str += " for " + pkg.root
+					msg.append(pkg_str)
+				msg.append("")
+				writemsg_level("".join("%s\n" % (l,) for l in msg),
+					level=logging.INFO, noiselevel=-1)
+				if self._max_jobs is True or self._max_jobs > 1:
+					self._set_max_jobs(1)
+					writemsg_level(">>> Setting --jobs=1 due " + \
+						"to the above interactive package(s)\n",
+						level=logging.INFO, noiselevel=-1)
+					writemsg_level(">>> In order to temporarily mask " + \
+						"interactive updates, you may\n" + \
+						">>> specify --accept-properties=-interactive\n",
+						level=logging.INFO, noiselevel=-1)
+		self._status_display.quiet = \
+			not background or \
+			("--quiet" in self.myopts and \
+			"--verbose" not in self.myopts)
+
+		self._logger.xterm_titles = \
+			"notitles" not in self.settings.features and \
+			self._status_display.quiet
+
+		return background
+
+	def _get_interactive_tasks(self):
+		interactive_tasks = []
+		for task in self._mergelist:
+			if not (isinstance(task, Package) and \
+				task.operation == "merge"):
+				continue
+			if 'interactive' in task.metadata.properties:
+				interactive_tasks.append(task)
+		return interactive_tasks
+
+	def _set_graph_config(self, graph_config):
+
+		if graph_config is None:
+			self._graph_config = None
+			self._pkg_cache = {}
+			self._digraph = None
+			self._mergelist = []
+			self._deep_system_deps.clear()
+			return
+
+		self._graph_config = graph_config
+		self._pkg_cache = graph_config.pkg_cache
+		self._digraph = graph_config.graph
+		self._mergelist = graph_config.mergelist
+
+		if "--nodeps" in self.myopts or \
+			(self._max_jobs is not True and self._max_jobs < 2):
+			# save some memory
+			self._digraph = None
+			graph_config.graph = None
+			graph_config.pkg_cache.clear()
+			self._deep_system_deps.clear()
+			for pkg in self._mergelist:
+				self._pkg_cache[pkg] = pkg
+			return
+
+		self._find_system_deps()
+		self._prune_digraph()
+		self._prevent_builddir_collisions()
+		if '--debug' in self.myopts:
+			writemsg("\nscheduler digraph:\n\n", noiselevel=-1)
+			self._digraph.debug_print()
+			writemsg("\n", noiselevel=-1)
+
+	def _find_system_deps(self):
+		"""
+		Find system packages and their deep runtime dependencies. Before being
+		merged, these packages go to merge_wait_queue, to be merged when no
+		other packages are building.
+		NOTE: This can only find deep system deps if the system set has been
+		added to the graph and traversed deeply (the depgraph "complete"
+		parameter will do this, triggered by emerge --complete-graph option).
+		"""
+		deep_system_deps = self._deep_system_deps
+		deep_system_deps.clear()
+		deep_system_deps.update(
+			_find_deep_system_runtime_deps(self._digraph))
+		deep_system_deps.difference_update([pkg for pkg in \
+			deep_system_deps if pkg.operation != "merge"])
+
+	def _prune_digraph(self):
+		"""
+		Prune any root nodes that are irrelevant.
+		"""
+
+		graph = self._digraph
+		completed_tasks = self._completed_tasks
+		removed_nodes = set()
+		while True:
+			for node in graph.root_nodes():
+				if not isinstance(node, Package) or \
+					(node.installed and node.operation == "nomerge") or \
+					node.onlydeps or \
+					node in completed_tasks:
+					removed_nodes.add(node)
+			if removed_nodes:
+				graph.difference_update(removed_nodes)
+			if not removed_nodes:
+				break
+			removed_nodes.clear()
+
+	def _prevent_builddir_collisions(self):
+		"""
+		When building stages, sometimes the same exact cpv needs to be merged
+		to both $ROOTs. Add edges to the digraph in order to avoid collisions
+		in the builddir. Currently, normal file locks would be inappropriate
+		for this purpose since emerge holds all of it's build dir locks from
+		the main process.
+		"""
+		cpv_map = {}
+		for pkg in self._mergelist:
+			if not isinstance(pkg, Package):
+				# a satisfied blocker
+				continue
+			if pkg.installed:
+				continue
+			if pkg.cpv not in cpv_map:
+				cpv_map[pkg.cpv] = [pkg]
+				continue
+			for earlier_pkg in cpv_map[pkg.cpv]:
+				self._digraph.add(earlier_pkg, pkg,
+					priority=DepPriority(buildtime=True))
+			cpv_map[pkg.cpv].append(pkg)
+
+	class _pkg_failure(portage.exception.PortageException):
+		"""
+		An instance of this class is raised by unmerge() when
+		an uninstallation fails.
+		"""
+		status = 1
+		def __init__(self, *pargs):
+			portage.exception.PortageException.__init__(self, pargs)
+			if pargs:
+				self.status = pargs[0]
+
+	def _schedule_fetch(self, fetcher):
+		"""
+		Schedule a fetcher, in order to control the number of concurrent
+		fetchers. If self._max_jobs is greater than 1 then the fetch
+		queue is bypassed and the fetcher is started immediately,
+		otherwise it is added to the front of the parallel-fetch queue.
+		NOTE: The parallel-fetch queue is currently used to serialize
+		access to the parallel-fetch log, so changes in the log handling
+		would be required before it would be possible to enable
+		concurrent fetching within the parallel-fetch queue.
+		"""
+		if self._max_jobs > 1:
+			fetcher.start()
+		else:
+			self._task_queues.fetch.addFront(fetcher)
+
+	def _schedule_setup(self, setup_phase):
+		"""
+		Schedule a setup phase on the merge queue, in order to
+		serialize unsandboxed access to the live filesystem.
+		"""
+		if self._task_queues.merge.max_jobs > 1 and \
+			"ebuild-locks" in self.settings.features:
+			# Use a separate queue for ebuild-locks when the merge
+			# queue allows more than 1 job (due to parallel-install),
+			# since the portage.locks module does not behave as desired
+			# if we try to lock the same file multiple times
+			# concurrently from the same process.
+			self._task_queues.ebuild_locks.add(setup_phase)
+		else:
+			self._task_queues.merge.add(setup_phase)
+		self._schedule()
+
+	def _schedule_unpack(self, unpack_phase):
+		"""
+		Schedule an unpack phase on the unpack queue, in order
+		to serialize $DISTDIR access for live ebuilds.
+		"""
+		self._task_queues.unpack.add(unpack_phase)
+
+	def _find_blockers(self, new_pkg):
+		"""
+		Returns a callable.
+		"""
+		def get_blockers():
+			return self._find_blockers_impl(new_pkg)
+		return get_blockers
+
+	def _find_blockers_impl(self, new_pkg):
+		if self._opts_ignore_blockers.intersection(self.myopts):
+			return None
+
+		blocker_db = self._blocker_db[new_pkg.root]
+
+		blocker_dblinks = []
+		for blocking_pkg in blocker_db.findInstalledBlockers(new_pkg):
+			if new_pkg.slot_atom == blocking_pkg.slot_atom:
+				continue
+			if new_pkg.cpv == blocking_pkg.cpv:
+				continue
+			blocker_dblinks.append(portage.dblink(
+				blocking_pkg.category, blocking_pkg.pf, blocking_pkg.root,
+				self.pkgsettings[blocking_pkg.root], treetype="vartree",
+				vartree=self.trees[blocking_pkg.root]["vartree"]))
+
+		return blocker_dblinks
+
+	def _generate_digests(self):
+		"""
+		Generate digests if necessary for --digests or FEATURES=digest.
+		In order to avoid interference, this must done before parallel
+		tasks are started.
+		"""
+
+		if '--fetchonly' in self.myopts:
+			return os.EX_OK
+
+		digest = '--digest' in self.myopts
+		if not digest:
+			for pkgsettings in self.pkgsettings.values():
+				if pkgsettings.mycpv is not None:
+					# ensure that we are using global features
+					# settings rather than those from package.env
+					pkgsettings.reset()
+				if 'digest' in pkgsettings.features:
+					digest = True
+					break
+
+		if not digest:
+			return os.EX_OK
+
+		for x in self._mergelist:
+			if not isinstance(x, Package) or \
+				x.type_name != 'ebuild' or \
+				x.operation != 'merge':
+				continue
+			pkgsettings = self.pkgsettings[x.root]
+			if pkgsettings.mycpv is not None:
+				# ensure that we are using global features
+				# settings rather than those from package.env
+				pkgsettings.reset()
+			if '--digest' not in self.myopts and \
+				'digest' not in pkgsettings.features:
+				continue
+			portdb = x.root_config.trees['porttree'].dbapi
+			ebuild_path = portdb.findname(x.cpv, myrepo=x.repo)
+			if ebuild_path is None:
+				raise AssertionError("ebuild not found for '%s'" % x.cpv)
+			pkgsettings['O'] = os.path.dirname(ebuild_path)
+			if not digestgen(mysettings=pkgsettings, myportdb=portdb):
+				writemsg_level(
+					"!!! Unable to generate manifest for '%s'.\n" \
+					% x.cpv, level=logging.ERROR, noiselevel=-1)
+				return 1
+
+		return os.EX_OK
+
+	def _env_sanity_check(self):
+		"""
+		Verify a sane environment before trying to build anything from source.
+		"""
+		have_src_pkg = False
+		for x in self._mergelist:
+			if isinstance(x, Package) and not x.built:
+				have_src_pkg = True
+				break
+
+		if not have_src_pkg:
+			return os.EX_OK
+
+		for settings in self.pkgsettings.values():
+			for var in ("ARCH", ):
+				value = settings.get(var)
+				if value and value.strip():
+					continue
+				msg = _("%(var)s is not set... "
+					"Are you missing the '%(configroot)setc/make.profile' symlink? "
+					"Is the symlink correct? "
+					"Is your portage tree complete?") % \
+					{"var": var, "configroot": settings["PORTAGE_CONFIGROOT"]}
+
+				out = portage.output.EOutput()
+				for line in textwrap.wrap(msg, 70):
+					out.eerror(line)
+				return 1
+
+		return os.EX_OK
+
+	def _check_manifests(self):
+		# Verify all the manifests now so that the user is notified of failure
+		# as soon as possible.
+		if "strict" not in self.settings.features or \
+			"--fetchonly" in self.myopts or \
+			"--fetch-all-uri" in self.myopts:
+			return os.EX_OK
+
+		shown_verifying_msg = False
+		quiet_settings = {}
+		for myroot, pkgsettings in self.pkgsettings.items():
+			quiet_config = portage.config(clone=pkgsettings)
+			quiet_config["PORTAGE_QUIET"] = "1"
+			quiet_config.backup_changes("PORTAGE_QUIET")
+			quiet_settings[myroot] = quiet_config
+			del quiet_config
+
+		failures = 0
+
+		for x in self._mergelist:
+			if not isinstance(x, Package) or \
+				x.type_name != "ebuild":
+				continue
+
+			if x.operation == "uninstall":
+				continue
+
+			if not shown_verifying_msg:
+				shown_verifying_msg = True
+				self._status_msg("Verifying ebuild manifests")
+
+			root_config = x.root_config
+			portdb = root_config.trees["porttree"].dbapi
+			quiet_config = quiet_settings[root_config.root]
+			ebuild_path = portdb.findname(x.cpv, myrepo=x.repo)
+			if ebuild_path is None:
+				raise AssertionError("ebuild not found for '%s'" % x.cpv)
+			quiet_config["O"] = os.path.dirname(ebuild_path)
+			if not digestcheck([], quiet_config, strict=True):
+				failures |= 1
+
+		if failures:
+			return 1
+		return os.EX_OK
+
+	def _add_prefetchers(self):
+
+		if not self._parallel_fetch:
+			return
+
+		if self._parallel_fetch:
+			self._status_msg("Starting parallel fetch")
+
+			prefetchers = self._prefetchers
+			getbinpkg = "--getbinpkg" in self.myopts
+
+			for pkg in self._mergelist:
+				# mergelist can contain solved Blocker instances
+				if not isinstance(pkg, Package) or pkg.operation == "uninstall":
+					continue
+				prefetcher = self._create_prefetcher(pkg)
+				if prefetcher is not None:
+					self._task_queues.fetch.add(prefetcher)
+					prefetchers[pkg] = prefetcher
+
+			# Start the first prefetcher immediately so that self._task()
+			# won't discard it. This avoids a case where the first
+			# prefetcher is discarded, causing the second prefetcher to
+			# occupy the fetch queue before the first fetcher has an
+			# opportunity to execute.
+			self._task_queues.fetch.schedule()
+
+	def _create_prefetcher(self, pkg):
+		"""
+		@return: a prefetcher, or None if not applicable
+		"""
+		prefetcher = None
+
+		if not isinstance(pkg, Package):
+			pass
+
+		elif pkg.type_name == "ebuild":
+
+			prefetcher = EbuildFetcher(background=True,
+				config_pool=self._ConfigPool(pkg.root,
+				self._allocate_config, self._deallocate_config),
+				fetchonly=1, logfile=self._fetch_log,
+				pkg=pkg, prefetch=True, scheduler=self._sched_iface)
+
+		elif pkg.type_name == "binary" and \
+			"--getbinpkg" in self.myopts and \
+			pkg.root_config.trees["bintree"].isremote(pkg.cpv):
+
+			prefetcher = BinpkgPrefetcher(background=True,
+				pkg=pkg, scheduler=self._sched_iface)
+
+		return prefetcher
+
+	def _is_restart_scheduled(self):
+		"""
+		Check if the merge list contains a replacement
+		for the current running instance, that will result
+		in restart after merge.
+		@rtype: bool
+		@returns: True if a restart is scheduled, False otherwise.
+		"""
+		if self._opts_no_restart.intersection(self.myopts):
+			return False
+
+		mergelist = self._mergelist
+
+		for i, pkg in enumerate(mergelist):
+			if self._is_restart_necessary(pkg) and \
+				i != len(mergelist) - 1:
+				return True
+
+		return False
+
+	def _is_restart_necessary(self, pkg):
+		"""
+		@return: True if merging the given package
+			requires restart, False otherwise.
+		"""
+
+		# Figure out if we need a restart.
+		if pkg.root == self._running_root.root and \
+			portage.match_from_list(
+			portage.const.PORTAGE_PACKAGE_ATOM, [pkg]):
+			if self._running_portage is None:
+				return True
+			elif pkg.cpv != self._running_portage.cpv or \
+				'9999' in pkg.cpv or \
+				'git' in pkg.inherited or \
+				'git-2' in pkg.inherited:
+				return True
+		return False
+
+	def _restart_if_necessary(self, pkg):
+		"""
+		Use execv() to restart emerge. This happens
+		if portage upgrades itself and there are
+		remaining packages in the list.
+		"""
+
+		if self._opts_no_restart.intersection(self.myopts):
+			return
+
+		if not self._is_restart_necessary(pkg):
+			return
+
+		if pkg == self._mergelist[-1]:
+			return
+
+		self._main_loop_cleanup()
+
+		logger = self._logger
+		pkg_count = self._pkg_count
+		mtimedb = self._mtimedb
+		bad_resume_opts = self._bad_resume_opts
+
+		logger.log(" ::: completed emerge (%s of %s) %s to %s" % \
+			(pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg.root))
+
+		logger.log(" *** RESTARTING " + \
+			"emerge via exec() after change of " + \
+			"portage version.")
+
+		mtimedb["resume"]["mergelist"].remove(list(pkg))
+		mtimedb.commit()
+		portage.run_exitfuncs()
+		# Don't trust sys.argv[0] here because eselect-python may modify it.
+		emerge_binary = os.path.join(portage.const.PORTAGE_BIN_PATH, 'emerge')
+		mynewargv = [emerge_binary, "--resume"]
+		resume_opts = self.myopts.copy()
+		# For automatic resume, we need to prevent
+		# any of bad_resume_opts from leaking in
+		# via EMERGE_DEFAULT_OPTS.
+		resume_opts["--ignore-default-opts"] = True
+		for myopt, myarg in resume_opts.items():
+			if myopt not in bad_resume_opts:
+				if myarg is True:
+					mynewargv.append(myopt)
+				elif isinstance(myarg, list):
+					# arguments like --exclude that use 'append' action
+					for x in myarg:
+						mynewargv.append("%s=%s" % (myopt, x))
+				else:
+					mynewargv.append("%s=%s" % (myopt, myarg))
+		# priority only needs to be adjusted on the first run
+		os.environ["PORTAGE_NICENESS"] = "0"
+		os.execv(mynewargv[0], mynewargv)
+
+	def _run_pkg_pretend(self):
+		"""
+		Since pkg_pretend output may be important, this method sends all
+		output directly to stdout (regardless of options like --quiet or
+		--jobs).
+		"""
+
+		failures = 0
+
+		# Use a local PollScheduler instance here, since we don't
+		# want tasks here to trigger the usual Scheduler callbacks
+		# that handle job scheduling and status display.
+		sched_iface = PollScheduler().sched_iface
+
+		for x in self._mergelist:
+			if not isinstance(x, Package):
+				continue
+
+			if x.operation == "uninstall":
+				continue
+
+			if x.metadata["EAPI"] in ("0", "1", "2", "3"):
+				continue
+
+			if "pretend" not in x.metadata.defined_phases:
+				continue
+
+			out_str =">>> Running pre-merge checks for " + colorize("INFORM", x.cpv) + "\n"
+			portage.util.writemsg_stdout(out_str, noiselevel=-1)
+
+			root_config = x.root_config
+			settings = self.pkgsettings[root_config.root]
+			settings.setcpv(x)
+			tmpdir = tempfile.mkdtemp()
+			tmpdir_orig = settings["PORTAGE_TMPDIR"]
+			settings["PORTAGE_TMPDIR"] = tmpdir
+
+			try:
+				if x.built:
+					tree = "bintree"
+					bintree = root_config.trees["bintree"].dbapi.bintree
+					fetched = False
+
+					# Display fetch on stdout, so that it's always clear what
+					# is consuming time here.
+					if bintree.isremote(x.cpv):
+						fetcher = BinpkgFetcher(pkg=x,
+							scheduler=sched_iface)
+						fetcher.start()
+						if fetcher.wait() != os.EX_OK:
+							failures += 1
+							continue
+						fetched = fetcher.pkg_path
+
+					verifier = BinpkgVerifier(pkg=x,
+						scheduler=sched_iface)
+					verifier.start()
+					if verifier.wait() != os.EX_OK:
+						failures += 1
+						continue
+
+					if fetched:
+						bintree.inject(x.cpv, filename=fetched)
+					tbz2_file = bintree.getname(x.cpv)
+					infloc = os.path.join(tmpdir, x.category, x.pf, "build-info")
+					os.makedirs(infloc)
+					portage.xpak.tbz2(tbz2_file).unpackinfo(infloc)
+					ebuild_path = os.path.join(infloc, x.pf + ".ebuild")
+					settings.configdict["pkg"]["EMERGE_FROM"] = "binary"
+					settings.configdict["pkg"]["MERGE_TYPE"] = "binary"
+
+				else:
+					tree = "porttree"
+					portdb = root_config.trees["porttree"].dbapi
+					ebuild_path = portdb.findname(x.cpv, myrepo=x.repo)
+					if ebuild_path is None:
+						raise AssertionError("ebuild not found for '%s'" % x.cpv)
+					settings.configdict["pkg"]["EMERGE_FROM"] = "ebuild"
+					if self._build_opts.buildpkgonly:
+						settings.configdict["pkg"]["MERGE_TYPE"] = "buildonly"
+					else:
+						settings.configdict["pkg"]["MERGE_TYPE"] = "source"
+
+				portage.package.ebuild.doebuild.doebuild_environment(ebuild_path,
+					"pretend", settings=settings,
+					db=self.trees[settings["ROOT"]][tree].dbapi)
+				prepare_build_dirs(root_config.root, settings, cleanup=0)
+
+				vardb = root_config.trees['vartree'].dbapi
+				settings["REPLACING_VERSIONS"] = " ".join(
+					set(portage.versions.cpv_getversion(match) \
+						for match in vardb.match(x.slot_atom) + \
+						vardb.match('='+x.cpv)))
+				pretend_phase = EbuildPhase(
+					phase="pretend", scheduler=sched_iface,
+					settings=settings)
+
+				pretend_phase.start()
+				ret = pretend_phase.wait()
+				if ret != os.EX_OK:
+					failures += 1
+				portage.elog.elog_process(x.cpv, settings)
+			finally:
+				shutil.rmtree(tmpdir)
+				settings["PORTAGE_TMPDIR"] = tmpdir_orig
+
+		if failures:
+			return 1
+		return os.EX_OK
+
+	def merge(self):
+		if "--resume" in self.myopts:
+			# We're resuming.
+			portage.writemsg_stdout(
+				colorize("GOOD", "*** Resuming merge...\n"), noiselevel=-1)
+			self._logger.log(" *** Resuming merge...")
+
+		self._save_resume_list()
+
+		try:
+			self._background = self._background_mode()
+		except self._unknown_internal_error:
+			return 1
+
+		for root in self.trees:
+			root_config = self.trees[root]["root_config"]
+
+			# Even for --pretend --fetch mode, PORTAGE_TMPDIR is required
+			# since it might spawn pkg_nofetch which requires PORTAGE_BUILDDIR
+			# for ensuring sane $PWD (bug #239560) and storing elog messages.
+			tmpdir = root_config.settings.get("PORTAGE_TMPDIR", "")
+			if not tmpdir or not os.path.isdir(tmpdir):
+				msg = "The directory specified in your " + \
+					"PORTAGE_TMPDIR variable, '%s', " % tmpdir + \
+					"does not exist. Please create this " + \
+					"directory or correct your PORTAGE_TMPDIR setting."
+				msg = textwrap.wrap(msg, 70)
+				out = portage.output.EOutput()
+				for l in msg:
+					out.eerror(l)
+				return 1
+
+			if self._background:
+				root_config.settings.unlock()
+				root_config.settings["PORTAGE_BACKGROUND"] = "1"
+				root_config.settings.backup_changes("PORTAGE_BACKGROUND")
+				root_config.settings.lock()
+
+			self.pkgsettings[root] = portage.config(
+				clone=root_config.settings)
+
+		keep_going = "--keep-going" in self.myopts
+		fetchonly = self._build_opts.fetchonly
+		mtimedb = self._mtimedb
+		failed_pkgs = self._failed_pkgs
+
+		rval = self._generate_digests()
+		if rval != os.EX_OK:
+			return rval
+
+		rval = self._env_sanity_check()
+		if rval != os.EX_OK:
+			return rval
+
+		# TODO: Immediately recalculate deps here if --keep-going
+		#       is enabled and corrupt manifests are detected.
+		rval = self._check_manifests()
+		if rval != os.EX_OK and not keep_going:
+			return rval
+
+		if not fetchonly:
+			rval = self._run_pkg_pretend()
+			if rval != os.EX_OK:
+				return rval
+
+		while True:
+
+			received_signal = []
+
+			def sighandler(signum, frame):
+				signal.signal(signal.SIGINT, signal.SIG_IGN)
+				signal.signal(signal.SIGTERM, signal.SIG_IGN)
+				portage.util.writemsg("\n\nExiting on signal %(signal)s\n" % \
+					{"signal":signum})
+				self.terminate()
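+				# Encode the signal in the exit status, following
+				# the shell convention of 128 + signum.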
+				received_signal.append(128 + signum)
+
+			earlier_sigint_handler = signal.signal(signal.SIGINT, sighandler)
+			earlier_sigterm_handler = signal.signal(signal.SIGTERM, sighandler)
+
+			try:
+				rval = self._merge()
+			finally:
+				# Restore previous handlers
+				if earlier_sigint_handler is not None:
+					signal.signal(signal.SIGINT, earlier_sigint_handler)
+				else:
+					signal.signal(signal.SIGINT, signal.SIG_DFL)
+				if earlier_sigterm_handler is not None:
+					signal.signal(signal.SIGTERM, earlier_sigterm_handler)
+				else:
+					signal.signal(signal.SIGTERM, signal.SIG_DFL)
+
+			if received_signal:
+				sys.exit(received_signal[0])
+
+			if rval == os.EX_OK or fetchonly or not keep_going:
+				break
+			if "resume" not in mtimedb:
+				break
+			mergelist = self._mtimedb["resume"].get("mergelist")
+			if not mergelist:
+				break
+
+			if not failed_pkgs:
+				break
+
+			for failed_pkg in failed_pkgs:
+				mergelist.remove(list(failed_pkg.pkg))
+
+			self._failed_pkgs_all.extend(failed_pkgs)
+			del failed_pkgs[:]
+
+			if not mergelist:
+				break
+
+			if not self._calc_resume_list():
+				break
+
+			clear_caches(self.trees)
+			if not self._mergelist:
+				break
+
+			self._save_resume_list()
+			self._pkg_count.curval = 0
+			self._pkg_count.maxval = len([x for x in self._mergelist \
+				if isinstance(x, Package) and x.operation == "merge"])
+			self._status_display.maxval = self._pkg_count.maxval
+
+		self._logger.log(" *** Finished. Cleaning up...")
+
+		if failed_pkgs:
+			self._failed_pkgs_all.extend(failed_pkgs)
+			del failed_pkgs[:]
+
+		printer = portage.output.EOutput()
+		background = self._background
+		failure_log_shown = False
+		if background and len(self._failed_pkgs_all) == 1:
+			# If only one package failed then just show its
+			# whole log for easy viewing.
+			failed_pkg = self._failed_pkgs_all[-1]
+			build_dir = failed_pkg.build_dir
+			log_file = None
+			log_file_real = None
+
+			log_paths = [failed_pkg.build_log]
+
+			log_path = self._locate_failure_log(failed_pkg)
+			if log_path is not None:
+				try:
+					log_file = open(_unicode_encode(log_path,
+						encoding=_encodings['fs'], errors='strict'), mode='rb')
+				except IOError:
+					pass
+				else:
+					if log_path.endswith('.gz'):
+						log_file_real = log_file
+						log_file = gzip.GzipFile(filename='',
+							mode='rb', fileobj=log_file)
+
+			if log_file is not None:
+				try:
+					for line in log_file:
+						writemsg_level(line, noiselevel=-1)
+				except zlib.error as e:
+					writemsg_level("%s\n" % (e,), level=logging.ERROR,
+						noiselevel=-1)
+				finally:
+					log_file.close()
+					if log_file_real is not None:
+						log_file_real.close()
+				failure_log_shown = True
+
+		# Dump mod_echo output now since it tends to flood the terminal.
+		# This allows us to avoid having more important output, generated
+		# later, from being swept away by the mod_echo output.
+		mod_echo_output = _flush_elog_mod_echo()
+
+		if background and not failure_log_shown and \
+			self._failed_pkgs_all and \
+			self._failed_pkgs_die_msgs and \
+			not mod_echo_output:
+
+			for mysettings, key, logentries in self._failed_pkgs_die_msgs:
+				root_msg = ""
+				if mysettings["ROOT"] != "/":
+					root_msg = " merged to %s" % mysettings["ROOT"]
+				print()
+				printer.einfo("Error messages for package %s%s:" % \
+					(colorize("INFORM", key), root_msg))
+				print()
+				for phase in portage.const.EBUILD_PHASES:
+					if phase not in logentries:
+						continue
+					for msgtype, msgcontent in logentries[phase]:
+						if isinstance(msgcontent, basestring):
+							msgcontent = [msgcontent]
+						for line in msgcontent:
+							printer.eerror(line.strip("\n"))
+
+		if self._post_mod_echo_msgs:
+			for msg in self._post_mod_echo_msgs:
+				msg()
+
+		if len(self._failed_pkgs_all) > 1 or \
+			(self._failed_pkgs_all and keep_going):
+			if len(self._failed_pkgs_all) > 1:
+				msg = "The following %d packages have " % \
+					len(self._failed_pkgs_all) + \
+					"failed to build or install:"
+			else:
+				msg = "The following package has " + \
+					"failed to build or install:"
+
+			printer.eerror("")
+			for line in textwrap.wrap(msg, 72):
+				printer.eerror(line)
+			printer.eerror("")
+			for failed_pkg in self._failed_pkgs_all:
+				# Use _unicode_decode() to force unicode format string so
+				# that Package.__unicode__() is called in python2.
+				msg = _unicode_decode(" %s") % (failed_pkg.pkg,)
+				log_path = self._locate_failure_log(failed_pkg)
+				if log_path is not None:
+					msg += ", Log file:"
+				printer.eerror(msg)
+				if log_path is not None:
+					printer.eerror("  '%s'" % colorize('INFORM', log_path))
+			printer.eerror("")
+
+		if self._failed_pkgs_all:
+			return 1
+		return os.EX_OK
+
+	def _elog_listener(self, mysettings, key, logentries, fulltext):
+		errors = portage.elog.filter_loglevels(logentries, ["ERROR"])
+		if errors:
+			self._failed_pkgs_die_msgs.append(
+				(mysettings, key, errors))
+
+	def _locate_failure_log(self, failed_pkg):
+
+		build_dir = failed_pkg.build_dir
+		log_file = None
+
+		log_paths = [failed_pkg.build_log]
+
+		for log_path in log_paths:
+			if not log_path:
+				continue
+
+			try:
+				log_size = os.stat(log_path).st_size
+			except OSError:
+				continue
+
+			if log_size == 0:
+				continue
+
+			return log_path
+
+		return None
+
+	def _add_packages(self):
+		pkg_queue = self._pkg_queue
+		for pkg in self._mergelist:
+			if isinstance(pkg, Package):
+				pkg_queue.append(pkg)
+			elif isinstance(pkg, Blocker):
+				pass
+
+	def _system_merge_started(self, merge):
+		"""
+		Add any unsatisfied runtime deps to self._unsatisfied_system_deps.
+		In general, this keeps track of installed system packages with
+		unsatisfied RDEPEND or PDEPEND (circular dependencies). It can be
+		a fragile situation, so we don't execute any unrelated builds until
+		the circular dependencies are built and installed.
+		"""
+		graph = self._digraph
+		if graph is None:
+			return
+		pkg = merge.merge.pkg
+
+		# Skip this if $ROOT != / since it shouldn't matter if there
+		# are unsatisfied system runtime deps in this case.
+		if pkg.root != '/':
+			return
+
+		completed_tasks = self._completed_tasks
+		unsatisfied = self._unsatisfied_system_deps
+
+		def ignore_non_runtime_or_satisfied(priority):
+			"""
+			Ignore non-runtime and satisfied runtime priorities.
+			"""
+			if isinstance(priority, DepPriority) and \
+				not priority.satisfied and \
+				(priority.runtime or priority.runtime_post):
+				return False
+			return True
+
+		# When checking for unsatisfied runtime deps, only check
+		# direct deps since indirect deps are checked when the
+		# corresponding parent is merged.
+		for child in graph.child_nodes(pkg,
+			ignore_priority=ignore_non_runtime_or_satisfied):
+			if not isinstance(child, Package) or \
+				child.operation == 'uninstall':
+				continue
+			if child is pkg:
+				continue
+			if child.operation == 'merge' and \
+				child not in completed_tasks:
+				unsatisfied.add(child)
+
+	def _merge_wait_exit_handler(self, task):
+		self._merge_wait_scheduled.remove(task)
+		self._merge_exit(task)
+
+	def _merge_exit(self, merge):
+		self._running_tasks.pop(id(merge), None)
+		self._do_merge_exit(merge)
+		self._deallocate_config(merge.merge.settings)
+		if merge.returncode == os.EX_OK and \
+			not merge.merge.pkg.installed:
+			self._status_display.curval += 1
+		self._status_display.merges = len(self._task_queues.merge)
+		self._schedule()
+
+	def _do_merge_exit(self, merge):
+		pkg = merge.merge.pkg
+		settings = merge.merge.settings
+		trees = self.trees[merge.merge.settings["ROOT"]]
+		init_buildlog = gobs_buildlog()
+		if merge.returncode != os.EX_OK:
+			build_dir = settings.get("PORTAGE_BUILDDIR")
+			build_log = settings.get("PORTAGE_LOG_FILE")
+
+			self._failed_pkgs.append(self._failed_pkg(
+				build_dir=build_dir, build_log=build_log,
+				pkg=pkg,
+				returncode=merge.returncode))
+			if not self._terminated_tasks:
+				self._failed_pkg_msg(self._failed_pkgs[-1], "install", "to")
+				self._status_display.failed = len(self._failed_pkgs)
+			init_buildlog.add_buildlog_main(settings, pkg, trees)
+			return
+
+		self._task_complete(pkg)
+		pkg_to_replace = merge.merge.pkg_to_replace
+		if pkg_to_replace is not None:
+			# When a package is replaced, mark its uninstall
+			# task complete (if any).
+			if self._digraph is not None and \
+				pkg_to_replace in self._digraph:
+				try:
+					self._pkg_queue.remove(pkg_to_replace)
+				except ValueError:
+					pass
+				self._task_complete(pkg_to_replace)
+			else:
+				self._pkg_cache.pop(pkg_to_replace, None)
+
+		if pkg.installed:
+			init_buildlog.add_buildlog_main(settings, pkg, trees)
+			return
+
+		self._restart_if_necessary(pkg)
+
+		# Call mtimedb.commit() after each merge so that
+		# --resume still works after being interrupted
+		# by reboot, sigkill or similar.
+		mtimedb = self._mtimedb
+		mtimedb["resume"]["mergelist"].remove(list(pkg))
+		if not mtimedb["resume"]["mergelist"]:
+			del mtimedb["resume"]
+		mtimedb.commit()
+		init_buildlog.add_buildlog_main(settings, pkg, trees)
+
+	def _build_exit(self, build):
+		self._running_tasks.pop(id(build), None)
+		if build.returncode == os.EX_OK and self._terminated_tasks:
+			# We've been interrupted, so we won't
+			# add this to the merge queue.
+			self.curval += 1
+			self._deallocate_config(build.settings)
+		elif build.returncode == os.EX_OK:
+			self.curval += 1
+			merge = PackageMerge(merge=build)
+			self._running_tasks[id(merge)] = merge
+			if not build.build_opts.buildpkgonly and \
+				build.pkg in self._deep_system_deps:
+				# Since dependencies on system packages are frequently
+				# unspecified, merge them only when no builds are executing.
+				self._merge_wait_queue.append(merge)
+				merge.addStartListener(self._system_merge_started)
+			else:
+				merge.addExitListener(self._merge_exit)
+				self._task_queues.merge.add(merge)
+				self._status_display.merges = len(self._task_queues.merge)
+		else:
+			settings = build.settings
+			build_dir = settings.get("PORTAGE_BUILDDIR")
+			build_log = settings.get("PORTAGE_LOG_FILE")
+
+			self._failed_pkgs.append(self._failed_pkg(
+				build_dir=build_dir, build_log=build_log,
+				pkg=build.pkg,
+				returncode=build.returncode))
+			if not self._terminated_tasks:
+				self._failed_pkg_msg(self._failed_pkgs[-1], "emerge", "for")
+				self._status_display.failed = len(self._failed_pkgs)
+			self._deallocate_config(build.settings)
+		self._jobs -= 1
+		self._status_display.running = self._jobs
+		self._schedule()
+
+	def _extract_exit(self, build):
+		self._build_exit(build)
+
+	def _task_complete(self, pkg):
+		self._completed_tasks.add(pkg)
+		self._unsatisfied_system_deps.discard(pkg)
+		self._choose_pkg_return_early = False
+		blocker_db = self._blocker_db[pkg.root]
+		blocker_db.discardBlocker(pkg)
+
+	def _merge(self):
+
+		self._add_prefetchers()
+		self._add_packages()
+		pkg_queue = self._pkg_queue
+		failed_pkgs = self._failed_pkgs
+		portage.locks._quiet = self._background
+		portage.elog.add_listener(self._elog_listener)
+		rval = os.EX_OK
+
+		try:
+			self._main_loop()
+		finally:
+			self._main_loop_cleanup()
+			portage.locks._quiet = False
+			portage.elog.remove_listener(self._elog_listener)
+			if failed_pkgs:
+				rval = failed_pkgs[-1].returncode
+
+		return rval
+
+	def _main_loop_cleanup(self):
+		del self._pkg_queue[:]
+		self._completed_tasks.clear()
+		self._deep_system_deps.clear()
+		self._unsatisfied_system_deps.clear()
+		self._choose_pkg_return_early = False
+		self._status_display.reset()
+		self._digraph = None
+		self._task_queues.fetch.clear()
+		self._prefetchers.clear()
+
+	def _choose_pkg(self):
+		"""
+		Choose a task that has all its dependencies satisfied. This is used
+		for parallel build scheduling, and ensures that we don't build
+		anything with deep dependencies that have yet to be merged.
+		"""
+
+		if self._choose_pkg_return_early:
+			return None
+
+		if self._digraph is None:
+			if self._is_work_scheduled() and \
+				not ("--nodeps" in self.myopts and \
+				(self._max_jobs is True or self._max_jobs > 1)):
+				self._choose_pkg_return_early = True
+				return None
+			return self._pkg_queue.pop(0)
+
+		if not self._is_work_scheduled():
+			return self._pkg_queue.pop(0)
+
+		self._prune_digraph()
+
+		chosen_pkg = None
+
+		# Prefer uninstall operations when available.
+		graph = self._digraph
+		for pkg in self._pkg_queue:
+			if pkg.operation == 'uninstall' and \
+				not graph.child_nodes(pkg):
+				chosen_pkg = pkg
+				break
+
+		if chosen_pkg is None:
+			later = set(self._pkg_queue)
+			for pkg in self._pkg_queue:
+				later.remove(pkg)
+				if not self._dependent_on_scheduled_merges(pkg, later):
+					chosen_pkg = pkg
+					break
+
+		if chosen_pkg is not None:
+			self._pkg_queue.remove(chosen_pkg)
+
+		if chosen_pkg is None:
+			# There's no point in searching for a package to
+			# choose until at least one of the existing jobs
+			# completes.
+			self._choose_pkg_return_early = True
+
+		return chosen_pkg
+
+	def _dependent_on_scheduled_merges(self, pkg, later):
+		"""
+		Traverse the subgraph of the given package's deep dependencies
+		to see if it contains any scheduled merges.
+		@param pkg: a package to check dependencies for
+		@type pkg: Package
+		@param later: packages for which dependence should be ignored
+			since they will be merged later than pkg anyway and therefore
+			delaying the merge of pkg will not result in a more optimal
+			merge order
+		@type later: set
+		@rtype: bool
+		@returns: True if the package is dependent, False otherwise.
+		"""
+
+		graph = self._digraph
+		completed_tasks = self._completed_tasks
+
+		dependent = False
+		traversed_nodes = set([pkg])
+		direct_deps = graph.child_nodes(pkg)
+		node_stack = direct_deps
+		direct_deps = frozenset(direct_deps)
+		while node_stack:
+			node = node_stack.pop()
+			if node in traversed_nodes:
+				continue
+			traversed_nodes.add(node)
+			if not ((node.installed and node.operation == "nomerge") or \
+				(node.operation == "uninstall" and \
+				node not in direct_deps) or \
+				node in completed_tasks or \
+				node in later):
+				dependent = True
+				break
+
+			# Don't traverse children of uninstall nodes since
+			# those aren't dependencies in the usual sense.
+			if node.operation != "uninstall":
+				node_stack.extend(graph.child_nodes(node))
+
+		return dependent
+
+	def _allocate_config(self, root):
+		"""
+		Allocate a unique config instance for a task in order
+		to prevent interference between parallel tasks.
+		"""
+		if self._config_pool[root]:
+			temp_settings = self._config_pool[root].pop()
+		else:
+			temp_settings = portage.config(clone=self.pkgsettings[root])
+		# Since config.setcpv() isn't guaranteed to call config.reset() for
+		# performance reasons, call it here to make sure all settings from the
+		# previous package get flushed out (such as PORTAGE_LOG_FILE).
+		temp_settings.reload()
+		temp_settings.reset()
+		return temp_settings
+
+	def _deallocate_config(self, settings):
+		self._config_pool[settings["ROOT"]].append(settings)
+
+	def _main_loop(self):
+
+		# Only allow 1 job max if a restart is scheduled
+		# due to portage update.
+		if self._is_restart_scheduled() or \
+			self._opts_no_background.intersection(self.myopts):
+			self._set_max_jobs(1)
+
+		while self._schedule():
+			self._poll_loop()
+
+		while True:
+			self._schedule()
+			if not self._is_work_scheduled():
+				break
+			self._poll_loop()
+
+	def _keep_scheduling(self):
+		return bool(not self._terminated_tasks and self._pkg_queue and \
+			not (self._failed_pkgs and not self._build_opts.fetchonly))
+
+	def _is_work_scheduled(self):
+		return bool(self._running_tasks)
+
+	def _schedule_tasks(self):
+
+		while True:
+
+			# When the number of jobs and merges drops to zero,
+			# process a single merge from _merge_wait_queue if
+			# it's not empty. We only process one since these are
+			# special packages and we want to ensure that
+			# parallel-install does not cause more than one of
+			# them to install at the same time.
+			if (self._merge_wait_queue and not self._jobs and
+				not self._task_queues.merge):
+				task = self._merge_wait_queue.popleft()
+				task.addExitListener(self._merge_wait_exit_handler)
+				self._task_queues.merge.add(task)
+				self._status_display.merges = len(self._task_queues.merge)
+				self._merge_wait_scheduled.append(task)
+
+			self._schedule_tasks_imp()
+			self._status_display.display()
+
+			state_change = 0
+			for q in self._task_queues.values():
+				if q.schedule():
+					state_change += 1
+
+			# Cancel prefetchers if they're the only reason
+			# the main poll loop is still running.
+			if self._failed_pkgs and not self._build_opts.fetchonly and \
+				not self._is_work_scheduled() and \
+				self._task_queues.fetch:
+				self._task_queues.fetch.clear()
+				state_change += 1
+
+			if not (state_change or \
+				(self._merge_wait_queue and not self._jobs and
+				not self._task_queues.merge)):
+				break
+
+		return self._keep_scheduling()
+
+	def _job_delay(self):
+		"""
+		@rtype: bool
+		@returns: True if job scheduling should be delayed, False otherwise.
+		"""
+
+		if self._jobs and self._max_load is not None:
+
+			current_time = time.time()
+
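+			# Throttle new job starts as the job count grows: the delay
+			# is _job_delay_factor * jobs ** _job_delay_exp, capped at
+			# _job_delay_max, measured since the previous job start.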
+			delay = self._job_delay_factor * self._jobs ** self._job_delay_exp
+			if delay > self._job_delay_max:
+				delay = self._job_delay_max
+			if (current_time - self._previous_job_start_time) < delay:
+				return True
+
+		return False
+
+	def _schedule_tasks_imp(self):
+		"""
+		@rtype: bool
+		@returns: True if state changed, False otherwise.
+		"""
+
+		state_change = 0
+
+		while True:
+
+			if not self._keep_scheduling():
+				return bool(state_change)
+
+			if self._choose_pkg_return_early or \
+				self._merge_wait_scheduled or \
+				(self._jobs and self._unsatisfied_system_deps) or \
+				not self._can_add_job() or \
+				self._job_delay():
+				return bool(state_change)
+
+			pkg = self._choose_pkg()
+			if pkg is None:
+				return bool(state_change)
+
+			state_change += 1
+
+			if not pkg.installed:
+				self._pkg_count.curval += 1
+
+			task = self._task(pkg)
+
+			if pkg.installed:
+				merge = PackageMerge(merge=task)
+				self._running_tasks[id(merge)] = merge
+				merge.addExitListener(self._merge_exit)
+				self._task_queues.merge.addFront(merge)
+
+			elif pkg.built:
+				self._jobs += 1
+				self._previous_job_start_time = time.time()
+				self._status_display.running = self._jobs
+				self._running_tasks[id(task)] = task
+				task.addExitListener(self._extract_exit)
+				self._task_queues.jobs.add(task)
+
+			else:
+				self._jobs += 1
+				self._previous_job_start_time = time.time()
+				self._status_display.running = self._jobs
+				self._running_tasks[id(task)] = task
+				task.addExitListener(self._build_exit)
+				self._task_queues.jobs.add(task)
+
+		return bool(state_change)
+
+	def _task(self, pkg):
+
+		pkg_to_replace = None
+		if pkg.operation != "uninstall":
+			vardb = pkg.root_config.trees["vartree"].dbapi
+			previous_cpv = [x for x in vardb.match(pkg.slot_atom) \
+				if portage.cpv_getkey(x) == pkg.cp]
+			if not previous_cpv and vardb.cpv_exists(pkg.cpv):
+				# same cpv, different SLOT
+				previous_cpv = [pkg.cpv]
+			if previous_cpv:
+				previous_cpv = previous_cpv.pop()
+				pkg_to_replace = self._pkg(previous_cpv,
+					"installed", pkg.root_config, installed=True,
+					operation="uninstall")
+
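+		# A prefetcher may have been started for this package earlier;
+		# pass it along only if it is still running, otherwise drop it
+		# from the fetch queue since its work (if any) is already done.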
+		prefetcher = self._prefetchers.pop(pkg, None)
+		if prefetcher is not None and not prefetcher.isAlive():
+			try:
+				self._task_queues.fetch._task_queue.remove(prefetcher)
+			except ValueError:
+				pass
+			prefetcher = None
+
+		task = MergeListItem(args_set=self._args_set,
+			background=self._background, binpkg_opts=self._binpkg_opts,
+			build_opts=self._build_opts,
+			config_pool=self._ConfigPool(pkg.root,
+			self._allocate_config, self._deallocate_config),
+			emerge_opts=self.myopts,
+			find_blockers=self._find_blockers(pkg), logger=self._logger,
+			mtimedb=self._mtimedb, pkg=pkg, pkg_count=self._pkg_count.copy(),
+			pkg_to_replace=pkg_to_replace,
+			prefetcher=prefetcher,
+			scheduler=self._sched_iface,
+			settings=self._allocate_config(pkg.root),
+			statusMessage=self._status_msg,
+			world_atom=self._world_atom)
+
+		return task
+
+	def _failed_pkg_msg(self, failed_pkg, action, preposition):
+		pkg = failed_pkg.pkg
+		msg = "%s to %s %s" % \
+			(bad("Failed"), action, colorize("INFORM", pkg.cpv))
+		if pkg.root != "/":
+			msg += " %s %s" % (preposition, pkg.root)
+
+		log_path = self._locate_failure_log(failed_pkg)
+		if log_path is not None:
+			msg += ", Log file:"
+		self._status_msg(msg)
+
+		if log_path is not None:
+			self._status_msg(" '%s'" % (colorize("INFORM", log_path),))
+
+	def _status_msg(self, msg):
+		"""
+		Display a brief status message (no newlines) in the status display.
+		This is called by tasks to provide feedback to the user. This
+		delegates the responsibility of generating \r and \n control characters
+		to guarantee that lines are created or erased when necessary and
+		appropriate.
+
+		@type msg: str
+		@param msg: a brief status message (no newlines allowed)
+		"""
+		if not self._background:
+			writemsg_level("\n")
+		self._status_display.displayMessage(msg)
+
+	def _save_resume_list(self):
+		"""
+		Do this before verifying the ebuild Manifests since it might
+		be possible for the user to use --resume --skipfirst to get past
+		a non-essential package with a broken digest.
+		"""
+		mtimedb = self._mtimedb
+
+		mtimedb["resume"] = {}
+		# Stored as a dict starting with portage-2.1.6_rc1, and supported
+		# by >=portage-2.1.3_rc8. Versions <portage-2.1.3_rc8 only support
+		# a list type for options.
+		mtimedb["resume"]["myopts"] = self.myopts.copy()
+
+		# Convert Atom instances to plain str.
+		mtimedb["resume"]["favorites"] = [str(x) for x in self._favorites]
+		mtimedb["resume"]["mergelist"] = [list(x) \
+			for x in self._mergelist \
+			if isinstance(x, Package) and x.operation == "merge"]
+
+		mtimedb.commit()
+
+	def _calc_resume_list(self):
+		"""
+		Use the current resume list to calculate a new one,
+		dropping any packages with unsatisfied deps.
+		@rtype: bool
+		@returns: True if successful, False otherwise.
+		"""
+		print(colorize("GOOD", "*** Resuming merge..."))
+
+		# free some memory before creating
+		# the resume depgraph
+		self._destroy_graph()
+
+		myparams = create_depgraph_params(self.myopts, None)
+		success = False
+		e = None
+		try:
+			success, mydepgraph, dropped_tasks = resume_depgraph(
+				self.settings, self.trees, self._mtimedb, self.myopts,
+				myparams, self._spinner)
+		except depgraph.UnsatisfiedResumeDep as exc:
+			# rename variable to avoid python-3.0 error:
+			# SyntaxError: can not delete variable 'e' referenced in nested
+			#              scope
+			e = exc
+			mydepgraph = e.depgraph
+			dropped_tasks = set()
+
+		if e is not None:
+			def unsatisfied_resume_dep_msg():
+				mydepgraph.display_problems()
+				out = portage.output.EOutput()
+				out.eerror("One or more packages are either masked or " + \
+					"have missing dependencies:")
+				out.eerror("")
+				indent = "  "
+				show_parents = set()
+				for dep in e.value:
+					if dep.parent in show_parents:
+						continue
+					show_parents.add(dep.parent)
+					if dep.atom is None:
+						out.eerror(indent + "Masked package:")
+						out.eerror(2 * indent + str(dep.parent))
+						out.eerror("")
+					else:
+						out.eerror(indent + str(dep.atom) + " pulled in by:")
+						out.eerror(2 * indent + str(dep.parent))
+						out.eerror("")
+				msg = "The resume list contains packages " + \
+					"that are either masked or have " + \
+					"unsatisfied dependencies. " + \
+					"Please restart/continue " + \
+					"the operation manually, or use --skipfirst " + \
+					"to skip the first package in the list and " + \
+					"any other packages that may be " + \
+					"masked or have missing dependencies."
+				for line in textwrap.wrap(msg, 72):
+					out.eerror(line)
+			self._post_mod_echo_msgs.append(unsatisfied_resume_dep_msg)
+			return False
+
+		if success and self._show_list():
+			mylist = mydepgraph.altlist()
+			if mylist:
+				if "--tree" in self.myopts:
+					mylist.reverse()
+				mydepgraph.display(mylist, favorites=self._favorites)
+
+		if not success:
+			self._post_mod_echo_msgs.append(mydepgraph.display_problems)
+			return False
+		mydepgraph.display_problems()
+		self._init_graph(mydepgraph.schedulerGraph())
+
+		msg_width = 75
+		for task in dropped_tasks:
+			if not (isinstance(task, Package) and task.operation == "merge"):
+				continue
+			pkg = task
+			msg = "emerge --keep-going:" + \
+				" %s" % (pkg.cpv,)
+			if pkg.root != "/":
+				msg += " for %s" % (pkg.root,)
+			msg += " dropped due to unsatisfied dependency."
+			for line in textwrap.wrap(msg, msg_width):
+				eerror(line, phase="other", key=pkg.cpv)
+			settings = self.pkgsettings[pkg.root]
+			# Ensure that log collection from $T is disabled inside
+			# elog_process(), since any logs that might exist are
+			# not valid here.
+			settings.pop("T", None)
+			portage.elog.elog_process(pkg.cpv, settings)
+			self._failed_pkgs_all.append(self._failed_pkg(pkg=pkg))
+
+		return True
+
+	def _show_list(self):
+		myopts = self.myopts
+		if "--quiet" not in myopts and \
+			("--ask" in myopts or "--tree" in myopts or \
+			"--verbose" in myopts):
+			return True
+		return False
+
+	def _world_atom(self, pkg):
+		"""
+		Add the package to or remove it from the world file, but only if
+		it's supposed to be added or removed. Otherwise, do nothing.
+		"""
+
+		if set(("--buildpkgonly", "--fetchonly",
+			"--fetch-all-uri",
+			"--oneshot", "--onlydeps",
+			"--pretend")).intersection(self.myopts):
+			return
+
+		if pkg.root != self.target_root:
+			return
+
+		args_set = self._args_set
+		if not args_set.findAtomForPackage(pkg):
+			return
+
+		logger = self._logger
+		pkg_count = self._pkg_count
+		root_config = pkg.root_config
+		world_set = root_config.sets["selected"]
+		world_locked = False
+		if hasattr(world_set, "lock"):
+			world_set.lock()
+			world_locked = True
+
+		try:
+			if hasattr(world_set, "load"):
+				world_set.load() # maybe it's changed on disk
+
+			if pkg.operation == "uninstall":
+				if hasattr(world_set, "cleanPackage"):
+					world_set.cleanPackage(pkg.root_config.trees["vartree"].dbapi,
+							pkg.cpv)
+				if hasattr(world_set, "remove"):
+					for s in pkg.root_config.setconfig.active:
+						world_set.remove(SETPREFIX+s)
+			else:
+				atom = create_world_atom(pkg, args_set, root_config)
+				if atom:
+					if hasattr(world_set, "add"):
+						self._status_msg(('Recording %s in "world" ' + \
+							'favorites file...') % atom)
+						logger.log(" === (%s of %s) Updating world file (%s)" % \
+							(pkg_count.curval, pkg_count.maxval, pkg.cpv))
+						world_set.add(atom)
+					else:
+						writemsg_level('\n!!! Unable to record %s in "world"\n' % \
+							(atom,), level=logging.WARN, noiselevel=-1)
+		finally:
+			if world_locked:
+				world_set.unlock()
+
+	def _pkg(self, cpv, type_name, root_config, installed=False,
+		operation=None, myrepo=None):
+		"""
+		Get a package instance from the cache, or create a new
+		one if necessary. Raises KeyError from aux_get if it
+		fails for some reason (package does not exist or is
+		corrupt).
+		"""
+
+		# Reuse existing instance when available.
+		pkg = self._pkg_cache.get(Package._gen_hash_key(cpv=cpv,
+			type_name=type_name, repo_name=myrepo, root_config=root_config,
+			installed=installed, operation=operation))
+
+		if pkg is not None:
+			return pkg
+
+		tree_type = depgraph.pkg_tree_map[type_name]
+		db = root_config.trees[tree_type].dbapi
+		db_keys = list(self.trees[root_config.root][
+			tree_type].dbapi._aux_cache_keys)
+		metadata = zip(db_keys, db.aux_get(cpv, db_keys, myrepo=myrepo))
+		pkg = Package(built=(type_name != "ebuild"),
+			cpv=cpv, installed=installed, metadata=metadata,
+			root_config=root_config, type_name=type_name)
+		self._pkg_cache[pkg] = pkg
+		return pkg
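
The Scheduler above is a local copy of portage's _emerge/Scheduler.py
with gobs_buildlog wired into the build and merge exit paths. One part
worth spelling out is the self-restart path: when portage itself has
just been upgraded and packages remain in the mergelist, emerge re-execs
itself with --resume, rebuilding its argument vector from the option
dict rather than trusting sys.argv. A minimal standalone sketch of that
argv rebuild (the function and parameter names are illustrative, not
part of the commit):

	def build_resume_argv(emerge_binary, myopts, bad_resume_opts):
		# Start from a clean --resume invocation; --ignore-default-opts
		# keeps EMERGE_DEFAULT_OPTS from re-injecting unwanted options.
		argv = [emerge_binary, "--resume", "--ignore-default-opts"]
		for opt, arg in myopts.items():
			if opt in bad_resume_opts:
				continue
			if arg is True:
				argv.append(opt)
			elif isinstance(arg, list):
				# 'append'-style options such as --exclude repeat per value.
				argv.extend("%s=%s" % (opt, x) for x in arg)
			else:
				argv.append("%s=%s" % (opt, arg))
		return argv

	# The scheduler then replaces the current process:
	# os.execv(argv[0], argv)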

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index e0c33e1..7ffe53a 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -1,16 +1,25 @@
 from __future__ import print_function
 import re
 import os
+import platform
 try:
 	from subprocess import getstatusoutput as subprocess_getstatusoutput
 except ImportError:
 	from commands import getstatusoutput as subprocess_getstatusoutput
 from gobs.text import get_log_text_list
-from _emerge.main import parse_opts
+from _emerge.main import parse_opts, load_emerge_config, \
+        getportageversion
 from portage.util import writemsg, \
-	writemsg_level, writemsg_stdout
+        writemsg_level, writemsg_stdout
+from portage.exception import InvalidAtom
+from portage.dep import Atom
+from portage.dbapi._expand_new_virt import expand_new_virt
+from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH
+from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
+from portage.versions import catpkgsplit, cpv_getversion
 from gobs.repoman_gobs import gobs_repoman
 import portage
+from gobs.package import gobs_package
 from gobs.readconf import get_conf_settings
 from gobs.flags import gobs_use_flags
 reader=get_conf_settings()
@@ -24,17 +33,57 @@ if CM.getName()=='pgsql':
 
 class gobs_buildlog(object):
 	
-	def __init__(self,  mysettings, build_dict):
-		self._mysettings = mysettings
-		self._myportdb = portage.portdbapi(mysettings=self._mysettings)
-		self._build_dict = build_dict
-		self._logfile_text = get_log_text_list(self._mysettings.get("PORTAGE_LOG_FILE"))
+	def __init__(self):
+		self._config_profile = gobs_settings_dict['gobs_config']
 	
-	def add_new_ebuild_buildlog(self, build_error, summary_error, build_log_dict):
+	def get_build_dict_db(self, settings, pkg):
 		conn=CM.getConnection()
-		cpv = self._build_dict['cpv']
-		init_useflags = gobs_use_flags(self._mysettings, self._myportdb, cpv)
-		iuse_flags_list, final_use_list = init_useflags.get_flags_looked()
+		myportdb = portage.portdbapi(mysettings=settings)
+		cpvr_list = catpkgsplit(pkg.cpv, silent=1)
+		categories = cpvr_list[0]
+		package = cpvr_list[1]
+		ebuild_version = cpv_getversion(pkg.cpv)
+		print('cpv: ' + pkg.cpv)
+		init_package = gobs_package(settings, myportdb)
+		package_id = have_package_db(conn, categories, package)
+		# print("package_id %s" % package_id, file=sys.stdout)
+		build_dict = {}
+		mybuild_dict = {}
+		build_dict['ebuild_version'] = ebuild_version
+		build_dict['package_id'] = package_id
+		build_dict['cpv'] = pkg.cpv
+		build_dict['categories'] = categories
+		build_dict['package'] = package
+		build_dict['config_profile'] = self._config_profile
+		final_use_list = list(pkg.use.enabled)
+		#print 'final_use_list', final_use_list
+		if  final_use_list != []:
+			build_dict['build_useflags'] = final_use_list
+		else:
+			build_dict['build_useflags'] = None
+		#print "build_dict['build_useflags']", build_dict['build_useflags']
+		pkgdir = os.path.join(settings['PORTDIR'], categories + "/" + package)
+		ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + package + "-" + ebuild_version + ".ebuild")[0]
+		build_dict['checksum'] = ebuild_version_checksum_tree
+		ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
+		if ebuild_id is None:
+			#print 'have any ebuild',  get_ebuild_checksum(conn, package_id, ebuild_version)
+			init_package.update_ebuild_db(build_dict)
+			ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
+		build_dict['ebuild_id'] = ebuild_id
+		queue_id = check_revision(conn, build_dict)
+		if queue_id is None:
+			build_dict['queue_id'] = None
+		else:
+			build_dict['queue_id'] = queue_id
+		return build_dict
+
+	def add_new_ebuild_buildlog(self, settings, pkg, build_dict, build_error, summary_error, build_log_dict):
+		conn=CM.getConnection()
+		portdb = portage.portdbapi(mysettings=settings)
+		init_useflags = gobs_use_flags(settings, portdb, build_dict['cpv'])
+		iuse_flags_list = list(pkg.iuse.all)
+		final_use_list = list(pkg.use.enabled)
 		iuse = []
 		use_flags_list = []
 		use_enable_list = []
@@ -51,7 +100,7 @@ class gobs_buildlog(object):
 		for u, s in  use_flagsDict.iteritems():
 			use_flags_list.append(u)
 			use_enable_list.append(s)
-		build_id = add_new_buildlog(conn, self._build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict)
+		build_id = add_new_buildlog(conn, build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict)
 		CM.putConnection(conn)
 		return build_id
 
@@ -68,14 +117,14 @@ class gobs_buildlog(object):
 			error_log_list.append(textline)
 		return error_log_list
 
-	def search_error(self, textline, error_log_list, sum_build_log_list, i):
+	def search_error(self, logfile_text, textline, error_log_list, sum_build_log_list, i):
 		if re.search("Error 1", textline):
 			x = i - 20
 			endline = True
 			error_log_list.append(".....\n")
 			while x != i + 3 and endline:
 				try:
-					error_log_list.append(self._logfile_text[x])
+					error_log_list.append(logfile_text[x])
 				except:
 					endline = False
 				else:
@@ -88,7 +137,7 @@ class gobs_buildlog(object):
 			error_log_list.append(".....\n")
 			while x != i + 10 and endline:
 				try:
-					error_log_list.append(self._logfile_text[x])
+					error_log_list.append(logfile_text[x])
 				except:
 					endline = False
 				else:
@@ -99,30 +148,32 @@ class gobs_buildlog(object):
 			error_log_list.append(".....\n")
 			while x != i + 3 and endline:
 				try:
-					error_log_list.append(self._logfile_text[x])
+					error_log_list.append(logfile_text[x])
 				except:
 					endline = False
 				else:
 					x = x +1
 		return error_log_list, sum_build_log_list
 
-	def search_qa(self, textline, qa_error_list, error_log_list,i):
+	def search_qa(self, logfile_text, textline, qa_error_list, error_log_list, i):
 		if re.search(" * QA Notice:", textline):
 			x = i
-			qa_error_list.append(self._logfile_text[x])
+			qa_error_list.append(logfile_text[x])
 			endline= True
 			error_log_list.append(".....\n")
 			while x != i + 3 and endline:
 				try:
-					error_log_list.append(self._logfile_text[x])
+					error_log_list.append(logfile_text[x])
 				except:
 					endline = False
 				else:
 					x = x +1
 		return qa_error_list, error_log_list
 
-	def get_buildlog_info(self):
-		init_repoman = gobs_repoman(self._mysettings, self._myportdb)
+	def get_buildlog_info(self, settings, build_dict):
+		myportdb = portage.portdbapi(mysettings=settings)
+		init_repoman = gobs_repoman(settings, myportdb)
+		logfile_text = get_log_text_list(settings.get("PORTAGE_LOG_FILE"))
 		# FIXME to support more errors and stuff
 		i = 0
 		build_log_dict = {}
@@ -130,13 +181,13 @@ class gobs_buildlog(object):
 		qa_error_list = []
 		repoman_error_list = []
 		sum_build_log_list = []
-		for textline in self._logfile_text:
+		for textline in logfile_text:
 			error_log_list = self.search_info(textline, error_log_list)
-			error_log_list, sum_build_log_list = self.search_error(textline, error_log_list, sum_build_log_list, i)
-			qa_error_list, error_log_list = self.search_qa(textline, qa_error_list, error_log_list, i)
+			error_log_list, sum_build_log_list = self.search_error(logfile_text, textline, error_log_list, sum_build_log_list, i)
+			qa_error_list, error_log_list = self.search_qa(logfile_text, textline, qa_error_list, error_log_list, i)
 			i = i +1
 		# Run repoman check_repoman()
-		repoman_error_list = init_repoman.check_repoman(self._build_dict['categories'], self._build_dict['package'], self._build_dict['ebuild_version'], self._build_dict['config_profile'])
+		repoman_error_list = init_repoman.check_repoman(build_dict['categories'], build_dict['package'], build_dict['ebuild_version'], build_dict['config_profile'])
 		if repoman_error_list != []:
 			sum_build_log_list.append("repoman")
 		if qa_error_list != []:
@@ -148,34 +199,37 @@ class gobs_buildlog(object):
 		return build_log_dict
 	
 	# Copy of the portage action_info but fixed so it post info to a list.
-	def action_info(self, trees, myopts, myfiles):
+	def action_info(self, settings, trees):
+		argscmd = []
+		myaction, myopts, myfiles = parse_opts(argscmd, silent=True)
 		msg = []
-		root_config = trees[self._mysettings['ROOT']]['root_config']
-
-		msg.append(getportageversion(self._mysettings["PORTDIR"], self._mysettings["ROOT"],
-			self._mysettings.profile_path, self._mysettings["CHOST"],
-			trees[self._mysettings["ROOT"]]["vartree"].dbapi))
+		root = '/'
+		root_config = root
+		# root_config = trees[settings['ROOT']]['root_config']
+		msg.append(getportageversion(settings["PORTDIR"], settings["ROOT"],
+			settings.profile_path, settings["CHOST"],
+			trees[settings["ROOT"]]["vartree"].dbapi) + "\n")
 
 		header_width = 65
 		header_title = "System Settings"
 		if myfiles:
-			msg.append(header_width * "=")
-			msg.append(header_title.rjust(int(header_width/2 + len(header_title)/2)))
-		msg.append(header_width * "=")
-		msg.append("System uname: "+platform.platform(aliased=1))
+			msg.append(header_width * "=" + "\n")
+			msg.append(header_title.rjust(int(header_width/2 + len(header_title)/2)) + "\n")
+		msg.append(header_width * "=" + "\n")
+		msg.append("System uname: "+platform.platform(aliased=1) + "\n")
 
 		lastSync = portage.grabfile(os.path.join(
-			self._mysettings["PORTDIR"], "metadata", "timestamp.chk"))
+			settings["PORTDIR"], "metadata", "timestamp.chk"))
 		msg.append("Timestamp of tree:", end=' ')
 		if lastSync:
-			msg.append(lastSync[0])
+			msg.append("Timestamp of tree:" + lastSync[0] + "\n")
 		else:
-			msg.append("Unknown")
+			msg.append("Timestamp of tree: Unknown" + "\n")
 
 		output=subprocess_getstatusoutput("distcc --version")
 		if not output[0]:
-			msg.append(str(output[1].split("\n",1)[0]), end=' ')
-			if "distcc" in self._mysettings.features:
+			msg.append(str(output[1].split("\n",1)[0]))
+			if "distcc" in settings.features:
 				msg.append("[enabled]")
 			else:
 				msg.append("[disabled]")
@@ -183,14 +237,14 @@ class gobs_buildlog(object):
 		output=subprocess_getstatusoutput("ccache -V")
 		if not output[0]:
 			msg.append(str(output[1].split("\n",1)[0]), end=' ')
-			if "ccache" in self._mysettings.features:
+			if "ccache" in settings.features:
 				msg.append("[enabled]")
 			else:
 				msg.append("[disabled]")
 
 		myvars  = ["sys-devel/autoconf", "sys-devel/automake", "virtual/os-headers",
 			"sys-devel/binutils", "sys-devel/libtool",  "dev-lang/python"]
-		myvars += portage.util.grabfile(self._mysettings["PORTDIR"]+"/profiles/info_pkgs")
+		myvars += portage.util.grabfile(settings["PORTDIR"]+"/profiles/info_pkgs")
 		atoms = []
 		vardb = trees["/"]["vartree"].dbapi
 		for x in myvars:
@@ -246,15 +300,16 @@ class gobs_buildlog(object):
 		for cp in sorted(cp_map):
 			versions = sorted(cp_map[cp].values())
 			versions = ", ".join(ver.toString() for ver in versions)
-			writemsg_stdout("%s %s\n" % \
-				((cp + ":").ljust(cp_max_len + 1), versions),
-				noiselevel=-1)
+			msg_extra = "%s %s\n" % \
+				((cp + ":").ljust(cp_max_len + 1), versions)
+			msg.append(msg_extra)
 
 		libtool_vers = ",".join(trees["/"]["vartree"].dbapi.match("sys-devel/libtool"))
 
 		repos = portdb.settings.repositories
-		writemsg_stdout("Repositories: %s\n" % \
-			" ".join(repo.name for repo in repos), noiselevel=-1)
+		msg_extra = "Repositories: %s\n" % \
+			" ".join(repo.name for repo in repos)
+		msg.append(msg_extra)
 
 		if _ENABLE_SET_CONFIG:
 			sets_line = "Installed sets: "
@@ -262,7 +317,7 @@ class gobs_buildlog(object):
 				sorted(root_config.sets['selected'].getNonAtoms()) \
 				if s.startswith(SETPREFIX))
 			sets_line += "\n"
-			writemsg_stdout(sets_line, noiselevel=-1)
+			msg.append(sets_line)
 
 		myvars = ['GENTOO_MIRRORS', 'CONFIG_PROTECT', 'CONFIG_PROTECT_MASK',
 			'PORTDIR', 'DISTDIR', 'PKGDIR', 'PORTAGE_TMPDIR',
@@ -271,17 +326,17 @@ class gobs_buildlog(object):
 			'USE', 'CHOST', 'CFLAGS', 'CXXFLAGS',
 			'ACCEPT_KEYWORDS', 'ACCEPT_LICENSE', 'SYNC', 'FEATURES',
 			'EMERGE_DEFAULT_OPTS']
-		myvars.extend(portage.util.grabfile(self._mysettings["PORTDIR"]+"/profiles/info_vars"))
+		myvars.extend(portage.util.grabfile(settings["PORTDIR"]+"/profiles/info_vars"))
 
 		myvars_ignore_defaults = {
 			'PORTAGE_BZIP2_COMMAND' : 'bzip2',
 		}
 
 		myvars = portage.util.unique_array(myvars)
-		use_expand = self._mysettings.get('USE_EXPAND', '').split()
+		use_expand = settings.get('USE_EXPAND', '').split()
 		use_expand.sort()
 		use_expand_hidden = set(
-			self._mysettings.get('USE_EXPAND_HIDDEN', '').upper().split())
+			settings.get('USE_EXPAND_HIDDEN', '').upper().split())
 		alphabetical_use = '--alphabetical' in myopts
 		unset_vars = []
 		myvars.sort()
@@ -290,11 +345,12 @@ class gobs_buildlog(object):
 				if x != "USE":
 					default = myvars_ignore_defaults.get(x)
 					if default is not None and \
-						default == self._mysettings[x]:
+						default == settings[x]:
 						continue
-					writemsg_stdout('%s="%s"\n' % (x, self._mysettings[x]), noiselevel=-1)
+					msg_extra = '%s="%s"\n' % (x, settings[x])
+					msg.append(msg_extra)
 				else:
-					use = set(self._mysettings["USE"].split())
+					use = set(settings["USE"].split())
 					for varname in use_expand:
 						flag_prefix = varname.lower() + "_"
 						for f in list(use):
@@ -302,22 +358,24 @@ class gobs_buildlog(object):
 								use.remove(f)
 					use = list(use)
 					use.sort()
-					msg.append('USE="%s"' % " ".join(use), end=' ')
+					msg_extra = 'USE=%s' % " ".join(use)
+					msg.append(msg_extra + "\n")
 					for varname in use_expand:
-						myval = self._mysettings.get(varname)
+						myval = settings.get(varname)
 						if myval:
-							msg.append('%s="%s"' % (varname, myval), end=' ')
+							msg.append(varname + '=' + myval + "\n")
 			else:
 				unset_vars.append(x)
 		if unset_vars:
-			msg.append("Unset:  "+", ".join(unset_vars))
+			msg_extra = "Unset: "+", ".join(unset_vars)
+			msg.append(msg_extra + "\n")
 
 		# See if we can find any packages installed matching the strings
 		# passed on the command line
 		mypkgs = []
-		vardb = trees[self._mysettings["ROOT"]]["vartree"].dbapi
-		portdb = trees[self._mysettings["ROOT"]]["porttree"].dbapi
-		bindb = trees[self._mysettings["ROOT"]]["bintree"].dbapi
+		vardb = trees[settings["ROOT"]]["vartree"].dbapi
+		portdb = trees[settings["ROOT"]]["porttree"].dbapi
+		bindb = trees[settings["ROOT"]]["bintree"].dbapi
 		for x in myfiles:
 			match_found = False
 			installed_match = vardb.match(x)
@@ -422,26 +480,27 @@ class gobs_buildlog(object):
 
 				if pkg_type == "installed":
 					portage.doebuild(ebuildpath, "info", pkgsettings["ROOT"],
-						pkgsettings, debug=(self._mysettings.get("PORTAGE_DEBUG", "") == 1),
-						mydbapi=trees[self._mysettings["ROOT"]]["vartree"].dbapi,
+						pkgsettings, debug=(settings.get("PORTAGE_DEBUG", "") == 1),
+						mydbapi=trees[settings["ROOT"]]["vartree"].dbapi,
 						tree="vartree")
 				elif pkg_type == "ebuild":
 					portage.doebuild(ebuildpath, "info", pkgsettings["ROOT"],
-						pkgsettings, debug=(self._mysettings.get("PORTAGE_DEBUG", "") == 1),
-						mydbapi=trees[self._mysettings["ROOT"]]["porttree"].dbapi,
+						pkgsettings, debug=(settings.get("PORTAGE_DEBUG", "") == 1),
+						mydbapi=trees[settings["ROOT"]]["porttree"].dbapi,
 						tree="porttree")
 				elif pkg_type == "binary":
 					portage.doebuild(ebuildpath, "info", pkgsettings["ROOT"],
-						pkgsettings, debug=(self._mysettings.get("PORTAGE_DEBUG", "") == 1),
-						mydbapi=trees[self._mysettings["ROOT"]]["bintree"].dbapi,
+						pkgsettings, debug=(settings.get("PORTAGE_DEBUG", "") == 1),
+						mydbapi=trees[settings["ROOT"]]["bintree"].dbapi,
 						tree="bintree")
 					shutil.rmtree(tmpdir)
 		print('emerge info list', msg)
 
-	def add_buildlog_main(self):
+	def add_buildlog_main(self, settings, pkg, trees):
 		conn=CM.getConnection()
+		build_dict = self.get_build_dict_db(settings, pkg)
 		build_log_dict = {}
-		build_log_dict = self.get_buildlog_info()
+		build_log_dict = self.get_buildlog_info(settings, build_dict)
 		sum_build_log_list = build_log_dict['summary_error_list']
 		error_log_list = build_log_dict['error_log_list']
 		build_error = ""
@@ -452,17 +511,12 @@ class gobs_buildlog(object):
 		if sum_build_log_list != []:
 			for sum_log_line in sum_build_log_list:
 				summary_error = summary_error + " " + sum_log_line
-		build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  self._mysettings.get("PORTAGE_LOG_FILE"))
-		os.fchmod(self._mysettings.get("PORTAGE_LOG_FILE"), 224)
-		if self._build_dict['queue_id'] is None:
-			build_id = self.add_new_ebuild_buildlog(build_error, summary_error, build_log_dict)
+		build_log_dict['logfilename'] = re.sub("/var/log/portage/", "", settings.get("PORTAGE_LOG_FILE"))
+		# os.fchmod(self._mysettings.get("PORTAGE_LOG_FILE"), 224)
+		if build_dict['queue_id'] is None:
+			build_id = self.add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
 		else:
-			build_id = move_queru_buildlog(conn, self._build_dict['queue_id'], build_error, summary_error, build_log_dict)
+			build_id = move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
 		# update_qa_repoman(conn, build_id, build_log_dict)
-		argscmd = []
-		myaction, myopts, myfiles = parse_opts(argscmd, silent=True)
-		trees = {
-		root : {'porttree' : portage.portagetree(root, settings=self._mysettings)}
-		}
-		action_info(self, trees, myopts, myfiles)
+		self.action_info(settings, trees)
 		print("build_id", build_id[0], "logged to db.")

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 17b6868..c27c5b8 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -35,7 +35,7 @@ from _emerge.sync.old_tree_timestamp import old_tree_timestamp_warn
 from _emerge.create_depgraph_params import create_depgraph_params
 from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
 from _emerge.DepPrioritySatisfiedRange import DepPrioritySatisfiedRange
-from _emerge.Scheduler import Scheduler
+from gobs.Scheduler import Scheduler
 from _emerge.clear_caches import clear_caches
 from _emerge.unmerge import unmerge
 from _emerge.emergelog import emergelog

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 013aab4..85cb057 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -496,7 +496,7 @@ def add_new_buildlog(connection, build_dict, use_flags_list, use_enable_list, bu
 	qa_error_list = build_log_dict['qa_error_list']
 	if not use_flags_list:
 		use_flags_list=None
-		use_enable=None
+		use_enable_list=None
 	sqlQ = 'SELECT make_deplog( %s, %s, %s, %s, %s, %s, %s, %s, %s)'
 	params = (build_dict['ebuild_id'], build_dict['config_profile'], use_flags_list, use_enable_list, summary_error, build_error, build_log_dict['logfilename'], qa_error_list, repoman_error_list)
 	cursor.execute(sqlQ, params)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-10 23:30 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-10 23:30 UTC (permalink / raw
  To: gentoo-commits

commit:     51b93e28e0fe10e4ee18dd485841650c194ae198
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 10 23:29:11 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Oct 10 23:29:11 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=51b93e28

create emerge --info log file

---
 gobs/pym/Scheduler.py |    9 +++++--
 gobs/pym/build_log.py |   51 +++++++++++++++++++++++++++++++++++++++++++-----
 gobs/pym/flags.py     |   31 +++++++++++++++++++++++++++++
 3 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index cab701c..2b694fc 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -1325,7 +1325,7 @@ class Scheduler(PollScheduler):
 	def _do_merge_exit(self, merge):
 		pkg = merge.merge.pkg
 		settings = merge.merge.settings
-		trees = self.trees[merge.merge.settings["ROOT"]]
+		trees = self.trees
 		init_buildlog = gobs_buildlog()
 		if merge.returncode != os.EX_OK:
 			build_dir = settings.get("PORTAGE_BUILDDIR")
@@ -1395,13 +1395,16 @@ class Scheduler(PollScheduler):
 				self._status_display.merges = len(self._task_queues.merge)
 		else:
 			settings = build.settings
+			trees = self.trees
+			pkg = build.pkg
+			init_buildlog = gobs_buildlog()
 			build_dir = settings.get("PORTAGE_BUILDDIR")
 			build_log = settings.get("PORTAGE_LOG_FILE")
 
 			self._failed_pkgs.append(self._failed_pkg(
 				build_dir=build_dir, build_log=build_log,
-				pkg=build.pkg,
-				returncode=build.returncode))
+				pkg=pkg, returncode=build.returncode))
+			init_buildlog.add_buildlog_main(settings, pkg, trees)
 			if not self._terminated_tasks:
 				self._failed_pkg_msg(self._failed_pkgs[-1], "emerge", "for")
 				self._status_display.failed = len(self._failed_pkgs)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 7ffe53a..4ad098b 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -11,12 +11,17 @@ from _emerge.main import parse_opts, load_emerge_config, \
         getportageversion
 from portage.util import writemsg, \
         writemsg_level, writemsg_stdout
+from _emerge.actions import _info_pkgs_ver
 from portage.exception import InvalidAtom
 from portage.dep import Atom
 from portage.dbapi._expand_new_virt import expand_new_virt
 from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH
 from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
 from portage.versions import catpkgsplit, cpv_getversion
+import errno
+import gzip
+from portage import _encodings
+from portage import _unicode_encode
 from gobs.repoman_gobs import gobs_repoman
 import portage
 from gobs.package import gobs_package
@@ -55,7 +58,8 @@ class gobs_buildlog(object):
 		build_dict['categories'] = categories
 		build_dict['package'] = package
 		build_dict['config_profile'] = self._config_profile
-		final_use_list = list(pkg.use.enabled)
+		init_useflags = gobs_use_flags(settings, myportdb, pkg.cpv)
+		iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
 		#print 'final_use_list', final_use_list
 		if  final_use_list != []:
 			build_dict['build_useflags'] = final_use_list
@@ -82,8 +86,7 @@ class gobs_buildlog(object):
 		conn=CM.getConnection()
 		portdb = portage.portdbapi(mysettings=settings)
 		init_useflags = gobs_use_flags(settings, portdb, build_dict['cpv'])
-		iuse_flags_list = list(pkg.iuse.all)
-		final_use_list = list(pkg.use.enabled)
+		iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
 		iuse = []
 		use_flags_list = []
 		use_enable_list = []
@@ -220,7 +223,6 @@ class gobs_buildlog(object):
 
 		lastSync = portage.grabfile(os.path.join(
 			settings["PORTDIR"], "metadata", "timestamp.chk"))
-		msg.append("Timestamp of tree:", end=' ')
 		if lastSync:
 			msg.append("Timestamp of tree:" + lastSync[0] + "\n")
 		else:
@@ -341,7 +343,7 @@ class gobs_buildlog(object):
 		unset_vars = []
 		myvars.sort()
 		for x in myvars:
-			if x in self._mysettings:
+			if x in settings:
 				if x != "USE":
 					default = myvars_ignore_defaults.get(x)
 					if default is not None and \
@@ -495,6 +497,40 @@ class gobs_buildlog(object):
 						tree="bintree")
 					shutil.rmtree(tmpdir)
 		print('emerge info list', msg)
+		return msg
+
+	def write_msg_file(self, msg, log_path):
+		"""
+		Append msg to the log at log_path (appends with
+		compression if the filename extension of log_path
+		corresponds to a supported compression type). Falls back
+		to writemsg_level() if the log cannot be opened.
+		"""
+		msg_shown = False
+		if log_path is not None:
+			try:
+				f = open(_unicode_encode(log_path,
+					encoding=_encodings['fs'], errors='strict'),
+					mode='ab')
+				f_real = f
+			except IOError as e:
+				if e.errno not in (errno.ENOENT, errno.ESTALE):
+					raise
+				if not msg_shown:
+					writemsg_level(msg, level=level, noiselevel=noiselevel)
+			else:
+
+				if log_path.endswith('.gz'):
+					# NOTE: The empty filename argument prevents us from
+					# triggering a bug in python3 which causes GzipFile
+					# to raise AttributeError if fileobj.name is bytes
+					# instead of unicode.
+					f =  gzip.GzipFile(filename='', mode='ab', fileobj=f)
+
+				f.write(_unicode_encode(msg))
+				f.close()
+				if f_real is not f:
+					f_real.close()
 
 	def add_buildlog_main(self, settings, pkg, trees):
 		conn=CM.getConnection()
@@ -518,5 +554,8 @@ class gobs_buildlog(object):
 		else:
 			build_id = move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
 		# update_qa_repoman(conn, build_id, build_log_dict)
-		self.action_info(settings, trees)
+		msg = self.action_info(settings, trees)
+		emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
+		for msg_line in msg:
+			self.write_msg_file(msg_line, emerge_info_logfilename)
 		print("build_id", build_id[0], "logged to db.")

diff --git a/gobs/pym/flags.py b/gobs/pym/flags.py
index ef63361..7ccf90b 100644
--- a/gobs/pym/flags.py
+++ b/gobs/pym/flags.py
@@ -118,6 +118,20 @@ class gobs_use_flags(object):
 		useforce = list(self._mysettings.useforce)
 		return use, use_expand_hidden, usemask, useforce
 
+	def get_all_cpv_use_pkg(self, pkg, settings):
+		"""Uses portage to determine final USE flags and settings for an emerge
+		@type cpv: string
+		@param cpv: eg cat/pkg-ver
+		@rtype: lists
+		@return  use, use_expand_hidden, usemask, useforce
+		"""
+		# use = self._mysettings['PORTAGE_USE'].split()
+		use_list = list(pkg.use.enabled)
+		use_expand_hidden = settings["USE_EXPAND_HIDDEN"].split()
+		usemask = list(settings.usemask)
+		useforced = list(settings.useforce)
+		return use_list, use_expand_hidden, usemask, useforced
+
 	def get_flags(self):
 		"""Retrieves all information needed to filter out hidden, masked, etc.
 		USE flags for a given package.
@@ -154,6 +168,23 @@ class gobs_use_flags(object):
 		final_flags = self.filter_flags(final_use, use_expand_hidden, usemasked, useforced)
 		return iuse_flags, final_flags
 
+	def get_flags_pkg(self, pkg, settings):
+		"""Retrieves all information needed to filter out hidden, masked, etc.
+		USE flags for a given package.
+		@type cpv: string
+		@param cpv: eg. cat/pkg-ver
+		@type final_setting: boolean
+		@param final_setting: used to also determine the final
+		enviroment USE flag settings and return them as well.
+		@rtype: list or list, list
+		@return IUSE or IUSE, final_flags
+		"""
+		final_use, use_expand_hidden, usemasked, useforced = self.get_all_cpv_use_pkg(pkg, settings)
+		iuse_flags = self.filter_flags(list(pkg.iuse.all), use_expand_hidden, usemasked, useforced)
+		#flags = filter_flags(use_flags, use_expand_hidden, usemasked, useforced)
+		final_flags = self.filter_flags(final_use, use_expand_hidden, usemasked, useforced)
+		return iuse_flags, final_flags
+
 	def comper_useflags(self, build_dict):
 		iuse_flags, use_enable = self.get_flags()
 		iuse = []
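
A note on the new write_msg_file(): it assumes errno and gzip are imported elsewhere in build_log.py, and its except branch references level and noiselevel, which are not defined in this method. A minimal self-contained sketch of the same append-with-optional-gzip pattern (the helper name and the silent return when the log directory is absent are illustrative, not gobs API):

  import errno
  import gzip

  def append_log(msg, log_path):
      # Append msg (a str) to log_path; gzip-append when the name ends in .gz.
      try:
          f = open(log_path, mode='ab')
      except IOError as e:
          if e.errno not in (errno.ENOENT, errno.ESTALE):
              raise
          return
      f_real = f
      if log_path.endswith('.gz'):
          # The empty filename argument avoids a python3 GzipFile bug when
          # fileobj.name is bytes instead of unicode.
          f = gzip.GzipFile(filename='', mode='ab', fileobj=f)
      f.write(msg.encode('utf_8'))
      f.close()
      if f_real is not f:
          f_real.close()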



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-10 23:43 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-10 23:43 UTC (permalink / raw
  To: gentoo-commits

commit:     072c3c25cfc58b3e4af07c13dbab2f09e4add502
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 10 23:42:46 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Oct 10 23:42:46 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=072c3c25

create emerge --info log file part2

---
 gobs/pym/Scheduler.py |    2 +-
 gobs/pym/build_log.py |    2 ++
 2 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 2b694fc..6fd3b5a 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -1404,11 +1404,11 @@ class Scheduler(PollScheduler):
 			self._failed_pkgs.append(self._failed_pkg(
 				build_dir=build_dir, build_log=build_log,
 				pkg, 	returncode=build.returncode))
-				init_buildlog.add_buildlog_main(settings, pkg, trees)
 			if not self._terminated_tasks:
 				self._failed_pkg_msg(self._failed_pkgs[-1], "emerge", "for")
 				self._status_display.failed = len(self._failed_pkgs)
 			self._deallocate_config(build.settings)
+			init_buildlog.add_buildlog_main(settings, pkg, trees)
 		self._jobs -= 1
 		self._status_display.running = self._jobs
 		self._schedule()

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 4ad098b..acefedd 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -80,6 +80,7 @@ class gobs_buildlog(object):
 			build_dict['queue_id'] = None
 		else:
 			build_dict['queue_id'] = queue_id
+		CM.putConnection(conn)
 		return build_dict
 
 	def add_new_ebuild_buildlog(self, settings, pkg, build_dict, build_error, summary_error, build_log_dict):
@@ -558,4 +559,5 @@ class gobs_buildlog(object):
 		emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
 		for msg_line in msg:
 			self.write_msg_file(msg_line, emerge_info_logfilename)
+		CM.putConnection(conn)
 		print("build_id", build_id[0], "logged to db.")



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-10 23:46 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-10 23:46 UTC (permalink / raw
  To: gentoo-commits

commit:     a413788e0e518db49f4d1985955fab2072f0debf
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 10 23:45:50 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Oct 10 23:45:50 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a413788e

create emerge --info log file part3

---
 gobs/pym/Scheduler.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 6fd3b5a..4497c16 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -1403,7 +1403,7 @@ class Scheduler(PollScheduler):
 
 			self._failed_pkgs.append(self._failed_pkg(
 				build_dir=build_dir, build_log=build_log,
-				pkg, 	returncode=build.returncode))
+				pkg=pkg, returncode=build.returncode))
 			if not self._terminated_tasks:
 				self._failed_pkg_msg(self._failed_pkgs[-1], "emerge", "for")
 				self._status_display.failed = len(self._failed_pkgs)
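
The part2 hunk had left a positional argument after keyword arguments in the _failed_pkg() call, which Python rejects at compile time; this commit restores the keyword form. Reduced to a runnable shape (f is illustrative):

  def f(build_dir=None, pkg=None, returncode=None):
      return (build_dir, pkg, returncode)

  # f(build_dir="d", pkg, returncode=0)    -> SyntaxError
  f(build_dir="d", pkg="p", returncode=0)  # valid: keywords all the way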



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-10 23:49 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-10 23:49 UTC (permalink / raw
  To: gentoo-commits

commit:     0320cbf6ba81f66deb6304ae486040e7f07bf8b5
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 10 23:49:15 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Oct 10 23:49:15 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=0320cbf6

create emerge --info log file part4

---
 gobs/pym/build_log.py |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index acefedd..f74fc24 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -557,7 +557,8 @@ class gobs_buildlog(object):
 		# update_qa_repoman(conn, build_id, build_log_dict)
 		msg = self.action_info(settings, trees)
 		emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
-		for msg_line in msg:
-			self.write_msg_file(msg_line, emerge_info_logfilename)
+		if build_id is not None:
+			for msg_line in msg:
+				self.write_msg_file(msg_line, emerge_info_logfilename)
 		CM.putConnection(conn)
 		print("build_id", build_id[0], "logged to db.")



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-10 23:57 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-10 23:57 UTC (permalink / raw
  To: gentoo-commits

commit:     d6f0526553b3d2a14d428abf4cc4432ca544b864
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 10 23:57:10 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Oct 10 23:57:10 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=d6f05265

fix the package logged to db

---
 gobs/pym/build_log.py |    4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index f74fc24..80a7f0a 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -560,5 +560,7 @@ class gobs_buildlog(object):
 		if build_id is not None:
 			for msg_line in msg:
 				self.write_msg_file(msg_line, emerge_info_logfilename)
+			print("Package: ", pkg.cpv, " logged to db.")
+		else:
+			print("Package: ", pkg.cpv, " NOT logged to db.")
 		CM.putConnection(conn)
-		print("build_id", build_id[0], "logged to db.")



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-11 11:20 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-11 11:20 UTC (permalink / raw
  To: gentoo-commits

commit:     fa498465e36af8f5097f28767ac1f5537a5cde44
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 11 11:19:51 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Oct 11 11:19:51 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=fa498465

fix a bug when getting queue_id for the logging thing

---
 gobs/pym/build_log.py |    2 +-
 gobs/pym/pgsql.py     |    5 ++++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 80a7f0a..4ba3f37 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -62,7 +62,7 @@ class gobs_buildlog(object):
 		iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
 		#print 'final_use_list', final_use_list
 		if  final_use_list != []:
-			build_dict['build_useflags'] = final_use_list
+			build_dict['build_useflags'] = sorted(final_use_list)
 		else:
 			build_dict['build_useflags'] = None
 		#print "build_dict['build_useflags']", build_dict['build_useflags']

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 85cb057..aeda4eb 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -502,7 +502,10 @@ def add_new_buildlog(connection, build_dict, use_flags_list, use_enable_list, bu
 	cursor.execute(sqlQ, params)
 	entries = cursor.fetchone()
 	connection.commit()
-	return entries
+	if entries is None:
+		return None
+	# If entries is not None we need [0]
+	return entries[0]
 
 def add_qa_repoman(connection, ebuild_id_list, qa_error, packageDict, config_id):
   cursor = connection.cursor()
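
cursor.fetchone() yields None when the query returned no row and a one-column tuple otherwise, so the old bare return handed callers (id,) where they expected the id itself. The guard reduces to this pattern (row stands in for fetchone()'s result):

  row = (42,)                                 # or None when nothing matched
  build_id = None if row is None else row[0]  # unwrap the one-column tuple
  print(build_id)                             # 42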



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-11 23:32 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-11 23:32 UTC (permalink / raw
  To: gentoo-commits

commit:     2dc37de0d1eb21ecef4c04381c7d7bde0e2c4ed3
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 11 23:32:01 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Oct 11 23:32:01 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=2dc37de0

fix the removal of package.use/gobs.use

---
 gobs/pym/build_log.py   |    5 ++---
 gobs/pym/build_queru.py |    5 ++++-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 4ba3f37..1555972 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -497,7 +497,6 @@ class gobs_buildlog(object):
 						mydbapi=trees[settings["ROOT"]]["bintree"].dbapi,
 						tree="bintree")
 					shutil.rmtree(tmpdir)
-		print('emerge info list', msg)
 		return msg
 
 	def write_msg_file(self, msg, log_path):
@@ -560,7 +559,7 @@ class gobs_buildlog(object):
 		if build_id is not None:
 			for msg_line in msg:
 				self.write_msg_file(msg_line, emerge_info_logfilename)
-			print("Package: ", pkg.cpv, " logged to db.")
+			print("Package: ", pkg.cpv, "logged to db.")
 		else:
-			print("Package: ", pkg.cpv, " NOT logged to db.")
+			print("Package: ", pkg.cpv, "NOT logged to db.")
 		CM.putConnection(conn)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index c27c5b8..c987ff1 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -684,7 +684,10 @@ class queruaction(object):
 		print('build_fail', build_fail)
 		if not "nodepclean" in build_dict['post_message']:
 			depclean_fail = main_depclean()
-		os.remove("/etc/portage/package.use/gobs.use")
+		try:
+			os.remove("/etc/portage/package.use/gobs.use")
+		except:
+			pass
 		if build_fail is False or depclean_fail is False:
 			return False
 		return True
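
The try/except keeps a missing gobs.use file from aborting the run, but a bare except also hides permission errors and real bugs. A tighter variant, assuming only an absent file should be ignored:

  import errno
  import os

  try:
      os.remove("/etc/portage/package.use/gobs.use")
  except OSError as e:
      if e.errno != errno.ENOENT:  # ignore "no such file", re-raise the rest
          raise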



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-11 23:51 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-11 23:51 UTC (permalink / raw
  To: gentoo-commits

commit:     52f15d1c62599de4f1bbb2c7a45d7a6aa01ecf66
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Oct 11 23:51:17 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Oct 11 23:51:17 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=52f15d1c

fix a bug when getting queue_id for the logging thing part2

---
 gobs/pym/pgsql.py |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index aeda4eb..5f6a2d6 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -70,13 +70,13 @@ def check_revision(connection, build_dict):
   for queue_id in queue_id_list:
     cursor.execute(sqlQ2, (queue_id[0],))
     entries = cursor.fetchall()
-    build_useflags = []
+    queue_useflags = []
     if entries == []:
-      build_useflags = None
+      queue_useflags = None
     else:
       for use_line in sorted(entries):
-	      build_useflags.append(use_line[0])
-    if build_useflags == build_dict['build_useflags']:
+	      queue_useflags.append(use_line[0])
+    if queue_useflags == build_dict['build_useflags']:
       return queue_id[0]
   return None
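
Together with the sorted() added to build_dict['build_useflags'] in the previous commit, both sides of this comparison are now normalised the same way; list equality is order-sensitive:

  print(["ssl", "-gtk"] == ["-gtk", "ssl"])                  # False
  print(sorted(["ssl", "-gtk"]) == sorted(["-gtk", "ssl"]))  # True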
 



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-12 10:26 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-12 10:26 UTC (permalink / raw
  To: gentoo-commits

commit:     2416318fe1e803977ce744f5560ee965f9b716e0
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 12 10:26:16 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Oct 12 10:26:16 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=2416318f

fix different useflags on tree and db

---
 gobs/pym/build_queru.py |   37 ++++++++++++-------------------------
 gobs/pym/flags.py       |   10 +++++-----
 2 files changed, 17 insertions(+), 30 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index c987ff1..473cc6d 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -628,11 +628,8 @@ class queruaction(object):
 			manifest_error = init_manifest.check_file_in_manifest(portdb, cpv, build_dict, build_use_flags_list)
 			if manifest_error is None:
 				build_dict['check_fail'] = False
-				build_use_flags_dict = {}
-				if build_use_flags_list is None:
-					build_use_flags_dict['None'] = None
 				build_cpv_dict = {}
-				build_cpv_dict[cpv] = build_use_flags_dict
+				build_cpv_dict[cpv] = build_use_flags_list
 				print(build_cpv_dict)
 				return build_cpv_dict
 			else:
@@ -648,27 +645,17 @@ class queruaction(object):
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):
 		build_cpv_list = []
-		#try:
-		#	open("/etc/portage/package.use/gobs.use", "a"
-		#except:	
-		for k, v in buildqueru_cpv_dict.iteritems():
-				build_use_flags_list = []
-				for x, y in v.iteritems():
-					if y is True:
-						build_use_flags_list.append(x)
-					if y is False:
-						build_use_flags_list.append("-" + x)
-				print(k, build_use_flags_list)
-				build_cpv_list.append("=" + k)
-				if not build_use_flags_list == []:
-					build_use_flags = ""
-					for flags in build_use_flags_list:
-						build_use_flags = build_use_flags + flags + " "
-					filetext = k + ' ' + build_use_flags
-					print('filetext', filetext)
-					with open("/etc/portage/package.use/gobs.use", "a") as f:
-     						f.write(filetext)
-     						f.write('\n')
+		for k, build_use_flags_list in buildqueru_cpv_dict.iteritems():
+			build_cpv_list.append("=" + k)
+			if not build_use_flags_list == None:
+				build_use_flags = ""
+				for flags in build_use_flags_list:
+					build_use_flags = build_use_flags + flags + " "
+				filetext = k + ' ' + build_use_flags
+				print('filetext', filetext)
+				with open("/etc/portage/package.use/gobs.use", "a") as f:
+     					f.write(filetext)
+     					f.write('\n')
 		print('build_cpv_list', build_cpv_list)
 		argscmd = []
 		if not "nooneshort" in build_dict['post_message']:

diff --git a/gobs/pym/flags.py b/gobs/pym/flags.py
index 7ccf90b..1c2377e 100644
--- a/gobs/pym/flags.py
+++ b/gobs/pym/flags.py
@@ -208,11 +208,11 @@ class gobs_use_flags(object):
 		for k, v in use_flagsDict.iteritems():
 			print("tree use flags", k, v)
 			print("db use flags", k, build_use_flags_dict[k])
-		if build_use_flags_dict[k] != v:
-			if build_use_flags_dict[k] is True:
-				build_use_flags_list.append(k)
-			if build_use_flags_dict[k] is False:
-				build_use_flags_list.append("-" + k)
+			if build_use_flags_dict[k] != v:
+				if build_use_flags_dict[k] is True:
+					build_use_flags_list.append(k)
+				if build_use_flags_dict[k] is False:
+					build_use_flags_list.append("-" + k)
 		if build_use_flags_list == []:
 			build_use_flags_list = None
 		print(build_use_flags_list)
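
The flags.py hunk is pure re-indentation: the comparison block previously sat at the for statement's own indent level, so it ran once, after the loop, with whatever k the last iteration left behind. Reduced illustration:

  flags = {"ssl": True, "gtk": False}

  for k, v in flags.items():
      pass
  print(k)           # old shape: runs once, with the final key only

  for k, v in flags.items():
      print(k, v)    # fixed shape: runs every iteration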



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-12 10:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-12 10:33 UTC (permalink / raw
  To: gentoo-commits

commit:     b6a99c54d48ae49ea5fda41518a878e4b25b187c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 12 10:32:55 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Oct 12 10:32:55 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=b6a99c54

fix different useflags on tree and db part2

---
 gobs/pym/build_queru.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 473cc6d..aa1a8c9 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -651,7 +651,7 @@ class queruaction(object):
 				build_use_flags = ""
 				for flags in build_use_flags_list:
 					build_use_flags = build_use_flags + flags + " "
-				filetext = k + ' ' + build_use_flags
+				filetext = '=' + k + ' ' + build_use_flags
 				print('filetext', filetext)
 				with open("/etc/portage/package.use/gobs.use", "a") as f:
      					f.write(filetext)
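
A versioned entry in package.use must be a valid exact-version atom, which starts with '='; without the '=' Portage will not match the line against the package. With illustrative values, the fixed code writes:

  k = "dev-python/gobs-9999"
  build_use_flags = "ssl -gtk "       # accumulated above, trailing space kept
  filetext = '=' + k + ' ' + build_use_flags
  print(filetext)                     # =dev-python/gobs-9999 ssl -gtk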



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-13 10:41 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-13 10:41 UTC (permalink / raw
  To: gentoo-commits

commit:     fcd93f42345aa93c642ca5624bf7c9f635231aaa
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Oct 13 10:40:56 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Oct 13 10:40:56 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=fcd93f42

in get_ebuild_id_db_checksum() sort the ebuild_ids

---
 gobs/pym/pgsql.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 5f6a2d6..9e7d3f5 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -280,7 +280,7 @@ def get_ebuild_id_db_checksum(connection, build_dict):
 	cursor = connection.cursor()
 	sqlQ = 'SELECT id FROM ebuilds WHERE ebuild_version = %s AND ebuild_checksum = %s AND package_id = %s'
 	cursor.execute(sqlQ, (build_dict['ebuild_version'], build_dict['checksum'], build_dict['package_id']))
-	ebuild_id = cursor.fetchone()
+	ebuild_id = sorted(cursor.fetchall())
 	if ebuild_id is None:
 		return None
 	return ebuild_id[0]
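
Note the changed return type: fetchall() always yields a list ([] when empty, never None), so the retained `is None` test can no longer fire; a later commit in this series switches it to == []. For contrast:

  rows = []            # fetchall() with no matching row: an empty list
  print(rows is None)  # False — this guard never triggers
  print(rows == [])    # True  — the test the later fix adopts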



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-19 20:20 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-19 20:20 UTC (permalink / raw
  To: gentoo-commits

commit:     3a0ccdcf5ea5d5be490fffeb0c896db2a3aa554e
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 19 20:20:04 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Oct 19 20:20:04 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=3a0ccdcf

Add --binpkg-exclude option from upstream

---
 gobs/pym/Scheduler.py   |    7 ++++++-
 gobs/pym/build_queru.py |    3 ++-
 gobs/pym/package.py     |   33 +++++++++++++++++++++++++++++++++
 3 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 4497c16..005f861 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -98,7 +98,7 @@ class Scheduler(PollScheduler):
 		("merge", "jobs", "ebuild_locks", "fetch", "unpack"), prefix="")
 
 	class _build_opts_class(SlotObject):
-		__slots__ = ("buildpkg", "buildpkgonly",
+		__slots__ = ("buildpkg", "buildpkg_exclude", "buildpkgonly",
 			"fetch_all_uri", "fetchonly", "pretend")
 
 	class _binpkg_opts_class(SlotObject):
@@ -161,8 +161,13 @@ class Scheduler(PollScheduler):
 		self._favorites = favorites
 		self._args_set = InternalPackageSet(favorites, allow_repo=True)
 		self._build_opts = self._build_opts_class()
+
 		for k in self._build_opts.__slots__:
 			setattr(self._build_opts, k, "--" + k.replace("_", "-") in myopts)
+		self._build_opts.buildpkg_exclude = InternalPackageSet( \
+			initial_atoms=" ".join(myopts.get("--buildpkg-exclude", [])).split(), \
+			allow_wildcard=True, allow_repo=True)
+
 		self._binpkg_opts = self._binpkg_opts_class()
 		for k in self._binpkg_opts.__slots__:
 			setattr(self._binpkg_opts, k, "--" + k.replace("_", "-") in myopts)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index aa1a8c9..9794bbd 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -645,6 +645,7 @@ class queruaction(object):
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):
 		build_cpv_list = []
+		depclean_fail = True
 		for k, build_use_flags_list in buildqueru_cpv_dict.iteritems():
 			build_cpv_list.append("=" + k)
 			if not build_use_flags_list == None:
@@ -669,7 +670,7 @@ class queruaction(object):
 		build_fail = self.emerge_main(argscmd, build_dict)
 		# Run depclean
 		print('build_fail', build_fail)
-		if not "nodepclean" in build_dict['post_message']:
+		if not "clean" in build_dict['post_message']:
 			depclean_fail = main_depclean()
 		try:
 			os.remove("/etc/portage/package.use/gobs.use")

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index cac1046..884ce7c 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -67,6 +67,39 @@ class gobs_package(object):
 		CM.putConnection(conn)
 		return config_cpv_listDict
 
+	def config_match_cp(self, categories, package, config_id):
+		conn=CM.getConnection()
+		config_cpv_listDict ={}
+		# Change config/setup
+		mysettings_setup = self.change_config(config_id)
+		myportdb_setup = portage.portdbapi(mysettings=mysettings_setup)
+		# Get latest cpv from portage with the config
+		latest_ebuild = myportdb_setup.xmatch('bestmatch-visible', categories + "/" + package)
+		latest_ebuild_version = unicode("")
+		# Check if could get cpv from portage
+		if latest_ebuild != "":
+			# Get the version of cpv
+			latest_ebuild_version = portage.versions.cpv_getversion(latest_ebuild)
+			# Get the iuse and use flags for that config/setup
+			init_useflags = gobs_use_flags(mysettings_setup, myportdb_setup, latest_ebuild)
+			iuse_flags_list, final_use_list = init_useflags.get_flags()
+			iuse_flags_list2 = []
+			for iuse_line in iuse_flags_list:
+				iuse_flags_list2.append( init_useflags.reduce_flag(iuse_line))
+				# Dic the needed info
+				attDict = {}
+				attDict['ebuild_version'] = latest_ebuild_version
+				attDict['useflags'] = final_use_list
+				attDict['iuse'] = iuse_flags_list2
+				attDict['package'] = package
+				attDict['categories'] = categories
+				config_cpv_listDict[config_id] = attDict
+		# Clean some cache
+		myportdb_setup.close_caches()
+		portage.portdbapi.portdbapi_instances.remove(myportdb_setup)
+		CM.putConnection(conn)
+		return config_cpv_listDict
+
 	def get_ebuild_metadata(self, ebuild_line):
 		# Get the auxdbkeys infos for the ebuild
 		try:
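
config_match_cp() leans on xmatch('bestmatch-visible', cp), which returns an empty string when no visible ebuild matches and a full cpv otherwise. A sketch of that call, assuming a configured Portage install ('dev-lang/python' is just a sample atom):

  import portage

  mysettings = portage.config(config_root="/")
  myportdb = portage.portdbapi(mysettings=mysettings)
  best = myportdb.xmatch('bestmatch-visible', 'dev-lang/python')
  if best != "":
      # e.g. "2.7.2" for dev-lang/python-2.7.2
      print(portage.versions.cpv_getversion(best))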



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-19 21:28 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-19 21:28 UTC (permalink / raw
  To: gentoo-commits

commit:     5b800066844f2d6a111f855e7e51d9abd57ac92b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 19 21:27:54 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Oct 19 21:27:54 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=5b800066

fix chmod on the logfiles

---
 gobs/pym/build_log.py |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 1555972..ec21af3 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -548,7 +548,6 @@ class gobs_buildlog(object):
 			for sum_log_line in sum_build_log_list:
 				summary_error = summary_error + " " + sum_log_line
 		build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  settings.get("PORTAGE_LOG_FILE"))
-		# os.fchmod(self._mysettings.get("PORTAGE_LOG_FILE"), 224)
 		if build_dict['queue_id'] is None:
 			build_id = self.add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
 		else:
@@ -559,7 +558,13 @@ class gobs_buildlog(object):
 		if build_id is not None:
 			for msg_line in msg:
 				self.write_msg_file(msg_line, emerge_info_logfilename)
+			os.fchmod(settings.get("PORTAGE_LOG_FILE"), 0664)
+			os.fchmod(emerge_info_logfilename, 0664)
 			print("Package: ", pkg.cpv, "logged to db.")
 		else:
+			try:
+				os.remove(settings.get("PORTAGE_LOG_FILE"))
+			except:
+				pass
 			print("Package: ", pkg.cpv, "NOT logged to db.")
 		CM.putConnection(conn)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-19 21:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-19 21:31 UTC (permalink / raw
  To: gentoo-commits

commit:     a3e1a6b70bc6221be6bba38be7806ae5529b7038
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Oct 19 21:31:11 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Oct 19 21:31:11 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a3e1a6b7

fix chmod on the logfiles part2

---
 gobs/pym/build_log.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index ec21af3..354e8d3 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -558,8 +558,8 @@ class gobs_buildlog(object):
 		if build_id is not None:
 			for msg_line in msg:
 				self.write_msg_file(msg_line, emerge_info_logfilename)
-			os.fchmod(settings.get("PORTAGE_LOG_FILE"), 0664)
-			os.fchmod(emerge_info_logfilename, 0664)
+			os.chmod(settings.get("PORTAGE_LOG_FILE"), 0664)
+			os.chmod(emerge_info_logfilename, 0664)
 			print("Package: ", pkg.cpv, "logged to db.")
 		else:
 			try:
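
os.fchmod() wants an open file descriptor; handing it a path string raises TypeError, which is why part2 swaps in path-based os.chmod(). Side by side (path illustrative; the file spells the mode 0664, Python 2's octal literal):

  import os

  path = "/tmp/example.log"
  open(path, "a").close()
  os.chmod(path, 0o664)            # path-based, what the fix uses
  fd = os.open(path, os.O_RDONLY)
  os.fchmod(fd, 0o664)             # descriptor-based alternative
  os.close(fd)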



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-29  0:19 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-29  0:19 UTC (permalink / raw
  To: gentoo-commits

commit:     95c71d4cd1c9cfdc6c8370084e0e0a8c5b985746
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 29 00:19:28 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Oct 29 00:19:28 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=95c71d4c

fix a typo in pgsql.py

---
 gobs/pym/pgsql.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index b241d1b..53d2ebc 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -322,7 +322,7 @@ def have_package_buildqueue(connection, ebuild_id, config_id):
 	entries = cursor.fetchone()
 	return entries
 
-def get queue_id_list_config(connection, config_id)
+def get_queue_id_list_config(connection, config_id)
 	cursor = connection.cursor()
 	sqlQ = 'SELECT queue_id FROM buildqueue WHERE  config_id = %s'
 	cursor.execute(sqlQ,  (config_id,))



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-29  0:21 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-29  0:21 UTC (permalink / raw
  To: gentoo-commits

commit:     a0a7555220ac98a99b146df1053c10729f20fe68
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 29 00:21:15 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Oct 29 00:21:15 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a0a75552

fix a typo in pgsql.py part2

---
 gobs/pym/pgsql.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 53d2ebc..606c624 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -322,7 +322,7 @@ def have_package_buildqueue(connection, ebuild_id, config_id):
 	entries = cursor.fetchone()
 	return entries
 
-def get_queue_id_list_config(connection, config_id)
+def get_queue_id_list_config(connection, config_id):
 	cursor = connection.cursor()
 	sqlQ = 'SELECT queue_id FROM buildqueue WHERE  config_id = %s'
 	cursor.execute(sqlQ,  (config_id,))



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-29 22:24 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-29 22:24 UTC (permalink / raw
  To: gentoo-commits

commit:     4e044fc62205c73300cd1ab942f7f235f5ea0acc
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 29 22:23:54 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Oct 29 22:23:54 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4e044fc6

fix bugs in setup_profile

---
 gobs/pym/init_setup_profile.py |   10 ++++++----
 gobs/pym/package.py            |   36 ++++++------------------------------
 gobs/pym/pgsql.py              |    6 +++++-
 3 files changed, 17 insertions(+), 35 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index 6f640d9..23a2056 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -45,10 +45,12 @@ def setup_profile_main(args=None):
 			# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 			mysettings = portage.config(config_root = default_config_root)
 			myportdb = portage.portdbapi(mysettings=mysettings)
-			init_package = gobs_package
+			init_package = gobs_package(mysettings, myportdb)
 			# get the cp list
-			package_list_tree = package_list_tree = myportdb.cp_all()(mysettings, myportdb)
+			package_list_tree = package_list_tree = myportdb.cp_all()
 			print "Setting default config to:", config_id
+			config_id_list = []
+			config_id_list.append(config_id)
 			for package_line in sorted(package_list_tree):
 				build_dict = {}
 				packageDict = {}
@@ -59,7 +61,7 @@ def setup_profile_main(args=None):
 				package = element[1]
 				print "C", categories + "/" + package			# C = Checking
 				pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
-				config_cpv_listDict = gobs_package.config_match_cp(categories, package, config_id)
+				config_cpv_listDict = init_package.config_match_cp(categories, package, config_id_list)
 				packageDict['ebuild_version_tree'] = config_cpv_listDict['ebuild_version']
 				build_dict['checksum'] = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + config_cpv_listDict['ebuild_version'] + ".ebuild")[0]
 				build_dict['package_id'] = have_package_db(categories, package)
@@ -67,7 +69,7 @@ def setup_profile_main(args=None):
 				ebuild_id = get_ebuild_id_db_checksum(connection, build_dict)
 				if ebuild_id is not None:
 					ebuild_id_list.append(ebuild_id)
-					gobs_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
+					init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
 		if args[0] is "-del":
 			config_id = args[1]
 			querue_id_list = get_queue_id_list_config(conn, config_id)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index bbd5998..3bd614e 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -28,7 +28,7 @@ class gobs_package(object):
 		mysettings_setup = portage.config(config_root = my_new_setup)
 		return mysettings_setup
 
-	def config_match_ebuild(self, categories, package):
+	def config_match_ebuild(self, categories, package, config_list_all):
 		conn=CM.getConnection()
 		config_cpv_listDict ={}
 		# Get a list from table configs/setups with default_config=Fales and active = True
@@ -67,32 +67,6 @@ class gobs_package(object):
 		CM.putConnection(conn)
 		return config_cpv_listDict
 
-	def config_match_cp(self, categories, package, config_id):
-		config_cpv_listDict ={}
-		# Get latest cpv from portage with the config
-		latest_ebuild = self._myportdb_setup.xmatch('bestmatch-visible', categories + "/" + package)
-		latest_ebuild_version = unicode("")
-		# Check if could get cpv from portage
-		if latest_ebuild != "":
-			# Get the version of cpv
-			latest_ebuild_version = portage.versions.cpv_getversion(latest_ebuild)
-			# Get the iuse and use flags for that config/setup
-			init_useflags = gobs_use_flags(self._mysettings, self._myportdb, latest_ebuild)
-			iuse_flags_list, final_use_list = init_useflags.get_flags()
-			iuse_flags_list2 = []
-			for iuse_line in iuse_flags_list:
-				iuse_flags_list2.append( init_useflags.reduce_flag(iuse_line))
-				# Dic the needed info
-				attDict = {}
-				attDict['ebuild_version'] = latest_ebuild_version
-				attDict['useflags'] = final_use_list
-				attDict['iuse'] = iuse_flags_list2
-				attDict['package'] = package
-				attDict['categories'] = categories
-				config_cpv_listDict[config_id] = attDict
-		# Clean some cache
-		return config_cpv_listDict
-
 	def get_ebuild_metadata(self, ebuild_line):
 		# Get the auxdbkeys infos for the ebuild
 		try:
@@ -156,7 +130,7 @@ class gobs_package(object):
 		conn=CM.getConnection()
 		# Get the needed info from packageDict and config_cpv_listDict and put that in buildqueue
 		# Only add it if ebuild_version in packageDict and config_cpv_listDict match
-		if config_cpv_listDict != {}:
+		if config_cpv_listDict not None:
 			message = None
 			# Unpack config_cpv_listDict
 			for k, v in config_cpv_listDict.iteritems():
@@ -212,7 +186,8 @@ class gobs_package(object):
 		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
 		if ebuild_list_tree == []:
 			return None
-		config_cpv_listDict = self.config_match_ebuild(categories, package)
+		config_list_all  = get_config_list(conn)
+		config_cpv_listDict = self.config_match_ebuild(categories, package, config_list_all)
 		config_id  = get_default_config(conn)
 		packageDict ={}
 		for ebuild_line in sorted(ebuild_list_tree):
@@ -262,7 +237,8 @@ class gobs_package(object):
 			package_metadataDict = self.get_package_metadataDict(pkgdir, package)
 			update_new_package_metadata(conn,package_id, package_metadataDict)
 			# Get config_cpv_listDict
-			config_cpv_listDict = self.config_match_ebuild(categories, package)
+			config_list_all  = get_config_list(conn)
+			config_cpv_listDict = self.config_match_ebuild(categories, package, config_list_all)
 			config_id  = get_default_config(conn)
 			packageDict ={}
 			for ebuild_line in sorted(ebuild_list_tree):

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 606c624..1d7ee48 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -84,7 +84,11 @@ def get_config_list(connection):
   cursor = connection.cursor()
   sqlQ = 'SELECT id FROM configs WHERE default_config = False AND active = True'
   cursor.execute(sqlQ)
-  return cursor.fetchall()
+  entries = cursor.fetchall()
+  if entries == ():
+    return None
+  else:
+    return entries
 
 def get_config_list_all(connection):
   cursor = connection.cursor()



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-29 22:28 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-29 22:28 UTC (permalink / raw
  To: gentoo-commits

commit:     456c57d0a43e96b56aab5eb10857afa0f37820b2
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 29 22:28:13 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Oct 29 22:28:13 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=456c57d0

fix bugs in setup_profile part2

---
 gobs/pym/check_setup.py        |    1 +
 gobs/pym/init_setup_profile.py |    8 ++------
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index b5f612b..8512470 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -2,6 +2,7 @@ from __future__ import print_function
 import portage
 import os
 import errno
+from git import *
 from gobs.text import get_file_text
 
 from gobs.readconf import get_conf_settings

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index 23a2056..c79b51b 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -20,11 +20,7 @@ if CM.getName()=='pgsql':
   from gobs.pgsql import *
 
 from gobs.check_setup import check_make_conf, git_pull
-from gobs.arch import gobs_arch
 from gobs.package import gobs_package
-from gobs.categories import gobs_categories
-from gobs.old_cpv import gobs_old_cpv
-from gobs.categories import gobs_categories
 import portage
 
 def setup_profile_main(args=None):
@@ -35,7 +31,7 @@ def setup_profile_main(args=None):
 	conn=CM.getConnection()
 	if args is None:
 		args = sys.argv[1:]
-		if args[0] is "-add":
+		if args[0] == "-add":
 			git_pull()
 			check_make_conf()
 			print "Check configs done"
@@ -70,7 +66,7 @@ def setup_profile_main(args=None):
 				if ebuild_id is not None:
 					ebuild_id_list.append(ebuild_id)
 					init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
-		if args[0] is "-del":
+		if args[0] == "-del":
 			config_id = args[1]
 			querue_id_list = get_queue_id_list_config(conn, config_id)
 			if querue_id_list is not None:
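
args[0] is "-add" compares object identity, not text, and only ever worked when CPython happened to intern both strings; a value parsed from the command line is generally a different object. Equality is what was meant:

  flag = "".join(["-", "add"])  # built at runtime, like a sys.argv entry
  print(flag == "-add")         # True  — value comparison, the fix
  print(flag is "-add")         # False — identity comparison, the old test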



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-29 22:38 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-29 22:38 UTC (permalink / raw
  To: gentoo-commits

commit:     7e8a8fb059e563b08b738f3fcd5fa6e892e21a2c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 29 22:38:25 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Oct 29 22:38:25 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7e8a8fb0

fix bugs in setup_profile part4

---
 gobs/pym/package.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 3bd614e..b68d8f6 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -130,7 +130,7 @@ class gobs_package(object):
 		conn=CM.getConnection()
 		# Get the needed info from packageDict and config_cpv_listDict and put that in buildqueue
 		# Only add it if ebuild_version in packageDict and config_cpv_listDict match
-		if config_cpv_listDict not None:
+		if config_cpv_listDict is not None:
 			message = None
 			# Unpack config_cpv_listDict
 			for k, v in config_cpv_listDict.iteritems():



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-29 22:48 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-29 22:48 UTC (permalink / raw
  To: gentoo-commits

commit:     5de7600aa8002d11819480315ee6df1b8d768a30
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Oct 29 22:48:41 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Oct 29 22:48:41 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=5de7600a

fix bugs in setup_profile part5

---
 gobs/pym/init_setup_profile.py |   20 +++++++++++---------
 1 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index c79b51b..cc799cb 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -57,15 +57,17 @@ def setup_profile_main(args=None):
 				package = element[1]
 				print "C", categories + "/" + package			# C = Checking
 				pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
-				config_cpv_listDict = init_package.config_match_cp(categories, package, config_id_list)
-				packageDict['ebuild_version_tree'] = config_cpv_listDict['ebuild_version']
-				build_dict['checksum'] = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + config_cpv_listDict['ebuild_version'] + ".ebuild")[0]
-				build_dict['package_id'] = have_package_db(categories, package)
-				build_dict['ebuild_version'] = config_cpv_listDict['ebuild_version']
-				ebuild_id = get_ebuild_id_db_checksum(connection, build_dict)
-				if ebuild_id is not None:
-					ebuild_id_list.append(ebuild_id)
-					init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
+				config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
+				if config_cpv_listDict != {}:
+					packageDict['ebuild_version_tree'] = config_cpv_listDict['ebuild_version']
+					build_dict['checksum'] = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + config_cpv_listDict['ebuild_version'] + ".ebuild")[0]
+					build_dict['package_id'] = have_package_db(categories, package)
+					build_dict['ebuild_version'] = config_cpv_listDict['ebuild_version']
+					ebuild_id = get_ebuild_id_db_checksum(connection, build_dict)
+					if ebuild_id is not None:
+						ebuild_id_list.append(ebuild_id)
+						init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
+
 		if args[0] == "-del":
 			config_id = args[1]
 			querue_id_list = get_queue_id_list_config(conn, config_id)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2011-10-31 21:32 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2011-10-31 21:32 UTC (permalink / raw
  To: gentoo-commits

commit:     d0d469c1afdab7795b97dccf02272bd6fc997611
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Oct 31 21:32:12 2011 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Oct 31 21:32:12 2011 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=d0d469c1

fix bugs in gobs_setup_profile

---
 gobs/pym/init_setup_profile.py |   47 +++++++++++++++++++++++-----------------
 gobs/pym/package.py            |   33 ++++++++++++---------------
 gobs/pym/pgsql.py              |   17 ++++++++------
 3 files changed, 52 insertions(+), 45 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index cc799cb..e647e1f 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -48,32 +48,39 @@ def setup_profile_main(args=None):
 			config_id_list = []
 			config_id_list.append(config_id)
 			for package_line in sorted(package_list_tree):
-				build_dict = {}
-				packageDict = {}
-				ebuild_id_list = []
-				# split the cp to categories and package
-				element = package_line.split('/')
-				categories = element[0]
-				package = element[1]
-				print "C", categories + "/" + package			# C = Checking
-				pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
-				config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
-				if config_cpv_listDict != {}:
-					packageDict['ebuild_version_tree'] = config_cpv_listDict['ebuild_version']
-					build_dict['checksum'] = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + config_cpv_listDict['ebuild_version'] + ".ebuild")[0]
-					build_dict['package_id'] = have_package_db(categories, package)
-					build_dict['ebuild_version'] = config_cpv_listDict['ebuild_version']
-					ebuild_id = get_ebuild_id_db_checksum(connection, build_dict)
-					if ebuild_id is not None:
-						ebuild_id_list.append(ebuild_id)
-						init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
+				# FIXME: remove the check for gobs when in tree
+				if package_line != "dev-python/gobs":
+					build_dict = {}
+					packageDict = {}
+					ebuild_id_list = []
+					# split the cp to categories and package
+					element = package_line.split('/')
+					categories = element[0]
+					package = element[1]
+					print "C", categories + "/" + package			# C = Checking
+					pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
+					config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
+					if config_cpv_listDict != {}:
+						cpv = categories + "/" + package + "-" + config_cpv_listDict[config_id]['ebuild_version']
+                                                attDict = {}
+                                                attDict['categories'] = categories
+                                                attDict['package'] = package
+                                                attDict['ebuild_version_tree'] = config_cpv_listDict[config_id]['ebuild_version']
+                                                packageDict[cpv] = attDict
+						build_dict['checksum'] = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + config_cpv_listDict[config_id]['ebuild_version'] + ".ebuild")[0]
+						build_dict['package_id'] = have_package_db(conn, categories, package)[0]
+						build_dict['ebuild_version'] = config_cpv_listDict[config_id]['ebuild_version']
+						ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
+						if ebuild_id is not None:
+							ebuild_id_list.append(ebuild_id)
+							init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
 
 		if args[0] == "-del":
 			config_id = args[1]
 			querue_id_list = get_queue_id_list_config(conn, config_id)
 			if querue_id_list is not None:
 				for querue_id in querue_id_list:
-					del_old_queue(conn, queue_id)
+					del_old_queue(conn, querue_id)
 	CM.putConnection(conn)
 
 		
\ No newline at end of file

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index b68d8f6..4f1f48f 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -28,15 +28,12 @@ class gobs_package(object):
 		mysettings_setup = portage.config(config_root = my_new_setup)
 		return mysettings_setup
 
-	def config_match_ebuild(self, categories, package, config_list_all):
-		conn=CM.getConnection()
+	def config_match_ebuild(self, categories, package, config_list):
 		config_cpv_listDict ={}
-		# Get a list from table configs/setups with default_config=Fales and active = True
-		config_list_all  = get_config_list(conn)
-		if config_list_all is ():
+		if config_list == []:
 			return config_cpv_listDict
-		for i in config_list_all:
-			config_id = i[0]
+		conn=CM.getConnection()
+		for config_id in config_list:
 			# Change config/setup
 			mysettings_setup = self.change_config(config_id)
 			myportdb_setup = portage.portdbapi(mysettings=mysettings_setup)
@@ -54,13 +51,13 @@ class gobs_package(object):
 				for iuse_line in iuse_flags_list:
 					iuse_flags_list2.append( init_useflags.reduce_flag(iuse_line))
 					# Dic the needed info
-					attDict = {}
-					attDict['ebuild_version'] = latest_ebuild_version
-					attDict['useflags'] = final_use_list
-					attDict['iuse'] = iuse_flags_list2
-					attDict['package'] = package
-					attDict['categories'] = categories
-					config_cpv_listDict[config_id] = attDict
+				attDict = {}
+				attDict['ebuild_version'] = latest_ebuild_version
+				attDict['useflags'] = final_use_list
+				attDict['iuse'] = iuse_flags_list2
+				attDict['package'] = package
+				attDict['categories'] = categories
+				config_cpv_listDict[config_id] = attDict
 			# Clean some cache
 			myportdb_setup.close_caches()
 			portage.portdbapi.portdbapi_instances.remove(myportdb_setup)
@@ -186,8 +183,8 @@ class gobs_package(object):
 		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
 		if ebuild_list_tree == []:
 			return None
-		config_list_all  = get_config_list(conn)
-		config_cpv_listDict = self.config_match_ebuild(categories, package, config_list_all)
+		config_list  = get_config_list(conn)
+		config_cpv_listDict = self.config_match_ebuild(categories, package, config_list)
 		config_id  = get_default_config(conn)
 		packageDict ={}
 		for ebuild_line in sorted(ebuild_list_tree):
@@ -237,8 +234,8 @@ class gobs_package(object):
 			package_metadataDict = self.get_package_metadataDict(pkgdir, package)
 			update_new_package_metadata(conn,package_id, package_metadataDict)
 			# Get config_cpv_listDict
-			config_list_all  = get_config_list(conn)
-			config_cpv_listDict = self.config_match_ebuild(categories, package, config_list_all)
+			config_list  = get_config_list(conn)
+			config_cpv_listDict = self.config_match_ebuild(categories, package, config_list)
 			config_id  = get_default_config(conn)
 			packageDict ={}
 			for ebuild_line in sorted(ebuild_list_tree):

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 1d7ee48..b0a6c83 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -88,7 +88,10 @@ def get_config_list(connection):
   if entries == ():
     return None
   else:
-    return entries
+    config_id_list = []
+    for config_id in entries:
+      config_id_list.append(config_id[0])
+    return config_id_list
 
 def get_config_list_all(connection):
   cursor = connection.cursor()
@@ -284,10 +287,10 @@ def get_ebuild_id_db_checksum(connection, build_dict):
 	cursor = connection.cursor()
 	sqlQ = 'SELECT id FROM ebuilds WHERE ebuild_version = %s AND ebuild_checksum = %s AND package_id = %s'
 	cursor.execute(sqlQ, (build_dict['ebuild_version'], build_dict['checksum'], build_dict['package_id']))
-	ebuild_id = sorted(cursor.fetchall())
-	if ebuild_id is None:
+	ebuild_id_list = sorted(cursor.fetchall())
+	if ebuild_id_list == []:
 		return None
-	return ebuild_id[0]
+	return ebuild_id_list[0]
 
 def get_cpv_from_ebuild_id(connection, ebuild_id):
 	cursor = connection.cursor()
@@ -328,9 +331,9 @@ def have_package_buildqueue(connection, ebuild_id, config_id):
 
 def get_queue_id_list_config(connection, config_id):
 	cursor = connection.cursor()
-	sqlQ = 'SELECT queue_id FROM buildqueue WHERE  config_id = %s'
+	sqlQ = 'SELECT queue_id FROM buildqueue WHERE config = %s'
 	cursor.execute(sqlQ,  (config_id,))
-	entries = cursor.fetchoall()
+	entries = cursor.fetchall()
 	return entries
 
 def add_new_package_buildqueue(connection, ebuild_id, config_id, iuse_flags_list, use_enable, message):
@@ -434,7 +437,7 @@ def del_old_ebuild(connection, ebuild_old_list_db):
 	sqlQ3 = 'DELETE FROM repoman_problems WHERE build_id = %s'
 	sqlQ4 = 'DELETE FROM ebuildbuildwithuses WHERE build_id = %s'
 	sqlQ5 = 'DELETE FROM ebuildhaveskeywords WHERE ebuild_id = %s'
-	sqlQ6 = 'DELETE FROM ebuildhavesiuses WHERE ebuild_id = %s'
+	sqlQ6 = 'DELETE FROM ebuildhavesiuses WHERE ebuild = %s'
 	sqlQ7 = 'DELETE FROM ebuildhavesrestrictions WHERE ebuild_id = %s'
 	sqlQ8 = 'DELETE FROM buildlog WHERE ebuild_id = %s'
 	sqlQ9 = 'SELECT queue_id FROM buildqueue WHERE ebuild_id = %s'
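
get_config_list() now unwraps DB-API rows, which arrive as one-element tuples even when a single column is selected. The same unwrap as a comprehension (entries stands in for cursor.fetchall()):

  entries = [(1,), (2,), (3,)]
  config_id_list = [row[0] for row in entries]
  print(config_id_list)         # [1, 2, 3]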



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-27 18:23 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-27 18:23 UTC (permalink / raw
  To: gentoo-commits

commit:     20eac494df833886e8fe9c631137f62ef68c370e
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 27 18:21:44 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Apr 27 18:21:44 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=20eac494

testing logfile update part2

---
 gobs/pym/readconf.py |    4 ++--
 gobs/pym/sync.py     |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/readconf.py b/gobs/pym/readconf.py
index a3499dc..0e4af2d 100644
--- a/gobs/pym/readconf.py
+++ b/gobs/pym/readconf.py
@@ -35,8 +35,8 @@ class get_conf_settings(object):
 				get_gobs_config = element[1]
 			if element[0] == 'LOGFILES':
 				get_gobs_logfile = element[1]
-			
-			open_conffile.close()
+		open_conffile.close()
+
 		gobs_settings_dict = {}
 		gobs_settings_dict['sql_backend'] = get_sql_backend.rstrip('\n')
 		gobs_settings_dict['sql_db'] = get_sql_db.rstrip('\n')

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 35833f9..c11fc5f 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -11,7 +11,7 @@ def git_pull():
 	master = repo.head.reference
 	print(master.log())
 
-def sync_tree()
+def sync_tree():
 	settings, trees, mtimedb = load_emerge_config()
 	portdb = trees[settings["ROOT"]]["porttree"].dbapi
 	tmpcmdline = []
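
Besides adding the missing colon on sync_tree(), the readconf.py hunk moves open_conffile.close() out of the per-line loop; closing inside the loop invalidates the handle at the end of the first iteration. Reduced illustration (tempfile only to make it runnable):

  import tempfile

  with tempfile.NamedTemporaryFile("w", delete=False) as tf:
      tf.write("GOBSCONFIG=a\nLOGFILE=b\n")

  fh = open(tf.name)
  for line in fh:
      print(line.rstrip())  # parse each line while the handle is open
  fh.close()                # close once, after the loop — the fixed shape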



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-27 18:27 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-27 18:27 UTC (permalink / raw
  To: gentoo-commits

commit:     4f509bc3f9c0a2514078612b5747122062d601f8
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 27 18:27:09 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Apr 27 18:27:09 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4f509bc3

fix an error in check_setup.py
fix an error in check_setup.py

---
 gobs/pym/check_setup.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 8b9b883..ebc58f0 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -69,7 +69,7 @@ def check_make_conf_guest(config_profile):
 	make_conf_checksum_db = get_profile_checksum(conn,config_profile)
 	print('make_conf_checksum_db', make_conf_checksum_db)
 	if make_conf_checksum_db is None:
-		if get_profile_sync(conn, config_profile) is True
+		if get_profile_sync(conn, config_profile) is True:
 			if sync_tree():
 				reset_profile_sync(conn, config_profile)
 		CM.putConnection(conn)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-27 20:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-27 20:33 UTC (permalink / raw
  To: gentoo-commits

commit:     223c06f02cd23c2c4331273a13f00a7471cde217
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 27 20:32:53 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Apr 27 20:32:53 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=223c06f0

fix a typo in readconf.py

---
 gobs/pym/readconf.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/readconf.py b/gobs/pym/readconf.py
index 0e4af2d..6b58ea9 100644
--- a/gobs/pym/readconf.py
+++ b/gobs/pym/readconf.py
@@ -33,7 +33,7 @@ class get_conf_settings(object):
 			# Buildhost setup (host/setup on guest)
 			if element[0] == 'GOBSCONFIG':
 				get_gobs_config = element[1]
-			if element[0] == 'LOGFILES':
+			if element[0] == 'LOGFILE':
 				get_gobs_logfile = element[1]
 		open_conffile.close()
 



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-27 20:42 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-27 20:42 UTC (permalink / raw
  To: gentoo-commits

commit:     1029016e5a45755a80e95608b5765f5104f9b45c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 27 20:41:50 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Apr 27 20:41:50 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=1029016e

fix errors in sync.py

---
 gobs/pym/sync.py |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index c11fc5f..cfaebfe 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -3,6 +3,8 @@ import portage
 import os
 import errno
 from git import *
+from _emerge.actions import load_emerge_config, action_sync
+from _emerge.main import parse_opts
 
 def git_pull():
 	repo = Repo("/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/")
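
These imports pull in private portage plumbing; the signatures below match how the rest of this file uses them in the portage releases gobs targeted and may differ in other versions. A sketch of the call pattern the imports enable:

import logging
from _emerge.actions import load_emerge_config, action_sync
from _emerge.main import parse_opts

def sync_tree():
	settings, trees, mtimedb = load_emerge_config()
	# parse_opts() turns an argv-style list into (action, opts, files).
	myaction, myopts, myfiles = parse_opts(["--sync", "--quiet"])
	logging.info("Emerge --sync")
	# action_sync() reports success/failure of the sync run.
	return action_sync(settings, trees, mtimedb, myopts, myaction)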



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-27 21:03 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-27 21:03 UTC (permalink / raw
  To: gentoo-commits

commit:     7fc612771c7be22af7568f7faf8deb5c4e4b9f54
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 27 21:03:30 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Apr 27 21:03:30 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7fc61277

Add logging sync.py part3

---
 gobs/pym/sync.py |    7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 22ff5e7..aecae32 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -23,11 +23,10 @@ def sync_tree():
 	tmpcmdline.append("--quiet")
 	myaction, myopts, myfiles = parse_opts(tmpcmdline)
 	logging.info("Eemerge --sync")
-	fail_sync = 1
+	fail_sync = 0
 	#fail_sync = action_sync(settings, trees, mtimedb, myopts, myaction)
 	if fail_sync is True:
 		logging.warning("Emerge --sync fail!")
-		sys.exit()
 	else:
-		logging.info("Emerge --sync ... Done."
-	sys.exit()
\ No newline at end of file
+		logging.info("Emerge --sync ... Done.")
+	return
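
The logging.info()/logging.warning() calls added through this series assume a configured root logger; a minimal setup sketch, with the logfile path standing in for the LOGFILE value that readconf.py extracts:

import logging

def setup_logging(logfile):
	# Stock library setup: file target, INFO threshold, timestamped lines.
	logging.basicConfig(
		filename=logfile,
		level=logging.INFO,
		format='%(asctime)s %(levelname)s %(message)s')

setup_logging('/var/log/gobs.log')	# illustrative path
logging.info("Emerge --sync ... Done.")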



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28  0:51 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28  0:51 UTC (permalink / raw
  To: gentoo-commits

commit:     c13d529f3b4a127aa0951f237c89cbab66b23780
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 00:51:18 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 00:51:18 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=c13d529f

fix a typo in sync.py

---
 gobs/pym/sync.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 2262f57..e992e9c 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -15,7 +15,7 @@ def git_pull():
 	repo_remote.pull()
 	master = repo.head.reference
 	print(master.log())
-	logging.info("Git pull ... Done."
+	logging.info("Git pull ... Done.")
 
 def sync_tree():
 	settings, trees, mtimedb = load_emerge_config()
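
git_pull() is GitPython throughout; a compact sketch of the same calls with the closing parenthesis in place, assuming the path already holds a cloned repository:

import logging
from git import Repo

def git_pull(repo_path):
	logging.info("Git pull")
	repo = Repo(repo_path)		# must be an existing clone
	repo.remotes.origin.pull()	# fetch and merge from origin
	master = repo.head.reference
	print(master.log())		# reflog of the checked-out head
	logging.info("Git pull ... Done.")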



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28  1:25 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28  1:25 UTC (permalink / raw
  To: gentoo-commits

commit:     7f3a5322770a33161433c69ba6227fedbd592b87
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 01:25:18 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 01:25:18 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7f3a5322

Add logging package.py

---
 gobs/pym/package.py |   17 +++++++++--------
 1 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 9ae6c57..f4aa1de 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -1,4 +1,5 @@
 from __future__ import print_function
+import logging
 import portage
 from gobs.flags import gobs_use_flags
 from gobs.repoman_gobs import gobs_repoman
@@ -176,8 +177,8 @@ class gobs_package(object):
 	def add_new_package_db(self, categories, package):
 		conn=CM.getConnection()
 		# add new categories package ebuild to tables package and ebuilds
-		print("C", categories + "/" + package)	# C = Checking
-		print("N", categories + "/" + package)	# N = New Package
+		logging.info("C %s/%s", categories, package)	# C = Checking
+		logging.info("N %s/%s", categories, package)	# N = New Package
 		pkgdir = self._mysettings['PORTDIR'] + "/" + categories + "/" + package		# Get PORTDIR + cp
 		categories_dir = self._mysettings['PORTDIR'] + "/" + categories + "/"
 		# Get the ebuild list for cp
@@ -218,7 +219,7 @@ class gobs_package(object):
 			get_manifest_text = get_file_text(pkgdir + "/Manifest")
 			add_new_manifest_sql(conn,package_id, get_manifest_text, manifest_checksum_tree)
 		CM.putConnection(conn)
-		print("C", categories + "/" + package + " ... Done.")
+		logging.info("C %s/%s ... Done.", categories, package)
 
 	def update_package_db(self, categories, package, package_id):
 		conn=CM.getConnection()
@@ -230,9 +231,9 @@ class gobs_package(object):
 		manifest_checksum_db = get_manifest_db(conn,package_id)
 		# if we have the same checksum return else update the package
 		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
-		print("C", categories + "/" + package)	# C = Checking
+		logging.info("C %s/%s", categories, package)	# C = Checking
 		if manifest_checksum_tree != manifest_checksum_db:
-			print("U", categories + "/" + package)		# U = Update
+			logging.info("U %s/%s", categories, package)		# U = Update
 			# Get package_metadataDict and update the db with it
 			package_metadataDict = self.get_package_metadataDict(pkgdir, package)
 			update_new_package_metadata(conn,package_id, package_metadataDict)
@@ -253,9 +254,9 @@ class gobs_package(object):
 					# Get packageDict for ebuild
 					packageDict[ebuild_line] = self.get_packageDict(pkgdir, ebuild_line, categories, package, config_id)
 					if ebuild_version_manifest_checksum_db is None:
-						print("N", categories + "/" + package + "-" + ebuild_version_tree)	# N = New ebuild
+						logging.info("N %s/%s-%s", categories, package, ebuild_version_tree)	# N = New ebuild
 					else:
-						print("U", categories + "/" + package + "-" + ebuild_version_tree)	# U = Updated ebuild
+						logging.info("U %s/%s-%s", categories, package, ebuild_version_tree)	# U = Updated ebuild
 						# Fix so we can use add_new_package_sql(packageDict) to update the ebuilds
 						old_ebuild_list.append(ebuild_version_tree)
 						add_old_ebuild(conn,package_id, old_ebuild_list)
@@ -282,7 +283,7 @@ class gobs_package(object):
 			init_old_cpv = gobs_old_cpv(self._myportdb, self._mysettings)
 			init_old_cpv.mark_old_ebuild_db(categories, package, package_id)
 		CM.putConnection(conn)
-		print("C", categories + "/" + package + " ... Done.")
+		logging.info("C %s/%s ... Done.", categories, package)
 
 	def update_ebuild_db(self, build_dict):
 		conn=CM.getConnection()
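
The print-to-logging conversion above also switches from string concatenation to %-style logging arguments. The distinction is not cosmetic: logging defers interpolation until a handler actually accepts the record, so filtered-out messages cost almost nothing. A two-line illustration:

import logging

categories, package = "dev-python", "gobs"	# illustrative cp
logging.info("C " + categories + "/" + package)	# string built eagerly, even if dropped
logging.info("C %s/%s", categories, package)	# interpolated only when emitted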



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28  1:53 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28  1:53 UTC (permalink / raw
  To: gentoo-commits

commit:     43c6d1e973c9ba99452e3dca57482741b8459a6f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 01:53:38 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 01:53:38 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=43c6d1e9

Add logging package.py part2

---
 gobs/pym/package.py |    9 +++++----
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index f4aa1de..d6b4eb8 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -89,6 +89,7 @@ class gobs_package(object):
 		# ebuild_version_metadata_tree and set ebuild_version_checksum_tree to 0
 		# so it can be updated next time we update the db
 		if ebuild_version_metadata_tree  == []:
+			logging.info(" QA: %s Have broken metadata", ebuild_line)
 			ebuild_version_metadata_tree = ['','','','','','','','','','','','','','','','','','','','','','','','','']
 			ebuild_version_checksum_tree = ['0']
 		# add the ebuild to the dict packages
@@ -155,7 +156,8 @@ class gobs_package(object):
 					# Comper ebuild_version and add the ebuild_version to buildqueue
 					if portage.vercmp(v['ebuild_version_tree'], latest_ebuild_version) == 0:
 						add_new_package_buildqueue(conn,ebuild_id, config_id, use_flags_list, use_enable_list, message)
-						print("B",  config_id, v['categories'] + "/" + v['package'] + "-" + latest_ebuild_version, "USE:", use_enable)	# B = Build config cpv use-flags
+						logging.info("B %s/%s-%s USE: %s %s", v['categories'], v['package'], \
+							latest_ebuild_version, use_enable, config_id)	# B = Build cpv use-flags config
 					i = i +1
 		CM.putConnection(conn)
 
@@ -207,7 +209,7 @@ class gobs_package(object):
 			manifest_error = init_manifest.digestcheck()
 			if manifest_error is not None:
 				qa_error.append(manifest_error)
-				print("QA:", categories + "/" + package, qa_error)
+				logging.info("QA: %s/%s %s", categories, package, qa_error)
 			add_qa_repoman(conn,ebuild_id_list, qa_error, packageDict, config_id)
 			# Add the ebuild to the buildqueru table if needed
 			self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
@@ -275,7 +277,7 @@ class gobs_package(object):
 			manifest_error = init_manifest.digestcheck()
 			if manifest_error is not None:
 				qa_error.append(manifest_error)
-				print("QA:", categories + "/" + package, qa_error)
+				logging.info("QA: %s/%s %s", categories, package, qa_error)
 			add_qa_repoman(conn,ebuild_id_list, qa_error, packageDict, config_id)
 			# Add the ebuild to the buildqueru table if needed
 			self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
@@ -303,5 +305,4 @@ class gobs_package(object):
 			add_old_ebuild(conn,package_id, old_ebuild_list)
 			update_active_ebuild(conn,package_id, ebuild_version_tree)
 		return_id = add_new_package_sql(conn,packageDict)
-		print('return_id', return_id)
 		CM.putConnection(conn)
\ No newline at end of file



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 12:37 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 12:37 UTC (permalink / raw
  To: gentoo-commits

commit:     d00fd39f3d62d4fa00ed1a3c98d72a6f52872123
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 12:37:01 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 12:37:01 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=d00fd39f

Add logging old_cpv.py

---
 gobs/pym/old_cpv.py |   11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/gobs/pym/old_cpv.py b/gobs/pym/old_cpv.py
index 2fa1fab..93af29f 100644
--- a/gobs/pym/old_cpv.py
+++ b/gobs/pym/old_cpv.py
@@ -1,4 +1,5 @@
 from __future__ import print_function
+import logging
 from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
@@ -30,14 +31,14 @@ class gobs_old_cpv(object):
 			# Set no active on ebuilds in the db that no longer in tree
 			if  old_ebuild_list != []:
 				for old_ebuild in old_ebuild_list:
-					print("O", categories + "/" + package + "-" + old_ebuild[0])
+					logging.info("O %s/%s-%s", categories, package, old_ebuild[0])
 					add_old_ebuild(conn,package_id, old_ebuild_list)
 		# Check if we have older no activ ebuilds then 60 days
 		ebuild_old_list_db = cp_list_old_db(conn,package_id)
 		# Delete older ebuilds in the db
 		if ebuild_old_list_db != []:
 			for del_ebuild_old in ebuild_old_list_db:
-				print("D", categories + "/" + package + "-" + del_ebuild_old[1])
+				logging.info("D %s/%s-%s", categories, package, del_ebuild_old[1])
 			del_old_ebuild(conn,ebuild_old_list_db)
 		CM.putConnection(conn)
 
@@ -57,14 +58,14 @@ class gobs_old_cpv(object):
 			if mark_old_list != []:
 				for x in mark_old_list:
 					element = get_cp_from_package_id(conn,x)
-					print("O", element[0])
+					logging.info("O %s", element[0])
 		# Check if we have older no activ categories/package then 60 days
 		del_package_id_old_list = cp_all_old_db(conn,old_package_id_list)
 		# Delete older  categories/package and ebuilds in the db
 		if del_package_id_old_list != []:
 			for i in del_package_id_old_list:
 				element = get_cp_from_package_id(conn,i)
-				print("D", element)
+				logging.info("D %s", element)
 			del_old_package(conn,del_package_id_old_list)
 		CM.putConnection(conn)
 		
@@ -85,5 +86,5 @@ class gobs_old_cpv(object):
 		if categories_old_list != []:
 			for real_old_categories in categories_old_list:
 				del_old_categories(conn,real_old_categoriess)
-				print("D", real_old_categories)
+				logging.info("D %s", real_old_categories)
 		CM.putConnection(conn)
\ No newline at end of file



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 14:01 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 14:01 UTC (permalink / raw
  To: gentoo-commits

commit:     e147d01855031095e123178d50e88d7ae1efcb34
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 14:01:04 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 14:01:04 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e147d018

fix config-root for --sync

---
 gobs/pym/sync.py |   23 +++++++++++++++++------
 1 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index e992e9c..3fcf31d 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -5,8 +5,16 @@ import errno
 import logging
 import sys
 from git import *
-from _emerge.actions import load_emerge_config, action_sync
-from _emerge.main import parse_opts
+from _emerge.main import emerge_main
+
+from gobs.readconf import get_conf_settings
+reader=get_conf_settings()
+gobs_settings_dict=reader.read_gobs_settings_all()
+from gobs.ConnectionManager import connectionManager
+CM=connectionManager(gobs_settings_dict)
+#selectively import the pgsql/mysql querys
+if CM.getName()=='pgsql':
+	from gobs.pgsql import *
 
 def git_pull():
 	logging.info("Git pull")
@@ -18,15 +26,18 @@ def git_pull():
 	logging.info("Git pull ... Done.")
 
 def sync_tree():
-	settings, trees, mtimedb = load_emerge_config()
-	portdb = trees[settings["ROOT"]]["porttree"].dbapi
+	conn=CM.getConnection()
+	config_id = get_default_config(conn)			# HostConfigDir = table configs id
+	CM.putConnection(conn)
+	default_config_root = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
 	tmpcmdline = []
 	tmpcmdline.append("--sync")
 	tmpcmdline.append("--quiet")
-	myaction, myopts, myfiles = parse_opts(tmpcmdline)
+	tmpcmdline.append("--config_root=" + default_config_root)
+	print("tmpcmdline: %s", default_config_root)
 	logging.info("Emerge --sync")
 	fail_sync = 0
-	#fail_sync = action_sync(settings, trees, mtimedb, myopts, myaction)
+	#fail_sync = emerge_main(args=tmpcmdline)
 	if fail_sync is True:
 		logging.warning("Emerge --sync fail!")
 	else:



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 14:20 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 14:20 UTC (permalink / raw
  To: gentoo-commits

commit:     6e6c40895ef2b1fb56c71c5ea40ced0c63b0a47f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 14:20:31 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 14:20:31 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6e6c4089

fix config-root for --sync part2

---
 gobs/pym/sync.py |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 3fcf31d..95e3123 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -34,10 +34,10 @@ def sync_tree():
 	tmpcmdline.append("--sync")
 	tmpcmdline.append("--quiet")
 	tmpcmdline.append("--config_root=" + default_config_root)
-	print("tmpcmdline: %s", default_config_root)
+	print("tmpcmdline:" + tmpcmdline)
 	logging.info("Emerge --sync")
-	fail_sync = 0
-	#fail_sync = emerge_main(args=tmpcmdline)
+	#fail_sync = 0
+	fail_sync = emerge_main(args=tmpcmdline)
 	if fail_sync is True:
 		logging.warning("Emerge --sync fail!")
 	else:
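
One catch in this hunk: print("tmpcmdline:" + tmpcmdline) concatenates a str with a list, which raises TypeError at runtime. Joining the arguments, or letting print() format them, avoids it:

tmpcmdline = ["--sync", "--quiet"]		# abbreviated command line
print("tmpcmdline: " + " ".join(tmpcmdline))	# explicit join
print("tmpcmdline:", tmpcmdline)		# or let print() render the list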



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 14:29 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 14:29 UTC (permalink / raw
  To: gentoo-commits

commit:     af86b2b2b9cce62bcba13367cae61f8d7c34c0a6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 14:29:21 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 14:29:21 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=af86b2b2

fix config-root for --sync part3

---
 gobs/pym/sync.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 95e3123..20d8420 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -36,8 +36,8 @@ def sync_tree():
 	tmpcmdline.append("--config_root=" + default_config_root)
 	print("tmpcmdline:" + tmpcmdline)
 	logging.info("Emerge --sync")
-	#fail_sync = 0
-	fail_sync = emerge_main(args=tmpcmdline)
+	fail_sync = 0
+	#fail_sync = emerge_main(args=tmpcmdline)
 	if fail_sync is True:
 		logging.warning("Emerge --sync fail!")
 	else:



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 16:05 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 16:05 UTC (permalink / raw
  To: gentoo-commits

commit:     a5767ee36c1817329394aa2b5cbadaec4c0f6398
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 16:04:58 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 16:04:58 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a5767ee3

add logging and multiprocessing to gobs_setup_profile part2

---
 gobs/pym/init_setup_profile.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index a18865e..664cd7e 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -25,7 +25,7 @@ from gobs.sync import git_pull
 from gobs.package import gobs_package
 import portage
 import multiprocessing
-def add_cpv_query_pool(init_package, config_id_list, package_line)
+def add_cpv_query_pool(init_package, config_id_list, package_line):
 	conn=CM.getConnection()
 	# FIXME: remove the check for gobs when in tree
 	if package_line != "dev-python/gobs":



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 16:07 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 16:07 UTC (permalink / raw
  To: gentoo-commits

commit:     9e6168d09cafe8e49b2456485ae74e0f047c2ee6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 16:07:38 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 16:07:38 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9e6168d0

add logging and multiprocessing to gobs_setup_profile part3

---
 gobs/pym/init_setup_profile.py |    6 ++----
 1 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index 664cd7e..4aaa854 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -36,7 +36,7 @@ def add_cpv_query_pool(init_package, config_id_list, package_line):
 		element = package_line.split('/')
 		categories = element[0]
 		package = element[1]
-		logging.info"C %s/%s", categories, package			# C = Checking
+		logging.info"C %s/%s", categories, package)			# C = Checking
 		pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
 		config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
 		if config_cpv_listDict != {}:
@@ -53,7 +53,7 @@ def add_cpv_query_pool(init_package, config_id_list, package_line):
 			if ebuild_id is not None:
 				ebuild_id_list.append(ebuild_id)
 				init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
-		logging.info"C %s/%s ... Done.", categories, package
+		logging.info"C %s/%s ... Done.", categories, package)
 	CM.putConnection(conn)
 	return
 
@@ -104,5 +104,3 @@ def setup_profile_main(args=None):
 				del_old_queue(conn, querue_id)
 		logging.info("Removeing build querys for: %s ... Done.", config_id)
 	CM.putConnection(conn)
-
-		
\ No newline at end of file



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 16:09 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 16:09 UTC (permalink / raw
  To: gentoo-commits

commit:     7282fa9f124a482da746cd5ee9054cf564f2926c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 16:09:41 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 16:09:41 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7282fa9f

add logging and multiprocessing to gobs_setup_profile part4

---
 gobs/pym/init_setup_profile.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index 4aaa854..d4205c9 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -36,7 +36,7 @@ def add_cpv_query_pool(init_package, config_id_list, package_line):
 		element = package_line.split('/')
 		categories = element[0]
 		package = element[1]
-		logging.info"C %s/%s", categories, package)			# C = Checking
+		logging.info("C %s/%s", categories, package)			# C = Checking
 		pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
 		config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
 		if config_cpv_listDict != {}:
@@ -53,7 +53,7 @@ def add_cpv_query_pool(init_package, config_id_list, package_line):
 			if ebuild_id is not None:
 				ebuild_id_list.append(ebuild_id)
 				init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
-		logging.info"C %s/%s ... Done.", categories, package)
+		logging.info("C %s/%s ... Done.", categories, package)
 	CM.putConnection(conn)
 	return
 



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 17:03 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 17:03 UTC (permalink / raw
  To: gentoo-commits

commit:     23e854a2c50fa87a2ee2891cc46f26716b574497
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 17:03:33 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 17:03:33 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=23e854a2

add logging and multiprocessing to gobs_setup_profile part5

---
 gobs/pym/init_setup_profile.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index d4205c9..605a592 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -67,7 +67,7 @@ def setup_profile_main(args=None):
 		args = sys.argv[1:]
 	if args[0] == "-add":
 		config_id = args[1]
-		logging.info("Adding build querys for %s:", config_id)
+		logging.info("Adding build querys for: %s", config_id)
 		git_pull()
 		check_make_conf()
 		logging.info("Check configs done")
@@ -79,7 +79,7 @@ def setup_profile_main(args=None):
 		init_package = gobs_package(mysettings, myportdb)
 		# get the cp list
 		package_list_tree = package_list_tree = myportdb.cp_all()
-		logging.info("Setting default config to:", config_id)
+		logging.info("Setting default config to: %s", config_id)
 		config_id_list = []
 		config_id_list.append(config_id)
 		# Use all exept 2 cores when multiprocessing



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 17:24 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 17:24 UTC (permalink / raw
  To: gentoo-commits

commit:     ca483c94cca20df4780645a6a37785eb61ca3212
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 17:24:38 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 17:24:38 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=ca483c94

fix the multiprocessing for gobs_setup_profile

---
 gobs/pym/init_setup_profile.py |   10 ++++++----
 1 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index 605a592..b3e8fef 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -25,7 +25,10 @@ from gobs.sync import git_pull
 from gobs.package import gobs_package
 import portage
 import multiprocessing
-def add_cpv_query_pool(init_package, config_id_list, package_line):
+def add_cpv_query_pool(default_config_root, config_id_list, package_line):
+	mysettings = portage.config(config_root = default_config_root)
+	myportdb = portage.portdbapi(mysettings=mysettings)
+	init_package = gobs_package(mysettings, myportdb)
 	conn=CM.getConnection()
 	# FIXME: remove the check for gobs when in tree
 	if package_line != "dev-python/gobs":
@@ -76,7 +79,6 @@ def setup_profile_main(args=None):
 		# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 		mysettings = portage.config(config_root = default_config_root)
 		myportdb = portage.portdbapi(mysettings=mysettings)
-		init_package = gobs_package(mysettings, myportdb)
 		# get the cp list
 		package_list_tree = package_list_tree = myportdb.cp_all()
 		logging.info("Setting default config to: %s", config_id)
@@ -90,14 +92,14 @@ def setup_profile_main(args=None):
 			use_pool_cores = 1
 		pool = multiprocessing.Pool(processes=use_pool_cores)
 		for package_line in sorted(package_list_tree):
-			pool.apply_async(add_cpv_query_pool, (init_package, config_id_list, package_line,))
+			pool.apply_async(add_cpv_query_pool, (default_config_root, config_id_list, package_line,))
 		pool.close()
 		pool.join()
 		logging.info("Adding build querys for: %s ... Done.", config_id)
 
 	if args[0] == "-del":
 		config_id = args[1]
-		logging.info("Removeing build querys for %s:", config_id)
+		logging.info("Removeing build querys for: %s", config_id)
 		querue_id_list = get_queue_id_list_config(conn, config_id)
 		if querue_id_list is not None:
 			for querue_id in querue_id_list:
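
One common reason for reshuffling apply_async arguments like this is that they are pickled on their way to the worker processes, and heavyweight portage objects do not always survive the trip. A self-contained sketch of the construct-inside-the-worker pattern; the names are illustrative, not the gobs API:

import multiprocessing

def add_cpv_worker(config_root, package_line):
	# Unpicklable state (in gobs: portage.config, portdbapi,
	# gobs_package) would be constructed here, once per task.
	return "%s (config_root=%s)" % (package_line, config_root)

if __name__ == '__main__':
	pool = multiprocessing.Pool(processes=2)
	results = [pool.apply_async(add_cpv_worker, ("/etc/portage", cp,))
		for cp in ("dev-python/gobs", "sys-apps/portage")]
	pool.close()
	pool.join()
	for result in results:
		print(result.get())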



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-28 19:29 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-28 19:29 UTC (permalink / raw
  To: gentoo-commits

commit:     bc8426637fa71190307b2ce90cf5ad757fc3c2fb
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 28 19:28:38 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Apr 28 19:28:38 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=bc842663

fix the multiprocessing for gobs_setup_profile part2

---
 gobs/pym/init_setup_profile.py |   13 ++++++-------
 1 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/gobs/pym/init_setup_profile.py b/gobs/pym/init_setup_profile.py
index b3e8fef..8d19c0b 100644
--- a/gobs/pym/init_setup_profile.py
+++ b/gobs/pym/init_setup_profile.py
@@ -25,10 +25,8 @@ from gobs.sync import git_pull
 from gobs.package import gobs_package
 import portage
 import multiprocessing
-def add_cpv_query_pool(default_config_root, config_id_list, package_line):
-	mysettings = portage.config(config_root = default_config_root)
-	myportdb = portage.portdbapi(mysettings=mysettings)
-	init_package = gobs_package(mysettings, myportdb)
+
+def add_cpv_query_pool(mysettings, init_package, config_id, package_line):
 	conn=CM.getConnection()
 	# FIXME: remove the check for gobs when in tree
 	if package_line != "dev-python/gobs":
@@ -41,6 +39,8 @@ def add_cpv_query_pool(default_config_root, config_id_list, package_line):
 		package = element[1]
 		logging.info("C %s/%s", categories, package)			# C = Checking
 		pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
+		config_id_list = []
+		config_id_list.append(config_id)
 		config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
 		if config_cpv_listDict != {}:
 			cpv = categories + "/" + package + "-" + config_cpv_listDict[config_id]['ebuild_version']
@@ -79,11 +79,10 @@ def setup_profile_main(args=None):
 		# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 		mysettings = portage.config(config_root = default_config_root)
 		myportdb = portage.portdbapi(mysettings=mysettings)
+		init_package = gobs_package(mysettings, myportdb)
 		# get the cp list
 		package_list_tree = package_list_tree = myportdb.cp_all()
 		logging.info("Setting default config to: %s", config_id)
-		config_id_list = []
-		config_id_list.append(config_id)
 		# Use all exept 2 cores when multiprocessing
 		pool_cores= multiprocessing.cpu_count()
 		if pool_cores >= 3:
@@ -92,7 +91,7 @@ def setup_profile_main(args=None):
 			use_pool_cores = 1
 		pool = multiprocessing.Pool(processes=use_pool_cores)
 		for package_line in sorted(package_list_tree):
-			pool.apply_async(add_cpv_query_pool, (default_config_root, config_id_list, package_line,))
+			pool.apply_async(add_cpv_query_pool, (mysettings, init_package, config_id, package_line,))
 		pool.close()
 		pool.join()
 		logging.info("Adding build querys for: %s ... Done.", config_id)
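
The "all except 2 cores" sizing that recurs in these modules collapses to one expression; both forms guarantee at least one worker:

import multiprocessing

pool_cores = multiprocessing.cpu_count()
use_pool_cores = max(pool_cores - 2, 1)		# same result as the if/else above
pool = multiprocessing.Pool(processes=use_pool_cores)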



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-29 13:17 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-29 13:17 UTC (permalink / raw
  To: gentoo-commits

commit:     57a62fea3a721a3b8400f7b86c8371c0b29f11fa
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 29 13:17:43 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Apr 29 13:17:43 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=57a62fea

start of the job host daemon part3

---
 gobs/pym/pgsql.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 6ac08c8..ce18237 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -649,7 +649,7 @@ def check_job_list(connection, config_profile):
 		return None
 	return job
 	
-def update_job_list(status, jobid)
+def update_job_list(status, jobid):
 	cursor = connection.cursor()
 	sqlQ = 'UPDATE  jobs_list SET ststus = %s WHERE jobid = %s'
 	cursor.execute(sqlQ, (status, jobid,))
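
Beyond the missing colon, update_job_list as shown still references a connection it is never given, and the SQL spells the column "ststus". A sketch in the style of the other pgsql.py helpers, with the connection passed in and the write committed; the corrected column name is an assumption:

def update_job_list(connection, status, jobid):
	cursor = connection.cursor()
	sqlQ = 'UPDATE jobs_list SET status = %s WHERE jobid = %s'	# 'status' assumed
	cursor.execute(sqlQ, (status, jobid,))
	connection.commit()	# persist the UPDATE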



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-29 13:24 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-29 13:24 UTC (permalink / raw
  To: gentoo-commits

commit:     b7ed0fd393b8c355ac25579939a343da2535b208
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 29 13:24:43 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Apr 29 13:24:43 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=b7ed0fd3

fix a typo in pgsql.py

---
 gobs/pym/pgsql.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index ce18237..5ebe500 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -639,8 +639,8 @@ def make_conf_error(connection,config_profile):
 
 def check_job_list(connection, config_profile):
 	cursor = connection.cursor()
-	sqlQ1 = 'SELECT idnr FROM config WHERE id = %s'
-	sqlQ2 = "SELECT job, jobnr FROM jobs_list WHERE status = 'Waiting' AND config_id = %s" 
+	sqlQ1 = 'SELECT idnr FROM configs WHERE id = %s'
+	sqlQ2 = "SELECT job, jobnr FROM jobs_list WHERE status = 'Waiting' AND config_id = %s"
 	cursor.execute(sqlQ1, (config_profile,))
 	config_nr = cursor.fetchone()
 	cursor.execute(sqlQ2, (config_nr,))
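
A related subtlety in check_job_list: fetchone() returns a whole row tuple, so config_nr above is (idnr,), not the bare id. A one-element tuple may happen to work as a query parameter depending on the adapter, but unpacking is clearer; a sketch using the names from the diff:

def check_job_list(connection, config_profile):
	cursor = connection.cursor()
	cursor.execute('SELECT idnr FROM configs WHERE id = %s', (config_profile,))
	row = cursor.fetchone()
	if row is None:
		return None
	config_nr = row[0]	# unpack the single column from the row tuple
	cursor.execute("SELECT job, jobnr FROM jobs_list WHERE status = 'Waiting' AND config_id = %s",
		(config_nr,))
	return cursor.fetchone()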



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-29 15:56 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-29 15:56 UTC (permalink / raw
  To: gentoo-commits

commit:     2e581e0c9ff99ab6882f63802c902c19e5ae135b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 29 15:56:00 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Apr 29 15:56:00 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=2e581e0c

adding updatedb to the job host daemon part4

---
 gobs/pym/sync.py~     |   45 -----------------
 gobs/pym/updatedb.py~ |  128 -------------------------------------------------
 2 files changed, 0 insertions(+), 173 deletions(-)

diff --git a/gobs/pym/sync.py~ b/gobs/pym/sync.py~
deleted file mode 100644
index a39ff4d..0000000
--- a/gobs/pym/sync.py~
+++ /dev/null
@@ -1,45 +0,0 @@
-from __future__ import print_function
-import portage
-import os
-import errno
-import logging
-import sys
-from git import *
-from _emerge.main import emerge_main
-
-from gobs.readconf import get_conf_settings
-reader=get_conf_settings()
-gobs_settings_dict=reader.read_gobs_settings_all()
-from gobs.ConnectionManager import connectionManager
-CM=connectionManager(gobs_settings_dict)
-#selectively import the pgsql/mysql querys
-if CM.getName()=='pgsql':
-	from gobs.pgsql import *
-
-def git_pull():
-	logging.info("Git pull")
-	repo = Repo("/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/")
-	repo_remote = repo.remotes.origin
-	repo_remote.pull()
-	master = repo.head.reference
-	print(master.log())
-	logging.info("Git pull ... Done.")
-
-def sync_tree():
-	conn=CM.getConnection()
-	config_id = get_default_config(conn)			# HostConfigDir = table configs id
-	CM.putConnection(conn)
-	default_config_root = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
-	tmpcmdline = []
-	tmpcmdline.append("--sync")
-	tmpcmdline.append("--quiet")
-	tmpcmdline.append("--config-root=" + default_config_root)
-	logging.info("Emerge --sync")
-	fail_sync = False
-	#fail_sync = emerge_main(args=tmpcmdline)
-	if fail_sync is True:
-		logging.warning("Emerge --sync fail!")
-		return False
-	else:
-		logging.info("Emerge --sync ... Done.")
-	return True

diff --git a/gobs/pym/updatedb.py~ b/gobs/pym/updatedb.py~
deleted file mode 100755
index 5919e64..0000000
--- a/gobs/pym/updatedb.py~
+++ /dev/null
@@ -1,128 +0,0 @@
-# Distributed under the terms of the GNU General Public License v2
-
-""" 	This code will update the sql backend with needed info for
-	the Frontend and the Guest deamon. """
-from __future__ import print_function
-import sys
-import os
-import multiprocessing
-import logging
-
-# Get the options from the config file set in gobs.readconf
-from gobs.readconf import get_conf_settings
-reader = get_conf_settings()
-gobs_settings_dict=reader.read_gobs_settings_all()
-logfile = gobs_settings_dict['gobs_logfile']
-
-# make a CM
-from gobs.ConnectionManager import connectionManager
-CM=connectionManager(gobs_settings_dict)
-#selectively import the pgsql/mysql querys
-if CM.getName()=='pgsql':
-  from gobs.pgsql import *
-
-from gobs.check_setup import check_make_conf
-from gobs.arch import gobs_arch
-from gobs.package import gobs_package
-from gobs.categories import gobs_categories
-from gobs.old_cpv import gobs_old_cpv
-from gobs.categories import gobs_categories
-from gobs.sync import git_pull, sync_tree
-import portage
-
-def init_portage_settings():
-	
-	""" Get the BASE Setup/Config for portage.settings
-	@type: module
-	@module: The SQL Backend
-	@type: dict
-	@parms: config options from the config file (host_setup_root)
-	@rtype: settings
-	@returns new settings
-	"""
-	# check config setup
-	#git stuff
-	conn=CM.getConnection()
-	check_make_conf()
-	logging.info("Check configs done")
-	# Get default config from the configs table  and default_config=1
-	config_id = get_default_config(conn)			# HostConfigDir = table configs id
-	CM.putConnection(conn);
-	default_config_root = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
-	# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
-	mysettings = portage.config(config_root = default_config_root)
-	logging.info("Setting default config to: %s", config_id[0])
-	return mysettings
-
-def update_cpv_db_pool(mysettings, package_line):
-	conn=CM.getConnection()
-	# Setup portdb, gobs_categories, gobs_old_cpv, package
-	myportdb = portage.portdbapi(mysettings=mysettings)
-	init_categories = gobs_categories(mysettings)
-	init_package = gobs_package(mysettings, myportdb)
-	# split the cp to categories and package
-	element = package_line.split('/')
-	categories = element[0]
-	package = element[1]    
-	# Check if we don't have the cp in the package table
-	package_id = have_package_db(conn,categories, package)
-	if package_id is None:  
-		# Add new package with ebuilds
-		init_package.add_new_package_db(categories, package)
-	# Ceck if we have the cp in the package table
-	elif package_id is not None:
-		# Update the packages with ebuilds
-		init_package.update_package_db(categories, package, package_id)
-	# Update the metadata for categories
-	init_categories.update_categories_db(categories)
-	CM.putConnection(conn)
-			
-def update_cpv_db(mysettings):
-	"""Code to update the cpv in the database.
-	@type:settings
-	@parms: portage.settings
-	@type: module
-	@module: The SQL Backend
-	@type: dict
-	@parms: config options from the config file
-	"""
-	logging.info("Checking categories, package, ebuilds")
-	# Setup portdb, gobs_categories, gobs_old_cpv, package
-	myportdb = portage.portdbapi(mysettings=mysettings)
-	package_id_list_tree = []
-	# Will run some update checks and update package if needed
-	# Get categories/package list from portage
-	package_list_tree = myportdb.cp_all()
-	# Use all exept 2 cores when multiprocessing
-	pool_cores= multiprocessing.cpu_count()
-	if pool_cores >= 3:
-		use_pool_cores = pool_cores - 2
-	else:
-		use_pool_cores = 1
-	pool = multiprocessing.Pool(processes=use_pool_cores)
-	# Run the update package for all package in the list in
-	# a multiprocessing pool
-	for package_line in sorted(package_list_tree):
-		pool.apply_async(update_cpv_db_pool, (mysettings, package_line,))
-	pool.close()
-	pool.join() 
-	logging.info("Checking categories, package and ebuilds done")
-
-def update_db_main():
-	# Main
-	# Logging
-	logging.info("Update db started.")
-	# Sync portage and profile/settings
-	git_pull
-	esutalt = sync_tree()
-	if resutalt is False:
-		logging.info("Update db ... Fail.")
-		return False
-	# Init settings for the default config
-	mysettings =  init_portage_settings()
-	init_arch = gobs_arch()
-	init_arch.update_arch_db()
-	# Update the cpv db
-	update_cpv_db(mysettings)
-	logging.info("Update db ... Done.")
-	return True
\ No newline at end of file



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-30 13:12 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-30 13:12 UTC (permalink / raw
  To: gentoo-commits

commit:     d6439ce8722c28a3442bfa8319076e6b1ac3fe45
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 30 13:11:57 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Apr 30 13:11:57 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=d6439ce8

fix for python 3.*

---
 gobs/pym/build_log.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 6742943..a2e0297 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -559,8 +559,8 @@ class gobs_buildlog(object):
 		if build_id is not None:
 			for msg_line in msg:
 				self.write_msg_file(msg_line, emerge_info_logfilename)
-			os.chmod(settings.get("PORTAGE_LOG_FILE"), 0664)
-			os.chmod(emerge_info_logfilename, 0664)
+			os.chmod(settings.get("PORTAGE_LOG_FILE"), 00664)
+			os.chmod(emerge_info_logfilename, 00664)
 			logging.info("Package: %s logged to db.", pkg.cpv)
 		else:
 			# FIXME Remove the log some way so 



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-30 13:13 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-30 13:13 UTC (permalink / raw
  To: gentoo-commits

commit:     9fcf58f253ff4657542f74690dff2ed83e366903
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 30 13:13:31 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Apr 30 13:13:31 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9fcf58f2

fix for python 3.* part2

---
 gobs/pym/build_log.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index a2e0297..5eff107 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -559,8 +559,8 @@ class gobs_buildlog(object):
 		if build_id is not None:
 			for msg_line in msg:
 				self.write_msg_file(msg_line, emerge_info_logfilename)
-			os.chmod(settings.get("PORTAGE_LOG_FILE"), 00664)
-			os.chmod(emerge_info_logfilename, 00664)
+			os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
+			os.chmod(emerge_info_logfilename, 0o664)
 			logging.info("Package: %s logged to db.", pkg.cpv)
 		else:
 			# FIXME Remove the log some way so 
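
These two commits walk the chmod mode from 0664 (Python 2 only) through 00664 (still a syntax error on Python 3) to 0o664, the octal notation accepted by Python 2.6+ and Python 3 alike:

import os
import stat

os.chmod("build.log", 0o664)	# rw-rw-r--; filename is illustrative
# 0o664 spelled out with stat flags:
assert 0o664 == (stat.S_IRUSR | stat.S_IWUSR |
	stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH)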



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-30 14:15 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-30 14:15 UTC (permalink / raw
  To: gentoo-commits

commit:     9b87e9886f4236b9c9a618233d119089c1d3581e
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 30 14:15:02 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Apr 30 14:15:02 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9b87e988

fix var/lib/gobs to var/cache/gobs

---
 gobs/pym/check_setup.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index ebc58f0..8f8aa87 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -17,7 +17,7 @@ if CM.getName()=='pgsql':
 	from gobs.pgsql import *
 
 def git_pull():
-	repo = Repo("/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/")
+	repo = Repo("/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/")
 	repo_remote = repo.remotes.origin
 	repo_remote.pull()
 	master = repo.head.reference
@@ -33,7 +33,7 @@ def check_make_conf():
   for config_id in config_list_all:
 	  attDict={}
 	  # Set the config dir
-	  check_config_dir = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
+	  check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
 	  make_conf_file = check_config_dir + "etc/portage/make.conf"
 	  # Check if we can open the file and close it
 	  # Check if we have some error in the file (portage.util.getconfig)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-30 14:17 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-30 14:17 UTC (permalink / raw
  To: gentoo-commits

commit:     e060ff1bef49f695477b88556ef15c006bff71ca
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 30 14:17:29 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Apr 30 14:17:29 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e060ff1b

fix an error in updatedb.py

---
 gobs/pym/updatedb.py |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index bd73602..da80521 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -113,7 +113,10 @@ def update_db_main():
 	# Logging
 	logging.info("Update db started.")
 	# Sync portage and profile/settings
-	git_pull
+	resutalt = git_pull()
+	if resutalt is False:
+		logging.info("Update db ... Fail.")
+		return False
 	resutalt = sync_tree()
 	if resutalt is False:
 		logging.info("Update db ... Fail.")
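
This fix also replaces the bare git_pull statement, which evaluated the function object without calling it, so the pull silently never ran. A runnable sketch of the gating pattern, with the two sync helpers stubbed in place of the real gobs.sync functions:

import logging

def git_pull():		# stub for gobs.sync.git_pull
	return True

def sync_tree():	# stub for gobs.sync.sync_tree
	return True

def update_db_main():
	logging.info("Update db started.")
	for step in (git_pull, sync_tree):
		if step() is False:	# each step gates the rest of the run
			logging.info("Update db ... Fail.")
			return False
	logging.info("Update db ... Done.")
	return True

update_db_main()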



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-30 14:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-30 14:33 UTC (permalink / raw
  To: gentoo-commits

commit:     27d24c02d4e5ddf13b97018d87007a8e67bdfa0b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 30 14:33:16 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Apr 30 14:33:16 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=27d24c02

fix an error with var/lib/gobs and var/cache/gobs

---
 gobs/pym/package.py  |    2 +-
 gobs/pym/updatedb.py |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index d6b4eb8..4c571b6 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -25,7 +25,7 @@ class gobs_package(object):
 
 	def change_config(self, config_id):
 		# Change config_root  config_id = table configs.id
-		my_new_setup = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id + "/"
+		my_new_setup = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id + "/"
 		mysettings_setup = portage.config(config_root = my_new_setup)
 		return mysettings_setup
 

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index da80521..0277396 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -48,7 +48,7 @@ def init_portage_settings():
 	# Get default config from the configs table  and default_config=1
 	config_id = get_default_config(conn)			# HostConfigDir = table configs id
 	CM.putConnection(conn);
-	default_config_root = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
+	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
 	# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 	mysettings = portage.config(config_root = default_config_root)
 	logging.info("Setting default config to: %s", config_id[0])



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-04-30 16:45 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-04-30 16:45 UTC (permalink / raw
  To: gentoo-commits

commit:     efd00154e8398445fd1e24fc8cb0b5912e7a0851
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 30 16:44:50 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Apr 30 16:44:50 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=efd00154

fix the "../base/etc/make.profile is not a symlink" error

---
 gobs/pym/sync.py |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index ee4bd66..5f8f2d1 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -30,7 +30,8 @@ def sync_tree():
 	conn=CM.getConnection()
 	config_id = get_default_config(conn)			# HostConfigDir = table configs id
 	CM.putConnection(conn)
-	default_config_root = "/var/lib/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
+	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
+	mysettings = portage.config(config_root = default_config_root)
 	tmpcmdline = []
 	tmpcmdline.append("--sync")
 	tmpcmdline.append("--quiet")
@@ -42,5 +43,9 @@ def sync_tree():
 		logging.warning("Emerge --sync fail!")
 		return False
 	else:
+		os.mkdir(mysettings['PORTDIR'] + "/profiles/config", 0o777)
+		with open(mysettings['PORTDIR'] + "/profiles/config/parent", "w") as f:
+			f.write("../base\n")
+			f.close()
 		logging.info("Emerge --sync ... Done.")
 	return True
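
Two fragile spots remain in this hunk: os.mkdir() raises OSError when profiles/config already exists from an earlier sync, and the f.close() inside the with-block is redundant. A defensive sketch of the same post-sync step (errno is already imported at the top of sync.py):

import os
import errno

def write_profile_parent(portdir):
	config_dir = os.path.join(portdir, "profiles", "config")
	try:
		os.mkdir(config_dir, 0o777)
	except OSError as e:
		if e.errno != errno.EEXIST:	# only "already exists" is benign
			raise
	with open(os.path.join(config_dir, "parent"), "w") as f:
		f.write("../base\n")		# the with-block handles the close

# Called after a successful sync, e.g.:
# write_profile_parent(mysettings['PORTDIR'])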



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-01  0:02 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-01  0:02 UTC (permalink / raw
  To: gentoo-commits

commit:     5fd52e5cf2f85ec872780338fcd2ad165f3123d9
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue May  1 00:02:08 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue May  1 00:02:08 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=5fd52e5c

Updated Scheduler.py

---
 gobs/pym/Scheduler.py |  381 +++++++++++++++++++------------------------------
 1 files changed, 146 insertions(+), 235 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 005f861..229c595 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -1,4 +1,4 @@
-# Copyright 1999-2011 Gentoo Foundation
+# Copyright 1999-2012 Gentoo Foundation
 # Distributed under the terms of the GNU General Public License v2
 
 from __future__ import print_function
@@ -7,10 +7,8 @@ from collections import deque
 import gc
 import gzip
 import logging
-import shutil
 import signal
 import sys
-import tempfile
 import textwrap
 import time
 import warnings
@@ -28,9 +26,12 @@ from portage.output import colorize, create_color_func, red
 bad = create_color_func("BAD")
 from portage._sets import SETPREFIX
 from portage._sets.base import InternalPackageSet
-from portage.util import writemsg, writemsg_level
+from portage.util import ensure_dirs, writemsg, writemsg_level
+from portage.util.SlotObject import SlotObject
 from portage.package.ebuild.digestcheck import digestcheck
 from portage.package.ebuild.digestgen import digestgen
+from portage.package.ebuild.doebuild import (_check_temp_dir,
+	_prepare_self_update)
 from portage.package.ebuild.prepare_build_dirs import prepare_build_dirs
 
 import _emerge
@@ -44,6 +45,7 @@ from _emerge.create_depgraph_params import create_depgraph_params
 from _emerge.create_world_atom import create_world_atom
 from _emerge.DepPriority import DepPriority
 from _emerge.depgraph import depgraph, resume_depgraph
+from _emerge.EbuildBuildDir import EbuildBuildDir
 from _emerge.EbuildFetcher import EbuildFetcher
 from _emerge.EbuildPhase import EbuildPhase
 from _emerge.emergelog import emergelog
@@ -52,12 +54,9 @@ from _emerge._find_deep_system_runtime_deps import _find_deep_system_runtime_dep
 from _emerge._flush_elog_mod_echo import _flush_elog_mod_echo
 from _emerge.JobStatusDisplay import JobStatusDisplay
 from _emerge.MergeListItem import MergeListItem
-from _emerge.MiscFunctionsProcess import MiscFunctionsProcess
 from _emerge.Package import Package
 from _emerge.PackageMerge import PackageMerge
 from _emerge.PollScheduler import PollScheduler
-from _emerge.RootConfig import RootConfig
-from _emerge.SlotObject import SlotObject
 from _emerge.SequentialTaskQueue import SequentialTaskQueue
 
 from gobs.build_log import gobs_buildlog
@@ -79,17 +78,12 @@ class Scheduler(PollScheduler):
 		frozenset(["--pretend",
 		"--fetchonly", "--fetch-all-uri"])
 
-	_opts_no_restart = frozenset(["--buildpkgonly",
+	_opts_no_self_update = frozenset(["--buildpkgonly",
 		"--fetchonly", "--fetch-all-uri", "--pretend"])
 
-	_bad_resume_opts = set(["--ask", "--changelog",
-		"--resume", "--skipfirst"])
-
-	class _iface_class(SlotObject):
+	class _iface_class(PollScheduler._sched_iface_class):
 		__slots__ = ("fetch",
-			"output", "register", "schedule",
-			"scheduleSetup", "scheduleUnpack", "scheduleYield",
-			"unregister")
+			"scheduleSetup", "scheduleUnpack")
 
 	class _fetch_iface_class(SlotObject):
 		__slots__ = ("log_file", "schedule")
@@ -153,7 +147,7 @@ class Scheduler(PollScheduler):
 				DeprecationWarning, stacklevel=2)
 
 		self.settings = settings
-		self.target_root = settings["ROOT"]
+		self.target_root = settings["EROOT"]
 		self.trees = trees
 		self.myopts = myopts
 		self._spinner = spinner
@@ -163,7 +157,7 @@ class Scheduler(PollScheduler):
 		self._build_opts = self._build_opts_class()
 
 		for k in self._build_opts.__slots__:
-			setattr(self._build_opts, k, "--" + k.replace("_", "-") in myopts)
+			setattr(self._build_opts, k, myopts.get("--" + k.replace("_", "-")))
 		self._build_opts.buildpkg_exclude = InternalPackageSet( \
 			initial_atoms=" ".join(myopts.get("--buildpkg-exclude", [])).split(), \
 			allow_wildcard=True, allow_repo=True)
@@ -209,10 +203,7 @@ class Scheduler(PollScheduler):
 		if max_jobs is None:
 			max_jobs = 1
 		self._set_max_jobs(max_jobs)
-
-		# The root where the currently running
-		# portage instance is installed.
-		self._running_root = trees["/"]["root_config"]
+		self._running_root = trees[trees._running_eroot]["root_config"]
 		self.edebug = 0
 		if settings.get("PORTAGE_DEBUG", "") == "1":
 			self.edebug = 1
@@ -226,13 +217,11 @@ class Scheduler(PollScheduler):
 		fetch_iface = self._fetch_iface_class(log_file=self._fetch_log,
 			schedule=self._schedule_fetch)
 		self._sched_iface = self._iface_class(
-			fetch=fetch_iface, output=self._task_output,
-			register=self._register,
-			schedule=self._schedule_wait,
+			fetch=fetch_iface,
 			scheduleSetup=self._schedule_setup,
 			scheduleUnpack=self._schedule_unpack,
-			scheduleYield=self._schedule_yield,
-			unregister=self._unregister)
+			**dict((k, getattr(self.sched_iface, k))
+			for k in self.sched_iface.__slots__))
 
 		self._prefetchers = weakref.WeakValueDictionary()
 		self._pkg_queue = []
@@ -296,10 +285,37 @@ class Scheduler(PollScheduler):
 			self._running_portage = self._pkg(cpv, "installed",
 				self._running_root, installed=True)
 
+	def _handle_self_update(self):
+
+		if self._opts_no_self_update.intersection(self.myopts):
+			return os.EX_OK
+
+		for x in self._mergelist:
+			if not isinstance(x, Package):
+				continue
+			if x.operation != "merge":
+				continue
+			if x.root != self._running_root.root:
+				continue
+			if not portage.dep.match_from_list(
+				portage.const.PORTAGE_PACKAGE_ATOM, [x]):
+				continue
+			if self._running_portage is None or \
+				self._running_portage.cpv != x.cpv or \
+				'9999' in x.cpv or \
+				'git' in x.inherited or \
+				'git-2' in x.inherited:
+				rval = _check_temp_dir(self.settings)
+				if rval != os.EX_OK:
+					return rval
+				_prepare_self_update(self.settings)
+			break
+
+		return os.EX_OK
+
 	def _terminate_tasks(self):
 		self._status_display.quiet = True
-		while self._running_tasks:
-			task_id, task = self._running_tasks.popitem()
+		for task in list(self._running_tasks.values()):
 			task.cancel()
 		for q in self._task_queues.values():
 			q.clear()
@@ -311,10 +327,11 @@ class Scheduler(PollScheduler):
 		"""
 		self._set_graph_config(graph_config)
 		self._blocker_db = {}
+		dynamic_deps = self.myopts.get("--dynamic-deps", "y") != "n"
 		for root in self.trees:
 			if graph_config is None:
 				fake_vartree = FakeVartree(self.trees[root]["root_config"],
-					pkg_cache=self._pkg_cache)
+					pkg_cache=self._pkg_cache, dynamic_deps=dynamic_deps)
 				fake_vartree.sync()
 			else:
 				fake_vartree = graph_config.trees[root]['vartree']
@@ -331,52 +348,6 @@ class Scheduler(PollScheduler):
 		self._set_graph_config(None)
 		gc.collect()
 
-	def _poll(self, timeout=None):
-
-		self._schedule()
-
-		if timeout is None:
-			while True:
-				if not self._poll_event_handlers:
-					self._schedule()
-					if not self._poll_event_handlers:
-						raise StopIteration(
-							"timeout is None and there are no poll() event handlers")
-				previous_count = len(self._poll_event_queue)
-				PollScheduler._poll(self, timeout=self._max_display_latency)
-				self._status_display.display()
-				if previous_count != len(self._poll_event_queue):
-					break
-
-		elif timeout <= self._max_display_latency:
-			PollScheduler._poll(self, timeout=timeout)
-			if timeout == 0:
-				# The display is updated by _schedule() above, so it would be
-				# redundant to update it here when timeout is 0.
-				pass
-			else:
-				self._status_display.display()
-
-		else:
-			remaining_timeout = timeout
-			start_time = time.time()
-			while True:
-				previous_count = len(self._poll_event_queue)
-				PollScheduler._poll(self,
-					timeout=min(self._max_display_latency, remaining_timeout))
-				self._status_display.display()
-				if previous_count != len(self._poll_event_queue):
-					break
-				elapsed_time = time.time() - start_time
-				if elapsed_time < 0:
-					# The system clock has changed such that start_time
-					# is now in the future, so just assume that the
-					# timeout has already elapsed.
-					break
-				remaining_timeout = timeout - 1000 * elapsed_time
-				if remaining_timeout <= 0:
-					break
-
 	def _set_max_jobs(self, max_jobs):
 		self._max_jobs = max_jobs
 		self._task_queues.jobs.max_jobs = max_jobs
@@ -388,11 +359,11 @@ class Scheduler(PollScheduler):
 		Check if background mode is enabled and adjust states as necessary.
 
 		@rtype: bool
-		@returns: True if background mode is enabled, False otherwise.
+		@return: True if background mode is enabled, False otherwise.
 		"""
 		background = (self._max_jobs is True or \
 			self._max_jobs > 1 or "--quiet" in self.myopts \
-			or "--quiet-build" in self.myopts) and \
+			or self.myopts.get("--quiet-build") == "y") and \
 			not bool(self._opts_no_background.intersection(self.myopts))
 
 		if background:
@@ -405,7 +376,7 @@ class Scheduler(PollScheduler):
 				msg = [""]
 				for pkg in interactive_tasks:
 					pkg_str = "  " + colorize("INFORM", str(pkg.cpv))
-					if pkg.root != "/":
+					if pkg.root_config.settings["ROOT"] != "/":
 						pkg_str += " for " + pkg.root
 					msg.append(pkg_str)
 				msg.append("")
@@ -748,7 +719,6 @@ class Scheduler(PollScheduler):
 			self._status_msg("Starting parallel fetch")
 
 			prefetchers = self._prefetchers
-			getbinpkg = "--getbinpkg" in self.myopts
 
 			for pkg in self._mergelist:
 				# mergelist can contain solved Blocker instances
@@ -756,15 +726,13 @@ class Scheduler(PollScheduler):
 					continue
 				prefetcher = self._create_prefetcher(pkg)
 				if prefetcher is not None:
-					self._task_queues.fetch.add(prefetcher)
+					# This will start the first prefetcher immediately, so that
+					# self._task() won't discard it. This avoids a case where
+					# the first prefetcher is discarded, causing the second
+					# prefetcher to occupy the fetch queue before the first
+					# fetcher has an opportunity to execute.
 					prefetchers[pkg] = prefetcher
-
-			# Start the first prefetcher immediately so that self._task()
-			# won't discard it. This avoids a case where the first
-			# prefetcher is discarded, causing the second prefetcher to
-			# occupy the fetch queue before the first fetcher has an
-			# opportunity to execute.
-			self._task_queues.fetch.schedule()
+					self._task_queues.fetch.add(prefetcher)
 
 	def _create_prefetcher(self, pkg):
 		"""
@@ -792,100 +760,6 @@ class Scheduler(PollScheduler):
 
 		return prefetcher
 
-	def _is_restart_scheduled(self):
-		"""
-		Check if the merge list contains a replacement
-		for the current running instance, that will result
-		in restart after merge.
-		@rtype: bool
-		@returns: True if a restart is scheduled, False otherwise.
-		"""
-		if self._opts_no_restart.intersection(self.myopts):
-			return False
-
-		mergelist = self._mergelist
-
-		for i, pkg in enumerate(mergelist):
-			if self._is_restart_necessary(pkg) and \
-				i != len(mergelist) - 1:
-				return True
-
-		return False
-
-	def _is_restart_necessary(self, pkg):
-		"""
-		@return: True if merging the given package
-			requires restart, False otherwise.
-		"""
-
-		# Figure out if we need a restart.
-		if pkg.root == self._running_root.root and \
-			portage.match_from_list(
-			portage.const.PORTAGE_PACKAGE_ATOM, [pkg]):
-			if self._running_portage is None:
-				return True
-			elif pkg.cpv != self._running_portage.cpv or \
-				'9999' in pkg.cpv or \
-				'git' in pkg.inherited or \
-				'git-2' in pkg.inherited:
-				return True
-		return False
-
-	def _restart_if_necessary(self, pkg):
-		"""
-		Use execv() to restart emerge. This happens
-		if portage upgrades itself and there are
-		remaining packages in the list.
-		"""
-
-		if self._opts_no_restart.intersection(self.myopts):
-			return
-
-		if not self._is_restart_necessary(pkg):
-			return
-
-		if pkg == self._mergelist[-1]:
-			return
-
-		self._main_loop_cleanup()
-
-		logger = self._logger
-		pkg_count = self._pkg_count
-		mtimedb = self._mtimedb
-		bad_resume_opts = self._bad_resume_opts
-
-		logger.log(" ::: completed emerge (%s of %s) %s to %s" % \
-			(pkg_count.curval, pkg_count.maxval, pkg.cpv, pkg.root))
-
-		logger.log(" *** RESTARTING " + \
-			"emerge via exec() after change of " + \
-			"portage version.")
-
-		mtimedb["resume"]["mergelist"].remove(list(pkg))
-		mtimedb.commit()
-		portage.run_exitfuncs()
-		# Don't trust sys.argv[0] here because eselect-python may modify it.
-		emerge_binary = os.path.join(portage.const.PORTAGE_BIN_PATH, 'emerge')
-		mynewargv = [emerge_binary, "--resume"]
-		resume_opts = self.myopts.copy()
-		# For automatic resume, we need to prevent
-		# any of bad_resume_opts from leaking in
-		# via EMERGE_DEFAULT_OPTS.
-		resume_opts["--ignore-default-opts"] = True
-		for myopt, myarg in resume_opts.items():
-			if myopt not in bad_resume_opts:
-				if myarg is True:
-					mynewargv.append(myopt)
-				elif isinstance(myarg, list):
-					# arguments like --exclude that use 'append' action
-					for x in myarg:
-						mynewargv.append("%s=%s" % (myopt, x))
-				else:
-					mynewargv.append("%s=%s" % (myopt, myarg))
-		# priority only needs to be adjusted on the first run
-		os.environ["PORTAGE_NICENESS"] = "0"
-		os.execv(mynewargv[0], mynewargv)
-
 	def _run_pkg_pretend(self):
 		"""
 		Since pkg_pretend output may be important, this method sends all
@@ -919,11 +793,48 @@ class Scheduler(PollScheduler):
 			root_config = x.root_config
 			settings = self.pkgsettings[root_config.root]
 			settings.setcpv(x)
-			tmpdir = tempfile.mkdtemp()
-			tmpdir_orig = settings["PORTAGE_TMPDIR"]
-			settings["PORTAGE_TMPDIR"] = tmpdir
+
+			# setcpv/package.env allows for per-package PORTAGE_TMPDIR so we
+			# have to validate it for each package
+			rval = _check_temp_dir(settings)
+			if rval != os.EX_OK:
+				return rval
+
+			build_dir_path = os.path.join(
+				os.path.realpath(settings["PORTAGE_TMPDIR"]),
+				"portage", x.category, x.pf)
+			existing_buildir = os.path.isdir(build_dir_path)
+			settings["PORTAGE_BUILDDIR"] = build_dir_path
+			build_dir = EbuildBuildDir(scheduler=sched_iface,
+				settings=settings)
+			build_dir.lock()
+			current_task = None
 
 			try:
+
+				# Clean up the existing build dir, in case pkg_pretend
+				# checks for available space (bug #390711).
+				if existing_buildir:
+					if x.built:
+						tree = "bintree"
+						infloc = os.path.join(build_dir_path, "build-info")
+						ebuild_path = os.path.join(infloc, x.pf + ".ebuild")
+					else:
+						tree = "porttree"
+						portdb = root_config.trees["porttree"].dbapi
+						ebuild_path = portdb.findname(x.cpv, myrepo=x.repo)
+						if ebuild_path is None:
+							raise AssertionError(
+								"ebuild not found for '%s'" % x.cpv)
+					portage.package.ebuild.doebuild.doebuild_environment(
+						ebuild_path, "clean", settings=settings,
+						db=self.trees[settings['EROOT']][tree].dbapi)
+					clean_phase = EbuildPhase(background=False,
+						phase='clean', scheduler=sched_iface, settings=settings)
+					current_task = clean_phase
+					clean_phase.start()
+					clean_phase.wait()
+
 				if x.built:
 					tree = "bintree"
 					bintree = root_config.trees["bintree"].dbapi.bintree
@@ -942,6 +853,7 @@ class Scheduler(PollScheduler):
 
 					verifier = BinpkgVerifier(pkg=x,
 						scheduler=sched_iface)
+					current_task = verifier
 					verifier.start()
 					if verifier.wait() != os.EX_OK:
 						failures += 1
@@ -950,8 +862,8 @@ class Scheduler(PollScheduler):
 					if fetched:
 						bintree.inject(x.cpv, filename=fetched)
 					tbz2_file = bintree.getname(x.cpv)
-					infloc = os.path.join(tmpdir, x.category, x.pf, "build-info")
-					os.makedirs(infloc)
+					infloc = os.path.join(build_dir_path, "build-info")
+					ensure_dirs(infloc)
 					portage.xpak.tbz2(tbz2_file).unpackinfo(infloc)
 					ebuild_path = os.path.join(infloc, x.pf + ".ebuild")
 					settings.configdict["pkg"]["EMERGE_FROM"] = "binary"
@@ -971,7 +883,8 @@ class Scheduler(PollScheduler):
 
 				portage.package.ebuild.doebuild.doebuild_environment(ebuild_path,
 					"pretend", settings=settings,
-					db=self.trees[settings["ROOT"]][tree].dbapi)
+					db=self.trees[settings['EROOT']][tree].dbapi)
+
 				prepare_build_dirs(root_config.root, settings, cleanup=0)
 
 				vardb = root_config.trees['vartree'].dbapi
@@ -983,14 +896,21 @@ class Scheduler(PollScheduler):
 					phase="pretend", scheduler=sched_iface,
 					settings=settings)
 
+				current_task = pretend_phase
 				pretend_phase.start()
 				ret = pretend_phase.wait()
 				if ret != os.EX_OK:
 					failures += 1
 				portage.elog.elog_process(x.cpv, settings)
 			finally:
-				shutil.rmtree(tmpdir)
-				settings["PORTAGE_TMPDIR"] = tmpdir_orig
+				if current_task is not None and current_task.isAlive():
+					current_task.cancel()
+					current_task.wait()
+				clean_phase = EbuildPhase(background=False,
+					phase='clean', scheduler=sched_iface, settings=settings)
+				clean_phase.start()
+				clean_phase.wait()
+				build_dir.unlock()
 
 		if failures:
 			return 1
@@ -1010,6 +930,10 @@ class Scheduler(PollScheduler):
 		except self._unknown_internal_error:
 			return 1
 
+		rval = self._handle_self_update()
+		if rval != os.EX_OK:
+			return rval
+
 		for root in self.trees:
 			root_config = self.trees[root]["root_config"]
 
@@ -1138,12 +1062,9 @@ class Scheduler(PollScheduler):
 			# If only one package failed then just show its
 			# whole log for easy viewing.
 			failed_pkg = self._failed_pkgs_all[-1]
-			build_dir = failed_pkg.build_dir
 			log_file = None
 			log_file_real = None
 
-			log_paths = [failed_pkg.build_log]
-
 			log_path = self._locate_failure_log(failed_pkg)
 			if log_path is not None:
 				try:
@@ -1239,9 +1160,6 @@ class Scheduler(PollScheduler):
 
 	def _locate_failure_log(self, failed_pkg):
 
-		build_dir = failed_pkg.build_dir
-		log_file = None
-
 		log_paths = [failed_pkg.build_log]
 
 		for log_path in log_paths:
@@ -1283,7 +1201,7 @@ class Scheduler(PollScheduler):
 
 		# Skip this if $ROOT != / since it shouldn't matter if there
 		# are unsatisfied system runtime deps in this case.
-		if pkg.root != '/':
+		if pkg.root_config.settings["ROOT"] != "/":
 			return
 
 		completed_tasks = self._completed_tasks
@@ -1365,8 +1283,6 @@ class Scheduler(PollScheduler):
 			init_buildlog.add_buildlog_main(settings, pkg, trees)
 			return
 
-		self._restart_if_necessary(pkg)
-
 		# Call mtimedb.commit() after each merge so that
 		# --resume still works after being interrupted
 		# by reboot, sigkill or similar.
@@ -1408,7 +1324,8 @@ class Scheduler(PollScheduler):
 
 			self._failed_pkgs.append(self._failed_pkg(
 				build_dir=build_dir, build_log=build_log,
-				pkg=pkg, returncode=build.returncode))
+				pkg=build.pkg,
+				returncode=build.returncode))
 			if not self._terminated_tasks:
 				self._failed_pkg_msg(self._failed_pkgs[-1], "emerge", "for")
 				self._status_display.failed = len(self._failed_pkgs)
@@ -1430,12 +1347,16 @@ class Scheduler(PollScheduler):
 
 	def _merge(self):
 
+		if self._opts_no_background.intersection(self.myopts):
+			self._set_max_jobs(1)
+
 		self._add_prefetchers()
 		self._add_packages()
-		pkg_queue = self._pkg_queue
 		failed_pkgs = self._failed_pkgs
 		portage.locks._quiet = self._background
 		portage.elog.add_listener(self._elog_listener)
+		display_timeout_id = self.sched_iface.timeout_add(
+			self._max_display_latency, self._status_display.display)
 		rval = os.EX_OK
 
 		try:
@@ -1444,6 +1365,7 @@ class Scheduler(PollScheduler):
 			self._main_loop_cleanup()
 			portage.locks._quiet = False
 			portage.elog.remove_listener(self._elog_listener)
+			self.sched_iface.source_remove(display_timeout_id)
 			if failed_pkgs:
 				rval = failed_pkgs[-1].returncode
 
@@ -1524,7 +1446,7 @@ class Scheduler(PollScheduler):
 			merge order
 		@type later: set
 		@rtype: bool
-		@returns: True if the package is dependent, False otherwise.
+		@return: True if the package is dependent, False otherwise.
 		"""
 
 		graph = self._digraph
@@ -1572,24 +1494,7 @@ class Scheduler(PollScheduler):
 		return temp_settings
 
 	def _deallocate_config(self, settings):
-		self._config_pool[settings["ROOT"]].append(settings)
-
-	def _main_loop(self):
-
-		# Only allow 1 job max if a restart is scheduled
-		# due to portage update.
-		if self._is_restart_scheduled() or \
-			self._opts_no_background.intersection(self.myopts):
-			self._set_max_jobs(1)
-
-		while self._schedule():
-			self._poll_loop()
-
-		while True:
-			self._schedule()
-			if not self._is_work_scheduled():
-				break
-			self._poll_loop()
+		self._config_pool[settings['EROOT']].append(settings)
 
 	def _keep_scheduling(self):
 		return bool(not self._terminated_tasks and self._pkg_queue and \
@@ -1602,6 +1507,8 @@ class Scheduler(PollScheduler):
 
 		while True:
 
+			state_change = 0
+
 			# When the number of jobs and merges drops to zero,
 			# process a single merge from _merge_wait_queue if
 			# it's not empty. We only process one since these are
@@ -1612,37 +1519,34 @@ class Scheduler(PollScheduler):
 				not self._task_queues.merge):
 				task = self._merge_wait_queue.popleft()
 				task.addExitListener(self._merge_wait_exit_handler)
+				self._merge_wait_scheduled.append(task)
 				self._task_queues.merge.add(task)
 				self._status_display.merges = len(self._task_queues.merge)
-				self._merge_wait_scheduled.append(task)
+				state_change += 1
 
-			self._schedule_tasks_imp()
-			self._status_display.display()
+			if self._schedule_tasks_imp():
+				state_change += 1
 
-			state_change = 0
-			for q in self._task_queues.values():
-				if q.schedule():
-					state_change += 1
+			self._status_display.display()
 
 			# Cancel prefetchers if they're the only reason
 			# the main poll loop is still running.
 			if self._failed_pkgs and not self._build_opts.fetchonly and \
 				not self._is_work_scheduled() and \
 				self._task_queues.fetch:
+				# Since this happens asynchronously, it doesn't count in
+				# state_change (counting it triggers an infinite loop).
 				self._task_queues.fetch.clear()
-				state_change += 1
 
 			if not (state_change or \
 				(self._merge_wait_queue and not self._jobs and
 				not self._task_queues.merge)):
 				break
 
-		return self._keep_scheduling()
-
 	def _job_delay(self):
 		"""
 		@rtype: bool
-		@returns: True if job scheduling should be delayed, False otherwise.
+		@return: True if job scheduling should be delayed, False otherwise.
 		"""
 
 		if self._jobs and self._max_load is not None:
@@ -1660,7 +1564,7 @@ class Scheduler(PollScheduler):
 	def _schedule_tasks_imp(self):
 		"""
 		@rtype: bool
-		@returns: True if state changed, False otherwise.
+		@return: True if state changed, False otherwise.
 		"""
 
 		state_change = 0
@@ -1728,7 +1632,14 @@ class Scheduler(PollScheduler):
 					"installed", pkg.root_config, installed=True,
 					operation="uninstall")
 
-		prefetcher = self._prefetchers.pop(pkg, None)
+		try:
+			prefetcher = self._prefetchers.pop(pkg, None)
+		except KeyError:
+			# KeyError observed with PyPy 1.8, despite None given as default.
+			# Note that PyPy 1.8 has the same WeakValueDictionary code as
+			# CPython 2.7, so it may be possible for CPython to raise KeyError
+			# here as well.
+			prefetcher = None
 		if prefetcher is not None and not prefetcher.isAlive():
 			try:
 				self._task_queues.fetch._task_queue.remove(prefetcher)
@@ -1757,7 +1668,7 @@ class Scheduler(PollScheduler):
 		pkg = failed_pkg.pkg
 		msg = "%s to %s %s" % \
 			(bad("Failed"), action, colorize("INFORM", pkg.cpv))
-		if pkg.root != "/":
+		if pkg.root_config.settings["ROOT"] != "/":
 			msg += " %s %s" % (preposition, pkg.root)
 
 		log_path = self._locate_failure_log(failed_pkg)
@@ -1810,7 +1721,7 @@ class Scheduler(PollScheduler):
 		Use the current resume list to calculate a new one,
 		dropping any packages with unsatisfied deps.
 		@rtype: bool
-		@returns: True if successful, False otherwise.
+		@return: True if successful, False otherwise.
 		"""
 		print(colorize("GOOD", "*** Resuming merge..."))
 
@@ -1887,7 +1798,7 @@ class Scheduler(PollScheduler):
 			pkg = task
 			msg = "emerge --keep-going:" + \
 				" %s" % (pkg.cpv,)
-			if pkg.root != "/":
+			if pkg.root_config.settings["ROOT"] != "/":
 				msg += " for %s" % (pkg.root,)
 			msg += " dropped due to unsatisfied dependency."
 			for line in textwrap.wrap(msg, msg_width):
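
A note on the biggest change above: _handle_self_update() replaces the old
_restart_if_necessary()/_is_restart_scheduled() pair, so the scheduler no
longer re-exec()s emerge after portage merges itself; it only prepares the
build directory for a self-update. A minimal standalone sketch of the
detection rule, with hypothetical stand-in types (the real code matches
portage.const.PORTAGE_PACKAGE_ATOM against _emerge Package objects):

    from collections import namedtuple

    # Hypothetical stand-in for _emerge.Package, for illustration only.
    Pkg = namedtuple("Pkg", "cpv root operation inherited")

    def needs_self_update(mergelist, running_root, running_cpv):
        for pkg in mergelist:
            if pkg.operation != "merge" or pkg.root != running_root:
                continue
            if not pkg.cpv.startswith("sys-apps/portage-"):
                continue  # stand-in for the PORTAGE_PACKAGE_ATOM match
            # Prepare a self-update when a different or live version of
            # portage is about to replace the running instance.
            return (pkg.cpv != running_cpv
                or "9999" in pkg.cpv
                or "git" in pkg.inherited
                or "git-2" in pkg.inherited)
        return False

    mergelist = [Pkg("sys-apps/portage-2.2.0", "/", "merge", frozenset())]
    print(needs_self_update(mergelist, "/", "sys-apps/portage-2.1.10"))  # True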



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-01  0:15 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-01  0:15 UTC (permalink / raw
  To: gentoo-commits

commit:     dd38e1fe6839e6d5c018fc99fae687e0c0a7eeaa
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue May  1 00:15:46 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue May  1 00:15:46 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=dd38e1fe

Remove code that is not needed (sync stuff)

---
 gobs/pym/check_setup.py |   11 -----------
 gobs/pym/pgsql.py       |   32 +-------------------------------
 2 files changed, 1 insertions(+), 42 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 8f8aa87..c0ba2ee 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -4,7 +4,6 @@ import os
 import errno
 from git import *
 from gobs.text import get_file_text
-from gobs.sync import sync_tree
 
 from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
@@ -16,13 +15,6 @@ CM=connectionManager(gobs_settings_dict)
 if CM.getName()=='pgsql':
 	from gobs.pgsql import *
 
-def git_pull():
-	repo = Repo("/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/")
-	repo_remote = repo.remotes.origin
-	repo_remote.pull()
-	master = repo.head.reference
-	print(master.log())
-
 def check_make_conf():
   # FIXME: mark any config updating true in the db when updating the configs
   # Get the config list
@@ -69,9 +61,6 @@ def check_make_conf_guest(config_profile):
 	make_conf_checksum_db = get_profile_checksum(conn,config_profile)
 	print('make_conf_checksum_db', make_conf_checksum_db)
 	if make_conf_checksum_db is None:
-		if get_profile_sync(conn, config_profile) is True:
-			if sync_tree():
-				reset_profile_sync(conn, config_profile)
 		CM.putConnection(conn)
 		return False
 	make_conf_file = "/etc/portage/make.conf"

diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
index 64f1b46..67451bb 100644
--- a/gobs/pym/pgsql.py
+++ b/gobs/pym/pgsql.py
@@ -9,40 +9,10 @@ def get_default_config(connection):
 
 def get_profile_checksum(connection, config_profile):
     cursor = connection.cursor()
-    sqlQ = "SELECT make_conf_checksum FROM configs WHERE active = 'True' AND id = %s AND updateing = 'False' AND sync = 'False'"
+    sqlQ = "SELECT make_conf_checksum FROM configs WHERE active = 'True' AND id = %s AND auto = 'True'"
     cursor.execute(sqlQ, (config_profile,))
     return cursor.fetchone()
 
-def get_profile_sync(connection, config_profile):
-	cursor = connection.cursor()
-	sqlQ = "SELECT sync FROM configs WHERE active = 'True' AND id = %s AND updateing = 'False'"
-	cursor.execute(sqlQ, (config_profile,))
-	return cursor.fetchone()
-
-def set_profile_sync(connection):
-	cursor = connection.cursor()
-	sqlQ = "UPDATE configs SET sync = 'True' WHERE active = 'True'"
-	cursor.execute(sqlQ)
-	connection.commit()
-
-def reset_profile_sync(connection, config_profile):
-	cursor = connection.cursor()
-	sqlQ = "UPDATE configs SET sync = 'False' WHERE active = 'True' AND  id = %s"
-	cursor.execute(sqlQ, (config_profile,))
-	connection.commit()
-
-def set_profile_updating(connection):
-	cursor = connection.cursor()
-	sqlQ = "UPDATE configs SET updating = 'True' WHERE active = 'True'"
-	cursor.execute(sqlQ)
-	connection.commit()
-
-def reset_profile_sync(connection, config_profile):
-	cursor = connection.cursor()
-	sqlQ = "UPDATE configs SET updating = 'False' WHERE active = 'True'"
-	cursor.execute(sqlQ)
-	connection.commit()
-
 def get_packages_to_build(connection, config_profile):
   cursor =connection.cursor()
   # no point in returning dead ebuilds, to just chuck em out later
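
The tightened query keeps the usual DB-API calling convention:
cursor.fetchone() returns a one-column tuple, or None when no active,
auto-enabled config row matches, which is exactly what
check_make_conf_guest() tests above. A minimal sketch of the pattern,
assuming a psycopg2-style connection (gobs really hands these out through
its connectionManager pool):

    # Sketch; "connection" is assumed to be a DB-API connection (psycopg2).
    def get_profile_checksum(connection, config_profile):
        cursor = connection.cursor()
        # %s placeholders are bound by the driver, so the value is quoted
        # safely instead of being interpolated into the SQL text.
        sqlQ = ("SELECT make_conf_checksum FROM configs WHERE active = 'True'"
                " AND id = %s AND auto = 'True'")
        cursor.execute(sqlQ, (config_profile,))
        return cursor.fetchone()  # a 1-tuple, or None if no row matched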



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-01 10:00 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-01 10:00 UTC (permalink / raw
  To: gentoo-commits

commit:     80389dc4205ecc5c50f299ad5b1eff623ac4e30f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue May  1 10:00:27 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue May  1 10:00:27 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=80389dc4

don't error on empty dirs in package.py

---
 gobs/pym/package.py |   31 +++++++++++++++++++++++++------
 1 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 4c571b6..7487494 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -217,8 +217,14 @@ class gobs_package(object):
 			package_metadataDict = self.get_package_metadataDict(pkgdir, package)
 			add_new_package_metadata(conn,package_id, package_metadataDict)
 			# Add the manifest file to db
-			manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
-			get_manifest_text = get_file_text(pkgdir + "/Manifest")
+			try:
+				manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
+			except:
+				manifest_checksum_tree = "0"
+				get_manifest_text = "0"
+				logging.info("QA: Can't checksum the Manifest file. %s/%s", categories, package)
+			else:
+				get_manifest_text = get_file_text(pkgdir + "/Manifest")
 			add_new_manifest_sql(conn,package_id, get_manifest_text, manifest_checksum_tree)
 		CM.putConnection(conn)
 		logging.info("C %s/%s ... Done.", categories, package)
@@ -226,14 +232,24 @@ class gobs_package(object):
 	def update_package_db(self, categories, package, package_id):
 		conn=CM.getConnection()
 		# Update the categories and package with new info
+		logging.info("C %s/%s", categories, package)	# C = Checking
 		pkgdir = self._mysettings['PORTDIR'] + "/" + categories + "/" + package		# Get PORTDIR with cp
-		# Get the checksum from the file in portage tree
-		manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
+		# Get the checksum from the Manifest file.
+		try:
+			manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
+		except:
+			# We didn't find any Manifest file
+			manifest_checksum_tree = '0'
+			ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
+			if ebuild_list_tree == []:
+				CM.putConnection(conn)
+				logging.info("QA: No Manifest file or ebuilds in %s/%s.", categories, package)
+				logging.info("C %s/%s ... Done.", categories, package)
+				return
 		# Get the checksum from the db in package table
 		manifest_checksum_db = get_manifest_db(conn,package_id)
 		# if we have the same checksum return else update the package
 		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=None)
-		logging.info("C %s/%s", categories, package)	# C = Checking
 		if manifest_checksum_tree != manifest_checksum_db:
 			logging.info("U %s/%s", categories, package)		# U = Update
 			# Get package_metadataDict and update the db with it
@@ -269,7 +285,10 @@ class gobs_package(object):
 			metadataDict = self.get_metadataDict(packageDict, ebuild_id_list)
 			add_new_metadata(conn,metadataDict)
 			# Get the text in Manifest and update it
-			get_manifest_text = get_file_text(pkgdir + "/Manifest")
+			try:
+				get_manifest_text = get_file_text(pkgdir + "/Manifest")
+			except:
+				get_manifest_text = "0"
 			update_manifest_sql(conn,package_id, get_manifest_text, manifest_checksum_tree)
 			# Add any QA and repoman errors to the buildlog
 			qa_error = []
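
The bare except: clauses above tolerate a missing Manifest, but they also
swallow unrelated errors such as KeyboardInterrupt. A narrower sketch of
the same fallback, assuming only filesystem errors should map to the "0"
sentinel (needs portage installed to import):

    import portage.checksum

    def manifest_checksum(pkgdir):
        # Return the SHA-256 of the Manifest, or the "0" sentinel stored
        # in the db when the package directory has no Manifest file.
        try:
            return portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
        except (IOError, OSError):
            return "0"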



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-02 14:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-02 14:33 UTC (permalink / raw
  To: gentoo-commits

commit:     fea6b4f419731b39317ba39d980edc84abbac4e7
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed May  2 14:32:50 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed May  2 14:32:50 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=fea6b4f4

logfile testing

---
 gobs/pym/build_log.py   |    3 +++
 gobs/pym/build_queru.py |    4 +---
 gobs/pym/updatedb.py    |    1 +
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 5eff107..0e0f834 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -548,7 +548,10 @@ class gobs_buildlog(object):
 		if sum_build_log_list != []:
 			for sum_log_line in sum_build_log_list:
 				summary_error = summary_error + " " + sum_log_line
+		# FIXME: use PORT_LOGDIR instead of the \/var\/log\/portage\/ string
+		logging.info("logdir: %s", settings.get("PORT_LOGDIR"))
 		build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  settings.get("PORTAGE_LOG_FILE"))
+		logging.info("Logfile name: %s", settings.get("PORTAGE_LOG_FILE"))
 		if build_dict['queue_id'] is None:
 			build_id = self.add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
 		else:

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 64d5ba3..373bb8e 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -58,8 +58,6 @@ class queruaction(object):
 		self._mysettings = portage.config(config_root = "/")
 		self._config_profile = config_profile
 		self._myportdb =  portage.portdb
-		logging.basicConfig(filename=gobs_settings_dict['gobs_logfile'], \
-			format='%(levelname)s: %(asctime)s %(message)s', level=logging.INFO)
 
 	def log_fail_queru(self, build_dict, settings):
 		conn=CM.getConnection()
@@ -107,7 +105,7 @@ class queruaction(object):
 						summary_error = summary_error + " " + sum_log_line
 				if settings.get("PORTAGE_LOG_FILE") is not None:
 					build_log_dict['logfilename'] = re.sub("\/var\/log\/portage\/", "",  settings.get("PORTAGE_LOG_FILE"))
-					# os.chmode(settings.get("PORTAGE_LOG_FILE"), 224)
+					os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
 				else:
 					build_log_dict['logfilename'] = ""
 				move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index f718bc4..2e369b3 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -104,6 +104,7 @@ def update_cpv_db(mysettings):
 	# a multiprocessing pool
 	for package_line in sorted(package_list_tree):
 		update_cpv_db_pool(mysettings, package_line)
+		# FIXME: memory problem with the multiprocessing pool
 		# pool.apply_async(update_cpv_db_pool, (mysettings, package_line,))
 	# pool.close()
 	# pool.join() 
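
Two fixes are folded into the build_queru.py hunk: the misspelled
os.chmode() and the bare 224, which as a decimal literal would have meant
mode 0o340. A small self-contained illustration of why the octal prefix
matters (the temporary file is hypothetical scaffolding):

    import os, stat, tempfile

    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        logfile = tmp.name

    # 0o224 is -w--w-r--: write for owner and group, read for others.
    assert 0o224 == stat.S_IWUSR | stat.S_IWGRP | stat.S_IROTH
    assert 224 == 0o340  # the old decimal literal meant something else
    os.chmod(logfile, 0o224)
    os.unlink(logfile)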



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-06 10:47 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-06 10:47 UTC (permalink / raw
  To: gentoo-commits

commit:     9f603cef124c6553086a76af6119c76530dec1bd
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun May  6 10:46:57 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun May  6 10:46:57 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9f603cef

fix conflict package in depclean.py

---
 gobs/pym/depclean.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/depclean.py b/gobs/pym/depclean.py
index 6173c20..1df844a 100644
--- a/gobs/pym/depclean.py
+++ b/gobs/pym/depclean.py
@@ -62,7 +62,7 @@ def main_depclean():
 			tmpcmdline.append("--depclean")
 			tmpcmdline.append("--exclude")
 			for conflict_package in conflict_package_list:
-				tmpcmdline.append(portage.versions.pkg_cp(conflict_package)
+				tmpcmdline.append(portage.versions.cpv_getkey(conflict_package))
 			myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
 			unmerge(root_config, myopts, "unmerge", cleanlist, mtimedb["ldpath"], ordered=ordered, scheduler=scheduler)
 			print("Number removed:       "+str(len(cleanlist)))
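
The fix replaces the broken pkg_cp() call (which also lacked a closing
parenthesis) with cpv_getkey(), which reduces a full cpv to its
category/package key, the form that --exclude expects:

    from portage.versions import cpv_getkey

    # Strip version and revision from a cpv (requires portage installed).
    print(cpv_getkey("dev-lang/python-2.7.3-r2"))  # -> dev-lang/python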



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-07 23:25 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-07 23:25 UTC (permalink / raw
  To: gentoo-commits

commit:     ebb7823801c15b62fbf1f2b8f80314f125ef4275
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon May  7 23:25:14 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon May  7 23:25:14 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=ebb78238

add _show_unsatisfied_dep in build_queru.py

---
 gobs/pym/build_queru.py |  373 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 369 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 5ddcdf9..84f8b0d 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -20,9 +20,12 @@ import logging
 from gobs.manifest import gobs_manifest
 from gobs.depclean import main_depclean
 from gobs.flags import gobs_use_flags
+from gobs.depgraph import depgraph backtrack_depgraph
+
 from portage import _encodings
 from portage import _unicode_decode
 from portage.versions import cpv_getkey
+from portage.dep import check_required_use
 import portage.xpak, errno, re, time
 from _emerge.main import parse_opts, profile_check, apply_priorities, repo_name_duplicate_check, \
 	config_protect_check, check_procfs, ensure_required_sets, expand_set_arguments, \
@@ -35,7 +38,6 @@ from portage.util import cmp_sort_key, writemsg, \
 	writemsg_level, writemsg_stdout, shlex_split
 from _emerge.sync.old_tree_timestamp import old_tree_timestamp_warn
 from _emerge.create_depgraph_params import create_depgraph_params
-from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
 from _emerge.DepPrioritySatisfiedRange import DepPrioritySatisfiedRange
 from gobs.Scheduler import Scheduler
 from _emerge.clear_caches import clear_caches
@@ -111,6 +113,352 @@ class queruaction(object):
 				move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
 		CM.putConnection(conn)
 
+	# copy of ../pym/_emerge/depgraph.py from portage
+	def _show_unsatisfied_dep(self, mydepgraph, root, atom, myparent=None, arg=None,
+		check_backtrack=False, check_autounmask_breakage=False):
+		"""
+		When check_backtrack=True, no output is produced and
+		the method either returns or raises _backtrack_mask if
+		a matching package has been masked by backtracking.
+		"""
+		backtrack_mask = False
+		autounmask_broke_use_dep = False
+		atom_set = InternalPackageSet(initial_atoms=(atom.without_use,),
+			allow_repo=True)
+		atom_set_with_use = InternalPackageSet(initial_atoms=(atom,),
+			allow_repo=True)
+		xinfo = '"%s"' % atom.unevaluated_atom
+		if arg:
+			xinfo='"%s"' % arg
+		if isinstance(myparent, AtomArg):
+			xinfo = _unicode_decode('"%s"') % (myparent,)
+		# Discard null/ from failed cpv_expand category expansion.
+		xinfo = xinfo.replace("null/", "")
+		if root != mydepgraph._frozen_config._running_root.root:
+			xinfo = "%s for %s" % (xinfo, root)
+		masked_packages = []
+		missing_use = []
+		missing_use_adjustable = set()
+		required_use_unsatisfied = []
+		masked_pkg_instances = set()
+		have_eapi_mask = False
+		pkgsettings = mydepgraph._frozen_config.pkgsettings[root]
+		root_config = mydepgraph._frozen_config.roots[root]
+		portdb = mydepgraph._frozen_config.roots[root].trees["porttree"].dbapi
+		vardb = mydepgraph._frozen_config.roots[root].trees["vartree"].dbapi
+		bindb = mydepgraph._frozen_config.roots[root].trees["bintree"].dbapi
+		dbs = mydepgraph._dynamic_config._filtered_trees[root]["dbs"]
+		for db, pkg_type, built, installed, db_keys in dbs:
+			if installed:
+				continue
+			if hasattr(db, "xmatch"):
+				cpv_list = db.xmatch("match-all-cpv-only", atom.without_use)
+			else:
+				cpv_list = db.match(atom.without_use)
+			if atom.repo is None and hasattr(db, "getRepositories"):
+				repo_list = db.getRepositories()
+			else:
+				repo_list = [atom.repo]
+			# descending order
+			cpv_list.reverse()
+			for cpv in cpv_list:
+				for repo in repo_list:
+					if not db.cpv_exists(cpv, myrepo=repo):
+						continue
+													
+					metadata, mreasons  = get_mask_info(root_config, cpv, pkgsettings, db, pkg_type, \
+					built, installed, db_keys, myrepo=repo, _pkg_use_enabled=mydepgraph._pkg_use_enabled)
+					if metadata is not None and \
+						portage.eapi_is_supported(metadata["EAPI"]):
+						if not repo:
+							repo = metadata.get('repository')
+							pkg = mydepgraph._pkg(cpv, pkg_type, root_config,
+							installed=installed, myrepo=repo)
+						# pkg.metadata contains calculated USE for ebuilds,
+						# required later for getMissingLicenses.
+						metadata = pkg.metadata
+						if pkg.invalid:
+							# Avoid doing any operations with packages that
+							# have invalid metadata. It would be unsafe at
+							# least because it could trigger unhandled
+							# exceptions in places like check_required_use().
+							masked_packages.append(
+								(root_config, pkgsettings, cpv, repo, metadata, mreasons))
+							continue
+						if not atom_set.findAtomForPackage(pkg,
+						modified_use=mydepgraph._pkg_use_enabled(pkg)):
+							continue
+						if pkg in mydepgraph._dynamic_config._runtime_pkg_mask:
+							backtrack_reasons = \
+							mydepgraph._dynamic_config._runtime_pkg_mask[pkg]
+							mreasons.append('backtracking: %s' % \
+								', '.join(sorted(backtrack_reasons)))
+							backtrack_mask = True
+							if not mreasons and mydepgraph._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
+							modified_use=mydepgraph._pkg_use_enabled(pkg)):
+							mreasons = ["exclude option"]
+						if mreasons:
+							masked_pkg_instances.add(pkg)
+						if atom.unevaluated_atom.use:
+							try:
+								if not pkg.iuse.is_valid_flag(atom.unevaluated_atom.use.required) \
+								or atom.violated_conditionals(mydepgraph._pkg_use_enabled(pkg), pkg.iuse.is_valid_flag).use:
+									missing_use.append(pkg)
+									if atom_set_with_use.findAtomForPackage(pkg):
+										autounmask_broke_use_dep = True
+									if not mreasons:
+										continue
+							except InvalidAtom:
+								writemsg("violated_conditionals raised " + \
+									"InvalidAtom: '%s' parent: %s" % \
+									(atom, myparent), noiselevel=-1)
+								raise
+							if not mreasons and \
+								not pkg.built and \
+								pkg.metadata.get("REQUIRED_USE") and \
+								eapi_has_required_use(pkg.metadata["EAPI"]):
+								if not check_required_use(
+									pkg.metadata["REQUIRED_USE"],
+									mydepgraph._pkg_use_enabled(pkg),
+									pkg.iuse.is_valid_flag):
+									required_use_unsatisfied.append(pkg)
+									continue
+							root_slot = (pkg.root, pkg.slot_atom)
+							if pkg.built and root_slot in mydepgraph._rebuild.rebuild_list:
+								mreasons = ["need to rebuild from source"]
+								elif pkg.installed and root_slot in mydepgraph._rebuild.reinstall_list:
+								mreasons = ["need to rebuild from source"]
+							elif pkg.built and not mreasons:
+								mreasons = ["use flag configuration mismatch"]
+						masked_packages.append(
+							(root_config, pkgsettings, cpv, repo, metadata, mreasons))
+																										
+			if check_backtrack:
+				if backtrack_mask:
+					raise mydepgraph._backtrack_mask()
+				else:
+					return
+
+			if check_autounmask_breakage:
+				if autounmask_broke_use_dep:
+					raise mydepgraph._autounmask_breakage()
+				else:
+					return
+
+			missing_use_reasons = []
+			missing_iuse_reasons = []
+			for pkg in missing_use:
+				use = mydepgraph._pkg_use_enabled(pkg)
+				missing_iuse = []
+				#Use the unevaluated atom here, because some flags might have been
+				#lost during evaluation.
+				required_flags = atom.unevaluated_atom.use.required
+				missing_iuse = pkg.iuse.get_missing_iuse(required_flags)
+
+				mreasons = []
+				if missing_iuse:
+					mreasons.append("Missing IUSE: %s" % " ".join(missing_iuse))
+					missing_iuse_reasons.append((pkg, mreasons))
+				else:
+					need_enable = sorted(atom.use.enabled.difference(use).intersection(pkg.iuse.all))
+					need_disable = sorted(atom.use.disabled.intersection(use).intersection(pkg.iuse.all))
+
+					untouchable_flags = \
+						frozenset(chain(pkg.use.mask, pkg.use.force))
+					if untouchable_flags.intersection(
+						chain(need_enable, need_disable)):
+						continue
+
+					missing_use_adjustable.add(pkg)
+					required_use = pkg.metadata.get("REQUIRED_USE")
+					required_use_warning = ""
+					if required_use:
+						old_use = mydepgraph._pkg_use_enabled(pkg)
+						new_use = set(mydepgraph._pkg_use_enabled(pkg))
+						for flag in need_enable:
+							new_use.add(flag)
+						for flag in need_disable:
+							new_use.discard(flag)
+						if check_required_use(required_use, old_use, pkg.iuse.is_valid_flag) and \
+							not check_required_use(required_use, new_use, pkg.iuse.is_valid_flag):
+								required_use_warning = ", this change violates use flag constraints " + \
+									"defined by %s: '%s'" % (pkg.cpv, human_readable_required_use(required_use))
+
+					if need_enable or need_disable:
+						changes = []
+						changes.extend(colorize("red", "+" + x) \
+							for x in need_enable)
+						changes.extend(colorize("blue", "-" + x) \
+							for x in need_disable)
+						mreasons.append("Change USE: %s" % " ".join(changes) + required_use_warning)
+						missing_use_reasons.append((pkg, mreasons))
+																																
+				if not missing_iuse and myparent and atom.unevaluated_atom.use.conditional:
+					# Lets see if the violated use deps are conditional.
+					# If so, suggest to change them on the parent.
+					# If the child package is masked then a change to
+					# parent USE is not a valid solution (a normal mask
+					# message should be displayed instead).
+					if pkg in masked_pkg_instances:
+						continue
+					mreasons = []
+					violated_atom = atom.unevaluated_atom.violated_conditionals(mydepgraph._pkg_use_enabled(pkg), \
+					pkg.iuse.is_valid_flag, mydepgraph._pkg_use_enabled(myparent))
+					if not (violated_atom.use.enabled or violated_atom.use.disabled):
+						#all violated use deps are conditional
+						changes = []
+						conditional = violated_atom.use.conditional
+						involved_flags = set(chain(conditional.equal, conditional.not_equal, \
+							conditional.enabled, conditional.disabled))
+						untouchable_flags = \
+							frozenset(chain(myparent.use.mask, myparent.use.force))
+						if untouchable_flags.intersection(involved_flags):
+							continue
+						required_use = myparent.metadata.get("REQUIRED_USE")
+						required_use_warning = ""
+						if required_use:
+							old_use = mydepgraph._pkg_use_enabled(myparent)
+							new_use = set(mydepgraph._pkg_use_enabled(myparent))
+							for flag in involved_flags:
+								if flag in old_use:
+									new_use.discard(flag)
+								else:
+									new_use.add(flag)
+							if check_required_use(required_use, old_use, myparent.iuse.is_valid_flag) and \
+								not check_required_use(required_use, new_use, myparent.iuse.is_valid_flag):
+									required_use_warning = ", this change violates use flag constraints " + \
+										"defined by %s: '%s'" % (myparent.cpv, \
+										human_readable_required_use(required_use))
+						for flag in involved_flags:
+							if flag in mydepgraph._pkg_use_enabled(myparent):
+								changes.append(colorize("blue", "-" + flag))
+							else:
+								changes.append(colorize("red", "+" + flag))
+						mreasons.append("Change USE: %s" % " ".join(changes) + required_use_warning)
+						if (myparent, mreasons) not in missing_use_reasons:
+							missing_use_reasons.append((myparent, mreasons))
+
+			unmasked_use_reasons = [(pkg, mreasons) for (pkg, mreasons) \
+				in missing_use_reasons if pkg not in masked_pkg_instances]
+			unmasked_iuse_reasons = [(pkg, mreasons) for (pkg, mreasons) \
+				in missing_iuse_reasons if pkg not in masked_pkg_instances]
+			show_missing_use = False
+			if unmasked_use_reasons:
+				# Only show the latest version.
+				show_missing_use = []
+				pkg_reason = None
+				parent_reason = None
+				for pkg, mreasons in unmasked_use_reasons:
+					if pkg is myparent:
+						if parent_reason is None:
+							#This happens if a use change on the parent
+							#leads to a satisfied conditional use dep.
+							parent_reason = (pkg, mreasons)
+					elif pkg_reason is None:
+						#Don't rely on the first pkg in unmasked_use_reasons,
+						#being the highest version of the dependency.
+						pkg_reason = (pkg, mreasons)
+				if pkg_reason:
+					show_missing_use.append(pkg_reason)
+				if parent_reason:
+					show_missing_use.append(parent_reason)
+			elif unmasked_iuse_reasons:
+				masked_with_iuse = False
+				for pkg in masked_pkg_instances:
+					#Use atom.unevaluated here, because some flags might have been
+					#lost during evaluation.
+					if not pkg.iuse.get_missing_iuse(atom.unevaluated_atom.use.required):
+						# Package(s) with required IUSE are masked,
+						# so display a normal masking message.
+						masked_with_iuse = True
+						break
+				if not masked_with_iuse:
+					show_missing_use = unmasked_iuse_reasons
+			if required_use_unsatisfied:
+				# If there's a higher unmasked version in missing_use_adjustable
+				# then we want to show that instead.
+				for pkg in missing_use_adjustable:
+					if pkg not in masked_pkg_instances and \
+						pkg > required_use_unsatisfied[0]:
+						required_use_unsatisfied = False
+						break
+			mask_docs = False
+
+			if required_use_unsatisfied:
+				# We have an unmasked package that only requires USE adjustment
+				# in order to satisfy REQUIRED_USE, and nothing more. We assume
+				# that the user wants the latest version, so only the first
+				# instance is displayed.
+				pkg = required_use_unsatisfied[0]
+				output_cpv = pkg.cpv + _repo_separator + pkg.repo
+				writemsg_stdout("\n!!! " + \
+					colorize("BAD", "The ebuild selected to satisfy ") + \
+					colorize("INFORM", xinfo) + \
+					colorize("BAD", " has unmet requirements.") + "\n",
+					noiselevel=-1)
+					use_display = pkg_use_display(pkg, mydepgraph._frozen_config.myopts)
+				writemsg_stdout("- %s %s\n" % (output_cpv, use_display),
+					noiselevel=-1)
+				writemsg_stdout("\n  The following REQUIRED_USE flag constraints " + \
+					"are unsatisfied:\n", noiselevel=-1)
+				reduced_noise = check_required_use(
+					pkg.metadata["REQUIRED_USE"],
+					mydepgraph._pkg_use_enabled(pkg),
+					pkg.iuse.is_valid_flag).tounicode()
+				writemsg_stdout("    %s\n" % \
+					human_readable_required_use(reduced_noise),
+					noiselevel=-1)
+				normalized_required_use = \
+					" ".join(pkg.metadata["REQUIRED_USE"].split())
+				if reduced_noise != normalized_required_use:
+					writemsg_stdout("\n  The above constraints " + \
+						"are a subset of the following complete expression:\n",
+						noiselevel=-1)
+					writemsg_stdout("    %s\n" % \
+						human_readable_required_use(normalized_required_use),
+						noiselevel=-1)
+				writemsg_stdout("\n", noiselevel=-1)
+
+			elif show_missing_use:
+				writemsg_stdout("\nemerge: there are no ebuilds built with USE flags to satisfy "+green(xinfo)+".\n", noiselevel=-1)
+				writemsg_stdout("!!! One of the following packages is required to complete your request:\n", noiselevel=-1)
+				for pkg, mreasons in show_missing_use:
+					writemsg_stdout("- "+pkg.cpv+_repo_separator+pkg.repo+" ("+", ".join(mreasons)+")\n", noiselevel=-1)
+
+			elif masked_packages:
+				writemsg_stdout("\n!!! " + \
+					colorize("BAD", "All ebuilds that could satisfy ") + \
+					colorize("INFORM", xinfo) + \
+					colorize("BAD", " have been masked.") + "\n", noiselevel=-1)
+				writemsg_stdout("!!! One of the following masked packages is required to complete your request:\n", noiselevel=-1)
+				have_eapi_mask = show_masked_packages(masked_packages)
+				if have_eapi_mask:
+					writemsg_stdout("\n", noiselevel=-1)
+					msg = ("The current version of portage supports " + \
+						"EAPI '%s'. You must upgrade to a newer version" + \
+						" of portage before EAPI masked packages can" + \
+						" be installed.") % portage.const.EAPI
+					writemsg_stdout("\n".join(textwrap.wrap(msg, 75)), noiselevel=-1)
+				writemsg_stdout("\n", noiselevel=-1)
+				mask_docs = True
+			else:
+				cp_exists = False
+			msg = []
+			if not isinstance(myparent, AtomArg):
+				# It's redundant to show parent for AtomArg since
+				# it's the same as 'xinfo' displayed above.
+				dep_chain = self._get_dep_chain(myparent, atom)
+				for node, node_type in dep_chain:
+					msg.append('(dependency required by "%s" [%s])' % \
+						(colorize('INFORM', _unicode_decode("%s") % \
+						(node)), node_type))
+			if msg:
+				writemsg_stdout("\n".join(msg), noiselevel=-1)
+				writemsg_stdout("\n", noiselevel=-1)
+			if mask_docs:
+				show_mask_docs()
+				writemsg_stdout("\n", noiselevel=-1)
+
 	def action_build(self, settings, trees, mtimedb, myopts, myaction, myfiles, spinner, build_dict):
 
 		if '--usepkgonly' not in myopts:
@@ -151,7 +499,25 @@ class queruaction(object):
 			build_dict['type_fail'] = "depgraph fail"
 			build_dict['check_fail'] = True
 		use_changes = None
-		if mydepgraph._dynamic_config._needed_use_config_changes:
+		if not success:
+			for pargs, kwargs in mydepgraph._dynamic_config._unsatisfied_deps_for_display:
+				mydepgraph._show_unsatisfied_dep(mydepgraph, *pargs, **kwargs)
+			settings, trees, mtimedb = load_emerge_config()
+			myparams = create_depgraph_params(myopts, myaction)
+			try:
+				success, mydepgraph, favorites = backtrack_depgraph(
+				settings, trees, myopts, myparams, myaction, myfiles, spinner)
+			except portage.exception.PackageSetNotFound as e:
+				root_config = trees[settings["ROOT"]]["root_config"]
+				display_missing_pkg_set(root_config, e.value)
+				build_dict['type_fail'] = "depgraph fail"
+				build_dict['check_fail'] = True
+		if not success:
+			mydepgraph.display_problems()
+			build_dict['type_fail'] = "depgraph fail"
+			build_dict['check_fail'] = True
+		
+		"""if mydepgraph._dynamic_config._needed_use_config_changes:
 			use_changes = {}
 			for pkg, needed_use_config_changes in mydepgraph._dynamic_config._needed_use_config_changes.items():
 				new_use, changes = needed_use_config_changes
@@ -180,7 +546,6 @@ class queruaction(object):
 					with open("/etc/portage/package.use/gobs.use", "a") as f:
 						f.write(filetext)
 						f.write('\n')
-
 			settings, trees, mtimedb = load_emerge_config()
 			myparams = create_depgraph_params(myopts, myaction)
 			try:
@@ -194,7 +559,7 @@ class queruaction(object):
 		if not success:
 			mydepgraph.display_problems()
 			build_dict['type_fail'] = "depgraph fail"
-			build_dict['check_fail'] = True
+			build_dict['check_fail'] = True"""
 
 		if build_dict['check_fail'] is True:
 				self.log_fail_queru(build_dict, settings)
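
The new failure path follows a simple retry pattern: build the depgraph,
and on failure print every recorded unsatisfied dependency, reload the
emerge config, and run one more backtracking attempt before recording a
"depgraph fail". A minimal sketch of that control flow with hypothetical
callables:

    def resolve_with_retry(build_graph, show_problems, reload_config):
        # build_graph() -> (success, graph); all three args are stand-ins.
        success, graph = build_graph()
        if not success:
            show_problems(graph)
            reload_config()
            success, graph = build_graph()
        return success, graph

    attempts = iter([(False, "first graph"), (True, "second graph")])
    ok, graph = resolve_with_retry(lambda: next(attempts),
        lambda g: print("unsatisfied deps in", g), lambda: None)
    print(ok, graph)  # True second graph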



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-07 23:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-07 23:31 UTC (permalink / raw
  To: gentoo-commits

commit:     1e9d69c907049cc841e6cf2c139896776d17b77a
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon May  7 23:31:13 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon May  7 23:31:13 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=1e9d69c9

add _show_unsatisfied_dep in build_queru.py part2

---
 gobs/pym/build_queru.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 84f8b0d..1d5bea7 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -20,7 +20,7 @@ import logging
 from gobs.manifest import gobs_manifest
 from gobs.depclean import main_depclean
 from gobs.flags import gobs_use_flags
-from gobs.depgraph import depgraph backtrack_depgraph
+from _emerge.depgraph import depgraph, backtrack_depgraph
 
 from portage import _encodings
 from portage import _unicode_decode
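
The removed line was invalid on two counts: names in a from-import list
must be comma-separated, and at this point the depgraph helpers still
lived only in _emerge, not in gobs. The syntax half can be checked without
importing portage at all:

    import ast

    # "from m import a b" is a SyntaxError; the names need commas.
    try:
        ast.parse("from _emerge.depgraph import depgraph backtrack_depgraph")
    except SyntaxError as e:
        print("SyntaxError:", e.msg)
    ast.parse("from _emerge.depgraph import depgraph, backtrack_depgraph")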



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-07 23:35 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-07 23:35 UTC (permalink / raw
  To: gentoo-commits

commit:     867a375042c5553881c4f39534c263b59ad09741
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon May  7 23:35:29 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon May  7 23:35:29 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=867a3750

add _show_unsatisfied_dep in build_queru.py part3

---
 gobs/pym/build_queru.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 1d5bea7..90d6347 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -196,7 +196,7 @@ class queruaction(object):
 							backtrack_mask = True
 							if not mreasons and mydepgraph._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
 							modified_use=mydepgraph._pkg_use_enabled(pkg)):
-							mreasons = ["exclude option"]
+								mreasons = ["exclude option"]
 						if mreasons:
 							masked_pkg_instances.add(pkg)
 						if atom.unevaluated_atom.use:



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-07 23:39 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-07 23:39 UTC (permalink / raw
  To: gentoo-commits

commit:     839a791609e3957bc80fab78593563b28d226137
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon May  7 23:39:00 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon May  7 23:39:00 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=839a7916

add _show_unsatisfied_dep in build_queru.py part4

---
 gobs/pym/build_queru.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 90d6347..104633f 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -226,7 +226,7 @@ class queruaction(object):
 							root_slot = (pkg.root, pkg.slot_atom)
 							if pkg.built and root_slot in mydepgraph._rebuild.rebuild_list:
 								mreasons = ["need to rebuild from source"]
-								elif pkg.installed and root_slot in mydepgraph._rebuild.reinstall_list:
+							elif pkg.installed and root_slot in mydepgraph._rebuild.reinstall_list:
 								mreasons = ["need to rebuild from source"]
 							elif pkg.built and not mreasons:
 								mreasons = ["use flag configuration mismatch"]
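
Part 3 and part 4 both repair hand-reindented lines in the copied depgraph
code; here the elif had been pushed into the body of its if. An elif must
start in the same column as the if it extends, as in this trivial runnable
example:

    pkg_built, pkg_installed = False, True

    if pkg_built:
        reason = "need to rebuild from source"
    elif pkg_installed:          # aligned with "if", not with its body
        reason = "need to rebuild from source"
    else:
        reason = "use flag configuration mismatch"
    print(reason)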



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-07 23:44 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-07 23:44 UTC (permalink / raw
  To: gentoo-commits

commit:     bed02fad0fd692e0c022f4dd295b4fedf93fb2a6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon May  7 23:43:49 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon May  7 23:43:49 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=bed02fad

add _show_unsatisfied_dep in build_queru.py part 5

---
 gobs/pym/build_queru.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 104633f..bdaa064 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -396,7 +396,7 @@ class queruaction(object):
 					colorize("INFORM", xinfo) + \
 					colorize("BAD", " has unmet requirements.") + "\n",
 					noiselevel=-1)
-					use_display = pkg_use_display(pkg, mydepgraph._frozen_config.myopts)
+				use_display = pkg_use_display(pkg, mydepgraph._frozen_config.myopts)
 				writemsg_stdout("- %s %s\n" % (output_cpv, use_display),
 					noiselevel=-1)
 				writemsg_stdout("\n  The following REQUIRED_USE flag constraints " + \



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-09 23:12 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-09 23:12 UTC (permalink / raw
  To: gentoo-commits

commit:     2e57c708231d2c35d8f8d7f214583401af83554b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed May  9 23:12:34 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed May  9 23:12:34 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=2e57c708

testing use flags autounmask part 2

---
 gobs/pym/depgraph.py | 7237 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 7237 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/depgraph.py b/gobs/pym/depgraph.py
new file mode 100644
index 0000000..75d4db2
--- /dev/null
+++ b/gobs/pym/depgraph.py
@@ -0,0 +1,7237 @@
+# Copyright 1999-2012 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+# Copy of ../pym/_emerge/depgraph.py from Portage
+
+from __future__ import print_function
+
+import difflib
+import errno
+import io
+import logging
+import stat
+import sys
+import textwrap
+from collections import deque
+from itertools import chain
+
+import portage
+from portage import os, OrderedDict
+from portage import _unicode_decode, _unicode_encode, _encodings
+from portage.const import PORTAGE_PACKAGE_ATOM, USER_CONFIG_PATH
+from portage.dbapi import dbapi
+from portage.dep import Atom, best_match_to_list, extract_affecting_use, \
+	check_required_use, human_readable_required_use, _repo_separator
+from portage.eapi import eapi_has_strong_blocks, eapi_has_required_use
+from portage.exception import InvalidAtom, InvalidDependString, PortageException
+from portage.output import colorize, create_color_func, \
+	darkgreen, green
+bad = create_color_func("BAD")
+from portage.package.ebuild.getmaskingstatus import \
+	_getmaskingstatus, _MaskReason
+from portage._sets import SETPREFIX
+from portage._sets.base import InternalPackageSet
+from portage.util import ConfigProtect, shlex_split, new_protect_filename
+from portage.util import cmp_sort_key, writemsg, writemsg_stdout
+from portage.util import ensure_dirs
+from portage.util import writemsg_level, write_atomic
+from portage.util.digraph import digraph
+from portage.util.listdir import _ignorecvs_dirs
+from portage.versions import catpkgsplit
+
+from _emerge.AtomArg import AtomArg
+from _emerge.Blocker import Blocker
+from _emerge.BlockerCache import BlockerCache
+from _emerge.BlockerDepPriority import BlockerDepPriority
+from _emerge.countdown import countdown
+from _emerge.create_world_atom import create_world_atom
+from _emerge.Dependency import Dependency
+from _emerge.DependencyArg import DependencyArg
+from _emerge.DepPriority import DepPriority
+from _emerge.DepPriorityNormalRange import DepPriorityNormalRange
+from _emerge.DepPrioritySatisfiedRange import DepPrioritySatisfiedRange
+from _emerge.FakeVartree import FakeVartree
+from _emerge._find_deep_system_runtime_deps import _find_deep_system_runtime_deps
+from _emerge.is_valid_package_atom import insert_category_into_atom, \
+	is_valid_package_atom
+from _emerge.Package import Package
+from _emerge.PackageArg import PackageArg
+from _emerge.PackageVirtualDbapi import PackageVirtualDbapi
+from _emerge.RootConfig import RootConfig
+from _emerge.search import search
+from _emerge.SetArg import SetArg
+from _emerge.show_invalid_depstring_notice import show_invalid_depstring_notice
+from _emerge.UnmergeDepPriority import UnmergeDepPriority
+from _emerge.UseFlagDisplay import pkg_use_display
+from _emerge.userquery import userquery
+
+from _emerge.resolver.backtracking import Backtracker, BacktrackParameter
+from _emerge.resolver.slot_collision import slot_conflict_handler
+from _emerge.resolver.circular_dependency import circular_dependency_handler
+from _emerge.resolver.output import Display
+
+if sys.hexversion >= 0x3000000:
+	basestring = str
+	long = int
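+# Annotation (not in the upstream copy): on Python 3 the removed
+# Python 2 names are aliased so that isinstance() checks elsewhere in
+# this module keep working unchanged.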
+
+class _scheduler_graph_config(object):
+	def __init__(self, trees, pkg_cache, graph, mergelist):
+		self.trees = trees
+		self.pkg_cache = pkg_cache
+		self.graph = graph
+		self.mergelist = mergelist
+
+def _wildcard_set(atoms):
+	pkgs = InternalPackageSet(allow_wildcard=True)
+	for x in atoms:
+		try:
+			x = Atom(x, allow_wildcard=True)
+		except portage.exception.InvalidAtom:
+			x = Atom("*/" + x, allow_wildcard=True)
+		pkgs.add(x)
+	return pkgs
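+# Annotation (not in the upstream copy): _wildcard_set() retries any
+# string that fails Atom() validation with a "*/" category wildcard.
+# For example, "glibc" is not a valid atom on its own, so it is added
+# as "*/glibc", while "sys-libs/glibc" is kept as-is.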
+
+class _frozen_depgraph_config(object):
+
+	def __init__(self, settings, trees, myopts, spinner):
+		self.settings = settings
+		self.target_root = settings["EROOT"]
+		self.myopts = myopts
+		self.edebug = 0
+		if settings.get("PORTAGE_DEBUG", "") == "1":
+			self.edebug = 1
+		self.spinner = spinner
+		self._running_root = trees[trees._running_eroot]["root_config"]
+		self.pkgsettings = {}
+		self.trees = {}
+		self._trees_orig = trees
+		self.roots = {}
+		# All Package instances
+		self._pkg_cache = {}
+		self._highest_license_masked = {}
+		dynamic_deps = myopts.get("--dynamic-deps", "y") != "n"
+		for myroot in trees:
+			self.trees[myroot] = {}
+			# Create a RootConfig instance that references
+			# the FakeVartree instead of the real one.
+			self.roots[myroot] = RootConfig(
+				trees[myroot]["vartree"].settings,
+				self.trees[myroot],
+				trees[myroot]["root_config"].setconfig)
+			for tree in ("porttree", "bintree"):
+				self.trees[myroot][tree] = trees[myroot][tree]
+			self.trees[myroot]["vartree"] = \
+				FakeVartree(trees[myroot]["root_config"],
+					pkg_cache=self._pkg_cache,
+					pkg_root_config=self.roots[myroot],
+					dynamic_deps=dynamic_deps)
+			self.pkgsettings[myroot] = portage.config(
+				clone=self.trees[myroot]["vartree"].settings)
+
+		self._required_set_names = set(["world"])
+
+		atoms = ' '.join(myopts.get("--exclude", [])).split()
+		self.excluded_pkgs = _wildcard_set(atoms)
+		atoms = ' '.join(myopts.get("--reinstall-atoms", [])).split()
+		self.reinstall_atoms = _wildcard_set(atoms)
+		atoms = ' '.join(myopts.get("--usepkg-exclude", [])).split()
+		self.usepkg_exclude = _wildcard_set(atoms)
+		atoms = ' '.join(myopts.get("--useoldpkg-atoms", [])).split()
+		self.useoldpkg_atoms = _wildcard_set(atoms)
+		atoms = ' '.join(myopts.get("--rebuild-exclude", [])).split()
+		self.rebuild_exclude = _wildcard_set(atoms)
+		atoms = ' '.join(myopts.get("--rebuild-ignore", [])).split()
+		self.rebuild_ignore = _wildcard_set(atoms)
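+		# Annotation (not in the upstream copy): each of these options may
+		# be passed several times, so the values are joined and re-split to
+		# get one flat token list, e.g.
+		#     ' '.join(["a/b c/d", "e/f"]).split() == ["a/b", "c/d", "e/f"]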
+
+		self.rebuild_if_new_rev = "--rebuild-if-new-rev" in myopts
+		self.rebuild_if_new_ver = "--rebuild-if-new-ver" in myopts
+		self.rebuild_if_unbuilt = "--rebuild-if-unbuilt" in myopts
+
+class _depgraph_sets(object):
+	def __init__(self):
+		# contains all sets added to the graph
+		self.sets = {}
+		# contains non-set atoms given as arguments
+		self.sets['__non_set_args__'] = InternalPackageSet(allow_repo=True)
+		# contains all atoms from all sets added to the graph, including
+		# atoms given as arguments
+		self.atoms = InternalPackageSet(allow_repo=True)
+		self.atom_arg_map = {}
+
+class _rebuild_config(object):
+	def __init__(self, frozen_config, backtrack_parameters):
+		self._graph = digraph()
+		self._frozen_config = frozen_config
+		self.rebuild_list = backtrack_parameters.rebuild_list.copy()
+		self.orig_rebuild_list = self.rebuild_list.copy()
+		self.reinstall_list = backtrack_parameters.reinstall_list.copy()
+		self.rebuild_if_new_rev = frozen_config.rebuild_if_new_rev
+		self.rebuild_if_new_ver = frozen_config.rebuild_if_new_ver
+		self.rebuild_if_unbuilt = frozen_config.rebuild_if_unbuilt
+		self.rebuild = (self.rebuild_if_new_rev or self.rebuild_if_new_ver or
+			self.rebuild_if_unbuilt)
+
+	def add(self, dep_pkg, dep):
+		parent = dep.collapsed_parent
+		priority = dep.collapsed_priority
+		rebuild_exclude = self._frozen_config.rebuild_exclude
+		rebuild_ignore = self._frozen_config.rebuild_ignore
+		if (self.rebuild and isinstance(parent, Package) and
+			parent.built and priority.buildtime and
+			isinstance(dep_pkg, Package) and
+			not rebuild_exclude.findAtomForPackage(parent) and
+			not rebuild_ignore.findAtomForPackage(dep_pkg)):
+			self._graph.add(dep_pkg, parent, priority)
+
+	def _needs_rebuild(self, dep_pkg):
+		"""Check whether packages that depend on dep_pkg need to be rebuilt."""
+		dep_root_slot = (dep_pkg.root, dep_pkg.slot_atom)
+		if dep_pkg.built or dep_root_slot in self.orig_rebuild_list:
+			return False
+
+		if self.rebuild_if_unbuilt:
+			# dep_pkg is being installed from source, so binary
+			# packages for parents are invalid. Force rebuild
+			return True
+
+		trees = self._frozen_config.trees
+		vardb = trees[dep_pkg.root]["vartree"].dbapi
+		if self.rebuild_if_new_rev:
+			# Parent packages are valid if a package with the same
+			# cpv is already installed.
+			return dep_pkg.cpv not in vardb.match(dep_pkg.slot_atom)
+
+		# Otherwise, parent packages are valid if a package with the same
+		# version (excluding revision) is already installed.
+		assert self.rebuild_if_new_ver
+		cpv_norev = catpkgsplit(dep_pkg.cpv)[:-1]
+		for inst_cpv in vardb.match(dep_pkg.slot_atom):
+			inst_cpv_norev = catpkgsplit(inst_cpv)[:-1]
+			if inst_cpv_norev == cpv_norev:
+				return False
+
+		return True
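+		# Annotation (not in the upstream copy): catpkgsplit() yields
+		# (cat, pkg, ver, rev), e.g. ('sys-apps', 'portage', '2.1.10',
+		# 'r3'), so slicing off the last element compares versions while
+		# ignoring the revision.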
+
+	def _trigger_rebuild(self, parent, build_deps):
+		root_slot = (parent.root, parent.slot_atom)
+		if root_slot in self.rebuild_list:
+			return False
+		trees = self._frozen_config.trees
+		reinstall = False
+		for slot_atom, dep_pkg in build_deps.items():
+			dep_root_slot = (dep_pkg.root, slot_atom)
+			if self._needs_rebuild(dep_pkg):
+				self.rebuild_list.add(root_slot)
+				return True
+			elif ("--usepkg" in self._frozen_config.myopts and
+				(dep_root_slot in self.reinstall_list or
+				dep_root_slot in self.rebuild_list or
+				not dep_pkg.installed)):
+
+				# A direct rebuild dependency is being installed. We
+				# should update the parent as well to the latest binary,
+				# if that binary is valid.
+				#
+				# To validate the binary, we check whether all of the
+				# rebuild dependencies are present on the same binhost.
+				#
+				# 1) If parent is present on the binhost, but one of its
+				#    rebuild dependencies is not, then the parent should
+				#    be rebuilt from source.
+				# 2) Otherwise, the parent binary is assumed to be valid,
+				#    because all of its rebuild dependencies are
+				#    consistent.
+				bintree = trees[parent.root]["bintree"]
+				uri = bintree.get_pkgindex_uri(parent.cpv)
+				dep_uri = bintree.get_pkgindex_uri(dep_pkg.cpv)
+				bindb = bintree.dbapi
+				if self.rebuild_if_new_ver and uri and uri != dep_uri:
+					cpv_norev = catpkgsplit(dep_pkg.cpv)[:-1]
+					for cpv in bindb.match(dep_pkg.slot_atom):
+						if cpv_norev == catpkgsplit(cpv)[:-1]:
+							dep_uri = bintree.get_pkgindex_uri(cpv)
+							if uri == dep_uri:
+								break
+				if uri and uri != dep_uri:
+					# 1) Remote binary package is invalid because it was
+					#    built without dep_pkg. Force rebuild.
+					self.rebuild_list.add(root_slot)
+					return True
+				elif (parent.installed and
+					root_slot not in self.reinstall_list):
+					inst_build_time = parent.metadata.get("BUILD_TIME")
+					try:
+						bin_build_time, = bindb.aux_get(parent.cpv,
+							["BUILD_TIME"])
+					except KeyError:
+						continue
+					if bin_build_time != inst_build_time:
+						# 2) Remote binary package is valid, and local package
+						#    is not up to date. Force reinstall.
+						reinstall = True
+		if reinstall:
+			self.reinstall_list.add(root_slot)
+		return reinstall
+
+	def trigger_rebuilds(self):
+		"""
+		Trigger rebuilds where necessary. If pkgA has been updated, and pkgB
+		depends on pkgA at both build-time and run-time, pkgB needs to be
+		rebuilt.
+		"""
+		need_restart = False
+		graph = self._graph
+		build_deps = {}
+
+		leaf_nodes = deque(graph.leaf_nodes())
+
+		# Trigger rebuilds bottom-up (starting with the leaves) so that parents
+		# will always know which children are being rebuilt.
+		while graph:
+			if not leaf_nodes:
+				# We'll have to drop an edge. This should be quite rare.
+				leaf_nodes.append(graph.order[-1])
+
+			node = leaf_nodes.popleft()
+			if node not in graph:
+				# This can be triggered by circular dependencies.
+				continue
+			slot_atom = node.slot_atom
+
+			# Remove our leaf node from the graph, keeping track of deps.
+			parents = graph.parent_nodes(node)
+			graph.remove(node)
+			node_build_deps = build_deps.get(node, {})
+			for parent in parents:
+				if parent == node:
+					# Ignore a direct cycle.
+					continue
+				parent_bdeps = build_deps.setdefault(parent, {})
+				parent_bdeps[slot_atom] = node
+				if not graph.child_nodes(parent):
+					leaf_nodes.append(parent)
+
+			# Trigger rebuilds for our leaf node. Because all of our children
+			# have been processed, the build_deps will be completely filled in,
+			# and self.rebuild_list / self.reinstall_list will tell us whether
+			# any of our children need to be rebuilt or reinstalled.
+			if self._trigger_rebuild(node, node_build_deps):
+				need_restart = True
+
+		return need_restart
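+		# Annotation (not in the upstream copy): edges run child -> parent
+		# (see add() above), so the initial leaves are packages with no
+		# tracked build-time deps; by the time a node is popped, its
+		# build_deps mapping is fully populated.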
+
+
+class _dynamic_depgraph_config(object):
+
+	def __init__(self, depgraph, myparams, allow_backtracking, backtrack_parameters):
+		self.myparams = myparams.copy()
+		self._vdb_loaded = False
+		self._allow_backtracking = allow_backtracking
+		# Maps slot atom to package for each Package added to the graph.
+		self._slot_pkg_map = {}
+		# Maps nodes to the reasons they were selected for reinstallation.
+		self._reinstall_nodes = {}
+		self.mydbapi = {}
+		# Contains a filtered view of preferred packages that are selected
+		# from available repositories.
+		self._filtered_trees = {}
+		# Contains installed packages and new packages that have been added
+		# to the graph.
+		self._graph_trees = {}
+		# Caches visible packages returned from _select_package, for use in
+		# depgraph._iter_atoms_for_pkg() SLOT logic.
+		self._visible_pkgs = {}
+		#contains the args created by select_files
+		self._initial_arg_list = []
+		self.digraph = portage.digraph()
+		# manages sets added to the graph
+		self.sets = {}
+		# contains all nodes pulled in by self.sets
+		self._set_nodes = set()
+		# Contains only Blocker -> Uninstall edges
+		self._blocker_uninstalls = digraph()
+		# Contains only Package -> Blocker edges
+		self._blocker_parents = digraph()
+		# Contains only irrelevant Package -> Blocker edges
+		self._irrelevant_blockers = digraph()
+		# Contains only unsolvable Package -> Blocker edges
+		self._unsolvable_blockers = digraph()
+		# Contains all Blocker -> Blocked Package edges
+		self._blocked_pkgs = digraph()
+		# Contains world packages that have been protected from
+		# uninstallation but may not have been added to the graph
+		# if the graph is not complete yet.
+		self._blocked_world_pkgs = {}
+		# Contains packages whose dependencies have been traversed.
+		# This is used to check if we have accounted for blockers
+		# relevant to a package.
+		self._traversed_pkg_deps = set()
+		self._slot_collision_info = {}
+		# Slot collision nodes are not allowed to block other packages since
+		# blocker validation is only able to account for one package per slot.
+		self._slot_collision_nodes = set()
+		self._parent_atoms = {}
+		self._slot_conflict_parent_atoms = set()
+		self._slot_conflict_handler = None
+		self._circular_dependency_handler = None
+		self._serialized_tasks_cache = None
+		self._scheduler_graph = None
+		self._displayed_list = None
+		self._pprovided_args = []
+		self._missing_args = []
+		self._masked_installed = set()
+		self._masked_license_updates = set()
+		self._unsatisfied_deps_for_display = []
+		self._unsatisfied_blockers_for_display = None
+		self._circular_deps_for_display = None
+		self._dep_stack = []
+		self._dep_disjunctive_stack = []
+		self._unsatisfied_deps = []
+		self._initially_unsatisfied_deps = []
+		self._ignored_deps = []
+		self._highest_pkg_cache = {}
+
+		# Binary packages that have been rejected because their USE
+		# didn't match the user's config. It maps packages to a set
+		# of flags causing the rejection.
+		self.ignored_binaries = {}
+
+		self._needed_unstable_keywords = backtrack_parameters.needed_unstable_keywords
+		self._needed_p_mask_changes = backtrack_parameters.needed_p_mask_changes
+		self._needed_license_changes = backtrack_parameters.needed_license_changes
+		self._needed_use_config_changes = backtrack_parameters.needed_use_config_changes
+		self._runtime_pkg_mask = backtrack_parameters.runtime_pkg_mask
+		self._need_restart = False
+		# For conditions that always require user intervention, such as
+		# unsatisfied REQUIRED_USE (currently has no autounmask support).
+		self._skip_restart = False
+		self._backtrack_infos = {}
+
+		self._autounmask = depgraph._frozen_config.myopts.get('--autounmask') != 'n'
+		self._success_without_autounmask = False
+		self._traverse_ignored_deps = False
+
+		for myroot in depgraph._frozen_config.trees:
+			self.sets[myroot] = _depgraph_sets()
+			self._slot_pkg_map[myroot] = {}
+			vardb = depgraph._frozen_config.trees[myroot]["vartree"].dbapi
+			# This dbapi instance will model the state that the vdb will
+			# have after new packages have been installed.
+			fakedb = PackageVirtualDbapi(vardb.settings)
+
+			self.mydbapi[myroot] = fakedb
+			def graph_tree():
+				pass
+			graph_tree.dbapi = fakedb
+			self._graph_trees[myroot] = {}
+			self._filtered_trees[myroot] = {}
+			# Substitute the graph tree for the vartree in dep_check() since we
+			# want atom selections to be consistent with package selections
+			# that have already been made.
+			self._graph_trees[myroot]["porttree"]   = graph_tree
+			self._graph_trees[myroot]["vartree"]    = graph_tree
+			self._graph_trees[myroot]["graph_db"]   = graph_tree.dbapi
+			self._graph_trees[myroot]["graph"]      = self.digraph
+			def filtered_tree():
+				pass
+			filtered_tree.dbapi = _dep_check_composite_db(depgraph, myroot)
+			self._filtered_trees[myroot]["porttree"] = filtered_tree
+			self._visible_pkgs[myroot] = PackageVirtualDbapi(vardb.settings)
+
+			# Passing in graph_tree as the vartree here could lead to better
+			# atom selections in some cases by causing atoms for packages that
+			# have been added to the graph to be preferred over other choices.
+			# However, it can trigger atom selections that result in
+			# unresolvable direct circular dependencies. For example, this
+			# happens with gwydion-dylan which depends on either itself or
+			# gwydion-dylan-bin. In case gwydion-dylan is not yet installed,
+			# gwydion-dylan-bin needs to be selected in order to avoid
+			# an unresolvable direct circular dependency.
+			#
+			# To solve the problem described above, pass in "graph_db" so that
+			# packages that have been added to the graph are distinguishable
+			# from other available packages and installed packages. Also, pass
+			# the parent package into self._select_atoms() calls so that
+			# unresolvable direct circular dependencies can be detected and
+			# avoided when possible.
+			self._filtered_trees[myroot]["graph_db"] = graph_tree.dbapi
+			self._filtered_trees[myroot]["graph"]    = self.digraph
+			self._filtered_trees[myroot]["vartree"] = \
+				depgraph._frozen_config.trees[myroot]["vartree"]
+
+			dbs = []
+			#               (db, pkg_type, built, installed, db_keys)
+			if "remove" in self.myparams:
+				# For removal operations, use _dep_check_composite_db
+				# for availability and visibility checks. This provides
+				# consistency with install operations, so we don't
+				# get install/uninstall cycles like in bug #332719.
+				self._graph_trees[myroot]["porttree"] = filtered_tree
+			else:
+				if "--usepkgonly" not in depgraph._frozen_config.myopts:
+					portdb = depgraph._frozen_config.trees[myroot]["porttree"].dbapi
+					db_keys = list(portdb._aux_cache_keys)
+					dbs.append((portdb, "ebuild", False, False, db_keys))
+
+				if "--usepkg" in depgraph._frozen_config.myopts:
+					bindb  = depgraph._frozen_config.trees[myroot]["bintree"].dbapi
+					db_keys = list(bindb._aux_cache_keys)
+					dbs.append((bindb,  "binary", True, False, db_keys))
+
+			vardb  = depgraph._frozen_config.trees[myroot]["vartree"].dbapi
+			db_keys = list(depgraph._frozen_config._trees_orig[myroot
+				]["vartree"].dbapi._aux_cache_keys)
+			dbs.append((vardb, "installed", True, True, db_keys))
+			self._filtered_trees[myroot]["dbs"] = dbs
+
+class depgraph(object):
+
+	pkg_tree_map = RootConfig.pkg_tree_map
+
+	_dep_keys = ["DEPEND", "RDEPEND", "PDEPEND"]
+	
+	def __init__(self, settings, trees, myopts, myparams, spinner,
+		frozen_config=None, backtrack_parameters=BacktrackParameter(), allow_backtracking=False):
+		if frozen_config is None:
+			frozen_config = _frozen_depgraph_config(settings, trees,
+			myopts, spinner)
+		self._frozen_config = frozen_config
+		self._dynamic_config = _dynamic_depgraph_config(self, myparams,
+			allow_backtracking, backtrack_parameters)
+		self._rebuild = _rebuild_config(frozen_config, backtrack_parameters)
+
+		self._select_atoms = self._select_atoms_highest_available
+		self._select_package = self._select_pkg_highest_available
+
+	def _load_vdb(self):
+		"""
+		Load installed package metadata if appropriate. This used to be called
+		from the constructor, but that wasn't very nice since this procedure
+		is slow and it generates spinner output. So, now it's called on-demand
+		by various methods when necessary.
+		"""
+
+		if self._dynamic_config._vdb_loaded:
+			return
+
+		for myroot in self._frozen_config.trees:
+
+			dynamic_deps = self._dynamic_config.myparams.get(
+				"dynamic_deps", "y") != "n"
+			preload_installed_pkgs = \
+				"--nodeps" not in self._frozen_config.myopts
+
+			if self._frozen_config.myopts.get("--root-deps") is not None and \
+				myroot != self._frozen_config.target_root:
+				continue
+
+			fake_vartree = self._frozen_config.trees[myroot]["vartree"]
+			if not fake_vartree.dbapi:
+				# This needs to be called for the first depgraph, but not for
+				# backtracking depgraphs that share the same frozen_config.
+				fake_vartree.sync()
+
+				# FakeVartree.sync() populates virtuals, and we want
+				# self.pkgsettings to have them populated too.
+				self._frozen_config.pkgsettings[myroot] = \
+					portage.config(clone=fake_vartree.settings)
+
+			if preload_installed_pkgs:
+				vardb = fake_vartree.dbapi
+				fakedb = self._dynamic_config._graph_trees[
+					myroot]["vartree"].dbapi
+
+				for pkg in vardb:
+					self._spinner_update()
+					if dynamic_deps:
+						# This causes FakeVartree to update the
+						# Package instance dependencies via
+						# PackageVirtualDbapi.aux_update()
+						vardb.aux_get(pkg.cpv, [])
+					fakedb.cpv_inject(pkg)
+
+		self._dynamic_config._vdb_loaded = True
+
+	def _spinner_update(self):
+		if self._frozen_config.spinner:
+			self._frozen_config.spinner.update()
+
+	def _show_ignored_binaries(self):
+		"""
+		Show binaries that have been ignored because their USE didn't
+		match the user's config.
+		"""
+		if not self._dynamic_config.ignored_binaries \
+			or '--quiet' in self._frozen_config.myopts \
+			or self._dynamic_config.myparams.get(
+			"binpkg_respect_use") in ("y", "n"):
+			return
+
+		for pkg in list(self._dynamic_config.ignored_binaries):
+
+			selected_pkg = self._dynamic_config.mydbapi[pkg.root
+				].match_pkgs(pkg.slot_atom)
+
+			if not selected_pkg:
+				continue
+
+			selected_pkg = selected_pkg[-1]
+			if selected_pkg > pkg:
+				self._dynamic_config.ignored_binaries.pop(pkg)
+				continue
+
+			if selected_pkg.installed and \
+				selected_pkg.cpv == pkg.cpv and \
+				selected_pkg.metadata.get('BUILD_TIME') == \
+				pkg.metadata.get('BUILD_TIME'):
+				# We don't care about ignored binaries when an
+				# identical installed instance is selected to
+				# fill the slot.
+				self._dynamic_config.ignored_binaries.pop(pkg)
+				continue
+
+		if not self._dynamic_config.ignored_binaries:
+			return
+
+		self._show_merge_list()
+
+		writemsg("\n!!! The following binary packages have been ignored " + \
+				"due to non-matching USE:\n\n", noiselevel=-1)
+
+		for pkg, flags in self._dynamic_config.ignored_binaries.items():
+			writemsg("    =%s" % pkg.cpv, noiselevel=-1)
+			if pkg.root_config.settings["ROOT"] != "/":
+				writemsg(" for %s" % (pkg.root,), noiselevel=-1)
+			writemsg("\n        use flag(s): %s\n" % ", ".join(sorted(flags)),
+				noiselevel=-1)
+
+		msg = [
+			"",
+			"NOTE: The --binpkg-respect-use=n option will prevent emerge",
+			"      from ignoring these binary packages if possible.",
+			"      Using --binpkg-respect-use=y will silence this warning."
+		]
+
+		for line in msg:
+			if line:
+				line = colorize("INFORM", line)
+			writemsg_stdout(line + "\n", noiselevel=-1)
+
+	def _show_missed_update(self):
+
+		# In order to minimize noise, show only the highest
+		# missed update from each SLOT.
+		missed_updates = {}
+		for pkg, mask_reasons in \
+			self._dynamic_config._runtime_pkg_mask.items():
+			if pkg.installed:
+				# Exclude installed here since we only
+				# want to show available updates.
+				continue
+			chosen_pkg = self._dynamic_config.mydbapi[pkg.root
+				].match_pkgs(pkg.slot_atom)
+			if not chosen_pkg or chosen_pkg[-1] >= pkg:
+				continue
+			k = (pkg.root, pkg.slot_atom)
+			if k in missed_updates:
+				other_pkg, mask_type, parent_atoms = missed_updates[k]
+				if other_pkg > pkg:
+					continue
+			for mask_type, parent_atoms in mask_reasons.items():
+				if not parent_atoms:
+					continue
+				missed_updates[k] = (pkg, mask_type, parent_atoms)
+				break
+
+		if not missed_updates:
+			return
+
+		missed_update_types = {}
+		for pkg, mask_type, parent_atoms in missed_updates.values():
+			missed_update_types.setdefault(mask_type,
+				[]).append((pkg, parent_atoms))
+
+		if '--quiet' in self._frozen_config.myopts and \
+			'--debug' not in self._frozen_config.myopts:
+			missed_update_types.pop("slot conflict", None)
+			missed_update_types.pop("missing dependency", None)
+
+		self._show_missed_update_slot_conflicts(
+			missed_update_types.get("slot conflict"))
+
+		self._show_missed_update_unsatisfied_dep(
+			missed_update_types.get("missing dependency"))
+
+	def _show_missed_update_unsatisfied_dep(self, missed_updates):
+
+		if not missed_updates:
+			return
+
+		self._show_merge_list()
+		backtrack_masked = []
+
+		for pkg, parent_atoms in missed_updates:
+
+			try:
+				for parent, root, atom in parent_atoms:
+					self._show_unsatisfied_dep(root, atom, myparent=parent,
+						check_backtrack=True)
+			except self._backtrack_mask:
+				# This is displayed below in abbreviated form.
+				backtrack_masked.append((pkg, parent_atoms))
+				continue
+
+			writemsg("\n!!! The following update has been skipped " + \
+				"due to unsatisfied dependencies:\n\n", noiselevel=-1)
+
+			writemsg(str(pkg.slot_atom), noiselevel=-1)
+			if pkg.root_config.settings["ROOT"] != "/":
+				writemsg(" for %s" % (pkg.root,), noiselevel=-1)
+			writemsg("\n", noiselevel=-1)
+
+			for parent, root, atom in parent_atoms:
+				self._show_unsatisfied_dep(root, atom, myparent=parent)
+				writemsg("\n", noiselevel=-1)
+
+		if backtrack_masked:
+			# These are shown in abbreviated form, in order to avoid terminal
+			# flooding from mask messages as reported in bug #285832.
+			writemsg("\n!!! The following update(s) have been skipped " + \
+				"due to unsatisfied dependencies\n" + \
+				"!!! triggered by backtracking:\n\n", noiselevel=-1)
+			for pkg, parent_atoms in backtrack_masked:
+				writemsg(str(pkg.slot_atom), noiselevel=-1)
+				if pkg.root_config.settings["ROOT"] != "/":
+					writemsg(" for %s" % (pkg.root,), noiselevel=-1)
+				writemsg("\n", noiselevel=-1)
+
+	def _show_missed_update_slot_conflicts(self, missed_updates):
+
+		if not missed_updates:
+			return
+
+		self._show_merge_list()
+		msg = []
+		msg.append("\nWARNING: One or more updates have been " + \
+			"skipped due to a dependency conflict:\n\n")
+
+		indent = "  "
+		for pkg, parent_atoms in missed_updates:
+			msg.append(str(pkg.slot_atom))
+			if pkg.root_config.settings["ROOT"] != "/":
+				msg.append(" for %s" % (pkg.root,))
+			msg.append("\n\n")
+
+			for parent, atom in parent_atoms:
+				msg.append(indent)
+				msg.append(str(pkg))
+
+				msg.append(" conflicts with\n")
+				msg.append(2*indent)
+				if isinstance(parent,
+					(PackageArg, AtomArg)):
+					# For PackageArg and AtomArg types, it's
+					# redundant to display the atom attribute.
+					msg.append(str(parent))
+				else:
+					# Display the specific atom from SetArg or
+					# Package types.
+					msg.append("%s required by %s" % (atom, parent))
+				msg.append("\n")
+			msg.append("\n")
+
+		writemsg("".join(msg), noiselevel=-1)
+
+	def _show_slot_collision_notice(self):
+		"""Show an informational message advising the user to mask one of the
+		packages. In some cases it may be possible to resolve this
+		automatically, but support for backtracking (removal of nodes that have
+		already been selected) will be required in order to handle all possible
+		cases.
+		"""
+
+		if not self._dynamic_config._slot_collision_info:
+			return
+
+		self._show_merge_list()
+
+		self._dynamic_config._slot_conflict_handler = slot_conflict_handler(self)
+		handler = self._dynamic_config._slot_conflict_handler
+
+		conflict = handler.get_conflict()
+		writemsg(conflict, noiselevel=-1)
+		
+		explanation = handler.get_explanation()
+		if explanation:
+			writemsg(explanation, noiselevel=-1)
+			return
+
+		if "--quiet" in self._frozen_config.myopts:
+			return
+
+		msg = []
+		msg.append("It may be possible to solve this problem ")
+		msg.append("by using package.mask to prevent one of ")
+		msg.append("those packages from being selected. ")
+		msg.append("However, it is also possible that conflicting ")
+		msg.append("dependencies exist such that they are impossible to ")
+		msg.append("satisfy simultaneously.  If such a conflict exists in ")
+		msg.append("the dependencies of two different packages, then those ")
+		msg.append("packages can not be installed simultaneously.")
+		backtrack_opt = self._frozen_config.myopts.get('--backtrack')
+		if not self._dynamic_config._allow_backtracking and \
+			(backtrack_opt is None or \
+			(backtrack_opt > 0 and backtrack_opt < 30)):
+			msg.append(" You may want to try a larger value of the ")
+			msg.append("--backtrack option, such as --backtrack=30, ")
+			msg.append("in order to see if that will solve this conflict ")
+			msg.append("automatically.")
+
+		for line in textwrap.wrap(''.join(msg), 70):
+			writemsg(line + '\n', noiselevel=-1)
+		writemsg('\n', noiselevel=-1)
+
+		msg = []
+		msg.append("For more information, see MASKED PACKAGES ")
+		msg.append("section in the emerge man page or refer ")
+		msg.append("to the Gentoo Handbook.")
+		for line in textwrap.wrap(''.join(msg), 70):
+			writemsg(line + '\n', noiselevel=-1)
+		writemsg('\n', noiselevel=-1)
+
+	def _process_slot_conflicts(self):
+		"""
+		Process slot conflict data to identify specific atoms which
+		lead to conflict. These atoms only match a subset of the
+		packages that have been pulled into a given slot.
+		"""
+		for (slot_atom, root), slot_nodes \
+			in self._dynamic_config._slot_collision_info.items():
+
+			all_parent_atoms = set()
+			for pkg in slot_nodes:
+				parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
+				if not parent_atoms:
+					continue
+				all_parent_atoms.update(parent_atoms)
+
+			for pkg in slot_nodes:
+				parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
+				if parent_atoms is None:
+					parent_atoms = set()
+					self._dynamic_config._parent_atoms[pkg] = parent_atoms
+				for parent_atom in all_parent_atoms:
+					if parent_atom in parent_atoms:
+						continue
+					# Use package set for matching since it will match via
+					# PROVIDE when necessary, while match_from_list does not.
+					parent, atom = parent_atom
+					atom_set = InternalPackageSet(
+						initial_atoms=(atom,), allow_repo=True)
+					if atom_set.findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)):
+						parent_atoms.add(parent_atom)
+					else:
+						self._dynamic_config._slot_conflict_parent_atoms.add(parent_atom)
+
+	def _reinstall_for_flags(self, pkg, forced_flags,
+		orig_use, orig_iuse, cur_use, cur_iuse):
+		"""Return a set of flags that trigger reinstallation, or None if there
+		are no such flags."""
+
+		# binpkg_respect_use: Behave like newuse by default. If newuse is
+		# False and changed_use is True, then behave like changed_use.
+		binpkg_respect_use = (pkg.built and
+			self._dynamic_config.myparams.get("binpkg_respect_use")
+			in ("y", "auto"))
+		newuse = "--newuse" in self._frozen_config.myopts
+		changed_use = "changed-use" == self._frozen_config.myopts.get("--reinstall")
+
+		if newuse or (binpkg_respect_use and not changed_use):
+			flags = set(orig_iuse.symmetric_difference(
+				cur_iuse).difference(forced_flags))
+			flags.update(orig_iuse.intersection(orig_use).symmetric_difference(
+				cur_iuse.intersection(cur_use)))
+			if flags:
+				return flags
+
+		elif changed_use or binpkg_respect_use:
+			flags = orig_iuse.intersection(orig_use).symmetric_difference(
+				cur_iuse.intersection(cur_use))
+			if flags:
+				return flags
+		return None
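+		# Annotation (not in the upstream copy): a worked case for the
+		# --newuse branch above: with orig_iuse == cur_iuse == {'X', 'ssl'},
+		# orig_use == {'ssl'} and cur_use == {'ssl', 'X'}, the enabled-flag
+		# sets give {'ssl'} ^ {'X', 'ssl'} == {'X'}, so enabling X alone is
+		# enough to trigger a reinstall.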
+
+	def _create_graph(self, allow_unsatisfied=False):
+		dep_stack = self._dynamic_config._dep_stack
+		dep_disjunctive_stack = self._dynamic_config._dep_disjunctive_stack
+		while dep_stack or dep_disjunctive_stack:
+			self._spinner_update()
+			while dep_stack:
+				dep = dep_stack.pop()
+				if isinstance(dep, Package):
+					if not self._add_pkg_deps(dep,
+						allow_unsatisfied=allow_unsatisfied):
+						return 0
+					continue
+				if not self._add_dep(dep, allow_unsatisfied=allow_unsatisfied):
+					return 0
+			if dep_disjunctive_stack:
+				if not self._pop_disjunction(allow_unsatisfied):
+					return 0
+		return 1
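+		# Annotation (not in the upstream copy): this module uses C-style
+		# integer truth values; callers treat a 0 return as a fatal
+		# resolution failure and 1 as success.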
+
+	def _expand_set_args(self, input_args, add_to_digraph=False):
+		"""
+		Iterate over a list of DependencyArg instances and yield all
+		instances given in the input together with additional SetArg
+		instances that are generated from nested sets.
+		@param input_args: An iterable of DependencyArg instances
+		@type input_args: Iterable
+		@param add_to_digraph: If True then add SetArg instances
+			to the digraph, in order to record parent -> child
+			relationships from nested sets
+		@type add_to_digraph: Boolean
+		@rtype: Iterable
+		@return: All args given in the input together with additional
+			SetArg instances that are generated from nested sets
+		"""
+
+		traversed_set_args = set()
+
+		for arg in input_args:
+			if not isinstance(arg, SetArg):
+				yield arg
+				continue
+
+			root_config = arg.root_config
+			depgraph_sets = self._dynamic_config.sets[root_config.root]
+			arg_stack = [arg]
+			while arg_stack:
+				arg = arg_stack.pop()
+				if arg in traversed_set_args:
+					continue
+				traversed_set_args.add(arg)
+
+				if add_to_digraph:
+					self._dynamic_config.digraph.add(arg, None,
+						priority=BlockerDepPriority.instance)
+
+				yield arg
+
+				# Traverse nested sets and add them to the stack
+				# if they're not already in the graph. Also, graph
+				# edges between parent and nested sets.
+				for token in arg.pset.getNonAtoms():
+					if not token.startswith(SETPREFIX):
+						continue
+					s = token[len(SETPREFIX):]
+					nested_set = depgraph_sets.sets.get(s)
+					if nested_set is None:
+						nested_set = root_config.sets.get(s)
+					if nested_set is not None:
+						nested_arg = SetArg(arg=token, pset=nested_set,
+							root_config=root_config)
+						arg_stack.append(nested_arg)
+						if add_to_digraph:
+							self._dynamic_config.digraph.add(nested_arg, arg,
+								priority=BlockerDepPriority.instance)
+							depgraph_sets.sets[nested_arg.name] = nested_arg.pset
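+
+	# Annotation (not in the upstream copy; hypothetical set names): if
+	# @myset lists "@system" among its non-atom tokens, a SetArg for
+	# @system is yielded as well, and with add_to_digraph=True it is
+	# graphed as a child of @myset.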
+
+	def _add_dep(self, dep, allow_unsatisfied=False):
+		debug = "--debug" in self._frozen_config.myopts
+		buildpkgonly = "--buildpkgonly" in self._frozen_config.myopts
+		nodeps = "--nodeps" in self._frozen_config.myopts
+		if dep.blocker:
+			if not buildpkgonly and \
+				not nodeps and \
+				not dep.collapsed_priority.ignored and \
+				not dep.collapsed_priority.optional and \
+				dep.parent not in self._dynamic_config._slot_collision_nodes:
+				if dep.parent.onlydeps:
+					# It's safe to ignore blockers if the
+					# parent is an --onlydeps node.
+					return 1
+				# The blocker applies to the root where
+				# the parent is or will be installed.
+				blocker = Blocker(atom=dep.atom,
+					eapi=dep.parent.metadata["EAPI"],
+					priority=dep.priority, root=dep.parent.root)
+				self._dynamic_config._blocker_parents.add(blocker, dep.parent)
+			return 1
+
+		if dep.child is None:
+			dep_pkg, existing_node = self._select_package(dep.root, dep.atom,
+				onlydeps=dep.onlydeps)
+		else:
+			# The caller has selected a specific package
+			# via self._minimize_packages().
+			dep_pkg = dep.child
+			existing_node = self._dynamic_config._slot_pkg_map[
+				dep.root].get(dep_pkg.slot_atom)
+
+		if not dep_pkg:
+			if (dep.collapsed_priority.optional or
+				dep.collapsed_priority.ignored):
+				# This is an unnecessary build-time dep.
+				return 1
+			if allow_unsatisfied:
+				self._dynamic_config._unsatisfied_deps.append(dep)
+				return 1
+			self._dynamic_config._unsatisfied_deps_for_display.append(
+				((dep.root, dep.atom), {"myparent":dep.parent}))
+
+			# The parent node should not already be in
+			# runtime_pkg_mask, since that would trigger an
+			# infinite backtracking loop.
+			if self._dynamic_config._allow_backtracking:
+				if dep.parent in self._dynamic_config._runtime_pkg_mask:
+					if debug:
+						writemsg(
+							"!!! backtracking loop detected: %s %s\n" % \
+							(dep.parent,
+							self._dynamic_config._runtime_pkg_mask[
+							dep.parent]), noiselevel=-1)
+				elif not self.need_restart():
+					# Do not backtrack if only USE have to be changed in
+					# order to satisfy the dependency.
+					dep_pkg, existing_node = \
+						self._select_package(dep.root, dep.atom.without_use,
+							onlydeps=dep.onlydeps)
+					if dep_pkg is None:
+						self._dynamic_config._backtrack_infos["missing dependency"] = dep
+						self._dynamic_config._need_restart = True
+						if debug:
+							msg = []
+							msg.append("")
+							msg.append("")
+							msg.append("backtracking due to unsatisfied dep:")
+							msg.append("    parent: %s" % dep.parent)
+							msg.append("  priority: %s" % dep.priority)
+							msg.append("      root: %s" % dep.root)
+							msg.append("      atom: %s" % dep.atom)
+							msg.append("")
+							writemsg_level("".join("%s\n" % l for l in msg),
+								noiselevel=-1, level=logging.DEBUG)
+
+			return 0
+
+		self._rebuild.add(dep_pkg, dep)
+
+		ignore = dep.collapsed_priority.ignored and \
+			not self._dynamic_config._traverse_ignored_deps
+		if not ignore and not self._add_pkg(dep_pkg, dep):
+			return 0
+		return 1
+
+	def _check_slot_conflict(self, pkg, atom):
+		existing_node = self._dynamic_config._slot_pkg_map[pkg.root].get(pkg.slot_atom)
+		matches = None
+		if existing_node:
+			matches = pkg.cpv == existing_node.cpv
+			if pkg != existing_node and \
+				atom is not None:
+				# Use package set for matching since it will match via
+				# PROVIDE when necessary, while match_from_list does not.
+				matches = bool(InternalPackageSet(initial_atoms=(atom,),
+					allow_repo=True).findAtomForPackage(existing_node,
+					modified_use=self._pkg_use_enabled(existing_node)))
+
+		return (existing_node, matches)
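+		# Annotation (not in the upstream copy): roughly, (None, None)
+		# means the slot is free, (node, True) means the existing node
+		# also satisfies the atom and can be reused, and (node, False)
+		# signals a genuine slot conflict.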
+
+	def _add_pkg(self, pkg, dep):
+		"""
+		Adds a package to the depgraph, queues dependencies, and handles
+		slot conflicts.
+		"""
+		debug = "--debug" in self._frozen_config.myopts
+		myparent = None
+		priority = None
+		depth = 0
+		if dep is None:
+			dep = Dependency()
+		else:
+			myparent = dep.parent
+			priority = dep.priority
+			depth = dep.depth
+		if priority is None:
+			priority = DepPriority()
+
+		if debug:
+			writemsg_level(
+				"\n%s%s %s\n" % ("Child:".ljust(15), pkg,
+				pkg_use_display(pkg, self._frozen_config.myopts,
+				modified_use=self._pkg_use_enabled(pkg))),
+				level=logging.DEBUG, noiselevel=-1)
+			if isinstance(myparent,
+				(PackageArg, AtomArg)):
+				# For PackageArg and AtomArg types, it's
+				# redundant to display the atom attribute.
+				writemsg_level(
+					"%s%s\n" % ("Parent Dep:".ljust(15), myparent),
+					level=logging.DEBUG, noiselevel=-1)
+			else:
+				# Display the specific atom from SetArg or
+				# Package types.
+				uneval = ""
+				if dep.atom is not dep.atom.unevaluated_atom:
+					uneval = " (%s)" % (dep.atom.unevaluated_atom,)
+				writemsg_level(
+					"%s%s%s required by %s\n" %
+					("Parent Dep:".ljust(15), dep.atom, uneval, myparent),
+					level=logging.DEBUG, noiselevel=-1)
+
+		# Ensure that the dependencies of the same package
+		# are never processed more than once.
+		previously_added = pkg in self._dynamic_config.digraph
+
+		pkgsettings = self._frozen_config.pkgsettings[pkg.root]
+
+		arg_atoms = None
+		if True:
+			try:
+				arg_atoms = list(self._iter_atoms_for_pkg(pkg))
+			except portage.exception.InvalidDependString as e:
+				if not pkg.installed:
+					# should have been masked before it was selected
+					raise
+				del e
+
+		# NOTE: REQUIRED_USE checks are delayed until after
+		# package selection, since we want to prompt the user
+		# for USE adjustment rather than have REQUIRED_USE
+		# affect package selection and || dep choices.
+		if not pkg.built and pkg.metadata.get("REQUIRED_USE") and \
+			eapi_has_required_use(pkg.metadata["EAPI"]):
+			required_use_is_sat = check_required_use(
+				pkg.metadata["REQUIRED_USE"],
+				self._pkg_use_enabled(pkg),
+				pkg.iuse.is_valid_flag)
+			if not required_use_is_sat:
+				if dep.atom is not None and dep.parent is not None:
+					self._add_parent_atom(pkg, (dep.parent, dep.atom))
+
+				if arg_atoms:
+					for parent_atom in arg_atoms:
+						parent, atom = parent_atom
+						self._add_parent_atom(pkg, parent_atom)
+
+				atom = dep.atom
+				if atom is None:
+					atom = Atom("=" + pkg.cpv)
+				self._dynamic_config._unsatisfied_deps_for_display.append(
+					((pkg.root, atom), {"myparent":dep.parent}))
+				self._dynamic_config._skip_restart = True
+				return 0
+
+		if not pkg.onlydeps:
+
+			existing_node, existing_node_matches = \
+				self._check_slot_conflict(pkg, dep.atom)
+			slot_collision = False
+			if existing_node:
+				if existing_node_matches:
+					# The existing node can be reused.
+					if arg_atoms:
+						for parent_atom in arg_atoms:
+							parent, atom = parent_atom
+							self._dynamic_config.digraph.add(existing_node, parent,
+								priority=priority)
+							self._add_parent_atom(existing_node, parent_atom)
+					# If a direct circular dependency is not an unsatisfied
+					# buildtime dependency then drop it here since otherwise
+					# it can skew the merge order calculation in an unwanted
+					# way.
+					if existing_node != myparent or \
+						(priority.buildtime and not priority.satisfied):
+						self._dynamic_config.digraph.addnode(existing_node, myparent,
+							priority=priority)
+						if dep.atom is not None and dep.parent is not None:
+							self._add_parent_atom(existing_node,
+								(dep.parent, dep.atom))
+					return 1
+				else:
+					# A slot conflict has occurred. 
+					# The existing node should not already be in
+					# runtime_pkg_mask, since that would trigger an
+					# infinite backtracking loop.
+					if self._dynamic_config._allow_backtracking and \
+						existing_node in \
+						self._dynamic_config._runtime_pkg_mask:
+						if "--debug" in self._frozen_config.myopts:
+							writemsg(
+								"!!! backtracking loop detected: %s %s\n" % \
+								(existing_node,
+								self._dynamic_config._runtime_pkg_mask[
+								existing_node]), noiselevel=-1)
+					elif self._dynamic_config._allow_backtracking and \
+						not self._accept_blocker_conflicts() and \
+						not self.need_restart():
+
+						self._add_slot_conflict(pkg)
+						if dep.atom is not None and dep.parent is not None:
+							self._add_parent_atom(pkg, (dep.parent, dep.atom))
+
+						if arg_atoms:
+							for parent_atom in arg_atoms:
+								parent, atom = parent_atom
+								self._add_parent_atom(pkg, parent_atom)
+						self._process_slot_conflicts()
+
+						backtrack_data = []
+						fallback_data = []
+						all_parents = set()
+						# The ordering of backtrack_data can make
+						# a difference here, because both mask actions may lead
+						# to valid, but different, solutions and the one with
+						# 'existing_node' masked is usually the better one. Because
+						# of that, we choose an order such that
+						# the backtracker will first explore the choice with
+						# existing_node masked. The backtracker reverses the
+						# order, so the order it uses is the reverse of the
+						# order shown here. See bug #339606.
+						for to_be_selected, to_be_masked in (existing_node, pkg), (pkg, existing_node):
+							# For missed update messages, find out which
+							# atoms matched to_be_selected that did not
+							# match to_be_masked.
+							parent_atoms = \
+								self._dynamic_config._parent_atoms.get(to_be_selected, set())
+							if parent_atoms:
+								conflict_atoms = self._dynamic_config._slot_conflict_parent_atoms.intersection(parent_atoms)
+								if conflict_atoms:
+									parent_atoms = conflict_atoms
+
+							all_parents.update(parent_atoms)
+
+							all_match = True
+							for parent, atom in parent_atoms:
+								i = InternalPackageSet(initial_atoms=(atom,),
+									allow_repo=True)
+								if not i.findAtomForPackage(to_be_masked):
+									all_match = False
+									break
+
+							fallback_data.append((to_be_masked, parent_atoms))
+
+							if all_match:
+								# 'to_be_masked' does not violate any parent atom, which means
+								# there is no point in masking it.
+								pass
+							else:
+								backtrack_data.append((to_be_masked, parent_atoms))
+
+						if not backtrack_data:
+							# This shouldn't happen, but fall back to the old
+							# behavior if this gets triggered somehow.
+							backtrack_data = fallback_data
+
+						if len(backtrack_data) > 1:
+							# NOTE: Generally, we prefer to mask the higher
+							# version since this solves common cases in which a
+							# lower version is needed so that all dependencies
+							# will be satisfied (bug #337178). However, if
+							# existing_node happens to be installed then we
+							# mask that since this is a common case that is
+							# triggered when --update is not enabled.
+							if existing_node.installed:
+								pass
+							elif pkg > existing_node:
+								backtrack_data.reverse()
+
+						to_be_masked = backtrack_data[-1][0]
+
+						self._dynamic_config._backtrack_infos["slot conflict"] = backtrack_data
+						self._dynamic_config._need_restart = True
+						if "--debug" in self._frozen_config.myopts:
+							msg = []
+							msg.append("")
+							msg.append("")
+							msg.append("backtracking due to slot conflict:")
+							if backtrack_data is fallback_data:
+								msg.append("!!! backtrack_data fallback")
+							msg.append("   first package:  %s" % existing_node)
+							msg.append("   second package: %s" % pkg)
+							msg.append("  package to mask: %s" % to_be_masked)
+							msg.append("      slot: %s" % pkg.slot_atom)
+							msg.append("   parents: %s" % ", ".join( \
+								"(%s, '%s')" % (ppkg, atom) for ppkg, atom in all_parents))
+							msg.append("")
+							writemsg_level("".join("%s\n" % l for l in msg),
+								noiselevel=-1, level=logging.DEBUG)
+						return 0
+
+					# A slot collision has occurred.  Sometimes this coincides
+					# with unresolvable blockers, so the slot collision will be
+					# shown later if there are no unresolvable blockers.
+					self._add_slot_conflict(pkg)
+					slot_collision = True
+
+					if debug:
+						writemsg_level(
+							"%s%s %s\n" % ("Slot Conflict:".ljust(15),
+							existing_node, pkg_use_display(existing_node,
+							self._frozen_config.myopts,
+							modified_use=self._pkg_use_enabled(existing_node))),
+							level=logging.DEBUG, noiselevel=-1)
+
+			if slot_collision:
+				# Now add this node to the graph so that self.display()
+				# can show use flags and --tree portage.output.  This node is
+				# only being partially added to the graph.  It must not be
+				# allowed to interfere with the other nodes that have been
+				# added.  Do not overwrite data for existing nodes in
+				# self._dynamic_config.mydbapi since that data will be used for blocker
+				# validation.
+				# Even though the graph is now invalid, continue to process
+				# dependencies so that things like --fetchonly can still
+				# function despite collisions.
+				pass
+			elif not previously_added:
+				self._dynamic_config._slot_pkg_map[pkg.root][pkg.slot_atom] = pkg
+				self._dynamic_config.mydbapi[pkg.root].cpv_inject(pkg)
+				self._dynamic_config._filtered_trees[pkg.root]["porttree"].dbapi._clear_cache()
+				self._dynamic_config._highest_pkg_cache.clear()
+				self._check_masks(pkg)
+
+			if not pkg.installed:
+				# Allow this package to satisfy old-style virtuals in case it
+				# doesn't already. Any pre-existing providers will be preferred
+				# over this one.
+				try:
+					pkgsettings.setinst(pkg.cpv, pkg.metadata)
+					# For consistency, also update the global virtuals.
+					settings = self._frozen_config.roots[pkg.root].settings
+					settings.unlock()
+					settings.setinst(pkg.cpv, pkg.metadata)
+					settings.lock()
+				except portage.exception.InvalidDependString:
+					if not pkg.installed:
+						# should have been masked before it was selected
+						raise
+
+		if arg_atoms:
+			self._dynamic_config._set_nodes.add(pkg)
+
+		# Do this even when addme is False (--onlydeps) so that the
+		# parent/child relationship is always known in case
+		# self._show_slot_collision_notice() needs to be called later.
+		self._dynamic_config.digraph.add(pkg, myparent, priority=priority)
+		if dep.atom is not None and dep.parent is not None:
+			self._add_parent_atom(pkg, (dep.parent, dep.atom))
+
+		if arg_atoms:
+			for parent_atom in arg_atoms:
+				parent, atom = parent_atom
+				self._dynamic_config.digraph.add(pkg, parent, priority=priority)
+				self._add_parent_atom(pkg, parent_atom)
+
+		# This section determines whether we go deeper into dependencies or not.
+		# We want to go deeper on a few occasions:
+		# Installing package A, we need to make sure package A's deps are met.
+		# emerge --deep <pkgspec>; we need to recursively check dependencies of pkgspec
+		# If we are in --nodeps (no recursion) mode, we obviously only check 1 level of dependencies.
+		if arg_atoms:
+			depth = 0
+		pkg.depth = depth
+		deep = self._dynamic_config.myparams.get("deep", 0)
+		recurse = deep is True or depth + 1 <= deep
+		dep_stack = self._dynamic_config._dep_stack
+		if "recurse" not in self._dynamic_config.myparams:
+			return 1
+		elif pkg.installed and not recurse:
+			dep_stack = self._dynamic_config._ignored_deps
+
+		self._spinner_update()
+
+		if not previously_added:
+			dep_stack.append(pkg)
+		return 1
+
+	def _check_masks(self, pkg):
+
+		slot_key = (pkg.root, pkg.slot_atom)
+
+		# Check for upgrades in the same slot that are
+		# masked due to a LICENSE change in a newer
+		# version that is not masked for any other reason.
+		other_pkg = self._frozen_config._highest_license_masked.get(slot_key)
+		if other_pkg is not None and pkg < other_pkg:
+			self._dynamic_config._masked_license_updates.add(other_pkg)
+
+	def _add_parent_atom(self, pkg, parent_atom):
+		parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
+		if parent_atoms is None:
+			parent_atoms = set()
+			self._dynamic_config._parent_atoms[pkg] = parent_atoms
+		parent_atoms.add(parent_atom)
+
+	def _add_slot_conflict(self, pkg):
+		self._dynamic_config._slot_collision_nodes.add(pkg)
+		slot_key = (pkg.slot_atom, pkg.root)
+		slot_nodes = self._dynamic_config._slot_collision_info.get(slot_key)
+		if slot_nodes is None:
+			slot_nodes = set()
+			slot_nodes.add(self._dynamic_config._slot_pkg_map[pkg.root][pkg.slot_atom])
+			self._dynamic_config._slot_collision_info[slot_key] = slot_nodes
+		slot_nodes.add(pkg)
+
+	def _add_pkg_deps(self, pkg, allow_unsatisfied=False):
+
+		myroot = pkg.root
+		metadata = pkg.metadata
+		removal_action = "remove" in self._dynamic_config.myparams
+
+		edepend={}
+		depkeys = ["DEPEND","RDEPEND","PDEPEND"]
+		for k in depkeys:
+			edepend[k] = metadata[k]
+
+		if not pkg.built and \
+			"--buildpkgonly" in self._frozen_config.myopts and \
+			"deep" not in self._dynamic_config.myparams:
+			edepend["RDEPEND"] = ""
+			edepend["PDEPEND"] = ""
+
+		ignore_build_time_deps = False
+		if pkg.built and not removal_action:
+			if self._dynamic_config.myparams.get("bdeps", "n") == "y":
+				# Pull in build time deps as requested, but mark them as
+				# "optional" since they are not strictly required. This allows
+				# more freedom in the merge order calculation for solving
+				# circular dependencies. Don't convert to PDEPEND since that
+				# could make --with-bdeps=y less effective if it is used to
+				# adjust merge order to prevent built_with_use() calls from
+				# failing.
+				pass
+			else:
+				ignore_build_time_deps = True
+
+		if removal_action and self._dynamic_config.myparams.get("bdeps", "y") == "n":
+			# Removal actions never traverse ignored buildtime
+			# dependencies, so it's safe to discard them early.
+			edepend["DEPEND"] = ""
+			ignore_build_time_deps = True
+
+		if removal_action:
+			depend_root = myroot
+		else:
+			depend_root = self._frozen_config._running_root.root
+			root_deps = self._frozen_config.myopts.get("--root-deps")
+			if root_deps is not None:
+				if root_deps is True:
+					depend_root = myroot
+				elif root_deps == "rdeps":
+					ignore_build_time_deps = True
+
+		# If rebuild mode is not enabled, it's safe to discard ignored
+		# build-time dependencies. If you want these deps to be traversed
+		# in "complete" mode then you need to specify --with-bdeps=y.
+		if ignore_build_time_deps and \
+			not self._rebuild.rebuild:
+			edepend["DEPEND"] = ""
+
+		deps = (
+			(depend_root, edepend["DEPEND"],
+				self._priority(buildtime=True,
+				optional=(pkg.built or ignore_build_time_deps),
+				ignored=ignore_build_time_deps)),
+			(myroot, edepend["RDEPEND"],
+				self._priority(runtime=True)),
+			(myroot, edepend["PDEPEND"],
+				self._priority(runtime_post=True))
+		)
+
+		debug = "--debug" in self._frozen_config.myopts
+
+		for dep_root, dep_string, dep_priority in deps:
+				if not dep_string:
+					continue
+				if debug:
+					writemsg_level("\nParent:    %s\n" % (pkg,),
+						noiselevel=-1, level=logging.DEBUG)
+					writemsg_level("Depstring: %s\n" % (dep_string,),
+						noiselevel=-1, level=logging.DEBUG)
+					writemsg_level("Priority:  %s\n" % (dep_priority,),
+						noiselevel=-1, level=logging.DEBUG)
+
+				try:
+					dep_string = portage.dep.use_reduce(dep_string,
+						uselist=self._pkg_use_enabled(pkg), is_valid_flag=pkg.iuse.is_valid_flag)
+				except portage.exception.InvalidDependString as e:
+					if not pkg.installed:
+						# should have been masked before it was selected
+						raise
+					del e
+
+					# Try again, but omit the is_valid_flag argument, since
+					# invalid USE conditionals are a common problem and it's
+					# practical to ignore this issue for installed packages.
+					try:
+						dep_string = portage.dep.use_reduce(dep_string,
+							uselist=self._pkg_use_enabled(pkg))
+					except portage.exception.InvalidDependString as e:
+						self._dynamic_config._masked_installed.add(pkg)
+						del e
+						continue
+
+				try:
+					dep_string = list(self._queue_disjunctive_deps(
+						pkg, dep_root, dep_priority, dep_string))
+				except portage.exception.InvalidDependString as e:
+					if pkg.installed:
+						self._dynamic_config._masked_installed.add(pkg)
+						del e
+						continue
+
+					# should have been masked before it was selected
+					raise
+
+				if not dep_string:
+					continue
+
+				dep_string = portage.dep.paren_enclose(dep_string,
+					unevaluated_atom=True)
+
+				if not self._add_pkg_dep_string(
+					pkg, dep_root, dep_priority, dep_string,
+					allow_unsatisfied):
+					return 0
+
+		self._dynamic_config._traversed_pkg_deps.add(pkg)
+		return 1
+
+	def _add_pkg_dep_string(self, pkg, dep_root, dep_priority, dep_string,
+		allow_unsatisfied):
+		_autounmask_backup = self._dynamic_config._autounmask
+		if dep_priority.optional or dep_priority.ignored:
+			# Temporarily disable autounmask for deps that
+			# don't necessarily need to be satisfied.
+			self._dynamic_config._autounmask = False
+		try:
+			return self._wrapped_add_pkg_dep_string(
+				pkg, dep_root, dep_priority, dep_string,
+				allow_unsatisfied)
+		finally:
+			self._dynamic_config._autounmask = _autounmask_backup
+
+	def _wrapped_add_pkg_dep_string(self, pkg, dep_root, dep_priority,
+		dep_string, allow_unsatisfied):
+		depth = pkg.depth + 1
+		deep = self._dynamic_config.myparams.get("deep", 0)
+		recurse_satisfied = deep is True or depth <= deep
+		debug = "--debug" in self._frozen_config.myopts
+		strict = pkg.type_name != "installed"
+
+		if debug:
+			writemsg_level("\nParent:    %s\n" % (pkg,),
+				noiselevel=-1, level=logging.DEBUG)
+			writemsg_level("Depstring: %s\n" % (dep_string,),
+				noiselevel=-1, level=logging.DEBUG)
+			writemsg_level("Priority:  %s\n" % (dep_priority,),
+				noiselevel=-1, level=logging.DEBUG)
+
+		try:
+			selected_atoms = self._select_atoms(dep_root,
+				dep_string, myuse=self._pkg_use_enabled(pkg), parent=pkg,
+				strict=strict, priority=dep_priority)
+		except portage.exception.InvalidDependString:
+			if pkg.installed:
+				self._dynamic_config._masked_installed.add(pkg)
+				return 1
+
+			# should have been masked before it was selected
+			raise
+
+		if debug:
+			writemsg_level("Candidates: %s\n" % \
+				([str(x) for x in selected_atoms[pkg]],),
+				noiselevel=-1, level=logging.DEBUG)
+
+		root_config = self._frozen_config.roots[dep_root]
+		vardb = root_config.trees["vartree"].dbapi
+		traversed_virt_pkgs = set()
+
+		reinstall_atoms = self._frozen_config.reinstall_atoms
+		for atom, child in self._minimize_children(
+			pkg, dep_priority, root_config, selected_atoms[pkg]):
+
+			# If this was a specially generated virtual atom
+			# from dep_check, map it back to the original, in
+			# order to avoid distortion in places like display
+			# or conflict resolution code.
+			is_virt = hasattr(atom, '_orig_atom')
+			atom = getattr(atom, '_orig_atom', atom)
+
+			if atom.blocker and \
+				(dep_priority.optional or dep_priority.ignored):
+				# For --with-bdeps, ignore build-time only blockers
+				# that originate from built packages.
+				continue
+
+			mypriority = dep_priority.copy()
+			if not atom.blocker:
+				inst_pkgs = [inst_pkg for inst_pkg in
+					reversed(vardb.match_pkgs(atom))
+					if not reinstall_atoms.findAtomForPackage(inst_pkg,
+							modified_use=self._pkg_use_enabled(inst_pkg))]
+				if inst_pkgs:
+					for inst_pkg in inst_pkgs:
+						if self._pkg_visibility_check(inst_pkg):
+							# highest visible
+							mypriority.satisfied = inst_pkg
+							break
+					if not mypriority.satisfied:
+						# none visible, so use highest
+						mypriority.satisfied = inst_pkgs[0]
+
+			dep = Dependency(atom=atom,
+				blocker=atom.blocker, child=child, depth=depth, parent=pkg,
+				priority=mypriority, root=dep_root)
+
+			# In some cases, dep_check will return deps that shouldn't
+			# be processed any further, so they are identified and
+			# discarded here. Try to discard as few as possible since
+			# discarded dependencies reduce the amount of information
+			# available for optimization of merge order.
+			ignored = False
+			if not atom.blocker and \
+				not recurse_satisfied and \
+				mypriority.satisfied and \
+				mypriority.satisfied.visible and \
+				dep.child is not None and \
+				not dep.child.installed and \
+				self._dynamic_config._slot_pkg_map[dep.child.root].get(
+				dep.child.slot_atom) is None:
+				myarg = None
+				if dep.root == self._frozen_config.target_root:
+					try:
+						myarg = next(self._iter_atoms_for_pkg(dep.child))
+					except StopIteration:
+						pass
+					except InvalidDependString:
+						if not dep.child.installed:
+							# This shouldn't happen since the package
+							# should have been masked.
+							raise
+
+				if myarg is None:
+					# Existing child selection may not be valid unless
+					# it's added to the graph immediately, since "complete"
+					# mode may select a different child later.
+					ignored = True
+					dep.child = None
+					self._dynamic_config._ignored_deps.append(dep)
+
+			if not ignored:
+				if dep_priority.ignored and \
+					not self._dynamic_config._traverse_ignored_deps:
+					if is_virt and dep.child is not None:
+						traversed_virt_pkgs.add(dep.child)
+					dep.child = None
+					self._dynamic_config._ignored_deps.append(dep)
+				else:
+					if not self._add_dep(dep,
+						allow_unsatisfied=allow_unsatisfied):
+						return 0
+					if is_virt and dep.child is not None:
+						traversed_virt_pkgs.add(dep.child)
+
+		selected_atoms.pop(pkg)
+
+		# Add selected indirect virtual deps to the graph. This
+		# takes advantage of circular dependency avoidance that's done
+		# by dep_zapdeps. We preserve actual parent/child relationships
+		# here in order to avoid distorting the dependency graph like
+		# <=portage-2.1.6.x did.
+		for virt_dep, atoms in selected_atoms.items():
+
+			virt_pkg = virt_dep.child
+			if virt_pkg not in traversed_virt_pkgs:
+				continue
+
+			if debug:
+				writemsg_level("\nCandidates: %s: %s\n" % \
+					(virt_pkg.cpv, [str(x) for x in atoms]),
+					noiselevel=-1, level=logging.DEBUG)
+
+			if not dep_priority.ignored or \
+				self._dynamic_config._traverse_ignored_deps:
+
+				inst_pkgs = [inst_pkg for inst_pkg in
+					reversed(vardb.match_pkgs(virt_dep.atom))
+					if not reinstall_atoms.findAtomForPackage(inst_pkg,
+							modified_use=self._pkg_use_enabled(inst_pkg))]
+				if inst_pkgs:
+					for inst_pkg in inst_pkgs:
+						if self._pkg_visibility_check(inst_pkg):
+							# highest visible
+							virt_dep.priority.satisfied = inst_pkg
+							break
+					if not virt_dep.priority.satisfied:
+						# none visible, so use highest
+						virt_dep.priority.satisfied = inst_pkgs[0]
+
+				if not self._add_pkg(virt_pkg, virt_dep):
+					return 0
+
+			for atom, child in self._minimize_children(
+				pkg, self._priority(runtime=True), root_config, atoms):
+
+				# If this was a specially generated virtual atom
+				# from dep_check, map it back to the original, in
+				# order to avoid distortion in places like display
+				# or conflict resolution code.
+				is_virt = hasattr(atom, '_orig_atom')
+				atom = getattr(atom, '_orig_atom', atom)
+
+				# This is a GLEP 37 virtual, so its deps are all runtime.
+				mypriority = self._priority(runtime=True)
+				if not atom.blocker:
+					inst_pkgs = [inst_pkg for inst_pkg in
+						reversed(vardb.match_pkgs(atom))
+						if not reinstall_atoms.findAtomForPackage(inst_pkg,
+								modified_use=self._pkg_use_enabled(inst_pkg))]
+					if inst_pkgs:
+						for inst_pkg in inst_pkgs:
+							if self._pkg_visibility_check(inst_pkg):
+								# highest visible
+								mypriority.satisfied = inst_pkg
+								break
+						if not mypriority.satisfied:
+							# none visible, so use highest
+							mypriority.satisfied = inst_pkgs[0]
+
+				# Dependencies of virtuals are considered to have the
+				# same depth as the virtual itself.
+				dep = Dependency(atom=atom,
+					blocker=atom.blocker, child=child, depth=virt_dep.depth,
+					parent=virt_pkg, priority=mypriority, root=dep_root,
+					collapsed_parent=pkg, collapsed_priority=dep_priority)
+
+				ignored = False
+				if not atom.blocker and \
+					not recurse_satisfied and \
+					mypriority.satisfied and \
+					mypriority.satisfied.visible and \
+					dep.child is not None and \
+					not dep.child.installed and \
+					self._dynamic_config._slot_pkg_map[dep.child.root].get(
+					dep.child.slot_atom) is None:
+					myarg = None
+					if dep.root == self._frozen_config.target_root:
+						try:
+							myarg = next(self._iter_atoms_for_pkg(dep.child))
+						except StopIteration:
+							pass
+						except InvalidDependString:
+							if not dep.child.installed:
+								raise
+
+					if myarg is None:
+						ignored = True
+						dep.child = None
+						self._dynamic_config._ignored_deps.append(dep)
+
+				if not ignored:
+					if dep_priority.ignored and \
+						not self._dynamic_config._traverse_ignored_deps:
+						if is_virt and dep.child is not None:
+							traversed_virt_pkgs.add(dep.child)
+						dep.child = None
+						self._dynamic_config._ignored_deps.append(dep)
+					else:
+						if not self._add_dep(dep,
+							allow_unsatisfied=allow_unsatisfied):
+							return 0
+						if is_virt and dep.child is not None:
+							traversed_virt_pkgs.add(dep.child)
+
+		if debug:
+			writemsg_level("\nExiting... %s\n" % (pkg,),
+				noiselevel=-1, level=logging.DEBUG)
+
+		return 1
+
+	def _minimize_children(self, parent, priority, root_config, atoms):
+		"""
+		Selects packages to satisfy the given atoms, and minimizes the
+		number of selected packages. This serves to identify and eliminate
+		redundant package selections when multiple atoms happen to specify
+		a version range.
+		"""
+
+		atom_pkg_map = {}
+
+		for atom in atoms:
+			if atom.blocker:
+				yield (atom, None)
+				continue
+			dep_pkg, existing_node = self._select_package(
+				root_config.root, atom)
+			if dep_pkg is None:
+				yield (atom, None)
+				continue
+			atom_pkg_map[atom] = dep_pkg
+
+		if len(atom_pkg_map) < 2:
+			for item in atom_pkg_map.items():
+				yield item
+			return
+
+		cp_pkg_map = {}
+		pkg_atom_map = {}
+		for atom, pkg in atom_pkg_map.items():
+			pkg_atom_map.setdefault(pkg, set()).add(atom)
+			cp_pkg_map.setdefault(pkg.cp, set()).add(pkg)
+
+		for pkgs in cp_pkg_map.values():
+			if len(pkgs) < 2:
+				for pkg in pkgs:
+					for atom in pkg_atom_map[pkg]:
+						yield (atom, pkg)
+				continue
+
+			# Use a digraph to identify and eliminate any
+			# redundant package selections.
+			atom_pkg_graph = digraph()
+			cp_atoms = set()
+			for pkg1 in pkgs:
+				for atom in pkg_atom_map[pkg1]:
+					cp_atoms.add(atom)
+					atom_pkg_graph.add(pkg1, atom)
+					atom_set = InternalPackageSet(initial_atoms=(atom,),
+						allow_repo=True)
+					for pkg2 in pkgs:
+						if pkg2 is pkg1:
+							continue
+						if atom_set.findAtomForPackage(pkg2, modified_use=self._pkg_use_enabled(pkg2)):
+							atom_pkg_graph.add(pkg2, atom)
+
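+			# A package is redundant if every atom that selects it is
+			# also satisfied by at least one other selected package.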
+			for pkg in pkgs:
+				eliminate_pkg = True
+				for atom in atom_pkg_graph.parent_nodes(pkg):
+					if len(atom_pkg_graph.child_nodes(atom)) < 2:
+						eliminate_pkg = False
+						break
+				if eliminate_pkg:
+					atom_pkg_graph.remove(pkg)
+
+			# Yield ~, =*, < and <= atoms first, since those are more likely to
+			# cause slot conflicts, and we want those atoms to be displayed
+			# in the resulting slot conflict message (see bug #291142).
+			conflict_atoms = []
+			normal_atoms = []
+			for atom in cp_atoms:
+				conflict = False
+				for child_pkg in atom_pkg_graph.child_nodes(atom):
+					existing_node, matches = \
+						self._check_slot_conflict(child_pkg, atom)
+					if existing_node and not matches:
+						conflict = True
+						break
+				if conflict:
+					conflict_atoms.append(atom)
+				else:
+					normal_atoms.append(atom)
+
+			for atom in chain(conflict_atoms, normal_atoms):
+				child_pkgs = atom_pkg_graph.child_nodes(atom)
+				# if more than one child, yield highest version
+				if len(child_pkgs) > 1:
+					child_pkgs.sort()
+				yield (atom, child_pkgs[-1])
+
+	def _queue_disjunctive_deps(self, pkg, dep_root, dep_priority, dep_struct):
+		"""
+		Queue disjunctive (virtual and ||) deps in self._dynamic_config._dep_disjunctive_stack.
+		Yields non-disjunctive deps. Raises InvalidDependString when 
+		necessary.
+		"""
+		i = 0
+		while i < len(dep_struct):
+			x = dep_struct[i]
+			if isinstance(x, list):
+				for y in self._queue_disjunctive_deps(
+					pkg, dep_root, dep_priority, x):
+					yield y
+			elif x == "||":
+				self._queue_disjunction(pkg, dep_root, dep_priority,
+					[ x, dep_struct[ i + 1 ] ] )
+				i += 1
+			else:
+				try:
+					x = portage.dep.Atom(x, eapi=pkg.metadata["EAPI"])
+				except portage.exception.InvalidAtom:
+					if not pkg.installed:
+						raise portage.exception.InvalidDependString(
+							"invalid atom: '%s'" % x)
+				else:
+					# Note: Eventually this will check for PROPERTIES=virtual
+					# or whatever other metadata gets implemented for this
+					# purpose.
+					if x.cp.startswith('virtual/'):
+						self._queue_disjunction( pkg, dep_root,
+							dep_priority, [ str(x) ] )
+					else:
+						yield str(x)
+			i += 1
+
+	def _queue_disjunction(self, pkg, dep_root, dep_priority, dep_struct):
+		self._dynamic_config._dep_disjunctive_stack.append(
+			(pkg, dep_root, dep_priority, dep_struct))
+
+	def _pop_disjunction(self, allow_unsatisfied):
+		"""
+		Pop one disjunctive dep from self._dynamic_config._dep_disjunctive_stack, and use it to
+		populate self._dynamic_config._dep_stack.
+		"""
+		pkg, dep_root, dep_priority, dep_struct = \
+			self._dynamic_config._dep_disjunctive_stack.pop()
+		dep_string = portage.dep.paren_enclose(dep_struct,
+			unevaluated_atom=True)
+		if not self._add_pkg_dep_string(
+			pkg, dep_root, dep_priority, dep_string, allow_unsatisfied):
+			return 0
+		return 1
+
+	def _priority(self, **kwargs):
+		if "remove" in self._dynamic_config.myparams:
+			priority_constructor = UnmergeDepPriority
+		else:
+			priority_constructor = DepPriority
+		return priority_constructor(**kwargs)
+
+	def _dep_expand(self, root_config, atom_without_category):
+		"""
+		@param root_config: a root config instance
+		@type root_config: RootConfig
+		@param atom_without_category: an atom without a category component
+		@type atom_without_category: String
+		@rtype: list
+		@return: a list of atoms containing categories (possibly empty)
+		"""
+		null_cp = portage.dep_getkey(insert_category_into_atom(
+			atom_without_category, "null"))
+		cat, atom_pn = portage.catsplit(null_cp)
+
+		dbs = self._dynamic_config._filtered_trees[root_config.root]["dbs"]
+		categories = set()
+		for db, pkg_type, built, installed, db_keys in dbs:
+			for cat in db.categories:
+				if db.cp_list("%s/%s" % (cat, atom_pn)):
+					categories.add(cat)
+
+		deps = []
+		for cat in categories:
+			deps.append(Atom(insert_category_into_atom(
+				atom_without_category, cat), allow_repo=True))
+		return deps
+
+	def _have_new_virt(self, root, atom_cp):
+		ret = False
+		for db, pkg_type, built, installed, db_keys in \
+			self._dynamic_config._filtered_trees[root]["dbs"]:
+			if db.cp_list(atom_cp):
+				ret = True
+				break
+		return ret
+
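+	# Yield (arg, atom) pairs for arguments whose atoms match pkg,
+	# skipping atoms that are better satisfied by a visible package
+	# in a different slot with a higher version.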
+	def _iter_atoms_for_pkg(self, pkg):
+		depgraph_sets = self._dynamic_config.sets[pkg.root]
+		atom_arg_map = depgraph_sets.atom_arg_map
+		for atom in depgraph_sets.atoms.iterAtomsForPackage(pkg):
+			if atom.cp != pkg.cp and \
+				self._have_new_virt(pkg.root, atom.cp):
+				continue
+			visible_pkgs = \
+				self._dynamic_config._visible_pkgs[pkg.root].match_pkgs(atom)
+			visible_pkgs.reverse() # descending order
+			higher_slot = None
+			for visible_pkg in visible_pkgs:
+				if visible_pkg.cp != atom.cp:
+					continue
+				if pkg >= visible_pkg:
+					# This is descending order, and we're not
+					# interested in any versions <= pkg given.
+					break
+				if pkg.slot_atom != visible_pkg.slot_atom:
+					higher_slot = visible_pkg
+					break
+			if higher_slot is not None:
+				continue
+			for arg in atom_arg_map[(atom, pkg.root)]:
+				if isinstance(arg, PackageArg) and \
+					arg.package != pkg:
+					continue
+				yield arg, atom
+
+	def select_files(self, myfiles):
+		"""Given a list of .tbz2s, .ebuilds sets, and deps, populate
+		self._dynamic_config._initial_arg_list and call self._resolve to create the 
+		appropriate depgraph and return a favorite list."""
+		self._load_vdb()
+		debug = "--debug" in self._frozen_config.myopts
+		root_config = self._frozen_config.roots[self._frozen_config.target_root]
+		sets = root_config.sets
+		depgraph_sets = self._dynamic_config.sets[root_config.root]
+		myfavorites=[]
+		eroot = root_config.root
+		root = root_config.settings['ROOT']
+		vardb = self._frozen_config.trees[eroot]["vartree"].dbapi
+		real_vardb = self._frozen_config._trees_orig[eroot]["vartree"].dbapi
+		portdb = self._frozen_config.trees[eroot]["porttree"].dbapi
+		bindb = self._frozen_config.trees[eroot]["bintree"].dbapi
+		pkgsettings = self._frozen_config.pkgsettings[eroot]
+		args = []
+		onlydeps = "--onlydeps" in self._frozen_config.myopts
+		lookup_owners = []
+		for x in myfiles:
+			ext = os.path.splitext(x)[1]
+			if ext==".tbz2":
+				if not os.path.exists(x):
+					if os.path.exists(
+						os.path.join(pkgsettings["PKGDIR"], "All", x)):
+						x = os.path.join(pkgsettings["PKGDIR"], "All", x)
+					elif os.path.exists(
+						os.path.join(pkgsettings["PKGDIR"], x)):
+						x = os.path.join(pkgsettings["PKGDIR"], x)
+					else:
+						writemsg("\n\n!!! Binary package '"+str(x)+"' does not exist.\n", noiselevel=-1)
+						writemsg("!!! Please ensure the tbz2 exists as specified.\n\n", noiselevel=-1)
+						return 0, myfavorites
+				mytbz2=portage.xpak.tbz2(x)
+				mykey=mytbz2.getelements("CATEGORY")[0]+"/"+os.path.splitext(os.path.basename(x))[0]
+				if os.path.realpath(x) != \
+					os.path.realpath(bindb.bintree.getname(mykey)):
+					writemsg(colorize("BAD", "\n*** You need to adjust PKGDIR to emerge this package.\n\n"), noiselevel=-1)
+					self._dynamic_config._skip_restart = True
+					return 0, myfavorites
+
+				pkg = self._pkg(mykey, "binary", root_config,
+					onlydeps=onlydeps)
+				args.append(PackageArg(arg=x, package=pkg,
+					root_config=root_config))
+			elif ext==".ebuild":
+				ebuild_path = portage.util.normalize_path(os.path.abspath(x))
+				pkgdir = os.path.dirname(ebuild_path)
+				tree_root = os.path.dirname(os.path.dirname(pkgdir))
+				cp = pkgdir[len(tree_root)+1:]
+				e = portage.exception.PackageNotFound(
+					("%s is not in a valid portage tree " + \
+					"hierarchy or does not exist") % x)
+				if not portage.isvalidatom(cp):
+					raise e
+				cat = portage.catsplit(cp)[0]
+				mykey = cat + "/" + os.path.basename(ebuild_path[:-7])
+				if not portage.isvalidatom("="+mykey):
+					raise e
+				ebuild_path = portdb.findname(mykey)
+				if ebuild_path:
+					if ebuild_path != os.path.join(os.path.realpath(tree_root),
+						cp, os.path.basename(ebuild_path)):
+						writemsg(colorize("BAD", "\n*** You need to adjust PORTDIR or PORTDIR_OVERLAY to emerge this package.\n\n"), noiselevel=-1)
+						self._dynamic_config._skip_restart = True
+						return 0, myfavorites
+					if mykey not in portdb.xmatch(
+						"match-visible", portage.cpv_getkey(mykey)):
+						writemsg(colorize("BAD", "\n*** You are emerging a masked package. It is MUCH better to use\n"), noiselevel=-1)
+						writemsg(colorize("BAD", "*** /etc/portage/package.* to accomplish this. See portage(5) man\n"), noiselevel=-1)
+						writemsg(colorize("BAD", "*** page for details.\n"), noiselevel=-1)
+						countdown(int(self._frozen_config.settings["EMERGE_WARNING_DELAY"]),
+							"Continuing...")
+				else:
+					raise portage.exception.PackageNotFound(
+						"%s is not in a valid portage tree hierarchy or does not exist" % x)
+				pkg = self._pkg(mykey, "ebuild", root_config,
+					onlydeps=onlydeps, myrepo=portdb.getRepositoryName(
+					os.path.dirname(os.path.dirname(os.path.dirname(ebuild_path)))))
+				args.append(PackageArg(arg=x, package=pkg,
+					root_config=root_config))
+			elif x.startswith(os.path.sep):
+				if not x.startswith(eroot):
+					portage.writemsg(("\n\n!!! '%s' does not start with" + \
+						" $EROOT.\n") % x, noiselevel=-1)
+					self._dynamic_config._skip_restart = True
+					return 0, []
+				# Queue these up since it's most efficient to handle
+				# multiple files in a single iter_owners() call.
+				lookup_owners.append(x)
+			elif x.startswith("." + os.sep) or \
+				x.startswith(".." + os.sep):
+				f = os.path.abspath(x)
+				if not f.startswith(eroot):
+					portage.writemsg(("\n\n!!! '%s' (resolved from '%s') does not start with" + \
+						" $EROOT.\n") % (f, x), noiselevel=-1)
+					self._dynamic_config._skip_restart = True
+					return 0, []
+				lookup_owners.append(f)
+			else:
+				if x in ("system", "world"):
+					x = SETPREFIX + x
+				if x.startswith(SETPREFIX):
+					s = x[len(SETPREFIX):]
+					if s not in sets:
+						raise portage.exception.PackageSetNotFound(s)
+					if s in depgraph_sets.sets:
+						continue
+					pset = sets[s]
+					depgraph_sets.sets[s] = pset
+					args.append(SetArg(arg=x, pset=pset,
+						root_config=root_config))
+					continue
+				if not is_valid_package_atom(x, allow_repo=True):
+					portage.writemsg("\n\n!!! '%s' is not a valid package atom.\n" % x,
+						noiselevel=-1)
+					portage.writemsg("!!! Please check ebuild(5) for full details.\n")
+					portage.writemsg("!!! (Did you specify a version but forget to prefix with '='?)\n")
+					self._dynamic_config._skip_restart = True
+					return (0,[])
+				# Don't expand categories or old-style virtuals here unless
+				# necessary. Expansion of old-style virtuals here causes at
+				# least the following problems:
+				#   1) It's more difficult to determine which set(s) an atom
+				#      came from, if any.
+				#   2) It takes away freedom from the resolver to choose other
+				#      possible expansions when necessary.
+				if "/" in x:
+					args.append(AtomArg(arg=x, atom=Atom(x, allow_repo=True),
+						root_config=root_config))
+					continue
+				expanded_atoms = self._dep_expand(root_config, x)
+				installed_cp_set = set()
+				for atom in expanded_atoms:
+					if vardb.cp_list(atom.cp):
+						installed_cp_set.add(atom.cp)
+
+				if len(installed_cp_set) > 1:
+					non_virtual_cps = set()
+					for atom_cp in installed_cp_set:
+						if not atom_cp.startswith("virtual/"):
+							non_virtual_cps.add(atom_cp)
+					if len(non_virtual_cps) == 1:
+						installed_cp_set = non_virtual_cps
+
+				if len(expanded_atoms) > 1 and len(installed_cp_set) == 1:
+					installed_cp = next(iter(installed_cp_set))
+					for atom in expanded_atoms:
+						if atom.cp == installed_cp:
+							available = False
+							for pkg in self._iter_match_pkgs_any(
+								root_config, atom.without_use,
+								onlydeps=onlydeps):
+								if not pkg.installed:
+									available = True
+									break
+							if available:
+								expanded_atoms = [atom]
+								break
+
+				# If a non-virtual package and one or more virtual packages
+				# are in expanded_atoms, use the non-virtual package.
+				if len(expanded_atoms) > 1:
+					number_of_virtuals = 0
+					for expanded_atom in expanded_atoms:
+						if expanded_atom.cp.startswith("virtual/"):
+							number_of_virtuals += 1
+						else:
+							candidate = expanded_atom
+					if len(expanded_atoms) - number_of_virtuals == 1:
+						expanded_atoms = [ candidate ]
+
+				if len(expanded_atoms) > 1:
+					writemsg("\n\n", noiselevel=-1)
+					ambiguous_package_name(x, expanded_atoms, root_config,
+						self._frozen_config.spinner, self._frozen_config.myopts)
+					self._dynamic_config._skip_restart = True
+					return False, myfavorites
+				if expanded_atoms:
+					atom = expanded_atoms[0]
+				else:
+					null_atom = Atom(insert_category_into_atom(x, "null"),
+						allow_repo=True)
+					cat, atom_pn = portage.catsplit(null_atom.cp)
+					virts_p = root_config.settings.get_virts_p().get(atom_pn)
+					if virts_p:
+						# Allow the depgraph to choose which virtual.
+						atom = Atom(null_atom.replace('null/', 'virtual/', 1),
+							allow_repo=True)
+					else:
+						atom = null_atom
+
+				if atom.use and atom.use.conditional:
+					writemsg(
+						("\n\n!!! '%s' contains a conditional " + \
+						"which is not allowed.\n") % (x,), noiselevel=-1)
+					writemsg("!!! Please check ebuild(5) for full details.\n")
+					self._dynamic_config._skip_restart = True
+					return (0,[])
+
+				args.append(AtomArg(arg=x, atom=atom,
+					root_config=root_config))
+
+		if lookup_owners:
+			relative_paths = []
+			search_for_multiple = False
+			if len(lookup_owners) > 1:
+				search_for_multiple = True
+
+			for x in lookup_owners:
+				if not search_for_multiple and os.path.isdir(x):
+					search_for_multiple = True
+				relative_paths.append(x[len(root)-1:])
+
+			owners = set()
+			for pkg, relative_path in \
+				real_vardb._owners.iter_owners(relative_paths):
+				owners.add(pkg.mycpv)
+				if not search_for_multiple:
+					break
+
+			if not owners:
+				portage.writemsg(("\n\n!!! '%s' is not claimed " + \
+					"by any package.\n") % lookup_owners[0], noiselevel=-1)
+				self._dynamic_config._skip_restart = True
+				return 0, []
+
+			for cpv in owners:
+				slot = vardb.aux_get(cpv, ["SLOT"])[0]
+				if not slot:
+					# portage now masks packages with missing slot, but it's
+					# possible that one was installed by an older version
+					atom = Atom(portage.cpv_getkey(cpv))
+				else:
+					atom = Atom("%s:%s" % (portage.cpv_getkey(cpv), slot))
+				args.append(AtomArg(arg=atom, atom=atom,
+					root_config=root_config))
+
+		if "--update" in self._frozen_config.myopts:
+			# In some cases, the greedy slots behavior can pull in a slot that
+			# the user would want to uninstall due to it being blocked by a
+			# newer version in a different slot. Therefore, it's necessary to
+			# detect and discard any that should be uninstalled. Each time
+			# that arguments are updated, package selections are repeated in
+			# order to ensure consistency with the current arguments:
+			#
+			#  1) Initialize args
+			#  2) Select packages and generate initial greedy atoms
+			#  3) Update args with greedy atoms
+			#  4) Select packages and generate greedy atoms again, while
+			#     accounting for any blockers between selected packages
+			#  5) Update args with revised greedy atoms
+
+			self._set_args(args)
+			greedy_args = []
+			for arg in args:
+				greedy_args.append(arg)
+				if not isinstance(arg, AtomArg):
+					continue
+				for atom in self._greedy_slots(arg.root_config, arg.atom):
+					greedy_args.append(
+						AtomArg(arg=arg.arg, atom=atom,
+							root_config=arg.root_config))
+
+			self._set_args(greedy_args)
+			del greedy_args
+
+			# Revise greedy atoms, accounting for any blockers
+			# between selected packages.
+			revised_greedy_args = []
+			for arg in args:
+				revised_greedy_args.append(arg)
+				if not isinstance(arg, AtomArg):
+					continue
+				for atom in self._greedy_slots(arg.root_config, arg.atom,
+					blocker_lookahead=True):
+					revised_greedy_args.append(
+						AtomArg(arg=arg.arg, atom=atom,
+							root_config=arg.root_config))
+			args = revised_greedy_args
+			del revised_greedy_args
+
+		self._set_args(args)
+
+		myfavorites = set(myfavorites)
+		for arg in args:
+			if isinstance(arg, (AtomArg, PackageArg)):
+				myfavorites.add(arg.atom)
+			elif isinstance(arg, SetArg):
+				myfavorites.add(arg.arg)
+		myfavorites = list(myfavorites)
+
+		if debug:
+			portage.writemsg("\n", noiselevel=-1)
+		# Order needs to be preserved since a feature of --nodeps
+		# is to allow the user to force a specific merge order.
+		self._dynamic_config._initial_arg_list = args[:]
+
+		return self._resolve(myfavorites)
+
+	def _resolve(self, myfavorites):
+		"""Given self._dynamic_config._initial_arg_list, pull in the root nodes,
+		call self._create_graph to process their deps and return
+		a favorite list."""
+		debug = "--debug" in self._frozen_config.myopts
+		onlydeps = "--onlydeps" in self._frozen_config.myopts
+		myroot = self._frozen_config.target_root
+		pkgsettings = self._frozen_config.pkgsettings[myroot]
+		pprovideddict = pkgsettings.pprovideddict
+		virtuals = pkgsettings.getvirtuals()
+		args = self._dynamic_config._initial_arg_list[:]
+		for root, atom in chain(self._rebuild.rebuild_list,
+			self._rebuild.reinstall_list):
+			args.append(AtomArg(arg=atom, atom=atom,
+				root_config=self._frozen_config.roots[root]))
+		for arg in self._expand_set_args(args, add_to_digraph=True):
+			for atom in arg.pset.getAtoms():
+				self._spinner_update()
+				dep = Dependency(atom=atom, onlydeps=onlydeps,
+					root=myroot, parent=arg)
+				try:
+					pprovided = pprovideddict.get(atom.cp)
+					if pprovided and portage.match_from_list(atom, pprovided):
+						# A provided package has been specified on the command line.
+						self._dynamic_config._pprovided_args.append((arg, atom))
+						continue
+					if isinstance(arg, PackageArg):
+						if not self._add_pkg(arg.package, dep) or \
+							not self._create_graph():
+							if not self.need_restart():
+								sys.stderr.write(("\n\n!!! Problem " + \
+									"resolving dependencies for %s\n") % \
+									arg.arg)
+							return 0, myfavorites
+						continue
+					if debug:
+						writemsg_level("\n      Arg: %s\n     Atom: %s\n" %
+							(arg, atom), noiselevel=-1, level=logging.DEBUG)
+					pkg, existing_node = self._select_package(
+						myroot, atom, onlydeps=onlydeps)
+					if not pkg:
+						pprovided_match = False
+						for virt_choice in virtuals.get(atom.cp, []):
+							expanded_atom = portage.dep.Atom(
+								atom.replace(atom.cp, virt_choice.cp, 1))
+							pprovided = pprovideddict.get(expanded_atom.cp)
+							if pprovided and \
+								portage.match_from_list(expanded_atom, pprovided):
+								# A provided package has been
+								# specified on the command line.
+								self._dynamic_config._pprovided_args.append((arg, atom))
+								pprovided_match = True
+								break
+						if pprovided_match:
+							continue
+
+						if not (isinstance(arg, SetArg) and \
+							arg.name in ("selected", "system", "world")):
+							self._dynamic_config._unsatisfied_deps_for_display.append(
+								((myroot, atom), {"myparent" : arg}))
+							return 0, myfavorites
+
+						self._dynamic_config._missing_args.append((arg, atom))
+						continue
+					if atom.cp != pkg.cp:
+						# For old-style virtuals, we need to repeat the
+						# package.provided check against the selected package.
+						expanded_atom = atom.replace(atom.cp, pkg.cp)
+						pprovided = pprovideddict.get(pkg.cp)
+						if pprovided and \
+							portage.match_from_list(expanded_atom, pprovided):
+							# A provided package has been
+							# specified on the command line.
+							self._dynamic_config._pprovided_args.append((arg, atom))
+							continue
+					if pkg.installed and \
+						"selective" not in self._dynamic_config.myparams and \
+						not self._frozen_config.excluded_pkgs.findAtomForPackage(
+						pkg, modified_use=self._pkg_use_enabled(pkg)):
+						self._dynamic_config._unsatisfied_deps_for_display.append(
+							((myroot, atom), {"myparent" : arg}))
+						# Previous behavior was to bail out in this case, but
+						# since the dep is satisfied by the installed package,
+						# it's more friendly to continue building the graph
+						# and just show a warning message. Therefore, only bail
+						# out here if the atom is not from either the system or
+						# world set.
+						if not (isinstance(arg, SetArg) and \
+							arg.name in ("selected", "system", "world")):
+							return 0, myfavorites
+
+					# Add the selected package to the graph as soon as possible
+					# so that later dep_check() calls can use it as feedback
+					# for making more consistent atom selections.
+					if not self._add_pkg(pkg, dep):
+						if self.need_restart():
+							pass
+						elif isinstance(arg, SetArg):
+							writemsg(("\n\n!!! Problem resolving " + \
+								"dependencies for %s from %s\n") % \
+								(atom, arg.arg), noiselevel=-1)
+						else:
+							writemsg(("\n\n!!! Problem resolving " + \
+								"dependencies for %s\n") % \
+								(atom,), noiselevel=-1)
+						return 0, myfavorites
+
+				except SystemExit as e:
+					raise # Needed, else we can't exit
+				except Exception as e:
+					writemsg("\n\n!!! Problem in '%s' dependencies.\n" % atom, noiselevel=-1)
+					writemsg("!!! %s %s\n" % (str(e), str(getattr(e, "__module__", None))))
+					raise
+
+		# Now that the root packages have been added to the graph,
+		# process the dependencies.
+		if not self._create_graph():
+			return 0, myfavorites
+
+		try:
+			self.altlist()
+		except self._unknown_internal_error:
+			return False, myfavorites
+
+		digraph_set = frozenset(self._dynamic_config.digraph)
+
+		if digraph_set.intersection(
+			self._dynamic_config._needed_unstable_keywords) or \
+			digraph_set.intersection(
+			self._dynamic_config._needed_p_mask_changes) or \
+			digraph_set.intersection(
+			self._dynamic_config._needed_use_config_changes) or \
+			digraph_set.intersection(
+			self._dynamic_config._needed_license_changes):
+			# We failed if the user needs to change the configuration.
+			self._dynamic_config._success_without_autounmask = True
+			return False, myfavorites
+
+		digraph_set = None
+
+		if self._rebuild.trigger_rebuilds():
+			backtrack_infos = self._dynamic_config._backtrack_infos
+			config = backtrack_infos.setdefault("config", {})
+			config["rebuild_list"] = self._rebuild.rebuild_list
+			config["reinstall_list"] = self._rebuild.reinstall_list
+			self._dynamic_config._need_restart = True
+			return False, myfavorites
+
+		# We're true here unless we are missing binaries.
+		return (True, myfavorites)
+
+	def _set_args(self, args):
+		"""
+		Create the "__non_set_args__" package set from atoms and packages given as
+		arguments. This method can be called multiple times if necessary.
+		The package selection cache is automatically invalidated, since
+		arguments influence package selections.
+		"""
+
+		set_atoms = {}
+		non_set_atoms = {}
+		for root in self._dynamic_config.sets:
+			depgraph_sets = self._dynamic_config.sets[root]
+			depgraph_sets.sets.setdefault('__non_set_args__',
+				InternalPackageSet(allow_repo=True)).clear()
+			depgraph_sets.atoms.clear()
+			depgraph_sets.atom_arg_map.clear()
+			set_atoms[root] = []
+			non_set_atoms[root] = []
+
+		# We don't add set args to the digraph here since that
+		# happens at a later stage and we don't want to make
+		# any state changes here that aren't reversed by
+		# another call to this method.
+		for arg in self._expand_set_args(args, add_to_digraph=False):
+			atom_arg_map = self._dynamic_config.sets[
+				arg.root_config.root].atom_arg_map
+			if isinstance(arg, SetArg):
+				atom_group = set_atoms[arg.root_config.root]
+			else:
+				atom_group = non_set_atoms[arg.root_config.root]
+
+			for atom in arg.pset.getAtoms():
+				atom_group.append(atom)
+				atom_key = (atom, arg.root_config.root)
+				refs = atom_arg_map.get(atom_key)
+				if refs is None:
+					refs = []
+					atom_arg_map[atom_key] = refs
+				if arg not in refs:
+					refs.append(arg)
+
+		for root in self._dynamic_config.sets:
+			depgraph_sets = self._dynamic_config.sets[root]
+			depgraph_sets.atoms.update(chain(set_atoms.get(root, []),
+				non_set_atoms.get(root, [])))
+			depgraph_sets.sets['__non_set_args__'].update(
+				non_set_atoms.get(root, []))
+
+		# Invalidate the package selection cache, since
+		# arguments influence package selections.
+		self._dynamic_config._highest_pkg_cache.clear()
+		for trees in self._dynamic_config._filtered_trees.values():
+			trees["porttree"].dbapi._clear_cache()
+
+	def _greedy_slots(self, root_config, atom, blocker_lookahead=False):
+		"""
+		Return a list of slot atoms corresponding to installed slots that
+		differ from the slot of the highest visible match. When
+		blocker_lookahead is True, slot atoms that would trigger a blocker
+		conflict are automatically discarded, potentially allowing automatic
+		uninstallation of older slots when appropriate.
+		"""
+		highest_pkg, in_graph = self._select_package(root_config.root, atom)
+		if highest_pkg is None:
+			return []
+		vardb = root_config.trees["vartree"].dbapi
+		slots = set()
+		for cpv in vardb.match(atom):
+			# don't mix new virtuals with old virtuals
+			if portage.cpv_getkey(cpv) == highest_pkg.cp:
+				slots.add(vardb.aux_get(cpv, ["SLOT"])[0])
+
+		slots.add(highest_pkg.metadata["SLOT"])
+		if len(slots) == 1:
+			return []
+		greedy_pkgs = []
+		slots.remove(highest_pkg.metadata["SLOT"])
+		while slots:
+			slot = slots.pop()
+			slot_atom = portage.dep.Atom("%s:%s" % (highest_pkg.cp, slot))
+			pkg, in_graph = self._select_package(root_config.root, slot_atom)
+			if pkg is not None and \
+				pkg.cp == highest_pkg.cp and pkg < highest_pkg:
+				greedy_pkgs.append(pkg)
+		if not greedy_pkgs:
+			return []
+		if not blocker_lookahead:
+			return [pkg.slot_atom for pkg in greedy_pkgs]
+
+		blockers = {}
+		blocker_dep_keys = ["DEPEND", "PDEPEND", "RDEPEND"]
+		for pkg in greedy_pkgs + [highest_pkg]:
+			dep_str = " ".join(pkg.metadata[k] for k in blocker_dep_keys)
+			try:
+				selected_atoms = self._select_atoms(
+					pkg.root, dep_str, self._pkg_use_enabled(pkg),
+					parent=pkg, strict=True)
+			except portage.exception.InvalidDependString:
+				continue
+			blocker_atoms = []
+			for atoms in selected_atoms.values():
+				blocker_atoms.extend(x for x in atoms if x.blocker)
+			blockers[pkg] = InternalPackageSet(initial_atoms=blocker_atoms)
+
+		if highest_pkg not in blockers:
+			return []
+
+		# filter packages with invalid deps
+		greedy_pkgs = [pkg for pkg in greedy_pkgs if pkg in blockers]
+
+		# filter packages that conflict with highest_pkg
+		greedy_pkgs = [pkg for pkg in greedy_pkgs if not \
+			(blockers[highest_pkg].findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)) or \
+			blockers[pkg].findAtomForPackage(highest_pkg, modified_use=self._pkg_use_enabled(highest_pkg)))]
+
+		if not greedy_pkgs:
+			return []
+
+		# If two packages conflict, discard the lower version.
+		discard_pkgs = set()
+		greedy_pkgs.sort(reverse=True)
+		for i in range(len(greedy_pkgs) - 1):
+			pkg1 = greedy_pkgs[i]
+			if pkg1 in discard_pkgs:
+				continue
+			for j in range(i + 1, len(greedy_pkgs)):
+				pkg2 = greedy_pkgs[j]
+				if pkg2 in discard_pkgs:
+					continue
+				if blockers[pkg1].findAtomForPackage(pkg2, modified_use=self._pkg_use_enabled(pkg2)) or \
+					blockers[pkg2].findAtomForPackage(pkg1, modified_use=self._pkg_use_enabled(pkg1)):
+					# pkg1 > pkg2
+					discard_pkgs.add(pkg2)
+
+		return [pkg.slot_atom for pkg in greedy_pkgs \
+			if pkg not in discard_pkgs]
+
+	def _select_atoms_from_graph(self, *pargs, **kwargs):
+		"""
+		Prefer atoms matching packages that have already been
+		added to the graph or those that are installed and have
+		not been scheduled for replacement.
+		"""
+		kwargs["trees"] = self._dynamic_config._graph_trees
+		return self._select_atoms_highest_available(*pargs, **kwargs)
+
+	def _select_atoms_highest_available(self, root, depstring,
+		myuse=None, parent=None, strict=True, trees=None, priority=None):
+		"""This will raise InvalidDependString if necessary. If trees is
+		None then self._dynamic_config._filtered_trees is used."""
+
+		pkgsettings = self._frozen_config.pkgsettings[root]
+		if trees is None:
+			trees = self._dynamic_config._filtered_trees
+		mytrees = trees[root]
+		atom_graph = digraph()
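+		# Note: the "if True:" below is seemingly a remnant of an
+		# earlier conditional, kept to preserve the block's indentation.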
+		if True:
+			# Temporarily disable autounmask so that || preferences
+			# account for masking and USE settings.
+			_autounmask_backup = self._dynamic_config._autounmask
+			self._dynamic_config._autounmask = False
+			# backup state for restoration, in case of recursive
+			# calls to this method
+			backup_state = mytrees.copy()
+			try:
+				# clear state from previous call, in case this
+				# call is recursive (we have a backup, that we
+				# will use to restore it later)
+				mytrees.pop("pkg_use_enabled", None)
+				mytrees.pop("parent", None)
+				mytrees.pop("atom_graph", None)
+				mytrees.pop("priority", None)
+
+				mytrees["pkg_use_enabled"] = self._pkg_use_enabled
+				if parent is not None:
+					mytrees["parent"] = parent
+					mytrees["atom_graph"] = atom_graph
+				if priority is not None:
+					mytrees["priority"] = priority
+
+				mycheck = portage.dep_check(depstring, None,
+					pkgsettings, myuse=myuse,
+					myroot=root, trees=trees)
+			finally:
+				# restore state
+				self._dynamic_config._autounmask = _autounmask_backup
+				mytrees.pop("pkg_use_enabled", None)
+				mytrees.pop("parent", None)
+				mytrees.pop("atom_graph", None)
+				mytrees.pop("priority", None)
+				mytrees.update(backup_state)
+			if not mycheck[0]:
+				raise portage.exception.InvalidDependString(mycheck[1])
+		if parent is None:
+			selected_atoms = mycheck[1]
+		elif parent not in atom_graph:
+			selected_atoms = {parent : mycheck[1]}
+		else:
+			# Recursively traversed virtual dependencies, and their
+			# direct dependencies, are considered to have the same
+			# depth as direct dependencies.
+			if parent.depth is None:
+				virt_depth = None
+			else:
+				virt_depth = parent.depth + 1
+			chosen_atom_ids = frozenset(id(atom) for atom in mycheck[1])
+			selected_atoms = OrderedDict()
+			node_stack = [(parent, None, None)]
+			traversed_nodes = set()
+			while node_stack:
+				node, node_parent, parent_atom = node_stack.pop()
+				traversed_nodes.add(node)
+				if node is parent:
+					k = parent
+				else:
+					if node_parent is parent:
+						if priority is None:
+							node_priority = None
+						else:
+							node_priority = priority.copy()
+					else:
+						# virtuals only have runtime deps
+						node_priority = self._priority(runtime=True)
+
+					k = Dependency(atom=parent_atom,
+						blocker=parent_atom.blocker, child=node,
+						depth=virt_depth, parent=node_parent,
+						priority=node_priority, root=node.root)
+
+				child_atoms = []
+				selected_atoms[k] = child_atoms
+				for atom_node in atom_graph.child_nodes(node):
+					child_atom = atom_node[0]
+					if id(child_atom) not in chosen_atom_ids:
+						continue
+					child_atoms.append(child_atom)
+					for child_node in atom_graph.child_nodes(atom_node):
+						if child_node in traversed_nodes:
+							continue
+						if not portage.match_from_list(
+							child_atom, [child_node]):
+							# Typically this means that the atom
+							# specifies USE deps that are unsatisfied
+							# by the selected package. The caller will
+							# record this as an unsatisfied dependency
+							# when necessary.
+							continue
+						node_stack.append((child_node, node, child_atom))
+
+		return selected_atoms
+
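+	# If atom matches a virtual package in the graph, yield the real
+	# atoms from its RDEPEND instead; otherwise yield atom unchanged.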
+	def _expand_virt_from_graph(self, root, atom):
+		if not isinstance(atom, Atom):
+			atom = Atom(atom)
+		graphdb = self._dynamic_config.mydbapi[root]
+		match = graphdb.match_pkgs(atom)
+		if not match:
+			yield atom
+			return
+		pkg = match[-1]
+		if not pkg.cpv.startswith("virtual/"):
+			yield atom
+			return
+		try:
+			rdepend = self._select_atoms_from_graph(
+				pkg.root, pkg.metadata.get("RDEPEND", ""),
+				myuse=self._pkg_use_enabled(pkg),
+				parent=pkg, strict=False)
+		except InvalidDependString as e:
+			writemsg_level("!!! Invalid RDEPEND in " + \
+				"'%svar/db/pkg/%s/RDEPEND': %s\n" % \
+				(pkg.root, pkg.cpv, e),
+				noiselevel=-1, level=logging.ERROR)
+			yield atom
+			return
+
+		for atoms in rdepend.values():
+			for atom in atoms:
+				if hasattr(atom, "_orig_atom"):
+					# Ignore virtual atoms since we're only
+					# interested in expanding the real atoms.
+					continue
+				yield atom
+
+	def _virt_deps_visible(self, pkg, ignore_use=False):
+		"""
+		Assumes pkg is a virtual package. Traverses virtual deps recursively
+		and returns True if all deps are visible, False otherwise. This is
+		useful for checking if it will be necessary to expand virtual slots,
+		for cases like bug #382557.
+		"""
+		try:
+			rdepend = self._select_atoms(
+				pkg.root, pkg.metadata.get("RDEPEND", ""),
+				myuse=self._pkg_use_enabled(pkg),
+				parent=pkg, priority=self._priority(runtime=True))
+		except InvalidDependString as e:
+			if not pkg.installed:
+				raise
+			writemsg_level("!!! Invalid RDEPEND in " + \
+				"'%svar/db/pkg/%s/RDEPEND': %s\n" % \
+				(pkg.root, pkg.cpv, e),
+				noiselevel=-1, level=logging.ERROR)
+			return False
+
+		for atoms in rdepend.values():
+			for atom in atoms:
+				if ignore_use:
+					atom = atom.without_use
+				pkg, existing = self._select_package(
+					pkg.root, atom)
+				if pkg is None or not self._pkg_visibility_check(pkg):
+					return False
+
+		return True
+
+	def _get_dep_chain(self, start_node, target_atom=None,
+		unsatisfied_dependency=False):
+		"""
+		Returns a list of (atom, node_type) pairs that represent a dep chain.
+		If target_atom is None, the first package shown is pkg's parent.
+		If target_atom is not None the first package shown is pkg.
+		If unsatisfied_dependency is True, the first parent selected is one
+		whose dependency is not satisfied by 'pkg'. This is needed for USE changes.
+		(Does not support target_atom.)
+		"""
+		traversed_nodes = set()
+		dep_chain = []
+		node = start_node
+		child = None
+		all_parents = self._dynamic_config._parent_atoms
+		graph = self._dynamic_config.digraph
+
+		if target_atom is not None and isinstance(node, Package):
+			affecting_use = set()
+			for dep_str in "DEPEND", "RDEPEND", "PDEPEND":
+				try:
+					affecting_use.update(extract_affecting_use(
+						node.metadata[dep_str], target_atom,
+						eapi=node.metadata["EAPI"]))
+				except InvalidDependString:
+					if not node.installed:
+						raise
+			affecting_use.difference_update(node.use.mask, node.use.force)
+			pkg_name = _unicode_decode("%s") % (node.cpv,)
+			if affecting_use:
+				usedep = []
+				for flag in affecting_use:
+					if flag in self._pkg_use_enabled(node):
+						usedep.append(flag)
+					else:
+						usedep.append("-"+flag)
+				pkg_name += "[%s]" % ",".join(usedep)
+
+			dep_chain.append((pkg_name, node.type_name))
+
+
+		# To build a dep chain for the given package we take
+		# "random" parents from the digraph, except for the
+		# first package, because we want a parent that forced
+		# the corresponding change (i.e. '>=foo-2' instead of 'foo').
+
+		traversed_nodes.add(start_node)
+
+		start_node_parent_atoms = {}
+		for ppkg, patom in all_parents.get(node, []):
+			# Get a list of suitable atoms. For USE deps
+			# (i.e. when unsatisfied_dependency is True) we
+			# require that start_node does not match the atom.
+			if not unsatisfied_dependency or \
+				not InternalPackageSet(initial_atoms=(patom,)).findAtomForPackage(start_node):
+				start_node_parent_atoms.setdefault(patom, []).append(ppkg)
+
+		if start_node_parent_atoms:
+			# If there are parents in all_parents then use one of them.
+			# If not, then this package got pulled in by an Arg and
+			# will be correctly handled by the code that handles later
+			# packages in the dep chain.
+			best_match = best_match_to_list(node.cpv, start_node_parent_atoms)
+
+			child = node
+			for ppkg in start_node_parent_atoms[best_match]:
+				node = ppkg
+				if ppkg in self._dynamic_config._initial_arg_list:
+					# Stop if reached the top level of the dep chain.
+					break
+
+		while node is not None:
+			traversed_nodes.add(node)
+
+			if node not in graph:
+				# The parent is not in the graph due to backtracking.
+				break
+
+			elif isinstance(node, DependencyArg):
+				if graph.parent_nodes(node):
+					node_type = "set"
+				else:
+					node_type = "argument"
+				dep_chain.append((_unicode_decode("%s") % (node,), node_type))
+
+			elif node is not start_node:
+				for ppkg, patom in all_parents[child]:
+					if ppkg == node:
+						if child is start_node and unsatisfied_dependency and \
+							InternalPackageSet(initial_atoms=(patom,)).findAtomForPackage(child):
+							# This atom is satisfied by child, there must be another atom.
+							continue
+						atom = patom.unevaluated_atom
+						break
+
+				dep_strings = set()
+				priorities = graph.nodes[node][0].get(child)
+				if priorities is None:
+					# This edge comes from _parent_atoms and was not added to
+					# the graph, and _parent_atoms does not contain priorities.
+					dep_strings.add(node.metadata["DEPEND"])
+					dep_strings.add(node.metadata["RDEPEND"])
+					dep_strings.add(node.metadata["PDEPEND"])
+				else:
+					for priority in priorities:
+						if priority.buildtime:
+							dep_strings.add(node.metadata["DEPEND"])
+						if priority.runtime:
+							dep_strings.add(node.metadata["RDEPEND"])
+						if priority.runtime_post:
+							dep_strings.add(node.metadata["PDEPEND"])
+
+				affecting_use = set()
+				for dep_str in dep_strings:
+					try:
+						affecting_use.update(extract_affecting_use(
+							dep_str, atom, eapi=node.metadata["EAPI"]))
+					except InvalidDependString:
+						if not node.installed:
+							raise
+
+				# Don't show flags as 'affecting' if the user can't change them.
+				affecting_use.difference_update(node.use.mask, \
+					node.use.force)
+
+				pkg_name = _unicode_decode("%s") % (node.cpv,)
+				if affecting_use:
+					usedep = []
+					for flag in affecting_use:
+						if flag in self._pkg_use_enabled(node):
+							usedep.append(flag)
+						else:
+							usedep.append("-"+flag)
+					pkg_name += "[%s]" % ",".join(usedep)
+
+				dep_chain.append((pkg_name, node.type_name))
+
+			# When traversing to parents, prefer arguments over packages
+			# since arguments are root nodes. Never traverse the same
+			# package twice, in order to prevent an infinite loop.
+			child = node
+			selected_parent = None
+			parent_arg = None
+			parent_merge = None
+			parent_unsatisfied = None
+
+			for parent in self._dynamic_config.digraph.parent_nodes(node):
+				if parent in traversed_nodes:
+					continue
+				if isinstance(parent, DependencyArg):
+					parent_arg = parent
+				else:
+					if isinstance(parent, Package) and \
+						parent.operation == "merge":
+						parent_merge = parent
+					if unsatisfied_dependency and node is start_node:
+						# Make sure that pkg doesn't satisfy parent's dependency.
+						# This ensures that we select the correct parent for use
+						# flag changes.
+						for ppkg, atom in all_parents[start_node]:
+							if parent is ppkg:
+								atom_set = InternalPackageSet(initial_atoms=(atom,))
+								if not atom_set.findAtomForPackage(start_node):
+									parent_unsatisfied = parent
+								break
+					else:
+						selected_parent = parent
+
+			if parent_unsatisfied is not None:
+				selected_parent = parent_unsatisfied
+			elif parent_merge is not None:
+				# Prefer parent in the merge list (bug #354747).
+				selected_parent = parent_merge
+			elif parent_arg is not None:
+				if self._dynamic_config.digraph.parent_nodes(parent_arg):
+					selected_parent = parent_arg
+				else:
+					dep_chain.append(
+						(_unicode_decode("%s") % (parent_arg,), "argument"))
+					selected_parent = None
+
+			node = selected_parent
+		return dep_chain
+
+	def _get_dep_chain_as_comment(self, pkg, unsatisfied_dependency=False):
+		dep_chain = self._get_dep_chain(pkg, unsatisfied_dependency=unsatisfied_dependency)
+		display_list = []
+		for node, node_type in dep_chain:
+			if node_type == "argument":
+				display_list.append("required by %s (argument)" % node)
+			else:
+				display_list.append("required by %s" % node)
+
+		msg = "#" + ", ".join(display_list) + "\n"
+		return msg
+
+
+	def _show_unsatisfied_dep(self, root, atom, myparent=None, arg=None,
+		check_backtrack=False, check_autounmask_breakage=False):
+		"""
+		When check_backtrack=True, no output is produced and
+		the method either returns or raises _backtrack_mask if
+		a matching package has been masked by backtracking.
+		"""
+		backtrack_mask = False
+		autounmask_broke_use_dep = False
+		atom_set = InternalPackageSet(initial_atoms=(atom.without_use,),
+			allow_repo=True)
+		atom_set_with_use = InternalPackageSet(initial_atoms=(atom,),
+			allow_repo=True)
+		xinfo = '"%s"' % atom.unevaluated_atom
+		if arg:
+			xinfo='"%s"' % arg
+		if isinstance(myparent, AtomArg):
+			xinfo = _unicode_decode('"%s"') % (myparent,)
+		# Discard null/ from failed cpv_expand category expansion.
+		xinfo = xinfo.replace("null/", "")
+		if root != self._frozen_config._running_root.root:
+			xinfo = "%s for %s" % (xinfo, root)
+		masked_packages = []
+		missing_use = []
+		missing_use_adjustable = set()
+		required_use_unsatisfied = []
+		masked_pkg_instances = set()
+		have_eapi_mask = False
+		pkgsettings = self._frozen_config.pkgsettings[root]
+		root_config = self._frozen_config.roots[root]
+		portdb = self._frozen_config.roots[root].trees["porttree"].dbapi
+		vardb = self._frozen_config.roots[root].trees["vartree"].dbapi
+		bindb = self._frozen_config.roots[root].trees["bintree"].dbapi
+		dbs = self._dynamic_config._filtered_trees[root]["dbs"]
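+		# Scan the non-installed dbs (ebuild and binary trees) for
+		# packages matching the atom, collecting mask reasons, USE
+		# problems and REQUIRED_USE violations for the display below.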
+		for db, pkg_type, built, installed, db_keys in dbs:
+			if installed:
+				continue
+			if hasattr(db, "xmatch"):
+				cpv_list = db.xmatch("match-all-cpv-only", atom.without_use)
+			else:
+				cpv_list = db.match(atom.without_use)
+
+			if atom.repo is None and hasattr(db, "getRepositories"):
+				repo_list = db.getRepositories()
+			else:
+				repo_list = [atom.repo]
+
+			# descending order
+			cpv_list.reverse()
+			for cpv in cpv_list:
+				for repo in repo_list:
+					if not db.cpv_exists(cpv, myrepo=repo):
+						continue
+
+					metadata, mreasons = get_mask_info(root_config, cpv, pkgsettings, db, pkg_type, \
+						built, installed, db_keys, myrepo=repo, _pkg_use_enabled=self._pkg_use_enabled)
+					if metadata is not None and \
+						portage.eapi_is_supported(metadata["EAPI"]):
+						if not repo:
+							repo = metadata.get('repository')
+						pkg = self._pkg(cpv, pkg_type, root_config,
+							installed=installed, myrepo=repo)
+						# pkg.metadata contains calculated USE for ebuilds,
+						# required later for getMissingLicenses.
+						metadata = pkg.metadata
+						if pkg.invalid:
+							# Avoid doing any operations with packages that
+							# have invalid metadata. It would be unsafe at
+							# least because it could trigger unhandled
+							# exceptions in places like check_required_use().
+							masked_packages.append(
+								(root_config, pkgsettings, cpv, repo, metadata, mreasons))
+							continue
+						if not atom_set.findAtomForPackage(pkg,
+							modified_use=self._pkg_use_enabled(pkg)):
+							continue
+						if pkg in self._dynamic_config._runtime_pkg_mask:
+							backtrack_reasons = \
+								self._dynamic_config._runtime_pkg_mask[pkg]
+							mreasons.append('backtracking: %s' % \
+								', '.join(sorted(backtrack_reasons)))
+							backtrack_mask = True
+						if not mreasons and self._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
+							modified_use=self._pkg_use_enabled(pkg)):
+							mreasons = ["exclude option"]
+						if mreasons:
+							masked_pkg_instances.add(pkg)
+						if atom.unevaluated_atom.use:
+							try:
+								if not pkg.iuse.is_valid_flag(atom.unevaluated_atom.use.required) \
+									or atom.violated_conditionals(self._pkg_use_enabled(pkg), pkg.iuse.is_valid_flag).use:
+									missing_use.append(pkg)
+									if atom_set_with_use.findAtomForPackage(pkg):
+										autounmask_broke_use_dep = True
+									if not mreasons:
+										continue
+							except InvalidAtom:
+								writemsg("violated_conditionals raised " + \
+									"InvalidAtom: '%s' parent: %s" % \
+									(atom, myparent), noiselevel=-1)
+								raise
+						if not mreasons and \
+							not pkg.built and \
+							pkg.metadata.get("REQUIRED_USE") and \
+							eapi_has_required_use(pkg.metadata["EAPI"]):
+							if not check_required_use(
+								pkg.metadata["REQUIRED_USE"],
+								self._pkg_use_enabled(pkg),
+								pkg.iuse.is_valid_flag):
+								required_use_unsatisfied.append(pkg)
+								continue
+						root_slot = (pkg.root, pkg.slot_atom)
+						if pkg.built and root_slot in self._rebuild.rebuild_list:
+							mreasons = ["need to rebuild from source"]
+						elif pkg.installed and root_slot in self._rebuild.reinstall_list:
+							mreasons = ["need to rebuild from source"]
+						elif pkg.built and not mreasons:
+							mreasons = ["use flag configuration mismatch"]
+					masked_packages.append(
+						(root_config, pkgsettings, cpv, repo, metadata, mreasons))
+
+		if check_backtrack:
+			if backtrack_mask:
+				raise self._backtrack_mask()
+			else:
+				return
+
+		if check_autounmask_breakage:
+			if autounmask_broke_use_dep:
+				raise self._autounmask_breakage()
+			else:
+				return
+
+		missing_use_reasons = []
+		missing_iuse_reasons = []
+		for pkg in missing_use:
+			use = self._pkg_use_enabled(pkg)
+			missing_iuse = []
+			#Use the unevaluated atom here, because some flags might
+			#have been lost during evaluation.
+			required_flags = atom.unevaluated_atom.use.required
+			missing_iuse = pkg.iuse.get_missing_iuse(required_flags)
+
+			mreasons = []
+			if missing_iuse:
+				mreasons.append("Missing IUSE: %s" % " ".join(missing_iuse))
+				missing_iuse_reasons.append((pkg, mreasons))
+			else:
+				need_enable = sorted(atom.use.enabled.difference(use).intersection(pkg.iuse.all))
+				need_disable = sorted(atom.use.disabled.intersection(use).intersection(pkg.iuse.all))
+
+				untouchable_flags = \
+					frozenset(chain(pkg.use.mask, pkg.use.force))
+				if untouchable_flags.intersection(
+					chain(need_enable, need_disable)):
+					continue
+
+				missing_use_adjustable.add(pkg)
+				required_use = pkg.metadata.get("REQUIRED_USE")
+				required_use_warning = ""
+				if required_use:
+					old_use = self._pkg_use_enabled(pkg)
+					new_use = set(self._pkg_use_enabled(pkg))
+					for flag in need_enable:
+						new_use.add(flag)
+					for flag in need_disable:
+						new_use.discard(flag)
+					if check_required_use(required_use, old_use, pkg.iuse.is_valid_flag) and \
+						not check_required_use(required_use, new_use, pkg.iuse.is_valid_flag):
+							required_use_warning = ", this change violates use flag constraints " + \
+								"defined by %s: '%s'" % (pkg.cpv, human_readable_required_use(required_use))
+
+				if need_enable or need_disable:
+					changes = []
+					changes.extend(colorize("red", "+" + x) \
+						for x in need_enable)
+					changes.extend(colorize("blue", "-" + x) \
+						for x in need_disable)
+					mreasons.append("Change USE: %s" % " ".join(changes) + required_use_warning)
+					missing_use_reasons.append((pkg, mreasons))
+
+			if not missing_iuse and myparent and atom.unevaluated_atom.use.conditional:
+				# Let's see if the violated use deps are conditional.
+				# If so, suggest to change them on the parent.
+
+				# If the child package is masked then a change to
+				# parent USE is not a valid solution (a normal mask
+				# message should be displayed instead).
+				if pkg in masked_pkg_instances:
+					continue
+
+				mreasons = []
+				violated_atom = atom.unevaluated_atom.violated_conditionals(self._pkg_use_enabled(pkg), \
+					pkg.iuse.is_valid_flag, self._pkg_use_enabled(myparent))
+				if not (violated_atom.use.enabled or violated_atom.use.disabled):
+					#all violated use deps are conditional
+					changes = []
+					conditional = violated_atom.use.conditional
+					involved_flags = set(chain(conditional.equal, conditional.not_equal, \
+						conditional.enabled, conditional.disabled))
+
+					untouchable_flags = \
+						frozenset(chain(myparent.use.mask, myparent.use.force))
+					if untouchable_flags.intersection(involved_flags):
+						continue
+
+					required_use = myparent.metadata.get("REQUIRED_USE")
+					required_use_warning = ""
+					if required_use:
+						old_use = self._pkg_use_enabled(myparent)
+						new_use = set(self._pkg_use_enabled(myparent))
+						for flag in involved_flags:
+							if flag in old_use:
+								new_use.discard(flag)
+							else:
+								new_use.add(flag)
+						if check_required_use(required_use, old_use, myparent.iuse.is_valid_flag) and \
+							not check_required_use(required_use, new_use, myparent.iuse.is_valid_flag):
+								required_use_warning = ", this change violates use flag constraints " + \
+									"defined by %s: '%s'" % (myparent.cpv, \
+									human_readable_required_use(required_use))
+
+					for flag in involved_flags:
+						if flag in self._pkg_use_enabled(myparent):
+							changes.append(colorize("blue", "-" + flag))
+						else:
+							changes.append(colorize("red", "+" + flag))
+					mreasons.append("Change USE: %s" % " ".join(changes) + required_use_warning)
+					if (myparent, mreasons) not in missing_use_reasons:
+						missing_use_reasons.append((myparent, mreasons))
+
+		unmasked_use_reasons = [(pkg, mreasons) for (pkg, mreasons) \
+			in missing_use_reasons if pkg not in masked_pkg_instances]
+
+		unmasked_iuse_reasons = [(pkg, mreasons) for (pkg, mreasons) \
+			in missing_iuse_reasons if pkg not in masked_pkg_instances]
+
+		show_missing_use = False
+		if unmasked_use_reasons:
+			# Only show the latest version.
+			show_missing_use = []
+			pkg_reason = None
+			parent_reason = None
+			for pkg, mreasons in unmasked_use_reasons:
+				if pkg is myparent:
+					if parent_reason is None:
+						#This happens if a use change on the parent
+						#leads to a satisfied conditional use dep.
+						parent_reason = (pkg, mreasons)
+				elif pkg_reason is None:
+					#Don't rely on the first pkg in unmasked_use_reasons
+					#being the highest version of the dependency.
+					pkg_reason = (pkg, mreasons)
+			if pkg_reason:
+				show_missing_use.append(pkg_reason)
+			if parent_reason:
+				show_missing_use.append(parent_reason)
+
+		elif unmasked_iuse_reasons:
+			masked_with_iuse = False
+			for pkg in masked_pkg_instances:
+				#Use atom.unevaluated_atom here, because some flags
+				#might have been lost during evaluation.
+				if not pkg.iuse.get_missing_iuse(atom.unevaluated_atom.use.required):
+					# Package(s) with required IUSE are masked,
+					# so display a normal masking message.
+					masked_with_iuse = True
+					break
+			if not masked_with_iuse:
+				show_missing_use = unmasked_iuse_reasons
+
+		if required_use_unsatisfied:
+			# If there's a higher unmasked version in missing_use_adjustable
+			# then we want to show that instead.
+			for pkg in missing_use_adjustable:
+				if pkg not in masked_pkg_instances and \
+					pkg > required_use_unsatisfied[0]:
+					required_use_unsatisfied = False
+					break
+
+		mask_docs = False
+
+		if required_use_unsatisfied:
+			# We have an unmasked package that only requires USE adjustment
+			# in order to satisfy REQUIRED_USE, and nothing more. We assume
+			# that the user wants the latest version, so only the first
+			# instance is displayed.
+			pkg = required_use_unsatisfied[0]
+			output_cpv = pkg.cpv + _repo_separator + pkg.repo
+			writemsg_stdout("\n!!! " + \
+				colorize("BAD", "The ebuild selected to satisfy ") + \
+				colorize("INFORM", xinfo) + \
+				colorize("BAD", " has unmet requirements.") + "\n",
+				noiselevel=-1)
+			use_display = pkg_use_display(pkg, self._frozen_config.myopts)
+			writemsg_stdout("- %s %s\n" % (output_cpv, use_display),
+				noiselevel=-1)
+			writemsg_stdout("\n  The following REQUIRED_USE flag constraints " + \
+				"are unsatisfied:\n", noiselevel=-1)
+			reduced_noise = check_required_use(
+				pkg.metadata["REQUIRED_USE"],
+				self._pkg_use_enabled(pkg),
+				pkg.iuse.is_valid_flag).tounicode()
+			writemsg_stdout("    %s\n" % \
+				human_readable_required_use(reduced_noise),
+				noiselevel=-1)
+			normalized_required_use = \
+				" ".join(pkg.metadata["REQUIRED_USE"].split())
+			if reduced_noise != normalized_required_use:
+				writemsg_stdout("\n  The above constraints " + \
+					"are a subset of the following complete expression:\n",
+					noiselevel=-1)
+				writemsg_stdout("    %s\n" % \
+					human_readable_required_use(normalized_required_use),
+					noiselevel=-1)
+			writemsg_stdout("\n", noiselevel=-1)
+
+		elif show_missing_use:
+			writemsg_stdout("\nemerge: there are no ebuilds built with USE flags to satisfy "+green(xinfo)+".\n", noiselevel=-1)
+			writemsg_stdout("!!! One of the following packages is required to complete your request:\n", noiselevel=-1)
+			for pkg, mreasons in show_missing_use:
+				writemsg_stdout("- "+pkg.cpv+_repo_separator+pkg.repo+" ("+", ".join(mreasons)+")\n", noiselevel=-1)
+
+		elif masked_packages:
+			writemsg_stdout("\n!!! " + \
+				colorize("BAD", "All ebuilds that could satisfy ") + \
+				colorize("INFORM", xinfo) + \
+				colorize("BAD", " have been masked.") + "\n", noiselevel=-1)
+			writemsg_stdout("!!! One of the following masked packages is required to complete your request:\n", noiselevel=-1)
+			have_eapi_mask = show_masked_packages(masked_packages)
+			if have_eapi_mask:
+				writemsg_stdout("\n", noiselevel=-1)
+				msg = ("The current version of portage supports " + \
+					"EAPI '%s'. You must upgrade to a newer version" + \
+					" of portage before EAPI masked packages can" + \
+					" be installed.") % portage.const.EAPI
+				writemsg_stdout("\n".join(textwrap.wrap(msg, 75)), noiselevel=-1)
+			writemsg_stdout("\n", noiselevel=-1)
+			mask_docs = True
+		else:
+			cp_exists = False
+			if not atom.cp.startswith("null/"):
+				for pkg in self._iter_match_pkgs_any(
+					root_config, Atom(atom.cp)):
+					cp_exists = True
+					break
+
+			writemsg_stdout("\nemerge: there are no ebuilds to satisfy "+green(xinfo)+".\n", noiselevel=-1)
+			if isinstance(myparent, AtomArg) and \
+				not cp_exists and \
+				self._frozen_config.myopts.get(
+				"--misspell-suggestions", "y") != "n":
+				cp = myparent.atom.cp.lower()
+				cat, pkg = portage.catsplit(cp)
+				if cat == "null":
+					cat = None
+
+				writemsg_stdout("\nemerge: searching for similar names..."
+					, noiselevel=-1)
+
+				all_cp = set()
+				all_cp.update(vardb.cp_all())
+				if "--usepkgonly" not in self._frozen_config.myopts:
+					all_cp.update(portdb.cp_all())
+				if "--usepkg" in self._frozen_config.myopts:
+					all_cp.update(bindb.cp_all())
+				# discard the requested cp, whose dir contains no ebuilds
+				all_cp.discard(cp)
+
+				orig_cp_map = {}
+				for cp_orig in all_cp:
+					orig_cp_map.setdefault(cp_orig.lower(), []).append(cp_orig)
+				all_cp = set(orig_cp_map)
+
+				if cat:
+					matches = difflib.get_close_matches(cp, all_cp)
+				else:
+					pkg_to_cp = {}
+					for other_cp in list(all_cp):
+						other_pkg = portage.catsplit(other_cp)[1]
+						if other_pkg == pkg:
+							# Check for non-identical package that
+							# differs only by upper/lower case.
+							identical = True
+							for cp_orig in orig_cp_map[other_cp]:
+								if portage.catsplit(cp_orig)[1] != \
+									portage.catsplit(atom.cp)[1]:
+									identical = False
+									break
+							if identical:
+								# discard a case variant of the requested cp (no ebuilds)
+								all_cp.discard(other_cp)
+								continue
+						pkg_to_cp.setdefault(other_pkg, set()).add(other_cp)
+					pkg_matches = difflib.get_close_matches(pkg, pkg_to_cp)
+					matches = []
+					for pkg_match in pkg_matches:
+						matches.extend(pkg_to_cp[pkg_match])
+
+				matches_orig_case = []
+				for cp in matches:
+					matches_orig_case.extend(orig_cp_map[cp])
+				matches = matches_orig_case
+
+				if len(matches) == 1:
+					writemsg_stdout("\nemerge: Maybe you meant " + matches[0] + "?\n"
+						, noiselevel=-1)
+				elif len(matches) > 1:
+					writemsg_stdout(
+						"\nemerge: Maybe you meant any of these: %s?\n" % \
+						(", ".join(matches),), noiselevel=-1)
+				else:
+					# Generally, this would only happen if
+					# all dbapis are empty.
+					writemsg_stdout(" nothing similar found.\n"
+						, noiselevel=-1)
+		msg = []
+		if not isinstance(myparent, AtomArg):
+			# It's redundant to show parent for AtomArg since
+			# it's the same as 'xinfo' displayed above.
+			dep_chain = self._get_dep_chain(myparent, atom)
+			for node, node_type in dep_chain:
+				msg.append('(dependency required by "%s" [%s])' % \
+						(colorize('INFORM', _unicode_decode("%s") % \
+						(node)), node_type))
+
+		if msg:
+			writemsg_stdout("\n".join(msg), noiselevel=-1)
+			writemsg_stdout("\n", noiselevel=-1)
+
+		if mask_docs:
+			show_mask_docs()
+			writemsg_stdout("\n", noiselevel=-1)
+
+	def _iter_match_pkgs_any(self, root_config, atom, onlydeps=False):
+		for db, pkg_type, built, installed, db_keys in \
+			self._dynamic_config._filtered_trees[root_config.root]["dbs"]:
+			for pkg in self._iter_match_pkgs(root_config,
+				pkg_type, atom, onlydeps=onlydeps):
+				yield pkg
+
+	def _iter_match_pkgs(self, root_config, pkg_type, atom, onlydeps=False):
+		"""
+		Iterate over Package instances of pkg_type matching the given atom.
+		This does not check visibility and it also does not match USE for
+		unbuilt ebuilds since USE are lazily calculated after visibility
+		checks (to avoid the expense when possible).
+		"""
+
+		db = root_config.trees[self.pkg_tree_map[pkg_type]].dbapi
+
+		if hasattr(db, "xmatch"):
+			# For portdbapi we match only against the cpv, in order
+			# to bypass unnecessary cache access for things like IUSE
+			# and SLOT. Later, we cache the metadata in a Package
+			# instance, and use that for further matching. This
+			# optimization is especially relevant since
+			# portdbapi.aux_get() does not cache calls that have
+			# myrepo or mytree arguments.
+			cpv_list = db.xmatch("match-all-cpv-only", atom)
+		else:
+			cpv_list = db.match(atom)
+
+		# USE=multislot can make an installed package appear as if
+		# it doesn't satisfy a slot dependency. Rebuilding the ebuild
+		# won't do any good as long as USE=multislot is enabled since
+		# the newly built package still won't have the expected slot.
+		# Therefore, assume that such SLOT dependencies are already
+		# satisfied rather than forcing a rebuild.
+		installed = pkg_type == 'installed'
+		if installed and not cpv_list and atom.slot:
+
+			if "remove" in self._dynamic_config.myparams:
+				# We need to search the portdbapi, which is not in our
+				# normal dbs list, in order to find the real SLOT.
+				portdb = self._frozen_config.trees[root_config.root]["porttree"].dbapi
+				db_keys = list(portdb._aux_cache_keys)
+				dbs = [(portdb, "ebuild", False, False, db_keys)]
+			else:
+				dbs = self._dynamic_config._filtered_trees[root_config.root]["dbs"]
+
+			for cpv in db.match(atom.cp):
+				slot_available = False
+				for other_db, other_type, other_built, \
+					other_installed, other_keys in dbs:
+					try:
+						if atom.slot == \
+							other_db.aux_get(cpv, ["SLOT"])[0]:
+							slot_available = True
+							break
+					except KeyError:
+						pass
+				if not slot_available:
+					continue
+				inst_pkg = self._pkg(cpv, "installed",
+					root_config, installed=installed, myrepo=atom.repo)
+				# Remove the slot from the atom and verify that
+				# the package matches the resulting atom.
+				if portage.match_from_list(
+					atom.without_slot, [inst_pkg]):
+					yield inst_pkg
+					return
+
+		if cpv_list:
+			atom_set = InternalPackageSet(initial_atoms=(atom,),
+				allow_repo=True)
+			if atom.repo is None and hasattr(db, "getRepositories"):
+				repo_list = db.getRepositories()
+			else:
+				repo_list = [atom.repo]
+
+			# descending order
+			cpv_list.reverse()
+			for cpv in cpv_list:
+				for repo in repo_list:
+
+					try:
+						pkg = self._pkg(cpv, pkg_type, root_config,
+							installed=installed, onlydeps=onlydeps, myrepo=repo)
+					except portage.exception.PackageNotFound:
+						pass
+					else:
+						# A cpv can be returned from dbapi.match() as an
+						# old-style virtual match even in cases when the
+						# package does not actually PROVIDE the virtual.
+						# Filter out any such false matches here.
+
+						# Make sure that cpv from the current repo satisfies the atom.
+						# This might not be the case if there are several repos with
+						# the same cpv, but different metadata keys, like SLOT.
+						# Also, for portdbapi, parts of the match that require
+						# metadata access are deferred until we have cached the
+						# metadata in a Package instance.
+						if not atom_set.findAtomForPackage(pkg,
+							modified_use=self._pkg_use_enabled(pkg)):
+							continue
+						yield pkg
+
+	def _select_pkg_highest_available(self, root, atom, onlydeps=False):
+		cache_key = (root, atom, atom.unevaluated_atom, onlydeps)
+		ret = self._dynamic_config._highest_pkg_cache.get(cache_key)
+		if ret is not None:
+			pkg, existing = ret
+			if pkg and not existing:
+				existing = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
+				if existing and existing == pkg:
+					# Update the cache to reflect that the
+					# package has been added to the graph.
+					ret = pkg, pkg
+					self._dynamic_config._highest_pkg_cache[cache_key] = ret
+			return ret
+		ret = self._select_pkg_highest_available_imp(root, atom, onlydeps=onlydeps)
+		self._dynamic_config._highest_pkg_cache[cache_key] = ret
+		pkg, existing = ret
+		if pkg is not None:
+			if self._pkg_visibility_check(pkg) and \
+				not (pkg.installed and pkg.masks):
+				self._dynamic_config._visible_pkgs[pkg.root].cpv_inject(pkg)
+		return ret
+
+	def _want_installed_pkg(self, pkg):
+		"""
+		Given an installed package returned from select_pkg, return
+		True if the user has not explicitly requested for this package
+		to be replaced (typically via an atom on the command line).
+		"""
+		if "selective" not in self._dynamic_config.myparams and \
+			pkg.root == self._frozen_config.target_root:
+			if self._frozen_config.excluded_pkgs.findAtomForPackage(pkg,
+				modified_use=self._pkg_use_enabled(pkg)):
+				return True
+			try:
+				next(self._iter_atoms_for_pkg(pkg))
+			except StopIteration:
+				pass
+			except portage.exception.InvalidDependString:
+				pass
+			else:
+				return False
+		return True
+
+	class _AutounmaskLevel(object):
+		__slots__ = ("allow_use_changes", "allow_unstable_keywords", "allow_license_changes", \
+			"allow_missing_keywords", "allow_unmasks")
+
+		def __init__(self):
+			self.allow_use_changes = False
+			self.allow_license_changes = False
+			self.allow_unstable_keywords = False
+			self.allow_missing_keywords = False
+			self.allow_unmasks = False
+
+	def _autounmask_levels(self):
+		"""
+		Iterate over the different allowed things to unmask.
+
+		1. USE
+		2. USE + ~arch + license
+		3. USE + ~arch + license + missing keywords
+		4. USE + ~arch + license + masks
+		5. USE + ~arch + license + missing keywords + masks
+
+		Some thoughts:
+			* Do least invasive changes first.
+			* Try unmasking alone before unmasking + missing keywords
+				to avoid -9999 versions if possible
+		"""
+
+		if self._dynamic_config._autounmask is not True:
+			return
+
+		autounmask_keep_masks = self._frozen_config.myopts.get("--autounmask-keep-masks", "n") != "n"
+		autounmask_level = self._AutounmaskLevel()
+
+		autounmask_level.allow_use_changes = True
+
+		for only_use_changes in (True, False):
+
+			autounmask_level.allow_unstable_keywords = (not only_use_changes)
+			autounmask_level.allow_license_changes = (not only_use_changes)
+
+			for missing_keyword, unmask in ((False,False), (True, False), (False, True), (True, True)):
+
+				if (only_use_changes or autounmask_keep_masks) and (missing_keyword or unmask):
+					break
+
+				autounmask_level.allow_missing_keywords = missing_keyword
+				autounmask_level.allow_unmasks = unmask
+
+				yield autounmask_level
+
+
+	def _select_pkg_highest_available_imp(self, root, atom, onlydeps=False):
+		pkg, existing = self._wrapped_select_pkg_highest_available_imp(root, atom, onlydeps=onlydeps)
+
+		default_selection = (pkg, existing)
+
+		def reset_pkg(pkg):
+			# Discard (return None) an installed package that the user
+			# has explicitly requested to be replaced, so the
+			# autounmask levels below may pick another candidate.
+			if pkg is not None and \
+				pkg.installed and \
+				not self._want_installed_pkg(pkg):
+				return None
+			return pkg
+
+		if self._dynamic_config._autounmask is True:
+			pkg = reset_pkg(pkg)
+
+			for autounmask_level in self._autounmask_levels():
+				if pkg is not None:
+					break
+
+				pkg, existing = \
+					self._wrapped_select_pkg_highest_available_imp(
+						root, atom, onlydeps=onlydeps,
+						autounmask_level=autounmask_level)
+
+				pkg = reset_pkg(pkg)
+
+			if self._dynamic_config._need_restart:
+				return None, None
+
+		if pkg is None:
+			# This ensures that we can fall back to an installed package
+			# that may have been rejected in the autounmask path above.
+			return default_selection
+
+		return pkg, existing
+
+	def _pkg_visibility_check(self, pkg, autounmask_level=None, trust_graph=True):
+
+		if pkg.visible:
+			return True
+
+		if trust_graph and pkg in self._dynamic_config.digraph:
+			# Sometimes we need to temporarily disable
+			# dynamic_config._autounmask, but for overall
+			# consistency in dependency resolution, in most
+			# cases we want to treat packages in the graph
+			# as though they are visible.
+			return True
+
+		if not self._dynamic_config._autounmask or autounmask_level is None:
+			return False
+
+		pkgsettings = self._frozen_config.pkgsettings[pkg.root]
+		root_config = self._frozen_config.roots[pkg.root]
+		mreasons = _get_masking_status(pkg, pkgsettings, root_config, use=self._pkg_use_enabled(pkg))
+
+		masked_by_unstable_keywords = False
+		masked_by_missing_keywords = False
+		missing_licenses = None
+		masked_by_something_else = False
+		masked_by_p_mask = False
+
+		for reason in mreasons:
+			hint = reason.unmask_hint
+
+			if hint is None:
+				masked_by_something_else = True
+			elif hint.key == "unstable keyword":
+				masked_by_unstable_keywords = True
+				if hint.value == "**":
+					masked_by_missing_keywords = True
+			elif hint.key == "p_mask":
+				masked_by_p_mask = True
+			elif hint.key == "license":
+				missing_licenses = hint.value
+			else:
+				masked_by_something_else = True
+
+		if masked_by_something_else:
+			return False
+
+		if pkg in self._dynamic_config._needed_unstable_keywords:
+			#If the package is already keyworded, remove the mask.
+			masked_by_unstable_keywords = False
+			masked_by_missing_keywords = False
+
+		if pkg in self._dynamic_config._needed_p_mask_changes:
+			#If a p_mask change was already recorded, remove the mask.
+			masked_by_p_mask = False
+
+		if missing_licenses:
+			#If the needed licenses are already unmasked, remove the mask.
+			missing_licenses.difference_update(self._dynamic_config._needed_license_changes.get(pkg, set()))
+
+		if not (masked_by_unstable_keywords or masked_by_p_mask or missing_licenses):
+			#Package has already been unmasked.
+			return True
+
+		if (masked_by_unstable_keywords and not autounmask_level.allow_unstable_keywords) or \
+			(masked_by_missing_keywords and not autounmask_level.allow_missing_keywords) or \
+			(masked_by_p_mask and not autounmask_level.allow_unmasks) or \
+			(missing_licenses and not autounmask_level.allow_license_changes):
+			#We are not allowed to do the needed changes.
+			return False
+
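+		# Each change accepted below is recorded twice: in
+		# _dynamic_config for this run, and in
+		# _backtrack_infos["config"] so that a later backtracking run
+		# can replay the same decision.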
+		if masked_by_unstable_keywords:
+			self._dynamic_config._needed_unstable_keywords.add(pkg)
+			backtrack_infos = self._dynamic_config._backtrack_infos
+			backtrack_infos.setdefault("config", {})
+			backtrack_infos["config"].setdefault("needed_unstable_keywords", set())
+			backtrack_infos["config"]["needed_unstable_keywords"].add(pkg)
+
+		if masked_by_p_mask:
+			self._dynamic_config._needed_p_mask_changes.add(pkg)
+			backtrack_infos = self._dynamic_config._backtrack_infos
+			backtrack_infos.setdefault("config", {})
+			backtrack_infos["config"].setdefault("needed_p_mask_changes", set())
+			backtrack_infos["config"]["needed_p_mask_changes"].add(pkg)
+
+		if missing_licenses:
+			self._dynamic_config._needed_license_changes.setdefault(pkg, set()).update(missing_licenses)
+			backtrack_infos = self._dynamic_config._backtrack_infos
+			backtrack_infos.setdefault("config", {})
+			backtrack_infos["config"].setdefault("needed_license_changes", set())
+			backtrack_infos["config"]["needed_license_changes"].add((pkg, frozenset(missing_licenses)))
+
+		return True
+
+	def _pkg_use_enabled(self, pkg, target_use=None):
+		"""
+		If target_use is None, returns pkg.use.enabled + changes in _needed_use_config_changes.
+		If target_use is given, the needed changes are computed to make the package usable.
+		Example: target_use = { "foo": True, "bar": False }
+		The flags in target_use must be in the pkg's IUSE.
+		"""
+		if pkg.built:
+			return pkg.use.enabled
+		needed_use_config_change = self._dynamic_config._needed_use_config_changes.get(pkg)
+
+		if target_use is None:
+			if needed_use_config_change is None:
+				return pkg.use.enabled
+			else:
+				return needed_use_config_change[0]
+
+		if needed_use_config_change is not None:
+			old_use = needed_use_config_change[0]
+			new_use = set()
+			old_changes = needed_use_config_change[1]
+			new_changes = old_changes.copy()
+		else:
+			old_use = pkg.use.enabled
+			new_use = set()
+			old_changes = {}
+			new_changes = {}
+
+		for flag, state in target_use.items():
+			if state:
+				if flag not in old_use:
+					if new_changes.get(flag) == False:
+						return old_use
+					new_changes[flag] = True
+				new_use.add(flag)
+			else:
+				if flag in old_use:
+					if new_changes.get(flag) == True:
+						return old_use
+					new_changes[flag] = False
+		new_use.update(old_use.difference(target_use))
+
+		def want_restart_for_use_change(pkg, new_use):
+			if pkg not in self._dynamic_config.digraph.nodes:
+				return False
+
+			for key in "DEPEND", "RDEPEND", "PDEPEND", "LICENSE":
+				dep = pkg.metadata[key]
+				old_val = set(portage.dep.use_reduce(dep, pkg.use.enabled, is_valid_flag=pkg.iuse.is_valid_flag, flat=True))
+				new_val = set(portage.dep.use_reduce(dep, new_use, is_valid_flag=pkg.iuse.is_valid_flag, flat=True))
+
+				if old_val != new_val:
+					return True
+
+			parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
+			if not parent_atoms:
+				return False
+
+			new_use, changes = self._dynamic_config._needed_use_config_changes.get(pkg)
+			for ppkg, atom in parent_atoms:
+				if not atom.use or \
+					not atom.use.required.intersection(changes):
+					continue
+				else:
+					return True
+
+			return False
+
+		if new_changes != old_changes:
+			#Don't do the change if it violates REQUIRED_USE.
+			required_use = pkg.metadata.get("REQUIRED_USE")
+			if required_use and check_required_use(required_use, old_use, pkg.iuse.is_valid_flag) and \
+				not check_required_use(required_use, new_use, pkg.iuse.is_valid_flag):
+				return old_use
+
+			if pkg.use.mask.intersection(new_changes) or \
+				pkg.use.force.intersection(new_changes):
+				return old_use
+
+			self._dynamic_config._needed_use_config_changes[pkg] = (new_use, new_changes)
+			backtrack_infos = self._dynamic_config._backtrack_infos
+			backtrack_infos.setdefault("config", {})
+			backtrack_infos["config"].setdefault("needed_use_config_changes", [])
+			backtrack_infos["config"]["needed_use_config_changes"].append((pkg, (new_use, new_changes)))
+			if want_restart_for_use_change(pkg, new_use):
+				self._dynamic_config._need_restart = True
+		return new_use
+
+	def _wrapped_select_pkg_highest_available_imp(self, root, atom, onlydeps=False, autounmask_level=None):
+		root_config = self._frozen_config.roots[root]
+		pkgsettings = self._frozen_config.pkgsettings[root]
+		dbs = self._dynamic_config._filtered_trees[root]["dbs"]
+		vardb = self._frozen_config.roots[root].trees["vartree"].dbapi
+		# List of acceptable packages, ordered by type preference.
+		matched_packages = []
+		matched_pkgs_ignore_use = []
+		highest_version = None
+		if not isinstance(atom, portage.dep.Atom):
+			atom = portage.dep.Atom(atom)
+		atom_cp = atom.cp
+		have_new_virt = atom_cp.startswith("virtual/") and \
+			self._have_new_virt(root, atom_cp)
+		atom_set = InternalPackageSet(initial_atoms=(atom,), allow_repo=True)
+		existing_node = None
+		myeb = None
+		rebuilt_binaries = 'rebuilt_binaries' in self._dynamic_config.myparams
+		usepkg = "--usepkg" in self._frozen_config.myopts
+		usepkgonly = "--usepkgonly" in self._frozen_config.myopts
+		empty = "empty" in self._dynamic_config.myparams
+		selective = "selective" in self._dynamic_config.myparams
+		reinstall = False
+		avoid_update = "--update" not in self._frozen_config.myopts
+		dont_miss_updates = "--update" in self._frozen_config.myopts
+		use_ebuild_visibility = self._frozen_config.myopts.get(
+			'--use-ebuild-visibility', 'n') != 'n'
+		reinstall_atoms = self._frozen_config.reinstall_atoms
+		usepkg_exclude = self._frozen_config.usepkg_exclude
+		useoldpkg_atoms = self._frozen_config.useoldpkg_atoms
+		matched_oldpkg = []
+		# Behavior of the "selective" parameter depends on
+		# whether or not a package matches an argument atom.
+		# If an installed package provides an old-style
+		# virtual that is no longer provided by an available
+		# package, the installed package may match an argument
+		# atom even though none of the available packages do.
+		# Therefore, "selective" logic does not consider
+		# whether or not an installed package matches an
+		# argument atom. It only considers whether or not
+		# available packages match argument atoms, which is
+		# represented by the found_available_arg flag.
+		found_available_arg = False
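+		# (e.g. an installed package that PROVIDEs an old-style
+		# virtual such as virtual/jre can match an argument atom even
+		# when no available package does)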
+		packages_with_invalid_use_config = []
+		for find_existing_node in True, False:
+			if existing_node:
+				break
+			for db, pkg_type, built, installed, db_keys in dbs:
+				if existing_node:
+					break
+				if installed and not find_existing_node:
+					want_reinstall = reinstall or empty or \
+						(found_available_arg and not selective)
+					if want_reinstall and matched_packages:
+						continue
+
+				# Ignore USE deps for the initial match since we want to
+				# ensure that updates aren't missed solely due to the user's
+				# USE configuration.
+				for pkg in self._iter_match_pkgs(root_config, pkg_type, atom.without_use, 
+					onlydeps=onlydeps):
+					if pkg.cp != atom_cp and have_new_virt:
+						# pull in a new-style virtual instead
+						continue
+					if pkg in self._dynamic_config._runtime_pkg_mask:
+						# The package has been masked by the backtracking logic
+						continue
+					root_slot = (pkg.root, pkg.slot_atom)
+					if pkg.built and root_slot in self._rebuild.rebuild_list:
+						continue
+					if (pkg.installed and
+						root_slot in self._rebuild.reinstall_list):
+						continue
+
+					if not pkg.installed and \
+						self._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
+							modified_use=self._pkg_use_enabled(pkg)):
+						continue
+
+					if built and not installed and usepkg_exclude.findAtomForPackage(pkg, \
+						modified_use=self._pkg_use_enabled(pkg)):
+						break
+
+					useoldpkg = useoldpkg_atoms.findAtomForPackage(pkg, \
+						modified_use=self._pkg_use_enabled(pkg))
+
+					if packages_with_invalid_use_config and (not built or not useoldpkg) and \
+						(not pkg.installed or dont_miss_updates):
+						# Check if a higher version was rejected due to user
+						# USE configuration. The packages_with_invalid_use_config
+						# list only contains unbuilt ebuilds since USE can't
+						# be changed for built packages.
+						higher_version_rejected = False
+						repo_priority = pkg.repo_priority
+						for rejected in packages_with_invalid_use_config:
+							if rejected.cp != pkg.cp:
+								continue
+							if rejected > pkg:
+								higher_version_rejected = True
+								break
+							if portage.dep.cpvequal(rejected.cpv, pkg.cpv):
+								# If version is identical then compare
+								# repo priority (see bug #350254).
+								rej_repo_priority = rejected.repo_priority
+								if rej_repo_priority is not None and \
+									(repo_priority is None or
+									rej_repo_priority > repo_priority):
+									higher_version_rejected = True
+									break
+						if higher_version_rejected:
+							continue
+
+					cpv = pkg.cpv
+					reinstall_for_flags = None
+
+					if not pkg.installed or \
+						(matched_packages and not avoid_update):
+						# Only enforce visibility on installed packages
+						# if there is at least one other visible package
+						# available. By filtering installed masked packages
+						# here, packages that have been masked since they
+						# were installed can be automatically downgraded
+						# to an unmasked version. NOTE: This code needs to
+						# be consistent with masking behavior inside
+						# _dep_check_composite_db, in order to prevent
+						# incorrect choices in || deps like bug #351828.
+
+						if not self._pkg_visibility_check(pkg, autounmask_level):
+							continue
+
+						# Enable upgrade or downgrade to a version
+						# with visible KEYWORDS when the installed
+						# version is masked by KEYWORDS, but never
+						# reinstall the same exact version only due
+						# to a KEYWORDS mask. See bug #252167.
+
+						if pkg.type_name != "ebuild" and matched_packages:
+							# Don't re-install a binary package that is
+							# identical to the currently installed package
+							# (see bug #354441).
+							identical_binary = False
+							if usepkg and pkg.installed:
+								for selected_pkg in matched_packages:
+									if selected_pkg.type_name == "binary" and \
+										selected_pkg.cpv == pkg.cpv and \
+										selected_pkg.metadata.get('BUILD_TIME') == \
+										pkg.metadata.get('BUILD_TIME'):
+										identical_binary = True
+										break
+
+							if not identical_binary:
+								# If the ebuild no longer exists or its
+								# keywords have been dropped, reject built
+								# instances (installed or binary).
+								# If --usepkgonly is enabled, assume that
+								# the ebuild status should be ignored.
+								if not use_ebuild_visibility and (usepkgonly or useoldpkg):
+									if pkg.installed and pkg.masks:
+										continue
+								else:
+									try:
+										pkg_eb = self._pkg(
+											pkg.cpv, "ebuild", root_config, myrepo=pkg.repo)
+									except portage.exception.PackageNotFound:
+										pkg_eb_visible = False
+										for pkg_eb in self._iter_match_pkgs(pkg.root_config,
+											"ebuild", Atom("=%s" % (pkg.cpv,))):
+											if self._pkg_visibility_check(pkg_eb, autounmask_level):
+												pkg_eb_visible = True
+												break
+										if not pkg_eb_visible:
+											continue
+									else:
+										if not self._pkg_visibility_check(pkg_eb, autounmask_level):
+											continue
+
+					# Calculation of USE for unbuilt ebuilds is relatively
+					# expensive, so it is only performed lazily, after the
+					# above visibility checks are complete.
+
+					myarg = None
+					if root == self._frozen_config.target_root:
+						try:
+							myarg = next(self._iter_atoms_for_pkg(pkg))
+						except StopIteration:
+							pass
+						except portage.exception.InvalidDependString:
+							if not installed:
+								# masked by corruption
+								continue
+					if not installed and myarg:
+						found_available_arg = True
+
+					if atom.unevaluated_atom.use:
+						#Make sure we don't miss a 'missing IUSE'.
+						if pkg.iuse.get_missing_iuse(atom.unevaluated_atom.use.required):
+							# Don't add this to packages_with_invalid_use_config
+							# since IUSE cannot be adjusted by the user.
+							continue
+
+					if atom.use:
+
+						matched_pkgs_ignore_use.append(pkg)
+						if autounmask_level and autounmask_level.allow_use_changes and not pkg.built:
+							target_use = {}
+							for flag in atom.use.enabled:
+								target_use[flag] = True
+							for flag in atom.use.disabled:
+								target_use[flag] = False
+							use = self._pkg_use_enabled(pkg, target_use)
+						else:
+							use = self._pkg_use_enabled(pkg)
+
+						use_match = True
+						can_adjust_use = not pkg.built
+						missing_enabled = atom.use.missing_enabled.difference(pkg.iuse.all)
+						missing_disabled = atom.use.missing_disabled.difference(pkg.iuse.all)
+
+						if atom.use.enabled:
+							if atom.use.enabled.intersection(missing_disabled):
+								use_match = False
+								can_adjust_use = False
+							need_enabled = atom.use.enabled.difference(use)
+							if need_enabled:
+								need_enabled = need_enabled.difference(missing_enabled)
+								if need_enabled:
+									use_match = False
+									if can_adjust_use:
+										if pkg.use.mask.intersection(need_enabled):
+											can_adjust_use = False
+
+						if atom.use.disabled:
+							if atom.use.disabled.intersection(missing_enabled):
+								use_match = False
+								can_adjust_use = False
+							need_disabled = atom.use.disabled.intersection(use)
+							if need_disabled:
+								need_disabled = need_disabled.difference(missing_disabled)
+								if need_disabled:
+									use_match = False
+									if can_adjust_use:
+										if pkg.use.force.difference(
+											pkg.use.mask).intersection(need_disabled):
+											can_adjust_use = False
+
+						if not use_match:
+							if can_adjust_use:
+								# Above we must ensure that this package has
+								# absolutely no use.force, use.mask, or IUSE
+								# issues that the user typically can't make
+								# adjustments to solve (see bug #345979).
+								# FIXME: Conditional USE deps complicate
+								# issues. This code currently excludes cases
+								# in which the user can adjust the parent
+								# package's USE in order to satisfy the dep.
+								packages_with_invalid_use_config.append(pkg)
+							continue
+
+					if pkg.cp == atom_cp:
+						if highest_version is None:
+							highest_version = pkg
+						elif pkg > highest_version:
+							highest_version = pkg
+					# At this point, we've found the highest visible
+					# match from the current repo. Any lower versions
+					# from this repo are ignored, so the loop will
+					# always end with a break statement below this
+					# point.
+					if find_existing_node:
+						e_pkg = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
+						if not e_pkg:
+							break
+
+						# Use PackageSet.findAtomForPackage()
+						# for PROVIDE support.
+						if atom_set.findAtomForPackage(e_pkg, modified_use=self._pkg_use_enabled(e_pkg)):
+							if highest_version and \
+								e_pkg.cp == atom_cp and \
+								e_pkg < highest_version and \
+								e_pkg.slot_atom != highest_version.slot_atom:
+								# There is a higher version available in a
+								# different slot, so this existing node is
+								# irrelevant.
+								pass
+							else:
+								matched_packages.append(e_pkg)
+								existing_node = e_pkg
+						break
+					# Compare built package to current config and
+					# reject the built package if necessary.
+					if built and not useoldpkg and (not installed or matched_pkgs_ignore_use) and \
+						("--newuse" in self._frozen_config.myopts or \
+						"--reinstall" in self._frozen_config.myopts or \
+						(not installed and self._dynamic_config.myparams.get(
+						"binpkg_respect_use") in ("y", "auto"))):
+						iuses = pkg.iuse.all
+						old_use = self._pkg_use_enabled(pkg)
+						if myeb:
+							pkgsettings.setcpv(myeb)
+						else:
+							pkgsettings.setcpv(pkg)
+						now_use = pkgsettings["PORTAGE_USE"].split()
+						forced_flags = set()
+						forced_flags.update(pkgsettings.useforce)
+						forced_flags.update(pkgsettings.usemask)
+						cur_iuse = iuses
+						if myeb and not usepkgonly and not useoldpkg:
+							cur_iuse = myeb.iuse.all
+						reinstall_for_flags = self._reinstall_for_flags(pkg,
+							forced_flags, old_use, iuses, now_use, cur_iuse)
+						if reinstall_for_flags:
+							if not pkg.installed:
+								self._dynamic_config.ignored_binaries.setdefault(pkg, set()).update(reinstall_for_flags)
+							break
+					# Compare current config to installed package
+					# and do not reinstall if possible.
+					if not installed and not useoldpkg and \
+						("--newuse" in self._frozen_config.myopts or \
+						"--reinstall" in self._frozen_config.myopts) and \
+						cpv in vardb.match(atom):
+						forced_flags = set()
+						forced_flags.update(pkg.use.force)
+						forced_flags.update(pkg.use.mask)
+						inst_pkg = vardb.match_pkgs('=' + pkg.cpv)[0]
+						old_use = inst_pkg.use.enabled
+						old_iuse = inst_pkg.iuse.all
+						cur_use = self._pkg_use_enabled(pkg)
+						cur_iuse = pkg.iuse.all
+						reinstall_for_flags = \
+							self._reinstall_for_flags(pkg,
+							forced_flags, old_use, old_iuse,
+							cur_use, cur_iuse)
+						if reinstall_for_flags:
+							reinstall = True
+					if reinstall_atoms.findAtomForPackage(pkg, \
+							modified_use=self._pkg_use_enabled(pkg)):
+						reinstall = True
+					if not built:
+						myeb = pkg
+					elif useoldpkg:
+						matched_oldpkg.append(pkg)
+					matched_packages.append(pkg)
+					if reinstall_for_flags:
+						self._dynamic_config._reinstall_nodes[pkg] = \
+							reinstall_for_flags
+					break
+
+		if not matched_packages:
+			return None, None
+
+		if "--debug" in self._frozen_config.myopts:
+			for pkg in matched_packages:
+				portage.writemsg("%s %s%s%s\n" % \
+					((pkg.type_name + ":").rjust(10),
+					pkg.cpv, _repo_separator, pkg.repo), noiselevel=-1)
+
+		# Filter out any old-style virtual matches if they are
+		# mixed with new-style virtual matches.
+		cp = atom.cp
+		if len(matched_packages) > 1 and \
+			"virtual" == portage.catsplit(cp)[0]:
+			for pkg in matched_packages:
+				if pkg.cp != cp:
+					continue
+				# Got a new-style virtual, so filter
+				# out any old-style virtuals.
+				matched_packages = [pkg for pkg in matched_packages \
+					if pkg.cp == cp]
+				break
+
+		if existing_node is not None and \
+			existing_node in matched_packages:
+			return existing_node, existing_node
+
+		if len(matched_packages) > 1:
+			if rebuilt_binaries:
+				inst_pkg = None
+				built_pkg = None
+				unbuilt_pkg = None
+				for pkg in matched_packages:
+					if pkg.installed:
+						inst_pkg = pkg
+					elif pkg.built:
+						built_pkg = pkg
+					else:
+						if unbuilt_pkg is None or pkg > unbuilt_pkg:
+							unbuilt_pkg = pkg
+				if built_pkg is not None and inst_pkg is not None:
+					# Only reinstall if binary package BUILD_TIME is
+					# non-empty, in order to avoid cases like
+					# bug #306659 where BUILD_TIME fields are missing
+					# in local and/or remote Packages file.
+					try:
+						built_timestamp = int(built_pkg.metadata['BUILD_TIME'])
+					except (KeyError, ValueError):
+						built_timestamp = 0
+
+					try:
+						installed_timestamp = int(inst_pkg.metadata['BUILD_TIME'])
+					except (KeyError, ValueError):
+						installed_timestamp = 0
+
+					if unbuilt_pkg is not None and unbuilt_pkg > built_pkg:
+						pass
+					elif "--rebuilt-binaries-timestamp" in self._frozen_config.myopts:
+						minimal_timestamp = self._frozen_config.myopts["--rebuilt-binaries-timestamp"]
+						if built_timestamp and \
+							built_timestamp > installed_timestamp and \
+							built_timestamp >= minimal_timestamp:
+							return built_pkg, existing_node
+					else:
+						#Don't care if the binary has an older BUILD_TIME than the installed
+						#package. This is for closely tracking a binhost.
+						#Use --rebuilt-binaries-timestamp 0 if you want only newer binaries
+						#pulled in here.
+						if built_timestamp and \
+							built_timestamp != installed_timestamp:
+							return built_pkg, existing_node
+
+			for pkg in matched_packages:
+				if pkg.installed and pkg.invalid:
+					matched_packages = [x for x in \
+						matched_packages if x is not pkg]
+
+			if avoid_update:
+				for pkg in matched_packages:
+					if pkg.installed and self._pkg_visibility_check(pkg, autounmask_level):
+						return pkg, existing_node
+
+			visible_matches = []
+			if matched_oldpkg:
+				visible_matches = [pkg.cpv for pkg in matched_oldpkg \
+					if self._pkg_visibility_check(pkg, autounmask_level)]
+			if not visible_matches:
+				visible_matches = [pkg.cpv for pkg in matched_packages \
+					if self._pkg_visibility_check(pkg, autounmask_level)]
+			if visible_matches:
+				bestmatch = portage.best(visible_matches)
+			else:
+				# all are masked, so ignore visibility
+				bestmatch = portage.best([pkg.cpv for pkg in matched_packages])
+			matched_packages = [pkg for pkg in matched_packages \
+				if portage.dep.cpvequal(pkg.cpv, bestmatch)]
+
+		# ordered by type preference ("ebuild" type is the last resort)
+		return matched_packages[-1], existing_node
+
+	def _select_pkg_from_graph(self, root, atom, onlydeps=False):
+		"""
+		Select packages that have already been added to the graph or
+		those that are installed and have not been scheduled for
+		replacement.
+		"""
+		graph_db = self._dynamic_config._graph_trees[root]["porttree"].dbapi
+		matches = graph_db.match_pkgs(atom)
+		if not matches:
+			return None, None
+		pkg = matches[-1] # highest match
+		in_graph = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
+		return pkg, in_graph
+
+	def _select_pkg_from_installed(self, root, atom, onlydeps=False):
+		"""
+		Select packages that are installed.
+		"""
+		matches = list(self._iter_match_pkgs(self._frozen_config.roots[root],
+			"installed", atom))
+		if not matches:
+			return None, None
+		if len(matches) > 1:
+			matches.reverse() # ascending order
+			unmasked = [pkg for pkg in matches if \
+				self._pkg_visibility_check(pkg)]
+			if unmasked:
+				if len(unmasked) == 1:
+					matches = unmasked
+				else:
+					# Account for packages with masks (like KEYWORDS masks)
+					# that are usually ignored in visibility checks for
+					# installed packages, in order to handle cases like
+					# bug #350285.
+					unmasked = [pkg for pkg in matches if not pkg.masks]
+					if unmasked:
+						matches = unmasked
+		pkg = matches[-1] # highest match
+		in_graph = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
+		return pkg, in_graph
+
+	def _complete_graph(self, required_sets=None):
+		"""
+		Add any deep dependencies of required sets (args, system, world) that
+		have not been pulled into the graph yet. This ensures that the graph
+		is consistent such that initially satisfied deep dependencies are not
+		broken in the new graph. Initially unsatisfied dependencies are
+		irrelevant since we only want to avoid breaking dependencies that are
+		initially satisfied.
+
+		Since this method can consume enough time to disturb users, it is
+		currently only enabled by the --complete-graph option.
+
+		@param required_sets: contains required sets (currently only used
+			for depclean and prune removal operations)
+		@type required_sets: dict
+		"""
+		if "--buildpkgonly" in self._frozen_config.myopts or \
+			"recurse" not in self._dynamic_config.myparams:
+			return 1
+
+		if "complete" not in self._dynamic_config.myparams and \
+			self._dynamic_config.myparams.get("complete_if_new_ver", "y") == "y":
+			# Enable complete mode if an installed package version will change.
+			version_change = False
+			for node in self._dynamic_config.digraph:
+				if not isinstance(node, Package) or \
+					node.operation != "merge":
+					continue
+				vardb = self._frozen_config.roots[
+					node.root].trees["vartree"].dbapi
+				inst_pkg = vardb.match_pkgs(node.slot_atom)
+				if inst_pkg and (inst_pkg[0] > node or inst_pkg[0] < node):
+					version_change = True
+					break
+
+			if version_change:
+				self._dynamic_config.myparams["complete"] = True
+
+		if "complete" not in self._dynamic_config.myparams:
+			return 1
+
+		self._load_vdb()
+
+		# Put the depgraph into a mode that causes it to only
+		# select packages that have already been added to the
+		# graph or those that are installed and have not been
+		# scheduled for replacement. Also, toggle the "deep"
+		# parameter so that all dependencies are traversed and
+		# accounted for.
+		self._select_atoms = self._select_atoms_from_graph
+		if "remove" in self._dynamic_config.myparams:
+			self._select_package = self._select_pkg_from_installed
+		else:
+			self._select_package = self._select_pkg_from_graph
+			self._dynamic_config._traverse_ignored_deps = True
+		already_deep = self._dynamic_config.myparams.get("deep") is True
+		if not already_deep:
+			self._dynamic_config.myparams["deep"] = True
+
+		# Invalidate the package selection cache, since
+		# _select_package has just changed implementations.
+		for trees in self._dynamic_config._filtered_trees.values():
+			trees["porttree"].dbapi._clear_cache()
+
+		args = self._dynamic_config._initial_arg_list[:]
+		for root in self._frozen_config.roots:
+			if root != self._frozen_config.target_root and \
+				("remove" in self._dynamic_config.myparams or
+				self._frozen_config.myopts.get("--root-deps") is not None):
+				# Only pull in deps for the relevant root.
+				continue
+			depgraph_sets = self._dynamic_config.sets[root]
+			required_set_names = self._frozen_config._required_set_names.copy()
+			remaining_args = required_set_names.copy()
+			if required_sets is None or root not in required_sets:
+				pass
+			else:
+				# Removal actions may override sets with temporary
+				# replacements that have had atoms removed in order
+				# to implement --deselect behavior.
+				required_set_names = set(required_sets[root])
+				depgraph_sets.sets.clear()
+				depgraph_sets.sets.update(required_sets[root])
+			if "remove" not in self._dynamic_config.myparams and \
+				root == self._frozen_config.target_root and \
+				already_deep:
+				remaining_args.difference_update(depgraph_sets.sets)
+			if not remaining_args and \
+				not self._dynamic_config._ignored_deps and \
+				not self._dynamic_config._dep_stack:
+				continue
+			root_config = self._frozen_config.roots[root]
+			for s in required_set_names:
+				pset = depgraph_sets.sets.get(s)
+				if pset is None:
+					pset = root_config.sets[s]
+				atom = SETPREFIX + s
+				args.append(SetArg(arg=atom, pset=pset,
+					root_config=root_config))
+
+		self._set_args(args)
+		for arg in self._expand_set_args(args, add_to_digraph=True):
+			for atom in arg.pset.getAtoms():
+				self._dynamic_config._dep_stack.append(
+					Dependency(atom=atom, root=arg.root_config.root,
+						parent=arg))
+
+		if True:
+			if self._dynamic_config._ignored_deps:
+				self._dynamic_config._dep_stack.extend(self._dynamic_config._ignored_deps)
+				self._dynamic_config._ignored_deps = []
+			if not self._create_graph(allow_unsatisfied=True):
+				return 0
+			# Check the unsatisfied deps to see if any initially satisfied deps
+			# will become unsatisfied due to an upgrade. Initially unsatisfied
+			# deps are irrelevant since we only want to avoid breaking deps
+			# that are initially satisfied.
+			while self._dynamic_config._unsatisfied_deps:
+				dep = self._dynamic_config._unsatisfied_deps.pop()
+				vardb = self._frozen_config.roots[
+					dep.root].trees["vartree"].dbapi
+				matches = vardb.match_pkgs(dep.atom)
+				if not matches:
+					self._dynamic_config._initially_unsatisfied_deps.append(dep)
+					continue
+				# A scheduled installation broke a deep dependency.
+				# Add the installed package to the graph so that it
+				# will be appropriately reported as a slot collision
+				# (possibly solvable via backtracking).
+				pkg = matches[-1] # highest match
+				if not self._add_pkg(pkg, dep):
+					return 0
+				if not self._create_graph(allow_unsatisfied=True):
+					return 0
+		return 1
+
+	def _pkg(self, cpv, type_name, root_config, installed=False,
+		onlydeps=False, myrepo=None):
+		"""
+		Get a package instance from the cache, or create a new
+		one if necessary. Raises PackageNotFound from aux_get if it
+		fails for some reason (package does not exist or is
+		corrupt).
+		"""
+
+		# Ensure that we use the specially optimized RootConfig instance
+		# that refers to FakeVartree instead of the real vartree.
+		root_config = self._frozen_config.roots[root_config.root]
+		pkg = self._frozen_config._pkg_cache.get(
+			Package._gen_hash_key(cpv=cpv, type_name=type_name,
+			repo_name=myrepo, root_config=root_config,
+			installed=installed, onlydeps=onlydeps))
+		if pkg is None and onlydeps and not installed:
+			# Maybe it already got pulled in as a "merge" node.
+			pkg = self._dynamic_config.mydbapi[root_config.root].get(
+				Package._gen_hash_key(cpv=cpv, type_name=type_name,
+				repo_name=myrepo, root_config=root_config,
+				installed=installed, onlydeps=False))
+
+		if pkg is None:
+			tree_type = self.pkg_tree_map[type_name]
+			db = root_config.trees[tree_type].dbapi
+			db_keys = list(self._frozen_config._trees_orig[root_config.root][
+				tree_type].dbapi._aux_cache_keys)
+
+			try:
+				metadata = zip(db_keys, db.aux_get(cpv, db_keys, myrepo=myrepo))
+			except KeyError:
+				raise portage.exception.PackageNotFound(cpv)
+
+			pkg = Package(built=(type_name != "ebuild"), cpv=cpv,
+				installed=installed, metadata=metadata, onlydeps=onlydeps,
+				root_config=root_config, type_name=type_name)
+
+			self._frozen_config._pkg_cache[pkg] = pkg
+
+			if not self._pkg_visibility_check(pkg) and \
+				'LICENSE' in pkg.masks and len(pkg.masks) == 1:
+				slot_key = (pkg.root, pkg.slot_atom)
+				other_pkg = self._frozen_config._highest_license_masked.get(slot_key)
+				if other_pkg is None or pkg > other_pkg:
+					self._frozen_config._highest_license_masked[slot_key] = pkg
+
+		return pkg
+
+	def _validate_blockers(self):
+		"""Remove any blockers from the digraph that do not match any of the
+		packages within the graph.  If necessary, create hard deps to ensure
+		correct merge order such that mutually blocking packages are never
+		installed simultaneously. Also add runtime blockers from all installed
+		packages if any of them haven't been added already (bug 128809)."""
+
+		if "--buildpkgonly" in self._frozen_config.myopts or \
+			"--nodeps" in self._frozen_config.myopts:
+			return True
+
+		if True:
+			# Pull in blockers from all installed packages that haven't already
+			# been pulled into the depgraph, in order to ensure that they are
+			# respected (bug 128809). Due to the performance penalty that is
+			# incurred by all the additional dep_check calls that are required,
+			# blockers returned from dep_check are cached on disk by the
+			# BlockerCache class.
+
+			# For installed packages, always ignore blockers from DEPEND since
+			# only runtime dependencies should be relevant for packages that
+			# are already built.
+			dep_keys = ["RDEPEND", "PDEPEND"]
+			for myroot in self._frozen_config.trees:
+
+				if self._frozen_config.myopts.get("--root-deps") is not None and \
+					myroot != self._frozen_config.target_root:
+					continue
+
+				vardb = self._frozen_config.trees[myroot]["vartree"].dbapi
+				pkgsettings = self._frozen_config.pkgsettings[myroot]
+				root_config = self._frozen_config.roots[myroot]
+				final_db = self._dynamic_config.mydbapi[myroot]
+
+				blocker_cache = BlockerCache(myroot, vardb)
+				stale_cache = set(blocker_cache)
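+				# Entries still in stale_cache after the vardb walk
+				# below belong to packages that are no longer
+				# installed; the COUNTER comparison further down
+				# invalidates any entry that survived a reinstall.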
+				for pkg in vardb:
+					cpv = pkg.cpv
+					stale_cache.discard(cpv)
+					pkg_in_graph = self._dynamic_config.digraph.contains(pkg)
+					pkg_deps_added = \
+						pkg in self._dynamic_config._traversed_pkg_deps
+
+					# Check for masked installed packages. Only warn about
+					# packages that are in the graph in order to avoid warning
+					# about those that will be automatically uninstalled during
+					# the merge process or by --depclean. Always warn about
+					# packages masked by license, since the user likely wants
+					# to adjust ACCEPT_LICENSE.
+					if pkg in final_db:
+						if not self._pkg_visibility_check(pkg,
+							trust_graph=False) and \
+							(pkg_in_graph or 'LICENSE' in pkg.masks):
+							self._dynamic_config._masked_installed.add(pkg)
+						else:
+							self._check_masks(pkg)
+
+					blocker_atoms = None
+					blockers = None
+					if pkg_deps_added:
+						blockers = []
+						try:
+							blockers.extend(
+								self._dynamic_config._blocker_parents.child_nodes(pkg))
+						except KeyError:
+							pass
+						try:
+							blockers.extend(
+								self._dynamic_config._irrelevant_blockers.child_nodes(pkg))
+						except KeyError:
+							pass
+						if blockers:
+							# Select just the runtime blockers.
+							blockers = [blocker for blocker in blockers \
+								if blocker.priority.runtime or \
+								blocker.priority.runtime_post]
+					if blockers is not None:
+						blockers = set(blocker.atom for blocker in blockers)
+
+					# If this node has any blockers, create a "nomerge"
+					# node for it so that they can be enforced.
+					self._spinner_update()
+					blocker_data = blocker_cache.get(cpv)
+					if blocker_data is not None and \
+						blocker_data.counter != long(pkg.metadata["COUNTER"]):
+						blocker_data = None
+
+					# If blocker data from the graph is available, use
+					# it to validate the cache and update the cache if
+					# it seems invalid.
+					if blocker_data is not None and \
+						blockers is not None:
+						if not blockers.symmetric_difference(
+							blocker_data.atoms):
+							continue
+						blocker_data = None
+
+					if blocker_data is None and \
+						blockers is not None:
+						# Re-use the blockers from the graph.
+						blocker_atoms = sorted(blockers)
+						counter = long(pkg.metadata["COUNTER"])
+						blocker_data = \
+							blocker_cache.BlockerData(counter, blocker_atoms)
+						blocker_cache[pkg.cpv] = blocker_data
+						continue
+
+					if blocker_data:
+						blocker_atoms = [Atom(atom) for atom in blocker_data.atoms]
+					else:
+						# Use aux_get() to trigger FakeVartree global
+						# updates on *DEPEND when appropriate.
+						depstr = " ".join(vardb.aux_get(pkg.cpv, dep_keys))
+						# It is crucial to pass in final_db here in order to
+						# optimize dep_check calls by eliminating atoms via
+						# dep_wordreduce and dep_eval calls.
+						try:
+							success, atoms = portage.dep_check(depstr,
+								final_db, pkgsettings, myuse=self._pkg_use_enabled(pkg),
+								trees=self._dynamic_config._graph_trees, myroot=myroot)
+						except SystemExit:
+							raise
+						except Exception as e:
+							# This is helpful, for example, if a ValueError
+							# is thrown from cpv_expand due to multiple
+							# matches (this can happen if an atom lacks a
+							# category).
+							show_invalid_depstring_notice(
+								pkg, depstr, _unicode_decode("%s") % (e,))
+							del e
+							raise
+						if not success:
+							replacement_pkg = final_db.match_pkgs(pkg.slot_atom)
+							if replacement_pkg and \
+								replacement_pkg[0].operation == "merge":
+								# This package is being replaced anyway, so
+								# ignore invalid dependencies so as not to
+								# annoy the user too much (otherwise they'd be
+								# forced to manually unmerge it first).
+								continue
+							show_invalid_depstring_notice(pkg, depstr, atoms)
+							return False
+						blocker_atoms = [myatom for myatom in atoms \
+							if myatom.blocker]
+						blocker_atoms.sort()
+						counter = long(pkg.metadata["COUNTER"])
+						blocker_cache[cpv] = \
+							blocker_cache.BlockerData(counter, blocker_atoms)
+					if blocker_atoms:
+						try:
+							for atom in blocker_atoms:
+								blocker = Blocker(atom=atom,
+									eapi=pkg.metadata["EAPI"],
+									priority=self._priority(runtime=True),
+									root=myroot)
+								self._dynamic_config._blocker_parents.add(blocker, pkg)
+						except portage.exception.InvalidAtom as e:
+							depstr = " ".join(vardb.aux_get(pkg.cpv, dep_keys))
+							show_invalid_depstring_notice(
+								pkg, depstr,
+								_unicode_decode("Invalid Atom: %s") % (e,))
+							return False
+				for cpv in stale_cache:
+					del blocker_cache[cpv]
+				blocker_cache.flush()
+				del blocker_cache
+
+		# Discard any "uninstall" tasks scheduled by previous calls
+		# to this method, since those tasks may not make sense given
+		# the current graph state.
+		previous_uninstall_tasks = self._dynamic_config._blocker_uninstalls.leaf_nodes()
+		if previous_uninstall_tasks:
+			self._dynamic_config._blocker_uninstalls = digraph()
+			self._dynamic_config.digraph.difference_update(previous_uninstall_tasks)
+
+		for blocker in self._dynamic_config._blocker_parents.leaf_nodes():
+			self._spinner_update()
+			root_config = self._frozen_config.roots[blocker.root]
+			virtuals = root_config.settings.getvirtuals()
+			myroot = blocker.root
+			initial_db = self._frozen_config.trees[myroot]["vartree"].dbapi
+			final_db = self._dynamic_config.mydbapi[myroot]
+
+			provider_virtual = False
+			if blocker.cp in virtuals and \
+				not self._have_new_virt(blocker.root, blocker.cp):
+				provider_virtual = True
+
+			# Use this to check PROVIDE for each matched package
+			# when necessary.
+			atom_set = InternalPackageSet(
+				initial_atoms=[blocker.atom])
+
+			if provider_virtual:
+				atoms = []
+				for provider_entry in virtuals[blocker.cp]:
+					atoms.append(Atom(blocker.atom.replace(
+						blocker.cp, provider_entry.cp, 1)))
+			else:
+				atoms = [blocker.atom]
+
+			blocked_initial = set()
+			for atom in atoms:
+				for pkg in initial_db.match_pkgs(atom):
+					if atom_set.findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)):
+						blocked_initial.add(pkg)
+
+			blocked_final = set()
+			for atom in atoms:
+				for pkg in final_db.match_pkgs(atom):
+					if atom_set.findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)):
+						blocked_final.add(pkg)
+
+			if not blocked_initial and not blocked_final:
+				parent_pkgs = self._dynamic_config._blocker_parents.parent_nodes(blocker)
+				self._dynamic_config._blocker_parents.remove(blocker)
+				# Discard any parents that don't have any more blockers.
+				for pkg in parent_pkgs:
+					self._dynamic_config._irrelevant_blockers.add(blocker, pkg)
+					if not self._dynamic_config._blocker_parents.child_nodes(pkg):
+						self._dynamic_config._blocker_parents.remove(pkg)
+				continue
+			for parent in self._dynamic_config._blocker_parents.parent_nodes(blocker):
+				unresolved_blocks = False
+				depends_on_order = set()
+				for pkg in blocked_initial:
+					if pkg.slot_atom == parent.slot_atom and \
+						not blocker.atom.blocker.overlap.forbid:
+						# New !!atom blockers do not allow temporary
+						# simultaneous installation, so unlike !atom
+						# blockers, !!atom blockers aren't ignored
+						# when they match other packages occupying
+						# the same slot.
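+						# (For example, "!app-misc/foo" is a soft block,
+						# while "!!app-misc/foo", available since EAPI 2,
+						# forbids any overlap at all.)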
+						continue
+					if parent.installed:
+						# Two currently installed packages conflict with
+						# each other. Ignore this case since the damage
+						# is already done and this would be likely to
+						# confuse users if displayed like a normal blocker.
+						continue
+
+					self._dynamic_config._blocked_pkgs.add(pkg, blocker)
+
+					if parent.operation == "merge":
+						# Maybe the blocked package can be replaced or simply
+						# unmerged to resolve this block.
+						depends_on_order.add((pkg, parent))
+						continue
+					# None of the above blocker resolution techniques apply,
+					# so apparently this one is unresolvable.
+					unresolved_blocks = True
+				for pkg in blocked_final:
+					if pkg.slot_atom == parent.slot_atom and \
+						not blocker.atom.blocker.overlap.forbid:
+						# New !!atom blockers do not allow temporary
+						# simultaneous installation, so unlike !atom
+						# blockers, !!atom blockers aren't ignored
+						# when they match other packages occupying
+						# the same slot.
+						continue
+					if parent.operation == "nomerge" and \
+						pkg.operation == "nomerge":
+						# This blocker will be handled the next time that a
+						# merge of either package is triggered.
+						continue
+
+					self._dynamic_config._blocked_pkgs.add(pkg, blocker)
+
+					# Maybe the blocking package can be
+					# unmerged to resolve this block.
+					if parent.operation == "merge" and pkg.installed:
+						depends_on_order.add((pkg, parent))
+						continue
+					elif parent.operation == "nomerge":
+						depends_on_order.add((parent, pkg))
+						continue
+					# None of the above blocker resolution techniques apply,
+					# so apparently this one is unresolvable.
+					unresolved_blocks = True
+
+				# Make sure we don't unmerge any packages that have been pulled
+				# into the graph.
+				if not unresolved_blocks and depends_on_order:
+					for inst_pkg, inst_task in depends_on_order:
+						if self._dynamic_config.digraph.contains(inst_pkg) and \
+							self._dynamic_config.digraph.parent_nodes(inst_pkg):
+							unresolved_blocks = True
+							break
+
+				if not unresolved_blocks and depends_on_order:
+					for inst_pkg, inst_task in depends_on_order:
+						uninst_task = Package(built=inst_pkg.built,
+							cpv=inst_pkg.cpv, installed=inst_pkg.installed,
+							metadata=inst_pkg.metadata,
+							operation="uninstall",
+							root_config=inst_pkg.root_config,
+							type_name=inst_pkg.type_name)
+						# Enforce correct merge order with a hard dep.
+						self._dynamic_config.digraph.addnode(uninst_task, inst_task,
+							priority=BlockerDepPriority.instance)
+						# Count references to this blocker so that it can be
+						# invalidated after nodes referencing it have been
+						# merged.
+						self._dynamic_config._blocker_uninstalls.addnode(uninst_task, blocker)
+				if not unresolved_blocks and not depends_on_order:
+					self._dynamic_config._irrelevant_blockers.add(blocker, parent)
+					self._dynamic_config._blocker_parents.remove_edge(blocker, parent)
+					if not self._dynamic_config._blocker_parents.parent_nodes(blocker):
+						self._dynamic_config._blocker_parents.remove(blocker)
+					if not self._dynamic_config._blocker_parents.child_nodes(parent):
+						self._dynamic_config._blocker_parents.remove(parent)
+				if unresolved_blocks:
+					self._dynamic_config._unsolvable_blockers.add(blocker, parent)
+
+		return True
+
+	def _accept_blocker_conflicts(self):
+		acceptable = False
+		for x in ("--buildpkgonly", "--fetchonly",
+			"--fetch-all-uri", "--nodeps"):
+			if x in self._frozen_config.myopts:
+				acceptable = True
+				break
+		return acceptable
+
+	def _merge_order_bias(self, mygraph):
+		"""
+		For optimal leaf node selection, promote deep system runtime deps and
+		order nodes from highest to lowest overall reference count.
+		"""
+
+		node_info = {}
+		for node in mygraph.order:
+			node_info[node] = len(mygraph.parent_nodes(node))
+		deep_system_deps = _find_deep_system_runtime_deps(mygraph)
+
+		def cmp_merge_preference(node1, node2):
+
+			if node1.operation == 'uninstall':
+				if node2.operation == 'uninstall':
+					return 0
+				return 1
+
+			if node2.operation == 'uninstall':
+				return -1
+
+			node1_sys = node1 in deep_system_deps
+			node2_sys = node2 in deep_system_deps
+			if node1_sys != node2_sys:
+				if node1_sys:
+					return -1
+				return 1
+
+			return node_info[node2] - node_info[node1]
+
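+		# cmp_sort_key adapts the old-style cmp function above into a key
+		# function, keeping this sort compatible with Python 3.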
+		mygraph.order.sort(key=cmp_sort_key(cmp_merge_preference))
+
+	def altlist(self, reversed=False):
+
+		while self._dynamic_config._serialized_tasks_cache is None:
+			self._resolve_conflicts()
+			try:
+				self._dynamic_config._serialized_tasks_cache, self._dynamic_config._scheduler_graph = \
+					self._serialize_tasks()
+			except self._serialize_tasks_retry:
+				pass
+
+		retlist = self._dynamic_config._serialized_tasks_cache[:]
+		if reversed:
+			retlist.reverse()
+		return retlist
+
+	def _implicit_libc_deps(self, mergelist, graph):
+		"""
+		Create implicit dependencies on libc, in order to ensure that libc
+		is installed as early as possible (see bug #303567).
+		"""
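+		# The approach: collect libc packages scheduled for merge on each
+		# root, then (in the loop below) add a buildtime dep edge from the
+		# libc package to every later merge on the same root.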
+		libc_pkgs = {}
+		implicit_libc_roots = (self._frozen_config._running_root.root,)
+		for root in implicit_libc_roots:
+			graphdb = self._dynamic_config.mydbapi[root]
+			vardb = self._frozen_config.trees[root]["vartree"].dbapi
+			for atom in self._expand_virt_from_graph(root,
+				portage.const.LIBC_PACKAGE_ATOM):
+				if atom.blocker:
+					continue
+				match = graphdb.match_pkgs(atom)
+				if not match:
+					continue
+				pkg = match[-1]
+				if pkg.operation == "merge" and \
+					not vardb.cpv_exists(pkg.cpv):
+					libc_pkgs.setdefault(pkg.root, set()).add(pkg)
+
+		if not libc_pkgs:
+			return
+
+		earlier_libc_pkgs = set()
+
+		for pkg in mergelist:
+			if not isinstance(pkg, Package):
+				# a satisfied blocker
+				continue
+			root_libc_pkgs = libc_pkgs.get(pkg.root)
+			if root_libc_pkgs is not None and \
+				pkg.operation == "merge":
+				if pkg in root_libc_pkgs:
+					earlier_libc_pkgs.add(pkg)
+				else:
+					for libc_pkg in root_libc_pkgs:
+						if libc_pkg in earlier_libc_pkgs:
+							graph.add(libc_pkg, pkg,
+								priority=DepPriority(buildtime=True))
+
+	def schedulerGraph(self):
+		"""
+		The scheduler graph is identical to the normal one except that
+		uninstall edges are reversed in specific cases that require
+		conflicting packages to be temporarily installed simultaneously.
+		This is intended for use by the Scheduler in its parallelization
+		logic. It ensures that temporary simultaneous installation of
+		conflicting packages is avoided when appropriate (especially for
+		!!atom blockers), but allowed in specific cases that require it.
+
+		Note that this method calls break_refs() which alters the state of
+		internal Package instances such that this depgraph instance should
+		not be used to perform any more calculations.
+		"""
+
+		# NOTE: altlist initializes self._dynamic_config._scheduler_graph
+		mergelist = self.altlist()
+		self._implicit_libc_deps(mergelist,
+			self._dynamic_config._scheduler_graph)
+
+		# Break DepPriority.satisfied attributes which reference
+		# installed Package instances.
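+		# (priority.satisfied may hold an installed Package instance;
+		# assigning the literal True keeps the attribute's truth value
+		# while releasing the reference to the Package.)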
+		for parents, children, node in \
+			self._dynamic_config._scheduler_graph.nodes.values():
+			for priorities in chain(parents.values(), children.values()):
+				for priority in priorities:
+					if priority.satisfied:
+						priority.satisfied = True
+
+		pkg_cache = self._frozen_config._pkg_cache
+		graph = self._dynamic_config._scheduler_graph
+		trees = self._frozen_config.trees
+		pruned_pkg_cache = {}
+		for key, pkg in pkg_cache.items():
+			if pkg in graph or \
+				(pkg.installed and pkg in trees[pkg.root]['vartree'].dbapi):
+				pruned_pkg_cache[key] = pkg
+
+		for root in trees:
+			trees[root]['vartree']._pkg_cache = pruned_pkg_cache
+
+		self.break_refs()
+		sched_config = \
+			_scheduler_graph_config(trees, pruned_pkg_cache, graph, mergelist)
+
+		return sched_config
+
+	def break_refs(self):
+		"""
+		Break any references in Package instances that lead back to the depgraph.
+		This is useful if you want to hold references to packages without also
+		holding the depgraph on the heap. It should only be called once the
+		depgraph and _frozen_config are no longer needed for any calculations.
+		"""
+		for root_config in self._frozen_config.roots.values():
+			root_config.update(self._frozen_config._trees_orig[
+				root_config.root]["root_config"])
+			# Both instances are now identical, so discard the
+			# original which should have no other references.
+			self._frozen_config._trees_orig[
+				root_config.root]["root_config"] = root_config
+
+	def _resolve_conflicts(self):
+		if not self._complete_graph():
+			raise self._unknown_internal_error()
+
+		if not self._validate_blockers():
+			self._dynamic_config._skip_restart = True
+			raise self._unknown_internal_error()
+
+		if self._dynamic_config._slot_collision_info:
+			self._process_slot_conflicts()
+
+	def _serialize_tasks(self):
+
+		debug = "--debug" in self._frozen_config.myopts
+
+		if debug:
+			writemsg("\ndigraph:\n\n", noiselevel=-1)
+			self._dynamic_config.digraph.debug_print()
+			writemsg("\n", noiselevel=-1)
+
+		scheduler_graph = self._dynamic_config.digraph.copy()
+
+		if '--nodeps' in self._frozen_config.myopts:
+			# Preserve the package order given on the command line.
+			return ([node for node in scheduler_graph \
+				if isinstance(node, Package) \
+				and node.operation == 'merge'], scheduler_graph)
+
+		mygraph = self._dynamic_config.digraph.copy()
+
+		removed_nodes = set()
+
+		# Prune off all DependencyArg instances since they aren't
+		# needed, and because of nested sets this is faster than doing
+		# it with multiple digraph.root_nodes() calls below. This also
+		# takes care of nested sets that have circular references,
+		# which wouldn't be matched by digraph.root_nodes().
+		for node in mygraph:
+			if isinstance(node, DependencyArg):
+				removed_nodes.add(node)
+		if removed_nodes:
+			mygraph.difference_update(removed_nodes)
+			removed_nodes.clear()
+
+		# Prune "nomerge" root nodes if nothing depends on them, since
+		# otherwise they slow down merge order calculation. Don't remove
+		# non-root nodes since they help optimize merge order in some cases
+		# such as revdep-rebuild.
+
+		while True:
+			for node in mygraph.root_nodes():
+				if not isinstance(node, Package) or \
+					node.installed or node.onlydeps:
+					removed_nodes.add(node)
+			if removed_nodes:
+				self._spinner_update()
+				mygraph.difference_update(removed_nodes)
+			if not removed_nodes:
+				break
+			removed_nodes.clear()
+		self._merge_order_bias(mygraph)
+		def cmp_circular_bias(n1, n2):
+			"""
+			RDEPEND is stronger than PDEPEND and this function
+			measures such a strength bias within a circular
+			dependency relationship.
+			"""
+			n1_n2_medium = n2 in mygraph.child_nodes(n1,
+				ignore_priority=priority_range.ignore_medium_soft)
+			n2_n1_medium = n1 in mygraph.child_nodes(n2,
+				ignore_priority=priority_range.ignore_medium_soft)
+			if n1_n2_medium == n2_n1_medium:
+				return 0
+			elif n1_n2_medium:
+				return 1
+			return -1
+		myblocker_uninstalls = self._dynamic_config._blocker_uninstalls.copy()
+		retlist = []
+		# Contains uninstall tasks that have been scheduled to
+		# occur after overlapping blockers have been installed.
+		scheduled_uninstalls = set()
+		# Contains any Uninstall tasks that have been ignored
+		# in order to avoid the circular deps code path. These
+		# correspond to blocker conflicts that could not be
+		# resolved.
+		ignored_uninstall_tasks = set()
+		have_uninstall_task = False
+		complete = "complete" in self._dynamic_config.myparams
+		asap_nodes = []
+
+		def get_nodes(**kwargs):
+			"""
+			Returns leaf nodes excluding Uninstall instances
+			since those should be executed as late as possible.
+			"""
+			return [node for node in mygraph.leaf_nodes(**kwargs) \
+				if isinstance(node, Package) and \
+					(node.operation != "uninstall" or \
+					node in scheduled_uninstalls)]
+
+		# sys-apps/portage needs special treatment if ROOT="/"
+		running_root = self._frozen_config._running_root.root
+		runtime_deps = InternalPackageSet(
+			initial_atoms=[PORTAGE_PACKAGE_ATOM])
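+		# PORTAGE_PACKAGE_ATOM matches sys-apps/portage itself. Its runtime
+		# deps are gathered below so that nothing the running package
+		# manager needs is scheduled for uninstall.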
+		running_portage = self._frozen_config.trees[running_root]["vartree"].dbapi.match_pkgs(
+			PORTAGE_PACKAGE_ATOM)
+		replacement_portage = self._dynamic_config.mydbapi[running_root].match_pkgs(
+			PORTAGE_PACKAGE_ATOM)
+
+		if running_portage:
+			running_portage = running_portage[0]
+		else:
+			running_portage = None
+
+		if replacement_portage:
+			replacement_portage = replacement_portage[0]
+		else:
+			replacement_portage = None
+
+		if replacement_portage == running_portage:
+			replacement_portage = None
+
+		if running_portage is not None:
+			try:
+				portage_rdepend = self._select_atoms_highest_available(
+					running_root, running_portage.metadata["RDEPEND"],
+					myuse=self._pkg_use_enabled(running_portage),
+					parent=running_portage, strict=False)
+			except portage.exception.InvalidDependString as e:
+				portage.writemsg("!!! Invalid RDEPEND in " + \
+					"'%svar/db/pkg/%s/RDEPEND': %s\n" % \
+					(running_root, running_portage.cpv, e), noiselevel=-1)
+				del e
+				portage_rdepend = {running_portage : []}
+			for atoms in portage_rdepend.values():
+				runtime_deps.update(atom for atom in atoms \
+					if not atom.blocker)
+
+		# Merge libc asap, in order to account for implicit
+		# dependencies. See bug #303567.
+		implicit_libc_roots = (running_root,)
+		for root in implicit_libc_roots:
+			libc_pkgs = set()
+			vardb = self._frozen_config.trees[root]["vartree"].dbapi
+			graphdb = self._dynamic_config.mydbapi[root]
+			for atom in self._expand_virt_from_graph(root,
+				portage.const.LIBC_PACKAGE_ATOM):
+				if atom.blocker:
+					continue
+				match = graphdb.match_pkgs(atom)
+				if not match:
+					continue
+				pkg = match[-1]
+				if pkg.operation == "merge" and \
+					not vardb.cpv_exists(pkg.cpv):
+					libc_pkgs.add(pkg)
+
+			if libc_pkgs:
+				# If there's also an os-headers upgrade, we need to
+				# pull that in first. See bug #328317.
+				for atom in self._expand_virt_from_graph(root,
+					portage.const.OS_HEADERS_PACKAGE_ATOM):
+					if atom.blocker:
+						continue
+					match = graphdb.match_pkgs(atom)
+					if not match:
+						continue
+					pkg = match[-1]
+					if pkg.operation == "merge" and \
+						not vardb.cpv_exists(pkg.cpv):
+						asap_nodes.append(pkg)
+
+				asap_nodes.extend(libc_pkgs)
+
+		def gather_deps(ignore_priority, mergeable_nodes,
+			selected_nodes, node):
+			"""
+			Recursively gather a group of nodes that RDEPEND on
+			each other. This ensures that they are merged as a group
+			and get their RDEPENDs satisfied as soon as possible.
+			"""
+			if node in selected_nodes:
+				return True
+			if node not in mergeable_nodes:
+				return False
+			if node == replacement_portage and \
+				mygraph.child_nodes(node,
+				ignore_priority=priority_range.ignore_medium_soft):
+				# Make sure that portage always has all of its
+				# RDEPENDs installed first.
+				return False
+			selected_nodes.add(node)
+			for child in mygraph.child_nodes(node,
+				ignore_priority=ignore_priority):
+				if not gather_deps(ignore_priority,
+					mergeable_nodes, selected_nodes, child):
+					return False
+			return True
+
+		def ignore_uninst_or_med(priority):
+			if priority is BlockerDepPriority.instance:
+				return True
+			return priority_range.ignore_medium(priority)
+
+		def ignore_uninst_or_med_soft(priority):
+			if priority is BlockerDepPriority.instance:
+				return True
+			return priority_range.ignore_medium_soft(priority)
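+		# Both helpers also treat the hard BlockerDepPriority edges created
+		# for scheduled uninstalls as ignorable, so packages blocked by a
+		# pending uninstall can still be selected as leaf nodes.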
+
+		tree_mode = "--tree" in self._frozen_config.myopts
+		# Tracks whether or not the current iteration should prefer asap_nodes
+		# if available.  This is set to False when the previous iteration
+		# failed to select any nodes.  It is reset whenever nodes are
+		# successfully selected.
+		prefer_asap = True
+
+		# Controls whether or not the current iteration should drop edges that
+		# are "satisfied" by installed packages, in order to solve circular
+		# dependencies. The deep runtime dependencies of installed packages are
+		# not checked in this case (bug #199856), so it must be avoided
+		# whenever possible.
+		drop_satisfied = False
+
+		# State of variables for successive iterations that loosen the
+		# criteria for node selection.
+		#
+		# iteration   prefer_asap   drop_satisfied
+		# 1           True          False
+		# 2           False         False
+		# 3           False         True
+		#
+		# If no nodes are selected on the last iteration, it is due to
+		# unresolved blockers or circular dependencies.
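+		# For example, a runtime cycle that is already satisfied by
+		# installed packages (A RDEPENDs on B, B RDEPENDs on A) can only
+		# be broken on the third iteration, when satisfied edges may be
+		# dropped.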
+
+		while mygraph:
+			self._spinner_update()
+			selected_nodes = None
+			ignore_priority = None
+			if drop_satisfied or (prefer_asap and asap_nodes):
+				priority_range = DepPrioritySatisfiedRange
+			else:
+				priority_range = DepPriorityNormalRange
+			if prefer_asap and asap_nodes:
+				# ASAP nodes are merged before their soft deps. Go ahead and
+				# select root nodes here if necessary, since it's typical for
+				# the parent to have been removed from the graph already.
+				asap_nodes = [node for node in asap_nodes \
+					if mygraph.contains(node)]
+				for i in range(priority_range.SOFT,
+					priority_range.MEDIUM_SOFT + 1):
+					ignore_priority = priority_range.ignore_priority[i]
+					for node in asap_nodes:
+						if not mygraph.child_nodes(node,
+							ignore_priority=ignore_priority):
+							selected_nodes = [node]
+							asap_nodes.remove(node)
+							break
+					if selected_nodes:
+						break
+
+			if not selected_nodes and \
+				not (prefer_asap and asap_nodes):
+				for i in range(priority_range.NONE,
+					priority_range.MEDIUM_SOFT + 1):
+					ignore_priority = priority_range.ignore_priority[i]
+					nodes = get_nodes(ignore_priority=ignore_priority)
+					if nodes:
+						# If there is a mixture of merges and uninstalls,
+						# do the uninstalls first.
+						good_uninstalls = None
+						if len(nodes) > 1:
+							good_uninstalls = []
+							for node in nodes:
+								if node.operation == "uninstall":
+									good_uninstalls.append(node)
+
+							if good_uninstalls:
+								nodes = good_uninstalls
+
+						if good_uninstalls or len(nodes) == 1 or \
+							(ignore_priority is None and \
+							not asap_nodes and not tree_mode):
+							# Greedily pop all of these nodes since no
+							# relationship has been ignored. This optimization
+							# destroys --tree output, so it's disabled in tree
+							# mode.
+							selected_nodes = nodes
+						else:
+							# For optimal merge order:
+							#  * Only pop one node.
+							#  * Removing a root node (node without a parent)
+							#    will not produce a leaf node, so avoid it.
+							#  * It's normal for a selected uninstall to be a
+							#    root node, so don't check them for parents.
+							if asap_nodes:
+								prefer_asap_parents = (True, False)
+							else:
+								prefer_asap_parents = (False,)
+							for check_asap_parent in prefer_asap_parents:
+								if check_asap_parent:
+									for node in nodes:
+										parents = mygraph.parent_nodes(node,
+											ignore_priority=DepPrioritySatisfiedRange.ignore_soft)
+										if parents and set(parents).intersection(asap_nodes):
+											selected_nodes = [node]
+											break
+								else:
+									for node in nodes:
+										if mygraph.parent_nodes(node):
+											selected_nodes = [node]
+											break
+								if selected_nodes:
+									break
+						if selected_nodes:
+							break
+
+			if not selected_nodes:
+				nodes = get_nodes(ignore_priority=priority_range.ignore_medium)
+				if nodes:
+					mergeable_nodes = set(nodes)
+					if prefer_asap and asap_nodes:
+						nodes = asap_nodes
+					# When gathering the nodes belonging to a runtime cycle,
+					# we want to minimize the number of nodes gathered, since
+					# this tends to produce a more optimal merge order.
+					# Ignoring all medium_soft deps serves this purpose.
+					# In the case of multiple runtime cycles, where some cycles
+					# may depend on smaller independent cycles, it's optimal
+					# to merge smaller independent cycles before other cycles
+					# that depend on them. Therefore, we search for the
+					# smallest cycle in order to try and identify and prefer
+					# these smaller independent cycles.
+					ignore_priority = priority_range.ignore_medium_soft
+					smallest_cycle = None
+					for node in nodes:
+						if not mygraph.parent_nodes(node):
+							continue
+						selected_nodes = set()
+						if gather_deps(ignore_priority,
+							mergeable_nodes, selected_nodes, node):
+							# When selecting asap_nodes, we need to ensure
+							# that we haven't selected a large runtime cycle
+							# that is obviously sub-optimal. This will be
+							# obvious if any of the non-asap selected_nodes
+							# is a leaf node when medium_soft deps are
+							# ignored.
+							if prefer_asap and asap_nodes and \
+								len(selected_nodes) > 1:
+								for node in selected_nodes.difference(
+									asap_nodes):
+									if not mygraph.child_nodes(node,
+										ignore_priority =
+										DepPriorityNormalRange.ignore_medium_soft):
+										selected_nodes = None
+										break
+							if selected_nodes:
+								if smallest_cycle is None or \
+									len(selected_nodes) < len(smallest_cycle):
+									smallest_cycle = selected_nodes
+
+					selected_nodes = smallest_cycle
+
+					if selected_nodes and debug:
+						writemsg("\nruntime cycle digraph (%s nodes):\n\n" %
+							(len(selected_nodes),), noiselevel=-1)
+						cycle_digraph = mygraph.copy()
+						cycle_digraph.difference_update([x for x in
+							cycle_digraph if x not in selected_nodes])
+						cycle_digraph.debug_print()
+						writemsg("\n", noiselevel=-1)
+
+					if prefer_asap and asap_nodes and not selected_nodes:
+						# We failed to find any asap nodes to merge, so ignore
+						# them for the next iteration.
+						prefer_asap = False
+						continue
+
+			if selected_nodes and ignore_priority is not None:
+				# Try to merge ignored medium_soft deps as soon as possible
+				# if they're not satisfied by installed packages.
+				for node in selected_nodes:
+					children = set(mygraph.child_nodes(node))
+					soft = children.difference(
+						mygraph.child_nodes(node,
+						ignore_priority=DepPrioritySatisfiedRange.ignore_soft))
+					medium_soft = children.difference(
+						mygraph.child_nodes(node,
+							ignore_priority = \
+							DepPrioritySatisfiedRange.ignore_medium_soft))
+					medium_soft.difference_update(soft)
+					for child in medium_soft:
+						if child in selected_nodes:
+							continue
+						if child in asap_nodes:
+							continue
+						# Merge PDEPEND asap for bug #180045.
+						asap_nodes.append(child)
+
+			if selected_nodes and len(selected_nodes) > 1:
+				if not isinstance(selected_nodes, list):
+					selected_nodes = list(selected_nodes)
+				selected_nodes.sort(key=cmp_sort_key(cmp_circular_bias))
+
+			if not selected_nodes and myblocker_uninstalls:
+				# An Uninstall task needs to be executed in order to
+				# avoid conflict if possible.
+
+				if drop_satisfied:
+					priority_range = DepPrioritySatisfiedRange
+				else:
+					priority_range = DepPriorityNormalRange
+
+				mergeable_nodes = get_nodes(
+					ignore_priority=ignore_uninst_or_med)
+
+				min_parent_deps = None
+				uninst_task = None
+
+				for task in myblocker_uninstalls.leaf_nodes():
+					# Do some sanity checks so that system or world packages
+					# don't get uninstalled inappropriately here (only really
+					# necessary when --complete-graph has not been enabled).
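+					# The guards below skip any task that would remove the
+					# running portage instance or its runtime deps, anything
+					# in the system set, or (when the graph is incomplete) a
+					# world package whose atom would become unsatisfied.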
+
+					if task in ignored_uninstall_tasks:
+						continue
+
+					if task in scheduled_uninstalls:
+						# It's been scheduled but it hasn't
+						# been executed yet due to dependence
+						# on installation of blocking packages.
+						continue
+
+					root_config = self._frozen_config.roots[task.root]
+					inst_pkg = self._pkg(task.cpv, "installed", root_config,
+						installed=True)
+
+					if self._dynamic_config.digraph.contains(inst_pkg):
+						continue
+
+					forbid_overlap = False
+					heuristic_overlap = False
+					for blocker in myblocker_uninstalls.parent_nodes(task):
+						if not eapi_has_strong_blocks(blocker.eapi):
+							heuristic_overlap = True
+						elif blocker.atom.blocker.overlap.forbid:
+							forbid_overlap = True
+							break
+					if forbid_overlap and running_root == task.root:
+						continue
+
+					if heuristic_overlap and running_root == task.root:
+						# Never uninstall sys-apps/portage or its essential
+						# dependencies, except through replacement.
+						try:
+							runtime_dep_atoms = \
+								list(runtime_deps.iterAtomsForPackage(task))
+						except portage.exception.InvalidDependString as e:
+							portage.writemsg("!!! Invalid PROVIDE in " + \
+								"'%svar/db/pkg/%s/PROVIDE': %s\n" % \
+								(task.root, task.cpv, e), noiselevel=-1)
+							del e
+							continue
+
+						# Don't uninstall a runtime dep if it appears
+						# to be the only suitable one installed.
+						skip = False
+						vardb = root_config.trees["vartree"].dbapi
+						for atom in runtime_dep_atoms:
+							other_version = None
+							for pkg in vardb.match_pkgs(atom):
+								if pkg.cpv == task.cpv and \
+									pkg.metadata["COUNTER"] == \
+									task.metadata["COUNTER"]:
+									continue
+								other_version = pkg
+								break
+							if other_version is None:
+								skip = True
+								break
+						if skip:
+							continue
+
+						# For packages in the system set, don't take
+						# any chances. If the conflict can't be resolved
+						# by a normal replacement operation then abort.
+						skip = False
+						try:
+							for atom in root_config.sets[
+								"system"].iterAtomsForPackage(task):
+								skip = True
+								break
+						except portage.exception.InvalidDependString as e:
+							portage.writemsg("!!! Invalid PROVIDE in " + \
+								"'%svar/db/pkg/%s/PROVIDE': %s\n" % \
+								(task.root, task.cpv, e), noiselevel=-1)
+							del e
+							skip = True
+						if skip:
+							continue
+
+					# Note that the world check isn't always
+					# necessary since self._complete_graph() will
+					# add all packages from the system and world sets to the
+					# graph. This just allows unresolved conflicts to be
+					# detected as early as possible, which makes it possible
+					# to avoid calling self._complete_graph() when it is
+					# unnecessary due to blockers triggering an abort.
+					if not complete:
+						# For packages in the world set, go ahead and uninstall
+						# when necessary, as long as the atom will be satisfied
+						# in the final state.
+						graph_db = self._dynamic_config.mydbapi[task.root]
+						skip = False
+						try:
+							for atom in root_config.sets[
+								"selected"].iterAtomsForPackage(task):
+								satisfied = False
+								for pkg in graph_db.match_pkgs(atom):
+									if pkg == inst_pkg:
+										continue
+									satisfied = True
+									break
+								if not satisfied:
+									skip = True
+									self._dynamic_config._blocked_world_pkgs[inst_pkg] = atom
+									break
+						except portage.exception.InvalidDependString as e:
+							portage.writemsg("!!! Invalid PROVIDE in " + \
+								"'%svar/db/pkg/%s/PROVIDE': %s\n" % \
+								(task.root, task.cpv, e), noiselevel=-1)
+							del e
+							skip = True
+						if skip:
+							continue
+
+					# Check the deps of parent nodes to ensure that
+					# the chosen task produces a leaf node. Maybe
+					# this can be optimized some more to make the
+					# best possible choice, but the current algorithm
+					# is simple and should be near optimal for most
+					# common cases.
+					self._spinner_update()
+					mergeable_parent = False
+					parent_deps = set()
+					parent_deps.add(task)
+					for parent in mygraph.parent_nodes(task):
+						parent_deps.update(mygraph.child_nodes(parent,
+							ignore_priority=priority_range.ignore_medium_soft))
+						if min_parent_deps is not None and \
+							len(parent_deps) >= min_parent_deps:
+							# This task is no better than a previously selected
+							# task, so abort search now in order to avoid wasting
+							# any more cpu time on this task. This increases
+							# performance dramatically in cases when there are
+							# hundreds of blockers to solve, like when
+							# upgrading to a new slot of kde-meta.
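+							# In effect, min_parent_deps is a branch-and-bound
+							# cutoff: candidates already known to be no better
+							# than the best task found so far are discarded
+							# without further inspection.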
+							mergeable_parent = None
+							break
+						if parent in mergeable_nodes and \
+							gather_deps(ignore_uninst_or_med_soft,
+							mergeable_nodes, set(), parent):
+							mergeable_parent = True
+
+					if not mergeable_parent:
+						continue
+
+					if min_parent_deps is None or \
+						len(parent_deps) < min_parent_deps:
+						min_parent_deps = len(parent_deps)
+						uninst_task = task
+
+					if uninst_task is not None and min_parent_deps == 1:
+						# This is the best possible result, so abort the search
+						# now in order to avoid wasting any more cpu time.
+						break
+
+				if uninst_task is not None:
+					# The uninstall is performed only after blocking
+					# packages have been merged on top of it. File
+					# collisions between blocking packages are detected
+					# and removed from the list of files to be uninstalled.
+					scheduled_uninstalls.add(uninst_task)
+					parent_nodes = mygraph.parent_nodes(uninst_task)
+
+					# Reverse the parent -> uninstall edges since we want
+					# to do the uninstall after blocking packages have
+					# been merged on top of it.
+					mygraph.remove(uninst_task)
+					for blocked_pkg in parent_nodes:
+						mygraph.add(blocked_pkg, uninst_task,
+							priority=BlockerDepPriority.instance)
+						scheduler_graph.remove_edge(uninst_task, blocked_pkg)
+						scheduler_graph.add(blocked_pkg, uninst_task,
+							priority=BlockerDepPriority.instance)
+
+					# Sometimes a merge node will render an uninstall
+					# node unnecessary (due to occupying the same SLOT),
+					# and we want to avoid executing a separate uninstall
+					# task in that case.
+					slot_node = self._dynamic_config.mydbapi[uninst_task.root
+						].match_pkgs(uninst_task.slot_atom)
+					if slot_node and \
+						slot_node[0].operation == "merge":
+						mygraph.add(slot_node[0], uninst_task,
+							priority=BlockerDepPriority.instance)
+
+					# Reset the state variables for leaf node selection and
+					# continue trying to select leaf nodes.
+					prefer_asap = True
+					drop_satisfied = False
+					continue
+
+			if not selected_nodes:
+				# Only select root nodes as a last resort. This case should
+				# only trigger when the graph is nearly empty and the only
+				# remaining nodes are isolated (no parents or children). Since
+				# the nodes must be isolated, ignore_priority is not needed.
+				selected_nodes = get_nodes()
+
+			if not selected_nodes and not drop_satisfied:
+				drop_satisfied = True
+				continue
+
+			if not selected_nodes and myblocker_uninstalls:
+				# If possible, drop an uninstall task here in order to avoid
+				# the circular deps code path. The corresponding blocker will
+				# still be counted as an unresolved conflict.
+				uninst_task = None
+				for node in myblocker_uninstalls.leaf_nodes():
+					try:
+						mygraph.remove(node)
+					except KeyError:
+						pass
+					else:
+						uninst_task = node
+						ignored_uninstall_tasks.add(node)
+						break
+
+				if uninst_task is not None:
+					# Reset the state variables for leaf node selection and
+					# continue trying to select leaf nodes.
+					prefer_asap = True
+					drop_satisfied = False
+					continue
+
+			if not selected_nodes:
+				self._dynamic_config._circular_deps_for_display = mygraph
+				self._dynamic_config._skip_restart = True
+				raise self._unknown_internal_error()
+
+			# At this point, we've succeeded in selecting one or more nodes, so
+			# reset state variables for leaf node selection.
+			prefer_asap = True
+			drop_satisfied = False
+
+			mygraph.difference_update(selected_nodes)
+
+			for node in selected_nodes:
+				if isinstance(node, Package) and \
+					node.operation == "nomerge":
+					continue
+
+				# Handle interactions between blockers
+				# and uninstallation tasks.
+				solved_blockers = set()
+				uninst_task = None
+				if isinstance(node, Package) and \
+					"uninstall" == node.operation:
+					have_uninstall_task = True
+					uninst_task = node
+				else:
+					vardb = self._frozen_config.trees[node.root]["vartree"].dbapi
+					inst_pkg = vardb.match_pkgs(node.slot_atom)
+					if inst_pkg:
+						# The package will be replaced by this one, so remove
+						# the corresponding Uninstall task if necessary.
+						inst_pkg = inst_pkg[0]
+						uninst_task = Package(built=inst_pkg.built,
+							cpv=inst_pkg.cpv, installed=inst_pkg.installed,
+							metadata=inst_pkg.metadata,
+							operation="uninstall",
+							root_config=inst_pkg.root_config,
+							type_name=inst_pkg.type_name)
+						try:
+							mygraph.remove(uninst_task)
+						except KeyError:
+							pass
+
+				if uninst_task is not None and \
+					uninst_task not in ignored_uninstall_tasks and \
+					myblocker_uninstalls.contains(uninst_task):
+					blocker_nodes = myblocker_uninstalls.parent_nodes(uninst_task)
+					myblocker_uninstalls.remove(uninst_task)
+					# Discard any blockers that this Uninstall solves.
+					for blocker in blocker_nodes:
+						if not myblocker_uninstalls.child_nodes(blocker):
+							myblocker_uninstalls.remove(blocker)
+							if blocker not in \
+								self._dynamic_config._unsolvable_blockers:
+								solved_blockers.add(blocker)
+
+				retlist.append(node)
+
+				if (isinstance(node, Package) and \
+					"uninstall" == node.operation) or \
+					(uninst_task is not None and \
+					uninst_task in scheduled_uninstalls):
+					# Include satisfied blockers in the merge list
+					# since the user might be interested and also
+					# it serves as an indicator that blocking packages
+					# will be temporarily installed simultaneously.
+					for blocker in solved_blockers:
+						retlist.append(blocker)
+
+		unsolvable_blockers = set(self._dynamic_config._unsolvable_blockers.leaf_nodes())
+		for node in myblocker_uninstalls.root_nodes():
+			unsolvable_blockers.add(node)
+
+		# If any Uninstall tasks need to be executed in order
+		# to avoid a conflict, complete the graph with any
+		# dependencies that may have been initially
+		# neglected (to ensure that unsafe Uninstall tasks
+		# are properly identified and blocked from execution).
+		if have_uninstall_task and \
+			not complete and \
+			not unsolvable_blockers:
+			self._dynamic_config.myparams["complete"] = True
+			if '--debug' in self._frozen_config.myopts:
+				msg = []
+				msg.append("enabling 'complete' depgraph mode " + \
+					"due to uninstall task(s):")
+				msg.append("")
+				for node in retlist:
+					if isinstance(node, Package) and \
+						node.operation == 'uninstall':
+						msg.append("\t%s" % (node,))
+				writemsg_level("\n%s\n" % \
+					"".join("%s\n" % line for line in msg),
+					level=logging.DEBUG, noiselevel=-1)
+			raise self._serialize_tasks_retry("")
+
+		# Set satisfied state on blockers, but not before the
+		# above retry path, since we don't want to modify the
+		# state in that case.
+		for node in retlist:
+			if isinstance(node, Blocker):
+				node.satisfied = True
+
+		for blocker in unsolvable_blockers:
+			retlist.append(blocker)
+
+		if unsolvable_blockers and \
+			not self._accept_blocker_conflicts():
+			self._dynamic_config._unsatisfied_blockers_for_display = unsolvable_blockers
+			self._dynamic_config._serialized_tasks_cache = retlist[:]
+			self._dynamic_config._scheduler_graph = scheduler_graph
+			self._dynamic_config._skip_restart = True
+			raise self._unknown_internal_error()
+
+		if self._dynamic_config._slot_collision_info and \
+			not self._accept_blocker_conflicts():
+			self._dynamic_config._serialized_tasks_cache = retlist[:]
+			self._dynamic_config._scheduler_graph = scheduler_graph
+			raise self._unknown_internal_error()
+
+		return retlist, scheduler_graph
+
+	def _show_circular_deps(self, mygraph):
+		self._dynamic_config._circular_dependency_handler = \
+			circular_dependency_handler(self, mygraph)
+		handler = self._dynamic_config._circular_dependency_handler
+
+		self._frozen_config.myopts.pop("--quiet", None)
+		self._frozen_config.myopts["--verbose"] = True
+		self._frozen_config.myopts["--tree"] = True
+		portage.writemsg("\n\n", noiselevel=-1)
+		self.display(handler.merge_list)
+		prefix = colorize("BAD", " * ")
+		portage.writemsg("\n", noiselevel=-1)
+		portage.writemsg(prefix + "Error: circular dependencies:\n",
+			noiselevel=-1)
+		portage.writemsg("\n", noiselevel=-1)
+
+		if handler.circular_dep_message is None:
+			handler.debug_print()
+			portage.writemsg("\n", noiselevel=-1)
+
+		if handler.circular_dep_message is not None:
+			portage.writemsg(handler.circular_dep_message, noiselevel=-1)
+
+		suggestions = handler.suggestions
+		if suggestions:
+			writemsg("\n\nIt might be possible to break this cycle\n", noiselevel=-1)
+			if len(suggestions) == 1:
+				writemsg("by applying the following change:\n", noiselevel=-1)
+			else:
+				writemsg("by applying " + colorize("bold", "any of") + \
+					" the following changes:\n", noiselevel=-1)
+			writemsg("".join(suggestions), noiselevel=-1)
+			writemsg("\nNote that this change can be reverted, once the package has" + \
+				" been installed.\n", noiselevel=-1)
+			if handler.large_cycle_count:
+				writemsg("\nNote that the dependency graph contains a lot of cycles.\n" + \
+					"Several changes might be required to resolve all cycles.\n" + \
+					"Temporarily changing some USE flags for all packages might be the better option.\n", noiselevel=-1)
+		else:
+			writemsg("\n\n", noiselevel=-1)
+			writemsg(prefix + "Note that circular dependencies " + \
+				"can often be avoided by temporarily\n", noiselevel=-1)
+			writemsg(prefix + "disabling USE flags that trigger " + \
+				"optional dependencies.\n", noiselevel=-1)
+
+	def _show_merge_list(self):
+		if self._dynamic_config._serialized_tasks_cache is not None and \
+			not (self._dynamic_config._displayed_list is not None and \
+			(self._dynamic_config._displayed_list == self._dynamic_config._serialized_tasks_cache or \
+			self._dynamic_config._displayed_list == \
+				list(reversed(self._dynamic_config._serialized_tasks_cache)))):
+			display_list = self._dynamic_config._serialized_tasks_cache[:]
+			if "--tree" in self._frozen_config.myopts:
+				display_list.reverse()
+			self.display(display_list)
+
+	def _show_unsatisfied_blockers(self, blockers):
+		self._show_merge_list()
+		msg = "Error: The above package list contains " + \
+			"packages which cannot be installed " + \
+			"at the same time on the same system."
+		prefix = colorize("BAD", " * ")
+		portage.writemsg("\n", noiselevel=-1)
+		for line in textwrap.wrap(msg, 70):
+			portage.writemsg(prefix + line + "\n", noiselevel=-1)
+
+		# Display the conflicting packages along with the packages
+		# that pulled them in. This is helpful for troubleshooting
+		# cases in which blockers don't solve automatically and
+		# the reasons are not apparent from the normal merge list
+		# display.
+
+		conflict_pkgs = {}
+		for blocker in blockers:
+			for pkg in chain(self._dynamic_config._blocked_pkgs.child_nodes(blocker), \
+				self._dynamic_config._blocker_parents.parent_nodes(blocker)):
+				parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
+				if not parent_atoms:
+					atom = self._dynamic_config._blocked_world_pkgs.get(pkg)
+					if atom is not None:
+						parent_atoms = set([("@selected", atom)])
+				if parent_atoms:
+					conflict_pkgs[pkg] = parent_atoms
+
+		if conflict_pkgs:
+			# Reduce noise by pruning packages that are only
+			# pulled in by other conflict packages.
+			pruned_pkgs = set()
+			for pkg, parent_atoms in conflict_pkgs.items():
+				relevant_parent = False
+				for parent, atom in parent_atoms:
+					if parent not in conflict_pkgs:
+						relevant_parent = True
+						break
+				if not relevant_parent:
+					pruned_pkgs.add(pkg)
+			for pkg in pruned_pkgs:
+				del conflict_pkgs[pkg]
+
+		if conflict_pkgs:
+			msg = []
+			msg.append("\n")
+			indent = "  "
+			for pkg, parent_atoms in conflict_pkgs.items():
+
+				# Prefer packages that are not directly involved in a conflict.
+				# It can be essential to see all the packages here, so don't
+				# omit any. If the list is long, people can simply use a pager.
+				preferred_parents = set()
+				for parent_atom in parent_atoms:
+					parent, atom = parent_atom
+					if parent not in conflict_pkgs:
+						preferred_parents.add(parent_atom)
+
+				ordered_list = list(preferred_parents)
+				if len(parent_atoms) > len(ordered_list):
+					for parent_atom in parent_atoms:
+						if parent_atom not in preferred_parents:
+							ordered_list.append(parent_atom)
+
+				msg.append(indent + "%s pulled in by\n" % pkg)
+
+				for parent_atom in ordered_list:
+					parent, atom = parent_atom
+					msg.append(2*indent)
+					if isinstance(parent,
+						(PackageArg, AtomArg)):
+						# For PackageArg and AtomArg types, it's
+						# redundant to display the atom attribute.
+						msg.append(str(parent))
+					else:
+						# Display the specific atom from SetArg or
+						# Package types.
+						msg.append("%s required by %s" % (atom, parent))
+					msg.append("\n")
+
+				msg.append("\n")
+
+			writemsg("".join(msg), noiselevel=-1)
+
+		if "--quiet" not in self._frozen_config.myopts:
+			show_blocker_docs_link()
+
+	def display(self, mylist, favorites=[], verbosity=None):
+
+		# This is used to prevent display_problems() from
+		# redundantly displaying this exact same merge list
+		# again via _show_merge_list().
+		self._dynamic_config._displayed_list = mylist
+		display = Display()
+
+		return display(self, mylist, favorites, verbosity)
+
+	def _display_autounmask(self):
+		"""
+		Display --autounmask message and optionally write it to config files
+		(using CONFIG_PROTECT). The message includes the comments and the changes.
+		"""
+
+		autounmask_write = self._frozen_config.myopts.get("--autounmask-write", "n") is True
+		autounmask_unrestricted_atoms = \
+			self._frozen_config.myopts.get("--autounmask-unrestricted-atoms", "n") is True
+		quiet = "--quiet" in self._frozen_config.myopts
+		pretend = "--pretend" in self._frozen_config.myopts
+		ask = "--ask" in self._frozen_config.myopts
+		enter_invalid = '--ask-enter-invalid' in self._frozen_config.myopts
+
+		def check_if_latest(pkg):
+			is_latest = True
+			is_latest_in_slot = True
+			dbs = self._dynamic_config._filtered_trees[pkg.root]["dbs"]
+			root_config = self._frozen_config.roots[pkg.root]
+
+			for db, pkg_type, built, installed, db_keys in dbs:
+				for other_pkg in self._iter_match_pkgs(root_config, pkg_type, Atom(pkg.cp)):
+					if other_pkg.cp != pkg.cp:
+						# old-style PROVIDE virtual means there are no
+						# normal matches for this pkg_type
+						break
+					if other_pkg > pkg:
+						is_latest = False
+						if other_pkg.slot_atom == pkg.slot_atom:
+							is_latest_in_slot = False
+							break
+					else:
+						# iter_match_pkgs yields highest version first, so
+						# there's no need to search this pkg_type any further
+						break
+
+				if not is_latest_in_slot:
+					break
+
+			return is_latest, is_latest_in_slot
+
+		# Set of roots we have autounmask changes for.
+		roots = set()
+
+		masked_by_missing_keywords = False
+		unstable_keyword_msg = {}
+		for pkg in self._dynamic_config._needed_unstable_keywords:
+			self._show_merge_list()
+			if pkg in self._dynamic_config.digraph:
+				root = pkg.root
+				roots.add(root)
+				unstable_keyword_msg.setdefault(root, [])
+				is_latest, is_latest_in_slot = check_if_latest(pkg)
+				pkgsettings = self._frozen_config.pkgsettings[pkg.root]
+				mreasons = _get_masking_status(pkg, pkgsettings, pkg.root_config,
+					use=self._pkg_use_enabled(pkg))
+				for reason in mreasons:
+					if reason.unmask_hint and \
+						reason.unmask_hint.key == 'unstable keyword':
+						keyword = reason.unmask_hint.value
+						if keyword == "**":
+							masked_by_missing_keywords = True
+
+						unstable_keyword_msg[root].append(self._get_dep_chain_as_comment(pkg))
+						if autounmask_unrestricted_atoms:
+							if is_latest:
+								unstable_keyword_msg[root].append(">=%s %s\n" % (pkg.cpv, keyword))
+							elif is_latest_in_slot:
+								unstable_keyword_msg[root].append(">=%s:%s %s\n" % (pkg.cpv, pkg.metadata["SLOT"], keyword))
+							else:
+								unstable_keyword_msg[root].append("=%s %s\n" % (pkg.cpv, keyword))
+						else:
+							unstable_keyword_msg[root].append("=%s %s\n" % (pkg.cpv, keyword))
+
+		p_mask_change_msg = {}
+		for pkg in self._dynamic_config._needed_p_mask_changes:
+			self._show_merge_list()
+			if pkg in self._dynamic_config.digraph:
+				root = pkg.root
+				roots.add(root)
+				p_mask_change_msg.setdefault(root, [])
+				is_latest, is_latest_in_slot = check_if_latest(pkg)
+				pkgsettings = self._frozen_config.pkgsettings[pkg.root]
+				mreasons = _get_masking_status(pkg, pkgsettings, pkg.root_config,
+					use=self._pkg_use_enabled(pkg))
+				for reason in mreasons:
+					if reason.unmask_hint and \
+						reason.unmask_hint.key == 'p_mask':
+						keyword = reason.unmask_hint.value
+
+						comment, filename = portage.getmaskingreason(
+							pkg.cpv, metadata=pkg.metadata,
+							settings=pkgsettings,
+							portdb=pkg.root_config.trees["porttree"].dbapi,
+							return_location=True)
+
+						p_mask_change_msg[root].append(self._get_dep_chain_as_comment(pkg))
+						if filename:
+							p_mask_change_msg[root].append("# %s:\n" % filename)
+						if comment:
+							comment = [line for line in
+								comment.splitlines() if line]
+							for line in comment:
+								p_mask_change_msg[root].append("%s\n" % line)
+						if autounmask_unrestricted_atoms:
+							if is_latest:
+								p_mask_change_msg[root].append(">=%s\n" % pkg.cpv)
+							elif is_latest_in_slot:
+								p_mask_change_msg[root].append(">=%s:%s\n" % (pkg.cpv, pkg.metadata["SLOT"]))
+							else:
+								p_mask_change_msg[root].append("=%s\n" % pkg.cpv)
+						else:
+							p_mask_change_msg[root].append("=%s\n" % pkg.cpv)
+
+		use_changes_msg = {}
+		for pkg, needed_use_config_change in self._dynamic_config._needed_use_config_changes.items():
+			self._show_merge_list()
+			if pkg in self._dynamic_config.digraph:
+				root = pkg.root
+				roots.add(root)
+				use_changes_msg.setdefault(root, [])
+				is_latest, is_latest_in_slot = check_if_latest(pkg)
+				changes = needed_use_config_change[1]
+				adjustments = []
+				for flag, state in changes.items():
+					if state:
+						adjustments.append(flag)
+					else:
+						adjustments.append("-" + flag)
+				use_changes_msg[root].append(self._get_dep_chain_as_comment(pkg, unsatisfied_dependency=True))
+				if is_latest:
+					use_changes_msg[root].append(">=%s %s\n" % (pkg.cpv, " ".join(adjustments)))
+				elif is_latest_in_slot:
+					use_changes_msg[root].append(">=%s:%s %s\n" % (pkg.cpv, pkg.metadata["SLOT"], " ".join(adjustments)))
+				else:
+					use_changes_msg[root].append("=%s %s\n" % (pkg.cpv, " ".join(adjustments)))
+
+		license_msg = {}
+		for pkg, missing_licenses in self._dynamic_config._needed_license_changes.items():
+			self._show_merge_list()
+			if pkg in self._dynamic_config.digraph:
+				root = pkg.root
+				roots.add(root)
+				license_msg.setdefault(root, [])
+				is_latest, is_latest_in_slot = check_if_latest(pkg)
+
+				license_msg[root].append(self._get_dep_chain_as_comment(pkg))
+				if is_latest:
+					license_msg[root].append(">=%s %s\n" % (pkg.cpv, " ".join(sorted(missing_licenses))))
+				elif is_latest_in_slot:
+					license_msg[root].append(">=%s:%s %s\n" % (pkg.cpv, pkg.metadata["SLOT"], " ".join(sorted(missing_licenses))))
+				else:
+					license_msg[root].append("=%s %s\n" % (pkg.cpv, " ".join(sorted(missing_licenses))))
+
+		def find_config_file(abs_user_config, file_name):
+			"""
+			Searches /etc/portage for an appropriate file to append changes to.
+			If the file_name is a file it is returned, if it is a directory, the
+			last file in it is returned. Order of traversal is identical to
+			portage.util.grablines(recursive=True).
+
+			file_name - String containing a file name like "package.use"
+			return value - String. Absolute path of file to write to. None if
+			no suitable file exists.
+			"""
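+			# For example (hypothetical): if file_name is "package.use" and
+			# /etc/portage/package.use is a directory, the last regular file
+			# found is returned, so appended changes are parsed last and
+			# take precedence.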
+			file_path = os.path.join(abs_user_config, file_name)
+
+			try:
+				os.lstat(file_path)
+			except OSError as e:
+				if e.errno == errno.ENOENT:
+					# The file doesn't exist, so we'll
+					# simply create it.
+					return file_path
+
+				# Disk or file system trouble?
+				return None
+
+			last_file_path = None
+			stack = [file_path]
+			while stack:
+				p = stack.pop()
+				try:
+					st = os.stat(p)
+				except OSError:
+					pass
+				else:
+					if stat.S_ISREG(st.st_mode):
+						last_file_path = p
+					elif stat.S_ISDIR(st.st_mode):
+						if os.path.basename(p) in _ignorecvs_dirs:
+							continue
+						try:
+							contents = os.listdir(p)
+						except OSError:
+							pass
+						else:
+							contents.sort(reverse=True)
+							for child in contents:
+								if child.startswith(".") or \
+									child.endswith("~"):
+									continue
+								stack.append(os.path.join(p, child))
+
+			return last_file_path
+
+		write_to_file = autounmask_write and not pretend
+		# Make sure we have a file to write to before doing any write.
+		file_to_write_to = {}
+		problems = []
+		if write_to_file:
+			for root in roots:
+				settings = self._frozen_config.roots[root].settings
+				abs_user_config = os.path.join(
+					settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
+
+				if root in unstable_keyword_msg:
+					if not os.path.exists(os.path.join(abs_user_config,
+						"package.keywords")):
+						filename = "package.accept_keywords"
+					else:
+						filename = "package.keywords"
+					file_to_write_to[(abs_user_config, "package.keywords")] = \
+						find_config_file(abs_user_config, filename)
+
+				if root in p_mask_change_msg:
+					file_to_write_to[(abs_user_config, "package.unmask")] = \
+						find_config_file(abs_user_config, "package.unmask")
+
+				if root in use_changes_msg:
+					file_to_write_to[(abs_user_config, "package.use")] = \
+						find_config_file(abs_user_config, "package.use")
+
+				if root in license_msg:
+					file_to_write_to[(abs_user_config, "package.license")] = \
+						find_config_file(abs_user_config, "package.license")
+
+			for (abs_user_config, f), path in file_to_write_to.items():
+				if path is None:
+					problems.append("!!! No file to write for '%s'\n" % os.path.join(abs_user_config, f))
+
+			write_to_file = not problems
+
+		def format_msg(lines):
+			lines = lines[:]
+			for i, line in enumerate(lines):
+				if line.startswith("#"):
+					continue
+				lines[i] = colorize("INFORM", line.rstrip()) + "\n"
+			return "".join(lines)
+
+		for root in roots:
+			settings = self._frozen_config.roots[root].settings
+			abs_user_config = os.path.join(
+				settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
+
+			if len(roots) > 1:
+				writemsg_stdout("\nFor %s:\n" % abs_user_config, noiselevel=-1)
+
+			if root in unstable_keyword_msg:
+				writemsg_stdout("\nThe following " + colorize("BAD", "keyword changes") + \
+					" are necessary to proceed:\n", noiselevel=-1)
+				writemsg_stdout(format_msg(unstable_keyword_msg[root]), noiselevel=-1)
+
+			if root in p_mask_change_msg:
+				writemsg_stdout("\nThe following " + colorize("BAD", "mask changes") + \
+					" are necessary to proceed:\n", noiselevel=-1)
+				writemsg_stdout(format_msg(p_mask_change_msg[root]), noiselevel=-1)
+
+			if root in use_changes_msg:
+				writemsg_stdout("\nThe following " + colorize("BAD", "USE changes") + \
+					" are necessary to proceed:\n", noiselevel=-1)
+				writemsg_stdout(format_msg(use_changes_msg[root]), noiselevel=-1)
+
+			if root in license_msg:
+				writemsg_stdout("\nThe following " + colorize("BAD", "license changes") + \
+					" are necessary to proceed:\n", noiselevel=-1)
+				writemsg_stdout(format_msg(license_msg[root]), noiselevel=-1)
+
+		protect_obj = {}
+		if write_to_file:
+			for root in roots:
+				settings = self._frozen_config.roots[root].settings
+				protect_obj[root] = ConfigProtect(settings["EROOT"], \
+					shlex_split(settings.get("CONFIG_PROTECT", "")),
+					shlex_split(settings.get("CONFIG_PROTECT_MASK", "")))
+
+		def write_changes(root, changes, file_to_write_to):
+			file_contents = None
+			try:
+				file_contents = io.open(
+					_unicode_encode(file_to_write_to,
+					encoding=_encodings['fs'], errors='strict'),
+					mode='r', encoding=_encodings['content'],
+					errors='replace').readlines()
+			except IOError as e:
+				if e.errno == errno.ENOENT:
+					file_contents = []
+				else:
+					problems.append("!!! Failed to read '%s': %s\n" % \
+						(file_to_write_to, e))
+			if file_contents is not None:
+				file_contents.extend(changes)
+				if protect_obj[root].isprotected(file_to_write_to):
+					# We want to force new_protect_filename to ensure
+					# that the user will see all our changes via
+					# dispatch-conf, even if file_to_write_to doesn't
+					# exist yet, so we specify force=True.
+					file_to_write_to = new_protect_filename(file_to_write_to,
+						force=True)
+				try:
+					write_atomic(file_to_write_to, "".join(file_contents))
+				except PortageException:
+					problems.append("!!! Failed to write '%s'\n" % file_to_write_to)
+
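+		# Note on the protected-file path above (standard Portage behavior,
+		# not new here): when the target falls under CONFIG_PROTECT,
+		# new_protect_filename(force=True) redirects the write to a
+		# "._cfg0000_"-prefixed sibling, e.g.
+		# /etc/portage/._cfg0000_package.use, which dispatch-conf then
+		# offers for review.
+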
+		if not quiet and (p_mask_change_msg or masked_by_missing_keywords):
+			msg = [
+				"",
+				"NOTE: The --autounmask-keep-masks option will prevent emerge",
+				"      from creating package.unmask or ** keyword changes."
+			]
+			for line in msg:
+				if line:
+					line = colorize("INFORM", line)
+				writemsg_stdout(line + "\n", noiselevel=-1)
+
+		if ask and write_to_file and file_to_write_to:
+			prompt = "\nWould you like to add these " + \
+				"changes to your config files?"
+			if userquery(prompt, enter_invalid) == 'No':
+				write_to_file = False
+
+		if write_to_file and file_to_write_to:
+			for root in roots:
+				settings = self._frozen_config.roots[root].settings
+				abs_user_config = os.path.join(
+					settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
+				ensure_dirs(abs_user_config)
+
+				if root in unstable_keyword_msg:
+					write_changes(root, unstable_keyword_msg[root],
+						file_to_write_to.get((abs_user_config, "package.keywords")))
+
+				if root in p_mask_change_msg:
+					write_changes(root, p_mask_change_msg[root],
+						file_to_write_to.get((abs_user_config, "package.unmask")))
+
+				if root in use_changes_msg:
+					write_changes(root, use_changes_msg[root],
+						file_to_write_to.get((abs_user_config, "package.use")))
+
+				if root in license_msg:
+					write_changes(root, license_msg[root],
+						file_to_write_to.get((abs_user_config, "package.license")))
+
+		if problems:
+			writemsg_stdout("\nThe following problems occurred while writing autounmask changes:\n", \
+				noiselevel=-1)
+			writemsg_stdout("".join(problems), noiselevel=-1)
+		elif write_to_file and roots:
+			writemsg_stdout("\nAutounmask changes successfully written. Remember to run dispatch-conf.\n", \
+				noiselevel=-1)
+		elif not pretend and not autounmask_write and roots:
+			writemsg_stdout("\nUse --autounmask-write to write changes to config files (honoring CONFIG_PROTECT).\n", \
+				noiselevel=-1)
+
+
+	def display_problems(self):
+		"""
+		Display problems with the dependency graph such as slot collisions.
+		This is called internally by display() to show the problems _after_
+		the merge list where it is most likely to be seen, but if display()
+		is not going to be called then this method should be called explicitly
+		to ensure that the user is notified of problems with the graph.
+
+		All output goes to stderr, except for unsatisfied dependencies which
+		go to stdout for parsing by programs such as autounmask.
+		"""
+
+		# Note that show_masked_packages() sends its output to
+		# stdout, and some programs such as autounmask parse the
+		# output in cases when emerge bails out. However, when
+		# show_masked_packages() is called for installed packages
+		# here, the message is a warning that is more appropriate
+		# to send to stderr, so temporarily redirect stdout to
+		# stderr. TODO: Fix output code so there's a cleaner way
+		# to redirect everything to stderr.
+		sys.stdout.flush()
+		sys.stderr.flush()
+		stdout = sys.stdout
+		try:
+			sys.stdout = sys.stderr
+			self._display_problems()
+		finally:
+			sys.stdout = stdout
+			sys.stdout.flush()
+			sys.stderr.flush()
+
+		# This goes to stdout for parsing by programs like autounmask.
+		for pargs, kwargs in self._dynamic_config._unsatisfied_deps_for_display:
+			self._show_unsatisfied_dep(*pargs, **kwargs)
+
+	def _display_problems(self):
+		if self._dynamic_config._circular_deps_for_display is not None:
+			self._show_circular_deps(
+				self._dynamic_config._circular_deps_for_display)
+
+		# The slot conflict display has better noise reduction than
+		# the unsatisfied blockers display, so skip unsatisfied blockers
+		# display if there are slot conflicts (see bug #385391).
+		if self._dynamic_config._slot_collision_info:
+			self._show_slot_collision_notice()
+		elif self._dynamic_config._unsatisfied_blockers_for_display is not None:
+			self._show_unsatisfied_blockers(
+				self._dynamic_config._unsatisfied_blockers_for_display)
+		else:
+			self._show_missed_update()
+
+		self._show_ignored_binaries()
+
+		self._display_autounmask()
+
+		# TODO: Add generic support for "set problem" handlers so that
+		# the below warnings aren't special cases for world only.
+
+		if self._dynamic_config._missing_args:
+			world_problems = False
+			if "world" in self._dynamic_config.sets[
+				self._frozen_config.target_root].sets:
+				# Filter out indirect members of world (from nested sets)
+				# since only direct members of world are desired here.
+				world_set = self._frozen_config.roots[self._frozen_config.target_root].sets["selected"]
+				for arg, atom in self._dynamic_config._missing_args:
+					if arg.name in ("selected", "world") and atom in world_set:
+						world_problems = True
+						break
+
+			if world_problems:
+				sys.stderr.write("\n!!! Problems have been " + \
+					"detected with your world file\n")
+				sys.stderr.write("!!! Please run " + \
+					green("emaint --check world")+"\n\n")
+
+		if self._dynamic_config._missing_args:
+			sys.stderr.write("\n" + colorize("BAD", "!!!") + \
+				" Ebuilds for the following packages are either all\n")
+			sys.stderr.write(colorize("BAD", "!!!") + \
+				" masked or don't exist:\n")
+			sys.stderr.write(" ".join(str(atom) for arg, atom in \
+				self._dynamic_config._missing_args) + "\n")
+
+		if self._dynamic_config._pprovided_args:
+			arg_refs = {}
+			for arg, atom in self._dynamic_config._pprovided_args:
+				if isinstance(arg, SetArg):
+					parent = arg.name
+					arg_atom = (atom, atom)
+				else:
+					parent = "args"
+					arg_atom = (arg.arg, atom)
+				refs = arg_refs.setdefault(arg_atom, [])
+				if parent not in refs:
+					refs.append(parent)
+			msg = []
+			msg.append(bad("\nWARNING: "))
+			if len(self._dynamic_config._pprovided_args) > 1:
+				msg.append("Requested packages will not be " + \
+					"merged because they are listed in\n")
+			else:
+				msg.append("A requested package will not be " + \
+					"merged because it is listed in\n")
+			msg.append("package.provided:\n\n")
+			problems_sets = set()
+			for (arg, atom), refs in arg_refs.items():
+				ref_string = ""
+				if refs:
+					problems_sets.update(refs)
+					refs.sort()
+					ref_string = ", ".join(["'%s'" % name for name in refs])
+					ref_string = " pulled in by " + ref_string
+				msg.append("  %s%s\n" % (colorize("INFORM", str(arg)), ref_string))
+			msg.append("\n")
+			if "selected" in problems_sets or "world" in problems_sets:
+				msg.append("This problem can be solved in one of the following ways:\n\n")
+				msg.append("  A) Use emaint to clean offending packages from world (if not installed).\n")
+				msg.append("  B) Uninstall offending packages (cleans them from world).\n")
+				msg.append("  C) Remove offending entries from package.provided.\n\n")
+				msg.append("The best course of action depends on the reason that an offending\n")
+				msg.append("package.provided entry exists.\n\n")
+			sys.stderr.write("".join(msg))
+
+		masked_packages = []
+		for pkg in self._dynamic_config._masked_license_updates:
+			root_config = pkg.root_config
+			pkgsettings = self._frozen_config.pkgsettings[pkg.root]
+			mreasons = get_masking_status(pkg, pkgsettings, root_config, use=self._pkg_use_enabled(pkg))
+			masked_packages.append((root_config, pkgsettings,
+				pkg.cpv, pkg.repo, pkg.metadata, mreasons))
+		if masked_packages:
+			writemsg("\n" + colorize("BAD", "!!!") + \
+				" The following updates are masked by LICENSE changes:\n",
+				noiselevel=-1)
+			show_masked_packages(masked_packages)
+			show_mask_docs()
+			writemsg("\n", noiselevel=-1)
+
+		masked_packages = []
+		for pkg in self._dynamic_config._masked_installed:
+			root_config = pkg.root_config
+			pkgsettings = self._frozen_config.pkgsettings[pkg.root]
+			mreasons = get_masking_status(pkg, pkgsettings, root_config, use=self._pkg_use_enabled)
+			masked_packages.append((root_config, pkgsettings,
+				pkg.cpv, pkg.repo, pkg.metadata, mreasons))
+		if masked_packages:
+			writemsg("\n" + colorize("BAD", "!!!") + \
+				" The following installed packages are masked:\n",
+				noiselevel=-1)
+			show_masked_packages(masked_packages)
+			show_mask_docs()
+			writemsg("\n", noiselevel=-1)
+
+	def saveNomergeFavorites(self):
+		"""Find atoms in favorites that are not in the mergelist and add them
+		to the world file if necessary."""
+		for x in ("--buildpkgonly", "--fetchonly", "--fetch-all-uri",
+			"--oneshot", "--onlydeps", "--pretend"):
+			if x in self._frozen_config.myopts:
+				return
+		root_config = self._frozen_config.roots[self._frozen_config.target_root]
+		world_set = root_config.sets["selected"]
+
+		world_locked = False
+		if hasattr(world_set, "lock"):
+			world_set.lock()
+			world_locked = True
+
+		if hasattr(world_set, "load"):
+			world_set.load() # maybe it's changed on disk
+
+		args_set = self._dynamic_config.sets[
+			self._frozen_config.target_root].sets['__non_set_args__']
+		added_favorites = set()
+		for x in self._dynamic_config._set_nodes:
+			if x.operation != "nomerge":
+				continue
+
+			if x.root != root_config.root:
+				continue
+
+			try:
+				myfavkey = create_world_atom(x, args_set, root_config)
+				if myfavkey:
+					if myfavkey in added_favorites:
+						continue
+					added_favorites.add(myfavkey)
+			except portage.exception.InvalidDependString as e:
+				writemsg("\n\n!!! '%s' has invalid PROVIDE: %s\n" % \
+					(x.cpv, e), noiselevel=-1)
+				writemsg("!!! see '%s'\n\n" % os.path.join(
+					x.root, portage.VDB_PATH, x.cpv, "PROVIDE"), noiselevel=-1)
+				del e
+		all_added = []
+		for arg in self._dynamic_config._initial_arg_list:
+			if not isinstance(arg, SetArg):
+				continue
+			if arg.root_config.root != root_config.root:
+				continue
+			k = arg.name
+			if k in ("selected", "world") or \
+				not root_config.sets[k].world_candidate:
+				continue
+			s = SETPREFIX + k
+			if s in world_set:
+				continue
+			all_added.append(SETPREFIX + k)
+		all_added.extend(added_favorites)
+		all_added.sort()
+		for a in all_added:
+			writemsg_stdout(
+				">>> Recording %s in \"world\" favorites file...\n" % \
+				colorize("INFORM", str(a)), noiselevel=-1)
+		if all_added:
+			world_set.update(all_added)
+
+		if world_locked:
+			world_set.unlock()
+
+	def _loadResumeCommand(self, resume_data, skip_masked=True,
+		skip_missing=True):
+		"""
+		Add a resume command to the graph and validate it in the process.  This
+		will raise a PackageNotFound exception if a package is not available.
+		"""
+
+		self._load_vdb()
+
+		if not isinstance(resume_data, dict):
+			return False
+
+		mergelist = resume_data.get("mergelist")
+		if not isinstance(mergelist, list):
+			mergelist = []
+
+		favorites = resume_data.get("favorites")
+		if isinstance(favorites, list):
+			args = self._load_favorites(favorites)
+		else:
+			args = []
+
+		fakedb = self._dynamic_config.mydbapi
+		serialized_tasks = []
+		masked_tasks = []
+		for x in mergelist:
+			if not (isinstance(x, list) and len(x) == 4):
+				continue
+			pkg_type, myroot, pkg_key, action = x
+			if pkg_type not in self.pkg_tree_map:
+				continue
+			if action != "merge":
+				continue
+			root_config = self._frozen_config.roots[myroot]
+
+			# Use the resume "favorites" list to see if a repo was specified
+			# for this package.
+			depgraph_sets = self._dynamic_config.sets[root_config.root]
+			repo = None
+			for atom in depgraph_sets.atoms.getAtoms():
+				if atom.repo and portage.dep.match_from_list(atom, [pkg_key]):
+					repo = atom.repo
+					break
+
+			atom = "=" + pkg_key
+			if repo:
+				atom = atom + _repo_separator + repo
+
+			try:
+				atom = Atom(atom, allow_repo=True)
+			except InvalidAtom:
+				continue
+
+			pkg = None
+			for pkg in self._iter_match_pkgs(root_config, pkg_type, atom):
+				if not self._pkg_visibility_check(pkg) or \
+					self._frozen_config.excluded_pkgs.findAtomForPackage(pkg,
+						modified_use=self._pkg_use_enabled(pkg)):
+					continue
+				break
+
+			if pkg is None:
+				# It does not exist or it is corrupt.
+				if skip_missing:
+					# TODO: log these somewhere
+					continue
+				raise portage.exception.PackageNotFound(pkg_key)
+
+			if "merge" == pkg.operation and \
+				self._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
+					modified_use=self._pkg_use_enabled(pkg)):
+				continue
+
+			if "merge" == pkg.operation and not self._pkg_visibility_check(pkg):
+				if skip_masked:
+					masked_tasks.append(Dependency(root=pkg.root, parent=pkg))
+				else:
+					self._dynamic_config._unsatisfied_deps_for_display.append(
+						((pkg.root, "="+pkg.cpv), {"myparent":None}))
+
+			fakedb[myroot].cpv_inject(pkg)
+			serialized_tasks.append(pkg)
+			self._spinner_update()
+
+		if self._dynamic_config._unsatisfied_deps_for_display:
+			return False
+
+		if not serialized_tasks or "--nodeps" in self._frozen_config.myopts:
+			self._dynamic_config._serialized_tasks_cache = serialized_tasks
+			self._dynamic_config._scheduler_graph = self._dynamic_config.digraph
+		else:
+			self._select_package = self._select_pkg_from_graph
+			self._dynamic_config.myparams["selective"] = True
+			# Always traverse deep dependencies in order to account for
+			# potentially unsatisfied dependencies of installed packages.
+			# This is necessary for correct --keep-going or --resume operation
+			# in case a package from a group of circularly dependent packages
+			# fails. In this case, a package which has recently been installed
+			# may have an unsatisfied circular dependency (pulled in by
+			# PDEPEND, for example). So, even though a package is already
+			# installed, it may not have all of its dependencies satisfied, so
+			# it may not be usable. If such a package is in the subgraph of
+			# deep dependencies of a scheduled build, that build needs to
+			# be cancelled. In order for this type of situation to be
+			# recognized, deep traversal of dependencies is required.
+			self._dynamic_config.myparams["deep"] = True
+
+			for task in serialized_tasks:
+				if isinstance(task, Package) and \
+					task.operation == "merge":
+					if not self._add_pkg(task, None):
+						return False
+
+			# Packages for argument atoms need to be explicitly
+			# added via _add_pkg() so that they are included in the
+			# digraph (needed at least for --tree display).
+			for arg in self._expand_set_args(args, add_to_digraph=True):
+				for atom in arg.pset.getAtoms():
+					pkg, existing_node = self._select_package(
+						arg.root_config.root, atom)
+					if existing_node is None and \
+						pkg is not None:
+						if not self._add_pkg(pkg, Dependency(atom=atom,
+							root=pkg.root, parent=arg)):
+							return False
+
+			# Allow unsatisfied deps here to avoid showing a masking
+			# message for an unsatisfied dep that isn't necessarily
+			# masked.
+			if not self._create_graph(allow_unsatisfied=True):
+				return False
+
+			unsatisfied_deps = []
+			for dep in self._dynamic_config._unsatisfied_deps:
+				if not isinstance(dep.parent, Package):
+					continue
+				if dep.parent.operation == "merge":
+					unsatisfied_deps.append(dep)
+					continue
+
+				# For unsatisfied deps of installed packages, only account for
+				# them if they are in the subgraph of dependencies of a package
+				# which is scheduled to be installed.
+				unsatisfied_install = False
+				traversed = set()
+				dep_stack = self._dynamic_config.digraph.parent_nodes(dep.parent)
+				while dep_stack:
+					node = dep_stack.pop()
+					if not isinstance(node, Package):
+						continue
+					if node.operation == "merge":
+						unsatisfied_install = True
+						break
+					if node in traversed:
+						continue
+					traversed.add(node)
+					dep_stack.extend(self._dynamic_config.digraph.parent_nodes(node))
+
+				if unsatisfied_install:
+					unsatisfied_deps.append(dep)
+
+			if masked_tasks or unsatisfied_deps:
+				# This probably means that a required package
+				# was dropped via --skipfirst. It makes the
+				# resume list invalid, so convert it to a
+				# UnsatisfiedResumeDep exception.
+				raise self.UnsatisfiedResumeDep(self,
+					masked_tasks + unsatisfied_deps)
+			self._dynamic_config._serialized_tasks_cache = None
+			try:
+				self.altlist()
+			except self._unknown_internal_error:
+				return False
+
+		return True
+
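+	# Illustrative resume_data, as consumed above (hypothetical package;
+	# emerge keeps this structure in mtimedb["resume"]):
+	#
+	#   {"mergelist": [["ebuild", "/", "dev-libs/foo-1.0", "merge"]],
+	#    "favorites": ["dev-libs/foo", "@world"]}
+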
+	def _load_favorites(self, favorites):
+		"""
+		Use a list of favorites to resume state from a
+		previous select_files() call. This creates similar
+		DependencyArg instances to those that would have
+		been created by the original select_files() call.
+		This allows Package instances to be matched with
+		DependencyArg instances during graph creation.
+		"""
+		root_config = self._frozen_config.roots[self._frozen_config.target_root]
+		sets = root_config.sets
+		depgraph_sets = self._dynamic_config.sets[root_config.root]
+		args = []
+		for x in favorites:
+			if not isinstance(x, basestring):
+				continue
+			if x in ("system", "world"):
+				x = SETPREFIX + x
+			if x.startswith(SETPREFIX):
+				s = x[len(SETPREFIX):]
+				if s not in sets:
+					continue
+				if s in depgraph_sets.sets:
+					continue
+				pset = sets[s]
+				depgraph_sets.sets[s] = pset
+				args.append(SetArg(arg=x, pset=pset,
+					root_config=root_config))
+			else:
+				try:
+					x = Atom(x, allow_repo=True)
+				except portage.exception.InvalidAtom:
+					continue
+				args.append(AtomArg(arg=x, atom=x,
+					root_config=root_config))
+
+		self._set_args(args)
+		return args
+
+	class UnsatisfiedResumeDep(portage.exception.PortageException):
+		"""
+		A dependency of a resume list is not installed. This
+		can occur when a required package is dropped from the
+		merge list via --skipfirst.
+		"""
+		def __init__(self, depgraph, value):
+			portage.exception.PortageException.__init__(self, value)
+			self.depgraph = depgraph
+
+	class _internal_exception(portage.exception.PortageException):
+		def __init__(self, value=""):
+			portage.exception.PortageException.__init__(self, value)
+
+	class _unknown_internal_error(_internal_exception):
+		"""
+		Used by the depgraph internally to terminate graph creation.
+		The specific reason for the failure should have been dumped
+		to stderr, unfortunately, the exact reason for the failure
+		may not be known.
+		"""
+
+	class _serialize_tasks_retry(_internal_exception):
+		"""
+		This is raised by the _serialize_tasks() method when it needs to
+		be called again for some reason. The only case that it's currently
+		used for is when neglected dependencies need to be added to the
+		graph in order to avoid making a potentially unsafe decision.
+		"""
+
+	class _backtrack_mask(_internal_exception):
+		"""
+		This is raised by _show_unsatisfied_dep() when it's called with
+		check_backtrack=True and a matching package has been masked by
+		backtracking.
+		"""
+
+	class _autounmask_breakage(_internal_exception):
+		"""
+		This is raised by _show_unsatisfied_dep() when it's called with
+		check_autounmask_breakage=True and a matching package has been
+		disqualified due to autounmask changes.
+		"""
+
+	def need_restart(self):
+		return self._dynamic_config._need_restart and \
+			not self._dynamic_config._skip_restart
+
+	def success_without_autounmask(self):
+		return self._dynamic_config._success_without_autounmask
+
+	def autounmask_breakage_detected(self):
+		try:
+			for pargs, kwargs in self._dynamic_config._unsatisfied_deps_for_display:
+				self._show_unsatisfied_dep(
+					*pargs, check_autounmask_breakage=True, **kwargs)
+		except self._autounmask_breakage:
+			return True
+		return False
+
+	def get_backtrack_infos(self):
+		return self._dynamic_config._backtrack_infos
+
+
+class _dep_check_composite_db(dbapi):
+	"""
+	A dbapi-like interface that is optimized for use in dep_check() calls.
+	This is built on top of the existing depgraph package selection logic.
+	Some packages that have been added to the graph may be masked from this
+	view in order to influence the atom preference selection that occurs
+	via dep_check().
+	"""
+	def __init__(self, depgraph, root):
+		dbapi.__init__(self)
+		self._depgraph = depgraph
+		self._root = root
+		self._match_cache = {}
+		self._cpv_pkg_map = {}
+
+	def _clear_cache(self):
+		self._match_cache.clear()
+		self._cpv_pkg_map.clear()
+
+	def cp_list(self, cp):
+		"""
+		Emulate cp_list just so it can be used to check for existence
+		of new-style virtuals. Since it's a waste of time to return
+		more than one cpv for this use case, a maximum of one cpv will
+		be returned.
+		"""
+		if isinstance(cp, Atom):
+			atom = cp
+		else:
+			atom = Atom(cp)
+		ret = []
+		for pkg in self._depgraph._iter_match_pkgs_any(
+			self._depgraph._frozen_config.roots[self._root], atom):
+			if pkg.cp == cp:
+				ret.append(pkg.cpv)
+				break
+
+		return ret
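+
+	# Illustrative call (hypothetical cpv): cp_list("virtual/jdk") returns
+	# at most one match, e.g. ["virtual/jdk-1.6.0"].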
+
+	def match(self, atom):
+		cache_key = (atom, atom.unevaluated_atom)
+		ret = self._match_cache.get(cache_key)
+		if ret is not None:
+			return ret[:]
+
+		ret = []
+		pkg, existing = self._depgraph._select_package(self._root, atom)
+
+		if pkg is not None and self._visible(pkg):
+			self._cpv_pkg_map[pkg.cpv] = pkg
+			ret.append(pkg.cpv)
+
+		if pkg is not None and \
+			atom.slot is None and \
+			pkg.cp.startswith("virtual/") and \
+			(("remove" not in self._depgraph._dynamic_config.myparams and
+			"--update" not in self._depgraph._frozen_config.myopts) or
+			not ret or
+			not self._depgraph._virt_deps_visible(pkg, ignore_use=True)):
+			# For new-style virtual lookahead that occurs inside dep_check()
+			# for bug #141118, examine all slots. This is needed so that newer
+			# slots will not unnecessarily be pulled in when a satisfying lower
+			# slot is already installed. For example, if virtual/jdk-1.5 is
+			# satisfied via gcj-jdk then there's no need to pull in a newer
+			# slot to satisfy a virtual/jdk dependency, unless --update is
+			# enabled.
+			slots = set()
+			slots.add(pkg.slot)
+			for virt_pkg in self._depgraph._iter_match_pkgs_any(
+				self._depgraph._frozen_config.roots[self._root], atom):
+				if virt_pkg.cp != pkg.cp:
+					continue
+				slots.add(virt_pkg.slot)
+
+			slots.remove(pkg.slot)
+			while slots:
+				slot_atom = atom.with_slot(slots.pop())
+				pkg, existing = self._depgraph._select_package(
+					self._root, slot_atom)
+				if not pkg:
+					continue
+				if not self._visible(pkg):
+					continue
+				self._cpv_pkg_map[pkg.cpv] = pkg
+				ret.append(pkg.cpv)
+
+			if len(ret) > 1:
+				self._cpv_sort_ascending(ret)
+
+		self._match_cache[cache_key] = ret
+		return ret[:]
+
+	def _visible(self, pkg):
+		if pkg.installed and "selective" not in self._depgraph._dynamic_config.myparams:
+			try:
+				arg = next(self._depgraph._iter_atoms_for_pkg(pkg))
+			except (StopIteration, portage.exception.InvalidDependString):
+				arg = None
+			if arg:
+				return False
+		if pkg.installed and \
+			(pkg.masks or not self._depgraph._pkg_visibility_check(pkg)):
+			# Account for packages with masks (like KEYWORDS masks)
+			# that are usually ignored in visibility checks for
+			# installed packages, in order to handle cases like
+			# bug #350285.
+			myopts = self._depgraph._frozen_config.myopts
+			use_ebuild_visibility = myopts.get(
+				'--use-ebuild-visibility', 'n') != 'n'
+			avoid_update = "--update" not in myopts and \
+				"remove" not in self._depgraph._dynamic_config.myparams
+			usepkgonly = "--usepkgonly" in myopts
+			if not avoid_update:
+				if not use_ebuild_visibility and usepkgonly:
+					return False
+				else:
+					try:
+						pkg_eb = self._depgraph._pkg(
+							pkg.cpv, "ebuild", pkg.root_config,
+							myrepo=pkg.repo)
+					except portage.exception.PackageNotFound:
+						pkg_eb_visible = False
+						for pkg_eb in self._depgraph._iter_match_pkgs(
+							pkg.root_config, "ebuild",
+							Atom("=%s" % (pkg.cpv,))):
+							if self._depgraph._pkg_visibility_check(pkg_eb):
+								pkg_eb_visible = True
+								break
+						if not pkg_eb_visible:
+							return False
+					else:
+						if not self._depgraph._pkg_visibility_check(pkg_eb):
+							return False
+
+		in_graph = self._depgraph._dynamic_config._slot_pkg_map[
+			self._root].get(pkg.slot_atom)
+		if in_graph is None:
+			# Mask choices for packages which are not the highest visible
+			# version within their slot (since they usually trigger slot
+			# conflicts).
+			highest_visible, in_graph = self._depgraph._select_package(
+				self._root, pkg.slot_atom)
+			# Note: highest_visible is not necessarily the real highest
+			# visible, especially when --update is not enabled, so use
+			# < operator instead of !=.
+			if highest_visible is not None and pkg < highest_visible:
+				return False
+		elif in_graph != pkg:
+			# Mask choices for packages that would trigger a slot
+			# conflict with a previously selected package.
+			return False
+		return True
+
+	def aux_get(self, cpv, wants):
+		metadata = self._cpv_pkg_map[cpv].metadata
+		return [metadata.get(x, "") for x in wants]
+
+	def match_pkgs(self, atom):
+		return [self._cpv_pkg_map[cpv] for cpv in self.match(atom)]
+
+def ambiguous_package_name(arg, atoms, root_config, spinner, myopts):
+
+	if "--quiet" in myopts:
+		writemsg("!!! The short ebuild name \"%s\" is ambiguous. Please specify\n" % arg, noiselevel=-1)
+		writemsg("!!! one of the following fully-qualified ebuild names instead:\n\n", noiselevel=-1)
+		for cp in sorted(set(portage.dep_getkey(atom) for atom in atoms)):
+			writemsg("    " + colorize("INFORM", cp) + "\n", noiselevel=-1)
+		return
+
+	s = search(root_config, spinner, "--searchdesc" in myopts,
+		"--quiet" not in myopts, "--usepkg" in myopts,
+		"--usepkgonly" in myopts)
+	null_cp = portage.dep_getkey(insert_category_into_atom(
+		arg, "null"))
+	cat, atom_pn = portage.catsplit(null_cp)
+	s.searchkey = atom_pn
+	for cp in sorted(set(portage.dep_getkey(atom) for atom in atoms)):
+		s.addCP(cp)
+	s.output()
+	writemsg("!!! The short ebuild name \"%s\" is ambiguous. Please specify\n" % arg, noiselevel=-1)
+	writemsg("!!! one of the above fully-qualified ebuild names instead.\n\n", noiselevel=-1)
+
+def _spinner_start(spinner, myopts):
+	if spinner is None:
+		return
+	if "--quiet" not in myopts and \
+		("--pretend" in myopts or "--ask" in myopts or \
+		"--tree" in myopts or "--verbose" in myopts):
+		action = ""
+		if "--fetchonly" in myopts or "--fetch-all-uri" in myopts:
+			action = "fetched"
+		elif "--buildpkgonly" in myopts:
+			action = "built"
+		else:
+			action = "merged"
+		if "--tree" in myopts and action != "fetched": # Tree doesn't work with fetching
+			if "--unordered-display" in myopts:
+				portage.writemsg_stdout("\n" + \
+					darkgreen("These are the packages that " + \
+					"would be %s:" % action) + "\n\n")
+			else:
+				portage.writemsg_stdout("\n" + \
+					darkgreen("These are the packages that " + \
+					"would be %s, in reverse order:" % action) + "\n\n")
+		else:
+			portage.writemsg_stdout("\n" + \
+				darkgreen("These are the packages that " + \
+				"would be %s, in order:" % action) + "\n\n")
+
+	show_spinner = "--quiet" not in myopts and "--nodeps" not in myopts
+	if not show_spinner:
+		spinner.update = spinner.update_quiet
+
+	if show_spinner:
+		portage.writemsg_stdout("Calculating dependencies  ")
+
+def _spinner_stop(spinner):
+	if spinner is None or \
+		spinner.update == spinner.update_quiet:
+		return
+
+	if spinner.update != spinner.update_basic:
+		# update_basic is used for non-tty output,
+		# so don't output backspaces in that case.
+		portage.writemsg_stdout("\b\b")
+
+	portage.writemsg_stdout("... done!\n")
+
+def backtrack_depgraph(settings, trees, myopts, myparams, 
+	myaction, myfiles, spinner):
+	"""
+	Raises PackageSetNotFound if myfiles contains a missing package set.
+	"""
+	_spinner_start(spinner, myopts)
+	try:
+		return _backtrack_depgraph(settings, trees, myopts, myparams, 
+			myaction, myfiles, spinner)
+	finally:
+		_spinner_stop(spinner)
+
+
+def _backtrack_depgraph(settings, trees, myopts, myparams, myaction, myfiles, spinner):
+
+	debug = "--debug" in myopts
+	mydepgraph = None
+	max_retries = myopts.get('--backtrack', 10)
+	max_depth = max(1, (max_retries + 1) // 2)
+	allow_backtracking = max_retries > 0
+	backtracker = Backtracker(max_depth)
+	backtracked = 0
+
+	frozen_config = _frozen_depgraph_config(settings, trees,
+		myopts, spinner)
+
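+	# Summary of the loop below (no new behavior): each pass consumes one
+	# parameter set from the backtracker; when the resulting depgraph
+	# reports need_restart(), its runtime masks are fed back through
+	# get_backtrack_infos(), for at most max_retries attempts.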
+	while backtracker:
+
+		if debug and mydepgraph is not None:
+			writemsg_level(
+				"\n\nbacktracking try %s \n\n" % \
+				backtracked, noiselevel=-1, level=logging.DEBUG)
+			mydepgraph.display_problems()
+
+		backtrack_parameters = backtracker.get()
+
+		mydepgraph = depgraph(settings, trees, myopts, myparams, spinner,
+			frozen_config=frozen_config,
+			allow_backtracking=allow_backtracking,
+			backtrack_parameters=backtrack_parameters)
+		success, favorites = mydepgraph.select_files(myfiles)
+
+		if success or mydepgraph.success_without_autounmask():
+			break
+		elif not allow_backtracking:
+			break
+		elif backtracked >= max_retries:
+			break
+		elif mydepgraph.need_restart():
+			backtracked += 1
+			backtracker.feedback(mydepgraph.get_backtrack_infos())
+		else:
+			break
+
+	if not (success or mydepgraph.success_without_autounmask()) and backtracked:
+
+		if debug:
+			writemsg_level(
+				"\n\nbacktracking aborted after %s tries\n\n" % \
+				backtracked, noiselevel=-1, level=logging.DEBUG)
+			mydepgraph.display_problems()
+
+		mydepgraph = depgraph(settings, trees, myopts, myparams, spinner,
+			frozen_config=frozen_config,
+			allow_backtracking=False,
+			backtrack_parameters=backtracker.get_best_run())
+		success, favorites = mydepgraph.select_files(myfiles)
+
+	if not success and mydepgraph.autounmask_breakage_detected():
+		if debug:
+			writemsg_level(
+				"\n\nautounmask breakage detected\n\n",
+				noiselevel=-1, level=logging.DEBUG)
+			mydepgraph.display_problems()
+		myopts["--autounmask"] = "n"
+		mydepgraph = depgraph(settings, trees, myopts, myparams, spinner,
+			frozen_config=frozen_config, allow_backtracking=False)
+		success, favorites = mydepgraph.select_files(myfiles)
+
+	return (success, mydepgraph, favorites)
+
+
+def resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
+	"""
+	Raises PackageSetNotFound if myfiles contains a missing package set.
+	"""
+	_spinner_start(spinner, myopts)
+	try:
+		return _resume_depgraph(settings, trees, mtimedb, myopts,
+			myparams, spinner)
+	finally:
+		_spinner_stop(spinner)
+
+def _resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
+	"""
+	Construct a depgraph for the given resume list. This will raise
+	PackageNotFound or depgraph.UnsatisfiedResumeDep when necessary.
+	TODO: Return reasons for dropped_tasks, for display/logging.
+	@rtype: tuple
+	@return: (success, depgraph, dropped_tasks)
+	"""
+	skip_masked = True
+	skip_unsatisfied = True
+	mergelist = mtimedb["resume"]["mergelist"]
+	dropped_tasks = set()
+	frozen_config = _frozen_depgraph_config(settings, trees,
+		myopts, spinner)
+	while True:
+		mydepgraph = depgraph(settings, trees,
+			myopts, myparams, spinner, frozen_config=frozen_config)
+		try:
+			success = mydepgraph._loadResumeCommand(mtimedb["resume"],
+				skip_masked=skip_masked)
+		except depgraph.UnsatisfiedResumeDep as e:
+			if not skip_unsatisfied:
+				raise
+
+			graph = mydepgraph._dynamic_config.digraph
+			unsatisfied_parents = dict((dep.parent, dep.parent) \
+				for dep in e.value)
+			traversed_nodes = set()
+			unsatisfied_stack = list(unsatisfied_parents)
+			while unsatisfied_stack:
+				pkg = unsatisfied_stack.pop()
+				if pkg in traversed_nodes:
+					continue
+				traversed_nodes.add(pkg)
+
+				# If this package was pulled in by a parent
+				# package scheduled for merge, removing this
+				# package may cause the parent package's
+				# dependency to become unsatisfied.
+				for parent_node in graph.parent_nodes(pkg):
+					if not isinstance(parent_node, Package) \
+						or parent_node.operation not in ("merge", "nomerge"):
+						continue
+					# We need to traverse all priorities here, in order to
+					# ensure that a package with an unsatisfied dependency
+					# won't get pulled in, even indirectly via a soft
+					# dependency.
+					unsatisfied_parents[parent_node] = parent_node
+					unsatisfied_stack.append(parent_node)
+
+			unsatisfied_tuples = frozenset(tuple(parent_node)
+				for parent_node in unsatisfied_parents
+				if isinstance(parent_node, Package))
+			pruned_mergelist = []
+			for x in mergelist:
+				if isinstance(x, list) and \
+					tuple(x) not in unsatisfied_tuples:
+					pruned_mergelist.append(x)
+
+			# If the mergelist doesn't shrink then this loop is infinite.
+			if len(pruned_mergelist) == len(mergelist):
+				# This happens if a package can't be dropped because
+				# it's already installed, but it has unsatisfied PDEPEND.
+				raise
+			mergelist[:] = pruned_mergelist
+
+			# Exclude installed packages that have been removed from the graph due
+			# to failure to build/install runtime dependencies after the dependent
+			# package has already been installed.
+			dropped_tasks.update(pkg for pkg in \
+				unsatisfied_parents if pkg.operation != "nomerge")
+
+			del e, graph, traversed_nodes, \
+				unsatisfied_parents, unsatisfied_stack
+			continue
+		else:
+			break
+	return (success, mydepgraph, dropped_tasks)
+
+def get_mask_info(root_config, cpv, pkgsettings,
+	db, pkg_type, built, installed, db_keys, myrepo = None, _pkg_use_enabled=None):
+	try:
+		metadata = dict(zip(db_keys,
+			db.aux_get(cpv, db_keys, myrepo=myrepo)))
+	except KeyError:
+		metadata = None
+
+	if metadata is None:
+		mreasons = ["corruption"]
+	else:
+		eapi = metadata['EAPI']
+		if eapi[:1] == '-':
+			eapi = eapi[1:]
+		if not portage.eapi_is_supported(eapi):
+			mreasons = ['EAPI %s' % eapi]
+		else:
+			pkg = Package(type_name=pkg_type, root_config=root_config,
+				cpv=cpv, built=built, installed=installed, metadata=metadata)
+
+			modified_use = None
+			if _pkg_use_enabled is not None:
+				modified_use = _pkg_use_enabled(pkg)
+
+			mreasons = get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=modified_use)
+
+	return metadata, mreasons
+
+def show_masked_packages(masked_packages):
+	shown_licenses = set()
+	shown_comments = set()
+	# Maybe there is both an ebuild and a binary. Only
+	# show one of them to avoid redundant appearance.
+	shown_cpvs = set()
+	have_eapi_mask = False
+	for (root_config, pkgsettings, cpv, repo,
+		metadata, mreasons) in masked_packages:
+		output_cpv = cpv
+		if repo:
+			output_cpv += _repo_separator + repo
+		if output_cpv in shown_cpvs:
+			continue
+		shown_cpvs.add(output_cpv)
+		eapi_masked = metadata is not None and \
+			not portage.eapi_is_supported(metadata["EAPI"])
+		if eapi_masked:
+			have_eapi_mask = True
+			# When masked by EAPI, metadata is mostly useless since
+			# it doesn't contain essential things like SLOT.
+			metadata = None
+		comment, filename = None, None
+		if not eapi_masked and \
+			"package.mask" in mreasons:
+			comment, filename = \
+				portage.getmaskingreason(
+				cpv, metadata=metadata,
+				settings=pkgsettings,
+				portdb=root_config.trees["porttree"].dbapi,
+				return_location=True)
+		missing_licenses = []
+		if not eapi_masked and metadata is not None:
+			try:
+				missing_licenses = \
+					pkgsettings._getMissingLicenses(
+						cpv, metadata)
+			except portage.exception.InvalidDependString:
+				# This will have already been reported
+				# above via mreasons.
+				pass
+
+		writemsg_stdout("- "+output_cpv+" (masked by: "+", ".join(mreasons)+")\n", noiselevel=-1)
+
+		if comment and comment not in shown_comments:
+			writemsg_stdout(filename + ":\n" + comment + "\n",
+				noiselevel=-1)
+			shown_comments.add(comment)
+		portdb = root_config.trees["porttree"].dbapi
+		for l in missing_licenses:
+			l_path = portdb.findLicensePath(l)
+			if l in shown_licenses:
+				continue
+			msg = ("A copy of the '%s' license" + \
+			" is located at '%s'.\n\n") % (l, l_path)
+			writemsg_stdout(msg, noiselevel=-1)
+			shown_licenses.add(l)
+	return have_eapi_mask
+
+def show_mask_docs():
+	writemsg_stdout("For more information, see the MASKED PACKAGES section in the emerge\n", noiselevel=-1)
+	writemsg_stdout("man page or refer to the Gentoo Handbook.\n", noiselevel=-1)
+
+def show_blocker_docs_link():
+	writemsg("\nFor more information about " + bad("Blocked Packages") + ", please refer to the following\n", noiselevel=-1)
+	writemsg("section of the Gentoo Linux x86 Handbook (architecture is irrelevant):\n\n", noiselevel=-1)
+	writemsg("http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?full=1#blocked\n\n", noiselevel=-1)
+
+def get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
+	return [mreason.message for \
+		mreason in _get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=use)]
+
+def _get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
+	mreasons = _getmaskingstatus(
+		pkg, settings=pkgsettings,
+		portdb=root_config.trees["porttree"].dbapi, myrepo=myrepo)
+
+	if not pkg.installed:
+		if not pkgsettings._accept_chost(pkg.cpv, pkg.metadata):
+			mreasons.append(_MaskReason("CHOST", "CHOST: %s" % \
+				pkg.metadata["CHOST"]))
+
+	if pkg.invalid:
+		for msgs in pkg.invalid.values():
+			for msg in msgs:
+				mreasons.append(
+					_MaskReason("invalid", "invalid: %s" % (msg,)))
+
+	if not pkg.metadata["SLOT"]:
+		mreasons.append(
+			_MaskReason("invalid", "SLOT: undefined"))
+
+	return mreasons



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-20 14:29 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-20 14:29 UTC (permalink / raw
  To: gentoo-commits

commit:     e09e651579629aa5e2e15367fb47af9d7c9f9d8f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun May 20 14:27:20 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun May 20 14:27:20 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e09e6515

add check_required_use2()

---
 gobs/pym/build_queru.py |    2 -
 gobs/pym/depgraph.py    |  191 ++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 187 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 004afbb..8b33440 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -153,8 +153,6 @@ class queruaction(object):
 			build_dict['check_fail'] = True
 		use_changes = None
 		if not success:
-			for pargs, kwargs in mydepgraph._dynamic_config._unsatisfied_deps_for_display:
-				mydepgraph._show_unsatisfied_dep(mydepgraph, *pargs, **kwargs)
 			mydepgraph.display_problems()
 			settings, trees, mtimedb = load_emerge_config()
 			myparams = create_depgraph_params(myopts, myaction)

diff --git a/gobs/pym/depgraph.py b/gobs/pym/depgraph.py
index 75d4db2..332fb9f 100644
--- a/gobs/pym/depgraph.py
+++ b/gobs/pym/depgraph.py
@@ -20,7 +20,8 @@ from portage import _unicode_decode, _unicode_encode, _encodings
 from portage.const import PORTAGE_PACKAGE_ATOM, USER_CONFIG_PATH
 from portage.dbapi import dbapi
 from portage.dep import Atom, best_match_to_list, extract_affecting_use, \
-	check_required_use, human_readable_required_use, _repo_separator
+	check_required_use, human_readable_required_use, _repo_separator, \
+	_RequiredUseBranch, _RequiredUseLeaf
 from portage.eapi import eapi_has_strong_blocks, eapi_has_required_use
 from portage.exception import InvalidAtom, InvalidDependString, PortageException
 from portage.output import colorize, create_color_func, \
@@ -1099,7 +1100,7 @@ class depgraph(object):
 		# package selection, since we want to prompt the user
 		# for USE adjustment rather than have REQUIRED_USE
 		# affect package selection and || dep choices.
-		if not pkg.built and pkg.metadata.get("REQUIRED_USE") and \
+		"""if not pkg.built and pkg.metadata.get("REQUIRED_USE") and \
 			eapi_has_required_use(pkg.metadata["EAPI"]):
 			required_use_is_sat = check_required_use(
 				pkg.metadata["REQUIRED_USE"],
@@ -1120,7 +1121,7 @@ class depgraph(object):
 				self._dynamic_config._unsatisfied_deps_for_display.append(
 					((pkg.root, atom), {"myparent":dep.parent}))
 				self._dynamic_config._skip_restart = True
-				return 0
+				return 0"""
 
 		if not pkg.onlydeps:
 
@@ -3775,6 +3776,179 @@ class depgraph(object):
 				self._dynamic_config._need_restart = True
 		return new_use
 
+	def check_required_use2(required_use, use, iuse_match):
+		"""
+		Checks if the use flags listed in 'use' satisfy all
+		constraints specified in 'required_use'.
+
+		@param required_use: REQUIRED_USE string
+		@type required_use: String
+		@param use: Enabled use flags
+		@type use: List
+		@param iuse_match: Callable that takes a single flag argument and returns
+			True if the flag is matched, False otherwise.
+		@type iuse_match: Callable
+		@rtype: Bool
+		@return: Indicates if REQUIRED_USE constraints are satisfied
+		"""
+
+		def is_active(token):
+			if token.startswith("!"):
+				flag = token[1:]
+				is_negated = True
+			else:
+				flag = token
+				is_negated = False
+
+			if not flag or not iuse_match(flag):
+				msg = _("USE flag '%s' is not in IUSE") \
+					% (flag,)
+				e = InvalidData(msg, category='IUSE.missing')
+				raise InvalidDependString(msg, errors=(e,))
+
+			return (flag in use and not is_negated) or \
+				(flag not in use and is_negated)
+
+		def is_satisfied(operator, argument):
+			if not argument:
+				#|| ( ) -> True
+				return True
+
+			if operator == "||":
+				return (True in argument)
+			elif operator == "^^":
+				return (argument.count(True) == 1)
+			elif operator[-1] == "?":
+				return (False not in argument)
+
+		mysplit = required_use.split()
+		level = 0
+		stack = [[]]
+		tree = _RequiredUseBranch()
+		node = tree
+		need_bracket = False
+
+		for token in mysplit:
+			if token == "(":
+				if not need_bracket:
+					child = _RequiredUseBranch(parent=node)
+					node._children.append(child)
+					node = child
+
+				need_bracket = False
+				stack.append([])
+				level += 1
+			elif token == ")":
+				if need_bracket:
+					raise InvalidDependString(
+						_("malformed syntax: '%s'") % required_use)
+				if level > 0:
+					level -= 1
+					l = stack.pop()
+					op = None
+					if stack[level]:
+						if stack[level][-1] in ("||", "^^"):
+							op = stack[level].pop()
+							satisfied = is_satisfied(op, l)
+							stack[level].append(satisfied)
+							node._satisfied = satisfied
+
+						elif not isinstance(stack[level][-1], bool) and \
+							stack[level][-1][-1] == "?":
+							op = stack[level].pop()
+							if is_active(op[:-1]):
+								satisfied = is_satisfied(op, l)
+								stack[level].append(satisfied)
+								node._satisfied = satisfied
+							else:
+								node._satisfied = True
+								last_node = node._parent._children.pop()
+								if last_node is not node:
+									raise AssertionError(
+										"node is not last child of parent")
+								node = node._parent
+								continue
+
+					if op is None:
+						satisfied = False not in l
+						node._satisfied = satisfied
+						if l:
+							stack[level].append(satisfied)
+
+						if len(node._children) <= 1 or \
+							node._parent._operator not in ("||", "^^"):
+							last_node = node._parent._children.pop()
+							if last_node is not node:
+								raise AssertionError(
+									"node is not last child of parent")
+							for child in node._children:
+								node._parent._children.append(child)
+								if isinstance(child, _RequiredUseBranch):
+									child._parent = node._parent
+
+					elif not node._children:
+						last_node = node._parent._children.pop()
+						if last_node is not node:
+							raise AssertionError(
+								"node is not last child of parent")
+
+					elif len(node._children) == 1 and op in ("||", "^^"):
+						last_node = node._parent._children.pop()
+						if last_node is not node:
+							raise AssertionError(
+								"node is not last child of parent")
+						node._parent._children.append(node._children[0])
+						if isinstance(node._children[0], _RequiredUseBranch):
+							node._children[0]._parent = node._parent
+							node = node._children[0]
+							if node._operator is None and \
+								node._parent._operator not in ("||", "^^"):
+								last_node = node._parent._children.pop()
+								if last_node is not node:
+									raise AssertionError(
+										"node is not last child of parent")
+								for child in node._children:
+									node._parent._children.append(child)
+									if isinstance(child, _RequiredUseBranch):
+										child._parent = node._parent
+
+					node = node._parent
+				else:
+					raise InvalidDependString(
+						_("malformed syntax: '%s'") % required_use)
+			elif token in ("||", "^^"):
+				if need_bracket:
+					raise InvalidDependString(
+						_("malformed syntax: '%s'") % required_use)
+				need_bracket = True
+				stack[level].append(token)
+				child = _RequiredUseBranch(operator=token, parent=node)
+				node._children.append(child)
+				node = child
+			else:
+				if need_bracket or "(" in token or ")" in token or \
+					"|" in token or "^" in token:
+					raise InvalidDependString(
+						_("malformed syntax: '%s'") % required_use)
+
+				if token[-1] == "?":
+					need_bracket = True
+					stack[level].append(token)
+					child = _RequiredUseBranch(operator=token, parent=node)
+					node._children.append(child)
+					node = child
+				else:
+					satisfied = is_active(token)
+					stack[level].append(satisfied)
+					node._children.append(_RequiredUseLeaf(token, satisfied))
+
+		if level != 0 or need_bracket:
+			raise InvalidDependString(
+				_("malformed syntax: '%s'") % required_use)
+
+		tree._satisfied = False not in stack[0]
+		return tree
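+
+	# A hedged usage sketch for check_required_use2() (names as defined in
+	# this commit; note that is_active() relies on _() and InvalidData,
+	# which the portage.dep original imports from portage.localization and
+	# portage.exception, so they must be importable here as well):
+	#
+	#   tree = check_required_use2("^^ ( gtk qt4 )", ["gtk", "qt4"],
+	#       lambda flag: flag in ("gtk", "qt4"))
+	#   tree._satisfied  # False: "^^" requires exactly one enabled flag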
+
 	def _wrapped_select_pkg_highest_available_imp(self, root, atom, onlydeps=False, autounmask_level=None):
 		root_config = self._frozen_config.roots[root]
 		pkgsettings = self._frozen_config.pkgsettings[root]
@@ -3975,8 +4149,17 @@ class depgraph(object):
 							# since IUSE cannot be adjusted by the user.
 							continue
 
+					if pkg.metadata.get("REQUIRED_USE") and eapi_has_required_use(pkg.metadata["EAPI"]):
+						required_use_is_sat = check_required_use(pkg.metadata["REQUIRED_USE"],
+							self._pkg_use_enabled(pkg), pkg.iuse.is_valid_flag)
+						if not required_use_is_sat:
+							if autounmask_level and autounmask_level.allow_use_changes and not pkg.built:
+								# parse the required_use to get the needed use flags
+								required_use = foo()
+							else:
+								
+					
 					if atom.use:
-
 						matched_pkgs_ignore_use.append(pkg)
 						if autounmask_level and autounmask_level.allow_use_changes and not pkg.built:
 							target_use = {}



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-20 14:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-20 14:33 UTC (permalink / raw
  To: gentoo-commits

commit:     1ae523b8e273c41f711b523b9476e151e9a7ba6c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun May 20 14:33:07 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun May 20 14:33:07 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=1ae523b8

add check_required_use2() part2

---
 gobs/pym/depgraph.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/depgraph.py b/gobs/pym/depgraph.py
index 332fb9f..0e8200c 100644
--- a/gobs/pym/depgraph.py
+++ b/gobs/pym/depgraph.py
@@ -4157,7 +4157,7 @@ class depgraph(object):
 								# parse the required_use to get the needed use flags
 								required_use = foo()
 							else:
-								
+								pass								
 					
 					if atom.use:
 						matched_pkgs_ignore_use.append(pkg)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-05-25  0:15 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-05-25  0:15 UTC (permalink / raw
  To: gentoo-commits

commit:     11f9903822c653ebfdcea7dd0f2b893c7b7cba7e
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri May 25 00:15:02 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri May 25 00:15:02 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=11f99038

Update host job code

---
 gobs/pym/sync.py     |   16 ++++++++++------
 gobs/pym/updatedb.py |   15 ++++++++-------
 2 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 5f8f2d1..0beeb22 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -37,15 +37,19 @@ def sync_tree():
 	tmpcmdline.append("--quiet")
 	tmpcmdline.append("--config-root=" + default_config_root)
 	logging.info("Emerge --sync")
-	fail_sync = False
-	#fail_sync = emerge_main(args=tmpcmdline)
+	fail_sync = emerge_main(args=tmpcmdline)
 	if fail_sync is True:
 		logging.warning("Emerge --sync fail!")
 		return False
 	else:
-		os.mkdir(mysettings['PORTDIR'] + "/profiles/config", 0o777)
-		with open(mysettings['PORTDIR'] + "/profiles/config/parent", "w") as f:
-			f.write("../base\n")
-			f.close()
+		# Need to add a config dir so we can use profiles/base for reading the tree.
+		# We may already have the dir in the local repo when we sync.
+		try:
+			os.mkdir(mysettings['PORTDIR'] + "/profiles/config", 0o777)
+			with open(mysettings['PORTDIR'] + "/profiles/config/parent", "w") as f:
+				f.write("../base\n")
+		except EnvironmentError:
+			pass
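+		# (Standard profile semantics, noted for clarity: the "parent" file
+		# makes profiles/config inherit from profiles/base, which is enough
+		# for portdbapi to read the tree.)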
 		logging.info("Emerge --sync ... Done.")
 	return True

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 2e369b3..91cb79f 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -77,7 +77,7 @@ def update_cpv_db_pool(mysettings, package_line):
 	init_categories.update_categories_db(categories)
 	CM.putConnection(conn)
 			
-def update_cpv_db(mysettings):
+def update_cpv_db():
 	"""Code to update the cpv in the database.
 	@type:settings
 	@parms: portage.settings
@@ -86,6 +86,7 @@ def update_cpv_db(mysettings):
 	@type: dict
 	@parms: config options from the config file
 	"""
+	mysettings = init_portage_settings()
 	logging.info("Checking categories, package, ebuilds")
 	# Setup portdb, gobs_categories, gobs_old_cpv, package
 	myportdb = portage.portdbapi(mysettings=mysettings)
@@ -103,12 +104,12 @@ def update_cpv_db(mysettings):
 	# Run the update package for all package in the list in
 	# a multiprocessing pool
 	for package_line in sorted(package_list_tree):
-		update_cpv_db_pool(mysettings, package_line)
+		#update_cpv_db_pool(mysettings, package_line)
 		# FIXME: Mem prob with the multiprocessing
-		# pool.apply_async(update_cpv_db_pool, (mysettings, package_line,))
-	# pool.close()
-	# pool.join() 
-	logging.info("Checking categories, package and ebuilds done")
+		pool.apply_async(update_cpv_db_pool, (mysettings, package_line,))
+	pool.close()
+	pool.join() 
+	logging.info("Checking categories, package and ebuilds ... done")
 
 def update_db_main():
 	# Main
@@ -128,6 +129,6 @@ def update_db_main():
 	init_arch = gobs_arch()
 	init_arch.update_arch_db()
 	# Update the cpv db
-	update_cpv_db(mysettings)
+	update_cpv_db()
 	logging.info("Update db ... Done.")
 	return True
\ No newline at end of file



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-03 22:18 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-03 22:18 UTC (permalink / raw
  To: gentoo-commits

commit:     16e795f105f5e2e2d3b6f2ab36b85fa11ecf55af
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Jun  3 22:17:16 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Jun  3 22:17:16 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=16e795f1

update dep handling

---
 gobs/pym/build_queru.py |  105 +++++++++++++++++++----------------------------
 gobs/pym/depgraph.py    |   49 ++++++++++++++--------
 2 files changed, 74 insertions(+), 80 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 8b33440..46d815c 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -150,68 +150,45 @@ class queruaction(object):
 			root_config = trees[settings["ROOT"]]["root_config"]
 			display_missing_pkg_set(root_config, e.value)
 			build_dict['type_fail'] = "depgraph fail"
-			build_dict['check_fail'] = True
-		use_changes = None
-		if not success:
-			mydepgraph.display_problems()
-			settings, trees, mtimedb = load_emerge_config()
-			myparams = create_depgraph_params(myopts, myaction)
-			try:
-				success, mydepgraph, favorites = backtrack_depgraph(
-				settings, trees, myopts, myparams, myaction, myfiles, spinner)
-			except portage.exception.PackageSetNotFound as e:
-				root_config = trees[settings["ROOT"]]["root_config"]
-				display_missing_pkg_set(root_config, e.value)
-				build_dict['type_fail'] = "depgraph fail"
-				build_dict['check_fail'] = True
-		if not success:
-			mydepgraph.display_problems()
-			build_dict['type_fail'] = "depgraph fail"
-			build_dict['check_fail'] = True
-		
-		"""if mydepgraph._dynamic_config._needed_use_config_changes:
-			use_changes = {}
-			for pkg, needed_use_config_changes in mydepgraph._dynamic_config._needed_use_config_changes.items():
-				new_use, changes = needed_use_config_changes
-				use_changes[pkg.cpv] = changes
-			iteritems_packages = {}
-			for k, v in use_changes.iteritems():
-				k_package = portage.versions.cpv_getkey(k)
-				iteritems_packages[ k_package ] = v
-			logging.info('iteritems_packages %s', iteritems_packages)
-			build_cpv_dict = iteritems_packages
-		if use_changes is not None:
-			for k, v in build_cpv_dict.iteritems():
-				build_use_flags_list = []
-				for x, y in v.iteritems():
-					if y is True:
-						build_use_flags_list.append(x)
-					if y is False:
-						build_use_flags_list.append("-" + x)
-				logging.info("k: %s, build_use_flags_list: %s", k, build_use_flags_list)
-				if not build_use_flags_list == []:
-					build_use_flags = ""
-					for flags in build_use_flags_list:
-						build_use_flags = build_use_flags + flags + ' '
-					filetext = k + ' ' + build_use_flags
-					logging.info('filetext %s', filetext)
-					with open("/etc/portage/package.use/gobs.use", "a") as f:
-						f.write(filetext)
-						f.write('\n')
-			settings, trees, mtimedb = load_emerge_config()
-			myparams = create_depgraph_params(myopts, myaction)
-			try:
-				success, mydepgraph, favorites = backtrack_depgraph(
-					settings, trees, myopts, myparams, myaction, myfiles, spinner)
-			except portage.exception.PackageSetNotFound as e:
-				root_config = trees[settings["ROOT"]]["root_config"]
-				display_missing_pkg_set(root_config, e.value)
-				build_dict['type_fail'] = "depgraph fail"
-				build_dict['check_fail'] = True
-		if not success:
-			mydepgraph.display_problems()
-			build_dict['type_fail'] = "depgraph fail"
-			build_dict['check_fail'] = True"""
+			if not success:
+				if mydepgraph._dynamic_config._needed_p_mask_changes:
+					build_dict['type_fail'] = "Mask packages"
+					build_dict['check_fail'] = True
+					mydepgraph.display_problems()
+					self.log_fail_queru(build_dict, settings)
+					return 1, settings, trees, mtimedb
+					if mydepgraph._dynamic_config._needed_use_config_changes:
+						repeat = True
+						repeat_times = 0
+						while repeat:
+							mydepgraph._display_autounmask()
+							settings, trees, mtimedb = load_emerge_config()
+							myparams = create_depgraph_params(myopts, myaction)
+							try:
+								success, mydepgraph, favorites = backtrack_depgraph(
+								settings, trees, myopts, myparams, myaction, myfiles, spinner)
+							except portage.exception.PackageSetNotFound as e:
+								root_config = trees[settings["ROOT"]]["root_config"]
+								display_missing_pkg_set(root_config, e.value)
+							if not success and mydepgraph._dynamic_config._needed_use_config_changes:
+								print("repaet_times:", repeat_times)
+								if repeat_times is 2:
+									build_dict['type_fail'] = "Need use change"
+									build_dict['check_fail'] = True
+									mydepgraph.display_problems()
+									repeat = False
+									repeat = False
+								else:
+									repeat_times = repeat_times + 1
+							else:
+								repeat = False
+
+				if mydepgraph._dynamic_config._unsolvable_blockers:
+					mydepgraph.display_problems()
+					build_dict['type_fail'] = "Blocking packages"
+					build_dict['check_fail'] = True
+					self.log_fail_queru(build_dict, settings)
+					return 1, settings, trees, mtimedb
 
 		if build_dict['check_fail'] is True:
 				self.log_fail_queru(build_dict, settings)
@@ -693,7 +670,9 @@ class queruaction(object):
 		if not "noclean" in build_dict['post_message']:
 			depclean_fail = main_depclean()
 		try:
-			os.remove("/etc/portage/package.use/gobs.use")
+			os.remove("/etc/portage/package.use/99_autounmask")
+			with open("/etc/portage/package.use/99_autounmask", "a") as f:
+				f.close
 		except:
 			pass
 		if build_fail is False or depclean_fail is False:
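
Three details in this build_queru.py hunk are worth flagging. The whole `if not success:` cascade is nested inside the except handler, and the USE-change retry loop sits directly after a `return` in the mask-changes branch, so it can never execute as written (a later commit in this thread, 541e6510, dedents the block). `repeat = False` is assigned twice in a row (removed in f5534c66). And `if repeat_times is 2:` tests object identity rather than equality; it only works because CPython interns small integers:

    repeat_times = 2
    print(repeat_times == 2)    # True: value equality, the intended test
    print(repeat_times is 2)    # also True in CPython, but only as a
                                # small-integer interning artifact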

diff --git a/gobs/pym/depgraph.py b/gobs/pym/depgraph.py
index 0e8200c..cf5a106 100644
--- a/gobs/pym/depgraph.py
+++ b/gobs/pym/depgraph.py
@@ -3776,7 +3776,7 @@ class depgraph(object):
 				self._dynamic_config._need_restart = True
 		return new_use
 
-	def check_required_use2(required_use, use, iuse_match):
+	def change_required_use(self, pkg):
 		"""
 		Checks if the use flags listed in 'use' satisfy all
 		constraints specified in 'constraints'.
@@ -3792,6 +3792,10 @@ class depgraph(object):
 		@return: Indicates if REQUIRED_USE constraints are satisfied
 		"""
 
+		required_use = pkg.metadata["REQUIRED_USE"]
+		use =self._pkg_use_enabled(pkg)
+		iuse_match = pkg.iuse.is_valid_flag
+
 		def is_active(token):
 			if token.startswith("!"):
 				flag = token[1:]
@@ -3824,14 +3828,15 @@ class depgraph(object):
 		mysplit = required_use.split()
 		level = 0
 		stack = [[]]
-		tree = _RequiredUseBranch()
+		tree = portage.dep._RequiredUseBranch()
 		node = tree
 		need_bracket = False
+		target_use = {}
 
 		for token in mysplit:
 			if token == "(":
 				if not need_bracket:
-					child = _RequiredUseBranch(parent=node)
+					child = portage.dep._RequiredUseBranch(parent=node)
 					node._children.append(child)
 					node = child
 
@@ -3883,7 +3888,7 @@ class depgraph(object):
 									"node is not last child of parent")
 							for child in node._children:
 								node._parent._children.append(child)
-								if isinstance(child, _RequiredUseBranch):
+								if isinstance(child, portage.dep._RequiredUseBranch):
 									child._parent = node._parent
 
 					elif not node._children:
@@ -3898,7 +3903,7 @@ class depgraph(object):
 							raise AssertionError(
 								"node is not last child of parent")
 						node._parent._children.append(node._children[0])
-						if isinstance(node._children[0], _RequiredUseBranch):
+						if isinstance(node._children[0], portage.dep._RequiredUseBranch):
 							node._children[0]._parent = node._parent
 							node = node._children[0]
 							if node._operator is None and \
@@ -3909,7 +3914,7 @@ class depgraph(object):
 										"node is not last child of parent")
 								for child in node._children:
 									node._parent._children.append(child)
-									if isinstance(child, _RequiredUseBranch):
+									if isinstance(child, portage.dep._RequiredUseBranch):
 										child._parent = node._parent
 
 					node = node._parent
@@ -3922,7 +3927,7 @@ class depgraph(object):
 						_("malformed syntax: '%s'") % required_use)
 				need_bracket = True
 				stack[level].append(token)
-				child = _RequiredUseBranch(operator=token, parent=node)
+				child = portage.dep._RequiredUseBranch(operator=token, parent=node)
 				node._children.append(child)
 				node = child
 			else:
@@ -3934,20 +3939,30 @@ class depgraph(object):
 				if token[-1] == "?":
 					need_bracket = True
 					stack[level].append(token)
-					child = _RequiredUseBranch(operator=token, parent=node)
+					child = portage.dep._RequiredUseBranch(operator=token, parent=node)
 					node._children.append(child)
 					node = child
 				else:
 					satisfied = is_active(token)
+					if satisfied is False:
+						new_changes = {}
+						new_changes[token] = True
+						if not pkg.use.mask.intersection(new_changes) or not \
+							pkg.use.force.intersection(new_changes):
+							if token in pkg.use.enabled:
+								target_use[token] = False
+							elif not token in pkg.use.enabled:
+								target_use[token] = True
+
 					stack[level].append(satisfied)
-					node._children.append(_RequiredUseLeaf(token, satisfied))
+					node._children.append(portage.dep._RequiredUseLeaf(token, satisfied))
 
 		if level != 0 or need_bracket:
 			raise InvalidDependString(
 				_("malformed syntax: '%s'") % required_use)
 
 		tree._satisfied = False not in stack[0]
-		return tree
+		return target_use
 
 	def _wrapped_select_pkg_highest_available_imp(self, root, atom, onlydeps=False, autounmask_level=None):
 		root_config = self._frozen_config.roots[root]
@@ -4152,13 +4167,13 @@ class depgraph(object):
 					if pkg.metadata.get("REQUIRED_USE") and eapi_has_required_use(pkg.metadata["EAPI"]):
 						required_use_is_sat = check_required_use(pkg.metadata["REQUIRED_USE"],
 							self._pkg_use_enabled(pkg), pkg.iuse.is_valid_flag)
-						if not required_use_is_sat:
-							if autounmask_level and autounmask_level.allow_use_changes and not pkg.built:
-							# pers the required_use to get the needed use flags
-								required_use = foo()
-							else:
-								pass								
-					
+							if not required_use_is_sat:
+								if autounmask_level and autounmask_level.allow_use_changes \
+									and not pkg.built:
+									target_use = self.change_required_use(pkg)
+									if not target_use is None:
+										use = self._pkg_use_enabled(pkg, target_use)					
+
 					if atom.use:
 						matched_pkgs_ignore_use.append(pkg)
 						if autounmask_level and autounmask_level.allow_use_changes and not pkg.built:
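
The depgraph.py side turns the old check_required_use copy into change_required_use(), which walks the REQUIRED_USE tokens and, for every flag that leaves the expression unsatisfied, proposes flipping it unless the flag is use-masked or use-forced. One hedged observation: the guard `if not pkg.use.mask.intersection(new_changes) or not pkg.use.force.intersection(new_changes):` passes whenever the flag is absent from either set, so `and` reads like the intended operator. A hypothetical reduction of the idea:

    def propose_flip(token, enabled, masked, forced):
        # Suggest the opposite state for an unsatisfied flag, unless the
        # flag is masked or forced and therefore cannot be changed.
        if token in masked or token in forced:
            return None
        return {token: token not in enabled}

    print(propose_flip("gtk", enabled={"qt4"}, masked=set(), forced=set()))
    # {'gtk': True}  -> suggest enabling USE=gtk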



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-04 23:45 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-04 23:45 UTC (permalink / raw
  To: gentoo-commits

commit:     bfedf700cbb53e99ce510122868308c4cc3e2f45
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Jun  4 23:45:07 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Jun  4 23:45:07 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=bfedf700

fix tab alignment in depgraph.py

---
 gobs/pym/depgraph.py |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/depgraph.py b/gobs/pym/depgraph.py
index cf5a106..2d5f73c 100644
--- a/gobs/pym/depgraph.py
+++ b/gobs/pym/depgraph.py
@@ -4167,12 +4167,12 @@ class depgraph(object):
 					if pkg.metadata.get("REQUIRED_USE") and eapi_has_required_use(pkg.metadata["EAPI"]):
 						required_use_is_sat = check_required_use(pkg.metadata["REQUIRED_USE"],
 							self._pkg_use_enabled(pkg), pkg.iuse.is_valid_flag)
-							if not required_use_is_sat:
-								if autounmask_level and autounmask_level.allow_use_changes \
-									and not pkg.built:
-									target_use = self.change_required_use(pkg)
-									if not target_use is None:
-										use = self._pkg_use_enabled(pkg, target_use)					
+						if not required_use_is_sat:
+							if autounmask_level and autounmask_level.allow_use_changes \
+								and not pkg.built:
+								target_use = self.change_required_use(pkg)
+								if not target_use is None:
+									use = self._pkg_use_enabled(pkg, target_use)					
 
 					if atom.use:
 						matched_pkgs_ignore_use.append(pkg)
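
The alignment fix is semantic, not cosmetic: in Python an extra indent level with no block opener above it is a hard error, so the over-indented `if not required_use_is_sat:` from the previous commit would not even compile. A self-contained demonstration:

    src = "x = 1\n    if not x:\n        pass\n"
    try:
        compile(src, "<example>", "exec")
    except IndentationError as e:
        print(e)    # "unexpected indent": the extra level has no opener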



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:07 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:07 UTC (permalink / raw
  To: gentoo-commits

commit:     348e757a3fdf3b91ac4f03275f50c3ab086006cc
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:07:06 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:07:06 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=348e757a

fix for logging

---
 gobs/pym/build_log.py |    2 +-
 gobs/pym/package.py   |    2 +-
 gobs/pym/sync.py      |    2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index bf63ddc..5ecd8a5 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -47,7 +47,7 @@ class gobs_buildlog(object):
 		categories = cpvr_list[0]
 		package = cpvr_list[1]
 		ebuild_version = cpv_getversion(pkg.cpv)
-		log_msg = "cpv: %s" % (pkg.cpv.)
+		log_msg = "cpv: %s" % (pkg.cpv,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 		init_package = gobs_package(settings, myportdb)
 		package_id = have_package_db(conn, categories, package)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index b9b2a56..d044b4b 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -157,7 +157,7 @@ class gobs_package(object):
 					# Comper ebuild_version and add the ebuild_version to buildqueue
 					if portage.vercmp(v['ebuild_version_tree'], latest_ebuild_version) == 0:
 						add_new_package_buildqueue(conn,ebuild_id, config_id, use_flags_list, use_enable_list, message)
-						B = Build cpv use-flags config
+						# B = Build cpv use-flags config
 						log_msg = ("B %s/%s-%s USE: %s %s" % (v['categories'], v['package'], \
 							latest_ebuild_version, use_enable, config_id,)
 						add_gobs_logs(conn, log_msg, "info", config_profile)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 4bd4883..838f224 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -24,7 +24,7 @@ def git_pull():
 	repo_remote = repo.remotes.origin
 	repo_remote.pull()
 	master = repo.head.reference
-	log_msg = "Git log: %s" % (master.log(),9
+	log_msg = "Git log: %s" % (master.log(),)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	log_msg = "Git pull ... Done"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
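
These three hunks repair broken log-message construction. The trailing comma in `(pkg.cpv,)` builds a one-element tuple for %-formatting, the safe form; the removed `(pkg.cpv.)` and the unclosed `(master.log(),9` were plain syntax errors, and the bare `B = Build cpv use-flags config` line needed to become a comment. For the tuple form:

    pkg_cpv = "dev-lang/python-2.7.3"
    print("cpv: %s" % (pkg_cpv,))    # one-element tuple: always safe
    print("cpv: %s" % pkg_cpv)       # fine for a string, but misbehaves
                                     # if the value is itself a tuple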



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:11 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:11 UTC (permalink / raw
  To: gentoo-commits

commit:     43e5eb5bbbef43ecfd84964b446d745ba26976e5
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:11:38 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:11:38 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=43e5eb5b

fix for logging

---
 gobs/pym/package.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index d044b4b..8147c5b 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -158,8 +158,8 @@ class gobs_package(object):
 					if portage.vercmp(v['ebuild_version_tree'], latest_ebuild_version) == 0:
 						add_new_package_buildqueue(conn,ebuild_id, config_id, use_flags_list, use_enable_list, message)
 						# B = Build cpv use-flags config
-						log_msg = ("B %s/%s-%s USE: %s %s" % (v['categories'], v['package'], \
-							latest_ebuild_version, use_enable, config_id,)
+						log_msg = "B %s/%s-%s USE: %s %s" %  \
+							(v['categories'], v['package'], latest_ebuild_version, use_enable, config_id,)
 						add_gobs_logs(conn, log_msg, "info", config_profile)
 					i = i +1
 		CM.putConnection(conn)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:14 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:14 UTC (permalink / raw
  To: gentoo-commits

commit:     595b7763ca7adbb6fb583664fb158c056826956c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:14:24 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:14:24 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=595b7763

fix for logging

---
 gobs/pym/package.py |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 8147c5b..2fa2039 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -184,9 +184,9 @@ class gobs_package(object):
 		# add new categories package ebuild to tables package and ebuilds
 		# C = Checking
 		# N = New Package
-		log_msg = ("C %s/%s" % (categories, package,)
+		log_msg = "C %s/%s" % (categories, package,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
-		log_msg = ("N %s/%s" % (categories, package,)
+		log_msg = "N %s/%s" % (categories, package,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
 		pkgdir = self._mysettings['PORTDIR'] + "/" + categories + "/" + package		# Get PORTDIR + cp
 		categories_dir = self._mysettings['PORTDIR'] + "/" + categories + "/"
@@ -216,7 +216,7 @@ class gobs_package(object):
 			manifest_error = init_manifest.digestcheck()
 			if manifest_error is not None:
 				qa_error.append(manifest_error)
-				log_msg = ("QA: %s/%s %s" % (categories, package, qa_error,)
+				log_msg = "QA: %s/%s %s" % (categories, package, qa_error,)
 				add_gobs_logs(conn, log_msg, "info", config_profile)
 			add_qa_repoman(conn,ebuild_id_list, qa_error, packageDict, config_id)
 			# Add the ebuild to the buildqueru table if needed



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:19 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:19 UTC (permalink / raw
  To: gentoo-commits

commit:     068e45fefc972adf2a9287587e6133267395d434
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:18:43 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:18:43 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=068e45fe

fix for logging

---
 gobs/pym/build_queru.py |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index a800b96..999a8c0 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -655,6 +655,7 @@ class queruaction(object):
 		return build_cpv_dict
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):
+		conn=CM.getConnection()
 		build_cpv_list = []
 		depclean_fail = True
 		for k, build_use_flags_list in buildqueru_cpv_dict.iteritems():
@@ -694,7 +695,9 @@ class queruaction(object):
 		except:
 			pass
 		if build_fail is False or depclean_fail is False:
+			CM.putConnection(conn)
 			return False
+		CM.putConnection(conn)
 		return True
 
 	def procces_qureru(self):
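
This commit and the next several all chase one invariant: every CM.getConnection() must be paired with CM.putConnection(conn) on each exit path, or the pool slowly leaks connections. The diffs do it by hand before every return; a try/finally, sketched below under the assumption that connectionManager behaves like an ordinary pool, enforces the same thing in one place:

    def build_procces_sketch(CM, work):
        conn = CM.getConnection()
        try:
            return work(conn)    # may return early or raise
        finally:
            CM.putConnection(conn)    # runs on every exit path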



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:24 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:24 UTC (permalink / raw
  To: gentoo-commits

commit:     f449874b9b7a1ab17857da76ea3544e5afc445c2
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:23:49 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:23:49 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=f449874b

fix for logging

---
 gobs/pym/build_queru.py |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 999a8c0..068c879 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -624,6 +624,7 @@ class queruaction(object):
 		return retval
 
 	def make_build_list(self, build_dict, settings, portdb):
+		conn=CM.getConnection()
 		cpv = build_dict['category']+'/'+build_dict['package']+'-'+build_dict['ebuild_version']
 		pkgdir = os.path.join(settings['PORTDIR'], build_dict['category'] + "/" + build_dict['package'])
 		init_manifest =  gobs_manifest(settings, pkgdir)
@@ -641,7 +642,9 @@ class queruaction(object):
 				build_dict['check_fail'] = False
 				build_cpv_dict = {}
 				build_cpv_dict[cpv] = build_use_flags_list
-				logging.info("build_cpv_dict: %s", build_cpv_dict)
+				log_msg = "build_cpv_dict: %s" % (build_cpv_dict,)
+				add_gobs_logs(conn, log_msg, "info", self._config_profile)
+				CM.putConnection(conn)
 				return build_cpv_dict
 			else:
 				build_dict['type_fail'] = "Manifest error"
@@ -651,7 +654,9 @@ class queruaction(object):
 			build_dict['check_fail'] = True
 		if build_dict['check_fail'] is True:
 				self.log_fail_queru(build_dict, settings)
+				CM.putConnection(conn)
 				return None
+		CM.putConnection(conn)
 		return build_cpv_dict
 
 	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:39 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:39 UTC (permalink / raw
  To: gentoo-commits

commit:     541e65103ae30fbf83a7d49ff6a05205a069e9e2
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:38:48 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:38:48 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=541e6510

fix for depgraph

---
 gobs/pym/build_queru.py |   98 +++++++++++++++++++++++-----------------------
 1 files changed, 49 insertions(+), 49 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 068c879..3d2ef11 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -150,57 +150,57 @@ class queruaction(object):
 			root_config = trees[settings["ROOT"]]["root_config"]
 			display_missing_pkg_set(root_config, e.value)
 			build_dict['type_fail'] = "depgraph fail"
-			if not success:
-				if mydepgraph._dynamic_config._needed_p_mask_changes:
-					build_dict['type_fail'] = "Mask packages"
-					build_dict['check_fail'] = True
-					mydepgraph.display_problems()
-					self.log_fail_queru(build_dict, settings)
-					return 1, settings, trees, mtimedb
-				if mydepgraph._dynamic_config._needed_use_config_changes:
-					repeat = True
-					repeat_times = 0
-					while repeat:
-						mydepgraph._display_autounmask()
-						settings, trees, mtimedb = load_emerge_config()
-						myparams = create_depgraph_params(myopts, myaction)
-						try:
-							success, mydepgraph, favorites = backtrack_depgraph(
-							settings, trees, myopts, myparams, myaction, myfiles, spinner)
-						except portage.exception.PackageSetNotFound as e:
-							root_config = trees[settings["ROOT"]]["root_config"]
-							display_missing_pkg_set(root_config, e.value)
-						if not success and mydepgraph._dynamic_config._needed_use_config_changes:
-							print("repaet_times:", repeat_times)
-							if repeat_times is 2:
-								build_dict['type_fail'] = "Need use change"
-								build_dict['check_fail'] = True
-								mydepgraph.display_problems()
-								repeat = False
-								repeat = False
-							else:
-								repeat_times = repeat_times + 1
-						else:
+		if not success:
+			if mydepgraph._dynamic_config._needed_p_mask_changes:
+				build_dict['type_fail'] = "Mask packages"
+				build_dict['check_fail'] = True
+				mydepgraph.display_problems()
+				self.log_fail_queru(build_dict, settings)
+				return 1, settings, trees, mtimedb
+			if mydepgraph._dynamic_config._needed_use_config_changes:
+				repeat = True
+				repeat_times = 0
+				while repeat:
+					mydepgraph._display_autounmask()
+					settings, trees, mtimedb = load_emerge_config()
+					myparams = create_depgraph_params(myopts, myaction)
+					try:
+						success, mydepgraph, favorites = backtrack_depgraph(
+						settings, trees, myopts, myparams, myaction, myfiles, spinner)
+					except portage.exception.PackageSetNotFound as e:
+						root_config = trees[settings["ROOT"]]["root_config"]
+						display_missing_pkg_set(root_config, e.value)
+					if not success and mydepgraph._dynamic_config._needed_use_config_changes:
+						print("repaet_times:", repeat_times)
+						if repeat_times is 2:
+							build_dict['type_fail'] = "Need use change"
+							build_dict['check_fail'] = True
+							mydepgraph.display_problems()
 							repeat = False
+							repeat = False
+						else:
+							repeat_times = repeat_times + 1
+					else:
+						repeat = False
+
+			if mydepgraph._dynamic_config._unsolvable_blockers:
+				mydepgraph.display_problems()
+				build_dict['type_fail'] = "Blocking packages"
+				build_dict['check_fail'] = True
+				self.log_fail_queru(build_dict, settings)
+				return 1, settings, trees, mtimedb
 
-				if mydepgraph._dynamic_config._unsolvable_blockers:
-					mydepgraph.display_problems()
-					build_dict['type_fail'] = "Blocking packages"
-					build_dict['check_fail'] = True
-					self.log_fail_queru(build_dict, settings)
-					return 1, settings, trees, mtimedb
-
-				if mydepgraph._dynamic_config._slot_collision_info:
-					mydepgraph.display_problems()
-					build_dict['type_fail'] = "Slot blocking"
-					build_dict['check_fail'] = True
-					self.log_fail_queru(build_dict, settings)
-					return 1, settings, trees, mtimedb
-
-				if not success:
-					build_dict['type_fail'] = "Dep calc fail"
-					build_dict['check_fail'] = True
-					mydepgraph.display_problems()
+			if mydepgraph._dynamic_config._slot_collision_info:
+				mydepgraph.display_problems()
+				build_dict['type_fail'] = "Slot blocking"
+				build_dict['check_fail'] = True
+				self.log_fail_queru(build_dict, settings)
+				return 1, settings, trees, mtimedb
+
+			if not success:
+				build_dict['type_fail'] = "Dep calc fail"
+				build_dict['check_fail'] = True
+				mydepgraph.display_problems()
 
 		if build_dict['check_fail'] is True:
 				self.log_fail_queru(build_dict, settings)
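
Unlike the later whitespace-only commits, this dedent changes behaviour: the whole `if not success:` cascade previously sat one tab deeper, inside the `except portage.exception.PackageSetNotFound` handler, so the mask, USE, blocker, and slot checks only ran after that specific exception. Moving the block out one level makes the checks run on every failed resolution. In miniature:

    class PackageSetNotFound(Exception):
        pass

    def resolve():
        return False    # stand-in for backtrack_depgraph()

    success = False
    try:
        success = resolve()
    except PackageSetNotFound:
        pass
        # nested here, a failure check would run only after the exception
    if not success:    # dedented: runs on every failed resolution
        print("handle failure")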



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:43 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:43 UTC (permalink / raw
  To: gentoo-commits

commit:     c1694d90eddc3de691be15eaf412243fa48f9dd2
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:43:11 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:43:11 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=c1694d90

fix for depgraph

---
 gobs/pym/build_queru.py |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 3d2ef11..065705d 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -686,7 +686,8 @@ class queruaction(object):
 			argscmd.append(build_cpv)
 		log_msg = "argscmd: %s" % (argscmd,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		# Call main_emerge to build the package in build_cpv_list 
+		# Call main_emerge to build the package in build_cpv_list
+		print("Build: %s", build_dict)
 		build_fail = self.emerge_main(argscmd, build_dict)
 		# Run depclean
 		log_msg = "build_fail: %s" % (build_fail,)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 14:57 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 14:57 UTC (permalink / raw
  To: gentoo-commits

commit:     f5534c6614ef1703adcefa72436d48d242453abc
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 14:57:31 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 14:57:31 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=f5534c66

fix for depgraph

---
 gobs/pym/build_queru.py |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 065705d..c823c80 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -177,7 +177,6 @@ class queruaction(object):
 							build_dict['check_fail'] = True
 							mydepgraph.display_problems()
 							repeat = False
-							repeat = False
 						else:
 							repeat_times = repeat_times + 1
 					else:



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 15:15 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 15:15 UTC (permalink / raw
  To: gentoo-commits

commit:     3f28da8fb8e8cc0e08fd33d45c128d27aa59fd46
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 15:15:25 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 15:15:25 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=3f28da8f

fix for depgraph

---
 gobs/pym/build_queru.py |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index c823c80..cb06e40 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -229,8 +229,10 @@ class queruaction(object):
 		clear_caches(trees)
 
 		retval = mergetask.merge()
+		conn=CM.getConnection()
 		log_msg = "mergetask.merge retval: %s" % retval
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
+		CM.putConnection(conn)
 		if retval:
 			build_dict['type_fail'] = 'merge fail'
 			build_dict['check_fail'] = True



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-06-27 15:26 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-06-27 15:26 UTC (permalink / raw
  To: gentoo-commits

commit:     2e8c69eae6913f117bba5d52849f7b1538d0afc0
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 27 15:26:08 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jun 27 15:26:08 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=2e8c69ea

fix for depgraph

---
 gobs/pym/build_queru.py |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index cb06e40..7c6c680 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -673,9 +673,10 @@ class queruaction(object):
 				filetext = '=' + k + ' ' + build_use_flags
 				log_msg = "filetext: %s" % filetext
 				add_gobs_logs(conn, log_msg, "info", self._config_profile)
-				with open("/etc/portage/package.use/gobs.use", "a") as f:
+				with open("/etc/portage/package.use/99_autounmask", "a") as f:
      					f.write(filetext)
      					f.write('\n')
+     					f.close
 		log_msg = "build_cpv_list: %s" % (build_cpv_list,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 		argscmd = []
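
Two nits in this hunk: inside a with block the file is closed automatically when the block exits, and the added `f.close` lacks parentheses, so it only references the method object without ever calling it. The write reduces to (content illustrative):

    filetext = "=dev-lang/python-2.7.3 gtk"
    with open("/etc/portage/package.use/99_autounmask", "a") as f:
        f.write(filetext + "\n")
    # the context manager closes the file here; no f.close() is needed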



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-07-17  0:18 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-07-17  0:18 UTC (permalink / raw
  To: gentoo-commits

commit:     c4572727783cb7e1fdc104330e1bd2bde61cf2c9
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 17 00:18:25 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jul 17 00:18:25 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=c4572727

fix for depgraph

---
 gobs/pym/updatedb.py |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 6bfc5ef..6e91c47 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -84,6 +84,7 @@ def update_cpv_db():
 	@type: dict
 	@parms: config options from the config file
 	"""
+	conn=CM.getConnection()
 	mysettings =  init_portage_settings()
 	log_msg = "Checking categories, package, ebuilds"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
@@ -110,9 +111,11 @@ def update_cpv_db():
 	pool.join() 
 	log_msg = "Checking categories, package and ebuilds ... done"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 
 def update_db_main():
 	# Main
+	conn=CM.getConnection()
 	# Logging
 	log_msg = "Update db started."
 	add_gobs_logs(conn, log_msg, "info", config_profile)
@@ -121,11 +124,13 @@ def update_db_main():
 	if resutalt is False:
 		log_msg = "Update db ... Fail."
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+		CM.putConnection(conn)
 		return False
 	resutalt = sync_tree()
 	if resutalt is False:
 		log_msg = "Update db ... Fail."
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+		CM.putConnection(conn)
 		return False
 	# Init settings for the default config
 	mysettings =  init_portage_settings()
@@ -135,4 +140,5 @@ def update_db_main():
 	update_cpv_db()
 	log_msg = "Update db ... Done."
 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 	return True
\ No newline at end of file



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-07-17  0:38 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-07-17  0:38 UTC (permalink / raw
  To: gentoo-commits

commit:     4a981ffd55d791f7fbc5d15f9ca51367470e23db
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 17 00:38:35 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jul 17 00:38:35 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4a981ffd

fix for depgraph

---
 gobs/pym/sync.py |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 838f224..87eb79e 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -18,6 +18,7 @@ if CM.getName()=='pgsql':
 config_profile = gobs_settings_dict['gobs_config']
 
 def git_pull():
+	conn=CM.getConnection()
 	log_msg = "Git pull"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	repo = Repo("/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/")
@@ -28,12 +29,12 @@ def git_pull():
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	log_msg = "Git pull ... Done"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 	return True
 
 def sync_tree():
 	conn=CM.getConnection()
 	config_id = get_default_config(conn)			# HostConfigDir = table configs id
-	CM.putConnection(conn)
 	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
 	mysettings = portage.config(config_root = default_config_root)
 	tmpcmdline = []
@@ -46,6 +47,7 @@ def sync_tree():
 	if fail_sync is True:
 		log_msg = "Emerge --sync fail!"
 		add_gobs_logs(conn, log_msg, "warning", config_profile)
+		CM.putConnection(conn)
 		return False
 	else:
 		# Need to add a config dir so we can use profiles/base for reading the tree.
@@ -59,4 +61,5 @@ def sync_tree():
 			pass
 		log_msg = "Emerge --sync ... Done."
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 	return True



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-07-17  1:07 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-07-17  1:07 UTC (permalink / raw
  To: gentoo-commits

commit:     8d66cccae356693bbf3f1c92c5b9a7e81efad3f6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 17 01:06:56 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jul 17 01:06:56 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=8d66ccca

fix for depgraph

---
 gobs/pym/check_setup.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 354ddfb..690733c 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -58,7 +58,7 @@ def check_make_conf():
   update__make_conf(conn, configsDict)
   CM.putConnection(conn)
   log_msg = "Checking configs for changes and errors ... Done"
-  add_gobs_logs(conn, msg_log, "info", config_profile)
+  add_gobs_logs(conn, log_msg, "info", config_profile)
 
 def check_make_conf_guest(config_profile):
 	conn=CM.getConnection()
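
A one-character fix, but the misspelled name is a runtime bomb: Python resolves names only when the line executes, so the NameError surfaced only once check_make_conf() actually reached its final log call. An illustrative reduction:

    def log_done():
        log_msg = "Checking configs for changes and errors ... Done"
        return msg_log    # NameError: 'msg_log' is not defined,
                          # raised only when the function is called

    log_done()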



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-07-17 13:00 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-07-17 13:00 UTC (permalink / raw
  To: gentoo-commits

commit:     5053e8fd9cad5d3793dff633616a60d588873f1c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 17 13:00:06 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jul 17 13:00:06 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=5053e8fd

fix for depgraph

---
 gobs/pym/check_setup.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 690733c..a89e892 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -56,9 +56,9 @@ def check_make_conf():
 	  attDict['make_conf_checksum_tree'] = make_conf_checksum_tree
 	  configsDict[config_id[0]]=attDict
   update__make_conf(conn, configsDict)
-  CM.putConnection(conn)
   log_msg = "Checking configs for changes and errors ... Done"
   add_gobs_logs(conn, log_msg, "info", config_profile)
+  CM.putConnection(conn)
 
 def check_make_conf_guest(config_profile):
 	conn=CM.getConnection()



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-07-17 15:02 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-07-17 15:02 UTC (permalink / raw
  To: gentoo-commits

commit:     089cad3270a590a528dfc0d661de06413c082c33
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jul 17 15:01:43 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jul 17 15:01:43 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=089cad32

fix for depgraph

---
 gobs/pym/updatedb.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 6e91c47..8254a3e 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -46,12 +46,12 @@ def init_portage_settings():
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	# Get default config from the configs table  and default_config=1
 	config_id = get_default_config(conn)			# HostConfigDir = table configs id
-	CM.putConnection(conn);
 	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
 	# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 	mysettings = portage.config(config_root = default_config_root)
 	log_msg = "Setting default config to: %s" % (config_id[0],)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 	return mysettings
 
 def update_cpv_db_pool(mysettings, myportdb, init_package, package_line):



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-07-18  0:10 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-07-18  0:10 UTC (permalink / raw
  To: gentoo-commits

commit:     9b74adb00bc3678302f5d413aec2fae0ccef1110
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 18 00:09:53 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Jul 18 00:09:53 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9b74adb0

fix for depgraph

---
 gobs/pym/package.py  |    6 ++----
 gobs/pym/updatedb.py |   11 ++++++-----
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 2fa2039..771572f 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -33,7 +33,6 @@ class gobs_package(object):
 		config_cpv_listDict ={}
 		if config_list == []:
 			return config_cpv_listDict
-		conn=CM.getConnection()
 		for config_id in config_list:
 			# Change config/setup
 			mysettings_setup = self.change_config(config_id)
@@ -62,7 +61,6 @@ class gobs_package(object):
 			# Clean some cache
 			myportdb_setup.close_caches()
 			portage.portdbapi.portdbapi_instances.remove(myportdb_setup)
-		CM.putConnection(conn)
 		return config_cpv_listDict
 
 	def get_ebuild_metadata(self, ebuild_line):
@@ -235,9 +233,9 @@ class gobs_package(object):
 			else:
 				get_manifest_text = get_file_text(pkgdir + "/Manifest")
 			add_new_manifest_sql(conn,package_id, get_manifest_text, manifest_checksum_tree)
-		CM.putConnection(conn)
 		log_msg = "C %s/%s ... Done." % (categories, package)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+		CM.putConnection(conn)
 
 	def update_package_db(self, categories, package, package_id):
 		conn=CM.getConnection()
@@ -324,9 +322,9 @@ class gobs_package(object):
 			# Mark or remove any old ebuilds
 			init_old_cpv = gobs_old_cpv(self._myportdb, self._mysettings)
 			init_old_cpv.mark_old_ebuild_db(categories, package, package_id)
-		CM.putConnection(conn)
 		log_msg = "C %s/%s ... Done." % (categories, package)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+		CM.putConnection(conn)
 
 	def update_ebuild_db(self, build_dict):
 		conn=CM.getConnection()

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 8254a3e..e643fc8 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -59,9 +59,10 @@ def update_cpv_db_pool(mysettings, myportdb, init_package, package_line):
 	# split the cp to categories and package
 	element = package_line.split('/')
 	categories = element[0]
-	package = element[1]    
+	package = element[1]
 	# Check if we don't have the cp in the package table
 	package_id = have_package_db(conn,categories, package)
+	CM.putConnection(conn)
 	if package_id is None:  
 		# Add new package with ebuilds
 		init_package.add_new_package_db(categories, package)
@@ -72,9 +73,7 @@ def update_cpv_db_pool(mysettings, myportdb, init_package, package_line):
 	# Update the metadata for categories
 	init_categories = gobs_categories(mysettings)
 	init_categories.update_categories_db(categories)
-	myportdb.close_caches()
-	CM.putConnection(conn)
-			
+
 def update_cpv_db():
 	"""Code to update the cpv in the database.
 	@type:settings
@@ -88,6 +87,7 @@ def update_cpv_db():
 	mysettings =  init_portage_settings()
 	log_msg = "Checking categories, package, ebuilds"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 	# Setup portdb, package
 	myportdb = portage.portdbapi(mysettings=mysettings)
 	init_package = gobs_package(mysettings, myportdb)
@@ -108,7 +108,8 @@ def update_cpv_db():
 		#update_cpv_db_pool(mysettings, package_line)
 		pool.apply_async(update_cpv_db_pool, (mysettings, myportdb, init_package, package_line,))
 	pool.close()
-	pool.join() 
+	pool.join()
+	conn=CM.getConnection()
 	log_msg = "Checking categories, package and ebuilds ... done"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)



^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 11:26 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 11:26 UTC (permalink / raw
  To: gentoo-commits

commit:     d88a4d2bd81ed9b47a26c12ea0d8d725ff3dcfb5
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 11:25:52 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 11:25:52 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=d88a4d2b

fix alignment and typos

---
 gobs/pym/arch.py         |   25 ---
 gobs/pym/check_setup.py  |    2 +-
 gobs/pym/package.py      |  511 ++++++++++++++++++++++-----------------------
 gobs/pym/pgsql_querys.py |  518 +++++++++++++++++++++++-----------------------
 gobs/pym/sync.py         |    2 +-
 gobs/pym/updatedb.py     |   79 ++++----
 6 files changed, 554 insertions(+), 583 deletions(-)

diff --git a/gobs/pym/arch.py b/gobs/pym/arch.py
deleted file mode 100644
index ebd0017..0000000
--- a/gobs/pym/arch.py
+++ /dev/null
@@ -1,25 +0,0 @@
-import portage
-from gobs.readconf import get_conf_settings
-reader=get_conf_settings()
-gobs_settings_dict=reader.read_gobs_settings_all()
-# make a CM
-from gobs.ConnectionManager import connectionManager
-CM=connectionManager(gobs_settings_dict)
-#selectively import the pgsql/mysql querys
-if CM.getName()=='pgsql':
-	from gobs.pgsql import *
-
-class gobs_arch(object):
-	
-	def update_arch_db(self):
-		conn = CM.getConnection()
-		# FIXME: check for new keyword
-		# Add arch db (keywords)
-		if get_arch_db(conn) is None:
-			arch_list =  portage.archlist
-			for arch in arch_list:
-				if arch[0] not in ["~","-"]:
-					arch_list.append("-" + arch)
-			arch_list.append("-*")
-			add_new_arch_db(conn,arch_list)
-		CM.putConnection(conn)
\ No newline at end of file

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index a89e892..37d0285 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -14,7 +14,7 @@ from gobs.ConnectionManager import connectionManager
 CM=connectionManager(gobs_settings_dict)
 #selectively import the pgsql/mysql querys
 if CM.getName()=='pgsql':
-	from gobs.pgsql import *
+	from gobs.pgsql_querys import *
 
 def check_make_conf():
   # Get the config list
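
The package.py hunks that follow are the "alignment" half of this commit: whole blocks had drifted to eight-space indentation and are converted back to tabs. Python 2 accepted the mix silently (a tab counted as eight columns), while Python 3 rejects it outright, which is why the drift is worth fixing even when the code happens to run:

    src = "def f():\n\tx = 1\n        return x\n"    # tab, then 8 spaces
    try:
        compile(src, "<example>", "exec")
    except SyntaxError as e:    # Python 3 raises TabError, a SyntaxError
        print(e)                # subclass; Python 2 compiled this silently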

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index cb6a1f1..e123e8e 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -4,7 +4,6 @@ from gobs.flags import gobs_use_flags
 from gobs.repoman_gobs import gobs_repoman
 from gobs.manifest import gobs_manifest
 from gobs.text import get_file_text, get_ebuild_text
-from gobs.old_cpv import gobs_old_cpv
 from gobs.readconf import get_conf_settings
 from gobs.flags import gobs_use_flags
 reader=get_conf_settings()
@@ -27,114 +26,114 @@ class gobs_package(object):
 
 	def change_config(self, config_setup):
 		# Change config_root  config_setup = table config
-                my_new_setup = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_setup + "/"
-                mysettings_setup = portage.config(config_root = my_new_setup)
-                return mysettings_setup
+		my_new_setup = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_setup + "/"
+		mysettings_setup = portage.config(config_root = my_new_setup)
+		return mysettings_setup
 
 	def config_match_ebuild(self, cp, config_id_list):
 		config_cpv_listDict ={}
-                if config_id_list == []:
-                        return config_cpv_listDict
-                conn=CM.getConnection()
-                for config_id in config_id_list:
+		if config_id_list == []:
+			return config_cpv_listDict
+			conn=CM.getConnection()
+		for config_id in config_id_list:
 			# Change config/setup
 			for config_id in config_id_list:
 
-                        # Change config/setup
-                        config_setup = get_config_db(conn, config_id)
-                        mysettings_setup = self.change_config(config_setup)
-                        myportdb_setup = portage.portdbapi(mysettings=mysettings_setup)
-
-                        # Get the latest cpv from portage with the config that we can build
-                        build_cpv = myportdb_setup.xmatch('bestmatch-visible', cp)
-
-                        # Check if could get cpv from portage and add it to the config_cpv_listDict.
-                        if build_cpv != "":
-
-                                # Get the iuse and use flags for that config/setup and cpv
-                                init_useflags = gobs_use_flags(mysettings_setup, myportdb_setup, build_cpv)
-                                iuse_flags_list, final_use_list = init_useflags.get_flags()
-                                iuse_flags_list2 = []
-                                for iuse_line in iuse_flags_list:
-                                        iuse_flags_list2.append( init_useflags.reduce_flag(iuse_line))
-
-                                # Dict the needed info
-                                attDict = {}
-                                attDict['cpv'] = build_cpv
-                                attDict['useflags'] = final_use_list
-                                attDict['iuse'] = iuse_flags_list2
-                                config_cpv_listDict[config_id] = attDict
-
-                        # Clean some cache
-                        myportdb_setup.close_caches()
-                        portage.portdbapi.portdbapi_instances.remove(myportdb_setup)
-                CM.putConnection(conn)
-                return config_cpv_listDict
+			# Change config/setup
+			config_setup = get_config_db(conn, config_id)
+			mysettings_setup = self.change_config(config_setup)
+			myportdb_setup = portage.portdbapi(mysettings=mysettings_setup)
+
+			# Get the latest cpv from portage with the config that we can build
+			build_cpv = myportdb_setup.xmatch('bestmatch-visible', cp)
+
+			# Check if could get cpv from portage and add it to the config_cpv_listDict.
+			if build_cpv != "":
+
+				# Get the iuse and use flags for that config/setup and cpv
+				init_useflags = gobs_use_flags(mysettings_setup, myportdb_setup, build_cpv)
+				iuse_flags_list, final_use_list = init_useflags.get_flags()
+				iuse_flags_list2 = []
+				for iuse_line in iuse_flags_list:
+					iuse_flags_list2.append( init_useflags.reduce_flag(iuse_line))
+
+				# Dict the needed info
+				attDict = {}
+				attDict['cpv'] = build_cpv
+				attDict['useflags'] = final_use_list
+				attDict['iuse'] = iuse_flags_list2
+				config_cpv_listDict[config_id] = attDict
+
+			# Clean some cache
+			myportdb_setup.close_caches()
+			portage.portdbapi.portdbapi_instances.remove(myportdb_setup)
+		CM.putConnection(conn)
+		return config_cpv_listDict
 
 	def get_ebuild_metadata(self, cpv, repo):
-                # Get the auxdbkeys infos for the ebuild
-                try:
-                        ebuild_auxdb_list = self._myportdb.aux_get(cpv, portage.auxdbkeys, myrepo=repo)
-                except:
-                        ebuild_auxdb_list = []
-                else:
-                        for i in range(len(ebuild_auxdb_list)):
-                                if ebuild_auxdb_list[i] == '':
-                                        ebuild_auxdb_list[i] = ''
-                return ebuild_auxdb_list
+		# Get the auxdbkeys infos for the ebuild
+		try:
+			ebuild_auxdb_list = self._myportdb.aux_get(cpv, portage.auxdbkeys, myrepo=repo)
+		except:
+			ebuild_auxdb_list = []
+		else:
+			for i in range(len(ebuild_auxdb_list)):
+				if ebuild_auxdb_list[i] == '':
+					ebuild_auxdb_list[i] = ''
+			return ebuild_auxdb_list
 
 	def get_packageDict(self, pkgdir, cpv, repo, config_id):
-                attDict = {}
-                conn=CM.getConnection()
-
-                #Get categories, package and version from cpv
-                ebuild_version_tree = portage.versions.cpv_getversion(cpv)
-                element = portage.versions.cpv_getkey(cpv).split('/')
-                categories = element[0]
-                package = element[1]
-
-                # Make a checksum of the ebuild
-                try:
-                        ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")[0]
-                except:
-                        ebuild_version_checksum_tree = "0"
-                        log_msg = "QA: Can't checksum the ebuild file. %s on repo %s" % (cpv, repo,)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        log_msg = "C %s:%s ... Fail." % (cpv, repo)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        ebuild_version_text_tree = '0'
-                else:
-                        ebuild_version_text_tree = get_ebuild_text(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")
-
-                # run repoman on the ebuild
-                #init_repoman = gobs_repoman(self._mysettings, self._myportdb)
-                #repoman_error = init_repoman.check_repoman(pkgdir, cpv, config_id)
-                #if repoman_error != []:
-                #       log_msg = "Repoman: %s have errors on repo %s" % (cpv, repo,)
-                #        add_gobs_logs(conn, log_msg, "info", config_profile)
-                repoman_error = []
-
-                # Get the ebuild metadata
-                ebuild_version_metadata_tree = self.get_ebuild_metadata(cpv, repo)
-                # if there some error to get the metadata we add rubish to the
-                # ebuild_version_metadata_tree and set ebuild_version_checksum_tree to 0
-                # so it can be updated next time we update the db
-                if ebuild_version_metadata_tree  == []:
-                        log_msg = " QA: %s have broken metadata on repo %s" % (cpv, repo)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        ebuild_version_metadata_tree = ['','','','','','','','','','','','','','','','','','','','','','','','','']
-                        ebuild_version_checksum_tree = '0'
+		attDict = {}
+		conn=CM.getConnection()
+
+		#Get categories, package and version from cpv
+		ebuild_version_tree = portage.versions.cpv_getversion(cpv)
+		element = portage.versions.cpv_getkey(cpv).split('/')
+		categories = element[0]
+		package = element[1]
+
+		# Make a checksum of the ebuild
+		try:
+			ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")[0]
+		except:
+			ebuild_version_checksum_tree = "0"
+			log_msg = "QA: Can't checksum the ebuild file. %s on repo %s" % (cpv, repo,)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			log_msg = "C %s:%s ... Fail." % (cpv, repo)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			ebuild_version_text_tree = '0'
+		else:
+			ebuild_version_text_tree = get_ebuild_text(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")
+
+		# run repoman on the ebuild
+		#init_repoman = gobs_repoman(self._mysettings, self._myportdb)
+		#repoman_error = init_repoman.check_repoman(pkgdir, cpv, config_id)
+		#if repoman_error != []:
+		#       log_msg = "Repoman: %s have errors on repo %s" % (cpv, repo,)
+		#        add_gobs_logs(conn, log_msg, "info", config_profile)
+		repoman_error = []
+
+		# Get the ebuild metadata
+		ebuild_version_metadata_tree = self.get_ebuild_metadata(cpv, repo)
+		# if there some error to get the metadata we add rubish to the
+		# ebuild_version_metadata_tree and set ebuild_version_checksum_tree to 0
+		# so it can be updated next time we update the db
+		if ebuild_version_metadata_tree  == []:
+			log_msg = " QA: %s have broken metadata on repo %s" % (cpv, repo)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			ebuild_version_metadata_tree = ['','','','','','','','','','','','','','','','','','','','','','','','','']
+			ebuild_version_checksum_tree = '0'
 
 		# add the ebuild info to the dict packages
-                attDict['repo'] = repo
-                attDict['ebuild_version_tree'] = ebuild_version_tree
-                attDict['ebuild_version_checksum_tree']= ebuild_version_checksum_tree
-                attDict['ebuild_version_metadata_tree'] = ebuild_version_metadata_tree
-                #attDict['ebuild_version_text_tree'] = ebuild_version_text_tree[0]
-                attDict['ebuild_version_revision_tree'] = ebuild_version_text_tree[1]
-                attDict['ebuild_error'] = repoman_error
-                CM.putConnection(conn)
-                return attDict
+		attDict['repo'] = repo
+		attDict['ebuild_version_tree'] = ebuild_version_tree
+		attDict['ebuild_version_checksum_tree']= ebuild_version_checksum_tree
+		attDict['ebuild_version_metadata_tree'] = ebuild_version_metadata_tree
+		#attDict['ebuild_version_text_tree'] = ebuild_version_text_tree[0]
+		attDict['ebuild_version_revision_tree'] = ebuild_version_text_tree[1]
+		attDict['ebuild_error'] = repoman_error
+		CM.putConnection(conn)
+		return attDict
 
 	def add_new_ebuild_buildquery_db(self, ebuild_id_list, packageDict, config_cpv_listDict):
 		conn=CM.getConnection()
@@ -160,197 +159,193 @@ class gobs_package(object):
 				for k, v in packageDict.iteritems():
 					ebuild_id = ebuild_id_list[i]
 
-                                        # Comper and add the cpv to buildqueue
-                                        if build_cpv == k:
-                                                add_new_package_buildqueue(conn, ebuild_id, config_id, use_flagsDict, messages)
+					# Comper and add the cpv to buildqueue
+					if build_cpv == k:
+						add_new_package_buildqueue(conn, ebuild_id, config_id, use_flagsDict, messages)
 
-                                                # B = Build cpv use-flags config
-                                                config_setup = get_config_db(conn, config_id)
+						# B = Build cpv use-flags config
+						config_setup = get_config_db(conn, config_id)
 
-                                                # FIXME log_msg need a fix to log the use flags corect.
-                                                log_msg = "B %s:%s USE: %s %s" %  \
-                                                        (k, v['repo'], use_enable, config_setup,)
-                                                add_gobs_logs(conn, log_msg, "info", config_profile)
+						# FIXME log_msg need a fix to log the use flags corect.
+						log_msg = "B %s:%s USE: %s %s" %  \
+							(k, v['repo'], use_enable, config_setup,)
+						add_gobs_logs(conn, log_msg, "info", config_profile)
 					i = i +1
 		CM.putConnection(conn)
 
 	def get_package_metadataDict(self, pkgdir, package):
-                # Make package_metadataDict
-                attDict = {}
-                package_metadataDict = {}
-                changelog_checksum_tree = portage.checksum.sha256hash(pkgdir + "/ChangeLog")
-                changelog_text_tree = get_file_text(pkgdir + "/ChangeLog")
-                metadata_xml_checksum_tree = portage.checksum.sha256hash(pkgdir + "/metadata.xml")
-                metadata_xml_text_tree = get_file_text(pkgdir + "/metadata.xml")
-                attDict['changelog_checksum'] =  changelog_checksum_tree[0]
-                attDict['changelog_text'] =  changelog_text_tree
-                attDict['metadata_xml_checksum'] =  metadata_xml_checksum_tree[0]
-                attDict['metadata_xml_text'] =  metadata_xml_text_tree
-                package_metadataDict[package] = attDict
-                return package_metadataDict
+		# Make package_metadataDict
+		attDict = {}
+		package_metadataDict = {}
+		changelog_checksum_tree = portage.checksum.sha256hash(pkgdir + "/ChangeLog")
+		changelog_text_tree = get_file_text(pkgdir + "/ChangeLog")
+		metadata_xml_checksum_tree = portage.checksum.sha256hash(pkgdir + "/metadata.xml")
+		metadata_xml_text_tree = get_file_text(pkgdir + "/metadata.xml")
+		attDict['changelog_checksum'] =  changelog_checksum_tree[0]
+		attDict['changelog_text'] =  changelog_text_tree
+		attDict['metadata_xml_checksum'] =  metadata_xml_checksum_tree[0]
+		attDict['metadata_xml_text'] =  metadata_xml_text_tree
+		package_metadataDict[package] = attDict
+		return package_metadataDict
 
 	def add_new_package_db(self, categories, package, repo):
 		conn=CM.getConnection()
 		# Add new categories package ebuild to tables package and ebuilds
-                # C = Checking
-                # N = New Package
-                log_msg = "C %s/%s:%s" % (categories, package, repo)
-                add_gobs_logs(conn, log_msg, "info", config_profile)
-                log_msg = "N %s/%s:%s" % (categories, package, repo)
-                add_gobs_logs(conn, log_msg, "info", config_profile)
-                pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package # Get RepoDIR + cp
+		# C = Checking
+		# N = New Package
+		log_msg = "C %s/%s:%s" % (categories, package, repo)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+		log_msg = "N %s/%s:%s" % (categories, package, repo)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package # Get RepoDIR + cp
 
 		# Get the cp manifest file checksum.
-                try:
-                        manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
-                except:
-                        manifest_checksum_tree = "0"
-                        log_msg = "QA: Can't checksum the Manifest file. %s/%s:%s" % (categories, package, repo,)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        log_msg = "C %s/%s:%s ... Fail." % (categories, package, repo)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        CM.putConnection(conn)
-                        return
-                package_id = add_new_manifest_sql(conn, categories, package, repo, manifest_checksum_tree)
-
-                # Get the ebuild list for cp
-                mytree = []
-                mytree.append(self._myportdb.getRepositoryPath(repo))
-                ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=mytree)
-                if ebuild_list_tree == []:
-                        log_msg = "QA: Can't get the ebuilds list. %s/%s:%s" % (categories, package, repo,)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        log_msg = "C %s/%s:%s ... Fail." % (categories, package, repo)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        CM.putConnection(conn)
-                        return
+		try:
+			manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
+		except:
+			manifest_checksum_tree = "0"
+			log_msg = "QA: Can't checksum the Manifest file. %s/%s:%s" % (categories, package, repo,)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			log_msg = "C %s/%s:%s ... Fail." % (categories, package, repo)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			CM.putConnection(conn)
+			return
+		package_id = add_new_manifest_sql(conn, categories, package, repo, manifest_checksum_tree)
+
+		# Get the ebuild list for cp
+		mytree = []
+		mytree.append(self._myportdb.getRepositoryPath(repo))
+		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=mytree)
+		if ebuild_list_tree == []:
+			log_msg = "QA: Can't get the ebuilds list. %s/%s:%s" % (categories, package, repo,)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			log_msg = "C %s/%s:%s ... Fail." % (categories, package, repo)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			CM.putConnection(conn)
+			return
 
 		# set config to default config
-                default_config = get_default_config(conn)
+		default_config = get_default_config(conn)
 
-                # Make the needed packageDict with ebuild infos so we can add it later to the db.
-                packageDict ={}
-                ebuild_id_list = []
-                for cpv in sorted(ebuild_list_tree):
-                        packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo, default_config)
+		# Make the needed packageDict with ebuild infos so we can add it later to the db.
+		packageDict ={}
+		ebuild_id_list = []
+		for cpv in sorted(ebuild_list_tree):
+			packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo, default_config)
 
-                # Add new ebuilds to the db
-                ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)
+		# Add new ebuilds to the db
+		ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)
 
-                # Get the best cpv for the configs and add it to config_cpv_listDict
-                configs_id_list  = get_config_id_list(conn)
-                config_cpv_listDict = self.config_match_ebuild(categories + "/" + package, configs_id_list)
+		# Get the best cpv for the configs and add it to config_cpv_listDict
+		configs_id_list  = get_config_id_list(conn)
+		config_cpv_listDict = self.config_match_ebuild(categories + "/" + package, configs_id_list)
 
-                # Add the ebuild to the buildquery table if needed
-                self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
+		# Add the ebuild to the buildquery table if needed
+		self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
 
-                log_msg = "C %s/%s:%s ... Done." % (categories, package, repo)
-                add_gobs_logs(conn, log_msg, "info", config_profile)
-                print(categories, package, repo)
-                CM.putConnection(conn)
+		log_msg = "C %s/%s:%s ... Done." % (categories, package, repo)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+		print(categories, package, repo)
+		CM.putConnection(conn)
 
 	def update_package_db(self, package_id):
 		conn=CM.getConnection()
 		# Update the categories and package with new info
-                # C = Checking
-                cp, repo = get_cp_repo_from_package_id(conn, package_id)
-                element = cp.split('/')
-                package = element[1]
-                log_msg = "C %s:%s" % (cp, repo)
-                add_gobs_logs(conn, log_msg, "info", config_profile)
-                pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + cp # Get RepoDIR + cp
-
-                # Get the cp mainfest file checksum
-                try:
-                        manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
-                except:
-                        manifest_checksum_tree = "0"
-                        log_msg = "QA: Can't checksum the Manifest file. %s:%s" % (cp, repo,)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        log_msg = "C %s:%s ... Fail." % (cp, repo)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                        CM.putConnection(conn)
-                        return
-
-		# Get the checksum from the db in package table
-                manifest_checksum_db = get_manifest_db(conn, package_id)
-
-                # if we have the same checksum return else update the package
-                if manifest_checksum_tree != manifest_checksum_db:
+		# C = Checking
+		cp, repo = get_cp_repo_from_package_id(conn, package_id)
+		element = cp.split('/')
+		package = element[1]
+		log_msg = "C %s:%s" % (cp, repo)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + cp # Get RepoDIR + cp
+
+		# Get the cp mainfest file checksum
+		try:
+			manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
+		except:
+			manifest_checksum_tree = "0"
+			log_msg = "QA: Can't checksum the Manifest file. %s:%s" % (cp, repo,)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			log_msg = "C %s:%s ... Fail." % (cp, repo)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+			CM.putConnection(conn)
+			return
+
+		# if we NOT have the same checksum in the db update the package
+		if manifest_checksum_tree != get_manifest_db(conn, package_id):
 
 			# U = Update
-                        log_msg = "U %s:%s" % (cp, repo)
-                        add_gobs_logs(conn, log_msg, "info", config_profile)
-
-                        # Get the ebuild list for cp
-                        mytree = []
-                        mytree.append(self._myportdb.getRepositoryPath(repo))
-                        ebuild_list_tree = self._myportdb.cp_list(cp, use_cache=1, mytree=mytree)
-                        if ebuild_list_tree == []:
-                                log_msg = "QA: Can't get the ebuilds list. %s:%s" % (cp, repo,)
-                                add_gobs_logs(conn, log_msg, "info", config_profile)
-                                log_msg = "C %s:%s ... Fail." % (cp, repo)
-                                add_gobs_logs(conn, log_msg, "info", config_profile)
-                                CM.putConnection(conn)
-                                return
-                        packageDict ={}
-                        for cpv in sorted(ebuild_list_tree):
-                                old_ebuild_list = []
-
-                                # split out ebuild version
-                                ebuild_version_tree = portage.versions.cpv_getversion(cpv)
-
-                                # Get the checksum of the ebuild in tree and db
-                                # Make a checksum of the ebuild
-                                try:
-                                        ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")[0]
-                                except:
-                                        ebuild_version_checksum_tree = '0'
-                                        manifest_checksum_tree = '0'
-                                        log_msg = "QA: Can't checksum the ebuild file. %s on repo %s" % (cpv, repo,)
-                                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                                        log_msg = "C %s:%s ... Fail." % (cpv, repo)
-                                        add_gobs_logs(conn, log_msg, "info", config_profile)
-                                ebuild_version_manifest_checksum_db = get_ebuild_checksum(conn, package_id, ebuild_version_tree)
-
-
-                                # Check if the checksum have change
-                                if ebuild_version_manifest_checksum_db is None or ebuild_version_checksum_tree != ebuild_version_manifest_checksum_db:
-
-                                        # set config to default config
-                                        default_config = get_default_config(conn)
-
-                                        # Get packageDict for ebuild
-                                        packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo, default_config)
-                                        if ebuild_version_manifest_checksum_db is None:
-                                                # N = New ebuild
-                                                log_msg = "N %s:%s" % (cpv, repo,)
-                                                add_gobs_logs(conn, log_msg, "info", config_profile)
-                                        else:
-                                                # U = Updated ebuild
-                                                log_msg = "U %s:%s" % (cpv, repo,)
-                                                add_gobs_logs(conn, log_msg, "info", config_profile)
-
-                                                # Fix so we can use add_new_ebuild_sql() to update the ebuilds
-                                                old_ebuild_list.append(ebuild_version_tree)
-                                                add_old_ebuild(conn, package_id, old_ebuild_list)
-                                                update_active_ebuild_to_fales(conn, package_id, ebuild_version_tree)
+			log_msg = "U %s:%s" % (cp, repo)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+
+			# Get the ebuild list for cp
+			mytree = []
+			mytree.append(self._myportdb.getRepositoryPath(repo))
+			ebuild_list_tree = self._myportdb.cp_list(cp, use_cache=1, mytree=mytree)
+			if ebuild_list_tree == []:
+				log_msg = "QA: Can't get the ebuilds list. %s:%s" % (cp, repo,)
+				add_gobs_logs(conn, log_msg, "info", config_profile)
+				log_msg = "C %s:%s ... Fail." % (cp, repo)
+				add_gobs_logs(conn, log_msg, "info", config_profile)
+				CM.putConnection(conn)
+				return
+			packageDict ={}
+			for cpv in sorted(ebuild_list_tree):
+				old_ebuild_list = []
+
+				# split out ebuild version
+				ebuild_version_tree = portage.versions.cpv_getversion(cpv)
+
+				# Get the checksum of the ebuild in tree and db
+				# Make a checksum of the ebuild
+				try:
+					ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")[0]
+				except:
+					ebuild_version_checksum_tree = '0'
+					manifest_checksum_tree = '0'
+					log_msg = "QA: Can't checksum the ebuild file. %s on repo %s" % (cpv, repo,)
+					add_gobs_logs(conn, log_msg, "info", config_profile)
+					log_msg = "C %s:%s ... Fail." % (cpv, repo)
+					add_gobs_logs(conn, log_msg, "info", config_profile)
+				ebuild_version_manifest_checksum_db = get_ebuild_checksum(conn, package_id, ebuild_version_tree)
+
+				# Check if the checksum have change
+				if ebuild_version_manifest_checksum_db is None or ebuild_version_checksum_tree != ebuild_version_manifest_checksum_db:
+
+				# set config to default config
+					default_config = get_default_config(conn)
+
+					# Get packageDict for ebuild
+					packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo, default_config)
+					if ebuild_version_manifest_checksum_db is None:
+						# N = New ebuild
+						log_msg = "N %s:%s" % (cpv, repo,)
+						add_gobs_logs(conn, log_msg, "info", config_profile)
+					else:
+						# U = Updated ebuild
+						log_msg = "U %s:%s" % (cpv, repo,)
+						add_gobs_logs(conn, log_msg, "info", config_profile)
+
+						# Fix so we can use add_new_ebuild_sql() to update the ebuilds
+						old_ebuild_list.append(ebuild_version_tree)
+						add_old_ebuild(conn, package_id, old_ebuild_list)
+						update_active_ebuild_to_fales(conn, package_id, ebuild_version_tree
 			# Use packageDictand to update the db
-                        # Add new ebuilds to the db
-                        ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)
+			# Add new ebuilds to the db
+			ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)
 
-                        # update the cp manifest checksum
-                        update_manifest_sql(conn, package_id, manifest_checksum_tree)
+			# update the cp manifest checksum
+			update_manifest_sql(conn, package_id, manifest_checksum_tree)
 
-                        # Get the best cpv for the configs and add it to config_cpv_listDict
-                        configs_id_list  = get_config_id_list(conn)
-                        config_cpv_listDict = self.config_match_ebuild(cp, configs_id_list)
+			# Get the best cpv for the configs and add it to config_cpv_listDict
+			configs_id_list  = get_config_id_list(conn)
+			config_cpv_listDict = self.config_match_ebuild(cp, configs_id_list)
 
-                        # Add the ebuild to the buildqueru table if needed
-                        self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
+			# Add the ebuild to the buildqueru table if needed
+			self.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
 
-                log_msg = "C %s:%s ... Done." % (cp, repo)
-                add_gobs_logs(conn, log_msg, "info", config_profile)
+		log_msg = "C %s:%s ... Done." % (cp, repo)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
 		CM.putConnection(conn)
 
 	def update_ebuild_db(self, build_dict):

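A note on the checksum calls used throughout this file: portage.checksum.sha256hash() returns a pair whose first element is the hex digest, which is why the code above indexes it with [0]. A minimal stand-alone sketch of the same computation using only the standard library (the path below is a hypothetical example):

    import hashlib

    def sha256_of_file(path):
        # Stream the file in chunks; hexdigest() corresponds to the first
        # element of the pair portage.checksum.sha256hash() returns.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # e.g. sha256_of_file("app-misc/foo/foo-1.0.ebuild")  # hypothetical path
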
diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 9184a20..48dd825 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -7,302 +7,304 @@ def add_gobs_logs(connection, log_msg, log_type, config):
 	sqlQ = 'INSERT INTO logs (config_id, type, msg) VALUES ( (SELECT config_id FROM configs WHERE config = %s), %s, %s )'
 	cursor.execute(sqlQ, (config, log_type, log_msg))
 	connection.commit()
-	
-	# Queryes to handel the jobs table
-	def get_jobs_id(connection, config_profile):
-		cursor = connection.cursor()
-		sqlQ = "SELECT job_id FROM jobs WHERE status = 'Waiting' AND config_id = (SELECT config_id FROM configs WHERE config = %s)"
-		cursor.execute(sqlQ, (config_profile,))
-		entries = cursor.fetchall()
+
+# Queryes to handel the jobs table
+def get_jobs_id(connection, config_profile):
+	cursor = connection.cursor()
+	sqlQ = "SELECT job_id FROM jobs WHERE status = 'Waiting' AND config_id = (SELECT config_id FROM configs WHERE config = %s)"
+	cursor.execute(sqlQ, (config_profile,))
+	entries = cursor.fetchall()
+	if entries is None:
+		return None
+	jobs_id = []
+	for job_id in entries:
+		jobs_id.append(job_id[0])
+	return sorted(jobs_id)
+
+def get_job(connection, job_id):
+	cursor = connection.cursor()
+	sqlQ ='SELECT job FROM jobs WHERE job_id = %s'
+	cursor.execute(sqlQ, (job_id,))
+	job = cursor.fetchone()
+	return job[0]
+
+def update_job_list(connection, status, job_id):
+	cursor = connection.cursor()
+	sqlQ = 'UPDATE  jobs SET status = %s WHERE job_id = %s'
+	cursor.execute(sqlQ, (status, job_id,))
+	connection.commit()
+
+# Queryes to handel the configs* tables
+def get_config_list_all(connection):
+	cursor = connection.cursor()
+	sqlQ = 'SELECT config FROM configs'
+	cursor.execute(sqlQ)
+	entries = cursor.fetchall()
+	return entries
+
+def update_make_conf(connection, configsDict):
+	cursor = connection.cursor()
+	sqlQ1 = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
+	for k, v in configsDict.iteritems():
+		params = [v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k]
+		cursor.execute(sqlQ1, params)
+		connection.commit()
+
+def get_default_config(connection):
+	cursor = connection.cursor()
+	sqlQ = "SELECT config FROM configs WHERE default_config = 'True'"
+	cursor.execute(sqlQ)
+	entries = cursor.fetchone()
+	return entries
+
+def update_repo_db(connection, repo_list):
+	cursor = connection.cursor()
+	sqlQ1 = 'SELECT repo_id FROM repos WHERE repo = %s'
+	sqlQ2 = 'INSERT INTO repos (repo) VALUES ( %s )'
+	for repo in repo_list:
+		cursor.execute(sqlQ1, (repo,))
+		entries = cursor.fetchone()
 		if entries is None:
-			return None
-			jobs_id = []
-			for job_id in entries:
-				jobs_id.append(job_id[0])
-				return sorted(jobs_id)
-			
-			def get_job(connection, job_id):
-				cursor = connection.cursor()
-				sqlQ ='SELECT job FROM jobs WHERE job_id = %s'
-				cursor.execute(sqlQ, (job_id,))
-				job = cursor.fetchone()
-				return job[0]
-			
-			def update_job_list(connection, status, job_id):
-				cursor = connection.cursor()
-				sqlQ = 'UPDATE  jobs SET status = %s WHERE job_id = %s'
-				cursor.execute(sqlQ, (status, job_id,))
-				connection.commit()
-				
-				# Queryes to handel the configs* tables
-				def get_config_list_all(connection):
-					cursor = connection.cursor()
-					sqlQ = 'SELECT config FROM configs'
-					cursor.execute(sqlQ)
-					entries = cursor.fetchall()
-					return entries
-				def update_make_conf(connection, configsDict):
-					cursor = connection.cursor()
-					sqlQ1 = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
-					for k, v in configsDict.iteritems():
-						params = [v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k]
-						cursor.execute(sqlQ1, params)
-						connection.commit()
-						
-						def get_default_config(connection):
-							cursor = connection.cursor()
-							sqlQ = "SELECT config FROM configs WHERE default_config = 'True'"
-							cursor.execute(sqlQ)
-							entries = cursor.fetchone()
-							return entries
-						
-						def update_repo_db(connection, repo_list):
-							cursor = connection.cursor()
-							sqlQ1 = 'SELECT repo_id FROM repos WHERE repo = %s'
-							sqlQ2 = 'INSERT INTO repos (repo) VALUES ( %s )'
-							for repo in repo_list:
-								cursor.execute(sqlQ1, (repo,))
-								entries = cursor.fetchone()
-								if entries is None:
-									cursor.execute(sqlQ2, (repo,))
-									connection.commit()
-									return
+		cursor.execute(sqlQ2, (repo,))
+		connection.commit()
+	return
+
 def get_package_id(connection, categories, package, repo):
-  cursor = connection.cursor()
-  sqlQ ='SELECT package_id FROM packages WHERE category = %s AND package = %s AND repo_id = (SELECT repo_id FROM repos WHERE repo = %s)'
-  params = categories, package, repo
-  cursor.execute(sqlQ, params)
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ ='SELECT package_id FROM packages WHERE category = %s AND package = %s AND repo_id = (SELECT repo_id FROM repos WHERE repo = %s)'
+	params = categories, package, repo
+	cursor.execute(sqlQ, params)
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	return entries[0]
 
 # Add new info to the packages table
-
 def get_repo_id(connection, repo):
-  cursor = connection.cursor()
-  sqlQ ='SELECT repo_id FROM repos WHERE repo = %s'
-  cursor.execute(sqlQ, (repo,))
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ ='SELECT repo_id FROM repos WHERE repo = %s'
+	cursor.execute(sqlQ, (repo,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	return entries[0]
 
 def add_new_manifest_sql(connection, categories, package, repo, manifest_checksum_tree):
-  cursor = connection.cursor()
-  sqlQ = "INSERT INTO packages (category, package, repo_id, checksum, active) VALUES (%s, %s, %s, %s, 'True') RETURNING package_id"
-  repo_id = get_repo_id(connection, repo)
-  cursor.execute(sqlQ, (categories, package, repo_id, manifest_checksum_tree,))
-  package_id = cursor.fetchone()[0]
-  connection.commit()
-  return package_id
+	cursor = connection.cursor()
+	sqlQ = "INSERT INTO packages (category, package, repo_id, checksum, active) VALUES (%s, %s, %s, %s, 'True') RETURNING package_id"
+	repo_id = get_repo_id(connection, repo)
+	cursor.execute(sqlQ, (categories, package, repo_id, manifest_checksum_tree,))
+	package_id = cursor.fetchone()[0]
+	connection.commit()
+	return package_id
 
 def get_restriction_id(connection, restriction):
-  cursor = connection.cursor()
-  sqlQ ='SELECT restriction_id FROM restrictions WHERE restriction = %s'
-  cursor.execute(sqlQ, (restriction,))
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ ='SELECT restriction_id FROM restrictions WHERE restriction = %s'
+	cursor.execute(sqlQ, (restriction,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	return entries[0]
 
 def get_use_id(connection, use_flag):
-  cursor = connection.cursor()
-  sqlQ ='SELECT use_id FROM uses WHERE flag = %s'
-  cursor.execute(sqlQ, (use_flag,))
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ ='SELECT use_id FROM uses WHERE flag = %s'
+	cursor.execute(sqlQ, (use_flag,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	return entries[0]
 
 def get_keyword_id(connection, keyword):
-  cursor = connection.cursor()
-  sqlQ ='SELECT keyword_id FROM keywords WHERE keyword = %s'
-  cursor.execute(sqlQ, (keyword,))
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ ='SELECT keyword_id FROM keywords WHERE keyword = %s'
+	cursor.execute(sqlQ, (keyword,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	return entries[0]
 
 def add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, iuse_list):
-  cursor = connection.cursor()
-  sqlQ1 = 'INSERT INTO keywords (keyword) VALUES ( %s ) RETURNING keyword_id'
-  sqlQ3 = 'INSERT INTO restrictions (restriction) VALUES ( %s ) RETURNING restriction_id'
-  sqlQ4 = 'INSERT INTO ebuilds_restrictions (ebuild_id, restriction_id) VALUES ( %s, %s )'
-  sqlQ5 = 'INSERT INTO uses (flag) VALUES ( %s ) RETURNING use_id'
-  sqlQ6 = 'INSERT INTO ebuilds_iuse (ebuild_id, use_id, status) VALUES ( %s, %s, %s)'
-  sqlQ7 = 'INSERT INTO ebuilds_keywords (ebuild_id, keyword_id, status) VALUES ( %s, %s, %s)'
-  # FIXME restriction need some filter as iuse and keyword have.
-  for restriction in restrictions:
-    restriction_id = get_restriction_id(connection, restriction)
-    if restriction_id is None:
-      cursor.execute(sqlQ3, (restriction,))
-      restriction_id = cursor.fetchone()[0]
-    cursor.execute(sqlQ4, (ebuild_id, restriction_id,))
-  for iuse in iuse_list:
-    set_iuse = 'disable'
-    if iuse[0] in ["+"]:
-      iuse = iuse[1:]
-      set_iuse = 'enable'
-    elif iuse[0] in ["-"]:
-      iuse = iuse[1:]
-    use_id = get_use_id(connection, iuse)
-    if use_id is None:
-      cursor.execute(sqlQ5, (iuse,))
-      use_id = cursor.fetchone()[0]
-    for keyword in keywords:
-    set_keyword = 'stable'
-    if keyword[0] in ["~"]:
-      keyword = keyword[1:]
-      set_keyword = 'unstable'
-    elif keyword[0] in ["-"]:
-      keyword = keyword[1:]
-      set_keyword = 'testing'
-    keyword_id = get_keyword_id(connection, keyword)
-    if keyword_id is None:
-      cursor.execute(sqlQ1, (keyword,))
-      keyword_id = cursor.fetchone()[0]
-    cursor.execute(sqlQ7, (ebuild_id, keyword_id, set_keyword,))
-  connection.commit()cursor.execute(sqlQ6, (ebuild_id, use_id, set_iuse,))
+	cursor = connection.cursor()
+	sqlQ1 = 'INSERT INTO keywords (keyword) VALUES ( %s ) RETURNING keyword_id'
+	sqlQ3 = 'INSERT INTO restrictions (restriction) VALUES ( %s ) RETURNING restriction_id'
+	sqlQ4 = 'INSERT INTO ebuilds_restrictions (ebuild_id, restriction_id) VALUES ( %s, %s )'
+	sqlQ5 = 'INSERT INTO uses (flag) VALUES ( %s ) RETURNING use_id'
+	sqlQ6 = 'INSERT INTO ebuilds_iuse (ebuild_id, use_id, status) VALUES ( %s, %s, %s)'
+	sqlQ7 = 'INSERT INTO ebuilds_keywords (ebuild_id, keyword_id, status) VALUES ( %s, %s, %s)'
+	# FIXME restriction need some filter as iuse and keyword have.
+	for restriction in restrictions:
+		restriction_id = get_restriction_id(connection, restriction)
+		if restriction_id is None:
+			cursor.execute(sqlQ3, (restriction,))
+			restriction_id = cursor.fetchone()[0]
+		cursor.execute(sqlQ4, (ebuild_id, restriction_id,))
+	for iuse in iuse_list:
+		set_iuse = 'disable'
+		if iuse[0] in ["+"]:
+			iuse = iuse[1:]
+			set_iuse = 'enable'
+		elif iuse[0] in ["-"]:
+			iuse = iuse[1:]
+		use_id = get_use_id(connection, iuse)
+		if use_id is None:
+			cursor.execute(sqlQ5, (iuse,))
+ 			use_id = cursor.fetchone()[0]
+ 		cursor.execute(sqlQ6, (ebuild_id, use_id, set_iuse,))
+	for keyword in keywords:
+		set_keyword = 'stable'
+		if keyword[0] in ["~"]:
+			keyword = keyword[1:]
+			set_keyword = 'unstable'
+		elif keyword[0] in ["-"]:
+			keyword = keyword[1:]
+			set_keyword = 'testing'
+		keyword_id = get_keyword_id(connection, keyword)
+		if keyword_id is None:
+			cursor.execute(sqlQ1, (keyword,))
+			keyword_id = cursor.fetchone()[0]
+		cursor.execute(sqlQ7, (ebuild_id, keyword_id, set_keyword,))
+	connection.commit()
 
 def add_new_ebuild_sql(connection, package_id, ebuildDict):
-  cursor = connection.cursor()
-  sqlQ1 = 'SELECT repo_id FROM packages WHERE package_id = %s'
-  sqlQ2 = "INSERT INTO ebuilds (package_id, version, checksum, active) VALUES (%s, %s, %s, 'True') RETURNING ebuild_id"
-  sqlQ4 = "INSERT INTO ebuilds_metadata (ebuild_id, revision) VALUES (%s, %s)"
-  ebuild_id_list = []
-  cursor.execute(sqlQ1, (package_id,))
-  repo_id = cursor.fetchone()[0]
-  for k, v in ebuildDict.iteritems():
-    cursor.execute(sqlQ2, (package_id, v['ebuild_version_tree'], v['ebuild_version_checksum_tree'],))
-    ebuild_id = cursor.fetchone()[0]
-    cursor.execute(sqlQ4, (ebuild_id, v['ebuild_version_revision_tree'],))
-    ebuild_id_list.append(ebuild_id)
-    restrictions = []
-    keywords = []
-    iuse = []
-    for i in v['ebuild_version_metadata_tree'][4].split():
-      restrictions.append(i)
-    for i in v['ebuild_version_metadata_tree'][8].split():
-      keywords.append(i)
-    for i in v['ebuild_version_metadata_tree'][10].split():
-      iuse.append(i)
-    add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, iuse)
-  connection.commit()
-  return ebuild_id_list
+	cursor = connection.cursor()
+	sqlQ1 = 'SELECT repo_id FROM packages WHERE package_id = %s'
+	sqlQ2 = "INSERT INTO ebuilds (package_id, version, checksum, active) VALUES (%s, %s, %s, 'True') RETURNING ebuild_id"
+	sqlQ4 = "INSERT INTO ebuilds_metadata (ebuild_id, revision) VALUES (%s, %s)"
+	ebuild_id_list = []
+	cursor.execute(sqlQ1, (package_id,))
+	repo_id = cursor.fetchone()[0]
+	for k, v in ebuildDict.iteritems():
+		cursor.execute(sqlQ2, (package_id, v['ebuild_version_tree'], v['ebuild_version_checksum_tree'],))
+		ebuild_id = cursor.fetchone()[0]
+		cursor.execute(sqlQ4, (ebuild_id, v['ebuild_version_revision_tree'],))
+		ebuild_id_list.append(ebuild_id)
+		restrictions = []
+		keywords = []
+		iuse = []
+		for i in v['ebuild_version_metadata_tree'][4].split():
+			restrictions.append(i)
+		for i in v['ebuild_version_metadata_tree'][8].split():
+			keywords.append(i)
+		for i in v['ebuild_version_metadata_tree'][10].split():
+			iuse.append(i)
+		add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, iuse)
+	connection.commit()
+	return ebuild_id_list
 
 def get_config_id_list(connection):
-  cursor = connection.cursor()
-  sqlQ = "SELECT configs.config_id FROM configs, configs_metadata WHERE configs.default_config = 'False' AND configs_metadata.active = 'True' AND configs.config_id = configs_metadata.config_id"
-  cursor.execute(sqlQ)
-  entries = cursor.fetchall()
-  if entries == ():
-    return None
-  else:
-    config_id_list = []
-    for config_id in entries:
-      config_id_list.append(config_id[0])
-    return config_id_list
+	cursor = connection.cursor()
+	sqlQ = "SELECT configs.config_id FROM configs, configs_metadata WHERE configs.default_config = 'False' AND configs_metadata.active = 'True' AND configs.config_id = configs_metadata.config_id"
+	cursor.execute(sqlQ)
+	entries = cursor.fetchall()
+	if entries == ():
+		return None
+	else:
+		config_id_list = []
+ 	for config_id in entries:
+		config_id_list.append(config_id[0])
+	return config_id_list
 
 def get_config_db(connection, config_id):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT config FROM configs WHERE config_id = %s'
-  cursor.execute(sqlQ,(config_id,))
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ = 'SELECT config FROM configs WHERE config_id = %s'
+	cursor.execute(sqlQ,(config_id,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	return entries[0]
 
 def add_new_package_buildqueue(connection, ebuild_id, config_id, use_flagsDict, messages):
-  cursor = connection.cursor()
-  sqlQ1 = 'INSERT INTO build_jobs (ebuild_id, config_id) VALUES (%s, %s) RETURNING build_job_id'
-  sqlQ3 = 'INSERT INTO build_jobs_use (build_job_id, use_id, status) VALUES (%s, (SELECT use_id FROM uses WHERE flag = %s), %s)'
-  cursor.execute(sqlQ1, (ebuild_id, config_id,))
-  build_job_id = cursor.fetchone()[0]
-  for k, v in use_flagsDict.iteritems():
-    cursor.execute(sqlQ3, (build_job_id, k, v,))
-  connection.commit()
+	cursor = connection.cursor()
+	sqlQ1 = 'INSERT INTO build_jobs (ebuild_id, config_id) VALUES (%s, %s) RETURNING build_job_id'
+	sqlQ3 = 'INSERT INTO build_jobs_use (build_job_id, use_id, status) VALUES (%s, (SELECT use_id FROM uses WHERE flag = %s), %s)'
+	cursor.execute(sqlQ1, (ebuild_id, config_id,))
+	build_job_id = cursor.fetchone()[0]
+	for k, v in use_flagsDict.iteritems():
+		cursor.execute(sqlQ3, (build_job_id, k, v,))
+	connection.commit()
 
 def get_manifest_db(connection, package_id):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT checksum FROM packages WHERE package_id = %s'
-  cursor.execute(sqlQ, (package_id,))
-  entries = cursor.fetchone()
-  if entries is None:
-          return None
-  # If entries is not None we need [0]
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ = 'SELECT checksum FROM packages WHERE package_id = %s'
+	cursor.execute(sqlQ, (package_id,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	# If entries is not None we need [0]
+	return entries[0]
 
 def get_cp_from_package_id(connection, package_id):
-  cursor = connection.cursor()
-  sqlQ = "SELECT ARRAY_TO_STRING(ARRAY[category, package] , '/') AS cp FROM packages WHERE package_id = %s"
-  cursor.execute(sqlQ, (package_id,))
-  return cursor.fetchone()
+	cursor = connection.cursor()
+	sqlQ = "SELECT ARRAY_TO_STRING(ARRAY[category, package] , '/') AS cp FROM packages WHERE package_id = %s"
+	cursor.execute(sqlQ, (package_id,))
+	return cursor.fetchone()
 
 def get_cp_repo_from_package_id(connection, package_id):
-  cursor =connection.cursor()
-  sqlQ = 'SELECT repos.repo FROM repos, packages WHERE repos.repo_id = packages.repo_id AND packages.package_id = %s'
-  cp = get_cp_from_package_id(connection, package_id)
-  cursor.execute(sqlQ, (package_id,))
-  repo = cursor.fetchone()
-  return cp[0], repo[0]
+	cursor =connection.cursor()
+	sqlQ = 'SELECT repos.repo FROM repos, packages WHERE repos.repo_id = packages.repo_id AND packages.package_id = %s'
+	cp = get_cp_from_package_id(connection, package_id)
+	cursor.execute(sqlQ, (package_id,))
+	repo = cursor.fetchone()
+	return cp[0], repo[0]
 
 def get_ebuild_checksum(connection, package_id, ebuild_version_tree):
-  cursor = connection.cursor()
-  sqlQ = "SELECT checksum FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
-  cursor.execute(sqlQ, (package_id, ebuild_version_tree))
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
- # If entries is not None we need [0]
-  return entries[0]
+	cursor = connection.cursor()
+	sqlQ = "SELECT checksum FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
+	cursor.execute(sqlQ, (package_id, ebuild_version_tree))
+	entries = cursor.fetchone()
+	if entries is None:
+		return None
+	# If entries is not None we need [0]
+	return entries[0]
 
 def add_old_ebuild(connection, package_id, old_ebuild_list):
-  cursor = connection.cursor()
-  sqlQ1 = "UPDATE ebuilds SET active = 'False' WHERE package_id = %s AND version = %s"
-  sqlQ2 = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
-  sqlQ3 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s"
-  sqlQ4 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
-  sqlQ5 = 'DELETE FROM build_jobs WHERE build_job_id = %s'
-  for old_ebuild in  old_ebuild_list:
-    cursor.execute(sqlQ2, (package_id, old_ebuild[0]))
-    ebuild_id_list = cursor.fetchall()
-    if ebuild_id_list is not None:
-      for ebuild_id in ebuild_id_list:
-        cursor.execute(sqlQ3, (ebuild_id))
-        build_job_id_list = cursor.fetchall()
-        if build_job_id_list is not None:
-          for build_job_id in build_job_id_list:
-            cursor.execute(sqlQ4, (build_job_id))
-            cursor.execute(sqlQ5, (build_job_id))
-        cursor.execute(sqlQ1, (package_id, old_ebuild[0]))
-  connection.commit()
+	cursor = connection.cursor()
+	sqlQ1 = "UPDATE ebuilds SET active = 'False' WHERE package_id = %s AND version = %s"
+	sqlQ2 = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
+	sqlQ3 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s"
+	sqlQ4 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
+	sqlQ5 = 'DELETE FROM build_jobs WHERE build_job_id = %s'
+	for old_ebuild in  old_ebuild_list:
+		cursor.execute(sqlQ2, (package_id, old_ebuild[0]))
+		ebuild_id_list = cursor.fetchall()
+		if ebuild_id_list is not None:
+			for ebuild_id in ebuild_id_list:
+				cursor.execute(sqlQ3, (ebuild_id))
+				build_job_id_list = cursor.fetchall()
+				if build_job_id_list is not None:
+					for build_job_id in build_job_id_list:
+						cursor.execute(sqlQ4, (build_job_id))
+						cursor.execute(sqlQ5, (build_job_id))
+				cursor.execute(sqlQ1, (package_id, old_ebuild[0]))
+	connection.commit()
 
 def update_active_ebuild_to_fales(connection, package_id, ebuild_version_tree):
-  cursor = connection.cursor()
-  sqlQ ="UPDATE ebuilds SET active = 'False' WHERE package_id = %s AND version = %s AND active = 'True'"
-  cursor.execute(sqlQ, (package_id, ebuild_version_tree))
-  connection.commit()
+	cursor = connection.cursor()
+	sqlQ ="UPDATE ebuilds SET active = 'False' WHERE package_id = %s AND version = %s AND active = 'True'"
+	cursor.execute(sqlQ, (package_id, ebuild_version_tree))
+	connection.commit()
 
 def update_manifest_sql(connection, package_id, manifest_checksum_tree):
-  cursor = connection.cursor()
-  sqlQ = 'UPDATE packages SET checksum = %s WHERE package_id = %s'
-  cursor.execute(sqlQ, (manifest_checksum_tree, package_id,))
-  connection.commit()
+	cursor = connection.cursor()
+	sqlQ = 'UPDATE packages SET checksum = %s WHERE package_id = %s'
+	cursor.execute(sqlQ, (manifest_checksum_tree, package_id,))
+	connection.commit()
 
 def get_build_jobs_id_list_config(connection, config_id):
-        cursor = connection.cursor()
-        sqlQ = 'SELECT build_job_id FROM build_jobs WHERE config_id = %s'
-        cursor.execute(sqlQ,  (config_id,))
-        entries = cursor.fetchall()
-        return entries
+	cursor = connection.cursor()
+	sqlQ = 'SELECT build_job_id FROM build_jobs WHERE config_id = %s'
+	cursor.execute(sqlQ,  (config_id,))
+	entries = cursor.fetchall()
+	return entries
 
 def del_old_build_jobs(connection, queue_id):
-        cursor = connection.cursor()
-        sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
-        sqlQ2 = 'DELETE FROM build_jobs_retest WHERE build_job_id  = %s'
-        sqlQ3 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
-        cursor.execute(sqlQ1, (build_job_id,))
-        cursor.execute(sqlQ2, (build_job_id,))
-        cursor.execute(sqlQ3, (build_job_id,))
-        connection.commit()
+	cursor = connection.cursor()
+	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
+	sqlQ2 = 'DELETE FROM build_jobs_retest WHERE build_job_id  = %s'
+	sqlQ3 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
+	cursor.execute(sqlQ1, (build_job_id,))
+	cursor.execute(sqlQ2, (build_job_id,))
+	cursor.execute(sqlQ3, (build_job_id,))
+	connection.commit()

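The IUSE and KEYWORDS loops in add_new_ebuild_metadata_sql() follow the usual Gentoo conventions: a leading '+' or '-' on an IUSE flag marks its default state, and a leading '~' or '-' on a keyword marks it unstable or (as this code labels it) testing. A small self-contained sketch of that classification, mirroring the loops above:

    def classify_iuse(iuse):
        # '+flag' -> enabled by default; '-flag' or a bare flag -> disabled
        status = 'disable'
        if iuse.startswith('+'):
            iuse, status = iuse[1:], 'enable'
        elif iuse.startswith('-'):
            iuse = iuse[1:]
        return iuse, status

    def classify_keyword(keyword):
        # '~arch' -> unstable; '-arch' -> what the code above calls 'testing'
        status = 'stable'
        if keyword.startswith('~'):
            keyword, status = keyword[1:], 'unstable'
        elif keyword.startswith('-'):
            keyword, status = keyword[1:], 'testing'
        return keyword, status

    # classify_iuse('+ssl')      -> ('ssl', 'enable')
    # classify_keyword('~amd64') -> ('amd64', 'unstable')
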
diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 87eb79e..bfff592 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -13,7 +13,7 @@ from gobs.ConnectionManager import connectionManager
 CM=connectionManager(gobs_settings_dict)
 #selectively import the pgsql/mysql querys
 if CM.getName()=='pgsql':
-	from gobs.pgsql import *
+	from gobs.pgsql_querys import *
 
 config_profile = gobs_settings_dict['gobs_config']
 

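sync.py picks its query module at import time from the connection manager's backend name. A hedged sketch of that dispatch, assuming the CM instance from the module header (the mysql branch appears in later commits):

    backend = CM.getName()
    if backend == 'pgsql':
        from gobs.pgsql_querys import *
    elif backend == 'mysql':
        from gobs.mysql_querys import *
    else:
        raise ImportError("unsupported db backend: %s" % backend)
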
diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index fb185d2..144b6bd 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -76,35 +76,34 @@ def update_cpv_db():
 	myportdb = portage.portdbapi(mysettings=mysettings)
 	init_package = gobs_package(mysettings, myportdb)
 	repo_list = ()
-        repos_trees_list = []
-
-        # Use all cores when multiprocessing
-        pool_cores= multiprocessing.cpu_count()
-        pool = multiprocessing.Pool(processes=pool_cores)
-
-        # Will run some update checks and update package if needed
-        # Get categories/package list from portage and repos
-        
-        # Get the repos and update the repos db
-        repo_list = myportdb.getRepositories()
-        update_repo_db(conn, repo_list)
-        CM.putConnection(conn)
-        
+	repos_trees_list = []
+
+	# Use all cores when multiprocessing
+	pool_cores= multiprocessing.cpu_count()
+	pool = multiprocessing.Pool(processes=pool_cores)
+
+	# Will run some update checks and update package if needed
+	# Get categories/package list from portage and repos
+	# Get the repos and update the repos db
+	repo_list = myportdb.getRepositories()
+	update_repo_db(conn, repo_list)
+	CM.putConnection(conn)
+
 	# Get the rootdirs for the repos
-        repo_trees_list = myportdb.porttrees
-        for repo_dir in repo_trees_list:
-                repo = myportdb.getRepositoryName(repo_dir)
-                repo_dir_list = []
-                repo_dir_list.append(repo_dir)
-
-                # Get the package list from the repo
-                package_id_list_tree = []
-                package_list_tree = myportdb.cp_all(trees=repo_dir_list)
-
-                # Run the update package for all package in the list and in a multiprocessing pool
-                for package_line in sorted(package_list_tree):
-                        pool.apply_async(update_cpv_db_pool, (mysettings, myportdb, init_package, package_line, repo,))
-                        # update_cpv_db_pool(mysettings, myportdb, init_package, package_line, repo)
+	repo_trees_list = myportdb.porttrees
+	for repo_dir in repo_trees_list:
+		repo = myportdb.getRepositoryName(repo_dir)
+		repo_dir_list = []
+		repo_dir_list.append(repo_dir)
+
+		# Get the package list from the repo
+		package_id_list_tree = []
+		package_list_tree = myportdb.cp_all(trees=repo_dir_list)
+
+		# Run the update package for all package in the list and in a multiprocessing pool
+		for package_line in sorted(package_list_tree):
+			pool.apply_async(update_cpv_db_pool, (mysettings, myportdb, init_package, package_line, repo,))
+			# update_cpv_db_pool(mysettings, myportdb, init_package, package_line, repo)
 	pool.close()
 	pool.join()
 	conn=CM.getConnection()
@@ -114,15 +113,15 @@ def update_cpv_db():
 
 def update_db_main():
 	# Main
-        conn=CM.getConnection()
-
-        # Logging
-        log_msg = "Update db started."
-        add_gobs_logs(conn, log_msg, "info", config_profile)
-
-        # Update the cpv db
-        update_cpv_db()
-        log_msg = "Update db ... Done."
-        add_gobs_logs(conn, log_msg, "info", config_profile)
-        CM.putConnection(conn)
-        return True
+	conn=CM.getConnection()
+
+	# Logging
+	log_msg = "Update db started."
+	add_gobs_logs(conn, log_msg, "info", config_profile)
+
+	# Update the cpv db
+	update_cpv_db()
+	log_msg = "Update db ... Done."
+ 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
+	return True

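update_cpv_db() fans the per-package updates out over a multiprocessing pool sized to the machine's core count, then closes and joins it. A minimal self-contained sketch of the same pool pattern; the worker below is a placeholder, not the real update_cpv_db_pool():

    import multiprocessing

    def worker(package_line):
        # Stand-in for update_cpv_db_pool(mysettings, myportdb, init_package, ...)
        print("updating", package_line)

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
        for package_line in sorted(["app-misc/foo", "dev-lang/python"]):
            pool.apply_async(worker, (package_line,))
        pool.close()  # no more tasks will be submitted
        pool.join()   # wait for every queued task to finish
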

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 11:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 11:31 UTC (permalink / raw
  To: gentoo-commits

commit:     7adef1310301c55b1fa6c5cfd17002687ce1b3d7
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 11:31:12 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 11:31:12 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7adef131

fix some syntax errors and indented blocks

---
 gobs/pym/old_cpv.py      |    4 ++--
 gobs/pym/package.py      |    2 +-
 gobs/pym/pgsql_querys.py |    4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/gobs/pym/old_cpv.py b/gobs/pym/old_cpv.py
index 3d47c50..e644f36 100644
--- a/gobs/pym/old_cpv.py
+++ b/gobs/pym/old_cpv.py
@@ -8,7 +8,7 @@ from gobs.ConnectionManager import connectionManager
 CM=connectionManager(gobs_settings_dict)
 #selectively import the pgsql/mysql querys
 if CM.getName()=='pgsql':
-	from gobs.pgsql import *
+	from gobs.pgsql_querys import *
 
 class gobs_old_cpv(object):
 	
@@ -22,7 +22,7 @@ class gobs_old_cpv(object):
 		cp, repo = get_cp_repo_from_package_id(conn, package_id)
 		mytree = []
 		mytree.append(self._myportdb.getRepositoryPath(repo))
-		ebuild_list_tree = self._myportdb.cp_list((cp, use_cache=1, mytree=mytree)
+		ebuild_list_tree = self._myportdb.cp_list(cp, use_cache=1, mytree=mytree)
 		# Get ebuild list on categories, package in the db
 		ebuild_list_db = cp_list_db(conn, package_id)
 		# Check if don't have the ebuild in the tree

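The hunk above removes a stray parenthesis so the call matches portdbapi.cp_list(cp, use_cache=1, mytree=mytree), which returns the cpv list for a category/package limited to the given tree(s). Typical usage looks roughly like this (repo name and package are illustrative, and it only works on a host with a portage tree):

    import portage

    portdb = portage.portdbapi()
    mytree = [portdb.getRepositoryPath('gentoo')]
    for cpv in sorted(portdb.cp_list('dev-lang/python', use_cache=1, mytree=mytree)):
        print(cpv, portage.versions.cpv_getversion(cpv))
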
diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index e123e8e..ed608ea 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -13,7 +13,7 @@ config_profile = gobs_settings_dict['gobs_config']
 from gobs.ConnectionManager import connectionManager
 CM=connectionManager(gobs_settings_dict)
 #selectively import the pgsql/mysql querys
-iif CM.getName()=='pgsql':
+if CM.getName()=='pgsql':
         from gobs.pgsql_querys import *
 if CM.getName()=='mysql':
         from gobs.mysql_querys import *

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 48dd825..c4ae37c 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -65,8 +65,8 @@ def update_repo_db(connection, repo_list):
 		cursor.execute(sqlQ1, (repo,))
 		entries = cursor.fetchone()
 		if entries is None:
-		cursor.execute(sqlQ2, (repo,))
-		connection.commit()
+			cursor.execute(sqlQ2, (repo,))
+	connection.commit()
 	return
 
 def get_package_id(connection, categories, package, repo):


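The pgsql_querys.py hunk is worth spelling out: the INSERT is re-indented under the `if entries is None:` guard, and connection.commit() moves out of the loop so the whole batch lands in one commit. The corrected control flow, as a sketch against the same schema:

    def update_repo_db(connection, repo_list):
        cursor = connection.cursor()
        for repo in repo_list:
            cursor.execute('SELECT repo_id FROM repos WHERE repo = %s', (repo,))
            if cursor.fetchone() is None:
                # insert only repos that are not in the table yet
                cursor.execute('INSERT INTO repos (repo) VALUES ( %s )', (repo,))
        connection.commit()  # one commit for the whole batch
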
^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 22:58 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 22:58 UTC (permalink / raw
  To: gentoo-commits

commit:     532cafd140bfff6d288527ced5a528ef4baacfac
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 22:58:03 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 22:58:03 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=532cafd1

fix an indented block

---
 gobs/pym/package.py |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index ed608ea..0d7a934 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -36,8 +36,6 @@ class gobs_package(object):
 			return config_cpv_listDict
 			conn=CM.getConnection()
 		for config_id in config_id_list:
-			# Change config/setup
-			for config_id in config_id_list:
 
 			# Change config/setup
 			config_setup = get_config_db(conn, config_id)


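The deleted lines were a duplicated nested loop: iterating config_id_list inside an identical copy of itself would have repeated the per-config work len(config_id_list) times. After the fix the intent is a single pass, roughly as below (the helper name is hypothetical; get_config_db comes from the queries module):

    def iter_config_setups(conn, config_id_list):
        # One get_config_db() lookup per config id, exactly once each
        for config_id in config_id_list:
            yield config_id, get_config_db(conn, config_id)
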
^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:03 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:03 UTC (permalink / raw
  To: gentoo-commits

commit:     dd3b447230817b9de3fe1f4883ab4444f75cd6fd
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:03:36 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:03:36 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=dd3b4472

fix an invalid syntax error

---
 gobs/pym/package.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 0d7a934..b8747bb 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -327,7 +327,7 @@ class gobs_package(object):
 						# Fix so we can use add_new_ebuild_sql() to update the ebuilds
 						old_ebuild_list.append(ebuild_version_tree)
 						add_old_ebuild(conn, package_id, old_ebuild_list)
-						update_active_ebuild_to_fales(conn, package_id, ebuild_version_tree
+						update_active_ebuild_to_fales(conn, package_id, ebuild_version_tree)
 			# Use packageDictand to update the db
 			# Add new ebuilds to the db
 			ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:12 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:12 UTC (permalink / raw
  To: gentoo-commits

commit:     81415947e93d59d2a5ea8163fc18a18f1cf4c389
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:12:14 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:12:14 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=81415947

fix get_ebuild_cvs_revision in get_packageDict()

---
 gobs/pym/package.py |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index b8747bb..c960b07 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -3,7 +3,7 @@ import portage
 from gobs.flags import gobs_use_flags
 from gobs.repoman_gobs import gobs_repoman
 from gobs.manifest import gobs_manifest
-from gobs.text import get_file_text, get_ebuild_text
+from gobs.text import get_ebuild_cvs_revision
 from gobs.readconf import get_conf_settings
 from gobs.flags import gobs_use_flags
 reader=get_conf_settings()
@@ -99,9 +99,9 @@ class gobs_package(object):
 			add_gobs_logs(conn, log_msg, "info", config_profile)
 			log_msg = "C %s:%s ... Fail." % (cpv, repo)
 			add_gobs_logs(conn, log_msg, "info", config_profile)
-			ebuild_version_text_tree = '0'
+			ebuild_version_cvs_revision_tree = '0'
 		else:
-			ebuild_version_text_tree = get_ebuild_text(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")
+			ebuild_version_cvs_revision_tree = get_ebuild_cvs_revision(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")
 
 		# run repoman on the ebuild
 		#init_repoman = gobs_repoman(self._mysettings, self._myportdb)
@@ -128,7 +128,7 @@ class gobs_package(object):
 		attDict['ebuild_version_checksum_tree']= ebuild_version_checksum_tree
 		attDict['ebuild_version_metadata_tree'] = ebuild_version_metadata_tree
 		#attDict['ebuild_version_text_tree'] = ebuild_version_text_tree[0]
-		attDict['ebuild_version_revision_tree'] = ebuild_version_text_tree[1]
+		attDict['ebuild_version_revision_tree'] = ebuild_version_cvs_revision_tree
 		attDict['ebuild_error'] = repoman_error
 		CM.putConnection(conn)
 		return attDict


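Net effect of this commit: the revision stored in attDict now comes straight from get_ebuild_cvs_revision(), a single value, instead of element [1] of the tuple that get_ebuild_text() used to return. The real helper lives in gobs/pym/text.py and is not shown here; a purely hypothetical sketch of what such a function could look like:

    import re

    def get_ebuild_cvs_revision(filename):
        # Hypothetical: pull the revision field out of a CVS $Header$
        # keyword line near the top of the ebuild; '0' on any failure.
        try:
            with open(filename) as f:
                text = f.read()
        except EnvironmentError:
            return '0'
        m = re.search(r'\$Header: \S+,v (\S+) ', text)
        return m.group(1) if m else '0'
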
^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:24 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:24 UTC (permalink / raw
  To: gentoo-commits

commit:     c3ffd537c635ed762905e516f00b9ddaebe70eaf
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:24:30 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:24:30 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=c3ffd537

fix "name 'update__make_conf' is not defined"

---
 gobs/pym/check_setup.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 37d0285..4d62909 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -55,7 +55,7 @@ def check_make_conf():
 	  attDict['make_conf_text'] = get_file_text(make_conf_file)
 	  attDict['make_conf_checksum_tree'] = make_conf_checksum_tree
 	  configsDict[config_id[0]]=attDict
-  update__make_conf(conn, configsDict)
+  update_make_conf(conn, configsDict)
   log_msg = "Checking configs for changes and errors ... Done"
   add_gobs_logs(conn, log_msg, "info", config_profile)
   CM.putConnection(conn)


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:28 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:28 UTC (permalink / raw
  To: gentoo-commits

commit:     c07903f5ebe77b11486d87ebb04250818462a4c6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:28:20 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:28:20 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=c07903f5

fix "can't adapt type 'ParseError'"

---
 gobs/pym/pgsql_querys.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index c4ae37c..8bc17bd 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -45,10 +45,10 @@ def get_config_list_all(connection):
 def update_make_conf(connection, configsDict):
 	cursor = connection.cursor()
 	sqlQ1 = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
+	params = [v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k]
 	for k, v in configsDict.iteritems():
-		params = [v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k]
 		cursor.execute(sqlQ1, params)
-		connection.commit()
+	connection.commit()
 
 def get_default_config(connection):
 	cursor = connection.cursor()


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:33 UTC (permalink / raw
  To: gentoo-commits

commit:     45873dc970c08863ae8b8ce8a32bda4f7ae10cbf
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:32:48 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:32:48 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=45873dc9

clean up update_make_conf

---
 gobs/pym/pgsql_querys.py |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 8bc17bd..5eafd19 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -44,10 +44,9 @@ def get_config_list_all(connection):
 
 def update_make_conf(connection, configsDict):
 	cursor = connection.cursor()
-	sqlQ1 = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
-	params = [v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k]
+	sqlQ = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
 	for k, v in configsDict.iteritems():
-		cursor.execute(sqlQ1, params)
+		cursor.execute(sqlQ, ([v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k],))
 	connection.commit()
 
 def get_default_config(connection):


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:35 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:35 UTC (permalink / raw
  To: gentoo-commits

commit:     890940ff0a7c43541fd741221f80ebc1cf688acb
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:34:42 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:34:42 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=890940ff

fix tuple index out of range

---
 gobs/pym/pgsql_querys.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 5eafd19..b74d18c 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -46,7 +46,7 @@ def update_make_conf(connection, configsDict):
 	cursor = connection.cursor()
 	sqlQ = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
 	for k, v in configsDict.iteritems():
-		cursor.execute(sqlQ, ([v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k],))
+		cursor.execute(sqlQ, (v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k,))
 	connection.commit()
 
 def get_default_config(connection):
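
Note: cursor.execute() wants a flat sequence with one item per %s
placeholder. The previous revision passed ([...],) -- a 1-tuple whose
single element is a list -- so psycopg2 had one parameter for five
placeholders and failed with "tuple index out of range" while formatting
the query. A sketch with a simplified two-column statement:

    sqlQ = 'UPDATE configs_metadata SET checksum = %s WHERE config_id = %s'
    cursor.execute(sqlQ, ([checksum, config_id],))   # 1 parameter: IndexError
    cursor.execute(sqlQ, (checksum, config_id))      # 2 parameters: OK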


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-01 23:58 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-01 23:58 UTC (permalink / raw
  To: gentoo-commits

commit:     b76f0d127571cc4a43a6ebe7b93e0ead2c356bcc
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec  1 23:57:54 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec  1 23:57:54 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=b76f0d12

fix check_make_conf

---
 gobs/pym/check_setup.py  |   15 ++++++++-------
 gobs/pym/pgsql_querys.py |   11 +++++++++--
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 4d62909..b1a185c 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -19,14 +19,15 @@ if CM.getName()=='pgsql':
 def check_make_conf():
   # Get the config list
   conn=CM.getConnection()
-  config_list_all = get_config_list_all(conn)
+  config_id_list_all = get_config_list_all(conn)
   log_msg = "Checking configs for changes and errors"
   add_gobs_logs(conn, log_msg, "info", config_profile)
   configsDict = {}
-  for config_id in config_list_all:
+  for config_id in config_id_list_all:
 	  attDict={}
 	  # Set the config dir
-	  check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
+	  config = get_config(conn, config_id):
+	  check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config + "/"
 	  make_conf_file = check_config_dir + "etc/portage/make.conf"
 	  # Check if we can open the file and close it
 	  # Check if we have some error in the file (portage.util.getconfig)
@@ -41,20 +42,20 @@ def check_make_conf():
 	  except Exception as e:
 		  attDict['config_error'] = e
 		  attDict['active'] = 'False'
-		  log_msg = "%s FAIL!" % (config_id[0],)
+		  log_msg = "%s FAIL!" % (config,)
 		  add_gobs_logs(conn, log_msg, "info", config_profile)
 	  else:
 		  attDict['config_error'] = ''
 		  attDict['active'] = 'True'
-		  log_msg = "%s PASS" % (config_id[0],)
+		  log_msg = "%s PASS" % (config,)
 		  add_gobs_logs(conn, log_msg, "info", config_profile)
 	  # Get the checksum of make.conf
 	  make_conf_checksum_tree = portage.checksum.sha256hash(make_conf_file)[0]
-	  log_msg = "make.conf checksum is %s on %s" % (make_conf_checksum_tree, config_id[0],)
+	  log_msg = "make.conf checksum is %s on %s" % (make_conf_checksum_tree, config,)
 	  add_gobs_logs(conn, log_msg, "info", config_profile)
 	  attDict['make_conf_text'] = get_file_text(make_conf_file)
 	  attDict['make_conf_checksum_tree'] = make_conf_checksum_tree
-	  configsDict[config_id[0]]=attDict
+	  configsDict[config_id]=attDict
   update_make_conf(conn, configsDict)
   log_msg = "Checking configs for changes and errors ... Done"
   add_gobs_logs(conn, log_msg, "info", config_profile)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index b74d18c..3c0813d 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -37,14 +37,21 @@ def update_job_list(connection, status, job_id):
 # Queryes to handel the configs* tables
 def get_config_list_all(connection):
 	cursor = connection.cursor()
-	sqlQ = 'SELECT config FROM configs'
+	sqlQ = 'SELECT config_id FROM configs'
 	cursor.execute(sqlQ)
 	entries = cursor.fetchall()
 	return entries
 
+def get_config(connection, config_id):
+	cursor = connection.cursor()
+	sqlQ ='SELECT config FROM configs WHERE config_id = %s'
+	cursor.execute(sqlQ, (config_id,))
+	config = cursor.fetchone()
+	return config[0]
+
 def update_make_conf(connection, configsDict):
 	cursor = connection.cursor()
-	sqlQ = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = (SELECT config_id FROM configs WHERE config = %s)'
+	sqlQ = 'UPDATE configs_metadata SET checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE config_id = %s'
 	for k, v in configsDict.iteritems():
 		cursor.execute(sqlQ, (v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k,))
 	connection.commit()
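
Note: keying configsDict by config_id lets update_make_conf() update
configs_metadata directly instead of going through a subselect on configs.
Two details worth flagging: the new line `config = get_config(conn,
config_id):` carries a stray trailing colon (a SyntaxError, removed two
commits later), and get_config() indexes fetchone() because psycopg2
returns whole rows as tuples. Roughly:

    cursor.execute('SELECT config FROM configs WHERE config_id = %s',
        (config_id,))
    row = cursor.fetchone()   # a 1-tuple, or None when nothing matches
    return row[0]             # unpack the scalar; a None check would
                              # guard against unknown ids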


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-02  0:05 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-02  0:05 UTC (permalink / raw
  To: gentoo-commits

commit:     b88c47083fb9c5a0ab68a7749ceb9dca1428806c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Dec  2 00:05:35 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Dec  2 00:05:35 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=b88c4708

refix check_make_conf

---
 gobs/pym/check_setup.py |   81 ++++++++++++++++++++++-------------------------
 1 files changed, 38 insertions(+), 43 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index b1a185c..7c9fcad 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -17,49 +17,44 @@ if CM.getName()=='pgsql':
 	from gobs.pgsql_querys import *
 
 def check_make_conf():
-  # Get the config list
-  conn=CM.getConnection()
-  config_id_list_all = get_config_list_all(conn)
-  log_msg = "Checking configs for changes and errors"
-  add_gobs_logs(conn, log_msg, "info", config_profile)
-  configsDict = {}
-  for config_id in config_id_list_all:
-	  attDict={}
-	  # Set the config dir
-	  config = get_config(conn, config_id):
-	  check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config + "/"
-	  make_conf_file = check_config_dir + "etc/portage/make.conf"
-	  # Check if we can open the file and close it
-	  # Check if we have some error in the file (portage.util.getconfig)
-	  # Check if we envorment error with the config (settings.validate)
-	  try:
-		  open_make_conf = open(make_conf_file)
-		  open_make_conf.close()
-		  portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=False, expand=True)
-		  mysettings = portage.config(config_root = check_config_dir)
-		  mysettings.validate()
-		  # With errors we update the db on the config and disable the config
-	  except Exception as e:
-		  attDict['config_error'] = e
-		  attDict['active'] = 'False'
-		  log_msg = "%s FAIL!" % (config,)
-		  add_gobs_logs(conn, log_msg, "info", config_profile)
-	  else:
-		  attDict['config_error'] = ''
-		  attDict['active'] = 'True'
-		  log_msg = "%s PASS" % (config,)
-		  add_gobs_logs(conn, log_msg, "info", config_profile)
-	  # Get the checksum of make.conf
-	  make_conf_checksum_tree = portage.checksum.sha256hash(make_conf_file)[0]
-	  log_msg = "make.conf checksum is %s on %s" % (make_conf_checksum_tree, config,)
-	  add_gobs_logs(conn, log_msg, "info", config_profile)
-	  attDict['make_conf_text'] = get_file_text(make_conf_file)
-	  attDict['make_conf_checksum_tree'] = make_conf_checksum_tree
-	  configsDict[config_id]=attDict
-  update_make_conf(conn, configsDict)
-  log_msg = "Checking configs for changes and errors ... Done"
-  add_gobs_logs(conn, log_msg, "info", config_profile)
-  CM.putConnection(conn)
+	# Get the config list
+	conn=CM.getConnection()
+	config_id_list_all = get_config_list_all(conn)
+	log_msg = "Checking configs for changes and errors"
+	add_gobs_logs(conn, log_msg, "info", config_profile)
+	configsDict = {}
+	for config_id in config_id_list_all:
+		attDict={}
+		# Set the config dir
+		config = get_config(conn, config_id):
+		check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config + "/"
+		make_conf_file = check_config_dir + "etc/portage/make.conf"
+		# Check if we can take a checksum on it.
+		# Check if we have some error in the file. (portage.util.getconfig)
+		# Check if we envorment error with the config. (settings.validate)
+		try:
+			make_conf_checksum_tree = portage.checksum.sha256hash(make_conf_file)[0]
+			portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=False, expand=True)
+			mysettings = portage.config(config_root = check_config_dir)
+			mysettings.validate()
+			# With errors we update the db on the config and disable the config
+		except Exception as e:
+			attDict['config_error'] = e
+			attDict['active'] = 'False'
+			log_msg = "%s FAIL!" % (config,)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+		else:
+			attDict['config_error'] = ''
+			attDict['active'] = 'True'
+			log_msg = "%s PASS" % (config,)
+			add_gobs_logs(conn, log_msg, "info", config_profile)
+		attDict['make_conf_text'] = get_file_text(make_conf_file)
+		attDict['make_conf_checksum_tree'] = make_conf_checksum_tree
+		configsDict[config_id]=attDict
+	update_make_conf(conn, configsDict)
+	log_msg = "Checking configs for changes and errors ... Done"
+	add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
 
 def check_make_conf_guest(config_profile):
 	conn=CM.getConnection()
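
Note: besides the re-indent to plain tabs, this revision drops the
open()/close() probe and moves the sha256hash() call inside the try, so
an unreadable make.conf lands in the same handler as a broken one. The
try/except/else shape is doing real work here: the else branch runs only
when the whole try body succeeded. A stripped-down sketch:

    try:
        checksum = portage.checksum.sha256hash(make_conf_file)[0]
        mysettings = portage.config(config_root=check_config_dir)
        mysettings.validate()
    except Exception as e:
        active = 'False'   # any failure disables the config
    else:
        active = 'True'    # reached only on a clean pass

(One caveat in the committed version: if the checksum call itself raises,
make_conf_checksum_tree is unbound when it is read after the block.)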


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-02  0:06 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-02  0:06 UTC (permalink / raw
  To: gentoo-commits

commit:     169e982c365b97d95f578607d71ac5c76cb6cdb0
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Dec  2 00:06:33 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Dec  2 00:06:33 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=169e982c

fix a typo in check_make_conf

---
 gobs/pym/check_setup.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 7c9fcad..8b19e48 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -26,7 +26,7 @@ def check_make_conf():
 	for config_id in config_id_list_all:
 		attDict={}
 		# Set the config dir
-		config = get_config(conn, config_id):
+		config = get_config(conn, config_id)
 		check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config + "/"
 		make_conf_file = check_config_dir + "etc/portage/make.conf"
 		# Check if we can take a checksum on it.


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-02 11:49 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-02 11:49 UTC (permalink / raw
  To: gentoo-commits

commit:     348a534efb282ab5e3209ab7c07f1344c887f110
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Dec  2 11:48:47 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Dec  2 11:48:47 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=348a534e

fix the validate() error if we have layman support

---
 gobs/pym/check_setup.py |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 8b19e48..99d1dbb 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -3,8 +3,8 @@ import portage
 import os
 import errno
 
+from portage.exception import DigestException, FileNotFound, ParseError, PermissionDenied
 from gobs.text import get_file_text
-
 from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
@@ -34,12 +34,12 @@ def check_make_conf():
 		# Check if we envorment error with the config. (settings.validate)
 		try:
 			make_conf_checksum_tree = portage.checksum.sha256hash(make_conf_file)[0]
-			portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=False, expand=True)
+			portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=True, expand=True)
 			mysettings = portage.config(config_root = check_config_dir)
 			mysettings.validate()
 			# With errors we update the db on the config and disable the config
-		except Exception as e:
-			attDict['config_error'] = e
+		except ParseError as e:
+			attDict['config_error'] =  str(e)
 			attDict['active'] = 'False'
 			log_msg = "%s FAIL!" % (config,)
 			add_gobs_logs(conn, log_msg, "info", config_profile)
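
Note: with layman installed, make.conf conventionally ends with a
`source /var/lib/layman/make.conf` line, and portage.util.getconfig()
with allow_sourcing=False treats that keyword as a parse error -- hence
the validate() failures this commit targets. Narrowing the handler to
ParseError and storing str(e) also keeps the recorded error adaptable by
psycopg2 (see the "can't adapt type" fix above). In short:

    from portage.exception import ParseError

    try:
        # allow_sourcing=True follows `source ...` lines instead of
        # rejecting them
        portage.util.getconfig(make_conf_file, tolerant=0,
            allow_sourcing=True, expand=True)
    except ParseError as e:
        config_error = str(e)   # store text, not the exception object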


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-02 11:53 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-02 11:53 UTC (permalink / raw
  To: gentoo-commits

commit:     7b946753c31f922d9f938b0bb28af8899e971a88
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Dec  2 11:53:32 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Dec  2 11:53:32 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7b946753

fix so we run more than one job

---
 gobs/pym/jobs.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/jobs.py b/gobs/pym/jobs.py
index 6b5340b..aa294aa 100644
--- a/gobs/pym/jobs.py
+++ b/gobs/pym/jobs.py
@@ -90,4 +90,4 @@ def jobs_main(config_profile):
 				update_job_list(conn, "Fail", job_id)
 				log_msg = "Job %s did fail." % (job_id,)
 				add_gobs_logs(conn, log_msg, "info", config_profile)
-		return
\ No newline at end of file
+	return
\ No newline at end of file
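
Note: the one-column outdent is the entire fix -- the return previously
sat inside the for body, so jobs_main() handled the first job and bailed
out. Schematically (run() is a stand-in for the job dispatch):

    for job_id in jobs_id:
        run(job_id)
        return        # old: exits on the first iteration

    for job_id in jobs_id:
        run(job_id)
    return            # new: every job runs, then return once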


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  0:04 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  0:04 UTC (permalink / raw
  To: gentoo-commits

commit:     2186580357f16932692782c32fe3b65c52001c69
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 00:04:19 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 00:04:19 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=21865803

fix tabs and spaces in indentation

---
 gobs/pym/jobs.py         |    2 +-
 gobs/pym/pgsql_querys.py |    4 ++--
 gobs/pym/updatedb.py     |    2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/jobs.py b/gobs/pym/jobs.py
index aa294aa..3eaab83 100644
--- a/gobs/pym/jobs.py
+++ b/gobs/pym/jobs.py
@@ -20,7 +20,7 @@ def jobs_main(config_profile):
 	jobs_id = get_jobs_id(conn, config_profile)
 	if jobs_id is None:
 		CM.putConnection(conn)
- 		return
+		return
 	for job_id in jobs_id:
 		job = get_job(conn, job_id)
 		log_msg = "Job: %s Type: %s" % (job_id, job,)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 6f75e53..f17eee7 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -156,8 +156,8 @@ def add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, i
 		use_id = get_use_id(connection, iuse)
 		if use_id is None:
 			cursor.execute(sqlQ5, (iuse,))
- 			use_id = cursor.fetchone()[0]
- 		cursor.execute(sqlQ6, (ebuild_id, use_id, set_iuse,))
+			use_id = cursor.fetchone()[0]
+		cursor.execute(sqlQ6, (ebuild_id, use_id, set_iuse,))
 	for keyword in keywords:
 		set_keyword = 'stable'
 		if keyword[0] in ["~"]:

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 144b6bd..215e841 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -122,6 +122,6 @@ def update_db_main():
 	# Update the cpv db
 	update_cpv_db()
 	log_msg = "Update db ... Done."
- 	add_gobs_logs(conn, log_msg, "info", config_profile)
+	add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 	return True
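
Note: each offending line began with a space followed by a tab in an
otherwise tab-indented block. Python 2 accepts the mix silently unless
run with -tt, while Python 3 always rejects it ("TabError: inconsistent
use of tabs and spaces in indentation"). The stdlib ships a checker for
exactly this:

    # prints any line whose indentation is ambiguous between tab sizes
    import tabnanny
    tabnanny.check('gobs/pym/jobs.py')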


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  0:08 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  0:08 UTC (permalink / raw
  To: gentoo-commits

commit:     bfbf12877eb1683b684876fe95a7f3453becdbb9
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 00:07:53 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 00:07:53 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=bfbf1287

fix EOL while scanning string literal

---
 gobs/pym/pgsql_querys.py |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index f17eee7..d945d7f 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -317,8 +317,9 @@ def del_old_build_jobs(connection, queue_id):
 
 def get_packages_to_build(connection, config):
 	cursor =connection.cursor()
-	sqlQ1 = "SELECT build_job_id.build_jobs, ebuild_id.build_jobs, package_id.ebuilds FROM build_jobs, ebuilds WHERE config_id.build_jobs = (SELECT config_id FROM configs WHERE config = %s)
- 		AND extract(epoch from (NOW()) - timestamp.build_jobs) > 7200 AND ebuild_id.build_jobs = ebuild_id.ebuilds
+	sqlQ1 = "SELECT build_job_id.build_jobs, ebuild_id.build_jobs, package_id.ebuilds FROM build_jobs, ebuilds WHERE \
+		config_id.build_jobs = (SELECT config_id FROM configs WHERE config = %s) \
+ 		AND extract(epoch from (NOW()) - time_stamp.build_jobs) > 7200 AND ebuild_id.build_jobs = ebuild_id.ebuilds \
  		AND ebuilds.active = 'True' ORDER BY LIMIT 1"
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
 	sqlQ3 = 'SELECT flag.uses, status.build_jobs_use FROM build_jobs_use, uses WHERE build_job_id.build_jobs_use = %s use_id.build_jobs_use = use_id.uses'
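
Note: a plain '...' string cannot span physical lines, which is what
"EOL while scanning string literal" reports on the old two-line sqlQ1.
The fix keeps one logical line with backslash continuations; implicit
concatenation of adjacent literals inside parentheses is the usual
alternative, e.g. with a simplified query:

    sqlQ1 = (
        "SELECT build_job_id FROM build_jobs "
        "WHERE config_id = (SELECT config_id FROM configs WHERE config = %s) "
        "LIMIT 1")

(The committed statement still carries an `ORDER BY LIMIT 1` fragment,
which PostgreSQL will reject; that is separate from the string-literal
fix.)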


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  0:11 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  0:11 UTC (permalink / raw
  To: gentoo-commits

commit:     c8e49a6e01dc30bf894080fbe58da1bf3546be48
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 00:11:25 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 00:11:25 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=c8e49a6e

fix tabs and spaces in indentation

---
 gobs/pym/jobs.py         |    2 +-
 gobs/pym/pgsql_querys.py |   10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/jobs.py b/gobs/pym/jobs.py
index 3eaab83..11543a2 100644
--- a/gobs/pym/jobs.py
+++ b/gobs/pym/jobs.py
@@ -24,7 +24,7 @@ def jobs_main(config_profile):
 	for job_id in jobs_id:
 		job = get_job(conn, job_id)
 		log_msg = "Job: %s Type: %s" % (job_id, job,)
-                add_gobs_logs(conn, log_msg, "info", config_profile)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
 		if job == "addbuildquery":
 			update_job_list(conn, "Runing", job_id)
 			log_msg = "Job %s is runing." % (job_id,)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index d945d7f..517db8b 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -208,7 +208,7 @@ def get_config_id_list(connection):
 		return None
 	else:
 		config_id_list = []
- 	for config_id in entries:
+	for config_id in entries:
 		config_id_list.append(config_id[0])
 	return config_id_list
 
@@ -319,15 +319,15 @@ def get_packages_to_build(connection, config):
 	cursor =connection.cursor()
 	sqlQ1 = "SELECT build_job_id.build_jobs, ebuild_id.build_jobs, package_id.ebuilds FROM build_jobs, ebuilds WHERE \
 		config_id.build_jobs = (SELECT config_id FROM configs WHERE config = %s) \
- 		AND extract(epoch from (NOW()) - time_stamp.build_jobs) > 7200 AND ebuild_id.build_jobs = ebuild_id.ebuilds \
- 		AND ebuilds.active = 'True' ORDER BY LIMIT 1"
+		AND extract(epoch from (NOW()) - time_stamp.build_jobs) > 7200 AND ebuild_id.build_jobs = ebuild_id.ebuilds \
+		AND ebuilds.active = 'True' ORDER BY LIMIT 1"
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
 	sqlQ3 = 'SELECT flag.uses, status.build_jobs_use FROM build_jobs_use, uses WHERE build_job_id.build_jobs_use = %s use_id.build_jobs_use = use_id.uses'
 	cursor.execute(sqlQ1, (config,))
 	build_dict={}
 	entries = cursor.fetchone()
 	if entries is None:
- 		return None
+		return None
 	build_dict['build_job_id'] = entries[0]
 	build_dict['ebuild_id']= entries[1]
 	build_dict['package_id'] = entries[2]
@@ -339,7 +339,7 @@ def get_packages_to_build(connection, config):
 	cursor.execute(sqlQ3, (build_dict['build_job_id'],))
 	uses={}
 	for row in cursor.fetchall():
- 		uses[ row[0] ] = row[1]
+		uses[ row[0] ] = row[1]
 	build_dict['build_useflags']=uses
 	return build_dict
 


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  2:18 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  2:18 UTC (permalink / raw
  To: gentoo-commits

commit:     7b4a579ee4d1e4d86758a8b15a073ef825c2c2df
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 02:18:00 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 02:18:00 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7b4a579e

rework of emerge_main

---
 gobs/pym/actions.py     | 3895 +++++++++++++++++++++++++++++++++++++++++++++++
 gobs/pym/build_queru.py |  650 +--------
 gobs/pym/main.py        | 1021 +++++++++++++
 3 files changed, 4972 insertions(+), 594 deletions(-)
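
Note: judging by the headers and imports, actions.py and main.py are
adapted copies of portage's own _emerge/actions.py and _emerge/main.py,
with build_queru.py trimmed down to call into them. The gobs-specific
seam is the build_dict threaded through action_build() and
build_mydepgraph(), so dependency-calculation failures are recorded for
the queue instead of only printed. The failure path, condensed from the
diff below (log_fail_queru is gobs' own helper):

    build_dict, success, settings, trees, mtimedb = build_mydepgraph(
        settings, trees, mtimedb, myopts, myparams, myaction, myfiles,
        spinner, build_dict)
    if not success:
        build_dict['type_fail'] = "Dep calc fail"
        build_dict['check_fail'] = True
    if build_dict['check_fail'] is True:
        log_fail_queru(build_dict, settings)   # persist the failure
        return 1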

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
new file mode 100644
index 0000000..a3a158c
--- /dev/null
+++ b/gobs/pym/actions.py
@@ -0,0 +1,3895 @@
+# Copyright 1999-2012 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from __future__ import print_function
+
+import errno
+import logging
+import operator
+import platform
+import pwd
+import random
+import re
+import signal
+import socket
+import stat
+import subprocess
+import sys
+import tempfile
+import textwrap
+import time
+from itertools import chain
+
+import portage
+portage.proxy.lazyimport.lazyimport(globals(),
+	'portage.debug',
+	'portage.news:count_unread_news,display_news_notifications',
+	'_emerge.chk_updated_cfg_files:chk_updated_cfg_files',
+	'_emerge.help:help@emerge_help',
+	'_emerge.post_emerge:display_news_notification,post_emerge',
+	'_emerge.stdout_spinner:stdout_spinner',
+)
+
+from portage.localization import _
+from portage import os
+from portage import shutil
+from portage import eapi_is_supported, _encodings, _unicode_decode
+from portage.cache.cache_errors import CacheError
+from portage.const import GLOBAL_CONFIG_PATH
+from portage.const import _DEPCLEAN_LIB_CHECK_DEFAULT
+from portage.dbapi.dep_expand import dep_expand
+from portage.dbapi._expand_new_virt import expand_new_virt
+from portage.dep import Atom
+from portage.eclass_cache import hashed_path
+from portage.exception import InvalidAtom, InvalidData
+from portage.output import blue, bold, colorize, create_color_func, darkgreen, \
+	red, xtermTitle, xtermTitleReset, yellow
+good = create_color_func("GOOD")
+bad = create_color_func("BAD")
+warn = create_color_func("WARN")
+from portage.package.ebuild._ipc.QueryCommand import QueryCommand
+from portage.package.ebuild.doebuild import _check_temp_dir
+from portage._sets import load_default_config, SETPREFIX
+from portage._sets.base import InternalPackageSet
+from portage.util import cmp_sort_key, writemsg, varexpand, \
+	writemsg_level, writemsg_stdout
+from portage.util.digraph import digraph
+from portage.util._async.SchedulerInterface import SchedulerInterface
+from portage.util._eventloop.global_event_loop import global_event_loop
+from portage._global_updates import _global_updates
+
+from _emerge.clear_caches import clear_caches
+from _emerge.countdown import countdown
+from _emerge.create_depgraph_params import create_depgraph_params
+from _emerge.Dependency import Dependency
+from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
+from _emerge.DepPrioritySatisfiedRange import DepPrioritySatisfiedRange
+from _emerge.emergelog import emergelog
+from _emerge.is_valid_package_atom import is_valid_package_atom
+from _emerge.MetadataRegen import MetadataRegen
+from _emerge.Package import Package
+from _emerge.ProgressHandler import ProgressHandler
+from _emerge.RootConfig import RootConfig
+from _emerge.Scheduler import Scheduler
+from _emerge.search import search
+from _emerge.SetArg import SetArg
+from _emerge.show_invalid_depstring_notice import show_invalid_depstring_notice
+from _emerge.sync.getaddrinfo_validate import getaddrinfo_validate
+from _emerge.sync.old_tree_timestamp import old_tree_timestamp_warn
+from _emerge.unmerge import unmerge
+from _emerge.UnmergeDepPriority import UnmergeDepPriority
+from _emerge.UseFlagDisplay import pkg_use_display
+from _emerge.userquery import userquery
+
+from gobs.build_queru import log_fail_queru
+
+if sys.hexversion >= 0x3000000:
+	long = int
+	_unicode = str
+else:
+	_unicode = unicode
+
+def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict):
+	try:
+		success, mydepgraph, favorites = backtrack_depgraph(
+			settings, trees, myopts, myparams, myaction, myfiles, spinner)
+	except portage.exception.PackageSetNotFound as e:
+		root_config = trees[settings["ROOT"]]["root_config"]
+		display_missing_pkg_set(root_config, e.value)
+		build_dict['type_fail'] = "depgraph fail"
+		build_dict['check_fail'] = True
+	else:
+		if not success:
+			if mydepgraph._dynamic_config._needed_p_mask_changes:
+				build_dict['type_fail'] = "Mask packages"
+				build_dict['check_fail'] = True
+				mydepgraph.display_problems()
+			if mydepgraph._dynamic_config._needed_use_config_changes:
+				repeat = True
+				repeat_times = 0
+				while repeat:
+					mydepgraph._display_autounmask()
+					settings, trees, mtimedb = load_emerge_config()
+					myparams = create_depgraph_params(myopts, myaction)
+					try:
+						success, mydepgraph, favorites = backtrack_depgraph(
+						settings, trees, myopts, myparams, myaction, myfiles, spinner)
+					except portage.exception.PackageSetNotFound as e:
+						root_config = trees[settings["ROOT"]]["root_config"]
+						display_missing_pkg_set(root_config, e.value)
+					if not success and mydepgraph._dynamic_config._needed_use_config_changes:
+						print("repaet_times:", repeat_times)
+						if repeat_times is 2:
+							build_dict['type_fail'] = "Need use change"
+							build_dict['check_fail'] = True
+							mydepgraph.display_problems()
+							repeat = False
+						else:
+							repeat_times = repeat_times + 1
+					else:
+						repeat = False
+
+			if mydepgraph._dynamic_config._unsolvable_blockers:
+				mydepgraph.display_problems()
+				build_dict['type_fail'] = "Blocking packages"
+				build_dict['check_fail'] = True
+
+			if mydepgraph._dynamic_config._slot_collision_info:
+				mydepgraph.display_problems()
+				build_dict['type_fail'] = "Slot blocking"
+				build_dict['check_fail'] = True
+	
+	return build_dict, success, settings, trees, mtimedb
+
+def action_build(settings, trees, mtimedb,
+	myopts, myaction, myfiles, spinner, build_dict):
+
+	if '--usepkgonly' not in myopts:
+		old_tree_timestamp_warn(settings['PORTDIR'], settings)
+
+	# It's best for config updates in /etc/portage to be processed
+	# before we get here, so warn if they're not (bug #267103).
+	chk_updated_cfg_files(settings['EROOT'], ['/etc/portage'])
+
+	# validate the state of the resume data
+	# so that we can make assumptions later.
+	for k in ("resume", "resume_backup"):
+		if k not in mtimedb:
+			continue
+		resume_data = mtimedb[k]
+		if not isinstance(resume_data, dict):
+			del mtimedb[k]
+			continue
+		mergelist = resume_data.get("mergelist")
+		if not isinstance(mergelist, list):
+			del mtimedb[k]
+			continue
+		for x in mergelist:
+			if not (isinstance(x, list) and len(x) == 4):
+				continue
+			pkg_type, pkg_root, pkg_key, pkg_action = x
+			if pkg_root not in trees:
+				# Current $ROOT setting differs,
+				# so the list must be stale.
+				mergelist = None
+				break
+		if not mergelist:
+			del mtimedb[k]
+			continue
+		resume_opts = resume_data.get("myopts")
+		if not isinstance(resume_opts, (dict, list)):
+			del mtimedb[k]
+			continue
+		favorites = resume_data.get("favorites")
+		if not isinstance(favorites, list):
+			del mtimedb[k]
+			continue
+
+	resume = False
+	if "--resume" in myopts and \
+		("resume" in mtimedb or
+		"resume_backup" in mtimedb):
+		resume = True
+		if "resume" not in mtimedb:
+			mtimedb["resume"] = mtimedb["resume_backup"]
+			del mtimedb["resume_backup"]
+			mtimedb.commit()
+		# "myopts" is a list for backward compatibility.
+		resume_opts = mtimedb["resume"].get("myopts", [])
+		if isinstance(resume_opts, list):
+			resume_opts = dict((k,True) for k in resume_opts)
+		for opt in ("--ask", "--color", "--skipfirst", "--tree"):
+			resume_opts.pop(opt, None)
+
+		# Current options always override resume_opts.
+		resume_opts.update(myopts)
+		myopts.clear()
+		myopts.update(resume_opts)
+
+		if "--debug" in myopts:
+			writemsg_level("myopts %s\n" % (myopts,))
+
+		# Adjust config according to options of the command being resumed.
+		for myroot in trees:
+			mysettings =  trees[myroot]["vartree"].settings
+			mysettings.unlock()
+			adjust_config(myopts, mysettings)
+			mysettings.lock()
+			del myroot, mysettings
+
+	ldpath_mtimes = mtimedb["ldpath"]
+	favorites=[]
+	buildpkgonly = "--buildpkgonly" in myopts
+	pretend = "--pretend" in myopts
+	fetchonly = "--fetchonly" in myopts or "--fetch-all-uri" in myopts
+	ask = "--ask" in myopts
+	enter_invalid = '--ask-enter-invalid' in myopts
+	nodeps = "--nodeps" in myopts
+	oneshot = "--oneshot" in myopts or "--onlydeps" in myopts
+	tree = "--tree" in myopts
+	if nodeps and tree:
+		tree = False
+		del myopts["--tree"]
+		portage.writemsg(colorize("WARN", " * ") + \
+			"--tree is broken with --nodeps. Disabling...\n")
+	debug = "--debug" in myopts
+	verbose = "--verbose" in myopts
+	quiet = "--quiet" in myopts
+	myparams = create_depgraph_params(myopts, myaction)
+	mergelist_shown = False
+
+	if pretend or fetchonly:
+		# make the mtimedb readonly
+		mtimedb.filename = None
+	if '--digest' in myopts or 'digest' in settings.features:
+		if '--digest' in myopts:
+			msg = "The --digest option"
+		else:
+			msg = "The FEATURES=digest setting"
+
+		msg += " can prevent corruption from being" + \
+			" noticed. The `repoman manifest` command is the preferred" + \
+			" way to generate manifests and it is capable of doing an" + \
+			" entire repository or category at once."
+		prefix = bad(" * ")
+		writemsg(prefix + "\n")
+		for line in textwrap.wrap(msg, 72):
+			writemsg("%s%s\n" % (prefix, line))
+		writemsg(prefix + "\n")
+
+	if resume:
+		favorites = mtimedb["resume"].get("favorites")
+		if not isinstance(favorites, list):
+			favorites = []
+
+		resume_data = mtimedb["resume"]
+		mergelist = resume_data["mergelist"]
+		if mergelist and "--skipfirst" in myopts:
+			for i, task in enumerate(mergelist):
+				if isinstance(task, list) and \
+					task and task[-1] == "merge":
+					del mergelist[i]
+					break
+
+		success = False
+		mydepgraph = None
+		try:
+			success, mydepgraph, dropped_tasks = resume_depgraph(
+				settings, trees, mtimedb, myopts, myparams, spinner)
+		except (portage.exception.PackageNotFound,
+			depgraph.UnsatisfiedResumeDep) as e:
+			if isinstance(e, depgraph.UnsatisfiedResumeDep):
+				mydepgraph = e.depgraph
+
+			from portage.output import EOutput
+			out = EOutput()
+
+			resume_data = mtimedb["resume"]
+			mergelist = resume_data.get("mergelist")
+			if not isinstance(mergelist, list):
+				mergelist = []
+			if mergelist and debug or (verbose and not quiet):
+				out.eerror("Invalid resume list:")
+				out.eerror("")
+				indent = "  "
+				for task in mergelist:
+					if isinstance(task, list):
+						out.eerror(indent + str(tuple(task)))
+				out.eerror("")
+
+			if isinstance(e, depgraph.UnsatisfiedResumeDep):
+				out.eerror("One or more packages are either masked or " + \
+					"have missing dependencies:")
+				out.eerror("")
+				indent = "  "
+				for dep in e.value:
+					if dep.atom is None:
+						out.eerror(indent + "Masked package:")
+						out.eerror(2 * indent + str(dep.parent))
+						out.eerror("")
+					else:
+						out.eerror(indent + str(dep.atom) + " pulled in by:")
+						out.eerror(2 * indent + str(dep.parent))
+						out.eerror("")
+				msg = "The resume list contains packages " + \
+					"that are either masked or have " + \
+					"unsatisfied dependencies. " + \
+					"Please restart/continue " + \
+					"the operation manually, or use --skipfirst " + \
+					"to skip the first package in the list and " + \
+					"any other packages that may be " + \
+					"masked or have missing dependencies."
+				for line in textwrap.wrap(msg, 72):
+					out.eerror(line)
+			elif isinstance(e, portage.exception.PackageNotFound):
+				out.eerror("An expected package is " + \
+					"not available: %s" % str(e))
+				out.eerror("")
+				msg = "The resume list contains one or more " + \
+					"packages that are no longer " + \
+					"available. Please restart/continue " + \
+					"the operation manually."
+				for line in textwrap.wrap(msg, 72):
+					out.eerror(line)
+
+		if success:
+			if dropped_tasks:
+				portage.writemsg("!!! One or more packages have been " + \
+					"dropped due to\n" + \
+					"!!! masking or unsatisfied dependencies:\n\n",
+					noiselevel=-1)
+				for task in dropped_tasks:
+					portage.writemsg("  " + str(task) + "\n", noiselevel=-1)
+				portage.writemsg("\n", noiselevel=-1)
+			del dropped_tasks
+		else:
+			if mydepgraph is not None:
+				mydepgraph.display_problems()
+			if not (ask or pretend):
+				# delete the current list and also the backup
+				# since it's probably stale too.
+				for k in ("resume", "resume_backup"):
+					mtimedb.pop(k, None)
+				mtimedb.commit()
+
+			return 1
+	else:
+		if ("--resume" in myopts):
+			print(darkgreen("emerge: It seems we have nothing to resume..."))
+			return os.EX_OK
+
+		build_dict, success, settings, trees, mtimedb = build_mydepgraph(settings,
+			trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict)
+
+		if not success:
+			build_dict['type_fail'] = "Dep calc fail"
+			build_dict['check_fail'] = True
+			mydepgraph.display_problems()
+
+		if build_dict['check_fail'] is True:
+			log_fail_queru(build_dict, settings)
+			return 1
+
+	if "--pretend" not in myopts and \
+		("--ask" in myopts or "--tree" in myopts or \
+		"--verbose" in myopts) and \
+		not ("--quiet" in myopts and "--ask" not in myopts):
+		if "--resume" in myopts:
+			mymergelist = mydepgraph.altlist()
+			if len(mymergelist) == 0:
+				print(colorize("INFORM", "emerge: It seems we have nothing to resume..."))
+				return os.EX_OK
+			favorites = mtimedb["resume"]["favorites"]
+			retval = mydepgraph.display(
+				mydepgraph.altlist(reversed=tree),
+				favorites=favorites)
+			mydepgraph.display_problems()
+			mergelist_shown = True
+			if retval != os.EX_OK:
+				return retval
+			prompt="Would you like to resume merging these packages?"
+		else:
+			retval = mydepgraph.display(
+				mydepgraph.altlist(reversed=("--tree" in myopts)),
+				favorites=favorites)
+			mydepgraph.display_problems()
+			mergelist_shown = True
+			if retval != os.EX_OK:
+				return retval
+			mergecount=0
+			for x in mydepgraph.altlist():
+				if isinstance(x, Package) and x.operation == "merge":
+					mergecount += 1
+
+			if mergecount==0:
+				sets = trees[settings['EROOT']]['root_config'].sets
+				world_candidates = None
+				if "selective" in myparams and \
+					not oneshot and favorites:
+					# Sets that are not world candidates are filtered
+					# out here since the favorites list needs to be
+					# complete for depgraph.loadResumeCommand() to
+					# operate correctly.
+					world_candidates = [x for x in favorites \
+						if not (x.startswith(SETPREFIX) and \
+						not sets[x[1:]].world_candidate)]
+				if "selective" in myparams and \
+					not oneshot and world_candidates:
+					print()
+					for x in world_candidates:
+						print(" %s %s" % (good("*"), x))
+					prompt="Would you like to add these packages to your world favorites?"
+				elif settings["AUTOCLEAN"] and "yes"==settings["AUTOCLEAN"]:
+					prompt="Nothing to merge; would you like to auto-clean packages?"
+				else:
+					print()
+					print("Nothing to merge; quitting.")
+					print()
+					return os.EX_OK
+			elif "--fetchonly" in myopts or "--fetch-all-uri" in myopts:
+				prompt="Would you like to fetch the source files for these packages?"
+			else:
+				prompt="Would you like to merge these packages?"
+		print()
+		if "--ask" in myopts and userquery(prompt, enter_invalid) == "No":
+			print()
+			print("Quitting.")
+			print()
+			return 128 + signal.SIGINT
+		# Don't ask again (e.g. when auto-cleaning packages after merge)
+		myopts.pop("--ask", None)
+
+	if ("--pretend" in myopts) and not ("--fetchonly" in myopts or "--fetch-all-uri" in myopts):
+		if ("--resume" in myopts):
+			mymergelist = mydepgraph.altlist()
+			if len(mymergelist) == 0:
+				print(colorize("INFORM", "emerge: It seems we have nothing to resume..."))
+				return os.EX_OK
+			favorites = mtimedb["resume"]["favorites"]
+			retval = mydepgraph.display(
+				mydepgraph.altlist(reversed=tree),
+				favorites=favorites)
+			mydepgraph.display_problems()
+			mergelist_shown = True
+			if retval != os.EX_OK:
+				return retval
+		else:
+			retval = mydepgraph.display(
+				mydepgraph.altlist(reversed=("--tree" in myopts)),
+				favorites=favorites)
+			mydepgraph.display_problems()
+			mergelist_shown = True
+			if retval != os.EX_OK:
+				return retval
+			if "--buildpkgonly" in myopts:
+				graph_copy = mydepgraph._dynamic_config.digraph.copy()
+				removed_nodes = set()
+				for node in graph_copy:
+					if not isinstance(node, Package) or \
+						node.operation == "nomerge":
+						removed_nodes.add(node)
+				graph_copy.difference_update(removed_nodes)
+				if not graph_copy.hasallzeros(ignore_priority = \
+					DepPrioritySatisfiedRange.ignore_medium):
+					print("\n!!! --buildpkgonly requires all dependencies to be merged.")
+					print("!!! You have to merge the dependencies before you can build this package.\n")
+					return 1
+	else:
+		if "--buildpkgonly" in myopts:
+			graph_copy = mydepgraph._dynamic_config.digraph.copy()
+			removed_nodes = set()
+			for node in graph_copy:
+				if not isinstance(node, Package) or \
+					node.operation == "nomerge":
+					removed_nodes.add(node)
+			graph_copy.difference_update(removed_nodes)
+			if not graph_copy.hasallzeros(ignore_priority = \
+				DepPrioritySatisfiedRange.ignore_medium):
+				print("\n!!! --buildpkgonly requires all dependencies to be merged.")
+				print("!!! Cannot merge requested packages. Merge deps and try again.\n")
+				return 1
+
+		if not mergelist_shown:
+			# If we haven't already shown the merge list above, at
+			# least show warnings about missed updates and such.
+			mydepgraph.display_problems()
+
+		if ("--resume" in myopts):
+			favorites=mtimedb["resume"]["favorites"]
+
+		else:
+			if "resume" in mtimedb and \
+			"mergelist" in mtimedb["resume"] and \
+			len(mtimedb["resume"]["mergelist"]) > 1:
+				mtimedb["resume_backup"] = mtimedb["resume"]
+				del mtimedb["resume"]
+				mtimedb.commit()
+
+			mydepgraph.saveNomergeFavorites()
+
+		mergetask = Scheduler(settings, trees, mtimedb, myopts,
+			spinner, favorites=favorites,
+			graph_config=mydepgraph.schedulerGraph())
+
+		del mydepgraph
+		clear_caches(trees)
+
+		retval = mergetask.merge()
+
+		if retval == os.EX_OK and not (buildpkgonly or fetchonly or pretend):
+			if "yes" == settings.get("AUTOCLEAN"):
+				portage.writemsg_stdout(">>> Auto-cleaning packages...\n")
+				unmerge(trees[settings['EROOT']]['root_config'],
+					myopts, "clean", [],
+					ldpath_mtimes, autoclean=1)
+			else:
+				portage.writemsg_stdout(colorize("WARN", "WARNING:")
+					+ " AUTOCLEAN is disabled.  This can cause serious"
+					+ " problems due to overlapping packages.\n")
+
+		return retval
+
+def action_config(settings, trees, myopts, myfiles):
+	enter_invalid = '--ask-enter-invalid' in myopts
+	if len(myfiles) != 1:
+		print(red("!!! config can only take a single package atom at this time\n"))
+		sys.exit(1)
+	if not is_valid_package_atom(myfiles[0], allow_repo=True):
+		portage.writemsg("!!! '%s' is not a valid package atom.\n" % myfiles[0],
+			noiselevel=-1)
+		portage.writemsg("!!! Please check ebuild(5) for full details.\n")
+		portage.writemsg("!!! (Did you specify a version but forget to prefix with '='?)\n")
+		sys.exit(1)
+	print()
+	try:
+		pkgs = trees[settings['EROOT']]['vartree'].dbapi.match(myfiles[0])
+	except portage.exception.AmbiguousPackageName as e:
+		# Multiple matches thrown from cpv_expand
+		pkgs = e.args[0]
+	if len(pkgs) == 0:
+		print("No packages found.\n")
+		sys.exit(0)
+	elif len(pkgs) > 1:
+		if "--ask" in myopts:
+			options = []
+			print("Please select a package to configure:")
+			idx = 0
+			for pkg in pkgs:
+				idx += 1
+				options.append(str(idx))
+				print(options[-1]+") "+pkg)
+			print("X) Cancel")
+			options.append("X")
+			idx = userquery("Selection?", enter_invalid, responses=options)
+			if idx == "X":
+				sys.exit(128 + signal.SIGINT)
+			pkg = pkgs[int(idx)-1]
+		else:
+			print("The following packages available:")
+			for pkg in pkgs:
+				print("* "+pkg)
+			print("\nPlease use a specific atom or the --ask option.")
+			sys.exit(1)
+	else:
+		pkg = pkgs[0]
+
+	print()
+	if "--ask" in myopts:
+		if userquery("Ready to configure %s?" % pkg, enter_invalid) == "No":
+			sys.exit(128 + signal.SIGINT)
+	else:
+		print("Configuring pkg...")
+	print()
+	ebuildpath = trees[settings['EROOT']]['vartree'].dbapi.findname(pkg)
+	mysettings = portage.config(clone=settings)
+	vardb = trees[mysettings['EROOT']]['vartree'].dbapi
+	debug = mysettings.get("PORTAGE_DEBUG") == "1"
+	retval = portage.doebuild(ebuildpath, "config", settings=mysettings,
+		debug=(settings.get("PORTAGE_DEBUG", "") == 1), cleanup=True,
+		mydbapi = trees[settings['EROOT']]['vartree'].dbapi, tree="vartree")
+	if retval == os.EX_OK:
+		portage.doebuild(ebuildpath, "clean", settings=mysettings,
+			debug=debug, mydbapi=vardb, tree="vartree")
+	print()
+
+def action_depclean(settings, trees, ldpath_mtimes,
+	myopts, action, myfiles, spinner, scheduler=None):
+	# Kill packages that aren't explicitly merged or are required as a
+	# dependency of another package. World file is explicit.
+
+	# Global depclean or prune operations are not very safe when there are
+	# missing dependencies since it's unknown how badly incomplete
+	# the dependency graph is, and we might accidentally remove packages
+	# that should have been pulled into the graph. On the other hand, it's
+	# relatively safe to ignore missing deps when only asked to remove
+	# specific packages.
+
+	msg = []
+	if "preserve-libs" not in settings.features and \
+		not myopts.get("--depclean-lib-check", _DEPCLEAN_LIB_CHECK_DEFAULT) != "n":
+		msg.append("Depclean may break link level dependencies. Thus, it is\n")
+		msg.append("recommended to use a tool such as " + good("`revdep-rebuild`") + " (from\n")
+		msg.append("app-portage/gentoolkit) in order to detect such breakage.\n")
+		msg.append("\n")
+	msg.append("Always study the list of packages to be cleaned for any obvious\n")
+	msg.append("mistakes. Packages that are part of the world set will always\n")
+	msg.append("be kept.  They can be manually added to this set with\n")
+	msg.append(good("`emerge --noreplace <atom>`") + ".  Packages that are listed in\n")
+	msg.append("package.provided (see portage(5)) will be removed by\n")
+	msg.append("depclean, even if they are part of the world set.\n")
+	msg.append("\n")
+	msg.append("As a safety measure, depclean will not remove any packages\n")
+	msg.append("unless *all* required dependencies have been resolved.  As a\n")
+	msg.append("consequence, it is often necessary to run %s\n" % \
+		good("`emerge --update"))
+	msg.append(good("--newuse --deep @world`") + \
+		" prior to depclean.\n")
+
+	if action == "depclean" and "--quiet" not in myopts and not myfiles:
+		portage.writemsg_stdout("\n")
+		for x in msg:
+			portage.writemsg_stdout(colorize("WARN", " * ") + x)
+
+	root_config = trees[settings['EROOT']]['root_config']
+	vardb = root_config.trees['vartree'].dbapi
+
+	args_set = InternalPackageSet(allow_repo=True)
+	if myfiles:
+		args_set.update(myfiles)
+		matched_packages = False
+		for x in args_set:
+			if vardb.match(x):
+				matched_packages = True
+			else:
+				writemsg_level("--- Couldn't find '%s' to %s.\n" % \
+					(x.replace("null/", ""), action),
+					level=logging.WARN, noiselevel=-1)
+		if not matched_packages:
+			writemsg_level(">>> No packages selected for removal by %s\n" % \
+				action)
+			return 0
+
+	# The calculation is done in a separate function so that depgraph
+	# references go out of scope and the corresponding memory
+	# is freed before we call unmerge().
+	rval, cleanlist, ordered, req_pkg_count = \
+		calc_depclean(settings, trees, ldpath_mtimes,
+			myopts, action, args_set, spinner)
+
+	clear_caches(trees)
+
+	if rval != os.EX_OK:
+		return rval
+
+	if cleanlist:
+		rval = unmerge(root_config, myopts, "unmerge",
+			cleanlist, ldpath_mtimes, ordered=ordered,
+			scheduler=scheduler)
+
+	if action == "prune":
+		return rval
+
+	if not cleanlist and "--quiet" in myopts:
+		return rval
+
+	print("Packages installed:   " + str(len(vardb.cpv_all())))
+	print("Packages in world:    " + \
+		str(len(root_config.sets["selected"].getAtoms())))
+	print("Packages in system:   " + \
+		str(len(root_config.sets["system"].getAtoms())))
+	print("Required packages:    "+str(req_pkg_count))
+	if "--pretend" in myopts:
+		print("Number to remove:     "+str(len(cleanlist)))
+	else:
+		print("Number removed:       "+str(len(cleanlist)))
+
+	return rval
+
+def calc_depclean(settings, trees, ldpath_mtimes,
+	myopts, action, args_set, spinner):
+	allow_missing_deps = bool(args_set)
+
+	debug = '--debug' in myopts
+	xterm_titles = "notitles" not in settings.features
+	root_len = len(settings["ROOT"])
+	eroot = settings['EROOT']
+	root_config = trees[eroot]["root_config"]
+	psets = root_config.setconfig.psets
+	deselect = myopts.get('--deselect') != 'n'
+	required_sets = {}
+	required_sets['world'] = psets['world']
+
+	# When removing packages, a temporary version of the world 'selected'
+	# set may be used which excludes packages that are intended to be
+	# eligible for removal.
+	selected_set = psets['selected']
+	required_sets['selected'] = selected_set
+	protected_set = InternalPackageSet()
+	protected_set_name = '____depclean_protected_set____'
+	required_sets[protected_set_name] = protected_set
+	system_set = psets["system"]
+
+	if not system_set or not selected_set:
+
+		if not system_set:
+			writemsg_level("!!! You have no system list.\n",
+				level=logging.ERROR, noiselevel=-1)
+
+		if not selected_set:
+			writemsg_level("!!! You have no world file.\n",
+					level=logging.WARNING, noiselevel=-1)
+
+		writemsg_level("!!! Proceeding is likely to " + \
+			"break your installation.\n",
+			level=logging.WARNING, noiselevel=-1)
+		if "--pretend" not in myopts:
+			countdown(int(settings["EMERGE_WARNING_DELAY"]), ">>> Depclean")
+
+	if action == "depclean":
+		emergelog(xterm_titles, " >>> depclean")
+
+	writemsg_level("\nCalculating dependencies  ")
+	resolver_params = create_depgraph_params(myopts, "remove")
+	resolver = depgraph(settings, trees, myopts, resolver_params, spinner)
+	resolver._load_vdb()
+	vardb = resolver._frozen_config.trees[eroot]["vartree"].dbapi
+	real_vardb = trees[eroot]["vartree"].dbapi
+
+	if action == "depclean":
+
+		if args_set:
+
+			if deselect:
+				# Start with an empty set.
+				selected_set = InternalPackageSet()
+				required_sets['selected'] = selected_set
+				# Pull in any sets nested within the selected set.
+				selected_set.update(psets['selected'].getNonAtoms())
+
+			# Pull in everything that's installed but not matched
+			# by an argument atom since we don't want to clean any
+			# package if something depends on it.
+			for pkg in vardb:
+				if spinner:
+					spinner.update()
+
+				try:
+					if args_set.findAtomForPackage(pkg) is None:
+						protected_set.add("=" + pkg.cpv)
+						continue
+				except portage.exception.InvalidDependString as e:
+					show_invalid_depstring_notice(pkg,
+						pkg.metadata["PROVIDE"], str(e))
+					del e
+					protected_set.add("=" + pkg.cpv)
+					continue
+
+	elif action == "prune":
+
+		if deselect:
+			# Start with an empty set.
+			selected_set = InternalPackageSet()
+			required_sets['selected'] = selected_set
+			# Pull in any sets nested within the selected set.
+			selected_set.update(psets['selected'].getNonAtoms())
+
+		# Pull in everything that's installed since we don't
+		# to prune a package if something depends on it.
+		protected_set.update(vardb.cp_all())
+
+		if not args_set:
+
+			# Try to prune everything that's slotted.
+			for cp in vardb.cp_all():
+				if len(vardb.cp_list(cp)) > 1:
+					args_set.add(cp)
+
+		# Remove atoms from world that match installed packages
+		# that are also matched by argument atoms, but do not remove
+		# them if they match the highest installed version.
+		for pkg in vardb:
+			if spinner is not None:
+				spinner.update()
+			pkgs_for_cp = vardb.match_pkgs(pkg.cp)
+			if not pkgs_for_cp or pkg not in pkgs_for_cp:
+				raise AssertionError("package expected in matches: " + \
+					"cp = %s, cpv = %s matches = %s" % \
+					(pkg.cp, pkg.cpv, [str(x) for x in pkgs_for_cp]))
+
+			highest_version = pkgs_for_cp[-1]
+			if pkg == highest_version:
+				# pkg is the highest version
+				protected_set.add("=" + pkg.cpv)
+				continue
+
+			if len(pkgs_for_cp) <= 1:
+				raise AssertionError("more packages expected: " + \
+					"cp = %s, cpv = %s matches = %s" % \
+					(pkg.cp, pkg.cpv, [str(x) for x in pkgs_for_cp]))
+
+			try:
+				if args_set.findAtomForPackage(pkg) is None:
+					protected_set.add("=" + pkg.cpv)
+					continue
+			except portage.exception.InvalidDependString as e:
+				show_invalid_depstring_notice(pkg,
+					pkg.metadata["PROVIDE"], str(e))
+				del e
+				protected_set.add("=" + pkg.cpv)
+				continue
+
+	if resolver._frozen_config.excluded_pkgs:
+		excluded_set = resolver._frozen_config.excluded_pkgs
+		required_sets['__excluded__'] = InternalPackageSet()
+
+		for pkg in vardb:
+			if spinner:
+				spinner.update()
+
+			try:
+				if excluded_set.findAtomForPackage(pkg):
+					required_sets['__excluded__'].add("=" + pkg.cpv)
+			except portage.exception.InvalidDependString as e:
+				show_invalid_depstring_notice(pkg,
+					pkg.metadata["PROVIDE"], str(e))
+				del e
+				required_sets['__excluded__'].add("=" + pkg.cpv)
+
+	success = resolver._complete_graph(required_sets={eroot:required_sets})
+	writemsg_level("\b\b... done!\n")
+
+	resolver.display_problems()
+
+	if not success:
+		return 1, [], False, 0
+
+	def unresolved_deps():
+
+		unresolvable = set()
+		for dep in resolver._dynamic_config._initially_unsatisfied_deps:
+			if isinstance(dep.parent, Package) and \
+				(dep.priority > UnmergeDepPriority.SOFT):
+				unresolvable.add((dep.atom, dep.parent.cpv))
+
+		if not unresolvable:
+			return False
+
+		if unresolvable and not allow_missing_deps:
+
+			if "--debug" in myopts:
+				writemsg("\ndigraph:\n\n", noiselevel=-1)
+				resolver._dynamic_config.digraph.debug_print()
+				writemsg("\n", noiselevel=-1)
+
+			prefix = bad(" * ")
+			msg = []
+			msg.append("Dependencies could not be completely resolved due to")
+			msg.append("the following required packages not being installed:")
+			msg.append("")
+			for atom, parent in unresolvable:
+				msg.append("  %s pulled in by:" % (atom,))
+				msg.append("    %s" % (parent,))
+				msg.append("")
+			msg.extend(textwrap.wrap(
+				"Have you forgotten to do a complete update prior " + \
+				"to depclean? The most comprehensive command for this " + \
+				"purpose is as follows:", 65
+			))
+			msg.append("")
+			msg.append("  " + \
+				good("emerge --update --newuse --deep --with-bdeps=y @world"))
+			msg.append("")
+			msg.extend(textwrap.wrap(
+				"Note that the --with-bdeps=y option is not required in " + \
+				"many situations. Refer to the emerge manual page " + \
+				"(run `man emerge`) for more information about " + \
+				"--with-bdeps.", 65
+			))
+			msg.append("")
+			msg.extend(textwrap.wrap(
+				"Also, note that it may be necessary to manually uninstall " + \
+				"packages that no longer exist in the portage tree, since " + \
+				"it may not be possible to satisfy their dependencies.", 65
+			))
+			if action == "prune":
+				msg.append("")
+				msg.append("If you would like to ignore " + \
+					"dependencies then use %s." % good("--nodeps"))
+			writemsg_level("".join("%s%s\n" % (prefix, line) for line in msg),
+				level=logging.ERROR, noiselevel=-1)
+			return True
+		return False
+
+	if unresolved_deps():
+		return 1, [], False, 0
+
+	graph = resolver._dynamic_config.digraph.copy()
+	required_pkgs_total = 0
+	for node in graph:
+		if isinstance(node, Package):
+			required_pkgs_total += 1
+
+	def show_parents(child_node):
+		parent_nodes = graph.parent_nodes(child_node)
+		if not parent_nodes:
+			# With --prune, the highest version can be pulled in without any
+			# real parent since all installed packages are pulled in.  In that
+			# case there's nothing to show here.
+			return
+		parent_strs = []
+		for node in parent_nodes:
+			parent_strs.append(str(getattr(node, "cpv", node)))
+		parent_strs.sort()
+		msg = []
+		msg.append("  %s pulled in by:\n" % (child_node.cpv,))
+		for parent_str in parent_strs:
+			msg.append("    %s\n" % (parent_str,))
+		msg.append("\n")
+		portage.writemsg_stdout("".join(msg), noiselevel=-1)
+
+	def cmp_pkg_cpv(pkg1, pkg2):
+		"""Sort Package instances by cpv."""
+		if pkg1.cpv > pkg2.cpv:
+			return 1
+		elif pkg1.cpv == pkg2.cpv:
+			return 0
+		else:
+			return -1
+
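+	# Build the removal list: installed packages that did not end up in the
+	# dependency graph (restricted to the given arguments for depclean with
+	# args, or to the matched atoms for prune), showing reverse dependencies
+	# in verbose mode.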
+	def create_cleanlist():
+
+		if "--debug" in myopts:
+			writemsg("\ndigraph:\n\n", noiselevel=-1)
+			graph.debug_print()
+			writemsg("\n", noiselevel=-1)
+
+		# Never display the special internal protected_set.
+		for node in graph:
+			if isinstance(node, SetArg) and node.name == protected_set_name:
+				graph.remove(node)
+				break
+
+		pkgs_to_remove = []
+
+		if action == "depclean":
+			if args_set:
+
+				for pkg in sorted(vardb, key=cmp_sort_key(cmp_pkg_cpv)):
+					arg_atom = None
+					try:
+						arg_atom = args_set.findAtomForPackage(pkg)
+					except portage.exception.InvalidDependString:
+						# this error has already been displayed by now
+						continue
+
+					if arg_atom:
+						if pkg not in graph:
+							pkgs_to_remove.append(pkg)
+						elif "--verbose" in myopts:
+							show_parents(pkg)
+
+			else:
+				for pkg in sorted(vardb, key=cmp_sort_key(cmp_pkg_cpv)):
+					if pkg not in graph:
+						pkgs_to_remove.append(pkg)
+					elif "--verbose" in myopts:
+						show_parents(pkg)
+
+		elif action == "prune":
+
+			for atom in args_set:
+				for pkg in vardb.match_pkgs(atom):
+					if pkg not in graph:
+						pkgs_to_remove.append(pkg)
+					elif "--verbose" in myopts:
+						show_parents(pkg)
+
+		if not pkgs_to_remove:
+			writemsg_level(
+				">>> No packages selected for removal by %s\n" % action)
+			if "--verbose" not in myopts:
+				writemsg_level(
+					">>> To see reverse dependencies, use %s\n" % \
+						good("--verbose"))
+			if action == "prune":
+				writemsg_level(
+					">>> To ignore dependencies, use %s\n" % \
+						good("--nodeps"))
+
+		return pkgs_to_remove
+
+	cleanlist = create_cleanlist()
+	clean_set = set(cleanlist)
+
+	if cleanlist and \
+		real_vardb._linkmap is not None and \
+		myopts.get("--depclean-lib-check", _DEPCLEAN_LIB_CHECK_DEFAULT) != "n" and \
+		"preserve-libs" not in settings.features:
+
+		# Check if any of these packages are the sole providers of libraries
+		# with consumers that have not been selected for removal. If so, these
+		# packages and any dependencies need to be added to the graph.
+		linkmap = real_vardb._linkmap
+		consumer_cache = {}
+		provider_cache = {}
+		consumer_map = {}
+
+		writemsg_level(">>> Checking for lib consumers...\n")
+
+		for pkg in cleanlist:
+			pkg_dblink = real_vardb._dblink(pkg.cpv)
+			consumers = {}
+
+			for lib in pkg_dblink.getcontents():
+				lib = lib[root_len:]
+				lib_key = linkmap._obj_key(lib)
+				lib_consumers = consumer_cache.get(lib_key)
+				if lib_consumers is None:
+					try:
+						lib_consumers = linkmap.findConsumers(lib_key)
+					except KeyError:
+						continue
+					consumer_cache[lib_key] = lib_consumers
+				if lib_consumers:
+					consumers[lib_key] = lib_consumers
+
+			if not consumers:
+				continue
+
+			for lib, lib_consumers in list(consumers.items()):
+				for consumer_file in list(lib_consumers):
+					if pkg_dblink.isowner(consumer_file):
+						lib_consumers.remove(consumer_file)
+				if not lib_consumers:
+					del consumers[lib]
+
+			if not consumers:
+				continue
+
+			for lib, lib_consumers in consumers.items():
+
+				soname = linkmap.getSoname(lib)
+
+				consumer_providers = []
+				for lib_consumer in lib_consumers:
+					providers = provider_cache.get(lib_consumer)
+					if providers is None:
+						providers = linkmap.findProviders(lib_consumer)
+						provider_cache[lib_consumer] = providers
+					if soname not in providers:
+						# Why does this happen?
+						continue
+					consumer_providers.append(
+						(lib_consumer, providers[soname]))
+
+				consumers[lib] = consumer_providers
+
+			consumer_map[pkg] = consumers
+
+		if consumer_map:
+
+			search_files = set()
+			for consumers in consumer_map.values():
+				for lib, consumer_providers in consumers.items():
+					for lib_consumer, providers in consumer_providers:
+						search_files.add(lib_consumer)
+						search_files.update(providers)
+
+			writemsg_level(">>> Assigning files to packages...\n")
+			file_owners = {}
+			for f in search_files:
+				owner_set = set()
+				for owner in linkmap.getOwners(f):
+					owner_dblink = real_vardb._dblink(owner)
+					if owner_dblink.exists():
+						owner_set.add(owner_dblink)
+				if owner_set:
+					file_owners[f] = owner_set
+
+			for pkg, consumers in list(consumer_map.items()):
+				for lib, consumer_providers in list(consumers.items()):
+					lib_consumers = set()
+
+					for lib_consumer, providers in consumer_providers:
+						owner_set = file_owners.get(lib_consumer)
+						provider_dblinks = set()
+						provider_pkgs = set()
+
+						if len(providers) > 1:
+							for provider in providers:
+								provider_set = file_owners.get(provider)
+								if provider_set is not None:
+									provider_dblinks.update(provider_set)
+
+						if len(provider_dblinks) > 1:
+							for provider_dblink in provider_dblinks:
+								provider_pkg = resolver._pkg(
+									provider_dblink.mycpv, "installed",
+									root_config, installed=True)
+								if provider_pkg not in clean_set:
+									provider_pkgs.add(provider_pkg)
+
+						if provider_pkgs:
+							continue
+
+						if owner_set is not None:
+							lib_consumers.update(owner_set)
+
+					for consumer_dblink in list(lib_consumers):
+						if resolver._pkg(consumer_dblink.mycpv, "installed",
+							root_config, installed=True) in clean_set:
+							lib_consumers.remove(consumer_dblink)
+							continue
+
+					if lib_consumers:
+						consumers[lib] = lib_consumers
+					else:
+						del consumers[lib]
+				if not consumers:
+					del consumer_map[pkg]
+
+		if consumer_map:
+			# TODO: Implement a package set for rebuilding consumer packages.
+
+			msg = "In order to avoid breakage of link level " + \
+				"dependencies, one or more packages will not be removed. " + \
+				"This can be solved by rebuilding " + \
+				"the packages that pulled them in."
+
+			prefix = bad(" * ")
+			writemsg_level("".join(prefix + "%s\n" % line for \
+				line in textwrap.wrap(msg, 70)), level=logging.WARNING, noiselevel=-1)
+
+			msg = []
+			for pkg in sorted(consumer_map, key=cmp_sort_key(cmp_pkg_cpv)):
+				consumers = consumer_map[pkg]
+				consumer_libs = {}
+				for lib, lib_consumers in consumers.items():
+					for consumer in lib_consumers:
+						consumer_libs.setdefault(
+							consumer.mycpv, set()).add(linkmap.getSoname(lib))
+				unique_consumers = set(chain(*consumers.values()))
+				unique_consumers = sorted(consumer.mycpv \
+					for consumer in unique_consumers)
+				msg.append("")
+				msg.append("  %s pulled in by:" % (pkg.cpv,))
+				for consumer in unique_consumers:
+					libs = consumer_libs[consumer]
+					msg.append("    %s needs %s" % \
+						(consumer, ', '.join(sorted(libs))))
+			msg.append("")
+			writemsg_level("".join(prefix + "%s\n" % line for line in msg),
+				level=logging.WARNING, noiselevel=-1)
+
+			# Add lib providers to the graph as children of lib consumers,
+			# and also add any dependencies pulled in by the provider.
+			writemsg_level(">>> Adding lib providers to graph...\n")
+
+			for pkg, consumers in consumer_map.items():
+				for consumer_dblink in set(chain(*consumers.values())):
+					consumer_pkg = resolver._pkg(consumer_dblink.mycpv,
+						"installed", root_config, installed=True)
+					if not resolver._add_pkg(pkg,
+						Dependency(parent=consumer_pkg,
+						priority=UnmergeDepPriority(runtime=True),
+						root=pkg.root)):
+						resolver.display_problems()
+						return 1, [], False, 0
+
+			writemsg_level("\nCalculating dependencies  ")
+			success = resolver._complete_graph(
+				required_sets={eroot:required_sets})
+			writemsg_level("\b\b... done!\n")
+			resolver.display_problems()
+			if not success:
+				return 1, [], False, 0
+			if unresolved_deps():
+				return 1, [], False, 0
+
+			graph = resolver._dynamic_config.digraph.copy()
+			required_pkgs_total = 0
+			for node in graph:
+				if isinstance(node, Package):
+					required_pkgs_total += 1
+			cleanlist = create_cleanlist()
+			if not cleanlist:
+				return 0, [], False, required_pkgs_total
+			clean_set = set(cleanlist)
+
+	if clean_set:
+		writemsg_level(">>> Calculating removal order...\n")
+		# Use a topological sort to create an unmerge order such that
+		# each package is unmerged before its dependencies. This is
+		# necessary to avoid breaking things that may need to run
+		# during pkg_prerm or pkg_postrm phases.
+
+		# Create a new graph to account for dependencies between the
+		# packages being unmerged.
+		graph = digraph()
+		del cleanlist[:]
+
+		runtime = UnmergeDepPriority(runtime=True)
+		runtime_post = UnmergeDepPriority(runtime_post=True)
+		buildtime = UnmergeDepPriority(buildtime=True)
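+		# Map each *DEPEND variable onto the priority used when ordering
+		# the unmerge graph below.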
+		priority_map = {
+			"RDEPEND": runtime,
+			"PDEPEND": runtime_post,
+			"HDEPEND": buildtime,
+			"DEPEND": buildtime,
+		}
+
+		for node in clean_set:
+			graph.add(node, None)
+			for dep_type in Package._dep_keys:
+				depstr = node.metadata[dep_type]
+				if not depstr:
+					continue
+				priority = priority_map[dep_type]
+
+				if debug:
+					writemsg_level(_unicode_decode("\nParent:    %s\n") \
+						% (node,), noiselevel=-1, level=logging.DEBUG)
+					writemsg_level(_unicode_decode(  "Depstring: %s\n") \
+						% (depstr,), noiselevel=-1, level=logging.DEBUG)
+					writemsg_level(_unicode_decode(  "Priority:  %s\n") \
+						% (priority,), noiselevel=-1, level=logging.DEBUG)
+
+				try:
+					atoms = resolver._select_atoms(eroot, depstr,
+						myuse=node.use.enabled, parent=node,
+						priority=priority)[node]
+				except portage.exception.InvalidDependString:
+					# Ignore invalid deps of packages that will
+					# be uninstalled anyway.
+					continue
+
+				if debug:
+					writemsg_level("Candidates: [%s]\n" % \
+						', '.join(_unicode_decode("'%s'") % (x,) for x in atoms),
+						noiselevel=-1, level=logging.DEBUG)
+
+				for atom in atoms:
+					if not isinstance(atom, portage.dep.Atom):
+						# Ignore invalid atoms returned from dep_check().
+						continue
+					if atom.blocker:
+						continue
+					matches = vardb.match_pkgs(atom)
+					if not matches:
+						continue
+					for child_node in matches:
+						if child_node in clean_set:
+							graph.add(child_node, node, priority=priority)
+
+		if debug:
+			writemsg_level("\nunmerge digraph:\n\n",
+				noiselevel=-1, level=logging.DEBUG)
+			graph.debug_print()
+			writemsg_level("\n", noiselevel=-1, level=logging.DEBUG)
+
+		ordered = True
+		if len(graph.order) == len(graph.root_nodes()):
+			# If there are no dependencies between packages
+			# let unmerge() group them by cat/pn.
+			ordered = False
+			cleanlist = [pkg.cpv for pkg in graph.order]
+		else:
+			# Order nodes from lowest to highest overall reference count for
+			# optimal root node selection (this can help minimize issues
+			# with unaccounted implicit dependencies).
+			node_refcounts = {}
+			for node in graph.order:
+				node_refcounts[node] = len(graph.parent_nodes(node))
+			def cmp_reference_count(node1, node2):
+				return node_refcounts[node1] - node_refcounts[node2]
+			graph.order.sort(key=cmp_sort_key(cmp_reference_count))
+
+			ignore_priority_range = [None]
+			ignore_priority_range.extend(
+				range(UnmergeDepPriority.MIN, UnmergeDepPriority.MAX + 1))
+			while graph:
+				for ignore_priority in ignore_priority_range:
+					nodes = graph.root_nodes(ignore_priority=ignore_priority)
+					if nodes:
+						break
+				if not nodes:
+					raise AssertionError("no root nodes")
+				if ignore_priority is not None:
+					# Some deps have been dropped due to circular dependencies,
+					# so only pop one node in order to minimize the number that
+					# are dropped.
+					del nodes[1:]
+				for node in nodes:
+					graph.remove(node)
+					cleanlist.append(node.cpv)
+
+		return 0, cleanlist, ordered, required_pkgs_total
+	return 0, [], False, required_pkgs_total
+
+def action_deselect(settings, trees, opts, atoms):
+	enter_invalid = '--ask-enter-invalid' in opts
+	root_config = trees[settings['EROOT']]['root_config']
+	world_set = root_config.sets['selected']
+	if not hasattr(world_set, 'update'):
+		writemsg_level("World @selected set does not appear to be mutable.\n",
+			level=logging.ERROR, noiselevel=-1)
+		return 1
+
+	pretend = '--pretend' in opts
+	locked = False
+	if not pretend and hasattr(world_set, 'lock'):
+		world_set.lock()
+		locked = True
+	try:
+		world_set.load()
+		world_atoms = world_set.getAtoms()
+		vardb = root_config.trees["vartree"].dbapi
+		expanded_atoms = set(atoms)
+
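+		# Expand each argument into the concrete atoms that may appear in
+		# the world file (fill in missing categories, and add slot atoms
+		# for installed matches).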
+		for atom in atoms:
+			if not atom.startswith(SETPREFIX):
+				if atom.cp.startswith("null/"):
+					# try to expand category from world set
+					null_cat, pn = portage.catsplit(atom.cp)
+					for world_atom in world_atoms:
+						cat, world_pn = portage.catsplit(world_atom.cp)
+						if pn == world_pn:
+							expanded_atoms.add(
+								Atom(atom.replace("null", cat, 1),
+								allow_repo=True, allow_wildcard=True))
+
+				for cpv in vardb.match(atom):
+					pkg = vardb._pkg_str(cpv, None)
+					expanded_atoms.add(Atom("%s:%s" % (pkg.cp, pkg.slot)))
+
+		discard_atoms = set()
+		for atom in world_set:
+			for arg_atom in expanded_atoms:
+				if arg_atom.startswith(SETPREFIX):
+					if atom.startswith(SETPREFIX) and \
+						arg_atom == atom:
+						discard_atoms.add(atom)
+						break
+				else:
+					if not atom.startswith(SETPREFIX) and \
+						arg_atom.intersects(atom) and \
+						not (arg_atom.slot and not atom.slot) and \
+						not (arg_atom.repo and not atom.repo):
+						discard_atoms.add(atom)
+						break
+		if discard_atoms:
+			for atom in sorted(discard_atoms):
+
+				if pretend:
+					action_desc = "Would remove"
+				else:
+					action_desc = "Removing"
+
+				if atom.startswith(SETPREFIX):
+					filename = "world_sets"
+				else:
+					filename = "world"
+
+				writemsg_stdout(
+					">>> %s %s from \"%s\" favorites file...\n" %
+					(action_desc, colorize("INFORM", _unicode(atom)),
+					filename), noiselevel=-1)
+
+			if '--ask' in opts:
+				prompt = "Would you like to remove these " + \
+					"packages from your world favorites?"
+				if userquery(prompt, enter_invalid) == 'No':
+					return 128 + signal.SIGINT
+
+			remaining = set(world_set)
+			remaining.difference_update(discard_atoms)
+			if not pretend:
+				world_set.replace(remaining)
+		else:
+			print(">>> No matching atoms found in \"world\" favorites file...")
+	finally:
+		if locked:
+			world_set.unlock()
+	return os.EX_OK
+
+class _info_pkgs_ver(object):
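+	"""Sortable version entry (version plus repo/provide suffixes) used to
+	render installed package lists for --info output."""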
+	def __init__(self, ver, repo_suffix, provide_suffix):
+		self.ver = ver
+		self.repo_suffix = repo_suffix
+		self.provide_suffix = provide_suffix
+
+	def __lt__(self, other):
+		return portage.versions.vercmp(self.ver, other.ver) < 0
+
+	def toString(self):
+		"""
+		This may return unicode if repo_name contains unicode.
+		Don't use __str__ and str() since unicode triggers compatibility
+		issues between python 2.x and 3.x.
+		"""
+		return self.ver + self.repo_suffix + self.provide_suffix
+
+def action_info(settings, trees, myopts, myfiles):
+
+	output_buffer = []
+	append = output_buffer.append
+	root_config = trees[settings['EROOT']]['root_config']
+	running_eroot = trees._running_eroot
+	chost = settings.get("CHOST")
+
+	append(getportageversion(settings["PORTDIR"], None,
+		settings.profile_path, settings["CHOST"],
+		trees[settings['EROOT']]["vartree"].dbapi))
+
+	header_width = 65
+	header_title = "System Settings"
+	if myfiles:
+		append(header_width * "=")
+		append(header_title.rjust(int(header_width/2 + len(header_title)/2)))
+	append(header_width * "=")
+	append("System uname: %s" % (platform.platform(aliased=1),))
+
+	lastSync = portage.grabfile(os.path.join(
+		settings["PORTDIR"], "metadata", "timestamp.chk"))
+	if lastSync:
+		lastSync = lastSync[0]
+	else:
+		lastSync = "Unknown"
+	append("Timestamp of tree: %s" % (lastSync,))
+
+	ld_names = []
+	if chost:
+		ld_names.append(chost + "-ld")
+	ld_names.append("ld")
+	for name in ld_names:
+		try:
+			proc = subprocess.Popen([name, "--version"],
+				stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+		except OSError:
+			pass
+		else:
+			output = _unicode_decode(proc.communicate()[0]).splitlines()
+			if proc.wait() == os.EX_OK and output:
+				append("ld %s" % (output[0]))
+				break
+
+	try:
+		proc = subprocess.Popen(["distcc", "--version"],
+			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+	except OSError:
+		output = (1, None)
+	else:
+		output = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+		output = (proc.wait(), output)
+	if output[0] == os.EX_OK:
+		distcc_str = output[1].split("\n", 1)[0]
+		if "distcc" in settings.features:
+			distcc_str += " [enabled]"
+		else:
+			distcc_str += " [disabled]"
+		append(distcc_str)
+
+	try:
+		proc = subprocess.Popen(["ccache", "-V"],
+			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+	except OSError:
+		output = (1, None)
+	else:
+		output = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+		output = (proc.wait(), output)
+	if output[0] == os.EX_OK:
+		ccache_str = output[1].split("\n", 1)[0]
+		if "ccache" in settings.features:
+			ccache_str += " [enabled]"
+		else:
+			ccache_str += " [disabled]"
+		append(ccache_str)
+
+	myvars  = ["sys-devel/autoconf", "sys-devel/automake", "virtual/os-headers",
+	           "sys-devel/binutils", "sys-devel/libtool",  "dev-lang/python"]
+	myvars += portage.util.grabfile(settings["PORTDIR"]+"/profiles/info_pkgs")
+	atoms = []
+	vardb = trees[running_eroot]['vartree'].dbapi
+	for x in myvars:
+		try:
+			x = Atom(x)
+		except InvalidAtom:
+			append("%-20s %s" % (x+":", "[NOT VALID]"))
+		else:
+			for atom in expand_new_virt(vardb, x):
+				if not atom.blocker:
+					atoms.append((x, atom))
+
+	myvars = sorted(set(atoms))
+
+	portdb = trees[running_eroot]['porttree'].dbapi
+	main_repo = portdb.getRepositoryName(portdb.porttree_root)
+	cp_map = {}
+	cp_max_len = 0
+
+	for orig_atom, x in myvars:
+		pkg_matches = vardb.match(x)
+
+		versions = []
+		for cpv in pkg_matches:
+			matched_cp = portage.versions.cpv_getkey(cpv)
+			ver = portage.versions.cpv_getversion(cpv)
+			ver_map = cp_map.setdefault(matched_cp, {})
+			prev_match = ver_map.get(ver)
+			if prev_match is not None:
+				if prev_match.provide_suffix:
+					# prefer duplicate matches that include
+					# additional virtual provider info
+					continue
+
+			if len(matched_cp) > cp_max_len:
+				cp_max_len = len(matched_cp)
+			repo = vardb.aux_get(cpv, ["repository"])[0]
+			if repo == main_repo:
+				repo_suffix = ""
+			elif not repo:
+				repo_suffix = "::<unknown repository>"
+			else:
+				repo_suffix = "::" + repo
+
+			if matched_cp == orig_atom.cp:
+				provide_suffix = ""
+			else:
+				provide_suffix = " (%s)" % (orig_atom,)
+
+			ver_map[ver] = _info_pkgs_ver(ver, repo_suffix, provide_suffix)
+
+	for cp in sorted(cp_map):
+		versions = sorted(cp_map[cp].values())
+		versions = ", ".join(ver.toString() for ver in versions)
+		append("%s %s" % \
+			((cp + ":").ljust(cp_max_len + 1), versions))
+
+	repos = portdb.settings.repositories
+	if "--verbose" in myopts:
+		append("Repositories:\n")
+		for repo in repos:
+			append(repo.info_string())
+	else:
+		append("Repositories: %s" % \
+			" ".join(repo.name for repo in repos))
+
+	installed_sets = sorted(s for s in
+		root_config.sets['selected'].getNonAtoms() if s.startswith(SETPREFIX))
+	if installed_sets:
+		sets_line = "Installed sets: "
+		sets_line += ", ".join(installed_sets)
+		append(sets_line)
+
+	if "--verbose" in myopts:
+		myvars = list(settings)
+	else:
+		myvars = ['GENTOO_MIRRORS', 'CONFIG_PROTECT', 'CONFIG_PROTECT_MASK',
+		          'PORTDIR', 'DISTDIR', 'PKGDIR', 'PORTAGE_TMPDIR',
+		          'PORTDIR_OVERLAY', 'PORTAGE_BUNZIP2_COMMAND',
+		          'PORTAGE_BZIP2_COMMAND',
+		          'USE', 'CHOST', 'CFLAGS', 'CXXFLAGS',
+		          'ACCEPT_KEYWORDS', 'ACCEPT_LICENSE', 'SYNC', 'FEATURES',
+		          'EMERGE_DEFAULT_OPTS']
+
+		myvars.extend(portage.util.grabfile(settings["PORTDIR"]+"/profiles/info_vars"))
+
+	myvars_ignore_defaults = {
+		'PORTAGE_BZIP2_COMMAND' : 'bzip2',
+	}
+
+	myvars = portage.util.unique_array(myvars)
+	use_expand = settings.get('USE_EXPAND', '').split()
+	use_expand.sort()
+	unset_vars = []
+	myvars.sort()
+	for k in myvars:
+		v = settings.get(k)
+		if v is not None:
+			if k != "USE":
+				default = myvars_ignore_defaults.get(k)
+				if default is not None and \
+					default == v:
+					continue
+				append('%s="%s"' % (k, v))
+			else:
+				use = set(v.split())
+				for varname in use_expand:
+					flag_prefix = varname.lower() + "_"
+					for f in list(use):
+						if f.startswith(flag_prefix):
+							use.remove(f)
+				use = list(use)
+				use.sort()
+				use = ['USE="%s"' % " ".join(use)]
+				for varname in use_expand:
+					myval = settings.get(varname)
+					if myval:
+						use.append('%s="%s"' % (varname, myval))
+				append(" ".join(use))
+		else:
+			unset_vars.append(k)
+	if unset_vars:
+		append("Unset:  "+", ".join(unset_vars))
+	append("")
+	append("")
+	writemsg_stdout("\n".join(output_buffer),
+		noiselevel=-1)
+
+	# See if we can find any packages installed matching the strings
+	# passed on the command line
+	mypkgs = []
+	eroot = settings['EROOT']
+	vardb = trees[eroot]["vartree"].dbapi
+	portdb = trees[eroot]['porttree'].dbapi
+	bindb = trees[eroot]["bintree"].dbapi
+	for x in myfiles:
+		match_found = False
+		installed_match = vardb.match(x)
+		for installed in installed_match:
+			mypkgs.append((installed, "installed"))
+			match_found = True
+
+		if match_found:
+			continue
+
+		for db, pkg_type in ((portdb, "ebuild"), (bindb, "binary")):
+			if pkg_type == "binary" and "--usepkg" not in myopts:
+				continue
+
+			matches = db.match(x)
+			matches.reverse()
+			for match in matches:
+				if pkg_type == "binary":
+					if db.bintree.isremote(match):
+						continue
+				auxkeys = ["EAPI", "DEFINED_PHASES"]
+				metadata = dict(zip(auxkeys, db.aux_get(match, auxkeys)))
+				if metadata["EAPI"] not in ("0", "1", "2", "3") and \
+					"info" in metadata["DEFINED_PHASES"].split():
+					mypkgs.append((match, pkg_type))
+					break
+
+	# If some packages were found...
+	if mypkgs:
+		# Get our global settings (we only print stuff if it varies from
+		# the current config)
+		mydesiredvars = [ 'CHOST', 'CFLAGS', 'CXXFLAGS', 'LDFLAGS' ]
+		auxkeys = mydesiredvars + list(vardb._aux_cache_keys)
+		auxkeys.append('DEFINED_PHASES')
+		pkgsettings = portage.config(clone=settings)
+
+		# Loop through each package
+		# Only print settings if they differ from global settings
+		header_title = "Package Settings"
+		print(header_width * "=")
+		print(header_title.rjust(int(header_width/2 + len(header_title)/2)))
+		print(header_width * "=")
+		from portage.output import EOutput
+		out = EOutput()
+		for mypkg in mypkgs:
+			cpv = mypkg[0]
+			pkg_type = mypkg[1]
+			# Get all package specific variables
+			if pkg_type == "installed":
+				metadata = dict(zip(auxkeys, vardb.aux_get(cpv, auxkeys)))
+			elif pkg_type == "ebuild":
+				metadata = dict(zip(auxkeys, portdb.aux_get(cpv, auxkeys)))
+			elif pkg_type == "binary":
+				metadata = dict(zip(auxkeys, bindb.aux_get(cpv, auxkeys)))
+
+			pkg = Package(built=(pkg_type!="ebuild"), cpv=cpv,
+				installed=(pkg_type=="installed"), metadata=zip(Package.metadata_keys,
+				(metadata.get(x, '') for x in Package.metadata_keys)),
+				root_config=root_config, type_name=pkg_type)
+
+			if pkg_type == "installed":
+				print("\n%s was built with the following:" % \
+					colorize("INFORM", str(pkg.cpv)))
+			elif pkg_type == "ebuild":
+				print("\n%s would be built with the following:" % \
+					colorize("INFORM", str(pkg.cpv)))
+			elif pkg_type == "binary":
+				print("\n%s (non-installed binary) was built with the following:" % \
+					colorize("INFORM", str(pkg.cpv)))
+
+			writemsg_stdout('%s\n' % pkg_use_display(pkg, myopts),
+				noiselevel=-1)
+			if pkg_type == "installed":
+				for myvar in mydesiredvars:
+					if metadata[myvar].split() != settings.get(myvar, '').split():
+						print("%s=\"%s\"" % (myvar, metadata[myvar]))
+			print()
+
+			if metadata['DEFINED_PHASES']:
+				if 'info' not in metadata['DEFINED_PHASES'].split():
+					continue
+
+			print(">>> Attempting to run pkg_info() for '%s'" % pkg.cpv)
+
+			if pkg_type == "installed":
+				ebuildpath = vardb.findname(pkg.cpv)
+			elif pkg_type == "ebuild":
+				ebuildpath = portdb.findname(pkg.cpv, myrepo=pkg.repo)
+			elif pkg_type == "binary":
+				tbz2_file = bindb.bintree.getname(pkg.cpv)
+				ebuild_file_name = pkg.cpv.split("/")[1] + ".ebuild"
+				ebuild_file_contents = portage.xpak.tbz2(tbz2_file).getfile(ebuild_file_name)
+				tmpdir = tempfile.mkdtemp()
+				ebuildpath = os.path.join(tmpdir, ebuild_file_name)
+				file = open(ebuildpath, 'w')
+				file.write(ebuild_file_contents)
+				file.close()
+
+			if not ebuildpath or not os.path.exists(ebuildpath):
+				out.ewarn("No ebuild found for '%s'" % pkg.cpv)
+				continue
+
+			if pkg_type == "installed":
+				portage.doebuild(ebuildpath, "info", settings=pkgsettings,
+					debug=(settings.get("PORTAGE_DEBUG", "") == "1"),
+					mydbapi=trees[settings['EROOT']]["vartree"].dbapi,
+					tree="vartree")
+			elif pkg_type == "ebuild":
+				portage.doebuild(ebuildpath, "info", settings=pkgsettings,
+					debug=(settings.get("PORTAGE_DEBUG", "") == "1"),
+					mydbapi=trees[settings['EROOT']]['porttree'].dbapi,
+					tree="porttree")
+			elif pkg_type == "binary":
+				portage.doebuild(ebuildpath, "info", settings=pkgsettings,
+					debug=(settings.get("PORTAGE_DEBUG", "") == "1"),
+					mydbapi=trees[settings['EROOT']]["bintree"].dbapi,
+					tree="bintree")
+				shutil.rmtree(tmpdir)
+
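+# Transfer pregenerated metadata (e.g. ${repo}/metadata/cache) into the
+# local depcachedir, validating each entry against its ebuild and eclasses.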
+def action_metadata(settings, portdb, myopts, porttrees=None):
+	if porttrees is None:
+		porttrees = portdb.porttrees
+	portage.writemsg_stdout("\n>>> Updating Portage cache\n")
+	old_umask = os.umask(0o002)
+	cachedir = os.path.normpath(settings.depcachedir)
+	if cachedir in ["/",    "/bin", "/dev",  "/etc",  "/home",
+					"/lib", "/opt", "/proc", "/root", "/sbin",
+					"/sys", "/tmp", "/usr",  "/var"]:
+		print("!!! PORTAGE_DEPCACHEDIR IS SET TO A PRIMARY " + \
+			"ROOT DIRECTORY ON YOUR SYSTEM.", file=sys.stderr)
+		print("!!! This is ALMOST CERTAINLY NOT what you want: '%s'" % cachedir, file=sys.stderr)
+		sys.exit(73)
+	if not os.path.exists(cachedir):
+		os.makedirs(cachedir)
+
+	auxdbkeys = portdb._known_keys
+
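+	# Group each repository's pregenerated source cache, destination cache,
+	# eclass database and path together for the transfer loop below.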
+	class TreeData(object):
+		__slots__ = ('dest_db', 'eclass_db', 'path', 'src_db', 'valid_nodes')
+		def __init__(self, dest_db, eclass_db, path, src_db):
+			self.dest_db = dest_db
+			self.eclass_db = eclass_db
+			self.path = path
+			self.src_db = src_db
+			self.valid_nodes = set()
+
+	porttrees_data = []
+	for path in porttrees:
+		src_db = portdb._pregen_auxdb.get(path)
+		if src_db is None:
+			# portdbapi does not populate _pregen_auxdb
+			# when FEATURES=metadata-transfer is enabled
+			src_db = portdb._create_pregen_cache(path)
+
+		if src_db is not None:
+			porttrees_data.append(TreeData(portdb.auxdb[path],
+				portdb.repositories.get_repo_for_location(path).eclass_db, path, src_db))
+
+	porttrees = [tree_data.path for tree_data in porttrees_data]
+
+	quiet = settings.get('TERM') == 'dumb' or \
+		'--quiet' in myopts or \
+		not sys.stdout.isatty()
+
+	onProgress = None
+	if not quiet:
+		progressBar = portage.output.TermProgressBar()
+		progressHandler = ProgressHandler()
+		onProgress = progressHandler.onProgress
+		def display():
+			progressBar.set(progressHandler.curval, progressHandler.maxval)
+		progressHandler.display = display
+		def sigwinch_handler(signum, frame):
+			lines, progressBar.term_columns = \
+				portage.output.get_term_size()
+		signal.signal(signal.SIGWINCH, sigwinch_handler)
+
+	# Temporarily override portdb.porttrees so portdb.cp_all()
+	# will only return the relevant subset.
+	portdb_porttrees = portdb.porttrees
+	portdb.porttrees = porttrees
+	try:
+		cp_all = portdb.cp_all()
+	finally:
+		portdb.porttrees = portdb_porttrees
+
+	curval = 0
+	maxval = len(cp_all)
+	if onProgress is not None:
+		onProgress(maxval, curval)
+
+	# TODO: Display error messages, but do not interfere with the progress bar.
+	# Here's how:
+	#  1) erase the progress bar
+	#  2) show the error message
+	#  3) redraw the progress bar on a new line
+
+	for cp in cp_all:
+		for tree_data in porttrees_data:
+
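+			# validation_chf names the hash used to validate cache entries
+			# (e.g. 'mtime' or a content hash); the source and destination
+			# caches may use different schemes, so track both.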
+			src_chf = tree_data.src_db.validation_chf
+			dest_chf = tree_data.dest_db.validation_chf
+			dest_chf_key = '_%s_' % dest_chf
+			dest_chf_getter = operator.attrgetter(dest_chf)
+
+			for cpv in portdb.cp_list(cp, mytree=tree_data.path):
+				tree_data.valid_nodes.add(cpv)
+				try:
+					src = tree_data.src_db[cpv]
+				except (CacheError, KeyError):
+					continue
+
+				ebuild_location = portdb.findname(cpv, mytree=tree_data.path)
+				if ebuild_location is None:
+					continue
+				ebuild_hash = hashed_path(ebuild_location)
+
+				try:
+					if not tree_data.src_db.validate_entry(src,
+						ebuild_hash, tree_data.eclass_db):
+						continue
+				except CacheError:
+					continue
+
+				eapi = src.get('EAPI')
+				if not eapi:
+					eapi = '0'
+				eapi_supported = eapi_is_supported(eapi)
+				if not eapi_supported:
+					continue
+
+				dest = None
+				try:
+					dest = tree_data.dest_db[cpv]
+				except (KeyError, CacheError):
+					pass
+
+				for d in (src, dest):
+					if d is not None and d.get('EAPI') in ('', '0'):
+						del d['EAPI']
+
+				if src_chf != 'mtime':
+					# src may contain an irrelevant _mtime_ which corresponds
+					# to the time that the cache entry was written
+					src.pop('_mtime_', None)
+
+				if src_chf != dest_chf:
+					# populate src entry with dest_chf_key
+					# (the validity of the dest_chf that we generate from the
+					# ebuild here relies on the fact that we already used
+					# validate_entry to validate the ebuild with src_chf)
+					src[dest_chf_key] = dest_chf_getter(ebuild_hash)
+
+				if dest is not None:
+					if not (dest[dest_chf_key] == src[dest_chf_key] and \
+						tree_data.eclass_db.validate_and_rewrite_cache(
+							dest['_eclasses_'], tree_data.dest_db.validation_chf,
+							tree_data.dest_db.store_eclass_paths) is not None and \
+						set(dest['_eclasses_']) == set(src['_eclasses_'])):
+						dest = None
+					else:
+						# We don't want to skip the write unless we're really
+						# sure that the existing cache is identical, so don't
+						# trust _mtime_ and _eclasses_ alone.
+						for k in auxdbkeys:
+							if dest.get(k, '') != src.get(k, ''):
+								dest = None
+								break
+
+				if dest is not None:
+					# The existing data is valid and identical,
+					# so there's no need to overwrite it.
+					continue
+
+				try:
+					tree_data.dest_db[cpv] = src
+				except CacheError:
+					# ignore it; can't do anything about it.
+					pass
+
+		curval += 1
+		if onProgress is not None:
+			onProgress(maxval, curval)
+
+	if onProgress is not None:
+		onProgress(maxval, curval)
+
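+	# Prune cache entries whose corresponding ebuilds no longer exist.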
+	for tree_data in porttrees_data:
+		try:
+			dead_nodes = set(tree_data.dest_db)
+		except CacheError as e:
+			writemsg_level("Error listing cache entries for " + \
+				"'%s': %s, continuing...\n" % (tree_data.path, e),
+				level=logging.ERROR, noiselevel=-1)
+			del e
+		else:
+			dead_nodes.difference_update(tree_data.valid_nodes)
+			for cpv in dead_nodes:
+				try:
+					del tree_data.dest_db[cpv]
+				except (KeyError, CacheError):
+					pass
+
+	if not quiet:
+		# make sure the final progress is displayed
+		progressHandler.display()
+		print()
+		signal.signal(signal.SIGWINCH, signal.SIG_DFL)
+
+	sys.stdout.flush()
+	os.umask(old_umask)
+
+def action_regen(settings, portdb, max_jobs, max_load):
+	xterm_titles = "notitles" not in settings.features
+	emergelog(xterm_titles, " === regen")
+	#regenerate cache entries
+	sys.stdout.flush()
+
+	regen = MetadataRegen(portdb, max_jobs=max_jobs,
+		max_load=max_load, main=True)
+	received_signal = []
+
+	def emergeexitsig(signum, frame):
+		signal.signal(signal.SIGINT, signal.SIG_IGN)
+		signal.signal(signal.SIGTERM, signal.SIG_IGN)
+		portage.util.writemsg("\n\nExiting on signal %(signal)s\n" % \
+			{"signal":signum})
+		regen.terminate()
+		received_signal.append(128 + signum)
+
+	earlier_sigint_handler = signal.signal(signal.SIGINT, emergeexitsig)
+	earlier_sigterm_handler = signal.signal(signal.SIGTERM, emergeexitsig)
+
+	try:
+		regen.start()
+		regen.wait()
+	finally:
+		# Restore previous handlers
+		if earlier_sigint_handler is not None:
+			signal.signal(signal.SIGINT, earlier_sigint_handler)
+		else:
+			signal.signal(signal.SIGINT, signal.SIG_DFL)
+		if earlier_sigterm_handler is not None:
+			signal.signal(signal.SIGTERM, earlier_sigterm_handler)
+		else:
+			signal.signal(signal.SIGTERM, signal.SIG_DFL)
+
+	if received_signal:
+		sys.exit(received_signal[0])
+
+	portage.writemsg_stdout("done!\n")
+	return regen.returncode
+
+def action_search(root_config, myopts, myfiles, spinner):
+	if not myfiles:
+		print("emerge: no search terms provided.")
+	else:
+		searchinstance = search(root_config,
+			spinner, "--searchdesc" in myopts,
+			"--quiet" not in myopts, "--usepkg" in myopts,
+			"--usepkgonly" in myopts)
+		for mysearch in myfiles:
+			try:
+				searchinstance.execute(mysearch)
+			except re.error as comment:
+				print("\n!!! Regular expression error in \"%s\": %s" % ( mysearch, comment ))
+				sys.exit(1)
+			searchinstance.output()
+
+def action_sync(settings, trees, mtimedb, myopts, myaction):
+	enter_invalid = '--ask-enter-invalid' in myopts
+	xterm_titles = "notitles" not in settings.features
+	emergelog(xterm_titles, " === sync")
+	portdb = trees[settings['EROOT']]['porttree'].dbapi
+	myportdir = portdb.porttree_root
+	if not myportdir:
+		myportdir = settings.get('PORTDIR', '')
+		if myportdir and myportdir.strip():
+			myportdir = os.path.realpath(myportdir)
+		else:
+			myportdir = None
+	out = portage.output.EOutput()
+	global_config_path = GLOBAL_CONFIG_PATH
+	if settings['EPREFIX']:
+		global_config_path = os.path.join(settings['EPREFIX'],
+				GLOBAL_CONFIG_PATH.lstrip(os.sep))
+	if not myportdir:
+		sys.stderr.write("!!! PORTDIR is undefined.  " + \
+			"Is %s/make.globals missing?\n" % global_config_path)
+		sys.exit(1)
+	if myportdir[-1]=="/":
+		myportdir=myportdir[:-1]
+	try:
+		st = os.stat(myportdir)
+	except OSError:
+		st = None
+	if st is None:
+		print(">>>",myportdir,"not found, creating it.")
+		portage.util.ensure_dirs(myportdir, mode=0o755)
+		st = os.stat(myportdir)
+
+	usersync_uid = None
+	spawn_kwargs = {}
+	spawn_kwargs["env"] = settings.environ()
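+	# With FEATURES=usersync, drop privileges to the owner of the tree when
+	# it belongs to a different uid/gid that has owner/group access bits set.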
+	if 'usersync' in settings.features and \
+		portage.data.secpass >= 2 and \
+		(st.st_uid != os.getuid() and st.st_mode & 0o700 or \
+		st.st_gid != os.getgid() and st.st_mode & 0o070):
+		try:
+			homedir = pwd.getpwuid(st.st_uid).pw_dir
+		except KeyError:
+			pass
+		else:
+			# Drop privileges when syncing, in order to match
+			# existing uid/gid settings.
+			usersync_uid = st.st_uid
+			spawn_kwargs["uid"]    = st.st_uid
+			spawn_kwargs["gid"]    = st.st_gid
+			spawn_kwargs["groups"] = [st.st_gid]
+			spawn_kwargs["env"]["HOME"] = homedir
+			umask = 0o002
+			if not st.st_mode & 0o020:
+				umask = umask | 0o020
+			spawn_kwargs["umask"] = umask
+
+	if usersync_uid is not None:
+		# PORTAGE_TMPDIR is used below, so validate it and
+		# bail out if necessary.
+		rval = _check_temp_dir(settings)
+		if rval != os.EX_OK:
+			return rval
+
+	syncuri = settings.get("SYNC", "").strip()
+	if not syncuri:
+		writemsg_level("!!! SYNC is undefined. " + \
+			"Is %s/make.globals missing?\n" % global_config_path,
+			noiselevel=-1, level=logging.ERROR)
+		return 1
+
+	vcs_dirs = frozenset([".git", ".svn", "CVS", ".hg"])
+	vcs_dirs = vcs_dirs.intersection(os.listdir(myportdir))
+
+	os.umask(0o022)
+	dosyncuri = syncuri
+	updatecache_flg = False
+	git = False
+	if myaction == "metadata":
+		print("skipping sync")
+		updatecache_flg = True
+	elif ".git" in vcs_dirs:
+		# Update existing git repository, and ignore the syncuri. We are
+		# going to trust the user and assume that the user is in the branch
+		# that he/she wants updated. We'll let the user manage branches with
+		# git directly.
+		if portage.process.find_binary("git") is None:
+			msg = ["Command not found: git",
+			"Type \"emerge dev-util/git\" to enable git support."]
+			for l in msg:
+				writemsg_level("!!! %s\n" % l,
+					level=logging.ERROR, noiselevel=-1)
+			return 1
+		msg = ">>> Starting git pull in %s..." % myportdir
+		emergelog(xterm_titles, msg )
+		writemsg_level(msg + "\n")
+		exitcode = portage.process.spawn_bash("cd %s ; git pull" % \
+			(portage._shell_quote(myportdir),), **spawn_kwargs)
+		if exitcode != os.EX_OK:
+			msg = "!!! git pull error in %s." % myportdir
+			emergelog(xterm_titles, msg)
+			writemsg_level(msg + "\n", level=logging.ERROR, noiselevel=-1)
+			return exitcode
+		msg = ">>> Git pull in %s successful" % myportdir
+		emergelog(xterm_titles, msg)
+		writemsg_level(msg + "\n")
+		git = True
+	elif syncuri[:8]=="rsync://" or syncuri[:6]=="ssh://":
+		for vcs_dir in vcs_dirs:
+			writemsg_level(("!!! %s appears to be under revision " + \
+				"control (contains %s).\n!!! Aborting rsync sync.\n") % \
+				(myportdir, vcs_dir), level=logging.ERROR, noiselevel=-1)
+			return 1
+		if not os.path.exists("/usr/bin/rsync"):
+			print("!!! /usr/bin/rsync does not exist, so rsync support is disabled.")
+			print("!!! Type \"emerge net-misc/rsync\" to enable rsync support.")
+			sys.exit(1)
+		mytimeout=180
+
+		rsync_opts = []
+		if settings["PORTAGE_RSYNC_OPTS"] == "":
+			portage.writemsg("PORTAGE_RSYNC_OPTS empty or unset, using hardcoded defaults\n")
+			rsync_opts.extend([
+				"--recursive",    # Recurse directories
+				"--links",        # Consider symlinks
+				"--safe-links",   # Ignore links outside of tree
+				"--perms",        # Preserve permissions
+				"--times",        # Preserve mod times
+				"--compress",     # Compress the data transmitted
+				"--force",        # Force deletion on non-empty dirs
+				"--whole-file",   # Don't do block transfers, only entire files
+				"--delete",       # Delete files that aren't in the master tree
+				"--stats",        # Show final statistics about what was transferred
+				"--human-readable",
+				"--timeout="+str(mytimeout), # IO timeout if not done in X seconds
+				"--exclude=/distfiles",   # Exclude distfiles from consideration
+				"--exclude=/local",       # Exclude local     from consideration
+				"--exclude=/packages",    # Exclude packages  from consideration
+			])
+
+		else:
+			# The below validation is not needed when using the above hardcoded
+			# defaults.
+
+			portage.writemsg("Using PORTAGE_RSYNC_OPTS instead of hardcoded defaults\n", 1)
+			rsync_opts.extend(portage.util.shlex_split(
+				settings.get("PORTAGE_RSYNC_OPTS", "")))
+			for opt in ("--recursive", "--times"):
+				if opt not in rsync_opts:
+					portage.writemsg(yellow("WARNING:") + " adding required option " + \
+					"%s not included in PORTAGE_RSYNC_OPTS\n" % opt)
+					rsync_opts.append(opt)
+
+			for exclude in ("distfiles", "local", "packages"):
+				opt = "--exclude=/%s" % exclude
+				if opt not in rsync_opts:
+					portage.writemsg(yellow("WARNING:") + \
+					" adding required option %s not included in "  % opt + \
+					"PORTAGE_RSYNC_OPTS (can be overridden with --exclude='!')\n")
+					rsync_opts.append(opt)
+
+			if syncuri.rstrip("/").endswith(".gentoo.org/gentoo-portage"):
+				def rsync_opt_startswith(opt_prefix):
+					for x in rsync_opts:
+						if x.startswith(opt_prefix):
+							return True
+					return False
+
+				if not rsync_opt_startswith("--timeout="):
+					rsync_opts.append("--timeout=%d" % mytimeout)
+
+				for opt in ("--compress", "--whole-file"):
+					if opt not in rsync_opts:
+						portage.writemsg(yellow("WARNING:") + " adding required option " + \
+						"%s not included in PORTAGE_RSYNC_OPTS\n" % opt)
+						rsync_opts.append(opt)
+
+		if "--quiet" in myopts:
+			rsync_opts.append("--quiet")    # Shut up a lot
+		else:
+			rsync_opts.append("--verbose")	# Print filelist
+
+		if "--verbose" in myopts:
+			rsync_opts.append("--progress")  # Progress meter for each file
+
+		if "--debug" in myopts:
+			rsync_opts.append("--checksum") # Force checksum on all files
+
+		# Real local timestamp file.
+		servertimestampfile = os.path.join(
+			myportdir, "metadata", "timestamp.chk")
+
+		content = portage.util.grabfile(servertimestampfile)
+		mytimestamp = 0
+		if content:
+			try:
+				mytimestamp = time.mktime(time.strptime(content[0],
+					"%a, %d %b %Y %H:%M:%S +0000"))
+			except (OverflowError, ValueError):
+				pass
+		del content
+
+		try:
+			rsync_initial_timeout = \
+				int(settings.get("PORTAGE_RSYNC_INITIAL_TIMEOUT", "15"))
+		except ValueError:
+			rsync_initial_timeout = 15
+
+		try:
+			maxretries = int(settings["PORTAGE_RSYNC_RETRIES"])
+		except (KeyError, ValueError):
+			maxretries = -1 # default number of retries
+
+		retries=0
+		try:
+			proto, user_name, hostname, port = re.split(
+				r"(rsync|ssh)://([^:/]+@)?(\[[:\da-fA-F]*\]|[^:/]*)(:[0-9]+)?",
+				syncuri, maxsplit=4)[1:5]
+		except ValueError:
+			writemsg_level("!!! SYNC is invalid: %s\n" % syncuri,
+				noiselevel=-1, level=logging.ERROR)
+			return 1
+		if port is None:
+			port=""
+		if user_name is None:
+			user_name=""
+		if re.match(r"^\[[:\da-fA-F]*\]$", hostname) is None:
+			getaddrinfo_host = hostname
+		else:
+			# getaddrinfo needs the brackets stripped
+			getaddrinfo_host = hostname[1:-1]
+		updatecache_flg=True
+		all_rsync_opts = set(rsync_opts)
+		extra_rsync_opts = portage.util.shlex_split(
+			settings.get("PORTAGE_RSYNC_EXTRA_OPTS",""))
+		all_rsync_opts.update(extra_rsync_opts)
+
+		family = socket.AF_UNSPEC
+		if "-4" in all_rsync_opts or "--ipv4" in all_rsync_opts:
+			family = socket.AF_INET
+		elif socket.has_ipv6 and \
+			("-6" in all_rsync_opts or "--ipv6" in all_rsync_opts):
+			family = socket.AF_INET6
+
+		addrinfos = None
+		uris = []
+
+		try:
+			addrinfos = getaddrinfo_validate(
+				socket.getaddrinfo(getaddrinfo_host, None,
+				family, socket.SOCK_STREAM))
+		except socket.error as e:
+			writemsg_level(
+				"!!! getaddrinfo failed for '%s': %s\n" % (hostname, e),
+				noiselevel=-1, level=logging.ERROR)
+
+		if addrinfos:
+
+			AF_INET = socket.AF_INET
+			AF_INET6 = None
+			if socket.has_ipv6:
+				AF_INET6 = socket.AF_INET6
+
+			ips_v4 = []
+			ips_v6 = []
+
+			for addrinfo in addrinfos:
+				if addrinfo[0] == AF_INET:
+					ips_v4.append("%s" % addrinfo[4][0])
+				elif AF_INET6 is not None and addrinfo[0] == AF_INET6:
+					# IPv6 addresses need to be enclosed in square brackets
+					ips_v6.append("[%s]" % addrinfo[4][0])
+
+			random.shuffle(ips_v4)
+			random.shuffle(ips_v6)
+
+			# Give priority to the address family that
+			# getaddrinfo() returned first.
+			if AF_INET6 is not None and addrinfos and \
+				addrinfos[0][0] == AF_INET6:
+				ips = ips_v6 + ips_v4
+			else:
+				ips = ips_v4 + ips_v6
+
+			for ip in ips:
+				uris.append(syncuri.replace(
+					"//" + user_name + hostname + port + "/",
+					"//" + user_name + ip + port + "/", 1))
+
+		if not uris:
+			# With some configurations we need to use the plain hostname
+			# rather than try to resolve the ip addresses (bug #340817).
+			uris.append(syncuri)
+
+		# reverse, for use with pop()
+		uris.reverse()
+
+		effective_maxretries = maxretries
+		if effective_maxretries < 0:
+			effective_maxretries = len(uris) - 1
+
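+		# Sentinel exit codes, distinct from anything rsync returns, used
+		# to report the final status after the retry loop.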
+		SERVER_OUT_OF_DATE = -1
+		EXCEEDED_MAX_RETRIES = -2
+		while True:
+			if uris:
+				dosyncuri = uris.pop()
+			else:
+				writemsg("!!! Exhausted addresses for %s\n" % \
+					hostname, noiselevel=-1)
+				return 1
+
+			if (retries==0):
+				if "--ask" in myopts:
+					if userquery("Do you want to sync your Portage tree " + \
+						"with the mirror at\n" + blue(dosyncuri) + bold("?"),
+						enter_invalid) == "No":
+						print()
+						print("Quitting.")
+						print()
+						sys.exit(128 + signal.SIGINT)
+				emergelog(xterm_titles, ">>> Starting rsync with " + dosyncuri)
+				if "--quiet" not in myopts:
+					print(">>> Starting rsync with "+dosyncuri+"...")
+			else:
+				emergelog(xterm_titles,
+					">>> Starting retry %d of %d with %s" % \
+						(retries, effective_maxretries, dosyncuri))
+				writemsg_stdout(
+					"\n\n>>> Starting retry %d of %d with %s\n" % \
+					(retries, effective_maxretries, dosyncuri), noiselevel=-1)
+
+			if dosyncuri.startswith('ssh://'):
+				dosyncuri = dosyncuri[6:].replace('/', ':/', 1)
+
+			if mytimestamp != 0 and "--quiet" not in myopts:
+				print(">>> Checking server timestamp ...")
+
+			rsynccommand = ["/usr/bin/rsync"] + rsync_opts + extra_rsync_opts
+
+			if "--debug" in myopts:
+				print(rsynccommand)
+
+			exitcode = os.EX_OK
+			servertimestamp = 0
+			# Even if there's no timestamp available locally, fetch the
+			# timestamp anyway as an initial probe to verify that the server is
+			# responsive.  This protects us from hanging indefinitely on a
+			# connection attempt to an unresponsive server which rsync's
+			# --timeout option does not prevent.
+			if True:
+				# Temporary file for remote server timestamp comparison.
+				# NOTE: If FEATURES=usersync is enabled then the tempfile
+				# needs to be in a directory that's readable by the usersync
+				# user. We assume that PORTAGE_TMPDIR will satisfy this
+				# requirement, since that's not necessarily true for the
+				# default directory used by the tempfile module.
+				if usersync_uid is not None:
+					tmpdir = settings['PORTAGE_TMPDIR']
+				else:
+					# use default dir from tempfile module
+					tmpdir = None
+				fd, tmpservertimestampfile = \
+					tempfile.mkstemp(dir=tmpdir)
+				os.close(fd)
+				if usersync_uid is not None:
+					portage.util.apply_permissions(tmpservertimestampfile,
+						uid=usersync_uid)
+				mycommand = rsynccommand[:]
+				mycommand.append(dosyncuri.rstrip("/") + \
+					"/metadata/timestamp.chk")
+				mycommand.append(tmpservertimestampfile)
+				content = None
+				mypids = []
+				try:
+					# Timeout here in case the server is unresponsive.  The
+					# --timeout rsync option doesn't apply to the initial
+					# connection attempt.
+					try:
+						if rsync_initial_timeout:
+							portage.exception.AlarmSignal.register(
+								rsync_initial_timeout)
+
+						mypids.extend(portage.process.spawn(
+							mycommand, returnpid=True, **spawn_kwargs))
+						exitcode = os.waitpid(mypids[0], 0)[1]
+						if usersync_uid is not None:
+							portage.util.apply_permissions(tmpservertimestampfile,
+								uid=os.getuid())
+						content = portage.grabfile(tmpservertimestampfile)
+					finally:
+						if rsync_initial_timeout:
+							portage.exception.AlarmSignal.unregister()
+						try:
+							os.unlink(tmpservertimestampfile)
+						except OSError:
+							pass
+				except portage.exception.AlarmSignal:
+					# timed out
+					print('timed out')
+					# With waitpid and WNOHANG, only check the
+					# first element of the tuple since the second
+					# element may vary (bug #337465).
+					if mypids and os.waitpid(mypids[0], os.WNOHANG)[0] == 0:
+						os.kill(mypids[0], signal.SIGTERM)
+						os.waitpid(mypids[0], 0)
+					# This is the same code rsync uses for timeout.
+					exitcode = 30
+				else:
+					if exitcode != os.EX_OK:
+						if exitcode & 0xff:
+							exitcode = (exitcode & 0xff) << 8
+						else:
+							exitcode = exitcode >> 8
+				if mypids:
+					portage.process.spawned_pids.remove(mypids[0])
+				if content:
+					try:
+						servertimestamp = time.mktime(time.strptime(
+							content[0], "%a, %d %b %Y %H:%M:%S +0000"))
+					except (OverflowError, ValueError):
+						pass
+				del mycommand, mypids, content
+			if exitcode == os.EX_OK:
+				if (servertimestamp != 0) and (servertimestamp == mytimestamp):
+					emergelog(xterm_titles,
+						">>> Cancelling sync -- Already current.")
+					print()
+					print(">>>")
+					print(">>> Timestamps on the server and in the local repository are the same.")
+					print(">>> Cancelling all further sync action. You are already up to date.")
+					print(">>>")
+					print(">>> In order to force sync, remove '%s'." % servertimestampfile)
+					print(">>>")
+					print()
+					sys.exit(0)
+				elif (servertimestamp != 0) and (servertimestamp < mytimestamp):
+					emergelog(xterm_titles,
+						">>> Server out of date: %s" % dosyncuri)
+					print()
+					print(">>>")
+					print(">>> SERVER OUT OF DATE: %s" % dosyncuri)
+					print(">>>")
+					print(">>> In order to force sync, remove '%s'." % servertimestampfile)
+					print(">>>")
+					print()
+					exitcode = SERVER_OUT_OF_DATE
+				elif (servertimestamp == 0) or (servertimestamp > mytimestamp):
+					# actual sync
+					mycommand = rsynccommand + [dosyncuri+"/", myportdir]
+					exitcode = portage.process.spawn(mycommand, **spawn_kwargs)
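+					# Exit code 0 is success; the other codes listed here
+					# (e.g. 1 = syntax, 11 = file I/O, 20 = killed) are hard
+					# failures that retrying another mirror will not fix, so
+					# leave the retry loop and report them below.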
+					if exitcode in [0,1,3,4,11,14,20,21]:
+						break
+			elif exitcode in [1,3,4,11,14,20,21]:
+				break
+			else:
+				# Code 2 indicates protocol incompatibility, which is expected
+				# for servers with protocol < 29 that don't support
+				# --prune-empty-directories.  Retry for a server that supports
+				# at least rsync protocol version 29 (>=rsync-2.6.4).
+				pass
+
+			retries=retries+1
+
+			if maxretries < 0 or retries <= maxretries:
+				print(">>> Retrying...")
+			else:
+				# over retries
+				# exit loop
+				updatecache_flg=False
+				exitcode = EXCEEDED_MAX_RETRIES
+				break
+
+		if (exitcode==0):
+			emergelog(xterm_titles, "=== Sync completed with %s" % dosyncuri)
+		elif exitcode == SERVER_OUT_OF_DATE:
+			sys.exit(1)
+		elif exitcode == EXCEEDED_MAX_RETRIES:
+			sys.stderr.write(
+				">>> Exceeded PORTAGE_RSYNC_RETRIES: %s\n" % maxretries)
+			sys.exit(1)
+		elif (exitcode>0):
+			msg = []
+			if exitcode==1:
+				msg.append("Rsync has reported that there is a syntax error. Please ensure")
+				msg.append("that your SYNC statement is proper.")
+				msg.append("SYNC=" + settings["SYNC"])
+			elif exitcode==11:
+				msg.append("Rsync has reported that there is a File IO error. Normally")
+				msg.append("this means your disk is full, but can be caused by corruption")
+				msg.append("on the filesystem that contains PORTDIR. Please investigate")
+				msg.append("and try again after the problem has been fixed.")
+				msg.append("PORTDIR=" + settings["PORTDIR"])
+			elif exitcode==20:
+				msg.append("Rsync was killed before it finished.")
+			else:
+				msg.append("Rsync has not successfully finished. It is recommended that you keep")
+				msg.append("trying or that you use the 'emerge-webrsync' option if you are unable")
+				msg.append("to use rsync due to firewall or other restrictions. This should be a")
+				msg.append("temporary problem unless complications exist with your network")
+				msg.append("(and possibly your system's filesystem) configuration.")
+			for line in msg:
+				out.eerror(line)
+			sys.exit(exitcode)
+	elif syncuri[:6]=="cvs://":
+		if not os.path.exists("/usr/bin/cvs"):
+			print("!!! /usr/bin/cvs does not exist, so CVS support is disabled.")
+			print("!!! Type \"emerge dev-vcs/cvs\" to enable CVS support.")
+			sys.exit(1)
+		cvsroot=syncuri[6:]
+		cvsdir=os.path.dirname(myportdir)
+		if not os.path.exists(myportdir+"/CVS"):
+			#initial checkout
+			print(">>> Starting initial cvs checkout with "+syncuri+"...")
+			if os.path.exists(cvsdir+"/gentoo-x86"):
+				print("!!! existing",cvsdir+"/gentoo-x86 directory; exiting.")
+				sys.exit(1)
+			try:
+				os.rmdir(myportdir)
+			except OSError as e:
+				if e.errno != errno.ENOENT:
+					sys.stderr.write(
+						"!!! existing '%s' directory; exiting.\n" % myportdir)
+					sys.exit(1)
+				del e
+			if portage.process.spawn_bash(
+					"cd %s; exec cvs -z0 -d %s co -P gentoo-x86" % \
+					(portage._shell_quote(cvsdir), portage._shell_quote(cvsroot)),
+					**spawn_kwargs) != os.EX_OK:
+				print("!!! cvs checkout error; exiting.")
+				sys.exit(1)
+			os.rename(os.path.join(cvsdir, "gentoo-x86"), myportdir)
+		else:
+			#cvs update
+			print(">>> Starting cvs update with "+syncuri+"...")
+			retval = portage.process.spawn_bash(
+				"cd %s; exec cvs -z0 -q update -dP" % \
+				(portage._shell_quote(myportdir),), **spawn_kwargs)
+			if retval != os.EX_OK:
+				writemsg_level("!!! cvs update error; exiting.\n",
+					noiselevel=-1, level=logging.ERROR)
+				sys.exit(retval)
+		dosyncuri = syncuri
+	else:
+		writemsg_level("!!! Unrecognized protocol: SYNC='%s'\n" % (syncuri,),
+			noiselevel=-1, level=logging.ERROR)
+		return 1
+
+	# Reload the whole config from scratch.
+	portage._sync_disabled_warnings = False
+	settings, trees, mtimedb = load_emerge_config(trees=trees)
+	adjust_configs(myopts, trees)
+	root_config = trees[settings['EROOT']]['root_config']
+	portdb = trees[settings['EROOT']]['porttree'].dbapi
+
+	if git:
+		# NOTE: Do this after reloading the config, in case
+		# it did not exist prior to sync, so that the config
+		# and portdb properly account for its existence.
+		exitcode = git_sync_timestamps(portdb, myportdir)
+		if exitcode == os.EX_OK:
+			updatecache_flg = True
+
+	if updatecache_flg and \
+		myaction != "metadata" and \
+		"metadata-transfer" not in settings.features:
+		updatecache_flg = False
+
+	if updatecache_flg and \
+		os.path.exists(os.path.join(myportdir, 'metadata', 'cache')):
+
+		# Only update cache for myportdir since that's
+		# the only one that's been synced here.
+		action_metadata(settings, portdb, myopts, porttrees=[myportdir])
+
+	if myopts.get('--package-moves') != 'n' and \
+		_global_updates(trees, mtimedb["updates"], quiet=("--quiet" in myopts)):
+		mtimedb.commit()
+		# Reload the whole config from scratch.
+		settings, trees, mtimedb = load_emerge_config(trees=trees)
+		adjust_configs(myopts, trees)
+		portdb = trees[settings['EROOT']]['porttree'].dbapi
+		root_config = trees[settings['EROOT']]['root_config']
+
+	mybestpv = portdb.xmatch("bestmatch-visible",
+		portage.const.PORTAGE_PACKAGE_ATOM)
+	mypvs = portage.best(
+		trees[settings['EROOT']]['vartree'].dbapi.match(
+		portage.const.PORTAGE_PACKAGE_ATOM))
+
+	chk_updated_cfg_files(settings["EROOT"],
+		portage.util.shlex_split(settings.get("CONFIG_PROTECT", "")))
+
+	if myaction != "metadata":
+		postsync = os.path.join(settings["PORTAGE_CONFIGROOT"],
+			portage.USER_CONFIG_PATH, "bin", "post_sync")
+		if os.access(postsync, os.X_OK):
+			retval = portage.process.spawn(
+				[postsync, dosyncuri], env=settings.environ())
+			if retval != os.EX_OK:
+				writemsg_level(
+					" %s spawn of %s failed\n" % (bad("*"), postsync,),
+					level=logging.ERROR, noiselevel=-1)
+
+	if mybestpv != mypvs and "--quiet" not in myopts:
+		print()
+		print(warn(" * ")+bold("An update to portage is available.")+" It is _highly_ recommended")
+		print(warn(" * ")+"that you update portage now, before any other packages are updated.")
+		print()
+		print(warn(" * ")+"To update portage, run 'emerge portage' now.")
+		print()
+
+	display_news_notification(root_config, myopts)
+	return os.EX_OK
+
+def action_uninstall(settings, trees, ldpath_mtimes,
+	opts, action, files, spinner):
+	# For backward compat, some actions do not require leading '='.
+	ignore_missing_eq = action in ('clean', 'unmerge')
+	root = settings['ROOT']
+	eroot = settings['EROOT']
+	vardb = trees[settings['EROOT']]['vartree'].dbapi
+	valid_atoms = []
+	lookup_owners = []
+
+	# Ensure atoms are valid before calling unmerge().
+	# For backward compat, leading '=' is not required.
+	for x in files:
+		if is_valid_package_atom(x, allow_repo=True) or \
+			(ignore_missing_eq and is_valid_package_atom('=' + x)):
+
+			try:
+				atom = dep_expand(x, mydb=vardb, settings=settings)
+			except portage.exception.AmbiguousPackageName as e:
+				msg = "The short ebuild name \"" + x + \
+					"\" is ambiguous.  Please specify " + \
+					"one of the following " + \
+					"fully-qualified ebuild names instead:"
+				for line in textwrap.wrap(msg, 70):
+					writemsg_level("!!! %s\n" % (line,),
+						level=logging.ERROR, noiselevel=-1)
+				for i in e.args[0]:
+					writemsg_level("    %s\n" % colorize("INFORM", i),
+						level=logging.ERROR, noiselevel=-1)
+				writemsg_level("\n", level=logging.ERROR, noiselevel=-1)
+				return 1
+			else:
+				if atom.use and atom.use.conditional:
+					writemsg_level(
+						("\n\n!!! '%s' contains a conditional " + \
+						"which is not allowed.\n") % (x,),
+						level=logging.ERROR, noiselevel=-1)
+					writemsg_level(
+						"!!! Please check ebuild(5) for full details.\n",
+						level=logging.ERROR)
+					return 1
+				valid_atoms.append(atom)
+
+		elif x.startswith(os.sep):
+			if not x.startswith(eroot):
+				writemsg_level(("!!! '%s' does not start with" + \
+					" $EROOT.\n") % x, level=logging.ERROR, noiselevel=-1)
+				return 1
+			# Queue these up since it's most efficient to handle
+			# multiple files in a single iter_owners() call.
+			lookup_owners.append(x)
+
+		elif x.startswith(SETPREFIX) and action == "deselect":
+			valid_atoms.append(x)
+
+		elif "*" in x:
+			try:
+				ext_atom = Atom(x, allow_repo=True, allow_wildcard=True)
+			except InvalidAtom:
+				msg = []
+				msg.append("'%s' is not a valid package atom." % (x,))
+				msg.append("Please check ebuild(5) for full details.")
+				writemsg_level("".join("!!! %s\n" % line for line in msg),
+					level=logging.ERROR, noiselevel=-1)
+				return 1
+
+			for cpv in vardb.cpv_all():
+				if portage.match_from_list(ext_atom, [cpv]):
+					require_metadata = False
+					atom = portage.cpv_getkey(cpv)
+					if ext_atom.operator == '=*':
+						atom = "=" + atom + "-" + \
+							portage.versions.cpv_getversion(cpv)
+					if ext_atom.slot:
+						atom += ":" + ext_atom.slot
+						require_metadata = True
+					if ext_atom.repo:
+						atom += "::" + ext_atom.repo
+						require_metadata = True
+
+					atom = Atom(atom, allow_repo=True)
+					if require_metadata:
+						try:
+							cpv = vardb._pkg_str(cpv, ext_atom.repo)
+						except (KeyError, InvalidData):
+							continue
+						if not portage.match_from_list(atom, [cpv]):
+							continue
+
+					valid_atoms.append(atom)
+
+		else:
+			msg = []
+			msg.append("'%s' is not a valid package atom." % (x,))
+			msg.append("Please check ebuild(5) for full details.")
+			writemsg_level("".join("!!! %s\n" % line for line in msg),
+				level=logging.ERROR, noiselevel=-1)
+			return 1
+
+	if lookup_owners:
+		relative_paths = []
+		search_for_multiple = False
+		if len(lookup_owners) > 1:
+			search_for_multiple = True
+
+		for x in lookup_owners:
+			if not search_for_multiple and os.path.isdir(x):
+				search_for_multiple = True
+			relative_paths.append(x[len(root)-1:])
+
+		owners = set()
+		for pkg, relative_path in \
+			vardb._owners.iter_owners(relative_paths):
+			owners.add(pkg.mycpv)
+			if not search_for_multiple:
+				break
+
+		if owners:
+			for cpv in owners:
+				pkg = vardb._pkg_str(cpv, None)
+				atom = '%s:%s' % (pkg.cp, pkg.slot)
+				valid_atoms.append(portage.dep.Atom(atom))
+		else:
+			writemsg_level(("!!! '%s' is not claimed " + \
+				"by any package.\n") % lookup_owners[0],
+				level=logging.WARNING, noiselevel=-1)
+
+	if files and not valid_atoms:
+		return 1
+
+	if action == 'unmerge' and \
+		'--quiet' not in opts and \
+		'--quiet-unmerge-warn' not in opts:
+		msg = "This action can remove important packages! " + \
+			"In order to be safer, use " + \
+			"`emerge -pv --depclean <atom>` to check for " + \
+			"reverse dependencies before removing packages."
+		out = portage.output.EOutput()
+		for line in textwrap.wrap(msg, 72):
+			out.ewarn(line)
+
+	if action == 'deselect':
+		return action_deselect(settings, trees, opts, valid_atoms)
+
+	# Use the same logic as the Scheduler class to trigger redirection
+	# of ebuild pkg_prerm/postrm phase output to logs as appropriate
+	# for options such as --jobs, --quiet and --quiet-build.
+	max_jobs = opts.get("--jobs", 1)
+	background = (max_jobs is True or max_jobs > 1 or
+		"--quiet" in opts or opts.get("--quiet-build") == "y")
+	sched_iface = SchedulerInterface(global_event_loop(),
+		is_background=lambda: background)
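+	# E.g. --jobs=4 or --quiet makes background True, so pkg_prerm/postrm
+	# phase output goes to the build logs instead of the tty.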
+
+	if background:
+		settings.unlock()
+		settings["PORTAGE_BACKGROUND"] = "1"
+		settings.backup_changes("PORTAGE_BACKGROUND")
+		settings.lock()
+
+	if action in ('clean', 'unmerge') or \
+		(action == 'prune' and "--nodeps" in opts):
+		# When given a list of atoms, unmerge them in the order given.
+		ordered = action == 'unmerge'
+		rval = unmerge(trees[settings['EROOT']]['root_config'], opts, action,
+			valid_atoms, ldpath_mtimes, ordered=ordered,
+			scheduler=sched_iface)
+	else:
+		rval = action_depclean(settings, trees, ldpath_mtimes,
+			opts, action, valid_atoms, spinner,
+			scheduler=sched_iface)
+
+	return rval
+
+def adjust_configs(myopts, trees):
+	for myroot in trees:
+		mysettings = trees[myroot]["vartree"].settings
+		mysettings.unlock()
+		adjust_config(myopts, mysettings)
+		mysettings.lock()
+
+def adjust_config(myopts, settings):
+	"""Make emerge specific adjustments to the config."""
+
+	# Kill noauto as it will break merges otherwise.
+	if "noauto" in settings.features:
+		settings.features.remove('noauto')
+
+	fail_clean = myopts.get('--fail-clean')
+	if fail_clean is not None:
+		if fail_clean is True and \
+			'fail-clean' not in settings.features:
+			settings.features.add('fail-clean')
+		elif fail_clean == 'n' and \
+			'fail-clean' in settings.features:
+			settings.features.remove('fail-clean')
+
+	CLEAN_DELAY = 5
+	try:
+		CLEAN_DELAY = int(settings.get("CLEAN_DELAY", str(CLEAN_DELAY)))
+	except ValueError as e:
+		portage.writemsg("!!! %s\n" % str(e), noiselevel=-1)
+		portage.writemsg("!!! Unable to parse integer: CLEAN_DELAY='%s'\n" % \
+			settings["CLEAN_DELAY"], noiselevel=-1)
+	settings["CLEAN_DELAY"] = str(CLEAN_DELAY)
+	settings.backup_changes("CLEAN_DELAY")
+
+	EMERGE_WARNING_DELAY = 10
+	try:
+		EMERGE_WARNING_DELAY = int(settings.get(
+			"EMERGE_WARNING_DELAY", str(EMERGE_WARNING_DELAY)))
+	except ValueError as e:
+		portage.writemsg("!!! %s\n" % str(e), noiselevel=-1)
+		portage.writemsg("!!! Unable to parse integer: EMERGE_WARNING_DELAY='%s'\n" % \
+			settings["EMERGE_WARNING_DELAY"], noiselevel=-1)
+	settings["EMERGE_WARNING_DELAY"] = str(EMERGE_WARNING_DELAY)
+	settings.backup_changes("EMERGE_WARNING_DELAY")
+
+	buildpkg = myopts.get("--buildpkg")
+	if buildpkg is True:
+		settings.features.add("buildpkg")
+	elif buildpkg == 'n':
+		settings.features.discard("buildpkg")
+
+	if "--quiet" in myopts:
+		settings["PORTAGE_QUIET"]="1"
+		settings.backup_changes("PORTAGE_QUIET")
+
+	if "--verbose" in myopts:
+		settings["PORTAGE_VERBOSE"] = "1"
+		settings.backup_changes("PORTAGE_VERBOSE")
+
+	# Set so that configs will be merged regardless of remembered status
+	if ("--noconfmem" in myopts):
+		settings["NOCONFMEM"]="1"
+		settings.backup_changes("NOCONFMEM")
+
+	# Set various debug markers... They should be merged somehow.
+	PORTAGE_DEBUG = 0
+	try:
+		PORTAGE_DEBUG = int(settings.get("PORTAGE_DEBUG", str(PORTAGE_DEBUG)))
+		if PORTAGE_DEBUG not in (0, 1):
+			portage.writemsg("!!! Invalid value: PORTAGE_DEBUG='%i'\n" % \
+				PORTAGE_DEBUG, noiselevel=-1)
+			portage.writemsg("!!! PORTAGE_DEBUG must be either 0 or 1\n",
+				noiselevel=-1)
+			PORTAGE_DEBUG = 0
+	except ValueError as e:
+		portage.writemsg("!!! %s\n" % str(e), noiselevel=-1)
+		portage.writemsg("!!! Unable to parse integer: PORTAGE_DEBUG='%s'\n" %\
+			settings["PORTAGE_DEBUG"], noiselevel=-1)
+		del e
+	if "--debug" in myopts:
+		PORTAGE_DEBUG = 1
+	settings["PORTAGE_DEBUG"] = str(PORTAGE_DEBUG)
+	settings.backup_changes("PORTAGE_DEBUG")
+
+	if settings.get("NOCOLOR") not in ("yes","true"):
+		portage.output.havecolor = 1
+
+	# The explicit --color < y | n > option overrides the NOCOLOR environment
+	# variable and stdout auto-detection.
+	if "--color" in myopts:
+		if "y" == myopts["--color"]:
+			portage.output.havecolor = 1
+			settings["NOCOLOR"] = "false"
+		else:
+			portage.output.havecolor = 0
+			settings["NOCOLOR"] = "true"
+		settings.backup_changes("NOCOLOR")
+	elif settings.get('TERM') == 'dumb' or \
+		not sys.stdout.isatty():
+		portage.output.havecolor = 0
+		settings["NOCOLOR"] = "true"
+		settings.backup_changes("NOCOLOR")
+
+def display_missing_pkg_set(root_config, set_name):
+
+	msg = []
+	msg.append(("emerge: There are no sets to satisfy '%s'. " + \
+		"The following sets exist:") % \
+		colorize("INFORM", set_name))
+	msg.append("")
+
+	for s in sorted(root_config.sets):
+		msg.append("    %s" % s)
+	msg.append("")
+
+	writemsg_level("".join("%s\n" % l for l in msg),
+		level=logging.ERROR, noiselevel=-1)
+
+def relative_profile_path(portdir, abs_profile):
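+	"""
+	Return the profile path relative to <portdir>/profiles, or None when
+	it lies outside that tree. E.g. (hypothetical layout, no symlinks):
+
+	    relative_profile_path('/usr/portage',
+	        '/usr/portage/profiles/default/linux/amd64/10.0')
+	    -> 'default/linux/amd64/10.0'
+	"""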
+	realpath = os.path.realpath(abs_profile)
+	basepath = os.path.realpath(os.path.join(portdir, "profiles"))
+	if realpath.startswith(basepath):
+		profilever = realpath[1 + len(basepath):]
+	else:
+		profilever = None
+	return profilever
+
+def getportageversion(portdir, _unused, profile, chost, vardb):
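+	"""
+	Assemble the banner printed by `emerge --version`. Illustrative
+	result only; every field varies by system:
+
+	    Portage 2.1.10 (default/linux/amd64/10.0, gcc-4.5.3,
+	    glibc-2.12.2, 3.0.6 x86_64)
+	"""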
+	profilever = None
+	repositories = vardb.settings.repositories
+	if profile:
+		profilever = relative_profile_path(portdir, profile)
+		if profilever is None:
+			try:
+				for parent in portage.grabfile(
+					os.path.join(profile, 'parent')):
+					profilever = relative_profile_path(portdir,
+						os.path.join(profile, parent))
+					if profilever is not None:
+						break
+					colon = parent.find(":")
+					if colon != -1:
+						p_repo_name = parent[:colon]
+						try:
+							p_repo_loc = \
+								repositories.get_location_for_name(p_repo_name)
+						except KeyError:
+							pass
+						else:
+							profilever = relative_profile_path(p_repo_loc,
+								os.path.join(p_repo_loc, 'profiles',
+									parent[colon+1:]))
+							if profilever is not None:
+								break
+			except portage.exception.PortageException:
+				pass
+
+			if profilever is None:
+				try:
+					profilever = "!" + os.readlink(profile)
+				except OSError:
+					pass
+
+	if profilever is None:
+		profilever = "unavailable"
+
+	libcver = []
+	libclist = set()
+	for atom in expand_new_virt(vardb, portage.const.LIBC_PACKAGE_ATOM):
+		if not atom.blocker:
+			libclist.update(vardb.match(atom))
+	if libclist:
+		for cpv in sorted(libclist):
+			libc_split = portage.catpkgsplit(cpv)[1:]
+			if libc_split[-1] == "r0":
+				libc_split = libc_split[:-1]
+			libcver.append("-".join(libc_split))
+	else:
+		libcver = ["unavailable"]
+
+	gccver = getgccversion(chost)
+	unameout = platform.release() + " " + platform.machine()
+
+	return "Portage %s (%s, %s, %s, %s)" % \
+		(portage.VERSION, profilever, gccver, ",".join(libcver), unameout)
+
+def git_sync_timestamps(portdb, portdir):
+	"""
+	Since git doesn't preserve timestamps, synchronize timestamps between
+	entries and ebuilds/eclasses. Assume the cache has the correct timestamp
+	for a given file as long as the file in the working tree is not modified
+	(relative to HEAD).
+	"""
+
+	cache_db = portdb._pregen_auxdb.get(portdir)
+
+	try:
+		if cache_db is None:
+			# portdbapi does not populate _pregen_auxdb
+			# when FEATURES=metadata-transfer is enabled
+			cache_db = portdb._create_pregen_cache(portdir)
+	except CacheError as e:
+		writemsg_level("!!! Unable to instantiate cache: %s\n" % (e,),
+			level=logging.ERROR, noiselevel=-1)
+		return 1
+
+	if cache_db is None:
+		return os.EX_OK
+
+	if cache_db.validation_chf != 'mtime':
+		# newer formats like md5-dict do not require mtime sync
+		return os.EX_OK
+
+	writemsg_level(">>> Synchronizing timestamps...\n")
+
+	ec_dir = os.path.join(portdir, "eclass")
+	try:
+		ec_names = set(f[:-7] for f in os.listdir(ec_dir) \
+			if f.endswith(".eclass"))
+	except OSError as e:
+		writemsg_level("!!! Unable to list eclasses: %s\n" % (e,),
+			level=logging.ERROR, noiselevel=-1)
+		return 1
+
+	args = [portage.const.BASH_BINARY, "-c",
+		"cd %s && git diff-index --name-only --diff-filter=M HEAD" % \
+		portage._shell_quote(portdir)]
+	proc = subprocess.Popen(args, stdout=subprocess.PIPE)
+	modified_files = set(_unicode_decode(l).rstrip("\n") for l in proc.stdout)
+	rval = proc.wait()
+	proc.stdout.close()
+	if rval != os.EX_OK:
+		return rval
+
+	modified_eclasses = set(ec for ec in ec_names \
+		if os.path.join("eclass", ec + ".eclass") in modified_files)
+
+	updated_ec_mtimes = {}
+
+	for cpv in cache_db:
+		cpv_split = portage.catpkgsplit(cpv)
+		if cpv_split is None:
+			writemsg_level("!!! Invalid cache entry: %s\n" % (cpv,),
+				level=logging.ERROR, noiselevel=-1)
+			continue
+
+		cat, pn, ver, rev = cpv_split
+		cat, pf = portage.catsplit(cpv)
+		relative_eb_path = os.path.join(cat, pn, pf + ".ebuild")
+		if relative_eb_path in modified_files:
+			continue
+
+		try:
+			cache_entry = cache_db[cpv]
+			eb_mtime = cache_entry.get("_mtime_")
+			ec_mtimes = cache_entry.get("_eclasses_")
+		except KeyError:
+			writemsg_level("!!! Missing cache entry: %s\n" % (cpv,),
+				level=logging.ERROR, noiselevel=-1)
+			continue
+		except CacheError as e:
+			writemsg_level("!!! Unable to access cache entry: %s %s\n" % \
+				(cpv, e), level=logging.ERROR, noiselevel=-1)
+			continue
+
+		if eb_mtime is None:
+			writemsg_level("!!! Missing ebuild mtime: %s\n" % (cpv,),
+				level=logging.ERROR, noiselevel=-1)
+			continue
+
+		try:
+			eb_mtime = long(eb_mtime)
+		except ValueError:
+			writemsg_level("!!! Invalid ebuild mtime: %s %s\n" % \
+				(cpv, eb_mtime), level=logging.ERROR, noiselevel=-1)
+			continue
+
+		if ec_mtimes is None:
+			writemsg_level("!!! Missing eclass mtimes: %s\n" % (cpv,),
+				level=logging.ERROR, noiselevel=-1)
+			continue
+
+		if modified_eclasses.intersection(ec_mtimes):
+			continue
+
+		missing_eclasses = set(ec_mtimes).difference(ec_names)
+		if missing_eclasses:
+			writemsg_level("!!! Non-existent eclass(es): %s %s\n" % \
+				(cpv, sorted(missing_eclasses)), level=logging.ERROR,
+				noiselevel=-1)
+			continue
+
+		eb_path = os.path.join(portdir, relative_eb_path)
+		try:
+			current_eb_mtime = os.stat(eb_path)[stat.ST_MTIME]
+		except OSError:
+			writemsg_level("!!! Missing ebuild: %s\n" % \
+				(cpv,), level=logging.ERROR, noiselevel=-1)
+			continue
+
+		inconsistent = False
+		for ec, (ec_path, ec_mtime) in ec_mtimes.items():
+			updated_mtime = updated_ec_mtimes.get(ec)
+			if updated_mtime is not None and updated_mtime != ec_mtime:
+				writemsg_level("!!! Inconsistent eclass mtime: %s %s\n" % \
+					(cpv, ec), level=logging.ERROR, noiselevel=-1)
+				inconsistent = True
+				break
+
+		if inconsistent:
+			continue
+
+		if current_eb_mtime != eb_mtime:
+			os.utime(eb_path, (eb_mtime, eb_mtime))
+
+		for ec, (ec_path, ec_mtime) in ec_mtimes.items():
+			if ec in updated_ec_mtimes:
+				continue
+			ec_path = os.path.join(ec_dir, ec + ".eclass")
+			current_mtime = os.stat(ec_path)[stat.ST_MTIME]
+			if current_mtime != ec_mtime:
+				os.utime(ec_path, (ec_mtime, ec_mtime))
+			updated_ec_mtimes[ec] = ec_mtime
+
+	return os.EX_OK
+
+def load_emerge_config(trees=None):
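+	"""
+	Load (or reload) the emerge configuration. Usage sketch, assuming a
+	working Portage install:
+
+	    settings, trees, mtimedb = load_emerge_config()
+	    vardb = trees[settings['EROOT']]['vartree'].dbapi
+	"""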
+	kwargs = {}
+	for k, envvar in (("config_root", "PORTAGE_CONFIGROOT"), ("target_root", "ROOT")):
+		v = os.environ.get(envvar, None)
+		if v and v.strip():
+			kwargs[k] = v
+	trees = portage.create_trees(trees=trees, **kwargs)
+
+	for root_trees in trees.values():
+		settings = root_trees["vartree"].settings
+		settings._init_dirs()
+		setconfig = load_default_config(settings, root_trees)
+		root_trees["root_config"] = RootConfig(settings, root_trees, setconfig)
+
+	settings = trees[trees._target_eroot]['vartree'].settings
+	mtimedbfile = os.path.join(settings['EROOT'], portage.CACHE_PATH, "mtimedb")
+	mtimedb = portage.MtimeDB(mtimedbfile)
+	QueryCommand._db = trees
+	return settings, trees, mtimedb
+
+def getgccversion(chost):
+	"""
+	rtype: C{str}
+	return:  the current in-use gcc version
+	"""
+
+	gcc_ver_command = ['gcc', '-dumpversion']
+	gcc_ver_prefix = 'gcc-'
+
+	gcc_not_found_error = red(
+	"!!! No gcc found. You probably need to 'source /etc/profile'\n" +
+	"!!! to update the environment of this terminal and possibly\n" +
+	"!!! other terminals also.\n"
+	)
+
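+	# Resolution order below, with illustrative values only:
+	#   1. `gcc-config -c`            -> "x86_64-pc-linux-gnu-4.5.3"
+	#   2. `<chost>-gcc -dumpversion` -> "4.5.3"
+	#   3. `gcc -dumpversion`         -> "4.5.3"
+	# each normalized to "gcc-4.5.3"; if all fail, "[unavailable]".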
+	try:
+		proc = subprocess.Popen(["gcc-config", "-c"],
+			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+	except OSError:
+		myoutput = None
+		mystatus = 1
+	else:
+		myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+		mystatus = proc.wait()
+	if mystatus == os.EX_OK and myoutput.startswith(chost + "-"):
+		return myoutput.replace(chost + "-", gcc_ver_prefix, 1)
+
+	try:
+		proc = subprocess.Popen(
+			[chost + "-" + gcc_ver_command[0]] + gcc_ver_command[1:],
+			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+	except OSError:
+		myoutput = None
+		mystatus = 1
+	else:
+		myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+		mystatus = proc.wait()
+	if mystatus == os.EX_OK:
+		return gcc_ver_prefix + myoutput
+
+	try:
+		proc = subprocess.Popen(gcc_ver_command,
+			stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+	except OSError:
+		myoutput = None
+		mystatus = 1
+	else:
+		myoutput = _unicode_decode(proc.communicate()[0]).rstrip("\n")
+		mystatus = proc.wait()
+	if mystatus == os.EX_OK:
+		return gcc_ver_prefix + myoutput
+
+	portage.writemsg(gcc_not_found_error, noiselevel=-1)
+	return "[unavailable]"
+
+# Warn about features that may confuse users and
+# lead them to report invalid bugs.
+_emerge_features_warn = frozenset(['keeptemp', 'keepwork'])
+
+def validate_ebuild_environment(trees):
+	features_warn = set()
+	for myroot in trees:
+		settings = trees[myroot]["vartree"].settings
+		settings.validate()
+		features_warn.update(
+			_emerge_features_warn.intersection(settings.features))
+
+	if features_warn:
+		msg = "WARNING: The FEATURES variable contains one " + \
+			"or more values that should be disabled under " + \
+			"normal circumstances: %s" % " ".join(features_warn)
+		out = portage.output.EOutput()
+		for line in textwrap.wrap(msg, 65):
+			out.ewarn(line)
+
+def check_procfs():
+	procfs_path = '/proc'
+	if platform.system() not in ("Linux",) or \
+		os.path.ismount(procfs_path):
+		return os.EX_OK
+	msg = "It seems that %s is not mounted. You have been warned." % procfs_path
+	writemsg_level("".join("!!! %s\n" % l for l in textwrap.wrap(msg, 70)),
+		level=logging.ERROR, noiselevel=-1)
+	return 1
+
+def config_protect_check(trees):
+	for root, root_trees in trees.items():
+		settings = root_trees["root_config"].settings
+		if not settings.get("CONFIG_PROTECT"):
+			msg = "!!! CONFIG_PROTECT is empty"
+			if settings["ROOT"] != "/":
+				msg += " for '%s'" % root
+			msg += "\n"
+			writemsg_level(msg, level=logging.WARN, noiselevel=-1)
+
+def apply_priorities(settings):
+	ionice(settings)
+	nice(settings)
+
+def nice(settings):
+	try:
+		os.nice(int(settings.get("PORTAGE_NICENESS", "0")))
+	except (OSError, ValueError) as e:
+		out = portage.output.EOutput()
+		out.eerror("Failed to change nice value to '%s'" % \
+			settings["PORTAGE_NICENESS"])
+		out.eerror("%s\n" % str(e))
+
+def ionice(settings):
+
+	ionice_cmd = settings.get("PORTAGE_IONICE_COMMAND")
+	if ionice_cmd:
+		ionice_cmd = portage.util.shlex_split(ionice_cmd)
+	if not ionice_cmd:
+		return
+
+	variables = {"PID" : str(os.getpid())}
+	cmd = [varexpand(x, mydict=variables) for x in ionice_cmd]
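+	# E.g. PORTAGE_IONICE_COMMAND="ionice -c 3 -p \${PID}" (see
+	# make.conf(5)) expands here to
+	# ['ionice', '-c', '3', '-p', '<pid of this emerge>'].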
+
+	try:
+		rval = portage.process.spawn(cmd, env=os.environ)
+	except portage.exception.CommandNotFound:
+		# The OS kernel probably doesn't support ionice,
+		# so return silently.
+		return
+
+	if rval != os.EX_OK:
+		out = portage.output.EOutput()
+		out.eerror("PORTAGE_IONICE_COMMAND returned %d" % (rval,))
+		out.eerror("See the make.conf(5) man page for PORTAGE_IONICE_COMMAND usage instructions.")
+
+def setconfig_fallback(root_config):
+	setconfig = root_config.setconfig
+	setconfig._create_default_config()
+	setconfig._parse(update=True)
+	root_config.sets = setconfig.getSets()
+
+def get_missing_sets(root_config):
+	# emerge requires existence of "world", "selected", and "system"
+	missing_sets = []
+
+	for s in ("selected", "system", "world",):
+		if s not in root_config.sets:
+			missing_sets.append(s)
+
+	return missing_sets
+
+def missing_sets_warning(root_config, missing_sets):
+	if len(missing_sets) > 2:
+		missing_sets_str = ", ".join('"%s"' % s for s in missing_sets[:-1])
+		missing_sets_str += ', and "%s"' % missing_sets[-1]
+	elif len(missing_sets) == 2:
+		missing_sets_str = '"%s" and "%s"' % tuple(missing_sets)
+	else:
+		missing_sets_str = '"%s"' % missing_sets[-1]
+	msg = ["emerge: incomplete set configuration, " + \
+		"missing set(s): %s" % missing_sets_str]
+	if root_config.sets:
+		msg.append("        sets defined: %s" % ", ".join(root_config.sets))
+	global_config_path = portage.const.GLOBAL_CONFIG_PATH
+	if root_config.settings['EPREFIX']:
+		global_config_path = os.path.join(root_config.settings['EPREFIX'],
+				portage.const.GLOBAL_CONFIG_PATH.lstrip(os.sep))
+	msg.append("        This usually means that '%s'" % \
+		(os.path.join(global_config_path, "sets/portage.conf"),))
+	msg.append("        is missing or corrupt.")
+	msg.append("        Falling back to default world and system set configuration!!!")
+	for line in msg:
+		writemsg_level(line + "\n", level=logging.ERROR, noiselevel=-1)
+
+def ensure_required_sets(trees):
+	warning_shown = False
+	for root_trees in trees.values():
+		missing_sets = get_missing_sets(root_trees["root_config"])
+		if missing_sets and not warning_shown:
+			warning_shown = True
+			missing_sets_warning(root_trees["root_config"], missing_sets)
+		if missing_sets:
+			setconfig_fallback(root_trees["root_config"])
+
+def expand_set_arguments(myfiles, myaction, root_config):
+	retval = os.EX_OK
+	setconfig = root_config.setconfig
+
+	sets = setconfig.getSets()
+
+	# In order to know exactly which atoms/sets should be added to the
+	# world file, the depgraph performs set expansion later. It will get
+	# confused about where the atoms came from if it's not allowed to
+	# expand them itself.
+	do_not_expand = (None, )
+	newargs = []
+	for a in myfiles:
+		if a in ("system", "world"):
+			newargs.append(SETPREFIX+a)
+		else:
+			newargs.append(a)
+	myfiles = newargs
+	del newargs
+	newargs = []
+
+	# separators for set arguments
+	ARG_START = "{"
+	ARG_END = "}"
+
+	for i in range(0, len(myfiles)):
+		if myfiles[i].startswith(SETPREFIX):
+			start = 0
+			end = 0
+			x = myfiles[i][len(SETPREFIX):]
+			newset = ""
+			while x:
+				start = x.find(ARG_START)
+				end = x.find(ARG_END)
+				if start > 0 and start < end:
+					namepart = x[:start]
+					argpart = x[start+1:end]
+
+					# TODO: implement proper quoting
+					args = argpart.split(",")
+					options = {}
+					for a in args:
+						if "=" in a:
+							k, v  = a.split("=", 1)
+							options[k] = v
+						else:
+							options[a] = "True"
+					setconfig.update(namepart, options)
+					newset += (x[:start-len(namepart)]+namepart)
+					x = x[end+len(ARG_END):]
+				else:
+					newset += x
+					x = ""
+			myfiles[i] = SETPREFIX+newset
+
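+	# The loop above rewrites e.g. a hypothetical
+	# "@myset{intersect=@world,flag}" argument into plain "@myset" after
+	# applying setconfig.update('myset',
+	# {'intersect': '@world', 'flag': 'True'}).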
+	sets = setconfig.getSets()
+
+	# display errors that occurred while loading the SetConfig instance
+	for e in setconfig.errors:
+		print(colorize("BAD", "Error during set creation: %s" % e))
+
+	unmerge_actions = ("unmerge", "prune", "clean", "depclean")
+
+	for a in myfiles:
+		if a.startswith(SETPREFIX):
+				s = a[len(SETPREFIX):]
+				if s not in sets:
+					display_missing_pkg_set(root_config, s)
+					return (None, 1)
+				if s == "installed":
+					msg = ("The @installed set is deprecated and will soon be "
+					"removed. Please refer to bug #387059 for details.")
+					out = portage.output.EOutput()
+					for line in textwrap.wrap(msg, 50):
+						out.ewarn(line)
+				setconfig.active.append(s)
+				try:
+					set_atoms = setconfig.getSetAtoms(s)
+				except portage.exception.PackageSetNotFound as e:
+					writemsg_level(("emerge: the given set '%s' " + \
+						"contains a non-existent set named '%s'.\n") % \
+						(s, e), level=logging.ERROR, noiselevel=-1)
+					if s in ('world', 'selected') and \
+						SETPREFIX + e.value in sets['selected']:
+						writemsg_level(("Use `emerge --deselect %s%s` to "
+							"remove this set from world_sets.\n") %
+							(SETPREFIX, e,), level=logging.ERROR,
+							noiselevel=-1)
+					return (None, 1)
+				if myaction in unmerge_actions and \
+						not sets[s].supportsOperation("unmerge"):
+					sys.stderr.write("emerge: the given set '%s' does " % s + \
+						"not support unmerge operations\n")
+					retval = 1
+				elif not set_atoms:
+					print("emerge: '%s' is an empty set" % s)
+				elif myaction not in do_not_expand:
+					newargs.extend(set_atoms)
+				else:
+					newargs.append(SETPREFIX+s)
+				for e in sets[s].errors:
+					print(e)
+		else:
+			newargs.append(a)
+	return (newargs, retval)
+
+def repo_name_check(trees):
+	missing_repo_names = set()
+	for root_trees in trees.values():
+		porttree = root_trees.get("porttree")
+		if porttree:
+			portdb = porttree.dbapi
+			missing_repo_names.update(portdb.getMissingRepoNames())
+			if portdb.porttree_root in missing_repo_names and \
+				not os.path.exists(os.path.join(
+				portdb.porttree_root, "profiles")):
+				# This is normal if $PORTDIR happens to be empty,
+				# so don't warn about it.
+				missing_repo_names.remove(portdb.porttree_root)
+
+	if missing_repo_names:
+		msg = []
+		msg.append("WARNING: One or more repositories " + \
+			"have missing repo_name entries:")
+		msg.append("")
+		for p in missing_repo_names:
+			msg.append("\t%s/profiles/repo_name" % (p,))
+		msg.append("")
+		msg.extend(textwrap.wrap("NOTE: Each repo_name entry " + \
+			"should be a plain text file containing a unique " + \
+			"name for the repository on the first line.", 70))
+		msg.append("\n")
+		writemsg_level("".join("%s\n" % l for l in msg),
+			level=logging.WARNING, noiselevel=-1)
+
+	return bool(missing_repo_names)
+
+def repo_name_duplicate_check(trees):
+	ignored_repos = {}
+	for root, root_trees in trees.items():
+		if 'porttree' in root_trees:
+			portdb = root_trees['porttree'].dbapi
+			if portdb.settings.get('PORTAGE_REPO_DUPLICATE_WARN') != '0':
+				for repo_name, paths in portdb.getIgnoredRepos():
+					k = (root, repo_name, portdb.getRepositoryPath(repo_name))
+					ignored_repos.setdefault(k, []).extend(paths)
+
+	if ignored_repos:
+		msg = []
+		msg.append('WARNING: One or more repositories ' + \
+			'have been ignored due to duplicate')
+		msg.append('  profiles/repo_name entries:')
+		msg.append('')
+		for k in sorted(ignored_repos):
+			msg.append('  %s overrides' % ", ".join(k))
+			for path in ignored_repos[k]:
+				msg.append('    %s' % (path,))
+			msg.append('')
+		msg.extend('  ' + x for x in textwrap.wrap(
+			"All profiles/repo_name entries must be unique in order " + \
+			"to avoid having duplicates ignored. " + \
+			"Set PORTAGE_REPO_DUPLICATE_WARN=\"0\" in " + \
+			"/etc/make.conf if you would like to disable this warning."))
+		msg.append("\n")
+		writemsg_level(''.join('%s\n' % l for l in msg),
+			level=logging.WARNING, noiselevel=-1)
+
+	return bool(ignored_repos)
+
+def run_action(settings, trees, mtimedb, myaction, myopts, myfiles,
+	build_dict, gc_locals=None):
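+	"""
+	Top-level dispatcher for a single emerge invocation. build_dict is
+	the caller's build-job context (gobs) and is passed through to
+	action_build(); gc_locals lets the caller free its local frame before
+	this long-running call proceeds.
+	"""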
+
+	# The caller may have its local variables garbage collected, so
+	# they don't consume any memory during this long-running function.
+	if gc_locals is not None:
+		gc_locals()
+		gc_locals = None
+
+	# skip global updates prior to sync, since it's called after sync
+	if myaction not in ('help', 'info', 'sync', 'version') and \
+		myopts.get('--package-moves') != 'n' and \
+		_global_updates(trees, mtimedb["updates"], quiet=("--quiet" in myopts)):
+		mtimedb.commit()
+		# Reload the whole config from scratch.
+		settings, trees, mtimedb = load_emerge_config(trees=trees)
+
+	xterm_titles = "notitles" not in settings.features
+	if xterm_titles:
+		xtermTitle("emerge")
+
+	if "--digest" in myopts:
+		os.environ["FEATURES"] = os.environ.get("FEATURES","") + " digest"
+		# Reload the whole config from scratch so that the portdbapi internal
+		# config is updated with new FEATURES.
+		settings, trees, mtimedb = load_emerge_config(trees=trees)
+
+	# NOTE: adjust_configs() can map options to FEATURES, so any relevant
+	# options adjustments should be made prior to calling adjust_configs().
+	if "--buildpkgonly" in myopts:
+		myopts["--buildpkg"] = True
+
+	if "getbinpkg" in settings.features:
+		myopts["--getbinpkg"] = True
+
+	if "--getbinpkgonly" in myopts:
+		myopts["--getbinpkg"] = True
+
+	if "--getbinpkgonly" in myopts:
+		myopts["--usepkgonly"] = True
+
+	if "--getbinpkg" in myopts:
+		myopts["--usepkg"] = True
+
+	if "--usepkgonly" in myopts:
+		myopts["--usepkg"] = True
+
+	if "--buildpkgonly" in myopts:
+		# --buildpkgonly will not merge anything, so
+		# it cancels all binary package options.
+		for opt in ("--getbinpkg", "--getbinpkgonly",
+			"--usepkg", "--usepkgonly"):
+			myopts.pop(opt, None)
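+	# Net effect of the implications above:
+	#   --getbinpkgonly -> --getbinpkg, --usepkgonly
+	#   --getbinpkg     -> --usepkg
+	#   --usepkgonly    -> --usepkg
+	#   --buildpkgonly  -> every binary-package option is dropped again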
+
+	adjust_configs(myopts, trees)
+	apply_priorities(settings)
+
+	if myaction == 'version':
+		writemsg_stdout(getportageversion(
+			settings["PORTDIR"], None,
+			settings.profile_path, settings["CHOST"],
+			trees[settings['EROOT']]['vartree'].dbapi) + '\n', noiselevel=-1)
+		return 0
+	elif myaction == 'help':
+		emerge_help()
+		return 0
+
+	spinner = stdout_spinner()
+	if "candy" in settings.features:
+		spinner.update = spinner.update_scroll
+
+	if "--quiet" not in myopts:
+		portage.deprecated_profile_check(settings=settings)
+		if portage.const._ENABLE_REPO_NAME_WARN:
+			# Bug #248603 - Disable warnings about missing
+			# repo_name entries for stable branch.
+			repo_name_check(trees)
+		repo_name_duplicate_check(trees)
+		config_protect_check(trees)
+	check_procfs()
+
+	for mytrees in trees.values():
+		mydb = mytrees["porttree"].dbapi
+		# Freeze the portdbapi for performance (memoize all xmatch results).
+		mydb.freeze()
+
+		if myaction in ('search', None) and \
+			"--usepkg" in myopts:
+			# Populate the bintree with current --getbinpkg setting.
+			# This needs to happen before expand_set_arguments(), in case
+			# any sets use the bintree.
+			mytrees["bintree"].populate(
+				getbinpkgs="--getbinpkg" in myopts)
+
+	del mytrees, mydb
+
+	for x in myfiles:
+		ext = os.path.splitext(x)[1]
+		if (ext == ".ebuild" or ext == ".tbz2") and \
+			os.path.exists(os.path.abspath(x)):
+			print(colorize("BAD", "\n*** emerging by path is broken "
+				"and may not always work!!!\n"))
+			break
+
+	root_config = trees[settings['EROOT']]['root_config']
+
+	if myaction == "list-sets":
+		writemsg_stdout("".join("%s\n" % s for s in sorted(root_config.sets)))
+		return os.EX_OK
+	elif myaction == "check-news":
+		news_counts = count_unread_news(
+			root_config.trees["porttree"].dbapi,
+			root_config.trees["vartree"].dbapi)
+		if any(news_counts.values()):
+			display_news_notifications(news_counts)
+		elif "--quiet" not in myopts:
+			print("", colorize("GOOD", "*"), "No news items were found.")
+		return os.EX_OK
+
+	ensure_required_sets(trees)
+
+	# only expand sets for actions taking package arguments
+	oldargs = myfiles[:]
+	if myaction in ("clean", "config", "depclean",
+		"info", "prune", "unmerge", None):
+		myfiles, retval = expand_set_arguments(myfiles, myaction, root_config)
+		if retval != os.EX_OK:
+			return retval
+
+		# Need to handle empty sets specially, otherwise emerge will react
+		# with the help message for empty argument lists
+		if oldargs and not myfiles:
+			print("emerge: no targets left after set expansion")
+			return 0
+
+	if ("--tree" in myopts) and ("--columns" in myopts):
+		print("emerge: can't specify both of \"--tree\" and \"--columns\".")
+		return 1
+
+	if '--emptytree' in myopts and '--noreplace' in myopts:
+		writemsg_level("emerge: can't specify both of " + \
+			"\"--emptytree\" and \"--noreplace\".\n",
+			level=logging.ERROR, noiselevel=-1)
+		return 1
+
+	if ("--quiet" in myopts):
+		spinner.update = spinner.update_quiet
+		portage.util.noiselimit = -1
+
+	if "--fetch-all-uri" in myopts:
+		myopts["--fetchonly"] = True
+
+	if "--skipfirst" in myopts and "--resume" not in myopts:
+		myopts["--resume"] = True
+
+	# Allow -p to remove --ask
+	if "--pretend" in myopts:
+		myopts.pop("--ask", None)
+
+	# forbid --ask when not in a terminal
+	# note: this breaks `emerge --ask | tee logfile`, but that doesn't work anyway.
+	if ("--ask" in myopts) and (not sys.stdin.isatty()):
+		portage.writemsg("!!! \"--ask\" should only be used in a terminal. Exiting.\n",
+			noiselevel=-1)
+		return 1
+
+	if settings.get("PORTAGE_DEBUG", "") == "1":
+		spinner.update = spinner.update_quiet
+		portage.util.noiselimit = 0
+		if "python-trace" in settings.features:
+			portage.debug.set_trace(True)
+
+	if not ("--quiet" in myopts):
+		if '--nospinner' in myopts or \
+			settings.get('TERM') == 'dumb' or \
+			not sys.stdout.isatty():
+			spinner.update = spinner.update_basic
+
+	if "--debug" in myopts:
+		print("myaction", myaction)
+		print("myopts", myopts)
+
+	if not myaction and not myfiles and "--resume" not in myopts:
+		emerge_help()
+		return 1
+
+	pretend = "--pretend" in myopts
+	fetchonly = "--fetchonly" in myopts or "--fetch-all-uri" in myopts
+	buildpkgonly = "--buildpkgonly" in myopts
+
+	# check if root user is the current user for the actions where emerge needs this
+	if portage.data.secpass < 2:
+		# We've already allowed "--version" and "--help" above.
+		if "--pretend" not in myopts and myaction not in ("search","info"):
+			need_superuser = myaction in ('clean', 'depclean', 'deselect',
+				'prune', 'unmerge') or not \
+				(fetchonly or \
+				(buildpkgonly and portage.data.secpass >= 1) or \
+				myaction in ("metadata", "regen", "sync"))
+			if portage.data.secpass < 1 or \
+				need_superuser:
+				if need_superuser:
+					access_desc = "superuser"
+				else:
+					access_desc = "portage group"
+				# Always show portage_group_warning() when only portage group
+				# access is required but the user is not in the portage group.
+				if "--ask" in myopts:
+					writemsg_stdout("This action requires %s access...\n" % \
+						(access_desc,), noiselevel=-1)
+					if portage.data.secpass < 1 and not need_superuser:
+						portage.data.portage_group_warning()
+					if userquery("Would you like to add --pretend to options?",
+						"--ask-enter-invalid" in myopts) == "No":
+						return 128 + signal.SIGINT
+					myopts["--pretend"] = True
+					myopts.pop("--ask")
+				else:
+					sys.stderr.write(("emerge: %s access is required\n") \
+						% access_desc)
+					if portage.data.secpass < 1 and not need_superuser:
+						portage.data.portage_group_warning()
+					return 1
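+	# E.g. running `emerge --unmerge foo` as a plain user either offers to
+	# continue with --pretend (under --ask) or exits with
+	# "emerge: superuser access is required".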
+
+	# Disable emergelog for everything except build or unmerge operations.
+	# This helps minimize parallel emerge.log entries that can confuse log
+	# parsers like genlop.
+	disable_emergelog = False
+	for x in ("--pretend", "--fetchonly", "--fetch-all-uri"):
+		if x in myopts:
+			disable_emergelog = True
+			break
+	if disable_emergelog:
+		pass
+	elif myaction in ("search", "info"):
+		disable_emergelog = True
+	elif portage.data.secpass < 1:
+		disable_emergelog = True
+
+	import _emerge.emergelog
+	_emerge.emergelog._disable = disable_emergelog
+
+	if not disable_emergelog:
+		if 'EMERGE_LOG_DIR' in settings:
+			try:
+				# At least the parent needs to exist for the lock file.
+				portage.util.ensure_dirs(settings['EMERGE_LOG_DIR'])
+			except portage.exception.PortageException as e:
+				writemsg_level("!!! Error creating directory for " + \
+					"EMERGE_LOG_DIR='%s':\n!!! %s\n" % \
+					(settings['EMERGE_LOG_DIR'], e),
+					noiselevel=-1, level=logging.ERROR)
+				portage.util.ensure_dirs(_emerge.emergelog._emerge_log_dir)
+			else:
+				_emerge.emergelog._emerge_log_dir = settings["EMERGE_LOG_DIR"]
+		else:
+			_emerge.emergelog._emerge_log_dir = os.path.join(os.sep,
+				settings["EPREFIX"].lstrip(os.sep), "var", "log")
+			portage.util.ensure_dirs(_emerge.emergelog._emerge_log_dir)
+
+	if not "--pretend" in myopts:
+		emergelog(xterm_titles, "Started emerge on: "+\
+			_unicode_decode(
+				time.strftime("%b %d, %Y %H:%M:%S", time.localtime()),
+				encoding=_encodings['content'], errors='replace'))
+		myelogstr=""
+		if myopts:
+			opt_list = []
+			for opt, arg in myopts.items():
+				if arg is True:
+					opt_list.append(opt)
+				elif isinstance(arg, list):
+					# arguments like --exclude that use 'append' action
+					for x in arg:
+						opt_list.append("%s=%s" % (opt, x))
+				else:
+					opt_list.append("%s=%s" % (opt, arg))
+			myelogstr=" ".join(opt_list)
+		if myaction:
+			myelogstr += " --" + myaction
+		if myfiles:
+			myelogstr += " " + " ".join(oldargs)
+		emergelog(xterm_titles, " *** emerge " + myelogstr)
+
+	oldargs = None
+
+	def emergeexitsig(signum, frame):
+		signal.signal(signal.SIGTERM, signal.SIG_IGN)
+		portage.util.writemsg(
+			"\n\nExiting on signal %(signal)s\n" % {"signal":signum})
+		sys.exit(128 + signum)
+
+	signal.signal(signal.SIGTERM, emergeexitsig)
+
+	def emergeexit():
+		"""This gets out final log message in before we quit."""
+		if "--pretend" not in myopts:
+			emergelog(xterm_titles, " *** terminating.")
+		if xterm_titles:
+			xtermTitleReset()
+	portage.atexit_register(emergeexit)
+
+	if myaction in ("config", "metadata", "regen", "sync"):
+		if "--pretend" in myopts:
+			sys.stderr.write(("emerge: The '%s' action does " + \
+				"not support '--pretend'.\n") % myaction)
+			return 1
+
+	if "sync" == myaction:
+		return action_sync(settings, trees, mtimedb, myopts, myaction)
+	elif "metadata" == myaction:
+		action_metadata(settings,
+			trees[settings['EROOT']]['porttree'].dbapi, myopts)
+	elif myaction=="regen":
+		validate_ebuild_environment(trees)
+		return action_regen(settings,
+			trees[settings['EROOT']]['porttree'].dbapi, myopts.get("--jobs"),
+			myopts.get("--load-average"))
+	# HELP action
+	elif "config"==myaction:
+		validate_ebuild_environment(trees)
+		action_config(settings, trees, myopts, myfiles)
+
+	# SEARCH action
+	elif "search"==myaction:
+		validate_ebuild_environment(trees)
+		action_search(trees[settings['EROOT']]['root_config'],
+			myopts, myfiles, spinner)
+
+	elif myaction in ('clean', 'depclean', 'deselect', 'prune', 'unmerge'):
+		validate_ebuild_environment(trees)
+		rval = action_uninstall(settings, trees, mtimedb["ldpath"],
+			myopts, myaction, myfiles, spinner)
+		if not (myaction == 'deselect' or
+			buildpkgonly or fetchonly or pretend):
+			post_emerge(myaction, myopts, myfiles, settings['EROOT'],
+				trees, mtimedb, rval)
+		return rval
+
+	elif myaction == 'info':
+
+		# Ensure atoms are valid before calling unmerge().
+		vardb = trees[settings['EROOT']]['vartree'].dbapi
+		portdb = trees[settings['EROOT']]['porttree'].dbapi
+		bindb = trees[settings['EROOT']]["bintree"].dbapi
+		valid_atoms = []
+		for x in myfiles:
+			if is_valid_package_atom(x, allow_repo=True):
+				try:
+					#look at the installed files first, if there is no match
+					#look at the ebuilds, since EAPI 4 allows running pkg_info
+					#on non-installed packages
+					valid_atom = dep_expand(x, mydb=vardb, settings=settings)
+					if valid_atom.cp.split("/")[0] == "null":
+						valid_atom = dep_expand(x,
+							mydb=portdb, settings=settings)
+
+					if valid_atom.cp.split("/")[0] == "null" and \
+						"--usepkg" in myopts:
+						valid_atom = dep_expand(x,
+							mydb=bindb, settings=settings)
+
+					valid_atoms.append(valid_atom)
+
+				except portage.exception.AmbiguousPackageName as e:
+					msg = "The short ebuild name \"" + x + \
+						"\" is ambiguous.  Please specify " + \
+						"one of the following " + \
+						"fully-qualified ebuild names instead:"
+					for line in textwrap.wrap(msg, 70):
+						writemsg_level("!!! %s\n" % (line,),
+							level=logging.ERROR, noiselevel=-1)
+					for i in e.args[0]:
+						writemsg_level("    %s\n" % colorize("INFORM", i),
+							level=logging.ERROR, noiselevel=-1)
+					writemsg_level("\n", level=logging.ERROR, noiselevel=-1)
+					return 1
+				continue
+			msg = []
+			msg.append("'%s' is not a valid package atom." % (x,))
+			msg.append("Please check ebuild(5) for full details.")
+			writemsg_level("".join("!!! %s\n" % line for line in msg),
+				level=logging.ERROR, noiselevel=-1)
+			return 1
+
+		return action_info(settings, trees, myopts, valid_atoms)
+
+	# "update", "system", or just process files:
+	else:
+		validate_ebuild_environment(trees)
+
+		for x in myfiles:
+			if x.startswith(SETPREFIX) or \
+				is_valid_package_atom(x, allow_repo=True):
+				continue
+			if x[:1] == os.sep:
+				continue
+			try:
+				os.lstat(x)
+				continue
+			except OSError:
+				pass
+			msg = []
+			msg.append("'%s' is not a valid package atom." % (x,))
+			msg.append("Please check ebuild(5) for full details.")
+			writemsg_level("".join("!!! %s\n" % line for line in msg),
+				level=logging.ERROR, noiselevel=-1)
+			return 1
+
+		# GLEP 42 says to display news *after* an emerge --pretend
+		if "--pretend" not in myopts:
+			display_news_notification(root_config, myopts)
+		retval = action_build(settings, trees, mtimedb,
+			myopts, myaction, myfiles, spinner, build_dict)
+		post_emerge(myaction, myopts, myfiles, settings['EROOT'],
+			trees, mtimedb, retval)
+
+		return retval

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 498a6d7..28d8352 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -20,609 +20,71 @@ import logging
 from gobs.manifest import gobs_manifest
 from gobs.depclean import main_depclean
 from gobs.flags import gobs_use_flags
-from gobs.depgraph import backtrack_depgraph
 from portage import _encodings
 from portage import _unicode_decode
 from portage.versions import cpv_getkey
 from portage.dep import check_required_use
-import portage.xpak, errno, re, time
-from _emerge.main import parse_opts, profile_check, apply_priorities, repo_name_duplicate_check, \
-	config_protect_check, check_procfs, ensure_required_sets, expand_set_arguments, \
-	validate_ebuild_environment, chk_updated_info_files, display_preserved_libs
-from _emerge.actions import action_config, action_sync, action_metadata, \
-	action_regen, action_search, action_uninstall, \
-	adjust_configs, chk_updated_cfg_files, display_missing_pkg_set, \
-	display_news_notification, getportageversion, load_emerge_config
-from portage.util import cmp_sort_key, writemsg, \
-	writemsg_level, writemsg_stdout, shlex_split
-from _emerge.sync.old_tree_timestamp import old_tree_timestamp_warn
-from _emerge.create_depgraph_params import create_depgraph_params
-from _emerge.DepPrioritySatisfiedRange import DepPrioritySatisfiedRange
-from gobs.Scheduler import Scheduler
-from _emerge.clear_caches import clear_caches
-from _emerge.unmerge import unmerge
-from _emerge.emergelog import emergelog
-from _emerge._flush_elog_mod_echo import _flush_elog_mod_echo
-from portage._global_updates import _global_updates
-from portage._sets import SETPREFIX
-from portage.const import PORTAGE_PACKAGE_ATOM, USER_CONFIG_PATH
-from _emerge.is_valid_package_atom import is_valid_package_atom
-from _emerge.stdout_spinner import stdout_spinner
-from portage.output import blue, bold, colorize, create_color_func, darkgreen, \
-	red, yellow, colorize, xtermTitle, xtermTitleReset
-good = create_color_func("GOOD")
-bad = create_color_func("BAD")
-
-class queruaction(object):
-
-	def __init__(self, config_profile):
-		self._mysettings = portage.config(config_root = "/")
-		self._config_profile = config_profile
-		self._myportdb =  portage.portdb
-
-	def log_fail_queru(self, build_dict, settings):
-		conn=CM.getConnection()
-		print('build_dict', build_dict)
-		fail_querue_dict = get_fail_querue_dict(conn, build_dict)
+from gobs.main import emerge_main
+
+def log_fail_queru(build_dict, settings):
+	config = gobs_settings_dict['gobs_config']
+	conn=CM.getConnection()
+	print('build_dict', build_dict)
+	fail_querue_dict = get_fail_querue_dict(conn, build_dict)
+	print('fail_querue_dict', fail_querue_dict)
+	if fail_querue_dict is None:
+		fail_querue_dict = {}
+		fail_querue_dict['build_job_id'] = build_dict['build_job_id']
+		fail_querue_dict['fail_type'] = build_dict['type_fail']
+		fail_querue_dict['fail_times'] = 1
 		print('fail_querue_dict', fail_querue_dict)
-		if fail_querue_dict is None:
-			fail_querue_dict = {}
+		add_fail_querue_dict(conn, fail_querue_dict)
+	else:
+		if fail_querue_dict['fail_times'][0] < 6:
+			fail_querue_dict['fail_times'] = fail_querue_dict['fail_times'][0] + 1
 			fail_querue_dict['build_job_id'] = build_dict['build_job_id']
 			fail_querue_dict['fail_type'] = build_dict['type_fail']
-			fail_querue_dict['fail_times'] = 1
-			print('fail_querue_dict', fail_querue_dict)
-			add_fail_querue_dict(conn, fail_querue_dict)
+			update_fail_times(conn, fail_querue_dict)
+			CM.putConnection(conn)
+			return
 		else:
-			if fail_querue_dict['fail_times'][0] < 6:
-				fail_querue_dict['fail_times'] = fail_querue_dict['fail_times'][0] + 1
-				fail_querue_dict['build_job_id'] = build_dict['build_job_id']
-				fail_querue_dict['fail_type'] = build_dict['type_fail']
-				update_fail_times(conn, fail_querue_dict)
-				CM.putConnection(conn)
-				return
-			else:
-				build_log_dict = {}
+			build_log_dict = {}
+			error_log_list = []
+			qa_error_list = []
+			repoman_error_list = []
+			sum_build_log_list = []
+			sum_build_log_list.append("fail")
+			error_log_list.append(build_dict['type_fail'])
+			build_log_dict['repoman_error_list'] = repoman_error_list
+			build_log_dict['qa_error_list'] = qa_error_list
+			build_log_dict['summary_error_list'] = sum_build_log_list
+			if build_dict['type_fail'] == 'merge fail':
 				error_log_list = []
-				qa_error_list = []
-				repoman_error_list = []
-				sum_build_log_list = []
-				sum_build_log_list.append("fail")
-				error_log_list.append(build_dict['type_fail'])
-				build_log_dict['repoman_error_list'] = repoman_error_list
-				build_log_dict['qa_error_list'] = qa_error_list
-				build_log_dict['summary_error_list'] = sum_build_log_list
-				if build_dict['type_fail'] == 'merge fail':
-					error_log_list = []
-					for k, v in build_dict['failed_merge'].iteritems():
-						error_log_list.append(v['fail_msg'])
-				build_log_dict['error_log_list'] = error_log_list
-				build_error = ""
-				if error_log_list != []:
-					for log_line in error_log_list:
-						build_error = build_error + log_line
-				summary_error = ""
-				if sum_build_log_list != []:
-					for sum_log_line in sum_build_log_list:
-						summary_error = summary_error + " " + sum_log_line
-				if settings.get("PORTAGE_LOG_FILE") is not None:
-					build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(self._config_profile)[1]
-					os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
-				else:
-					build_log_dict['logfilename'] = ""
-				move_queru_buildlog(conn, build_dict['build_job_id'], build_error, summary_error, build_log_dict)
-		CM.putConnection(conn)
-
-	def action_build(self, settings, trees, mtimedb, myopts, myaction, myfiles, spinner, build_dict):
-
-		if '--usepkgonly' not in myopts:
-			old_tree_timestamp_warn(settings['PORTDIR'], settings)
-
-		# It's best for config updates in /etc/portage to be processed
-		# before we get here, so warn if they're not (bug #267103).
-		chk_updated_cfg_files(settings['EROOT'], ['/etc/portage'])
-
-		resume = False
-
-		ldpath_mtimes = mtimedb["ldpath"]
-		favorites=[]
-		buildpkgonly = "--buildpkgonly" in myopts
-		pretend = "--pretend" in myopts
-		fetchonly = "--fetchonly" in myopts or "--fetch-all-uri" in myopts
-		ask = "--ask" in myopts
-		enter_invalid = '--ask-enter-invalid' in myopts
-		nodeps = "--nodeps" in myopts
-		oneshot = "--oneshot" in myopts or "--onlydeps" in myopts
-		tree = "--tree" in myopts
-		if nodeps and tree:
-			tree = False
-			del myopts["--tree"]
-			portage.writemsg(colorize("WARN", " * ") + \
-				"--tree is broken with --nodeps. Disabling...\n")
-		debug = "--debug" in myopts
-		verbose = "--verbose" in myopts
-		quiet = "--quiet" in myopts
-
-		myparams = create_depgraph_params(myopts, myaction)
-		try:
-			success, mydepgraph, favorites = backtrack_depgraph(
-				settings, trees, myopts, myparams, myaction, myfiles, spinner)
-		except portage.exception.PackageSetNotFound as e:
-			root_config = trees[settings["ROOT"]]["root_config"]
-			display_missing_pkg_set(root_config, e.value)
-			build_dict['type_fail'] = "depgraph fail"
-		if not success:
-			if mydepgraph._dynamic_config._needed_p_mask_changes:
-				build_dict['type_fail'] = "Mask packages"
-				build_dict['check_fail'] = True
-				mydepgraph.display_problems()
-				self.log_fail_queru(build_dict, settings)
-				return 1, settings, trees, mtimedb
-			if mydepgraph._dynamic_config._needed_use_config_changes:
-				repeat = True
-				repeat_times = 0
-				while repeat:
-					mydepgraph._display_autounmask()
-					settings, trees, mtimedb = load_emerge_config()
-					myparams = create_depgraph_params(myopts, myaction)
-					try:
-						success, mydepgraph, favorites = backtrack_depgraph(
-						settings, trees, myopts, myparams, myaction, myfiles, spinner)
-					except portage.exception.PackageSetNotFound as e:
-						root_config = trees[settings["ROOT"]]["root_config"]
-						display_missing_pkg_set(root_config, e.value)
-					if not success and mydepgraph._dynamic_config._needed_use_config_changes:
-						print("repaet_times:", repeat_times)
-						if repeat_times is 2:
-							build_dict['type_fail'] = "Need use change"
-							build_dict['check_fail'] = True
-							mydepgraph.display_problems()
-							repeat = False
-						else:
-							repeat_times = repeat_times + 1
-					else:
-						repeat = False
-
-			if mydepgraph._dynamic_config._unsolvable_blockers:
-				mydepgraph.display_problems()
-				build_dict['type_fail'] = "Blocking packages"
-				build_dict['check_fail'] = True
-				self.log_fail_queru(build_dict, settings)
-				return 1, settings, trees, mtimedb
-
-			if mydepgraph._dynamic_config._slot_collision_info:
-				mydepgraph.display_problems()
-				build_dict['type_fail'] = "Slot blocking"
-				build_dict['check_fail'] = True
-				self.log_fail_queru(build_dict, settings)
-				return 1, settings, trees, mtimedb
-
-			if not success:
-				build_dict['type_fail'] = "Dep calc fail"
-				build_dict['check_fail'] = True
-				mydepgraph.display_problems()
-
-		if build_dict['check_fail'] is True:
-				self.log_fail_queru(build_dict, settings)
-				return 1, settings, trees, mtimedb
-
-		if "--buildpkgonly" in myopts:
-			graph_copy = mydepgraph._dynamic_config.digraph.copy()
-			removed_nodes = set()
-			for node in graph_copy:
-				if not isinstance(node, Package) or \
-					node.operation == "nomerge":
-					removed_nodes.add(node)
-			graph_copy.difference_update(removed_nodes)
-			if not graph_copy.hasallzeros(ignore_priority = \
-				DepPrioritySatisfiedRange.ignore_medium):
-				logging.info("\n!!! --buildpkgonly requires all dependencies to be merged.")
-				logging.info("!!! Cannot merge requested packages. Merge deps and try again.\n")
-				return 1, settings, trees, mtimedb
-
-		mydepgraph.saveNomergeFavorites()
-
-		mergetask = Scheduler(settings, trees, mtimedb, myopts,
-			spinner, favorites=favorites,
-			graph_config=mydepgraph.schedulerGraph())
-
-		del mydepgraph
-		clear_caches(trees)
-
-		retval = mergetask.merge()
-		conn=CM.getConnection()
-		log_msg = "mergetask.merge retval: %s" % retval
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		CM.putConnection(conn)
-		if retval:
-			build_dict['type_fail'] = 'merge fail'
-			build_dict['check_fail'] = True
-			attict = {}
-			failed_pkgs_dict = {}
-			for x in mergetask._failed_pkgs_all:
-				attict['fail_msg'] = str(x.pkg)[0] + ' ' + str(x.pkg)[1] + ' ' + re.sub("\/var\/log\/portage\/", "", mergetask._locate_failure_log(x))
-				failed_pkgs_dict[str(x.pkg.cpv)] = attict
-			build_dict['failed_merge'] = failed_pkgs_dict
-			self.log_fail_queru(build_dict, settings)
-		if retval == os.EX_OK and not (buildpkgonly or fetchonly or pretend):
-			if "yes" == settings.get("AUTOCLEAN"):
-				portage.writemsg_stdout(">>> Auto-cleaning packages...\n")
-				unmerge(trees[settings["ROOT"]]["root_config"],
-					myopts, "clean", [],
-					ldpath_mtimes, autoclean=1)
+				for k, v in build_dict['failed_merge'].iteritems():
+					error_log_list.append(v['fail_msg'])
+			build_log_dict['error_log_list'] = error_log_list
+			build_error = ""
+			if error_log_list != []:
+				for log_line in error_log_list:
+					build_error = build_error + log_line
+			summary_error = ""
+			if sum_build_log_list != []:
+				for sum_log_line in sum_build_log_list:
+					summary_error = summary_error + " " + sum_log_line
+			if settings.get("PORTAGE_LOG_FILE") is not None:
+				build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config)[1]
+				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
 			else:
-				portage.writemsg_stdout(colorize("WARN", "WARNING:")
-					+ " AUTOCLEAN is disabled.  This can cause serious"
-					+ " problems due to overlapping packages.\n")
-
-		return retval, settings, trees, mtimedb
-
-	def post_emerge(self, myaction, myopts, myfiles, target_root, trees, mtimedb, retval):
-
-		root_config = trees[target_root]["root_config"]
-		vardbapi = trees[target_root]["vartree"].dbapi
-		settings = vardbapi.settings
-		info_mtimes = mtimedb["info"]
-
-		# Load the most current variables from ${ROOT}/etc/profile.env
-		settings.unlock()
-		settings.reload()
-		settings.regenerate()
-		settings.lock()
-
-		config_protect = shlex_split(settings.get("CONFIG_PROTECT", ""))
-		infodirs = settings.get("INFOPATH","").split(":") + \
-			settings.get("INFODIR","").split(":")
-
-		os.chdir("/")
-
-		if retval == os.EX_OK:
-			exit_msg = " *** exiting successfully."
-		else:
-			exit_msg = " *** exiting unsuccessfully with status '%s'." % retval
-		emergelog("notitles" not in settings.features, exit_msg)
-
-		_flush_elog_mod_echo()
-
-		if not vardbapi._pkgs_changed:
-			display_news_notification(root_config, myopts)
-			# If vdb state has not changed then there's nothing else to do.
-			return
-
-		vdb_path = os.path.join(root_config.settings['EROOT'], portage.VDB_PATH)
-		portage.util.ensure_dirs(vdb_path)
-		vdb_lock = None
-		if os.access(vdb_path, os.W_OK) and not "--pretend" in myopts:
-			vardbapi.lock()
-			vdb_lock = True
-
-		if vdb_lock:
-			try:
-				if "noinfo" not in settings.features:
-					chk_updated_info_files(target_root,
-						infodirs, info_mtimes, retval)
-				mtimedb.commit()
-			finally:
-				if vdb_lock:
-					vardbapi.unlock()
-
-		chk_updated_cfg_files(settings['EROOT'], config_protect)
-
-		display_news_notification(root_config, myopts)
-		if retval in (None, os.EX_OK) or (not "--pretend" in myopts):
-			display_preserved_libs(vardbapi, myopts)	
+				build_log_dict['logfilename'] = ""
+			move_queru_buildlog(conn, build_dict['build_job_id'], build_error, summary_error, build_log_dict)
+	CM.putConnection(conn)
 
-		postemerge = os.path.join(settings["PORTAGE_CONFIGROOT"],
-			portage.USER_CONFIG_PATH, "bin", "post_emerge")
-		if os.access(postemerge, os.X_OK):
-			hook_retval = portage.process.spawn(
-						[postemerge], env=settings.environ())
-			if hook_retval != os.EX_OK:
-				writemsg_level(
-					" %s spawn failed of %s\n" % (bad("*"), postemerge,),
-					level=logging.ERROR, noiselevel=-1)
-
-		if "--quiet" not in myopts and \
-			myaction is None and "@world" in myfiles:
-			show_depclean_suggestion()
-
-		return
-
-	def emerge_main(self, args, build_dict):
-
-		portage._disable_legacy_globals()
-		portage.dep._internal_warnings = True
-		# Disable color until we're sure that it should be enabled (after
-		# EMERGE_DEFAULT_OPTS has been parsed).
-		portage.output.havecolor = 0
-		# This first pass is just for options that need to be known as early as
-		# possible, such as --config-root.  They will be parsed again later,
-		# together with EMERGE_DEFAULT_OPTS (which may vary depending on the
-		# the value of --config-root).
-		myaction, myopts, myfiles = parse_opts(args, silent=True)
-		if "--debug" in myopts:
-			os.environ["PORTAGE_DEBUG"] = "1"
-		if "--config-root" in myopts:
-			os.environ["PORTAGE_CONFIGROOT"] = myopts["--config-root"]
-		if "--root" in myopts:
-			os.environ["ROOT"] = myopts["--root"]
-		if "--accept-properties" in myopts:
-			os.environ["ACCEPT_PROPERTIES"] = myopts["--accept-properties"]
-
-		# Portage needs to ensure a sane umask for the files it creates.
-		os.umask(0o22)
-		settings, trees, mtimedb = load_emerge_config()
-		portdb = trees[settings["ROOT"]]["porttree"].dbapi
-		rval = profile_check(trees, myaction)
-		if rval != os.EX_OK:
-			return rval
-
-		tmpcmdline = []
-		if "--ignore-default-opts" not in myopts:
-			tmpcmdline.extend(settings["EMERGE_DEFAULT_OPTS"].split())
-		tmpcmdline.extend(args)
-		myaction, myopts, myfiles = parse_opts(tmpcmdline)
-
-		if myaction not in ('help', 'info', 'version') and \
-			myopts.get('--package-moves') != 'n' and \
-			_global_updates(trees, mtimedb["updates"], quiet=("--quiet" in myopts)):
-			mtimedb.commit()
-			# Reload the whole config from scratch.
-			settings, trees, mtimedb = load_emerge_config(trees=trees)
-			portdb = trees[settings["ROOT"]]["porttree"].dbapi
-
-		xterm_titles = "notitles" not in settings.features
-		if xterm_titles:
-			xtermTitle("emerge")
-
-		adjust_configs(myopts, trees)
-		apply_priorities(settings)
-
-		spinner = stdout_spinner()
-		if "candy" in settings.features:
-			spinner.update = spinner.update_scroll
-
-		if "--quiet" not in myopts:
-			portage.deprecated_profile_check(settings=settings)
-			if portage.const._ENABLE_REPO_NAME_WARN:
-				# Bug #248603 - Disable warnings about missing
-				# repo_name entries for stable branch.
-				repo_name_check(trees)
-			repo_name_duplicate_check(trees)
-			config_protect_check(trees)
-		check_procfs()
-
-		if "getbinpkg" in settings.features:
-			myopts["--getbinpkg"] = True
-
-		if "--getbinpkgonly" in myopts:
-			myopts["--getbinpkg"] = True
-
-		if "--getbinpkgonly" in myopts:
-			myopts["--usepkgonly"] = True
-
-		if "--getbinpkg" in myopts:
-			myopts["--usepkg"] = True
-
-		if "--usepkgonly" in myopts:
-			myopts["--usepkg"] = True
-
-		if "buildpkg" in settings.features or "--buildpkgonly" in myopts:
-			myopts["--buildpkg"] = True
-
-		if "--buildpkgonly" in myopts:
-			# --buildpkgonly will not merge anything, so
-			# it cancels all binary package options.
-			for opt in ("--getbinpkg", "--getbinpkgonly",
-				"--usepkg", "--usepkgonly"):
-				myopts.pop(opt, None)
-
-		for mytrees in trees.values():
-			mydb = mytrees["porttree"].dbapi
-			# Freeze the portdbapi for performance (memoize all xmatch results).
-			mydb.freeze()
-
-			if myaction in ('search', None) and \
-				"--usepkg" in myopts:
-				# Populate the bintree with current --getbinpkg setting.
-				# This needs to happen before expand_set_arguments(), in case
-				# any sets use the bintree.
-				mytrees["bintree"].populate(
-					getbinpkgs="--getbinpkg" in myopts)
-
-		del mytrees, mydb
-
-		for x in myfiles:
-			ext = os.path.splitext(x)[1]
-			if (ext == ".ebuild" or ext == ".tbz2") and os.path.exists(os.path.abspath(x)):
-				logging.info("BAD\n*** emerging by path is broken and may not always work!!!\n")
-				break
-
-		root_config = trees[settings["ROOT"]]["root_config"]
-		if myaction == "list-sets":
-			writemsg_stdout("".join("%s\n" % s for s in sorted(root_config.sets)))
-			return os.EX_OK
-
-		ensure_required_sets(trees)
-
-		# only expand sets for actions taking package arguments
-		oldargs = myfiles[:]
-		if myaction in ("clean", "config", "depclean", "info", "prune", "unmerge", None):
-			myfiles, retval = expand_set_arguments(myfiles, myaction, root_config)
-			if retval != os.EX_OK:
-				return retval
-
-			# Need to handle empty sets specially, otherwise emerge will react 
-			# with the help message for empty argument lists
-			if oldargs and not myfiles:
-				logging.info("emerge: no targets left after set expansion")
-				return 0
-
-		if ("--tree" in myopts) and ("--columns" in myopts):
-			logging.info("emerge: can't specify both of \"--tree\" and \"--columns\".")
-			return 1
-
-		if '--emptytree' in myopts and '--noreplace' in myopts:
-			writemsg_level("emerge: can't specify both of " + \
-				"\"--emptytree\" and \"--noreplace\".\n",
-				level=logging.ERROR, noiselevel=-1)
-			return 1
-
-		if ("--quiet" in myopts):
-			spinner.update = spinner.update_quiet
-			portage.util.noiselimit = -1
-
-		if "--fetch-all-uri" in myopts:
-			myopts["--fetchonly"] = True
-
-		if "--skipfirst" in myopts and "--resume" not in myopts:
-			myopts["--resume"] = True
-
-		# Allow -p to remove --ask
-		if "--pretend" in myopts:
-			myopts.pop("--ask", None)
-
-		# forbid --ask when not in a terminal
-		# note: this breaks `emerge --ask | tee logfile`, but that doesn't work anyway.
-		if ("--ask" in myopts) and (not sys.stdin.isatty()):
-			portage.writemsg("!!! \"--ask\" should only be used in a terminal. Exiting.\n",
-				noiselevel=-1)
-			return 1
-
-		if settings.get("PORTAGE_DEBUG", "") == "1":
-			spinner.update = spinner.update_quiet
-			portage.util.noiselimit = 0
-			if "python-trace" in settings.features:
-				import portage.debug as portage_debug
-				portage_debug.set_trace(True)
-
-		if not ("--quiet" in myopts):
-			if '--nospinner' in myopts or \
-				settings.get('TERM') == 'dumb' or \
-				not sys.stdout.isatty():
-				spinner.update = spinner.update_basic
-
-		if "--debug" in myopts:
-			print("myaction", myaction)
-			print("myopts", myopts)
-
-		pretend = "--pretend" in myopts
-		fetchonly = "--fetchonly" in myopts or "--fetch-all-uri" in myopts
-		buildpkgonly = "--buildpkgonly" in myopts
-
-		# check if root user is the current user for the actions where emerge needs this
-		if portage.secpass < 2:
-			# We've already allowed "--version" and "--help" above.
-			if "--pretend" not in myopts and myaction not in ("search","info"):
-				need_superuser = myaction in ('clean', 'depclean', 'deselect',
-					'prune', 'unmerge') or not \
-					(fetchonly or \
-					(buildpkgonly and secpass >= 1) or \
-					myaction in ("metadata", "regen", "sync"))
-				if portage.secpass < 1 or \
-					need_superuser:
-					if need_superuser:
-						access_desc = "superuser"
-					else:
-						access_desc = "portage group"
-					# Always show portage_group_warning() when only portage group
-					# access is required but the user is not in the portage group.
-					from portage.data import portage_group_warning
-					if "--ask" in myopts:
-						myopts["--pretend"] = True
-						del myopts["--ask"]
-						print(("%s access is required... " + \
-							"adding --pretend to options\n") % access_desc)
-						if portage.secpass < 1 and not need_superuser:
-							portage_group_warning()
-					else:
-						sys.stderr.write(("emerge: %s access is required\n") \
-							% access_desc)
-						if portage.secpass < 1 and not need_superuser:
-							portage_group_warning()
-						return 1
-
-		disable_emergelog = False
-		if disable_emergelog:
-			""" Disable emergelog for everything except build or unmerge
-			operations.  This helps minimize parallel emerge.log entries that can
-			confuse log parsers.  We especially want it disabled during
-			parallel-fetch, which uses --resume --fetchonly."""
-			_emerge.emergelog._disable = True
-
-		else:
-			if 'EMERGE_LOG_DIR' in settings:
-				try:
-					# At least the parent needs to exist for the lock file.
-					portage.util.ensure_dirs(settings['EMERGE_LOG_DIR'])
-				except portage.exception.PortageException as e:
-					writemsg_level("!!! Error creating directory for " + \
-						"EMERGE_LOG_DIR='%s':\n!!! %s\n" % \
-						(settings['EMERGE_LOG_DIR'], e),
-						noiselevel=-1, level=logging.ERROR)
-				else:
-					global _emerge_log_dir
-					_emerge_log_dir = settings['EMERGE_LOG_DIR']
-
-		if not "--pretend" in myopts:
-			emergelog(xterm_titles, "Started emerge on: "+\
-				_unicode_decode(
-					time.strftime("%b %d, %Y %H:%M:%S", time.localtime()),
-					encoding=_encodings['content'], errors='replace'))
-			myelogstr=""
-			if myopts:
-				myelogstr=" ".join(myopts)
-			if myaction:
-				myelogstr+=" "+myaction
-			if myfiles:
-				myelogstr += " " + " ".join(oldargs)
-			emergelog(xterm_titles, " *** emerge " + myelogstr)
-		del oldargs
-
-		def emergeexitsig(signum, frame):
-			signal.signal(signal.SIGINT, signal.SIG_IGN)
-			signal.signal(signal.SIGTERM, signal.SIG_IGN)
-			portage.util.writemsg("\n\nExiting on signal %(signal)s\n" % {"signal":signum})
-			sys.exit(128 + signum)
-		signal.signal(signal.SIGINT, emergeexitsig)
-		signal.signal(signal.SIGTERM, emergeexitsig)
-
-		def emergeexit():
-			"""This gets out final log message in before we quit."""
-			if "--pretend" not in myopts:
-				emergelog(xterm_titles, " *** terminating.")
-			if xterm_titles:
-				xtermTitleReset()
-		portage.atexit_register(emergeexit)
-
-
-		# "update", "system", or just process files
-		validate_ebuild_environment(trees)
-
-		for x in myfiles:
-			if x.startswith(SETPREFIX) or \
-				is_valid_package_atom(x, allow_repo=True):
-				continue
-			if x[:1] == os.sep:
-				continue
-			try:
-				os.lstat(x)
-				continue
-			except OSError:
-				pass
-			msg = []
-			msg.append("'%s' is not a valid package atom." % (x,))
-			msg.append("Please check ebuild(5) for full details.")
-			writemsg_level("".join("!!! %s\n" % line for line in msg),
-				level=logging.ERROR, noiselevel=-1)
-			return 1
-		if "--pretend" not in myopts:
-			display_news_notification(root_config, myopts)
-		retval, settings, trees, mtimedb = self.action_build(settings, trees, mtimedb,
-			myopts, myaction, myfiles, spinner, build_dict)
-		self.post_emerge(myaction, myopts, myfiles, settings["ROOT"],
-			trees, mtimedb, retval)
+class queruaction(object):
 
-		return retval
+	def __init__(self, config_profile):
+		self._mysettings = portage.config(config_root = "/")
+		self._config_profile = config_profile
+		self._myportdb =  portage.portdb
 
 	def make_build_list(self, build_dict, settings, portdb):
 		conn=CM.getConnection()
@@ -658,7 +120,7 @@ class queruaction(object):
 			build_dict['type_fail'] = "Wrong ebuild checksum"
 			build_dict['check_fail'] = True
 		if build_dict['check_fail'] is True:
-				self.log_fail_queru(build_dict, settings)
+				log_fail_queru(build_dict, settings)
 				CM.putConnection(conn)
 				return None
 		CM.putConnection(conn)
@@ -694,7 +156,7 @@ class queruaction(object):
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 		# Call main_emerge to build the package in build_cpv_list
 		print("Build: %s", build_dict)
-		build_fail = self.emerge_main(argscmd, build_dict)
+		build_fail = emerge_main(argscmd, build_dict)
 		# Run depclean
 		log_msg = "build_fail: %s" % (build_fail,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
@@ -714,7 +176,7 @@ class queruaction(object):
 			print("query was not removed")
 			build_dict['type_fail'] = "Query was not removed"
 			build_dict['check_fail'] = True
-			self.log_fail_queru(build_dict, settings)
+			log_fail_queru(build_dict, settings)
 		if build_fail is False or depclean_fail is False:
 			CM.putConnection(conn)
 			return False

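The hunks above show the connection discipline used throughout gobs: every database entry point borrows a pooled connection via CM.getConnection() and must hand it back with CM.putConnection(conn) on every exit path. A minimal sketch of that contract, assuming a psycopg2-style pool (the constructor arguments here are illustrative; the real connectionManager is built from gobs_settings_dict and is not part of this patch):

# Sketch only: gobs' real connectionManager is not shown in this patch,
# so the psycopg2 pool below is an assumption based on how CM is used.
from psycopg2 import pool

class connectionManager(object):
	def __init__(self, dsn, minconn=1, maxconn=5):
		self._pool = pool.ThreadedConnectionPool(minconn, maxconn, dsn)

	def getConnection(self):
		return self._pool.getconn()

	def putConnection(self, conn):
		self._pool.putconn(conn)

# Wrapping the body in try/finally guarantees the connection goes back to
# the pool even when a query raises; the code above returns it manually on
# every exit path instead, which can leak a connection on an exception.
def with_connection(CM, work):
	conn = CM.getConnection()
	try:
		return work(conn)
	finally:
		CM.putConnection(conn)
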
diff --git a/gobs/pym/main.py b/gobs/pym/main.py
new file mode 100644
index 0000000..f8b5047
--- /dev/null
+++ b/gobs/pym/main.py
@@ -0,0 +1,1021 @@
+# Copyright 1999-2012 Gentoo Foundation
+# Distributed under the terms of the GNU General Public License v2
+
+from __future__ import print_function
+
+import platform
+import sys
+
+import portage
+portage.proxy.lazyimport.lazyimport(globals(),
+	'logging',
+	'portage.util:writemsg_level',
+	'textwrap',
+	'gobs.actions:load_emerge_config,run_action,' + \
+		'validate_ebuild_environment',
+	'_emerge.help:help@emerge_help',
+)
+from portage import os
+
+if sys.hexversion >= 0x3000000:
+	long = int
+
+options=[
+"--alphabetical",
+"--ask-enter-invalid",
+"--buildpkgonly",
+"--changed-use",
+"--changelog",    "--columns",
+"--debug",
+"--digest",
+"--emptytree",
+"--fetchonly",    "--fetch-all-uri",
+"--ignore-default-opts",
+"--noconfmem",
+"--newuse",
+"--nodeps",       "--noreplace",
+"--nospinner",    "--oneshot",
+"--onlydeps",     "--pretend",
+"--quiet-repo-display",
+"--quiet-unmerge-warn",
+"--resume",
+"--searchdesc",
+"--skipfirst",
+"--tree",
+"--unordered-display",
+"--update",
+"--verbose",
+"--verbose-main-repo-display",
+]
+
+shortmapping={
+"1":"--oneshot",
+"B":"--buildpkgonly",
+"c":"--depclean",
+"C":"--unmerge",
+"d":"--debug",
+"e":"--emptytree",
+"f":"--fetchonly", "F":"--fetch-all-uri",
+"h":"--help",
+"l":"--changelog",
+"n":"--noreplace", "N":"--newuse",
+"o":"--onlydeps",  "O":"--nodeps",
+"p":"--pretend",   "P":"--prune",
+"r":"--resume",
+"s":"--search",    "S":"--searchdesc",
+"t":"--tree",
+"u":"--update",
+"v":"--verbose",   "V":"--version"
+}
+
+COWSAY_MOO = """
+
+  Larry loves Gentoo (%s)
+
+ _______________________
+< Have you mooed today? >
+ -----------------------
+        \   ^__^
+         \  (oo)\_______
+            (__)\       )\/\ 
+                ||----w |
+                ||     ||
+
+"""
+
+def multiple_actions(action1, action2):
+	sys.stderr.write("\n!!! Multiple actions requested... Please choose one only.\n")
+	sys.stderr.write("!!! '%s' or '%s'\n\n" % (action1, action2))
+	sys.exit(1)
+
+def insert_optional_args(args):
+	"""
+	Parse optional arguments and insert a value if one has
+	not been provided. This is done before feeding the args
+	to the optparse parser since that parser does not support
+	this feature natively.
+	"""
+
+	class valid_integers(object):
+		def __contains__(self, s):
+			try:
+				return int(s) >= 0
+			except (ValueError, OverflowError):
+				return False
+
+	valid_integers = valid_integers()
+
+	class valid_floats(object):
+		def __contains__(self, s):
+			try:
+				return float(s) >= 0
+			except (ValueError, OverflowError):
+				return False
+
+	valid_floats = valid_floats()
+
+	y_or_n = ('y', 'n',)
+
+	new_args = []
+
+	default_arg_opts = {
+		'--ask'                  : y_or_n,
+		'--autounmask'           : y_or_n,
+		'--autounmask-keep-masks': y_or_n,
+		'--autounmask-unrestricted-atoms' : y_or_n,
+		'--autounmask-write'     : y_or_n,
+		'--buildpkg'             : y_or_n,
+		'--complete-graph'       : y_or_n,
+		'--deep'       : valid_integers,
+		'--depclean-lib-check'   : y_or_n,
+		'--deselect'             : y_or_n,
+		'--binpkg-respect-use'   : y_or_n,
+		'--fail-clean'           : y_or_n,
+		'--getbinpkg'            : y_or_n,
+		'--getbinpkgonly'        : y_or_n,
+		'--jobs'       : valid_integers,
+		'--keep-going'           : y_or_n,
+		'--load-average'         : valid_floats,
+		'--package-moves'        : y_or_n,
+		'--quiet'                : y_or_n,
+		'--quiet-build'          : y_or_n,
+		'--rebuild-if-new-slot': y_or_n,
+		'--rebuild-if-new-rev'   : y_or_n,
+		'--rebuild-if-new-ver'   : y_or_n,
+		'--rebuild-if-unbuilt'   : y_or_n,
+		'--rebuilt-binaries'     : y_or_n,
+		'--root-deps'  : ('rdeps',),
+		'--select'               : y_or_n,
+		'--selective'            : y_or_n,
+		"--use-ebuild-visibility": y_or_n,
+		'--usepkg'               : y_or_n,
+		'--usepkgonly'           : y_or_n,
+	}
+
+	short_arg_opts = {
+		'D' : valid_integers,
+		'j' : valid_integers,
+	}
+
+	# Don't make things like "-kn" expand to "-k n"
+	# since existence of -n makes it too ambiguous.
+	short_arg_opts_n = {
+		'a' : y_or_n,
+		'b' : y_or_n,
+		'g' : y_or_n,
+		'G' : y_or_n,
+		'k' : y_or_n,
+		'K' : y_or_n,
+		'q' : y_or_n,
+	}
+
+	arg_stack = args[:]
+	arg_stack.reverse()
+	while arg_stack:
+		arg = arg_stack.pop()
+
+		default_arg_choices = default_arg_opts.get(arg)
+		if default_arg_choices is not None:
+			new_args.append(arg)
+			if arg_stack and arg_stack[-1] in default_arg_choices:
+				new_args.append(arg_stack.pop())
+			else:
+				# insert default argument
+				new_args.append('True')
+			continue
+
+		if arg[:1] != "-" or arg[:2] == "--":
+			new_args.append(arg)
+			continue
+
+		match = None
+		for k, arg_choices in short_arg_opts.items():
+			if k in arg:
+				match = k
+				break
+
+		if match is None:
+			for k, arg_choices in short_arg_opts_n.items():
+				if k in arg:
+					match = k
+					break
+
+		if match is None:
+			new_args.append(arg)
+			continue
+
+		if len(arg) == 2:
+			new_args.append(arg)
+			if arg_stack and arg_stack[-1] in arg_choices:
+				new_args.append(arg_stack.pop())
+			else:
+				# insert default argument
+				new_args.append('True')
+			continue
+
+		# Insert an empty placeholder in order to
+		# satisfy the requirements of optparse.
+
+		new_args.append("-" + match)
+		opt_arg = None
+		saved_opts = None
+
+		if arg[1:2] == match:
+			if match not in short_arg_opts_n and arg[2:] in arg_choices:
+				opt_arg = arg[2:]
+			else:
+				saved_opts = arg[2:]
+				opt_arg = "True"
+		else:
+			saved_opts = arg[1:].replace(match, "")
+			opt_arg = "True"
+
+		if opt_arg is None and arg_stack and \
+			arg_stack[-1] in arg_choices:
+			opt_arg = arg_stack.pop()
+
+		if opt_arg is None:
+			new_args.append("True")
+		else:
+			new_args.append(opt_arg)
+
+		if saved_opts is not None:
+			# Recycle these on arg_stack since they
+			# might contain another match.
+			arg_stack.append("-" + saved_opts)
+
+	return new_args
+
+def _find_bad_atoms(atoms, less_strict=False):
+	"""
+	Declares all atoms as invalid that have an operator,
+	a use dependency, a blocker or a repo spec.
+	It accepts atoms with wildcards.
+	In less_strict mode it accepts operators and repo specs.
+	"""
+	bad_atoms = []
+	for x in ' '.join(atoms).split():
+		bad_atom = False
+		try:
+			atom = portage.dep.Atom(x, allow_wildcard=True, allow_repo=less_strict)
+		except portage.exception.InvalidAtom:
+			try:
+				atom = portage.dep.Atom("*/"+x, allow_wildcard=True, allow_repo=less_strict)
+			except portage.exception.InvalidAtom:
+				bad_atom = True
+
+		if bad_atom or (atom.operator and not less_strict) or atom.blocker or atom.use:
+			bad_atoms.append(x)
+	return bad_atoms
+
+
+def parse_opts(tmpcmdline, silent=False):
+	myaction=None
+	myopts = {}
+	myfiles=[]
+
+	actions = frozenset([
+		"clean", "check-news", "config", "depclean", "help",
+		"info", "list-sets", "metadata", "moo",
+		"prune", "regen",  "search",
+		"sync",  "unmerge", "version",
+	])
+
+	longopt_aliases = {"--cols":"--columns", "--skip-first":"--skipfirst"}
+	y_or_n = ("y", "n")
+	true_y_or_n = ("True", "y", "n")
+	true_y = ("True", "y")
+	argument_options = {
+
+		"--ask": {
+			"shortopt" : "-a",
+			"help"    : "prompt before performing any actions",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--autounmask": {
+			"help"    : "automatically unmask packages",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--autounmask-unrestricted-atoms": {
+			"help"    : "write autounmask changes with >= atoms if possible",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--autounmask-keep-masks": {
+			"help"    : "don't add package.unmask entries",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--autounmask-write": {
+			"help"    : "write changes made by --autounmask to disk",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--accept-properties": {
+			"help":"temporarily override ACCEPT_PROPERTIES",
+			"action":"store"
+		},
+
+		"--backtrack": {
+
+			"help"   : "Specifies how many times to backtrack if dependency " + \
+				"calculation fails ",
+
+			"action" : "store"
+		},
+
+		"--buildpkg": {
+			"shortopt" : "-b",
+			"help"     : "build binary packages",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--buildpkg-exclude": {
+			"help"   :"A space separated list of package atoms for which " + \
+				"no binary packages should be built. This option overrides all " + \
+				"possible ways to enable building of binary packages.",
+
+			"action" : "append"
+		},
+
+		"--config-root": {
+			"help":"specify the location for portage configuration files",
+			"action":"store"
+		},
+		"--color": {
+			"help":"enable or disable color output",
+			"type":"choice",
+			"choices":("y", "n")
+		},
+
+		"--complete-graph": {
+			"help"    : "completely account for all known dependencies",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--complete-graph-if-new-use": {
+			"help"    : "trigger --complete-graph behavior if USE or IUSE will change for an installed package",
+			"type"    : "choice",
+			"choices" : y_or_n
+		},
+
+		"--complete-graph-if-new-ver": {
+			"help"    : "trigger --complete-graph behavior if an installed package version will change (upgrade or downgrade)",
+			"type"    : "choice",
+			"choices" : y_or_n
+		},
+
+		"--deep": {
+
+			"shortopt" : "-D",
+
+			"help"   : "Specifies how deep to recurse into dependencies " + \
+				"of packages given as arguments. If no argument is given, " + \
+				"depth is unlimited. Default behavior is to skip " + \
+				"dependencies of installed packages.",
+
+			"action" : "store"
+		},
+
+		"--depclean-lib-check": {
+			"help"    : "check for consumers of libraries before removing them",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--deselect": {
+			"help"    : "remove atoms/sets from the world file",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--dynamic-deps": {
+			"help": "substitute the dependencies of installed packages with the dependencies of unbuilt ebuilds",
+			"type": "choice",
+			"choices": y_or_n
+		},
+
+		"--exclude": {
+			"help"   :"A space separated list of package names or slot atoms. " + \
+				"Emerge won't  install any ebuild or binary package that " + \
+				"matches any of the given package atoms.",
+
+			"action" : "append"
+		},
+
+		"--fail-clean": {
+			"help"    : "clean temp files after build failure",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--ignore-built-slot-operator-deps": {
+			"help": "Ignore the slot/sub-slot := operator parts of dependencies that have "
+				"been recorded when packages were built. This option is intended "
+				"only for debugging purposes, and it only affects built packages "
+				"that specify slot/sub-slot := operator dependencies using the "
+				"experimental \"4-slot-abi\" EAPI.",
+			"type": "choice",
+			"choices": y_or_n
+		},
+
+		"--jobs": {
+
+			"shortopt" : "-j",
+
+			"help"   : "Specifies the number of packages to build " + \
+				"simultaneously.",
+
+			"action" : "store"
+		},
+
+		"--keep-going": {
+			"help"    : "continue as much as possible after an error",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--load-average": {
+
+			"help"   :"Specifies that no new builds should be started " + \
+				"if there are other builds running and the load average " + \
+				"is at least LOAD (a floating-point number).",
+
+			"action" : "store"
+		},
+
+		"--misspell-suggestions": {
+			"help"    : "enable package name misspell suggestions",
+			"type"    : "choice",
+			"choices" : ("y", "n")
+		},
+
+		"--with-bdeps": {
+			"help":"include unnecessary build time dependencies",
+			"type":"choice",
+			"choices":("y", "n")
+		},
+		"--reinstall": {
+			"help":"specify conditions to trigger package reinstallation",
+			"type":"choice",
+			"choices":["changed-use"]
+		},
+
+		"--reinstall-atoms": {
+			"help"   :"A space separated list of package names or slot atoms. " + \
+				"Emerge will treat matching packages as if they are not " + \
+				"installed, and reinstall them if necessary. Implies --deep.",
+
+			"action" : "append",
+		},
+
+		"--binpkg-respect-use": {
+			"help"    : "discard binary packages if their use flags \
+				don't match the current configuration",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--getbinpkg": {
+			"shortopt" : "-g",
+			"help"     : "fetch binary packages",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--getbinpkgonly": {
+			"shortopt" : "-G",
+			"help"     : "fetch binary packages only",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--usepkg-exclude": {
+			"help"   :"A space separated list of package names or slot atoms. " + \
+				"Emerge will ignore matching binary packages. ",
+
+			"action" : "append",
+		},
+
+		"--rebuild-exclude": {
+			"help"   :"A space separated list of package names or slot atoms. " + \
+				"Emerge will not rebuild these packages due to the " + \
+				"--rebuild flag. ",
+
+			"action" : "append",
+		},
+
+		"--rebuild-ignore": {
+			"help"   :"A space separated list of package names or slot atoms. " + \
+				"Emerge will not rebuild packages that depend on matching " + \
+				"packages due to the --rebuild flag. ",
+
+			"action" : "append",
+		},
+
+		"--package-moves": {
+			"help"     : "perform package moves when necessary",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--quiet": {
+			"shortopt" : "-q",
+			"help"     : "reduced or condensed output",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--quiet-build": {
+			"help"     : "redirect build output to logs",
+			"type"     : "choice",
+			"choices"  : true_y_or_n,
+		},
+
+		"--rebuild-if-new-slot": {
+			"help"     : ("Automatically rebuild or reinstall packages when slot/sub-slot := "
+				"operator dependencies can be satisfied by a newer slot, so that "
+				"older package slots will become eligible for removal by the "
+				"--depclean action as soon as possible."),
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--rebuild-if-new-rev": {
+			"help"     : "Rebuild packages when dependencies that are " + \
+				"used at both build-time and run-time are built, " + \
+				"if the dependency is not already installed with the " + \
+				"same version and revision.",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--rebuild-if-new-ver": {
+			"help"     : "Rebuild packages when dependencies that are " + \
+				"used at both build-time and run-time are built, " + \
+				"if the dependency is not already installed with the " + \
+				"same version. Revision numbers are ignored.",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--rebuild-if-unbuilt": {
+			"help"     : "Rebuild packages when dependencies that are " + \
+				"used at both build-time and run-time are built.",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--rebuilt-binaries": {
+			"help"     : "replace installed packages with binary " + \
+			             "packages that have been rebuilt",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+		
+		"--rebuilt-binaries-timestamp": {
+			"help"   : "use only binaries that are newer than this " + \
+			           "timestamp for --rebuilt-binaries",
+			"action" : "store"
+		},
+
+		"--root": {
+		 "help"   : "specify the target root filesystem for merging packages",
+		 "action" : "store"
+		},
+
+		"--root-deps": {
+			"help"    : "modify interpretation of dependencies",
+			"type"    : "choice",
+			"choices" :("True", "rdeps")
+		},
+
+		"--select": {
+			"help"    : "add specified packages to the world set " + \
+			            "(inverse of --oneshot)",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--selective": {
+			"help"    : "identical to --noreplace",
+			"type"    : "choice",
+			"choices" : true_y_or_n
+		},
+
+		"--use-ebuild-visibility": {
+			"help"     : "use unbuilt ebuild metadata for visibility checks on built packages",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--useoldpkg-atoms": {
+			"help"   :"A space separated list of package names or slot atoms. " + \
+				"Emerge will prefer matching binary packages over newer unbuilt packages. ",
+
+			"action" : "append",
+		},
+
+		"--usepkg": {
+			"shortopt" : "-k",
+			"help"     : "use binary packages",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+
+		"--usepkgonly": {
+			"shortopt" : "-K",
+			"help"     : "use only binary packages",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
+	}
+
+	from optparse import OptionParser
+	parser = OptionParser()
+	if parser.has_option("--help"):
+		parser.remove_option("--help")
+
+	for action_opt in actions:
+		parser.add_option("--" + action_opt, action="store_true",
+			dest=action_opt.replace("-", "_"), default=False)
+	for myopt in options:
+		parser.add_option(myopt, action="store_true",
+			dest=myopt.lstrip("--").replace("-", "_"), default=False)
+	for shortopt, longopt in shortmapping.items():
+		parser.add_option("-" + shortopt, action="store_true",
+			dest=longopt.lstrip("--").replace("-", "_"), default=False)
+	for myalias, myopt in longopt_aliases.items():
+		parser.add_option(myalias, action="store_true",
+			dest=myopt.lstrip("--").replace("-", "_"), default=False)
+
+	for myopt, kwargs in argument_options.items():
+		shortopt = kwargs.pop("shortopt", None)
+		args = [myopt]
+		if shortopt is not None:
+			args.append(shortopt)
+		parser.add_option(dest=myopt.lstrip("--").replace("-", "_"),
+			*args, **kwargs)
+
+	tmpcmdline = insert_optional_args(tmpcmdline)
+
+	myoptions, myargs = parser.parse_args(args=tmpcmdline)
+
+	if myoptions.ask in true_y:
+		myoptions.ask = True
+	else:
+		myoptions.ask = None
+
+	if myoptions.autounmask in true_y:
+		myoptions.autounmask = True
+
+	if myoptions.autounmask_unrestricted_atoms in true_y:
+		myoptions.autounmask_unrestricted_atoms = True
+
+	if myoptions.autounmask_keep_masks in true_y:
+		myoptions.autounmask_keep_masks = True
+
+	if myoptions.autounmask_write in true_y:
+		myoptions.autounmask_write = True
+
+	if myoptions.buildpkg in true_y:
+		myoptions.buildpkg = True
+
+	if myoptions.buildpkg_exclude:
+		bad_atoms = _find_bad_atoms(myoptions.buildpkg_exclude, less_strict=True)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --buildpkg-exclude parameter: '%s'\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.changed_use is not False:
+		myoptions.reinstall = "changed-use"
+		myoptions.changed_use = False
+
+	if myoptions.deselect in true_y:
+		myoptions.deselect = True
+
+	if myoptions.binpkg_respect_use is not None:
+		if myoptions.binpkg_respect_use in true_y:
+			myoptions.binpkg_respect_use = 'y'
+		else:
+			myoptions.binpkg_respect_use = 'n'
+
+	if myoptions.complete_graph in true_y:
+		myoptions.complete_graph = True
+	else:
+		myoptions.complete_graph = None
+
+	if myoptions.depclean_lib_check in true_y:
+		myoptions.depclean_lib_check = True
+
+	if myoptions.exclude:
+		bad_atoms = _find_bad_atoms(myoptions.exclude)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --exclude parameter: '%s' (only package names and slot atoms (with wildcards) allowed)\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.reinstall_atoms:
+		bad_atoms = _find_bad_atoms(myoptions.reinstall_atoms)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --reinstall-atoms parameter: '%s' (only package names and slot atoms (with wildcards) allowed)\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.rebuild_exclude:
+		bad_atoms = _find_bad_atoms(myoptions.rebuild_exclude)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --rebuild-exclude parameter: '%s' (only package names and slot atoms (with wildcards) allowed)\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.rebuild_ignore:
+		bad_atoms = _find_bad_atoms(myoptions.rebuild_ignore)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --rebuild-ignore parameter: '%s' (only package names and slot atoms (with wildcards) allowed)\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.usepkg_exclude:
+		bad_atoms = _find_bad_atoms(myoptions.usepkg_exclude)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --usepkg-exclude parameter: '%s' (only package names and slot atoms (with wildcards) allowed)\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.useoldpkg_atoms:
+		bad_atoms = _find_bad_atoms(myoptions.useoldpkg_atoms)
+		if bad_atoms and not silent:
+			parser.error("Invalid Atom(s) in --useoldpkg-atoms parameter: '%s' (only package names and slot atoms (with wildcards) allowed)\n" % \
+				(",".join(bad_atoms),))
+
+	if myoptions.fail_clean in true_y:
+		myoptions.fail_clean = True
+
+	if myoptions.getbinpkg in true_y:
+		myoptions.getbinpkg = True
+	else:
+		myoptions.getbinpkg = None
+
+	if myoptions.getbinpkgonly in true_y:
+		myoptions.getbinpkgonly = True
+	else:
+		myoptions.getbinpkgonly = None
+
+	if myoptions.keep_going in true_y:
+		myoptions.keep_going = True
+	else:
+		myoptions.keep_going = None
+
+	if myoptions.package_moves in true_y:
+		myoptions.package_moves = True
+
+	if myoptions.quiet in true_y:
+		myoptions.quiet = True
+	else:
+		myoptions.quiet = None
+
+	if myoptions.quiet_build in true_y:
+		myoptions.quiet_build = 'y'
+
+	if myoptions.rebuild_if_new_slot in true_y:
+		myoptions.rebuild_if_new_slot = 'y'
+
+	if myoptions.rebuild_if_new_ver in true_y:
+		myoptions.rebuild_if_new_ver = True
+	else:
+		myoptions.rebuild_if_new_ver = None
+
+	if myoptions.rebuild_if_new_rev in true_y:
+		myoptions.rebuild_if_new_rev = True
+		myoptions.rebuild_if_new_ver = None
+	else:
+		myoptions.rebuild_if_new_rev = None
+
+	if myoptions.rebuild_if_unbuilt in true_y:
+		myoptions.rebuild_if_unbuilt = True
+		myoptions.rebuild_if_new_rev = None
+		myoptions.rebuild_if_new_ver = None
+	else:
+		myoptions.rebuild_if_unbuilt = None
+
+	if myoptions.rebuilt_binaries in true_y:
+		myoptions.rebuilt_binaries = True
+
+	if myoptions.root_deps in true_y:
+		myoptions.root_deps = True
+
+	if myoptions.select in true_y:
+		myoptions.select = True
+		myoptions.oneshot = False
+	elif myoptions.select == "n":
+		myoptions.oneshot = True
+
+	if myoptions.selective in true_y:
+		myoptions.selective = True
+
+	if myoptions.backtrack is not None:
+
+		try:
+			backtrack = int(myoptions.backtrack)
+		except (OverflowError, ValueError):
+			backtrack = -1
+
+		if backtrack < 0:
+			backtrack = None
+			if not silent:
+				parser.error("Invalid --backtrack parameter: '%s'\n" % \
+					(myoptions.backtrack,))
+
+		myoptions.backtrack = backtrack
+
+	if myoptions.deep is not None:
+		deep = None
+		if myoptions.deep == "True":
+			deep = True
+		else:
+			try:
+				deep = int(myoptions.deep)
+			except (OverflowError, ValueError):
+				deep = -1
+
+		if deep is not True and deep < 0:
+			deep = None
+			if not silent:
+				parser.error("Invalid --deep parameter: '%s'\n" % \
+					(myoptions.deep,))
+
+		myoptions.deep = deep
+
+	if myoptions.jobs:
+		jobs = None
+		if myoptions.jobs == "True":
+			jobs = True
+		else:
+			try:
+				jobs = int(myoptions.jobs)
+			except ValueError:
+				jobs = -1
+
+		if jobs is not True and \
+			jobs < 1:
+			jobs = None
+			if not silent:
+				parser.error("Invalid --jobs parameter: '%s'\n" % \
+					(myoptions.jobs,))
+
+		myoptions.jobs = jobs
+
+	if myoptions.load_average == "True":
+		myoptions.load_average = None
+
+	if myoptions.load_average:
+		try:
+			load_average = float(myoptions.load_average)
+		except ValueError:
+			load_average = 0.0
+
+		if load_average <= 0.0:
+			load_average = None
+			if not silent:
+				parser.error("Invalid --load-average parameter: '%s'\n" % \
+					(myoptions.load_average,))
+
+		myoptions.load_average = load_average
+	
+	if myoptions.rebuilt_binaries_timestamp:
+		try:
+			rebuilt_binaries_timestamp = int(myoptions.rebuilt_binaries_timestamp)
+		except ValueError:
+			rebuilt_binaries_timestamp = -1
+
+		if rebuilt_binaries_timestamp < 0:
+			rebuilt_binaries_timestamp = 0
+			if not silent:
+				parser.error("Invalid --rebuilt-binaries-timestamp parameter: '%s'\n" % \
+					(myoptions.rebuilt_binaries_timestamp,))
+
+		myoptions.rebuilt_binaries_timestamp = rebuilt_binaries_timestamp
+
+	if myoptions.use_ebuild_visibility in true_y:
+		myoptions.use_ebuild_visibility = True
+	else:
+		# None or "n"
+		pass
+
+	if myoptions.usepkg in true_y:
+		myoptions.usepkg = True
+	else:
+		myoptions.usepkg = None
+
+	if myoptions.usepkgonly in true_y:
+		myoptions.usepkgonly = True
+	else:
+		myoptions.usepkgonly = None
+
+	for myopt in options:
+		v = getattr(myoptions, myopt.lstrip("--").replace("-", "_"))
+		if v:
+			myopts[myopt] = True
+
+	for myopt in argument_options:
+		v = getattr(myoptions, myopt.lstrip("--").replace("-", "_"), None)
+		if v is not None:
+			myopts[myopt] = v
+
+	if myoptions.searchdesc:
+		myoptions.search = True
+
+	for action_opt in actions:
+		v = getattr(myoptions, action_opt.replace("-", "_"))
+		if v:
+			if myaction:
+				multiple_actions(myaction, action_opt)
+				sys.exit(1)
+			myaction = action_opt
+
+	if myaction is None and myoptions.deselect is True:
+		myaction = 'deselect'
+
+	if myargs and isinstance(myargs[0], bytes):
+		for i in range(len(myargs)):
+			myargs[i] = portage._unicode_decode(myargs[i])
+
+	myfiles += myargs
+
+	return myaction, myopts, myfiles
+
+def profile_check(trees, myaction):
+	if myaction in ("help", "info", "search", "sync", "version"):
+		return os.EX_OK
+	for root_trees in trees.values():
+		if root_trees["root_config"].settings.profiles:
+			continue
+		# generate some profile related warning messages
+		validate_ebuild_environment(trees)
+		msg = ("Your current profile is invalid. If you have just changed "
+			"your profile configuration, you should revert back to the "
+			"previous configuration. Allowed actions are limited to "
+			"--help, --info, --search, --sync, and --version.")
+		writemsg_level("".join("!!! %s\n" % l for l in textwrap.wrap(msg, 70)),
+			level=logging.ERROR, noiselevel=-1)
+		return 1
+	return os.EX_OK
+
+def emerge_main(args=None, build_dict):
+	"""
+	@param args: command arguments (default: sys.argv[1:])
+	@type args: list
+	"""
+	if args is None:
+		args = sys.argv[1:]
+
+	portage._disable_legacy_globals()
+	portage._internal_warnings = True
+	# Disable color until we're sure that it should be enabled (after
+	# EMERGE_DEFAULT_OPTS has been parsed).
+	portage.output.havecolor = 0
+
+	# This first pass is just for options that need to be known as early as
+	# possible, such as --config-root.  They will be parsed again later,
+	# together with EMERGE_DEFAULT_OPTS (which may vary depending on
+	# the value of --config-root).
+	myaction, myopts, myfiles = parse_opts(args, silent=True)
+	if "--debug" in myopts:
+		os.environ["PORTAGE_DEBUG"] = "1"
+	if "--config-root" in myopts:
+		os.environ["PORTAGE_CONFIGROOT"] = myopts["--config-root"]
+	if "--root" in myopts:
+		os.environ["ROOT"] = myopts["--root"]
+	if "--accept-properties" in myopts:
+		os.environ["ACCEPT_PROPERTIES"] = myopts["--accept-properties"]
+
+	# optimize --help (no need to load config / EMERGE_DEFAULT_OPTS)
+	if myaction == "help":
+		emerge_help()
+		return os.EX_OK
+	elif myaction == "moo":
+		print(COWSAY_MOO % platform.system())
+		return os.EX_OK
+
+	# Portage needs to ensure a sane umask for the files it creates.
+	os.umask(0o22)
+	if myaction == "sync":
+		portage._sync_disabled_warnings = True
+	settings, trees, mtimedb = load_emerge_config()
+	rval = profile_check(trees, myaction)
+	if rval != os.EX_OK:
+		return rval
+
+	tmpcmdline = []
+	if "--ignore-default-opts" not in myopts:
+		tmpcmdline.extend(settings["EMERGE_DEFAULT_OPTS"].split())
+	tmpcmdline.extend(args)
+	myaction, myopts, myfiles = parse_opts(tmpcmdline)
+
+	return run_action(settings, trees, mtimedb, myaction, myopts, myfiles,
+		gc_locals=locals().clear, build_dict)

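The new main.py carries over emerge's two-pass option handling: insert_optional_args() splices a literal 'True' behind any flag whose optional value was omitted, because optparse cannot express optional option arguments itself. A standalone sketch of the idea, reduced to long options only (the real function also expands clustered short options like -av):

def insert_defaults(args, optional=('--ask', '--quiet')):
	# Splice a literal 'True' behind any optional flag whose value was
	# omitted, mirroring insert_optional_args() in gobs/pym/main.py
	# (simplified; short-option clustering is skipped here).
	out = []
	stack = list(reversed(args))
	while stack:
		arg = stack.pop()
		out.append(arg)
		if arg in optional:
			if stack and stack[-1] in ('y', 'n'):
				out.append(stack.pop())
			else:
				out.append('True')
	return out

print(insert_defaults(['--ask', '--quiet', 'n', '--update']))
# -> ['--ask', 'True', '--quiet', 'n', '--update']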

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  2:22 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  2:22 UTC (permalink / raw
  To: gentoo-commits

commit:     04ef6a84a88107478a520fb07664810304efdd79
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 02:22:13 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 02:22:13 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=04ef6a84

define build_dict

---
 gobs/pym/main.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/main.py b/gobs/pym/main.py
index f8b5047..d6e6f21 100644
--- a/gobs/pym/main.py
+++ b/gobs/pym/main.py
@@ -966,7 +966,7 @@ def profile_check(trees, myaction):
 		return 1
 	return os.EX_OK
 
-def emerge_main(args=None, build_dict):
+def emerge_main(args=None, build_dict={}):
 	"""
 	@param args: command arguments (default: sys.argv[1:])
 	@type args: list

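The build_dict={} default introduced here is the classic mutable-default pitfall: the dict is created once, when the def statement runs, and is then shared by every call that omits the argument. The next commit replaces it with the None-sentinel idiom; a minimal demonstration of the difference:

def bad(build_dict={}):
	# the {} above is created once, at definition time
	build_dict['hits'] = build_dict.get('hits', 0) + 1
	return build_dict

print(bad())  # {'hits': 1}
print(bad())  # {'hits': 2}  <- same dict object, state leaks across calls

def good(build_dict=None):
	if build_dict is None:
		build_dict = {}  # fresh dict on every call
	build_dict['hits'] = build_dict.get('hits', 0) + 1
	return build_dict

print(good())  # {'hits': 1}
print(good())  # {'hits': 1}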

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  2:34 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  2:34 UTC (permalink / raw
  To: gentoo-commits

commit:     0ec6895a3679afa59a751c2259afade1620108fa
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 02:34:32 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 02:34:32 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=0ec6895a

move build_dict into run_action

---
 gobs/pym/actions.py |    4 ++--
 gobs/pym/main.py    |   11 ++++++++---
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index a3a158c..a27c5f9 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -3475,8 +3475,8 @@ def repo_name_duplicate_check(trees):
 
 	return bool(ignored_repos)
 
-def run_action(settings, trees, mtimedb, myaction, myopts, myfiles,
-	gc_locals=None, build_dict):
+def run_action(settings, trees, mtimedb, myaction, myopts, myfiles, build_dict,
+	gc_locals=None):
 
 	# The caller may have its local variables garbage collected, so
 	# they don't consume any memory during this long-running function.

diff --git a/gobs/pym/main.py b/gobs/pym/main.py
index d6e6f21..4bc45ee 100644
--- a/gobs/pym/main.py
+++ b/gobs/pym/main.py
@@ -966,14 +966,19 @@ def profile_check(trees, myaction):
 		return 1
 	return os.EX_OK
 
-def emerge_main(args=None, build_dict={}):
+def emerge_main(args=None, build_dict=None):
 	"""
 	@param args: command arguments (default: sys.argv[1:])
 	@type args: list
+	@param build_dict: info of the build_job
+	@type build_dict: dict
 	"""
 	if args is None:
 		args = sys.argv[1:]
 
+	if build_dict is None:
+		build_dict = {}
+
 	portage._disable_legacy_globals()
 	portage._internal_warnings = True
 	# Disable color until we're sure that it should be enabled (after
@@ -1017,5 +1022,5 @@ def emerge_main(args=None, build_dict={}):
 	tmpcmdline.extend(args)
 	myaction, myopts, myfiles = parse_opts(tmpcmdline)
 
-	return run_action(settings, trees, mtimedb, myaction, myopts, myfiles,
-		gc_locals=locals().clear, build_dict)
+	return run_action(settings, trees, mtimedb, myaction, myopts, myfiles, build_dict,
+		gc_locals=locals().clear)

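The reordering matters because both halves of the previous commit were syntax errors: a parameter without a default may not follow one with a default, and a positional argument may not follow a keyword argument at a call site, so the module could not even be imported. In outline:

# def run_action(settings, gc_locals=None, build_dict):
#     -> SyntaxError: non-default argument follows default argument
# run_action(settings, gc_locals=locals().clear, build_dict)
#     -> SyntaxError: positional argument follows keyword argument

def run_action(settings, build_dict, gc_locals=None):  # valid ordering
	if gc_locals is not None:
		# per the comment in actions.py, this lets the caller's locals
		# be garbage collected during this long-running function
		gc_locals()
		gc_locals = None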

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  2:41 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  2:41 UTC (permalink / raw
  To: gentoo-commits

commit:     337044bda9445d28e92d91b0488639a2d54a4130
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 02:41:24 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 02:41:24 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=337044bd

comment out the logfile

---
 gobs/pym/readconf.py |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/readconf.py b/gobs/pym/readconf.py
index 6b58ea9..1a1c036 100644
--- a/gobs/pym/readconf.py
+++ b/gobs/pym/readconf.py
@@ -33,8 +33,8 @@ class get_conf_settings(object):
 			# Buildhost setup (host/setup on guest)
 			if element[0] == 'GOBSCONFIG':
 				get_gobs_config = element[1]
-			if element[0] == 'LOGFILE':
-				get_gobs_logfile = element[1]
+			# if element[0] == 'LOGFILE':
+			#	get_gobs_logfile = element[1]
 		open_conffile.close()
 
 		gobs_settings_dict = {}
@@ -45,5 +45,5 @@ class get_conf_settings(object):
 		gobs_settings_dict['sql_passwd'] = get_sql_passwd.rstrip('\n')
 		gobs_settings_dict['gobs_gitreponame'] = get_gobs_gitreponame.rstrip('\n')
 		gobs_settings_dict['gobs_config'] = get_gobs_config.rstrip('\n')
-		gobs_settings_dict['gobs_logfile'] = get_gobs_logfile.rstrip('\n')
+		# gobs_settings_dict['gobs_logfile'] = get_gobs_logfile.rstrip('\n')
 		return gobs_settings_dict

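Note that the two LOGFILE hunks have to be commented out together: with only the parser branch removed, the later gobs_settings_dict['gobs_logfile'] assignment would hit an undefined get_gobs_logfile and raise a NameError. For context, a minimal sketch of the parsing pattern readconf.py uses, assuming KEY=VALUE lines (the line-splitting code itself sits outside this hunk):

# Sketch of the readconf.py pattern; the KEY=VALUE format is an assumption.
def read_gobs_settings(path):
	settings = {}
	with open(path) as conffile:
		for line in conffile:
			element = line.split('=')
			if element[0] == 'GOBSCONFIG':
				settings['gobs_config'] = element[1].rstrip('\n')
	return settings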

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06  2:51 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06  2:51 UTC (permalink / raw
  To: gentoo-commits

commit:     fa5ec34d26d54227957c86c185f50bb4489ddee3
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 02:50:57 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 02:50:57 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=fa5ec34d

define get_profile_checksum() in pgsql_querys.py

---
 gobs/pym/pgsql_querys.py |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 517db8b..c3d2784 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -315,6 +315,12 @@ def del_old_build_jobs(connection, queue_id):
 	cursor.execute(sqlQ3, (build_job_id,))
 	connection.commit()
 
+def get_profile_checksum(connection, config_profile):
+	cursor = connection.cursor()
+	sqlQ = "SELECT checksum FROM configs_metadata WHERE active = 'True' AND config_id = (SELECT config_id FROM configs WHERE config = %s) AND auto = 'True'"
+	cursor.execute(sqlQ, (config_profile,))
+	return cursor.fetchone()
+
 def get_packages_to_build(connection, config):
 	cursor =connection.cursor()
 	sqlQ1 = "SELECT build_job_id.build_jobs, ebuild_id.build_jobs, package_id.ebuilds FROM build_jobs, ebuilds WHERE \

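get_profile_checksum() follows the usual DB-API shape: the %s placeholder keeps config_profile out of the SQL string, and cursor.fetchone() yields a one-column tuple, or None when no active, auto-enabled row matches, so callers must unpack defensively. A hypothetical call (the profile name is invented):

conn = CM.getConnection()
try:
	row = get_profile_checksum(conn, 'hardened-amd64-test')  # invented name
	checksum = row[0] if row is not None else None
finally:
	CM.putConnection(conn)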

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06 23:52 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06 23:52 UTC (permalink / raw
  To: gentoo-commits

commit:     a2d7ab6659623b2b4b5a4d7b86acecd5044684b6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 23:52:39 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 23:52:39 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a2d7ab66

fix error in SQL query and move log_fail_queru()

---
 gobs/pym/build_log.py    |  811 ++++++++++++++++------------------------------
 gobs/pym/build_queru.py  |   54 +---
 gobs/pym/pgsql_querys.py |    7 +-
 3 files changed, 278 insertions(+), 594 deletions(-)

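One detail to watch in the build_log.py diff below: the methods hoisted out of the gobs_buildlog class keep their leading self parameter (search_info(), search_error(), search_qa()), so calling them as plain functions with the old argument list raises a TypeError. A sketch of the corrected signature for the first of them, with behavior otherwise unchanged:

import re

def search_info(textline, error_log_list):
	# module-level version without the stale `self` parameter
	for marker in (" * Package:", " * Repository:", " * Maintainer:",
			" * USE:", " * FEATURES:"):
		if re.search(marker, textline):
			error_log_list.append(textline + '\n')
	return error_log_list
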
diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 5ecd8a5..537103e 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -2,526 +2,212 @@ from __future__ import print_function
 import re
 import os
 import platform
-import logging
-try:
-	from subprocess import getstatusoutput as subprocess_getstatusoutput
-except ImportError:
-	from commands import getstatusoutput as subprocess_getstatusoutput
 from gobs.text import get_log_text_list
-from _emerge.main import parse_opts, load_emerge_config, \
-        getportageversion
-from portage.util import writemsg, \
-        writemsg_level, writemsg_stdout
-from _emerge.actions import _info_pkgs_ver
-from portage.exception import InvalidAtom
-from portage.dep import Atom
-from portage.dbapi._expand_new_virt import expand_new_virt
-from portage.const import GLOBAL_CONFIG_PATH, NEWS_LIB_PATH
-from portage.const import _ENABLE_DYN_LINK_MAP, _ENABLE_SET_CONFIG
 from portage.versions import catpkgsplit, cpv_getversion
-from portage import _encodings
-from portage import _unicode_encode
 from gobs.repoman_gobs import gobs_repoman
 import portage
+from portage.util import writemsg, \
+	writemsg_level, writemsg_stdout
+from portage import _encodings
+from portage import _unicode_encode
 from gobs.package import gobs_package
 from gobs.readconf import get_conf_settings
 from gobs.flags import gobs_use_flags
+
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
+config_profile = gobs_settings_dict['gobs_config']
 # make a CM
 from gobs.ConnectionManager import connectionManager
 CM=connectionManager(gobs_settings_dict)
 #selectively import the pgsql/mysql querys
 if CM.getName()=='pgsql':
-	from gobs.pgsql import *
-
-class gobs_buildlog(object):
+	from gobs.pgsql_querys import *
 	
-	def __init__(self):
-		self._config_profile = gobs_settings_dict['gobs_config']
-	
-	def get_build_dict_db(self, settings, pkg):
-		conn=CM.getConnection()
-		myportdb = portage.portdbapi(mysettings=settings)
-		cpvr_list = catpkgsplit(pkg.cpv, silent=1)
-		categories = cpvr_list[0]
-		package = cpvr_list[1]
-		ebuild_version = cpv_getversion(pkg.cpv)
-		log_msg = "cpv: %s" % (pkg.cpv,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		init_package = gobs_package(settings, myportdb)
-		package_id = have_package_db(conn, categories, package)
-		# print("package_id %s" % package_id, file=sys.stdout)
-		build_dict = {}
-		mybuild_dict = {}
-		build_dict['ebuild_version'] = ebuild_version
-		build_dict['package_id'] = package_id
-		build_dict['cpv'] = pkg.cpv
-		build_dict['categories'] = categories
-		build_dict['package'] = package
-		build_dict['config_profile'] = self._config_profile
-		init_useflags = gobs_use_flags(settings, myportdb, pkg.cpv)
-		iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
-		#print 'final_use_list', final_use_list
-		if  final_use_list != []:
-			build_dict['build_useflags'] = sorted(final_use_list)
-		else:
-			build_dict['build_useflags'] = None
-		#print "build_dict['build_useflags']", build_dict['build_useflags']
-		pkgdir = os.path.join(settings['PORTDIR'], categories + "/" + package)
-		ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + package + "-" + ebuild_version + ".ebuild")[0]
-		build_dict['checksum'] = ebuild_version_checksum_tree
+def get_build_dict_db(settings, pkg):
+	conn=CM.getConnection()
+	myportdb = portage.portdbapi(mysettings=settings)
+	cpvr_list = catpkgsplit(pkg.cpv, silent=1)
+	categories = cpvr_list[0]
+	package = cpvr_list[1]
+	ebuild_version = cpv_getversion(pkg.cpv)
+	log_msg = "cpv: %s" % (pkg.cpv,)
+	add_gobs_logs(conn, log_msg, "info", config_profile)
+	init_package = gobs_package(settings, myportdb)
+	package_id = have_package_db(conn, categories, package)
+	# print("package_id %s" % package_id, file=sys.stdout)
+	build_dict = {}
+	mybuild_dict = {}
+	build_dict['ebuild_version'] = ebuild_version
+	build_dict['package_id'] = package_id
+	build_dict['cpv'] = pkg.cpv
+	build_dict['categories'] = categories
+	build_dict['package'] = package
+	build_dict['config_profile'] = config_profile
+	init_useflags = gobs_use_flags(settings, myportdb, pkg.cpv)
+	iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
+	#print 'final_use_list', final_use_list
+	if  final_use_list != []:
+		build_dict['build_useflags'] = sorted(final_use_list)
+	else:
+		build_dict['build_useflags'] = None
+	#print "build_dict['build_useflags']", build_dict['build_useflags']
+	pkgdir = os.path.join(settings['PORTDIR'], categories + "/" + package)
+	ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + package + "-" + ebuild_version + ".ebuild")[0]
+	build_dict['checksum'] = ebuild_version_checksum_tree
+	ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
+	if ebuild_id is None:
+		#print 'have any ebuild',  get_ebuild_checksum(conn, package_id, ebuild_version)
+		init_package.update_ebuild_db(build_dict)
 		ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
-		if ebuild_id is None:
-			#print 'have any ebuild',  get_ebuild_checksum(conn, package_id, ebuild_version)
-			init_package.update_ebuild_db(build_dict)
-			ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
-		build_dict['ebuild_id'] = ebuild_id
-		queue_id = check_revision(conn, build_dict)
-		if queue_id is None:
-			build_dict['queue_id'] = None
-		else:
-			build_dict['queue_id'] = queue_id
-		CM.putConnection(conn)
-		return build_dict
-
-	def add_new_ebuild_buildlog(self, settings, pkg, build_dict, build_error, summary_error, build_log_dict):
-		conn=CM.getConnection()
-		portdb = portage.portdbapi(mysettings=settings)
-		init_useflags = gobs_use_flags(settings, portdb, build_dict['cpv'])
-		iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
-		iuse = []
-		use_flags_list = []
-		use_enable_list = []
-		for iuse_line in iuse_flags_list:
-			iuse.append(init_useflags.reduce_flag(iuse_line))
-		iuse_flags_list2 = list(set(iuse))
-		use_enable = final_use_list
-		use_disable = list(set(iuse_flags_list2).difference(set(use_enable)))
-		use_flagsDict = {}
-		for x in use_enable:
-			use_flagsDict[x] = True
-		for x in use_disable:
-			use_flagsDict[x] = False
-		for u, s in  use_flagsDict.iteritems():
-			use_flags_list.append(u)
-			use_enable_list.append(s)
-		build_id = add_new_buildlog(conn, build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict)
-		CM.putConnection(conn)
-		return build_id
-
-	def search_info(self, textline, error_log_list):
-		if re.search(" * Package:", textline):
-			error_log_list.append(textline + '\n')
-		if re.search(" * Repository:", textline):
-			error_log_list.append(textline + '\n')
-		if re.search(" * Maintainer:", textline):
-			error_log_list.append(textline + '\n')
-		if re.search(" * USE:", textline):
-			error_log_list.append(textline + '\n')
-		if re.search(" * FEATURES:", textline):
-			error_log_list.append(textline + '\n')
-		return error_log_list
-
-	def search_error(self, logfile_text, textline, error_log_list, sum_build_log_list, i):
-		if re.search("Error 1", textline):
-			x = i - 20
-			endline = True
-			error_log_list.append(".....\n")
-			while x != i + 3 and endline:
-				try:
-					error_log_list.append(logfile_text[x] + '\n')
-				except:
-					endline = False
-				else:
-					x = x +1
-		if re.search(" * ERROR:", textline):
-			x = i
-			endline= True
-			field = textline.split(" ")
-			sum_build_log_list.append("fail")
-			error_log_list.append(".....\n")
-			while x != i + 10 and endline:
-				try:
-					error_log_list.append(logfile_text[x] + '\n')
-				except:
-					endline = False
-				else:
-					x = x +1
-		if re.search("configure: error:", textline):
-			x = i - 4
-			endline = True
-			error_log_list.append(".....\n")
-			while x != i + 3 and endline:
-				try:
-					error_log_list.append(logfile_text[x] + '\n')
-				except:
-					endline = False
-				else:
-					x = x +1
-		return error_log_list, sum_build_log_list
-
-	def search_qa(self, logfile_text, textline, qa_error_list, error_log_list,i):
-		if re.search(" * QA Notice:", textline):
-			x = i
-			qa_error_list.append(logfile_text[x] + '\n')
-			endline= True
-			error_log_list.append(".....\n")
-			while x != i + 3 and endline:
-				try:
-					error_log_list.append(logfile_text[x] + '\n')
-				except:
-					endline = False
-				else:
-					x = x +1
-		return qa_error_list, error_log_list
-
-	def get_buildlog_info(self, settings, build_dict):
-		myportdb = portage.portdbapi(mysettings=settings)
-		init_repoman = gobs_repoman(settings, myportdb)
-		logfile_text = get_log_text_list(settings.get("PORTAGE_LOG_FILE"))
-		# FIXME to support more errors and stuff
-		i = 0
-		build_log_dict = {}
-		error_log_list = []
-		qa_error_list = []
-		repoman_error_list = []
-		sum_build_log_list = []
-		for textline in logfile_text:
-			error_log_list = self.search_info(textline, error_log_list)
-			error_log_list, sum_build_log_list = self.search_error(logfile_text, textline, error_log_list, sum_build_log_list, i)
-			qa_error_list, error_log_list = self.search_qa(logfile_text, textline, qa_error_list, error_log_list, i)
-			i = i +1
-		# Run repoman check_repoman()
-		repoman_error_list = init_repoman.check_repoman(build_dict['categories'], build_dict['package'], build_dict['ebuild_version'], build_dict['config_profile'])
-		if repoman_error_list != []:
-			sum_build_log_list.append("repoman")
-		if qa_error_list != []:
-			sum_build_log_list.append("qa")
-		build_log_dict['repoman_error_list'] = repoman_error_list
-		build_log_dict['qa_error_list'] = qa_error_list
-		build_log_dict['error_log_list'] = error_log_list
-		build_log_dict['summary_error_list'] = sum_build_log_list
-		return build_log_dict
-	
-	# Copy of the portage action_info but fixed so it post info to a list.
-	def action_info(self, settings, trees):
-		argscmd = []
-		myaction, myopts, myfiles = parse_opts(argscmd, silent=True)
-		msg = []
-		root = '/'
-		root_config = root
-		# root_config = trees[settings['ROOT']]['root_config']
-		msg.append(getportageversion(settings["PORTDIR"], settings["ROOT"],
-			settings.profile_path, settings["CHOST"],
-			trees[settings["ROOT"]]["vartree"].dbapi) + "\n")
-
-		header_width = 65
-		header_title = "System Settings"
-		if myfiles:
-			msg.append(header_width * "=" + "\n")
-			msg.append(header_title.rjust(int(header_width/2 + len(header_title)/2)) + "\n")
-		msg.append(header_width * "=" + "\n")
-		msg.append("System uname: "+platform.platform(aliased=1) + "\n")
-
-		lastSync = portage.grabfile(os.path.join(
-			settings["PORTDIR"], "metadata", "timestamp.chk"))
-		if lastSync:
-			msg.append("Timestamp of tree:" + lastSync[0] + "\n")
-		else:
-			msg.append("Timestamp of tree: Unknown" + "\n")
-
-		output=subprocess_getstatusoutput("distcc --version")
-		if not output[0]:
-			msg.append(str(output[1].split("\n",1)[0]))
-			if "distcc" in settings.features:
-				msg.append("[enabled]")
-			else:
-				msg.append("[disabled]")
-
-		output=subprocess_getstatusoutput("ccache -V")
-		if not output[0]:
-			msg.append(str(output[1].split("\n",1)[0]), end=' ')
-			if "ccache" in settings.features:
-				msg.append("[enabled]")
+	build_dict['ebuild_id'] = ebuild_id
+	queue_id = check_revision(conn, build_dict)
+	if queue_id is None:
+		build_dict['queue_id'] = None
+	else:
+		build_dict['queue_id'] = queue_id
+	CM.putConnection(conn)
+	return build_dict
+
+def add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict):
+	conn=CM.getConnection()
+	portdb = portage.portdbapi(mysettings=settings)
+	init_useflags = gobs_use_flags(settings, portdb, build_dict['cpv'])
+	iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
+	iuse = []
+	use_flags_list = []
+	use_enable_list = []
+	for iuse_line in iuse_flags_list:
+		iuse.append(init_useflags.reduce_flag(iuse_line))
+	iuse_flags_list2 = list(set(iuse))
+	use_enable = final_use_list
+	use_disable = list(set(iuse_flags_list2).difference(set(use_enable)))
+	use_flagsDict = {}
+	for x in use_enable:
+		use_flagsDict[x] = True
+	for x in use_disable:
+		use_flagsDict[x] = False
+	for u, s in  use_flagsDict.iteritems():
+		use_flags_list.append(u)
+		use_enable_list.append(s)
+	build_id = add_new_buildlog(conn, build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict)
+	CM.putConnection(conn)
+	return build_id
+
+def search_info(self, textline, error_log_list):
+	if re.search(" * Package:", textline):
+		error_log_list.append(textline + '\n')
+	if re.search(" * Repository:", textline):
+		error_log_list.append(textline + '\n')
+	if re.search(" * Maintainer:", textline):
+		error_log_list.append(textline + '\n')
+	if re.search(" * USE:", textline):
+		error_log_list.append(textline + '\n')
+	if re.search(" * FEATURES:", textline):
+		error_log_list.append(textline + '\n')
+	return error_log_list
+
+def search_error(self, logfile_text, textline, error_log_list, sum_build_log_list, i):
+	if re.search("Error 1", textline):
+		x = i - 20
+		endline = True
+		error_log_list.append(".....\n")
+		while x != i + 3 and endline:
+			try:
+				error_log_list.append(logfile_text[x] + '\n')
+			except:
+				endline = False
 			else:
-				msg.append("[disabled]")
-
-		myvars  = ["sys-devel/autoconf", "sys-devel/automake", "virtual/os-headers",
-			"sys-devel/binutils", "sys-devel/libtool",  "dev-lang/python"]
-		myvars += portage.util.grabfile(settings["PORTDIR"]+"/profiles/info_pkgs")
-		atoms = []
-		vardb = trees["/"]["vartree"].dbapi
-		for x in myvars:
+				x = x +1
+	if re.search(" * ERROR:", textline):
+		x = i
+		endline= True
+		field = textline.split(" ")
+		sum_build_log_list.append("fail")
+		error_log_list.append(".....\n")
+		while x != i + 10 and endline:
 			try:
-				x = Atom(x)
-			except InvalidAtom:
-				writemsg_stdout("%-20s %s\n" % (x+":", "[NOT VALID]"),
-					noiselevel=-1)
+				error_log_list.append(logfile_text[x] + '\n')
+			except:
+				endline = False
 			else:
-				for atom in expand_new_virt(vardb, x):
-					if not atom.blocker:
-						atoms.append((x, atom))
-
-		myvars = sorted(set(atoms))
-
-		portdb = trees["/"]["porttree"].dbapi
-		main_repo = portdb.getRepositoryName(portdb.porttree_root)
-		cp_map = {}
-		cp_max_len = 0
-
-		for orig_atom, x in myvars:
-			pkg_matches = vardb.match(x)
-
-			versions = []
-			for cpv in pkg_matches:
-				matched_cp = portage.versions.cpv_getkey(cpv)
-				ver = portage.versions.cpv_getversion(cpv)
-				ver_map = cp_map.setdefault(matched_cp, {})
-				prev_match = ver_map.get(ver)
-				if prev_match is not None:
-					if prev_match.provide_suffix:
-						# prefer duplicate matches that include
-						# additional virtual provider info
-						continue
-
-				if len(matched_cp) > cp_max_len:
-					cp_max_len = len(matched_cp)
-				repo = vardb.aux_get(cpv, ["repository"])[0]
-				if repo == main_repo:
-					repo_suffix = ""
-				elif not repo:
-					repo_suffix = "::<unknown repository>"
-				else:
-					repo_suffix = "::" + repo
-
-				if matched_cp == orig_atom.cp:
-					provide_suffix = ""
-				else:
-					provide_suffix = " (%s)" % (orig_atom,)
-
-				ver_map[ver] = _info_pkgs_ver(ver, repo_suffix, provide_suffix)
-
-		for cp in sorted(cp_map):
-			versions = sorted(cp_map[cp].values())
-			versions = ", ".join(ver.toString() for ver in versions)
-			msg_extra = "%s %s\n" % \
-				((cp + ":").ljust(cp_max_len + 1), versions)
-			msg.append(msg_extra)
-
-		libtool_vers = ",".join(trees["/"]["vartree"].dbapi.match("sys-devel/libtool"))
-
-		repos = portdb.settings.repositories
-		msg_extra = "Repositories: %s\n" % \
-			" ".join(repo.name for repo in repos)
-		msg.append(msg_extra)
-
-		if _ENABLE_SET_CONFIG:
-			sets_line = "Installed sets: "
-			sets_line += ", ".join(s for s in \
-				sorted(root_config.sets['selected'].getNonAtoms()) \
-				if s.startswith(SETPREFIX))
-			sets_line += "\n"
-			msg.append(sets_line)
-
-		myvars = ['GENTOO_MIRRORS', 'CONFIG_PROTECT', 'CONFIG_PROTECT_MASK',
-			'PORTDIR', 'DISTDIR', 'PKGDIR', 'PORTAGE_TMPDIR',
-			'PORTDIR_OVERLAY', 'PORTAGE_BUNZIP2_COMMAND',
-			'PORTAGE_BZIP2_COMMAND',
-			'USE', 'CHOST', 'CFLAGS', 'CXXFLAGS',
-			'ACCEPT_KEYWORDS', 'ACCEPT_LICENSE', 'SYNC', 'FEATURES',
-			'EMERGE_DEFAULT_OPTS']
-		myvars.extend(portage.util.grabfile(settings["PORTDIR"]+"/profiles/info_vars"))
-
-		myvars_ignore_defaults = {
-			'PORTAGE_BZIP2_COMMAND' : 'bzip2',
-		}
-
-		myvars = portage.util.unique_array(myvars)
-		use_expand = settings.get('USE_EXPAND', '').split()
-		use_expand.sort()
-		use_expand_hidden = set(
-			settings.get('USE_EXPAND_HIDDEN', '').upper().split())
-		alphabetical_use = '--alphabetical' in myopts
-		unset_vars = []
-		myvars.sort()
-		for x in myvars:
-			if x in settings:
-				if x != "USE":
-					default = myvars_ignore_defaults.get(x)
-					if default is not None and \
-						default == settings[x]:
-						continue
-					msg_extra = '%s="%s"\n' % (x, settings[x])
-					msg.append(msg_extra)
-				else:
-					use = set(settings["USE"].split())
-					for varname in use_expand:
-						flag_prefix = varname.lower() + "_"
-						for f in list(use):
-							if f.startswith(flag_prefix):
-								use.remove(f)
-					use = list(use)
-					use.sort()
-					msg_extra = 'USE=%s' % " ".join(use)
-					msg.append(msg_extra + "\n")
-					for varname in use_expand:
-						myval = settings.get(varname)
-						if myval:
-							msg.append(varname + '=' + myval + "\n")
+				x = x +1
+	if re.search("configure: error:", textline):
+		x = i - 4
+		endline = True
+		error_log_list.append(".....\n")
+		while x != i + 3 and endline:
+			try:
+				error_log_list.append(logfile_text[x] + '\n')
+			except:
+				endline = False
 			else:
-				unset_vars.append(x)
-		if unset_vars:
-			msg_extra = "Unset: "+", ".join(unset_vars)
-			msg.append(msg_extra + "\n")
-
-		# See if we can find any packages installed matching the strings
-		# passed on the command line
-		mypkgs = []
-		vardb = trees[settings["ROOT"]]["vartree"].dbapi
-		portdb = trees[settings["ROOT"]]["porttree"].dbapi
-		bindb = trees[settings["ROOT"]]["bintree"].dbapi
-		for x in myfiles:
-			match_found = False
-			installed_match = vardb.match(x)
-			for installed in installed_match:
-				mypkgs.append((installed, "installed"))
-				match_found = True
-
-			if match_found:
-				continue
-
-			for db, pkg_type in ((portdb, "ebuild"), (bindb, "binary")):
-				if pkg_type == "binary" and "--usepkg" not in myopts:
-					continue
-
-				matches = db.match(x)
-				matches.reverse()
-				for match in matches:
-					if pkg_type == "binary":
-						if db.bintree.isremote(match):
-							continue
-					auxkeys = ["EAPI", "DEFINED_PHASES"]
-					metadata = dict(zip(auxkeys, db.aux_get(match, auxkeys)))
-					if metadata["EAPI"] not in ("0", "1", "2", "3") and \
-						"info" in metadata["DEFINED_PHASES"].split():
-						mypkgs.append((match, pkg_type))
-						break
-
-		# If some packages were found...
-		if mypkgs:
-			# Get our global settings (we only print stuff if it varies from
-			# the current config)
-			mydesiredvars = [ 'CHOST', 'CFLAGS', 'CXXFLAGS', 'LDFLAGS' ]
-			auxkeys = mydesiredvars + list(vardb._aux_cache_keys)
-			auxkeys.append('DEFINED_PHASES')
-			global_vals = {}
-			pkgsettings = portage.config(clone=settings)
-
-			# Loop through each package
-			# Only print settings if they differ from global settings
-			header_title = "Package Settings"
-			msg.append(header_width * "=")
-			msg.append(header_title.rjust(int(header_width/2 + len(header_title)/2)))
-			msg.append(header_width * "=")
-			from portage.output import EOutput
-			out = EOutput()
-			for mypkg in mypkgs:
-				cpv = mypkg[0]
-				pkg_type = mypkg[1]
-				# Get all package specific variables
-				if pkg_type == "installed":
-					metadata = dict(zip(auxkeys, vardb.aux_get(cpv, auxkeys)))
-				elif pkg_type == "ebuild":
-					metadata = dict(zip(auxkeys, portdb.aux_get(cpv, auxkeys)))
-				elif pkg_type == "binary":
-					metadata = dict(zip(auxkeys, bindb.aux_get(cpv, auxkeys)))
-
-				pkg = Package(built=(pkg_type!="ebuild"), cpv=cpv,
-					installed=(pkg_type=="installed"), metadata=zip(Package.metadata_keys,
-					(metadata.get(x, '') for x in Package.metadata_keys)),
-					root_config=root_config, type_name=pkg_type)
-
-				if pkg_type == "installed":
-					msg.append("\n%s was built with the following:" % \
-						colorize("INFORM", str(pkg.cpv)))
-				elif pkg_type == "ebuild":
-					msg.append("\n%s would be build with the following:" % \
-						colorize("INFORM", str(pkg.cpv)))
-				elif pkg_type == "binary":
-					msg.append("\n%s (non-installed binary) was built with the following:" % \
-						colorize("INFORM", str(pkg.cpv)))
-
-				writemsg_stdout('%s\n' % pkg_use_display(pkg, myopts),
-					noiselevel=-1)
-				if pkg_type == "installed":
-					for myvar in mydesiredvars:
-						if metadata[myvar].split() != settings.get(myvar, '').split():
-							msg.append("%s=\"%s\"" % (myvar, metadata[myvar]))
-
-				if metadata['DEFINED_PHASES']:
-					if 'info' not in metadata['DEFINED_PHASES'].split():
-						continue
-
-				msg.append(">>> Attempting to run pkg_info() for '%s'" % pkg.cpv)
-
-				if pkg_type == "installed":
-					ebuildpath = vardb.findname(pkg.cpv)
-				elif pkg_type == "ebuild":
-					ebuildpath = portdb.findname(pkg.cpv, myrepo=pkg.repo)
-				elif pkg_type == "binary":
-					tbz2_file = bindb.bintree.getname(pkg.cpv)
-					ebuild_file_name = pkg.cpv.split("/")[1] + ".ebuild"
-					ebuild_file_contents = portage.xpak.tbz2(tbz2_file).getfile(ebuild_file_name)
-					tmpdir = tempfile.mkdtemp()
-					ebuildpath = os.path.join(tmpdir, ebuild_file_name)
-					file = open(ebuildpath, 'w')
-					file.write(ebuild_file_contents)
-					file.close()
-
-				if not ebuildpath or not os.path.exists(ebuildpath):
-					out.ewarn("No ebuild found for '%s'" % pkg.cpv)
-					continue
-
-				if pkg_type == "installed":
-					portage.doebuild(ebuildpath, "info", pkgsettings["ROOT"],
-						pkgsettings, debug=(settings.get("PORTAGE_DEBUG", "") == 1),
-						mydbapi=trees[settings["ROOT"]]["vartree"].dbapi,
-						tree="vartree")
-				elif pkg_type == "ebuild":
-					portage.doebuild(ebuildpath, "info", pkgsettings["ROOT"],
-						pkgsettings, debug=(settings.get("PORTAGE_DEBUG", "") == 1),
-						mydbapi=trees[settings["ROOT"]]["porttree"].dbapi,
-						tree="porttree")
-				elif pkg_type == "binary":
-					portage.doebuild(ebuildpath, "info", pkgsettings["ROOT"],
-						pkgsettings, debug=(settings.get("PORTAGE_DEBUG", "") == 1),
-						mydbapi=trees[settings["ROOT"]]["bintree"].dbapi,
-						tree="bintree")
-					shutil.rmtree(tmpdir)
-		return msg
-
-	def write_msg_file(self, msg, log_path):
-		"""
-		Output msg to stdout if not self._background. If log_path
-		is not None then append msg to the log (appends with
-		compression if the filename extension of log_path
-		corresponds to a supported compression type).
-		"""
-		msg_shown = False
-		if log_path is not None:
+				x = x +1
+	return error_log_list, sum_build_log_list
+
+def search_qa(logfile_text, textline, qa_error_list, error_log_list,i):
+	if re.search(" * QA Notice:", textline):
+		x = i
+		qa_error_list.append(logfile_text[x] + '\n')
+		endline= True
+		error_log_list.append(".....\n")
+		while x != i + 3 and endline:
 			try:
-				f = open(_unicode_encode(log_path,
+				error_log_list.append(logfile_text[x] + '\n')
+			except:
+				endline = False
+			else:
+				x = x +1
+	return qa_error_list, error_log_list
+
+def get_buildlog_info(settings, build_dict):
+	myportdb = portage.portdbapi(mysettings=settings)
+	init_repoman = gobs_repoman(settings, myportdb)
+	logfile_text = get_log_text_list(settings.get("PORTAGE_LOG_FILE"))
+	# FIXME to support more errors and stuff
+	i = 0
+	build_log_dict = {}
+	error_log_list = []
+	qa_error_list = []
+	repoman_error_list = []
+	sum_build_log_list = []
+	for textline in logfile_text:
+		error_log_list = search_info(textline, error_log_list)
+		error_log_list, sum_build_log_list = search_error(logfile_text, textline, error_log_list, sum_build_log_list, i)
+		qa_error_list, error_log_list = search_qa(logfile_text, textline, qa_error_list, error_log_list, i)
+		i = i +1
+	# Run repoman check_repoman()
+	repoman_error_list = init_repoman.check_repoman(build_dict['categories'], build_dict['package'], build_dict['ebuild_version'], build_dict['config_profile'])
+	if repoman_error_list != []:
+		sum_build_log_list.append("repoman")
+	if qa_error_list != []:
+		sum_build_log_list.append("qa")
+	build_log_dict['repoman_error_list'] = repoman_error_list
+	build_log_dict['qa_error_list'] = qa_error_list
+	build_log_dict['error_log_list'] = error_log_list
+	build_log_dict['summary_error_list'] = sum_build_log_list
+	return build_log_dict
+
+def write_msg_file(msg, log_path):
+	"""
+	Output msg to stdout if not self._background. If log_path
+	is not None then append msg to the log (appends with
+	compression if the filename extension of log_path
+	corresponds to a supported compression type).
+	"""
+	msg_shown = False
+	if log_path is not None:
+		try:
+			f = open(_unicode_encode(log_path,
 					encoding=_encodings['fs'], errors='strict'),
 					mode='ab')
-				f_real = f
-			except IOError as e:
-				if e.errno not in (errno.ENOENT, errno.ESTALE):
-					raise
-				if not msg_shown:
-					writemsg_level(msg, level=level, noiselevel=noiselevel)
+			f_real = f
+		except IOError as e:
+			if e.errno not in (errno.ENOENT, errno.ESTALE):
+				raise
+			if not msg_shown:
+				writemsg_level(msg, level=level, noiselevel=noiselevel)
 			else:
-
 				if log_path.endswith('.gz'):
 					# NOTE: The empty filename argument prevents us from
 					# triggering a bug in python3 which causes GzipFile
@@ -534,45 +220,98 @@ class gobs_buildlog(object):
 				if f_real is not f:
 					f_real.close()
 
-	def add_buildlog_main(self, settings, pkg, trees):
-		conn=CM.getConnection()
-		build_dict = self.get_build_dict_db(settings, pkg)
-		build_log_dict = {}
-		build_log_dict = self.get_buildlog_info(settings, build_dict)
-		sum_build_log_list = build_log_dict['summary_error_list']
-		error_log_list = build_log_dict['error_log_list']
-		build_error = ""
-		if error_log_list != []:
-			for log_line in error_log_list:
-				build_error = build_error + log_line
-		summary_error = ""
-		if sum_build_log_list != []:
-			for sum_log_line in sum_build_log_list:
-				summary_error = summary_error + " " + sum_log_line
-		build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(self._config_profile)[1]
-		log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
+def add_buildlog_main(settings, pkg, trees):
+	conn=CM.getConnection()
+	build_dict = get_build_dict_db(settings, pkg)
+	build_log_dict = {}
+	build_log_dict = get_buildlog_info(settings, build_dict)
+	sum_build_log_list = build_log_dict['summary_error_list']
+	error_log_list = build_log_dict['error_log_list']
+	build_error = ""
+	if error_log_list != []:
+		for log_line in error_log_list:
+			build_error = build_error + log_line
+	summary_error = ""
+	if sum_build_log_list != []:
+		for sum_log_line in sum_build_log_list:
+			summary_error = summary_error + " " + sum_log_line
+	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(self._config_profile)[1]
+	log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
+	add_gobs_logs(conn, log_msg, "info", config_profile)
+	if build_dict['queue_id'] is None:
+		build_id = .add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
+	else:
+		build_id = move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
+	# update_qa_repoman(conn, build_id, build_log_dict)
+	msg = ""
+	emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
+	if build_id is not None:
+		for msg_line in msg:
+			write_msg_file(msg_line, emerge_info_logfilename)
+		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
+		os.chmod(emerge_info_logfilename, 0o664)
+		log_msg = "Package: %s logged to db." % (pkg.cpv,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		if build_dict['queue_id'] is None:
-			build_id = self.add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
+	else:
+		# FIXME Remove the log some way so
+		# mergetask._locate_failure_log(x) works in action_build()
+		#try:
+		#	os.remove(settings.get("PORTAGE_LOG_FILE"))
+		#except:
+		#	pass
+		log_msg = "Package %s NOT logged to db." % (pkg.cpv,)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+	CM.putConnection(conn)
+
+def log_fail_queru(build_dict, settings):
+	config = gobs_settings_dict['gobs_config']
+	conn=CM.getConnection()
+	print('build_dict', build_dict)
+	fail_querue_dict = get_fail_querue_dict(conn, build_dict)
+	print('fail_querue_dict', fail_querue_dict)
+	if fail_querue_dict is None:
+		fail_querue_dict = {}
+		fail_querue_dict['build_job_id'] = build_dict['build_job_id']
+		fail_querue_dict['fail_type'] = build_dict['type_fail']
+		fail_querue_dict['fail_times'] = 1
+		print('fail_querue_dict', fail_querue_dict)
+		add_fail_querue_dict(conn, fail_querue_dict)
+	else:
+		if fail_querue_dict['fail_times'][0] < 6:
+			fail_querue_dict['fail_times'] = fail_querue_dict['fail_times'][0] + 1
+			fail_querue_dict['build_job_id'] = build_dict['build_job_id']
+			fail_querue_dict['fail_type'] = build_dict['type_fail']
+			update_fail_times(conn, fail_querue_dict)
+			CM.putConnection(conn)
+			return
 		else:
-			build_id = move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
-		# update_qa_repoman(conn, build_id, build_log_dict)
-		msg = self.action_info(settings, trees)
-		emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
-		if build_id is not None:
-			for msg_line in msg:
-				self.write_msg_file(msg_line, emerge_info_logfilename)
-			os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
-			os.chmod(emerge_info_logfilename, 0o664)
-			log_msg = "Package: %s logged to db." % (pkg.cpv,)
-			add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		else:
-			# FIXME Remove the log some way so 
-			# mergetask._locate_failure_log(x) works in action_build()
-			#try:
-			#	os.remove(settings.get("PORTAGE_LOG_FILE"))
-			#except:
-			#	pass
-			log_msg = "Package %s NOT logged to db." % (pkg.cpv,)
-			add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		CM.putConnection(conn)
+			build_log_dict = {}
+			error_log_list = []
+			qa_error_list = []
+			repoman_error_list = []
+			sum_build_log_list = []
+			sum_build_log_list.append("fail")
+			error_log_list.append(build_dict['type_fail'])
+			build_log_dict['repoman_error_list'] = repoman_error_list
+			build_log_dict['qa_error_list'] = qa_error_list
+			build_log_dict['summary_error_list'] = sum_build_log_list
+			if build_dict['type_fail'] == 'merge fail':
+				error_log_list = []
+				for k, v in build_dict['failed_merge'].iteritems():
+					error_log_list.append(v['fail_msg'])
+			build_log_dict['error_log_list'] = error_log_list
+			build_error = ""
+			if error_log_list != []:
+				for log_line in error_log_list:
+					build_error = build_error + log_line
+			summary_error = ""
+			if sum_build_log_list != []:
+				for sum_log_line in sum_build_log_list:
+					summary_error = summary_error + " " + sum_log_line
+			if settings.get("PORTAGE_LOG_FILE") is not None:
+				build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
+				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
+			else:
+				build_log_dict['logfilename'] = ""
+			move_queru_buildlog(conn, build_dict['build_job_id'], build_error, summary_error, build_log_dict)
+	CM.putConnection(conn)
\ No newline at end of file

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 28d8352..c071aaf 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -25,59 +25,7 @@ from portage import _unicode_decode
 from portage.versions import cpv_getkey
 from portage.dep import check_required_use
 from gobs.main import emerge_main
-
-def log_fail_queru(build_dict, settings):
-	config = gobs_settings_dict['gobs_config']
-	conn=CM.getConnection()
-	print('build_dict', build_dict)
-	fail_querue_dict = get_fail_querue_dict(conn, build_dict)
-	print('fail_querue_dict', fail_querue_dict)
-	if fail_querue_dict is None:
-		fail_querue_dict = {}
-		fail_querue_dict['build_job_id'] = build_dict['build_job_id']
-		fail_querue_dict['fail_type'] = build_dict['type_fail']
-		fail_querue_dict['fail_times'] = 1
-		print('fail_querue_dict', fail_querue_dict)
-		add_fail_querue_dict(conn, fail_querue_dict)
-	else:
-		if fail_querue_dict['fail_times'][0] < 6:
-			fail_querue_dict['fail_times'] = fail_querue_dict['fail_times'][0] + 1
-			fail_querue_dict['build_job_id'] = build_dict['build_job_id']
-			fail_querue_dict['fail_type'] = build_dict['type_fail']
-			update_fail_times(conn, fail_querue_dict)
-			CM.putConnection(conn)
-			return
-		else:
-			build_log_dict = {}
-			error_log_list = []
-			qa_error_list = []
-			repoman_error_list = []
-			sum_build_log_list = []
-			sum_build_log_list.append("fail")
-			error_log_list.append(build_dict['type_fail'])
-			build_log_dict['repoman_error_list'] = repoman_error_list
-			build_log_dict['qa_error_list'] = qa_error_list
-			build_log_dict['summary_error_list'] = sum_build_log_list
-			if build_dict['type_fail'] == 'merge fail':
-				error_log_list = []
-				for k, v in build_dict['failed_merge'].iteritems():
-					error_log_list.append(v['fail_msg'])
-			build_log_dict['error_log_list'] = error_log_list
-			build_error = ""
-			if error_log_list != []:
-				for log_line in error_log_list:
-					build_error = build_error + log_line
-			summary_error = ""
-			if sum_build_log_list != []:
-				for sum_log_line in sum_build_log_list:
-					summary_error = summary_error + " " + sum_log_line
-			if settings.get("PORTAGE_LOG_FILE") is not None:
-				build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config)[1]
-				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
-			else:
-				build_log_dict['logfilename'] = ""
-			move_queru_buildlog(conn, build_dict['build_job_id'], build_error, summary_error, build_log_dict)
-	CM.putConnection(conn)
+from gobs.build_log import log_fail_queru
 
 class queruaction(object):
 

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index c3d2784..d031bde 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -323,12 +323,9 @@ def get_profile_checksum(connection, config_profile):
 
 def get_packages_to_build(connection, config):
 	cursor =connection.cursor()
-	sqlQ1 = "SELECT build_job_id.build_jobs, ebuild_id.build_jobs, package_id.ebuilds FROM build_jobs, ebuilds WHERE \
-		config_id.build_jobs = (SELECT config_id FROM configs WHERE config = %s) \
-		AND extract(epoch from (NOW()) - time_stamp.build_jobs) > 7200 AND ebuild_id.build_jobs = ebuild_id.ebuilds \
-		AND ebuilds.active = 'True' ORDER BY LIMIT 1"
+	sqlQ1 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND extract(epoch from (NOW()) - build_jobs.time_stamp) > 7200 ORDER BY build_jobs.build_job_id LIMIT 1"
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
-	sqlQ3 = 'SELECT flag.uses, status.build_jobs_use FROM build_jobs_use, uses WHERE build_job_id.build_jobs_use = %s use_id.build_jobs_use = use_id.uses'
+	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs.use_id = uses.use_id'
 	cursor.execute(sqlQ1, (config,))
 	build_dict={}
 	entries = cursor.fetchone()
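
The scanners above all follow the same pattern: match a marker string ("Error 1", " * ERROR:", "configure: error:", " * QA Notice:") and then copy a fixed window of surrounding lines into the error log, using a bare except to stop at end of file. A minimal sketch of that context-window idea, with a hypothetical grab_context() helper that clamps to the log bounds instead of catching IndexError (function and variable names are illustrative, not from the source):

    import re

    def grab_context(log_lines, i, before, after):
        # Copy the window log_lines[i - before : i + after] around the
        # match at index i; slicing clamps to the ends of the list.
        start = max(i - before, 0)
        return [line + '\n' for line in log_lines[start:i + after]]

    def scan_log(log_lines):
        error_log = []
        for i, line in enumerate(log_lines):
            if re.search("configure: error:", line):
                error_log.append(".....\n")
                error_log.extend(grab_context(log_lines, i, 4, 3))
        return error_log

    print(scan_log(["checking for cc... no", "configure: error: C compiler not found"]))

One caveat about the patterns themselves: in a regular expression " * Package:" the asterisk quantifies the leading space, so the pattern matches any line containing " Package:" rather than requiring the literal asterisk that portage prints; matching it literally would need r" \* Package:".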


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-06 23:56 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-06 23:56 UTC (permalink / raw
  To: gentoo-commits

commit:     3ea3702c04cfa8a815103cc4e6afe74cfaf0172b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec  6 23:56:20 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec  6 23:56:20 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=3ea3702c

fix error in build_log.py (invalid syntax)

---
 gobs/pym/build_log.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 537103e..08164f4 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -235,11 +235,11 @@ def add_buildlog_main(settings, pkg, trees):
 	if sum_build_log_list != []:
 		for sum_log_line in sum_build_log_list:
 			summary_error = summary_error + " " + sum_log_line
-	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(self._config_profile)[1]
+	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 	log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	if build_dict['queue_id'] is None:
-		build_id = .add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
+		build_id = add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
 	else:
 		build_id = move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
 	# update_qa_repoman(conn, build_id, build_log_dict)
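
The invalid syntax came from the earlier conversion of gobs_buildlog's methods into module-level functions: deleting "self" from "self.add_new_ebuild_buildlog(...)" left a bare leading dot, and ".name(...)" on its own does not parse; the same refactor also left "self._config_profile" behind in a plain function, where it would raise NameError. A minimal sketch of the pattern, with illustrative names:

    class GobsBuildlog(object):
        def helper(self, x):
            return x + 1

        def main(self, x):
            return self.helper(x)   # method call: the self. prefix is required

    # After the refactor the "self." must go entirely; leaving the dot
    # behind (".helper(x)") is a SyntaxError at import time.
    def helper(x):
        return x + 1

    def main(x):
        return helper(x)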


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-07  0:02 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-07  0:02 UTC (permalink / raw
  To: gentoo-commits

commit:     028d8969b296e4ccb90d842f76d5bca0d4fdfac7
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  7 00:02:08 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec  7 00:02:08 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=028d8969

fix missing FROM-clause entry for table

---
 gobs/pym/pgsql_querys.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index d031bde..cb8691c 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -325,7 +325,7 @@ def get_packages_to_build(connection, config):
 	cursor =connection.cursor()
 	sqlQ1 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND extract(epoch from (NOW()) - build_jobs.time_stamp) > 7200 ORDER BY build_jobs.build_job_id LIMIT 1"
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
-	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs.use_id = uses.use_id'
+	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id'
 	cursor.execute(sqlQ1, (config,))
 	build_dict={}
 	entries = cursor.fetchone()
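
The "missing FROM-clause entry" error comes from the WHERE clause referencing build_jobs.use_id while the FROM list only names build_jobs_use and uses; every qualified column must name a table (or alias) that actually appears in FROM. A minimal sketch of the corrected lookup, assuming an already-open DB-API connection to the gobs database:

    cursor = connection.cursor()
    sqlQ3 = 'SELECT uses.flag, build_jobs_use.status \
        FROM build_jobs_use, uses \
        WHERE build_jobs_use.build_job_id = %s \
        AND build_jobs_use.use_id = uses.use_id'    # both tables are in FROM
    cursor.execute(sqlQ3, (build_job_id,))
    for flag, status in cursor.fetchall():
        print(flag, status)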


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-07  0:07 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-07  0:07 UTC (permalink / raw
  To: gentoo-commits

commit:     6c698d97f24c7a5b239da3809e0a2cd66e56613f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  7 00:07:22 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec  7 00:07:22 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6c698d97

fix load_emerge_config() is not defined

---
 gobs/pym/build_queru.py |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index c071aaf..d434d33 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -27,6 +27,8 @@ from portage.dep import check_required_use
 from gobs.main import emerge_main
 from gobs.build_log import log_fail_queru
 
+from gobs.actions import load_emerge_config
+
 class queruaction(object):
 
 	def __init__(self, config_profile):
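
load_emerge_config() is defined in gobs/pym/actions.py, so calling it from build_queru.py without binding the name first fails with NameError — and only at call time, since Python resolves global names late. A generic two-module illustration (module and function names here are illustrative):

    # actions.py
    def load_emerge_config():
        return {}

    # build_queru.py, before the fix: NameError when the line runs
    def setup():
        return load_emerge_config()

    # build_queru.py, after the fix: the import binds the name in this module
    from actions import load_emerge_config

    def setup():
        return load_emerge_config()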


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-07 14:22 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-07 14:22 UTC (permalink / raw
  To: gentoo-commits

commit:     84bb1a2d223a5ce143d339151661e66b40f067cc
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  7 14:22:30 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec  7 14:22:30 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=84bb1a2d

disable post_message

---
 gobs/pym/build_queru.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 9eace00..ef90b67 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -98,8 +98,8 @@ class queruaction(object):
 		log_msg = "build_cpv_list: %s" % (build_cpv_list,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 		argscmd = []
-		if not "nooneshort" in build_dict['post_message']:
-			argscmd.append("--oneshot")
+		#if not "nooneshort" in build_dict['post_message']:
+		argscmd.append("--oneshot")
 		argscmd.append("--buildpkg")
 		argscmd.append("--usepkg")
 		for build_cpv in build_cpv_list:
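
With the post_message check commented out, every queued build is now emerged the same way: --oneshot keeps the package out of the world file, while --buildpkg and --usepkg build binary packages and reuse existing ones. A sketch of the argument vector this loop produces before it is handed to emerge_main() (the cpv values are made up):

    argscmd = ["--oneshot", "--buildpkg", "--usepkg"]
    build_cpv_list = ["=dev-libs/libxml2-2.8.0", "=app-editors/nano-2.3.1"]
    for build_cpv in build_cpv_list:
        argscmd.append(build_cpv)
    # argscmd == ['--oneshot', '--buildpkg', '--usepkg',
    #             '=dev-libs/libxml2-2.8.0', '=app-editors/nano-2.3.1']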


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-07 14:29 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-07 14:29 UTC (permalink / raw
  To: gentoo-commits

commit:     ab75bcc350efc8ac810a78d926a9a679036a0705
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  7 14:29:34 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec  7 14:29:34 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=ab75bcc3

fix local variable 'mydepgraph' referenced before assignment

---
 gobs/pym/actions.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index a27c5f9..e848859 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -139,7 +139,7 @@ def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfil
 				build_dict['type_fail'] = "Slot blocking"
 				build_dict['check_fail'] = True
 	
-	return build_dict, success, settings, trees, mtimedb
+	return build_dict, success, settings, trees, mtimedb, mydepgraph
 
 def action_build(settings, trees, mtimedb,
 	myopts, myaction, myfiles, spinner, build_dict):
@@ -358,7 +358,7 @@ def action_build(settings, trees, mtimedb,
 			print(darkgreen("emerge: It seems we have nothing to resume..."))
 			return os.EX_OK
 
-		build_dict, success, settings, trees, mtimedb = build_mydepgraph(settings,
+		build_dict, success, settings, trees, mtimedb, mydepgraph = build_mydepgraph(settings,
 			trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict)
 
 		if not success:
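
The depgraph was built inside build_mydepgraph() but never handed back, so mydepgraph was only bound on some paths in action_build() and Python raised UnboundLocalError at the first later use; returning it through the tuple, as above, binds it on every path. A minimal reproduction with illustrative names:

    def action_build(resume):
        if resume:
            mydepgraph = "depgraph from resume"
        # When resume is False this raises:
        # UnboundLocalError: local variable 'mydepgraph' referenced before assignment
        return mydepgraph

    def build_mydepgraph():
        return True, "depgraph"

    def action_build_fixed(resume):
        success, mydepgraph = build_mydepgraph()  # bound on every path
        return mydepgraph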


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-07 14:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-07 14:33 UTC (permalink / raw
  To: gentoo-commits

commit:     308e4d0e1abfb2797322e7aff81e200811a82c24
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  7 14:33:25 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec  7 14:33:25 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=308e4d0e

fix check_file_in_manifest() takes exactly 4 arguments (5 given)

---
 gobs/pym/manifest.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/manifest.py b/gobs/pym/manifest.py
index 74b3e90..20a4aeb 100644
--- a/gobs/pym/manifest.py
+++ b/gobs/pym/manifest.py
@@ -100,7 +100,7 @@ class gobs_manifest(object):
 								 % os.path.join(filesdir, f)
 		return None
 
-	def check_file_in_manifest(self, portdb, cpv, build_use_flags_list):
+	def check_file_in_manifest(self, portdb, cpv, build_use_flags_list, repo):
 		myfetchlistdict = portage.FetchlistDict(self._pkgdir, self._mysettings, portdb)
 		my_manifest = portage.Manifest(self._pkgdir, self._mysettings['DISTDIR'], fetchlist_dict=myfetchlistdict, manifest1_compat=False, from_scratch=False)
 		ebuild_version = portage.versions.cpv_getversion(cpv)
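
Python 2 counts the bound instance in the arity message, so a method declared as check_file_in_manifest(self, portdb, cpv, build_use_flags_list) "takes exactly 4 arguments", and a call site that also passes a repo makes 5; widening the signature, as above, is the fix. A minimal reproduction (class and argument names illustrative):

    class Manifest(object):
        def check_file_in_manifest(self, portdb, cpv, use_flags):
            return (portdb, cpv, use_flags)

    m = Manifest()
    # m.check_file_in_manifest('portdb', 'dev-libs/foo-1.0', [], 'gentoo')
    # TypeError: check_file_in_manifest() takes exactly 4 arguments (5 given)

    class ManifestFixed(object):
        def check_file_in_manifest(self, portdb, cpv, use_flags, repo):
            return (portdb, cpv, use_flags, repo)

    print(ManifestFixed().check_file_in_manifest('portdb', 'dev-libs/foo-1.0', [], 'gentoo'))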


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-07 14:58 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-07 14:58 UTC (permalink / raw
  To: gentoo-commits

commit:     7ed42c1b0bd9448bbddd94646c7b70bfb3a967d7
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec  7 14:56:57 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec  7 14:56:57 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7ed42c1b

Update the Scheduler.py

---
 gobs/pym/Scheduler.py |  118 +++++++++++++++++++++++++++++++++++--------------
 gobs/pym/actions.py   |    2 +-
 2 files changed, 85 insertions(+), 35 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 229c595..9614503 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -28,6 +28,8 @@ from portage._sets import SETPREFIX
 from portage._sets.base import InternalPackageSet
 from portage.util import ensure_dirs, writemsg, writemsg_level
 from portage.util.SlotObject import SlotObject
+from portage.util._async.SchedulerInterface import SchedulerInterface
+from portage.util._eventloop.EventLoop import EventLoop
 from portage.package.ebuild.digestcheck import digestcheck
 from portage.package.ebuild.digestgen import digestgen
 from portage.package.ebuild.doebuild import (_check_temp_dir,
@@ -50,6 +52,7 @@ from _emerge.EbuildFetcher import EbuildFetcher
 from _emerge.EbuildPhase import EbuildPhase
 from _emerge.emergelog import emergelog
 from _emerge.FakeVartree import FakeVartree
+from _emerge.getloadavg import getloadavg
 from _emerge._find_deep_system_runtime_deps import _find_deep_system_runtime_deps
 from _emerge._flush_elog_mod_echo import _flush_elog_mod_echo
 from _emerge.JobStatusDisplay import JobStatusDisplay
@@ -59,13 +62,16 @@ from _emerge.PackageMerge import PackageMerge
 from _emerge.PollScheduler import PollScheduler
 from _emerge.SequentialTaskQueue import SequentialTaskQueue
 
-from gobs.build_log import gobs_buildlog
+from gobs.build_log import add_buildlog_main
 
 if sys.hexversion >= 0x3000000:
 	basestring = str
 
 class Scheduler(PollScheduler):
 
+	# max time between loadavg checks (milliseconds)
+	_loadavg_latency = 30000
+
 	# max time between display status updates (milliseconds)
 	_max_display_latency = 3000
 
@@ -81,7 +87,7 @@ class Scheduler(PollScheduler):
 	_opts_no_self_update = frozenset(["--buildpkgonly",
 		"--fetchonly", "--fetch-all-uri", "--pretend"])
 
-	class _iface_class(PollScheduler._sched_iface_class):
+	class _iface_class(SchedulerInterface):
 		__slots__ = ("fetch",
 			"scheduleSetup", "scheduleUnpack")
 
@@ -138,7 +144,7 @@ class Scheduler(PollScheduler):
 
 	def __init__(self, settings, trees, mtimedb, myopts,
 		spinner, mergelist=None, favorites=None, graph_config=None):
-		PollScheduler.__init__(self)
+		PollScheduler.__init__(self, main=True)
 
 		if mergelist is not None:
 			warnings.warn("The mergelist parameter of the " + \
@@ -217,14 +223,15 @@ class Scheduler(PollScheduler):
 		fetch_iface = self._fetch_iface_class(log_file=self._fetch_log,
 			schedule=self._schedule_fetch)
 		self._sched_iface = self._iface_class(
+			self._event_loop,
+			is_background=self._is_background,
 			fetch=fetch_iface,
 			scheduleSetup=self._schedule_setup,
-			scheduleUnpack=self._schedule_unpack,
-			**dict((k, getattr(self.sched_iface, k))
-			for k in self.sched_iface.__slots__))
+			scheduleUnpack=self._schedule_unpack)
 
 		self._prefetchers = weakref.WeakValueDictionary()
 		self._pkg_queue = []
+		self._jobs = 0
 		self._running_tasks = {}
 		self._completed_tasks = set()
 
@@ -243,9 +250,7 @@ class Scheduler(PollScheduler):
 		# The load average takes some time to respond when new
 		# jobs are added, so we need to limit the rate of adding
 		# new jobs.
-		self._job_delay_max = 10
-		self._job_delay_factor = 1.0
-		self._job_delay_exp = 1.5
+		self._job_delay_max = 5
 		self._previous_job_start_time = None
 
 		# This is used to memoize the _choose_pkg() result when
@@ -300,15 +305,10 @@ class Scheduler(PollScheduler):
 			if not portage.dep.match_from_list(
 				portage.const.PORTAGE_PACKAGE_ATOM, [x]):
 				continue
-			if self._running_portage is None or \
-				self._running_portage.cpv != x.cpv or \
-				'9999' in x.cpv or \
-				'git' in x.inherited or \
-				'git-2' in x.inherited:
-				rval = _check_temp_dir(self.settings)
-				if rval != os.EX_OK:
-					return rval
-				_prepare_self_update(self.settings)
+			rval = _check_temp_dir(self.settings)
+			if rval != os.EX_OK:
+				return rval
+			_prepare_self_update(self.settings)
 			break
 
 		return os.EX_OK
@@ -328,10 +328,13 @@ class Scheduler(PollScheduler):
 		self._set_graph_config(graph_config)
 		self._blocker_db = {}
 		dynamic_deps = self.myopts.get("--dynamic-deps", "y") != "n"
+		ignore_built_slot_operator_deps = self.myopts.get(
+			"--ignore-built-slot-operator-deps", "n") == "y"
 		for root in self.trees:
 			if graph_config is None:
 				fake_vartree = FakeVartree(self.trees[root]["root_config"],
-					pkg_cache=self._pkg_cache, dynamic_deps=dynamic_deps)
+					pkg_cache=self._pkg_cache, dynamic_deps=dynamic_deps,
+					ignore_built_slot_operator_deps=ignore_built_slot_operator_deps)
 				fake_vartree.sync()
 			else:
 				fake_vartree = graph_config.trees[root]['vartree']
@@ -653,10 +656,11 @@ class Scheduler(PollScheduler):
 				if value and value.strip():
 					continue
 				msg = _("%(var)s is not set... "
-					"Are you missing the '%(configroot)setc/make.profile' symlink? "
+					"Are you missing the '%(configroot)s%(profile_path)s' symlink? "
 					"Is the symlink correct? "
 					"Is your portage tree complete?") % \
-					{"var": var, "configroot": settings["PORTAGE_CONFIGROOT"]}
+					{"var": var, "configroot": settings["PORTAGE_CONFIGROOT"],
+					"profile_path": portage.const.PROFILE_PATH}
 
 				out = portage.output.EOutput()
 				for line in textwrap.wrap(msg, 70):
@@ -769,10 +773,10 @@ class Scheduler(PollScheduler):
 
 		failures = 0
 
-		# Use a local PollScheduler instance here, since we don't
+		# Use a local EventLoop instance here, since we don't
 		# want tasks here to trigger the usual Scheduler callbacks
 		# that handle job scheduling and status display.
-		sched_iface = PollScheduler().sched_iface
+		sched_iface = SchedulerInterface(EventLoop(main=False))
 
 		for x in self._mergelist:
 			if not isinstance(x, Package):
@@ -1249,7 +1253,6 @@ class Scheduler(PollScheduler):
 		pkg = merge.merge.pkg
 		settings = merge.merge.settings
 		trees = self.trees
-		init_buildlog = gobs_buildlog()
 		if merge.returncode != os.EX_OK:
 			build_dir = settings.get("PORTAGE_BUILDDIR")
 			build_log = settings.get("PORTAGE_LOG_FILE")
@@ -1261,7 +1264,7 @@ class Scheduler(PollScheduler):
 			if not self._terminated_tasks:
 				self._failed_pkg_msg(self._failed_pkgs[-1], "install", "to")
 				self._status_display.failed = len(self._failed_pkgs)
-			init_buildlog.add_buildlog_main(settings, pkg, trees)
+			add_buildlog_main(settings, pkg, trees)
 			return
 
 		self._task_complete(pkg)
@@ -1280,7 +1283,6 @@ class Scheduler(PollScheduler):
 				self._pkg_cache.pop(pkg_to_replace, None)
 
 		if pkg.installed:
-			init_buildlog.add_buildlog_main(settings, pkg, trees)
 			return
 
 		# Call mtimedb.commit() after each merge so that
@@ -1291,7 +1293,6 @@ class Scheduler(PollScheduler):
 		if not mtimedb["resume"]["mergelist"]:
 			del mtimedb["resume"]
 		mtimedb.commit()
-		init_buildlog.add_buildlog_main(settings, pkg, trees)
 
 	def _build_exit(self, build):
 		self._running_tasks.pop(id(build), None)
@@ -1318,7 +1319,6 @@ class Scheduler(PollScheduler):
 			settings = build.settings
 			trees = self.trees
 			pkg=build.pkg
-			init_buildlog = gobs_buildlog()
 			build_dir = settings.get("PORTAGE_BUILDDIR")
 			build_log = settings.get("PORTAGE_LOG_FILE")
 
@@ -1330,7 +1330,7 @@ class Scheduler(PollScheduler):
 				self._failed_pkg_msg(self._failed_pkgs[-1], "emerge", "for")
 				self._status_display.failed = len(self._failed_pkgs)
 			self._deallocate_config(build.settings)
-			init_buildlog.add_buildlog_main(settings, pkg, trees)
+			add_buildlog_main(settings, pkg, trees)
 		self._jobs -= 1
 		self._status_display.running = self._jobs
 		self._schedule()
@@ -1345,6 +1345,38 @@ class Scheduler(PollScheduler):
 		blocker_db = self._blocker_db[pkg.root]
 		blocker_db.discardBlocker(pkg)
 
+	def _main_loop(self):
+		term_check_id = self._event_loop.idle_add(self._termination_check)
+		loadavg_check_id = None
+		if self._max_load is not None and \
+			self._loadavg_latency is not None and \
+			(self._max_jobs is True or self._max_jobs > 1):
+			# We have to schedule periodically, in case the load
+			# average has changed since the last call.
+			loadavg_check_id = self._event_loop.timeout_add(
+				self._loadavg_latency, self._schedule)
+
+		try:
+			# Populate initial event sources. Unless we're scheduling
+			# based on load average, we only need to do this once
+			# here, since it can be called during the loop from within
+			# event handlers.
+			self._schedule()
+
+			# Loop while there are jobs to be scheduled.
+			while self._keep_scheduling():
+				self._event_loop.iteration()
+
+			# Clean shutdown of previously scheduled jobs. In the
+			# case of termination, this allows for basic cleanup
+			# such as flushing of buffered output to logs.
+			while self._is_work_scheduled():
+				self._event_loop.iteration()
+		finally:
+			self._event_loop.source_remove(term_check_id)
+			if loadavg_check_id is not None:
+				self._event_loop.source_remove(loadavg_check_id)
+
 	def _merge(self):
 
 		if self._opts_no_background.intersection(self.myopts):
@@ -1355,8 +1387,10 @@ class Scheduler(PollScheduler):
 		failed_pkgs = self._failed_pkgs
 		portage.locks._quiet = self._background
 		portage.elog.add_listener(self._elog_listener)
-		display_timeout_id = self.sched_iface.timeout_add(
-			self._max_display_latency, self._status_display.display)
+		display_timeout_id = None
+		if self._status_display._isatty and not self._status_display.quiet:
+			display_timeout_id = self._event_loop.timeout_add(
+				self._max_display_latency, self._status_display.display)
 		rval = os.EX_OK
 
 		try:
@@ -1365,7 +1399,8 @@ class Scheduler(PollScheduler):
 			self._main_loop_cleanup()
 			portage.locks._quiet = False
 			portage.elog.remove_listener(self._elog_listener)
-			self.sched_iface.source_remove(display_timeout_id)
+			if display_timeout_id is not None:
+				self._event_loop.source_remove(display_timeout_id)
 			if failed_pkgs:
 				rval = failed_pkgs[-1].returncode
 
@@ -1503,6 +1538,9 @@ class Scheduler(PollScheduler):
 	def _is_work_scheduled(self):
 		return bool(self._running_tasks)
 
+	def _running_job_count(self):
+		return self._jobs
+
 	def _schedule_tasks(self):
 
 		while True:
@@ -1552,15 +1590,27 @@ class Scheduler(PollScheduler):
 		if self._jobs and self._max_load is not None:
 
 			current_time = time.time()
+			try:
+				avg1, avg5, avg15 = getloadavg()
+			except OSError:
+				return False
 
-			delay = self._job_delay_factor * self._jobs ** self._job_delay_exp
+			delay = self._job_delay_max * avg1 / self._max_load
 			if delay > self._job_delay_max:
 				delay = self._job_delay_max
-			if (current_time - self._previous_job_start_time) < delay:
+			elapsed_seconds = current_time - self._previous_job_start_time
+			# elapsed_seconds < 0 means the system clock has been adjusted
+			if elapsed_seconds > 0 and elapsed_seconds < delay:
+				self._event_loop.timeout_add(
+					1000 * (delay - elapsed_seconds), self._schedule_once)
 				return True
 
 		return False
 
+	def _schedule_once(self):
+		self._schedule()
+		return False
+
 	def _schedule_tasks_imp(self):
 		"""
 		@rtype: bool

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index e848859..4b48408 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -70,7 +70,7 @@ from _emerge.MetadataRegen import MetadataRegen
 from _emerge.Package import Package
 from _emerge.ProgressHandler import ProgressHandler
 from _emerge.RootConfig import RootConfig
-from _emerge.Scheduler import Scheduler
+from gobs.Scheduler import Scheduler
 from _emerge.search import search
 from _emerge.SetArg import SetArg
 from _emerge.show_invalid_depstring_notice import show_invalid_depstring_notice
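
The scheduler change above replaces the old exponential per-job delay with one proportional to the 1-minute load average — delay = _job_delay_max * avg1 / _max_load, capped at _job_delay_max — and, instead of polling, re-arms a timeout for the remaining delay. A standalone sketch of that throttle policy, assuming os.getloadavg() is available (it is on Linux):

    import os
    import time

    JOB_DELAY_MAX = 5.0  # seconds, mirroring _job_delay_max above

    def should_delay_new_job(max_load, previous_job_start_time):
        """Return True if starting another job should wait for the load to settle."""
        try:
            avg1, avg5, avg15 = os.getloadavg()
        except OSError:
            return False
        delay = min(JOB_DELAY_MAX * avg1 / max_load, JOB_DELAY_MAX)
        elapsed = time.time() - previous_job_start_time
        # elapsed < 0 means the system clock was adjusted backwards; start anyway
        return 0 < elapsed < delay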


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-11 23:38 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-11 23:38 UTC (permalink / raw
  To: gentoo-commits

commit:     5db76e181fb221a671d6c3fb39caaab90e6edb6a
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 11 23:38:01 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Dec 11 23:38:01 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=5db76e18

fix add_new_buildlog()

---
 gobs/pym/build_log.py    |  132 ++++++++++++++++++++--------------------------
 gobs/pym/build_queru.py  |   20 ++++---
 gobs/pym/flags.py        |   12 ++---
 gobs/pym/pgsql_querys.py |  129 ++++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 195 insertions(+), 98 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 08164f4..db84965 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -2,6 +2,7 @@ from __future__ import print_function
 import re
 import os
 import platform
+import hashlib
 from gobs.text import get_log_text_list
 from portage.versions import catpkgsplit, cpv_getversion
 from gobs.repoman_gobs import gobs_repoman
@@ -30,53 +31,23 @@ def get_build_dict_db(settings, pkg):
 	cpvr_list = catpkgsplit(pkg.cpv, silent=1)
 	categories = cpvr_list[0]
 	package = cpvr_list[1]
+	repo = pkg.repo
 	ebuild_version = cpv_getversion(pkg.cpv)
-	log_msg = "cpv: %s" % (pkg.cpv,)
+	log_msg = "Logging %s:%s" % (pkg.cpv, repo,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	init_package = gobs_package(settings, myportdb)
-	package_id = have_package_db(conn, categories, package)
-	# print("package_id %s" % package_id, file=sys.stdout)
+	package_id = get_package_id(conn, categories, package, repo)
 	build_dict = {}
-	mybuild_dict = {}
 	build_dict['ebuild_version'] = ebuild_version
 	build_dict['package_id'] = package_id
 	build_dict['cpv'] = pkg.cpv
 	build_dict['categories'] = categories
 	build_dict['package'] = package
-	build_dict['config_profile'] = config_profile
+	build_dict['config_id'] = get_config_id(conn, config_profile)
+	init_useflags = gobs_use_flags(settings, myportdb, pkg.cpv)
-	iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
-	#print 'final_use_list', final_use_list
-	if  final_use_list != []:
-		build_dict['build_useflags'] = sorted(final_use_list)
-	else:
-		build_dict['build_useflags'] = None
-	#print "build_dict['build_useflags']", build_dict['build_useflags']
-	pkgdir = os.path.join(settings['PORTDIR'], categories + "/" + package)
-	ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + package + "-" + ebuild_version + ".ebuild")[0]
-	build_dict['checksum'] = ebuild_version_checksum_tree
-	ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
-	if ebuild_id is None:
-		#print 'have any ebuild',  get_ebuild_checksum(conn, package_id, ebuild_version)
-		init_package.update_ebuild_db(build_dict)
-		ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
-	build_dict['ebuild_id'] = ebuild_id
-	queue_id = check_revision(conn, build_dict)
-	if queue_id is None:
-		build_dict['queue_id'] = None
-	else:
-		build_dict['queue_id'] = queue_id
-	CM.putConnection(conn)
-	return build_dict
-
-def add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict):
-	conn=CM.getConnection()
-	portdb = portage.portdbapi(mysettings=settings)
-	init_useflags = gobs_use_flags(settings, portdb, build_dict['cpv'])
 	iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
 	iuse = []
-	use_flags_list = []
-	use_enable_list = []
 	for iuse_line in iuse_flags_list:
 		iuse.append(init_useflags.reduce_flag(iuse_line))
 	iuse_flags_list2 = list(set(iuse))
@@ -84,17 +55,33 @@ def add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_erro
 	use_disable = list(set(iuse_flags_list2).difference(set(use_enable)))
 	use_flagsDict = {}
 	for x in use_enable:
-		use_flagsDict[x] = True
+		use_id = get_use_id(conn, x)
+		use_flagsDict[use_id] = 'True'
 	for x in use_disable:
-		use_flagsDict[x] = False
-	for u, s in  use_flagsDict.iteritems():
-		use_flags_list.append(u)
-		use_enable_list.append(s)
-	build_id = add_new_buildlog(conn, build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict)
+		use_id = get_use_id(conn, x)
+		use_flagsDict[use_id] = 'False'
+	if use_enable == [] and use_disable == []:
+		build_dict['build_useflags'] = None
+	else:
+		build_dict['build_useflags'] = use_flagsDict
+	pkgdir = myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package
+	ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir+ "/" + package + "-" + ebuild_version + ".ebuild")[0]
+	build_dict['checksum'] = ebuild_version_checksum_tree
+	ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
+	if ebuild_id is None:
+		log_msg = "%s:%s Don't have any ebuild_id!" % (pkg.cpv, repo,)
+		add_gobs_logs(conn, log_msg, "error", config_profile)
+		return
+	build_dict['ebuild_id'] = ebuild_id
+	build_job_id = get_build_job_id(conn, build_dict)
+	if build_job_id is None:
+		build_dict['build_job_id'] = None
+	else:
+		build_dict['build_job_id'] = build_job_id
 	CM.putConnection(conn)
-	return build_id
+	return build_dict
 
-def search_info(self, textline, error_log_list):
+def search_info(textline, error_log_list):
 	if re.search(" * Package:", textline):
 		error_log_list.append(textline + '\n')
 	if re.search(" * Repository:", textline):
@@ -107,7 +94,7 @@ def search_info(self, textline, error_log_list):
 		error_log_list.append(textline + '\n')
 	return error_log_list
 
-def search_error(self, logfile_text, textline, error_log_list, sum_build_log_list, i):
+def search_error(logfile_text, textline, error_log_list, sum_build_log_list, i):
 	if re.search("Error 1", textline):
 		x = i - 20
 		endline = True
@@ -123,7 +110,7 @@ def search_error(self, logfile_text, textline, error_log_list, sum_build_log_lis
 		x = i
 		endline= True
 		field = textline.split(" ")
-		sum_build_log_list.append("fail")
+		sum_build_log_list.append("True")
 		error_log_list.append(".....\n")
 		while x != i + 10 and endline:
 			try:
@@ -160,7 +147,7 @@ def search_qa(logfile_text, textline, qa_error_list, error_log_list,i):
 				x = x +1
 	return qa_error_list, error_log_list
 
-def get_buildlog_info(settings, build_dict):
+def get_buildlog_info(settings, pkg, build_dict):
 	myportdb = portage.portdbapi(mysettings=settings)
 	init_repoman = gobs_repoman(settings, myportdb)
 	logfile_text = get_log_text_list(settings.get("PORTAGE_LOG_FILE"))
@@ -177,7 +164,7 @@ def get_buildlog_info(settings, build_dict):
 		qa_error_list, error_log_list = search_qa(logfile_text, textline, qa_error_list, error_log_list, i)
 		i = i +1
 	# Run repoman check_repoman()
-	repoman_error_list = init_repoman.check_repoman(build_dict['categories'], build_dict['package'], build_dict['ebuild_version'], build_dict['config_profile'])
+	repoman_error_list = init_repoman.check_repoman(build_dict['cpv'], pkg.repo)
 	if repoman_error_list != []:
 		sum_build_log_list.append("repoman")
 	if qa_error_list != []:
@@ -223,44 +210,41 @@ def write_msg_file(msg, log_path):
 def add_buildlog_main(settings, pkg, trees):
 	conn=CM.getConnection()
 	build_dict = get_build_dict_db(settings, pkg)
+	if build_dict is None:
+		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+		CM.putConnection(conn)
+		return
 	build_log_dict = {}
-	build_log_dict = get_buildlog_info(settings, build_dict)
-	sum_build_log_list = build_log_dict['summary_error_list']
+	build_log_dict = get_buildlog_info(settings, pkg, build_dict)
 	error_log_list = build_log_dict['error_log_list']
 	build_error = ""
+	log_hash = hashlib.sha256()
+	build_error = ""
 	if error_log_list != []:
 		for log_line in error_log_list:
 			build_error = build_error + log_line
-	summary_error = ""
-	if sum_build_log_list != []:
-		for sum_log_line in sum_build_log_list:
-			summary_error = summary_error + " " + sum_log_line
+			log_hash.update(log_line)
+	build_log_dict['build_error'] = build_error
+	build_log_dict['log_hash'] = log_hash.hexdigest()
 	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 	log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
-	if build_dict['queue_id'] is None:
-		build_id = add_new_ebuild_buildlog(settings, pkg, build_dict, build_error, summary_error, build_log_dict)
-	else:
-		build_id = move_queru_buildlog(conn, build_dict['queue_id'], build_error, summary_error, build_log_dict)
-	# update_qa_repoman(conn, build_id, build_log_dict)
+	log_id = add_new_buildlog(build_dict, build_error, summary_error, build_log_dict)
+
 	msg = ""
-	emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
-	if build_id is not None:
-		for msg_line in msg:
-			write_msg_file(msg_line, emerge_info_logfilename)
+	# emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
+	if log_id is None:
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
-		os.chmod(emerge_info_logfilename, 0o664)
-		log_msg = "Package: %s logged to db." % (pkg.cpv,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-	else:
-		# FIXME Remove the log some way so
-		# mergetask._locate_failure_log(x) works in action_build()
-		#try:
-		#	os.remove(settings.get("PORTAGE_LOG_FILE"))
-		#except:
-		#	pass
-		log_msg = "Package %s NOT logged to db." % (pkg.cpv,)
+		log_msg = "Package %s:%s NOT logged to db." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+	else:
+		# for msg_line in msg:
+		#	write_msg_file(msg_line, emerge_info_logfilename)
+		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
+		# os.chmod(emerge_info_logfilename, 0o664)
+		log_msg = "Package: %s:%s logged to db." % (pkg.cpv, pkg.repo,)
+		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 	CM.putConnection(conn)
 
 def log_fail_queru(build_dict, settings):
@@ -313,5 +297,5 @@ def log_fail_queru(build_dict, settings):
 				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
 			else:
 				build_log_dict['logfilename'] = ""
-			move_queru_buildlog(conn, build_dict['build_job_id'], build_error, summary_error, build_log_dict)
+				log_id = add_new_buildlog(conn, build_dict, build_log_dict)
 	CM.putConnection(conn)
\ No newline at end of file
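
Side note for readers tracing this hunk: the new log_hash is a SHA-256 over the
collected error lines, and it is the value add_new_buildlog() in pgsql_querys.py
below compares to recognise an identical build log. A minimal standalone sketch
(Python 2, as the surrounding iteritems() usage implies):

    import hashlib

    def hash_error_log(error_log_list):
        # Concatenate the error lines and hash them as we go; under
        # Python 2 the lines are byte strings, so update() takes them
        # directly (Python 3 would need .encode()).
        log_hash = hashlib.sha256()
        build_error = ""
        for log_line in error_log_list:
            build_error = build_error + log_line
            log_hash.update(log_line)
        return build_error, log_hash.hexdigest()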

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index ef90b67..74ec3d3 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -110,8 +110,6 @@ class queruaction(object):
 		print("Build: %s", build_dict)
 		build_fail = emerge_main(argscmd, build_dict)
 		# Run depclean
-		log_msg = "build_fail: %s" % (build_fail,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 		if not "noclean" in build_dict['post_message']:
 			depclean_fail = main_depclean()
 		try:
@@ -123,17 +121,23 @@ class queruaction(object):
 		build_dict2 = {}
 		build_dict2 = get_packages_to_build(conn, self._config_profile)
 		if build_dict['build_job_id'] == build_dict2['build_job_id']:
-			log_msg = "qurery %s was not removed" % (build_dict['build_job_id'],)
+			log_msg = "build_job %s was not removed" % (build_dict['build_job_id'],)
 			add_gobs_logs(conn, log_msg, "info", self._config_profile)
 			print("qurery was not removed")
-			build_dict['type_fail'] = "Querey was not removed"
-			build_dict['check_fail'] = True
+			if build_fail is True
+				build_dict['type_fail'] = "Emerge faild"
+				build_dict['check_fail'] = True
+				log_msg = "Emerge faild!"
+				add_gobs_logs(conn, log_msg, "info", self._config_profile)
+			else:
+				build_dict['type_fail'] = "Querey was not removed"
+				build_dict['check_fail'] = True
 			log_fail_queru(build_dict, settings)
-		if build_fail is False or depclean_fail is False:
+		if build_fail is True
 			CM.putConnection(conn)
-			return False
+			return True
 		CM.putConnection(conn)
-		return True
+		return False
 
 	def procces_qureru(self):
 		conn=CM.getConnection()

diff --git a/gobs/pym/flags.py b/gobs/pym/flags.py
index 1c2377e..5838294 100644
--- a/gobs/pym/flags.py
+++ b/gobs/pym/flags.py
@@ -188,9 +188,7 @@ class gobs_use_flags(object):
 	def comper_useflags(self, build_dict):
 		iuse_flags, use_enable = self.get_flags()
 		iuse = []
-		print("use_enable", use_enable)
 		build_use_flags_dict = build_dict['build_useflags']
-		print("build_use_flags_dict", build_use_flags_dict)
 		build_use_flags_list = []
 		if use_enable == []:
 			if build_use_flags_dict is None:
@@ -201,17 +199,15 @@ class gobs_use_flags(object):
 		use_disable = list(set(iuse_flags_list).difference(set(use_enable)))
 		use_flagsDict = {}
 		for x in use_enable:
-			use_flagsDict[x] = True
+			use_flagsDict[x] = 'True'
 		for x in use_disable:
-			use_flagsDict[x] = False
+			use_flagsDict[x] = 'False'
 		print("use_flagsDict", use_flagsDict)
 		for k, v in use_flagsDict.iteritems():
-			print("tree use flags", k, v)
-			print("db use flags", k, build_use_flags_dict[k])
 			if build_use_flags_dict[k] != v:
-				if build_use_flags_dict[k] is True:
+				if build_use_flags_dict[k] == 'True':
 					build_use_flags_list.append(k)
-				if build_use_flags_dict[k] is False:
+				if build_use_flags_dict[k] == 'False':
 					build_use_flags_list.append("-" + k)
 		if build_use_flags_list == []:
 			build_use_flags_list = None
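
Side note: the flag status moves from booleans to the strings 'True'/'False'
here because that is how build_jobs_use stores it, so both sides of the
comparison are text. A compact sketch of the diffing step with hypothetical
inputs (Python 2):

    # Hypothetical inputs: tree-side and DB-side flag status as text.
    use_flagsDict = {'ssl': 'True', 'ipv6': 'False'}
    build_use_flags_dict = {'ssl': 'False', 'ipv6': 'False'}
    build_use_flags_list = []
    for k, v in use_flagsDict.iteritems():
        if build_use_flags_dict[k] != v:
            if build_use_flags_dict[k] == 'True':
                build_use_flags_list.append(k)
            elif build_use_flags_dict[k] == 'False':
                build_use_flags_list.append("-" + k)
    # -> ['-ssl']: the job wants ssl off while the tree enables it.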

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index cb8691c..2149798 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -212,10 +212,10 @@ def get_config_id_list(connection):
 		config_id_list.append(config_id[0])
 	return config_id_list
 
-def get_config_db(connection, config_id):
+def get_config_id(connection, config):
 	cursor = connection.cursor()
-	sqlQ = 'SELECT config FROM configs WHERE config_id = %s'
-	cursor.execute(sqlQ,(config_id,))
+	sqlQ = 'SELECT config_id FROM configs WHERE config = %s'
+	cursor.execute(sqlQ,(config,))
 	entries = cursor.fetchone()
 	if entries is None:
 		return None
@@ -305,7 +305,7 @@ def get_build_jobs_id_list_config(connection, config_id):
 	entries = cursor.fetchall()
 	return entries
 
-def del_old_build_jobs(connection, queue_id):
+def del_old_build_jobs(connection, build_job_id):
 	cursor = connection.cursor()
 	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
 	sqlQ2 = 'DELETE FROM build_jobs_retest WHERE build_job_id  = %s'
@@ -326,11 +326,15 @@ def get_packages_to_build(connection, config):
 	sqlQ1 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND extract(epoch from (NOW()) - build_jobs.time_stamp) > 7200 ORDER BY build_jobs.build_job_id LIMIT 1"
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
 	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id'
-	cursor.execute(sqlQ1, (config,))
-	build_dict={}
+	sqlQ4 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND build_jobs.status = 'Now' LIMIT 1"
+	cursor.execute(sqlQ4, (config,))
 	entries = cursor.fetchone()
 	if entries is None:
-		return None
+		cursor.execute(sqlQ1, (config,))
+		entries = cursor.fetchone()
+	if entries is None:
+		return
+	build_dict={}
 	build_dict['build_job_id'] = entries[0]
 	build_dict['ebuild_id']= entries[1]
 	build_dict['package_id'] = entries[2]
@@ -371,4 +375,113 @@ def add_fail_querue_dict(connection, fail_querue_dict):
 	sqlQ2 = 'UPDATE build_jobs SET time_stamp = NOW() WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (fail_querue_dict['build_job_id'],fail_querue_dict['fail_type'], fail_querue_dict['fail_times']))
 	cursor.execute(sqlQ2, (fail_querue_dict['build_job_id'],))
-	connection.commit()
\ No newline at end of file
+	connection.commit()
+
+def get_ebuild_id_db_checksum(connection, build_dict):
+	cursor = connection.cursor()
+	sqlQ = 'SELECT ebuild_id FROM ebuilds WHERE version = %s AND checksum = %s AND package_id = %s'
+	cursor.execute(sqlQ, (build_dict['ebuild_version'], build_dict['checksum'], build_dict['package_id']))
+	ebuild_id = cursor.fetchone()
+	if ebuild_id is None:
+		return 
+        return ebuild_id[0]
+
+def get_build_job_id(connection, build_dict):
+	cursor = connection.cursor()
+	sqlQ1 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s AND config_id = (SELECT config_id FROM configs WHERE config = %s) AND status = 'Waiting'"
+	sqlQ2 = "SELECT uses.flag FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id AND build_jobs_use.status = 'enable'"
+	cursor.execute(sqlQ1, (build_dict['ebuild_id'], build_dict['config_id']))
+	build_job_id_list = cursor.fetchall()
+	if build_job_id_list == []:
+		return
+	for build_job_id in build_job_id_list:
+		cursor.execute(sqlQ2, (build_job_id[0],))
+		entries = cursor.fetchall()
+		useflagsdict = {}
+		if entries == []:
+			useflagsdict = None
+		else:
+			for x in entries:
+				useflagsdict[x[0]] = x[1]
+		if useflagsdict == build_dict['build_useflags']:
+			return build_job_id[0]
+
+
+def add_new_buildlog(connection, build_dict, build_log_dict):
+	cursor = connection.cursor()
+	sqlQ1 = 'SELECT build_log_id FROM build_logs WHERE ebuild_id = %s'
+	sqlQ2 = 'INSERT INTO build_logs (ebuild_id) VALUES (%s) RETURNING build_log_id'
+	sqlQ3 = "UPDATE build_logs SET fail = 'true', summery = %s, log_hash = %s WHERE build_log_id = %s"
+	sqlQ4 = 'INSERT INTO build_logs_config (build_log_id, config_id, logname) VALUES (%s, %s, %s)'
+	sqlQ6 = 'INSERT INTO build_logs_use (build_log_id, use_id, status) VALUES (%s, %s, %s)'
+	sqlQ7 = 'SELECT log_hash FROM build_logs WHERE build_log_id = %s'
+	sqlQ8 = 'SELECT use_id, status FROM build_logs_use WHERE build_log_id = %s'
+	sqlQ9 = 'SELECT config_id FROM build_logs_config WHERE build_log_id = %s'
+	sqlQ10 = "UPDATE build_logs SET fail = false, log_hash = %s WHERE build_log_id = %s"
+	build_log_id_list = []
+	cursor.execute(sqlQ1, (build_dict['ebuild_id'],))
+	entries = cursor.fetchall()
+	if not entries == []:
+		for build_log_id in entries:
+			build_log_id_list.append(build_log_id[0])
+	else:
+		build_log_id_list = None
+
+	def build_log_id_match(build_log_id_list, build_dict, build_log_dict):
+		for build_log_id in build_log_id_list:
+			cursor.execute(sqlQ7, (build_log_id,))
+			log_hash = cursor.fetchone()
+			cursor.execute(sqlQ8, (build_log_id,))
+			entries = cursor.fetchall()
+			useflagsdict = {}
+			if entries == []:
+				useflagsdict = None
+				else:
+					for x in entries:
+						useflagsdict[x[0]] = x[1]
+						print(build_log_id)
+						print(log_hash[0], build_log_dict['log_hash'], build_dict['build_useflags'], useflagsdict)
+						if log_hash[0] == build_log_dict['log_hash'] and build_dict['build_useflags'] == useflagsdict:
+						cursor.execute(sqlQ9, (build_log_id,))
+						config_id_list = []
+						for config_id in cursor.fetchall():
+							config_id_list.append(config_id[0])
+							print(build_dict['config_id'], config_id_list)
+							if build_dict['config_id'] in config_id_list:
+							return None, True
+						else:
+							cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
+							return build_log_id, True
+		return None, False
+
+	def build_log_id_no_match(build_dict, build_log_dict):
+		cursor.execute(sqlQ2, (build_dict['ebuild_id'],))
+		build_log_id = cursor.fetchone()[0]
+		if 'True' in build_log_dict['summary_error_list']:
+			cursor.execute(sqlQ3, (build_log_dict['build_error'], build_log_dict['log_hash'], build_log_id,))
+		else:
+			cursor.execute(sqlQ10, (build_log_dict['log_hash'], build_log_id,))
+		cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
+		if not build_dict['build_useflags'] is None:
+			for use_id, status in  build_dict['build_useflags'].iteritems():
+				cursor.execute(sqlQ6, (build_log_id, use_id, status))
+		return build_log_id
+
+	print(build_dict['build_job_id'], build_log_id_list)
+	if build_dict['build_job_id'] is None and build_log_id_list is None:
+		return build_log_id_no_match(build_dict, build_log_dict)
+	elif build_dict['build_job_id'] is None and not build_log_id_list is None:
+		build_log_id, match = build_log_id_match(build_log_id_list, build_dict, build_log_dict)
+		if not match:
+			build_log_id = build_log_id_no_match(build_dict, build_log_dict)
+		return build_log_id
+	elif not build_dict['build_job_id'] is None and not build_log_id_list is None:
+		build_log_id, match = build_log_id_match(build_log_id_list, build_dict, build_log_dict)
+		if not match:
+			build_log_id = build_log_id_no_match(build_dict, build_log_dict)
+			del_old_build_jobs(connection, build_dict['build_job_id'])
+		return build_log_id
+	elif not build_dict['build_job_id'] is None and build_log_id_list is None:
+		build_log_id = build_log_id_no_match(build_dict, build_log_dict)
+		del_old_build_jobs(connection, build_dict['build_job_id'])
+		return build_log_id
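
Side note: the tail of add_new_buildlog() dispatches on two facts, whether the
build came from a queued job and whether build_logs already holds rows for this
ebuild. A condensed, illustrative restatement of the four branches (the
follow-up commits below repair the stray indentation inside the helpers):

    def choose_build_log(connection, build_dict, build_log_dict, build_log_id_list):
        # Condensed restatement only; build_log_id_match and
        # build_log_id_no_match are the nested helpers from the diff.
        if build_log_id_list is not None:
            build_log_id, match = build_log_id_match(
                build_log_id_list, build_dict, build_log_dict)
        else:
            build_log_id, match = None, False
        if not match:
            build_log_id = build_log_id_no_match(build_dict, build_log_dict)
            if build_dict['build_job_id'] is not None:
                # Only a fresh insert retires the queued build job.
                del_old_build_jobs(connection, build_dict['build_job_id'])
        return build_log_id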


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-11 23:48 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-11 23:48 UTC (permalink / raw
  To: gentoo-commits

commit:     41003d4655d0a346ac76e39e204f1bba560b5e9d
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 11 23:48:31 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Dec 11 23:48:31 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=41003d46

fix some invalid syntax

---
 gobs/pym/actions.py      |    2 +-
 gobs/pym/build_queru.py  |    4 ++--
 gobs/pym/pgsql_querys.py |   26 ++++++++++++--------------
 3 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index 4b48408..6842ff3 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -70,7 +70,7 @@ from _emerge.MetadataRegen import MetadataRegen
 from _emerge.Package import Package
 from _emerge.ProgressHandler import ProgressHandler
 from _emerge.RootConfig import RootConfig
-from gobs..Scheduler import Scheduler
+from gobs.Scheduler import Scheduler
 from _emerge.search import search
 from _emerge.SetArg import SetArg
 from _emerge.show_invalid_depstring_notice import show_invalid_depstring_notice

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 74ec3d3..b1685bf 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -124,7 +124,7 @@ class queruaction(object):
 			log_msg = "build_job %s was not removed" % (build_dict['build_job_id'],)
 			add_gobs_logs(conn, log_msg, "info", self._config_profile)
 			print("qurery was not removed")
-			if build_fail is True
+			if build_fail is True:
 				build_dict['type_fail'] = "Emerge faild"
 				build_dict['check_fail'] = True
 				log_msg = "Emerge faild!"
@@ -133,7 +133,7 @@ class queruaction(object):
 				build_dict['type_fail'] = "Querey was not removed"
 				build_dict['check_fail'] = True
 			log_fail_queru(build_dict, settings)
-		if build_fail is True
+		if build_fail is True:
 			CM.putConnection(conn)
 			return True
 		CM.putConnection(conn)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 2149798..468afc5 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -436,22 +436,20 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 			useflagsdict = {}
 			if entries == []:
 				useflagsdict = None
-				else:
+			else:
 					for x in entries:
 						useflagsdict[x[0]] = x[1]
-						print(build_log_id)
-						print(log_hash[0], build_log_dict['log_hash'], build_dict['build_useflags'], useflagsdict)
-						if log_hash[0] == build_log_dict['log_hash'] and build_dict['build_useflags'] == useflagsdict:
-						cursor.execute(sqlQ9, (build_log_id,))
-						config_id_list = []
-						for config_id in cursor.fetchall():
-							config_id_list.append(config_id[0])
-							print(build_dict['config_id'], config_id_list)
-							if build_dict['config_id'] in config_id_list:
-							return None, True
-						else:
-							cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
-							return build_log_id, True
+			print(log_hash[0], build_log_dict['log_hash'], build_dict['build_useflags'], useflagsdict)
+			if log_hash[0] == build_log_dict['log_hash'] and build_dict['build_useflags'] == useflagsdict:
+				cursor.execute(sqlQ9, (build_log_id,))
+				config_id_list = []
+				for config_id in cursor.fetchall():
+					config_id_list.append(config_id[0])
+				if build_dict['config_id'] in config_id_list:
+					return None, True
+				else:
+					cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
+				return build_log_id, True
 		return None, False
 
 	def build_log_id_no_match(build_dict, build_log_dict):
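
Side note: with the block structure repaired, the reuse test reads in two
steps. The stored log_hash and the use-flag dict must both match, then the
config decides the return value. Roughly (names as in the diff):

    # Rough shape of the repaired reuse test.
    if log_hash[0] == build_log_dict['log_hash'] \
            and build_dict['build_useflags'] == useflagsdict:
        if build_dict['config_id'] in config_id_list:
            return None, True            # this config already has the log
        cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'],
            build_log_dict['logfilename'],))
        return build_log_id, True        # reuse the log, attach this config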


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-11 23:52 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-11 23:52 UTC (permalink / raw
  To: gentoo-commits

commit:     3c14b10282f556dc6a06ec5919b7eefea21f0708
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Dec 11 23:52:06 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Dec 11 23:52:06 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=3c14b102

fix inconsistent use of tabs and spaces in indentation

---
 gobs/pym/pgsql_querys.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 468afc5..2ea8de1 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -384,7 +384,7 @@ def get_ebuild_id_db_checksum(connection, build_dict):
 	ebuild_id = cursor.fetchone()
 	if ebuild_id is None:
 		return 
-        return ebuild_id[0]
+	return ebuild_id[0]
 
 def get_build_job_id(connection, build_dict):
 	cursor = connection.cursor()
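
Side note: mixing tabs and spaces like this is fatal under Python 3 (TabError)
and under python -tt on Python 2, which this code otherwise targets. A
hypothetical pre-commit smoke test that would have caught it:

    # Hypothetical check: byte-compile the module and let any TabError
    # or SyntaxError surface as an exception instead of at run time.
    import py_compile
    py_compile.compile('gobs/pym/pgsql_querys.py', doraise=True)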


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-12  0:00 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-12  0:00 UTC (permalink / raw
  To: gentoo-commits

commit:     1d438a0762ed507d47b928722cfd53d4299cebe4
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 12 00:00:45 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 12 00:00:45 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=1d438a07

fix "No operator matches the given name and argument type(s)"

---
 gobs/pym/pgsql_querys.py |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 2ea8de1..3582c2a 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -388,8 +388,8 @@ def get_ebuild_id_db_checksum(connection, build_dict):
 
 def get_build_job_id(connection, build_dict):
 	cursor = connection.cursor()
-	sqlQ1 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s AND config_id = (SELECT config_id FROM configs WHERE config = %s) AND status = 'Waiting'"
-	sqlQ2 = "SELECT uses.flag FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id AND build_jobs_use.status = 'enable'"
+	sqlQ1 = 'SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s AND config_id = %s'
+	sqlQ2 = 'SELECT use_id, status FROM build_jobs_use WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (build_dict['ebuild_id'], build_dict['config_id']))
 	build_job_id_list = cursor.fetchall()
 	if build_job_id_list == []:
@@ -406,7 +406,6 @@ def get_build_job_id(connection, build_dict):
 		if useflagsdict == build_dict['build_useflags']:
 			return build_job_id[0]
 
-
 def add_new_buildlog(connection, build_dict, build_log_dict):
 	cursor = connection.cursor()
 	sqlQ1 = 'SELECT build_log_id FROM build_logs WHERE ebuild_id = %s'
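
Side note: that PostgreSQL hint fires when a comparison mixes types the server
cannot reconcile (for example an integer column against a text value), and the
simplified queries above avoid the mismatched comparison. Where a cast is
genuinely wanted, being explicit also works; a hedged sketch assuming a
psycopg2 cursor and an integer config_id column:

    # Sketch: hand the server the matching Python type instead of
    # letting it hunt for an integer-vs-text operator.
    cursor.execute('SELECT build_job_id FROM build_jobs WHERE config_id = %s',
        (int(build_dict['config_id']),))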


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-12  0:04 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-12  0:04 UTC (permalink / raw
  To: gentoo-commits

commit:     91d50ac7040e65b080435bdf4c34179a79355f16
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 12 00:03:50 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 12 00:03:50 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=91d50ac7

update check_repoman to support repo

---
 gobs/pym/repoman_gobs.py |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/repoman_gobs.py b/gobs/pym/repoman_gobs.py
index adb4466..839fbe9 100644
--- a/gobs/pym/repoman_gobs.py
+++ b/gobs/pym/repoman_gobs.py
@@ -15,12 +15,13 @@ class gobs_repoman(object):
 		self._mysettings = mysettings
 		self._myportdb = myportdb
 
-	def check_repoman(self, pkgdir, cpv, repo, config_id):
+	def check_repoman(self, cpv, repo):
 		# We run repoman run_checks on the ebuild
 		ebuild_version_tree = portage.versions.cpv_getversion(cpv)
 		element = portage.versions.cpv_getkey(cpv).split('/')
 		categories = element[0]
 		package = element[1]
+		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package
 		full_path = pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild"
 		root = '/'
 		trees = {
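
Side note: getRepositoryPath() resolves the on-disk tree for a named
repository, which is what lets check_repoman() drop its pkgdir argument. A
standalone sketch of the path derivation (assumes a configured
portage.portdbapi as myportdb; the cpv is hypothetical):

    import portage

    cpv = 'dev-lang/python-2.7.3'        # hypothetical package
    repo = 'gentoo'
    ebuild_version_tree = portage.versions.cpv_getversion(cpv)
    categories, package = portage.versions.cpv_getkey(cpv).split('/')
    pkgdir = myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package
    full_path = pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild"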


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-12  0:09 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-12  0:09 UTC (permalink / raw
  To: gentoo-commits

commit:     73a8d37dbeb215d60baf688fefa74ac040434969
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 12 00:08:51 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 12 00:08:51 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=73a8d37d

fix global name 'summary_error' is not defined

---
 gobs/pym/build_log.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index db84965..e49e76d 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -230,7 +230,7 @@ def add_buildlog_main(settings, pkg, trees):
 	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 	log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
-	log_id = add_new_buildlog(build_dict, build_error, summary_error, build_log_dict)
+	log_id = add_new_buildlog(connection, build_dict, build_log_dict)
 
 	msg = ""
 	# emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-12  0:11 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-12  0:11 UTC (permalink / raw
  To: gentoo-commits

commit:     4dc4b00863b15c58530dea65855d783157b51849
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 12 00:11:39 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 12 00:11:39 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4dc4b008

fix a typo in add_buildlog_main()

---
 gobs/pym/build_log.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index e49e76d..5fcb352 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -230,7 +230,7 @@ def add_buildlog_main(settings, pkg, trees):
 	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 	log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
-	log_id = add_new_buildlog(connection, build_dict, build_log_dict)
+	log_id = add_new_buildlog(conn, build_dict, build_log_dict)
 
 	msg = ""
 	# emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-12  0:14 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-12  0:14 UTC (permalink / raw
  To: gentoo-commits

commit:     0183b39f29dbcff4499e2f02864e110a45451f53
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 12 00:14:23 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 12 00:14:23 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=0183b39f

fix one more typo in add_buildlog_main()

---
 gobs/pym/build_log.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 5fcb352..9149409 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -244,7 +244,7 @@ def add_buildlog_main(settings, pkg, trees):
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 		# os.chmod(emerge_info_logfilename, 0o664)
 		log_msg = "Package: %s:%s logged to db." % (pkg.cpv, pkg.repo,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
+		add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 
 def log_fail_queru(build_dict, settings):


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-12  0:29 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-12  0:29 UTC (permalink / raw
  To: gentoo-commits

commit:     5f6e28a60e50b98970dd61c4057d9c9d6961ad5a
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 12 00:29:40 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 12 00:29:40 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=5f6e28a6

fix small tab error

---
 gobs/pym/build_log.py    |    4 ++--
 gobs/pym/pgsql_querys.py |    7 +++----
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 9149409..caaae33 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -236,14 +236,14 @@ def add_buildlog_main(settings, pkg, trees):
 	# emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
 	if log_id is None:
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
-		log_msg = "Package %s:%s NOT logged to db." % (pkg.cpv, pkg.repo,)
+		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
 	else:
 		# for msg_line in msg:
 		#	write_msg_file(msg_line, emerge_info_logfilename)
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 		# os.chmod(emerge_info_logfilename, 0o664)
-		log_msg = "Package: %s:%s logged to db." % (pkg.cpv, pkg.repo,)
+		log_msg = "Package: %s:%s is logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 3582c2a..aa549b7 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -436,8 +436,8 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 			if entries == []:
 				useflagsdict = None
 			else:
-					for x in entries:
-						useflagsdict[x[0]] = x[1]
+				for x in entries:
+					useflagsdict[x[0]] = x[1]
 			print(log_hash[0], build_log_dict['log_hash'], build_dict['build_useflags'], useflagsdict)
 			if log_hash[0] == build_log_dict['log_hash'] and build_dict['build_useflags'] == useflagsdict:
 				cursor.execute(sqlQ9, (build_log_id,))
@@ -446,8 +446,7 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 					config_id_list.append(config_id[0])
 				if build_dict['config_id'] in config_id_list:
 					return None, True
-				else:
-					cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
+				cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
 				return build_log_id, True
 		return None, False
 


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-13 15:09 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-13 15:09 UTC (permalink / raw
  To: gentoo-commits

commit:     038003873d6bdd71fe454afa9795a68822395871
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 13 15:09:36 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec 13 15:09:36 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=03800387

add support for emerge options on configs and build_jobs

---
 gobs/pym/build_queru.py  |   17 +++++++++--------
 gobs/pym/pgsql_querys.py |   14 ++++++++++++++
 2 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index b1685bf..8391d7c 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -97,11 +97,12 @@ class queruaction(object):
      					f.close
 		log_msg = "build_cpv_list: %s" % (build_cpv_list,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
+		
 		argscmd = []
-		#if not "nooneshort" in build_dict['post_message']:
-		argscmd.append("--oneshot")
-		argscmd.append("--buildpkg")
-		argscmd.append("--usepkg")
+		for emerge_option in build_dict['emerge_options']:
+			if not emerge_option == '--depclean' or not emerge_option == '--nodepclean' or not emerge_option == '--nooneshot':
+				if not emerge_option in argscmd
+					argscmd.append(emerge_option)
 		for build_cpv in build_cpv_list:
 			argscmd.append(build_cpv)
 		log_msg = "argscmd: %s" % (argscmd,)
@@ -110,7 +111,7 @@ class queruaction(object):
 		print("Build: %s", build_dict)
 		build_fail = emerge_main(argscmd, build_dict)
 		# Run depclean
-		if not "noclean" in build_dict['post_message']:
+		if  '--depclean' in build_dict['emerge_options'] and not '--nodepclean' in build_dict['emerge_options']:
 			depclean_fail = main_depclean()
 		try:
 			os.remove("/etc/portage/package.use/99_autounmask")
@@ -160,10 +161,10 @@ class queruaction(object):
 			fail_build_procces = self.build_procces(buildqueru_cpv_dict, build_dict, settings, portdb)
 			CM.putConnection(conn)
 			return
-		if not build_dict['post_message'] is [] and build_dict['ebuild_id'] is None:
+		if not build_dict['emerge_options'] is [] and build_dict['ebuild_id'] is None:
 			CM.putConnection(conn)
 			return
-		if not build_dict['ebuild_id'] is None and build_dict['checksum'] is None:
-			del_old_queue(conn, build_dict['queue_id'])
+		if not build_dict['ebuild_id'] is None and build_dict['emerge_options'] is None:
+			# del_old_queue(conn, build_dict['queue_id'])
 		CM.putConnection(conn)
 		return

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index aa549b7..bc17fc8 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -327,6 +327,8 @@ def get_packages_to_build(connection, config):
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
 	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id'
 	sqlQ4 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND build_jobs.status = 'Now' LIMIT 1"
+	sqlQ5 = 'SELECT emerge_options.option FROM configs_emerge_options, emerge_options WHERE configs_emerge_options.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs_emerge_options.options_id = emerge_options.options_id'
+	sqlQ6 = 'SELECT emerge_options.option FROM build_jobs_emerge_options, emerge_options WHERE build_jobs_emerge_options.build_job_id = %s AND build_jobs_emerge_options.options_id = emerge_options.options_id'
 	cursor.execute(sqlQ4, (config,))
 	entries = cursor.fetchone()
 	if entries is None:
@@ -348,6 +350,18 @@ def get_packages_to_build(connection, config):
 	for row in cursor.fetchall():
 		uses[ row[0] ] = row[1]
 	build_dict['build_useflags']=uses
+	emerge_options_list = []
+	cursor.execute(sqlQ5, (config,))
+	entries = cursor.fetchall()
+	for option in entries:
+		emerge_options_list.append(option[0])
+	cursor.execute(sqlQ6, (build_dict['build_job_id'],))
+	entries = cursor.fetchall()
+	for option in entries:
+		emerge_options_list.append(option[0])
+	if emerge_options_list == []:
+		emerge_options_list = None
+	build_dict['emerge_options'] = emerge_options_list
 	return build_dict
 
 def update_fail_times(connection, fail_querue_dict):
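
Side note: options now come from two tables, per-config defaults (sqlQ5) plus
per-job extras (sqlQ6), concatenated into one list that collapses to None when
both are empty. The same merge with order-preserving dedup, as an illustrative
helper:

    # Illustrative merge of config-level and job-level emerge options.
    def merge_emerge_options(config_options, job_options):
        merged = []
        for option in list(config_options) + list(job_options):
            if option not in merged:
                merged.append(option)
        return merged or None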


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-13 15:15 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-13 15:15 UTC (permalink / raw
  To: gentoo-commits

commit:     4c0da0735e382a5a004bdb4d3ea8bee5eed03635
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 13 15:14:54 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec 13 15:14:54 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4c0da073

fix invalid syntax in build_queru.py

---
 gobs/pym/build_queru.py |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 8391d7c..142eff7 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -101,14 +101,15 @@ class queruaction(object):
 		argscmd = []
 		for emerge_option in build_dict['emerge_options']:
 			if not emerge_option == '--depclean' or not emerge_option == '--nodepclean' or not emerge_option == '--nooneshot':
-				if not emerge_option in argscmd
+				if not emerge_option in argscmd:
 					argscmd.append(emerge_option)
 		for build_cpv in build_cpv_list:
 			argscmd.append(build_cpv)
+		print("Emerge options: %s" % argscmd)
 		log_msg = "argscmd: %s" % (argscmd,)
 		add_gobs_logs(conn, log_msg, "info", self._config_profile)
 		# Call main_emerge to build the package in build_cpv_list
-		print("Build: %s", build_dict)
+		print("Build: %s" % build_dict)
 		build_fail = emerge_main(argscmd, build_dict)
 		# Run depclean
 		if  '--depclean' in build_dict['emerge_options'] and not '--nodepclean' in build_dict['emerge_options']:


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-13 15:18 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-13 15:18 UTC (permalink / raw
  To: gentoo-commits

commit:     2d2a15b1ff57323502bb42e327dd455bc6c20cf3
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 13 15:18:29 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec 13 15:18:29 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=2d2a15b1

fix IndentationError: expected an indented block in build_queru.py

---
 gobs/pym/build_queru.py |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index 142eff7..eb08441 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -166,6 +166,7 @@ class queruaction(object):
 			CM.putConnection(conn)
 			return
 		if not build_dict['ebuild_id'] is None and build_dict['emerge_options'] is None:
+			pass
 			# del_old_queue(conn, build_dict['queue_id'])
 		CM.putConnection(conn)
 		return


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-13 22:57 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-13 22:57 UTC (permalink / raw
  To: gentoo-commits

commit:     ec735f92e15be8c7673dfa16451f40dcf2458f93
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 13 22:57:06 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec 13 22:57:06 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=ec735f92

fix argscmd logic with emerge_options

---
 gobs/pym/build_log.py    |    4 ++--
 gobs/pym/build_queru.py  |    8 +++++++-
 gobs/pym/pgsql_querys.py |    4 ++--
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index caaae33..89071ec 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -261,8 +261,8 @@ def log_fail_queru(build_dict, settings):
 		print('fail_querue_dict', fail_querue_dict)
 		add_fail_querue_dict(conn, fail_querue_dict)
 	else:
-		if fail_querue_dict['fail_times'][0] < 6:
-			fail_querue_dict['fail_times'] = fail_querue_dict['fail_times'][0] + 1
+		if fail_querue_dict['fail_times'] < 6:
+			fail_querue_dict['fail_times'] = fail_querue_dict['fail_times']+ 1
 			fail_querue_dict['build_job_id'] = build_dict['build_job_id']
 			fail_querue_dict['fail_type'] = build_dict['type_fail']
 			update_fail_times(conn, fail_querue_dict)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
index eb08441..7147416 100644
--- a/gobs/pym/build_queru.py
+++ b/gobs/pym/build_queru.py
@@ -100,7 +100,13 @@ class queruaction(object):
 		
 		argscmd = []
 		for emerge_option in build_dict['emerge_options']:
-			if not emerge_option == '--depclean' or not emerge_option == '--nodepclean' or not emerge_option == '--nooneshot':
+			if emerge_option == '--depclean':
+				pass
+			elif emerge_option == '--nodepclean':
+				pass
+			elif emerge_option == '--nooneshot':
+				pass
+			else:
 				if not emerge_option in argscmd:
 					argscmd.append(emerge_option)
 		for build_cpv in build_cpv_list:

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index bc17fc8..6995cf3 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -327,7 +327,7 @@ def get_packages_to_build(connection, config):
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
 	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id'
 	sqlQ4 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND build_jobs.status = 'Now' LIMIT 1"
-	sqlQ5 = 'SELECT emerge_options.option FROM configs_emerge_options, emerge_options WHERE configs_emerge_options.config_id = (SELECT config_id FROM configs WHERE config = %s) AND build_jobs_emerge_options.options_id = emerge_options.options_id'
+	sqlQ5 = 'SELECT emerge_options.option FROM configs_emerge_options, emerge_options WHERE configs_emerge_options.config_id = (SELECT config_id FROM configs WHERE config = %s) AND configs_emerge_options.options_id = emerge_options.options_id'
 	sqlQ6 = 'SELECT emerge_options.option FROM build_jobs_emerge_options, emerge_options WHERE build_jobs_emerge_options.build_job_id = %s AND build_jobs_emerge_options.options_id = emerge_options.options_id'
 	cursor.execute(sqlQ4, (config,))
 	entries = cursor.fetchone()
@@ -380,7 +380,7 @@ def get_fail_querue_dict(connection, build_dict):
 	entries = cursor.fetchone()
 	if entries is None:
 		return None
-	fail_querue_dict['fail_times'] = entries
+	fail_querue_dict['fail_times'] = entries[0]
 	return fail_querue_dict
 
 def add_fail_querue_dict(connection, fail_querue_dict):


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-14 14:17 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-14 14:17 UTC (permalink / raw
  To: gentoo-commits

commit:     221c532a002a3d727b4b44ebc9c5816b16c69f4a
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 14 14:17:08 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 14 14:17:08 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=221c532a

fix missing add_buildlog_main()

---
 gobs/pym/Scheduler.py |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 9614503..8e5c6ba 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -1283,6 +1283,7 @@ class Scheduler(PollScheduler):
 				self._pkg_cache.pop(pkg_to_replace, None)
 
 		if pkg.installed:
+			add_buildlog_main(settings, pkg, trees)
 			return
 
 		# Call mtimedb.commit() after each merge so that
@@ -1293,6 +1294,7 @@ class Scheduler(PollScheduler):
 		if not mtimedb["resume"]["mergelist"]:
 			del mtimedb["resume"]
 		mtimedb.commit()
+		add_buildlog_main(settings, pkg, trees)
 
 	def _build_exit(self, build):
 		self._running_tasks.pop(id(build), None)


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-15  0:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-15  0:31 UTC (permalink / raw
  To: gentoo-commits

commit:     787208add914a2cdc927f6204f63215671170d21
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 15 00:30:58 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec 15 00:30:58 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=787208ad

fix log_fail_queru()

---
 gobs/pym/build_log.py |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 89071ec..73fb400 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -238,6 +238,7 @@ def add_buildlog_main(settings, pkg, trees):
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+		print("Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,))
 	else:
 		# for msg_line in msg:
 		#	write_msg_file(msg_line, emerge_info_logfilename)
@@ -245,6 +246,7 @@ def add_buildlog_main(settings, pkg, trees):
 		# os.chmod(emerge_info_logfilename, 0o664)
 		log_msg = "Package: %s:%s is logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
+		print("Package %s:%s is logged." % (pkg.cpv, pkg.repo,))
 	CM.putConnection(conn)
 
 def log_fail_queru(build_dict, settings):
@@ -292,6 +294,16 @@ def log_fail_queru(build_dict, settings):
 			if sum_build_log_list != []:
 				for sum_log_line in sum_build_log_list:
 					summary_error = summary_error + " " + sum_log_line
+			build_log_dict['log_hash'] = '0'
+			build_dict['config_id'] = get_config_id(conn, config_profile)
+			useflagsdict = {}
+			if build_dict['build_useflags'] == {}:
+				for k, v in build_dict['build_useflags'].iteritems():
+					use_id = get_use_id(conn, k)
+					useflagsdict[use_id] = v
+					build_dict['build_useflags'] = useflagsdict
+			else:
+				build_dict['build_useflags'] = None			
 			if settings.get("PORTAGE_LOG_FILE") is not None:
 				build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
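
Side note: build_logs_use wants use_id keys, while a failed job carries its
flags keyed by name, hence the remapping through get_use_id(). The guard in
the hunk above reads inverted (iterating an empty dict is a no-op), so this
sketch shows the conversion as presumably intended, applied when flags are
present (Python 2):

    # Presumed intent: remap {flag_name: status} to {use_id: status}
    # before inserting into build_logs_use.
    useflagsdict = {}
    for flag, status in build_dict['build_useflags'].iteritems():
        useflagsdict[get_use_id(conn, flag)] = status
    build_dict['build_useflags'] = useflagsdict or None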


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-15 16:14 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-15 16:14 UTC (permalink / raw
  To: gentoo-commits

commit:     ed8d5393925ef91337c123ddd629a539be84de92
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 15 16:14:05 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec 15 16:14:05 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=ed8d5393

fix so we can add a new ebuild_id if missing

---
 gobs/pym/build_log.py    |   14 +++++++++++---
 gobs/pym/package.py      |   30 +++++-------------------------
 gobs/pym/pgsql_querys.py |    4 +---
 3 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 73fb400..43bc0a2 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -70,8 +70,15 @@ def get_build_dict_db(settings, pkg):
 	ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
 	if ebuild_id is None:
 		log_msg = "%s:%s Don't have any ebuild_id!" % (pkg.cpv, repo,)
-		add_gobs_logs(conn, log_msg, "error", config_profile)
-		return
+		add_gobs_logs(conn, log_msg, "info", config_profile)
+		update_manifest_sql(conn, package_id, "0")
+		init_package = gobs_package(settings, myportdb)
+		init_package.update_package_db(package_id)
+		ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
+		if ebuild_id is None:
+			log_msg = "%s:%s Don't have any ebuild_id!" % (pkg.cpv, repo,)
+			add_gobs_logs(conn, log_msg, "error", config_profile)
+			return
 	build_dict['ebuild_id'] = ebuild_id
 	build_job_id = get_build_job_id(conn, build_dict)
 	if build_job_id is None:
@@ -276,7 +283,7 @@ def log_fail_queru(build_dict, settings):
 			qa_error_list = []
 			repoman_error_list = []
 			sum_build_log_list = []
-			sum_build_log_list.append("fail")
+			sum_build_log_list.append("True")
 			error_log_list.append(build_dict['type_fail'])
 			build_log_dict['repoman_error_list'] = repoman_error_list
 			build_log_dict['qa_error_list'] = qa_error_list
@@ -290,6 +297,7 @@ def log_fail_queru(build_dict, settings):
 			if error_log_list != []:
 				for log_line in error_log_list:
 					build_error = build_error + log_line
+			build_log_dict['build_error'] = build_error
 			summary_error = ""
 			if sum_build_log_list != []:
 				for sum_log_line in sum_build_log_list:

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index c960b07..160a152 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -34,11 +34,11 @@ class gobs_package(object):
 		config_cpv_listDict ={}
 		if config_id_list == []:
 			return config_cpv_listDict
-			conn=CM.getConnection()
+		conn=CM.getConnection()
 		for config_id in config_id_list:
 
 			# Change config/setup
-			config_setup = get_config_db(conn, config_id)
+			config_setup = get_config(conn, config_id)
 			mysettings_setup = self.change_config(config_setup)
 			myportdb_setup = portage.portdbapi(mysettings=mysettings_setup)
 
@@ -138,10 +138,10 @@ class gobs_package(object):
 		# Get the needed info from packageDict and config_cpv_listDict and put that in buildqueue
 		# Only add it if ebuild_version in packageDict and config_cpv_listDict match
 		if config_cpv_listDict is not None:
-			message = []
 			# Unpack config_cpv_listDict
 			for k, v in config_cpv_listDict.iteritems():
 				config_id = k
+				build_cpv = v['cpv']
 				latest_ebuild_version = v['ebuild_version']
 				iuse_flags_list = list(set(v['iuse']))
 				use_enable= v['useflags']
@@ -159,10 +159,10 @@ class gobs_package(object):
 
 					# Comper and add the cpv to buildqueue
 					if build_cpv == k:
-						add_new_package_buildqueue(conn, ebuild_id, config_id, use_flagsDict, messages)
+						add_new_package_buildqueue(conn, ebuild_id, config_id, use_flagsDict)
 
 						# B = Build cpv use-flags config
-						config_setup = get_config_db(conn, config_id)
+						config_setup = get_config(conn, config_id)
 
 						# FIXME log_msg need a fix to log the use flags corect.
 						log_msg = "B %s:%s USE: %s %s" %  \
@@ -345,23 +345,3 @@ class gobs_package(object):
 		log_msg = "C %s:%s ... Done." % (cp, repo)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
 		CM.putConnection(conn)
-
-	def update_ebuild_db(self, build_dict):
-		conn=CM.getConnection()
-		config_id = build_dict['config_profile']
-		categories = build_dict['categories']
-		package = build_dict['package']
-		package_id = build_dict['package_id']
-		cpv = build_dict['cpv']
-		ebuild_version_tree = build_dict['ebuild_version']
-		pkgdir = self._mysettings['PORTDIR'] + "/" + categories + "/" + package		# Get PORTDIR with cp
-		packageDict ={}
-		ebuild_version_manifest_checksum_db = get_ebuild_checksum(conn,package_id, ebuild_version_tree)
-		packageDict[cpv] = self.get_packageDict(pkgdir, cpv, categories, package, config_id)
-		old_ebuild_list = []
-		if ebuild_version_manifest_checksum_db is not None:
-			old_ebuild_list.append(ebuild_version_tree)
-			add_old_ebuild(conn,package_id, old_ebuild_list)
-			update_active_ebuild(conn,package_id, ebuild_version_tree)
-		return_id = add_new_package_sql(conn,packageDict)
-		CM.putConnection(conn)
\ No newline at end of file

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 6995cf3..4e882dc 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -221,7 +221,7 @@ def get_config_id(connection, config):
 		return None
 	return entries[0]
 
-def add_new_package_buildqueue(connection, ebuild_id, config_id, use_flagsDict, messages):
+def add_new_package_buildqueue(connection, ebuild_id, config_id, use_flagsDict):
 	cursor = connection.cursor()
 	sqlQ1 = 'INSERT INTO build_jobs (ebuild_id, config_id) VALUES (%s, %s) RETURNING build_job_id'
 	sqlQ3 = 'INSERT INTO build_jobs_use (build_job_id, use_id, status) VALUES (%s, (SELECT use_id FROM uses WHERE flag = %s), %s)'
@@ -452,7 +452,6 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 			else:
 				for x in entries:
 					useflagsdict[x[0]] = x[1]
-			print(log_hash[0], build_log_dict['log_hash'], build_dict['build_useflags'], useflagsdict)
 			if log_hash[0] == build_log_dict['log_hash'] and build_dict['build_useflags'] == useflagsdict:
 				cursor.execute(sqlQ9, (build_log_id,))
 				config_id_list = []
@@ -477,7 +476,6 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 				cursor.execute(sqlQ6, (build_log_id, use_id, status))
 		return build_log_id
 
-	print(build_dict['build_job_id'], build_log_id_list)
 	if build_dict['build_job_id'] is None and build_log_id_list is None:
 		return build_log_id_no_match(build_dict, build_log_dict)
 	elif build_dict['build_job_id'] is None and not build_log_id_list is None:
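
Side note: the retry in get_build_dict_db() is to invalidate the stored
manifest checksum, re-import the package, then repeat the checksum lookup;
only a second miss drops the build from logging. Condensed restatement
(helpers as in the diff):

    # Condensed retry flow; update_manifest_sql, gobs_package and
    # get_ebuild_id_db_checksum are the helpers patched above.
    ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
    if ebuild_id is None:
        update_manifest_sql(conn, package_id, "0")    # force a re-scan
        gobs_package(settings, myportdb).update_package_db(package_id)
        ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
    if ebuild_id is None:
        return                                        # still unknown, skip it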


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-16 20:45 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-16 20:45 UTC (permalink / raw
  To: gentoo-commits

commit:     e386a6810a4c42d91a1faafba6658fdd0cd29030
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 16 20:44:58 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Dec 16 20:44:58 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e386a681

redo the add and del build jobs

---
 gobs/pym/buildquerydb.py |   54 +++++++++++++++++++++++++--------------------
 gobs/pym/check_setup.py  |   21 +++++++++--------
 gobs/pym/jobs.py         |    6 ++--
 gobs/pym/package.py      |   31 +++++++------------------
 gobs/pym/pgsql_querys.py |   35 ++++++++++++++++++++---------
 5 files changed, 77 insertions(+), 70 deletions(-)

diff --git a/gobs/pym/buildquerydb.py b/gobs/pym/buildquerydb.py
index abd5ea9..c60129a 100644
--- a/gobs/pym/buildquerydb.py
+++ b/gobs/pym/buildquerydb.py
@@ -24,8 +24,9 @@ from gobs.package import gobs_package
 import portage
 import multiprocessing
 
-def add_cpv_query_pool(mysettings, init_package, config_id, package_line):
+def add_cpv_query_pool(mysettings, myportdb, config_id, package_line):
 	conn=CM.getConnection()
+	init_package = gobs_package(mysettings, myportdb)
 	# FIXME: remove the check for gobs when in tree
 	if package_line != "dev-python/gobs":
 		build_dict = {}
@@ -37,20 +38,16 @@ def add_cpv_query_pool(mysettings, init_package, config_id, package_line):
 		package = element[1]
 		log_msg = "C %s/%s" % (categories, package,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
-		pkgdir = mysettings['PORTDIR'] + "/" + categories + "/" + package
+		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package
 		config_id_list = []
 		config_id_list.append(config_id)
-		config_cpv_listDict = init_package.config_match_ebuild(categories, package, config_id_list)
+		config_cpv_listDict = init_package.config_match_ebuild(categories + "/" + package, config_id_list)
 		if config_cpv_listDict != {}:
-			cpv = categories + "/" + package + "-" + config_cpv_listDict[config_id]['ebuild_version']
-			attDict = {}
-			attDict['categories'] = categories
-			attDict['package'] = package
-			attDict['ebuild_version_tree'] = config_cpv_listDict[config_id]['ebuild_version']
-			packageDict[cpv] = attDict
-			build_dict['checksum'] = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + config_cpv_listDict[config_id]['ebuild_version'] + ".ebuild")[0]
-			build_dict['package_id'] = have_package_db(conn, categories, package)[0]
-			build_dict['ebuild_version'] = config_cpv_listDict[config_id]['ebuild_version']
+			cpv = config_cpv_listDict[config_id]['cpv']
+			packageDict['cpv'] = init_package.get_packageDict(pkgdir, cpv, repo)
+			build_dict['checksum'] = packageDict['cpv']['ebuild_version_checksum_tree']
+			build_dict['package_id'] = get_package_id(conn, categories, package, repo)
+			build_dict['ebuild_version'] = packageDict['cpv']['ebuild_version_tree']
 			ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
 			if ebuild_id is not None:
 				ebuild_id_list.append(ebuild_id)
@@ -62,20 +59,19 @@ def add_cpv_query_pool(mysettings, init_package, config_id, package_line):
 
 def add_buildquery_main(config_id):
 	conn=CM.getConnection()
-	log_msg = "Adding build querys for: %s" % (config_id,)
+	config_setup = get_config(conn, config_id)
+	log_msg = "Adding build jobs for: %s" % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	check_make_conf()
 	log_msg = "Check configs done"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	# Get default config from the configs table  and default_config=1
-	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id + "/"
+	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_setup + "/"
 	# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 	mysettings = portage.config(config_root = default_config_root)
 	myportdb = portage.portdbapi(mysettings=mysettings)
 	init_package = gobs_package(mysettings, myportdb)
-	# get the cp list
-	package_list_tree = package_list_tree = myportdb.cp_all()
-	log_msg = "Setting default config to: %s" % (config_id,)
+	log_msg = "Setting default config to: %s" % (config_setup)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	# Use all exept 2 cores when multiprocessing
 	pool_cores= multiprocessing.cpu_count()
@@ -84,11 +80,21 @@ def add_buildquery_main(config_id):
 	else:
 		use_pool_cores = 1
 	pool = multiprocessing.Pool(processes=use_pool_cores)
-	for package_line in sorted(package_list_tree):
-		pool.apply_async(add_cpv_query_pool, (mysettings, init_package, config_id, package_line,))
+
+	repo_trees_list = myportdb.porttrees
+	for repo_dir in repo_trees_list:
+		repo = myportdb.getRepositoryName(repo_dir)
+		repo_dir_list = []
+		repo_dir_list.append(repo_dir)
+		
+		# Get the package list from the repo
+		package_id_list_tree = []
+		package_list_tree = myportdb.cp_all(trees=repo_dir_list)
+			for package_line in sorted(package_list_tree):
+			pool.apply_async(add_cpv_query_pool, (mysettings, myportdb, config_id, package_line,))
 	pool.close()
 	pool.join()
-	log_msg = "Adding build querys for: %s ... Done." % (config_id,)
+	log_msg = "Adding build jobs for: %s ... Done." % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 	return True
@@ -96,10 +102,10 @@ def add_buildquery_main(config_id):
 def del_buildquery_main(config_id):
 	log_msg = "Removeing build querys for: %s" % (config_id,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
-	querue_id_list = get_queue_id_list_config(conn, config_id)
-	if querue_id_list is not None:
-		for querue_id in querue_id_list:
-			del_old_queue(conn, querue_id)
+	build_job_id_list = get_build_jobs_id_list_config(conn, config_id)
+	if build_job_id_list is not None:
+		for build_job_id in build_job_id_list:
+			del_old_build_jobs(conn, build_job_id)
 	log_msg = "Removeing build querys for: %s ... Done." % (config_id,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 99d1dbb..0856f7f 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -59,30 +59,31 @@ def check_make_conf():
 def check_make_conf_guest(config_profile):
 	conn=CM.getConnection()
 	print('config_profile', config_profile)
-	make_conf_checksum_db = get_profile_checksum(conn,config_profile)
-	print('make_conf_checksum_db', make_conf_checksum_db)
 	if make_conf_checksum_db is None:
 		CM.putConnection(conn)
 		return False
 	make_conf_file = "/etc/portage/make.conf"
-	make_conf_checksum_tree = portage.checksum.sha256hash(make_conf_file)[0]
-	print('make_conf_checksum_tree', make_conf_checksum_tree)
-	if make_conf_checksum_tree != make_conf_checksum_db[0]:
-		CM.putConnection(conn)
-		return False
 	# Check if we can open the file and close it
 	# Check if we have some error in the file (portage.util.getconfig)
 	# Check if we envorment error with the config (settings.validate)
 	try:
-		open_make_conf = open(make_conf_file)
-		open_make_conf.close()
-		portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=False, expand=True)
+		make_conf_checksum_tree = portage.checksum.sha256hash(make_conf_file)[0]
+		portage.util.getconfig(make_conf_file, tolerant=0, allow_sourcing=True, expand=True)
 		mysettings = portage.config(config_root = "/")
 		mysettings.validate()
 		# With errors we return false
 	except Exception as e:
 		CM.putConnection(conn)
 		return False
+	make_conf_checksum_db = get_profile_checksum(conn, config_profile)
+	if make_conf_checksum_db is None:
+		CM.putConnection(conn)
+		return False
+	print('make_conf_checksum_tree', make_conf_checksum_tree)
+	print('make_conf_checksum_db', make_conf_checksum_db)
+	if make_conf_checksum_tree != make_conf_checksum_db:
+		CM.putConnection(conn)
+		return False
 	CM.putConnection(conn)
 	return True
 

diff --git a/gobs/pym/jobs.py b/gobs/pym/jobs.py
index 11543a2..d68cff2 100644
--- a/gobs/pym/jobs.py
+++ b/gobs/pym/jobs.py
@@ -22,14 +22,14 @@ def jobs_main(config_profile):
 		CM.putConnection(conn)
 		return
 	for job_id in jobs_id:
-		job = get_job(conn, job_id)
+		job, config_id = get_job(conn, job_id)
 		log_msg = "Job: %s Type: %s" % (job_id, job,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
 		if job == "addbuildquery":
 			update_job_list(conn, "Runing", job_id)
 			log_msg = "Job %s is runing." % (job_id,)
 			add_gobs_logs(conn, log_msg, "info", config_profile)
-			result =  add_buildquery_main(config_profile)
+			result =  add_buildquery_main(config_id)
 			if result is True:
 				update_job_list(conn, "Done", job_id)
 				log_msg = "Job %s is done.." % (job_id,)
@@ -42,7 +42,7 @@ def jobs_main(config_profile):
 			update_job_list(conn, "Runing", job_id)
 			log_msg = "Job %s is runing." % (job_id,)
 			add_gobs_logs(conn, log_msg, "info", config_profile)
-			result =  del_buildquery_main(config_profile)
+			result =  del_buildquery_main(config_id)
 			if result is True:
 				update_job_list(conn, "Done", job_id)
 				log_msg = "Job %s is done.." % (job_id,)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 160a152..5d47fac 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -80,8 +80,7 @@ class gobs_package(object):
 					ebuild_auxdb_list[i] = ''
 			return ebuild_auxdb_list
 
-	def get_packageDict(self, pkgdir, cpv, repo, config_id):
-		attDict = {}
+	def get_packageDict(self, pkgdir, cpv, repo):
 		conn=CM.getConnection()
 
 		#Get categories, package and version from cpv
@@ -123,6 +122,7 @@ class gobs_package(object):
 			ebuild_version_checksum_tree = '0'
 
 		# add the ebuild info to the dict packages
+		attDict = {}
 		attDict['repo'] = repo
 		attDict['ebuild_version_tree'] = ebuild_version_tree
 		attDict['ebuild_version_checksum_tree']= ebuild_version_checksum_tree
@@ -142,7 +142,6 @@ class gobs_package(object):
 			for k, v in config_cpv_listDict.iteritems():
 				config_id = k
 				build_cpv = v['cpv']
-				latest_ebuild_version = v['ebuild_version']
 				iuse_flags_list = list(set(v['iuse']))
 				use_enable= v['useflags']
 				use_disable = list(set(iuse_flags_list).difference(set(use_enable)))
@@ -165,8 +164,7 @@ class gobs_package(object):
 						config_setup = get_config(conn, config_id)
 
 						# FIXME log_msg need a fix to log the use flags corect.
-						log_msg = "B %s:%s USE: %s %s" %  \
-							(k, v['repo'], use_enable, config_setup,)
+						log_msg = "B %s:%s USE: %s %s" %  (k, v['repo'], use_flagsDict, config_setup,)
 						add_gobs_logs(conn, log_msg, "info", config_profile)
 					i = i +1
 		CM.putConnection(conn)
@@ -229,7 +227,7 @@ class gobs_package(object):
 		packageDict ={}
 		ebuild_id_list = []
 		for cpv in sorted(ebuild_list_tree):
-			packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo, default_config)
+			packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo)
 
 		# Add new ebuilds to the db
 		ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)
@@ -293,28 +291,17 @@ class gobs_package(object):
 
 				# split out ebuild version
 				ebuild_version_tree = portage.versions.cpv_getversion(cpv)
+				
+				# Get packageDict for cpv
+				packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo)
 
 				# Get the checksum of the ebuild in tree and db
-				# Make a checksum of the ebuild
-				try:
-					ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")[0]
-				except:
-					ebuild_version_checksum_tree = '0'
-					manifest_checksum_tree = '0'
-					log_msg = "QA: Can't checksum the ebuild file. %s on repo %s" % (cpv, repo,)
-					add_gobs_logs(conn, log_msg, "info", config_profile)
-					log_msg = "C %s:%s ... Fail." % (cpv, repo)
-					add_gobs_logs(conn, log_msg, "info", config_profile)
+				ebuild_version_checksum_tree = packageDict['cpv']['ebuild_version_checksum_tree']
 				ebuild_version_manifest_checksum_db = get_ebuild_checksum(conn, package_id, ebuild_version_tree)
 
 				# Check if the checksum have change
 				if ebuild_version_manifest_checksum_db is None or ebuild_version_checksum_tree != ebuild_version_manifest_checksum_db:
 
-				# set config to default config
-					default_config = get_default_config(conn)
-
-					# Get packageDict for ebuild
-					packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo, default_config)
 					if ebuild_version_manifest_checksum_db is None:
 						# N = New ebuild
 						log_msg = "N %s:%s" % (cpv, repo,)
@@ -328,7 +315,7 @@ class gobs_package(object):
 						old_ebuild_list.append(ebuild_version_tree)
 						add_old_ebuild(conn, package_id, old_ebuild_list)
 						update_active_ebuild_to_fales(conn, package_id, ebuild_version_tree)
-			# Use packageDictand to update the db
+			# Use packageDict and to update the db
 			# Add new ebuilds to the db
 			ebuild_id_list = add_new_ebuild_sql(conn, package_id, packageDict)
 

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index 4e882dc..f495c48 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -9,10 +9,10 @@ def add_gobs_logs(connection, log_msg, log_type, config):
 	connection.commit()
 
 # Queryes to handel the jobs table
-def get_jobs_id(connection, config_profile):
+def get_jobs_id(connection, config_id):
 	cursor = connection.cursor()
-	sqlQ = "SELECT job_id FROM jobs WHERE status = 'Waiting' AND config_id = (SELECT config_id FROM configs WHERE config = %s)"
-	cursor.execute(sqlQ, (config_profile,))
+	sqlQ = "SELECT job_id FROM jobs WHERE status = 'Waiting' AND config_id = %s"
+	cursor.execute(sqlQ, (config_id,))
 	entries = cursor.fetchall()
 	if entries is None:
 		return None
@@ -23,10 +23,12 @@ def get_jobs_id(connection, config_profile):
 
 def get_job(connection, job_id):
 	cursor = connection.cursor()
-	sqlQ ='SELECT job FROM jobs WHERE job_id = %s'
+	sqlQ ='SELECT job, config_id FROM jobs WHERE job_id = %s'
 	cursor.execute(sqlQ, (job_id,))
-	job = cursor.fetchone()
-	return job[0]
+	entries = cursor.fetchone()
+	job = entries[0]
+	config_id = entries[1]
+	return job, config_id
 
 def update_job_list(connection, status, job_id):
 	cursor = connection.cursor()
@@ -303,23 +305,34 @@ def get_build_jobs_id_list_config(connection, config_id):
 	sqlQ = 'SELECT build_job_id FROM build_jobs WHERE config_id = %s'
 	cursor.execute(sqlQ,  (config_id,))
 	entries = cursor.fetchall()
-	return entries
+	build_jobs_id_list = []
+	if not entries == []:
+		for build_job_id_id in entries:
+			build_jobs_id_list.append(build_job_id[0])
+	else:
+			build_log_id_list = None
+	return build_jobs_id_list
 
 def del_old_build_jobs(connection, build_job_id):
 	cursor = connection.cursor()
 	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
 	sqlQ2 = 'DELETE FROM build_jobs_retest WHERE build_job_id  = %s'
 	sqlQ3 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
+	sqlQ4 = 'DELETE FROM build_jobs_emerge_options WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (build_job_id,))
 	cursor.execute(sqlQ2, (build_job_id,))
+	cursor.execute(sqlQ4, (build_job_id,))
 	cursor.execute(sqlQ3, (build_job_id,))
 	connection.commit()
 
 def get_profile_checksum(connection, config_profile):
-    cursor = connection.cursor()
-    sqlQ = "SELECT checksum FROM configs_metadata WHERE active = 'True' AND config_id = (SELECT config_id FROM configs WHERE config = %s) AND auto = 'True'"
-    cursor.execute(sqlQ, (config_profile,))
-    return cursor.fetchone()
+	cursor = connection.cursor()
+	sqlQ = "SELECT checksum FROM configs_metadata WHERE active = 'True' AND config_id = (SELECT config_id FROM configs WHERE config = %s) AND auto = 'True'"
+	cursor.execute(sqlQ, (config_profile,))
+	entries = cursor.fetchone()
+	if entries is None:
+		return
+	return entries[0]
 
 def get_packages_to_build(connection, config):
 	cursor =connection.cursor()


^ permalink raw reply related	[flat|nested] 174+ messages in thread
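
The add_buildquery_main() rework above leans on a standard multiprocessing
pattern: size a Pool to all cores except two, fan the work out with
apply_async(), then close() and join(). A minimal runnable sketch of that
pattern, with a hypothetical handle_cp() standing in for add_cpv_query_pool():

    import multiprocessing

    def handle_cp(cp):
        # Stand-in for add_cpv_query_pool(); just echoes the package atom.
        return "C %s ... Done." % cp

    if __name__ == '__main__':
        # Use all cores except two, but never fewer than one worker.
        pool_cores = multiprocessing.cpu_count()
        use_pool_cores = pool_cores - 2 if pool_cores >= 3 else 1
        pool = multiprocessing.Pool(processes=use_pool_cores)
        for cp in sorted(["sys-apps/portage", "dev-lang/python"]):
            pool.apply_async(handle_cp, (cp,))
        pool.close()  # no more tasks will be submitted
        pool.join()   # wait for the workers to drain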

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-16 20:50 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-16 20:50 UTC (permalink / raw
  To: gentoo-commits

commit:     4bc7abc0b47d0fdb1f8903b1350a1bdd70d5122b
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 16 20:49:53 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Dec 16 20:49:53 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4bc7abc0

fix unexpected indent

---
 gobs/pym/buildquerydb.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/buildquerydb.py b/gobs/pym/buildquerydb.py
index c60129a..4de8259 100644
--- a/gobs/pym/buildquerydb.py
+++ b/gobs/pym/buildquerydb.py
@@ -90,7 +90,7 @@ def add_buildquery_main(config_id):
 		# Get the package list from the repo
 		package_id_list_tree = []
 		package_list_tree = myportdb.cp_all(trees=repo_dir_list)
-			for package_line in sorted(package_list_tree):
+		for package_line in sorted(package_list_tree):
 			pool.apply_async(add_cpv_query_pool, (mysettings, myportdb, config_id, package_line,))
 	pool.close()
 	pool.join()


^ permalink raw reply related	[flat|nested] 174+ messages in thread
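
The hunk above is a plain indentation repair: the for line sat one level
deeper than the assignment before it, which Python rejects with
"IndentationError: unexpected indent" at import time. In isolated form, with
dummy data instead of the real portdb listing, the corrected shape is:

    package_list_tree = ["app-misc/foo", "app-misc/bar"]
    # The broken hunk had the next line indented one extra level.
    for package_line in sorted(package_list_tree):
        print(package_line)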

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-17  0:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-17  0:33 UTC (permalink / raw
  To: gentoo-commits

commit:     ccbcbbc6931d6878f31dfd1d37c6dded6d6661d6
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 17 00:33:29 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Dec 17 00:33:29 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=ccbcbbc6

move some jobs to the host

---
 gobs/pym/buildquerydb.py |   31 ++++++++++++++++---------------
 gobs/pym/package.py      |    6 +++---
 gobs/pym/pgsql_querys.py |    8 ++++----
 gobs/pym/updatedb.py     |   19 +++++++++----------
 4 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/gobs/pym/buildquerydb.py b/gobs/pym/buildquerydb.py
index 4de8259..d818b9e 100644
--- a/gobs/pym/buildquerydb.py
+++ b/gobs/pym/buildquerydb.py
@@ -24,35 +24,35 @@ from gobs.package import gobs_package
 import portage
 import multiprocessing
 
-def add_cpv_query_pool(mysettings, myportdb, config_id, package_line):
+def add_cpv_query_pool(mysettings, myportdb, config_id, cp, repo):
 	conn=CM.getConnection()
 	init_package = gobs_package(mysettings, myportdb)
 	# FIXME: remove the check for gobs when in tree
-	if package_line != "dev-python/gobs":
+	if cp != "dev-python/gobs":
 		build_dict = {}
 		packageDict = {}
 		ebuild_id_list = []
 		# split the cp to categories and package
-		element = package_line.split('/')
+		element = cp.split('/')
 		categories = element[0]
 		package = element[1]
-		log_msg = "C %s/%s" % (categories, package,)
+		log_msg = "C %s:%s" % (cp, repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
-		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package
+		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + cp
 		config_id_list = []
 		config_id_list.append(config_id)
-		config_cpv_listDict = init_package.config_match_ebuild(categories + "/" + package, config_id_list)
+		config_cpv_listDict = init_package.config_match_ebuild(cp, config_id_list)
 		if config_cpv_listDict != {}:
 			cpv = config_cpv_listDict[config_id]['cpv']
-			packageDict['cpv'] = init_package.get_packageDict(pkgdir, cpv, repo)
-			build_dict['checksum'] = packageDict['cpv']['ebuild_version_checksum_tree']
+			packageDict[cpv] = init_package.get_packageDict(pkgdir, cpv, repo)
+			build_dict['checksum'] = packageDict[cpv]['ebuild_version_checksum_tree']
 			build_dict['package_id'] = get_package_id(conn, categories, package, repo)
-			build_dict['ebuild_version'] = packageDict['cpv']['ebuild_version_tree']
+			build_dict['ebuild_version'] = packageDict[cpv]['ebuild_version_tree']
 			ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
 			if ebuild_id is not None:
 				ebuild_id_list.append(ebuild_id)
 				init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
-		log_msg = "C %s/%s ... Done." % (categories, package,)
+		log_msg = "C %s:%s ... Done." % (cp, repo,)
 		add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 	return
@@ -88,10 +88,9 @@ def add_buildquery_main(config_id):
 		repo_dir_list.append(repo_dir)
 		
 		# Get the package list from the repo
-		package_id_list_tree = []
 		package_list_tree = myportdb.cp_all(trees=repo_dir_list)
-		for package_line in sorted(package_list_tree):
-			pool.apply_async(add_cpv_query_pool, (mysettings, myportdb, config_id, package_line,))
+		for cp in sorted(package_list_tree):
+			pool.apply_async(add_cpv_query_pool, (mysettings, myportdb, config_id, cp, repo,))
 	pool.close()
 	pool.join()
 	log_msg = "Adding build jobs for: %s ... Done." % (config_setup,)
@@ -100,13 +99,15 @@ def add_buildquery_main(config_id):
 	return True
 
 def del_buildquery_main(config_id):
-	log_msg = "Removeing build querys for: %s" % (config_id,)
+	conn=CM.getConnection()
+	config_setup = get_config(conn, config_id)
+	log_msg = "Removeing build jobs for: %s" % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	build_job_id_list = get_build_jobs_id_list_config(conn, config_id)
 	if build_job_id_list is not None:
 		for build_job_id in build_job_id_list:
 			del_old_build_jobs(conn, build_job_id)
-	log_msg = "Removeing build querys for: %s ... Done." % (config_id,)
+	log_msg = "Removeing build jobs for: %s ... Done." % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 	return True

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 5d47fac..246b5f8 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -148,9 +148,9 @@ class gobs_package(object):
 				# Make a dict with enable and disable use flags for ebuildqueuedwithuses
 				use_flagsDict = {}
 				for x in use_enable:
-					use_flagsDict[x] = True
+					use_flagsDict[x] = 'True'
 				for x in use_disable:
-					use_flagsDict[x] = False
+					use_flagsDict[x] = 'False'
 				# Unpack packageDict
 				i = 0
 				for k, v in packageDict.iteritems():
@@ -296,7 +296,7 @@ class gobs_package(object):
 				packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo)
 
 				# Get the checksum of the ebuild in tree and db
-				ebuild_version_checksum_tree = packageDict['cpv']['ebuild_version_checksum_tree']
+				ebuild_version_checksum_tree = packageDict[cpv]['ebuild_version_checksum_tree']
 				ebuild_version_manifest_checksum_db = get_ebuild_checksum(conn, package_id, ebuild_version_tree)
 
 				# Check if the checksum have change

diff --git a/gobs/pym/pgsql_querys.py b/gobs/pym/pgsql_querys.py
index f495c48..b7b5b8a 100644
--- a/gobs/pym/pgsql_querys.py
+++ b/gobs/pym/pgsql_querys.py
@@ -9,10 +9,10 @@ def add_gobs_logs(connection, log_msg, log_type, config):
 	connection.commit()
 
 # Queryes to handel the jobs table
-def get_jobs_id(connection, config_id):
+def get_jobs_id(connection, config):
 	cursor = connection.cursor()
-	sqlQ = "SELECT job_id FROM jobs WHERE status = 'Waiting' AND config_id = %s"
-	cursor.execute(sqlQ, (config_id,))
+	sqlQ = "SELECT job_id FROM jobs WHERE status = 'Waiting' AND config_id = (SELECT config_id FROM configs WHERE config = %s)"
+	cursor.execute(sqlQ, (config,))
 	entries = cursor.fetchall()
 	if entries is None:
 		return None
@@ -23,7 +23,7 @@ def get_jobs_id(connection, config_id):
 
 def get_job(connection, job_id):
 	cursor = connection.cursor()
-	sqlQ ='SELECT job, config_id FROM jobs WHERE job_id = %s'
+	sqlQ ='SELECT job, config_id2 FROM jobs WHERE job_id = %s'
 	cursor.execute(sqlQ, (job_id,))
 	entries = cursor.fetchone()
 	job = entries[0]

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 215e841..ba76d53 100755
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -34,20 +34,21 @@ def init_portage_settings():
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	
 	# Get default config from the configs table  and default_config=1
-	config_id = get_default_config(conn)			# HostConfigDir = table configs id
-	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config_id[0] + "/"
+	config = get_default_config(conn)[0]			# HostConfigDir = table configs id
+	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config + "/"
 
 	# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
 	mysettings = portage.config(config_root = default_config_root)
-	log_msg = "Setting default config to: %s" % (config_id[0],)
+	log_msg = "Setting default config to: %s" % (config,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
 	CM.putConnection(conn)
 	return mysettings
 
-def update_cpv_db_pool(mysettings, myportdb, init_package, package_line, repo):
+def update_cpv_db_pool(mysettings, myportdb, cp, repo):
 	conn=CM.getConnection()
+	init_package = gobs_package(mysettings, myportdb)
 	# split the cp to categories and package
-	element = package_line.split('/')
+	element = cp.split('/')
 	categories = element[0]
 	package = element[1]
 
@@ -57,7 +58,7 @@ def update_cpv_db_pool(mysettings, myportdb, init_package, package_line, repo):
 	if package_id is None:  
 
 		# Add new package with ebuilds
-		init_package.add_new_package_db(categories, package)
+		init_package.add_new_package_db(categories, package, repo)
 
 	# Ceck if we have the cp in the package table
 	elif package_id is not None:
@@ -74,7 +75,6 @@ def update_cpv_db():
 	
 	# Setup portdb, package
 	myportdb = portage.portdbapi(mysettings=mysettings)
-	init_package = gobs_package(mysettings, myportdb)
 	repo_list = ()
 	repos_trees_list = []
 
@@ -97,12 +97,11 @@ def update_cpv_db():
 		repo_dir_list.append(repo_dir)
 
 		# Get the package list from the repo
-		package_id_list_tree = []
 		package_list_tree = myportdb.cp_all(trees=repo_dir_list)
 
 		# Run the update package for all package in the list and in a multiprocessing pool
-		for package_line in sorted(package_list_tree):
-			pool.apply_async(update_cpv_db_pool, (mysettings, myportdb, init_package, package_line, repo,))
+		for cp in sorted(package_list_tree):
+			pool.apply_async(update_cpv_db_pool, (mysettings, myportdb, cp, repo,))
 			# update_cpv_db_pool(mysettings, myportdb, init_package, package_line, repo)
 	pool.close()
 	pool.join()


^ permalink raw reply related	[flat|nested] 174+ messages in thread
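
The per-repository walk introduced above follows portage's portdbapi API:
porttrees lists the configured repository paths, getRepositoryName() maps a
path back to its repo name, and cp_all(trees=[...]) restricts the
category/package listing to a single tree. A sketch, assuming a host with
portage installed and a usable config root:

    import portage

    mysettings = portage.config(config_root="/")
    myportdb = portage.portdbapi(mysettings=mysettings)
    for repo_dir in myportdb.porttrees:
        repo = myportdb.getRepositoryName(repo_dir)
        # Restrict the cp listing to this one tree.
        for cp in sorted(myportdb.cp_all(trees=[repo_dir])):
            print("%s:%s" % (cp, repo))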

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-17  1:18 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-17  1:18 UTC (permalink / raw
  To: gentoo-commits

commit:     964d71cd6e2a4f6326a54c630827e8512502441d
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 17 01:18:39 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Mon Dec 17 01:18:39 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=964d71cd

fix local variable 'make_conf_checksum_db' referenced before assignment

---
 gobs/pym/check_setup.py |    3 ---
 1 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 0856f7f..a97d639 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -59,9 +59,6 @@ def check_make_conf():
 def check_make_conf_guest(config_profile):
 	conn=CM.getConnection()
 	print('config_profile', config_profile)
-	if make_conf_checksum_db is None:
-		CM.putConnection(conn)
-		return False
 	make_conf_file = "/etc/portage/make.conf"
 	# Check if we can open the file and close it
 	# Check if we have some error in the file (portage.util.getconfig)


^ permalink raw reply related	[flat|nested] 174+ messages in thread
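
The bug class fixed here is Python's scoping rule: a function-local name read
before its first assignment raises "local variable ... referenced before
assignment", so the None test had to move below the get_profile_checksum()
call that assigns it. A contrived, runnable illustration of the corrected
ordering (hypothetical names):

    def check_checksum(get_checksum):
        # Assign first; in the broken version the "is None" test ran
        # before this line and blew up on the unassigned name.
        make_conf_checksum_db = get_checksum()
        if make_conf_checksum_db is None:
            return False
        return True

    print(check_checksum(lambda: "abc123"))  # True
    print(check_checksum(lambda: None))      # False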

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-19  2:17 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-19  2:17 UTC (permalink / raw
  To: gentoo-commits

commit:     87dcaa229a66fa9253083c5f1418efd858a5d069
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 19 02:16:47 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Dec 19 02:16:47 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=87dcaa22

fix invalid syntax

---
 gobs/pym/ConnectionManager.py |   22 ++++++++++------------
 1 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/gobs/pym/ConnectionManager.py b/gobs/pym/ConnectionManager.py
index a06e2cd..9d345c1 100644
--- a/gobs/pym/ConnectionManager.py
+++ b/gobs/pym/ConnectionManager.py
@@ -6,9 +6,9 @@ from gobs.readconf import get_conf_settings
 reader = get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 
-if settings_dict['sql_backend']=='pgsql':
+if settings_dict['sql_backend'] == 'pgsql':
 	from gobs.pgsql_querys import *
-if settings_dict['sql_backend']=='mysql':
+if settings_dict['sql_backend'] == 'mysql':
 	from gobs.mysql_querys import *
 
 class connectionManager(object):
@@ -25,7 +25,7 @@ class connectionManager(object):
 			cls._password=settings_dict['sql_passwd']
 			cls._database=settings_dict['sql_db']
 			#shouldnt we include port also?
-			if cls._backend == 'pgsql'
+			if cls._backend == 'pgsql':
 				try:
 					from psycopg2 import pool, extensions
 				except ImportError:
@@ -35,8 +35,7 @@ class connectionManager(object):
 				cls._connectionNumber=numberOfconnections
 				#always create 1 connection
 				cls._pool=pool.ThreadedConnectionPool(1,cls._connectionNumber,host=cls._host,database=cls._database,user=cls._user,password=cls._password)
-				cls._name=cls._backend
-			if cls._backend == 'mysql'
+			if cls._backend == 'mysql':
 				try:
 					import mysql.connector
 					from mysql.connector import errorcode
@@ -58,24 +57,23 @@ class connectionManager(object):
 						print("Database does not exists")
 					else:
 						print(err)
-				cls._name=cls._backend
 		return cls._instance
 
 	## returns the name of the database pgsql/mysql etc
 	def getName(self):
-		return self._name
+		return self._backend
 
 	def getConnection(self):
-		if self._name == 'pgsql'    
+		if self._backend == 'pgsql':
 			return self._pool.getconn()
-		if self._name == 'mysql'
+		if self._backend == 'mysql':
 			return self._cnx
       
 	def putConnection(self, connection):
-		if self._name == 'pgsql'
+		if self._backend == 'pgsql':
 			self._pool.putconn(connection , key=None, close=True)
-			if self._name == 'mysql'
-			return self._cnx.close()
+		if self._backend == 'mysql':
+			self._cnx.close()
 
 	def closeAllConnections(self):
 		self._pool.closeall()


^ permalink raw reply related	[flat|nested] 174+ messages in thread
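
The pgsql branch of connectionManager wraps psycopg2's ThreadedConnectionPool,
and getConnection()/putConnection() are thin getconn()/putconn() wrappers. A
minimal sketch of that pool lifecycle, with placeholder credentials (the real
values come from the gobs settings):

    from psycopg2 import pool

    db_pool = pool.ThreadedConnectionPool(
        1, 20, host="localhost", database="gobs",
        user="gobs", password="secret")

    conn = db_pool.getconn()  # borrow a connection from the pool
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT 1")
        print(cursor.fetchone())
    finally:
        # Hand it back; close=True matches what putConnection() does here.
        db_pool.putconn(conn, key=None, close=True)
    db_pool.closeall()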

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21  1:44 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21  1:44 UTC (permalink / raw
  To: gentoo-commits

commit:     60e08d29cac9c2787fc63efdfa10929515791971
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 01:43:49 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 01:43:49 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=60e08d29

remove build_queru.py

---
 gobs/pym/build_queru.py |  178 -----------------------------------------------
 1 files changed, 0 insertions(+), 178 deletions(-)

diff --git a/gobs/pym/build_queru.py b/gobs/pym/build_queru.py
deleted file mode 100644
index 7147416..0000000
--- a/gobs/pym/build_queru.py
+++ /dev/null
@@ -1,178 +0,0 @@
-# Get the options from the config file set in gobs.readconf
-from __future__ import print_function
-from gobs.readconf import get_conf_settings
-reader=get_conf_settings()
-gobs_settings_dict=reader.read_gobs_settings_all()
-# make a CM
-from gobs.ConnectionManager import connectionManager
-CM=connectionManager(gobs_settings_dict)
-#selectively import the pgsql/mysql querys
-if CM.getName()=='pgsql':
-	from gobs.pgsql_querys import *
-
-import portage
-import os
-import re
-import sys
-import signal
-import logging
-
-from gobs.manifest import gobs_manifest
-from gobs.depclean import main_depclean
-from gobs.flags import gobs_use_flags
-from portage import _encodings
-from portage import _unicode_decode
-from portage.versions import cpv_getkey
-from portage.dep import check_required_use
-from gobs.main import emerge_main
-from gobs.build_log import log_fail_queru
-
-from gobs.actions import load_emerge_config
-
-class queruaction(object):
-
-	def __init__(self, config_profile):
-		self._mysettings = portage.config(config_root = "/")
-		self._config_profile = config_profile
-		self._myportdb =  portage.portdb
-
-	def make_build_list(self, build_dict, settings, portdb):
-		conn=CM.getConnection()
-		package_id = build_dict['package_id']
-		cp, repo = get_cp_repo_from_package_id(conn, package_id)
-		element = cp.split('/')
-		package = element[1]
-		cpv = cp + "-" + build_dict['ebuild_version']
-		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + cp
-		init_manifest =  gobs_manifest(settings, pkgdir)
-		try:
-			ebuild_version_checksum_tree = portage.checksum.sha256hash(pkgdir + "/" + package + "-" + build_dict['ebuild_version'] + ".ebuild")[0]
-		except:
-			ebuild_version_checksum_tree = None
-		if ebuild_version_checksum_tree == build_dict['checksum']:
-			init_flags = gobs_use_flags(settings, portdb, cpv)
-			build_use_flags_list = init_flags.comper_useflags(build_dict)
-			log_msg = "build_use_flags_list %s" % (build_use_flags_list,)
-			add_gobs_logs(conn, log_msg, "info", self._config_profile)
-			manifest_error = init_manifest.check_file_in_manifest(portdb, cpv, build_use_flags_list, repo)
-			if manifest_error is None:
-				build_dict['check_fail'] = False
-				build_cpv_dict = {}
-				build_cpv_dict[cpv] = build_use_flags_list
-				log_msg = "build_cpv_dict: %s" % (build_cpv_dict,)
-				add_gobs_logs(conn, log_msg, "info", self._config_profile)
-				CM.putConnection(conn)
-				return build_cpv_dict
-			else:
-				build_dict['type_fail'] = "Manifest error"
-				build_dict['check_fail'] = True
-				log_msg = "Manifest error: %s:%s" % cpv, manifest_error
-				add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		else:
-			build_dict['type_fail'] = "Wrong ebuild checksum"
-			build_dict['check_fail'] = True
-		if build_dict['check_fail'] is True:
-				log_fail_queru(build_dict, settings)
-				CM.putConnection(conn)
-				return None
-		CM.putConnection(conn)
-		return build_cpv_dict
-
-	def build_procces(self, buildqueru_cpv_dict, build_dict, settings, portdb):
-		conn=CM.getConnection()
-		build_cpv_list = []
-		depclean_fail = True
-		for k, build_use_flags_list in buildqueru_cpv_dict.iteritems():
-			build_cpv_list.append("=" + k)
-			if not build_use_flags_list == None:
-				build_use_flags = ""
-				for flags in build_use_flags_list:
-					build_use_flags = build_use_flags + flags + " "
-				filetext = '=' + k + ' ' + build_use_flags
-				log_msg = "filetext: %s" % filetext
-				add_gobs_logs(conn, log_msg, "info", self._config_profile)
-				with open("/etc/portage/package.use/99_autounmask", "a") as f:
-     					f.write(filetext)
-     					f.write('\n')
-     					f.close
-		log_msg = "build_cpv_list: %s" % (build_cpv_list,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		
-		argscmd = []
-		for emerge_option in build_dict['emerge_options']:
-			if emerge_option == '--depclean':
-				pass
-			elif emerge_option == '--nodepclean':
-				pass
-			elif emerge_option == '--nooneshot':
-				pass
-			else:
-				if not emerge_option in argscmd:
-					argscmd.append(emerge_option)
-		for build_cpv in build_cpv_list:
-			argscmd.append(build_cpv)
-		print("Emerge options: %s" % argscmd)
-		log_msg = "argscmd: %s" % (argscmd,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		# Call main_emerge to build the package in build_cpv_list
-		print("Build: %s" % build_dict)
-		build_fail = emerge_main(argscmd, build_dict)
-		# Run depclean
-		if  '--depclean' in build_dict['emerge_options'] and not '--nodepclean' in build_dict['emerge_options']:
-			depclean_fail = main_depclean()
-		try:
-			os.remove("/etc/portage/package.use/99_autounmask")
-			with open("/etc/portage/package.use/99_autounmask", "a") as f:
-				f.close
-		except:
-			pass
-		build_dict2 = {}
-		build_dict2 = get_packages_to_build(conn, self._config_profile)
-		if build_dict['build_job_id'] == build_dict2['build_job_id']:
-			log_msg = "build_job %s was not removed" % (build_dict['build_job_id'],)
-			add_gobs_logs(conn, log_msg, "info", self._config_profile)
-			print("qurery was not removed")
-			if build_fail is True:
-				build_dict['type_fail'] = "Emerge faild"
-				build_dict['check_fail'] = True
-				log_msg = "Emerge faild!"
-				add_gobs_logs(conn, log_msg, "info", self._config_profile)
-			else:
-				build_dict['type_fail'] = "Querey was not removed"
-				build_dict['check_fail'] = True
-			log_fail_queru(build_dict, settings)
-		if build_fail is True:
-			CM.putConnection(conn)
-			return True
-		CM.putConnection(conn)
-		return False
-
-	def procces_qureru(self):
-		conn=CM.getConnection()
-		build_dict = {}
-		build_dict = get_packages_to_build(conn, self._config_profile)
-		settings, trees, mtimedb = load_emerge_config()
-		portdb = trees[settings["ROOT"]]["porttree"].dbapi
-		if build_dict is None:
-			CM.putConnection(conn)
-			return
-		log_msg = "build_dict: %s" % (build_dict,)
-		add_gobs_logs(conn, log_msg, "info", self._config_profile)
-		if not build_dict['ebuild_id'] is None and build_dict['checksum'] is not None:
-			buildqueru_cpv_dict = self.make_build_list(build_dict, settings, portdb)
-			log_msg = "buildqueru_cpv_dict: %s" % (buildqueru_cpv_dict,)
-			add_gobs_logs(conn, log_msg, "info", self._config_profile)
-			if buildqueru_cpv_dict is None:
-				CM.putConnection(conn)
-				return
-			fail_build_procces = self.build_procces(buildqueru_cpv_dict, build_dict, settings, portdb)
-			CM.putConnection(conn)
-			return
-		if not build_dict['emerge_options'] is [] and build_dict['ebuild_id'] is None:
-			CM.putConnection(conn)
-			return
-		if not build_dict['ebuild_id'] is None and build_dict['emerge_options'] is None:
-			pass
-			# del_old_queue(conn, build_dict['queue_id'])
-		CM.putConnection(conn)
-		return


^ permalink raw reply related	[flat|nested] 174+ messages in thread
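
Most of the removed build_queru.py largely resurfaces later as build_job.py;
the one piece worth seeing in isolation is how build_procces() assembled the
emerge argument vector, dropping the scheduler-only options and appending
=cpv atoms. A sketch of that filtering with sample data:

    def build_emerge_args(emerge_options, build_cpv_list):
        # Drop options the scheduler handles itself; keep the rest, deduplicated.
        skip = ('--depclean', '--nodepclean', '--nooneshot')
        argscmd = []
        for opt in emerge_options:
            if opt not in skip and opt not in argscmd:
                argscmd.append(opt)
        for cpv in build_cpv_list:
            argscmd.append('=' + cpv)
        return argscmd

    print(build_emerge_args(['--oneshot', '--depclean', '--oneshot'],
                            ['app-misc/foo-1.0']))
    # ['--oneshot', '=app-misc/foo-1.0']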

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21  1:49 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21  1:49 UTC (permalink / raw
  To: gentoo-commits

commit:     596d5019cd5b0f4c6f5548bd0cfde3837b0f7ad9
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 01:48:48 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 01:48:48 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=596d5019

fix invalid syntax and unexpected indent

---
 gobs/pym/mysql_querys.py |    2 +-
 gobs/pym/package.py      |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index bf2e20c..f05cb8e 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -261,7 +261,7 @@ def get_config_id_list(connection):
 		config_id_list.append(config_id[0])
 	return config_id_list
 
-ef add_new_build_job(connection, ebuild_id, config_id, use_flagsDict):
+def add_new_build_job(connection, ebuild_id, config_id, use_flagsDict):
 	cursor = connection.cursor()
 	sqlQ1 = 'INSERT INTO build_jobs (ebuild_id, config_id) VALUES (%s, %s)'
 	sqlQ2 = 'INSERT INTO build_jobs_use (build_job_id, use_id, status) VALUES (%s, %s, %s)'

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 5257516..38a13f3 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -9,7 +9,7 @@ from gobs.mysql_querys import get_config, get_config_id, add_gobs_logs, get_defa
 	add_new_build_job, get_config_id_list, update_manifest_sql, add_new_manifest_sql, \
 	add_new_ebuild_sql, update_active_ebuild_to_fales, add_old_ebuild, \
 	get_ebuild_checksum, get_manifest_db, get_cp_repo_from_package_id
-	from gobs.readconf import get_conf_settings
+from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 config_profile = gobs_settings_dict['gobs_config']


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21  1:50 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21  1:50 UTC (permalink / raw
  To: gentoo-commits

commit:     7591e8091205e60dcc704ab6c734bb2d982184d4
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 01:50:41 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 01:50:41 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=7591e809

fix invalid syntax

---
 gobs/pym/mysql_querys.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index f05cb8e..e4e9ce6 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -310,7 +310,7 @@ def get_cp_from_package_id(connection, package_id):
 	cp = category + '/' + package
 	return cp
 
-ef get_cp_repo_from_package_id(connection, package_id):
+def get_cp_repo_from_package_id(connection, package_id):
 	cursor =connection.cursor()
 	sqlQ = 'SELECT repos.repo FROM repos, packages WHERE repos.repo_id = packages.repo_id AND packages.package_id = %s'
 	cp = get_cp_from_package_id(connection, package_id)


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21  2:11 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21  2:11 UTC (permalink / raw
  To: gentoo-commits

commit:     e5239c39901f14bc549c066c98a71ce8fb06fa1f
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 02:10:44 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 02:10:44 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e5239c39

fix connectionManager object has no attribute

---
 gobs/pym/ConnectionManager.py |    2 +-
 gobs/pym/build_log.py         |   61 +++++++++++++++++++----------------------
 2 files changed, 29 insertions(+), 34 deletions(-)

diff --git a/gobs/pym/ConnectionManager.py b/gobs/pym/ConnectionManager.py
index 14f875f..8d3750c 100644
--- a/gobs/pym/ConnectionManager.py
+++ b/gobs/pym/ConnectionManager.py
@@ -7,7 +7,7 @@ gobs_settings_dict=reader.read_gobs_settings_all()
 class connectionManager(object):
 	_instance = None
 
-	def __new__(cls, *args, **kwargs):
+	def __new__(cls, numberOfconnections=20, *args, **kwargs):
 		if not cls._instance:
 			cls._instance = super(connectionManager, cls).__new__(cls, *args, **kwargs)
 			#read the sql user/host etc and store it in the local object

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 43bc0a2..b55ee19 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -11,22 +11,14 @@ from portage.util import writemsg, \
 	writemsg_level, writemsg_stdout
 from portage import _encodings
 from portage import _unicode_encode
+
 from gobs.package import gobs_package
 from gobs.readconf import get_conf_settings
 from gobs.flags import gobs_use_flags
-
-reader=get_conf_settings()
-gobs_settings_dict=reader.read_gobs_settings_all()
-config_profile = gobs_settings_dict['gobs_config']
-# make a CM
 from gobs.ConnectionManager import connectionManager
-CM=connectionManager(gobs_settings_dict)
-#selectively import the pgsql/mysql querys
-if CM.getName()=='pgsql':
-	from gobs.pgsql_querys import *
+from gobs.mysql_querys import add_gobs_logs, get_config_id
 	
-def get_build_dict_db(settings, pkg):
-	conn=CM.getConnection()
+def get_build_dict_db(conn, config_id, settings, pkg):
 	myportdb = portage.portdbapi(mysettings=settings)
 	cpvr_list = catpkgsplit(pkg.cpv, silent=1)
 	categories = cpvr_list[0]
@@ -34,7 +26,7 @@ def get_build_dict_db(settings, pkg):
 	repo = pkg.repo
 	ebuild_version = cpv_getversion(pkg.cpv)
 	log_msg = "Logging %s:%s" % (pkg.cpv, repo,)
-	add_gobs_logs(conn, log_msg, "info", config_profile)
+	add_gobs_logs(conn, log_msg, "info", config_id)
 	init_package = gobs_package(settings, myportdb)
 	package_id = get_package_id(conn, categories, package, repo)
 	build_dict = {}
@@ -43,7 +35,7 @@ def get_build_dict_db(settings, pkg):
 	build_dict['cpv'] = pkg.cpv
 	build_dict['categories'] = categories
 	build_dict['package'] = package
-	build_dict['config_id'] = get_config_id(conn, config_profile)
+	build_dict['config_id'] = config_id
 	init_useflags = gobs_use_flags(settings, myportdb, pkg.cpv)
 	init_useflags = gobs_use_flags(settings, myportdb, pkg.cpv)
 	iuse_flags_list, final_use_list = init_useflags.get_flags_pkg(pkg, settings)
@@ -70,14 +62,14 @@ def get_build_dict_db(settings, pkg):
 	ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
 	if ebuild_id is None:
 		log_msg = "%s:%s Don't have any ebuild_id!" % (pkg.cpv, repo,)
-		add_gobs_logs(conn, log_msg, "info", config_profile)
+		add_gobs_logs(conn, log_msg, "info", config_id)
 		update_manifest_sql(conn, package_id, "0")
 		init_package = gobs_package(settings, myportdb)
 		init_package.update_package_db(package_id)
 		ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
 		if ebuild_id is None:
 			log_msg = "%s:%s Don't have any ebuild_id!" % (pkg.cpv, repo,)
-			add_gobs_logs(conn, log_msg, "error", config_profile)
+			add_gobs_logs(conn, log_msg, "error", config_id)
 			return
 	build_dict['ebuild_id'] = ebuild_id
 	build_job_id = get_build_job_id(conn, build_dict)
@@ -85,7 +77,6 @@ def get_build_dict_db(settings, pkg):
 		build_dict['build_job_id'] = None
 	else:
 		build_dict['build_job_id'] = build_job_id
-	CM.putConnection(conn)
 	return build_dict
 
 def search_info(textline, error_log_list):
@@ -214,13 +205,20 @@ def write_msg_file(msg, log_path):
 				if f_real is not f:
 					f_real.close()
 
-def add_buildlog_main(settings, pkg, trees):
-	conn=CM.getConnection()
-	build_dict = get_build_dict_db(settings, pkg)
+def add_buildlog_main(settings, pkg):
+	CM3 = connectionManager()
+	conn3 = CM3.newConnection()
+	if not conn3.is_connected() is True:
+		conn3.reconnect(attempts=2, delay=1)
+	reader=get_conf_settings()
+	gobs_settings_dict=reader.read_gobs_settings_all()
+	config_profile = gobs_settings_dict['gobs_config']
+	config_id = get_config_id(conn, config_profile)
+	build_dict = get_build_dict_db(conn3, settings, pkg)
 	if build_dict is None:
 		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
-		add_gobs_logs(conn, log_msg, "info", config_profile)
-		CM.putConnection(conn)
+		add_gobs_logs(conn3, log_msg, "info", config_id)
+		conn3.close
 		return
 	build_log_dict = {}
 	build_log_dict = get_buildlog_info(settings, pkg, build_dict)
@@ -236,15 +234,15 @@ def add_buildlog_main(settings, pkg, trees):
 	build_log_dict['log_hash'] = log_hash.hexdigest()
 	build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 	log_msg = "Logfile name: %s" % (settings.get("PORTAGE_LOG_FILE"),)
-	add_gobs_logs(conn, log_msg, "info", config_profile)
-	log_id = add_new_buildlog(conn, build_dict, build_log_dict)
+	add_gobs_logs(conn3, log_msg, "info", config_id)
+	log_id = add_new_buildlog(conn3, build_dict, build_log_dict)
 
 	msg = ""
 	# emerge_info_logfilename = settings.get("PORTAGE_LOG_FILE")[:-3] + "emerge_log.log"
 	if log_id is None:
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
-		add_gobs_logs(conn, log_msg, "info", config_profile)
+		add_gobs_logs(conn3, log_msg, "info", config_id)
 		print("Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,))
 	else:
 		# for msg_line in msg:
@@ -252,13 +250,12 @@ def add_buildlog_main(settings, pkg, trees):
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 		# os.chmod(emerge_info_logfilename, 0o664)
 		log_msg = "Package: %s:%s is logged." % (pkg.cpv, pkg.repo,)
-		add_gobs_logs(conn, log_msg, "info", config_profile)
+		add_gobs_logs(conn3, log_msg, "info", config_id)
 		print("Package %s:%s is logged." % (pkg.cpv, pkg.repo,))
-	CM.putConnection(conn)
+	conn3.close
 
-def log_fail_queru(build_dict, settings):
-	config = gobs_settings_dict['gobs_config']
-	conn=CM.getConnection()
+def log_fail_queru(conn, build_dict, settings):
+	config_id = build_dict['config_id']
 	print('build_dict', build_dict)
 	fail_querue_dict = get_fail_querue_dict(conn, build_dict)
 	print('fail_querue_dict', fail_querue_dict)
@@ -275,7 +272,6 @@ def log_fail_queru(build_dict, settings):
 			fail_querue_dict['build_job_id'] = build_dict['build_job_id']
 			fail_querue_dict['fail_type'] = build_dict['type_fail']
 			update_fail_times(conn, fail_querue_dict)
-			CM.putConnection(conn)
 			return
 		else:
 			build_log_dict = {}
@@ -303,7 +299,6 @@ def log_fail_queru(build_dict, settings):
 				for sum_log_line in sum_build_log_list:
 					summary_error = summary_error + " " + sum_log_line
 			build_log_dict['log_hash'] = '0'
-			build_dict['config_id'] = get_config_id(conn, config_profile)
 			useflagsdict = {}
 			if build_dict['build_useflags'] == {}:
 				for k, v in build_dict['build_useflags'].iteritems():
@@ -313,9 +308,9 @@ def log_fail_queru(build_dict, settings):
 			else:
 				build_dict['build_useflags'] = None			
 			if settings.get("PORTAGE_LOG_FILE") is not None:
+				config_profile = get_config(conn, config_id)
 				build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(config_profile)[1]
 				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
 			else:
 				build_log_dict['logfilename'] = ""
-				log_id = add_new_buildlog(conn, build_dict, build_log_dict)
-	CM.putConnection(conn)
\ No newline at end of file
+			log_id = add_new_buildlog(conn, build_dict, build_log_dict)


^ permalink raw reply related	[flat|nested] 174+ messages in thread
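
The connectionManager fix above is the classic __new__ singleton: the first
call constructs the instance, and every later call returns that same object,
so giving numberOfconnections a default lets callers write connectionManager()
with no arguments at all. A minimal sketch of the pattern (renamed, not the
real class):

    class ConnectionManager(object):
        _instance = None

        def __new__(cls, number_of_connections=20, *args, **kwargs):
            if not cls._instance:
                cls._instance = super(ConnectionManager, cls).__new__(cls)
                cls._instance._connection_number = number_of_connections
            return cls._instance

    first = ConnectionManager(30)
    second = ConnectionManager()      # no args: the existing instance comes back
    print(first is second)            # True
    print(second._connection_number)  # 30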

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21  2:24 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21  2:24 UTC (permalink / raw
  To: gentoo-commits

commit:     a0ce38eb463d1da24c5215eb8941c07da8b4d6a4
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 02:23:15 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 02:23:15 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=a0ce38eb

fix missing imports

---
 gobs/pym/actions.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index 9ca6b12..e5cbd65 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -81,7 +81,7 @@ from _emerge.UnmergeDepPriority import UnmergeDepPriority
 from _emerge.UseFlagDisplay import pkg_use_display
 from _emerge.userquery import userquery
 
-from gobs.build_queru import log_fail_queru
+from gobs.build_log import log_fail_queru
 from gobs.ConnectionManager import connectionManager
 
 if sys.hexversion >= 0x3000000:


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21 17:33 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21 17:33 UTC (permalink / raw
  To: gentoo-commits

commit:     fc5988d6564ad02165d5e4bd4acbe8a8c78bef88
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 17:33:19 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 17:33:19 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=fc5988d6

fix Data truncated for column 'status' at row 1

---
 gobs/pym/mysql_querys.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 3dfa10c..bf7034b 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -189,10 +189,10 @@ def add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, i
 			restriction_id = cursor.fetchone()[0]
 		cursor.execute(sqlQ4, (ebuild_id, restriction_id,))
 	for iuse in iuse_list:
-		set_iuse = 'disable'
+		set_iuse = 'False'
 		if iuse[0] in ["+"]:
 			iuse = iuse[1:]
-			set_iuse = 'enable'
+			set_iuse = 'True'
 		elif iuse[0] in ["-"]:
 			iuse = iuse[1:]
 		use_id = get_use_id(connection, iuse)


^ permalink raw reply related	[flat|nested] 174+ messages in thread
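
The "Data truncated" error comes from MySQL rejecting a value that is not one
of an ENUM column's members, so the writer has to normalize IUSE entries
('+flag', '-flag', 'flag') to the literal strings 'True'/'False' before the
INSERT. The parsing half of that hunk in isolation:

    def parse_iuse(iuse_list):
        # Map IUSE defaults onto the 'True'/'False' strings the column accepts.
        flags = {}
        for iuse in iuse_list:
            set_iuse = 'False'
            if iuse[:1] == '+':
                iuse = iuse[1:]
                set_iuse = 'True'
            elif iuse[:1] == '-':
                iuse = iuse[1:]
            flags[iuse] = set_iuse
        return flags

    print(parse_iuse(['+ssl', '-debug', 'X']))
    # {'ssl': 'True', 'debug': 'False', 'X': 'False'}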

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21 20:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21 20:31 UTC (permalink / raw
  To: gentoo-commits

commit:     6f0e315610679af631d970320f613bc55d2a23f4
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 20:31:30 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 20:31:30 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6f0e3156

fix 'connectionManager' object has no attribute 'getName'

---
 gobs/pym/buildquerydb.py |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/buildquerydb.py b/gobs/pym/buildquerydb.py
index d818b9e..df2dbe0 100644
--- a/gobs/pym/buildquerydb.py
+++ b/gobs/pym/buildquerydb.py
@@ -14,9 +14,8 @@ config_profile = gobs_settings_dict['gobs_config']
 # make a CM
 from gobs.ConnectionManager import connectionManager
 CM=connectionManager(gobs_settings_dict)
-#selectively import the pgsql/mysql querys
-if CM.getName()=='pgsql':
-  from gobs.pgsql import *
+
+
 
 from gobs.check_setup import check_make_conf
 from gobs.sync import git_pull


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21 20:41 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21 20:41 UTC (permalink / raw
  To: gentoo-commits

commit:     f65884fa16e0365ce9cc5668715f1716b768caaf
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 20:41:14 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 20:41:14 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=f65884fa

Fix Unknown column 'config_id2' in 'field list'

---
 gobs/pym/mysql_querys.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index bf7034b..cd2b9bd 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -33,7 +33,7 @@ def get_jobs_id(connection, config_id):
 
 def get_job(connection, job_id):
 	cursor = connection.cursor()
-	sqlQ ='SELECT job, config_id2 FROM jobs WHERE job_id = %s'
+	sqlQ ='SELECT job, run_config_id FROM jobs WHERE job_id = %s'
 	cursor.execute(sqlQ, (job_id,))
 	entries = cursor.fetchone()
 	cursor.close()


^ permalink raw reply related	[flat|nested] 174+ messages in thread
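
The corrected query is the usual parameterized two-column SELECT: the driver
substitutes %s safely, and fetchone() returns a tuple that unpacks straight
into (job, config_id). The fixed function in self-contained form, for any
DB-API connection with the jobs table present:

    def get_job(connection, job_id):
        # Parameterized lookup; the driver quotes job_id for us.
        cursor = connection.cursor()
        cursor.execute('SELECT job, run_config_id FROM jobs WHERE job_id = %s',
                       (job_id,))
        entries = cursor.fetchone()
        cursor.close()
        if entries is None:
            return None, None  # no such job (guard added for this sketch)
        job, config_id = entries
        return job, config_id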

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21 23:23 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21 23:23 UTC (permalink / raw
  To: gentoo-commits

commit:     cdd4c98eb7af7a754464404d03570277d4751478
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 23:23:13 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 23:23:13 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=cdd4c98e

fix some code for the guest

---
 gobs/pym/build_job.py    |   10 +++++-----
 gobs/pym/build_log.py    |    2 +-
 gobs/pym/check_setup.py  |    3 ++-
 gobs/pym/mysql_querys.py |   17 +++++++++--------
 4 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/gobs/pym/build_job.py b/gobs/pym/build_job.py
index bdceddb..4d02c8f 100644
--- a/gobs/pym/build_job.py
+++ b/gobs/pym/build_job.py
@@ -100,7 +100,7 @@ class build_job_action(object):
 
 		# close the db for the multiprocessing pool will make new ones
 		# and we don't need this one for some time.
-		conn.close()
+		self._conn.close()
 		
 		# Call main_emerge to build the package in build_cpv_list
 		print("Build: %s" % build_dict)
@@ -116,8 +116,8 @@ class build_job_action(object):
 			pass
 
 		# reconnect to the db if needed.
-		if not conn.is_connected() is True:
-			conn.reconnect(attempts=2, delay=1)
+		if not self._conn.is_connected() is True:
+			self._conn.reconnect(attempts=2, delay=1)
 
 		build_dict2 = {}
 		build_dict2 = get_packages_to_build(self._conn, self._config_id)
@@ -133,14 +133,14 @@ class build_job_action(object):
 			else:
 				build_dict['type_fail'] = "Querey was not removed"
 				build_dict['check_fail'] = True
-			log_fail_queru(conn, build_dict, settings)
+			log_fail_queru(self._conn, build_dict, settings)
 		if build_fail is True:
 			return True
 		return False
 
 	def procces_build_jobs(self):
 		build_dict = {}
-		build_dict = get_packages_to_build(self._self._conn, self._config_id)
+		build_dict = get_packages_to_build(self._conn, self._config_id)
 		settings, trees, mtimedb = load_emerge_config()
 		portdb = trees[settings["ROOT"]]["porttree"].dbapi
 		if build_dict is None:

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index db0a6ae..6452e18 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -219,7 +219,7 @@ def add_buildlog_process(settings, pkg):
 	gobs_settings_dict=reader.read_gobs_settings_all()
 	config_profile = gobs_settings_dict['gobs_config']
 	config_id = get_config_id(conn, config_profile)
-	build_dict = get_build_dict_db(conn3, settings, pkg)
+	build_dict = get_build_dict_db(conn, config_id, settings, pkg)
 	if build_dict is None:
 		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_id)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 4997d0f..6115987 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -6,7 +6,8 @@ import errno
 from portage.exception import DigestException, FileNotFound, ParseError, PermissionDenied
 from gobs.text import get_file_text
 from gobs.readconf import get_conf_settings
-from gobs.mysql_querys import get_config_id, get_config_list_all, add_gobs_logs, get_config, update_make_conf
+from gobs.mysql_querys import get_config_id, get_config_list_all, add_gobs_logs, get_config, \
+	update_make_conf, get_profile_checksum
 
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index cd2b9bd..0fb03a6 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -81,7 +81,8 @@ def get_default_config(connection):
 	sqlQ = "SELECT config FROM configs WHERE default_config = 'True'"
 	cursor.execute(sqlQ)
 	entries = cursor.fetchone()
-	return entries
+	cursor.close()
+	return entries[0]
 
 def get_repo_id(connection, repo):
 	cursor = connection.cursor()
@@ -381,7 +382,7 @@ def get_build_jobs_id_list_config(connection, config_id):
 def del_old_build_jobs(connection, build_job_id):
 	cursor = connection.cursor()
 	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
-	sqlQ2 = 'DELETE FROM build_jobs_retest WHERE build_job_id  = %s'
+	sqlQ2 = 'DELETE FROM build_jobs_retdo WHERE build_job_id  = %s'
 	sqlQ3 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
 	sqlQ4 = 'DELETE FROM build_jobs_emerge_options WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (build_job_id,))
@@ -402,12 +403,12 @@ def get_profile_checksum(connection, config_id):
 
 def get_packages_to_build(connection, config_id):
 	cursor =connection.cursor()
-	sqlQ1 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = %s AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND TIMESTAMPDIFF(HOUR, build_jobs.time_stamp, NOW()) >  1 ORDER BY build_jobs.build_job_id LIMIT 1"
+	sqlQ1 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = %s AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND TIMESTAMPDIFF(HOUR, build_jobs.time_stamp, NOW()) > 1 ORDER BY build_jobs.build_job_id LIMIT 1"
 	sqlQ2 = 'SELECT version, checksum FROM ebuilds WHERE ebuild_id = %s'
 	sqlQ3 = 'SELECT uses.flag, build_jobs_use.status FROM build_jobs_use, uses WHERE build_jobs_use.build_job_id = %s AND build_jobs_use.use_id = uses.use_id'
 	sqlQ4 = "SELECT build_jobs.build_job_id, build_jobs.ebuild_id, ebuilds.package_id FROM build_jobs, ebuilds WHERE build_jobs.config_id = %s AND build_jobs.ebuild_id = ebuilds.ebuild_id AND ebuilds.active = 'True' AND build_jobs.status = 'Now' LIMIT 1"
-	sqlQ5 = 'SELECT emerge_options.eoption FROM configs_emerge_options, emerge_options WHERE configs_emerge_options.config_id = %s AND configs_emerge_options.options_id = emerge_options.eoption_id'
-	sqlQ6 = 'SELECT emerge_options.eoption FROM build_jobs_emerge_options, emerge_options WHERE build_jobs_emerge_options.build_job_id = %s AND build_jobs_emerge_options.options_id = emerge_options.eoption_id'
+	sqlQ5 = 'SELECT emerge_options.eoption FROM configs_emerge_options, emerge_options WHERE configs_emerge_options.config_id = %s AND configs_emerge_options.eoption_id = emerge_options.eoption_id'
+	sqlQ6 = 'SELECT emerge_options.eoption FROM build_jobs_emerge_options, emerge_options WHERE build_jobs_emerge_options.build_job_id = %s AND build_jobs_emerge_options.eoption_id = emerge_options.eoption_id'
 	cursor.execute(sqlQ4, (config_id,))
 	entries = cursor.fetchone()
 	if entries is None:
@@ -448,7 +449,7 @@ def get_packages_to_build(connection, config_id):
 
 def update_fail_times(connection, fail_querue_dict):
 	cursor = connection.cursor()
-	sqlQ1 = 'UPDATE build_jobs_retest SET fail_times = %s WHERE build_job_id = %s AND fail_type = %s'
+	sqlQ1 = 'UPDATE build_jobs_redo SET fail_times = %s WHERE build_job_id = %s AND fail_type = %s'
 	sqlQ2 = 'UPDATE build_jobs SET time_stamp = NOW() WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (fail_querue_dict['fail_times'], fail_querue_dict['build_job_id'], fail_querue_dict['fail_type'],))
 	cursor.execute(sqlQ2, (fail_querue_dict['build_job_id'],))
@@ -458,7 +459,7 @@ def update_fail_times(connection, fail_querue_dict):
 def get_fail_querue_dict(connection, build_dict):
 	cursor = connection.cursor()
 	fail_querue_dict = {}
-	sqlQ = 'SELECT fail_times FROM build_jobs_retest WHERE build_job_id = %s AND fail_type = %s'
+	sqlQ = 'SELECT fail_times FROM build_jobs_redo WHERE build_job_id = %s AND fail_type = %s'
 	cursor.execute(sqlQ, (build_dict['build_job_id'], build_dict['type_fail'],))
 	entries = cursor.fetchone()
 	cursor.close()
@@ -468,7 +469,7 @@ def get_fail_querue_dict(connection, build_dict):
 
 def add_fail_querue_dict(connection, fail_querue_dict):
 	cursor = connection.cursor()
-	sqlQ1 = 'INSERT INTO build_jobs_retest (build_job_id, fail_type, fail_times) VALUES ( %s, %s, %s)'
+	sqlQ1 = 'INSERT INTO build_jobs_redo (build_job_id, fail_type, fail_times) VALUES ( %s, %s, %s)'
 	sqlQ2 = 'UPDATE build_jobs SET time_stamp = NOW() WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (fail_querue_dict['build_job_id'],fail_querue_dict['fail_type'], fail_querue_dict['fail_times']))
 	cursor.execute(sqlQ2, (fail_querue_dict['build_job_id'],))

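The get_default_config() change above illustrates the single-value fetch pattern used throughout mysql_querys.py: close the cursor and unpack the one-column row instead of returning the raw tuple. A minimal sketch, assuming a MySQL Connector/Python connection and adding a None guard the original omits:

	def get_default_config(connection):
		cursor = connection.cursor()
		cursor.execute("SELECT config FROM configs WHERE default_config = 'True'")
		entries = cursor.fetchone()  # one row as a tuple, e.g. ('base',), or None
		cursor.close()
		if entries is None:  # guard for the no-default case (assumption, not in the original)
			return None
		return entries[0]  # unpack the scalar instead of returning the tuple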

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21 23:31 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21 23:31 UTC (permalink / raw
  To: gentoo-commits

commit:     bb6fc6d324b6dcef15fafb96b11a83522c963368
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 23:31:25 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 23:31:25 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=bb6fc6d3

fix TypeError: __init__() takes exactly 4 arguments (3 given)

---
 gobs/pym/build_log.py |    6 ++----
 1 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 6452e18..7b2fac4 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -32,7 +32,6 @@ def get_build_dict_db(conn, config_id, settings, pkg):
 	ebuild_version = cpv_getversion(pkg.cpv)
 	log_msg = "Logging %s:%s" % (pkg.cpv, repo,)
 	add_gobs_logs(conn, log_msg, "info", config_id)
-	init_package = gobs_package(settings, myportdb)
 	package_id = get_package_id(conn, categories, package, repo)
 	build_dict = {}
 	build_dict['ebuild_version'] = ebuild_version
@@ -68,7 +67,7 @@ def get_build_dict_db(conn, config_id, settings, pkg):
 		log_msg = "%s:%s Don't have any ebuild_id!" % (pkg.cpv, repo,)
 		add_gobs_logs(conn, log_msg, "info", config_id)
 		update_manifest_sql(conn, package_id, "0")
-		init_package = gobs_package(settings, myportdb)
+		init_package = gobs_package(conn, settings, myportdb)
 		init_package.update_package_db(package_id)
 		ebuild_id = get_ebuild_id_db_checksum(conn, build_dict)
 		if ebuild_id is None:
@@ -248,7 +247,6 @@ def add_buildlog_process(settings, pkg):
 		os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 		log_msg = "Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_id)
-		print("Package %s:%s is NOT logged." % (pkg.cpv, pkg.repo,))
 	else:
 		# for msg_line in msg:
 		#	write_msg_file(msg_line, emerge_info_logfilename)
@@ -256,7 +254,7 @@ def add_buildlog_process(settings, pkg):
 		# os.chmod(emerge_info_logfilename, 0o664)
 		log_msg = "Package: %s:%s is logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_id)
-		print("Package %s:%s is logged." % (pkg.cpv, pkg.repo,))
+		print(">>> Logging %s:%s" % (pkg.cpv, pkg.repo,))
 	conn.close
 
 def add_buildlog_main(settings, pkg):

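The subject line is the stock CPython message for a signature mismatch: gobs_package.__init__() takes four parameters counting self, so the two-argument call raised it. A minimal reproduction, with stub objects standing in for the real conn/settings/portdb values:

	class gobs_package(object):
		def __init__(self, conn, settings, myportdb):  # 4 parameters counting self
			self._conn = conn
			self._settings = settings
			self._myportdb = myportdb

	conn, settings, myportdb = object(), object(), object()  # stand-ins
	# gobs_package(settings, myportdb)  # TypeError: __init__() takes exactly 4 arguments (3 given)
	init_package = gobs_package(conn, settings, myportdb)  # the fixed call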

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-21 23:50 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-21 23:50 UTC (permalink / raw
  To: gentoo-commits

commit:     f31d0880eff4634939c0e7eae9046b6a84751e25
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 21 23:50:06 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Dec 21 23:50:06 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=f31d0880

fix a typo: Table 'zobcs.build_jobs_retdo' doesn't exist

---
 gobs/pym/mysql_querys.py |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 0fb03a6..d9bbcc3 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -382,7 +382,7 @@ def get_build_jobs_id_list_config(connection, config_id):
 def del_old_build_jobs(connection, build_job_id):
 	cursor = connection.cursor()
 	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
-	sqlQ2 = 'DELETE FROM build_jobs_retdo WHERE build_job_id  = %s'
+	sqlQ2 = 'DELETE FROM build_jobs_redo WHERE build_job_id  = %s'
 	sqlQ3 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
 	sqlQ4 = 'DELETE FROM build_jobs_emerge_options WHERE build_job_id = %s'
 	cursor.execute(sqlQ1, (build_job_id,))


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-22 11:45 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-22 11:45 UTC (permalink / raw
  To: gentoo-commits

commit:     0cd30c9c47dc55eb3ff2b545f14b3244b90c810a
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 22 11:45:40 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec 22 11:45:40 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=0cd30c9c

update jobs to support the new jobs table

---
 gobs/pym/jobs.py         |    6 ++++--
 gobs/pym/mysql_querys.py |    9 ++++++++-
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/gobs/pym/jobs.py b/gobs/pym/jobs.py
index 164de06..67ced42 100644
--- a/gobs/pym/jobs.py
+++ b/gobs/pym/jobs.py
@@ -3,7 +3,8 @@ from __future__ import print_function
 from gobs.sync import git_pull, sync_tree
 from gobs.buildquerydb import add_buildquery_main, del_buildquery_main
 from gobs.updatedb import update_db_main
-from gobs.mysql_querys import get_config_id, add_gobs_logs, get_jobs_id, get_job, update_job_list
+from gobs.mysql_querys import get_config_id, add_gobs_logs, get_jobs_id, get_job, \
+	update_job_list, get_job_type
 
 def jobs_main(conn, config_profile):
 	config_id = get_config_id(conn, config_profile)
@@ -11,7 +12,8 @@ def jobs_main(conn, config_profile):
 	if jobs_id is None:
 		return
 	for job_id in jobs_id:
-		job, run_config_id = get_job(conn, job_id)
+		job_type_id, run_config_id = get_job(conn, job_id)
+		job = get_job_type(conn, job_type_id)
 		log_msg = "Job: %s Type: %s" % (job_id, job,)
 		add_gobs_logs(conn, log_msg, "info", config_id)
 		if job == "addbuildquery":

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 8c531cb..cfcbc4c 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -33,7 +33,7 @@ def get_jobs_id(connection, config_id):
 
 def get_job(connection, job_id):
 	cursor = connection.cursor()
-	sqlQ ='SELECT job, run_config_id FROM jobs WHERE job_id = %s'
+	sqlQ ='SELECT job_type_id, run_config_id FROM jobs WHERE job_id = %s'
 	cursor.execute(sqlQ, (job_id,))
 	entries = cursor.fetchone()
 	cursor.close()
@@ -41,6 +41,13 @@ def get_job(connection, job_id):
 	config_id = entries[1]
 	return job, config_id
 
+def get_job_type(connection, job_type_id):
+	cursor = connection.cursor()
+	sqlQ = 'SELECT type FROM job_types WHERE job_type_id = %s'
+	entries = cursor.fetchone()
+	cursor.close()
+	return entries[0]
+
 def update_job_list(connection, status, job_id):
 	cursor = connection.cursor()
 	sqlQ = 'UPDATE  jobs SET status = %s WHERE job_id = %s'

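The new lookup is a two-step indirection: jobs now stores a job_type_id, and the type string lives in job_types. A sketch of the lookup as it needs to run, with the execute() call that has to precede fetchone():

	def get_job_type(connection, job_type_id):
		cursor = connection.cursor()
		sqlQ = 'SELECT type FROM job_types WHERE job_type_id = %s'
		cursor.execute(sqlQ, (job_type_id,))  # without this, fetchone() has no result set to read
		entries = cursor.fetchone()
		cursor.close()
		return entries[0] if entries else None

In jobs_main() this resolves the job_type_id returned by get_job() into the string ("addbuildquery", "delbuildquery", ...) that drives the dispatch.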

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-27 23:09 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-27 23:09 UTC (permalink / raw
  To: gentoo-commits

commit:     e0d1cde78bca88822acfbf665c7fb9f7907df4fa
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 27 23:08:57 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec 27 23:08:57 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e0d1cde7

Move build_mydepgraph to a new file

---
 gobs/pym/actions.py        |   67 +-------------------------------------------
 gobs/pym/build_depgraph.py |   64 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+), 66 deletions(-)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index e5cbd65..2b3a6de 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -81,8 +81,7 @@ from _emerge.UnmergeDepPriority import UnmergeDepPriority
 from _emerge.UseFlagDisplay import pkg_use_display
 from _emerge.userquery import userquery
 
-from gobs.build_log import log_fail_queru
-from gobs.ConnectionManager import connectionManager
+from gobs.build_depgraph import  build_mydepgraph
 
 if sys.hexversion >= 0x3000000:
 	long = int
@@ -90,64 +89,9 @@ if sys.hexversion >= 0x3000000:
 else:
 	_unicode = unicode
 
-def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict):
-	try:
-		success, mydepgraph, favorites = backtrack_depgraph(
-			settings, trees, myopts, myparams, myaction, myfiles, spinner)
-	except portage.exception.PackageSetNotFound as e:
-		root_config = trees[settings["ROOT"]]["root_config"]
-		display_missing_pkg_set(root_config, e.value)
-		build_dict['type_fail'] = "depgraph fail"
-		build_dict['check_fail'] = True
-	else:
-		if not success:
-			if mydepgraph._dynamic_config._needed_p_mask_changes:
-				build_dict['type_fail'] = "Mask packages"
-				build_dict['check_fail'] = True
-				mydepgraph.display_problems()
-			if mydepgraph._dynamic_config._needed_use_config_changes:
-				repeat = True
-				repeat_times = 0
-				while repeat:
-					mydepgraph._display_autounmask()
-					settings, trees, mtimedb = load_emerge_config()
-					myparams = create_depgraph_params(myopts, myaction)
-					try:
-						success, mydepgraph, favorites = backtrack_depgraph(
-						settings, trees, myopts, myparams, myaction, myfiles, spinner)
-					except portage.exception.PackageSetNotFound as e:
-						root_config = trees[settings["ROOT"]]["root_config"]
-						display_missing_pkg_set(root_config, e.value)
-					if not success and mydepgraph._dynamic_config._needed_use_config_changes:
-						print("repaet_times:", repeat_times)
-						if repeat_times is 2:
-							build_dict['type_fail'] = "Need use change"
-							build_dict['check_fail'] = True
-							mydepgraph.display_problems()
-							repeat = False
-						else:
-							repeat_times = repeat_times + 1
-					else:
-						repeat = False
-
-			if mydepgraph._dynamic_config._unsolvable_blockers:
-				mydepgraph.display_problems()
-				build_dict['type_fail'] = "Blocking packages"
-				build_dict['check_fail'] = True
-
-			if mydepgraph._dynamic_config._slot_collision_info:
-				mydepgraph.display_problems()
-				build_dict['type_fail'] = "Slot blocking"
-				build_dict['check_fail'] = True
-	
-	return build_dict, success, settings, trees, mtimedb, mydepgraph
-
 def action_build(settings, trees, mtimedb,
 	myopts, myaction, myfiles, spinner, build_dict):
 
-	CM2=connectionManager()
-	conn2 = CM2.newConnection()
-
 	if '--usepkgonly' not in myopts:
 		old_tree_timestamp_warn(settings['PORTDIR'], settings)
 
@@ -366,15 +310,6 @@ def action_build(settings, trees, mtimedb,
 			trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict)
 
 		if not success:
-			build_dict['type_fail'] = "Dep calc fail"
-			build_dict['check_fail'] = True
-			mydepgraph.display_problems()
-
-		if build_dict['check_fail'] is True:
-			if not conn2.is_connected() is True:
-				conn2.reconnect(attempts=2, delay=1)
-			log_fail_queru(conn2, build_dict, settings)
-			conn2.close
 			return 1
 
 	if "--pretend" not in myopts and \

diff --git a/gobs/pym/build_depgraph.py b/gobs/pym/build_depgraph.py
new file mode 100644
index 0000000..ebff21b
--- /dev/null
+++ b/gobs/pym/build_depgraph.py
@@ -0,0 +1,64 @@
+from __future__ import print_function
+from _emerge.depgraph import backtrack_depgraph, create_depgraph_params
+import portage
+portage.proxy.lazyimport.lazyimport(globals(),
+	'gobs.actions:load_emerge_config',
+)
+from portage.exception import PackageSetNotFound
+
+from gobs.ConnectionManager import connectionManager
+from gobs.build_log import log_fail_queru
+
+def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict):
+	CM2=connectionManager()
+	conn2 = CM2.newConnection()
+	try:
+		success, mydepgraph, favorites = backtrack_depgraph(
+		settings, trees, myopts, myparams, myaction, myfiles, spinner)
+	except portage.exception.PackageSetNotFound as e:
+		root_config = trees[settings["ROOT"]]["root_config"]
+		display_missing_pkg_set(root_config, e.value)
+		build_dict['type_fail'] = "depgraph fail"
+		build_dict['check_fail'] = True
+	else:
+		if not success:
+			repeat = True
+			repeat_times = 0
+			while repeat:
+				if mydepgraph._dynamic_config._needed_p_mask_changes:
+					build_dict['type_fail'] = "Mask packages"
+					build_dict['check_fail'] = True
+				elif mydepgraph._dynamic_config._needed_use_config_changes:
+					mydepgraph._display_autounmask()
+					build_dict['type_fail'] = "Need use change"
+					build_dict['check_fail'] = True
+				elif mydepgraph._dynamic_config._unsolvable_blockers:
+					build_dict['type_fail'] = "Blocking packages"
+					build_dict['check_fail'] = True
+				elif mydepgraph._dynamic_config._slot_collision_info:
+					build_dict['type_fail'] = "Slot blocking"
+					build_dict['check_fail'] = True
+				else:
+					build_dict['type_fail'] = "Dep calc fail"
+					build_dict['check_fail'] = True
+				mydepgraph.display_problems()
+				if repeat_times is 2:
+					repeat = False
+					if not conn2.is_connected() is True:
+						conn2.reconnect(attempts=2, delay=1)
+					log_fail_queru(conn2, build_dict, settings)
+					conn.close
+				else:
+					repeat_times = repeat_times + 1
+					settings, trees, mtimedb = load_emerge_config()
+					myparams = create_depgraph_params(myopts, myaction)
+					try:
+						success, mydepgraph, favorites = backtrack_depgraph(
+						settings, trees, myopts, myparams, myaction, myfiles, spinner)
+					except portage.exception.PackageSetNotFound as e:
+						root_config = trees[settings["ROOT"]]["root_config"]
+						display_missing_pkg_set(root_config, e.value)
+					if success:
+						repeat = False
+
+	return build_dict, success, settings, trees, mtimedb, mydepgraph

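One detail worth noting in the extracted loop: `repeat_times is 2` compares integers by identity, which only happens to work because CPython caches small ints; `==` is the correct comparison. The control flow reduces to a bounded-retry skeleton like this (build_once is a hypothetical stand-in for the backtrack_depgraph call):

	MAX_RETRIES = 2

	def retry_build(build_once):
		attempts = 0
		while True:
			success = build_once()
			if success:
				return True
			if attempts == MAX_RETRIES:  # not "attempts is MAX_RETRIES"
				return False  # this is where the failure gets logged to the queue
			attempts += 1
			# reload the emerge config and depgraph params before the next attempt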

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-27 23:52 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-27 23:52 UTC (permalink / raw
  To: gentoo-commits

commit:     498ad87c8ce8758c1da7c54b84f12c0fb5697bd8
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 27 23:52:28 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Dec 27 23:52:28 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=498ad87c

fix some small errors in the code

---
 gobs/pym/build_depgraph.py |    3 ++-
 gobs/pym/jobs.py           |    5 ++---
 gobs/pym/mysql_querys.py   |    2 +-
 gobs/pym/readconf.py       |    4 ++--
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/gobs/pym/build_depgraph.py b/gobs/pym/build_depgraph.py
index ebff21b..86201a4 100644
--- a/gobs/pym/build_depgraph.py
+++ b/gobs/pym/build_depgraph.py
@@ -1,5 +1,6 @@
 from __future__ import print_function
-from _emerge.depgraph import backtrack_depgraph, create_depgraph_params
+from _emerge.create_depgraph_params import create_depgraph_params
+from _emerge.depgraph import backtrack_depgraph
 import portage
 portage.proxy.lazyimport.lazyimport(globals(),
 	'gobs.actions:load_emerge_config',

diff --git a/gobs/pym/jobs.py b/gobs/pym/jobs.py
index 67ced42..bd40175 100644
--- a/gobs/pym/jobs.py
+++ b/gobs/pym/jobs.py
@@ -6,8 +6,7 @@ from gobs.updatedb import update_db_main
 from gobs.mysql_querys import get_config_id, add_gobs_logs, get_jobs_id, get_job, \
 	update_job_list, get_job_type
 
-def jobs_main(conn, config_profile):
-	config_id = get_config_id(conn, config_profile)
+def jobs_main(conn, config_id):
 	jobs_id = get_jobs_id(conn, config_id)
 	if jobs_id is None:
 		return
@@ -32,7 +31,7 @@ def jobs_main(conn, config_profile):
 		elif job == "delbuildquery":
 			update_job_list(conn, "Runing", job_id)
 			log_msg = "Job %s is runing." % (job_id,)
-			add_gobs_logs(conn, log_msg, "info", config_profile)
+			add_gobs_logs(conn, log_msg, "info", config_id)
 			result =  del_buildquery_main(config_id)
 			if result is True:
 				update_job_list(conn, "Done", job_id)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 3d9d495..8665f97 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -3,7 +3,7 @@ from __future__ import print_function
 # Queryes to add the logs
 def get_config_id(connection, config, host):
 	cursor = connection.cursor()
-	sqlQ = 'SELECT config_id FROM configs WHERE config = %s AND hostname= host'
+	sqlQ = 'SELECT config_id FROM configs WHERE config = %s AND hostname= %s'
 	cursor.execute(sqlQ,(config, host,))
 	entries = cursor.fetchone()
 	cursor.close()

diff --git a/gobs/pym/readconf.py b/gobs/pym/readconf.py
index 45ed10a..d78eda0 100644
--- a/gobs/pym/readconf.py
+++ b/gobs/pym/readconf.py
@@ -1,7 +1,7 @@
 import os
 import sys
 import re
-import socket
+from socket import getfqdn
 
 class get_conf_settings(object):
 # open the /etc/buildhost/buildhost.conf file and get the needed
@@ -46,6 +46,6 @@ class get_conf_settings(object):
 		gobs_settings_dict['sql_passwd'] = get_sql_passwd.rstrip('\n')
 		gobs_settings_dict['gobs_gitreponame'] = get_gobs_gitreponame.rstrip('\n')
 		gobs_settings_dict['gobs_config'] = get_gobs_config.rstrip('\n')
-		gobs_settings_dict['hostname'] = socket.gethostname()
+		gobs_settings_dict['hostname'] = getfqdn()
 		# gobs_settings_dict['gobs_logfile'] = get_gobs_logfile.rstrip('\n')
 		return gobs_settings_dict

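The readconf change swaps the bare host name for the fully qualified one, which is what get_config_id() now matches against the configs hostname column. The difference, with machine-dependent output:

	from socket import getfqdn, gethostname

	print(gethostname())  # bare name, e.g. "buildhost"
	print(getfqdn())      # qualified name, e.g. "buildhost.example.org"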

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2012-12-29 12:12 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2012-12-29 12:12 UTC (permalink / raw
  To: gentoo-commits

commit:     9f481d5106bd0925d13cb4e4f61d9238266604ac
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 29 12:11:52 2012 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Dec 29 12:11:52 2012 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=9f481d51

some small code fixes

---
 gobs/pym/actions.py        |    2 +-
 gobs/pym/build_depgraph.py |   13 ++++++++-----
 gobs/pym/build_job.py      |    4 ++--
 gobs/pym/build_log.py      |    2 +-
 gobs/pym/buildquerydb.py   |   12 ++++++------
 gobs/pym/package.py        |    4 +++-
 gobs/pym/sync.py           |    2 +-
 gobs/pym/updatedb.py       |    2 +-
 8 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index 2b3a6de..3b50187 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -306,7 +306,7 @@ def action_build(settings, trees, mtimedb,
 			print(darkgreen("emerge: It seems we have nothing to resume..."))
 			return os.EX_OK
 
-		build_dict, success, settings, trees, mtimedb, mydepgraph = build_mydepgraph(settings,
+		success, settings, trees, mtimedb, mydepgraph = build_mydepgraph(settings,
 			trees, mtimedb, myopts, myparams, myaction, myfiles, spinner, build_dict)
 
 		if not success:

diff --git a/gobs/pym/build_depgraph.py b/gobs/pym/build_depgraph.py
index 86201a4..37b9722 100644
--- a/gobs/pym/build_depgraph.py
+++ b/gobs/pym/build_depgraph.py
@@ -33,12 +33,15 @@ def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfil
 					mydepgraph._display_autounmask()
 					build_dict['type_fail'] = "Need use change"
 					build_dict['check_fail'] = True
-				elif mydepgraph._dynamic_config._unsolvable_blockers:
-					build_dict['type_fail'] = "Blocking packages"
-					build_dict['check_fail'] = True
 				elif mydepgraph._dynamic_config._slot_collision_info:
 					build_dict['type_fail'] = "Slot blocking"
 					build_dict['check_fail'] = True
+				elif mydepgraph._dynamic_config._circular_deps_for_display:
+					build_dict['type_fail'] = "Circular Deps"
+					build_dict['check_fail'] = True
+				elif mydepgraph._dynamic_config._unsolvable_blockers:
+					build_dict['type_fail'] = "Blocking packages"
+					build_dict['check_fail'] = True
 				else:
 					build_dict['type_fail'] = "Dep calc fail"
 					build_dict['check_fail'] = True
@@ -48,7 +51,7 @@ def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfil
 					if not conn2.is_connected() is True:
 						conn2.reconnect(attempts=2, delay=1)
 					log_fail_queru(conn2, build_dict, settings)
-					conn.close
+					conn2.close
 				else:
 					repeat_times = repeat_times + 1
 					settings, trees, mtimedb = load_emerge_config()
@@ -62,4 +65,4 @@ def build_mydepgraph(settings, trees, mtimedb, myopts, myparams, myaction, myfil
 					if success:
 						repeat = False
 
-	return build_dict, success, settings, trees, mtimedb, mydepgraph
+	return success, settings, trees, mtimedb, mydepgraph

diff --git a/gobs/pym/build_job.py b/gobs/pym/build_job.py
index 4d02c8f..6171ef0 100644
--- a/gobs/pym/build_job.py
+++ b/gobs/pym/build_job.py
@@ -52,13 +52,13 @@ class build_job_action(object):
 			else:
 				build_dict['type_fail'] = "Manifest error"
 				build_dict['check_fail'] = True
-				log_msg = "Manifest error: %s:%s" % cpv, manifest_error
+				log_msg = "Manifest error: %s:%s" % (cpv, manifest_error)
 				add_gobs_logs(self._conn, log_msg, "info", self._config_id)
 		else:
 			build_dict['type_fail'] = "Wrong ebuild checksum"
 			build_dict['check_fail'] = True
 		if build_dict['check_fail'] is True:
-				log_fail_queru(conn, build_dict, settings)
+				log_fail_queru(self._conn, build_dict, settings)
 				return None
 		return build_cpv_dict
 

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 3582406..36f1c7a 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -217,7 +217,7 @@ def add_buildlog_process(settings, pkg):
 	reader=get_conf_settings()
 	gobs_settings_dict=reader.read_gobs_settings_all()
 	config = gobs_settings_dict['gobs_config']
-	hostname =gobs_settings_dict['gobs_hostname']
+	hostname =gobs_settings_dict['hostname']
 	host_config = hostname + "/" + config
 	config_id = get_config_id(conn, config, hostname)
 	build_dict = get_build_dict_db(conn, config_id, settings, pkg)

diff --git a/gobs/pym/buildquerydb.py b/gobs/pym/buildquerydb.py
index df2dbe0..c7cd49d 100644
--- a/gobs/pym/buildquerydb.py
+++ b/gobs/pym/buildquerydb.py
@@ -36,7 +36,7 @@ def add_cpv_query_pool(mysettings, myportdb, config_id, cp, repo):
 		categories = element[0]
 		package = element[1]
 		log_msg = "C %s:%s" % (cp, repo,)
-		add_gobs_logs(conn, log_msg, "info", config_profile)
+		add_gobs_logs(conn, log_msg, "info", config_id)
 		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + cp
 		config_id_list = []
 		config_id_list.append(config_id)
@@ -52,7 +52,7 @@ def add_cpv_query_pool(mysettings, myportdb, config_id, cp, repo):
 				ebuild_id_list.append(ebuild_id)
 				init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
 		log_msg = "C %s:%s ... Done." % (cp, repo,)
-		add_gobs_logs(conn, log_msg, "info", config_profile)
+		add_gobs_logs(conn, log_msg, "info", config_id)
 	CM.putConnection(conn)
 	return
 
@@ -60,7 +60,7 @@ def add_buildquery_main(config_id):
 	conn=CM.getConnection()
 	config_setup = get_config(conn, config_id)
 	log_msg = "Adding build jobs for: %s" % (config_setup,)
-	add_gobs_logs(conn, log_msg, "info", config_profile)
+	add_gobs_logs(conn, log_msg, "info", config_id)
 	check_make_conf()
 	log_msg = "Check configs done"
 	add_gobs_logs(conn, log_msg, "info", config_profile)
@@ -71,7 +71,7 @@ def add_buildquery_main(config_id):
 	myportdb = portage.portdbapi(mysettings=mysettings)
 	init_package = gobs_package(mysettings, myportdb)
 	log_msg = "Setting default config to: %s" % (config_setup)
-	add_gobs_logs(conn, log_msg, "info", config_profile)
+	add_gobs_logs(conn, log_msg, "info", config_id)
 	# Use all exept 2 cores when multiprocessing
 	pool_cores= multiprocessing.cpu_count()
 	if pool_cores >= 3:
@@ -101,12 +101,12 @@ def del_buildquery_main(config_id):
 	conn=CM.getConnection()
 	config_setup = get_config(conn, config_id)
 	log_msg = "Removeing build jobs for: %s" % (config_setup,)
-	add_gobs_logs(conn, log_msg, "info", config_profile)
+	add_gobs_logs(conn, log_msg, "info", config_id)
 	build_job_id_list = get_build_jobs_id_list_config(conn, config_id)
 	if build_job_id_list is not None:
 		for build_job_id in build_job_id_list:
 			del_old_build_jobs(conn, build_job_id)
 	log_msg = "Removeing build jobs for: %s ... Done." % (config_setup,)
-	add_gobs_logs(conn, log_msg, "info", config_profile)
+	add_gobs_logs(conn, log_msg, "info", config_id)
 	CM.putConnection(conn)
 	return True

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index a86d512..ac70633 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -12,6 +12,8 @@ from gobs.mysql_querys import get_config, get_config_id, add_gobs_logs, get_defa
 from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
+_config = gobs_settings_dict['gobs_config']
+_hostname =gobs_settings_dict['hostname']
 
 class gobs_package(object):
 
@@ -19,7 +21,7 @@ class gobs_package(object):
 		self._conn = conn
 		self._mysettings = mysettings
 		self._myportdb = myportdb
-		self._config_id = get_config_id(conn, config_profile)
+		self._config_id = get_config_id(conn, _config, _hostname)
 
 	def change_config(self, host_config):
 		# Change config_root  config_setup = table config

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 3e25878..4a24484 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -13,7 +13,7 @@ from gobs.mysql_querys import get_config_id, add_gobs_logs, get_default_config
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 _config = gobs_settings_dict['gobs_config']
-_hostname =gobs_settings_dict['gobs_hostname']
+_hostname =gobs_settings_dict['hostname']
 
 def git_pull(conn):
 	#FIXME: Use git direct so we can use python 3.*

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index cc5681c..56d0894 100644
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -18,7 +18,7 @@ from gobs.readconf import get_conf_settings
 reader = get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 _config = gobs_settings_dict['gobs_config']
-_hostname =gobs_settings_dict['gobs_hostname']
+_hostname =gobs_settings_dict['hostname']
 
 def init_portage_settings(conn, config_id):
 	# check config setup

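The build_job.py hunk fixes a classic precedence trap: % binds tighter than the comma, so `"%s:%s" % cpv, manifest_error` formats with cpv alone and leaves manifest_error dangling as a tuple element. Demonstrated with hypothetical values:

	cpv = "dev-lang/python-2.7.3"
	manifest_error = "digest mismatch"

	# Broken:  "Manifest error: %s:%s" % cpv, manifest_error
	# parses as ("Manifest error: %s:%s" % cpv), manifest_error
	# and the left half raises: TypeError: not enough arguments for format string
	log_msg = "Manifest error: %s:%s" % (cpv, manifest_error)  # fixed
	print(log_msg)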

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-01-22 20:56 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-01-22 20:56 UTC (permalink / raw
  To: gentoo-commits

commit:     e83015c49497837169dc7d35023417e1e5c64653
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 22 21:56:20 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jan 22 21:56:20 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=e83015c4

fix update_package_db

---
 gobs/pym/mysql_querys.py |   16 +++++++---------
 gobs/pym/package.py      |    9 ++++++---
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 8665f97..7da98c0 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -44,6 +44,7 @@ def get_job(connection, job_id):
 def get_job_type(connection, job_type_id):
 	cursor = connection.cursor()
 	sqlQ = 'SELECT type FROM job_types WHERE job_type_id = %s'
+	cursor.execute(sqlQ, (job_type_id,))
 	entries = cursor.fetchone()
 	cursor.close()
 	return entries[0]
@@ -228,16 +229,13 @@ def add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, i
 
 def add_new_ebuild_sql(connection, package_id, ebuildDict):
 	cursor = connection.cursor()
-	sqlQ1 = 'SELECT repo_id FROM packages WHERE package_id = %s'
-	sqlQ2 = "INSERT INTO ebuilds (package_id, version, checksum, active) VALUES (%s, %s, %s, 'True')"
+	sqlQ1 = "INSERT INTO ebuilds (package_id, version, checksum, active) VALUES (%s, %s, %s, 'True')"
+	sqlQ2 = 'SELECT LAST_INSERT_ID()'
 	sqlQ3 = "INSERT INTO ebuilds_metadata (ebuild_id, revision) VALUES (%s, %s)"
-	sqlQ4 = 'SELECT LAST_INSERT_ID()'
 	ebuild_id_list = []
-	cursor.execute(sqlQ1, (package_id,))
-	repo_id = cursor.fetchone()[0]
 	for k, v in ebuildDict.iteritems():
-		cursor.execute(sqlQ2, (package_id, v['ebuild_version_tree'], v['ebuild_version_checksum_tree'],))
-		cursor.execute(sqlQ4)
+		cursor.execute(sqlQ1, (package_id, v['ebuild_version_tree'], v['ebuild_version_checksum_tree'],))
+		cursor.execute(sqlQ2)
 		ebuild_id = cursor.fetchone()[0]
 		cursor.execute(sqlQ3, (ebuild_id, v['ebuild_version_revision_tree'],))
 		ebuild_id_list.append(ebuild_id)
@@ -338,7 +336,7 @@ def get_ebuild_checksum(connection, package_id, ebuild_version_tree):
 
 def add_old_ebuild(connection, package_id, old_ebuild_list):
 	cursor = connection.cursor()
-	sqlQ1 = "UPDATE ebuilds SET active = 'False' WHERE package_id = %s AND version = %s"
+	sqlQ1 = "UPDATE ebuilds SET active = 'False' WHERE ebuild_id = %s"
 	sqlQ2 = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
 	sqlQ3 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s"
 	sqlQ4 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
@@ -354,7 +352,7 @@ def add_old_ebuild(connection, package_id, old_ebuild_list):
 					for build_job_id in build_job_id_list:
 						cursor.execute(sqlQ4, (build_job_id))
 						cursor.execute(sqlQ5, (build_job_id))
-				cursor.execute(sqlQ1, (package_id, old_ebuild[0]))
+				cursor.execute(sqlQ1, (ebuild_id,))
 	connection.commit()
 	cursor.close()
 

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index ac70633..9686147 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -299,11 +299,14 @@ class gobs_package(object):
 
 					# Fix so we can use add_new_ebuild_sql() to update the ebuilds
 					old_ebuild_list.append(ebuild_version_tree)
-					add_old_ebuild(self._conn, package_id, old_ebuild_list)
-					update_active_ebuild_to_fales(self._conn, package_id, ebuild_version_tree)
+				else:
+					# Remove cpv from packageDict
+					del packageDict[cpv]
 
+			 # Make old ebuilds unactive
+			 add_old_ebuild(self._conn, package_id, old_ebuild_list)
+			 
 			# Use packageDict and to update the db
-			# Add new ebuilds to the db
 			ebuild_id_list = add_new_ebuild_sql(self._conn, package_id, packageDict)
 
 			# update the cp manifest checksum

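add_new_ebuild_sql now follows each INSERT with SELECT LAST_INSERT_ID() to collect the generated ebuild_id. With MySQL Connector/Python the cursor exposes the same value directly; a sketch of a single insert (helper name hypothetical):

	def add_ebuild(connection, package_id, version, checksum):
		cursor = connection.cursor()
		cursor.execute(
			"INSERT INTO ebuilds (package_id, version, checksum, active) "
			"VALUES (%s, %s, %s, 'True')",
			(package_id, version, checksum))
		ebuild_id = cursor.lastrowid  # same value SELECT LAST_INSERT_ID() returns
		connection.commit()
		cursor.close()
		return ebuild_id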

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-01-22 20:59 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-01-22 20:59 UTC (permalink / raw
  To: gentoo-commits

commit:     6e25521ac9406862cdce35dc501661e7169b9248
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 22 21:58:46 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jan 22 21:58:46 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6e25521a

fix typo

---
 gobs/pym/package.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 9686147..f424dde 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -303,8 +303,8 @@ class gobs_package(object):
 					# Remove cpv from packageDict
 					del packageDict[cpv]
 
-			 # Make old ebuilds unactive
-			 add_old_ebuild(self._conn, package_id, old_ebuild_list)
+			# Make old ebuilds unactive
+			add_old_ebuild(self._conn, package_id, old_ebuild_list)
 			 
 			# Use packageDict and to update the db
 			ebuild_id_list = add_new_ebuild_sql(self._conn, package_id, packageDict)


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-01-22 21:06 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-01-22 21:06 UTC (permalink / raw
  To: gentoo-commits

commit:     1cd0239bdcdbe04883849155fca423e0ed3c7581
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 22 22:06:10 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Tue Jan 22 22:06:10 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=1cd0239b

add the missing update to check_setup.py

---
 gobs/pym/check_setup.py |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index 6115987..a3d06fc 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -11,10 +11,10 @@ from gobs.mysql_querys import get_config_id, get_config_list_all, add_gobs_logs,
 
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
-config_profile = gobs_settings_dict['gobs_config']
+_config_profile = gobs_settings_dict['gobs_config']
 
 def check_make_conf(conn):
-	_config_id = get_config_id(conn, config_profile)
+	_config_id = get_config_id(conn, _config_profile)
 	# Get the config list
 	config_id_list_all = get_config_list_all(conn)
 	log_msg = "Checking configs for changes and errors"
@@ -23,8 +23,8 @@ def check_make_conf(conn):
 	for config_id in config_id_list_all:
 		attDict={}
 		# Set the config dir
-		config = get_config(conn, config_id)
-		check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + config + "/"
+		hostname, config = get_config(conn, config_id)
+		check_config_dir = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + hostname +"/" + config + "/"
 		make_conf_file = check_config_dir + "etc/portage/make.conf"
 		# Check if we can take a checksum on it.
 		# Check if we have some error in the file. (portage.util.getconfig)

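The config dir is now keyed on hostname as well as config name. The same path assembly with os.path.join, using hypothetical values:

	import os

	gitreponame = "gobs"  # gobs_settings_dict['gobs_gitreponame']
	hostname, config = "buildhost.example.org", "base"  # from get_config()
	check_config_dir = os.path.join("/var/cache/gobs", gitreponame, hostname, config)
	make_conf_file = os.path.join(check_config_dir, "etc/portage/make.conf")
	print(make_conf_file)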

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-01-26 22:23 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-01-26 22:23 UTC (permalink / raw
  To: gentoo-commits

commit:     6d621d7d07dc3fd9a984fb36091d8603dee5d8a5
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sat Jan 26 23:22:55 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sat Jan 26 23:22:55 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6d621d7d

fix to add inactive ebuilds

---
 gobs/pym/ConnectionManager.py |   66 ++++++++++-------------
 gobs/pym/buildquerydb.py      |   14 +----
 gobs/pym/mysql_querys.py      |  100 +++++++++++++++++++----------------
 gobs/pym/package.py           |  118 ++++++++++++++++++++++-------------------
 gobs/pym/updatedb.py          |   28 ++++------
 5 files changed, 162 insertions(+), 164 deletions(-)

diff --git a/gobs/pym/ConnectionManager.py b/gobs/pym/ConnectionManager.py
index dd91e1d..4dac318 100644
--- a/gobs/pym/ConnectionManager.py
+++ b/gobs/pym/ConnectionManager.py
@@ -1,46 +1,38 @@
-# FIXME: Redo the class
 from __future__ import print_function
 from gobs.readconf import get_conf_settings
 reader = get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 
 class connectionManager(object):
-	_instance = None
 
-	def __new__(cls, numberOfconnections=20, *args, **kwargs):
-		if not cls._instance:
-			cls._instance = super(connectionManager, cls).__new__(cls, *args, **kwargs)
-			#read the sql user/host etc and store it in the local object
-			cls._backend=gobs_settings_dict['sql_backend']
-			cls._host=gobs_settings_dict['sql_host']
-			cls._user=gobs_settings_dict['sql_user']
-			cls._password=gobs_settings_dict['sql_passwd']
-			cls._database=gobs_settings_dict['sql_db']
-			#shouldnt we include port also?
-			if cls._backend == 'mysql':
-				try:
-					import mysql.connector
-					from mysql.connector import errorcode
-				except ImportError:
-					print("Please install a recent version of dev-python/mysql-connector-python for Python")
-					sys.exit(1)
-				db_config = {}
-				db_config['user'] = cls._user
-				db_config['password'] = cls._password
-				db_config['host'] = cls._host
-				db_config['database'] = cls._database
-				db_config['raise_on_warnings'] = True
-				try:
-					cls._cnx = mysql.connector.connect(**db_config)
-				except mysql.connector.Error as err:
-					if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
-						print("Something is wrong your username or password")
-					elif err.errno == errorcode.ER_BAD_DB_ERROR:
-						print("Database does not exists")
-					else:
-						print(err)
-		return cls._instance
+	def __init__(self):
+		self._backend=gobs_settings_dict['sql_backend']
+		self._host=gobs_settings_dict['sql_host']
+		self._user=gobs_settings_dict['sql_user']
+		self._password=gobs_settings_dict['sql_passwd']
+		self._database=gobs_settings_dict['sql_db']
 
 	def newConnection(self):
-		return self._cnx
-
+		if self._backend == 'mysql':
+			try:
+				import mysql.connector
+				from mysql.connector import errorcode
+			except ImportError:
+				print("Please install a recent version of dev-python/mysql-connector-python for Python")
+				sys.exit(1)
+			db_config = {}
+			db_config['user'] = self._user
+			db_config['password'] = self._password
+			db_config['host'] = self._host
+			db_config['database'] = self._database
+			db_config['raise_on_warnings'] = True
+			try:
+				cnx = mysql.connector.connect(**db_config)
+			except mysql.connector.Error as err:
+				if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
+					print("Something is wrong with your username or password")
+				elif err.errno == errorcode.ER_BAD_DB_ERROR:
+					print("Database does not exist")
+				else:
+					print(err)
+		return cnx

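The rewrite drops the __new__-based singleton that handed every caller the same cached connection (a problem once updatedb hands work to a multiprocessing pool) in favour of a fresh connection per newConnection() call. A self-contained sketch of that factory shape; note the class above still needs `import sys` for its ImportError path:

	import sys

	def new_connection(db_config):
		try:
			import mysql.connector
			from mysql.connector import errorcode
		except ImportError:
			print("Please install a recent version of dev-python/mysql-connector-python")
			sys.exit(1)
		try:
			return mysql.connector.connect(**db_config)
		except mysql.connector.Error as err:
			if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
				print("Something is wrong with your username or password")
			elif err.errno == errorcode.ER_BAD_DB_ERROR:
				print("Database does not exist")
			else:
				print(err)
			return None
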
diff --git a/gobs/pym/buildquerydb.py b/gobs/pym/buildquerydb.py
index c7cd49d..8edfd1e 100644
--- a/gobs/pym/buildquerydb.py
+++ b/gobs/pym/buildquerydb.py
@@ -11,11 +11,6 @@ from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
 config_profile = gobs_settings_dict['gobs_config']
-# make a CM
-from gobs.ConnectionManager import connectionManager
-CM=connectionManager(gobs_settings_dict)
-
-
 
 from gobs.check_setup import check_make_conf
 from gobs.sync import git_pull
@@ -24,7 +19,7 @@ import portage
 import multiprocessing
 
 def add_cpv_query_pool(mysettings, myportdb, config_id, cp, repo):
-	conn=CM.getConnection()
+	conn =0
 	init_package = gobs_package(mysettings, myportdb)
 	# FIXME: remove the check for gobs when in tree
 	if cp != "dev-python/gobs":
@@ -53,11 +48,10 @@ def add_cpv_query_pool(mysettings, myportdb, config_id, cp, repo):
 				init_package.add_new_ebuild_buildquery_db(ebuild_id_list, packageDict, config_cpv_listDict)
 		log_msg = "C %s:%s ... Done." % (cp, repo,)
 		add_gobs_logs(conn, log_msg, "info", config_id)
-	CM.putConnection(conn)
 	return
 
 def add_buildquery_main(config_id):
-	conn=CM.getConnection()
+	conn = 0
 	config_setup = get_config(conn, config_id)
 	log_msg = "Adding build jobs for: %s" % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_id)
@@ -94,11 +88,10 @@ def add_buildquery_main(config_id):
 	pool.join()
 	log_msg = "Adding build jobs for: %s ... Done." % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_profile)
-	CM.putConnection(conn)
 	return True
 
 def del_buildquery_main(config_id):
-	conn=CM.getConnection()
+	conn=0
 	config_setup = get_config(conn, config_id)
 	log_msg = "Removeing build jobs for: %s" % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_id)
@@ -108,5 +101,4 @@ def del_buildquery_main(config_id):
 			del_old_build_jobs(conn, build_job_id)
 	log_msg = "Removeing build jobs for: %s ... Done." % (config_setup,)
 	add_gobs_logs(conn, log_msg, "info", config_id)
-	CM.putConnection(conn)
 	return True

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 7da98c0..8b8bacd 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -70,11 +70,11 @@ def get_config_list_all(connection):
 
 def get_config(connection, config_id):
 	cursor = connection.cursor()
-	sqlQ ='SELECT host, config FROM configs WHERE config_id = %s'
+	sqlQ ='SELECT hostname, config FROM configs WHERE config_id = %s'
 	cursor.execute(sqlQ, (config_id,))
 	hostname, config = cursor.fetchone()
 	cursor.close()
-	return hostname[0], config[0]
+	return hostname, config
 
 def update_make_conf(connection, configsDict):
 	cursor = connection.cursor()
@@ -90,7 +90,7 @@ def get_default_config(connection):
 	cursor.execute(sqlQ)
 	hostname, config = cursor.fetchone()
 	cursor.close()
-	return hostname[0], config[0]
+	return hostname, config
 
 def get_repo_id(connection, repo):
 	cursor = connection.cursor()
@@ -140,13 +140,13 @@ def get_package_id(connection, categories, package, repo):
 	if not entries is None:
 		return entries[0]
 
-def add_new_manifest_sql(connection, categories, package, repo, manifest_checksum_tree):
+def add_new_manifest_sql(connection, categories, package, repo):
 	cursor = connection.cursor()
-	sqlQ1 = "INSERT INTO packages (category_id, package, repo_id, checksum, active) VALUES (%s, %s, %s, %s, 'True')"
+	sqlQ1 = "INSERT INTO packages (category_id, package, repo_id, checksum, active) VALUES (%s, %s, %s, '0', 'True')"
 	sqlQ2 = 'SELECT LAST_INSERT_ID()'
 	repo_id = get_repo_id(connection, repo)
 	category_id = get_category_id(connection, categories)
-	cursor.execute(sqlQ1, (category_id, package, repo_id, manifest_checksum_tree,))
+	cursor.execute(sqlQ1, (category_id, package, repo_id, ))
 	cursor.execute(sqlQ2)
 	package_id = cursor.fetchone()[0]
 	connection.commit()
@@ -211,13 +211,13 @@ def add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, i
 			use_id = cursor.fetchone()[0]
 		cursor.execute(sqlQ6, (ebuild_id, use_id, set_iuse,))
 	for keyword in keywords:
-		set_keyword = 'stable'
+		set_keyword = 'Stable'
 		if keyword[0] in ["~"]:
 			keyword = keyword[1:]
-			set_keyword = 'unstable'
+			set_keyword = 'Unstable'
 		elif keyword[0] in ["-"]:
 			keyword = keyword[1:]
-			set_keyword = 'testing'
+			set_keyword = 'Negative'
 		keyword_id = get_keyword_id(connection, keyword)
 		if keyword_id is None:
 			cursor.execute(sqlQ1, (keyword,))
@@ -329,37 +329,60 @@ def get_ebuild_checksum(connection, package_id, ebuild_version_tree):
 	cursor = connection.cursor()
 	sqlQ = "SELECT checksum FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
 	cursor.execute(sqlQ, (package_id, ebuild_version_tree))
+	entries = cursor.fetchall()
+	cursor.close()
+	if entries == []:
+		return None
+	checksums = []
+	for i in entries:
+		checksums.append(i[0])
+	return checksums
+
+def get_ebuild_id_list(connection, package_id):
+	cursor = connection.cursor()
+	sqlQ = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND active = 'True'"
+	cursor.execute(sqlQ, (package_id,))
+	entries = cursor.fetchall()
+	cursor.close()
+	ebuilds_id = []
+	for i in entries:
+		ebuilds_id.append(i[0])
+	return ebuilds_id
+
+def get_ebuild_id_db(connection, checksum, package_id):
+	cursor = connection.cursor()
+	sqlQ = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND checksum = %s"
+	cursor.execute(sqlQ, (package_id, checksum,))
 	entries = cursor.fetchone()
 	cursor.close()
-	if not entries is None:
-		return entries[0]
+	ebuilds_id = []
+	if entries is not None:
+		for i in entries:
+			ebuilds_id.append(i)
+	return ebuilds_id
 
-def add_old_ebuild(connection, package_id, old_ebuild_list):
+def del_old_build_jobs(connection, build_job_id):
 	cursor = connection.cursor()
-	sqlQ1 = "UPDATE ebuilds SET active = 'False' WHERE ebuild_id = %s"
-	sqlQ2 = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND version = %s AND active = 'True'"
-	sqlQ3 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s"
-	sqlQ4 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
-	sqlQ5 = 'DELETE FROM build_jobs WHERE build_job_id = %s'
-	for old_ebuild in  old_ebuild_list:
-		cursor.execute(sqlQ2, (package_id, old_ebuild[0]))
-		ebuild_id_list = cursor.fetchall()
-		if ebuild_id_list is not None:
-			for ebuild_id in ebuild_id_list:
-				cursor.execute(sqlQ3, (ebuild_id))
-				build_job_id_list = cursor.fetchall()
-				if build_job_id_list is not None:
-					for build_job_id in build_job_id_list:
-						cursor.execute(sqlQ4, (build_job_id))
-						cursor.execute(sqlQ5, (build_job_id))
-				cursor.execute(sqlQ1, (ebuild_id,))
+	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
+	sqlQ2 = 'DELETE FROM build_jobs_redo WHERE build_job_id  = %s'
+	sqlQ3 = 'DELETE FROM build_jobs_emerge_options WHERE build_job_id = %s'
+	sqlQ4 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
+	cursor.execute(sqlQ1, (build_job_id,))
+	cursor.execute(sqlQ2, (build_job_id,))
+	cursor.execute(sqlQ3, (build_job_id,))
+	cursor.execute(sqlQ4, (build_job_id,))
 	connection.commit()
 	cursor.close()
 
-def update_active_ebuild_to_fales(connection, package_id, ebuild_version_tree):
+def add_old_ebuild(connection, package_id, old_ebuild_list):
 	cursor = connection.cursor()
-	sqlQ ="UPDATE ebuilds SET active = 'False' WHERE package_id = %s AND version = %s AND active = 'True'"
-	cursor.execute(sqlQ, (package_id, ebuild_version_tree))
+	sqlQ1 = "UPDATE ebuilds SET active = 'False' WHERE ebuild_id = %s"
+	sqlQ3 = "SELECT build_job_id FROM build_jobs WHERE ebuild_id = %s"
+	for ebuild_id in  old_ebuild_list:
+		cursor.execute(sqlQ3, (ebuild_id))
+		build_job_id_list = cursor.fetchall()
+		if build_job_id_list is not None:
+			for build_job_id in build_job_id_list:
+				del_old_build_jobs(connection, build_job_id[0])
+		cursor.execute(sqlQ1, (ebuild_id,))
 	connection.commit()
 	cursor.close()
 
@@ -384,19 +407,6 @@ def get_build_jobs_id_list_config(connection, config_id):
 			build_log_id_list = None
 	return build_jobs_id_list
 
-def del_old_build_jobs(connection, build_job_id):
-	cursor = connection.cursor()
-	sqlQ1 = 'DELETE FROM build_jobs_use WHERE build_job_id = %s'
-	sqlQ2 = 'DELETE FROM build_jobs_redo WHERE build_job_id  = %s'
-	sqlQ3 = 'DELETE FROM build_jobs WHERE build_job_id  = %s'
-	sqlQ4 = 'DELETE FROM build_jobs_emerge_options WHERE build_job_id = %s'
-	cursor.execute(sqlQ1, (build_job_id,))
-	cursor.execute(sqlQ2, (build_job_id,))
-	cursor.execute(sqlQ4, (build_job_id,))
-	cursor.execute(sqlQ3, (build_job_id,))
-	connection.commit()
-	cursor.close()
-
 def get_profile_checksum(connection, config_id):
 	cursor = connection.cursor()
 	sqlQ = "SELECT checksum FROM configs_metadata WHERE active = 'True' AND config_id = %s AND auto = 'True'"

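get_ebuild_checksum() now returns every active checksum for the version instead of a single value, so update_package_db() can spot duplicate rows. The fetchall pattern in isolation (function name pluralized here to mark it as a sketch):

	def get_ebuild_checksums(connection, package_id, version):
		cursor = connection.cursor()
		cursor.execute(
			"SELECT checksum FROM ebuilds WHERE package_id = %s "
			"AND version = %s AND active = 'True'",
			(package_id, version))
		rows = cursor.fetchall()
		cursor.close()
		if not rows:
			return None
		return [r[0] for r in rows]  # two or more entries signal duplicate ebuild rows
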
diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index f424dde..9354811 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -7,7 +7,7 @@ from gobs.text import get_ebuild_cvs_revision
 from gobs.flags import gobs_use_flags
 from gobs.mysql_querys import get_config, get_config_id, add_gobs_logs, get_default_config, \
 	add_new_build_job, get_config_id_list, update_manifest_sql, add_new_manifest_sql, \
-	add_new_ebuild_sql, update_active_ebuild_to_fales, add_old_ebuild, \
+	add_new_ebuild_sql, get_ebuild_id_db, add_old_ebuild, get_ebuild_id_list, \
 	get_ebuild_checksum, get_manifest_db, get_cp_repo_from_package_id
 from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
@@ -129,7 +129,6 @@ class gobs_package(object):
 		return attDict
 
 	def add_new_build_job_db(self, ebuild_id_list, packageDict, config_cpv_listDict):
-		conn=CM.getConnection()
 		# Get the needed info from packageDict and config_cpv_listDict and put that in buildqueue
 		# Only add it if ebuild_version in packageDict and config_cpv_listDict match
 		if config_cpv_listDict is not None:
@@ -178,72 +177,83 @@ class gobs_package(object):
 		package_metadataDict[package] = attDict
 		return package_metadataDict
 
-	def add_new_package_db(self, categories, package, repo):
+	def add_package(self, packageDict, package_id, new_ebuild_id_list, old_ebuild_id_list, manifest_checksum_tree):
+		# Use packageDict to update the db
+		ebuild_id_list = add_new_ebuild_sql(self._conn, package_id, packageDict)
+		
+		# Make old ebuilds unactive
+		for ebuild_id in ebuild_id_list:
+			new_ebuild_id_list.append(ebuild_id)
+		for ebuild_id in get_ebuild_id_list(self._conn, package_id):
+			if not ebuild_id in new_ebuild_id_list:
+				if not ebuild_id in old_ebuild_id_list:
+					old_ebuild_id_list.append(ebuild_id)
+		if not old_ebuild_id_list == []:
+			add_old_ebuild(self._conn, package_id, old_ebuild_id_list)
+
+		# update the cp manifest checksum
+		update_manifest_sql(self._conn, package_id, manifest_checksum_tree)
+
+		# Get the best cpv for the configs and add it to config_cpv_listDict
+		configs_id_list  = get_config_id_list(self._conn)
+		cp, repo = get_cp_repo_from_package_id(self._conn, package_id)
+		config_cpv_listDict = self.config_match_ebuild(cp, configs_id_list)
+
+		# Add the ebuild to the build jobs table if needed
+		self.add_new_build_job_db(ebuild_id_list, packageDict, config_cpv_listDict)
+
+	def add_new_package_db(self, cp, repo):
 		# Add new categories package ebuild to tables package and ebuilds
 		# C = Checking
 		# N = New Package
-		log_msg = "C %s/%s:%s" % (categories, package, repo)
+		log_msg = "C %s:%s" % (cp, repo)
 		add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-		log_msg = "N %s/%s:%s" % (categories, package, repo)
+		log_msg = "N %s:%s" % (cp, repo)
 		add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + categories + "/" + package # Get RepoDIR + cp
+		repodir = self._myportdb.getRepositoryPath(repo)
+		pkgdir = repodir + "/" + cp # Get RepoDIR + cp
 
 		# Get the cp manifest file checksum.
 		try:
 			manifest_checksum_tree = portage.checksum.sha256hash(pkgdir + "/Manifest")[0]
 		except:
 			manifest_checksum_tree = "0"
-			log_msg = "QA: Can't checksum the Manifest file. %s/%s:%s" % (categories, package, repo,)
+			log_msg = "QA: Can't checksum the Manifest file. %s:%s" % (cp, repo,)
 			add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-			log_msg = "C %s/%s:%s ... Fail." % (categories, package, repo)
+			log_msg = "C %s:%s ... Fail." % (cp, repo)
 			add_gobs_logs(self._conn, log_msg, "info", self._config_id)
 			return
-		package_id = add_new_manifest_sql(self._conn, categories, package, repo, manifest_checksum_tree)
+		package_id = add_new_manifest_sql(self._conn, cp, repo)
 
 		# Get the ebuild list for cp
 		mytree = []
-		mytree.append(self._myportdb.getRepositoryPath(repo))
-		ebuild_list_tree = self._myportdb.cp_list((categories + "/" + package), use_cache=1, mytree=mytree)
+		mytree.append(repodir)
+		ebuild_list_tree = self._myportdb.cp_list(cp, use_cache=1, mytree=mytree)
 		if ebuild_list_tree == []:
-			log_msg = "QA: Can't get the ebuilds list. %s/%s:%s" % (categories, package, repo,)
+			log_msg = "QA: Can't get the ebuilds list. %s:%s" % (cp, repo,)
 			add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-			log_msg = "C %s/%s:%s ... Fail." % (categories, package, repo)
+			log_msg = "C %s:%s ... Fail." % (cp, repo)
 			add_gobs_logs(self._conn, log_msg, "info", self._config_id)
 			return
 
-		# set config to default config
-		default_config = get_default_config(self._conn)
-
 		# Make the needed packageDict with ebuild infos so we can add it later to the db.
 		packageDict ={}
-		ebuild_id_list = []
+		new_ebuild_id_list = []
+		old_ebuild_id_list = []
 		for cpv in sorted(ebuild_list_tree):
 			packageDict[cpv] = self.get_packageDict(pkgdir, cpv, repo)
 
-		# Add new ebuilds to the db
-		ebuild_id_list = add_new_ebuild_sql(self._conn, package_id, packageDict)
-
-		# Get the best cpv for the configs and add it to config_cpv_listDict
-		configs_id_list  = get_config_id_list(self._conn)
-		config_cpv_listDict = self.config_match_ebuild(categories + "/" + package, configs_id_list)
-
-		# Add the ebuild to the buildquery table if needed
-		self.add_new_build_job_db(ebuild_id_list, packageDict, config_cpv_listDict)
-
-		log_msg = "C %s/%s:%s ... Done." % (categories, package, repo)
+		self.add_package(packageDict, package_id, new_ebuild_id_list, old_ebuild_id_list, manifest_checksum_tree)
+		log_msg = "C %s:%s ... Done." % (cp, repo)
 		add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-		print(categories, package, repo)
-		CM.putConnection(conn)
 
 	def update_package_db(self, package_id):
 		# Update the categories and package with new info
 		# C = Checking
 		cp, repo = get_cp_repo_from_package_id(self._conn, package_id)
-		element = cp.split('/')
-		package = element[1]
 		log_msg = "C %s:%s" % (cp, repo)
 		add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-		pkgdir = self._myportdb.getRepositoryPath(repo) + "/" + cp # Get RepoDIR + cp
+		repodir = self._myportdb.getRepositoryPath(repo)
+		pkgdir = repodir + "/" + cp # Get RepoDIR + cp
 
 		# Get the cp mainfest file checksum
 		try:
@@ -265,7 +275,7 @@ class gobs_package(object):
 
 			# Get the ebuild list for cp
 			mytree = []
-			mytree.append(self._myportdb.getRepositoryPath(repo))
+			mytree.append(repodir)
 			ebuild_list_tree = self._myportdb.cp_list(cp, use_cache=1, mytree=mytree)
 			if ebuild_list_tree == []:
 				log_msg = "QA: Can't get the ebuilds list. %s:%s" % (cp, repo,)
@@ -274,8 +284,9 @@ class gobs_package(object):
 				add_gobs_logs(self._conn, log_msg, "info", self._config_id)
 				return
 			packageDict ={}
+			new_ebuild_id_list = []
+			old_ebuild_id_list = []
 			for cpv in sorted(ebuild_list_tree):
-				old_ebuild_list = []
 
 				# split out ebuild version
 				ebuild_version_tree = portage.versions.cpv_getversion(cpv)
@@ -285,7 +296,20 @@ class gobs_package(object):
 
 				# Get the checksum of the ebuild in tree and db
 				ebuild_version_checksum_tree = packageDict[cpv]['ebuild_version_checksum_tree']
-				ebuild_version_manifest_checksum_db = get_ebuild_checksum(self._conn, package_id, ebuild_version_tree)
+				checksums_db = get_ebuild_checksum(self._conn, package_id, ebuild_version_tree)
+				# check if we have dupes of the checksum from db
+				if checksums_db is None:
+					ebuild_version_manifest_checksum_db = None
+				elif len(checksums_db) >= 2:
+					for checksum in checksums_db:
+						ebuilds_id = get_ebuild_id_db(self._conn, checksum, package_id)
+						log_msg = "U %s:%s:%s Dups of checksums" % (cpv, repo, ebuilds_id,)
+						add_gobs_logs(self._conn, log_msg, "error", self._config_id)
+						log_msg = "C %s:%s ... Fail." % (cp, repo)
+						add_gobs_logs(self._conn, log_msg, "error", self._config_id)
+						return
+				else:
+					ebuild_version_manifest_checksum_db = checksums_db[0]
 
 				# Check if the checksum have change
 				if ebuild_version_manifest_checksum_db is None:
@@ -296,28 +320,12 @@ class gobs_package(object):
 					# U = Updated ebuild
 					log_msg = "U %s:%s" % (cpv, repo,)
 					add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-
-					# Fix so we can use add_new_ebuild_sql() to update the ebuilds
-					old_ebuild_list.append(ebuild_version_tree)
 				else:
-					# Remove cpv from packageDict
+					# Remove cpv from packageDict and add the ebuild id to the new ebuild list
 					del packageDict[cpv]
+					new_ebuild_id_list.append(get_ebuild_id_db(self._conn, ebuild_version_checksum_tree, package_id)[0])
 
-			# Make old ebuilds unactive
-			add_old_ebuild(self._conn, package_id, old_ebuild_list)
-			 
-			# Use packageDict and to update the db
-			ebuild_id_list = add_new_ebuild_sql(self._conn, package_id, packageDict)
-
-			# update the cp manifest checksum
-			update_manifest_sql(self._conn, package_id, manifest_checksum_tree)
-
-			# Get the best cpv for the configs and add it to config_cpv_listDict
-			configs_id_list  = get_config_id_list(self._conn)
-			config_cpv_listDict = self.config_match_ebuild(cp, configs_id_list)
-
-			# Add the ebuild to the buildqueru table if needed
-			self.add_new_build_job_db(ebuild_id_list, packageDict, config_cpv_listDict)
+			self.add_package(packageDict, package_id, new_ebuild_id_list, old_ebuild_id_list, manifest_checksum_tree)
 
 		log_msg = "C %s:%s ... Done." % (cp, repo)
 		add_gobs_logs(self._conn, log_msg, "info", self._config_id)
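
The duplicate-checksum guard added above is the core of this hunk: get_ebuild_checksum() now returns a list, and the three cases (no row, several rows, exactly one row) decide whether an ebuild is new, broken in the db, or comparable against the tree. A minimal standalone sketch of that decision, with the database lookup replaced by its return value (the helper name classify_checksum is illustrative, not part of gobs):

    def classify_checksum(checksums_db, checksum_tree):
        # Mirrors the branches in update_package_db() above:
        # no row -> new ebuild, >= 2 rows -> duplicate checksums (abort),
        # exactly one row -> compare the db checksum against the tree.
        if checksums_db is None:
            return "new"
        if len(checksums_db) >= 2:
            return "duplicates"
        if checksums_db[0] != checksum_tree:
            return "updated"
        return "unchanged"

    print(classify_checksum(None, "abc123"))            # new
    print(classify_checksum(["abc123"], "abc123"))      # unchanged
    print(classify_checksum(["old", "old"], "abc123"))  # duplicates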

diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index 56d0894..cbf0dbc 100644
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -27,8 +27,7 @@ def init_portage_settings(conn, config_id):
 	add_gobs_logs(conn, log_msg, "info", config_id)
 	
 	# Get default config from the configs table  and default_config=1
-	hostname, config = get_default_config(conn)		# HostConfigDir = table configs id
-	host_config = hostname +"/" + config
+	host_config = _hostname +"/" + _config
 	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + host_config + "/"
 
 	# Set config_root (PORTAGE_CONFIGROOT)  to default_config_root
@@ -38,32 +37,29 @@ def init_portage_settings(conn, config_id):
 	return mysettings
 
 def update_cpv_db_pool(mysettings, myportdb, cp, repo):
-	CM2=connectionManager()
-	conn2 = CM2.newConnection()
-	if not conn2.is_connected() is True:
-		conn2.reconnect(attempts=2, delay=1)
-	init_package = gobs_package(conn2, mysettings, myportdb)
+	CM = connectionManager()
+	conn = CM.newConnection()
+	init_package = gobs_package(conn, mysettings, myportdb)
 	# split the cp to categories and package
 	element = cp.split('/')
 	categories = element[0]
 	package = element[1]
 
 	# update the categories table
-	update_categories_db(conn2, categories)
+	update_categories_db(conn, categories)
 
-	# Check if we don't have the cp in the package table
-	package_id = get_package_id(conn2, categories, package, repo)
+	# Check if we have the cp in the package table
+	package_id = get_package_id(conn, categories, package, repo)
 	if package_id is None:  
 
 		# Add new package with ebuilds
-		init_package.add_new_package_db(categories, package, repo)
+		init_package.add_new_package_db(cp, repo)
 
-	# Ceck if we have the cp in the package table
-	elif package_id is not None:
+	else:
 
 		# Update the packages with ebuilds
 		init_package.update_package_db(package_id)
-	conn2.close
+	conn.close()
 
 def update_cpv_db(conn, config_id):
 	mysettings =  init_portage_settings(conn, config_id)
@@ -80,12 +76,12 @@ def update_cpv_db(conn, config_id):
 	pool = multiprocessing.Pool(processes=pool_cores)
 
 	# Will run some update checks and update package if needed
-	# Get categories/package list from portage and repos
+
 	# Get the repos and update the repos db
 	repo_list = myportdb.getRepositories()
 	update_repo_db(conn, repo_list)
 
-	# close the db for the multiprocessing pool will make new ones
+	# Close the db; the multiprocessing pool will make new ones
 	# and we don't need this one for some time.
 	conn.close()
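
update_cpv_db_pool() above opens one database connection per pool worker and closes it when the worker is done, while update_cpv_db() closes the parent's connection before the workers start. A runnable sketch of that pattern, using the standard-library sqlite3 as a stand-in for the gobs connectionManager:

    import multiprocessing
    import sqlite3

    def update_cp(cp):
        # Each worker opens its own connection, does its work, and closes
        # it, as update_cpv_db_pool() does; connections are never shared
        # across fork().
        conn = sqlite3.connect("gobs-example.db")
        try:
            conn.execute("SELECT 1")  # placeholder for the real update
        finally:
            conn.close()

    if __name__ == "__main__":
        conn = sqlite3.connect("gobs-example.db")
        # ... setup work with the parent connection ...
        conn.close()  # close before forking, as update_cpv_db() does
        with multiprocessing.Pool(processes=4) as pool:
            pool.map(update_cp, ["dev-lang/python", "sys-apps/portage"])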
 


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-01-27 12:03 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-01-27 12:03 UTC (permalink / raw
  To: gentoo-commits

commit:     6b7cdf80a1286198c79ae47f8035e1a033674781
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Sun Jan 27 13:03:13 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Sun Jan 27 13:03:13 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=6b7cdf80

fix missing update

---
 gobs/pym/check_setup.py |    5 +++--
 gobs/pym/package.py     |    2 +-
 gobs/pym/sync.py        |    3 +--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/gobs/pym/check_setup.py b/gobs/pym/check_setup.py
index a3d06fc..df31140 100644
--- a/gobs/pym/check_setup.py
+++ b/gobs/pym/check_setup.py
@@ -11,10 +11,11 @@ from gobs.mysql_querys import get_config_id, get_config_list_all, add_gobs_logs,
 
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
-_config_profile = gobs_settings_dict['gobs_config']
+_config = gobs_settings_dict['gobs_config']
+_hostname = gobs_settings_dict['hostname']
 
 def check_make_conf(conn):
-	_config_id = get_config_id(conn, _config)
+	_config_id = get_config_id(conn, _config, _hostname)
 	# Get the config list
 	config_id_list_all = get_config_list_all(conn)
 	log_msg = "Checking configs for changes and errors"

diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 9354811..25494a7 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -227,7 +227,7 @@ class gobs_package(object):
 		# Get the ebuild list for cp
 		mytree = []
 		mytree.append(repodir)
-		ebuild_list_tree = self._myportdb.cp_list((cp, use_cache=1, mytree=mytree)
+		ebuild_list_tree = self._myportdb.cp_list(cp, use_cache=1, mytree=mytree)
 		if ebuild_list_tree == []:
 			log_msg = "QA: Can't get the ebuilds list. %s:%s" % (cp, repo,)
 			add_gobs_logs(self._conn, log_msg, "info", self._config_id)

diff --git a/gobs/pym/sync.py b/gobs/pym/sync.py
index 4a24484..1704639 100644
--- a/gobs/pym/sync.py
+++ b/gobs/pym/sync.py
@@ -32,8 +32,7 @@ def git_pull(conn):
 
 def sync_tree(conn):
 	config_id = get_config_id(conn, _config, _hostname)
-	hostname, config = get_default_config(conn)			# HostConfigDir = table configs id
-	host_config = hostname +"/" + config
+	host_config = _hostname +"/" + _config
 	default_config_root = "/var/cache/gobs/" + gobs_settings_dict['gobs_gitreponame'] + "/" + host_config + "/"
 	mysettings = portage.config(config_root = default_config_root)
 	tmpcmdline = []


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-03-22 19:05 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-03-22 19:05 UTC (permalink / raw
  To: gentoo-commits

commit:     594c1bbdd9c78b67804891c58f2a0194c679e59c
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 22 19:04:48 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Fri Mar 22 19:04:48 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=594c1bbd

update portage files and hilight log

---
 gobs/pym/Scheduler.py    |   73 ++++++----
 gobs/pym/actions.py      |  379 ++++++++++++++++++++++++++++------------------
 gobs/pym/build_log.py    |  172 +++++++++++----------
 gobs/pym/main.py         |   33 ++++-
 gobs/pym/mysql_querys.py |   19 +++-
 5 files changed, 414 insertions(+), 262 deletions(-)

diff --git a/gobs/pym/Scheduler.py b/gobs/pym/Scheduler.py
index 6c446cb..3aaf147 100644
--- a/gobs/pym/Scheduler.py
+++ b/gobs/pym/Scheduler.py
@@ -1,7 +1,7 @@
-# Copyright 1999-2012 Gentoo Foundation
+# Copyright 1999-2013 Gentoo Foundation
 # Distributed under the terms of the GNU General Public License v2
 
-from __future__ import print_function
+from __future__ import print_function, unicode_literals
 
 from collections import deque
 import gc
@@ -18,7 +18,7 @@ import zlib
 import portage
 from portage import os
 from portage import _encodings
-from portage import _unicode_decode, _unicode_encode
+from portage import _unicode_encode
 from portage.cache.mappings import slot_dict_class
 from portage.elog.messages import eerror
 from portage.localization import _
@@ -411,7 +411,7 @@ class Scheduler(PollScheduler):
 			if not (isinstance(task, Package) and \
 				task.operation == "merge"):
 				continue
-			if 'interactive' in task.metadata.properties:
+			if 'interactive' in task.properties:
 				interactive_tasks.append(task)
 		return interactive_tasks
 
@@ -720,7 +720,6 @@ class Scheduler(PollScheduler):
 			return
 
 		if self._parallel_fetch:
-			self._status_msg("Starting parallel fetch")
 
 			prefetchers = self._prefetchers
 
@@ -785,10 +784,10 @@ class Scheduler(PollScheduler):
 			if x.operation == "uninstall":
 				continue
 
-			if x.metadata["EAPI"] in ("0", "1", "2", "3"):
+			if x.eapi in ("0", "1", "2", "3"):
 				continue
 
-			if "pretend" not in x.metadata.defined_phases:
+			if "pretend" not in x.defined_phases:
 				continue
 
 			out_str =">>> Running pre-merge checks for " + colorize("INFORM", x.cpv) + "\n"
@@ -807,7 +806,7 @@ class Scheduler(PollScheduler):
 			build_dir_path = os.path.join(
 				os.path.realpath(settings["PORTAGE_TMPDIR"]),
 				"portage", x.category, x.pf)
-			existing_buildir = os.path.isdir(build_dir_path)
+			existing_builddir = os.path.isdir(build_dir_path)
 			settings["PORTAGE_BUILDDIR"] = build_dir_path
 			build_dir = EbuildBuildDir(scheduler=sched_iface,
 				settings=settings)
@@ -818,7 +817,7 @@ class Scheduler(PollScheduler):
 
 				# Clean up the existing build dir, in case pkg_pretend
 				# checks for available space (bug #390711).
-				if existing_buildir:
+				if existing_builddir:
 					if x.built:
 						tree = "bintree"
 						infloc = os.path.join(build_dir_path, "build-info")
@@ -907,13 +906,18 @@ class Scheduler(PollScheduler):
 					failures += 1
 				portage.elog.elog_process(x.cpv, settings)
 			finally:
-				if current_task is not None and current_task.isAlive():
-					current_task.cancel()
-					current_task.wait()
-				clean_phase = EbuildPhase(background=False,
-					phase='clean', scheduler=sched_iface, settings=settings)
-				clean_phase.start()
-				clean_phase.wait()
+
+				if current_task is not None:
+					if current_task.isAlive():
+						current_task.cancel()
+						current_task.wait()
+					if current_task.returncode == os.EX_OK:
+						clean_phase = EbuildPhase(background=False,
+							phase='clean', scheduler=sched_iface,
+							settings=settings)
+						clean_phase.start()
+						clean_phase.wait()
+
 				build_dir.unlock()
 
 		if failures:
@@ -1062,7 +1066,8 @@ class Scheduler(PollScheduler):
 		printer = portage.output.EOutput()
 		background = self._background
 		failure_log_shown = False
-		if background and len(self._failed_pkgs_all) == 1:
+		if background and len(self._failed_pkgs_all) == 1 and \
+			self.myopts.get('--quiet-fail', 'n') != 'y':
 			# If only one package failed then just show its
 			# whole log for easy viewing.
 			failed_pkg = self._failed_pkgs_all[-1]
@@ -1141,9 +1146,9 @@ class Scheduler(PollScheduler):
 				printer.eerror(line)
 			printer.eerror("")
 			for failed_pkg in self._failed_pkgs_all:
-				# Use _unicode_decode() to force unicode format string so
+				# Use unicode_literals to force unicode format string so
 				# that Package.__unicode__() is called in python2.
-				msg = _unicode_decode(" %s") % (failed_pkg.pkg,)
+				msg = " %s" % (failed_pkg.pkg,)
 				log_path = self._locate_failure_log(failed_pkg)
 				if log_path is not None:
 					msg += ", Log file:"
@@ -1534,7 +1539,7 @@ class Scheduler(PollScheduler):
 		self._config_pool[settings['EROOT']].append(settings)
 
 	def _keep_scheduling(self):
-		return bool(not self._terminated_tasks and self._pkg_queue and \
+		return bool(not self._terminated.is_set() and self._pkg_queue and \
 			not (self._failed_pkgs and not self._build_opts.fetchonly))
 
 	def _is_work_scheduled(self):
@@ -1794,7 +1799,7 @@ class Scheduler(PollScheduler):
 			#              scope
 			e = exc
 			mydepgraph = e.depgraph
-			dropped_tasks = set()
+			dropped_tasks = {}
 
 		if e is not None:
 			def unsatisfied_resume_dep_msg():
@@ -1844,7 +1849,7 @@ class Scheduler(PollScheduler):
 		self._init_graph(mydepgraph.schedulerGraph())
 
 		msg_width = 75
-		for task in dropped_tasks:
+		for task, atoms in dropped_tasks.items():
 			if not (isinstance(task, Package) and task.operation == "merge"):
 				continue
 			pkg = task
@@ -1852,7 +1857,10 @@ class Scheduler(PollScheduler):
 				" %s" % (pkg.cpv,)
 			if pkg.root_config.settings["ROOT"] != "/":
 				msg += " for %s" % (pkg.root,)
-			msg += " dropped due to unsatisfied dependency."
+			if not atoms:
+				msg += " dropped because it is masked or unavailable"
+			else:
+				msg += " dropped because it requires %s" % ", ".join(atoms)
 			for line in textwrap.wrap(msg, msg_width):
 				eerror(line, phase="other", key=pkg.cpv)
 			settings = self.pkgsettings[pkg.root]
@@ -1897,11 +1905,21 @@ class Scheduler(PollScheduler):
 		root_config = pkg.root_config
 		world_set = root_config.sets["selected"]
 		world_locked = False
-		if hasattr(world_set, "lock"):
-			world_set.lock()
-			world_locked = True
+		atom = None
+
+		if pkg.operation != "uninstall":
+			# Do this before acquiring the lock, since it queries the
+			# portdbapi which can call the global event loop, triggering
+			# a concurrent call to this method or something else that
+			# needs an exclusive (non-reentrant) lock on the world file.
+			atom = create_world_atom(pkg, args_set, root_config)
 
 		try:
+
+			if hasattr(world_set, "lock"):
+				world_set.lock()
+				world_locked = True
+
 			if hasattr(world_set, "load"):
 				world_set.load() # maybe it's changed on disk
 
@@ -1913,8 +1931,7 @@ class Scheduler(PollScheduler):
 					for s in pkg.root_config.setconfig.active:
 						world_set.remove(SETPREFIX+s)
 			else:
-				atom = create_world_atom(pkg, args_set, root_config)
-				if atom:
+				if atom is not None:
 					if hasattr(world_set, "add"):
 						self._status_msg(('Recording %s in "world" ' + \
 							'favorites file...') % atom)

diff --git a/gobs/pym/actions.py b/gobs/pym/actions.py
index 3b50187..e29d8e0 100644
--- a/gobs/pym/actions.py
+++ b/gobs/pym/actions.py
@@ -1,7 +1,7 @@
-# Copyright 1999-2012 Gentoo Foundation
+# Copyright 1999-2013 Gentoo Foundation
 # Distributed under the terms of the GNU General Public License v2
 
-from __future__ import print_function
+from __future__ import print_function, unicode_literals
 
 import errno
 import logging
@@ -22,8 +22,10 @@ from itertools import chain
 
 import portage
 portage.proxy.lazyimport.lazyimport(globals(),
+	'portage.dbapi._similar_name_search:similar_name_search',
 	'portage.debug',
 	'portage.news:count_unread_news,display_news_notifications',
+	'portage.util._get_vm_info:get_vm_info',
 	'_emerge.chk_updated_cfg_files:chk_updated_cfg_files',
 	'_emerge.help:help@emerge_help',
 	'_emerge.post_emerge:display_news_notification,post_emerge',
@@ -35,8 +37,7 @@ from portage import os
 from portage import shutil
 from portage import eapi_is_supported, _encodings, _unicode_decode
 from portage.cache.cache_errors import CacheError
-from portage.const import GLOBAL_CONFIG_PATH
-from portage.const import _DEPCLEAN_LIB_CHECK_DEFAULT
+from portage.const import GLOBAL_CONFIG_PATH, VCS_DIRS, _DEPCLEAN_LIB_CHECK_DEFAULT
 from portage.dbapi.dep_expand import dep_expand
 from portage.dbapi._expand_new_virt import expand_new_virt
 from portage.dep import Atom
@@ -54,6 +55,7 @@ from portage._sets.base import InternalPackageSet
 from portage.util import cmp_sort_key, writemsg, varexpand, \
 	writemsg_level, writemsg_stdout
 from portage.util.digraph import digraph
+from portage.util._async.run_main_scheduler import run_main_scheduler
 from portage.util._async.SchedulerInterface import SchedulerInterface
 from portage.util._eventloop.global_event_loop import global_event_loop
 from portage._global_updates import _global_updates
@@ -286,8 +288,14 @@ def action_build(settings, trees, mtimedb,
 					"dropped due to\n" + \
 					"!!! masking or unsatisfied dependencies:\n\n",
 					noiselevel=-1)
-				for task in dropped_tasks:
-					portage.writemsg("  " + str(task) + "\n", noiselevel=-1)
+				for task, atoms in dropped_tasks.items():
+					if not atoms:
+						writemsg("  %s is masked or unavailable\n" %
+							(task,), noiselevel=-1)
+					else:
+						writemsg("  %s requires %s\n" %
+							(task, ", ".join(atoms)), noiselevel=-1)
+
 				portage.writemsg("\n", noiselevel=-1)
 			del dropped_tasks
 		else:
@@ -312,6 +320,7 @@ def action_build(settings, trees, mtimedb,
 		if not success:
 			return 1
 
+	mergecount = None
 	if "--pretend" not in myopts and \
 		("--ask" in myopts or "--tree" in myopts or \
 		"--verbose" in myopts) and \
@@ -343,6 +352,7 @@ def action_build(settings, trees, mtimedb,
 				if isinstance(x, Package) and x.operation == "merge":
 					mergecount += 1
 
+			prompt = None
 			if mergecount==0:
 				sets = trees[settings['EROOT']]['root_config'].sets
 				world_candidates = None
@@ -355,12 +365,11 @@ def action_build(settings, trees, mtimedb,
 					world_candidates = [x for x in favorites \
 						if not (x.startswith(SETPREFIX) and \
 						not sets[x[1:]].world_candidate)]
+
 				if "selective" in myparams and \
 					not oneshot and world_candidates:
-					print()
-					for x in world_candidates:
-						print(" %s %s" % (good("*"), x))
-					prompt="Would you like to add these packages to your world favorites?"
+					# Prompt later, inside saveNomergeFavorites.
+					prompt = None
 				elif settings["AUTOCLEAN"] and "yes"==settings["AUTOCLEAN"]:
 					prompt="Nothing to merge; would you like to auto-clean packages?"
 				else:
@@ -373,13 +382,15 @@ def action_build(settings, trees, mtimedb,
 			else:
 				prompt="Would you like to merge these packages?"
 		print()
-		if "--ask" in myopts and userquery(prompt, enter_invalid) == "No":
+		if prompt is not None and "--ask" in myopts and \
+			userquery(prompt, enter_invalid) == "No":
 			print()
 			print("Quitting.")
 			print()
 			return 128 + signal.SIGINT
 		# Don't ask again (e.g. when auto-cleaning packages after merge)
-		myopts.pop("--ask", None)
+		if mergecount != 0:
+			myopts.pop("--ask", None)
 
 	if ("--pretend" in myopts) and not ("--fetchonly" in myopts or "--fetch-all-uri" in myopts):
 		if ("--resume" in myopts):
@@ -449,25 +460,29 @@ def action_build(settings, trees, mtimedb,
 
 			mydepgraph.saveNomergeFavorites()
 
-		mergetask = Scheduler(settings, trees, mtimedb, myopts,
-			spinner, favorites=favorites,
-			graph_config=mydepgraph.schedulerGraph())
-
-		del mydepgraph
-		clear_caches(trees)
-
-		retval = mergetask.merge()
-
-		if retval == os.EX_OK and not (buildpkgonly or fetchonly or pretend):
-			if "yes" == settings.get("AUTOCLEAN"):
-				portage.writemsg_stdout(">>> Auto-cleaning packages...\n")
-				unmerge(trees[settings['EROOT']]['root_config'],
-					myopts, "clean", [],
-					ldpath_mtimes, autoclean=1)
-			else:
-				portage.writemsg_stdout(colorize("WARN", "WARNING:")
-					+ " AUTOCLEAN is disabled.  This can cause serious"
-					+ " problems due to overlapping packages.\n")
+		if mergecount == 0:
+			retval = os.EX_OK
+		else:
+			mergetask = Scheduler(settings, trees, mtimedb, myopts,
+				spinner, favorites=favorites,
+				graph_config=mydepgraph.schedulerGraph())
+
+			del mydepgraph
+			clear_caches(trees)
+
+			retval = mergetask.merge()
+
+			if retval == os.EX_OK and \
+				not (buildpkgonly or fetchonly or pretend):
+				if "yes" == settings.get("AUTOCLEAN"):
+					portage.writemsg_stdout(">>> Auto-cleaning packages...\n")
+					unmerge(trees[settings['EROOT']]['root_config'],
+						myopts, "clean", [],
+						ldpath_mtimes, autoclean=1)
+				else:
+					portage.writemsg_stdout(colorize("WARN", "WARNING:")
+						+ " AUTOCLEAN is disabled.  This can cause serious"
+						+ " problems due to overlapping packages.\n")
 
 		return retval
 
@@ -614,11 +629,17 @@ def action_depclean(settings, trees, ldpath_mtimes,
 	if not cleanlist and "--quiet" in myopts:
 		return rval
 
+	set_atoms = {}
+	for k in ("system", "selected"):
+		try:
+			set_atoms[k] = root_config.setconfig.getSetAtoms(k)
+		except portage.exception.PackageSetNotFound:
+			# A nested set could not be resolved, so ignore nested sets.
+			set_atoms[k] = root_config.sets[k].getAtoms()
+
 	print("Packages installed:   " + str(len(vardb.cpv_all())))
-	print("Packages in world:    " + \
-		str(len(root_config.sets["selected"].getAtoms())))
-	print("Packages in system:   " + \
-		str(len(root_config.sets["system"].getAtoms())))
+	print("Packages in world:    %d" % len(set_atoms["selected"]))
+	print("Packages in system:   %d" % len(set_atoms["system"]))
 	print("Required packages:    "+str(req_pkg_count))
 	if "--pretend" in myopts:
 		print("Number to remove:     "+str(len(cleanlist)))
@@ -651,13 +672,21 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 	required_sets[protected_set_name] = protected_set
 	system_set = psets["system"]
 
-	if not system_set or not selected_set:
+	set_atoms = {}
+	for k in ("system", "selected"):
+		try:
+			set_atoms[k] = root_config.setconfig.getSetAtoms(k)
+		except portage.exception.PackageSetNotFound:
+			# A nested set could not be resolved, so ignore nested sets.
+			set_atoms[k] = root_config.sets[k].getAtoms()
+
+	if not set_atoms["system"] or not set_atoms["selected"]:
 
-		if not system_set:
+		if not set_atoms["system"]:
 			writemsg_level("!!! You have no system list.\n",
 				level=logging.ERROR, noiselevel=-1)
 
-		if not selected_set:
+		if not set_atoms["selected"]:
 			writemsg_level("!!! You have no world file.\n",
 					level=logging.WARNING, noiselevel=-1)
 
@@ -701,7 +730,7 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 						continue
 				except portage.exception.InvalidDependString as e:
 					show_invalid_depstring_notice(pkg,
-						pkg.metadata["PROVIDE"], str(e))
+						pkg._metadata["PROVIDE"], _unicode(e))
 					del e
 					protected_set.add("=" + pkg.cpv)
 					continue
@@ -755,7 +784,7 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 					continue
 			except portage.exception.InvalidDependString as e:
 				show_invalid_depstring_notice(pkg,
-					pkg.metadata["PROVIDE"], str(e))
+					pkg._metadata["PROVIDE"], _unicode(e))
 				del e
 				protected_set.add("=" + pkg.cpv)
 				continue
@@ -773,7 +802,7 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 					required_sets['__excluded__'].add("=" + pkg.cpv)
 			except portage.exception.InvalidDependString as e:
 				show_invalid_depstring_notice(pkg,
-					pkg.metadata["PROVIDE"], str(e))
+					pkg._metadata["PROVIDE"], _unicode(e))
 				del e
 				required_sets['__excluded__'].add("=" + pkg.cpv)
 
@@ -809,7 +838,12 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 			msg.append("the following required packages not being installed:")
 			msg.append("")
 			for atom, parent in unresolvable:
-				msg.append("  %s pulled in by:" % (atom,))
+				if atom != atom.unevaluated_atom and \
+					vardb.match(_unicode(atom)):
+					msg.append("  %s (%s) pulled in by:" %
+						(atom.unevaluated_atom, atom))
+				else:
+					msg.append("  %s pulled in by:" % (atom,))
 				msg.append("    %s" % (parent,))
 				msg.append("")
 			msg.extend(textwrap.wrap(
@@ -852,15 +886,27 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 			required_pkgs_total += 1
 
 	def show_parents(child_node):
-		parent_nodes = graph.parent_nodes(child_node)
-		if not parent_nodes:
+		parent_atoms = \
+			resolver._dynamic_config._parent_atoms.get(child_node, [])
+
+		# Never display the special internal protected_set.
+		parent_atoms = [parent_atom for parent_atom in parent_atoms
+			if not (isinstance(parent_atom[0], SetArg) and
+			parent_atom[0].name == protected_set_name)]
+
+		if not parent_atoms:
 			# With --prune, the highest version can be pulled in without any
 			# real parent since all installed packages are pulled in.  In that
 			# case there's nothing to show here.
 			return
+		parent_atom_dict = {}
+		for parent, atom in parent_atoms:
+			parent_atom_dict.setdefault(parent, []).append(atom)
+
 		parent_strs = []
-		for node in parent_nodes:
-			parent_strs.append(str(getattr(node, "cpv", node)))
+		for parent, atoms in parent_atom_dict.items():
+			parent_strs.append("%s requires %s" %
+				(getattr(parent, "cpv", parent), ", ".join(atoms)))
 		parent_strs.sort()
 		msg = []
 		msg.append("  %s pulled in by:\n" % (child_node.cpv,))
@@ -885,12 +931,6 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 			graph.debug_print()
 			writemsg("\n", noiselevel=-1)
 
-		# Never display the special internal protected_set.
-		for node in graph:
-			if isinstance(node, SetArg) and node.name == protected_set_name:
-				graph.remove(node)
-				break
-
 		pkgs_to_remove = []
 
 		if action == "depclean":
@@ -1163,17 +1203,17 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 		for node in clean_set:
 			graph.add(node, None)
 			for dep_type in Package._dep_keys:
-				depstr = node.metadata[dep_type]
+				depstr = node._metadata[dep_type]
 				if not depstr:
 					continue
 				priority = priority_map[dep_type]
 
 				if debug:
-					writemsg_level(_unicode_decode("\nParent:    %s\n") \
+					writemsg_level("\nParent:    %s\n"
 						% (node,), noiselevel=-1, level=logging.DEBUG)
-					writemsg_level(_unicode_decode(  "Depstring: %s\n") \
+					writemsg_level(  "Depstring: %s\n"
 						% (depstr,), noiselevel=-1, level=logging.DEBUG)
-					writemsg_level(_unicode_decode(  "Priority:  %s\n") \
+					writemsg_level(  "Priority:  %s\n"
 						% (priority,), noiselevel=-1, level=logging.DEBUG)
 
 				try:
@@ -1187,7 +1227,7 @@ def calc_depclean(settings, trees, ldpath_mtimes,
 
 				if debug:
 					writemsg_level("Candidates: [%s]\n" % \
-						', '.join(_unicode_decode("'%s'") % (x,) for x in atoms),
+						', '.join("'%s'" % (x,) for x in atoms),
 						noiselevel=-1, level=logging.DEBUG)
 
 				for atom in atoms:
@@ -1353,6 +1393,86 @@ class _info_pkgs_ver(object):
 
 def action_info(settings, trees, myopts, myfiles):
 
+	# See if we can find any packages installed matching the strings
+	# passed on the command line
+	mypkgs = []
+	eroot = settings['EROOT']
+	vardb = trees[eroot]["vartree"].dbapi
+	portdb = trees[eroot]['porttree'].dbapi
+	bindb = trees[eroot]["bintree"].dbapi
+	for x in myfiles:
+		any_match = False
+		cp_exists = bool(vardb.match(x.cp))
+		installed_match = vardb.match(x)
+		for installed in installed_match:
+			mypkgs.append((installed, "installed"))
+			any_match = True
+
+		if any_match:
+			continue
+
+		for db, pkg_type in ((portdb, "ebuild"), (bindb, "binary")):
+			if pkg_type == "binary" and "--usepkg" not in myopts:
+				continue
+
+			# Use match instead of cp_list, to account for old-style virtuals.
+			if not cp_exists and db.match(x.cp):
+				cp_exists = True
+			# Search for masked packages too.
+			if not cp_exists and hasattr(db, "xmatch") and \
+				db.xmatch("match-all", x.cp):
+				cp_exists = True
+
+			matches = db.match(x)
+			matches.reverse()
+			for match in matches:
+				if pkg_type == "binary":
+					if db.bintree.isremote(match):
+						continue
+				auxkeys = ["EAPI", "DEFINED_PHASES"]
+				metadata = dict(zip(auxkeys, db.aux_get(match, auxkeys)))
+				if metadata["EAPI"] not in ("0", "1", "2", "3") and \
+					"info" in metadata["DEFINED_PHASES"].split():
+					mypkgs.append((match, pkg_type))
+					break
+
+		if not cp_exists:
+			xinfo = '"%s"' % x.unevaluated_atom
+			# Discard null/ from failed cpv_expand category expansion.
+			xinfo = xinfo.replace("null/", "")
+			if settings["ROOT"] != "/":
+				xinfo = "%s for %s" % (xinfo, eroot)
+			writemsg("\nemerge: there are no ebuilds to satisfy %s.\n" %
+				colorize("INFORM", xinfo), noiselevel=-1)
+
+			if myopts.get("--misspell-suggestions", "y") != "n":
+
+				writemsg("\nemerge: searching for similar names..."
+					, noiselevel=-1)
+
+				dbs = [vardb]
+				#if "--usepkgonly" not in myopts:
+				dbs.append(portdb)
+				if "--usepkg" in myopts:
+					dbs.append(bindb)
+
+				matches = similar_name_search(dbs, x)
+
+				if len(matches) == 1:
+					writemsg("\nemerge: Maybe you meant " + matches[0] + "?\n"
+						, noiselevel=-1)
+				elif len(matches) > 1:
+					writemsg(
+						"\nemerge: Maybe you meant any of these: %s?\n" % \
+						(", ".join(matches),), noiselevel=-1)
+				else:
+					# Generally, this would only happen if
+					# all dbapis are empty.
+					writemsg(" nothing similar found.\n"
+						, noiselevel=-1)
+
+			return 1
+
 	output_buffer = []
 	append = output_buffer.append
 	root_config = trees[settings['EROOT']]['root_config']
@@ -1371,6 +1491,18 @@ def action_info(settings, trees, myopts, myfiles):
 	append(header_width * "=")
 	append("System uname: %s" % (platform.platform(aliased=1),))
 
+	vm_info = get_vm_info()
+	if "ram.total" in vm_info:
+		line = "%-9s %10d total" % ("KiB Mem:", vm_info["ram.total"] / 1024)
+		if "ram.free" in vm_info:
+			line += ",%10d free" % (vm_info["ram.free"] / 1024,)
+		append(line)
+	if "swap.total" in vm_info:
+		line = "%-9s %10d total" % ("KiB Swap:", vm_info["swap.total"] / 1024)
+		if "swap.free" in vm_info:
+			line += ",%10d free" % (vm_info["swap.free"] / 1024,)
+		append(line)
+
 	lastSync = portage.grabfile(os.path.join(
 		settings["PORTDIR"], "metadata", "timestamp.chk"))
 	if lastSync:
@@ -1559,40 +1691,6 @@ def action_info(settings, trees, myopts, myfiles):
 	writemsg_stdout("\n".join(output_buffer),
 		noiselevel=-1)
 
-	# See if we can find any packages installed matching the strings
-	# passed on the command line
-	mypkgs = []
-	eroot = settings['EROOT']
-	vardb = trees[eroot]["vartree"].dbapi
-	portdb = trees[eroot]['porttree'].dbapi
-	bindb = trees[eroot]["bintree"].dbapi
-	for x in myfiles:
-		match_found = False
-		installed_match = vardb.match(x)
-		for installed in installed_match:
-			mypkgs.append((installed, "installed"))
-			match_found = True
-
-		if match_found:
-			continue
-
-		for db, pkg_type in ((portdb, "ebuild"), (bindb, "binary")):
-			if pkg_type == "binary" and "--usepkg" not in myopts:
-				continue
-
-			matches = db.match(x)
-			matches.reverse()
-			for match in matches:
-				if pkg_type == "binary":
-					if db.bintree.isremote(match):
-						continue
-				auxkeys = ["EAPI", "DEFINED_PHASES"]
-				metadata = dict(zip(auxkeys, db.aux_get(match, auxkeys)))
-				if metadata["EAPI"] not in ("0", "1", "2", "3") and \
-					"info" in metadata["DEFINED_PHASES"].split():
-					mypkgs.append((match, pkg_type))
-					break
-
 	# If some packages were found...
 	if mypkgs:
 		# Get our global settings (we only print stuff if it varies from
@@ -1886,35 +1984,10 @@ def action_regen(settings, portdb, max_jobs, max_load):
 
 	regen = MetadataRegen(portdb, max_jobs=max_jobs,
 		max_load=max_load, main=True)
-	received_signal = []
-
-	def emergeexitsig(signum, frame):
-		signal.signal(signal.SIGINT, signal.SIG_IGN)
-		signal.signal(signal.SIGTERM, signal.SIG_IGN)
-		portage.util.writemsg("\n\nExiting on signal %(signal)s\n" % \
-			{"signal":signum})
-		regen.terminate()
-		received_signal.append(128 + signum)
-
-	earlier_sigint_handler = signal.signal(signal.SIGINT, emergeexitsig)
-	earlier_sigterm_handler = signal.signal(signal.SIGTERM, emergeexitsig)
 
-	try:
-		regen.start()
-		regen.wait()
-	finally:
-		# Restore previous handlers
-		if earlier_sigint_handler is not None:
-			signal.signal(signal.SIGINT, earlier_sigint_handler)
-		else:
-			signal.signal(signal.SIGINT, signal.SIG_DFL)
-		if earlier_sigterm_handler is not None:
-			signal.signal(signal.SIGTERM, earlier_sigterm_handler)
-		else:
-			signal.signal(signal.SIGTERM, signal.SIG_DFL)
-
-	if received_signal:
-		sys.exit(received_signal[0])
+	signum = run_main_scheduler(regen)
+	if signum is not None:
+		sys.exit(128 + signum)
 
 	portage.writemsg_stdout("done!\n")
 	return regen.returncode
@@ -2005,7 +2078,7 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 			noiselevel=-1, level=logging.ERROR)
 		return 1
 
-	vcs_dirs = frozenset([".git", ".svn", "CVS", ".hg"])
+	vcs_dirs = frozenset(VCS_DIRS)
 	vcs_dirs = vcs_dirs.intersection(os.listdir(myportdir))
 
 	os.umask(0o022)
@@ -2031,7 +2104,8 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 		emergelog(xterm_titles, msg )
 		writemsg_level(msg + "\n")
 		exitcode = portage.process.spawn_bash("cd %s ; git pull" % \
-			(portage._shell_quote(myportdir),), **spawn_kwargs)
+			(portage._shell_quote(myportdir),),
+			**portage._native_kwargs(spawn_kwargs))
 		if exitcode != os.EX_OK:
 			msg = "!!! git pull error in %s." % myportdir
 			emergelog(xterm_titles, msg)
@@ -2047,7 +2121,8 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 				"control (contains %s).\n!!! Aborting rsync sync.\n") % \
 				(myportdir, vcs_dir), level=logging.ERROR, noiselevel=-1)
 			return 1
-		if not os.path.exists("/usr/bin/rsync"):
+		rsync_binary = portage.process.find_binary("rsync")
+		if rsync_binary is None:
 			print("!!! /usr/bin/rsync does not exist, so rsync support is disabled.")
 			print("!!! Type \"emerge net-misc/rsync\" to enable rsync support.")
 			sys.exit(1)
@@ -2273,7 +2348,7 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 			if mytimestamp != 0 and "--quiet" not in myopts:
 				print(">>> Checking server timestamp ...")
 
-			rsynccommand = ["/usr/bin/rsync"] + rsync_opts + extra_rsync_opts
+			rsynccommand = [rsync_binary] + rsync_opts + extra_rsync_opts
 
 			if "--debug" in myopts:
 				print(rsynccommand)
@@ -2319,7 +2394,8 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 								rsync_initial_timeout)
 
 						mypids.extend(portage.process.spawn(
-							mycommand, returnpid=True, **spawn_kwargs))
+							mycommand, returnpid=True,
+							**portage._native_kwargs(spawn_kwargs)))
 						exitcode = os.waitpid(mypids[0], 0)[1]
 						if usersync_uid is not None:
 							portage.util.apply_permissions(tmpservertimestampfile,
@@ -2385,7 +2461,8 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 				elif (servertimestamp == 0) or (servertimestamp > mytimestamp):
 					# actual sync
 					mycommand = rsynccommand + [dosyncuri+"/", myportdir]
-					exitcode = portage.process.spawn(mycommand, **spawn_kwargs)
+					exitcode = portage.process.spawn(mycommand,
+						**portage._native_kwargs(spawn_kwargs))
 					if exitcode in [0,1,3,4,11,14,20,21]:
 						break
 			elif exitcode in [1,3,4,11,14,20,21]:
@@ -2463,7 +2540,7 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 			if portage.process.spawn_bash(
 					"cd %s; exec cvs -z0 -d %s co -P gentoo-x86" % \
 					(portage._shell_quote(cvsdir), portage._shell_quote(cvsroot)),
-					**spawn_kwargs) != os.EX_OK:
+					**portage._native_kwargs(spawn_kwargs)) != os.EX_OK:
 				print("!!! cvs checkout error; exiting.")
 				sys.exit(1)
 			os.rename(os.path.join(cvsdir, "gentoo-x86"), myportdir)
@@ -2472,7 +2549,8 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 			print(">>> Starting cvs update with "+syncuri+"...")
 			retval = portage.process.spawn_bash(
 				"cd %s; exec cvs -z0 -q update -dP" % \
-				(portage._shell_quote(myportdir),), **spawn_kwargs)
+				(portage._shell_quote(myportdir),),
+				**portage._native_kwargs(spawn_kwargs))
 			if retval != os.EX_OK:
 				writemsg_level("!!! cvs update error; exiting.\n",
 					noiselevel=-1, level=logging.ERROR)
@@ -2544,7 +2622,7 @@ def action_sync(settings, trees, mtimedb, myopts, myaction):
 		print(warn(" * ")+bold("An update to portage is available.")+" It is _highly_ recommended")
 		print(warn(" * ")+"that you update portage now, before any other packages are updated.")
 		print()
-		print(warn(" * ")+"To update portage, run 'emerge portage' now.")
+		print(warn(" * ")+"To update portage, run 'emerge --oneshot portage' now.")
 		print()
 
 	display_news_notification(root_config, myopts)
@@ -3054,7 +3132,7 @@ def load_emerge_config(trees=None):
 		v = os.environ.get(envvar, None)
 		if v and v.strip():
 			kwargs[k] = v
-	trees = portage.create_trees(trees=trees, **kwargs)
+	trees = portage.create_trees(trees=trees, **portage._native_kwargs(kwargs))
 
 	for root_trees in trees.values():
 		settings = root_trees["vartree"].settings
@@ -3258,7 +3336,7 @@ def expand_set_arguments(myfiles, myaction, root_config):
 	# world file, the depgraph performs set expansion later. It will get
 	# confused about where the atoms came from if it's not allowed to
 	# expand them itself.
-	do_not_expand = (None, )
+	do_not_expand = myaction is None
 	newargs = []
 	for a in myfiles:
 		if a in ("system", "world"):
@@ -3324,6 +3402,14 @@ def expand_set_arguments(myfiles, myaction, root_config):
 					for line in textwrap.wrap(msg, 50):
 						out.ewarn(line)
 				setconfig.active.append(s)
+
+				if do_not_expand:
+					# Loading sets can be slow, so skip it here, in order
+					# to allow the depgraph to indicate progress with the
+					# spinner while sets are loading (bug #461412).
+					newargs.append(a)
+					continue
+
 				try:
 					set_atoms = setconfig.getSetAtoms(s)
 				except portage.exception.PackageSetNotFound as e:
@@ -3339,17 +3425,18 @@ def expand_set_arguments(myfiles, myaction, root_config):
 					return (None, 1)
 				if myaction in unmerge_actions and \
 						not sets[s].supportsOperation("unmerge"):
-					sys.stderr.write("emerge: the given set '%s' does " % s + \
-						"not support unmerge operations\n")
+					writemsg_level("emerge: the given set '%s' does " % s + \
+						"not support unmerge operations\n",
+						level=logging.ERROR, noiselevel=-1)
 					retval = 1
 				elif not set_atoms:
-					print("emerge: '%s' is an empty set" % s)
-				elif myaction not in do_not_expand:
-					newargs.extend(set_atoms)
+					writemsg_level("emerge: '%s' is an empty set\n" % s,
+						level=logging.INFO, noiselevel=-1)
 				else:
-					newargs.append(SETPREFIX+s)
-				for e in sets[s].errors:
-					print(e)
+					newargs.extend(set_atoms)
+				for error_msg in sets[s].errors:
+					writemsg_level("%s\n" % (error_msg,),
+						level=logging.ERROR, noiselevel=-1)
 		else:
 			newargs.append(a)
 	return (newargs, retval)
@@ -3514,8 +3601,7 @@ def run_action(settings, trees, mtimedb, myaction, myopts, myfiles, build_dict,
 	del mytrees, mydb
 
 	for x in myfiles:
-		ext = os.path.splitext(x)[1]
-		if (ext == ".ebuild" or ext == ".tbz2") and \
+		if x.endswith((".ebuild", ".tbz2")) and \
 			os.path.exists(os.path.abspath(x)):
 			print(colorize("BAD", "\n*** emerging by path is broken "
 				"and may not always work!!!\n"))
@@ -3678,10 +3764,15 @@ def run_action(settings, trees, mtimedb, myaction, myopts, myfiles, build_dict,
 			portage.util.ensure_dirs(_emerge.emergelog._emerge_log_dir)
 
 	if not "--pretend" in myopts:
-		emergelog(xterm_titles, "Started emerge on: "+\
-			_unicode_decode(
-				time.strftime("%b %d, %Y %H:%M:%S", time.localtime()),
-				encoding=_encodings['content'], errors='replace'))
+		time_fmt = "%b %d, %Y %H:%M:%S"
+		if sys.hexversion < 0x3000000:
+			time_fmt = portage._unicode_encode(time_fmt)
+		time_str = time.strftime(time_fmt, time.localtime(time.time()))
+		# Avoid potential UnicodeDecodeError in Python 2, since strftime
+		# returns bytes in Python 2, and %b may contain non-ascii chars.
+		time_str = _unicode_decode(time_str,
+			encoding=_encodings['content'], errors='replace')
+		emergelog(xterm_titles, "Started emerge on: %s" % time_str)
 		myelogstr=""
 		if myopts:
 			opt_list = []
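
One of the larger additions in this actions.py update is the misspell-suggestion path: when no ebuild satisfies an argument, similar_name_search() (lazy-imported from portage.dbapi._similar_name_search) is asked for close names. A rough illustration of the same idea using difflib from the standard library, not the portage helper itself:

    import difflib

    def suggest_similar(query, known_packages):
        # Stand-in for similar_name_search(): offer up to three close
        # matches when the requested package does not exist.
        return difflib.get_close_matches(query, known_packages, n=3)

    known = ["dev-lang/python", "dev-lang/perl", "app-shells/bash"]
    print(suggest_similar("dev-lang/pyton", known))  # ['dev-lang/python']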

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 36f1c7a..8558cf3 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -3,8 +3,6 @@ import re
 import os
 import platform
 import hashlib
-from multiprocessing import Process
-
 
 from portage.versions import catpkgsplit, cpv_getversion
 import portage
@@ -21,8 +19,8 @@ from gobs.flags import gobs_use_flags
 from gobs.ConnectionManager import connectionManager
 from gobs.mysql_querys import add_gobs_logs, get_config_id, get_ebuild_id_db_checksum, add_new_buildlog, \
 	update_manifest_sql, get_package_id, get_build_job_id, get_use_id, get_fail_querue_dict, \
-	add_fail_querue_dict, update_fail_times, get_config
-	
+	add_fail_querue_dict, update_fail_times, get_config, get_hilight_info
+
 def get_build_dict_db(conn, config_id, settings, pkg):
 	myportdb = portage.portdbapi(mysettings=settings)
 	cpvr_list = catpkgsplit(pkg.cpv, silent=1)
@@ -83,88 +81,99 @@ def get_build_dict_db(conn, config_id, settings, pkg):
 		build_dict['build_job_id'] = build_job_id
 	return build_dict
 
-def search_info(textline, error_log_list):
-	if re.search(" * Package:", textline):
-		error_log_list.append(textline + '\n')
-	if re.search(" * Repository:", textline):
-		error_log_list.append(textline + '\n')
-	if re.search(" * Maintainer:", textline):
-		error_log_list.append(textline + '\n')
-	if re.search(" * USE:", textline):
-		error_log_list.append(textline + '\n')
-	if re.search(" * FEATURES:", textline):
-		error_log_list.append(textline + '\n')
-	return error_log_list
-
-def search_error(logfile_text, textline, error_log_list, sum_build_log_list, i):
-	if re.search("Error 1", textline):
-		x = i - 20
-		endline = True
-		error_log_list.append(".....\n")
-		while x != i + 3 and endline:
-			try:
-				error_log_list.append(logfile_text[x] + '\n')
-			except:
-				endline = False
-			else:
-				x = x +1
-	if re.search(" * ERROR:", textline):
-		x = i
-		endline= True
-		field = textline.split(" ")
-		sum_build_log_list.append("True")
-		error_log_list.append(".....\n")
-		while x != i + 10 and endline:
-			try:
-				error_log_list.append(logfile_text[x] + '\n')
-			except:
-				endline = False
-			else:
-				x = x +1
-	if re.search("configure: error:", textline):
-		x = i - 4
-		endline = True
-		error_log_list.append(".....\n")
-		while x != i + 3 and endline:
-			try:
-				error_log_list.append(logfile_text[x] + '\n')
-			except:
-				endline = False
-			else:
-				x = x +1
-	return error_log_list, sum_build_log_list
-
-def search_qa(logfile_text, textline, qa_error_list, error_log_list,i):
-	if re.search(" * QA Notice:", textline):
-		x = i
-		qa_error_list.append(logfile_text[x] + '\n')
-		endline= True
-		error_log_list.append(".....\n")
-		while x != i + 3 and endline:
-			try:
-				error_log_list.append(logfile_text[x] + '\n')
-			except:
-				endline = False
+def search_buildlog(conn, logfile_text):
+	log_search_list = get_hilight_info(conn)
+	index = 0
+	hilight_list = []
+	for textline in logfile_text:
+		index = index + 1
+		for search_pattern in log_search_list:
+			if re.searchsearch_pattern(['hilight_search'], textline):
+				hilight_tmp = {}
+				hilight_tmp['startline'] = index - search_pattern['hilight_start']
+				hilight_tmp['hilight'] = search_pattern['hilight_css']
+				if search_pattern['hilight_search_end'] is None:
+					hilight_tmp['endline'] = index + search_pattern['hilight_end']
+				else:
+					hilight_tmp['endline'] = None
+					i = index + 1
+					while hilight_tmp['endline'] == None:
+						if re.serchsearch_pattern(['hilight_search_end'], logfile_text[i -1]):
+							if re.serch(search_pattern['hilight_search_end'], logfile_text[i]):
+								i = i + 1
+							else:
+								hilight_tmp['endline'] = i
+						else:
+							i = i +1
+				hilight_list.append(hilight_tmp)
+	new_hilight_dict = {}
+	for hilight_tmp in hilight_list:
+		add_new_hilight = True
+		add_new_hilight_middel = None
+		for k, v in sorted(new_hilight_dict.iteritems()):
+			if hilight_tmp['startline'] == hilight_tmp['endline']:
+				if v['endline'] == hilight_tmp['startline'] or v['startline'] == hilight_tmp['startline']:
+					add_new_hilight = False
+				if hilight_tmp['startline'] > v['startline'] and hilight_tmp['startline'] < v['endline']:
+					add_new_hilight = False
+					add_new_hilight_middel = k
 			else:
-				x = x +1
-	return qa_error_list, error_log_list
+				if v['endline'] == hilight_tmp['startline'] or v['startline'] == hilight_tmp['startline']:
+					add_new_hilight = False
+				if hilight_tmp['startline'] > v['startline'] and hilight_tmp['startline'] < v['endline']:
+					add_new_hilight = False
+		if add_new_hilight is True:
+			adict = {}
+			adict['startline'] = hilight_tmp['startline']
+			adict['hilight'] = hilight_tmp['hilight']
+			adict['endline'] = hilight_tmp['endline']
+			new_hilight_dict[hilight_tmp['startline']] = adict
+		if not add_new_hilight_middel is None:
+			adict1 = {}
+			adict2 = {}
+			adict3 = {}
+			adict1['startline'] = new_hilight_dict[add_new_hilight_middel]['startline']
+			adict1['endline'] = hilight_tmp['startline'] -1
+			adict1['hilight'] = new_hilight_dict[add_new_hilight_middel]['hilight']
+			adict2['startline'] = hilight_tmp['startline']
+			adict2['hilight'] = hilight_tmp['hilight']
+			adict2['endline'] = hilight_tmp['endline']
+			adict3['startline'] = hilight_tmp['endline'] + 1
+			adict3['hilight'] = new_hilight_dict[add_new_hilight_middel]['hilight']
+			adict3['endline'] = new_hilight_dict[add_new_hilight_middel]['endline']	
+			del new_hilight_dict[add_new_hilight_middel]
+			new_hilight_dict[adict1['startline']] = adict1
+			new_hilight_dict[adict2['startline']] = adict2
+			new_hilight_dict[adict3['startline']] = adict3
+	return new_hilight_dict
 
-def get_buildlog_info(settings, pkg, build_dict):
+def get_buildlog_info(conn, settings, pkg, build_dict):
 	myportdb = portage.portdbapi(mysettings=settings)
 	init_repoman = gobs_repoman(settings, myportdb)
 	logfile_text = get_log_text_list(settings.get("PORTAGE_LOG_FILE"))
-	# FIXME to support more errors and stuff
-	i = 0
+	hilight_dict = search_buildlog(conn, logfile_text)
 	build_log_dict = {}
 	error_log_list = []
 	qa_error_list = []
 	repoman_error_list = []
 	sum_build_log_list = []
-	for textline in logfile_text:
-		error_log_list = search_info(textline, error_log_list)
-		error_log_list, sum_build_log_list = search_error(logfile_text, textline, error_log_list, sum_build_log_list, i)
-		qa_error_list, error_log_list = search_qa(logfile_text, textline, qa_error_list, error_log_list, i)
-		i = i +1
+	
+	for k, v in sorted(hilight_dict.iteritems()):
+		if v['startline'] == v['endline']:
+			error_log_list.append(logfile_text[k -1])
+			if v['hilight'] == "qa":
+				qa_error_list.append(logfile_text[k -1])
+		else:
+			i = k
+			while i != (v['endline'] + 1):
+				error_log_list.append(logfile_text[i -1])
+				if v['hilight'] == "qa":
+					qa_error_list.append(logfile_text[i -1])
+				i = i +1
+			error_log_list.append(logfile_text[i -1])
+			if v['hilight'] == "qa":
+				qa_error_list(logfile_text[i -1])
+
 	# Run repoman check_repoman()
 	repoman_error_list = init_repoman.check_repoman(build_dict['cpv'], pkg.repo)
 	if repoman_error_list != []:
@@ -175,6 +184,7 @@ def get_buildlog_info(settings, pkg, build_dict):
 	build_log_dict['qa_error_list'] = qa_error_list
 	build_log_dict['error_log_list'] = error_log_list
 	build_log_dict['summary_error_list'] = sum_build_log_list
+	build_log_dict['hilight_dict'] = hilight_dict
 	return build_log_dict
 
 def write_msg_file(msg, log_path):
@@ -209,7 +219,7 @@ def write_msg_file(msg, log_path):
 				if f_real is not f:
 					f_real.close()
 
-def add_buildlog_process(settings, pkg):
+def add_buildlog_main(settings, pkg):
 	CM = connectionManager()
 	conn = CM.newConnection()
 	if not conn.is_connected() is True:
@@ -227,7 +237,7 @@ def add_buildlog_process(settings, pkg):
 		conn.close
 		return
 	build_log_dict = {}
-	build_log_dict = get_buildlog_info(settings, pkg, build_dict)
+	build_log_dict = get_buildlog_info(conn, settings, pkg, build_dict)
 	error_log_list = build_log_dict['error_log_list']
 	build_error = ""
 	log_hash = hashlib.sha256()
@@ -259,12 +269,6 @@ def add_buildlog_process(settings, pkg):
 		print(">>> Logging %s:%s" % (pkg.cpv, pkg.repo,))
 	conn.close
 
-def add_buildlog_main(settings, pkg):
-	#Run it in a process so we don't kill portage
-	p = Process(target=add_buildlog_process, args=(settings, pkg,))
-	p.start()
-	p.join()
-
 def log_fail_queru(conn, build_dict, settings):
 	config_id = build_dict['config_id']
 	print('build_dict', build_dict)
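
search_buildlog() above replaces the hard-coded search_info/search_error/search_qa scanners with db-driven rules: every regex hit becomes a {startline, endline, hilight} span, and overlapping spans are then merged or split. A compact sketch of the span-collection step, written with the re.search calls as the 2013-04-24 follow-up commit corrects them; the RULES table is a hypothetical example of what get_hilight_info() returns:

    import re

    # Hypothetical rule in the shape returned by get_hilight_info().
    RULES = [
        {"hilight_search": r"\* QA Notice:", "hilight_search_end": None,
         "hilight_css": "qa", "hilight_start": 0, "hilight_end": 3},
    ]

    def find_spans(lines, rules):
        # Record a span of surrounding context for every matching line,
        # mirroring the first loop of search_buildlog() above.
        spans = []
        for index, line in enumerate(lines, start=1):
            for rule in rules:
                if re.search(rule["hilight_search"], line):
                    spans.append({
                        "startline": index - rule["hilight_start"],
                        "endline": index + rule["hilight_end"],
                        "hilight": rule["hilight_css"],
                    })
        return spans

    log = ["checking...", " * QA Notice: implicit declaration",
           "detail 1", "detail 2", "detail 3"]
    print(find_spans(log, RULES))  # one "qa" span from line 2 to line 5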

diff --git a/gobs/pym/main.py b/gobs/pym/main.py
index 4bc45ee..921267a 100644
--- a/gobs/pym/main.py
+++ b/gobs/pym/main.py
@@ -1,4 +1,4 @@
-# Copyright 1999-2012 Gentoo Foundation
+# Copyright 1999-2013 Gentoo Foundation
 # Distributed under the terms of the GNU General Public License v2
 
 from __future__ import print_function
@@ -44,7 +44,6 @@ options=[
 "--tree",
 "--unordered-display",
 "--update",
-"--verbose",
 "--verbose-main-repo-display",
 ]
 
@@ -65,7 +64,7 @@ shortmapping={
 "s":"--search",    "S":"--searchdesc",
 "t":"--tree",
 "u":"--update",
-"v":"--verbose",   "V":"--version"
+"V":"--version"
 }
 
 COWSAY_MOO = """
@@ -139,6 +138,7 @@ def insert_optional_args(args):
 		'--package-moves'        : y_or_n,
 		'--quiet'                : y_or_n,
 		'--quiet-build'          : y_or_n,
+		'--quiet-fail'           : y_or_n,
 		'--rebuild-if-new-slot': y_or_n,
 		'--rebuild-if-new-rev'   : y_or_n,
 		'--rebuild-if-new-ver'   : y_or_n,
@@ -150,6 +150,7 @@ def insert_optional_args(args):
 		"--use-ebuild-visibility": y_or_n,
 		'--usepkg'               : y_or_n,
 		'--usepkgonly'           : y_or_n,
+		'--verbose'              : y_or_n,
 	}
 
 	short_arg_opts = {
@@ -167,6 +168,8 @@ def insert_optional_args(args):
 		'k' : y_or_n,
 		'K' : y_or_n,
 		'q' : y_or_n,
+		'v' : y_or_n,
+		'w' : y_or_n,
 	}
 
 	arg_stack = args[:]
@@ -541,6 +544,12 @@ def parse_opts(tmpcmdline, silent=False):
 			"choices"  : true_y_or_n,
 		},
 
+		"--quiet-fail": {
+			"help"     : "suppresses display of the build log on stdout",
+			"type"     : "choice",
+			"choices"  : true_y_or_n,
+		},
+
 		"--rebuild-if-new-slot": {
 			"help"     : ("Automatically rebuild or reinstall packages when slot/sub-slot := "
 				"operator dependencies can be satisfied by a newer slot, so that "
@@ -600,6 +609,7 @@ def parse_opts(tmpcmdline, silent=False):
 		},
 
 		"--select": {
+			"shortopt" : "-w",
 			"help"    : "add specified packages to the world set " + \
 			            "(inverse of --oneshot)",
 			"type"    : "choice",
@@ -638,6 +648,13 @@ def parse_opts(tmpcmdline, silent=False):
 			"type"     : "choice",
 			"choices"  : true_y_or_n
 		},
+
+		"--verbose": {
+			"shortopt" : "-v",
+			"help"     : "verbose output",
+			"type"     : "choice",
+			"choices"  : true_y_or_n
+		},
 	}
 
 	from optparse import OptionParser
@@ -782,6 +799,9 @@ def parse_opts(tmpcmdline, silent=False):
 	if myoptions.quiet_build in true_y:
 		myoptions.quiet_build = 'y'
 
+	if myoptions.quiet_fail in true_y:
+		myoptions.quiet_fail = 'y'
+
 	if myoptions.rebuild_if_new_slot in true_y:
 		myoptions.rebuild_if_new_slot = 'y'
 
@@ -917,6 +937,11 @@ def parse_opts(tmpcmdline, silent=False):
 	else:
 		myoptions.usepkgonly = None
 
+	if myoptions.verbose in true_y:
+		myoptions.verbose = True
+	else:
+		myoptions.verbose = None
+
 	for myopt in options:
 		v = getattr(myoptions, myopt.lstrip("--").replace("-", "_"))
 		if v:
@@ -979,8 +1004,6 @@ def emerge_main(args=None, build_dict=None):
 	if build_dict is None:
 		build_dict = {}
 
-	portage._disable_legacy_globals()
-	portage._internal_warnings = True
 	# Disable color until we're sure that it should be enabled (after
 	# EMERGE_DEFAULT_OPTS has been parsed).
 	portage.output.havecolor = 0
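
The new --quiet-fail and --verbose/-v options above follow emerge's y-or-n choice convention. A minimal optparse sketch of that convention (true_y_or_n is assumed here to be ('True', 'y', 'n'), as in emerge's main.py):

    from optparse import OptionParser

    true_y_or_n = ("True", "y", "n")

    parser = OptionParser()
    parser.add_option("--quiet-fail", type="choice", choices=true_y_or_n,
        help="suppresses display of the build log on stdout")
    parser.add_option("-v", "--verbose", type="choice", choices=true_y_or_n,
        help="verbose output")

    opts, args = parser.parse_args(["--quiet-fail", "y", "-v", "True"])
    print(opts.quiet_fail, opts.verbose)  # y True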

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 8b8bacd..cd94a76 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -86,7 +86,7 @@ def update_make_conf(connection, configsDict):
 
 def get_default_config(connection):
 	cursor = connection.cursor()
-	sqlQ = "SELECT host, config FROM configs WHERE default_config = 'True'"
+	sqlQ = "SELECT hostname, config FROM configs WHERE default_config = 'True'"
 	cursor.execute(sqlQ)
 	hostname, config = cursor.fetchone()
 	cursor.close()
@@ -523,6 +523,23 @@ def get_build_job_id(connection, build_dict):
 			return build_job_id[0]
 	cursor.close()
 
+def get_hilight_info(connection):
+	cursor = connection.cursor()
+	sqlQ = 'SELECT hilight_search, hilight_search_end, hilight_css, hilight_start, hilight_end FROM hilight'
+	hilight = []
+	cursor.execute(sqlQ)
+	entries = cursor.fetchall()
+	cursor.close()
+	for i in entries:
+		aadict = {}
+		aadict['hilight_search'] = i[0]
+	aadict['hilight_search_end'] = i[1]
+		aadict['hilight_css'] = i[2]
+		aadict['hilight_start'] = i[3]
+		aadict['hilight_end'] = i[4]
+		hilight.append(aadict)
+	return hilight
+
 def add_new_buildlog(connection, build_dict, build_log_dict):
 	cursor = connection.cursor()
 	sqlQ1 = 'SELECT build_log_id FROM build_logs WHERE ebuild_id = %s'
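
get_hilight_info() above reshapes each fetched row into a dict keyed by column name. The same reshaping can be written with zip so the key list and the SELECT column order stay visibly in sync (a sketch, not the gobs code):

    HILIGHT_KEYS = ("hilight_search", "hilight_search_end", "hilight_css",
                    "hilight_start", "hilight_end")

    def rows_to_dicts(rows):
        # One dict per fetched row, keys matching the SELECT column order.
        return [dict(zip(HILIGHT_KEYS, row)) for row in rows]

    print(rows_to_dicts([(r"\* QA Notice:", None, "qa", 0, 3)]))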


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-04-24  0:11 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-04-24  0:11 UTC (permalink / raw
  To: gentoo-commits

commit:     4f99bc068a7233c8cd221ba23a3ea17b31270020
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 24 00:10:17 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Apr 24 00:10:17 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4f99bc06

clean up depclean

---
 gobs/pym/build_log.py    |   13 +-
 gobs/pym/depclean.py     |  608 +----
 gobs/pym/depgraph.py     | 7645 ----------------------------------------------
 gobs/pym/mysql_querys.py |   30 +-
 gobs/pym/package.py      |   20 +-
 gobs/pym/pgsql.py        |  633 ----
 gobs/pym/text.py         |    1 -
 gobs/pym/updatedb.py     |    6 +-
 8 files changed, 39 insertions(+), 8917 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index 8558cf3..c3fe244 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -88,7 +88,7 @@ def search_buildlog(conn, logfile_text):
 	for textline in logfile_text:
 		index = index + 1
 		for search_pattern in log_search_list:
-			if re.searchsearch_pattern(['hilight_search'], textline):
+			if re.search(search_pattern['hilight_search'], textline):
 				hilight_tmp = {}
 				hilight_tmp['startline'] = index - search_pattern['hilight_start']
 				hilight_tmp['hilight'] = search_pattern['hilight_css']
@@ -98,8 +98,8 @@ def search_buildlog(conn, logfile_text):
 					hilight_tmp['endline'] = None
 					i = index + 1
 					while hilight_tmp['endline'] == None:
-						if re.serchsearch_pattern(['hilight_search_end'], logfile_text[i -1]):
-							if re.serch(search_pattern['hilight_search_end'], logfile_text[i]):
+						if re.search(search_pattern['hilight_search_end'], logfile_text[i -1]):
+							if re.search(search_pattern['hilight_search_end'], logfile_text[i]):
 								i = i + 1
 							else:
 								hilight_tmp['endline'] = i
@@ -170,9 +170,6 @@ def get_buildlog_info(conn, settings, pkg, build_dict):
 				if v['hilight'] == "qa":
 					qa_error_list.append(logfile_text[i -1])
 				i = i +1
-			error_log_list.append(logfile_text[i -1])
-			if v['hilight'] == "qa":
-				qa_error_list(logfile_text[i -1])
 
 	# Run repoman check_repoman()
 	repoman_error_list = init_repoman.check_repoman(build_dict['cpv'], pkg.repo)
@@ -266,7 +263,7 @@ def add_buildlog_main(settings, pkg):
 		# os.chmod(emerge_info_logfilename, 0o664)
 		log_msg = "Package: %s:%s is logged." % (pkg.cpv, pkg.repo,)
 		add_gobs_logs(conn, log_msg, "info", config_id)
-		print(">>> Logging %s:%s" % (pkg.cpv, pkg.repo,))
+		print("\n>>> Logging %s:%s\n" % (pkg.cpv, pkg.repo,))
 	conn.close
 
 def log_fail_queru(conn, build_dict, settings):
@@ -326,7 +323,7 @@ def log_fail_queru(conn, build_dict, settings):
 				hostname, config = get_config(conn, config_id)
 				host_config = hostname +"/" + config
 				build_log_dict['logfilename'] = settings.get("PORTAGE_LOG_FILE").split(host_config)[1]
-				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o224)
+				os.chmod(settings.get("PORTAGE_LOG_FILE"), 0o664)
 			else:
 				build_log_dict['logfilename'] = ""
 			log_id = add_new_buildlog(conn, build_dict, build_log_dict)
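
The chmod fix above matters more than it looks: 0o224 (--w--w-r--) left the failed build log unwritable-only and unreadable by its owner, while 0o664 gives the usual rw-rw-r--. A quick check with the standard library (stat.filemode needs Python 3.3+; 0o100664 is 0o664 plus the regular-file type bits):

    import stat

    print(stat.filemode(0o100224))  # --w--w-r--  (the old, broken mode)
    print(stat.filemode(0o100664))  # -rw-rw-r--  (the fixed mode)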

diff --git a/gobs/pym/depclean.py b/gobs/pym/depclean.py
index ad75a48..697cbfd 100644
--- a/gobs/pym/depclean.py
+++ b/gobs/pym/depclean.py
@@ -1,27 +1,10 @@
 from __future__ import print_function
-import errno
-import logging
-import textwrap
 import portage
 from portage._sets.base import InternalPackageSet
 from _emerge.main import parse_opts
-from _emerge.create_depgraph_params import create_depgraph_params
-from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
-from _emerge.UnmergeDepPriority import UnmergeDepPriority
-from _emerge.SetArg import SetArg
-from _emerge.actions import load_emerge_config
-from _emerge.Package import Package
-from _emerge.unmerge import unmerge
-from portage.util import cmp_sort_key, writemsg, \
-	writemsg_level, writemsg_stdout
-from portage.util.digraph import digraph
-from portage.output import blue, bold, colorize, create_color_func, darkgreen, \
-red, yellow
-good = create_color_func("GOOD")
-bad = create_color_func("BAD")
-warn = create_color_func("WARN")
+from _emerge.actions import load_emerge_config, action_depclean
 
-def main_depclean():
+def do_depclean():
 	mysettings, mytrees, mtimedb = load_emerge_config()
 	myroot = mysettings["ROOT"]
 	root_config = mytrees[myroot]["root_config"]
@@ -59,592 +42,9 @@ def main_depclean():
 			tmpcmdline = []
 			tmpcmdline.append("--depclean")
 			myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
-			unmerge(root_config, myopts, "unmerge", cleanlist, mtimedb["ldpath"], ordered=ordered, scheduler=scheduler)
-			print("Number removed:       "+str(len(cleanlist)))
+			rval = action_depclean(mysettings, mytrees, mtimedb["ldpath"], myopts, myaction, myfiles, spinner, scheduler=None)
 			return True
 		else:
-			logging.info("conflicting packages: %s", conflict_package_list)
-			tmpcmdline = []
-			tmpcmdline.append("--depclean")
-			tmpcmdline.append("--exclude")
-			for conflict_package in conflict_package_list:
-				tmpcmdline.append(portage.versions.cpv_getkey(conflict_package))
-			myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
-			unmerge(root_config, myopts, "unmerge", cleanlist, mtimedb["ldpath"], ordered=ordered, scheduler=scheduler)
-			print("Number removed:       "+str(len(cleanlist)))
+			print("conflicting packages: %s" % (conflict_package_list,))
 			return True
 	return True
-
-def calc_depclean(settings, trees, ldpath_mtimes,
-	myopts, action, args_set, spinner):
-	allow_missing_deps = bool(args_set)
-
-	debug = '--debug' in myopts
-	xterm_titles = "notitles" not in settings.features
-	myroot = settings["ROOT"]
-	root_config = trees[myroot]["root_config"]
-	psets = root_config.setconfig.psets
-	deselect = myopts.get('--deselect') != 'n'
-	required_sets = {}
-	required_sets['world'] = psets['world']
-
-	# When removing packages, a temporary version of the world 'selected'
-	# set may be used which excludes packages that are intended to be
-	# eligible for removal.
-	selected_set = psets['selected']
-	required_sets['selected'] = selected_set
-	protected_set = InternalPackageSet()
-	protected_set_name = '____depclean_protected_set____'
-	required_sets[protected_set_name] = protected_set
-	system_set = psets["system"]
-
-	if not system_set or not selected_set:
-
-		if not system_set:
-			writemsg_level("!!! You have no system list.\n",
-				level=logging.ERROR, noiselevel=-1)
-
-		if not selected_set:
-			writemsg_level("!!! You have no world file.\n",
-					level=logging.WARNING, noiselevel=-1)
-
-		writemsg_level("!!! Proceeding is likely to " + \
-			"break your installation.\n",
-			level=logging.WARNING, noiselevel=-1)
-		if "--pretend" not in myopts:
-			countdown(int(settings["EMERGE_WARNING_DELAY"]), ">>> Depclean")
-
-	if action == "depclean":
-		print(" >>> depclean")
-
-	writemsg_level("\nCalculating dependencies  ")
-	resolver_params = create_depgraph_params(myopts, "remove")
-	resolver = depgraph(settings, trees, myopts, resolver_params, spinner)
-	resolver._load_vdb()
-	vardb = resolver._frozen_config.trees[myroot]["vartree"].dbapi
-	real_vardb = trees[myroot]["vartree"].dbapi
-
-	if action == "depclean":
-
-		if args_set:
-
-			if deselect:
-				# Start with an empty set.
-				selected_set = InternalPackageSet()
-				required_sets['selected'] = selected_set
-				# Pull in any sets nested within the selected set.
-				selected_set.update(psets['selected'].getNonAtoms())
-
-			# Pull in everything that's installed but not matched
-			# by an argument atom since we don't want to clean any
-			# package if something depends on it.
-			for pkg in vardb:
-				if spinner:
-					spinner.update()
-
-				try:
-					if args_set.findAtomForPackage(pkg) is None:
-						protected_set.add("=" + pkg.cpv)
-						continue
-				except portage.exception.InvalidDependString as e:
-					show_invalid_depstring_notice(pkg,
-						pkg.metadata["PROVIDE"], str(e))
-					del e
-					protected_set.add("=" + pkg.cpv)
-					continue
-
-	elif action == "prune":
-
-		if deselect:
-			# Start with an empty set.
-			selected_set = InternalPackageSet()
-			required_sets['selected'] = selected_set
-			# Pull in any sets nested within the selected set.
-			selected_set.update(psets['selected'].getNonAtoms())
-
-		# Pull in everything that's installed since we don't
-		# to prune a package if something depends on it.
-		protected_set.update(vardb.cp_all())
-
-		if not args_set:
-
-			# Try to prune everything that's slotted.
-			for cp in vardb.cp_all():
-				if len(vardb.cp_list(cp)) > 1:
-					args_set.add(cp)
-
-		# Remove atoms from world that match installed packages
-		# that are also matched by argument atoms, but do not remove
-		# them if they match the highest installed version.
-		for pkg in vardb:
-			spinner.update()
-			pkgs_for_cp = vardb.match_pkgs(pkg.cp)
-			if not pkgs_for_cp or pkg not in pkgs_for_cp:
-				raise AssertionError("package expected in matches: " + \
-					"cp = %s, cpv = %s matches = %s" % \
-					(pkg.cp, pkg.cpv, [str(x) for x in pkgs_for_cp]))
-
-			highest_version = pkgs_for_cp[-1]
-			if pkg == highest_version:
-				# pkg is the highest version
-				protected_set.add("=" + pkg.cpv)
-				continue
-
-			if len(pkgs_for_cp) <= 1:
-				raise AssertionError("more packages expected: " + \
-					"cp = %s, cpv = %s matches = %s" % \
-					(pkg.cp, pkg.cpv, [str(x) for x in pkgs_for_cp]))
-
-			try:
-				if args_set.findAtomForPackage(pkg) is None:
-					protected_set.add("=" + pkg.cpv)
-					continue
-			except portage.exception.InvalidDependString as e:
-				show_invalid_depstring_notice(pkg,
-					pkg.metadata["PROVIDE"], str(e))
-				del e
-				protected_set.add("=" + pkg.cpv)
-				continue
-
-	if resolver._frozen_config.excluded_pkgs:
-		excluded_set = resolver._frozen_config.excluded_pkgs
-		required_sets['__excluded__'] = InternalPackageSet()
-
-		for pkg in vardb:
-			if spinner:
-				spinner.update()
-
-			try:
-				if excluded_set.findAtomForPackage(pkg):
-					required_sets['__excluded__'].add("=" + pkg.cpv)
-			except portage.exception.InvalidDependString as e:
-				show_invalid_depstring_notice(pkg,
-					pkg.metadata["PROVIDE"], str(e))
-				del e
-				required_sets['__excluded__'].add("=" + pkg.cpv)
-
-	success = resolver._complete_graph(required_sets={myroot:required_sets})
-	writemsg_level("\b\b... done!\n")
-
-	resolver.display_problems()
-
-	if not success:
-		return True, [], False, 0, []
-
-	def unresolved_deps():
-
-		unresolvable = set()
-		for dep in resolver._dynamic_config._initially_unsatisfied_deps:
-			if isinstance(dep.parent, Package) and \
-				(dep.priority > UnmergeDepPriority.SOFT):
-				unresolvable.add((dep.atom, dep.parent.cpv))
-
-		if not unresolvable:
-			return None
-
-		if unresolvable and not allow_missing_deps:
-
-			prefix = bad(" * ")
-			msg = []
-			msg.append("Dependencies could not be completely resolved due to")
-			msg.append("the following required packages not being installed:")
-			msg.append("")
-			for atom, parent in unresolvable:
-				msg.append("  %s pulled in by:" % (atom,))
-				msg.append("    %s" % (parent,))
-				msg.append("")
-			msg.extend(textwrap.wrap(
-				"Have you forgotten to do a complete update prior " + \
-				"to depclean? The most comprehensive command for this " + \
-				"purpose is as follows:", 65
-			))
-			msg.append("")
-			msg.append("  " + \
-				good("emerge --update --newuse --deep --with-bdeps=y @world"))
-			msg.append("")
-			msg.extend(textwrap.wrap(
-				"Note that the --with-bdeps=y option is not required in " + \
-				"many situations. Refer to the emerge manual page " + \
-				"(run `man emerge`) for more information about " + \
-				"--with-bdeps.", 65
-			))
-			msg.append("")
-			msg.extend(textwrap.wrap(
-				"Also, note that it may be necessary to manually uninstall " + \
-				"packages that no longer exist in the portage tree, since " + \
-				"it may not be possible to satisfy their dependencies.", 65
-			))
-			if action == "prune":
-				msg.append("")
-				msg.append("If you would like to ignore " + \
-					"dependencies then use %s." % good("--nodeps"))
-			writemsg_level("".join("%s%s\n" % (prefix, line) for line in msg),
-				level=logging.ERROR, noiselevel=-1)
-			return unresolvable
-		return None
-
-	unresolvable = unresolved_deps()
-	if not unresolvable is None:
-		return False, [], False, 0, unresolvable
-
-	graph = resolver._dynamic_config.digraph.copy()
-	required_pkgs_total = 0
-	for node in graph:
-		if isinstance(node, Package):
-			required_pkgs_total += 1
-
-	def show_parents(child_node):
-		parent_nodes = graph.parent_nodes(child_node)
-		if not parent_nodes:
-			# With --prune, the highest version can be pulled in without any
-			# real parent since all installed packages are pulled in.  In that
-			# case there's nothing to show here.
-			return
-		parent_strs = []
-		for node in parent_nodes:
-			parent_strs.append(str(getattr(node, "cpv", node)))
-		parent_strs.sort()
-		msg = []
-		msg.append("  %s pulled in by:\n" % (child_node.cpv,))
-		for parent_str in parent_strs:
-			msg.append("    %s\n" % (parent_str,))
-		msg.append("\n")
-		portage.writemsg_stdout("".join(msg), noiselevel=-1)
-
-	def cmp_pkg_cpv(pkg1, pkg2):
-		"""Sort Package instances by cpv."""
-		if pkg1.cpv > pkg2.cpv:
-			return 1
-		elif pkg1.cpv == pkg2.cpv:
-			return 0
-		else:
-			return -1
-
-	def create_cleanlist():
-
-		# Never display the special internal protected_set.
-		for node in graph:
-			if isinstance(node, SetArg) and node.name == protected_set_name:
-				graph.remove(node)
-				break
-
-		pkgs_to_remove = []
-
-		if action == "depclean":
-			if args_set:
-
-				for pkg in sorted(vardb, key=cmp_sort_key(cmp_pkg_cpv)):
-					arg_atom = None
-					try:
-						arg_atom = args_set.findAtomForPackage(pkg)
-					except portage.exception.InvalidDependString:
-						# this error has already been displayed by now
-						continue
-
-					if arg_atom:
-						if pkg not in graph:
-							pkgs_to_remove.append(pkg)
-						elif "--verbose" in myopts:
-							show_parents(pkg)
-
-			else:
-				for pkg in sorted(vardb, key=cmp_sort_key(cmp_pkg_cpv)):
-					if pkg not in graph:
-						pkgs_to_remove.append(pkg)
-					elif "--verbose" in myopts:
-						show_parents(pkg)
-
-		elif action == "prune":
-
-			for atom in args_set:
-				for pkg in vardb.match_pkgs(atom):
-					if pkg not in graph:
-						pkgs_to_remove.append(pkg)
-					elif "--verbose" in myopts:
-						show_parents(pkg)
-
-		return pkgs_to_remove
-
-	cleanlist = create_cleanlist()
-	clean_set = set(cleanlist)
-
-	if cleanlist and \
-		real_vardb._linkmap is not None and \
-		myopts.get("--depclean-lib-check") != "n" and \
-		"preserve-libs" not in settings.features:
-
-		# Check if any of these packages are the sole providers of libraries
-		# with consumers that have not been selected for removal. If so, these
-		# packages and any dependencies need to be added to the graph.
-		linkmap = real_vardb._linkmap
-		consumer_cache = {}
-		provider_cache = {}
-		consumer_map = {}
-
-		writemsg_level(">>> Checking for lib consumers...\n")
-
-		for pkg in cleanlist:
-			pkg_dblink = real_vardb._dblink(pkg.cpv)
-			consumers = {}
-
-			for lib in pkg_dblink.getcontents():
-				lib = lib[len(myroot):]
-				lib_key = linkmap._obj_key(lib)
-				lib_consumers = consumer_cache.get(lib_key)
-				if lib_consumers is None:
-					try:
-						lib_consumers = linkmap.findConsumers(lib_key)
-					except KeyError:
-						continue
-					consumer_cache[lib_key] = lib_consumers
-				if lib_consumers:
-					consumers[lib_key] = lib_consumers
-
-			if not consumers:
-				continue
-
-			for lib, lib_consumers in list(consumers.items()):
-				for consumer_file in list(lib_consumers):
-					if pkg_dblink.isowner(consumer_file):
-						lib_consumers.remove(consumer_file)
-				if not lib_consumers:
-					del consumers[lib]
-
-			if not consumers:
-				continue
-
-			for lib, lib_consumers in consumers.items():
-
-				soname = linkmap.getSoname(lib)
-
-				consumer_providers = []
-				for lib_consumer in lib_consumers:
-					providers = provider_cache.get(lib)
-					if providers is None:
-						providers = linkmap.findProviders(lib_consumer)
-						provider_cache[lib_consumer] = providers
-					if soname not in providers:
-						# Why does this happen?
-						continue
-					consumer_providers.append(
-						(lib_consumer, providers[soname]))
-
-				consumers[lib] = consumer_providers
-
-			consumer_map[pkg] = consumers
-
-		if consumer_map:
-
-			search_files = set()
-			for consumers in consumer_map.values():
-				for lib, consumer_providers in consumers.items():
-					for lib_consumer, providers in consumer_providers:
-						search_files.add(lib_consumer)
-						search_files.update(providers)
-
-			writemsg_level(">>> Assigning files to packages...\n")
-			file_owners = real_vardb._owners.getFileOwnerMap(search_files)
-
-			for pkg, consumers in list(consumer_map.items()):
-				for lib, consumer_providers in list(consumers.items()):
-					lib_consumers = set()
-
-					for lib_consumer, providers in consumer_providers:
-						owner_set = file_owners.get(lib_consumer)
-						provider_dblinks = set()
-						provider_pkgs = set()
-
-						if len(providers) > 1:
-							for provider in providers:
-								provider_set = file_owners.get(provider)
-								if provider_set is not None:
-									provider_dblinks.update(provider_set)
-
-						if len(provider_dblinks) > 1:
-							for provider_dblink in provider_dblinks:
-								provider_pkg = resolver._pkg(
-									provider_dblink.mycpv, "installed",
-									root_config, installed=True)
-								if provider_pkg not in clean_set:
-									provider_pkgs.add(provider_pkg)
-
-						if provider_pkgs:
-							continue
-
-						if owner_set is not None:
-							lib_consumers.update(owner_set)
-
-					for consumer_dblink in list(lib_consumers):
-						if resolver._pkg(consumer_dblink.mycpv, "installed",
-							root_config, installed=True) in clean_set:
-							lib_consumers.remove(consumer_dblink)
-							continue
-
-					if lib_consumers:
-						consumers[lib] = lib_consumers
-					else:
-						del consumers[lib]
-				if not consumers:
-					del consumer_map[pkg]
-
-		if consumer_map:
-			# TODO: Implement a package set for rebuilding consumer packages.
-
-			msg = "In order to avoid breakage of link level " + \
-				"dependencies, one or more packages will not be removed. " + \
-				"This can be solved by rebuilding " + \
-				"the packages that pulled them in."
-
-			prefix = bad(" * ")
-			from textwrap import wrap
-			writemsg_level("".join(prefix + "%s\n" % line for \
-				line in wrap(msg, 70)), level=logging.WARNING, noiselevel=-1)
-
-			msg = []
-			for pkg in sorted(consumer_map, key=cmp_sort_key(cmp_pkg_cpv)):
-				consumers = consumer_map[pkg]
-				consumer_libs = {}
-				for lib, lib_consumers in consumers.items():
-					for consumer in lib_consumers:
-						consumer_libs.setdefault(
-							consumer.mycpv, set()).add(linkmap.getSoname(lib))
-				unique_consumers = set(chain(*consumers.values()))
-				unique_consumers = sorted(consumer.mycpv \
-					for consumer in unique_consumers)
-				msg.append("")
-				msg.append("  %s pulled in by:" % (pkg.cpv,))
-				for consumer in unique_consumers:
-					libs = consumer_libs[consumer]
-					msg.append("    %s needs %s" % \
-						(consumer, ', '.join(sorted(libs))))
-			msg.append("")
-			writemsg_level("".join(prefix + "%s\n" % line for line in msg),
-				level=logging.WARNING, noiselevel=-1)
-
-			# Add lib providers to the graph as children of lib consumers,
-			# and also add any dependencies pulled in by the provider.
-			writemsg_level(">>> Adding lib providers to graph...\n")
-
-			for pkg, consumers in consumer_map.items():
-				for consumer_dblink in set(chain(*consumers.values())):
-					consumer_pkg = resolver._pkg(consumer_dblink.mycpv,
-						"installed", root_config, installed=True)
-					if not resolver._add_pkg(pkg,
-						Dependency(parent=consumer_pkg,
-						priority=UnmergeDepPriority(runtime=True),
-						root=pkg.root)):
-						resolver.display_problems()
-						return True, [], False, 0, []
-
-			writemsg_level("\nCalculating dependencies  ")
-			success = resolver._complete_graph(
-				required_sets={myroot:required_sets})
-			writemsg_level("\b\b... done!\n")
-			resolver.display_problems()
-			if not success:
-				return True, [], False, 0, []
-			unresolvable = unresolved_deps()
-			if not unresolvable is None:
-				return False, [], False, 0, unresolvable
-
-			graph = resolver._dynamic_config.digraph.copy()
-			required_pkgs_total = 0
-			for node in graph:
-				if isinstance(node, Package):
-					required_pkgs_total += 1
-			cleanlist = create_cleanlist()
-			if not cleanlist:
-				return 0, [], False, required_pkgs_total, unresolvable
-			clean_set = set(cleanlist)
-
-	if clean_set:
-		writemsg_level(">>> Calculating removal order...\n")
-		# Use a topological sort to create an unmerge order such that
-		# each package is unmerged before it's dependencies. This is
-		# necessary to avoid breaking things that may need to run
-		# during pkg_prerm or pkg_postrm phases.
-
-		# Create a new graph to account for dependencies between the
-		# packages being unmerged.
-		graph = digraph()
-		del cleanlist[:]
-
-		dep_keys = ["DEPEND", "RDEPEND", "PDEPEND"]
-		runtime = UnmergeDepPriority(runtime=True)
-		runtime_post = UnmergeDepPriority(runtime_post=True)
-		buildtime = UnmergeDepPriority(buildtime=True)
-		priority_map = {
-			"RDEPEND": runtime,
-			"PDEPEND": runtime_post,
-			"DEPEND": buildtime,
-		}
-
-		for node in clean_set:
-			graph.add(node, None)
-			mydeps = []
-			for dep_type in dep_keys:
-				depstr = node.metadata[dep_type]
-				if not depstr:
-					continue
-				priority = priority_map[dep_type]
-
-				try:
-					atoms = resolver._select_atoms(myroot, depstr,
-						myuse=node.use.enabled, parent=node,
-						priority=priority)[node]
-				except portage.exception.InvalidDependString:
-					# Ignore invalid deps of packages that will
-					# be uninstalled anyway.
-					continue
-
-				for atom in atoms:
-					if not isinstance(atom, portage.dep.Atom):
-						# Ignore invalid atoms returned from dep_check().
-						continue
-					if atom.blocker:
-						continue
-					matches = vardb.match_pkgs(atom)
-					if not matches:
-						continue
-					for child_node in matches:
-						if child_node in clean_set:
-							graph.add(child_node, node, priority=priority)
-
-		ordered = True
-		if len(graph.order) == len(graph.root_nodes()):
-			# If there are no dependencies between packages
-			# let unmerge() group them by cat/pn.
-			ordered = False
-			cleanlist = [pkg.cpv for pkg in graph.order]
-		else:
-			# Order nodes from lowest to highest overall reference count for
-			# optimal root node selection (this can help minimize issues
-			# with unaccounted implicit dependencies).
-			node_refcounts = {}
-			for node in graph.order:
-				node_refcounts[node] = len(graph.parent_nodes(node))
-			def cmp_reference_count(node1, node2):
-				return node_refcounts[node1] - node_refcounts[node2]
-			graph.order.sort(key=cmp_sort_key(cmp_reference_count))
-
-			ignore_priority_range = [None]
-			ignore_priority_range.extend(
-				range(UnmergeDepPriority.MIN, UnmergeDepPriority.MAX + 1))
-			while graph:
-				for ignore_priority in ignore_priority_range:
-					nodes = graph.root_nodes(ignore_priority=ignore_priority)
-					if nodes:
-						break
-				if not nodes:
-					raise AssertionError("no root nodes")
-				if ignore_priority is not None:
-					# Some deps have been dropped due to circular dependencies,
-					# so only pop one node in order to minimize the number that
-					# are dropped.
-					del nodes[1:]
-				for node in nodes:
-					graph.remove(node)
-					cleanlist.append(node.cpv)
-
-		return True, cleanlist, ordered, required_pkgs_total, []
-	return True, [], False, required_pkgs_total, []

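With the private calc_depclean() copy gone, do_depclean() now builds an emerge-style command line and delegates the whole calculation and unmerge to portage's action_depclean(). A condensed sketch of that flow, using only the calls visible in the diff; the --pretend flag and the return-code check are assumptions added here so the sketch reports what would be removed instead of unmerging anything:

from _emerge.main import parse_opts
from _emerge.actions import load_emerge_config, action_depclean

def depclean_pretend_sketch():
	# Same settings/trees/mtimedb triple that emerge itself loads.
	mysettings, mytrees, mtimedb = load_emerge_config()
	# --pretend keeps this sketch read-only; the gobs code omits it.
	tmpcmdline = ["--depclean", "--pretend"]
	myaction, myopts, myfiles = parse_opts(tmpcmdline, silent=False)
	rval = action_depclean(mysettings, mytrees, mtimedb["ldpath"],
		myopts, myaction, myfiles, None, scheduler=None)
	# action_depclean() returns an exit code; 0 means success.
	return rval == 0
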
diff --git a/gobs/pym/depgraph.py b/gobs/pym/depgraph.py
deleted file mode 100644
index 0a6afc8..0000000
--- a/gobs/pym/depgraph.py
+++ /dev/null
@@ -1,7645 +0,0 @@
-# Copyright 1999-2012 Gentoo Foundation
-# Distributed under the terms of the GNU General Public License v2
-
-from __future__ import print_function
-
-import difflib
-import errno
-import io
-import logging
-import stat
-import sys
-import textwrap
-from collections import deque
-from itertools import chain
-
-import portage
-from portage import os, OrderedDict
-from portage import _unicode_decode, _unicode_encode, _encodings
-from portage.const import PORTAGE_PACKAGE_ATOM, USER_CONFIG_PATH
-from portage.dbapi import dbapi
-from portage.dbapi.dep_expand import dep_expand
-from portage.dep import Atom, best_match_to_list, extract_affecting_use, \
-	check_required_use, human_readable_required_use, match_from_list, \
-	_repo_separator
-from portage.dep._slot_operator import ignore_built_slot_operator_deps
-from portage.eapi import eapi_has_strong_blocks, eapi_has_required_use, \
-	_get_eapi_attrs
-from portage.exception import (InvalidAtom, InvalidData, InvalidDependString,
-	PackageNotFound, PortageException)
-from portage.output import colorize, create_color_func, \
-	darkgreen, green
-bad = create_color_func("BAD")
-from portage.package.ebuild.config import _get_feature_flags
-from portage.package.ebuild.getmaskingstatus import \
-	_getmaskingstatus, _MaskReason
-from portage._sets import SETPREFIX
-from portage._sets.base import InternalPackageSet
-from portage.util import ConfigProtect, shlex_split, new_protect_filename
-from portage.util import cmp_sort_key, writemsg, writemsg_stdout
-from portage.util import ensure_dirs
-from portage.util import writemsg_level, write_atomic
-from portage.util.digraph import digraph
-from portage.util.listdir import _ignorecvs_dirs
-from portage.versions import catpkgsplit
-
-from _emerge.AtomArg import AtomArg
-from _emerge.Blocker import Blocker
-from _emerge.BlockerCache import BlockerCache
-from _emerge.BlockerDepPriority import BlockerDepPriority
-from _emerge.countdown import countdown
-from _emerge.create_world_atom import create_world_atom
-from _emerge.Dependency import Dependency
-from _emerge.DependencyArg import DependencyArg
-from _emerge.DepPriority import DepPriority
-from _emerge.DepPriorityNormalRange import DepPriorityNormalRange
-from _emerge.DepPrioritySatisfiedRange import DepPrioritySatisfiedRange
-from _emerge.FakeVartree import FakeVartree
-from _emerge._find_deep_system_runtime_deps import _find_deep_system_runtime_deps
-from _emerge.is_valid_package_atom import insert_category_into_atom, \
-	is_valid_package_atom
-from _emerge.Package import Package
-from _emerge.PackageArg import PackageArg
-from _emerge.PackageVirtualDbapi import PackageVirtualDbapi
-from _emerge.RootConfig import RootConfig
-from _emerge.search import search
-from _emerge.SetArg import SetArg
-from _emerge.show_invalid_depstring_notice import show_invalid_depstring_notice
-from _emerge.UnmergeDepPriority import UnmergeDepPriority
-from _emerge.UseFlagDisplay import pkg_use_display
-from _emerge.userquery import userquery
-
-from _emerge.resolver.backtracking import Backtracker, BacktrackParameter
-from _emerge.resolver.slot_collision import slot_conflict_handler
-from _emerge.resolver.circular_dependency import circular_dependency_handler
-from _emerge.resolver.output import Display
-
-if sys.hexversion >= 0x3000000:
-	basestring = str
-	long = int
-	_unicode = str
-else:
-	_unicode = unicode
-
-class _scheduler_graph_config(object):
-	def __init__(self, trees, pkg_cache, graph, mergelist):
-		self.trees = trees
-		self.pkg_cache = pkg_cache
-		self.graph = graph
-		self.mergelist = mergelist
-
-def _wildcard_set(atoms):
-	pkgs = InternalPackageSet(allow_wildcard=True)
-	for x in atoms:
-		try:
-			x = Atom(x, allow_wildcard=True, allow_repo=False)
-		except portage.exception.InvalidAtom:
-			x = Atom("*/" + x, allow_wildcard=True, allow_repo=False)
-		pkgs.add(x)
-	return pkgs
-
-class _frozen_depgraph_config(object):
-
-	def __init__(self, settings, trees, myopts, spinner):
-		self.settings = settings
-		self.target_root = settings["EROOT"]
-		self.myopts = myopts
-		self.edebug = 0
-		if settings.get("PORTAGE_DEBUG", "") == "1":
-			self.edebug = 1
-		self.spinner = spinner
-		self._running_root = trees[trees._running_eroot]["root_config"]
-		self.pkgsettings = {}
-		self.trees = {}
-		self._trees_orig = trees
-		self.roots = {}
-		# All Package instances
-		self._pkg_cache = {}
-		self._highest_license_masked = {}
-		dynamic_deps = myopts.get("--dynamic-deps", "y") != "n"
-		ignore_built_slot_operator_deps = myopts.get(
-			"--ignore-built-slot-operator-deps", "n") == "y"
-		for myroot in trees:
-			self.trees[myroot] = {}
-			# Create a RootConfig instance that references
-			# the FakeVartree instead of the real one.
-			self.roots[myroot] = RootConfig(
-				trees[myroot]["vartree"].settings,
-				self.trees[myroot],
-				trees[myroot]["root_config"].setconfig)
-			for tree in ("porttree", "bintree"):
-				self.trees[myroot][tree] = trees[myroot][tree]
-			self.trees[myroot]["vartree"] = \
-				FakeVartree(trees[myroot]["root_config"],
-					pkg_cache=self._pkg_cache,
-					pkg_root_config=self.roots[myroot],
-					dynamic_deps=dynamic_deps,
-					ignore_built_slot_operator_deps=ignore_built_slot_operator_deps)
-			self.pkgsettings[myroot] = portage.config(
-				clone=self.trees[myroot]["vartree"].settings)
-
-		self._required_set_names = set(["world"])
-
-		atoms = ' '.join(myopts.get("--exclude", [])).split()
-		self.excluded_pkgs = _wildcard_set(atoms)
-		atoms = ' '.join(myopts.get("--reinstall-atoms", [])).split()
-		self.reinstall_atoms = _wildcard_set(atoms)
-		atoms = ' '.join(myopts.get("--usepkg-exclude", [])).split()
-		self.usepkg_exclude = _wildcard_set(atoms)
-		atoms = ' '.join(myopts.get("--useoldpkg-atoms", [])).split()
-		self.useoldpkg_atoms = _wildcard_set(atoms)
-		atoms = ' '.join(myopts.get("--rebuild-exclude", [])).split()
-		self.rebuild_exclude = _wildcard_set(atoms)
-		atoms = ' '.join(myopts.get("--rebuild-ignore", [])).split()
-		self.rebuild_ignore = _wildcard_set(atoms)
-
-		self.rebuild_if_new_rev = "--rebuild-if-new-rev" in myopts
-		self.rebuild_if_new_ver = "--rebuild-if-new-ver" in myopts
-		self.rebuild_if_unbuilt = "--rebuild-if-unbuilt" in myopts
-
-class _depgraph_sets(object):
-	def __init__(self):
-		# contains all sets added to the graph
-		self.sets = {}
-		# contains non-set atoms given as arguments
-		self.sets['__non_set_args__'] = InternalPackageSet(allow_repo=True)
-		# contains all atoms from all sets added to the graph, including
-		# atoms given as arguments
-		self.atoms = InternalPackageSet(allow_repo=True)
-		self.atom_arg_map = {}
-
-class _rebuild_config(object):
-	def __init__(self, frozen_config, backtrack_parameters):
-		self._graph = digraph()
-		self._frozen_config = frozen_config
-		self.rebuild_list = backtrack_parameters.rebuild_list.copy()
-		self.orig_rebuild_list = self.rebuild_list.copy()
-		self.reinstall_list = backtrack_parameters.reinstall_list.copy()
-		self.rebuild_if_new_rev = frozen_config.rebuild_if_new_rev
-		self.rebuild_if_new_ver = frozen_config.rebuild_if_new_ver
-		self.rebuild_if_unbuilt = frozen_config.rebuild_if_unbuilt
-		self.rebuild = (self.rebuild_if_new_rev or self.rebuild_if_new_ver or
-			self.rebuild_if_unbuilt)
-
-	def add(self, dep_pkg, dep):
-		parent = dep.collapsed_parent
-		priority = dep.collapsed_priority
-		rebuild_exclude = self._frozen_config.rebuild_exclude
-		rebuild_ignore = self._frozen_config.rebuild_ignore
-		if (self.rebuild and isinstance(parent, Package) and
-			parent.built and priority.buildtime and
-			isinstance(dep_pkg, Package) and
-			not rebuild_exclude.findAtomForPackage(parent) and
-			not rebuild_ignore.findAtomForPackage(dep_pkg)):
-			self._graph.add(dep_pkg, parent, priority)
-
-	def _needs_rebuild(self, dep_pkg):
-		"""Check whether packages that depend on dep_pkg need to be rebuilt."""
-		dep_root_slot = (dep_pkg.root, dep_pkg.slot_atom)
-		if dep_pkg.built or dep_root_slot in self.orig_rebuild_list:
-			return False
-
-		if self.rebuild_if_unbuilt:
-			# dep_pkg is being installed from source, so binary
-			# packages for parents are invalid. Force rebuild
-			return True
-
-		trees = self._frozen_config.trees
-		vardb = trees[dep_pkg.root]["vartree"].dbapi
-		if self.rebuild_if_new_rev:
-			# Parent packages are valid if a package with the same
-			# cpv is already installed.
-			return dep_pkg.cpv not in vardb.match(dep_pkg.slot_atom)
-
-		# Otherwise, parent packages are valid if a package with the same
-		# version (excluding revision) is already installed.
-		assert self.rebuild_if_new_ver
-		cpv_norev = catpkgsplit(dep_pkg.cpv)[:-1]
-		for inst_cpv in vardb.match(dep_pkg.slot_atom):
-			inst_cpv_norev = catpkgsplit(inst_cpv)[:-1]
-			if inst_cpv_norev == cpv_norev:
-				return False
-
-		return True
-
-	def _trigger_rebuild(self, parent, build_deps):
-		root_slot = (parent.root, parent.slot_atom)
-		if root_slot in self.rebuild_list:
-			return False
-		trees = self._frozen_config.trees
-		reinstall = False
-		for slot_atom, dep_pkg in build_deps.items():
-			dep_root_slot = (dep_pkg.root, slot_atom)
-			if self._needs_rebuild(dep_pkg):
-				self.rebuild_list.add(root_slot)
-				return True
-			elif ("--usepkg" in self._frozen_config.myopts and
-				(dep_root_slot in self.reinstall_list or
-				dep_root_slot in self.rebuild_list or
-				not dep_pkg.installed)):
-
-				# A direct rebuild dependency is being installed. We
-				# should update the parent as well to the latest binary,
-				# if that binary is valid.
-				#
-				# To validate the binary, we check whether all of the
-				# rebuild dependencies are present on the same binhost.
-				#
-				# 1) If parent is present on the binhost, but one of its
-				#    rebuild dependencies is not, then the parent should
-				#    be rebuilt from source.
-				# 2) Otherwise, the parent binary is assumed to be valid,
-				#    because all of its rebuild dependencies are
-				#    consistent.
-				bintree = trees[parent.root]["bintree"]
-				uri = bintree.get_pkgindex_uri(parent.cpv)
-				dep_uri = bintree.get_pkgindex_uri(dep_pkg.cpv)
-				bindb = bintree.dbapi
-				if self.rebuild_if_new_ver and uri and uri != dep_uri:
-					cpv_norev = catpkgsplit(dep_pkg.cpv)[:-1]
-					for cpv in bindb.match(dep_pkg.slot_atom):
-						if cpv_norev == catpkgsplit(cpv)[:-1]:
-							dep_uri = bintree.get_pkgindex_uri(cpv)
-							if uri == dep_uri:
-								break
-				if uri and uri != dep_uri:
-					# 1) Remote binary package is invalid because it was
-					#    built without dep_pkg. Force rebuild.
-					self.rebuild_list.add(root_slot)
-					return True
-				elif (parent.installed and
-					root_slot not in self.reinstall_list):
-					inst_build_time = parent.metadata.get("BUILD_TIME")
-					try:
-						bin_build_time, = bindb.aux_get(parent.cpv,
-							["BUILD_TIME"])
-					except KeyError:
-						continue
-					if bin_build_time != inst_build_time:
-						# 2) Remote binary package is valid, and local package
-						#    is not up to date. Force reinstall.
-						reinstall = True
-		if reinstall:
-			self.reinstall_list.add(root_slot)
-		return reinstall
-
-	def trigger_rebuilds(self):
-		"""
-		Trigger rebuilds where necessary. If pkgA has been updated, and pkgB
-		depends on pkgA at both build-time and run-time, pkgB needs to be
-		rebuilt.
-		"""
-		need_restart = False
-		graph = self._graph
-		build_deps = {}
-
-		leaf_nodes = deque(graph.leaf_nodes())
-
-		# Trigger rebuilds bottom-up (starting with the leaves) so that parents
-		# will always know which children are being rebuilt.
-		while graph:
-			if not leaf_nodes:
-				# We'll have to drop an edge. This should be quite rare.
-				leaf_nodes.append(graph.order[-1])
-
-			node = leaf_nodes.popleft()
-			if node not in graph:
-				# This can be triggered by circular dependencies.
-				continue
-			slot_atom = node.slot_atom
-
-			# Remove our leaf node from the graph, keeping track of deps.
-			parents = graph.parent_nodes(node)
-			graph.remove(node)
-			node_build_deps = build_deps.get(node, {})
-			for parent in parents:
-				if parent == node:
-					# Ignore a direct cycle.
-					continue
-				parent_bdeps = build_deps.setdefault(parent, {})
-				parent_bdeps[slot_atom] = node
-				if not graph.child_nodes(parent):
-					leaf_nodes.append(parent)
-
-			# Trigger rebuilds for our leaf node. Because all of our children
-			# have been processed, the build_deps will be completely filled in,
-			# and self.rebuild_list / self.reinstall_list will tell us whether
-			# any of our children need to be rebuilt or reinstalled.
-			if self._trigger_rebuild(node, node_build_deps):
-				need_restart = True
-
-		return need_restart
-
-
-class _dynamic_depgraph_config(object):
-
-	def __init__(self, depgraph, myparams, allow_backtracking, backtrack_parameters):
-		self.myparams = myparams.copy()
-		self._vdb_loaded = False
-		self._allow_backtracking = allow_backtracking
-		# Maps slot atom to package for each Package added to the graph.
-		self._slot_pkg_map = {}
-		# Maps nodes to the reasons they were selected for reinstallation.
-		self._reinstall_nodes = {}
-		self.mydbapi = {}
-		# Contains a filtered view of preferred packages that are selected
-		# from available repositories.
-		self._filtered_trees = {}
-		# Contains installed packages and new packages that have been added
-		# to the graph.
-		self._graph_trees = {}
-		# Caches visible packages returned from _select_package, for use in
-		# depgraph._iter_atoms_for_pkg() SLOT logic.
-		self._visible_pkgs = {}
-		#contains the args created by select_files
-		self._initial_arg_list = []
-		self.digraph = portage.digraph()
-		# manages sets added to the graph
-		self.sets = {}
-		# contains all nodes pulled in by self.sets
-		self._set_nodes = set()
-		# Contains only Blocker -> Uninstall edges
-		self._blocker_uninstalls = digraph()
-		# Contains only Package -> Blocker edges
-		self._blocker_parents = digraph()
-		# Contains only irrelevant Package -> Blocker edges
-		self._irrelevant_blockers = digraph()
-		# Contains only unsolvable Package -> Blocker edges
-		self._unsolvable_blockers = digraph()
-		# Contains all Blocker -> Blocked Package edges
-		self._blocked_pkgs = digraph()
-		# Contains world packages that have been protected from
-		# uninstallation but may not have been added to the graph
-		# if the graph is not complete yet.
-		self._blocked_world_pkgs = {}
-		# Contains packages whose dependencies have been traversed.
-		# This use used to check if we have accounted for blockers
-		# relevant to a package.
-		self._traversed_pkg_deps = set()
-		# This should be ordered such that the backtracker will
-		# attempt to solve conflicts which occurred earlier first,
-		# since an earlier conflict can be the cause of a conflict
-		# which occurs later.
-		self._slot_collision_info = OrderedDict()
-		# Slot collision nodes are not allowed to block other packages since
-		# blocker validation is only able to account for one package per slot.
-		self._slot_collision_nodes = set()
-		self._parent_atoms = {}
-		self._slot_conflict_handler = None
-		self._circular_dependency_handler = None
-		self._serialized_tasks_cache = None
-		self._scheduler_graph = None
-		self._displayed_list = None
-		self._pprovided_args = []
-		self._missing_args = []
-		self._masked_installed = set()
-		self._masked_license_updates = set()
-		self._unsatisfied_deps_for_display = []
-		self._unsatisfied_blockers_for_display = None
-		self._circular_deps_for_display = None
-		self._dep_stack = []
-		self._dep_disjunctive_stack = []
-		self._unsatisfied_deps = []
-		self._initially_unsatisfied_deps = []
-		self._ignored_deps = []
-		self._highest_pkg_cache = {}
-
-		# Binary packages that have been rejected because their USE
-		# didn't match the user's config. It maps packages to a set
-		# of flags causing the rejection.
-		self.ignored_binaries = {}
-
-		self._needed_unstable_keywords = backtrack_parameters.needed_unstable_keywords
-		self._needed_p_mask_changes = backtrack_parameters.needed_p_mask_changes
-		self._needed_license_changes = backtrack_parameters.needed_license_changes
-		self._needed_use_config_changes = backtrack_parameters.needed_use_config_changes
-		self._needed_required_use_config_changes = backtrack_parameters.needed_required_use_config_changes
-		self._runtime_pkg_mask = backtrack_parameters.runtime_pkg_mask
-		self._slot_operator_replace_installed = backtrack_parameters.slot_operator_replace_installed
-		self._need_restart = False
-		# For conditions that always require user intervention, such as
-		# unsatisfied REQUIRED_USE (currently has no autounmask support).
-		self._skip_restart = False
-		self._backtrack_infos = {}
-
-		self._autounmask = depgraph._frozen_config.myopts.get('--autounmask') != 'n'
-		self._success_without_autounmask = False
-		self._traverse_ignored_deps = False
-		self._complete_mode = False
-		self._slot_operator_deps = {}
-
-		for myroot in depgraph._frozen_config.trees:
-			self.sets[myroot] = _depgraph_sets()
-			self._slot_pkg_map[myroot] = {}
-			vardb = depgraph._frozen_config.trees[myroot]["vartree"].dbapi
-			# This dbapi instance will model the state that the vdb will
-			# have after new packages have been installed.
-			fakedb = PackageVirtualDbapi(vardb.settings)
-
-			self.mydbapi[myroot] = fakedb
-			def graph_tree():
-				pass
-			graph_tree.dbapi = fakedb
-			self._graph_trees[myroot] = {}
-			self._filtered_trees[myroot] = {}
-			# Substitute the graph tree for the vartree in dep_check() since we
-			# want atom selections to be consistent with package selections
-			# have already been made.
-			self._graph_trees[myroot]["porttree"]   = graph_tree
-			self._graph_trees[myroot]["vartree"]    = graph_tree
-			self._graph_trees[myroot]["graph_db"]   = graph_tree.dbapi
-			self._graph_trees[myroot]["graph"]      = self.digraph
-			def filtered_tree():
-				pass
-			filtered_tree.dbapi = _dep_check_composite_db(depgraph, myroot)
-			self._filtered_trees[myroot]["porttree"] = filtered_tree
-			self._visible_pkgs[myroot] = PackageVirtualDbapi(vardb.settings)
-
-			# Passing in graph_tree as the vartree here could lead to better
-			# atom selections in some cases by causing atoms for packages that
-			# have been added to the graph to be preferred over other choices.
-			# However, it can trigger atom selections that result in
-			# unresolvable direct circular dependencies. For example, this
-			# happens with gwydion-dylan which depends on either itself or
-			# gwydion-dylan-bin. In case gwydion-dylan is not yet installed,
-			# gwydion-dylan-bin needs to be selected in order to avoid a
-			# an unresolvable direct circular dependency.
-			#
-			# To solve the problem described above, pass in "graph_db" so that
-			# packages that have been added to the graph are distinguishable
-			# from other available packages and installed packages. Also, pass
-			# the parent package into self._select_atoms() calls so that
-			# unresolvable direct circular dependencies can be detected and
-			# avoided when possible.
-			self._filtered_trees[myroot]["graph_db"] = graph_tree.dbapi
-			self._filtered_trees[myroot]["graph"]    = self.digraph
-			self._filtered_trees[myroot]["vartree"] = \
-				depgraph._frozen_config.trees[myroot]["vartree"]
-
-			dbs = []
-			#               (db, pkg_type, built, installed, db_keys)
-			if "remove" in self.myparams:
-				# For removal operations, use _dep_check_composite_db
-				# for availability and visibility checks. This provides
-				# consistency with install operations, so we don't
-				# get install/uninstall cycles like in bug #332719.
-				self._graph_trees[myroot]["porttree"] = filtered_tree
-			else:
-				if "--usepkgonly" not in depgraph._frozen_config.myopts:
-					portdb = depgraph._frozen_config.trees[myroot]["porttree"].dbapi
-					db_keys = list(portdb._aux_cache_keys)
-					dbs.append((portdb, "ebuild", False, False, db_keys))
-
-				if "--usepkg" in depgraph._frozen_config.myopts:
-					bindb  = depgraph._frozen_config.trees[myroot]["bintree"].dbapi
-					db_keys = list(bindb._aux_cache_keys)
-					dbs.append((bindb,  "binary", True, False, db_keys))
-
-			vardb  = depgraph._frozen_config.trees[myroot]["vartree"].dbapi
-			db_keys = list(depgraph._frozen_config._trees_orig[myroot
-				]["vartree"].dbapi._aux_cache_keys)
-			dbs.append((vardb, "installed", True, True, db_keys))
-			self._filtered_trees[myroot]["dbs"] = dbs
-
-class depgraph(object):
-
-	pkg_tree_map = RootConfig.pkg_tree_map
-	
-	def __init__(self, settings, trees, myopts, myparams, spinner,
-		frozen_config=None, backtrack_parameters=BacktrackParameter(), allow_backtracking=False):
-		if frozen_config is None:
-			frozen_config = _frozen_depgraph_config(settings, trees,
-			myopts, spinner)
-		self._frozen_config = frozen_config
-		self._dynamic_config = _dynamic_depgraph_config(self, myparams,
-			allow_backtracking, backtrack_parameters)
-		self._rebuild = _rebuild_config(frozen_config, backtrack_parameters)
-
-		self._select_atoms = self._select_atoms_highest_available
-		self._select_package = self._select_pkg_highest_available
-
-	def _load_vdb(self):
-		"""
-		Load installed package metadata if appropriate. This used to be called
-		from the constructor, but that wasn't very nice since this procedure
-		is slow and it generates spinner output. So, now it's called on-demand
-		by various methods when necessary.
-		"""
-
-		if self._dynamic_config._vdb_loaded:
-			return
-
-		for myroot in self._frozen_config.trees:
-
-			dynamic_deps = self._dynamic_config.myparams.get(
-				"dynamic_deps", "y") != "n"
-			preload_installed_pkgs = \
-				"--nodeps" not in self._frozen_config.myopts
-
-			fake_vartree = self._frozen_config.trees[myroot]["vartree"]
-			if not fake_vartree.dbapi:
-				# This needs to be called for the first depgraph, but not for
-				# backtracking depgraphs that share the same frozen_config.
-				fake_vartree.sync()
-
-				# FakeVartree.sync() populates virtuals, and we want
-				# self.pkgsettings to have them populated too.
-				self._frozen_config.pkgsettings[myroot] = \
-					portage.config(clone=fake_vartree.settings)
-
-			if preload_installed_pkgs:
-				vardb = fake_vartree.dbapi
-				fakedb = self._dynamic_config._graph_trees[
-					myroot]["vartree"].dbapi
-
-				for pkg in vardb:
-					self._spinner_update()
-					if dynamic_deps:
-						# This causes FakeVartree to update the
-						# Package instance dependencies via
-						# PackageVirtualDbapi.aux_update()
-						vardb.aux_get(pkg.cpv, [])
-					fakedb.cpv_inject(pkg)
-
-		self._dynamic_config._vdb_loaded = True
-
-	def _spinner_update(self):
-		if self._frozen_config.spinner:
-			self._frozen_config.spinner.update()
-
-	def _show_ignored_binaries(self):
-		"""
-		Show binaries that have been ignored because their USE didn't
-		match the user's config.
-		"""
-		if not self._dynamic_config.ignored_binaries \
-			or '--quiet' in self._frozen_config.myopts \
-			or self._dynamic_config.myparams.get(
-			"binpkg_respect_use") in ("y", "n"):
-			return
-
-		for pkg in list(self._dynamic_config.ignored_binaries):
-
-			selected_pkg = self._dynamic_config.mydbapi[pkg.root
-				].match_pkgs(pkg.slot_atom)
-
-			if not selected_pkg:
-				continue
-
-			selected_pkg = selected_pkg[-1]
-			if selected_pkg > pkg:
-				self._dynamic_config.ignored_binaries.pop(pkg)
-				continue
-
-			if selected_pkg.installed and \
-				selected_pkg.cpv == pkg.cpv and \
-				selected_pkg.metadata.get('BUILD_TIME') == \
-				pkg.metadata.get('BUILD_TIME'):
-				# We don't care about ignored binaries when an
-				# identical installed instance is selected to
-				# fill the slot.
-				self._dynamic_config.ignored_binaries.pop(pkg)
-				continue
-
-		if not self._dynamic_config.ignored_binaries:
-			return
-
-		self._show_merge_list()
-
-		writemsg("\n!!! The following binary packages have been ignored " + \
-				"due to non matching USE:\n\n", noiselevel=-1)
-
-		for pkg, flags in self._dynamic_config.ignored_binaries.items():
-			flag_display = []
-			for flag in sorted(flags):
-				if flag not in pkg.use.enabled:
-					flag = "-" + flag
-				flag_display.append(flag)
-			flag_display = " ".join(flag_display)
-			# The user can paste this line into package.use
-			writemsg("    =%s %s" % (pkg.cpv, flag_display), noiselevel=-1)
-			if pkg.root_config.settings["ROOT"] != "/":
-				writemsg(" # for %s" % (pkg.root,), noiselevel=-1)
-			writemsg("\n", noiselevel=-1)
-
-		msg = [
-			"",
-			"NOTE: The --binpkg-respect-use=n option will prevent emerge",
-			"      from ignoring these binary packages if possible.",
-			"      Using --binpkg-respect-use=y will silence this warning."
-		]
-
-		for line in msg:
-			if line:
-				line = colorize("INFORM", line)
-			writemsg(line + "\n", noiselevel=-1)
-
-	def _show_missed_update(self):
-
-		# In order to minimize noise, show only the highest
-		# missed update from each SLOT.
-		missed_updates = {}
-		for pkg, mask_reasons in \
-			self._dynamic_config._runtime_pkg_mask.items():
-			if pkg.installed:
-				# Exclude installed here since we only
-				# want to show available updates.
-				continue
-			chosen_pkg = self._dynamic_config.mydbapi[pkg.root
-				].match_pkgs(pkg.slot_atom)
-			if not chosen_pkg or chosen_pkg[-1] >= pkg:
-				continue
-			k = (pkg.root, pkg.slot_atom)
-			if k in missed_updates:
-				other_pkg, mask_type, parent_atoms = missed_updates[k]
-				if other_pkg > pkg:
-					continue
-			for mask_type, parent_atoms in mask_reasons.items():
-				if not parent_atoms:
-					continue
-				missed_updates[k] = (pkg, mask_type, parent_atoms)
-				break
-
-		if not missed_updates:
-			return
-
-		missed_update_types = {}
-		for pkg, mask_type, parent_atoms in missed_updates.values():
-			missed_update_types.setdefault(mask_type,
-				[]).append((pkg, parent_atoms))
-
-		if '--quiet' in self._frozen_config.myopts and \
-			'--debug' not in self._frozen_config.myopts:
-			missed_update_types.pop("slot conflict", None)
-			missed_update_types.pop("missing dependency", None)
-
-		self._show_missed_update_slot_conflicts(
-			missed_update_types.get("slot conflict"))
-
-		self._show_missed_update_unsatisfied_dep(
-			missed_update_types.get("missing dependency"))
-
-	def _show_missed_update_unsatisfied_dep(self, missed_updates):
-
-		if not missed_updates:
-			return
-
-		self._show_merge_list()
-		backtrack_masked = []
-
-		for pkg, parent_atoms in missed_updates:
-
-			try:
-				for parent, root, atom in parent_atoms:
-					self._show_unsatisfied_dep(root, atom, myparent=parent,
-						check_backtrack=True)
-			except self._backtrack_mask:
-				# This is displayed below in abbreviated form.
-				backtrack_masked.append((pkg, parent_atoms))
-				continue
-
-			writemsg("\n!!! The following update has been skipped " + \
-				"due to unsatisfied dependencies:\n\n", noiselevel=-1)
-
-			writemsg(str(pkg.slot_atom), noiselevel=-1)
-			if pkg.root_config.settings["ROOT"] != "/":
-				writemsg(" for %s" % (pkg.root,), noiselevel=-1)
-			writemsg("\n", noiselevel=-1)
-
-			for parent, root, atom in parent_atoms:
-				self._show_unsatisfied_dep(root, atom, myparent=parent)
-				writemsg("\n", noiselevel=-1)
-
-		if backtrack_masked:
-			# These are shown in abbreviated form, in order to avoid terminal
-			# flooding from mask messages as reported in bug #285832.
-			writemsg("\n!!! The following update(s) have been skipped " + \
-				"due to unsatisfied dependencies\n" + \
-				"!!! triggered by backtracking:\n\n", noiselevel=-1)
-			for pkg, parent_atoms in backtrack_masked:
-				writemsg(str(pkg.slot_atom), noiselevel=-1)
-				if pkg.root_config.settings["ROOT"] != "/":
-					writemsg(" for %s" % (pkg.root,), noiselevel=-1)
-				writemsg("\n", noiselevel=-1)
-
-	def _show_missed_update_slot_conflicts(self, missed_updates):
-
-		if not missed_updates:
-			return
-
-		self._show_merge_list()
-		msg = []
-		msg.append("\nWARNING: One or more updates have been " + \
-			"skipped due to a dependency conflict:\n\n")
-
-		indent = "  "
-		for pkg, parent_atoms in missed_updates:
-			msg.append(str(pkg.slot_atom))
-			if pkg.root_config.settings["ROOT"] != "/":
-				msg.append(" for %s" % (pkg.root,))
-			msg.append("\n\n")
-
-			for parent, atom in parent_atoms:
-				msg.append(indent)
-				msg.append(str(pkg))
-
-				msg.append(" conflicts with\n")
-				msg.append(2*indent)
-				if isinstance(parent,
-					(PackageArg, AtomArg)):
-					# For PackageArg and AtomArg types, it's
-					# redundant to display the atom attribute.
-					msg.append(str(parent))
-				else:
-					# Display the specific atom from SetArg or
-					# Package types.
-					msg.append("%s required by %s" % (atom, parent))
-				msg.append("\n")
-			msg.append("\n")
-
-		writemsg("".join(msg), noiselevel=-1)
-
-	def _show_slot_collision_notice(self):
-		"""Show an informational message advising the user to mask one of the
-		the packages. In some cases it may be possible to resolve this
-		automatically, but support for backtracking (removal nodes that have
-		already been selected) will be required in order to handle all possible
-		cases.
-		"""
-
-		if not self._dynamic_config._slot_collision_info:
-			return
-
-		self._show_merge_list()
-
-		self._dynamic_config._slot_conflict_handler = slot_conflict_handler(self)
-		handler = self._dynamic_config._slot_conflict_handler
-
-		conflict = handler.get_conflict()
-		writemsg(conflict, noiselevel=-1)
-		
-		explanation = handler.get_explanation()
-		if explanation:
-			writemsg(explanation, noiselevel=-1)
-			return
-
-		if "--quiet" in self._frozen_config.myopts:
-			return
-
-		msg = []
-		msg.append("It may be possible to solve this problem ")
-		msg.append("by using package.mask to prevent one of ")
-		msg.append("those packages from being selected. ")
-		msg.append("However, it is also possible that conflicting ")
-		msg.append("dependencies exist such that they are impossible to ")
-		msg.append("satisfy simultaneously.  If such a conflict exists in ")
-		msg.append("the dependencies of two different packages, then those ")
-		msg.append("packages can not be installed simultaneously.")
-		backtrack_opt = self._frozen_config.myopts.get('--backtrack')
-		if not self._dynamic_config._allow_backtracking and \
-			(backtrack_opt is None or \
-			(backtrack_opt > 0 and backtrack_opt < 30)):
-			msg.append(" You may want to try a larger value of the ")
-			msg.append("--backtrack option, such as --backtrack=30, ")
-			msg.append("in order to see if that will solve this conflict ")
-			msg.append("automatically.")
-
-		for line in textwrap.wrap(''.join(msg), 70):
-			writemsg(line + '\n', noiselevel=-1)
-		writemsg('\n', noiselevel=-1)
-
-		msg = []
-		msg.append("For more information, see MASKED PACKAGES ")
-		msg.append("section in the emerge man page or refer ")
-		msg.append("to the Gentoo Handbook.")
-		for line in textwrap.wrap(''.join(msg), 70):
-			writemsg(line + '\n', noiselevel=-1)
-		writemsg('\n', noiselevel=-1)
-
-	def _process_slot_conflicts(self):
-		"""
-		If there are any slot conflicts and backtracking is enabled,
-		_complete_graph should complete the graph before this method
-		is called, so that all relevant reverse dependencies are
-		available for use in backtracking decisions.
-		"""
-		for (slot_atom, root), slot_nodes in \
-			self._dynamic_config._slot_collision_info.items():
-			self._process_slot_conflict(root, slot_atom, slot_nodes)
-
-	def _process_slot_conflict(self, root, slot_atom, slot_nodes):
-		"""
-		Process slot conflict data to identify specific atoms which
-		lead to conflict. These atoms only match a subset of the
-		packages that have been pulled into a given slot.
-		"""
-
-		debug = "--debug" in self._frozen_config.myopts
-
-		slot_parent_atoms = set()
-		for pkg in slot_nodes:
-			parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
-			if not parent_atoms:
-				continue
-			slot_parent_atoms.update(parent_atoms)
-
-		conflict_pkgs = []
-		conflict_atoms = {}
-		for pkg in slot_nodes:
-
-			if self._dynamic_config._allow_backtracking and \
-				pkg in self._dynamic_config._runtime_pkg_mask:
-				if debug:
-					writemsg_level(
-						"!!! backtracking loop detected: %s %s\n" % \
-						(pkg,
-						self._dynamic_config._runtime_pkg_mask[pkg]),
-						level=logging.DEBUG, noiselevel=-1)
-
-			parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
-			if parent_atoms is None:
-				parent_atoms = set()
-				self._dynamic_config._parent_atoms[pkg] = parent_atoms
-
-			all_match = True
-			for parent_atom in slot_parent_atoms:
-				if parent_atom in parent_atoms:
-					continue
-				# Use package set for matching since it will match via
-				# PROVIDE when necessary, while match_from_list does not.
-				parent, atom = parent_atom
-				atom_set = InternalPackageSet(
-					initial_atoms=(atom,), allow_repo=True)
-				if atom_set.findAtomForPackage(pkg,
-					modified_use=self._pkg_use_enabled(pkg)):
-					parent_atoms.add(parent_atom)
-				else:
-					all_match = False
-					conflict_atoms.setdefault(parent_atom, set()).add(pkg)
-
-			if not all_match:
-				conflict_pkgs.append(pkg)
-
-		if conflict_pkgs and \
-			self._dynamic_config._allow_backtracking and \
-			not self._accept_blocker_conflicts():
-			remaining = []
-			for pkg in conflict_pkgs:
-				if self._slot_conflict_backtrack_abi(pkg,
-					slot_nodes, conflict_atoms):
-					backtrack_infos = self._dynamic_config._backtrack_infos
-					config = backtrack_infos.setdefault("config", {})
-					config.setdefault("slot_conflict_abi", set()).add(pkg)
-				else:
-					remaining.append(pkg)
-			if remaining:
-				self._slot_confict_backtrack(root, slot_atom,
-					slot_parent_atoms, remaining)
-
-	def _slot_confict_backtrack(self, root, slot_atom,
-		all_parents, conflict_pkgs):
-
-		debug = "--debug" in self._frozen_config.myopts
-		existing_node = self._dynamic_config._slot_pkg_map[root][slot_atom]
-		backtrack_data = []
-		# The ordering of backtrack_data can make
-		# a difference here, because both mask actions may lead
-		# to valid, but different, solutions and the one with
-		# 'existing_node' masked is usually the better one. Because
-		# of that, we choose an order such that
-		# the backtracker will first explore the choice with
-		# existing_node masked. The backtracker reverses the
-		# order, so the order it uses is the reverse of the
-		# order shown here. See bug #339606.
-		if existing_node in conflict_pkgs and \
-			existing_node is not conflict_pkgs[-1]:
-			conflict_pkgs.remove(existing_node)
-			conflict_pkgs.append(existing_node)
-		for to_be_masked in conflict_pkgs:
-			# For missed update messages, find out which
-			# atoms matched to_be_selected that did not
-			# match to_be_masked.
-			parent_atoms = \
-				self._dynamic_config._parent_atoms.get(to_be_masked, set())
-			conflict_atoms = set(parent_atom for parent_atom in all_parents \
-				if parent_atom not in parent_atoms)
-			backtrack_data.append((to_be_masked, conflict_atoms))
-
-		if len(backtrack_data) > 1:
-			# NOTE: Generally, we prefer to mask the higher
-			# version since this solves common cases in which a
-			# lower version is needed so that all dependencies
-			# will be satisfied (bug #337178). However, if
-			# existing_node happens to be installed then we
-			# mask that since this is a common case that is
-			# triggered when --update is not enabled.
-			if existing_node.installed:
-				pass
-			elif any(pkg > existing_node for pkg in conflict_pkgs):
-				backtrack_data.reverse()
-
-		to_be_masked = backtrack_data[-1][0]
-
-		self._dynamic_config._backtrack_infos.setdefault(
-			"slot conflict", []).append(backtrack_data)
-		self._dynamic_config._need_restart = True
-		if debug:
-			msg = []
-			msg.append("")
-			msg.append("")
-			msg.append("backtracking due to slot conflict:")
-			msg.append("   first package:  %s" % existing_node)
-			msg.append("  package to mask: %s" % to_be_masked)
-			msg.append("      slot: %s" % slot_atom)
-			msg.append("   parents: %s" % ", ".join( \
-				"(%s, '%s')" % (ppkg, atom) for ppkg, atom in all_parents))
-			msg.append("")
-			writemsg_level("".join("%s\n" % l for l in msg),
-				noiselevel=-1, level=logging.DEBUG)
-
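
The ordering trick above is easier to see in isolation. Here is a minimal standalone sketch (illustrative names, not portage API): since the backtracker pops choices from the end of the list, moving existing_node to the tail makes "mask existing_node" the first choice explored.

def order_backtrack_choices(conflict_pkgs, existing_node):
    """Return conflict_pkgs with existing_node moved to the end."""
    pkgs = list(conflict_pkgs)
    if existing_node in pkgs and pkgs[-1] is not existing_node:
        pkgs.remove(existing_node)
        pkgs.append(existing_node)
    return pkgs

existing = "existing-node"
print(order_backtrack_choices(["pkg-a", existing, "pkg-b"], existing))
# ['pkg-a', 'pkg-b', 'existing-node']
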
-	def _slot_conflict_backtrack_abi(self, pkg, slot_nodes, conflict_atoms):
-		"""
-		If one or more conflict atoms have a slot/sub-slot dep that can be resolved
-		by rebuilding the parent package, then schedule the rebuild via
-		backtracking, and return True. Otherwise, return False.
-		"""
-
-		found_update = False
-		for parent_atom, conflict_pkgs in conflict_atoms.items():
-			parent, atom = parent_atom
-			if atom.slot_operator != "=" or not parent.built:
-				continue
-
-			if pkg not in conflict_pkgs:
-				continue
-
-			for other_pkg in slot_nodes:
-				if other_pkg in conflict_pkgs:
-					continue
-
-				dep = Dependency(atom=atom, child=other_pkg,
-					parent=parent, root=pkg.root)
-
-				if self._slot_operator_update_probe(dep):
-					self._slot_operator_update_backtrack(dep)
-					found_update = True
-
-		return found_update
-
-	def _slot_operator_update_backtrack(self, dep, new_child_slot=None):
-		if new_child_slot is None:
-			child = dep.child
-		else:
-			child = new_child_slot
-		if "--debug" in self._frozen_config.myopts:
-			msg = []
-			msg.append("")
-			msg.append("")
-			msg.append("backtracking due to missed slot abi update:")
-			msg.append("   child package:  %s" % child)
-			if new_child_slot is not None:
-				msg.append("   new child slot package:  %s" % new_child_slot)
-			msg.append("   parent package: %s" % dep.parent)
-			msg.append("   atom: %s" % dep.atom)
-			msg.append("")
-			writemsg_level("\n".join(msg),
-				noiselevel=-1, level=logging.DEBUG)
-		backtrack_infos = self._dynamic_config._backtrack_infos
-		config = backtrack_infos.setdefault("config", {})
-
-		# mask unwanted binary packages if necessary
-		abi_masks = {}
-		if new_child_slot is None:
-			if not child.installed:
-				abi_masks.setdefault(child, {})["slot_operator_mask_built"] = None
-		if not dep.parent.installed:
-			abi_masks.setdefault(dep.parent, {})["slot_operator_mask_built"] = None
-		if abi_masks:
-			config.setdefault("slot_operator_mask_built", {}).update(abi_masks)
-
-		# trigger replacement of installed packages if necessary
-		abi_reinstalls = set()
-		if dep.parent.installed:
-			abi_reinstalls.add((dep.parent.root, dep.parent.slot_atom))
-		if new_child_slot is None and child.installed:
-			abi_reinstalls.add((child.root, child.slot_atom))
-		if abi_reinstalls:
-			config.setdefault("slot_operator_replace_installed",
-				set()).update(abi_reinstalls)
-
-		self._dynamic_config._need_restart = True
-
-	def _slot_operator_update_probe(self, dep, new_child_slot=False):
-		"""
-		slot/sub-slot := operators tend to prevent updates from getting pulled in,
-		since installed packages pull in packages with the slot/sub-slot that they
-		were built against. Detect this case so that we can schedule rebuilds
-		and reinstalls when appropriate.
-		NOTE: This function only searches for updates that involve upgrades
-			to higher versions, since the logic required to detect when a
-			downgrade would be desirable is not implemented.
-		"""
-
-		if dep.child.installed and \
-			self._frozen_config.excluded_pkgs.findAtomForPackage(dep.child,
-			modified_use=self._pkg_use_enabled(dep.child)):
-			return None
-
-		if dep.parent.installed and \
-			self._frozen_config.excluded_pkgs.findAtomForPackage(dep.parent,
-			modified_use=self._pkg_use_enabled(dep.parent)):
-			return None
-
-		debug = "--debug" in self._frozen_config.myopts
-		want_downgrade = None
-
-		for replacement_parent in self._iter_similar_available(dep.parent,
-			dep.parent.slot_atom):
-
-			for atom in replacement_parent.validated_atoms:
-				if atom.slot_operator != "=" or \
-					atom.blocker or \
-					atom.cp != dep.atom.cp:
-					continue
-
-				# Discard USE deps, we're only searching for an approximate
-				# pattern, and dealing with USE states is too complex for
-				# this purpose.
-				atom = atom.without_use
-
-				if replacement_parent.built and \
-					portage.dep._match_slot(atom, dep.child):
-					# Our selected replacement_parent appears to be built
-					# for the existing child selection. So, discard this
-					# parent and search for another.
-					break
-
-				for pkg in self._iter_similar_available(
-					dep.child, atom):
-					if pkg.slot == dep.child.slot and \
-						pkg.sub_slot == dep.child.sub_slot:
-						# If slot/sub-slot is identical, then there's
-						# no point in updating.
-						continue
-					if new_child_slot:
-						if pkg.slot == dep.child.slot:
-							continue
-						if pkg < dep.child:
-							# the new slot only matters if the
-							# package version is higher
-							continue
-					else:
-						if pkg.slot != dep.child.slot:
-							continue
-						if pkg < dep.child:
-							if want_downgrade is None:
-								want_downgrade = self._downgrade_probe(dep.child)
-							# be careful not to trigger a rebuild when
-							# the only version available with a
-							# different slot_operator is an older version
-							if not want_downgrade:
-								continue
-
-					if debug:
-						msg = []
-						msg.append("")
-						msg.append("")
-						msg.append("slot_operator_update_probe:")
-						msg.append("   existing child package:  %s" % dep.child)
-						msg.append("   existing parent package: %s" % dep.parent)
-						msg.append("   new child package:  %s" % pkg)
-						msg.append("   new parent package: %s" % replacement_parent)
-						msg.append("")
-						writemsg_level("\n".join(msg),
-							noiselevel=-1, level=logging.DEBUG)
-
-					return pkg
-
-		if debug:
-			msg = []
-			msg.append("")
-			msg.append("")
-			msg.append("slot_operator_update_probe:")
-			msg.append("   existing child package:  %s" % dep.child)
-			msg.append("   existing parent package: %s" % dep.parent)
-			msg.append("   new child package:  %s" % None)
-			msg.append("   new parent package: %s" % None)
-			msg.append("")
-			writemsg_level("\n".join(msg),
-				noiselevel=-1, level=logging.DEBUG)
-
-		return None
-
-	def _downgrade_probe(self, pkg):
-		"""
-		Detect cases where a downgrade of the given package is considered
-		desirable due to the current version being masked or unavailable.
-		"""
-		available_pkg = None
-		for available_pkg in self._iter_similar_available(pkg,
-			pkg.slot_atom):
-			if available_pkg >= pkg:
-				# There's an available package of the same or higher
-				# version, so downgrade seems undesirable.
-				return False
-
-		return available_pkg is not None
-
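
As a quick illustration of the predicate above, the same decision can be written over plain integers instead of Package objects (a hedged sketch; the real code compares portage version objects):

def downgrade_desirable(current, available):
    # A downgrade is desirable only when something is available and
    # nothing available is >= the current version.
    return bool(available) and all(v < current for v in available)

assert downgrade_desirable(3, [1, 2]) is True
assert downgrade_desirable(3, [2, 3]) is False
assert downgrade_desirable(3, []) is False
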
-	def _iter_similar_available(self, graph_pkg, atom):
-		"""
-		Given a package that's in the graph, do a rough check to
-		see if a similar package is available to install. The given
-		graph_pkg itself may be yielded only if it's not installed.
-		"""
-
-		usepkgonly = "--usepkgonly" in self._frozen_config.myopts
-		useoldpkg_atoms = self._frozen_config.useoldpkg_atoms
-		use_ebuild_visibility = self._frozen_config.myopts.get(
-			'--use-ebuild-visibility', 'n') != 'n'
-
-		for pkg in self._iter_match_pkgs_any(
-			graph_pkg.root_config, atom):
-			if pkg.cp != graph_pkg.cp:
-				# discard old-style virtual match
-				continue
-			if pkg.installed:
-				continue
-			if pkg in self._dynamic_config._runtime_pkg_mask:
-				continue
-			if self._frozen_config.excluded_pkgs.findAtomForPackage(pkg,
-				modified_use=self._pkg_use_enabled(pkg)):
-				continue
-			if not self._pkg_visibility_check(pkg):
-				continue
-			if pkg.built:
-				if self._equiv_binary_installed(pkg):
-					continue
-				if not (not use_ebuild_visibility and
-					(usepkgonly or useoldpkg_atoms.findAtomForPackage(
-					pkg, modified_use=self._pkg_use_enabled(pkg)))) and \
-					not self._equiv_ebuild_visible(pkg):
-					continue
-			yield pkg
-
-	def _slot_operator_trigger_reinstalls(self):
-		"""
-		Search for packages with slot-operator deps on older slots, and schedule
-		rebuilds if they can link to a newer slot that's in the graph.
-		"""
-
-		rebuild_if_new_slot = self._dynamic_config.myparams.get(
-			"rebuild_if_new_slot", "y") == "y"
-
-		for slot_key, slot_info in self._dynamic_config._slot_operator_deps.items():
-
-			for dep in slot_info:
-				if not (dep.child.built and dep.parent and
-					isinstance(dep.parent, Package) and dep.parent.built):
-					continue
-
-				# Check for slot update first, since we don't want to
-				# trigger reinstall of the child package when a newer
-				# slot will be used instead.
-				if rebuild_if_new_slot:
-					new_child = self._slot_operator_update_probe(dep,
-						new_child_slot=True)
-					if new_child:
-						self._slot_operator_update_backtrack(dep,
-							new_child_slot=new_child)
-						break
-
-				if dep.want_update:
-					if self._slot_operator_update_probe(dep):
-						self._slot_operator_update_backtrack(dep)
-						break
-
-	def _reinstall_for_flags(self, pkg, forced_flags,
-		orig_use, orig_iuse, cur_use, cur_iuse):
-		"""Return a set of flags that trigger reinstallation, or None if there
-		are no such flags."""
-
-		# binpkg_respect_use: Behave like newuse by default. If newuse is
-		# False and changed_use is True, then behave like changed_use.
-		binpkg_respect_use = (pkg.built and
-			self._dynamic_config.myparams.get("binpkg_respect_use")
-			in ("y", "auto"))
-		newuse = "--newuse" in self._frozen_config.myopts
-		changed_use = "changed-use" == self._frozen_config.myopts.get("--reinstall")
-		feature_flags = _get_feature_flags(
-			_get_eapi_attrs(pkg.metadata["EAPI"]))
-
-		if newuse or (binpkg_respect_use and not changed_use):
-			flags = set(orig_iuse.symmetric_difference(
-				cur_iuse).difference(forced_flags))
-			flags.update(orig_iuse.intersection(orig_use).symmetric_difference(
-				cur_iuse.intersection(cur_use)))
-			flags.difference_update(feature_flags)
-			if flags:
-				return flags
-
-		elif changed_use or binpkg_respect_use:
-			flags = set(orig_iuse.intersection(orig_use).symmetric_difference(
-				cur_iuse.intersection(cur_use)))
-			flags.difference_update(feature_flags)
-			if flags:
-				return flags
-		return None
-
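
The set arithmetic in the newuse branch above is compact; a worked example with hypothetical flag sets (not taken from any real package) may help: flags that entered or left IUSE, plus flags whose enabled state changed, minus forced flags.

orig_iuse = {"ssl", "gtk", "doc"}
orig_use = {"ssl"}
cur_iuse = {"ssl", "gtk", "qt"}
cur_use = {"ssl", "gtk"}
forced_flags = {"doc"}

flags = orig_iuse.symmetric_difference(cur_iuse).difference(forced_flags)
flags.update(orig_iuse.intersection(orig_use).symmetric_difference(
    cur_iuse.intersection(cur_use)))
print(sorted(flags))  # ['gtk', 'qt'] -> reinstall is triggered
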
-	def _create_graph(self, allow_unsatisfied=False):
-		dep_stack = self._dynamic_config._dep_stack
-		dep_disjunctive_stack = self._dynamic_config._dep_disjunctive_stack
-		while dep_stack or dep_disjunctive_stack:
-			self._spinner_update()
-			while dep_stack:
-				dep = dep_stack.pop()
-				if isinstance(dep, Package):
-					if not self._add_pkg_deps(dep,
-						allow_unsatisfied=allow_unsatisfied):
-						return 0
-					continue
-				if not self._add_dep(dep, allow_unsatisfied=allow_unsatisfied):
-					return 0
-			if dep_disjunctive_stack:
-				if not self._pop_disjunction(allow_unsatisfied):
-					return 0
-		return 1
-
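
The loop above is a two-stack worklist; a reduced model (illustrative callbacks, not portage API) shows the shape: regular deps are drained first, and a disjunctive choice is only popped once the regular stack is empty, since handling it may push new regular deps.

def drain(dep_stack, disjunctive_stack, handle_dep, pop_disjunction):
    while dep_stack or disjunctive_stack:
        while dep_stack:
            if not handle_dep(dep_stack.pop()):
                return 0  # an unresolvable dep aborts graph creation
        if disjunctive_stack:
            if not pop_disjunction(disjunctive_stack.pop()):
                return 0
    return 1

print(drain(["dev-libs/a"], [["||", "b", "c"]],
            lambda dep: 1, lambda choice: 1))  # 1
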
-	def _expand_set_args(self, input_args, add_to_digraph=False):
-		"""
-		Iterate over a list of DependencyArg instances and yield all
-		instances given in the input together with additional SetArg
-		instances that are generated from nested sets.
-		@param input_args: An iterable of DependencyArg instances
-		@type input_args: Iterable
-		@param add_to_digraph: If True then add SetArg instances
-			to the digraph, in order to record parent -> child
-			relationships from nested sets
-		@type add_to_digraph: Boolean
-		@rtype: Iterable
-		@return: All args given in the input together with additional
-			SetArg instances that are generated from nested sets
-		"""
-
-		traversed_set_args = set()
-
-		for arg in input_args:
-			if not isinstance(arg, SetArg):
-				yield arg
-				continue
-
-			root_config = arg.root_config
-			depgraph_sets = self._dynamic_config.sets[root_config.root]
-			arg_stack = [arg]
-			while arg_stack:
-				arg = arg_stack.pop()
-				if arg in traversed_set_args:
-					continue
-				traversed_set_args.add(arg)
-
-				if add_to_digraph:
-					self._dynamic_config.digraph.add(arg, None,
-						priority=BlockerDepPriority.instance)
-
-				yield arg
-
-				# Traverse nested sets and add them to the stack
-				# if they're not already in the graph. Also, graph
-				# edges between parent and nested sets.
-				for token in arg.pset.getNonAtoms():
-					if not token.startswith(SETPREFIX):
-						continue
-					s = token[len(SETPREFIX):]
-					nested_set = depgraph_sets.sets.get(s)
-					if nested_set is None:
-						nested_set = root_config.sets.get(s)
-					if nested_set is not None:
-						nested_arg = SetArg(arg=token, pset=nested_set,
-							root_config=root_config)
-						arg_stack.append(nested_arg)
-						if add_to_digraph:
-							self._dynamic_config.digraph.add(nested_arg, arg,
-								priority=BlockerDepPriority.instance)
-							depgraph_sets.sets[nested_arg.name] = nested_arg.pset
-
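
Stripped of the SetArg and digraph machinery, the nested-set traversal above reduces to a cycle-safe stack walk. A sketch assuming sets are plain dicts mapping a set name to its member tokens (names illustrative):

SETPREFIX = "@"

def expand_sets(name, sets):
    """Yield name plus every set nested (transitively) inside it."""
    seen = set()
    stack = [name]
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue  # guard against nested-set cycles
        seen.add(cur)
        yield cur
        for token in sets.get(cur, []):
            if token.startswith(SETPREFIX):
                stack.append(token[len(SETPREFIX):])

nested = {"world": ["@system", "app-editors/vim"],
          "system": ["sys-apps/portage"]}
print(list(expand_sets("world", nested)))  # ['world', 'system']
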
-	def _add_dep(self, dep, allow_unsatisfied=False):
-		debug = "--debug" in self._frozen_config.myopts
-		buildpkgonly = "--buildpkgonly" in self._frozen_config.myopts
-		nodeps = "--nodeps" in self._frozen_config.myopts
-		if dep.blocker:
-			if not buildpkgonly and \
-				not nodeps and \
-				not dep.collapsed_priority.ignored and \
-				not dep.collapsed_priority.optional and \
-				dep.parent not in self._dynamic_config._slot_collision_nodes:
-				if dep.parent.onlydeps:
-					# It's safe to ignore blockers if the
-					# parent is an --onlydeps node.
-					return 1
-				# The blocker applies to the root where
-				# the parent is or will be installed.
-				blocker = Blocker(atom=dep.atom,
-					eapi=dep.parent.metadata["EAPI"],
-					priority=dep.priority, root=dep.parent.root)
-				self._dynamic_config._blocker_parents.add(blocker, dep.parent)
-			return 1
-
-		if dep.child is None:
-			dep_pkg, existing_node = self._select_package(dep.root, dep.atom,
-				onlydeps=dep.onlydeps)
-		else:
-			# The caller has selected a specific package
-			# via self._minimize_packages().
-			dep_pkg = dep.child
-			existing_node = self._dynamic_config._slot_pkg_map[
-				dep.root].get(dep_pkg.slot_atom)
-
-		if not dep_pkg:
-			if (dep.collapsed_priority.optional or
-				dep.collapsed_priority.ignored):
-				# This is an unnecessary build-time dep.
-				return 1
-			if allow_unsatisfied:
-				self._dynamic_config._unsatisfied_deps.append(dep)
-				return 1
-			self._dynamic_config._unsatisfied_deps_for_display.append(
-				((dep.root, dep.atom), {"myparent":dep.parent}))
-
-			# The parent node should not already be in
-			# runtime_pkg_mask, since that would trigger an
-			# infinite backtracking loop.
-			if self._dynamic_config._allow_backtracking:
-				if dep.parent in self._dynamic_config._runtime_pkg_mask:
-					if debug:
-						writemsg(
-							"!!! backtracking loop detected: %s %s\n" % \
-							(dep.parent,
-							self._dynamic_config._runtime_pkg_mask[
-							dep.parent]), noiselevel=-1)
-				elif not self.need_restart():
-					# Do not backtrack if only USE flags have to be
-					# changed in order to satisfy the dependency.
-					dep_pkg, existing_node = \
-						self._select_package(dep.root, dep.atom.without_use,
-							onlydeps=dep.onlydeps)
-					if dep_pkg is None:
-						self._dynamic_config._backtrack_infos["missing dependency"] = dep
-						self._dynamic_config._need_restart = True
-						if debug:
-							msg = []
-							msg.append("")
-							msg.append("")
-							msg.append("backtracking due to unsatisfied dep:")
-							msg.append("    parent: %s" % dep.parent)
-							msg.append("  priority: %s" % dep.priority)
-							msg.append("      root: %s" % dep.root)
-							msg.append("      atom: %s" % dep.atom)
-							msg.append("")
-							writemsg_level("".join("%s\n" % l for l in msg),
-								noiselevel=-1, level=logging.DEBUG)
-
-			return 0
-
-		self._rebuild.add(dep_pkg, dep)
-
-		ignore = dep.collapsed_priority.ignored and \
-			not self._dynamic_config._traverse_ignored_deps
-		if not ignore and not self._add_pkg(dep_pkg, dep):
-			return 0
-		return 1
-
-	def _check_slot_conflict(self, pkg, atom):
-		existing_node = self._dynamic_config._slot_pkg_map[pkg.root].get(pkg.slot_atom)
-		matches = None
-		if existing_node:
-			matches = pkg.cpv == existing_node.cpv
-			if pkg != existing_node and \
-				atom is not None:
-				# Use package set for matching since it will match via
-				# PROVIDE when necessary, while match_from_list does not.
-				matches = bool(InternalPackageSet(initial_atoms=(atom,),
-					allow_repo=True).findAtomForPackage(existing_node,
-					modified_use=self._pkg_use_enabled(existing_node)))
-
-		return (existing_node, matches)
-
-	def _add_pkg(self, pkg, dep):
-		"""
-		Adds a package to the depgraph, queues dependencies, and handles
-		slot conflicts.
-		"""
-		debug = "--debug" in self._frozen_config.myopts
-		myparent = None
-		priority = None
-		depth = 0
-		if dep is None:
-			dep = Dependency()
-		else:
-			myparent = dep.parent
-			priority = dep.priority
-			depth = dep.depth
-		if priority is None:
-			priority = DepPriority()
-
-		if debug:
-			writemsg_level(
-				"\n%s%s %s\n" % ("Child:".ljust(15), pkg,
-				pkg_use_display(pkg, self._frozen_config.myopts,
-				modified_use=self._pkg_use_enabled(pkg))),
-				level=logging.DEBUG, noiselevel=-1)
-			if isinstance(myparent,
-				(PackageArg, AtomArg)):
-				# For PackageArg and AtomArg types, it's
-				# redundant to display the atom attribute.
-				writemsg_level(
-					"%s%s\n" % ("Parent Dep:".ljust(15), myparent),
-					level=logging.DEBUG, noiselevel=-1)
-			else:
-				# Display the specific atom from SetArg or
-				# Package types.
-				uneval = ""
-				if dep.atom is not dep.atom.unevaluated_atom:
-					uneval = " (%s)" % (dep.atom.unevaluated_atom,)
-				writemsg_level(
-					"%s%s%s required by %s\n" %
-					("Parent Dep:".ljust(15), dep.atom, uneval, myparent),
-					level=logging.DEBUG, noiselevel=-1)
-
-		# Ensure that the dependencies of the same package
-		# are never processed more than once.
-		previously_added = pkg in self._dynamic_config.digraph
-
-		pkgsettings = self._frozen_config.pkgsettings[pkg.root]
-
-		arg_atoms = None
-		try:
-			arg_atoms = list(self._iter_atoms_for_pkg(pkg))
-		except portage.exception.InvalidDependString as e:
-			if not pkg.installed:
-				# should have been masked before it was selected
-				raise
-			del e
-
-		# NOTE: REQUIRED_USE checks are delayed until after
-		# package selection, since we want to prompt the user
-		# for USE adjustment rather than have REQUIRED_USE
-		# affect package selection and || dep choices.
-		if not pkg.built and pkg.metadata.get("REQUIRED_USE") and \
-			eapi_has_required_use(pkg.metadata["EAPI"]):
-			required_use_is_sat = check_required_use(
-				pkg.metadata["REQUIRED_USE"],
-				self._pkg_use_enabled(pkg),
-				pkg.iuse.is_valid_flag,
-				eapi=pkg.metadata["EAPI"])
-			if not required_use_is_sat:
-				if dep.atom is not None and dep.parent is not None:
-					self._add_parent_atom(pkg, (dep.parent, dep.atom))
-
-				if arg_atoms:
-					for parent_atom in arg_atoms:
-						parent, atom = parent_atom
-						self._add_parent_atom(pkg, parent_atom)
-
-				atom = dep.atom
-				if atom is None:
-					atom = Atom("=" + pkg.cpv)
-				self._dynamic_config._unsatisfied_deps_for_display.append(
-					((pkg.root, atom),
-					{"myparent" : dep.parent, "show_req_use" : pkg}))
-				self._dynamic_config._skip_restart = True
-				return 0
-
-		if not pkg.onlydeps:
-
-			existing_node, existing_node_matches = \
-				self._check_slot_conflict(pkg, dep.atom)
-			slot_collision = False
-			if existing_node:
-				if existing_node_matches:
-					# The existing node can be reused.
-					if arg_atoms:
-						for parent_atom in arg_atoms:
-							parent, atom = parent_atom
-							self._dynamic_config.digraph.add(existing_node, parent,
-								priority=priority)
-							self._add_parent_atom(existing_node, parent_atom)
-					# If a direct circular dependency is not an unsatisfied
-					# buildtime dependency then drop it here since otherwise
-					# it can skew the merge order calculation in an unwanted
-					# way.
-					if existing_node != myparent or \
-						(priority.buildtime and not priority.satisfied):
-						self._dynamic_config.digraph.addnode(existing_node, myparent,
-							priority=priority)
-						if dep.atom is not None and dep.parent is not None:
-							self._add_parent_atom(existing_node,
-								(dep.parent, dep.atom))
-					return 1
-				else:
-					self._add_slot_conflict(pkg)
-					if debug:
-						writemsg_level(
-							"%s%s %s\n" % ("Slot Conflict:".ljust(15),
-							existing_node, pkg_use_display(existing_node,
-							self._frozen_config.myopts,
-							modified_use=self._pkg_use_enabled(existing_node))),
-							level=logging.DEBUG, noiselevel=-1)
-
-					slot_collision = True
-
-			if slot_collision:
-				# Now add this node to the graph so that self.display()
-				# can show use flags and --tree output.  This node is
-				# only being partially added to the graph.  It must not be
-				# allowed to interfere with the other nodes that have been
-				# added.  Do not overwrite data for existing nodes in
-				# self._dynamic_config.mydbapi since that data will be used for blocker
-				# validation.
-				# Even though the graph is now invalid, continue to process
-				# dependencies so that things like --fetchonly can still
-				# function despite collisions.
-				pass
-			elif not previously_added:
-				self._dynamic_config._slot_pkg_map[pkg.root][pkg.slot_atom] = pkg
-				self._dynamic_config.mydbapi[pkg.root].cpv_inject(pkg)
-				self._dynamic_config._filtered_trees[pkg.root]["porttree"].dbapi._clear_cache()
-				self._dynamic_config._highest_pkg_cache.clear()
-				self._check_masks(pkg)
-
-			if not pkg.installed:
-				# Allow this package to satisfy old-style virtuals in case it
-				# doesn't already. Any pre-existing providers will be preferred
-				# over this one.
-				try:
-					pkgsettings.setinst(pkg.cpv, pkg.metadata)
-					# For consistency, also update the global virtuals.
-					settings = self._frozen_config.roots[pkg.root].settings
-					settings.unlock()
-					settings.setinst(pkg.cpv, pkg.metadata)
-					settings.lock()
-				except portage.exception.InvalidDependString:
-					if not pkg.installed:
-						# should have been masked before it was selected
-						raise
-
-		if arg_atoms:
-			self._dynamic_config._set_nodes.add(pkg)
-
-		# Do this even when addme is False (--onlydeps) so that the
-		# parent/child relationship is always known in case
-		# self._show_slot_collision_notice() needs to be called later.
-		self._dynamic_config.digraph.add(pkg, myparent, priority=priority)
-		if dep.atom is not None and dep.parent is not None:
-			self._add_parent_atom(pkg, (dep.parent, dep.atom))
-
-		if arg_atoms:
-			for parent_atom in arg_atoms:
-				parent, atom = parent_atom
-				self._dynamic_config.digraph.add(pkg, parent, priority=priority)
-				self._add_parent_atom(pkg, parent_atom)
-
-		# This section determines whether we go deeper into dependencies or not.
-		# We want to go deeper on a few occasions:
-		# 1) Installing package A: we need to make sure package A's deps are met.
-		# 2) emerge --deep <pkgspec>: we need to recursively check dependencies of pkgspec.
-		# In --nodeps (no recursion) mode, we obviously only check one level of dependencies.
-		if arg_atoms and depth > 0:
-			for parent, atom in arg_atoms:
-				if parent.reset_depth:
-					depth = 0
-					break
-
-		if previously_added and pkg.depth is not None:
-			depth = min(pkg.depth, depth)
-		pkg.depth = depth
-		deep = self._dynamic_config.myparams.get("deep", 0)
-		update = "--update" in self._frozen_config.myopts
-
-		dep.want_update = (not self._dynamic_config._complete_mode and
-			(arg_atoms or update) and
-			not (deep is not True and depth > deep))
-
-		dep.child = pkg
-		if (not pkg.onlydeps and pkg.built and
-			dep.atom and dep.atom.slot_operator_built):
-			self._add_slot_operator_dep(dep)
-
-		recurse = deep is True or depth + 1 <= deep
-		dep_stack = self._dynamic_config._dep_stack
-		if "recurse" not in self._dynamic_config.myparams:
-			return 1
-		elif pkg.installed and not recurse:
-			dep_stack = self._dynamic_config._ignored_deps
-
-		self._spinner_update()
-
-		if not previously_added:
-			dep_stack.append(pkg)
-		return 1
-
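
The recursion decision near the end of _add_pkg hinges on one expression, `deep is True or depth + 1 <= deep`. In isolation (illustrative values only): deep=True means unlimited recursion, otherwise recurse while the child's depth would still be within --deep=N.

def should_recurse(deep, depth):
    return deep is True or depth + 1 <= deep

assert should_recurse(True, 100) is True   # --deep with no limit
assert should_recurse(2, 1) is True        # child lands at depth 2
assert should_recurse(2, 2) is False       # child would exceed --deep=2
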
-	def _check_masks(self, pkg):
-
-		slot_key = (pkg.root, pkg.slot_atom)
-
-		# Check for upgrades in the same slot that are
-		# masked due to a LICENSE change in a newer
-		# version that is not masked for any other reason.
-		other_pkg = self._frozen_config._highest_license_masked.get(slot_key)
-		if other_pkg is not None and pkg < other_pkg:
-			self._dynamic_config._masked_license_updates.add(other_pkg)
-
-	def _add_parent_atom(self, pkg, parent_atom):
-		parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
-		if parent_atoms is None:
-			parent_atoms = set()
-			self._dynamic_config._parent_atoms[pkg] = parent_atoms
-		parent_atoms.add(parent_atom)
-
-	def _add_slot_operator_dep(self, dep):
-		slot_key = (dep.root, dep.child.slot_atom)
-		slot_info = self._dynamic_config._slot_operator_deps.get(slot_key)
-		if slot_info is None:
-			slot_info = []
-			self._dynamic_config._slot_operator_deps[slot_key] = slot_info
-		slot_info.append(dep)
-
-	def _add_slot_conflict(self, pkg):
-		self._dynamic_config._slot_collision_nodes.add(pkg)
-		slot_key = (pkg.slot_atom, pkg.root)
-		slot_nodes = self._dynamic_config._slot_collision_info.get(slot_key)
-		if slot_nodes is None:
-			slot_nodes = set()
-			slot_nodes.add(self._dynamic_config._slot_pkg_map[pkg.root][pkg.slot_atom])
-			self._dynamic_config._slot_collision_info[slot_key] = slot_nodes
-		slot_nodes.add(pkg)
-
-	def _add_pkg_deps(self, pkg, allow_unsatisfied=False):
-
-		myroot = pkg.root
-		metadata = pkg.metadata
-		removal_action = "remove" in self._dynamic_config.myparams
-		eapi_attrs = _get_eapi_attrs(pkg.metadata["EAPI"])
-
-		edepend = {}
-		for k in Package._dep_keys:
-			edepend[k] = metadata[k]
-
-		if not pkg.built and \
-			"--buildpkgonly" in self._frozen_config.myopts and \
-			"deep" not in self._dynamic_config.myparams:
-			edepend["RDEPEND"] = ""
-			edepend["PDEPEND"] = ""
-
-		ignore_build_time_deps = False
-		if pkg.built and not removal_action:
-			if self._dynamic_config.myparams.get("bdeps", "n") == "y":
-				# Pull in build time deps as requested, but mark them as
-				# "optional" since they are not strictly required. This allows
-				# more freedom in the merge order calculation for solving
-				# circular dependencies. Don't convert to PDEPEND since that
-				# could make --with-bdeps=y less effective if it is used to
-				# adjust merge order to prevent built_with_use() calls from
-				# failing.
-				pass
-			else:
-				ignore_build_time_deps = True
-
-		if removal_action and self._dynamic_config.myparams.get("bdeps", "y") == "n":
-			# Removal actions never traverse ignored buildtime
-			# dependencies, so it's safe to discard them early.
-			edepend["DEPEND"] = ""
-			edepend["HDEPEND"] = ""
-			ignore_build_time_deps = True
-
-		ignore_depend_deps = ignore_build_time_deps
-		ignore_hdepend_deps = ignore_build_time_deps
-
-		if removal_action:
-			depend_root = myroot
-		else:
-			if eapi_attrs.hdepend:
-				depend_root = myroot
-			else:
-				depend_root = self._frozen_config._running_root.root
-				root_deps = self._frozen_config.myopts.get("--root-deps")
-				if root_deps is not None:
-					if root_deps is True:
-						depend_root = myroot
-					elif root_deps == "rdeps":
-						ignore_depend_deps = True
-
-		# If rebuild mode is not enabled, it's safe to discard ignored
-		# build-time dependencies. If you want these deps to be traversed
-		# in "complete" mode then you need to specify --with-bdeps=y.
-		if not self._rebuild.rebuild:
-			if ignore_depend_deps:
-				edepend["DEPEND"] = ""
-			if ignore_hdepend_deps:
-				edepend["HDEPEND"] = ""
-
-		deps = (
-			(depend_root, edepend["DEPEND"],
-				self._priority(buildtime=True,
-				optional=(pkg.built or ignore_depend_deps),
-				ignored=ignore_depend_deps)),
-			(self._frozen_config._running_root.root, edepend["HDEPEND"],
-				self._priority(buildtime=True,
-				optional=(pkg.built or ignore_hdepend_deps),
-				ignored=ignore_hdepend_deps)),
-			(myroot, edepend["RDEPEND"],
-				self._priority(runtime=True)),
-			(myroot, edepend["PDEPEND"],
-				self._priority(runtime_post=True))
-		)
-
-		debug = "--debug" in self._frozen_config.myopts
-
-		for dep_root, dep_string, dep_priority in deps:
-			if not dep_string:
-				continue
-			if debug:
-				writemsg_level("\nParent:    %s\n" % (pkg,),
-					noiselevel=-1, level=logging.DEBUG)
-				writemsg_level("Depstring: %s\n" % (dep_string,),
-					noiselevel=-1, level=logging.DEBUG)
-				writemsg_level("Priority:  %s\n" % (dep_priority,),
-					noiselevel=-1, level=logging.DEBUG)
-
-			try:
-				dep_string = portage.dep.use_reduce(dep_string,
-					uselist=self._pkg_use_enabled(pkg),
-					is_valid_flag=pkg.iuse.is_valid_flag,
-					opconvert=True, token_class=Atom,
-					eapi=pkg.metadata['EAPI'])
-			except portage.exception.InvalidDependString as e:
-				if not pkg.installed:
-					# should have been masked before it was selected
-					raise
-				del e
-
-				# Try again, but omit the is_valid_flag argument, since
-				# invalid USE conditionals are a common problem and it's
-				# practical to ignore this issue for installed packages.
-				try:
-					dep_string = portage.dep.use_reduce(dep_string,
-						uselist=self._pkg_use_enabled(pkg),
-						opconvert=True, token_class=Atom,
-						eapi=pkg.metadata['EAPI'])
-				except portage.exception.InvalidDependString as e:
-					self._dynamic_config._masked_installed.add(pkg)
-					del e
-					continue
-
-			try:
-				dep_string = list(self._queue_disjunctive_deps(
-					pkg, dep_root, dep_priority, dep_string))
-			except portage.exception.InvalidDependString as e:
-				if pkg.installed:
-					self._dynamic_config._masked_installed.add(pkg)
-					del e
-					continue
-
-				# should have been masked before it was selected
-				raise
-
-			if not dep_string:
-				continue
-
-			if not self._add_pkg_dep_string(
-				pkg, dep_root, dep_priority, dep_string,
-				allow_unsatisfied):
-				return 0
-
-		self._dynamic_config._traversed_pkg_deps.add(pkg)
-		return 1
-
-	def _add_pkg_dep_string(self, pkg, dep_root, dep_priority, dep_string,
-		allow_unsatisfied):
-		_autounmask_backup = self._dynamic_config._autounmask
-		if dep_priority.optional or dep_priority.ignored:
-			# Temporarily disable autounmask for deps that
-			# don't necessarily need to be satisfied.
-			self._dynamic_config._autounmask = False
-		try:
-			return self._wrapped_add_pkg_dep_string(
-				pkg, dep_root, dep_priority, dep_string,
-				allow_unsatisfied)
-		finally:
-			self._dynamic_config._autounmask = _autounmask_backup
-
-	def _wrapped_add_pkg_dep_string(self, pkg, dep_root, dep_priority,
-		dep_string, allow_unsatisfied):
-		depth = pkg.depth + 1
-		deep = self._dynamic_config.myparams.get("deep", 0)
-		recurse_satisfied = deep is True or depth <= deep
-		debug = "--debug" in self._frozen_config.myopts
-		strict = pkg.type_name != "installed"
-
-		if debug:
-			writemsg_level("\nParent:    %s\n" % (pkg,),
-				noiselevel=-1, level=logging.DEBUG)
-			dep_repr = portage.dep.paren_enclose(dep_string,
-				unevaluated_atom=True, opconvert=True)
-			writemsg_level("Depstring: %s\n" % (dep_repr,),
-				noiselevel=-1, level=logging.DEBUG)
-			writemsg_level("Priority:  %s\n" % (dep_priority,),
-				noiselevel=-1, level=logging.DEBUG)
-
-		try:
-			selected_atoms = self._select_atoms(dep_root,
-				dep_string, myuse=self._pkg_use_enabled(pkg), parent=pkg,
-				strict=strict, priority=dep_priority)
-		except portage.exception.InvalidDependString:
-			if pkg.installed:
-				self._dynamic_config._masked_installed.add(pkg)
-				return 1
-
-			# should have been masked before it was selected
-			raise
-
-		if debug:
-			writemsg_level("Candidates: %s\n" % \
-				([str(x) for x in selected_atoms[pkg]],),
-				noiselevel=-1, level=logging.DEBUG)
-
-		root_config = self._frozen_config.roots[dep_root]
-		vardb = root_config.trees["vartree"].dbapi
-		traversed_virt_pkgs = set()
-
-		reinstall_atoms = self._frozen_config.reinstall_atoms
-		for atom, child in self._minimize_children(
-			pkg, dep_priority, root_config, selected_atoms[pkg]):
-
-			# If this was a specially generated virtual atom
-			# from dep_check, map it back to the original, in
-			# order to avoid distortion in places like display
-			# or conflict resolution code.
-			is_virt = hasattr(atom, '_orig_atom')
-			atom = getattr(atom, '_orig_atom', atom)
-
-			if atom.blocker and \
-				(dep_priority.optional or dep_priority.ignored):
-				# For --with-bdeps, ignore build-time only blockers
-				# that originate from built packages.
-				continue
-
-			mypriority = dep_priority.copy()
-			if not atom.blocker:
-				inst_pkgs = [inst_pkg for inst_pkg in
-					reversed(vardb.match_pkgs(atom))
-					if not reinstall_atoms.findAtomForPackage(inst_pkg,
-							modified_use=self._pkg_use_enabled(inst_pkg))]
-				if inst_pkgs:
-					for inst_pkg in inst_pkgs:
-						if self._pkg_visibility_check(inst_pkg):
-							# highest visible
-							mypriority.satisfied = inst_pkg
-							break
-					if not mypriority.satisfied:
-						# none visible, so use highest
-						mypriority.satisfied = inst_pkgs[0]
-
-			dep = Dependency(atom=atom,
-				blocker=atom.blocker, child=child, depth=depth, parent=pkg,
-				priority=mypriority, root=dep_root)
-
-			# In some cases, dep_check will return deps that shouldn't
-			# be processed any further, so they are identified and
-			# discarded here. Try to discard as few as possible since
-			# discarded dependencies reduce the amount of information
-			# available for optimization of merge order.
-			ignored = False
-			if not atom.blocker and \
-				not recurse_satisfied and \
-				mypriority.satisfied and \
-				mypriority.satisfied.visible and \
-				dep.child is not None and \
-				not dep.child.installed and \
-				self._dynamic_config._slot_pkg_map[dep.child.root].get(
-				dep.child.slot_atom) is None:
-				myarg = None
-				try:
-					myarg = next(self._iter_atoms_for_pkg(dep.child), None)
-				except InvalidDependString:
-					if not dep.child.installed:
-						raise
-
-				if myarg is None:
-					# Existing child selection may not be valid unless
-					# it's added to the graph immediately, since "complete"
-					# mode may select a different child later.
-					ignored = True
-					dep.child = None
-					self._dynamic_config._ignored_deps.append(dep)
-
-			if not ignored:
-				if dep_priority.ignored and \
-					not self._dynamic_config._traverse_ignored_deps:
-					if is_virt and dep.child is not None:
-						traversed_virt_pkgs.add(dep.child)
-					dep.child = None
-					self._dynamic_config._ignored_deps.append(dep)
-				else:
-					if not self._add_dep(dep,
-						allow_unsatisfied=allow_unsatisfied):
-						return 0
-					if is_virt and dep.child is not None:
-						traversed_virt_pkgs.add(dep.child)
-
-		selected_atoms.pop(pkg)
-
-		# Add selected indirect virtual deps to the graph. This
-		# takes advantage of circular dependency avoidance that's done
-		# by dep_zapdeps. We preserve actual parent/child relationships
-		# here in order to avoid distorting the dependency graph like
-		# <=portage-2.1.6.x did.
-		for virt_dep, atoms in selected_atoms.items():
-
-			virt_pkg = virt_dep.child
-			if virt_pkg not in traversed_virt_pkgs:
-				continue
-
-			if debug:
-				writemsg_level("\nCandidates: %s: %s\n" % \
-					(virt_pkg.cpv, [str(x) for x in atoms]),
-					noiselevel=-1, level=logging.DEBUG)
-
-			if not dep_priority.ignored or \
-				self._dynamic_config._traverse_ignored_deps:
-
-				inst_pkgs = [inst_pkg for inst_pkg in
-					reversed(vardb.match_pkgs(virt_dep.atom))
-					if not reinstall_atoms.findAtomForPackage(inst_pkg,
-							modified_use=self._pkg_use_enabled(inst_pkg))]
-				if inst_pkgs:
-					for inst_pkg in inst_pkgs:
-						if self._pkg_visibility_check(inst_pkg):
-							# highest visible
-							virt_dep.priority.satisfied = inst_pkg
-							break
-					if not virt_dep.priority.satisfied:
-						# none visible, so use highest
-						virt_dep.priority.satisfied = inst_pkgs[0]
-
-				if not self._add_pkg(virt_pkg, virt_dep):
-					return 0
-
-			for atom, child in self._minimize_children(
-				pkg, self._priority(runtime=True), root_config, atoms):
-
-				# If this was a specially generated virtual atom
-				# from dep_check, map it back to the original, in
-				# order to avoid distortion in places like display
-				# or conflict resolution code.
-				is_virt = hasattr(atom, '_orig_atom')
-				atom = getattr(atom, '_orig_atom', atom)
-
-				# This is a GLEP 37 virtual, so its deps are all runtime.
-				mypriority = self._priority(runtime=True)
-				if not atom.blocker:
-					inst_pkgs = [inst_pkg for inst_pkg in
-						reversed(vardb.match_pkgs(atom))
-						if not reinstall_atoms.findAtomForPackage(inst_pkg,
-								modified_use=self._pkg_use_enabled(inst_pkg))]
-					if inst_pkgs:
-						for inst_pkg in inst_pkgs:
-							if self._pkg_visibility_check(inst_pkg):
-								# highest visible
-								mypriority.satisfied = inst_pkg
-								break
-						if not mypriority.satisfied:
-							# none visible, so use highest
-							mypriority.satisfied = inst_pkgs[0]
-
-				# Dependencies of virtuals are considered to have the
-				# same depth as the virtual itself.
-				dep = Dependency(atom=atom,
-					blocker=atom.blocker, child=child, depth=virt_dep.depth,
-					parent=virt_pkg, priority=mypriority, root=dep_root,
-					collapsed_parent=pkg, collapsed_priority=dep_priority)
-
-				ignored = False
-				if not atom.blocker and \
-					not recurse_satisfied and \
-					mypriority.satisfied and \
-					mypriority.satisfied.visible and \
-					dep.child is not None and \
-					not dep.child.installed and \
-					self._dynamic_config._slot_pkg_map[dep.child.root].get(
-					dep.child.slot_atom) is None:
-					myarg = None
-					try:
-						myarg = next(self._iter_atoms_for_pkg(dep.child), None)
-					except InvalidDependString:
-						if not dep.child.installed:
-							raise
-
-					if myarg is None:
-						ignored = True
-						dep.child = None
-						self._dynamic_config._ignored_deps.append(dep)
-
-				if not ignored:
-					if dep_priority.ignored and \
-						not self._dynamic_config._traverse_ignored_deps:
-						if is_virt and dep.child is not None:
-							traversed_virt_pkgs.add(dep.child)
-						dep.child = None
-						self._dynamic_config._ignored_deps.append(dep)
-					else:
-						if not self._add_dep(dep,
-							allow_unsatisfied=allow_unsatisfied):
-							return 0
-						if is_virt and dep.child is not None:
-							traversed_virt_pkgs.add(dep.child)
-
-		if debug:
-			writemsg_level("\nExiting... %s\n" % (pkg,),
-				noiselevel=-1, level=logging.DEBUG)
-
-		return 1
-
-	def _minimize_children(self, parent, priority, root_config, atoms):
-		"""
-		Selects packages to satisfy the given atoms, and minimizes the
-		number of selected packages. This serves to identify and eliminate
-		redundant package selections when multiple atoms happen to specify
-		a version range.
-		"""
-
-		atom_pkg_map = {}
-
-		for atom in atoms:
-			if atom.blocker:
-				yield (atom, None)
-				continue
-			dep_pkg, existing_node = self._select_package(
-				root_config.root, atom)
-			if dep_pkg is None:
-				yield (atom, None)
-				continue
-			atom_pkg_map[atom] = dep_pkg
-
-		if len(atom_pkg_map) < 2:
-			for item in atom_pkg_map.items():
-				yield item
-			return
-
-		cp_pkg_map = {}
-		pkg_atom_map = {}
-		for atom, pkg in atom_pkg_map.items():
-			pkg_atom_map.setdefault(pkg, set()).add(atom)
-			cp_pkg_map.setdefault(pkg.cp, set()).add(pkg)
-
-		for pkgs in cp_pkg_map.values():
-			if len(pkgs) < 2:
-				for pkg in pkgs:
-					for atom in pkg_atom_map[pkg]:
-						yield (atom, pkg)
-				continue
-
-			# Use a digraph to identify and eliminate any
-			# redundant package selections.
-			atom_pkg_graph = digraph()
-			cp_atoms = set()
-			for pkg1 in pkgs:
-				for atom in pkg_atom_map[pkg1]:
-					cp_atoms.add(atom)
-					atom_pkg_graph.add(pkg1, atom)
-					atom_set = InternalPackageSet(initial_atoms=(atom,),
-						allow_repo=True)
-					for pkg2 in pkgs:
-						if pkg2 is pkg1:
-							continue
-						if atom_set.findAtomForPackage(pkg2, modified_use=self._pkg_use_enabled(pkg2)):
-							atom_pkg_graph.add(pkg2, atom)
-
-			for pkg in pkgs:
-				eliminate_pkg = True
-				for atom in atom_pkg_graph.parent_nodes(pkg):
-					if len(atom_pkg_graph.child_nodes(atom)) < 2:
-						eliminate_pkg = False
-						break
-				if eliminate_pkg:
-					atom_pkg_graph.remove(pkg)
-
-			# Yield ~, =*, < and <= atoms first, since those are more likely to
-			# cause slot conflicts, and we want those atoms to be displayed
-			# in the resulting slot conflict message (see bug #291142).
-			# Give similar treatment to slot/sub-slot atoms.
-			conflict_atoms = []
-			normal_atoms = []
-			abi_atoms = []
-			for atom in cp_atoms:
-				if atom.slot_operator_built:
-					abi_atoms.append(atom)
-					continue
-				conflict = False
-				for child_pkg in atom_pkg_graph.child_nodes(atom):
-					existing_node, matches = \
-						self._check_slot_conflict(child_pkg, atom)
-					if existing_node and not matches:
-						conflict = True
-						break
-				if conflict:
-					conflict_atoms.append(atom)
-				else:
-					normal_atoms.append(atom)
-
-			for atom in chain(abi_atoms, conflict_atoms, normal_atoms):
-				child_pkgs = atom_pkg_graph.child_nodes(atom)
-				# if more than one child, yield highest version
-				if len(child_pkgs) > 1:
-					child_pkgs.sort()
-				yield (atom, child_pkgs[-1])
-
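
The elimination step above can be sketched without portage's digraph class: a selected package is redundant when every atom it satisfies is also satisfied by another selected package. Names below are illustrative:

def eliminate_redundant(atom_to_pkgs):
    pkgs = set().union(*atom_to_pkgs.values())
    for pkg in sorted(pkgs):
        if all(len(satisfiers) > 1
               for satisfiers in atom_to_pkgs.values()
               if pkg in satisfiers):
            for satisfiers in atom_to_pkgs.values():
                satisfiers.discard(pkg)
    return atom_to_pkgs

# foo-1 only satisfies '>=foo-1', which foo-2 also satisfies, so foo-1
# is dropped and a single package covers both atoms.
print(eliminate_redundant(
    {">=foo-1": {"foo-1", "foo-2"}, "=foo-2": {"foo-2"}}))
# {'>=foo-1': {'foo-2'}, '=foo-2': {'foo-2'}}
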
-	def _queue_disjunctive_deps(self, pkg, dep_root, dep_priority, dep_struct):
-		"""
-		Queue disjunctive (virtual and ||) deps in self._dynamic_config._dep_disjunctive_stack.
-		Yields non-disjunctive deps. Raises InvalidDependString when 
-		necessary.
-		"""
-		for x in dep_struct:
-			if isinstance(x, list):
-				if x and x[0] == "||":
-					self._queue_disjunction(pkg, dep_root, dep_priority, [x])
-				else:
-					for y in self._queue_disjunctive_deps(
-						pkg, dep_root, dep_priority, x):
-						yield y
-			else:
-				# Note: Eventually this will check for PROPERTIES=virtual
-				# or whatever other metadata gets implemented for this
-				# purpose.
-				if x.cp.startswith('virtual/'):
-					self._queue_disjunction(pkg, dep_root, dep_priority, [x])
-				else:
-					yield x
-
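
The generator above walks an opconvert'ed dependency structure; a reduced sketch follows, with the virtual/ special case omitted for brevity (atoms are illustrative strings): '||' groups are queued for later disjunction handling, everything else is yielded through.

def split_disjunctive(dep_struct, queued):
    for x in dep_struct:
        if isinstance(x, list):
            if x and x[0] == "||":
                queued.append(x)
            else:
                for y in split_disjunctive(x, queued):
                    yield y
        else:
            yield x

queued = []
flat = list(split_disjunctive(
    ["dev-libs/a", ["||", "x11-libs/b", "x11-libs/c"]], queued))
print(flat)    # ['dev-libs/a']
print(queued)  # [['||', 'x11-libs/b', 'x11-libs/c']]
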
-	def _queue_disjunction(self, pkg, dep_root, dep_priority, dep_struct):
-		self._dynamic_config._dep_disjunctive_stack.append(
-			(pkg, dep_root, dep_priority, dep_struct))
-
-	def _pop_disjunction(self, allow_unsatisfied):
-		"""
-		Pop one disjunctive dep from self._dynamic_config._dep_disjunctive_stack, and use it to
-		populate self._dynamic_config._dep_stack.
-		"""
-		pkg, dep_root, dep_priority, dep_struct = \
-			self._dynamic_config._dep_disjunctive_stack.pop()
-		if not self._add_pkg_dep_string(
-			pkg, dep_root, dep_priority, dep_struct, allow_unsatisfied):
-			return 0
-		return 1
-
-	def _priority(self, **kwargs):
-		if "remove" in self._dynamic_config.myparams:
-			priority_constructor = UnmergeDepPriority
-		else:
-			priority_constructor = DepPriority
-		return priority_constructor(**kwargs)
-
-	def _dep_expand(self, root_config, atom_without_category):
-		"""
-		@param root_config: a root config instance
-		@type root_config: RootConfig
-		@param atom_without_category: an atom without a category component
-		@type atom_without_category: String
-		@rtype: list
-		@return: a list of atoms containing categories (possibly empty)
-		"""
-		null_cp = portage.dep_getkey(insert_category_into_atom(
-			atom_without_category, "null"))
-		cat, atom_pn = portage.catsplit(null_cp)
-
-		dbs = self._dynamic_config._filtered_trees[root_config.root]["dbs"]
-		categories = set()
-		for db, pkg_type, built, installed, db_keys in dbs:
-			for cat in db.categories:
-				if db.cp_list("%s/%s" % (cat, atom_pn)):
-					categories.add(cat)
-
-		deps = []
-		for cat in categories:
-			deps.append(Atom(insert_category_into_atom(
-				atom_without_category, cat), allow_repo=True))
-		return deps
-
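
A rough standalone equivalent of the expansion above, assuming a plain category -> package-name mapping in place of the db objects:

def dep_expand(pn, categories):
    """Expand a category-less name like 'vim' into full atoms."""
    return ["%s/%s" % (cat, pn)
            for cat, pkgs in sorted(categories.items()) if pn in pkgs]

cats = {"app-editors": {"vim", "nano"}, "app-vim": {"vim-spell-en"}}
print(dep_expand("vim", cats))  # ['app-editors/vim']
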
-	def _have_new_virt(self, root, atom_cp):
-		ret = False
-		for db, pkg_type, built, installed, db_keys in \
-			self._dynamic_config._filtered_trees[root]["dbs"]:
-			if db.cp_list(atom_cp):
-				ret = True
-				break
-		return ret
-
-	def _iter_atoms_for_pkg(self, pkg):
-		depgraph_sets = self._dynamic_config.sets[pkg.root]
-		atom_arg_map = depgraph_sets.atom_arg_map
-		for atom in depgraph_sets.atoms.iterAtomsForPackage(pkg):
-			if atom.cp != pkg.cp and \
-				self._have_new_virt(pkg.root, atom.cp):
-				continue
-			visible_pkgs = \
-				self._dynamic_config._visible_pkgs[pkg.root].match_pkgs(atom)
-			visible_pkgs.reverse() # descending order
-			higher_slot = None
-			for visible_pkg in visible_pkgs:
-				if visible_pkg.cp != atom.cp:
-					continue
-				if pkg >= visible_pkg:
-					# This is descending order, and we're not
-					# interested in any versions <= pkg given.
-					break
-				if pkg.slot_atom != visible_pkg.slot_atom:
-					higher_slot = visible_pkg
-					break
-			if higher_slot is not None:
-				continue
-			for arg in atom_arg_map[(atom, pkg.root)]:
-				if isinstance(arg, PackageArg) and \
-					arg.package != pkg:
-					continue
-				yield arg, atom
-
-	def select_files(self, myfiles):
-		"""Given a list of .tbz2s, .ebuilds, sets, and deps, populate
-		self._dynamic_config._initial_arg_list and call self._resolve to create
-		the appropriate depgraph and return a favorites list."""
-		self._load_vdb()
-		debug = "--debug" in self._frozen_config.myopts
-		root_config = self._frozen_config.roots[self._frozen_config.target_root]
-		sets = root_config.sets
-		depgraph_sets = self._dynamic_config.sets[root_config.root]
-		myfavorites = []
-		eroot = root_config.root
-		root = root_config.settings['ROOT']
-		vardb = self._frozen_config.trees[eroot]["vartree"].dbapi
-		real_vardb = self._frozen_config._trees_orig[eroot]["vartree"].dbapi
-		portdb = self._frozen_config.trees[eroot]["porttree"].dbapi
-		bindb = self._frozen_config.trees[eroot]["bintree"].dbapi
-		pkgsettings = self._frozen_config.pkgsettings[eroot]
-		args = []
-		onlydeps = "--onlydeps" in self._frozen_config.myopts
-		lookup_owners = []
-		for x in myfiles:
-			ext = os.path.splitext(x)[1]
-			if ext == ".tbz2":
-				if not os.path.exists(x):
-					if os.path.exists(
-						os.path.join(pkgsettings["PKGDIR"], "All", x)):
-						x = os.path.join(pkgsettings["PKGDIR"], "All", x)
-					elif os.path.exists(
-						os.path.join(pkgsettings["PKGDIR"], x)):
-						x = os.path.join(pkgsettings["PKGDIR"], x)
-					else:
-						writemsg("\n\n!!! Binary package '"+str(x)+"' does not exist.\n", noiselevel=-1)
-						writemsg("!!! Please ensure the tbz2 exists as specified.\n\n", noiselevel=-1)
-						return 0, myfavorites
-				mytbz2 = portage.xpak.tbz2(x)
-				mykey = None
-				cat = mytbz2.getfile("CATEGORY")
-				if cat is not None:
-					cat = _unicode_decode(cat.strip(),
-						encoding=_encodings['repo.content'])
-					mykey = cat + "/" + os.path.basename(x)[:-5]
-
-				if mykey is None:
-					writemsg(colorize("BAD", "\n*** Package is missing CATEGORY metadata: %s.\n\n" % x), noiselevel=-1)
-					self._dynamic_config._skip_restart = True
-					return 0, myfavorites
-				elif os.path.realpath(x) != \
-					os.path.realpath(bindb.bintree.getname(mykey)):
-					writemsg(colorize("BAD", "\n*** You need to adjust PKGDIR to emerge this package.\n\n"), noiselevel=-1)
-					self._dynamic_config._skip_restart = True
-					return 0, myfavorites
-
-				pkg = self._pkg(mykey, "binary", root_config,
-					onlydeps=onlydeps)
-				args.append(PackageArg(arg=x, package=pkg,
-					root_config=root_config))
-			elif ext == ".ebuild":
-				ebuild_path = portage.util.normalize_path(os.path.abspath(x))
-				pkgdir = os.path.dirname(ebuild_path)
-				tree_root = os.path.dirname(os.path.dirname(pkgdir))
-				cp = pkgdir[len(tree_root)+1:]
-				e = portage.exception.PackageNotFound(
-					("%s is not in a valid portage tree " + \
-					"hierarchy or does not exist") % x)
-				if not portage.isvalidatom(cp):
-					raise e
-				cat = portage.catsplit(cp)[0]
-				mykey = cat + "/" + os.path.basename(ebuild_path[:-7])
-				if not portage.isvalidatom("="+mykey):
-					raise e
-				ebuild_path = portdb.findname(mykey)
-				if ebuild_path:
-					if ebuild_path != os.path.join(os.path.realpath(tree_root),
-						cp, os.path.basename(ebuild_path)):
-						writemsg(colorize("BAD", "\n*** You need to adjust PORTDIR or PORTDIR_OVERLAY to emerge this package.\n\n"), noiselevel=-1)
-						self._dynamic_config._skip_restart = True
-						return 0, myfavorites
-					if mykey not in portdb.xmatch(
-						"match-visible", portage.cpv_getkey(mykey)):
-						writemsg(colorize("BAD", "\n*** You are emerging a masked package. It is MUCH better to use\n"), noiselevel=-1)
-						writemsg(colorize("BAD", "*** /etc/portage/package.* to accomplish this. See portage(5) man\n"), noiselevel=-1)
-						writemsg(colorize("BAD", "*** page for details.\n"), noiselevel=-1)
-						countdown(int(self._frozen_config.settings["EMERGE_WARNING_DELAY"]),
-							"Continuing...")
-				else:
-					raise portage.exception.PackageNotFound(
-						"%s is not in a valid portage tree hierarchy or does not exist" % x)
-				pkg = self._pkg(mykey, "ebuild", root_config,
-					onlydeps=onlydeps, myrepo=portdb.getRepositoryName(
-					os.path.dirname(os.path.dirname(os.path.dirname(ebuild_path)))))
-				args.append(PackageArg(arg=x, package=pkg,
-					root_config=root_config))
-			elif x.startswith(os.path.sep):
-				if not x.startswith(eroot):
-					portage.writemsg(("\n\n!!! '%s' does not start with" + \
-						" $EROOT.\n") % x, noiselevel=-1)
-					self._dynamic_config._skip_restart = True
-					return 0, []
-				# Queue these up since it's most efficient to handle
-				# multiple files in a single iter_owners() call.
-				lookup_owners.append(x)
-			elif x.startswith("." + os.sep) or \
-				x.startswith(".." + os.sep):
-				f = os.path.abspath(x)
-				if not f.startswith(eroot):
-					portage.writemsg(("\n\n!!! '%s' (resolved from '%s') does not start with" + \
-						" $EROOT.\n") % (f, x), noiselevel=-1)
-					self._dynamic_config._skip_restart = True
-					return 0, []
-				lookup_owners.append(f)
-			else:
-				if x in ("system", "world"):
-					x = SETPREFIX + x
-				if x.startswith(SETPREFIX):
-					s = x[len(SETPREFIX):]
-					if s not in sets:
-						raise portage.exception.PackageSetNotFound(s)
-					if s in depgraph_sets.sets:
-						continue
-					pset = sets[s]
-					depgraph_sets.sets[s] = pset
-					args.append(SetArg(arg=x, pset=pset,
-						root_config=root_config))
-					continue
-				if not is_valid_package_atom(x, allow_repo=True):
-					portage.writemsg("\n\n!!! '%s' is not a valid package atom.\n" % x,
-						noiselevel=-1)
-					portage.writemsg("!!! Please check ebuild(5) for full details.\n")
-					portage.writemsg("!!! (Did you specify a version but forget to prefix with '='?)\n")
-					self._dynamic_config._skip_restart = True
-					return (0, [])
-				# Don't expand categories or old-style virtuals here unless
-				# necessary. Expansion of old-style virtuals here causes at
-				# least the following problems:
-				#   1) It's more difficult to determine which set(s) an atom
-				#      came from, if any.
-				#   2) It takes away freedom from the resolver to choose other
-				#      possible expansions when necessary.
-				if "/" in x:
-					args.append(AtomArg(arg=x, atom=Atom(x, allow_repo=True),
-						root_config=root_config))
-					continue
-				expanded_atoms = self._dep_expand(root_config, x)
-				installed_cp_set = set()
-				for atom in expanded_atoms:
-					if vardb.cp_list(atom.cp):
-						installed_cp_set.add(atom.cp)
-
-				if len(installed_cp_set) > 1:
-					non_virtual_cps = set()
-					for atom_cp in installed_cp_set:
-						if not atom_cp.startswith("virtual/"):
-							non_virtual_cps.add(atom_cp)
-					if len(non_virtual_cps) == 1:
-						installed_cp_set = non_virtual_cps
-
-				if len(expanded_atoms) > 1 and len(installed_cp_set) == 1:
-					installed_cp = next(iter(installed_cp_set))
-					for atom in expanded_atoms:
-						if atom.cp == installed_cp:
-							available = False
-							for pkg in self._iter_match_pkgs_any(
-								root_config, atom.without_use,
-								onlydeps=onlydeps):
-								if not pkg.installed:
-									available = True
-									break
-							if available:
-								expanded_atoms = [atom]
-								break
-
-				# If a non-virtual package and one or more virtual packages
-				# are in expanded_atoms, use the non-virtual package.
-				if len(expanded_atoms) > 1:
-					number_of_virtuals = 0
-					for expanded_atom in expanded_atoms:
-						if expanded_atom.cp.startswith("virtual/"):
-							number_of_virtuals += 1
-						else:
-							candidate = expanded_atom
-					if len(expanded_atoms) - number_of_virtuals == 1:
-						expanded_atoms = [candidate]
-
-				if len(expanded_atoms) > 1:
-					writemsg("\n\n", noiselevel=-1)
-					ambiguous_package_name(x, expanded_atoms, root_config,
-						self._frozen_config.spinner, self._frozen_config.myopts)
-					self._dynamic_config._skip_restart = True
-					return False, myfavorites
-				if expanded_atoms:
-					atom = expanded_atoms[0]
-				else:
-					null_atom = Atom(insert_category_into_atom(x, "null"),
-						allow_repo=True)
-					cat, atom_pn = portage.catsplit(null_atom.cp)
-					virts_p = root_config.settings.get_virts_p().get(atom_pn)
-					if virts_p:
-						# Allow the depgraph to choose which virtual.
-						atom = Atom(null_atom.replace('null/', 'virtual/', 1),
-							allow_repo=True)
-					else:
-						atom = null_atom
-
-				if atom.use and atom.use.conditional:
-					writemsg(
-						("\n\n!!! '%s' contains a conditional " + \
-						"which is not allowed.\n") % (x,), noiselevel=-1)
-					writemsg("!!! Please check ebuild(5) for full details.\n")
-					self._dynamic_config._skip_restart = True
-					return (0, [])
-
-				args.append(AtomArg(arg=x, atom=atom,
-					root_config=root_config))
-
-		if lookup_owners:
-			relative_paths = []
-			search_for_multiple = False
-			if len(lookup_owners) > 1:
-				search_for_multiple = True
-
-			for x in lookup_owners:
-				if not search_for_multiple and os.path.isdir(x):
-					search_for_multiple = True
-				relative_paths.append(x[len(root)-1:])
-
-			owners = set()
-			for pkg, relative_path in \
-				real_vardb._owners.iter_owners(relative_paths):
-				owners.add(pkg.mycpv)
-				if not search_for_multiple:
-					break
-
-			if not owners:
-				portage.writemsg(("\n\n!!! '%s' is not claimed " + \
-					"by any package.\n") % lookup_owners[0], noiselevel=-1)
-				self._dynamic_config._skip_restart = True
-				return 0, []
-
-			for cpv in owners:
-				pkg = vardb._pkg_str(cpv, None)
-				atom = Atom("%s:%s" % (pkg.cp, pkg.slot))
-				args.append(AtomArg(arg=atom, atom=atom,
-					root_config=root_config))
-
-		if "--update" in self._frozen_config.myopts:
-			# In some cases, the greedy slots behavior can pull in a slot that
-			# the user would want to uninstall due to it being blocked by a
-			# newer version in a different slot. Therefore, it's necessary to
-			# detect and discard any that should be uninstalled. Each time
-			# that arguments are updated, package selections are repeated in
-			# order to ensure consistency with the current arguments:
-			#
-			#  1) Initialize args
-			#  2) Select packages and generate initial greedy atoms
-			#  3) Update args with greedy atoms
-			#  4) Select packages and generate greedy atoms again, while
-			#     accounting for any blockers between selected packages
-			#  5) Update args with revised greedy atoms
-
-			self._set_args(args)
-			greedy_args = []
-			for arg in args:
-				greedy_args.append(arg)
-				if not isinstance(arg, AtomArg):
-					continue
-				for atom in self._greedy_slots(arg.root_config, arg.atom):
-					greedy_args.append(
-						AtomArg(arg=arg.arg, atom=atom,
-							root_config=arg.root_config))
-
-			self._set_args(greedy_args)
-			del greedy_args
-
-			# Revise greedy atoms, accounting for any blockers
-			# between selected packages.
-			revised_greedy_args = []
-			for arg in args:
-				revised_greedy_args.append(arg)
-				if not isinstance(arg, AtomArg):
-					continue
-				for atom in self._greedy_slots(arg.root_config, arg.atom,
-					blocker_lookahead=True):
-					revised_greedy_args.append(
-						AtomArg(arg=arg.arg, atom=atom,
-							root_config=arg.root_config))
-			args = revised_greedy_args
-			del revised_greedy_args
-
-		args.extend(self._gen_reinstall_sets())
-		self._set_args(args)
-
-		myfavorites = set(myfavorites)
-		for arg in args:
-			if isinstance(arg, (AtomArg, PackageArg)):
-				myfavorites.add(arg.atom)
-			elif isinstance(arg, SetArg):
-				if not arg.internal:
-					myfavorites.add(arg.arg)
-		myfavorites = list(myfavorites)
-
-		if debug:
-			portage.writemsg("\n", noiselevel=-1)
-		# Order needs to be preserved since a feature of --nodeps
-		# is to allow the user to force a specific merge order.
-		self._dynamic_config._initial_arg_list = args[:]
-
-		return self._resolve(myfavorites)
-
-	def _gen_reinstall_sets(self):
-
-		atom_list = []
-		for root, atom in self._rebuild.rebuild_list:
-			atom_list.append((root, '__auto_rebuild__', atom))
-		for root, atom in self._rebuild.reinstall_list:
-			atom_list.append((root, '__auto_reinstall__', atom))
-		for root, atom in self._dynamic_config._slot_operator_replace_installed:
-			atom_list.append((root, '__auto_slot_operator_replace_installed__', atom))
-
-		set_dict = {}
-		for root, set_name, atom in atom_list:
-			set_dict.setdefault((root, set_name), []).append(atom)
-
-		for (root, set_name), atoms in set_dict.items():
-			yield SetArg(arg=(SETPREFIX + set_name),
-				# Set reset_depth=False here, since we don't want these
-				# special sets to interact with depth calculations (see
-				# the emerge --deep=DEPTH option), though we want them
-				# to behave like normal arguments in most other respects.
-				pset=InternalPackageSet(initial_atoms=atoms),
-				force_reinstall=True,
-				internal=True,
-				reset_depth=False,
-				root_config=self._frozen_config.roots[root])
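-
-		# A minimal sketch (toy atom strings, not the real SetArg or
-		# InternalPackageSet classes) of the (root, set_name) grouping above:
-		#
-		#     >>> atom_list = [("/", "__auto_rebuild__", "x11-libs/gtk+"),
-		#     ...     ("/", "__auto_rebuild__", "dev-libs/glib")]
-		#     >>> set_dict = {}
-		#     >>> for root, set_name, atom in atom_list:
-		#     ...     set_dict.setdefault((root, set_name), []).append(atom)
-		#     >>> set_dict[("/", "__auto_rebuild__")]
-		#     ['x11-libs/gtk+', 'dev-libs/glib']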
-
-	def _resolve(self, myfavorites):
-		"""Given self._dynamic_config._initial_arg_list, pull in the root nodes,
-		call self._create_graph to process their deps, and return
-		a favorites list."""
-		debug = "--debug" in self._frozen_config.myopts
-		onlydeps = "--onlydeps" in self._frozen_config.myopts
-		myroot = self._frozen_config.target_root
-		pkgsettings = self._frozen_config.pkgsettings[myroot]
-		pprovideddict = pkgsettings.pprovideddict
-		virtuals = pkgsettings.getvirtuals()
-		args = self._dynamic_config._initial_arg_list[:]
-
-		for arg in self._expand_set_args(args, add_to_digraph=True):
-			for atom in arg.pset.getAtoms():
-				self._spinner_update()
-				dep = Dependency(atom=atom, onlydeps=onlydeps,
-					root=myroot, parent=arg)
-				try:
-					pprovided = pprovideddict.get(atom.cp)
-					if pprovided and portage.match_from_list(atom, pprovided):
-						# A provided package has been specified on the command line.
-						self._dynamic_config._pprovided_args.append((arg, atom))
-						continue
-					if isinstance(arg, PackageArg):
-						if not self._add_pkg(arg.package, dep) or \
-							not self._create_graph():
-							if not self.need_restart():
-								sys.stderr.write(("\n\n!!! Problem " + \
-									"resolving dependencies for %s\n") % \
-									arg.arg)
-							return 0, myfavorites
-						continue
-					if debug:
-						writemsg_level("\n      Arg: %s\n     Atom: %s\n" %
-							(arg, atom), noiselevel=-1, level=logging.DEBUG)
-					pkg, existing_node = self._select_package(
-						myroot, atom, onlydeps=onlydeps)
-					if not pkg:
-						pprovided_match = False
-						for virt_choice in virtuals.get(atom.cp, []):
-							expanded_atom = portage.dep.Atom(
-								atom.replace(atom.cp, virt_choice.cp, 1))
-							pprovided = pprovideddict.get(expanded_atom.cp)
-							if pprovided and \
-								portage.match_from_list(expanded_atom, pprovided):
-								# A provided package has been
-								# specified on the command line.
-								self._dynamic_config._pprovided_args.append((arg, atom))
-								pprovided_match = True
-								break
-						if pprovided_match:
-							continue
-
-						if not (isinstance(arg, SetArg) and \
-							arg.name in ("selected", "system", "world")):
-							self._dynamic_config._unsatisfied_deps_for_display.append(
-								((myroot, atom), {"myparent" : arg}))
-							return 0, myfavorites
-
-						self._dynamic_config._missing_args.append((arg, atom))
-						continue
-					if atom.cp != pkg.cp:
-						# For old-style virtuals, we need to repeat the
-						# package.provided check against the selected package.
-						expanded_atom = atom.replace(atom.cp, pkg.cp)
-						pprovided = pprovideddict.get(pkg.cp)
-						if pprovided and \
-							portage.match_from_list(expanded_atom, pprovided):
-							# A provided package has been
-							# specified on the command line.
-							self._dynamic_config._pprovided_args.append((arg, atom))
-							continue
-					if pkg.installed and \
-						"selective" not in self._dynamic_config.myparams and \
-						not self._frozen_config.excluded_pkgs.findAtomForPackage(
-						pkg, modified_use=self._pkg_use_enabled(pkg)):
-						self._dynamic_config._unsatisfied_deps_for_display.append(
-							((myroot, atom), {"myparent" : arg}))
-						# Previous behavior was to bail out in this case, but
-						# since the dep is satisfied by the installed package,
-						# it's more friendly to continue building the graph
-						# and just show a warning message. Therefore, only bail
-						# out here if the atom is not from either the system or
-						# world set.
-						if not (isinstance(arg, SetArg) and \
-							arg.name in ("selected", "system", "world")):
-							return 0, myfavorites
-
-					# Add the selected package to the graph as soon as possible
-					# so that later dep_check() calls can use it as feedback
-					# for making more consistent atom selections.
-					if not self._add_pkg(pkg, dep):
-						if self.need_restart():
-							pass
-						elif isinstance(arg, SetArg):
-							writemsg(("\n\n!!! Problem resolving " + \
-								"dependencies for %s from %s\n") % \
-								(atom, arg.arg), noiselevel=-1)
-						else:
-							writemsg(("\n\n!!! Problem resolving " + \
-								"dependencies for %s\n") % \
-								(atom,), noiselevel=-1)
-						return 0, myfavorites
-
-				except SystemExit:
-					raise # needed, otherwise we couldn't exit
-				except Exception as e:
-					writemsg("\n\n!!! Problem in '%s' dependencies.\n" % atom, noiselevel=-1)
-					writemsg("!!! %s %s\n" % (str(e), str(getattr(e, "__module__", None))))
-					raise
-
-		# Now that the root packages have been added to the graph,
-		# process the dependencies.
-		if not self._create_graph():
-			return 0, myfavorites
-
-		try:
-			self.altlist()
-		except self._unknown_internal_error:
-			return False, myfavorites
-
-		if (self._dynamic_config._slot_collision_info and
-			not self._accept_blocker_conflicts()) or \
-			(self._dynamic_config._allow_backtracking and
-			"slot conflict" in self._dynamic_config._backtrack_infos):
-			return False, myfavorites
-
-		if self._rebuild.trigger_rebuilds():
-			backtrack_infos = self._dynamic_config._backtrack_infos
-			config = backtrack_infos.setdefault("config", {})
-			config["rebuild_list"] = self._rebuild.rebuild_list
-			config["reinstall_list"] = self._rebuild.reinstall_list
-			self._dynamic_config._need_restart = True
-			return False, myfavorites
-
-		if "config" in self._dynamic_config._backtrack_infos and \
-			("slot_operator_mask_built" in self._dynamic_config._backtrack_infos["config"] or
-			"slot_operator_replace_installed" in self._dynamic_config._backtrack_infos["config"]) and \
-			self.need_restart():
-			return False, myfavorites
-
-		# Any failures except those due to autounmask *alone* should return
-		# before this point, since the success_without_autounmask flag that's
-		# set below is reserved for cases where there are *zero* other
-		# problems. For reference, see backtrack_depgraph, where it skips the
-		# get_best_run() call when success_without_autounmask is True.
-
-		digraph_nodes = self._dynamic_config.digraph.nodes
-
-		if any(x in digraph_nodes for x in
-			self._dynamic_config._needed_unstable_keywords) or \
-			any(x in digraph_nodes for x in
-			self._dynamic_config._needed_p_mask_changes) or \
-			any(x in digraph_nodes for x in
-			self._dynamic_config._needed_use_config_changes) or \
-			any(x in digraph_nodes for x in
-			self._dynamic_config._needed_license_changes):
-			# We failed if the user needs to change the configuration.
-			self._dynamic_config._success_without_autounmask = True
-			return False, myfavorites
-
-		# We return True here unless we are missing binaries.
-		return (True, myfavorites)
-
-	def _set_args(self, args):
-		"""
-		Create the "__non_set_args__" package set from atoms and packages given as
-		arguments. This method can be called multiple times if necessary.
-		The package selection cache is automatically invalidated, since
-		arguments influence package selections.
-		"""
-
-		set_atoms = {}
-		non_set_atoms = {}
-		for root in self._dynamic_config.sets:
-			depgraph_sets = self._dynamic_config.sets[root]
-			depgraph_sets.sets.setdefault('__non_set_args__',
-				InternalPackageSet(allow_repo=True)).clear()
-			depgraph_sets.atoms.clear()
-			depgraph_sets.atom_arg_map.clear()
-			set_atoms[root] = []
-			non_set_atoms[root] = []
-
-		# We don't add set args to the digraph here since that
-		# happens at a later stage and we don't want to make
-		# any state changes here that aren't reversed by
-		# another call to this method.
-		for arg in self._expand_set_args(args, add_to_digraph=False):
-			atom_arg_map = self._dynamic_config.sets[
-				arg.root_config.root].atom_arg_map
-			if isinstance(arg, SetArg):
-				atom_group = set_atoms[arg.root_config.root]
-			else:
-				atom_group = non_set_atoms[arg.root_config.root]
-
-			for atom in arg.pset.getAtoms():
-				atom_group.append(atom)
-				atom_key = (atom, arg.root_config.root)
-				refs = atom_arg_map.get(atom_key)
-				if refs is None:
-					refs = []
-					atom_arg_map[atom_key] = refs
-				if arg not in refs:
-					refs.append(arg)
-
-		for root in self._dynamic_config.sets:
-			depgraph_sets = self._dynamic_config.sets[root]
-			depgraph_sets.atoms.update(chain(set_atoms.get(root, []),
-				non_set_atoms.get(root, [])))
-			depgraph_sets.sets['__non_set_args__'].update(
-				non_set_atoms.get(root, []))
-
-		# Invalidate the package selection cache, since
-		# arguments influence package selections.
-		self._dynamic_config._highest_pkg_cache.clear()
-		for trees in self._dynamic_config._filtered_trees.values():
-			trees["porttree"].dbapi._clear_cache()
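-
-		# A minimal sketch (hypothetical toy names, not the real classes) of
-		# how atom_arg_map collects every argument that references an atom,
-		# keyed by (atom, root); note the membership test must run for every
-		# arg, not only when the list is first created:
-		#
-		#     >>> atom_arg_map = {}
-		#     >>> for arg in ("world", "cmdline"):
-		#     ...     refs = atom_arg_map.setdefault(("dev-lang/python", "/"), [])
-		#     ...     if arg not in refs:
-		#     ...         refs.append(arg)
-		#     >>> atom_arg_map[("dev-lang/python", "/")]
-		#     ['world', 'cmdline']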
-
-	def _greedy_slots(self, root_config, atom, blocker_lookahead=False):
-		"""
-		Return a list of slot atoms corresponding to installed slots that
-		differ from the slot of the highest visible match. When
-		blocker_lookahead is True, slot atoms that would trigger a blocker
-		conflict are automatically discarded, potentially allowing automatic
-		uninstallation of older slots when appropriate.
-		"""
-		highest_pkg, in_graph = self._select_package(root_config.root, atom)
-		if highest_pkg is None:
-			return []
-		vardb = root_config.trees["vartree"].dbapi
-		slots = set()
-		for cpv in vardb.match(atom):
-			# don't mix new virtuals with old virtuals
-			pkg = vardb._pkg_str(cpv, None)
-			if pkg.cp == highest_pkg.cp:
-				slots.add(pkg.slot)
-
-		slots.add(highest_pkg.slot)
-		if len(slots) == 1:
-			return []
-		greedy_pkgs = []
-		slots.remove(highest_pkg.slot)
-		while slots:
-			slot = slots.pop()
-			slot_atom = portage.dep.Atom("%s:%s" % (highest_pkg.cp, slot))
-			pkg, in_graph = self._select_package(root_config.root, slot_atom)
-			if pkg is not None and \
-				pkg.cp == highest_pkg.cp and pkg < highest_pkg:
-				greedy_pkgs.append(pkg)
-		if not greedy_pkgs:
-			return []
-		if not blocker_lookahead:
-			return [pkg.slot_atom for pkg in greedy_pkgs]
-
-		blockers = {}
-		blocker_dep_keys = Package._dep_keys
-		for pkg in greedy_pkgs + [highest_pkg]:
-			dep_str = " ".join(pkg.metadata[k] for k in blocker_dep_keys)
-			try:
-				selected_atoms = self._select_atoms(
-					pkg.root, dep_str, self._pkg_use_enabled(pkg),
-					parent=pkg, strict=True)
-			except portage.exception.InvalidDependString:
-				continue
-			blocker_atoms = []
-			for atoms in selected_atoms.values():
-				blocker_atoms.extend(x for x in atoms if x.blocker)
-			blockers[pkg] = InternalPackageSet(initial_atoms=blocker_atoms)
-
-		if highest_pkg not in blockers:
-			return []
-
-		# filter packages with invalid deps
-		greedy_pkgs = [pkg for pkg in greedy_pkgs if pkg in blockers]
-
-		# filter packages that conflict with highest_pkg
-		greedy_pkgs = [pkg for pkg in greedy_pkgs if not \
-			(blockers[highest_pkg].findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)) or \
-			blockers[pkg].findAtomForPackage(highest_pkg, modified_use=self._pkg_use_enabled(highest_pkg)))]
-
-		if not greedy_pkgs:
-			return []
-
-		# If two packages conflict, discard the lower version.
-		discard_pkgs = set()
-		greedy_pkgs.sort(reverse=True)
-		for i in range(len(greedy_pkgs) - 1):
-			pkg1 = greedy_pkgs[i]
-			if pkg1 in discard_pkgs:
-				continue
-			for j in range(i + 1, len(greedy_pkgs)):
-				pkg2 = greedy_pkgs[j]
-				if pkg2 in discard_pkgs:
-					continue
-				if blockers[pkg1].findAtomForPackage(pkg2, modified_use=self._pkg_use_enabled(pkg2)) or \
-					blockers[pkg2].findAtomForPackage(pkg1, modified_use=self._pkg_use_enabled(pkg1)):
-					# pkg1 > pkg2
-					discard_pkgs.add(pkg2)
-
-		return [pkg.slot_atom for pkg in greedy_pkgs \
-			if pkg not in discard_pkgs]
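-
-		# A minimal sketch (toy cpv/slot data, not the real vardb) of the
-		# slot collection above: gather installed slots for the same cp,
-		# then drop the slot of the highest visible match:
-		#
-		#     >>> installed = {"sys-devel/gcc-4.5.3": "4.5",
-		#     ...     "sys-devel/gcc-4.6.2": "4.6"}
-		#     >>> highest_slot = "4.6"
-		#     >>> slots = set(installed.values())
-		#     >>> slots.discard(highest_slot)
-		#     >>> sorted("sys-devel/gcc:%s" % s for s in slots)
-		#     ['sys-devel/gcc:4.5']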
-
-	def _select_atoms_from_graph(self, *pargs, **kwargs):
-		"""
-		Prefer atoms matching packages that have already been
-		added to the graph or those that are installed and have
-		not been scheduled for replacement.
-		"""
-		kwargs["trees"] = self._dynamic_config._graph_trees
-		return self._select_atoms_highest_available(*pargs, **kwargs)
-
-	def _select_atoms_highest_available(self, root, depstring,
-		myuse=None, parent=None, strict=True, trees=None, priority=None):
-		"""This will raise InvalidDependString if necessary. If trees is
-		None then self._dynamic_config._filtered_trees is used."""
-
-		if not isinstance(depstring, list):
-			eapi = None
-			is_valid_flag = None
-			if parent is not None:
-				eapi = parent.metadata['EAPI']
-				if not parent.installed:
-					is_valid_flag = parent.iuse.is_valid_flag
-			depstring = portage.dep.use_reduce(depstring,
-				uselist=myuse, opconvert=True, token_class=Atom,
-				is_valid_flag=is_valid_flag, eapi=eapi)
-
-		if (self._dynamic_config.myparams.get(
-			"ignore_built_slot_operator_deps", "n") == "y" and
-			parent and parent.built):
-			ignore_built_slot_operator_deps(depstring)
-
-		pkgsettings = self._frozen_config.pkgsettings[root]
-		if trees is None:
-			trees = self._dynamic_config._filtered_trees
-		mytrees = trees[root]
-		atom_graph = digraph()
-		if True:
-			# Temporarily disable autounmask so that || preferences
-			# account for masking and USE settings.
-			_autounmask_backup = self._dynamic_config._autounmask
-			self._dynamic_config._autounmask = False
-			# backup state for restoration, in case of recursive
-			# calls to this method
-			backup_state = mytrees.copy()
-			try:
-				# clear state from previous call, in case this
-				# call is recursive (we have a backup, that we
-				# will use to restore it later)
-				mytrees.pop("pkg_use_enabled", None)
-				mytrees.pop("parent", None)
-				mytrees.pop("atom_graph", None)
-				mytrees.pop("priority", None)
-
-				mytrees["pkg_use_enabled"] = self._pkg_use_enabled
-				if parent is not None:
-					mytrees["parent"] = parent
-					mytrees["atom_graph"] = atom_graph
-				if priority is not None:
-					mytrees["priority"] = priority
-
-				mycheck = portage.dep_check(depstring, None,
-					pkgsettings, myuse=myuse,
-					myroot=root, trees=trees)
-			finally:
-				# restore state
-				self._dynamic_config._autounmask = _autounmask_backup
-				mytrees.pop("pkg_use_enabled", None)
-				mytrees.pop("parent", None)
-				mytrees.pop("atom_graph", None)
-				mytrees.pop("priority", None)
-				mytrees.update(backup_state)
-			if not mycheck[0]:
-				raise portage.exception.InvalidDependString(mycheck[1])
-		if parent is None:
-			selected_atoms = mycheck[1]
-		elif parent not in atom_graph:
-			selected_atoms = {parent : mycheck[1]}
-		else:
-			# Recursively traversed virtual dependencies, and their
-			# direct dependencies, are considered to have the same
-			# depth as direct dependencies.
-			if parent.depth is None:
-				virt_depth = None
-			else:
-				virt_depth = parent.depth + 1
-			chosen_atom_ids = frozenset(id(atom) for atom in mycheck[1])
-			selected_atoms = OrderedDict()
-			node_stack = [(parent, None, None)]
-			traversed_nodes = set()
-			while node_stack:
-				node, node_parent, parent_atom = node_stack.pop()
-				traversed_nodes.add(node)
-				if node is parent:
-					k = parent
-				else:
-					if node_parent is parent:
-						if priority is None:
-							node_priority = None
-						else:
-							node_priority = priority.copy()
-					else:
-						# virtuals only have runtime deps
-						node_priority = self._priority(runtime=True)
-
-					k = Dependency(atom=parent_atom,
-						blocker=parent_atom.blocker, child=node,
-						depth=virt_depth, parent=node_parent,
-						priority=node_priority, root=node.root)
-
-				child_atoms = []
-				selected_atoms[k] = child_atoms
-				for atom_node in atom_graph.child_nodes(node):
-					child_atom = atom_node[0]
-					if id(child_atom) not in chosen_atom_ids:
-						continue
-					child_atoms.append(child_atom)
-					for child_node in atom_graph.child_nodes(atom_node):
-						if child_node in traversed_nodes:
-							continue
-						if not portage.match_from_list(
-							child_atom, [child_node]):
-							# Typically this means that the atom
-							# specifies USE deps that are unsatisfied
-							# by the selected package. The caller will
-							# record this as an unsatisfied dependency
-							# when necessary.
-							continue
-						node_stack.append((child_node, node, child_atom))
-
-		return selected_atoms
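-
-		# A minimal sketch (a plain dict standing in for mytrees) of the
-		# backup/mutate/restore pattern used around dep_check() above:
-		#
-		#     >>> state = {"parent": None}
-		#     >>> backup = state.copy()
-		#     >>> try:
-		#     ...     state["parent"] = "pkg"  # temporary keys for dep_check
-		#     ...     raise ValueError("dep_check failed")  # even on error...
-		#     ... except ValueError:
-		#     ...     pass
-		#     ... finally:
-		#     ...     state.clear(); state.update(backup)  # ...state is restored
-		#     >>> state
-		#     {'parent': None}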
-
-	def _expand_virt_from_graph(self, root, atom):
-		if not isinstance(atom, Atom):
-			atom = Atom(atom)
-		graphdb = self._dynamic_config.mydbapi[root]
-		match = graphdb.match_pkgs(atom)
-		if not match:
-			yield atom
-			return
-		pkg = match[-1]
-		if not pkg.cpv.startswith("virtual/"):
-			yield atom
-			return
-		try:
-			rdepend = self._select_atoms_from_graph(
-				pkg.root, pkg.metadata.get("RDEPEND", ""),
-				myuse=self._pkg_use_enabled(pkg),
-				parent=pkg, strict=False)
-		except InvalidDependString as e:
-			writemsg_level("!!! Invalid RDEPEND in " + \
-				"'%svar/db/pkg/%s/RDEPEND': %s\n" % \
-				(pkg.root, pkg.cpv, e),
-				noiselevel=-1, level=logging.ERROR)
-			yield atom
-			return
-
-		for atoms in rdepend.values():
-			for atom in atoms:
-				if hasattr(atom, "_orig_atom"):
-					# Ignore virtual atoms since we're only
-					# interested in expanding the real atoms.
-					continue
-				yield atom
-
-	def _virt_deps_visible(self, pkg, ignore_use=False):
-		"""
-		Assumes pkg is a virtual package. Traverses virtual deps recursively
-		and returns True if all deps are visible, False otherwise. This is
-		useful for checking if it will be necessary to expand virtual slots,
-		for cases like bug #382557.
-		"""
-		try:
-			rdepend = self._select_atoms(
-				pkg.root, pkg.metadata.get("RDEPEND", ""),
-				myuse=self._pkg_use_enabled(pkg),
-				parent=pkg, priority=self._priority(runtime=True))
-		except InvalidDependString as e:
-			if not pkg.installed:
-				raise
-			writemsg_level("!!! Invalid RDEPEND in " + \
-				"'%svar/db/pkg/%s/RDEPEND': %s\n" % \
-				(pkg.root, pkg.cpv, e),
-				noiselevel=-1, level=logging.ERROR)
-			return False
-
-		for atoms in rdepend.values():
-			for atom in atoms:
-				if ignore_use:
-					atom = atom.without_use
-				pkg, existing = self._select_package(
-					pkg.root, atom)
-				if pkg is None or not self._pkg_visibility_check(pkg):
-					return False
-
-		return True
-
-	def _get_dep_chain(self, start_node, target_atom=None,
-		unsatisfied_dependency=False):
-		"""
-		Returns a list of (atom, node_type) pairs that represent a dep chain.
-		If target_atom is None, the first package shown is pkg's parent.
-		If target_atom is not None, the first package shown is pkg.
-		If unsatisfied_dependency is True, the first parent selected is one
-		whose dependency is not satisfied by 'pkg'. This is needed for USE
-		changes. (Does not support target_atom.)
-		"""
-		traversed_nodes = set()
-		dep_chain = []
-		node = start_node
-		child = None
-		all_parents = self._dynamic_config._parent_atoms
-		graph = self._dynamic_config.digraph
-
-		if target_atom is not None and isinstance(node, Package):
-			affecting_use = set()
-			for dep_str in Package._dep_keys:
-				try:
-					affecting_use.update(extract_affecting_use(
-						node.metadata[dep_str], target_atom,
-						eapi=node.metadata["EAPI"]))
-				except InvalidDependString:
-					if not node.installed:
-						raise
-			affecting_use.difference_update(node.use.mask, node.use.force)
-			pkg_name = _unicode_decode("%s") % (node.cpv,)
-			if affecting_use:
-				usedep = []
-				for flag in affecting_use:
-					if flag in self._pkg_use_enabled(node):
-						usedep.append(flag)
-					else:
-						usedep.append("-"+flag)
-				pkg_name += "[%s]" % ",".join(usedep)
-
-			dep_chain.append((pkg_name, node.type_name))
-
-
-		# To build a dep chain for the given package we take
-		# "random" parents from the digraph, except for the
-		# first package, because we want a parent that forced
-		# the corresponding change (i.e. '>=foo-2' instead of 'foo').
-
-		traversed_nodes.add(start_node)
-
-		start_node_parent_atoms = {}
-		for ppkg, patom in all_parents.get(node, []):
-			# Get a list of suitable atoms. For USE deps
-			# (i.e. when unsatisfied_dependency is set) we require
-			# that start_node does not match the atom.
-			if not unsatisfied_dependency or \
-				not InternalPackageSet(initial_atoms=(patom,)).findAtomForPackage(start_node):
-				start_node_parent_atoms.setdefault(patom, []).append(ppkg)
-
-		if start_node_parent_atoms:
-			# If there are parents in all_parents then use one of them.
-			# If not, then this package got pulled in by an Arg and
-			# will be correctly handled by the code that handles later
-			# packages in the dep chain.
-			best_match = best_match_to_list(node.cpv, start_node_parent_atoms)
-
-			child = node
-			for ppkg in start_node_parent_atoms[best_match]:
-				node = ppkg
-				if ppkg in self._dynamic_config._initial_arg_list:
-					# Stop if reached the top level of the dep chain.
-					break
-
-		while node is not None:
-			traversed_nodes.add(node)
-
-			if node not in graph:
-				# The parent is not in the graph due to backtracking.
-				break
-
-			elif isinstance(node, DependencyArg):
-				if graph.parent_nodes(node):
-					node_type = "set"
-				else:
-					node_type = "argument"
-				dep_chain.append((_unicode_decode("%s") % (node,), node_type))
-
-			elif node is not start_node:
-				for ppkg, patom in all_parents[child]:
-					if ppkg == node:
-						if child is start_node and unsatisfied_dependency and \
-							InternalPackageSet(initial_atoms=(patom,)).findAtomForPackage(child):
-							# This atom is satisfied by child, there must be another atom.
-							continue
-						atom = patom.unevaluated_atom
-						break
-
-				dep_strings = set()
-				priorities = graph.nodes[node][0].get(child)
-				if priorities is None:
-					# This edge comes from _parent_atoms and was not added to
-					# the graph, and _parent_atoms does not contain priorities.
-					for k in Package._dep_keys:
-						dep_strings.add(node.metadata[k])
-				else:
-					for priority in priorities:
-						if priority.buildtime:
-							for k in Package._buildtime_keys:
-								dep_strings.add(node.metadata[k])
-						if priority.runtime:
-							dep_strings.add(node.metadata["RDEPEND"])
-						if priority.runtime_post:
-							dep_strings.add(node.metadata["PDEPEND"])
-
-				affecting_use = set()
-				for dep_str in dep_strings:
-					try:
-						affecting_use.update(extract_affecting_use(
-							dep_str, atom, eapi=node.metadata["EAPI"]))
-					except InvalidDependString:
-						if not node.installed:
-							raise
-
-				#Don't show flags as 'affecting' if the user can't change them.
-				affecting_use.difference_update(node.use.mask, \
-					node.use.force)
-
-				pkg_name = _unicode_decode("%s") % (node.cpv,)
-				if affecting_use:
-					usedep = []
-					for flag in affecting_use:
-						if flag in self._pkg_use_enabled(node):
-							usedep.append(flag)
-						else:
-							usedep.append("-"+flag)
-					pkg_name += "[%s]" % ",".join(usedep)
-
-				dep_chain.append((pkg_name, node.type_name))
-
-			# When traversing to parents, prefer arguments over packages
-			# since arguments are root nodes. Never traverse the same
-			# package twice, in order to prevent an infinite loop.
-			child = node
-			selected_parent = None
-			parent_arg = None
-			parent_merge = None
-			parent_unsatisfied = None
-
-			for parent in self._dynamic_config.digraph.parent_nodes(node):
-				if parent in traversed_nodes:
-					continue
-				if isinstance(parent, DependencyArg):
-					parent_arg = parent
-				else:
-					if isinstance(parent, Package) and \
-						parent.operation == "merge":
-						parent_merge = parent
-					if unsatisfied_dependency and node is start_node:
-						# Make sure that pkg doesn't satisfy parent's dependency.
-						# This ensures that we select the correct parent for use
-						# flag changes.
-						for ppkg, atom in all_parents[start_node]:
-							if parent is ppkg:
-								atom_set = InternalPackageSet(initial_atoms=(atom,))
-								if not atom_set.findAtomForPackage(start_node):
-									parent_unsatisfied = parent
-								break
-					else:
-						selected_parent = parent
-
-			if parent_unsatisfied is not None:
-				selected_parent = parent_unsatisfied
-			elif parent_merge is not None:
-				# Prefer parent in the merge list (bug #354747).
-				selected_parent = parent_merge
-			elif parent_arg is not None:
-				if self._dynamic_config.digraph.parent_nodes(parent_arg):
-					selected_parent = parent_arg
-				else:
-					dep_chain.append(
-						(_unicode_decode("%s") % (parent_arg,), "argument"))
-					selected_parent = None
-
-			node = selected_parent
-		return dep_chain
-
-	def _get_dep_chain_as_comment(self, pkg, unsatisfied_dependency=False):
-		dep_chain = self._get_dep_chain(pkg, unsatisfied_dependency=unsatisfied_dependency)
-		display_list = []
-		for node, node_type in dep_chain:
-			if node_type == "argument":
-				display_list.append("required by %s (argument)" % node)
-			else:
-				display_list.append("required by %s" % node)
-
-		msg = "#" + ", ".join(display_list) + "\n"
-		return msg
-
-
-	def _show_unsatisfied_dep(self, root, atom, myparent=None, arg=None,
-		check_backtrack=False, check_autounmask_breakage=False, show_req_use=None):
-		"""
-		When check_backtrack=True, no output is produced and
-		the method either returns or raises _backtrack_mask if
-		a matching package has been masked by backtracking.
-		"""
-		backtrack_mask = False
-		autounmask_broke_use_dep = False
-		atom_set = InternalPackageSet(initial_atoms=(atom.without_use,),
-			allow_repo=True)
-		atom_set_with_use = InternalPackageSet(initial_atoms=(atom,),
-			allow_repo=True)
-		xinfo = '"%s"' % atom.unevaluated_atom
-		if arg:
-			xinfo='"%s"' % arg
-		if isinstance(myparent, AtomArg):
-			xinfo = _unicode_decode('"%s"') % (myparent,)
-		# Discard null/ from failed cpv_expand category expansion.
-		xinfo = xinfo.replace("null/", "")
-		if root != self._frozen_config._running_root.root:
-			xinfo = "%s for %s" % (xinfo, root)
-		masked_packages = []
-		missing_use = []
-		missing_use_adjustable = set()
-		required_use_unsatisfied = []
-		masked_pkg_instances = set()
-		have_eapi_mask = False
-		pkgsettings = self._frozen_config.pkgsettings[root]
-		root_config = self._frozen_config.roots[root]
-		portdb = self._frozen_config.roots[root].trees["porttree"].dbapi
-		vardb = self._frozen_config.roots[root].trees["vartree"].dbapi
-		bindb = self._frozen_config.roots[root].trees["bintree"].dbapi
-		dbs = self._dynamic_config._filtered_trees[root]["dbs"]
-		for db, pkg_type, built, installed, db_keys in dbs:
-			if installed:
-				continue
-			if hasattr(db, "xmatch"):
-				cpv_list = db.xmatch("match-all-cpv-only", atom.without_use)
-			else:
-				cpv_list = db.match(atom.without_use)
-
-			if atom.repo is None and hasattr(db, "getRepositories"):
-				repo_list = db.getRepositories()
-			else:
-				repo_list = [atom.repo]
-
-			# descending order
-			cpv_list.reverse()
-			for cpv in cpv_list:
-				for repo in repo_list:
-					if not db.cpv_exists(cpv, myrepo=repo):
-						continue
-
-					metadata, mreasons  = get_mask_info(root_config, cpv, pkgsettings, db, pkg_type, \
-						built, installed, db_keys, myrepo=repo, _pkg_use_enabled=self._pkg_use_enabled)
-					if metadata is not None and \
-						portage.eapi_is_supported(metadata["EAPI"]):
-						if not repo:
-							repo = metadata.get('repository')
-						pkg = self._pkg(cpv, pkg_type, root_config,
-							installed=installed, myrepo=repo)
-						# pkg.metadata contains calculated USE for ebuilds,
-						# required later for getMissingLicenses.
-						metadata = pkg.metadata
-						if pkg.invalid:
-							# Avoid doing any operations with packages that
-							# have invalid metadata. It would be unsafe at
-							# least because it could trigger unhandled
-							# exceptions in places like check_required_use().
-							masked_packages.append(
-								(root_config, pkgsettings, cpv, repo, metadata, mreasons))
-							continue
-						if not atom_set.findAtomForPackage(pkg,
-							modified_use=self._pkg_use_enabled(pkg)):
-							continue
-						if pkg in self._dynamic_config._runtime_pkg_mask:
-							backtrack_reasons = \
-								self._dynamic_config._runtime_pkg_mask[pkg]
-							mreasons.append('backtracking: %s' % \
-								', '.join(sorted(backtrack_reasons)))
-							backtrack_mask = True
-						if not mreasons and self._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
-							modified_use=self._pkg_use_enabled(pkg)):
-							mreasons = ["exclude option"]
-						if mreasons:
-							masked_pkg_instances.add(pkg)
-						if atom.unevaluated_atom.use:
-							try:
-								if not pkg.iuse.is_valid_flag(atom.unevaluated_atom.use.required) \
-									or atom.violated_conditionals(self._pkg_use_enabled(pkg), pkg.iuse.is_valid_flag).use:
-									missing_use.append(pkg)
-									if atom_set_with_use.findAtomForPackage(pkg):
-										autounmask_broke_use_dep = True
-									if not mreasons:
-										continue
-							except InvalidAtom:
-								writemsg("violated_conditionals raised " + \
-									"InvalidAtom: '%s' parent: %s" % \
-									(atom, myparent), noiselevel=-1)
-								raise
-						if not mreasons and \
-							not pkg.built and \
-							pkg.metadata.get("REQUIRED_USE") and \
-							eapi_has_required_use(pkg.metadata["EAPI"]):
-							if not check_required_use(
-								pkg.metadata["REQUIRED_USE"],
-								self._pkg_use_enabled(pkg),
-								pkg.iuse.is_valid_flag,
-								eapi=pkg.metadata["EAPI"]):
-								required_use_unsatisfied.append(pkg)
-								continue
-						root_slot = (pkg.root, pkg.slot_atom)
-						if pkg.built and root_slot in self._rebuild.rebuild_list:
-							mreasons = ["need to rebuild from source"]
-						elif pkg.installed and root_slot in self._rebuild.reinstall_list:
-							mreasons = ["need to rebuild from source"]
-						elif pkg.built and not mreasons:
-							mreasons = ["use flag configuration mismatch"]
-					masked_packages.append(
-						(root_config, pkgsettings, cpv, repo, metadata, mreasons))
-
-		if check_backtrack:
-			if backtrack_mask:
-				raise self._backtrack_mask()
-			else:
-				return
-
-		if check_autounmask_breakage:
-			if autounmask_broke_use_dep:
-				raise self._autounmask_breakage()
-			else:
-				return
-
-		missing_use_reasons = []
-		missing_iuse_reasons = []
-		for pkg in missing_use:
-			use = self._pkg_use_enabled(pkg)
-			missing_iuse = []
-			#Use the unevaluated atom here, because some flags might have
-			#been lost during evaluation.
-			required_flags = atom.unevaluated_atom.use.required
-			missing_iuse = pkg.iuse.get_missing_iuse(required_flags)
-
-			mreasons = []
-			if missing_iuse:
-				mreasons.append("Missing IUSE: %s" % " ".join(missing_iuse))
-				missing_iuse_reasons.append((pkg, mreasons))
-			else:
-				need_enable = sorted(atom.use.enabled.difference(use).intersection(pkg.iuse.all))
-				need_disable = sorted(atom.use.disabled.intersection(use).intersection(pkg.iuse.all))
-
-				untouchable_flags = \
-					frozenset(chain(pkg.use.mask, pkg.use.force))
-				if any(x in untouchable_flags for x in
-					chain(need_enable, need_disable)):
-					continue
-
-				missing_use_adjustable.add(pkg)
-				required_use = pkg.metadata.get("REQUIRED_USE")
-				required_use_warning = ""
-				if required_use:
-					old_use = self._pkg_use_enabled(pkg)
-					new_use = set(self._pkg_use_enabled(pkg))
-					for flag in need_enable:
-						new_use.add(flag)
-					for flag in need_disable:
-						new_use.discard(flag)
-					if check_required_use(required_use, old_use,
-						pkg.iuse.is_valid_flag, eapi=pkg.metadata["EAPI"]) \
-						and not check_required_use(required_use, new_use,
-						pkg.iuse.is_valid_flag, eapi=pkg.metadata["EAPI"]):
-							required_use_warning = ", this change violates use flag constraints " + \
-								"defined by %s: '%s'" % (pkg.cpv, human_readable_required_use(required_use))
-
-				if need_enable or need_disable:
-					changes = []
-					changes.extend(colorize("red", "+" + x) \
-						for x in need_enable)
-					changes.extend(colorize("blue", "-" + x) \
-						for x in need_disable)
-					mreasons.append("Change USE: %s" % " ".join(changes) + required_use_warning)
-					missing_use_reasons.append((pkg, mreasons))
-
-			if not missing_iuse and myparent and atom.unevaluated_atom.use.conditional:
-				# Let's see whether the violated use deps are conditional.
-				# If so, suggest changing them on the parent.
-
-				# If the child package is masked then a change to
-				# parent USE is not a valid solution (a normal mask
-				# message should be displayed instead).
-				if pkg in masked_pkg_instances:
-					continue
-
-				mreasons = []
-				violated_atom = atom.unevaluated_atom.violated_conditionals(self._pkg_use_enabled(pkg), \
-					pkg.iuse.is_valid_flag, self._pkg_use_enabled(myparent))
-				if not (violated_atom.use.enabled or violated_atom.use.disabled):
-					#all violated use deps are conditional
-					changes = []
-					conditional = violated_atom.use.conditional
-					involved_flags = set(chain(conditional.equal, conditional.not_equal, \
-						conditional.enabled, conditional.disabled))
-
-					untouchable_flags = \
-						frozenset(chain(myparent.use.mask, myparent.use.force))
-					if any(x in untouchable_flags for x in involved_flags):
-						continue
-
-					required_use = myparent.metadata.get("REQUIRED_USE")
-					required_use_warning = ""
-					if required_use:
-						old_use = self._pkg_use_enabled(myparent)
-						new_use = set(self._pkg_use_enabled(myparent))
-						for flag in involved_flags:
-							if flag in old_use:
-								new_use.discard(flag)
-							else:
-								new_use.add(flag)
-						if check_required_use(required_use, old_use,
-							myparent.iuse.is_valid_flag,
-							eapi=myparent.metadata["EAPI"]) and \
-							not check_required_use(required_use, new_use,
-							myparent.iuse.is_valid_flag,
-							eapi=myparent.metadata["EAPI"]):
-								required_use_warning = ", this change violates use flag constraints " + \
-									"defined by %s: '%s'" % (myparent.cpv, \
-									human_readable_required_use(required_use))
-
-					for flag in involved_flags:
-						if flag in self._pkg_use_enabled(myparent):
-							changes.append(colorize("blue", "-" + flag))
-						else:
-							changes.append(colorize("red", "+" + flag))
-					mreasons.append("Change USE: %s" % " ".join(changes) + required_use_warning)
-					if (myparent, mreasons) not in missing_use_reasons:
-						missing_use_reasons.append((myparent, mreasons))
-
-		unmasked_use_reasons = [(pkg, mreasons) for (pkg, mreasons) \
-			in missing_use_reasons if pkg not in masked_pkg_instances]
-
-		unmasked_iuse_reasons = [(pkg, mreasons) for (pkg, mreasons) \
-			in missing_iuse_reasons if pkg not in masked_pkg_instances]
-
-		show_missing_use = False
-		if unmasked_use_reasons:
-			# Only show the latest version.
-			show_missing_use = []
-			pkg_reason = None
-			parent_reason = None
-			for pkg, mreasons in unmasked_use_reasons:
-				if pkg is myparent:
-					if parent_reason is None:
-						#This happens if a use change on the parent
-						#leads to a satisfied conditional use dep.
-						parent_reason = (pkg, mreasons)
-				elif pkg_reason is None:
-					#Don't rely on the first pkg in unmasked_use_reasons
-					#being the highest version of the dependency.
-					pkg_reason = (pkg, mreasons)
-			if pkg_reason:
-				show_missing_use.append(pkg_reason)
-			if parent_reason:
-				show_missing_use.append(parent_reason)
-
-		elif unmasked_iuse_reasons:
-			masked_with_iuse = False
-			for pkg in masked_pkg_instances:
-				#Use atom.unevaluated_atom here, because some flags might have
-				#been lost during evaluation.
-				if not pkg.iuse.get_missing_iuse(atom.unevaluated_atom.use.required):
-					# Package(s) with required IUSE are masked,
-					# so display a normal masking message.
-					masked_with_iuse = True
-					break
-			if not masked_with_iuse:
-				show_missing_use = unmasked_iuse_reasons
-
-		if required_use_unsatisfied:
-			# If there's a higher unmasked version in missing_use_adjustable
-			# then we want to show that instead.
-			for pkg in missing_use_adjustable:
-				if pkg not in masked_pkg_instances and \
-					pkg > required_use_unsatisfied[0]:
-					required_use_unsatisfied = False
-					break
-
-		mask_docs = False
-
-		if show_req_use is None and required_use_unsatisfied:
-			# We have an unmasked package that only requires USE adjustment
-			# in order to satisfy REQUIRED_USE, and nothing more. We assume
-			# that the user wants the latest version, so only the first
-			# instance is displayed.
-			show_req_use = required_use_unsatisfied[0]
-			self._dynamic_config._needed_required_use_config_changes[pkg] = (new_use, new_changes)
-			backtrack_infos = self._dynamic_config._backtrack_infos
-			backtrack_infos.setdefault("config", {})
-			backtrack_infos["config"].setdefault("needed_required_use_config_changes", [])
-			backtrack_infos["config"]["needed_required_use_config_changes"].append((pkg, (new_use, new_changes)))
-
-		if show_req_use is not None:
-
-			pkg = show_req_use
-			output_cpv = pkg.cpv + _repo_separator + pkg.repo
-			writemsg("\n!!! " + \
-				colorize("BAD", "The ebuild selected to satisfy ") + \
-				colorize("INFORM", xinfo) + \
-				colorize("BAD", " has unmet requirements.") + "\n",
-				noiselevel=-1)
-			use_display = pkg_use_display(pkg, self._frozen_config.myopts)
-			writemsg("- %s %s\n" % (output_cpv, use_display),
-				noiselevel=-1)
-			writemsg("\n  The following REQUIRED_USE flag constraints " + \
-				"are unsatisfied:\n", noiselevel=-1)
-			reduced_noise = check_required_use(
-				pkg.metadata["REQUIRED_USE"],
-				self._pkg_use_enabled(pkg),
-				pkg.iuse.is_valid_flag,
-				eapi=pkg.metadata["EAPI"]).tounicode()
-			writemsg("    %s\n" % \
-				human_readable_required_use(reduced_noise),
-				noiselevel=-1)
-			normalized_required_use = \
-				" ".join(pkg.metadata["REQUIRED_USE"].split())
-			if reduced_noise != normalized_required_use:
-				writemsg("\n  The above constraints " + \
-					"are a subset of the following complete expression:\n",
-					noiselevel=-1)
-				writemsg("    %s\n" % \
-					human_readable_required_use(normalized_required_use),
-					noiselevel=-1)
-			writemsg("\n", noiselevel=-1)
-
-		elif show_missing_use:
-			writemsg("\nemerge: there are no ebuilds built with USE flags to satisfy "+green(xinfo)+".\n", noiselevel=-1)
-			writemsg("!!! One of the following packages is required to complete your request:\n", noiselevel=-1)
-			for pkg, mreasons in show_missing_use:
-				writemsg("- "+pkg.cpv+_repo_separator+pkg.repo+" ("+", ".join(mreasons)+")\n", noiselevel=-1)
-
-		elif masked_packages:
-			writemsg("\n!!! " + \
-				colorize("BAD", "All ebuilds that could satisfy ") + \
-				colorize("INFORM", xinfo) + \
-				colorize("BAD", " have been masked.") + "\n", noiselevel=-1)
-			writemsg("!!! One of the following masked packages is required to complete your request:\n", noiselevel=-1)
-			have_eapi_mask = show_masked_packages(masked_packages)
-			if have_eapi_mask:
-				writemsg("\n", noiselevel=-1)
-				msg = ("The current version of portage supports " + \
-					"EAPI '%s'. You must upgrade to a newer version" + \
-					" of portage before EAPI masked packages can" + \
-					" be installed.") % portage.const.EAPI
-				writemsg("\n".join(textwrap.wrap(msg, 75)), noiselevel=-1)
-			writemsg("\n", noiselevel=-1)
-			mask_docs = True
-		else:
-			cp_exists = False
-			if not atom.cp.startswith("null/"):
-				for pkg in self._iter_match_pkgs_any(
-					root_config, Atom(atom.cp)):
-					cp_exists = True
-					break
-
-			writemsg("\nemerge: there are no ebuilds to satisfy "+green(xinfo)+".\n", noiselevel=-1)
-			if isinstance(myparent, AtomArg) and \
-				not cp_exists and \
-				self._frozen_config.myopts.get(
-				"--misspell-suggestions", "y") != "n":
-				cp = myparent.atom.cp.lower()
-				cat, pkg = portage.catsplit(cp)
-				if cat == "null":
-					cat = None
-
-				writemsg("\nemerge: searching for similar names..."
-					, noiselevel=-1)
-
-				all_cp = set()
-				all_cp.update(vardb.cp_all())
-				if "--usepkgonly" not in self._frozen_config.myopts:
-					all_cp.update(portdb.cp_all())
-				if "--usepkg" in self._frozen_config.myopts:
-					all_cp.update(bindb.cp_all())
-				# discard dir containing no ebuilds
-				all_cp.discard(cp)
-
-				orig_cp_map = {}
-				for cp_orig in all_cp:
-					orig_cp_map.setdefault(cp_orig.lower(), []).append(cp_orig)
-				all_cp = set(orig_cp_map)
-
-				if cat:
-					matches = difflib.get_close_matches(cp, all_cp)
-				else:
-					pkg_to_cp = {}
-					for other_cp in list(all_cp):
-						other_pkg = portage.catsplit(other_cp)[1]
-						if other_pkg == pkg:
-							# Check for non-identical package that
-							# differs only by upper/lower case.
-							identical = True
-							for cp_orig in orig_cp_map[other_cp]:
-								if portage.catsplit(cp_orig)[1] != \
-									portage.catsplit(atom.cp)[1]:
-									identical = False
-									break
-							if identical:
-								# discard dir containing no ebuilds
-								all_cp.discard(other_cp)
-								continue
-						pkg_to_cp.setdefault(other_pkg, set()).add(other_cp)
-					pkg_matches = difflib.get_close_matches(pkg, pkg_to_cp)
-					matches = []
-					for pkg_match in pkg_matches:
-						matches.extend(pkg_to_cp[pkg_match])
-
-				matches_orig_case = []
-				for cp in matches:
-					matches_orig_case.extend(orig_cp_map[cp])
-				matches = matches_orig_case
-
-				if len(matches) == 1:
-					writemsg("\nemerge: Maybe you meant " + matches[0] + "?\n"
-						, noiselevel=-1)
-				elif len(matches) > 1:
-					writemsg(
-						"\nemerge: Maybe you meant any of these: %s?\n" % \
-						(", ".join(matches),), noiselevel=-1)
-				else:
-					# Generally, this would only happen if
-					# all dbapis are empty.
-					writemsg(" nothing similar found.\n"
-						, noiselevel=-1)
-		msg = []
-		if not isinstance(myparent, AtomArg):
-			# It's redundant to show parent for AtomArg since
-			# it's the same as 'xinfo' displayed above.
-			dep_chain = self._get_dep_chain(myparent, atom)
-			for node, node_type in dep_chain:
-				msg.append('(dependency required by "%s" [%s])' % \
-						(colorize('INFORM', _unicode_decode("%s") % \
-						(node)), node_type))
-
-		if msg:
-			writemsg("\n".join(msg), noiselevel=-1)
-			writemsg("\n", noiselevel=-1)
-
-		if mask_docs:
-			show_mask_docs()
-			writemsg("\n", noiselevel=-1)
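-
-		# A minimal sketch of the misspelling suggestion above, using
-		# difflib.get_close_matches() (the same stdlib call used here)
-		# against a toy category/package list:
-		#
-		#     >>> import difflib
-		#     >>> all_cp = ["dev-lang/python", "dev-lang/perl", "app-editors/vim"]
-		#     >>> difflib.get_close_matches("dev-lang/pyhton", all_cp)
-		#     ['dev-lang/python']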
-
-	def _iter_match_pkgs_any(self, root_config, atom, onlydeps=False):
-		for db, pkg_type, built, installed, db_keys in \
-			self._dynamic_config._filtered_trees[root_config.root]["dbs"]:
-			for pkg in self._iter_match_pkgs(root_config,
-				pkg_type, atom, onlydeps=onlydeps):
-				yield pkg
-
-	def _iter_match_pkgs(self, root_config, pkg_type, atom, onlydeps=False):
-		"""
-		Iterate over Package instances of pkg_type matching the given atom.
-		This does not check visibility and it also does not match USE for
-		unbuilt ebuilds, since USE is lazily calculated after visibility
-		checks (to avoid the expense when possible).
-		"""
-
-		db = root_config.trees[self.pkg_tree_map[pkg_type]].dbapi
-		atom_exp = dep_expand(atom, mydb=db, settings=root_config.settings)
-		cp_list = db.cp_list(atom_exp.cp)
-		matched_something = False
-		installed = pkg_type == 'installed'
-
-		if cp_list:
-			atom_set = InternalPackageSet(initial_atoms=(atom,),
-				allow_repo=True)
-			if atom.repo is None and hasattr(db, "getRepositories"):
-				repo_list = db.getRepositories()
-			else:
-				repo_list = [atom.repo]
-
-			# descending order
-			cp_list.reverse()
-			for cpv in cp_list:
-				# Call match_from_list on one cpv at a time, in order
-				# to avoid unnecessary match_from_list comparisons on
-				# versions that are never yielded from this method.
-				if not match_from_list(atom_exp, [cpv]):
-					continue
-				for repo in repo_list:
-
-					try:
-						pkg = self._pkg(cpv, pkg_type, root_config,
-							installed=installed, onlydeps=onlydeps, myrepo=repo)
-					except portage.exception.PackageNotFound:
-						pass
-					else:
-						# A cpv can be returned from dbapi.match() as an
-						# old-style virtual match even in cases when the
-						# package does not actually PROVIDE the virtual.
-						# Filter out any such false matches here.
-
-						# Make sure that cpv from the current repo satisfies the atom.
-						# This might not be the case if there are several repos with
-						# the same cpv, but different metadata keys, like SLOT.
-						# Also, parts of the match that require metadata access
-						# are deferred until we have cached the metadata in a
-						# Package instance.
-						if not atom_set.findAtomForPackage(pkg,
-							modified_use=self._pkg_use_enabled(pkg)):
-							continue
-						matched_something = True
-						yield pkg
-
-		# USE=multislot can make an installed package appear as if
-		# it doesn't satisfy a slot dependency. Rebuilding the ebuild
-		# won't do any good as long as USE=multislot is enabled since
-		# the newly built package still won't have the expected slot.
-		# Therefore, assume that such SLOT dependencies are already
-		# satisfied rather than forcing a rebuild.
-		if not matched_something and installed and atom.slot is not None:
-
-			if "remove" in self._dynamic_config.myparams:
-				# We need to search the portdbapi, which is not in our
-				# normal dbs list, in order to find the real SLOT.
-				portdb = self._frozen_config.trees[root_config.root]["porttree"].dbapi
-				db_keys = list(portdb._aux_cache_keys)
-				dbs = [(portdb, "ebuild", False, False, db_keys)]
-			else:
-				dbs = self._dynamic_config._filtered_trees[root_config.root]["dbs"]
-
-			cp_list = db.cp_list(atom_exp.cp)
-			if cp_list:
-				atom_set = InternalPackageSet(
-					initial_atoms=(atom.without_slot,), allow_repo=True)
-				atom_exp_without_slot = atom_exp.without_slot
-				cp_list.reverse()
-				for cpv in cp_list:
-					if not match_from_list(atom_exp_without_slot, [cpv]):
-						continue
-					slot_available = False
-					for other_db, other_type, other_built, \
-						other_installed, other_keys in dbs:
-						try:
-							if atom.slot == \
-								other_db._pkg_str(_unicode(cpv), None).slot:
-								slot_available = True
-								break
-						except (KeyError, InvalidData):
-							pass
-					if not slot_available:
-						continue
-					inst_pkg = self._pkg(cpv, "installed",
-						root_config, installed=installed, myrepo=atom.repo)
-					# Remove the slot from the atom and verify that
-					# the package matches the resulting atom.
-					if atom_set.findAtomForPackage(inst_pkg):
-						yield inst_pkg
-						return
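-
-		# A minimal sketch (toy cpvs, hypothetical pred(); not the real dbapi)
-		# of the descending iteration above: reverse the cp_list, then lazily
-		# yield matches one cpv at a time so callers that stop early skip the
-		# remaining comparisons:
-		#
-		#     >>> cp_list = ["foo/bar-1.0", "foo/bar-2.0"]
-		#     >>> cp_list.reverse()
-		#     >>> def iter_match(pred, cpvs):
-		#     ...     for cpv in cpvs:
-		#     ...         if pred(cpv):
-		#     ...             yield cpv
-		#     >>> next(iter_match(lambda c: c.endswith("2.0"), cp_list))
-		#     'foo/bar-2.0'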
-
-	def _select_pkg_highest_available(self, root, atom, onlydeps=False):
-		cache_key = (root, atom, atom.unevaluated_atom, onlydeps, self._dynamic_config._autounmask)
-		ret = self._dynamic_config._highest_pkg_cache.get(cache_key)
-		if ret is not None:
-			return ret
-		ret = self._select_pkg_highest_available_imp(root, atom, onlydeps=onlydeps)
-		self._dynamic_config._highest_pkg_cache[cache_key] = ret
-		pkg, existing = ret
-		if pkg is not None:
-			if self._pkg_visibility_check(pkg) and \
-				not (pkg.installed and pkg.masks):
-				self._dynamic_config._visible_pkgs[pkg.root].cpv_inject(pkg)
-		return ret
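-
-		# A minimal sketch (hypothetical select(), not the real signature) of
-		# the tuple-keyed memoization used above:
-		#
-		#     >>> cache = {}
-		#     >>> def select(root, atom, onlydeps=False):
-		#     ...     key = (root, atom, onlydeps)
-		#     ...     if key in cache:
-		#     ...         return cache[key]
-		#     ...     ret = ("pkg-for-%s" % atom, None)  # stand-in lookup
-		#     ...     cache[key] = ret
-		#     ...     return ret
-		#     >>> select("/", "dev-lang/python") is select("/", "dev-lang/python")
-		#     True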
-
-	def _want_installed_pkg(self, pkg):
-		"""
-		Given an installed package returned from select_pkg, return
-		True if the user has not explicitly requested that this package
-		be replaced (typically via an atom on the command line).
-		"""
-		if self._frozen_config.excluded_pkgs.findAtomForPackage(pkg,
-			modified_use=self._pkg_use_enabled(pkg)):
-			return True
-
-		arg = False
-		try:
-			for arg, atom in self._iter_atoms_for_pkg(pkg):
-				if arg.force_reinstall:
-					return False
-		except InvalidDependString:
-			pass
-
-		if "selective" in self._dynamic_config.myparams:
-			return True
-
-		return not arg
-
-	def _equiv_ebuild_visible(self, pkg, autounmask_level=None):
-		try:
-			pkg_eb = self._pkg(
-				pkg.cpv, "ebuild", pkg.root_config, myrepo=pkg.repo)
-		except portage.exception.PackageNotFound:
-			pkg_eb_visible = False
-			for pkg_eb in self._iter_match_pkgs(pkg.root_config,
-				"ebuild", Atom("=%s" % (pkg.cpv,))):
-				if self._pkg_visibility_check(pkg_eb, autounmask_level):
-					pkg_eb_visible = True
-					break
-			if not pkg_eb_visible:
-				return False
-		else:
-			if not self._pkg_visibility_check(pkg_eb, autounmask_level):
-				return False
-
-		return True
-
-	def _equiv_binary_installed(self, pkg):
-		build_time = pkg.metadata.get('BUILD_TIME')
-		if not build_time:
-			return False
-
-		try:
-			inst_pkg = self._pkg(pkg.cpv, "installed",
-				pkg.root_config, installed=True)
-		except PackageNotFound:
-			return False
-
-		return build_time == inst_pkg.metadata.get('BUILD_TIME')
-
-	class _AutounmaskLevel(object):
-		__slots__ = ("allow_use_changes", "allow_unstable_keywords", "allow_license_changes", \
-			"allow_missing_keywords", "allow_unmasks")
-
-		def __init__(self):
-			self.allow_use_changes = False
-			self.allow_license_changes = False
-			self.allow_unstable_keywords = False
-			self.allow_missing_keywords = False
-			self.allow_unmasks = False
-
-	def _autounmask_levels(self):
-		"""
-		Iterate over the allowed kinds of changes to unmask, from least
-		to most invasive.
-
-		0. USE
-		1. USE + license
-		2. USE + ~arch + license
-		3. USE + ~arch + license + missing keywords
-		4. USE + ~arch + license + masks
-		5. USE + ~arch + license + missing keywords + masks
-
-		Some thoughts:
-			* Do least invasive changes first.
-			* Try unmasking alone before unmasking + missing keywords
-				to avoid -9999 versions if possible
-		"""
-
-		if self._dynamic_config._autounmask is not True:
-			return
-
-		autounmask_keep_masks = self._frozen_config.myopts.get("--autounmask-keep-masks", "n") != "n"
-		autounmask_level = self._AutounmaskLevel()
-
-		autounmask_level.allow_use_changes = True
-		yield autounmask_level
-
-		autounmask_level.allow_license_changes = True
-		yield autounmask_level
-
-		for only_use_changes in (False,):
-
-			autounmask_level.allow_unstable_keywords = (not only_use_changes)
-			autounmask_level.allow_license_changes = (not only_use_changes)
-
-			for missing_keyword, unmask in ((False,False), (True, False), (False, True), (True, True)):
-
-				if (only_use_changes or autounmask_keep_masks) and (missing_keyword or unmask):
-					break
-
-				autounmask_level.allow_missing_keywords = missing_keyword
-				autounmask_level.allow_unmasks = unmask
-
-				yield autounmask_level
-
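-		# Note that each yield above re-yields the *same* mutated
-		# _AutounmaskLevel instance. A minimal sketch (plain dict stand-in)
-		# of why a consumer must read each level before advancing the
-		# generator:
-		#
-		#     >>> def levels():
-		#     ...     level = {"use": True, "license": False}
-		#     ...     yield level
-		#     ...     level["license"] = True
-		#     ...     yield level
-		#     >>> [dict(l) for l in levels()]
-		#     [{'use': True, 'license': False}, {'use': True, 'license': True}]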
-
-	def _select_pkg_highest_available_imp(self, root, atom, onlydeps=False):
-		pkg, existing = self._wrapped_select_pkg_highest_available_imp(root, atom, onlydeps=onlydeps)
-
-		default_selection = (pkg, existing)
-
-		def reset_pkg(pkg):
-			if pkg is not None and \
-				pkg.installed and \
-				not self._want_installed_pkg(pkg):
-				return None
-			return pkg
-
-		if self._dynamic_config._autounmask is True:
-			pkg = reset_pkg(pkg)
-
-			for autounmask_level in self._autounmask_levels():
-				if pkg is not None:
-					break
-
-				pkg, existing = \
-					self._wrapped_select_pkg_highest_available_imp(
-						root, atom, onlydeps=onlydeps,
-						autounmask_level=autounmask_level)
-
-				pkg = reset_pkg(pkg)
-
-			if self._dynamic_config._need_restart:
-				return None, None
-
-		if pkg is None:
-			# This ensures that we can fall back to an installed package
-			# that may have been rejected in the autounmask path above.
-			return default_selection
-
-		return pkg, existing
-
-	def _pkg_visibility_check(self, pkg, autounmask_level=None, trust_graph=True):
-
-		if pkg.visible:
-			return True
-
-		if trust_graph and pkg in self._dynamic_config.digraph:
-			# Sometimes we need to temporarily disable
-			# dynamic_config._autounmask, but for overall
-			# consistency in dependency resolution, in most
-			# cases we want to treat packages in the graph
-			# as though they are visible.
-			return True
-
-		if not self._dynamic_config._autounmask or autounmask_level is None:
-			return False
-
-		pkgsettings = self._frozen_config.pkgsettings[pkg.root]
-		root_config = self._frozen_config.roots[pkg.root]
-		mreasons = _get_masking_status(pkg, pkgsettings, root_config, use=self._pkg_use_enabled(pkg))
-
-		masked_by_unstable_keywords = False
-		masked_by_missing_keywords = False
-		missing_licenses = None
-		masked_by_something_else = False
-		masked_by_p_mask = False
-
-		for reason in mreasons:
-			hint = reason.unmask_hint
-
-			if hint is None:
-				masked_by_something_else = True
-			elif hint.key == "unstable keyword":
-				masked_by_unstable_keywords = True
-				if hint.value == "**":
-					masked_by_missing_keywords = True
-			elif hint.key == "p_mask":
-				masked_by_p_mask = True
-			elif hint.key == "license":
-				missing_licenses = hint.value
-			else:
-				masked_by_something_else = True
-
-		if masked_by_something_else:
-			return False
-
-		if pkg in self._dynamic_config._needed_unstable_keywords:
-			#If the package is already keyworded, remove the mask.
-			masked_by_unstable_keywords = False
-			masked_by_missing_keywords = False
-
-		if pkg in self._dynamic_config._needed_p_mask_changes:
-			#If the package is already unmasked, remove the mask.
-			masked_by_p_mask = False
-
-		if missing_licenses:
-			#If the needed licenses are already unmasked, remove the mask.
-			missing_licenses.difference_update(self._dynamic_config._needed_license_changes.get(pkg, set()))
-
-		if not (masked_by_unstable_keywords or masked_by_p_mask or missing_licenses):
-			#Package has already been unmasked.
-			return True
-
-		if (masked_by_unstable_keywords and not autounmask_level.allow_unstable_keywords) or \
-			(masked_by_missing_keywords and not autounmask_level.allow_missing_keywords) or \
-			(masked_by_p_mask and not autounmask_level.allow_unmasks) or \
-			(missing_licenses and not autounmask_level.allow_license_changes):
-			#We are not allowed to make the needed changes.
-			return False
-
-		if masked_by_unstable_keywords:
-			self._dynamic_config._needed_unstable_keywords.add(pkg)
-			backtrack_infos = self._dynamic_config._backtrack_infos
-			backtrack_infos.setdefault("config", {})
-			backtrack_infos["config"].setdefault("needed_unstable_keywords", set())
-			backtrack_infos["config"]["needed_unstable_keywords"].add(pkg)
-
-		if masked_by_p_mask:
-			self._dynamic_config._needed_p_mask_changes.add(pkg)
-			backtrack_infos = self._dynamic_config._backtrack_infos
-			backtrack_infos.setdefault("config", {})
-			backtrack_infos["config"].setdefault("needed_p_mask_changes", set())
-			backtrack_infos["config"]["needed_p_mask_changes"].add(pkg)
-
-		if missing_licenses:
-			self._dynamic_config._needed_license_changes.setdefault(pkg, set()).update(missing_licenses)
-			backtrack_infos = self._dynamic_config._backtrack_infos
-			backtrack_infos.setdefault("config", {})
-			backtrack_infos["config"].setdefault("needed_license_changes", set())
-			backtrack_infos["config"]["needed_license_changes"].add((pkg, frozenset(missing_licenses)))
-
-		return True
-
-	def _pkg_use_enabled(self, pkg, target_use=None):
-		"""
-		If target_use is None, returns pkg.use.enabled + changes in _needed_use_config_changes.
-		If target_use is given, the needed changes are computed to make the package usable.
-		Example: target_use = { "foo": True, "bar": False }
-		The flags in target_use must be in the pkg's IUSE.
-		"""
-		if pkg.built:
-			return pkg.use.enabled
-		needed_use_config_change = self._dynamic_config._needed_use_config_changes.get(pkg)
-
-		if target_use is None:
-			if needed_use_config_change is None:
-				return pkg.use.enabled
-			else:
-				return needed_use_config_change[0]
-
-		if needed_use_config_change is not None:
-			old_use = needed_use_config_change[0]
-			new_use = set()
-			old_changes = needed_use_config_change[1]
-			new_changes = old_changes.copy()
-		else:
-			old_use = pkg.use.enabled
-			new_use = set()
-			old_changes = {}
-			new_changes = {}
-
-		for flag, state in target_use.items():
-			if state:
-				if flag not in old_use:
-					if new_changes.get(flag) == False:
-						return old_use
-					new_changes[flag] = True
-				new_use.add(flag)
-			else:
-				if flag in old_use:
-					if new_changes.get(flag) == True:
-						return old_use
-					new_changes[flag] = False
-		new_use.update(old_use.difference(target_use))
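-		# Worked example (hypothetical flags): with old_use = {"a", "b"} and
-		# target_use = {"b": False, "c": True}, the loop adds "c" to new_use
-		# and records {"c": True, "b": False} in new_changes; the update()
-		# call above then carries over "a", so new_use ends up {"a", "c"}.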
-
-		def want_restart_for_use_change(pkg, new_use):
-			if pkg not in self._dynamic_config.digraph.nodes:
-				return False
-
-			for key in Package._dep_keys + ("LICENSE",):
-				dep = pkg.metadata[key]
-				old_val = set(portage.dep.use_reduce(dep, pkg.use.enabled, is_valid_flag=pkg.iuse.is_valid_flag, flat=True))
-				new_val = set(portage.dep.use_reduce(dep, new_use, is_valid_flag=pkg.iuse.is_valid_flag, flat=True))
-
-				if old_val != new_val:
-					return True
-
-			parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
-			if not parent_atoms:
-				return False
-
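-			# Note: the .get(pkg) below cannot return None here, because
-			# this helper is only invoked after the entry has been stored
-			# in _needed_use_config_changes (see the caller further down);
-			# the unpacking also rebinds new_use to the stored value.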
-			new_use, changes = self._dynamic_config._needed_use_config_changes.get(pkg)
-			for ppkg, atom in parent_atoms:
-				if not atom.use or \
-					not any(x in atom.use.required for x in changes):
-					continue
-				else:
-					return True
-
-			return False
-
-		if new_changes != old_changes:
-			#Don't do the change if it violates REQUIRED_USE.
-			required_use = pkg.metadata.get("REQUIRED_USE")
-			if required_use and check_required_use(required_use, old_use,
-				pkg.iuse.is_valid_flag, eapi=pkg.metadata["EAPI"]) and \
-				not check_required_use(required_use, new_use,
-				pkg.iuse.is_valid_flag, eapi=pkg.metadata["EAPI"]):
-				return old_use
-
-			if any(x in pkg.use.mask for x in new_changes) or \
-				any(x in pkg.use.force for x in new_changes):
-				return old_use
-
-			self._dynamic_config._needed_use_config_changes[pkg] = (new_use, new_changes)
-			backtrack_infos = self._dynamic_config._backtrack_infos
-			backtrack_infos.setdefault("config", {})
-			backtrack_infos["config"].setdefault("needed_use_config_changes", [])
-			backtrack_infos["config"]["needed_use_config_changes"].append((pkg, (new_use, new_changes)))
-			if want_restart_for_use_change(pkg, new_use):
-				self._dynamic_config._need_restart = True
-		return new_use
-
-	def _wrapped_select_pkg_highest_available_imp(self, root, atom, onlydeps=False, autounmask_level=None):
-		root_config = self._frozen_config.roots[root]
-		pkgsettings = self._frozen_config.pkgsettings[root]
-		dbs = self._dynamic_config._filtered_trees[root]["dbs"]
-		vardb = self._frozen_config.roots[root].trees["vartree"].dbapi
-		# List of acceptable packages, ordered by type preference.
-		matched_packages = []
-		matched_pkgs_ignore_use = []
-		highest_version = None
-		if not isinstance(atom, portage.dep.Atom):
-			atom = portage.dep.Atom(atom)
-		atom_cp = atom.cp
-		have_new_virt = atom_cp.startswith("virtual/") and \
-			self._have_new_virt(root, atom_cp)
-		atom_set = InternalPackageSet(initial_atoms=(atom,), allow_repo=True)
-		existing_node = None
-		myeb = None
-		rebuilt_binaries = 'rebuilt_binaries' in self._dynamic_config.myparams
-		usepkg = "--usepkg" in self._frozen_config.myopts
-		usepkgonly = "--usepkgonly" in self._frozen_config.myopts
-		empty = "empty" in self._dynamic_config.myparams
-		selective = "selective" in self._dynamic_config.myparams
-		reinstall = False
-		avoid_update = "--update" not in self._frozen_config.myopts
-		dont_miss_updates = "--update" in self._frozen_config.myopts
-		use_ebuild_visibility = self._frozen_config.myopts.get(
-			'--use-ebuild-visibility', 'n') != 'n'
-		reinstall_atoms = self._frozen_config.reinstall_atoms
-		usepkg_exclude = self._frozen_config.usepkg_exclude
-		useoldpkg_atoms = self._frozen_config.useoldpkg_atoms
-		matched_oldpkg = []
-		# Behavior of the "selective" parameter depends on
-		# whether or not a package matches an argument atom.
-		# If an installed package provides an old-style
-		# virtual that is no longer provided by an available
-		# package, the installed package may match an argument
-		# atom even though none of the available packages do.
-		# Therefore, "selective" logic does not consider
-		# whether or not an installed package matches an
-		# argument atom. It only considers whether or not
-		# available packages match argument atoms, which is
-		# represented by the found_available_arg flag.
-		found_available_arg = False
-		packages_with_invalid_use_config = []
-		for find_existing_node in True, False:
-			if existing_node:
-				break
-			for db, pkg_type, built, installed, db_keys in dbs:
-				if existing_node:
-					break
-				if installed and not find_existing_node:
-					want_reinstall = reinstall or empty or \
-						(found_available_arg and not selective)
-					if want_reinstall and matched_packages:
-						continue
-
-				# Ignore USE deps for the initial match since we want to
-				# ensure that updates aren't missed solely due to the user's
-				# USE configuration.
-				for pkg in self._iter_match_pkgs(root_config, pkg_type, atom.without_use, 
-					onlydeps=onlydeps):
-					if pkg.cp != atom_cp and have_new_virt:
-						# pull in a new-style virtual instead
-						continue
-					if pkg in self._dynamic_config._runtime_pkg_mask:
-						# The package has been masked by the backtracking logic
-						continue
-					root_slot = (pkg.root, pkg.slot_atom)
-					if pkg.built and root_slot in self._rebuild.rebuild_list:
-						continue
-					if (pkg.installed and
-						root_slot in self._rebuild.reinstall_list):
-						continue
-
-					if not pkg.installed and \
-						self._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
-							modified_use=self._pkg_use_enabled(pkg)):
-						continue
-
-					if built and not installed and usepkg_exclude.findAtomForPackage(pkg, \
-						modified_use=self._pkg_use_enabled(pkg)):
-						break
-
-					useoldpkg = useoldpkg_atoms.findAtomForPackage(pkg, \
-						modified_use=self._pkg_use_enabled(pkg))
-
-					if packages_with_invalid_use_config and (not built or not useoldpkg) and \
-						(not pkg.installed or dont_miss_updates):
-						# Check if a higher version was rejected due to user
-						# USE configuration. The packages_with_invalid_use_config
-						# list only contains unbuilt ebuilds since USE can't
-						# be changed for built packages.
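-						# Example (hypothetical): if cat/foo-2.0 from a
-						# higher-priority repo was rejected for its USE
-						# config, then cat/foo-1.0, or an equal version
-						# from a lower-priority repo (bug #350254), is
-						# passed over here as well.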
-						higher_version_rejected = False
-						repo_priority = pkg.repo_priority
-						for rejected in packages_with_invalid_use_config:
-							if rejected.cp != pkg.cp:
-								continue
-							if rejected > pkg:
-								higher_version_rejected = True
-								break
-							if portage.dep.cpvequal(rejected.cpv, pkg.cpv):
-								# If version is identical then compare
-								# repo priority (see bug #350254).
-								rej_repo_priority = rejected.repo_priority
-								if rej_repo_priority is not None and \
-									(repo_priority is None or
-									rej_repo_priority > repo_priority):
-									higher_version_rejected = True
-									break
-						if higher_version_rejected:
-							continue
-
-					cpv = pkg.cpv
-					reinstall_for_flags = None
-
-					if not pkg.installed or \
-						(matched_packages and not avoid_update):
-						# Only enforce visibility on installed packages
-						# if there is at least one other visible package
-						# available. By filtering installed masked packages
-						# here, packages that have been masked since they
-						# were installed can be automatically downgraded
-						# to an unmasked version. NOTE: This code needs to
-						# be consistent with masking behavior inside
-						# _dep_check_composite_db, in order to prevent
-						# incorrect choices in || deps like bug #351828.
-
-						if not self._pkg_visibility_check(pkg, autounmask_level):
-							continue
-
-						# Enable upgrade or downgrade to a version
-						# with visible KEYWORDS when the installed
-						# version is masked by KEYWORDS, but never
-						# reinstall the same exact version only due
-						# to a KEYWORDS mask. See bug #252167.
-
-						if pkg.type_name != "ebuild" and matched_packages:
-							# Don't re-install a binary package that is
-							# identical to the currently installed package
-							# (see bug #354441).
-							identical_binary = False
-							if usepkg and pkg.installed:
-								for selected_pkg in matched_packages:
-									if selected_pkg.type_name == "binary" and \
-										selected_pkg.cpv == pkg.cpv and \
-										selected_pkg.metadata.get('BUILD_TIME') == \
-										pkg.metadata.get('BUILD_TIME'):
-										identical_binary = True
-										break
-
-							if not identical_binary:
-								# If the ebuild no longer exists or its
-								# keywords have been dropped, reject built
-								# instances (installed or binary).
-								# If --usepkgonly is enabled, assume that
-								# the ebuild status should be ignored.
-								if not use_ebuild_visibility and (usepkgonly or useoldpkg):
-									if pkg.installed and pkg.masks:
-										continue
-								elif not self._equiv_ebuild_visible(pkg,
-									autounmask_level=autounmask_level):
-									continue
-
-					# Calculation of USE for unbuilt ebuilds is relatively
-					# expensive, so it is only performed lazily, after the
-					# above visibility checks are complete.
-
-					myarg = None
-					try:
-						for myarg, myarg_atom in self._iter_atoms_for_pkg(pkg):
-							if myarg.force_reinstall:
-								reinstall = True
-								break
-					except InvalidDependString:
-						if not installed:
-							# masked by corruption
-							continue
-					if not installed and myarg:
-						found_available_arg = True
-
-					if atom.unevaluated_atom.use:
-						#Make sure we don't miss a 'missing IUSE'.
-						if pkg.iuse.get_missing_iuse(atom.unevaluated_atom.use.required):
-							# Don't add this to packages_with_invalid_use_config
-							# since IUSE cannot be adjusted by the user.
-							continue
-
-					if atom.use:
-
-						matched_pkgs_ignore_use.append(pkg)
-						if autounmask_level and autounmask_level.allow_use_changes and not pkg.built:
-							target_use = {}
-							for flag in atom.use.enabled:
-								target_use[flag] = True
-							for flag in atom.use.disabled:
-								target_use[flag] = False
-							use = self._pkg_use_enabled(pkg, target_use)
-						else:
-							use = self._pkg_use_enabled(pkg)
-
-						use_match = True
-						can_adjust_use = not pkg.built
-						missing_enabled = atom.use.missing_enabled.difference(pkg.iuse.all)
-						missing_disabled = atom.use.missing_disabled.difference(pkg.iuse.all)
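-						# missing_enabled/missing_disabled come from the
-						# atom's (+)/(-) USE-dep defaults: flags the atom
-						# references that are absent from the package's
-						# IUSE and should be assumed enabled or disabled,
-						# respectively.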
-
-						if atom.use.enabled:
-							if any(x in atom.use.enabled for x in missing_disabled):
-								use_match = False
-								can_adjust_use = False
-							need_enabled = atom.use.enabled.difference(use)
-							if need_enabled:
-								need_enabled = need_enabled.difference(missing_enabled)
-								if need_enabled:
-									use_match = False
-									if can_adjust_use:
-										if any(x in pkg.use.mask for x in need_enabled):
-											can_adjust_use = False
-
-						if atom.use.disabled:
-							if any(x in atom.use.disabled for x in missing_enabled):
-								use_match = False
-								can_adjust_use = False
-							need_disabled = atom.use.disabled.intersection(use)
-							if need_disabled:
-								need_disabled = need_disabled.difference(missing_disabled)
-								if need_disabled:
-									use_match = False
-									if can_adjust_use:
-										if any(x in pkg.use.force and x not in
-											pkg.use.mask for x in need_disabled):
-											can_adjust_use = False
-
-						if not use_match:
-							if can_adjust_use:
-								# Above we must ensure that this package has
-								# absolutely no use.force, use.mask, or IUSE
-								# issues that the user typically can't make
-								# adjustments to solve (see bug #345979).
-								# FIXME: Conditional USE deps complicate
-								# issues. This code currently excludes cases
-								# in which the user can adjust the parent
-								# package's USE in order to satisfy the dep.
-								packages_with_invalid_use_config.append(pkg)
-							continue
-
-					if pkg.cp == atom_cp:
-						if highest_version is None:
-							highest_version = pkg
-						elif pkg > highest_version:
-							highest_version = pkg
-					# At this point, we've found the highest visible
-					# match from the current repo. Any lower versions
-					# from this repo are ignored, so the loop will
-					# always end with a break statement below this
-					# point.
-					if find_existing_node:
-						e_pkg = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
-						if not e_pkg:
-							break
-
-						# Use PackageSet.findAtomForPackage()
-						# for PROVIDE support.
-						if atom_set.findAtomForPackage(e_pkg, modified_use=self._pkg_use_enabled(e_pkg)):
-							if highest_version and \
-								e_pkg.cp == atom_cp and \
-								e_pkg < highest_version and \
-								e_pkg.slot_atom != highest_version.slot_atom:
-								# There is a higher version available in a
-								# different slot, so this existing node is
-								# irrelevant.
-								pass
-							else:
-								matched_packages.append(e_pkg)
-								existing_node = e_pkg
-						break
-					# Compare built package to current config and
-					# reject the built package if necessary.
-					if built and not useoldpkg and (not installed or matched_pkgs_ignore_use) and \
-						("--newuse" in self._frozen_config.myopts or \
-						"--reinstall" in self._frozen_config.myopts or \
-						(not installed and self._dynamic_config.myparams.get(
-						"binpkg_respect_use") in ("y", "auto"))):
-						iuses = pkg.iuse.all
-						old_use = self._pkg_use_enabled(pkg)
-						if myeb:
-							pkgsettings.setcpv(myeb)
-						else:
-							pkgsettings.setcpv(pkg)
-						now_use = pkgsettings["PORTAGE_USE"].split()
-						forced_flags = set()
-						forced_flags.update(pkgsettings.useforce)
-						forced_flags.update(pkgsettings.usemask)
-						cur_iuse = iuses
-						if myeb and not usepkgonly and not useoldpkg:
-							cur_iuse = myeb.iuse.all
-						reinstall_for_flags = self._reinstall_for_flags(pkg,
-							forced_flags, old_use, iuses, now_use, cur_iuse)
-						if reinstall_for_flags:
-							if not pkg.installed:
-								self._dynamic_config.ignored_binaries.setdefault(pkg, set()).update(reinstall_for_flags)
-							break
-					# Compare current config to installed package
-					# and do not reinstall if possible.
-					if not installed and not useoldpkg and \
-						("--newuse" in self._frozen_config.myopts or \
-						"--reinstall" in self._frozen_config.myopts) and \
-						cpv in vardb.match(atom):
-						forced_flags = set()
-						forced_flags.update(pkg.use.force)
-						forced_flags.update(pkg.use.mask)
-						inst_pkg = vardb.match_pkgs('=' + pkg.cpv)[0]
-						old_use = inst_pkg.use.enabled
-						old_iuse = inst_pkg.iuse.all
-						cur_use = self._pkg_use_enabled(pkg)
-						cur_iuse = pkg.iuse.all
-						reinstall_for_flags = \
-							self._reinstall_for_flags(pkg,
-							forced_flags, old_use, old_iuse,
-							cur_use, cur_iuse)
-						if reinstall_for_flags:
-							reinstall = True
-					if reinstall_atoms.findAtomForPackage(pkg, \
-							modified_use=self._pkg_use_enabled(pkg)):
-						reinstall = True
-					if not built:
-						myeb = pkg
-					elif useoldpkg:
-						matched_oldpkg.append(pkg)
-					matched_packages.append(pkg)
-					if reinstall_for_flags:
-						self._dynamic_config._reinstall_nodes[pkg] = \
-							reinstall_for_flags
-					break
-
-		if not matched_packages:
-			return None, None
-
-		if "--debug" in self._frozen_config.myopts:
-			for pkg in matched_packages:
-				portage.writemsg("%s %s%s%s\n" % \
-					((pkg.type_name + ":").rjust(10),
-					pkg.cpv, _repo_separator, pkg.repo), noiselevel=-1)
-
-		# Filter out any old-style virtual matches if they are
-		# mixed with new-style virtual matches.
-		cp = atom.cp
-		if len(matched_packages) > 1 and \
-			"virtual" == portage.catsplit(cp)[0]:
-			for pkg in matched_packages:
-				if pkg.cp != cp:
-					continue
-				# Got a new-style virtual, so filter
-				# out any old-style virtuals.
-				matched_packages = [pkg for pkg in matched_packages \
-					if pkg.cp == cp]
-				break
-
-		if existing_node is not None and \
-			existing_node in matched_packages:
-			return existing_node, existing_node
-
-		if len(matched_packages) > 1:
-			if rebuilt_binaries:
-				inst_pkg = None
-				built_pkg = None
-				unbuilt_pkg = None
-				for pkg in matched_packages:
-					if pkg.installed:
-						inst_pkg = pkg
-					elif pkg.built:
-						built_pkg = pkg
-					else:
-						if unbuilt_pkg is None or pkg > unbuilt_pkg:
-							unbuilt_pkg = pkg
-				if built_pkg is not None and inst_pkg is not None:
-					# Only reinstall if binary package BUILD_TIME is
-					# non-empty, in order to avoid cases like
-					# bug #306659 where BUILD_TIME fields are missing
-					# in local and/or remote Packages file.
-					try:
-						built_timestamp = int(built_pkg.metadata['BUILD_TIME'])
-					except (KeyError, ValueError):
-						built_timestamp = 0
-
-					try:
-						installed_timestamp = int(inst_pkg.metadata['BUILD_TIME'])
-					except (KeyError, ValueError):
-						installed_timestamp = 0
-
-					if unbuilt_pkg is not None and unbuilt_pkg > built_pkg:
-						pass
-					elif "--rebuilt-binaries-timestamp" in self._frozen_config.myopts:
-						minimal_timestamp = self._frozen_config.myopts["--rebuilt-binaries-timestamp"]
-						if built_timestamp and \
-							built_timestamp > installed_timestamp and \
-							built_timestamp >= minimal_timestamp:
-							return built_pkg, existing_node
-					else:
-						#Don't care if the binary has an older BUILD_TIME than the installed
-						#package. This is for closely tracking a binhost.
-						#Use --rebuilt-binaries-timestamp 0 if you want only newer binaries
-						#pulled in here.
-						if built_timestamp and \
-							built_timestamp != installed_timestamp:
-							return built_pkg, existing_node
-
-			for pkg in matched_packages:
-				if pkg.installed and pkg.invalid:
-					matched_packages = [x for x in \
-						matched_packages if x is not pkg]
-
-			if avoid_update:
-				for pkg in matched_packages:
-					if pkg.installed and self._pkg_visibility_check(pkg, autounmask_level):
-						return pkg, existing_node
-
-			visible_matches = []
-			if matched_oldpkg:
-				visible_matches = [pkg.cpv for pkg in matched_oldpkg \
-					if self._pkg_visibility_check(pkg, autounmask_level)]
-			if not visible_matches:
-				visible_matches = [pkg.cpv for pkg in matched_packages \
-					if self._pkg_visibility_check(pkg, autounmask_level)]
-			if visible_matches:
-				bestmatch = portage.best(visible_matches)
-			else:
-				# all are masked, so ignore visibility
-				bestmatch = portage.best([pkg.cpv for pkg in matched_packages])
-			matched_packages = [pkg for pkg in matched_packages \
-				if portage.dep.cpvequal(pkg.cpv, bestmatch)]
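-			# portage.best() returns the highest version among the given
-			# cpvs, so only packages carrying that exact cpv survive the
-			# filter above (possibly several, one per package type or repo).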
-
-		# ordered by type preference ("ebuild" type is the last resort)
-		return matched_packages[-1], existing_node
-
-	def _select_pkg_from_graph(self, root, atom, onlydeps=False):
-		"""
-		Select packages that have already been added to the graph or
-		those that are installed and have not been scheduled for
-		replacement.
-		"""
-		graph_db = self._dynamic_config._graph_trees[root]["porttree"].dbapi
-		matches = graph_db.match_pkgs(atom)
-		if not matches:
-			return None, None
-		pkg = matches[-1] # highest match
-		in_graph = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
-		return pkg, in_graph
-
-	def _select_pkg_from_installed(self, root, atom, onlydeps=False):
-		"""
-		Select packages that are installed.
-		"""
-		matches = list(self._iter_match_pkgs(self._frozen_config.roots[root],
-			"installed", atom))
-		if not matches:
-			return None, None
-		if len(matches) > 1:
-			matches.reverse() # ascending order
-			unmasked = [pkg for pkg in matches if \
-				self._pkg_visibility_check(pkg)]
-			if unmasked:
-				if len(unmasked) == 1:
-					matches = unmasked
-				else:
-					# Account for packages with masks (like KEYWORDS masks)
-					# that are usually ignored in visibility checks for
-					# installed packages, in order to handle cases like
-					# bug #350285.
-					unmasked = [pkg for pkg in matches if not pkg.masks]
-					if unmasked:
-						matches = unmasked
-		pkg = matches[-1] # highest match
-		in_graph = self._dynamic_config._slot_pkg_map[root].get(pkg.slot_atom)
-		return pkg, in_graph
-
-	def _complete_graph(self, required_sets=None):
-		"""
-		Add any deep dependencies of required sets (args, system, world) that
-		have not been pulled into the graph yet. This ensures that the graph
-		is consistent such that initially satisfied deep dependencies are not
-		broken in the new graph. Initially unsatisfied dependencies are
-		irrelevant since we only want to avoid breaking dependencies that are
-		initially satisfied.
-
-		Since this method can consume enough time to disturb users, it is
-		currently only enabled by the --complete-graph option.
-
-		@param required_sets: contains required sets (currently only used
-			for depclean and prune removal operations)
-		@type required_sets: dict
-		"""
-		if "--buildpkgonly" in self._frozen_config.myopts or \
-			"recurse" not in self._dynamic_config.myparams:
-			return 1
-
-		complete_if_new_use = self._dynamic_config.myparams.get(
-			"complete_if_new_use", "y") == "y"
-		complete_if_new_ver = self._dynamic_config.myparams.get(
-			"complete_if_new_ver", "y") == "y"
-		rebuild_if_new_slot = self._dynamic_config.myparams.get(
-			"rebuild_if_new_slot", "y") == "y"
-		complete_if_new_slot = rebuild_if_new_slot
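-		# In short: complete mode is triggered when an installed package
-		# would change version (complete_if_new_ver), change effective USE
-		# (complete_if_new_use), or when a package is being merged into a
-		# slot that is not currently installed (complete_if_new_slot,
-		# which follows rebuild_if_new_slot).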
-
-		if "complete" not in self._dynamic_config.myparams and \
-			(complete_if_new_use or
-			complete_if_new_ver or complete_if_new_slot):
-			# Enable complete mode if an installed package will change somehow.
-			use_change = False
-			version_change = False
-			for node in self._dynamic_config.digraph:
-				if not isinstance(node, Package) or \
-					node.operation != "merge":
-					continue
-				vardb = self._frozen_config.roots[
-					node.root].trees["vartree"].dbapi
-
-				if complete_if_new_use or complete_if_new_ver:
-					inst_pkg = vardb.match_pkgs(node.slot_atom)
-					if inst_pkg and inst_pkg[0].cp == node.cp:
-						inst_pkg = inst_pkg[0]
-						if complete_if_new_ver and \
-							(inst_pkg < node or node < inst_pkg):
-							version_change = True
-							break
-
-						# Intersect enabled USE with IUSE, in order to
-						# ignore forced USE from implicit IUSE flags, since
-						# they're probably irrelevant and they are sensitive
-						# to use.mask/force changes in the profile.
-						if complete_if_new_use and \
-							(node.iuse.all != inst_pkg.iuse.all or
-							self._pkg_use_enabled(node).intersection(node.iuse.all) !=
-							self._pkg_use_enabled(inst_pkg).intersection(inst_pkg.iuse.all)):
-							use_change = True
-							break
-
-				if complete_if_new_slot:
-					cp_list = vardb.match_pkgs(Atom(node.cp))
-					if (cp_list and cp_list[0].cp == node.cp and
-						not any(node.slot == pkg.slot for pkg in cp_list)):
-						version_change = True
-						break
-
-			if use_change or version_change:
-				self._dynamic_config.myparams["complete"] = True
-
-		if "complete" not in self._dynamic_config.myparams:
-			return 1
-
-		self._load_vdb()
-
-		# Put the depgraph into a mode that causes it to only
-		# select packages that have already been added to the
-		# graph or those that are installed and have not been
-		# scheduled for replacement. Also, toggle the "deep"
-		# parameter so that all dependencies are traversed and
-		# accounted for.
-		self._dynamic_config._complete_mode = True
-		self._select_atoms = self._select_atoms_from_graph
-		if "remove" in self._dynamic_config.myparams:
-			self._select_package = self._select_pkg_from_installed
-		else:
-			self._select_package = self._select_pkg_from_graph
-			self._dynamic_config._traverse_ignored_deps = True
-		already_deep = self._dynamic_config.myparams.get("deep") is True
-		if not already_deep:
-			self._dynamic_config.myparams["deep"] = True
-
-		# Invalidate the package selection cache, since
-		# _select_package has just changed implementations.
-		for trees in self._dynamic_config._filtered_trees.values():
-			trees["porttree"].dbapi._clear_cache()
-
-		args = self._dynamic_config._initial_arg_list[:]
-		for root in self._frozen_config.roots:
-			if root != self._frozen_config.target_root and \
-				("remove" in self._dynamic_config.myparams or
-				self._frozen_config.myopts.get("--root-deps") is not None):
-				# Only pull in deps for the relevant root.
-				continue
-			depgraph_sets = self._dynamic_config.sets[root]
-			required_set_names = self._frozen_config._required_set_names.copy()
-			remaining_args = required_set_names.copy()
-			if required_sets is None or root not in required_sets:
-				pass
-			else:
-				# Removal actions may override sets with temporary
-				# replacements that have had atoms removed in order
-				# to implement --deselect behavior.
-				required_set_names = set(required_sets[root])
-				depgraph_sets.sets.clear()
-				depgraph_sets.sets.update(required_sets[root])
-			if "remove" not in self._dynamic_config.myparams and \
-				root == self._frozen_config.target_root and \
-				already_deep:
-				remaining_args.difference_update(depgraph_sets.sets)
-			if not remaining_args and \
-				not self._dynamic_config._ignored_deps and \
-				not self._dynamic_config._dep_stack:
-				continue
-			root_config = self._frozen_config.roots[root]
-			for s in required_set_names:
-				pset = depgraph_sets.sets.get(s)
-				if pset is None:
-					pset = root_config.sets[s]
-				atom = SETPREFIX + s
-				args.append(SetArg(arg=atom, pset=pset,
-					root_config=root_config))
-
-		self._set_args(args)
-		for arg in self._expand_set_args(args, add_to_digraph=True):
-			for atom in arg.pset.getAtoms():
-				self._dynamic_config._dep_stack.append(
-					Dependency(atom=atom, root=arg.root_config.root,
-						parent=arg))
-
-		if True:
-			if self._dynamic_config._ignored_deps:
-				self._dynamic_config._dep_stack.extend(self._dynamic_config._ignored_deps)
-				self._dynamic_config._ignored_deps = []
-			if not self._create_graph(allow_unsatisfied=True):
-				return 0
-			# Check the unsatisfied deps to see if any initially satisfied deps
-			# will become unsatisfied due to an upgrade. Initially unsatisfied
-			# deps are irrelevant since we only want to avoid breaking deps
-			# that are initially satisfied.
-			while self._dynamic_config._unsatisfied_deps:
-				dep = self._dynamic_config._unsatisfied_deps.pop()
-				vardb = self._frozen_config.roots[
-					dep.root].trees["vartree"].dbapi
-				matches = vardb.match_pkgs(dep.atom)
-				if not matches:
-					self._dynamic_config._initially_unsatisfied_deps.append(dep)
-					continue
-				# A scheduled installation broke a deep dependency.
-				# Add the installed package to the graph so that it
-				# will be appropriately reported as a slot collision
-				# (possibly solvable via backtracking).
-				pkg = matches[-1] # highest match
-				if not self._add_pkg(pkg, dep):
-					return 0
-				if not self._create_graph(allow_unsatisfied=True):
-					return 0
-		return 1
-
-	def _pkg(self, cpv, type_name, root_config, installed=False,
-		onlydeps=False, myrepo=None):
-		"""
-		Get a package instance from the cache, or create a new
-		one if necessary. Raises PackageNotFound from aux_get if it
-		fails for some reason (package does not exist or is
-		corrupt).
-		"""
-
-		# Ensure that we use the specially optimized RootConfig instance
-		# that refers to FakeVartree instead of the real vartree.
-		root_config = self._frozen_config.roots[root_config.root]
-		pkg = self._frozen_config._pkg_cache.get(
-			Package._gen_hash_key(cpv=cpv, type_name=type_name,
-			repo_name=myrepo, root_config=root_config,
-			installed=installed, onlydeps=onlydeps))
-		if pkg is None and onlydeps and not installed:
-			# Maybe it already got pulled in as a "merge" node.
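-			# (The cache key includes the onlydeps flag, so a node that
-			# was added for merging was stored with onlydeps=False and the
-			# lookup above misses it; retry with onlydeps=False.)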
-			pkg = self._dynamic_config.mydbapi[root_config.root].get(
-				Package._gen_hash_key(cpv=cpv, type_name=type_name,
-				repo_name=myrepo, root_config=root_config,
-				installed=installed, onlydeps=False))
-
-		if pkg is None:
-			tree_type = self.pkg_tree_map[type_name]
-			db = root_config.trees[tree_type].dbapi
-			db_keys = list(self._frozen_config._trees_orig[root_config.root][
-				tree_type].dbapi._aux_cache_keys)
-
-			try:
-				metadata = zip(db_keys, db.aux_get(cpv, db_keys, myrepo=myrepo))
-			except KeyError:
-				raise portage.exception.PackageNotFound(cpv)
-
-			pkg = Package(built=(type_name != "ebuild"), cpv=cpv,
-				installed=installed, metadata=metadata, onlydeps=onlydeps,
-				root_config=root_config, type_name=type_name)
-
-			self._frozen_config._pkg_cache[pkg] = pkg
-
-			if not self._pkg_visibility_check(pkg) and \
-				'LICENSE' in pkg.masks and len(pkg.masks) == 1:
-				slot_key = (pkg.root, pkg.slot_atom)
-				other_pkg = self._frozen_config._highest_license_masked.get(slot_key)
-				if other_pkg is None or pkg > other_pkg:
-					self._frozen_config._highest_license_masked[slot_key] = pkg
-
-		return pkg
-
-	def _validate_blockers(self):
-		"""Remove any blockers from the digraph that do not match any of the
-		packages within the graph.  If necessary, create hard deps to ensure
-		correct merge order such that mutually blocking packages are never
-		installed simultaneously. Also add runtime blockers from all installed
-		packages if any of them haven't been added already (bug 128809)."""
-
-		if "--buildpkgonly" in self._frozen_config.myopts or \
-			"--nodeps" in self._frozen_config.myopts:
-			return True
-
-		if True:
-			# Pull in blockers from all installed packages that haven't already
-			# been pulled into the depgraph, in order to ensure that they are
-			# respected (bug 128809). Due to the performance penalty that is
-			# incurred by all the additional dep_check calls that are required,
-			# blockers returned from dep_check are cached on disk by the
-			# BlockerCache class.
-
-			# For installed packages, always ignore blockers from DEPEND since
-			# only runtime dependencies should be relevant for packages that
-			# are already built.
-			dep_keys = Package._runtime_keys
-			for myroot in self._frozen_config.trees:
-
-				if self._frozen_config.myopts.get("--root-deps") is not None and \
-					myroot != self._frozen_config.target_root:
-					continue
-
-				vardb = self._frozen_config.trees[myroot]["vartree"].dbapi
-				pkgsettings = self._frozen_config.pkgsettings[myroot]
-				root_config = self._frozen_config.roots[myroot]
-				final_db = self._dynamic_config.mydbapi[myroot]
-
-				blocker_cache = BlockerCache(myroot, vardb)
-				stale_cache = set(blocker_cache)
-				for pkg in vardb:
-					cpv = pkg.cpv
-					stale_cache.discard(cpv)
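-					# After this loop, stale_cache holds only cpvs that
-					# are cached but no longer installed; those entries
-					# are deleted before the cache is flushed (see below).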
-					pkg_in_graph = self._dynamic_config.digraph.contains(pkg)
-					pkg_deps_added = \
-						pkg in self._dynamic_config._traversed_pkg_deps
-
-					# Check for masked installed packages. Only warn about
-					# packages that are in the graph in order to avoid warning
-					# about those that will be automatically uninstalled during
-					# the merge process or by --depclean. Always warn about
-					# packages masked by license, since the user likely wants
-					# to adjust ACCEPT_LICENSE.
-					if pkg in final_db:
-						if not self._pkg_visibility_check(pkg,
-							trust_graph=False) and \
-							(pkg_in_graph or 'LICENSE' in pkg.masks):
-							self._dynamic_config._masked_installed.add(pkg)
-						else:
-							self._check_masks(pkg)
-
-					blocker_atoms = None
-					blockers = None
-					if pkg_deps_added:
-						blockers = []
-						try:
-							blockers.extend(
-								self._dynamic_config._blocker_parents.child_nodes(pkg))
-						except KeyError:
-							pass
-						try:
-							blockers.extend(
-								self._dynamic_config._irrelevant_blockers.child_nodes(pkg))
-						except KeyError:
-							pass
-						if blockers:
-							# Select just the runtime blockers.
-							blockers = [blocker for blocker in blockers \
-								if blocker.priority.runtime or \
-								blocker.priority.runtime_post]
-					if blockers is not None:
-						blockers = set(blocker.atom for blocker in blockers)
-
-					# If this node has any blockers, create a "nomerge"
-					# node for it so that they can be enforced.
-					self._spinner_update()
-					blocker_data = blocker_cache.get(cpv)
-					if blocker_data is not None and \
-						blocker_data.counter != long(pkg.metadata["COUNTER"]):
-						blocker_data = None
-
-					# If blocker data from the graph is available, use
-					# it to validate the cache and update the cache if
-					# it seems invalid.
-					if blocker_data is not None and \
-						blockers is not None:
-						if not blockers.symmetric_difference(
-							blocker_data.atoms):
-							continue
-						blocker_data = None
-
-					if blocker_data is None and \
-						blockers is not None:
-						# Re-use the blockers from the graph.
-						blocker_atoms = sorted(blockers)
-						counter = long(pkg.metadata["COUNTER"])
-						blocker_data = \
-							blocker_cache.BlockerData(counter, blocker_atoms)
-						blocker_cache[pkg.cpv] = blocker_data
-						continue
-
-					if blocker_data:
-						blocker_atoms = [Atom(atom) for atom in blocker_data.atoms]
-					else:
-						# Use aux_get() to trigger FakeVartree global
-						# updates on *DEPEND when appropriate.
-						depstr = " ".join(vardb.aux_get(pkg.cpv, dep_keys))
-						# It is crucial to pass in final_db here in order to
-						# optimize dep_check calls by eliminating atoms via
-						# dep_wordreduce and dep_eval calls.
-						try:
-							success, atoms = portage.dep_check(depstr,
-								final_db, pkgsettings, myuse=self._pkg_use_enabled(pkg),
-								trees=self._dynamic_config._graph_trees, myroot=myroot)
-						except SystemExit:
-							raise
-						except Exception as e:
-							# This is helpful, for example, if a ValueError
-							# is thrown from cpv_expand due to multiple
-							# matches (this can happen if an atom lacks a
-							# category).
-							show_invalid_depstring_notice(
-								pkg, depstr, _unicode_decode("%s") % (e,))
-							del e
-							raise
-						if not success:
-							replacement_pkg = final_db.match_pkgs(pkg.slot_atom)
-							if replacement_pkg and \
-								replacement_pkg[0].operation == "merge":
-								# This package is being replaced anyway, so
-								# ignore invalid dependencies so as not to
-								# annoy the user too much (otherwise they'd be
-								# forced to manually unmerge it first).
-								continue
-							show_invalid_depstring_notice(pkg, depstr, atoms)
-							return False
-						blocker_atoms = [myatom for myatom in atoms \
-							if myatom.blocker]
-						blocker_atoms.sort()
-						counter = long(pkg.metadata["COUNTER"])
-						blocker_cache[cpv] = \
-							blocker_cache.BlockerData(counter, blocker_atoms)
-					if blocker_atoms:
-						try:
-							for atom in blocker_atoms:
-								blocker = Blocker(atom=atom,
-									eapi=pkg.metadata["EAPI"],
-									priority=self._priority(runtime=True),
-									root=myroot)
-								self._dynamic_config._blocker_parents.add(blocker, pkg)
-						except portage.exception.InvalidAtom as e:
-							depstr = " ".join(vardb.aux_get(pkg.cpv, dep_keys))
-							show_invalid_depstring_notice(
-								pkg, depstr,
-								_unicode_decode("Invalid Atom: %s") % (e,))
-							return False
-				for cpv in stale_cache:
-					del blocker_cache[cpv]
-				blocker_cache.flush()
-				del blocker_cache
-
-		# Discard any "uninstall" tasks scheduled by previous calls
-		# to this method, since those tasks may not make sense given
-		# the current graph state.
-		previous_uninstall_tasks = self._dynamic_config._blocker_uninstalls.leaf_nodes()
-		if previous_uninstall_tasks:
-			self._dynamic_config._blocker_uninstalls = digraph()
-			self._dynamic_config.digraph.difference_update(previous_uninstall_tasks)
-
-		for blocker in self._dynamic_config._blocker_parents.leaf_nodes():
-			self._spinner_update()
-			root_config = self._frozen_config.roots[blocker.root]
-			virtuals = root_config.settings.getvirtuals()
-			myroot = blocker.root
-			initial_db = self._frozen_config.trees[myroot]["vartree"].dbapi
-			final_db = self._dynamic_config.mydbapi[myroot]
-			
-			provider_virtual = False
-			if blocker.cp in virtuals and \
-				not self._have_new_virt(blocker.root, blocker.cp):
-				provider_virtual = True
-
-			# Use this to check PROVIDE for each matched package
-			# when necessary.
-			atom_set = InternalPackageSet(
-				initial_atoms=[blocker.atom])
-
-			if provider_virtual:
-				atoms = []
-				for provider_entry in virtuals[blocker.cp]:
-					atoms.append(Atom(blocker.atom.replace(
-						blocker.cp, provider_entry.cp, 1)))
-			else:
-				atoms = [blocker.atom]
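-			# Hypothetical example: for a blocker !virtual/foo where
-			# virtual/foo is an old-style virtual provided by app-misc/bar,
-			# the expansion above yields !app-misc/bar, so the block is
-			# checked against the real provider packages.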
-
-			blocked_initial = set()
-			for atom in atoms:
-				for pkg in initial_db.match_pkgs(atom):
-					if atom_set.findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)):
-						blocked_initial.add(pkg)
-
-			blocked_final = set()
-			for atom in atoms:
-				for pkg in final_db.match_pkgs(atom):
-					if atom_set.findAtomForPackage(pkg, modified_use=self._pkg_use_enabled(pkg)):
-						blocked_final.add(pkg)
-
-			if not blocked_initial and not blocked_final:
-				parent_pkgs = self._dynamic_config._blocker_parents.parent_nodes(blocker)
-				self._dynamic_config._blocker_parents.remove(blocker)
-				# Discard any parents that don't have any more blockers.
-				for pkg in parent_pkgs:
-					self._dynamic_config._irrelevant_blockers.add(blocker, pkg)
-					if not self._dynamic_config._blocker_parents.child_nodes(pkg):
-						self._dynamic_config._blocker_parents.remove(pkg)
-				continue
-			for parent in self._dynamic_config._blocker_parents.parent_nodes(blocker):
-				unresolved_blocks = False
-				depends_on_order = set()
-				for pkg in blocked_initial:
-					if pkg.slot_atom == parent.slot_atom and \
-						not blocker.atom.blocker.overlap.forbid:
-						# New !!atom blockers do not allow temporary
-						# simultaneous installation, so unlike !atom
-						# blockers, !!atom blockers aren't ignored
-						# when they match other packages occupying
-						# the same slot.
-						continue
-					if parent.installed:
-						# Two currently installed packages conflict with
-						# each other. Ignore this case since the damage
-						# is already done and this would be likely to
-						# confuse users if displayed like a normal blocker.
-						continue
-
-					self._dynamic_config._blocked_pkgs.add(pkg, blocker)
-
-					if parent.operation == "merge":
-						# Maybe the blocked package can be replaced or simply
-						# unmerged to resolve this block.
-						depends_on_order.add((pkg, parent))
-						continue
-					# None of the above blocker resolution techniques apply,
-					# so apparently this one is unresolvable.
-					unresolved_blocks = True
-				for pkg in blocked_final:
-					if pkg.slot_atom == parent.slot_atom and \
-						not blocker.atom.blocker.overlap.forbid:
-						# New !!atom blockers do not allow temporary
-						# simultaneous installation, so unlike !atom
-						# blockers, !!atom blockers aren't ignored
-						# when they match other packages occupying
-						# the same slot.
-						continue
-					if parent.operation == "nomerge" and \
-						pkg.operation == "nomerge":
-						# This blocker will be handled the next time that a
-						# merge of either package is triggered.
-						continue
-
-					self._dynamic_config._blocked_pkgs.add(pkg, blocker)
-
-					# Maybe the blocking package can be
-					# unmerged to resolve this block.
-					if parent.operation == "merge" and pkg.installed:
-						depends_on_order.add((pkg, parent))
-						continue
-					elif parent.operation == "nomerge":
-						depends_on_order.add((parent, pkg))
-						continue
-					# None of the above blocker resolution techniques apply,
-					# so apparently this one is unresolvable.
-					unresolved_blocks = True
-
-				# Make sure we don't unmerge any package that has been pulled
-				# into the graph.
-				if not unresolved_blocks and depends_on_order:
-					for inst_pkg, inst_task in depends_on_order:
-						if self._dynamic_config.digraph.contains(inst_pkg) and \
-							self._dynamic_config.digraph.parent_nodes(inst_pkg):
-							unresolved_blocks = True
-							break
-
-				if not unresolved_blocks and depends_on_order:
-					for inst_pkg, inst_task in depends_on_order:
-						uninst_task = Package(built=inst_pkg.built,
-							cpv=inst_pkg.cpv, installed=inst_pkg.installed,
-							metadata=inst_pkg.metadata,
-							operation="uninstall",
-							root_config=inst_pkg.root_config,
-							type_name=inst_pkg.type_name)
-						# Enforce correct merge order with a hard dep.
-						self._dynamic_config.digraph.addnode(uninst_task, inst_task,
-							priority=BlockerDepPriority.instance)
-						# Count references to this blocker so that it can be
-						# invalidated after nodes referencing it have been
-						# merged.
-						self._dynamic_config._blocker_uninstalls.addnode(uninst_task, blocker)
-				if not unresolved_blocks and not depends_on_order:
-					self._dynamic_config._irrelevant_blockers.add(blocker, parent)
-					self._dynamic_config._blocker_parents.remove_edge(blocker, parent)
-					if not self._dynamic_config._blocker_parents.parent_nodes(blocker):
-						self._dynamic_config._blocker_parents.remove(blocker)
-					if not self._dynamic_config._blocker_parents.child_nodes(parent):
-						self._dynamic_config._blocker_parents.remove(parent)
-				if unresolved_blocks:
-					self._dynamic_config._unsolvable_blockers.add(blocker, parent)
-
-		return True
-
-	def _accept_blocker_conflicts(self):
-		acceptable = False
-		for x in ("--buildpkgonly", "--fetchonly",
-			"--fetch-all-uri", "--nodeps"):
-			if x in self._frozen_config.myopts:
-				acceptable = True
-				break
-		return acceptable
-
-	def _merge_order_bias(self, mygraph):
-		"""
-		For optimal leaf node selection, promote deep system runtime deps and
-		order nodes from highest to lowest overall reference count.
-		"""
-
-		node_info = {}
-		for node in mygraph.order:
-			node_info[node] = len(mygraph.parent_nodes(node))
-		deep_system_deps = _find_deep_system_runtime_deps(mygraph)
-
-		def cmp_merge_preference(node1, node2):
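-			# Sort key summary: uninstall operations sort last, deep
-			# system runtime deps sort first, and remaining nodes are
-			# ordered by descending parent (reference) count.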
-
-			if node1.operation == 'uninstall':
-				if node2.operation == 'uninstall':
-					return 0
-				return 1
-
-			if node2.operation == 'uninstall':
-				if node1.operation == 'uninstall':
-					return 0
-				return -1
-
-			node1_sys = node1 in deep_system_deps
-			node2_sys = node2 in deep_system_deps
-			if node1_sys != node2_sys:
-				if node1_sys:
-					return -1
-				return 1
-
-			return node_info[node2] - node_info[node1]
-
-		mygraph.order.sort(key=cmp_sort_key(cmp_merge_preference))
-
-	def altlist(self, reversed=False):
-
-		while self._dynamic_config._serialized_tasks_cache is None:
-			self._resolve_conflicts()
-			try:
-				self._dynamic_config._serialized_tasks_cache, self._dynamic_config._scheduler_graph = \
-					self._serialize_tasks()
-			except self._serialize_tasks_retry:
-				pass
-
-		retlist = self._dynamic_config._serialized_tasks_cache[:]
-		if reversed:
-			retlist.reverse()
-		return retlist
-
-	def _implicit_libc_deps(self, mergelist, graph):
-		"""
-		Create implicit dependencies on libc, in order to ensure that libc
-		is installed as early as possible (see bug #303567).
-		"""
-		libc_pkgs = {}
-		implicit_libc_roots = (self._frozen_config._running_root.root,)
-		for root in implicit_libc_roots:
-			graphdb = self._dynamic_config.mydbapi[root]
-			vardb = self._frozen_config.trees[root]["vartree"].dbapi
-			for atom in self._expand_virt_from_graph(root,
-				portage.const.LIBC_PACKAGE_ATOM):
-				if atom.blocker:
-					continue
-				match = graphdb.match_pkgs(atom)
-				if not match:
-					continue
-				pkg = match[-1]
-				if pkg.operation == "merge" and \
-					not vardb.cpv_exists(pkg.cpv):
-					libc_pkgs.setdefault(pkg.root, set()).add(pkg)
-
-		if not libc_pkgs:
-			return
-
-		earlier_libc_pkgs = set()
-
-		for pkg in mergelist:
-			if not isinstance(pkg, Package):
-				# a satisfied blocker
-				continue
-			root_libc_pkgs = libc_pkgs.get(pkg.root)
-			if root_libc_pkgs is not None and \
-				pkg.operation == "merge":
-				if pkg in root_libc_pkgs:
-					earlier_libc_pkgs.add(pkg)
-				else:
-					for libc_pkg in root_libc_pkgs:
-						if libc_pkg in earlier_libc_pkgs:
-							graph.add(libc_pkg, pkg,
-								priority=DepPriority(buildtime=True))
-
-	def schedulerGraph(self):
-		"""
-		The scheduler graph is identical to the normal one except that
-		uninstall edges are reversed in specific cases that require
-		conflicting packages to be temporarily installed simultaneously.
-		This is intended for use by the Scheduler in its parallelization
-		logic. It ensures that temporary simultaneous installation of
-		conflicting packages is avoided when appropriate (especially for
-		!!atom blockers), but allowed in specific cases that require it.
-
-		Note that this method calls break_refs() which alters the state of
-		internal Package instances such that this depgraph instance should
-		not be used to perform any more calculations.
-		"""
-
-		# NOTE: altlist initializes self._dynamic_config._scheduler_graph
-		mergelist = self.altlist()
-		self._implicit_libc_deps(mergelist,
-			self._dynamic_config._scheduler_graph)
-
-		# Break DepPriority.satisfied attributes which reference
-		# installed Package instances.
-		for parents, children, node in \
-			self._dynamic_config._scheduler_graph.nodes.values():
-			for priorities in chain(parents.values(), children.values()):
-				for priority in priorities:
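-					# priority.satisfied may hold the installed Package
-					# that satisfied the dep; overwriting it with a plain
-					# True keeps the truth value while dropping the
-					# Package reference.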
-					if priority.satisfied:
-						priority.satisfied = True
-
-		pkg_cache = self._frozen_config._pkg_cache
-		graph = self._dynamic_config._scheduler_graph
-		trees = self._frozen_config.trees
-		pruned_pkg_cache = {}
-		for key, pkg in pkg_cache.items():
-			if pkg in graph or \
-				(pkg.installed and pkg in trees[pkg.root]['vartree'].dbapi):
-				pruned_pkg_cache[key] = pkg
-
-		for root in trees:
-			trees[root]['vartree']._pkg_cache = pruned_pkg_cache
-
-		self.break_refs()
-		sched_config = \
-			_scheduler_graph_config(trees, pruned_pkg_cache, graph, mergelist)
-
-		return sched_config
-
-	def break_refs(self):
-		"""
-		Break any references in Package instances that lead back to the depgraph.
-		This is useful if you want to hold references to packages without also
-		holding the depgraph on the heap. It should only be called after the
-		depgraph and _frozen_config are no longer used for any calculations.
-		"""
-		for root_config in self._frozen_config.roots.values():
-			root_config.update(self._frozen_config._trees_orig[
-				root_config.root]["root_config"])
-			# Both instances are now identical, so discard the
-			# original which should have no other references.
-			self._frozen_config._trees_orig[
-				root_config.root]["root_config"] = root_config
-
-	def _resolve_conflicts(self):
-
-		if "complete" not in self._dynamic_config.myparams and \
-			self._dynamic_config._allow_backtracking and \
-			self._dynamic_config._slot_collision_nodes and \
-			not self._accept_blocker_conflicts():
-			self._dynamic_config.myparams["complete"] = True
-
-		if not self._complete_graph():
-			raise self._unknown_internal_error()
-
-		self._process_slot_conflicts()
-
-		self._slot_operator_trigger_reinstalls()
-
-		if not self._validate_blockers():
-			self._dynamic_config._skip_restart = True
-			raise self._unknown_internal_error()
-
-	def _serialize_tasks(self):
-
-		debug = "--debug" in self._frozen_config.myopts
-
-		if debug:
-			writemsg("\ndigraph:\n\n", noiselevel=-1)
-			self._dynamic_config.digraph.debug_print()
-			writemsg("\n", noiselevel=-1)
-
-		scheduler_graph = self._dynamic_config.digraph.copy()
-
-		if '--nodeps' in self._frozen_config.myopts:
-			# Preserve the package order given on the command line.
-			return ([node for node in scheduler_graph \
-				if isinstance(node, Package) \
-				and node.operation == 'merge'], scheduler_graph)
-
-		mygraph = self._dynamic_config.digraph.copy()
-
-		removed_nodes = set()
-
-		# Prune off all DependencyArg instances since they aren't
-		# needed, and because of nested sets this is faster than doing
-		# it with multiple digraph.root_nodes() calls below. This also
-		# takes care of nested sets that have circular references,
-		# which wouldn't be matched by digraph.root_nodes().
-		for node in mygraph:
-			if isinstance(node, DependencyArg):
-				removed_nodes.add(node)
-		if removed_nodes:
-			mygraph.difference_update(removed_nodes)
-			removed_nodes.clear()
-
-		# Prune "nomerge" root nodes if nothing depends on them, since
-		# otherwise they slow down merge order calculation. Don't remove
-		# non-root nodes since they help optimize merge order in some cases
-		# such as revdep-rebuild.
-
-		while True:
-			for node in mygraph.root_nodes():
-				if not isinstance(node, Package) or \
-					node.installed or node.onlydeps:
-					removed_nodes.add(node)
-			if removed_nodes:
-				self._spinner_update()
-				mygraph.difference_update(removed_nodes)
-			if not removed_nodes:
-				break
-			removed_nodes.clear()
-		self._merge_order_bias(mygraph)
-		def cmp_circular_bias(n1, n2):
-			"""
-			RDEPEND is stronger than PDEPEND and this function
-			measures such a strength bias within a circular
-			dependency relationship.
-			"""
-			n1_n2_medium = n2 in mygraph.child_nodes(n1,
-				ignore_priority=priority_range.ignore_medium_soft)
-			n2_n1_medium = n1 in mygraph.child_nodes(n2,
-				ignore_priority=priority_range.ignore_medium_soft)
-			if n1_n2_medium == n2_n1_medium:
-				return 0
-			elif n1_n2_medium:
-				return 1
-			return -1
-		myblocker_uninstalls = self._dynamic_config._blocker_uninstalls.copy()
-		retlist = []
-		# Contains uninstall tasks that have been scheduled to
-		# occur after overlapping blockers have been installed.
-		scheduled_uninstalls = set()
-		# Contains any Uninstall tasks that have been ignored
-		# in order to avoid the circular deps code path. These
-		# correspond to blocker conflicts that could not be
-		# resolved.
-		ignored_uninstall_tasks = set()
-		have_uninstall_task = False
-		complete = "complete" in self._dynamic_config.myparams
-		asap_nodes = []
-
-		def get_nodes(**kwargs):
-			"""
-			Returns leaf nodes excluding Uninstall instances
-			since those should be executed as late as possible.
-			"""
-			return [node for node in mygraph.leaf_nodes(**kwargs) \
-				if isinstance(node, Package) and \
-					(node.operation != "uninstall" or \
-					node in scheduled_uninstalls)]
-
-		# sys-apps/portage needs special treatment if ROOT="/"
-		running_root = self._frozen_config._running_root.root
-		runtime_deps = InternalPackageSet(
-			initial_atoms=[PORTAGE_PACKAGE_ATOM])
-		running_portage = self._frozen_config.trees[running_root]["vartree"].dbapi.match_pkgs(
-			PORTAGE_PACKAGE_ATOM)
-		replacement_portage = self._dynamic_config.mydbapi[running_root].match_pkgs(
-			PORTAGE_PACKAGE_ATOM)
-
-		if running_portage:
-			running_portage = running_portage[0]
-		else:
-			running_portage = None
-
-		if replacement_portage:
-			replacement_portage = replacement_portage[0]
-		else:
-			replacement_portage = None
-
-		if replacement_portage == running_portage:
-			replacement_portage = None
-
-		if running_portage is not None:
-			try:
-				portage_rdepend = self._select_atoms_highest_available(
-					running_root, running_portage.metadata["RDEPEND"],
-					myuse=self._pkg_use_enabled(running_portage),
-					parent=running_portage, strict=False)
-			except portage.exception.InvalidDependString as e:
-				portage.writemsg("!!! Invalid RDEPEND in " + \
-					"'%svar/db/pkg/%s/RDEPEND': %s\n" % \
-					(running_root, running_portage.cpv, e), noiselevel=-1)
-				del e
-				portage_rdepend = {running_portage : []}
-			for atoms in portage_rdepend.values():
-				runtime_deps.update(atom for atom in atoms \
-					if not atom.blocker)
-
-		# Merge libc asap, in order to account for implicit
-		# dependencies. See bug #303567.
-		implicit_libc_roots = (running_root,)
-		for root in implicit_libc_roots:
-			libc_pkgs = set()
-			vardb = self._frozen_config.trees[root]["vartree"].dbapi
-			graphdb = self._dynamic_config.mydbapi[root]
-			for atom in self._expand_virt_from_graph(root,
-				portage.const.LIBC_PACKAGE_ATOM):
-				if atom.blocker:
-					continue
-				match = graphdb.match_pkgs(atom)
-				if not match:
-					continue
-				pkg = match[-1]
-				if pkg.operation == "merge" and \
-					not vardb.cpv_exists(pkg.cpv):
-					libc_pkgs.add(pkg)
-
-			if libc_pkgs:
-				# If there's also an os-headers upgrade, we need to
-				# pull that in first. See bug #328317.
-				for atom in self._expand_virt_from_graph(root,
-					portage.const.OS_HEADERS_PACKAGE_ATOM):
-					if atom.blocker:
-						continue
-					match = graphdb.match_pkgs(atom)
-					if not match:
-						continue
-					pkg = match[-1]
-					if pkg.operation == "merge" and \
-						not vardb.cpv_exists(pkg.cpv):
-						asap_nodes.append(pkg)
-
-				asap_nodes.extend(libc_pkgs)
-
-		def gather_deps(ignore_priority, mergeable_nodes,
-			selected_nodes, node):
-			"""
-			Recursively gather a group of nodes that RDEPEND on
-			each other. This ensures that they are merged as a group
-			and get their RDEPENDs satisfied as soon as possible.
-			"""
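-			# Illustrative case: if A and B are both mergeable and RDEPEND
-			# on each other, starting from A adds A, recurses into B, adds
-			# B, then sees A already selected and returns True for the
-			# whole group.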
-			if node in selected_nodes:
-				return True
-			if node not in mergeable_nodes:
-				return False
-			if node == replacement_portage and \
-				mygraph.child_nodes(node,
-				ignore_priority=priority_range.ignore_medium_soft):
-				# Make sure that portage always has all of its
-				# RDEPENDs installed first.
-				return False
-			selected_nodes.add(node)
-			for child in mygraph.child_nodes(node,
-				ignore_priority=ignore_priority):
-				if not gather_deps(ignore_priority,
-					mergeable_nodes, selected_nodes, child):
-					return False
-			return True
-
-		def ignore_uninst_or_med(priority):
-			if priority is BlockerDepPriority.instance:
-				return True
-			return priority_range.ignore_medium(priority)
-
-		def ignore_uninst_or_med_soft(priority):
-			if priority is BlockerDepPriority.instance:
-				return True
-			return priority_range.ignore_medium_soft(priority)
-
-		tree_mode = "--tree" in self._frozen_config.myopts
-		# Tracks whether or not the current iteration should prefer asap_nodes
-		# if available.  This is set to False when the previous iteration
-		# failed to select any nodes.  It is reset whenever nodes are
-		# successfully selected.
-		prefer_asap = True
-
-		# Controls whether or not the current iteration should drop edges that
-		# are "satisfied" by installed packages, in order to solve circular
-		# dependencies. The deep runtime dependencies of installed packages are
-		# not checked in this case (bug #199856), so it must be avoided
-		# whenever possible.
-		drop_satisfied = False
-
-		# State of variables for successive iterations that loosen the
-		# criteria for node selection.
-		#
-		# iteration   prefer_asap   drop_satisfied
-		# 1           True          False
-		# 2           False         False
-		# 3           False         True
-		#
-		# If no nodes are selected on the last iteration, it is due to
-		# unresolved blockers or circular dependencies.
-
-		while mygraph:
-			self._spinner_update()
-			selected_nodes = None
-			ignore_priority = None
-			if drop_satisfied or (prefer_asap and asap_nodes):
-				priority_range = DepPrioritySatisfiedRange
-			else:
-				priority_range = DepPriorityNormalRange
-			if prefer_asap and asap_nodes:
-				# ASAP nodes are merged before their soft deps. Go ahead and
-				# select root nodes here if necessary, since it's typical for
-				# the parent to have been removed from the graph already.
-				asap_nodes = [node for node in asap_nodes \
-					if mygraph.contains(node)]
-				for i in range(priority_range.SOFT,
-					priority_range.MEDIUM_SOFT + 1):
-					ignore_priority = priority_range.ignore_priority[i]
-					for node in asap_nodes:
-						if not mygraph.child_nodes(node,
-							ignore_priority=ignore_priority):
-							selected_nodes = [node]
-							asap_nodes.remove(node)
-							break
-					if selected_nodes:
-						break
-
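-			# When ASAP nodes are not being preferred, scan the whole
-			# graph for leaf nodes, relaxing the ignored dependency
-			# priorities step by step until something can be selected.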
-			if not selected_nodes and \
-				not (prefer_asap and asap_nodes):
-				for i in range(priority_range.NONE,
-					priority_range.MEDIUM_SOFT + 1):
-					ignore_priority = priority_range.ignore_priority[i]
-					nodes = get_nodes(ignore_priority=ignore_priority)
-					if nodes:
-						# If there is a mixture of merges and uninstalls,
-						# do the uninstalls first.
-						good_uninstalls = None
-						if len(nodes) > 1:
-							good_uninstalls = []
-							for node in nodes:
-								if node.operation == "uninstall":
-									good_uninstalls.append(node)
-
-							if good_uninstalls:
-								nodes = good_uninstalls
-
-						if good_uninstalls or len(nodes) == 1 or \
-							(ignore_priority is None and \
-							not asap_nodes and not tree_mode):
-							# Greedily pop all of these nodes since no
-							# relationship has been ignored. This optimization
-							# destroys --tree output, so it's disabled in tree
-							# mode.
-							selected_nodes = nodes
-						else:
-							# For optimal merge order:
-							#  * Only pop one node.
-							#  * Removing a root node (node without a parent)
-							#    will not produce a leaf node, so avoid it.
-							#  * It's normal for a selected uninstall to be a
-							#    root node, so don't check them for parents.
-							if asap_nodes:
-								prefer_asap_parents = (True, False)
-							else:
-								prefer_asap_parents = (False,)
-							for check_asap_parent in prefer_asap_parents:
-								if check_asap_parent:
-									for node in nodes:
-										parents = mygraph.parent_nodes(node,
-											ignore_priority=DepPrioritySatisfiedRange.ignore_soft)
-										if any(x in asap_nodes for x in parents):
-											selected_nodes = [node]
-											break
-								else:
-									for node in nodes:
-										if mygraph.parent_nodes(node):
-											selected_nodes = [node]
-											break
-								if selected_nodes:
-									break
-						if selected_nodes:
-							break
-
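-			# Still no selection: assume a runtime cycle and try to
-			# gather a small group of mutually dependent nodes that
-			# can be merged together.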
-			if not selected_nodes:
-				nodes = get_nodes(ignore_priority=priority_range.ignore_medium)
-				if nodes:
-					mergeable_nodes = set(nodes)
-					if prefer_asap and asap_nodes:
-						nodes = asap_nodes
-					# When gathering the nodes belonging to a runtime cycle,
-					# we want to minimize the number of nodes gathered, since
-					# this tends to produce a more optimal merge order.
-					# Ignoring all medium_soft deps serves this purpose.
-					# In the case of multiple runtime cycles, where some cycles
-					# may depend on smaller independent cycles, it's optimal
-					# to merge smaller independent cycles before other cycles
-					# that depend on them. Therefore, we search for the
-					# smallest cycle in order to try and identify and prefer
-					# these smaller independent cycles.
-					ignore_priority = priority_range.ignore_medium_soft
-					smallest_cycle = None
-					for node in nodes:
-						if not mygraph.parent_nodes(node):
-							continue
-						selected_nodes = set()
-						if gather_deps(ignore_priority,
-							mergeable_nodes, selected_nodes, node):
-							# When selecting asap_nodes, we need to ensure
-							# that we haven't selected a large runtime cycle
-							# that is obviously sub-optimal. This will be
-							# obvious if any of the non-asap selected_nodes
-							# is a leaf node when medium_soft deps are
-							# ignored.
-							if prefer_asap and asap_nodes and \
-								len(selected_nodes) > 1:
-								for node in selected_nodes.difference(
-									asap_nodes):
-									if not mygraph.child_nodes(node,
-										ignore_priority =
-										DepPriorityNormalRange.ignore_medium_soft):
-										selected_nodes = None
-										break
-							if selected_nodes:
-								if smallest_cycle is None or \
-									len(selected_nodes) < len(smallest_cycle):
-									smallest_cycle = selected_nodes
-
-					selected_nodes = smallest_cycle
-
-					if selected_nodes and debug:
-						writemsg("\nruntime cycle digraph (%s nodes):\n\n" %
-							(len(selected_nodes),), noiselevel=-1)
-						cycle_digraph = mygraph.copy()
-						cycle_digraph.difference_update([x for x in
-							cycle_digraph if x not in selected_nodes])
-						cycle_digraph.debug_print()
-						writemsg("\n", noiselevel=-1)
-
-					if prefer_asap and asap_nodes and not selected_nodes:
-						# We failed to find any asap nodes to merge, so ignore
-						# them for the next iteration.
-						prefer_asap = False
-						continue
-
-			if selected_nodes and ignore_priority is not None:
-				# Try to merge ignored medium_soft deps as soon as possible
-				# if they're not satisfied by installed packages.
-				for node in selected_nodes:
-					children = set(mygraph.child_nodes(node))
-					soft = children.difference(
-						mygraph.child_nodes(node,
-						ignore_priority=DepPrioritySatisfiedRange.ignore_soft))
-					medium_soft = children.difference(
-						mygraph.child_nodes(node,
-							ignore_priority = \
-							DepPrioritySatisfiedRange.ignore_medium_soft))
-					medium_soft.difference_update(soft)
-					for child in medium_soft:
-						if child in selected_nodes:
-							continue
-						if child in asap_nodes:
-							continue
-						# Merge PDEPEND asap for bug #180045.
-						asap_nodes.append(child)
-
-			if selected_nodes and len(selected_nodes) > 1:
-				if not isinstance(selected_nodes, list):
-					selected_nodes = list(selected_nodes)
-				selected_nodes.sort(key=cmp_sort_key(cmp_circular_bias))
-
-			if not selected_nodes and myblocker_uninstalls:
-				# An Uninstall task needs to be executed in order to
-				# avoid conflict if possible.
-
-				if drop_satisfied:
-					priority_range = DepPrioritySatisfiedRange
-				else:
-					priority_range = DepPriorityNormalRange
-
-				mergeable_nodes = get_nodes(
-					ignore_priority=ignore_uninst_or_med)
-
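-				# Search for the task whose parent packages have the
-				# fewest outstanding deps, since that choice comes
-				# closest to producing a leaf node.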
-				min_parent_deps = None
-				uninst_task = None
-
-				for task in myblocker_uninstalls.leaf_nodes():
-					# Do some sanity checks so that system or world packages
-					# don't get uninstalled inappropriately here (only really
-					# necessary when --complete-graph has not been enabled).
-
-					if task in ignored_uninstall_tasks:
-						continue
-
-					if task in scheduled_uninstalls:
-						# It's been scheduled but it hasn't
-						# been executed yet due to dependence
-						# on installation of blocking packages.
-						continue
-
-					root_config = self._frozen_config.roots[task.root]
-					inst_pkg = self._pkg(task.cpv, "installed", root_config,
-						installed=True)
-
-					if self._dynamic_config.digraph.contains(inst_pkg):
-						continue
-
-					forbid_overlap = False
-					heuristic_overlap = False
-					for blocker in myblocker_uninstalls.parent_nodes(task):
-						if not eapi_has_strong_blocks(blocker.eapi):
-							heuristic_overlap = True
-						elif blocker.atom.blocker.overlap.forbid:
-							forbid_overlap = True
-							break
-					if forbid_overlap and running_root == task.root:
-						continue
-
-					if heuristic_overlap and running_root == task.root:
-						# Never uninstall sys-apps/portage or its essential
-						# dependencies, except through replacement.
-						try:
-							runtime_dep_atoms = \
-								list(runtime_deps.iterAtomsForPackage(task))
-						except portage.exception.InvalidDependString as e:
-							portage.writemsg("!!! Invalid PROVIDE in " + \
-								"'%svar/db/pkg/%s/PROVIDE': %s\n" % \
-								(task.root, task.cpv, e), noiselevel=-1)
-							del e
-							continue
-
-						# Don't uninstall a runtime dep if it appears
-						# to be the only suitable one installed.
-						skip = False
-						vardb = root_config.trees["vartree"].dbapi
-						for atom in runtime_dep_atoms:
-							other_version = None
-							for pkg in vardb.match_pkgs(atom):
-								if pkg.cpv == task.cpv and \
-									pkg.metadata["COUNTER"] == \
-									task.metadata["COUNTER"]:
-									continue
-								other_version = pkg
-								break
-							if other_version is None:
-								skip = True
-								break
-						if skip:
-							continue
-
-						# For packages in the system set, don't take
-						# any chances. If the conflict can't be resolved
-						# by a normal replacement operation then abort.
-						skip = False
-						try:
-							for atom in root_config.sets[
-								"system"].iterAtomsForPackage(task):
-								skip = True
-								break
-						except portage.exception.InvalidDependString as e:
-							portage.writemsg("!!! Invalid PROVIDE in " + \
-								"'%svar/db/pkg/%s/PROVIDE': %s\n" % \
-								(task.root, task.cpv, e), noiselevel=-1)
-							del e
-							skip = True
-						if skip:
-							continue
-
-					# Note that the world check isn't always
-					# necessary since self._complete_graph() will
-					# add all packages from the system and world sets to the
-					# graph. This just allows unresolved conflicts to be
-					# detected as early as possible, which makes it possible
-					# to avoid calling self._complete_graph() when it is
-					# unnecessary due to blockers triggering an abort.
-					if not complete:
-						# For packages in the world set, go ahead and uninstall
-						# when necessary, as long as the atom will be satisfied
-						# in the final state.
-						graph_db = self._dynamic_config.mydbapi[task.root]
-						skip = False
-						try:
-							for atom in root_config.sets[
-								"selected"].iterAtomsForPackage(task):
-								satisfied = False
-								for pkg in graph_db.match_pkgs(atom):
-									if pkg == inst_pkg:
-										continue
-									satisfied = True
-									break
-								if not satisfied:
-									skip = True
-									self._dynamic_config._blocked_world_pkgs[inst_pkg] = atom
-									break
-						except portage.exception.InvalidDependString as e:
-							portage.writemsg("!!! Invalid PROVIDE in " + \
-								"'%svar/db/pkg/%s/PROVIDE': %s\n" % \
-								(task.root, task.cpv, e), noiselevel=-1)
-							del e
-							skip = True
-						if skip:
-							continue
-
-					# Check the deps of parent nodes to ensure that
-					# the chosen task produces a leaf node. Maybe
-					# this can be optimized some more to make the
-					# best possible choice, but the current algorithm
-					# is simple and should be near optimal for most
-					# common cases.
-					self._spinner_update()
-					mergeable_parent = False
-					parent_deps = set()
-					parent_deps.add(task)
-					for parent in mygraph.parent_nodes(task):
-						parent_deps.update(mygraph.child_nodes(parent,
-							ignore_priority=priority_range.ignore_medium_soft))
-						if min_parent_deps is not None and \
-							len(parent_deps) >= min_parent_deps:
-							# This task is no better than a previously selected
-							# task, so abort the search now in order to avoid wasting
-							# any more cpu time on this task. This increases
-							# performance dramatically in cases when there are
-							# hundreds of blockers to solve, like when
-							# upgrading to a new slot of kde-meta.
-							mergeable_parent = None
-							break
-						if parent in mergeable_nodes and \
-							gather_deps(ignore_uninst_or_med_soft,
-							mergeable_nodes, set(), parent):
-							mergeable_parent = True
-
-					if not mergeable_parent:
-						continue
-
-					if min_parent_deps is None or \
-						len(parent_deps) < min_parent_deps:
-						min_parent_deps = len(parent_deps)
-						uninst_task = task
-
-					if uninst_task is not None and min_parent_deps == 1:
-						# This is the best possible result, so abort the search
-						# now in order to avoid wasting any more cpu time.
-						break
-
-				if uninst_task is not None:
-					# The uninstall is performed only after blocking
-					# packages have been merged on top of it. File
-					# collisions between blocking packages are detected
-					# and removed from the list of files to be uninstalled.
-					scheduled_uninstalls.add(uninst_task)
-					parent_nodes = mygraph.parent_nodes(uninst_task)
-
-					# Reverse the parent -> uninstall edges since we want
-					# to do the uninstall after blocking packages have
-					# been merged on top of it.
-					mygraph.remove(uninst_task)
-					for blocked_pkg in parent_nodes:
-						mygraph.add(blocked_pkg, uninst_task,
-							priority=BlockerDepPriority.instance)
-						scheduler_graph.remove_edge(uninst_task, blocked_pkg)
-						scheduler_graph.add(blocked_pkg, uninst_task,
-							priority=BlockerDepPriority.instance)
-
-					# Sometimes a merge node will render an uninstall
-					# node unnecessary (due to occupying the same SLOT),
-					# and we want to avoid executing a separate uninstall
-					# task in that case.
-					slot_node = self._dynamic_config.mydbapi[uninst_task.root
-						].match_pkgs(uninst_task.slot_atom)
-					if slot_node and \
-						slot_node[0].operation == "merge":
-						mygraph.add(slot_node[0], uninst_task,
-							priority=BlockerDepPriority.instance)
-
-					# Reset the state variables for leaf node selection and
-					# continue trying to select leaf nodes.
-					prefer_asap = True
-					drop_satisfied = False
-					continue
-
-			if not selected_nodes:
-				# Only select root nodes as a last resort. This case should
-				# only trigger when the graph is nearly empty and the only
-				# remaining nodes are isolated (no parents or children). Since
-				# the nodes must be isolated, ignore_priority is not needed.
-				selected_nodes = get_nodes()
-
-			if not selected_nodes and not drop_satisfied:
-				drop_satisfied = True
-				continue
-
-			if not selected_nodes and myblocker_uninstalls:
-				# If possible, drop an uninstall task here in order to avoid
-				# the circular deps code path. The corresponding blocker will
-				# still be counted as an unresolved conflict.
-				uninst_task = None
-				for node in myblocker_uninstalls.leaf_nodes():
-					try:
-						mygraph.remove(node)
-					except KeyError:
-						pass
-					else:
-						uninst_task = node
-						ignored_uninstall_tasks.add(node)
-						break
-
-				if uninst_task is not None:
-					# Reset the state variables for leaf node selection and
-					# continue trying to select leaf nodes.
-					prefer_asap = True
-					drop_satisfied = False
-					continue
-
-			if not selected_nodes:
-				self._dynamic_config._circular_deps_for_display = mygraph
-				self._dynamic_config._skip_restart = True
-				raise self._unknown_internal_error()
-
-			# At this point, we've succeeded in selecting one or more nodes, so
-			# reset state variables for leaf node selection.
-			prefer_asap = True
-			drop_satisfied = False
-
-			mygraph.difference_update(selected_nodes)
-
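-			# Append the selected nodes to the merge list, resolving
-			# interactions between blockers and uninstall tasks along
-			# the way.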
-			for node in selected_nodes:
-				if isinstance(node, Package) and \
-					node.operation == "nomerge":
-					continue
-
-				# Handle interactions between blockers
-				# and uninstallation tasks.
-				solved_blockers = set()
-				uninst_task = None
-				if isinstance(node, Package) and \
-					"uninstall" == node.operation:
-					have_uninstall_task = True
-					uninst_task = node
-				else:
-					vardb = self._frozen_config.trees[node.root]["vartree"].dbapi
-					inst_pkg = vardb.match_pkgs(node.slot_atom)
-					if inst_pkg:
-						# The package will be replaced by this one, so remove
-						# the corresponding Uninstall task if necessary.
-						inst_pkg = inst_pkg[0]
-						uninst_task = Package(built=inst_pkg.built,
-							cpv=inst_pkg.cpv, installed=inst_pkg.installed,
-							metadata=inst_pkg.metadata,
-							operation="uninstall",
-							root_config=inst_pkg.root_config,
-							type_name=inst_pkg.type_name)
-						try:
-							mygraph.remove(uninst_task)
-						except KeyError:
-							pass
-
-				if uninst_task is not None and \
-					uninst_task not in ignored_uninstall_tasks and \
-					myblocker_uninstalls.contains(uninst_task):
-					blocker_nodes = myblocker_uninstalls.parent_nodes(uninst_task)
-					myblocker_uninstalls.remove(uninst_task)
-					# Discard any blockers that this Uninstall solves.
-					for blocker in blocker_nodes:
-						if not myblocker_uninstalls.child_nodes(blocker):
-							myblocker_uninstalls.remove(blocker)
-							if blocker not in \
-								self._dynamic_config._unsolvable_blockers:
-								solved_blockers.add(blocker)
-
-				retlist.append(node)
-
-				if (isinstance(node, Package) and \
-					"uninstall" == node.operation) or \
-					(uninst_task is not None and \
-					uninst_task in scheduled_uninstalls):
-					# Include satisfied blockers in the merge list since
-					# the user might be interested, and it also serves as
-					# an indicator that blocking packages will be
-					# temporarily installed simultaneously.
-					for blocker in solved_blockers:
-						retlist.append(blocker)
-
-		unsolvable_blockers = set(self._dynamic_config._unsolvable_blockers.leaf_nodes())
-		for node in myblocker_uninstalls.root_nodes():
-			unsolvable_blockers.add(node)
-
-		# If any Uninstall tasks need to be executed in order
-		# to avoid a conflict, complete the graph with any
-		# dependencies that may have been initially
-		# neglected (to ensure that unsafe Uninstall tasks
-		# are properly identified and blocked from execution).
-		if have_uninstall_task and \
-			not complete and \
-			not unsolvable_blockers:
-			self._dynamic_config.myparams["complete"] = True
-			if '--debug' in self._frozen_config.myopts:
-				msg = []
-				msg.append("enabling 'complete' depgraph mode " + \
-					"due to uninstall task(s):")
-				msg.append("")
-				for node in retlist:
-					if isinstance(node, Package) and \
-						node.operation == 'uninstall':
-						msg.append("\t%s" % (node,))
-				writemsg_level("\n%s\n" % \
-					"".join("%s\n" % line for line in msg),
-					level=logging.DEBUG, noiselevel=-1)
-			raise self._serialize_tasks_retry("")
-
-		# Set satisfied state on blockers, but not before the
-		# above retry path, since we don't want to modify the
-		# state in that case.
-		for node in retlist:
-			if isinstance(node, Blocker):
-				node.satisfied = True
-
-		for blocker in unsolvable_blockers:
-			retlist.append(blocker)
-
-		if unsolvable_blockers and \
-			not self._accept_blocker_conflicts():
-			self._dynamic_config._unsatisfied_blockers_for_display = unsolvable_blockers
-			self._dynamic_config._serialized_tasks_cache = retlist[:]
-			self._dynamic_config._scheduler_graph = scheduler_graph
-			self._dynamic_config._skip_restart = True
-			raise self._unknown_internal_error()
-
-		if self._dynamic_config._slot_collision_info and \
-			not self._accept_blocker_conflicts():
-			self._dynamic_config._serialized_tasks_cache = retlist[:]
-			self._dynamic_config._scheduler_graph = scheduler_graph
-			raise self._unknown_internal_error()
-
-		return retlist, scheduler_graph
-
-	def _show_circular_deps(self, mygraph):
-		self._dynamic_config._circular_dependency_handler = \
-			circular_dependency_handler(self, mygraph)
-		handler = self._dynamic_config._circular_dependency_handler
-
-		self._frozen_config.myopts.pop("--quiet", None)
-		self._frozen_config.myopts["--verbose"] = True
-		self._frozen_config.myopts["--tree"] = True
-		portage.writemsg("\n\n", noiselevel=-1)
-		self.display(handler.merge_list)
-		prefix = colorize("BAD", " * ")
-		portage.writemsg("\n", noiselevel=-1)
-		portage.writemsg(prefix + "Error: circular dependencies:\n",
-			noiselevel=-1)
-		portage.writemsg("\n", noiselevel=-1)
-
-		if handler.circular_dep_message is None:
-			handler.debug_print()
-			portage.writemsg("\n", noiselevel=-1)
-
-		if handler.circular_dep_message is not None:
-			portage.writemsg(handler.circular_dep_message, noiselevel=-1)
-
-		suggestions = handler.suggestions
-		if suggestions:
-			writemsg("\n\nIt might be possible to break this cycle\n", noiselevel=-1)
-			if len(suggestions) == 1:
-				writemsg("by applying the following change:\n", noiselevel=-1)
-			else:
-				writemsg("by applying " + colorize("bold", "any of") + \
-					" the following changes:\n", noiselevel=-1)
-			writemsg("".join(suggestions), noiselevel=-1)
-			writemsg("\nNote that this change can be reverted, once the package has" + \
-				" been installed.\n", noiselevel=-1)
-			if handler.large_cycle_count:
-				writemsg("\nNote that the dependency graph contains a lot of cycles.\n" + \
-					"Several changes might be required to resolve all cycles.\n" + \
-					"Temporarily changing some use flag for all packages might be the better option.\n", noiselevel=-1)
-		else:
-			writemsg("\n\n", noiselevel=-1)
-			writemsg(prefix + "Note that circular dependencies " + \
-				"can often be avoided by temporarily\n", noiselevel=-1)
-			writemsg(prefix + "disabling USE flags that trigger " + \
-				"optional dependencies.\n", noiselevel=-1)
-
-	def _show_merge_list(self):
-		if self._dynamic_config._serialized_tasks_cache is not None and \
-			not (self._dynamic_config._displayed_list is not None and \
-			(self._dynamic_config._displayed_list == self._dynamic_config._serialized_tasks_cache or \
-			self._dynamic_config._displayed_list == \
-				list(reversed(self._dynamic_config._serialized_tasks_cache)))):
-			display_list = self._dynamic_config._serialized_tasks_cache[:]
-			if "--tree" in self._frozen_config.myopts:
-				display_list.reverse()
-			self.display(display_list)
-
-	def _show_unsatisfied_blockers(self, blockers):
-		self._show_merge_list()
-		msg = "Error: The above package list contains " + \
-			"packages which cannot be installed " + \
-			"at the same time on the same system."
-		prefix = colorize("BAD", " * ")
-		portage.writemsg("\n", noiselevel=-1)
-		for line in textwrap.wrap(msg, 70):
-			portage.writemsg(prefix + line + "\n", noiselevel=-1)
-
-		# Display the conflicting packages along with the packages
-		# that pulled them in. This is helpful for troubleshooting
-		# cases in which blockers don't solve automatically and
-		# the reasons are not apparent from the normal merge list
-		# display.
-
-		conflict_pkgs = {}
-		for blocker in blockers:
-			for pkg in chain(self._dynamic_config._blocked_pkgs.child_nodes(blocker), \
-				self._dynamic_config._blocker_parents.parent_nodes(blocker)):
-				parent_atoms = self._dynamic_config._parent_atoms.get(pkg)
-				if not parent_atoms:
-					atom = self._dynamic_config._blocked_world_pkgs.get(pkg)
-					if atom is not None:
-						parent_atoms = set([("@selected", atom)])
-				if parent_atoms:
-					conflict_pkgs[pkg] = parent_atoms
-
-		if conflict_pkgs:
-			# Reduce noise by pruning packages that are only
-			# pulled in by other conflict packages.
-			pruned_pkgs = set()
-			for pkg, parent_atoms in conflict_pkgs.items():
-				relevant_parent = False
-				for parent, atom in parent_atoms:
-					if parent not in conflict_pkgs:
-						relevant_parent = True
-						break
-				if not relevant_parent:
-					pruned_pkgs.add(pkg)
-			for pkg in pruned_pkgs:
-				del conflict_pkgs[pkg]
-
-		if conflict_pkgs:
-			msg = []
-			msg.append("\n")
-			indent = "  "
-			for pkg, parent_atoms in conflict_pkgs.items():
-
-				# Prefer packages that are not directly involved in a conflict.
-				# It can be essential to see all the packages here, so don't
-				# omit any. If the list is long, people can simply use a pager.
-				preferred_parents = set()
-				for parent_atom in parent_atoms:
-					parent, atom = parent_atom
-					if parent not in conflict_pkgs:
-						preferred_parents.add(parent_atom)
-
-				ordered_list = list(preferred_parents)
-				if len(parent_atoms) > len(ordered_list):
-					for parent_atom in parent_atoms:
-						if parent_atom not in preferred_parents:
-							ordered_list.append(parent_atom)
-
-				msg.append(indent + "%s pulled in by\n" % pkg)
-
-				for parent_atom in ordered_list:
-					parent, atom = parent_atom
-					msg.append(2*indent)
-					if isinstance(parent,
-						(PackageArg, AtomArg)):
-						# For PackageArg and AtomArg types, it's
-						# redundant to display the atom attribute.
-						msg.append(str(parent))
-					else:
-						# Display the specific atom from SetArg or
-						# Package types.
-						msg.append("%s required by %s" % (atom, parent))
-					msg.append("\n")
-
-				msg.append("\n")
-
-			writemsg("".join(msg), noiselevel=-1)
-
-		if "--quiet" not in self._frozen_config.myopts:
-			show_blocker_docs_link()
-
-	def display(self, mylist, favorites=[], verbosity=None):
-
-		# This is used to prevent display_problems() from
-		# redundantly displaying this exact same merge list
-		# again via _show_merge_list().
-		self._dynamic_config._displayed_list = mylist
-		display = Display()
-
-		return display(self, mylist, favorites, verbosity)
-
-	def _display_autounmask(self):
-		"""
-		Display --autounmask message and optionally write it to config files
-		(using CONFIG_PROTECT). The message includes the comments and the changes.
-		"""
-
-		autounmask_write = self._frozen_config.myopts.get("--autounmask-write", "n") == True
-		autounmask_unrestricted_atoms = \
-			self._frozen_config.myopts.get("--autounmask-unrestricted-atoms", "n") == True
-		quiet = "--quiet" in self._frozen_config.myopts
-		pretend = "--pretend" in self._frozen_config.myopts
-		ask = "--ask" in self._frozen_config.myopts
-		enter_invalid = '--ask-enter-invalid' in self._frozen_config.myopts
-
-		def check_if_latest(pkg):
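-			# Return whether pkg is the highest version available
-			# overall and within its own slot; the result determines
-			# whether >= or = atoms are written below.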
-			is_latest = True
-			is_latest_in_slot = True
-			dbs = self._dynamic_config._filtered_trees[pkg.root]["dbs"]
-			root_config = self._frozen_config.roots[pkg.root]
-
-			for db, pkg_type, built, installed, db_keys in dbs:
-				for other_pkg in self._iter_match_pkgs(root_config, pkg_type, Atom(pkg.cp)):
-					if other_pkg.cp != pkg.cp:
-						# old-style PROVIDE virtual means there are no
-						# normal matches for this pkg_type
-						break
-					if other_pkg > pkg:
-						is_latest = False
-						if other_pkg.slot_atom == pkg.slot_atom:
-							is_latest_in_slot = False
-							break
-					else:
-						# iter_match_pkgs yields highest version first, so
-						# there's no need to search this pkg_type any further
-						break
-
-				if not is_latest_in_slot:
-					break
-
-			return is_latest, is_latest_in_slot
-
-		# Set of roots for which we have autounmask changes.
-		roots = set()
-
-		masked_by_missing_keywords = False
-		unstable_keyword_msg = {}
-		for pkg in self._dynamic_config._needed_unstable_keywords:
-			self._show_merge_list()
-			if pkg in self._dynamic_config.digraph:
-				root = pkg.root
-				roots.add(root)
-				unstable_keyword_msg.setdefault(root, [])
-				is_latest, is_latest_in_slot = check_if_latest(pkg)
-				pkgsettings = self._frozen_config.pkgsettings[pkg.root]
-				mreasons = _get_masking_status(pkg, pkgsettings, pkg.root_config,
-					use=self._pkg_use_enabled(pkg))
-				for reason in mreasons:
-					if reason.unmask_hint and \
-						reason.unmask_hint.key == 'unstable keyword':
-						keyword = reason.unmask_hint.value
-						if keyword == "**":
-							masked_by_missing_keywords = True
-
-						unstable_keyword_msg[root].append(self._get_dep_chain_as_comment(pkg))
-						if autounmask_unrestricted_atoms:
-							if is_latest:
-								unstable_keyword_msg[root].append(">=%s %s\n" % (pkg.cpv, keyword))
-							elif is_latest_in_slot:
-								unstable_keyword_msg[root].append(">=%s:%s %s\n" % (pkg.cpv, pkg.slot, keyword))
-							else:
-								unstable_keyword_msg[root].append("=%s %s\n" % (pkg.cpv, keyword))
-						else:
-							unstable_keyword_msg[root].append("=%s %s\n" % (pkg.cpv, keyword))
-
-		p_mask_change_msg = {}
-		for pkg in self._dynamic_config._needed_p_mask_changes:
-			self._show_merge_list()
-			if pkg in self._dynamic_config.digraph:
-				root = pkg.root
-				roots.add(root)
-				p_mask_change_msg.setdefault(root, [])
-				is_latest, is_latest_in_slot = check_if_latest(pkg)
-				pkgsettings = self._frozen_config.pkgsettings[pkg.root]
-				mreasons = _get_masking_status(pkg, pkgsettings, pkg.root_config,
-					use=self._pkg_use_enabled(pkg))
-				for reason in mreasons:
-					if reason.unmask_hint and \
-						reason.unmask_hint.key == 'p_mask':
-						keyword = reason.unmask_hint.value
-
-						comment, filename = portage.getmaskingreason(
-							pkg.cpv, metadata=pkg.metadata,
-							settings=pkgsettings,
-							portdb=pkg.root_config.trees["porttree"].dbapi,
-							return_location=True)
-
-						p_mask_change_msg[root].append(self._get_dep_chain_as_comment(pkg))
-						if filename:
-							p_mask_change_msg[root].append("# %s:\n" % filename)
-						if comment:
-							comment = [line for line in
-								comment.splitlines() if line]
-							for line in comment:
-								p_mask_change_msg[root].append("%s\n" % line)
-						if autounmask_unrestricted_atoms:
-							if is_latest:
-								p_mask_change_msg[root].append(">=%s\n" % pkg.cpv)
-							elif is_latest_in_slot:
-								p_mask_change_msg[root].append(">=%s:%s\n" % (pkg.cpv, pkg.slot))
-							else:
-								p_mask_change_msg[root].append("=%s\n" % pkg.cpv)
-						else:
-							p_mask_change_msg[root].append("=%s\n" % pkg.cpv)
-
-		use_changes_msg = {}
-		for pkg, needed_use_config_change in self._dynamic_config._needed_use_config_changes.items():
-			self._show_merge_list()
-			if pkg in self._dynamic_config.digraph:
-				root = pkg.root
-				roots.add(root)
-				use_changes_msg.setdefault(root, [])
-				is_latest, is_latest_in_slot = check_if_latest(pkg)
-				changes = needed_use_config_change[1]
-				adjustments = []
-				for flag, state in changes.items():
-					if state:
-						adjustments.append(flag)
-					else:
-						adjustments.append("-" + flag)
-				use_changes_msg[root].append(self._get_dep_chain_as_comment(pkg, unsatisfied_dependency=True))
-				if is_latest:
-					use_changes_msg[root].append(">=%s %s\n" % (pkg.cpv, " ".join(adjustments)))
-				elif is_latest_in_slot:
-					use_changes_msg[root].append(">=%s:%s %s\n" % (pkg.cpv, pkg.slot, " ".join(adjustments)))
-				else:
-					use_changes_msg[root].append("=%s %s\n" % (pkg.cpv, " ".join(adjustments)))
-
-		license_msg = {}
-		for pkg, missing_licenses in self._dynamic_config._needed_license_changes.items():
-			self._show_merge_list()
-			if pkg in self._dynamic_config.digraph:
-				root = pkg.root
-				roots.add(root)
-				license_msg.setdefault(root, [])
-				is_latest, is_latest_in_slot = check_if_latest(pkg)
-
-				license_msg[root].append(self._get_dep_chain_as_comment(pkg))
-				if is_latest:
-					license_msg[root].append(">=%s %s\n" % (pkg.cpv, " ".join(sorted(missing_licenses))))
-				elif is_latest_in_slot:
-					license_msg[root].append(">=%s:%s %s\n" % (pkg.cpv, pkg.slot, " ".join(sorted(missing_licenses))))
-				else:
-					license_msg[root].append("=%s %s\n" % (pkg.cpv, " ".join(sorted(missing_licenses))))
-
-		def find_config_file(abs_user_config, file_name):
-			"""
-			Searches /etc/portage for an appropriate file to append changes to.
-			If file_name is a regular file it is returned; if it is a
-			directory, the last file in it is returned. The order of
-			traversal is identical to
-			portage.util.grablines(recursive=True).
-
-			file_name - String containing a file name like "package.use"
-			return value - String. Absolute path of file to write to. None if
-			no suitable file exists.
-			"""
-			file_path = os.path.join(abs_user_config, file_name)
-
-			try:
-				os.lstat(file_path)
-			except OSError as e:
-				if e.errno == errno.ENOENT:
-					# The file doesn't exist, so we'll
-					# simply create it.
-					return file_path
-
-				# Disk or file system trouble?
-				return None
-
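-			# The path exists; walk it iteratively (it may be a single
-			# file or a directory tree) and remember the regular file
-			# visited last in traversal order.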
-			last_file_path = None
-			stack = [file_path]
-			while stack:
-				p = stack.pop()
-				try:
-					st = os.stat(p)
-				except OSError:
-					pass
-				else:
-					if stat.S_ISREG(st.st_mode):
-						last_file_path = p
-					elif stat.S_ISDIR(st.st_mode):
-						if os.path.basename(p) in _ignorecvs_dirs:
-							continue
-						try:
-							contents = os.listdir(p)
-						except OSError:
-							pass
-						else:
-							contents.sort(reverse=True)
-							for child in contents:
-								if child.startswith(".") or \
-									child.endswith("~"):
-									continue
-								stack.append(os.path.join(p, child))
-
-			return last_file_path
-
-		write_to_file = autounmask_write and not pretend
-		# Make sure we have a file to write to before doing any write.
-		file_to_write_to = {}
-		problems = []
-		if write_to_file:
-			for root in roots:
-				settings = self._frozen_config.roots[root].settings
-				abs_user_config = os.path.join(
-					settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
-
-				if root in unstable_keyword_msg:
-					if not os.path.exists(os.path.join(abs_user_config,
-						"package.keywords")):
-						filename = "package.accept_keywords"
-					else:
-						filename = "package.keywords"
-					file_to_write_to[(abs_user_config, "package.keywords")] = \
-						find_config_file(abs_user_config, filename)
-
-				if root in p_mask_change_msg:
-					file_to_write_to[(abs_user_config, "package.unmask")] = \
-						find_config_file(abs_user_config, "package.unmask")
-
-				if root in use_changes_msg:
-					file_to_write_to[(abs_user_config, "package.use")] = \
-						find_config_file(abs_user_config, "package.use")
-
-				if root in license_msg:
-					file_to_write_to[(abs_user_config, "package.license")] = \
-						find_config_file(abs_user_config, "package.license")
-
-			for (abs_user_config, f), path in file_to_write_to.items():
-				if path is None:
-					problems.append("!!! No file to write for '%s'\n" % os.path.join(abs_user_config, f))
-
-			write_to_file = not problems
-
-		def format_msg(lines):
-			lines = lines[:]
-			for i, line in enumerate(lines):
-				if line.startswith("#"):
-					continue
-				lines[i] = colorize("INFORM", line.rstrip()) + "\n"
-			return "".join(lines)
-
-		for root in roots:
-			settings = self._frozen_config.roots[root].settings
-			abs_user_config = os.path.join(
-				settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
-
-			if len(roots) > 1:
-				writemsg("\nFor %s:\n" % abs_user_config, noiselevel=-1)
-
-			if root in unstable_keyword_msg:
-				writemsg("\nThe following " + colorize("BAD", "keyword changes") + \
-					" are necessary to proceed:\n", noiselevel=-1)
-				writemsg(format_msg(unstable_keyword_msg[root]), noiselevel=-1)
-
-			if root in p_mask_change_msg:
-				writemsg("\nThe following " + colorize("BAD", "mask changes") + \
-					" are necessary to proceed:\n", noiselevel=-1)
-				writemsg(format_msg(p_mask_change_msg[root]), noiselevel=-1)
-
-			if root in use_changes_msg:
-				writemsg("\nThe following " + colorize("BAD", "USE changes") + \
-					" are necessary to proceed:\n", noiselevel=-1)
-				writemsg(format_msg(use_changes_msg[root]), noiselevel=-1)
-
-			if root in license_msg:
-				writemsg("\nThe following " + colorize("BAD", "license changes") + \
-					" are necessary to proceed:\n", noiselevel=-1)
-				writemsg(format_msg(license_msg[root]), noiselevel=-1)
-
-		protect_obj = {}
-		if write_to_file:
-			for root in roots:
-				settings = self._frozen_config.roots[root].settings
-				protect_obj[root] = ConfigProtect(settings["EROOT"], \
-					shlex_split(settings.get("CONFIG_PROTECT", "")),
-					shlex_split(settings.get("CONFIG_PROTECT_MASK", "")))
-
-		def write_changes(root, changes, file_to_write_to):
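-			# Read the current file contents (if any), append the new
-			# changes, and write the result atomically. If the target
-			# is CONFIG_PROTECT'ed, route the write through a ._cfg
-			# file so that dispatch-conf will present it to the user.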
-			file_contents = None
-			try:
-				file_contents = io.open(
-					_unicode_encode(file_to_write_to,
-					encoding=_encodings['fs'], errors='strict'),
-					mode='r', encoding=_encodings['content'],
-					errors='replace').readlines()
-			except IOError as e:
-				if e.errno == errno.ENOENT:
-					file_contents = []
-				else:
-					problems.append("!!! Failed to read '%s': %s\n" % \
-						(file_to_write_to, e))
-			if file_contents is not None:
-				file_contents.extend(changes)
-				if protect_obj[root].isprotected(file_to_write_to):
-					# We want to force new_protect_filename to ensure
-					# that the user will see all our changes via
-					# dispatch-conf, even if file_to_write_to doesn't
-					# exist yet, so we specify force=True.
-					file_to_write_to = new_protect_filename(file_to_write_to,
-						force=True)
-				try:
-					write_atomic(file_to_write_to, "".join(file_contents))
-				except PortageException:
-					problems.append("!!! Failed to write '%s'\n" % file_to_write_to)
-
-		if not quiet and (p_mask_change_msg or masked_by_missing_keywords):
-			msg = [
-				"",
-				"NOTE: The --autounmask-keep-masks option will prevent emerge",
-				"      from creating package.unmask or ** keyword changes."
-			]
-			for line in msg:
-				if line:
-					line = colorize("INFORM", line)
-				writemsg(line + "\n", noiselevel=-1)
-
-		if ask and write_to_file and file_to_write_to:
-			prompt = "\nWould you like to add these " + \
-				"changes to your config files?"
-			if userquery(prompt, enter_invalid) == 'No':
-				write_to_file = False
-
-		if write_to_file and file_to_write_to:
-			for root in roots:
-				settings = self._frozen_config.roots[root].settings
-				abs_user_config = os.path.join(
-					settings["PORTAGE_CONFIGROOT"], USER_CONFIG_PATH)
-				ensure_dirs(abs_user_config)
-
-				if root in unstable_keyword_msg:
-					write_changes(root, unstable_keyword_msg[root],
-						file_to_write_to.get((abs_user_config, "package.keywords")))
-
-				if root in p_mask_change_msg:
-					write_changes(root, p_mask_change_msg[root],
-						file_to_write_to.get((abs_user_config, "package.unmask")))
-
-				if root in use_changes_msg:
-					write_changes(root, use_changes_msg[root],
-						file_to_write_to.get((abs_user_config, "package.use")))
-
-				if root in license_msg:
-					write_changes(root, license_msg[root],
-						file_to_write_to.get((abs_user_config, "package.license")))
-
-		if problems:
-			writemsg("\nThe following problems occurred while writing autounmask changes:\n", \
-				noiselevel=-1)
-			writemsg("".join(problems), noiselevel=-1)
-		elif write_to_file and roots:
-			writemsg("\nAutounmask changes successfully written. Remember to run dispatch-conf.\n", \
-				noiselevel=-1)
-		elif not pretend and not autounmask_write and roots:
-			writemsg("\nUse --autounmask-write to write changes to config files (honoring CONFIG_PROTECT).\n", \
-				noiselevel=-1)
-
-
-	def display_problems(self):
-		"""
-		Display problems with the dependency graph such as slot collisions.
-		This is called internally by display() to show the problems _after_
-		the merge list where it is most likely to be seen, but if display()
-		is not going to be called then this method should be called explicitly
-		to ensure that the user is notified of problems with the graph.
-		"""
-
-		if self._dynamic_config._circular_deps_for_display is not None:
-			self._show_circular_deps(
-				self._dynamic_config._circular_deps_for_display)
-
-		# The slot conflict display has better noise reduction than
-		# the unsatisfied blockers display, so skip unsatisfied blockers
-		# display if there are slot conflicts (see bug #385391).
-		if self._dynamic_config._slot_collision_info:
-			self._show_slot_collision_notice()
-		elif self._dynamic_config._unsatisfied_blockers_for_display is not None:
-			self._show_unsatisfied_blockers(
-				self._dynamic_config._unsatisfied_blockers_for_display)
-		else:
-			self._show_missed_update()
-
-		self._show_ignored_binaries()
-
-		self._display_autounmask()
-
-		# TODO: Add generic support for "set problem" handlers so that
-		# the below warnings aren't special cases for world only.
-
-		if self._dynamic_config._missing_args:
-			world_problems = False
-			if "world" in self._dynamic_config.sets[
-				self._frozen_config.target_root].sets:
-				# Filter out indirect members of world (from nested sets)
-				# since only direct members of world are desired here.
-				world_set = self._frozen_config.roots[self._frozen_config.target_root].sets["selected"]
-				for arg, atom in self._dynamic_config._missing_args:
-					if arg.name in ("selected", "world") and atom in world_set:
-						world_problems = True
-						break
-
-			if world_problems:
-				sys.stderr.write("\n!!! Problems have been " + \
-					"detected with your world file\n")
-				sys.stderr.write("!!! Please run " + \
-					green("emaint --check world")+"\n\n")
-
-		if self._dynamic_config._missing_args:
-			sys.stderr.write("\n" + colorize("BAD", "!!!") + \
-				" Ebuilds for the following packages are either all\n")
-			sys.stderr.write(colorize("BAD", "!!!") + \
-				" masked or don't exist:\n")
-			sys.stderr.write(" ".join(str(atom) for arg, atom in \
-				self._dynamic_config._missing_args) + "\n")
-
-		if self._dynamic_config._pprovided_args:
-			arg_refs = {}
-			for arg, atom in self._dynamic_config._pprovided_args:
-				if isinstance(arg, SetArg):
-					parent = arg.name
-					arg_atom = (atom, atom)
-				else:
-					parent = "args"
-					arg_atom = (arg.arg, atom)
-				refs = arg_refs.setdefault(arg_atom, [])
-				if parent not in refs:
-					refs.append(parent)
-			msg = []
-			msg.append(bad("\nWARNING: "))
-			if len(self._dynamic_config._pprovided_args) > 1:
-				msg.append("Requested packages will not be " + \
-					"merged because they are listed in\n")
-			else:
-				msg.append("A requested package will not be " + \
-					"merged because it is listed in\n")
-			msg.append("package.provided:\n\n")
-			problems_sets = set()
-			for (arg, atom), refs in arg_refs.items():
-				ref_string = ""
-				if refs:
-					problems_sets.update(refs)
-					refs.sort()
-					ref_string = ", ".join(["'%s'" % name for name in refs])
-					ref_string = " pulled in by " + ref_string
-				msg.append("  %s%s\n" % (colorize("INFORM", str(arg)), ref_string))
-			msg.append("\n")
-			if "selected" in problems_sets or "world" in problems_sets:
-				msg.append("This problem can be solved in one of the following ways:\n\n")
-				msg.append("  A) Use emaint to clean offending packages from world (if not installed).\n")
-				msg.append("  B) Uninstall offending packages (cleans them from world).\n")
-				msg.append("  C) Remove offending entries from package.provided.\n\n")
-				msg.append("The best course of action depends on the reason that an offending\n")
-				msg.append("package.provided entry exists.\n\n")
-			sys.stderr.write("".join(msg))
-
-		masked_packages = []
-		for pkg in self._dynamic_config._masked_license_updates:
-			root_config = pkg.root_config
-			pkgsettings = self._frozen_config.pkgsettings[pkg.root]
-			mreasons = get_masking_status(pkg, pkgsettings, root_config, use=self._pkg_use_enabled(pkg))
-			masked_packages.append((root_config, pkgsettings,
-				pkg.cpv, pkg.repo, pkg.metadata, mreasons))
-		if masked_packages:
-			writemsg("\n" + colorize("BAD", "!!!") + \
-				" The following updates are masked by LICENSE changes:\n",
-				noiselevel=-1)
-			show_masked_packages(masked_packages)
-			show_mask_docs()
-			writemsg("\n", noiselevel=-1)
-
-		masked_packages = []
-		for pkg in self._dynamic_config._masked_installed:
-			root_config = pkg.root_config
-			pkgsettings = self._frozen_config.pkgsettings[pkg.root]
-			mreasons = get_masking_status(pkg, pkgsettings, root_config, use=self._pkg_use_enabled)
-			masked_packages.append((root_config, pkgsettings,
-				pkg.cpv, pkg.repo, pkg.metadata, mreasons))
-		if masked_packages:
-			writemsg("\n" + colorize("BAD", "!!!") + \
-				" The following installed packages are masked:\n",
-				noiselevel=-1)
-			show_masked_packages(masked_packages)
-			show_mask_docs()
-			writemsg("\n", noiselevel=-1)
-
-		for pargs, kwargs in self._dynamic_config._unsatisfied_deps_for_display:
-			self._show_unsatisfied_dep(*pargs, **kwargs)
-
-	def saveNomergeFavorites(self):
-		"""Find atoms in favorites that are not in the mergelist and add them
-		to the world file if necessary."""
-		for x in ("--buildpkgonly", "--fetchonly", "--fetch-all-uri",
-			"--oneshot", "--onlydeps", "--pretend"):
-			if x in self._frozen_config.myopts:
-				return
-		root_config = self._frozen_config.roots[self._frozen_config.target_root]
-		world_set = root_config.sets["selected"]
-
-		world_locked = False
-		if hasattr(world_set, "lock"):
-			world_set.lock()
-			world_locked = True
-
-		if hasattr(world_set, "load"):
-			world_set.load() # maybe it's changed on disk
-
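-		# '__non_set_args__' is the internal set that holds the plain
-		# atom and package arguments, as opposed to @set arguments.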
-		args_set = self._dynamic_config.sets[
-			self._frozen_config.target_root].sets['__non_set_args__']
-		added_favorites = set()
-		for x in self._dynamic_config._set_nodes:
-			if x.operation != "nomerge":
-				continue
-
-			if x.root != root_config.root:
-				continue
-
-			try:
-				myfavkey = create_world_atom(x, args_set, root_config)
-				if myfavkey:
-					if myfavkey in added_favorites:
-						continue
-					added_favorites.add(myfavkey)
-			except portage.exception.InvalidDependString as e:
-				writemsg("\n\n!!! '%s' has invalid PROVIDE: %s\n" % \
-					(x.cpv, e), noiselevel=-1)
-				writemsg("!!! see '%s'\n\n" % os.path.join(
-					x.root, portage.VDB_PATH, x.cpv, "PROVIDE"), noiselevel=-1)
-				del e
-		all_added = []
-		for arg in self._dynamic_config._initial_arg_list:
-			if not isinstance(arg, SetArg):
-				continue
-			if arg.root_config.root != root_config.root:
-				continue
-			if arg.internal:
-				# __auto_* sets
-				continue
-			k = arg.name
-			if k in ("selected", "world") or \
-				not root_config.sets[k].world_candidate:
-				continue
-			s = SETPREFIX + k
-			if s in world_set:
-				continue
-			all_added.append(SETPREFIX + k)
-		all_added.extend(added_favorites)
-		all_added.sort()
-		for a in all_added:
-			if a.startswith(SETPREFIX):
-				filename = "world_sets"
-			else:
-				filename = "world"
-			writemsg_stdout(
-				">>> Recording %s in \"%s\" favorites file...\n" %
-				(colorize("INFORM", _unicode(a)), filename), noiselevel=-1)
-		if all_added:
-			world_set.update(all_added)
-
-		if world_locked:
-			world_set.unlock()
-
-	def _loadResumeCommand(self, resume_data, skip_masked=True,
-		skip_missing=True):
-		"""
-		Add a resume command to the graph and validate it in the process.  This
-		will raise a PackageNotFound exception if a package is not available.
-		"""
-
-		self._load_vdb()
-
-		if not isinstance(resume_data, dict):
-			return False
-
-		mergelist = resume_data.get("mergelist")
-		if not isinstance(mergelist, list):
-			mergelist = []
-
-		favorites = resume_data.get("favorites")
-		if isinstance(favorites, list):
-			args = self._load_favorites(favorites)
-		else:
-			args = []
-
-		fakedb = self._dynamic_config.mydbapi
-		serialized_tasks = []
-		masked_tasks = []
-		for x in mergelist:
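-			# Each valid mergelist entry is a 4-element list of the
-			# form [pkg_type, root, cpv, action].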
-			if not (isinstance(x, list) and len(x) == 4):
-				continue
-			pkg_type, myroot, pkg_key, action = x
-			if pkg_type not in self.pkg_tree_map:
-				continue
-			if action != "merge":
-				continue
-			root_config = self._frozen_config.roots[myroot]
-
-			# Use the resume "favorites" list to see if a repo was specified
-			# for this package.
-			depgraph_sets = self._dynamic_config.sets[root_config.root]
-			repo = None
-			for atom in depgraph_sets.atoms.getAtoms():
-				if atom.repo and portage.dep.match_from_list(atom, [pkg_key]):
-					repo = atom.repo
-					break
-
-			atom = "=" + pkg_key
-			if repo:
-				atom = atom + _repo_separator + repo
-
-			try:
-				atom = Atom(atom, allow_repo=True)
-			except InvalidAtom:
-				continue
-
-			pkg = None
-			for pkg in self._iter_match_pkgs(root_config, pkg_type, atom):
-				if not self._pkg_visibility_check(pkg) or \
-					self._frozen_config.excluded_pkgs.findAtomForPackage(pkg,
-						modified_use=self._pkg_use_enabled(pkg)):
-					continue
-				break
-
-			if pkg is None:
-				# It does not exist or it is corrupt.
-				if skip_missing:
-					# TODO: log these somewhere
-					continue
-				raise portage.exception.PackageNotFound(pkg_key)
-
-			if "merge" == pkg.operation and \
-				self._frozen_config.excluded_pkgs.findAtomForPackage(pkg, \
-					modified_use=self._pkg_use_enabled(pkg)):
-				continue
-
-			if "merge" == pkg.operation and not self._pkg_visibility_check(pkg):
-				if skip_masked:
-					masked_tasks.append(Dependency(root=pkg.root, parent=pkg))
-				else:
-					self._dynamic_config._unsatisfied_deps_for_display.append(
-						((pkg.root, "="+pkg.cpv), {"myparent":None}))
-
-			fakedb[myroot].cpv_inject(pkg)
-			serialized_tasks.append(pkg)
-			self._spinner_update()
-
-		if self._dynamic_config._unsatisfied_deps_for_display:
-			return False
-
-		if not serialized_tasks or "--nodeps" in self._frozen_config.myopts:
-			self._dynamic_config._serialized_tasks_cache = serialized_tasks
-			self._dynamic_config._scheduler_graph = self._dynamic_config.digraph
-		else:
-			self._select_package = self._select_pkg_from_graph
-			self._dynamic_config.myparams["selective"] = True
-			# Always traverse deep dependencies in order to account for
-			# potentially unsatisfied dependencies of installed packages.
-			# This is necessary for correct --keep-going or --resume operation
-			# in case a package from a group of circularly dependent packages
-			# fails. In this case, a package which has recently been installed
-			# may have an unsatisfied circular dependency (pulled in by
-			# PDEPEND, for example). So, even though a package is already
-			# installed, it may not have all of its dependencies satisfied, so
-			# it may not be usable. If such a package is in the subgraph of
-			# deep dependencies of a scheduled build, that build needs to
-			# be cancelled. In order for this type of situation to be
-			# recognized, deep traversal of dependencies is required.
-			self._dynamic_config.myparams["deep"] = True
-
-			for task in serialized_tasks:
-				if isinstance(task, Package) and \
-					task.operation == "merge":
-					if not self._add_pkg(task, None):
-						return False
-
-			# Packages for argument atoms need to be explicitly
-			# added via _add_pkg() so that they are included in the
-			# digraph (needed at least for --tree display).
-			for arg in self._expand_set_args(args, add_to_digraph=True):
-				for atom in arg.pset.getAtoms():
-					pkg, existing_node = self._select_package(
-						arg.root_config.root, atom)
-					if existing_node is None and \
-						pkg is not None:
-						if not self._add_pkg(pkg, Dependency(atom=atom,
-							root=pkg.root, parent=arg)):
-							return False
-
-			# Allow unsatisfied deps here to avoid showing a masking
-			# message for an unsatisfied dep that isn't necessarily
-			# masked.
-			if not self._create_graph(allow_unsatisfied=True):
-				return False
-
-			unsatisfied_deps = []
-			for dep in self._dynamic_config._unsatisfied_deps:
-				if not isinstance(dep.parent, Package):
-					continue
-				if dep.parent.operation == "merge":
-					unsatisfied_deps.append(dep)
-					continue
-
-				# For unsatisfied deps of installed packages, only account for
-				# them if they are in the subgraph of dependencies of a package
-				# which is scheduled to be installed.
-				unsatisfied_install = False
-				traversed = set()
-				dep_stack = self._dynamic_config.digraph.parent_nodes(dep.parent)
-				while dep_stack:
-					node = dep_stack.pop()
-					if not isinstance(node, Package):
-						continue
-					if node.operation == "merge":
-						unsatisfied_install = True
-						break
-					if node in traversed:
-						continue
-					traversed.add(node)
-					dep_stack.extend(self._dynamic_config.digraph.parent_nodes(node))
-
-				if unsatisfied_install:
-					unsatisfied_deps.append(dep)
-
-			if masked_tasks or unsatisfied_deps:
-				# This probably means that a required package
-				# was dropped via --skipfirst. It makes the
-				# resume list invalid, so convert it to a
-				# UnsatisfiedResumeDep exception.
-				raise self.UnsatisfiedResumeDep(self,
-					masked_tasks + unsatisfied_deps)
-			self._dynamic_config._serialized_tasks_cache = None
-			try:
-				self.altlist()
-			except self._unknown_internal_error:
-				return False
-
-		return True
-
-	def _load_favorites(self, favorites):
-		"""
-		Use a list of favorites to resume state from a
-		previous select_files() call. This creates similar
-		DependencyArg instances to those that would have
-		been created by the original select_files() call.
-		This allows Package instances to be matched with
-		DependencyArg instances during graph creation.
-		"""
-		root_config = self._frozen_config.roots[self._frozen_config.target_root]
-		sets = root_config.sets
-		depgraph_sets = self._dynamic_config.sets[root_config.root]
-		args = []
-		for x in favorites:
-			if not isinstance(x, basestring):
-				continue
-			if x in ("system", "world"):
-				x = SETPREFIX + x
-			if x.startswith(SETPREFIX):
-				s = x[len(SETPREFIX):]
-				if s not in sets:
-					continue
-				if s in depgraph_sets.sets:
-					continue
-				pset = sets[s]
-				depgraph_sets.sets[s] = pset
-				args.append(SetArg(arg=x, pset=pset,
-					root_config=root_config))
-			else:
-				try:
-					x = Atom(x, allow_repo=True)
-				except portage.exception.InvalidAtom:
-					continue
-				args.append(AtomArg(arg=x, atom=x,
-					root_config=root_config))
-
-		self._set_args(args)
-		return args
-
-	class UnsatisfiedResumeDep(portage.exception.PortageException):
-		"""
-		A dependency of a resume list is not installed. This
-		can occur when a required package is dropped from the
-		merge list via --skipfirst.
-		"""
-		def __init__(self, depgraph, value):
-			portage.exception.PortageException.__init__(self, value)
-			self.depgraph = depgraph
-
-	class _internal_exception(portage.exception.PortageException):
-		def __init__(self, value=""):
-			portage.exception.PortageException.__init__(self, value)
-
-	class _unknown_internal_error(_internal_exception):
-		"""
-		Used by the depgraph internally to terminate graph creation.
-		The specific reason for the failure should have been dumped
-		to stderr; unfortunately, the exact reason for the failure
-		may not be known.
-		"""
-
-	class _serialize_tasks_retry(_internal_exception):
-		"""
-		This is raised by the _serialize_tasks() method when it needs to
-		be called again for some reason. The only case that it's currently
-		used for is when neglected dependencies need to be added to the
-		graph in order to avoid making a potentially unsafe decision.
-		"""
-
-	class _backtrack_mask(_internal_exception):
-		"""
-		This is raised by _show_unsatisfied_dep() when it's called with
-		check_backtrack=True and a matching package has been masked by
-		backtracking.
-		"""
-
-	class _autounmask_breakage(_internal_exception):
-		"""
-		This is raised by _show_unsatisfied_dep() when it's called with
-		check_autounmask_breakage=True and a matching package has
-		been disqualified due to autounmask changes.
-		"""
-
-	def need_restart(self):
-		return self._dynamic_config._need_restart and \
-			not self._dynamic_config._skip_restart
-
-	def success_without_autounmask(self):
-		return self._dynamic_config._success_without_autounmask
-
-	def autounmask_breakage_detected(self):
-		try:
-			for pargs, kwargs in self._dynamic_config._unsatisfied_deps_for_display:
-				self._show_unsatisfied_dep(
-					*pargs, check_autounmask_breakage=True, **kwargs)
-		except self._autounmask_breakage:
-			return True
-		return False
-
-	def get_backtrack_infos(self):
-		return self._dynamic_config._backtrack_infos
-
-class _dep_check_composite_db(dbapi):
-	"""
-	A dbapi-like interface that is optimized for use in dep_check() calls.
-	This is built on top of the existing depgraph package selection logic.
-	Some packages that have been added to the graph may be masked from this
-	view in order to influence the atom preference selection that occurs
-	via dep_check().
-	"""
-	def __init__(self, depgraph, root):
-		dbapi.__init__(self)
-		self._depgraph = depgraph
-		self._root = root
-		self._match_cache = {}
-		self._cpv_pkg_map = {}
-
-	def _clear_cache(self):
-		self._match_cache.clear()
-		self._cpv_pkg_map.clear()
-
-	def cp_list(self, cp):
-		"""
-		Emulate cp_list just so it can be used to check for existence
-		of new-style virtuals. Since it's a waste of time to return
-		more than one cpv for this use case, a maximum of one cpv will
-		be returned.
-		"""
-		if isinstance(cp, Atom):
-			atom = cp
-		else:
-			atom = Atom(cp)
-		ret = []
-		for pkg in self._depgraph._iter_match_pkgs_any(
-			self._depgraph._frozen_config.roots[self._root], atom):
-			if pkg.cp == cp:
-				ret.append(pkg.cpv)
-				break
-
-		return ret
-
-	def match(self, atom):
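-		# Results are cached on both the evaluated and unevaluated
-		# forms of the atom, and a copy is returned so that callers
-		# cannot mutate the cached list.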
-		cache_key = (atom, atom.unevaluated_atom)
-		ret = self._match_cache.get(cache_key)
-		if ret is not None:
-			return ret[:]
-
-		ret = []
-		pkg, existing = self._depgraph._select_package(self._root, atom)
-
-		if pkg is not None and self._visible(pkg):
-			self._cpv_pkg_map[pkg.cpv] = pkg
-			ret.append(pkg.cpv)
-
-		if pkg is not None and \
-			atom.slot is None and \
-			pkg.cp.startswith("virtual/") and \
-			(("remove" not in self._depgraph._dynamic_config.myparams and
-			"--update" not in self._depgraph._frozen_config.myopts) or
-			not ret or
-			not self._depgraph._virt_deps_visible(pkg, ignore_use=True)):
-			# For new-style virtual lookahead that occurs inside dep_check()
-			# for bug #141118, examine all slots. This is needed so that newer
-			# slots will not unnecessarily be pulled in when a satisfying lower
-			# slot is already installed. For example, if virtual/jdk-1.5 is
-			# satisfied via gcj-jdk then there's no need to pull in a newer
-			# slot to satisfy a virtual/jdk dependency, unless --update is
-			# enabled.
-			slots = set()
-			slots.add(pkg.slot)
-			for virt_pkg in self._depgraph._iter_match_pkgs_any(
-				self._depgraph._frozen_config.roots[self._root], atom):
-				if virt_pkg.cp != pkg.cp:
-					continue
-				slots.add(virt_pkg.slot)
-
-			slots.remove(pkg.slot)
-			while slots:
-				slot_atom = atom.with_slot(slots.pop())
-				pkg, existing = self._depgraph._select_package(
-					self._root, slot_atom)
-				if not pkg:
-					continue
-				if not self._visible(pkg):
-					continue
-				self._cpv_pkg_map[pkg.cpv] = pkg
-				ret.append(pkg.cpv)
-
-			if len(ret) > 1:
-				self._cpv_sort_ascending(ret)
-
-		self._match_cache[cache_key] = ret
-		return ret[:]
-
-	def _visible(self, pkg):
-		if pkg.installed and not self._depgraph._want_installed_pkg(pkg):
-			return False
-		if pkg.installed and \
-			(pkg.masks or not self._depgraph._pkg_visibility_check(pkg)):
-			# Account for packages with masks (like KEYWORDS masks)
-			# that are usually ignored in visibility checks for
-			# installed packages, in order to handle cases like
-			# bug #350285.
-			myopts = self._depgraph._frozen_config.myopts
-			use_ebuild_visibility = myopts.get(
-				'--use-ebuild-visibility', 'n') != 'n'
-			avoid_update = "--update" not in myopts and \
-				"remove" not in self._depgraph._dynamic_config.myparams
-			usepkgonly = "--usepkgonly" in myopts
-			if not avoid_update:
-				if not use_ebuild_visibility and usepkgonly:
-					return False
-				elif not self._depgraph._equiv_ebuild_visible(pkg):
-					return False
-
-		in_graph = self._depgraph._dynamic_config._slot_pkg_map[
-			self._root].get(pkg.slot_atom)
-		if in_graph is None:
-			# Mask choices for packages which are not the highest visible
-			# version within their slot (since they usually trigger slot
-			# conflicts).
-			highest_visible, in_graph = self._depgraph._select_package(
-				self._root, pkg.slot_atom)
-			# Note: highest_visible is not necessarily the real highest
-			# visible, especially when --update is not enabled, so use
-			# < operator instead of !=.
-			if highest_visible is not None and pkg < highest_visible:
-				return False
-		elif in_graph != pkg:
-			# Mask choices for packages that would trigger a slot
-			# conflict with a previously selected package.
-			return False
-		return True
-
-	def aux_get(self, cpv, wants):
-		metadata = self._cpv_pkg_map[cpv].metadata
-		return [metadata.get(x, "") for x in wants]
-
-	def match_pkgs(self, atom):
-		return [self._cpv_pkg_map[cpv] for cpv in self.match(atom)]
-
-def ambiguous_package_name(arg, atoms, root_config, spinner, myopts):
-
-	if "--quiet" in myopts:
-		writemsg("!!! The short ebuild name \"%s\" is ambiguous. Please specify\n" % arg, noiselevel=-1)
-		writemsg("!!! one of the following fully-qualified ebuild names instead:\n\n", noiselevel=-1)
-		for cp in sorted(set(portage.dep_getkey(atom) for atom in atoms)):
-			writemsg("    " + colorize("INFORM", cp) + "\n", noiselevel=-1)
-		return
-
-	s = search(root_config, spinner, "--searchdesc" in myopts,
-		"--quiet" not in myopts, "--usepkg" in myopts,
-		"--usepkgonly" in myopts)
-	null_cp = portage.dep_getkey(insert_category_into_atom(
-		arg, "null"))
-	cat, atom_pn = portage.catsplit(null_cp)
-	s.searchkey = atom_pn
-	for cp in sorted(set(portage.dep_getkey(atom) for atom in atoms)):
-		s.addCP(cp)
-	s.output()
-	writemsg("!!! The short ebuild name \"%s\" is ambiguous. Please specify\n" % arg, noiselevel=-1)
-	writemsg("!!! one of the above fully-qualified ebuild names instead.\n\n", noiselevel=-1)
-
-def _spinner_start(spinner, myopts):
-	if spinner is None:
-		return
-	if "--quiet" not in myopts and \
-		("--pretend" in myopts or "--ask" in myopts or \
-		"--tree" in myopts or "--verbose" in myopts):
-		action = ""
-		if "--fetchonly" in myopts or "--fetch-all-uri" in myopts:
-			action = "fetched"
-		elif "--buildpkgonly" in myopts:
-			action = "built"
-		else:
-			action = "merged"
-		if "--tree" in myopts and action != "fetched": # Tree doesn't work with fetching
-			if "--unordered-display" in myopts:
-				portage.writemsg_stdout("\n" + \
-					darkgreen("These are the packages that " + \
-					"would be %s:" % action) + "\n\n")
-			else:
-				portage.writemsg_stdout("\n" + \
-					darkgreen("These are the packages that " + \
-					"would be %s, in reverse order:" % action) + "\n\n")
-		else:
-			portage.writemsg_stdout("\n" + \
-				darkgreen("These are the packages that " + \
-				"would be %s, in order:" % action) + "\n\n")
-
-	show_spinner = "--quiet" not in myopts and "--nodeps" not in myopts
-	if not show_spinner:
-		spinner.update = spinner.update_quiet
-
-	if show_spinner:
-		portage.writemsg_stdout("Calculating dependencies  ")
-
-def _spinner_stop(spinner):
-	if spinner is None or \
-		spinner.update == spinner.update_quiet:
-		return
-
-	if spinner.update != spinner.update_basic:
-		# update_basic is used for non-tty output,
-		# so don't output backspaces in that case.
-		portage.writemsg_stdout("\b\b")
-
-	portage.writemsg_stdout("... done!\n")
-
-def backtrack_depgraph(settings, trees, myopts, myparams, 
-	myaction, myfiles, spinner):
-	"""
-	Raises PackageSetNotFound if myfiles contains a missing package set.
-	"""
-	_spinner_start(spinner, myopts)
-	try:
-		return _backtrack_depgraph(settings, trees, myopts, myparams, 
-			myaction, myfiles, spinner)
-	finally:
-		_spinner_stop(spinner)
-
-
-def _backtrack_depgraph(settings, trees, myopts, myparams, myaction, myfiles, spinner):
-
-	debug = "--debug" in myopts
-	mydepgraph = None
-	max_retries = myopts.get('--backtrack', 10)
-	max_depth = max(1, (max_retries + 1) / 2)
-	allow_backtracking = max_retries > 0
-	backtracker = Backtracker(max_depth)
-	backtracked = 0
-
-	frozen_config = _frozen_depgraph_config(settings, trees,
-		myopts, spinner)
-
-	while backtracker:
-
-		if debug and mydepgraph is not None:
-			writemsg_level(
-				"\n\nbacktracking try %s \n\n" % \
-				backtracked, noiselevel=-1, level=logging.DEBUG)
-			mydepgraph.display_problems()
-
-		backtrack_parameters = backtracker.get()
-
-		mydepgraph = depgraph(settings, trees, myopts, myparams, spinner,
-			frozen_config=frozen_config,
-			allow_backtracking=allow_backtracking,
-			backtrack_parameters=backtrack_parameters)
-		success, favorites = mydepgraph.select_files(myfiles)
-
-		if success or mydepgraph.success_without_autounmask():
-			break
-		elif not allow_backtracking:
-			break
-		elif backtracked >= max_retries:
-			break
-		elif mydepgraph.need_restart():
-			backtracked += 1
-			backtracker.feedback(mydepgraph.get_backtrack_infos())
-		else:
-			break
-
-	if not (success or mydepgraph.success_without_autounmask()) and backtracked:
-
-		if debug:
-			writemsg_level(
-				"\n\nbacktracking aborted after %s tries\n\n" % \
-				backtracked, noiselevel=-1, level=logging.DEBUG)
-			mydepgraph.display_problems()
-
-		mydepgraph = depgraph(settings, trees, myopts, myparams, spinner,
-			frozen_config=frozen_config,
-			allow_backtracking=False,
-			backtrack_parameters=backtracker.get_best_run())
-		success, favorites = mydepgraph.select_files(myfiles)
-
-	if not success and mydepgraph.autounmask_breakage_detected():
-		if debug:
-			writemsg_level(
-				"\n\nautounmask breakage detected\n\n",
-				noiselevel=-1, level=logging.DEBUG)
-			mydepgraph.display_problems()
-		myopts["--autounmask"] = "n"
-		mydepgraph = depgraph(settings, trees, myopts, myparams, spinner,
-			frozen_config=frozen_config, allow_backtracking=False)
-		success, favorites = mydepgraph.select_files(myfiles)
-
-	return (success, mydepgraph, favorites)
-
-
-def resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
-	"""
-	Raises PackageSetNotFound if myfiles contains a missing package set.
-	"""
-	_spinner_start(spinner, myopts)
-	try:
-		return _resume_depgraph(settings, trees, mtimedb, myopts,
-			myparams, spinner)
-	finally:
-		_spinner_stop(spinner)
-
-def _resume_depgraph(settings, trees, mtimedb, myopts, myparams, spinner):
-	"""
-	Construct a depgraph for the given resume list. This will raise
-	PackageNotFound or depgraph.UnsatisfiedResumeDep when necessary.
-	TODO: Return reasons for dropped_tasks, for display/logging.
-	@rtype: tuple
-	@return: (success, depgraph, dropped_tasks)
-	"""
-	skip_masked = True
-	skip_unsatisfied = True
-	mergelist = mtimedb["resume"]["mergelist"]
-	dropped_tasks = set()
-	frozen_config = _frozen_depgraph_config(settings, trees,
-		myopts, spinner)
-	while True:
-		mydepgraph = depgraph(settings, trees,
-			myopts, myparams, spinner, frozen_config=frozen_config)
-		try:
-			success = mydepgraph._loadResumeCommand(mtimedb["resume"],
-				skip_masked=skip_masked)
-		except depgraph.UnsatisfiedResumeDep as e:
-			if not skip_unsatisfied:
-				raise
-
-			graph = mydepgraph._dynamic_config.digraph
-			unsatisfied_parents = dict((dep.parent, dep.parent) \
-				for dep in e.value)
-			traversed_nodes = set()
-			unsatisfied_stack = list(unsatisfied_parents)
-			while unsatisfied_stack:
-				pkg = unsatisfied_stack.pop()
-				if pkg in traversed_nodes:
-					continue
-				traversed_nodes.add(pkg)
-
-				# If this package was pulled in by a parent
-				# package scheduled for merge, removing this
-				# package may cause the parent package's
-				# dependency to become unsatisfied.
-				for parent_node in graph.parent_nodes(pkg):
-					if not isinstance(parent_node, Package) \
-						or parent_node.operation not in ("merge", "nomerge"):
-						continue
-					# We need to traverse all priorities here, in order to
-					# ensure that a package with an unsatisfied dependency
-					# won't get pulled in, even indirectly via a soft
-					# dependency.
-					unsatisfied_parents[parent_node] = parent_node
-					unsatisfied_stack.append(parent_node)
-
-			unsatisfied_tuples = frozenset(tuple(parent_node)
-				for parent_node in unsatisfied_parents
-				if isinstance(parent_node, Package))
-			pruned_mergelist = []
-			for x in mergelist:
-				if isinstance(x, list) and \
-					tuple(x) not in unsatisfied_tuples:
-					pruned_mergelist.append(x)
-
-			# If the mergelist doesn't shrink then this loop is infinite.
-			if len(pruned_mergelist) == len(mergelist):
-				# This happens if a package can't be dropped because
-				# it's already installed, but it has unsatisfied PDEPEND.
-				raise
-			mergelist[:] = pruned_mergelist
-
-			# Exclude installed packages that have been removed from the graph due
-			# to failure to build/install runtime dependencies after the dependent
-			# package has already been installed.
-			dropped_tasks.update(pkg for pkg in \
-				unsatisfied_parents if pkg.operation != "nomerge")
-
-			del e, graph, traversed_nodes, \
-				unsatisfied_parents, unsatisfied_stack
-			continue
-		else:
-			break
-	return (success, mydepgraph, dropped_tasks)
-
-def get_mask_info(root_config, cpv, pkgsettings,
-	db, pkg_type, built, installed, db_keys, myrepo = None, _pkg_use_enabled=None):
-	try:
-		metadata = dict(zip(db_keys,
-			db.aux_get(cpv, db_keys, myrepo=myrepo)))
-	except KeyError:
-		metadata = None
-
-	if metadata is None:
-		mreasons = ["corruption"]
-	else:
-		eapi = metadata['EAPI']
-		if not portage.eapi_is_supported(eapi):
-			mreasons = ['EAPI %s' % eapi]
-		else:
-			pkg = Package(type_name=pkg_type, root_config=root_config,
-				cpv=cpv, built=built, installed=installed, metadata=metadata)
-
-			modified_use = None
-			if _pkg_use_enabled is not None:
-				modified_use = _pkg_use_enabled(pkg)
-
-			mreasons = get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=modified_use)
-
-	return metadata, mreasons
-
-def show_masked_packages(masked_packages):
-	shown_licenses = set()
-	shown_comments = set()
-	# Maybe there is both an ebuild and a binary. Only
-	# show one of them to avoid redundant appearance.
-	shown_cpvs = set()
-	have_eapi_mask = False
-	for (root_config, pkgsettings, cpv, repo,
-		metadata, mreasons) in masked_packages:
-		output_cpv = cpv
-		if repo:
-			output_cpv += _repo_separator + repo
-		if output_cpv in shown_cpvs:
-			continue
-		shown_cpvs.add(output_cpv)
-		eapi_masked = metadata is not None and \
-			not portage.eapi_is_supported(metadata["EAPI"])
-		if eapi_masked:
-			have_eapi_mask = True
-			# When masked by EAPI, metadata is mostly useless since
-			# it doesn't contain essential things like SLOT.
-			metadata = None
-		comment, filename = None, None
-		if not eapi_masked and \
-			"package.mask" in mreasons:
-			comment, filename = \
-				portage.getmaskingreason(
-				cpv, metadata=metadata,
-				settings=pkgsettings,
-				portdb=root_config.trees["porttree"].dbapi,
-				return_location=True)
-		missing_licenses = []
-		if not eapi_masked and metadata is not None:
-			try:
-				missing_licenses = \
-					pkgsettings._getMissingLicenses(
-						cpv, metadata)
-			except portage.exception.InvalidDependString:
-				# This will have already been reported
-				# above via mreasons.
-				pass
-
-		writemsg("- "+output_cpv+" (masked by: "+", ".join(mreasons)+")\n",
-			noiselevel=-1)
-
-		if comment and comment not in shown_comments:
-			writemsg(filename + ":\n" + comment + "\n",
-				noiselevel=-1)
-			shown_comments.add(comment)
-		portdb = root_config.trees["porttree"].dbapi
-		for l in missing_licenses:
-			l_path = portdb.findLicensePath(l)
-			if l in shown_licenses:
-				continue
-			msg = ("A copy of the '%s' license" + \
-			" is located at '%s'.\n\n") % (l, l_path)
-			writemsg(msg, noiselevel=-1)
-			shown_licenses.add(l)
-	return have_eapi_mask
-
-def show_mask_docs():
-	writemsg("For more information, see the MASKED PACKAGES "
-		"section in the emerge\n", noiselevel=-1)
-	writemsg("man page or refer to the Gentoo Handbook.\n", noiselevel=-1)
-
-def show_blocker_docs_link():
-	writemsg("\nFor more information about " + bad("Blocked Packages") + ", please refer to the following\n", noiselevel=-1)
-	writemsg("section of the Gentoo Linux x86 Handbook (architecture is irrelevant):\n\n", noiselevel=-1)
-	writemsg("http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?full=1#blocked\n\n", noiselevel=-1)
-
-def get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
-	return [mreason.message for \
-		mreason in _get_masking_status(pkg, pkgsettings, root_config, myrepo=myrepo, use=use)]
-
-def _get_masking_status(pkg, pkgsettings, root_config, myrepo=None, use=None):
-	mreasons = _getmaskingstatus(
-		pkg, settings=pkgsettings,
-		portdb=root_config.trees["porttree"].dbapi, myrepo=myrepo)
-
-	if not pkg.installed:
-		if not pkgsettings._accept_chost(pkg.cpv, pkg.metadata):
-			mreasons.append(_MaskReason("CHOST", "CHOST: %s" % \
-				pkg.metadata["CHOST"]))
-
-	if pkg.invalid:
-		for msgs in pkg.invalid.values():
-			for msg in msgs:
-				mreasons.append(
-					_MaskReason("invalid", "invalid: %s" % (msg,)))
-
-	if not pkg.metadata["SLOT"]:
-		mreasons.append(
-			_MaskReason("invalid", "SLOT: undefined"))
-
-	return mreasons

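The removed backtrack_depgraph()/_backtrack_depgraph() pair above drives a bounded retry loop: build a depgraph, and if it reports need_restart(), feed get_backtrack_infos() into the Backtracker and try again, up to the --backtrack limit (default 10). A minimal standalone sketch of that control flow follows; SimpleBacktracker and the solve() callback are illustrative stand-ins, not the Portage API.

    # Sketch of the bounded backtracking loop, assuming a solve() callback
    # that returns an object with success/need_restart/backtrack_infos.
    class SimpleBacktracker(object):
        def __init__(self, max_depth):
            self._queue = [{}]               # parameter sets left to try
            self._max_depth = max_depth

        def __bool__(self):
            return bool(self._queue)
        __nonzero__ = __bool__               # Python 2 truth value

        def get(self):
            return self._queue.pop(0)

        def feedback(self, infos):
            # Queue a new parameter set derived from the failed run,
            # bounded so backtracking cannot grow without limit.
            if len(infos) <= self._max_depth:
                self._queue.append(infos)

    def backtrack_solve(solve, max_retries=10):
        backtracker = SimpleBacktracker(max(1, (max_retries + 1) // 2))
        tries = 0
        result = None
        while backtracker:
            result = solve(backtracker.get())
            if result.success or not result.need_restart or tries >= max_retries:
                break
            tries += 1
            backtracker.feedback(result.backtrack_infos)
        return result
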
diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index cd94a76..942cc88 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -140,10 +140,13 @@ def get_package_id(connection, categories, package, repo):
 	if not entries is None:
 		return entries[0]
 
-def add_new_manifest_sql(connection, categories, package, repo):
+def add_new_manifest_sql(connection, cp, repo):
 	cursor = connection.cursor()
 	sqlQ1 = "INSERT INTO packages (category_id, package, repo_id, checksum, active) VALUES (%s, %s, %s, '0', 'True')"
 	sqlQ2 = 'SELECT LAST_INSERT_ID()'
+	element = cp.split('/')
+	categories = element[0]
+	package = element[1]
 	repo_id = get_repo_id(connection, repo)
 	category_id = get_category_id(connection, categories)
 	cursor.execute(sqlQ1, (category_id, package, repo_id, ))
@@ -211,7 +214,7 @@ def add_new_ebuild_metadata_sql(connection, ebuild_id, keywords, restrictions, i
 			use_id = cursor.fetchone()[0]
 		cursor.execute(sqlQ6, (ebuild_id, use_id, set_iuse,))
 	for keyword in keywords:
-		set_keyword = 'sStable'
+		set_keyword = 'Stable'
 		if keyword[0] in ["~"]:
 			keyword = keyword[1:]
 			set_keyword = 'Unstable'
@@ -353,8 +356,9 @@ def get_ebuild_id_db(connection, checksum, package_id):
 	cursor = connection.cursor()
 	sqlQ = "SELECT ebuild_id FROM ebuilds WHERE package_id = %s AND checksum = %s"
 	cursor.execute(sqlQ, (package_id, checksum,))
-	entries = cursor.fetchone()
+	entries = cursor.fetchall()
 	cursor.close()
+	ebuilds_id = []
 	for i in entries:
 		ebuilds_id.append(i[0])
 	return ebuilds_id
@@ -533,7 +537,10 @@ def get_hilight_info(connection):
 	for i in entries:
 		aadict = {}
 		aadict['hilight_search'] = i[0]
-		aadict['hilight_searchend'] = i[1]
+		if i[1] == "":
+			aadict['hilight_search_end'] = i[1]
+		else:
+			aadict['hilight_search_end'] = i[1]
 		aadict['hilight_css'] = i[2]
 		aadict['hilight_start'] = i[3]
 		aadict['hilight_end'] = i[4]
@@ -544,14 +551,15 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 	cursor = connection.cursor()
 	sqlQ1 = 'SELECT build_log_id FROM build_logs WHERE ebuild_id = %s'
 	sqlQ2 ="INSERT INTO build_logs (ebuild_id, summery_text, log_hash) VALUES (%s, 'FF', 'FF')"
-	sqlQ3 = "UPDATE build_logs SET fail = 'True', summery_text = %s, log_hash = %s WHERE build_log_id = %s"
+	sqlQ3 = "UPDATE build_logs SET fail = 'True' WHERE build_log_id = %s"
 	sqlQ4 = 'INSERT INTO build_logs_config (build_log_id, config_id, logname) VALUES (%s, %s, %s)'
 	sqlQ6 = 'INSERT INTO build_logs_use (build_log_id, use_id, status) VALUES (%s, %s, %s)'
 	sqlQ7 = 'SELECT log_hash FROM build_logs WHERE build_log_id = %s'
 	sqlQ8 = 'SELECT use_id, status FROM build_logs_use WHERE build_log_id = %s'
 	sqlQ9 = 'SELECT config_id FROM build_logs_config WHERE build_log_id = %s'
-	sqlQ10 = "UPDATE build_logs SET log_hash = %s WHERE build_log_id = %s"
+	sqlQ10 = "UPDATE build_logs SET summery_text = %s, log_hash = %s WHERE build_log_id = %s"
 	sqlQ11 = 'SELECT LAST_INSERT_ID()'
+	sqlQ12 = 'INSERT INTO build_logs_hilight (build_log_id, start_line, end_line, hilight_css) VALUES (%s, %s, %s, %s)'
 	build_log_id_list = []
 	cursor.execute(sqlQ1, (build_dict['ebuild_id'],))
 	entries = cursor.fetchall()
@@ -560,6 +568,10 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 			build_log_id_list.append(build_log_id[0])
 	else:
 		build_log_id_list = None
+	
+	def add_new_hilight(build_log_id, build_log_dict):
+		for k, hilight_tmp in sorted(build_log_dict['hilight_dict'].iteritems()):
+			cursor.execute(sqlQ13, (build_log_id,hilight_tmp['startline'],  hilight_tmp['endline'], hilight_tmp['hilight'],))
 
 	def build_log_id_match(build_log_id_list, build_dict, build_log_dict):
 		for build_log_id in build_log_id_list:
@@ -589,9 +601,9 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 		cursor.execute(sqlQ11)
 		build_log_id = cursor.fetchone()[0]
 		if 'True' in build_log_dict['summary_error_list']:
-			cursor.execute(sqlQ3, (build_log_dict['build_error'], build_log_dict['log_hash'], build_log_id,))
-		else:
-			cursor.execute(sqlQ10, (build_log_dict['log_hash'], build_log_id,))
+			cursor.execute(sqlQ3, (build_log_id,))
+		cursor.execute(sqlQ10, (build_log_dict['build_error'], build_log_dict['log_hash'], build_log_id,))
+		add_new_hilight(build_log_id, build_log_dict)
 		cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
 		if not build_dict['build_useflags'] is None:
 			for use_id, status in  build_dict['build_useflags'].iteritems():

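The reworked add_new_manifest_sql() above now receives the whole cp string and derives the category and package itself before resolving the ids. A tiny runnable sketch of that split:

    # Splitting a cp ('category/package') string, as the updated
    # add_new_manifest_sql() does before looking up category_id.
    def split_cp(cp):
        category, package = cp.split('/', 1)
        return category, package

    category, package = split_cp('dev-lang/python')
    assert category == 'dev-lang' and package == 'python'
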
diff --git a/gobs/pym/package.py b/gobs/pym/package.py
index 25494a7..7c37f7c 100644
--- a/gobs/pym/package.py
+++ b/gobs/pym/package.py
@@ -1,14 +1,14 @@
 from __future__ import print_function
 import portage
 from gobs.flags import gobs_use_flags
-from gobs.repoman_gobs import gobs_repoman
 from gobs.manifest import gobs_manifest
 from gobs.text import get_ebuild_cvs_revision
 from gobs.flags import gobs_use_flags
 from gobs.mysql_querys import get_config, get_config_id, add_gobs_logs, get_default_config, \
 	add_new_build_job, get_config_id_list, update_manifest_sql, add_new_manifest_sql, \
 	add_new_ebuild_sql, get_ebuild_id_db, add_old_ebuild, get_ebuild_id_list, \
-	get_ebuild_checksum, get_manifest_db, get_cp_repo_from_package_id
+	get_ebuild_checksum, get_manifest_db, get_cp_repo_from_package_id, \
+	get_cp_from_package_id
 from gobs.readconf import get_conf_settings
 reader=get_conf_settings()
 gobs_settings_dict=reader.read_gobs_settings_all()
@@ -98,14 +98,6 @@ class gobs_package(object):
 		else:
 			ebuild_version_cvs_revision_tree = get_ebuild_cvs_revision(pkgdir + "/" + package + "-" + ebuild_version_tree + ".ebuild")
 
-		# run repoman on the ebuild
-		#init_repoman = gobs_repoman(self._mysettings, self._myportdb)
-		#repoman_error = init_repoman.check_repoman(pkgdir, cpv, config_id)
-		#if repoman_error != []:
-		#       log_msg = "Repoman: %s have errors on repo %s" % (cpv, repo,)
-		#        add_gobs_logs(self._conn, log_msg, "info", self._config_id)
-		repoman_error = []
-
 		# Get the ebuild metadata
 		ebuild_version_metadata_tree = self.get_ebuild_metadata(cpv, repo)
 		# if there some error to get the metadata we add rubish to the
@@ -125,7 +117,6 @@ class gobs_package(object):
 		attDict['ebuild_version_metadata_tree'] = ebuild_version_metadata_tree
 		#attDict['ebuild_version_text_tree'] = ebuild_version_text_tree[0]
 		attDict['ebuild_version_revision_tree'] = ebuild_version_cvs_revision_tree
-		attDict['ebuild_error'] = repoman_error
 		return attDict
 
 	def add_new_build_job_db(self, ebuild_id_list, packageDict, config_cpv_listDict):
@@ -177,7 +168,7 @@ class gobs_package(object):
 		package_metadataDict[package] = attDict
 		return package_metadataDict
 
-	def add_package(self, packageDict, package_id, new_ebuild_id_list, old_ebuild_list, manifest_checksum_tree):
+	def add_package(self, packageDict, package_id, new_ebuild_id_list, old_ebuild_id_list, manifest_checksum_tree):
 		# Use packageDict to update the db
 		ebuild_id_list = add_new_ebuild_sql(self._conn, package_id, packageDict)
 		
@@ -195,6 +186,7 @@ class gobs_package(object):
 		update_manifest_sql(self._conn, package_id, manifest_checksum_tree)
 
 		# Get the best cpv for the configs and add it to config_cpv_listDict
+		cp = get_cp_from_package_id(self._conn, package_id)
 		configs_id_list  = get_config_id_list(self._conn)
 		config_cpv_listDict = self.config_match_ebuild(cp, configs_id_list)
 
@@ -301,13 +293,15 @@ class gobs_package(object):
 				if checksums_db is None:
 					ebuild_version_manifest_checksum_db = None
 				elif len(checksums_db) >= 2:
+					# FIXME: Add function to fix the dups.
 					for checksum in checksums_db:
 						ebuilds_id = get_ebuild_id_db(self._conn, checksum, package_id)
 						log_msg = "U %s:%s:%s Dups of checksums" % (cpv, repo, ebuilds_id,)
 						add_gobs_logs(self._conn, log_msg, "error", self._config_id)
 						log_msg = "C %s:%s ... Fail." % (cp, repo)
 						add_gobs_logs(self._conn, log_msg, "error", self._config_id)
-						return
+					return
+
 				else:
 					ebuild_version_manifest_checksum_db = checksums_db[0]
 

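The last hunk above moves the return out of the duplicate-checksum loop, so every duplicate is logged before the update aborts. A sketch of that guard, with the two gobs helpers stubbed out so it runs on its own:

    # Duplicate-checksum guard, shaped like the code in update_package_db().
    def get_ebuild_id_db(conn, checksum, package_id):       # stub helper
        return [101, 102]

    def add_gobs_logs(conn, log_msg, log_type, config_id):  # stub helper
        print("%s: %s" % (log_type, log_msg))

    def checksum_from_db(conn, config_id, package_id, checksums_db, cpv, repo, cp):
        if checksums_db is None:
            return None                      # no active ebuild recorded yet
        if len(checksums_db) >= 2:
            # Log every duplicate first, then abort the update once.
            for checksum in checksums_db:
                ebuilds_id = get_ebuild_id_db(conn, checksum, package_id)
                add_gobs_logs(conn, "U %s:%s:%s Dups of checksums"
                    % (cpv, repo, ebuilds_id), "error", config_id)
            add_gobs_logs(conn, "C %s:%s ... Fail." % (cp, repo),
                "error", config_id)
            return None
        return checksums_db[0]               # exactly one checksum: use it
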
diff --git a/gobs/pym/pgsql.py b/gobs/pym/pgsql.py
deleted file mode 100644
index f8de5ff..0000000
--- a/gobs/pym/pgsql.py
+++ /dev/null
@@ -1,633 +0,0 @@
-#every function takes a connection parameter provided by the connection manager (CM)
-from __future__ import print_function
-
-def get_default_config(connection):
-	cursor = connection.cursor()
-	sqlQ = 'SELECT id FROM configs WHERE default_config = True'
-	cursor.execute(sqlQ)
-	return cursor.fetchone()
-
-def get_profile_checksum(connection, config_profile):
-    cursor = connection.cursor()
-    sqlQ = "SELECT make_conf_checksum FROM configs WHERE active = 'True' AND id = %s AND auto = 'True'"
-    cursor.execute(sqlQ, (config_profile,))
-    return cursor.fetchone()
-
-def get_packages_to_build(connection, config_profile):
-  cursor =connection.cursor()
-  # no point in returning dead ebuilds just to chuck them out later
-  sqlQ1 = '''SELECT post_message, queue_id, ebuild_id FROM buildqueue WHERE config = %s AND extract(epoch from (NOW()) - timestamp) > 7200 ORDER BY queue_id LIMIT 1'''
-
-  sqlQ2 ='''SELECT ebuild_id,category,package_name,ebuild_version,ebuild_checksum FROM ebuilds,buildqueue,packages
-    WHERE buildqueue.ebuild_id=ebuilds.id AND ebuilds.package_id=packages.package_id AND queue_id = %s AND ebuilds.active = TRUE'''
-  
-  # get use flags to use
-  sqlQ3 = "SELECT useflag, enabled FROM ebuildqueuedwithuses WHERE queue_id = %s"
-  cursor.execute(sqlQ1, (config_profile,))
-  build_dict={}
-  entries = cursor.fetchone()
-  if entries is None:
-    return None
-  if entries[2] is None:
-    build_dict['ebuild_id'] = None
-    build_dict['queue_id'] = entries[1]
-    return build_dict
-  msg_list = []
-  if not entries[0] is None:
-    for msg in entries[0].split(" "):
-      msg_list.append(msg)
-  build_dict['post_message'] = msg_list
-  build_dict['queue_id'] = entries[1]
-  build_dict['ebuild_id']=entries[2]
-  cursor.execute(sqlQ2, (build_dict['queue_id'],))
-  #make a list of objects that have ebuild_id, post_message and the rest as attributes
-  entries = cursor.fetchone()
-  if entries is None:
-    build_dict['checksum']= None
-    return build_dict
-  build_dict['ebuild_id']=entries[0]
-  build_dict['category']=entries[1]
-  build_dict['package']=entries[2]
-  build_dict['ebuild_version']=entries[3]
-  build_dict['checksum']=entries[4]
-
-  #add an enabled and a disabled list to the objects in the item list
-  cursor.execute(sqlQ3, (build_dict['queue_id'],))
-  uses={}
-  for row in cursor.fetchall():
-    uses[ row[0] ] = row[1]
-  build_dict['build_useflags']=uses
-  return build_dict
-
-def check_revision(connection, build_dict):
-  cursor = connection.cursor()
-  sqlQ1 = 'SELECT queue_id FROM buildqueue WHERE ebuild_id = %s AND config = %s'
-  sqlQ2 = "SELECT useflag FROM ebuildqueuedwithuses WHERE queue_id = %s AND enabled = 'True'"
-  cursor.execute(sqlQ1, (build_dict['ebuild_id'], build_dict['config_profile']))
-  queue_id_list = cursor.fetchall()
-  if queue_id_list == []:
-    return None
-  for queue_id in queue_id_list:
-    cursor.execute(sqlQ2, (queue_id[0],))
-    entries = cursor.fetchall()
-    queue_useflags = []
-    if entries == []:
-      queue_useflags = None
-    else:
-      for use_line in sorted(entries):
-	      queue_useflags.append(use_line[0])
-    if queue_useflags == build_dict['build_useflags']:
-      return queue_id[0]
-  return None
-
-def get_config_list(connection):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT id FROM configs WHERE default_config = False AND active = True'
-  cursor.execute(sqlQ)
-  entries = cursor.fetchall()
-  if entries == ():
-    return None
-  else:
-    config_id_list = []
-    for config_id in entries:
-      config_id_list.append(config_id[0])
-    return config_id_list
-
-def get_config_list_all(connection):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT id FROM configs'
-  cursor.execute(sqlQ)
-  return cursor.fetchall()
-
-def update__make_conf(connection, configsDict):
-  cursor = connection.cursor()
-  sqlQ = 'UPDATE configs SET make_conf_checksum = %s, make_conf_text = %s, active = %s, config_error = %s WHERE id = %s'
-  for k, v in configsDict.iteritems():
-    params = [v['make_conf_checksum_tree'], v['make_conf_text'], v['active'], v['config_error'], k]
-    cursor.execute(sqlQ, params)
-  connection.commit()
-
-def have_package_db(connection, categories, package):
-  cursor = connection.cursor()
-  sqlQ ='SELECT package_id FROM packages WHERE category = %s AND package_name = %s'
-  params = categories, package
-  cursor.execute(sqlQ, params)
-  return cursor.fetchone()
-  
-def have_activ_ebuild_id(connection, ebuild_id):
-	cursor = connection.cursor()
-	sqlQ = 'SELECT ebuild_checksum FROM ebuilds WHERE id = %s AND active = TRUE'
-	cursor.execute(sqlQ, (ebuild_id,))
-	entries = cursor.fetchone()
-	if entries is None:
-		return None
-	# If entries is not None we need [0]
-	return entries[0]
-
-def get_categories_db(connection):
-  cursor = connection.cursor()
-  sqlQ =' SELECT category FROM categories'
-  cursor.execute(sqlQ)
-  return cursor.fetchall()
-
-def get_categories_checksum_db(connection, categories):
-  cursor = connection.cursor()
-  sqlQ =' SELECT metadata_xml_checksum FROM categories_meta WHERE category = %s'
-  cursor.execute(sqlQ, (categories,))
-  return cursor.fetchone()
-
-def add_new_categories_meta_sql(connection, categories, categories_metadata_xml_checksum_tree, categories_metadata_xml_text_tree):
-  cursor = connection.cursor()
-  sqlQ = 'INSERT INTO categories_meta (category, metadata_xml_checksum, metadata_xml_text) VALUES  ( %s, %s, %s )'
-  params = categories, categories_metadata_xml_checksum_tree, categories_metadata_xml_text_tree
-  cursor.execute(sqlQ, params)
-  connection.commit()
-
-def update_categories_meta_sql(connection, categories, categories_metadata_xml_checksum_tree, categories_metadata_xml_text_tree):
-  cursor = connection.cursor()
-  sqlQ ='UPDATE categories_meta SET metadata_xml_checksum = %s, metadata_xml_text = %s WHERE category = %s'
-  params = (categories_metadata_xml_checksum_tree, categories_metadata_xml_text_tree, categories)
-  cursor.execute(sqlQ, params)
-  connection.commit()
-
-def add_new_manifest_sql(connection, package_id, get_manifest_text, manifest_checksum_tree):
-  cursor = connection.cursor()
-  sqlQ = 'INSERT INTO manifest (package_id, manifest, checksum) VALUES  ( %s, %s, %s )'
-  params = package_id, get_manifest_text, manifest_checksum_tree
-  cursor.execute(sqlQ, params)
-  connection.commit()
-
-def add_new_package_metadata(connection, package_id, package_metadataDict):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT changelog_checksum FROM packages_meta WHERE package_id = %s'
-  cursor.execute(sqlQ, (package_id,))
-  if cursor.fetchone() is None:
-    sqlQ = 'INSERT INTO packages_meta (package_id, changelog_text, changelog_checksum, metadata_text, metadata_checksum) VALUES  ( %s, %s, %s, %s, %s )'
-    for k, v in package_metadataDict.iteritems():
-      params = package_id, v['changelog_text'], v['changelog_checksum'], v[' metadata_xml_text'], v['metadata_xml_checksum']
-      cursor.execute(sqlQ, params)
-    connection.commit()
-
-def update_new_package_metadata(connection, package_id, package_metadataDict):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT changelog_checksum, metadata_checksum FROM packages_meta WHERE package_id = %s'
-  cursor.execute(sqlQ, package_id)
-  entries = cursor.fetchone()
-  if entries is None:
-    changelog_checksum_db = None
-    metadata_checksum_db = None
-  else:
-    changelog_checksum_db = entries[0]
-    metadata_checksum_db = entries[1]
-  for k, v in package_metadataDict.iteritems():
-    if changelog_checksum_db != v['changelog_checksum']:
-      sqlQ = 'UPDATE packages_meta SET changelog_text = %s, changelog_checksum = %s WHERE package_id = %s'
-      params = v['changelog_text'], v['changelog_checksum'], package_id
-      cursor.execute(sqlQ, params)
-    if metadata_checksum_db != v['metadata_xml_checksum']:
-      sqlQ = 'UPDATE packages_meta SET metadata_text = %s, metadata_checksum = %s WHERE package_id = %s'
-      params = v[' metadata_xml_text'], v['metadata_xml_checksum'], package_id
-      cursor.execute(sqlQ, params)
-  connection.commit()
-
-def get_manifest_db(connection, package_id):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT checksum FROM manifest WHERE package_id = %s'
-  cursor.execute(sqlQ, package_id)
-  entries = cursor.fetchone()
-  if entries is None:
-	  return None
-  # If entries is not None we need [0]
-  return entries[0]
-
-def update_manifest_sql(connection, package_id, get_manifest_text, manifest_checksum_tree):
-  cursor = connection.cursor()
-  sqlQ = 'UPDATE manifest SET checksum = %s, manifest = %s WHERE package_id = %s'
-  params = (manifest_checksum_tree, get_manifest_text, package_id)
-  cursor.execute(sqlQ, params)
-  connection.commit()
-
-def add_new_metadata(connection, metadataDict):
-  cursor = connection.cursor()
-  for k, v in metadataDict.iteritems():
-    #moved the cursor outside of the loop
-    sqlQ = 'SELECT updaterestrictions( %s, %s )'
-    params = k, v['restrictions']
-    cursor.execute(sqlQ, params)
-    sqlQ = 'SELECT updatekeywords( %s, %s )'
-    params = k, v['keyword']
-    cursor.execute(sqlQ, params)
-    sqlQ = 'SELECT updateiuse( %s, %s )'
-    params = k, v['iuse']
-    cursor.execute(sqlQ, params)
-  connection.commit()
-
-def add_new_package_sql(connection, packageDict):
-  #let's have a new cursor for each method as per best practice
-  cursor = connection.cursor()
-  sqlQ="SELECT insert_ebuild( %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, 'True')"
-  ebuild_id_list = []
-  package_id_list = []
-  for k, v in packageDict.iteritems():
-    params = [v['categories'], v['package'], v['ebuild_version_tree'], v['ebuild_version_revision'], v['ebuild_version_checksum_tree'],
-    v['ebuild_version_text'], v['ebuild_version_metadata_tree'][0], v['ebuild_version_metadata_tree'][1],
-    v['ebuild_version_metadata_tree'][12], v['ebuild_version_metadata_tree'][2], v['ebuild_version_metadata_tree'][3],
-    v['ebuild_version_metadata_tree'][5],v['ebuild_version_metadata_tree'][6], v['ebuild_version_metadata_tree'][7],
-    v['ebuild_version_metadata_tree'][9], v['ebuild_version_metadata_tree'][11],
-    v['ebuild_version_metadata_tree'][13],v['ebuild_version_metadata_tree'][14], v['ebuild_version_metadata_tree'][15],
-    v['ebuild_version_metadata_tree'][16], v['ebuild_version_metadata_tree'][4]]
-    cursor.execute(sqlQ, params)
-    mid = cursor.fetchone()
-    mid=mid[0]
-    ebuild_id_list.append(mid[1])
-    package_id_list.append(mid[0])
-  connection.commit()
-  # add_new_metadata(metadataDict)
-  return ebuild_id_list, package_id_list
-
-def add_new_ebuild_sql(connection, packageDict, new_ebuild_list):
-  #let's have a new cursor for each method as per best practice
-  cursor = connection.cursor()
-  sqlQ="SELECT insert_ebuild( %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, 'True')"
-  ebuild_id_list = []
-  package_id_list = []
-  for k, v in packageDict.iteritems():
-    for x in new_ebuild_list:
-      if x == v['ebuild_version_tree']:
-        params = [v['categories'], v['package'], v['ebuild_version_tree'], v['ebuild_version_revision'], v['ebuild_version_checksum_tree'],
-        v['ebuild_version_text'], v['ebuild_version_metadata_tree'][0], v['ebuild_version_metadata_tree'][1],
-        v['ebuild_version_metadata_tree'][12], v['ebuild_version_metadata_tree'][2], v['ebuild_version_metadata_tree'][3],
-        v['ebuild_version_metadata_tree'][5],v['ebuild_version_metadata_tree'][6], v['ebuild_version_metadata_tree'][7],
-        v['ebuild_version_metadata_tree'][9], v['ebuild_version_metadata_tree'][11],
-        v['ebuild_version_metadata_tree'][13],v['ebuild_version_metadata_tree'][14], v['ebuild_version_metadata_tree'][15],
-        v['ebuild_version_metadata_tree'][16], v['ebuild_version_metadata_tree'][4]]
-        cursor.execute(sqlQ, params)
-        mid = cursor.fetchone()
-        mid=mid[0]
-        ebuild_id_list.append(mid[1])
-        package_id_list.append(mid[0])
-  connection.commit()
-  # add_new_metadata(metadataDict)
-  return ebuild_id_list, package_id_list
-
-def update_active_ebuild(connection, package_id, ebuild_version_tree):
-  cursor = connection.cursor()
-  sqlQ ="UPDATE ebuilds SET active = 'False', timestamp = now() WHERE package_id = %s AND ebuild_version = %s AND active = 'True'"
-  cursor.execute(sqlQ, (package_id, ebuild_version_tree))
-  connection.commit()
-
-def get_ebuild_id_db(connection, categories, package, ebuild_version_tree):
-	cursor = connection.cursor()
-	sqlQ ='SELECT id FROM packages WHERE category = %s AND ebuild_name = %s AND ebuild_version = %s'
-	cursor.execute(sqlQ, (categories, package, ebuild_version_tree))
-	entries = cursor.fetchone()
-	return entries
-
-def get_ebuild_id_db_checksum(connection, build_dict):
-	cursor = connection.cursor()
-	sqlQ = 'SELECT id FROM ebuilds WHERE ebuild_version = %s AND ebuild_checksum = %s AND package_id = %s'
-	cursor.execute(sqlQ, (build_dict['ebuild_version'], build_dict['checksum'], build_dict['package_id']))
-	ebuild_id_list = sorted(cursor.fetchall())
-	if ebuild_id_list == []:
-		return None
-	return ebuild_id_list[0]
-
-def get_cpv_from_ebuild_id(connection, ebuild_id):
-	cursor = connection.cursor()
-	#wasn't used
-	#sqlQ = 'SELECT package_id FROM ebuild WHERE id = %s'
-	sqlQ='SELECT category, ebuild_name, ebuild_version FROM packages WHERE id = %s'
-	cursor.execute(sqlQ, ebuild_id)
-	entries = cursor.fetchone()
-	return entries
-
-def get_cp_from_package_id(connection, package_id):
-  cursor =connection.cursor()
-  sqlQ = "SELECT ARRAY_TO_STRING(ARRAY[category, package_name] , '/') AS cp FROM packages WHERE package_id = %s"
-  cursor.execute(sqlQ, (package_id,))
-  return cursor.fetchone()
-
-def get_keyword_id_db(connection, arch, stable):
-	cursor =connection.cursor()
-	sqlQ ='SELECT id_keyword FROM keywords WHERE ARCH = %s AND stable = %s'
-	cursor.execute(sqlQ, (arch, stable))
-	entries = cursor.fetchone()
-	#why only return 1 entry? if that IS the point, use LIMIT 1
-	return entries
-	
-def add_new_keywords(connection, ebuild_id, keyword_id):
-	cursor = connection.cursor()
-	sqlQ ='INSERT INTO keywordsToEbuild (ebuild_id, id_keyword) VALUES  ( %s, %s )'
-	cursor.execute(sqlQ, (ebuild_id, keyword_id))
-	connection.commit()
-		
-def have_package_buildqueue(connection, ebuild_id, config_id):
-	cursor = connection.cursor()
-	sqlQ = 'SELECT useflags FROM buildqueue WHERE  ebuild_id = %s  AND config_id = %s'
-	params = (ebuild_id[0], config_id)
-	cursor.execute(sqlQ, params)
-	entries = cursor.fetchone()
-	return entries
-
-def get_queue_id_list_config(connection, config_id):
-	cursor = connection.cursor()
-	sqlQ = 'SELECT queue_id FROM buildqueue WHERE config = %s'
-	cursor.execute(sqlQ,  (config_id,))
-	entries = cursor.fetchall()
-	return entries
-
-def add_new_package_buildqueue(connection, ebuild_id, config_id, iuse_flags_list, use_enable, message):
-  cursor = connection.cursor()
-  sqlQ="SELECT insert_buildqueue( %s, %s, %s, %s, %s )"
-  if not iuse_flags_list:
-    iuse_flags_list=None
-    use_enable=None
-  params = ebuild_id, config_id, iuse_flags_list, use_enable, message
-  cursor.execute(sqlQ, params)
-  connection.commit()
-  
-def get_ebuild_checksum(connection, package_id, ebuild_version_tree):
-  cursor = connection.cursor()
-  sqlQ = 'SELECT ebuild_checksum FROM ebuilds WHERE package_id = %s AND ebuild_version = %s AND active = TRUE'
-  cursor.execute(sqlQ, (package_id, ebuild_version_tree))
-  entries = cursor.fetchone()
-  if entries is None:
-	  return None
-  # If entries is not None we need [0]
-  return entries[0]
-
-def cp_all_db(connection):
-  cursor = connection.cursor()
-  sqlQ = "SELECT package_id FROM packages"
-  cursor.execute(sqlQ)
-  return cursor.fetchall()
-
-def add_old_package(connection, old_package_list):
-  cursor = connection.cursor()
-  mark_old_list = []
-  sqlQ = "UPDATE ebuilds SET active = 'FALSE', timestamp = NOW() WHERE package_id = %s AND active = 'TRUE' RETURNING package_id"
-  for old_package in old_package_list:
-    cursor.execute(sqlQ, (old_package[0],))
-    entries = cursor.fetchone()
-    if entries is not None:
-      mark_old_list.append(entries[0])
-  connection.commit()
-  return mark_old_list
-  
-def get_old_categories(connection, categories_line):
-  cursor = connection.cursor()
-  sqlQ = "SELECT package_name FROM packages WHERE category = %s"
-  cursor.execute(sqlQ, (categories_line,))
-  return cursor.fetchone()
-
-def del_old_categories(connection, real_old_categories):
-  cursor = connection.cursor()
-  sqlQ1 = 'DELETE FROM categories_meta WHERE category = %s'
-  sqlQ2 = 'DELETE FROM categories WHERE category = %s'
-  cursor.execute(sqlQ1, (real_old_categories,))
-  cursor.execute(sqlQ2, (real_old_categories,))
-  connection.commit()
-
-def add_old_ebuild(connection, package_id, old_ebuild_list):
-  cursor = connection.cursor()
-  sqlQ1 = "UPDATE ebuilds SET active = 'FALSE' WHERE package_id = %s AND ebuild_version = %s"
-  sqlQ2 = "SELECT id FROM ebuilds WHERE package_id = %s AND ebuild_version = %s AND active = 'TRUE'"
-  sqlQ3 = "SELECT queue_id FROM buildqueue WHERE ebuild_id = %s"
-  sqlQ4 = 'DELETE FROM ebuildqueuedwithuses WHERE queue_id = %s'
-  sqlQ5 = 'DELETE FROM buildqueue WHERE queue_id = %s'
-  for old_ebuild in  old_ebuild_list:
-    cursor.execute(sqlQ2, (package_id, old_ebuild[0]))
-    ebuild_id_list = cursor.fetchall()
-    if ebuild_id_list is not None:
-      for ebuild_id in ebuild_id_list:
-        cursor.execute(sqlQ3, (ebuild_id))
-        queue_id_list = cursor.fetchall()
-        if queue_id_list is not None:
-          for queue_id in queue_id_list:
-            cursor.execute(sqlQ4, (queue_id))
-            cursor.execute(sqlQ5, (queue_id))
-        cursor.execute(sqlQ1, (package_id, old_ebuild[0]))
-  connection.commit()
-  
-def cp_all_old_db(connection, old_package_id_list):
-  cursor = connection.cursor()
-  old_package_list = []
-  for old_package in old_package_id_list:
-    sqlQ = "SELECT package_id FROM ebuilds WHERE package_id = %s AND active = 'FALSE' AND date_part('days', NOW() - timestamp) < 60"
-    cursor.execute(sqlQ, old_package)
-    entries = cursor.fetchone()
-    if entries is None:
-      old_package_list.append(old_package)
-  return old_package_list
-
-def del_old_queue(connection, queue_id):
-	cursor = connection.cursor()
-	sqlQ1 = 'DELETE FROM ebuildqueuedwithuses WHERE queue_id = %s'
-	sqlQ2 = 'DELETE FROM querue_retest WHERE querue_id  = %s'
-	sqlQ3 = 'DELETE FROM buildqueue WHERE queue_id  = %s'
-	cursor.execute(sqlQ1, (queue_id,))
-	cursor.execute(sqlQ2, (queue_id,))
-	cursor.execute(sqlQ3, (queue_id,))
-	connection.commit()
-
-def del_old_ebuild(connection, ebuild_old_list_db):
-	cursor = connection.cursor()
-	sqlQ1 = 'SELECT build_id FROM buildlog WHERE ebuild_id = %s'
-	sqlQ2 = 'DELETE FROM qa_problems WHERE build_id = %s'
-	sqlQ3 = 'DELETE FROM repoman_problems WHERE build_id = %s'
-	sqlQ4 = 'DELETE FROM ebuildbuildwithuses WHERE build_id = %s'
-	sqlQ5 = 'DELETE FROM ebuildhaveskeywords WHERE ebuild_id = %s'
-	sqlQ6 = 'DELETE FROM ebuildhavesiuses WHERE ebuild = %s'
-	sqlQ7 = 'DELETE FROM ebuildhavesrestrictions WHERE ebuild_id = %s'
-	sqlQ8 = 'DELETE FROM buildlog WHERE ebuild_id = %s'
-	sqlQ9 = 'SELECT queue_id FROM buildqueue WHERE ebuild_id = %s'
-	sqlQ10 = 'DELETE FROM ebuildqueuedwithuses WHERE queue_id = %s'
-	sqlQ11 = 'DELETE FROM buildqueue WHERE ebuild_id  = %s'
-	sqlQ12 = 'DELETE FROM ebuilds WHERE id  = %s'
-	for ebuild_id in ebuild_old_list_db:
-		cursor.execute(sqlQ1, (ebuild_id[0],))
-		build_id_list = cursor.fetchall()
-		if  build_id_list != []:
-			for build_id in build_id_list:
-				cursor.execute(sqlQ2, (build_id[0],))
-				cursor.execute(sqlQ3, (build_id[0],))
-				cursor.execute(sqlQ4, (build_id[0],))
-		cursor.execute(sqlQ9, (ebuild_id[0],))
-		queue_id_list = cursor.fetchall()
-		if queue_id_list != []:
-			for queue_id in queue_id_list:
-				cursor.execute(sqlQ10, (queue_id[0],))
-		cursor.execute(sqlQ5, (ebuild_id[0],))
-		cursor.execute(sqlQ6, (ebuild_id[0],))
-		cursor.execute(sqlQ7, (ebuild_id[0],))
-		cursor.execute(sqlQ8, (ebuild_id[0],))
-		cursor.execute(sqlQ11, (ebuild_id[0],))
-		cursor.execute(sqlQ12, (ebuild_id[0],))
-	connection.commit()
-  
-def del_old_package(connection, package_id_list):
-  cursor = connection.cursor()
-  sqlQ1 = 'SELECT id FROM ebuilds WHERE package_id = %s'
-  sqlQ2 = 'DELETE FROM ebuilds WHERE package_id = %s'
-  sqlQ3 = 'DELETE FROM manifest WHERE package_id = %s'
-  sqlQ4 = 'DELETE FROM packages_meta WHERE package_id = %s'
-  sqlQ5 = 'DELETE FROM packages WHERE package_id = %s'
-  for package_id in package_id_list:
-    cursor.execute(sqlQ1, package_id)
-    ebuild_id_list = cursor.fetchall()
-    del_old_ebuild(connection, ebuild_id_list)
-    cursor.execute(sqlQ2, (package_id,))
-    cursor.execute(sqlQ3, (package_id,))
-    cursor.execute(sqlQ4, (package_id,))
-    cursor.execute(sqlQ5, (package_id,))
-  connection.commit()
-
-def cp_list_db(connection, package_id):
-  cursor = connection.cursor()
-  sqlQ = "SELECT ebuild_version FROM ebuilds WHERE active = 'TRUE' AND package_id = %s"
-  cursor.execute(sqlQ, (package_id))
-  return cursor.fetchall()
-
-def cp_list_old_db(connection, package_id):
-  cursor = connection.cursor()
-  sqlQ ="SELECT id, ebuild_version FROM ebuilds WHERE active = 'FALSE' AND package_id = %s AND date_part('days', NOW() - timestamp) > 60"
-  cursor.execute(sqlQ, package_id)
-  return cursor.fetchall()
-
-def move_queru_buildlog(connection, queue_id, build_error, summary_error, build_log_dict):
-	cursor = connection.cursor()
-	repoman_error_list = build_log_dict['repoman_error_list']
-	qa_error_list = build_log_dict['qa_error_list']
-	sqlQ = 'SELECT make_buildlog( %s, %s, %s, %s, %s, %s)'
-	cursor.execute(sqlQ, (queue_id, summary_error, build_error, build_log_dict['logfilename'], qa_error_list, repoman_error_list))
-	entries = cursor.fetchone()
-	connection.commit()
-	return entries
-
-def add_new_buildlog(connection, build_dict, use_flags_list, use_enable_list, build_error, summary_error, build_log_dict):
-	cursor = connection.cursor()
-	repoman_error_list = build_log_dict['repoman_error_list']
-	qa_error_list = build_log_dict['qa_error_list']
-	if not use_flags_list:
-		use_flags_list=None
-		use_enable_list=None
-	sqlQ = 'SELECT make_deplog( %s, %s, %s, %s, %s, %s, %s, %s, %s)'
-	params = (build_dict['ebuild_id'], build_dict['config_profile'], use_flags_list, use_enable_list, summary_error, build_error, build_log_dict['logfilename'], qa_error_list, repoman_error_list)
-	cursor.execute(sqlQ, params)
-	entries = cursor.fetchone()
-	connection.commit()
-	if entries is None:
-		return None
-	# If entries is not None we need [0]
-	return entries[0]
-
-def add_qa_repoman(connection, ebuild_id_list, qa_error, packageDict, config_id):
-  cursor = connection.cursor()
-  ebuild_i = 0
-  for k, v in packageDict.iteritems():
-    ebuild_id = ebuild_id_list[ebuild_i]
-    sqlQ = "INSERT INTO buildlog (ebuild_id, config, error_summary, timestamp, hash ) VALUES  ( %s, %s, %s, now(), '1' ) RETURNING build_id"
-    if v['ebuild_error'] != [] or qa_error != []:
-      if v['ebuild_error'] != [] or qa_error == []:
-        summary = "Repoman"
-      elif v['ebuild_error'] == [] or qa_error != []:
-        summary = "QA"
-      else:
-        summary = "QA:Repoman"
-      params = (ebuild_id, config_id, summary)
-      cursor.execute(sqlQ, params)
-      build_id = cursor.fetchone()
-      if v['ebuild_error'] != []:
-        sqlQ = 'INSERT INTO repoman_problems (problem, build_id ) VALUES ( %s, %s )'
-        for x in v['ebuild_error']:
-          params = (x, build_id)
-          cursor.execute(sqlQ, params)
-      if qa_error != []:
-        sqlQ = 'INSERT INTO qa_problems (problem, build_id ) VALUES ( %s, %s )'
-        for x in qa_error:
-          params = (x, build_id)
-          cursor.execute(sqlQ, params)
-    ebuild_i = ebuild_i +1
-  connection.commit()
-
-def update_qa_repoman(connection, build_id, build_log_dict):
-	cursor = connection.cursor()
-	sqlQ1 = 'INSERT INTO repoman_problems (problem, build_id ) VALUES ( %s, %s )'
-	sqlQ2 = 'INSERT INTO qa_problems (problem, build_id ) VALUES ( %s, %s )'
-	if build_log_dict['repoman_error_list'] != []:
-		for x in build_log_dict['repoman_error_list']:
-			params = (x, build_id)
-			cursor.execute(sqlQ1, params)
-	if build_log_dict['qa_error_list'] != []:
-		for x in build_log_dict['qa_error_list']:
-			params = (x, build_id)
-			cursor.execute(sqlQ2, params)
-	connection.commit()
-
-def get_arch_db(connection):
-  cursor = connection.cursor()
-  sqlQ = "SELECT keyword FROM keywords WHERE keyword = 'amd64'"
-  cursor.execute(sqlQ)
-  return cursor.fetchone()
-
-def add_new_arch_db(connection, arch_list):
-  cursor = connection.cursor()
-  sqlQ = 'INSERT INTO keywords (keyword) VALUES  ( %s )'
-  for arch in arch_list:
-    cursor.execute(sqlQ, (arch,))
-  connection.commit()
-
-def update_fail_times(connection, fail_querue_dict):
-	cursor = connection.cursor()
-	sqlQ1 = 'UPDATE querue_retest SET fail_times = %s WHERE querue_id = %s AND fail_type = %s'
-	sqlQ2 = 'UPDATE buildqueue SET timestamp = NOW() WHERE queue_id = %s'
-	cursor.execute(sqlQ1, (fail_querue_dict['fail_times'], fail_querue_dict['querue_id'], fail_querue_dict['fail_type'],))
-	cursor.execute(sqlQ2, (fail_querue_dict['querue_id'],))
-	connection.commit()
-
-def get_fail_querue_dict(connection, build_dict):
-	cursor = connection.cursor()
-	fail_querue_dict = {}
-	sqlQ = 'SELECT fail_times FROM querue_retest WHERE querue_id = %s AND fail_type = %s'
-	cursor.execute(sqlQ, (build_dict['queue_id'], build_dict['type_fail'],))
-	entries = cursor.fetchone()
-	if entries is None:
-		return None
-	fail_querue_dict['fail_times'] = entries
-	return fail_querue_dict
-
-def add_fail_querue_dict(connection, fail_querue_dict):
-	cursor = connection.cursor()
-	sqlQ1 = 'INSERT INTO querue_retest (querue_id, fail_type, fail_times) VALUES ( %s, %s, %s)'
-	sqlQ2 = 'UPDATE buildqueue SET timestamp = NOW() WHERE queue_id = %s'
-	cursor.execute(sqlQ1, (fail_querue_dict['querue_id'],fail_querue_dict['fail_type'], fail_querue_dict['fail_times']))
-	cursor.execute(sqlQ2, (fail_querue_dict['querue_id'],))
-	connection.commit()
-
-def make_conf_error(connection,config_profile):
-  pass
-
-def check_job_list(connection, config_profile):
-	cursor = connection.cursor()
-	sqlQ1 = 'SELECT id_nr FROM configs WHERE id = %s'
-	sqlQ2 = "SELECT job, jobnr FROM jobs_list WHERE status = 'Waiting' AND config_id = %s"
-	cursor.execute(sqlQ1, (config_profile,))
-	config_nr = cursor.fetchone()
-	cursor.execute(sqlQ2, (config_nr,))
-	job = cursor.fetchone()
-	if job is None:
-		return None
-	return job
-	
-def update_job_list(connection, status, jobid):
-	cursor = connection.cursor()
-	sqlQ = 'UPDATE  jobs_list SET status = %s WHERE jobnr = %s'
-	cursor.execute(sqlQ, (status, jobid,))
-	connection.commit()
-
-def add_gobs_logs(connection, log_msg, log_type, config):
-	cursor = connection.cursor()
-	sqlQ = 'INSERT INTO logs (host, type, msg, time) VALUES ( %s, %s, %s, now())'
-	cursor.execute(sqlQ, (config, log_type, log_msg))
-	connection.commit()
-

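Every helper in the deleted pgsql.py followed the same shape that the remaining mysql_querys.py keeps: open a cursor, run one parameterized query, fetch, commit on writes. A generic sketch of that shape for any DB-API 2.0 connection; the SQL in the usage comment is only an example:

    # Generic one-value query helper in the style of the deleted module.
    def fetch_one_value(connection, sql, params):
        cursor = connection.cursor()
        cursor.execute(sql, params)          # parameterized, never interpolated
        row = cursor.fetchone()
        cursor.close()
        return row[0] if row is not None else None

    # Usage with any DB-API 2.0 connection (psycopg2, MySQLdb, ...):
    # repo_id = fetch_one_value(connection,
    #     'SELECT repo_id FROM repos WHERE repo = %s', ('gentoo',))
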
diff --git a/gobs/pym/text.py b/gobs/pym/text.py
index 3f7b040..8c4198b 100644
--- a/gobs/pym/text.py
+++ b/gobs/pym/text.py
@@ -37,7 +37,6 @@ def  get_ebuild_cvs_revision(filename):
 
 def  get_log_text_list(filename):
 	"""Return the log contents as a list"""
-	print("filename", filename)
 	try:
 		logfile = open(filename)
 	except:

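The text.py hunk only drops a stray debug print; the bare except around open() stays in place. A hedged sketch of a more defensive version of that helper:

    # Reading a build log into a list of lines; returns None when the
    # file cannot be opened, instead of swallowing every exception.
    import io

    def get_log_text_list(filename):
        try:
            with io.open(filename, 'r', errors='replace') as logfile:
                return logfile.readlines()
        except IOError:
            return None
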
diff --git a/gobs/pym/updatedb.py b/gobs/pym/updatedb.py
index cbf0dbc..7342bc3 100644
--- a/gobs/pym/updatedb.py
+++ b/gobs/pym/updatedb.py
@@ -1,7 +1,7 @@
 # Distributed under the terms of the GNU General Public License v2
 
 """ 	This code will update the sql backend with needed info for
-	the Frontend and the Guest daemon. """
+	the Frontend and the Guest daemons. """
 from __future__ import print_function
 import sys
 import os
@@ -40,6 +40,7 @@ def update_cpv_db_pool(mysettings, myportdb, cp, repo):
 	CM = connectionManager()
 	conn = CM.newConnection()
 	init_package = gobs_package(conn, mysettings, myportdb)
+
 	# split the cp to categories and package
 	element = cp.split('/')
 	categories = element[0]
@@ -51,12 +52,9 @@ def update_cpv_db_pool(mysettings, myportdb, cp, repo):
 	# Check if we have the cp in the package table
 	package_id = get_package_id(conn, categories, package, repo)
 	if package_id is None:  
-
 		# Add new package with ebuilds
 		init_package.add_new_package_db(cp, repo)
-
 	else:
-
 		# Update the packages with ebuilds
 		init_package.update_package_db(package_id)
 	conn.close()

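update_cpv_db_pool() above opens a fresh connection per worker, splits the cp, and either inserts the package or updates it. A compact runnable sketch of that dispatch, with the gobs pieces stubbed:

    # Add-or-update dispatch as in update_cpv_db_pool(); stubs stand in
    # for get_package_id() and gobs_package.
    def get_package_id(conn, categories, package, repo):    # stub helper
        return None                                         # cp not seen yet

    class GobsPackageStub(object):                          # stub helper
        def add_new_package_db(self, cp, repo):
            print("adding %s from %s" % (cp, repo))
        def update_package_db(self, package_id):
            print("updating package %s" % package_id)

    def update_cp(conn, init_package, cp, repo):
        categories, package = cp.split('/', 1)
        package_id = get_package_id(conn, categories, package, repo)
        if package_id is None:
            init_package.add_new_package_db(cp, repo)       # new package
        else:
            init_package.update_package_db(package_id)      # known package

    update_cp(None, GobsPackageStub(), 'dev-lang/python', 'gentoo')
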

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-04-24  0:37 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-04-24  0:37 UTC (permalink / raw
  To: gentoo-commits

commit:     4c7d06af34ec8900b51e2593dd3de455862a1e3d
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 24 00:35:47 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Wed Apr 24 00:35:47 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=4c7d06af

call do_depclean instead of main_depclean

---
 gobs/pym/build_job.py |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gobs/pym/build_job.py b/gobs/pym/build_job.py
index 6171ef0..0c51d50 100644
--- a/gobs/pym/build_job.py
+++ b/gobs/pym/build_job.py
@@ -7,7 +7,7 @@ import sys
 import signal
 
 from gobs.manifest import gobs_manifest
-from gobs.depclean import main_depclean
+from gobs.depclean import do_depclean
 from gobs.flags import gobs_use_flags
 from portage import _encodings
 from portage import _unicode_decode
@@ -107,7 +107,7 @@ class build_job_action(object):
 		build_fail = emerge_main(argscmd, build_dict)
 		# Run depclean
 		if  '--depclean' in build_dict['emerge_options'] and not '--nodepclean' in build_dict['emerge_options']:
-			depclean_fail = main_depclean()
+			depclean_fail = do_depclean()
 		try:
 			os.remove("/etc/portage/package.use/99_autounmask")
 			with open("/etc/portage/package.use/99_autounmask", "a") as f:

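The commit above only swaps main_depclean() for do_depclean(); the call stays gated on the job's emerge options. A small sketch of that gate, with do_depclean stubbed:

    # Option-gated depclean call, as in build_job_action above.
    def do_depclean():                       # stub for gobs.depclean.do_depclean
        return False                         # False means no failure

    def maybe_depclean(emerge_options):
        if '--depclean' in emerge_options and '--nodepclean' not in emerge_options:
            return do_depclean()
        return None                          # depclean skipped

    print(maybe_depclean(['--depclean']))                    # runs depclean
    print(maybe_depclean(['--depclean', '--nodepclean']))    # skipped
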

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [gentoo-commits] dev/zorry:master commit in: gobs/pym/
@ 2013-04-25  0:34 Magnus Granberg
  0 siblings, 0 replies; 174+ messages in thread
From: Magnus Granberg @ 2013-04-25  0:34 UTC (permalink / raw
  To: gentoo-commits

commit:     241b7f589fbb887d00fad47156776f000f448d88
Author:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 25 00:33:25 2013 +0000
Commit:     Magnus Granberg <zorry <AT> gentoo <DOT> org>
CommitDate: Thu Apr 25 00:33:25 2013 +0000
URL:        http://git.overlays.gentoo.org/gitweb/?p=dev/zorry.git;a=commit;h=241b7f58

To support error summary list

---
 gobs/pym/build_log.py    |   15 +++++++++++----
 gobs/pym/mysql_querys.py |   22 ++++++++++++++++++++--
 2 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/gobs/pym/build_log.py b/gobs/pym/build_log.py
index c3fe244..0c170e1 100644
--- a/gobs/pym/build_log.py
+++ b/gobs/pym/build_log.py
@@ -19,7 +19,7 @@ from gobs.flags import gobs_use_flags
 from gobs.ConnectionManager import connectionManager
 from gobs.mysql_querys import add_gobs_logs, get_config_id, get_ebuild_id_db_checksum, add_new_buildlog, \
 	update_manifest_sql, get_package_id, get_build_job_id, get_use_id, get_fail_querue_dict, \
-	add_fail_querue_dict, update_fail_times, get_config, get_hilight_info
+	add_fail_querue_dict, update_fail_times, get_config, get_hilight_info, get_error_info_list
 
 def get_build_dict_db(conn, config_id, settings, pkg):
 	myportdb = portage.portdbapi(mysettings=settings)
@@ -99,6 +99,7 @@ def search_buildlog(conn, logfile_text):
 					i = index + 1
 					while hilight_tmp['endline'] == None:
 						if re.search(search_pattern['hilight_search_end'], logfile_text[i -1]):
+							# FIXME: We can't run the check once we have reached the end of logfile_text.
 							if re.search(search_pattern['hilight_search_end'], logfile_text[i]):
 								i = i + 1
 							else:
@@ -157,7 +158,7 @@ def get_buildlog_info(conn, settings, pkg, build_dict):
 	qa_error_list = []
 	repoman_error_list = []
 	sum_build_log_list = []
-	
+	error_info_list = get_error_info_list(conn)
 	for k, v in sorted(hilight_dict.iteritems()):
 		if v['startline'] == v['endline']:
 			error_log_list.append(logfile_text[k -1])
@@ -174,9 +175,15 @@ def get_buildlog_info(conn, settings, pkg, build_dict):
 	# Run repoman check_repoman()
 	repoman_error_list = init_repoman.check_repoman(build_dict['cpv'], pkg.repo)
 	if repoman_error_list != []:
-		sum_build_log_list.append("repoman")
+		sum_build_log_list.append("1") # repoman = 1
 	if qa_error_list != []:
-		sum_build_log_list.append("qa")
+		sum_build_log_list.append("2") # qa = 2
+	for sum_log_line in list(sum_build_log_list):
+		if re.search('^ \\* ERROR: ', sum_log_line):
+			for error_info in error_info_list:
+				if re.search(error_info['error_search'], sum_log_line):
+					sum_build_log_list.append(error_info['error_id'])
+
 	build_log_dict['repoman_error_list'] = repoman_error_list
 	build_log_dict['qa_error_list'] = qa_error_list
 	build_log_dict['error_log_list'] = error_log_list
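A note on the new classification loop above: it iterates over sum_build_log_list while appending matched error ids to the same list, so every appended id is itself rescanned against the patterns. Collecting the ids into a separate list avoids that. A minimal sketch under the same data shapes (classify_errors is a hypothetical name; the error_info_list rows come from get_error_info_list() below):

import re

def classify_errors(log_lines, error_info_list):
	# Each error_info row carries 'error_id' and 'error_search' (a regex string).
	matched_ids = []
	for line in log_lines:
		# Only classify portage error summary lines, e.g. " * ERROR: ..."
		if not re.search('^ \\* ERROR: ', line):
			continue
		for error_info in error_info_list:
			if re.search(error_info['error_search'], line):
				matched_ids.append(error_info['error_id'])
	return matched_ids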

diff --git a/gobs/pym/mysql_querys.py b/gobs/pym/mysql_querys.py
index 942cc88..dfd423a 100644
--- a/gobs/pym/mysql_querys.py
+++ b/gobs/pym/mysql_querys.py
@@ -547,6 +547,21 @@ def get_hilight_info(connection):
 		hilight.append(aadict)
 	return hilight
 
+def get_error_info_list(connection):
+	cursor = connection.cursor()
+	sqlQ = 'SELECT error_id, error_name, error_search FROM error'
+	cursor.execute(sqlQ)
+	entries = cursor.fetchall()
+	cursor.close()
+	error_info = []
+	for i in entries:
+		aadict = {}
+		aadict['error_id'] = i[0]
+		aadict['error_name'] = i[1]
+		aadict['error_search'] = i[2]
+		error_info.append(aadict)
+	return error_info
+
 def add_new_buildlog(connection, build_dict, build_log_dict):
 	cursor = connection.cursor()
 	sqlQ1 = 'SELECT build_log_id FROM build_logs WHERE ebuild_id = %s'
@@ -560,6 +575,7 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 	sqlQ10 = "UPDATE build_logs SET summery_text = %s, log_hash = %s WHERE build_log_id = %s"
 	sqlQ11 = 'SELECT LAST_INSERT_ID()'
 	sqlQ12 = 'INSERT INTO build_logs_hilight (build_log_id, start_line, end_line, hilight_css) VALUES (%s, %s, %s, %s)'
+	sqlQ13 = 'INSERT INTO build_logs_errors ( build_log_id, error_id) VALUES (%s, %s)'
 	build_log_id_list = []
 	cursor.execute(sqlQ1, (build_dict['ebuild_id'],))
 	entries = cursor.fetchall()
@@ -571,7 +587,7 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 	
 	def add_new_hilight(build_log_id, build_log_dict):
 		for k, hilight_tmp in sorted(build_log_dict['hilight_dict'].iteritems()):
-			cursor.execute(sqlQ13, (build_log_id,hilight_tmp['startline'],  hilight_tmp['endline'], hilight_tmp['hilight'],))
+			cursor.execute(sqlQ12, (build_log_id,hilight_tmp['startline'],  hilight_tmp['endline'], hilight_tmp['hilight'],))
 
 	def build_log_id_match(build_log_id_list, build_dict, build_log_dict):
 		for build_log_id in build_log_id_list:
@@ -600,8 +616,10 @@ def add_new_buildlog(connection, build_dict, build_log_dict):
 		cursor.execute(sqlQ2, (build_dict['ebuild_id'],))
 		cursor.execute(sqlQ11)
 		build_log_id = cursor.fetchone()[0]
-		if 'True' in build_log_dict['summary_error_list']:
+		if build_log_dict['summary_error_list'] != []:
 			cursor.execute(sqlQ3, (build_log_id,))
+			for error in build_log_dict['summary_error_list']:
+				cursor.execute(sqlQ13, (build_log_id, error,))
 		cursor.execute(sqlQ10, (build_log_dict['build_error'], build_log_dict['log_hash'], build_log_id,))
 		add_new_hilight(build_log_id, build_log_dict)
 		cursor.execute(sqlQ4, (build_log_id, build_dict['config_id'], build_log_dict['logfilename'],))
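Taken together, the flow is: get_error_info_list() loads the pattern table once per build log, build_log.py maps pattern hits to error ids in summary_error_list, and add_new_buildlog() persists one build_logs_errors row per id via sqlQ13. A condensed sketch of that last step, assuming a MySQLdb-style DB-API connection (store_error_ids is a hypothetical name, not in the repo):

def store_error_ids(connection, build_log_id, summary_error_list):
	# Same statement as sqlQ13 above: one row per matched error id.
	sqlQ = 'INSERT INTO build_logs_errors (build_log_id, error_id) VALUES (%s, %s)'
	cursor = connection.cursor()
	for error_id in summary_error_list:
		cursor.execute(sqlQ, (build_log_id, error_id,))
	cursor.close()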


