# HG changeset patch
# User Sylvain Thénault
# Date 1475249957 -7200
# Node ID 39df042f4ab4c23f6ac5248554aa80b7944290c6
# Parent  7518cb58ab4c47979188ac6744e46173f60c4958
[repository] Drop type_and_source_from_eid and rename related cache

We no longer want to handle the entities.asource column, so we should stop
using these methods. Also rename the repository's _type_source_cache into
_type_extid_cache, since that is what it now contains, and apply the same
renaming to the system source API.

Related to #15538288
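For reference, a minimal usage sketch of the renamed API after this change
(assuming an existing repository `repo`, a server-side connection `cnx`
obtained from it, and some entity eid `eid`; everything except the renamed
methods below is a placeholder):

    # eid -> (etype, extid); replaces type_and_source_from_eid, which also
    # returned the source uri
    etype, extid = repo.type_and_extid_from_eid(eid, cnx)
    # the connection-level helper now only exposes the type and extid as well
    assert cnx.entity_metas(eid) == {'type': etype, 'extid': extid}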
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/dataimport/test/test_stores.py
--- a/cubicweb/dataimport/test/test_stores.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/dataimport/test/test_stores.py	Fri Sep 30 17:39:17 2016 +0200
@@ -50,7 +50,6 @@
         self.assertEqual(user.created_by[0].eid, cnx.user.eid)
         self.assertEqual(user.owned_by[0].eid, cnx.user.eid)
         self.assertEqual(user.cw_source[0].name, self.source_name)
-        self.assertEqual(cnx.describe(user.eid), ('CWUser', self.source_name, self.user_extid))
         groups = cnx.execute('CWGroup X WHERE U in_group X, U login "lgn"')
         self.assertEqual(group_eid, groups.one().eid)
         # Check data update
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/devtools/__init__.py
--- a/cubicweb/devtools/__init__.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/devtools/__init__.py	Fri Sep 30 17:39:17 2016 +0200
@@ -126,7 +126,7 @@
     if repo._needs_refresh:
         for cnxset in repo.cnxsets:
             cnxset.reconnect()
-        repo._type_source_cache = {}
+        repo._type_extid_cache = {}
         repo.querier._rql_cache = {}
         repo.system_source.reset_caches()
         repo._needs_refresh = False
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/devtools/repotest.py
--- a/cubicweb/devtools/repotest.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/devtools/repotest.py	Fri Sep 30 17:39:17 2016 +0200
@@ -200,7 +200,7 @@
         self._access = RepoAccess(self.repo, 'admin', FakeRequest)
         self.ueid = self.session.user.eid
         assert self.ueid != -1
-        self.repo._type_source_cache = {} # clear cache
+        self.repo._type_extid_cache = {} # clear cache
         self.maxeid = self.get_max_eid()
         do_monkey_patch()
         self._dumb_sessions = []
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/hooks/metadata.py
--- a/cubicweb/hooks/metadata.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/hooks/metadata.py	Fri Sep 30 17:39:17 2016 +0200
@@ -157,15 +157,13 @@
 # entity source handling #######################################################
 
 class ChangeEntitySourceUpdateCaches(hook.Operation):
-    oldsource = newsource = entity = None # make pylint happy
+    oldsource = entity = None # make pylint happy
 
     def postcommit_event(self):
         self.oldsource.reset_caches()
         repo = self.cnx.repo
         entity = self.entity
-        extid = entity.cw_metainformation()['extid']
-        repo._type_source_cache[entity.eid] = (
-            entity.cw_etype, None, self.newsource.uri)
+        repo._type_extid_cache[entity.eid] = (entity.cw_etype, None)
 
 
 class ChangeEntitySourceDeleteHook(MetaDataHook):
@@ -205,5 +203,4 @@
         self._cw.system_sql(syssource.sqlgen.update('entities', attrs, ['eid']), attrs)
         # register an operation to update repository/sources caches
         ChangeEntitySourceUpdateCaches(self._cw, entity=entity,
-                                       oldsource=oldsource.repo_source,
-                                       newsource=syssource)
+                                       oldsource=oldsource.repo_source)
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/hooks/syncschema.py
--- a/cubicweb/hooks/syncschema.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/hooks/syncschema.py	Fri Sep 30 17:39:17 2016 +0200
@@ -311,9 +311,9 @@
         self.info('renamed table %s to %s', oldname, newname)
         sqlexec('UPDATE entities SET type=%(newname)s WHERE type=%(oldname)s',
                 {'newname': newname, 'oldname': oldname})
-        for eid, (etype, extid, auri) in cnx.repo._type_source_cache.items():
+        for eid, (etype, extid) in cnx.repo._type_extid_cache.items():
             if etype == oldname:
-                cnx.repo._type_source_cache[eid] = (newname, extid, auri)
+                cnx.repo._type_extid_cache[eid] = (newname, extid)
         # recreate the indexes
         for rschema in eschema.subject_relations():
             if rschema.inlined or (rschema.final and eschema.rdef(rschema.type).indexed):
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/hooks/syncsources.py
--- a/cubicweb/hooks/syncsources.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/hooks/syncsources.py	Fri Sep 30 17:39:17 2016 +0200
@@ -104,7 +104,6 @@
         source.uri = self.newname
         source.public_config['uri'] = self.newname
         repo.sources_by_uri[self.newname] = source
-        repo._type_source_cache.clear()
         clear_cache(repo, 'source_defs')
 
 
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/misc/migration/bootstrapmigration_repository.py
--- a/cubicweb/misc/migration/bootstrapmigration_repository.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/misc/migration/bootstrapmigration_repository.py	Fri Sep 30 17:39:17 2016 +0200
@@ -70,10 +70,6 @@
     dbhelper.change_col_type(cursor, 'entities', 'asource', attrtype, False)
     dbhelper.change_col_type(cursor, 'entities', 'source', attrtype, False)
-    # we now have a functional asource column, start using the normal eid_type_source method
-    if repo.system_source.eid_type_source == repo.system_source.eid_type_source_pre_131:
-        del repo.system_source.eid_type_source
-
 
 if applcubicwebversion < (3, 19, 0) and cubicwebversion >= (3, 19, 0):
     try:
         # need explicit drop of the indexes on some database systems (sqlserver)
@@ -339,21 +335,6 @@
 
     drop_relation_definition('CWUniqueTogetherConstraint', 'relations', 'CWAttribute')
     drop_relation_definition('CWUniqueTogetherConstraint', 'relations', 'CWRelation')
-
-if applcubicwebversion < (3, 4, 0) and cubicwebversion >= (3, 4, 0):
-
-    with hooks_control(session, session.HOOKS_ALLOW_ALL, 'integrity'):
-        session.set_shared_data('do-not-insert-cwuri', True)
-        add_relation_type('cwuri')
-        base_url = session.base_url()
-        for eid, in rql('Any X', ask_confirm=False):
-            type, source, extid = session.describe(eid)
-            if source == 'system':
-                rql('SET X cwuri %(u)s WHERE X eid %(x)s',
-                    {'x': eid, 'u': u'%s%s' % (base_url, eid)})
-        isession.commit()
-        session.set_shared_data('do-not-insert-cwuri', False)
-
 if applcubicwebversion < (3, 5, 0) and cubicwebversion >= (3, 5, 0):
     # check that migration is not doomed
     rset = rql('Any X,Y WHERE X transition_of E, Y transition_of E, '
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/req.py
--- a/cubicweb/req.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/req.py	Fri Sep 30 17:39:17 2016 +0200
@@ -500,9 +500,3 @@
         """
         url = self._base_url(secure=secure)
         return url if url is None else url.rstrip('/') + '/'
-
-    # abstract methods to override according to the web front-end #############
-
-    def describe(self, eid, asdict=False):
-        """return a tuple (type, sourceuri, extid) for the entity with id <eid>"""
-        raise NotImplementedError
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/checkintegrity.py
--- a/cubicweb/server/checkintegrity.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/checkintegrity.py	Fri Sep 30 17:39:17 2016 +0200
@@ -337,7 +337,6 @@
         for entity in cnx.execute(rql).entities():
             sys.stderr.write(msg % (entity.cw_etype, entity.eid, role, rschema))
             if fix:
-                #if entity.cw_describe()['source']['uri'] == 'system': XXX
                 entity.cw_delete() # XXX this is BRUTAL!
             notify_fixed(fix)
 
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/repository.py
--- a/cubicweb/server/repository.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/repository.py	Fri Sep 30 17:39:17 2016 +0200
@@ -180,8 +180,8 @@
         self.sources_by_uri = {'system': self.system_source}
         # querier helper, need to be created after sources initialization
         self.querier = querier.QuerierHelper(self, self.schema)
-        # cache eid -> (type, extid, actual source)
-        self._type_source_cache = {}
+        # cache eid -> (type, extid)
+        self._type_extid_cache = {}
         # cache extid -> eid
         # open some connection sets
         if config.init_cnxset_pool:
@@ -711,39 +711,37 @@
         return session
 
     # data sources handling ###################################################
-    # * correspondance between eid and (type, source)
+    # * correspondance between eid and type
     # * correspondance between eid and local id (i.e. specific to a given source)
 
-    def type_and_source_from_eid(self, eid, cnx):
-        """return a tuple `(type, extid, actual source uri)` for the entity of
-        the given `eid`
-        """
-        try:
-            eid = int(eid)
-        except ValueError:
-            raise UnknownEid(eid)
-        try:
-            return self._type_source_cache[eid]
-        except KeyError:
-            etype, extid, auri = self.system_source.eid_type_source(cnx, eid)
-            self._type_source_cache[eid] = (etype, extid, auri)
-            return etype, extid, auri
-
     def clear_caches(self, eids):
-        etcache = self._type_source_cache
+        etcache = self._type_extid_cache
         rqlcache = self.querier._rql_cache
         for eid in eids:
             try:
-                etype, extid, auri = etcache.pop(int(eid))  # may be a string in some cases
+                etype, extid = etcache.pop(int(eid))  # may be a string in some cases
                 rqlcache.pop(('%s X WHERE X eid %s' % (etype, eid),), None)
             except KeyError:
                 etype = None
                 rqlcache.pop(('Any X WHERE X eid %s' % eid,), None)
             self.system_source.clear_eid_cache(eid, etype)
 
+    def type_and_extid_from_eid(self, eid, cnx):
+        """Return the type and extid of the entity with id `eid`."""
+        try:
+            eid = int(eid)
+        except ValueError:
+            raise UnknownEid(eid)
+        try:
+            return self._type_extid_cache[eid]
+        except KeyError:
+            etype, extid = self.system_source.eid_type_extid(cnx, eid)
+            self._type_extid_cache[eid] = (etype, extid)
+            return etype, extid
+
     def type_from_eid(self, eid, cnx):
-        """return the type of the entity with id <eid>"""
-        return self.type_and_source_from_eid(eid, cnx)[0]
+        """Return the type of the entity with id `eid`"""
+        return self.type_and_extid_from_eid(eid, cnx)[0]
 
     def querier_cache_key(self, cnx, rql, args, eidkeys):
         cachekey = [rql]
@@ -815,7 +813,7 @@
             extid = None
         else:
             extid = source.get_extid(entity)
-        self._type_source_cache[entity.eid] = (entity.cw_etype, extid, source.uri)
+        self._type_extid_cache[entity.eid] = (entity.cw_etype, extid)
         return extid
 
     def glob_add_entity(self, cnx, edited):
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/schemaserial.py
--- a/cubicweb/server/schemaserial.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/schemaserial.py	Fri Sep 30 17:39:17 2016 +0200
@@ -1,4 +1,4 @@
-# copyright 2003-2013 LOGILAB S.A. (Paris, FRANCE), all rights reserved.
+# copyright 2003-2016 LOGILAB S.A. (Paris, FRANCE), all rights reserved.
 # contact http://www.logilab.fr/ -- mailto:contact@logilab.fr
 #
 # This file is part of CubicWeb.
@@ -149,7 +149,7 @@
                     {'x': etype, 'n': netype})
             cnx.commit(False)
             tocleanup = [eid]
-            tocleanup += (eid for eid, cached in repo._type_source_cache.items()
+            tocleanup += (eid for eid, cached in repo._type_extid_cache.items()
                           if etype == cached[0])
             repo.clear_caches(tocleanup)
             cnx.commit(False)
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/session.py
--- a/cubicweb/server/session.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/session.py	Fri Sep 30 17:39:17 2016 +0200
@@ -779,22 +779,11 @@
     def source_defs(self):
         return self.repo.source_defs()
 
-    @deprecated('[3.19] use .entity_metas(eid) instead')
-    @_open_only
-    def describe(self, eid, asdict=False):
-        """return a tuple (type, sourceuri, extid) for the entity with id <eid>"""
-        etype, extid, source = self.repo.type_and_source_from_eid(eid, self)
-        metas = {'type': etype, 'source': source, 'extid': extid}
-        if asdict:
-            metas['asource'] = metas['source'] # XXX pre 3.19 client compat
-            return metas
-        return etype, source, extid
-
     @_open_only
     def entity_metas(self, eid):
-        """return a tuple (type, sourceuri, extid) for the entity with id <eid>"""
-        etype, extid, source = self.repo.type_and_source_from_eid(eid, self)
-        return {'type': etype, 'source': source, 'extid': extid}
+        """Return a dictionary {type, extid} for the entity with id `eid`."""
+        etype, extid = self.repo.type_and_extid_from_eid(eid, self)
+        return {'type': etype, 'extid': extid}
 
     # core method #############################################################
 
@@ -952,10 +941,8 @@
 
     @_open_only
     def rtype_eids_rdef(self, rtype, eidfrom, eidto):
-        # use type_and_source_from_eid instead of type_from_eid for optimization
-        # (avoid two extra methods call)
-        subjtype = self.repo.type_and_source_from_eid(eidfrom, self)[0]
-        objtype = self.repo.type_and_source_from_eid(eidto, self)[0]
+        subjtype = self.repo.type_from_eid(eidfrom, self)
+        objtype = self.repo.type_from_eid(eidto, self)
         return self.vreg.schema.rschema(rtype).rdefs[(subjtype, objtype)]
 
 
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/sources/__init__.py
--- a/cubicweb/server/sources/__init__.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/sources/__init__.py	Fri Sep 30 17:39:17 2016 +0200
@@ -342,8 +342,8 @@
 
     # system source interface #################################################
 
-    def eid_type_source(self, cnx, eid):
-        """return a tuple (type, extid, source) for the entity with id <eid>"""
+    def eid_type_extid(self, cnx, eid):
+        """return a tuple (type, extid) for the entity with id <eid>"""
         raise NotImplementedError(self)
 
     def create_eid(self, cnx):
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/sources/native.py
--- a/cubicweb/server/sources/native.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/sources/native.py	Fri Sep 30 17:39:17 2016 +0200
@@ -441,12 +441,6 @@
         self.open_source_connections()
 
     def init(self, activated, source_entity):
-        try:
-            # test if 'asource' column exists
-            query = self.dbhelper.sql_add_limit_offset('SELECT asource FROM entities', 1)
-            source_entity._cw.system_sql(query)
-        except Exception:
-            self.eid_type_source = self.eid_type_source_pre_131
         super(NativeSQLSource, self).init(activated, source_entity)
         self.init_creating(source_entity._cw.cnxset)
 
@@ -823,34 +817,20 @@
 
     # system source interface #################################################
 
-    def _eid_type_source(self, cnx, eid, sql):
+    def eid_type_extid(self, cnx, eid): # pylint: disable=E0202
+        """return a tuple (type, extid) for the entity with id <eid>"""
+        sql = 'SELECT type, extid FROM entities WHERE eid=%s' % eid
         try:
             res = self.doexec(cnx, sql).fetchone()
             if res is not None:
+                if not isinstance(res, list):
+                    res = list(res)
+                res[-1] = self.decode_extid(res[-1])
                 return res
         except Exception:
             self.exception('failed to query entities table for eid %s', eid)
         raise UnknownEid(eid)
 
-    def eid_type_source(self, cnx, eid): # pylint: disable=E0202
-        """return a tuple (type, extid, source) for the entity with id <eid>"""
-        sql = 'SELECT type, extid, asource FROM entities WHERE eid=%s' % eid
-        res = self._eid_type_source(cnx, eid, sql)
-        if not isinstance(res, list):
-            res = list(res)
-        res[-2] = self.decode_extid(res[-2])
-        return res
-
-    def eid_type_source_pre_131(self, cnx, eid):
-        """return a tuple (type, extid, source) for the entity with id <eid>"""
-        sql = 'SELECT type, extid FROM entities WHERE eid=%s' % eid
-        res = self._eid_type_source(cnx, eid, sql)
-        if not isinstance(res, list):
-            res = list(res)
-        res[-1] = self.decode_extid(res[-1])
-        res.append("system")
-        return res
-
     def _handle_is_relation_sql(self, cnx, sql, attrs):
         """ Handler for specific is_relation sql that may be
         overwritten in some stores"""
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/test/unittest_datafeed.py
--- a/cubicweb/server/test/unittest_datafeed.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/test/unittest_datafeed.py	Fri Sep 30 17:39:17 2016 +0200
@@ -89,10 +89,10 @@
                               'extid': b'http://www.cubicweb.org/'}
                              )
             # test repo cache keys
-            self.assertEqual(self.repo._type_source_cache[entity.eid],
-                             ('Card', b'http://www.cubicweb.org/', u'ô myfeed'))
-            self.assertEqual(self.repo._type_source_cache[entity.eid],
-                             ('Card', b'http://www.cubicweb.org/', u'ô myfeed'))
+            self.assertEqual(self.repo._type_extid_cache[entity.eid],
+                             ('Card', b'http://www.cubicweb.org/'))
+            self.assertEqual(self.repo._type_extid_cache[entity.eid],
+                             ('Card', b'http://www.cubicweb.org/'))
             self.assertTrue(dfsource.latest_retrieval)
             self.assertTrue(dfsource.fresh())
 
@@ -108,8 +108,8 @@
                              {'type': 'Card',
                               'extid': b'http://www.cubicweb.org/'}
                              )
-            self.assertEqual(self.repo._type_source_cache[entity.eid],
-                             ('Card', b'http://www.cubicweb.org/', 'myrenamedfeed'))
+            self.assertEqual(self.repo._type_extid_cache[entity.eid],
+                             ('Card', b'http://www.cubicweb.org/'))
 
             # test_delete_source
             cnx.execute('DELETE CWSource S WHERE S name "myrenamedfeed"')
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/test/unittest_migractions.py
--- a/cubicweb/server/test/unittest_migractions.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/test/unittest_migractions.py	Fri Sep 30 17:39:17 2016 +0200
@@ -351,7 +351,7 @@
     def test_rename_entity_type(self):
         with self.mh() as (cnx, mh):
             entity = mh.create_entity('Old', name=u'old')
-            self.repo.type_and_source_from_eid(entity.eid, entity._cw)
+            self.repo.type_from_eid(entity.eid, entity._cw)
             mh.cmd_rename_entity_type('Old', 'New')
             dbh = self.repo.system_source.dbhelper
             indices = set(dbh.list_indices(cnx.cnxset.cu, 'cw_New'))
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/server/test/unittest_repository.py
--- a/cubicweb/server/test/unittest_repository.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/server/test/unittest_repository.py	Fri Sep 30 17:39:17 2016 +0200
@@ -197,8 +197,6 @@
         repo = self.repo
         session = repo.new_session(self.admlogin, password=self.admpassword)
         with session.new_cnx() as cnx:
-            self.assertEqual(repo.type_and_source_from_eid(2, cnx),
-                             ('CWGroup', None, 'system'))
             self.assertEqual(repo.type_from_eid(2, cnx), 'CWGroup')
         session.close()
 
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/sobjects/services.py
--- a/cubicweb/sobjects/services.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/sobjects/services.py	Fri Sep 30 17:39:17 2016 +0200
@@ -51,7 +51,7 @@
         results['%s_cache_hit' % title] = hits
         results['%s_cache_miss' % title] = misses
         results['%s_cache_hit_percent' % title] = (hits * 100) / (hits + misses)
-        results['type_source_cache_size'] = len(repo._type_source_cache)
+        results['type_extid_cache_size'] = len(repo._type_extid_cache)
         results['sql_no_cache'] = repo.system_source.no_cache
         results['nb_open_sessions'] = len(repo._sessions)
         results['nb_active_threads'] = threading.activeCount()
diff -r 7518cb58ab4c -r 39df042f4ab4 cubicweb/web/views/debug.py
--- a/cubicweb/web/views/debug.py	Fri Sep 30 17:34:11 2016 +0200
+++ b/cubicweb/web/views/debug.py	Fri Sep 30 17:39:17 2016 +0200
@@ -94,7 +94,7 @@
         stats['looping_tasks'] = ', '.join('%s (%s seconds)' % (n, i) for n, i in stats['looping_tasks'])
         stats['threads'] = ', '.join(sorted(stats['threads']))
         for k in stats:
-            if k == 'type_source_cache_size':
+            if k == 'type_extid_cache_size':
                 continue
             if k.endswith('_cache_size'):
                 stats[k] = '%s / %s' % (stats[k]['size'], stats[k]['maxsize'])