misc/scripts/ldap_change_base_dn.py
author Julien Cristau <julien.cristau@logilab.fr>
Fri, 24 Jul 2015 09:57:08 +0200
changeset 10644 c43e5dc41f8b
parent 9460 a2a0bc984863
child 10589 7c23b7de2b8d
permissions -rw-r--r--
[devtools] add has_cache for postgres (closes #5739624)

devtools stores information about existing databases in the db handler, but in the case of postgresql that does not take the path to the cluster's datadir into account. As a result, if we run two test modules in the same test run, we create a "__default_empty_db__" for the first one, cache its existence, and then, when moving on to the second module, believe the template already exists (but since the datadir depends on the test module's path, it does not). This patch is a bit of a kludge; it would be better to make the cache key include enough data to not need this, but I'm not sure how to do that.
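The changeset message hints at the alternative it considers "better": making the cache key carry the cluster datadir as well as the database name. A minimal sketch of that idea, using hypothetical names rather than the actual devtools API:

    # hypothetical sketch, not the real devtools db handler
    class TemplateDbCache(object):
        """Remember which template databases already exist, keyed on
        (datadir, dbname) so that test modules using different
        postgresql datadirs never share an entry."""

        def __init__(self):
            self._known = set()

        def has_cache(self, datadir, dbname):
            return (datadir, dbname) in self._known

        def set_cache(self, datadir, dbname):
            self._known.add((datadir, dbname))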

import sys
from base64 import b64decode, b64encode

# run through: cubicweb-ctl shell <instance> ldap_change_base_dn.py -- <uri> <new dn>
# the cubicweb shell provides __args__, repo, sql() and commit() as globals
try:
    uri, newdn = __args__
except ValueError:
    print 'USAGE: cubicweb-ctl shell <instance> ldap_change_base_dn.py -- <ldap source uri> <new dn>'
    print
    print 'you should not have updated your sources file yet'
    sys.exit(1)

olddn = repo.sources_by_uri[uri].config['user-base-dn']

assert olddn != newdn, 'old and new base dn are identical'

raw_input("Ensure you've stopped the instance, type enter when done.")

# extids for the ldap source are the base64-encoded DNs of the users;
# rewrite each one, substituting the new base dn for the old one
for eid, extid in sql("SELECT eid, extid FROM entities WHERE source='%s'" % uri):
    olduserdn = b64decode(extid)
    newuserdn = olduserdn.replace(olddn, newdn)
    if newuserdn != olduserdn:
        print olduserdn, '->', newuserdn
        sql("UPDATE entities SET extid='%s' WHERE eid=%s" % (b64encode(newuserdn), eid))

commit()

print 'you can now update the sources file to the new dn and restart the instance'
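
# Illustration only (hypothetical DN values): since extids in the entities
# table are base64-encoded LDAP DNs, the rewrite above boils down to:
#
#   >>> from base64 import b64decode, b64encode
#   >>> extid = b64encode('uid=jdoe,ou=People,dc=old,dc=example')
#   >>> b64decode(extid).replace('dc=old,dc=example', 'dc=new,dc=example')
#   'uid=jdoe,ou=People,dc=new,dc=example'
#   >>> b64encode('uid=jdoe,ou=People,dc=new,dc=example')  # value stored back into extid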