[entity] ensure the .related(entities=False) parameter is honored all the way down (closes #2755994)
Until now, such a call would always fill the relation cache by
calling .entities() on every single entry of the related rset.
As a consequence, handling of the `limit` parameter also had to be
fixed. It was broken in two ways:
* it was not used in the related_rql, hence potentially huge database
  requests, and
* it was only applied in the .entities()-calling cache routine that we
  now bypass (the main topic of this changeset's ticket)
Now:
* we set the limit on the rql expression itself, and
* we forbid caching when a non-None limit is given, as we don't want
  to make the cache handling code more complicated than it already is
  (see the sketch below)
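A minimal sketch of that caching rule, using made-up names (this is not
the actual cubicweb code, only the shape of the guard):

_relation_cache = {}

def cached_related(cache_key, limit, fetch):
    # fetch(limit) stands for building and executing the related RQL;
    # only unlimited results may populate the relation cache, a
    # limited result is returned as-is and never stored
    if limit is not None:
        return fetch(limit)
    if cache_key not in _relation_cache:
        _relation_cache[cache_key] = fetch(None)
    return _relation_cache[cache_key]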
With this, entity.unrelated also gets a better limit implementation,
so the code in related/unrelated is nicely symmetric.
Risk:
* _cw_relation_cache disappears completely, which is good, but this is
Python, so you never know ...
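For illustration, the call shapes discussed above (the relation name is
made up and the exact signature may differ):

# entities=False: we want the related rset itself, without
# instantiating every entity just to populate the relation cache
rset = entity.related('documented_by', entities=False)
# a non-None limit now ends up in the generated RQL, and the result
# is deliberately not cached
rset = entity.related('documented_by', entities=False, limit=10)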
#!/usr/bin/python
"""usage: fix-po-encodings [filename...]
change the encoding of the po files passed as arguments to utf-8
"""
import sys
import re
import codecs
def change_encoding(filename, target='UTF-8'):
    """Re-encode `filename` in place to the `target` charset."""
    fdesc = open(filename)
    data = fdesc.read()
    fdesc.close()
    encoding = find_encoding(data)
    if encoding == target:
        # nothing to do, the file is already in the target encoding
        return
    # rewrite the charset declaration, then decode with the original
    # encoding so the content can be written back in the target one
    data = fix_encoding(data, target)
    data = unicode(data, encoding)
    fdesc = codecs.open(filename, 'wb', encoding=target)
    fdesc.write(data)
    fdesc.close()

def find_encoding(data):
    """Return the charset declared in the po file's Content-Type header."""
    regexp = re.compile(r'"Content-Type:.* charset=([a-zA-Z0-9-]+)\\n"', re.M)
    mo = regexp.search(data)
    if mo is None:
        raise ValueError('No encoding declaration')
    return mo.group(1)

def fix_encoding(data, target_encoding):
    """Replace the declared charset with `target_encoding`."""
    regexp = re.compile(r'("Content-Type:.* charset=)(.*)(\\n")', re.M)
    return regexp.sub(r'\1%s\3' % target_encoding, data)

for filename in sys.argv[1:]:
    print filename
    change_encoding(filename)
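For reference, this is the kind of header line the two regexes operate
on; the charset value is only an example, and the .po file stores a
literal backslash-n, which is why the patterns double the backslash:

sample = '"Content-Type: text/plain; charset=ISO-8859-1\\n"'
print find_encoding(sample)           # -> ISO-8859-1
print fix_encoding(sample, 'UTF-8')   # header rewritten with charset=UTF-8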