improve massive deletion performance
change the hooks.integrity._DelayedDeleteOp implementation so that it processes
entities in chunks of a reasonable size (500 entities at a time).
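
The chunking itself can stay very simple; below is a minimal sketch, assuming the
operation accumulates the eids to delete and flushes them at commit time (the
pending_eids/etype attributes and the RQL query shown in the comment are
illustrative assumptions, not the actual _DelayedDeleteOp internals).

CHUNK_SIZE = 500

def chunked(iterable, size=CHUNK_SIZE):
    """Yield successive lists of at most `size` items from `iterable`."""
    chunk = []
    for item in iterable:
        chunk.append(item)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

# Flushing the delayed deletions then becomes, schematically:
#     for eids in chunked(self.pending_eids):
#         session.execute('DELETE %s X WHERE X eid IN (%s)'
#                         % (self.etype, ','.join(str(e) for e in eids)))
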
adapt ssplanner.DeleteEntitiesStep to call a variant of glob_delete_entity that
takes several entities. That variant calls all the before_delete_entities hooks
in one go, then performs the deletion, and finally calls all the
after_delete_entities hooks. The deletion is performed by grouping entities by
etype and by source.
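
The overall shape of that bulk variant could look like the sketch below; the
hooks-manager accessor, the cw_etype/cw_source attributes, the event names and
the per-source delete_entities call are assumptions standing in for the real
cubicweb internals, only the ordering follows the description above.

from collections import defaultdict

def glob_delete_entities(session, entities):
    """Sketch of a bulk glob_delete_entity: hooks once before, grouped
    deletion, hooks once after (names below are assumptions)."""
    entities = list(entities)
    hooksmanager = session.repo.hm                      # assumed accessor
    # call all the before_delete_entities hooks in one go
    hooksmanager.call_hooks('before_delete_entity', session,
                            entities=entities)          # event name assumed
    # group entities by etype and by source, then delete each group at once
    groups = defaultdict(list)
    for entity in entities:
        groups[(entity.cw_etype, entity.cw_source)].append(entity)
    for (etype, source), batch in groups.items():
        source.delete_entities(session, batch)          # hypothetical bulk API
    # finally call all the after_delete_entities hooks
    hooksmanager.call_hooks('after_delete_entity', session,
                            entities=entities)          # event name assumed
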
adapt the HooksManager to call the hooks on a list of entities instead of on a single entity.
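
Schematically, the dispatch change amounts to calling each registered hook once
per event with the whole list instead of once per entity; the toy registry below
only illustrates that calling convention and is not the real HooksManager.

class HooksManagerSketch:
    """Toy hooks registry: one call per event with the full entity list."""

    def __init__(self):
        self._hooks = {}                    # event name -> list of callables

    def register(self, event, hook):
        self._hooks.setdefault(event, []).append(hook)

    def call_hooks(self, event, session, entities=()):
        # each hook now receives the whole list of entities at once
        for hook in self._hooks.get(event, ()):
            hook(session, entities=entities)
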
adapt the sources to be able to delete several entities of the same etype at once.
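
For an SQL-backed source this turns one DELETE statement per entity into a
single statement per batch; the cw_<etype>/cw_eid naming and the doexec helper
in the sketch are assumptions about the source implementation.

def delete_entities(self, session, entities):
    """Sketch: delete a batch of entities sharing the same etype with one
    SQL statement (table/column naming and doexec are assumptions)."""
    etype = entities[0].cw_etype        # callers group per etype beforehand
    eids = ','.join(str(entity.eid) for entity in entities)
    self.doexec(session,
                'DELETE FROM cw_%s WHERE cw_eid IN (%s)'
                % (etype.lower(), eids))
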
change the source fti_(un)index_entity methods to fti_(un)index_entities, which take a collection of entities.
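
The full-text-index side follows the same singular-to-plural move; in the
sketch below the cursor accessor and the dbhelper call are hypothetical
placeholders for whatever the source actually uses.

def fti_unindex_entities(self, session, entities):
    """Sketch: drop the full-text index entries for a whole collection of
    entities in one pass instead of one call per entity."""
    cursor = self.get_system_cursor(session)        # hypothetical accessor
    for entity in entities:
        # assumed indexer call; the real API may differ
        self.dbhelper.cursor_unindex_object(entity.eid, cursor)
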
#!/bin/sh -e
### BEGIN INIT INFO
# Provides: cubicweb
# Required-Start: $syslog $local_fs $network
# Required-Stop: $syslog $local_fs $network
# Should-Start: $postgresql $pyro-nsd
# Should-Stop: $postgresql $pyro-nsd
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start cubicweb application at boot time
### END INIT INFO
# FIXME: the cd to /tmp below seems inadequate here
# FIXME: if it is only needed for Pyro, try instead:
# export PYRO_STORAGE="/tmp"
cd /tmp
# FIXME: work-around for the following lintian error:
# E: cubicweb-ctl: init.d-script-does-not-implement-required-option /etc/init.d/cubicweb start
#
# Check whether we are sure we do not want the start-stop-daemon machinery here.
# Refer to the Debian Policy Manual, section 9.3.2 (Writing the scripts), for details.
case "$1" in
    force-reload)
        python -W ignore /usr/bin/cubicweb-ctl reload --force
        ;;
    status)
        python -W ignore /usr/bin/cubicweb-ctl status
        ;;
    *)
        # start, stop, restart and other actions are passed straight through
        python -W ignore /usr/bin/cubicweb-ctl "$1" --force
        ;;
esac