[repo] optimize massive insertion/deletion by using the new set_operation function

The idea is that on massive insertion, the cost of handling the list of
operations becomes non-negligible, so we should minimize the number of
operations in that list.

The set_operation function eases associating an operation with data stored in
session.transaction_data: the operation is added only when the data set isn't
initialized yet, otherwise the new data is simply added to the existing set.
The operation then processes all the accumulated data at once.
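
For reference, here is a minimal sketch of the pattern described above. It is
not the exact cubicweb implementation: the Operation stand-in, the 'neweids'
key and the ProcessNewEidsOp class are illustrative assumptions.

    class Operation(object):
        """Stand-in for a queued operation; the real framework calls
        precommit_event() on each pending operation at commit time."""
        def __init__(self, session, **kwargs):
            self.session = session
            self.__dict__.update(kwargs)
            session.pending_operations.append(self)

    def set_operation(session, datakey, value, opcls, **opkwargs):
        """Accumulate `value` under `datakey` in session.transaction_data,
        instantiating `opcls` only for the first value so that a single
        operation ends up handling the whole accumulated set."""
        try:
            # data set already initialized: just accumulate
            session.transaction_data[datakey].add(value)
        except KeyError:
            # first value for this key: queue one operation ...
            opcls(session, **opkwargs)
            # ... and initialize the set it will consume
            session.transaction_data[datakey] = set((value,))

    class ProcessNewEidsOp(Operation):
        """Hypothetical operation processing all accumulated eids at once."""
        def precommit_event(self):
            for eid in self.session.transaction_data.pop('neweids', ()):
                pass  # process each accumulated eid exactly once

A hook fired once per inserted entity would then call
set_operation(session, 'neweids', entity.eid, ProcessNewEidsOp), queuing a
single operation no matter how many entities are inserted.
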
#!/bin/sh -e
### BEGIN INIT INFO
# Provides: cubicweb
# Required-Start: $syslog $local_fs $network
# Required-Stop: $syslog $local_fs $network
# Should-Start: $postgresql $pyro-nsd
# Should-Stop: $postgresql $pyro-nsd
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start cubicweb application at boot time
### END INIT INFO
# FIXME The cd to /tmp below seems inadequate here
# FIXME If related to pyro, try instead:
# export PYRO_STORAGE="/tmp"
cd /tmp
# FIXME Work-around for the following lintian error:
# E: cubicweb-ctl: init.d-script-does-not-implement-required-option /etc/init.d/cubicweb start
#
# Check whether we are sure we do not want the start-stop-daemon machinery here.
# Refer to Debian Policy Manual section 9.3.2 (Writing the scripts) for details.
case "$1" in
    force-reload)
        # translate the LSB force-reload action into a forced reload
        /usr/bin/cubicweb-ctl reload --force
        ;;
    status)
        /usr/bin/cubicweb-ctl status
        ;;
    *)
        # pass any other action (start, stop, restart, ...) straight through
        /usr/bin/cubicweb-ctl "$1" --force
        ;;
esac