### 3.7.81. DFS-bottleneck

reformulation DFS-bottleneck

A constraint on a graph for which a depth-first search based procedure is normally required for checking whether a ground instance is satisfied or not, e.g., a connectivity constraint. The reformulation of such a constraint as a conjunction of other constraints is usually not easy. A possibility, when each node has a single successor in the ground case, is to use an $\mathrm{element}$ constraint to express the link between a node and its successor, at the price of using a large number of $\mathrm{element}$ constraints (e.g., see the Reformulation slot of the $\mathrm{cycle}$ constraint).
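As a minimal sketch (plain Python, illustrative names, not the catalog's actual reformulation): checking a ground instance of such a constraint, e.g., that successor values form one cycle through all nodes, amounts to walking the successor links, which is the DFS-style procedure the entry refers to; the reformulation instead posts one $\mathrm{element}$-style link constraint per node.

```python
def is_single_cycle(succ):
    """Ground check for a cycle-like constraint: do the successor
    values succ[0..n-1] form a single cycle visiting all n nodes?
    This walk is the search-based check the reformulation avoids."""
    n = len(succ)
    seen = set()
    node = 0
    for _ in range(n):
        if node in seen or not 0 <= node < n:
            return False          # revisit or out-of-range successor
        seen.add(node)
        node = succ[node]
    return node == 0 and len(seen) == n  # walk closes after n steps

# The reformulation posts, for each node i, a link between i and its
# successor; here we only illustrate element's ground semantics:
# element(index, table, value) holds iff table[index] == value.
def element(index, table, value):
    return table[index] == value
```

Note that one such link constraint is needed per node, which is exactly the "large number of $\mathrm{element}$ constraints" trade-off mentioned above.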

filtering DFS-bottleneck

A constraint for which a depth-first search based algorithm usually constitutes a bottleneck of its filtering algorithm. This is a pity, especially on dense graphs, where most of the invocations of the filtering algorithm do not bring any new deductions.

A common implementation trick relies on the fact that, quite often on dense graphs, a depth-first search develops a path (rather than a tree) visiting all vertices, such that one can directly reach (i.e., with a single arc) the first node of the path from the last one (i.e., we have a single strongly connected component). In this context the trick is to stop the depth-first search as soon as the last node of the path is reached, in order to avoid scanning through all remaining arcs of the graph. When this is the case, the complexity of the DFS goes from $O\left(m\right)$ down to $O\left(n\right)$, where $n$ is the number of vertices and $m$ is the number of arcs of the graph.

Motivated by this fact, randomised filtering algorithms were introduced in [Katriel06] and [KatrielVanHentenryck06] in the context of the $\mathrm{tree}$ and $\mathrm{proper\_forest}$ constraints. A second approach is to come up with a probabilistic analysis for predicting whether triggering a given filtering algorithm can produce new deductions. This was done for the bound-consistency algorithm of the $\mathrm{alldifferent}$ constraint in
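The early-exit trick described above can be sketched as follows (plain Python, illustrative names): extend a simple path depth-first as long as possible; if it happens to visit all $n$ vertices and a single arc closes it back to the start, the graph is certified strongly connected after examining $O(n)$ arcs, and only otherwise does the sketch fall back to a full $O(m)$ reachability check.

```python
def strongly_connected(adj):
    """adj[v] = list of successors of v, vertices 0..n-1.
    Returns True iff the digraph is one strongly connected component."""
    n = len(adj)
    # Optimistic O(n) pass: grow a path (not a tree) depth-first.
    seen = [False] * n
    path = [0]
    seen[0] = True
    while len(path) < n:
        nxt = next((w for w in adj[path[-1]] if not seen[w]), None)
        if nxt is None:
            break  # the DFS would have to branch: trick not applicable
        seen[nxt] = True
        path.append(nxt)
    if len(path) == n and 0 in adj[path[-1]]:
        return True  # path through all vertices + closing arc: stop early

    # Fallback: full check via forward and backward reachability from 0.
    def reach(g):
        stack, vis = [0], {0}
        while stack:
            v = stack.pop()
            for w in g[v]:
                if w not in vis:
                    vis.add(w)
                    stack.append(w)
        return vis

    rev = [[] for _ in range(n)]
    for v in range(n):
        for w in adj[v]:
            rev[w].append(v)
    return len(reach(adj)) == n and len(reach(rev)) == n
```

On dense graphs the optimistic pass very often succeeds, which is precisely when skipping the remaining arcs pays off; the fallback keeps the sketch correct when the DFS branches.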