 


MASCOTTE Seminar
Locating a target with an agent guided by unreliable local advice

by Nicolas Nisse


Date: 15/03/11
Time: 10:30
Location: Galois Coriolis


We study the problem of finding a destination node t by a mobile agent in an unreliable network having the structure of an unweighted graph, in a model first proposed by Hanusse et al. Each node of the network is able to give advice concerning the next node to visit so as to get closer to the target t. Unfortunately, exactly k of the nodes, called liars, give incorrect advice. It is known that for an n-node graph G of maximum degree Delta >= 3, reaching a target at distance d from the initial location may require an expected time of 2^{Omega(min{d,k})}, for any d,k = O(log n), even when G is a tree.

This paper focuses on strategies which efficiently solve the search problem in scenarios in which, at each node, the agent may only choose between following the local advice or selecting an incident edge at random. The strategy which we put forward, called R/A, makes use of a timer (step counter) to alternate between phases of ignoring advice (R) and following advice (A) for a certain number of steps. No knowledge of the parameters n, d, or k is required, and the agent need not know by which edge it entered the node of its current location. The performance of this strategy is studied for two classes of regular graphs with extremal values of expansion, namely rings and random Delta-regular graphs (an important class of expanders). For the ring, R/A is shown to achieve an expected searching time of 2d + k^{Theta(1)} for a worst-case distribution of liars, which is polynomial in both d and k. For random Delta-regular graphs, the expected searching time of the R/A strategy is O(k^3 log^3 n) a.a.s. The polylogarithmic factor with respect to n cannot be dropped from this bound; in fact, we show that a lower time bound of Omega(log n) steps holds for all d,k = Omega(log log n) in random Delta-regular graphs a.a.s., and applies even to strategies which make use of some knowledge of the environment.

Finally, we study oblivious strategies which do not use any memory (in particular, with no timer). Such strategies are essentially a form of random walk, possibly biased by the local advice. We show that such biased random walks sometimes achieve drastically worse performance than the R/A strategy. In particular, on the ring, no biased random walk can have a searching time which is polynomial in d and k.
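
The abstract describes the R/A strategy only at a high level: a step counter alternating between phases of random moves and advice-following. The sketch below is a minimal illustration of such an alternating strategy in Python, assuming a dictionary-based graph representation; the doubling phase schedule, the name ra_search, and the toy ring setup are illustrative assumptions, not details taken from the paper.

import random

def ra_search(graph, advice, start, target, max_steps=10**6):
    """Illustrative sketch of an R/A-style search (assumed details).

    graph:  dict mapping each node to a list of incident neighbours
    advice: dict mapping each node to the neighbour it recommends
            (liar nodes may recommend a neighbour leading away from target)
    The agent alternates between an R phase (random neighbour) and an
    A phase (follow advice), driven only by a step counter; the doubling
    phase lengths used here are an assumption for illustration only.
    """
    node = start
    steps = 0
    phase_len = 1
    while steps < max_steps:  # budget checked at phase boundaries
        # R phase: ignore advice, move to a uniformly random neighbour
        for _ in range(phase_len):
            if node == target:
                return steps
            node = random.choice(graph[node])
            steps += 1
        # A phase: follow the (possibly lying) local advice
        for _ in range(phase_len):
            if node == target:
                return steps
            node = advice[node]
            steps += 1
        phase_len *= 2  # assumed schedule; the paper's timer may differ
    return None  # target not reached within the step budget

if __name__ == "__main__":
    # Toy example (assumed setup): a ring of 16 nodes, target at node 0.
    n = 16
    ring = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
    # Honest advice points along the shorter arc towards node 0;
    # node 5 is made a liar that points away from the target.
    adv = {v: (v - 1) % n if v <= n // 2 else (v + 1) % n for v in range(n)}
    adv[5] = 6
    print(ra_search(ring, adv, start=8, target=0))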

