Searching for Markovian Particles

We consider the classical problem of searching for a particle moving according to a Markov chain. In particular, we consider both discrete and continuous search effort under two distinct objective functions: maximizing the probability of finding the particle within T time steps, called the probability objective, and minimizing the expected time to find the particle, called the expectation objective. We show that for discrete search effort and the probability objective, the myopically greedy policy gives a 1/2-approximation. This contrasts with our second result, which shows that for discrete effort and the expectation objective it is NP-hard to find a policy that approximates the optimum within any sublinear polynomial factor. On the other hand, we show that continuous search effort problems are considerably easier. We show that the continuous search effort, probability objective problem can be solved exactly in polynomial time via convex optimization. Finally, we show that the continuous search effort, expectation objective problem can be approximated to within any additive constant epsilon in polynomial time.
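The myopically greedy policy for the discrete-effort, probability objective can be illustrated with a short sketch: at each step, search the cell with the highest current belief, condition the belief on the search failing, and then propagate it through the Markov chain. The function name and interface below are hypothetical, not from the paper; `P` is assumed to be a row-stochastic transition matrix and `belief` an initial distribution over cells.

```python
import numpy as np

def myopic_greedy_search(P, belief, T):
    """Sketch of a myopically greedy search policy (hypothetical interface).

    At each of T steps, search the cell with the highest posterior belief,
    accumulate the probability of detection, apply the Bayesian update for
    a failed search, and let the particle take one Markov transition.
    Returns the probability of finding the particle within T steps.
    """
    P = np.asarray(P, dtype=float)
    belief = np.asarray(belief, dtype=float)
    p_find = 0.0   # total probability of detection so far
    p_alive = 1.0  # probability the particle is still undetected
    for _ in range(T):
        i = int(np.argmax(belief))      # greedy: search the most likely cell
        p_find += p_alive * belief[i]   # chance detection happens this step
        p_alive *= 1.0 - belief[i]
        belief[i] = 0.0                 # condition on the search failing
        s = belief.sum()
        if s == 0.0:                    # particle was certainly in cell i
            break
        belief /= s                     # renormalize the posterior
        belief = belief @ P             # particle moves one Markov step
    return p_find
```

For example, on a two-cell chain where the particle jumps uniformly each step, the greedy policy detects it with probability 1 - 2^(-T). The 1/2-approximation guarantee from the paper says the detection probability of this policy is at least half that of the optimal T-step search plan.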


© Copyright 2004-2020 - Ned Dimitrov