A random walk with internal states is nothing other than a random walk directed by a Markov chain. The probability P_x that such a random walk, started from a point x ∈ ℤ with 0 < x < L, hits the level L before hitting the level 0 behaves like x/L + O(1/L), as in the case of a simple (½, ½) random walk. In this work we give explicit formulae for the remainder term up to the order O(1/L²). A similar result can be obtained for the expectation E_x of the time until the first hitting: E_x = x(L² − x²)/(3L) + O(L).
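As a sanity check on the simple-walk baseline (a minimal sketch, not part of the paper's argument): for the simple ½, ½ walk on {0, …, L}, the probability p(x) of hitting L before 0 solves the harmonic difference equation p(x) = (p(x−1) + p(x+1))/2 with p(0) = 0, p(L) = 1, and the expectation g(x) of the hitting time restricted to that event solves g(x) = (g(x−1) + g(x+1))/2 + p(x) with g(0) = g(L) = 0. Reading the stated leading terms as p(x) = x/L and g(x) = x(L² − x²)/(3L) is our interpretation; the snippet below verifies that these closed forms satisfy the difference equations exactly.

```python
# Verify the simple-walk closed forms behind the leading terms x/L and
# x(L^2 - x^2)/(3L). The identification of E_x with the restricted
# expectation E_x[tau; hit L before 0] is an assumption of this sketch.

L = 50

def p(x):
    # Candidate probability of hitting L before 0, started from x.
    return x / L

def g(x):
    # Candidate expectation of the hitting time on the event {hit L before 0}.
    return x * (L**2 - x**2) / (3 * L)

# Boundary conditions.
assert p(0) == 0.0 and p(L) == 1.0
assert g(0) == 0.0 and g(L) == 0.0

# Interior difference equations (one-step conditioning on the first move).
for x in range(1, L):
    assert abs(p(x) - (p(x - 1) + p(x + 1)) / 2) < 1e-9
    assert abs(g(x) - ((g(x - 1) + g(x + 1)) / 2 + p(x))) < 1e-9

print("closed forms verified for L =", L)
```

The one-step equation for g follows from writing the hitting time as 1 plus the hitting time after the first step, so the indicator of the target event contributes the extra term p(x).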