This work is concerned with Markov chains on a finite state space. It is supposed that a state-dependent cost is incurred at each transition, and that the evolution of the system is observed by an agent with positive and constant risk-sensitivity. For a general transition matrix, the problem of approximating the risk-sensitive average criterion in terms of the risk-sensitive discounted index is studied. It is proved that, as the discount factor increases to 1, an appropriate normalization of the discounted value functions converges to the average cost, extending recent results derived under the assumption that the state space is communicating.