A Markovian Decision Process (MDP) is considered in which neither the state nor the associated cost can be observed at any observation point. It is shown that for a particular class of MDPs with uncountable state space and finite action space, the Howard Policy Improvement Routine (HPIR) cannot be used to find an optimal policy. Some immediate consequences of this model are presented.
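For reference, the HPIR is the classical policy iteration scheme, which alternates exact policy evaluation with greedy one-step improvement. The following is a minimal sketch for a finite, fully observed, discounted-cost MDP (the array names `P`, `c` and the discount factor `beta` are illustrative assumptions, not the paper's notation); the paper's setting, with uncountable states and unobservable states and costs, is precisely where this routine is shown to fail.

```python
import numpy as np

def howard_policy_iteration(P, c, beta=0.9):
    """Sketch of the Howard Policy Improvement Routine for a
    finite discounted-cost MDP (illustrative, not the paper's model).

    P    : (A, S, S) array, P[a, s, t] = transition probability s -> t under action a
    c    : (S, A) array, c[s, a] = immediate cost of action a in state s
    beta : discount factor in (0, 1)
    """
    S, A = c.shape
    policy = np.zeros(S, dtype=int)            # start from an arbitrary stationary policy
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = c_pi exactly.
        # This step presupposes that states and costs are observable.
        P_pi = P[policy, np.arange(S), :]      # (S, S) transitions under the current policy
        c_pi = c[np.arange(S), policy]         # (S,) costs under the current policy
        v = np.linalg.solve(np.eye(S) - beta * P_pi, c_pi)
        # Policy improvement: one-step greedy lookahead at every state.
        q = c + beta * np.einsum('ast,t->sa', P, v)   # (S, A) action values
        new_policy = q.argmin(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v                   # no strict improvement: policy is optimal
        policy = new_policy
```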