We characterize the structure of optimal policies for maintenance and replacement actions over a finite horizon. The context is machine monitoring in which: 1) the machine operates in one of N states, S ∈ {0, 1, ..., N-1}, where 0 is good, states 1 to N-2 represent increasing levels of deterioration (the system becomes increasingly worn but can still produce usable parts), and N-1 is bad (failed); 2) observations are related only probabilistically to the state of the process; and 3) the machine's state is known with certainty only immediately after a replacement. The last assumption, consistent with silent failures, distinguishes our results from others. Using the theory and results of partially observable Markov decision processes (POMDP), we prove that the policy minimizing the total expected cost of system maintenance has a marginally monotonic structure. Marginal monotonicity, which requires a component-wise partial ordering and monotonicity of the expected total-cost function with respect to each component of the state space, allows us to characterize the cost-minimizing policy and to represent it by a collection of decision rules characterized by at most N functions.
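The model can be illustrated with a minimal sketch of the belief dynamics it implies. The transition matrix, observation matrix, and N = 3 state count below are hypothetical placeholders, not taken from the paper; the sketch shows only the Bayes belief update under probabilistic observations (assumption 2) and the belief reset at replacement (assumption 3), not the paper's finite-horizon optimal policy.

```python
import numpy as np

# Hypothetical 3-state machine: 0 = good, 1 = worn, 2 = failed (N = 3).
# Deterioration only progresses, so the transition matrix is upper triangular.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

# Observations relate only probabilistically to the state (assumption 2):
# row s gives P(observation | state s) for observations {0: ok, 1: warning}.
Q = np.array([[0.95, 0.05],
              [0.40, 0.60],
              [0.05, 0.95]])

def belief_update(b, obs):
    """Bayes filter: propagate the belief one period through P,
    then condition on the received observation."""
    b_pred = b @ P                  # prior after one period of deterioration
    unnorm = b_pred * Q[:, obs]     # weight by observation likelihood
    return unnorm / unnorm.sum()

def replace(b):
    """Replacement reveals the state with certainty (assumption 3):
    the machine returns to the good state 0."""
    return np.array([1.0, 0.0, 0.0])
```

Starting from certainty in state 0, a "warning" observation shifts belief mass toward the worn states; the optimal policy of the paper is a mapping from such beliefs to actions, and a replacement collapses the belief back to certainty in state 0.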