In this paper, we propose and examine an adaptive method for controlling a continuous-time linear stochastic system whose unobserved parameters constitute a finite-state jump-Markov process (a Markov process evolving on a finite set); such a system is also called a linear hybrid system. It is assumed that the system is time-invariant and that its states are completely observed. The adaptive controller is closely related to results previously obtained by Caines & Chen [2, 3]. Compared with other proposed solutions to this problem, our control method improves system performance while remaining practically computable: the parameters are estimated from observations of the output by applying the optimal nonlinear filter first introduced by Wonham [22]. A class of adaptive state feedback laws, depending on the nonlinear filter output, is proposed, and a Lyapunov function argument shows that, under certain conditions, the resulting system is stochastically stable for any finite initial probability distribution. In addition, it is proved that with any (stochastically) stabilizing adaptive state feedback, the system is weakly controllable (accessible) from any initial condition. For a homogeneous diffusion process, stochastic stability implies the existence of an invariant probability distribution for the process, which in general may depend on the initial condition. Moreover, it is proved that weak controllability yields ergodicity of the process for every initial condition. In this manner, any infinite-horizon cost function may be replaced with an expectation with respect to the invariant distribution, which greatly reduces the effort required to analyze the system's performance.
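The overall scheme — a Wonham filter estimating the posterior probabilities of the hidden Markov mode, driving a certainty-equivalence adaptive state feedback — can be sketched numerically. The following is a minimal illustrative simulation for a scalar two-mode system, not the paper's exact algorithm: the drift values, generator, feedback gain, and Euler discretization of both the dynamics and the filter are assumptions made for illustration.

```python
# Illustrative sketch (assumed values throughout): scalar hybrid system
#   dx = a(theta) x dt + u dt + sigma dW,   theta in {0, 1} a Markov chain,
# with an Euler-discretized Wonham filter and certainty-equivalence feedback.
import math
import random

random.seed(0)
dt, sigma = 0.01, 0.5
a = [1.0, -0.5]                  # mode-dependent drifts (unknown to controller)
Q = [[-0.2, 0.2],                # generator of the hidden parameter chain
     [0.3, -0.3]]

x, theta = 1.0, 0                # observed state and hidden mode
p = [0.5, 0.5]                   # filter posterior over the two modes

for _ in range(5000):
    # adaptive feedback driven by the filter output:
    # cancel the estimated drift and add a stability margin
    a_hat = p[0] * a[0] + p[1] * a[1]
    u = -(a_hat + 1.0) * x

    # simulate the hidden jump-Markov parameter (Euler approximation)
    if random.random() < -Q[theta][theta] * dt:
        theta = 1 - theta

    # simulate the state increment
    dx = (a[theta] * x + u) * dt + sigma * random.gauss(0.0, math.sqrt(dt))

    # Wonham filter, Euler-discretized: prediction through the generator,
    # then Bayes correction with the Gaussian likelihood of the increment dx
    pred = [p[0] + (Q[0][0] * p[0] + Q[1][0] * p[1]) * dt,
            p[1] + (Q[0][1] * p[0] + Q[1][1] * p[1]) * dt]
    lik = [math.exp(-(dx - (a[i] * x + u) * dt) ** 2
                    / (2 * sigma ** 2 * dt)) for i in range(2)]
    w = [pred[0] * lik[0], pred[1] * lik[1]]
    s = w[0] + w[1]
    p = [w[0] / s, w[1] / s]

    x += dx
```

Under the stabilizing feedback, the closed-loop state remains near the origin even while the hidden mode jumps; the filter posterior `p` stays a valid probability vector at every step, mirroring the role the nonlinear filter plays in the stability argument above.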