This study considers the problem of synthesizing a fixed-time depth observer based on monocular image feedback. The key challenge is that the perspective dynamical system used to model visual motion is typically only weakly persistently exciting, which complicates observer synthesis. Moreover, achieving the task objective (e.g., safe obstacle avoidance) requires a depth observer whose estimate of the (static) obstacle depth converges rapidly to the ground truth within a known fixed time. To address these challenges, and in contrast with prior schemes that rely on a motion-restrictive persistency of excitation (PE) condition to ensure exponential convergence, a novel adaptive observer framework is developed that incorporates a concurrent learning (CL) term to ensure fixed-time observer convergence. In particular, concurrent learning permits a relaxed finite-time excitation condition based on historical data recorded over a dynamic sliding window in the recent past, which the proposed observer uses to guarantee fixed-time convergence. Accordingly, a continuous-time reduced-order observer formulation is presented that uses camera motion data to achieve fixed-time convergence to a uniform ultimate bound for a suitably large choice of observer gains. Experimental results demonstrate the efficacy of the proposed scheme in the presence of significant measurement noise, and a performance comparison study shows superior performance relative to leading alternative designs. Finally, the practical applicability of the scheme is verified by incorporating the proposed observer within a reactive navigation scheme to accomplish obstacle avoidance.
By incorporating a suitably informative CL term within the observer framework, the proposed scheme eliminates the need for a difficult-to-verify PE condition, rendering it better suited to practical applications such as visual target tracking and visual servo control.