The Huberman-Hogg model of computational ecosystems is applied to resources with queues. Previous theoretical results indicate that instabilities due to delayed information can be controlled by adaptive mechanisms, particularly schemes that employ diverse past horizons. A stochastic learning automaton, with rewards based on queuing parameters, is implemented to test these theoretical results. The effects of the learning step size and horizon are shown for systems with various delays and traffic intensities. The instabilities are controlled with appropriate choices of parameters and reward mechanism. Long horizons permit nonadaptive agents to achieve similar results, at the possible cost of responsiveness to dynamic environments.
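As a rough illustration of the kind of agent described above, the following is a minimal sketch of a stochastic learning automaton using a linear reward-inaction update, with a reward signal derived from (delayed) queue-length observations. The function name, step size, and reward rule are hypothetical choices for illustration, not the paper's exact scheme.

```python
import random

def lri_update(probs, chosen, reward, step=0.1):
    """Linear reward-inaction update: on a reward, shift probability
    mass toward the chosen action; on no reward, leave probs unchanged."""
    if reward:
        for i in range(len(probs)):
            if i == chosen:
                probs[i] += step * (1.0 - probs[i])
            else:
                probs[i] -= step * probs[i]
    return probs

# An agent picks between two resources; it is rewarded when the chosen
# queue was no longer than the other (using stale, delayed observations).
random.seed(0)
probs = [0.5, 0.5]            # action probabilities for the two resources
queues = [3, 7]               # delayed queue-length observations
for _ in range(50):
    choice = 0 if random.random() < probs[0] else 1
    reward = queues[choice] <= queues[1 - choice]
    probs = lri_update(probs, choice, reward)
```

With a fixed environment the automaton concentrates probability on the shorter queue; the learning step size controls how quickly it does so, which is the parameter varied in the experiments summarized above.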