Autonomous underwater vehicles can be valuable acoustic sensing platforms due to their maneuverability, low cost, and sensor-driven adaptivity compared to ships. However, the logistics of integrating acoustics research into artificially intelligent systems can be daunting. In this work, the autonomy software provides an abstract representation of the acoustic environment (e.g., sea surface, water column, and sea floor parameters, source and receiver positions), which it updates continuously from local and remote sensor data. Upon request, this environment is translated into the native representation of an acoustic model, which returns the requested calculation (e.g., transmission loss, travel times). The model is set up as a server, capable of handling requests from multiple autonomy subsystems at once (e.g., target tracking prediction, acoustic communications optimization). Thus, the acoustic model and the autonomy software remain ignorant of each other's implementation specifics. Results will be presented from the shallow-water GLINT10 experiment, in which a vehicle adaptively adjusted its depth to track the minimum of the modeled transmission loss from a buoy source. Furthermore, a deep-sea simulation study combining target tracking and acoustic communications was conducted. Both studies used the MOOS-IvP autonomy software and the BELLHOP ray-tracing code.
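The decoupling described above can be sketched in miniature: the autonomy side maintains an abstract environment object, and a model "server" maps named requests onto calculations without either side knowing the other's internals. All names, and the placeholder physics (spherical spreading, straight-line travel time standing in for a BELLHOP run), are illustrative assumptions, not the actual MOOS-IvP or BELLHOP interface.

```python
import math
from dataclasses import dataclass

# Hypothetical abstract environment representation, updated by the
# autonomy software from sensor data. Field names are assumptions.
@dataclass
class Environment:
    sound_speed_mps: float = 1500.0   # assumed isovelocity water column
    source_range_m: float = 1000.0
    # ... sea surface, water column, and sea floor parameters would live here

def transmission_loss_db(env: Environment) -> float:
    # Spherical-spreading placeholder, TL = 20 log10(r); a real server
    # would translate the environment into BELLHOP's native input and
    # run the ray-tracing code instead.
    return 20.0 * math.log10(env.source_range_m)

def travel_time_s(env: Environment) -> float:
    # Straight-line travel-time placeholder for the same reason.
    return env.source_range_m / env.sound_speed_mps

# Server-style dispatch table: each autonomy subsystem (target tracking
# prediction, comms optimization) issues a named request and never sees
# the model's implementation specifics.
HANDLERS = {
    "transmission_loss": transmission_loss_db,
    "travel_time": travel_time_s,
}

def serve(request: str, env: Environment) -> float:
    return HANDLERS[request](env)
```

In this sketch, swapping BELLHOP for another model only changes the handler bodies; the autonomy subsystems keep issuing the same named requests, which is the point of the server architecture.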