As self-driving technology advances, there is enormous potential to optimize fully autonomous vehicles (FAVs) for use by people who are blind and visually impaired (BVI). Today, BVI users often rely on ridesharing services for daily travel, which presents both challenges and opportunities for researchers interested in the accessible design of FAVs. The parallels between current BVI travel experiences in rideshares and predictions that FAV services will adopt rideshare models present an enticing opportunity to use ridesharing as a proxy for understanding BVI needs in future FAV transportation. However, a key challenge is identifying the extent to which FAVs should be designed to provide the same assistance that human drivers currently provide for BVI travelers in rideshares. To address this issue, ridesharing users with visual impairments (n = 187) in the United States completed a survey instrument designed to assess and compare desires for interactions, information, and assistance between human-operated and fully autonomous rideshare vehicles, as well as the modality of information delivery (auditory and/or haptic). Results indicate strong support for access to environmental information (e.g., spatial information about the destination) and contextual information (e.g., progress along the route) throughout trips with automated vehicles via natural-language interactions. Although results suggest significantly less desire for social interaction with the AI “at the wheel” of FAVs than with human drivers, findings indicate that participants desire some social collaboration and human-in-the-loop control during autonomous driving. By empirically comparing human and autonomous ridesharing and examining both information needs and modality preferences across information categories, the study provides much-needed guidance for the future design of humanlike, anthropomorphized FAV AIs, with important implications for social autonomous agents more generally.
This study also speaks to how inclusive and accessible user interfaces can best support user needs across the range of vision loss in future transportation networks.