Current usage patterns, research trends, and recent reports show that the performance of the wide-area networks interconnecting geographically distributed cloud nodes (i.e., inter-datacenter networks) is attracting growing interest. In this paper we rely only on active measurement approaches, without depending on information restricted to providers, and present an in-depth analysis of these infrastructures for the two leading public-cloud providers: Amazon Web Services and Microsoft Azure. Our study assesses the performance of these networks as a function of the several configuration factors under the customer's control and highlights specific cases of particular interest. The analysis of these cases and of their root causes, also in relation to service fees, provides insight into their impact on both the Quality of Service perceived by cloud customers and the outcomes of studies that neglect them.

Our results show that the Azure inter-datacenter infrastructure performs better than Amazon's in terms of throughput (+56%, on average). On the other hand, the performance of the two providers is comparable in terms of latency, except in a limited number of specific cases. Moreover, some of the configuration factors cloud customers can leverage (such as larger, more expensive VM sizes advertised as having better network performance) may have no effect on the inter-datacenter network performance actually perceived. Counterintuitively, lower performance may even come with higher costs for the customer. Experimental evidence shows that public-cloud providers also rely on external network providers in some geographical regions, which is the cause of lower performance and higher costs. A comparison with previous works shows that TCP throughput has not improved recently, whereas evidence of higher link capacities has been found.