The well-known Transmission Control Protocol (TCP) is a crucial component of the TCP/IP architecture on which the Internet is built, and is the de facto standard for reliable communication on the Internet. At the heart of TCP is its congestion control algorithm. While most practitioners believe that the TCP congestion control algorithm performs very well, a complete analysis of the algorithm has yet to be done. Considerable effort has therefore gone into evaluating performance metrics such as throughput and average latency under TCP. In this paper, we approach the problem from a different perspective and use the competitive analysis framework to provide some answers to the question "how good is the TCP/IP congestion control algorithm?" We describe how the TCP congestion control algorithm can be viewed as an online, distributed scheduling algorithm. We observe that existing lower bounds for non-clairvoyant scheduling algorithms imply that no online, distributed, non-clairvoyant algorithm can be competitive with an optimal offline algorithm if both algorithms are given the same resources. Therefore, in order to evaluate TCP using competitive analysis, we must limit the power of the adversary, or equivalently, allow TCP extra resources compared to an optimal, offline algorithm for the same problem. In this paper, we show that TCP is competitive with an optimal, offline algorithm provided the former is given more resources. Specifically, we first prove that for networks with a single bottleneck (or point of congestion), TCP is ${\mathcal{O}}(1)$-competitive with an optimal centralized (global) algorithm in minimizing the user-perceived latency or flow time of the sessions, provided we allow TCP ${\mathcal{O}}(1)$ times as much bandwidth and ${\mathcal{O}}(1)$ extra time per session. Second, we show that TCP is fair by proving that the bandwidths allocated to sessions quickly converge to fair sharing of network bandwidth.
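To make the convergence-to-fairness claim concrete, the following is a minimal sketch (not taken from the paper) of synchronized additive-increase/multiplicative-decrease (AIMD) sessions sharing a single bottleneck link. The names `CAPACITY`, `ALPHA`, `BETA`, `aimd_round`, and `simulate`, as well as the synchronized-loss assumption, are illustrative choices and not part of the paper's model or proof.

```python
# Toy simulation: synchronized AIMD sessions at a single bottleneck.
# All parameters and names below are illustrative assumptions.

CAPACITY = 100.0   # bottleneck capacity (packets per RTT), assumed
ALPHA = 1.0        # additive increase per RTT
BETA = 0.5         # multiplicative decrease factor on congestion

def aimd_round(windows):
    """Advance every session by one RTT of AIMD."""
    # Additive increase while the bottleneck is uncongested.
    windows = [w + ALPHA for w in windows]
    # On overflow, every session sees a loss and backs off
    # multiplicatively (the synchronized-loss assumption of this sketch).
    if sum(windows) > CAPACITY:
        windows = [w * BETA for w in windows]
    return windows

def simulate(initial_windows, rounds=200):
    w = list(initial_windows)
    for _ in range(rounds):
        w = aimd_round(w)
    return w

if __name__ == "__main__":
    # Start three sessions with very unequal windows.
    final = simulate([80.0, 10.0, 1.0])
    print([round(x, 2) for x in final])  # windows end up close to one another
```

In this toy model the additive-increase step preserves the differences between session windows while each multiplicative-decrease step shrinks them by the factor `BETA`, so unequal allocations converge toward an equal share of the bottleneck; this is the classical intuition behind AIMD fairness and is only an illustration of the kind of convergence the paper proves rigorously.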