The widespread deployment of Wi-Fi and 5G networks has introduced new applications requiring high data rates and low latency. However, the heavy random packet loss caused by mobility and varying channel conditions in wireless networks degrades the performance of traditional TCP congestion control algorithms. To address this, BBR [1] was proposed in 2016 and claims to operate at the optimal point. BBR strives to match the congestion window to the bandwidth-delay product, calculated from the measured bottleneck bandwidth and round-trip time. Through simulations, we found that BBR suffers varying degrees of throughput degradation under different loss rates. To address this issue, we propose Yinker, an improved BBR. Specifically, Yinker dynamically adjusts BBR's pacing_gain based on network conditions, including the loss rate and the degree of congestion. We evaluated Yinker in both real-world environments and trace-based emulations and compared its performance with several BBR variants and state-of-the-art schemes, including Cubic, Verus, and Copa. On average, TCP D*, BBR v2, Copa, and BBR achieve 4.24×, 2.34×, 2.01×, and 1.48× lower throughput than Yinker, respectively. This throughput gain comes at little cost in latency. For instance, compared to TCP D* (which achieves the lowest latency), Yinker's latency is only about 3% higher.
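
For context, BBR's operating point can be summarized as pacing at pacing_gain times the estimated bottleneck bandwidth while capping the congestion window near the bandwidth-delay product. The sketch below illustrates this relationship together with a loss-aware pacing_gain adjustment in the spirit described above; the adjustment rule, thresholds, and function names are illustrative assumptions, not Yinker's actual algorithm.

```python
# Minimal sketch of BBR's operating point and a hypothetical loss-aware
# pacing_gain adjustment. Thresholds and the rule itself are assumptions
# for illustration only, not the algorithm proposed in the paper.

def bdp_bytes(btl_bw_bps: float, rtprop_s: float) -> float:
    """Bandwidth-delay product from measured bottleneck bandwidth and min RTT."""
    return btl_bw_bps * rtprop_s / 8.0  # bits/s * s -> bytes

def pacing_rate_bps(btl_bw_bps: float, pacing_gain: float) -> float:
    """BBR paces packets at pacing_gain times the estimated bottleneck bandwidth."""
    return pacing_gain * btl_bw_bps

def adjust_pacing_gain(base_gain: float, loss_rate: float, queue_ratio: float) -> float:
    """Hypothetical rule: ease off the gain when loss or queueing is high,
    probe slightly harder when the path looks clean (thresholds assumed)."""
    if loss_rate > 0.02 or queue_ratio > 0.5:
        return max(0.9, base_gain * 0.85)   # back off under heavy loss/congestion
    if loss_rate < 0.005 and queue_ratio < 0.1:
        return min(1.25, base_gain * 1.05)  # probe more on a clean path
    return base_gain

if __name__ == "__main__":
    btl_bw = 100e6   # 100 Mbit/s measured bottleneck bandwidth
    rtprop = 0.040   # 40 ms minimum round-trip time
    gain = adjust_pacing_gain(1.0, loss_rate=0.03, queue_ratio=0.2)
    print(f"BDP: {bdp_bytes(btl_bw, rtprop):.0f} bytes")
    print(f"Pacing rate: {pacing_rate_bps(btl_bw, gain) / 1e6:.1f} Mbit/s (gain={gain:.2f})")
```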