Precisely computing the worst-case blocking time that tasks may experience is one of the fundamental problems in the schedulability analysis of real-time applications. While exact methods have been proposed for more sophisticated protocols, the problem remains complex in the case of the Priority Inheritance Protocol, even when attention is restricted to uniprocessor systems, non-nested resource accesses, and non-self-suspending tasks. Besides a very simple method that generally yields loose upper bounds, only one algorithm, of exponential complexity, has so far been reported in the literature to tighten such bounds. In this article, we describe a novel approach that, by leveraging an operational research technique to model the problem, computes the same tight bounds in polynomial time. We then discuss the scenarios in which, assuming no conditional statements in the tasks’ code, the computed bounds derive from a blocking chain that cannot actually occur, and we refine the initial model to compute the worst-case blocking times more precisely for any task set in any possible operating condition.