Plug-in hybrid electric vehicles (PHEVs) are designed to electrify a large share of the distance vehicles travel while using relatively small batteries, taking advantage of the fact that long-distance travel days tend to be infrequent for many vehicle owners. PHEVs also relieve range anxiety by switching seamlessly to hybrid driving, an efficient mode of fuel-powered operation, whenever the battery reaches a low state of charge. Because PHEVs are perceived as a well-rounded solution for reducing greenhouse gas (GHG) emissions, several metrics have been developed to estimate their GHG-reduction effectiveness, the utility factor (UF) being prominent among them. Recent articles have called into question whether theoretical UF values agree with the real-world performance of PHEVs, suggesting infrequent charging as the likely cause of the observed deviations. However, other factors could also be responsible for the UF mismatch. This work proposes an approach that combines theoretical UF modeling under progressively relaxed assumptions (covering the statistical distribution of daily traveled distance, charging behavior, and attainable electric range) with vehicle data logs to quantitatively infer how much each real-world factor contributes to the observed mismatch between theoretical and real-world UF. A demonstration of the proposed approach using data from three real-world vehicles shows that all contributing factors can be significant. Although the results from this small sample of vehicles are not representative of the population, the proposed approach can be scaled to larger datasets.
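For readers unfamiliar with the metric, the fleet utility factor in the SAE J2841 sense is the fraction of total driven distance that can be covered electrically, under the idealized assumption of a full charge before every travel day. The sketch below (not the paper's code; the function name and sample distances are illustrative) shows this baseline computation that the theoretical UF relies on:

```python
# Hedged sketch of a J2841-style fleet utility factor under the
# idealized assumption of one full charge per travel day.
def utility_factor(daily_distances, electric_range):
    """Fraction of total distance drivable on electricity alone.

    Each day, at most `electric_range` of the day's distance is
    assumed to be driven electrically; the remainder uses fuel.
    """
    electric = sum(min(d, electric_range) for d in daily_distances)
    total = sum(daily_distances)
    return electric / total

# Hypothetical daily distances (miles) and a 40-mile electric range:
uf = utility_factor([20, 35, 60, 150, 25], 40)
print(round(uf, 3))  # → 0.552
```

Real-world UF deviates from this value when any of the assumptions fail, for example when charging is less frequent than once per travel day or the attainable electric range falls short of the rated range, which is precisely the mismatch the proposed approach decomposes.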