The separation and equivalence of the permeability and thickness of transmissive reservoir intervals are managed by integrating data measured at different scales. Essential requirements are estimates of effective permeability from core, logs and well tests for comparison over net reservoir intervals at key wells. The starting point is a quality-assured determination of interval transmissibility from well testing. The interval thickness is refined to an effective thickness through the scale-compatible application of dynamically-conditioned net-reservoir discriminators, a process that allows effective permeability to be determined at the well-test scale. Algorithms for transforming core absolute permeability to core effective permeability form the basis for comparisons of laboratory and well-test data, provided that appropriate core compaction corrections have been made, with data partitioning as needed. Where core sampling is comprehensive and representative over the perforated interval, the comparison with well tests can be made directly. Otherwise, core-calibrated log data constitute an essential intermediary. In the absence of formation-damage effects, the reconciliation of petrophysically- and dynamically-derived effective permeabilities over a tested interval is diagnostic of uniform reservoir character. Beyond data shortfalls, impediments to reconciliation include formation anisotropy exacerbated by natural fractures, and relatively high-permeability conduits, possibly in the form of “super-k” layers. A deterministic workflow for achieving reconciliation is substantiated by reference to field examples, which collectively reveal further opportunities for improved permeability characterization in integrated reservoir studies.
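The comparison described above can be sketched numerically: well-test transmissibility (kh) divided by the effective (net) thickness yields an effective permeability at the well-test scale, which is then compared with a thickness-weighted average of core effective permeabilities over the same net interval. The following is a minimal illustrative sketch, not the paper's workflow; the porosity cutoff is a crude stand-in for the dynamically-conditioned net-reservoir discriminators, and all data values, function names, and units are hypothetical.

```python
# Illustrative sketch (hypothetical data and names): reconcile a well-test
# effective permeability with a core-derived estimate over the same
# net reservoir interval.

def well_test_effective_permeability(kh_md_ft, net_thickness_ft):
    """Effective permeability (mD) from well-test transmissibility kh (mD.ft)."""
    return kh_md_ft / net_thickness_ft

def core_effective_permeability(samples, porosity_cutoff=0.10):
    """Thickness-weighted average of core effective permeability over net
    reservoir; 'net' is flagged here by a simple porosity cutoff, a crude
    stand-in for dynamically-conditioned net-reservoir discriminators."""
    net = [(h, k) for h, k, phi in samples if phi >= porosity_cutoff]
    total_h = sum(h for h, _ in net)
    return sum(h * k for h, k in net) / total_h, total_h

# Hypothetical core data: (sampled thickness ft, effective permeability mD, porosity)
samples = [(2.0, 120.0, 0.18), (3.0, 85.0, 0.15), (1.5, 5.0, 0.06), (2.5, 150.0, 0.20)]

k_core, h_net = core_effective_permeability(samples)      # 116.0 mD over 7.5 ft
k_test = well_test_effective_permeability(kh_md_ft=880.0,
                                          net_thickness_ft=h_net)

# Reconciliation check: a ratio near 1 is diagnostic of uniform reservoir
# character (absent formation damage, anisotropy, or "super-k" conduits).
ratio = k_test / k_core
```

In practice the core estimate would use effective (not absolute) permeabilities, compaction-corrected and partitioned as the text describes, and the averaging rule (arithmetic, geometric, or flow-based) would depend on the layering geometry.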