In clinical computed tomography (CT) images, cortical bone features with sub-millimeter (sub-mm) thickness are substantially blurred, such that their thickness is overestimated and their intensity is underestimated. Any inquiry into the geometry or density of such bones based on these images is therefore severely error prone. We present a model-based method for estimating the true thickness and intensity magnitude of cortical and trabecular bone layers at localized regions of complex shell bones down to 0.25 mm. The method also computes the width of the corresponding point spread function. This approach is applicable to any CT image data and does not rely on any scanner-specific parameter inputs beyond what is inherently available in the images themselves. Applied to CT intensity profiles of custom phantoms mimicking shell bones, the method produced an average cortical thickness error of 0.07 ± 0.04 mm versus an average error of 0.47 ± 0.29 mm in the untreated cases (t(55) = 10.92, p ≪ 0.001). Similarly, the average error of the method's intensity magnitude estimates was 22 ± 2.2 HU versus an error of 445 ± 137 HU in the untreated cases (t(55) = 26.48, p ≪ 0.001). The method was also used to correct CT intensity profiles from a cadaveric specimen of the craniofacial skeleton (CFS) in 15 different regions. There was excellent agreement between the corrections and µCT intensity profiles of the same regions used as a ‘gold standard’ measure. These results lay the groundwork for restoring cortical bone geometry and intensity information in entire image data sets. This information is essential for the generation of finite element models of the CFS that can accurately describe the biomechanical behavior of its complex thin bone structures.
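To make the general idea concrete, the sketch below illustrates one common way a model-based estimate of this kind can be set up; it is not the authors' implementation. It assumes a three-layer step model (soft tissue | cortex | trabecular bone) blurred by a Gaussian point spread function, whose analytic form uses the error function, and fits thickness, layer intensities, and PSF width to a 1-D intensity profile by nonlinear least squares. All parameter names, layer intensities, and the Gaussian PSF assumption are illustrative.

```python
# Hypothetical sketch (not the paper's implementation): fit a blurred
# three-layer step model to a 1-D CT intensity profile crossing a thin
# cortical shell, recovering cortical thickness, cortical intensity,
# and the PSF width simultaneously.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def blurred_profile(x, x0, t, y_soft, y_cort, y_trab, sigma):
    """Analytic convolution of a two-edge (three-layer) step with a Gaussian PSF.

    x0    : position of the outer cortical surface (mm)
    t     : cortical thickness (mm)
    y_*   : intensities (HU) of soft tissue, cortex, trabecular bone
    sigma : Gaussian PSF standard deviation (mm)
    """
    edge = lambda u: 0.5 * (1.0 + erf(u / (np.sqrt(2.0) * sigma)))
    return (y_soft
            + (y_cort - y_soft) * edge(x - x0)          # outer cortical edge
            + (y_trab - y_cort) * edge(x - (x0 + t)))   # inner cortical edge

# Synthetic example: a 0.5 mm cortex sampled every 0.25 mm with mild noise.
rng = np.random.default_rng(0)
x = np.arange(-5.0, 5.0, 0.25)
truth = dict(x0=0.0, t=0.5, y_soft=0.0, y_cort=1800.0, y_trab=300.0, sigma=0.5)
profile = blurred_profile(x, **truth) + rng.normal(0.0, 10.0, x.size)

p0 = [0.2, 1.0, 50.0, 1000.0, 200.0, 0.8]   # rough initial guess
popt, _ = curve_fit(blurred_profile, x, profile, p0=p0)
x0, t, y_soft, y_cort, y_trab, sigma = popt
print(f"estimated thickness {t:.2f} mm, cortical intensity {y_cort:.0f} HU, "
      f"PSF sigma {sigma:.2f} mm")
```

Because blurring makes a thin, bright cortex nearly indistinguishable from a thicker, dimmer one, jointly estimating thickness, intensity, and PSF width as above (rather than reading them off the blurred profile directly) is what makes sub-resolution recovery plausible; the paper's phantom results quantify how well such an estimate can perform in practice.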