Abstract

With the development of a diverse therapeutic menu in osteoporosis, there is an increasing need to develop strategies for fracture risk assessment, both to target treatments more effectively at those who need them and to avoid unnecessary treatment of individuals at low risk of fracture. In 1994, the WHO developed bone density diagnostic criteria, based on dual-energy x-ray absorptiometry (DXA) measurements, using the concept of T scores: the number of standard deviations (SDs) above or below the average peak bone density in young adults. A T score between -1 and -2.5 was classified as osteopenia, -2.5 or lower as osteoporosis, and -2.5 or lower with a fragility fracture as severe, established osteoporosis. Although designed only as diagnostic criteria, T scores were interpreted by third-party payers and others as intervention thresholds as well. For fracture prediction, DXA gives measurements that predict fracture, with an increase in fracture risk of approximately 1.5- to 2-fold per SD decrease in bone mineral density (BMD) (the so-called gradient of risk) [1]. It has become clear that bone density, while valuable, may not be sufficient information to identify all patients at higher risk. Recent studies have shown that up to half of the patients in the community with fractures have a baseline BMD above the WHO diagnostic threshold [2,3]. In the National Osteoporosis Risk Assessment (NORA), which used peripheral bone density measurements, approximately half the osteoporotic fractures in the community occurred in women without osteoporosis. Although the relative risk of fractures in NORA was greater in
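To make the T-score definition, the WHO diagnostic thresholds, and the gradient-of-risk relationship described above concrete, the following minimal Python sketch computes a T score and applies the 1994 cut-offs. The young-adult reference mean, the reference SD, the gradient-of-risk value of 1.7, and the example patient measurement are illustrative assumptions, not values taken from the article.

    # Sketch of the WHO T-score calculation and diagnostic categories.
    # Reference values below are hypothetical placeholders.
    YOUNG_ADULT_MEAN_BMD = 1.00   # g/cm^2, assumed young-adult mean
    YOUNG_ADULT_SD_BMD = 0.12     # g/cm^2, assumed young-adult SD

    def t_score(bmd: float) -> float:
        """Number of SDs above or below the average peak BMD of young adults."""
        return (bmd - YOUNG_ADULT_MEAN_BMD) / YOUNG_ADULT_SD_BMD

    def who_category(t: float, fragility_fracture: bool = False) -> str:
        """Classify a T score using the 1994 WHO diagnostic thresholds."""
        if t <= -2.5:
            return "severe (established) osteoporosis" if fragility_fracture else "osteoporosis"
        if t < -1.0:
            return "osteopenia"
        return "normal"

    def relative_fracture_risk(t: float, gradient_of_risk: float = 1.7) -> float:
        """Approximate relative risk versus T = 0, assuming risk rises
        roughly 1.5- to 2-fold per SD decrease in BMD (1.7 assumed here)."""
        return gradient_of_risk ** (-t)

    if __name__ == "__main__":
        bmd = 0.70                            # hypothetical patient BMD, g/cm^2
        t = t_score(bmd)                      # -2.5
        print(f"T score: {t:.1f}")
        print(who_category(t))                # osteoporosis
        print(f"Relative risk: {relative_fracture_risk(t):.1f}x vs young-adult mean")

As the abstract notes, a classification scheme like this captures only the BMD component of risk; patients classified as "normal" or "osteopenia" by these thresholds can still account for a large share of fractures in the community.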
