Abstract

There has been long and wide-ranging debate in the social science literature about how best to conceptualise and to measure segregation. A popular measure is the dissimilarity index, usually attributed to Duncan and Duncan, who were aware of its geographical limitation: like most indices, it is invariant to the precise spatial patterning of the segregation it measures. Whilst one response to this shortcoming has been to develop spatially adjusted indices, a number of papers from the 1980s and 1990s instead treated the measurement as a (spatial) optimisation problem. This paper revisits that optimisation literature, arguing that what was computationally prohibitive in the past is now possible in the open-source software R, and is emblematic of the sorts of problem that might be more routinely solved in a cyberinfrastructure tailored to geographical analysis. Applying this method to UK Census data for London, and comparing the optimisation measure with the standard and adjusted dissimilarity indices, the paper considers claims of ethnic desegregation.
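For readers unfamiliar with the index, the Duncan and Duncan dissimilarity index for two groups across n areal units is D = 0.5 Σᵢ |aᵢ/A − bᵢ/B|, where aᵢ and bᵢ are the counts of each group in unit i and A, B the group totals. The sketch below (illustrative only; the paper's own analysis is conducted in R, and the data here are hypothetical) shows the index and the aspatial limitation the abstract describes: rearranging the same unit counts over space leaves D unchanged.

```python
def dissimilarity(a, b):
    """Duncan & Duncan index of dissimilarity, D = 0.5 * sum|a_i/A - b_i/B|."""
    A, B = sum(a), sum(b)
    return 0.5 * sum(abs(ai / A - bi / B) for ai, bi in zip(a, b))

# Hypothetical counts of two groups across four areal units.
a = [90, 10, 10, 90]
b = [10, 90, 90, 10]

# D is identical for any spatial rearrangement of the same unit counts:
# the index cannot distinguish clustered from dispersed patterns.
print(dissimilarity(a, b))                          # 0.8
print(dissimilarity(list(reversed(a)), list(reversed(b))))  # also 0.8
```

This invariance to spatial arrangement is precisely the shortcoming that the spatial-adjustment and optimisation approaches discussed in the paper attempt to address.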
