Rapid developments in facial recognition technology (FRT) have made its use in contemporary surveillance-oriented security technology (SOST) systems, such as CCTV, increasingly widespread. An artificial intelligence-based technology, FRT is a force multiplier for these systems, delivering security, efficiency and business intelligence gains for organisations that deploy it. At the same time, it is a controversial technology, but unevenly so. Publics tend to accept that the technology has become part of the process of passing through customs at airports, for example, yet its use by retailers has sparked frequent backlash. The frequency of these controversies suggests that security consultants and other practitioners responsible for providing advice to organisations in relation to the suitability of security systems are failing to incorporate the ‘public acceptability’ of potential FRT deployments within their advice. Existing research on FRT public acceptability demonstrates that some deployments of FRT are more publicly acceptable than others. This paper collates the data from existing FRT public acceptability research in order to (i) identify deployment-specific patterns of acceptability, and (ii) develop a model for mapping the acceptability of potential deployments based on a ‘reward proximity’ versus ‘perceived risk’ trade-off. This model may assist actors within the FRT supply chain to make more informed choices in relation to the appropriateness of facial recognition technology in specific deployment scenarios.