Abstract
Surveillance technologies may be capable of monitoring a domain, but they need a sufficiently orderly domain to monitor. This article examines the secretive effort to institute artificial-intelligence-based ‘smart surveillance’ in the New York subway, using object- and pattern-recognition algorithms to identify dangerous activities in video feeds, such as a person abandoning a package. By considering the preconditions that computer vision systems require to recognize patterns and objects, I show how the project was undermined by the lack of the visual and social uniformities such systems need to make their fine-toothed distinctions. Despite vast resources and the involvement of a major military contractor, the project was eventually deemed a failure. Although individual problems in computer vision are being incrementally solved, those improvements do not yet add up to a holistic technology capable of parsing the real-world ambiguity of open-ended settings that do not meet the assumptions of the detection algorithms. In the absence of technologies that can handle the actual mess, the world itself must cooperate, but it often does not. The article demonstrates the importance of looking beyond claims of technical efficacy in the study of security and surveillance, in order to discover how technologies of inspection and control actually work and to cut through the heavy rhetorical packaging in which they are sold to their publics.