Do voo do pássaro ao olhar debruçado: O virtual como método (From the bird's flight to the leaning gaze: The virtual as method)

Abstract

Objects, concepts, and images are only fully understood when contextualized in their temporal and interpretative dimensions. As a method, montage includes its own disassembly, revealing the intrinsic complexity of objects and images. This article proposes the exercise of a dialectical triad (assembly-disassembly-reassembly), in which images and meanings are constantly reinterpreted when mobilized to inform the realities of the Global South. With the massification of digital platforms such as Google Street View, the collection and interpretation of images has been transformed, facilitating new interactions and appropriations based on the generalization of phenomena. This article presents a research methodology based on virtuality, using Street View to analyze self-constructed layers — the puxadinhos (pronounced poo-sha-DEE-nyos: self-built housing extensions, mostly erected without permits) — in housing complexes, identifying a movement that connects the aerial view to the first-person view alongside the participant gaze in the territories. The collection of images of housing complexes in several Brazilian and Latin American cities highlights the presence of self-built layers, challenging the hegemonic view of the housing complex as a complete solution and demonstrating the generalization of self-construction across countries of the Global South. The incorporation of such technologies is advocated to broaden the understanding of territories, emphasizing the need to observe the city through the everyday images that technology generates, which continue to be capitalized on but do not replace the need for a physical presence in the territories.
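A minimal sketch of the collection step this methodology implies: pairing an aerial view with several first-person views of the same coordinate. The endpoints and parameters below follow Google's public Street View Static and Static Maps APIs; the API key and sample coordinates are placeholders, and this is an illustrative sketch, not the authors' actual pipeline.

```python
# Sketch: build request URLs pairing one aerial (satellite) view with
# street-level views at several headings for a housing-complex coordinate.
# Parameters follow Google's documented Static Maps / Street View Static APIs;
# API_KEY is a placeholder.
from urllib.parse import urlencode

STREET_VIEW = "https://maps.googleapis.com/maps/api/streetview"
STATIC_MAP = "https://maps.googleapis.com/maps/api/staticmap"

def view_pair_urls(lat, lon, api_key, headings=(0, 90, 180, 270)):
    """Return one aerial URL and one street-level URL per heading."""
    aerial = f"{STATIC_MAP}?" + urlencode({
        "center": f"{lat},{lon}", "zoom": 19,      # close enough to see roof extensions
        "size": "640x640", "maptype": "satellite", "key": api_key,
    })
    street = [
        f"{STREET_VIEW}?" + urlencode({
            "size": "640x640", "location": f"{lat},{lon}",
            "heading": h, "pitch": 10, "fov": 90,  # slight upward pitch toward upper-storey additions
            "key": api_key,
        })
        for h in headings
    ]
    return aerial, street
```

The returned URLs can be fetched with any HTTP client; billing and quota terms apply to the real APIs.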

Similar Papers
  • Research Article
  • 10.1525/aft.2021.48.2.63
The Power of Assembly
  • Jun 1, 2021
  • Afterimage
  • Kris Fallon

Kris Fallon is an associate professor in the Department of Cinema & Digital Media at the University of California, Davis.

Is a boundless photographic image still a photograph? Spatially, boundaries define a photo, imposing limits that demarcate its content or subject as this and not that. The edge of a camera's recording surface, be it CCD or CMOS, negative or positive, wet or dry, circumscribes a limit beyond which no light values are recorded. The frame is thus essential to the medium, defining it as much by what it excludes as what it includes. A photo of a cat on a lap is different from one that includes the person's face, and the boundary or frame provides the critical tool of composition that makes this determination. Only this much is visible, and no more. So what is an image with no boundary at all? Attempts to move beyond the frame toward boundlessness typically rely on combining multiple images, a process now commonly referred to as "photo stitching." This allows the...

  • Conference Article
  • Cite Count Icon 29
  • 10.1109/cvpr.2016.176
Regularity-Driven Building Facade Matching between Aerial and Street Views
  • Jun 1, 2016
  • Mark Wolff + 2 more

We present an approach for detecting and matching building facades between aerial view and street-view images. We exploit the regularity of urban scene facades as captured by their lattice structures and deduced from median-tiles' shape context, color, texture and spatial similarities. Our experimental results demonstrate effective matching of oblique and partially-occluded facades between aerial and ground views. Quantitative comparisons for automated urban scene facade matching from three cities show superior performance of our method over baseline SIFT, Root-SIFT and the more sophisticated Scale-Selective Self-Similarity and Binary Coherent Edge descriptors. We also illustrate regularity-based applications of occlusion removal from street views and higher-resolution texture-replacement in aerial views.
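The SIFT and Root-SIFT baselines the authors compare against boil down to nearest-neighbour descriptor matching with Lowe's ratio test. Below is a numpy-only sketch of that baseline (not the regularity-driven method itself), with random arrays standing in for real SIFT descriptors; the function name is illustrative.

```python
# Sketch of the SIFT-style matching baseline: for each descriptor in image A,
# keep its nearest neighbour in image B only if it is clearly better than the
# second-nearest (Lowe's ratio test).
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Return (index_in_a, index_in_b) pairs that pass the ratio test."""
    # pairwise Euclidean distances, shape (len(desc_a), len(desc_b))
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]
```

With real facades one would compute descriptors from detected keypoints; the ratio test then discards ambiguous matches caused by the very repetition the paper's lattice approach exploits.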

  • Research Article
  • Cite Count Icon 34
  • 10.1016/j.jth.2016.05.001
Comparison of field and online observations for measuring land uses using the Microscale Audit of Pedestrian Streetscapes (MAPS)
  • May 18, 2016
  • Journal of Transport & Health
  • Jonathan M Kurka + 6 more

  • Conference Article
  • Cite Count Icon 281
  • 10.1109/iccv.2009.5459413
Large-scale privacy protection in Google Street View
  • Sep 1, 2009
  • Andrea Frome + 8 more

The last two years have witnessed the introduction and rapid expansion of products based upon large, systematically-gathered, street-level image collections, such as Google Street View, EveryScape, and Mapjack. In the process of gathering images of public spaces, these projects also capture license plates, faces, and other information considered sensitive from a privacy standpoint. In this work, we present a system that addresses the challenge of automatically detecting and blurring faces and license plates for the purpose of privacy protection in Google Street View. Though some in the field would claim face detection is "solved", we show that state-of-the-art face detectors alone are not sufficient to achieve the recall desired for large-scale privacy protection. In this paper we present a system that combines a standard sliding-window detector tuned for a high recall, low-precision operating point with a fast post-processing stage that is able to remove additional false positives by incorporating domain-specific information not available to the sliding-window detector. Using a completely automatic system, we are able to sufficiently blur more than 89% of faces and 94 - 96% of license plates in evaluation sets sampled from Google Street View imagery.
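The paper's pipeline pairs a high-recall sliding-window detector with domain-specific post-processing; only the final redaction step lends itself to a generic sketch. Below, detected boxes are pixelated by tile-averaging, a crude stand-in for the production blurring; `redact_regions` and its box format are illustrative assumptions, not Google's system.

```python
# Sketch: irreversibly obscure detected face/plate regions by replacing
# block-sized tiles with their mean colour (pixelation). Input is an
# (H, W, C) float array; boxes are (x0, y0, x1, y1) pixel rectangles.
import numpy as np

def redact_regions(img, boxes, block=8):
    """Return a copy of img with each box pixelated at the given tile size."""
    out = img.copy()
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1, block):
            for x in range(x0, x1, block):
                tile = out[y:min(y + block, y1), x:min(x + block, x1)]
                tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return out
```

A production system would instead apply a strong Gaussian blur and feather the box edges, but the privacy property — destroying identifying detail inside the detection — is the same.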

  • Research Article
  • 10.3390/urbansci9110486
Deriving Environmental Properties Related to Human Environmental Perception: A Comparison Between Aerial Image Classification and Street View Image Segmentation
  • Nov 18, 2025
  • Urban Science
  • Feng Qi + 6 more

In recent decades, urban residents’ perceptions of their surrounding environment have been widely studied, especially pertaining to the association between environmental settings and humans’ psychological wellbeing. Many studies have used aerial imagery to derive environmental properties through image classification to approximate humans’ perceived environment, while a growing number of studies use street view imagery to achieve the same with image segmentation. There is limited research comparing the two approaches. This study aims to examine how the environmental properties derived from aerial and street view images correspond with each other. We utilized two study sites in urban communities in New Jersey, United States. High-resolution aerial images were acquired and classified to derive environmental properties within set buffer zones around sample points where Google Street View images were collected for image segmentation to derive corresponding environmental properties. Several buffer sizes were experimented with. The results show that the amount of greenness and individual environmental elements derived from street view versus aerial images can be quite different at the same locations. The amount of trees derived has a greater concordance between aerial and street views than the amount of buildings derived. The amounts of grass and roads are not in agreement between the two views. Trees derived from street view images correspond with those derived from aerial better when using a small, 30 m buffer. Low-rise buildings and grass agree better when using larger buffer sizes such as 60 m and 100 m. Roads correspond better when larger buffers are employed in green environments, but smaller buffers in environments with limited greenness. 
Our findings indicate that the choice of buffer size used when combining environmental properties derived from both aerial and street view images together should consider both the environmental elements involved and the type of environmental settings.
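The buffer-size comparison reduces to correlating aerial-derived and street-derived fractions of an element across the same sample points, once per buffer size. A hedged numpy sketch, with `concordance_by_buffer` and its input layout invented for illustration:

```python
# Sketch: Pearson correlation between aerial- and street-derived fractions
# of one element (e.g. trees) per buffer size. `aerial_frac` maps buffer
# size in metres to per-sample-point fractions; `street_frac` is one array
# aligned to the same sample points.
import numpy as np

def concordance_by_buffer(aerial_frac, street_frac):
    """Return {buffer_size: correlation} between the two derivations."""
    street = np.asarray(street_frac, dtype=float)
    return {size: float(np.corrcoef(np.asarray(vals, dtype=float), street)[0, 1])
            for size, vals in aerial_frac.items()}
```

The buffer size with the highest correlation for a given element would be the natural choice when combining the two data sources, in the spirit of the study's conclusion.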

  • Research Article
  • Cite Count Icon 2
  • 10.6107/jkha.2018.29.6.059
A Study on Institutional Improvement through Case Surveys of Community Garden Operations for Activating Public Rental Housing Communities (공공임대주택 공동체 활성화를 위한 단지텃밭 운영현황 사례조사를 통한 제도개선 연구)
  • Dec 25, 2018
  • Journal of the Korean Housing Association
  • Hae Sun Paik + 1 more

In recent years, interest in 'urban agriculture' has increased alongside growing interest in health, safe food, and leisure. In November 2011, the Korean government enacted the "Act on Development and Support of Urban Agriculture" as an institutional basis for meeting the increasing demand for urban agriculture through institutional and financial support. This study suggests that urban agriculture, which is attracting attention as a new form of community activity in the city, can help counter the negative image of rental housing complexes and ease residents' relative sense of deprivation. However, since the criteria and systems for establishing community gardens in housing complexes are insufficient, this study sought ways to improve the system and regulation so that community gardens can become established in housing-complex communities. For this purpose, the study conducted field surveys and manager interviews at 18 domestic housing complexes that have community gardens, identifying the type of construction, the type and location of the community gardens, their operation systems, and so on. The case studies found that community gardens have a positive effect on activating resident communities. To encourage them, regulatory improvement is needed so that community gardens are included as a type of communal facility.

  • Research Article
  • Cite Count Icon 55
  • 10.1177/2399808321995817
Sidewalk extraction using aerial and street view images
  • Feb 19, 2021
  • Environment and Planning B: Urban Analytics and City Science
  • Huan Ning + 4 more

A reliable, timely, and spatially accurate dataset of sidewalks is vital for identifying where improvements can be made to the urban environment to enhance multi-modal accessibility, social cohesion, and residents' physical activity. This paper develops a new spatial procedure to extract sidewalks by integrating detected results from aerial and street view imagery. We first train neural networks to extract sidewalks from aerial images, and then use pre-trained models to restore occluded and missing sidewalks from street view images. By combining the results from both data sources, a complete network of sidewalks can be produced. Our case study includes four counties in the U.S., and both precision and recall reach about 0.9. The street view imagery helps restore occluded sidewalks and largely enhances the sidewalk network's connectivity by linking 20% of dangles.
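The fusion-and-scoring logic can be sketched with boolean masks: OR-combine the aerial and street-view detections, then score the fused network pixelwise against ground truth. The function name and mask representation are assumptions, not the paper's code.

```python
# Sketch: fuse two binary sidewalk masks and compute pixelwise
# precision/recall against a ground-truth mask. All inputs are boolean
# numpy arrays of the same shape.
import numpy as np

def fuse_and_score(aerial_mask, streetview_mask, truth):
    """Return (fused_mask, precision, recall)."""
    fused = aerial_mask | streetview_mask          # union restores occluded segments
    tp = np.sum(fused & truth)                     # true-positive pixels
    precision = tp / max(np.sum(fused), 1)
    recall = tp / max(np.sum(truth), 1)
    return fused, float(precision), float(recall)
```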

  • Front Matter
  • Cite Count Icon 2
  • 10.5888/pcd12.150400
Technology and Data Collection in Chronic Disease Epidemiology
  • Oct 29, 2015
  • Preventing Chronic Disease
  • James B Holt

Preventing Chronic Disease (PCD) is a peer-reviewed electronic journal established by the National Center for Chronic Disease Prevention and Health Promotion. PCD provides an open exchange of information and knowledge among researchers, practitioners, policy makers, and others who strive to improve the health of the public through chronic disease prevention.

  • Conference Article
  • Cite Count Icon 27
  • 10.1109/ivs.2012.6232195
Google Street View images support the development of vision-based driver assistance systems
  • Jun 1, 2012
  • Jan Salmen + 2 more

For the development of vision-based driver assistance systems, large amounts of data are needed, e.g., for training machine learning approaches, tuning parameters, and comparing different methods. There are basically three ways to obtain the required data: using freely available benchmark sets, making one's own recordings, or falling back on synthesized sequences. In this paper, we show that Google Street View can be incorporated as a valuable source of image data. Street View is the largest publicly available collection of images recorded from a driver's perspective, covering many different countries and scenarios. We describe how to efficiently access the data and present a framework that allows for virtual driving through a network of images. We assess its performance and show its applicability in practice, considering traffic sign recognition as an example. The introduced approach supports efficient collection of image data relevant to training and evaluating machine vision modules. It is easily adaptable and extensible, whereby Street View becomes a valuable tool for developers of vision-based assistance systems.
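The "virtual driving through a network of images" amounts to traversing a graph whose nodes are panoramas and whose edges are drivable links. A breadth-first sketch follows; the adjacency-map format and function name are assumptions, not the paper's framework.

```python
# Sketch: shortest "virtual drive" through a panorama link graph.
# `links` maps panorama id -> ids reachable one step away.
from collections import deque

def virtual_drive(links, start, goal):
    """Return the shortest panorama sequence from start to goal, or None."""
    queue, prev = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:       # walk predecessor chain back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in links.get(node, ()):
            if nxt not in prev:           # first visit fixes the shortest route
                prev[nxt] = node
                queue.append(nxt)
    return None
```

In a real setting the node ids would be panorama identifiers returned by the imagery service, and each step would trigger loading and rectifying the next image.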

  • Research Article
  • Cite Count Icon 8
  • 10.3390/ijgi12060246
Quantifying the Spatial Ratio of Streets in Beijing Based on Street-View Images
  • Jun 17, 2023
  • ISPRS International Journal of Geo-Information
  • Wei Gao + 4 more

The physical presence of a street, called the "street view", is a medium through which people perceive the urban form. A street's spatial ratio is the main feature of the street view, and its measurement and quality are core issues in the field of urban design. The traditional method of studying urban aspect ratios is manual on-site observation, which is inefficient, incomplete, and inaccurate, making it difficult to reveal overall patterns and influencing factors. Street view images (SVI) provide large-scale urban data that, combined with deep learning algorithms, allow street spatial ratios to be studied from a broader space-time perspective. This approach can reveal an urban form's aesthetics, spatial quality, and evolution process. However, current streetscape research mainly focuses on the creation and maintenance of spatial data infrastructure, street greening, street safety, urban vitality, etc. In this study, quantitative research on the Beijing street spatial ratio was carried out using street view images, a convolutional neural network algorithm, and the classical street spatial ratio theory of urban morphology. Using the DenseNet model, quantitative measurement of Beijing's street locations, street aspect ratios, and street symmetry was realized. According to the model identification results, the law of the gradual transition of the street spatial ratio was depicted (from the open and balanced type to the canyon type, and from the historical to the modern). Changes in the streets' spatiotemporal characteristics in the central area of Beijing were revealed. On this basis, the clustering and distribution of four street aspect ratio types in Beijing are discussed and the relationship between street aspect ratio type and symmetry is summarized, with a typical lot selected for empirical research.
The classical theory of street spatial proportion has limitations under the conditions of high-density development in modern cities, and traditional urban morphology theory, combined with new technical methods such as streetscape images and deep learning algorithms, can provide new ideas for the study of urban spatial morphology.
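The classical street spatial ratio is the enclosing height over the street width, with symmetry comparing the two sides. A small sketch under those textbook definitions; the input format (facade-height samples per side, in metres) and function name are assumptions for illustration.

```python
# Sketch: classical aspect ratio and symmetry for one street cross-section.
# In the study these quantities come from DenseNet segmentation of street
# view images; here they are reduced to height samples per side.

def street_ratio(left_heights, right_heights, street_width):
    """Return (aspect_ratio, symmetry) for a street cross-section."""
    left = sum(left_heights) / len(left_heights)
    right = sum(right_heights) / len(right_heights)
    ratio = max(left, right) / street_width          # H/W: >1 suggests a canyon street
    symmetry = min(left, right) / max(left, right)   # 1.0 = fully symmetric enclosure
    return ratio, symmetry
```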

  • Research Article
  • 10.5958/0976-5506.2018.00783.0
A study on the scope and methodology of language inscription in Northeast Asian Sea Region of the 4th industrial age
  • Jan 1, 2018
  • Indian Journal of Public Health Research & Development
  • Min-Ho Yang + 1 more

The focus of traditional language inscription studies has been limited to paper media and their scope has been unrelated with daily life. This Study aims to propose a direction on the scope and methodology of language inscription in the 4th Industrial Age and seeks developmental direction of language landscape study in the ICT era based on this paper. It is desirable to use the system to overcome the limitations of research areas. Many countries and cities disclose on the internet ground road views or aerial views taken by drones, allowing them to be shared. ‘Daum Road View’ is the main examples of road views provided by South Korea’s portal websites. Other examples are Google Maps, Google Earth Street View, and Indoor View which provided by top global company Google. Various attempts can be made on language inscription study on such free data. The findings of this study relate to four research topics. First is expansion of the scope of language landscape study (languages that can be experienced through all organs of the human body are the study subjects). Second is expansion of pictograms (inscription of the most fundamental communication method). Third is research cost reduction (cost reduction through online research using Road View instead of actual research). Fourth is varying methodologies of language inscription research (Diachronic analysis through picture data on the internet). In the 4th industrial age, it is necessary to identify the aspects of language inscription by various means and overcome the limitations of the existing offline research with the use of online research. This paper is expected to be helpful for future language inscription studies.

  • Book Chapter
  • 10.4018/978-1-4666-4979-8.ch037
Immersion and Interaction via Avatars within Google Street View
  • Jan 1, 2014
  • Ya-Chun Shih + 1 more

The optimal approach to learning a target culture is to experience it in its real-life context through interaction. The new 3D virtual world platform under consideration, Blue Mars Lite, enables users to be immersed in existing Google Maps Street View panorama, globally. Google Maps with Street View contains a massive collection of 360-degree street-level images of the most popular places worldwide. The authors explore the possibility of integrating these global panoramas, in which multiple users can explore, discuss, and role-play, into the classroom. The goal of this chapter is to shed new light on merging Google Street View with the 3D virtual world for cultural learning purposes. This approach shows itself to be a promising teaching method that can help EFL learners to develop positive attitudes toward the target culture and cultural learning in this new cultural setting.

  • Book Chapter
  • 10.4018/978-1-4666-4462-5.ch012
Immersion and Interaction via Avatars within Google Street View
  • Jan 1, 2014
  • Ya-Chun Shih + 1 more

The optimal approach to learning a target culture is to experience it in its real-life context through interaction. The new 3D virtual world platform under consideration, Blue Mars Lite, enables users to be immersed in existing Google Maps Street View panorama, globally. Google Maps with Street View contains a massive collection of 360-degree street-level images of the most popular places worldwide. The authors explore the possibility of integrating these global panoramas, in which multiple users can explore, discuss, and role-play, into the classroom. The goal of this chapter is to shed new light on merging Google Street View with the 3D virtual world for cultural learning purposes. This approach shows itself to be a promising teaching method that can help EFL learners to develop positive attitudes toward the target culture and cultural learning in this new cultural setting.

  • Research Article
  • 10.1289/isesisee.2018.o01.03.32
How Green Is Green? Modeling Urban Greenness Exposure in Environmental Health Research
  • Sep 24, 2018
  • ISEE Conference Abstracts
  • Lorien Nesbitt + 7 more

Exposure to greenness has several health benefits, yet analyses are limited by uncertainty about accuracy of greenness metrics, often derived from different remotely-sensed data sources and using different spatial methods. This research (1) assesses the strengths and weaknesses of multiple greenness data sources and metrics for application in environmental health research, and (2) develops and tests an alternative greenness exposure metric that can be applied worldwide. Methods: We analyzed 5 data sources: Landsat time series, Landsat 8, Sentinel-2, RapidEye, and the Green View Index (GVI), derived from Google Street View. These data sets span various time series, resolutions and costs, and represent aerial and perspective views. We compared the Normalized Difference Vegetation Index (NDVI) using different imagery types and the GVI for various buffer distances around postal codes and examined sensitivity to spatial metrics and data sources. Based on these analyses, we constructed a spatially-weighted greenness metric combining data from Sentinel-2 and the GVI that incorporates neighbourhood street-level and at-home greenness. Results: NDVI showed correlations of between 0.65 and 0.85 among satellite types and demonstrated significant inter-variability. GVI showed low correlations with all other data types (0.25-0.40), suggesting an important new source of greenness data. Metrics were spatially sensitive, particularly at small distances and high resolutions, but lacked temporal sensitivity. Initial analyses suggest that the proposed metric is superior to traditional measures that overestimate neighbourhood greenness exposure and underestimate at-home exposure. Conclusions: This research is the first comparison of multiple remote sensing data sources and metrics, including both aerial and novel perspective views. It presents an alternative greenness exposure metric based on freely accessible data sources that may be applied in public health research internationally.
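The ingredients of the proposed metric are standard: NDVI computed from satellite bands, blended with street-level GVI. A sketch of both, where the fixed blend weight is a deliberate simplification of the paper's spatially weighted scheme, and the function names are invented for illustration.

```python
# Sketch: NDVI from near-infrared and red reflectance, plus a simple
# weighted blend of at-home (satellite NDVI) and street-level (GVI) exposure.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / np.clip(nir + red, 1e-9, None)  # guard against /0

def combined_greenness(ndvi_home, gvi_street, w_home=0.5):
    """Blend the two exposure views with a fixed weight (a simplification)."""
    return w_home * ndvi_home + (1 - w_home) * gvi_street
```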

  • Research Article
  • Cite Count Icon 8
  • 10.1021/acs.est.3c06511
High-Precision Microscale Particulate Matter Prediction in Diverse Environments Using a Long Short-Term Memory Neural Network and Street View Imagery
  • Feb 14, 2024
  • Environmental Science & Technology
  • Xiansheng Liu + 10 more

In this study, we propose a novel long short-term memory (LSTM) neural network model that leverages color features (HSV: hue, saturation, value) extracted from street images to estimate air quality with particulate matter (PM) in four typical European environments: urban, suburban, villages, and the harbor. To evaluate its performance, we utilize concentration data for eight parameters of ambient PM (PM1.0, PM2.5, and PM10, particle number concentration, lung-deposited surface area, equivalent mass concentrations of ultraviolet PM, black carbon, and brown carbon) collected from a mobile monitoring platform during the non-heating season in downtown Augsburg, Germany, along with synchronized street view images. Experimental comparisons were conducted between the LSTM model and other deep learning models (recurrent neural network and gated recurrent unit). The results clearly demonstrate a better performance of the LSTM model compared with other statistically based models. The LSTM-HSV model achieved impressive interpretability rates above 80% for the eight PM metrics mentioned above, indicating the expected performance of the proposed model. Moreover, the successful application of the LSTM-HSV model in other seasons of Augsburg city and various environments (suburbs, villages, and harbor cities) demonstrates its satisfactory generalization capabilities in both temporal and spatial dimensions. The successful application of the LSTM-HSV model underscores its potential as a versatile tool for the estimation of air pollution after presampling of the studied area, with broad implications for urban planning and public health initiatives.
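The colour features the abstract describes are per-image HSV statistics. A stdlib-only sketch of extracting one such feature vector, the kind of per-frame input a sequence model like an LSTM would consume; real inputs would be full street-view frames rather than a handful of pixels, and the function name is illustrative.

```python
# Sketch: mean hue/saturation/value over an image's RGB pixels (0-1 floats),
# a minimal per-frame colour feature of the HSV kind described above.
import colorsys

def mean_hsv(pixels):
    """Return (mean_h, mean_s, mean_v) for an iterable of (r, g, b) tuples."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels]
    n = len(hsv)
    return tuple(sum(c[i] for c in hsv) / n for i in range(3))
```

A sequence of such vectors, one per frame along a mobile-monitoring route, is the natural input shape for an LSTM regressor.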
