Abstract
Unemployment among blind and visually impaired people is very high (around 70%). These figures are not higher thanks to the efforts of organizations such as ONCE, which provides a great number of services for the inclusion of blind people into the ordinary labour market, in addition to driving the protected labour market via its subsidiary organizations. The demands of the environment itself, either at the workplace or during the commute, were identified as a major barrier for blind people to obtain or keep a job. In particular, activities based on orientation and mobility (OM) […] and, on the other hand, a sensitive interface responsible for providing feedback. This cognitive-mapping tool is implemented in the form of a virtual-reality video game for smartphones. It was hypothesised that distant exploration improves the efficacy and efficiency of the exploration process without a detrimental impact on the quality or usefulness of the resulting cognitive maps.

Cognitive maps are not directly observable; in order to assess their quality, people must build an external representation of them, such as a drawing, a model, or a verbal description, and the resulting outcome is referred to as a spatial product. A configurational technique was used, which made it possible to produce a set of two-dimensional points describing the spatial layout held in the cognitive map. Bidimensional regression can account for the level of similarity between two planar point layouts; however, it does not deal well with missing data. A novel index for the assessment of cognitive-map quality was therefore also defined. This quality index is referred to as the Spatial Understanding Quality Index (SUQI), and it is defined as the Mahalanobis distance between two four-dimensional vectors representing a spatial product and the original scene, respectively. Because it is based on the Mahalanobis distance, it requires the estimation of a covariance matrix computed from the elements of a representative set of spatial products. It was hypothesised that a set of spatial products of a single room is valid for correctly assessing the quality of spatial products representing different rooms.

A within-subjects cross-sectional study was conducted in which nineteen totally blind people explored three virtual spaces of similar complexity. Each participant explored each virtual space with a different spotlight configuration, namely proximity exploration (noFoA), spherical spotlight (sFoA), and flat spotlight (fFoA). In addition, three independent evaluators ranked all fifty-four spatial products in the eGlance-study dataset according to their similarity to the corresponding original scene.

Evidence supports effectiveness improvements due to distant exploration (p-value = 0.0006). The fFoA distant configuration entails a 53% reduction in discovery time (p-value = 0.0027). A trend is observed entailing a 38% reduction in the duration of the overall exploration stage for the flat-spotlight configuration (p-value = 0.067). Wall-detection effectiveness alters exploration duration (p-value = 0.012). Improvements in effectiveness and discovery time are associated with shorter overall exploration times, and exploration duration after discovery depends on wall-detection effectiveness. However, the benefits of a distant exploration configuration are not enough to build better cognitive maps.
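The SUQI described above reduces to a standard Mahalanobis-distance computation. The following is a minimal sketch, not the study's actual implementation: it assumes that each spatial product and the original scene have already been summarised as four-dimensional feature vectors (the concrete features are not detailed here), and the function and variable names are illustrative.

```python
# Minimal, illustrative sketch of a SUQI-style quality score: the Mahalanobis
# distance between a spatial product and the original scene, with the covariance
# matrix estimated from a representative set of spatial products.
import numpy as np

def suqi(product_vec, scene_vec, reference_products):
    """Mahalanobis distance between a spatial product and the original scene.

    reference_products: (n, 4) array of feature vectors extracted from a
    representative set of spatial products; used only to estimate the
    covariance matrix required by the Mahalanobis distance.
    """
    cov = np.cov(np.asarray(reference_products, dtype=float), rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singular estimates
    diff = np.asarray(product_vec, dtype=float) - np.asarray(scene_vec, dtype=float)
    return float(np.sqrt(diff @ cov_inv @ diff))
```

Under the second hypothesis stated above, `reference_products` would be drawn from spatial products of a single room, while `product_vec` and `scene_vec` may describe a different room.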
Compared to human assessment, the inter-rater reliability (IRR) of the SUQI was excellent: ICC(A,1) = 0.999, 95% CI (0.997, 0.999); the IRR of the Euclidean distance was moderate to good: ICC(A,1) = 0.794, 95% CI (0.669, 0.875); and the IRR of landmark placement was moderate: ICC(A,1) = 0.720, 95% CI (0.561, 0.828). The IRR between different estimations of the covariance matrix was good to excellent: ICC(A,1) = 0.886, 95% CI (0.825, 0.929). Thus, the results of spatial-product assessment with the SUQI are equivalent to those obtained from human assessment; this is because, unlike the Euclidean distance, the SUQI accounts for differences in variability across cognitive-map features. Moreover, the cognitive map of a scene can be assessed with a covariance matrix estimated from a different scene.
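The agreement figures above are ICC(A,1) values, i.e. two-way random-effects, absolute-agreement, single-measurement intraclass correlations. The sketch below shows one way such an estimate can be obtained with the pingouin library (which labels this estimator ICC2); the data frame is purely synthetic and illustrative, not the study's dataset.

```python
# Illustrative ICC(A,1) estimation with pingouin on synthetic data:
# two "raters" score the same set of spatial products, and agreement is
# quantified with the two-way random, absolute-agreement, single-measure ICC.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_products = 54                                  # spatial products in the dataset
true_quality = rng.normal(size=n_products)

rows = []
for rater in ("human", "index"):                 # e.g. human ranking vs. index-based ranking
    scores = true_quality + rng.normal(scale=0.1, size=n_products)
    rows += [{"product": i, "rater": rater, "score": s} for i, s in enumerate(scores)]
df = pd.DataFrame(rows)

icc = pg.intraclass_corr(data=df, targets="product", raters="rater", ratings="score")
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])   # ICC2 ~ ICC(A,1)
```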