Project Westdrive: Unity City With Self-Driving Cars and Pedestrians for Virtual Reality Studies

Virtual environments will deeply alter the way we conduct scientific studies on human behavior. Possible applications range from spatial navigation, through addressing moral dilemmas in a more natural manner, to therapeutic applications for affective disorders. The decisive factor for this broad range of applications is that virtual reality (VR) combines a well-controlled experimental environment with the ecological validity of immersing test subjects. Until now, however, programming such an environment in Unity® has required profound knowledge of C# programming, 3D design, and computer graphics. In order to give interested research groups access to a realistic VR environment that can easily be adapted to the varying needs of experiments, we developed a large, open-source, scriptable, and modular VR city. It covers an area of 230 hectares and contains up to 150 self-driving vehicles, 655 active and passive pedestrians, and thousands of nature assets, making it both highly dynamic and realistic. Furthermore, the repository presented here contains a stand-alone City AI toolkit for creating avatars and customizing cars. Finally, the package contains code to easily set up VR studies. All main functions are integrated into the graphical user interface of the Unity® Editor to ease the use of the embedded functionalities. In summary, the project, named Westdrive, was developed to enable research groups to access a state-of-the-art VR environment that is easily adapted to specific needs and allows them to focus on the respective research question.

Open Access
Dyadic Interference Leads to Area of Uncertainty During Face-to-Face Cooperative Interception Task

People generally coordinate their actions to be more efficient. In some cases, however, interference between them occurs, resulting in inefficient collaboration. For example, if two volleyball players collide while performing a serve reception, they can both miss the ball. The main goal of this study is to explore the way two persons regulate their actions when performing a cooperative ball-interception task, and how interference between them may occur. Starting face to face, twenty-four participants (twelve teams of two) had to physically intercept balls moving down from the roof to the floor of a virtual room. To this end, they controlled a virtual paddle attached to their hand moving along the anterior-posterior axis. No communication was allowed between participants, so they had to rely on visual cues to decide whether to perform the interception or let the partner do it. Participants were immersed in a stereoscopic virtual reality setup that allowed control of the situation and of the visual stimuli they perceived, such as ball trajectories and the information available on the partner's motion. Results globally showed that participants were often able to intercept balls without collision by dividing the interception space into two equivalent parts. However, an area of uncertainty (where many trials were not intercepted) appeared in the center of the scene, highlighting the presence of interference between participants. The width of this area increased when the situation became more complex (facing a real partner rather than a stationary one) and when less information was available (only the paddle and not the partner's avatar). Moreover, participants initiated their interception later when a real partner was present and often interpreted balls starting above them as balls they should intercept, even when these balls were in fine intercepted by their partner. Overall, results showed that team coordination here emerges from between-participants interactions and that interference between them depends on task complexity (uncertainty about the partner's action and the visual information available).

Open Access
Eyelid and Pupil Landmark Detection and Blink Estimation Based on Deformable Shape Models for Near-Field Infrared Video

The eyelid contour, pupil contour, and blink events are important features of eye activity, and their estimation is a crucial research area for emerging wearable camera-based eyewear in a wide range of applications, e.g., mental state estimation. Current approaches often estimate a single eye activity, such as blink or pupil center, from far-field and non-infrared (IR) eye images, and often depend on knowledge of other eye components. This paper presents a unified approach to simultaneously estimate the landmarks for the eyelids, the iris, and the pupil, and to detect blinks from near-field IR eye images, based on a statistically learned deformable shape model and local appearance. Unlike in the facial landmark estimation problem, different shape models are applied to the different eye states – closed eye, open eye with iris visible, and open eye with iris and pupil visible – to deal with the self-occluding interactions among the eye components. The most likely eye state is determined based on the learned local appearance. Evaluation on three different realistic datasets demonstrates that the proposed three-state deformable shape model achieves state-of-the-art performance for the open eye with iris and pupil state, where the normalized error was lower than 0.04. Blink detection recall can be as high as 90%, without direct use of pupil detection. Cross-corpus evaluation results show that the proposed method improves on the state-of-the-art eyelid detection algorithm. This unified approach greatly facilitates eye activity analysis for research and practice when different types of eye activity are required, rather than employing a different technique for each type. Our work is the first study to propose a unified approach for eye activity estimation from near-field IR eye images, and it achieves state-of-the-art eyelid estimation and blink detection performance.
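The normalized-error figure reported above can be illustrated with a minimal sketch in Python. This is an illustrative metric only, assuming the common convention of dividing the mean landmark error by the ground-truth eye width; the paper's exact normalization may differ, and the contour data below are hypothetical:

```python
import math

def normalized_landmark_error(pred, truth):
    """Mean Euclidean landmark error divided by the ground-truth eye width.

    pred, truth: equal-length lists of (x, y) landmark coordinates.
    """
    xs = [p[0] for p in truth]
    eye_width = max(xs) - min(xs)  # normalization factor
    mean_err = sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(truth)
    return mean_err / eye_width

# Hypothetical eyelid contour (ground truth) and a slightly-off prediction.
truth = [(0, 0), (10, 1), (20, 0), (10, -1)]
pred = [(0.2, 0), (10, 1.2), (19.8, 0), (10, -0.8)]
print(normalized_landmark_error(pred, truth))  # well below the 0.04 threshold
```

Under this convention, a normalized error below 0.04 means the average landmark deviation is under 4% of the eye's width.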

Open Access
Toward Industry 4.0 With IoT: Optimizing Business Processes in an Evolving Manufacturing Factory

Research advances in recent decades have allowed the introduction of Internet of Things (IoT) concepts in several industrial application scenarios, leading to the so-called Industry 4.0 or Industrial IoT (IIoT). Industry 4.0 has the ambition to revolutionize industry management and business processes, enhancing the productivity of manufacturing technologies through field data collection and analysis, thus creating real-time digital twins of industrial scenarios. Moreover, it is vital for companies to be as flexible as possible and to adapt to the varying nature of digital supply chains. This is possible by leveraging IoT in Industry 4.0 scenarios. In this paper, we describe the renovation process, guided by things2i s.r.l., a cross-disciplinary engineering-economic spin-off company of the University of Parma, that a real manufacturing company is undergoing over consecutive phases spanning a few years. The first phase concerns the digitalization of the quality control process, specifically related to the company's production lines. The use of paper sheets containing different quality checks has been made smarter through the introduction of a digital, smart, Web-based application, which is currently supporting operators and quality inspectors working on the supply chain through the use of smart devices. The second phase of the IIoT evolution – currently ongoing – concerns both digitalization and optimization of the production planning activity, through an innovative Web-based planning tool. The changes introduced have led to significant advantages and improvements for the manufacturing company, in terms of: (i) impressive cost reduction; (ii) better product quality control; (iii) real-time detection of and reaction to supply chain issues; (iv) a significant reduction of the time spent on planning activities; and (v) optimized resource employment, thanks to the minimization of unproductive setup times on production lines. 
These two renovation phases represent a basis for possible future developments, such as the integration of sensor-based data on the operational status of production machines and on the currently available warehouse supplies. In conclusion, the ongoing Industry 4.0-based digitization process guided by things2i allows the continuous collection of heterogeneous Human-to-Things (H2T) data, which can be used to optimize the partner manufacturing company as a whole.

Open Access
Superimposing 3D Virtual Self + Expert Modeling for Motor Learning: Application to the Throw in American Football

We learn and/or relearn motor skills at all ages. Feedback plays a crucial role in this learning process, and Virtual Reality (VR) constitutes a unique tool to provide feedback and improve motor learning. In particular, VR grants the possibility to edit 3D movements and display augmented feedback in real time. Here we combined VR and motion capture to provide learners with 3D feedback superimposing, in real time, the reference movements of an expert (expert feedback) onto the movements of the learner (self-feedback). We assessed the effectiveness of this feedback for the learning of a throwing movement in American football. This feedback was used during (concurrent feedback) and/or after movement execution (delayed feedback), and it was compared with feedback displaying only the reference movements of the expert. In contrast with more traditional studies relying on video feedback, we used the Dynamic Time Warping algorithm coupled with motion capture to measure the spatial characteristics of the movements. We also assessed the regularity with which the learner reproduced the reference movement along its path. For that, we used a new metric computing the dispersion of the distance around the mean distance over time. Our results show that when the movements of the expert were superimposed on the movements of the learner during learning (i.e., self + expert), the reproduction of the reference movement improved significantly. On the other hand, providing feedback about the movements of the expert only did not give rise to any significant improvement in movement reproduction.
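The spatial comparison described above relies on Dynamic Time Warping, which aligns two movements sampled at different speeds before measuring their distance. A minimal sketch of the standard DTW recurrence in Python, with toy 2D trajectories standing in for motion-capture data (this is not the authors' implementation):

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two trajectories.

    a, b: lists of points (tuples of equal dimension).
    Returns the cumulative cost of the optimal alignment path.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] versus b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # Euclidean point distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point of a
                                 cost[i][j - 1],      # skip a point of b
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

# Toy example: the same movement sampled at different speeds aligns cheaply;
# only the extra interpolated sample contributes to the cost.
expert = [(0, 0), (1, 1), (2, 2), (3, 3)]
learner = [(0, 0), (0.5, 0.5), (1, 1), (2, 2), (3, 3)]
print(dtw_distance(expert, learner))
```

Because the alignment absorbs timing differences, the remaining cost reflects spatial deviation from the reference path, which is what the study's reproduction metric builds on.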

Open Access
An Interactive and Multimodal Virtual Mind Map for Future Workplace

Traditional types of mind maps involve means of visually organizing information. They can be created either using physical tools like paper or post-it notes or through a computer-mediated process. Although their utility is established, mind maps and associated methods usually have several shortcomings with regard to effective and intuitive interaction as well as effective collaboration. The latest developments in virtual reality demonstrate new capabilities of visual and interactive augmentation, and in this paper we propose a multimodal virtual reality mind map that has the potential to transform the ways in which people interact, communicate, and share information. The shared virtual space allows users to be located virtually in the same meeting room and participate in an immersive experience. Users of the system can create, modify, and group notes in categories and intuitively interact with them. They can create or modify inputs using voice recognition, interact using virtual reality controllers, and then make posts on the virtual mind map. When a brainstorming session is finished, users are able to vote on the content and export it for later use. A user evaluation with 32 participants assessed the effectiveness of the virtual mind map and its functionality. Results indicate that this technology has the potential to be adopted in practice in the future, but a comparative study needs to be performed to reach a more general conclusion.

Open Access
A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer

The Capacitated Vehicle Routing Problem (CVRP) is an NP optimization problem (NPO) that has been of great interest for decades to both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The difficulty lies in the combinatorial explosion of possible solutions, which grows superexponentially with the number of customers. Classical solvers provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. It uses quantum effects to speed up computation compared to classical computers. The challenge in solving the CVRP on the quantum annealer is the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be decomposed into smaller subproblems, enabling a sequential solution of the partitioned problem. This work presents a quantum-classical hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off, in comparison to existing classical solution methods, with regard to computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.
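The QUBO mapping mentioned above can be illustrated on a toy subproblem: assigning each customer to exactly one vehicle, with a quadratic penalty replacing the one-hot constraint. This is a hedged sketch, not the paper's formulation and not D-Wave's API; the costs are invented, and exhaustive enumeration stands in for the annealer's sampling of low-energy states:

```python
from itertools import product

# Binary variable x[c, v] = 1 iff customer c is served by vehicle v.
cost = {  # hypothetical cost of serving customer c with vehicle v
    (0, 0): 1.0, (0, 1): 4.0,
    (1, 0): 5.0, (1, 1): 2.0,
}
customers, vehicles = 2, 2
P = 10.0  # penalty weight: must dominate any achievable cost difference

def qubo_energy(x):
    """Energy = linear costs + one-hot penalty P * (sum_v x[c,v] - 1)^2."""
    e = sum(cost[c, v] * x[c, v]
            for c in range(customers) for v in range(vehicles))
    for c in range(customers):
        s = sum(x[c, v] for v in range(vehicles))
        e += P * (s - 1) ** 2  # expands to quadratic terms in the x's
    return e

# Brute force over all 2^(customers*vehicles) bitstrings stands in for
# the quantum annealer: both seek the minimum-energy configuration.
best = min(
    (dict(zip(product(range(customers), range(vehicles)), bits))
     for bits in product((0, 1), repeat=customers * vehicles)),
    key=qubo_energy,
)
print(best)  # customer 0 -> vehicle 0, customer 1 -> vehicle 1
```

Squaring the constraint turns it into pairwise products of binary variables, which is exactly the quadratic form a QUBO solver requires; any assignment violating the one-hot constraint pays at least P and is never optimal.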

Open Access
Technology Use and Attitudes in Music Learning

While the expansion of technologies into the music education classroom has been studied in great depth, there is a lack of published literature regarding the use of digital technologies by students learning in individual settings. Do musicians take their technology use into the practice room and teaching studio, or does the traditional nature of the master-apprentice teaching model promote different attitudes among musicians toward their use of technology in learning to perform? To investigate these issues, we developed the Technology Use and Attitudes in Music Learning Survey, which included adaptations of Davis’s 1989 scales for Perceived Usefulness and Perceived Ease of Use of Technology. Data were collected from an international cohort of 338 amateur, student, and professional musicians ranging widely in age, specialism, and musical experience. Results showed a generally positive attitude toward current and future technology use among musicians and supported the Technology Acceptance Model (TAM), wherein technology use in music learning was predicted by perceived ease of use via perceived usefulness. Musicians’ self-rated skills with smartphones, laptops, and desktop computers were found to extend beyond traditional audio and video recording devices, and the majority of musicians reported using classic music technologies (e.g., metronomes and tuners) on smartphones and tablets rather than bespoke devices. Despite this comfort with and access to new technology, availability reported within one-to-one lessons was half of that within practice sessions, and while a large percentage of musicians actively recorded their playing, these recordings were not frequently reviewed. Our results highlight opportunities for technology to take a greater role in improving music learning through enhanced student-teacher interaction and by facilitating self-regulated learning.

Open Access