Abstract

Reviewing athletic performance is a critical part of modern sports training, but snapshots showing only part of a course or exercise can be misleading, while travelling cameras are expensive. In this paper we describe a system that merges the output of many autonomous, inexpensive camera nodes distributed around a course to reliably synthesize tracking shots of multiple athletes training concurrently. The system handles uncontrolled lighting, athlete occlusions and overtaking/pack motion, and compensates for the quirks of cheap image sensors. It is entirely automated, inexpensive and scalable, and provides output in near real-time, allowing coaching staff to give immediate and relevant feedback on a performance. Requiring no alteration to existing training exercises has boosted the system's uptake by coaches, with over 100,000 videos recorded to date.

Highlights

  • Training for many sports, such as athletics, rowing, cycling, skiing, and swimming, is conducted over a fixed course

  • In this paper we describe a system merging the output of many autonomous inexpensive camera nodes distributed around a course to reliably synthesize tracking shots of multiple athletes training concurrently

  • We investigate reliably simulating a physically tracking camera through combining the output of many static cameras



Introduction

Training for many sports, such as athletics, rowing, cycling, skiing, and swimming, is conducted over a fixed course. Capturing an individual’s performance is challenging: static testing (as in Figure 1) imposes unrealistic constraints that limit how well in-motion conditions are reproduced; static cameras provide only potentially misleading snapshots of the whole effort; and rail-mounted cameras that physically follow an athlete are expensive to install and maintain, difficult to automate, and do not scale well to multiple concurrent athletes. We describe the design of our processing system, first covering the decentralized trigger, capture and analysis elements, then the centralized merging of footage to create a tracking shot. Following this we present example output and conclusions on the performance of our implementation.
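The paper does not give implementation details at this point, but the pipeline it outlines (decentralized trigger and capture on each camera node, followed by centralized merging into a single tracking shot) might be sketched roughly as follows in Python. The class names, timestamp fields and the greedy hand-off rule below are assumptions made purely for illustration; the actual system also deals with multiple concurrent athletes, occlusion and overtaking, which this sketch ignores.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only: these names and the hand-off rule are
# assumptions for the sketch, not the paper's actual implementation.

@dataclass
class Clip:
    """A segment recorded by one autonomous camera node."""
    node_id: int               # which camera node captured it
    course_position_m: float   # where along the course the node sits
    start_s: float             # capture start, on a shared clock
    end_s: float               # capture end

@dataclass
class CameraNode:
    node_id: int
    course_position_m: float
    clips: List[Clip] = field(default_factory=list)

    def on_motion_trigger(self, start_s: float, end_s: float) -> None:
        """Decentralized capture: the node records whenever its own
        trigger (e.g. detected motion) fires."""
        self.clips.append(
            Clip(self.node_id, self.course_position_m, start_s, end_s))


def merge_tracking_shot(nodes: List[CameraNode]) -> List[Clip]:
    """Centralized merge: order all clips along the course and, for each
    stretch of time, keep footage from the node the athlete is passing,
    handing off to the next node as soon as it has coverage."""
    clips = sorted((c for n in nodes for c in n.clips),
                   key=lambda c: (c.course_position_m, c.start_s))
    shot: List[Clip] = []
    for clip in clips:
        if shot and clip.start_s < shot[-1].end_s:
            # Overlapping coverage: cut the previous clip at the hand-off.
            prev = shot[-1]
            shot[-1] = Clip(prev.node_id, prev.course_position_m,
                            prev.start_s, clip.start_s)
        shot.append(clip)
    return shot


if __name__ == "__main__":
    # Three nodes spaced along a course; an athlete passes each in turn.
    a, b, c = CameraNode(0, 0.0), CameraNode(1, 50.0), CameraNode(2, 100.0)
    a.on_motion_trigger(0.0, 8.0)
    b.on_motion_trigger(6.0, 14.0)   # overlaps node 0 -> hand-off at t=6.0
    c.on_motion_trigger(12.0, 20.0)
    for seg in merge_tracking_shot([a, b, c]):
        print(f"node {seg.node_id}: {seg.start_s:.1f}s - {seg.end_s:.1f}s")
```

In a real system the hand-off point would presumably be chosen from the athlete's detected position in each view rather than from raw clip overlap, but the overall structure sketched here, autonomous capture on each node followed by a single time-ordered cut list assembled centrally, matches the pipeline the introduction describes.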

