Key Takeaways
- A single 2D video can now drive a complete 3D biomechanical analysis, turning human movement into objective data and broadening where AI-based motion capture can be used.
- At the speed of the sport, it's incredibly difficult to translate high-speed motion into actionable data—joint angles, rotational velocities, body compression.
- This requires tracking and analyzing a full three-dimensional model of the athlete, frame by frame, in real time.
What It Means
Context
At the speed of the sport, it's incredibly difficult to translate high-speed motion into actionable data: joint angles, rotational velocities, body compression. Doing so requires tracking and analyzing a full three-dimensional model of the athlete, frame by frame, in real time. In collaboration with Google DeepMind, Google Cloud built a system to provide this analysis to U.S. Olympians ahead of the Olympic Winter Games. The AI pose estimation model transforms a single 2D video into a complete 3D biomechanical analysis, plotting 63 joints in a localized coordinate system. For athletes and coaches, it provides a revolutionary competitive edge. For broader use cases, it turns human movement into objective data.

The challenge: extreme conditions break standard vision

Generating a 63-joint 3D skeleton from 2D video is a massive computational workload. Generating it without lab-grade sensors, in unpredictable outdoor environments, pushes computer vision to its limits. Snowboarders and skiers move at extreme velocities. They wear bulky gear. When they tuck for a grab or spin, limbs disappear from view. Standard pose estimation models lose tracking the moment…
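To make the metrics above concrete, here is a minimal sketch of how joint angles and their rotational velocities can be derived from a 3D skeleton. The joint indices and frame rate are assumptions for illustration; the actual 63-joint layout of the model described here is not public.

```python
import numpy as np

# Hypothetical indices into a (63, 3) keypoint array; the real model's
# joint layout is not published, so these are assumptions.
HIP, KNEE, ANKLE = 11, 13, 15

def joint_angle(frame: np.ndarray, a: int, b: int, c: int) -> float:
    """Angle at joint b (degrees) between segments b->a and b->c."""
    u = frame[a] - frame[b]
    v = frame[c] - frame[b]
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def angular_velocity(prev: np.ndarray, curr: np.ndarray, fps: float,
                     a: int, b: int, c: int) -> float:
    """Change in a joint angle between consecutive frames, in degrees/s."""
    return (joint_angle(curr, a, b, c) - joint_angle(prev, a, b, c)) * fps

# Two synthetic frames: leg fully extended, then knee bent to ~90 degrees.
straight = np.zeros((63, 3))
straight[HIP], straight[KNEE], straight[ANKLE] = [0, 1.0, 0], [0, 0.5, 0], [0, 0.0, 0]
tucked = straight.copy()
tucked[ANKLE] = [0.5, 0.5, 0]  # shin swings forward into a tuck

print(joint_angle(straight, HIP, KNEE, ANKLE))                    # ~180
print(angular_velocity(straight, tucked, 60.0, HIP, KNEE, ANKLE)) # ~-5400
```

At 60 fps, a knee folding from 180° to 90° in one frame registers as roughly -5400 degrees per second, which illustrates why frame-by-frame 3D tracking matters at these speeds.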
For builders
Generating a 63-joint 3D skeleton from a single 2D video, without lab-grade sensors and in unpredictable outdoor conditions, pushes standard pose estimation past its limits: extreme velocities, bulky gear, and self-occluded limbs break tracking.
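The "localized coordinate system" mentioned above can be sketched as expressing every joint relative to a root joint, so the pose is independent of where the athlete is in the camera's world frame. The root index is a hypothetical choice, since the model's joint layout is not public.

```python
import numpy as np

# Hypothetical root joint index; the actual layout of the 63 joints
# is not published, so this is an assumption for illustration.
PELVIS = 0

def localize(frame: np.ndarray, root: int = PELVIS) -> np.ndarray:
    """Re-express all joints relative to the root joint, making the
    pose invariant to the athlete's position in the world frame."""
    return frame - frame[root]

# A synthetic skeleton far from the camera origin.
world = np.random.default_rng(0).normal(size=(63, 3)) + [10.0, 2.0, -5.0]
local = localize(world)
print(np.allclose(local[PELVIS], 0.0))  # root joint is now the origin
```

Because the result only depends on joint offsets from the root, translating the whole skeleton (the athlete moving across the slope) leaves the localized pose unchanged.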