G4_01136.mp4
The video belongs to a collection designed to help AI models understand how humans perform daily tasks. It was filmed using head-mounted cameras (like GoPro or specialized eye-tracking glasses) to capture exactly what the subject sees.

Dataset: GTEA Gaze+
Perspective: Egocentric (first-person)
Primary focus: Meal preparation and kitchen activities
🎥 This video is often cited in papers involving Transformers designed for video understanding. It serves as a "real-world" challenge because of motion blur, hand occlusions, and the visual complexity of a cluttered kitchen.
Researchers use clips like this one to study challenges such as:

- Identifying exactly when an action (like "cutting") starts and ends.
- Modeling how a person's eyes move toward an object before their hands touch it.
- Recognizing kitchen tools and ingredients from shifting, shaky angles.
- Handling the high frequency of hand-to-object contact (e.g., opening jars, slicing vegetables, pouring liquids).

If you tell me more about your specific project, I can provide:

- Details for this specific timestamp (if available)
- Code snippets for loading GTEA Gaze+ videos in Python
- Related research papers that utilize the Group 4 dataset
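For the first challenge above (localizing when an action starts and ends), annotations are typically given as start/end timestamps in seconds, which then have to be mapped onto frame indices of the clip. The sketch below shows that conversion under stated assumptions: the 30 fps default and all helper names (`seconds_to_frame`, `segment_to_frames`) are hypothetical, not taken from the dataset's documentation. Reading the frames themselves is usually done with OpenCV's `cv2.VideoCapture`, which also reports the real frame rate via `CAP_PROP_FPS`.

```python
# Hypothetical helpers for mapping action annotations (start/end in
# seconds) onto frame indices of an egocentric clip such as G4_01136.mp4.
# The 30 fps default is an assumption; check the actual video metadata.

def seconds_to_frame(t: float, fps: float = 30.0) -> int:
    """Convert a timestamp in seconds to the nearest frame index."""
    return round(t * fps)

def segment_to_frames(start: float, end: float, fps: float = 30.0) -> range:
    """Return the frame indices covered by an action segment.

    `end` is treated as exclusive, so back-to-back segments
    (e.g. 2.0-4.5 s and 4.5-6.0 s) do not share a frame.
    """
    return range(seconds_to_frame(start, fps), seconds_to_frame(end, fps))

# Example: a "cutting" action annotated from 2.0 s to 4.5 s at 30 fps
frames = segment_to_frames(2.0, 4.5)
print(frames.start, frames.stop, len(frames))  # 60 135 75
```

Treating the end timestamp as exclusive keeps adjacent segments disjoint, which matters when per-frame labels are built from overlapping annotation files.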