A vision-based model of crowd navigation
Abstract
Humans navigate crowds daily, from brief interactions between pedestrians on a sidewalk to jostling through a busy train station. Research in fields as diverse as computer graphics, physics, evacuation planning, and mathematical biology has simulated emergent collective behavior in flocks, schools, and crowds (Couzin et al., 2002; Helbing & Molnár, 1995; Reynolds, 1987; Vicsek & Zafeiris, 2012). However, few of these models are built from experimental evidence about the visual information that governs human pedestrian behavior in naturalistic environments. This project, funded by the Link Foundation, empirically studied and computationally modeled how individuals in a crowd use visual information to navigate, and how those local interactions give rise to global patterns of behavior. Grounding crowd behavior in the visual control of locomotion will yield models that more closely simulate actual human behavior, with applications in urban planning, evacuation safety, and architectural design.
The work funded this year has addressed three main aims: (1) derive equations that simulate crowd following with visual information as input (an illustrative form is sketched below); (2) compare 2D and 3D sources of visual information for crowd following; and (3) understand the role of optical occlusion in navigation.
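For context on aim (1), following models in the pedestrian literature are commonly written as first-order dynamics in which a follower's speed and heading are attracted toward those of visible neighbors, weighted by distance. The sketch below is one such generic control law, not the equations derived in this project; the symbols (speed $s_i$, heading $\phi_i$, visible-neighbor set $N_i$, inter-pedestrian distance $d_{ij}$, and gain constants $k$, $c$) are introduced here purely for illustration.

\begin{align}
\dot{s}_i &= \frac{k}{|N_i|} \sum_{j \in N_i} e^{-c\,d_{ij}} \left(s_j - s_i\right) && \text{speed matching; influence decays with distance}\\
\dot{\phi}_i &= \frac{k}{|N_i|} \sum_{j \in N_i} e^{-c\,d_{ij}} \left(\phi_j - \phi_i\right) && \text{heading alignment toward neighbors}
\end{align}

A vision-based variant of such a law would presumably replace the 3D quantities ($s_j$, $d_{ij}$) with optical variables available from the follower's viewpoint, which is the comparison at issue in aim (2).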