Revision as of 15:14, December 12, 2009 by Karnad (talk | contribs) (Human Motion Patterns and Optimal Robot Trajectories)


Telepresence or tele-immersion technologies allow people to attend a shared meeting without being physically present in the same location. Commercial telepresence solutions on the market today have significant drawbacks: they are very expensive, and they confine users to the area covered by stationary cameras. In this research project, we aim to design and implement a mobile tele-immersion platform that addresses these issues by using robots with embedded cameras. In our system, users can move around freely because the robots autonomously adjust their locations. We provide a geometric definition of what it means to get a good view of the user, and present control algorithms to maintain such a view.

System Design

Flow of data and control in our system

Our mobile video-conferencing system consists of multiple iRobot Create robots (differential drive), each carrying an Asus Eee PC connected over a serial interface. These inexpensive netbooks are powerful enough to run a full-featured Linux operating system, and they include a built-in 1.3 megapixel camera. The laptops control the robots using the iRobot Open Interface (OI) specification. Communication between the robots and a central workstation uses an ad-hoc wireless network.
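As a minimal sketch of the laptop-to-robot link, the helper below packs an OI "Drive Direct" command as raw bytes. The opcodes (Start = 128, Safe = 131, Drive Direct = 145) and the big-endian 16-bit signed velocity encoding come from the Create Open Interface specification; the function name and overall structure are illustrative, not the project's actual code.

```python
import struct

# Opcodes from the iRobot Create Open Interface (OI) specification.
START, SAFE, DRIVE_DIRECT = 128, 131, 145

def drive_direct(right_mm_s, left_mm_s):
    """Build a Drive Direct command: opcode 145 followed by the right and
    left wheel velocities as 16-bit signed big-endian integers (mm/s)."""
    return struct.pack('>Bhh', DRIVE_DIRECT, right_mm_s, left_mm_s)

# Example: drive straight ahead at 200 mm/s on both wheels.
cmd = drive_direct(200, 200)
```

In practice the resulting bytes would be written to the Create's serial port (e.g. via a serial library) after sending the Start and Safe opcodes to put the robot into a controllable mode.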

The flowchart above shows the flow of data and control executed in every time step of our system: (1) vision-based state estimation using markers, (2) the individual state estimates are combined to classify the user's motion as either 'Rotating' or 'Linear' (translating), (3) optimal robot trajectories are computed as a function of the classified motion pattern, and (4) control commands are sent over the wireless network for each robot to execute.
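The classification in step (2) can be sketched as a simple displacement test: the user is labeled 'Rotating' while they stay within a threshold distance of their initial position, and 'Linear' once they move beyond it. The function name and the 0.5 m default threshold are illustrative assumptions, not values from the paper.

```python
import math

def classify_motion(positions, threshold=0.5):
    """Classify a user's motion from a history of (x, y) positions.

    Returns 'Rotating' if every observed position lies within `threshold`
    meters of the initial position, and 'Linear' otherwise.
    Threshold is a hypothetical example value.
    """
    x0, y0 = positions[0]
    max_dev = max(math.hypot(x - x0, y - y0) for x, y in positions)
    return 'Rotating' if max_dev <= threshold else 'Linear'

# A user turning in place stays near the start; a walking user does not.
turning = [(0.0, 0.0), (0.1, 0.1), (0.05, -0.1)]
walking = [(0.0, 0.0), (0.8, 0.0), (1.6, 0.1)]
```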

Human Motion Patterns and Optimal Robot Trajectories

1. User state: ROTATION

This motion pattern accounts for the case when the user turns their torso to face different directions but does not deviate from their initial position by more than a threshold distance. If the robots followed the user around, the resulting view would be very choppy, which often causes disorientation on the viewer's side. We therefore require the robots to remain stationary during this phase. Using game-theoretic arguments, it can be shown that the optimal strategy is to distribute the robots evenly around a circle centered at the user's location.
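Distributing n robots evenly around a circle centered at the user amounts to placing them at angular intervals of 2π/n. The sketch below computes those positions; the function name and the choice of starting angle are illustrative assumptions.

```python
import math

def circle_positions(center, radius, n):
    """Place n robots evenly on a circle of the given radius around the
    user's position `center`, at angular spacing 2*pi/n (starting angle
    is arbitrary; 0 radians is chosen here for simplicity)."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Example: four robots at 90-degree spacing, 2 m from the user.
goals = circle_positions((0.0, 0.0), 2.0, 4)
```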

Simulations and Experiments