
Telepresence or tele-immersion technologies allow people to attend a shared meeting without being physically present in the same location. Commercial telepresence solutions available in the market today have significant drawbacks: they are very expensive, and they confine participants to the area covered by stationary cameras. In this research project, we aim to design and implement a mobile tele-immersion platform that addresses these issues by using robots with embedded cameras. In our system, the users can move around freely because the robots autonomously adjust their positions. We provide a geometric definition of what it means to get a good view of the user, and present control algorithms to maintain a good view.

A Good View

Geometric definition of the good view region w.r.t. the user

We define a good view w.r.t. the position of the user and the orientation of his/her torso geometrically as the intersection of an annulus of radii r_{min} and r_{max} with a sector of angle 2 \cdot \theta_{good} centered at the user's position. We would like to maintain the following criterion at all times.

At any time t \geq 0, there exists a robot that has a good view, i.e., one located in the good-view region w.r.t. the user's position and orientation at time t.
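As a minimal sketch of this membership test in Python (the default radii and half-angle below are illustrative, not values from our system):

```python
import math

def in_good_view(robot_xy, user_xy, user_theta,
                 r_min=1.0, r_max=3.0, theta_good=math.pi / 6):
    """Check whether a robot lies in the good-view region: the
    intersection of an annulus of radii r_min, r_max with a sector
    of half-angle theta_good about the user's torso heading."""
    dx = robot_xy[0] - user_xy[0]
    dy = robot_xy[1] - user_xy[1]
    r = math.hypot(dx, dy)
    if not (r_min <= r <= r_max):
        return False  # outside the annulus
    # Bearing from the user's torso orientation to the robot, wrapped to [-pi, pi]
    bearing = math.atan2(dy, dx) - user_theta
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return abs(bearing) <= theta_good
```

With this predicate, the criterion above is simply that at every time step at least one robot satisfies `in_good_view`.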

System Design

Flow of data and control in our system

Our mobile video-conferencing system consists of multiple iRobot Create robots (differential drive), each carrying an Asus Eee PC connected over a serial interface. These inexpensive netbooks are powerful enough to run a fully-flavored Linux operating system, and each has a built-in 1.3-megapixel camera. The laptops control the robots using the iRobot Open Interface (OI) specification. Communication between the robots and a central workstation uses an ad-hoc wireless network.
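For illustration, an OI Drive command is a single opcode byte (137) followed by a signed 16-bit big-endian velocity in mm/s and turn radius in mm (the special radius 0x7FFF means drive straight). A minimal Python sketch of building such a packet; the commented pyserial usage and port name are illustrative:

```python
import struct

def drive_packet(velocity_mm_s, radius_mm):
    """Build an iRobot Open Interface Drive command (opcode 137):
    opcode byte, then signed 16-bit big-endian velocity (mm/s)
    and turn radius (mm). Radius 0x7FFF requests straight-line motion."""
    return struct.pack('>Bhh', 137, velocity_mm_s, radius_mm)

# Sending it with pyserial (port name and baud rate are illustrative):
#   ser = serial.Serial('/dev/ttyUSB0', baudrate=57600, timeout=1)
#   ser.write(bytes([128, 131]))            # OI Start, then Safe mode
#   ser.write(drive_packet(200, 0x7fff))    # ~0.2 m/s, straight ahead
```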

The flowchart above shows the flow of data and control executed in every time-step of our system: (1) vision-based state estimation using markers, (2) classification of the combined user state estimate as either 'Rotating' or 'Linear' (translating), (3) computation of optimal robot trajectories as a function of the classified user motion pattern, and (4) transmission of control commands over the wireless network for each robot to execute.
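A minimal sketch of the classification in step (2), assuming a simple displacement threshold on recent position estimates (the threshold value is illustrative; our system's actual classifier combines estimates from multiple robots):

```python
import math

def classify_motion(positions, threshold=0.5):
    """Classify the user's recent motion from a sequence of (x, y)
    position estimates: LINEAR if the latest position deviates from
    the first by more than `threshold` meters, else ROTATION."""
    x0, y0 = positions[0]
    x1, y1 = positions[-1]
    moved = math.hypot(x1 - x0, y1 - y0)
    return 'LINEAR' if moved > threshold else 'ROTATION'
```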

Human Motion Patterns and Optimal Robot Trajectories

1. User state: ROTATION

The robots distribute evenly (b) as opposed to clustered (a) when the user is in the ROTATION state

This motion pattern accounts for the case when the user moves around by turning his or her torso to face different directions but does not deviate from the initial position by more than a threshold distance. If the robots followed the user around, the resulting view would be very choppy, which often disorients the viewer. We therefore require the robots to remain stationary during this phase. Using game-theoretic arguments, it can be shown that the optimal strategy is for the robots to distribute themselves evenly around a circle centered at the user's location.
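The even distribution can be sketched as follows (a minimal Python example; placing robot k at angle 2\pi k / n is one valid even spacing, and the choice of starting angle is arbitrary here):

```python
import math

def rotation_targets(user_xy, n, radius):
    """Target positions for n robots spaced evenly on a circle of the
    given radius centered at the user (ROTATION state): robot k sits
    at angle 2*pi*k/n around the user."""
    ux, uy = user_xy
    return [(ux + radius * math.cos(2 * math.pi * k / n),
             uy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```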

2. User state: LINEAR

Robot trajectories when the user is in the LINEAR state: the best-view robot aligns with the user's direction of motion while the others redistribute around the estimated final location

When the user breaches the threshold distance set in the ROTATION state, e.g., when moving from one point to another inside a room, we call this motion pattern LINEAR. The robot with the best view aligns its camera axis to face the user's direction of motion so as to obtain the best possible front view. The other robots distribute themselves evenly around a circle centered at the estimated final user location, with their exact angles on that circle determined by the position of the best-view robot.

The figure shown above is a time-series progressing from left to right. The red dot is the user moving along a straight line. Trajectories are shown for a system with n = 3 robots. As the figure shows, the best-view robot aligns with the user's line of motion and the other two maintain an even distribution around a circle centered at the user's estimated final location.
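One plausible reading of this placement rule, sketched in Python: the best-view robot takes the circle slot aligned with the motion direction, and the remaining robots fill the other slots of an even n-robot spacing. This slot assignment is our assumption for illustration; the exact angles in our system depend on the best robot's current location.

```python
import math

def linear_targets(final_xy, motion_theta, n, radius):
    """Target positions in the LINEAR state: robot 0 (best view) sits
    on the user's line of motion at bearing motion_theta, facing the
    approaching user head-on; robots 1..n-1 fill the remaining slots
    of an even n-robot distribution around final_xy."""
    fx, fy = final_xy
    return [(fx + radius * math.cos(motion_theta + 2 * math.pi * k / n),
             fy + radius * math.sin(motion_theta + 2 * math.pi * k / n))
            for k in range(n)]
```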

Using the trajectories solved for above, we wrote a simulator using OpenGL + GLUT + lib3ds to render the user and the three robots, and to use OpenGL's camera views to show what each robot would see. The following is a snapshot of the simulator.

A snapshot of our simulator with individual robot views

A comparison of different trajectories and an animation of the whole system are part of ongoing work.

Related publications

N. Karnad and V. Isler. A Multi-Robot System for Unconfined Video-Conferencing. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska. To appear.


This material is based upon work supported by the National Science Foundation under Grant No. 0916209.