OnurTekdas


About Me

Onur.jpg
I am a Software Engineer at Google - Mountain View. I received my Ph.D. from the Department of Computer Science and Engineering at the University of Minnesota-Twin Cities in June 2011, advised by Prof. Volkan Isler. My research interests are data collection from sensor nodes with robots and network formation of mobile robots. Please check out the Research section for more information about my research interests. I have a special interest in implementing robotic, computer vision, and machine learning systems; please see the Projects section for a list of selected projects.

Contact Information

t e k d a s [ a t ] c s [ d o t ] u m n [ d o t ] e d u

Education

University of Minnesota-Twin Cities, Minneapolis, MN
Ph.D., Computer Science and Engineering, May 2011
  • Thesis: Improving Communication in Networked Systems using Mobile Robots
Rensselaer Polytechnic Institute, Troy, NY
M.S., Computer Science, August 2008
  • Thesis: Placement and Connectivity Maintenance Algorithms for Robotic Sensor Networks
Middle East Technical University, Ankara, Turkey
B.S., Computer Engineering, June 2006

Research

Data Collection from Sensors with Robots
One of the most popular Wireless Sensor Network (WSN) applications is environmental monitoring. To observe long-term spatial and temporal phenomena, WSNs need to be deployed for years and cover large geographic areas. If the locations of interest are far apart from each other, a large number of nodes may be needed to act as relays. Moreover, depending on the topology, some nodes might carry heavy traffic, which becomes a bottleneck for the lifetime of the WSN. We use mobile robots as Data Mules: robots visit the sensor nodes, collect their data, and carry the data to a gateway. With this approach, relay nodes are no longer necessary and the energy consumption of the nodes is minimized. We consider the following research challenges:
  • What are efficient motion strategies for the robot to collect data from the sensor nodes?
  • How can we further reduce the energy consumption of the sensor nodes by having the robot search for a good location from which to start the download?
  • How can we adapt the beacon schedules of the sensor nodes to the uncertainty in the robots' motion?

Please see the Data Mules page for more information. Related publications: [J3][C2][J4].
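The robot's visiting order over a sparse sensor field is a travelling-salesperson-style subproblem. As a minimal illustrative sketch only (not the motion strategies from the papers above; the function names and the planar-distance model are assumptions), a greedy nearest-neighbor tour starting and ending at the gateway:

```python
import math

def nearest_neighbor_tour(gateway, sensors):
    """Greedy data-mule tour: start at the gateway, repeatedly visit the
    nearest unvisited sensor, then return to the gateway."""
    tour = [gateway]
    remaining = list(sensors)
    pos = gateway
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(pos, s))
        remaining.remove(nxt)
        tour.append(nxt)
        pos = nxt
    tour.append(gateway)
    return tour

def tour_length(tour):
    """Total Euclidean length of the closed tour."""
    return sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))

tour = nearest_neighbor_tour((0, 0), [(3, 0), (1, 0), (2, 0)])
print(tour)  # → [(0, 0), (1, 0), (2, 0), (3, 0), (0, 0)]
```

A greedy tour is only a heuristic; the publications above study the motion-planning problem in much more depth.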

Network Formation of Mobile Robots
The traditional approach to providing network connectivity is to deploy a network of static wireless routers that covers the entire area of interest. By exploiting the mobility and communication capabilities of mobile robots, we can provide appealing solutions in scenarios where static networks would be costly, if not infeasible.
  • One such scenario is when the user requiring connectivity is a mobile entity (e.g. a human or a robot). If the area the target moves in is large compared to the wireless range of the routers, robotic routers (mobile robots with wireless communication capabilities) can form an adaptive wireless network and provide communication services to mobile users on demand. We study motion strategies for the robotic routers that provide continuous communication to the user while minimizing the number of routers.
  • Mobile robots are also helpful when a communication bridge between two locations is needed and no underlying communication infrastructure exists. We consider the bi-criteria optimization problem whose objectives are minimizing the number of mobile hubs and minimizing the distance traveled by the hubs.

Please see the Robotic Routers page for more information. Related publications: [J2][C1][C3][C4].

Localization and Navigation
SensorPlacement.jpg
Robots operating in a workspace can localize themselves by querying nodes of a sensor-network deployed in the same workspace. We address the problem of computing the minimum number and placement of sensors so that the localization uncertainty at every point in the workspace is less than a given threshold.

Please see the Sensor Placement page for more information. Related publications: [J1][C5].

Publications

Journal

  1. O. Tekdas, V. Isler. Sensor Placement Algorithms for Triangulation Based Localization, IEEE Transactions on Automation Science and Engineering, July 2010. pdf bibtex
  2. O. Tekdas, W. Yang, V. Isler. Robotic Routers: Algorithms and Implementation, The International Journal of Robotics Research, May 2009. pdf bibtex
  3. O. Tekdas, J. H. Lim, A. Terzis, V. Isler. Using Mobile Robots to Harvest Data from Sensor Fields, IEEE Wireless Communications, Special Issue on Wireless Communications in Networked Robotics, February 2009. pdf bibtex
  4. D. Bhadauria, O. Tekdas, V. Isler. Robotic Data Mules for Collecting Data over Sparse Sensor Fields, Journal of Field Robotics, To Appear.

Conference

  1. O. Tekdas, P. Plonski, N. Karnad, V. Isler. Maintaining Connectivity in Environments with Obstacles, IEEE International Conference on Robotics and Automation, May 2010. pdf bibtex
  2. O. Tekdas, N. Karnad, V. Isler. Efficient Strategies for Collecting Data from Wireless Sensor Network Nodes using Mobile Robots, International Symposium on Robotics Research, August, 2009. pdf bibtex
  3. O. Tekdas, Y. Kumar, V. Isler, R. Janardan. Building a Communication Bridge with Mobile Hubs, International Workshop on Algorithmic Aspects of Wireless Sensor Networks, July 2009. pdf bibtex
  4. O. Tekdas, V. Isler. Robotic Routers, IEEE International Conference on Robotics and Automation, May 2008. pdf bibtex
  5. O. Tekdas, V. Isler. Sensor Placement Algorithms for Triangulation Based Localization, IEEE International Conference on Robotics and Automation, April 2007. pdf bibtex

J: Journal, C: Conference

Implemented Libraries

Create.jpg
Implemented a control interface for iRobot Create robots. You can download the source code from here.

Selected Projects

University of Minnesota-Twin Cities


Cyclops Autonomous Navigation: Implemented an autonomous navigation system for the Cyclops robot

Cyclops is an outdoor robot we developed in the RSN lab. It is based on an RC truck (Tamiya TXT-1). A Robostix microcontroller is interfaced with the motor driver and the steering servo to control the robot. For obstacle avoidance we placed four IR sensors at the front, and we use a compass and GPS to estimate the robot's pose during navigation. A laptop computes and executes the motion plan. Since various sensors have to be read in parallel, I implemented a multi-threaded system: a separate thread reads from and writes to each device (compass, GPS, Robostix, and base mote) and updates its state in the main thread. Since Java provides an advanced threading scheme, I used Java to implement the high-level application. The system uses GPS and compass measurements to compute a desired trajectory to the target location; if the robot drifts off the trajectory, it recomputes the trajectory and follows it. I also implemented a server/client application and a GUI to visualize the robot's state from a remote computer.
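The actual application is in Java, but the one-thread-per-device polling pattern can be sketched in Python. This is an illustration only: the reader lambdas below are stand-ins for the real compass/GPS/Robostix/mote drivers, and all names are assumptions.

```python
import threading, time, random

class RobotState:
    """Shared state updated by one polling thread per device."""
    def __init__(self):
        self._lock = threading.Lock()
        self._readings = {}

    def update(self, device, value):
        with self._lock:
            self._readings[device] = value

    def snapshot(self):
        with self._lock:
            return dict(self._readings)

def poll(device, read_fn, state, stop, period=0.01):
    """Poll one device at a fixed period until asked to stop."""
    while not stop.is_set():
        state.update(device, read_fn())
        time.sleep(period)

state = RobotState()
stop = threading.Event()
# Stand-in readers; the real threads read the compass, GPS, Robostix and base mote.
readers = {"compass": lambda: random.uniform(0, 360),
           "gps": lambda: (44.97, -93.23)}
threads = [threading.Thread(target=poll, args=(d, f, state, stop))
           for d, f in readers.items()]
for t in threads:
    t.start()
time.sleep(0.1)
stop.set()
for t in threads:
    t.join()
print(sorted(state.snapshot()))  # → ['compass', 'gps']
```

The lock around the shared dictionary plays the role of the synchronization Java's threading scheme provides.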

SLAM (Simultaneous Localization and Mapping): Implemented SLAM on Player/Stage and demonstrated with Pioneer robots

SLAM is a technique used for autonomous robot navigation: the robot builds a map of an unknown environment (or updates a provided map) from the observations it makes while navigating, and uses the same observations to locate itself on the extracted map. In this project, my group members (Pratap Tokekar and Deepak Bhadauria) and I implemented SLAM on Pioneer II robots. We formulate the problem as an Extended Kalman Filter (EKF): the robots' encoders propagate the state, and laser observations update it. Our observations are landmarks and the robot's orientation with respect to the walls. Since they are computationally inexpensive, we only use corners (e.g. of walls and doors) as landmarks. The robot executes a wall-following controller and closes the loop when it finishes a complete tour around a hallway.

We implemented the system in C++ using Player/Stage. Player is a network server for robot control and Stage is a 2D mobile-robot simulator. We process the laser scan (a point cloud) to extract line segments, and from the line segments we extract the corners of the walls and doors in the environment. These corners serve as stationary landmarks for the state update. We also use the robot's orientation relative to the wall for the state update; this second measurement drastically reduces the uncertainty of the updated estimate. When the robot closes the loop (sees the first landmark again), the map-building process ends and the robot's uncertainty is reduced further. The top video shows our implementation in the Stage simulator; the robot's uncertainty ellipse clearly shrinks when it closes the loop. The bottom video shows our demonstration on Pioneer II robots on the 5th floor of the EECS building, where the robot successfully mapped the environment, estimated its position, and closed the loop after completing a tour.
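The real system is in C++, but the EKF machinery can be sketched in Python. This is a deliberately simplified localization-only version against a single known corner landmark (the full SLAM state also stacks landmark positions); the unicycle motion model, range-bearing measurement model, and noise matrices are illustrative assumptions.

```python
import numpy as np

def ekf_predict(mu, Sigma, v, w, dt, Q):
    """Propagate pose [x, y, theta] with a unicycle odometry model."""
    x, y, th = mu
    mu_new = np.array([x + v*dt*np.cos(th), y + v*dt*np.sin(th), th + w*dt])
    F = np.array([[1, 0, -v*dt*np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  v*dt*np.cos(th)],
                  [0, 0, 1]])
    return mu_new, F @ Sigma @ F.T + Q

def ekf_update_range_bearing(mu, Sigma, z, landmark, R):
    """Update with a range-bearing measurement of a known corner landmark."""
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx*dx + dy*dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.array([[-dx/np.sqrt(q), -dy/np.sqrt(q), 0],
                  [ dy/q,          -dx/q,         -1]])
    y_res = z - z_hat
    y_res[1] = (y_res[1] + np.pi) % (2*np.pi) - np.pi  # wrap the bearing residual
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)                 # Kalman gain
    return mu + K @ y_res, (np.eye(3) - K @ H) @ Sigma
```

A measurement consistent with the prediction leaves the mean unchanged but still shrinks the covariance, which is exactly the loop-closure effect visible in the videos.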

Vision based color tracking for iRobot Create: Implemented an autonomous tracking system which allows iRobot Create to track a colored object

This program was implemented in C++ using the OpenCV library to detect colored objects with a netbook's built-in camera. A PD controller makes the robot track the object smoothly.
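A minimal Python sketch of the PD idea (the gains, the pixel-error convention, and the class name are illustrative assumptions; the actual tracker is in C++ with OpenCV):

```python
class PDController:
    """PD controller on the horizontal offset of the tracked blob,
    producing a turn-rate command for the robot."""
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_error = None

    def step(self, error, dt):
        # Derivative term from the change in error since the last frame.
        d = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * d

pd = PDController(kp=0.5, kd=0.1)
# error = blob_center_x - image_center_x (pixels); positive means turn one way
print(pd.step(40, 0.1))  # → 20.0 (first call: pure P term)
print(pd.step(20, 0.1))  # ≈ -10.0 (P term 10, D term ≈ -20 damps the motion)
```

The derivative term is what keeps the turning smooth as the blob approaches the image center.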

Natural.jpg
Character Recognition: Implemented a system for character recognition from natural scenes

This project was a joint project with Nikhil Karnad for the Pattern Recognition course. We studied the effect of different features on classification performance for character recognition in natural scenes. Some of the features we tried were raw grayscale pixel intensities, shape context descriptors, and wavelet features. We explored some state-of-the-art kernel learning algorithms to automatically trade off the discriminative power and invariance of the features. See the report for more details.

Kpca.jpg
Dimensionality Reduction: Implemented and compared PCA and kernel PCA

I implemented PCA and Gaussian kernel PCA to reduce the dimensionality of the MNIST handwritten digit database. The left figure shows the three best kPCA components of the 0, 8, and 9 digit data; the right figure shows the scatter plot of the images on the two best components. The 0's are clearly separated from the 8's and 9's even when projected onto only the two best components. I used a neural network to classify these digits and obtained a 1% error rate.
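A NumPy sketch of Gaussian kernel PCA as described (the kernel width and normalization details are illustrative assumptions): build the RBF kernel matrix, double-center it in feature space, eigendecompose, and project onto the leading components.

```python
import numpy as np

def gaussian_kpca(X, n_components, gamma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel: eigendecompose the
    double-centered kernel matrix and project the training points."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # center the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                    # projections of the training points

X = np.random.default_rng(0).standard_normal((20, 5))
Y = gaussian_kpca(X, 3, gamma=0.1)
print(Y.shape)  # → (20, 3)
```

With gamma chosen well, non-linearly separable clusters (like the digit classes) become separable in the projected space, which linear PCA cannot achieve.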

Regression.jpg
Regression: Implemented and compared various regression techniques

I implemented the following regression techniques with radial basis functions:

  • Ordinary Least Squares and Ridge regression with cubic basis functions,
  • Ordinary Least Squares, Lasso, and SV regression with Gaussian Radial Basis Functions (RBFs).

The left figure shows SV regression on the sample data. I compared the performance of the regression techniques with cross validation; the best error rate was achieved by Lasso regression with Gaussian RBFs. I also implemented Gaussian Process regression, shown in the right figure.
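As one hedged example of the RBF-based techniques above, a NumPy sketch of ridge regression with Gaussian RBF features on synthetic data (the data, centers, width, and regularization strength are all illustrative assumptions):

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions plus a bias column."""
    Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    return np.hstack([np.ones((len(x), 1)), Phi])

def ridge_fit(Phi, y, lam):
    """Closed-form ridge regression: (Phi^T Phi + lam I)^-1 Phi^T y."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)   # noisy sine as sample data
centers = np.linspace(0, 2 * np.pi, 10)
Phi = rbf_features(x, centers, width=0.7)
w = ridge_fit(Phi, y, lam=1e-3)
resid = y - Phi @ w
print(np.sqrt(np.mean(resid**2)))  # RMS residual, roughly the 0.1 noise level
```

Swapping the squared-error penalty for an L1 penalty gives the Lasso variant, and an epsilon-insensitive loss gives SV regression; the feature construction stays the same.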

Classification.jpg
Classification: Implemented and compared various classification techniques on MNIST handwritten digit database

In this project, I used the MNIST handwritten digit database for binary classification (e.g. 0 vs. 8). Fisher Linear Discriminant Analysis (LDA) is used to reduce the dimension from 28x28 to 1. The left figure shows the scatter plot of the data for the 0, 8, and 9 digits. For 0 vs. 8, I found a threshold value that yields an error rate of 0.65%, the best result among the techniques I tried. Logistic regression on the same problem gave an error rate of 2.5%. I also used Matlab's svm function to run SVM classifiers (with a linear kernel and a polynomial kernel) on this data set; the error rate was about 1% for both kernel types. The right figure shows the support vectors, which correspond to digits that look similar to the opposite class.
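A NumPy sketch of the Fisher LDA projection plus a midpoint threshold, on synthetic two-class data (the data, the small regularizer on the scatter matrix, and the threshold rule are illustrative assumptions; the project itself used MNIST):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher LDA direction w = Sw^-1 (mu1 - mu0), projecting to 1-D."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))   # within-class scatter
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)

rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 1.0, size=(200, 2))   # class 0
X1 = rng.normal([4, 4], 1.0, size=(200, 2))   # class 1
w = fisher_lda_direction(X0, X1)
thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()  # midpoint threshold
pred = np.concatenate([X0 @ w, X1 @ w]) > thresh
truth = np.concatenate([np.zeros(200), np.ones(200)]).astype(bool)
print((pred == truth).mean())  # well-separated classes → accuracy near 1
```

On MNIST the same recipe applies with 784-dimensional vectors; the threshold is then tuned on the projected 1-D values instead of taking the midpoint.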

Localization using Multiview Geometry: Implemented the 8-point algorithm; the computed camera transformation is combined with odometry to localize iRobot Create

This project was a joint project with Nikhil Karnad for the Computer Vision course. An iRobot Create robot is programmed to drive straight and capture frames along the way. The pixel points between two consecutive frames are matched using David Lowe's SIFT feature matching program. The 10 best matches are selected with RANSAC and fed to the 8-point algorithm, and the four-way ambiguity is resolved using the odometry information. A Kalman filter fuses the odometry measurements with the motion estimate computed by the 8-point algorithm.
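A NumPy sketch of the normalized 8-point algorithm for the fundamental matrix (the Hartley normalization and the rank-2 enforcement are the standard steps; function names are illustrative):

```python
import numpy as np

def eight_point(p1, p2):
    """Normalized 8-point algorithm: estimate F with p2_h^T F p1_h = 0
    for matched image points p1, p2 given as (N, 2) arrays."""
    def normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s*c[0]], [0, s, -s*c[1]], [0, 0, 1]])
        ph = np.hstack([p, np.ones((len(p), 1))])
        return (T @ ph.T).T, T

    q1, T1 = normalize(p1)
    q2, T2 = normalize(p2)
    # One linear constraint per correspondence on the 9 entries of F.
    A = np.array([[x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1]
                  for (x1, y1, _), (x2, y2, _) in zip(q1, q2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                   # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                             # undo the normalization
    return F / np.linalg.norm(F)
```

With calibrated cameras, the same machinery yields the essential matrix, whose decomposition gives the camera transformation up to the four-way ambiguity mentioned above.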

Kalman.jpg
Outdoor robot localization: Kalman Filter and Particle Filter are compared for robot localization

In this project, we assume the robot has only an IMU (Inertial Measurement Unit: gyro + accelerometer) and GPS to determine its position in world coordinates. Due to the uncertainty in the measurements (about 3 m for GPS), the measurements from these sensors are fused to obtain a better position estimate. Assuming the bearing information is accurate (we found empirically that relying on the gyro alone provides a bearing estimate accurate to +/- 2 degrees), the navigation system is modeled as a linear system with Gaussian noise. We compared the performance of the Kalman Filter and the Particle Filter. Although the Kalman Filter is the optimal linear estimator under these assumptions, the Particle Filter, which can also handle non-linear systems and arbitrary noise distributions, performed very close to it.
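One predict/update cycle of such a filter, reduced to one dimension for illustration (state = [position, velocity], accelerometer as the control input, GPS as the position measurement; the noise values are assumptions, with r = 9 matching a 3 m GPS standard deviation):

```python
import numpy as np

def kf_step(x, P, a, z, dt, q, r):
    """One Kalman step: propagate [pos, vel] with accelerometer input a,
    then update with a GPS position measurement z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    x = F @ x + B * a                      # predict
    P = F @ P @ F.T + q * np.eye(2)
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r                    # innovation covariance (GPS noise)
    K = P @ H.T / S                        # Kalman gain
    x = x + (K * (z - H @ x)).ravel()      # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

x1, P1 = kf_step(np.zeros(2), 10.0 * np.eye(2), a=0.0, z=2.5, dt=0.1, q=1e-4, r=9.0)
# With a diffuse prior, the estimate moves partway toward the GPS reading.
```

A particle filter solving the same problem replaces the Gaussian prediction with sampled particles and the update with importance weighting, which is why the two agree so closely in the linear-Gaussian case.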

Stereo.jpg
Stereo Reconstruction: Two views from calibrated cameras are used to reconstruct the scene shown in the left figure

The point matches between the two views are generated manually. Computing the fundamental matrix directly behaves rather poorly, so we normalized the point matches first. The 8-point algorithm is implemented to calculate the fundamental matrix. The top two figures show the selected points in the first image and their corresponding epipolar lines in the second image. Using the calibration matrices, the essential matrix is calculated, and the four-way ambiguity is resolved using SVD (Singular Value Decomposition). The bottom left figure shows the selected polygons in the original image, and the bottom right figure shows the reconstructed polygons.

PhotometricStereo.jpg
Photometric Stereo: Calibrated face images taken under various lighting conditions are used to reconstruct the 3D face

The goal of this exercise was to reconstruct a surface given multiple photographs of the same scene under several known lighting conditions. I used Peter Belhumeur's face image database. My procedure was to extract the normal and albedo at each pixel and reconstruct the surface from the normals. The left image shows an original photograph and the right image shows the 3D reconstruction of the person's face.
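The per-pixel normal/albedo step can be sketched as a least-squares problem: with k images under known distant lighting directions L, solve L g = I for g = albedo * normal at each pixel (a Lambertian sketch that ignores shadows and specularities; array shapes are assumptions):

```python
import numpy as np

def photometric_stereo(I, L):
    """Recover per-pixel surface normals and albedo from k images under
    known distant lighting. I: (k, n_pixels) intensities, L: (k, 3)
    unit lighting directions. Solves L g = I in least squares."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g: (3, n_pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)     # unit normals per pixel
    return normals, albedo
```

The surface is then reconstructed by integrating the normal field, e.g. from the gradients p = -n_x/n_z, q = -n_y/n_z.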

Wii robot controller: Implemented a program in C/C++ to control iRobot Create robots using a Wii remote controller

libcwiimote is used for communication with the Wii controller, and libcreate (the library I wrote, above) is used to control the robot. The extern "C" keyword is very useful for combining C and C++ code. The controller works like a joystick rooted at the bottom of the Wii remote. Check out the video to see how it works.

RF Based Localization: Implemented a program in Matlab where RSSI readings from telosb motes are used to estimate the location of the robot using particle filtering

RSSI (Received Signal Strength Indicator) measurements are collected from various locations and orientations (50 measurements from each). From this experiment a pdf is extracted, i.e. p(z|x), where x is the state vector (location and orientation) and z is the observation (the observed RSSI value). The particle filter algorithm from Ioannis Rekleitis' tutorial is implemented in Matlab: particles are moved according to the odometry values, and the weights of the particles are updated according to the pdf. In the video, the red dashed line shows the ground truth and the black arrows show the estimates.
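The particle-filter step described above (move by odometry plus noise, reweight by the measurement pdf, resample) can be sketched in Python. The 1-D state and the Gaussian stand-in for the empirically measured RSSI pdf are illustrative assumptions; the actual implementation is in Matlab.

```python
import numpy as np

def pf_step(particles, weights, odom, z, likelihood, motion_noise, rng):
    """Move particles by odometry plus noise, reweight by p(z | x),
    then resample (simple multinomial resampling)."""
    particles = particles + odom + rng.normal(0, motion_noise, particles.shape)
    weights = weights * likelihood(particles, z)
    weights = weights / weights.sum()
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 10.0, 500)          # uniform prior over the corridor
weights = np.full(500, 1 / 500)
gauss = lambda p, z: np.exp(-0.5 * ((p - z) / 0.5) ** 2)  # stand-in for the RSSI pdf
for step in range(30):
    z = 5.0 + 0.1 * step   # stand-in observations of a target drifting right
    particles, weights = pf_step(particles, weights, 0.1, z, gauss, 0.05, rng)
print(round(particles.mean(), 1))  # ≈ 7.9, tracking the final observation
```

In the real system, `likelihood` is the empirical pdf built from the 50 RSSI measurements per location and orientation.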

Rensselaer Polytechnic Institute


Hockey.jpg
Air Hockey Simulation: Designed and implemented an air hockey playing robot controller and simulated in daVinci Code

The trajectory of the puck is calculated from the dynamic properties of the air hockey table and the puck's initial direction after it is hit by the opponent's mallet (user controlled). From this trajectory, the final location of the puck on the computer-controlled mallet's side and the arrival time are calculated. A desired post-hit trajectory for the puck is then chosen (e.g. the puck's target location is the left side of the goal, or the puck should bounce off the boundary twice to confuse the opponent). To realize this trajectory, a final configuration for the computer-controlled mallet is found (location, velocity, and time of impact), and a third-order polynomial is used to calculate the forces to apply to reach the final configuration from the current configuration.
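The third-order polynomial step can be sketched directly: solve for the cubic coefficients that match the mallet's current and final position/velocity boundary conditions (a single axis is shown for illustration; names are assumptions):

```python
import numpy as np

def cubic_trajectory(x0, v0, xf, vf, T):
    """Cubic x(t) = a0 + a1 t + a2 t^2 + a3 t^3 with boundary conditions
    x(0)=x0, x'(0)=v0, x(T)=xf, x'(T)=vf; returns [a0, a1, a2, a3]."""
    A = np.array([[1, 0, 0,    0],
                  [0, 1, 0,    0],
                  [1, T, T**2, T**3],
                  [0, 1, 2*T,  3*T**2]], dtype=float)
    return np.linalg.solve(A, np.array([x0, v0, xf, vf], dtype=float))

# Mallet at rest must arrive at x=1 with velocity 2 exactly at T=1 (the hit).
a = cubic_trajectory(x0=0.0, v0=0.0, xf=1.0, vf=2.0, T=1.0)
t = 1.0
x = a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3
print(round(x, 6))  # → 1.0 (reaches the target position at t = T)
```

Differentiating the cubic twice gives the acceleration profile, from which the forces to apply follow via the mallet dynamics.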

Bearing only pursuit-evasion: Implemented a program using OpenCV and C++ where an Acroname Garcia robot pursues a robot controlled by a human

This system was implemented for a visit by Capital District Middle School students.

Middle East Technical University


Coretech.jpg
Internet Mobile GIS implementation: Senior project

Implemented GPS software using ESRI's ArcPad on a PDA running Windows ME, and implemented a server program using ESRI's MOJava to update geographic information (awarded second-best senior project). For more information please visit our group web site.

Dtp.jpg
Undergrad Research Assistant: Worked as an undergrad research assistant to help develop an image- and text-based search engine
  • Developed a feature extraction scheme using Haar wavelets for face detection
  • Implemented eigenface algorithm for face recognition

This work was done under the supervision of Prof. Fatos Yarman Vural.

University of Wisconsin-Madison


Map.jpg
Preprocessing of 3D depth map images: Designed and implemented an active contour based image processing algorithm for 3D face depth map data

During a depth scan of a face, hairy parts (mustache, eyebrows, etc.) cannot be detected, which requires a preprocessing step before feature extraction. Interpolation and morphological operators do not work well since they destroy the smoothness of the face, which hurts the performance of our face recognition algorithm. I used the Snake (active contour) algorithm to recover the missing parts while preserving the curvature and smoothness of the face. This work was done under the supervision of Prof. Charles Dyer.

Professional Experience

Test Engineer-Aydin Avionics Software Company, Ankara, Turkey Feb 2006-Aug 2006

  • Implemented low level test cases for avionics systems
  • Reviewed and analysed low-level C/C++ code

Summer Intern-enocta Software Company, Ankara, Turkey Jun 2004-Jul 2004

  • Developed a database system and a web interface using Microsoft SQL Server, ADO, ASP, JavaScript, HTML, and XML

Teaching

Teaching Assistant-Rensselaer Polytechnic Institute, Troy, NY Fall 2006