Research Interests

My research interests are in control in robotics, with a focus on vision-based estimation and control for robots and autonomous vehicles.  This work naturally draws on both computer vision and nonlinear control.  Specific projects are discussed below.

Please see the page for my research group in the SeRViCE Lab.

Robot-Human Interaction

In human-robot interaction tasks such as object retrieval, the human can give commands to robots through natural interfaces (voice, gesture, etc.). Robots should be able to give visual feedback about their knowledge of the environment directly on or near the objects, rather than on a computer screen, allowing the person to move freely. We have developed a multi-view camera-projector system that detects objects likely to be of interest to a human and estimates their 3D location.  The projector then projects feedback patterns on or near the objects, such as spotlighting an object or drawing symbols in front of it. Once the human chooses an object, the robot arm grasps the object and retrieves it for the human.
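One way to make the 3D-localization step concrete is linear (DLT) triangulation from two calibrated views. The sketch below is a generic illustration with synthetic camera matrices and an exact synthetic point, not the lab's actual multi-view pipeline:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the point in each view
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

# Synthetic check: two cameras offset along x, one known 3D point.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 3.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free correspondences the recovered point matches the original; in practice the same least-squares machinery is applied to detected image features.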

This work is supported by the Texas Instruments OMAP University Research Program.


This presentation appeared at the 2013 IEEE International Conference on Robotics and Automation.  Related papers:

J. Shen, J. Jin, and N. Gans, “A Multi-View Camera-Projector System for Object Detection and Robot-Human Feedback,” Proc. IEEE International Conference on Robotics and Automation, 2013
download

J. Shen, J. Jin, and N. Gans, “A Trifocal Tensor Based Camera-Projector System for Robot-Human Interaction,” Proc. IEEE International Conference on Robotics and Biomimetics, December 2012
download

Visual Search as a Real-Time Optimization Problem

We develop objective functions that describe a visual search problem, e.g., the score of a template match, keypoint/feature matching, or maximization of image saliency.  We then investigate control methods, such as extremum seeking, that allow a robot system to optimize these functions in real time by adjusting its own position and orientation.  The objective function is optimized when the robot has located the object and found the best viewing angle.
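Extremum seeking optimizes an objective using only its measured value, with no gradient information. A minimal one-dimensional sketch (the quadratic objective, dither amplitude, and gains below are illustrative stand-ins for a real matching or saliency score):

```python
import numpy as np

def extremum_seek(f, x0, a=0.1, omega=5.0, k=0.5, dt=0.01, steps=20000):
    """Gradient-free extremum seeking on a scalar objective f.

    A sinusoidal dither a*sin(omega*t) perturbs the input; demodulating
    the measured objective by the same sinusoid yields (on average) a
    gradient estimate, which drives x toward a local maximum of f.
    """
    x = x0
    for i in range(steps):
        t = i * dt
        dither = a * np.sin(omega * t)
        y = f(x + dither)                      # measure the objective
        x += k * y * np.sin(omega * t) * dt    # demodulate and integrate
    return x

# Objective with a single peak at x = 2 (stand-in for a search score).
score = lambda x: -(x - 2.0) ** 2
x_hat = extremum_seek(score, x0=0.0)
```

Averaging over the dither period, the update behaves like gradient ascent with rate k*a/2, so `x_hat` settles near the peak at 2; the mobile-robot versions in the papers below apply the same idea to position and heading.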

This presentation was given as an invited lecture to the Texas A&M Department of Computer Science.  Related papers:

Y. Zhang and N. Gans, “Extremum Seeking Control of a Nonholonomic Mobile Robot with Limited Field of View,” Proc. American Control Conference, 2012
download

Y. Zhang and N. Gans, “Simplex Guided Extremum Seeking Control,” Proc. American Control Conference, 2012
download

Y. Zhang, J. Shen, M. Rotea, and N. R. Gans, “Robots Looking for Interesting Things: Extremum Seeking Control on Saliency Maps,” Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011
download


Vision-Based Estimation and Control

Vision sensors (i.e., cameras) offer many advantages. They are passive, so they cannot be detected like sonar, radar, or lasers, and they cannot be jammed like GPS. A single camera viewing a moving target (or a moving camera viewing a static scene) can estimate the position, velocity, angular velocity, size, and shape of targets.  Multiple fixed cameras can estimate the position, size, and structure of static objects.  These estimates can be used in surveillance, security, traffic-flow monitoring, and the geosciences, or in feedback control to position a robot or vehicle for docking, refueling, manufacturing, welding, etc.  Of particular interest is the control of mobile robots and autonomous vehicles.
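A concrete example of single-camera motion estimation and its limits is the essential matrix, which encodes the relative rotation and translation between two camera views but yields the translation only as a direction. The standard SVD-based decomposition, run here on synthetic motion (this is a textbook construction, not the fusion pipeline of the papers below):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Factor an essential matrix E ~ [t]x R into two rotation candidates
    and a unit translation direction. The translation magnitude is
    unobservable from images alone -- the scale/depth ambiguity."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1); E is only defined up to sign.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R_candidates = (U @ W @ Vt, U @ W.T @ Vt)
    t_dir = U[:, 2]   # unit vector; the true scale is lost
    return R_candidates, t_dir

# Synthetic relative motion: rotation about z, unit-norm translation.
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th), np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.5, 0.2])
t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true
(R1, R2), t_dir = decompose_essential(E)
```

The recovered `t_dir` matches the true translation only up to sign and scale; resolving the sign requires a cheirality test on triangulated points, and the scale must come from another sensor (e.g., odometry or an IMU, as in the fusion work below).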
This presentation appeared at the 2011 American Control Conference.  Related papers:

D. Tick, A. C. Satici, J. Shen, and N. R. Gans, “Tracking Control of Mobile Robots Localized via Chained Fusion of Discrete and Continuous Epipolar Geometry, IMU and Odometry,” IEEE Transactions on Systems, Man, and Cybernetics, Part B
download 

D. Q. Tick, J. Shen, and N. R. Gans, “Fusion of Discrete and Continuous Epipolar Geometry for Visual Odometry and Localization,” Proc. IEEE International Workshop on Robotic and Sensors Environments, 2010 (Awarded Best Student Paper)
download


This presentation is a bit dated, but it gives a good idea of the problems I have worked on.  Related papers:

N. R. Gans, G. Hu, J. Shen, Y. Zhang, and W. E. Dixon, “Adaptive Visual Servo Control to Simultaneously Stabilize Image and Pose Error,” Mechatronics, 2012
download

N. R. Gans, G. Hu, K. Nagaragan, W. E. Dixon, “Keeping Multiple Moving Targets in the Field of View of a Mobile Camera,” IEEE Transactions on Robotics, 2011
download

N. R. Gans and S. A. Hutchinson, “Stable Visual Servoing through Hybrid Switched System Control,” IEEE Transactions on Robotics, 2007
download

Nonlinear Control

Vision-based estimation and control is inherently nonlinear.  The projection of 3D objects onto a 2D image plane is a nonlinear process that loses scale and depth information. Furthermore, the robots or vehicles being controlled often have nonlinear dynamics or difficult motion constraints (e.g., in the classic problem of parallel parking a car, you cannot accelerate sideways).  Overcoming these obstacles often requires nonlinear control methods.  My focus has been on Lyapunov-based adaptive and robust control and on hybrid switched-system control.
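As a small illustration of both the nonholonomic constraint and a Lyapunov-based remedy, the sketch below steers a unicycle model to the origin using a standard polar-coordinate control law (a textbook construction, not a method from the papers below; the gains k1, k2 are arbitrary):

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

# Unicycle model: xdot = v*cos(th), ydot = v*sin(th), thdot = w.
# The robot cannot move sideways, so position error must be removed
# by coordinated turning and driving.
x, y, th = 2.0, 1.0, 0.0
k1, k2, dt = 1.0, 2.0, 0.01
for _ in range(2000):
    rho = np.hypot(x, y)                    # distance to goal (origin)
    alpha = wrap(np.arctan2(-y, -x) - th)   # bearing to goal in body frame
    # Lyapunov-based law: with V = (rho^2 + alpha^2)/2, these choices give
    # Vdot = -k1*rho^2*cos(alpha)^2 - k2*alpha^2 <= 0.
    v = k1 * rho * np.cos(alpha)
    w = k2 * alpha + k1 * np.cos(alpha) * np.sin(alpha)
    # Euler integration of the unicycle kinematics.
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th = wrap(th + w * dt)
```

Note the velocity command vanishes smoothly as the error shrinks; a linear controller cannot stabilize this system because the sideways direction is never directly actuated.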

N. R. Gans, G. Hu, J. Shen, Y. Zhang, W. E. Dixon, “Adaptive Visual Servo Control to Simultaneously Stabilize Image and Pose Error,” Mechatronics, 2012

N. R. Gans, G. Hu, K. Nagaragan, W. E. Dixon, “Keeping Multiple Moving Targets in the Field of View of a Mobile Camera,” IEEE Transactions on Robotics, 2011

G. Hu, W. MacKunis, N. Gans, W. E. Dixon, J. Chen, A. Behal, and D. Dawson, “Homography-Based Visual Servo Control with Imperfect Camera Calibration,” IEEE Transactions on Automatic Control, 2009
download

N. R. Gans and S. A. Hutchinson, “Stable Visual Servoing through Hybrid Switched System Control,” IEEE Transactions on Robotics, 2007