Faculty member: Gonzalo López-Nicolás
 

Area: Systems Engineering and Automatic Control
Robotics, Perception and Real Time Group (GRTR)
Department of Computer Science and System Engineering (DIIS)
Escuela de Ingeniería y Arquitectura (EINA)
University of Zaragoza (UZ)
c/ María de Luna 3, 50018 Zaragoza, Spain
Fax: (+34) 976 76 19 14
 


ROBOTICS | UNIVERSITY OF ZARAGOZA | BIBTEX REFERENCE

31 Publications Found



Articles in International Journal
:: G. López-Nicolás, J. Omedes and J. J. Guerrero, Spatial layout recovery from a single omnidirectional image and its matching-free sequential propagation, Robotics and Autonomous Systems, 2013.

:: H. M. Becerra, G. López-Nicolás and C. Sagues, A sliding-mode-control law for mobile robots based on epipolar visual servoing from three views, IEEE Transactions on Robotics, 27(1), 175-183, 2011.

:: G. López-Nicolás and C. Sagues, Vision-based exponential stabilization of mobile robots, Autonomous Robots, 30, 293-306, 2011.

:: H. M. Becerra, G. López-Nicolás and C. Sagues, Omnidirectional visual control of mobile robots based on the 1D trifocal tensor, Robotics and Autonomous Systems, 58(6), 796-808, 2010.

:: G. López-Nicolás, J. J. Guerrero and C. Sagues, Visual control through the trifocal tensor for nonholonomic robots, Robotics and Autonomous Systems, 58(2), 216-226, 2010.

:: G. López-Nicolás, N. R. Gans, S. Bhattacharya, C. Sagues, J. J. Guerrero and S. Hutchinson, Homography-Based Control Scheme for Mobile Robots with Nonholonomic and Field-of-View Constraints, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40(4), 1115-1127, 2010.

:: G. López-Nicolás, J. J. Guerrero and C. Sagues, Visual control of vehicles using two-view geometry, Mechatronics, 20(2), 315-325, 2010.

:: G. López-Nicolás, J. J. Guerrero and C. Sagues, Multiple Homographies with Omnidirectional Vision for Robot Homing, Robotics and Autonomous Systems, 58(6), 773-783, 2010.

:: G. López-Nicolás, C. Sagues, J. J. Guerrero, D. Kragic and P. Jensfelt, Switching visual control based on epipoles for mobile robots, Robotics and Autonomous Systems, 56(7), 592-603, 2008.

:: G. López-Nicolás, J. J. Guerrero, O. A. Pellejero and C. Sagues, Computing Homographies from Three Lines or Points in an Image Pair, Image Analysis and Processing, LNCS 3617, 446-453, 2005.

:: G. López-Nicolás, C. Sagues and J. J. Guerrero, Automatic Matching and Motion Estimation From Two Views of a Multiplane Scene, Pattern Recognition and Image Analysis, LNCS 3522, 68-76, 2005.



Book Chapters
:: J. Omedes, G. López-Nicolás and J. J. Guerrero, Omnidirectional Vision for Indoor Spatial Layout Recovery, Frontiers of Intelligent Autonomous Systems, 95-104, 2013.

:: G. López-Nicolás, C. Sagues and J. J. Guerrero, Shortest path homography-based visual control for differential drive robots, in Vision Systems: Applications, 583-596, 2007.



PhD Dissertations
:: G. López-Nicolás, Visual Control of Mobile Robots Through Multiple View Geometry, Dpto. de Informática e Ingeniería de Sistemas, University of Zaragoza, Spain, 2008.



International Conferences
:: J. Bermudez-Cameo, G. López-Nicolás and J. J. Guerrero, Line extraction in uncalibrated central images with revolution symmetry, British Machine Vision Conference (BMVC), 2013.

:: G. López-Nicolás, A. Aladren and J. J. Guerrero, Wearable vision systems for personal guidance and enhanced assistance, Robotics Challenges and Vision Workshop, 2013.

:: J. Omedes, G. López-Nicolás and J. J. Guerrero, Omnidirectional Vision for Indoor Spatial Layout Recovery, 12th Intelligent Autonomous Systems Conference (IAS), 1-5, 2012.

:: J. Bermudez-Cameo, G. López-Nicolás and J. J. Guerrero, A Unified Framework for Line Extraction in Dioptric and Catadioptric Cameras, 11th Asian Conference on Computer Vision (ACCV), Daejeon, Korea, Nov., 2012.

:: N. D. Ozısık, G. López-Nicolás and J. J. Guerrero, Scene Structure Recovery from a Single Omnidirectional Image, 11th Workshop on Omnidirectional Vision (OMNIVIS), held with the International Conference on Computer Vision (ICCV), 2011.



Projects

The goal of this project is to research computer vision and robotics techniques that will form part of a personal assistance system based on visual information. This visual assistant will be wearable and will include both conventional and non-conventional cameras. The project continues our previous VISPA project, in which we began researching non-conventional vision systems for personal assistance.



The research project involves deployment and actuation techniques for a multi-robot team. It must address task planning and allocation, coordinated execution of navigation, perception of the environment from the multiple views provided by each team member, and maintenance of communication among all system components: robots, infrastructure, bridges, monitoring equipment, etc. The project will address new research goals and challenges and their application in real, large and complex scenarios.



The goal of this project is to research computer vision techniques, merged with methodologies developed in the robotics field, to form part of a personal assistance system based on visual information. This visual assistant will be wearable and will include both conventional and non-conventional cameras. It will be able to process some tasks locally and will also have wireless communication with other global information servers or sensors. We seek a human-centered system, one that complements rather than replaces human abilities, to help visually impaired people or people with orientation problems, and also people with normal visual capabilities in specific tasks.



The project proposes to investigate techniques for a multi-robot team to act in coordination in realistic scenarios. Deployment requires algorithms and methods for task planning and allocation, coordinated navigation planning, and environment perception from the multiple views provided by every member of the team, all while communication connectivity is maintained among the elements of the system: robots, infrastructure, supervisor team, etc. Although the techniques involved are usually proposed in the literature and in many projects somewhat independently, the research in this project will also be oriented towards techniques that integrate the different subjects involved. Only in this way will it be possible to develop realistic applications using systems with autonomous and supervised behaviours.



The complex nature of mobile robot tasks leads to the need for systems with several coordinated robots (agents) working in cooperation. Some international directives refer to robotic elements connected to wired or wireless communication networks, including the robots themselves and the sensors distributed in the workplace (static agents), exchanging and sharing information. This concept extends to interactions between robots, humans, the sensors and the environment. This project, closely related to previous MEC projects obtained by this research team, continues work on multi-robot cooperation techniques, computer vision, robot vision for motion, and communications.



The main project objective is research into exploration strategies: a set of perception-action techniques for obtaining environment information, planning motions that refine and complete this information (active perception), and performing safe robot motions in unstructured scenarios. In recent years these techniques have improved greatly and have been applied in indoor environments with very good results. The goal of this project is to develop them further and apply them to novel problems and more difficult scenarios, such as rescue operations.



