I started 2019 unemployed. Then, all at the same time, I worked for the Spanish National Research Council (CSIC), taught "Design, Integration and Verification of Machines" at the Comillas Pontifical University, and founded my company, Aibot. During this year I suffered the main problems that researchers face in Spain: periods of unemployment, rigged selection processes, project delays, lack of resources... However, I ended the year finding a great opportunity.
I have joined the Department of Computer Science of the Autonomous University of Madrid (UAM) as an Assistant Professor. For two years, I will teach "Software Analysis and Design Project" in the BSc in Computer Engineering (in Spanish and English), while continuing my research in Robotics and Artificial Intelligence. I am very grateful to the people at UAM for trusting me despite not knowing me beforehand.
Today I attended the Robotics and Artificial Intelligence Workshop organized by the RoboCity2030 Project. At this event, I presented a poster about my work in the context of the SUREVEG Project. The poster is entitled "Training a mobile manipulator to follow crop lines with reinforcement learning" and is my first work on artificial intelligence applied to robotics. I have developed a control system for a manipulator robot based on neural networks trained with reinforcement learning. Here you can see the poster (don't forget to watch the video!):
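For readers curious about the underlying idea, here is a minimal, self-contained sketch of training a line-following policy with reinforcement learning. It is not the SUREVEG code: the toy dynamics, the reward, and the linear softmax policy (standing in for the neural networks mentioned above) are assumptions made purely for illustration.

```python
# Minimal REINFORCE sketch for line following (illustrative only).
# State: lateral offset from the crop line. Actions: steer left/none/right.
import numpy as np

rng = np.random.default_rng(0)

def step(offset, action):
    """Toy lateral dynamics: actions {0,1,2} map to steering {-1,0,+1};
    noise stands in for uneven terrain. Reward favors staying centered."""
    offset = offset + 0.1 * (action - 1) + rng.normal(0.0, 0.02)
    return offset, -abs(offset)

W = np.zeros((3, 2))  # linear softmax policy over features [offset, 1]

def policy(offset):
    x = np.array([offset, 1.0])
    logits = W @ x
    p = np.exp(logits - logits.max())
    return p / p.sum(), x

alpha, gamma = 0.05, 0.99
for episode in range(2000):
    offset, traj = rng.normal(0.0, 0.3), []
    for _ in range(30):
        p, x = policy(offset)
        a = rng.choice(3, p=p)
        offset, r = step(offset, a)
        traj.append((p, x, a, r))
    G = 0.0
    for p, x, a, r in reversed(traj):   # REINFORCE: return-weighted ascent
        G = r + gamma * G
        grad = -p.copy()
        grad[a] += 1.0                  # d log pi(a|x) / d logits
        W += alpha * G * np.outer(grad, x)
```

After training, the policy steers back towards the line whenever the offset grows; a real robot would replace the toy dynamics with perception of the crop row.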
I have been honored with the award for the best PhD thesis in Robotics in Spain!
This prize has been organized by the Spanish Automation Committee (Comité Español de Automática, CEA) for decades, and this year it was funded by the company Robotnik. Eleven researchers from different Spanish universities submitted their PhD theses to the contest. After a first round of reviews carried out by a group of experts, three of them were invited to present their theses in the final round. We attended the Spanish Automation Conference (Jornadas de Automática), which took place in Ferrol (Galicia), where we presented our theses and another group of experts decided the winner.
I cannot find the words to describe how satisfied I feel. Since I finished my PhD last summer, I have been through hard times and have had doubts about my future. I hope this award can be an incentive to keep fighting in this tough world.
Multi-robot missions imply a series of challenges for single human operators, such as managing high workloads and maintaining an adequate level of situational awareness. Conventional interfaces are not prepared to face these challenges; however, new concepts have arisen to cover this need, such as adaptive and immersive interfaces. This paper reports the design and development of an adaptive and immersive interface, as well as a complete set of experiments carried out to compare it with a conventional one. The interface under study has been developed using virtual reality to bring operators into the scenarios and allow intuitive commanding of the robots. Additionally, it is able to recognize the mission's state and show hints to the operators. The experiments were performed in both outdoor and indoor scenarios, recreating an intervention after an accident in a critical infrastructure. The results show the potential of adaptive and immersive interfaces to improve the workload, situational awareness and performance of operators in multi-robot missions.
I'm very proud to see the result of many days of hard work over the past year, and I want to thank my friends Elena and Pablo for their help in making this a reality.
J.J. Roldán, E. Peña-Tapia, P. Garcia-Aunon, J. del Cerro and A. Barrientos. Bringing adaptive & immersive interfaces to real-world multi-robot scenarios: application to surveillance and intervention in infrastructures. IEEE Access, 7 (1), 86319-86335, 2019. Impact Factor (JCR, 2018): 4.098, Q1. Article
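To give a flavor of the adaptive part, here is a deliberately simplified sketch of how a mission-state recognizer can drive hints. The state names, thresholds, and hint texts below are invented for the example; the interface described in the paper is far richer.

```python
# Hypothetical mission-state recognizer feeding adaptive hints (sketch only).
from dataclasses import dataclass

@dataclass
class Telemetry:
    battery: float           # 0..1, remaining charge
    victims_located: int     # victims found so far

def mission_state(t: Telemetry) -> str:
    """Classify the mission phase from telemetry (invented rules)."""
    if t.battery < 0.2:
        return "RETURN"
    if t.victims_located == 0:
        return "SEARCH"
    return "INTERVENE"

HINTS = {
    "RETURN": "Battery low: command the robot back to base.",
    "SEARCH": "No victims located yet: widen the search pattern.",
    "INTERVENE": "Victim located: switch to the manipulation view.",
}

def hint_for(t: Telemetry) -> str:
    return HINTS[mission_state(t)]

print(hint_for(Telemetry(battery=0.15, victims_located=1)))  # -> RETURN hint
```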
One of the active challenges in multi-robot missions is managing operator workload and situational awareness. Currently, operators are trained to use interfaces, but in the near future this could be reversed: interfaces will adapt to operators so as to facilitate their tasks. To this end, interfaces should manage models of the operators and adapt the information they display to the operators' states and preferences. This work proposes a videogame-based approach to classify operator behavior and predict operators' actions in order to improve teleoperated multi-robot missions. First, groups of operators are formed according to their strategies by means of clustering algorithms. Second, the operators' strategies are predicted, taking their models into account. Multiple information sources and modeling methods are used to determine the approach that maximizes the mission goal. The results show that predictions based on previous data from single operators increase the probability of success in teleoperated multi-robot missions by 19%, whereas predictions based on operator clusters increase this probability by 28%.
J.J. Roldán, V. Díaz-Maroto, J. Real, P.R. Palafox, J. Valente, M. Garzón and A. Barrientos. Press start to play: classifying multi-robot operators and predicting their strategies through a videogame. Robotics, 8 (3), 53-67, 2019. Impact Factor (Scopus, 2018): 1.53, Q2. Article
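The two-step idea (cluster operators by behavior, then predict from the cluster) can be sketched in a few lines. The behavioral features and strategy labels below are invented, and scikit-learn's KMeans stands in for the clustering algorithms explored in the study.

```python
# Sketch: cluster operators, then predict a new operator's strategy
# from the majority strategy of their cluster (illustrative data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# One row per operator: e.g. [commands/min, camera switches, idle fraction]
X = rng.random((30, 3))
strategies = rng.integers(0, 2, size=30)   # 0 = cautious, 1 = aggressive

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Majority strategy per cluster becomes that cluster's prediction
majority = {c: int(np.bincount(strategies[kmeans.labels_ == c]).argmax())
            for c in range(3)}

new_operator = rng.random((1, 3))
cluster = int(kmeans.predict(new_operator)[0])
print(f"cluster {cluster} -> predicted strategy {majority[cluster]}")
```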
Robot cooperation is key in Search and Rescue (SaR) tasks. Frequently, these tasks take place in complex scenarios affected by different types of disasters, so an aerial viewpoint is useful for autonomous navigation or human tele-operation. In such cases, an Unmanned Aerial Vehicle (UAV) in cooperation with an Unmanned Ground Vehicle (UGV) can provide valuable insight into the area. To carry out its work successfully, such a multi-robot system requires the autonomous takeoff, tracking, and landing of the UAV on the moving UGV. Furthermore, it needs to be robust and capable of life-long operation. In this paper, we present an autonomous system that enables a UAV to take off autonomously from a moving landing platform, locate it using visual cues, follow it, and robustly land on it. The system relies on a finite state machine, which together with a novel re-localization module allows the system to operate robustly for extended periods of time and to recover from potential failed landing maneuvers. Two approaches for tracking and landing are developed, implemented, and tested. The first variant is based on a novel height-adaptive PID controller that uses the current position of the landing platform as the target. The second one combines this height-adaptive PID controller with a Kalman filter in order to predict the future positions of the platform and provide them as input to the PID controller. This facilitates tracking and, above all, landing. Both the system as a whole and the re-localization module in particular have been tested extensively in a simulated environment (Gazebo). We also present a qualitative evaluation on real robotic platforms, demonstrating that our system can be deployed on real robots. For the benefit of the community, we make our software open source.
I would like to thank Pablo R. Palafox, Mario Garzón and João Valente for giving me the opportunity to collaborate with them and obtain this publication.
P.R. Palafox, M. Garzón, J. Valente, J.J. Roldán and A. Barrientos. Robust Visual-Aided Autonomous Takeoff, Tracking and Landing of a small UAV on a Moving Landing Platform for Life-Long Operation. Applied Sciences, 9 (13), 2661, 2019. Impact Factor (2018): 2.217, Q2. Article
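As a rough illustration of the two tracking variants described in the abstract, here is a one-dimensional sketch: a PID controller whose correction tightens as the UAV descends, fed with one-step-ahead platform positions from a constant-velocity Kalman filter. The gains, noise levels, and dynamics are made up; the real system is three-dimensional and was validated in Gazebo and on real platforms.

```python
# 1-D sketch of height-adaptive PID tracking with Kalman prediction.
import numpy as np

class HeightAdaptivePID:
    """PID whose output is scaled up near the ground (assumed scaling law)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def control(self, error, height, dt):
        scale = 1.0 + 1.0 / max(height, 0.5)   # stronger correction when low
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return scale * (self.kp * error + self.ki * self.integral + self.kd * deriv)

dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model [pos, vel]
H = np.array([[1.0, 0.0]])
Q, R = 1e-3 * np.eye(2), np.array([[0.02]])
x, P = np.zeros((2, 1)), np.eye(2)

def kf_predicted_position(z):
    """Filter a noisy platform measurement, return one-step-ahead position."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)            # update
    P = (np.eye(2) - K @ H) @ P
    return float((F @ x)[0, 0])

pid = HeightAdaptivePID(kp=1.2, ki=0.05, kd=0.3)
uav, height = 0.0, 5.0
for k in range(200):
    platform = 0.5 * k * dt                          # platform moves at 0.5 m/s
    target = kf_predicted_position(platform + np.random.normal(0.0, 0.05))
    uav += pid.control(target - uav, height, dt) * dt
    height = max(height - 0.02, 0.0)                 # descend while tracking
```

Variant 1 corresponds to feeding the raw measurement to the controller instead of the Kalman prediction.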
The SUREVEG project proposes the development and application of new organic cropping systems using strip-cropping and fertility strategies to improve resilience, system sustainability, local nutrient recycling and soil carbon storage. The project has three main goals: 1) Designing and testing strip-cropping systems in vegetable producing countries at different geographical locations in Europe, 2) Developing and testing soil-improvers and fertilizers based on pre-treated organic plant residues, and 3) Developing and testing smart technologies for management of strip-cropping systems.
The Technical University of Madrid and the Centre for Automation and Robotics are involved in the third goal: smart machinery for strip-cropping systems. This work aims at developing a robotic tool for the automation of field operations in strip-cropping systems, including adequate sensors to collect valuable data and actuators to apply precise fertilization. Specifically, it comprises four goals: 1) Designing a multi-purpose robotic tool, 2) Developing sensing systems and algorithms, 3) Developing an actuation system, and 4) Implementing motion planning strategies.
Here you can see the first prototype (I worked on the manipulator robot):
Industry 4.0 aims at integrating machines and operators through network connections and information management. It proposes the use of a set of technologies in industry, such as data analysis, Internet of Things, cloud computing, cooperative robots, and immersive technologies. This paper presents a training system for industrial operators in assembly tasks, which takes advantage of tools such as virtual reality and process mining. First, expert workers use an immersive interface to perform assemblies according to their experience. Then, process mining algorithms are applied to obtain assembly models from event logs. Finally, trainee workers use an improved immersive interface with hints to learn the assemblies that the expert workers introduced in the system. A toy example has been developed with building blocks and tests have been performed with a set of volunteers. The results show that the proposed training system, based on process mining and virtual reality, is competitive against conventional alternatives. Furthermore, user evaluations are better in terms of mental demand, perception, learning, results, and performance.
J.J. Roldán, E. Crespo, A. Martín-Barrio, E. Peña-Tapia and A. Barrientos. A training system for Industry 4.0 operators in complex assemblies based on virtual reality and process mining. Robotics and Computer-Integrated Manufacturing, 59, 305-316, 2019. Impact Factor (2018): 3.464, Q1. Article.
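To illustrate the process-mining step, here is a toy sketch that derives a directly-follows model from expert assembly logs and uses it to suggest the next step as a hint for trainees. The log format and step names are invented; the paper applies full process mining algorithms to real event logs.

```python
# Toy directly-follows model mined from expert assembly traces (sketch).
from collections import Counter, defaultdict

# Each trace is one expert assembly, recorded as an ordered list of steps
expert_logs = [
    ["base", "pillar", "beam", "roof"],
    ["base", "beam", "pillar", "roof"],
    ["base", "pillar", "beam", "roof"],
]

follows = defaultdict(Counter)
for trace in expert_logs:
    for a, b in zip(trace, trace[1:]):
        follows[a][b] += 1          # count how often b directly follows a

def hint(last_step):
    """Suggest the most frequent next step observed after `last_step`."""
    nxt = follows.get(last_step)
    return nxt.most_common(1)[0][0] if nxt else None

print(hint("base"))   # -> 'pillar' (seen twice, vs. once for 'beam')
```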
I took part in the stand of the Robotics and Cybernetics Research Group (RobCib) at the Industriales Research Meeting (IRM2019), an event organized by the Industrial Engineering School of the Technical University of Madrid to disseminate its research. At this stand, we showed the group's different robots: a mobile manipulator, three search and rescue robots and a hyper-redundant robot. Additionally, we invited some volunteers to test our virtual reality interface for monitoring the state of a smart city.
I attended the workshop "How to tell my research on radio and TV", organized by the Scientific Culture Unit of the Technical University of Madrid. In this course, I learned how to deal with journalists, explain my research to the general public and take part in radio and TV programs. Moreover, we recorded a series of short TV interviews and a complete radio program. A great experience!
J.J. Roldán, P. Garcia-Aunon, E. Peña-Tapia and A. Barrientos. "SwarmCity Project: Can an Aerial Swarm Monitor Traffic in a Smart City?". UNAGI'19: Workshop on UNmanned aerial vehicle Applications in the smart city: from Guidance technology to enhanced system Interaction, PerCom 2019: IEEE International Conference on Pervasive Computing and Communications. Kyoto, 11-15 March 2019.
This morning, the Santo Tomás de Aquino celebration took place at the Technical University of Madrid, where the new doctors were invested and the university awards were presented. During this academic ceremony, I swore to act correctly as a doctor and received my PhD diploma and ring from the university authorities. As you can see, this celebration follows an old tradition and gives rise to curious images...