Plenary Speakers

Professor Ioannis Pitas

FIEEE; EURASIP Fellow; Professor, Department of Informatics
Aristotle University of Thessaloniki
Greece

Drone Vision and Deep Learning for Infrastructure Inspection

This lecture overviews the use of drones for infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR or thermal cameras, are reviewed. Drone vision plays a pivotal role in drone perception/control for infrastructure inspection and maintenance because: a) it enhances flight safety through drone localization/mapping, obstacle detection and emergency landing detection; b) it performs high-quality visual data acquisition; and c) it allows powerful drone/human interaction, e.g., through automatic event detection and gesture control. The drone should have: a) increased multiple-drone decisional autonomy and b) improved multiple-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms). Therefore, it must be contextually aware and adaptive. Drone vision and machine learning play a very important role towards this end, covering the following topics: a) semantic world mapping, b) drone and target localization, c) drone visual analysis for target/obstacle/crowd/point-of-interest detection, and d) 2D/3D target tracking. Finally, embedded on-drone vision (e.g., tracking) and machine learning algorithms are extremely important, as they facilitate drone autonomy, e.g., in communication-denied environments. The primary application area is electric power line inspection: line detection and tracking and drone perching are examined, and human action recognition and co-working assistance are overviewed.
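
As a concrete (and deliberately simplified) illustration of the line-detection step mentioned above, the sketch below finds straight-segment candidates in a single camera frame using classical edge detection and a probabilistic Hough transform. It assumes OpenCV and NumPy and is not the embedded deep-learning pipeline discussed in the lecture.

```python
# Minimal sketch: classical power-line candidate detection in a single frame.
# Illustrative only; a real inspection pipeline would combine deep detectors,
# temporal tracking and 3D localization as described in the abstract.
import cv2
import numpy as np

def detect_line_candidates(frame_bgr):
    """Return (x1, y1, x2, y2) segments that may correspond to power lines."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # suppress sensor noise
    edges = cv2.Canny(gray, 50, 150)                     # edge map
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=120, maxLineGap=15)
    if segments is None:
        return []
    # Keep the raw segments; thin cables appear as long, nearly straight lines
    # when viewed from a drone.
    return [tuple(s[0]) for s in segments]

# Usage (hypothetical file name):
# frame = cv2.imread("drone_frame.jpg")
# print(detect_line_candidates(frame))
```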

The lecture will offer an overview of all the above and other related topics, stressing the related algorithmic aspects, such as: a) drone localization and world mapping, b) target detection, c) target tracking and 3D localization, and d) gesture control and co-working with humans. Some issues in embedded CNNs and fast convolution computing will be overviewed as well.

Professor Imre Rudas

FIEEE, IEEE SMCS President
Óbuda University, Budapest
Hungary

Verification, Trustworthiness, and Accountability of Human-Driven Autonomous Systems

Although autonomous systems science and control theory have almost 50 years of history, the community faces major challenges in ensuring the safety of fully autonomous consumer systems. This mostly concerns the verification and high-fidelity operation of safety-critical systems, be it a self-driving car, a homecare robot or a surgical manipulator. The community still struggles to establish objective criteria for the trustworthiness of AI-driven, machine-learning-based control systems. On the one hand, we celebrate the rise of cognitive capabilities in robotic systems, leading to independent decision-making; on the other hand, decisions made in complex environments, based on multi-sensory data, will surely lead to some wrong conclusions and hazardous outcomes, jeopardizing public trust in entire application domains. This ambiguity has led to the currently prevailing safety principle of offering the possibility of a human-driven override, corresponding to Levels of Autonomy 3 and 4 in autonomous vehicles.

The aim of the development community is to establish processes and metrics that ensure the reliability of the takeover process, when the human driver or operator takes back partial or full control from the autonomous system. We have been building complex simulators and data collection systems to benchmark human decision-making against the computer. Situation Awareness (SA) has been identified as a key factor, as it defines the level of cognitive understanding and capability of a human operator in a given environment. Efficiently assessing, maintaining and regaining SA are core elements of the relevant research projects, reviewed and compared in this talk. Based on the research at the Antal Bejczy Center for Intelligent Robotics at Óbuda University, we created an assessment method for critical handover performance, to quantitatively define the required level and components of SA with respect to the autonomous functionalities present. To improve system safety, driver assistance systems and automated driving functionalities shall be collected and organized hierarchically, along with the two SA criteria presented, as a standardized risk assessment protocol: 1) the level of SA, based on the state of the environment; and 2) the components of SA, based on knowledge.
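
Purely as a toy illustration of the two SA criteria listed above (not the authors' assessment protocol), the following sketch encodes an operator's SA level and knowledge components and checks them against hypothetical takeover requirements; all names and thresholds are invented for the example.

```python
# Toy illustration (not the authors' protocol): represent a takeover request by
# the two SA criteria named in the abstract and derive a go/no-go decision.
from dataclasses import dataclass

@dataclass
class SituationAwareness:
    level: int        # 1 = perception, 2 = comprehension, 3 = projection (Endsley's levels)
    components: set   # knowledge items the operator holds, e.g. {"speed", "obstacle"}

def takeover_allowed(sa: SituationAwareness, required_level: int,
                     required_components: set) -> bool:
    """Allow handover only if the operator's SA meets the task requirements."""
    return sa.level >= required_level and required_components <= sa.components

# Example: an emergency lane change might require projection-level SA.
sa = SituationAwareness(level=2, components={"speed", "lane", "obstacle"})
print(takeover_allowed(sa, required_level=3,
                       required_components={"speed", "obstacle"}))  # False
```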

The outcome of our experiments may find its way into new verification standards through ongoing IEEE initiatives, such as P1872.1, P2817, P7000 and P7007; moreover, this systematic approach has already proved beneficial in other domains, such as medical robotics. (This presentation is joint work with Prof. Tamas Haidegger.)

Professor Robert Kozma

FIEEE
Department of Mathematics, University of Memphis
United States

Sustainable Autonomy: Challenges and Perspectives

Cutting-edge autonomous systems demonstrate outstanding results in many important tasks requiring intelligent data processing under well-known conditions. However, the performance of these systems may deteriorate drastically when the data are perturbed or the environment changes dynamically, due to natural or man-made disturbances. The challenges are especially daunting in edge-computing scenarios and on-board applications with limited resources, i.e., data, energy and computational-power constraints, when decisions must be made rapidly and robustly. A neuromorphic perspective provides useful insights under such conditions. Human brains are efficient devices using about 20 W of power (like a light bulb), many orders of magnitude less than today's supercomputers, which require megawatts to solve a specific deep-learning task. Brains use spatio-temporal oscillations to implement pattern-based computing, going beyond the sequential symbol-manipulation paradigm of traditional Turing machines. Neuromorphic spiking chips are gaining popularity in the field. Application examples include autonomous on-board signal processing and control, distributed sensor systems, autonomous robot navigation and control, and rapid response to emergencies.
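
For readers unfamiliar with the spiking paradigm mentioned above, the following minimal sketch simulates a single leaky integrate-and-fire neuron, the basic building block of neuromorphic spiking chips; parameters and units are illustrative only.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=20e-3,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; returns a boolean spike train."""
    v = v_rest
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_t in enumerate(input_current):
        v += (-(v - v_rest) + i_t) * dt / tau   # leaky integration of the input
        if v >= v_thresh:                       # threshold crossing -> spike
            spikes[t] = True
            v = v_reset                         # reset membrane potential
    return spikes

# Usage: a constant drive produces a regular spike train.
print(lif_spikes(np.full(200, 1.5)).sum(), "spikes in 200 ms")
```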

Professor Yaochu Jin

Chair Professor
FIEEE, IEEE CIS VP
United Kingdom

Morphogenetic Self-organization of Swarm Robots

Self-organization is one of the most important features observed in social, economic, ecological and biological systems. Distributed self-organizing systems are able to generate emergent global behaviors through local interactions between individuals, without centralized control. Such systems are expected to be robust, self-repairable and highly adaptive. However, the design of self-organizing systems is very challenging, particularly when the emergent global behaviors are required to be predictable. This talk introduces a morphogenetic approach to self-organizing swarm robots that uses the genetic and cellular mechanisms governing biological morphogenesis. We demonstrate that morphogenetic self-organizing algorithms are able to autonomously generate patterns and surround moving targets without centralized control. Finally, morphogen-based methods for the self-organization of simplistic robots that lack localization and orientation capabilities are presented.
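
A minimal sketch of the morphogen idea follows, under strong simplifying assumptions and not the speaker's algorithm: robots that cannot localize themselves simply compare local samples of a virtual morphogen concentration and keep random steps that increase it, which is enough to gather them around a target.

```python
# Illustrative sketch: gradient-climbing on a virtual morphogen field.
# The robots never use coordinates directly; they only compare local samples.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([5.0, 5.0])
robots = rng.uniform(0, 10, size=(20, 2))       # 20 robots in a 10x10 arena

def morphogen(p):
    """Scalar morphogen concentration decaying with distance from the target."""
    return np.exp(-np.linalg.norm(p - target))

for _ in range(200):
    for k in range(len(robots)):
        # Take a small random step and keep it only if the locally sensed
        # concentration increases (finite-difference gradient climb).
        step = 0.1 * rng.standard_normal(2)
        if morphogen(robots[k] + step) > morphogen(robots[k]):
            robots[k] += step

print("mean distance to target:",
      np.linalg.norm(robots - target, axis=1).mean())
```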

Professor Anthony Vetro

FIEEE, Vice President & Director
Mitsubishi Electric Research Labs

Improving Manipulation Capabilities of Autonomous Robots

Human-level manipulation continues to be beyond the capabilities of today’s robotic systems. Not only do current industrial robots require significant time to program for a specific task, but they also lack the flexibility to generalize to other tasks and the robustness to cope with changes in the environment. While collaborative robots help to reduce programming effort and improve the user interface, they still fall short on generalization and robustness. This talk will highlight recent advances in a number of key areas to improve the manipulation capabilities of autonomous robots, including methods to accurately model the dynamics of the robot and contact forces, sensors and signal processing algorithms that provide improved perception, optimization-based decision-making and control techniques, as well as new methods of interactivity to accelerate and enhance robot learning.
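
As one generic example of the optimization-based control techniques mentioned (not MERL's specific method), the sketch below computes finite-horizon LQR gains for a double-integrator joint model via a backward Riccati recursion; the model and cost weights are illustrative assumptions.

```python
# Generic sketch of optimization-based control: finite-horizon LQR for a
# double-integrator joint, solved by a backward Riccati recursion.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])             # control: acceleration
Q = np.diag([10.0, 1.0])                # penalize tracking error and velocity
R = np.array([[0.1]])                   # penalize control effort

def lqr_gains(A, B, Q, R, horizon=100):
    """Backward Riccati recursion returning one gain matrix per time step."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

K0 = lqr_gains(A, B, Q, R)[0]
x = np.array([1.0, 0.0])                # 1 rad from the goal, at rest
print("first control action:", -K0 @ x)
```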

Professor Henry Leung

FIEEE; FSPIE, Professor
Department of Electrical and Computer Engineering, University of Calgary
Canada

Information Fusion and Decision Support for Autonomous Systems

In this talk we present our work on decision support analytics for autonomous systems. Decision support analytics processes multi-sensory information collected by an autonomous system, such as lidar, camera, RGB-D and acoustic data, to perform signal detection, target tracking and object recognition. As multiple sensors are involved, our system uses sensor registration, data association and fusion to combine the sensory information. The next layer of the proposed decision support system orients the processed sensory information at the feature and classification levels to perform situation assessment and threat evaluation. Based on the assessment, the decision support system recommends a decision. If the uncertainty is high, actions including resource allocation and planning are used to extract or reassess the sensory information in order to obtain a recommended decision with lower uncertainty. This talk will also present applications of the proposed decision support analytics in four industrial projects: 1) goal-driven, net-enabled distributed sensing for maritime surveillance, 2) autonomous navigation and perception of humanoid service robots, 3) distance learning for oil and gas drilling, and 4) cognitive vehicles.
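
To make the fusion layer concrete, here is a minimal, illustrative sketch (not the presented system) of a Kalman filter that fuses lidar and camera range measurements of a single target; the sensor noise values and constant-velocity model are assumptions for the example.

```python
# Minimal sketch of multi-sensor fusion: a Kalman filter combining lidar and
# camera position measurements of one target.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # constant-velocity target model
H = np.array([[1, 0]])                   # both sensors observe position only
Qn = 0.01 * np.eye(2)                    # process noise covariance
R_lidar, R_cam = 0.05, 0.5               # lidar is more accurate than the camera

x = np.zeros(2)                          # state estimate [position, velocity]
P = np.eye(2)

def update(x, P, z, r):
    """Standard Kalman measurement update with scalar measurement noise r."""
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for t in range(50):
    x, P = F @ x, F @ P @ F.T + Qn                                   # predict
    true_pos = 1.0 * t * dt
    x, P = update(x, P, true_pos + np.random.randn() * np.sqrt(R_lidar), R_lidar)
    x, P = update(x, P, true_pos + np.random.randn() * np.sqrt(R_cam), R_cam)

print("fused estimate [position, velocity]:", x)
```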

Professor Carlo S. Regazzoni

Professor of Cognitive Telecommunications Systems
DITEN, University of Genova
Italy

Bayesian Emergent Self Awareness

Multisensor data fusion and perception, including signal processing, are important cognitive functionalities that can be included in artificial systems to increase their level of autonomy. However, the techniques they rely on have been developed incrementally over time under the assumption that they would be used mainly to support the decision tasks driving the actions of those systems. Cognitive functionalities like self-awareness have so far not been considered a primary part of the embodied knowledge of autonomous or semi-autonomous systems. One of the reasons for this was the lack of understanding of the principles that would allow an agent, even a human one, to organize successive sensorial experiences into a coherent framework of emergent knowledge by integrating signal processing, machine learning and data fusion aspects. However, developments of the last decade in many fields have made it possible to provide integrated solutions that sketch how emergent self-awareness can be obtained by capturing the experiences of autonomous agents such as vehicles and intelligent radios. In this keynote, a Bayesian approach is presented that includes abnormality detection and incremental learning of generative predictive models as building blocks of emergent self-awareness in intelligent agents. The advantages of including emergent self-awareness in intelligent agents will also be discussed with respect to different aspects, e.g., the explainability of an agent’s actions and the capability of imitation learning.
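
A schematic sketch of the abnormality-detection building block follows, under the assumption of a simple Gaussian generative predictive model rather than the models discussed in the keynote: observations whose negative log-likelihood under the learned model exceeds a threshold are flagged as abnormal.

```python
# Schematic abnormality detection with a learned Gaussian predictive model.
import numpy as np

rng = np.random.default_rng(1)

# "Training": learn a simple generative model of one-step increments from
# normal operating data of an agent's state (e.g., vehicle speed).
normal_increments = rng.normal(loc=0.0, scale=0.2, size=1000)
mu, sigma = normal_increments.mean(), normal_increments.std()

def abnormality(increment, mu=mu, sigma=sigma):
    """Negative log-likelihood under the learned Gaussian predictive model."""
    return 0.5 * ((increment - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

threshold = abnormality(mu + 3 * sigma)   # flag anything beyond ~3 sigma
print(abnormality(0.1) > threshold)       # expected behaviour -> False
print(abnormality(2.0) > threshold)       # surprising jump    -> True
```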

Dr. Ming Hou

Defence Research and Development Canada, Toronto
Canada

Human Augmentation through AI and Autonomous Systems in Defence Context

The recent Boeing 737 MAX accidents sound the alarm once again about the need for appropriate design concepts and methodologies for developing safety-critical autonomous systems or AI functions, and for a collaborative partnership between humans and autonomous systems. It is not only about processes, training, airworthiness and certification, etc., but also about a systematic approach to designing, developing, verifying, validating and regulating those AI and autonomous systems that are supposed to augment human capability. This includes, but is certainly not limited to, robots in the air, on the ground or in the sea, as well as other Human-Machine Systems (HMSs) with certain levels of AI and autonomy. It is not only about the safety of those systems, but more importantly about human lives. The question is how to build a safe and collaborative partnership between humans and autonomous systems. This talk discusses the needs of HMS designers, developers, project managers, researchers, and all practitioners who are interested in building and using 21st-century human-autonomy symbiosis technologies (Why). The talk also covers analytical methodologies for the functional requirements of intelligent HMSs, design methodologies, implementation strategies, and evaluation approaches (How). These aspects will be explained with real-world examples in the defence context, considering the contextual constraints of technology, human capabilities and limitations, and the functionalities that AI and autonomous systems should achieve (When). The audience will gain insight into a context-based and interaction-centered design approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. Challenges and potential issues will also be discussed to guide future research and development activities when augmenting human capabilities with AI and autonomous systems.

Professor Hagit Messer

FIEEE, The Kranzberg Chair Professor in Signal Processing
School of Electrical Engineering, Tel Aviv University; IEEE Global Initiative for Ethical Considerations in AI/AS
Israel
