Seminars


Location: Atkinson Hall Auditorium

Time: 5:00pm - 7:00pm

"Observing, Learning, and Executing Fine-Grained Manipulation Activities: A Systems Perspective"

Thursday, May 27 @ 12:00pm PST

Zoom: https://ucsd.zoom.us/j/91267376688
Speaker: Gregory D. Hager

In the domain of image and video analysis, much of the deep learning revolution has been focused on narrow, high-level classification tasks that are defined through carefully curated, retrospective data sets. However, most real-world applications, particularly those involving complex, multi-step manipulation activities, occur “in the wild,” where there is a combinatorial “long tail” of unique situations that are never seen during training. These systems demand a richer, fine-grained task representation that is informed by the application context and that supports quantitative analysis and compositional synthesis. As a result, the challenges inherent in both high-accuracy, fine-grained analysis and the performance of perception-based activities are manifold, spanning representation, recognition, and task and motion planning.

In this talk, I’ll summarize our work addressing these challenges. I’ll first describe DASZL, our approach to interpretable, attribute-based activity detection. DASZL operates in both pre-trained and zero-shot settings and has been applied to a variety of applications ranging from surveillance to surgery. I’ll then describe work on machine learning approaches for systems that use prediction models to support perception-based planning and execution of manipulation tasks. Next, I’ll present “Good Robot,” a method for end-to-end training of a robot manipulation system that leverages architecture search and fine-grained task rewards to achieve state-of-the-art performance in complex, multi-step manipulation tasks. I’ll close with a brief summary of some directions we are exploring, enabled by these technologies.

Bio: Greg Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University and Founding Director of the Malone Center for Engineering in Healthcare. Professor Hager’s research interests include computer vision, vision-based and collaborative robotics, time-series analysis of image data, and applications of image analysis and robotics in medicine and manufacturing. He is a member of the CISE Advisory Committee, the Board of Directors of the Computing Research Association, and the governing board of the International Foundation of Robotics Research. He previously served as Chair of the Computing Community Consortium. In 2014, he was awarded a Hans Fischer Fellowship at the Institute for Advanced Study of the Technical University of Munich, and in 2017 he was named a TUM Ambassador. Professor Hager has served on the editorial boards of IEEE TRO, IEEE PAMI, IJCV, and ACM Transactions on Computing for Healthcare. He is a Fellow of the ACM and IEEE for his contributions to vision-based robotics, and a Fellow of AAAS, the MICCAI Society, and AIMBE for his contributions to imaging and his work on the analysis of surgical technical skill. Professor Hager is a co-founder of Clear Guide Medical and Ready Robotics. He is currently an Amazon Scholar.

"Semantic Robot Programming... and Maybe Making the World a Better Place"

May 20, 2021 @ 12:00pm PST

Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Odest Chadwicke Jenkins, Ph.D.

The vision of interconnected, heterogeneous autonomous robots in widespread use is a coming reality that will reshape our world. Similar to "app stores" for modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers. For people to fluently program autonomous robots, a robot must be able to interpret user instructions that accord with that user’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that a critical missing component is the grounding of semantic symbols in a manner that addresses both uncertainty in low-level robot perception and intentionality in high-level reasoning. Such a grounding will enable robots to fluidly work with human collaborators to perform tasks that require extended goal-directed autonomy.

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.
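
To make the declarative flavor of SRP concrete, here is a minimal sketch (illustrative only; the relation names and representation are assumptions, not the speaker's code) of a demonstrated goal scene encoded as symbolic relations and checked against a perceived world state:

```python
# Hedged sketch of a declarative goal-scene representation in the spirit of
# Semantic Robot Programming; all names and structures are illustrative.

# A goal scene: symbolic relations over object labels, e.g. extracted from a
# single user demonstration of the desired end state of the workspace.
goal_scene = {
    ("on", "mug", "tray"),
    ("on", "tray", "table"),
}

def scene_satisfied(goal, observed):
    """The demonstrated goal is met when every relation in it holds in the
    currently perceived (semantically mapped) world state."""
    return goal <= observed

# Perceived relations from the robot's semantic map (illustrative values).
observed = {("on", "mug", "tray"), ("on", "tray", "table"), ("clear", "mug")}
print(scene_satisfied(goal_scene, observed))  # True
```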

Bio: Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and the Association for the Advancement of Artificial Intelligence, and Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

"Designing the Future of Human-Robot Interaction"

May 13 @ 12pm (PST)

Location: CSE 1202
Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Dr. Holly Yanco

Abstract: Robots navigating in difficult and dynamic environments often need assistance from human operators or supervisors, either in the form of teleoperation or of interventions when the robot's autonomy cannot handle the current situation. Even in more controlled environments, such as office buildings and manufacturing floors, robots may need help from people. This talk will discuss methods for controlling both individual robots and groups of robots, in applications ranging from assistive technology to telepresence to exoskeletons. A variety of modalities for human-robot interaction, including multi-touch and virtual reality, will be presented.

Bio: Dr. Holly Yanco is a Distinguished University Professor, Professor of Computer Science, and Director of the New England Robotics Validation and Experimentation (NERVE) Center at the University of Massachusetts Lowell. Her research interests include human-robot interaction, evaluation metrics and methods for robot systems, and the use of robots in K-12 education to broaden participation in computer science. Yanco's research has been funded by NSF (including a CAREER Award), the Advanced Robotics for Manufacturing (ARM) Institute, ARO, DARPA, DOE-EM, ONR, NASA, NIST, Google, Microsoft, and Verizon. Yanco has a PhD and MS in Computer Science from the Massachusetts Institute of Technology and a BA in Computer Science and Philosophy from Wellesley College. She is an AAAI Fellow.

"Autonomous Mobile Robot Challenges and Opportunities in Domain and Mission Complex Environments"

Thursday, 6 May @ 12pm PST (in person, CSE 1202)

Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Bruce Morris

Increased autonomy can have many advantages, including increased safety and reliability, improved reaction time and performance, reduced personnel burden with associated cost savings, and the ability to continue operations in communications-degraded or denied environments. Artificial Intelligence for Small Unit Maneuver (AISUM) envisions future expeditionary tactical maneuver elements teaming with intelligent adaptive systems, and it opens analysis of how such teaming can greatly enhance mission precision, speed, and mass in complex, contested, and congested environments. The ultimate aim of this effort is to reduce risk to missions, our own forces, partners, and civilians. AISUM provides a competitive advantage in the changing physics of the competition space: robotic autonomous systems operating in dense urban clutter (interior, exterior, subterranean) and in dynamic, spectrum-denied areas, with no prior knowledge of the environment, in support of human-machine teams performing tactical maneuver and swarming tactics.

Bio: Bruce Morris currently serves as the Deputy Director of Future Concepts and Innovation at Naval Special Warfare Command. In this role, he is responsible for leading the SEAL Teams in their strategic and practical application of digital modernization to enable Artificial Intelligence and Autonomous Mobile Robotics in Human-Machine Teaming for asymmetric competitive advantage. The Future Concepts and Innovation Team was founded in 2016, under Dr. Morris’ strategic guidance, to explore and exploit innovation methodology, venture capital, and emerging technologies for U.S. Special Operations and the greater National Security ecosystem.

A native of San Diego and a third-generation Naval Officer, Bruce has dedicated his life to the service of our great nation and to the community of San Diego. Bruce received his commission from the United States Naval Academy, graduating in 1988 with a degree in Mathematics. Immediately following graduation, he reported to his hometown of Coronado, CA for Basic Underwater Demolition/SEAL Training with class 158. Bruce’s service as a Navy SEAL Officer, Information Warfare Officer, and government civilian spans over 30 years and represents a cross section of leadership roles and the consistent innovative application of science and technology on the battlefield and in garrison. Additionally, he holds an MS in Meteorology and Physical Oceanography and a PhD in Physical Oceanography from the Naval Postgraduate School, with a focus on Numerical Modelling and Ocean Dynamics.

Bruce is the recipient of the CAPT Harry T. Jenkins Memorial Award for Community Service. His in-service professional awards include the Bronze Star Medal with Combat Distinguishing Device, the Meritorious Service Medal, the Combat Action Ribbon, and other personal and unit awards, including the Navy Meritorious Civilian Service Award.

"ICRA 2021 Overview"

Thursday 29 April @ 12pm PST

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Henrik Christensen

Looking Farther in Parametric Scene Parsing with Ground and Aerial Imagery
Raghava Modhugu, Harish Rithish Sethuram, Manmohan Chandraker, C.V. Jawahar
http://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/Scene_attributes___ICRA_2021.pdf

Auto-calibration Method Using Stop Signs for Urban Autonomous Driving Applications
Yunhai Han, Yuhan Liu, David Paz, Henrik Christensen
https://arxiv.org/abs/2010.07441

Social Navigation for Mobile Robots in the Emergency Department
Angelique Taylor, Sachiko Matsumoto, Wesley Xiao, and Laurel D. Riek
http://cseweb.ucsd.edu/~lriek/papers/taylor-icra-2021.pdf

Temporal Anticipation and Adaptation Methods for Fluent Human-Robot Teaming
Tariq Iqbal and Laurel D. Riek
http://cseweb.ucsd.edu/~lriek/papers/iqbal-riek-icra-2021.pdf

Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning
Q. Feng, N. Atanasov
https://arxiv.org/abs/2101.01844

Coding for Distributed Multi-Agent Reinforcement Learning
B. Wang, J. Xie, N. Atanasov
https://arxiv.org/abs/2101.02308

Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams
X. Cai, B. Schlotfeldt, K. Khosoussi, N. Atanasov, G. J. Pappas, J. How
https://arxiv.org/abs/2101.11093

Active Bayesian Multi-class Mapping from Range and Semantic Segmentation Observations
A. Asgharivaskasi, N. Atanasov
https://arxiv.org/abs/2101.01831

Learning Barrier Functions with Memory for Robust Safe Navigation
K. Long, C. Qian, J. Cortes, N. Atanasov
https://arxiv.org/abs/2011.01899

Generalization in reinforcement learning by soft data augmentation
Nicklas Hansen, Xiaolong Wang
https://nicklashansen.github.io/SODA/

Bimanual Regrasping for Suture Needles using Reinforcement Learning for Rapid Motion Planning
Z.Y. Chiu, F. Richter, E.K. Funk, R.K. Orosco, M.C. Yip
https://arxiv.org/pdf/2011.04813.pdf

Real-to-Sim Registration of Deformable Soft-Tissue with Position-Based Dynamics for Surgical Robot Autonomy
F. Liu, Z. Li, Y. Han, J. Lu, F. Richter, M.C. Yip
https://arxiv.org/abs/2011.00800

Model-Predictive Control of Blood Suction for Surgical Hemostasis using Differentiable Fluid Simulations
J. Huang*, F. Liu*, F. Richter, M.C. Yip
https://arxiv.org/abs/2102.01436

SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction
J. Lu, A. Jayakumari, F. Richter, Y. Li, M.C. Yip
https://arxiv.org/pdf/2003.03472.pdf

Data-driven Actuator Selection for Artificial Muscle-Powered Robots
T. Henderson, Y. Zhi, A. Liu, M.C. Yip
https://arxiv.org/abs/2104.07168

Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery
J. Di, M. Xu, N. Das, M.C. Yip
https://arxiv.org/abs/2104.06348

MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning under Kinodynamic Constraints
L. Li, Y.L. Miao, A.H. Qureshi, M.C. Yip
https://arxiv.org/pdf/2101.06798.pdf

Autonomous Robotic Suction to Clear the Surgical Field for Hemostasis using Image-based Blood Flow Detection
F. Richter, S. Shen, F. Liu, J. Huang, E.K. Funk, R.K. Orosco, M.C. Yip
https://arxiv.org/abs/2010.08441

Scalable Learning of Safety Guarantees for Autonomous Systems using Hamilton-Jacobi Reachability
Sylvia Herbert, Jason J. Choi, Suvansh Qazi, Marsalis Gibson, Koushil Sreenath, Claire J. Tomlin
https://arxiv.org/abs/2101.05916

Planning under non-rational perception of uncertain spatial costs
Aamodh Suresh and Sonia Martinez
https://arxiv.org/pdf/1904.02851.pdf

"Inductive Biases for Robot Learning"

Thursday 22 April @ 12pm PST 

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Michael Lutter

Abstract: The recent advances in robot learning have been largely fueled by model-free deep reinforcement learning algorithms. These black-box methods utilize large datasets and deep networks to discover good behaviors. The existing knowledge of robotics and control is ignored and only the information contained within the data is leveraged. In this talk we want to take a different approach and evaluate the combination of knowledge and data-driven learning. We show that this combination enables sample-efficient learning for physical robots and that generic knowledge from physics and control can be incorporated in deep network representations. The use of inductive biases for robot learning yields robots that learn dynamic tasks within minutes and robust control policies for under-actuated systems.
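
As a toy illustration of such an inductive bias (a hedged sketch assuming PyTorch, not code from the talk): the joint-space inertia matrix of a rigid-body system is always symmetric positive definite, and a network can respect that physical fact by construction by predicting a Cholesky factor instead of the matrix itself:

```python
# Minimal sketch of a physics-based inductive bias: predict a joint-space
# inertia matrix that is symmetric positive definite by construction, via a
# learned Cholesky factor. Illustrative only.
import torch
import torch.nn as nn

class InertiaNet(nn.Module):
    def __init__(self, n_joints, hidden=64):
        super().__init__()
        self.n = n_joints
        # Outputs parameterize the lower-triangular factor L of M(q) = L L^T.
        n_tril = n_joints * (n_joints + 1) // 2
        self.net = nn.Sequential(
            nn.Linear(n_joints, hidden), nn.Tanh(), nn.Linear(hidden, n_tril)
        )

    def forward(self, q):
        out = self.net(q)
        L = torch.zeros(q.shape[0], self.n, self.n)
        idx = torch.tril_indices(self.n, self.n)
        L[:, idx[0], idx[1]] = out
        # Softplus keeps the diagonal strictly positive, so M is SPD.
        diag = torch.arange(self.n)
        L[:, diag, diag] = nn.functional.softplus(L[:, diag, diag]) + 1e-6
        return L @ L.transpose(1, 2)  # M(q), SPD for any network weights

M = InertiaNet(n_joints=7)(torch.randn(4, 7))  # batch of 4 configurations
```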

Bio: Michael Lutter joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in July 2017. Prior to this, Michael held a researcher position at the Technical University of Munich (TUM), working on bio-inspired learning for robotics as part of the Human Brain Project, a European H2020 FET flagship project. In addition to his studies, Michael has worked for ThyssenKrupp, Siemens, and General Electric, and he has received multiple scholarships for academic excellence and for his research.

"Patient-Specific Continuum Robotic Systems for Surgical Interventions"

Thursday 15 April @ 12pm PST (also CSE290)
Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Jaydev P. Desai
Georgia Institute of Technology

Abstract:
Over the past few decades, robotic systems for surgical interventions have undergone a tremendous transformation. The goal of a surgical intervention is to be as minimally invasive as possible, since that significantly reduces post-operative morbidity, shortens recovery time, and lowers healthcare costs. However, minimally invasive surgical intervention for a range of procedures will require a significant change in the healthcare paradigm for both diagnostic and therapeutic interventions. Advances in surgical interventions will benefit from “patient-specific robotic tools” that deliver optimal diagnosis and therapy. Hence, this talk will focus on the development of continuum, flexible, and patient-specific robotic systems for surgical interventions. Since these robotic systems could operate in an imaging environment, we will also address challenges in image-guided interventions. The talk will present examples from neurosurgery and endovascular interventions to highlight the applicability of patient-specific robotic systems for surgery.

Biography:
Dr. Jaydev P. Desai is currently a Professor at Georgia Tech in the Wallace H. Coulter Department of Biomedical Engineering. He is the founding Director of the Georgia Center for Medical Robotics (GCMR) and an Associate Director of the Institute for Robotics and Intelligent Machines (IRIM). He completed his undergraduate studies at the Indian Institute of Technology, Bombay, India, in 1993. He received his M.A. in Mathematics in 1997, and his M.S. and Ph.D. in Mechanical Engineering and Applied Mechanics in 1995 and 1998, respectively, all from the University of Pennsylvania. He was also a Post-Doctoral Fellow in the Division of Engineering and Applied Sciences at Harvard University. He is a recipient of several NIH R01 grants and the NSF CAREER award, and he was the lead inventor on the “Outstanding Invention in the Physical Science Category” at the University of Maryland, College Park, where he was formerly employed. He is also the recipient of the Ralph R. Teetor Educational Award and the IEEE Robotics and Automation Society Distinguished Service Award. He has been an invited speaker at the National Academy of Sciences “Distinctive Voices” seminar series and was invited to attend the National Academy of Engineering’s U.S. Frontiers of Engineering Symposium. He has over 190 publications, is the founding Editor-in-Chief of the Journal of Medical Robotics Research, and is Editor-in-Chief of the four-volume Encyclopedia of Medical Robotics. His research interests are primarily in the areas of image-guided surgical robotics, pediatric robotics, endovascular robotics, and rehabilitation and assistive robotics. He is a Fellow of IEEE, ASME, and AIMBE.

Director – Georgia Center for Medical Robotics (GCMR)
Associate Director - Medical Robotics and Human Augmentation, Institute for Robotics and Intelligent Machines (IRIM)
Wallace H. Coulter Department of Biomedical Engineering
Georgia Institute of Technology

"Visual Representations for Navigation and Object Detection"

Zoom Link: https://ucsd.zoom.us/j/91267376688
In-Person: Room 1202, CSE Building

Speaker: Jana Kosecka
George Mason University
cs.gmu.edu/~kosecka

 

Abstract: Advancements in reliable navigation and mapping rest to a large extent on robust, efficient, and scalable understanding of the surrounding environment. The successes of recent years have been propelled by the use of machine learning techniques for capturing the geometry and semantics of the environment from video and range sensors. I will discuss approaches to object detection, pose recovery, 3D reconstruction, and detailed semantic parsing using deep convolutional neural networks (CNNs).
While data-driven deep learning approaches have fueled rapid progress in object category recognition by exploiting large amounts of labelled data, extending this learning paradigm to previously unseen objects comes with challenges. I will discuss the role of active self-supervision provided by ego-motion for learning object detectors from unlabelled data. These powerful spatial and semantic representations can then be jointly optimized with policies for elementary navigation tasks. The presented explorations open interesting avenues for the control of embodied physical agents and suggest general strategies for the design and development of general-purpose autonomous systems.

Bio: Jana Kosecka is a Professor in the Department of Computer Science, George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania. Following her PhD, she was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is the recipient of the David Marr Prize and of the National Science Foundation CAREER Award. Jana is chair of the IEEE Technical Committee on Robot Perception, an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. She is a co-author of the monograph An Invitation to 3-D Vision: From Images to Geometric Models. Her general research interests are in Computer Vision and Robotics. In particular, she is interested in 'seeing' systems engaged in autonomous tasks, the acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.

"Design of Autonomous Vehicles @ UCSD"

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Henrik I Christensen

 

Over the last couple of years, the Autonomous Vehicle Laboratory has designed modules for autonomous micro-mobility vehicles for tasks such as campus mail delivery. The design includes new sensor setups, methods for local-scale mapping and localization, semantic modeling of the world to allow for contextual navigation, detection and tracking of other road users, and the use of simulation for verification of design decisions. The system has been tested over a six-month period. In this presentation we review our motivation, approach, methods, and results thus far. The methods developed have been validated through extensive evaluation and published at ICRA, IROS, ISER, ...

"Safe Real-World Autonomy in Uncertain and Unstructured Environments"

Zoom URL: https://ucsd.zoom.us/j/97197176606

Sylvia Herbert - UCSD

 

In this talk I will present my current and future work towards enabling safe real-world autonomy. My core focus is to enable efficient and safe decision-making in complex autonomous systems, while reasoning about uncertainty in real-world environments, including those involving human interactions. These methods draw from control theory, cognitive science, and reinforcement learning, and are backed by both rigorous theory and physical testing on robotic platforms.

First I will discuss safety for complex systems in simple environments. Traditional methods for generating safety analyses and safe controllers struggle to handle realistic complex models of autonomous systems, and therefore are stuck with simplistic models that are less accurate. I have developed scalable techniques for theoretically sound safety guarantees that can reduce computation by orders of magnitude for high-dimensional systems, resulting in better safety analyses and paving the way for safety in real-world autonomy.
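
For context on the machinery behind these safety analyses: in Hamilton-Jacobi reachability (the framework of the paper listed under the ICRA overview above), safety is characterized by the value function of a differential game. One standard formulation, following common conventions in that literature (signs and the roles of control u and disturbance d vary by reference), is:

```latex
% One common HJ variational inequality for the backward reachable tube:
% l(x) > 0 encodes safe states; u maximizes safety, d opposes it.
\min\left\{ \frac{\partial V}{\partial t}(x,t)
    + \max_{u} \min_{d} \, \nabla_x V(x,t) \cdot f(x,u,d),\;
    l(x) - V(x,t) \right\} = 0,
\qquad V(x,T) = l(x).
```

States with V(x,t) > 0 are those from which safety can be maintained on [t, T]; computing V for high-dimensional systems is exactly the scalability bottleneck described above.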

Next I will turn to complex environments. Safety analyses depend on pre-defined assumptions that will often be wrong in practice, as real-world systems inevitably encounter incomplete knowledge of the environment and other agents. Reasoning efficiently and safely in unstructured environments is an area where humans excel compared to current autonomous systems. Inspired by this, I have used models of human decision-making from cognitive science to develop algorithms that allow autonomous systems to navigate quickly and safely, adapt to new information, and reason over the uncertainty inherent in predicting humans and other agents. Combining these techniques brings us closer to the goal of safe real-world autonomy.

Bio:

Sylvia Herbert is an Assistant Professor in Mechanical and Aerospace Engineering at the University of California San Diego. Prior to joining UCSD, she received her PhD in Electrical Engineering from UC Berkeley, where she studied with Professor Claire Tomlin on safe and efficient control of autonomous systems. Before that she earned her BS/MS at Drexel University in Mechanical Engineering. She is the recipient of the UC Berkeley Chancellor’s Fellowship, NSF GRFP, UC Berkeley Outstanding Graduate Student Instructor Award, and the Berkeley EECS Demetri Angelakos Memorial Achievement Award for Altruism.

"Incorporating Structure in Deep Dynamics Model for Improved Generalization"

Zoom - https://ucsd.zoom.us/j/97197176606

Rose Yu - UCSD

 

Abstract: Recent work has shown deep learning can significantly improve the prediction of dynamical systems. However, an inability to generalize under distributional shift limits its applicability to the real world. In this talk, I will demonstrate how to incorporate relational and symmetry structure into deep learning models in a principled way to improve generalization. I will showcase applications to robotic manipulation and vehicle trajectory prediction tasks.
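
As a small, hedged illustration of what building symmetry structure into a model can look like (generic sketch assuming PyTorch, not the speaker's code): a layer built from symmetric operations is invariant to input permutations by construction, rather than having to learn that invariance from augmented data:

```python
# Minimal sketch of baking a symmetry into a model: a DeepSets-style
# permutation-invariant encoder. Illustrative only.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """phi is applied per element, then symmetric pooling (sum) makes the
    output invariant to the ordering of the input set by construction."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.rho = nn.Linear(d_hidden, d_hidden)

    def forward(self, x):          # x: (batch, set_size, d_in)
        return self.rho(self.phi(x).sum(dim=1))

enc = SetEncoder(3, 16)
x = torch.randn(2, 5, 3)
perm = x[:, torch.randperm(5)]    # reorder the set elements
assert torch.allclose(enc(x), enc(perm), atol=1e-6)  # identical output
```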

Bio: Rose Yu is an Assistant Professor at UCSD in CSE and a primary faculty member in the AI Group. She is also affiliated with HDSI, CRI, and MICS. Her research focuses on machine learning with an emphasis on large-scale spatiotemporal data. Her work has been applied across a variety of domains such as dynamical systems, healthcare, and the physical sciences. Dr. Yu received her PhD from USC and was a postdoc at Caltech. She is the recipient of numerous awards.

Reference:
[1] Deep Imitation Learning for Bimanual Robotic Manipulation
Fan Xie, Alex Chowdhury, Clara De Paolis, Linfeng Zhao, Lawson Wong, Rose Yu
Advances in Neural Information Processing Systems (NeurIPS), 2020
[2] Trajectory Prediction using Equivariant Continuous Convolution
Robin Walters, Jinxi (Leo) Li, Rose Yu
International Conference on Learning Representations (ICLR), 2021

"Abstractions in Robot Planning"

https://ucsd.zoom.us/j/97197176606

Neil T. Dantam, Colorado School of Mines

 

Abstract: Complex robot tasks require a combination of abstractions and algorithms: geometric models for motion planning, probabilistic models for perception, and discrete models for high-level reasoning. Each abstraction imposes certain requirements, which may not always hold. Robust planning systems must therefore resolve errors in abstraction. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In recent work, we present an initial approach to relaxing the completeness assumptions in motion planning.
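
The hybrid loop alluded to above can be sketched generically as follows (an illustrative task-and-motion-planning skeleton of my own, not the speaker's framework): a discrete planner proposes an action sequence, a motion planner tries to refine each action geometrically, and infeasibility is fed back as a constraint on the discrete layer:

```python
# Generic task-and-motion planning loop (illustrative sketch only):
# discrete task planning interleaved with geometric motion refinement,
# with infeasibility fed back as constraints, resolving "errors in
# abstraction" between the symbolic and geometric layers.
def plan_task(goal, constraints):
    """Discrete planner: returns a symbolic action sequence, or None."""
    ...

def refine_motion(action):
    """Motion planner: returns a trajectory, or None if infeasible."""
    ...

def tamp(goal):
    constraints = set()
    while True:
        actions = plan_task(goal, constraints)
        if actions is None:
            return None                      # unreachable under constraints
        trajectories = []
        for a in actions:
            traj = refine_motion(a)
            if traj is None:
                constraints.add(a)           # blame the failing abstract action
                break                        # replan at the task level
            trajectories.append(traj)
        else:
            return list(zip(actions, trajectories))  # fully refined plan
```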

Bio: Neil T. Dantam is an Assistant Professor of Computer Science at the Colorado School of Mines. His research focuses on robot planning and manipulation, covering task and motion planning, quaternion kinematics, discrete policies, and real-time software design.

Previously, Neil was a Postdoctoral Research Associate in Computer Science at Rice University working with Prof. Lydia Kavraki and Prof. Swarat Chaudhuri. Neil received a Ph.D. in Robotics from Georgia Tech, advised by Prof. Mike Stilman, and B.S. degrees in Computer Science and Mechanical Engineering from Purdue University. He has worked at iRobot Research, MIT Lincoln Laboratory, and Raytheon. Neil received the Georgia Tech President's Fellowship, the Georgia Tech/SAIC paper award, an American Control Conference '12 presentation award, and was a Best Paper and Mike Stilman Award finalist at HUMANOIDS '14.

"Probabilistic Robotics and Autonomous Driving"

 https://ucsd.zoom.us/j/97197176606

Wolfram Burgard, Toyota Research Institute

 

Abstract: For autonomous robots and automated driving, the capability to robustly perceive their environments and execute their actions is the ultimate goal. The key challenge is that no sensors and actuators are perfect, which means that robots and cars need the ability to properly deal with the resulting uncertainty. In this presentation, I will introduce the probabilistic approach to robotics, which provides a rigorous statistical methodology for solving the state estimation problem. I will furthermore discuss how this approach can be extended using state-of-the-art technology from machine learning to bring us closer to the development of truly robust systems able to serve us in our everyday lives. In this context, I will focus in particular on the data advantage that the Toyota Research Institute plans to leverage, in combination with self- and semi-supervised methods for machine learning, to speed up the development of self-driving cars.
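
The probabilistic approach mentioned above is typically formalized as Bayesian filtering, which alternates a prediction step under the motion model with a correction step under the measurement likelihood. A minimal discrete (histogram) Bayes filter sketch, illustrative only, with made-up numbers:

```python
# Minimal discrete Bayes filter: the core of the probabilistic approach to
# state estimation, over N discrete states. Illustrative sketch only.
import numpy as np

def bayes_filter_step(belief, motion_model, likelihood):
    """belief: p(x_{t-1} | z_{1:t-1}), shape (N,).
    motion_model: M[i, j] = p(x_t = i | x_{t-1} = j), columns sum to 1.
    likelihood: p(z_t | x_t = i), shape (N,)."""
    predicted = motion_model @ belief      # prediction (action update)
    posterior = likelihood * predicted     # correction (measurement update)
    return posterior / posterior.sum()     # normalize

# Toy example: robot in a 1-D corridor of 5 cells, noisy right-moves.
belief = np.full(5, 0.2)                          # uniform prior
move = 0.8 * np.eye(5, k=-1) + 0.2 * np.eye(5)    # right w.p. 0.8, stay w.p. 0.2
move[4, 4] = 1.0                                  # right edge is absorbing
z = np.array([0.1, 0.1, 0.7, 0.1, 0.1])           # sensor: "probably cell 2"
belief = bayes_filter_step(belief, move, z)
print(belief.round(3))
```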

Bio: Wolfram Burgard received the Ph.D. degree in computer science from the University of Bonn, Bonn, Germany, in 1991. He is currently VP of Autonomous Driving at the Toyota Research Institute and a Professor of Computer Science at the University of Freiburg, Freiburg, Germany, where he heads the Laboratory for Autonomous Intelligent Systems. In the past, he developed several innovative probabilistic techniques for robot navigation and control, covering different aspects such as localization, map building, path planning, and exploration. His research interests include artificial intelligence and mobile robots. Dr. Burgard has received several Best Paper Awards from outstanding national and international conferences. In 2009, he was the recipient of the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award.

"Learning Adaptive Models for Human-Robot Teaming" 

Atkinson Hall 4004/4006

Thomas Howard - University of Rochester

"Key Challenges in Agricultural Robotics with Examples of Ground Vehicle Localization in Orchards and Task-Specific Manipulator Design for Fruit Harvesting"

EBU 1 - Qualcomm Conference Room 

Amir Degani - Technion (Israel Institute of Technology) 

 

Dr Amir Degani is an Associate Professor at the Technion - Israel Institute of Technology. Dr Degani is the Director of the Civil, Environmental, and Agricultural Robotics (CEAR) Laboratory, researching robotic legged locomotion and autonomous systems in civil and agricultural applications. His research program includes mechanism analysis, synthesis, control, motion planning, and design, with an emphasis on minimalistic concepts and the study of nonlinear dynamic hybrid systems.

His talk will present the need for robotics in agriculture and focus on example solutions to two different problems. The first is the localization of an autonomous ground vehicle in a homogeneous orchard environment. Typical localization approaches are not suited to the characteristics of the orchard environment, especially its homogeneous scenery. To alleviate these difficulties, Dr Degani and his colleagues use top-view images of the orchard acquired in real time. The top-view observation of the orchard provides a unique signature for every tree, formed by the shape of its canopy. This effectively overturns the homogeneity premise in orchards and paves the way for addressing the “kidnapped robot problem”.

The second part of the talk will focus on efforts to define and perform task-based optimization for an apple-harvesting robot. Since there is a large variation between trees, instead of performing this laborious optimization on many trees, Dr Degani and his colleagues look for a “lower dimensional” characterization of the trees. Moreover, the shape of the tree (i.e., the environment) has a major influence on the robot’s simplicity. Therefore, Dr Degani and his colleagues strive to find the best training system for a tree to help simplify the robot’s design.

"The Business of Robotics: An introduction to the commercial robotics landscape, and considerations for identifying valuable robot opportunities."

Center for Memory Recording Research (CMRR)

Phil Duffy - Brain Corp

Phil Duffy is the Vice President of Innovations at Brain Corp. As VP of Innovations, he leads product commercialization activities at Brain Corp to discover novel product and market opportunities for autonomous mobile robotics. His team is responsible for defining product strategy for Brain Corp's AI technology, BrainOS. A serial entrepreneur and product strategist, Phil has a proven track record of growing technology start-ups and of commercializing and launching innovative robotic products in the B2B and B2C markets. Phil joined Brain Corp in 2014 and brings with him more than 20 years of leadership experience in product management, marketing, and manufacturing in China. This talk will provide an overview of the commercial robotics landscape, identify valuable robot opportunities, and focus on important elements to consider when developing and marketing robotics technology.

"Efficient memory-usage techniques in deep neural networks via a graph-based approach"

Qualcomm Conference Room (EBU-1)

Salimeh Yasaei Sekeh - University of Maine 

Dr. Salimeh Yasaei Sekeh is an Assistant Professor of Computer Science in the School of Computing and Information Sciences at the University of Maine. Her research focuses on designing and analyzing machine learning algorithms, deep learning techniques, applications of machine learning approaches to real-time problems, data mining, pattern recognition, and network structure learning with applications in biology. This talk introduces two new and efficient deep memory-usage techniques based on a geometric dependency criterion. The first technique, Online Streaming Deep Feature Selection, is based on a novel supervised streaming setting; it measures deep feature relevance while maintaining a minimal deep feature subset with relatively high classification performance and lower memory requirements. The second technique, Geometric Dependency-based Neuron Trimming, is a data-driven pruning method that evaluates the relationship between nodes in consecutive layers: a dependency-based pruning score removes the least important neurons, and the network is then fine-tuned to retain its predictive power. Both methods are evaluated on several data sets with multiple CNN models and achieve significant memory compression compared to the baselines.
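
To make the trimming idea concrete, here is a generic, hedged sketch of score-based neuron pruning between two fully connected layers (assuming PyTorch; the L1 outgoing-weight score below is a simple stand-in, not the talk's geometric dependency criterion):

```python
# Generic neuron-trimming sketch. The talk scores neurons by a geometric
# dependency criterion between consecutive layers; as a stand-in, this sketch
# scores each hidden unit by the L1 norm of its outgoing weights, removes the
# lowest-scoring units, and leaves fine-tuning to the user. Illustrative only.
import torch
import torch.nn as nn

def trim_neurons(layer_in: nn.Linear, layer_out: nn.Linear, keep: int):
    """Remove the lowest-scoring hidden units between two linear layers."""
    # Score hidden unit j by how strongly the next layer depends on it.
    scores = layer_out.weight.abs().sum(dim=0)          # (hidden,)
    keep_idx = scores.topk(keep).indices.sort().values  # indices to retain
    new_in = nn.Linear(layer_in.in_features, keep)
    new_out = nn.Linear(keep, layer_out.out_features)
    with torch.no_grad():
        new_in.weight.copy_(layer_in.weight[keep_idx])
        new_in.bias.copy_(layer_in.bias[keep_idx])
        new_out.weight.copy_(layer_out.weight[:, keep_idx])
        new_out.bias.copy_(layer_out.bias)
    return new_in, new_out  # then fine-tune to recover predictive power

f1, f2 = nn.Linear(10, 64), nn.Linear(64, 5)
g1, g2 = trim_neurons(f1, f2, keep=32)   # 64 -> 32 hidden units
```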

"Human-Machine Teaming at the Robotics Research Center at West Point" 

Qualcomm Conference Room (EBU-1) 

Misha Novitzky - West Point 

Dr. Misha Novitzky is an Assistant Professor in the Robotics Research Center at the United States Military Academy at West Point. His work focuses on human-machine teaming for cooperative tasks in stressful and unconstrained environments. This talk will provide a brief overview of the various projects being conducted by the Robotics Research Center at the United States Military Academy, located in West Point, New York. In particular, the talk will focus on human-machine teaming. Most human-robot interaction or teaming research is performed in structured and sterile environments; our goal is to take human-machine teaming outside, into unstructured and stressful environments. As part of this effort, we will describe Project Aquaticus, in which humans and robots were embedded in the marine environment and played games of capture the flag against similarly situated teams, and present results of our pilot studies. While Project Aquaticus was previously performed at the Massachusetts Institute of Technology, we will describe why the Robotics Research Center at West Point is an exceptional location for future human-machine teaming research.

"Introducing Qualcomm Snapdragon Ride"

Center for Memory and Recording Research (CMRR)

Ahmed Sadek - Qualcomm 

The Criticality of Systems Engineering to Autonomous Air Vehicle Development

Ariele Sparks - Northrop Grumman

Systems Engineering the World's Most Energetic Laser

Robert Plummer - LLNL

Applying a Decision Theoretic Framework for Evaluating System Trade-Offs

Nirmal Velayudhan - ViaSat

Scaling the Third Dimension in Silicon Integration

Srinivas Chennupaty - Intel

The Cost of Taking Shortcuts

David Harris - Cubic Transportation Systems

An Integrated Medium Earth Orbit - Low Earth Orbit Navigation, Communication and Authentication System of Systems

David Whelan - UC San Diego

Handling Scale (System and Developer) and Reliability in Large and Critical Systems

Sagnik Nandy - Google

People-First Systems Engineering: Challenges and Opportunities in Smart Cities

Jeff Lorbeck - Qualcomm