Seminars


Location: Atkinson Hall Auditorium

Time: 5:00pm - 7:00pm

"Deploying autonomous vehicles for micro-mobility on a university campus"

Thursday 13 January @ 11AM PST

Join Zoom Meeting:  https://ucsd.zoom.us/j/99055507432

Speaker: Henrik Christensen

 

Abstract: Autonomous vehicles are already deployed on the interstates. Providing robust autonomous systems for urban environments is a different challenge: the road network is more complex, there are many more types of road users (cars, bikes, pedestrians, …), and the potential interactions are more complex. In an urban environment it is also harder to use pre-computed HD maps, as the world is much more dynamic. We have studied the design of micro-mobility solutions for the UCSD campus. In this presentation we will discuss an overall systems design that tries to eliminate HD maps and replace them with coarse topological maps such as OpenStreetMap, fuses vision and lidar for semantic mapping and localization, and detects and handles other road users. The system has been deployed for a 6-month period to evaluate robustness across weather, seasonal changes, etc. We will present both the underlying methods and experimental insights.
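For readers unfamiliar with the coarse topological-map idea mentioned in the abstract, the short sketch below illustrates it (this is a generic illustration, not the speaker's system; the node names, edge weights, and use of the networkx library are assumptions made for the example): intersections become graph nodes, road segments become weighted edges, and a route is found by ordinary graph search, leaving lane-level detail to onboard perception.

# Minimal sketch: route planning over a coarse topological map
# (hypothetical nodes/edges standing in for an OpenStreetMap extract).
import networkx as nx

# Nodes are intersections; edges are road segments weighted by length in meters.
campus = nx.Graph()
campus.add_edge("gilman_entrance", "library_walk", weight=350.0)
campus.add_edge("library_walk", "warren_mall", weight=420.0)
campus.add_edge("gilman_entrance", "voigt_dr", weight=500.0)
campus.add_edge("voigt_dr", "warren_mall", weight=300.0)

# A coarse route is a sequence of intersections; local perception
# (vision/lidar semantic mapping) would handle lane-level execution.
route = nx.shortest_path(campus, "gilman_entrance", "warren_mall", weight="weight")
print(route)  # e.g. ['gilman_entrance', 'voigt_dr', 'warren_mall']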

Bio:

Henrik I. Christensen is the Qualcomm Chancellor's Chair of Robot Systems and the director of robotics at UC San Diego. Dr. Christensen does research on a systems approach to robotics: solutions need a solid theoretical basis, effective algorithms, a good implementation, and evaluation in realistic scenarios. He has made contributions to distributed systems, SLAM, and systems engineering. Henrik is also the main editor of the US National Robotics Roadmap. He serves or has served on a significant number of editorial boards (PAMI, IJRR, JFR, RAS, Aut Sys, …).

"Robots with Physical Intelligence"

Thursday, December 2nd @ 11 AM PST - Zoom

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker: Sangbae Kim

http://meche.mit.edu/people/faculty/SANGBAE@MIT.EDU

 

While industrial robots are effective in repetitive, precise kinematic tasks in factories, the design and control of these robots are not suited for the physically interactive tasks that humans perform easily. These tasks require ‘physical intelligence’ through complex dynamic interactions with environments, whereas conventional robots are designed primarily for position control. In order to develop a robot with ‘physical intelligence’, we first need a new type of machine that allows dynamic interactions. This talk will discuss how the new design paradigm allows dynamic interactive tasks. As an embodiment of such a robot design paradigm, the latest version of the MIT Cheetah robots and force-feedback teleoperation arms will be presented. These robots are equipped with proprioceptive actuators, a new design paradigm for dynamic robots. This new class of actuators will play a crucial role in developing ‘physical intelligence’ and future robot applications such as elderly care, home service, delivery, and services in environments unfavorable for humans.

Bio:

Sangbae Kim is the director of the Biomimetic Robotics Laboratory and a professor of Mechanical Engineering at MIT. His research focuses on bio-inspired robot design achieved by extracting principles from animals. Kim’s achievements include creating the world’s first directional adhesive inspired by gecko lizards and a climbing robot named Stickybot that utilizes the directional adhesive to climb smooth surfaces. TIME Magazine named Stickybot one of the best inventions of 2006. One of Kim’s recent achievements is the development of the MIT Cheetah, a robot capable of stable running outdoors up to 13 mph and autonomous jumping over obstacles at the efficiency of animals. Kim is a recipient of best paper awards from ICRA (2007), the King-Sun Fu Memorial TRO (2008), and IEEE/ASME TMECH (2016). Additionally, he received a DARPA YFA (2013), an NSF CAREER award (2014), and a Ruth and Joel Spira Award for Distinguished Teaching (2015).

"Toward Object Manipulation Without Explicit Models"

Thursday 18 November @ 11AM PST (virtual)

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Dieter Fox 

 

The prevalent approach to object manipulation is based on the availability of explicit 3D object models. By estimating the pose of such object models in a scene, a robot can readily reason about how to pick up an object, place it in a stable position, or avoid collisions. Unfortunately, assuming the availability of object models constrains the settings in which a robot can operate, and noise in estimating a model's pose can result in brittle manipulation performance. In this talk, I will discuss our work on learning to manipulate unknown objects directly from visual (depth) data. Without any explicit 3D object models, these approaches are able to segment unknown object instances, pick up objects in cluttered scenes, and re-arrange them into desired configurations. I will also present recent work on combining pre-trained language and vision models to efficiently teach a robot to perform a variety of manipulation tasks. I'll conclude with our initial work toward learning implicit representations for objects.

Bio:

Dieter Fox is Senior Director of Robotics Research at NVIDIA and Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany.  His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE, AAAI, and ACM, and recipient of the 2020 Pioneer in Robotics and Automation Award.  Dieter also received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.

"From Bio-inspiration to Robotic Applications"

Thursday 4 November @ 11AM PST (Virtual Seminar)

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Howie Choset

                      Carnegie Mellon University

                      http://biorobotics.org

 

The animal kingdom is full of both human and non-human animals worthy of investigation, emulation and re-creation. As such, my research group has created a comprehensive research program focusing on biologically-inspired robots, and has applied them to search and rescue, minimally invasive surgery, manufacturing, and recycling. These robots inspire great scientific challenges in mechanism design, control, planning and estimation theory. These research topics are important because once the robot is built (design), it must decide where to go (path planning), determine how to get there (control), and use feedback to close the loop (estimation). A common theme to these research foci is devising ways by which we can reduce multi-dimensional problems to low dimensional ones for planning, analysis, and optimization. In this talk, I will discuss our results in geometric mechanics, Bayesian filtering, scalable multi-agent planning, and application and extension of modern machine learning techniques to support these reductions. This talk will also cover how my students and I commercialized these technologies by founding three companies: Medrobotics, Hebi Robotics, and Bito Robotics. If time permits, I will also discuss my educational activities, especially at the undergraduate level, with a course using LEGO robots, and the role of entrepreneurism in University education.

Bio:

Howie Choset is a Professor of Robotics at Carnegie Mellon University where he serves as the co-director of the Biorobotics Lab and as director of the Robotics Major. He received his undergraduate degrees in Computer Science and Business from the University of Pennsylvania in 1990. Choset received his Master's and PhD from Caltech in 1991 and 1996. Choset's research group reduces complicated high-dimensional problems found in robotics to low-dimensional simpler ones for design, analysis, and planning. Motivated by applications in confined spaces, Choset has created a comprehensive program in modular, high-DOF, and multi-robot systems, which has led to basic research in mechanism design, path planning, motion planning, and estimation. In addition to publications, this work has led Choset, along with his students, to form several companies, including Medrobotics, for surgical systems, Hebi Robotics, for modular robots, and Bito Robotics, for autonomous guided vehicles. Recently, Choset's surgical snake robot was cleared by the FDA and has since been in use in the US and Europe. Choset also leads multi-PI projects centered on manufacturing: (1) automating the programming of robots for auto-body painting; (2) the development of mobile manipulators for agile and flexible fixture-free manufacturing of large structures in aerospace; and (3) the creation of a data-robot ecosystem for rapid manufacturing in the commercial electronics industry. Choset co-led the formation of the Advanced Robotics for Manufacturing Institute, a $250MM national institute advancing both technology development and education for robotics in manufacturing. Finally, Choset is a founding Editor of the journal Science Robotics and is currently serving on the editorial board of IJRR.

"3D Perception for Autonomous Systems"

Thursday 28 October @ 11AM PST

Speaker: Camillo Jose (CJ) Taylor
Raymond S. Markowitz President's Distinguished Professor
Computer and Information Science Dept and GRASP Laboratory
University of Pennsylvania  

 

Building a 3D representation of the environment has been a critical issue for researchers working on mobile robotics. It is typically an essential component of systems that must navigate and act in the world. In this talk I will describe some of the algorithms we have developed to address this problem in a variety of contexts including: building parsimonious models of indoor spaces, using deep learning to construct low dimensional models of scene structure, and our recent work on building robust real-time 3D SLAM systems to make sense of LIDAR data.

Bio: Dr. Taylor received his A.B. degree in Electrical Computer and Systems Engineering from Harvard College in 1988 and his M.S. and Ph.D. degrees from Yale University in 1990 and 1994 respectively. Dr. Taylor was the Jamaica Scholar in 1984, a member of the Harvard chapter of Phi Beta Kappa, and held a Harvard College Scholarship from 1986-1988. From 1994 to 1997 Dr. Taylor was a postdoctoral researcher and lecturer with the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. He joined the faculty of the Computer and Information Science Department at the University of Pennsylvania in September 1997. He received an NSF CAREER award in 1998 and the Lindback Minority Junior Faculty Award in 2001. In 2012 he received a best paper award at the IEEE Workshop on the Applications of Computer Vision. Dr. Taylor's research interests lie primarily in the fields of Computer Vision and Robotics and include: reconstruction of 3D models from images, vision-guided robot navigation, and scene understanding. Dr. Taylor has served as an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence. He has also served on numerous conference organizing committees; he is a General Chair of the International Conference on Computer Vision (ICCV) 2021 and was a Program Chair of the 2006 and 2017 editions of the IEEE Conference on Computer Vision and Pattern Recognition and of the 2013 edition of 3DV. In 2012 he was awarded the Christian R. and Mary F. Lindback Foundation Award for Distinguished Teaching at the University of Pennsylvania.

Webpage: https://www.cis.upenn.edu/~cjtaylor/

"Reinventing Human-Robot Interaction for Companion Robots"

Thursday 21 October @ 11AM PST (Virtual Seminar)

 Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Paolo Pirjanian, Ph.D., Founder/CEO of Embodied and former CTO of iRobot

 

Previous solutions to HRI have taken a piecemeal approach to building a social interface and have failed. Many solutions merely copy the command-response conversation pattern, e.g., from Alexa, onto a robot which can be awkward and unnatural. Most social robots attempt to add human qualities but fall short in true social interaction: simply adding “eyes” that do not look at you is uncanny. Using a touch screen “face” as a means of input is a step backwards and poking a character’s “face” encourages inappropriate social behavior. Having a faux body but no means to express body language seems lifeless and lacks embodiment. These piecemeal solutions miss the point. Social interaction does not require perfect anthropomorphic form-factor but it does need a minimum set of affordances to have successful and believable agency, something that we can learn from Pixar, Disney and the like.

At Embodied we have been rethinking and reinventing how human-machine interaction is done - where a user can have a fluid natural conversation with a robot; and where the robot can discern who to address and how to proactively engage and use subtle eye gaze, facial expressions, and body language as part of its response.

In this presentation, Paolo Pirjanian will discuss Embodied’s solution and our first product, Moxie, targeting children as a solution to promote social, emotional and cognitive skills.

Bio:

Paolo Pirjanian received his M.Sc. and Ph.D. in CSE from Aalborg University. He was a Post-Doc at USC and then a research scientist at JPL. From JPL he came to Evolution Robotics, where he was the CTO and later the CEO before it was acquired by iRobot. He then served as the CTO of iRobot and is the founder and CEO of Embodied. He is a NASA scientist turned robotics entrepreneur who has helped create technologies for many products, ranging from the Sony AIBO to the iRobot Roomba and most recently Moxie.

"Enabling Grounded Language Communication for Human-Robot Teaming "

Thursday 14 October @ 11AM (Virtual)

Join URL: https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Thomas Howard

 

The ability for robots to effectively understand natural language instructions and convey information about their observations and interactions with the physical world is highly dependent on the sophistication and fidelity of the robot’s representations of language, environment, and actions.  As we progress towards more intelligent systems that perform a wider range of tasks in a greater variety of domains, we need models that can adapt their representations of language and environment to achieve the real-time performance necessitated by the cadence of human-robot interaction within the computational resource constraints of the platform.  In this talk I will review my laboratory’s research on algorithms and models for robot planning, mapping, control, and interaction with a specific focus on language-guided adaptive perception and bi-directional communication with deliberative interactive estimation.    

Bio:

Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering at the University of Rochester. He also holds secondary appointments in the Department of Biomedical Engineering and the Department of Computer Science, is an affiliate of the Goergen Institute for Data Science, and directs the University of Rochester's Robotics and Artificial Intelligence Laboratory. Previously, he held appointments as a research scientist and a postdoctoral associate at MIT's Computer Science and Artificial Intelligence Laboratory in the Robust Robotics Group, a research technologist at the Jet Propulsion Laboratory in the Robotic Software Systems Group, and a lecturer in mechanical engineering at Caltech.

Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009 in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction with a research focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments with applications to robot motion planning, natural language understanding, and human-robot teaming.  Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the 2007 DARPA Urban Challenge.  Howard has earned Best Paper Awards at RSS (2016) and IEEE SMC (2017), two NASA Group Achievement Awards (2012, 2014), was a finalist for the ICRA Best Manipulation Paper Award (2012) and was selected for the NASA Early Career Faculty Award (2019).  Howard’s research at the University of Rochester has been supported by National Science Foundation, Army Research Office, Army Research Laboratory, Department of Defense Congressionally Directed Medical Research Program, National Aeronautics and Space Administration, and the New York State Center of Excellence in Data Science. 

Faculty Host: Nikolay Atanasov - ECE

"Beyond the Label: Robots that Reason about Object Semantics"

Thursday 7th October @ 11AM PST

Zoom:  https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Sonia Chernova

 

Reliable operation in everyday human environments – homes, offices, and businesses – remains elusive for today’s robotic systems. A key challenge is diversity, as no two homes or businesses are exactly alike. However, despite the innumerable unique aspects of any home, there are many commonalities as well, particularly about how objects are placed and used. These commonalities can be captured in semantic representations, and then used to improve the autonomy of robotic systems by, for example, enabling robots to infer missing information in human instructions, efficiently search for objects, or manipulate objects more effectively. In this talk, I will discuss recent advances in semantic reasoning, particularly focusing on semantics of everyday objects, household environments, and the development of robotic systems that intelligently interact with their world.

Bio:

Sonia Chernova is an Associate Professor in the College of Computing at Georgia Tech. She directs the Robot Autonomy and Interactive Learning lab, where her research focuses on the development of intelligent and interactive autonomous systems. Chernova’s contributions span robotics and artificial intelligence, including semantic reasoning, adaptive autonomy, human-robot interaction, and explainable AI. She has authored over 100 scientific papers and is the recipient of the NSF CAREER, ONR Young Investigator, and NASA Early Career Faculty awards. She also leads the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING).

"Learning Where to Trust Unreliable Models for Deformable Object Manipulation"

Thursday, Sept. 30 @ 11AM PST 

Zoom: https://ucsd.zoom.us/j/94406976474

Speaker:  Professor Dmitry Berenson

 

The world outside our labs seldom conforms to the assumptions of our models. This is especially true for dynamics models used in control and motion planning for complex high-DOF systems like deformable objects. We must develop better models, but we must also accept that, no matter how powerful our simulators or how big our datasets, our models will sometimes be wrong. This talk will present our recent work on using unreliable dynamics models for the manipulation of deformable objects, such as rope. Given a dynamics model, our methods learn where that model can be trusted given either batch data or online experience. These approaches allow dynamics models to generalize to control and planning tasks in novel scenarios, while requiring much less data than baseline methods. This data-efficiency is a key requirement for scalable and flexible manipulation capabilities.
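The sketch below is a toy, hedged illustration of the general "learn where the model can be trusted" idea described in the abstract (it is not the speaker's method; the one-dimensional dynamics, error threshold, and use of scikit-learn are invented for illustration): transitions are labeled by whether a nominal model's prediction error is small, and a classifier over (state, action) is fit so that a planner can later avoid untrusted regions.

# Toy sketch: learn where a (deliberately wrong) nominal dynamics model is reliable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # "Real world": the nominal model breaks down once x exceeds ~1 (e.g., a rope goes taut).
    return x + u - 0.5 * np.maximum(x - 1.0, 0.0)

def nominal_model(x, u):
    return x + u  # simple model a planner might use

# Collect transitions and label them: 1 = model trusted, 0 = not trusted.
X = rng.uniform(-3, 3, size=(2000, 2))          # columns: state x, action u
pred = nominal_model(X[:, 0], X[:, 1])
actual = true_dynamics(X[:, 0], X[:, 1])
trusted = (np.abs(pred - actual) < 0.1).astype(int)

clf = LogisticRegression().fit(X, trusted)

# A planner could then down-weight or reject candidate actions whose
# (state, action) pair falls outside the trusted region.
print(clf.predict_proba([[0.5, 0.2], [2.5, 0.2]])[:, 1])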

Bio: Dmitry Berenson is an Associate Professor in Electrical Engineering and Computer Science and the Robotics Institute at the University of Michigan, where he has been since 2016. Before coming to University of Michigan, he was an Assistant Professor at WPI (2012-2016). He received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He was also a post-doc at UC Berkeley (2011-2012). He has received the IEEE RAS Early Career Award and the NSF CAREER award. His current research focuses on robotic manipulation, robot learning, and motion planning.

"CRI Seminars Speaker Fall 2021"

We will have presentations from:  

Sep 23 - H.I. Christensen, UCSD - IROS Papers
Sep 30 - Dmitry Berenson, UMICH - Collaborative Robots
Oct 7 - Sonia Chernova, GT - Human Collaborative Systems
Oct 14 - Tom Howard, UR - Enabling Grounded Language Models
Oct 21 - Paolo Pirjanian - Embodied Robotics
Oct 28 - CJ Taylor, UPENN - Perceptual Robotics
Nov 4 - Howie Choset, CMU - Robot Mechanisms
Nov 18 - Dieter Fox, UW - Robots & ML
Dec 2 - Sangbae Kim, MIT - Robot Mobility

The seminars are every Thursday @ 11am.  

Zoom info:  https://ucsd.zoom.us/j/94406976474 

 

"Observing, Learning, and Executing Fine-Grained Manipulation Activities: A Systems Perspective"

Thursday, May 27 @ 12:00pm PST
 

Zoom: https://ucsd.zoom.us/j/91267376688
Speaker: Gregory D. Hager

In the domain of image and video analysis, much of the deep learning revolution has been focused on narrow, high-level classification tasks that are defined through carefully curated, retrospective data sets. However, most real-world applications – particularly those involving complex, multi-step manipulation activities -- occur “in the wild" where there is a combinatorial “long tail” of unique situations that are never seen during training. These systems demand a richer, fine-grained task representation that is informed by the application context, and which supports quantitative analysis and compositional synthesis. As a result, the challenges inherent in both high-accuracy, fine-grained analysis and performance of perception-based activities are manifold, spanning representation, recognition, and task and motion planning.

In this talk, I’ll summarize our work addressing these challenges. I’ll first describe DASZL, our approach to interpretable, attribute-based activity detection. DASZL operates in both pre-trained and zero-shot settings and has been applied to a variety of applications ranging from surveillance to surgery. I’ll then describe work on machine learning approaches for systems that use prediction models to support perception-based planning and execution of manipulation tasks. Next, I’ll present recent work on “Good Robot,” a method for end-to-end training of a robot manipulation system which leverages architecture search and fine-grained task rewards to achieve state-of-the-art performance in complex, multi-step manipulation tasks. I’ll close with a brief summary of some directions we are exploring, enabled by these technologies.

Bio: Greg Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University and Founding Director of the Malone Center for Engineering in Healthcare. Professor Hager’s research interests include computer vision, vision-based and collaborative robotics, time-series analysis of image data, and applications of image analysis and robotics in medicine and in manufacturing. He is a member of the CISE Advisory Committee, the Board of Directors of the Computing Research Association, and the governing board of the International Foundation of Robotics Research. He previously served as Chair of the Computing Community Consortium. In 2014, he was awarded a Hans Fischer Fellowship in the Institute for Advanced Study of the Technical University of Munich and in 2017 was named a TUM Ambassador. Professor Hager has served on the editorial boards of IEEE TRO, IEEE PAMI, IJCV, and ACM Transactions on Computing for Healthcare. He is a Fellow of the ACM and IEEE for his contributions to vision-based robotics and a Fellow of AAAS, the MICCAI Society, and AIMBE for his contributions to imaging and his work on the analysis of surgical technical skill. Professor Hager is a co-founder of Clear Guide Medical and Ready Robotics. He is currently an Amazon Scholar.

"Semantic Robot Programming... and Maybe Making the World a Better Place"

May 20, 2021 @ 12:00pm PST

Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Odest Chadwicke Jenkins, Ph.D.

The visions of interconnected heterogeneous autonomous robots in widespread use are a coming reality that will reshape our world. Similar to "app stores" for modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms to program robots beyond simple cases remain inaccessible to all but the most sophisticated of developers and researchers. In order for people to fluently program autonomous robots, a robot must be able to interpret user instructions that accord with that user’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit a critical missing component is the grounding of semantic symbols in a manner that addresses both uncertainty in low-level robot perception and intentionality in high-level reasoning. Such a grounding will enable robots to fluidly work with human collaborators to perform tasks that require extended goal-directed autonomy.

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

Bio: Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and the Association for the Advancement of Artificial Intelligence, and Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

"Designing the Future of Human-Robot Interaction"

May 13 @ 12pm (PST) 
 

Location: CSE 1202
Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Dr. Holly Yanco

Abstract: Robots navigating in difficult and dynamic environments often need assistance from human operators or supervisors, either in the form of teleoperation or interventions when the robot's autonomy is not able to handle the current situation. Even in more controlled environments, such as office buildings and manufacturing floors, robots may need help from people. This talk will discuss methods for controlling both individual robots and groups of robots, in applications ranging from assistive technology to telepresence to exoskeletons.  A variety of modalities for human-robot interaction with robot systems, including multi-touch and virtual reality, will be presented.

Bio: Dr. Holly Yanco is a Distinguished University Professor, Professor of Computer Science, and Director of the New England Robotics Validation and Experimentation (NERVE) Center at the University of Massachusetts Lowell. Her research interests include human-robot interaction, evaluation metrics and methods for robot systems, and the use of robots in K-12 education to broaden participation in computer science. Yanco's research has been funded by NSF, including a CAREER Award, the Advanced Robotics for Manufacturing (ARM) Institute, ARO, DARPA, DOE-EM, ONR, NASA, NIST, Google, Microsoft, and Verizon. Yanco has a PhD and MS in Computer Science from the Massachusetts Institute of Technology and a BA in Computer Science and Philosophy from Wellesley College. She is an AAAI Fellow.

"Autonomous Mobile Robot Challenges and Opportunities in Domain and Mission Complex Environments"

Thursday, 6 May @ 12pm PST (in person CSE 1202)

Zoom: https://ucsd.zoom.us/j/91267376688

Speaker: Bruce Morris

Increased autonomy can have many advantages, including increased safety and reliability, improved reaction time and performance, reduced personnel burden with associated cost savings, and the ability to continue operations in communications-degraded or denied environments. Artificial Intelligence for Small Unit Maneuver (AISUM) envisions a way for future expeditionary tactical maneuver elements to team with intelligent adaptive systems. Accordingly, it opens analyses of how such teaming will greatly enhance mission precision, speed, and mass in complex, contested, and congested environments. The ultimate aim in this effort is to reduce risk to missions and our own force, partners, and civilians. AISUM provides competitive advantage in the changing physics of competition space via robotic autonomous systems in dense urban clutter (interior, exterior, subterranean) and dynamic, spectrum-denied areas, with no prior knowledge of the environment in support of human-machine teams for tactical maneuver and swarming tactics.

Bio: Bruce Morris currently serves as the Deputy Director of Future Concepts and Innovation at Naval Special Warfare Command. In this role, he is responsible for leading the SEAL Teams in their strategic and practical application of digital modernization to enable Artificial Intelligence and Autonomous Mobile Robotics in Human-Machine Teaming for asymmetric competitive advantage. The Future Concepts and Innovation Team was founded in 2016, under Dr. Morris’ strategic guidance, to explore and exploit innovation methodology, venture capital, and emerging technologies for U.S. Special Operations and the greater National Security ecosystem.

A native of San Diego and a third-generation Naval Officer, Bruce has dedicated his life to the service of our great nation and to the community of San Diego. Bruce received his commission from the United States Naval Academy, graduating in 1988 with a degree in Mathematics. Immediately following graduation, he reported to his hometown of Coronado, CA for Basic Underwater Demolition/SEAL Training with class 158. Bruce's service as a Navy SEAL Officer, Information Warfare Officer, and government civilian spans over 30 years and represents a cross section of leadership roles and the consistent innovative applications of science and technology on the battlefield and in garrison. Additionally, he holds an MS in Meteorology and Physical Oceanography and a PhD in Physical Oceanography from the Naval Postgraduate School with a focus on Numerical Modelling and Ocean Dynamics.

Bruce is the recipient of the CAPT Harry T. Jenkins Memorial Award for Community Service. His in-service professional awards include the Bronze Star Medal with Combat Distinguishing Device, the Meritorious Service Medal, the Combat Action Ribbon, and other personal and unit awards, including the Navy Meritorious Civilian Service Award.

"ICRA 2021 Overview"

Thursday 29 April @ 12pm PST

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Henrik Christensen

Looking Farther in Parametric Scene Parsing with Ground and Aerial Imagery

Raghava Modhugu, Harish Rithish Sethuram, Manmohan Chandraker, C.V. Jawahar

http://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/Scene_attributes___ICRA_2021.pdf

 

Auto-calibration Method Using Stop Signs for Urban Autonomous Driving Applications

Yunhai Han, Yuhan Liu, David Paz, Henrik Christensen
https://arxiv.org/abs/2010.07441

 

Social Navigation for Mobile Robots in the Emergency Department

Angelique Taylor, Sachiko Matsumoto, Wesley Xiao, and Laurel D. Riek
http://cseweb.ucsd.edu/~lriek/papers/taylor-icra-2021.pdf

 

Temporal Anticipation and Adaptation Methods for Fluent Human-Robot Teaming

Tariq Iqbal and Laurel D. Riek
http://cseweb.ucsd.edu/~lriek/papers/iqbal-riek-icra-2021.pdf

Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning
Q. Feng, N. Atanasov
https://arxiv.org/abs/2101.01844

Coding for Distributed Multi-Agent Reinforcement Learning
B. Wang, J. Xie, N. Atanasov
https://arxiv.org/abs/2101.02308

Non-Monotone Energy-Aware Information Gathering for Heterogeneous Robot Teams
X. Cai, B. Schlotfeldt, K. Khosoussi, N. Atanasov, G. J. Pappas, J. How
https://arxiv.org/abs/2101.11093

Active Bayesian Multi-class Mapping from Range and Semantic Segmentation Observations
A. Asgharivaskasi, N. Atanasov
https://arxiv.org/abs/2101.01831

Learning Barrier Functions with Memory for Robust Safe Navigation
K. Long, C. Qian, J. Cortes, N. Atanasov
https://arxiv.org/abs/2011.01899

Generalization in reinforcement learning by soft data augmentation
Nicklas Hansen, Xiaolong Wang
https://nicklashansen.github.io/SODA/

Bimanual Regrasping for Suture Needles using Reinforcement Learning for Rapid Motion Planning
Z.Y. Chiu, F. Richter, E.K. Funk, R.K. Orosco, M.C. Yip
https://arxiv.org/pdf/2011.04813.pdf

Real-to-Sim Registration of Deformable Soft-Tissue with Position-Based Dynamics for Surgical Robot Autonomy
F. Liu, Z. Li, Y. Han, J. Lu, F. Richter, M.C. Yip
https://arxiv.org/abs/2011.00800

Model-Predictive Control of Blood Suction for Surgical Hemostasis using Differentiable Fluid Simulations
J. Huang*, F. Liu*, F. Richter, M.C. Yip
https://arxiv.org/abs/2102.01436

SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction
J. Lu, A. Jayakumari, F. Richter, Y. Li, M.C. Yip
https://arxiv.org/pdf/2003.03472.pdf

Data-driven Actuator Selection for Artificial Muscle-Powered Robots
T. Henderson, Y. Zhi, A. Liu, M.C. Yip
https://arxiv.org/abs/2104.07168

Optimal Multi-Manipulator Arm Placement for Maximal Dexterity during Robotics Surgery
J. Di, M. Xu, N. Das, M.C. Yip
https://arxiv.org/abs/2104.06348

MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning under Kinodynamic Constraints
L. Li, Y.L. Miao, A.H. Qureshi, M.C. Yip
https://arxiv.org/pdf/2101.06798.pdf

Autonomous Robotic Suction to Clear the Surgical Field for Hemostasis using Image-based Blood Flow Detection
F. Richter, S. Shen, F. Liu, J. Huang, E.K. Funk, R.K. Orosco, M.C. Yip
https://arxiv.org/abs/2010.08441

Scalable Learning of Safety Guarantees for Autonomous Systems using Hamilton-Jacobi Reachability
Sylvia Herbert, Jason J. Choi, Suvansh Qazi, Marsalis Gibson, Koushil Sreenath, Claire J. Tomlin
https://arxiv.org/abs/2101.05916

Planning under non-rational perception of uncertain spatial costs
Aamodh Suresh and Sonia Martinez
https://arxiv.org/pdf/1904.02851.pdf

"Inductive Biases for Robot Learning"

Thursday 22 April @ 12pm PST 

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Michael Lutter

Abstract: The recent advances in robot learning have been largely fueled by model-free deep reinforcement learning algorithms. These black-box methods utilize large datasets and deep networks to discover good behaviors. The existing knowledge of robotics and control is ignored and only the information contained within the data is leveraged. In this talk we want to take a different approach and evaluate the combination of knowledge and data-driven learning. We show that this combination enables sample-efficient learning for physical robots and that generic knowledge from physics and control can be incorporated in deep network representations. The use of inductive biases for robot learning yields robots that learn dynamic tasks within minutes and robust control policies for under-actuated systems.
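As one concrete, hedged example of encoding physics knowledge in a network representation (a generic sketch in the spirit of the talk, not the speaker's architecture; the joint count, layer sizes, and class name below are invented), a network can predict a rigid-body mass matrix through a Cholesky factor so that the matrix is positive definite by construction, rather than hoping a black-box network learns that property from data:

# Sketch: predict a positive-definite mass matrix M(q) by construction.
import torch
import torch.nn as nn

class MassMatrixNet(nn.Module):
    def __init__(self, n_joints=2, hidden=64):
        super().__init__()
        self.n = n_joints
        self.tril_idx = torch.tril_indices(n_joints, n_joints)
        out_dim = self.tril_idx.shape[1]          # entries of a lower-triangular factor
        self.net = nn.Sequential(nn.Linear(n_joints, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, q):
        vals = self.net(q)
        # Softplus on the entries that land on the diagonal keeps them strictly positive.
        is_diag = self.tril_idx[0] == self.tril_idx[1]
        vals = torch.where(is_diag, torch.nn.functional.softplus(vals), vals)
        L = torch.zeros(q.shape[0], self.n, self.n)
        L[:, self.tril_idx[0], self.tril_idx[1]] = vals
        return L @ L.transpose(1, 2)              # M = L L^T is positive definite

q = torch.randn(4, 2)                             # a batch of joint positions
M = MassMatrixNet()(q)
print(torch.linalg.eigvalsh(M).min())             # all eigenvalues are positive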

Bio: Michael Lutter joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in July 2017. Prior to this Michael held a researcher position at the Technical University of Munich (TUM) for bio-inspired learning for robotics. During this time he worked on the Human Brain Project, a European H2020 FET flagship project. In addition to his studies, Michael worked for ThyssenKrupp, Siemens and General Electric and received multiple scholarships for academic excellence and his current research.

"Patient-Specific Continuum Robotic Systems for Surgical Interventions"

Thursday 15 April @ 12pm PST (also CSE290)
Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Jaydev P. Desai
Georgia Institute of Technology

Abstract:
Over the past few decades, robotic systems for surgical interventions have undergone tremendous transformation. The goal of a surgical intervention is to perform it as minimally invasively as possible, since that significantly reduces post-operative morbidity, shortens recovery time, and also leads to lower healthcare costs. However, minimally invasive surgical interventions for a range of procedures will require a significant change in the healthcare paradigm for both diagnostic and therapeutic interventions. Advances in surgical interventions will benefit from “patient-specific robotic tools” to deliver optimal diagnosis and therapy. Hence, this talk will focus on the development of continuum, flexible, and patient-specific robotic systems for surgical interventions. Since these robotic systems could operate in an imaging environment, we will also address challenges in image-guided interventions. This talk will present examples from neurosurgery and endovascular interventions to highlight the applicability of patient-specific robotic systems for surgery.

Biography:
Dr. Jaydev P. Desai is currently a Professor at Georgia Tech in the Wallace H. Coulter Department of Biomedical Engineering. He is the founding Director of the Georgia Center for Medical Robotics (GCMR) and an Associate Director of the Institute for Robotics and Intelligent Machines (IRIM). He completed his undergraduate studies at the Indian Institute of Technology, Bombay, India, in 1993. He received his M.A. in Mathematics in 1997 and M.S. and Ph.D. in Mechanical Engineering and Applied Mechanics in 1995 and 1998 respectively, all from the University of Pennsylvania. He was also a Post-Doctoral Fellow in the Division of Engineering and Applied Sciences at Harvard University. He is a recipient of several NIH R01 grants and the NSF CAREER award, and was also the lead inventor on the “Outstanding Invention in the Physical Science Category” at the University of Maryland, College Park, where he was formerly employed. He is also the recipient of the Ralph R. Teetor Educational Award and the IEEE Robotics and Automation Society Distinguished Service Award. He has been an invited speaker at the National Academy of Sciences “Distinctive Voices” seminar series and was also invited to attend the National Academy of Engineering’s U.S. Frontiers of Engineering Symposium. He has over 190 publications, is the founding Editor-in-Chief of the Journal of Medical Robotics Research, and Editor-in-Chief of the four-volume Encyclopedia of Medical Robotics. His research interests are primarily in the areas of image-guided surgical robotics, pediatric robotics, endovascular robotics, and rehabilitation and assistive robotics. He is a Fellow of IEEE, ASME, and AIMBE.

Director – Georgia Center for Medical Robotics (GCMR)
Associate Director - Medical Robotics and Human Augmentation, Institute for Robotics and Intelligent Machines (IRIM)
Wallace H. Coulter Department of Biomedical Engineering
Georgia Institute of Technology

"Visual Representations for Navigation and Object Detection"

Zoom Link: https://ucsd.zoom.us/j/91267376688
In-Person: Room 1202, CSE Building

Speaker: Jana Kosecka
George Mason University
cs.gmu.edu/~kosecka

 

Abstract: Advancements in reliable navigation and mapping rest to a large extent on robust, efficient and scalable understanding of the surrounding environment. The successes of recent years have been propelled by the use of machine learning techniques for capturing the geometry and semantics of the environment from video and range sensors. I will discuss approaches to object detection, pose recovery, 3D reconstruction and detailed semantic parsing using deep convolutional neural networks (CNNs).
While data-driven deep learning approaches fueled rapid progress in object category recognition by exploiting large amounts of labelled data, extending this learning paradigm to previously unseen objects comes with challenges. I will discuss the role of active self-supervision provided by ego-motion for learning object detectors from unlabelled data. These powerful spatial and semantic representations can then be jointly optimized with policies for elementary navigation tasks. The presented explorations open interesting avenues for control of embodied physical agents and general strategies for design and development of general purpose autonomous systems.

Bio: Jana Kosecka is a Professor in the Department of Computer Science, George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania. Following her PhD, she was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is a recipient of the David Marr Prize and received the National Science Foundation CAREER Award. Jana is a chair of the IEEE Technical Committee on Robot Perception, an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. She is a co-author of a monograph titled Invitation to 3D Vision: From Images to Geometric Models. Her general research interests are in Computer Vision and Robotics. In particular, she is interested in 'seeing' systems engaged in autonomous tasks, the acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.

"Design of Autonomous Vehicles @ UCSD"

Zoom Link: https://ucsd.zoom.us/j/91267376688

Speaker: Henrik I Christensen

 

Over the last couple of years the Autonomous Vehicle Laboratory has designed modules for autonomous micro-mobility vehicles for tasks such as campus mail delivery. The design includes new sensor setups, methods for local-scale mapping/localization, semantic modeling of the world to allow for contextual navigation, detection and tracking of other road users, and use of simulation for verification of design decisions. The system has been tested over a six-month period. In this presentation we review our motivation, approach, methods, and results thus far. The methods developed have been tested through extensive evaluation and published at ICRA, IROS, ISER, ...

"Safe Real-World Autonomy in Uncertain and Unstructured Environments"

Zoom URL: https://ucsd.zoom.us/j/97197176606

Sylvia Herbert - UCSD

 

In this talk I will present my current and future work towards enabling safe real-world autonomy. My core focus is to enable efficient and safe decision-making in complex autonomous systems, while reasoning about uncertainty in real-world environments, including those involving human interactions. These methods draw from control theory, cognitive science, and reinforcement learning, and are backed by both rigorous theory and physical testing on robotic platforms.

First I will discuss safety for complex systems in simple environments. Traditional methods for generating safety analyses and safe controllers struggle to handle realistic complex models of autonomous systems, and therefore are stuck with simplistic models that are less accurate. I have developed scalable techniques for theoretically sound safety guarantees that can reduce computation by orders of magnitude for high-dimensional systems, resulting in better safety analyses and paving the way for safety in real-world autonomy.

Next I will add in complex environments. Safety analyses depend on pre-defined assumptions that will often be wrong in practice, as real-world systems will inevitably encounter incomplete knowledge of the environment and other agents. Reasoning efficiently and safely in unstructured environments is an area where humans excel compared to current autonomous systems. Inspired by this, I have used models of human decision-making from cognitive science to develop algorithms that allow autonomous systems to navigate quickly and safely, adapt to new information, and reason over the uncertainty inherent in predicting humans and other agents. Combining these techniques brings us closer to the goal of safe real-world autonomy.
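The sketch below gives a schematic, hedged picture of the "least-restrictive safety filter" pattern that such safety analyses enable (it is not Prof. Herbert's algorithm; the one-dimensional system, stand-in value function, and margin are invented): a nominal task controller runs freely, and a safety controller takes over only when a precomputed value function indicates the state is about to leave the safe set.

# Sketch: least-restrictive safety filter for a 1-D "stay within |x| <= 2" task.
import numpy as np

def safety_value(x):
    # Stand-in for a precomputed reachability value function:
    # positive inside the safe set, zero on its boundary.
    return 2.0 - abs(x)

def nominal_policy(x):
    return 1.0            # task controller: always push right

def safe_policy(x):
    return -np.sign(x)    # safety controller: push back toward the origin

def filtered_policy(x, margin=0.2):
    # Intervene only when the value function says we are about to leave the safe set.
    return safe_policy(x) if safety_value(x) <= margin else nominal_policy(x)

x = 0.0
for _ in range(50):
    x += 0.1 * filtered_policy(x)   # simple Euler rollout
print(round(x, 2))                  # settles near the boundary, never exceeds |x| = 2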

Bio:

Sylvia Herbert is an Assistant Professor in Mechanical and Aerospace Engineering at the University of California San Diego. Prior to joining UCSD, she received her PhD in Electrical Engineering from UC Berkeley, where she studied with Professor Claire Tomlin on safe and efficient control of autonomous systems. Before that she earned her BS/MS at Drexel University in Mechanical Engineering. She is the recipient of the UC Berkeley Chancellor’s Fellowship, NSF GRFP, UC Berkeley Outstanding Graduate Student Instructor Award, and the Berkeley EECS Demetri Angelakos Memorial Achievement Award for Altruism.

"Incorporating Structure in Deep Dynamics Model for Improved Generalization"

Zoom - https://ucsd.zoom.us/j/97197176606

Rose Yu - UCSD

 

Abstract: Recent work has shown deep learning can significantly improve the prediction of dynamical systems. However, an inability to generalize under distributional shift limits its applicability to the real world. In this talk, I will demonstrate how to principally incorporate relation and symmetry structure in deep learning models to improve generalization. I will showcase their applications to robotic manipulation and vehicle trajectory prediction tasks.
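As a tiny, hedged illustration of what a symmetry constraint on a dynamics model means (a generic rotation-equivariance check with an invented toy system, not the models from the referenced papers): a planar dynamics function f is rotation-equivariant when rotating the input and then applying f matches applying f and then rotating, and a structured model can satisfy this identity by construction rather than by chance.

# Sketch: checking rotation equivariance f(R x) == R f(x) for a toy 2-D dynamics model.
import numpy as np

def dynamics(x):
    # Toy equivariant dynamics: damped rotation about the origin.
    theta = 0.1
    R_step = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    return 0.95 * R_step @ x

def rot(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

x = np.array([1.0, 2.0])
R = rot(0.7)
# Equivariance holds for this model by construction; an unconstrained network
# would have to be trained (or architecturally forced) to satisfy the same identity.
print(np.allclose(dynamics(R @ x), R @ dynamics(x)))   # True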

Bio: Rose Yu is an assistant professor at UCSD in CSE and a primary faculty member in the AI Group. She is also affiliated with HDSI, CRI, and MICS. She does research on machine learning with an emphasis on large-scale spatiotemporal data. Her work has been applied across a variety of domains such as dynamical systems, healthcare, and the physical sciences. Dr. Yu received her PhD from USC and was a postdoc at Caltech. She has received numerous awards.

Reference:
[1] Deep Imitation Learning for Bimanual Robotic Manipulation
Fan Xie, Alex Chowdhury, Clara De Paolis, Linfeng Zhao, Lawson Wong, Rose Yu
Advances in Neural Information Processing Systems (NeurIPS), 2020
[2] Trajectory Prediction using Equivariant Continuous Convolution
Robin Walters, Jinxi (Leo) Li, Rose Yu
International Conference on Learning Representations (ICLR), 2021

"Abstractions in Robot Planning"

https://ucsd.zoom.us/j/97197176606

Neil T. Dantam, Colorado School of Mines

 

Abstract: Complex robot tasks require a combination of abstractions and algorithms: geometric models for motion planning, probabilistic models for perception, discrete models for high-level reasoning. Each abstraction imposes certain requirements, which may not always hold. Robust planning systems must therefore resolve errors in abstraction. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In recent work, we present an initial approach to relax the completeness assumptions in motion planning.

Bio: Neil T. Dantam is an Assistant Professor of Computer Science at the Colorado School of Mines. His research focuses on robot planning and manipulation, covering task and motion planning, quaternion kinematics, discrete policies, and real-time software design.

Previously, Neil was a Postdoctoral Research Associate in Computer Science at Rice University working with Prof. Lydia Kavraki and Prof. Swarat Chaudhuri. Neil received a Ph.D. in Robotics from Georgia Tech, advised by Prof. Mike Stilman, and B.S. degrees in Computer Science and Mechanical Engineering from Purdue University. He has worked at iRobot Research, MIT Lincoln Laboratory, and Raytheon. Neil received the Georgia Tech President's Fellowship, the Georgia Tech/SAIC paper award, an American Control Conference '12 presentation award, and was a Best Paper and Mike Stilman Award finalist at HUMANOIDS '14.

"Probabilistic Robotics and Autonomous Driving"

 https://ucsd.zoom.us/j/97197176606

Wolfram Burgard, Toyota Research Institute

 

Abstract: For autonomous robots and automated driving, the capability to robustly perceive their environments and execute their actions is the ultimate goal. The key challenge is that no sensors and actuators are perfect, which means that robots and cars need the ability to properly deal with the resulting uncertainty. In this presentation, I will introduce the probabilistic approach to robotics, which provides a rigorous statistical methodology to solve the state estimation problem. I will furthermore discuss how this approach can be extended using state-of-the-art technology from machine learning to bring us closer to the development of truly robust systems able to serve us in our every-day lives. In this context, I will in particular focus on the data advantage that the Toyota Research Institute is planning to leverage in combination with self-/semi-supervised methods for machine learning to speed up the process of developing self-driving cars.
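For context, the sketch below shows the textbook discrete Bayes filter that underlies the probabilistic approach described above (a standard illustration with an invented one-dimensional corridor example, not TRI code): the robot maintains a belief over its position, blurs that belief when it moves, and sharpens it when a noisy sensor reading arrives.

# Sketch: 1-D discrete Bayes filter (histogram filter) over 10 corridor cells.
import numpy as np

belief = np.full(10, 0.1)                 # uniform prior: position unknown

def predict(belief, motion_noise=0.1):
    # Motion model: move one cell to the right (circular corridor), sometimes stay put.
    moved = np.roll(belief, 1)
    return (1 - motion_noise) * moved + motion_noise * belief

def update(belief, measured_door, doors=(2, 5, 8), hit=0.8, miss=0.1):
    # Measurement model: a noisy door detector.
    likelihood = np.array([hit if (i in doors) == measured_door else miss
                           for i in range(10)])
    posterior = likelihood * belief
    return posterior / posterior.sum()    # normalize

belief = update(belief, measured_door=True)      # we see a door
belief = predict(belief)                         # we move one cell
belief = update(belief, measured_door=False)     # no door anymore
print(np.argmax(belief), belief.max().round(2))  # most likely cell: just past a door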

Bio: Wolfram Burgard received the Ph.D. degree in computer science from the University of Bonn, Bonn, Germany, in 1991. He is currently VP of Autonomous Driving at the Toyota Research Institute and a Professor of computer science at the University of Freiburg, Freiburg, Germany, where he heads the Laboratory for Autonomous Intelligent Systems. In the past, he developed several innovative probabilistic techniques for robot navigation and control, which cover different aspects such as localization, map building, path planning, and exploration. His research interests include artificial intelligence and mobile robots. Dr. Burgard has received several Best Paper Awards from outstanding national and international conferences. In 2009, he was the recipient of the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award.

"Learning Adaptive Models for Human-Robot Teaming" 

Atkinson Hall 4004/4006

Thomas Howard - University of Rochester

"Key Challenges in Agricultural Robotics with Examples of Ground Vehicle Localization in Orchards and Task-Specific Manipulator Design for Fruit Harvesting"

EBU 1 - Qualcomm Conference Room 

Amir Degani - Technion (Israel Institute of Technology) 

 

Dr Amir Degani is an Associate Professor at the Technion - Israel Institute of Technology. Dr Degani is the Director of the Civil, Environmental, and Agricultural Robotics (CEAR) Laboratory researching robotic  legged locomotion and autonomous systems in civil and agriculture applications. His research program includes mechanism analysis, synthesis, control and motion planning and design with emphasis on minimalistic concepts and the study of nonlinear dynamic hybrid systems.

His talk will present the need for robotics in agriculture and focus on examples of solutions for two different problems. The first is the localization of an autonomous ground vehicle in a homogenous orchard environment. The typical localization approaches are not adjusted to the characteristics of the orchard environment, especially the homogeneous scenery. To alleviate these difficulties, Dr Degani and his colleagues use top-view images of the orchard acquired in real-time. The top-view observation of the orchard provides a unique signature of every tree formed by the shape of its canopy. This practically changes the homogeneity premise in orchards and paves the way for addressing the “kidnapped robot problem”.

The second part of the talk will focus on efforts to define and perform task-based optimization for an apple-harvesting robot. Since there is a large variation between trees, instead of performing this laborious optimization on many trees, Dr Degani and his colleagues look for a “lower dimensional” characterization of the trees. Moreover, the shape of the tree (i.e., the environment) has a major influence on the robot’s simplicity. Therefore, Dr Degani and his colleagues strive to find the best training system for a tree to help simplify the robot’s design.

"The Business of Robotics: An introduction to the commercial robotics landscape, and considerations for identifying valuable robot opportunities."

Center for Memory Recording Research (CMRR)

Phil Duffy - Brain Corp

Phil Duffy is the Vice President of Innovations at Brain Corp. As VP of Innovations, he leads product commercialization activities at Brain Corp to discover novel product and market opportunities for autonomous mobile robotics. His team is responsible for defining product strategy for Brain Corp's AI technology, BrainOS. A serial entrepreneur and product strategist, Phil has a proven track record for growing technology start-ups, and commercializing and launching innovative, robotic products in the B2B and B2C markets. Phil joined Brain Corp in 2014 and brings with him 20+ years leadership experience in product management, marketing, and China manufacturing. This talk will provide an overview of the commercial robotics landscape, identify valuable robot opportunities, and focus on important elements to consider when developing and marketing robotics technology.

"Efficient memory-usage techniques in deep neural networks via a graph-based approach"

Qualcomm Conference Room (EBU-1)

Salimeh Yasaei Sekeh - University of Maine 

Dr. Salimeh Yasaei Sekeh is an Assistant Professor of Computer Science in the School of Computing and Information Sciences at the University of Maine. Her research focuses on designing and analyzing machine learning algorithms, deep learning techniques, applications of machine learning approaches in real-time problems, data mining, pattern recognition, and network structure learning with applications in biology. This talk introduces two new and efficient deep memory-usage techniques based on a geometric dependency criterion. The first technique is called Online Streaming Deep Feature Selection. This technique is based on a novel supervised streaming setting, and it measures deep feature relevance while maintaining a minimal deep feature subset with relatively high classification performance and a lower memory requirement. The second technique is called Geometric Dependency-based Neuron Trimming. This technique is a data-driven pruning method that evaluates the relationship between nodes in consecutive layers. In this approach, a new dependency-based pruning score removes the least important neurons, and then the network is fine-tuned to retain its predictive power. Both methods are evaluated on several data sets with multiple CNN models and demonstrated to achieve significant memory compression compared to the baselines.

"Human-Machine Teaming at the Robotics Research Center at West Point" 

Qualcomm Conference Room (EBU-1) 

Misha Novitzky - West Point 

Dr. Misha Novitzky is an Assistant Professor at the Robotics Research Center of the United States Military Academy at West Point. His work focuses on human-machine teaming for cooperative tasks in stressful and unconstrained environments. This talk will provide a brief overview of the various projects being conducted by the Robotics Research Center at the United States Military Academy, located in West Point, New York. In particular, the talk will focus on human-machine teaming. Most human-robot interaction or teaming research is performed in structured and sterile environments. It is our goal to take human-machine teaming outside into unstructured and stressful environments. As part of this effort, we will describe Project Aquaticus, in which humans and robots were embedded in the marine environment and played games of capture the flag against similarly situated teams, and present results of our pilot studies. While Project Aquaticus was previously performed at the Massachusetts Institute of Technology, we will describe why the Robotics Research Center at West Point is an exceptional location to perform future human-machine teaming research.

"Introducing Qualcomm Snapdragon Ride"

Center for Memory and Recording Research (CMRR)

Ahmed Sadek - Qualcomm 

The Criticality of Systems Engineering to Autonomous Air Vehicle Development

Ariele Sparks - Northrop Grumman

Systems Engineering the World's Most Energetic Laser

Robert Plummer - LLNL

Applying a Decision Theoretic Framework for Evaluating System Trade-Offs

Nirmal Velayudhan - ViaSat

Scaling the Third Dimension in Silicon Integration

Srinivas Chennupaty - Intel

The Cost of Taking Shortcuts

David Harris - Cubic Transportation Systems

An Integrated Medium Earth Orbit - Low Earth Orbit Navigation, Communication and Authentication System of Systems

David Whelan - UC San Diego

Handling Scale (System and Developer) and Reliability in Large and Critical Systems

Sagnik Nandy - Google

People-First Systems Engineering: Challenges and Opportunities in Smart Cities

Jeff Lorbeck - Qualcomm