Tuesday, October 8, 2019
3:30 p.m., Avery 115
4 p.m., Avery 348
Philip Koopman
Professor, Carnegie Mellon University
Making self-driving cars safe will require a combination of techniques. Existing software safety standards will help with the vehicle control and trajectory stages of the autonomy pipeline. Planning might be made safe using a doer/checker architectural pattern, in which deterministic safety-envelope enforcement bounds the behavior of non-deterministic planning algorithms. Machine-learning-based perception validation will be more problematic. We discuss the issue of perception edge cases, including the potentially heavy-tailed distribution of object types and brittleness to slight variations in images. Our Hologram tool injects modest amounts of noise to cause perception failures, identifying brittle aspects of perception algorithms. More importantly, in practice it is able to identify context-dependent perception failures (e.g., false negatives) in unlabeled video.
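To make the doer/checker idea concrete, here is a minimal sketch of the pattern; all names, thresholds, and the fallback behavior are illustrative assumptions, not details from the talk or from any real AV stack. A non-deterministic planner (the "doer") proposes an action, and a simple deterministic monitor (the "checker") enforces a safety envelope, substituting a trivially safe fallback when the proposal violates it.

```python
# Illustrative sketch of a doer/checker pattern. All names and limits
# are hypothetical, not taken from the talk or any production system.
import random

SPEED_LIMIT = 25.0    # assumed envelope bound on speed (m/s)
MIN_CLEARANCE = 2.0   # assumed minimum obstacle clearance (m)

def doer_plan(obstacle_distance):
    """Non-deterministic planner: may propose an unsafe speed."""
    return {"speed": random.uniform(0.0, 40.0),
            "clearance": obstacle_distance}

def checker(plan):
    """Deterministic safety-envelope check: simple, auditable rules."""
    return plan["speed"] <= SPEED_LIMIT and plan["clearance"] >= MIN_CLEARANCE

def safe_fallback():
    """Deterministic, trivially safe action (e.g., a controlled stop)."""
    return {"speed": 0.0, "clearance": float("inf")}

def plan_step(obstacle_distance):
    """Accept the doer's proposal only if it stays inside the envelope."""
    proposal = doer_plan(obstacle_distance)
    return proposal if checker(proposal) else safe_fallback()
```

The key property is that safety rests only on the small deterministic checker, so the complex planner never needs to be proven safe itself: whatever `doer_plan` proposes, the output of `plan_step` always satisfies the envelope.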
Professor Philip Koopman is an internationally recognized expert on Autonomous Vehicle (AV) safety who has worked in that area at Carnegie Mellon University for over 20 years. He is also actively involved with AV safety policy, regulation, implementation, and standards. His pioneering research includes software robustness testing and run-time monitoring of autonomous systems to identify how they break and how to fix them. He has extensive experience in software safety and software quality across numerous transportation, industrial, and defense application domains, including conventional automotive software and hardware systems. He is currently serving as a principal technical contributor to the draft UL 4600 standard for autonomous system safety. He is a co-founder of Edge Case Research, which provides tools and services for autonomous vehicle testing and safety validation.