Competency self-assessment for a learning-based autonomous aircraft system Journal Article

Overview

abstract

  • Introduction: Future concepts for airborne autonomy point toward human operators moving out of the cockpit and into supervisory roles. Urban air mobility, airborne package delivery, and military intelligence, surveillance, and reconnaissance (ISR) are all actively exploring such concepts or currently undergoing this transition. Supervisors of these systems will be faced with many challenges, including platforms that operate outside of visual range and the need to decipher complex sensor or telemetry data in order to make informed and safe decisions with respect to the platforms and their mission. A central challenge to this new paradigm of non-co-located mission supervision is developing systems that have explainable and trustworthy autonomy and internal decision-making processes.

  • Methods: Competency self-assessments are methods that use introspection to quantify and communicate important information pertaining to autonomous system capabilities and limitations to human supervisors. We first discuss a computational framework for competency self-assessment: factorized machine self-confidence (FaMSeC). Within this framework, we then define the generalized outcome assessment (GOA) factor, which quantifies an autonomous system’s ability to meet or exceed user-specified mission outcomes. As a relevant example, we develop a competency-aware learning-based autonomous uncrewed aircraft system (UAS) and evaluate it within a multi-target ISR mission.

  • Results: We present an analysis of the computational cost and performance of GOA-based competency reporting. Our results show that our competency self-assessment method can capture changes in the ability of the UAS to achieve mission-critical outcomes, and we discuss how this information can be easily communicated to human partners to inform decision-making.

  • Discussion: We argue that competency self-assessment can enable AI/ML transparency and provide assurances that calibrate human operators with their autonomous teammate’s ability to meet mission goals. This in turn can lead to informed decision-making, appropriate trust in autonomy, and overall improvements to mission performance.
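The Methods paragraph above describes the generalized outcome assessment (GOA) factor only at a conceptual level. As a rough illustration, the sketch below estimates a GOA-style score by Monte Carlo rollout: simulate many missions under the current policy, then report the fraction of rollouts in which each user-specified outcome is met or exceeded. The `simulate_mission` stub, the outcome names, and the thresholds are hypothetical stand-ins for the paper's simulator and ISR mission specification, and a raw empirical probability is used in place of whatever statistic the published GOA factor computes over the forecast outcome distribution.

```python
"""Minimal sketch of a GOA-style outcome self-assessment (illustrative, not the paper's code)."""
import numpy as np

def simulate_mission(rng):
    """Hypothetical stand-in for one rollout of the learned UAS policy in a mission simulator."""
    return {
        "targets_observed": rng.binomial(n=5, p=0.8),   # targets observed out of 5
        "fuel_remaining_pct": rng.normal(25.0, 8.0),    # percent fuel at mission end
    }

def generalized_outcome_assessment(n_rollouts, thresholds, rng):
    """Estimate, per outcome, the probability of meeting or exceeding the
    user-specified threshold across n_rollouts simulated missions."""
    rollouts = [simulate_mission(rng) for _ in range(n_rollouts)]
    return {
        name: float(np.mean([r[name] >= threshold for r in rollouts]))
        for name, threshold in thresholds.items()
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # User-specified mission outcomes: observe at least 4 targets, finish with at least 15% fuel.
    thresholds = {"targets_observed": 4, "fuel_remaining_pct": 15.0}
    goa = generalized_outcome_assessment(n_rollouts=1000, thresholds=thresholds, rng=rng)
    for name, score in goa.items():
        print(f"GOA[{name}] = {score:.2f}")  # scores a supervisor could read as a competency report
```

A score near 1 means the system expects to satisfy that outcome; a drop in a score over time is the kind of change in competency that, per the abstract, the method is designed to surface to human partners.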

publication date

  • February 14, 2025

Date in CU Experts

  • January 24, 2026 3:43 AM

Full Author List

  • Conlon N; Acharya A; McGinley J; Slack T; Hirst CA; D’Alonzo M; Hebert MR; Reale C; Frew EW; Russell R

author count

  • 11

Other Profiles

Electronic International Standard Serial Number (EISSN)

  • 2813-2831

Additional Document Info

volume

  • 4

number

  • 1454832