The development of automated driving will profit from an agreed-upon methodology to evaluate human-machine interfaces. The present study examines the role of feedback on interaction performance provided directly to participants when interacting with driving automation, using perceived ease of use as the evaluation measure. In addition, the development of the ratings themselves over time and their use case specificity were examined. In a driving simulator study, N = 55 participants completed several transitions between Society of Automotive Engineers (SAE) level 0 (L0), level 2, and level 3 automated driving. One half of the participants received feedback on their interaction performance immediately after each use case, while the other half did not. As expected, the results revealed that participants judged the interactions to become easier over time. However, the effect was use case specific, as transitions to L0 showed no change over time. The role of feedback also depended on the respective use case: evaluations were more conservative when feedback was provided than when it was not. The present study supports the application of perceived ease of use as a diagnostic measure in interaction with automated driving. Interface evaluations can benefit from supporting feedback to obtain more conservative results.