An Eye Tracking Study Assessing Source Code Readability Rules for Program Comprehension

https://doi.org/10.1007/s10664-024-10532-x

Abstract:

Context

While developing software, developers must first read and understand source code in order to work on change requests such as bug fixes or feature additions. The easier it is for them to understand what the code does, the faster they can get to working on change tasks. Source code is meant to be consumed by humans, and hence, the human factor of how readable the code is plays an important role. During the past decade, software engineering researchers have used eye trackers to see how developers comprehend code. The eye tracker enables us to see exactly what parts of the code the developer is reading (and for how long) in an objective manner without prompting them.

Objective

In this paper, we leverage eye tracking technology to replicate a prior online questionnaire-based controlled experiment (Johnson et al. 2019) to determine the visual effort needed to read code presented in different readability rule styles. As in the prior study, we assess two readability rules - minimize nesting and avoid do-while loops. Each rule is evaluated on code snippets that are correct and incorrect with respect to a requirement.
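To make the two rules concrete, the hypothetical snippets below (not the study's actual stimuli) contrast a rule-breaking and a rule-following version of each: deep nesting flattened with guard clauses, and a do-while loop rewritten as a while loop whose condition is checked before the body runs.

```java
// Hypothetical illustrations of the two readability rules under study;
// the study's own 32 Java methods are not reproduced here.
public class ReadabilityRules {

    // Breaks the minimize-nesting rule: the work is buried three levels deep.
    static int sumPositivesNested(int[] values) {
        int sum = 0;
        if (values != null) {
            for (int v : values) {
                if (v > 0) {
                    sum += v;
                }
            }
        }
        return sum;
    }

    // Follows the rule: a guard clause and an early continue flatten the method.
    static int sumPositivesFlat(int[] values) {
        if (values == null) return 0;
        int sum = 0;
        for (int v : values) {
            if (v <= 0) continue;
            sum += v;
        }
        return sum;
    }

    // Breaks the avoid-do-while rule: the body always runs at least once,
    // even when n is already 0.
    static int countDownDoWhile(int n) {
        int steps = 0;
        do {
            n--;
            steps++;
        } while (n > 0);
        return steps;
    }

    // Follows the rule: the condition is checked up front, so n <= 0 takes zero steps.
    static int countDownWhile(int n) {
        int steps = 0;
        while (n > 0) {
            n--;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        int[] data = {3, -1, 4};
        System.out.println(sumPositivesNested(data)); // 7
        System.out.println(sumPositivesFlat(data));   // 7
        System.out.println(countDownDoWhile(0));      // 1: body ran once anyway
        System.out.println(countDownWhile(0));        // 0
    }
}
```

The do-while pair also shows why the rule targets correctness reasoning: the reader must mentally execute the body once before the exit condition is even relevant.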

Method

This study was conducted in a lab setting with the Tobii X-60 eye tracker. Each of the 46 participants (21 undergraduate students, 24 graduate students, and 6 professional developers, part-time or full-time) was given eight Java methods drawn from a total set of 32 Java methods spanning four categories: methods that follow or do not follow the readability rule, crossed with methods that are logically correct or incorrect. After reading each code snippet, participants answered a multiple-choice comprehension question about the code, along with questions about its logical correctness and their confidence. In addition to comparing the time and accuracy of answering the questions with the prior study, we also report the visual effort of completing the tasks via gaze-based metrics.

Results

The results of this study concur with the online study: following the minimize nesting rule showed higher confidence, decreased time spent reading programming tasks, and decreased accuracy in finding bugs, although the decrease in accuracy was not significant. For method analysis tasks showing one Java method at a time, participants spent proportionally less time fixating on code lines and had fewer fixations on code lines when a snippet does not follow the minimize-nesting rule. However, the opposite is true when the snippet is logically incorrect (3.4% and 3.9%, respectively), regardless of whether the rule was followed. The avoid do-while rule, however, did not have as significant an effect. Following the avoid do-while rule did result in higher accuracy in task performance, albeit with lower fixation counts. We also note a lower rate for a majority of the gaze-based linearity metrics on the rule-breaking code snippet when the rule-following and rule-breaking code snippets are displayed side by side.

Conclusions

The results of this study show strong support for the use of the minimize nesting rule. All participants considered the minimize nesting rule important and the avoid do-while rule less important, despite the results showing that participants were more accurate when the avoid do-while rule was followed. Overall, participants ranked the snippets following the readability rules higher than the snippets that do not follow them. We discuss the implications of these results for advancing the state of the art in reducing visual effort and cognitive load in code readability research.