https://doi.org/10.1109/ICSME58944.2024.00086
Abstract:
Developers work with many different artifacts and environments throughout their careers, whether established conventions for writing code, the way code is presented in their IDE, or the ways developers communicate with one another. Determining how different aspects of a developer’s environment and coding practices affect their level of program comprehension can give us insight into a developer’s cognitive processes and construction of mental models, which can ultimately determine how a developer’s productivity may be measured. In this paper, I provide an overview of several empirical studies in which I assess comprehension and emotional state using biometric devices while developers perform typical software engineering tasks. The results of these studies can inform guidelines for improving developer productivity. Advisor: Bonita Sharif (University of Nebraska - Lincoln)
https://doi.org/10.1109/VISSOFT64034.2024.00017
Abstract:
A controlled experiment investigating the effect layout has on how students find defects in UML class diagrams with respect to requirements is presented. Two layout schemes from prior literature, namely multi-cluster and orthogonal layouts, are compared on two open source systems, Doxygen and Qt. The experiment is conducted with 89 students from two universities in a classroom lab setting. Each participant is placed in one of two groups, where each group is given two defect detection tasks (with five sub-parts each), with each task using one of the two layouts in each subject system. The only difference between the groups is that the layouts are flipped between the two tasks. Feedback is collected after each task. A mental rotation task and an object memory task are conducted at the end of the two tasks to correlate participants’ spatial and working memory skills with their task performance. Results indicate that the multi-cluster layout performed better in terms of accuracy of finding defects, though not significantly, and there is little difference in the time taken to find them. Furthermore, object memory skills are sometimes correlated with performance on the defect detection tasks. These results can be used to help improve the teaching of UML class diagram defect detection skills by incorporating clustered layouts and object memory tasks. In addition, they can help identify people who are best suited for finding critical defects in design.
https://doi.org/10.1007/s10664-024-10532-x
Abstract:
Context
While developing software, developers must first read and understand source code in order to work on change requests such as bug fixes or feature additions. The easier it is for them to understand what the code does, the faster they can get to working on change tasks. Source code is meant to be consumed by humans, and hence, the human factor of how readable the code is plays an important role. During the past decade, software engineering researchers have used eye trackers to see how developers comprehend code. The eye tracker enables us to see exactly what parts of the code the developer is reading (and for how long) in an objective manner without prompting them.
https://doi.org/10.1145/3568813.3600133
Abstract:
Background and Context:
The designers of programming editors aimed at learners have long experimented with different styles of code presentation. The idea of syntax highlighting – coloring specific words – is very old. More recently, some editors (including text-, frame- and block-based editors) have added forms of scope highlighting – colored rectangles to represent programming scope – but there have been few studies to investigate whether this is beneficial for novices when reading and understanding program code.
Objectives:
We investigated whether the use of scope highlighting during code comprehension tasks (a) has an impact on where users focus their gaze, (b) affects the accuracy of users’ responses to the tasks, and/or (c) affects the speed of users’ correct responses to the tasks.
https://doi.org/10.1109/SEmotion52567.2021.00009
Abstract:
Writing readable source code is generally considered good practice because it reduces comprehension time for both the original developer and others who have to read and maintain it. We conducted a code readability rating study using eye tracking equipment as part of a larger project, in which we compared pairs of Java methods side by side. The methods were written such that one followed a readability rule and the other did not.
The participants were tasked with rating which method they considered to be more readable. An explanation of the rating was also optionally provided. Eye tracking data was collected and analyzed during the rating process.
We found that developers rated the snippet in the pair of methods that avoided nested if statements as more readable on average. There was no clear preference in the use of do-while statements. In addition, more developer fixation attention was on the snippet that avoided do-while loops, while fixation attention was more evenly distributed across the snippet pairs relating to nested if statements.
https://doi.org/10.1109/ASEW52652.2021.00037
Abstract:
The paper presents an eye tracking pilot study on understanding how developers read and assess sentiment in twenty-four GitHub pull requests containing emoji, randomly selected from five different open source applications. Gaze data was collected on various elements of the pull request page in Google Chrome while the developers were tasked with determining perceived sentiment. The developer-perceived sentiment was compared with sentiment output from five state-of-the-art sentiment analysis tools. SentiStrength-SE had the highest performance, with 55.56% of its predictions being agreed upon by study participants. On the other hand, Stanford CoreNLP fared the worst, with only 5.56% of its predictions matching those of the participants. Gaze data shows that the top three areas developers looked at the most were the comment body, added lines of code, and username (the person writing the comment). The results also show high attention given to emoji in the pull request comment body compared to the rest of the comment text. These results can help provide additional guidelines on the pull request review process.