
The Effectiveness of Supervised Machine Learning Algorithms in Predicting Software Refactoring
Maurício Aniche, Erick Maziero, Rafael Durelli, Vinicius Durelli
IEEE Transactions on Software Engineering (TSE), 2020
Refactoring is the process of changing the internal structure of software to improve its quality without modifying its external behavior. Before carrying out refactoring activities, developers need to identify refactoring opportunities. Currently, refactoring opportunity identification heavily relies on developers’ expertise and intuition. In this paper, we investigate the effectiveness of machine learning algorithms in predicting software refactorings. More specifically, we train six different machine learning algorithms with a dataset comprising over two million refactorings from 11,149 real-world projects from the Apache, F-Droid, and GitHub ecosystems. The resulting models predict 20 different refactorings at the class, method, and variable levels with an accuracy often higher than 90%. Our results show that (i) Random Forests are the best models for predicting software refactoring, (ii) process and ownership metrics seem to play a crucial role in the creation of better models, and (iii) models generalize well in different contexts.
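As a rough illustration of the kind of model the paper evaluates, the sketch below trains a Random Forest to predict a binary "will be refactored" label from a handful of metrics. The feature names, the synthetic data, and the labels are invented for illustration; this is not the paper's pipeline, dataset, or reported accuracy.

```python
# Illustrative sketch only: a Random Forest predicting refactoring
# from hypothetical code/process/ownership metrics on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# Hypothetical features: lines of code, cyclomatic complexity,
# recent commits (process metric), number of authors (ownership metric).
X = rng.normal(size=(n, 4))
# Synthetic label loosely tied to the process/ownership columns, echoing
# the paper's finding that those metrics matter most.
y = (X[:, 2] + X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

In a real replication, X would come from static analysis and repository mining rather than a random generator, and the label from an actual refactoring-detection tool.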

Selecting third-party libraries: The practitioners’ perspective
Enrique Larios Vargas, Maurício Aniche, Christoph Treude, Magiel Bruntink, Georgios Gousios
The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 2020
The selection of third-party libraries is an essential element of virtually any software development project. In this paper, we study the factors that influence the selection process of libraries, as perceived by industry developers. To that aim, we perform a cross-sectional interview study with 16 developers from 11 different businesses and survey 115 developers that are involved in the selection of libraries. We systematically devised a comprehensive set of 26 technical, human, and economic factors that developers take into consideration when selecting a software library.

Monitoring-Aware IDEs
Jos Winter, Maurício Aniche, Jürgen Cito, Arie van Deursen
27th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 2019
We want IDEs to be more aware of what happens to the software in production. To that aim, we propose what we call a ‘monitoring-aware IDE’. After implementing a prototype, we experimented with it at Adyen, where 12 of their developers used it for a month. Our results show that such IDEs can indeed make developers more productive (and more aware of what happens to their software in the wild!)

The Adoption of JavaScript Linters in Practice: A Case Study on ESLint
Kristín Fjóla Tómasdóttir, Maurício Aniche, Arie van Deursen
IEEE Transactions on Software Engineering (TSE), 2018
We examine developers’ perceptions of JavaScript linters. Our results provide practitioners with reasons for using linters in their JavaScript projects, as well as several configuration strategies and their advantages. We also provide a list of linter rules that are often enabled and disabled, which can be interpreted as the most important rules to reason about when configuring a linter.

Search-Based Test Data Generation for SQL Queries
Jeroen Castelein, Maurício Aniche, Mozhan Soltani, Annibale Panichella, Arie van Deursen
40th International Conference on Software Engineering (ICSE), 2018
We propose a search-based algorithm that generates test data for a given SQL query, guided by the MC/DC coverage criterion. Our approach generates test data for more than 90% of the queries in our dataset and clearly outperforms existing approaches, which tend to model the problem as a constraint satisfaction problem (CSP) and fail on most complex queries. The tool is available on our GitHub. [Video summary] [Video in Portuguese]
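To make the coverage target concrete, here is a minimal sketch (not the paper's search-based tool) of what MC/DC-style test data for a single `A AND B` predicate looks like. The table, query, and rows are assumptions for illustration; the real approach searches for such data automatically for arbitrary queries.

```python
# Illustrative sketch: test data exercising MC/DC-style cases for the
# WHERE predicate of one SQL query, checked against in-memory SQLite.
import sqlite3

query = "SELECT name FROM products WHERE price > 100 AND stock > 0"

# For "A AND B", MC/DC requires cases where flipping each condition
# alone changes the outcome: (T,T), (F,T), and (T,F).
rows = [
    ("expensive_in_stock", 150, 5),   # A=T, B=T -> selected
    ("cheap_in_stock",      50, 5),   # A=F, B=T -> not selected
    ("expensive_sold_out", 150, 0),   # A=T, B=F -> not selected
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL, stock INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)", rows)

selected = [name for (name,) in conn.execute(query)]
print(selected)  # only the (T,T) row satisfies the whole predicate
```

Each of the three rows isolates the effect of one condition, which is exactly the kind of evidence MC/DC demands; a CSP-based generator would instead try to solve for all such rows at once.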

When Testing Meets Code Review: Why and How Developers Review Tests
Davide Spadini, Maurício Aniche, Margaret-Anne Storey, Magiel Bruntink, Alberto Bacchelli
40th International Conference on Software Engineering (ICSE), 2018
We studied the effort that developers put into reviewing test code (compared to production code). We observed that developers tend to give more attention to production code when both are in the same patch, that they mostly point out code quality issues in their reviews (rather than missing tests, as they claim to do), and that review tools do not really help them review both the test and the class under test. [Video summary] [Video in Portuguese]

Code smells for Model-View-Controller architectures
Maurício Aniche, Gabriele Bavota, Christoph Treude, Marco Gerosa, Arie van Deursen
Empirical Software Engineering Journal (EMSE), 2017
This paper proposes a set of code smells that can affect MVC applications, empirically validates the change- and defect-proneness of classes affected by these smells as well as their evolution over time, and presents SpringLint, a tool that automatically detects them in Spring MVC applications. [Video summary] [Video in Portuguese]