Call for Papers

The assessment of software quality is one of the most multifaceted (e.g., structural quality, product quality, process quality) and subjective aspects of software engineering, since in many cases it relies substantially on expert judgement. Such assessments can be performed at almost all phases of software development (from project inception to maintenance) and at different levels of granularity (from source code to architecture). However, human judgement (a) is inherently biased by the implicit, subjective criteria applied in the evaluation process, and (b) is less cost-effective than automated or semi-automated approaches. Researchers are therefore still looking for new, more effective methods of assessing various qualitative characteristics of software systems and the related processes.

In recent years, there has been rising interest in exploiting machine learning (ML) and automated decision-making in several areas of software engineering. ML models and algorithms help reduce human subjectivity by supporting decisions that are based on available data and evaluated against objective criteria. The adoption of ML techniques is thus a promising way to improve software quality evaluation. Conversely, learning capabilities are increasingly embedded within software, including in critical domains such as automotive and healthcare. This calls for the application of quality assurance techniques to ensure the reliable engineering of ML-based software systems.

The aim of the workshop is to provide a forum for researchers and practitioners to present and discuss new ideas, trends, and results concerning the application of ML methods to software quality evaluation and the application of software engineering techniques to self-learning systems. We expect the workshop to help in (1) validating existing ML methods for software quality evaluation and applying them to novel contexts, (2) evaluating the effectiveness of ML methods, compared both to other automated approaches and to human judgement, (3) adapting ML approaches already used in other areas of science to the context of software quality, and (4) designing new techniques, inspired by traditional software engineering, for validating ML-based software.

Topics of interest include, but are not limited to:
- Application of machine learning in software quality evaluation,
- Analysis of multi-source data,
- Knowledge acquisition from software repositories,
- Adoption and validation of machine learning models and algorithms in software quality,
- Decision support and analysis in software quality,
- Prediction models to support software quality evaluation,
- Validation and verification of learning systems,
- Automated machine learning,
- Design of safety-critical learning software,
- Integration of learning systems in software ecosystems.

Evaluation Criteria and Submission

We invite papers of up to 6 pages (including references) in the ESEC/FSE conference format (ACM, double-column). Each paper will be reviewed by three PC members. Accepted papers will be included in the ESEC/FSE proceedings. All papers should be submitted in PDF format through EasyChair.

Special Issue

Authors of selected papers accepted at MaLTeSQuE 2019 will be invited to submit revised, extended versions of their manuscripts for a special issue of the Journal of Systems and Software (JSS), published by Elsevier.

Important Dates

    Abstract Submission Deadline: May 27th, 2019
    Paper Submission Deadline: June 3rd, 2019
    Notification: June 24th, 2019
    Camera ready: July 1st, 2019
    Workshop: August 27th, 2019