Co-located with ESEC/FSE 2018 

About

The A-TEST workshop aims to provide a venue for researchers and industry practitioners to exchange and discuss trending views, ideas, state-of-the-art work in progress, and scientific results on Automated Testing.

Modern software teams seek a delicate balance between two opposing forces: striving for reliability and striving for agility. Software teams need tools to strike the right balance, increasing development speed without sacrificing quality. Automated testing tools play an important role in achieving this balance.

A-TEST has successfully run eight editions since 2009. During the 2017 edition, which was also co-located with ESEC/FSE, we introduced hands-on sessions in which testing tools can be studied in depth. Thanks to the many positive reactions we received, this year we will have them again!

Program – 5th of November 2018

8:50 – 9:00 Welcome
9:00 – 10:00

Keynote by Gregg Rothermel

Improving regression testing in continuous integration development environments

10:00 – 10:30 Coffee break (in the hall outside the workshops)
10:30 – 12:00

David Adamo, Md Khorrom Khan, Sreedevi Koppula and Renee Bryce.

Reinforcement Learning for Android GUI Testing

Shogo Takakura, Mitsuyoshi Iwatsuji and Nagisa Ishiura.

Extending Equivalence Transformation Based Program Generator for Random Testing of C Compilers

Ákos Kiss, Renáta Hodován and Tibor Gyimóthy.

HDDr: A Recursive Variant of the Hierarchical Delta Debugging Algorithm

Sten Vercammen, Mohammad Ghafari, Serge Demeyer and Markus Borg.

Goal-oriented Mutation Testing with Focal Methods

Thi Anh Tuyet Vuong and Shingo Takada.

A Reinforcement Learning based Approach to Automated Testing of Android Applications

12:00 – 13:30 LUNCH (in Lakeview West)
13:30 – 15:00

Kota Kitaura and Nagisa Ishiura.

Random Testing of Compilers’ Performance Based on Mixed Static and Dynamic Code Comparison

Renáta Hodován, Ákos Kiss and Tibor Gyimóthy.

Grammarinator: A Grammar-based Open Source Fuzzer

Dionny Santiago, Peter J. Clarke, Patrick Alt and Tariq M. King.

Abstract Flow Learning for Web Application Test Generation

André Santos, Alcino Cunha and Nuno Macedo.

Property-based Testing for the Robot Operating System

Pedro Martins Pontes, Bruno Lima and João Pascoal Faria.

Test Patterns for IoT

15:00 – 15:30 Coffee break (in the hall outside the workshops)
15:30 – 16:00

POSTER pitch and DARE session

Sajad Jalali, Ali Mohebbi, Khashayar Etemadi Some Oliayi and Mostafa Mahdieh.

Considering Executed Program States for Evaluating Test Suites

Johan van der Geest, Edwin Harmsma, Harrie Bastiaansen and Serge Demeyer.

The Journey Towards Cloud Native Testing

Pablo Suárez-Otero, María José Suárez-Cabal and Javier Tuya.

Testing automation of the Cassandra logical data integrity

16:00 – 17:30 Hands-on session
17:30 – 18:00 Closing discussion

Keynotes

Keynote: Gregg Rothermel
Professor and Jensen Chair of Software Engineering
University of Nebraska-Lincoln

IMPROVING REGRESSION TESTING IN CONTINUOUS INTEGRATION DEVELOPMENT ENVIRONMENTS

In continuous integration development environments, software engineers frequently integrate new or changed code with the mainline codebase. Merged code is then regression tested to help ensure that the codebase remains stable and that continuing engineering efforts can be performed more reliably. Continuous integration is advantageous because it can reduce the amount of code rework that is needed in later phases of development and shorten overall development time. From a testing standpoint, however, continuous integration raises several challenges.

Chief among these challenges are the costs, in terms of time and resources, associated with handling a constant flow of requests to execute tests. To help with this, organizations often utilize farms of servers to run tests in parallel, or execute tests “in the cloud”, but even then, test suites tend to expand to utilize all available resources, and then continue to expand beyond that.

We have been investigating strategies for applying regression testing in continuous integration development environments more cost-effectively. Our strategies are based on two well-researched techniques for improving the cost-effectiveness of regression testing — regression test selection (RTS) and test case prioritization (TCP). In the continuous integration context, however, traditional RTS and TCP techniques are difficult to apply, because these techniques rely on instrumentation and analyses that cannot easily be applied to fast-arriving streams of test suites.

We have thus created new forms of RTS and TCP techniques that utilize relatively lightweight analyses and can cope with the volume of test requests. To evaluate our techniques, we have conducted an empirical study on several large data sets. In this talk, I describe our techniques and the empirical results we have obtained in studying them.
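
To illustrate the general idea of test case prioritization (and not the specific techniques presented in this talk), the following minimal Python sketch reorders a CI test queue using only lightweight, history-based signals: a smoothed recent failure rate plus a staleness bonus. These need no program instrumentation, the constraint the abstract highlights. All names and the scoring formula here are hypothetical.

```python
from dataclasses import dataclass

# Illustrative test case prioritization (TCP) sketch -- hypothetical,
# not the keynote's technique. Tests most likely to fail run first,
# using only execution history, so no instrumentation is required.
@dataclass
class TestRecord:
    name: str
    failures: int = 0           # failures observed in a recent window
    runs: int = 0               # executions in the same window
    cycles_since_run: int = 0   # CI cycles since this test last ran

    def priority(self) -> float:
        # Smoothed failure rate (a never-run test gets a neutral 0.5
        # prior) plus a small bonus so stale tests eventually float up.
        failure_rate = (self.failures + 1) / (self.runs + 2)
        return failure_rate + 0.01 * self.cycles_since_run

def prioritize(tests: list[TestRecord]) -> list[TestRecord]:
    """Order the queue most-likely-to-fail first."""
    return sorted(tests, key=lambda t: t.priority(), reverse=True)

if __name__ == "__main__":
    queue = [
        TestRecord("test_login", failures=3, runs=10),
        TestRecord("test_report", failures=0, runs=10),
        TestRecord("test_new_feature", cycles_since_run=5),  # never run
    ]
    for t in prioritize(queue):
        print(f"{t.name}: {t.priority():.3f}")
```

Under this scoring, the never-executed test runs early (its neutral prior of 0.5 dominates), the historically failing test follows, and the consistently passing test sinks to the back of the queue, so failures surface sooner within a fixed testing budget.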

BIO

Gregg Rothermel is Professor and Jensen Chair of Software Engineering at the University of Nebraska-Lincoln. He received his Ph.D. in Computer Science from Clemson University, working with Mary Jean Harrold, his M.S. in Computer Science from SUNY Albany, and a B.A. in Philosophy from Reed College. Prior to returning to academia, he was a software engineer, and Vice President of Quality Assurance and Quality Control for Palette Systems, a manufacturer of CAD/CAM software.

Dr. Rothermel’s research interests include software engineering and program analysis, with emphases on the application of program analysis techniques to problems in software maintenance and testing, end-user software engineering, and empirical studies. He is a co-founder of the ESQuaReD (Empirically-Based Software Quality Research and Development) Laboratory at the University of Nebraska-Lincoln. He is also a co-founder of the EUSES (End-Users Shaping Effective Software) Consortium, a group of researchers who, with National Science Foundation support, have led end-user software engineering research. He co-founded and leads the development of the Software-Artifact Infrastructure Repository (SIR), a repository of software-related artifacts that supports rigorous controlled experiments with program analysis and software testing techniques and has been utilized to date by more than 3500 people from over 700 institutions around the world, supporting over 800 scientific publications. His research has been supported by NSF, DARPA, AFOSR, Boeing Commercial Airplane Group, Microsoft, and Lockheed Martin.

Dr. Rothermel is an IEEE Fellow and an ACM Distinguished Scientist. He is currently General Co-Chair for the 2020 ACM/IEEE International Conference on Software Engineering, serves as an Associate Editor for ACM Transactions on Software Engineering and Methodology, and is a member of the Editorial Boards of the Empirical Software Engineering Journal and the Software Quality Journal.

Previous positions include Associate Editor in Chief for IEEE Transactions on Software Engineering, General Chair for the 2009 International Symposium on Software Testing and Analysis, Program Co-Chair for the 2007 International Conference on Software Engineering, and Program Chair for the 2004 ACM International Symposium on Software Testing and Analysis.

Important Dates

  • Submission deadline:  July 27th 2018
  • Author notification: August 27th 2018 (extended from August 24th 2018)
  • Camera-ready: September 18th 2018

Organization Committee

A-TEST TEAM

General Chair

Wishnu Prasetya (Universiteit van Utrecht)

Industrial Chair

Sigrid Eldh (Ericsson)

Program Chairs

Tanja E.J. Vos (Universidad Politecnica de Valencia, Open Universiteit)

Sinem Getir (Humboldt-Universität zu Berlin)

Hands-on Session Chair

Ali Parsai (Universiteit van Antwerpen)

Publicity Chair

Pekka Aho (Open Universiteit)

Programme Committee

Pekka Aho Open Universiteit
Emil Alégroth Blekinge Institute of Technology
Markus Borg RISE SICS AB
Sigrid Eldh Ericsson
M.J. Escalona University of Seville
Leire Etxeberria Mondragon Unibertsitatea
Julián Alberto García García IWT2 Group, University of Seville, Spain
Sinem Getir Humboldt University Berlin
Peter M. Kruse ASSystems
Maurizio Leotta Università degli Studi di Genova
Patricio Letelier Universitat Politècnica de València
Jeff Offutt George Mason University
Ana Paiva University of Porto
Wishnu Prasetya Universiteit van Utrecht
Rudolf Ramler Software Competence Center Hagenberg
Mika Rautila VTT
Ina Schieferdecker Fraunhofer FOKUS/TU Berlin
Jan Tretmans TNO – Embedded Systems Innovation
Marko Van Eekelen Radboud University
Freek Verbeek Open University of The Netherlands
Tanja E. J. Vos Open Universiteit and Universitat Politecnica de Valencia
Marc-Florian Wendland Fraunhofer

Call for Papers

We invite you to submit a paper on any topic related to automated software testing, and to present and discuss it at the event itself. We welcome the following types of submissions:

  • Position paper (2 pages): intended to generate discussion and debate during the workshop.
  • Work-in-progress paper (4 pages): describes novel work in progress that has not necessarily reached full completion.
  • Full paper (7 pages): describes original and completed research.
  • Tool demo (4 pages): describes your tool and your planned demo session.
  • Technology transfer paper (4 pages): describes university-industry co-operation.

Papers will be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2018

Each paper will be reviewed by at least three referees. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this workshop. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

All papers must be prepared in ACM Conference Format.

Papers accepted for the workshop will appear in the ACM digital library, providing a lasting archived record of the workshop proceedings.