Testing Robot Challenge: A Serious Game for Testing Learning
Porfirio Tramontana
22 slides
Oct 03, 2024
About This Presentation
Software testing education is becoming increasingly important both in academia and in industry. Despite efforts to improve teaching approaches at the university level, many challenges persist in preparing students for their future careers. In this position paper we present the Testing Robot Challenge, a tool implementing a serious game designed to motivate students to practice testing and to learn how to write effective unit tests in coverage testing. The game exploits a challenge mechanism in which students play against state-of-the-art tools for automated test case generation. It is configurable by teachers, in order to tune the complexity and type of the challenges to the specific needs of the students and to the objectives of the course taught. To validate the tool, we performed a preliminary experiment involving 15 students of a Software Engineering course, who provided generally positive feedback about it and useful comments for its future improvement.
Slide Content
Testing Robot Challenge: A Serious Game for Testing Learning - Anna Rita Fasolino, Caterina Maria Accetto, Porfirio Tramontana. Third edition of the international workshop on Gamification in Software Development, Verification, and Validation (GAMIFY 2024) @ ISSTA 2024, Vienna, Austria, September 17th, 2024. ENACTEST
Motivation. Software testing is indispensable in software development, yet often overlooked, contributing to a shortage of expertise in the software industry. Becoming an experienced software tester requires understanding many strategies for writing high-quality test cases and a significant amount of practice. Despite efforts to improve teaching approaches at the university level, many challenges persist in preparing students for their future careers.
Challenges reported in the literature. Challenges reported in the SLR by Scatalon et al. (2017) [1] on integrating software testing into introductory programming courses (based on 158 papers); challenges described by Delgado-Pérez et al. (2021) [2] in instructing students in the use of testing techniques and in the importance of software testing within software development; challenges analysed in the systematic mapping study by Garousi et al. (2020) [3], in which nine categories of challenges emerged from more than 200 papers.
[1] Scatalon, L.P., Barbosa, E.F., Garcia, R.E., 2017. Challenges to integrate software testing into introductory programming courses. In: IEEE Frontiers in Education Conference, pp. 1–9.
[2] P. Delgado-Pérez, et al. "Mutation Testing and Self/Peer Assessment: Analyzing their Effect on Students in a Software Testing Course," 2021 IEEE/ACM 43rd Int. Conf. on Software Engineering: Software Engineering Education and Training (ICSE-SEET), Madrid, ES, 2021, pp. 231–240.
[3] Vahid Garousi, et al.: Software-testing education: A systematic literature mapping. J. Syst. Softw. 165: 110570 (2020)
1) Testing is often not well accepted among students. Students do not derive much satisfaction from exposing flaws in their own programs. They perceive testing as unimportant, boring, and repetitive, and do not acquire the practice and experience that software testing requires. The programs used in software testing courses are often simple toy programs that are unlikely to stimulate students' interest and enthusiasm. Traditional pedagogical approaches are not sufficient to motivate students to write unit tests as they code.
2) Limited practice in software testing. A recent academic course mapping study (*) involving 22 software testing courses and 97 courses also including testing topics, across 49 universities in 4 European countries, found that structure-based testing is taught in all the analysed software testing courses (100%), but laboratory activities devoted to the practice of software testing are very limited, usually due to the scarce time available to teachers.
(*) Porfirio Tramontana, Beatriz Marín, Ana C. R. Paiva, Alexandra Mendes, …, Monique Snoeck, and Anna Rita Fasolino. 2024. State of the Practice in Software Testing Teaching in Four European Countries. In 17th IEEE International Conference on Software Testing, Verification and Validation (ICST) 2024.
Four categories of solutions. Pedagogical approaches: Active Learning (AL), Case-Based Learning (CBL), case-based learning in real-world situations, Collaborative Learning, Peer Learning, etc. Teaching practices: selection of the types of software to be tested, introducing testing earlier in the curricula, effective ways to evaluate students' testing learning, practical solutions for better teaching of testing. Teaching tools: learning environments that guide students through a learning path, interactively or with the support of machine learning techniques; tools supporting students in practicing testing in laboratory settings; tools exploiting a gameful approach to motivate students to practice testing. Gamification approaches.
Gamification approaches. Gamification: "the use of game design elements in non-game contexts". It uses the philosophy, elements, and mechanics of game design in non-game environments to induce certain behaviour in people, as well as to improve their motivation and engagement in a particular task. Gamification has proven to be a valid ally in improving students' involvement and motivation in learning software testing. Dynamics regulate all aspects of the game system that must be taken into consideration, defining large-scale objectives and the emotions to be conveyed. Mechanics represent the set of basic processes which, by binding to one or more dynamics, carry the action forward, generating greater involvement of the players. Components represent specific, tangible manifestations of the game mechanics.
Motivations. Tools for testing automation are currently used in many industrial contexts. Future software professionals need to start using them and to understand their strengths and weaknesses. Proposing approaches and tools that encourage students to become familiar with software testing automation tools is therefore necessary.
Our proposal: the «Testing Robot Challenge» serious game tool. The tool implements a serious game for motivating students to learn how to write effective JUnit test cases in coverage testing. The game exploits the mechanism of challenges that students can play against automated test generators (Robots). The tool is configurable by teachers, in order to tune the complexity and type of the challenges to the specific needs of the students and the objectives of the course taught. ENACTEST: European iNnovation AllianCe for TESting educaTion, ERASMUS+ Project 2022-2025.
Key aspects of the challenges. The challenges concern the JUnit test case development activity. Each challenge is played against an automated generator of test cases, named a Robot; in our first implementation, we considered Randoop and EvoSuite as Robots. Goal of the challenge: the player is asked to write JUnit test cases that outperform the Robot's in terms of the code coverage reached, in order to earn a score. Learning objective: exploiting the challenge mechanism to motivate students to practice white-box testing.
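To make the kind of test the player writes concrete, here is an illustrative sketch (the class under test and its tests are invented for this example, not taken from the tool or from the SF110 repository) of white-box unit tests aiming to cover every branch of a small method; JUnit assertions are emulated with plain Java so the snippet is self-contained:

```java
// Hypothetical class under test (invented for illustration).
class Discount {
    static double apply(double price, boolean member) {
        if (price < 0) throw new IllegalArgumentException("negative price");
        return member ? price * 0.9 : price; // two branches to cover
    }
}

// White-box tests: one test per branch, so line and branch coverage reach 100%.
public class DiscountTest {
    public static void main(String[] args) {
        assertClose(90.0, Discount.apply(100.0, true));   // member branch
        assertClose(100.0, Discount.apply(100.0, false)); // non-member branch
        boolean thrown = false;
        try { Discount.apply(-1.0, false); }              // exception branch
        catch (IllegalArgumentException e) { thrown = true; }
        if (!thrown) throw new AssertionError("expected exception");
        System.out.println("all branches covered");
    }

    // Stand-in for JUnit's assertEquals with a delta.
    static void assertClose(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9)
            throw new AssertionError(expected + " != " + actual);
    }
}
```

A Robot such as Randoop or EvoSuite generates tests of this kind automatically; the student wins by covering branches the generated suite misses.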
The game dynamic of the Testing Robot Fight challenge
Challenge scenario (Robot Fight). The student chooses a class to be tested from a repository and a Robot to challenge, and then starts editing the test code. During editing, the coverage reached by the tests can be checked (as feedback) whenever the student wants. When satisfied with the code written, the student can launch the challenge against the Robot. The scores of both the student and the Robot are evaluated, and the winner is declared!
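The final step above can be sketched minimally, assuming (names and scoring rule are hypothetical, not the tool's actual code) that each side's test suite has already been reduced to a coverage percentage:

```java
// Hypothetical outcome logic for a Robot Fight challenge.
public class ChallengeJudge {
    enum Outcome { STUDENT_WINS, ROBOT_WINS, DRAW }

    // Compare the coverage percentages reached by the two test suites.
    static Outcome judge(double studentCoverage, double robotCoverage) {
        if (studentCoverage > robotCoverage) return Outcome.STUDENT_WINS;
        if (studentCoverage < robotCoverage) return Outcome.ROBOT_WINS;
        return Outcome.DRAW;
    }

    public static void main(String[] args) {
        // Student reached 85% coverage, the Robot 72.5%.
        System.out.println(judge(85.0, 72.5)); // prints STUDENT_WINS
    }
}
```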
Field of game (tool screenshot): panels showing the test class editor, the class under test (with covered code highlighted), the compiler output, and the score, with controls to compile, to test / evaluate coverage, and to view the history.
The challenge Arena
Different challenges offered by the serious game. Robot Fight: the player fights against a single Robot. Boss Rush: the player fights more than one Robot in each game attempt. Training: the player can play as many rounds as they want against the same Robot, with the aim of exploring how to write better test cases, with the possibility of comparing the test case coverage achieved at each round. Climbing: a multi-level challenge to appeal to players with different skill levels. At each level the player has to test a different class; each time the Robot is defeated, the player earns a score and advances to the next level of the game.
Tool implementation. The application has a microservice architecture exposed through REST APIs. The architecture is modular and scalable, guaranteeing high evolvability and maintainability of the individual services. The front-end is based on HTML, CSS, and JavaScript technologies. The back-end was developed using the Java Spring MVC framework, the open-source CodeMirror component, and the JaCoCo library for the evaluation of the coverage achieved.
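As a toy illustration of the quantity the coverage evaluation produces (JaCoCo itself works by bytecode instrumentation, which is out of scope here; this sketch is not the tool's code), line coverage is simply the executed executable lines over all executable lines:

```java
import java.util.Set;

public class LineCoverage {
    // Toy model: 'executed' is the set of line numbers hit by the test
    // suite, 'executable' the set of lines that could be executed.
    static double percent(Set<Integer> executed, Set<Integer> executable) {
        if (executable.isEmpty()) return 100.0;
        long covered = executable.stream().filter(executed::contains).count();
        return 100.0 * covered / executable.size();
    }

    public static void main(String[] args) {
        Set<Integer> executable = Set.of(10, 11, 12, 14, 15);
        Set<Integer> executed = Set.of(10, 11, 14, 15); // line 12 missed
        System.out.printf("coverage = %.1f%%%n",
                percent(executed, executable)); // prints coverage = 80.0%
    }
}
```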
The flow of the Climbing challenge
Evaluation. We carried out a study with 15 students of a Software Engineering course offered in a Bachelor's Degree in Computer Engineering, aimed at answering the following research questions: RQ1: What is the students' perception of the usability of the tool in the learning experience of coverage testing? RQ2: What is the students' perception of the usefulness of the tool for learning how to develop effective unit test cases? RQ3: What is the students' perceived satisfaction in using the tool?
Subjects and classes under test. During the course the students had the opportunity to learn the basics of white-box and black-box test case design techniques and how to implement test cases with JUnit. The students were recruited voluntarily and did not receive any reward for their participation in the study. The Java classes involved in the challenges belonged to the SF110 repository originally proposed in [1]. The test classes have a size between 345 and 750 LOCs, with an average of 520 LOCs.
[1] Gordon Fraser and Andrea Arcuri. 2014. A Large-Scale Evaluation of Automated Unit Test Generation Using EvoSuite. ACM Trans. Softw. Eng. Methodol. 24, 2, Article 8 (Dec 2014), 42 pages. https://doi.org/10.1145/2685612
Experimental procedure. The students were introduced to the game and its features in a practical 2-hour lecture, where they learned how to play the challenges against the Robots. Each student was assigned a homework consisting of playing a Robot Fight, a Boss Rush, and a Climbing challenge; the homework had to be completed in two weeks. After completing the assignment, the students answered an anonymous post-questionnaire made up of 29 questions in three sections (about usability, usefulness, and user satisfaction).
Results - Usability. The answers showed that students were mostly Neutral about the usability of the tool. We also collected insights about how to improve some aspects of it: more explicative messages are needed about errors, and about how to recover from errors made during the game; the problems concerned the lack of support for solving compilation errors, since the web app currently does not offer debugging features. We will address this limitation in a future evolution of the tool. (Strongly agree = 5, Agree = 4, Neutral = 3, Disagree = 2, Strongly disagree = 1)
Results - Usefulness. Most students Agreed about the usefulness of the tool for the learning experience: they were confident and comfortable while learning with the tool, and the tool motivated them to learn and met their learning requirements. (Strongly agree = 5, Agree = 4, Neutral = 3, Disagree = 2, Strongly disagree = 1)