Projects

Event-based Software Testing

  • Automated Testing of Platform Games

    The game development industry has recently emerged as one of the major software development industries, driven by the availability of increasingly powerful processing devices, including hand-held and mobile devices. The number of games produced in the last few years has increased exponentially. The common practice among game testers is manual game testing, in which testers play a large number of potential scenarios to test the functionality of a game. The scenarios are selected at random or with particular goals in mind, depending on the testing target, for example, killing an enemy or completing a level. Manual testing of such games is a labor-intensive and time-consuming task. The test scenarios have to be manually executed on every game change, which becomes a monotonous and tedious task for frequently changing games.

    Semi-automated techniques also exist for game testing, but the test cases still need to be recorded and evaluated manually. The tester has to design test cases, select test sequences, generate test data, and construct the test oracle by hand. This becomes a major challenge as games are frequently modified due to intense market competition: all test cases have to be re-recorded and re-evaluated manually, which is neither efficient nor scalable. Consequently, there is a need for a game testing approach that allows automated test case generation, execution, and evaluation.


    The project we are working on is an automated functional testing approach for platform games. This model-based approach allows for automated test case generation and automated test execution for platform games. The models are developed in UML using a proposed model-based testing profile for platform games and include a domain model and state machines representing the gameplay behavior of an avatar. The state machines are used to generate test cases using the N+ testing strategy. The generated test cases result in short execution sequences, which can then be combined to generate interesting game testing scenarios.
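    The basic step of deriving short test sequences from an avatar state machine can be sketched as follows. The state machine below is a hypothetical example, and the simple transition-coverage walk is only a stand-in for the full N+ strategy, which is more involved (e.g., it also exercises illegal "sneak path" events):

```python
from collections import deque

# Hypothetical avatar state machine: state -> list of (event, next_state).
# The real models are UML state machines annotated with the testing profile.
AVATAR_SM = {
    "Idle":    [("run", "Running"), ("jump", "Jumping")],
    "Running": [("jump", "Jumping"), ("stop", "Idle")],
    "Jumping": [("land", "Idle")],
}

def transition_tree_paths(sm, start):
    """Derive short event sequences that together cover every transition
    once (a simplified stand-in for the N+ transition-tree strategy)."""
    covered, paths = set(), []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for event, nxt in sm[state]:
            if (state, event) in covered:
                continue  # transition already exercised by an earlier path
            covered.add((state, event))
            paths.append(path + [event])        # one short test sequence
            queue.append((nxt, path + [event])) # keep extending from here
    return paths
```

    Each returned event sequence is a short executable scenario; longer, more interesting scenarios can then be obtained by concatenating such sequences, as described above.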


    Contact Person: Muhammad Zohaib Iqbal




  • Product-line Model-driven Engineering, Generation and Testing of Mobile Application Variants

    Mobile application development has emerged as one of the most active areas of the software industry due to the exponential growth of mobile users and applications. Current industrial practices of mobile software engineering have not kept pace with this rapid development and are inadequate to address the needs of the mobile application development industry. Model-driven engineering practices are largely ignored, which results in low reusability and a lack of portability, among other challenges. Mobile applications also have an inherent complexity: the variability of mobile platforms, their versions, and hardware devices. New platforms and devices are being introduced rapidly, further aggravating the problem.

    Mobile applications have to support multiple platforms, as an application written for one platform (e.g., Android) cannot run on another platform (e.g., Windows Phone). Each platform in turn suffers from fragmentation. This results in multiple versions of an application that need to be simultaneously tested and maintained. Moreover, the limitations of mobile device components (i.e., memory, power, processing speed, graphical resolution, and screen size) and the device context (or environment) pose further challenges for mobile application testing. This places a huge burden on development and testing teams, both in terms of cost and effort. Software product-line engineering addresses variability management and has been successfully applied to develop families of related software products. Model-driven software engineering has been successfully applied in other domains to address issues of portability and reusability.


    We use product-line engineering in combination with model-driven engineering to generate feature-based mobile application variants for multiple platforms, and have developed a tool named MOPPET for this purpose. Specifically, we deal with three types of variation in mobile applications: variation due to operating systems and their versions, variation due to the software and hardware capabilities of mobile devices, and variation in the functionality offered by the mobile application. We define a modeling methodology (using UML) for mobile application modeling and a feature model for capturing mobile application variability.
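    The variant-derivation idea can be illustrated with a minimal feature-model sketch. All feature names and constraints below are hypothetical; MOPPET itself works on UML models and a full feature model of the product line:

```python
# Hypothetical feature model for a mobile app product line:
# mandatory features appear in every variant; optional ones are
# selected per variant, subject to cross-feature constraints.
FEATURE_MODEL = {
    "mandatory": {"Login", "Browse"},
    "optional":  {"OfflineMode", "PushNotifications", "InAppPurchase"},
    "requires":  {"InAppPurchase": {"Login"}},  # requires-constraints
}

def derive_variant(selected, model):
    """Return the full feature set of one variant, or raise if the
    selection violates the feature model."""
    unknown = set(selected) - model["optional"]
    if unknown:
        raise ValueError(f"unknown optional features: {unknown}")
    features = set(model["mandatory"]) | set(selected)
    for feat, deps in model["requires"].items():
        if feat in features and not deps <= features:
            raise ValueError(f"{feat} requires {deps}")
    return features
```

    Each valid selection corresponds to one application variant; the generation step then produces the platform-specific code and models for that feature set.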


    To support performance testing of the generated feature-based mobile application variants for multiple platforms, we developed a tool named PELLET that tests the performance parameters of mobile applications, i.e., time, memory, and battery consumption. We define a performance modeling profile for specifying mobile domain-specific performance concepts and automate the generation of mobile platform-specific test cases from the UML diagrams.
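    The kind of per-test-case measurement that such tooling automates can be sketched as follows. This is a generic stand-in, not PELLET's implementation: the generated test cases use platform-specific probes, and battery consumption in particular requires device-level APIs omitted here:

```python
import time
import tracemalloc

def measure(test_case, *args):
    """Run one test case and record wall-clock time and peak Python
    memory (a stand-in for the time/memory/battery probes that a
    performance-testing tool would generate per platform)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = test_case(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak
```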


    Contact Person: Muhammad Usman



Search-based Software Testing

  • Test Data Generation by Solving OCL Constraints

    The Object Constraint Language (OCL) is an international modeling standard for writing constraints on Unified Modeling Language (UML) diagrams. These constraints can be written at various levels, e.g., they can represent class and state invariants, guards in state machines, constraints in sequence diagrams, and pre- and post-conditions of operations.

    Depending on the goals, we might need to solve the OCL constraints on the models for various purposes. For example, if the models are developed for automated model-based testing, solving the OCL constraints on the test models is essential for test data generation. Similarly, we need to solve the constraints on meta-models if we intend to automatically create models conforming to those meta-models. Another use is, for example, to identify inconsistencies between constraints.


    For industrial systems, solving constraints written in a language as expressive as OCL is typically a very complex task. Most available tools apply a random strategy for this purpose, which does not scale to solving complex constraints. Therefore, to achieve the goals mentioned above, a key requirement is an automated, scalable, and robust constraint solver that is useful in various contexts.


    As part of this research project, we have developed such an automated OCL constraint solver. The tool is developed in Java using the Eclipse IDE and builds on the Eclipse OCL API, which provides facilities to parse and evaluate OCL expressions. It takes a UML class diagram and constraints written in OCL as input, loads the UML model, and parses the OCL constraints to extract the relevant information. It then uses search techniques to solve the constraints; several search algorithms are available for this purpose, such as AVM, SSGA, (1+1) EA, and random search (RS). When a search algorithm finds a solution that satisfies the given OCL constraints, the solution is first checked with the Eclipse OCL evaluator and then a UML object diagram containing the test data is generated.
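    The search-based solving step can be illustrated with a small sketch. The invariant below (`self.x > 10 and self.x = self.y`) and the value ranges are hypothetical; the actual solver works on parsed OCL expressions and, besides the random-search baseline shown here, offers guided algorithms (AVM, SSGA, (1+1) EA) that exploit the fitness gradient:

```python
import random

# Standard branch-distance heuristics for relational predicates,
# as used in search-based testing; K is a small constant penalty.
K = 1.0

def dist_gt(a, b):
    return 0.0 if a > b else (b - a) + K

def dist_eq(a, b):
    return 0.0 if a == b else abs(a - b)

def fitness(x, y):
    """Distance to satisfying a hypothetical OCL invariant:
         self.x > 10 and self.x = self.y
    Conjunction is handled by summing the clause distances;
    0.0 means the constraint is satisfied."""
    return dist_gt(x, 10) + dist_eq(x, y)

def random_search(max_tries=100000, seed=0):
    """Random-search (RS) baseline: sample attribute values until the
    fitness reaches zero, i.e., the constraint is solved."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        x, y = rng.randint(-100, 100), rng.randint(-100, 100)
        if fitness(x, y) == 0.0:
            return x, y
    return None  # give up: no solution found in the budget
```

    A found solution corresponds to the attribute values of the generated UML object diagram, i.e., the test data.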


    Moreover, to support thorough testing of critical systems, the tool also provides a mechanism to generate multiple solutions according to various coverage criteria: clause coverage, partition coverage, branch coverage, and predicate (MC/DC) coverage.


    Contact Person: Zohaib Iqbal


  • Model Transformation Testing Environment (MOTTER)

    Model transformations (MT) are a fundamental part of Model-Driven Engineering (MDE). A model transformation converts a model from one representation to another: it takes as input a source model that conforms to a source meta-model and generates a target model that conforms to a target meta-model. Like any other software program, the correctness of a model transformation program is of significant importance. Existing traditional software testing techniques cannot be directly adopted for testing transformations because of a number of specific challenges. The foremost is the complexity involved in test generation, which in the case of model transformations means generating instances of the input meta-model.

    Typically, a meta-model comprises a large set of elements. These elements have attributes and relationships, which are further restricted by constraints defined in OCL. Automated generation of instances from a meta-model is itself a difficult process, which is further complicated when the constraints specified over the meta-model must also be solved. The complexity of the instance generation process depends on the language elements used in the meta-model and on solving the constraints so that the model instances are valid. Manual generation of valid meta-model instances is not feasible, while automated generation requires solving the OCL constraints. Meta-element instances are generated to achieve coverage of either the meta-model or the structure of the transformation under test.


    The Model Transformation Testing Environment (Motter) is a tool-set that implements a search-based approach to automated structural testing of model transformations and meta-models. Motter generates a set of test cases (instances of the source meta-model serving as test models) that not only cover various execution paths of the transformation under test but are also valid meta-model instances. The use of Motter is therefore two-fold: generating valid meta-model instances to maximize meta-model coverage, and generating test models that achieve various structural coverage criteria of the transformation, including statement coverage and branch coverage.


    The overall architecture of the Motter tool-set is shown in Figure 1. The tool implements search-based strategies to maximize the structural coverage of transformations and meta-models. To guide the search, a fitness function based on approach level and branch distance has been developed. Additionally, Motter also solves the constraints specified on the meta-model.
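    A common way of combining these two measures in search-based testing, assumed here as an illustration (Motter's exact fitness may differ), is to add the approach level to a normalized branch distance, so that getting one control dependency closer to the target always dominates any change in branch distance:

```python
def normalize(d):
    """Map a non-negative branch distance into [0, 1) so it can never
    outweigh a whole approach level."""
    return d / (d + 1.0)

def fitness(approach_level, branch_distance):
    """Combined fitness guiding the search toward a target branch:
    the approach level counts the unsatisfied control dependencies on
    the path to the target, and the normalized branch distance refines
    it at the critical (last diverging) branch. 0.0 = target reached."""
    return approach_level + normalize(branch_distance)
```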


    The current version of the Motter tool supports structural coverage of transformations written in the Atlas Transformation Language (ATL) and MofScript.


    Contact Person: Atif Jilani