COMET - Community Event-based Testing
Benchmarks

Our benchmarks are grouped into collections. Each collection represents a logical grouping from a specific contributor and may contain several individual programs. You will have the opportunity to select all or part of each benchmark for download. When you expand a benchmark, you can view information that includes its operating-system environment, HTML links to the tools required to run and use the benchmark, and descriptions of which test suites and execution matrices are included. Text highlighted in gold indicates a relative path within the benchmark folder. When a benchmark is the result of a publication, the .bib file for that publication is included as well.

UNL.Toy.2010
This collection of benchmarks was used in the experiments for GUI test suite repair in the paper Repairing GUI Test Suites Using a Genetic Algorithm, published at ICST 2010. The website with full results for this paper is http://cse.unl.edu/~myra/artifacts/icst2010. A BibTeX entry for this paper is attached with each benchmark, or you can use the following.
@inproceedings{huang.cohen.memon.icst10,
  author={Si Huang and Myra B. Cohen and Atif M. Memon},
  title={Repairing {GUI} Test Suites Using a Genetic Algorithm},
  booktitle={ICST '10: International Conference on Software Testing,
             Verification and Validation},
  address={Paris, France},
  month={April},
  year={2010}
}
UMD.Reduction.TSE.2008
This collection of benchmarks was used in the paper "Call-Stack Coverage for GUI Test-Suite Reduction" by Scott McMaster and Atif M. Memon, IEEE Transactions on Software Engineering, 2008, IEEE Press. Users can apply these benchmarks to evaluate different test-suite reduction techniques.
@article{McMasterMemonTSE2008,
   author = {Scott McMaster and Atif M. Memon},
   title = {Call-Stack Coverage for {GUI} Test-Suite Reduction },
   journal = {IEEE Trans. Softw. Eng.},
   publisher = {IEEE Press},
   address = {Piscataway, NJ, USA},
   year = {2008}
}
UMD.Length1-2.EFG.2010
This collection has been developed by the GUITAR group at the University of Maryland, College Park. It consists of benchmarks with two important characteristics: (1) the underlying model used for test case generation is the Event-Flow Graph (EFG), and (2) the test suites contain all possible test cases of length 1 and length 2. The number of length-1 test cases is equal to the number of nodes in the EFG, and the number of length-2 test cases is equal to the number of edges in the EFG (see the sketch after the collection list).
SAPE-Pounder-2010
This collection of benchmarks was used in the experiments for the paper Automating Performance Testing of Interactive Java Applications, published at AST 2010. A BibTeX entry for the paper is provided below, and the paper itself can be found at http://sape.inf.usi.ch/sites/default/files/publication/ast10.pdf
@inproceedings{Jovic10,
  author = {Jovic, Milan and Adamoli, Andrea and Zaparanuks, Dmitrijs and Hauswirth, Matthias},
  title = {Automating performance testing of interactive Java applications},
  booktitle = {AST '10: Proceedings of the 5th Workshop on Automation of Software Test},
  year = {2010},
  isbn = {978-1-60558-970-1},
  pages = {8--15},
  location = {Cape Town, South Africa},
  doi = {http://doi.acm.org/10.1145/1808266.1808268},
  publisher = {ACM},
  address = {New York, NY, USA}
}
ICSE.2015
This collection of benchmarks was used in the experiments for Making System User Interactive Tests Repeatable: When and What Should We Control?, published at ICSE 2015. The website with supplemental artifacts for this paper is http://cse.unl.edu/~myra/artifacts/Repeatability. A BibTeX entry for this paper is attached with each benchmark, or you can use the following.
@inproceedings{Gao:ICSE2015,
 author = {Gao, Zebao and Liang, Yalan and Cohen, Myra B. and Memon, Atif M. and Wang, Zhen},
 title = {Making System User Interactive Tests Repeatable: When and What Should We Control?},
 booktitle = {Proceedings of the 37th International Conference on Software Engineering - Volume 1},
 series = {ICSE '15},
 year = {2015},
 location = {Florence, Italy},
 pages = {55--65}
} 
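
For readers who want to see how the length-1 and length-2 test suites in UMD.Length1-2.EFG.2010 relate to the EFG, the following is a minimal Java sketch. It is not GUITAR's actual API; the EfgTestCaseEnumerator class and its adjacency-map representation of the EFG are hypothetical and included only to illustrate why the number of length-1 test cases equals the number of EFG nodes and the number of length-2 test cases equals the number of EFG edges.

import java.util.*;

// Minimal sketch (not the GUITAR API): enumerating length-1 and length-2
// test cases from an Event-Flow Graph, where each node is a GUI event and
// an edge e1 -> e2 means e2 may be executed immediately after e1.
public class EfgTestCaseEnumerator {

    // Hypothetical adjacency-list representation of an EFG.
    private final Map<String, List<String>> followsRelation;

    public EfgTestCaseEnumerator(Map<String, List<String>> followsRelation) {
        this.followsRelation = followsRelation;
    }

    // One length-1 test case per node in the EFG.
    public List<List<String>> lengthOneTestCases() {
        List<List<String>> cases = new ArrayList<>();
        for (String event : followsRelation.keySet()) {
            cases.add(List.of(event));
        }
        return cases;
    }

    // One length-2 test case per edge in the EFG.
    public List<List<String>> lengthTwoTestCases() {
        List<List<String>> cases = new ArrayList<>();
        for (Map.Entry<String, List<String>> entry : followsRelation.entrySet()) {
            for (String next : entry.getValue()) {
                cases.add(List.of(entry.getKey(), next));
            }
        }
        return cases;
    }

    public static void main(String[] args) {
        // Toy EFG: "File" can be followed by "Open" or "Exit"; "Open" by "Cancel".
        Map<String, List<String>> efg = Map.of(
                "File", List.of("Open", "Exit"),
                "Open", List.of("Cancel"),
                "Exit", List.of(),
                "Cancel", List.of());
        EfgTestCaseEnumerator enumerator = new EfgTestCaseEnumerator(efg);
        System.out.println("Length-1: " + enumerator.lengthOneTestCases()); // 4 cases = number of nodes
        System.out.println("Length-2: " + enumerator.lengthTwoTestCases()); // 3 cases = number of edges
    }
}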

Because the artifacts are packaged automatically when you download the benchmarks, it may take some time for the download dialog to appear.