References

  1. H. Barringer, Y. Falcone, K. Havelund, G. Reger & D. Rydeheard (2012): Quantified event automata: Towards expressive and efficient runtime monitors. In: Proceedings of the International Symposium on Formal Methods (FM), pp. 68–84, doi:10.1007/978-3-642-32759-9_9.
  2. E. Bartocci, R. Grosu, A. Karmarkar, S. A. Smolka, S. D. Stoller & J. Seyster (2012): Adaptive Runtime Verification. In: Proceedings of the International Conference on Runtime Verification (RV). Springer, pp. 168–182, doi:10.1007/978-3-642-35632-2_18.
  3. A. Bauer & Y. Falcone (2012): Decentralised LTL monitoring. In: Proceedings of the International Symposium on Formal Methods (FM), pp. 85–100, doi:10.1007/978-3-642-32759-9_10.
  4. J. Bowring, A. Orso & M. J. Harrold (2002): Monitoring deployed software using software tomography. In: Proceedings of the ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE), doi:10.1145/586094.586099.
  5. P. Braione, G. Denaro & M. Pezzè (2013): Enhancing symbolic execution with built-in term rewriting and constrained lazy initialization. In: Proceedings of the Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), doi:10.1145/2491411.2491433.
  6. F. Chen & G. Rosu (2009): Parametric Trace Slicing and Monitoring. In: Proceedings of the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), LNCS 5505. Springer, pp. 246–261, doi:10.1007/978-3-642-00768-2_23.
  7. J. Clause & A. Orso (2007): A technique for enabling and supporting debugging of field failures. In: Proceedings of the International Conference on Software Engineering (ICSE), doi:10.1109/ICSE.2007.10.
  8. O. Cornejo (2017): Flexible in-the-field monitoring. In: Proceedings of the International Conference on Software Engineering (ICSE) - Companion, doi:10.1109/ICSE-C.2017.37.
  9. O. Cornejo, D. Briola, D. Micucci & L. Mariani (2017): In the Field Monitoring of Interactive Applications. In: Proceedings of the International Conference on Software Engineering: New Ideas and Emerging Results Track (ICSE-NIER), doi:10.1109/ICSE-NIER.2017.19.
  10. N. Delgado, A. Q. Gates & S. Roach (2004): A taxonomy and catalog of runtime software-fault monitoring tools. IEEE Transactions on Software Engineering (TSE), doi:10.1109/TSE.2004.91.
  11. M. Diep, S. Elbaum & M. Dwyer (2008): Trace normalization. In: Proceedings of the Symposium on Software Reliability Engineering (ISSRE), doi:10.1109/ISSRE.2008.37.
  12. J. Dolby, S. J. Fink & M. Sridharan (visited on June 23 2017): TJ Watson libraries for analysis (WALA). http://wala.sourceforge.net/.
  13. Eclipse Community (visited on June 24 2017): Eclipse. http://www.eclipse.org.
  14. S. Elbaum & M. Diep (2005): Profiling deployed software: Assessing strategies and testing opportunities. IEEE Transactions on Software Engineering (TSE), doi:10.1109/TSE.2005.50.
  15. L. Gazzola (2017): Field Testing of Software Applications. In: Proceedings of the International Conference on Software Engineering (ICSE) - Companion, doi:10.1109/ICSE-C.2017.30.
  16. L. Gazzola, L. Mariani, F. Pastore & M. Pezzè (2017): An Exploratory Study of Field Failures. In: Proceedings of the International Symposium on Software Reliability Engineering (ISSRE).
  17. G. Jin, A. Thakur, B. Liblit & S. Lu (2010): Instrumentation and Sampling Strategies for Cooperative Concurrency Bug Isolation. In: Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA), doi:10.1145/1869459.1869481.
  18. W. Jin & A. Orso (2012): BugRedux: reproducing field failures for in-house debugging. In: Proceedings of the International Conference on Software Engineering (ICSE), doi:10.1109/ICSE.2012.6227168.
  19. J. C. King (1976): Symbolic execution and program testing. Communications of the ACM 19(7), pp. 385–394, doi:10.1145/360248.360252.
  20. B. Liblit, A. Aiken, A. X. Zheng & M. I. Jordan (2003): Bug isolation via remote program sampling. ACM SIGPLAN Notices, doi:10.1145/780822.781148.
  21. Microsoft (visited on June 24 2017): Windows 10. http://www.microsoft.com.
  22. C. Nie & H. Leung (2011): A survey of combinatorial testing. ACM Computing Surveys (CSUR) 43(2), pp. 11:1–11:29, doi:10.1145/1883612.1883618.
  23. P. Ohmann, D. B. Brown, N. Neelakandan, J. Linderoth & B. Liblit (2016): Optimizing Customized Program Coverage. In: Proceedings of the IEEE / ACM International Conference on Automated Software Engineering (ASE), doi:10.1145/2970276.2970351.
  24. A. Orso & B. Kennedy (2005): Selective Capture and Replay of Program Executions. In: Proceedings of the International Workshop on Dynamic Analysis (WODA), doi:10.1145/1082983.1083251.
  25. A. Orso, D. Liang, M. J. Harrold & R. Lipton (2002): Gamma system: Continuous evolution of software after deployment. In: Proceedings of the ACM SIGSOFT international symposium on Software testing and analysis (ISSTA), doi:10.1145/566172.566182.
  26. C. Pavlopoulou & M. Young (1999): Residual test coverage monitoring. In: Proceedings of the International Conference on Software Engineering (ICSE), pp. 277–284, doi:10.1145/302405.302637.
  27. S. C. Seow (2008): Designing and engineering time: the psychology of time perception in software. Addison-Wesley Professional, ISBN: 0321509188, 9780321509185.