Project: SPASS-meter
The resource consumption of a program can be measured, for example, in terms of computation time, memory usage, or file and network transfer. These measurements are important key indicators for managing software development activities, but also for controlling the self-adaptation and self-configuration of programs at runtime. During software development, these indicators can be used to operationalize quality requirements and to support project management. At runtime, they can drive self-adaptation, e.g., to maintain or improve runtime quality in changing environments.
Figure: SPASS-meter monitoring pipeline
SPASS-meter1 is a monitoring framework that measures the consumption of resources such as CPU time, memory use, and file and network transfer for Java programs, in order to support software development in general and self-adaptive programs in particular. One distinctive feature of SPASS-meter is that it determines the resource consumption of user-defined semantic program units such as components or services. A semantic program unit is defined in terms of a logical grouping, its monitoring scope (in Java, characterized by individual classes or methods), and the resources to be monitored. The monitoring scope also defines whether the consumption shall be recorded directly for the specified elements only, or also indirectly for dependent (called) program parts. Further, SPASS-meter can determine the resource consumption on program and system level, such as the CPU usage of the Java virtual machine executing a program, in order to derive the relative resource consumption and to enable comparisons across programs.
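Program- and system-level figures of this kind can be read through the standard java.lang.management API. The following self-contained sketch is not SPASS-meter code, only an illustration of the underlying JVM facilities: it reads the current thread's CPU time and the JVM's heap usage before and after a piece of work.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmResourceProbe {

    /** CPU time consumed by the current thread in nanoseconds, or -1 if unsupported. */
    static long currentThreadCpuTimeNanos() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.isCurrentThreadCpuTimeSupported()
                ? threads.getCurrentThreadCpuTime() : -1L;
    }

    /** Heap memory currently used by the JVM, in bytes. */
    static long usedHeapBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        return memory.getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        long cpuBefore = currentThreadCpuTimeNanos();

        // Some work whose consumption we want to attribute to a program unit.
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }

        System.out.println("CPU time (ns): " + (currentThreadCpuTimeNanos() - cpuBefore));
        System.out.println("Heap used (bytes): " + usedHeapBytes());
        System.out.println("sum=" + sum);
    }
}
```

A monitoring framework attributes such deltas to semantic units instead of whole threads, which is what the instrumentation described below makes possible.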
Figure: SPASS-meter architecture
The monitoring scope of SPASS-meter can be configured in a very flexible way and, indirectly, allows controlling the runtime overhead, as resource-consuming measurements such as indirect metrics are only performed on demand. The configuration of SPASS-meter can be given in terms of source code annotations or as an external configuration file. As usual in Java, SPASS-meter determines the resource consumption by instrumentation, i.e., by inserting additional program code that realizes the measurements.
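An annotation-based configuration might look as follows. The annotation name and attributes (@Monitor, group, indirect) are illustrative assumptions for this sketch, not SPASS-meter's actual API; the point is how a logical group and a direct/indirect scope could be attached to classes and methods.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Illustrative stand-in for an annotation-based monitoring configuration.
// Name and attributes are assumptions, not SPASS-meter's real API.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface Monitor {
    /** Logical group (semantic unit) this element is attributed to. */
    String group();
    /** Whether consumption of dependent (called) code is also recorded. */
    boolean indirect() default false;
}

// A service class whose resource consumption is attributed to the
// logical group "payment", including dependent calls.
@Monitor(group = "payment", indirect = true)
class PaymentService {
    @Monitor(group = "payment-validation")
    boolean validate(String order) {
        return order != null && !order.isEmpty();
    }
}

public class MonitoringConfigSketch {
    public static void main(String[] args) {
        // An instrumenting agent would read this metadata to decide
        // which classes/methods to instrument and how to aggregate.
        Monitor m = PaymentService.class.getAnnotation(Monitor.class);
        System.out.println("group=" + m.group() + ", indirect=" + m.indirect());
        // prints: group=payment, indirect=true
    }
}
```

An external configuration file would carry the same information (group name, scoped classes/methods, monitored resources) without touching the source code.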
Experiments with the SPECjvm2008 benchmark suite showed an average runtime overhead of 2.7% in computation time and at most 2.2% additional memory consumption, even for indirect measurement of memory allocations.
SPASS-meter will be published under an Open Source license as part of the INDENICA results.
Summary of important features:
- Measurement of CPU time, response time, memory use (allocation, deallocation), file transfer and network transfer on system, program and logical-group level
- User-defined logical grouping in order to specify the monitoring scope for semantic units such as services or components
- Direct resource monitoring (only in specified classes/methods) or indirect monitoring (also in dependent classes/methods)
- Static, dynamic and mixed instrumentation
- Can be applied to Java programs and Android Apps
- Optional integration with Java Management Extensions (JMX) and OW2 Wildcat
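The JMX integration listed above can be pictured with the standard platform MBean server: a monitored value for a logical group is registered as an MBean so that JMX clients (e.g., JConsole) can read it. The MBean and attribute names below are illustrative, not SPASS-meter's actual JMX interface.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean interface; by convention its name is the implementing
// class name plus the "MBean" suffix.
interface GroupConsumptionMBean {
    long getAllocatedBytes();
}

// Holds the consumption recorded for one logical group.
class GroupConsumption implements GroupConsumptionMBean {
    private volatile long allocatedBytes;
    public long getAllocatedBytes() { return allocatedBytes; }
    void record(long bytes) { allocatedBytes += bytes; }
}

public class JmxExportSketch {
    public static void main(String[] args) throws Exception {
        GroupConsumption consumption = new GroupConsumption();
        consumption.record(4096);

        // Register with the platform MBean server under an illustrative name.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(
                "spassmeter.example:type=GroupConsumption,group=payment");
        server.registerMBean(consumption, name);

        // Any JMX client can now read the attribute remotely.
        System.out.println("allocated="
                + server.getAttribute(name, "AllocatedBytes"));
    }
}
```
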
1 SPASS-meter is being developed as part of the SPASS development approach (Simplifying the develoPment of Adaptive Software Systems).
SPASS-meter is released under the Apache Open Source license and is available for download at the SSE GitHub. The released binary components are also available on Maven Central.
News: SPASS-meter 1.30 (06.06.2018) now also supports Java 9.
Duration:
2010 –
Contact: Dr. Holger Eichelberger
Publications
2017
6. Holger Eichelberger, Cui Qin and Klaus Schmid (2017): From Resource Monitoring to Requirements-based Adaptation: An Integrated Approach. In: Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering Companion (ICPE '17), pp. 91-96. ACM.
Abstract: In large and complex systems there is a need to monitor resources, as it is critical for system operation to ensure sufficient availability of resources and to adapt the system as needed. While there are various (resource) monitoring solutions, these typically do not include an analysis part that takes care of analyzing violations and responding to them. In this paper we report on experiences, challenges and lessons learned in creating a solution for performing requirements monitoring for resource constraints and using this as a basis for adaptation to optimize the resource behavior. Our approach rests on reusing two previous solutions (one for resource monitoring and one for requirements-based adaptation) that were built in our group.
5. Holger Knoche and Holger Eichelberger (2017): The Raspberry Pi: A Platform for Replicable Performance Benchmarks? In: Proceedings of the 8th Symposium on Software Performance, Softwaretechnik-Trends, vol. 37, no. 3, pp. 14-16.
Abstract: Replicating results of performance benchmarks can be difficult. A common problem is that researchers often do not have access to identical hardware and software setups. Modern single-board computers like the Raspberry Pi are standardized, cheap, and powerful enough to run many benchmarks, although probably not at the same performance level as desktop or server hardware. In this paper, we use the MooBench micro-benchmark to investigate to what extent the Raspberry Pi is suited as a platform for replicable performance benchmarks. We report on our approach to set up and run the experiments as well as the experiences we gained.
2016
4. Holger Eichelberger, Aike Sass and Klaus Schmid (2016): From Reproducibility Problems to Improvements: A Journey. In: Proceedings of the 7th Symposium on Software Performance, Softwaretechnik-Trends, no. 4, pp. 43-45.
Abstract: Reproducibility and repeatability are key properties of benchmarks. However, achieving reproducibility can be difficult. We faced this while applying the micro-benchmark MooBench to the resource monitoring framework SPASS-meter. In this paper, we discuss some interesting problems that occurred while trying to reproduce previous benchmarking results. In the process of reproduction, we extended MooBench and made improvements to the performance of SPASS-meter. We conclude with lessons learned for reproducing (micro-)benchmarks.
2014
3. Holger Eichelberger and Klaus Schmid (2014): Flexible Resource Monitoring of Java Programs. In: Journal of Systems and Software, 93: 163-186. Elsevier.
2012
2. Holger Eichelberger (2012): SPASS-meter - Measuring Diverse Software Attributes in an Integrated Manner. Invited talk at the KoSSE Symposium Application Performance Management (Kieker Days 2012).
1. Holger Eichelberger and Klaus Schmid (2012): Erhebung von Produkt-Laufzeit-Metriken: Ein Vergleich mit dem SPASS-Meter-Werkzeug. In: G. Büren, R. R. Dumke, C. Ebert and H. Münch (eds.): Proceedings of the DASMA Metrik Kongress (MetriKon '12), pp. 171-180. Shaker Verlag.
Abstract: Collecting product metrics at runtime is an essential building block of a quality strategy in product development. However, collecting product metrics during development often involves significant effort, since many tools can only capture certain properties and their use is often very complex. Moreover, performing the measurements usually causes significant runtime overhead. The goal of the SPASS-meter tool is the integrated collection of a variety of metrics with simple configuration and low runtime overhead. In this paper we present SPASS-meter and compare it with well-known similar tools such as Kieker, OpenCore, Xprof and HPROF. An overview of strengths and weaknesses concludes the comparison.