[3] L. Bulej, T. Kalibera, and P. Tuma. Repeated results analysis for middleware regression benchmarking. Performance Evaluation, 60(1-4):345-358, May 2005.
[4] M. Courson, A. Mink, G. Marcais, and B. Traverse. An automated benchmarking toolset. In HPCN Europe, volume 1823 of LNCS, pages 497-506. Springer, 2000.
[7] Distributed Systems Research Group. Mono regression benchmarking. http://nenya.ms.mff.cuni.cz/projects/mono, 2005.
[8] Distributed Systems Research Group. Comprehensive CORBA benchmarking. http://nenya.ms.mff.cuni.cz/projects/corba/xampler.html, 2006.
[9] DOC Group. TAO perf. scoreboard. http://www.dre.vanderbilt.edu/stats/performance.shtml, 2006.
[10] Free Software Foundation. The R project for statistical computing. http://www.r-project.org, 2006.
[11] T. Kalibera, L. Bulej, and P. Tuma. Generic environment for full automation of benchmarking. In SOQUA/TECOS, volume 58 of LNI, pages 125-132. GI, 2004.
[12] T. Kalibera, L. Bulej, and P. Tuma. Benchmark precision and random initial state. In SPECTS 2005, pages 853-862, San Diego, CA, USA, July 2005. SCS.
[13] T. Kalibera and P. Tuma. Precise regression benchmarking with random effects: Improving Mono benchmark results. In Formal Methods and Stochastic Models for Performance Evaluation, volume 4054 of LNCS, pages 63-77. Springer, June 2006.
[14] A. M. Memon, A. A. Porter, C. Yilmaz, A. Nagarajan, D. C. Schmidt, and B. Natarajan. Skoll: Distributed continuous quality assurance. In ICSE, pages 459-468. IEEE Computer Society, 2004.
[15] M. Prochazka, A. Madan, J. Vitek, and W. Liu. RTJBench: A Real-Time Java Benchmarking Framework. In Component and Middleware Performance Workshop, OOPSLA 2004, Oct. 2004.
[16] Supercomputer Computations Research Institute, Florida State University. Distributed queueing system. http://packages.qa.debian.org/d/dqs.html, 1998.
[17] University Corporation for Atmospheric Research. Network Common Data Form. http://www.unidata.ucar.edu/software/netcdf, 2006.
[18] C. Yilmaz, A. S. Krishna, A. M. Memon, A. A. Porter, D. C. Schmidt, A. S. Gokhale, and B. Natarajan. Main effects screening: A distributed continuous quality assurance process for monitoring performance degradation in evolving software systems. In ICSE, pages 293-302. ACM, 2005.