[1] Mellanox Technologies. http://www.mellanox.com.
[3] R. T. Aulwes, D. J. Daniel, N. N. Desai, R. L. Graham, L. Risinger, M. W. Sukalski, and M. A. Taylor. Network Fault Tolerance in LA-MPI. In Proceedings of EuroPVM/MPI '03, September 2003.
[4] D. H. Bailey, E. Barszcz, J. T. Barton, D. S. Browning, R. L. Carter, D. Dagum, R. A. Fatoohi, P. O. Frederickson, T. A. Lasinski, R. S. Schreiber, H. D. Simon, V. Venkatakrishnan, and S. K. Weeratunga. The NAS parallel benchmarks, volume 5, pages 63-73, Fall 1991.
[6] P. N. Brown, R. D. Falgout, and J. E. Jones. Semicoarsening multigrid on distributed memory machines. SIAM Journal on Scientific Computing, 21(5):1823-1834, 2000.
[8] E. Gabriel, G. E. Fagg, G. Bosilca, T. Angskun, J. J. Dongarra, J. M. Squyres, V. Sahay, P. Kambadur, B. Barrett, A. Lumsdaine, R. H. Castain, D. J. Daniel, R. L. Graham, and T. S. Woodall. Open MPI: Goals, concept, and design of a next generation MPI implementation. In Proceedings of the 11th European PVM/MPI Users' Group Meeting, pages 97-104, Budapest, Hungary, September 2004.
[10] R. L. Graham, S.-E. Choi, D. J. Daniel, N. N. Desai, R. G. Minnich, C. E. Rasmussen, L. D. Risinger, and M. W. Sukalski. A Network-Failure-Tolerant Message-Passing System for Terascale Clusters. International Journal of Parallel Programming, 31(4), August 2003.
[11] W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard. Technical report, Argonne National Laboratory and Mississippi State University.
[12] A. Hoisie, O. M. Lubeck, H. J. Wasserman, F. Petrini, and H. Aime. A General Predictive Performance Model for Wavefront Algorithms on Clusters of SMPs. In International Conference on Parallel Processing, pages 219-, 2000.
[14] K. Koch, R. Baker, and R. Alcouffe. Solution of the First-Order Form of the 3-D Discrete Ordinates Equation on a Massively Parallel Processor. Transactions of the American Nuclear Society, pages 65-, 1992.
[16] Lawrence Berkeley National Laboratory. MVICH: MPI for Virtual Interface Architecture. http://www.nersc.gov/research/FTG/mvich/index.html, August 2001.
[17] J. Liu, B. Chandrasekaran, J. Wu, W. Jiang, S. Kini, W. Yu, D. Buntinas, P. Wyckoff, and D. K. Panda. Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics. In Supercomputing (SC), 2003.
[19] J. Liu, J. Wu, S. P. Kini, P. Wyckoff, and D. K. Panda. High Performance RDMA-Based MPI Implementation over InfiniBand. In 17th Annual ACM International Conference on Supercomputing (ICS '03), June 2003.
[20] A. R. Mamidala, J. Liu, and D. K. Panda. Efficient Barrier and Allreduce on InfiniBand Clusters using Hardware Multicast and Adaptive Algorithms. In Proceedings of IEEE Cluster Computing, 2004.
[21] A. R. Mamidala, S. Narravula, A. Vishnu, G. Santhanaraman, and D. K. Panda. On using connection-oriented vs. connection-less transport for performance and scalability of collective and one-sided operations: Trade-offs and impact. In PPoPP '07: Proceedings of the 12th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 46-54. ACM Press, 2007.
[22] Message Passing Interface Forum. MPI: A Message-Passing Interface Standard, March 1994.
[23] A. A. Mirin, R. H. Cohen, B. C. Curtis, W. P. Dannevik, A. M. Dimits, M. A. Duchaineau, D. E. Eliason, D. R. Schikore, S. E. Anderson, D. H. Porter, P. R. Woodward, L. J. Shieh, and S. W. White. Very high resolution simulation of compressible turbulence on the IBM-SP system. In Supercomputing '99: Proceedings of the 1999 ACM/IEEE Conference on Supercomputing (CDROM), page 70, New York, NY, USA, 1999. ACM Press.
[25] F. Petrini, D. J. Kerbyson, and S. Pakin. The Case of the Missing Supercomputer Performance: Achieving Optimal Performance on the 8,192 Processors of ASCI Q. In SC '03: Proceedings of the 2003 ACM/IEEE Conference on Supercomputing, page 55. IEEE Computer Society, 2003.
[28] S. Sur, M. J. Koop, and D. K. Panda. High-Performance and Scalable MPI over InfiniBand with Reduced Memory Usage: An In-Depth Performance Analysis. In Supercomputing (SC), 2006.
[29] S. Sur, A. Vishnu, H. W. Jin, W. Huang, and D. K. Panda. Can Memory-Less Network Adapters Benefit Next-Generation InfiniBand Systems? In Hot Interconnects (HOTI '05), 2005.
[30] J. Vetter and F. Mueller. Communication characteristics of large-scale scientific applications for contemporary cluster architectures. In IPDPS '02: Proceedings of the 16th International Symposium on Parallel and Distributed Processing, page 27.2, Washington, DC, USA, 2002. IEEE Computer Society.
[31] J. S. Vetter and A. Yoo. An empirical performance evaluation of scalable scientific applications. In Supercomputing '02: Proceedings of the 2002 ACM/IEEE Conference on Supercomputing, pages 1-18, Los Alamitos, CA, USA, 2002. IEEE Computer Society Press.
[32] J. Wu, J. Liu, P. Wyckoff, and D. Panda. Impact of on-demand connection management in MPI over VIA. In CLUSTER '02: Proceedings of the IEEE International Conference on Cluster Computing, page 152, Washington, DC, USA, 2002. IEEE Computer Society.