[1] Mellanox Technologies. http://www.mellanox.com.

[3] O. Aumage, L. Bougé, L. Eyraud, G. Mercier, R. Namyst, L. Prylli, A. Denis, and J.-F. Méhaut. High performance computing on heterogeneous clusters with the Madeleine II communication library. Cluster Computing, 5(1):43-54, 2002.

[4] D. H. Bailey, E. Barszcz, J. T. Barton, D. S. Browning, R. L. Carter, D. Dagum, R. A. Fatoohi, P. O. Frederickson, T. A. Lasinski, R. S. Schreiber, H. D. Simon, V. Venkatakrishnan, and S. K. Weeratunga. The NAS parallel benchmarks. Volume 5, pages 63-73, Fall 1991.

[5] P. N. Brown, R. D. Falgout, and J. E. Jones. Semi-coarsening multigrid on distributed memory machines. SIAM Journal on Scientific Computing, 21(5):1823-1834, 2000.

[7] W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard. Technical report, Argonne National Laboratory and Mississippi State University.

[9] M. Koop, T. Jones, and D. K. Panda. Reducing Connection Memory Requirements of MPI for InfiniBand Clusters: A Message Coalescing Approach. In 7th IEEE Int'l Symposium on Cluster Computing and the Grid (CCGrid07), Rio de Janeiro, Brazil, May 2007.

[10] M. Koop, S. Sur, Q. Gao, and D. K. Panda. High Performance MPI Design using Unreliable Datagram for Ultra-Scale InfiniBand Clusters. In 21st ACM International Conference on Supercomputing (ICS07), Seattle, WA, June 2007.

[12] Lawrence Berkeley National Laboratory. MVICH: MPI for Virtual Interface Architecture. http://www.nersc.gov/research/FTG/mvich/index.html, August 2001.

[13] J. Liu, J. Wu, S. P. Kini, P. Wyckoff, and D. K. Panda. High Performance RDMA-Based MPI Implementation over InfiniBand. In 17th Annual ACM International Conference on Supercomputing (ICS '03), June 2003.

[14] Mellanox Technologies. ConnectX Architecture. http://www.mellanox.com/products/.

[15] Message Passing Interface Forum. MPI: A Message-Passing Interface Standard, March 1994.

[16] Network-Based Computing Laboratory. MVAPICH: MPI over InfiniBand and iWARP. http://mvapich.cse.ohio-state.edu.

[20] G. M. Shipman, R. Brightwell, B. Barrett, J. M. Squyres, and G. Bloch. Investigations on InfiniBand: Efficient network buffer utilization at scale. In Proceedings, Euro PVM/MPI, Paris, France, October 2007.

[21] G. M. Shipman, T. S. Woodall, G. Bosilca, R. L. Graham, and A. B. Maccabe. High performance RDMA protocols in HPC. In Proceedings, 13th European PVM/MPI Users' Group Meeting, Lecture Notes in Computer Science, Bonn, Germany, September 2006. Springer-Verlag.

[25] S. Sur, A. Vishnu, H. W. Jin, W. Huang, and D. K. Panda. Can Memory-Less Network Adapters Benefit Next-Generation InfiniBand Systems? In Hot Interconnect (HOTI 05), 2005.

[26] Texas Advanced Computing Center. HPC Systems. http://www.tacc.utexas.edu/resources/hpcsystems/.