Open Access Article

MPI performance guidelines for scalability

K.B. Manwade, D.B. Kulkarni

Section: Research Paper, Product Type: Journal Paper
Volume-06, Issue-01, Page no. 60-65, Feb-2018

CrossRef-DOI: https://doi.org/10.26438/ijcse/v6si1.6065

Published online on Feb 28, 2018

Copyright © K.B. Manwade, D.B. Kulkarni. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper

IEEE Style Citation: K.B. Manwade, D.B. Kulkarni, “MPI performance guidelines for scalability,” International Journal of Computer Sciences and Engineering, Vol.06, Issue.01, pp.60-65, 2018.

MLA Style Citation: K.B. Manwade, D.B. Kulkarni. "MPI performance guidelines for scalability." International Journal of Computer Sciences and Engineering 06.01 (2018): 60-65.

APA Style Citation: K.B. Manwade, D.B. Kulkarni (2018). MPI performance guidelines for scalability. International Journal of Computer Sciences and Engineering, 06(01), 60-65.

BibTex Style Citation:
@article{Manwade_2018,
author = {K.B. Manwade and D.B. Kulkarni},
title = {MPI performance guidelines for scalability},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {February 2018},
volume = {06},
number = {01},
month = feb,
year = {2018},
issn = {2347-2693},
pages = {60-65},
url = {https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=192},
doi = {10.26438/ijcse/v6si1.6065},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY  - JOUR
AU  - Manwade, K.B.
AU  - Kulkarni, D.B.
TI  - MPI performance guidelines for scalability
T2  - International Journal of Computer Sciences and Engineering
PY  - 2018
DA  - 2018/02/28
PB  - IJCSE, Indore, INDIA
SP  - 60
EP  - 65
IS  - 01
VL  - 06
SN  - 2347-2693
UR  - https://www.ijcseonline.org/full_spl_paper_view.php?paper_id=192
DO  - 10.26438/ijcse/v6si1.6065
ER  -


Abstract

MPI (Message Passing Interface) is the most widely used parallel programming paradigm. It is used for application development on small as well as large high-performance computing systems. The MPI standard provides a specification for its functions, but it does not guarantee any level of performance for implementations. Various implementations, from both vendors and research groups, are now available, and users expect consistent performance from all of them on all platforms. In the literature, performance guidelines have been defined for MPI communication functions, I/O functions, and derived datatypes. Using these guidelines as a base, we define guidelines for the scalability of MPI communication functions, and we verify them with a benchmark application on different MPI implementations such as MPICH and Open MPI. The experimental results show that point-to-point communication functions are scalable. This is expected, since only a pair of processes is involved in point-to-point communication; the corresponding guidelines are therefore defined as performance requirements derived from the semantics of these functions. In collective communication functions, all processes are involved, which makes defining performance guidelines for them difficult. In this paper, we define such guidelines by considering the amount of data transferred by each function. We also verify the defined guidelines and elaborate on the reasons for violations.
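
To make the notion of a self-consistent guideline concrete, the sketch below times one classic expectation from this family of guidelines, MPI_Allreduce(n) <= MPI_Reduce(n) + MPI_Bcast(n), on a single communicator. This is a minimal illustration under our own assumptions, not the benchmark application used in the paper; the message size N, the repetition count REPS, and judging the inequality by the slowest rank's time are illustrative choices.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N    (1 << 20)   /* doubles per message: an assumed, illustrative size */
#define REPS 50          /* timing repetitions: an assumed, illustrative count */

/* Average time per MPI_Allreduce over REPS repetitions. */
static double time_allreduce(double *in, double *out)
{
    MPI_Barrier(MPI_COMM_WORLD);
    double t = MPI_Wtime();
    for (int r = 0; r < REPS; r++)
        MPI_Allreduce(in, out, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return (MPI_Wtime() - t) / REPS;
}

/* Average time for the functionally equivalent MPI_Reduce + MPI_Bcast pair. */
static double time_reduce_bcast(double *in, double *out)
{
    MPI_Barrier(MPI_COMM_WORLD);
    double t = MPI_Wtime();
    for (int r = 0; r < REPS; r++) {
        MPI_Reduce(in, out, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        MPI_Bcast(out, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }
    return (MPI_Wtime() - t) / REPS;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *in  = malloc(N * sizeof(double));
    double *out = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) in[i] = 1.0;

    double t_all = time_allreduce(in, out);
    double t_rb  = time_reduce_bcast(in, out);

    /* Collectives finish when the slowest process finishes, so judge the
       guideline on the maximum time across ranks. */
    double max_all, max_rb;
    MPI_Reduce(&t_all, &max_all, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&t_rb,  &max_rb,  1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("MPI_Allreduce: %.6f s, MPI_Reduce+MPI_Bcast: %.6f s -> guideline %s\n",
               max_all, max_rb, max_all <= max_rb ? "holds" : "violated");

    free(in);
    free(out);
    MPI_Finalize();
    return 0;
}

Built with any MPI compiler wrapper (for example, mpicc guideline.c -o guideline && mpirun -np 8 ./guideline), the same source can be run under both MPICH and Open MPI at increasing process counts to observe how each implementation honours the guideline as the job scales.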

Key-Words / Index Term

Performance guidelines for MPI functions, Scalability of MPI functions, High-performance computing

References

[1] A. Mallón, Guillermo L. Taboada, Carlos Teijeiro, Juan Touriño, Basilio B. Fraguela, Andrés Gómez, Ramón Doallo, J. Carlos Mouriño, “Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures”, Recent Advances in Parallel Virtual Machine and Message Passing Interface, EuroPVM/MPI 2009, Lecture Notes in Computer Science, pp. 174-184, 2009.
[2] William D. Gropp, Rajeev Thakur, “Self-consistent MPI performance guidelines”, IEEE Transactions on Parallel and Distributed Systems, 2005.
[3] William D. Gropp, Dries Kimpe, Robert Ross, Rajeev Thakur, Jesper Larsson Träff, “Self-consistent MPI-IO performance requirements and expectations”, Recent Advances in Parallel Virtual Machine and Message Passing Interface, EuroPVM/MPI 2008, Lecture Notes in Computer Science, 2008.
[4] William D. Gropp, Dries Kimpe, Robert Ross, Rajeev Thakur, Jesper Larsson Träff, “Performance Expectations and Guidelines for MPI Derived Datatypes”, Recent Advances in the Message Passing Interface, EuroMPI 2011, Lecture Notes in Computer Science, 2011.
[5] Sascha Hunold, Alexandra Carpen-Amarie, Felix Donatus Lübbe, Jesper Larsson Träff, “Automatic verification of self-consistent MPI performance guidelines”, Parallel Processing, Euro-Par 2016, Lecture Notes in Computer Science, 2016.
[6] Ralf Reussner, Peter Sanders, Jesper Larsson Träff, “SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI”, Journal of Scientific Programming, vol. 10, issue 1, pp. 55-65, 2002.
[7] WCE Rock Cluster, high-performance computing cluster, URL: http://wce.ac.in/it/landing-page.php?id=9.
[8] J. Liu, B. Chandrasekaran, W. Yu, J. Wu, D. Buntinas, S. Kini, P. Wyckoff, D. K. Panda, “Micro-Benchmark Performance Comparison of High-Speed Cluster Interconnects”, Proceedings of the 11th Symposium on High Performance Interconnects, 2003.
[9] Sascha Hunold, Alexandra Carpen-Amarie, “Reproducible MPI benchmarking is still not as easy as you think”, IEEE Transactions on Parallel and Distributed Systems, vol. 27, issue 12, 2016.
[10] Subhash Saini, Robert Ciotti, Brian T. N. Gunney, Thomas E. Spelce, Alice Koniges, Don Dossa, Panagiotis Adamidis, Rolf Rabenseifner, Sunil R. Tiyyagura, Matthias Mueller, “Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks”, Journal of Computer and System Sciences, vol. 74, issue 6, 2008.