Open Access Article

The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF

Shivali Gangwar1, B. Athiyaman2, Preveen Kumar D.3

  1. National Centre for Medium Range Weather Forecasting, Noida, India.
  2. National Centre for Medium Range Weather Forecasting, Noida, India.
  3. National Centre for Medium Range Weather Forecasting, Noida, India.

Section: Research Paper, Product Type: Journal Paper
Volume-13, Issue-3, Page no. 1-8, Mar-2025

CrossRef-DOI: https://doi.org/10.26438/ijcse/v13i3.18

Online published on Mar 31, 2025

Copyright © Shivali Gangwar, B. Athiyaman, Preveen Kumar D. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper

IEEE Style Citation: Shivali Gangwar, B. Athiyaman, Preveen Kumar D., “The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF,” International Journal of Computer Sciences and Engineering, Vol.13, Issue.3, pp.1-8, 2025.

MLA Style Citation: Shivali Gangwar, B. Athiyaman, Preveen Kumar D. "The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF." International Journal of Computer Sciences and Engineering 13.3 (2025): 1-8.

APA Style Citation: Shivali Gangwar, B. Athiyaman, Preveen Kumar D. (2025). The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF. International Journal of Computer Sciences and Engineering, 13(3), 1-8.

BibTex Style Citation:
@article{Gangwar_2025,
author = {Shivali Gangwar and B. Athiyaman and Preveen Kumar D.},
title = {The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {3 2025},
volume = {13},
issue = {3},
month = {3},
year = {2025},
issn = {2347-2693},
pages = {1-8},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=5776},
doi = {https://doi.org/10.26438/ijcse/v13i3.18},
publisher = {IJCSE, Indore, INDIA},
}

RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v13i3.18
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=5776
TI - The High-Performance Linpack (HPL) Evaluation on MIHIR High Performance Computing Facility at NCMRWF
T2 - International Journal of Computer Sciences and Engineering
AU - Shivali Gangwar
AU - B. Athiyaman
AU - Preveen Kumar D.
PY - 2025
DA - 2025/03/31
PB - IJCSE, Indore, INDIA
SP - 1-8
IS - 3
VL - 13
SN - 2347-2693
ER -


Abstract

The National Centre for Medium Range Weather Forecasting (NCMRWF) operates the MIHIR High Performance Computing (HPC) facility, with a total computing capacity of 2.8 petaflops, to run Numerical Weather Prediction (NWP) models and enable accurate and timely weather forecasting. These models require computations at the PFLOPS (peta floating point operations per second) scale. The HPC nodes are interconnected by the high-speed, low-latency Cray Aries network. High Performance Linpack (HPL) version 2.3 was compiled and installed on the system for this study. The purpose of running HPL is to demonstrate the current computing performance of the HPC system and to assess its efficiency by comparing the measured performance (Rmax) against the theoretical peak performance (Rpeak, derived from the system specifications). We present a performance evaluation of the HPL benchmark on MIHIR, conducting a detailed analysis of HPL parameters to optimize performance. The aim is to identify the best-optimized parameter values for MIHIR and to determine the maximum achievable performance of the compute nodes, utilizing up to 300 nodes available for research.
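
The efficiency assessment described above compares the measured Rmax with the theoretical Rpeak, and HPL tuning typically starts from a problem size N sized to the available memory. The following Python sketch illustrates these back-of-the-envelope calculations. It is not taken from the paper: the node count of 300 comes from the abstract, while the per-node core count, clock rate, FLOPs per cycle, and memory capacity are illustrative assumptions rather than MIHIR's actual specifications or measured results.

Illustrative calculation (Python sketch):

# Minimal sketch, not from the paper: all hardware figures below are
# illustrative assumptions, except the 300 research nodes from the abstract.
nodes = 300              # research nodes mentioned in the abstract
cores_per_node = 36      # assumption: illustrative value
clock_ghz = 2.1          # assumption: illustrative value
flops_per_cycle = 16     # assumption: e.g. AVX2 with dual FMA units

# Theoretical peak: nodes x cores x clock (GHz) x FLOPs per cycle, in GFLOPS.
rpeak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
print(f"Rpeak ~ {rpeak_gflops / 1e6:.3f} PFLOPS")

# Efficiency is the measured HPL result divided by the theoretical peak.
rmax_gflops = 0.75 * rpeak_gflops          # hypothetical Rmax, not a paper result
print(f"Efficiency = Rmax/Rpeak = {rmax_gflops / rpeak_gflops:.1%}")

# Rule-of-thumb HPL problem size: fill roughly 80% of aggregate memory with
# the N x N double-precision matrix (8 bytes per element), then round N down
# to a multiple of the block size NB.
mem_per_node_gib = 128                     # assumption: illustrative value
total_bytes = nodes * mem_per_node_gib * 2**30
n = int((0.80 * total_bytes / 8) ** 0.5)
nb = 192                                   # a commonly tried HPL block size
n = (n // nb) * nb
print(f"Suggested HPL problem size N ~ {n}")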

Key-Words / Index Term

HPL, HPC, Aries interconnect, MIHIR, NCMRWF, PFLOPS
