Open Access Article

Techniques of Parallelization: A Survey

Vijay Kumar, Alka Singh

Section:Survey Paper, Product Type: Journal Paper
Volume-7 , Issue-7 , Page no. 150-153, Jul-2019

CrossRef-DOI:   https://doi.org/10.26438/ijcse/v7i7.150153

Online published on Jul 31, 2019

Copyright © Vijay Kumar, Alka Singh . This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


How to Cite this Paper


IEEE Style Citation: Vijay Kumar, Alka Singh, "Techniques of Parallelization: A Survey," International Journal of Computer Sciences and Engineering, Vol.7, Issue.7, pp.150-153, 2019.

MLA Style Citation: Vijay Kumar, Alka Singh. "Techniques of Parallelization: A Survey." International Journal of Computer Sciences and Engineering 7.7 (2019): 150-153.

APA Style Citation: Vijay Kumar, Alka Singh (2019). Techniques of Parallelization: A Survey. International Journal of Computer Sciences and Engineering, 7(7), 150-153.

BibTex Style Citation:
@article{Kumar_2019,
author = {Vijay Kumar and Alka Singh},
title = {Techniques of Parallelization: A Survey},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {July 2019},
volume = {7},
issue = {7},
month = {jul},
year = {2019},
issn = {2347-2693},
pages = {150-153},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4736},
doi = {10.26438/ijcse/v7i7.150153},
publisher = {IJCSE, Indore, INDIA}
}

RIS Style Citation:
TY  - JOUR
DO  - 10.26438/ijcse/v7i7.150153
UR  - https://www.ijcseonline.org/full_paper_view.php?paper_id=4736
TI  - Techniques of Parallelization: A Survey
T2  - International Journal of Computer Sciences and Engineering
AU  - Kumar, Vijay
AU  - Singh, Alka
PY  - 2019
DA  - 2019/07/31
PB  - IJCSE, Indore, INDIA
SP  - 150
EP  - 153
IS  - 7
VL  - 7
SN  - 2347-2693
ER  - 


Abstract

Parallel computing enables us to utilize hardware resources efficiently and to solve computationally intensive problems by dividing them into sub-problems, often using a shared-memory approach, and solving those sub-problems simultaneously. Emerging technologies rely on parallel computing because they involve complex simulations of real-world situations that are both extremely computation-intensive and time-consuming. Parallel programming is gaining significance due to the limitations of the hardware: researchers are trying to raise memory and bus speeds to match the processor's speed. Generating parallel code requires skill and a particular technique of parallelization. Several parallelization techniques exist, and one must be chosen judiciously for a particular task and architecture. This paper provides a brief survey of existing parallelization procedures. New hybrid techniques need to be developed that combine the technical and architectural benefits of two or more parallel models, and a thorough revision of traditional parallelization techniques is required to derive such new techniques.
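The divide-and-combine, shared-memory pattern the abstract describes can be illustrated with a short sketch. The example below is an assumption for illustration only (the paper surveys techniques rather than any one API): Python's standard `multiprocessing` module splits a summation into sub-problems, each worker writes its partial result into a shared-memory array, and the parent process combines the slots.

```python
import multiprocessing as mp

def partial_sum(idx, lo, hi, out):
    """Solve one sub-problem: sum the integers in [lo, hi) into a shared slot."""
    out[idx] = sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Divide [0, n) into chunks, solve them simultaneously, combine results."""
    out = mp.Array('q', workers)          # shared-memory array of int64 slots
    step = n // workers
    procs = []
    for i in range(workers):
        lo, hi = i * step, ((i + 1) * step if i < workers - 1 else n)
        p = mp.Process(target=partial_sum, args=(i, lo, hi, out))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()                          # wait until every sub-problem is done
    return sum(out[:])

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The choice of sub-problem granularity (here, one chunk per worker) is exactly the kind of task- and architecture-dependent decision the survey argues must be made shrewdly.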

Key-Words / Index Term

Shared memory; Parallel programming; Parallelization techniques

References

[1] Prema, S., and R. Jehadeesan, "Analysis of Parallelization Techniques and Tools," International Journal of Information and Computation Technology 3 (2013): 471-478.
[2] Jin, Ruoming, and Gagan Agrawal, "Shared memory parallelization of data mining algorithms: Techniques, programming interface, and performance," Proceedings of the 2002 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics, 2002.
[3] Hennessy, John L., and David A. Patterson, "Computer Architecture: A Quantitative Approach," Morgan Kaufmann, San Francisco, 2nd edition, 1996.
[4] Yu, Hao, and Lawrence Rauchwerger, "Adaptive reduction parallelization techniques," ACM International Conference on Supercomputing 25th Anniversary Volume, ACM, 2014.
[5] Chapman, Barbara, and Hans Zima, "Supercompilers for Parallel and Vector Computers," Addison-Wesley, 1990.
[6] Zhang, Ye, Lawrence Rauchwerger, and Josep Torrellas, "Hardware for speculative run-time parallelization in distributed shared-memory multiprocessors," Proceedings of the Fourth International Symposium on High-Performance Computer Architecture, IEEE, 1998.
[7] Rauchwerger, Lawrence, and David A. Padua, "The LRPD test: Speculative run-time parallelization of loops with privatization and reduction parallelization," IEEE Transactions on Parallel and Distributed Systems 10.2 (1999): 160-180.
[8] Yu, Hao, and Lawrence Rauchwerger, "Run-time parallelization overhead reduction techniques," Proceedings of the 9th International Conference on Compiler Construction, Berlin, Germany, 2000.
[9] Banerjee, Utpal, et al., "Automatic program parallelization," Proceedings of the IEEE 81.2 (1993): 211-243.
[10] Behr, P.M., W.K. Giloi, and H. Mühlenbein, "SUPRENUM: The German supercomputer architecture—Rationale and concepts," Proceedings of the 1986 International Conference on Parallel Processing, 1986.
[11] Flynn, M.J., "Some computer organizations and their effectiveness," IEEE Transactions on Computers 21.9 (1972): 948-960.
[12] Trottenberg, U., "SUPRENUM: an MIMD multiprocessor system for multi-level scientific computing," in: W. Händler et al., eds., CONPAR 86: Conference on Algorithms and Hardware for Parallel Processing, Lecture Notes in Computer Science 237, Springer, Berlin, 1986, pp. 48-52.
[13] Zima, H.P., H.-J. Bast, M. Gerndt, and P.J. Hoppen, "Semi-automatic parallelization of Fortran programs," in: W. Händler et al., eds., CONPAR 86: Conference on Algorithms and Hardware for Parallel Processing, Lecture Notes in Computer Science 237, Springer, Berlin, 1986, pp. 287-294.
[14] Zima, H.P., H.-J. Bast, M. Gerndt, and P.J. Hoppen, "SUPERB: The SUPRENUM Parallelizer Bonn," Research Report SUPRENUM 861203, Bonn University, 1986.
[15] Ruchkin, Vladimir, et al., "Frame model of a compiler of cluster parallelism for embedded computing systems," 2017 6th Mediterranean Conference on Embedded Computing (MECO), IEEE, 2017.
[16] Shin, Wongyu, et al., "Rank-Level Parallelism in DRAM," IEEE Transactions on Computers 66.7 (2017): 1274-1280.
[17] Liu, De-feng, Guo-teng Pan, and Lun-guo Xie, "Understanding how memory-level parallelism affects the processor's performance," 2011 IEEE 3rd International Conference on Communication Software and Networks, IEEE, 2011.
[18] Cheng, Shaoyi, et al., "Exploiting memory-level parallelism in reconfigurable accelerators," 2012 IEEE 20th International Symposium on Field-Programmable Custom Computing Machines, IEEE, 2012.
[19] Kumar, K. Ashwin, et al., "Hybrid approach for parallelization of sequential code with function level and block level parallelization," International Symposium on Parallel Computing in Electrical Engineering (PARELEC'06), IEEE, 2006.
[20] Gasper, Pete, et al., "Automatic parallelization of sequential C code," Midwest Instruction and Computing Symposium, Duluth, MN, USA, 2003.
[21] Wall, David W., "Limits of instruction-level parallelism," Proceedings of the 4th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 1991.