Recent Trends in Image 2D to 3D: Binocular Depth Cues
Disha Mohini Pathak, Swati Chauhan, Dayanand
Section:Research Paper, Product Type: Journal Paper
Volume-6, Issue-7, Page no. 1074-1081, Jul-2018
CrossRef-DOI: https://doi.org/10.26438/ijcse/v6i7.10741081
Online published on Jul 31, 2018
Copyright © Disha Mohini Pathak, Swati Chauhan, Dayanand. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: Disha Mohini Pathak, Swati Chauhan, Dayanand, “Recent Trends in Image 2D to 3D: Binocular Depth Cues,” International Journal of Computer Sciences and Engineering, Vol.6, Issue.7, pp.1074-1081, 2018.
BibTeX Style Citation:
@article{Pathak_2018,
  author    = {Disha Mohini Pathak and Swati Chauhan and Dayanand},
  title     = {Recent Trends in Image 2D to 3D: Binocular Depth Cues},
  journal   = {International Journal of Computer Sciences and Engineering},
  volume    = {6},
  number    = {7},
  month     = jul,
  year      = {2018},
  issn      = {2347-2693},
  pages     = {1074-1081},
  url       = {https://www.ijcseonline.org/full_paper_view.php?paper_id=2564},
  doi       = {10.26438/ijcse/v6i7.10741081},
  publisher = {IJCSE, Indore, INDIA},
}
Abstract
As 3D images and videos attract increasing attention, 2D-to-3D conversion has become an important concern for researchers, and converting 2D images into 3D images is a growing area of interest. Many researchers have proposed different methods to bridge this gap. This paper surveys various methodologies and recent trends in converting images from 2D to 3D. We focus on binocular depth-cue techniques such as binocular disparity, motion, focus, defocus, and silhouette, comparing them on several aspects and considerations. The paper also discusses the strengths and limitations of these algorithms to give a broader view of which technique to use in which case.
Key-Words / Index Term
Binocular Depth Cues, 2D to 3D Images, Strength, Limitations, Conversion Algorithms.
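For a rectified stereo pair, the binocular-disparity cue discussed in the paper reduces to simple triangulation: depth is inversely proportional to horizontal disparity, Z = f·B/d, where f is the focal length in pixels and B the camera baseline. A minimal sketch of this relation (the focal length, baseline, and disparity values below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d for a rectified stereo pair.

    disparity_px : per-pixel horizontal shift between left and right views
    focal_px     : camera focal length in pixels
    baseline_m   : distance between the two camera centres in metres
    """
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)      # zero disparity -> point at infinity
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z

# Closer objects produce larger disparity, hence smaller depth:
# d = 64 px -> Z = 1.25 m, d = 16 px -> Z = 5.0 m for f = 800 px, B = 0.1 m.
print(depth_from_disparity([64.0, 16.0], focal_px=800.0, baseline_m=0.1))
```

In practice the disparity map itself must first be estimated by stereo matching, which is where the surveyed algorithms differ; the triangulation step above is common to all of them.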