Fringe pattern analysis using deep learning
Advanced Photonics, Vol. 1, Issue 2, 025001 (2019)
Paper Information

Received: Aug. 22, 2018
Accepted: Jan. 8, 2019
Posted: Mar. 14, 2019
Published Online: Mar. 14, 2019
Author emails: Qian Chen (chenqian@njust.edu.cn), Chao Zuo (zuochao@njust.edu.cn)
DOI: 10.1117/1.AP.1.2.025001

Citation: Shijie Feng, Qian Chen, Guohua Gu, Tianyang Tao, Liang Zhang, Yan Hu, Wei Yin, and Chao Zuo, "Fringe pattern analysis using deep learning," Advanced Photonics 1(2), 025001 (2019).

Category: Letters

Fig. 1. Flowchart of the proposed method where two convolutional networks (CNN1 and CNN2) and the arctangent function are used together to determine the phase distribution. For CNN1 (in red), the input is the fringe image I(x,y), and the output is the estimated background image A(x,y). For CNN2 (in green), the inputs are the fringe image I(x,y) and the background image A(x,y) predicted by CNN1, and the outputs are the numerator M(x,y) and the denominator D(x,y). The numerator and denominator are then fed into the arctangent function to calculate the phase ϕ(x,y).
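
The last stage of this pipeline is a pointwise arctangent of the two CNN2 outputs. As a minimal sketch (assuming the predicted maps are available as NumPy arrays; the array names and sizes below are illustrative placeholders, not values from the paper):

    import numpy as np

    # M and D stand in for the numerator and denominator maps predicted
    # by CNN2 (placeholder random arrays instead of real network outputs).
    M = np.random.randn(480, 640).astype(np.float32)
    D = np.random.randn(480, 640).astype(np.float32)

    # The four-quadrant arctangent returns the wrapped phase in (-pi, pi];
    # a plain arctan(M / D) would lose quadrant information and divide by
    # zero wherever D vanishes.
    phi = np.arctan2(M, D)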



Fig. 2. Schematic of CNN1, which is composed of convolutional layers and several residual blocks.
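
The caption gives only the block-level structure of CNN1, so the following PyTorch sketch of "convolutional layers plus several residual blocks" is an assumption: the channel width, kernel size, and number of residual blocks are placeholders, not values from the paper.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions with an identity skip connection."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, x):
            return torch.relu(x + self.body(x))

    class CNN1(nn.Module):
        """Maps a fringe image I(x,y) to a background estimate A(x,y)."""
        def __init__(self, channels=64, num_blocks=4):
            super().__init__()
            self.head = nn.Conv2d(1, channels, 3, padding=1)
            self.blocks = nn.Sequential(
                *[ResidualBlock(channels) for _ in range(num_blocks)]
            )
            self.tail = nn.Conv2d(channels, 1, 3, padding=1)

        def forward(self, I):
            return self.tail(self.blocks(torch.relu(self.head(I))))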



Fig. 3. Schematic of CNN2, which is more sophisticated than CNN1 and further includes two pooling layers, an upsampling layer, a concatenation block, and a linearly activated convolutional layer.
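
A hedged PyTorch sketch of CNN2 under the same caveat: the caption names two pooling layers, an upsampling layer, a concatenation block, and a linearly activated convolution, but the exact wiring below (a full-resolution branch plus a twice-pooled branch that is upsampled and concatenated back) is an assumption.

    import torch
    import torch.nn as nn

    class CNN2(nn.Module):
        """Maps (I, A) to the numerator M and denominator D."""
        def __init__(self, channels=64):
            super().__init__()
            # Full-resolution branch on the stacked inputs I and A.
            self.full = nn.Sequential(
                nn.Conv2d(2, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            # Low-resolution branch: two 2x2 poolings, then x4 upsampling
            # (so input height and width must be divisible by 4).
            self.low = nn.Sequential(
                nn.MaxPool2d(2),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=4, mode="bilinear",
                            align_corners=False),
            )
            # Linearly activated output convolution: no nonlinearity, so
            # M and D can take negative values as the arctangent requires.
            self.out = nn.Conv2d(2 * channels, 2, 3, padding=1)

        def forward(self, I, A):
            x = self.full(torch.cat([I, A], dim=1))
            x = torch.cat([x, self.low(x)], dim=1)  # concatenation block
            MD = self.out(x)
            return MD[:, 0:1], MD[:, 1:2]           # M(x,y), D(x,y)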



Fig. 4. Testing the trained networks on a scene not present in the training set. (a) Input fringe image I(x,y), (b) background image A(x,y) predicted by CNN1, (c) and (d) numerator M(x,y) and denominator D(x,y) estimated by CNN2, (e) phase ϕ(x,y) calculated from (c) and (d).
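
Chaining the pieces at test time might look like the sketch below; it reuses the hypothetical CNN1 and CNN2 classes from the sketches above and a random tensor in place of a real unseen fringe image.

    import torch

    cnn1, cnn2 = CNN1().eval(), CNN2().eval()  # trained weights would be loaded here
    I = torch.rand(1, 1, 480, 640)             # placeholder for an unseen fringe image

    with torch.no_grad():
        A = cnn1(I)            # step 1: background estimate
        M, D = cnn2(I, A)      # step 2: numerator and denominator
    phi = torch.atan2(M, D)    # step 3: wrapped phase, as in Fig. 1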



Fig. 5. Comparison of the phase error of different methods: (a) Fourier transform (FT), (b) windowed Fourier transform (WFT), (c) our method, and (d) magnified views of the phase error for two selected complex regions.
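
For a comparison like Fig. 5, the per-pixel phase error is usually taken modulo 2π so that values near the ±π wrap boundary are not counted as full-period jumps; a small NumPy sketch with placeholder maps (whether the paper summarizes its errors exactly this way is an assumption):

    import numpy as np

    # Placeholder wrapped-phase maps for a tested method and the reference.
    phi_est = np.random.uniform(-np.pi, np.pi, (480, 640))
    phi_gt = np.random.uniform(-np.pi, np.pi, (480, 640))

    # Wrap the difference back into (-pi, pi] before summarizing it.
    err = np.angle(np.exp(1j * (phi_est - phi_gt)))
    mae = np.abs(err).mean()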



Fig. 6. Comparison of the 3-D reconstruction results for different methods: (a) FT, (b) WFT, (c) our method, and (d) ground truth obtained by 12-step phase-shifting (PS) profilometry.
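
The ground truth in Fig. 6 comes from 12-step phase shifting; the standard N-step estimator, sketched here with placeholder images and assuming equally spaced shifts of 2πn/N, is:

    import numpy as np

    N = 12
    I = np.random.rand(N, 480, 640)     # placeholder phase-shifted fringe images
    n = np.arange(N).reshape(N, 1, 1)

    # Least-squares N-step phase estimate: synchronous detection with
    # sine and cosine references, then a four-quadrant arctangent.
    num = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    phi_gt = np.arctan2(num, den)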



Fig. 7. Quantitative analysis of the reconstruction accuracy of the proposed method. (a) Measured objects: a pair of standard spheres and (b) 3-D reconstruction result showing the measurement accuracy.

