
Accepted: Jan. 8, 2019

Posted: Mar. 14, 2019

Published Online: Mar. 14, 2019

Author emails: Chen Qian (chenqian@njust.edu.cn), Zuo Chao (zuochao@njust.edu.cn)


Shijie Feng, Qian Chen, Guohua Gu, Tianyang Tao, Liang Zhang, Yan Hu, Wei Yin, Chao Zuo. Fringe pattern analysis using deep learning[J]. Advanced Photonics, 2019, 1(2): 025001

Category: Letters

Fig. 1. Flowchart of the proposed method, in which two convolutional networks (CNN1 and CNN2) and the arctangent function are used together to determine the phase distribution. For CNN1 (in red), the input is the fringe image $I(x,y)$ and the output is the estimated background image $A(x,y)$. For CNN2 (in green), the inputs are the fringe image $I(x,y)$ and the background image $A(x,y)$ predicted by CNN1, and the outputs are the numerator $M(x,y)$ and the denominator $D(x,y)$. The numerator and denominator are then fed into the arctangent function to calculate the phase $\phi(x,y)$.
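The final arctangent step of the flowchart above can be sketched in a few lines. This is a minimal NumPy illustration: the numerator $M$ and denominator $D$ here are synthesized from a known phase rather than predicted by CNN2, and the four-quadrant `arctan2` stands in for the arctangent function in the figure.

```python
import numpy as np

# Synthetic ground-truth phase over one fringe period.
# In the method above, M and D would instead come from CNN2.
phi_true = np.linspace(-np.pi, np.pi, 100, endpoint=False)
M = np.sin(phi_true)  # numerator  M(x, y)
D = np.cos(phi_true)  # denominator D(x, y)

# Wrapped phase from the four-quadrant arctangent, as in the flowchart.
phi = np.arctan2(M, D)
```

Using `arctan2(M, D)` rather than `arctan(M / D)` recovers the phase over the full $(-\pi, \pi]$ range and avoids division by zero where $D(x,y)$ vanishes.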

Fig. 2. Schematic of CNN1, which is composed of convolutional layers and several residual blocks.

Fig. 3. Schematic of CNN2, which is more sophisticated than CNN1 and further includes two pooling layers, an upsampling layer, a concatenation block, and a linearly activated convolutional layer.
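The residual blocks in Figs. 2 and 3 follow the standard skip-connection pattern, in which a block's input is added to the output of its convolutional path. The following is an illustrative NumPy sketch of that idea only, not the paper's architecture: the single 3x3 "same" convolution and ReLU stand in for the block's trained layers.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 3x3 'same' convolution (zero padding) on a 2-D array."""
    k = kernel.shape[0]
    p = k // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def residual_block(x, kernel):
    """y = x + ReLU(conv(x)): the skip connection means the
    convolutional path only has to learn a residual correction."""
    return x + np.maximum(conv2d_same(x, kernel), 0.0)
```

With an all-zero kernel the block reduces to the identity, which is what makes deep stacks of such blocks easy to train.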

Fig. 4. Testing the trained networks on a scene not present in the training phase. (a) Input fringe image $I(x,y)$; (b) background image $A(x,y)$ predicted by CNN1; (c) and (d) numerator $M(x,y)$ and denominator $D(x,y)$ estimated by CNN2; (e) phase $\phi(x,y)$ calculated from (c) and (d).

Fig. 5. Comparison of the phase error of different methods: (a) Fourier transform (FT), (b) windowed Fourier transform (WFT), (c) our method, and (d) magnified views of the phase error for two selected complex regions.