
Crack Segmentation of Asphalt Pavement Images Based on Improved U-net

ZHANG Tao, WANG Jin, LIU Bin, XU Niuqi

Citation: ZHANG Tao, WANG Jin, LIU Bin, XU Niuqi. Crack Segmentation of Asphalt Pavement Images Based on Improved U-net[J]. Journal of Transport Information and Safety, 2023, 41(6): 90-99. doi: 10.3963/j.jssn.1674-4861.2023.06.010

Funding:

Beijing Natural Science Foundation (8232005)

Beijing Natural Science Foundation–Fengtai Rail Transit Frontier Research Joint Fund (L221026)

Author information:

    ZHANG Tao (b. 1998), M.S. candidate. Research interests: pavement inspection and maintenance management. E-mail: zhangtaowoo@emails.bjut.edu.cn

    Corresponding author:

    WANG Jin (b. 1984), Ph.D., associate professor. Research interests: appearance inspection and safety analysis of road infrastructure based on spatiotemporal data. E-mail: j.wang@bjut.edu.cn

  • CLC number: U416.2

Crack Segmentation of Asphalt Pavement Images Based on Improved U-net

  • Abstract: To improve the accuracy of image-based crack segmentation for asphalt pavement, a strip-attention-u-net (SAU) network is proposed on the basis of the U-net architecture. The network adopts ResNeSt50 as its feature-extraction backbone, which effectively captures both semantic information and local details in the image. A channel enhanced strip pooling (CESP) module is introduced at the encoder-decoder skip connections and a convolutional block attention (CBA) module at the decoder up-sampling stage; these modules reduce the feature loss caused by channel compression and better preserve crack features. A loss function combining Dice Loss and Focal Loss directs the model toward thin, elongated cracks that occupy few pixels and are hard to segment. To evaluate the SAU network, module ablation experiments were conducted on the public EdmCrack600 dataset and the self-built BJCrack600 dataset, and the network was compared with typical image segmentation models (FCN, PSPNet, DeepLabv3, U-net, Attention U-net, and U-net++). The results show that on EdmCrack600, SAU achieves the best crack segmentation, with a crack intersection over union (IoU) of 50.89% and an F1 score of 83.59%; on BJCrack600, SAU again performs best, with a crack IoU of 69.69% and an F1 score of 90.90%. The method can thus provide more intelligent and efficient decision support for road maintenance.
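The combined Dice + Focal loss described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the equal weighting of the two terms and the `alpha`/`gamma` values are assumptions, and a real training pipeline would compute these terms on framework tensors with autograd.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss (Lin et al. [31]): down-weights easy pixels via (1 - p_t)^gamma."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability assigned to the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))

def dice_loss(p, y, eps=1e-7):
    """Dice loss (Li et al. [30]): 1 - 2|P∩Y|/(|P|+|Y|), robust to the tiny
    crack/background pixel ratio."""
    inter = np.sum(p * y)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

def combined_loss(p, y):
    # Equal weighting of the two terms is an assumption, not taken from the paper.
    return focal_loss(p, y) + dice_loss(p, y)
```

A perfect prediction drives both terms toward zero, while a prediction that misses the thin crack pixels is penalized by both the Dice overlap term and the focal term.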

     

  • Figure 1. Structure of the SAU network

    Figure 2. The CESP module
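As context for the CESP module of Figure 2, plain strip pooling (Hou et al. [26]) can be sketched in NumPy as below. The channel-enhancement part of CESP is specific to the paper and is not reproduced here; this shows only the underlying strip-pooling idea of averaging along each spatial axis and gating the input.

```python
import numpy as np

def strip_pool(x):
    """Strip pooling on a (C, H, W) feature map: average along each spatial
    axis separately, broadcast back, and gate the input with a sigmoid
    (cf. Hou et al. [26]; a structural sketch, not the paper's CESP module)."""
    h_strip = x.mean(axis=2, keepdims=True)   # (C, H, 1): 1 x W pooling
    w_strip = x.mean(axis=1, keepdims=True)   # (C, 1, W): H x 1 pooling
    gate = 1.0 / (1.0 + np.exp(-(h_strip + w_strip)))  # broadcasts to (C, H, W)
    return x * gate
```

The long, thin pooling windows are what make this style of pooling suited to elongated structures such as cracks: a response anywhere along a row or column strengthens the gate for the whole strip.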

    Figure 3. Structure of the CBA module
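The CBA module of Figure 3 follows the channel-then-spatial attention pattern of CBAM (Woo et al. [29]). The sketch below shows only that sequential structure; the shared MLP and 7×7 convolution of the original CBAM are omitted, so this is not the paper's CBA module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cba_attention(x):
    """CBAM-style sequential attention on a (C, H, W) map (Woo et al. [29]):
    a channel gate from global average/max pooling, then a spatial gate.
    Structural sketch only; learned layers are omitted."""
    # Channel attention: pool each channel over the spatial dimensions.
    ch = sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))   # (C,)
    x = x * ch[:, None, None]
    # Spatial attention: pool each location over the channel dimension.
    sp = sigmoid(x.mean(axis=0) + x.max(axis=0))             # (H, W)
    return x * sp[None, :, :]
```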

    Figure 4. Segmentation results of different networks on the EdmCrack600 dataset

    Figure 5. Loss, F1, and MIoU curves of the SAU network during training

    Figure 6. Segmentation results of different networks on the self-built BJCrack600 dataset

    Table 1. Accuracy of the SAU network under different loss functions on the EdmCrack600 dataset (unit: %)

    Loss function          MIoU     IoU      F1
    Focal loss             62.00    24.72    69.64
    Dice loss
    Focal + Dice loss      75.17    50.89    83.59

    Table 2. Ablation analysis of the SAU network on the EdmCrack600 dataset (unit: %)

    Model                        MIoU     IoU      F1
    U-net*                       72.97    46.52    81.60
    U-net* + CESP                73.69    47.95    82.27
    U-net* + CBA                 74.22    49.02    82.75
    SAU (U-net* + CESP + CBA)    75.17    50.89    83.59

    Table 3. Ablation analysis of the SAU network on the BJCrack600 dataset (unit: %)

    Model                        MIoU     IoU      F1
    U-net*                       82.21    65.14    89.26
    U-net* + CESP                82.97    66.69    89.82
    U-net* + CBA                 84.33    69.32    90.78
    SAU (U-net* + CESP + CBA)    84.52    69.69    90.90

    Table 4. Comparison of typical image segmentation models on the EdmCrack600 dataset

    Model              Input size/(px×px)    IoU/%    MIoU/%    F1/%
    FCN                512×512               32.33    65.83     69.93
    PSPNet             512×512               27.35    63.33     67.83
    U-net*             512×512               46.52    72.97     81.60
    Attention U-net    512×512               49.38    74.40     82.91
    U-net++            512×512               48.55    74.00     82.54
    DeepLabv3          512×512               35.95    67.54     73.10
    SAU                512×512               50.89    75.17     83.59
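The IoU, MIoU, and F1 values in the tables follow the standard binary segmentation definitions; the sketch below states those definitions in NumPy. These are the assumed textbook definitions, not the authors' evaluation code.

```python
import numpy as np

def crack_metrics(pred, gt):
    """Crack-class IoU, mean IoU over {background, crack}, and F1,
    computed from binary masks via the confusion-matrix counts."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt))    # crack pixels correctly predicted
    fp = int(np.sum(pred & ~gt))   # background predicted as crack
    fn = int(np.sum(~pred & gt))   # crack predicted as background
    tn = int(np.sum(~pred & ~gt))  # background correctly predicted
    iou_crack = tp / (tp + fp + fn)
    iou_bg = tn / (tn + fp + fn)
    miou = (iou_crack + iou_bg) / 2.0
    f1 = 2.0 * tp / (2.0 * tp + fp + fn)
    return iou_crack, miou, f1
```

Because crack pixels are a tiny fraction of each image, the crack-class IoU is a much stricter score than MIoU, which explains the large gap between the two columns in Tables 4 and 5.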

    Table 5. Comparison of typical image segmentation models on the BJCrack600 dataset

    Model              Input size/(px×px)    IoU/%    MIoU/%    F1/%
    FCN                512×512               28.82    63.74     71.18
    PSPNet             512×512               3.37     50.81     52.87
    U-net*             512×512               66.86    83.09     89.90
    Attention U-net    512×512               67.92    83.62     90.28
    U-net++            512×512               67.40    83.35     90.09
    DeepLabv3          512×512               39.95    69.37     75.93
    SAU                512×512               69.69    84.52     90.90
  • [1] SHA A M, TONG Z, GAO J. Recognition and measurement of road surface diseases based on convolutional neural networks[J]. China Journal of Highway and Transport, 2018, 31(1): 1-10. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGL201801002.htm
    [2] HANG B A. Asphalt pavement crack segmentation method based on improved K-means algorithm[J]. Journal of Highway and Transportation Research and Development, 2023, 40(4): 1-8. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-GLJK202304001.htm
    [3] SHEN Z Q, PENG Y H, SHU N. An SVM-based cross-scale damage recognition method for pavement images[J]. Geomatics and Information Science of Wuhan University, 2013, 38(8): 993-997. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-WHCH201308025.htm
    [4] ZHANG J, SHA A M, SUN C Y, et al. Automatic pavement crack recognition based on the phase grouping method[J]. China Journal of Highway and Transport, 2008(2): 39-42. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGL200802007.htm
    [5] OLIVEIRA H, CORREIA P L. CrackIT: an image processing toolbox for crack detection and characterization[C]. 2014 IEEE International Conference on Image Processing (ICIP), Paris, France: IEEE, 2014.
    [6] ZHANG A, WANG K C, LI B, et al. Automated pixel-level pavement crack detection on 3D asphalt surfaces using a deep-learning network[J]. Computer-Aided Civil and Infrastructure Engineering, 2017, 32(10): 805-819. doi: 10.1111/mice.12297
    [7] ZHANG A, WANG K C, FEI Y, et al. Deep learning-based fully automated pavement crack detection on 3D asphalt surfaces with an improved CrackNet[J]. Journal of Computing in Civil Engineering, 2018, 32(5): 04018041. doi: 10.1061/(ASCE)CP.1943-5487.0000775
    [8] FEI Y, WANG K C, ZHANG A, et al. Pixel-level cracking detection on 3D asphalt pavement images through deep-learning-based CrackNet-V[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(1): 273-284. doi: 10.1109/TITS.2019.2891167
    [9] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651. doi: 10.1109/TPAMI.2016.2572683
    [10] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs[OL]. (2016-06-07)[2022-12-11]. http://arxiv.org/abs/1412.7062.
    [11] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(4): 834-848.
    [12] CHEN L C, PAPANDREOU G, SCHROFF F, et al. Rethinking atrous convolution for semantic image segmentation[OL]. (2017-12-05)[2022-12-11]. http://arxiv.org/abs/1706.05587.
    [13] CHEN L C, ZHU Y, PAPANDREOU G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[OL]. (2018-08-22)[2022-12-11]. http://arxiv.org/abs/1802.02611.
    [14] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[OL]. (2015-05-18)[2022-12-11]. http://arxiv.org/abs/1505.04597.
    [15] OKTAY O, SCHLEMPER J, FOLGOC L L, et al. Attention U-Net: learning where to look for the pancreas[OL]. (2018-05-20)[2023-08-27]. https://arxiv.org/abs/1804.03999.
    [16] ZHOU Z, RAHMAN SIDDIQUEE M M, TAJBAKHSH N, et al. UNet++: a nested U-Net architecture for medical image segmentation[C]. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, Granada, Spain: Springer, 2018.
    [17] YANG X, LI H, YU Y, et al. Automatic pixel-level crack detection and measurement using fully convolutional network[J]. Computer-Aided Civil and Infrastructure Engineering, 2018, 33(12): 1090-1109. doi: 10.1111/mice.12412
    [18] ZHANG Z H, DENG Y X, ZHANG X X. Asphalt pavement disease extraction and classification method based on improved SegNet[J]. Journal of Transport Information and Safety, 2022, 40(3): 127-135. (in Chinese) doi: 10.3963/j.jssn.1674-4861.2022.03.013
    [19] SHI M Y, GAO J C. Research on pavement crack detection based on an improved U-Net algorithm[J]. Automation & Instrumentation, 2022, 37(10): 52-55, 67. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-ZDHY202210011.htm
    [20] GAO X, TONG B. MRA-UNet: balancing speed and accuracy in road crack segmentation network[J]. Signal, Image and Video Processing, 2023, 17(5): 2093-2100. doi: 10.1007/s11760-022-02423-9
    [21] HUI B, LI Y J. Pavement crack detection method based on an improved U-type neural network[J]. Journal of Transport Information and Safety, 2023, 41(1): 105-114, 131. (in Chinese) doi: 10.3963/j.jssn.1674-4861.2023.01.011
    [22] LIU C, ZHU C, XIA X, et al. FFEDN: feature fusion encoder decoder network for crack detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(9): 15546-15557. doi: 10.1109/TITS.2022.3141827
    [23] MEI Q, GÜL M. A cost effective solution for pavement crack inspection using cameras and deep neural networks[J]. Construction and Building Materials, 2020, 256: 119397. doi: 10.1016/j.conbuildmat.2020.119397
    [24] ZHONG J, ZHU J, HUYAN J, et al. Multi-scale feature fusion network for pixel-level pavement distress detection[J]. Automation in Construction, 2022, 141: 104436. doi: 10.1016/j.autcon.2022.104436
    [25] TANG G T, YIN C, WANG S P, et al. Asphalt pavement crack recognition algorithm based on improved GoogLeNet[J]. Intelligent Computer and Applications, 2023, 13(3): 202-206. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-DLXZ202303033.htm
    [26] HOU Q, ZHANG L, CHENG M M, et al. Strip pooling: rethinking spatial pooling for scene parsing[OL]. (2020-03-20)[2023-02-23]. http://arxiv.org/abs/2003.13328.
    [27] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[OL]. (2018-05-16)[2023-03-26]. https://openaccess.thecvf.com/content_cvpr_2018/html/Hu_Squeeze-and-Excitation_Networks_CVPR_2018_paper.html.
    [28] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA: IEEE, 2016.
    [29] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]. The European Conference on Computer Vision, Munich, Germany: Springer, 2018.
    [30] LI X, SUN X, MENG Y, et al. Dice loss for data-imbalanced NLP tasks[OL]. (2020-08-29)[2023-06-21]. https://arxiv.org/abs/1911.02855v3.
    [31] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[OL]. (2018-02-07)[2023-06-28]. http://arxiv.org/abs/1708.02002.
    [32] RUSSELL B C, TORRALBA A, MURPHY K P, et al. LabelMe: a database and web-based tool for image annotation[J]. International Journal of Computer Vision, 2008, 77(1): 157-173.
    [33] WU Y, CAO W, SAHIN S, et al. Experimental characterizations and analysis of deep learning frameworks[C]. 2018 IEEE International Conference on Big Data, Seattle, USA: IEEE, 2018.
Figures (6) / Tables (5)
Publication history
  • Received: 2023-07-03
  • Available online: 2024-04-03
