<?xml version="1.0" encoding="utf-8"?>
<article xsi:noNamespaceSchemaLocation="http://jats.nlm.nih.gov/publishing/1.1/xsd/JATS-journalpublishing1-mathml3.xsd" dtd-version="1.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">ASDS</journal-id>
      <journal-title-group><journal-title>Applied Statistics and Data Science</journal-title></journal-title-group>
      <issn pub-type="ppub">3066-8433</issn>
      <issn pub-type="epub">3066-8441</issn>
      <publisher><publisher-name>Art and Design</publisher-name></publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.61369/ASDS.2025050019</article-id>
      <article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories>
      <title-group><article-title>Research on a Tensor Neural Network Model with an L2 Penalty and Its Applications</article-title></title-group>
      <contrib-group>
        <contrib contrib-type="author"><string-name>向科聿</string-name></contrib>
        <contrib contrib-type="author"><string-name>黄靖翔</string-name></contrib>
        <contrib contrib-type="author"><string-name>于卓熙</string-name></contrib>
        <contrib contrib-type="author"><string-name>孙丛婷</string-name></contrib>
      </contrib-group>
      <pub-date pub-type="publication-year"><year>2025</year></pub-date>
      <volume>1</volume>
      <issue>5</issue>
      <history><date date-type="pub"><day>20</day><month>07</month><year>2025</year></date></history>
      <self-uri xlink:href="https://artdesignp.com/journal/ASDS/1/5/10.61369/ASDS.2025050019"/>
      <abstract>
        <p>A traditional convolutional neural network consists of convolutional layers, pooling layers, a flattening layer, and fully connected layers. To preserve the original linear structure while reducing overfitting and improving generalization, this paper adds an L2 penalty term during training of the tensor-train regression network layer, improving the generalization and stability of the model. The method is applied to three case studies; the experimental results show that the tensor-train network with the penalty term achieves a better mean squared error (MSE) on the test set than the network without it, and the robustness of the model is improved. Finally, we apply the model to predicting breast cancer from chest cancer CT scans; the model trains quickly, which indicates that the proposed method is effective.</p>
      </abstract>
      <kwd-group>
        <kwd>machine learning</kwd>
        <kwd>tensor neural network</kwd>
        <kwd>tensor-train (TT) decomposition</kwd>
        <kwd>convolutional neural network</kwd>
        <kwd>medical image processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body/>
  <back>
    <ref-list>
      <ref id="B1"><label>1</label><mixed-citation publication-type="journal">Si Y, Zhang Y, Cai Y, et al. An efficient tensor regression for high-dimensional data[J]. arXiv e-prints, 2022, arXiv:2205.13734.</mixed-citation></ref>
      <ref id="B2"><label>2</label><mixed-citation publication-type="journal">El Sakka M, Mothe J, Ivanovici M. Images and CNN applications in smart agriculture[J]. European Journal of Remote Sensing, 2024, 57(1): 2352386.</mixed-citation></ref>
      <ref id="B3"><label>3</label><mixed-citation publication-type="confproc">Zhang D, Xiao B, Gao C, et al. Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming[C]//Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024). 2024: 127-136.</mixed-citation></ref>
      <ref id="B4"><label>4</label><mixed-citation publication-type="journal">An S, Oh T J, Kim S W, et al. Self-clustered GAN for precipitation nowcasting[J]. Scientific Reports, 2024, 14: 9755.</mixed-citation></ref>
      <ref id="B5"><label>5</label><mixed-citation publication-type="journal">Turchetti C, Falaschetti L. Tensor PCA from basis in tensor space[J]. arXiv e-prints, 2023, arXiv:2305.02803.</mixed-citation></ref>
      <ref id="B6"><label>6</label><mixed-citation publication-type="journal">Tomov M, Sadinov S, Arsov B. Impedance Matching Optimization of RF Networks[J]. Engineering Proceedings, 2024, 70(1): 46.</mixed-citation></ref>
      <ref id="B7"><label>7</label><mixed-citation publication-type="journal">Dereich S, Jentzen A. Convergence rates for the Adam optimizer[J]. arXiv e-prints, 2024, arXiv:2407.21078.</mixed-citation></ref>
      <ref id="B8"><label>8</label><mixed-citation publication-type="journal">Li Z, Li B, Jahng S G, et al. Improved VGG algorithm for visual prosthesis image recognition[J]. IEEE Access, 2024, 12: 45727-45739.</mixed-citation></ref>
      <ref id="B9"><label>9</label><mixed-citation publication-type="journal">Li X, Marcus D, Russell J, et al. Weibull parametric model for survival analysis in women with endometrial cancer using clinical and T2-weighted MRI radiomic features[J]. BMC Medical Research Methodology, 2024, 24(1): 107.</mixed-citation></ref>
      <ref id="B10"><label>10</label><mixed-citation publication-type="journal">Aboutaleb A, Torabi M, Belzer B, et al. Deep Learning-based Auto-encoder for Time-offset Faster-than-Nyquist Downlink NOMA with Timing Errors and Imperfect CSI[J]. IEEE Journal of Selected Topics in Signal Processing, 2024, 18(7): 1178-1193.</mixed-citation></ref>
      <ref id="B11"><label>11</label><mixed-citation publication-type="journal">Li Z, Li B, Jahng S G, et al. Improved VGG algorithm for visual prosthesis image recognition[J]. IEEE Access, 2024, 12: 45727-45739.</mixed-citation></ref>
      <ref id="B12"><label>12</label><mixed-citation publication-type="journal">Wang S, Gai K, Zhang S. Progressive feedforward collapse of ResNet training[J]. arXiv e-prints, 2024, arXiv:2405.00985.</mixed-citation></ref>
      <ref id="B13"><label>13</label><mixed-citation publication-type="journal">Sidiropoulos N D, De Lathauwer L, Fu X, et al. Tensor decomposition for signal processing and machine learning[J]. IEEE Transactions on Signal Processing, 2017, 65(13): 3551-3582.</mixed-citation></ref>
      <ref id="B14"><label>14</label><mixed-citation publication-type="journal">Chen G, Bai J, Ou Z, et al. PSFHS: intrapartum ultrasound image dataset for AI-based segmentation of pubic symphysis and fetal head[J]. Scientific Data, 2024, 11(1): 436.</mixed-citation></ref>
      <ref id="B15"><label>15</label><mixed-citation publication-type="journal">Pashaian M, Seyedin S. Speech Enhancement Using Joint DNN-NMF Model Learned with Multi-Objective Frequency Differential Spectrum Loss Function[J]. IET Signal Processing, 2024, 2024(1): 8881007.</mixed-citation></ref>
      <ref id="B16"><label>16</label><mixed-citation publication-type="journal">Zhou H, Sarkar R. Leveraging Graph Machine Learning for Moonlighting Protein Prediction: A PPI Network and Physiochemical Feature Approach[J]. bioRxiv, 2023: 2023.11.13.566879.</mixed-citation></ref>
      <ref id="B17"><label>17</label><mixed-citation publication-type="journal">Al Olaimat M, Bozdag S, Alzheimer’s Disease Neuroimaging Initiative. TA-RNN: An attention-based time-aware recurrent neural network architecture for electronic health records[J]. Bioinformatics, 2024, 40: i169-i179.</mixed-citation></ref>
      <ref id="B18"><label>18</label><mixed-citation publication-type="confproc">Bharadwaj V, Malik O A, Murray R, et al. Distributed-memory randomized algorithms for sparse tensor CP decomposition[C]//Proceedings of the 36th ACM Symposium on Parallelism in Algorithms and Architectures. 2024: 155-168.</mixed-citation></ref>
      <ref id="B19"><label>19</label><mixed-citation publication-type="journal">Yuan S, Huang K. Exploring Numerical Priors for Low-Rank Tensor Completion with Generalized CP Decomposition[J]. arXiv e-prints, 2023, arXiv:2302.05881.</mixed-citation></ref>
      <ref id="B20"><label>20</label><mixed-citation publication-type="journal">Baghershahi P, Hosseini R, Moradi H. Efficient relation-aware neighborhood aggregation in graph neural networks via tensor decomposition[J]. arXiv e-prints, 2022, arXiv:2212.05581.</mixed-citation></ref>
      <ref id="B21"><label>21</label><mixed-citation publication-type="confproc">Xiang L, Yin M, Zhang C, et al. TDC: Towards extremely efficient CNNs on GPUs via hardware-aware Tucker decomposition[C]//Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming. 2023: 260-273.</mixed-citation></ref>
      <ref id="B22"><label>22</label><mixed-citation publication-type="journal">Zhang Y, Zhu Y N, Zhang X. Compressing MIMO Channel Submatrices with Tucker Decomposition: Enabling Efficient Storage and Reducing SINR Computation Overhead[J]. arXiv e-prints, 2024, arXiv:2401.09792.</mixed-citation></ref>
      <ref id="B23"><label>23</label><mixed-citation publication-type="journal">Novikov A, Podoprikhin D, Osokin A, et al. Tensorizing neural networks[J]. Advances in Neural Information Processing Systems, 2015, 28.</mixed-citation></ref>
      <ref id="B24"><label>24</label><mixed-citation publication-type="journal">Kossaifi J, Lipton Z C, Kolbeinsson A, et al. Tensor regression networks[J]. Journal of Machine Learning Research, 2020, 21(123): 1-21.</mixed-citation></ref>
      <ref id="B25"><label>25</label><mixed-citation publication-type="journal">Liu Y, Chakraborty N, Qin Z S, et al. Integrative Bayesian tensor regression for imaging genetics applications[J]. Frontiers in Neuroscience, 2023, 17: 1212218.</mixed-citation></ref>
      <ref id="B26"><label>26</label><mixed-citation publication-type="book">Liu Y, Liu J, Long Z, et al. Tensor regression[M]. Springer International Publishing, 2022.</mixed-citation></ref>
      <ref id="B27"><label>27</label><mixed-citation publication-type="confproc">Zhou Y, Tan K, Shen X, et al. A protein structure prediction approach leveraging transformer and CNN integration[C]//2024 7th International Conference on Advanced Algorithms and Control Engineering (ICAACE). IEEE, 2024: 749-753.</mixed-citation></ref>
      <ref id="B28"><label>28</label><mixed-citation publication-type="journal">Dereich S, Jentzen A. Convergence rates for the Adam optimizer[J]. arXiv e-prints, 2024, arXiv:2407.21078.</mixed-citation></ref>
    </ref-list>
  </back>
</article>
