<?xml version="1.1" encoding="utf-8"?>
<article xsi:noNamespaceSchemaLocation="http://jats.nlm.nih.gov/publishing/1.1/xsd/JATS-journalpublishing1-mathml3.xsd" dtd-version="1.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><front><journal-meta><journal-id journal-id-type="publisher-id">SE</journal-id><journal-title-group><journal-title>Society and Economy</journal-title></journal-title-group><issn>2995-4959</issn><eissn>2995-4975</eissn><publisher><publisher-name>Art and Design</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.61369/SE.2025070030</article-id><article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories><title>Artificial Intelligence Security and Personal Information Protection: Challenges and Countermeasures</title><url>https://artdesignp.com/journal/SE/3/7/10.61369/SE.2025070030</url><author>吴少刚</author><pub-date pub-type="publication-year"><year>2025</year></pub-date><volume>3</volume><issue>7</issue><history><date date-type="pub"><published-time>2025-07-20</published-time></date></history><abstract>The widespread application of artificial intelligence technology poses many new challenges for personal information protection. Drawing on research and data published up to the end of 2024, this paper systematically reviews the principal risks to personal information in AI environments and proposes comprehensive protection strategies at the technical, managerial, and legal levels. The study finds that generative AI heightens the risk of privacy leakage through covert data collection, uncontrollable model training, and related factors. The paper further proposes a solution that integrates privacy-enhancing technologies, tiered governance mechanisms, and a dynamic compliance system, helping to build a secure and trustworthy environment for AI applications [1-3,5].</abstract><keywords>artificial intelligence security, personal information protection, privacy-enhancing technology, data governance</keywords></article-meta></front><body/><back><ref-list>
<ref id="B1"><label>1</label><mixed-citation publication-type="journal">王利明. 加强人格权立法保障人民美好生活[J]. 四川大学学报(哲学社会科学版), 2018(3): 5-10.</mixed-citation></ref>
<ref id="B2"><label>2</label><mixed-citation publication-type="journal">张新宝. 论个人信息保护的法律路径[J]. 法学研究, 2023, 45(1): 98-115.</mixed-citation></ref>
<ref id="B3"><label>3</label><mixed-citation publication-type="report">IDC. Worldwide Artificial Intelligence Spending Guide 2024[R]. 2024.</mixed-citation></ref>
<ref id="B4"><label>4</label><mixed-citation publication-type="report">McKinsey Global Institute. The State of AI in 2024: Generative AI's Breakout Year[R]. 2024.</mixed-citation></ref>
<ref id="B5"><label>5</label><mixed-citation publication-type="journal">杨强, 刘洋. 联邦学习: 算法与应用[J]. 计算机学报, 2020, 43(5): 897-909.</mixed-citation></ref>
<ref id="B6"><label>6</label><mixed-citation publication-type="other">国务院. 关于构建数据基础制度更好发挥数据要素作用的意见[Z]. 2022.</mixed-citation></ref>
<ref id="B7"><label>7</label><mixed-citation publication-type="confproc">Fredrikson, M., et al. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing[C]. Proceedings of the 23rd USENIX Security Symposium, 2014: 17-32.</mixed-citation></ref>
<ref id="B8"><label>8</label><mixed-citation publication-type="journal">Brown, T., et al. Language Models are Few-Shot Learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.</mixed-citation></ref>
<ref id="B9"><label>9</label><mixed-citation publication-type="confproc">Shokri, R., et al. Membership Inference Attacks Against Machine Learning Models[C]. 2017 IEEE Symposium on Security and Privacy (SP), 2017: 3-18.</mixed-citation></ref>
<ref id="B10"><label>10</label><mixed-citation publication-type="confproc">Abadi, M., et al. Deep Learning with Differential Privacy[C]. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016: 308-318.</mixed-citation></ref>
<ref id="B11"><label>11</label><mixed-citation publication-type="journal">Yang, Q., et al. Federated Machine Learning: Concept and Applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.</mixed-citation></ref>
<ref id="B12"><label>12</label><mixed-citation publication-type="other">最高人民法院. 关于审理使用人脸识别技术处理个人信息相关民事案件适用法律若干问题的解释[Z]. 2021.</mixed-citation></ref>
</ref-list></back></article>
