<?xml version="1.1" encoding="utf-8"?>
<article xsi:noNamespaceSchemaLocation="http://jats.nlm.nih.gov/publishing/1.1/xsd/JATS-journalpublishing1-mathml3.xsd" dtd-version="1.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><front><journal-meta><journal-id journal-id-type="publisher-id">ETR</journal-id><journal-title-group><journal-title>Educational Theory and Research</journal-title></journal-title-group><issn>2995-3448</issn><eissn>2995-3456</eissn><publisher><publisher-name>Art and Design</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.61369/ETR.2025470001</article-id><article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories><title>Exploring Personalized Ideological and Political Education for Undergraduates in the Era of Large-Model Agents</title><url>https://artdesignp.com/journal/ETR/3/47/10.61369/ETR.2025470001</url><author>胡渲,耿志强,韩永明,王孟志</author><pub-date pub-type="publication-year"><year>2025</year></pub-date><volume>3</volume><issue>47</issue><history><date date-type="pub"><published-time>2025-11-21</published-time></date></history><abstract>Facing the emerging trend of deep integration between large language model (LLM) agent technology and undergraduate ideological and political education, and to address the key problem that the uniform content of traditional instruction fails to accommodate individual differences among students, this study explores and proposes concrete pathways for combining LLM agents with personalized ideological and political education, including a system for sensing students' ideological dynamics, personalized guidance tailored to cognitive preferences, and personalized generation of ideological and political content. The research centers on overcoming the one-size-fits-all limitations of traditional ideological and political education: by building an ideological guidance framework matched to each student's individual characteristics and designing practical schemes that embed ideological and political education deeply into students' daily study and life, it ultimately synchronizes ideological guidance with students' developmental progress. Against the backdrop of the widespread adoption of LLM agents, undergraduate ideological and political educators should actively harness this technology to shift the educational model from one-way indoctrination to two-way dialogue, providing more targeted ideological support for talent cultivation in higher education.</abstract><keywords>large language models,agents,personalized guidance,ideological and political education,educational reform</keywords></article-meta></front><body/><back><ref-list>
<ref id="B1" content-type="article"><label>1</label><element-citation publication-type="journal"><p>Kalla D, Smith N, Samaah F, et al. Study and analysis of Chat GPT and its impact on different fields of study[J]. International Journal of Innovative Science and Research Technology, 2023, 8(3).</p></element-citation></ref>
<ref id="B2" content-type="article"><label>2</label><element-citation publication-type="journal"><p>Achiam J, Adler S, Agarwal S, et al. GPT-4 technical report[J]. arXiv preprint arXiv:2303.08774, 2023.</p></element-citation></ref>
<ref id="B3" content-type="article"><label>3</label><element-citation publication-type="journal"><p>Touvron H, Martin L, Stone K, et al. Llama 2: Open foundation and fine-tuned chat models[J]. arXiv preprint arXiv:2307.09288, 2023.</p></element-citation></ref>
<ref id="B4" content-type="article"><label>4</label><element-citation publication-type="journal"><p>Sun Y, Wang S, Feng S, et al. ERNIE 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation[J]. arXiv preprint arXiv:2107.02137, 2021.</p></element-citation></ref>
<ref id="B5" content-type="article"><label>5</label><element-citation publication-type="journal"><p>Team G, Anil R, Borgeaud S, et al. Gemini: A family of highly capable multimodal models[J]. arXiv preprint arXiv:2312.11805, 2023.</p></element-citation></ref>
<ref id="B6" content-type="article"><label>6</label><element-citation publication-type="journal"><p>Dan Y, Lei Z, Gu Y, et al. EduChat: A large-scale language model-based chatbot system for intelligent education[J]. arXiv preprint arXiv:2308.02773, 2023.</p></element-citation></ref>
<ref id="B7" content-type="article"><label>7</label><element-citation publication-type="journal"><p>Cui J, Li Z, Yan Y, et al. ChatLaw: Open-source legal large language model with integrated external knowledge bases[J]. CoRR, 2023.</p></element-citation></ref>
<ref id="B8" content-type="article"><label>8</label><element-citation publication-type="journal"><p>Sun T, Zhang X, He Z, et al. MOSS: An open conversational large language model[J]. Machine Intelligence Research, 2024, 21(5): 888-905.</p></element-citation></ref>
<ref id="B9" content-type="article"><label>9</label><element-citation publication-type="journal"><p>Shao Y, Geng Z, Liu Y, et al. CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation[J]. Science China Information Sciences, 2024, 67(5): 152102.</p></element-citation></ref>
<ref id="B10" content-type="article"><label>10</label><element-citation publication-type="journal"><p>Wang S, Xu T, Li H, et al. Large language models for education: A survey and outlook[J]. arXiv preprint arXiv:2403.18105, 2024.</p></element-citation></ref>
<ref id="B11" content-type="article"><label>11</label><element-citation publication-type="journal"><p>Huang X, Li S. A review of personalized learning in the age of artificial intelligence: From single-dimensional adaptation to multidimensional integration[J]. Computers &amp; Education, 2021, 172: 104262.</p></element-citation></ref>
</ref-list></back></article>
