<?xml version="1.0" encoding="utf-8"?>
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://jats.nlm.nih.gov/publishing/1.1/xsd/JATS-journalpublishing1-mathml3.xsd" dtd-version="1.1">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">TACS</journal-id>
      <journal-title-group><journal-title>Technology and Application of Computer Science</journal-title></journal-title-group>
      <issn pub-type="ppub">2998-8926</issn>
      <issn pub-type="epub">2998-8934</issn>
      <publisher><publisher-name>Art and Design</publisher-name></publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.61369/TACS.2025090004</article-id>
      <article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories>
      <title-group><article-title>Research on a Question-Answering System for Resource-Constrained Environments Based on Lightweight RAG</article-title></title-group>
      <contrib-group>
        <contrib contrib-type="author"><string-name>王彦群</string-name></contrib>
        <contrib contrib-type="author"><string-name>罗瑜</string-name></contrib>
        <contrib contrib-type="author"><string-name>李永成</string-name></contrib>
      </contrib-group>
      <pub-date date-type="pub"><day>14</day><month>05</month><year>2025</year></pub-date>
      <volume>2</volume>
      <issue>9</issue>
      <self-uri xlink:href="https://artdesignp.com/journal/TACS/2/9/10.61369/TACS.2025090004"/>
      <abstract>
        <p>As cybersecurity threats grow increasingly complex, intelligent question-answering systems for resource-constrained environments have become an important component of cybersecurity protection. However, conventional Retrieval-Augmented Generation (RAG) methods face key technical challenges in low-resource settings such as edge devices and small and medium-sized enterprises, including high computational complexity, large memory footprints, and long response latency. To address these problems, this paper proposes a lightweight RAG cybersecurity question-answering system for resource-constrained environments. By designing a lightweight embedding mechanism, a dynamic semantic fusion algorithm, and a multi-layer optimization strategy, the system significantly reduces resource consumption while preserving answer accuracy. Experimental results show strong performance on key metrics: Recall@3 reaches 0.867 and MRR reaches 0.810. Compared with conventional RAG, retrieval efficiency improves markedly and response time is reduced by 45.5%. In terms of resource usage, model size is reduced by 78.4% and memory footprint by 78.6%, providing efficient and reliable support for cybersecurity decision-making in such environments.</p>
      </abstract>
      <kwd-group><kwd>RAG</kwd><kwd>lightweight model</kwd><kwd>cybersecurity</kwd><kwd>resource-constrained environment</kwd><kwd>intelligent question-answering system</kwd></kwd-group>
    </article-meta>
  </front>
  <body/>
</article>
