cs.AI updates on arXiv.org, September 17
A Study of ReLoRA's Performance in Small Language Models

This paper presents the first systematic study of the performance and learning dynamics of ReLoRA in small language models (11M-66M parameters). It finds that ReLoRA underperforms standard training on loss, Paloma perplexity, and BLiMP, and that the gap widens as model size grows. The analysis suggests that low-rank update strategies may not transfer readily to SLM pretraining, underscoring the need for further research in low-compute settings.
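For context, ReLoRA extends LoRA's core idea, training a low-rank additive update B·A on top of a frozen full-rank weight W, from fine-tuning to pretraining. Below is a minimal PyTorch sketch of such a layer; the class name LoRALinear, the initialisation scales, and the default r and alpha values are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen full-rank weight W plus a trainable low-rank update:
    y = W x + (alpha / r) * B (A x), with A of shape (r, in) and B of shape (out, r)."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)               # W is frozen; only A and B train
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: the update starts at exactly 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)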

arXiv:2509.12960v1 Announce Type: cross Abstract: Parameter-efficient methods such as LoRA have revolutionised the fine-tuning of LLMs. Still, their extension to pretraining via ReLoRA is less well understood, especially for small language models (SLMs), which offer lower computational and environmental costs. This work is the first systematic study of ReLoRA in SLMs (11M-66M parameters), evaluating both performance and learning dynamics. Through ablation experiments, we find that ReLoRA generally performs worse than standard training on loss, Paloma perplexity and BLiMP, with the gap widening for the larger models. Further analysis of the learning dynamics of the models indicates that ReLoRA reinforces the rank deficiencies found in smaller models. These results indicate that low-rank update strategies may not transfer easily to SLM pretraining, highlighting the need for more research in the low-compute regime.
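What distinguishes ReLoRA from plain low-rank pretraining is the periodic restart: the current low-rank factors are merged into the frozen base weight and re-initialised, so the update accumulated across cycles can exceed rank r. A hedged sketch of that merge-and-restart step, reusing the hypothetical LoRALinear above:

@torch.no_grad()
def merge_and_restart(layer: LoRALinear) -> None:
    # Fold the learned low-rank update into the frozen base weight, then
    # re-initialise A and B so the next cycle trains a fresh rank-r direction.
    # Accumulated over several cycles, the total change to W can exceed rank r.
    layer.base.weight += layer.scale * (layer.B @ layer.A)
    layer.A.normal_(mean=0.0, std=0.01)   # restart the down-projection factor
    layer.B.zero_()                       # zero-init keeps the merged weight unchanged at restart

In the original ReLoRA work, each restart is also accompanied by a partial reset of the optimizer state for the low-rank parameters and a jagged learning-rate schedule with a short warmup; those pieces are omitted from this sketch. The study summarised above finds that even with this mechanism, the low-rank updates reinforce rather than repair the rank deficiencies of small models.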

Related tags

ReLoRA, Small language models, Performance study, Learning dynamics, Pretraining