Hot Topics
Articles related to "Parameter-Efficient Fine-Tuning"
SNELLA from the Chinese Academy of Sciences: a new paradigm for vision model fine-tuning that outperforms SOTA while cutting memory usage by nearly 40%
我爱计算机视觉 2025-10-29T09:05:02.000000Z
[Programmer] [LLM Fine-Tuning] Master 5 LLM fine-tuning methods in one article
V2EX 2025-10-23T12:46:29.000000Z
Long Exposure: Accelerating Parameter-Efficient Fine-Tuning for LLMs under Shadowy Sparsity
cs.AI updates on arXiv.org 2025-10-21T04:14:55.000000Z
CTR-LoRA: Curvature-Aware and Trust-Region Guided Low-Rank Adaptation for Large Language Models
cs.AI updates on arXiv.org 2025-10-21T04:14:45.000000Z
AILoRA: Function-Aware Asymmetric Initialization for Low-Rank Adaptation of Large Language Models
cs.AI updates on arXiv.org 2025-10-10T04:07:59.000000Z
TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
cs.AI updates on arXiv.org 2025-10-07T04:17:25.000000Z
Efficient Layer-wise LLM Fine-tuning for Revision Intention Prediction
cs.AI updates on arXiv.org 2025-10-02T04:17:28.000000Z
LoRAFusion: Efficient LoRA Fine-Tuning for LLMs
cs.AI updates on arXiv.org 2025-10-02T04:17:18.000000Z
The architect of ChatGPT just published new research findings
量子位 2025-10-01T11:27:42.000000Z
Can LoRA really match full-parameter fine-tuning? Thinking Machines maps out a "no-regret zone" with experimental curves
PaperWeekly 2025-10-01T11:22:39.000000Z
Thinking Machines releases the definitive LoRA guide: 10x the learning rate, matching full-parameter fine-tuning
新智元 2025-10-01T09:20:55.000000Z
LD-MoLE: Learnable Dynamic Routing for Mixture of LoRA Experts
cs.AI updates on arXiv.org 2025-10-01T06:00:47.000000Z