Articles related to "oLLM"
Meet oLLM: A Lightweight Python Library that brings 100K-Context LLM Inference to 8 GB Consumer GPUs via SSD Offload—No Quantization Required
MarkTechPost@AI
2025-09-29
The 8 GB GPU strikes back: trading SSD for VRAM, a 3060 Ti runs 100k-token contexts
PaperWeekly
2025-09-28