"
神经网络压缩
" 相关文章
Binary Quadratic Quantization: Beyond First-Order Quantization for Real-Valued Matrix Compression
cs.AI updates on arXiv.org
2025-10-22T04:24:31.000000Z
C-SWAP: Explainability-Aware Structured Pruning for Efficient Neural Networks Compression
cs.AI updates on arXiv.org
2025-10-22T04:24:24.000000Z
S2AP: Score-space Sharpness Minimization for Adversarial Pruning
cs.AI updates on arXiv.org
2025-10-22T04:22:09.000000Z
Vanishing Contributions: A Unified Approach to Smoothly Transition Neural Models into Compressed Form
cs.AI updates on arXiv.org
2025-10-14T04:13:03.000000Z
SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions
cs.AI updates on arXiv.org
2025-10-13T04:13:48.000000Z
Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
2024-05-12T03:32:26.000000Z