"
ViT
" 相关文章
Tsinghua, NTU, and Others Propose Sparse Model Inversion: Up to 3.79× Faster ViT Inversion, No More Useless Backgrounds!
我爱计算机视觉
2025-11-05T08:40:34.000000Z
Hammering the Diagnosis: Rowhammer-Induced Stealthy Trojan Attacks on ViT-Based Medical Imaging
cs.AI updates on arXiv.org
2025-10-30T04:16:38.000000Z
TransFace++: Rethinking the Face Recognition Paradigm with a Focus on Accuracy, Efficiency, and Security
cs.AI updates on arXiv.org
2025-10-28T04:14:37.000000Z
Accelerating Vision Transformers with Adaptive Patch Sizes
cs.AI updates on arXiv.org
2025-10-22T04:20:00.000000Z
ICCV 2025 | FDAM: A Plug-and-Play Method Rooted in Circuit Theory Restores High-Definition Detail to Vision Transformers
机器之心
2025-10-15T11:24:27.000000Z
Convolutional Neural Nets vs Vision Transformers: A SpaceNet Case Study with Balanced vs Imbalanced Regimes
cs.AI updates on arXiv.org
2025-10-07T04:14:08.000000Z
AttriGen: Automated Multi-Attribute Annotation for Blood Cell Datasets
cs.AI updates on arXiv.org
2025-10-01T06:01:33.000000Z
Transformers and ViT
掘金 人工智能
2025-09-21T11:58:58.000000Z
Beyond CNNs and RNNs: Why Transformers Were the Inevitable Choice for AI
掘金 人工智能
2025-09-19T08:23:54.000000Z
DeepMind and Oxford Propose LayerLock: Efficient, Collapse-Free Self-Supervised Visual Representation Learning via Progressive Layer Freezing
我爱计算机视觉
2025-09-16T10:15:14.000000Z
EFTViT: Efficient Federated Training of Vision Transformers with Masked Images on Resource-Constrained Clients
cs.AI updates on arXiv.org
2025-09-03T04:18:07.000000Z
Representation Understanding via Activation Maximization
cs.AI updates on arXiv.org
2025-08-12T04:39:41.000000Z
$MV_{Hybrid}$: Improving Spatial Transcriptomics Prediction with Hybrid State Space-Vision Transformer Backbone in Pathology Vision Foundation Models
cs.AI updates on arXiv.org
2025-08-04T04:27:37.000000Z
PTCMIL: Multiple Instance Learning via Prompt Token Clustering for Whole Slide Image Analysis
cs.AI updates on arXiv.org
2025-07-28T04:42:49.000000Z
Post-Disaster Affected Area Segmentation with a Vision Transformer (ViT)-based EVAP Model using Sentinel-2 and Formosat-5 Imagery
cs.AI updates on arXiv.org
2025-07-24T05:31:01.000000Z
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy
cs.AI updates on arXiv.org
2025-07-18T04:13:46.000000Z
Breaking: Three Google Vision Transformer Authors Announce Move to OpenAI
36kr
2024-12-04T12:02:59.000000Z