"
DistilBERT
" 相关文章
Misinformation Detection using Large Language Models with Explainability
cs.AI updates on arXiv.org
2025-10-23T04:13:36.000000Z
Efficient Adaptive Transformer: An Empirical Study and Reproducible Framework
cs.AI updates on arXiv.org
2025-10-16T04:23:59.000000Z
Fairness Metric Design Exploration in Multi-Domain Moral Sentiment Classification using Transformer-Based Models
cs.AI updates on arXiv.org
2025-10-14T04:19:57.000000Z
Lightweight Baselines for Medical Abstract Classification: DistilBERT with Cross-Entropy as a Strong Default
cs.AI updates on arXiv.org
2025-10-14T04:17:29.000000Z
MLOps: End-to-End Hugging Face Transformers with the Hub and SageMaker Pipelines
philschmid RSS feed
2025-09-30T11:14:34.000000Z
Distributed training on multilingual BERT with Hugging Face Transformers and Amazon SageMaker
philschmid RSS feed
2025-09-30T11:14:17.000000Z
Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module
philschmid RSS feed
2025-09-30T11:14:13.000000Z
Static Quantization with Hugging Face `optimum` for ~3x latency improvements
philschmid RSS feed
2025-09-30T11:13:51.000000Z
Optimizing Transformers with Hugging Face Optimum
philschmid RSS feed
2025-09-30T11:13:46.000000Z
Optimizing Transformers for GPUs with Optimum
philschmid RSS feed
2025-09-30T11:13:44.000000Z
An Explainable Transformer-based Model for Phishing Email Detection: A Large Language Model Approach
cs.AI updates on arXiv.org
2025-08-15T04:18:50.000000Z
Crisp Attention: Regularizing Transformers via Structured Sparsity
cs.AI updates on arXiv.org
2025-08-11T04:08:41.000000Z
Model Distillation: Distilling distilbert-base-uncased from bert-base-uncased
Juejin (掘金) AI
2025-08-04T11:36:50.000000Z