L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training

Saved in:
Bibliographic Details
Published in: ECCV (17th : 2022 : Tel Aviv; Online) Computer Vision – ECCV 2022, Part 11
Main Author: Bae, Jonghyun (Author)
Other Authors: Baek, Woohyeon (Author), Ham, Tae Jun (Author), Lee, Jae W. (Author)
Language: English
Published: 2022
Similar Items (Title, Year, Author):
Knowledge Condensation Distillation 2022 Li, Chenxin
Patch Similarity Aware Data-Free Quantization for Vision Transformers 2022 Li, Zhikai
Symmetry Regularization and Saturating Nonlinearity for Robust Quantization 2022 Park, Sein
ℓ∞-Robustness and Beyond: Unleashing Efficient Adversarial Training 2022 Dolatabadi, Hadi M.
Deep Ensemble Learning by Diverse Knowledge Distillation for Fine-Grained Object Classification 2022 Okamoto, Naoki
A Simple Approach and Benchmark for 21,000-Category Object Detection 2022 Lin, Yutong
Reducing Information Loss for Spiking Neural Networks 2022 Guo, Yufei
Learning with Recoverable Forgetting 2022 Ye, Jingwen
Streaming Multiscale Deep Equilibrium Models 2022 Ertenli, Can Ufuk
Learning to Weight Samples for Dynamic Early-Exiting Networks 2022 Han, Yizeng
Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies 2022 Xing, Xingrun
Network Binarization via Contrastive Learning 2022 Shang, Yuzhang
SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning 2022 Kong, Zhenglun
Non-uniform Step Size Quantization for Accurate Post-training Quantization 2022 Oh, Sangyun
SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning 2022 You, Haoran
Efficient One Pass Self-distillation with Zipf’s Label Smoothing 2022 Liang, Jiajun
Deep Partial Updating: Towards Communication Efficient Updating for On-Device Inference 2022 Qu, Zhongnan
L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training 2022 Bae, Jonghyun
Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance 2022 Tang, Chen
Equivariance and Invariance Inductive Bias for Learning from Insufficient Data 2022 Wad, Tan