Masked Generative Distillation | 2022 | Yang, Zhendong
Prune Your Model Before Distill It | 2022 | Park, Jinhyuk
Fine-grained Data Distribution Alignment for Post-Training Quantization | 2022 | Zhong, Yunshan
SP-Net: Slowly Progressing Dynamic Inference Networks | 2022 | Wang, Huanyu
EdgeViTs: Competing Light-Weight CNNs on Mobile Devices with Vision Transformers | 2022 | Pan, Junting
PalQuant: Accelerating High-Precision Networks on Low-Precision Accelerators | 2022 | Hu, Qinghao
AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets | 2022 | Tu, Zhijun
Self-slimmed Vision Transformer | 2022 | Zong, Zhuofan
Weight Fixing Networks | 2022 | Subia-Waud, Christopher
Switchable Online Knowledge Distillation | 2022 | Qian, Biao
SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks | 2022 | Lin, Chien-Yu
Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning | 2022 | Lee, Seunghyun
Lipschitz Continuity Retained Binary Neural Network | 2022 | Shang, Yuzhang
Meta-GF: Training Dynamic-Depth Neural Networks Harmoniously | 2022 | Sun, Yi
Towards Accurate Network Quantization with Equivalent Smooth Regularizer | 2022 | Solodskikh, Kirill
A Simple Approach and Benchmark for 21,000-Category Object Detection | 2022 | Lin, Yutong
Reducing Information Loss for Spiking Neural Networks | 2022 | Guo, Yufei
Learning with Recoverable Forgetting | 2022 | Ye, Jingwen
Streaming Multiscale Deep Equilibrium Models | 2022 | Ertenli, Can Ufuk
Learning to Weight Samples for Dynamic Early-Exiting Networks | 2022 | Han, Yizeng