| Title | Year | First Author |
|---|---|---|
| Efficient One Pass Self-distillation with Zipf’s Label Smoothing | 2022 | Liang, Jiajun |
| Deep Partial Updating: Towards Communication Efficient Updating for On-Device Inference | 2022 | Qu, Zhongnan |
| L3: Accelerator-Friendly Lossless Image Format for High-Resolution, High-Throughput DNN Training | 2022 | Bae, Jonghyun |
| Mixed-Precision Neural Network Quantization via Learned Layer-Wise Importance | 2022 | Tang, Chen |
| Equivariance and Invariance Inductive Bias for Learning from Insufficient Data | 2022 | Wad, Tan |
| Event Neural Networks | 2022 | Dutson, Matthew |
| IDa-Det: An Information Discrepancy-Aware Distillation for 1-Bit Detectors | 2022 | Xu, Sheng |
| Disentangled Differentiable Network Pruning | 2022 | Gao, Shangqian |
| Adaptive Token Sampling for Efficient Vision Transformers | 2022 | Fayyaz, Mohsen |
| Multi-granularity Pruning for Model Acceleration on Mobile Devices | 2022 | Zhao, Tianli |
| Helpful or Harmful: Inter-task Association in Continual Learning | 2022 | Jin, Hyundong |
| Soft Masking for Cost-Constrained Channel Pruning | 2022 | Humble, Ryan |
| Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning | 2022 | Chowdhury, Sayeed Shafayet |
| A Simple Approach and Benchmark for 21,000-Category Object Detection | 2022 | Lin, Yutong |
| Reducing Information Loss for Spiking Neural Networks | 2022 | Guo, Yufei |
| Learning with Recoverable Forgetting | 2022 | Ye, Jingwen |
| Streaming Multiscale Deep Equilibrium Models | 2022 | Ertenli, Can Ufuk |
| Learning to Weight Samples for Dynamic Early-Exiting Networks | 2022 | Han, Yizeng |
| Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies | 2022 | Xing, Xingrun |
| Network Binarization via Contrastive Learning | 2022 | Shang, Yuzhang |