- SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity, arXiv (2023). [PDF]
- Taming Mode Collapse in Score Distillation for Text-to-3D Generation, arXiv (2023). [PDF]
- MiniGPT-v2: Large Language Model as a Unified Interface for Vision-Language Multi-task Learning, arXiv (2023). [PDF]
- Revisiting Sample Size Determination in Natural Language Understanding, arXiv (2023). [PDF]
- Folding Attention: Memory and Power Optimization for On-Device Transformer-based Streaming Speech Recognition, arXiv (2023). [PDF]
- Stack-and-Delay: A New Codebook Pattern for Music Generation, arXiv (2023). [PDF]
- Enhance Audio Generation Controllability through Representation Similarity Regularization, arXiv (2023). [PDF]
- Exploring Speech Enhancement for Low-resource Speech Synthesis, arXiv (2023). [PDF]
- FoleyGen: Visually-Guided Audio Generation, arXiv (2023). [PDF]
- TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models, arXiv (2023). [PDF]
- In-Context Prompt Editing For Conditional Audio Generation, arXiv (2023). [PDF]
- On The Open Prompt Challenge In Conditional Audio Generation, arXiv (2023). [PDF]
- Towards Zero-Shot Multilingual Transfer for Code-Switched Responses, ACL 2023. [PDF]
- LLM-QAT: Data-Free Quantization Aware Training for Large Language Models, arXiv (2023). [PDF]
- Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts, arXiv (2023). [PDF]
- XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for the Metaverse, MLSys 2023. [PDF]
- Fast Point Cloud Generation with Straight Flows, CVPR 2023. [PDF]
- PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion, arXiv (2022). [PDF]
- LiCo-Net: Linearized Convolution Network for Hardware-efficient Keyword Spotting, arXiv (2022). [PDF]
- Feature-Align Network with Knowledge Distillation for Efficient Denoising, WACV 2022. [PDF]
- NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training, ICLR 2022. [PDF]
- Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation, CVPR 2022. [PDF]
- DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks, ICML 2022. [PDF]
- Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet, ICASSP 2022. [PDF]
- Streaming Parallel Transducer Beam Search with Fast-Slow Cascaded Encoders, INTERSPEECH 2022. [PDF]
- ScaleNAS: Multi-Path One-Shot NAS for Scale-Aware High-Resolution Representation, AutoML 2022. [PDF]
- Contrastive Quant: Quantization Makes Stronger Contrastive Learning, DAC 2022. [PDF]
- CPT: Efficient Deep Neural Network Training via Cyclic Precision, ICLR 2021 (Spotlight Presentation). [PDF]
- AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling, CVPR 2021. [PDF]
- KeepAugment: A Simple Information-Preserving Data Augmentation Approach, CVPR 2021. [PDF]
- AlphaNet: Improved Training of Supernet with Alpha-Divergence, ICML 2021 (Long Presentation). [PDF]
- Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference, ICML 2021. [PDF]
- NASGEM: Neural Architecture Search via Graph Embedding Method, AAAI 2021. [PDF]
- Collaborative Training of Acoustic Encoders for Speech Recognition, INTERSPEECH 2021. [PDF]
- Memory-efficient Speech Recognition on Smart Devices, ICASSP 2021. [PDF]
- Heterogeneous Dataflow Accelerators for Multi-DNN Workloads, HPCA 2021. [PDF]
- EVRNet: Efficient Video Restoration on Edge Devices, ACM Multimedia 2021. [PDF]
- Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search, ASPLOS 2021. [PDF]
- Noisy Training Improves E2E ASR for the Edge, arXiv (2021). [PDF]
- Low-Rank + Sparse Tensor Compression for Neural Networks, arXiv (2021). [PDF]
- Vision Transformers with Patch Diversification, arXiv (2021). [PDF]
- Can Temporal Information Help with Contrastive Self-Supervised Learning?, arXiv (2020). [PDF]
- DNA: Differentiable Network-Accelerator Co-Search, arXiv (2020). [PDF]
- One Weight Bitwidth to Rule Them All, Embedded Vision Workshop, ECCV 2020 (Best Paper Award). [PDF]
- Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks, DAC 2020. [PDF]
- RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing, ISCA 2020. [PDF]
- Energy-Aware Neural Architecture Optimization With Splitting Steepest Descent, Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS 2019. [PDF]
- Improving Efficiency in Neural Network Accelerator using Operands Hamming Distance Optimization, Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS 2019. [PDF]
- Federated Learning with Non-IID Data, arXiv (2018). [PDF]
- CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs, arXiv (2018). [PDF]
- Not All Ops Are Created Equal!, SysML 2018. [PDF]
- PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training, arXiv (2018). [PDF]
- Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks, ISCA 2018. [PDF]
- Hello Edge: Keyword Spotting on Microcontrollers, arXiv (2017). [PDF]
- Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations, arXiv (2017). [PDF]
- Throughput-Optimized OpenCL-based FPGA Accelerator for Large-Scale Convolutional Neural Networks, FPGA 2016. [PDF]