Selected Recent Publications

  1. SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity, arXiv (2023). [PDF]

  2. Taming Mode Collapse in Score Distillation for Text-to-3D Generation, arXiv (2023). [PDF]

  3. MiniGPT-v2: Large Language Model As a Unified Interface for Vision-Language Multi-task Learning, arXiv (2023). [PDF]

  4. Revisiting Sample Size Determination in Natural Language Understanding, arXiv (2023). [PDF]

  5. Folding Attention: Memory and Power Optimization for On-Device Transformer-based Streaming Speech Recognition, arXiv (2023). [PDF]

  6. Stack-and-Delay: a new codebook pattern for music generation, arXiv (2023). [PDF]

  7. Enhance audio generation controllability through representation similarity regularization, arXiv (2023). [PDF]

  8. Exploring Speech Enhancement for Low-resource Speech Synthesis, arXiv (2023). [PDF]

  9. FoleyGen: Visually-Guided Audio Generation, arXiv (2023). [PDF]

  10. TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models, arXiv (2023). [PDF]

  11. In-Context Prompt Editing For Conditional Audio Generation, arXiv (2023). [PDF]

  12. On The Open Prompt Challenge In Conditional Audio Generation, arXiv (2023). [PDF]

  13. Towards Zero-Shot Multilingual Transfer for Code-Switched Responses, ACL 2023. [PDF]

  14. LLM-QAT: Data-Free Quantization Aware Training for Large Language Models, arXiv (2023). [PDF]

  15. Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts, arXiv (2023). [PDF]

  16. XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for the Metaverse, MLSys 2023. [PDF]

  17. Fast Point Cloud Generation with Straight Flows, CVPR 2023. [PDF]

  18. PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion, arXiv (2022). [PDF]

  19. LiCo-Net: Linearized Convolution Network for Hardware-efficient Keyword Spotting, arXiv (2022). [PDF]

  20. Feature-Align Network with Knowledge Distillation for Efficient Denoising, WACV 2022. [PDF]

  21. NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict-aware Supernet Training, ICLR 2022. [PDF]

  22. Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation, CVPR 2022. [PDF]

  23. DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks, ICML 2022. [PDF]

  24. Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet, ICASSP 2022. [PDF]

  25. Streaming Parallel Transducer Beam Search with Fast-Slow Cascaded Encoders, INTERSPEECH 2022. [PDF]

  26. ScaleNAS: Multi-Path One-Shot NAS for Scale-Aware High-Resolution Representation, AutoML 2022. [PDF]

  27. Contrastive Quant: Quantization makes Stronger Contrastive Learning, DAC 2022. [PDF]

  28. CPT: Efficient Deep Neural Network Training via Cyclic Precision, ICLR 2021 (Spotlight Presentation). [PDF]

  29. AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling, CVPR 2021. [PDF]

  30. KeepAugment: A Simple Information-Preserving Data Augmentation Approach, CVPR 2021. [PDF]

  31. AlphaNet: Improved Training of Supernet with Alpha-Divergence, ICML 2021 (Long Presentation). [PDF]

  32. Double-win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference, ICML 2021. [PDF]

  33. NASGEM: Neural Architecture Search via Graph Embedding Method, AAAI 2021. [PDF]

  34. Collaborative Training of Acoustic Encoders for Speech Recognition, INTERSPEECH 2021. [PDF]

  35. Memory-efficient Speech Recognition on Smart Devices, ICASSP 2021. [PDF]

  36. Heterogeneous Dataflow Accelerators for Multi-DNN Workloads, HPCA 2021. [PDF]

  37. EVRNet: Efficient Video Restoration on Edge Devices, ACM Multimedia 2021. [PDF]

  38. Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search, ASPLOS 2021. [PDF]

  39. Noisy Training Improves E2E ASR for the Edge, arXiv (2021). [PDF]

  40. Low-Rank+Sparse Tensor Compression for Neural Networks, arXiv (2021). [PDF]

  41. Vision Transformers with Patch Diversification, arXiv (2021). [PDF]

  42. Can Temporal Information Help with Contrastive Self-Supervised Learning?, arXiv (2020). [PDF]

  43. DNA: Differentiable Network-Accelerator Co-Search, arXiv (2020). [PDF]

  44. One weight bitwidth to rule them all, Embedded Vision Workshop, ECCV 2020 (Best Paper Award). [PDF]

  45. Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks, DAC 2020. [PDF]

  46. RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing, ISCA 2020. [PDF]

  47. Energy-Aware Neural Architecture Optimization with Splitting Steepest Descent, Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS 2019. [PDF]

  48. Improving Efficiency in Neural Network Accelerator Using Operands Hamming Distance Optimization, Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS 2019. [PDF]

  49. Federated Learning with Non-IID Data, arXiv (2018). [PDF]

  50. CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs, arXiv (2018). [PDF]

  51. Not All Ops Are Created Equal!, SysML 2018. [PDF]

  52. PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training, arXiv (2018). [PDF]

  53. Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks, ISCA 2018. [PDF]

  54. Hello Edge: Keyword Spotting on Microcontrollers, arXiv (2017). [PDF]

  55. Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations, arXiv (2017). [PDF]

  56. Throughput-Optimized OpenCL-Based FPGA Accelerator for Large-Scale Convolutional Neural Networks, FPGA 2016. [PDF]