Language Models
- MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes, ICLR (2026). [PDF]
- SpinQuant: LLM Quantization with Learned Rotations, ICLR (2025). [PDF]
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases, ICML (2024). [PDF]
- AutoMixer: Checkpoint Artifacts as Automatic Data Mixers, ACL (2025). [PDF]
- Target-Aware Language Modeling via Granular Data Sampling, EMNLP (2024). [PDF]
- Towards Zero-Shot Multilingual Transfer for Code-Switched Responses, ACL (2023). [PDF]
- Agent-as-a-Judge: Evaluate Agents with Agents, ICML (2025). [PDF]
- ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization, NeurIPS (2025). [PDF]
- Streamlining Language Models via Semantic Basis Analysis, TMLR (2025). [PDF]
- Self-Vocabularizing Training for Neural Machine Translation, NAACL SRW (2025). [PDF]
- Scaling Parameter-Constrained Language Models with Quality Data, EMNLP Industry (2024). [PDF]
- LLM-QAT: Data-Free Quantization Aware Training for Large Language Models, ACL Findings (2024). [PDF]
- Revisiting Sample Size Determination in Natural Language Understanding, ACL Findings (2023). [PDF]
- MobileLLM-Pro Technical Report, arXiv (2025). [PDF]
- An Introduction to Vision-Language Modeling, arXiv (2024). [PDF]
- MiniGPT-v2: Large Language Model As a Unified Interface for Vision-Language Multi-task Learning, arXiv (2023). [PDF]
Efficient AI & Model Compression
- CPT: Efficient Deep Neural Network Training via Cyclic Precision, ICLR (2021) (Spotlight). [PDF]
- AlphaNet: Improved Training of Supernet with Alpha-Divergence, ICML (2021) (Long Presentation). [PDF]
- APOLLO: SGD-like Memory, AdamW-level Performance, MLSys (2025). [PDF]
- Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications, TMLR (2025). [PDF]
- NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training, ICLR (2022). [PDF]
- DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks, ICML (2022). [PDF]
- Double-win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference, ICML (2021). [PDF]
- AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling, CVPR (2021). [PDF]
- One weight bitwidth to rule them all, ECCV Embedded Vision Workshop (2020) (Best Paper Award). [PDF]
- Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts, ACL Findings (2024). [PDF]
- ScaleNAS: Multi-Path One-Shot NAS for Scale-Aware High-Resolution Representation, AutoML (2022). [PDF]
- Contrastive Quant: Quantization makes Stronger Contrastive Learning, DAC (2022). [PDF]
- NASGEM: Neural Architecture Search via Graph Embedding Method, AAAI (2021). [PDF]
- Energy-Aware Neural Architecture Optimization With Splitting Steepest Descent, NeurIPS Workshop (2019). [PDF]
- Llama Guard 3-1B-INT4: Compact and Efficient Safeguard for Human-AI Conversations, arXiv (2024). [PDF]
- Low-Rank+Sparse Tensor Compression for Neural Networks, arXiv (2021). [PDF]
- CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs, arXiv (2018). [PDF]
- Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations, arXiv (2017). [PDF]
Computer Vision & 3D
- DepthLM: Metric Depth from Vision Language Models, ICLR (2026) (Oral Presentation). [PDF]
- LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding, ICML (2025). [PDF]
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything, CVPR (2024) (Highlight). [PDF]
- EdgeTAM: On-Device Track Anything Model, CVPR (2025). [PDF]
- MVDiffusion++: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3D Object Reconstruction, ECCV (2024). [PDF]
- Efficient Track Anything, ICCV (2025). [PDF]
- CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians, ECCV (2024). [PDF]
- Taming Mode Collapse in Score Distillation for Text-to-3D Generation, CVPR (2024). [PDF]
- MVDiffHD: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3D Object Reconstruction, ECCV (2024). [PDF]
- Fast Point Cloud Generation with Straight Flows, CVPR (2023). [PDF]
- Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation, CVPR (2022). [PDF]
- KeepAugment: A Simple Information-Preserving Data Augmentation Approach, CVPR (2021). [PDF]
- Feature-Align Network with Knowledge Distillation for Efficient Denoising, WACV (2022). [PDF]
- PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion, 3DV (2024). [PDF]
- EVRNet: Efficient Video Restoration on Edge Devices, ACM MM (2021). [PDF]
- SteinDreamer: Variance Reduction for Text-to-3D Score Distillation via Stein Identity, AISTATS (2025). [PDF]
- VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice, arXiv (2026). [PDF]
- SqueezeSAM: User Friendly Mobile Interactive Segmentation, arXiv (2023). [PDF]
- Vision Transformers with Patch Diversification, arXiv (2021). [PDF]
- Can Temporal Information Help with Contrastive Self-Supervised Learning?, arXiv (2020). [PDF]
Speech & Audio
- Breaking Down Power Barriers in On-Device Streaming ASR: Insights and Solutions, NAACL (2025). [PDF]
- TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-Device ASR Models, ICASSP (2024). [PDF]
- Stack-and-Delay: A New Codebook Pattern for Music Generation, ICASSP (2024). [PDF]
- In-Context Prompt Editing for Conditional Audio Generation, ICASSP (2024). [PDF]
- On the Open Prompt Challenge in Conditional Audio Generation, ICASSP (2024). [PDF]
- Folding Attention: Memory and Power Optimization for On-Device Transformer-based Streaming Speech Recognition, ICASSP (2024). [PDF]
- Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet, ICASSP (2022). [PDF]
- Memory-efficient Speech Recognition on Smart Devices, ICASSP (2021). [PDF]
- Streaming Parallel Transducer Beam Search with Fast-Slow Cascaded Encoders, INTERSPEECH (2022). [PDF]
- Collaborative Training of Acoustic Encoders for Speech Recognition, INTERSPEECH (2021). [PDF]
- Data Efficient Reflow for Few Step Audio Generation, SLT (2024). [PDF]
- Towards Temporally Synchronized Visually Indicated Sounds Through Scale-Adapted Positional Embeddings, NeurIPS Workshop (2024). [PDF]
- SLAP: Scalable Language-Audio Pretraining with Variable-Duration Audio and Multi-Objective Training, arXiv (2026). [PDF]
- SyncFlow: Toward Temporally Aligned Joint Audio-Video Generation from Text, arXiv (2024). [PDF]
- High Fidelity Text-Guided Music Generation and Editing via Single-Stage Flow Matching, arXiv (2024). [PDF]
- Enhance Audio Generation Controllability Through Representation Similarity Regularization, arXiv (2023). [PDF]
- Exploring Speech Enhancement for Low-resource Speech Synthesis, arXiv (2023). [PDF]
- FoleyGen: Visually-Guided Audio Generation, arXiv (2023). [PDF]
- LiCo-Net: Linearized Convolution Network for Hardware-efficient Keyword Spotting, arXiv (2022). [PDF]
- Noisy Training Improves E2E ASR for the Edge, arXiv (2021). [PDF]
- Hello Edge: Keyword Spotting on Microcontrollers, arXiv (2017). [PDF]
Systems ML
- DREAM: A Dynamic Scheduler for Dynamic Real-time Multi-model ML Workloads, ASPLOS (2024). [PDF]
- XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for the Metaverse, MLSys (2023). [PDF]
- Heterogeneous Dataflow Accelerators for Multi-DNN Workloads, HPCA (2021). [PDF]
- Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search, ASPLOS (2021). [PDF]
- RecNMP: Accelerating Personalized Recommendation with Near-Memory Processing, ISCA (2020). [PDF]
- Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks, ISCA (2018). [PDF]
- Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks, DAC (2020). [PDF]
- Improving Efficiency in Neural Network Accelerator using Operands Hamming Distance Optimization, NeurIPS Workshop (2019). [PDF]
- Not All Ops are Created Equal!, SysML (2018). [PDF]
- Throughput-optimized OpenCL-based FPGA Accelerator for Large-scale Convolutional Neural Networks, FPGA (2016). [PDF]
- DNA: Differentiable Network-Accelerator Co-Search, arXiv (2020). [PDF]
- Federated Learning with Non-IID Data, arXiv (2018). [PDF]
- PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training, arXiv (2018). [PDF]