| Moonworks Lunara Aesthetic Dataset | Aesthetic Dataset | arXiv | 2026 |
| Moonworks Lunara Aesthetic II | Image Variation Dataset | arXiv | 2026 |
| YOLO-Count | Differentiable Object Counting for T2I Generation | arXiv | 2025 |
| Rich Human Feedback for T2I Generation (Best Paper) | Human Feedback | CVPR | 2024 |
| PopAlign | Population-Level Alignment for Fair Text-to-Image Generation | arXiv | 2024 |
| Fine-Grained Feedback | Untangling Challenges of Fine-Grained Feedback for T2I | arXiv | 2024 |
| OpenBias | Open-set Bias Detection in Text-to-Image Generative Models | CVPR | 2024 |
| SafeGen | Mitigating Unsafe Content Generation in Text-to-Image Models | arXiv | 2024 |
| DIAGNOSIS | Detecting Unauthorized Data Usages in T2I Diffusion Models | ICLR | 2024 |
| Spatial Consistency | Getting it Right: Improving Spatial Consistency in T2I Models | arXiv | 2024 |
| Learning Multi-dim Human Preference | Multi-dimensional Human Preference for T2I | CVPR | 2024 |
| HEIM | Holistic Evaluation of Text-to-Image Models | NeurIPS | 2023 |
| GenEval | An Object-Focused Framework for Evaluating Text-to-Image Alignment | arXiv | 2023 |
| HPSv2 | Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences | arXiv | 2023 |
| ImageReward | Learning and Evaluating Human Preferences for Text-to-Image Generation | arXiv | 2023 |
| TIFA | Accurate and Interpretable Text-to-Image Faithfulness Evaluation with QA | arXiv | 2023 |
| LLMScore | Unveiling the Power of LLMs in Text-to-Image Synthesis Evaluation | arXiv | 2023 |
| ConceptBed | Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models | arXiv | 2023 |
| IMMA | Immunizing T2I Models against Malicious Adaptation | arXiv | 2023 |
| Rickrolling the Artist | Injecting Backdoors into Text Encoders for T2I Synthesis | ICCV | 2023 |
| RIATIG | Reliable and Imperceptible Adversarial Text-to-Image Generation with Natural Prompts | CVPR | 2023 |
| Demographic Stereotypes | Easily Accessible T2I Generation Amplifies Demographic Stereotypes at Large Scale | FAccT | 2023 |
| DE-FAKE | Detection and Attribution of Fake Images Generated by T2I Generation Models | arXiv | 2022 |
| Cultural Bias | Exploiting Cultural Biases via Homoglyphs in Text-Guided Image Generation | arXiv | 2022 |
| Privacy Analysis | Membership Inference Attacks Against Text-to-Image Generation Models | arXiv | 2022 |