CVPR2024 | AIGC-Related Paper Collection (if you find this helpful, please like and bookmark)
A Collection of Papers and Codes for CVPR2024 AIGC
A collection of this year's CVPR AIGC-related papers and code, listed below.
Stars, forks, and PRs are welcome~
Updates are posted first on GitHub: Awesome-CVPR2024-AIGC. Stars are welcome~
Zhihu: https://zhuanlan.zhihu.com/p/684325134
Please credit the source when referencing or reposting.
CVPR 2024 official website: https://cvpr.thecvf.com/Conferences/2024
Full CVPR paper list:
Conference dates: June 17-21, 2024
Paper acceptance announcement date:
【Contents】
1. Image Generation / Image Synthesis
2. Image Editing
3. Video Generation / Video Synthesis
4. Video Editing
5. 3D Generation / 3D Synthesis
6. Others

1. Image Generation / Image Synthesis
ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations
Paper: https://arxiv.org/abs/2312.04655
Code: https://github.com/eclipse-t2i/eclipse-inference

InstanceDiffusion: Instance-level Control for Image Generation
Paper: https://arxiv.org/abs/2402.03290
Code: https://github.com/frank-xwang/InstanceDiffusion

Instruct-Imagen: Image Generation with Multi-modal Instruction
Paper: https://arxiv.org/abs/2401.01952

MACE: Mass Concept Erasure in Diffusion Models
Paper:
Code: https://github.com/Shilin-LU/MACE

PAIR-Diffusion: Object-Level Image Editing with Structure-and-Appearance Paired Diffusion Models
Paper: https://arxiv.org/abs/2303.17546
Code: https://github.com/Picsart-AI-Research/PAIR-Diffusion

Residual Denoising Diffusion Models
Paper: https://arxiv.org/abs/2308.13712
Code: https://github.com/nachifur/RDDM

2. Image Editing
PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models
Paper: https://arxiv.org/abs/2312.13964
Code: https://github.com/open-mmlab/PIA

3. Video Generation / Video Synthesis
Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
Paper: https://arxiv.org/abs/2308.13712
Code: https://github.com/yzxing87/Seeing-and-Hearing

4. Video Editing
5. 3D Generation / 3D Synthesis
EscherNet: A Generative Model for Scalable View Synthesis
Paper: https://arxiv.org/abs/2402.03908
Code: https://github.com/kxhit/EscherNet

6. Others
InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks
Paper: https://arxiv.org/abs/2312.14238
Code: https://github.com/OpenGVLab/InternVL

Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models
Paper: https://arxiv.org/abs/2311.06783
Code: https://github.com/Q-Future/Q-Instruct

Continuously updated~
References
CVPR 2024 Papers and Open-Source Projects Collection (Papers with Code)