Unified image understanding and generation has emerged as a promising paradigm in multimodal artificial intelligence. Despite recent progress, the optimal architectural design for such unified models remains an open challenge. In this work, we start by analyzing the modality alignment behaviors of task-specific expert models for understanding and generation, as well as of current unified models. Our analysis reveals a crucial observation: understanding tasks benefit from progressively increasing modality alignment across network depth, which helps build up semantic information for better comprehension; in contrast, generation tasks follow a different trend, where modality alignment increases in the early layers but decreases in the deep layers to recover spatial details. These divergent alignment patterns create a fundamental conflict in fully shared Transformer backbones, where a uniform representational flow often leads to performance compromises across the two tasks. Motivated by this finding, we introduce UniFork, a novel Y-shaped architecture that shares the shallow layers for cross-task representation learning while employing task-specific branches in the deeper layers to avoid task interference. This design effectively balances shared learning and task specialization. Through extensive ablation experiments, we demonstrate that UniFork consistently outperforms conventional fully shared Transformer architectures and achieves performance on par with or better than task-specific models. Our code is available at https://github.com/tliby/UniFork.
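To make the layer-wise alignment analysis concrete, the sketch below computes a simple per-layer alignment score. The paper's exact metric is not specified in this summary, so the code assumes a common proxy: the cosine similarity between mean-pooled image-token and text-token hidden states at each Transformer layer; the random tensors at the bottom stand in for a real model's hidden states.

```python
# Minimal sketch of a layer-wise modality-alignment probe (illustrative only).
# Assumption: alignment is approximated by the cosine similarity between the
# mean-pooled image-token and text-token hidden states at each layer; the
# paper's actual metric may differ.
import torch
import torch.nn.functional as F

def layerwise_alignment(hidden_states, image_mask, text_mask):
    """hidden_states: list of [batch, seq, dim] tensors, one per layer.
    image_mask / text_mask: [batch, seq] boolean masks marking the two modalities.
    Returns one alignment score per layer."""
    scores = []
    for h in hidden_states:
        # Mean-pool each modality's tokens per sample, then compare the two pools.
        img = (h * image_mask.unsqueeze(-1)).sum(1) / image_mask.sum(1, keepdim=True)
        txt = (h * text_mask.unsqueeze(-1)).sum(1) / text_mask.sum(1, keepdim=True)
        scores.append(F.cosine_similarity(img, txt, dim=-1).mean().item())
    return scores

# Toy example with random features standing in for a real model's hidden states.
torch.manual_seed(0)
layers, batch, seq, dim = 24, 4, 64, 512
hidden = [torch.randn(batch, seq, dim) for _ in range(layers)]
image_mask = torch.zeros(batch, seq, dtype=torch.bool)
image_mask[:, :32] = True                 # first half of the sequence is "image" tokens
text_mask = ~image_mask                   # remaining tokens are "text" tokens
print(layerwise_alignment(hidden, image_mask, text_mask))
```

Plotting such scores against layer index is one way to visualize the rising-then-falling alignment trend the paper reports for generation versus the monotonically rising trend for understanding.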
This paper presents UniFork, a Y-shaped architecture designed to explore modality alignment for unified multimodal understanding and generation. The authors analyze the alignment behaviors of task-specific models in image understanding and generation, revealing that understanding tasks benefit from alignment that deepens progressively with network depth, while generation tasks peak in alignment in the early layers and then decline in deeper layers to recover spatial detail, leading to conflicts in fully shared Transformer architectures. UniFork addresses this by sharing the early layers for semantic learning and separating the deeper layers into task-specific branches, balancing shared representation learning and task specialization. Extensive ablation studies demonstrate that UniFork outperforms fully shared architectures and matches or surpasses task-specific models. The authors validate its effectiveness across a range of benchmarks, elucidate the distinct alignment demands of image understanding and generation, and highlight future directions for extending the framework to additional modalities.
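As a rough illustration of the Y-shaped design, the following sketch shares the shallow Transformer layers across both tasks and forks into task-specific deep branches with separate prediction heads. Layer counts, dimensions, vocabulary sizes, and the use of generic encoder blocks are placeholder assumptions and do not reproduce UniFork's actual configuration (which builds on a Qwen2.5-0.5B LLM with an Emu3-based visual tokenizer).

```python
# Minimal sketch of a Y-shaped (shared-then-forked) Transformer. Illustrative only:
# layer counts, dimensions, and vocabulary sizes are placeholders, and generic
# encoder blocks stand in for the autoregressive LLM trunk used in the paper.
import torch
import torch.nn as nn

class UniForkSketch(nn.Module):
    def __init__(self, dim=512, heads=8, shared_layers=8, branch_layers=4,
                 text_vocab=32000, image_vocab=16384):
        super().__init__()
        def block():
            return nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # Shallow layers are shared: both tasks learn common cross-modal semantics here.
        self.shared = nn.ModuleList([block() for _ in range(shared_layers)])
        # Deep layers fork into task-specific branches to avoid task interference.
        self.und_branch = nn.ModuleList([block() for _ in range(branch_layers)])
        self.gen_branch = nn.ModuleList([block() for _ in range(branch_layers)])
        self.und_head = nn.Linear(dim, text_vocab)   # predicts text tokens (understanding)
        self.gen_head = nn.Linear(dim, image_vocab)  # predicts visual tokens (generation)

    def forward(self, x, task):
        for blk in self.shared:
            x = blk(x)
        branch, head = ((self.und_branch, self.und_head) if task == "understanding"
                        else (self.gen_branch, self.gen_head))
        for blk in branch:
            x = blk(x)
        return head(x)

tokens = torch.randn(2, 64, 512)              # stand-in for embedded multimodal tokens
model = UniForkSketch()
print(model(tokens, "understanding").shape)   # torch.Size([2, 64, 32000])
print(model(tokens, "generation").shape)      # torch.Size([2, 64, 16384])
```

Routing only the deep layers per task keeps the semantic trunk shared, where the paper argues both tasks benefit from increasing alignment, while letting the generation branch specialize toward recovering spatial detail.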
This paper employs the following methods:
- Y-shaped architecture
- task-specific branches
- multimodal representation learning
- Qwen2.5-0.5B LLM
- Emu3-base
The following datasets were used in this research:
- ImageNet-1K
- Laion-En
- COYO
- JourneyDB
- InternVL-1.5
- BLIP3o-60k
- MJHQ-30K
- GenEval
- MME-P
- POPE
- SEED-I
- VQAv2
- GQA
The following evaluation metrics were used:
- Fréchet Inception Distance (FID); its standard definition is given after this list
- accuracy
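For reference, FID measures the distance between Gaussian fits to Inception features of real and generated images; this is the standard definition of the metric rather than anything specific to this paper.

```latex
% Standard FID: (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the mean and covariance
% of Inception-v3 features for real and generated images, respectively.
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```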
The following results were reported:
- UniFork outperforms fully shared Transformer architectures
- achieves performance comparable to task-specific models
- demonstrated effectiveness through extensive ablation studies
The authors identified the following limitations:
- small model size
- quality of visual tokenizer
- limited quality of training data
The compute requirements were as follows:
- Number of GPUs: 16
- GPU Type: NVIDIA A100
- Compute Requirements: training is conducted on 16 NVIDIA A100 GPUs.