Text-to-image diffusion models excel at generating high-quality, diverse images from natural language prompts. However, they often fail to produce semantically accurate results when the prompt contains concept combinations that contradict their learned priors. We define this failure mode as contextual contradiction, where one concept implicitly negates another due to entangled associations learned during training. To address this, we propose a stage-aware prompt decomposition framework that guides the denoising process using a sequence of proxy prompts. Each proxy prompt is constructed to match the semantic content expected to emerge at a specific stage of denoising, while ensuring contextual coherence. To construct these proxy prompts, we leverage a large language model (LLM) to analyze the target prompt, identify contradictions, and generate alternative expressions that preserve the original intent while resolving contextual conflicts. By aligning prompt information with the denoising progression, our method enables fine-grained semantic control and accurate image generation in the presence of contextual contradictions. Experiments across a variety of challenging prompts show substantial improvements in alignment with the textual prompt.
Text-to-image diffusion models generate images by progressively refining noise over a series of denoising steps. This process inherently follows a coarse-to-fine structure: early steps establish broad layout and spatial composition, while later steps gradually add fine details. This generative structure gives rise to two key observations:
Here, we illustrate this behaviour by inspecting the model's x0 prediction at various denoising steps.
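The x0 prediction referred to here can be recovered in closed form from the noisy latent and the model's noise estimate. As a minimal sketch (assuming the standard DDPM parameterization, where x_t = sqrt(alpha_bar_t)·x0 + sqrt(1 − alpha_bar_t)·eps; the variable names are illustrative):

```python
import numpy as np

def predict_x0(x_t, eps_pred, alpha_bar_t):
    """Recover the model's current clean-image estimate x0_hat from the
    noisy latent x_t and its predicted noise eps_pred, by inverting the
    DDPM forward relation x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps."""
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

# Toy check: if x_t is built from a known x0 and noise, the relation inverts exactly.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))
eps = rng.standard_normal((4, 4))
alpha_bar_t = 0.3  # assumed cumulative noise-schedule value at some mid step
x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
assert np.allclose(predict_x0(x_t, eps, alpha_bar_t), x0)
```

Visualizing this x0_hat at successive timesteps makes the coarse-to-fine progression directly observable: early estimates are blurry layouts, later ones sharpen into detailed images.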
These insights enable our method to steer the generation process more effectively by conditioning the model with prompt information that aligns with what the model is capable of expressing at each stage.
Our method guides the denoising process using time-dependent proxy prompts that adapt to the model's internal progression from coarse to fine. A large language model (LLM) analyzes the input prompt and decomposes it into a sequence of proxy prompts, each tailored to a specific stage of the generation process. These intermediate prompts are injected into the diffusion model at predefined timestep intervals. By aligning the prompt information with the model's evolving visual structure, this stage-aware conditioning ensures a semantically coherent and contextually accurate image.