
GAN Training Loop

Generator vs Discriminator adversarial loop with labeled losses and gradient flow.

When to use this prompt

For generative-modeling papers using a GAN-style training objective.

The prompt

A Generative Adversarial Network training loop diagram.

Left — Generator G:
- Takes random noise z ~ N(0, I) as input.
- Outputs a fake sample G(z) drawn as a small image thumbnail.

Right — Discriminator D:
- Takes either a real sample x_real (from the dataset block at the top-right) or G(z).
- Outputs a probability D(x) in [0, 1].

Center — Loss block:
- Adversarial loss for D: L_D = -E[log D(x_real)] - E[log (1 - D(G(z)))].
- Adversarial loss for G: L_G = -E[log D(G(z))] (non-saturating form).

Show two gradient arrows:
- Solid red arrow: D's gradient updates D's weights.
- Solid blue arrow: G's gradient flows back through D into G.

Style: clean publication-style schematic, white background, navy / red / blue palette, sans-serif labels.
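The two losses named in the prompt can be checked numerically before labeling the diagram. The sketch below is a minimal NumPy illustration of the standard GAN objectives, not code from any particular framework; the function names `d_loss` and `g_loss` are placeholders chosen here.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: L_D = -E[log D(x_real)] - E[log(1 - D(G(z)))]."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: L_G = -E[log D(G(z))]."""
    return -np.mean(np.log(d_fake))

# Early in training D is confident: D(x_real) is high, D(G(z)) is low.
d_real = np.array([0.9, 0.8])   # D's outputs on real samples
d_fake = np.array([0.1, 0.2])   # D's outputs on generated samples

print(d_loss(d_real, d_fake))   # small: D is winning
print(g_loss(d_fake))           # large: strong gradient signal for G
```

The non-saturating form matters exactly in this regime: when D confidently rejects G's samples, -E[log D(G(z))] still produces large gradients for G, whereas the original minimax form E[log(1 - D(G(z)))] saturates.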

Variations

Conditional GAN

Same loop but add a class label y as additional conditioning input to both G and D. Show y as a separate input arrow into both networks. Update losses to D(x | y) and G(z | y).
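The conditioning described above is most often implemented by concatenating a one-hot encoding of y onto the network input. A minimal sketch, assuming a flat noise vector and integer class labels (the helper name `conditional_input` is hypothetical):

```python
import numpy as np

def conditional_input(z, y, num_classes):
    """Concatenate noise z with a one-hot encoding of class label y,
    forming the conditioned input for G in a conditional GAN.
    D conditions the same way on its sample input."""
    one_hot = np.zeros(num_classes)
    one_hot[y] = 1.0
    return np.concatenate([z, one_hot])

z = np.random.randn(100)                          # noise z ~ N(0, I)
g_in = conditional_input(z, y=3, num_classes=10)  # shape (110,)
print(g_in.shape)
```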

Tips

  • Color-code the G and D gradient arrows differently so readers can track gradient flow at a glance.
  • Place the data block (real samples) outside the loop so it doesn't crowd the center.
  • List both losses explicitly; a diagram without them misses the training story.
