
Shared-Foundation, Divergent-Choices Comparison

Two systems share a common foundation but diverge across attention, scale, training, and deployment.

When to use this prompt

Use it for contrasts such as LLM vs SLM, dense vs sparse attention, or supervised vs self-supervised training: any pair of systems that shares a common architecture but diverges in key design choices.

The prompt

A comparative architectural framework figure: two systems sharing a Transformer foundation but diverging across four design choices.

Top — Shared Foundation block (single wide box):
"Transformer Backbone (multi-head self-attention + MLP + LayerNorm)"

Below — Two parallel columns, headed:
- Left: "Large Language Model (LLM)"
- Right: "Small Language Model (SLM)"

Four divergent design dimensions (rows), each with a one-line contrast:

1. Attention Mechanism
   - LLM: full / sparse mixture
   - SLM: grouped-query, sliding window
2. Parameter Scale
   - LLM: 70B+
   - SLM: 1-7B
3. Training Strategy
   - LLM: large-scale pretraining + RLHF
   - SLM: distillation from LLM + targeted fine-tuning
4. Deployment Context
   - LLM: cloud GPUs, latency-tolerant
   - SLM: edge / on-device, latency-critical

Use thin connectors to show the shared foundation feeding into both branches. End with two outcome boxes at the bottom: "general capability" (LLM) and "specialised, efficient" (SLM).
Style: clean publication-style schematic, navy / coral palette, white background, sans-serif labels.

Variations

Three-way comparison

Add a third column for "Hybrid (small with retrieval)" and add a fifth row "Knowledge Source" comparing parametric vs non-parametric memory across the three.

Tips

  • Always show the shared foundation at the top. It is what makes the comparison meaningful.
  • Limit to 4-5 dimensions. Above that, the table becomes the figure and you should use a real table.
  • End with outcome boxes summarising "what you get". Without them the comparison feels academic.
