ML Architecture · Text → Image

Graph Neural Network Message Passing

Node feature update via neighborhood aggregation across L message-passing layers.

When to use this prompt

For GNN papers (GCN, GAT, GraphSAGE, MPNN) and applications like molecular property prediction.

The prompt

A graph neural network message-passing diagram.

Top — Input Graph:
- A small graph with 6 nodes and labeled edges. Each node has a feature vector x_i (drawn as a small bar above the node).

Center — Message Passing Layers (stacked, L=3 layers shown):
- For each layer:
  - Each node aggregates features from its neighbors (highlight one focal node and color its neighbors).
  - Aggregation function (mean / sum / max) shown as a small symbol on the aggregation arrow.
  - Update function (small MLP) labeled "phi" at the focal node.
  - The node feature h_i^(l) becomes h_i^(l+1).
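The per-layer update described above can be sketched in plain numpy. This is a minimal illustration, not any specific library's API: the 6-node adjacency list, the feature dimension, and the `phi` update (a one-layer MLP over the node's own features and the aggregated neighbor features) are all hypothetical choices made for the example.

```python
import numpy as np

# Hypothetical 6-node graph as an adjacency list (undirected edges).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}

rng = np.random.default_rng(0)
d = 4                                   # feature dimension (example value)
h = rng.normal(size=(6, d))             # h_i^(0) = x_i

def phi(self_feat, agg_feat, W_self, W_agg):
    """Update function: a one-layer MLP over self and aggregated features."""
    return np.tanh(self_feat @ W_self + agg_feat @ W_agg)

L = 3
for layer in range(L):
    # Fresh (untrained) weights per layer, just to show the shapes involved.
    W_self = rng.normal(size=(d, d)) / np.sqrt(d)
    W_agg = rng.normal(size=(d, d)) / np.sqrt(d)
    h_next = np.empty_like(h)
    for i, neighbors in adj.items():
        agg = h[neighbors].mean(axis=0)          # mean aggregation over N(i)
        h_next[i] = phi(h[i], agg, W_self, W_agg)
    h = h_next                                   # h^(l) -> h^(l+1)

print(h.shape)  # (6, 4): one updated feature vector per node
```

Swapping `mean` for `sum` or `max` in the aggregation line is exactly the choice the diagram annotates on the aggregation arrow.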

Bottom — Readout / Pooling:
- Node features at layer L are pooled (mean or attention-weighted) into a single graph-level embedding.
- The graph embedding feeds an MLP classifier producing the prediction.
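The readout step can be sketched the same way. Here mean pooling collapses the final node features into one graph-level vector, which a small (hypothetical, untrained) two-layer MLP head maps to class probabilities; attention-weighted pooling would replace the mean with a learned weighted sum.

```python
import numpy as np

rng = np.random.default_rng(1)
h_L = rng.normal(size=(6, 4))           # stand-in for node features after L layers

# Mean readout: pool node features into a single graph-level embedding.
g = h_L.mean(axis=0)                    # shape (4,)

# Hypothetical 2-class MLP classifier head (random weights for illustration).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
logits = np.maximum(g @ W1 + b1, 0) @ W2 + b2   # ReLU hidden layer
probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> prediction

print(probs.shape)  # (2,)
```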

Style: clean academic vector, navy / teal palette, sans-serif labels, white background.

Variations

Graph Attention (GAT)

Replace the simple aggregation with attention-weighted aggregation. Show the attention weights alpha_{ij} as varying-thickness edges between the focal node and its neighbors.
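A single-head GAT-style attention score can be sketched as follows, assuming the standard form e_ij = LeakyReLU(a^T [W h_i || W h_j]) with a softmax over the focal node's neighborhood. The graph, weights, and focal node are hypothetical; the alpha values are what the varying-thickness edges would depict.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
h = rng.normal(size=(6, d))             # node features
neighbors_of_0 = [1, 2]                 # focal node 0 and its (example) neighbors

# Hypothetical shared projection W and attention vector a.
W = rng.normal(size=(d, d))
a = rng.normal(size=(2 * d,))

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

# Attention logits e_0j for each neighbor j of the focal node.
scores = np.array([
    leaky_relu(a @ np.concatenate([W @ h[0], W @ h[j]]))
    for j in neighbors_of_0
])
alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over N(0): alpha_{0j}

# Attention-weighted aggregation replaces the plain mean/sum.
agg = sum(alpha[k] * (W @ h[j]) for k, j in enumerate(neighbors_of_0))

print(round(float(alpha.sum()), 6))  # 1.0
```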

Tips

  • Highlight a single focal node per layer — showing all updates at once becomes unreadable.
  • Annotate aggregation type (mean / sum) explicitly. It changes the model class.
  • Show the readout step — without it, readers can't see how graph-level predictions are made.
