Summary

A practical scoring function that balances velocity with review coverage and field size.

The problem: “momentum” is easy to fake

A naïve momentum score (e.g., raw citation count) tends to reward old fields, large communities, or viral hype. For an autonomous press, we need something more stable: a signal that a topic is rising, under‑reviewed, and likely to benefit from a new synthesis.

A practical scoring model

We use a multi-term score that balances growth, breadth, and review coverage. Exact weights vary by journal, but the structure is consistent (a minimal sketch follows the list):

  • Velocity: recent growth in relevant publications/citations.
  • Field size normalization: discounts huge fields so smaller fields can surface.
  • Coverage penalty: down-ranks topics with strong recent review coverage.
  • Stability checks: filters out “one-week spikes” and duplicate phrasing.
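To make the structure concrete, here is a minimal sketch of how the first three terms might combine. The weights, the log normalization, and all names (TopicSignals, momentum_score) are illustrative assumptions, not the production model; stability checks are applied as a separate filter rather than a score term.

```python
import math
from dataclasses import dataclass

@dataclass
class TopicSignals:
    """Bibliometric inputs for one candidate topic (illustrative fields)."""
    recent_pubs: int     # publications in the most recent window
    prior_pubs: int      # publications in the preceding window of equal length
    field_size: int      # total publications matching the topic
    recent_reviews: int  # review articles published in the recent window

def momentum_score(t: TopicSignals,
                   w_velocity: float = 1.0,
                   w_coverage: float = 0.5) -> float:
    """Combine velocity, size normalization, and coverage penalty."""
    # Velocity: relative growth between two adjacent windows.
    velocity = (t.recent_pubs - t.prior_pubs) / max(t.prior_pubs, 1)
    # Field size normalization: log-damping so huge fields don't dominate.
    size_norm = 1.0 / math.log(max(t.field_size, 2))
    # Coverage penalty: recent reviews relative to recent output.
    coverage = w_coverage * t.recent_reviews / max(t.recent_pubs, 1)
    return w_velocity * velocity * size_norm - coverage

# Example: a small, fast-growing topic with one recent review scores positive.
print(momentum_score(TopicSignals(recent_pubs=120, prior_pubs=60,
                                  field_size=900, recent_reviews=1)))
```

Scaling the review count by recent output (rather than using it raw) keeps the coverage penalty comparable across fields of very different sizes.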

Where the data comes from

Our default acquisition workflow uses bibliometric signals (e.g., OpenAlex) to build candidate topic phrases, then evaluates them against the terms above. This keeps discovery reproducible: given the same time window and query, the workflow should generate the same candidates.
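As an illustration, here is a minimal sketch of a reproducible candidate query against the OpenAlex works endpoint. Pinning the publication-date window is what makes reruns deterministic. The helper name, the contact address, and the choice of concept labels as candidate phrases are assumptions, not our exact pipeline.

```python
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"

def candidate_phrases(query: str, start: str, end: str) -> list[str]:
    """Fetch works for a fixed query and date window, then collect concept
    labels as candidate topic phrases. Same (query, start, end) in,
    same candidates out."""
    params = {
        "search": query,
        "filter": f"from_publication_date:{start},to_publication_date:{end}",
        "per-page": 50,
        "mailto": "editors@example.org",  # hypothetical contact for the polite pool
    }
    resp = requests.get(OPENALEX_WORKS, params=params, timeout=30)
    resp.raise_for_status()
    phrases = set()
    for work in resp.json()["results"]:
        for concept in work.get("concepts", []):
            phrases.add(concept["display_name"].lower())
    return sorted(phrases)  # sorted for deterministic output

# Example: candidates for a fixed 90-day window.
print(candidate_phrases("protein language models", "2024-01-01", "2024-03-31")[:10])
```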

Failure modes (and how we mitigate them)

  • Hype spikes: we require stability across multiple windows (e.g., 30/90/180 days); see the sketch after this list.
  • Keyword gaming: we cluster phrases and deduplicate near-synonyms.
  • Review saturation: we explicitly penalize topics with strong recent review articles.
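A minimal sketch of the first two mitigations, assuming per-window growth rates have already been computed. The window lengths, threshold, and token-set heuristic are illustrative stand-ins; the production system clusters phrases rather than comparing token sets.

```python
def is_stable(window_growth: dict[int, float], min_growth: float = 0.1) -> bool:
    """Require growth in every window (e.g., 30/90/180 days) so a
    one-week spike can't carry the score on its own.
    `window_growth` maps window length in days -> growth rate."""
    return all(g >= min_growth for g in window_growth.values())

def dedupe_near_synonyms(phrases: list[str]) -> list[str]:
    """Crude near-synonym collapse: drop phrases whose lowercased token
    sets are identical (word-order variants such as 'language model agents'
    vs 'agents language model')."""
    seen, kept = set(), []
    for phrase in phrases:
        key = frozenset(phrase.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(phrase)
    return kept

# Example: a topic that spiked last month but is flat over 90/180 days fails.
print(is_stable({30: 0.4, 90: 0.02, 180: 0.01}))  # -> False
```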

How this shows up in the product

Authors can still submit any topic they want. The momentum model primarily guides editorial discovery and provides an explainable rationale for “why this topic now”.

Want to see the full workflow? Start on the Acquisition Engine page.