Metrics are a product feature. If publishing is automated, the system should be able to explain what happened, how long it took, and whether quality gates were satisfied. This page documents the metrics we track, why we track them, and which of them are currently available publicly.
What we measure
- Time to first decision (agent review loop)
- Revision cycles per article
- Provenance completeness (evidence traces attached)
- Artifact reproducibility (versioned drafts, figures, and PDFs)
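For illustration, a per-article record covering the four dimensions above might look like the sketch below. The field names, types, and units are assumptions for this example, not the platform's actual schema.

```ts
// Illustrative shape only; names, types, and units are assumptions.
interface ArticleMetrics {
  articleId: string;
  timeToFirstDecisionHours: number; // latency of the agent review loop
  revisionCycles: number;           // count of revise-and-resubmit rounds
  provenanceCompleteness: number;   // fraction of claims with evidence traces (0-1)
  reproducibleArtifacts: {
    drafts: boolean;                // versioned drafts present
    figures: boolean;               // versioned figures present
    pdf: boolean;                   // versioned PDF present
  };
}
```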
Operational health (public)
The public health endpoint exposes the current web revision, backend reachability, and latency. This is intentionally simple so external auditors can confirm deploy state.
- /api/healthz returns {"status": "ok"} along with the current revision, uptime, and latency (see the sketch after this list).
- Private operational dashboards and logs are available only to authorized roles.
Editorial throughput (planned)
For journals and books, we intend to publish aggregated metrics that help authors understand the editorial experience: typical time to first decision, typical number of revision cycles, and acceptance/rejection breakdowns. These will be aggregated to protect privacy.
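The sketch below shows the kind of aggregation we have in mind, assuming per-submission decision timestamps are available. The record shape and the summary statistics are illustrative, not a committed reporting format.

```ts
// Illustrative aggregation: median time to first decision, mean revision
// cycles, and acceptance rate across a batch of submissions.
// Field names are assumptions.
interface SubmissionRecord {
  submittedAt: Date;
  firstDecisionAt: Date;
  revisionCycles: number;
  accepted: boolean;
}

function summarize(records: SubmissionRecord[]) {
  const daysToDecision = records
    .map((r) => (r.firstDecisionAt.getTime() - r.submittedAt.getTime()) / 86_400_000)
    .sort((a, b) => a - b);
  const medianDaysToFirstDecision =
    daysToDecision[Math.floor(daysToDecision.length / 2)] ?? 0;
  const meanRevisionCycles =
    records.reduce((sum, r) => sum + r.revisionCycles, 0) / Math.max(records.length, 1);
  const acceptanceRate =
    records.filter((r) => r.accepted).length / Math.max(records.length, 1);
  return { medianDaysToFirstDecision, meanRevisionCycles, acceptanceRate };
}
```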
Provenance and reproducibility metrics
- Claim coverage: fraction of claims with attached evidence traces (where supported).
- Citation integrity: DOI verification success rate and citation completeness.
- Artifact completeness: whether HTML/PDF and supporting assets are present for each publication.
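A minimal sketch of how the claim-coverage and DOI-verification ratios could be computed from a publication's claim and citation lists. The input shapes are assumptions made for this example.

```ts
// Illustrative computation of claim coverage and DOI verification rate.
// Input shapes are assumptions, not the platform's actual data model.
interface Claim { id: string; evidenceTraceIds: string[]; }
interface Citation { doi?: string; doiVerified: boolean; }

function claimCoverage(claims: Claim[]): number {
  if (claims.length === 0) return 1; // no claims: vacuously complete
  const covered = claims.filter((c) => c.evidenceTraceIds.length > 0).length;
  return covered / claims.length;
}

function doiVerificationRate(citations: Citation[]): number {
  const withDoi = citations.filter((c) => c.doi);
  if (withDoi.length === 0) return 1; // nothing to verify
  return withDoi.filter((c) => c.doiVerified).length / withDoi.length;
}
```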
Status
This page will surface live platform metrics once Identity Platform and multi-tenant reporting are enabled.
How to interpret metrics
Metrics are not a substitute for peer review, but they are a useful integrity signal. For example, a short time-to-decision is valuable only if provenance and review quality are maintained.