What FSRS Is
FSRS is a spaced-repetition scheduler. Its job is to decide, for each card or question a learner has previously seen, when to schedule the next review. The goal is to maintain a target retention rate (the probability that the learner will recall the material at the next review) at the lowest possible review cost: intervals lengthen when memory is stable and shorten when it is fragile.
The algorithm was developed by Jingyong Ye and the open-spaced-repetition community starting around 2022. By 2024 it had become the default scheduler in the Anki ecosystem (the open-source flashcard software with millions of users worldwide), displacing SM-2 (the SuperMemo-2 algorithm that had served as Anki's default since the project's early years). FSRS is the algorithm TheoremPath uses for its ReviewCard layer.
This page covers FSRS as a method that educators and curriculum designers can use, frame, and critique. It explains the underlying memory model (DSR: difficulty, stability, retrievability), the optimization target (forgetting-curve calibration to target retention), the empirical evidence, and the boundary conditions where the algorithm misbehaves.
The DSR Memory Model
FSRS rests on a three-component model of memory for a single card:
| Component | Symbol | Meaning |
|---|---|---|
| Difficulty | $D$ | An intrinsic property of the card: how hard the material is to retain. Updated from the learner's rating history on this specific card. Range typically [1, 10]. |
| Stability | $S$ | How long the memory will last with no further review. Larger means the memory persists longer. Updated multiplicatively after each review based on the rating. |
| Retrievability | $R$ | The probability of successful recall at time $t$ since the last review, given current stability $S$. The forgetting curve. |
The forgetting curve is parameterized as

$$R(t, S) = 2^{-t/S},$$

so that $R = 1/2$ when $t = S$. Stability is the half-life of the memory.

The optimization target. The scheduler chooses the next review time $t^*$ such that

$$R(t^*, S) = r,$$

where $r$ is a learner-set target retention rate (typically 0.85 to 0.95). Solving for $t^*$:

$$t^* = S \log_2 \frac{1}{r}.$$

For $r = 0.9$ this gives $t^* \approx 0.152\,S$.
After the review, FSRS updates the difficulty and the stability based on the learner's rating ("Again", "Hard", "Good", "Easy" in Anki's four-button scheme). The update equations are themselves parameterized; the open-source FSRS implementation has roughly 19 free parameters per learner that can be optimized to fit the learner's review history.
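The scheduling loop above can be sketched in a few lines. This is a minimal illustration of the DSR shape, assuming the half-life forgetting curve defined in this section; the stability update here is a deliberately simplified toy with made-up growth factors, not the actual 19-parameter FSRS update equations.

```typescript
// Toy sketch of the DSR scheduling loop. The forgetting curve and
// interval solve follow the formulas above; the stability update is a
// simplified illustration, NOT the real FSRS update equations.

type Rating = "again" | "hard" | "good" | "easy";

// Forgetting curve: R(t, S) = 2^(-t/S), so R = 0.5 when t = S (half-life).
function retrievability(tDays: number, stability: number): number {
  return Math.pow(2, -tDays / stability);
}

// Solve R(t*, S) = r for t*: schedule the next review at the moment
// predicted recall probability decays to the target retention r.
function nextIntervalDays(stability: number, targetRetention: number): number {
  return stability * Math.log2(1 / targetRetention);
}

// Illustrative multiplicative stability update (growth factors invented
// here for the sketch; FSRS fits these from review history).
function updateStability(stability: number, rating: Rating): number {
  const factor = { again: 0.5, hard: 1.2, good: 2.0, easy: 3.0 }[rating];
  return stability * factor;
}

// With S = 10 days and r = 0.9, the next review lands ~1.52 days out.
const interval = nextIntervalDays(10, 0.9);
console.log(interval.toFixed(2)); // "1.52"
```

Note how the interval is linear in stability: as stability grows after each successful review, the gaps stretch automatically.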
How FSRS Differs from SM-2
The SuperMemo-2 algorithm (Wozniak, 1990) was the dominant spaced-repetition scheduler for several decades. The differences matter for educators evaluating which scheduler to use.
| Feature | SM-2 | FSRS |
|---|---|---|
| Memory model | E-factor times current interval | DSR (difficulty, stability, retrievability) |
| Forgetting curve | None explicit | Exponential, $R(t, S) = 2^{-t/S}$ |
| Optimization target | None; fixed E-factor adjustment | Configurable target retention |
| Personalization | None at the algorithm level; only a per-card E-factor | 19 free parameters fitted to the individual learner's review history |
| Calibration data | Heuristic from Wozniak's own experiments | Trained on millions of reviews from public Anki revlogs |
| Default in Anki | Until ~2024 | From ~2024 onward |
The conceptual difference: SM-2 grows the interval by a multiplicative factor that decays with bad ratings; FSRS explicitly models the memory state and chooses the interval to hit a calibrated retention target. SM-2 is a heuristic; FSRS is an optimization with respect to a memory model.
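For contrast, SM-2's heuristic fits in a dozen lines. This follows Wozniak's published update rule (quality $q$ on a 0 to 5 scale; the E-factor floors at 1.3; the first two intervals are fixed at 1 and 6 days).

```typescript
// SM-2 for contrast: no memory model, no target retention. The interval
// grows by the card's E-factor; the E-factor drifts with ratings.

interface Sm2State {
  easeFactor: number;   // E-factor, floored at 1.3
  intervalDays: number;
  repetitions: number;  // consecutive successful reviews
}

function sm2Review(state: Sm2State, q: number): Sm2State {
  if (q < 3) {
    // Failed recall: restart the repetition sequence from a short interval.
    return { ...state, repetitions: 0, intervalDays: 1 };
  }
  // Wozniak's E-factor adjustment: EF' = EF + (0.1 - (5-q)(0.08 + (5-q)·0.02))
  const ef = Math.max(
    1.3,
    state.easeFactor + (0.1 - (5 - q) * (0.08 + (5 - q) * 0.02)),
  );
  const reps = state.repetitions + 1;
  const intervalDays =
    reps === 1 ? 1 : reps === 2 ? 6 : Math.round(state.intervalDays * ef);
  return { easeFactor: ef, intervalDays, repetitions: reps };
}

// Three consecutive "good" (q = 4) reviews on a fresh card:
let s: Sm2State = { easeFactor: 2.5, intervalDays: 0, repetitions: 0 };
for (let i = 0; i < 3; i++) s = sm2Review(s, 4);
console.log(s.intervalDays); // 15 (the sequence runs 1 → 6 → 15)
```

Notice what is absent: no prediction of recall probability, no target to hit, nothing to fit to the learner. That absence is exactly what FSRS replaces.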
Empirical Status
FSRS's empirical case rests on three pillars.
Forgetting-curve validation. The DSR model assumes an exponential forgetting curve. The empirical evidence for exponential-form forgetting is over a century old (Ebbinghaus's 1885 retention curves; subsequent replications) but the precise shape depends on encoding strength, retrieval practice, and spacing structure. The exponential form is a simplification that fits well in aggregate across many cards and learners; it fits less well at the level of a single card on a single day.
Calibration on real-world review data. FSRS parameters are fitted on large samples of public Anki review logs (the "FSRS benchmark" repository on the open-spaced-repetition GitHub organization tracks the calibration data and fit quality). The fits achieve substantially better log-loss and Brier score on held-out reviews than SM-2, by roughly 20-30% on the standard benchmark splits.
Operational deployment. FSRS has been deployed as the default scheduler in the Anki ecosystem since around 2024. Direct comparison to SM-2 in production (the same learner using both schedulers on different decks) is rare in the literature; indirect evidence (learner satisfaction, reduced perceived review burden, retention test outcomes in user-reported studies) is positive but informal.
The headline empirical claim that FSRS is measurably better at maintaining target retention with fewer reviews is well supported on the calibration benchmarks. The stronger claim that FSRS improves long-term learning outcomes relative to SM-2 in real classroom or self-study settings is harder to establish and is not the central case made for FSRS.
The underlying spacing effect (that distributed practice outperforms massed practice for long-term retention) is the strongest empirical finding in cognitive science of learning. The standard meta-analytic reference is Cepeda, Vul, Rohrer, Wixted, and Pashler (2008), Psychological Science 19(11): 1095-1102, which estimated the optimal absolute spacing gap empirically. Dunlosky, Rawson, Marsh, Nathan, and Willingham (2013), Psychological Science in the Public Interest 14(1): 4-58, classify "distributed practice" as one of two "high-utility" study techniques. FSRS operationalizes the spacing effect; the spacing effect itself is what makes spaced-repetition systems work.
Mechanism: Why FSRS Works
Three structural choices.
Explicit memory-state representation. The DSR triple gives the scheduler an interpretable internal state. Difficulty captures intrinsic card properties; stability captures how robust the current memory is; retrievability is the predicted recall probability at any point in the future. Each component can be reasoned about separately, and the parameter fits can be inspected per learner.
Optimization toward a target retention rate. Rather than applying a heuristic interval growth rule, FSRS solves the inverse problem: given the model, find the review time at which predicted retrievability equals the target. This makes the scheduler's behaviour transparent (a learner who lowers their target retention from 0.9 to 0.85 will see longer intervals; a learner who raises it to 0.95 will see shorter ones) and calibratable.
Per-learner parameter fitting. With enough review history, the 19 FSRS parameters can be fitted to the individual learner. This is the source of FSRS's measured advantage over SM-2 on calibration benchmarks. The fits are stable enough that parameter updates can be infrequent (monthly or quarterly) for most learners.
Boundary Conditions: Where FSRS Fails
Five places the algorithm misbehaves.
Cold start. FSRS needs review history before its per-learner fits become meaningful. With a few hundred reviews total and few mature cards, the algorithm uses default parameters; for some learners these defaults are poorly matched to the learner's actual memory dynamics. The cold-start period is roughly the first 1000-2000 reviews on a new deck; after that, the fits stabilize.
Card-content drift. FSRS treats each card as a fixed stimulus. A card whose content drifts (because the underlying material has changed, or because the learner has subtly re-encoded it during review) violates the assumption. The practical consequence is that "edited" or "rephrased" cards may have stale parameter histories and should be treated as new cards.
Procedural and motor learning. FSRS optimizes recall of declarative memory: retrieving a fact, a definition, an inflection. Procedural skill (riding a bike, typing fluently) follows a different memory dynamic that FSRS does not model. For procedural skill, deliberate-practice protocols (Ericsson 1993) and motor-learning literature are the appropriate references, not FSRS.
Context dependence. FSRS treats the card as the context-free unit of memory. Memory in the wild is heavily context-dependent: a fact recalled in one setting may not be available in another. The standard remedy is to vary card contexts (the same fact in different examples, different phrasings, different problem types) so that the learned memory is robust across contexts. FSRS does not enforce this; the curriculum designer does.
Retention does not equal understanding. The most important boundary. FSRS schedules cards to maintain target retention. Retention of factual content is a useful learning outcome but is not the same as understanding. A learner who has retained the statement of a theorem is not thereby able to apply it, prove it, or recognize when it applies. Pairing FSRS with practice that requires application (worked examples, exercises, projects) is the standard discipline; FSRS by itself is a memory tool, not a learning tool.
How TheoremPath Uses FSRS
TheoremPath maintains a ReviewCard model in the production Prisma schema (prisma/schema.prisma). Each ReviewCard is tied to a user and a question (with the topic slug) and carries the FSRS state: stability, difficulty, due time, last review, review count, lapse count, and an FSRS state code (0 = new, 1 = learning, 2 = review, 3 = relearning).
The scheduler implementation is in src/lib/fsrs/scheduler.ts. On a review attempt, the scheduler reads the current state, applies the FSRS update equations, and writes the new state plus the next due time.
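The fields listed above translate to a shape like the following. This is an illustrative TypeScript mirror, not the authoritative definition (which lives in prisma/schema.prisma); field names here are assumptions.

```typescript
// Hypothetical TypeScript mirror of the ReviewCard fields described
// above. Field names are illustrative; see prisma/schema.prisma for
// the real model.

enum FsrsState {
  New = 0,
  Learning = 1,
  Review = 2,
  Relearning = 3,
}

interface ReviewCard {
  userId: string;
  questionId: string;
  topicSlug: string;
  stability: number;      // S, in days
  difficulty: number;     // D, typically in [1, 10]
  due: Date;              // next scheduled review
  lastReview: Date | null;
  reviewCount: number;
  lapseCount: number;     // failed-recall count
  state: FsrsState;
}
```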
The user-facing review surface (/daily-review) shows due cards in a single queue ordered by priority (most overdue first). The substrate is documented in docs/ADAPTIVE_LEARNING_KERNEL.md.
Three TheoremPath-specific design choices.
Question-level rather than card-level. TheoremPath does not have a separate flashcard concept. Each assessment question is its own review card; the FSRS state lives on the (user, question) pair. This is operationally simpler than maintaining a parallel flashcard deck and means the same evidence base feeds both mastery tracking and review scheduling.
Default target retention 0.9. TheoremPath uses $r = 0.9$ by default for all learners. This is the standard FSRS recommendation and balances review burden against retention. Per-learner customization is available via the /settings page.
Topic-level aggregation for review queues. A daily review session draws cards across topics; the queue ordering pays attention to topic spacing (do not stack five cards from the same topic in a row) so that the within-session experience itself benefits from interleaving. Interleaving is a separate empirical finding from spacing; the interleaving-vs-blocking page (when written) covers it in depth.
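One way to implement the "do not stack same-topic cards" ordering is a greedy pass: take the most overdue card whose topic differs from the previously emitted one, falling back to strict priority order when only one topic remains. A sketch under that assumption, not TheoremPath's actual queue code:

```typescript
// Greedy topic-spaced ordering: most-overdue-first as the base
// priority, but never emit two same-topic cards back to back when an
// alternative exists.

interface DueCard {
  id: string;
  topicSlug: string;
  overdueDays: number;
}

function orderWithTopicSpacing(cards: DueCard[]): DueCard[] {
  // Base priority: most overdue first.
  const pool = [...cards].sort((a, b) => b.overdueDays - a.overdueDays);
  const out: DueCard[] = [];
  while (pool.length > 0) {
    const prevTopic = out.length > 0 ? out[out.length - 1].topicSlug : null;
    // Highest-priority card from a different topic, if one exists;
    // otherwise fall back to the head of the pool.
    let idx = pool.findIndex((c) => c.topicSlug !== prevTopic);
    if (idx === -1) idx = 0;
    out.push(pool.splice(idx, 1)[0]);
  }
  return out;
}
```

The greedy rule keeps the priority order nearly intact while breaking up runs, which is the within-session interleaving benefit the paragraph above describes.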
How Educators Use FSRS in Practice
Anki and AnkiWeb. The dominant deployment context. Anki has millions of users worldwide and is widely used in medical education, language learning, and graduate-level coursework. FSRS became Anki's recommended scheduler around 2024.
RemNote, Mochi, and similar platforms. Several modern spaced-repetition tools use FSRS or FSRS-like algorithms.
Self-study by curriculum designers. Some high-school-through-graduate curricula (in mathematics, in language learning, in medicine) include explicit spaced-repetition components, often through Anki. Educators designing such components should pay attention to the boundary conditions above, especially the "retention does not equal understanding" point.
Research deployments. Settles and Meeder (2016), ACL, describe Duolingo's "half-life regression" model, a research-level cousin of FSRS used to schedule vocabulary review at scale.
Common Misapplications
Treating FSRS as a learning algorithm. It is a scheduling algorithm. The learning still happens during the review, which must be retrieval-practice quality (the learner is generating the answer, not recognizing it). FSRS without retrieval practice is a metronome with no music.
Using FSRS for ill-suited content. Material that requires synthesis, application, or extended reasoning does not fit neatly onto a card. Forcing it into a card-and-rating format loses what made it educational. The standard remedy: use FSRS for material genuinely amenable to short retrieval, and use worked examples + exercises for synthesis-heavy material.
Cranking up target retention. A target of 0.95 looks like a strong commitment to learning but in practice produces substantially more reviews per day. Target retention is a budget choice, not a quality choice; learners who set 0.95 and then abandon the system have lower long-term retention than learners who set 0.85 and stick with it.
Treating FSRS as evidence-free magic. FSRS makes specific modelling assumptions (exponential forgetting, DSR state representation, parameter stationarity) that hold approximately in aggregate but imperfectly per card per learner. The "FSRS predicted I would remember this and I didn't" experience is information that the model is wrong on this card or this day, not a failure of the spaced-repetition idea.
Related Methods
| Method | What it does | Best for |
|---|---|---|
| FSRS | Scheduler with explicit DSR memory model and optimization to target retention | Modern spaced-repetition; default in Anki post-2024 |
| SM-2 | Scheduler with E-factor heuristic | Legacy systems; smaller decks where parameter fitting is unreliable |
| SuperMemo's later algorithms (SM-15, SM-17, SM-18) | Closed-source successors; more complex memory models | The SuperMemo product line |
| Half-life regression | Logistic regression on time-since-last-review and feature set | Large-scale platforms (Duolingo) where FSRS-style per-card fitting is impractical |
| Leitner system | Discrete card boxes; promote on success, demote on failure | Pen-and-paper; teaching the spacing concept without software |
| The spacing effect (the underlying phenomenon) | Empirical finding that distributed practice beats massed practice | Foundation; not itself a scheduler |
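The Leitner system from the table is simple enough to state in full: promote a card one box on success, demote it to box 1 on failure, and review higher boxes less often. A sketch, using the common doubling-interval convention (box $n$ every $2^{n-1}$ days, which is one convention among several):

```typescript
// Five-box Leitner system: promote on success, demote to box 1 on
// failure. Box n is reviewed every 2^(n-1) days (a common convention).

function leitnerStep(box: number, recalled: boolean): number {
  const maxBox = 5;
  return recalled ? Math.min(box + 1, maxBox) : 1;
}

function leitnerIntervalDays(box: number): number {
  return Math.pow(2, box - 1); // box 1 → 1 day, box 5 → 16 days
}
```

Leitner is effectively a discretized, parameter-free ancestor of FSRS: the boxes are a coarse stability estimate, and promotion/demotion is a coarse update rule.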
The pages on the-spacing-effect and retrieval-practice (when written) cover the underlying empirical literature. The the-theorempath-pedagogy-thesis page covers how FSRS, BKT, and IRT fit together in TheoremPath.
What This Page Does Not Claim
This page does not claim FSRS is the optimal scheduler. The "optimal" scheduler depends on the cost function (review minutes saved? cards retained? long-term test scores?) and the learner's content. FSRS is the best-evaluated open-source scheduler at the time of writing, which is a different claim.
This page does not claim that using a spaced-repetition scheduler is sufficient for learning. The scheduler interacts with retrieval practice, interleaving, deliberate practice, and ordinary problem-solving. None of these is replaceable by the others.
This page does not claim FSRS is a replacement for understanding checks. Retention of a fact and ability to apply the fact are different cognitive achievements; FSRS optimizes the former.
FAQ
Should I use FSRS or SM-2?
If you are using Anki on a deck with substantial review history (hundreds of cards, thousands of total reviews), FSRS will produce noticeably better intervals than SM-2 because the per-learner parameter fits become reliable. For a brand-new deck with little history, the two are roughly comparable at first; FSRS pulls ahead within a month or two of regular use as the fits stabilize.
What target retention should I set?
The default 0.9 is reasonable. Lower values (0.85) substantially reduce daily review time at the cost of slightly more forgetting; higher values (0.95) increase daily review time substantially. The cost-benefit curve is published in the FSRS documentation. Try 0.9 first.
Does FSRS need many reviews to work well?
FSRS's per-learner fits stabilize after roughly 1000-2000 total reviews in the deck. Below that, default parameters apply and the scheduler performs roughly on par with SM-2. Above that, the per-learner fits become meaningfully better.
Is FSRS a research result or a piece of software?
Both. The DSR memory model and the optimization framework are research contributions; the FSRS implementation is open-source software (https://github.com/open-spaced-repetition/fsrs4anki and related repositories). The community benchmark, parameter fits, and scheduler implementations are jointly maintained.
How does this connect to TheoremPath?
TheoremPath uses FSRS for its review-card layer, with the design choices described above. The scheduler is in src/lib/fsrs/scheduler.ts; the data is in the ReviewCard Prisma model. The substrate framing is in docs/ADAPTIVE_LEARNING_KERNEL.md.
How does FSRS relate to the spacing effect more broadly?
The spacing effect is the empirical finding that distributed practice beats massed practice for long-term retention. FSRS operationalizes the spacing effect in a specific algorithmic form. The the-spacing-effect page (when written) covers the underlying literature.
Internal links
- PedagogyPath: the-spacing-effect (when written) for the underlying empirical phenomenon; retrieval-practice (when written) for the cognitive-science companion that FSRS schedules; bayesian-knowledge-tracing-for-educators for the mastery-tracking layer that complements FSRS; item-response-theory-for-educators for the calibration framework; the-theorempath-pedagogy-thesis for the canonical statement of how the four pillars combine.
- TheoremPath: the existing TheoremPath FSRS pages for the algorithm-implementation level material.
- PhilosophyPath: plato-as-teacher cross-references the recollection thesis and its modern echo in retrieval-practice research.
Sources and further reading
Foundational:
- Ebbinghaus, H. Memory: A Contribution to Experimental Psychology. Translation by Ruger and Bussenius, Teachers College, Columbia University, 1913. The original forgetting-curve experiments.
- Wozniak, P. A. "Optimization of Repetition Spacing in the Practice of Learning." Acta Neurobiologiae Experimentalis 54(1) (1990): 59-62. The SM-2 algorithm.
Empirical:
- Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., and Pashler, H. "Spacing Effects in Learning: A Temporal Ridgeline of Optimal Retention." Psychological Science 19(11) (2008): 1095-1102. The optimal-spacing-gap meta-analysis.
- Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., and Willingham, D. T. "Improving Students' Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology." Psychological Science in the Public Interest 14(1) (2013): 4-58. Distributed practice classified as one of two "high-utility" study techniques.
- Roediger, H. L., and Karpicke, J. D. "Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention." Psychological Science 17(3) (2006): 249-255. The retrieval-practice empirical foundation that pairs with FSRS.
Modern algorithms:
- The FSRS algorithm and the parameter calibration are maintained at https://github.com/open-spaced-repetition, with the FSRS-4-Anki implementation and the FSRS-benchmark repository tracking the empirical comparison to SM-2 on public revlog data.
- Settles, B., and Meeder, B. "A Trainable Spaced Repetition Model for Language Learning." ACL (2016). Duolingo's half-life regression model.
Anki ecosystem:
- Elmes, D. Anki: Powerful, Intelligent Flashcards. Open-source software since 2006. https://apps.ankiweb.net/
This page is part of PedagogyPath, sister site to TheoremPath in the path-network family. It documents one of the four pillars of the TheoremPath adaptive-learning machinery; the canonical statement of how the pillars fit together is at the-theorempath-pedagogy-thesis.